action #152941
closed
CircleCI job runs into 20m timeout due to slow download from registry.opensuse.org
Description
Observation
A failed CircleCI job log showing:
8a9cdc78: Downloading [====> ] 19.03MB/192.9MB
Error pulling image registry.opensuse.org/devel/openqa/ci/containers/base:latest: context deadline exceeded... retrying
image cache not found on this host, downloading registry.opensuse.org/devel/openqa/ci/containers/base:latest
latest: Pulling from devel/openqa/ci/containers/base
A similar failure occurred one day before.
Likely a temporary issue which we cannot realistically improve but should track.
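For reference, a minimal sketch of one possible mitigation, not taken from our actual .circleci/config.yml (job name, executor image and retry counts are made up): pre-pull the image in an explicit retry loop and raise CircleCI's default 10m no-output limit for that step.

jobs:
  cache:
    machine:
      image: ubuntu-2204:current
    steps:
      - run:
          name: Pre-pull base container image with retries
          no_output_timeout: 20m   # raise the default 10m no-output limit for this step
          command: |
            # hypothetical mitigation: retry the pull a few times before giving up
            for i in 1 2 3; do
              docker pull registry.opensuse.org/devel/openqa/ci/containers/base:latest && exit 0
              echo "pull attempt $i failed, retrying in 30s" >&2
              sleep 30
            done
            exit 1

This only helps if the registry recovers within a few minutes; a persistently slow download would still hit the overall job limit.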
Updated by okurz 10 months ago
A retriggered workflow then passed:
https://app.circleci.com/pipelines/github/os-autoinst/openQA/12718/workflows/fe0e2ac2-9695-4835-805a-d5b0f47fe5ff
Updated by okurz 10 months ago
Overnight it failed again in https://app.circleci.com/pipelines/github/os-autoinst/openQA/12719/workflows/33bfcf74-9117-4c0a-9a33-e659f72aaa54/jobs/118666. Reminds me of the same problem we had some time ago.
Updated by okurz 10 months ago
- Related to action #67855: [tests][ci] circleci often abort in "cache" unable to read container image from registry.opensuse.org added
Updated by okurz 10 months ago
- Related to action #71554: unstable/flaky/sporadic t/full-stack.t test failing in script waits on CircleCI added
Updated by okurz 10 months ago
- Related to action #72316: [tests][ci] circleci can fail in `zypper ref` due to temporary repository problems added
Updated by okurz 10 months ago
- Related to action #66664: circleci jobs do not retry, are aborted with "Too long with no output (exceeded 10m0s): context deadline exceeded" added
Updated by okurz 10 months ago
- Status changed from In Progress to Feedback
https://github.com/os-autoinst/openQA/pull/5412 does not make sense, I had to close it. I retriggered another failed pipeline. Either we accept the occasional failure, which seems to have happened more often only during vacation periods (?), or we move to another CI, e.g. GHA. I will wait for another couple of days.
Updated by okurz 9 months ago · Edited
Another occurrence on 2024-01-03. Retried https://app.circleci.com/pipelines/github/os-autoinst/openQA/12726/workflows/2671acf7-fd6f-4d3c-b265-28a76f05969b and again all good. Considering shifting the time of the nightly schedule again. In commit 39b51cfb9 from 2020, as part of #67855, I already shifted from 0000Z to 0220Z; maybe try shifting further (see the sketch below).
https://github.com/os-autoinst/openQA/pull/5413
EDIT: Merged. Let's wait some more days to see if that is at least not worse.
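For context, the shift of the nightly trigger boils down to changing the cron expression of the scheduled workflow in .circleci/config.yml, roughly like this (workflow and job names are placeholders, not the actual openQA config; times are UTC):

workflows:
  nightly:
    triggers:
      - schedule:
          # 0220Z as shifted in commit 39b51cfb9; shifting further would mean
          # e.g. "20 4 * * *" for 0420Z
          cron: "20 2 * * *"
          filters:
            branches:
              only:
                - master
    jobs:
      - test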
Updated by okurz 9 months ago
- Due date deleted (2024-01-10)
- Status changed from Feedback to Resolved
https://app.circleci.com/pipelines/github/os-autoinst/openQA?branch=master shows 7 passed runs in a row. Either the problem is fixed, or it was at least not made worse and currently does not happen for other reasons. Good enough to resolve, I guess.