action #138377
closed: bot-ng and qamops pipelines fail to pull containers from registry.suse.de size:M
Added by livdywan about 1 year ago. Updated 5 months ago.
Description
Observation
https://gitlab.suse.de/qa-maintenance/bot-ng/-/jobs/1924326
ERROR: Job failed: failed to pull image "registry.suse.de/qa/maintenance/containers/qam-ci-leap:latest" with specified policies [always]: Error response from daemon: unknown: redis keepalive process: must be POST:/notify/redis request (manager.go:237:0s)
Same with registry.suse.de/qa/maintenance/containers/qam-ci-leap:latest in https://gitlab.suse.de/qa-maintenance/bot-ng/-/jobs/1924350
Also https://gitlab.suse.de/qa-maintenance/qamops/-/jobs/2872910
Acceptance Criteria
- AC1: Pipelines relying on registry.suse.de are known to work
Rollback steps
- Re-activate pipelines
Suggestions
- Escalate this with OBS folks
  - done, waiting on feedback
- Change pipeline scripts to run podman manually with retries so we can be more resilient to temporary issues (see the sketch below)
- Reach out to GitLab folks
  - Can we get images cached to avoid issues?
  - Maybe we can use containers on GitLab? It needs to be enabled in the instance
    - https://docs.gitlab.com/ee/user/packages/container_registry/
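A minimal sketch of the podman-with-retries idea, assuming the image gets pulled explicitly in the job script instead of via the `image:` keyword; the job name, retry count and sleep interval below are made up:

```yaml
# Sketch only: pull the container explicitly with retries instead of relying
# on the runner's image pull, so a short registry outage doesn't fail the job.
pull-and-run:
  script:
    - |
      set -e
      img=registry.suse.de/qa/maintenance/containers/qam-ci-leap:latest
      ok=0
      for i in 1 2 3 4 5; do
        if podman pull "$img"; then ok=1; break; fi
        echo "pull attempt $i failed, sleeping 60s before retrying"
        sleep 60
      done
      [ "$ok" = 1 ]                    # give up only after all attempts failed
      podman run --rm "$img" echo "actual bot-ng command goes here"
```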
Updated by okurz about 1 year ago
- Tags set to infra
- Priority changed from Normal to Urgent
Updated by okurz about 1 year ago
Updated by livdywan about 1 year ago
- Tags deleted (infra)
- Description updated (diff)
- Priority changed from Urgent to Normal
Updated by livdywan about 1 year ago
- Status changed from New to In Progress
- Priority changed from Normal to Urgent
Updated by livdywan about 1 year ago
- Description updated (diff)
- Status changed from In Progress to Feedback
- Priority changed from Urgent to High
Pipelines inactive for now.
Updated by livdywan about 1 year ago
- Status changed from Feedback to In Progress
- Priority changed from High to Urgent
Pipelines have been run since but seem to fail on and off. Waiting to confirm if they are being fixed.
Maybe we need to reconsider the container setup, although right now this affects several teams beyond maintenance.
Apparently the ticket state, which seemed correct to me, was misleading, so I'm changing it. Either way I have been monitoring this.
Updated by livdywan about 1 year ago
If anyone has an idea and would like to jump in please do. Right now I physically can't test a change to CI. Alternatively feel free to retrigger pipelines and see if they work by chance.
Updated by openqa_review about 1 year ago
- Due date set to 2023-11-08
Setting due date based on mean cycle time of SUSE QE Tools
Updated by livdywan about 1 year ago
Pipelines are consistently looking good. Unfortunately I still have no clue why; see https://gitlab.suse.de/qa-maintenance/bot-ng/-/pipelines
This was brought up in email to autobuild@suse.de as suggested by OBS folks and includes other affected parties. Waiting on a response there.
Updated by livdywan about 1 year ago
- Status changed from In Progress to Feedback
- Priority changed from Urgent to High
livdywan wrote in #note-10:
Pipelines are consistently looking good. Unfortunately I still have no clue why; see https://gitlab.suse.de/qa-maintenance/bot-ng/-/pipelines
This was brought up in email to autobuild@suse.de as suggested by OBS folks and includes other affected parties. Waiting on a response there.
Pipelines continue to look good. Still no clue what has or hasn't been fixed. Lowering priority accordingly.
Updated by livdywan about 1 year ago
- Subject changed from bot-ng pipelines fail to pull containers from registry.suse.de to bot-ng and qembot pipelines fail to pull containers from registry.suse.de
Well, we got another failure. I'm not opening another ticket since this is the same symptom, just on openQABot, and a new ticket for this would be a waste of time at this point:
ERROR: Job failed: failed to pull image "registry.opensuse.org/home/darix/apps/containers/gitlab-runner-helper:x86_64-latest" with specified policies [always]: Error response from daemon: received unexpected HTTP status: 503 Service Unavailable (manager.go:237:0s)
Updated by jbaier_cz about 1 year ago
That's "acknowledged" in https://suse.slack.com/archives/C029APBKLGK/p1698309285071959
Updated by livdywan about 1 year ago
- Subject changed from bot-ng and qembot pipelines fail to pull containers from registry.suse.de to bot-ng and qembot pipelines fail to pull containers from registry.suse.de size:M
- Description updated (diff)
Updated by livdywan about 1 year ago
- Reach out to GitLab folks
  - Can we get images cached to avoid issues?
  - Maybe we can use containers on GitLab? It needs to be enabled in the instance (see the mirror job sketch below)
    - https://docs.gitlab.com/ee/user/packages/container_registry/
https://suse.slack.com/archives/C029APBKLGK/p1698321397573119
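A rough sketch of what caching/mirroring the image into the GitLab container registry could look like, assuming the registry feature gets enabled for the project on gitlab.suse.de; the job name and target image path are made up, while the $CI_REGISTRY* variables are GitLab's predefined CI variables:

```yaml
# Sketch only: copy the upstream image into the project's GitLab container
# registry so jobs can pull from gitlab.suse.de even when registry.suse.de
# is flaky. Requires the container registry to be enabled on the instance.
mirror-qam-ci-leap:
  script:
    - podman login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - podman pull registry.suse.de/qa/maintenance/containers/qam-ci-leap:latest
    - podman tag registry.suse.de/qa/maintenance/containers/qam-ci-leap:latest "$CI_REGISTRY_IMAGE/qam-ci-leap:latest"
    - podman push "$CI_REGISTRY_IMAGE/qam-ci-leap:latest"
```

Other jobs could then point image: at $CI_REGISTRY_IMAGE/qam-ci-leap:latest and would only depend on registry.suse.de when the mirror job itself runs.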
Updated by livdywan about 1 year ago
- Subject changed from bot-ng and qembot pipelines fail to pull containers from registry.suse.de size:M to bot-ng and openQABot pipelines fail to pull containers from registry.suse.de size:M
livdywan wrote in #note-16:
- Reach out to GitLab folks
  - Can we get images cached to avoid issues?
  - Maybe we can use containers on GitLab? It needs to be enabled in the instance
    - https://docs.gitlab.com/ee/user/packages/container_registry/
https://suse.slack.com/archives/C029APBKLGK/p1698321397573119
I was under the impression retries wouldn't cover a broken container setup, but a comment made me re-read the docs. Maybe we should enable retries for bot-ng and openQABot after all.
Since openQABot was apparently not retrying I'll assume "system failure" is not enough.
Updated by okurz about 1 year ago
livdywan wrote in #note-17:
I was under the impression retries wouldn't cover a broken container setup but a comment made me re-read the docs. Maybe we should enable retries for bot-ng and openQABot afterall.
True, https://docs.gitlab.com/ee/ci/yaml/#retry says:
script_failure: Retry when:
- The script failed.
- The runner failed to pull the Docker image. For docker, docker+machine, kubernetes executors.
That sounds promising.
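For reference, a minimal example of what enabling that could look like; the job name and script line are placeholders, and max: 2 is the upper limit GitLab accepts for job-level retries:

```yaml
# Sketch only; job name and script are placeholders, not the actual bot-ng config.
bot-ng-job:
  retry:
    max: 2                      # 2 is the maximum GitLab allows per job
    when:
      - runner_system_failure
      - script_failure          # also covers "runner failed to pull the Docker image"
  image: registry.suse.de/qa/maintenance/containers/qam-ci-leap:latest
  script:
    - echo "actual bot-ng commands go here"
```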
Updated by livdywan about 1 year ago
- Status changed from Feedback to Resolved
Let's assume this will work as expected. We won't be able to "test" this unless there's another outage so I'm resolving - if needed we can re-open.
Updated by livdywan 7 months ago
- Copied to action #160802: bot-ng and openQABot pipelines fail to pull containers from registry.suse.de - again added
Updated by nicksinger 6 months ago
- Status changed from Resolved to Feedback
- Priority changed from High to Normal
https://gitlab.suse.de/qa-maintenance/bot-ng/-/pipelines/1199291/builds shows that we indeed retry, but the retries happen so fast that they are depleted before anything really changes (especially if the gitlab containers already fail to come up). Can we wait between them or would that conflict with future scheduled runs?
(also feel free to resolve again and copy this into a new ticket)
Updated by livdywan 6 months ago
nicksinger wrote in #note-21:
https://gitlab.suse.de/qa-maintenance/bot-ng/-/pipelines/1199291/builds shows that we indeed retry, but the retries happen so fast that they are depleted before anything really changes (especially if the gitlab containers already fail to come up). Can we wait between them or would that conflict with future scheduled runs?
I don't think we can configure the delay of native gitlab retry.
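One way to get an effective wait without a native retry delay could be a first-stage job that blocks until the registry answers, so the jobs that actually pull qam-ci-leap only start afterwards. This assumes a runner whose own job image does not come from registry.suse.de (e.g. a shell executor); the stage and job names below are made up:

```yaml
# Sketch only: make later jobs wait for registry.suse.de instead of retrying
# instantly. Assumes the wait job runs on a runner that does not itself pull
# its job image from registry.suse.de.
stages:
  - wait
  - run

wait-for-registry:
  stage: wait
  script:
    - |
      for i in $(seq 1 30); do
        code=$(curl -s -o /dev/null -w '%{http_code}' https://registry.suse.de/v2/ || true)
        # Any HTTP answer (including 401) means the registry is serving requests again.
        case "$code" in
          2*|3*|4*) echo "registry answered with HTTP $code"; exit 0 ;;
        esac
        echo "attempt $i: registry not ready (got '$code'), sleeping 60s"
        sleep 60
      done
      exit 1

bot-ng:
  stage: run
  image: registry.suse.de/qa/maintenance/containers/qam-ci-leap:latest
  script:
    - echo "actual bot-ng commands go here"
```

The obvious trade-off is that a job waiting like this can overlap with the next scheduled run, which is exactly the conflict raised above.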
Updated by livdywan 5 months ago · Edited
livdywan wrote in #note-24:
I don't think we can configure the delay of native gitlab retry.
No feedback so far and another wave just hit us:
ERROR: Job failed: failed to pull image "registry.suse.de/qa/maintenance/containers/qam-ci-leap:latest" with specified policies [always]: Error response from daemon: received unexpected HTTP status: 503 Service Unavailable (manager.go:250:0s)
If we get a negative response or decide waiting is not helpful we have two other options from the description to consider here:
- Change pipeline scripts to run podman manually with retries so we can be more resilient to temporary issues
- Maybe we can use containers on GitLab? It needs to be enabled in the instance
Updated by livdywan 5 months ago
- Copied to action #164415: SUSE IT storage is broken (was: Broken SUSE:CA.repo causes os-autoinst-needles-opensuse-mirror pipelines to fail) added
Updated by livdywan 5 months ago
- Subject changed from bot-ng and openQABot pipelines fail to pull containers from registry.suse.de size:M to bot-ng and qamops pipelines fail to pull containers from registry.suse.de size:M
- Description updated (diff)
Nothing about openQABot here, hence removing references
Updated by livdywan 5 months ago
I attempted to run a pipeline manually but it seems like we're still out of order:
ERROR: Job failed: failed to pull image "registry.suse.de/qa/maintenance/containers/qam-ci-leap:latest" with specified policies [always]: Error response from daemon: Get "https://registry.suse.de/v2/": dial tcp [2a07:de40:b204:2:10:145:56:20]:443: connect: no route to host (manager.go:250:3s)
Updated by okurz 5 months ago
- Category set to Regressions/Crashes
- Status changed from Blocked to Resolved
https://gitlab.suse.de/qa-maintenance/bot-ng/-/jobs/2873964 works now.
I enabled all relevant schedules again on
- https://gitlab.suse.de/qa-maintenance/bot-ng/-/pipeline_schedules
- https://gitlab.suse.de/qa-maintenance/qamops/-/pipeline_schedules
and triggered individual runs and monitored their success. I don't think the ideas in the description are relevant as 1. gitlab.suse.de also relies on the same backend storage regardless of where the CI container images are and 2. IT will for sure have learned from this unique incident :)