action #138377 (closed)
bot-ng and openQABot pipelines fail to pull containers from registry.suse.de size:M
Description
Observation
https://gitlab.suse.de/qa-maintenance/bot-ng/-/jobs/1924326
ERROR: Job failed: failed to pull image "registry.suse.de/qa/maintenance/containers/qam-ci-leap:latest" with specified policies [always]: Error response from daemon: unknown: redis keepalive process: must be POST:/notify/redis request (manager.go:237:0s)
Same with registry.suse.de/qa/maintenance/containers/qam-ci-leap:latest in https://gitlab.suse.de/qa-maintenance/bot-ng/-/jobs/1924350
Acceptance Criteria
- AC1: Pipelines relying on registry.suse.de are known to work
Rollback steps
Re-activate pipelines https://gitlab.suse.de/qa-maintenance/bot-ng/-/pipeline_schedules DONE
Suggestions
- Escalate this with OBS folks
- done, waiting on feedback
- Change pipeline scripts to run podman manually with retries so we can be more resilient to temporary issues (see the sketch after this list)
- Reach out to GitLab folks
- Can we get images cached to avoid issues?
- Maybe we can use containers on GitLab? It needs to be enabled in the instance
- https://docs.gitlab.com/ee/user/packages/container_registry/
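A hedged sketch of the manual-pull idea above, not the actual bot-ng configuration; the job name, runner tag, retry count, and the command run inside the container are all assumptions:

pull-and-run:
  tags: [shell]  # assumption: a shell-executor runner with podman installed
  script:
    # Retry the pull a few times so a temporary registry outage does not
    # immediately fail the pipeline; 5 attempts and 30s pauses are arbitrary.
    - |
      for attempt in 1 2 3 4 5; do
        podman pull registry.suse.de/qa/maintenance/containers/qam-ci-leap:latest && break
        echo "pull failed (attempt $attempt), retrying" >&2
        sleep 30
      done
    # Placeholder invocation; the real job would run the bot's actual command.
    - podman run --rm registry.suse.de/qa/maintenance/containers/qam-ci-leap:latest bot-ng --help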
Updated by livdywan 7 months ago
- Status changed from Feedback to In Progress
- Priority changed from High to Urgent
Pipelines have been run since but seem to fail on and off. Waiting to confirm whether they are being fixed.
Maybe we need to reconsider the container setup, although right now this affects several teams beyond maintenance.
Apparently the ticket state, which seemed correct to me, was misleading, so I'm changing it. Either way I have been monitoring this.
Updated by openqa_review 7 months ago
- Due date set to 2023-11-08
Setting due date based on mean cycle time of SUSE QE Tools
Updated by livdywan 7 months ago
Pipelines are consistently looking good. Unfortunately I still have no clue why; see https://gitlab.suse.de/qa-maintenance/bot-ng/-/pipelines
This was brought up in an email to autobuild@suse.de, as suggested by OBS folks, which includes other affected parties. Waiting on a response there.
Updated by livdywan 7 months ago
- Status changed from In Progress to Feedback
- Priority changed from Urgent to High
livdywan wrote in #note-10:
Pipelines are consistently looking good. Unfortunately I still have no clue why; see https://gitlab.suse.de/qa-maintenance/bot-ng/-/pipelines
This was brought up in an email to autobuild@suse.de, as suggested by OBS folks, which includes other affected parties. Waiting on a response there.
Pipelines continue to look good. Still no clue what has or hasn't been fixed. Lowering priority accordingly.
Updated by livdywan 7 months ago
- Subject changed from bot-ng pipelines fail to pull containers from registry.suse.de to bot-ng and qembot pipelines fail to pull containers from registry.suse.de
Well, we got another failure. I'm not opening another ticket since this is the same symptom, just on openQABot, and a new ticket would be a waste of time at this point:
ERROR: Job failed: failed to pull image "registry.opensuse.org/home/darix/apps/containers/gitlab-runner-helper:x86_64-latest" with specified policies [always]: Error response from daemon: received unexpected HTTP status: 503 Service Unavailable (manager.go:237:0s)
Updated by jbaier_cz 7 months ago
That's "acknowledged" in https://suse.slack.com/archives/C029APBKLGK/p1698309285071959
Updated by livdywan 7 months ago
- Reach out to GitLab folks
- Can we get images cached to avoid issues?
- Maybe we can use containers on GitLab? It needs to be enabled in the instance
- https://docs.gitlab.com/ee/user/packages/container_registry/
https://suse.slack.com/archives/C029APBKLGK/p1698321397573119
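As a hedged sketch of the caching idea, the image could be mirrored into a GitLab-hosted container registry once that is enabled on the instance; the job name is a placeholder, skopeo is assumed to be available on the runner, and the CI_REGISTRY* variables are GitLab's predefined ones:

mirror-qam-ci-leap:
  script:
    - skopeo login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    # Copy from registry.suse.de into this project's registry so later jobs
    # can pull from GitLab even when registry.suse.de is unavailable.
    - skopeo copy docker://registry.suse.de/qa/maintenance/containers/qam-ci-leap:latest docker://$CI_REGISTRY_IMAGE/qam-ci-leap:latest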
Updated by livdywan 6 months ago
- Subject changed from bot-ng and qembot pipelines fail to pull containers from registry.suse.de size:M to bot-ng and openQABot pipelines fail to pull containers from registry.suse.de size:M
livdywan wrote in #note-16:
- Reach out to GitLab folks
- Can we get images cached to avoid issues?
- Maybe we can use containers on GitLab? It needs to be enabled in the instance
- https://docs.gitlab.com/ee/user/packages/container_registry/
https://suse.slack.com/archives/C029APBKLGK/p1698321397573119
I was under the impression retries wouldn't cover a broken container setup, but a comment made me re-read the docs. Maybe we should enable retries for bot-ng and openQABot after all.
Since openQABot was apparently not retrying, I'll assume "system failure" is not enough.
Updated by okurz 6 months ago
livdywan wrote in #note-17:
I was under the impression retries wouldn't cover a broken container setup, but a comment made me re-read the docs. Maybe we should enable retries for bot-ng and openQABot after all.
True, https://docs.gitlab.com/ee/ci/yaml/#retry says:
script_failure: Retry when:
- The script failed.
- The runner failed to pull the Docker image. For docker, docker+machine, kubernetes executors.
That sounds promising.
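A minimal sketch of how that could look in bot-ng's and openQABot's .gitlab-ci.yml, assuming per-job retry settings; the job name is a placeholder:

bot-ng-job:
  retry:
    max: 2                     # GitLab caps retries at 2
    when:
      - runner_system_failure  # infrastructure problems
      - script_failure         # per the docs above, this also covers
                               # failed image pulls on docker/kubernetes executors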