action #138377

closed

bot-ng and qamops pipelines fail to pull containers from registry.suse.de size:M

Added by livdywan about 1 year ago. Updated 5 months ago.

Status: Resolved
Priority: Normal
Assignee:
Category: Regressions/Crashes
Start date: 2023-10-23
Due date:
% Done: 0%
Estimated time:
Tags:

Description

Observation

https://gitlab.suse.de/qa-maintenance/bot-ng/-/jobs/1924326

ERROR: Job failed: failed to pull image "registry.suse.de/qa/maintenance/containers/qam-ci-leap:latest" with specified policies [always]: Error response from daemon: unknown: redis keepalive process: must be POST:/notify/redis request (manager.go:237:0s)

Same with registry.suse.de/qa/maintenance/containers/qam-ci-leap:latest in https://gitlab.suse.de/qa-maintenance/bot-ng/-/jobs/1924350

Also https://gitlab.suse.de/qa-maintenance/qamops/-/jobs/2872910

Acceptance Criteria

  • AC1: Pipelines relying on registry.suse.de are known to work

Rollback steps

Suggestions

  • Escalate this with OBS folks
    • done, waiting on feedback
  • Change pipeline scripts to run podman manually with retries so we can be more resilient to temporary issues (see the sketch below this list)
  • Reach out to GitLab folks
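
A minimal sketch of how the manual podman suggestion could look in a job's script section, assuming a simple loop with a pause between attempts (retry count and sleep interval are arbitrary; the image is the one from the failing jobs above):

  script:
    # retry the pull a few times with a pause so short registry outages can pass
    - |
      for i in 1 2 3; do
        podman pull registry.suse.de/qa/maintenance/containers/qam-ci-leap:latest && break
        echo "pull attempt $i failed, retrying in 60s"
        sleep 60
      done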

Related issues 2 (0 open, 2 closed)

Copied to openQA Infrastructure (public) - action #160802: bot-ng and openQABot pipelines fail to pull containers from registry.suse.de - again (Resolved, livdywan, 2023-10-23)

Copied to openQA Infrastructure (public) - action #164415: SUSE IT storage is broken (was: Broken SUSE:CA.repo causes os-autoinst-needles-opensuse-mirror pipelines to fail) (Resolved, okurz, 2023-10-23)

Actions #1

Updated by okurz about 1 year ago

  • Tags set to infra
  • Priority changed from Normal to Urgent
Actions #3

Updated by livdywan about 1 year ago

  • Tags deleted (infra)
  • Description updated (diff)
  • Priority changed from Urgent to Normal
Actions #4

Updated by livdywan about 1 year ago

  • Status changed from New to In Progress
  • Priority changed from Normal to Urgent
Actions #5

Updated by livdywan about 1 year ago

  • Description updated (diff)
  • Status changed from In Progress to Feedback
  • Priority changed from Urgent to High

Pipelines inactive for now.

Actions #6

Updated by livdywan about 1 year ago

  • Tags set to infra
Actions #7

Updated by livdywan about 1 year ago

  • Status changed from Feedback to In Progress
  • Priority changed from High to Urgent

Pipelines have been run since but seem to fail on and off. Waiting to confirm whether they are being fixed.

Maybe we need to reconsider the container setup, although right now this affects several teams beyond maintenance.

Apparently the ticket state that seemed correct to me was misleading, so I'm changing it. Either way I have been monitoring this.

Actions #8

Updated by livdywan about 1 year ago

If anyone has an idea and would like to jump in, please do. Right now I physically can't test a change to CI. Alternatively, feel free to retrigger pipelines and see if they work by chance.

Actions #9

Updated by openqa_review about 1 year ago

  • Due date set to 2023-11-08

Setting due date based on mean cycle time of SUSE QE Tools

Actions #10

Updated by livdywan about 1 year ago

Pipelines are consistently looking good. Unfortunately I still have no clue why: https://gitlab.suse.de/qa-maintenance/bot-ng/-/pipelines

This was brought up in an email to autobuild@suse.de as suggested by OBS folks and includes other affected parties. Waiting on a response there.

Actions #11

Updated by livdywan about 1 year ago

  • Status changed from In Progress to Feedback
  • Priority changed from Urgent to High

livdywan wrote in #note-10:

Pipelines are consistently looking good. Unfortunately I still have no clue why: https://gitlab.suse.de/qa-maintenance/bot-ng/-/pipelines

This was brought up in an email to autobuild@suse.de as suggested by OBS folks and includes other affected parties. Waiting on a response there.

Pipelines continue to look good. Still no clue what has or hasn't been fixed. Lowering priority accordingly.

Actions #12

Updated by livdywan about 1 year ago

  • Description updated (diff)
Actions #13

Updated by livdywan about 1 year ago

  • Subject changed from bot-ng pipelines fail to pull containers from registry.suse.de to bot-ng and qembot pipelines fail to pull containers from registry.suse.de

Well, we got another failure. I'm not opening another ticket since this is the same symptom, just on openQABot, and a new ticket would be a waste of time at this point:

ERROR: Job failed: failed to pull image "registry.opensuse.org/home/darix/apps/containers/gitlab-runner-helper:x86_64-latest" with specified policies [always]: Error response from daemon: received unexpected HTTP status: 503 Service Unavailable (manager.go:237:0s)
Actions #15

Updated by livdywan about 1 year ago

  • Subject changed from bot-ng and qembot pipelines fail to pull containers from registry.suse.de to bot-ng and qembot pipelines fail to pull containers from registry.suse.de size:M
  • Description updated (diff)
Actions #16

Updated by livdywan about 1 year ago

https://suse.slack.com/archives/C029APBKLGK/p1698321397573119

Actions #17

Updated by livdywan about 1 year ago

  • Subject changed from bot-ng and qembot pipelines fail to pull containers from registry.suse.de size:M to bot-ng and openQABot pipelines fail to pull containers from registry.suse.de size:M

livdywan wrote in #note-16:

https://suse.slack.com/archives/C029APBKLGK/p1698321397573119

I was under the impression retries wouldn't cover a broken container setup, but a comment made me re-read the docs. Maybe we should enable retries for bot-ng and openQABot after all.

Since openQABot was apparently not retrying, I'll assume "system failure" is not enough.

Actions #18

Updated by okurz about 1 year ago

livdywan wrote in #note-17:

I was under the impression retries wouldn't cover a broken container setup, but a comment made me re-read the docs. Maybe we should enable retries for bot-ng and openQABot after all.

True, https://docs.gitlab.com/ee/ci/yaml/#retry says

script_failure: Retry when:
  • The script failed.
  • The runner failed to pull the Docker image. For docker, docker+machine, kubernetes executors.

That sounds promising.
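
A minimal sketch of what enabling this could look like in .gitlab-ci.yml (the job name is a placeholder, not the actual bot-ng job):

  bot-ng-job:                    # placeholder name
    retry:
      max: 2                     # 2 is the maximum GitLab allows
      when:
        - script_failure         # per the docs above, also covers failed image pulls
        - runner_system_failure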

Actions #19

Updated by livdywan about 1 year ago

  • Status changed from Feedback to Resolved

Let's assume this will work as expected. We won't be able to "test" this unless there's another outage so I'm resolving - if needed we can re-open.

Actions #20

Updated by livdywan 7 months ago

  • Copied to action #160802: bot-ng and openQABot pipelines fail to pull containers from registry.suse.de - again added
Actions #21

Updated by nicksinger 6 months ago

  • Status changed from Resolved to Feedback
  • Priority changed from High to Normal

https://gitlab.suse.de/qa-maintenance/bot-ng/-/pipelines/1199291/builds shows that we indeed retry, but the retries happen so fast that they are depleted before anything really changes (especially if the GitLab containers already fail to come up). Can we wait between them or would that conflict with future scheduled runs?

(also feel free to resolve again and copy this into a new ticket)

Actions #22

Updated by livdywan 6 months ago

  • Due date deleted (2023-11-08)
Actions #23

Updated by livdywan 6 months ago

nicksinger wrote in #note-21:

https://gitlab.suse.de/qa-maintenance/bot-ng/-/pipelines/1199291/builds shows that we indeed retry, but the retries happen so fast that they are depleted before anything really changes (especially if the GitLab containers already fail to come up). Can we wait between them or would that conflict with future scheduled runs?

I don't think we can configure the delay of native gitlab retry.

Actions #24

Updated by livdywan 5 months ago

  • Status changed from Feedback to Blocked

I don't think we can configure the delay of native gitlab retry.

https://sd.suse.com/servicedesk/customer/portal/1/SD-161721

Let's ask for help from infra here.

Actions #25

Updated by livdywan 5 months ago · Edited

livdywan wrote in #note-24:

I don't think we can configure the delay of native gitlab retry.

https://sd.suse.com/servicedesk/customer/portal/1/SD-161721

No feedback so far and another wave just hit us:

ERROR: Job failed: failed to pull image "registry.suse.de/qa/maintenance/containers/qam-ci-leap:latest" with specified policies [always]: Error response from daemon: received unexpected HTTP status: 503 Service Unavailable (manager.go:250:0s)

If we get a negative response or decide waiting is not helpful, we have two other options to consider here:

  • This was brought up in an email to autobuild@suse.de as suggested by OBS folks and includes other affected parties. Waiting on a response there.
  • Change pipeline scripts to run podman manually with retries so we can be more resilient to temporary issues.

Actions #26

Updated by livdywan 5 months ago

  • Description updated (diff)
Actions #27

Updated by livdywan 5 months ago

  • Copied to action #164415: SUSE IT storage is broken (was: Broken SUSE:CA.repo causes os-autoinst-needles-opensuse-mirror pipelines to fail) added
Actions #28

Updated by livdywan 5 months ago

  • Subject changed from bot-ng and openQABot pipelines fail to pull containers from registry.suse.de size:M to bot-ng and qamops pipelines fail to pull containers from registry.suse.de size:M
  • Description updated (diff)

Nothing about openQABot here, hence removing references

Actions #29

Updated by livdywan 5 months ago

I attempted to run a pipeline manually but it seems like we're still out of order:

ERROR: Job failed: failed to pull image "registry.suse.de/qa/maintenance/containers/qam-ci-leap:latest" with specified policies [always]: Error response from daemon: Get "https://registry.suse.de/v2/": dial tcp [2a07:de40:b204:2:10:145:56:20]:443: connect: no route to host (manager.go:250:3s)
Actions #30

Updated by okurz 5 months ago

  • Category set to Regressions/Crashes
  • Status changed from Blocked to Resolved

https://gitlab.suse.de/qa-maintenance/bot-ng/-/jobs/2873964 works now.

I enabled all relevant schedules again on

and triggered individual runs and monitored their success. I don't think the ideas in the description are relevant as 1. gitlab.suse.de also relies on the same backend storage regardless of where the CI container images are and 2. IT will have for sure learned from this unique incident :)
