action #133454 (closed)
bot-ng - pipelines in GitLab fail to pull qam-ci-leap:latest size:M
Start date:
Due date: 2023-09-08
% Done: 0%
Estimated time:
Tags:
Description
Observation
The following recent failures were observed:
WARNING: Failed to pull image with policy "always": Error response from daemon: unknown: SSL_connect error: error:1408F10B:SSL routines:ssl3_get_record:wrong version number (manager.go:237:0s)
ERROR: Job failed: failed to pull image "registry.suse.de/qa/maintenance/containers/qam-ci-leap:latest" with specified policies [always]: Error response from daemon: unknown: SSL_connect error: error:1408F10B:SSL routines:ssl3_get_record:wrong version number (manager.go:237:0s)
Acceptance criteria
- AC1: bot-ng pipelines are repeatedly executed successfully
Suggestions
- The jobs fail well before any script execution, so there is nothing we control within .gitlab-ci.yml, or can we?
- Research upstream what can be done if the initial container image download fails. Maybe we can specify a retry for the image the executor is trying to pull, or we spawn a minimal intermediate image and call the container pull nested inside it (see the sketches after this list)
- Report an SD ticket asking them to fix the infrastructure
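A minimal sketch of both ideas, not tested against the actual bot-ng pipeline: the job names, the base image of the nested variant and the script contents are placeholders, only the qam-ci-leap:latest image URL comes from this ticket. Whether GitLab classifies the failed pull as runner_system_failure, and whether podman is available and permitted inside the job, would need to be verified.

```yaml
# Sketch 1: retry the whole job when the runner itself fails, which should
# cover the "Failed to pull image" case if GitLab reports it as
# runner_system_failure (assumption, needs verification).
bot-ng:
  image: registry.suse.de/qa/maintenance/containers/qam-ci-leap:latest
  retry:
    max: 2                      # retry the job up to 2 additional times
    when:
      - runner_system_failure   # runner-level errors, e.g. image pull problems
      - unknown_failure         # fallback for errors GitLab cannot classify
  script:
    - echo "actual bot-ng pipeline steps go here"   # placeholder

# Sketch 2: the "super-mini image" idea - the job runs in a small generic
# image and pulls the real container itself, with a retry loop we control.
# Assumes podman can be used inside the job, which needs to be checked.
bot-ng-nested:
  image: registry.opensuse.org/opensuse/tumbleweed:latest   # placeholder base image
  script:
    - |
      for i in 1 2 3; do
        podman pull registry.suse.de/qa/maintenance/containers/qam-ci-leap:latest && break
        echo "pull attempt $i failed, retrying in 30s"
        sleep 30
      done
    - podman run --rm registry.suse.de/qa/maintenance/containers/qam-ci-leap:latest echo "bot-ng steps"   # placeholder
```

The first variant keeps the current pipeline structure and only adds a retry policy; the second trades a simpler failure mode for the extra complexity of nested container handling on the runner hosts.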
Updated by livdywan over 1 year ago
- Copied from action #123064: bot-ng - pipelines in GitLab fail to pull qam-ci-leap:latest added
Updated by okurz over 1 year ago
- Subject changed from bot-ng - pipelines in GitLab fail to pull qam-ci-leap:latest to bot-ng - pipelines in GitLab fail to pull qam-ci-leap:latest size:M
- Description updated (diff)
- Status changed from New to Workable
Updated by livdywan over 1 year ago
- Status changed from Workable to In Progress
- Assignee set to livdywan
As we've not seen failures recently, we may not be able to test it, but it's still useful as a research task. So, as discussed on Jitsi, I'm looking into a proof of concept of the container idea without spending too much time on it.
Updated by openqa_review over 1 year ago
- Due date set to 2023-08-25
Setting due date based on mean cycle time of SUSE QE Tools
Updated by okurz over 1 year ago
- Due date changed from 2023-08-25 to 2023-09-08
- Status changed from In Progress to Feedback
Discussed in the weekly tools team meeting. GitLab has moved to PRG2, so the error may effectively not happen again. I suggest you just wait some days/weeks to see if the error reproduces.
Updated by livdywan about 1 year ago
- Status changed from Feedback to Resolved
I think we're actually fine here at this point. Of course we can always re-open if needed.