action #77218 (closed)
gitlab CI pipeline openqa/grafana-webhook-actions failed with "You have reached your pull rate limit" for docker hub but should not use docker hub at all
Description
Observation
https://gitlab.suse.de/openqa/grafana-webhook-actions/-/jobs/283776 fails with:
Running with gitlab-runner 13.5.0 (ece86343)
on gitlab-worker1:leap15.1 HsXF8SXP
Preparing the "kubernetes" executor
00:00
Using Kubernetes namespace: gitlab
Using Kubernetes executor with image registry.opensuse.org/home/okurz/container/containers/tumbleweed:ipmitool-ping-nc ...
Preparing environment
00:03
Waiting for pod gitlab/runner-hsxf8sxp-project-4652-concurrent-0dshjz to be running, status is Pending
ERROR: Job failed (system failure): prepare environment: image pull failed: rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit. Check https://docs.gitlab.com/runner/shells/index.html#shell-profile-loading for more information
This is bad. We use registry.opensuse.org/home/okurz/container/containers/tumbleweed:ipmitool-ping-nc as the job image, but it looks like the gitlab CI runner itself might be pulling an image from docker hub?
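To confirm which image the failed pull actually refers to, one could look at the events of the pending runner pod while it still exists. A minimal sketch, assuming kubectl access to the cluster behind gitlab-worker1 and the gitlab namespace shown in the log:

# inspect the pending pod's events; the "Failed to pull image" event names the exact image and registry
kubectl --namespace gitlab describe pod runner-hsxf8sxp-project-4652-concurrent-0dshjz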
Problem
We do not use docker hub, and the gitlab CI runners should not use it either, yet apparently they do.
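A plausible explanation is that the kubernetes executor pulls its own helper image (used for cloning the repository and handling artifacts) from docker.io by default, independently of the job image configured in .gitlab-ci.yml. If that is confirmed, the runner configuration could point the helper image at a registry we control. A minimal sketch of the relevant /etc/gitlab-runner/config.toml fragment; the helper_image key is a real setting, but the mirror location on gitlab.nue.suse.com and the other values are made-up placeholders:

# fragment of /etc/gitlab-runner/config.toml -- hypothetical override, registry path is a placeholder
[[runners]]
  name = "gitlab-worker1:leap15.1"
  executor = "kubernetes"
  [runners.kubernetes]
    namespace = "gitlab"
    # pull the runner helper image from a local mirror instead of docker.io
    helper_image = "registry.gitlab.nue.suse.com/mirror/gitlab-runner-helper:x86_64-ece86343"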
Workaround
Run ARM worker recovery actions manually, e.g. ipmi-openqaworker-arm-2-ipmi power reset
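For reference, this is roughly what such a manual recovery boils down to, assuming the ipmi-openqaworker-arm-2-ipmi alias wraps a plain ipmitool call; the BMC host and credentials below are placeholders, not values from this ticket:

# check current power state, then force a reset of the ARM worker via its BMC
ipmitool -I lanplus -H "$BMC_HOST" -U "$IPMI_USER" -P "$IPMI_PASSWORD" power status
ipmitool -I lanplus -H "$BMC_HOST" -U "$IPMI_USER" -P "$IPMI_PASSWORD" power reset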
Updated by okurz about 4 years ago
Asked in RC (Rocket.Chat) in https://chat.suse.de/channel/suse-it-ama?msg=G7cLjbgP5HzbqMgnL : "Ricardo Klein ^ are the gitlab CI runners using images from hub.docker.com? if yes, this will run into severe rate limiting and we should use a gitlab.nue.suse.com local registry instead"
Updated by okurz about 4 years ago
- Status changed from In Progress to Blocked
Reported to EngInfra in https://infra.nue.suse.com/SelfService/Display.html?id=179452
Updated by okurz about 4 years ago
- Status changed from Blocked to Resolved
The EngInfra ticket is still open and has more references, but for us the issue is resolved for now: Ricardo found a temporary solution, with a plan to improve the setup in the future.