action #109055
Broken workers alert (closed)
Description
Observation¶
Number of broken workers: 2.000
The Workers table on OSD shows:
- powerqaworker-qam-1:1 powerqaworker-qam-1 qemu_ppc64le,qemu_ppc64le-large-mem,powerqaworker-qam-1 ppc64le Broken 1 25
- powerqaworker-qam-1:5 powerqaworker-qam-1 qemu_ppc64le,power8,powerqaworker-qam-1 ppc64le Broken 1 25
The alert is currently active, and was also active twice earlier today, and thrice yesterday.
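For reference, a similar list can also be pulled from the command line via the /api/v1/workers route (just a sketch; the exact JSON field names such as "status", "instance" and "error" are my assumption):
openqa-cli api --host https://openqa.suse.de workers | jq -r '.workers[] | select(.status == "broken") | "\(.host):\(.instance) \(.error // "")"'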
Rollback steps¶
- Unpause broken workers alert
Updated by nicksinger over 2 years ago
instances 2-6 show that they are unable to lock the pool directory, but it's hard to debug because I face ~50% packet loss to that worker :/
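A minimal sketch of how that loss could be quantified, assuming the FQDN powerqaworker-qam-1.qa.suse.de and that ICMP is not filtered:
ping -c 100 powerqaworker-qam-1.qa.suse.de | tail -n 2
The last two lines of the ping output summarize the packet loss percentage and the round-trip times.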
Updated by okurz over 2 years ago
- Description updated (diff)
- Priority changed from Normal to High
- Target version set to Ready
@cdywan for alert-related tickets in general I suggest putting them into our backlog, i.e. target version "Ready", with high priority
Updated by okurz over 2 years ago
- Related to action #108845: Network performance problems, DNS, DHCP, within SUSE QA network auto_review:"(Error connecting to VNC server.*qa.suse.*Connection timed out|ipmitool.*qa.suse.*Unable to establish)":retry but also other symptoms size:M added
Updated by okurz over 2 years ago
- Status changed from New to Blocked
- Assignee set to okurz
This might well just be related to #108845 and the QA-related switches. Due to recent developments, e.g. in #108845#note-21, I will take this ticket and track it.
Updated by okurz over 2 years ago
- Related to action #109734: Better way to prevent conflicts between openqa-worker@ and openqa-worker-auto-restart@ variants size:M added
Updated by okurz over 2 years ago
- Tags set to reactive work
- Description updated (diff)
- Status changed from Blocked to Resolved
#108845 was resolved but did not fix the issue here. https://monitor.qa.suse.de/d/WebuiDb/webui-summary?orgId=1&editPanel=96&tab=alert still shows broken workers. On https://openqa.suse.de/admin/workers, filtering for "Broken", I can find powerqaworker-qam-1:1 and others on the same host broken. I logged in to the system over ssh and found that openqa-worker@{1..6} are running although they should not be, see https://gitlab.suse.de/openqa/salt-states-openqa#remarks-about-the-systemd-units-used-to-start-workers . Who did that again?
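For reference, one way to spot this conflicting state is to list both unit variants side by side; only the auto-restart variant should show up as active:
systemctl list-units --all 'openqa-worker@*' 'openqa-worker-auto-restart@*'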
I did
systemctl mask --now openqa-worker@{1..6} && systemctl enable --now openqa-worker-auto-restart@{1..6}
and everything looks ok again, but I expect that someone will try the same approach by mistake again and again. Eventually we should find a better solution than having multiple systemd services for the same purpose, e.g. by solving this with configuration within the services. Reported in #109734
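One possible direction, just as a sketch and not necessarily what #109734 will end up with: a drop-in that makes the two unit variants mutually exclusive so that starting one stops the other, e.g. a file /etc/systemd/system/openqa-worker@.service.d/conflicts.conf (hypothetical path) containing
[Unit]
Conflicts=openqa-worker-auto-restart@%i.service
followed by systemctl daemon-reload.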
I also found openqaworker2:5 broken due to the same problem. Someone started openqa-worker@5 when they should not have. This is visible in the shell history of the root user.
I suspect it was jpupava:
openqaworker2:/home/okurz # history | grep 'openqa-worker@5'
613 2022-04-07 11:45:39 systemctl status openqa-worker@5
618 2022-04-07 11:46:58 systemctl restart openqa-worker@5
620 2022-04-07 11:47:48 systemctl status openqa-worker@5
624 2022-04-07 11:49:09 systemctl restart openqa-worker@5
626 2022-04-07 11:49:12 systemctl restart openqa-worker@5
628 2022-04-07 11:49:18 systemctl stop openqa-worker@5
630 2022-04-07 11:49:26 systemctl status openqa-worker@5
631 2022-04-07 11:49:31 systemctl start openqa-worker@5
…
openqaworker2:/home/okurz # last | head -n 20
jpupava pts/6 10.100.12.155 Thu Apr 7 12:08 - 14:22 (02:13)
jpupava pts/6 10.100.12.155 Thu Apr 7 11:45 - 12:03 (00:17)
I will tell him over chat. Did so in https://suse.slack.com/archives/C02CANHLANP/p1649499490207389
Now there should be no more broken workers. I monitored the alert and unpaused it. Problem solved, rollback steps completed.