action #135644

Updated by mkittler 5 months ago

## Observation: last job from 2023-09-13 early morning, but many qemu-ppc64le jobs currently scheduled

The problem is *not* that any of the services are unresponsive. The worker slots, the scheduler and the websocket server all look good. The source of the problem appears to be the starvation prevention, which does not work well when only very few worker slots (for a certain worker class) are available.
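To illustrate what "starvation prevention" means here, a minimal Python sketch (purely hypothetical, not the actual openQA scheduler code; the threshold, function and field names are assumptions) of the hold-back idea: once a job has waited too long, a free worker slot is reserved ("held") for it instead of being handed out:

```python
# Hypothetical sketch of starvation prevention (assumed behaviour, not the
# real openQA implementation): jobs that have waited too long get a free
# worker slot "held" for them so newer jobs cannot keep overtaking them.

STARVATION_TICKS = 3  # assumed threshold for "waiting too long"

def tick(free_workers, waiting_jobs):
    """One scheduler tick: hold one free slot per starving job."""
    held = {}
    for job in waiting_jobs:
        if not free_workers:
            break  # no slots left to hold
        if job["ticks_waited"] >= STARVATION_TICKS:
            worker = free_workers.pop()
            held[worker] = job["id"]
            print(f"Holding worker {worker} for job {job['id']} to avoid starvation")
    return held
```

With only 4 `qemu_ppc64le` slots and 4 starving jobs, every slot ends up held after one tick and nothing is left free, which matches the log output below.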

We currently have only 4 worker slots that are capable of running jobs with worker class `qemu_ppc64le`. The scheduler logs the following messages about them:

```
martchus@openqa:~> tail -f /var/log/openqa_scheduler | grep -i 'holding worker'
[2023-09-13T10:46:09.216709+02:00] [debug] [pid:1553] Holding worker 887 for job 12082558 to avoid starvation (cluster A)
[2023-09-13T10:46:09.216741+02:00] [debug] [pid:1553] Holding worker 885 for job 12082559 to avoid starvation (cluster A)
[2023-09-13T10:46:09.216829+02:00] [debug] [pid:1553] Holding worker 914 for job 12082561 to avoid starvation (cluster B)
[2023-09-13T10:46:09.216929+02:00] [debug] [pid:1553] Holding worker 898 for job 12095650 to avoid starvation (cluster A)
```

I appended the `(cluster …)` parentheses to show which jobs are in the same cluster. The output looks the same on subsequent scheduler ticks.

The cluster that I called "cluster A" really consists of only the three jobs mentioned in the log lines above. None of the jobs is blocked by other dependencies, so normally one would expect the cluster to be scheduled (using the workers that were held back). Maybe we have an off-by-one error here? Or maybe it is problematic that the only other free worker slot is held back for another cluster?
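To make the off-by-one suspicion concrete, here is a hedged Python sketch (hypothetical, not the actual openQA scheduler code; the function names and the exact check are assumptions) showing how demanding one slot more than the cluster size would stall cluster A forever:

```python
# Hypothetical sketch of the "enough slots for the whole cluster?" check
# (assumed logic, not the real openQA implementation).

def schedulable(cluster_jobs, held_workers, free_workers):
    """Correct check: the cluster needs exactly one slot per job."""
    available = held_workers + free_workers
    return available >= len(cluster_jobs)

def buggy_schedulable(cluster_jobs, held_workers, free_workers):
    """Off-by-one variant: demands a spare slot beyond the cluster size,
    so a 3-job cluster with exactly 3 held slots never starts."""
    available = held_workers + free_workers
    return available > len(cluster_jobs)

# Scenario from the log: 4 ppc64le slots total, 3 held for cluster A
# (jobs 12082558/12082559/12095650), the 4th held for cluster B, so none free.
cluster_a = [12082558, 12082559, 12095650]
held_for_a = 3
free = 0

print(schedulable(cluster_a, held_for_a, free))        # True  - should start
print(buggy_schedulable(cluster_a, held_for_a, free))  # False - starves forever
```

Under the buggy variant, cluster A could only ever start if the slot held for cluster B were counted as free, which would explain why both clusters stay stuck indefinitely.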