coordination #98463
coordination #103944 (open): [saga][epic] Scale up: More robust handling of diverse infrastructure with varying performance
[epic] Avoid too slow asset downloads leading to jobs exceeding the timeout with or run into auto_review:"(timeout: setup exceeded MAX_SETUP_TIME|Cache service queue already full)":retry
Done: 71%
Description
problem and scope
This epic is about the general problem that asset downloads can be quite slow, leading to jobs exceeding MAX_SETUP_TIME or ending up incomplete with "Cache service queue already full". It is not about worker-host-specific problems, e.g. a broken filesystem or networking issues.
ideas to improve
There are multiple factors contributing to the problem so there's not one simple fix. Here is a list of the areas where we have room for improvement (feel free to add more items):
1. The file system on OSD workers is re-created on every reboot, so the cache needs to be completely renewed on every reboot. Hence this problem is almost only apparent on OSD (but not on o3).
   - see #97409
2. We would also benefit from using a bigger asset cache (although without 1. being addressed it is likely not of much use).
   - see #97412
3. We should avoid processing downloads when their jobs have exceeded the timeout anyway. This of course only addresses a symptom of the problem and might not be very useful anymore once the problem itself is fixed.
   - see #96684
4. We could try to tweak the parameter OPENQA_CACHE_MAX_INACTIVE_JOBS.
   - This parameter was introduced by #96623.
   - At this point, it is set to 10 via https://gitlab.suse.de/openqa/salt-states-openqa/-/commit/1e7e862475b40d94f46dc2a72af6b7a4dae6340b.
   - Such broken workers are already ignored by our monitoring, but too low a value can still cause unnecessary incomplete jobs.
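To gauge how often the "Cache service queue already full" symptom occurs per worker host, a query in the same style as the ones under "further details" below could be used; this is a sketch only, relying on the jobs/workers columns that already appear in those queries (the reason pattern matches the auto_review expression in the subject):

```sql
-- Count recently finished jobs that incompleted because the cache service
-- queue was already full, grouped by worker host (same schema as the
-- queries under "further details"; adjust the date as needed).
select w.host, count(jobs.id) as cache_queue_full_jobs
  from jobs join workers as w on jobs.assigned_worker_id = w.id
 where jobs.t_finished >= '2021-09-07T00:00:00'
   and jobs.reason like '%Cache service queue already full%'
 group by w.host
 order by cache_queue_full_jobs desc;
```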
acceptance criteria
- AC1: The figures for jobs exceeding MAX_SETUP_TIME are significantly lower than the ones mentioned under "further details" below. A specific worker host causing problems for reasons specific to that machine is out of scope, though.
further details
Multiple worker hosts are affected:
openqa=> select host, count(id) as online_slots,
         (select array[count(distinct id),
                       count(distinct id) / (extract(epoch FROM (timezone('UTC', now()) - '2021-09-07T00:00:00')) / 3600)]
            from jobs join jobs_assets on jobs.id = jobs_assets.job_id
           where assigned_worker_id = any(array_agg(w.id))
             and t_finished >= '2021-09-07T00:00:00'
             and reason like '%setup exceeded MAX_SETUP_TIME%') as recently_abandoned_jobs_total_and_per_hour
         from workers as w
         where t_updated > (timezone('UTC', now()) - interval '1 hour')
         group by host
         order by recently_abandoned_jobs_total_and_per_hour desc;
host | online_slots | recently_abandoned_jobs_total_and_per_hour
---------------------+--------------+--------------------------------------------
openqaworker5 | 41 | {14,0.167352897235061}
openqaworker6 | 29 | {12,0.143445340487195}
openqaworker13 | 16 | {9,0.107584005365396}
openqaworker3 | 19 | {5,0.0597688918696647}
openqaworker8 | 16 | {5,0.0597688918696647}
openqaworker9 | 16 | {5,0.0597688918696647}
QA-Power8-5-kvm | 8 | {3,0.0358613351217988}
openqaworker11 | 10 | {0,0}
openqaworker2 | 34 | {0,0}
QA-Power8-4-kvm | 8 | {0,0}
powerqaworker-qam-1 | 8 | {0,0}
automotive-3 | 1 | {0,0}
grenache-1 | 50 | {0,0}
malbec | 4 | {0,0}
openqaworker-arm-1 | 10 | {0,0}
openqaworker-arm-2 | 20 | {0,0}
openqaworker10 | 10 | {0,0}
(17 rows)
The ones which are affected most are also the ones needing the most assets:
openqa=> select host, count(id) as online_slots,
         (select array[((select sum(size) from assets where id = any(array_agg(distinct jobs_assets.asset_id))) / 1024 / 1024 / 1024),
                       count(distinct id)]
            from jobs join jobs_assets on jobs.id = jobs_assets.job_id
           where assigned_worker_id = any(array_agg(w.id))
             and t_finished >= '2021-09-07T00:00:00') as recent_asset_size_in_gb_and_job_count
         from workers as w
         where t_updated > (timezone('UTC', now()) - interval '1 hour')
         group by host
         order by recent_asset_size_in_gb_and_job_count desc;
host | online_slots | recent_asset_size_in_gb_and_job_count
---------------------+--------------+---------------------------------------
openqaworker11 | 10 | {NULL,0}
automotive-3 | 1 | {NULL,0}
openqaworker6 | 29 | {1739.5315849324688340,3444}
openqaworker5 | 41 | {1668.8964441129937744,3665}
openqaworker13 | 16 | {1591.4191119810566328,2221}
openqaworker8 | 16 | {1487.1783863399177842,2531}
openqaworker3 | 19 | {1447.2926171422004697,2350}
openqaworker9 | 16 | {1368.1286235852167031,2380}
openqaworker10 | 10 | {1117.2662402801215645,1706}
openqaworker2 | 34 | {781.0186277972534277,718}
grenache-1 | 50 | {663.5168796060606865,1477}
openqaworker-arm-2 | 20 | {346.2731295535340879,1123}
openqaworker-arm-1 | 10 | {332.1729393638670449,614}
QA-Power8-5-kvm | 8 | {239.5352552458643916,298}
powerqaworker-qam-1 | 8 | {238.9669120963662910,361}
QA-Power8-4-kvm | 8 | {223.1794419540092373,297}
malbec | 4 | {187.9319233968853955,141}
(17 rows)
Updated by okurz about 3 years ago
- Category set to Feature requests
- Parent task set to #64746
Updated by mkittler about 3 years ago
Just for the record, we've just seen alerts again because downloads are piling up on openqaworker6 and openqaworker5. (The alert should actually not be firing for these kinds of broken workers so I'll have a look at the alerts query.)
Updated by mkittler about 3 years ago
- Subject changed from [epic] Avoid too slow asset downloads leading to jobs exceeding the timeout with auto_review:"timeout: setup exceeded MAX_SETUP_TIME":retry to [epic] Avoid too slow asset downloads leading to jobs exceeding the timeout with or run into auto_review:"(timeout: setup exceeded MAX_SETUP_TIME|Cache service queue already full)":retry
- Description updated (diff)
Updated by okurz about 3 years ago
- Assignee deleted (mkittler)
- Target version changed from Ready to future
To me this looks like less of a priority for us after we fixed the missing qcow compression, hence removing it from the backlog. Agreed?
Updated by szarate almost 2 years ago
So it looks like this is still happening, and from what DimStar is reporting, the retry is not doing its job as it should: https://openqa.opensuse.org/tests/overview?result=failed&result=incomplete&result=timeout_exceeded&distri=microos&distri=opensuse&version=Tumbleweed&build=20221215&groupid=1
On top of this, some restarts of the parents don't restart the children properly, leaving the overview in an inconsistent state: https://suse.slack.com/archives/C02CANHLANP/p1671188124915829?thread_ts=1671188061.368929&cid=C02CANHLANP
Updated by okurz almost 2 years ago
- Target version changed from future to Ready
Brought up by DimStar today again
Updated by okurz almost 2 years ago
The "cache service queue full" was introduced with openQA commit e16bdd68a as part of https://github.com/os-autoinst/openQA/pull/4122 during #96623
Updated by okurz almost 2 years ago
Ideas from estimation call:
- Ensure that openQA admins are notified if workers are reporting themselves as broken
- Can we bump the number for OPENQA_CACHE_MAX_INACTIVE_JOBS?
- Ensure that the incomplete jobs with "cache service full" are properly restarted -> #125276
- As https://openqa.opensuse.org/admin/workers shows no broken workers at the moment, we should ensure that admins are notified when workers are broken and/or that workers stay marked as broken for longer so that people can notice
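As a starting point for the notification idea, one could query the database directly for worker slots that currently report themselves as broken. This is a hypothetical sketch: it assumes the broken reason lands in an "error" column on the workers table (the column name is an assumption; only host, id and t_updated appear in the queries above, so adjust to the actual schema):

```sql
-- Hypothetical: list worker slots currently reporting themselves as broken.
-- Assumes an "error" column on the workers table holds the broken reason;
-- the t_updated filter restricts the result to recently seen workers,
-- mirroring the queries under "further details".
select host, id as worker_slot, error
  from workers
 where error is not null
   and t_updated > (timezone('UTC', now()) - interval '1 hour')
 order by host, id;
```

Such a query could feed a monitoring panel or alert instead of relying on someone watching /admin/workers.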
Updated by okurz almost 2 years ago
- Status changed from New to Blocked
- Assignee set to okurz
We are looking into #125276 first
Updated by okurz over 1 year ago
- Status changed from Blocked to New
- Assignee deleted (okurz)
#125276 completed, work can be continued here
Updated by okurz about 1 year ago
- Target version changed from Ready to Tools - Next