coordination #98463

coordination #103944: [saga][epic] Scale up: More robust handling of diverse infrastructure with varying performance

[epic] Avoid too slow asset downloads leading to jobs exceeding the timeout or running into auto_review:"(timeout: setup exceeded MAX_SETUP_TIME|Cache service queue already full)":retry

Added by mkittler about 3 years ago. Updated 4 months ago.

Status: Blocked
Priority: Normal
Assignee: -
Category: Feature requests
Target version: -
Start date: 2021-08-06
Due date: -
% Done: 71%
Estimated time: (Total: 0.00 h)

Description

problem and scope

This epic is about the general problem that asset downloads can be quite slow, leading to jobs exceeding MAX_SETUP_TIME or ending up incomplete with "Cache service queue already full"; it is not about worker-host-specific problems, e.g. a broken filesystem or networking issues.
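
To get a quick overview of how often each of the two symptoms occurs, a query along the following lines could be used. This is only a sketch assuming the same OSD database schema as the queries under "further details" below (a jobs table with reason and t_finished columns); the cut-off date is just an example.

    -- count recently finished jobs per symptom (the date is an example placeholder)
    select case
             when reason like '%setup exceeded MAX_SETUP_TIME%' then 'setup exceeded MAX_SETUP_TIME'
             else 'Cache service queue already full'
           end as symptom,
           count(id) as job_count
      from jobs
     where t_finished >= '2021-09-07T00:00:00'
       and (reason like '%setup exceeded MAX_SETUP_TIME%'
            or reason like '%Cache service queue already full%')
     group by symptom;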

ideas to improve

There are multiple factors contributing to the problem, so there is no single simple fix. Here is a list of the areas where we have room for improvement (feel free to add more items):

  1. The file system on OSD workers is re-created on every reboot, so the cache needs to be completely renewed after each reboot. Hence this problem is almost only apparent on OSD (but not on o3).
  2. We would also benefit from using a bigger asset cache (although without item 1 being addressed it is likely not of much use); see the sizing sketch after this list.
  3. We should avoid processing downloads when their jobs have exceeded the timeout anyway. This of course only improves handling of the symptom and might not be very useful anymore once the problem itself is fixed.
  4. We could try to tweak the parameter OPENQA_CACHE_MAX_INACTIVE_JOBS.
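
Regarding item 2, the following sketch shows how one could estimate the cache size a single worker host would need to hold all assets its jobs used within a given time window. It assumes the same schema as the queries under "further details" (assets.size in bytes, jobs_assets, workers.host); the host name and date are just example values.

    -- estimate the required cache size in GiB for one host (host and date are example values)
    select sum(size) / 1024 / 1024 / 1024 as needed_cache_gib
      from assets
     where id in (select distinct jobs_assets.asset_id
                    from jobs
                    join jobs_assets on jobs.id = jobs_assets.job_id
                    join workers on workers.id = jobs.assigned_worker_id
                   where workers.host = 'openqaworker5'
                     and jobs.t_finished >= '2021-09-07T00:00:00');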

acceptance criteria

  • AC1: The figures for jobs exceeding MAX_SETUP_TIME are significantly lower than the ones mentioned under "further details" below. A specific worker host causing problems for reasons specific to that machine is out of scope, though.
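
As a starting point for checking AC1, the overall per-hour rate of jobs exceeding MAX_SETUP_TIME could be computed in the same way as the per-host query under "further details"; this is only a sketch reusing that query's schema and reference date.

    -- overall rate of jobs abandoned due to MAX_SETUP_TIME per hour since the reference date
    select count(id) /
           (extract(epoch from (timezone('UTC', now()) - '2021-09-07T00:00:00')) / 3600)
           as abandoned_jobs_per_hour
      from jobs
     where t_finished >= '2021-09-07T00:00:00'
       and reason like '%setup exceeded MAX_SETUP_TIME%';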

further details

Multiple worker hosts are affected:

openqa=> select host, count(id) as online_slots,
             (select array[count(distinct id),
                           count(distinct id) / (extract(epoch FROM (timezone('UTC', now()) - '2021-09-07T00:00:00')) / 3600)]
                from jobs join jobs_assets on jobs.id = jobs_assets.job_id
               where assigned_worker_id = any(array_agg(w.id))
                 and t_finished >= '2021-09-07T00:00:00'
                 and reason like '%setup exceeded MAX_SETUP_TIME%') as recently_abandoned_jobs_total_and_per_hour
           from workers as w
          where t_updated > (timezone('UTC', now()) - interval '1 hour')
          group by host
          order by recently_abandoned_jobs_total_and_per_hour desc;
        host         | online_slots | recently_abandoned_jobs_total_and_per_hour 
---------------------+--------------+--------------------------------------------
 openqaworker5       |           41 | {14,0.167352897235061}
 openqaworker6       |           29 | {12,0.143445340487195}
 openqaworker13      |           16 | {9,0.107584005365396}
 openqaworker3       |           19 | {5,0.0597688918696647}
 openqaworker8       |           16 | {5,0.0597688918696647}
 openqaworker9       |           16 | {5,0.0597688918696647}
 QA-Power8-5-kvm     |            8 | {3,0.0358613351217988}
 openqaworker11      |           10 | {0,0}
 openqaworker2       |           34 | {0,0}
 QA-Power8-4-kvm     |            8 | {0,0}
 powerqaworker-qam-1 |            8 | {0,0}
 automotive-3        |            1 | {0,0}
 grenache-1          |           50 | {0,0}
 malbec              |            4 | {0,0}
 openqaworker-arm-1  |           10 | {0,0}
 openqaworker-arm-2  |           20 | {0,0}
 openqaworker10      |           10 | {0,0}
(17 rows)

The ones which are affected most are also the ones needing the most assets:

openqa=> select host, count(id) as online_slots,
             (select array[((select sum(size) from assets where id = any(array_agg(distinct jobs_assets.asset_id))) / 1024 / 1024 / 1024),
                           count(distinct id)]
                from jobs join jobs_assets on jobs.id = jobs_assets.job_id
               where assigned_worker_id = any(array_agg(w.id))
                 and t_finished >= '2021-09-07T00:00:00') as recent_asset_size_in_gb_and_job_count
           from workers as w
          where t_updated > (timezone('UTC', now()) - interval '1 hour')
          group by host
          order by recent_asset_size_in_gb_and_job_count desc;
        host         | online_slots | recent_asset_size_in_gb_and_job_count 
---------------------+--------------+---------------------------------------
 openqaworker11      |           10 | {NULL,0}
 automotive-3        |            1 | {NULL,0}
 openqaworker6       |           29 | {1739.5315849324688340,3444}
 openqaworker5       |           41 | {1668.8964441129937744,3665}
 openqaworker13      |           16 | {1591.4191119810566328,2221}
 openqaworker8       |           16 | {1487.1783863399177842,2531}
 openqaworker3       |           19 | {1447.2926171422004697,2350}
 openqaworker9       |           16 | {1368.1286235852167031,2380}
 openqaworker10      |           10 | {1117.2662402801215645,1706}
 openqaworker2       |           34 | {781.0186277972534277,718}
 grenache-1          |           50 | {663.5168796060606865,1477}
 openqaworker-arm-2  |           20 | {346.2731295535340879,1123}
 openqaworker-arm-1  |           10 | {332.1729393638670449,614}
 QA-Power8-5-kvm     |            8 | {239.5352552458643916,298}
 powerqaworker-qam-1 |            8 | {238.9669120963662910,361}
 QA-Power8-4-kvm     |            8 | {223.1794419540092373,297}
 malbec              |            4 | {187.9319233968853955,141}
(17 rows)

Subtasks 7 (2 open, 5 closed)

action #96623: Let workers declare themselves as broken if asset downloads are piling up size:M (Resolved, dheidler, 2021-08-06)

action #96684: Abort asset download via the cache service when related job runs into a timeout (or is otherwise cancelled) size:M (Rejected, mkittler, 2021-08-09)

openQA Infrastructure - action #97409: Re-use existing filesystems on workers after reboot if possible to prevent full worker asset cache re-syncing (New)

openQA Infrastructure - action #97412: Reduce I/O load on OSD by using more cache size on workers, using free disk space when available instead of hardcoded space (New)

action #125276: Ensure that the incomplete jobs with "cache service full" are properly restarted size:M (Resolved, mkittler, 2023-03-02)

action #128267: Restarting jobs (e.g. due to full cache queue) can lead to weird behavior for certain job dependencies (was: Ensure that the incomplete jobs with "cache service full" are properly restarted (take 2)) size:M (Resolved, mkittler)

action #128276: Handle workers with busy cache service gracefully by a two-level wait size:M (Resolved, mkittler, 2023-04-25)
