action #28328
closed
job was triggered trying to download HDD image but it's already gone
Description
Observation
https://openqa.suse.de/tests/1269748/file/autoinst-log.txt
start time: 2017-11-24 07:09:39
…
CACHE: Download of /var/lib/openqa/cache/SLES-15-aarch64-349.1@aarch64-minimal_with_sdk349.1_installed.qcow2 failed with: 404 - Not Found
+++ worker notes +++
end time: 2017-11-24 07:09:40
result: setup failure: Can't download SLES-15-aarch64-349.1@aarch64-minimal_with_sdk349.1_installed.qcow2
and from the parent:
end time: 2017-11-23 17:20:51
uploading install_and_reboot-y2logs.tar.bz2
uploading SLES-15-aarch64-349.1@aarch64-minimal_with_sdk349.1_installed.qcow2
Checksum comparison (actual:expected) 1032847561:1032847561 with size (actual:expected) 836435968:836435968
osd:openqa-gru:
[Fri Nov 24 02:53:09 2017] [28989:info] GRU: removing /var/lib/openqa/share/factory/hdd/SLES-15-aarch64-349.1@aarch64-minimal_with_sdk349.1_installed.qcow2
So it was deleted as expected after the parent job uploaded it but before the downstream job had a chance to act on it.
Problem
Isn't the asset marked as "used" by the scheduled job, which should prevent GRU from cleaning it up?
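A minimal sketch of the check this question implies: before removing an asset, look whether any not-yet-finished job still references it. The tables and state names below are hypothetical stand-ins for illustration, not openQA's real (Perl/DBIx) schema or GRU code.

```python
# Sketch only: assumes hypothetical tables jobs(id, state) and
# jobs_assets(job_id, asset_id); openQA's actual schema differs.
import sqlite3

PENDING_STATES = ("scheduled", "assigned", "running")

def asset_still_needed(db: sqlite3.Connection, asset_id: int) -> bool:
    """Return True if any not-yet-finished job still references the asset."""
    placeholders = ", ".join("?" for _ in PENDING_STATES)
    row = db.execute(
        f"""
        SELECT COUNT(*)
        FROM jobs_assets ja
        JOIN jobs j ON j.id = ja.job_id
        WHERE ja.asset_id = ? AND j.state IN ({placeholders})
        """,
        (asset_id, *PENDING_STATES),
    ).fetchone()
    return row[0] > 0
```

A GRU-style cleanup task would consult such a check before unlinking the file, instead of removing assets purely by age once the group quota is exceeded.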
Updated by coolo about 7 years ago
Whether it's used or not doesn't matter - if GRU deletes it, the job group was obviously not big enough to hold the working set. You have 100 GB for that group and the ISOs alone are around 70 GB, leaving only about 30 GB for HDD images like this one.
But there is some subtle bug hidden here, because SLES-15-x86_64-305.1-minimal_with_sdk305.1_installed.qcow2 is still present even though it is 5 weeks old.
https://progress.opensuse.org/issues/19672#note-8 might be part of the puzzle
as is https://progress.opensuse.org/issues/16496 - it's just too hard at the moment to administer the job group sizes.
Updated by coolo about 7 years ago
I increased the job group size of the functional group to 500 GB; 100 GB was a bit lightweight.
Updated by okurz about 7 years ago
- Related to action #19672: GRU may delete assets while jobs are registered added
Updated by okurz about 7 years ago
- Related to action #16496: [tools][sprint 201711.2] display current disk space consumption of job groups added
Updated by okurz about 7 years ago
I guess I would be fine if the job had been canceled with a message/state/comment instead of ending up incomplete.
Updated by SLindoMansilla about 7 years ago
- Related to coordination #25380: [sle][functional][epic] test fails in install - tries to install SLE12 packages -> update test for sle15 added
Updated by AdamWill about 7 years ago
I have thought for a while that the cleanup code should avoid deleting assets associated with pending/running jobs...
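A sketch of that suggestion, reusing the hypothetical asset_still_needed() helper from the sketch above; the candidate list, sizes and quota handling are assumptions for illustration, not openQA's actual GRU cleanup code.

```python
def enforce_group_quota(db, candidates, quota_bytes, remove_file):
    """candidates: oldest-first list of (asset_id, size_bytes) for one group."""
    used = sum(size for _, size in candidates)
    for asset_id, size in candidates:
        if used <= quota_bytes:
            break
        # Keep assets that a scheduled or running job still waits for.
        if asset_still_needed(db, asset_id):
            continue
        remove_file(asset_id)
        used -= size
```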
Updated by okurz almost 6 years ago
- Related to action #34783: Don't let jobs incomplete if mandatory resources are missing added
Updated by okurz about 5 years ago
- Related to action #12180: [webui] Prevent tests to be triggered when required assets are not present (anymore) added
Updated by okurz about 5 years ago
- Blocks action #44885: Cache service hiccups - Assets are deleted after they are downloaded added
Updated by okurz almost 5 years ago
- Status changed from New to Rejected
- Assignee set to okurz
I guess by now we have changed the asset cleanup and quota management code enough again to consider the behaviour described in this ticket as by design. The alternative of locking assets for currently scheduled jobs even though the job group quota is exceeded sounds dangerous as well. We could try to delete assets linked only to finished jobs first, but there are also good arguments for preferring to keep assets of finished jobs, so I don't think we should even make that call.
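For completeness, a sketch of the ordering alternative mentioned above (delete assets referenced only by finished jobs first); it was not adopted, and it again relies on the hypothetical asset_still_needed() helper from the earlier sketches.

```python
def deletion_order(db, candidates):
    """Reorder oldest-first candidates so that assets still referenced by
    unfinished jobs are only considered after all others."""
    # sorted() is stable, so the oldest-first order is kept within each bucket;
    # False (no pending job references the asset) sorts before True.
    return sorted(candidates, key=lambda c: asset_still_needed(db, c[0]))
```

Combined with the quota loop above, this would touch assets of pending jobs only as a last resort rather than never, which is roughly the trade-off this comment declines to make.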