HDD images are cleaned-up too early for aarch64 Tumbleweed
HDD images are cleaned up too early for aarch64 Tumbleweed.
For example, in the qemu test the HDD image from create_hdd_textmode is not available anymore: https://openqa.opensuse.org/tests/1453577
Just for the record, it seems that the create_hdd_textmode job really created and uploaded the image (opensuse-Tumbleweedfirstname.lastname@example.org: Processing chunk 900/900, avg. speed ~406.312 KiB/s), so a problem on that side can be ruled out.
I doubt that this is an aarch64-specific issue; our cleanup code treats all architectures equally. Of course the size limit of the "openSUSE Tumbleweed AArch64" job group might not be big enough. However, the cleanup algorithm shouldn't consider assets of pending jobs (state not "done" or "cancelled") at all anyway. I also double-checked that this is actually the case some time ago when a similar issue came up. Nevertheless, the asset has been removed by the cleanup algorithm:
martchus@ariel:~> xzgrep opensuse-Tumbleweedemail@example.com /var/log/openqa_gru.2.xz
[2020-10-29T03:00:12.0988 UTC] [debug] [pid:28083] Checking whether asset hdd/opensuse-Tumbleweedfirstname.lastname@example.org (899416064) fits into group 3 (418768782)
[2020-10-29T03:00:16.0400 UTC] [info] [pid:28083] Removing asset hdd/opensuse-Tumbleweedemail@example.com (belonging to job groups: 3)
[2020-10-29T03:00:16.0421 UTC] [info] [pid:28083] GRU: removed /var/lib/openqa/share/factory/hdd/opensuse-Tumbleweedfirstname.lastname@example.org
3 is "openSUSE Tumbleweed AArch64".)
So there's either a bug in the cleanup algorithm after all or the asset hasn't been correctly associated with the pending jobs.
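To make the expected behaviour explicit, here is a rough sketch in Python of how such a size-limited cleanup is supposed to treat assets of pending jobs. This is only an illustration, not openQA's actual (Perl) implementation; the field names, the set of pending states and the keep-newest ordering are assumptions:

```python
# Hypothetical sketch of a group-quota asset cleanup that must never remove
# assets still referenced by a pending job.
PENDING_STATES = {"scheduled", "assigned", "running", "uploading"}  # assumed set

def assets_to_remove(assets, size_limit):
    """assets: dicts with 'name', 'size' (bytes), 'last_use', 'job_states'."""
    remaining = size_limit
    to_remove = []
    # Consider newest-used assets first so the oldest ones fall out of the quota.
    for asset in sorted(assets, key=lambda a: a["last_use"], reverse=True):
        if any(state in PENDING_STATES for state in asset["job_states"]):
            # Asset of a pending job: not considered at all, never removed.
            continue
        if asset["size"] <= remaining:
            remaining -= asset["size"]        # still fits into the group's limit
        else:
            to_remove.append(asset["name"])   # over quota: schedule for removal
    return to_remove
```

If the removed HDD image had been correctly registered to the still-pending qemu job, the pending-job check above should have kept it regardless of the quota, which is why the remaining suspects are the cleanup logic itself or the asset-to-job association.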
- Status changed from New to Feedback
- Assignee set to ggardet_arm
- Target version set to future
Isn't that a duplicate of #71827?
repo/openSUSE-Tumbleweed-oss-armv7hl-Snapshot20201028-debuginfo 33.77 GiB
As you are already working on that and https://github.com/os-autoinst/openqa-trigger-from-obs/pull/102 was merged, in the best case you have already fixed it for any subsequent build :) I suggest you monitor the situation accordingly.
I am good at assigning tickets back to you, right? ;)