openSUSE Project Management Tool: Issues | https://progress.opensuse.org/ | 2024-03-15T10:13:41Z
openQA Project - action #157333 (Closed): Log all job setting changes in autoinst-log.txt | https://progress.opensuse.org/issues/157333 | 2024-03-15T10:13:41Z | MDoucha (martin.doucha@suse.com)
<p>All job settings should be logged in autoinst-log.txt with source of the value (e.g. the place where <code>set_var()</code> was called or whether they were added from product/medium/worker etc.)</p>
openQA Infrastructure - action #125798 (Resolved): Visual differences in GRUB menu on different x... | https://progress.opensuse.org/issues/125798 | 2023-03-10T17:09:54Z | MDoucha (martin.doucha@suse.com)
<p>Here are 5 different LTP jobs booting the exact same UEFI/SecureBoot QCOW image on different workers:<br>
<a href="https://openqa.suse.de/tests/10651590" class="external">https://openqa.suse.de/tests/10651590</a> openqaworker16:14, GRUB needle mismatch<br>
<a href="https://openqa.suse.de/tests/10658203" class="external">https://openqa.suse.de/tests/10658203</a> openqaworker16:18, pass<br>
<a href="https://openqa.suse.de/tests/10659306" class="external">https://openqa.suse.de/tests/10659306</a> openqaworker16:7, GRUB needle mismatch<br>
<a href="https://openqa.suse.de/tests/10659346" class="external">https://openqa.suse.de/tests/10659346</a> openqaworker17:12, GRUB needle mismatch<br>
<a href="https://openqa.suse.de/tests/10659359" class="external">https://openqa.suse.de/tests/10659359</a> worker9:11, pass</p>
<p>It appears that the GRUB menu size depends not only on the worker machine but also on the specific worker slot.</p>
<p>Possibly related to <a href="https://progress.opensuse.org/issues/114523" class="external">poo#114523</a> but this time it's happening on x86_64.</p>
QA - action #123748 (Resolved): [tools] Add support for excluding packages from test flavor in bo... | https://progress.opensuse.org/issues/123748 | 2023-01-27T12:53:19Z | MDoucha (martin.doucha@suse.com)
<p>SLE-15SP4 livepatching channel will include packages for userspace livepatching which need standard single incident and aggregate tests. Incident scheduling logic in bot config therefore needs support for package exclusion so that the livepatching channel can be enabled for single incidents without flooding the job groups with kernel livepatch tests. Example:</p>
<pre><code>Server-DVD-Incidents:
  archs:
    - x86_64
  issues:
    ...
  exclude_packages:
    - kernel-livepatch
</code></pre>
<p>Any incident that contains a package with the given name (or name prefix) will be skipped for the parent flavor, regardless of what else it contains.</p>
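<p>A hedged sketch of the exclusion rule described above (the real bot scheduling code differs): an incident is skipped for the flavor when any of its package names starts with an excluded entry, which covers both exact names and name prefixes.</p>

```python
# Hedged sketch of the requested scheduling rule; the actual bot config
# handling is more involved. An incident is skipped for a flavor when any
# of its packages matches an excluded name or name prefix.
def is_excluded(incident_packages, exclude_packages):
    """Return True if the incident contains any excluded package.

    A package matches when its name starts with an entry, so an entry
    like "kernel-livepatch" acts as both exact name and name prefix.
    """
    return any(
        pkg.startswith(excl)
        for pkg in incident_packages
        for excl in exclude_packages
    )

# An incident shipping a livepatch package is skipped for this flavor
# even though it also contains unrelated packages.
assert is_excluded(["kernel-livepatch-5_14_21", "libfoo"], ["kernel-livepatch"])
assert not is_excluded(["libfoo", "libbar"], ["kernel-livepatch"])
```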
openQA Project - action #121774 (In Progress): LTP cgroup test appears to crash OpenQA worker ins... | https://progress.opensuse.org/issues/121774 | 2022-12-09T13:36:47Z | MDoucha (martin.doucha@suse.com)
<p>LTP test cgroup_fj_stress_blkio_4_4_each on the latest SLE-15SP1 KOTD kernel appears to crash the OpenQA worker instance it's running on. The test itself succeeds, but the OpenQA job stays stuck in <code>wait_serial()</code> for several hours (despite the 90-second timeout) until the whole job fails on MAX_JOB_TIME. There are 3 examples so far:<br>
<a href="https://openqa.suse.de/tests/10089424#step/cgroup_fj_stress_blkio_4_4_each/7" class="external">https://openqa.suse.de/tests/10089424#step/cgroup_fj_stress_blkio_4_4_each/7</a><br>
<a href="https://openqa.suse.de/tests/10111009#step/cgroup_fj_stress_blkio_4_4_each/7" class="external">https://openqa.suse.de/tests/10111009#step/cgroup_fj_stress_blkio_4_4_each/7</a><br>
<a href="https://openqa.suse.de/tests/10113099#step/cgroup_fj_stress_blkio_4_4_each/7" class="external">https://openqa.suse.de/tests/10113099#step/cgroup_fj_stress_blkio_4_4_each/7</a></p>
<p>I've seen this issue only on SLE-15SP1 KOTD builds 156 and 157. I have not seen any cases on other SLE versions.</p>
<p>Typical autoinst-log.txt entries related to the timeout:</p>
<pre><code>[2022-12-06T08:52:27.432374+01:00] [debug] <<< testapi::script_run(cmd="vmstat -w", output="", quiet=undef, timeout=30, die_on_timeout=1)
[2022-12-06T08:52:27.432549+01:00] [debug] tests/kernel/run_ltp.pm:334 called testapi::script_run
[2022-12-06T08:52:27.432710+01:00] [debug] <<< testapi::wait_serial(record_output=undef, regexp="# ", quiet=undef, no_regex=1, buffer_size=undef, expect_not_found=0, timeout=90)
[2022-12-06T10:39:58.278597+01:00] [debug] autotest received signal TERM, saving results of current test before exiting
[2022-12-06T10:39:58.278622+01:00] [debug] isotovideo received signal TERM
[2022-12-06T10:39:58.278748+01:00] [debug] backend got TERM
</code></pre>
<a name="Expected-result"></a>
<h2 >Expected result<a href="#Expected-result" class="wiki-anchor">¶</a></h2>
<p>Last good: <a href="https://openqa.suse.de/tests/10091628" class="external">4.12.14-150100.156.1.gb6c27ee</a> (or more recent)</p>
<a name="Further-details"></a>
<h2 >Further details<a href="#Further-details" class="wiki-anchor">¶</a></h2>
<p>Always latest result in this scenario: <a href="https://openqa.suse.de/tests/latest?arch=x86_64&distri=sle&flavor=Server-DVD-Incidents-Kernel-KOTD&machine=64bit&test=ltp_controllers&version=15-SP1" class="external">latest</a></p>
<a name="Steps-to-reproduce"></a>
<h2 >Steps to reproduce:<a href="#Steps-to-reproduce" class="wiki-anchor">¶</a></h2>
<ol>
<li>Run <code>ltp_controllers</code> testsuite on SLE-15SP1 KOTD</li>
<li>Wait.</li>
</ol>
openQA Infrastructure - action #120339 (Resolved): QEMU DNS fails to resolve openqa.suse.de via I... | https://progress.opensuse.org/issues/120339 | 2022-11-11T12:48:43Z | MDoucha (martin.doucha@suse.com)
<a name="Observation"></a>
<h1 >Observation<a href="#Observation" class="wiki-anchor">¶</a></h1>
<p>LTP test <code>host</code> <a href="https://openqa.suse.de/tests/9927713#step/host/8" class="external">started failing today</a>. The QEMU DNS service running at 10.0.2.3 correctly resolves hostnames to IP addresses but reverse lookup fails. <a href="https://openqa.suse.de/tests/9915478#step/host/8" class="external">Old tests</a> which passed up until yesterday are now <a href="https://openqa.suse.de/tests/9930979#step/host/8" class="external">also failing upon restart</a> so this appears to be a QEMU configuration issue. The physical worker machine can resolve IP addresses without issue.</p>
<p>This issue is confirmed on worker3, worker5, worker8 and worker13. Other workers may be affected as well. PPC64LE QEMU workers do not seem to be affected, though.</p>
<a name="Rollback-steps"></a>
<h2 >Rollback steps<a href="#Rollback-steps" class="wiki-anchor">¶</a></h2>
<ul>
<li><em>DONE</em> Revert removal of faulty DNS</li>
</ul>
<pre><code>sudo salt --no-color --state-output=changes -C 'G@roles:worker' cmd.run 'sudo sed -i "s/\(NETCONFIG_DNS_POLICY=\)\"\"/\1\"auto\"/;s/\(NETCONFIG_DNS_STATIC_SERVERS=\)\"10.160.0.1 10.100.2.10\"/\1\"\"/" /etc/sysconfig/network/config && sudo netconfig update -f'
</code></pre>
openQA Tests - action #116287 (Rejected): [qe-core][s390x] SSH serial terminal connection issues ... | https://progress.opensuse.org/issues/116287 | 2022-09-06T13:54:08Z | MDoucha (martin.doucha@suse.com)
<p>s390x livepatch tests have had many installation failures this month due to SSH serial terminal connection failures. Interestingly, the connection failures seem to happen around the same module step. The serial_terminal.txt output appears to be out of sync with the terminal: some commands and their output are missing even though they are listed in the update_kernel module details. The dmesg output in serial0.txt often (but not always) shows an SSH key exchange error followed by output from a completely different job:</p>
<pre><code>Welcome to SUSE Linux Enterprise Server 15 SP2 (s390x) - Kernel 5.3.18-24.83-default (ttysclp0).
eth0: 10.161.145.86 fe80::5054:ff:fe84:f877
susetest login: root
Password:
Last login: Mon Sep 5 10:18:10 from 10.160.0.147
susetest:~ # systemctl is-active network
active
susetest:~ # systemctl is-active sshd
active
susetest:~ # 2022-09-05T10:25:03.604370-04:00 susetest sshd[4272]: error: kex_exchange_identification: Connection closed by remote host
2022-09-05T10:25:04.844743-04:00 susetest sshd[4273]: error: kex_exchange_identification: Connection closed by remote host
[ 107.444474] LTP: starting DI000 (dirty)
[ 107.445525] LTP: starting DS000 (dio_sparse)
[ 107.466125] LTP: starting abort01
[ 107.758318] LTP: starting accept01
</code></pre>
<p>12-SP4: <a href="https://openqa.suse.de/tests/9438804#step/update_kernel/337" class="external">https://openqa.suse.de/tests/9438804#step/update_kernel/337</a><br>
15-SP2: <a href="https://openqa.suse.de/tests/9457752#step/update_kernel/337" class="external">https://openqa.suse.de/tests/9457752#step/update_kernel/337</a><br>
15-SP3: <a href="https://openqa.suse.de/tests/9458645#step/update_kernel/337" class="external">https://openqa.suse.de/tests/9458645#step/update_kernel/337</a><br>
15-SP4: <a href="https://openqa.suse.de/tests/9455666#step/update_kernel/199" class="external">https://openqa.suse.de/tests/9455666#step/update_kernel/199</a></p>
<p>I could not find any such connection failure on SLE-12SP5. Other SLE releases don't support s390x livepatches, and KOTD tests don't show this kind of issue. This looks like a kernel bug, but I'd like an s390x expert to look at it before I create a Bugzilla ticket. And of course, this has also exposed logging issues in OpenQA.</p>
openQA Project - action #107701 (Resolved): [osd] Job detail page fails to load | https://progress.opensuse.org/issues/107701 | 2022-02-28T14:34:19Z | MDoucha (martin.doucha@suse.com)
<p>The job detail page for the following ltp_syscalls_secureboot job is timing out:<br>
<a href="https://openqa.suse.de/tests/8232404" class="external">https://openqa.suse.de/tests/8232404</a></p>
<p>Please investigate why and fix it if possible.</p>
openQA Project - action #106898 (Resolved): Protection against asset clobbering | https://progress.opensuse.org/issues/106898 | 2022-02-16T10:33:47Z | MDoucha (martin.doucha@suse.com)
<p>QCOW images in OpenQA occasionally get corrupted because multiple jobs try to publish the same file at the same time, either due to <code>PUBLISH_*</code> setting misconfiguration or duplicate install jobs scheduled in parallel. For example, this job failed to start:<br>
<a href="https://openqa.suse.de/tests/8162749" class="external">https://openqa.suse.de/tests/8162749</a></p>
<p>because these three install jobs finished 20 minutes apart and tried to upload the same QCOW image:<br>
<a href="https://openqa.suse.de/tests/8162347" class="external">https://openqa.suse.de/tests/8162347</a><br>
<a href="https://openqa.suse.de/tests/8161501" class="external">https://openqa.suse.de/tests/8161501</a><br>
<a href="https://openqa.suse.de/tests/8160547" class="external">https://openqa.suse.de/tests/8160547</a></p>
<p>Please add some sort of protection against asset clobbering via <code>PUBLISH_*</code> variables:</p>
<ul>
<li>two jobs must not publish the same file in parallel</li>
<li>jobs must not publish a file while another job may be downloading the previous version</li>
<li><code>PUBLISH_*</code> misconfiguration (e.g. copy-paste mistakes among multiple testsuites) should be detected and reported in the WebUI, for example as the reason why the install job was terminated</li>
</ul>
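<p>One possible shape of such protection, sketched in Python (openQA's real asset handling is more involved and these paths are examples): publish by writing to a unique temporary file and renaming it into place under an exclusive lock, so concurrent publishers serialize instead of interleaving their writes, and downloaders never see a half-written file.</p>

```python
# Hedged sketch of one possible anti-clobbering scheme, not openQA's
# actual implementation. A per-asset lock file serializes publishers;
# the atomic rename means readers see either the old or the new asset,
# never a partially written one.
import fcntl, os, tempfile

def publish_asset(asset_dir, name, data):
    """Write data to asset_dir/name without clobbering concurrent publishers."""
    lock_path = os.path.join(asset_dir, name + ".lock")
    with open(lock_path, "w") as lock:
        fcntl.flock(lock, fcntl.LOCK_EX)   # second publisher blocks here
        fd, tmp = tempfile.mkstemp(dir=asset_dir)
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        os.rename(tmp, os.path.join(asset_dir, name))  # atomic on POSIX

d = tempfile.mkdtemp()
publish_asset(d, "image.qcow2", b"job-a-contents")
publish_asset(d, "image.qcow2", b"job-b-contents")  # last writer wins cleanly
```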
openQA Project - action #98841 (Resolved): qemu randomly fails to start on QA-Power8-5-kvm auto_r... | https://progress.opensuse.org/issues/98841 | 2021-09-17T15:49:43Z | MDoucha (martin.doucha@suse.com)
<a name="Observation"></a>
<h2 >Observation<a href="#Observation" class="wiki-anchor">¶</a></h2>
<p>A few LTP jobs have failed to start today due to qemu error on QA-Power8-5-kvm worker:<br>
<a href="https://openqa.suse.de/tests/7138972">https://openqa.suse.de/tests/7138972</a><br>
<a href="https://openqa.suse.de/tests/7149857">https://openqa.suse.de/tests/7149857</a><br>
<a href="https://openqa.suse.de/tests/7153989">https://openqa.suse.de/tests/7153989</a></p>
<p>All of them have the following output in autoinst-log.txt:</p>
<pre><code>[2021-09-17T16:24:29.803 CEST] [info] ::: backend::baseclass::die_handler: Backend process died, backend errors are reported below in the following lines:
QEMU terminated before QMP connection could be established. Check for errors below
[2021-09-17T16:24:29.804 CEST] [info] ::: OpenQA::Qemu::Proc::save_state: Saving QEMU state to qemu_state.json
[2021-09-17T16:24:29.805 CEST] [debug] Passing remaining frames to the video encoder
[2021-09-17T16:24:29.805 CEST] [debug] Waiting for video encoder to finalize the video
[2021-09-17T16:24:29.805 CEST] [debug] The built-in video encoder (pid 110385) terminated
[2021-09-17T16:24:29.807 CEST] [debug] QEMU: QEMU emulator version 4.2.1 (openSUSE Leap 15.2)
[2021-09-17T16:24:29.807 CEST] [debug] QEMU: Copyright (c) 2003-2019 Fabrice Bellard and the QEMU Project developers
[2021-09-17T16:24:29.807 CEST] [warn] !!! : qemu-system-ppc64: Failed to allocate KVM HPT of order 25 (try smaller maxmem?): Cannot allocate memory
</code></pre>
<a name="Problem"></a>
<h2 >Problem<a href="#Problem" class="wiki-anchor">¶</a></h2>
<p>QA-Power8-5-kvm has 256GB RAM. <a href="https://monitor.qa.suse.de/d/WDQA-Power8-5-kvm/worker-dashboard-qa-power8-5-kvm?viewPanel=12054&orgId=1&from=1631765162464&to=1632085860553">https://monitor.qa.suse.de/d/WDQA-Power8-5-kvm/worker-dashboard-qa-power8-5-kvm?viewPanel=12054&orgId=1&from=1631765162464&to=1632085860553</a> shows that some memory was in use during the period when the test failed, but nothing that would explain the inability to allocate memory for the qemu VM. The system journal contains:</p>
<pre><code>Sep 17 16:24:28 QA-Power8-5-kvm worker[88148]: [debug] [pid:88148] REST-API call: POST http://openqa.suse.de/api/v1/jobs/7125263/status
Sep 17 16:24:29 QA-Power8-5-kvm worker[104911]: [info] [pid:108741] sle-15-SP4-ppc64le-Build36.1-HA-BV.qcow2: Processing chunk 501/5812, avg. speed ~976.562 KiB/s
Sep 17 16:24:29 QA-Power8-5-kvm worker[101413]: [debug] [pid:102598] Uploading artefact mq_timedreceive_15-1-2.txt
Sep 17 16:24:29 QA-Power8-5-kvm worker[96737]: [debug] [pid:96737] REST-API call: POST http://openqa.suse.de/api/v1/jobs/7125265/status
Sep 17 16:24:29 QA-Power8-5-kvm worker[109458]: [debug] [pid:110336] Uploading artefact bootloader_start-15.txt
Sep 17 16:24:29 QA-Power8-5-kvm kernel: alloc_contig_range: 23 callbacks suppressed
Sep 17 16:24:29 QA-Power8-5-kvm kernel: alloc_contig_range: [3caf00, 3cb100) PFNs busy
Sep 17 16:24:29 QA-Power8-5-kvm kernel: alloc_contig_range: [3caf04, 3cb104) PFNs busy
Sep 17 16:24:29 QA-Power8-5-kvm kernel: alloc_contig_range: [3caf08, 3cb108) PFNs busy
Sep 17 16:24:29 QA-Power8-5-kvm kernel: alloc_contig_range: [3caf0c, 3cb10c) PFNs busy
Sep 17 16:24:29 QA-Power8-5-kvm kernel: alloc_contig_range: [3caf10, 3cb110) PFNs busy
Sep 17 16:24:29 QA-Power8-5-kvm kernel: alloc_contig_range: [3caf14, 3cb114) PFNs busy
Sep 17 16:24:29 QA-Power8-5-kvm kernel: alloc_contig_range: [3caf18, 3cb118) PFNs busy
Sep 17 16:24:29 QA-Power8-5-kvm kernel: alloc_contig_range: [3caf1c, 3cb11c) PFNs busy
Sep 17 16:24:29 QA-Power8-5-kvm kernel: alloc_contig_range: [3caf20, 3cb120) PFNs busy
Sep 17 16:24:29 QA-Power8-5-kvm kernel: alloc_contig_range: [3caf24, 3cb124) PFNs busy
Sep 17 16:24:29 QA-Power8-5-kvm kernel: cma: cma_alloc: alloc failed, req-size: 512 pages, ret: -16
Sep 17 16:24:30 QA-Power8-5-kvm worker[88148]: [debug] [pid:88148] Upload concluded (at wait_children)
Sep 17 16:24:30 QA-Power8-5-kvm worker[109557]: [info] [pid:109557] Isotovideo exit status: 1
Sep 17 16:24:30 QA-Power8-5-kvm worker[109557]: [debug] [pid:109557] Stopping job 7153989 from openqa.suse.de: 07153989-sle-15-SP3-Server-DVD-Incidents-Kernel-KOTD-ppc64le-Build5.3.18-302.1.g316993b-ltp_crashme@ppc64le-virtio - reason: died
Sep 17 16:24:30 QA-Power8-5-kvm worker[109557]: [debug] [pid:109557] REST-API call: POST http://openqa.suse.de/api/v1/jobs/7153989/status
Sep 17 16:24:30 QA-Power8-5-kvm worker[101413]: [debug] [pid:102598] Uploading artefact mq_timedreceive_7-1-2.txt
</code></pre>
<p>in particular the messages</p>
<pre><code>Sep 17 16:24:29 QA-Power8-5-kvm kernel: alloc_contig_range: [3caf20, 3cb120) PFNs busy
Sep 17 16:24:29 QA-Power8-5-kvm kernel: alloc_contig_range: [3caf24, 3cb124) PFNs busy
Sep 17 16:24:29 QA-Power8-5-kvm kernel: cma: cma_alloc: alloc failed, req-size: 512 pages, ret: -16
</code></pre>
<p>So this is an allocation failure. We could report a bug about it, but since KVM on Power8 is unsupported on SUSE, I don't expect any success.</p>
<p>We likely need to accept such issues and have openQA trigger a restart automatically.</p>
<a name="Acceptance-criteria"></a>
<h2 >Acceptance criteria<a href="#Acceptance-criteria" class="wiki-anchor">¶</a></h2>
<ul>
<li><strong>AC1:</strong> qemu ppc64le allocation errors cause automatic job retriggers by openQA</li>
</ul>
<a name="Suggestions"></a>
<h2 >Suggestions<a href="#Suggestions" class="wiki-anchor">¶</a></h2>
<ul>
<li>Catch the error and make it "Incomplete"</li>
<li>Restart the incomplete job</li>
<li>Make openQA automatically detect the issue and trigger restart, e.g. based on <a href="https://github.com/os-autoinst/openQA/blob/master/etc/openqa/openqa.ini#L76">https://github.com/os-autoinst/openQA/blob/master/etc/openqa/openqa.ini#L76</a></li>
</ul>
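<p>A hedged sketch of the matching step: openQA can restart incompletes whose reason matches a configured regex (see the openqa.ini link above); the pattern below is an invented example extended with the ppc64le allocation error, not the real configured value.</p>

```python
# Hedged sketch of regex-based auto-restart; the pattern is an example,
# not the value actually configured on openqa.suse.de.
import re

AUTO_CLONE_REGEX = re.compile(
    r"cache failure|terminated prematurely|"
    r"Failed to allocate KVM HPT of order \d+"
)

def should_auto_restart(reason):
    """Return True when an incomplete job's reason warrants a retrigger."""
    return AUTO_CLONE_REGEX.search(reason) is not None

reason = ("QEMU terminated before QMP connection could be established: "
          "qemu-system-ppc64: Failed to allocate KVM HPT of order 25 "
          "(try smaller maxmem?): Cannot allocate memory")
assert should_auto_restart(reason)
assert not should_auto_restart("tests died: needle mismatch")
```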
openQA Project - action #96007 (Resolved): OpenQA jobs randomly time out during setup phase | https://progress.opensuse.org/issues/96007 | 2021-07-26T10:23:59Z | MDoucha (martin.doucha@suse.com)
<p>OpenQA jobs have been incompleting more than usual in the past few weeks. The incompletes I've seen just today all show the following sequence of messages in worker.log:</p>
<pre><code>[2021-07-24T16:28:45.444 CEST] [debug] started mgmt loop with pid 59928
[2021-07-24T16:28:45.510 CEST] [debug] qemu version detected: 4.2.1
[2021-07-24T16:28:45.512 CEST] [debug] running /usr/bin/chattr -f +C /var/lib/openqa/pool/9/raid
[2021-07-24T18:28:42.557 CEST] [debug] isotovideo received signal TERM
[2021-07-24T18:28:42.558 CEST] [debug] backend got TERM
</code></pre>
<p><a href="https://openqa.suse.de/tests/6552459" class="external">https://openqa.suse.de/tests/6552459</a><br>
<a href="https://openqa.suse.de/tests/6555414" class="external">https://openqa.suse.de/tests/6555414</a><br>
<a href="https://openqa.suse.de/tests/6543695" class="external">https://openqa.suse.de/tests/6543695</a></p>
<p>I'll update this ticket if I find any similar jobs where the last operation before timeout isn't <code>chattr -f +C</code>.</p>
openQA Project - action #88193 (Resolved): [qe-core] virtio-terminal is missing for non root users | https://progress.opensuse.org/issues/88193 | 2021-01-25T14:25:47Z | MDoucha (martin.doucha@suse.com)
<p>Calling <code>$self->select_user_serial_terminal;</code> (alias for <code>$self->select_serial_terminal(0);</code>) in test on a QEMU backend results in the following error:</p>
<pre><code>[2021-01-25T14:06:53.271 CET] [debug] tests/x11/ghostscript.pm:45 called opensusebasetest::select_serial_terminal -> lib/opensusebasetest.pm:1243 called testapi::select_console
[2021-01-25T14:06:53.271 CET] [debug] <<< testapi::select_console(testapi_console="virtio-terminal")
console virtio-terminal does not exist at /usr/lib/os-autoinst/backend/driver.pm line 86.
[2021-01-25T14:06:53.319 CET] [info] ::: basetest::runtest: # Test died: Can't call method "select" on an undefined value at /usr/lib/os-autoinst/backend/baseclass.pm line 667.
</code></pre>
<p>The <code>select_serial_terminal</code> method expects to have a non-root virtio console named <code>virtio-terminal</code> but <code>lib/susedistribution.pm</code> does not define any non-root virtio consoles.</p>
openQA Infrastructure - action #63706 (Rejected): [zkvm] Connection loss between VM and host on o... | https://progress.opensuse.org/issues/63706 | 2020-02-21T10:13:48Z | MDoucha (martin.doucha@suse.com)
<p>The zkvm slots on openqaworker2 frequently lose the VNC and/or SSH connection between the host and the VM. This problem first reappeared on 2020-02-19 around 1 AM and affects both SLE-15GA and SLE-15SP1. SLE-12* jobs use a different worker class.</p>
<p><a href="https://openqa.suse.de/tests/3898309#step/install_ltp/24" class="external">https://openqa.suse.de/tests/3898309#step/install_ltp/24</a><br>
<a href="https://openqa.suse.de/tests/3898794#step/install_ltp/30" class="external">https://openqa.suse.de/tests/3898794#step/install_ltp/30</a><br>
<a href="https://openqa.suse.de/tests/3906656#step/update_kernel/30" class="external">https://openqa.suse.de/tests/3906656#step/update_kernel/30</a><br>
<a href="https://openqa.suse.de/tests/3909115#step/install_ltp/64" class="external">https://openqa.suse.de/tests/3909115#step/install_ltp/64</a><br>
<a href="https://openqa.suse.de/tests/3898244#step/update_kernel/37" class="external">https://openqa.suse.de/tests/3898244#step/update_kernel/37</a><br>
<a href="https://openqa.suse.de/tests/3906591#step/install_ltp/12" class="external">https://openqa.suse.de/tests/3906591#step/install_ltp/12</a></p>
openQA Infrastructure - action #61844 (Resolved): auto_review:"download failed: 521 - Connect tim... | https://progress.opensuse.org/issues/61844 | 2020-01-07T14:21:57Z | MDoucha (martin.doucha@suse.com)
<p>The cache service on openqaworker-arm-3 frequently fails to download assets with error 521:</p>
<pre><code>[2020-01-05T01:30:22.0405 CET] [info] [pid:49324] Downloading SLES-15-aarch64-minimal_installed_for_LTP.qcow2, request #3191 sent to Cache Service
[2020-01-05T01:30:48.0583 CET] [info] [pid:49324] Download of SLES-15-aarch64-minimal_installed_for_LTP.qcow2 processed:
[info] [#3191] Cache size of "/var/lib/openqa/cache" is 49GiB, with limit 50GiB
[info] [#3191] Downloading "SLES-15-aarch64-minimal_installed_for_LTP.qcow2" from "openqa.suse.de/tests/3754531/asset/hdd/SLES-15-aarch64-minimal_installed_for_LTP.qcow2"
[info] [#3191] Purging "/var/lib/openqa/cache/openqa.suse.de/SLES-15-aarch64-minimal_installed_for_LTP.qcow2" because the download failed: 521 - Connect timeout
</code></pre>
<p>The error may seem rare at first glance, but that's most likely because of asset caching on the workers. For example, of the last 10 jobs on openqaworker-arm-3:19 (at the time of writing), 2 jobs failed with a connect timeout, 2 jobs downloaded at least one asset successfully, and 6 jobs ran entirely from cache. It's not clear from the logs whether the timeout happens during the initial connection or halfway through downloading a 2GB file.<br>
<a href="https://openqa.suse.de/admin/workers/1298" class="external">https://openqa.suse.de/admin/workers/1298</a></p>
<p>The oldest case confirmed by os-autoinst log is from 2019-12-15: <a href="https://openqa.suse.de/tests/3708066" class="external">https://openqa.suse.de/tests/3708066</a><br>
There may have been older cases but their logs have most likely been deleted by now.</p>
<p>I've also looked at 5 instances of openqaworker-arm-1 and found only 3 confirmed cases of the same error. That's low enough to be caused by chance.</p>
openQA Infrastructure - action #58945 (Resolved): OpenQA worker service not restarted after OpenQ... | https://progress.opensuse.org/issues/58945 | 2019-10-31T13:12:21Z | MDoucha (martin.doucha@suse.com)
<p>The openqa-worker service on some openqa.suse.de workers doesn't get restarted after updates. This can cause a version mismatch between the os-autoinst and openQA-common packages.</p>
<p>One example of this mismatch are these three verification runs for <a href="https://github.com/os-autoinst/os-autoinst-distri-opensuse/pull/8329" class="external">https://github.com/os-autoinst/os-autoinst-distri-opensuse/pull/8329</a> below:<br>
openqaworker2: <a href="https://openqa.suse.de/tests/3541705" class="external">https://openqa.suse.de/tests/3541705</a> (openqa-worker service last restarted on 2019-10-30)<br>
openqaworker6: <a href="https://openqa.suse.de/tests/3541697" class="external">https://openqa.suse.de/tests/3541697</a> (openqa-worker service last restarted on 2019-09-18)<br>
openqaworker9: <a href="https://openqa.suse.de/tests/3544337" class="external">https://openqa.suse.de/tests/3544337</a> (openqa-worker service last restarted on 2019-09-18)</p>
<p>All three jobs ran the same test modules (see the autoinst log), but all tests after install_ltp were scheduled at runtime. Updating the test schedule at runtime requires patches merged into OpenQA on 2019-09-27, so openqaworker6 and openqaworker9 failed to update the test schedule because they were still running an openQA-common build from mid-September, before the patches were merged.</p>
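<p>The mismatch described above could be detected mechanically: a worker needs a service restart whenever openqa-worker started before the installed openQA-common package was last updated. A minimal sketch with example timestamps from this ticket (collecting the real values, e.g. via rpm or systemctl, is omitted):</p>

```python
# Hedged sketch of a staleness check; timestamps are the examples from
# the ticket, and gathering them from the system is left out.
from datetime import datetime

def needs_restart(service_started, package_installed):
    """A worker service predating the last package update is stale."""
    return service_started < package_installed

pkg_updated = datetime(2019, 10, 30)                     # openQA-common update
assert not needs_restart(datetime(2019, 10, 30, 12), pkg_updated)  # openqaworker2
assert needs_restart(datetime(2019, 9, 18), pkg_updated)           # openqaworker6/9
```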
openQA Infrastructure - action #58805 (Resolved): [infra]Severe storage performance issue on open... | https://progress.opensuse.org/issues/58805 | 2019-10-29T11:34:09Z | MDoucha (martin.doucha@suse.com)
<p>Last week on Thursday, a handful of tests in two LTP testsuites started timing out. I've initially reported it as a kernel performance regression: <a href="https://bugzilla.suse.com/show_bug.cgi?id=1155018" class="external">https://bugzilla.suse.com/show_bug.cgi?id=1155018</a></p>
<p>However, I tried to reproduce the problem on a released kernel version that did not show the issue 3 weeks ago, and succeeded: <a href="https://openqa.suse.de/tests/overview?build=15ga_mdoucha_bsc_1155018&version=15&distri=sle" class="external">https://openqa.suse.de/tests/overview?build=15ga_mdoucha_bsc_1155018&version=15&distri=sle</a></p>
<p>This successful reproduction on a known good kernel indicates that the problem is somewhere in OpenQA infrastructure, possibly a bug introduced during the weekly deployment on Wednesday, October 23rd. The timeout continues to appear in kernel-of-the-day LTP tests: <a href="https://openqa.suse.de/tests/3533819#step/DOR000/7" class="external">https://openqa.suse.de/tests/3533819#step/DOR000/7</a></p>
<p>Both PPC64LE and x86_64 are affected. Reproducibility on aarch64 and s390 is currently unknown because we don't run the affected testsuites on those two platforms. The failing tests mostly belong to the async & direct I/O stress testsuite.</p>