openSUSE Project Management Tool: Issues | https://progress.opensuse.org/ | 2023-04-17T07:59:50Z
openQA Project - action #127739 (New): ASSET_1 gets outdated value when using openqa-clone-custom... | https://progress.opensuse.org/issues/127739 | 2023-04-17T07:59:50Z | syrianidou_sofia (sofia.syrianidou@suse.com)
<p>When using openqa-clone-custom-git-refspec to clone a job, ASSET_1, or any other ASSET variable, gets its value from the settings. But the asset uploaded after the original job has run usually gets a different file name. For example, if ASSET_1=dev_tools.dud in the settings, the original job will have uploaded ASSET_1=10931294-dev_tools.dud, and the cloned job will fail because it cannot find dev_tools.dud, with the error:<br>
[info] [#555] Downloading "dev_tools.dud" from "<a href="http://openqa.suse.de/tests/10932003/asset/other/dev_tools.dud" class="external">http://openqa.suse.de/tests/10932003/asset/other/dev_tools.dud</a>"<br>
[info] [#555] Download of "/var/lib/openqa/cache/openqa.suse.de/dev_tools.dud" failed: 404 Not Found</p>
<p>For the cloning to succeed, ASSET_1 has to be set manually to ASSET_1=10931294-dev_tools.dud when typing the refspec command.<br>
#openqa-clone-custom-git-refspec <a href="https://github.com/sofiasyria/os-autoinst-distri-opensuse/tree/master" class="external">https://github.com/sofiasyria/os-autoinst-distri-opensuse/tree/master</a> <a href="https://openqa.suse.de/tests/10931666" class="external">https://openqa.suse.de/tests/10931666</a> ASSET_1='10931294-dev_tools.dud'</p>
<p>It would be useful to automate the above process. </p>
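<p>A sketch of how this could be automated: fetch the original job's details (e.g. via the openQA jobs API) and pick the uploaded asset whose name is the settings value with the job-id prefix. The helper below is hypothetical and works on an in-memory asset list; in practice the list would come from the API response.</p>

```python
import re

def resolve_asset(settings_value, uploaded_assets):
    """Return the uploaded asset matching a plain settings value.

    openQA prefixes uploaded assets with the producing job's id,
    e.g. dev_tools.dud becomes 10931294-dev_tools.dud.
    """
    prefixed = re.compile(r"^\d+-" + re.escape(settings_value) + r"$")
    for name in uploaded_assets:
        if name == settings_value or prefixed.match(name):
            return name
    return None

# The asset names from the ticket:
print(resolve_asset("dev_tools.dud", ["10931294-dev_tools.dud"]))
# -> 10931294-dev_tools.dud, the value the clone needs
```

The clone wrapper could then substitute the resolved name into the ASSET_1 override instead of requiring the user to look it up by hand.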
openQA Infrastructure - action #127337 (Resolved): Some s390x workers have been failing for all j... | https://progress.opensuse.org/issues/127337 | 2023-04-06T09:04:57Z | syrianidou_sofia (sofia.syrianidou@suse.com)
<p>There are quite a few s390x failures on booting. After comparing test results, we observed that the following workers:<br>
<a href="https://openqa.suse.de/admin/workers/1328" class="external">grenache-1:30</a><br>
<a href="https://openqa.suse.de/admin/workers/1348" class="external">grenache-1:40</a><br>
<a href="https://openqa.suse.de/admin/workers/1809" class="external">grenache-1:43</a><br>
<a href="https://openqa.suse.de/admin/workers/1812" class="external">grenache-1:44</a></p>
<p>have been failing constantly for all jobs for the last 11 months. It seems that before this period the workers (besides grenache-1:44, which was already s390x) were used for ipmi and ppc tests, and they have now been switched to s390x service.</p>
<p>A very frequent error message is: Test died: unexpected end of data at /usr/lib/os-autoinst/consoles/VNC.pm line 187. <a href="https://openqa.suse.de/tests/10869590#step/bootloader_start/39" class="external">example</a></p>
<p>Possibly related to: <a href="https://progress.opensuse.org/issues/108266" class="external">https://progress.opensuse.org/issues/108266</a></p>
<p>Note: we noticed this issue after creating a generic s390x-kvm machine to take advantage of the s390-kvm WORKER_CLASS. It seems that other current s390x jobs designated to use s390x-kvm-sle12/15 do not run on these workers. For more information, see: <a href="https://progress.opensuse.org/issues/126293" class="external">https://progress.opensuse.org/issues/126293</a></p>
openQA Infrastructure - action #90692 (Rejected): [sporadic] script_output getting wrong output o... | https://progress.opensuse.org/issues/90692 | 2021-04-06T07:50:59Z | syrianidou_sofia (sofia.syrianidou@suse.com)
<p>Only for aarch64, the following command:<br>
script_output("cat /proc/sys/kernel/sysrq");<br>
should return the content of the sysrq file, but about 10% of the time it returns the character [, which seems to be picked up mistakenly from the serial console.<br>
Failure:<br>
<a href="https://openqa.suse.de/tests/5754960#step/yast2_system_settings/51" class="external">https://openqa.suse.de/tests/5754960#step/yast2_system_settings/51</a></p>
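<p>Until the root cause is found, one workaround is to retry the command until the output looks sane. A minimal Python sketch of the idea (the real change would live in the Perl test code around script_output; all names here are made up):</p>

```python
def script_output_retry(run, accept, tries=5):
    """Re-run a flaky command until its output passes a sanity check.

    `run` stands in for a script_output-style call; on aarch64 it
    occasionally returns a stray '[' read from the serial console.
    """
    out = None
    for _ in range(tries):
        out = run()
        if accept(out):
            return out
    raise RuntimeError(f"still got unexpected output: {out!r}")

# Simulate the flaky console: first reply is the stray '[', then the real value.
replies = iter(["[", "176"])
print(script_output_retry(lambda: next(replies), lambda o: o.strip().isdigit()))  # 176
```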
openQA Project - action #87725 (New): MULTIPATH backend variable doesn't set HDDMODEL for aarch64 | https://progress.opensuse.org/issues/87725 | 2021-01-13T16:48:15Z | syrianidou_sofia (sofia.syrianidou@suse.com)
<p>According to the definition of the variable, if MULTIPATH=1:<br>
"Add HDD drives as multipath devices. Override HDDMODEL to virtio-scsi-pci"</p>
<p>That indeed happens on 64bit and ppc64le, but on aarch64 HDDMODEL keeps its default value "virtio-blk-device", and the test fails with the error: "Device 'virtio-blk-device' can't go on SCSI bus". For the test to work, HDDMODEL needs to be set to "scsi-hd".</p>
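<p>The expected behavior can be sketched as a small settings-resolution function. This is an illustration of the documented rule plus the aarch64 workaround from this ticket, not the actual backend code:</p>

```python
def effective_hddmodel(settings, arch):
    """Sketch of the MULTIPATH handling described above, including the
    aarch64 workaround from this ticket; not the real openQA code."""
    if settings.get("HDDMODEL"):
        return settings["HDDMODEL"]      # an explicit override always wins
    if settings.get("MULTIPATH") == "1":
        # documented behavior: MULTIPATH=1 overrides the disk model;
        # on aarch64 the value that actually works is scsi-hd
        return "scsi-hd" if arch == "aarch64" else "virtio-scsi-pci"
    return "virtio-blk-device"           # the plain default

print(effective_hddmodel({"MULTIPATH": "1"}, "ppc64le"))  # virtio-scsi-pci
print(effective_hddmodel({"MULTIPATH": "1"}, "aarch64"))  # scsi-hd
print(effective_hddmodel({}, "aarch64"))                  # virtio-blk-device
```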
openQA Project - action #69313 (Resolved): When using refspec for ppc for a particular job, PRODU... | https://progress.opensuse.org/issues/69313 | 2020-07-24T10:25:20Z | syrianidou_sofia (sofia.syrianidou@suse.com)
<p>While attempting to run refspecs for the "autoyast_reinstall" test suite on various architectures, I faced an issue specific to ppc64le: PRODUCTDIR needs to be set manually, as follows:<br>
<code>openqa-clone-custom-git-refspec https://github.com/sofiasyria/os-autoinst-distri-opensuse/tree/ac68803 https://openqa.suse.de/tests/4427385 ASSET_1="04427383-autoinst.xml" YAML_TEST_DATA=test_data/yast/autoyast_reinstall/autoyast_reinstall_ppc64le-hmc.yaml PRODUCTDIR="os-autoinst-distri-opensuse/products/sle"</code></p>
<p>If the above command is used without specifying PRODUCTDIR, the variable gets the value "os-autoinst-distri-opensuseos-autoinst-distri-opensuse/products/sle", which leads to a test failure, as here:<br>
<a href="https://openqa.suse.de/tests/4482090" class="external">https://openqa.suse.de/tests/4482090</a></p>
<p>The particular test suite runs on ppc64le-2g, ppc64le-hmc-single-disk, 64bit, s390x and aarch64. For both ppc machines, PRODUCTDIR needs to be set manually; for the rest, it is not necessary.</p>
<p>I have unsuccessfully tried to reproduce the issue with other jobs on ppc.</p>
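<p>The doubled value suggests the case directory is prepended to a default that already contains the repository name. A sketch of the failure mode and of a guard that would avoid it (hypothetical, not the actual clone-script code):</p>

```python
import os.path

repo = "os-autoinst-distri-opensuse"
default_productdir = repo + "/products/sle"   # default already contains the repo name

# The broken behavior: the checkout directory is glued onto that default.
broken = repo + default_productdir
print(broken)  # os-autoinst-distri-opensuseos-autoinst-distri-opensuse/products/sle

# A guarded join avoids doubling the prefix:
def join_productdir(casedir, productdir):
    if productdir.startswith(casedir):
        return productdir
    return os.path.join(casedir, productdir)

print(join_productdir(repo, default_productdir))  # os-autoinst-distri-opensuse/products/sle
```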
openQA Project - action #67822 (Resolved): [tools] When using refspec for svirt-xen-hvm, openQA-... | https://progress.opensuse.org/issues/67822 | 2020-06-08T11:17:32Z | syrianidou_sofia (sofia.syrianidou@suse.com)
<p>Using the refspec tool to run a test on svirt-xen-hvm fails at bootloader_svirt if the test is assigned to openQA-SUT-2.</p>
<pre><code>[2020-06-08T11:18:53.923 CEST] [debug] <<< backend::baseclass::run_ssh(cmd="cat > /var/lib/libvirt/images/openQA-SUT-2.xml", hostname="openqaw5-xen.qa.suse.de", password="SECRET", username=undef, keep_open=1)
[2020-06-08T11:18:53.923 CEST] [debug] <<< backend::baseclass::new_ssh_connection(blocking=1, username=undef, hostname="openqaw5-xen.qa.suse.de", password="SECRET", keep_open=1)
[2020-06-08T11:18:54.067 CEST] [debug] <<< backend::baseclass::run_ssh_cmd(cmd="virsh destroy openQA-SUT-2 |& grep -v \"\\(failed to get domain\\|Domain not found\\)\"", wantarray=0, keep_open=1, username=undef, hostname="openqaw5-xen.qa.suse.de", password="SECRET")
[2020-06-08T11:18:54.067 CEST] [debug] <<< backend::baseclass::run_ssh(cmd="virsh destroy openQA-SUT-2 |& grep -v \"\\(failed to get domain\\|Domain not found\\)\"", password="SECRET", hostname="openqaw5-xen.qa.suse.de", username=undef, wantarray=0, keep_open=1)
[2020-06-08T11:18:54.068 CEST] [debug] <<< backend::baseclass::new_ssh_connection(blocking=1, password="SECRET", hostname="openqaw5-xen.qa.suse.de", username=undef, keep_open=1, wantarray=0)
[2020-06-08T11:18:54.245 CEST] [debug] [run_ssh_cmd(virsh destroy openQA-SUT-2 |& grep -v "\(failed to get domain\|Domain not found\)")] stdout:
error: Failed to destroy domain openQA-SUT-2
error: internal error: Failed to destroy domain '817'
[2020-06-08T11:18:54.245 CEST] [debug] [run_ssh_cmd(virsh destroy openQA-SUT-2 |& grep -v "\(failed to get domain\|Domain not found\)")] exit-code: 0
[2020-06-08T11:18:54.245 CEST] [debug] <<< backend::baseclass::run_ssh_cmd(cmd="virsh undefine --snapshots-metadata openQA-SUT-2 |& grep -v \"\\(failed to get domain\\|Domain not found\\)\"", username=undef, hostname="openqaw5-xen.qa.suse.de", password="SECRET", wantarray=0, keep_open=1)
[2020-06-08T11:18:54.245 CEST] [debug] <<< backend::baseclass::run_ssh(cmd="virsh undefine --snapshots-metadata openQA-SUT-2 |& grep -v \"\\(failed to get domain\\|Domain not found\\)\"", keep_open=1, wantarray=0, username=undef, hostname="openqaw5-xen.qa.suse.de", password="SECRET")
[2020-06-08T11:18:54.246 CEST] [debug] <<< backend::baseclass::new_ssh_connection(keep_open=1, wantarray=0, blocking=1, username=undef, hostname="openqaw5-xen.qa.suse.de", password="SECRET")
[2020-06-08T11:18:54.420 CEST] [debug] [run_ssh_cmd(virsh undefine --snapshots-metadata openQA-SUT-2 |& grep -v "\(failed to get domain\|Domain not found\)")] stdout:
error: Failed to undefine domain openQA-SUT-2
error: Requested operation is not valid: cannot undefine transient domain
[2020-06-08T11:18:54.420 CEST] [debug] [run_ssh_cmd(virsh undefine --snapshots-metadata openQA-SUT-2 |& grep -v "\(failed to get domain\|Domain not found\)")] exit-code: 0
[2020-06-08T11:18:54.420 CEST] [debug] <<< backend::baseclass::run_ssh_cmd(cmd="virsh define /var/lib/libvirt/images/openQA-SUT-2.xml", username=undef, password="SECRET", hostname="openqaw5-xen.qa.suse.de", wantarray=0, keep_open=1)
[2020-06-08T11:18:54.421 CEST] [debug] <<< backend::baseclass::run_ssh(cmd="virsh define /var/lib/libvirt/images/openQA-SUT-2.xml", username=undef, password="SECRET", hostname="openqaw5-xen.qa.suse.de", keep_open=1, wantarray=0)
[2020-06-08T11:18:54.421 CEST] [debug] <<< backend::baseclass::new_ssh_connection(username=undef, password="SECRET", hostname="openqaw5-xen.qa.suse.de", blocking=1, wantarray=0, keep_open=1)
[2020-06-08T11:18:54.585 CEST] [debug] [run_ssh_cmd(virsh define /var/lib/libvirt/images/openQA-SUT-2.xml)] stdout:
[2020-06-08T11:18:54.585 CEST] [debug] [run_ssh_cmd(virsh define /var/lib/libvirt/images/openQA-SUT-2.xml)] stderr:
error: Failed to define domain from /var/lib/libvirt/images/openQA-SUT-2.xml
error: operation failed: domain 'openQA-SUT-2' already exists with uuid 9215f187-45ff-4915-87f7-c7f901a194b3
[2020-06-08T11:18:54.585 CEST] [debug] [run_ssh_cmd(virsh define /var/lib/libvirt/images/openQA-SUT-2.xml)] exit-code: 1
[2020-06-08T11:18:54.695 CEST] [info] ::: basetest::runtest: # Test died: {
"args" => [],
"json_cmd_token" => "geKeVGfx",
"console" => "svirt",
"cmd" => "backend_proxy_console_call",
"function" => "define_and_start"
}
virsh define failed at /usr/lib/os-autoinst/consoles/sshVirtsh.pm line 570, <$fh> line 28.
[2020-06-08T11:18:54.698 CEST] [debug] ||| finished bootloader_svirt installation at 2020-06-08 09:18:54 (20 s)
</code></pre>
<p>This issue has been reproducible since Thursday, 2020-06-04.<br>
<a href="https://openqa.suse.de/tests/4322931#next_previous">https://openqa.suse.de/tests/4322931#next_previous</a></p>
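<p>The failing sequence matches libvirt's semantics for transient domains: a transient domain cannot be undefined, and a define collides while the old instance is still alive (here the preceding destroy also failed, leaving it running). A toy Python model of that state machine, for illustration only, not libvirt:</p>

```python
class TransientDomain:
    """Toy model of the libvirt behavior in the log above: the transient
    openQA-SUT-2 stayed alive because destroy failed, a transient domain
    can never be undefined, and define collides with the live instance."""

    def __init__(self, alive):
        self.alive = alive

    def undefine(self):
        # virsh: "Requested operation is not valid: cannot undefine transient domain"
        return "error: cannot undefine transient domain"

    def define(self):
        if self.alive:
            # virsh: "domain 'openQA-SUT-2' already exists with uuid ..."
            return "error: domain 'openQA-SUT-2' already exists"
        return "Domain openQA-SUT-2 defined"

dom = TransientDomain(alive=True)  # destroy failed, so the domain is still alive
print(dom.undefine())  # error: cannot undefine transient domain
print(dom.define())    # error: domain 'openQA-SUT-2' already exists
```

In this reading, the cleanup can only succeed once the stale instance is actually gone, so the real fix belongs in whatever left the old domain running.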
openQA Project - action #67537 (Resolved): [tools] Not possible to change hostname on Xen tests. | https://progress.opensuse.org/issues/67537 | 2020-06-01T13:11:50Z | syrianidou_sofia (sofia.syrianidou@suse.com)
<p>When running the yast2_lan module on Xen hvm, there is a step that changes the server's hostname via YaST ncurses. After that, running "hostname | grep $hostname" fails. These steps work on other architectures for any given hostname.</p>
<p>For $hostname = 'notsusetest'</p>
<p>Xen hvm: <a href="https://openqa.suse.de/tests/4301636">https://openqa.suse.de/tests/4301636</a><br>
x86: <a href="http://falafel.suse.cz/tests/832">http://falafel.suse.cz/tests/832</a></p>
<p>From autoinst.log:</p>
<pre><code>[2020-06-01T14:48:15.253 CEST] [debug] <<< testapi::type_string(string="notsusetest", max_interval=250, wait_screen_changes=0, wait_still_screen=0, timeout=30, similarity_level=47)
[2020-06-01T14:48:15.628 CEST] [debug] tests/console/yast2_lan.pm:79 called y2lan_restart_common::close_yast2_lan -> lib/y2lan_restart_common.pm:394 called testapi::send_key
[2020-06-01T14:48:15.628 CEST] [debug] <<< testapi::send_key(key="alt-o", wait_screen_change=0, do_wait=0)
[ 125.773551] systemd-udevd[444]: Network interface NamePolicy= disabled by default.
[2020-06-01T14:48:15.963 CEST] [debug] tests/console/yast2_lan.pm:79 called y2lan_restart_common::close_yast2_lan -> lib/y2lan_restart_common.pm:395 called testapi::wait_serial
[2020-06-01T14:48:15.963 CEST] [debug] <<< testapi::wait_serial(timeout=180, quiet=undef, buffer_size=undef, expect_not_found=0, regexp="yast2-lan-status-0", no_regex=0, record_output=undef)
[ 125.824397] systemd-udevd[2826]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
[ 125.850886] systemd-udevd[2825]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
[ 125.897175] systemd[1]: Starting Generate issue file for login session...
[ 125.942205] systemd[1]: Started Generate issue file for login session.
[ 127.756837] systemd[1]: Reloading wicked managed network interfaces.
[ 127.810697] wickedd-dhcp4[1027]: eth0: Request to release DHCPv4 lease with UUID 9ef8d45e-e14e-0c00-0904-000004000000: releasing...
[ 127.822276] wickedd-dhcp6[1028]: eth0: Request to release DHCPv6 lease using UUID 9ef8d45e-e14e-0c00-0904-000005000000: releasing...
Welcome to SUSE Linux Enterprise Server 15 SP2 RC3 (x86_64) - Kernel 5.3.18-20-default (ttyS0).
eth0: fe80::216:3eff:fe55:57af
notsusetest login: [ 133.404351] firewalld[943]: ERROR: UNKNOWN_INTERFACE: 'eth0' is not in any zone
[ 133.765808] wickedd-dhcp4[1027]: eth0: Request to acquire DHCPv4 lease with UUID 9ef8d45e-e14e-0c00-0904-000008000000
[ 133.776497] wickedd-dhcp6[1028]: eth0: Request to acquire DHCPv6 lease with UUID 9ef8d45e-e14e-0c00-0904-000009000000 in mode auto
[ 134.589778] wickedd-dhcp4[1027]: eth0: Committed DHCPv4 lease with address 10.162.31.224 (lease time 86400 sec, renew in 43200 sec, rebind in 75600 sec)
[ 135.256270] systemd[1]: Reloading System Logging Service.
[ 135.274887] rsyslogd[974]: [origin software="rsyslogd" swVersion="8.39.0" x-pid="974" x-info="http://www.rsyslog.com"] rsyslogd was HUPed
[ 135.308382] systemd[1]: Reloaded System Logging Service.
Welcome to SUSE Linux Enterprise Server 15 SP2 RC3 (x86_64) - Kernel 5.3.18-20-default (ttyS0).
eth0: 10.162.31.224 fe80::216:3eff:fe55:57af
1c224 login: [ 138.129352] wickedd-dhcp6[1028]: eth0: link confirmation in reply with status All addresses still on link.
[ 138.140672] wickedd-dhcp6[1028]: eth0: Committing DHCPv6 lease with:
[ 138.148900] wickedd-dhcp6[1028]: eth0 +ia-na.address 2620:113:80c0:80a0:10:162:30:f88f/0, pref-lft 1619880, valid-lft 2591880
Welcome to SUSE Linux Enterprise Server 15 SP2 RC3 (x86_64) - Kernel 5.3.18-20-default (ttyS0).
eth0: 10.162.31.224 2620:113:80c0:80a0:10:162:30:f88f
1c224 login: [ 141.637845] wicked[3004]: eth0 device-ready
[ 141.648879] wicked[3004]: eth0 up
[ 141.673913] systemd[1]: Reloaded wicked managed network interfaces.
yast2-lan-status-0
[2020-06-01T14:48:34.044 CEST] [debug] >>> testapi::wait_serial: yast2-lan-status-0: ok
[2020-06-01T14:48:34.045 CEST] [debug] tests/console/yast2_lan.pm:80 called testapi::wait_still_screen
[2020-06-01T14:48:34.045 CEST] [debug] <<< testapi::wait_still_screen(similarity_level=47, stilltime=7, timeout=30)
[2020-06-01T14:48:41.091 CEST] [debug] >>> testapi::wait_still_screen: detected same image for 7 seconds, last detected similarity is 50.3353530196853
[2020-06-01T14:48:41.091 CEST] [debug] tests/console/yast2_lan.pm:85 called opensusebasetest::clear_and_verify_console -> lib/opensusebasetest.pm:51 called utils::clear_console -> lib/utils.pm:336 called testapi::type_string
[2020-06-01T14:48:41.092 CEST] [debug] <<< testapi::type_string(string="clear\n", max_interval=250, wait_screen_changes=0, wait_still_screen=0, timeout=30, similarity_level=47)
[2020-06-01T14:48:41.295 CEST] [debug] tests/console/yast2_lan.pm:85 called opensusebasetest::clear_and_verify_console -> lib/opensusebasetest.pm:52 called testapi::assert_screen
[2020-06-01T14:48:41.295 CEST] [debug] <<< testapi::assert_screen(mustmatch="cleared-console", timeout=30)
[2020-06-01T14:48:41.399 CEST] [debug] no match: 89.9s, best candidate: cleared-console-root-wsl-20200525 (0.81)
[2020-06-01T14:48:42.402 CEST] [debug] >>> testapi::_handle_found_needle: found cleared-console-root-20190314, similarity 1.00 @ 65/3
�[0m[2020-06-01T14:48:42.402 CEST] [debug] tests/console/yast2_lan.pm:86 called testapi::assert_script_run
[2020-06-01T14:48:42.402 CEST] [debug] <<< testapi::assert_script_run(cmd="hostname|grep notsusetest", fail_message="", timeout=90, quiet=undef)
[2020-06-01T14:48:42.402 CEST] [debug] tests/console/yast2_lan.pm:86 called testapi::assert_script_run
[2020-06-01T14:48:42.402 CEST] [debug] <<< testapi::type_string(string="hostname|grep notsusetest", max_interval=250, wait_screen_changes=0, wait_still_screen=0, timeout=30, similarity_level=47)
[2020-06-01T14:48:43.271 CEST] [debug] tests/console/yast2_lan.pm:86 called testapi::assert_script_run
[2020-06-01T14:48:43.271 CEST] [debug] <<< testapi::type_string(string="; echo Zc7oL-\$?- > /dev/ttyS0\n", max_interval=250, wait_screen_changes=0, wait_still_screen=0, timeout=30, similarity_level=47)
Zc7oL-1-
[2020-06-01T14:48:44.391 CEST] [debug] tests/console/yast2_lan.pm:86 called testapi::assert_script_run
[2020-06-01T14:48:44.391 CEST] [debug] <<< testapi::wait_serial(regexp=qr/Zc7oL-\d+-/, record_output=undef, no_regex=0, buffer_size=undef, quiet=undef, timeout=90, expect_not_found=0)
[2020-06-01T14:48:45.461 CEST] [debug] >>> testapi::wait_serial: (?^:Zc7oL-\d+-): ok
[2020-06-01T14:48:45.529 CEST] [info] ::: basetest::runtest: # Test died: command 'hostname|grep notsusetest' failed at /var/lib/openqa/pool/12/os-autoinst-distri-opensuse/tests/console/yast2_lan.pm line 86.
[2020-06-01T14:48:45.530 CEST] [debug] ||| finished yast2_lan console at 2020-06-01 12:48:45 (93 s)
</code></pre>
<p>Digging through old tickets, I found: <a href="https://progress.opensuse.org/issues/15740">https://progress.opensuse.org/issues/15740</a><br>
but I am not sure whether it is related.</p>
openQA Tests - action #64688 (Resolved): [functional][y] Travis check detect_unused_modules is ta... | https://progress.opensuse.org/issues/64688 | 2020-03-20T14:54:54Z | syrianidou_sofia (sofia.syrianidou@suse.com)
<p>The original ticket, <a href="https://progress.opensuse.org/issues/47894" class="external">https://progress.opensuse.org/issues/47894</a>,<br>
mentions a constant check for unused modules. The implemented check verifies, on every push build, that all modules in the repository are used somewhere. This looks like overkill, as it takes about 4 minutes to complete. We could investigate whether it can be replaced by a more targeted check on the particular push, plus a complete check running daily (like a cron job) that parses the whole repository.</p>
<a name="Acceptance-criteria"></a>
<h2 >Acceptance criteria<a href="#Acceptance-criteria" class="wiki-anchor">¶</a></h2>
<ol>
<li>Unused-module detection is performed only on the scope of the changes in the PR</li>
</ol>
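<p>A PR-scoped check could look roughly like this: take only the modules touched by the change and search the rest of the checkout for references to them. This is a hypothetical sketch, not the existing Travis check:</p>

```python
from pathlib import Path

def unused_changed_modules(changed_modules, repo_files):
    """Report which of the changed test modules are referenced nowhere else.

    Sketch of a PR-scoped check: instead of scanning every module on every
    push, only look up the modules touched by the change. `repo_files`
    maps path -> file content and stands in for the git checkout.
    """
    unused = []
    for mod in changed_modules:
        name = Path(mod).stem               # e.g. tests/console/foo.pm -> foo
        referenced = any(name in content
                         for path, content in repo_files.items()
                         if path != mod)
        if not referenced:
            unused.append(mod)
    return unused

repo = {
    "tests/console/zypper_up.pm": "...",
    "schedule/default.yaml": "- console/zypper_up\n",
    "tests/console/orphaned.pm": "...",
}
print(unused_changed_modules(["tests/console/orphaned.pm"], repo))   # ['tests/console/orphaned.pm']
print(unused_changed_modules(["tests/console/zypper_up.pm"], repo))  # []
```

The daily full-repository pass would then catch anything a PR-scoped run can miss, such as a schedule entry removed in a different PR.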
openQA Tests - action #64466 (Resolved): [functional][y][hyper-v][timeboxed:16h] test fails in sh... | https://progress.opensuse.org/issues/64466 | 2020-03-12T13:14:17Z | syrianidou_sofia (sofia.syrianidou@suse.com)
<p>Sporadic failure in shutdown module. Two types of failures observed:</p>
<ul>
<li>The display at the password prompt is distorted and causes a needle match failure ( <a href="https://openqa.suse.de/tests/3969518#step/shutdown/10">https://openqa.suse.de/tests/3969518#step/shutdown/10</a> )</li>
<li>After the password is entered, the system doesn't show the desktop ( <a href="https://openqa.suse.de/tests/3982260#step/shutdown/15">https://openqa.suse.de/tests/3982260#step/shutdown/15</a> ). According to autoinst-log, the provided password was incorrect and the system then "Stopped target Current graphical user session."</li>
</ul>
<p>According to Yanis, it fails constantly on UEFI. It seems to be an issue with the openQA setup, as Yanis managed to shut down the system manually and it worked just fine. We need to identify the actions needed to fix it.</p>
<p>[ 675.237199] gnome-keyring-daemon[2603]: couldn't initialize slot with master password: The password or PIN is incorrect</p>
<p>[ 675.297939] gdm-password][4597]: gkr-pam: unlocked login keyring</p>
<p>[2020-03-12T12:29:22.895 CET] [debug] tests/shutdown/shutdown.pm:28 called power_action_utils::power_action -> lib/power_action_utils.pm:255 called testapi::select_console -> lib/susedistribution.pm:883 called x11utils::ensure_unlocked_desktop -> lib/x11utils.pm:126 called testapi::wait_still_screen<br>
[2020-03-12T12:29:22.895 CET] [debug] <<< testapi::wait_still_screen(similarity_level=47, timeout=30, stilltime=1)<br>
[ 675.471356] systemd[4428]: Stopped target Current graphical user session.</p>
<p>[ 675.803774] systemd[4428]: Stopped target GNOME X11 Session (session: gnome-login).</p>
<p>[ 675.914330] gdm-Xorg-:1[4408]: (II) event3 - Microsoft Vmbus HID-compliant Mouse: device removed</p>
<p>[ 675.994240] gdm-Xorg-:1[4408]: (II) event4 - Power Button: device removed</p>
<p>[ 676.108023] gdm-Xorg-:1[4408]: (II) event2 - AT Translated Set 2 keyboard: device removed</p>
<p>[ 676.167890] gdm-Xorg-:1[4408]: (II) event0 - AT Translated Set 2 keyboard: device removed</p>
<p>[ 676.223982] gdm-Xorg-:1[4408]: (II) event1 - TPPS/2 IBM TrackPoint: device removed</p>
<p>[ 676.276254] gdm-Xorg-:1[4408]: (II) UnloadModule: "libinput"</p>
<p>[ 676.316927] gdm-Xorg-:1[4408]: (II) UnloadModule: "libinput"</p>
<p>[ 676.358856] gdm-Xorg-:1[4408]: (II) UnloadModule: "libinput"</p>
<p>[ 676.418850] gdm-Xorg-:1[4408]: (II) UnloadModule: "libinput"</p>
<p>[ 676.487558] gdm-Xorg-:1[4408]: (II) UnloadModule: "libinput"</p>
<p>[ 676.546600] gdm-Xorg-:1[4408]: (II) Server terminated successfully (0). Closing log file.</p>
<p>[ 676.648098] gnome-session[4457]: gnome-session-binary[4457]: WARNING: Lost name on bus: org.gnome.SessionManager</p>
<p>[ 676.732171] gdm-launch-environment][4424]: pam_unix(gdm-launch-environment:session): session closed for user gdm</p>
<p>[ 676.834667] systemd[4428]: Stopped target GNOME Session.</p>
<p>[ 676.908222] gnome-session[4457]: Unable to init server: Could not connect: Connection refused</p>
<a name="Observation"></a>
<h2 >Observation<a href="#Observation" class="wiki-anchor">¶</a></h2>
<p>openQA test in scenario sle-15-SP2-Full-x86_64-skip_registration@svirt-hyperv fails in<br>
<a href="https://openqa.suse.de/tests/3982260/modules/shutdown/steps/14" class="external">shutdown</a></p>
<a name="Reproducible"></a>
<h2 >Reproducible<a href="#Reproducible" class="wiki-anchor">¶</a></h2>
<p>Fails since (at least) Build <a href="https://openqa.suse.de/tests/3658083#step/shutdown/10" class="external">101.1</a></p>
<a name="Expected-result"></a>
<h2 >Expected result<a href="#Expected-result" class="wiki-anchor">¶</a></h2>
<p>Last good: <a href="https://openqa.suse.de/tests/3978952" class="external">154.1</a></p>
openQA Tests - action #64325 (Resolved): [functional][y] Add check in autoyast installation test ... | https://progress.opensuse.org/issues/64325 | 2020-03-09T13:04:41Z | syrianidou_sofia (sofia.syrianidou@suse.com)
<p>When users manually provide the path to the autoyast profile, there can be typos. After the system displays an error that the profile cannot be found, changing the path to the correct one should work.</p>
<p>(Recently, I tried the above action manually and faced some unexpected warning messages. See <a href="https://bugzilla.suse.com/show_bug.cgi?id=1165464" class="external">https://bugzilla.suse.com/show_bug.cgi?id=1165464</a> )</p>
<p>The challenge with implementing this check in openQA is that if the given autoyast profile path is incorrect, openQA will not proceed to run the test.</p>
<p>LWP::Simple's head() function could be used for this.</p>
<a name="Acceptance-criteria"></a>
<h2 >Acceptance criteria<a href="#Acceptance-criteria" class="wiki-anchor">¶</a></h2>
<ol>
<li>Accessibility of the autoyast profile is validated before test is executed</li>
</ol>
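<p>The same idea as LWP::Simple's head(), sketched in Python: issue a cheap HEAD request for the profile URL and refuse to schedule the test when it is unreachable. The function name and its placement are assumptions, not existing openQA code:</p>

```python
import urllib.error
import urllib.request

def profile_reachable(url, timeout=10):
    """Cheap pre-flight check that the autoyast profile URL is reachable,
    done before the test is scheduled. Mirrors the LWP::Simple head()
    suggestion from the ticket."""
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=timeout):
            return True
    except (urllib.error.URLError, ValueError):
        return False
```

A scheduler wrapper could then reject the job early with a clear "profile not found" message instead of letting the installer time out on a typo.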
openQA Tests - action #64204 (Resolved): [functional][y] test fails in yast2_lan_restart | https://progress.opensuse.org/issues/64204 | 2020-03-04T15:46:31Z | syrianidou_sofia (sofia.syrianidou@suse.com)
<p>It seems that x11/yast2_lan_restart.pm line 95 expects the network to be restarted after the hardware device name change, but there is no recorded restart of the network in journal.log (according to lib/y2lan_restart_common.pm line 153, see <a href="https://progress.opensuse.org/issues/62465" class="external">https://progress.opensuse.org/issues/62465</a>). This leads to the test failure.</p>
<a name="Reproducible"></a>
<h2 >Reproducible<a href="#Reproducible" class="wiki-anchor">¶</a></h2>
<p>Fails since (at least) Build <a href="https://openqa.suse.de/tests/3953889#step/yast2_lan_restart/213" class="external">150.1</a></p>
<a name="Expected-result"></a>
<h2 >Expected result<a href="#Expected-result" class="wiki-anchor">¶</a></h2>
<p>Last good: <a href="https://openqa.suse.de/tests/3931089" class="external">146.1</a> (or more recent)</p>
<a name="Further-details"></a>
<h2 >Further details<a href="#Further-details" class="wiki-anchor">¶</a></h2>
<p>Always latest result in this scenario: <a href="https://openqa.suse.de/tests/latest?arch=x86_64&distri=sle&flavor=Online&machine=64bit&test=yast2_gui&version=15-SP2" class="external">latest</a></p>
openQA Tests - action #63922 (Rejected): [functional][y] Sporadic failure of mediacheck module in... | https://progress.opensuse.org/issues/63922 | 2020-02-27T15:14:42Z | syrianidou_sofia (sofia.syrianidou@suse.com)
<p>Since build 131.1, it is observed that while the "mediacheck" module runs on Hyper-V, "Check installation media" is selected, but instead of actually checking the media, the test goes back to the grub menu and selects "installation". The test then fails with a timeout when the warning for the Beta distribution appears.</p>
<p><a href="https://openqa.suse.de/tests/3928000#step/mediacheck/11" class="external">https://openqa.suse.de/tests/3928000#step/mediacheck/11</a></p>
<p>The issue happens sporadically.</p>
<a name="Expected-result"></a>
<h2 >Expected result<a href="#Expected-result" class="wiki-anchor">¶</a></h2>
<p>Last good: <a href="https://openqa.suse.de/tests/3915652" class="external">143.1</a> (or more recent)</p>
openQA Tests - action #63814 (Resolved): [functional][y][virtualization][hyperv][timeboxed:12h] m... | https://progress.opensuse.org/issues/63814 | 2020-02-25T12:24:34Z | syrianidou_sofia (sofia.syrianidou@suse.com)
<p>Even though the same module works on other architectures, validate_fs_table cannot verify that the partitions were successfully created.</p>
<p>The code fails at the point where the output of the command "lsblk -n", split at every newline, is saved in @lsblk_output and parsed by:</p>
<pre><code>foreach (@lsblk_output) {
    if ($_ =~ /(?<check>\Q$args->{mount_point}\E\z)/) {
        $check = $+{check};
        last;
    }
}
</code></pre>
<p>On step <a href="https://openqa.suse.de/tests/3904536#step/validate_fs_table/14" class="external">https://openqa.suse.de/tests/3904536#step/validate_fs_table/14</a><br>
it is shown that the mount point "/" is indeed created, but it cannot be identified by "$_ =~ /(?<check>\Q$args->{mount_point}\E\z)/".</p>
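<p>One way to make the match robust, whatever precedes or follows the mount point in a given lsblk build's output, is to compare against the last whitespace-separated column instead of anchoring a regex at end-of-line. A Python sketch of that alternative (not the actual validate_fs_table fix):</p>

```python
def mount_points(lsblk_output):
    """Collect mount points from `lsblk -n` output by taking the last
    whitespace-separated column, so trailing decorations or stray spaces
    cannot break an end-of-line regex anchor. Sketch only."""
    points = set()
    for line in lsblk_output.splitlines():
        fields = line.split()
        if fields and fields[-1].startswith("/"):
            points.add(fields[-1])
    return points

sample = ("sda    8:0  0 40G 0 disk \n"
          "sda1   8:1  0  8M 0 part \n"
          "sda2   8:2  0 40G 0 part /\n")
print("/" in mount_points(sample))  # True
```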
<a name="Observation"></a>
<h2 >Observation<a href="#Observation" class="wiki-anchor">¶</a></h2>
<p>openQA test in scenario sle-15-SP2-Online-x86_64-msdos@svirt-hyperv fails in<br>
<a href="https://openqa.suse.de/tests/3904536/modules/validate_fs_table/steps/16" class="external">validate_fs_table</a></p>
<a name="Reproducible"></a>
<h2 >Reproducible<a href="#Reproducible" class="wiki-anchor">¶</a></h2>
<p>Always.</p>
openQA Tests - action #63460 (Rejected): serial console stopped working on hyperv | https://progress.opensuse.org/issues/63460 | 2020-02-14T12:51:49Z | syrianidou_sofia (sofia.syrianidou@suse.com)
<p>I cannot find a single hyperv test that does not fail.</p>
<p>The tests seem to fail either on bootloader_start with an error like:</p>
<p>SHA256 checksum does not match for ISO:<br>
Calculated: <br>
Expected: aac524d3a46967c89bb8d8761bc6e9f2986b40bc1a1ae60abc746dff01b001ee</p>
<p>or after first boot, while running commands that time out.</p>
<p><strong>Test suite | Failed module</strong><br>
skip_registration@svirt-hyperv : integration_services [1]<br>
skip_registration@svirt-hyperv-uefi : bootloader_start<br>
lvm+RAID1@svirt-hyperv(/uefi) : validate_lvm_raid1 [2]<br>
mediacheck@svirt-hyperv(/uefi) : bootloader_hyperv<br>
minimal+base_yast@svirt-hyperv : system_prepare [3]<br>
minimal+base_yast@svirt-hyperv-uefi : first_boot [4]</p>
<p>[1] Test died: command 'curl -f -v <a href="http://10.160.0.147:20103/V0WsZ7j56nMjfBev/current_script">http://10.160.0.147:20103/V0WsZ7j56nMjfBev/current_script</a> > /tmp/scriptHwmlo.sh' timed out at /usr/lib/os-autoinst/distribution.pm line 260.<br>
[2] Test died: script timeout: lvscan | awk '{print $2}' | sed s/\'//g at /usr/lib/os-autoinst/testapi.pm line 1104.<br>
[3] Test died: command 'chown bernhard /dev/ttyS0 && usermod -a -G tty,dialout,$(stat -c %G /dev/ttyS0) bernhard' timed out at /var/lib/openqa/cache/openqa.suse.de/tests/sle/lib/utils.pm line 1185.<br>
[4] Test died: no candidate needle with tag(s) 'linux-login, emergency-shell, emergency-mode' matched (it looks like the system hangs on the grub menu)</p>
<p>Example:</p>
<a name="Observation"></a>
<h2 >Observation<a href="#Observation" class="wiki-anchor">¶</a></h2>
<p>openQA test in scenario sle-15-SP2-Online-x86_64-lvm+RAID1@svirt-hyperv fails in<br>
<a href="https://openqa.suse.de/tests/3885844/modules/validate_lvm_raid1/steps/11" class="external">validate_lvm_raid1</a></p>
<a name="Test-suite-description"></a>
<h2 >Test suite description<a href="#Test-suite-description" class="wiki-anchor">¶</a></h2>
<p>Maintainer: slindomansilla, jrauch</p>
<p>Combination of LVM and RAID1, installation of RAID1 on top of LVM using expert partitioner.</p>
<p>(crypt-)LVM installations can take longer, especially on non-x86_64 architectures.</p>
<a name="Reproducible"></a>
<h2 >Reproducible<a href="#Reproducible" class="wiki-anchor">¶</a></h2>
<p>Fails since (at least) Build <a href="https://openqa.suse.de/tests/3881706" class="external">139.1</a></p>
<a name="Expected-result"></a>
<h2 >Expected result<a href="#Expected-result" class="wiki-anchor">¶</a></h2>
<p>Last good: <a href="https://openqa.suse.de/tests/3868027" class="external">136.2</a> (or more recent)</p>
<a name="Further-details"></a>
<h2 >Further details<a href="#Further-details" class="wiki-anchor">¶</a></h2>
<p>Always latest result in this scenario: <a href="https://openqa.suse.de/tests/latest?arch=x86_64&distri=sle&flavor=Online&machine=svirt-hyperv&test=lvm%2BRAID1&version=15-SP2" class="external">latest</a></p>
openQA Project - action #62243 (Resolved): After latest updates, openQA has problematic behavior ... | https://progress.opensuse.org/issues/62243 | 2020-01-17T12:44:19Z | syrianidou_sofia (sofia.syrianidou@suse.com)
<p>After updating my workstation, openQA started having trouble running any job. Jobs remain scheduled, even though sometimes they run. Sometimes a job gets terminated without any error messages or logs. Sometimes I get the error "os-autoinst command server not available, job is likely not running" even though the job is running. Video and live view are never available. Initially, I thought this was a problem with Tumbleweed, so I reformatted the workstation and installed Leap. I see the exact same behavior on Leap, and also in the containerized version.</p>
<p>Link to OpenQA in container:<br>
<a href="http://falafel.suse.cz/" class="external">http://falafel.suse.cz/</a></p>