openSUSE Project Management Tool: Issues | https://progress.opensuse.org/ | updated 2020-04-01T08:06:25Z
openQA Tests - action #65121 (Resolved): [qac][public cloud][ltp] ioctl08 test from LTP syscalls ... | https://progress.opensuse.org/issues/65121 | 2020-04-01T08:06:25Z | jlausuch (jalausuch@suse.com)
<p>Some failed jobs: <br>
<a href="https://openqa.suse.de/tests/4070442" class="external">https://openqa.suse.de/tests/4070442</a><br>
<a href="https://openqa.suse.de/tests/4070449" class="external">https://openqa.suse.de/tests/4070449</a></p>
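<p>The log below follows openQA's serial-terminal convention of appending an exit-code marker after every command, so pass/fail can be parsed from the console output. A minimal sketch of the pattern (run_marked is a hypothetical helper, not part of openQA):</p>

```shell
# Each command is followed by an echo of a marker plus $?, so the log
# can be grepped for "cmd-exit-<id>-<status>".
run_marked() {
    local id="$1"; shift
    "$@"
    echo "cmd-exit-${id}-$?"
}

run_marked 406 true    # prints cmd-exit-406-0
run_marked 407 false   # prints cmd-exit-407-1
```

In the log below, cmd-exit-407-255 therefore means that ioctl08 itself exited with status 255.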
<pre><code>Summary:
passed 1
failed 0
skipped 0
warnings 0
cmd-exit-406-0
sh-4.4# ioctl08; echo cmd-exit-407-$?
tst_device.c:88: INFO: Found free device 0 '/dev/loop0'
tst_mkfs.c:90: INFO: Formatting /dev/loop0 with btrfs opts='' extra opts=''
cmd-exit-407-255
sh-4.4# printf tainted-; cat /proc/sys/kernel/tainted; echo cmd-exit-408-$?
tainted-0
cmd-exit-408-0
sh-4.4# ioctl_ns01; echo cmd-exit-409-$?
tst_test.c:1241: INFO: Timeout per run is 0h 05m 00s
ioctl_ns01.c:57: PASS: NS_GET_PARENT fails with EPERM
ioctl_ns01.c:57: PASS: NS_GET_PARENT fails with EPERM
</code></pre>
openQA Tests - action #65115 (Resolved): [qac][public cloud] Storage perf test failing in ssh con... | https://progress.opensuse.org/issues/65115 | 2020-04-01T06:24:17Z | jlausuch (jalausuch@suse.com)
<p>The PC tools helper VM fails to connect to the VM under test.</p>
<p><a href="https://openqa.suse.de/tests/4070444#" class="external">https://openqa.suse.de/tests/4070444#</a><br>
<a href="https://openqa.suse.de/tests/4067508#" class="external">https://openqa.suse.de/tests/4067508#</a></p>
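<p>In the log below the TCP connection to port 22 succeeds, but the ssh probe is rejected with "Permission denied (publickey)" (ssh exits 255 on failure), which points at the injected key pair rather than at networking. A hypothetical helper that separates the two failure modes:</p>

```shell
# nc_rc:  exit status of "nc -vz -w 1 <host> 22"  (TCP reachability)
# ssh_rc: exit status of the ssh probe            (authentication)
classify_probe() {
    local nc_rc="$1" ssh_rc="$2"
    if [ "$nc_rc" -ne 0 ]; then
        echo network        # port unreachable
    elif [ "$ssh_rc" -ne 0 ]; then
        echo auth           # TCP fine, ssh rejected (this ticket)
    else
        echo ok
    fi
}
```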
<pre><code># nc -vz -w 1 34.76.215.65 22; echo Hsf3L-$?-
Connection to 34.76.215.65 22 port [tcp/ssh] succeeded!
Hsf3L-0-
# cat > /tmp/scripthW6mh.sh << 'EOT_hW6mh'; echo hW6mh-$?-
> ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=ERROR -i '/root/.ssh/id_rsa' "susetest@34.76.215.65" -- 'uname -r'
> EOT_hW6mh
hW6mh-0-
# echo hW6mh; bash -oe pipefail /tmp/scripthW6mh.sh ; echo SCRIPT_FINISHEDhW6mh-$?-
hW6mh
susetest@34.76.215.65: Permission denied (publickey).
</code></pre>
openQA Tests - action #64797 (Resolved): [kernel][public cloud] EC2 command fails to upload images | https://progress.opensuse.org/issues/64797 | 2020-03-25T07:43:38Z | jlausuch (jalausuch@suse.com)
<p>There is a new issue in the latest build, corresponding to SLES build 163.11.</p>
<p><a href="https://openqa.suse.de/tests/overview?distri=sle&version=15-SP2&build=0011&groupid=274" class="external">https://openqa.suse.de/tests/overview?distri=sle&version=15-SP2&build=0011&groupid=274</a></p>
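<p>The upload below ends with a DependencyViolation when ec2uploadimg tears down its temporary security group; that error usually means an instance or network interface still references the group. A hedged cleanup sketch (the helper, AWS_CMD indirection, and retry policy are assumptions, not part of ec2uploadimg):</p>

```shell
# Retry deleting a security group until its dependents are gone.
# AWS_CMD is overridable so the loop can be stubbed or dry-run.
AWS_CMD=${AWS_CMD:-aws}

delete_sg_with_retry() {
    local sg="$1" tries=0
    until $AWS_CMD ec2 delete-security-group --group-id "$sg" 2>/dev/null; do
        tries=$((tries + 1))
        [ "$tries" -ge 5 ] && return 1   # still blocked after 5 attempts
        sleep 5                          # give the instance time to go away
    done
}
```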
<pre><code># ec2uploadimg --access-id 'AKIAYWW2BWC7ZK6D6TDV' -s 'LhgJISWo2A9drw67gvybqU3whwAFViXyeos3ihq6' --backing-store ssd --grub2 --machine 'x86_64' -n 'openqa-SLES15-SP2-CHOST-BYOS.x86_64-0.9.5-EC2-Build1.3.raw.xz' --virt-type hvm --sriov-support --ena-support --verbose --regions 'eu-central-1' --ssh-key-pair 'openqa1585121012_0' --private-key-file QA_SSH_KEY.pem -d 'OpenQA tests' 'SLES15-SP2-CHOST-BYOS.x86_64-0.9.5-EC2-Build1.3.raw.xz'; echo i5Gu5-$?-
Successfully created VPC with id vpc-0c8dfe70b8a7b2ffe
Successfully created internet gateway igw-07ae0d90e57169a80
Successfully created route table rtb-0b44872ab7935b9d3
Successfully created VPC subnet with id subnet-0de5c035c815caa93
Creating temporary security group
Temporary Security Group Created sg-0b495d816b0c739d1 in vpc vpc-0c8dfe70b8a7b2ffe
Successfully allowed incoming SSH port 22 for security group sg-0b495d816b0c739d1 in vpc-0c8dfe70b8a7b2ffe
Waiting for instance: i-0ae73ed595a9aba4f
. .
Waiting for volume creation: vol-08e5155f3378cefee
.
Wait for volume attachment
.
Waiting to obtain instance IP address
.
Attempt ssh connection to 18.194.37.145
. . . /root/.venv_ec2uploadimg/lib/python3.6/site-packages/paramiko/client.py:837: UserWarning: Unknown ssh-ed25519 host key for 18.194.37.145: b'fa5f17f0f6a344c0504dde11927b4cf5'
key.get_name(), hostname, hexlify(key.get_fingerprint())
An error occurred (DependencyViolation) when calling the DeleteSecurityGroup operation: resource sg-0b495d816b0c739d1 has a dependent object
</code></pre>
openQA Tests - action #64724 (Resolved): [kernel][xfstests] Parted fails to create a partition | https://progress.opensuse.org/issues/64724 | 2020-03-22T18:31:09Z | jlausuch (jalausuch@suse.com)
<p><a href="https://openqa.suse.de/tests/4017525#step/partition/13" class="external">https://openqa.suse.de/tests/4017525#step/partition/13</a></p>
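<p>"partition(s) on /dev/vdb are being used" normally means something still holds the old partitions: a mount, active swap, or a stale signature. A hedged sketch of releasing the device before relabeling (relabel_gpt is a hypothetical helper; /dev/vdb is the device from the ticket):</p>

```shell
relabel_gpt() {
    local dev="$1"
    umount "${dev}"?* 2>/dev/null || true   # unmount any mounted partitions
    swapoff "${dev}"?* 2>/dev/null || true  # release swap, if active
    wipefs --all "$dev" >/dev/null          # drop stale partition signatures
    parted "$dev" --script -- mklabel gpt
}
# relabel_gpt /dev/vdb
```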
<pre><code>parted /dev/vdb --script -- mklabel gpt
Error: partition(s) on /dev/vdb are being used.
</code></pre>
openQA Tests - action #64710 (Resolved): [qac][public cloud] Failed to get credentials from Vault... | https://progress.opensuse.org/issues/64710 | 2020-03-21T17:38:19Z | jlausuch (jalausuch@suse.com)
<p>All the tests in the Azure-HPC-BYOS flavor fail to talk to the Vault server, while other flavors work: I have re-run tests at the same time in this flavor and the others, and only Azure-HPC-BYOS consistently fails.</p>
<p><a href="https://openqa.suse.de/tests/4020950#step/run_ltp/32" class="external">https://openqa.suse.de/tests/4020950#step/run_ltp/32</a></p>
<p>It is not clear what is happening in the background, but it looks like vault_get_secrets calls vault_api, which in turn fails on all three calls to __vault_api.</p>
<p>We should add more debug messages here, because it is difficult to debug with only these calls:</p>
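<p>A hedged reproduction of the three retries with per-attempt logging, using Vault's HTTP API via curl (vault_get, the retry count, and the VAULT_ADDR/VAULT_TOKEN handling here are illustrative assumptions; the production code lives in lib/publiccloud/provider.pm):</p>

```shell
# Query a Vault secret, logging the HTTP status of every attempt, so a
# failing flavor shows *why* it failed, not just that retries ran out.
vault_get() {
    local path="$1" attempt code
    for attempt in 1 2 3; do
        code=$(curl -s -o /tmp/vault_reply.json -w '%{http_code}' \
                    -H "X-Vault-Token: $VAULT_TOKEN" \
                    "$VAULT_ADDR/v1/$path")
        echo "vault attempt $attempt: HTTP $code" >&2
        [ "$code" = 200 ] && { cat /tmp/vault_reply.json; return 0; }
        sleep 1
    done
    return 1
}
```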
<pre><code>[2020-03-21T18:26:42.592 CET] [debug] tests/publiccloud/run_ltp.pm:59 called publiccloud::basetest::provider_factory -> lib/publiccloud/basetest.pm:65 called publiccloud::azure::init -> lib/publiccloud/azure.pm:44 called publiccloud::azure::vault_create_credentials -> lib/publiccloud/azure.pm:64 called testapi::record_info
[2020-03-21T18:26:42.592 CET] [debug] <<< testapi::record_info(title="INFO", output="Get credentials from VAULT server.", result="ok")
[2020-03-21T18:27:12.732 CET] [debug] Maximum number of Vault request retries exceeded. Check Vault Server is up and running at /var/lib/openqa/cache/openqa.suse.de/tests/sle/lib/publiccloud/provider.pm line 549.
[2020-03-21T18:27:12.735 CET] [debug] lib/publiccloud/basetest.pm:94 called publiccloud::basetest::_cleanup -> lib/publiccloud/basetest.pm:80 called (eval) -> lib/publiccloud/basetest.pm:80 called run_ltp::cleanup -> tests/publiccloud/run_ltp.pm:103 called testapi::type_string
</code></pre>
openQA Tests - action #64707 (Resolved): [qac][public cloud][ltp] epoll_wait02 test from LTP sysc... | https://progress.opensuse.org/issues/64707 | 2020-03-20T22:10:27Z | jlausuch (jalausuch@suse.com)
<p><a href="https://openqa.suse.de/tests/4019168#step/epoll_wait02/1">https://openqa.suse.de/tests/4019168#step/epoll_wait02/1</a></p>
<p>However, on other images it passes, e.g. <a href="https://openqa.suse.de/tests/4017274#step/epoll_wait02/1">https://openqa.suse.de/tests/4017274#step/epoll_wait02/1</a></p>
<p>This is the output from the test execution:</p>
<pre><code>sh-4.4# epoll_wait02; echo cmd-exit-129-$?
tst_test.c:1229: INFO: Timeout per run is 0h 05m 00s
tst_timer_test.c:348: INFO: CLOCK_MONOTONIC resolution 1ns
tst_timer_test.c:360: INFO: prctl(PR_GET_TIMERSLACK) = 50us
tst_timer_test.c:264: INFO: epoll_wait() sleeping for 1000us 500 iterations, threshold 450.01us
tst_timer_test.c:307: INFO: min 1068us, max 1167us, median 1073us, trunc mean 1074.28us (discarded 25)
tst_timer_test.c:322: PASS: Measured times are within thresholds
tst_timer_test.c:264: INFO: epoll_wait() sleeping for 2000us 500 iterations, threshold 450.01us
tst_timer_test.c:307: INFO: min 2071us, max 2174us, median 2077us, trunc mean 2078.58us (discarded 25)
tst_timer_test.c:322: PASS: Measured times are within thresholds
tst_timer_test.c:264: INFO: epoll_wait() sleeping for 5000us 300 iterations, threshold 450.04us
tst_timer_test.c:307: INFO: min 5075us, max 5209us, median 5084us, trunc mean 5087.58us (discarded 15)
tst_timer_test.c:322: PASS: Measured times are within thresholds
tst_timer_test.c:264: INFO: epoll_wait() sleeping for 10000us 100 iterations, threshold 450.33us
tst_timer_test.c:307: INFO: min 10080us, max 10167us, median 10091us, trunc mean 10092.68us (discarded 5)
tst_timer_test.c:322: PASS: Measured times are within thresholds
tst_timer_test.c:264: INFO: epoll_wait() sleeping for 25000us 50 iterations, threshold 451.29us
tst_timer_test.c:307: INFO: min 25085us, max 25187us, median 25106us, trunc mean 25107.94us (discarded 2)
tst_timer_test.c:322: PASS: Measured times are within thresholds
tst_timer_test.c:264: INFO: epoll_wait() sleeping for 100000us 10 iterations, threshold 537.00us
tst_timer_test.c:307: INFO: min 100154us, max 104057us, median 100165us, trunc mean 100616.56us (discarded 1)
tst_timer_test.c:310: FAIL: epoll_wait() slept for too long
Time: us | Frequency
--------------------------------------------------------------------------------
100154 | ********************************************************************
100360 | *********+
100566 |
100772 |
100978 |
101184 |
101390 |
101596 |
101802 |
102008 |
102214 |
102420 |
102626 |
102832 |
103038 |
103244 |
103450 |
103656 |
103862 | *******************-
--------------------------------------------------------------------------------
206us | 1 sample = 9.71429 '*', 19.42857 '+', 38.85714 '-', non-zero '.'
tst_timer_test.c:264: INFO: epoll_wait() sleeping for 1000000us 2 iterations, threshold 4400.00us
tst_timer_test.c:307: INFO: min 1037244us, max 1039297us, median 1037244us, trunc mean 1037244.00us (discarded 1)
tst_timer_test.c:310: FAIL: epoll_wait() slept for too long
Time: us | Frequency
--------------------------------------------------------------------------------
1037244 | ********************************************************************
1037353 |
1037462 |
1037571 |
1037680 |
1037789 |
1037898 |
1038007 |
1038116 |
1038225 |
1038334 |
1038443 |
1038552 |
1038661 |
1038770 |
1038879 |
1038988 |
1039097 |
1039206 | ********************************************************************
--------------------------------------------------------------------------------
109us | 1 sample = 68.00000 '*', 136.00000 '+', 272.00000 '-', non-zero '.'
Summary:
passed 5
failed 2
skipped 0
warnings 0
</code></pre>
openQA Tests - action #64547 (Resolved): [kernel][public cloud] Terraform init crashes with SIGSE... | https://progress.opensuse.org/issues/64547 | 2020-03-17T22:11:47Z | jlausuch (jalausuch@suse.com)
<p>There is a new error in Terraform while running the terraform init command.<br>
It needs investigation.</p>
<p>Examples:<br>
<a href="https://openqa.suse.de/tests/3997753" class="external">https://openqa.suse.de/tests/3997753</a> (EC2)<br>
<a href="https://openqa.suse.de/tests/3997743" class="external">https://openqa.suse.de/tests/3997743</a> (GCE)<br>
and some more in Azure.</p>
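<p>The log below shows the panic; re-running with Terraform's own tracing enabled usually names the plugin binary that crashed. A hedged debugging sketch (debug_init and the paths are illustrative; TF_LOG is a standard Terraform environment variable):</p>

```shell
# Re-run init with tracing on and grep the trace for the crash site.
debug_init() {
    TF_LOG=DEBUG terraform init -no-color 2>&1 | tee /tmp/tf-init.log
    grep -n 'panic\|SIGSEGV' /tmp/tf-init.log
}
```

If the trace implicates the freshly released random 2.2.1 plugin, pinning the previous plugin release in the plan is a quick cross-check.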
<pre><code># terraform init -no-color; echo 2t4Hw-$?-
Initializing the backend...
Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "random" (hashicorp/random) 2.2.1...
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x55c999cd8dfc]
</code></pre>
openQA Tests - action #62408 (Closed): [kernel][public cloud] Timeout when creating Azure-Standar... | https://progress.opensuse.org/issues/62408 | 2020-01-21T07:59:24Z | jlausuch (jalausuch@suse.com)
<p>All the Azure-Standard tests are failing with a timeout in terraform apply. <br>
<a href="https://openqa.suse.de/tests/overview?distri=sle&version=15-SP2&build=0.9.3-1.28&groupid=219&flavor=Azure-Standard" class="external">https://openqa.suse.de/tests/overview?distri=sle&version=15-SP2&build=0.9.3-1.28&groupid=219&flavor=Azure-Standard</a></p>
<p>I ran some tests with that timeout increased to 1 hour and got this message:</p>
<pre><code>azurerm_virtual_machine.openqa-vm[0]: Still creating... [40m10s elapsed]
azurerm_virtual_machine.openqa-vm[0]: Still creating... [40m20s elapsed]
azurerm_virtual_machine.openqa-vm[0]: Still creating... [40m30s elapsed]
azurerm_virtual_machine.openqa-vm[0]: Still creating... [40m40s elapsed]
azurerm_virtual_machine.openqa-vm[0]: Still creating... [40m50s elapsed]
azurerm_virtual_machine.openqa-vm[0]: Still creating... [41m0s elapsed]
Error: Code="OSProvisioningTimedOut" Message="OS Provisioning for VM 'jlausuch-vm-b453e006c72d7fae' did not finish in the allotted time. The VM may still finish provisioning successfully. Please check provisioning state later. Also, make sure the image has been properly prepared (generalized).\r\n * Instructions for Windows: https://azure.microsoft.com/documentation/articles/virtual-machines-windows-upload-image/ \r\n * Instructions for Linux: https://azure.microsoft.com/documentation/articles/virtual-machines-linux-capture-image/ "
on plan.tf line 129, in resource "azurerm_virtual_machine" "openqa-vm":
129: resource "azurerm_virtual_machine" "openqa-vm" {
</code></pre>
openQA Tests - action #59339 (Closed): [functional] Support Server based on SLE12-SP3 is broken | https://progress.opensuse.org/issues/59339 | 2019-11-12T11:05:59Z | jlausuch (jalausuch@suse.com)
<a name="Observation"></a>
<h2 >Observation<a href="#Observation" class="wiki-anchor">¶</a></h2>
<p>The Support Server fails to install NFS-related packages on ARM.<br>
<a href="https://github.com/os-autoinst/os-autoinst-distri-opensuse/blob/master/tests/support_server/setup.pm#L530" class="external">https://github.com/os-autoinst/os-autoinst-distri-opensuse/blob/master/tests/support_server/setup.pm#L530</a></p>
<a name="Test-suite-description"></a>
<h2 >Test suite description<a href="#Test-suite-description" class="wiki-anchor">¶</a></h2>
<p>Maintainer: Pavel Sladek <a href="mailto:psladek@suse.com">psladek@suse.com</a><br>
according to <a href="https://github.com/os-autoinst/os-autoinst-distri-opensuse/blob/master/tests/support_server/setup.pm#L35" class="external">https://github.com/os-autoinst/os-autoinst-distri-opensuse/blob/master/tests/support_server/setup.pm#L35</a></p>
<a name="Reproducible"></a>
<h2 >Reproducible<a href="#Reproducible" class="wiki-anchor">¶</a></h2>
<p>Fails since (at least) Build <a href="https://openqa.suse.de/tests/3583082" class="external">0372</a></p>
<pre><code># zypper -n in rpcbind nfs-kernel-server | cat; ( exit ${PIPESTATUS[0]} ); echo yjD1U-$?-
File '/content' not found on medium 'http://download.suse.de/install/SLP/SLE-12-SP3-Server-GM/x86_64/DVD1/'
</code></pre>
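<p>The marker line above relies on a bash idiom worth spelling out: piping zypper through cat forces non-interactive output, but would also make $? report cat's status, so the subshell re-raises the exit code of the first pipeline element for the marker. A minimal illustration:</p>

```shell
# ${PIPESTATUS[0]} (bash) holds the exit status of the first command in
# the last pipeline; the subshell turns it back into $? for the marker.
false | cat; ( exit ${PIPESTATUS[0]} ); echo yjD1U-$?-
# prints: yjD1U-1-
```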
<a name="Expected-result"></a>
<h2 >Expected result<a href="#Expected-result" class="wiki-anchor">¶</a></h2>
<p>Last good: <a href="https://openqa.suse.de/tests/3563133" class="external">0369</a></p>
<pre><code># zypper -n in rpcbind nfs-kernel-server | cat; ( exit ${PIPESTATUS[0]} ); echo yjD1U-$?-
Loading repository data...
Reading installed packages...
'rpcbind' is already installed.
Package 'rpcbind' is not available in your repositories. Cannot reinstall, upgrade, or downgrade.
'nfs-kernel-server' is already installed.
Package 'nfs-kernel-server' is not available in your repositories. Cannot reinstall, upgrade, or downgrade.
Resolving package dependencies...
Nothing to do.
</code></pre>
<a name="Further-details"></a>
<h2 >Further details<a href="#Further-details" class="wiki-anchor">¶</a></h2>
<p>The tests have been using the same qcow2 image for many builds; it broke recently because<br>
<a href="http://download.suse.de/install/SLP/SLE-12-SP3-Server-GM/x86_64/DVD1" class="external">http://download.suse.de/install/SLP/SLE-12-SP3-Server-GM/x86_64/DVD1</a> no longer exists.</p>
<p>We should build a new support server image using a newer repo. E.g. SP4:<br>
<a href="http://download.suse.de/install/SLP/SLE-12-SP4-Server-GM/x86_64/DVD1" class="external">http://download.suse.de/install/SLP/SLE-12-SP4-Server-GM/x86_64/DVD1</a></p>
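<p>Before rebuilding, it is cheap to confirm which repos still answer. A hypothetical probe checking the /content file that zypper complained about above (the helper is an assumption; the curl flags are standard):</p>

```shell
# Report the HTTP status of a repo's /content file without downloading it.
repo_alive() {
    curl -s -o /dev/null -m 10 -w '%{http_code}' --head "$1/content"
}
# e.g. repo_alive http://download.suse.de/install/SLP/SLE-12-SP4-Server-GM/x86_64/DVD1
```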
openQA Tests - action #58664 (Resolved): [kernel][wicked] Needs to improve handling of situation ... | https://progress.opensuse.org/issues/58664 | 2019-10-24T13:56:53Z | jlausuch (jalausuch@suse.com)
<a name="Observation"></a>
<h2 >Observation<a href="#Observation" class="wiki-anchor">¶</a></h2>
<p>The tcpdump command fails.</p>
<pre><code># ## START: t01_gre_tunnel_legacy
# tcpdump -s0 -U -w /tmp/tcpdumpt01_gre_tunnel_legacy.pcap >& /dev/null & export CHECK_TCPDUMP_PID=$!; echo ZUt9d-$?-
[1] 4252
ZUt9d-0-
[1]+ Exit 127 tcpdump -s0 -U -w /tmp/tcpdumpt01_gre_tunnel_legacy.pcap &> /dev/null
</code></pre>
<p><a href="https://openqa.suse.de/tests/3518286#step/t01_gre_tunnel_legacy/89" class="external">https://openqa.suse.de/tests/3518286#step/t01_gre_tunnel_legacy/89</a></p>
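<p>Exit status 127 from the backgrounded job means the shell could not find the command at all, i.e. tcpdump is not installed in the image, so any later check against CHECK_TCPDUMP_PID is doomed. A hedged guard (start_capture is a hypothetical helper, not the test module's code):</p>

```shell
# Refuse to start the capture when tcpdump is absent, instead of
# backgrounding a job that dies immediately with status 127.
start_capture() {
    command -v tcpdump >/dev/null 2>&1 || {
        echo "tcpdump not installed" >&2
        return 127
    }
    tcpdump -s0 -U -w "$1" >/dev/null 2>&1 &
    export CHECK_TCPDUMP_PID=$!
}
```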
<a name="Test-suite-description"></a>
<h2 >Test suite description<a href="#Test-suite-description" class="wiki-anchor">¶</a></h2>
<p>Maintainer: <a href="mailto:asmorodskyi@suse.de">asmorodskyi@suse.de</a> <a href="mailto:jalausuch@suse.com">jalausuch@suse.com</a> <a href="mailto:cfamullaconrad@suse.de">cfamullaconrad@suse.de</a></p>
<a name="Reproducible"></a>
<h2 >Reproducible<a href="#Reproducible" class="wiki-anchor">¶</a></h2>
<p>Fails since (at least) Build <a href="https://openqa.opensuse.org/tests/1063727" class="external">20191022</a></p>
<a name="Expected-result"></a>
<h2 >Expected result<a href="#Expected-result" class="wiki-anchor">¶</a></h2>
<p>Last good: None in QAM jobs, but it works on QA jobs. <br>
Example of expected result: <a href="https://openqa.suse.de/tests/3507684" class="external">https://openqa.suse.de/tests/3507684</a></p>
<a name="Further-details"></a>
<h2 >Further details<a href="#Further-details" class="wiki-anchor">¶</a></h2>
openQA Tests - action #58268 (Resolved): [kernel][public cloud] Fix OpenQA view with different ve... | https://progress.opensuse.org/issues/58268 | 2019-10-16T11:57:33Z | jlausuch (jalausuch@suse.com)
<p>Currently we group Public Cloud tests by the Cloud build and the Kiwi build, which usually differ between providers. This makes grouping the test cases tricky, since everything depends on those build numbers; it has some drawbacks (e.g. searching for specific tests, JDP, ...) and makes the view very complex.</p>
<p>The idea is to come up with a solution to group the tests in a smarter way. From the ISOS POST, we could change the BUILD variable we pass to OpenQA.</p>
<p>There are two options:</p>
<p>1) Use the same BUILD number as the SLES build corresponding to the PC images.<br>
Advantage: everything is in one place; we could even merge the group with the SLE1X job groups.</p>
<p>It would look something like this:</p>
<pre><code>Build0358 (a day ago)
Build0357 (4 days ago)
Build0350 (8 days ago)
Build0346 (8 days ago)
</code></pre>
<p>2) Group them by SLES build number and also by provider name.<br>
Advantage: it is easy to find tests for a specific provider.<br>
Disadvantage: we cannot merge it into the SLE1X groups; we would need to keep a separate Public Cloud group.</p>
<p>It would look something like this:</p>
<pre><code>Build0358-Azure (a day ago)
Build0358-EC2 (a day ago)
Build0358-GCE (a day ago)
Build0357-Azure (a day ago)
Build0357-EC2 (a day ago)
Build0357-GCE (a day ago)
...
</code></pre>
openQA Tests - action #58220 (Resolved): [kernel] fadump LVM test fails | https://progress.opensuse.org/issues/58220 | 2019-10-15T20:25:44Z | jlausuch (jalausuch@suse.com)
<a name="Observation"></a>
<h2 >Observation<a href="#Observation" class="wiki-anchor">¶</a></h2>
<p>Failure when entering the GRUB screen:<br>
<a href="https://openqa.suse.de/tests/3479900#step/kdump_and_crash/64" class="external">grub screen</a></p>
<a name="Test-suite-description"></a>
<h2 >Test suite description<a href="#Test-suite-description" class="wiki-anchor">¶</a></h2>
<p>Maintainer: Petr Cervinka <a href="mailto:pcervinka@suse.com">pcervinka@suse.com</a></p>
<a name="Reproducible"></a>
<h2 >Reproducible<a href="#Reproducible" class="wiki-anchor">¶</a></h2>
<p>Fails since (at least) Build <a href="https://openqa.suse.de/tests/3479900" class="external">0358</a><br>
But it also failed in some <a href="https://openqa.suse.de/tests/3395111" class="external">older run</a></p>
<a name="Expected-result"></a>
<h2 >Expected result<a href="#Expected-result" class="wiki-anchor">¶</a></h2>
<p>This is an example of <a href="https://openqa.suse.de/tests/3470330#step/kdump_and_crash/64" class="external">successful run</a> in the previous build.</p>
<a name="Further-details"></a>
<h2 >Further details<a href="#Further-details" class="wiki-anchor">¶</a></h2>
openQA Tests - action #58148 (Rejected): [kernel][multipath] Timeout in qa_kernel_multipath | https://progress.opensuse.org/issues/58148 | 2019-10-14T15:19:53Z | jlausuch (jalausuch@suse.com)
<p>The test times out after 2 hours; we need to investigate why.</p>
<p><a href="https://openqa.suse.de/tests/3470142" class="external">https://openqa.suse.de/tests/3470142</a><br>
<a href="https://openqa.suse.de/tests/3470021" class="external">https://openqa.suse.de/tests/3470021</a></p>
openQA Project - action #40415 (Resolved): Concurrent jobs with dependencies don't work if they a... | https://progress.opensuse.org/issues/40415 | 2018-08-29T11:44:49Z | jlausuch (jalausuch@suse.com)
<a name="Reproducibility"></a>
<h1 >Reproducibility:<a href="#Reproducibility" class="wiki-anchor">¶</a></h1>
<p>We have 2 jobs, let's say PARENT and CHILD, where CHILD has PARALLEL_WITH=PARENT.</p>
<p>I have created two tests that run on the same machine "64bit" (qemu with some other options):<br>
<a href="http://fromm.arch.suse.de/tests/1394" class="external">http://fromm.arch.suse.de/tests/1394</a><br>
<a href="http://fromm.arch.suse.de/tests/1395" class="external">http://fromm.arch.suse.de/tests/1395</a></p>
<p>The parent job needs the child ID for the mutex command:</p>
<pre><code>my $children = get_children();
my $child_id = (keys %$children)[0];
...
script_run("echo Waiting for child with child_id=$child_id");
mutex_wait("child_ready", $child_id);
</code></pre>
<p>This is one line of the parent's output:</p>
<pre><code>Waiting for child with child_id=1399
</code></pre>
<p>Everything is OK so far: CHILD recognizes PARENT as its parent, and the locking API works without problems.</p>
<p>Then I created another machine, "64bit-other", with exactly the same characteristics as the first one: <a href="http://fromm.arch.suse.de/admin/machines" class="external">http://fromm.arch.suse.de/admin/machines</a><br>
and assigned CHILD to "64bit-other" in the job group.</p>
<p>The result is that CHILD doesn't have the parent job in the settings panel any more, and the PARENT's output is now:</p>
<pre><code>Waiting for child with child_id=
</code></pre>
<p>Therefore, the command </p>
<pre><code>mutex_wait("child_ready", $child_id);
</code></pre>
<p>waits forever.</p>
<p>Why use different machines? For virtual jobs it does not make sense, but for bare-metal jobs such as the NFV and InfiniBand tests we use different workers and machines:<br>
ipmi-sonic and ipmi-tails, with worker classes 64bit-mlx_con5_sonic and 64bit-mlx_con5_tails respectively.</p>
openQA Project - action #31417 (Rejected): Add support for SSH from Host to VM | https://progress.opensuse.org/issues/31417 | 2018-02-06T14:21:54Z | jlausuch (jalausuch@suse.com)
<p>The VMs that OpenQA launches have an internal IP that is not reachable from the Host. Therefore, there is no way to SSH into them, only VNC is available. </p>
<p>The problem with VNC is that it limits user comfort when debugging. SSH allows a much better experience, as you can use your own console, SCP, bidirectional copy/paste, mouse scrolling, etc.</p>
<p>However, it is possible to SSH from the VM to the host, so reverse SSH can be used; that is just a workaround, though. Supporting SSH directly from the host would be more convenient.</p>
<p>I think this could be achieved by adding a parameter such as <em>-net user,hostfwd=tcp::7777-:8001</em> to the qemu command line that launches the VM.<br>
I am not sure whether this is the right place, but this line could be a good starting point: <a href="https://github.com/os-autoinst/os-autoinst/blob/master/backend/qemu.pm#L581" class="external">https://github.com/os-autoinst/os-autoinst/blob/master/backend/qemu.pm#L581</a></p>
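<p>A hedged sketch of the suggested user-mode forward. The ticket's example maps host port 7777 to guest port 8001; for a login shell you would forward to the guest's sshd on port 22 instead (the wrapper name, memory size, and disk name are illustrative):</p>

```shell
# Launch a VM whose guest port 22 is reachable as localhost:7777.
start_vm_with_ssh_forward() {
    qemu-system-x86_64 -m 1024 -hda "$1" \
        -net nic -net user,hostfwd=tcp::7777-:22
}
# start_vm_with_ssh_forward disk.qcow2
# then, from the host:  ssh -p 7777 root@localhost
```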