action #135938
[qe-sap] test fails in hana_install copying from NFS server qesap-nfs.qa.suse.cz with timeout
Status: open
Done: 0%
Description
Observation
openQA test in scenario sle-15-SP4-SAP-DVD-Updates-x86_64-qam_sles4sap_wmp_hana_node02@64bit-sap-qam fails in hana_install
Test suite description
The base test suite is used for job templates defined in YAML documents. It has no settings of its own.
Reproducible
Fails since (at least) Build 20230915-1
Expected result
Last good: 20230913-1 (or more recent)
Further details
Always latest result in this scenario: latest
Updated by okurz about 1 year ago
- Copied to openqa-force-result #135980: [qe-sap] test fails in hana_install copying from NFS server qesap-nfs with timeout auto_review:"command .*(mnt/x86_64|mount -t nfs -o ro qesap-nfs|rsync -azr.*/sapinst/).*timed out" added
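As a quick sanity check of that auto_review pattern, here is a minimal sketch of how the regular expression could be exercised against candidate failure messages. The sample log lines are hypothetical and only for illustration; the exact wording of the real openQA output may differ.

```python
import re

# auto_review pattern from #135980, copied verbatim from the entry above.
AUTO_REVIEW_RE = re.compile(
    r'command .*(mnt/x86_64|mount -t nfs -o ro qesap-nfs|rsync -azr.*/sapinst/).*timed out'
)

# Hypothetical sample messages, purely illustrative.
samples = [
    "command 'mount -t nfs -o ro qesap-nfs.qa.suse.cz:/sapdata /mnt' timed out",
    "command 'rsync -azr /mnt/x86_64/sapinst/ /sapinst/' timed out",
    "command 'zypper -n in rsync' timed out",  # should not be picked up
]

for line in samples:
    status = "match" if AUTO_REVIEW_RE.search(line) else "no match"
    print(f"{status}: {line}")
```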
Updated by okurz about 1 year ago
- Related to action #135923: [qe-sap][tools]test fails in hana_install because NFS is too slow - Move NFS to OSD size:M added
Updated by acarvajal 12 months ago
Taken from: https://progress.opensuse.org/issues/110683#note-8
The situation with the NFS servers seems to be related not only to the move of the workers to PRG2 (both NFS servers are now remote to the workers, which was not the case in NUE1), but also to the increase of tap-capable qemu_x86_64-large-mem workers in osd.
On September 26th, a load of close to 200 multi-machine (MM) jobs, of which 117 required NFS access, created a total of 110 established connections on the server, causing the bandwidth to the clients to drop below 1 MB/s. At that bandwidth these jobs reached their timeout (2h15m, already increased from 1h30m, but still below the original 4h30m timeout used before the move to PRG2) and failed. Later in the day, with 45 parallel jobs running, only 24 established connections were observed on the NFS server, and clients were able to transfer data at 2 to 6 MB/s, which allowed those jobs to finish successfully.

We are exploring further changes to the test code (increasing timeouts, limiting parallel access to the NFS server, etc.) while also adding more NFS servers in PRG1 and redistributing the NFS servers per job group. However, if the bottleneck is not the NFS server itself but the network links, this will not help solve the issue.
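To make those numbers concrete, here is a minimal back-of-envelope sketch. The per-client bandwidth figures and the 2h15m timeout are taken from this comment; the payload size is an assumption (installation media in the multi-GB range), chosen purely for illustration.

```python
MB = 10 ** 6
GB = 10 ** 9

payload_bytes = 10 * GB           # assumed media size, illustrative only
timeout_s = 2 * 3600 + 15 * 60    # 2h15m timeout mentioned above

for bandwidth_mb_s in (1, 2, 6):
    transfer_s = payload_bytes / (bandwidth_mb_s * MB)
    verdict = "fits" if transfer_s <= timeout_s else "exceeds timeout"
    print(f"{bandwidth_mb_s} MB/s -> {transfer_s / 3600:.1f} h ({verdict})")
```

With these assumptions, roughly 1 MB/s needs close to 3 hours and overruns the 2h15m timeout, while 2 to 6 MB/s finishes well within it, matching the observed pass/fail split.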
We are collecting our debugging notes and results on the NFS issue in https://confluence.suse.com/display/qasle/NFS+disaster+recovery. It is still a work in progress.
We are also tracking this in https://progress.opensuse.org/issues/135980 and https://progress.opensuse.org/issues/135923, and in https://jira.suse.com/browse/TEAM-8573
Updated by tinita 7 months ago
This ticket was set to High priority but was not updated within the SLO period. Please consider picking up this ticket or setting it to the next lower priority.