action #135938

[qe-sap] test fails in hana_install copying from NFS server qesap-nfs.qa.suse.cz with timeout

Added by okurz about 1 year ago. Updated 5 months ago.

Status:
In Progress
Priority:
Normal
Assignee:
Category:
Infrastructure
Target version:
-
Start date:
2023-09-18
Due date:
% Done:
0%
Estimated time:
Difficulty:

Description

Observation

openQA test in scenario sle-15-SP4-SAP-DVD-Updates-x86_64-qam_sles4sap_wmp_hana_node02@64bit-sap-qam fails in hana_install

Test suite description

The base test suite is used for job templates defined in YAML documents. It has no settings of its own.

Reproducible

Fails since (at least) Build 20230915-1

Expected result

Last good: 20230913-1 (or more recent)

Further details

Always latest result in this scenario: latest


Related issues 2 (1 open, 1 closed)

Related to openQA Tests - action #135923: [qe-sap][tools] test fails in hana_install because NFS is too slow - Move NFS to OSD size:M (Resolved, okurz, 2023-09-18)

Copied to openQA auto review - openqa-force-result #135980: [qe-sap] test fails in hana_install copying from NFS server qesap-nfs with timeout auto_review:"command .*(mnt/x86_64|mount -t nfs -o ro qesap-nfs|rsync -azr.*/sapinst/).*timed out" (New)
Actions #1

Updated by okurz about 1 year ago

  • Copied to openqa-force-result #135980: [qe-sap] test fails in hana_install copying from NFS server qesap-nfs with timeout auto_review:"command .*(mnt/x86_64|mount -t nfs -o ro qesap-nfs|rsync -azr.*/sapinst/).*timed out" added
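
For reference, here is a minimal sketch (in Python, separate from the actual test code) of how the auto_review expression above can be checked against candidate failure messages; the sample messages below are hypothetical illustrations, not taken from a real autoinst-log.txt.

    import re

    # auto_review regular expression carried over to #135980 (see above)
    PATTERN = re.compile(
        r'command .*(mnt/x86_64|mount -t nfs -o ro qesap-nfs|rsync -azr.*/sapinst/).*timed out'
    )

    # Hypothetical failure messages for illustration only; the real messages
    # come from the failed module's output in the openQA job logs.
    samples = [
        "command 'mount -t nfs -o ro qesap-nfs.qa.suse.cz:/sapinst /sapinst' timed out",
        "command 'rsync -azr /sapinst/media/ /mnt/x86_64/' timed out",
        "command 'zypper -n in rsync' failed",  # should not match
    ]

    for msg in samples:
        print("match" if PATTERN.search(msg) else "no match", "-", msg)
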
Actions #2

Updated by okurz about 1 year ago

  • Related to action #135923: [qe-sap][tools]test fails in hana_install because NFS is too slow - Move NFS to OSD size:M added
Actions #3

Updated by jstehlik 12 months ago

  • Assignee set to acarvajal

Assigning to Alvaro as SAP squad PO

Actions #4

Updated by acarvajal 12 months ago

Taken from: https://progress.opensuse.org/issues/110683#note-8

The situation with the NFS servers seems to be related not only to the move of the workers to PRG2 (both NFS servers are now remote to the workers, which was not the case in NUE1), but also to the increase of tap-capable qemu_x86_64-large-mem workers in osd.

On September 26th, a load of close to 200 multi-machine (MM) jobs, of which 117 required NFS access, created a total of 110 established connections on the server, dropping client bandwidth below 1 MB/s. At that rate the jobs reached their timeout (2h15m, already increased from 1h30m, but still below the original 4h30m timeout in place before the move to PRG2) and failed. Later in the day, with 45 parallel jobs running, only 24 established connections were observed on the NFS server, and clients could transfer data at 2 to 6 MB/s, which allowed those jobs to finish successfully.

We are exploring further changes to the test code (increasing timeouts, limiting parallel access to the NFS server, etc.) while also adding more NFS servers in PRG1 and redistributing the NFS servers per job group, but if the bottleneck is not in the NFS server itself but in the network links, this will not solve the issue.
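
As a rough illustration of the constraint described above (the payload size below is an assumed figure for the sake of the calculation, not a measured value; the actual size of the media copied from the NFS share may differ):

    # Back-of-the-envelope check of copy time vs. the 2h15m module timeout.
    payload_gb = 15                      # assumption: size of the copied media in GB
    timeout_s = 2 * 3600 + 15 * 60       # current 2h15m timeout

    min_rate = payload_gb * 1024 / timeout_s
    print(f"minimum sustained rate to fit the timeout: ~{min_rate:.1f} MB/s")

    for rate in (0.9, 2, 6):             # client rates around the observed values, in MB/s
        hours = payload_gb * 1024 / rate / 3600
        status = "exceeds" if hours * 3600 > timeout_s else "fits within"
        print(f"{rate} MB/s -> ~{hours:.1f} h copy time ({status} the timeout)")
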

We are collecting our debugging data and results for the NFS issue at https://confluence.suse.com/display/qasle/NFS+disaster+recovery. It is still a work in progress.

We are also tracking this in https://progress.opensuse.org/issues/135980, https://progress.opensuse.org/issues/135923, and https://jira.suse.com/browse/TEAM-8573.

Actions #5

Updated by acarvajal 12 months ago

  • Category changed from Bugs in existing tests to Infrastructure
  • Status changed from New to In Progress
  • Priority changed from Urgent to High
Actions #6

Updated by tinita 7 months ago

This ticket was set to High priority but was not updated within the SLO period. Please consider picking up this ticket or setting it to the next lower priority.

Actions #7

Updated by slo-gin 5 months ago

  • Priority changed from High to Normal

The ticket will be set to the next lower priority, Normal.
