action #114814
closed[sle][migration][sle15sp5][HA] try to publish qcows based on sles15sp4 b151.1 for all arches in O.S.D (x86_64/ppc64le/aarch64/s390x)
100%
Updated by llzhao almost 2 years ago
Publishing the qcows passed:
1. ARCH=x86_64
```
# ARCH=x86_64; BUILD=151.1; FLAVOR=Full; apikey=xxx; apisecret=xxx; TEST=xxx
# /usr/share/openqa/script/client isos post --host http://openqa.nue.suse.com --apikey ${apikey} --apisecret ${apisecret} _GROUP_ID=184 DISTRI=sle VERSION=15-SP4 ARCH=${ARCH} FLAVOR=${FLAVOR} BUILD=${BUILD} ISO=SLE-15-SP4-${FLAVOR}-${ARCH}-Build${BUILD}-Media1.iso MIRROR_FTP=ftp://openqa.suse.de/SLE-15-SP4-${FLAVOR}-${ARCH}-Build${BUILD}-Media1 MIRROR_HTTP=http://openqa.suse.de/assets/repo/SLE-15-SP4-${FLAVOR}-${ARCH}-Build${BUILD}-Media1 REPO_0=SLE-15-SP4-${FLAVOR}-${ARCH}-Build${BUILD}-Media1 TEST=${TEST}
WARNING: openqa-client is deprecated and planned to be removed in the future. Please use openqa-cli instead
{
  count => 4,
  failed => [],
  ids => [9158099 .. 9158102],
  scheduled_product_id => 947181,
}
```
The qcows might need to be moved to the fixed directory, FYI:
```
SLE-15-SP4-Full-x86_64-Build151.1-Media1.iso (12 GiB)
sle-15-SP4-x86_64-Build151.1-HA-BV_atmg.qcow2 (1.3 GiB)
sle-15-SP4-x86_64-ha-alpha-alpha-node01_atmg.qcow2 (1.3 GiB)
sle-15-SP4-x86_64-ha-alpha-alpha-node02_atmg.qcow2 (1.3 GiB)
ha_supportserver_upgrade_sle_15-SP4_x86_64_luns_atmg.qcow2 (221 MiB)
ha_supportserver_upgrade_sle_15-SP4_x86_64_atmg.qcow2 (927 MiB)
```
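The WARNING above suggests openqa-cli. A minimal sketch of the same scheduling call with openqa-cli (host, group id, and variables are taken from the command above; the trailing MIRROR_*/REPO_0 variables are omitted for brevity, and the command is only echoed so nothing is posted by accident):

```shell
# Sketch only: echoes the openqa-cli equivalent of the deprecated
# openqa-client call above; drop the leading "echo" to actually post.
apikey=xxx; apisecret=xxx            # placeholders, as in the ticket
ARCH=x86_64; BUILD=151.1; FLAVOR=Full

openqa_cli_post() {
  echo openqa-cli api --host http://openqa.nue.suse.com \
    --apikey "$apikey" --apisecret "$apisecret" -X POST isos \
    _GROUP_ID=184 DISTRI=sle VERSION=15-SP4 ARCH="$ARCH" \
    FLAVOR="$FLAVOR" BUILD="$BUILD" \
    ISO="SLE-15-SP4-${FLAVOR}-${ARCH}-Build${BUILD}-Media1.iso"
}
openqa_cli_post
```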
2. ARCH=s390x
```
# ARCH=s390x; BUILD=151.1; FLAVOR=Online; apikey=xxx; apisecret=xxx; TEST=xxx
# /usr/share/openqa/script/client isos post --host http://openqa.nue.suse.com --apikey ${apikey} --apisecret ${apisecret} _GROUP_ID=184 DISTRI=sle VERSION=15-SP4 ARCH=${ARCH} FLAVOR=${FLAVOR} BUILD=${BUILD} ISO=SLE-15-SP4-${FLAVOR}-${ARCH}-Build${BUILD}-Media1.iso MIRROR_FTP=ftp://openqa.suse.de/SLE-15-SP4-${FLAVOR}-${ARCH}-Build${BUILD}-Media1 MIRROR_HTTP=http://openqa.suse.de/assets/repo/SLE-15-SP4-${FLAVOR}-${ARCH}-Build${BUILD}-Media1 REPO_0=SLE-15-SP4-${FLAVOR}-${ARCH}-Build${BUILD}-Media1 TEST=${TEST}
WARNING: openqa-client is deprecated and planned to be removed in the future. Please use openqa-cli instead
{
  count => 4,
  failed => [],
  ids => [9158570 .. 9158573],
  scheduled_product_id => 947279,
}
```
Tried without a support server, with these settings:
```
ISCSI_LUN_INDEX: '55'
HA_REMOVE_NODE: '1'
ISCSI_SERVER: '10.162.20.4'
NFS_SUPPORT_SHARE: '1c119.qa.suse.de:/srv/nfs/ha_ss'
YAML_SCHEDULE: schedule/ha/bv/s390x_cluster.yaml
```
```
# /usr/share/openqa/script/client isos post --host http://openqa.nue.suse.com --apikey ${apikey} --apisecret ${apisecret} _GROUP_ID=184 DISTRI=sle VERSION=15-SP4 ARCH=${ARCH} FLAVOR=${FLAVOR} BUILD=${BUILD} ISO=SLE-15-SP4-${FLAVOR}-${ARCH}-Build${BUILD}-Media1.iso MIRROR_FTP=ftp://openqa.suse.de/SLE-15-SP4-${FLAVOR}-${ARCH}-Build${BUILD}-Media1 MIRROR_HTTP=http://openqa.suse.de/assets/repo/SLE-15-SP4-${FLAVOR}-${ARCH}-Build${BUILD}-Media1 REPO_0=SLE-15-SP4-${FLAVOR}-${ARCH}-Build${BUILD}-Media1 TEST=ha_alpha_node01_publish_atmg,ha_alpha_node02_publish_atmg _SKIP_CHAINED_DEPS=1 WORKER_CLASS=s390-kvm-sle12
```
https://openqa.suse.de/tests/9260019#dependencies
**NOTE: s390x uses a different way to publish the qcows.
See more info: https://app.slack.com/client/T02863RC2AC/C02CU8X53RC/thread/C02CU8X53RC-1658399626.397459**
3. ARCH=ppc64le
```
# ARCH=ppc64le; BUILD=151.1; FLAVOR=Online; apikey=xxx; apisecret=xxx
linux-kks7:/home/llzhao # /usr/share/openqa/script/client isos post --host http://openqa.nue.suse.com --apikey ${apikey} --apisecret ${apisecret} _GROUP_ID=184 DISTRI=sle VERSION=15-SP4 ARCH=${ARCH} FLAVOR=${FLAVOR} BUILD=${BUILD} ISO=SLE-15-SP4-${FLAVOR}-${ARCH}-Build${BUILD}-Media1.iso MIRROR_FTP=ftp://openqa.suse.de/SLE-15-SP4-${FLAVOR}-${ARCH}-Build${BUILD}-Media1 MIRROR_HTTP=http://openqa.suse.de/assets/repo/SLE-15-SP4-${FLAVOR}-${ARCH}-Build${BUILD}-Media1 REPO_0=SLE-15-SP4-${FLAVOR}-${ARCH}-Build${BUILD}-Media1
WARNING: openqa-client is deprecated and planned to be removed in the future. Please use openqa-cli instead
{
  count => 4,
  failed => [],
  ids => [9158979 .. 9158982],
  scheduled_product_id => 947337,
}
```
https://openqa.suse.de/tests/9158979#dependencies
NOTE: in case of a random failure, retriggering the job should make it pass.
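Retriggering can also be done from the CLI via the jobs restart route; a sketch (the job id below is the ppc64le one from above; the command is only echoed, not executed):

```shell
# Sketch: restart (retrigger) an openQA job by id via the REST API.
# Echo-guarded; drop the leading "echo" to really post.
apikey=xxx; apisecret=xxx            # placeholders

restart_job() {
  echo openqa-cli api --host http://openqa.nue.suse.com \
    --apikey "$apikey" --apisecret "$apisecret" \
    -X POST "jobs/$1/restart"
}
restart_job 9158979
```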
4. ARCH=aarch64
```
# ARCH=aarch64; BUILD=151.1; FLAVOR=Online; apikey=xxx; apisecret=xxx; TEST=xxx
# /usr/share/openqa/script/client isos post --host http://openqa.nue.suse.com --apikey ${apikey} --apisecret ${apisecret} _GROUP_ID=184 DISTRI=sle VERSION=15-SP4 ARCH=${ARCH} FLAVOR=${FLAVOR} BUILD=${BUILD} ISO=SLE-15-SP4-${FLAVOR}-${ARCH}-Build${BUILD}-Media1.iso MIRROR_FTP=ftp://openqa.suse.de/SLE-15-SP4-${FLAVOR}-${ARCH}-Build${BUILD}-Media1 MIRROR_HTTP=http://openqa.suse.de/assets/repo/SLE-15-SP4-${FLAVOR}-${ARCH}-Build${BUILD}-Media1 REPO_0=SLE-15-SP4-${FLAVOR}-${ARCH}-Build${BUILD}-Media1
WARNING: openqa-client is deprecated and planned to be removed in the future. Please use openqa-cli instead
{
  count => 4,
  failed => [],
  ids => [9158983 .. 9158986],
  scheduled_product_id => 947338,
}
```
**NOTE: this run failed and the settings had to be revised.
The revised trigger (with UEFI_PFLASH_VARS=openqa_support_server_sles12sp3.%ARCH%-uefi-vars.qcow2):**
```
# /usr/share/openqa/script/client isos post --host http://openqa.nue.suse.com --apikey ${apikey} --apisecret ${apisecret} _GROUP_ID=184 DISTRI=sle VERSION=15-SP4 ARCH=${ARCH} FLAVOR=${FLAVOR} BUILD=${BUILD} ISO=SLE-15-SP4-${FLAVOR}-${ARCH}-Build${BUILD}-Media1.iso MIRROR_FTP=ftp://openqa.suse.de/SLE-15-SP4-${FLAVOR}-${ARCH}-Build${BUILD}-Media1 MIRROR_HTTP=http://openqa.suse.de/assets/repo/SLE-15-SP4-${FLAVOR}-${ARCH}-Build${BUILD}-Media1 REPO_0=SLE-15-SP4-${FLAVOR}-${ARCH}-Build${BUILD}-Media1 TEST=ha_supportserver_publish_atmg,ha_alpha_node01_publish_atmg,ha_alpha_node02_publish_atmg _SKIP_CHAINED_DEPS=1
WARNING: openqa-client is deprecated and planned to be removed in the future. Please use openqa-cli instead
{
  count => 3,
  failed => [],
  ids => [9170650, 9170651, 9170652],
  scheduled_product_id => 948078,
}
```
Updated by llzhao almost 2 years ago
The s390x YAML, FYI:
# Testing purpose
defaults:
x86_64:
machine: 64bit
priority: -100
settings:
TIMEOUT_SCALE: '4'
VIDEOMODE: text
SCC_URL: 'https://scc.suse.com'
s390x:
#machine: s390x-kvm-sle12
machine: s390x-kvm-sle12-mm
priority: -100
settings:
TIMEOUT_SCALE: '4'
VIDEOMODE: text
SCC_URL: 'https://scc.suse.com'
ppc64le:
machine: ppc64le
priority: -100
settings:
TIMEOUT_SCALE: '4'
VIDEOMODE: text
SCC_URL: 'https://scc.suse.com'
aarch64:
machine: aarch64
priority: -100
settings:
TIMEOUT_SCALE: '4'
VIDEOMODE: text
SCC_URL: 'https://scc.suse.com'
products:
sle-15-SP4-Full-x86_64:
distri: sle
flavor: Full
version: 15-SP4
sle-15-SP4-Online-x86_64:
distri: sle
flavor: Online
version: 15-SP4
sle-15-SP4-Online-ppc64le:
distri: sle
flavor: Online
version: 15-SP4
sle-15-SP4-Full-ppc64le:
distri: sle
flavor: Full
version: 15-SP4
sle-15-SP4-Online-aarch64:
distri: sle
flavor: Online
version: 15-SP4
sle-15-SP4-Full-aarch64:
distri: sle
flavor: Full
version: 15-SP4
sle-15-SP4-Online-s390x:
distri: sle
flavor: Online
version: 15-SP4
sle-15-SP4-Full-s390x:
distri: sle
flavor: Full
version: 15-SP4
scenarios:
x86_64:
    # Publish qcow2 (textmodehdd, supportserver, node01, node02) tests: added postfix '_atmg' for qcows
sle-15-SP4-Full-x86_64: &tests
- create_hdd_ha_textmode_publish_atmg:
testsuite: null
settings:
# ADDONS: ''
DESKTOP: 'textmode'
HDDSIZEGB: '15'
INSTALLONLY: '1'
PUBLISH_HDD_1: '%DISTRI%-%VERSION%-%ARCH%-Build%BUILD%-HA-BV_atmg.qcow2'
PUBLISH_PFLASH_VARS: '%DISTRI%-%VERSION%-%ARCH%-Build%BUILD%-HA-BV-uefi-vars_atmg.qcow2'
SCC_ADDONS: 'ha,base'
          # Note: keep this line commented out, otherwise it will fail on 'zypper in'
# SCC_DEREGISTER: '1'
SCC_REGISTER: 'installation'
VIDEOMODE: 'text'
_HDDMODEL: 'scsi-hd'
- ha_alpha_node01_publish_atmg:
testsuite: null
settings: &ha_alpha_node_publish_common
BOOT_HDD_IMAGE: '1'
CLUSTER_NAME: 'alpha'
DESKTOP: 'textmode'
HA_CLUSTER: '1'
HA_CLUSTER_DRBD: '1'
# Please set USE_LVMLOCKD=1 in case of this failure:
# "Test died: 'zypper -n in lvm2-clvm lvm2-cmirrord' failed with code 104 (ZYPPER_EXIT_INF_CAP_NOT_FOUND)"
USE_LVMLOCKD: '1'
START_AFTER_TEST: 'create_hdd_ha_textmode_publish_atmg'
HDD_1: '%DISTRI%-%VERSION%-%ARCH%-Build%BUILD%-HA-BV_atmg.qcow2'
INSTALLONLY: '1'
NICTYPE: 'tap'
#PARALLEL_WITH: 'ha_supportserver_publish_atmg'
####
ISCSI_LUN_INDEX: '55'
HA_REMOVE_NODE: '1'
ISCSI_SERVER: '10.162.20.4'
NFS_SUPPORT_SHARE: '1c119.qa.suse.de:/srv/nfs/ha_ss'
#YAML_SCHEDULE: 'schedule/ha/bv/s390x_cluster.yaml'
CLUSTER_INFOS: 'alpha:2:5'
####
PUBLISH_HDD_1: '%DISTRI%-%VERSION%-%ARCH%-ha-%CLUSTER_NAME%-%HOSTNAME%_atmg.qcow2'
PUBLISH_PFLASH_VARS: '%DISTRI%-%VERSION%-%ARCH%-ha-%CLUSTER_NAME%-%HOSTNAME%-uefi-vars_atmg.qcow2'
QEMU_DISABLE_SNAPSHOTS: '1'
UEFI_PFLASH_VARS: '%DISTRI%-%VERSION%-%ARCH%-Build%BUILD%-HA-BV-uefi-vars_atmg.qcow2'
#USE_SUPPORT_SERVER: '1'
#WORKER_CLASS: 'tap'
_HDDMODEL: 'scsi-hd'
<<: *ha_alpha_node_publish_common
HOSTNAME: '%CLUSTER_NAME%-node01'
HA_CLUSTER_INIT: 'yes'
- ha_alpha_node02_publish_atmg:
testsuite: null
settings:
<<: *ha_alpha_node_publish_common
HOSTNAME: '%CLUSTER_NAME%-node02'
HA_CLUSTER_INIT: 'no'
HA_CLUSTER_JOIN: '%CLUSTER_NAME%-node01'
PARALLEL_WITH: 'ha_alpha_node01_publish_atmg'
- ha_supportserver_publish_atmg:
testsuite: null
settings:
#+VERSION: '12-SP3'
+VERSION: '15-SP4'
BOOT_HDD_IMAGE: '1'
CLUSTER_INFOS: 'alpha:2:5'
DESKTOP: 'textmode'
HA_CLUSTER: '1'
HDDMODEL: 'scsi-hd'
HDDSIZEGB_2: '6'
HDDVERSION: '12-SP3'
# HDD_1: 'openqa_support_server_sles12sp3.%ARCH%.qcow2'
# The following HDD_1 will introduce failure: https://openqa.suse.de/tests/8922315#step/setup/48
# HDD_1: '%DISTRI%-%VERSION%-%ARCH%-Build%BUILD%-HA-BV_atmg.qcow2'
HDD_1: '%DISTRI%-%VERSION%-%ARCH%-Build%BUILD%-HA-BV_atmg.qcow2'
INSTALLONLY: '1'
NICTYPE: 'tap'
NUMDISKS: '2'
NUMLUNS: '5'
PUBLISH_HDD_1: 'ha_supportserver_upgrade_%DISTRI%_%VERSION%_%ARCH%_atmg.qcow2'
PUBLISH_HDD_2: 'ha_supportserver_upgrade_%DISTRI%_%VERSION%_%ARCH%_luns_atmg.qcow2'
QEMU_DISABLE_SNAPSHOTS: '1'
START_AFTER_TEST: 'create_hdd_ha_textmode_publish_atmg'
SUPPORT_SERVER: '1'
SUPPORT_SERVER_ROLES: 'dhcp,dns,ntp,ssh,iscsi'
UEFI_PFLASH_VARS: 'openqa_support_server_sles12sp3.%ARCH%-uefi-vars.qcow2'
VIDEOMODE: 'text'
VIRTIO_CONSOLE: '0'
#WORKER_CLASS: 'tap'
s390x:
sle-15-SP4-Online-s390x:
*tests
aarch64:
sle-15-SP4-Online-aarch64:
*tests
ppc64le:
sle-15-SP4-Online-ppc64le:
*tests
Updated by llzhao almost 2 years ago
- Status changed from In Progress to Resolved
- % Done changed from 0 to 100
Updated by llzhao almost 2 years ago
YAML file for x86_64/ppc64le/aarch64:
# Testing purpose
defaults:
x86_64:
machine: 64bit
priority: -100
settings:
TIMEOUT_SCALE: '4'
VIDEOMODE: text
SCC_URL: 'https://scc.suse.com'
s390x:
machine: s390x-kvm-sle12
priority: -100
settings:
TIMEOUT_SCALE: '4'
VIDEOMODE: text
SCC_URL: 'https://scc.suse.com'
ppc64le:
machine: ppc64le
priority: -100
settings:
TIMEOUT_SCALE: '4'
VIDEOMODE: text
SCC_URL: 'https://scc.suse.com'
aarch64:
machine: aarch64
priority: -100
settings:
TIMEOUT_SCALE: '4'
VIDEOMODE: text
SCC_URL: 'https://scc.suse.com'
products:
sle-15-SP4-Full-x86_64:
distri: sle
flavor: Full
version: 15-SP4
sle-15-SP4-Online-x86_64:
distri: sle
flavor: Online
version: 15-SP4
sle-15-SP4-Online-ppc64le:
distri: sle
flavor: Online
version: 15-SP4
sle-15-SP4-Full-ppc64le:
distri: sle
flavor: Full
version: 15-SP4
sle-15-SP4-Online-aarch64:
distri: sle
flavor: Online
version: 15-SP4
sle-15-SP4-Full-aarch64:
distri: sle
flavor: Full
version: 15-SP4
sle-15-SP4-Online-s390x:
distri: sle
flavor: Online
version: 15-SP4
sle-15-SP4-Full-s390x:
distri: sle
flavor: Full
version: 15-SP4
scenarios:
x86_64:
    # Publish qcow2 (textmodehdd, supportserver, node01, node02) tests: added postfix '_atmg' for qcows
sle-15-SP4-Full-x86_64: &tests
- create_hdd_ha_textmode_publish_atmg:
testsuite: null
settings:
# ADDONS: ''
DESKTOP: 'textmode'
HDDSIZEGB: '15'
INSTALLONLY: '1'
PUBLISH_HDD_1: '%DISTRI%-%VERSION%-%ARCH%-Build%BUILD%-HA-BV_atmg.qcow2'
PUBLISH_PFLASH_VARS: '%DISTRI%-%VERSION%-%ARCH%-Build%BUILD%-HA-BV-uefi-vars_atmg.qcow2'
SCC_ADDONS: 'ha,base'
# Note: comment out this line otherwise will fail on 'zypper in'
SCC_DEREGISTER: '1'
SCC_REGISTER: 'installation'
VIDEOMODE: 'text'
_HDDMODEL: 'scsi-hd'
- ha_alpha_node01_publish_atmg:
testsuite: null
settings: &ha_alpha_node_publish_common
BOOT_HDD_IMAGE: '1'
CLUSTER_NAME: 'alpha'
DESKTOP: 'textmode'
HA_CLUSTER: '1'
HA_CLUSTER_DRBD: '1'
USE_LVMLOCKD: '1'
START_AFTER_TEST: 'create_hdd_ha_textmode_publish_atmg'
HDD_1: '%DISTRI%-%VERSION%-%ARCH%-Build%BUILD%-HA-BV_atmg.qcow2'
INSTALLONLY: '1'
NICTYPE: 'tap'
PARALLEL_WITH: 'ha_supportserver_publish_atmg'
PUBLISH_HDD_1: '%DISTRI%-%VERSION%-%ARCH%-ha-%CLUSTER_NAME%-%HOSTNAME%_atmg.qcow2'
PUBLISH_PFLASH_VARS: '%DISTRI%-%VERSION%-%ARCH%-ha-%CLUSTER_NAME%-%HOSTNAME%-uefi-vars_atmg.qcow2'
QEMU_DISABLE_SNAPSHOTS: '1'
UEFI_PFLASH_VARS: '%DISTRI%-%VERSION%-%ARCH%-Build%BUILD%-HA-BV-uefi-vars_atmg.qcow2'
USE_SUPPORT_SERVER: '1'
WORKER_CLASS: 'tap'
_HDDMODEL: 'scsi-hd'
<<: *ha_alpha_node_publish_common
HOSTNAME: '%CLUSTER_NAME%-node01'
HA_CLUSTER_INIT: 'yes'
- ha_alpha_node02_publish_atmg:
testsuite: null
settings:
<<: *ha_alpha_node_publish_common
HOSTNAME: '%CLUSTER_NAME%-node02'
HA_CLUSTER_INIT: 'no'
HA_CLUSTER_JOIN: '%CLUSTER_NAME%-node01'
- ha_supportserver_publish_atmg:
testsuite: null
settings:
#+VERSION: '12-SP3'
+VERSION: '15-SP4'
BOOT_HDD_IMAGE: '1'
CLUSTER_INFOS: 'alpha:2:5'
DESKTOP: 'textmode'
HA_CLUSTER: '1'
HDDMODEL: 'scsi-hd'
HDDSIZEGB_2: '6'
HDDVERSION: '12-SP3'
HDD_1: 'openqa_support_server_sles12sp3.%ARCH%.qcow2'
# The following HDD_1 will introduce failure: https://openqa.suse.de/tests/8922315#step/setup/48
# HDD_1: '%DISTRI%-%VERSION%-%ARCH%-Build%BUILD%-HA-BV_atmg.qcow2'
INSTALLONLY: '1'
NICTYPE: 'tap'
NUMDISKS: '2'
NUMLUNS: '5'
PUBLISH_HDD_1: 'ha_supportserver_upgrade_%DISTRI%_%VERSION%_%ARCH%_atmg.qcow2'
PUBLISH_HDD_2: 'ha_supportserver_upgrade_%DISTRI%_%VERSION%_%ARCH%_luns_atmg.qcow2'
QEMU_DISABLE_SNAPSHOTS: '1'
START_AFTER_TEST: 'create_hdd_ha_textmode_publish_atmg'
SUPPORT_SERVER: '1'
SUPPORT_SERVER_ROLES: 'dhcp,dns,ntp,ssh,iscsi'
UEFI_PFLASH_VARS: 'openqa_support_server_sles12sp3.%ARCH%-uefi-vars_atmg.qcow2'
VIDEOMODE: 'text'
VIRTIO_CONSOLE: '0'
WORKER_CLASS: 'tap'
s390x:
sle-15-SP4-Online-s390x:
*tests
aarch64:
sle-15-SP4-Online-aarch64:
*tests
ppc64le:
sle-15-SP4-Online-ppc64le:
*tests
Updated by llzhao almost 2 years ago
- Subject changed from [sle][migration][sle15sp5][HA] try to publish qcows based on sles15sp4 b151.1 for all arches (x86_64/ppc64le/aarch64/s390x) in O.S.D to [sle][migration][sle15sp5][HA] try to publish qcows based on sles15sp4 b151.1 for all arches in O.S.D (x86_64/ppc64le/aarch64/s390x)
Updated by llzhao over 1 year ago
The qcows need to be copied to the fixed directory:
ha_supportserver_upgrade_sle_15-SP4_aarch64_atmg.qcow2
ha_supportserver_upgrade_sle_15-SP4_aarch64_luns_atmg.qcow2
ha_supportserver_upgrade_sle_15-SP4_ppc64le_atmg.qcow2
ha_supportserver_upgrade_sle_15-SP4_ppc64le_luns_atmg.qcow2
ha_supportserver_upgrade_sle_15-SP4_x86_64_atmg.qcow2
ha_supportserver_upgrade_sle_15-SP4_x86_64_luns_atmg.qcow2
sle-15-SP4-aarch64-ha-alpha-alpha-node01-uefi-vars_atmg.qcow2
sle-15-SP4-aarch64-ha-alpha-alpha-node01_atmg.qcow2
sle-15-SP4-aarch64-ha-alpha-alpha-node02-uefi-vars_atmg.qcow2
sle-15-SP4-aarch64-ha-alpha-alpha-node02_atmg.qcow2
sle-15-SP4-ppc64le-ha-alpha-alpha-node01_atmg.qcow2
sle-15-SP4-ppc64le-ha-alpha-alpha-node02_atmg.qcow2
sle-15-SP4-x86_64-ha-alpha-alpha-node01_atmg.qcow2
sle-15-SP4-x86_64-ha-alpha-alpha-node02_atmg.qcow2
sle-15-SP4-s390x-ha-alpha-alpha-node01_atmg.qcow2
sle-15-SP4-s390x-ha-alpha-alpha-node02_atmg.qcow2
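Copying the published images into the fixed directory can be sketched like this (the `/var/lib/openqa/share/factory/hdd/fixed` default below is an assumption based on the usual openQA asset layout; adjust to the real location on O.S.D):

```shell
# Sketch: move the published *_atmg.qcow2 images into openQA's "fixed"
# assets directory so the periodic asset cleanup does not delete them.
# The default path is an assumption (usual openQA layout).
move_to_fixed() {
  factory=${1:-/var/lib/openqa/share/factory/hdd}
  mkdir -p "$factory/fixed"
  for img in "$factory"/*_atmg.qcow2; do
    [ -e "$img" ] || continue        # glob matched nothing
    mv -v "$img" "$factory/fixed/"
  done
}
# e.g.: move_to_fixed /var/lib/openqa/share/factory/hdd
```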
Updated by llzhao over 1 year ago
NOTE:
I made a mistake when publishing the qcows: I did not de-register.
Please see the following steps to publish the qcows.
Updated by llzhao over 1 year ago
For the sle15sp4 qcows, please follow this page:
https://confluence.suse.com/pages/viewpage.action?pageId=806551664
Here are the trigger commands:
Set the global variables:
```
# apikey=xxx; apisecret=xxx
# BUILD=151.1; FLAVOR=Full;
# cmd="/usr/share/openqa/script/client isos post --host http://openqa.nue.suse.com --apikey ${apikey} --apisecret ${apisecret} _GROUP_ID=184 DISTRI=sle VERSION=15-SP4 ARCH=${ARCH} FLAVOR=${FLAVOR} BUILD=${BUILD} ISO=SLE-15-SP4-${FLAVOR}-${ARCH}-Build${BUILD}-Media1.iso MIRROR_FTP=ftp://openqa.suse.de/SLE-15-SP4-${FLAVOR}-${ARCH}-Build${BUILD}-Media1 MIRROR_HTTP=http://openqa.suse.de/assets/repo/SLE-15-SP4-${FLAVOR}-${ARCH}-Build${BUILD}-Media1 REPO_0=SLE-15-SP4-${FLAVOR}-${ARCH}-Build${BUILD}-Media1"
```
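One caveat with the `cmd="..."` pattern above: `${ARCH}` inside a double-quoted assignment expands when `cmd` is assigned, so `ARCH` must already be set at that point, and changing it afterwards does not change `cmd`. A function defers expansion to call time; a sketch (echo-guarded, MIRROR_*/REPO_0 variables shortened away):

```shell
# Sketch: wrap the trigger in a function so ARCH and extra variables
# expand at call time, not at assignment time. Echo-guarded; drop the
# leading "echo" to really post.
apikey=xxx; apisecret=xxx; BUILD=151.1; FLAVOR=Full   # placeholders

trigger() {
  ARCH=$1; shift
  echo /usr/share/openqa/script/client isos post \
    --host http://openqa.nue.suse.com \
    --apikey "$apikey" --apisecret "$apisecret" \
    _GROUP_ID=184 DISTRI=sle VERSION=15-SP4 ARCH="$ARCH" \
    FLAVOR="$FLAVOR" BUILD="$BUILD" "$@"   # extra vars, e.g. QEMURAM=8192
}
trigger ppc64le _SKIP_CHAINED_DEPS=1 QEMURAM=8192
```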
x86_64, generate qcows:
```
# ARCH=x86_64
# $cmd
{
  count => 5,
  failed => [],
  ids => [9265864 .. 9265868],
  scheduled_product_id => 957277,
}
```
http://openqa.suse.de/tests/9265864#dependencies
ppc64le, generate qcows:
```
# ARCH=ppc64le
# $cmd
{
  count => 5,
  failed => [],
  ids => [9266589 .. 9266593],
  scheduled_product_id => 957383,
}
```
But it hit a "'drbdadm status drbd_passive' timed out" issue.
Then rerun: https://openqa.suse.de/tests/9274804#dependencies
But it failed on "filesystem": Test died: cluster_md is not running at sle/tests/ha/filesystem.pm line 59.
Another rerun still failed on "timed out".
Then retriggered with QEMURAM=8192, which passed:
```
# $cmd _SKIP_CHAINED_DEPS=1 QEMURAM=8192
{
  count => 3,
  failed => [],
  ids => [9274935, 9274936, 9274937],
  scheduled_product_id => 958040,
}
```
aarch64, generate qcows:
```
# ARCH=aarch64
# $cmd
{
  count => 5,
  failed => [],
  ids => [9266600 .. 9266604],
  scheduled_product_id => 957384,
}
```
But it failed on "iscsi_client" ("please check ip address response. host name").
Then tried a rerun: https://openqa.suse.de/tests/9274878#
The node02 case had no "power off" module, so the qcow could not be published; "INSTALLONLY: '1'" was then added to the node02 test case.
Retriggering with "INSTALLONLY: '1'" passed:
```
# $cmd
```
http://openqa.suse.de/tests/9299978#dependencies
s390x, generate qcows:
```
# ARCH=s390x
# $cmd
{
  count => 4,
  failed => [],
  ids => [9266647 .. 9266650],
  scheduled_product_id => 957427,
}
```
Hit an "iscsi_client" failure.
Then a rerun passed: https://openqa.suse.de/tests/9274983#dependencies
Updated by llzhao over 1 year ago
Here is the new YAML schedule:
https://openqa.suse.de/admin/job_templates/184?
# Testing purpose
defaults:
x86_64:
machine: 64bit
priority: -100
settings:
TIMEOUT_SCALE: '4'
VIDEOMODE: text
SCC_URL: 'https://scc.suse.com'
s390x:
machine: s390x-kvm-sle12
priority: -100
settings:
TIMEOUT_SCALE: '4'
VIDEOMODE: text
SCC_URL: 'https://scc.suse.com'
ppc64le:
machine: ppc64le
priority: -100
settings:
TIMEOUT_SCALE: '6'
VIDEOMODE: text
SCC_URL: 'https://scc.suse.com'
aarch64:
machine: aarch64
priority: -100
settings:
TIMEOUT_SCALE: '4'
VIDEOMODE: text
SCC_URL: 'https://scc.suse.com'
products:
sle-15-SP4-Full-x86_64:
distri: sle
flavor: Full
version: 15-SP4
sle-15-SP4-Online-x86_64:
distri: sle
flavor: Online
version: 15-SP4
sle-15-SP4-Online-ppc64le:
distri: sle
flavor: Online
version: 15-SP4
sle-15-SP4-Full-ppc64le:
distri: sle
flavor: Full
version: 15-SP4
sle-15-SP4-Online-aarch64:
distri: sle
flavor: Online
version: 15-SP4
sle-15-SP4-Full-aarch64:
distri: sle
flavor: Full
version: 15-SP4
sle-15-SP4-Online-s390x:
distri: sle
flavor: Online
version: 15-SP4
sle-15-SP4-Full-s390x:
distri: sle
flavor: Full
version: 15-SP4
scenarios:
x86_64:
sle-15-SP4-Online-x86_64: &tests
- create_hdd_ha_textmode_publish_node1:
testsuite: create_hdd_ha_textmode
settings:
PUBLISH_HDD_1: '%DISTRI%-%VERSION%-%ARCH%-ha-alpha-alpha-node01_base.qcow2'
HDDMODEL: 'virtio-blk'
- create_hdd_ha_textmode_publish_node2:
testsuite: create_hdd_ha_textmode
settings:
PUBLISH_HDD_1: '%DISTRI%-%VERSION%-%ARCH%-ha-alpha-alpha-node02_base.qcow2'
HDDMODEL: 'virtio-blk'
- ha_supportserver_publish:
testsuite: ha_supportserver_alpha
settings:
START_AFTER_TEST: create_hdd_ha_textmode_publish_node1,create_hdd_ha_textmode_publish_node2
PUBLISH_HDD_1: '%DISTRI%-%VERSION%-%ARCH%-ha-alpha-supportserver.qcow2'
PUBLISH_HDD_2: '%DISTRI%-%VERSION%-%ARCH%-ha-alpha-supportserver-luns.qcow2'
HDDMODEL: 'scsi-hd'
INSTALLONLY: '1'
- ha_alpha_node01_publish:
testsuite: ha_alpha_node01
settings:
SCC_URL: 'https://scc.suse.com'
PUBLISH_HDD_1: '%DISTRI%-%VERSION%-%ARCH%-ha-alpha-alpha-node01.qcow2'
HDD_1: '%DISTRI%-%VERSION%-%ARCH%-ha-alpha-alpha-node01_base.qcow2'
HDDMODEL: 'virtio-blk'
SCC_DEREGISTER: '1'
PARALLEL_WITH: ha_supportserver_publish
INSTALLONLY: '1'
USE_LVMLOCKD: '1'
- ha_alpha_node02_publish:
testsuite: ha_alpha_node02
settings:
SCC_URL: 'https://scc.suse.com'
PUBLISH_HDD_1: '%DISTRI%-%VERSION%-%ARCH%-ha-alpha-alpha-node02.qcow2'
HDD_1: '%DISTRI%-%VERSION%-%ARCH%-ha-alpha-alpha-node02_base.qcow2'
SCC_DEREGISTER: '1'
HDDMODEL: 'virtio-blk'
PARALLEL_WITH: ha_supportserver_publish
USE_LVMLOCKD: '1'
INSTALLONLY: '1'
s390x:
sle-15-SP4-Online-s390x:
- create_hdd_ha_textmode_publish_node1:
testsuite: create_hdd_ha_textmode
settings:
PUBLISH_HDD_1: '%DISTRI%-%VERSION%-%ARCH%-ha-alpha-alpha-node01_base.qcow2'
BACKEND: 'svirt'
ISCSI_LUN_INDEX: '55'
CLUSTER_INFOS: alpha:2:5
- create_hdd_ha_textmode_publish_node2:
testsuite: create_hdd_ha_textmode
settings:
PUBLISH_HDD_1: '%DISTRI%-%VERSION%-%ARCH%-ha-alpha-alpha-node02_base.qcow2'
BACKEND: 'svirt'
ISCSI_LUN_INDEX: '55'
CLUSTER_INFOS: alpha:2:5
- ha_zalpha_node01_publish:
testsuite: ha_zalpha_node01
settings:
BACKEND: 'svirt'
CLUSTER_INFOS: alpha:2:5
CLUSTER_NAME: alpha
ISCSI_LUN_INDEX: '55'
INSTALLONLY: '1'
PUBLISH_HDD_1: '%DISTRI%-%VERSION%-%ARCH%-ha-alpha-alpha-node01.qcow2'
HDD_1: '%DISTRI%-%VERSION%-%ARCH%-ha-alpha-alpha-node01_base.qcow2'
START_AFTER_TEST: create_hdd_ha_textmode_publish_node1, create_hdd_ha_textmode_publish_node2
SCC_DEREGISTER: '1'
USE_LVMLOCKD: '1'
- ha_zalpha_node02_publish:
testsuite: ha_zalpha_node02
settings:
BACKEND: 'svirt'
CLUSTER_INFOS: alpha:2:5
CLUSTER_NAME: alpha
ISCSI_LUN_INDEX: '55'
INSTALLONLY: '1'
PUBLISH_HDD_1: '%DISTRI%-%VERSION%-%ARCH%-ha-alpha-alpha-node02.qcow2'
HDD_1: '%DISTRI%-%VERSION%-%ARCH%-ha-alpha-alpha-node02_base.qcow2'
USE_LVMLOCKD: '1'
SCC_DEREGISTER: '1'
PARALLEL_WITH: ha_zalpha_node01_publish
aarch64:
sle-15-SP4-Online-aarch64:
- create_hdd_ha_textmode_publish_node1:
testsuite: create_hdd_ha_textmode
settings:
PUBLISH_HDD_1: '%DISTRI%-%VERSION%-%ARCH%-ha-alpha-alpha-node01_base.qcow2'
- create_hdd_ha_textmode_publish_node2:
testsuite: create_hdd_ha_textmode
machine: aarch64
settings:
PUBLISH_HDD_1: '%DISTRI%-%VERSION%-%ARCH%-ha-alpha-alpha-node02_base.qcow2'
- ha_supportserver_publish:
testsuite: ha_supportserver_alpha
settings:
START_AFTER_TEST: create_hdd_ha_textmode_publish_node1,create_hdd_ha_textmode_publish_node2
PUBLISH_HDD_1: '%DISTRI%-%VERSION%-%ARCH%-ha-alpha-supportserver.qcow2'
PUBLISH_HDD_2: '%DISTRI%-%VERSION%-%ARCH%-ha-alpha-supportserver-luns.qcow2'
INSTALLONLY: '1'
- ha_alpha_node01_publish:
testsuite: ha_alpha_node01
settings:
PUBLISH_HDD_1: '%DISTRI%-%VERSION%-%ARCH%-ha-alpha-alpha-node01.qcow2'
PUBLISH_PFLASH_VARS: '%DISTRI%-%VERSION%-%ARCH%-ha-%CLUSTER_NAME%-%HOSTNAME%-uefi-vars.qcow2'
HDD_1: '%DISTRI%-%VERSION%-%ARCH%-ha-alpha-alpha-node01_base.qcow2'
SCC_DEREGISTER: '1'
PARALLEL_WITH: ha_supportserver_publish
INSTALLONLY: '1'
USE_LVMLOCKD: '1'
- ha_alpha_node02_publish:
testsuite: ha_alpha_node02
settings:
PUBLISH_HDD_1: '%DISTRI%-%VERSION%-%ARCH%-ha-alpha-alpha-node02.qcow2'
PUBLISH_PFLASH_VARS: '%DISTRI%-%VERSION%-%ARCH%-ha-%CLUSTER_NAME%-%HOSTNAME%-uefi-vars.qcow2'
HDD_1: '%DISTRI%-%VERSION%-%ARCH%-ha-alpha-alpha-node02_base.qcow2'
SCC_DEREGISTER: '1'
PARALLEL_WITH: ha_supportserver_publish
INSTALLONLY: '1'
USE_LVMLOCKD: '1'
ppc64le:
sle-15-SP4-Online-ppc64le:
- create_hdd_ha_textmode_publish_node1:
testsuite: create_hdd_ha_textmode
settings:
PUBLISH_HDD_1: '%DISTRI%-%VERSION%-%ARCH%-ha-alpha-alpha-node01_base.qcow2'
HDDMODEL: 'virtio-blk'
- create_hdd_ha_textmode_publish_node2:
testsuite: create_hdd_ha_textmode
settings:
PUBLISH_HDD_1: '%DISTRI%-%VERSION%-%ARCH%-ha-alpha-alpha-node02_base.qcow2'
HDDMODEL: 'virtio-blk'
- ha_supportserver_publish:
testsuite: ha_supportserver_alpha
settings:
START_AFTER_TEST: create_hdd_ha_textmode_publish_node1,create_hdd_ha_textmode_publish_node2
PUBLISH_HDD_1: '%DISTRI%-%VERSION%-%ARCH%-ha-alpha-supportserver.qcow2'
PUBLISH_HDD_2: '%DISTRI%-%VERSION%-%ARCH%-ha-alpha-supportserver-luns.qcow2'
HDDMODEL: 'scsi-hd'
INSTALLONLY: '1'
- ha_alpha_node01_publish:
testsuite: ha_alpha_node01
settings:
PUBLISH_HDD_1: '%DISTRI%-%VERSION%-%ARCH%-ha-alpha-alpha-node01.qcow2'
HDD_1: '%DISTRI%-%VERSION%-%ARCH%-ha-alpha-alpha-node01_base.qcow2'
HDDMODEL: 'virtio-blk'
SCC_DEREGISTER: '1'
PARALLEL_WITH: ha_supportserver_publish
INSTALLONLY: '1'
USE_LVMLOCKD: '1'
- ha_alpha_node02_publish:
testsuite: ha_alpha_node02
settings:
PUBLISH_HDD_1: '%DISTRI%-%VERSION%-%ARCH%-ha-alpha-alpha-node02.qcow2'
HDD_1: '%DISTRI%-%VERSION%-%ARCH%-ha-alpha-alpha-node02_base.qcow2'
SCC_DEREGISTER: '1'
HDDMODEL: 'virtio-blk'
PARALLEL_WITH: ha_supportserver_publish
USE_LVMLOCKD: '1'
INSTALLONLY: '1'
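Besides editing the schedule on the /admin/job_templates/184 page linked above, a YAML schedule can be posted through the job_templates_scheduling API route; a sketch (echo-guarded; `schedule.yaml` is a placeholder file holding the document above, and `preview=1` only validates without saving):

```shell
# Sketch: validate/post a YAML job-template schedule for group 184.
# Echo-guarded; drop the leading "echo" to really post.
apikey=xxx; apisecret=xxx            # placeholders

post_schedule() {
  echo openqa-cli api --host http://openqa.nue.suse.com \
    --apikey "$apikey" --apisecret "$apisecret" \
    -X POST job_templates_scheduling/184 \
    schema=JobTemplates-01.yaml preview=1 \
    template="$(cat "$1")"
}
# e.g.: post_schedule schedule.yaml
```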
Updated by llzhao over 1 year ago
Pasted the contents of "https://app.slack.com/client/T02863RC2AC/C02CU8X53RC/thread/C02CU8X53RC-1658399626.397459" here:
Lili Zhao
Jul 21st at 6:33 PM
Hi experts, recently I am creating/publishing the qcows for "Migration: HA" (before openQA does migration+verify we need to create the qcows).
It succeeded on 3 arches (x86_64/ppc64le/aarch64), for example http://openqa.suse.de/tests/9158099#dependencies
But it failed on s390x; based on my investigation it might not need the support server qcows on s390x, am I right?
Also it seems the "Settings" for this arch might differ from the other arches.
I revised the "Settings" a lot but still cannot find the root cause. Any ideas?
FYI (s390x): http://openqa.suse.de/tests/9186553#dependencies
Alvaro Carvajal
1 month ago
hello Lili.
Alvaro Carvajal
1 month ago
Indeed s390x is different than the other architectures with regards to the support server
Alvaro Carvajal
1 month ago
for x86_64/ppc64le/aarch64 the tests use the qemu backend, so tap interfaces and a virtual private network are possible
Alvaro Carvajal
1 month ago
in the case of s390x, the tests use the svirt backend (workers of class s390x-kvm-sle12 IIRC), which boot with a specific IP address in the qa network, so we cannot create a support server there without impacting the infrastructure
Alvaro Carvajal
1 month ago
workaround in this case was to use an external iSCSI server, rely on the qa network DNS and also use an external NFS share
Alvaro Carvajal
1 month ago
zalpha nodes in the HA group are using these settings for iSCSI:
"ISCSI_LUN_INDEX" : "0",
"ISCSI_SERVER" : "10.162.20.4",
but DO NOT use the same for migrations, otherwise the sbd device gets overwritten and the qcow2 image generated becomes unusable
Alvaro Carvajal
1 month ago
we're keeping track of which LUNs to use in https://gitlab.suse.de/hsehic/qa-css-docs/-/blob/master/ha/openqa.md#information-specific-to-s390x-tests
Alvaro Carvajal
1 month ago
I guess we will need to add more LUNs to the iSCSI server, unless we can reuse the 12-SP3 one
Lili Zhao
1 month ago
thanks for the info, currently I can not understand all the info
Alvaro Carvajal
1 month ago
long story short: we use "alpha" cluster to generate qcow2 images for migrations on x86_64/ppc64le/aarch64, and use "zalpha" cluster to generate qcow2 images for migrations on s390x
Alvaro Carvajal
1 month ago
> thanks for the info, currently I can not understand all the info
Feel free to ping me with any question. Start by reading the gitlab doc I linked above, and then take a look at the jobs linked in https://openqa.suse.de/tests/8751263#dependencies
Lili Zhao
1 month ago
Thank you so much :slightly_smiling_face:
Lili Zhao
1 month ago
I will check.
Lili Zhao
1 month ago
and need time
Alvaro Carvajal
1 month ago
BTW, just checked available space in the iSCSI server (sam.qa.suse.de / 10.162.20.4) and we can create more LUNs. I will do that and update the gitlab doc
Alvaro Carvajal
1 month ago
FYI, I added 10 more LUNs to the iSCSI server, so feel free to use LUNs 60-64 or LUNs 65-69 for s390x / 15-SP4
Alvaro Carvajal
1 month ago
it's possible that with the change the list in https://openqa.suse.de/tests/8751263#step/iscsi_client/14 will be ordered differently .... you may need to update the ha/iscsi_client module so it cycles through the entries until it is able to match 0-openqa (I really expected that with a name like 0-openqa it will be always first ... but apparently the iSCSI YaST module does not order this alphanumerically)
Lili Zhao
27 days ago
I tried according to your info: https://openqa.suse.de/tests/9222919#dependencies The 2 qcows can be created
Lili Zhao
27 days ago
ISCSI_LUN_INDEX 55
Lili Zhao
27 days ago
But I did not hit the issue "you may need to update the ha/iscsi_client module so it cycles through the entries until it is able to match 0-openqa"
Lili Zhao
27 days ago
https://openqa.suse.de/tests/9222919#step/iscsi_client/14 works well without code change
Alvaro Carvajal
27 days ago
oh, wonderful! I thought it would come back unordered, but 0-openqa is there at the top :slightly_smiling_face:
Alvaro Carvajal
27 days ago
> ISCSI_LUN_INDEX 55
Oh, those were used by the 12-SP3 qcow2 images. If 12-SP3 to 15-SP5 HA migrations are to be tested, probably the old 12-SP3 qcow2 images will not work. I'll update the gitlab document to reflect this
Alvaro Carvajal
27 days ago
this of course only applies to s390x
Lili Zhao
26 days ago
Thanks, should I take any actions to avoid this conflict?
Lili Zhao
26 days ago
I noticed "| 55 to 59 | 55 | 12-SP3 migration" was deleted
Lili Zhao
26 days ago
I chose 55 as I noticed these 2 lines:
Lili Zhao
26 days ago
| 60 to 64 | 55 | Available |
| 65 to 69 | 55 | Available |
Alvaro Carvajal
26 days ago
yes, my bad :disappointed:
Alvaro Carvajal
26 days ago
I will fix the documentation
Alvaro Carvajal
26 days ago
> Thanks, should I take any actions for avoid this conflict?
None needed ATM as I don't believe migration from 12-SP3 to 15-SP5 is going to be supported
Alvaro Carvajal
26 days ago
but if we intend to test SLES+HA migration from 12-SP3 to anything in s390x, then the 12-SP3 qcow2 images will need to be recreated
Alvaro Carvajal
26 days ago
I also have an evil idea in mind, but let's cross that bridge if necessary
Alvaro Carvajal
26 days ago
(evil idea: I created the new LUNs with cp -i lun-5{5-9} lun-6{5-9} so there are unintentional backups of the old block devices)
Lee Martin
22 days ago
@Lili Zhao
&
@Alvaro Carvajal
To my mind 12 SP3 should not be relevant for testing anymore since it is EOL.
However it is still in the PRD, so I'm querying PM to get confirmation: Please see https://confluence.suse.com/display/SUSELinuxEnterpriseServer15SP5/%5BSLE+15+SP5%5D+PRD+all-in-one?focusedCommentId=1052180540#comment-1052180540