action #113522

closed

[sle][migration][sle15sp5][HA] try to run the "migration", "verify" cases in O.S.D (based on sle15sp4)

Added by llzhao almost 2 years ago. Updated over 1 year ago.

Status: Resolved
Priority: Normal
Assignee: -
Category: Spike/Research
Target version: -
Start date: 2022-07-13
Due date: -
% Done: 100%
Estimated time: 40.00 h
Difficulty: -

Description

After poo#112664 - [sle][migration][sle15sp5][HA] Publish the "supportserver", "node01" and "node02" qcow2 in O.S.D,
then try the migration and verify cases in O.S.D.

Actions #1

Updated by llzhao almost 2 years ago

  • Status changed from New to In Progress
Actions #2

Updated by llzhao almost 2 years ago

Try "migration", "verify" cases in O.S.D on 15sp4 GM

  1. Publish qcows of “textmodehdd, supprotserver, node01, node02“ using BUILD=151.1
  2. Do migration: need to ignore this step as it doesn't support migration from 15sp4 to 15sp4
  3. Verify

Issues hit (copied from email):

1. During "verify" I hit issues:
I am trying the HA migration openQA test case on 15sp4 and hit one issue; I tried a lot but really have no idea how to fix it.

Please check it and give some suggestions. Thanks.
Steps:

1. Do publish (PASSED): publish the qcows needed on 15sp4 (build 151.1)
http://openqa.suse.de/tests/9056250#dependencies

2. Do migration (ignored): this step can be ignored as there is no 15sp5 currently

3. Do verify (FAILED):
http://openqa.suse.de/tests/9101605#step/check_after_reboot/25
It failed because 'systemctl --no-pager status pacemaker' failed.

I checked "check_after_reboot-journal.log" and also found other errors [1].
I am not familiar with this area; please help me out.

4. Checked the "Settings", just in case

The "Settings" used when publishing the qcows, FYI, in case I missed anything:

    HA_CLUSTER     1
    HA_CLUSTER_DRBD     1
    HA_CLUSTER_INIT     yes

    USE_LVMLOCKD     1
    USE_SUPPORT_SERVER     1

The "Settings" when doing verification

    USE_SUPPORT_SERVER     1


FYI: [1]
...
Jul 10 21:37:08.160138 alpha-node01 corosync[1123]: Starting Corosync Cluster Engine (corosync): [FAILED]
Jul 10 21:37:08.160696 alpha-node01 systemd[1]: corosync.service: Control process exited, code=exited, status=1/FAILURE
Jul 10 21:37:08.160847 alpha-node01 systemd[1]: corosync.service: Failed with result 'exit-code'.
Jul 10 21:37:08.161434 alpha-node01 systemd[1]: Failed to start Corosync Cluster Engine.
Jul 10 21:37:08.162198 alpha-node01 systemd[1]: Dependency failed for Pacemaker High Availability Cluster Manager.
Jul 10 21:37:08.162453 alpha-node01 systemd[1]: pacemaker.service: Job pacemaker.service/start failed with result 'dependency'.
Jul 10 21:37:08.169699 alpha-node01 iscsiadm[1149]: iscsiadm: Could not get list of targets from firmware. (err 21)
Jul 10 21:37:08.171273 alpha-node01 systemd[1]: Finished Login and scanning of iSCSI devices.
Jul 10 21:37:08.183940 alpha-node01 systemd[1]: Starting Shared-storage based fencing daemon...
Jul 10 21:37:08.191311 alpha-node01 rsyslogd[1146]: imuxsock: Acquired UNIX socket '/run/systemd/journal/syslog' (fd 3) from systemd.  [v8.2106.0]
Jul 10 21:37:08.191697 alpha-node01 rsyslogd[1146]: [origin software="rsyslogd" swVersion="8.2106.0" x-pid="1146" x-info="https://www.rsyslog.com"] start
Jul 10 21:37:08.191914 alpha-node01 systemd[1]: Started System Logging Service.
Jul 10 21:37:08.206506 alpha-node01 kernel: Loading iSCSI transport class v2.0-870.
Jul 10 21:37:08.233700 alpha-node01 systemd[1]: Started Open-iSCSI.
Jul 10 21:37:08.235037 alpha-node01 systemd[1]: Reached target Preparation for Remote File Systems.
Jul 10 21:37:08.239163 alpha-node01 systemd[1]: Reached target Remote File Systems.
Jul 10 21:37:08.245391 alpha-node01 systemd[1]: Starting Permit User Sessions...
Jul 10 21:37:08.246504 alpha-node01 kernel: iscsi: registered transport (tcp)
Jul 10 21:37:08.249360 alpha-node01 hawk-apiserver[1138]: level=info msg="Reading /etc/hawk/server.json..."
Jul 10 21:37:08.250543 alpha-node01 hawk-apiserver[1138]: level=info msg="Listening to https://0.0.0.0:7630\n"
Jul 10 21:37:08.261622 alpha-node01 hawk-apiserver[1138]: level=warning msg="Failed to connect to Pacemaker: -107: ENOTCONN Transport endpoint is not connected"...
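
For debugging a failure like the one above from the test code, a minimal sketch using standard openQA testapi calls might look like this (the log path and exact commands are illustrative, not taken from this ticket):

    # Sketch: record pacemaker/corosync state and upload the boot journal
    script_run('systemctl --no-pager status pacemaker corosync');
    script_run('journalctl -b --no-pager > /tmp/boot-journal.log');
    upload_logs('/tmp/boot-journal.log');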

How to fix (workaround): the workaround is "set a fixed IP + restart pacemaker". The journal above shows corosync failing to start, which makes pacemaker's dependency fail; the likely cause is that the node has no usable IP yet when the services start, so assigning a fixed IP and restarting pacemaker brings the cluster up:

- Settings for IP: '10.0.2.15/16' (i.e. 10.0.2.15 for node01 and 10.0.2.16 for node02)
- At the end of tests/boot/boot_to_desktop.pm, add the workaround:
     select_console 'root-console';
     my $ip = get_var('IP');
     my $netdev = get_var('NETDEV', 'eth0');
     assert_script_run("ip addr add $ip/24 dev $netdev");
     assert_script_run("systemctl restart pacemaker");

Also checked the original passed runs in the HA job group:
http://openqa.suse.de/tests/8751179


migration:
node01:
  http://openqa.suse.de/tests/8751157/logfile?filename=serial_terminal.txt
  eth0: 10.0.2.15 fec0::d9e6:8ddb:f3c5:2744
  eth0: 10.0.2.15 fec0::fc77:7ec1:b587:3dff
  eth0: 10.0.2.15 fec0::20c2:4178:cfa:986d

node02:
  eth0: 10.0.2.15 fec0::b8ad:da5f:da64:7558
  eth0: 10.0.2.15 fec0::9ce3:9034:8c09:dde6
  eth0: 10.0.2.15 fec0::ab4a:8ace:cd97:6b54

verify after migration:
  node01: eth0: 10.0.2.18 fe80::5054:ff:fe12:24b
  node02: eth0: 10.0.2.17 fe80::5054:ff:fe12:6fb

(Note: during the migration runs both nodes report 10.0.2.15, which is the QEMU user-mode networking default, while in the passing verify runs each node gets its own address; this matches the per-node fixed-IP workaround above.)

Actions #4

Updated by llzhao almost 2 years ago

Steps:

1. On 15sp4, publish qcows (yaml file):

# Testing purpose
defaults:
  x86_64:
    machine: 64bit
    priority: -100
    settings:
      TIMEOUT_SCALE: '4'
      VIDEOMODE: text
products:
  sle-15-SP4-Full-x86_64:
    distri: sle
    flavor: Full
    version: 15-SP4
scenarios:
  x86_64:
    # Publish qcow2 (textmodehdd, supportserver, node01, node02) tests: suffix '_mig' added to qcow names
    sle-15-SP4-Full-x86_64:
    - create_hdd_ha_textmode_publish_mig:
        testsuite: null
        settings:
          #ADDONS: ''
          DESKTOP: 'textmode'
          HDDSIZEGB: '15'
          INSTALLONLY: '1'
          PUBLISH_HDD_1: '%DISTRI%-%VERSION%-%ARCH%-Build%BUILD%-HA-BV-mig.qcow2'
          PUBLISH_PFLASH_VARS: '%DISTRI%-%VERSION%-%ARCH%-Build%BUILD%-HA-BV-mig-uefi-vars.qcow2'
          SCC_ADDONS: 'ha,base'
          # Note: keep this line commented out, otherwise it will fail on "zypper in"
          #SCC_DEREGISTER: '1'
          SCC_REGISTER: 'installation'
          VIDEOMODE: 'text'
          _HDDMODEL: 'scsi-hd'
    - ha_alpha_node01_publish_mig:
        testsuite: null
        settings:  
          BOOT_HDD_IMAGE: '1'
          CLUSTER_NAME: 'alpha'
          DESKTOP: 'textmode'
          HA_CLUSTER: '1'
          HA_CLUSTER_DRBD: '1'
          HA_CLUSTER_INIT: 'yes'
          START_AFTER_TEST: 'create_hdd_ha_textmode_publish_mig'
          HDD_1: '%DISTRI%-%VERSION%-%ARCH%-Build%BUILD%-HA-BV-mig.qcow2'
          HOSTNAME: '%CLUSTER_NAME%-node01'
          INSTALLONLY: '1'
          NICTYPE: 'tap'
          PARALLEL_WITH: 'ha_supportserver_publish_mig'
          PUBLISH_HDD_1: '%DISTRI%-%VERSION%-%ARCH%-ha-%CLUSTER_NAME%-%HOSTNAME%-mig.qcow2'
          PUBLISH_PFLASH_VARS: '%DISTRI%-%VERSION%-%ARCH%-ha-%CLUSTER_NAME%-%HOSTNAME%-mig-uefi-vars.qcow2'
          QEMU_DISABLE_SNAPSHOTS: '1'
          UEFI_PFLASH_VARS: '%DISTRI%-%VERSION%-%ARCH%-Build%BUILD%-HA-BV-mig-uefi-vars.qcow2'
          USE_LVMLOCKD: '1'
          USE_SUPPORT_SERVER: '1'
          WORKER_CLASS: 'tap'
          _HDDMODEL: 'scsi-hd'
    - ha_alpha_node02_publish_mig:
        testsuite: null
        settings:       
          BOOT_HDD_IMAGE: '1'
          CLUSTER_NAME: 'alpha'
          DESKTOP: 'textmode'
          HA_CLUSTER: '1'
          HA_CLUSTER_DRBD: '1'
          HA_CLUSTER_JOIN: '%CLUSTER_NAME%-node01'
          START_AFTER_TEST: 'create_hdd_ha_textmode_publish_mig'
          HDD_1: '%DISTRI%-%VERSION%-%ARCH%-Build%BUILD%-HA-BV-mig.qcow2'
          HOSTNAME: '%CLUSTER_NAME%-node02'
          INSTALLONLY: '1'
          NICTYPE: 'tap'
          PARALLEL_WITH: 'ha_supportserver_publish_mig'
          PUBLISH_HDD_1: '%DISTRI%-%VERSION%-%ARCH%-ha-%CLUSTER_NAME%-%HOSTNAME%-mig.qcow2'
          PUBLISH_PFLASH_VARS: '%DISTRI%-%VERSION%-%ARCH%-ha-%CLUSTER_NAME%-%HOSTNAME%-mig-uefi-vars.qcow2'
          QEMU_DISABLE_SNAPSHOTS: '1'
          UEFI_PFLASH_VARS: '%DISTRI%-%VERSION%-%ARCH%-Build%BUILD%-HA-BV-mig-uefi-vars.qcow2'
          USE_LVMLOCKD: '1'
          USE_SUPPORT_SERVER: '1'
          WORKER_CLASS: 'tap'
          _HDDMODEL: 'scsi-hd'
    - ha_supportserver_publish_mig:
        testsuite: null
        settings:         
          +VERSION: '12-SP3'
          BOOT_HDD_IMAGE: '1'
          CLUSTER_INFOS: 'alpha:2:5'
          DESKTOP: 'textmode'
          HA_CLUSTER: '1'
          HDDMODEL: 'scsi-hd'
          HDDSIZEGB_2: '6'
          HDDVERSION: '12-SP3'
          HDD_1: 'openqa_support_server_sles12sp3.%ARCH%.qcow2'
          # The following HDD_1 will introduce failure: https://openqa.suse.de/tests/8922315#step/setup/48
          # HDD_1: '%DISTRI%-%VERSION%-%ARCH%-Build%BUILD%-HA-BV-mig.qcow2'
          INSTALLONLY: '1'
          NICTYPE: 'tap'
          NUMDISKS: '2'
          NUMLUNS: '5'
          PUBLISH_HDD_1: 'ha_supportserver_upgrade_%DISTRI%_%VERSION%_%ARCH%-mig.qcow2'
          PUBLISH_HDD_2: 'ha_supportserver_upgrade_%DISTRI%_%VERSION%_%ARCH%-mig_luns.qcow2'
          QEMU_DISABLE_SNAPSHOTS: '1'
          START_AFTER_TEST: 'create_hdd_ha_textmode_publish_mig'
          SUPPORT_SERVER: '1'
          SUPPORT_SERVER_ROLES: 'dhcp,dns,ntp,ssh,iscsi'
          UEFI_PFLASH_VARS: 'openqa_support_server_sles12sp3.%ARCH%-mig-uefi-vars.qcow2'
          VIDEOMODE: 'text'
          VIRTIO_CONSOLE: '0'
          WORKER_CLASS: 'tap'

2. Migrate 15sp4 to 15sp4, then verify (failed on migration; yaml file):

# Testing purpose
defaults:
  x86_64:
    machine: 64bit
    priority: -100
    settings:
      TIMEOUT_SCALE: '4'
      VIDEOMODE: text
      EXCLUDE_MODULES: 'patch_sle,record_disk_info,reboot_to_upgrade,version_switch_upgrade_target,bootloader,welcome,upgrade_select,scc_registration,addon_products_sle,resolve_dependency_issues,installation_overview,disable_grub_timeout,start_install,await_install,reboot_after_installation,handle_reboot,first_boot,post_upgrade'
products:
  sle-15-SP4-Full-x86_64:
    distri: sle
    flavor: Full
    version: 15-SP4
scenarios:
  x86_64:
    sle-15-SP4-Full-x86_64:
    ##########sle-15-SP4-Full-x86_64##########
    # sle-15-SP4-Full-x86_64:
    ########## migration ##########
    # MIGRATION OFFLINE DVD 15 SP4
    - migration_offline_dvd_sle15sp4_ha_alpha_node01_mig:
        testsuite: null
        settings: &ha_migration_offline_dvd_common_mig
          ADDONS: 'base,serverapp,ha'
          CLUSTER_NAME: 'alpha'
          HDD_1: '%DISTRI%-%HDDVERSION%-%ARCH%-ha-%CLUSTER_NAME%-%HOSTNAME%-mig.qcow2'
          HDDMODEL: 'scsi-hd'
          # published qcow name FYI:
          #HDD_1: '%DISTRI%-%VERSION%-%ARCH%-ha-%CLUSTER_NAME%-%HOSTNAME%-mig.qcow2'
          HDDVERSION: '%ORIGIN_SYSTEM_VERSION%'
          #HOSTNAME: '%CLUSTER_NAME%-node01'
          MEDIA_UPGRADE: '1'
          #ORIGIN_SYSTEM_VERSION: '15-SP4'
          PUBLISH_HDD_1: 'upgrade_from-%HDDVERSION%-%DISTRI%-%VERSION%-%ARCH%-Build%BUILD%-%CLUSTER_NAME%-%HOSTNAME%-HA-BV-offline_dvd-yaml-mig.qcow2'
          #SCC_ADDONS: 'ha'
          SCC_REGISTER: 'installation'
          YAML_SCHEDULE: 'schedule/ha/bv/migration/migration_offline_sles_ha.yaml'
          VIDEOMODE: 'text'      
          HOSTNAME: '%CLUSTER_NAME%-node01'
          ORIGIN_SYSTEM_VERSION: '15-SP4'
          SCC_ADDONS: 'ha'
    - migration_offline_dvd_sle15sp4_ha_alpha_node02_mig:
        testsuite: null
        settings:
          <<: *ha_migration_offline_dvd_common_mig
          HOSTNAME: '%CLUSTER_NAME%-node02'
          ORIGIN_SYSTEM_VERSION: '15-SP4'
          SCC_ADDONS: 'ha'
    ########## verify ##########
    # MIGRATION OFFLINE DVD verify 15 SP4
    - migration_offline_dvd_verify_sle15sp4_ha_alpha_node01_mig:
        testsuite: null
        settings: &ha_migration_offline_dvd_verify_common_mig
          CLUSTER_NAME: 'alpha'
          HDD_1: 'upgrade_from-%HDDVERSION%-%DISTRI%-%VERSION%-%ARCH%-Build%BUILD%-%CLUSTER_NAME%-%HOSTNAME%-HA-BV-offline_dvd-yaml-mig.qcow2'
          HDDVERSION: '%ORIGIN_SYSTEM_VERSION%'
          #HOSTNAME: '%CLUSTER_NAME%-node01'
          NICTYPE: tap
          #ORIGIN_SYSTEM_VERSION: '15-SP4'
          PARALLEL_WITH: 'ha_supportserver_offline_dvd_sle%ORIGIN_SYSTEM_VERSION%_mig'
          #START_AFTER_TEST: 'migration_offline_dvd_sle15sp4_ha_alpha_node01_mig'
          USE_SUPPORT_SERVER: '1'
          WORKER_CLASS: 'tap'
          YAML_SCHEDULE: 'schedule/ha/bv/migration/migration_verify_sles_ha.yaml'      
          HOSTNAME: '%CLUSTER_NAME%-node01'
          ORIGIN_SYSTEM_VERSION: '15-SP4'
          START_AFTER_TEST: 'migration_offline_dvd_sle15sp4_ha_alpha_node01_mig'
    - migration_offline_dvd_verify_sle15sp4_ha_alpha_node02_mig:
        testsuite: null
        settings:
          <<: *ha_migration_offline_dvd_verify_common_mig
          HOSTNAME: '%CLUSTER_NAME%-node02'
          ORIGIN_SYSTEM_VERSION: '15-SP4'
          START_AFTER_TEST: 'migration_offline_dvd_sle15sp4_ha_alpha_node02_mig'
    - ha_supportserver_offline_dvd_sle15-SP4_mig:
        testsuite: null
        settings: &ha_supportserver_offline_dvd_common_mig
          #ORIGIN_SYSTEM_VERSION: '15-SP3'
          # All support servers use the same sle12sp3 qcow; for testing we will not change its name
          #ORIGIN_SYSTEM_VERSION: '12-SP3'
          HDD_1: 'ha_supportserver_upgrade_sle_%ORIGIN_SYSTEM_VERSION%_%ARCH%-mig.qcow2'
          HDD_2: 'ha_supportserver_upgrade_sle_%ORIGIN_SYSTEM_VERSION%_%ARCH%-mig_luns.qcow2'
          #UEFI_PFLASH_VARS: openqa_support_server_sles12sp3.%ARCH%-uefi-vars.qcow2
          #HDD_1: 'ha_supportserver_upgrade_%DISTRI%_%VERSION%_%ARCH%-mig.qcow2'
          #HDD_2: 'ha_supportserver_upgrade_%DISTRI%_%VERSION%_%ARCH%-mig_luns.qcow2'
          EFI_PFLASH_VARS: 'openqa_support_server_sles12sp3.%ARCH%-mig-uefi-vars.qcow2'
          HDDMODEL: 'scsi-hd'
          NUMDISKS: '2'
          CLUSTER_INFOS: 'alpha:2:5'
          WORKER_CLASS: tap
          NICTYPE: tap
          SUPPORT_SERVER_ROLES: 'dhcp,dns,ntp,ssh'
          YAML_SCHEDULE: schedule/ha/bv/ha_supportserver.yaml  
          ORIGIN_SYSTEM_VERSION: '12-SP3'

NOTE: migration failed, please ignore the migration step

3. Try ignoring migration and do verify only (yaml file):

# Testing purpose
defaults:
  x86_64:
    machine: 64bit
    priority: -100
    settings:
      TIMEOUT_SCALE: '4'
      VIDEOMODE: text
      #EXCLUDE_MODULES: 'patch_sle,record_disk_info,reboot_to_upgrade,version_switch_upgrade_target,bootloader,welcome,upgrade_select,scc_registration,addon_products_sle,resolve_dependency_issues,installation_overview,disable_grub_timeout,start_install,await_install,reboot_after_installation,handle_reboot,first_boot,post_upgrade'
products:
  sle-15-SP4-Full-x86_64:
    distri: sle
    flavor: Full
    version: 15-SP4
scenarios:
  x86_64:
    sle-15-SP4-Full-x86_64:
    ##########sle-15-SP4-Full-x86_64##########
    # sle-15-SP4-Full-x86_64:
    ########## migration ##########
    ########## verify ##########
    # MIGRATION OFFLINE DVD verify 15 SP4
    - migration_offline_dvd_verify_sle15sp4_ha_alpha_node01_mig:
        testsuite: null
        settings: &ha_migration_offline_dvd_verify_common_mig
          CLUSTER_NAME: 'alpha'
          HDD_1: '%DISTRI%-%HDDVERSION%-%ARCH%-ha-%CLUSTER_NAME%-%HOSTNAME%-mig.qcow2'
          HDDVERSION: '%ORIGIN_SYSTEM_VERSION%'
          #HOSTNAME: '%CLUSTER_NAME%-node01'
          NICTYPE: tap
          #ORIGIN_SYSTEM_VERSION: '15-SP4'
          PARALLEL_WITH: 'ha_supportserver_offline_dvd_sle%ORIGIN_SYSTEM_VERSION%_mig'
          #START_AFTER_TEST: 'migration_offline_dvd_sle15sp4_ha_alpha_node01_mig'
          USE_SUPPORT_SERVER: '1'
          WORKER_CLASS: 'tap'
          #DELAYED_START: '1'
          YAML_SCHEDULE: 'schedule/ha/bv/migration/migration_verify_sles_ha.yaml'      
          HOSTNAME: '%CLUSTER_NAME%-node01'
          ORIGIN_SYSTEM_VERSION: '15-SP4'
          #START_AFTER_TEST: 'migration_offline_dvd_sle15sp4_ha_alpha_node01_mig'
          IP: '10.0.2.15'
    - migration_offline_dvd_verify_sle15sp4_ha_alpha_node02_mig:
        testsuite: null
        settings:
          <<: *ha_migration_offline_dvd_verify_common_mig
          HOSTNAME: '%CLUSTER_NAME%-node02'
          ORIGIN_SYSTEM_VERSION: '15-SP4'
          #START_AFTER_TEST: 'migration_offline_dvd_sle15sp4_ha_alpha_node02_mig'
          IP: '10.0.2.16'
    - ha_supportserver_offline_dvd_sle15-SP4_mig:
        testsuite: null
        settings: &ha_supportserver_offline_dvd_common_mig
          #ORIGIN_SYSTEM_VERSION: '15-SP3'
          # All support servers use the same sle12sp3 qcow; for testing we will not change its name
          #ORIGIN_SYSTEM_VERSION: '12-SP3'
          HDD_1: 'ha_supportserver_upgrade_sle_%ORIGIN_SYSTEM_VERSION%_%ARCH%-mig.qcow2'
          HDD_2: 'ha_supportserver_upgrade_sle_%ORIGIN_SYSTEM_VERSION%_%ARCH%-mig_luns.qcow2'
          #UEFI_PFLASH_VARS: openqa_support_server_sles12sp3.%ARCH%-uefi-vars.qcow2
          #HDD_1: 'ha_supportserver_upgrade_%DISTRI%_%VERSION%_%ARCH%-mig.qcow2'
          #HDD_2: 'ha_supportserver_upgrade_%DISTRI%_%VERSION%_%ARCH%-mig_luns.qcow2'
          EFI_PFLASH_VARS: 'openqa_support_server_sles12sp3.%ARCH%-mig-uefi-vars.qcow2'
          HDDMODEL: 'scsi-hd'
          NUMDISKS: '2'
          CLUSTER_INFOS: 'alpha:2:5'
          WORKER_CLASS: tap
          NICTYPE: tap
          SUPPORT_SERVER_ROLES: 'dhcp,dns,ntp,ssh'
          YAML_SCHEDULE: schedule/ha/bv/ha_supportserver.yaml  
          ORIGIN_SYSTEM_VERSION: '12-SP3'

NOTE 1:

Also hit an issue similar to "Bug 1129385 - [Build 188.1] iscsid and pacemaker start while network is not ready".

See more info: https://app.slack.com/client/T02863RC2AC/C02CU8X53RC/thread/C02CU8X53RC-1657537264.802649
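
An alternative for this class of startup race (a sketch only, not what was done in this ticket) would be to order pacemaker after network-online via a systemd drop-in, created from the test code with the same testapi calls:

    # Hypothetical alternative to the fixed-IP workaround: make pacemaker wait
    # for the network to be online instead of restarting it afterwards.
    assert_script_run('mkdir -p /etc/systemd/system/pacemaker.service.d');
    assert_script_run(q{printf '[Unit]\nAfter=network-online.target\nWants=network-online.target\n' > /etc/systemd/system/pacemaker.service.d/network-online.conf});
    assert_script_run('systemctl daemon-reload');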

NOTE 2: With the workaround of using a fixed IP and restarting pacemaker, the test case can pass.

IP: '10.0.2.15/16'

At the end of tests/boot/boot_to_desktop.pm, add the workaround:
    select_console 'root-console';
    my $ip = get_var('IP');
    my $netdev = get_var('NETDEV', 'eth0');
    assert_script_run("ip addr add $ip/24 dev $netdev");
    assert_script_run("systemctl restart pacemaker");

NOTE 3: trigger cmd FYI:

# ARCH=xxx; BUILD=151.1; FLAVOR=Online; apikey=xxx; apisecret=xxx; TEST=xxx
# /usr/share/openqa/script/client isos post --host http://openqa.nue.suse.com --apikey ${apikey} --apisecret ${apisecret} _GROUP_ID=184 DISTRI=sle VERSION=15-SP4 ARCH=${ARCH} FLAVOR=${FLAVOR} BUILD=${BUILD} ISO=SLE-15-SP4-${FLAVOR}-${ARCH}-Build${BUILD}-Media1.iso MIRROR_FTP=ftp://openqa.suse.de/SLE-15-SP4-${FLAVOR}-${ARCH}-Build${BUILD}-Media1 MIRROR_HTTP=http://openqa.suse.de/assets/repo/SLE-15-SP4-${FLAVOR}-${ARCH}-Build${BUILD}-Media1 REPO_0=SLE-15-SP4-${FLAVOR}-${ARCH}-Build${BUILD}-Media1 TEST=${TEST}
Actions #5

Updated by llzhao almost 2 years ago

Passed:

  1. publish qcows: https://openqa.suse.de/tests/9158099#dependencies
  2. "ignore migration" + "verify directly" with workaround: https://openqa.suse.de/tests/9126046#dependencies
Actions #6

Updated by llzhao almost 2 years ago

  • Status changed from In Progress to Resolved
  • % Done changed from 0 to 100
Actions #7

Updated by llzhao almost 2 years ago

  • Subject changed from [sle][migration][sle15sp5][HA] try to run the "migration", "verify" cases in O.S.D to [sle][migration][sle15sp5][HA] try to run the "migration", "verify" cases in O.S.D (based on sle15sp4)
Actions #8

Updated by coolgw over 1 year ago

  • Estimated time set to 40.00 h