action #28955
Closed. Parent task: coordination #28949: [sle][functional][opensuse][epic] Improve test coverage of partitioning_raid
[sle][functional][easy][yast] Specific needle for each raid type
Description
Observation
At the moment, a single generic needle is used to assert the final step of partitioning. That means we could get a false negative if that needle matches a non-expected result.
openQA test in scenario sle-15-Installer-DVD-x86_64-RAID0@64bit fails in
partitioning_raid
Acceptance criteria
- AC1: Each RAID type has its own acceptedpartitioning needle, matching exactly the expected result.
- AC2: The code still uses the tag acceptedpartitioningraidefi.
- AC3: The verification runs match the proper specific needle for each raid type.
- AC4: The test module partitioning_raid doesn't fail on this needle.
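For clarity, here is a minimal sketch (not the actual partitioning_raid code) of how AC1 and AC2 could fit together: the module keeps asserting the generic tag, while the per-RAID-type needles additionally carry the specific tag. assert_screen and get_var are the standard os-autoinst testapi calls; RAIDLEVEL and UEFI are assumed here to be the job settings selecting the scenario.

```perl
use strict;
use warnings;
use testapi;

# Sketch only: assert the "accepted partitioning" screen for the current
# RAID scenario without dropping the generic tag (AC2).
sub assert_accepted_partitioning {
    my $raid = get_var('RAIDLEVEL');    # e.g. 0, 1, 5, 6, 10 (assumed setting)
    my $efi  = get_var('UEFI') ? 'efi' : '';

    # The code still matches on the generic tag, so existing needles and
    # job templates keep working ...
    assert_screen "acceptedpartitioningraid$efi";

    # ... and once the per-type needles exist (AC1), the specific tag,
    # e.g. acceptedpartitioningraid0efi, can be asserted as well:
    # assert_screen "acceptedpartitioningraid$raid$efi";
}
```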
Tasks
- Create specific needles for each RAID type, taking care not to break the current code base.
- See this sample for raid0 on uefi/aarch64: osd#1311060#step/partitioning_raid/233
Tags examples
SLE 15
"ENV-15ORLATER-1",
"acceptedpartitioningraidX", (use here the raid type instead of **X**, like _acceptedpartitioningraid0_)
"acceptedpartitioningraid"
SLE 15 EFI
"ENV-15ORLATER-1",
"ENV-UEFI-1",
"acceptedpartitioningraidXefi", (use here the raid type instead of **X**, like _acceptedpartitioningraid0efi_)
"acceptedpartitioningraidefi"
SLE 15 OFW
"ENV-15ORLATER-1",
"ENV-OFW-1",
"acceptedpartitioningraidXofw", (use here the raid type instead of **X**, like _acceptedpartitioningraid0ofw_)
"acceptedpartitioningraid"
SLE 12-SP4
"ENV-12SP4ORLATER-1",
"acceptedpartitioningraidX", (use here the raid type instead of **X**, like _acceptedpartitioningraid0_)
"acceptedpartitioningraid"
Reproducible
Fails since (at least) Build 288.8
Expected result
Last good: (unknown) (or more recent)
Further details
Always latest result in this scenario: latest
Updated by SLindoMansilla about 7 years ago
- Blocks action #28970: [sle][functional][easy][yast] Use the specific tag for each raid type in code added
Updated by SLindoMansilla about 7 years ago
- Description updated (diff)
Verification on OSD:
SLE 15
aarch64
- RAID0 osd#1311060#step/partitioning_raid/233
- RAID1 osd#1311078#step/partitioning_raid/233
- RAID10 http://openqa.suse.de/tests/1400173#step/partitioning_raid/187
- RAID5 http://openqa.suse.de/tests/1399049#step/partitioning_raid/187
- RAID6 http://openqa.suse.de/tests/1399059#step/partitioning_raid/187
- LVM+RAID1 http://openqa.suse.de/tests/1403585#step/partitioning_raid/203
ppc64le
- RAID0 http://openqa.suse.de/tests/1399197#step/partitioning_raid/224
- RAID1 http://openqa.suse.de/tests/1399203#step/partitioning_raid/223
- RAID10 http://openqa.suse.de/tests/1399285#step/partitioning_raid/222
- RAID5 http://openqa.suse.de/tests/1399204#step/partitioning_raid/221
x86_64
- RAID0 https://openqa.suse.de/tests/1399536#step/partitioning_raid/191
- RAID1 https://openqa.suse.de/tests/1399567#step/partitioning_raid/190
- RAID10 https://openqa.suse.de/tests/1399752#step/partitioning_raid/190
- RAID5 https://openqa.suse.de/tests/1399562#step/partitioning_raid/190
- RAID6 https://openqa.suse.de/tests/1399581#step/partitioning_raid/190
- LVM+RAID1 http://openqa.suse.de/tests/1404093#step/partitioning_raid/206
- lvm+RAID1@svirt-hyperv, lvm+RAID1@svirt-hyperv-uefi, lvm+RAID1@svirt-xen-hvm, lvm+RAID1@svirt-xen-pv: these have never worked so far, so they are not required at the moment; the failure needs to be fixed first. Just for the record.
Leap 15
aarch64
- RAID0 N/A
- RAID1 N/A
- RAID10 N/A
- RAID5 N/A
- RAID6 N/A
- LVM+RAID N/A
ppc64le
- RAID0 N/A
- RAID1 N/A
- RAID10 N/A
- RAID5 N/A
- RAID6 N/A
- LVM+RAID N/A
s390x
- RAID0 N/A
- RAID1 N/A
- RAID10 N/A
- RAID5 N/A
- RAID6 N/A
- LVM+RAID N/A
x86_64
- RAID0 O3#559024#step/partitioning_raid/216
- RAID1 N/A
- RAID10 N/A
- RAID5 N/A
- RAID6 N/A
- LVM+RAID N/A
Updated by SLindoMansilla about 7 years ago
- Status changed from New to Workable
- Assignee set to zluo
Updated by zluo almost 7 years ago
For the SLES 15 part:
All needles are in place.
x86_64 still has the following problems:
lvm+RAID1@svirt-hyperv, lvm+RAID1@svirt-hyperv-uefi, lvm+RAID1@svirt-xen-hvm, lvm+RAID1@svirt-xen-pv: never worked for SLES 15.
Updated by zluo almost 7 years ago
- Assignee changed from zluo to SLindoMansilla
@SLindoMansilla: my part has been done, please take over, thanks!
Updated by SLindoMansilla almost 7 years ago
Updated by SLindoMansilla almost 7 years ago
- Status changed from Workable to In Progress
Updated by SLindoMansilla almost 7 years ago
Updated by riafarov almost 7 years ago
- Subject changed from [sle][functional][easy] Specific needle for each raid type to [sle][functional][easy][yast] Specific needle for each raid type
Updated by riafarov almost 7 years ago
- Due date changed from 2018-02-13 to 2018-02-27
Moving to the next sprint
Updated by SLindoMansilla almost 7 years ago
Updated by SLindoMansilla almost 7 years ago
- Due date changed from 2018-02-27 to 2018-02-13
- Status changed from In Progress to Resolved
As discussed, this was only verified on OSD, so setting the due date to sprint 10.
Updated by pcervinka almost 7 years ago
- Related to action #31867: [qam] test fails in partitioning_raid - failures after update to needles added
Updated by SLindoMansilla almost 7 years ago
- Blocks deleted (action #28970: [sle][functional][easy][yast] Use the specific tag for each raid type in code)
Updated by SLindoMansilla almost 7 years ago
- Has duplicate action #28970: [sle][functional][easy][yast] Use the specific tag for each raid type in code added