action #41300
coordination #42128 (closed): [functional][y][epic] Adjust partitioning tests to changes in storage-ng
[sle][functional][y][fast] test fails in partitioning_raid - xvdb cannot be selected
Description
The disk xvdb cannot be selected, so the needle match failed.
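For context, an openQA needle is a JSON file paired with a reference screenshot; a match fails when no defined area matches the current screen within tolerance. A minimal sketch of such a needle follows (the coordinates and tag name are hypothetical, not taken from this ticket):

```json
{
  "area": [
    {
      "xpos": 100,
      "ypos": 212,
      "width": 120,
      "height": 18,
      "type": "match"
    }
  ],
  "tags": [
    "partitioning_raid-disk_xvdb-selected"
  ]
}
```

When the installer UI changes (as with the storage-ng rework this ticket tracks), the screenshot no longer matches and the needle has to be re-created against the new UI.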
Observation
openQA test in scenario sle-15-SP1-Installer-DVD-x86_64-lvm+RAID1@svirt-xen-hvm fails in
partitioning_raid
Reproducible
Fails since (at least) Build 43.1 (current job)
Expected result
Last good: 41.4 (or more recent)
Further details
Always latest result in this scenario: latest
Updated by okurz over 6 years ago
- Target version set to Milestone 20
So far this has happened only once. To be reviewed later.
Updated by okurz over 6 years ago
- Blocked by action #41849: [functional][y] test fails in partitioning_raid - RAID toolbar has changed thus shortcut is different added
Updated by okurz over 6 years ago
- Status changed from New to Blocked
- Assignee set to JERiveraMoya
Updated by riafarov over 6 years ago
- Subject changed from [sle][functional][y] test fails in partitioning_raid - xvdb cannot be selected to [sle][functional][y][fast] test fails in partitioning_raid - xvdb cannot be selected
- Due date set to 2018-10-23
Updated by riafarov over 6 years ago
- Blocked by deleted (action #41849: [functional][y] test fails in partitioning_raid - RAID toolbar has changed thus shortcut is different)
Updated by riafarov over 6 years ago
- Blocked by action #41849: [functional][y] test fails in partitioning_raid - RAID toolbar has changed thus shortcut is different added
Updated by riafarov over 6 years ago
https://openqa.suse.de/tests/2175049#settings now fails on entering the volume group name; I am not sure whether that is because performance differs on Hyper-V or something else. Could you please take a look?
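If slow input on the backend is indeed the cause, os-autoinst tests commonly mitigate it by waiting for the UI to settle and typing more slowly. A sketch of that pattern, assuming hypothetical needle tags (this is not the actual fix from the PRs below):

```perl
# Wait for the volume group name dialog before typing, so slow
# svirt/Hyper-V workers do not drop characters.
assert_screen 'partitioning-raid-vg-name-dialog', 60;  # hypothetical needle tag
wait_still_screen 3;                                   # let the UI settle
type_string_slow 'vg_system';                          # slower typing for flaky backends
assert_screen 'partitioning-raid-vg-name-typed';       # hypothetical needle tag
```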
Updated by JERiveraMoya over 6 years ago
- Status changed from Feedback to In Progress
Updated by JERiveraMoya over 6 years ago
Besides some needles, it requires a bit of adjustment in the drop-down menu:
PR - Fix lvm on top of raid partitioning
Updated by JERiveraMoya over 6 years ago
- Status changed from In Progress to Feedback
Updated by JERiveraMoya over 6 years ago
- Status changed from Feedback to In Progress
All LVM-on-top-of-RAID jobs are passing; jobs were restarted and the missing needles created.
Now we need to wait for results once the PR "Update LVM workflow to recent storage-ng updates" from @mloviska is merged.
Updated by JERiveraMoya over 6 years ago
- Status changed from In Progress to Feedback
I forgot to also branch the code for sle12sp4: PR - Fix partition raid for sle12sp4, to avoid this job failing. Besides that, I think it needs some further adjustment according to the last sle12sp4 build.
Updated by JERiveraMoya over 6 years ago
The PR reverses the if/else logic we missed in the PR review after @mloviska's PR was merged.
Updated by JERiveraMoya over 6 years ago
- Status changed from Feedback to In Progress
Updated by riafarov over 6 years ago
- Status changed from In Progress to Resolved
This is also already fixed. Thanks!
https://openqa.suse.de/tests/2197287