action #41276
closed coordination #40469: [functional][y][epic] Adjust RAID/LVM/partitioning tests to the new changes and extend testing coverage
[functionality][y][research] Test partitions on an MD RAID
Description
Now it's possible to manage partitions within a software MD RAID. As part of the change, the interface for managing RAIDs and the placement of some buttons have changed slightly.
- In the table of the "RAIDs" section, the set of buttons at the bottom changes based on the selected row
- After creating a new RAID, it does not automatically start the workflow to "edit" it (format and so on)
- When clicking on a RAID, it jumps to its new view with tabs "Overview", "Used Devices" and "Partitions"
- The buttons for creating a new partition table and for cloning it to another device have been relocated/renamed
For more details see:
- https://trello.com/c/itizMqBG/ (and the linked epic card)
- https://github.com/yast/yast-storage-ng/pull/737
- https://github.com/yast/yast-storage-ng/pull/738
More changes in the locations of the buttons are expected soon. See the discussion at
https://gist.github.com/ancorgs/bf81a230e2a4634df7a05b90a1241116#file-partitioner-buttons-md
Acceptance criteria
- Draft test plan for the feature is created
- Follow-up ticket for automation is created if it makes sense
Suggestions
1. Plain partitions on RAID 0, 1, 10, 5, 6 (Btrfs for /, swap, separate /boot, /home)
2. LVM on top of RAID for different RAID levels
3. Encrypted partitions on top of RAID
4. Encrypted LVM on top of RAID
5. Cloning partitions using the Expert Partitioner
6. Creating 1-4 in the installed system
7. System cloning with 1-4
8. AutoYaST installation with 1-4
We have an automated test which creates a single partition on a RAID device. The feature is available starting from SLE/Leap 15 SP1 and Tumbleweed.
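Suggestion 1 above can also be exercised outside the installer. A minimal sketch, assuming two spare disks named /dev/vdb and /dev/vdc (hypothetical device names) and root privileges:

```shell
# Create a RAID 1 array from two whole disks (device names are assumptions).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/vdb /dev/vdc

# Put a partition table and a single partition directly on the MD device.
parted --script /dev/md0 mklabel gpt
parted --script /dev/md0 mkpart primary btrfs 1MiB 100%

# Format and mount the nested partition (it appears as /dev/md0p1).
mkfs.btrfs /dev/md0p1
mount /dev/md0p1 /mnt

# Check the array state.
cat /proc/mdstat
```

This is the "partitions within an MD RAID" layout the ticket is about, as opposed to the traditional layout where partitions are the RAID members.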
Updated by okurz about 6 years ago
- Category set to New test
- Target version set to Milestone 22
I see this as a request for a test coverage extension, which should not be that important unless tests break due to the imminent changes – which I expect to hit us anyway.
Updated by okurz almost 6 years ago
- Target version changed from Milestone 22 to future
M22 is out of capacity.
Updated by riafarov over 5 years ago
- Due date set to 2019-05-07
- Target version changed from future to Milestone 26
Updated by riafarov over 5 years ago
- Due date changed from 2019-05-07 to 2019-06-04
I believe we should aim to get our testing approach well defined before extending the coverage of the UI tests, so moving this to a later sprint.
Updated by riafarov over 5 years ago
- Due date changed from 2019-06-04 to 2019-07-02
Same as before: still no capacity.
Updated by riafarov over 5 years ago
- Due date changed from 2019-07-02 to 2019-07-30
Due to hackweek.
Updated by riafarov over 5 years ago
- Description updated (diff)
- Estimated time set to 8.00 h
Updated by riafarov over 5 years ago
- Due date changed from 2019-07-30 to 2019-08-13
Updated by JERiveraMoya over 5 years ago
- Due date changed from 2019-08-13 to 2019-08-27
Updated by riafarov over 5 years ago
- Target version changed from Milestone 26 to Milestone 27
Updated by riafarov over 5 years ago
- Subject changed from [functionality][y] Test partitions on an MD RAID to [functionality][y][research] Test partitions on an MD RAID
- Due date changed from 2019-08-27 to 2019-09-10
- Category changed from New test to Spike/Research
- Target version changed from Milestone 27 to Milestone 30+
Updated by JRivrain about 5 years ago
- Status changed from Workable to In Progress
Updated by JRivrain about 5 years ago
- Status changed from In Progress to Blocked
For now, considering the bug mentioned in the previous comment, I consider this feature unreliable. I'd rather wait for the resolution of that bug before going any further.
Updated by JRivrain about 5 years ago
- Status changed from Blocked to In Progress
The bug was closed. The resulting discussion shows that:
- We cannot reliably boot from such a RAID array: we need to boot from a separate disk outside the array.
- "Managing partitions within a software MD RAID" is generally not a good idea, but it has been implemented in the partitioner because it is technically possible, even if the feature is somewhat experimental. Commands like fdisk -l and parted -l are misleading with such a setup, and it seems it can be dangerous in some cases, see comment 27. The documentation still recommends the traditional way, which is also the way we test RAID in openQA.
So I am not sure (to be discussed) whether we should spend time automating tests for this. There is a ticket for it: https://progress.opensuse.org/issues/40679.
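To illustrate the inspection problem mentioned above (a sketch; /dev/vdb and /dev/md0 are hypothetical device names from a setup like the one under test): tools that read partition tables from the raw member disks can disagree with the kernel's view of the array, so the MD-aware tools are the ones to trust.

```shell
# fdisk -l / parted -l inspect the raw member disk and may report it as
# empty or as holding foreign partitions, which is misleading here.
fdisk -l /dev/vdb

# MD-aware views show the real picture: the array, its member devices,
# and the partitions nested inside /dev/md0 (e.g. /dev/md0p1).
cat /proc/mdstat
mdadm --detail /dev/md0
lsblk /dev/md0
```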
Updated by JRivrain about 5 years ago
- Status changed from In Progress to Feedback
I tested pretty much all the suggestions (except encryption); everything works, with the limitations I reported. There is no way to clone a partition table from/to an MD RAID; that could be suggested to the YaST team as an enhancement.
Updated by JRivrain about 5 years ago
- Related to action #40679: [functional][y] whole disk as part of an MD RAID added
Updated by JRivrain almost 5 years ago
- Status changed from Feedback to Resolved
Updated by riafarov almost 5 years ago
- Start date set to 2018-09-06
Due to changes in a related task.