action #41276

coordination #40469: [functional][y][epic] Adjust RAID/LVM/partitioning tests to the new changes and extend testing coverage

[functionality][y][research] Test partitions on an MD RAID

Added by ancorgs over 3 years ago. Updated over 2 years ago.

Status: Resolved
Priority: Normal
Assignee:
Category: Spike/Research
Target version: SUSE QA - Milestone 30+
Start date: 2018-09-06
Due date:
% Done: 0%
Estimated time:
Difficulty:

Description

Now it's possible to manage partitions within a software MD RAID. As part of the change, the interface for managing RAIDs and the distribution of some buttons has changed slightly.

  • In the table of the section "RAIDs", the set of buttons at the bottom changes based on the selected row
  • After creating a new RAID, it does not automatically start the workflow to "edit" it (format and so on)
  • When clicking on a RAID, it jumps to its new view with tabs "Overview", "Used Devices" and "Partitions"
  • The buttons for creating a new partition table and for cloning it to another device have been relocated/renamed

For more details see:

More changes in the locations of the buttons are expected soon. See the discussion at
https://gist.github.com/ancorgs/bf81a230e2a4634df7a05b90a1241116#file-partitioner-buttons-md

Acceptance criteria

  1. Draft test plan for the feature is created
  2. Follow-up ticket for automation is created if it makes sense

Suggestions

  1. Plain partitions on RAID 0, 1, 5, 6, 10 (Btrfs for /, swap, separate /boot, /home)
  2. LVM on top of RAID for different RAID levels
  3. Encrypted partitions on top of RAID
  4. Encrypted LVM on top of RAID
  5. Cloning partitions using the Expert Partitioner
  6. Creating 1-4 in the installed system
  7. System cloning with 1-4
  8. AutoYaST installation with 1-4

We have an automated test which creates a single partition on a RAID device. The feature is available starting from SLE/Leap 15 SP1 and Tumbleweed.
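For suggestion 8, an AutoYaST profile would describe both the component partitions and the partitions on the resulting array. A rough sketch of the relevant partitioning section, assuming two disks /dev/sda and /dev/sdb forming a RAID1 /dev/md0 (device names, sizes, and exact element values are illustrative assumptions; check the AutoYaST documentation for the authoritative schema):

```xml
<partitioning config:type="list">
  <!-- Component partitions on two disks, reserved for the RAID -->
  <drive>
    <device>/dev/sda</device>
    <use>all</use>
    <partitions config:type="list">
      <partition>
        <raid_name>/dev/md0</raid_name>
        <partition_id config:type="integer">253</partition_id>
        <size>max</size>
      </partition>
    </partitions>
  </drive>
  <drive>
    <device>/dev/sdb</device>
    <use>all</use>
    <partitions config:type="list">
      <partition>
        <raid_name>/dev/md0</raid_name>
        <partition_id config:type="integer">253</partition_id>
        <size>max</size>
      </partition>
    </partitions>
  </drive>
  <!-- The array itself, carrying a partition mounted at / -->
  <drive>
    <device>/dev/md0</device>
    <raid_options>
      <raid_type>raid1</raid_type>
    </raid_options>
    <partitions config:type="list">
      <partition>
        <mount>/</mount>
        <filesystem config:type="symbol">btrfs</filesystem>
        <size>max</size>
      </partition>
    </partitions>
  </drive>
</partitioning>
```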


Related issues

Related to openQA Tests - action #40679: [functional][y] whole disk as part of an MD RAID (Rejected, 2018-09-06)

History

#1 Updated by okurz over 3 years ago

  • Category set to New test
  • Target version set to Milestone 22

I see this as a request for a test coverage extension which should not be that important unless tests break due to the imminent changes – which I expect to hit us anyway.

#2 Updated by okurz over 3 years ago

  • Target version changed from Milestone 22 to future

M22 out of capacity

#3 Updated by riafarov about 3 years ago

  • Due date set to 2019-05-07
  • Target version changed from future to Milestone 26

#4 Updated by riafarov about 3 years ago

  • Due date changed from 2019-05-07 to 2019-06-04

I believe we should aim at getting our testing approach well defined before extending coverage of the UI tests, so I'm moving this to a later sprint.

#5 Updated by riafarov about 3 years ago

  • Due date changed from 2019-06-04 to 2019-07-02

Same as before, still no capacity.

#6 Updated by riafarov almost 3 years ago

  • Description updated (diff)

#7 Updated by riafarov almost 3 years ago

  • Due date changed from 2019-07-02 to 2019-07-30

Due to hackweek.

#8 Updated by riafarov almost 3 years ago

  • Description updated (diff)
  • Estimated time set to 8.00 h

#9 Updated by riafarov almost 3 years ago

  • Status changed from New to Workable

#10 Updated by riafarov almost 3 years ago

  • Due date changed from 2019-07-30 to 2019-08-13

#11 Updated by JERiveraMoya almost 3 years ago

  • Due date changed from 2019-08-13 to 2019-08-27

#12 Updated by riafarov almost 3 years ago

  • Target version changed from Milestone 26 to Milestone 27

#13 Updated by riafarov almost 3 years ago

  • Subject changed from [functionality][y] Test partitions on an MD RAID to [functionality][y][research] Test partitions on an MD RAID
  • Due date changed from 2019-08-27 to 2019-09-10
  • Category changed from New test to Spike/Research
  • Target version changed from Milestone 27 to Milestone 30+

#14 Updated by JRivrain over 2 years ago

  • Assignee set to JRivrain

#15 Updated by JRivrain over 2 years ago

  • Assignee deleted (JRivrain)

#16 Updated by JRivrain over 2 years ago

  • Assignee set to JRivrain

#17 Updated by JRivrain over 2 years ago

  • Status changed from Workable to In Progress

#19 Updated by JRivrain over 2 years ago

  • Status changed from In Progress to Blocked

For now, considering the bug in the previous comment, I consider this feature unreliable. I'd rather wait for the resolution of that bug before going any further.

#20 Updated by JRivrain over 2 years ago

  • Status changed from Blocked to In Progress

The bug was closed. The resulting discussion shows that:

  • We cannot reliably boot from such a RAID array: we need to boot from a separate disk outside the array.
  • "Managing partitions within a software MD RAID" is generally not a good idea, but it has been implemented in the partitioner because it is technically possible, even though the feature is somewhat experimental. Commands like fdisk -l and parted -l are misleading with such a setup, and it seems it can be dangerous in some cases; see comment 27. The documentation still recommends the traditional approach, which is also the way we test RAID in openQA.

So, I am not sure (to be discussed) whether we should spend time automating tests for this. There is a ticket for it: https://progress.opensuse.org/issues/40679.

#22 Updated by JRivrain over 2 years ago

  • Status changed from In Progress to Feedback

I tested pretty much all the suggestions (except encryption) and it all works, with the limitations I reported. There is no way to clone a partition table from/to an MD RAID; that could be suggested to the YaST team as an enhancement.

#23 Updated by JRivrain over 2 years ago

  • Related to action #40679: [functional][y] whole disk as part of an MD RAID added

#24 Updated by JRivrain over 2 years ago

  • Status changed from Feedback to Resolved

#25 Updated by riafarov over 2 years ago

  • Start date set to 2018-09-06

Due to changes in a related task.
