action #103836

Updated by JERiveraMoya over 2 years ago

The goal is to increase the granularity of the RAID tests to make them easy to dissect in case of failure.
 In this case, as we pass several times through the same screen, we cannot apply the same atomicity as in other steps we did
 for the interactive installation, but we can still pick and group some sequences of steps which could even be verified in the UI
 (that verification part is not included and is not intended for now), because there is a summary or a table to check them, so they are joined by the same logic:
 for example, adding partitions at the hard disk level or adding a partition on an md.

 This is how the final schedule should look after increasing the granularity for this arch:

 RAID{0,1,5,6,10} x86_64: 
     installation/partitioning/select_expert_partitioner 
     installation/expert_partitioner/add_hd_part_bios_boot 
     installation/expert_partitioner/add_hd_part_linux_raid_root_size 
     installation/expert_partitioner/add_hd_part_linux_raid_swap_size # from this point we will not tackle in this ticket yet 
     # until this point it should be completed in the previous ticket of this Epic
     installation/expert_partitioner/clone_partition_layout_to_all_targets 
     installation/expert_partitioner/add_md0_raid{0,1,5,6,10}_root_devices 
     installation/expert_partitioner/add_raid_part_md0_root 
     installation/expert_partitioner/add_md1_raid0_swap_devices 
     installation/expert_partitioner/add_raid_part_md1_swap 
     installation/partitioning/accept_proposed_layout 

 Notation {0,1,5,6,10} means that we need one test for each type of RAID. 
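
 For orientation, this is a minimal sketch of what one of these thin test modules could look like, assuming the usual `get_test_suite_data()` helper and a `get_expert_partitioner()` accessor on the distribution object; the accessor, paths and structure are placeholders for the new POM, not a final implementation:

 ```
 # Hypothetical tests/installation/expert_partitioner/clone_partition_layout_to_all_targets.pm
 # (accessor and method names are placeholders, not existing code)
 use base 'y2_installbase';
 use strict;
 use warnings;
 use testapi;
 use scheduler 'get_test_suite_data';

 sub run {
     my $test_data    = get_test_suite_data();
     my @disks        = @{$test_data->{disks}};
     my @target_disks = map { $_->{name} } @disks[1 .. $#disks];
     # Delegate the actual UI interaction to the new ExpertPartitioner controller
     $testapi::distri->get_expert_partitioner()->clone_partition_table({
             disk         => $disks[0]->{name},
             target_disks => \@target_disks});
 }

 1;
 ```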
 It is recommended to break up the interface created for lib/Installation/Partitioner/ and create a new POM with this granularity under lib/Installation/ExpertPartitioner/, implementing it from scratch (see the sketch below).
 Leave validations and removal of test data out of scope here, but they will probably need some adjustment, because to make it homogeneous some partition might end up on a different md.
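
 As a possible starting point for the new POM, each dialog the schedule passes through could get its own small page class. A rough sketch follows; the package name, method name, needle tag and keyboard shortcuts are all assumptions, not existing ones:

 ```
 # Hypothetical lib/Installation/ExpertPartitioner/CloneDiskDialog.pm
 package Installation::ExpertPartitioner::CloneDiskDialog;
 use strict;
 use warnings;
 use testapi;

 sub new {
     my ($class) = @_;
     return bless {}, $class;
 }

 # Tick all target disks in the "Clone partition layout" dialog and confirm
 sub select_all_target_disks_and_accept {
     my ($self) = @_;
     assert_screen('expert-partitioner-clone-dialog');    # placeholder needle tag
     send_key('alt-a');                                    # assumed shortcut for "select all"
     send_key('alt-o');                                    # assumed shortcut for OK
 }

 1;
 ```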

 The existing code contains three clear steps: (1) partitioning, (2) cloning and (3) RAID.
 We will be deleting the temporary test we created for (1) and we will tackle (2) and (3). This is the original code we will use for that:

 ``` 
 sub setup_raid { 
     my ($self, $args) = @_; 
     # Create partitions with the data from yaml scheduling file on first disk 
     my @disks = @{$args->{disks}}; 
     my $first_disk = $disks[0]; 
     foreach my $partition (@{$first_disk->{partitions}}) { 
         $self->add_partition_gpt({disk => $first_disk->{name}, partition => $partition}); 
     } 
     # Clone partition table from first disk to all other disks 
     my @target_disks = map { $_->{name} } @disks[1 .. $#disks]; 
     $self->clone_partition_table({disk => $first_disk->{name}, target_disks => \@target_disks}); 
     # Create RAID partitions with the data from yaml scheduling file 
     foreach my $md (@{$args->{mds}}) { 
         $self->add_raid($md); 
     } 
 } 
 ``` 
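
 To illustrate the split, the cloning block and the `foreach my $md` loop above would each end up wrapped by one (or, for the mds, several) of the new thin modules. A hedged sketch of how one md module could look, with the same placeholder controller accessor as above and assuming the yaml test data lists the mds in order:

 ```
 # Hypothetical tests/installation/expert_partitioner/add_md0_raid1_root_devices.pm
 # (module name, accessor and data layout are assumptions)
 use base 'y2_installbase';
 use strict;
 use warnings;
 use testapi;
 use scheduler 'get_test_suite_data';

 sub run {
     # Assuming the yaml test data lists the mds in schedule order, md0 first
     my $md0 = get_test_suite_data()->{mds}[0];
     # Each module handles exactly one md instead of looping over all of them
     $testapi::distri->get_expert_partitioner()->add_raid($md0);
 }

 1;
 ```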
 

 The scope of this ticket is x86_64 in the YaST group. A follow-up ticket will tackle other groups and other archs.
 Once done, we can apply it to the test suite detect_yast2_failures, which is using `raid_gpt`.
