action #105449

Updated by geor almost 3 years ago

## Motivation 
 The FCP topology on our z/VM testing infrastructure has recently been updated and the [zfcp testsuite](https://openqa.suse.de/tests/latest?distri=sle&flavor=Online&test=zfcp&version=15-SP4) needs a new test module to validate multipathing. 

 ## Task 
 The idea is to check the status of the known infrastructure, and the status of multipathing on top of it. 

**AC1**: Verify that there are two host bus adapters attached and that the corresponding channels (`0.0.fa00`, `0.0.fc00`) are listed under `/sys/class/fc_host`:
 ``` 
 # ls -l /sys/class/fc_host 
 total 0 
 lrwxrwxrwx 1 root root 0 Dec 19 17:53 host0 -> ../../devices/css0/0.0.0005/0.0.fa00/host0/fc_host/host0 
 lrwxrwxrwx 1 root root 0 Dec 19 17:53 host1 -> ../../devices/css0/0.0.0006/0.0.fc00/host1/fc_host/host1 
 ``` 
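
A minimal sketch of how AC1 could be expressed in the new test module, assuming the usual `testapi` helpers (`script_output`, `record_info`) that other zfcp test modules use; the helper name `check_fc_hosts` is illustrative only and the channel list is taken from this ticket:
```
use strict;
use warnings;
use testapi;

# Hypothetical helper: two HBAs attached, each configured channel visible in sysfs
sub check_fc_hosts {
    my @channels = qw(0.0.fa00 0.0.fc00);    # values taken from this ticket
    my $out = script_output 'ls -l /sys/class/fc_host';
    my @hosts = grep { /host\d+\s+->/ } split /\n/, $out;
    die 'Expected 2 fc_host entries, found ' . scalar @hosts unless @hosts == 2;
    for my $chan (@channels) {
        die "Channel $chan not listed under /sys/class/fc_host" unless $out =~ /\Q$chan\E/;
    }
    record_info 'fc_host', "Found 2 HBAs with channels @channels";
}
```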

**AC2**: Based on the output of `lsscsi -xxgst`:
 ``` 
 # lsscsi -xxgst 
 [0:0:0:0x4001403200000000]    disk      fc:0x500507630703d3b3,0x760f00     /dev/sda     /dev/sg0      214GB 
 [0:0:0:0x4001405000000000]    disk      fc:0x500507630703d3b3,0x760f00     /dev/sdb     /dev/sg1     42.9GB 
 [0:0:1:0x4001403200000000]    disk      fc:0x500507630708d3b3,0x761000     /dev/sdc     /dev/sg2      214GB 
 [0:0:1:0x4001405000000000]    disk      fc:0x500507630708d3b3,0x761000     /dev/sdd     /dev/sg3     42.9GB 
 [1:0:0:0x4001403200000000]    disk      fc:0x500507630718d3b3,0x771000     /dev/sde     /dev/sg4      214GB 
 [1:0:0:0x4001405000000000]    disk      fc:0x500507630718d3b3,0x771000     /dev/sdf     /dev/sg5     42.9GB 
 [1:0:1:0x4001403200000000]    disk      fc:0x500507630713d3b3,0x770f00     /dev/sdg     /dev/sg6      214GB 
 [1:0:1:0x4001405000000000]    disk      fc:0x500507630713d3b3,0x770f00     /dev/sdh     /dev/sg7     42.9GB 
 ``` 

* verify that 8 SCSI block devices are listed (two adapters, each seeing two remote ports, with two LUNs behind each port: 2^3 = 8)
* verify that LUN `0x4001403200000000` corresponds to the 214GB disk and LUN `0x4001405000000000` to the 42.9GB disk (a sketch for both checks follows below)
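
Continuing the same hypothetical test module, a sketch of how the two AC2 checks could look; the LUN-to-size mapping is copied from the example output above and would have to match the real infrastructure:
```
use testapi;

# Hypothetical helper: 8 SCSI disks, each LUN mapped to the expected size
sub check_lsscsi {
    my %expected_size = (
        '0x4001403200000000' => '214GB',    # values from the example output above
        '0x4001405000000000' => '42.9GB',
    );
    my $out   = script_output 'lsscsi -xxgst';
    my @disks = grep { /^\[\d+:\d+:\d+:0x[0-9a-f]+\]\s+disk/ } split /\n/, $out;
    die 'Expected 8 SCSI block devices, found ' . scalar @disks unless @disks == 8;
    for my $line (@disks) {
        my ($lun)  = $line =~ /^\[\d+:\d+:\d+:(0x[0-9a-f]+)\]/;
        my ($size) = $line =~ /(\S+)\s*$/;
        die "Unexpected LUN $lun in lsscsi output" unless exists $expected_size{$lun};
        die "LUN $lun reports $size, expected $expected_size{$lun}" unless $size eq $expected_size{$lun};
    }
}
```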

**AC3**: Based on the output of `multipath -l`:
 ``` 
 # multipath -l 
 36005076307ffd3b30000000000000132 dm-0 IBM,2107900 
 size=200G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw 
 `-+- policy='service-time 0' prio=0 status=enabled 
   |- 0:0:0:1077035009 sda 8:0    active undef running 
   |- 0:0:1:1077035009 sdc 8:32 active undef running 
   |- 1:0:0:1077035009 sde 8:64 active undef running 
   `- 1:0:1:1077035009 sdg 8:96 active undef running 
 36005076307ffd3b30000000000000150 dm-41 IBM,2107900 
 size=40G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw 
 `-+- policy='service-time 0' prio=0 status=enabled 
   |- 0:0:0:1079001089 sdb 8:16    active undef running 
   |- 0:0:1:1079001089 sdd 8:48    active undef running 
   |- 1:0:1:1079001089 sdh 8:112 active undef running 
   `- 1:0:0:1079001089 sdf 8:80    active undef running 
 ``` 

* verify that the multipath device with WWID `36005076307ffd3b30000000000000132` corresponds to the 200G disk and the multipath device with WWID `36005076307ffd3b30000000000000150` to the 40G disk
* verify that 4 block devices (paths) are listed for each multipath device (see the sketch below)

 
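A corresponding sketch for AC3; again, the WWIDs and sizes are the ones from the example output and only serve as an illustration:
```
use testapi;

# Hypothetical helper: two multipath maps, each with the expected size and 4 paths
sub check_multipath {
    my %expected_size = (
        '36005076307ffd3b30000000000000132' => '200G',   # values from the example output above
        '36005076307ffd3b30000000000000150' => '40G',
    );
    my $out = script_output 'multipath -l';
    # Each map starts with a header line "<WWID> dm-N <vendor>,<product>"
    my @maps = split /^(?=\S+\s+dm-\d+\s)/m, $out;
    die 'Expected 2 multipath maps, found ' . scalar @maps unless @maps == 2;
    for my $map (@maps) {
        my ($wwid) = $map =~ /^(\S+)\s+dm-\d+/;
        die "Unexpected WWID $wwid" unless exists $expected_size{$wwid};
        die "Wrong size for $wwid" unless $map =~ /size=\Q$expected_size{$wwid}\E\b/;
        # One path line per SCSI block device, e.g. "|- 0:0:0:1077035009 sda 8:0 active ..."
        my @paths = $map =~ /\b(sd[a-z]+)\s+\d+:\d+\s/g;
        die "Expected 4 paths for $wwid, found " . scalar @paths unless @paths == 4;
    }
}
```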

 ## Suggestion 
It might help to check [this Confluence article](https://confluence.suse.com/display/QYT/Mainframe+Musings%3A+Playing+around+with+FCP+and+multipath#MainframeMusings:PlayingaroundwithFCPandmultipath-Faulttoleranceinaction) for more insight into the output of the commands above.
