action #44750

closed

Enlarge disk-space of grenache-1

Added by nicksinger over 5 years ago. Updated over 5 years ago.

Status: Resolved
Priority: High
Assignee:
Category: -
Target version: -
Start date: 2018-11-29
Due date:
% Done: 0%
Estimated time:

Description

Current configuration of HDDs in grenache:

padmin@grenache:~$ pvmctl lv list
Logical Volumes
+-------+---------+--------------+-----------+-----------------+--------------+
|  VIOS | VG Name | VG Size (GB) | Free (GB) |     LV Name     | LV Size (GB) |
+-------+---------+--------------+-----------+-----------------+--------------+
| vios1 |  rootvg |    532.0     |   130.0   |      hd9var     |     1.0      |
|       |         |              |           |       hd2       |     5.0      |
|       |         |              |           |       hd4       |     1.0      |
|       |         |              |           |       hd8       |     1.0      |
|       |         |              |           |       hd1       |     10.0     |
|       |         |              |           |       hd3       |     5.0      |
|       |         |              |           | root_grenache_4 |     30.0     |
|       |         |              |           |     paging00    |     1.0      |
|       |         |              |           | root_grenache_2 |     30.0     |
|       |         |              |           |       hd6       |     1.0      |
|       |         |              |           | root_grenache_3 |     30.0     |
|       |         |              |           |       hd5       |     1.0      |
|       |         |              |           |    novalinklv   |     30.0     |
|       |         |              |           | root_grenache_1 |    130.0     |
|       |         |              |           |     livedump    |     1.0      |
|       |         |              |           |      fwdump     |     2.0      |
|       |         |              |           |    hd11admin    |     1.0      |
|       |         |              |           | root_grenache_7 |     30.0     |
|       |         |              |           |    lg_dumplv    |     1.0      |
|       |         |              |           | root_grenache_8 |     30.0     |
|       |         |              |           | root_grenache_5 |     30.0     |
|       |         |              |           |     hd10opt     |     1.0      |
|       |         |              |           | root_grenache_6 |     30.0     |
+-------+---------+--------------+-----------+-----------------+--------------+

But we have much more space available in this machine (6x600GB):

padmin@grenache:~$ pvmctl pv list
Physical Volumes
+-------+--------+----------+--------+-----------------------+-----------+
|  VIOS |  Name  | Cap (MB) | State  |      Description      | Available |
+-------+--------+----------+--------+-----------------------+-----------+
| vios1 | hdisk0 |  544792  | active | SAS RAID 0 Disk Array |   False   |
| vios1 | hdisk1 |  544792  | active | SAS RAID 0 Disk Array |   False   |
| vios1 | hdisk2 |  544792  | active | SAS RAID 0 Disk Array |   False   |
| vios1 | hdisk3 |  544792  | active | SAS RAID 0 Disk Array |   False   |
| vios1 | hdisk4 |  544792  | active | SAS RAID 0 Disk Array |   False   |
| vios1 | hdisk5 |  544792  | active | SAS RAID 0 Disk Array |    True   |
+-------+--------+----------+--------+-----------------------+-----------+

Time to extend the VG with the available disks and create a new LV to attach to the grenache-1 LPAR as a data disk.
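
A rough outline of the intended steps (exact disk names, LV names and sizes are worked out in the comments below):

# 1. extend rootvg with the currently unused disks, e.g.
extendvg rootvg hdisk1
# 2. create a dedicated data LV in the grown VG, e.g.
mklv -y data_grenache_1 rootvg 500G
# 3. map the new LV to the grenache-1 LPAR via NovaLink
pvmctl scsi create --type lv --lpar name=grenache-1 --stor-id name=data_grenache_1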


Related issues: 1 (0 open, 1 closed)

Related to openQA Infrastructure - action #44498: [ipmi][grenache-1] Incomplete job due to no space left on device (Resolved, nicksinger, 2018-11-29)

Actions #1

Updated by nicksinger over 5 years ago

  • Copied from action #44498: [ipmi][grenache-1] Incomplete job due to no space left on device added
Actions #2

Updated by nicksinger over 5 years ago

  • Copied from deleted (action #44498: [ipmi][grenache-1] Incomplete job due to no space left on device)
Actions #3

Updated by nicksinger over 5 years ago

  • Related to action #44498: [ipmi][grenache-1] Incomplete job due to no space left on device added
Actions #4

Updated by nicksinger over 5 years ago

On the VIOS, using oem_setup_env, I discovered that we have multiple disks available:

# lspath
Available ses0   sas0
Available ses1   sas0
Available ses2   sas0
Available ses3   sas0
Available pdisk0 sas0
Available pdisk1 sas0
Available pdisk2 sas0
Enabled   hdisk0 sas0
Enabled   hdisk1 sas0
Enabled   hdisk2 sas0
Available ses5   sas1
Available ses6   sas1
Available ses7   sas1
Available ses8   sas1
Available pdisk3 sas1
Available pdisk4 sas1
Available pdisk5 sas1
Enabled   hdisk3 sas1
Enabled   hdisk4 sas1
Enabled   hdisk5 sas1

(hdisk0-5). The LV list from above already looked suspicious. Checking on the VIOS confirms it:

# lsvg -p rootvg
rootvg:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk0            active            532         0           00..00..00..00..00

Until today only one disk has been in use on that machine. Trying to extend this VG with the additional disks fails:

# extendvg rootvg hdisk2
0516-1398 extendvg: The physical volume hdisk2, appears to belong to
another volume group. Use the force option to add this physical volume
to a volume group.
0516-792 extendvg: Unable to extend volume group.

I suspect that this is due to re-formatting and the back and forth between the NovaLink and HMC setups. To be sure, I checked each disk explicitly:

# lspv hdisk0
PHYSICAL VOLUME:    hdisk0                   VOLUME GROUP:     rootvg
PV IDENTIFIER:      00fa0fd0fb89efe6 VG IDENTIFIER     00fa0fd000004c00000001612c8329e1
PV STATE:           active                                     
STALE PARTITIONS:   0                        ALLOCATABLE:      yes
PP SIZE:            1024 megabyte(s)         LOGICAL VOLUMES:  23
TOTAL PPs:          532 (544768 megabytes)   VG DESCRIPTORS:   2
FREE PPs:           0 (0 megabytes)          HOT SPARE:        no
USED PPs:           532 (544768 megabytes)   MAX REQUEST:      256 kilobytes
FREE DISTRIBUTION:  00..00..00..00..00                         
USED DISTRIBUTION:  107..106..106..106..107                    
MIRROR POOL:        None                                       
# lspv hdisk1
0516-320 : Physical volume hdisk1 is not assigned to
        a volume group.                                     
# lspv hdisk2
0516-320 : Physical volume hdisk2 is not assigned to
        a volume group.
# lspv hdisk3
0516-320 : Physical volume hdisk3 is not assigned to
        a volume group.
# lspv hdisk4 
0516-320 : Physical volume hdisk4 is not assigned to
        a volume group.
# lspv hdisk5
0516-1396 : The physical volume hdisk5, was not found in the
system database.

After comparing a known-good disk (hdisk0 -> VOLUME GROUP: rootvg) with the other disks, I assume I can now follow extendvg's suggestion and use the -f (force) parameter to add them:

# extendvg -f rootvg hdisk1
# lspv hdisk1
PHYSICAL VOLUME:    hdisk1                   VOLUME GROUP:     rootvg
PV IDENTIFIER:      00fa0fd0020e5bf2 VG IDENTIFIER     00fa0fd000004c00000001612c8329e1
PV STATE:           active                                     
STALE PARTITIONS:   0                        ALLOCATABLE:      yes
PP SIZE:            1024 megabyte(s)         LOGICAL VOLUMES:  1
TOTAL PPs:          532 (544768 megabytes)   VG DESCRIPTORS:   1
FREE PPs:           462 (473088 megabytes)   HOT SPARE:        no
USED PPs:           70 (71680 megabytes)     MAX REQUEST:      256 kilobytes
FREE DISTRIBUTION:  107..36..106..106..107                     
USED DISTRIBUTION:  00..70..00..00..00                         
MIRROR POOL:        None

Seems to work. A quick check on the NovaLink side confirms it:

padmin@grenache:~$ pvmctl vg list
Volume Groups
+-------+---------+--------------+-----------+-----------------+--------------+
|  VIOS | VG Name | VG Size (GB) | Free (GB) |     LV Name     | LV Size (GB) |
+-------+---------+--------------+-----------+-----------------+--------------+
| vios1 |  rootvg |    1064.0    |   462.0   |      hd9var     |     1.0      |
|       |         |              |           |       hd2       |     5.0      |
|       |         |              |           |       hd4       |     1.0      |
|       |         |              |           |       hd8       |     1.0      |
|       |         |              |           |       hd1       |     10.0     |
|       |         |              |           |       hd3       |     5.0      |
|       |         |              |           | root_grenache_4 |     30.0     |
|       |         |              |           |     paging00    |     1.0      |
|       |         |              |           | root_grenache_2 |     30.0     |
|       |         |              |           |       hd6       |     1.0      |
|       |         |              |           | root_grenache_3 |     30.0     |
|       |         |              |           |       hd5       |     1.0      |
|       |         |              |           |    novalinklv   |     30.0     |
|       |         |              |           | root_grenache_1 |    130.0     |
|       |         |              |           |     livedump    |     1.0      |
|       |         |              |           |      fwdump     |     2.0      |
|       |         |              |           |    hd11admin    |     1.0      |
|       |         |              |           | root_grenache_7 |     30.0     |
|       |         |              |           |    lg_dumplv    |     1.0      |
|       |         |              |           | root_grenache_8 |     30.0     |
|       |         |              |           | root_grenache_5 |     30.0     |
|       |         |              |           |     hd10opt     |     1.0      |
|       |         |              |           | root_grenache_6 |     30.0     |
+-------+---------+--------------+-----------+-----------------+--------------+

The LV can be resized with extendlv, which would mean more disk space for the root volume of grenache-1:

# extendlv root_grenache_1 200G

Again, checking NovaLink confirms the change: extendlv adds the given amount (130 GB + 200 GB), so the line with "root_grenache_1" changed from "LV Size (GB) 130.0" to:

|       |         |              |           | root_grenache_1 |    330.0     |

It can also be confirmed once again on the system itself. Before:

grenache-1:~ # lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0  130G  0 disk 
├─sda1   8:1    0    7M  0 part 
├─sda2   8:2    0    2G  0 part [SWAP]
└─sda3   8:3    0  128G  0 part /

After:

grenache-1:~ # lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0  330G  0 disk 
├─sda1   8:1    0    7M  0 part 
├─sda2   8:2    0    2G  0 part [SWAP]
└─sda3   8:3    0  128G  0 part /

Since resizing the root would require some offline adjustments and could kill the worker, I want to look into adding another LV to grenache-1 instead of just expanding the root LV.
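
For reference, a minimal sketch of what growing the root partition would roughly involve (hypothetical commands; this assumes sda3 can simply be grown to the end of the disk and that the root filesystem supports online growing, e.g. ext4 - neither is verified here, which is exactly the risk mentioned above):

grenache-1:~ # parted /dev/sda resizepart 3 100%   # grow sda3 into the new space
grenache-1:~ # resize2fs /dev/sda3                 # grow the filesystem (ext4 assumed)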

Actions #5

Updated by nicksinger over 5 years ago

Extending the VG with hdisk5 yielded the following warning:

# extendvg -f rootvg hdisk5
0516-1254 extendvg: Changing the PVID in the ODM.

Again, I assume this is caused by previous setups. The final VG configuration with all disks looks as follows:

padmin@grenache:~$ pvmctl vg list
Volume Groups
+-------+---------+--------------+-----------+-----------------+--------------+
|  VIOS | VG Name | VG Size (GB) | Free (GB) |     LV Name     | LV Size (GB) |
+-------+---------+--------------+-----------+-----------------+--------------+
| vios1 |  rootvg |    3192.0    |   2590.0  |      hd9var     |     1.0      |
|       |         |              |           |       hd2       |     5.0      |
|       |         |              |           |       hd4       |     1.0      |
|       |         |              |           |       hd8       |     1.0      |
|       |         |              |           |       hd1       |     10.0     |
|       |         |              |           |       hd3       |     5.0      |
|       |         |              |           | root_grenache_4 |     30.0     |
|       |         |              |           |     paging00    |     1.0      |
|       |         |              |           | root_grenache_2 |     30.0     |
|       |         |              |           |       hd6       |     1.0      |
|       |         |              |           | root_grenache_3 |     30.0     |
|       |         |              |           |       hd5       |     1.0      |
|       |         |              |           |    novalinklv   |     30.0     |
|       |         |              |           | root_grenache_1 |    330.0     |
|       |         |              |           |     livedump    |     1.0      |
|       |         |              |           |      fwdump     |     2.0      |
|       |         |              |           |    hd11admin    |     1.0      |
|       |         |              |           | root_grenache_7 |     30.0     |
|       |         |              |           |    lg_dumplv    |     1.0      |
|       |         |              |           | root_grenache_8 |     30.0     |
|       |         |              |           | root_grenache_5 |     30.0     |
|       |         |              |           |     hd10opt     |     1.0      |
|       |         |              |           | root_grenache_6 |     30.0     |
+-------+---------+--------------+-----------+-----------------+--------------+
Actions #6

Updated by nicksinger over 5 years ago

Creating LV:

# mklv -y data_grenache_1 rootvg 500G
data_grenache_1
# lslv data_grenache_1
LOGICAL VOLUME:     data_grenache_1        VOLUME GROUP:   rootvg
LV IDENTIFIER:      00fa0fd000004c00000001612c8329e1.24 PERMISSION:     read/write
VG STATE:           active/complete        LV STATE:       closed/syncd
TYPE:               jfs                    WRITE VERIFY:   off
MAX LPs:            512                    PP SIZE:        1024 megabyte(s)
COPIES:             1                      SCHED POLICY:   parallel
LPs:                500                    PPs:            500
STALE PPs:          0                      BB POLICY:      relocatable
INTER-POLICY:       minimum                RELOCATABLE:    yes
INTRA-POLICY:       middle                 UPPER BOUND:    32
MOUNT POINT:        N/A                    LABEL:          None
MIRROR WRITE CONSISTENCY: on/ACTIVE                              
EACH LP COPY ON A SEPARATE PV ?: yes                                    
Serialize IO ?:     NO                                     
INFINITE RETRY:     no
Actions #7

Updated by nicksinger over 5 years ago

The last step is to connect the new LV to grenache-1 as a separate disk (I only managed to do this via NovaLink, not from the VIOS itself):

pvmctl scsi create --type lv --lpar name=grenache-1 --stor-id name=data_grenache_1

Afterwards one can check if it was successful:

padmin@grenache:~$ pvmctl scsi list
Virtual SCSI Mappings
+------------------+-----------+-------+-----------+-----------------+
|       LPAR       | LPAR Slot |  VIOS | VIOS Slot |     Storage     |
+------------------+-----------+-------+-----------+-----------------+
|    grenache-1    |     3     | vios1 |     4     | root_grenache_1 |
|    grenache-1    |     3     | vios1 |     4     | data_grenache_1 |
|    grenache-2    |     2     | vios1 |     5     | root_grenache_2 |
|    grenache-3    |     2     | vios1 |     7     | root_grenache_3 |
|    grenache-4    |     2     | vios1 |     8     | root_grenache_4 |
|    grenache-5    |     2     | vios1 |     9     | root_grenache_5 |
|    grenache-6    |     2     | vios1 |     10    | root_grenache_6 |
|    grenache-7    |     2     | vios1 |     11    | root_grenache_7 |
|    grenache-8    |     2     | vios1 |     12    | root_grenache_8 |
| novalink_210FD0W |     4     | vios1 |     2     |    novalinklv   |
+------------------+-----------+-------+-----------+-----------------+

As you can see, grenache-1 now has two storage entries (i.e. LVs, which show up as disks): root_grenache_1 and data_grenache_1.
Afterwards I logged into grenache-1 and scanned the SCSI bus for new devices:

grenache-1:~ # ls -lah /sys/class/scsi_host/
total 0
drwxr-xr-x  2 root root 0 Dec 10 14:34 .
drwxr-xr-x 38 root root 0 Dec  4 14:55 ..
lrwxrwxrwx  1 root root 0 Dec  4 14:55 host0 -> ../../devices/vio/30000003/host0/scsi_host/host0
grenache-1:~ # echo "- - -" > /sys/class/scsi_host/host0/scan

This now gives us one additional disk:

grenache-1:~ # lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0  330G  0 disk 
├─sda1   8:1    0    7M  0 part 
├─sda2   8:2    0    2G  0 part [SWAP]
└─sda3   8:3    0  128G  0 part /
sdb      8:16   0  500G  0 disk 

On this disk one can now create a new partition and filesystem with the usual Linux tools.
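
A minimal sketch of that step (the single-partition layout and the choice of ext4 are assumptions; the device name sdb comes from the lsblk output above):

grenache-1:~ # parted -s /dev/sdb mklabel gpt mkpart data ext4 0% 100%   # one partition spanning the disk
grenache-1:~ # mkfs.ext4 /dev/sdb1                                       # create the filesystem
grenache-1:~ # mount /dev/sdb1 /mnt                                      # mount it somewhere for now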

Actions #8

Updated by nicksinger over 5 years ago

  • Status changed from In Progress to Feedback

rsync'ed /var/lib/openqa to the new disk, added an fstab entry for it, remounted it and restarted the workers.
There was some trouble with the cache service for which I need to create follow-up tickets.
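
A rough reconstruction of these steps (the temporary mount point, filesystem type and worker unit names are assumptions; only the general procedure is taken from the comment above):

grenache-1:~ # systemctl stop openqa-worker.target                       # stop workers while the data moves
grenache-1:~ # mount /dev/sdb1 /mnt
grenache-1:~ # rsync -aHAX /var/lib/openqa/ /mnt/
grenache-1:~ # umount /mnt
grenache-1:~ # echo '/dev/sdb1 /var/lib/openqa ext4 defaults 0 0' >> /etc/fstab
grenache-1:~ # mount /var/lib/openqa
grenache-1:~ # systemctl start openqa-worker.target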

Actions #9

Updated by nicksinger over 5 years ago

  • Status changed from Feedback to Resolved