[How To] Add disk space to LVM with 2 physical volumes in VMware

So, here is the scenario: an Ubuntu 14.04 server running in a VMware ESX farm, partitioned with LVM. A 2nd virtual hard disk of 150GB is mounted on /opt and used as a storage point. The OS has a dedicated 40GB disk, with /home, /tmp, /usr and /var given separate mount points via LVM partitioning. The disk space in /opt now needs to be increased to 500GB, as there is very little space left. Needless to say, no data should be lost while expanding the disk.

There are 2 options to increase disk space in VMware.

1. Add a 3rd hard disk of 350GB.
2. Increase the existing 2nd hard disk's size from 150GB to 500GB.

Here, I chose the 2nd option; I will discuss the first option in detail in a separate post. First things first: shut down the VM, edit its settings from the VMware vSphere console, change the size of the 2nd hard disk to 500GB, and power the VM back up. Bear in mind that changing the disk size in vSphere's settings alone will not be reflected inside Ubuntu. At this point, if you check the status with df -h, it will display as shown below. Note: I'm running as root; you can choose to prefix the commands with sudo from your user login instead.

root@ubuntu-vm:~# df -h
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/ubuntu_root-root  7.8G  359M  7.0G   5% /
none                        4.0K     0  4.0K   0% /sys/fs/cgroup
udev                        7.9G  4.0K  7.9G   1% /dev
tmpfs                       1.6G  1.5M  1.6G   1% /run
none                        5.0M     0  5.0M   0% /run/lock
none                        7.9G  152K  7.9G   1% /run/shm
none                        100M   20K  100M   1% /run/user
/dev/mapper/ubuntu_data-opt  148G   60M  140G   1% /opt
/dev/sda1                   232M   47M  169M  22% /boot
/dev/mapper/ubuntu_root-tmp   3.9G  7.6M  3.7G   1% /tmp
/dev/mapper/ubuntu_root-home  3.9G  197M  3.5G   6% /home
/dev/mapper/ubuntu_root-usr   3.9G  2.8G  948M  75% /usr
/dev/mapper/ubuntu_root-var   9.8G  1.1G  8.2G  12% /var
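As a side note, the shutdown can usually be avoided: vSphere generally allows growing a virtual disk while the VM is running, and the guest kernel can be told to re-read the device's capacity. A minimal sketch, assuming the same /dev/sdb device as in this setup:

```shell
# Ask the kernel to re-read /dev/sdb's capacity after the vSphere resize
# (run as root; the device name matches this article's setup):
echo 1 > /sys/class/block/sdb/device/rescan

# Confirm the new size. The value is reported in 512-byte sectors, so a
# 500GB disk should show 500 * 1024^3 / 512 = 1048576000 sectors:
cat /sys/class/block/sdb/size
```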

Check the existing physical volumes using the pvs command. As you can see, the 2nd hard disk is /dev/sdb, so there is no need to run fdisk -l to find the disk name. Don't forget to make a note of it.

root@ubuntu-vm:~# pvs
PV         VG           Fmt  Attr   PSize   PFree
/dev/sda5  ubuntu_root  lvm2 a--    39.76g  8.11g
/dev/sdb   ubuntu_data  lvm2 a--    150.00g    0
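Because the VM was power-cycled after the vSphere resize, the kernel already sees the 500GB disk even though the LVM metadata above still says 150GB. A quick sketch to make that gap visible (the device name is from this setup; adjust to yours):

```shell
# The kernel's view of the disk (~500G) vs. pvs's view above (150G):
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT /dev/sdb
```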

STOP!! The following steps require good Linux administration knowledge. Please perform them first in a demo environment. Do not attempt them on production machines if you are not familiar with Linux LVM. I will not be responsible for lost data, disks or jobs.

Now, we will use the pvresize command and specify the volume size, which in our case is going to be 500GB. Get the 2nd disk's name right: pointing the command at the wrong disk will throw an error or, worse, cause unnecessary complications. (Since the VM was power-cycled, the kernel already sees the enlarged disk, so a plain pvresize /dev/sdb without --setphysicalvolumesize would also pick up the new size.)

root@ubuntu-vm:~# pvresize --setphysicalvolumesize 500G /dev/sdb
Physical volume "/dev/sdb" changed
1 physical volume(s) resized / 0 physical volume(s) not resized

After resizing the physical volume, check the status again. You will see that the physical volume size (PSize) has increased to 500GB and you have 350GB of free space.

root@ubuntu-vm:~# pvs
PV         VG           Fmt  Attr  PSize    PFree
/dev/sda5  ubuntu_root  lvm2 a--   39.76g   5.76g
/dev/sdb   ubuntu_data  lvm2 a--   500.00g  350.00g

Also, if you check the volume group, it will show the same 350GB free. Since the physical volume /dev/sdb is already part of the volume group ubuntu_data, there is no need to extend the volume group.

root@ubuntu-vm:~# vgdisplay ubuntu_data | grep "Free"
Free  PE / Size       89600 / 350.00 GiB
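The numbers line up with LVM's default 4 MiB physical extent size, which is worth verifying before extending anything:

```shell
# 89600 free physical extents * 4 MiB per extent, expressed in GiB:
echo $((89600 * 4 / 1024))   # prints 350
```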

Now, let's extend the logical volume to cover all the free space. I did not pass 350G directly, as lvextend refused to take it, and giving 349G as the value would mean almost 1 GiB is lost. Hence, I converted the total into MiB (358400) and trimmed off 100 MiB so that the lvextend command runs successfully.

root@ubuntu-vm:~# lvextend -L+358300 /dev/mapper/ubuntu_data-opt
Extending logical volume opt to 499.90 GiB
Logical volume opt successfully resized
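The 358300 figure comes from lvextend's default unit, MiB: 350 GiB is 358400 MiB, minus the 100 MiB margin. An alternative that sidesteps the arithmetic entirely is to extend by free extents (a sketch against the same LV as above; run as root on your own system):

```shell
# 350 GiB expressed in MiB (lvextend's default unit):
echo $((350 * 1024))   # prints 358400

# Alternative: consume every remaining free extent, no arithmetic needed
# (commented out here; destructive-adjacent, run only against your own LV):
# lvextend -l +100%FREE /dev/mapper/ubuntu_data-opt
```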

Now, run resize2fs to grow the ext4 filesystem itself; extending the logical volume does not enlarge the filesystem sitting on top of it. (Newer LVM releases can do both steps in one go with lvextend -r, which invokes the filesystem resize for you.)

root@ubuntu-vm:~# resize2fs /dev/mapper/ubuntu_data-opt
resize2fs 1.42.9 (4-Feb-2014)
Filesystem at /dev/mapper/ubuntu_data-opt is mounted on /opt; on-line resizing required
old_desc_blocks = 10, new_desc_blocks = 32
The filesystem on /dev/mapper/ubuntu_data-opt is now 131045376 blocks long.
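As a quick cross-check of resize2fs's report, assuming the default ext4 block size of 4 KiB, the final block count should land just under 500 GiB:

```shell
# 131045376 blocks * 4 KiB per block, expressed in whole GiB:
echo $((131045376 * 4 / 1024 / 1024))   # prints 499
```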

The resize happens online and no reboot is needed. Run df -h to check the size of /opt; you'll see it has increased to ~500GB.

root@ubuntu-vm:~# df -h
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/ubuntu_root-root  7.8G  359M  7.0G   5% /
none                        4.0K     0  4.0K   0% /sys/fs/cgroup
udev                        7.9G  4.0K  7.9G   1% /dev
tmpfs                       1.6G  1.5M  1.6G   1% /run
none                        5.0M     0  5.0M   0% /run/lock
none                        7.9G  152K  7.9G   1% /run/shm
none                        100M   20K  100M   1% /run/user
/dev/mapper/ubuntu_data-opt  492G   71M  471G   1% /opt
/dev/sda1                   232M   47M  169M  22% /boot
/dev/mapper/ubuntu_root-tmp   3.9G  7.6M  3.7G   1% /tmp
/dev/mapper/ubuntu_root-home  3.9G  197M  3.5G   6% /home
/dev/mapper/ubuntu_root-usr   3.9G  2.8G  948M  75% /usr
/dev/mapper/ubuntu_root-var   9.8G  1.1G  8.2G  12% /var

Sayantan Das

Sayantan is a DevOps consultant by day and works mainly with Ansible and Linux. He is an AWS SA and Red Hat Certified Engineer. He loves to tinker with Linux systems.
