Logical Volume Manager (LVM)

LVM (Logical Volume Manager) is a storage-management layer in the Linux kernel that virtualizes the underlying storage devices. Using it, you can add and remove disks from volume groups, create logical volumes inside a group, and then resize them dynamically without taking the filesystem offline.

Scenario (an at-a-glance command flow follows this list):

  1. Create three partitions of 100MB each.
  2. Convert them into physical volumes.
  3. Combine the physical volumes into a volume group.
  4. Finally, create a logical volume from the volume group.
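
At a glance, the whole scenario boils down to the command flow below. This is only a preview of the steps that follow; the device names and sizes are the ones used in this example.

# fdisk /dev/sdb                              ( carve out sdb1, sdb2 and sdb3, ~100MB each )
# pvcreate /dev/sdb1 /dev/sdb2 /dev/sdb3      ( turn the partitions into physical volumes )
# vgcreate vg1 /dev/sdb1 /dev/sdb2            ( pool physical volumes into a volume group )
# lvcreate -L 200M vg1 -n lv1                 ( carve a logical volume out of the group )
# mkfs.ext4 /dev/vg1/lv1                      ( create a filesystem on the logical volume )
# mount /dev/vg1/lv1 /mnt                     ( and mount it )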


Step-1. Create 3 Partitions

# fdisk /dev/sdb

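A rough sketch of the interactive fdisk session for one ~100MB partition follows; repeat it for the second and third partitions. The exact prompts vary between fdisk versions and the ranges shown as "..." depend on the disk. Tagging the partition as type 8e (Linux LVM) with the t command is a common convention but optional; the listing below shows the default type 83, which pvcreate accepts as well.

Command (m for help): n                       ( create a new partition )
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (..., default 1): <Enter>      ( accept the default start )
Last cylinder, +cylinders or +size{K,M,G} (...): +100M
Command (m for help): t                       ( optionally set the partition type )
Hex code (type L to list codes): 8e           ( 8e = Linux LVM )
Command (m for help): w                       ( write the partition table and exit )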

Now we will verify the newly created partitions using the fdisk command.

# fdisk -l

 Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1          14      112423+  83  Linux
/dev/sdb2              15          28      112455   83  Linux
/dev/sdb3              29          42      112455   83  Linux


Note: If the server was installed in minimal mode, the commands pvcreate, vgcreate, lvcreate, etc. will not be found. To use these commands, install the lvm2 package first.

# yum install lvm2
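
On Debian or Ubuntu based systems the equivalent command (the package is also named lvm2 in their standard repositories) would be:

# apt-get install lvm2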

Step-2. Create Physical Volumes

# pvcreate /dev/sdb1 /dev/sdb2 /dev/sdb3 
______________________________________________
  Physical volume "/dev/sdb1" successfully created
  Physical volume "/dev/sdb2" successfully created
  Physical volume "/dev/sdb3" successfully created
______________________________________________
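
As a side note (not part of this scenario), a whole unpartitioned disk can also be initialized as a physical volume; the device name here is hypothetical:

# pvcreate /dev/sdc           ( use an entire disk as a physical volume, no partitioning needed )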

# pvs                (Report information about physical volumes)
____________________________________________
  PV         VG   Fmt  Attr PSize   PFree
  /dev/sdb1       lvm2 a-   109.82m 109.82m
  /dev/sdb2       lvm2 a-   109.82m 109.82m
  /dev/sdb3       lvm2 a-   109.82m 109.82m
____________________________________________

# pvscan                 (Scan all disks for physical volumes)

____________________________________________
  PV /dev/sdb1                      lvm2 [ 109.79 MiB]
  PV /dev/sdb2                      lvm2 [ 109.79 MiB]
  PV /dev/sdb3                      lvm2 [ 109.79 MiB]
  Total: 3 [329.46 MiB] / in use: 0 [0   ] / in no VG: 3 [329.46 MiB]

____________________________________________
 
# pvdisplay          (Display attributes of a physical volume or verify the physical volumes)
_______________________________________________________
"/dev/sdb1" is a new physical volume of "109.79 MiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdb1
  VG Name               
  PV Size               109.79 MiB
  Allocatable           NO
  PE Size               0   
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               jQl5F4-DyLj-SkHu-4lhZ-J3nQ-zax9-aT8sc4
   
  "/dev/sdb2" is a new physical volume of "109.82 MiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdb2
  VG Name               
  PV Size               109.82 MiB
  Allocatable           NO
  PE Size               0   
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               i4MHvw-8hYB-Fwz8-fxTL-G3mu-fl5E-zGYhDO
   
  "/dev/sdb3" is a new physical volume of "109.82 MiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdb3
  VG Name               
  PV Size               109.82 MiB
  Allocatable           NO
  PE Size               0   
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               99qkNw-3oAw-vXwg-WE6U-zyKO-Ffs3-rDSqUY
_______________________________________________________

Step-3. Create a Volume Group

Create a volume group called vg1 using two physical volumes, /dev/sdb1 and /dev/sdb2.

# vgcreate vg1 /dev/sdb1 /dev/sdb2
  Volume group "vg1" successfully created
 
# vgs                 ( Report information about volume groups)
 
  VG    #PV  #LV  #SN  Attr    VSize       VFree
  vg1     2    0    0  wz--n-  216.00 MiB  216.00 MiB

# vgscan           (Scan all disks for volume groups and rebuild caches)
 
  Reading all physical volumes.  This may take a while...
  Found volume group "vg1" using metadata type lvm2
 
# vgdisplay                  (Display attributes of volume groups)
______________________________________________________
--- Volume group ---
  VG Name               vg1
  System ID             
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               216.00 MiB
  PE Size               4.00 MiB
  Total PE              54
  Alloc PE / Size       0 / 0   
  Free  PE / Size       54 / 216.00 MiB
  VG UUID               ds3OtP-DMUx-33nN-HDar-eqNj-uIED-41gjqI

______________________________________________________
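
The PE Size of 4.00 MiB shown above is the default physical extent size. As a side note (not part of this scenario), a different extent size could be chosen with the -s option of vgcreate when the group is created, for example:

# vgcreate -s 8M vg1 /dev/sdb1 /dev/sdb2      ( 8 MiB physical extents instead of the default 4 MiB )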
   
Step-4. Create a Logical Volume

Create a logical volume called lv1 with a size of 200MB.


# lvcreate -L 200M vg1 -n lv1
  Logical volume "lv1" created

# lvs
______________________________________________________  
  LV    VG    Attr    LSize    Origin  Snap%  Move  Log  Copy%  Convert
  lv1   vg1   -wi-a-  200.00m
______________________________________________________
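
As a side note (not part of this scenario), lvcreate can also take a relative size with -l instead of an absolute size with -L; for example, the following would give a hypothetical second volume all of the free space left in vg1:

# lvcreate -l 100%FREE -n lv2 vg1             ( use all remaining free space in vg1 )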

# lvdisplay 
______________________________________________________ 
 --- Logical volume ---
  LV Name                /dev/vg1/lv1
  VG Name                vg1
  LV UUID                dgLZ79-JZdn-NUSF-fUS1-YVFk-36qs-iuafhE
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                200.00 MiB
  Current LE             50
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
 ______________________________________________________
It’s done. The logical volume lv1 has been created.

Step-5. Format and Mount the Logical Volume

# mkfs.ext4 /dev/vg1/lv1

Mount the logical volume:
# mount /dev/vg1/lv1 /mnt/          ( /mnt - mount point )

Now the logical volume is successfully mounted on /mnt.
You can use the new logical volume to store your data.

# cd /mnt/
# touch ashu1 ashu2 ashu3 ashu4
# mkdir ashu-1
# ls

ashu1  ashu2  ashu3  ashu4  ashu-1  lost+found
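
The mount above does not survive a reboot. To mount the volume automatically at boot you would typically add an entry to /etc/fstab; a minimal sketch, assuming the /mnt mount point and the ext4 filesystem created above:

# echo '/dev/vg1/lv1  /mnt  ext4  defaults  0 0' >> /etc/fstab
# mount -a                                    ( re-reads /etc/fstab and mounts everything listed there )
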
Step-6. Extend the Volume Group

If you are running out of space in a logical volume, you can easily extend it, as long as the volume group has free space or you add an additional physical disk. For example, let us extend the volume group vg1 using the physical volume /dev/sdb3, and then add an additional 100MB to the logical volume lv1.

# vgextend vg1 /dev/sdb3
  Volume group "vg1" successfully extended

Step-7. Resize the Logical Volume
Then resize the logical volume lv1.

# lvresize -L +100M /dev/vg1/lv1
  Extending logical volume lv1 to 300.00 MiB
  Logical volume lv1 successfully resized

Step-8. Resize the Filesystem of Logical Volume lv1
 
# resize2fs /dev/vg1/lv1
  resize2fs 1.41.12 (17-May-2010)
  Filesystem at /dev/vg1/lv1 is mounted on /mnt; on-line resizing required
  old desc_blocks = 1, new_desc_blocks = 2
  Performing an on-line resize of /dev/vg1/lv1 to 307200 (1k) blocks.
  The filesystem on /dev/vg1/lv1 is now 307200 blocks long.
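
As a side note, newer lvresize/lvextend versions can grow the filesystem in the same step with the -r (--resizefs) option, which would combine Steps 7 and 8 into one command (assuming the filesystem supports online resizing, as ext4 does):

# lvresize -r -L +100M /dev/vg1/lv1           ( resize the LV and its filesystem together )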

Step-9. Now verify the new size of the logical volume lv1

# lvdisplay /dev/vg1/lv1 
________________________________________________________________
 --- Logical volume ---
  LV Name                /dev/vg1/lv1
  VG Name                vg1
  LV UUID                dgLZ79-JZdn-NUSF-fUS1-YVFk-36qs-iuafhE
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                300.00 MiB
  Current LE             75
  Segments               3
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
________________________________________________________________
It’s done. The size of the logical volume lv1 has now been extended by 100MB.

Step-10. Remove the Logical Volume

Come out of the /mnt mount point, unmount the logical volume lv1 and remove it using the lvremove command.

# cd ..
# umount /mnt/
# lvremove /dev/vg1/lv1
  Do you really want to remove active logical volume lv1? [y/n]: y
  Logical volume "lv1" successfully removed

Step-11. Remove the Volume Group

# vgremove vg1
  Volume group "vg1" successfully removed

Step-12. Remove the Physical Volumes

# pvremove /dev/sdb1 /dev/sdb2 /dev/sdb3
  Labels on physical volume "/dev/sdb1" successfully wiped
  Labels on physical volume "/dev/sdb2" successfully wiped
  Labels on physical volume "/dev/sdb3" successfully wiped



Moving an LVM to a Different Computer or Server-

To Export:
1- Make sure that no users are accessing files on the active volumes in the volume group, then unmount the logical volumes.   
Unmount the volume:
# umount /mnt                  ( /path/to/mounted/LVM/ )

2- Use the -an argument of the vgchange command to mark the volume group as inactive, which prevents any further activity on it.

# vgchange -an vg1

3- Use the vgexport command to export the volume group. This prevents it from being accessed by the system from which you are removing it.

# vgexport vg1

Note: After you export the volume group, the physical volumes will show up as being in an exported volume group when you execute the pvscan command.

# pvscan

Done! Your LVM is now ready to be moved to an entirely different computer (well, as long as it's Linux)

To Import:
(Note: this is for LVM2)

1- When the disks are plugged into the new system, use the vgimport command to import the volume group, making it accessible to the new system. 

# vgimport vg1

2- Activate the volume group with the -ay argument of the vgchange command.

# vgchange -ay  vg1

3- Mount the file system to make it available for use.

# mount /dev/vg1/lv1  /var/ashu           ( /var/ashu - path to the new mount point )
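
Note that the mount point must exist before mounting; if /var/ashu (the example path used above) is not already present on the new server, create it first:

# mkdir -p /var/ashu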

That's it! Your LVM is now at a new server and ready to grow!


_____________________________________________________________________________________________
LVM Interview Questions & Answers

1. What are LVM1 and LVM2?

LVM1 and LVM2 are the two versions of LVM.
LVM2 uses the device-mapper driver contained in the 2.6 kernel series.
LVM1 was included in the 2.4 series kernels.

2. What is the maximum size of a single LV?

For 2.4 based kernels, the maximum LV size is 2TB. 
For 32-bit CPUs on 2.6 kernels, the maximum LV size is 16TB.
For 64-bit CPUs on 2.6 kernels, the maximum LV size is 8EB. 

3. List the important LVM-related files and directories.

## Directories
/etc/lvm                - default lvm directory location
/etc/lvm/backup         - where the automatic backups go
/etc/lvm/cache          - persistent filter cache
/etc/lvm/archive        - where automatic archives go after a volume group change
/var/lock/lvm           - lock files to prevent metadata corruption

## Files
/etc/lvm/lvm.conf       - main lvm configuration file
$HOME/.lvm              - lvm history 

4. How do you find out whether a server is configured with LVM on software RAID?

1. Check the software RAID status in /proc/mdstat:

 # cat /proc/mdstat
  or
 # mdadm --detail /dev/mdx
  or
 # lsraid -a /dev/mdx

2. Check the volume group disks:

 # vgdisplay -v vg01

 If the physical volumes listed are device names like /dev/md1 and /dev/md2, it means RAID (md) devices are configured and added to the volume group.
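
 A quick way to spot this (a sketch; it assumes the RAID members appear as /dev/md devices as described above) is to filter the physical volume report:

 # pvs | grep /dev/md          ( list only physical volumes that are md RAID devices )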


5. How do you check whether a server is configured with multipath disks?

1. Using the dmsetup command and the /dev/mapper directory:

 # ls -lrt /dev/mapper          ( view the device-mapper disk paths and logical volumes )

 # dmsetup table

 # dmsetup ls

 # dmsetup status

2. Using the multipathd command (daemon):


 # echo 'show paths' | multipathd -k

 # echo 'show maps' | multipathd -k

3. Check whether the multipath daemon is running:

 # ps -eaf | grep -i multipathd

4. Check the VG disk paths:

 # vgs
  or
 # vgdisplay -v vg01

If multipath disks are added and configured in the VG, the listed disk paths will look like /dev/mpath0 and /dev/mpath1.

5. To check the disk path status, you can also use the interactive multipathd shell:

 # multipathd -k
 multipathd> show multipaths status
 multipathd> show topology
 multipathd> show paths



_____________________________________________________________________________________________