Migrating the root disk into a mirror in LVM
I’ve found several manuals describing how to add a second disk to a Volume Group and transform it into a mirror. It does not look complex, and anyone can do it when everything works as described in the documentation. The problems start when something goes wrong and one of the steps fails.
In my environment there is a virtual machine with CentOS 7 hosted on VMware ESXi. This VM has its system disk on one datastore. To provide redundancy at the OS level, I decided to add a second disk of the same capacity from another datastore and create a RAID-1 (mirror) across them.
The current situation looks as follows:
[root@prod ~]# lvs
  LV   VG     Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  root centos -wi-ao----  11.60g
  swap centos -wi-ao----   3.91g
  srv  data   rwi-aor---  99.99g                               100.00
[root@prod ~]# pvs
  PV         VG     Fmt  Attr PSize   PFree
  /dev/sda2  centos lvm2 a--   15.51g     0
  /dev/sdb   data   lvm2 a--  100.00g     0
  /dev/sdc   data   lvm2 a--  100.00g     0
[root@prod ~]# pvscan
  PV /dev/sda2   VG centos   lvm2 [15.51 GiB / 0    free]
  PV /dev/sdb    VG data     lvm2 [100.00 GiB / 0    free]
  PV /dev/sdc    VG data     lvm2 [100.00 GiB / 0    free]
  Total: 3 [215.50 GiB] / in use: 3 [215.50 GiB] / in no VG: 0 [0 ]
[root@prod ~]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "centos" using metadata type lvm2
  Found volume group "data" using metadata type lvm2
There is already a VG “centos” on physical volume /dev/sda2, containing two logical volumes: root and swap. There is also a second VG named “data”, but that one is already mirrored, so we have nothing to do there. Let’s take a look at the /dev/sda2 volume:
[root@prod ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               centos
  PV Size               15.51 GiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              3970
  Free PE               0
  Allocated PE          3970
  PV UUID               RJaYrc-pV0r-qfNf-S92B-NSJc-mLpF-A5GOkf
As I already wrote, this is a VMware environment, so I created a volume of the same capacity on the second datastore and attached it to the virtual machine running CentOS. The operating system does not know about it yet, so we need to perform a bus scan. The simplest way is to reboot the VM, but that requires downtime, which may be unacceptable. Let’s do it without a restart by forcing a bus rescan in the OS. But first, let’s see what we have:
[root@prod ~]# fdisk -l

Disk /dev/sdb: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdc: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sda: 17.2 GB, 17179869184 bytes, 33554432 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00015ff8

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     1026047      512000   83  Linux
/dev/sda2         1026048    33554431    16264192   8e  Linux LVM

Disk /dev/mapper/centos-swap: 4194 MB, 4194304000 bytes, 8192000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/centos-root: 12.5 GB, 12457082880 bytes, 24330240 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/data-srv_rmeta_0: 4 MB, 4194304 bytes, 8192 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/data-srv_rimage_0: 107.4 GB, 107365793792 bytes, 209698816 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/data-srv_rmeta_1: 4 MB, 4194304 bytes, 8192 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/data-srv_rimage_1: 107.4 GB, 107365793792 bytes, 209698816 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/data-srv: 107.4 GB, 107365793792 bytes, 209698816 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Notice the additional rmeta/rimage sub-volumes of the mirrored data-srv volume, in contrast to the plain centos-root and centos-swap.
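These raid1 sub-LVs can also be listed directly with lvs; something like this shows which device each image and each metadata piece lives on:

[root@prod ~]# lvs -a -o name,segtype,devices data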
Let’s scan the bus:
[root@prod ~]# echo "- - -" > /sys/class/scsi_host/host0/scan
[root@prod ~]# echo "- - -" > /sys/class/scsi_host/host1/scan
It is good to know which bus the new disk is connected to, but there should be no problem with scanning all of them. It is also worthwhile to check the system logs to see whether anything new appeared.
[root@prod ~]# less /var/log/messages
[root@prod ~]# echo "- - -" > /sys/class/scsi_host/host2/scan
[root@prod ~]# less /var/log/messages
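If you are not sure how many SCSI hosts the VM has, you can simply rescan all of them in a loop (a small sketch; host numbering varies between machines):

for h in /sys/class/scsi_host/host*/scan; do
    echo "- - -" > "$h"      # rescan every SCSI host
done
tail -n 50 /var/log/messages  # the new disk should be logged here, e.g. as sdd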
Let’s take another look at what we have:
[root@prod ~]# fdisk -l

Disk /dev/sdb: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdc: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sda: 17.2 GB, 17179869184 bytes, 33554432 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00015ff8

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     1026047      512000   83  Linux
/dev/sda2         1026048    33554431    16264192   8e  Linux LVM

Disk /dev/mapper/centos-swap: 4194 MB, 4194304000 bytes, 8192000 sectors

[...]

Disk /dev/sdd: 17.2 GB, 17179869184 bytes, 33554432 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
I snipped the output because it looks the same as before; the only change is that after the list of logical volumes there is a new physical device, /dev/sdd.
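Before partitioning, it doesn’t hurt to double-check that /dev/sdd really is the new, empty disk, for example with:

[root@prod ~]# lsblk /dev/sdd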
Now we need to create the same partition layout as on the primary system disk:
[root@prod ~]# sfdisk -d /dev/sda
# partition table of /dev/sda
unit: sectors

/dev/sda1 : start=     2048, size=  1024000, Id=83, bootable
/dev/sda2 : start=  1026048, size= 32528384, Id=8e
/dev/sda3 : start=        0, size=        0, Id= 0
/dev/sda4 : start=        0, size=        0, Id= 0
Now we write this partition layout to the new disk:
[root@prod ~]# sfdisk -d /dev/sda | sfdisk /dev/sdd
Checking that no-one is using this disk right now ...
OK

Disk /dev/sdd: 2088 cylinders, 255 heads, 63 sectors/track

sfdisk: /dev/sdd: unrecognized partition table type

Old situation:
sfdisk: No partitions found
New situation:
Units: sectors of 512 bytes, counting from 0

   Device Boot    Start       End   #sectors  Id  System
/dev/sdd1   *      2048   1026047    1024000  83  Linux
/dev/sdd2       1026048  33554431   32528384  8e  Linux LVM
/dev/sdd3             0         -          0   0  Empty
/dev/sdd4             0         -          0   0  Empty
Warning: partition 1 does not end at a cylinder boundary
Warning: partition 2 does not start at a cylinder boundary
Warning: partition 2 does not end at a cylinder boundary
Successfully wrote the new partition table

Re-reading the partition table ...

If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes:  dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)
Let’s see how /dev/sdd looks now:
[root@prod ~]# sfdisk -d /dev/sdd
# partition table of /dev/sdd
unit: sectors

/dev/sdd1 : start=     2048, size=  1024000, Id=83, bootable
/dev/sdd2 : start=  1026048, size= 32528384, Id=8e
/dev/sdd3 : start=        0, size=        0, Id= 0
/dev/sdd4 : start=        0, size=        0, Id= 0
Looks good; let’s create the physical volume:
[root@prod ~]# pvcreate /dev/sdd2
  Physical volume "/dev/sdd2" successfully created
[root@prod ~]# pvs
  PV         VG     Fmt  Attr PSize   PFree
  /dev/sda2  centos lvm2 a--   15.51g      0
  /dev/sdb   data   lvm2 a--  100.00g      0
  /dev/sdc   data   lvm2 a--  100.00g      0
  /dev/sdd2         lvm2 a--   15.51g  15.51g
And extend the “centos” Volume Group:
[root@prod ~]# vgextend centos /dev/sdd2
  Volume group "centos" successfully extended
[root@prod ~]# pvs
  PV         VG     Fmt  Attr PSize   PFree
  /dev/sda2  centos lvm2 a--   15.51g      0
  /dev/sdb   data   lvm2 a--  100.00g      0
  /dev/sdc   data   lvm2 a--  100.00g      0
  /dev/sdd2  centos lvm2 a--   15.51g  15.51g
Done; now let’s transform the root LV into a mirror:
[root@prod ~]# lvconvert -m 1 --corelog centos/root
  Insufficient free space: 1 extents needed, but only 0 available
[root@prod ~]# lvconvert -m 1 --alloc anywhere centos/root
  Insufficient free space: 1 extents needed, but only 0 available
Ooops…
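The cause is that a raid1 conversion apparently needs at least one free extent for the raid metadata (rmeta) on the same physical volume as the existing image, and /dev/sda2 is completely full; that is why even --alloc anywhere does not help. The free-extent situation can be checked with something like:

[root@prod ~]# vgs -o +vg_free_count,vg_extent_size centos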
And here is where all of the manuals mentioned at the beginning become useless. Those I found either didn’t anticipate this situation or didn’t provide a resolution for it. There were some attempts at allocating space for the metadata in different places, or at shrinking a partition, but they ended without success, or in total disaster for data consistency. During my research I didn’t find the solution I eventually applied, which is why I’m describing it here. I decided to switch off the swap space and reduce the swap volume to free some space for the metadata. Let’s begin by checking:
[root@prod ~]# swapon
NAME      TYPE      SIZE USED PRIO
/dev/dm-0 partition 3.9G   0B   -1
Let’s switch off the swap space (if there is only one swap device, we can simply use -a):
[root@prod ~]# swapoff -av
swapoff /dev/dm-0
Let’s check again (no output means swap is fully off):
[root@prod ~]# swapon
Now it’s time to reduce the logical volume (here is where I made my mistake: I put “512M” instead of “-512M”, so the volume was reduced to 512M rather than by 512M; remember the minus sign):
[root@prod ~]# lvreduce centos/swap -L 512M
  WARNING: Reducing active logical volume to 512.00 MiB
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce swap? [y/n]: y
  Reducing logical volume swap to 512.00 MiB
  Logical volume swap successfully resized
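For the record, the relative form that reduces the volume by 512M would be the following; in my case the absolute reduction did no harm, because the swap signature is recreated from scratch later anyway:

[root@prod ~]# lvreduce -L -512M centos/swap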
Let’s check:
[root@prod ~]# lvs
  LV   VG     Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  root centos -wi-ao----  11.60g
  swap centos -wi-a----- 512.00m
  srv  data   rwi-aor---  99.99g                               100.00
Some space has been freed, so let’s try again to transform the root volume into a mirror:
[root@prod ~]# lvconvert -m 1 centos/root
[root@prod ~]# lvs
  LV   VG     Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  root centos rwi-aor---  11.60g                                 0.00
  swap centos -wi-a----- 512.00m
  srv  data   rwi-aor---  99.99g                               100.00
Voilà! The operation completed successfully; let’s do the same with swap:
[root@prod ~]# lvconvert -m 1 centos/swap
And check:
[root@prod ~]# lvs
  LV   VG     Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  root centos rwi-aor---  11.60g                                55.66
  swap centos rwi-a-r--- 512.00m                               100.00
  srv  data   rwi-aor---  99.99g                               100.00
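The root mirror is still synchronising (55.66%); its progress, together with the devices backing each image, can be watched with something like:

[root@prod ~]# lvs -a -o name,copy_percent,devices centos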
The mirrors are done and the root filesystem is still syncing, but meanwhile we can expand swap to the maximum available capacity:
[root@prod ~]# lvextend -l 100%FREE centos/swap
  Extending 2 mirror images.
  Extending logical volume swap to 6.80 GiB
  device-mapper: resume ioctl on  failed: Invalid argument
  Unable to resume centos-swap (253:0)
  Problem reactivating swap
  libdevmapper exiting with 1 device(s) still suspended.
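The last line says a device was left suspended; its state can be checked and, if necessary, cleared with dmsetup:

[root@prod ~]# dmsetup info centos-swap
[root@prod ~]# dmsetup resume centos-swap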
Unfortunately the system didn’t activate swap automatically, because the swap signature was destroyed by the resize operations and needs to be recreated:
[root@prod ~]# mkswap /dev/centos/swap
Now we can activate swap in the system again:
[root@prod ~]# swapon -av
swapon /dev/mapper/centos-swap
swapon: /dev/mapper/centos-swap: found swap signature: version 1, page-size 4, same byte order
swapon: /dev/mapper/centos-swap: pagesize=4096, swapsize=4185915392, devsize=4185915392
Let’s check if it works:
[root@prod ~]# swapon
NAME      TYPE      SIZE USED PRIO
/dev/dm-4 partition 3.9G   0B   -1
This way I went through a crash course in LVM logic, because I hadn’t had the opportunity to manage LVM before. I hope this write-up will help those who encounter a similar problem.