Creating and Managing Software RAID Devices using LVM
- Ensure that you have created enough physical volumes in a volume group to accommodate the LVM RAID logical volume requirements. For more information about creating physical volumes and volume groups, see Working With Logical Volume Manager.
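For example, a minimal sketch of preparing a volume group for the examples in this section, assuming four unused disks /dev/sdb through /dev/sde (adjust the device names for your system):
sudo pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde    # initialize the disks as physical volumes
sudo vgcreate myvg /dev/sdb /dev/sdc /dev/sdd /dev/sde    # group them into the volume group used in the examples below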
- Review the raid_fault_policy value in the /etc/lvm/lvm.conf file, which specifies how a RAID instance that uses redundant devices reacts to a drive failure. The default value is "warn", which indicates that RAID is configured to log a warning in the system logs. This means that in the event of a device failure, manual action is required to replace the failed device.
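For example, you can display the current policy without opening the file by querying the LVM configuration (a quick check; the setting lives in the activation section):
sudo lvmconfig activation/raid_fault_policy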
- Run the lvcreate command to create the LVM RAID device. See the following sections for examples:
- Create the file system that you want on your device. For example, the following command creates an ext4 file system on a RAID 6 logical volume:
sudo mkfs.ext4 /dev/myvg/mylvraid6
mke2fs 1.47.1 (20-May-2024)
Creating filesystem with 264192 4k blocks and 66096 inodes
Filesystem UUID: 4f46c829-f371-450c-8864-1b2c6acf670c
Superblock backups stored on blocks: 32768, 98304, 163840, 229376
Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
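To find the UUID to reference in /etc/fstab in the next step, you can query the new file system with blkid; a brief sketch, assuming the RAID 6 volume created above:
sudo blkid /dev/myvg/mylvraid6    # prints the file system UUID for use in /etc/fstab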
- Consider persisting the logical volume by editing the /etc/fstab file. For example, adding the following line, which includes the UUID created in the previous step, ensures that the logical volume is remounted after a reboot:
UUID=05a78be5-8a8a-44ee-9f3f-1c4764c957e8 /mnt ext4 defaults 0 0
For more information about using UUIDs with the /etc/fstab file, see Automatic Device Mappings for Partitions and File Systems.
- If a device failure occurs for LVM RAID levels 5, 6, and 10, ensure that you have a replacement physical volume attached to the volume group that contains the failed RAID device, and do one of the following:
- Use the following command to switch to a random spare physical volume present in the volume group:
sudo lvconvert --repair volume_group/logical_volume
In the previous example, volume_group is the volume group and logical_volume is the LVM RAID logical volume.
- Use the following command to switch to a specific physical volume present in the volume group:
sudo lvconvert --repair volume_group/logical_volume physical_volume
In the previous example, physical_volume is the specific volume that you want to replace the failed physical volume with, for example, /dev/sdb1.
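After the repair completes, you can confirm that the logical volume is healthy again and fully synchronized; a minimal sketch, assuming the myvg volume group used in the examples that follow:
sudo lvs -a -o name,lv_health_status,sync_percent,devices myvg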
RAID Level 0 (Striping) LVM Examples
The following lvcreate command creates a RAID 0 logical volume named mylvraid0 of size 2 GB in the volume group myvg:
sudo lvcreate --type raid0 --size 2g --stripes 3 --stripesize 4 -n mylvraid0 myvg
The logical volume contains three stripes, which is the number of devices to use in the myvg volume group. The stripesize of 4 kilobytes is the size of data that can be written to one device before moving to the next device.
The following output is displayed:
Rounding size 2.00 GiB (512 extents) up to stripe boundary size 2.00 GiB (513 extents).
Logical volume "mylvraid0" created.
Running the lsblk command shows that three of the four physical volumes are now used by the myvg-mylvraid0 RAID 0 logical volume. Also, each instance of myvg-mylvraid0 shown in the output is built from a data subvolume that holds part of the data for the RAID logical volume. The data subvolumes are labelled myvg-mylvraid0_rimage_0, myvg-mylvraid0_rimage_1, and myvg-mylvraid0_rimage_2.
lsblk
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
...
sdb 8:16 0 50G 0 disk
└─myvg-mylvraid0_rimage_0 252:2 0 684M 0 lvm
└─myvg-mylvraid0 252:5 0 2G 0 lvm
sdc 8:32 0 50G 0 disk
└─myvg-mylvraid0_rimage_1 252:3 0 684M 0 lvm
└─myvg-mylvraid0 252:5 0 2G 0 lvm
sdd 8:48 0 50G 0 disk
└─myvg-mylvraid0_rimage_2 252:4 0 684M 0 lvm
└─myvg-mylvraid0 252:5 0 2G 0 lvm
sde 8:64 0 50G 0 disk
To display information about logical volumes, use the lvdisplay, lvs, and lvscan commands.
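For example, a brief sketch that lists the new volume together with its size and segment type, assuming the myvg volume group from the example above:
sudo lvs -o lv_name,lv_size,segtype myvg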
To remove a RAID 0 logical volume from a volume group, use the lvremove command:
sudo lvremove vol_group/logical_vol
Other commands that are available for managing logical volumes include lvchange, lvconvert, lvmdiskscan, lvrename, lvextend, lvreduce, and lvresize.
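For example, a hedged sketch of growing the RAID 0 logical volume and the ext4 file system on it with lvextend and resize2fs, assuming enough free extents remain on the striped devices:
sudo lvextend --size +1G myvg/mylvraid0    # grow the logical volume by 1 GiB
sudo resize2fs /dev/myvg/mylvraid0         # grow the ext4 file system to match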
RAID Level 1 (Mirroring) LVM Examples
The following lvcreate command creates a RAID 1 logical volume named mylvraid1 of size 1 GB in the volume group myvg:
sudo lvcreate --type raid1 -m 1 --size 1G -n mylvraid1 myvg
The following output is displayed:
Logical volume "mylvraid1" created.
The -m 1 option specifies that you want one mirror device in the myvg volume group, so that identical data is written to the first device and to a second, mirror device. You can specify more mirror devices if you want; for example, -m 2 would create two mirrors of the first device. If one device fails, the other device mirrors can continue to process requests.
Running the lsblk command shows that two of the four available physical volumes are now part of the myvg-mylvraid1 RAID 1 logical volume. Also, each instance of myvg-mylvraid1 includes subvolume pairs for data and metadata. The data subvolumes are labelled myvg-mylvraid1_rimage_0 and myvg-mylvraid1_rimage_1. The metadata subvolumes are labelled myvg-mylvraid1_rmeta_0 and myvg-mylvraid1_rmeta_1.
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
...
sdb 8:16 0 50G 0 disk
├─myvg-mylvraid1_rmeta_0 252:2 0 4M 0 lvm
│ └─myvg-mylvraid1 252:6 0 1G 0 lvm
└─myvg-mylvraid1_rimage_0 252:3 0 1G 0 lvm
└─myvg-mylvraid1 252:6 0 1G 0 lvm
sdc 8:32 0 50G 0 disk
├─myvg-mylvraid1_rmeta_1 252:4 0 4M 0 lvm
│ └─myvg-mylvraid1 252:6 0 1G 0 lvm
└─myvg-mylvraid1_rimage_1 252:5 0 1G 0 lvm
└─myvg-mylvraid1 252:6 0 1G 0 lvm
sdd 8:48 0 50G 0 disk
sde 8:64 0 50G 0 disk
To check the synchronization progress and device layout of the mirror, run the lvs command with the sync_percent and devices reporting fields for the volume group myvg:
sudo lvs -a -o name,sync_percent,devices myvg
LV Cpy%Sync Devices
mylvraid1 21.58 mylvraid1_rimage_0(0),mylvraid1_rimage_1(0)
[mylvraid1_rimage_0] /dev/sdf(1)
[mylvraid1_rimage_1] /dev/sdg(1)
[mylvraid1_rmeta_0] /dev/sdf(0)
[mylvraid1_rmeta_1] /dev/sdg(0)
To remove a RAID 1 logical volume from a volume group, use the lvremove command:
sudo lvremove vol_group/logical_vol
Other commands that are available for managing logical volumes include lvchange, lvconvert, lvmdiskscan, lvrename, lvextend, lvreduce, and lvresize.
When creating the RAID logical volume, you can optionally enable integrity checking by adding the --raidintegrity y option. This creates subvolumes used to detect and correct data corruption in your RAID images. You can also add or remove this subvolume after creating the logical volume by using the following lvconvert command:
sudo lvconvert --raidintegrity y myvg/mylvraid1
Creating integrity metadata LV mylvraid1_rimage_0_imeta with size 20.00 MiB.
Logical volume "mylvraid1_rimage_0_imeta" created.
Creating integrity metadata LV mylvraid1_rimage_1_imeta with size 20.00 MiB.
Logical volume "mylvraid1_rimage_1_imeta" created.
Limiting integrity block size to 512 because the LV is active.
Using integrity block size 512 for file system block size 4096.
Logical volume myvg/mylvraid1 has added integrity.
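Once integrity is enabled, you can watch for detected corruption through the integritymismatches reporting field; a brief sketch, assuming the volume above:
sudo lvs -a -o name,integritymismatches myvg/mylvraid1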
You can also use lvconvert to split a mirror into individual linear logical volumes. For example, the following command splits off one image of the mirror into a new linear logical volume named lvnewlinear:
sudo lvconvert --splitmirrors 1 -n lvnewlinear myvg/mylvraid1
Are you sure you want to split raid1 LV myvg/mylvraid1 losing all resilience? [y/n]: y
If you had a three-way mirror, the same command would result in a two-way mirror and a linear logical volume.
You can also use lvconvert to change the number of mirror images in an existing RAID 1 logical volume. For example, the following command converts the two-way mirror to a three-way mirror:
sudo lvconvert -m 2 myvg/mylvraid1
Are you sure you want to convert raid1 LV myvg/mylvraid1 to 3 images enhancing resilience? [y/n]: y
Logical volume myvg/mylvraid1 successfully converted.
Similarly, the following command converts the volume back to a two-way mirror, specifying the device from which to remove the mirror image:
sudo lvconvert -m1 myvg/mylvraid1 /dev/sdd
Are you sure you want to convert raid1 LV myvg/mylvraid1 to 2 images reducing resilience? [y/n]: y
Logical volume myvg/mylvraid1 successfully converted.
For more information, see the lvmraid, lvcreate, and lvconvert manual pages.
RAID Level 5 (Striping with Distributed Parity) LVM Examples
The following lvcreate command creates a RAID 5 logical volume named mylvraid5 of size 1 GB in the volume group myvg:
sudo lvcreate --type raid5 -i 2 --size 1G -n mylvraid5 myvg
The following output is displayed:
Using default stripesize 64.00 KiB.
Logical volume "mylvraid5" created.
The logical volume contains two stripes, which is the number of devices that store data in the myvg volume group. However, an extra device is required to hold the parity information. So a stripe count of two requires three available devices: striping and parity information are spread across all three, even though the usable space for data is only equal to two devices. The parity information across the three devices is enough to handle the loss of any one of them.
The stripesize is not specified in the creation command, so the default of 64 kilobytes is used. This is the size of data that can be written to one device before moving to the next device.
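If the default does not suit the workload, you can set the stripe size explicitly when creating the volume; a hedged sketch in which the 128 KiB value is only an illustration:
sudo lvcreate --type raid5 -i 2 --stripesize 128k --size 1G -n mylvraid5 myvg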
Running the lsblk command shows that three of the four available physical volumes are now part of the myvg-mylvraid5 RAID 5 logical volume. Also, each instance of myvg-mylvraid5 includes subvolume pairs for data and metadata. The data subvolumes are labelled myvg-mylvraid5_rimage_0, myvg-mylvraid5_rimage_1, and myvg-mylvraid5_rimage_2. The metadata subvolumes are labelled myvg-mylvraid5_rmeta_0, myvg-mylvraid5_rmeta_1, and myvg-mylvraid5_rmeta_2.
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
...
sdb 8:16 0 50G 0 disk
├─myvg-mylvraid5_rmeta_0 252:2 0 4M 0 lvm
│ └─myvg-mylvraid5 252:8 0 1G 0 lvm
└─myvg-mylvraid5_rimage_0 252:3 0 512M 0 lvm
└─myvg-mylvraid5 252:8 0 1G 0 lvm
sdc 8:32 0 50G 0 disk
├─myvg-mylvraid5_rmeta_1 252:4 0 4M 0 lvm
│ └─myvg-mylvraid5 252:8 0 1G 0 lvm
└─myvg-mylvraid5_rimage_1 252:5 0 512M 0 lvm
└─myvg-mylvraid5 252:8 0 1G 0 lvm
sdd 8:48 0 50G 0 disk
├─myvg-mylvraid5_rmeta_2 252:6 0 4M 0 lvm
│ └─myvg-mylvraid5 252:8 0 1G 0 lvm
└─myvg-mylvraid5_rimage_2 252:7 0 512M 0 lvm
└─myvg-mylvraid5 252:8 0 1G 0 lvm
sde 8:64 0 50G 0 disk
To check the synchronization progress and device layout of the RAID 5 logical volume, run the lvs command with the copy_percent and devices reporting fields for the volume group myvg:
sudo lvs -a -o name,copy_percent,devices myvg
LV Cpy%Sync Devices
mylvraid5 25.00 mylvraid5_rimage_0(0),mylvraid5_rimage_1(0),mylvraid5_rimage_2(0)
[mylvraid5_rimage_0] /dev/sdf(1)
[mylvraid5_rimage_1] /dev/sdg(1)
[mylvraid5_rimage_2] /dev/sdh(1)
[mylvraid5_rmeta_0] /dev/sdf(0)
[mylvraid5_rmeta_1] /dev/sdg(0)
[mylvraid5_rmeta_2] /dev/sdh(0)
To remove a RAID 5 logical volume from a volume group, use the lvremove command:
sudo lvremove vol_group/logical_vol
Other commands that are available for managing logical volumes include lvchange, lvconvert, lvmdiskscan, lvrename, lvextend, lvreduce, and lvresize.
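For example, lvchange can be used to scrub the array, which reads all data and parity blocks and reports any inconsistencies; a minimal sketch, assuming the volume created above:
sudo lvchange --syncaction check myvg/mylvraid5                          # start a background scrub
sudo lvs -o name,raid_sync_action,raid_mismatch_count myvg/mylvraid5    # monitor progress and results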
When creating the RAID logical volume, you can optionally enable integrity checking by adding the --raidintegrity y option. This creates subvolumes used to detect and correct data corruption in your RAID images. You can also add or remove this subvolume after creating the logical volume by using the following lvconvert command:
sudo lvconvert --raidintegrity y myvg/mylvraid5
Using integrity block size 512 for unknown file system block size, logical block size 512, physical block size 4096.
Logical volume myvg/mylvraid5 has added integrity.
For more information, see the lvmraid, lvcreate, and lvconvert manual pages.
RAID Level 6 (Striping with Double Distributed Parity) LVM Examples
The following lvcreate command creates a RAID 6 logical volume named mylvraid6 of size 1 GB in the volume group myvg:
sudo lvcreate --type raid6 -i 3 -L 1G -n mylvraid6 myvg
The following output is displayed:
Using default stripesize 64.00 KiB.
Rounding size 1.00 GiB (256 extents) up to stripe boundary size <1.01 GiB (258 extents).
Logical volume "mylvraid6" created.
The logical volume contains three stripes, which is the number of devices that store data in the myvg volume group. However, an extra two devices are required to hold the double parity information. So a stripe count of three requires five available devices: striping and double parity information are spread across all five, even though the usable space for data is only equal to three devices. The parity information across the five devices is enough to handle the loss of any two of them.
The stripesize isn't specified in the creation command, so the default of 64 kilobytes is used. This is the size of data that can be written to one device before moving to the next device.
Running the lsblk command shows that all five of the available physical volumes are now part of the myvg-mylvraid6 RAID 6 logical volume. Also, each instance of myvg-mylvraid6 includes subvolume pairs for data and metadata. The data subvolumes are labelled myvg-mylvraid6_rimage_0, myvg-mylvraid6_rimage_1, myvg-mylvraid6_rimage_2, myvg-mylvraid6_rimage_3, and myvg-mylvraid6_rimage_4. The metadata subvolumes are labelled myvg-mylvraid6_rmeta_0, myvg-mylvraid6_rmeta_1, myvg-mylvraid6_rmeta_2, myvg-mylvraid6_rmeta_3, and myvg-mylvraid6_rmeta_4.
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
...
sdb 8:16 0 50G 0 disk
├─myvg-mylvraid6_rmeta_0 252:2 0 4M 0 lvm
│ └─myvg-mylvraid6 252:12 0 1G 0 lvm
└─myvg-mylvraid6_rimage_0 252:3 0 344M 0 lvm
└─myvg-mylvraid6 252:12 0 1G 0 lvm
sdc 8:32 0 50G 0 disk
├─myvg-mylvraid6_rmeta_1 252:4 0 4M 0 lvm
│ └─myvg-mylvraid6 252:12 0 1G 0 lvm
└─myvg-mylvraid6_rimage_1 252:5 0 344M 0 lvm
└─myvg-mylvraid6 252:12 0 1G 0 lvm
sdd 8:48 0 50G 0 disk
├─myvg-mylvraid6_rmeta_2 252:6 0 4M 0 lvm
│ └─myvg-mylvraid6 252:12 0 1G 0 lvm
└─myvg-mylvraid6_rimage_2 252:7 0 344M 0 lvm
└─myvg-mylvraid6 252:12 0 1G 0 lvm
sde 8:64 0 50G 0 disk
├─myvg-mylvraid6_rmeta_3 252:8 0 4M 0 lvm
│ └─myvg-mylvraid6 252:12 0 1G 0 lvm
└─myvg-mylvraid6_rimage_3 252:9 0 344M 0 lvm
└─myvg-mylvraid6 252:12 0 1G 0 lvm
sdf 8:80 0 50G 0 disk
├─myvg-mylvraid6_rmeta_4 252:10 0 4M 0 lvm
│ └─myvg-mylvraid6 252:12 0 1G 0 lvm
└─myvg-mylvraid6_rimage_4 252:11 0 344M 0 lvm
└─myvg-mylvraid6 252:12 0 1G 0 lvm
To check the synchronization progress and device layout of the RAID 6 logical volume, run the lvs command with the sync_percent and devices reporting fields for the volume group myvg:
sudo lvs -a -o name,sync_percent,devices myvg
LV Cpy%Sync Devices
mylvraid6 31.26 mylvraid6_rimage_0(0),mylvraid6_rimage_1(0),mylvraid6_rimage_2(0),mylvraid6_rimage_3(0),mylvraid6_rimage_4(0)
[mylvraid6_rimage_0] /dev/sdf(1)
[mylvraid6_rimage_1] /dev/sdg(1)
[mylvraid6_rimage_2] /dev/sdh(1)
[mylvraid6_rimage_3] /dev/sdi(1)
[mylvraid6_rimage_4] /dev/sdj(1)
[mylvraid6_rmeta_0] /dev/sdf(0)
[mylvraid6_rmeta_1] /dev/sdg(0)
[mylvraid6_rmeta_2] /dev/sdh(0)
[mylvraid6_rmeta_3] /dev/sdi(0)
[mylvraid6_rmeta_4] /dev/sdj(0)
To remove a RAID 6 logical volume from a volume group, use the lvremove command:
sudo lvremove vol_group/logical_vol
Other commands that are available for managing logical volumes include lvchange, lvconvert, lvmdiskscan, lvrename, lvextend, lvreduce, and lvresize.
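For example, a hedged sketch of growing the RAID 6 logical volume and its file system in one step with the --resizefs option of lvextend, assuming enough free extents remain on each underlying device:
sudo lvextend --size +1G --resizefs myvg/mylvraid6    # grow the LV and resize the file system on it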
When creating the RAID logical volume, you can optionally enable integrity checking by adding the --raidintegrity y option. This creates subvolumes used to detect and correct data corruption in the RAID images. You can also add or remove this subvolume after creating the logical volume by using the following lvconvert command:
sudo lvconvert --raidintegrity y myvg/mylvraid6
Using integrity block size 512 for file system block size 4096.
Logical volume myvg/mylvraid6 has added integrity.
For more information, see the lvmraid, lvcreate, and lvconvert manual pages.
RAID Level 10 (Striping of Mirrored Disks) LVM Examples
The following lvcreate command creates a RAID 10 logical volume named mylvraid10 of size 10 GB in the volume group myvg:
sudo lvcreate --type raid10 -i 2 -m 1 --size 10G -n mylvraid10 myvg
The following output is displayed:
Logical volume "mylvraid10" created.
The -m 1 option specifies that you want one mirror in the myvg volume group, so that identical data is written to each pair of mirrored devices while data is striped across the pairs. Logical volume data remains available as long as at least one device remains functional in each mirrored device set.
Running the lsblk command shows that four of the five available physical volumes are now part of the myvg-mylvraid10 RAID 10 logical volume. Also, each instance of myvg-mylvraid10 includes subvolume pairs for data and metadata. The data subvolumes are labelled myvg-mylvraid10_rimage_0, myvg-mylvraid10_rimage_1, myvg-mylvraid10_rimage_2, and myvg-mylvraid10_rimage_3. The metadata subvolumes are labelled myvg-mylvraid10_rmeta_0, myvg-mylvraid10_rmeta_1, myvg-mylvraid10_rmeta_2, and myvg-mylvraid10_rmeta_3.
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
...
sdb 8:16 0 50G 0 disk
├─myvg-mylvraid10_rmeta_0 252:2 0 4M 0 lvm
│ └─myvg-mylvraid10 252:10 0 10G 0 lvm
└─myvg-mylvraid10_rimage_0 252:3 0 5G 0 lvm
└─myvg-mylvraid10 252:10 0 10G 0 lvm
sdc 8:32 0 50G 0 disk
├─myvg-mylvraid10_rmeta_1 252:4 0 4M 0 lvm
│ └─myvg-mylvraid10 252:10 0 10G 0 lvm
└─myvg-mylvraid10_rimage_1 252:5 0 5G 0 lvm
└─myvg-mylvraid10 252:10 0 10G 0 lvm
sdd 8:48 0 50G 0 disk
├─myvg-mylvraid10_rmeta_2 252:6 0 4M 0 lvm
│ └─myvg-mylvraid10 252:10 0 10G 0 lvm
└─myvg-mylvraid10_rimage_2 252:7 0 5G 0 lvm
└─myvg-mylvraid10 252:10 0 10G 0 lvm
sde 8:64 0 50G 0 disk
├─myvg-mylvraid10_rmeta_3 252:8 0 4M 0 lvm
│ └─myvg-mylvraid10 252:10 0 10G 0 lvm
└─myvg-mylvraid10_rimage_3 252:9 0 5G 0 lvm
└─myvg-mylvraid10 252:10 0 10G 0 lvm
sdf 8:80 0 50G 0 disk
To check the synchronization progress and device layout of the RAID 10 logical volume, run the lvs command with the sync_percent and devices reporting fields for the volume group myvg:
sudo lvs -a -o name,sync_percent,devices myvg
LV Cpy%Sync Devices
mylvraid10 68.82 mylvraid10_rimage_0(0),mylvraid10_rimage_1(0),mylvraid10_rimage_2(0),mylvraid10_rimage_3(0)
[mylvraid10_rimage_0] /dev/sdf(1)
[mylvraid10_rimage_1] /dev/sdg(1)
[mylvraid10_rimage_2] /dev/sdh(1)
[mylvraid10_rimage_3] /dev/sdi(1)
[mylvraid10_rmeta_0] /dev/sdf(0)
[mylvraid10_rmeta_1] /dev/sdg(0)
[mylvraid10_rmeta_2] /dev/sdh(0)
[mylvraid10_rmeta_3] /dev/sdi(0)
To remove a RAID 10 logical volume from a volume group, use the lvremove command:
sudo lvremove vol_group/logical_vol
Other commands that are available for managing logical volumes include lvchange, lvconvert, lvmdiskscan, lvrename, lvextend, lvreduce, and lvresize.
When creating the RAID logical volume, you can optionally enable integrity checking by adding the --raidintegrity y option. This creates subvolumes used to detect and correct data corruption in your RAID images. You can also add or remove this subvolume after creating the logical volume by using the following lvconvert command:
sudo lvconvert --raidintegrity y myvg/mylvraid10
Using integrity block size 512 for unknown file system block size, logical block size 512, physical block size 4096.
Logical volume myvg/mylvraid10 has added integrity.
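Because the same option can also remove the integrity subvolumes, passing n to lvconvert reverses the change; a brief sketch:
sudo lvconvert --raidintegrity n myvg/mylvraid10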
For more information, see the lvmraid, lvcreate, and lvconvert manual pages.