How to Configure LVM on Multipath Devices in RHEL

Learn how to configure LVM on multipath devices in RHEL with this comprehensive step-by-step guide. Improve storage performance, ensure high availability, and master enterprise-level disk management using MPIO and LVM.

🔈 Introduction

When building enterprise-level storage systems, logical volume management (LVM) combined with multipath I/O (MPIO) ensures robust redundancy, scalability, and high availability. Red Hat Enterprise Linux (RHEL) supports both technologies natively, making it a powerful platform for storage-heavy workloads.

In this guide, we’ll walk through a comprehensive, from-scratch setup to configure LVM on multipath devices in RHEL, optimizing for both performance and resilience.


What is Multipath I/O and LVM?

Multipath I/O (MPIO)

Multipath I/O allows Linux to access storage devices over multiple paths, typically via iSCSI or Fibre Channel SAN. If one path fails, traffic reroutes automatically.

Benefits:

  • Redundancy
  • Load balancing
  • Improved throughput
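
As a quick sanity check (the device names here are illustrative), two block devices that are really the same LUN reached over different paths report the same SCSI WWID:

    # Same WWID on both devices means they are two paths to one LUN
    sudo /usr/lib/udev/scsi_id -g -u /dev/sdb
    sudo /usr/lib/udev/scsi_id -g -u /dev/sdd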

Logical Volume Manager (LVM)

LVM abstracts the physical disk layer into flexible, resizable storage units:

  • Physical Volumes (PVs): Actual disks or partitions
  • Volume Groups (VGs): Pools of PVs
  • Logical Volumes (LVs): Usable volumes formatted and mounted as filesystems
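
One practical payoff of this layering is online resizing. A minimal sketch, assuming the XFS-backed vg_multipath/lv_data volume created later in this guide and free space left in the VG:

    # Grow the logical volume by 10 GiB, then grow the XFS filesystem to match
    sudo lvextend -L +10G /dev/vg_multipath/lv_data
    sudo xfs_growfs /mnt/data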

🛠️ Why Use LVM on Multipath Devices?

Feature                     Multipath + LVM
Redundancy                  ✅
Flexible Resizing           ✅
Snapshots                   ✅
Performance                 ✅
Load Balancing              ✅
Dynamic Path Management     ✅


✅ Prerequisites

Ensure the following before starting:

Requirement     Description
OS              RHEL 8 or RHEL 9
Storage         Access to SAN or iSCSI LUNs
Permissions     Root or sudo access
Tools           lvm2, device-mapper-multipath

Configure LVM on Multipath Devices: Step-by-Step Configuration

Step 1: Install Required Packages

    sudo dnf install -y device-mapper-multipath lvm2

Enable multipath support:

    sudo mpathconf --enable --with_multipathd y
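
Running mpathconf with no arguments prints the current state, which is a quick way to confirm the change took effect:

    # Expect output indicating multipath is enabled and multipathd is running
    sudo mpathconf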

Step 2: Identify Multipath-Capable Devices

Use the following command to check available paths:

    sudo multipath -ll

Example output:

    3600508b400105e210000900000490000 dm-2 IBM,2145
    size=100G features='1 queue_if_no_path' hwhandler='0' wp=rw
    |-+- policy='round-robin 0' prio=50 status=active
    | |- 2:0:0:1 sdb 8:16 active ready running
    | |- 3:0:0:1 sdd 8:48 active ready running

✅ Hand the multipath device to LVM, not the individual paths: use /dev/mapper/3600508b400105e210000900000490000 (or its user-friendly alias, such as /dev/mapper/mpatha), never /dev/sdb or /dev/sdd directly.
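
To see how that WWID maps to device nodes on your system, list the nodes udev creates for multipath maps (the paths shown are typical, not guaranteed):

    # Friendly names live under /dev/mapper; WWID-based symlinks under /dev/disk/by-id
    ls -l /dev/mapper/
    ls -l /dev/disk/by-id/dm-uuid-mpath-*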

Step 3: Configure Multipath

Create or edit the multipath configuration file:

    sudo vim /etc/multipath.conf

Example minimal config:

    defaults {
        user_friendly_names yes
        find_multipaths yes
    }
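
If you want a predictable, human-readable name for a specific LUN (see the best practices later in this article), you can also add a multipaths section; the WWID and alias below are illustrative:

    multipaths {
        multipath {
            wwid  3600508b400105e210000900000490000
            alias mpatha_data
        }
    }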

Restart the service:

    sudo systemctl restart multipathd

Step 4: Enable and Start Services

    sudo systemctl enable --now multipathd

Check service status:

    sudo systemctl status multipathd
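
multipathd also accepts one-off queries on the command line, which is handy for a quick health check without entering its interactive shell:

    sudo multipathd show maps
    sudo multipathd show paths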

Step 5: Create Volume Groups and Logical Volumes

Now use the multipath name to initialize LVM.


Create Physical Volume

    sudo pvcreate /dev/mapper/mpatha

👀 Replace /dev/mapper/mpatha with your actual multipath alias.
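
To confirm LVM registered the device as a physical volume (again, the alias is illustrative):

    sudo pvs /dev/mapper/mpatha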


Create Volume Group

    sudo vgcreate vg_multipath /dev/mapper/mpatha
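
If you present another LUN later, the volume group can be grown the same way; a sketch assuming a second multipath device named mpathb:

    sudo vgextend vg_multipath /dev/mapper/mpathb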

Create Logical Volume

    sudo lvcreate -L 50G -n lv_data vg_multipath
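
If you would rather allocate all remaining space in the volume group instead of a fixed 50G, lvcreate also accepts extent-based sizing:

    # Use every free extent in vg_multipath
    sudo lvcreate -l 100%FREE -n lv_data vg_multipath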

Format and Mount

    sudo mkfs.xfs /dev/vg_multipath/lv_data
    sudo mkdir /mnt/data
    sudo mount /dev/vg_multipath/lv_data /mnt/data

Add to /etc/fstab for persistent mount:

    /dev/vg_multipath/lv_data /mnt/data xfs defaults 0 0
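
If you prefer mounting by filesystem UUID (see the best practices below), look it up with blkid and substitute it into the entry; the UUID below is a placeholder, and mount -a is a quick way to validate the fstab syntax:

    sudo blkid /dev/vg_multipath/lv_data
    # UUID=<your-filesystem-uuid> /mnt/data xfs defaults 0 0
    sudo mount -a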

🧪 Verifying the Configuration

Use these commands to verify multipath and LVM:

Command             Purpose
multipath -ll       Show active multipath devices
lsblk               See logical volume layout
vgs, lvs, pvs       View LVM hierarchy
cat /proc/mounts    Confirm mounted volume

Example:

    sudo lsblk

    NAME                    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    mpatha                  252:0    0  100G  0 mpath
    ├─vg_multipath-lv_data  253:1    0   50G  0 lvm  /mnt/data
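
A few more quick checks that tie the table above together (output omitted):

    sudo pvs && sudo vgs && sudo lvs   # PV -> VG -> LV hierarchy
    grep /mnt/data /proc/mounts        # confirm the mount is active
    df -h /mnt/data                    # confirm size and usage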

🧠 Best Practices

Practice                   Description
Use UUIDs or aliases       Avoid device name mismatches after reboot
Monitor paths              Use multipathd -k for CLI control
Schedule backups           Use LVM snapshots for fast backup points
Disable WWID renaming      Use aliases in /etc/multipath.conf
Limit access               Block raw disk access in udev for safety
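
As an example of the snapshot practice above, a short-lived snapshot can serve as a consistent backup point (size and names are illustrative; the VG needs enough free space to hold the snapshot):

    sudo lvcreate -s -L 5G -n lv_data_snap /dev/vg_multipath/lv_data
    # ...back up from the snapshot, then remove it...
    sudo lvremove /dev/vg_multipath/lv_data_snap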

🧩 Troubleshooting Tips

Multipath Not Detecting Devices

  • Verify find_multipaths is enabled
  • Check SAN zoning and LUN masking
  • Use udevadm settle after adding disks (see the rescan sketch below)
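
A typical rescan sequence after presenting a new LUN might look like the following sketch (adjust for your HBA or iSCSI setup):

    # Ask every SCSI host to rescan for new LUNs
    echo "- - -" | sudo tee /sys/class/scsi_host/host*/scan > /dev/null
    # Wait for udev to finish processing the new devices
    sudo udevadm settle
    # Refresh the multipath maps
    sudo multipath -r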

LVM Not Seeing Devices

Ensure devices are initialized:

    sudo pvscan
    sudo vgscan
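
If the multipath device still does not show up, rescanning LVM's device cache and listing every disk LVM can see often narrows things down:

    sudo pvscan --cache
    sudo lvmdiskscan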

Paths Flapping

Check the logs:

    sudo journalctl -u multipathd

Consider disabling queue_if_no_path in HA setups.
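
Queueing behaviour is controlled in /etc/multipath.conf; a minimal sketch of failing I/O instead of queueing indefinitely (apply with care, and restart multipathd afterwards as shown earlier):

    defaults {
        no_path_retry fail
    }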


🔚 Conclusion

Configuring LVM on multipath devices in RHEL ensures high availability, flexibility, and performance for enterprise storage environments. By following the best practices outlined above, your systems will be resilient against hardware failure and easily scalable as demand grows.

This guide walked you through a zero-to-hero approach for setting up LVM on MPIO devices in RHEL using real-world CLI examples and configuration tips.

Did you find this article helpful? Your feedback is invaluable to us! Feel free to share this post with those who may benefit, and let us know your thoughts in the comments section below.

