How to Configure MultiPath I/O for SAN-Based Storage on RHEL 9 | CentOS 9

Learn how to configure Multi-Path I/O (MPIO) for SAN-based storage on RHEL 9 and CentOS 9. Step-by-step guide with CLI examples, best practices, and troubleshooting tips.

🔈 Introduction

Enterprise Linux environments running RHEL 9 or CentOS Stream 9 frequently rely on SAN-based storage for performance, scalability, and resilience. To fully benefit from redundant storage paths provided by Fibre Channel (FC) or iSCSI SANs, Multi-Path I/O (MPIO) must be correctly configured.

This guide walks through a production-ready, vendor-agnostic approach to configuring MPIO on RHEL 9 and CentOS 9. It includes clear explanations, CLI examples, tables, and best practices to help system administrators build a reliable storage stack.


✅ What Is Multi-Path I/O (MPIO)?

Multi-Path I/O allows a Linux host to access the same storage LUN through multiple physical paths. Instead of the OS seeing multiple block devices, MPIO presents a single logical device and manages path selection, failover, and load balancing.

🟢 Key Benefits of MPIO

  • High availability – automatic failover if a path fails
  • Improved performance – load balancing across active paths
  • Operational stability – consistent device naming
  • SAN best practice compliance

🤝 Supported Storage Protocols

RHEL 9 and CentOS 9 support MPIO across common SAN transports:

Protocol           | Supported | Notes
-------------------|-----------|-----------------------------------------------
Fibre Channel (FC) | Yes       | Requires HBA and zoning
iSCSI              | Yes       | Works with software and hardware initiators
FCoE               | Yes       | Requires DCB-enabled NICs
NVMe-oF            | Partial   | Uses native NVMe multipath (not dm-multipath)

This article focuses on dm-multipath, which applies to FC and iSCSI storage.
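
If NVMe-oF storage is also present, you can check from sysfs whether native NVMe multipath is enabled (assuming the nvme_core module is loaded):

    cat /sys/module/nvme_core/parameters/multipath

A value of Y means the NVMe subsystem handles its own multipathing, and those namespaces should not be managed by dm-multipath.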


✅ Architecture Overview

Without multipathing:

    /dev/sdb   /dev/sdc   /dev/sdd

With multipathing:

    /dev/mapper/mpatha  →  /dev/sd[bcd]

The device-mapper multipath layer aggregates the individual path devices into one resilient logical device.

🔧 Prerequisites

Before configuring MPIO, verify the following:

Requirement      | Description
-----------------|-------------------------------------------
OS               | RHEL 9 or CentOS Stream 9
SAN Connectivity | FC or iSCSI paths visible
Root Access      | Required for configuration
LUNs             | Presented from SAN (do not format yet)
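
Path visibility can be confirmed with transport-specific tools before going further (the iSCSI check assumes the iscsi-initiator-utils package is installed):

    # Fibre Channel: HBA ports should report "Online"
    cat /sys/class/fc_host/host*/port_state

    # iSCSI: list active sessions to the array
    sudo iscsiadm -m session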

Confirm that LUNs are visible:

    lsblk

🔁 Step 1: Install Multipath Packages

Install the required utilities:

    sudo dnf install -y device-mapper-multipath device-mapper-multipath-libs

Verify installation:

    rpm -qa | grep multipath

🔁 Step 2: Enable and Start the Multipath Daemon

Enable multipathing using the built-in configuration helper:

    sudo mpathconf --enable --with_multipathd y

This command:

  • Creates /etc/multipath.conf
  • Enables multipathd
  • Applies RHEL 9 defaults
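
Running mpathconf with no options prints a short summary of the current multipath setup, which is a quick way to confirm the change took effect:

    sudo mpathconf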

Start and enable the service:

    sudo systemctl enable --now multipathd

Check status:

    systemctl status multipathd

🔁 Step 3: Understand the Default Configuration

The two settings in /etc/multipath.conf that matter most for SAN deployments are shown below; verify them and adjust if needed:

    defaults {
        find_multipaths yes
        user_friendly_names no
    }

🟢 Why These Settings Matter

Setting                | Reason
-----------------------|------------------------------------------------------
find_multipaths yes    | Prevents multipath from claiming single-path devices
user_friendly_names no | Ensures stable, WWID-based naming

💡 Best Practice: Keep user_friendly_names disabled in production SAN environments so device names are identical on every host.
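
To see the WWID a given path device reports (the same identifier multipath uses for naming), the udev scsi_id helper can be queried directly; /dev/sdb below is a placeholder for one of the SAN path devices:

    sudo /usr/lib/udev/scsi_id -g -u -d /dev/sdb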

🔁 Step 4: Identify SAN Devices

List detected multipath devices:

    sudo multipath -ll

Example output:

    mpatha (3600508b400105e210000900000490000)
    size=500G features='1 queue_if_no_path'
    |-+- policy='service-time 0' prio=50 status=active
    | |- sdb ready running
    | `- sdc ready running

🔑 Key Fields Explained

Field  | Meaning
-------|---------------------------
WWID   | Unique SAN identifier
Policy | Load balancing method
Prio   | Path priority
Status | Active/ready paths
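
For a live, per-path view, the running daemon can also be queried directly; this lists every underlying path with its device node, checker state, and priority:

    sudo multipathd show paths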

🔁 Step 5: Customize /etc/multipath.conf

For enterprise SANs (NetApp, Dell EMC, HPE, IBM), vendor-specific tuning is recommended.
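
Array-specific overrides normally go in a devices section of /etc/multipath.conf. The entry below is purely illustrative; take the correct vendor/product strings and settings from the storage vendor's documentation:

    devices {
        device {
            vendor                 "NETAPP"
            product                "LUN.*"
            path_grouping_policy   group_by_prio
            prio                   alua
            failback               immediate
            no_path_retry          queue
        }
    }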

🖥️ Example Production Configuration

    defaults {
        find_multipaths yes
        polling_interval 5
        path_selector "service-time 0"
        path_grouping_policy group_by_prio
        path_checker tur
        prio alua
        failback immediate
        no_path_retry queue
        features "1 queue_if_no_path"
    }

    blacklist {
        devnode "^sd[a]$"
    }

Reload configuration:

    sudo systemctl reload multipathd

Re-scan paths:

    sudo multipath -r

🔁 Step 6: Verify Path Redundancy

Use the following commands to validate multipathing:

    sudo multipath -ll
    lsblk -o NAME,KNAME,TYPE,SIZE

Expected result:

  • Only /dev/mapper/mpathX is used
  • No filesystem created directly on /dev/sdX
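
A quick way to count healthy paths across all maps (a simple illustration that assumes the default multipath -ll output format):

    sudo multipath -ll | grep -c "active ready running"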

🔁 Step 7: Create Filesystems or LVM on Multipath Devices

🟢 Using LVM (Recommended)

    sudo pvcreate /dev/mapper/mpatha
    sudo vgcreate vg_san /dev/mapper/mpatha
    sudo lvcreate -n lv_data -L 400G vg_san
    sudo mkfs.xfs /dev/vg_san/lv_data

Create a mount point and mount the volume:

    sudo mkdir /data
    sudo mount /dev/vg_san/lv_data /data

Add to /etc/fstab so the mount persists across reboots:

    /dev/vg_san/lv_data  /data  xfs  defaults,_netdev  0 0
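
The new entry can be validated without a reboot (systemd regenerates its mount units from fstab on daemon-reload):

    sudo systemctl daemon-reload
    sudo mount -a
    findmnt /data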

🔁 Step 8: Path Failure Testing

Simulate a path failure:

  • Disable an FC port or log out an iSCSI session (see the example below)
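
For iSCSI, one way to drop a single path is to log out of one session; the target IQN and portal below are placeholders for values taken from iscsiadm -m session:

    # Identify the session that backs the path to be dropped
    sudo iscsiadm -m session
    # Log out of that one session (use -l later to log back in)
    sudo iscsiadm -m node -T iqn.2005-10.org.example:target1 -p 192.168.50.10:3260 -u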

Observe behavior:

    sudo multipath -ll

Expected outcome:

  • I/O continues
  • Remaining paths stay active
  • No filesystem interruption

🧰 Common Troubleshooting Commands

Command                  | Purpose
-------------------------|------------------------
multipath -ll            | View path status
multipath -v3            | Debug device discovery
journalctl -u multipathd | View service logs
ls -l /dev/disk/by-id/   | Validate WWIDs

🏆 Best Practices for RHEL 9 Multipathing

  • ✅ Always use multipath devices, never raw /dev/sdX
  • ✅ Disable multipathing for local disks
  • ✅ Use ALUA-capable SANs where possible
  • ✅ Test failover before production deployment
  • ✅ Keep firmware and HBA drivers updated

🛡️ Security and Performance Considerations

  • Enable SAN-side LUN masking and zoning
  • Use consistent path counts across hosts
  • Monitor latency and path errors
  • Combine MPIO with LVM snapshots and XFS

🏁 Conclusion

Configuring Multi-Path I/O on RHEL 9 and CentOS 9 is a critical step for building resilient, enterprise-grade SAN storage environments. With proper configuration, multipathing ensures high availability, predictable performance, and seamless failover, all while simplifying device management.

By following the steps and best practices outlined in this guide, system administrators can deploy SAN storage with confidence and align with Red Hat–recommended standards.

Did you find this article helpful? Your feedback is invaluable to us! Feel free to share this post with those who may benefit, and let us know your thoughts in the comments section below.

