
Learn how to configure Multi-Path I/O (MPIO) for SAN-based storage on RHEL 9 and CentOS 9. Step-by-step guide with CLI examples, best practices, and troubleshooting tips.
Enterprise Linux environments running RHEL 9 or CentOS Stream 9 frequently rely on SAN-based storage for performance, scalability, and resilience. To fully benefit from redundant storage paths provided by Fibre Channel (FC) or iSCSI SANs, Multi-Path I/O (MPIO) must be correctly configured.
This guide walks through a production-ready, vendor-agnostic approach to configuring MPIO on RHEL 9 and CentOS 9. It includes clear explanations, CLI examples, tables, and best practices to help system administrators build a reliable storage stack.
Multi-Path I/O allows a Linux host to access the same storage LUN through multiple physical paths. Instead of the OS seeing multiple block devices, MPIO presents a single logical device and manages path selection, failover, and load balancing.
🟢 Key Benefits of MPIO
- High availability: I/O fails over automatically when a path is lost
- Load balancing of I/O across redundant SAN paths
- A single, stable device name instead of multiple /dev/sdX entries
- Simplified device management during path or fabric maintenance
RHEL 9 and CentOS 9 support MPIO across common SAN transports:
| Protocol | Supported | Notes |
|---|---|---|
| Fibre Channel (FC) | Yes | Requires HBA and zoning |
| iSCSI | Yes | Works with software and hardware initiators |
| FCoE | Yes | Requires DCB-enabled NICs |
| NVMe-oF | Partial | Uses native multipath (not dm-multipath) |
This article focuses on dm-multipath, which applies to FC and iSCSI storage.
Without multipathing:
/dev/sdb /dev/sdc /dev/sdd
With multipathing:
/dev/mapper/mpatha → /dev/sd[bcd]
The device-mapper multipath layer aggregates multiple block devices into one resilient logical volume.
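The aggregation can be seen from the device side: paths that belong to the same LUN report the same WWID. As a minimal sketch (the embedded lsblk sample below is made up for illustration; on a real host you would pipe in `lsblk -o NAME,WWN --nodeps`), grouping devices by WWID shows which sdX nodes are really one LUN:

```shell
# Group block devices by WWID to spot multiple paths to the same LUN.
group_by_wwid() {
  # Skip the header line; collect device names per WWID, then print
  # "WWID: dev1 dev2 ..." for each WWID seen.
  awk 'NR > 1 && $2 != "" { paths[$2] = paths[$2] " " $1 }
       END { for (w in paths) print w ":" paths[w] }'
}

# Sample data standing in for `lsblk -o NAME,WWN --nodeps` output.
sample='NAME WWN
sda  0x5000c500a1b2c3d4
sdb  0x600508b400105e21
sdc  0x600508b400105e21
sdd  0x600508b400105e21'

printf '%s\n' "$sample" | group_by_wwid | sort
```

Three devices sharing one WWID on real hardware is exactly the situation dm-multipath collapses into a single /dev/mapper device.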
🔧 Prerequisites
Before configuring MPIO, verify the following:
| Requirement | Description |
|---|---|
| OS | RHEL 9 or CentOS Stream 9 |
| SAN Connectivity | FC or iSCSI paths visible |
| Root Access | Required for configuration |
| LUNs | Presented from SAN (do not format yet) |
Confirm that LUNs are visible:
lsblk
Install the required utilities:
sudo dnf install -y device-mapper-multipath device-mapper-multipath-libs
Verify installation:
rpm -qa | grep multipath
Enable multipathing using the built-in configuration helper:
sudo mpathconf --enable --with_multipathd y
This command:
- Creates a default /etc/multipath.conf
- Loads the dm-multipath kernel module
- Starts multipathd and enables it at boot (--with_multipathd y)
Start and enable the service:
sudo systemctl enable --now multipathd
Check status:
systemctl status multipathd
The default RHEL 9 configuration includes:
defaults {
find_multipaths yes
user_friendly_names no
}
🟢 Why These Defaults Matter
| Setting | Reason |
|---|---|
| find_multipaths yes | Prevents multipath on single-path devices |
| user_friendly_names no | Ensures stable WWID-based naming |
💡 Best Practice: Keep user_friendly_names no in multi-host and clustered environments so each LUN carries the same WWID-based name on every node.
List detected multipath devices:
sudo multipath -ll
Example output:
mpatha (3600508b400105e210000900000490000)
size=500G features='1 queue_if_no_path'
|-+- policy='service-time 0' prio=50 status=active
| |- sdb ready running
| |- sdc ready running
🔑 Key Fields Explained
| Field | Meaning |
|---|---|
| WWID | Unique SAN identifier |
| Policy | Load balancing method |
| Prio | Path priority |
| Status | Active/ready paths |
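For quick health checks, the fields above can be tallied with standard text tools. The helper below is a sketch (not a stock utility), fed with a hard-coded sample of `multipath -ll` output so the parsing logic runs without a live SAN; on a real host you would pipe in `sudo multipath -ll`:

```shell
# Summarize path health per multipath map as "map:state=count".
path_summary() {
  awk '/^[[:alnum:]_-]+ \(/ { map = $1 }          # map header line, e.g. "mpatha (36005...)"
       / (ready|faulty) / {                        # individual path lines
         state = ($0 ~ / ready /) ? "ready" : "faulty"
         count[map ":" state]++
       }
       END { for (k in count) print k "=" count[k] }'
}

# Sample standing in for `sudo multipath -ll` output.
sample="mpatha (3600508b400105e210000900000490000)
size=500G features='1 queue_if_no_path'
|-+- policy='service-time 0' prio=50 status=active
| |- sdb ready running
| |- sdc ready running"

printf '%s\n' "$sample" | path_summary
```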
For enterprise SANs (NetApp, Dell EMC, HPE, IBM), vendor-specific tuning of /etc/multipath.conf is recommended.
🖥️ Example Production Configuration
defaults {
find_multipaths yes
polling_interval 5
path_selector "service-time 0"
path_grouping_policy group_by_prio
path_checker tur
prio alua
failback immediate
no_path_retry queue
features "1 queue_if_no_path"
}
blacklist {
devnode "^sd[a]$"
}
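Beyond the defaults and blacklist blocks, per-array tuning goes in a devices section. The stanza below is illustrative only — the vendor and product strings are placeholders, and the exact values should come from your array vendor's interoperability documentation:

```
devices {
    device {
        vendor               "NETAPP"
        product              "LUN.*"
        path_grouping_policy group_by_prio
        prio                 alua
        failback             immediate
    }
}
```

Settings in a matching device stanza override the defaults block for that array only.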
Reload configuration:
sudo systemctl reload multipathd
Re-scan paths:
sudo multipath -r
Use the following commands to validate multipathing:
multipath -ll
lsblk -o NAME,KNAME,TYPE,SIZE
Expected result:
- A single /dev/mapper multipath device per LUN
- The underlying sdX path devices shown beneath the multipath device in lsblk
🟢 Using LVM (Recommended)
pvcreate /dev/mapper/mpatha
vgcreate vg_san /dev/mapper/mpatha
lvcreate -n lv_data -L 400G vg_san
mkfs.xfs /dev/vg_san/lv_data
Mount persistently:
mkdir /data
mount /dev/vg_san/lv_data /data
Add to /etc/fstab:
/dev/vg_san/lv_data /data xfs defaults,_netdev 0 0
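When scripting the fstab change, guard against duplicate entries. This sketch operates on a temporary copy so it is safe to run anywhere; on a real system you would point FSTAB at /etc/fstab and run as root:

```shell
# Append the mount entry to fstab only if it is not already present.
FSTAB=$(mktemp)
printf '%s\n' '/dev/mapper/rhel-root / xfs defaults 0 0' > "$FSTAB"

ENTRY='/dev/vg_san/lv_data /data xfs defaults,_netdev 0 0'
grep -qF "$ENTRY" "$FSTAB" || printf '%s\n' "$ENTRY" >> "$FSTAB"
# Running it a second time must not duplicate the line:
grep -qF "$ENTRY" "$FSTAB" || printf '%s\n' "$ENTRY" >> "$FSTAB"

grep -c '/data' "$FSTAB"   # prints 1 even after two runs
```

The _netdev option defers mounting until the network is up, which matters when the LUN sits behind an iSCSI session.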
Simulate a path failure:
echo offline | sudo tee /sys/block/sdb/device/state
Observe behavior:
multipath -ll
Expected outcome:
- The offlined path is marked faulty in multipath -ll
- I/O continues on the remaining active paths without interruption
- The path returns to service automatically once restored (failback immediate)
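The failover test can be wired into routine monitoring. The check below is an illustrative sketch (not a Red Hat tool): it returns non-zero when any path line reports faulty, using hard-coded samples here so it runs without a SAN; in practice you would feed it `sudo multipath -ll`:

```shell
# Succeed only when no path is reported faulty.
paths_healthy() {
  ! grep -q ' faulty '
}

healthy="| |- sdb ready running
| |- sdc ready running"
degraded="| |- sdb ready running
| |- sdc faulty offline"

printf '%s\n' "$healthy"  | paths_healthy && echo "all paths ok"
printf '%s\n' "$degraded" | paths_healthy || echo "degraded: check SAN paths"
```

Hooked into cron or a Nagios-style probe, the exit status flags path degradation before a second failure causes an outage.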
| Command | Purpose |
|---|---|
| multipath -ll | View path status |
| multipath -v3 | Debug device discovery |
| journalctl -u multipathd | Service logs |
| ls -l /dev/disk/by-id/ | Validate WWIDs |
Configuring Multi-Path I/O on RHEL 9 and CentOS 9 is a critical step for building resilient, enterprise-grade SAN storage environments. With proper configuration, multipathing ensures high availability, predictable performance, and seamless failover, all while simplifying device management.
By following the steps and best practices outlined in this guide, system administrators can deploy SAN storage with confidence and align with Red Hat–recommended standards.
Did you find this article helpful? Your feedback is invaluable to us! Feel free to share this post with those who may benefit, and let us know your thoughts in the comments section below.
