
Learn how to benchmark and optimize disk performance on Linux using fio. This comprehensive guide covers installation, test scenarios, interpreting results, and tuning tips to maximize throughput and minimize latency.
When you’re managing Linux systems—whether they’re servers, virtual machines, or embedded devices—understanding your disk performance is critical. The versatile utility fio (Flexible I/O tester) offers a comprehensive way to benchmark and optimize storage throughput, latency, and IOPS. In this guide we’ll walk through a structured approach to benchmarking disk performance with fio, interpreting results, and applying optimization techniques to get the most out of your disk subsystem.
Before diving into commands, it’s worth asking: why use fio, and why benchmark at all?
- fio is flexible: it can reproduce nearly any I/O pattern (sequential or random, reads or writes, any block size, queue depth, or I/O engine).
- Benchmarking establishes a baseline, so you can validate new hardware, verify configuration changes, and detect degradation over time.
- Numbers measured on your own stack are far more meaningful than vendor spec sheets, because results reflect the whole storage path, not just the drive.
🟢 Install fio
On Debian/Ubuntu:
sudo apt update
sudo apt install -y fio
On RHEL/CentOS 7 and earlier:
sudo yum install -y fio
On RHEL/CentOS 8 and later:
sudo dnf install -y fio
fio is available in standard repositories.
🟢 Choose the Target Device or Directory
Decide whether to test a raw block device (e.g., /dev/nvme0n1) or a mounted filesystem (e.g., /mnt/data).
- A raw device test measures the device itself with no filesystem overhead, but write tests will destroy any data on the device.
- A filesystem test includes filesystem and mount-option overhead, which is usually closer to what your applications will experience.
💡NOTE: Use a dedicated test file or a scratch device with no data you care about; fio write tests overwrite their target.
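Two minimal invocations for the two cases follow; the target paths (/mnt/data/fio_testfile and /dev/nvme0n1) are placeholders, so substitute your own:
# Filesystem test: fio creates and overwrites a regular test file
sudo fio --name=fs_test --filename=/mnt/data/fio_testfile --size=4G --rw=randread --bs=4k --iodepth=32 --direct=1 --ioengine=libaio --time_based --runtime=60
# Raw device test: read-only here; write tests against a raw device are destructive
sudo fio --name=raw_test --filename=/dev/nvme0n1 --rw=read --bs=1M --iodepth=32 --direct=1 --ioengine=libaio --time_based --runtime=60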
🟢 Define Test Parameters
Key parameters you’ll want to specify in your fio command or job file:
| Parameter | Purpose | Example |
|---|---|---|
| --rw | Type of I/O: read, write, randread, randwrite, randrw | --rw=randread |
| --bs | Block size for each I/O operation | --bs=4k, --bs=1M |
| --iodepth | Queue depth (number of I/Os in flight) | --iodepth=32 |
| --numjobs | Number of threads/processes | --numjobs=4 |
| --direct | Use direct I/O (bypass the OS cache) | --direct=1 |
| --time_based | Run for a specific runtime rather than size | --time_based --runtime=300 |
| --size | Amount of data to operate on | --size=10G |
| --ioengine | Method of issuing I/O (libaio, posixaio, …) | --ioengine=libaio |
For example, to measure random reads:
sudo fio --filename=/mnt/testfile --size=10G --direct=1 --ioengine=libaio --bs=4k --iodepth=64 --rw=randread --time_based --runtime=300 --group_reporting --name=rand_read_test
These flags mirror the fio invocations recommended in major cloud vendors’ disk-benchmarking guides.
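The same parameters can also live in a job file, which is easier to version and reuse. A sketch equivalent to the command above (the file name randread.fio is arbitrary):
cat > randread.fio <<'EOF'
[global]
ioengine=libaio
direct=1
time_based
runtime=300
group_reporting

[rand_read_test]
filename=/mnt/testfile
size=10G
rw=randread
bs=4k
iodepth=64
EOF
sudo fio randread.fio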
Here are a few key test cases to run regularly. Use them as templates, and adjust sizes or depths based on your hardware.
| Test name | Description | Sample command |
|---|---|---|
| Sequential Read | Large block reads (e.g., backup) | fio --filename=... --direct=1 --ioengine=libaio --bs=1M --iodepth=64 --rw=read --time_based --runtime=300 --group_reporting --name=seq_read |
| Sequential Write | Large block writes | ... --bs=1M --iodepth=64 --rw=write --name=seq_write |
| Random Read IOPS | 4K random reads (database) | ... --bs=4k --iodepth=256 --rw=randread --name=rand_read_iops |
| Random Write IOPS | 4K random writes | ... --bs=4k --iodepth=256 --rw=randwrite --name=rand_write_iops |
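If you run these scenarios regularly, a small wrapper script saves typing. A sketch that loops over all four and stores each result as JSON for later comparison (the target path and sizes are assumptions; adjust them for your hardware):
#!/usr/bin/env bash
TARGET=/mnt/data/fio_testfile
for spec in "seq_read:read:1M:64" "seq_write:write:1M:64" \
            "rand_read_iops:randread:4k:256" "rand_write_iops:randwrite:4k:256"; do
    # Split "name:rw:bs:iodepth" into its four fields
    IFS=: read -r name rw bs depth <<< "$spec"
    sudo fio --name="$name" --filename="$TARGET" --size=10G \
        --rw="$rw" --bs="$bs" --iodepth="$depth" \
        --direct=1 --ioengine=libaio --time_based --runtime=300 \
        --group_reporting --output-format=json --output="${name}.json"
done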
💡NOTE: When interpreting results, the key numbers in the output are BW (bandwidth), IOPS, latency (clat/slat), and utilization. A practical example from a run: “random-read …bw=864KiB/s, iops=211”
After running the test, fio will display an output summary. Here’s what to focus on:
- BW: sustained bandwidth, reported in KiB/s or MiB/s; the headline figure for sequential workloads.
- IOPS: I/O operations per second; the headline figure for small random workloads.
- Latency: slat (submission), clat (completion), and lat (total); check the averages and the high percentiles, not just the minimum.
- util: how busy the device was during the test; values near 100% mean it was saturated.
🟢 Example interpretation
Suppose you ran a 4K random read test and got 70k IOPS but latencies over 4 ms. If the application expects sub-millisecond responses, that may not be acceptable, even though the IOPS number looks high.
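For judgments like that, latency percentiles matter more than averages. One way to pull the headline numbers out, assuming fio 3.x JSON output and jq installed (the path is again a placeholder):
sudo fio --name=rand_read --filename=/mnt/data/fio_testfile --size=4G --rw=randread --bs=4k --iodepth=64 --direct=1 --ioengine=libaio --time_based --runtime=60 --output-format=json |
    jq '.jobs[0].read | {iops, bw_kib: .bw, clat_p99_us: (.clat_ns.percentile."99.000000" / 1000)}'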
Once you’ve benchmarked and interpreted your results, you can apply optimizations. Here are some areas to review:
🔹Ensure correct alignment and file system settings
Misaligned partitions (especially on older hardware) can degrade performance: start sectors that are not aligned to 4 KiB boundaries force read-modify-write cycles on drives with 4 KiB physical sectors. Use tools like fdisk -lu or lsblk -t to inspect alignment.
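A quick way to check, assuming /dev/sda is the disk in question:
# MIN-IO, OPT-IO, and ALIGNMENT columns should be consistent with the drive's sector size
lsblk -t
# With 512-byte logical sectors, a start sector divisible by 8 is 4 KiB aligned
sudo fdisk -lu /dev/sda
# The kernel's own verdict: 0 means properly aligned
cat /sys/block/sda/sda1/alignment_offset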
🔹Adjust I/O scheduler and queue depth
On modern NVMe or high-performance SSDs, the none or mq-deadline scheduler often yields better results than the legacy CFQ. Experiment with echo none | sudo tee /sys/block/<device>/queue/scheduler or use NVMe-specific tuning.
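For example (nvme0n1 is a placeholder, and the udev rule is one common way to persist the setting across reboots):
# Show available schedulers; the active one is in brackets
cat /sys/block/nvme0n1/queue/scheduler
# Switch to 'none' for the current boot
echo none | sudo tee /sys/block/nvme0n1/queue/scheduler
# Persist via udev rule
echo 'ACTION=="add|change", KERNEL=="nvme[0-9]n[0-9]", ATTR{queue/scheduler}="none"' | sudo tee /etc/udev/rules.d/60-iosched.rules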
🔹Use appropriate filesystem options
Choose a filesystem and mount options aligned with your workload (e.g., noatime, appropriate inode size). Also make sure your tests use --direct=1 so the page cache doesn’t inflate the results, whether you are testing a raw device or a file.
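For instance, a data volume mounted with noatime (device and mount point are examples):
# Mount with noatime to skip access-time updates on every read
sudo mount -o noatime /dev/nvme0n1p1 /mnt/data
# Or persist the option in /etc/fstab:
# /dev/nvme0n1p1  /mnt/data  xfs  defaults,noatime  0  2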
🔹Match your benchmark to expected workload
It’s pointless to benchmark with a huge queue depth if your production workload runs at depth 1. Tailor iodepth, numjobs, bs, and rw to match what your applications actually do; mismatched parameters are a common cause of misleadingly “great” results.
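As an illustration, a test shaped like a small OLTP database: a 70/30 read/write mix, 8K blocks, shallow queue. The mix and sizes are assumptions about one kind of workload, not a universal profile:
sudo fio --name=oltp_like --filename=/mnt/data/fio_testfile --size=10G --rw=randrw --rwmixread=70 --bs=8k --iodepth=1 --numjobs=4 --direct=1 --ioengine=libaio --time_based --runtime=300 --group_reporting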
🔹RAID / virtualization / caching layers
If you have underlying RAID arrays, virtualization, or caching layers (LVM cache, ZFS cache, etc.), your results reflect that whole stack, not just raw disk performance. A classic symptom is an impossible figure, such as 20,000 MiB/s of writes, when the test is actually hitting cache rather than the disk. If you suspect caching, invalidate the caches, remount, or test a device in a clean state.
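On Linux, the page cache can be dropped between runs like this (safe, but everything that relied on cached data will momentarily slow down):
# Flush dirty pages to disk, then drop the page cache, dentries, and inodes
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches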
🔹Re-run benchmarks when things change
Any time you change hardware (SSD model, RAID level), update firmware or the kernel, or switch filesystems, re-benchmark to validate the new configuration. Trend your results over time to detect degradation or anomalies (e.g., wear on SSDs, firmware bugs).
Here’s an example of how you might record results from different tests:
| Test | Block size | Depth | BW (MB/s) | IOPS | Latency (avg) | Comment |
|---|---|---|---|---|---|---|
| Seq Read | 1M | 64 | 1200 | — | 0.5 ms | Good throughput |
| Seq Write | 1M | 64 | 950 | — | 1.0 ms | Acceptable write speed |
| Rand Read (4K) | 4K | 256 | — | 75,000 | 0.8 ms | Meets database requirement |
| Rand Write (4K) | 4K | 256 | — | 55,000 | 1.2 ms | Slightly high latency |
Using a table like this simplifies comparison across hardware, configurations, or time.
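To automate that record-keeping, here is a sketch that appends the headline numbers from a fio JSON result to a CSV. It assumes the test was run with --output-format=json, jq is installed, and it reads the read-side stats (adapt for write tests):
#!/usr/bin/env bash
RESULT="$1"          # e.g. rand_read_iops.json
CSV=fio_history.csv
# Create the CSV with a header row on first use
[ -f "$CSV" ] || echo "date,test,bw_kib,iops,clat_mean_us" > "$CSV"
# Extract date, job name, bandwidth (KiB/s), IOPS, and mean completion latency (µs)
jq -r --arg date "$(date +%F)" \
    '.jobs[0] | [$date, .jobname, .read.bw, .read.iops, (.read.clat_ns.mean / 1000)] | @csv' \
    "$RESULT" >> "$CSV"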
Using fio to benchmark and optimize disk performance on Linux gives you actionable insight. The process in summary:
1. Install fio from your distribution’s repositories.
2. Choose the target: a raw device or a file on a mounted filesystem.
3. Define test parameters that match your workload (rw, bs, iodepth, numjobs, direct, ioengine).
4. Run the standard scenarios: sequential read/write and random read/write.
5. Interpret the output: bandwidth, IOPS, latency, and utilization.
6. Apply optimizations: alignment, scheduler, filesystem options, and workload matching.
7. Record your results and re-benchmark whenever hardware, firmware, or configuration changes.
By adopting this systematic approach, you can make data-driven decisions about storage upgrades, workload matching, and disk configuration on Linux. Whether you’re optimizing a database server, a VM cluster, or an embedded device, fio offers the flexibility to test and verify performance with confidence.
Did you find this article helpful? Your feedback is invaluable to us! Feel free to share this post with those who may benefit, and let us know your thoughts in the comments section below.


