Introducing The System Storage Manager (SSM)

Introduction

Red Hat recently introduced the System Storage Manager (SSM), a unified user interface that allows users to manage complicated storage setups in a simple way. SSM shines when it comes to simplifying how sysadmins create, extend, reduce, or delete partitions using the Logical Volume Manager (LVM). Creating and mounting a logical volume takes several steps and, in most cases, involves issuing some, if not all, of the following commands: pvcreate, vgcreate, lvcreate, and mkfs, just to name a few. The ssm command narrows it all down to a single line at the command line interface (CLI).

In this article, we will review the steps involved in creating and mounting a logical volume on a Linux machine (in this case, CentOS 7) and show how those steps are simplified using ssm.
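As a quick preview, here is a sketch of the two workflows side by side. Every command shown is walked through in detail later in this article, so treat this as a map rather than something to run now.

# The classic LVM route: several commands, one per step
$ sudo lvcreate -L+10GB -n lv_test vg00       # carve out a logical volume
$ sudo mkfs.xfs /dev/vg00/lv_test             # put a filesystem on it
$ sudo mkdir -p /local                        # create the mount point
$ sudo mount /dev/vg00/lv_test /local         # mount it

# The SSM route: one command does all of the above
$ sudo ssm create -s 10GB -n lv_ssm_test --fstype xfs -p vg00 /dev/vda2 /ssm_local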


Installing System Storage Manager (SSM)

Installing SSM is simple using the yum command (below):

$ sudo yum install -y system-storage-manager

Once the installation is complete, we can run ssm list to view the current filesystem layout on our Linux machine.

$ sudo ssm list
-----------------------------------------------------------
Device         Free      Used      Total  Pool  Mount point
-----------------------------------------------------------
/dev/vda                       100.00 GB                   
/dev/vda1                        2.00 GB        /boot      
/dev/vda2  59.00 GB  39.00 GB   98.00 GB  vg00             
-----------------------------------------------------------
-------------------------------------------------
Pool  Type  Devices      Free      Used     Total  
-------------------------------------------------
vg00  lvm   1        59.00 GB  39.00 GB  98.00 GB  
-------------------------------------------------
------------------------------------------------------------------------------------------------
Volume                       Pool  Volume size  FS    FS size      Free  Type    Mount point    
------------------------------------------------------------------------------------------------
/dev/vg00/root               vg00     25.00 GB  xfs  24.99 GB  22.15 GB  linear  /              
/dev/vg00/swap               vg00      3.88 GB                           linear                 
/dev/vg00/home               vg00     10.12 GB  xfs  10.11 GB  10.07 GB  linear  /home          
/dev/vda1                              2.00 GB  xfs   1.99 GB   1.80 GB          /boot          
------------------------------------------------------------------------------------------------

In the sample output (above), we can figure out some things about our Linux virtual machine. We know it is using XFS, a high-performance, scalable filesystem originally designed by Silicon Graphics Inc., now known as Silicon Graphics International (SGI) after being bought by Fremont-based Rackable Systems. Techspot published an article detailing the rise and fall of Silicon Graphics, if you're interested in the history.
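If you want to verify the filesystem type on your own machine, a couple of standard commands will report it (a quick optional check, not part of the SSM workflow):

# Print just the filesystem type of the root mount
$ findmnt -no FSTYPE /

# Or list the type next to usage for every mounted filesystem
$ df -T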

Provisioning a new partition using the Logical Volume Manager (LVM)

Now, we are going to provision a new volume using LVM and the associated vgcreate and lvcreate commands. This is the old way of doing things that most sysadmins are familiar with.

First, we use the vgs command to inspect and display information about the volume group.

$ sudo vgs
[sudo] password for admin:
  VG   #PV #LV #SN Attr   VSize   VFree
  vg00   1   3   0 wz--n- <98.00g 59.00g

Right away, we can see in the output (above) that the vg00 volume group has 59 GB of free space, which we can use to create a new volume or extend an existing one.
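If you prefer a more verbose report of the same numbers, vgdisplay prints the volume group's extent size and free extents (an optional extra check, not part of the original walkthrough):

# Verbose breakdown of vg00, including the "Free  PE / Size" line
$ sudo vgdisplay vg00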

Creating Logical Volumes

In this example, we will use the lvcreate command to provision a new 10 GB logical volume called lv_test for a new mount point we want established on our machine called /local. If lvcreate detects a stale filesystem signature on the underlying blocks, it will warn you and ask whether to wipe it; answer y to clear the old signature, as shown below.

$ sudo lvcreate -L+10GB -n lv_test vg00
[sudo] password for admin:
WARNING: xfs signature detected on /dev/vg00/lv_test at offset 0. Wipe it? [y/n]: y
  Wiping xfs signature on /dev/vg00/lv_test.
  Logical volume "lv_test" created.

Next, we run the lvs command to display information about the logical volumes on our machine.

$ sudo lvs
  LV      VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home    vg00 -wi-ao---- 10.12g
  lv_test vg00 -wi-a----- 10.00g
  root    vg00 -wi-ao---- 25.00g
  swap    vg00 -wi-ao---- <3.88g

As you can see (above), the lv_test logical volume is available but not yet mounted, as indicated by the missing o (open) flag in its Attr column.
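If you want just those flags, lvs can select its output columns explicitly. A minimal sketch, using standard lvs field names:

# Show only the name and attribute string for each logical volume in vg00
$ sudo lvs -o lv_name,lv_attr vg00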

Provisioning a Filesystem

Next, we proceed with building a filesystem on our newly created logical volume using the mkfs command, which stands for make filesystem.

To make an EXT3 filesystem, run the following command:

$ sudo mkfs.ext3 /dev/vg00/lv_test

To make an EXT4 filesystem, run the following command:

$ sudo mkfs.ext4 /dev/vg00/lv_test

However, for the purposes of this tutorial, we will make an XFS filesystem by running the following command:

$ sudo mkfs.xfs /dev/vg00/lv_test

If you already created a different filesystem on the volume, you can force an overwrite and reformat it as XFS by running the same command with the -f option.

$ sudo mkfs.xfs /dev/vg00/lv_test -f
meta-data=/dev/vg00/lv_test      isize=512    agcount=4, agsize=655360 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=2621440, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Your output should look similar to the one above. The lv_test logical volume is now formatted as XFS and ready to be mounted.
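To double-check which filesystem signature the device now carries, blkid reports the type and UUID (an optional verification step, not part of the original procedure):

# Print the filesystem type and UUID recorded on the logical volume
$ sudo blkid /dev/vg00/lv_test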

Mounting Logical Volumes

First, we need to create the /local directory, which will reside directly under the root of the filesystem, as indicated by the leading "/". Then, using the mount command, we'll mount lv_test on our machine.

$ sudo mkdir -p /local

$ ls -lh /
total 16K
lrwxrwxrwx.   1 root root    7 Jan 24  2022 bin -> usr/bin
dr-xr-xr-x.   5 root root 4.0K Mar 10 10:30 boot
drwxr-xr-x.  20 root root 3.3K Mar 10 14:01 dev
drwxr-xr-x.  95 root root 8.0K Mar 10 14:01 etc
drwxr-xr-x.   2 root root    6 Jan 24  2022 extras
drwxr-xr-x.   4 root root   41 Jan 29  2022 home
lrwxrwxrwx.   1 root root    7 Jan 24  2022 lib -> usr/lib
lrwxrwxrwx.   1 root root    9 Jan 24  2022 lib64 -> usr/lib64
drwxr-xr-x.   2 root root    6 Jan 30  2022 local
drwxr-xr-x.   2 root root    6 Apr 11  2018 media
drwxr-xr-x.   2 root root    6 Apr 11  2018 mnt

The example (above) shows the local directory residing directly under the root directory (/). It was created by issuing the mkdir command with the -p option, which suppresses the error if the directory already exists and creates parent directories as needed.

We can now mount lv_test using the mount command:

$ sudo mount /dev/vg00/lv_test /local

Next, we run the disk free or df command to verify that /local is mounted:

$ df -TPh
Filesystem                         Type      Size  Used Avail Use% Mounted on
devtmpfs                           devtmpfs  1.9G     0  1.9G   0% /dev
tmpfs                              tmpfs     1.9G     0  1.9G   0% /dev/shm
tmpfs                              tmpfs     1.9G  8.7M  1.9G   1% /run
tmpfs                              tmpfs     1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/mapper/vg00-root              xfs        25G  4.4G   21G  18% /
/dev/vda1                          xfs       2.0G  326M  1.7G  16% /boot
/dev/mapper/vg00-lv_test           xfs        10G   33M   10G   1% /local

Congratulations! We've now established a new mount point (/local) on our machine using LVM. This mount point will remain in place until we unmount or remove it, or until we reboot the machine. If we want the mount point to persist across reboots, we have one more step to complete.

Persistent Mount Points

To make a mount point persist after a reboot, we need to add an entry to the /etc/fstab file using a text editor such as vim. But first, make a backup copy:

$ sudo cp /etc/fstab /etc/fstab.backup

NOTE: It is always good practice to back up a file before modifying it, especially when you're changing system files.

$ sudo vim /etc/fstab

Add an entry for /local to the file. Then save and quit.

#
# /etc/fstab
# Created by anaconda on Mon Jan 24 03:58:37 2022
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/vg00-root / xfs defaults 0 0
UUID=f28e5d1d-730e-454a-b296-5085a2ee2279 /boot xfs defaults 0 0
/dev/mapper/vg00-home /home xfs defaults 0 0
/dev/mapper/vg00-swap swap swap defaults 0 0
/dev/mapper/vg00-lv_test /local xfs defaults 0 0

Here's what we just did, step by step: we opened the /etc/fstab file and duplicated the /dev/mapper/vg00-home /home xfs defaults 0 0 line, then:

  1. Replaced vg00-home with vg00-lv_test for the new logical volume.
  2. Replaced /home with the new mount point /local.
  3. Saved and quit the file.

You can test that the /etc/fstab entry is correct and working as expected by unmounting and remounting the /local filesystem; the following commands should run without errors. The ";" (or "&&") separator allows us to run multiple commands on the same line.

$ sudo umount /local; sudo mount /local

$ df -TPh
Filesystem                         Type      Size  Used Avail Use% Mounted on
devtmpfs                           devtmpfs  1.9G     0  1.9G   0% /dev
tmpfs                              tmpfs     1.9G     0  1.9G   0% /dev/shm
tmpfs                              tmpfs     1.9G  8.7M  1.9G   1% /run
tmpfs                              tmpfs     1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/mapper/vg00-root              xfs        25G  4.4G   21G  18% /
/dev/vda1                          xfs       2.0G  326M  1.7G  16% /boot
/dev/mapper/vg00-home              xfs        11G  3.0G  7.2G  29% /home
tmpfs                              tmpfs     379M     0  379M   0% /run/user/1001
/dev/mapper/vg00-lv_test           xfs        10G   33M   10G   1% /local

Notice how the mount command is shorter (above): we no longer have to provide the /dev/vg00/lv_test device path because we've entered it into the /etc/fstab file. The ultimate test is rebooting your machine; upon reboot, you should see that the /local mount point persists. Before rebooting, you can also exercise the entry directly, as shown below.
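Here is a short sketch of those extra pre-reboot checks, assuming the /etc/fstab entry from above; none of these commands are destructive:

# Show exactly what changed relative to the backup taken earlier;
# the only difference should be the new /local entry
$ diff /etc/fstab /etc/fstab.backup

# Attempt to mount everything listed in /etc/fstab; an error here
# points at a bad entry before you risk a reboot
$ sudo mount -a

# Confirm the kernel sees /local and which device backs it
$ findmnt /local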

Provisioning a new partition using the System Storage Manager (SSM)

If you've made it this far, you've seen most of what it takes to provision a new partition on your machine using LVM. And that is without accounting for a scenario in which the volume group has no available space, or a new physical disk is attached to the machine and needs to be provisioned.

Now, we are going to witness the power of ssm by repeating the same procedure we just completed using LVM, but much more efficiently. The only differences are the name of the logical volume (lv_ssm_test) and the mount point (/ssm_local).

Let's issue the ssm list command to see what changes we've made to our filesystem so far.

$ sudo ssm list
-----------------------------------------------------------
Device         Free      Used      Total  Pool  Mount point
-----------------------------------------------------------
/dev/vda                       100.00 GB                   
/dev/vda1                        2.00 GB        /boot      
/dev/vda2  49.00 GB  49.00 GB   98.00 GB  vg00             
-----------------------------------------------------------
-------------------------------------------------
Pool  Type  Devices      Free      Used     Total  
-------------------------------------------------
vg00  lvm   1        49.00 GB  49.00 GB  98.00 GB  
-------------------------------------------------
------------------------------------------------------------------------------------------------
Volume                       Pool  Volume size  FS    FS size      Free  Type    Mount point    
------------------------------------------------------------------------------------------------
/dev/vg00/root               vg00     25.00 GB  xfs  24.99 GB  22.15 GB  linear  /              
/dev/vg00/swap               vg00      3.88 GB                           linear                 
/dev/vg00/home               vg00     10.12 GB  xfs  10.11 GB  10.07 GB  linear  /home          
/dev/vg00/lv_test            vg00     10.00 GB  xfs   9.99 GB   9.96 GB  linear  /local         
/dev/vda1                              2.00 GB  xfs   1.99 GB   1.80 GB          /boot          
------------------------------------------------------------------------------------------------

We see the available space has been reduced by 10 GB, which is now consumed by the lv_test logical volume mounted at /local. The following ssm command will provision another 10 GB, assign it to a logical volume labeled lv_ssm_test, and mount it at /ssm_local.

$ sudo ssm create -s 10GB -n lv_ssm_test --fstype xfs -p vg00 /dev/vda2 /ssm_local
[sudo] password for admin:
Logical volume "lv_ssm_test" created.
meta-data=/dev/vg00/lv_ssm_test isize=512 agcount=4, agsize=655360 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=2621440, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
Directory '/ssm_local' does not exist! Create (Y/n/q) ? Y

In one line, we've managed to create the logical volume, allocate the appropriate size (10 GB), make the filesystem, and mount it. This is the power of SSM: we reduced what would otherwise be five or six steps to one or two. If the target directory doesn't already exist, ssm even asks permission to create it (the last line of the output above).
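To confirm the result without wading through the full report, ssm list accepts a resource type. A quick check:

# Show only the volumes table instead of devices, pools, and volumes together
$ sudo ssm list volumes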

The final step, just like the LVM procedure, is determining whether the mount point should persist on your machine. For this tutorial, we want the new mount point to be available after a reboot, so we've modified /etc/fstab, adding an entry for /ssm_local (below).

#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/vg00-root / xfs defaults 0 0
UUID=f28e5d1d-730e-454a-b296-5085a2ee2279 /boot xfs defaults 0 0
/dev/mapper/vg00-home /home xfs defaults 0 0
/dev/mapper/vg00-lv_test /local xfs defaults 0 0
/dev/mapper/vg00-swap swap swap defaults 0 0
/dev/mapper/vg00-lv_ssm_test /ssm_local xfs defaults 0 0

Conclusion

We reviewed the LVM method for adding a new partition or filesystem to a Linux machine and, in doing so, highlighted how simple yet powerful the ssm command is for completing similar tasks. We've only scratched the surface of the many features and benefits of ssm. You can find more information on ssm and common ssm tasks on the Red Hat website.
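As one example of those additional features, resizing the volume we just created is equally terse. A minimal sketch, assuming the vg00 pool still has free space; verify the exact syntax with man ssm before running:

# Grow lv_ssm_test (and its XFS filesystem) by 2 GB in a single step
$ sudo ssm resize -s +2GB /dev/vg00/lv_ssm_test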
 
Was this article helpful to you? If so, leave us a comment below and share!
