
What’s New and Updated With Spectrum Scale

AIX expert Jaqui Lynch provides a primer on Spectrum Scale


Spectrum Scale (formerly known as GPFS) is a non-blocking filesystem used in high-performance computing clusters, Oracle environments and many other environments that need a high-performance shared filesystem going beyond what NFS or Samba can provide. This article discusses recent updates for my Spectrum Scale systems, which were taken from 4.2.3 to 5.0.3 and then to 5.0.4.

What’s New in Spectrum Scale 5.0.4

IBM has recently announced a Scale Developer Edition. This is limited to 12 TB, which is enough to build a robust test environment. It is free for non-production use such as testing and upgrade preparation, and can be downloaded from the Spectrum Scale Try and Buy page.

5.0.4 includes a lot of new functionality, including support for Red Hat Enterprise Linux 8, the ability to run space reclaims on NVMe devices and improved recovery of CCR-enabled clusters. There is also an RPQ available to provide thin-provisioning support for file system data and metadata. One of the improvements in the network environment is provided on GitHub: a C program called nsdperf that can be used for stress testing. Additionally, mmhealth has been updated to monitor more events. New monitoring and health events have been added to watch SSD wear, firmware, nameserver issues and critical threads in mmfsd that may be overloaded or hung, and there are new thresholds for DiskIoLatency_read, DiskIoLatency_write and MemoryAvailable_percent. There are also significant updates to SMB and NFS support, as well as to AFM (active file management).
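As a quick sketch of what the expanded monitoring looks like in practice, the mmhealth command can be used to review component status and the active threshold rules (output will of course vary by cluster):

mmhealth node show           # component status (GPFS, NETWORK, FILESYSTEM, ...) on this node
mmhealth cluster show        # cluster-wide health summary, run from any node
mmhealth thresholds list     # active threshold rules, e.g. MemoryAvailable_percent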

Prior to 5.0.3 there was a mutex contention issue with SGInodeMapMutex that impacted Spectrum Scale file-create performance as the number of threads creating files on a given node increased. This has been mitigated in 5.0.3 with code fixes, combined with tuning maxFilesToCache and setting maxInodeDeallocHistory to 0 (or at least reducing it from its default of 4096). 5.0.3 also resolves some of the issues around token contention when moving directories out of inodes.

In 5.0.4, enhancements were made to add an option to execute small sequential AIO/DIO writes as buffered I/O. This allows multiple small writes to be combined into a single larger I/O. The parameters involved are dioSmallSeqWriteBatching and dioSmallSeqWriteThreshold, and they are set on the client nodes.
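Enabling the batching behavior on the client nodes might look like the following; the node class name and the threshold value are illustrative, not documented defaults:

# Enable batching of small sequential direct I/O writes on the clients.
# "clientNodes" is a hypothetical node class; choose a threshold (bytes)
# suited to your workload.
mmchconfig dioSmallSeqWriteBatching=yes -N clientNodes
mmchconfig dioSmallSeqWriteThreshold=65536 -N clientNodes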

In 5.0.4 (APAR IJ22412), a significant improvement was made to resolve locking issues with mmap read performance. Performance is dramatically improved when multiple threads are reading the same file.

Cluster Upgrades

The initial cluster was just a single node that we were about to expand with four x86 Linux nodes and one Power Linux node. It was running Spectrum Scale 4.2.3. The initial step was to upgrade this node to 5.0.3 and then bring in the new client nodes, which would be installed at 5.0.3. Several months later we went through the update process to go to 5.0.4.

The cluster has a single AIX node that is connected via Fibre Channel to all the disk LUNs. The five client nodes are all RHEL 7 (four x86 and one Linux on POWER); they reach the storage over the network as NSD clients. This, unfortunately, rules out rolling updates, as the AIX node owns all the disks. In the future this will change, but for this upgrade series the upgrade was disruptive.

Upgrade Steps

The first step is to download the software from Fix Central. It is a separate download for each operating system: I had to download the AIX, Linux 64-bit x86 and Linux PowerPC 64 Little Endian versions. In all cases a backup of the operating system was taken prior to the upgrade.

4.2.3 to 5.0.3 on AIX

Since this was still a single-node cluster, this was a very straightforward upgrade. On AIX it is a two-stage upgrade: first you upgrade to the base release level and then you apply the latest fix level. The cluster was shut down and smitty update_all (install) was used for the first stage, after which lslpp -l | grep gpfs was used to confirm the interim level.

Then smitty update_all was used again to bring the node to 5.0.3, and the levels were checked again:

lslpp -l | grep gpfs

  gpfs.adv           COMMITTED  GPFS Advanced Features
  gpfs.base          COMMITTED  GPFS File Manager
  gpfs.crypto        COMMITTED  GPFS Cryptographic Subsystem
  gpfs.ext           COMMITTED  GPFS Extended Features
  gpfs.gskit         COMMITTED  GPFS GSKit Cryptography
  gpfs.license.dm    COMMITTED  GPFS Data Management Edition
  gpfs.msg.en_US     COMMITTED  GPFS Server Messages - U.S.
  gpfs.base          COMMITTED  GPFS File Manager
  gpfs.docs.data     COMMITTED  GPFS Server Manpages and Documentation

The cluster was then started and tested.
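As an aside, the interactive smitty update_all step above can also be driven non-interactively with the AIX install_all_updates command; the download directory shown here is hypothetical:

# Apply all filesets found in the download directory, agreeing to licenses (-Y).
install_all_updates -d /tmp/gpfs_downloads -Y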

I then updated the cluster configuration level so that it would support the new 5.0.3 functionality, using:

mmchconfig release=LATEST

Prior to running that command, mmlsconfig minReleaseLevel showed the old 4.2.3 level; after the command was run it showed the new 5.0.3 level.
The additional nodes were then brought into the cluster, all at 5.0.3.

5.0.3 to 5.0.4 on the Whole Cluster

Several months later, 5.0.4 came out with some features that made upgrading worthwhile. After downloading the correct software to each node and taking backups, the cluster was shut down completely.

mmshutdown -a

mmgetstate -as was used to check that everything was down.

The AIX node was updated first, then the Power Linux node and finally the x86 nodes.

AIX Update

As with the earlier update, smitty update_all was used to first update the node to the base level and then to apply the 5.0.4 fix level. Each time there were six filesets to install. At the end lslpp showed:

lslpp -l | grep gpfs

  gpfs.adv           APPLIED    GPFS Advanced Features
  gpfs.base          APPLIED    GPFS File Manager
  gpfs.crypto        APPLIED    GPFS Cryptographic Subsystem
  gpfs.ext           APPLIED    GPFS Extended Features
  gpfs.gskit         COMMITTED  GPFS GSKit Cryptography
  gpfs.license.dm    COMMITTED  GPFS Data Management Edition
  gpfs.msg.en_US     APPLIED    GPFS Server Messages - U.S.
  gpfs.base          APPLIED    GPFS File Manager
  gpfs.docs.data     APPLIED    GPFS Server Manpages and Documentation

Spectrum Scale was then started just on the AIX node to make sure there were no issues and that the filesystems mounted, etc.

Linux Updates

The next node to be updated was the Linux on Power node, followed by the x86 Linux nodes. The process was the same for all of them, but each architecture has its own self-extracting install file: one for Linux on Power (ppc64le) and one for Linux on x86 (x86_64).

The Linux install differs from the AIX one in that there is no interim level: you go straight to 5.0.4.

The first step, after ensuring Scale is down, is to unload the kernel modules:

On Linux on Power you will see:

mmfsenv -u

Unloading modules from /lib/modules/3.10.0-1062.4.1.el7.ppc64le/extra

Unloading module tracedev

On Linux on x86 you will see:

mmfsenv -u

Unloading modules from /lib/modules/3.10.0-957.el7.x86_64/extra

Unloading module tracedev

The Linux-install file should then be made executable and run – this extracts the Scale code for the actual install.  You will need to accept the license agreement during that step.
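The extraction step looks something like this; the file name shown is hypothetical, so substitute the package you actually downloaded for your architecture:

# Make the self-extracting installer executable and run it;
# accept the license agreement when prompted. (File name is illustrative.)
chmod +x Spectrum_Scale_install-5.0.4-Linux-install
./Spectrum_Scale_install-5.0.4-Linux-install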

The files and rpms are extracted into: /usr/lpp/mmfs/

cd /usr/lpp/mmfs/

This is an upgrade, so the command issued will be something like:

rpm -Uvh gpfs.base*.rpm gpfs.gpl*rpm gpfs.license*rpm gpfs.msg*rpm gpfs.compression*rpm gpfs.adv*rpm gpfs.crypto*rpm gpfs.gskit*rpm

You can then run rpm -qa | grep gpfs to check the levels. On Power Linux each gpfs package should now show the 5.0.4 level with a ppc64le architecture suffix; on x86 the suffix will be x86_64 instead.

You will then need to build the portability layer:

/usr/lpp/mmfs/bin/mmbuildgpl
Once that is done you can bring the node up and run your tests.
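Bringing a single node up for testing can be sketched as follows (the node name is hypothetical):

# Start Scale on just this node, confirm it goes active, and mount the filesystems.
mmstartup -N linuxnode1
mmgetstate -N linuxnode1
mmmount all -N linuxnode1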

Once all the nodes have been upgraded and tested you can then update the release to LATEST.  This is done using:

mmchconfig release=LATEST

This will update minReleaseLevel, which you can confirm with mmlsconfig.
Finishing Touches

The final step is updating the filesystems to the latest level. In Scale 5.0.4 new filesystems are created at format level 22.0; the filesystems here were originally created at level 20.01 and had previously been upgraded to level 21.00 under 5.0.3. Existing filesystems remain at the level they were last upgraded to. Once all the nodes in the cluster have been upgraded, all the filesystems should be upgraded to level 22.0. This can be done dynamically, without an outage, by running the following command against each filesystem:

mmchfs filesystem -V full

Level 22.0 adds support for thin-provisioned storage devices and NVMe SSDs, plus some additional AFM functionality. Once you update to level 22.0, only nodes at 5.0.4 or higher can access the filesystems.

I usually wait a week before running mmchfs against each filesystem. While it is highly unlikely that I would revert to the old cluster level, I like to keep that open as an option; once you update the filesystem you can't go back.
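When you are ready, the per-filesystem upgrade can be scripted as a simple loop (the filesystem names are hypothetical):

# Upgrade each filesystem's format to the cluster's new level.
# Only run this once every node is at 5.0.4 -- it cannot be undone.
for fs in gpfs0 gpfs1; do
    mmchfs $fs -V full
done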

You can check the current filesystem level by running mmlsfs against the filesystem. About halfway down you will see lines like:

-V                 21.00            Current file system version
                   20.01            Original file system version
In this article we have looked at what is new in 5.0.4 and at what is involved in updating a Spectrum Scale cluster to it. While we were unable to perform a rolling upgrade, that is certainly an option where you have redundancy in the servers that own the disks; because this is a mixed cluster, it was not possible to have the AIX node and the Linux nodes all be fibre connected. The upgrade itself was very straightforward.
