Chapter 3 Upgrading Gluster Storage for Oracle Linux

This section discusses upgrading to Release 8 of Gluster Storage for Oracle Linux from a previous release (Release 6, 5, 4.1, or 3.12).

Before you perform an upgrade, configure the Oracle Linux yum server repositories or ULN channels. For information on setting up access to the repositories or channels, see Section 2.3, “Enabling Access to the Gluster Storage for Oracle Linux Packages”.

Make sure that you also disable the Gluster Storage for Oracle Linux repository or ULN channel for the release that you are upgrading from (see the example after this list):

  • Release 6.  ol7_gluster6 repository or ol7_arch_gluster6 ULN channel.

  • Release 5.  ol7_gluster5 repository or ol7_arch_gluster5 ULN channel.

  • Release 4.1.  ol7_gluster41 repository or ol7_arch_gluster41 ULN channel.

  • Release 3.12.  ol7_gluster312 repository or ol7_arch_gluster312 ULN channel.
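
For example, on a system that is configured to use the Oracle Linux yum server, you could disable the Release 6 repository by using the yum-config-manager utility from the yum-utils package. Substitute the repository name for the release that you are upgrading from:

    sudo yum-config-manager --disable ol7_gluster6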

Important

Do not make any configuration changes during the upgrade. Upgrade the servers before you upgrade the clients. After the upgrade, ensure that your entire Gluster deployment is running the same Gluster server and client versions.

3.1 Performing an Online Upgrade

An online upgrade does not require any volume down time. During the upgrade, Gluster clients can continue to access the volumes.

Only replicated and distributed replicated volumes can be upgraded online. Any other volume types must be upgraded offline. See Section 3.2, “Performing an Offline Upgrade” for information on performing an offline upgrade.
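
For example, you can check the type of a volume (myvolume in this example) before you choose an upgrade method:

    sudo gluster volume info myvolume | grep '^Type:'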

Perform the following procedure on each Gluster node, one node at a time. The procedure assumes that no two replicas of the same replica set are hosted on the same server in the trusted storage pool.

  1. Ensure that your system is running the latest update level of the operating system.

    If not, then run the following commands:

    sudo yum update
    sudo reboot
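
    You can check the installed operating system release at any point, for example:

    cat /etc/oracle-release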
  2. Update to the new directory structure introduced in Release 8 of Gluster Storage for Oracle Linux.

    The new directory structure applies to the changelog files that are used by geo-replication. The glusterfs-georep-upgrade.py script moves these files to the required new location.

    1. Download the glusterfs-georep-upgrade.py script by using one of the following links:

    2. Stop the geo-replication session.

      sudo gluster volume geo-replication primary_volume georep@secondary-node1.example.com::secondaryvol stop
      
    3. Use the script to migrate the geo-replication changelogs to the new directory structure.

      sudo glusterfs-georep-upgrade.py path-to-brick
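
      For example, the following command migrates the changelogs for a brick located at /bricks/myvolume/brick1 (an example path; substitute your own brick path). If a node hosts more than one brick, run the command once for each brick path.

      sudo glusterfs-georep-upgrade.py /bricks/myvolume/brick1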
  3. If you are upgrading from Gluster Storage for Oracle Linux Release 3.12, you need to unset some deprecated features. Do the following:

    1. Check if the lock-heal or grace-timeout features are being used.

      sudo gluster volume info
    2. If these parameters are listed in the Options Reconfigured section of the output, unset them with the following commands:

      sudo gluster volume reset myvolume features.lock-heal
      sudo gluster volume reset myvolume features.grace-timeout
  4. Stop the Gluster service and all Gluster file system processes, and then verify that no related processes are running.

    sudo systemctl stop glusterd
    sudo killall glusterfs glusterfsd
    sudo ps aux | grep gluster
  5. Stop any Gluster related services, such as Samba and NFS-Ganesha.

    sudo systemctl stop smb
    sudo systemctl stop nfs-ganesha
  6. Update the Gluster Storage for Oracle Linux packages:

    sudo yum update glusterfs-server
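
    You can confirm the version of the installed package after the update, for example:

    sudo yum list installed glusterfs-server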
  7. (Optional) If you are using NFS-Ganesha, upgrade the package.

    sudo yum update nfs-ganesha-gluster
  8. Start the Gluster service and any Gluster related services that you might be using, for example, Samba and NFS-Ganesha.

    sudo systemctl daemon-reload
    sudo systemctl start glusterd
    sudo systemctl start smb
    sudo systemctl start nfs-ganesha
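
    You can optionally confirm that the services are active, for example:

    sudo systemctl status glusterd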
  9. Check for bricks that might be offline and bring them online.

    sudo gluster volume status
    sudo gluster volume start volume_name force
  10. (Optional) For any replicated volumes, turn off the use of MD5 checksums during volume healing so that Gluster can run on FIPS-compliant systems.

    sudo gluster volume set myvolume fips-mode-rchecksum on
  11. When all bricks are online, heal the volumes, then optionally view the healing information for each volume.

    for i in $(sudo gluster volume list); do sudo gluster volume heal $i; done
    sudo gluster volume heal volume_name info
  12. If you stopped a geo-replication session for the upgrade, restart the session.

    sudo gluster volume geo-replication primary_volume georep@secondary-node1.example.com::secondaryvol start
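
    You can confirm that the session has resumed, for example:

    sudo gluster volume geo-replication primary_volume georep@secondary-node1.example.com::secondaryvol status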
    

3.2 Performing an Offline Upgrade

An offline upgrade requires volume down time. During the upgrade, Gluster clients cannot access the volumes. You can upgrade the Gluster nodes in parallel to minimize volume down time.

  1. Stop the volume.

    sudo gluster volume stop myvolume
  2. Upgrade all Gluster nodes using the steps provided in Section 3.1, “Performing an Online Upgrade”.

    Note

    You do not need to perform the volume healing step of the online upgrade procedure. Because the volumes are stopped during the upgrade, no volume healing is required.

  3. After the upgrade, restart the volume.

    sudo gluster volume start myvolume
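
    You can then confirm that the volume and its bricks are back online, for example:

    sudo gluster volume status myvolume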

3.3 Post Upgrade Requirements

Complete this procedure after either an online or an offline upgrade.

  1. Check the operating version number for all volumes.

    sudo gluster volume get all cluster.op-version
    Option                                  Value                                   
    ------                                  -----                                   
    cluster.op-version                      60000
  2. If the parameter value is not 60000, set the version number with the following command:

    sudo gluster volume set all cluster.op-version 60000
  3. Upgrade the clients that access the volumes. See Section 3.4, “Upgrading Gluster Clients” for information on upgrading Gluster clients.

3.4 Upgrading Gluster Clients

When the Gluster server nodes have been upgraded, you should upgrade Gluster clients. Complete this procedure on every client.

  1. Unmount all Gluster mount points on the client.
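
    For example, assuming a volume is mounted at /mnt/glusterfs (an example mount point; substitute your own):

    sudo umount /mnt/glusterfs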

  2. Stop all applications that access the volumes.

  3. For Gluster native clients (FUSE), update Gluster.

    sudo yum update glusterfs glusterfs-fuse
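
    After the update, you can confirm that the client version matches the server version, for example:

    glusterfs --version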
  4. Mount all Gluster shares.
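
    For example, to mount a volume by using the Gluster native (FUSE) client, assuming the example server name node1.example.com, volume name myvolume, and mount point /mnt/glusterfs:

    sudo mount -t glusterfs node1.example.com:/myvolume /mnt/glusterfs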

  5. Start any applications that were stopped for the upgrade.