Chapter 3 Upgrading Gluster Storage for Oracle Linux
This section discusses upgrading to Release 8 of Gluster Storage for Oracle Linux from a previous release (Release 6, 5, 4.1, or 3.12).
Before you perform an upgrade, configure the Oracle Linux yum server repositories or ULN channels. For information on setting up access to the repositories or channels, see Section 2.3, “Enabling Access to the Gluster Storage for Oracle Linux Packages”.
Make sure you also disable the Gluster Storage for Oracle Linux repositories and channels for the previous releases:
-
Release 6: ol7_gluster6 repository or ol7_arch_gluster6 ULN channel, where arch is the system architecture, such as x86_64.
-
Release 5: ol7_gluster5 repository or ol7_arch_gluster5 ULN channel.
-
Release 4.1: ol7_gluster41 repository or ol7_arch_gluster41 ULN channel.
-
Release 3.12: ol7_gluster312 repository or ol7_arch_gluster312 ULN channel.
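If you manage repositories with yum-config-manager (from the yum-utils package), the four superseded repositories can be disabled in one loop. The sketch below only prints the commands so you can review them first; the function name is illustrative, and removing the leading echo runs the commands directly.

```shell
# Sketch: print a disable command for each superseded Gluster repository.
# Remove the leading "echo" to execute the commands (requires yum-utils).
disable_old_gluster_repos() {
    for repo in ol7_gluster6 ol7_gluster5 ol7_gluster41 ol7_gluster312; do
        echo "sudo yum-config-manager --disable $repo"
    done
}

disable_old_gluster_repos
```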
Do not make any configuration changes during the upgrade. Upgrade the servers first before upgrading clients. After the upgrade, ensure that your entire Gluster deployment is running the same Gluster server and client versions.
3.1 Performing an Online Upgrade
An online upgrade does not require any volume down time. During the upgrade, Gluster clients can continue to access the volumes.
Only replicated and distributed replicated volumes can be upgraded online. Any other volume types must be upgraded offline. See Section 3.2, “Performing an Offline Upgrade” for information on performing an offline upgrade.
Perform the following procedure on each Gluster node. The procedure assumes that multiple replicas of a replica set are not part of the same server in the trusted storage pool.
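The assumption that no two replicas of a replica set live on the same server can be checked from the brick list. The helper below is a hypothetical sketch: it reads brick entries in host:/path form on standard input and prints any host that holds more than one brick.

```shell
# Hypothetical helper: read brick entries ("host:/path") on stdin and
# print each host that appears more than once.
find_duplicate_brick_hosts() {
    cut -d: -f1 | sort | uniq -d
}
```

For example, feed it the Brick lines from the volume info output: sudo gluster volume info myvolume | awk '/^Brick[0-9]+:/ {print $2}' | find_duplicate_brick_hosts. Any hostname it prints indicates replicas sharing a server.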
-
Ensure that your system is running the latest update level of the operating system.
If not, then run the following commands:
sudo yum update
sudo reboot
-
Update to the new directory structure introduced in Release 8 of Gluster Storage for Oracle Linux.
The new directory structure specifically serves changelog files that are related to geo-replication. The script moves these files to the new required location.
-
Download the glusterfs-georep-upgrade.py script.
-
Stop the geo-replication session.
sudo gluster volume geo-replication primary_volume georep@secondary-node1.example.com::secondaryvol stop
-
Use the script to migrate the geo-replication changelogs to the new structure.
sudo glusterfs-georep-upgrade.py path-to-brick
-
-
If you are upgrading from Gluster Storage for Oracle Linux Release 3.12, you need to unset some deprecated features. Do the following:
-
Check whether the lock-heal or grace-timeout features are being used.
sudo gluster volume info
-
If these parameters are listed under the Options Reconfigured section of the output, unset them with the following commands:
sudo gluster volume reset myvolume features.lock-heal
sudo gluster volume reset myvolume features.grace-timeout
-
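To check every volume at once, the volume info output can be filtered for the two deprecated options. The filter below is a sketch with an illustrative name; pipe sudo gluster volume info through it, and any line it prints belongs to a volume that still needs the reset commands above.

```shell
# Sketch: print any lines that mention the deprecated options.
# The "|| true" keeps the exit status at 0 when nothing matches.
find_deprecated_options() {
    grep -E 'features\.(lock-heal|grace-timeout)' || true
}
```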
-
Stop the Gluster service and all Gluster file system processes, and then verify that no related processes are running.
sudo systemctl stop glusterd
sudo killall glusterfs glusterfsd
sudo ps aux | grep gluster
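The ps aux | grep check can match the grep process itself. A slightly stricter sketch, assuming pgrep is available (it is part of the procps package), matches the daemon names exactly; the function name is illustrative, and no output means no Gluster processes survived.

```shell
# Sketch: report any Gluster daemon that survived the stop/killall step.
# pgrep -x matches the process name exactly, so this check does not
# match itself the way "ps aux | grep gluster" can.
check_leftover_gluster_procs() {
    for proc in glusterd glusterfs glusterfsd; do
        if pgrep -x "$proc" > /dev/null 2>&1; then
            echo "$proc is still running"
        fi
    done
}

check_leftover_gluster_procs
```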
-
Stop any Gluster related services, such as Samba and NFS-Ganesha.
sudo systemctl stop smb
sudo systemctl stop nfs-ganesha
-
Update the Gluster Storage for Oracle Linux packages:
sudo yum update glusterfs-server
-
(Optional) If you are using NFS-Ganesha, upgrade the package.
sudo yum update nfs-ganesha-gluster
-
Start the Gluster service and any Gluster related services that you might be using, for example, Samba and NFS-Ganesha.
sudo systemctl daemon-reload
sudo systemctl start glusterd
sudo systemctl start smb
sudo systemctl start nfs-ganesha
-
Check for bricks that might be offline and bring them online.
sudo gluster volume status
sudo gluster volume start volume_name force
-
(Optional) For any replicated volumes, turn off the use of MD5 checksums during volume healing so that you can run Gluster on FIPS-compliant systems.
sudo gluster volume set myvolume fips-mode-rchecksum on
-
When all bricks are online, heal the volumes, then optionally view the healing information for each volume.
for i in `gluster volume list`; do sudo gluster volume heal $i; done
sudo gluster volume heal volume_name info
-
If a deployed geo-replication session was stopped for the upgrade, then restart the session.
sudo gluster volume geo-replication primary_volume georep@secondary-node1.example.com::secondaryvol start
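When healing volumes as in the steps above, the heal info output can be summarised to decide whether healing has finished. The helper below is a hypothetical sketch that sums the "Number of entries:" lines from sudo gluster volume heal volume_name info, so a printed total of 0 means nothing is left to heal.

```shell
# Hypothetical helper: sum the "Number of entries:" counts from
# `gluster volume heal <volume> info` output read on stdin.
count_heal_pending() {
    awk -F': *' '/^Number of entries:/ {sum += $2} END {print sum + 0}'
}
```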
3.2 Performing an Offline Upgrade
An offline upgrade requires volume down time. During the upgrade, Gluster clients cannot access the volumes. Upgrading the Gluster nodes can be done in parallel to minimise volume down time.
-
Stop the volume.
sudo gluster volume stop
myvolume
-
Upgrade all Gluster nodes using the steps provided in Section 3.1, “Performing an Online Upgrade”.
Note: You do not need to perform the final step in the online upgrade procedure, which heals the volumes. As the volumes are taken offline during the upgrade, no volume healing is required.
-
After the upgrade, restart the volume.
sudo gluster volume start
myvolume
3.3 Post Upgrade Requirements
Complete this procedure after either an online or an offline upgrade.
-
Check the operating version number for all volumes.
sudo gluster volume get all cluster.op-version
Option                                   Value
------                                   -----
cluster.op-version                       60000
-
If the parameter value is not 60000, set the version number accordingly.
sudo gluster volume set all cluster.op-version 60000
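Because the output of the volume get command includes header lines, the numeric value can be extracted before comparing it. The helper below is a hypothetical sketch: pipe the command output through it, and if the printed value is below 60000, run the set command above.

```shell
# Hypothetical helper: extract the numeric value from the output of
# `sudo gluster volume get all cluster.op-version`.
read_op_version() {
    awk '$1 == "cluster.op-version" {print $2}'
}
```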
-
Upgrade the clients that access the volumes. See Section 3.4, “Upgrading Gluster Clients” for information on upgrading Gluster clients.
3.4 Upgrading Gluster Clients
When the Gluster server nodes have been upgraded, you should upgrade Gluster clients. Complete this procedure on every client.
-
Unmount all Gluster mount points on the client.
-
Stop all applications that access the volumes.
-
For Gluster native clients (FUSE), update Gluster.
sudo yum update glusterfs glusterfs-fuse
-
Mount all Gluster shares.
-
Start any applications that were stopped for the upgrade.
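The unmount step can be driven from /proc/mounts, which records Gluster native (FUSE) mounts with the fuse.glusterfs filesystem type. The sketch below uses an illustrative function name and reads /proc/mounts by default, accepting an alternative file for testing.

```shell
# Sketch: print the mount point of every Gluster (FUSE) mount recorded
# in /proc/mounts (field 3 is the filesystem type, field 2 the mount point).
list_gluster_mounts() {
    awk '$3 == "fuse.glusterfs" {print $2}' "${1:-/proc/mounts}"
}
```

For example: for m in $(list_gluster_mounts); do sudo umount "$m"; done unmounts each Gluster mount point in turn.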