Chapter 2 Installing Gluster Storage for Oracle Linux

This chapter discusses how to enable the repositories to install the Gluster Storage for Oracle Linux packages, and how to perform an installation of those packages. It also discusses setting up Gluster trusted storage pools and Transport Layer Security (TLS), and includes information on upgrading from a previous release of Gluster Storage for Oracle Linux.

2.1 Hardware and Network Requirements

Gluster Storage for Oracle Linux does not require specific hardware; however, certain Gluster operations are CPU and memory intensive. The X6 and X7 lines of Oracle x86 Servers are suitable for hosting Gluster nodes. For more information on Oracle x86 Servers, see:

https://www.oracle.com/servers/x86/index.html

Oracle provides support for Gluster Storage for Oracle Linux on 64-bit x86 (x86_64) and 64-bit Arm (aarch64) hardware.

A minimum node configuration is:

  • 2 CPU cores

  • 2GB RAM

  • 1 Gb Ethernet NIC

  • Dedicated storage sized for your data requirements and formatted as an XFS file system

Although a single 1 Gb Ethernet NIC is the supported minimum requirement per node, Oracle recommends using 2 x 1 Gb Ethernet NICs in a bonded (802.3ad/LACP) configuration. Because distributed and network-based storage has high throughput requirements, 10 Gb or faster NICs are preferred.
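
The following is a minimal sketch, for illustration only, of creating an 802.3ad (LACP) bond with NetworkManager. It assumes two hypothetical interface names, eno1 and eno2; the interface names, bonding options, and addressing on your systems will differ:

# nmcli connection add type bond con-name bond0 ifname bond0 bond.options "mode=802.3ad,miimon=100"
# nmcli connection add type ethernet con-name bond0-port1 ifname eno1 master bond0
# nmcli connection add type ethernet con-name bond0-port2 ifname eno2 master bond0
# nmcli connection up bond0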

A minimum of three nodes is required in a Gluster trusted storage pool. The examples in this guide use three nodes, named node1, node2, and node3. Node names for each of the nodes in the pool must be resolvable on each host. You can achieve this either by configuring DNS correctly, or by adding host entries to the /etc/hosts file on each node.

In the examples in this guide, each host is configured with an additional dedicated block storage device at /dev/sdb. The block device is formatted with the XFS file system and then mounted at /data/glusterfs/myvolume/mybrick.

Your deployment needs may require nodes with a larger footprint. Additional considerations are detailed in the Gluster upstream documentation.

2.2 Operating System Requirements

Release 6 of Gluster Storage for Oracle Linux is available on the platforms and operating systems shown in the following table.

Table 2.1 Operating System Requirements

Platform   Operating System Release   Minimum Operating System Maintenance Release   Kernel
x86_64     Oracle Linux 8             Oracle Linux 8 Update 2                        UEK R6 or RHCK
aarch64    Oracle Linux 8             Oracle Linux 8 Update 2                        UEK R6 or RHCK
x86_64     Oracle Linux 7             Oracle Linux 7 Update 7                        UEK R6, UEK R5, UEK R4, or RHCK
aarch64    Oracle Linux 7             Oracle Linux 7 Update 7                        UEK R6 or UEK R5

UEK R6, UEK R5, and UEK R4 refer to Unbreakable Enterprise Kernel Release 6, 5, and 4 respectively; RHCK refers to the Red Hat Compatible Kernel.

If you are upgrading from a previous release of Gluster Storage for Oracle Linux, you must upgrade to the latest update level of the Oracle Linux release that you are using.

It is important that all nodes within your deployment run the same Oracle Linux release and update level, particularly when using the geo-replication feature. When release levels differ, component software such as the default Python version may also differ, which can cause issues in communication between nodes.

2.3 Enabling Access to the Gluster Storage for Oracle Linux Packages

Enabling Access on Oracle Linux 8 Systems

The Gluster Storage for Oracle Linux packages are available on the Oracle Linux yum server in the ol8_gluster_appstream repository, or on the Unbreakable Linux Network (ULN) in the ol8_arch_gluster_appstream channel.

Enabling Repositories with ULN

If you are registered to use ULN, use the ULN web interface to subscribe the system to the appropriate channels.

To subscribe to the ULN channels:
  1. Log in to https://linux.oracle.com with your ULN user name and password.

  2. On the Systems tab, click the link named for the system in the list of registered machines.

  3. On the System Details page, click Manage Subscriptions.

  4. On the System Summary page, select each required channel from the list of available channels and click the right arrow to move the channel to the list of subscribed channels. Subscribe the system to the following channels to ensure that all dependencies are met:

    • ol8_arch_gluster_appstream

    • ol8_arch_baseos_latest

    • ol8_arch_appstream

  5. Click Save Subscriptions.

Enabling Repositories with the Oracle Linux Yum Server

If you are using the Oracle Linux yum server for system updates, enable the Gluster Storage for Oracle Linux yum repository.

To enable the yum repositories:
  1. Install the oracle-gluster-release-el8 release package to set up the Gluster Storage for Oracle Linux yum repository configuration.

    # yum install oracle-gluster-release-el8
  2. Enable the following yum repositories:

    • ol8_gluster_appstream

    • ol8_baseos_latest

    • ol8_appstream

    Use the dnf config-manager tool to enable the yum repositories:

    # dnf config-manager --enable ol8_gluster_appstream ol8_baseos_latest ol8_appstream

Enabling the glusterfs Module and Application Stream

On Oracle Linux 8, the Gluster Storage for Oracle Linux packages are released as an application stream module. All of the packages specific to a particular release of the Gluster software are released within a stream. Packages are also bundled into profiles so that, depending on your use case, they can be installed in a single step.

Two modules are available for Gluster. The glusterfs module contains all supported packages required to install and run a Gluster server node or a Gluster client. The glusterfs-developer module contains packages that are released as a technical preview for developer use only. Packages in the glusterfs-developer module are unsupported, or may only be supported within very specific contexts, such as when they are made available for other supported software.
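
Before enabling anything, you can list the available streams and profiles for these modules; the exact output depends on the repositories enabled on your system:

# dnf module list glusterfs glusterfs-developer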

To gain access to the Gluster Storage for Oracle Linux packages, enable the glusterfs application stream module:

# dnf module enable glusterfs

Once the module is enabled, you can install any of the packages within the module by following any of the other instructions in this documentation. Note that where the yum command is used in the documentation, you can substitute this with dnf for the same behavior.

You can also take advantage of the module profiles to install everything needed for a particular use case in a single step. For example, on any server node, you can run:

# dnf install @glusterfs/server

This action installs all of the packages that may be required for any Gluster server node, including core Gluster functionality, Gluster geo-replication functionality, the NFS-Ganesha server, and Heketi server packages.
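
If you want to review exactly which packages a profile pulls in before installing it, you can inspect the module profiles; the package list varies with the enabled stream:

# dnf module info --profile glusterfs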

On a client system where you intend to access or mount a Gluster share, you can run:

# dnf install @glusterfs/client

This action installs the Gluster packages required to mount a Gluster share or to act as a Heketi client.

Enabling Access on Oracle Linux 7 Systems

The Gluster Storage for Oracle Linux packages are available on the Oracle Linux yum server in the ol7_gluster6 repository, or on the Unbreakable Linux Network (ULN) in the ol7_arch_gluster6 channel. However, there are also dependencies across other repositories and channels, and these must also be enabled on each system where Gluster is installed.

Enabling Repositories with ULN

If you are registered to use ULN, use the ULN web interface to subscribe the system to the appropriate channels.

To subscribe to the ULN channels:
  1. Log in to https://linux.oracle.com with your ULN user name and password.

  2. On the Systems tab, click the link named for the system in the list of registered machines.

  3. On the System Details page, click Manage Subscriptions.

  4. On the System Summary page, select each required channel from the list of available channels and click the right arrow to move the channel to the list of subscribed channels. Subscribe the system to the following channels:

    • ol7_arch_gluster6

    • ol7_arch_addons

    • ol7_arch_latest

    • ol7_arch_optional_latest

    • ol7_arch_UEKR5 or ol7_arch_UEKR4

  5. Click Save Subscriptions.

Enabling Repositories with the Oracle Linux Yum Server

If you are using the Oracle Linux yum server for system updates, enable the Gluster Storage for Oracle Linux yum repository.

To enable the yum repositories:
  1. Make sure your system is using the modular yum repository configuration. If it is not, install the oraclelinux-release-el7 package and run the /usr/bin/ol_yum_configure.sh script.

    # yum install oraclelinux-release-el7
    # /usr/bin/ol_yum_configure.sh
  3. Install the oracle-gluster-release-el7 release package to set up the Gluster Storage for Oracle Linux yum repository configuration.

    # yum install oracle-gluster-release-el7
  3. Enable the following yum repositories:

    • ol7_gluster6

    • ol7_addons

    • ol7_latest

    • ol7_optional_latest

    • ol7_UEKR5 or ol7_UEKR4

    Use the yum-config-manager tool to enable the yum repositories:

    # yum-config-manager --enable ol7_gluster6 ol7_addons ol7_latest ol7_optional_latest ol7_UEKR5

2.4 Installing and Configuring Gluster

A Gluster deployment consists of several systems, known as nodes. The nodes form a trusted storage pool or cluster. Each node in the pool must:

  • Have the Oracle Linux operating system installed

  • Have the same storage configured

  • Have synchronized time

  • Be able to resolve the fully qualified domain name of each node in the pool

  • Run the Gluster server daemon

The following sections discuss setting up nodes for a Gluster trusted storage pool.

2.4.1 Preparing Oracle Linux Nodes

There are some basic requirements for each Oracle Linux system that you intend to use as a node. These include the following items, for which some preparatory work may be required before you can begin your deployment.

To prepare Oracle Linux nodes:
  1. Gluster requires a dedicated file system on each node for the cluster data. The storage must be formatted with an XFS file system. The storage device can be an additional disk, a disk partition, an LVM volume, a loopback device, a multipath device, or a LUN. Do not use the root partition for cluster data.

    The cluster file system used in the examples in this guide is an XFS file system on a disk attached to /dev/sdb on each node. This disk is mounted on the directory /data/glusterfs/myvolume/mybrick. The inode size is set to 512 bytes to accommodate the extended attributes used by the Gluster file system. To set up this disk, you would use commands similar to:

    # mkfs.xfs -f -i size=512 -L glusterfs /dev/sdb
    # mkdir -p /data/glusterfs/myvolume/mybrick
    # echo 'LABEL=glusterfs /data/glusterfs/myvolume/mybrick xfs defaults  0 0' >> /etc/fstab
    # mount -a
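
    To confirm that the brick file system is mounted and was created with the expected inode size, you can check the mount and inspect the file system; the isize value reported by xfs_info should be 512:

    # df -h /data/glusterfs/myvolume/mybrick
    # xfs_info /data/glusterfs/myvolume/mybrick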
  2. Time must be accurate and synchronized across the nodes in the pool. This is achieved by installing and configuring NTP on each node. If the NTP service is not already configured, install and start it. For more information on configuring NTP on Oracle Linux 7, see Oracle® Linux 7: Administrator's Guide.

    For more information on configuring NTP on Oracle Linux 8, see Oracle Linux: Update the system date and time from the command line interface.
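
    As a minimal sketch, on a node that uses the chrony implementation of NTP with its default configuration, you might install and enable the service, and then verify synchronization, as follows:

    # yum install chrony
    # systemctl enable --now chronyd
    # chronyc tracking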

  3. Pool network communications must be able to take place between nodes within the cluster. If firewall software is running on any of the nodes, it must either be disabled or, preferably, configured to allow network traffic on the required ports between the nodes in the cluster.

    • If you have a dedicated network for Gluster traffic, you can add the interfaces to a trusted firewall zone and allow all traffic between nodes within the pool. For example, on each node in the pool, run:

      # firewall-cmd --permanent --change-zone=eno2 --zone=trusted
      # firewall-cmd --reload

      The first command automatically updates the /etc/sysconfig/network-scripts/ifcfg-eno2 file for the network interface named eno2 to add the line ZONE=trusted. You must then reload the firewall service for the change to be loaded into the firewall and for it to become active.

      In this configuration, your clients must either be on the same dedicated network and configured for the same firewall zone, or you may need to configure other rules specific to the interface that your clients are connecting on.

    • If your network interfaces are on a shared or untrusted network, you can configure the firewall to allow traffic on the ports specifically used by Gluster:

      # firewall-cmd --permanent --add-service=glusterfs
      # firewall-cmd --reload

      Note that adding the glusterfs service only exposes the ports required for Gluster. If you intend to provide access via Samba, you must also add the appropriate services to the firewall.
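
      For example, if you plan to export Gluster volumes over Samba, you might also add the standard samba firewall service; adjust this to the services you actually use:

      # firewall-cmd --permanent --add-service=samba
      # firewall-cmd --reload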

  4. All nodes must be able to resolve the fully qualified domain name for each node within the pool. You may either use DNS for this purpose, or provide entries within /etc/hosts for each system. If you rely on DNS, it must have sufficient redundancy to ensure that the cluster is able to perform name resolution at any time. If you want to edit the /etc/hosts file on each node, add entries for the IP address and host name of all of the nodes in the pool, for example:

    192.168.1.51    node1.example.com     node1
    192.168.1.52    node2.example.com     node2
    192.168.1.53    node3.example.com     node3

You can now install and configure the Gluster server.

2.4.2 Installing the Gluster Server

The Gluster server packages should be installed on each node to be included in a trusted storage pool.

To install the Gluster server packages:
  1. Install the glusterfs-server package.

    # yum install glusterfs-server
  2. Start and enable the Gluster server service:

    # systemctl enable --now glusterd
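
You can confirm that the expected release is installed and that the service is running; the exact output varies with your installed version:

# gluster --version
# systemctl status glusterd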

2.5 Creating the Trusted Storage Pool

This section shows you how to create a trusted storage pool. In this example, a pool of three servers is created (node1, node2 and node3). You should nominate one of the nodes in the pool as the node on which you perform pool operations. In this example, node1 is the node on which the pool operations are performed.

To create a trusted storage pool:
  1. Add the nodes to the trusted storage pool. You do not need to add the node on which you are performing the pool operations. For example:

    # gluster peer probe node2
    # gluster peer probe node3
  2. You can see the status of each node in the pool using:

    # gluster peer status
  3. You can see the nodes in the pool using:

    # gluster pool list

If you need to remove a server from the trusted storage pool, use:

# gluster peer detach hostname

2.6 Setting up Transport Layer Security (TLS)

Gluster supports Transport Layer Security (TLS) using the OpenSSL library to authenticate Gluster nodes and clients. TLS encrypts communication between nodes in the trusted storage pool, and between client systems accessing the pool nodes. This is achieved through the use of private keys and public certificates.

Gluster performs mutual authentication in all transactions. This means that if one side of a connection is configured to use TLS then the other side must use it as well. Every node must either have a copy of the public certificate of every other node in the pool, or it must have a copy of the signing CA certificate that it can use to validate the certificates presented by each of the nodes in the pool. Equally, client systems accessing any node in the pool must have a copy of that node's certificate or the signing CA certificate, and the node needs a copy of a certificate for the accessing client.

TLS is enabled as a setting on the volume and can also be enabled for management communication within the pool.

Configuring TLS for your Gluster deployment is optional but recommended for better security.

In production environments, it is recommended you use certificates that are properly signed by a Certificate Authority (CA). This improves validation security and also reduces the complexity of configuration. However, it is not always practical, particularly if you have numerous clients accessing the pool. This section describes configuration for environments where certificates are signed by a CA and for when certificates are self-signed.

To configure TLS on nodes in a Gluster pool:
  1. Generate a private key on each node within the pool. You can do this using the openssl tool:

    # openssl genrsa -out /etc/ssl/glusterfs.key 2048
  2. Create either a self-signed certificate, or a certificate signing request (CSR) using the key that you have created.

    To use self-signed certificates:
    1. To create a self-signed certificate, do:

      # openssl req -new -x509 -days 365 -key /etc/ssl/glusterfs.key \
         -out /etc/ssl/glusterfs.pem
    2. When you have generated a self-signed certificate on each node in the storage pool, concatenate the contents of each of these files into a single file. This file should be written to /etc/ssl/glusterfs.ca on each node in the pool. Each node uses this file to validate the certificates presented by other nodes or clients that connect to it. If the public certificate for another participatory node or client is not present in this file, the node is unable to verify certificates and the connections fail.
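
      For illustration, assuming you have copied each node's /etc/ssl/glusterfs.pem into a working directory under hypothetical names such as node1.pem, node2.pem, and node3.pem, you might build the combined file and then distribute it to /etc/ssl/glusterfs.ca on every node:

      # cat node1.pem node2.pem node3.pem > /etc/ssl/glusterfs.ca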

    To use CA-signed certificates:
    1. If you intend to get your certificate signed by a CA, create a CSR by running:

      # openssl req -new -sha256 -key /etc/ssl/glusterfs.key -out /etc/ssl/glusterfs.csr
    2. If you generated a CSR and obtained the signed certificate back from your CA, save this file to /etc/ssl/glusterfs.pem.

    3. Save the CA certificate for your CA provider to /etc/ssl/glusterfs.ca on each node in the pool. Each node uses this file to validate the certificates presented by other nodes or clients that connect to it. If the public certificate for another participatory node or client cannot be verified by the CA signing certificate, attempts to connect by the client or node fail.

  3. Configure TLS encryption for management traffic within the storage pool. To do this, create an empty file at /var/lib/glusterd/secure-access on each node in the pool. Do the same on any client system where you intend to mount a volume:

    # touch /var/lib/glusterd/secure-access
  4. Enable TLS on the I/O path for an existing volume by setting the client.ssl and server.ssl parameters for that volume. For example, to enable TLS on a volume named myvolume, do:

    # gluster volume set myvolume client.ssl on
    # gluster volume set myvolume server.ssl on

    These parameters enable TLS validation and encryption on client traffic using the Gluster native client and on communications between nodes within the pool. Note that TLS is not automatically enabled on non-native file sharing protocols such as SMB by changing these settings.
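
    You can confirm that the options have been applied by querying the volume; the Value column should show on for both options:

    # gluster volume get myvolume client.ssl
    # gluster volume get myvolume server.ssl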

  5. Restart the glusterd service on each of the nodes where you have enabled secure access for management traffic within the pool for these changes to take effect.

    # systemctl restart glusterd

2.7 Upgrading Gluster Storage for Oracle Linux to Release 6

This section discusses upgrading to Release 6 of Gluster Storage for Oracle Linux from Releases 5, 4.1 or 3.12.

Before you perform an upgrade, configure the Oracle Linux yum server repositories or ULN channels. For information on setting up access to the repositories or channels, see Section 2.3, “Enabling Access to the Gluster Storage for Oracle Linux Packages”.

Make sure you also disable the Gluster Storage for Oracle Linux repositories and channels for the previous releases:

  • Release 5.  ol7_gluster5 repository or ol7_arch_gluster5 ULN channel.

  • Release 4.1.  ol7_gluster41 repository or ol7_arch_gluster41 ULN channel.

  • Release 3.12.  ol7_gluster312 repository or ol7_arch_gluster312 ULN channel.
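
For example, on an Oracle Linux 7 system that uses the Oracle Linux yum server and is upgrading from Release 5, you might disable the old repository with:

# yum-config-manager --disable ol7_gluster5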

Do not make any configuration changes during the upgrade. Upgrade the servers before you upgrade the clients. After the upgrade, all servers and clients should run the same Gluster version.

2.7.1 Performing an Online Upgrade

This procedure performs an online upgrade. An online upgrade does not require any volume downtime. During the upgrade, Gluster clients can continue to access the volumes.

You can perform an online upgrade with replicated and distributed replicated volumes only. Any other volume types must be upgraded offline. See Section 2.7.2, “Performing an Offline Upgrade” for information on performing an offline upgrade.

This procedure upgrades one server at a time, while keeping the volumes online and client IO ongoing. This procedure assumes that multiple replicas of a replica set are not part of the same server in the trusted storage pool.

The upgrade procedure should be performed on each Gluster node.

To perform an online upgrade:
  1. If you are upgrading from Gluster Storage for Oracle Linux Release 3.12, make sure the deprecated features lock-heal and grace-timeout are unset on the volumes. You can see whether these features are set using the gluster volume info command. If either of these features is listed in the Options Reconfigured section of the output, you must unset it before you upgrade.
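
    For example, to review the reconfigured options for a volume named myvolume:

    # gluster volume info myvolume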

    To unset these options for each volume, use the gluster volume reset command. For example:

    # gluster volume reset myvolume features.lock-heal
    # gluster volume reset myvolume features.grace-timeout
  2. Stop the Gluster service.

    # systemctl stop glusterd
  3. Stop all Gluster file system processes:

    # killall glusterfs glusterfsd

    You can make sure no Gluster file system processes are running using:

    # ps aux | grep gluster
  4. Stop any Gluster-related services, for example, stop Samba and NFS-Ganesha.

    # systemctl stop smb
    # systemctl stop nfs-ganesha
  5. Update the Gluster Storage for Oracle Linux packages:

    # yum update glusterfs-server
  6. (Optional) If you are using NFS-Ganesha, upgrade the package using:

    # yum update nfs-ganesha-gluster
  7. Start the Gluster service:

    # systemctl daemon-reload
    # systemctl start glusterd
  8. Reload and start any Gluster-related services, for example, Samba and NFS-Ganesha.

    # systemctl daemon-reload 
    # systemctl start smb
    # systemctl start nfs-ganesha
  9. Heal the volumes. You can see the status of the volumes using:

    # gluster volume status

    If any bricks in the volume are offline, bring the bricks online using:

    # gluster volume start volume_name force

    When all bricks are online, heal the volumes:

    # for i in `gluster volume list`; do gluster volume heal $i; done

    You can view healing information for each volume using:

    # gluster volume heal volume_name info

2.7.2 Performing an Offline Upgrade

This procedure performs an offline upgrade. An offline upgrade requires volume downtime. During the upgrade, Gluster clients cannot access the volumes. Upgrading the Gluster nodes can be done in parallel to minimize volume downtime.

To perform an offline upgrade:
  1. Stop any volumes, for example:

    # gluster volume stop myvolume
  2. Upgrade all Gluster nodes using the steps provided in Section 2.7.1, “Performing an Online Upgrade”.

    Note

    You do not need to perform the final step in the online upgrade procedure, which heals the volumes. As the volumes are taken offline during the upgrade, no volume healing is required.

  3. Start any volumes, for example:

    # gluster volume start myvolume

2.7.3 Post Upgrade Requirements

This section contains information on steps you should perform after upgrading the nodes in your cluster. You should perform these steps after you have performed either an online or an offline upgrade.

To complete the upgrade:
  1. Set the Gluster operating version number for all volumes. You can see the current version setting for all volumes using:

    # gluster volume get all cluster.op-version
    Option                                  Value                                   
    ------                                  -----                                   
    cluster.op-version                      60000

    If this is not set to 60000, set it using:

    # gluster volume set all cluster.op-version 60000
  2. Upgrade the clients that access the volumes. See Section 2.7.4, “Upgrading Gluster Clients” for information on upgrading Gluster clients.

  3. (Optional) For any replicated volumes, you should turn off usage of MD5 checksums during volume healing. This enables you to run Gluster on FIPS-compliant systems.

    # gluster volume set myvolume fips-mode-rchecksum on

2.7.4 Upgrading Gluster Clients

When the Gluster server nodes have been upgraded, you should upgrade any Gluster clients.

To upgrade Gluster clients:
  1. Unmount all Gluster mount points on the client.

  2. Stop all applications that access the volumes.

  3. For Gluster native client (FUSE) clients, update:

    # yum update glusterfs glusterfs-fuse
  4. Mount all Gluster shares.

  5. Start any applications that were stopped for the upgrade.
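
As an illustration of steps 1 and 4, assuming the volume myvolume served from node1 is mounted on the client at the hypothetical mount point /mnt/glusterfs, the unmount before the upgrade might look like:

# umount /mnt/glusterfs

After the client packages have been updated, the share can be remounted:

# mount -t glusterfs node1:/myvolume /mnt/glusterfs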