Chapter 2 Installing Gluster Storage for Oracle Linux

This chapter discusses how to enable the repositories to install the Gluster Storage for Oracle Linux packages, how to perform the installation, and how to set up Gluster trusted storage pools and Transport Layer Security (TLS). It also contains information on upgrading from a previous release of Gluster Storage for Oracle Linux.

2.1 Hardware and Network Requirements

Gluster Storage for Oracle Linux does not require specific hardware; however, certain Gluster operations are CPU and memory intensive. The X6 and X7 lines of Oracle x86 Servers are suitable for hosting Gluster nodes. For more information on Oracle x86 Servers, see:

https://www.oracle.com/servers/x86/index.html

Oracle provides support for Gluster Storage for Oracle Linux on 64-bit x86 (x86_64) and 64-bit Arm (aarch64) hardware.

A minimum node configuration consists of the following:

  • 2 CPU cores

  • 2GB RAM

  • 1Gb Ethernet NIC

    Recommended: 2 x 1Gb Ethernet NICs in a bonded (802.3ad/LACP) configuration

  • Dedicated storage sized for your data requirements (10GB or higher) and formatted as an XFS file system

A minimum of three nodes is required in a Gluster trusted storage pool. The examples in this guide use three nodes, named node1, node2, and node3. Node names for each of the nodes in the pool must be resolvable on each host. You can achieve this either by configuring DNS correctly or by adding host entries to the /etc/hosts file on each node.

In the examples in this guide, each host is configured with an additional dedicated block storage device at /dev/sdb. The block device is formatted with the XFS file system and then mounted at /data/glusterfs/myvolume/mybrick.

Your deployment needs may require nodes with a larger footprint. Additional considerations are detailed in the Gluster upstream documentation.

2.2 Operating System Requirements

Release 8 of Gluster Storage for Oracle Linux is available on the platforms and operating systems shown in the following table.

Table 2.1 Operating System Requirements

Platform   Operating System   Minimum Operating System   Kernel
           Release            Maintenance Release
---------  -----------------  -------------------------  -------------------------------
x86_64     Oracle Linux 8     Oracle Linux 8.2           UEK R6 or RHCK
aarch64    Oracle Linux 8     Oracle Linux 8.2           UEK R6 or RHCK
x86_64     Oracle Linux 7     Oracle Linux 7.7           UEK R6, UEK R5, UEK R4, or RHCK
aarch64    Oracle Linux 7     Oracle Linux 7.7           UEK R6 or UEK R5

In this table, UEK Rn refers to Unbreakable Enterprise Kernel Release n, and RHCK refers to the Red Hat Compatible Kernel.

Important
  • All nodes within your deployment must run the same Oracle Linux release and update level. Otherwise, changes between releases in some component software, such as the default Python version, might cause communication issues between nodes.

  • If you are upgrading from a previous release of Gluster Storage for Oracle Linux, you must also upgrade to the latest update level of the Oracle Linux release that you are using.

2.3 Enabling Access to the Gluster Storage for Oracle Linux Packages

Note

On Oracle Linux 8 systems, wherever the examples in this documentation use the yum command, you can substitute the dnf command to achieve the same behavior.

2.3.1 Enabling Access on Oracle Linux 8 Systems

The Gluster Storage for Oracle Linux packages are available on the Oracle Linux yum server in the ol8_gluster_appstream repository, or on the Unbreakable Linux Network (ULN) in the ol8_arch_gluster_appstream channel.

Enabling Repositories with ULN

If you are registered to use ULN, use the ULN web interface to subscribe the system to the appropriate channels.

  1. Log in to https://linux.oracle.com with your ULN user name and password.

  2. On the Systems tab, click the link named for the system in the list of registered machines.

  3. On the System Details page, click Manage Subscriptions.

  4. On the System Summary page, select each required channel from the list of available channels and click the right arrow to move the channel to the list of subscribed channels. Subscribe the system to the following channels to ensure that all dependencies are met:

    • ol8_arch_gluster_appstream

    • ol8_arch_baseos_latest

    • ol8_arch_appstream

  5. Click Save Subscriptions.

Enabling Repositories with the Oracle Linux Yum Server

If you are using the Oracle Linux yum server for system updates, use the command line to enable the Gluster Storage for Oracle Linux yum repository.

  1. Install the oracle-gluster-release-el8 release package, which sets up the Gluster Storage for Oracle Linux yum repository configuration.

    sudo dnf install oracle-gluster-release-el8
  2. Enable the following yum repositories:

    • ol8_gluster_appstream

    • ol8_baseos_latest

    • ol8_appstream

    Use the dnf config-manager tool to enable the yum repositories:

    sudo dnf config-manager --enable ol8_gluster_appstream ol8_baseos_latest ol8_appstream

Oracle Linux 8: Enabling the glusterfs Module and Application Stream

On Oracle Linux 8, the Gluster Storage for Oracle Linux packages are released as an application stream module. All of the packages specific to a particular release of the Gluster software are released within a stream, and packages are bundled into profiles so that everything required for a particular use case can be installed in a single step.

Two modules are available for Gluster. The glusterfs module contains all supported packages required to install and run a Gluster server node or a Gluster client. The glusterfs-developer module contains packages that are released as a technical preview for developer use only. Packages that are available in the glusterfs-developer module are unsupported or may only be supported within very specific contexts, such as when they are made available for other supported software.

To gain access to the Gluster Storage for Oracle Linux packages, enable the glusterfs application stream module:

sudo dnf module enable glusterfs

Once the module is enabled, you can install any of the packages within the module by following any of the other instructions in this documentation.

You can also take advantage of the module profiles to install everything required for a particular use case in a single step. For example, on any server node, you can run:

sudo dnf install @glusterfs/server

This action installs all of the packages required for any Gluster server node, including core Gluster functionality, Gluster geo-replication functionality, and the NFS-Ganesha server and Heketi server packages.

On a client system where you intend to access or mount a Gluster share, you can run:

sudo dnf install @glusterfs/client

This action installs the Gluster packages required to mount a Gluster share or to act as a Heketi client.

2.3.2 Enabling Access on Oracle Linux 7 Systems

The Gluster Storage for Oracle Linux packages are available on the Oracle Linux yum server in the ol7_gluster8 repository, or on the Unbreakable Linux Network (ULN) in the ol7_arch_gluster8 channel. However, there are also dependencies across other repositories and channels, and these must also be enabled on each system where Gluster is installed.

Enabling Repositories with ULN

If you are registered to use ULN, use the ULN web interface to subscribe the system to the appropriate channels.

  1. Log in to https://linux.oracle.com with your ULN user name and password.

  2. On the Systems tab, click the link named for the system in the list of registered machines.

  3. On the System Details page, click Manage Subscriptions.

  4. On the System Summary page, select each required channel from the list of available channels and click the right arrow to move the channel to the list of subscribed channels. Subscribe the system to the following channels:

    • ol7_arch_gluster8

    • ol7_arch_addons

    • ol7_arch_latest

    • ol7_arch_optional_latest

    • ol7_arch_UEKR5 or ol7_arch_UEKR4

  5. Click Save Subscriptions.

Enabling Repositories with the Oracle Linux Yum Server

If you are using the Oracle Linux yum server for system updates, use the command line to enable the Gluster Storage for Oracle Linux yum repository.

  1. Install the oracle-gluster-release-el7 release package, which sets up the Gluster Storage for Oracle Linux yum repository configuration.

    sudo yum install oracle-gluster-release-el7
  2. Enable the following yum repositories:

    • ol7_gluster8

    • ol7_addons

    • ol7_latest

    • ol7_optional_latest

    • ol7_UEKR5 or ol7_UEKR4

    Use the yum-config-manager tool to enable the yum repositories:

    sudo yum-config-manager --enable ol7_gluster8 ol7_addons ol7_latest ol7_optional_latest ol7_UEKR5

2.4 Installing and Configuring Gluster

A Gluster deployment consists of several systems, known as nodes. The nodes form a trusted storage pool or cluster.

The following sections discuss setting up nodes for a Gluster trusted storage pool.

2.4.1 Preparing Oracle Linux Nodes

In addition to the requirements listed in Section 2.1, “Hardware and Network Requirements” and Section 2.2, “Operating System Requirements”, all the Oracle Linux systems that you intend to use as nodes must have the following configurations and features:

  • The same storage configuration

  • Synchronized time

  • Resolvable fully qualified domain names (FQDNs)

Storage Configuration

Gluster requires a dedicated file system on each node for the cluster data. The storage must be formatted with an XFS file system. The storage device can be an additional disk, a disk partition, an LVM volume, a loopback device, a multipath device, or a LUN. Do not use the root partition for cluster data.

The cluster file system used in the examples in this guide is an XFS file system on the disk device /dev/sdb on each node. The file system is mounted on the directory /data/glusterfs/myvolume/mybrick. The inode size is set to 512 bytes to accommodate the extended attributes used by the Gluster file system. To set up this disk, use commands similar to the following:

sudo mkfs.xfs -f -i size=512 -L glusterfs /dev/sdb
sudo mkdir -p /data/glusterfs/myvolume/mybrick
echo 'LABEL=glusterfs /data/glusterfs/myvolume/mybrick xfs defaults 0 0' | sudo tee -a /etc/fstab
sudo mount -a
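After running mount -a, it is worth confirming that the brick directory is actually a separately mounted file system; if the mount silently failed, bricks created there would land on the root file system. The following is an illustrative sketch, not part of Gluster itself, using the example brick path from this guide:

```shell
# Report whether a brick directory is a mount point in its own right.
# mountpoint(1) is part of util-linux and is present on Oracle Linux.
check_brick_mount() {
    if mountpoint -q "$1"; then
        echo "OK: $1 is a mounted file system"
    else
        echo "WARNING: $1 is not a mount point; bricks created here would live on the parent file system"
    fi
}

# Example brick path used throughout this guide; substitute your own.
check_brick_mount /data/glusterfs/myvolume/mybrick
```

Run the check on each node after updating /etc/fstab and mounting the brick file system.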

Synchronized Time

Time must be accurate and synchronized across the nodes in the pool. For this purpose, you can install and configure NTP or PTP on each node. For more information on configuring time synchronization on Oracle Linux 7, see Configuring Network Time in Oracle® Linux 7: Setting Up Networking.

For equivalent information that applies to Oracle Linux 8, see Oracle Linux: Update the system date and time from the command line interface and Configuring Network Time in Oracle® Linux 8: Setting Up Networking.
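On both Oracle Linux 7 and Oracle Linux 8, the chrony service is the usual NTP implementation. As an illustration only, a minimal /etc/chrony.conf fragment might look like the following; the server name is a placeholder for your own NTP source:

```
# /etc/chrony.conf (fragment) -- ntp.example.com is a placeholder
server ntp.example.com iburst
# Record the clock's drift rate between restarts
driftfile /var/lib/chrony/drift
# Step the clock at startup if it is more than 1 second out
makestep 1.0 3
```

After editing the file, enable the service with sudo systemctl enable --now chronyd, and inspect synchronization status with chronyc tracking.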

Resolvable Host Names

All nodes must be able to resolve the FQDN for each node within the pool. You can either use DNS or provide entries within /etc/hosts for each system. If you rely on DNS, the service must have sufficient redundancy to ensure that the cluster is able to perform name resolution at any time. If you want to edit the /etc/hosts file on each node, add entries for the IP address and host name of all of the nodes in the pool, for example:

192.168.1.51    node1.example.com     node1
192.168.1.52    node2.example.com     node2
192.168.1.53    node3.example.com     node3
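If you maintain /etc/hosts entries manually, a small script can confirm that every pool node name is present before you begin probing peers. This is an illustrative sketch, not part of Gluster; the node names match the examples in this guide:

```shell
# Check that each expected node name appears in a hosts-format file.
# Prints any names that are missing and returns non-zero if any are.
check_pool_hosts() {
    hosts_file=$1; shift
    missing=0
    for name in "$@"; do
        # Match the name as a whole word, ignoring comment lines.
        if ! grep -Ev '^[[:space:]]*#' "$hosts_file" | grep -Eqw "$name"; then
            echo "missing: $name"
            missing=1
        fi
    done
    return $missing
}

# Example usage with the guide's node names:
check_pool_hosts /etc/hosts node1.example.com node2.example.com node3.example.com \
    || echo "add the missing entries before proceeding"
```

Run the check on every node, because each node must be able to resolve all of its peers.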

2.4.2 Installing the Gluster Server

Install the Gluster server packages on each node to be included in a trusted storage pool.

  1. Install the glusterfs-server package.

    If running Oracle Linux 8, type the following command:

    sudo dnf install @glusterfs/server

    Otherwise, type:

    sudo yum install glusterfs-server
  2. Start and enable the Gluster server service:

    sudo systemctl enable --now glusterd
  3. Adjust firewall configuration.

    Nodes within the cluster must be able to communicate with each other over the network. If you deploy a firewall on any of the nodes, configure the firewall service to allow traffic on the required ports between each node in the cluster.

    • If you have a dedicated network for Gluster traffic, you can add the interfaces to a trusted firewall zone and allow all traffic between nodes within the pool. For example, on each node in the pool, run:

      sudo firewall-cmd --permanent --zone=trusted --change-interface=if-name
      sudo firewall-cmd --reload

      The commands automatically add the line ZONE=trusted to the interface's /etc/sysconfig/network-scripts/ifcfg-if-name file and then reload the firewall to activate the change.

      With this configuration, your clients must be on the same dedicated network and configured for the same firewall zone. If you need to customize the firewall configuration further, you can also configure additional rules specific to the interface on which your clients connect.

    • If your network interfaces are on a shared or untrusted network, configure the firewall to allow traffic on the ports specifically used by Gluster.

      sudo firewall-cmd --permanent --add-service=glusterfs
      sudo firewall-cmd --reload

      Note that adding the glusterfs service only exposes the ports required for Gluster. If you intend to provide access via Samba, you must add those services to the firewall configuration as well.

2.4.3 Creating the Trusted Storage Pool

This section shows you how to create a trusted storage pool. In this example, a pool of three servers is created (node1, node2, and node3). You should nominate one of the nodes in the pool as the node on which you perform pool operations. In this example, node1 is the node on which the pool operations are performed.

  1. Add the nodes to the trusted storage pool. You do not need to add the node on which you are performing the pool operations. For example:

    sudo gluster peer probe node2
    sudo gluster peer probe node3
  2. (Optional) Check the status of each node in the pool.

    sudo gluster peer status
  3. (Optional) List the nodes in the pool.

    sudo gluster pool list

If you need to remove a server from a trusted storage pool, use:

sudo gluster peer detach hostname

2.4.4 Setting Up Transport Layer Security

Gluster supports Transport Layer Security (TLS) by using the OpenSSL library to authenticate Gluster nodes and clients. Through the use of private keys and public certificates, TLS encrypts communication between nodes in the trusted storage pool, and between client systems accessing the pool nodes.

Gluster performs mutual authentication in all transactions. If one side of a connection is configured to use TLS, then the other side must use TLS as well. Every node must have a copy either of the public certificate of every other node in the pool, or of the signing CA certificate that can be used to validate the certificates presented by each of the nodes in the pool. Equally, client systems accessing any node in the pool must have a copy of that node's certificate or the signing CA certificate. In turn, the node needs a copy of a certificate for the accessing client.

TLS is enabled as a setting on the volume and can also be enabled for management communication within the pool.

Configuring TLS for your Gluster deployment is optional but recommended for better security.

In production environments, you should use certificates signed by a Certificate Authority (CA), which improves validation security and reduces configuration complexity. Self-signed certificates must instead be distributed to every node and client, which is not always practical when numerous clients access the pool. This section describes configuration both for environments where certificates are signed by a CA and for environments where certificates are self-signed.

  1. Generate a private key on each node within the pool. You can do this using the openssl tool:

    sudo openssl genrsa -out /etc/ssl/glusterfs.key 2048
  2. Perform one of the following substeps depending on the method you adopt.

    Using self-signed certificates
    1. Create a self-signed certificate on each node in the storage pool.

      sudo openssl req -new -x509 -days 365 -key /etc/ssl/glusterfs.key -out /etc/ssl/glusterfs.pem
    2. Concatenate the contents of each node's *.pem file into a single file called /etc/ssl/glusterfs.ca.

    3. Ensure that the /etc/ssl/glusterfs.ca file exists on every node in the pool.

      Each node uses this file to validate the certificates presented by other nodes or clients that connect to the node. If the public certificate for another participatory node or client is not present in this file, the node is unable to verify certificates and the connections fail.

    Using CA-signed certificates
    1. Create a certificate signing request (CSR).

      sudo openssl req -new -sha256 -key /etc/ssl/glusterfs.key -out /etc/ssl/glusterfs.csr
    2. After obtaining the signed certificate back from your CA, save this file to /etc/ssl/glusterfs.pem.

    3. Save the CA certificate for your CA provider to /etc/ssl/glusterfs.ca on each node in the pool.

      Each node uses this file to validate the certificates presented by other nodes or clients that connect to it. If the public certificate for another participatory node or client cannot be verified by the CA signing certificate, attempts to connect by the client or node fail.
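Whichever method you use, a common mistake is pairing /etc/ssl/glusterfs.pem with a key other than the one that produced it. The following sketch, which is illustrative and not part of Gluster, checks that an RSA private key and a certificate belong together by comparing their public-modulus output; the paths are the example paths used in these steps:

```shell
# Return success if the certificate in $2 was generated from the
# RSA private key in $1, by comparing the public modulus of each.
verify_key_cert_match() {
    key_mod=$(openssl rsa -noout -modulus -in "$1") || return 2
    crt_mod=$(openssl x509 -noout -modulus -in "$2") || return 2
    [ "$key_mod" = "$crt_mod" ]
}

# Example paths from this guide; run on each node after creating
# or receiving the signed certificate.
if [ -f /etc/ssl/glusterfs.key ] && [ -f /etc/ssl/glusterfs.pem ]; then
    if verify_key_cert_match /etc/ssl/glusterfs.key /etc/ssl/glusterfs.pem; then
        echo "key and certificate match"
    else
        echo "MISMATCH: glusterfs.pem does not correspond to glusterfs.key"
    fi
fi
```

A mismatch here would otherwise only surface later as failed TLS handshakes between nodes or clients.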

  3. Configure TLS encryption for management traffic within the storage pool by creating an empty file at /var/lib/glusterd/secure-access on each node in the pool.

    Perform this same step on any client system where you intend to mount a volume.

    sudo touch /var/lib/glusterd/secure-access
  4. Enable TLS on the I/O path for an existing volume by setting the client.ssl and server.ssl parameters for that volume.

    For example, to enable TLS on a volume named myvolume, type the following:

    sudo gluster volume set myvolume client.ssl on
    sudo gluster volume set myvolume server.ssl on

    These parameters enable TLS validation and encryption on client traffic using the Gluster native client and on communications between nodes within the pool. Note that TLS is not automatically enabled on non-native file sharing protocols such as SMB by changing these settings.

  5. Restart the glusterd service on each of the nodes where you have enabled secure access for management traffic within the pool.

    sudo systemctl restart glusterd