2 Installing Gluster Storage for Oracle Linux
WARNING:
Gluster on Oracle Linux 8 is no longer supported. See Oracle Linux: Product Life Cycle Information for more information.
Oracle Linux 7 is now in Extended Support. See Oracle Linux Extended Support and Oracle Open Source Support Policies for more information. Gluster on Oracle Linux 7 is excluded from extended support.
This chapter discusses how to enable the repositories to install the Gluster Storage for Oracle Linux packages, and how to perform an installation of those packages. It also discusses setting up Gluster trusted storage pools and Transport Layer Security (TLS), and contains information on upgrading from a previous release of Gluster Storage for Oracle Linux.
Hardware and Network Requirements
Gluster Storage for Oracle Linux doesn't require specific hardware; however, certain Gluster operations are CPU and memory intensive. The X6 and X7 lines of Oracle x86 Servers are suitable to host Gluster nodes. For more information on Oracle x86 Servers, see https://www.oracle.com/servers/x86/
Oracle provides support for Gluster Storage for Oracle Linux on 64-bit x86 (x86_64) and 64-bit Arm (aarch64) hardware.
A minimum node configuration consists of the following:
-
2 CPU cores
-
2GB RAM
-
1GB Ethernet NIC
Recommended: 2 x 1GB Ethernet NICs in a bonded (802.3ad/LACP) configuration
-
Dedicated storage sized for the data requirements (10GB or higher) and formatted as an XFS file system
A minimum of three nodes are required in a Gluster trusted storage pool. The examples in this guide use three nodes, named node1, node2, and node3. Node names for each of the nodes in the pool must be resolvable on each host. You can achieve this either by configuring DNS correctly, or you can add host entries to the /etc/hosts file on each node.
In the examples in this guide, each host is configured with an added dedicated block storage device at /dev/sdb. The block device is formatted with the XFS file system and then mounted at /data/glusterfs/myvolume/mybrick.
Your deployment needs may require nodes with a larger footprint. Added considerations are detailed in the Gluster upstream documentation.
Operating System Requirements
Release 8 of Gluster Storage for Oracle Linux is available on the platforms and operating systems shown in the following table.
Table 2-1 Operating System Requirements
Platform | Operating System Release | Minimum Operating System Maintenance Release | Kernel |
---|---|---|---|
x86_64 | Oracle Linux 8 | Oracle Linux 8.2 | Unbreakable Enterprise Kernel Release 6 (UEK R6), Red Hat Compatible Kernel (RHCK) |
aarch64 | Oracle Linux 8 | Oracle Linux 8.2 | UEK R6, RHCK |
x86_64 | Oracle Linux 7 | Oracle Linux 7.7 | UEK R6, UEK R5, UEK R4, RHCK |
aarch64 | Oracle Linux 7 | Oracle Linux 7.7 | UEK R6, UEK R5 |
Important:
-
All nodes within the deployment must run the same Oracle Linux release and update level. Otherwise, changes between releases in some component software, such as the default Python version, might cause communication issues between nodes.
-
If you're upgrading from a previous release of Gluster Storage for Oracle Linux, you must also upgrade to the latest update level of the Oracle Linux release that you're using.
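For example, one quick way to confirm that the nodes match before you begin is to compare the release and kernel on each system (repeat on every node):
cat /etc/oracle-release
uname -r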
Enabling Access to the Gluster Storage for Oracle Linux Packages
Note:
When working on Oracle Linux 8 systems, where the documentation uses the yum command in the examples, you can substitute the command with dnf for the same behavior.
On Oracle Linux 8 Systems
The Gluster Storage for Oracle Linux packages are available on the Oracle Linux yum server in the ol8_gluster_appstream repository, or on the Unbreakable Linux Network (ULN) in the ol8_arch_gluster_appstream channel.
Enabling Repositories with ULN
If you're registered to use ULN, use the ULN web interface to subscribe the system to the appropriate channels.
-
Log in to https://linux.oracle.com with the ULN username and password.
-
On the Systems tab, click the link named for the system in the list of registered machines.
-
On the System Details page, click Manage Subscriptions.
-
On the System Summary page, select each required channel from the list of available channels and click the right arrow to move the channel to the list of subscribed channels. Subscribe the system to the following channels to ensure that all dependencies are met:
-
ol8_arch_gluster_appstream
-
ol8_arch_baseos_latest
-
ol8_arch_appstream
-
Click Save Subscriptions.
Enabling Repositories with the Oracle Linux Yum Server
If you're using the Oracle Linux yum server for system updates, use the command line to enable the Gluster Storage for Oracle Linux yum repository.
-
Install the oracle-gluster-release-el8 release package to install the Gluster Storage for Oracle Linux yum repository configuration.
sudo dnf install oracle-gluster-release-el8
-
Enable the following yum repositories:
-
ol8_gluster_appstream
-
ol8_baseos_latest
-
ol8_appstream
Use the dnf config-manager tool to enable the yum repositories:
sudo dnf config-manager --enable ol8_gluster_appstream ol8_baseos_latest ol8_appstream
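Optionally, you can confirm that the repositories are active before you continue, for example:
dnf repolist --enabled | grep -E 'gluster|baseos|appstream'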
Enabling the glusterfs Module and Application Stream
On Oracle Linux 8, the Gluster Storage for Oracle Linux packages are released as an application stream module. All the packages specific to a particular release of the Gluster software are released within a stream. Furthermore, packages are bundled within profiles so that they can be installed in a single step on a system, depending on the use case.
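To see which streams and profiles are available on a system before enabling anything, you can list the module, for example:
dnf module list glusterfs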
Two modules are available for Gluster. The glusterfs module contains all packages required to install and run a Gluster server node or a Gluster client. The glusterfs-developer module contains packages that are released as a technical preview for developer use only. Packages that are available in the glusterfs-developer module might only be supported within specific contexts, such as when they're made available for other supported software.
To gain access to the Gluster Storage for Oracle Linux packages, enable the glusterfs application stream module:
sudo dnf module enable glusterfs
After the module is enabled, you can install any of the packages within the module by following any of the other instructions in this documentation.
You're also able to take advantage of the module profiles to install everything you might need for a use case. For example, on any server node, you can run:
sudo dnf install @glusterfs/server
This action installs all the possible packages required for any Gluster server node, including core Gluster functionality, Gluster geo-replication functionality, NFS-Ganesha server, and Heketi server packages.
On a client system where you intend to access or mount a Gluster share, type:
sudo dnf install @glusterfs/client
This action installs the Gluster packages required to mount a Gluster share or to act as a Heketi client.
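If you want to see exactly which packages each profile pulls in on a particular system, you can query the module metadata, for example:
dnf module info --profile glusterfs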
On Oracle Linux 7 Systems
The Gluster Storage for Oracle Linux packages are available on the Oracle Linux yum server in the ol7_gluster8 repository, or on the Unbreakable Linux Network (ULN) in the ol7_arch_gluster8 channel. However, dependencies exist across other repositories and channels, and these must also be enabled on each system where Gluster is installed.
Enabling Repositories with ULN
If you're registered to use ULN, use the ULN web interface to subscribe the system to the appropriate channels.
-
Log in to https://linux.oracle.com with the ULN username and password.
-
On the Systems tab, click the link named for the system in the list of registered machines.
-
On the System Details page, click Manage Subscriptions.
-
On the System Summary page, select each required channel from the list of available channels and click the right arrow to move the channel to the list of subscribed channels. Subscribe the system to the following channels:
-
ol7_arch_gluster8
-
ol7_arch_addons
-
ol7_arch_latest
-
ol7_arch_optional_latest
-
ol7_arch_UEKR5
or ol7_arch_UEKR4
-
Click Save Subscriptions.
Enabling Repositories with the Oracle Linux Yum Server
If you're using the Oracle Linux yum server for system updates, use the command line to enable the Gluster Storage for Oracle Linux yum repository.
-
Install the oracle-gluster-release-el7 release package to install the Gluster Storage for Oracle Linux yum repository configuration.
sudo yum install oracle-gluster-release-el7
-
Enable the following yum repositories:
-
ol7_gluster8
-
ol7_addons
-
ol7_latest
-
ol7_optional_latest
-
ol7_UEKR5
or ol7_UEKR4
Use the yum-config-manager tool to enable the yum repositories:
sudo yum-config-manager --enable ol7_gluster8 ol7_addons ol7_latest ol7_optional_latest ol7_UEKR5
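Optionally, you can confirm that the repositories are enabled before you continue, for example:
yum repolist enabled | grep -E 'gluster8|addons|latest'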
Installing and Configuring Gluster
A Gluster deployment consists of several systems, known as nodes. The nodes form a trusted storage pool or cluster.
The following sections discuss setting up nodes for a Gluster trusted storage pool.
Preparing Oracle Linux Nodes
In addition to the requirements listed in Hardware and Network Requirements and Operating System Requirements, all the Oracle Linux systems that you intend to use as nodes must have the following configurations and features:
-
The same storage configuration
-
Synchronized time
-
Resolvable fully qualified domain names (FQDNs)
Storage Configuration
Gluster requires a dedicated file system on each node for the cluster data. The storage must be formatted with an XFS file system. The storage device can be an added disk, a disk partition, an LVM volume, a loopback device, a multipath device, or a LUN. Don't use the root partition for cluster data.
The cluster file system used in the examples in this guide is an XFS file system on a disk attached to /dev/sdb on each node. This disk is mounted on the directory /data/glusterfs/myvolume/mybrick. The inode size is set to 512 bytes to accommodate the extended attributes used by the Gluster file system. To set up this disk, you would use commands similar to:
sudo mkfs.xfs -f -i size=512 -L glusterfs /dev/sdb
sudo mkdir -p /data/glusterfs/myvolume/mybrick
echo 'LABEL=glusterfs /data/glusterfs/myvolume/mybrick xfs defaults 0 0' | sudo tee -a /etc/fstab
sudo mount -a
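If you want to verify the brick file system after mounting, you can use commands such as the following; the 512-byte inode size appears as isize=512 in the xfs_info output:
df -hT /data/glusterfs/myvolume/mybrick
sudo xfs_info /data/glusterfs/myvolume/mybrick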
Synchronized Time
Time must be synchronized across the nodes in the pool. You can install and configure NTP or PTP on each node for this purpose. For more information on synchronizing time on Oracle Linux 7, see Configuring Network Time in Oracle Linux 7: Setting Up Networking.
For information that applies to Oracle Linux 8, see Oracle Linux: Using the Cockpit Web Console and Configuring Network Time in Oracle Linux 8: Setting Up Networking.
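For example, with chrony (the default time service on recent Oracle Linux releases), a minimal setup on each node might look like the following; on Oracle Linux 7, substitute yum for dnf:
sudo dnf install -y chrony
sudo systemctl enable --now chronyd
chronyc tracking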
Resolvable Host Names
All nodes must be configured to resolve the FQDN for each node within the pool. You can either use DNS or provide entries within /etc/hosts for each system. If you rely on DNS, the service must have redundancy to ensure that the cluster can perform name resolution at any time. If you use the /etc/hosts file, add entries on each node for the IP address and host name of all the nodes in the pool, for example:
192.168.1.51 node1.example.com node1
192.168.1.52 node2.example.com node2
192.168.1.53 node3.example.com node3
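With either approach, you can verify name resolution from each node, for example:
getent hosts node1 node2 node3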
Installing the Gluster Server
Install the Gluster server packages on each node to be included in a trusted storage pool.
-
Install the glusterfs-server package. If running Oracle Linux 8, type the following command:
sudo dnf install @glusterfs/server
Otherwise, type:
sudo yum install glusterfs-server
-
Start and enable the Gluster server service:
sudo systemctl enable --now glusterd
-
Adjust firewall configuration.
Pool network communications must be configured between nodes within the cluster. If you deploy a firewall on any of the nodes, configure the firewall service to accept network traffic on the required ports, or to accept all traffic between the nodes in the cluster.
-
If you have a dedicated network for Gluster traffic, you can add the interfaces to a trusted firewall zone and accept all traffic between nodes within the pool. For example, on each node in the pool, run:
sudo firewall-cmd --permanent --change-zone=if-name --zone=trusted
sudo firewall-cmd --reload
The commands automatically add the line zone=trusted to an interface's /etc/sysconfig/network-scripts/ifcfg-if-name file and then reload the firewall to activate the change. With this configuration, clients must be on the same dedicated network and configured for the same firewall zone. You can also configure more rules specific to the interface on which clients connect to further customize the firewall configuration.
-
If network interfaces are on a shared or untrusted network, configure the firewall to accept traffic on the ports used by Gluster.
sudo firewall-cmd --permanent --add-service=glusterfs
sudo firewall-cmd --reload
Note that adding the glusterfs service only exposes the ports required for Gluster. If you intend to add access through Samba, you must also add these services.
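After completing these steps on each node, you can optionally confirm that the Gluster service is running and check the installed version, for example:
sudo systemctl status glusterd
gluster --version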
Creating the Trusted Storage Pool
This section shows you how to create a trusted storage pool. In this example, a pool of three servers is created (node1, node2, and node3). Nominate one of the nodes in the pool as the node on which you perform pool operations. In this example, node1 is the node on which the pool operations are performed.
-
Add the nodes to the trusted server pool. You don't need to add the node on which you're performing the pool operations. For example:
sudo gluster peer probe node2
sudo gluster peer probe node3
-
(Optional) Check the status of each node in the pool.
sudo gluster peer status
-
(Optional) List the nodes in the pool.
sudo gluster pool list
If you need to remove a server from a trusted server pool, use:
sudo gluster peer detach hostname
Setting Up Transport Layer Security
Gluster works with Transport Layer Security (TLS) by using the OpenSSL library to authenticate Gluster nodes and clients. By using private keys and public certificates, TLS encrypts communication between nodes in the trusted storage pool, and between client systems accessing the pool nodes.
Gluster performs mutual authentication in all transactions. If one side of a connection is configured to use TLS, then the other side must also use TLS. Every node must have a copy either of the public certificate of every other node in the pool, or of the signing CA certificate that can be used to validate the certificates presented by each of the nodes in the pool. Equally, client systems accessing any node in the pool must have a copy of that node's certificate or the signing CA certificate. In turn, the node needs a copy of a certificate for the accessing client.
TLS is enabled as a setting on the volume and can also be enabled for management communication within the pool.
Configuring TLS for a Gluster deployment is optional but recommended for better security.
In production environments, use certificates that are signed by a Certificate Authority (CA) to improve validation security and to reduce the complexity of configuration. However, in cases where various clients access the pool, this method isn't always practical. This section describes configuration for environments where certificates are signed by a CA and for environments where certificates are self-signed.
-
Generate a private key on each node within the pool.
sudo openssl genrsa -out /etc/ssl/glusterfs.key 2048
-
Perform one of the following substeps depending on the method you adopt.
Using self-signed certificates
-
Create a self-signed certificate on each node in the storage pool.
sudo openssl req -new -x509 -days 365 -key /etc/ssl/glusterfs.key -out /etc/ssl/glusterfs.pem
-
Concatenate the contents of each of the *.pem files into a single file called /etc/ssl/glusterfs.ca.
-
Ensure that the /etc/ssl/glusterfs.ca file exists on every node in the pool. Each node uses this file to validate the certificates presented by other nodes or clients that connect to the node. If the public certificate for another participatory node or client isn't present in this file, the node is unable to verify certificates and the connections fail.
Using CA-signed certificates
-
Create a certificate signing request (CSR).
sudo openssl req -new -sha256 -key /etc/ssl/glusterfs.key -out /etc/ssl/glusterfs.csr
-
After obtaining the signed certificate back from the CA, save this file to /etc/ssl/glusterfs.pem.
-
Save the CA certificate for the CA provider to /etc/ssl/glusterfs.ca on each node in the pool. Each node uses this file to validate the certificates presented by other nodes or clients that connect to it. If the public certificate for another participatory node or client can't be verified by the CA signing certificate, client or node connections fail.
-
Configure TLS encryption for management traffic within the storage pool by creating an empty file at /var/lib/glusterd/secure-access on each node in the pool. Perform this same step on any client system where you intend to mount a volume.
sudo touch /var/lib/glusterd/secure-access
-
Enable TLS on the I/O path for an existing volume by setting the client.ssl and server.ssl parameters for that volume. For example, to enable TLS on a volume named myvolume, type the following:
sudo gluster volume set myvolume client.ssl on
sudo gluster volume set myvolume server.ssl on
These parameters enable TLS validation and encryption on client traffic using the Gluster native client and on communications between nodes within the pool. Note that TLS isn't automatically enabled on non-native file sharing protocols such as SMB by changing these settings.
-
Restart the glusterd service on each of the nodes where you have enabled secure access for management traffic within the pool.
sudo systemctl restart glusterd
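After glusterd restarts on all the nodes, you can optionally confirm that the TLS settings are active on the volume, for example:
sudo gluster volume get myvolume client.ssl
sudo gluster volume get myvolume server.ssl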