2.4 Installing and Configuring Gluster

A Gluster deployment consists of several systems, known as nodes. The nodes form a trusted storage pool or cluster. Each node in the pool must:

  • Have the Oracle Linux operating system installed

  • Have the same storage configured

  • Have synchronized time

  • Be able to resolve the fully qualified domain name of each node in the pool

  • Run the Gluster server daemon

The following sections discuss setting up nodes for a Gluster trusted storage pool.

2.4.1 Preparing Oracle Linux Nodes

Each Oracle Linux system that you intend to use as a node must meet some basic requirements. Some preparatory work may be required before you can begin your deployment.

To prepare Oracle Linux nodes:

  1. Gluster requires a dedicated file system on each node for the cluster data. The storage must be formatted with an XFS file system. The storage device can be an additional disk, a disk partition, an LVM volume, a loopback device, a multipath device, or a LUN. Do not use the root partition for cluster data.

    The cluster file system used in the examples in this guide is an XFS file system on a disk attached as /dev/sdb on each node. This disk is mounted on the directory /data/glusterfs/myvolume/mybrick. The inode size is set to 512 bytes to accommodate the extended attributes used by the Gluster file system. To set up this disk, you would use commands similar to the following:

    # mkfs.xfs -f -i size=512 -L glusterfs /dev/sdb
    # mkdir -p /data/glusterfs/myvolume/mybrick
    # echo 'LABEL=glusterfs /data/glusterfs/myvolume/mybrick xfs defaults  0 0' >> /etc/fstab
    # mount -a
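
    To confirm that the file system is mounted and that the inode size is set as expected, you can run commands similar to the following on each node (the mount point shown matches the example above):

    # df -h /data/glusterfs/myvolume/mybrick
    # xfs_info /data/glusterfs/myvolume/mybrick | grep isize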
  2. Time must be accurate and synchronized across the nodes in the pool. This is achieved by installing and configuring NTP on each node. If the NTP service is not already configured, install and start it. For more information on configuring NTP, see the Oracle Linux 7 Administrator's Guide at:

    https://docs.oracle.com/cd/E52668_01/E54669/html/ol7-nettime.html
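
    For example, assuming the ntp package is available in your configured yum repositories, you could install and start the service on each node with commands similar to:

    # yum install -y ntp
    # systemctl enable ntpd
    # systemctl start ntpd
    # ntpq -p

    The ntpq -p command lists the time sources the daemon is using, so that you can confirm synchronization is working.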

  3. Network communication must be possible between all nodes within the pool. If firewall software is running on any of the nodes, it must either be disabled or, preferably, configured to allow traffic on the required ports between each node in the cluster.

    • If you have a dedicated network for Gluster traffic, you can add the interfaces to a trusted firewall zone and allow all traffic between nodes within the pool. For example, on each node in the pool, run:

      # firewall-cmd --permanent --zone=trusted --change-interface=eno2
      # firewall-cmd --reload

      The first command automatically updates the /etc/sysconfig/network-scripts/ifcfg-eno2 file for the network interface named eno2, adding the line ZONE=trusted. You must reload the firewall service for the change to be loaded into the firewall and become active.

      In this configuration, your clients must be on the same dedicated network and configured for the same firewall zone; otherwise, you may need to configure additional rules specific to the interface that your clients connect on. You can verify the zone assignment after the reload, as shown below.
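
      For example, to confirm that the interface is now assigned to the trusted zone, you can run:

      # firewall-cmd --get-active-zones
      # firewall-cmd --zone=trusted --list-interfaces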

    • If your network interfaces are on a shared or untrusted network, you can configure the firewall to allow traffic on the ports specifically used by Gluster:

      # firewall-cmd --permanent --add-service=glusterfs
      # firewall-cmd --reload

      Note that adding the glusterfs service only exposes the ports required by Gluster itself. If you intend to provide access via Samba, you must add those services as well, as shown in the example that follows.
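
      For example, if clients access volumes through Samba shares, you could also allow the standard firewalld samba service. This is shown as an illustration; add whichever services your deployment actually uses:

      # firewall-cmd --permanent --add-service=samba
      # firewall-cmd --reload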

  4. All nodes must be able to resolve the fully qualified domain name of each node within the pool. You can either use DNS for this purpose or provide entries in /etc/hosts on each system. If you rely on DNS, it must have sufficient redundancy to ensure that the cluster can perform name resolution at any time. If you choose to edit the /etc/hosts file on each node, add entries for the IP address and host name of all of the nodes in the pool, for example:

    192.168.1.51    node1.example.com     node1
    192.168.1.52    node2.example.com     node2
    192.168.1.53    node3.example.com     node3
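
    Whichever method you use, you can verify that each node resolves the names of its peers, for example:

    # getent hosts node1.example.com
    # ping -c 1 node2.example.com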

You can now install and configure the Gluster server.