Chapter 5 Accessing Volumes

This chapter discusses the options available for accessing Gluster volumes from an Oracle Linux or Microsoft Windows client system.

Access to volumes is provided through a number of different network file system technologies, including NFS, Samba, and a Gluster native client that uses the File System in Userspace (FUSE) software interface.

If you need to mount the volume locally on one of the nodes, you should treat this as an additional mount exactly as if you were mounting from a remote host.
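
For example, a minimal sketch of mounting a volume locally on one of its own nodes, assuming a volume named myvolume (see Section 5.3, “Accessing Volumes by Using the Gluster Native Client (FUSE)” for details on the native client mount):

# Mount the volume through the native client rather than writing to the bricks directly
sudo mkdir /gluster-local
sudo mount -t glusterfs localhost:/myvolume /gluster-local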

Warning

Editing the data within the volume directly on the file system on each node can quickly lead to split-brain scenarios and potential file system corruption.

5.1 Accessing Volumes by Using iSCSI

This section discusses setting up a volume as an iSCSI backstore to provide block storage, using the gluster-block and tcmu-runner packages. Files on volumes are exported as block storage (iSCSI LUNs). An iSCSI initiator logs in to a LUN to access the block device.

The gluster-block package includes a CLI to create and manage iSCSI access to volumes. The tcmu-runner package handles access to volumes using the iSCSI protocol.

5.1.1 Installing iSCSI Services

This section discusses setting up the trusted storage pool to enable iSCSI access.

To install iSCSI services:

On each node in the trusted storage pool:

  1. Install the tcmu-runner and gluster-block packages.

    sudo yum install tcmu-runner gluster-block
  2. Start and enable the tcmu-runner and gluster-blockd services:

    sudo systemctl enable --now tcmu-runner gluster-blockd
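
    To confirm that both services are running on the node, you can check them with systemctl, for example:

    sudo systemctl is-active tcmu-runner gluster-blockd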

5.1.2 Creating a Block Device

This section discusses creating a block device on an existing volume. For more information about creating and managing block devices, see the upstream documentation. To get help on using the gluster-block command, enter gluster-block help.

To create a block device:

On a node in the trusted storage pool:

  1. Create the block device using the gluster-block create command. This example creates a 20GiB block device named myvolume-block for the volume named myvolume, without preallocation (prealloc no). The three nodes in the trusted storage pool export the device to provide a highly available (ha 3) connection to the volume.

    sudo gluster-block create myvolume/myvolume-block ha 3 prealloc no 192.168.1.51,192.168.1.52,192.168.1.53 20GiB
    IQN: iqn.2016-12.org.gluster-block:4a015741-f455-4568-a0ee-b333b595ba4f
    PORTAL(S):  192.168.1.51:3260 192.168.1.52:3260 192.168.1.53:3260
    RESULT: SUCCESS
  2. To get a list of block devices for a volume, use the gluster-block list command.

    sudo gluster-block list myvolume
    myvolume-block
  3. You can get information on the block device using the gluster-block info command.

    sudo gluster-block info myvolume/myvolume-block
    NAME: myvolume-block
    VOLUME: myvolume
    GBID: 4a015741-f455-4568-a0ee-b333b595ba4f
    SIZE: 20.0 GiB
    HA: 3
    PASSWORD: 
    EXPORTED ON: 192.168.1.51 192.168.1.52 192.168.1.53
  4. To get a list of the iSCSI targets, use the targetcli ls command.

    sudo targetcli ls
    ...
      o- iscsi .................................................................... [Targets: 1]
      | o- iqn.2016-12.org.gluster-block:4a015741-f455-4568-a0ee-b333b595ba4f ........ [TPGs: 3]
      |   o- tpg1 .......................................................... [gen-acls, no-auth]
      |   | o- acls .................................................................. [ACLs: 0]
      |   | o- luns .................................................................. [LUNs: 1]
      |   | | o- lun0 ................................. [user/myvolume-block (glfs_tg_pt_gp_ao)]
      |   | o- portals ............................................................ [Portals: 1]
      |   |   o- 192.168.1.51:3260 ........................................................ [OK]
    ...
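
When a block device is no longer needed, you can remove it with the gluster-block delete command, for example, assuming the block device created above:

sudo gluster-block delete myvolume/myvolume-block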

5.1.3 Accessing an iSCSI Block Device

This section discusses accessing an iSCSI block device.

To access an iSCSI block device:

On a client node:

  1. Install the packages required to access the block storage.

    sudo yum install iscsi-initiator-utils device-mapper-multipath
  2. Enable the iscsid service:

    sudo systemctl enable iscsid
  3. Discover and log in to the iSCSI target on any of the nodes in the trusted storage pool that are set up to host block devices. For example:

    sudo iscsiadm -m discovery -t st -p 192.168.1.51 -l
  4. You can see a list of the iSCSI sessions using the iscsiadm -m session command:

    sudo iscsiadm -m session
    tcp: [1] 192.168.1.51:3260,1 iqn.2016-12.org.gluster-block:4a015741... (non-flash)
    tcp: [2] 192.168.1.52:3260,2 iqn.2016-12.org.gluster-block:4a015741... (non-flash)
    tcp: [3] 192.168.1.53:3260,3 iqn.2016-12.org.gluster-block:4a015741... (non-flash)
  5. (Optional) Set up multipath.

    1. Load and enable the multipath module.

      sudo modprobe dm_multipath
      sudo mpathconf --enable
    2. Restart and enable the multipathd service.

      sudo systemctl restart multipathd
      sudo systemctl enable multipathd
  6. To see the new iSCSI devices added, use the lsblk command:

    sudo lsblk
    NAME                MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
    sdd                   8:48   0   20G  0 disk  
    └─mpatha            252:2    0   20G  0 mpath 
    sdb                   8:16   0   10G  0 disk  
    sde                   8:64   0   20G  0 disk  
    └─mpatha            252:2    0   20G  0 mpath 
    sdc                   8:32   0   20G  0 disk  
    └─mpatha            252:2    0   20G  0 mpath 
    sda                   8:0    0 36.5G  0 disk  
    ├─sda2                8:2    0   36G  0 part  
    │ ├─vg_main-lv_swap 252:1    0    4G  0 lvm   [SWAP]
    │ └─vg_main-lv_root 252:0    0   32G  0 lvm   /
    └─sda1                8:1    0  500M  0 part  /boot

    New disks are added for the Gluster block storage. In this case, the disks are sdc, sdd, and sde, each of which is a path to the same multipath device, mpatha.

  7. Create an XFS file system on the device:

    sudo mkfs.xfs /dev/mapper/mpatha
  8. Mount the block device. In this example, the Gluster block storage is mounted at /mnt.

    sudo mount /dev/mapper/mpatha /mnt/
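
    To make the mount persistent across reboots, you can add an entry to the /etc/fstab file on the client. A minimal sketch, assuming the mpatha multipath device and the /mnt mount point used above:

    # _netdev defers the mount until networking (and the iSCSI session) is available
    /dev/mapper/mpatha  /mnt  xfs  _netdev  0 0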

5.2 Accessing Volumes by Using NFS

You can expose volumes using NFS-Ganesha. NFS-Ganesha is a user space file server for the NFS protocol. It provides a FUSE-compatible File System Abstraction Layer (FSAL) to allow access from any NFS client.

Perform the following steps on each node in the trusted storage pool on which you want to enable NFS access:

  1. Install the Gluster NFS-Ganesha client packages:

    sudo yum install nfs-ganesha-gluster
  2. Create an export configuration file in the /etc/ganesha/exports directory. This file contains the NFS export information for NFS-Ganesha. In this example, we use the file name export.myvolume.conf to export a volume named myvolume to an NFS share located at /myvolume on the node.

    EXPORT{
        Export_Id = 1 ;   # Export ID unique to each export
        Path = "/myvolume";  # Path of the volume to be exported. Eg: "/test_volume"
    
        FSAL {
            name = GLUSTER;
            hostname = "localhost";  # IP of one of the nodes in the trusted pool
            volume = "myvolume";  # Volume name. Eg: "test_volume"
        }
    
        Access_type = RW;    # Access permissions
        Squash = No_root_squash; # To enable/disable root squashing
        Disable_ACL = TRUE;  # To enable/disable ACL
        Pseudo = "/myvolume";  # NFSv4 pseudo path for this export. Eg: "/test_volume_pseudo"
        Protocols = "3","4" ;    # NFS protocols supported
        Transports = "UDP","TCP" ; # Transport protocols supported
        SecType = "sys";     # Security flavors supported
    }

    Edit the /etc/ganesha/ganesha.conf file to include the new export configuration file, for example:

    ...
    %include "/etc/ganesha/exports/export.myvolume.conf"
  3. Enable and start the nfs-ganesha service:

    sudo systemctl enable --now nfs-ganesha
    Note

    If the volume is created after you set up access using NFS, you must reload the nfs-ganesha service:

    sudo systemctl reload-or-restart nfs-ganesha
  4. Check that the volume is exported:

    sudo showmount -e localhost
    Export list for localhost:
    /myvolume (everyone)
  5. To connect to the volume from an NFS client, mount the NFS share, for example:

    sudo mkdir /gluster-storage
    sudo mount node1:/myvolume /gluster-storage

    Any files created in the /gluster-storage directory on the NFS client are written to the volume.
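
    To mount the share automatically at boot, you can add an entry to the /etc/fstab file on the NFS client. A minimal sketch, assuming the node name and mount point used above:

    # _netdev defers the mount until networking is available
    node1:/myvolume  /gluster-storage  nfs  defaults,_netdev  0 0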

5.3 Accessing Volumes by Using the Gluster Native Client (FUSE)

You can use the Gluster native client on an Oracle Linux host to access a volume. The native client takes advantage of the File System in Userspace (FUSE) software interface that allows you to mount a volume without requiring a kernel driver or module.

Do the following:

  1. On the host where you intend to mount the volume, enable access to the Gluster Storage for Oracle Linux packages. For information on enabling access, see Section 2.3, “Enabling Access to the Gluster Storage for Oracle Linux Packages”.

  2. Install the Gluster native client packages:

    If running Oracle Linux 8, type the following command:

    sudo dnf install @glusterfs/client

    Otherwise, type:

    sudo yum install glusterfs glusterfs-fuse
  3. Create the directory where you intend to mount the volume. For example:

    sudo mkdir /gluster-storage
  4. If you have configured TLS for a volume, you may need to perform additional steps before a client system is able to mount the volume. See Section 2.4.4, “Setting Up Transport Layer Security” for more information. The following steps are required to complete client configuration for TLS:

    To set up TLS with the Gluster native client (FUSE):
    1. Set up a certificate and private key on the client system. You can either use a CA signed certificate or create a self-signed certificate, as follows:

      sudo openssl req -newkey rsa:2048 -nodes -keyout /etc/ssl/glusterfs.key -x509 -days 365 -out /etc/ssl/glusterfs.pem
    2. Append the client certificate to the /etc/ssl/glusterfs.ca file on each node in the trusted storage pool. Likewise, ensure that the client has a copy of the /etc/ssl/glusterfs.ca file that includes either the CA certificate that signed each node's certificate or all of the self-signed certificates for each node. Because Gluster performs mutual authentication, it is essential that both the client and the server nodes can validate each other's certificates.

    3. If you enabled encryption on management traffic, you must also enable it on the client system to allow the client to perform the initial mount. Gluster checks for the presence of the file /var/lib/glusterd/secure-access to determine whether management traffic is encrypted. This directory may not exist on a client system, so you might need to create it before touching the file:

      sudo mkdir -p /var/lib/glusterd
      sudo touch /var/lib/glusterd/secure-access
    4. If the volume is already set up and running before you added the client certificate to /etc/ssl/glusterfs.ca, you must stop the volume, restart the Gluster service and start up the volume again for the new certificate to be registered:

      sudo gluster volume stop myvolume
      sudo systemctl restart glusterd
      sudo gluster volume start myvolume 
  5. Mount the volume on the directory using the glusterfs mount type and by specifying a node within the pool along with the volume name. For example:

    sudo mount -t glusterfs node1:myvolume /gluster-storage

    If you have set up the volume to enable mounting a subdirectory, you can add the subdirectory name to the path on the Gluster file system:

    sudo mount -t glusterfs node1:myvolume/subdirectory /gluster-storage
  6. Check the permissions on the new mount to make sure the appropriate users can read and write to the storage. For example:

    sudo chmod 777 /gluster-storage
  7. To make the mount permanent, edit your /etc/fstab file to include the mount. For example:

    node1:/myvolume /gluster-storage glusterfs defaults,_netdev 0 0

    If you are mounting a subdirectory on the volume, add the subdirectory name to the path on the Gluster file system. For example:

    node1:/myvolume/subdirectory /gluster-storage glusterfs defaults,_netdev 0 0

If you have trouble mounting the volume, you can check the logs on the client system at /var/log/glusterfs/ to try to debug connection issues. For example, if TLS is not properly configured and the server node is unable to validate the client, you may see an error similar to the following in the logs:

… error:14094418:SSL routines:ssl3_read_bytes:tlsv1 alert unknown ca
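
For example, you can follow the client log while retrying the mount. The log file name below is an assumption based on the default naming convention, in which the mount point path is used with slashes replaced by dashes:

sudo tail -f /var/log/glusterfs/gluster-storage.log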

5.4 Accessing Volumes by Using Samba

You can expose volumes using the Common Internet File System (CIFS) or Server Message Block (SMB) by using Samba. This file sharing service is commonly used on Microsoft Windows systems.

Gluster provides hooks to preconfigure and export volumes automatically using a Samba Virtual File System (VFS) module plug-in. This reduces the complexity of configuring Samba to export the shares and also means that you do not have to pre-mount the volumes using the FUSE client, resulting in some performance gains. The hooks are triggered every time a volume is started, so your Samba configuration is updated the moment a volume is started within Gluster.

For more information on Samba, see Oracle® Linux 7: Administrator's Guide.

5.4.1 Setting Up the Volume for Samba Access

The following procedure sets up the nodes in the trusted storage pool to enable access to a volume by a Samba client. To use this service, make sure that both the samba and samba-vfs-glusterfs packages are installed on each node in the pool to which you intend clients to connect using Samba.

On each node in the trusted storage pool on which you want to enable Samba access:

  1. Install the Samba packages for Gluster:

    sudo yum install samba samba-vfs-glusterfs
  2. If you are running a firewall service, enable access to Samba on the node. For example:

    sudo firewall-cmd --permanent --add-service=samba 
    sudo firewall-cmd --reload
  3. Enable and start the Samba service:

    sudo systemctl enable --now smb
  4. (Optional) If you do not have an authentication system configured (for example, an LDAP server), you can create a Samba user to enable access to the Samba share from clients. This user should be created on all nodes set up to export the Samba share. For example:

    sudo adduser myuser
    sudo smbpasswd -a myuser
    New SMB password:
    Retype new SMB password:
    Added user myuser.

    Restart the Samba service:

    sudo systemctl restart smb
  5. (Optional) If you want to allow guest access to a Samba share (no authentication is required), add a new line containing map to guest = Bad User to the [global] section of the /etc/samba/smb.conf file on each node set up to export the Samba share. For example:

    [global]
           workgroup = SAMBA
           security = user
    
           passdb backend = tdbsam
    
           printing = cups
           printcap name = cups
           load printers = yes
           cups options = raw
           map to guest = Bad User

    Allowing guest access also requires that the [gluster-volume_name] section contains the guest ok = yes option, which is set automatically with the Gluster hook scripts in the next step.

    Restart the Samba service:

    sudo systemctl restart smb
  6. If you have a running volume, stop it, enable either SMB or CIFS on the volume and start it again. On any node in the trusted storage pool, run:

    sudo gluster volume stop myvolume
    sudo gluster volume set myvolume user.smb enable
    sudo gluster volume start myvolume

    When setting the SMB or CIFS option on the volume, you can use either user.smb or user.cifs to enable the type of export you require. If both options are enabled, user.smb takes precedence.

    When you start a volume, a Gluster hook is triggered to automatically add a configuration entry for the volume to the /etc/samba/smb.conf file on each node, and to reload the Samba service, as long as the user.smb or user.cifs option is set for the volume. This script generates a Samba configuration entry similar to the following:

    [gluster-myvolume]
    comment = For samba share of volume myvolume
    vfs objects = glusterfs
    glusterfs:volume = myvolume
    glusterfs:logfile = /var/log/samba/glusterfs-myvolume.%M.log
    glusterfs:loglevel = 7
    path = /
    read only = no
    guest ok = yes
    kernel share modes = no
    Note

    The value of the [gluster-myvolume] entry sets the name you use to connect to the Samba share in the connection string.

  7. (Optional) If you do not want Gluster to automatically configure Samba to export shares for volumes, you can remove or rename the hook scripts that control this behavior. On each node on which you want to disable the Samba shares, rename the hook scripts, for example:

    sudo rename S30 disabled-S30 $(find /var/lib/glusterd -type f -name "S30samba*")

    To re-enable the hooks, you can run:

    sudo rename disabled-S30 S30 $(find /var/lib/glusterd -type f -name "*S30samba*")
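
After a volume is started and the hook scripts have updated /etc/samba/smb.conf, you can validate the Samba configuration and confirm that the share definition was added, for example:

# Check smb.conf for syntax errors and print the parsed share definitions
sudo testparm -s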

5.4.2 Testing SMB Access to a Volume

The following procedure discusses testing SMB access to a volume that has been set up to export a Samba share. You can test SMB access to the volume from an Oracle Linux host. This host does not need to be part of the Gluster pool.

  1. On an Oracle Linux host, install the Samba client package:

    sudo yum install samba-client
  2. Use the smbclient command to list Samba shares on a node in the trusted storage pool where you set up Samba. For example:

    sudo smbclient -N -U% -L node1

    To look directly at the contents of the volume, you can run:

    sudo smbclient -N -U% //node1/gluster-myvolume -c ls

    In this command, you specify the Samba share name for the volume. This name can be found on a host where you set up the Samba share, in the /etc/samba/smb.conf file. Usually the Samba share name is gluster-volume_name.
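
    If you created a Samba user, as described in Section 5.4.1, “Setting Up the Volume for Samba Access”, you can also test authenticated access. A minimal sketch, assuming the myuser account created earlier; the command prompts for the Samba password:

    sudo smbclient -U myuser //node1/gluster-myvolume -c ls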

5.4.3 Testing CIFS Access to a Volume

The following procedure discusses testing CIFS access to a volume that has been set up to export a Samba share. You can test CIFS access to the volume from an Oracle Linux host. This host does not need to be part of the Gluster pool.

To test access to the volume using CIFS:
  1. On an Oracle Linux host, install the cifs-utils package:

    sudo yum install cifs-utils
  2. Create a mount directory where you intend to mount the volume. For example:

    sudo mkdir /gluster-storage
  3. Mount the volume on the directory using the cifs mount type and by specifying a node within the pool, along with the Samba share name for the volume. This name can be found on a host where you set up the Samba share, in the /etc/samba/smb.conf file. Usually the Samba share name is gluster-volume_name. For example:

    sudo mount -t cifs -o guest //node1/gluster-myvolume /gluster-storage

    If you have set up the volume to enable mounting a subdirectory, you can add the subdirectory name to the path on the Gluster file system:

    sudo mount -t cifs -o guest //node1/gluster-myvolume/subdirectory /gluster-storage

    If you want to pass authentication credentials to the Samba share, first add them to a local file. In this example, the credentials are saved to the file /credfile.

    username=value
    password=value

    Set the permissions on the credentials file so other users cannot access it.

    sudo chmod 600 /credfile

    You can then use the credentials file to connect to the Samba share, for example:

    sudo mount -t cifs -o credentials=/credfile //node1/gluster-myvolume /gluster-storage
  4. Check the permissions on the new mount to make sure the appropriate users can read and write to the storage. For example:

    sudo chmod 777 /gluster-storage
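
To make the CIFS mount persistent across reboots, you can add an entry to the /etc/fstab file on the client. A minimal sketch, assuming the credentials file created above:

# _netdev defers the mount until networking is available
//node1/gluster-myvolume  /gluster-storage  cifs  credentials=/credfile,_netdev  0 0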

5.4.4 Accessing the Volume from Microsoft Windows

On a Microsoft Windows host, you can mount the volume using the Samba share. By default, the Samba share is available in the Workgroup named SAMBA (as defined in the /etc/samba/smb.conf file on Samba share nodes).

You can map the volume as a network drive in Windows Explorer using the format \\node\volume, for example:

\\node1\gluster-myvolume

Alternatively, you can map a new drive from the Windows command line. Start the Command Prompt and enter a command similar to the following:

net use z: \\node1\gluster-myvolume
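
If the share requires authentication, you can supply the Samba user name and request a password prompt. A minimal sketch, assuming the myuser account created in Section 5.4.1, “Setting Up the Volume for Samba Access” (the asterisk causes a prompt for the password):

net use z: \\node1\gluster-myvolume * /user:myuser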