Chapter 4 Accessing Volumes

This chapter discusses the options available for accessing Gluster volumes from an Oracle Linux or Microsoft Windows client system.

Access to volumes is provided through several network file system technologies, including NFS, Samba, and the Gluster native client, which uses the File System in Userspace (FUSE) software interface to provide access to volumes.

If you need to mount the volume locally on one of the nodes, you should treat this as an additional mount exactly as if you were mounting from a remote host.
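
For example, a local mount using the Gluster native client might look like the following. This is a minimal sketch: the node name node1, the volume name myvolume, and the mount point /mnt/glusterfs are placeholders for your own environment.

    # mkdir -p /mnt/glusterfs
    # mount -t glusterfs node1:/myvolume /mnt/glusterfs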

Warning

Editing the data within a volume directly on the underlying brick file system on each node bypasses Gluster and can quickly lead to split-brain scenarios and potential file system corruption.

4.1 Accessing Volumes using iSCSI

This section discusses setting up a volume as an iSCSI backstore to provide block storage, using the gluster-block and tcmu-runner packages. Files on volumes are exported as block storage (iSCSI LUNs), and an iSCSI initiator logs in to a LUN to access the block device.

The gluster-block package includes a CLI to create and manage iSCSI access to volumes. The tcmu-runner package handles access to volumes using the iSCSI protocol.

4.1.1 Installing iSCSI Services

This section discusses setting up the trusted storage pool to enable iSCSI access.

To install iSCSI services:

On each node in the trusted storage pool:

  1. Install the tcmu-runner and gluster-block packages.

    # yum install tcmu-runner gluster-block
  2. Start and enable the tcmu-runner and gluster-blockd services:

    # systemctl enable --now tcmu-runner gluster-blockd
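
Before continuing, you can optionally confirm that both services are running on each node using standard systemctl commands:

    # systemctl is-active tcmu-runner gluster-blockd
    active
    active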

4.1.2 Creating a Block Device

This section discusses creating a block device on an existing volume. For more information about creating and managing block devices, see the upstream documentation. To get help on using the gluster-block command, enter gluster-block help.

To create a block device:

On a node in the trusted storage pool:

  1. Create the block device using the gluster-block create command. This example creates a 20 GiB block device named myvolume-block on the volume named myvolume, without preallocating the space (prealloc no). Setting ha 3 exports the device on the three listed nodes in the trusted storage pool to provide a highly available connection to the volume.

    # gluster-block create myvolume/myvolume-block ha 3 prealloc no \
       192.168.1.51,192.168.1.52,192.168.1.53  20GiB
    IQN: iqn.2016-12.org.gluster-block:4a015741-f455-4568-a0ee-b333b595ba4f
    PORTAL(S):  192.168.1.51:3260 192.168.1.52:3260 192.168.1.53:3260
    RESULT: SUCCESS
  2. To get a list of block devices for a volume, use the gluster-block list command.

    # gluster-block list myvolume
    myvolume-block
  3. You can get information about a block device using the gluster-block info command.

    # gluster-block info myvolume/myvolume-block
    NAME: myvolume-block
    VOLUME: myvolume
    GBID: 4a015741-f455-4568-a0ee-b333b595ba4f
    SIZE: 20.0 GiB
    HA: 3
    PASSWORD: 
    EXPORTED ON: 192.168.1.51 192.168.1.52 192.168.1.53
  4. To get a list of the iSCSI targets, use the targetcli ls command.

    # targetcli ls
    ...
      o- iscsi .................................................................... [Targets: 1]
      | o- iqn.2016-12.org.gluster-block:4a015741-f455-4568-a0ee-b333b595ba4f ........ [TPGs: 3]
      |   o- tpg1 .......................................................... [gen-acls, no-auth]
      |   | o- acls .................................................................. [ACLs: 0]
      |   | o- luns .................................................................. [LUNs: 1]
      |   | | o- lun0 ................................. [user/myvolume-block (glfs_tg_pt_gp_ao)]
      |   | o- portals ............................................................ [Portals: 1]
      |   |   o- 192.168.1.51:3260 ........................................................ [OK]
    ...
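
When a block device is no longer needed, it can be removed with the gluster-block delete command. The following sketch removes the block device created in this example; make sure any initiators are logged out of the target first, and see gluster-block help for the full syntax.

    # gluster-block delete myvolume/myvolume-block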

4.1.3 Accessing an iSCSI Block Device

This section discusses accessing an iSCSI block device.

To access an iSCSI block device:

On a client node:

  1. Install the packages required to access the block storage.

    # yum install iscsi-initiator-utils device-mapper-multipath
  2. Enable the iscsid service:

    # systemctl enable iscsid
  3. Discover and log in to the iSCSI target on any of the nodes in the trusted storage pool that are set up to host block devices. The -l option logs in to the discovered targets. For example:

    # iscsiadm -m discovery -t st -p 192.168.1.51 -l
  4. You can see a list of the iSCSI sessions using the iscsiadm -m session command:

    # iscsiadm -m session
    tcp: [1] 192.168.1.51:3260,1 iqn.2016-12.org.gluster-block:4a015741... (non-flash)
    tcp: [2] 192.168.1.52:3260,2 iqn.2016-12.org.gluster-block:4a015741... (non-flash)
    tcp: [3] 192.168.1.53:3260,3 iqn.2016-12.org.gluster-block:4a015741... (non-flash)
  5. (Optional) Set up multipath.

    To set up multipath:

    1. Load the multipath kernel module and enable multipath configuration.

      # modprobe dm_multipath
      # mpathconf --enable
    2. Restart and enable the multipathd service.

      # systemctl restart multipathd
      # systemctl enable multipathd
  6. To see the newly added iSCSI devices, use the lsblk command:

    # lsblk
    NAME                MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
    sdd                   8:48   0   20G  0 disk  
    └─mpatha            252:2    0   20G  0 mpath 
    sdb                   8:16   0   10G  0 disk  
    sde                   8:64   0   20G  0 disk  
    └─mpatha            252:2    0   20G  0 mpath 
    sdc                   8:32   0   20G  0 disk  
    └─mpatha            252:2    0   20G  0 mpath 
    sda                   8:0    0 36.5G  0 disk  
    ├─sda2                8:2    0   36G  0 part  
    │ ├─vg_main-lv_swap 252:1    0    4G  0 lvm   [SWAP]
    │ └─vg_main-lv_root 252:0    0   32G  0 lvm   /
    └─sda1                8:1    0  500M  0 part  /boot

    New disks are added for the Gluster block storage, one for each path to the iSCSI target. In this case, the disks are sdc, sdd, and sde, and they are aggregated under the single multipath device mpatha.

  7. Create an XFS file system on the device:

    # mkfs.xfs /dev/mapper/mpatha
  8. Mount the block device. In this example, the Gluster block storage is mounted at /mnt.

    # mount /dev/mapper/mpatha /mnt/
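
To mount the block device automatically at boot, you can add an entry to /etc/fstab. The _netdev mount option defers the mount until networking, and therefore the iSCSI session, is available. The following line is a sketch that assumes the multipath device and mount point used in this example:

    /dev/mapper/mpatha  /mnt  xfs  _netdev  0 0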