5 Accessing Volumes
WARNING:
Gluster on Oracle Linux 8 is no longer supported. See Oracle Linux: Product Life Cycle Information for more information.
Oracle Linux 7 is now in Extended Support. See Oracle Linux Extended Support and Oracle Open Source Support Policies for more information. Gluster on Oracle Linux 7 is excluded from extended support.
This chapter discusses the options available for accessing Gluster volumes from an Oracle Linux or Microsoft Windows client system.
Access to volumes is provided through different network file system technologies including NFS, Samba, and a Gluster native client that uses the File System in Userspace (FUSE) software interface to provide access to the volume.
If you need to mount the volume locally on one of the nodes, treat this as an additional mount exactly as if you were mounting from a remote host.
Attention:
Editing the data within the volume directly on the file system on each node can quickly lead to split-brain scenarios and potential file system corruption.
Accessing Volumes by Using iSCSI
This section discusses setting up a volume as an iSCSI backstore to provide block storage using the gluster-block and tcmu-runner packages. Files on volumes are exported as block storage (iSCSI LUNs). The storage initiator logs in to the LUN to access the block device.
The gluster-block package includes a CLI to create and manage iSCSI access to volumes. The tcmu-runner package handles access to volumes using the iSCSI protocol.
Installing iSCSI Services
This section discusses setting up the trusted storage pool to enable iSCSI access.
To install iSCSI services, do these steps on each node in the trusted storage pool:
-
Install the tcmu-runner and gluster-block packages:
sudo yum install tcmu-runner gluster-block
-
Start and enable the tcmu-runner and gluster-blockd services:
sudo systemctl enable --now tcmu-runner gluster-blockd
Creating a Block Device
This section discusses creating a block device on an existing volume. For more information about creating and managing block devices, see the upstream documentation. To get help on using the gluster-block command, enter gluster-block help.
To create a block device, do the following on a node in the trusted storage pool:
-
Create the block device using the gluster-block create command. This example creates a block device named myvolume-block for the volume named myvolume. The three nodes in the trusted storage pool form a high availability connection to the volume.
sudo gluster-block create myvolume/myvolume-block ha 3 prealloc no 192.168.1.51,192.168.1.52,192.168.1.53 20GiB
IQN: iqn.2016-12.org.gluster-block:4a015741-f455-4568-a0ee-b333b595ba4f
PORTAL(S): 192.168.1.51:3260 192.168.1.52:3260 192.168.1.53:3260
RESULT: SUCCESS
-
To get a list of block devices for a volume, use the gluster-block list command.
sudo gluster-block list myvolume
myvolume-block
-
You can get information on the block device using the gluster-block info command.
sudo gluster-block info myvolume/myvolume-block
NAME: myvolume-block
VOLUME: myvolume
GBID: 4a015741-f455-4568-a0ee-b333b595ba4f
SIZE: 20.0 GiB
HA: 3
PASSWORD:
EXPORTED ON: 192.168.1.51 192.168.1.52 192.168.1.53
-
To get a list of the iSCSI targets, use the targetcli ls command.
sudo targetcli ls
...
o- iscsi .................................................................... [Targets: 1]
| o- iqn.2016-12.org.gluster-block:4a015741-f455-4568-a0ee-b333b595ba4f ........ [TPGs: 3]
|   o- tpg1 .......................................................... [gen-acls, no-auth]
|   | o- acls .................................................................. [ACLs: 0]
|   | o- luns .................................................................. [LUNs: 1]
|   | | o- lun0 ................................. [user/myvolume-block (glfs_tg_pt_gp_ao)]
|   | o- portals ............................................................ [Portals: 1]
|   |   o- 192.168.1.51:3260 ........................................................ [OK]
...
Accessing an iSCSI Block Device
This section discusses accessing an iSCSI block device.
To access an iSCSI block device, do these steps on a client node:
-
Install the packages required to access the block storage.
sudo yum install iscsi-initiator-utils device-mapper-multipath
-
Enable the iscsid service:
sudo systemctl enable iscsid
-
Discover and log in to the iSCSI target on any of the nodes in the trusted storage pool that are set up to host block devices. For example:
sudo iscsiadm -m discovery -t st -p 192.168.1.51 -l
-
You can see a list of the iSCSI sessions using the iscsiadm -m session command:
sudo iscsiadm -m session
tcp: [1] 192.168.1.51:3260,1 iqn.2016-12.org.gluster-block:4a015741... (non-flash)
tcp: [2] 192.168.1.52:3260,2 iqn.2016-12.org.gluster-block:4a015741... (non-flash)
tcp: [3] 192.168.1.53:3260,3 iqn.2016-12.org.gluster-block:4a015741... (non-flash)
-
(Optional) Set up multipath.
-
Load and enable the multipath module.
sudo modprobe dm_multipath
sudo mpathconf --enable
-
Restart and enable the multipathd service:
sudo systemctl restart multipathd
sudo systemctl enable multipathd
-
To see the new iSCSI devices added, use the lsblk command:
sudo lsblk
NAME                  MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sdd                     8:48   0   20G  0 disk
└─mpatha              252:2    0   20G  0 mpath
sdb                     8:16   0   10G  0 disk
sde                     8:64   0   20G  0 disk
└─mpatha              252:2    0   20G  0 mpath
sdc                     8:32   0   20G  0 disk
└─mpatha              252:2    0   20G  0 mpath
sda                     8:0    0 36.5G  0 disk
├─sda2                  8:2    0   36G  0 part
│ ├─vg_main-lv_swap   252:1    0    4G  0 lvm   [SWAP]
│ └─vg_main-lv_root   252:0    0   32G  0 lvm   /
└─sda1                  8:1    0  500M  0 part  /boot
New disks are added for the Gluster block storage. In this case, the disks are sdd, sde, and sdc.
-
Create an XFS file system on the device:
sudo mkfs.xfs /dev/mapper/mpatha
-
Mount the block device. In this example, the Gluster block storage is mounted at /mnt.
sudo mount /dev/mapper/mpatha /mnt/
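If you want the iSCSI-backed file system mounted automatically at boot, you could add an entry to the /etc/fstab file on the client. The following line is a sketch based on the example above (the mpatha device name and the /mnt mount point come from that example); the _netdev option defers the mount until networking, and therefore the iSCSI session, is available:

```
/dev/mapper/mpatha  /mnt  xfs  _netdev  0 0
```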
Accessing Volumes by Using NFS
You can expose volumes using NFS-Ganesha. NFS-Ganesha is a user space file server for the NFS protocol. It provides a FUSE-compatible File System Abstraction Layer (FSAL) to grant access from any NFS client.
Perform the following steps on each node in the trusted storage pool on which you want to enable NFS access:
-
Install the Gluster NFS-Ganesha client packages:
sudo yum install nfs-ganesha-gluster
-
Create an export configuration file in the /etc/ganesha/exports directory. This file contains the NFS export information for NFS-Ganesha. In this example, we use the file name export.myvolume.conf to export a volume named myvolume to an NFS share at /myvolume on the node.
EXPORT{
    Export_Id = 1 ;             # Export ID unique to each export
    Path = "/myvolume";         # Path of the volume to be exported. Eg: "/test_volume"
    FSAL {
        name = GLUSTER;
        hostname = "localhost"; # IP of one of the nodes in the trusted pool
        volume = "myvolume";    # Volume name. Eg: "test_volume"
    }
    Access_type = RW;           # Access permissions
    Squash = No_root_squash;    # To enable/disable root squashing
    Disable_ACL = TRUE;         # To enable/disable ACL
    Pseudo = "/myvolume";       # NFSv4 pseudo path for this export. Eg: "/test_volume_pseudo"
    Protocols = "3","4" ;       # NFS protocols supported
    Transports = "UDP","TCP" ;  # Transport protocols supported
    SecType = "sys";            # Security flavors supported
}
Edit the /etc/ganesha/ganesha.conf file to include the new export configuration file, for example:
...
%include "/etc/ganesha/exports/export.myvolume.conf"
-
Enable and start the nfs-ganesha service:
sudo systemctl enable --now nfs-ganesha
Note:
If the volume is created after you set up access using NFS, you must reload the nfs-ganesha service:
sudo systemctl reload-or-restart nfs-ganesha
-
Check that the volume is exported:
sudo showmount -e localhost
Export list for localhost:
/myvolume (everyone)
-
To connect to the volume from an NFS client, mount the NFS share, for example:
sudo mkdir /gluster-storage
sudo mount node1:/myvolume /gluster-storage
Any files created in this /gluster-storage directory on the NFS client are written to the volume.
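The export file shown above follows a simple template, so it can be generated from a couple of variables. The following is a minimal sketch, not part of the Gluster or NFS-Ganesha tooling; the VOLUME and EXPORT_ID values are placeholders, and OUT defaults to a temporary file rather than /etc/ganesha/exports so the script is safe to experiment with:

```shell
#!/bin/sh
# Sketch: generate a minimal NFS-Ganesha export stanza for a Gluster
# volume. VOLUME and EXPORT_ID are placeholder values; OUT defaults to
# a temporary file rather than /etc/ganesha/exports so the script is
# safe to experiment with.
VOLUME="${VOLUME:-myvolume}"
EXPORT_ID="${EXPORT_ID:-1}"
OUT="${OUT:-$(mktemp)}"

cat > "$OUT" <<EOF
EXPORT{
    Export_Id = ${EXPORT_ID};
    Path = "/${VOLUME}";
    FSAL {
        name = GLUSTER;
        hostname = "localhost";
        volume = "${VOLUME}";
    }
    Access_type = RW;
    Squash = No_root_squash;
    Disable_ACL = TRUE;
    Pseudo = "/${VOLUME}";
    Protocols = "3","4";
    Transports = "UDP","TCP";
    SecType = "sys";
}
EOF
echo "Wrote export configuration to $OUT"
```

On a real node you would set OUT to a file under /etc/ganesha/exports and then %include that file from /etc/ganesha/ganesha.conf, as described in the steps above.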
Accessing Volumes by Using the Gluster Native Client (FUSE)
You can use the Gluster native client on an Oracle Linux host to access a volume. The native client takes advantage of the File System in Userspace (FUSE) software interface through which you can mount a volume without requiring a kernel driver or module.
-
On the host where you intend to mount the volume, enable access to the Gluster Storage for Oracle Linux packages. For information on enabling access, see Enabling Access to the Gluster Storage for Oracle Linux Packages.
-
Install the Gluster native client packages:
If running Oracle Linux 8, type the following command:
sudo dnf install @glusterfs/client
Otherwise, type:
sudo yum install glusterfs glusterfs-fuse
-
Create the directory where you intend to mount the volume. For example:
sudo mkdir /gluster-storage
-
If you have configured TLS for a volume, you might need to perform additional steps before a client system can mount the volume. See Setting Up Transport Layer Security for more information. The following steps are required to complete client configuration for TLS:
-
Set up a certificate and private key on the client system. You can either use a CA signed certificate or create a self-signed certificate, as follows:
sudo openssl req -newkey rsa:2048 -nodes -keyout /etc/ssl/glusterfs.key -x509 -days 365 -out /etc/ssl/glusterfs.pem
-
Append the client certificate to the /etc/ssl/glusterfs.ca file on each node in the trusted server pool. Equally, ensure that the client has a copy of the /etc/ssl/glusterfs.ca file that includes either the CA certificate that signed each node's certificate, or that contains all the self-signed certificates for each node. Gluster performs mutual authentication. Therefore, you must configure both client and server nodes to validate each other's certificates.
-
If you enabled encryption on management traffic, you must enable this facility on the client system so that it can perform the initial mount. Gluster looks for a file at /var/lib/glusterfs/secure-access. This directory might not exist on a client system, so you might need to create it before touching the file:
sudo mkdir -p /var/lib/glusterfs
sudo touch /var/lib/glusterfs/secure-access
-
If the volume is already set up and running before you added the client certificate to /etc/ssl/glusterfs.ca, you must stop the volume, restart the Gluster service, and start the volume again for the new certificate to be registered:
sudo gluster volume stop myvolume
sudo systemctl restart glusterd
sudo gluster volume start myvolume
-
Mount the volume on the directory using the glusterfs mount type and by specifying a node within the pool along with the volume name. For example:
sudo mount -t glusterfs node1:myvolume /gluster-storage
If you have set up the volume to enable mounting a subdirectory, you can add the subdirectory name to the path on the Gluster file system:
sudo mount -t glusterfs node1:myvolume/subdirectory /gluster-storage
-
Check the permissions on the new mount to ensure that the appropriate users can read and write to the storage. For example:
sudo chmod 777 /gluster-storage
-
To make the mount permanent, edit the /etc/fstab file to include the mount. For example:
node1:/myvolume /gluster-storage glusterfs defaults,_netdev 0 0
If you're mounting a subdirectory on the volume, add the subdirectory name to the path on the Gluster file system. For example:
node1:/myvolume/subdirectory /gluster-storage glusterfs defaults,_netdev 0 0
If you have trouble mounting the volume, you can check the logs on the client system at /var/log/glusterfs/ to try to debug connection issues. For example, if TLS isn't configured and the server node is unable to validate the client, you might see an error similar to the following in the logs:
… error:14094418:SSL routines:ssl3_read_bytes:tlsv1 alert unknown ca
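A quick way to check for such TLS failures is to grep the client logs. The following is a self-contained sketch: the LOGDIR default and the seeded sample line are stand-ins so the snippet can be run anywhere; on a real client you would point LOGDIR at /var/log/glusterfs instead.

```shell
#!/bin/sh
# Sketch: scan a Gluster client log directory for TLS handshake errors.
# LOGDIR defaults to a temporary directory seeded with a sample log line
# so the sketch is self-contained; on a real client, set
# LOGDIR=/var/log/glusterfs instead.
LOGDIR="${LOGDIR:-$(mktemp -d)}"
if [ -z "$(ls -A "$LOGDIR" 2>/dev/null)" ]; then
  echo 'error:14094418:SSL routines:ssl3_read_bytes:tlsv1 alert unknown ca' \
    > "$LOGDIR/gluster-storage.log"
fi
# Show the most recent TLS-related errors, if any.
grep -Rh "SSL routines" "$LOGDIR" | tail -n 5
```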
Accessing Volumes by Using Samba
You can expose volumes using the Common Internet File System (CIFS) or Server Message Block (SMB) by using Samba. This file sharing service is commonly used on Microsoft Windows systems.
Gluster provides hooks to preconfigure and export volumes automatically using a Samba Virtual File System (VFS) module plugin. This reduces the complexity of configuring Samba to export the shares and also means that you don't have to premount the volumes using the FUSE client, resulting in some performance gains. The hooks are triggered every time a volume is started, so Samba configuration is updated the moment a volume is started within Gluster.
For more information on Samba, see Oracle Linux 9: Managing Shared File Systems or Oracle Linux 8: Managing Shared File Systems.
Setting Up the Volume for Samba Access
The following procedure sets up the nodes in the trusted storage pool to enable access to a volume by a Samba client. To use this service, you must ensure that both the samba and samba-vfs-glusterfs packages are installed on any of the nodes in the pool where you intend a client to connect to the volume using Samba.
On each node in the trusted storage pool on which you want to enable Samba access:
-
Install the Samba packages for Gluster:
sudo yum install samba samba-vfs-glusterfs
-
If you're running a firewall service, enable access to Samba on the node. For example:
sudo firewall-cmd --permanent --add-service=samba
sudo firewall-cmd --reload
-
Enable and start the Samba service:
sudo systemctl enable --now smb
-
(Optional) If you don't have an authentication system configured (for example, an LDAP server), you can create a Samba user to enable access to the Samba share from clients. Create this user on all nodes set up to export the Samba share. For example:
sudo adduser myuser
sudo smbpasswd -a myuser
New SMB password:
Retype new SMB password:
Added user myuser.
Restart the Samba service:
sudo systemctl restart smb
-
(Optional) To grant guest access to a Samba share, where no authentication is required, add a new line containing map to guest = Bad User to the [global] section of the /etc/samba/smb.conf file on each node set up to export the Samba share. For example:
[global]
        workgroup = SAMBA
        security = user
        passdb backend = tdbsam
        printing = cups
        printcap name = cups
        load printers = yes
        cups options = raw
        map to guest = Bad User
Granting guest access also requires that the [gluster-volume_name] section contain the guest ok = yes option, which is set automatically by the Gluster hook scripts in the next step.
Restart the Samba service:
sudo systemctl restart smb
-
If you have a running volume, stop it. Then enable either SMB or CIFS on the volume and start it again. On any node in the trusted storage pool, run:
sudo gluster volume stop myvolume
sudo gluster volume set myvolume user.smb enable
sudo gluster volume start myvolume
Note that when setting the SMB or CIFS option on the volume, you can use either user.smb or user.cifs to enable the type of export you require. If both options are enabled, user.smb takes precedence.
When you start a volume, a Gluster hook is triggered to automatically add a configuration entry for the volume to the /etc/samba/smb.conf file on each node, and to reload the Samba service, assuming that the user.smb or user.cifs option is set for the volume. This script generates a Samba configuration entry similar to the following:
[gluster-myvolume]
        comment = For samba share of volume myvolume
        vfs objects = glusterfs
        glusterfs:volume = myvolume
        glusterfs:logfile = /var/log/samba/glusterfs-myvolume.%M.log
        glusterfs:loglevel = 7
        path = /
        read only = no
        guest ok = yes
        kernel share modes = no
Note:
The value of the [gluster-myvolume] entry sets the name you use to connect to the Samba share in the connection string.
-
(Optional) If you don't want Gluster to automatically configure Samba to export shares for volumes, remove or rename the hook scripts that control this behavior. On each node on which you want to disable the Samba shares, rename the hook scripts, for example:
sudo rename S30 disabled-S30 $(find /var/lib/glusterd -type f -name 'S30samba*')
To reenable the hooks, you can run:
sudo rename disabled-S30 S30 $(find /var/lib/glusterd -type f -name '*S30samba*')
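The rename step above can be exercised safely against a throwaway copy of the hook tree before running it on a real node. In this sketch, HOOKROOT defaults to a temporary directory seeded with one dummy hook script (on a real node the hooks live under /var/lib/glusterd, and the hook file name here is illustrative):

```shell
#!/bin/sh
# Sketch: disable the Samba hook scripts by renaming them, run against a
# throwaway hook tree. HOOKROOT defaults to a temporary directory with
# one dummy hook; on a real node the hooks are under /var/lib/glusterd.
HOOKROOT="${HOOKROOT:-$(mktemp -d)}"
mkdir -p "$HOOKROOT/hooks/1/start/post"
touch "$HOOKROOT/hooks/1/start/post/S30samba-start.sh"

# Disable: prefix each S30samba* hook with "disabled-".
find "$HOOKROOT" -type f -name 'S30samba*' | while read -r f; do
  mv "$f" "$(dirname "$f")/disabled-$(basename "$f")"
done

# List the now-disabled hooks.
find "$HOOKROOT" -type f -name 'disabled-S30samba*'
```

Reenabling is the same loop in reverse: strip the disabled- prefix from each file name.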
Testing SMB Access to a Volume
The following procedure discusses testing SMB access to a volume that has been set up to export a Samba share. You can test SMB access to the volume from an Oracle Linux host. This host doesn't need to be part of the Gluster pool.
-
On an Oracle Linux host, install the Samba client package:
sudo yum install samba-client
-
Use the smbclient command to list Samba shares on a node in the trusted storage pool where you set up Samba. For example:
sudo smbclient -N -U% -L node1
To look directly at the contents of the volume, you can do:
sudo smbclient -N -U% //node1/gluster-myvolume -c ls
In this command, you specify the Samba share name for the volume. This name can be found on a host where you set up the Samba share, in the /etc/samba/smb.conf file. Typically, the Samba share name is gluster-volume_name.
Testing CIFS Access to a Volume
The following procedure discusses testing CIFS access to a volume that has been set up to export a Samba share. You can test CIFS access to the volume from an Oracle Linux host. This host doesn't need to be part of the Gluster pool.
To test access to the volume using CIFS:
-
On an Oracle Linux host, install the cifs-utils package:
sudo yum install cifs-utils
-
Create a mount directory where you intend to mount the volume. For example:
sudo mkdir /gluster-storage
-
Mount the volume on the directory using the cifs mount type and by specifying a node within the pool, along with the Samba share name for the volume. This name can be found on a host where you set up the Samba share, in the /etc/samba/smb.conf file. Typically, the Samba share name is gluster-volume_name. For example:
sudo mount -t cifs -o guest //node1/gluster-myvolume /gluster-storage
If you have set up the volume to enable mounting a subdirectory, you can add the subdirectory name to the path on the Gluster file system:
sudo mount -t cifs -o guest //node1/gluster-myvolume/subdirectory /gluster-storage
To pass authentication credentials to the Samba share, first add them to a local file. In this example, the credentials are saved to the file /credfile.
username=value
password=value
Set the permissions on the credentials file so other users can't access it.
sudo chmod 600 /credfile
You can then use the credentials file to connect to the Samba share, for example:
sudo mount -t cifs -o credentials=/credfile //node1/gluster-myvolume /gluster-storage
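An alternative to running chmod after writing the credentials file is to create it with a restrictive umask so it is never world-readable, even briefly. A minimal sketch (the file path, username, and password are placeholders, not values from this document):

```shell
#!/bin/sh
# Sketch: create a Samba credentials file that is never world-readable,
# by setting a restrictive umask before writing rather than running
# chmod afterwards. CREDFILE and the credential values are placeholders.
CREDFILE="${CREDFILE:-$(mktemp -u)}"
(
  umask 177   # new files are created with mode 600
  cat > "$CREDFILE" <<'EOF'
username=myuser
password=mypassword
EOF
)
ls -l "$CREDFILE"
```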
-
Check the permissions on the new mount to ensure that the appropriate users can read and write to the storage. For example:
sudo chmod 777 /gluster-storage
Accessing the Volume from Microsoft Windows
On a Microsoft Windows host, you can mount the volume using the Samba share. By default, the Samba share is available in the Workgroup named SAMBA (as defined in the /etc/samba/smb.conf file on Samba share nodes).
You can map the volume by mapping a network drive using Windows Explorer using the format \\node\volume, for example:
\\node1\gluster-myvolume
Alternatively, you can map a new drive using the Windows command line. Start the Command Prompt. Enter a command similar to the following:
net use z: \\node1\gluster-myvolume