Note:
- This tutorial is available in an Oracle-provided free lab environment.
- It uses example values for Oracle Cloud Infrastructure credentials, tenancy, and compartments. When completing your lab, substitute these values with ones specific to your cloud environment.
Use Gluster with Oracle Cloud Native Environment
Introduction
Using the Gluster Container Storage Interface module, Oracle Cloud Native Environment administrators can set up statically or dynamically provisioned persistent storage for Kubernetes applications using Gluster Storage.
The Gluster Container Storage Interface module creates a Kubernetes Glusterfs StorageClass provisioner to access existing storage on Gluster. Kubernetes uses the Glusterfs plug-in to provision Gluster volumes for use as Kubernetes PersistentVolumes. The Oracle Cloud Native Environment Platform API Server communicates with the Heketi API to provision and manage Gluster volumes using PersistentVolumeClaims. When a PersistentVolumeClaim is deleted, the corresponding Gluster volume is automatically destroyed.
In this example, we will create an integrated system where Kubernetes worker nodes provide persistent storage using Gluster on Oracle Cloud Native Environment.
Deprecation Notice for Release 1.6
Gluster Container Storage Interface Module
The Gluster Container Storage Interface module, used to install Gluster and set up Glusterfs, is deprecated. Future releases may remove the Gluster Container Storage Interface module.
This free lab environment deploys Release 1.6 and may not be updated due to this deprecation notice.
Objectives
In this lab, you will learn how to install and configure Gluster Storage for Oracle Linux on Oracle Cloud Native Environment to create storage volumes for Kubernetes applications.
Prerequisites
This section lists the host systems needed to perform the steps in this tutorial:
- 5 Oracle Linux systems to use as:
- Operator node (ocne-operator)
- Kubernetes control plane node (ocne-control01)
- 3 Kubernetes worker nodes (ocne-worker01, ocne-worker02, ocne-worker03)
- Each system should have a minimum of the following:
- Latest Oracle Linux 8 (x86_64) installed and running the Unbreakable Enterprise Kernel Release 6 (UEK R6)
- The pre-configured setup on these systems includes:
  - An oracle user account with sudo privileges
  - Passwordless SSH between each node
  - Additional block storage attached to each worker node for Gluster
  - Oracle Cloud Native Environment installed and configured
Set up Lab Environment
Note: When using the free lab environment, see Oracle Linux Lab Basics for connection and other usage instructions.
This lab involves multiple systems, requiring you to perform different steps on each. The recommended way to start is by opening five terminal windows or tabs and connecting to each node. This action prevents you from having to log in and out repeatedly. The nodes are:
- ocne-control01
- ocne-operator
- ocne-worker01
- ocne-worker02
- ocne-worker03
Important: The free lab environment deploys a fully installed Oracle Cloud Native Environment across the provided nodes. This deployment takes approximately 25-30 minutes to finish after launch. Therefore, you might want to step away while this runs and then return to complete the lab.
-
Open a terminal and connect via ssh to each node.
ssh oracle@<ip_address_of_ol_node>
Validate the Kubernetes Environment
-
(On the ocne-control01 node) Verify kubectl works.
kubectl get nodes
Example Output:
[oracle@ocne-control01 ~]$ kubectl get nodes
NAME             STATUS   ROLES           AGE   VERSION
ocne-control01   Ready    control-plane   12m   v1.25.11+1.el8
ocne-worker01    Ready    <none>          11m   v1.25.11+1.el8
ocne-worker02    Ready    <none>          11m   v1.25.11+1.el8
ocne-worker03    Ready    <none>          11m   v1.25.11+1.el8
Configure the Worker Nodes
Install and configure the Gluster service.
In this section, complete each of the tasks for each worker node:
- ocne-worker01
- ocne-worker02
- ocne-worker03
Note: This approach avoids repetition in the documentation because the required actions are identical on each node.
-
Install the Gluster yum repository configuration.
sudo dnf install -y oracle-gluster-release-el8
Note: This only applies when using Oracle Cloud instances.
Installing packages from the Oracle Gluster repository may result in the following error:
[oracle@ocne-worker01 ~]$ sudo dnf install -y glusterfs-server glusterfs-client
Oracle Linux 8 Gluster Appstream (x86_64)    0.0  B/s |   0  B     00:00
Errors during downloading metadata for repository 'ol8_gluster_appstream':
  - Curl error (6): Couldn't resolve host name for https://yum.eu-frankfurt-1.oracle.com/repo/OracleLinux/OL8/gluster/appstream/x86_64/repodata/repomd.xml [Could not resolve host: yum.eu-frankfurt-1.oracle.com]
Error: Failed to download metadata for repo 'ol8_gluster_appstream': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried
The error occurs due to Oracle Cloud attempting to use a repository that may be temporarily unavailable. Use the following steps to resolve the issue.
-
Modify the Oracle Gluster Repository file.
sudo sed -i 's/yum$ociregion/yum/g' /etc/yum.repos.d/oracle-gluster-ol8.repo
-
Retry the installation of the failed packages.
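Before editing the real repository file, you can rehearse the substitution on a throwaway copy. The baseurl line below is an assumed sample of what oracle-gluster-ol8.repo contains; the real file may differ slightly. Note that in a sed basic regular expression, the `$` inside `yum$ociregion` matches literally because it is not at the end of the pattern.

```shell
# Write an assumed sample baseurl line (single quotes keep $ociregion literal).
echo 'baseurl=https://yum$ociregion.$ocidomain/repo/OracleLinux/OL8/gluster/appstream/$basearch/' > /tmp/repo-sample
# Apply the same substitution used on the real repo file.
sed -i 's/yum$ociregion/yum/g' /tmp/repo-sample
# The host portion now resolves to the regionless yum server.
cat /tmp/repo-sample
```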
-
-
Install the Gluster software.
sudo dnf install -y glusterfs-server glusterfs-client
-
Configure the firewall to allow traffic on Gluster’s default ports.
sudo firewall-cmd --permanent --zone=trusted --add-source=10.0.0.0/24
sudo firewall-cmd --permanent --zone=trusted --add-service=glusterfs
sudo firewall-cmd --reload
-
Enable Gluster encryption.
Set up the Gluster environment with TLS to encrypt management traffic between Gluster nodes. Rather than creating new certificates, reuse the X.509 certificates provided by Oracle Cloud Native Environment.
sudo cp /etc/olcne/configs/certificates/production/ca.cert /etc/ssl/glusterfs.ca
sudo cp /etc/olcne/configs/certificates/production/node.key /etc/ssl/glusterfs.key
sudo cp /etc/olcne/configs/certificates/production/node.cert /etc/ssl/glusterfs.pem
sudo touch /var/lib/glusterd/secure-access
-
Enable and start the Gluster service.
sudo systemctl enable --now glusterd.service
Configure the Control Plane Node
Configure Heketi, which will use the Gluster nodes to provision storage.
In this section, complete each task on the control plane node, ocne-control01.
-
Install the Gluster yum repository configuration.
sudo dnf install -y oracle-gluster-release-el8
-
(Optional) Modify the Oracle Gluster Repository file.
Note: Only run this command if you previously modified the oracle-gluster-ol8.repo file on the worker nodes.
sudo sed -i 's/yum$ociregion/yum/g' /etc/yum.repos.d/oracle-gluster-ol8.repo
-
Install the Heketi software.
sudo dnf install -y heketi heketi-client
-
Allow the required port through the firewall for Heketi.
sudo firewall-cmd --permanent --zone=trusted --add-source=10.0.0.0/24
sudo firewall-cmd --permanent --zone=trusted --add-port=8080/tcp
sudo firewall-cmd --reload
-
Create the SSH authentication key for Heketi to use when communicating with worker nodes.
sudo ssh-keygen -m PEM -t rsa -b 4096 -q -f /etc/heketi/heketi_key -N ''
sudo cat /etc/heketi/heketi_key.pub | ssh -t -o StrictHostKeyChecking=no ocne-worker01 "sudo tee -a /root/.ssh/authorized_keys" > /dev/null 2>&1
sudo cat /etc/heketi/heketi_key.pub | ssh -t -o StrictHostKeyChecking=no ocne-worker02 "sudo tee -a /root/.ssh/authorized_keys" > /dev/null 2>&1
sudo cat /etc/heketi/heketi_key.pub | ssh -t -o StrictHostKeyChecking=no ocne-worker03 "sudo tee -a /root/.ssh/authorized_keys" > /dev/null 2>&1
sudo chown heketi:heketi /etc/heketi/heketi_key*
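If you want to confirm the key format first, the same options can be rehearsed against a throwaway path under /tmp (an assumed scratch location, not part of the lab). The `-m PEM` flag produces a traditional PEM-encoded RSA key, which is the format Heketi's ssh executor expects.

```shell
# Clean up any previous rehearsal key so ssh-keygen does not prompt to overwrite.
rm -f /tmp/demo_heketi_key /tmp/demo_heketi_key.pub
# Generate a demo key the same way (2048 bits here only to keep the rehearsal fast).
ssh-keygen -m PEM -t rsa -b 2048 -q -f /tmp/demo_heketi_key -N ''
# A PEM-format key starts with the traditional RSA header.
head -1 /tmp/demo_heketi_key
```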
-
Configure the heketi.json file.
Warning: The sed commands below only work the first time they run against an unmodified heketi.json file.
Make a backup of the heketi.json file.
sudo cp /etc/heketi/heketi.json /etc/heketi/heketi.json.bak
Update the use_auth setting to true.
sudo sed -i 's/"use_auth": false/"use_auth": true/' /etc/heketi/heketi.json
Define a passphrase for the admin and user accounts.
sudo sed -i '0,/"My Secret"/{s/"My Secret"/"Admin Password"/}' /etc/heketi/heketi.json
sudo sed -i '0,/"My Secret"/{s/"My Secret"/"User Password"/}' /etc/heketi/heketi.json
Change the Glusterfs executor from mock to ssh.
sudo sed -i 's/"executor": "mock"/"executor": "ssh"/' /etc/heketi/heketi.json
Define the sshexec properties.
sudo sed -i 's+"path/to/private_key"+"/etc/heketi/heketi_key"+' /etc/heketi/heketi.json
sudo sed -i 's+"sshuser"+"root"+' /etc/heketi/heketi.json
sudo sed -i 's+"Optional: ssh port. Default is 22"+"22"+' /etc/heketi/heketi.json
sudo sed -i 's+"Optional: Specify fstab file on node. Default is /etc/fstab"+"/etc/fstab"+' /etc/heketi/heketi.json
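Because these sed edits only work against an unmodified file, it can be worth rehearsing them on a throwaway snippet first. The JSON below is an assumed, abbreviated shape of heketi.json, not the real file. Note the `0,/"My Secret"/` address: it replaces only the first remaining occurrence, so running the two passphrase edits in order sets the admin key first, then the user key.

```shell
# Assumed abbreviated heketi.json shape for rehearsal only.
cat > /tmp/heketi-sample.json << 'EOF'
{
  "use_auth": false,
  "jwt": {
    "admin": { "key": "My Secret" },
    "user": { "key": "My Secret" }
  },
  "glusterfs": {
    "executor": "mock",
    "sshexec": {
      "keyfile": "path/to/private_key",
      "user": "sshuser"
    }
  }
}
EOF
# Same substitutions as in the steps above, applied to the sample.
sed -i 's/"use_auth": false/"use_auth": true/' /tmp/heketi-sample.json
sed -i '0,/"My Secret"/{s/"My Secret"/"Admin Password"/}' /tmp/heketi-sample.json
sed -i '0,/"My Secret"/{s/"My Secret"/"User Password"/}' /tmp/heketi-sample.json
sed -i 's/"executor": "mock"/"executor": "ssh"/' /tmp/heketi-sample.json
sed -i 's+"path/to/private_key"+"/etc/heketi/heketi_key"+' /tmp/heketi-sample.json
sed -i 's+"sshuser"+"root"+' /tmp/heketi-sample.json
# Inspect the result: admin and user keys differ, executor is ssh.
cat /tmp/heketi-sample.json
```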
-
Enable the service.
sudo systemctl enable --now heketi.service
-
Validate Heketi is working.
curl -w "\n" localhost:8080/hello
Example Output:
[oracle@ocne-control01 ~]$ curl -w "\n" localhost:8080/hello
Hello from Heketi.
-
Create a Heketi topology file.
This file declares the hostnames to use, the host IP addresses, and the block devices available to Gluster.
cat << 'EOF' | sudo tee /etc/heketi/topology.json > /dev/null
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [ "ocne-worker01" ],
              "storage": [ "10.0.0.160" ]
            },
            "zone": 1
          },
          "devices": [ "/dev/sdb" ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [ "ocne-worker02" ],
              "storage": [ "10.0.0.161" ]
            },
            "zone": 1
          },
          "devices": [ "/dev/sdb" ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [ "ocne-worker03" ],
              "storage": [ "10.0.0.162" ]
            },
            "zone": 1
          },
          "devices": [ "/dev/sdb" ]
        }
      ]
    }
  ]
}
EOF
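For larger clusters, you may prefer to generate the topology file rather than hand-edit it. The sketch below writes an equivalent file to /tmp for review (the hostnames, IPs, and /dev/sdb device mirror the example lab values and are assumptions to substitute), then checks that the result is well-formed JSON, assuming python3 is available.

```shell
{
  printf '{ "clusters": [ { "nodes": [\n'
  sep=''
  # Each pair is manage-hostname:storage-ip; substitute your own values.
  for pair in ocne-worker01:10.0.0.160 ocne-worker02:10.0.0.161 ocne-worker03:10.0.0.162; do
    host=${pair%%:*}
    ip=${pair##*:}
    # Emit one node entry; $sep prefixes a comma from the second entry on.
    printf '%s{ "node": { "hostnames": { "manage": [ "%s" ], "storage": [ "%s" ] }, "zone": 1 }, "devices": [ "/dev/sdb" ] }\n' "$sep" "$host" "$ip"
    sep=','
  done
  printf '] } ] }\n'
} > /tmp/topology.json
# Validate the generated file before loading it with heketi-cli.
python3 -m json.tool /tmp/topology.json > /dev/null && echo "topology.json: valid JSON"
```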
-
Load the Heketi topology file.
Use the admin username and password defined in the heketi.json file earlier in this section.
heketi-cli --user admin --secret "Admin Password" topology load --json=/etc/heketi/topology.json
Example Output:
[oracle@ocne-control01 ~]$ heketi-cli --user admin --secret "Admin Password" topology load --json=/etc/heketi/topology.json
Creating cluster ... ID: 523081a5a77aa16ef0ea98d9be5720fd
    Allowing file volumes on cluster.
    Allowing block volumes on cluster.
    Creating node ocne-worker01 ... ID: a3213791a722b8a8843e595fd5f631f4
        Adding device /dev/sdb ... OK
    Creating node ocne-worker02 ... ID: 403ff6243ac8e3f7f3bf99e9532f18f6
        Adding device /dev/sdb ... OK
    Creating node ocne-worker03 ... ID: 60e28bdc3fa17d76846aa5e8ea7c25e5
        Adding device /dev/sdb ... OK
-
List the nodes of known clusters.
heketi-cli --secret "Admin Password" --user admin node list
Example Output:
[oracle@ocne-control01 ~]$ heketi-cli --secret "Admin Password" --user admin node list
Id:403ff6243ac8e3f7f3bf99e9532f18f6     Cluster:523081a5a77aa16ef0ea98d9be5720fd
Id:60e28bdc3fa17d76846aa5e8ea7c25e5     Cluster:523081a5a77aa16ef0ea98d9be5720fd
Id:a3213791a722b8a8843e595fd5f631f4     Cluster:523081a5a77aa16ef0ea98d9be5720fd
Install the Gluster Container Storage Interface Module
Gluster and Heketi are now available on Oracle Cloud Native Environment and ready to use with the Gluster Container Storage Interface module.
On the ocne-operator node:
-
Write the global arguments for an environment to a local configuration file.
This action allows running the olcnectl commands without passing the global arguments for every Platform API Server call.
olcnectl environment update olcne --config-file myenvironment.yaml --update-config
-
Create the Gluster module.
olcnectl module create --environment-name myenvironment --module gluster --name mygluster --gluster-kubernetes-module mycluster --gluster-server-url http://ocne-control01:8080
-
Validate the modules.
olcnectl module validate --environment-name myenvironment --name mygluster
-
Install the modules.
Note: This may take a few minutes to complete.
olcnectl module install --environment-name myenvironment --name mygluster
-
Show the installed modules.
olcnectl module instances --environment-name myenvironment
Example Output:
[oracle@ocne-operator ~]$ olcnectl module instances --environment-name myenvironment
INFO[19/07/23 14:45:28] Starting local API server
INFO[19/07/23 14:45:29] Starting local API server
INSTANCE             MODULE      STATE
mycluster            kubernetes  installed
mygluster            gluster     installed
ocne-control01:8090  node        installed
ocne-worker01:8090   node        installed
ocne-worker02:8090   node        installed
ocne-worker03:8090   node        installed
Create Gluster Volumes
In this section, complete each task on the control plane node, ocne-control01.
-
Create some example PersistentVolumeClaims.
for x in {0..5}; do
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-pvc-${x}
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF
done
Example Output:
persistentvolumeclaim/gluster-pvc-0 created
persistentvolumeclaim/gluster-pvc-1 created
persistentvolumeclaim/gluster-pvc-2 created
persistentvolumeclaim/gluster-pvc-3 created
persistentvolumeclaim/gluster-pvc-4 created
persistentvolumeclaim/gluster-pvc-5 created
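If you prefer to review manifests before applying them (an assumed workflow preference, not part of the lab), the same loop can render everything to a file first; you would then apply it with `kubectl apply -f /tmp/gluster-pvcs.yaml` once satisfied.

```shell
# Render six PVC manifests into one multi-document YAML file under /tmp.
for x in 0 1 2 3 4 5; do
cat << EOF
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-pvc-${x}
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF
done > /tmp/gluster-pvcs.yaml
# Count the rendered claims (expect 6).
grep -c 'kind: PersistentVolumeClaim' /tmp/gluster-pvcs.yaml
```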
-
Verify that the PersistentVolumeClaims are dynamically bound to Gluster-backed volumes.
Note: Repeat the command below a few times, as completing the assignment may take a few moments.
kubectl get pvc
Example Output:
[oracle@ocne-control01 ~]$ kubectl get pvc
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
gluster-pvc-0   Bound    pvc-6f38ff62-aea2-41b1-836a-3fc9796f067f   1Gi        RWX            hyperconverged   2m20s
gluster-pvc-1   Bound    pvc-2877b912-f8f0-403a-abc7-5c375d7dcd94   1Gi        RWX            hyperconverged   2m20s
gluster-pvc-2   Bound    pvc-9fd2d0e8-266e-4b7a-a38f-28ffb5a9ce53   1Gi        RWX            hyperconverged   2m20s
gluster-pvc-3   Bound    pvc-f656153c-af56-4eb5-a3c9-2d718ca0c79c   1Gi        RWX            hyperconverged   2m19s
gluster-pvc-4   Bound    pvc-80f7e971-527e-416f-9d72-836c5b831731   1Gi        RWX            hyperconverged   2m19s
gluster-pvc-5   Bound    pvc-864de23d-e030-44db-b145-a6e626090d5a   1Gi        RWX            hyperconverged   2m19s
-
Create an example Deployment that uses a PersistentVolumeClaim defined above.
cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: demo-nginx
  name: demo-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      run: demo-nginx
  template:
    metadata:
      labels:
        run: demo-nginx
    spec:
      containers:
      - image: nginx
        name: demo-nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: demo-nginx-pvc
          mountPath: /usr/share/nginx/html
      volumes:
      - name: demo-nginx-pvc
        persistentVolumeClaim:
          claimName: gluster-pvc-1
EOF
-
Ensure that the example Gluster-backed nginx pod has started successfully.
kubectl get pod -l run=demo-nginx
Example Output:
[oracle@ocne-control01 ~]$ kubectl get pod -l run=demo-nginx NAME READY STATUS RESTARTS AGE demo-nginx-9b86b6cb-hvdcr 1/1 Running 0 13s
-
Verify that the volume used is Glusterfs.
Important: Replace the placeholder in the command with the pod name identified in the previous step.
kubectl exec demo-nginx-<replace> -ti -- mount -t fuse.glusterfs
Example Output:
[oracle@ocne-control01 ~]$ kubectl exec demo-nginx-9b86b6cb-hvdcr -ti -- mount -t fuse.glusterfs
10.0.0.162:vol_9d2f56d3d5b7b31dc92d7f60e302dbc0 on /usr/share/nginx/html type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
At this point, the Kubernetes environment creates Gluster volumes when creating a PersistentVolumeClaim and removes them when deleting the PersistentVolumeClaim.
Summary
When making a PersistentVolumeClaim, the Kubernetes API server running on the control plane node requests a volume from Heketi, which also runs on the control plane node. Heketi creates a Gluster volume replicated across the three Gluster nodes (ocne-worker01, ocne-worker02, and ocne-worker03) and responds to the Kubernetes API server with the volume details. When directed to start the pod, the worker mounts the Gluster filesystem and presents it to the pod.
Note: Gluster volumes created by Heketi do not have I/O encryption enabled. The above configuration only encrypts management traffic.
(Optional) Enabling TLS in Heketi
When deploying in production, it may be necessary to encrypt communications between the Kubernetes API server and Heketi. The sections above configured Heketi and the StorageClass to use HTTP; this section describes how to update the deployment to HTTPS.
In this section, complete each task on the control plane node, ocne-control01.
-
Copy OCNE certificates to the Heketi folder.
sudo cp /etc/olcne/configs/certificates/production/node* /etc/heketi/
sudo chown heketi:heketi /etc/heketi/node*
-
Update the heketi.json file.
Insert the following after the port definition.
cat << EOF | sudo sed -i '/"port": "8080",/ r /dev/stdin' /etc/heketi/heketi.json
  "_enable_tls_comment": "Enable TLS in Heketi Server",
  "enable_tls": true,
  "_cert_file_comment": "Path to a valid certificate file",
  "cert_file": "/etc/heketi/node.cert",
  "_key_file_comment": "Path to a valid private key file",
  "key_file": "/etc/heketi/node.key",
EOF
Example Output:
{
  "_port_comment": "Heketi Server Port Number",
  "port": "8080",
  "_enable_tls_comment": "Enable TLS in Heketi Server",
  "enable_tls": true,
  "_cert_file_comment": "Path to a valid certificate file",
  "cert_file": "/etc/heketi/node.cert",
  "_key_file_comment": "Path to a valid private key file",
  "key_file": "/etc/heketi/node.key",
...
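The `r /dev/stdin` splice can be rehearsed on a throwaway snippet (an assumed, abbreviated shape of heketi.json) to see how sed inserts the piped lines immediately after the matching port line.

```shell
# Create a minimal sample file with the anchor line present.
printf '{\n  "port": "8080",\n  "use_auth": true\n}\n' > /tmp/tls-sample.json
# Splice the TLS keys in after the port line, exactly as in the step above.
cat << EOF | sed -i '/"port": "8080",/ r /dev/stdin' /tmp/tls-sample.json
  "enable_tls": true,
  "cert_file": "/etc/heketi/node.cert",
  "key_file": "/etc/heketi/node.key",
EOF
# The inserted keys now sit between "port" and "use_auth".
cat /tmp/tls-sample.json
```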
-
Restart the service.
sudo systemctl restart heketi.service
-
Trust the example Certificate Authority.
sudo cp /etc/olcne/configs/certificates/production/ca.cert /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust extract
-
Validate that Heketi is working over HTTPS.
curl -w "\n" https://localhost:8080/hello
Example Output:
Hello from Heketi
-
(Optional) Delete an Existing StorageClass Object
Note: Kubernetes does not permit updating the parameters of an existing StorageClass. If the hyperconverged StorageClass already exists, you must delete it before continuing.
kubectl delete storageclass hyperconverged
Example Output:
storageclass.storage.k8s.io "hyperconverged" deleted
-
Create the StorageClass object with an HTTPS
resturl
cat << EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hyperconverged
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "https://ocne-control01:8080"
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-admin"
EOF
Example Output:
storageclass.storage.k8s.io/hyperconverged created
-
To use further heketi-cli commands, you must first declare the HTTPS URL.
export HEKETI_CLI_SERVER=https://ocne-control01:8080
Heketi communications are now encrypted.
(Optional) Example Gluster Output
-
(Optional) (On control node) Define the Heketi server URL.
You must declare the updated URL if you previously completed the (Optional) Enabling TLS in Heketi step.
export HEKETI_CLI_SERVER=https://ocne-control01:8080
-
(On control node) List volumes.
heketi-cli --user admin --secret "Admin Password" volume list
Example Output:
[oracle@ocne-control01 ~]$ heketi-cli --user admin --secret "Admin Password" volume list
Id:00c4c28f2e711daa31233d11dc6d7ba2 Cluster:523081a5a77aa16ef0ea98d9be5720fd Name:vol_00c4c28f2e711daa31233d11dc6d7ba2
Id:10ec635cb65339542d8abc7b1a066b29 Cluster:523081a5a77aa16ef0ea98d9be5720fd Name:vol_10ec635cb65339542d8abc7b1a066b29
Id:30bc4be77b0550a9df43225b858e8ab7 Cluster:523081a5a77aa16ef0ea98d9be5720fd Name:vol_30bc4be77b0550a9df43225b858e8ab7
Id:948bbf1668b06424e9b1781a78919bab Cluster:523081a5a77aa16ef0ea98d9be5720fd Name:vol_948bbf1668b06424e9b1781a78919bab
Id:9d2f56d3d5b7b31dc92d7f60e302dbc0 Cluster:523081a5a77aa16ef0ea98d9be5720fd Name:vol_9d2f56d3d5b7b31dc92d7f60e302dbc0
Id:f6d04bb04c26ab1e937f248e5cfe4130 Cluster:523081a5a77aa16ef0ea98d9be5720fd Name:vol_f6d04bb04c26ab1e937f248e5cfe4130
-
(On control node) Show volume info.
Note: Change the volume id to the id of one of the volumes identified in the List volumes step.
heketi-cli --user admin --secret "Admin Password" volume info <replace>
Example Output:
[oracle@ocne-control01 ~]$ heketi-cli --user admin --secret "Admin Password" volume info 00c4c28f2e711daa31233d11dc6d7ba2
Name: vol_00c4c28f2e711daa31233d11dc6d7ba2
Size: 1
Volume Id: 00c4c28f2e711daa31233d11dc6d7ba2
Cluster Id: 523081a5a77aa16ef0ea98d9be5720fd
Mount: 10.0.0.161:vol_00c4c28f2e711daa31233d11dc6d7ba2
Mount Options: backup-volfile-servers=10.0.0.162,10.0.0.160
Block: false
Free Size: 0
Reserved Size: 0
Block Hosting Restriction: (none)
Block Volumes: []
Durability Type: replicate
Distribute Count: 1
Replica Count: 3
Snapshot Factor: 1.00
-
(On any worker node) Show the state of the Gluster volume from a worker node perspective.
Note: Change the volume name to the name of one of the volumes identified in the List volumes step.
sudo gluster volume status <replace>
Example Output:
[oracle@ocne-worker01 ~]$ sudo gluster volume status vol_10ec635cb65339542d8abc7b1a066b29
Status of volume: vol_10ec635cb65339542d8abc7b1a066b29
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.0.0.161:/var/lib/heketi/mounts/vg_
d30f47edef2f0fed556c67c8599f519e/brick_d6b6
9ba067b01110acaafd1b37e8b951/brick          49152     0          Y       97976
Brick 10.0.0.160:/var/lib/heketi/mounts/vg_
6526c527b6cc74975878c83bbd538a53/brick_ad22
a4e50a73cdeb11f8a11d5715174f/brick          49153     0          Y       98121
Brick 10.0.0.162:/var/lib/heketi/mounts/vg_
df7540be1a1b309f99f29d4451c6f960/brick_d5b1
6c5a34acc54fe9b7132d81b51935/brick          49154     0          Y       98120
Self-heal Daemon on localhost               N/A       N/A        Y       98111
Self-heal Daemon on ocne-worker02.pub.linux
virt.oraclevcn.com                          N/A       N/A        Y       97993
Self-heal Daemon on 10.0.0.162              N/A       N/A        Y       98109

Task Status of Volume vol_10ec635cb65339542d8abc7b1a066b29
------------------------------------------------------------------------------
There are no active volume tasks
Want to Learn More?
- Glusterfs StorageClass
- Gluster: Setting up Transport Layer Security
- Gluster: Setting up Heketi
- Gluster: Setting up Volumes
For More Information
- Oracle Cloud Native Environment Documentation
- Oracle Cloud Native Environment Training
- Oracle Linux Training Station
More Learning Resources
Explore other labs on docs.oracle.com/learn or access more free learning content on the Oracle Learning YouTube channel. Additionally, visit education.oracle.com/learning-explorer to become an Oracle Learning Explorer.
For product documentation, visit Oracle Help Center.
Use Gluster with Oracle Cloud Native Environment
F57351-02
July 2023
Copyright © 2022, Oracle and/or its affiliates.