3 Using Gluster Storage
Important:
The software described in this documentation is either in Extended Support or Sustaining Support. See Oracle Open Source Support Policies for more information.
We recommend that you upgrade the software described in this documentation as soon as possible.
This chapter discusses how to install and use the Gluster Container Storage Interface module to set up dynamically provisioned persistent storage for Kubernetes applications using Gluster Storage for Oracle Linux and Heketi in Oracle Cloud Native Environment.
Prerequisites
You need to have a Gluster Storage for Oracle Linux cluster set up and ready to use. You must also install Heketi in the Gluster cluster. The Platform API Server communicates with the Heketi API to provision and manage Gluster volumes.
You do not need to create any Gluster volumes as these are dynamically provisioned as required.
The basic requirements for setting up Gluster are:
- Install Gluster on each node in the Gluster cluster.
- Set up the cluster to access volumes using the Gluster native client (FUSE) method.
- Install Heketi and create the Gluster cluster.
- Make sure you can connect to the Heketi API from the operator node.
For information on installing and setting up Gluster Storage for Oracle Linux and Heketi, see Oracle® Linux: Gluster Storage for Oracle Linux User's Guide.
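The last prerequisite can be smoke-tested before anything is deployed: Heketi answers a GET request to /hello with a fixed greeting. The sketch below is not Heketi itself; it starts a local stand-in server purely to illustrate the request/response shape of that health check. Against a real deployment you would point the URL at the Heketi server instead.

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Stand-in for a real Heketi server: answers /hello the way Heketi does.
class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/hello":
            body = b"Hello from Heketi."
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the example quiet
        pass

# Bind to an ephemeral port and serve in the background.
server = HTTPServer(("127.0.0.1", 0), HelloHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_address[1]}/hello"
reply = urlopen(url).read().decode()
print(reply)  # -> Hello from Heketi.
server.shutdown()
```

Any HTTP 200 response with the greeting confirms the API endpoint is reachable, which is all the Platform API Server needs from the operator node.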
Deploying the Gluster Module
You can deploy all the modules required to set up Gluster storage for a Kubernetes cluster using a single olcnectl module create command. This method might be useful if you want to deploy the Gluster Container Storage Interface module at the same time as deploying a Kubernetes cluster.
If you have an existing deployment of the Kubernetes module, you can specify that instance when deploying the Gluster Container Storage Interface module.
This section guides you through installing each component required to deploy the Gluster Container Storage Interface module.
For the full list of the Platform CLI command options available when creating modules, see the olcnectl module create command in Platform Command-Line Interface.
To deploy the Gluster Container Storage Interface module:
- If you do not already have an environment set up, create one into which the modules can be deployed. For information on setting up an environment, see Getting Started. The name of the environment in this example is myenvironment.
- If you do not already have a Kubernetes module set up or deployed, set one up. For information on adding a Kubernetes module to an environment, see Container Orchestration. The name of the Kubernetes module in this example is mycluster.
- If you do not already have a Helm module created and installed, create one. The Helm module in this example is named myhelm and is associated with the Kubernetes module named mycluster using the --helm-kubernetes-module option.

  olcnectl module create \
    --environment-name myenvironment \
    --module helm \
    --name myhelm \
    --helm-kubernetes-module mycluster
- If you are deploying a new Helm module, use the olcnectl module validate command to validate the Helm module can be deployed to the nodes. For example:

  olcnectl module validate \
    --environment-name myenvironment \
    --name myhelm
- If you are deploying a new Helm module, use the olcnectl module install command to install the Helm module. For example:

  olcnectl module install \
    --environment-name myenvironment \
    --name myhelm

  The Helm software packages are installed on the control plane nodes, and the Helm module is deployed into the Kubernetes cluster.
- Create a Gluster Container Storage Interface module and associate it with the Helm module named myhelm using the --gluster-helm-module option. In this example, the Gluster Container Storage Interface module is named mygluster.

  olcnectl module create \
    --environment-name myenvironment \
    --module gluster \
    --name mygluster \
    --gluster-helm-module myhelm \
    --gluster-server-url https://mygluster.example.com:8080
  The --module option sets the module type to create, which is gluster. You define the name of the Gluster Container Storage Interface module using the --name option, which in this case is mygluster.

  The --gluster-helm-module option sets the name of the Helm module. If there is an existing Helm module with the same name, the Platform API Server uses that instance of Helm.

  The --gluster-server-url option sets the location of the Heketi API server, which in this example is https://mygluster.example.com:8080. You do not need to include this option if Heketi is on the operator node and using HTTP, as the default for this option is http://127.0.0.1:8080.

  Tip:
  Make sure you can reach the Heketi API from the operator node using curl, for example:

  curl -w "\n" https://mygluster.example.com:8080/hello

  Or if Heketi is on the operator node using HTTP:

  curl -w "\n" http://127.0.0.1:8080/hello

  You should see the following returned:

  Hello from Heketi.
  If you do not include all the required options when adding the modules, you are prompted to provide them.

  There are some optional command options that you might need to include if you are not using the default values, such as --gluster-server-user and --gluster-secret-key.
- Use the olcnectl module validate command to validate the Gluster Container Storage Interface module can be deployed to the nodes. For example:

  olcnectl module validate \
    --environment-name myenvironment \
    --name mygluster
- Use the olcnectl module install command to install the Gluster Container Storage Interface module. For example:

  olcnectl module install \
    --environment-name myenvironment \
    --name mygluster
The Gluster Container Storage Interface module is deployed into the Kubernetes cluster.
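When Heketi authentication is enabled, the --gluster-server-user and --gluster-secret-key options supply the credentials used to sign API requests. Heketi authenticates each request with a JSON Web Token whose qsh claim hashes the request method and path. The sketch below (stdlib-only Python, with hypothetical user and key values) illustrates how such a token is assembled; it is an illustration of the signing scheme, not code from the module itself.

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    # JWT uses base64url encoding without padding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def heketi_token(user: str, key: str, method: str, path: str) -> str:
    # HS256-signed token in the shape Heketi expects: iss is the user,
    # qsh is a SHA-256 hex digest of "<METHOD>&<PATH>".
    header = {"alg": "HS256", "typ": "JWT"}
    now = int(time.time())
    claims = {
        "iss": user,
        "iat": now,
        "exp": now + 600,
        "qsh": hashlib.sha256(f"{method}&{path}".encode()).hexdigest(),
    }
    signing_input = (
        f"{b64url(json.dumps(header).encode())}."
        f"{b64url(json.dumps(claims).encode())}"
    )
    sig = hmac.new(key.encode(), signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"

# Hypothetical credentials for illustration only.
token = heketi_token("admin", "My Secret", "GET", "/clusters")
print(token.count("."))  # -> 2 (a JWT has three dot-separated segments)
```

The token is sent as an Authorization: Bearer header; if the secret key passed to the module does not match the key configured in Heketi, provisioning requests are rejected.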
Verifying the Gluster Module Deployment
You can verify the Gluster Container Storage Interface module is deployed using the olcnectl module instances command on the operator node. For example:

  olcnectl module instances \
    --environment-name myenvironment
  INSTANCE              MODULE      STATE
  mycluster             kubernetes  installed
  myhelm                helm        installed
  mygluster             gluster     installed
  control1.example.com  node        installed
  ...
Note the entry for gluster in the MODULE column is in the installed state.
In addition, use the olcnectl module report command to review information about the module. For example, use the following command to review the Gluster Container Storage Interface module named mygluster in myenvironment:

  olcnectl module report \
    --environment-name myenvironment \
    --name mygluster \
    --children
For more information on the syntax for the olcnectl module report command, see Platform Command-Line Interface.
On a control plane node, you can also verify the StorageClass for the Glusterfs provisioner is created using the kubectl get sc command:

  kubectl get sc
  NAME                      PROVISIONER              RECLAIMPOLICY  VOLUMEBINDINGMODE ...
  hyperconverged (default)  kubernetes.io/glusterfs  Delete         Immediate         ...
In this case, the StorageClass is named hyperconverged, which is the default name.
You can get more details about the StorageClass using the kubectl describe sc command. For example:

  kubectl describe sc hyperconverged
  Name:                  hyperconverged
  IsDefaultClass:        Yes
  Annotations:           meta.helm.sh/release-name=mygluster,meta.helm.sh/release-namespace=defau...
  Provisioner:           kubernetes.io/glusterfs
  Parameters:            restauthenabled=true,resturl=http://...:8080,restuser=admin,secretName=a...
  AllowVolumeExpansion:  <unset>
  MountOptions:          <none>
  ReclaimPolicy:         Delete
  VolumeBindingMode:     Immediate
  Events:                <none>
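The PersistentVolumeClaim example in the next section relies on hyperconverged being the default StorageClass. If the cluster has more than one StorageClass, a claim can pin the class explicitly with the storageClassName field; a sketch, assuming the default class name shown above:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mygluster-pvc
spec:
  # Pin the Glusterfs-backed class rather than relying on the cluster default.
  storageClassName: hyperconverged
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```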
Creating a Gluster Volume
This section contains a basic test to verify you can create a Gluster volume to provide persistent storage to applications running on Kubernetes.
To create a test application to use Glusterfs:
- Create a Kubernetes PersistentVolumeClaim file. On a control plane node, create a file named pvc.yaml. Copy the following into the file.

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: mygluster-pvc
  spec:
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: 1Gi
- Create the Kubernetes PersistentVolumeClaim.

  kubectl apply -f pvc.yaml
  persistentvolumeclaim/mygluster-pvc created
- You can see the PersistentVolumeClaim is created using the kubectl get pvc command:

  kubectl get pvc
  NAME           STATUS  VOLUME        CAPACITY  ACCESS MODES  STORAGECLASS    AGE
  mygluster-pvc  Bound   pvc-59f70...  1Gi       RWX           hyperconverged  18s

  You can get more details about the PersistentVolumeClaim using the kubectl describe pvc command. For example:

  kubectl describe pvc mygluster-pvc
  Name:          mygluster-pvc
  Namespace:     default
  StorageClass:  hyperconverged
  Status:        Bound
  Volume:        pvc-59f7047b-9287-4163-9cff-c669cfbd4970
  Labels:        <none>
  Annotations:   pv.kubernetes.io/bind-completed: yes
                 pv.kubernetes.io/bound-by-controller: yes
                 volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/glusterfs
  Finalizers:    [kubernetes.io/pvc-protection]
  Capacity:      1Gi
  Access Modes:  RWX
  VolumeMode:    Filesystem
  Used By:       <none>
  Events:
    Type    Reason                 Age  From                         Message
    ----    ------                 ---  ----                         -------
    Normal  ProvisioningSucceeded  73s  persistentvolume-controller  Successfully provi...
- Create a Kubernetes application that uses the PersistentVolumeClaim. Create a file named nginx.yaml and copy the following into the file.

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    labels:
      run: mynginx
    name: mynginx
  spec:
    replicas: 1
    selector:
      matchLabels:
        run: mynginx
    template:
      metadata:
        labels:
          run: mynginx
      spec:
        containers:
        - image: container-registry.oracle.com/olcne/nginx:1.17.7
          name: mynginx
          ports:
          - containerPort: 80
          volumeMounts:
          - name: nginx-pvc
            mountPath: /usr/share/nginx/html
        volumes:
        - name: nginx-pvc
          persistentVolumeClaim:
            claimName: mygluster-pvc
- Start the application:

  kubectl apply -f nginx.yaml
  deployment.apps/mynginx created
- You can see the application is running using the kubectl get deployment command:

  kubectl get deployment
  NAME     READY  UP-TO-DATE  AVAILABLE  AGE
  mynginx  1/1    1           1          16s
- You can see the application is using the PersistentVolumeClaim to provide persistent storage on Glusterfs using the kubectl describe deployment command:

  kubectl describe deployment mynginx
  ...
  Pod Template:
    Labels:  run=mynginx
    Containers:
     mynginx:
      Image:        container-registry.oracle.com/olcne/nginx:1.17.7
      Port:         80/TCP
      Host Port:    0/TCP
      Environment:  <none>
      Mounts:       /usr/share/nginx/html from nginx-pvc (rw)
    Volumes:
     nginx-pvc:
      Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the ...
      ClaimName:  mygluster-pvc
      ReadOnly:   false
- You can delete the test application using:

  kubectl delete deployment mynginx
  deployment.apps "mynginx" deleted
- You can delete the PersistentVolumeClaim using:

  kubectl delete pvc mygluster-pvc
  persistentvolumeclaim "mygluster-pvc" deleted
Removing the Gluster Module
You can remove a deployment of the Gluster Container Storage Interface module and leave the Kubernetes cluster in place. To do this, you remove the Gluster Container Storage Interface module from the environment.
Use the olcnectl module uninstall command to remove the Gluster Container Storage Interface module. For example, to uninstall the Gluster Container Storage Interface module named mygluster in the environment named myenvironment:

  olcnectl module uninstall \
    --environment-name myenvironment \
    --name mygluster
The Gluster Container Storage Interface module is removed from the environment.