3 Using Ceph Storage
This chapter discusses how to use the Rook module to set up dynamically provisioned persistent storage using Ceph for Kubernetes applications in Oracle Cloud Native Environment.
Creating a CephCluster
This section contains a basic example of how to create a CephCluster.
If you don't create a Ceph cluster using a Rook configuration file, you can create one or more clusters after the Rook module is deployed. You do this using the kubectl command to deploy a CephCluster custom resource.
For example, save the following CephCluster resource to a YAML file:
---
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook
spec:
  cephVersion:
    image: container-registry.oracle.com/olcne/ceph:v17.2.5
    imagePullPolicy: Always
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3
    allowMultiplePerNode: false
  dashboard:
    enabled: false
  storage:
    useAllNodes: true
    useAllDevices: false
    deviceFilter: sdb
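The storage settings tell Rook to use all nodes, but only devices named sdb, when creating Ceph OSDs. Before applying the file, you might want to confirm each node really has a raw sdb device for Rook to claim. This is a hypothetical check, assuming sdb is the disk you set aside for Ceph:
# Run on each Kubernetes node. The device must have no partitions or
# filesystem, otherwise Rook skips it when creating OSDs.
lsblk --output NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT /dev/sdb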
On a control plane node, use the kubectl apply command to create the CephCluster with the file:
kubectl apply -f filename.yaml
The CephCluster is created. You can verify it's created using:
kubectl get cephcluster --namespace rook
The output looks similar to:
NAME        DATADIRHOSTPATH   MONCOUNT   AGE     PHASE   MESSAGE                        HEALTH ...
rook-ceph   /var/lib/rook     3          4m29s   Ready   Cluster created successfully   HEALTH ...
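You can also query Ceph itself for cluster health. This isn't part of the module deployment: it's a sketch that assumes you have deployed the optional Rook toolbox (the rook-ceph-tools deployment described in the upstream Rook documentation) into the rook namespace:
kubectl --namespace rook exec deploy/rook-ceph-tools -- ceph status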
Creating CephBlockPool Storage
This section contains a basic test to verify you can create and use CephBlockPool storage to provide persistent block storage to applications running on Kubernetes.
If you don't create a CephBlockPool using a Rook configuration file, you can create one or more after the Rook module is deployed. You do this using the kubectl command to deploy a CephBlockPool custom resource.
For example, save the following CephBlockPool resource to a YAML file:
---
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook
spec:
  failureDomain: host
  replicated:
    size: 3
    requireSafeReplicaSize: true
On a control plane node, use the kubectl apply command to create the CephBlockPool with the file:
kubectl apply -f filename.yaml
The CephBlockPool is created. You can verify it's created using:
kubectl get cephblockpool --namespace rook
The output looks similar to:
NAME          PHASE
replicapool   Ready
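As with the cluster itself, you can confirm the pool at the Ceph level if the optional Rook toolbox is deployed. A sketch; the pool created by the resource above appears in the listing as replicapool:
kubectl --namespace rook exec deploy/rook-ceph-tools -- ceph osd pool ls detail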
If you don't create a StorageClass for the CephBlockPool using a Rook configuration file, you can create one after the Rook module is deployed.
For example, save the following StorageClass to a YAML file:
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: rook.rbd.csi.ceph.com
parameters:
  clusterID: rook
  pool: replicapool
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook
  csi.storage.k8s.io/fstype: ext4
allowVolumeExpansion: true
reclaimPolicy: Delete
On a control plane node, use the kubectl apply command to create the StorageClass with the file:
kubectl apply -f filename.yaml
The StorageClass is created. You can verify it's created using:
kubectl get sc
The output looks similar to:
NAME                        PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE ...
rook-ceph-block (default)   rook.rbd.csi.ceph.com   Delete          Immediate         ...
To create a test application to use the CephBlockPool storage:
- Create a Kubernetes PersistentVolumeClaim file. On a control plane node, create a file named pvc-cephblock.yaml and copy the following into the file. No storageClassName is set in this claim, so it's provisioned using the default StorageClass, which is rook-ceph-block in this example.

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: myrook-block-pvc
  spec:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
- Create the Kubernetes PersistentVolumeClaim:

  kubectl apply -f pvc-cephblock.yaml
- You can see the PersistentVolumeClaim is created using the kubectl get pvc command:

  kubectl get pvc

  The output looks similar to:

  NAME               STATUS   VOLUME         CAPACITY   ACCESS MODES   STORAGECLASS      AGE
  myrook-block-pvc   Bound    pvc-72da7...   1Gi        RWO            rook-ceph-block   1h
  You can get more details about the PersistentVolumeClaim using the kubectl describe pvc command. For example:

  kubectl describe pvc myrook-block-pvc

  The output looks similar to:

  Name:          myrook-block-pvc
  Namespace:     default
  StorageClass:  rook-ceph-block
  Status:        Bound
  Volume:        pvc-72da7cbf-9e4e-49c9-92cf-65047e3780dd
  Labels:        <none>
  Annotations:   pv.kubernetes.io/bind-completed: yes
                 pv.kubernetes.io/bound-by-controller: yes
                 volume.beta.kubernetes.io/storage-provisioner: rook.rbd.csi.ceph.com
                 volume.kubernetes.io/storage-provisioner: rook.rbd.csi.ceph.com
  Finalizers:    [kubernetes.io/pvc-protection]
  Capacity:      1Gi
  Access Modes:  RWO
  VolumeMode:    Filesystem
  Used By:       <none>
  Events:        ...
- Create a Kubernetes application that uses the PersistentVolumeClaim. Create a file named nginx-block.yaml and copy the following into the file:

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    labels:
      run: mynginx
    name: mynginx-block
  spec:
    replicas: 1
    selector:
      matchLabels:
        run: mynginx
    template:
      metadata:
        labels:
          run: mynginx
      spec:
        containers:
        - image: container-registry.oracle.com/olcne/nginx:1.17.7
          name: mynginx
          ports:
          - containerPort: 80
          volumeMounts:
          - name: nginx-pvc
            mountPath: /usr/share/nginx/html
        volumes:
        - name: nginx-pvc
          persistentVolumeClaim:
            claimName: myrook-block-pvc
- Start the application:

  kubectl apply -f nginx-block.yaml
- You can see the application is running using the kubectl get deployment command:

  kubectl get deployment

  The output looks similar to:

  NAME            READY   UP-TO-DATE   AVAILABLE   AGE
  mynginx-block   1/1     1            1           65s
- You can see the application is using the PersistentVolumeClaim to provide persistent storage on CephBlockPool storage using the kubectl describe deployment command:

  kubectl describe deployment mynginx-block

  The output looks similar to:

  ...
  Pod Template:
    Labels:  run=mynginx
    Containers:
     mynginx:
      Image:        container-registry.oracle.com/olcne/nginx:1.17.7
      Port:         80/TCP
      Host Port:    0/TCP
      Environment:  <none>
      Mounts:       /usr/share/nginx/html from nginx-pvc (rw)
    Volumes:
     nginx-pvc:
      Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
      ClaimName:  myrook-block-pvc
      ReadOnly:   false
  ...

  An optional sketch after these steps shows how to confirm that data written to the volume survives a pod restart.
- You can delete the test application using:

  kubectl delete deployment mynginx-block
- You can delete the PersistentVolumeClaim using:

  kubectl delete pvc myrook-block-pvc
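If you want more evidence that the storage is persistent, run this optional check before deleting the test application. It's a minimal sketch, not part of the original example, assuming the mynginx-block deployment and myrook-block-pvc claim from the steps above are still in place:
# Write a file into the volume mounted at /usr/share/nginx/html.
kubectl exec deploy/mynginx-block -- sh -c 'echo "hello from ceph" > /usr/share/nginx/html/index.html'
# Delete the pod; the Deployment recreates it and reattaches the same claim.
kubectl delete pod -l run=mynginx
kubectl rollout status deployment mynginx-block
# The file written before the restart is still there in the new pod.
kubectl exec deploy/mynginx-block -- cat /usr/share/nginx/html/index.html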
Creating CephFilesystem Storage
This section contains a basic test to verify you can use CephFilesystem storage to provide persistent storage to applications running on Kubernetes.
If you don't create a CephFilesystem using a Rook configuration file, you can create one or more after the Rook module is deployed. You do this using the kubectl command to deploy a CephFilesystem custom resource.
For example, save the following CephFilesystem resource to a YAML file:
---
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook
spec:
  metadataPool:
    replicated:
      size: 3
  dataPools:
    - name: replicated
      replicated:
        size: 3
  preserveFilesystemOnDelete: true
  metadataServer:
    activeCount: 1
    activeStandby: true
On a control plane node, use the kubectl apply command to create the CephFilesystem with the file:
kubectl apply -f filename.yaml
The CephFilesystem is created. You can verify it's created using:
kubectl get cephfilesystem --namespace rook
The output looks similar to:
NAME   ACTIVEMDS   AGE   PHASE
myfs   1           18s   Ready
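The metadataServer settings in the resource request one active metadata server (MDS) with a standby (activeStandby: true). If you want to see the MDS daemons themselves, you can list their pods; a sketch, assuming the app=rook-ceph-mds label that Rook applies by default:
kubectl get pod -l app=rook-ceph-mds --namespace rook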
If you don't create a StorageClass for the CephFilesystem using a Rook configuration file, you can create one after the Rook module is deployed.
For example, save the following StorageClass to a YAML file:
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
provisioner: rook.cephfs.csi.ceph.com
parameters:
  clusterID: rook
  fsName: myfs
  pool: myfs-replicated
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook
reclaimPolicy: Delete
On a control plane node, use the kubectl apply command to create the StorageClass with the file:
kubectl apply -f filename.yaml
The StorageClass is created. You can verify it's created using:
kubectl get sc
The output looks similar to:
NAME                        PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE ...
rook-ceph-block (default)   rook.rbd.csi.ceph.com      Delete          Immediate         ...
rook-cephfs                 rook.cephfs.csi.ceph.com   Delete          Immediate         ...
To create a test application to use the CephFilesystem storage:
- Create a Kubernetes PersistentVolumeClaim file. On a control plane node, create a file named pvc-cephfs.yaml and copy the following into the file:

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: myrook-pvc-fs
  spec:
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: 1Gi
    storageClassName: rook-cephfs
- Create the Kubernetes PersistentVolumeClaim:

  kubectl apply -f pvc-cephfs.yaml
- You can see the PersistentVolumeClaim is created using the kubectl get pvc command:

  kubectl get pvc

  The output looks similar to:

  NAME            STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
  myrook-pvc-fs   Bound    pvc-...   1Gi        RWX            rook-cephfs    18s
  You can get more details about the PersistentVolumeClaim using the kubectl describe pvc command. For example:

  kubectl describe pvc myrook-pvc-fs

  The output looks similar to:

  Name:          myrook-pvc-fs
  Namespace:     default
  StorageClass:  rook-cephfs
  Status:        Bound
  Volume:        pvc-b98f9230-03d9-401d-9e19-81491eb785f9
  Labels:        <none>
  Annotations:   pv.kubernetes.io/bind-completed: yes
                 pv.kubernetes.io/bound-by-controller: yes
                 volume.beta.kubernetes.io/storage-provisioner: rook.cephfs.csi.ceph.com
                 volume.kubernetes.io/storage-provisioner: rook.cephfs.csi.ceph.com
  Finalizers:    [kubernetes.io/pvc-protection]
  Capacity:      1Gi
  Access Modes:  RWX
  VolumeMode:    Filesystem
  Used By:       <none>
  Events:
    Type    Reason                 Age   From                                                ...
    ----    ------                 ----  ----                                                ...
    Normal  ExternalProvisioning   106s  persistentvolume-controller                         ...
    Normal  Provisioning           106s  rook.cephfs.csi.ceph.com_csi-cephfsplugin-provisio...
    Normal  ProvisioningSucceeded  106s  rook.cephfs.csi.ceph.com_csi-cephfsplugin-provisio...
- Create a Kubernetes application that uses the PersistentVolumeClaim. Create a file named nginx-cephfs.yaml and copy the following into the file:

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    labels:
      run: mynginx
    name: mynginx-cephfs
  spec:
    replicas: 1
    selector:
      matchLabels:
        run: mynginx
    template:
      metadata:
        labels:
          run: mynginx
      spec:
        containers:
        - image: container-registry.oracle.com/olcne/nginx:1.17.7
          name: mynginx
          ports:
          - containerPort: 80
          volumeMounts:
          - name: nginx-pvc
            mountPath: /usr/share/nginx/html
        volumes:
        - name: nginx-pvc
          persistentVolumeClaim:
            claimName: myrook-pvc-fs
- Start the application:

  kubectl apply -f nginx-cephfs.yaml
- You can see the application is running using the kubectl get deployment command:

  kubectl get deployment

  The output looks similar to:

  NAME             READY   UP-TO-DATE   AVAILABLE   AGE
  mynginx-cephfs   1/1     1            1           16s
- You can see the application is using the PersistentVolumeClaim to provide persistent storage on CephFilesystem using the kubectl describe deployment command:

  kubectl describe deployment mynginx-cephfs

  The output looks similar to:

  ...
  Pod Template:
    Labels:  run=mynginx
    Containers:
     mynginx:
      Image:        container-registry.oracle.com/olcne/nginx:1.17.7
      Port:         80/TCP
      Host Port:    0/TCP
      Environment:  <none>
      Mounts:       /usr/share/nginx/html from nginx-pvc (rw)
    Volumes:
     nginx-pvc:
      Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the ...
      ClaimName:  myrook-pvc-fs
      ReadOnly:   false
  ...

  An optional sketch after these steps shows how the ReadWriteMany access mode lets more than one pod mount the same volume.
- You can delete the test application using:

  kubectl delete deployment mynginx-cephfs
- You can delete the PersistentVolumeClaim using:

  kubectl delete pvc myrook-pvc-fs
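Because the claim requests the ReadWriteMany (RWX) access mode, the same CephFilesystem volume can be mounted read-write by several pods at once, unlike the ReadWriteOnce block volume in the previous section. Before deleting the test application, you can run this optional check; a sketch assuming the example names above:
# Scale the deployment; both replicas mount the same CephFilesystem volume.
kubectl scale deployment mynginx-cephfs --replicas=2
kubectl get pod -l run=mynginx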
Creating CephObjectStore Storage
This section contains a basic test to verify you can use CephObjectStore storage to provide object storage to applications running on Kubernetes.
The example in this section is based on the upstream Rook documentation example to create a CephObjectStore, and then test it. The content here is changed to use the rook Kubernetes namespace (the default namespace for Rook in Oracle Cloud Native Environment), but is otherwise the same. To create an application to test that you can put or get an object from the CephObjectStore, see the upstream documentation. The information here shows you how to set up the CephObjectStore and create an ObjectBucketClaim, but not how to create an application to test it.
If you don't create a CephObjectStore using a Rook configuration file, you can create one or more after the Rook module is deployed. You do this using the kubectl command to deploy a CephObjectStore custom resource.
For example, save the following CephObjectStore resource to a YAML file:
---
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store
  namespace: rook
spec:
  metadataPool:
    failureDomain: host
    replicated:
      size: 3
  dataPool:
    failureDomain: host
    erasureCoded:
      dataChunks: 2
      codingChunks: 1
  preservePoolsOnDelete: true
  gateway:
    sslCertificateRef:
    port: 80
    instances: 1
  healthCheck:
    startupProbe:
      disabled: false
    readinessProbe:
      disabled: false
      periodSeconds: 5
      failureThreshold: 2
On a control plane node, use the kubectl apply command to create the CephObjectStore with the file:
kubectl apply -f filename.yaml
The CephObjectStore is created. You can verify it's created using:
kubectl get cephobjectstore --namespace rook
The output looks similar to:
NAME       PHASE
my-store   Ready
You can confirm the object store is configured by checking that its gateway pod has started:
kubectl get pod -l app=rook-ceph-rgw --namespace rook
The output looks similar to:
NAME                           READY   STATUS    RESTARTS   AGE
rook-ceph-rgw-my-store-a-...   1/1     Running   0          3m35s
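The gateway (RGW) is also exposed inside the cluster through a Kubernetes Service on the port set in the gateway section (80 in this example). You can list it with a label selector; a sketch, assuming the app=rook-ceph-rgw label that Rook applies by default:
kubectl get service -l app=rook-ceph-rgw --namespace rook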
If you don't create a StorageClass for bucket provisioning on the CephObjectStore using a Rook configuration file, you can create one after the Rook module is deployed.
For example, save the following StorageClass to a YAML file:
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-bucket
provisioner: rook-ceph.ceph.rook.io/bucket
reclaimPolicy: Delete
parameters:
  objectStoreName: my-store
  objectStoreNamespace: rook
On a control plane node, use the kubectl apply command to create the StorageClass with the file:
kubectl apply -f filename.yaml
The StorageClass is created. You can verify it's created using:
kubectl get sc
The output looks similar to:
NAME                        PROVISIONER                     RECLAIMPOLICY   VOLUMEBIND ...
rook-ceph-block (default)   rook.rbd.csi.ceph.com           Delete          Immediate  ...
rook-ceph-bucket            rook-ceph.ceph.rook.io/bucket   Delete          Immediate  ...
rook-cephfs                 rook.cephfs.csi.ceph.com        Delete          Immediate  ...
To create an ObjectBucketClaim to access the CephObjectStore storage:
- Create a Kubernetes ObjectBucketClaim file. On a control plane node, create a file named obc.yaml and copy the following into the file:

  apiVersion: objectbucket.io/v1alpha1
  kind: ObjectBucketClaim
  metadata:
    name: ceph-bucket
  spec:
    generateBucketName: ceph-bkt
    storageClassName: rook-ceph-bucket
- Create the Kubernetes ObjectBucketClaim:

  kubectl apply -f obc.yaml
- You can see the ObjectBucketClaim is created using the kubectl get obc command:

  kubectl get obc

  The output looks similar to:

  NAME          AGE
  ceph-bucket   31s
  You can get more details about the ObjectBucketClaim using the kubectl describe obc command. For example:

  kubectl describe obc ceph-bucket

  The output looks similar to:

  Name:         ceph-bucket
  Namespace:    default
  Labels:       <none>
  Annotations:  <none>
  API Version:  objectbucket.io/v1alpha1
  Kind:         ObjectBucketClaim
  Metadata:
    Creation Timestamp:  <date>
    Generation:          1
    Managed Fields:
      API Version:  objectbucket.io/v1alpha1
      Fields Type:  FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .:
            f:kubectl.kubernetes.io/last-applied-configuration:
        f:spec:
          .:
          f:generateBucketName:
          f:storageClassName:
      Manager:         kubectl-client-side-apply
      Operation:       Update
      Time:            <date>
    Resource Version:  354339
    UID:               c53e5eb7-f460-435a-b31d-2eaab1bcddd3
  Spec:
    Generate Bucket Name:  ceph-bkt
    Storage Class Name:    rook-ceph-bucket
  Events:                  <none>
- To create a Kubernetes application that uses the ObjectBucketClaim, follow the rest of the example in the upstream Rook documentation. (A sketch after these steps shows where the generated bucket endpoint and credentials live.)
- You can delete the ObjectBucketClaim using:

  kubectl delete obc ceph-bucket
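When an ObjectBucketClaim is provisioned, the bucket provisioner creates a ConfigMap and a Secret with the same name as the claim, in the same namespace; the example application in the upstream Rook documentation consumes these. A sketch of how you might read the bucket endpoint and the S3-style credentials back out (the key names follow the upstream bucket provisioner convention):
# Bucket endpoint and name from the generated ConfigMap.
kubectl get configmap ceph-bucket --output jsonpath='{.data.BUCKET_HOST}:{.data.BUCKET_PORT}/{.data.BUCKET_NAME}{"\n"}'
# S3-style access keys from the generated Secret (values are base64 encoded).
kubectl get secret ceph-bucket --output jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode
kubectl get secret ceph-bucket --output jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode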