2 Installing the Rook Module
This chapter discusses how to install the Rook module to set up dynamically provisioned persistent storage for Kubernetes applications using Ceph on Oracle Cloud Native Environment.
Prerequisites
This section contains the prerequisites for installing the Rook module.
Setting up the Worker Nodes
The Rook module deploys Ceph as containers to the Kubernetes worker nodes. You need at least three worker nodes in the Kubernetes cluster.
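You can check the cluster has enough worker nodes before you begin. For example, run the following on a control plane node and confirm at least three worker nodes are listed:

kubectl get nodes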
In addition, at least one of these local storage options must be available on the Kubernetes worker nodes:
- Raw devices (no partitions or formatted file systems).
- Raw partitions (no formatted file system).
- LVM Logical Volumes (no formatted file system).
- Persistent Volumes available from a storage class in block mode.
Tip:
Use the lsblk -f command to ensure no file system is on the device or
partition. If the FSTYPE field is empty, no file system is on the disk
and it can be used with Ceph.
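For example, output similar to the following (the device names and UUID shown are illustrative only) indicates that sdb has no file system and can be used with Ceph, while sda1 is already formatted:

lsblk -f
NAME   FSTYPE LABEL UUID                                 MOUNTPOINT
sda
└─sda1 xfs          0b1c2d3e-...                         /
sdb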
Creating a Rook Configuration File
If you deploy the Rook module without a configuration file, the Rook operator pod (rook-ceph-operator) is created, and you can then create a Ceph cluster and storage using the kubectl command. Alternatively, you can provide a Rook configuration file that defines a Ceph cluster and storage, which are then set up for you when you deploy the Rook module.
You can provide a Rook configuration file on the operator node in YAML format. The configuration file contains the information to configure one or more Ceph clusters and storage types. You use Ceph-related Kubernetes CRDs in the configuration file to perform the setup. Include as many CRDs in the configuration file as you need to set up Ceph clusters, storage options, and storage providers. For example:
---
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook
spec:
  ...
---
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook
spec:
  ...
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: rook.rbd.csi.ceph.com
parameters:
  ...
The Platform API Server uses the information contained in the configuration file when creating the Rook module. Rook performs all the setup and configuration for Ceph using the information you provide in this file.
Use the upstream documentation to create CRDs. For information on the options available to use in the configuration file, see the upstream Rook documentation for Ceph CRDs.
Important:
The example CRDs in this section include CRDs to set up a basic Ceph cluster, storage types, and storage class providers. These are examples only and aren't recommended for a production environment.
CephCluster CRD
The CephCluster CRD is used to create a Ceph cluster. The following example configuration uses a Kubernetes cluster with three worker nodes, each of which has a raw disk attached as sdb. This example uses a Ceph cluster named rook-ceph in the rook namespace. Note that the Ceph image is pulled from the Oracle Container Registry.
---
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook
spec:
  cephVersion:
    image: container-registry.oracle.com/olcne/ceph:v17.2.5
    imagePullPolicy: Always
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3
    allowMultiplePerNode: false
  dashboard:
    enabled: false
  storage:
    useAllNodes: true
    useAllDevices: false
    deviceFilter: sdb
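After the Rook module is installed with this CRD, you can optionally confirm the Ceph cluster resource was created and reports a healthy state. A minimal check, assuming the cluster name rook-ceph used in this example, is to run the following on a control plane node (the exact columns in the output depend on the Rook release):

kubectl --namespace rook get cephcluster rook-ceph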
CephBlockPool CRD
Use a CephBlockPool CRD to create the Ceph storage pool. This example sets up a CephBlockPool named replicapool in the rook namespace, with a replica size of 3.
---
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook
spec:
  failureDomain: host
  replicated:
    size: 3
    requireSafeReplicaSize: true
StorageClass CRD for CephBlockPool
To allow pods to access the Ceph block storage, you need to create a StorageClass. An example CRD for this follows:
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: rook.rbd.csi.ceph.com
parameters:
  clusterID: rook
  pool: replicapool
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook
  csi.storage.k8s.io/fstype: ext4
allowVolumeExpansion: true
reclaimPolicy: Delete
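Pods consume this block storage by referencing the StorageClass from a PersistentVolumeClaim. The following is a minimal sketch, using a hypothetical claim name (rbd-pvc) and size; adjust these for the workload:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-ceph-block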
CephFilesystem CRD
You might also want to set up a CephFilesystem. You do this by including the
CephFilesystem CRD information in the configuration file. This example creates a
CephFilesystem named myfs in the rook namespace, with a
replica count of 3.
---
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook
spec:
  metadataPool:
    replicated:
      size: 3
  dataPools:
    - name: replicated
      replicated:
        size: 3
  preserveFilesystemOnDelete: true
  metadataServer:
    activeCount: 1
    activeStandby: true
StorageClass CRD for CephFilesystem
To allow pods to access the CephFilesystem storage, you need to create a StorageClass. An example CRD for this follows:
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
provisioner: rook.cephfs.csi.ceph.com
parameters:
  clusterID: rook
  fsName: myfs
  pool: myfs-replicated
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook
reclaimPolicy: Delete
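As with the block StorageClass, pods use this storage through a PersistentVolumeClaim. The following is a minimal sketch, using a hypothetical claim name (cephfs-pvc); because CephFS is a shared file system, the claim can request ReadWriteMany access:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-cephfs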
CephObjectStore CRD
You might also want to set up a CephObjectStore. You do this by including the CephObjectStore CRD information in the configuration file. For example:
---
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store
  namespace: rook
spec:
  metadataPool:
    failureDomain: host
    replicated:
      size: 3
  dataPool:
    failureDomain: host
    erasureCoded:
      dataChunks: 2
      codingChunks: 1
  preservePoolsOnDelete: true
  gateway:
    sslCertificateRef:
    port: 80
    instances: 1
  healthCheck:
    startupProbe:
      disabled: false
    readinessProbe:
      disabled: false
      periodSeconds: 5
      failureThreshold: 2
StorageClass (bucket) CRD for CephObjectStore
To allow pods to access the CephObjectStore storage, you need to create a StorageClass, which is used to create buckets. An example CRD for this follows:
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-bucket
provisioner: rook-ceph.ceph.rook.io/bucket
reclaimPolicy: Delete
parameters:
  objectStoreName: my-store
  objectStoreNamespace: rook
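Applications request a bucket from this StorageClass by creating an ObjectBucketClaim. The following is a minimal sketch, using a hypothetical claim name (ceph-bucket) and bucket name prefix; Rook then creates the bucket and exposes its endpoint and credentials to the application:

---
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: ceph-bucket
spec:
  generateBucketName: ceph-bkt
  storageClassName: rook-ceph-bucket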
Deploying the Rook Module
You can deploy all the modules required to set up Ceph storage for a Kubernetes cluster
using a single olcnectl module create command. This method might be useful
to deploy the Rook module at the same time as deploying a Kubernetes cluster.
If you have an existing deployment of the Kubernetes module, you can specify that instance when deploying the Rook module.
This section guides you through installing each component required to deploy the Rook module.
For the full list of the Platform CLI command options available when creating modules, see
the olcnectl module create command in Platform Command-Line Interface.
To deploy the Rook module:
- If you don't already have an environment set up, create one into which the modules can be deployed. For information on setting up an environment, see Installation. The name of the environment in this example is myenvironment.
- If you don't already have a Kubernetes module set up or deployed, set one up. For information on adding a Kubernetes module to an environment, see Kubernetes Module. The name of the Kubernetes module in this example is mycluster.
- Create a Rook module and associate it with the Kubernetes module named mycluster using the --rook-kubernetes-module option. In this example, the Rook module is named myrook.

  olcnectl module create \
  --environment-name myenvironment \
  --module rook \
  --name myrook \
  --rook-kubernetes-module mycluster \
  --rook-config rook-config.yaml

  The --module option sets the module type to create, which is rook. You define the name of the Rook module using the --name option, which in this case is myrook.

  The --rook-kubernetes-module option sets the name of the Kubernetes module.

  The --rook-config option sets the location of a YAML file that contains the configuration information for the Rook module. This option is optional.

  If you don't include all the required options when adding the module, you're prompted to provide them.

- Use the olcnectl module install command to install the Rook module. For example:

  olcnectl module install \
  --environment-name myenvironment \
  --name myrook

  You can optionally use the --log-level option to set the level of logging displayed in the command output. By default, error messages are displayed. For example, you can set the logging level to show all messages when you include:

  --log-level debug

  The log messages are also saved as an operation log. You can view operation logs as commands are running, or when they've completed. For more information on using operation logs, see Platform Command-Line Interface.
The Rook module is deployed into the Kubernetes cluster.
Verifying the Rook Module Deployment
You can verify the Rook module is deployed using
the olcnectl module instances command on the
operator node. For example:
olcnectl module instances \
--environment-name myenvironment

The output looks similar to:

INSTANCE     MODULE        STATE
mycluster    kubernetes    installed
myrook       rook          installed
...

Note the entry for rook in the MODULE column is in the installed state.
In addition, you can use the olcnectl module report command to review
information about the module. For example, use the following command to review the Rook module
named myrook in myenvironment:
olcnectl module report \
--environment-name myenvironment \
--name myrook \
--children
For more information on the syntax for the olcnectl
module report command, see Platform Command-Line Interface.
On a control plane node, verify the rook-ceph deployments are running in
the rook namespace. Confirm that the Ceph operator pod
(rook-ceph-operator) deployment is running.
kubectl get deployments --namespace rook
The output looks similar to:
NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
rook-ceph-operator   1/1     1            1           163m
If you used a configuration file to deploy extra Ceph objects, such as a Ceph cluster, storage, and storage class provisioners, you might have more deployments running. For example:
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
csi-cephfsplugin-provisioner   2/2     2            2           159m
csi-rbdplugin-provisioner      2/2     2            2           159m
rook-ceph-operator             1/1     1            1           163m
rook-ceph-mgr-a                1/1     1            1           163m
rook-ceph-mon-b                1/1     1            1           163m
...
To review the logs for the Rook operator pod, use the kubectl logs command. For example:

kubectl logs --namespace rook rook-ceph-operator-...

On a control plane node, you can also verify any StorageClasses for the Ceph provisioner are created using the kubectl get sc command. These are only created if you used a configuration file to create them when deploying the module. For example:
kubectl get sc

The output looks similar to:

NAME                        PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE   ...
rook-ceph-block (default)   rook.rbd.csi.ceph.com      Delete          Immediate           ...
rook-cephfs                 rook.cephfs.csi.ceph.com   Delete          Immediate           ...

In this case, two StorageClasses exist. One is named rook-ceph-block, which is the default StorageClass and is the provider for Ceph block storage. The other is named rook-cephfs, which is the provider for CephFilesystem (CephFS).
You can get more details about a StorageClass using the kubectl describe sc command. For example:

kubectl describe sc rook-ceph-block

The output looks similar to:
Name:                  rook-ceph-block
IsDefaultClass:        Yes
Annotations:           storageclass.kubernetes.io/is-default-class=true
Provisioner:           rook.rbd.csi.ceph.com
Parameters:            clusterID=rook,csi.storage.k8s.io/controller-expand-secret-name=rook- ...
AllowVolumeExpansion:  True
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>