2 Using Oracle Cloud Infrastructure Storage

Important:

The software described in this documentation is either in Extended Support or Sustaining Support. See Oracle Open Source Support Policies for more information.

We recommend that you upgrade the software described by this documentation as soon as possible.

This chapter discusses how to install and use the Oracle Cloud Infrastructure Cloud Controller Manager module to set up dynamically provisioned persistent storage for Kubernetes applications in Oracle Cloud Native Environment on Oracle Cloud Infrastructure instances.

Prerequisites

Before you set up the Oracle Cloud Infrastructure Cloud Controller Manager module, you need to gather information about your Oracle Cloud Infrastructure environment. You typically need the following:

  • The identifier for the region.

  • The OCID for the tenancy.

  • The OCID for the compartment.

  • The OCID for the user.

  • The public key fingerprint for the API signing key pair.

  • The private key file for the API signing key pair. The private key must be copied to the primary control plane node. This is the first control plane node listed in the --master-nodes option when you create the Kubernetes module.

You may need more information related to your Oracle Cloud Infrastructure networking or other components.

For information on finding each of these identifiers or components, see the Oracle Cloud Infrastructure documentation.
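
If you still need to create the API signing key pair and its fingerprint, the following sketch shows one way to do so with OpenSSL. The ~/.oci paths are an assumed convention, not a requirement; adjust them for your environment, and upload the resulting public key for your user in the Oracle Cloud Infrastructure Console.

```shell
# Sketch: generate an API signing key pair and compute its fingerprint.
# The ~/.oci paths are an assumed convention; adjust as needed.
mkdir -p "$HOME/.oci"

# Create a 2048-bit private key, unless one already exists.
[ -f "$HOME/.oci/oci_api_key.pem" ] || \
    openssl genrsa -out "$HOME/.oci/oci_api_key.pem" 2048

# Derive the matching public key to upload in the Console.
openssl rsa -pubout -in "$HOME/.oci/oci_api_key.pem" \
    -out "$HOME/.oci/oci_api_key_public.pem"

# Print the colon-separated fingerprint expected by the
# --oci-fingerprint option (for example, b5:52:...).
openssl rsa -pubout -outform DER -in "$HOME/.oci/oci_api_key.pem" 2>/dev/null \
    | openssl md5 -c
```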

Deploying the Oracle Cloud Infrastructure Cloud Controller Manager Module

If you have already installed the Oracle Cloud Infrastructure Cloud Controller Manager module to make use of Oracle Cloud Infrastructure application load balancers, you do not need to create another module to provision storage. The Oracle Cloud Infrastructure Cloud Controller Manager module is used to provision both Oracle Cloud Infrastructure storage and load balancers.

You can deploy all the modules required to set up Oracle Cloud Infrastructure storage for a Kubernetes cluster using a single olcnectl module create command. This method might be useful if you want to deploy the Oracle Cloud Infrastructure Cloud Controller Manager module at the same time as deploying a Kubernetes cluster.

If you have an existing deployment of the Kubernetes module, you can specify that instance when deploying the Oracle Cloud Infrastructure Cloud Controller Manager module.
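
As a sketch, a single olcnectl module create command of the following form creates the Kubernetes, Helm, and Oracle Cloud Infrastructure Cloud Controller Manager modules together. The node names, region, and OCIDs shown are placeholders that you must replace with values for your environment, and other Kubernetes module options (such as certificate settings) are omitted for brevity:

```shell
olcnectl module create \
--environment-name myenvironment \
--module kubernetes --name mycluster \
--master-nodes control1.example.com:8090 \
--worker-nodes worker1.example.com:8090,worker2.example.com:8090 \
--module helm --name myhelm \
--helm-kubernetes-module mycluster \
--module oci-ccm --name myoci \
--oci-ccm-helm-module myhelm \
--oci-region us-ashburn-1 \
--oci-tenancy ocid1.tenancy.oc1..unique_ID \
--oci-compartment ocid1.compartment.oc1..unique_ID \
--oci-user ocid1.user.oc1..unique_ID \
--oci-fingerprint b5:52:... \
--oci-private-key /home/opc/.oci/oci_api_key.pem
```

You then validate and install the modules with the olcnectl module validate and olcnectl module install commands, as shown in the steps that follow.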

This section guides you through installing each component required to deploy the Oracle Cloud Infrastructure Cloud Controller Manager module.

For the full list of the Platform CLI command options available when creating modules, see the olcnectl module create command in Platform Command-Line Interface.

To deploy the Oracle Cloud Infrastructure Cloud Controller Manager module:

  1. If you do not already have an environment set up, create one into which the modules can be deployed. For information on setting up an environment, see Getting Started. The name of the environment in this example is myenvironment.
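
    As a sketch, an environment can be created on the operator node with a command similar to the following; the API Server address shown is an assumption for an operator node that also runs the Platform API Server, and the certificate options described in Getting Started are omitted:

```shell
olcnectl environment create \
--api-server 127.0.0.1:8091 \
--environment-name myenvironment
```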

  2. If you do not already have a Kubernetes module set up or deployed, set one up.

    For information on adding a Kubernetes module to an environment, see Container Orchestration. The name of the Kubernetes module in this example is mycluster.
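
    As a sketch, a small Kubernetes module might be created as follows; the node names are placeholders, and other options described in Container Orchestration (such as certificate settings) are omitted:

```shell
olcnectl module create \
--environment-name myenvironment \
--module kubernetes \
--name mycluster \
--container-registry container-registry.oracle.com/olcne \
--master-nodes control1.example.com:8090 \
--worker-nodes worker1.example.com:8090,worker2.example.com:8090
```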

  3. If you do not already have a Helm module created and installed, create one. The Helm module in this example is named myhelm and is associated with the Kubernetes module named mycluster using the --helm-kubernetes-module option.

    olcnectl module create \
    --environment-name myenvironment \
    --module helm \
    --name myhelm \
    --helm-kubernetes-module mycluster 
  4. If you are deploying a new Helm module, use the olcnectl module validate command to validate that the Helm module can be deployed to the nodes. For example:

    olcnectl module validate \
    --environment-name myenvironment \
    --name myhelm
  5. If you are deploying a new Helm module, use the olcnectl module install command to install the Helm module. For example:

    olcnectl module install \
    --environment-name myenvironment \
    --name myhelm 

    The Helm software packages are installed on the control plane nodes, and the Helm module is deployed into the Kubernetes cluster.

  6. Create an Oracle Cloud Infrastructure Cloud Controller Manager module and associate it with the Helm module named myhelm using the --oci-ccm-helm-module option. In this example, the Oracle Cloud Infrastructure Cloud Controller Manager module is named myoci.

    olcnectl module create \
    --environment-name myenvironment \
    --module oci-ccm \
    --name myoci \
    --oci-ccm-helm-module myhelm \
    --oci-region us-ashburn-1 \
    --oci-tenancy ocid1.tenancy.oc1..unique_ID \
    --oci-compartment ocid1.compartment.oc1..unique_ID \
    --oci-user ocid1.user.oc1..unique_ID \
    --oci-fingerprint b5:52:... \
    --oci-private-key /home/opc/.oci/oci_api_key.pem 

    The --module option sets the module type to create, which is oci-ccm. You define the name of the Oracle Cloud Infrastructure Cloud Controller Manager module using the --name option, which in this case is myoci.

    The --oci-ccm-helm-module option sets the name of the Helm module. If there is an existing Helm module with the same name, the Platform API Server uses that instance of Helm.

    The --oci-region option sets the Oracle Cloud Infrastructure region to use. The region in this example is us-ashburn-1.

    The --oci-tenancy option sets the OCID for your tenancy.

    The --oci-compartment option sets the OCID for your compartment.

    The --oci-user option sets the OCID for the user.

    The --oci-fingerprint option sets the fingerprint for the public key for the Oracle Cloud Infrastructure API signing key.

    The --oci-private-key option sets the location of the private key for the Oracle Cloud Infrastructure API signing key. The private key must be available on the primary control plane node.

    If you do not include all the required options when adding the modules, you are prompted to provide them.

  7. Use the olcnectl module validate command to validate that the Oracle Cloud Infrastructure Cloud Controller Manager module can be deployed to the nodes. For example:

    olcnectl module validate \
    --environment-name myenvironment \
    --name myoci
  8. Use the olcnectl module install command to install the Oracle Cloud Infrastructure Cloud Controller Manager module. For example:

    olcnectl module install \
    --environment-name myenvironment \
    --name myoci

    The Oracle Cloud Infrastructure Cloud Controller Manager module is deployed into the Kubernetes cluster.

Verifying the Oracle Cloud Infrastructure Cloud Controller Manager Deployment

You can verify that the Oracle Cloud Infrastructure Cloud Controller Manager module is deployed using the olcnectl module instances command on the operator node. For example:

olcnectl module instances \
--environment-name myenvironment
INSTANCE                  MODULE    	STATE    
mycluster                 kubernetes	installed
myhelm                    helm      	installed
myoci                     oci-ccm   	installed
control1.example.com      node      	installed
...

Note that the entry for oci-ccm in the MODULE column is in the installed state.

In addition, use the olcnectl module report command to review information about the module. For example, use the following command to review the Oracle Cloud Infrastructure Cloud Controller Manager module named myoci in myenvironment:

olcnectl module report \
--environment-name myenvironment \
--name myoci \
--children

For more information on the syntax for the olcnectl module report command, see Platform Command-Line Interface.

On a control plane node, you can also verify that the oci-bv StorageClass for the Oracle Cloud Infrastructure provisioner is created using the kubectl get sc command:

kubectl get sc
NAME     PROVISIONER                       RECLAIMPOLICY   VOLUMEBINDINGMODE      ...
oci-bv   blockvolume.csi.oraclecloud.com   Delete          WaitForFirstConsumer   ...

You can get more details about the StorageClass using the kubectl describe sc command. For example:

kubectl describe sc oci-bv
Name:                  oci-bv
IsDefaultClass:        No
Annotations:           meta.helm.sh/release-name=myoci,meta.helm.sh/release-namespace=default
Provisioner:           blockvolume.csi.oraclecloud.com
Parameters:            <none>
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     WaitForFirstConsumer
Events:                <none>

Creating Oracle Cloud Infrastructure Block Storage

This section contains a basic test to verify that you can create Oracle Cloud Infrastructure block storage to provide persistent storage to applications running on Kubernetes.

To create a test application to use Oracle Cloud Infrastructure storage:

  1. Create a Kubernetes PersistentVolumeClaim file. On a control plane node, create a file named pvc.yaml. Copy the following into the file.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myoci-pvc
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: oci-bv
      resources:
        requests:
          storage: 50Gi

    Note that the accessModes setting for Oracle Cloud Infrastructure block storage must be ReadWriteOnce. The minimum Oracle Cloud Infrastructure block volume size is 50Gi.

  2. Create the Kubernetes PersistentVolumeClaim.

    kubectl apply -f pvc.yaml
    persistentvolumeclaim/myoci-pvc created
  3. You can see the PersistentVolumeClaim is created using the kubectl get pvc command:

    kubectl get pvc 
    NAME        STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    myoci-pvc   Pending                                      oci-bv         15s

    The STATUS of Pending means the volume has not yet been provisioned: the oci-bv StorageClass uses the WaitForFirstConsumer volume binding mode, so the block volume is created only when a pod first uses the claim.

    You can get more details about the PersistentVolumeClaim using the kubectl describe pvc command. For example:

    kubectl describe pvc myoci-pvc 
    Name:          myoci-pvc
    Namespace:     default
    StorageClass:  oci-bv
    Status:        Pending
    Volume:        
    Labels:        <none>
    Annotations:   <none>
    Finalizers:    [kubernetes.io/pvc-protection]
    Capacity:      
    Access Modes:  
    VolumeMode:    Filesystem
    Used By:       <none>
    Events:
      Type    Reason                Age                     From                         ...
      ----    ------                ----                    ----                         
      Normal  WaitForFirstConsumer  2m18s (x26 over 8m29s)  persistentvolume-controller  ...
  4. Create a Kubernetes application that uses the PersistentVolumeClaim. Create a file named nginx.yaml and copy the following into the file.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        run: mynginx
      name: mynginx
    spec:
      replicas: 1
      selector:
        matchLabels:
          run: mynginx
      template:
        metadata:
          labels:
            run: mynginx
        spec:
          containers:
          - image: container-registry.oracle.com/olcne/nginx:1.17.7 
            name: mynginx
            ports:
            - containerPort: 80
            volumeMounts:
            - name: nginx-pvc
              mountPath: /usr/share/nginx/html
          volumes:
          - name: nginx-pvc
            persistentVolumeClaim:
              claimName: myoci-pvc
  5. Start the application:

    kubectl apply -f nginx.yaml
    deployment.apps/mynginx created
  6. You can see the application is running using the kubectl get deployment command:

    kubectl get deployment
    NAME      READY   UP-TO-DATE   AVAILABLE   AGE
    mynginx   1/1     1            1           63s
  7. You can see the application is using the PersistentVolumeClaim to provide persistent storage on Oracle Cloud Infrastructure using the kubectl describe deployment command:

    kubectl describe deployment mynginx
    ...
    Pod Template:
      Labels:  run=mynginx
      Containers:
       mynginx:
        Image:        container-registry.oracle.com/olcne/nginx:1.17.7
        Port:         80/TCP
        Host Port:    0/TCP
        Environment:  <none>
        Mounts:
          /usr/share/nginx/html from nginx-pvc (rw)
      Volumes:
       nginx-pvc:
        Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
        ClaimName:  myoci-pvc
        ReadOnly:   false
    ...

    Note the ClaimName is myoci-pvc, which is the name of the PersistentVolumeClaim created earlier.

    You can see the PersistentVolumeClaim is now bound to this application using the kubectl get pvc command:

    kubectl get pvc 
    NAME        STATUS   VOLUME             CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    myoci-pvc   Bound    csi-84175067-...   50Gi       RWO            oci-bv         1m

    Tip:

    If you log in to Oracle Cloud Infrastructure, you can see there is a block volume created with the name listed in the VOLUME column. The block volume is attached to the compute instance on which the Kubernetes application is running.
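
    If the OCI CLI is installed and configured, you can also list the block volumes in the compartment from the command line; this is a sketch with a placeholder compartment OCID:

```shell
oci bv volume list \
--compartment-id ocid1.compartment.oc1..unique_ID
```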

  8. You can delete the test application using:

    kubectl delete deployment mynginx 
    deployment.apps "mynginx" deleted
  9. You can delete the PersistentVolumeClaim using:

    kubectl delete pvc myoci-pvc 
    persistentvolumeclaim "myoci-pvc" deleted

    The PersistentVolumeClaim and its backing block volume are deleted, because the oci-bv StorageClass uses the Delete reclaim policy.

    Tip:

    If you log in to Oracle Cloud Infrastructure, you can see the block volume is terminated.

Removing the Oracle Cloud Infrastructure Cloud Controller Manager Module

You can remove a deployment of the Oracle Cloud Infrastructure Cloud Controller Manager module and leave the Kubernetes cluster in place. To do this, you remove the Oracle Cloud Infrastructure Cloud Controller Manager module from the environment.

Use the olcnectl module uninstall command to remove the Oracle Cloud Infrastructure Cloud Controller Manager module. For example, to uninstall the Oracle Cloud Infrastructure Cloud Controller Manager module named myoci in the environment named myenvironment:

olcnectl module uninstall \
--environment-name myenvironment \
--name myoci

The Oracle Cloud Infrastructure Cloud Controller Manager module is removed from the environment.