2 Installing the Oracle Cloud Infrastructure Cloud Controller Manager Module

This chapter discusses how to install the Oracle Cloud Infrastructure Cloud Controller Manager module on Oracle Cloud Native Environment on Oracle Cloud Infrastructure instances.

Prerequisites

This section contains the prerequisite information you need to set up the Oracle Cloud Infrastructure Cloud Controller Manager module.

Gather Oracle Cloud Infrastructure Identifiers

Before you set up the Oracle Cloud Infrastructure Cloud Controller Manager module, you need to gather information about the Oracle Cloud Infrastructure environment. The most common information you need is:

  • The identifier for the region.

  • The OCID for the tenancy.

  • The OCID for the compartment.

  • The OCID for the user.

  • The public key fingerprint for the API signing key pair.

  • The private key file for the API signing key pair.

You might need more information related to the Oracle Cloud Infrastructure networking or other components.

If you're using the Oracle Cloud Infrastructure Cloud Controller Manager module to provision load balancers for Kubernetes services, you must also gather:

  • The OCID for the Virtual Cloud Network (VCN).

  • The OCIDs for two subnets in the VCN, if high availability is required.

  • The quota to use for the load balancers.

  • The shape to use for the load balancers.

For information on finding each of these identifiers or components, see the Oracle Cloud Infrastructure documentation.
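If the OCI CLI is already configured on a host, several of these values are recorded in its configuration file. As a quick, hedged sketch (the file path and key names follow the OCI CLI's default configuration layout; the commented commands require an authenticated OCI CLI):

```shell
# Show the user OCID, tenancy OCID, region, key file path, and key
# fingerprint recorded for the OCI CLI, if one has been set up.
grep -E '^(user|fingerprint|tenancy|region|key_file)=' ~/.oci/config

# Compartment and subnet OCIDs can be listed with the OCI CLI, for example:
#   oci iam compartment list
#   oci network subnet list --compartment-id <compartment_OCID>
```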

Setting up the Health Check Endpoint Network Ports

When using a Kubernetes LoadBalancer service with externalTrafficPolicy set to Cluster (the default), a health check endpoint is expected to be available on TCP port 10256. kube-proxy listens on this port so that the load balancer can verify that kube-proxy is healthy on each node, and the load balancer uses the result to decide which nodes can have traffic routed to them. To allow this traffic, you must open TCP port 10256 on all Kubernetes nodes. On each Kubernetes node, run:

sudo firewall-cmd --zone=public --add-port=10256/tcp
sudo firewall-cmd --zone=public --add-port=10256/tcp --permanent
sudo systemctl restart firewalld.service

For more information on externalTrafficPolicy, see the upstream Kubernetes documentation.

Ensure traffic is allowed for TCP port 10256 in the network security list.
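To confirm the firewall rule is active on a node, you can query firewalld directly (this assumes the default public zone, as in the commands above):

```shell
# Prints "yes" if the port is open in the current runtime configuration.
sudo firewall-cmd --zone=public --query-port=10256/tcp

# List all open ports in the zone for a broader check.
sudo firewall-cmd --zone=public --list-ports
```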

Deploying the Module

The Oracle Cloud Infrastructure Cloud Controller Manager module is used to provision both Oracle Cloud Infrastructure storage and application load balancers. This section guides you through installing each component required to deploy the Oracle Cloud Infrastructure Cloud Controller Manager module.

For the full list of the Platform CLI command options available when creating modules, see the olcnectl module create command in Platform Command-Line Interface.

To deploy the Oracle Cloud Infrastructure Cloud Controller Manager module:

  1. If you don't already have an environment set up, create one into which the modules can be deployed. For information on setting up an environment, see Getting Started. The name of the environment in this example is myenvironment.

  2. If you don't already have a Kubernetes module set up or deployed, set one up.

    For information on adding a Kubernetes module to an environment, see Kubernetes Module. The name of the Kubernetes module in this example is mycluster.

  3. Create an Oracle Cloud Infrastructure Cloud Controller Manager module and associate it with the Kubernetes module named mycluster using the --oci-ccm-kubernetes-module option. In this example, the Oracle Cloud Infrastructure Cloud Controller Manager module is named myoci.

    olcnectl module create \
    --environment-name myenvironment \
    --module oci-ccm \
    --name myoci \
    --oci-ccm-kubernetes-module mycluster \
    --oci-region us-ashburn-1 \
    --oci-tenancy ocid1.tenancy.oc1..unique_ID \
    --oci-compartment ocid1.compartment.oc1..unique_ID \
    --oci-user ocid1.user.oc1..unique_ID \
    --oci-fingerprint b5:52:... \
    --oci-private-key-file /home/opc/.oci/oci_api_key.pem \
    --oci-vcn ocid1.vcn.oc1..unique_ID \
    --oci-lb-subnet1 ocid1.subnet.oc1..unique_ID 

    The --module option sets the module type to create, which is oci-ccm. You define the name of the Oracle Cloud Infrastructure Cloud Controller Manager module using the --name option, which in this case is myoci.

    The --oci-ccm-kubernetes-module option sets the name of the Kubernetes module.

    The --oci-region option sets the Oracle Cloud Infrastructure region to use. The region in this example is us-ashburn-1.

    The --oci-tenancy option sets the OCID for the tenancy.

    The --oci-compartment option sets the OCID for the compartment.

    The --oci-user option sets the OCID for the user.

    The --oci-fingerprint option sets the fingerprint for the public key for the Oracle Cloud Infrastructure API signing key.

    The --oci-private-key-file option sets the location of the private key for the Oracle Cloud Infrastructure API signing key. This file must be on the operator node.

    The --oci-vcn option sets the OCID for the VCN on which to create load balancers. You don't need to include this option if you're not using a load balancer.

    The --oci-lb-subnet1 option sets the OCID for the VCN subnet on which to create load balancers. You don't need to include this option if you aren't using a load balancer.

    To set up high availability for a load balancer, provide a second subnet on a different availability domain using the --oci-lb-subnet2 option. For example:

    --oci-lb-subnet2 ocid1.subnet.oc1..unique_ID \

    If you don't include all the required options when adding the module, you're prompted to provide them.

  4. Use the olcnectl module install command to install the Oracle Cloud Infrastructure Cloud Controller Manager module. For example:

    olcnectl module install \
    --environment-name myenvironment \
    --name myoci

    The Oracle Cloud Infrastructure Cloud Controller Manager module is deployed into the Kubernetes cluster.

Verifying the Module Deployment

You can verify the Oracle Cloud Infrastructure Cloud Controller Manager module is deployed using the olcnectl module instances command on the operator node. For example:

olcnectl module instances \
--environment-name myenvironment
INSTANCE     MODULE        STATE
mycluster    kubernetes    installed
myoci        oci-ccm       installed
...

Note the entry for oci-ccm in the MODULE column is in the installed state.

In addition, use the olcnectl module report command to review information about the module. For example, use the following command to review the Oracle Cloud Infrastructure Cloud Controller Manager module named myoci in myenvironment:

olcnectl module report \
--environment-name myenvironment \
--name myoci \
--children

For more information on the syntax for the olcnectl module report command, see Platform Command-Line Interface.
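You can also inspect the module's workloads directly with kubectl on a control plane node. The exact pod names depend on the deployment, but the controller manager components typically run in the kube-system namespace (the filter below is an assumption, not a documented label):

```shell
# List kube-system pods whose names mention "oci"; adjust the filter if
# the deployment uses different names.
kubectl get pods -n kube-system | grep -i oci
```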

On a control plane node, verify that the oci-bv StorageClass for the Oracle Cloud Infrastructure storage provisioner is created using the kubectl get sc command:

kubectl get sc

The output looks similar to:

NAME     PROVISIONER                       RECLAIMPOLICY   VOLUMEBINDINGMODE      ...
oci-bv   blockvolume.csi.oraclecloud.com   Delete          WaitForFirstConsumer   ...

You can get more details about the StorageClass using the kubectl describe sc command. For example:

kubectl describe sc oci-bv

The output looks similar to:

Name:                  oci-bv
IsDefaultClass:        No
Annotations:           meta.helm.sh/release-name=myoci,meta.helm.sh/release-namespace=default
Provisioner:           blockvolume.csi.oraclecloud.com
Parameters:            <none>
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     WaitForFirstConsumer
Events:                <none>
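As a usage sketch, a PersistentVolumeClaim can request a block volume from the oci-bv StorageClass. The claim name and size below are example values (Oracle Cloud Infrastructure block volumes have a 50 GB minimum size); apply the manifest on the cluster with kubectl:

```shell
# Write an example PersistentVolumeClaim manifest that requests a block
# volume from the oci-bv StorageClass.
cat > /tmp/oci-bv-pvc.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc                  # example claim name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: oci-bv
  resources:
    requests:
      storage: 50Gi            # OCI block volumes have a 50 GB minimum
EOF

# On a control plane node, apply the claim:
#   kubectl apply -f /tmp/oci-bv-pvc.yaml
```

Because the StorageClass uses the WaitForFirstConsumer volume binding mode, the claim stays Pending until a pod that uses it is scheduled.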