Deploy Oracle Cloud Native Environment

Introduction

Oracle Cloud Native Environment is a fully integrated suite for the development and management of cloud native applications. The Kubernetes module is the core module: it deploys and manages containers, and it automatically installs and configures CRI-O, runC, and Kata Containers. CRI-O manages the container runtime for a Kubernetes cluster; the runtime may be either runC or Kata Containers.

Objectives

This tutorial shows you how to install and set up Oracle Cloud Native Environment Release 1.4.

In this tutorial, you configure a cluster with a single control plane node. You also configure X.509 Private CA Certificates to secure the communication between the nodes and for the externalIPs Kubernetes service. There are other methods to manage and deploy the certificates, such as using the HashiCorp Vault secrets manager, or using your own certificates signed by a trusted Certificate Authority (CA). These other methods are not included in this tutorial.

Prerequisites

This section lists the host systems used to perform the steps in this tutorial. You need:

Set up the Operator Node

The operator node performs and manages the deployment of environments, including deploying the Kubernetes cluster. An operator node may be a node in the Kubernetes cluster, or a separate host. In this tutorial, the operator node is a separate host. On the operator node, install the Platform CLI, Platform API Server, and utilities. Enable the olcne-api-server service, but do not start it.

If you are using Oracle Linux 7, do the following:

sudo yum install olcnectl olcne-api-server olcne-utils
sudo systemctl enable olcne-api-server.service

If you are using Oracle Linux 8, do the following:

sudo dnf install olcnectl olcne-api-server olcne-utils
sudo systemctl enable olcne-api-server.service

Set up the Kubernetes Nodes

Perform these steps on both Kubernetes control plane and worker nodes. Install the Platform Agent package and utilities. Enable the olcne-agent service, but do not start it.

If you are using Oracle Linux 7, do the following:

sudo yum install olcne-agent olcne-utils
sudo systemctl enable olcne-agent.service

If you are using Oracle Linux 8, do the following:

sudo dnf install olcne-agent olcne-utils
sudo systemctl enable olcne-agent.service

If you use a proxy server, configure CRI-O to use it. On each Kubernetes node, create a CRI-O systemd configuration directory. In that directory, create a file named proxy.conf and add the proxy server information.

sudo mkdir /etc/systemd/system/crio.service.d
sudo vi /etc/systemd/system/crio.service.d/proxy.conf

Use the following example proxy.conf file, substituting the proxy values appropriate for your environment:

[Service]
Environment="HTTP_PROXY=proxy.example.com:80"
Environment="HTTPS_PROXY=proxy.example.com:80"
Environment="NO_PROXY=.example.com,192.0.2.*"

If the docker service or the containerd service is running, stop and disable them.

sudo systemctl disable --now docker.service
sudo systemctl disable --now containerd.service

Set up X.509 Private CA Certificates

Use the /etc/olcne/gen-certs-helper.sh script to generate a private CA and certificates for the nodes. Run the script from the /etc/olcne directory on the operator node; it saves the certificate files in the current directory. Use the --nodes option followed by the nodes for which you want to create certificates. Create a certificate for each node that runs the Platform API Server or Platform Agent, that is, the operator node and each Kubernetes node. Provide the private CA information using the --cert-request* options. Some of these options are shown in the example. You can list all command options with the gen-certs-helper.sh --help command.

For the --cert-request-common-name option, provide the appropriate Domain Name System (DNS) Domain Name for your environment. For the --nodes option value, provide the fully qualified domain name (FQDN) of your operator, control plane, and worker nodes.

cd /etc/olcne
sudo ./gen-certs-helper.sh \
--cert-request-organization-unit "My Company Unit" \
--cert-request-organization "My Company" \
--cert-request-locality "My Town" \
--cert-request-state "My State" \
--cert-request-country US \
--cert-request-common-name example.com \
--nodes operator.example.com,control.example.com,worker.example.com
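
After the script completes, you can inspect any of the generated certificates with openssl. The following is a self-contained sketch that creates a throwaway certificate in /tmp to demonstrate the inspection command; in your environment, skip the first step and point openssl at a generated file, such as /etc/olcne/configs/certificates/production/node.cert.

```shell
# Create a throwaway certificate to demonstrate the inspection command.
# (In your environment, skip this step and use the real certificate file.)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo-node.key -out /tmp/demo-node.cert \
  -subj "/CN=demo.example.com"

# Print the subject and validity dates of the certificate.
openssl x509 -noout -subject -dates -in /tmp/demo-node.cert
```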

Transfer Certificates

You ran the /etc/olcne/gen-certs-helper.sh script on the operator node to generate the private CA and certificates for the nodes. Make sure the operator node has passwordless ssh access to the Kubernetes control plane and worker nodes (setting this up is not shown in this tutorial), then run the following command on the operator node to transfer the certificates from the operator node to the Kubernetes nodes.

bash -ex /etc/olcne/configs/certificates/olcne-tranfer-certs.sh

Configure the Platform API Server to Use the Certificates

On the operator node, run the /etc/olcne/bootstrap-olcne.sh script as shown to configure the Platform API Server to use the certificates. Alternatively, you can use certificates managed by HashiCorp Vault. This method is not included in this tutorial.

sudo /etc/olcne/bootstrap-olcne.sh \
--secret-manager-type file \
--olcne-node-cert-path /etc/olcne/configs/certificates/production/node.cert \
--olcne-ca-path /etc/olcne/configs/certificates/production/ca.cert \
--olcne-node-key-path /etc/olcne/configs/certificates/production/node.key \
--olcne-component api-server

Configure the Platform Agent to Use the Certificates

On each Kubernetes node, run the /etc/olcne/bootstrap-olcne.sh script as shown to configure the Platform Agent to use the certificates. Alternatively, you can use certificates managed by HashiCorp Vault. This method is not included in this tutorial.

sudo /etc/olcne/bootstrap-olcne.sh \
--secret-manager-type file \
--olcne-node-cert-path /etc/olcne/configs/certificates/production/node.cert \
--olcne-ca-path /etc/olcne/configs/certificates/production/ca.cert \
--olcne-node-key-path /etc/olcne/configs/certificates/production/node.key \
--olcne-component agent

Set up Certificates for the externalIPs Kubernetes Service

The externalip-validation-webhook-service Kubernetes service requires that X.509 certificates are set up prior to deploying Kubernetes. Generate the certificates using the gen-certs-helper.sh script. On the operator node, run:

cd /etc/olcne
sudo ./gen-certs-helper.sh \
--cert-dir /etc/olcne/configs/certificates/restrict_external_ip \
--cert-request-organization-unit "My Company Unit" \
--cert-request-organization "My Company" \
--cert-request-locality "My Town" \
--cert-request-state "My State" \
--cert-request-country US \
--cert-request-common-name cloud.example.com \
--nodes externalip-validation-webhook-service.externalip-validation-system.svc,externalip-validation-webhook-service.externalip-validation-system.svc.cluster.local \
--one-cert
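
Because --one-cert produces a single certificate covering both service names, the certificate must carry both DNS names as subjectAltName entries. The following self-contained sketch shows the openssl check using a throwaway certificate; in your environment, run the final command against /etc/olcne/configs/certificates/restrict_external_ip/production/node.cert instead. (The -addext and -ext options require OpenSSL 1.1.1 or later.)

```shell
# Create a throwaway certificate with SAN entries to demonstrate the check.
# (In your environment, skip this step and use the real certificate file.)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo-webhook.key -out /tmp/demo-webhook.cert \
  -subj "/CN=cloud.example.com" \
  -addext "subjectAltName=DNS:externalip-validation-webhook-service.externalip-validation-system.svc,DNS:externalip-validation-webhook-service.externalip-validation-system.svc.cluster.local"

# List the subjectAltName extension; both service names should appear.
openssl x509 -noout -ext subjectAltName -in /tmp/demo-webhook.cert
```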

Make sure the user who will run the olcnectl command is set as the owner of the /etc/olcne/configs/certificates/restrict_external_ip/ directory. In this example, the user and group are opc:opc:

sudo chown -R opc:opc /etc/olcne/configs/certificates/restrict_external_ip/
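
You can confirm the ownership with stat. The following is a minimal sketch against a demo directory; in your environment, run the stat command against /etc/olcne/configs/certificates/restrict_external_ip/ instead.

```shell
# Demo directory standing in for /etc/olcne/configs/certificates/restrict_external_ip/.
mkdir -p /tmp/demo-restrict-external-ip

# Print the owning user and group; this should match the user who runs olcnectl.
stat -c '%U:%G' /tmp/demo-restrict-external-ip
```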

Create a Deployment Configuration File

On the operator node, create a configuration file for your deployment. This file contains all the information required to create an environment and deploy the Kubernetes module.

You must provide the fully qualified domain name (FQDN) of your control plane and worker nodes.

The location of the private CA and certificates for the nodes is set in the globals section. You should also provide the location of the certificates for the externalip-validation-webhook-service Kubernetes service in the args section of the Kubernetes module.

The configuration file should be in YAML format, as shown. Typically, you only need to change the node names to match your environment.

The filename for this configuration file in this tutorial is myenvironment.yaml.

More information on how to create a configuration file is in the documentation at Using a Configuration File.

environments:
  - environment-name: myenvironment
    globals:
      api-server: 127.0.0.1:8091
      secret-manager-type: file
      olcne-ca-path: /etc/olcne/configs/certificates/production/ca.cert
      olcne-node-cert-path: /etc/olcne/configs/certificates/production/node.cert
      olcne-node-key-path: /etc/olcne/configs/certificates/production/node.key
    modules:
      - module: kubernetes
        name: mycluster
        args:
          container-registry: container-registry.oracle.com/olcne
          master-nodes: control.example.com:8090
          worker-nodes: worker.example.com:8090
          selinux: enforcing
          restrict-service-externalip: true
          restrict-service-externalip-ca-cert: /etc/olcne/configs/certificates/restrict_external_ip/production/ca.cert
          restrict-service-externalip-tls-cert: /etc/olcne/configs/certificates/restrict_external_ip/production/node.cert
          restrict-service-externalip-tls-key: /etc/olcne/configs/certificates/restrict_external_ip/production/node.key

Create the Environment

On the operator node, run the olcnectl environment create command to create the environment. Pass the location of the configuration file you created using the --config-file option.

olcnectl environment create --config-file myenvironment.yaml

Create the Kubernetes Module

On the operator node, run the olcnectl module create command to create the Kubernetes module.

olcnectl module create --config-file myenvironment.yaml

Validate the Kubernetes Module

On the operator node, use the olcnectl module validate command to validate that the nodes are configured correctly to deploy the Kubernetes module. If there are any errors, the commands required to fix the nodes are provided in the output of this command. In this example, there are no validation errors.

olcnectl module validate --config-file myenvironment.yaml

Install the Kubernetes Module

On the operator node, use the olcnectl module install command to deploy the Kubernetes module to the environment.

olcnectl module install --config-file myenvironment.yaml

The deployment of Kubernetes to the nodes may take several minutes to complete.

Validate the Deployment

On the operator node, verify that the Kubernetes module is deployed and the nodes are set up using the olcnectl module instances command.

olcnectl module instances --config-file myenvironment.yaml

The output should look similar to:

INSTANCE                      MODULE        STATE
control.example.com:8090      node          installed
worker.example.com:8090       node          installed
mycluster                     kubernetes    installed

Set up kubectl

On a control plane node, set up the kubectl command.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config
echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc

Verify you can use the kubectl command.

kubectl get nodes

The output should look similar to:

NAME                  STATUS   ROLES                  AGE   VERSION
control.example.com   Ready    control-plane,master   18m   ...
worker.example.com    Ready    <none>                 18m   ...

For More Information

More Learning Resources

Explore other labs on docs.oracle.com/learn or access more free learning content on the Oracle Learning YouTube channel. Additionally, visit education.oracle.com/learning-explorer to become an Oracle Learning Explorer.

For product documentation, visit Oracle Help Center.