Deploy Oracle Cloud Native Environment

Introduction

Oracle Cloud Native Environment is a fully integrated suite for developing and managing cloud-native applications. The Kubernetes module is the core module. It deploys and manages containers and automatically installs and configures CRI-O, runC, and Kata Containers. CRI-O manages the container runtime for a Kubernetes cluster, which may be either runC or Kata Containers.

Objectives

This tutorial demonstrates how to:

  • Install and configure Oracle Cloud Native Environment on a set of Oracle Linux nodes
  • Prepare the nodes with the required repositories, firewall rules, and X.509 certificates
  • Deploy the Kubernetes module and verify the cluster

Prerequisites

Deploy Oracle Cloud Native Environment

Note: If running in your own tenancy, read the linux-virt-labs GitHub project README.md and complete the prerequisites before deploying the lab environment.

  1. Open a terminal on the Luna Desktop.

  2. Clone the linux-virt-labs GitHub project.

    git clone https://github.com/oracle-devrel/linux-virt-labs.git
    
  3. Change into the working directory.

    cd linux-virt-labs/ocne
    
  4. Install the required collections.

    ansible-galaxy collection install -r requirements.yaml
    
  5. Deploy the lab environment.

    ansible-playbook create_instance.yaml -e ansible_python_interpreter="/usr/bin/python3.6" -e ocne_type=none
    

    The free lab environment requires the extra variable ansible_python_interpreter because it installs the RPM package for the Oracle Cloud Infrastructure SDK for Python, which places its modules under the python3.6 site packages.
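
    If the playbook fails on an Oracle Cloud Infrastructure task, you can confirm that the SDK imports cleanly under that interpreter; a minimal check (oci is the SDK's standard import name):

    /usr/bin/python3.6 -c "import oci; print(oci.__version__)"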

    Important: Wait for the playbook to run successfully and reach the pause task. The Oracle Cloud Native Environment installation is complete at this stage of the playbook, and the instances are ready. Take note of the previous play, which prints the public and private IP addresses of the nodes it deploys.

Update Oracle Linux

  1. Open a terminal and connect via SSH to the ocne-operator node.

    ssh oracle@<ip_address_of_ol_node>
    
  2. Update Oracle Linux and reboot all the nodes.

    for host in ocne-control-01 ocne-worker-01 ocne-worker-02 ocne-operator
    do
      printf "======= $host =======\n\n" 
      ssh $host \
        "sudo dnf -y update; \
        sudo reboot"
    done 
    
    

    Depending on the number of packages to upgrade, this step may take a while to complete.

  3. Reconnect to the ocne-operator node via SSH after the reboot.

    ssh oracle@<ip_address_of_ol_node>
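
    Once the nodes accept connections again, you can confirm the reboots completed by checking each node's uptime:

    for host in ocne-control-01 ocne-worker-01 ocne-worker-02 ocne-operator
    do
      printf "======= $host =======\n\n"
      ssh $host "uptime"
    done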
    

Install and Enable the Oracle Cloud Native Environment Yum Repository

  1. Install the Yum repository, enable the current Oracle Cloud Native Environment repository, and disable the previous versions on all the nodes.

    for host in ocne-control-01 ocne-worker-01 ocne-worker-02 ocne-operator
    do
      printf "======= $host =======\n\n" 
      ssh $host \
        "sudo dnf -y install oracle-olcne-release-el8; \
        sudo dnf config-manager --enable ol8_olcne18 ol8_addons ol8_baseos_latest ol8_appstream ol8_UEKR7; \
        sudo dnf config-manager --disable ol8_olcne12 ol8_olcne13 ol8_olcne14 ol8_olcne15 ol8_olcne16 ol8_olcne17 ol8_developer"
    done 
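
    You can confirm the repository changes on a node by listing the enabled repositories; for example:

    ssh ocne-control-01 "sudo dnf repolist --enabled | grep -E 'olcne|UEK'"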
    

Install and Enable Chrony

  1. Install and enable the chrony service on all the nodes.

    for host in ocne-control-01 ocne-worker-01 ocne-worker-02 ocne-operator
    do
      printf "======= $host =======\n\n" 
      ssh $host \
        "sudo dnf -y install chrony; \
        sudo systemctl enable --now chronyd"
    done 
    

    If the chrony package already exists on the system, dnf reports _Nothing to do._
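
    You can verify that a node's clock is synchronizing by querying the chrony daemon; for example:

    ssh ocne-control-01 "chronyc tracking"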

Disable Swap

  1. Disable swap on all the nodes.

    for host in ocne-control-01 ocne-worker-01 ocne-worker-02 ocne-operator
    do
      printf "======= $host =======\n\n" 
      ssh $host \
        "sudo swapoff -a; \
        sudo sed -i '/swap/ s/^#*/#/' /etc/fstab"
    done 
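
    With swap disabled, swapon prints no devices and free reports zero swap; a quick confirmation on any node:

    ssh ocne-control-01 "sudo swapon --show; free -m | grep -i swap"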
    

Configure the Oracle Linux Firewall

The firewalld service is installed and running by default on Oracle Linux.

  1. Set the firewall rules for the operator node.

    sudo firewall-cmd --add-port=8091/tcp --permanent
    sudo firewall-cmd --reload
    
  2. Set the firewall rules for the control plane node(s).

    ssh ocne-control-01 \
      "sudo firewall-cmd --zone=trusted --add-interface=cni0 --permanent; \
       sudo firewall-cmd --add-port=8090/tcp --permanent; \
       sudo firewall-cmd --add-port=10250/tcp --permanent; \
       sudo firewall-cmd --add-port=10255/tcp --permanent; \
       sudo firewall-cmd --add-port=8472/udp --permanent; \
       sudo firewall-cmd --add-port=6443/tcp --permanent; \
       sudo firewall-cmd --reload"
    
  3. Add the following rules to the control plane node(s) to support high availability and pass the validation tests.

    ssh ocne-control-01 \
      "sudo firewall-cmd --add-port=10251/tcp --permanent; \
       sudo firewall-cmd --add-port=10252/tcp --permanent; \
       sudo firewall-cmd --add-port=2379/tcp --permanent; \
       sudo firewall-cmd --add-port=2380/tcp --permanent; \
       sudo firewall-cmd --reload"
    
  4. Set the firewall rules for the worker node(s).

    for host in ocne-worker-01 ocne-worker-02
    do
      printf "======= $host =======\n\n" 
      ssh $host \
        "sudo firewall-cmd --zone=trusted --add-interface=cni0 --permanent; \
         sudo firewall-cmd --add-port=8090/tcp --permanent; \
         sudo firewall-cmd --add-port=10250/tcp --permanent; \
         sudo firewall-cmd --add-port=10255/tcp --permanent; \
         sudo firewall-cmd --add-port=8472/udp --permanent; \
         sudo firewall-cmd --reload"
    done 
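
    After the reload, you can list the open ports on a node to confirm the rules are active; for example:

    ssh ocne-worker-01 "sudo firewall-cmd --list-ports"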
    

Load the Bridge Filtering Module

  1. Load the module on the control plane and worker nodes, and configure it to load at boot.

    for host in ocne-control-01 ocne-worker-01 ocne-worker-02
    do
      printf "======= $host =======\n\n" 
      ssh $host \
        "sudo modprobe br_netfilter; \
        sudo sh -c 'echo "\""br_netfilter"\"" > /etc/modules-load.d/br_netfilter.conf'"
    done 
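
    To verify that the module is loaded, check lsmod on each node; for example:

    ssh ocne-control-01 "lsmod | grep br_netfilter"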
    

Set up the Operator Node

The operator node performs and manages environment deployments, including the Kubernetes cluster. It may be a node in the Kubernetes cluster or a separate host, as in this tutorial. Install the Oracle Cloud Native Environment Platform CLI, Platform API Server, and utilities on the operator node.

  1. Install the Platform CLI, Platform API Server, and utilities.

    sudo dnf -y install olcnectl olcne-api-server olcne-utils
    
  2. Enable the API server service, but do not start it.

    sudo systemctl enable olcne-api-server.service
    

Set up the Kubernetes Nodes

The Kubernetes control plane and worker nodes contain the Oracle Cloud Native Environment Platform Agent and utility packages.

  1. Install the Platform Agent package and utilities.

    for host in ocne-control-01 ocne-worker-01 ocne-worker-02
    do
      printf "======= $host =======\n\n" 
      ssh $host \
        "sudo dnf -y install olcne-agent olcne-utils"
    done    
    
  2. Enable the agent service, but do not start it.

    for host in ocne-control-01 ocne-worker-01 ocne-worker-02
    do
      printf "======= $host =======\n\n" 
      ssh $host \
        "sudo systemctl enable olcne-agent.service"
    done    
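
    At this point the service reports enabled but remains inactive; a quick confirmation on any node:

    ssh ocne-control-01 "systemctl is-enabled olcne-agent.service"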
    

These initial steps complete the setup and software installation for each node.

(Optional) Proxy Server Configuration

If you use a proxy server, configure CRI-O on each Kubernetes node to use it.

Note: This is not required in the free lab environment.

  1. Create a systemd drop-in directory for the CRI-O service.

    for host in ocne-control-01 ocne-worker-01 ocne-worker-02 ocne-operator
    do
      printf "======= $host =======\n\n" 
      ssh $host \
        "sudo mkdir /etc/systemd/system/crio.service.d"
    done    
    
  2. Create the proxy configuration file and substitute the appropriate proxy values for those in your environment.

    for host in ocne-control-01 ocne-worker-01 ocne-worker-02 ocne-operator
    do
    printf "======= $host =======\n\n" 
    ssh $host "sudo tee /etc/systemd/system/crio.service.d/proxy.conf > /dev/null" <<-'MOD'
    [Service]
    Environment="HTTP_PROXY=proxy.example.com:80"
    Environment="HTTPS_PROXY=proxy.example.com:80"
    Environment="NO_PROXY=.example.com,192.0.2.*"
    MOD
    done
    

    This script is not indented because the heredoc delimiter (MOD) must appear at the beginning of a line. The <<- form allows indentation with Tab characters, but copying and pasting typically converts tabs to spaces.
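
    If the crio.service unit already exists on a node, systemd only picks up the new drop-in after a reload; a quick way to apply it on each node:

    ssh ocne-control-01 "sudo systemctl daemon-reload"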

Set up X.509 Private CA Certificates

Use the olcnectl certificates distribute command to generate and distribute a private CA and certificates for the nodes.

  1. Add the oracle user to the olcne group.

    sudo usermod -a -G olcne oracle
    
  2. Log off the ocne-operator node and then connect again using SSH.

    exit
    
    ssh oracle@<ip_address_of_ol_node>
    
  3. Generate and distribute the node certificates.

    olcnectl certificates distribute --nodes ocne-operator,ocne-control-01,ocne-worker-01,ocne-worker-02
    
    • --nodes: Provide the fully qualified domain name (FQDN), hostname, or IP address of your operator, control plane, and worker nodes.
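
    The distribute command places the CA certificate and per-node certificates under /etc/olcne/certificates on each node, the same paths the Platform CLI configuration file references later. You can confirm the files are in place:

    for host in ocne-operator ocne-control-01 ocne-worker-01 ocne-worker-02
    do
      printf "======= $host =======\n\n"
      ssh $host "sudo ls /etc/olcne/certificates"
    done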

Set up X.509 Certificates for the ExternalIPs Kubernetes Service

The externalip-validation-webhook-service Kubernetes service requires setting up X.509 certificates before deploying Kubernetes.

  1. Generate the certificates.

    olcnectl certificates generate \
    --nodes externalip-validation-webhook-service.externalip-validation-system.svc,\
    externalip-validation-webhook-service.externalip-validation-system.svc.cluster.local \
    --cert-dir $HOME/certificates/restrict_external_ip/ \
    --byo-ca-cert $HOME/certificates/ca/ca.cert \
    --byo-ca-key $HOME/certificates/ca/ca.key \
    --one-cert
    
    • --byo-ca-*: These options use the previously created CA certificate and key.

    Note: The $HOME variable represents the location where this example executes the olcnectl certificates generate command. However, you can change this to any location using the --cert-dir option (see the documentation for more details).
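
    After the command completes, the target directory holds the combined certificate and key that the Kubernetes module configuration references later; a quick check:

    ls $HOME/certificates/restrict_external_ip/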

Bootstrap the Platform API Server

  1. Configure the Platform API Server to use the certificates on the operator node.

    sudo /etc/olcne/bootstrap-olcne.sh \
    --secret-manager-type file \
    --olcne-component api-server
    
  2. Confirm the Platform API Server is running.

    sudo systemctl status olcne-api-server.service
    

    If the systemctl status command does not exit automatically, type q to return to the prompt.

Bootstrap the Platform Agents

  1. Run the bootstrap script to configure the Platform Agent to use the certificates on the control plane and worker nodes.

    for host in ocne-control-01 ocne-worker-01 ocne-worker-02
    do
      printf "======= $host =======\n\n" 
      ssh $host \
        "sudo /etc/olcne/bootstrap-olcne.sh \
        --secret-manager-type file \
        --olcne-component agent"
    done    
    
  2. Confirm the Platform Agent is running on each node.

    for host in ocne-control-01 ocne-worker-01 ocne-worker-02
    do
      printf "======= $host =======\n\n" 
      ssh $host \
        "sudo systemctl status olcne-agent.service"
    done    
    

Create a Platform CLI Configuration File

Administrators can use a configuration file to simplify creating and managing environments and modules. The configuration file, written in valid YAML syntax, includes all information about the environments and modules to create. Using a configuration file saves repeated entries of Platform CLI command options.

More information on creating a configuration file is in the documentation at Using a Configuration File.

  1. Create a configuration file.

    cat << EOF | tee ~/myenvironment.yaml > /dev/null
    environments:
      - environment-name: myenvironment
        globals:
          api-server: 127.0.0.1:8091
          secret-manager-type: file
          olcne-ca-path: /etc/olcne/certificates/ca.cert
          olcne-node-cert-path: /etc/olcne/certificates/node.cert
          olcne-node-key-path:  /etc/olcne/certificates/node.key
        modules:
          - module: kubernetes
            name: mycluster
            args:
              container-registry: container-registry.oracle.com/olcne
              control-plane-nodes:
                - ocne-control-01:8090
              worker-nodes:
                - ocne-worker-01:8090
                - ocne-worker-02:8090
              selinux: enforcing
              restrict-service-externalip: true
              restrict-service-externalip-ca-cert: /home/oracle/certificates/ca/ca.cert
              restrict-service-externalip-tls-cert: /home/oracle/certificates/restrict_external_ip/node.cert
              restrict-service-externalip-tls-key: /home/oracle/certificates/restrict_external_ip/node.key
    EOF
    

Create the Environment and Kubernetes Module

  1. Create the environment.

    cd ~
    olcnectl environment create --config-file myenvironment.yaml
    
  2. Create the Kubernetes module.

    olcnectl module create --config-file myenvironment.yaml
    
  3. Validate the Kubernetes module.

    olcnectl module validate --config-file myenvironment.yaml
    

    In this example, there are no validation errors. If there are any errors, the output of this command provides the commands required to fix the nodes.

  4. Install the Kubernetes module.

    olcnectl module install --config-file myenvironment.yaml
    

    The deployment of Kubernetes to the nodes may take several minutes to complete.

  5. Validate the deployment of the Kubernetes module.

    olcnectl module instances --config-file myenvironment.yaml
    

    Example Output:

    [oracle@ocne-operator ~]$ olcnectl module instances --config-file myenvironment.yaml
    INSTANCE              MODULE      STATE
    mycluster             kubernetes  installed
    ocne-control-01:8090  node        installed
    ocne-worker-01:8090   node        installed
    ocne-worker-02:8090   node        installed
    

Set up kubectl

  1. Set up the kubectl command on the control plane node.

    ssh ocne-control-01 "mkdir -p $HOME/.kube; \
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config; \
    sudo chown $(id -u):$(id -g) $HOME/.kube/config; \
    export KUBECONFIG=$HOME/.kube/config; \
    echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc"
    
  2. Verify that kubectl works.

    ssh ocne-control-01 "kubectl get nodes"
    

    The output shows that each node in the cluster is ready, along with its current role and version.
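
    For a broader health check, you can also list the pods across all namespaces and confirm they reach the Running state; for example:

    ssh ocne-control-01 "kubectl get pods -A"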

More Learning Resources

Explore other labs on docs.oracle.com/learn or access more free learning content on the Oracle Learning YouTube channel. Additionally, visit education.oracle.com/learning-explorer to become an Oracle Learning Explorer.

For product documentation, visit Oracle Help Center.