Install Oracle Cloud Native Environment Manually on Oracle Cloud Infrastructure

In this tutorial, you will learn how to create and configure the Oracle Cloud Infrastructure (OCI) resources needed for a quick installation of the Oracle Cloud Native Environment on OCI and then perform the quick installation.

Introduction

Oracle Cloud Native Environment is a fully integrated suite for developing and managing cloud-native applications. The core module is the Kubernetes module. It deploys and manages containers and automatically installs and configures CRI-O, runC, and Kata Containers. CRI-O manages the container runtime for a Kubernetes cluster, which may be either runC or Kata Containers.

Oracle Cloud Native Environment Release 1.5.7 introduced the ability to use the Oracle Cloud Native Environment Platform CLI to perform a quick installation of Oracle Cloud Native Environment. The installation runs using the olcnectl provision command on an installation host (the operator node). The olcnectl provision command performs the following operations on the target nodes:

  * Sets up the Oracle Cloud Native Environment software package repositories.
  * Installs the Platform API Server on the operator node and the Platform Agent on the control plane and worker nodes.
  * Generates and distributes certificates for secure communication between the nodes.
  * Configures the operating system, including firewall rules, swap, and kernel parameters.
  * Creates an environment and a Kubernetes module, and installs the module to create the Kubernetes cluster.

To create more complex installation topologies, you can write your own Oracle Cloud Native Environment configuration file and then pass it to the olcnectl provision command using the --config-file option. For more information, see the Platform Command-Line Interface Guide.
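
For example, if your topology is described in a configuration file named myenvironment.yaml (the file name here is only an example), the whole environment can be provisioned with a single command:

# myenvironment.yaml is an example name; point this at your own configuration file.
olcnectl provision --config-file myenvironment.yaml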

Objectives

This lab demonstrates how to:

  * Create and configure the Virtual Cloud Network (VCN), compute instances, and load balancer in OCI.
  * Set up passwordless SSH from the operator node to the control and worker nodes.
  * Install the Platform CLI (olcnectl) on the operator node and perform a quick installation with the olcnectl provision command.
  * Confirm the installation and set up kubectl on a control node.

Prerequisites

An OCI account and a compartment in OCI.

Set Up Virtual Cloud Network

Configure the VCN on OCI for an Oracle Cloud Native Environment quick installation.

  1. Log in to the Oracle Cloud Console.

  2. Create a VCN.

    1. Open the navigation menu, click Networking, and then click Virtual cloud networks.

      vcn

    2. Click Create VCN.

      create-vcn-button

    3. Enter a name for your virtual cloud network.

    4. For Create In Compartment, select the desired compartment.

    5. For IPv4 CIDR Blocks, enter a valid CIDR range. For example, 10.0.0.0/16.

      vcn-creation

    6. Click Create VCN.

  3. Create a subnet for the VCN.

    1. On the Virtual cloud networks page, select your newly created VCN.

      vcn-location

    2. Click Create Subnet.

      subnet-location

    3. Enter a name for the subnet.

    4. For IPv4 CIDR Blocks, enter a valid subset of the VCN’s CIDR range. For example, 10.0.1.0/24.

      subnet

    5. Click Create Subnet.

  4. Create an internet gateway for the VCN.

    1. Under Resources, click Internet Gateways.

      gateway-location

    2. Click Create Internet Gateway.

      create-gateway-button

    3. Enter a name for the gateway.

      gateway

    4. Click Create Internet Gateway.

  5. Create a route rule for the internet gateway.

    1. Under Resources, click Route Tables.

      route-table-location

    2. Select the default route table. It should be named Default Route Table for <VCN name>.

      add-route-rule

    3. Click Add Route Rules.

      route-rule-button

    4. From Target Type, select Internet Gateway.

    5. For Destination CIDR Block, enter 0.0.0.0/0.

    6. For Target Internet Gateway, select the new internet gateway.

      route-rule

    7. Click Add Route Rules.

  6. Set up the security list rules for the VCN.

    1. Open the navigation menu, click Networking, and then click Virtual cloud networks.

      vcn

    2. Select the VCN created earlier.

      vcn-location

    3. Under Resources, click Security Lists.

      security-list-location

    4. Select the default security list. It should be named Default Security List for <VCN name>.

      security-list

    5. Select the ICMP protocol rule for source 10.0.0.0/16 and then click Edit.

    6. For IP Protocol, select All Protocols.

      rule-one

    7. Click Save changes.

    8. Select the TCP protocol rule and then click Edit.

    9. For Source CIDR, enter <your_public_ip_address>/32. This ensures that only your IP address can connect to the instances over SSH.

      rule-two

      Note: The value 0.0.0.0/32 shown here is a placeholder for your public IP address. You can find your public IP address by visiting a website such as whatismyip.org.

    10. Click Save changes.
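
If you prefer to script the network setup instead of using the console, the same resources can be created with the OCI CLI. The following is a minimal sketch, assuming the OCI CLI is installed and configured for your tenancy; the <compartment_ocid>, <vcn_ocid>, <internet_gateway_ocid>, and <route_table_ocid> placeholders and the cne-* display names are illustrative, and flag names can vary between CLI versions.

# Create the VCN with a 10.0.0.0/16 CIDR block.
oci network vcn create \
--compartment-id <compartment_ocid> \
--display-name cne-vcn \
--cidr-block 10.0.0.0/16

# Create a subnet inside the VCN.
oci network subnet create \
--compartment-id <compartment_ocid> \
--vcn-id <vcn_ocid> \
--display-name cne-subnet \
--cidr-block 10.0.1.0/24

# Create an internet gateway attached to the VCN.
oci network internet-gateway create \
--compartment-id <compartment_ocid> \
--vcn-id <vcn_ocid> \
--display-name cne-gateway \
--is-enabled true

# Route all outbound traffic (0.0.0.0/0) through the internet gateway.
# --force skips the confirmation prompt that replaces existing route rules.
oci network route-table update \
--rt-id <route_table_ocid> \
--route-rules '[{"destination": "0.0.0.0/0", "networkEntityId": "<internet_gateway_ocid>"}]' \
--force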

Set Up Instances

Create a minimum of five OCI instances to use as the Oracle Cloud Native Environment nodes: one operator node, two Kubernetes control plane nodes, and two Kubernetes worker nodes.

  1. Create an instance.

    Note: Use default values, unless otherwise mentioned.

    1. In the Oracle Cloud Console, open the navigation menu. Click Compute, and then click Instances.

      instance-navigate

    2. Click Create Instance.

      create-instance-button

    3. Enter a name for the instance that identifies which node it will be (such as operator-node-1, control-node-1, worker-node-1, and so on).

      instance-one

    4. Select the image to use. By default, the image will be Oracle Linux 8. You may also choose Oracle Linux 9.

      instance-two

      Important Note: All the instances must use the same shape series. Oracle Cloud Native Environment 1.7 and earlier does not support the ARM architecture.

    5. Under Primary VNIC information, select the VCN and subnet created earlier.

      instance-three

    6. Under Add SSH keys, click Save private key. You will need the private key later to connect to the instance using SSH.

      instance-four

      Note: For the operator node, disable the OS Management Service Agent. Click Show advanced options at the bottom of the instance creation page, click the Oracle Cloud Agent tab, and then toggle the OS Management Service Agent off.

      advanced-options

    7. Click Create.

  2. Repeat until you have created five instances. Make sure to save the private key for each instance.

  3. Add the public IP addresses of the instances to the security list rules for the VCN.

    1. Open the navigation menu, click Networking, and then click Virtual cloud networks.

      vcn

    2. Select the VCN created earlier.

      vcn-location

    3. Under Resources, click Security Lists.

      security-list-location

    4. Select the default security list. It should be named something like Default Security List for <VCN name>.

      security-list

    5. Click Add Ingress Rules.

      add-ingress-rules

    6. For IP Protocol, select All Protocols.

    7. For Source CIDR, enter the public IP address of one of your instances as a /32 CIDR. You can find the public IP address for each instance on the Instances page in OCI, or with the CLI sketch after this list.

    8. Click Save changes.

    9. Repeat until you have added an ingress rule for each instance, as shown below.

      io-error
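
If you prefer the CLI for the public IP lookup in step 7, the following is a minimal sketch; the <instance_ocid> placeholder is illustrative and the query assumes the instance has a single VNIC.

# Print the public IP address of the instance's primary VNIC.
oci compute instance list-vnics \
--instance-id <instance_ocid> \
--query 'data[0]."public-ip"' \
--raw-output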

Set Up Load Balancer

Create a load balancer to ensure that the Oracle Cloud Native Environment deployment is highly available.

  1. Create a load balancer.

    1. In the Oracle Cloud Console, open the navigation menu. Click Networking, and then click Load balancer.

      lb-navigate

    2. Click Create load balancer.

      create-load-balancer

    3. Select the VCN and subnet created earlier.

      lb-vcn

    4. Click Next.

    5. Click Add backends.

    6. Select the two control node instances created earlier.

      lb-backend

    7. Click Add selected backends.

    8. Change the port for the instances to 6443.

      lb-backend-two

    9. Under Specify health check policy, select TCP and set the port to 6443.

      lb-healthcheck

    10. Click Next.

    11. For Specify the type of traffic that your listener handles, select TCP.

    12. For Specify the port your listener monitors for ingress traffic, enter 6443.

      lb-listener

    13. Click Next.

    14. For Log Group, select the default group. Its name should contain something like Default_Group, but this may vary between compartments.

      lb-loggroup

    15. Click Submit.

  2. Add a rule for the load balancer to the security list of the VCN.

    1. Open the navigation menu, click Networking, and then click Virtual cloud networks.

      vcn

    2. Select the VCN created earlier.

      vcn-location

    3. Under Resources, click Security Lists.

      security-list-location

    4. Select the default security list. It should be named something like Default Security List for <VCN name>.

      security-list

    5. Click Add Ingress Rules.

      add-ingress-rules

    6. For Source CIDR, enter the load balancer’s public IP address as a /32 CIDR. You can find this value on the Load Balancers page, or with the CLI sketch after this list.

    7. For IP Protocol, select TCP.

    8. For Destination Port Range, enter 6443. Leave Source Port Range empty to allow all source ports.

      lb-security-list

    9. Click Add Ingress Rules.
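
As an alternative to the console for finding the load balancer’s public IP address in step 6, the following is a minimal OCI CLI sketch; the <compartment_ocid> placeholder is illustrative and output field names can vary between CLI versions.

# List the load balancers in the compartment; the public IP appears under ip-addresses.
oci lb load-balancer list \
--compartment-id <compartment_ocid>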

Set Up Passwordless SSH for Nodes

To successfully deploy Oracle Cloud Native Environment, the operator node must be able to SSH into the control and worker nodes.

Important Note: Replace the placeholder values with their actual values when running commands. For example, replace <operator_node_public_ip> with the public IP of your operator node.

  1. Open a terminal and connect with SSH to the operator node. You can find the public IP address for the node on the Instances page in OCI. The private key is the key file you downloaded when creating the instance.

    ssh opc@<operator_node_public_ip> -i <operator_node_private_key>
    
  2. On the operator node, generate an SSH key pair to use for communicating with the worker and control nodes. Press Enter to accept the default file location for the key, and leave the passphrase empty.

    ssh-keygen -t rsa
    

    This will generate a public and private key in the /home/<username>/.ssh directory where id_rsa is the private key and id_rsa.pub is the public key.

  3. Use the cat command to print the contents of the public key. The public key content is used in the following steps as <public_key_content>.

    cat ~/.ssh/id_rsa.pub
    

    Use the echo command to append the contents of the public key to the authorized_keys file on the operator node.

    echo '<public_key_content>' >> ~/.ssh/authorized_keys
    
  4. For each control and worker node, add the public key:

    SSH into the control or worker node from your local machine. Use the private key downloaded when creating the instance.

    ssh opc@<node_public_ip> -i <node_private_key>
    

    Use the echo command to append the contents of the public key to the authorized_keys file.

    echo '<public_key_content>' >> ~/.ssh/authorized_keys
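
Before moving on, you can optionally confirm that key-based SSH works from the operator node to the other nodes. The private IP placeholders below are illustrative; each command should print the remote hostname without prompting for a password (you may be asked to confirm the host key on the first connection).

ssh opc@<control_node1_private_ip> hostname
ssh opc@<worker_node1_private_ip> hostname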
    

Set Up the Install Host (operator node) on Oracle Linux

Configure the Oracle Linux host (operator node) for the quick installation of Oracle Cloud Native Environment. Follow the steps that match the instance operating system.

Oracle Linux 8

  1. On the operator node, install the oracle-olcne-release-el8 release package.

    sudo dnf -y install oracle-olcne-release-el8
    
  2. Enable the current Oracle Cloud Native Environment repository.

    sudo dnf config-manager --enable ol8_olcne18 ol8_addons ol8_baseos_latest ol8_appstream ol8_kvm_appstream ol8_UEKR7
    
  3. Disable all previous repository versions.

    sudo dnf config-manager --disable ol8_olcne17 ol8_olcne16 ol8_olcne15 ol8_olcne14 ol8_olcne13 ol8_olcne12
    
  4. Disable any developer yum repositories. First, list the name of all active developer repositories.

    sudo dnf repolist --enabled | grep developer
    

    Disable each of the listed repositories.

    sudo dnf config-manager --disable <repository_name>
    
  5. Install the olcnectl software package.

    sudo dnf -y install olcnectl
    

Oracle Linux 9

  1. On the operator node, install the oracle-olcne-release-el9 release package.

    sudo dnf -y install oracle-olcne-release-el9
    
  2. Enable the current Oracle Cloud Native Environment repository.

    sudo dnf config-manager --enable ol9_olcne18 ol9_addons ol9_baseos_latest ol9_appstream ol9_UEKR7
    
  3. Disable any developer yum repositories. First, list the name of all active developer repositories.

    sudo dnf repolist --enabled | grep developer
    

    Disable each of the listed repositories.

    sudo dnf config-manager --disable <repository_name>
    
  4. Install the olcnectl software package.

    sudo dnf -y install olcnectl
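
On either Oracle Linux release, you can optionally confirm that the Platform CLI package installed correctly and that the olcnectl binary is available before continuing:

# Show the installed olcnectl package version.
rpm -q olcnectl

# List the available olcnectl commands.
olcnectl --help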
    

Perform a Quick Install

The following steps describe the fastest method to set up a basic deployment of Oracle Cloud Native Environment and install a Kubernetes cluster. It requires a minimum of five nodes:

  * An operator node to run the Platform API Server and the Platform CLI.
  * Two Kubernetes control plane nodes.
  * Two Kubernetes worker nodes.

Run olcnectl provision on the Operator Node

On the operator node, use the olcnectl provision command to begin the installation. This operation can take 10-15 minutes to complete, and after you confirm the prompts there is no visible indication of progress until it finishes.

olcnectl provision \
--api-server <operator_node_name> \
--control-plane-nodes <control_node1_name>,<control_node2_name> \
--worker-nodes <worker_node1_name>,<worker_node2_name> \
--environment-name <environment_name> \
--name <cluster_name> \
--load-balancer <load_balancer_public_ip>:6443 \
--selinux enforcing

Where:

  * --api-server is the hostname of the operator node, where the Platform API Server is installed.
  * --control-plane-nodes is a comma-separated list of the hostnames of the Kubernetes control plane nodes.
  * --worker-nodes is a comma-separated list of the hostnames of the Kubernetes worker nodes.
  * --environment-name is a name to identify the environment.
  * --name is a name to identify the Kubernetes module (the cluster).
  * --load-balancer is the public IP address and port of the load balancer created earlier.
  * --selinux enforcing specifies that the nodes run SELinux in enforcing mode rather than the default permissive mode.

Note: When executing this command, a prompt is displayed that lists the changes to be made to the hosts and asks for confirmation. To avoid this prompt, use the --yes option. This option sets the olcnectl provision command to assume that the answer to any confirmation prompt is affirmative (yes).

Example Output: The following output is from a run without the --yes option:

 [opc@operator-node-1 ~]$ olcnectl provision \
 --api-server operator-node-1 \
 --control-plane-nodes control-node-1,control-node-2 \
 --worker-nodes worker-node-1,worker-node-2 \
 --environment-name cne1-ha-env \
 --name cne1-ha-cluster \
 --load-balancer 129.153.192.253:6443 \
 --selinux enforcing
 INFO[29/11/23 16:34:59] Using existing certificate authority
 INFO[29/11/23 16:34:59] Creating directory "/etc/olcne/certificates/" on operator-node-1
 INFO[29/11/23 16:34:59] Copying local file at "certificates/ca/ca.cert" to "/etc/olcne/certificates/ca.cert" on operator-node-1
 INFO[29/11/23 16:34:59] Copying local file at "certificates/operator-node-1/node.cert" to "/etc/olcne/certificates/node.cert" on operator-node-1
 INFO[29/11/23 16:34:59] Copying local file at "certificates/operator-node-1/node.key" to "/etc/olcne/certificates/node.key" on operator-node-1
 INFO[29/11/23 16:34:59] Creating directory "/etc/olcne/certificates/" on control-node-1
 ...
 ? Apply api-server configuration on operator-node-1:
 * Install oracle-olcne-release
 * Enable olcne18 repo
 * Install API Server
 * Add firewall port 8091/tcp
 
 Proceed? yes/no(default) yes
 ...
 ? Apply worker configuration on worker-node-2:
 * Install oracle-olcne-release
 * Enable olcne18 repo
 * Configure firewall rule:
    Add interface cni0 to trusted zone
    Add ports: 8090/tcp 10250/tcp 10255/tcp 9100/tcp 8472/udp
 * Disable swap
 * Load br_netfilter module
 * Load Bridge Tunable Parameters:
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
 * Set SELinux to permissive
 * Install and enable olcne-agent

 Proceed? yes/no(default) yes

Troubleshooting Errors

If the olcnectl provision command fails with an error creating the kubeadm join configuration, connect to each control node using SSH and reset kubeadm by running:

sudo kubeadm reset

Confirm the Installation

The Oracle Cloud Native Environment platform and Kubernetes cluster software is now installed and configured on all of the nodes.

On the operator node, confirm the installation:

olcnectl module instances \
--environment-name <environment_name>

Example Output:

[oracle@ocne-operator ~]$ olcnectl module instances --environment-name myenvironment
INSTANCE                MODULE          STATE
mycluster               kubernetes      installed
worker-node-1:8090      node            installed
worker-node-2:8090      node            installed
control-node-1:8090     node            installed
control-node-2:8090     node            installed

Copy the Certificates

To run olcnectl module commands on the operator node, copy the certificates to the appropriate folder using the cp command.

On the operator node, run:

cp -rp /home/opc/.olcne/certificates/<operator_node_name>:8091/* /home/opc/.olcne/certificates/

This allows you to run olcnectl commands for environment operations and commands for modifying modules.
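
To verify the copy, list the certificates directory on the operator node; the exact file names can vary, but you should see the certificate and key files that were copied from the <operator_node_name>:8091 directory.

ls ~/.olcne/certificates/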

Get More Deployment Information with the olcnectl module report Command

On the operator node, run:

olcnectl module report \
--environment-name <environment_name> \
--name <cluster_name> \
--children \
--format yaml

Example Output:

[oracle@ocne-operator ~]$ olcnectl module report --environment-name myenvironment --name mycluster  --children --format yaml
Environments:
  myenvironment:
    ModuleInstances:
    - Name: mycluster
      Properties:
      - Name: worker:ocne-worker:8090
      - Name: podnetworking
        Value: running
      - Name: extra-node-operations
      - Name: cloud-provider
      - Name: module-operator
        Value: running
      - Name: extra-node-operations-update
        Value: running
      - Name: status_check
        Value: healthy
      - Name: kubectl
      - Name: master:ocne-control:8090
      - Name: kubecfg
        Value: file exist
...
        Properties:
        - Name: kubeadm
          Value: 1.26.6-1
        - Name: kubectl
          Value: 1.26.6-1
        - Name: kubelet
          Value: 1.26.6-1
        - Name: helm
          Value: 3.12.0-1
      - Name: sysctl
        Properties:
        - Name: vm.max_map_count
          Value: "262144"
      - Name: kubecfg
        Value: file exist

Note: You can also return this output in a table format. This requires that the terminal application’s encoding is set to UTF-8 (in the terminal application’s menu, select Terminal -> Set Encoding -> Unicode -> UTF-8). Then run the command again without the --format yaml option.

olcnectl module report \
--environment-name <environment_name> \
--name <cluster_name> \
--children

Example Output:

[oracle@ocne-operator ~]$ olcnectl module report --environment-name myenvironment --name mycluster --children
╭─────────────────────────────────────────────────────────────────────┬─────────────────────────╮
│ myenvironment                                                       │                         │
├─────────────────────────────────────────────────────────────────────┼─────────────────────────┤
│ mycluster                                                           │                         │
├─────────────────────────────────────────────────────────────────────┼─────────────────────────┤
│ Property                                                            │ Current Value           │
├─────────────────────────────────────────────────────────────────────┼─────────────────────────┤
│ kubecfg                                                             │ file exist              │
│ podnetworking                                                       │ running                 │
│ module-operator                                                     │ running                 │
│ extra-node-operations                                               │                         │
│ extra-node-operations-update                                        │ running                 │
│ worker:ocne-worker:8090                                             │                         │
│ externalip-webhook                                                  │ uninstalled             │
│ status_check                                                        │ healthy                 │
│ kubectl                                                             │                         │
│ cloud-provider                                                      │                         │
│ master:ocne-control-1:8090                                            │                         │
├─────────────────────────────────────────────────────────────────────┼─────────────────────────┤
│ ocne-control-1:8090                                                   │                         │
├─────────────────────────────────────────────────────────────────────┼─────────────────────────┤
...
├─────────────────────────────────────────────────────────────────────┼─────────────────────────┤
│ service                                                             │                         │
├─────────────────────────────────────────────────────────────────────┼─────────────────────────┤
│ containerd.service                                                  │ not enabled/not running │
│ crio.service                                                        │ enabled/running         │
│ kubelet.service                                                     │ enabled                 │
╰─────────────────────────────────────────────────────────────────────┴─────────────────────────╯

Set up kubectl

  1. On one of the control nodes, set up the kubectl command.

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    export KUBECONFIG=$HOME/.kube/config
    echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc
    
  2. Verify kubectl works.

    kubectl get nodes
    

    Example Output:

    [oracle@ocne-control ~]$ kubectl get nodes
    NAME             STATUS   ROLES           AGE   VERSION
    control-node-1   Ready    control-plane   15d   v1.26.6+1.el9
    control-node-2   Ready    control-plane   15d   v1.26.6+1.el9
    worker-node-1    Ready    <none>          15d   v1.26.6+1.el9
    worker-node-2    Ready    <none>          15d   v1.26.6+1.el9
    

    or

    kubectl get pods --all-namespaces
    

    Example Output:

    [oracle@ocne-control ~]$ kubectl get pods --all-namespaces
    NAMESPACE              NAME                                          READY   STATUS    RESTARTS        AGE
    kube-system            coredns-965bbb7bb-7qc4n                       1/1     Running   0               4m11s
    kube-system            coredns-965bbb7bb-x9wwv                       1/1     Running   0               4m11s
    kube-system            etcd-ocne-control                             1/1     Running   0               5m12s
    kube-system            kube-apiserver-ocne-control                   1/1     Running   0               5m10s
    kube-system            kube-controller-manager-ocne-control          1/1     Running   1 (4m49s ago)   5m10s
    kube-system            kube-flannel-ds-dz22m                         1/1     Running   0               3m52s
    kube-system            kube-flannel-ds-nw57l                         1/1     Running   0               3m52s
    kube-system            kube-proxy-qs4q2                              1/1     Running   0               3m53s
    kube-system            kube-proxy-rvknz                              1/1     Running   0               4m11s
    kube-system            kube-scheduler-ocne-control                   1/1     Running   1 (4m49s ago)   5m10s
    kubernetes-dashboard   kubernetes-dashboard-76b8b45c77-t9lrt         1/1     Running   0               3m51s
    verrazzano-install     verrazzano-module-operator-78c95444c7-59qt7   1/1     Running   0               3m51s
    

This confirms that Oracle Cloud Native Environment is set up and running on the five nodes.
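
If you want to exercise the cluster a little further, the following minimal smoke test creates and removes a test deployment from one of the control nodes. It assumes the worker nodes can pull the public nginx image; the deployment name hello-web is arbitrary.

# Create a test deployment, check where its pod is scheduled, then clean up.
kubectl create deployment hello-web --image=nginx
kubectl get pods -l app=hello-web -o wide
kubectl delete deployment hello-web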

For More Information

More Learning Resources

Explore other labs on docs.oracle.com/learn or access more free learning content on the Oracle Learning YouTube channel. Additionally, visit education.oracle.com/learning-explorer to become an Oracle Learning Explorer.

For product documentation, visit Oracle Help Center.