Use Quick Install to Deploy Oracle Cloud Native Environment


Oracle Cloud Native Environment is a fully integrated suite for developing and managing cloud-native applications. The Kubernetes module is the core module. It deploys and manages containers and automatically installs and configures CRI-O, runC, and Kata Containers. CRI-O manages the container runtime for a Kubernetes cluster, which may be either runC or Kata Containers.

Oracle Cloud Native Environment Release 1.5.7 introduced the ability to use the Oracle Cloud Native Environment Platform CLI to perform a quick installation. This installation type uses the olcnectl provision command on the operator node. Running this single command sets up the target nodes, creates an environment, and deploys the Kubernetes module to form a cluster.

This tutorial describes how to perform a quick installation using the most straightforward steps to install Oracle Cloud Native Environment and a Kubernetes cluster using private CA Certificates. Oracle recommends you use your own CA Certificates for a production environment.

You can achieve more complex topologies by writing an Oracle Cloud Native Environment configuration file and passing it to the olcnectl provision command using the --config-file option. For more information on the syntax options provided by the olcnectl provision command and also on how to write a configuration file, please refer to the Platform Command-Line Interface guide.
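As an illustration, a configuration file roughly equivalent to the quick install performed later in this tutorial could be written as follows. This is a minimal sketch only: the exact schema, keys, and node entries shown here are assumptions based on this tutorial's setup, so confirm the authoritative format in the Platform Command-Line Interface guide before using it.

```shell
# Sketch of a minimal Oracle Cloud Native Environment configuration file.
# The schema below is illustrative, not authoritative; check the Platform
# CLI guide for the exact supported keys and values.
cat > myenvironment.yaml <<'EOF'
environments:
  - environment-name: myenvironment
    globals:
      api-server: ocne-operator:8091
    modules:
      - module: kubernetes
        name: mycluster
        args:
          control-plane-nodes:
            - ocne-control-01:8090
          worker-nodes:
            - ocne-worker-01:8090
            - ocne-worker-02:8090
EOF
```

You would then pass the file to the Platform CLI with olcnectl provision --config-file myenvironment.yaml.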


This tutorial demonstrates how to:

  • Set up the operator node with the Platform CLI
  • Perform a quick installation with the olcnectl provision command
  • Verify the Kubernetes cluster and set up kubectl


Deploy Oracle Cloud Native Environment

Note: If running in your own tenancy, read the linux-virt-labs GitHub project and complete the prerequisites before deploying the lab environment.

  1. Open a terminal on the Luna Desktop.

  2. Clone the linux-virt-labs GitHub project.

    git clone https://github.com/oracle-devrel/linux-virt-labs.git
  3. Change into the working directory.

    cd linux-virt-labs/ocne
  4. Install the required collections.

    ansible-galaxy collection install -r requirements.yaml
  5. Deploy the lab environment.

    ansible-playbook create_instance.yaml -e ansible_python_interpreter="/usr/bin/python3.6" -e ocne_type=none

    The free lab environment requires the extra variable ansible_python_interpreter because it installs the RPM package for the Oracle Cloud Infrastructure SDK for Python, which places its modules under the python3.6 directory.

    Important: Wait for the playbook to run successfully and reach the pause task. The Oracle Cloud Native Environment installation is complete at this stage of the playbook, and the instances are ready. Take note of the previous play, which prints the public and private IP addresses of the nodes it deploys.
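If you run the playbook repeatedly, the interpreter override can also be pinned in an Ansible inventory rather than passed with -e on every invocation. A minimal sketch, using a hypothetical hosts.ini inventory file (the file name and its use here are assumptions, not part of the lab project):

```shell
# Hypothetical inventory pinning the Python interpreter for all hosts,
# making the -e ansible_python_interpreter=... override unnecessary.
cat > hosts.ini <<'EOF'
[all:vars]
ansible_python_interpreter=/usr/bin/python3.6
EOF
```

You would then run ansible-playbook -i hosts.ini create_instance.yaml -e ocne_type=none.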

Set Up the Operator Node on Oracle Linux

These steps configure the ocne-operator node to install Oracle Cloud Native Environment quickly.

  1. Open a terminal and connect via SSH to the ocne-operator node.

    ssh oracle@<ip_address_of_ol_node>
  2. Install the Oracle Cloud Native Environment release package.

    sudo dnf -y install oracle-olcne-release-el8
  3. Enable the current Oracle Cloud Native Environment repository.

    sudo dnf config-manager --enable ol8_olcne18 ol8_addons ol8_baseos_latest ol8_appstream ol8_kvm_appstream ol8_UEKR7
  4. Disable all previous repository versions.

    sudo dnf config-manager --disable ol8_olcne17 ol8_olcne16 ol8_olcne15 ol8_olcne14 ol8_olcne13 ol8_olcne12 ol8_developer
  5. Install the Platform CLI.

    sudo dnf -y install olcnectl

Perform a Quick Install

The following steps describe the fastest method to set up a basic deployment of Oracle Cloud Native Environment and install a Kubernetes cluster. It requires a minimum of three node types:

  • Operator node: hosts the Platform API Server and the Platform CLI
  • Kubernetes control plane node: hosts the Platform Agent and runs the Kubernetes control plane
  • Kubernetes worker node: hosts the Platform Agent and runs application workloads

  1. Begin the installation.

    olcnectl provision \
    --api-server ocne-operator \
    --control-plane-nodes ocne-control-01 \
    --worker-nodes ocne-worker-01,ocne-worker-02 \
    --environment-name myenvironment \
    --name mycluster \
    --selinux enforcing

    Important Note: This operation can take 10-15 minutes to complete, and there will be no visible indication of anything occurring until it finishes.


    • --api-server: the FQDN of the node where the installation sets up the Platform API
    • --control-plane-nodes: a comma-separated list of FQDNs of the nodes that host the Platform Agent and take the Kubernetes control plane role
    • --worker-nodes: a comma-separated list of FQDNs of the nodes that host the Platform Agent and take the Kubernetes worker role
    • --environment-name: identifies the environment
    • --name: sets the name of the Kubernetes module
    • --selinux: sets the SELinux mode to enforcing (the default) or permissive

    Note: After you run this command, a prompt lists the changes it will make to the hosts and asks for confirmation. To avoid this prompt, use the --yes option.

    Important: In previous releases, the command syntax used the --master-nodes option rather than --control-plane-nodes. The older option is deprecated and prints the following message when used:

    Flag --master-nodes has been deprecated, Please migrate to --control-plane-nodes.

  2. Confirm the cluster installation.

    olcnectl module instances \
    --api-server ocne-operator:8091 \
    --environment-name myenvironment

    The output shows the nodes and Kubernetes module with a STATE of installed.
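This verification can also be scripted by capturing the command's output and failing if any instance reports a STATE other than installed. A minimal sketch, using a saved sample of the output (the sample columns and values are illustrative; in practice you would capture the real output of olcnectl module instances):

```shell
# Sample output as might be produced by 'olcnectl module instances'
# (illustrative); in practice, redirect the real command into this file.
cat > instances.txt <<'EOF'
INSTANCE                MODULE      STATE
mycluster               kubernetes  installed
ocne-control-01:8090    node        installed
ocne-worker-01:8090     node        installed
ocne-worker-02:8090     node        installed
EOF

# Fail if any non-header row reports a state other than 'installed'.
if awk 'NR > 1 && $NF != "installed" { bad = 1 } END { exit bad }' instances.txt; then
  echo "all modules installed"
else
  echo "some modules are not yet installed"
fi
```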

  3. Add the --update-config option to avoid using the --api-server flag in future Platform CLI commands.

    olcnectl module instances \
    --api-server ocne-operator:8091 \
    --environment-name myenvironment \
    --update-config
  4. Get more detailed information related to the deployment in YAML format.

    olcnectl module report \
    --environment-name myenvironment \
    --name mycluster \
    --children \
    --format yaml

    Example Output:

        - Name: mycluster
          - Name: cloud-provider
          - Name: master:ocne-control:8090
          - Name: worker:ocne-worker:8090
          - Name: module-operator
            Value: running
          - Name: extra-node-operations-update
            Value: running
          - Name: status_check
            Value: healthy
          - Name: kubectl
          - Name: kubecfg
            Value: file exist
          - Name: podnetworking
            Value: running
          - Name: externalip-webhook
            Value: uninstalled
          - Name: extra-node-operations
        - Name: ocne-worker:8090
          - Name: module
            - Name: br_netfilter
              Value: loaded
            - Name: conntrack
              Value: loaded
          - Name: networking
            Value: active
          - Name: firewall
            - Name: 10255/tcp
              Value: closed
            - Name: 2381/tcp
              Value: closed
            - Name: 6443/tcp
              Value: closed
            - Name: 10250/tcp
              Value: closed
            - Name: 8472/udp
              Value: closed
            - Name: 10257/tcp
              Value: closed
            - Name: 10259/tcp
              Value: closed
            - Name: 10249/tcp
              Value: closed
            - Name: 9100/tcp
              Value: closed
          - Name: connectivity
          - Name: selinux
            Value: enforcing
  5. Get more detailed information related to the deployment in table format.

    It is possible to alter the output to return in a table format. However, this requires setting the Oracle Linux Terminal application’s encoding to UTF-8 by accessing its menu and clicking: Terminal -> Set Encoding -> Unicode -> UTF-8. Then, rerun the command without the --format yaml option.

    olcnectl module report \
    --environment-name myenvironment \
    --name mycluster

    Example Output:

    │ myenvironment                                                       │                         │
    │ mycluster                                                           │                         │
    │ Property                                                            │ Current Value           │
    │ kubecfg                                                             │ file exist              │
    │ podnetworking                                                       │ running                 │
    │ module-operator                                                     │ running                 │
    │ extra-node-operations                                               │                         │
    │ extra-node-operations-update                                        │ running                 │
    │ worker:ocne-worker:8090                                             │                         │
    │ externalip-webhook                                                  │ uninstalled             │
    │ status_check                                                        │ healthy                 │
    │ kubectl                                                             │                         │
    │ cloud-provider                                                      │                         │
    │ master:ocne-control:8090                                            │                         │
    │ ocne-control:8090                                                   │                         │
    │ swap                                                                │ off                     │
    │ package                                                             │                         │
    │ helm                                                                │ 3.12.0-4.el8            │
    │ kubeadm                                                             │ 1.28.3-3.el8            │
    │ kubectl                                                             │ 1.28.3-3.el8            │
    │ kubelet                                                             │ 1.28.3-3.el8            │

Set up kubectl

  1. Set up the kubectl command on the control plane node.

    ssh ocne-control-01 "mkdir -p $HOME/.kube; \
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config; \
    sudo chown $(id -u):$(id -g) $HOME/.kube/config; \
    export KUBECONFIG=$HOME/.kube/config; \
    echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc"
  2. Verify that kubectl works.

    ssh ocne-control-01 "kubectl get nodes"

    The output shows that each node in the cluster is ready, along with its current role and version.
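The readiness check can be scripted as well. A minimal sketch using a saved sample of the kubectl get nodes output (node ages and version strings below are illustrative; in practice you would redirect the live command's output into the file):

```shell
# Sample 'kubectl get nodes' output (illustrative values); in practice:
#   ssh ocne-control-01 "kubectl get nodes" > nodes.txt
cat > nodes.txt <<'EOF'
NAME              STATUS   ROLES           AGE   VERSION
ocne-control-01   Ready    control-plane   15m   v1.28.3+3.el8
ocne-worker-01    Ready    <none>          14m   v1.28.3+3.el8
ocne-worker-02    Ready    <none>          14m   v1.28.3+3.el8
EOF

# Fail if any node reports a status other than 'Ready'.
if awk 'NR > 1 && $2 != "Ready" { bad = 1 } END { exit bad }' nodes.txt; then
  echo "all nodes ready"
else
  echo "at least one node is not ready"
fi
```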


    ssh ocne-control-01 "kubectl get pods --all-namespaces"

    The output shows all Pods in a running status.


This output from kubectl confirms the successful installation of Oracle Cloud Native Environment using the quick installation method.

For More Information

More Learning Resources

Explore other labs on docs.oracle.com/learn or access more free learning content on the Oracle Learning YouTube channel. Additionally, visit education.oracle.com/learning-explorer to become an Oracle Learning Explorer.

For product documentation, visit Oracle Help Center.