9 Quick HA Install using Configuration File on Oracle Cloud Infrastructure

Install a basic deployment of Oracle Cloud Native Environment on Oracle Cloud Infrastructure, including a Kubernetes cluster. Any extra modules you want to install can be added to a configuration file. The example in this topic installs all modules available for Oracle Cloud Infrastructure.

Important:

The software described in this documentation is either in Extended Support or Sustaining Support. See Oracle Open Source Support Policies for more information.

We recommend that you upgrade the software described by this documentation as soon as possible.

This sets up a deployment of Oracle Cloud Native Environment on Oracle Cloud Infrastructure, including a Kubernetes cluster and the Oracle Cloud Infrastructure Cloud Controller Manager module.

Nodes Required: As many nodes as are required for High Availability (see Kubernetes High Availability Requirements). These are:

  • Operator Node: One node to use as the operator node, which is used to perform the installation using the Platform CLI (olcnectl), and to host the Platform API Server.

  • Kubernetes control plane: At least three nodes to use as Kubernetes control plane nodes.

  • Kubernetes worker: At least two nodes to use as Kubernetes worker nodes.

Before you begin: Complete the prerequisite set up. See Prerequisites.

To do a quick HA install on Oracle Cloud Infrastructure using a configuration file:

  1. Set up the Oracle Cloud Infrastructure load balancer using the following steps. (A hedged OCI CLI sketch of the same configuration is shown after these steps.)
    1. Log in to the Oracle Cloud Infrastructure User Interface.
    2. Create a load balancer.

    3. Add a backend set to the load balancer using weighted round robin. Set the health check to be TCP port 6443.

    4. Add the control plane nodes to the backend set. Set the port for the control plane nodes to port 6443.

    5. Create a listener for the backend set using TCP port 6443.
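
    As an alternative to the Console steps, the same load balancer configuration can be sketched with the OCI CLI. This is only a hedged outline: the load balancer OCID, the backend set and listener names, and the node IP address are placeholder values, and you should verify the exact parameter names against the oci lb command help for your CLI version.

    # Placeholder OCID for a load balancer that has already been created
    LB_OCID=ocid1.loadbalancer.oc1..unique_ID

    # Backend set using weighted round robin with a TCP health check on port 6443
    oci lb backend-set create --load-balancer-id $LB_OCID \
    --name k8s-apiserver --policy WEIGHTED_ROUND_ROBIN \
    --health-checker-protocol TCP --health-checker-port 6443

    # Add each control plane node to the backend set on port 6443 (repeat for each node)
    oci lb backend create --load-balancer-id $LB_OCID \
    --backend-set-name k8s-apiserver --ip-address 10.0.0.10 --port 6443

    # Listener for the backend set on TCP port 6443
    oci lb listener create --load-balancer-id $LB_OCID \
    --name k8s-apiserver-listener --default-backend-set-name k8s-apiserver \
    --port 6443 --protocol TCP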

  2. On the operator node, create an Oracle Cloud Native Environment configuration file for your deployment. For information on creating an Oracle Cloud Native Environment configuration file, see Platform Command-Line Interface. This example uses the file name myenvironment.yaml for the configuration file.

    A basic example configuration file that installs the Kubernetes module, the Helm module, and the Oracle Cloud Infrastructure Cloud Controller Manager module is:

    environments:
      - environment-name: myenvironment
        globals:
          api-server: operator.example.com:8091
          selinux: enforcing
        modules:
          - module: kubernetes
            name: mycluster
            args:
              container-registry: container-registry.oracle.com/olcne
              load-balancer: lb.example.com:6443
              master-nodes: 
                - control1.example.com:8090
                - control2.example.com:8090
                - control3.example.com:8090
              worker-nodes:
                - worker1.example.com:8090
                - worker2.example.com:8090
                - worker3.example.com:8090
          - module: helm
            name: myhelm
            args:
              helm-kubernetes-module: mycluster
          - module: oci-ccm
            name: myoci
            args: 
              oci-ccm-helm-module: myhelm 
              oci-region: us-ashburn-1 
              oci-tenancy: ocid1.tenancy.oc1..unique_ID
              oci-compartment: ocid1.compartment.oc1..unique_ID
              oci-user: ocid1.user.oc1..unique_ID
              oci-fingerprint: b5:52:...
              oci-private-key: /home/opc/.oci/oci_api_key.pem 
              oci-vcn: ocid1.vcn.oc1..unique_ID 
              oci-lb-subnet1: ocid1.subnet.oc1..unique_ID 

    This example configuration file uses the default settings to create a Kubernetes cluster with three control plane nodes and three worker nodes, and uses an external load balancer that is already set up on Oracle Cloud Infrastructure. You should change the nodes listed to those of your own hosts, and change the URL to that of your own Oracle Cloud Infrastructure load balancer. A number of values are required to set up the Oracle Cloud Infrastructure Cloud Controller Manager module; for information about what to provide for this module, see Storage and Application Load Balancers.

    Tip:

    Private CA Certificates are automatically generated for communication between the Kubernetes nodes and for the Kubernetes externalIPs service. If you want to use your own CA Certificates, or to add additional modules to the configuration file, see the information about these options in Quick Install using Configuration File.

  3. On the operator node, use the olcnectl provision command with the --config-file option to start the installation. For example:

    olcnectl provision --config-file myenvironment.yaml

    There are a number of other command options you might need, such as the SSH login credentials, proxy server information, and the option to automatically accept any prompts using the --yes option. For information on the syntax options for the olcnectl provision command, see Platform Command-Line Interface.
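
    For example, a sketch of a non-interactive run that supplies the SSH login details might look like the following. The --ssh-login-name and --ssh-identity-file options, and the oracleuser login name and key path, are assumptions for illustration; check the olcnectl provision syntax for your release before using them.

    # Assumed SSH options and values; verify against olcnectl provision --help
    olcnectl provision \
    --config-file myenvironment.yaml \
    --ssh-login-name oracleuser \
    --ssh-identity-file ~/.ssh/id_rsa \
    --yes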

  4. A list of the steps to be performed on each node is displayed, along with a prompt asking whether to proceed. For example, on a control plane node, the changes may look similar to:

    ? Apply control-plane configuration on control1.example.com:
    * Install oracle-olcne-release
    ...
    * Install and enable olcne-agent
    
    Proceed? yes/no(default) yes                          

    Enter yes to continue. The node is set up.

    Information about the changes is displayed for each node, and you need to confirm the setup steps for each one.

    Tip:

    If you want to avoid accepting the changes on each node, use the --yes command option with the olcnectl provision command.

  5. The nodes are set up with the Oracle Cloud Native Environment platform and the module(s) are installed. You can show information about the environment using the syntax:

    olcnectl module instances \
    --api-server host_name:8091 \
    --environment-name name

    Tip:

    To avoid having to enter the --api-server option in future olcnectl commands, add the --update-config option.

    For example:

    olcnectl module instances \
    --api-server operator.example.com:8091 \
    --environment-name myenvironment \
    --update-config

    The output looks similar to:

    INFO[...] Global flag configuration for myenvironment has been written to the 
    local Platform config and you don't need to specify them for any future calls 
    INSTANCE                   MODULE      STATE    
    control1.example.com:8090  node        installed
    ...
    mycluster                  kubernetes  installed

    If you want to see more information about the deployment, use the olcnectl module report command. For example:

    olcnectl module report \
    --environment-name myenvironment \
    --name mycluster \
    --children
  6. Set up the Kubernetes CLI (kubectl) on a control plane node. The kubectl command is installed on each control plane node in the cluster. To use it to access the cluster, you need to configure it using the Kubernetes configuration file.

    Log in to a control plane node and copy and paste these commands into a terminal in your home directory:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    export KUBECONFIG=$HOME/.kube/config
    echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc

    Verify that you can use the kubectl command by running any kubectl command, such as:

    kubectl get pods --all-namespaces

    The output looks similar to:

    NAMESPACE              NAME                          READY   STATUS    RESTARTS   AGE
    externalip-validat...  externalip-validation-...     1/1     Running   0          1h
    kube-system            coredns-...                   1/1     Running   0          1h
    kube-system            coredns-...                   1/1     Running   0          1h
    kube-system            etcd-...                      1/1     Running   2          1h
    kube-system            etcd-...                      1/1     Running   2          1h
    kube-system            kube-apiserver-...            1/1     Running   2          1h
    kube-system            kube-apiserver-...            1/1     Running   2          1h
    kube-system            kube-controller-manager-...   1/1     Running   5 (1h ago) 1h
    kube-system            kube-controller-manager-...   1/1     Running   2          1h
    kube-system            kube-flannel-...              1/1     Running   0          1h
    kube-system            kube-flannel-...              1/1     Running   0          1h
    kube-system            kube-flannel-...              1/1     Running   0          1h
    kube-system            kube-proxy-...                1/1     Running   0          1h
    kube-system            kube-proxy-...                1/1     Running   0          1h
    kube-system            kube-proxy-...                1/1     Running   0          1h
    kube-system            kube-scheduler-...            1/1     Running   5 (1h ago) 1h
    kube-system            kube-scheduler-...            1/1     Running   2          1h
    kubernetes-dashboard   kubernetes-dashboard-...      1/1     Running   0          1h

    Note:

    After the deployment, a Kubernetes configuration file is created in the local directory of the operator node. The file is named kubeconfig.environment_name.cluster_name and contains information about the Kubernetes cluster. This file is created for your convenience and is not required to set up kubectl on the control plane nodes.

    You may want to merge this file into a larger Kubernetes configuration file if you have multiple clusters. See the upstream Kubernetes documentation for more information on configuring access to multiple clusters.
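
    For example, a minimal sketch of merging the generated file into an existing kubectl configuration on the operator node. With the names used in this topic the generated file would be kubeconfig.myenvironment.mycluster; the file paths shown are assumptions.

    # Combine the existing kubectl configuration with the generated file
    export KUBECONFIG=$HOME/.kube/config:$HOME/kubeconfig.myenvironment.mycluster

    # Flatten the combined view into a single merged file and use it
    kubectl config view --flatten > $HOME/.kube/merged-config
    export KUBECONFIG=$HOME/.kube/merged-config

    # List the available contexts to confirm both clusters are present
    kubectl config get-contexts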