5 Quick HA Install with Internal Load Balancer

Install a Highly Available Oracle Cloud Native Environment on bare metal hosts or virtual machines, including a Kubernetes cluster. This example uses the internal containerized NGINX and Keepalived load balancer deployed by the Platform CLI.

This is the fastest method to set up a basic Highly Available deployment of Oracle Cloud Native Environment on bare metal hosts or virtual machines. This method sets up the nodes, installs the Oracle Cloud Native Environment platform, and installs a Kubernetes cluster. The Platform CLI deploys a load balancer to the control plane nodes and configures it for the Kubernetes cluster. The load balancer is a container-based deployment of NGINX and Keepalived.

Security Considerations: Consider the following security settings when you use this installation example:

  • Private CA Certificates are used to secure network communication between the Kubernetes nodes.

  • SELinux is set to permissive mode on the host OS on each Kubernetes node.

  • The Kubernetes externalIPs service isn't deployed.

To perform a more complex deployment and change these security settings, use a configuration file as shown in Quick Install using Configuration File.

Nodes Required: As many nodes as required for High Availability. (See Kubernetes High Availability Requirements).

  • Operator Node: One node to use as the operator node, which is used to perform the installation using the Platform CLI (olcnectl), and to host the Platform API Server.

  • Kubernetes control plane: At least three nodes to use as Kubernetes control plane nodes.

  • Kubernetes worker: At least two nodes to use as Kubernetes worker nodes.

Before you begin: Complete the prerequisite setup. See Prerequisites.
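
For example, a quick sanity check on each node might look similar to the following (an illustrative sketch only; the complete requirements are listed in Prerequisites):

    # Confirm the node's FQDN resolves, check the SELinux mode, and verify
    # connectivity to the operator node (replace the hostname with your own).
    hostname -f
    getenforce
    ping -c 3 operator.example.com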

To do a quick install with an internal load balancer:

  1. Use the --virtual-ip option when creating the Kubernetes module to nominate a virtual IP address for the primary control plane node. This IP address must not be in use on any node; the load balancer assigns it dynamically to the control plane node acting as the primary controller. If the primary node fails, the load balancer reassigns the virtual IP address to another control plane node, which then becomes the primary node.
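
    For example, you might first confirm that the nominated address isn't already in use on the network (an illustrative check that uses the virtual IP address from the example later in this procedure):

    # No host should answer on the nominated virtual IP address.
    ping -c 3 192.0.2.100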

  2. On the operator node, use the olcnectl provision command to start the installation. The mandatory syntax is:

    olcnectl provision 
    --api-server host 
    --control-plane-nodes hosts
    --worker-nodes hosts
    --environment-name name 
    --name name
    --virtual-ip IP_address

    Use the --api-server option to set the FQDN of the node on which the Platform API Server is to be set up.

    Use the --control-plane-nodes option to set the FQDNs of the nodes to be set up with the Platform Agent and assigned the role of Kubernetes control plane nodes. This is a comma-separated list. This option replaces the deprecated --master-nodes option.

    Use the --worker-nodes option to set the FQDNs of the nodes to be set up with the Platform Agent and assigned the role of Kubernetes worker nodes. This is a comma-separated list.

    Use the --environment-name option to set the name to identify the environment.

    Use the --name option to set the name to identify the Kubernetes module.

    Use the --virtual-ip option to set the virtual IP address.

    Several other command options might be required, such as SSH login credentials, proxy server information, and the --yes option to automatically accept all prompts. For information on the syntax options for the olcnectl provision command, see Platform Command-Line Interface.

    For example:

    olcnectl provision \
    --api-server operator.example.com \
    --control-plane-nodes control1.example.com,control2.example.com,control3.example.com \
    --worker-nodes worker1.example.com,worker2.example.com,worker3.example.com \
    --environment-name myenvironment \
    --name mycluster \
    --virtual-ip 192.0.2.100
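
    If the nodes use an SSH user other than root, or you want to accept all prompts automatically, you might add options similar to the following sketch (confirm the exact option names for your release in Platform Command-Line Interface; the oracle login name is only an example):

    # Hypothetical variant: assumes an SSH user named "oracle" on each node.
    olcnectl provision \
    --api-server operator.example.com \
    --control-plane-nodes control1.example.com,control2.example.com,control3.example.com \
    --worker-nodes worker1.example.com,worker2.example.com,worker3.example.com \
    --environment-name myenvironment \
    --name mycluster \
    --virtual-ip 192.0.2.100 \
    --ssh-login-name oracle \
    --yes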
  3. A list of the steps to be performed on each node is displayed, along with a prompt to proceed. For example, on a control plane node, the changes might look similar to:

    ? Apply control-plane configuration on control1.example.com:
    * Install oracle-olcne-release
    ...
    * Install and enable olcne-agent
    
    Proceed? yes/no(default) yes                          

    Enter yes to continue. The node is set up.

    Information about the changes on each node is displayed. You need to confirm the setup steps for each node.

    Tip:

    To avoid accepting the changes on each node, use the --yes command option with the olcnectl provision command.

  4. The nodes are set up with the Oracle Cloud Native Environment platform, and the Kubernetes module is installed to create a Kubernetes cluster. You can display information about the environment using the syntax:

    olcnectl module instances 
    --api-server host_name:8091 
    --environment-name name

    Tip:

    To avoid using the --api-server option in future olcnectl commands, add the --update-config option.

    For example:

    olcnectl module instances \
    --api-server operator.example.com:8091 \
    --environment-name myenvironment \
    --update-config

    The output looks similar to:

    INFO[...] Global flag configuration for myenvironment has been written to the 
    local Platform config and you don't need to specify them for any future calls 
    INSTANCE                   MODULE      STATE    
    control1.example.com:8090  node        installed
    ...
    mycluster                  kubernetes  installed

    To see more information about the deployment, use the olcnectl module report command. For example:

    olcnectl module report \
    --environment-name myenvironment \
    --name mycluster \
    --children
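
    If you prefer machine-readable output, the report command might also accept a format option (check the Platform Command-Line Interface reference for your release), for example:

    olcnectl module report \
    --environment-name myenvironment \
    --name mycluster \
    --children \
    --format yaml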
  5. Set up the Kubernetes CLI (kubectl) on a control plane node. The kubectl command is installed on each control plane node in the cluster. To use it to access the cluster, you need to configure it using the Kubernetes configuration file.

    Log in to a control plane node and run the following commands from the user's home directory:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    export KUBECONFIG=$HOME/.kube/config
    echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc

    Verify that kubectl is working by running any kubectl command, for example:

    kubectl get pods --all-namespaces

    The output looks similar to:

    NAMESPACE              NAME                          READY   STATUS    RESTARTS   AGE
    externalip-validat...  externalip-validation-...     1/1     Running   0          1h
    kube-system            coredns-...                   1/1     Running   0          1h
    kube-system            coredns-...                   1/1     Running   0          1h
    kube-system            etcd-...                      1/1     Running   2          1h
    kube-system            etcd-...                      1/1     Running   2          1h
    kube-system            kube-apiserver-...            1/1     Running   2          1h
    kube-system            kube-apiserver-...            1/1     Running   2          1h
    kube-system            kube-controller-manager-...   1/1     Running   5 (1h ago) 1h
    kube-system            kube-controller-manager-...   1/1     Running   2          1h
    kube-system            kube-flannel-...              1/1     Running   0          1h
    kube-system            kube-flannel-...              1/1     Running   0          1h
    kube-system            kube-flannel-...              1/1     Running   0          1h
    kube-system            kube-proxy-...                1/1     Running   0          1h
    kube-system            kube-proxy-...                1/1     Running   0          1h
    kube-system            kube-proxy-...                1/1     Running   0          1h
    kube-system            kube-scheduler-...            1/1     Running   5 (1h ago) 1h
    kube-system            kube-scheduler-...            1/1     Running   2          1h
    kubernetes-dashboard   kubernetes-dashboard-...      1/1     Running   0          1h
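
    You can also check that all the control plane and worker nodes have joined the cluster and are in the Ready state, for example:

    kubectl get nodes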

    Note:

    After the deployment, a Kubernetes configuration file is created in the local directory of the operator node. The file is named kubeconfig.environment_name.cluster_name and contains information about the Kubernetes cluster. This file is created for convenience and isn't required to set up kubectl on the control plane nodes.

    You might want to use this file to add to a larger Kubernetes configuration file if you have multiple clusters. See the upstream Kubernetes documentation for more information on configuring access to multiple clusters.
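
    For example, using the environment and module names from this procedure, you might point kubectl at that file from the directory on the operator node where you ran olcnectl provision (a sketch; it assumes kubectl is installed on the operator node):

    # Assumes the file was created in the current directory by the provision step.
    export KUBECONFIG=$PWD/kubeconfig.myenvironment.mycluster
    kubectl get nodes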

Tip:

Adding and Removing Nodes to Scale a Kubernetes Cluster

To change the nodes in the Kubernetes cluster, run the olcnectl provision command again with updated control plane and worker node lists: any nodes you omit from the new lists are removed from the cluster, while any new nodes you specify are added to it.

If you're adding nodes, new certificates are automatically generated for you and installed on the new nodes, the Oracle Cloud Native Environment software is installed, and the nodes are added to the Kubernetes cluster. However, you still need to ensure that all new nodes have been set up with the required prerequisites (see Prerequisites), and that any new control plane nodes have been added to the load balancer if you're using an external load balancer.
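
For example, to add a hypothetical fourth worker node (worker4.example.com) to the cluster created in this section, you might re-run the command with the extended worker node list:

    # worker4.example.com is a hypothetical new node used for illustration.
    olcnectl provision \
    --api-server operator.example.com \
    --control-plane-nodes control1.example.com,control2.example.com,control3.example.com \
    --worker-nodes worker1.example.com,worker2.example.com,worker3.example.com,worker4.example.com \
    --environment-name myenvironment \
    --name mycluster \
    --virtual-ip 192.0.2.100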