5 Quick HA Install with Internal Load Balancer
Install a Highly Available Oracle Cloud Native Environment on bare metal hosts or virtual machines, including a Kubernetes cluster. This example uses the internal containerized NGINX and Keepalived load balancer deployed by the Platform CLI.
Important:
The software described in this documentation is either in Extended Support or Sustaining Support. See Oracle Open Source Support Policies for more information.
We recommend that you upgrade the software described by this documentation as soon as possible.
This is the fastest method to set up a basic Highly Available deployment of Oracle Cloud Native Environment on bare metal hosts or virtual machines. This method sets up the nodes, installs the Oracle Cloud Native Environment platform, and deploys a Kubernetes cluster. A load balancer is deployed by the Platform CLI to the control plane nodes and configured with the Kubernetes cluster. The load balancer is a container-based deployment of NGINX and Keepalived.
Security Considerations: You should consider the following security settings when you use this installation example:
- Private CA Certificates are used to secure network communication between the Kubernetes nodes.
- SELinux is set to permissive mode on the host operating system on each node.
- The Kubernetes externalIPs service is not deployed.
If you want to perform a more complex deployment and change these security settings, use a configuration file as shown in Quick Install using Configuration File.
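If you want to confirm the SELinux mode that was set on a node, one quick check (assuming you have shell access to that node) is the standard getenforce command:

# Reports the current SELinux mode; after this installation example it
# should report Permissive on each node.
getenforce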
Nodes Required: As many nodes as required for High Availability. (See Kubernetes High Availability Requirements). They are:
- Operator Node: One node to use as the operator node, which is used to perform the installation using the Platform CLI (olcnectl), and to host the Platform API Server.
- Kubernetes control plane: At least three nodes to use as Kubernetes control plane nodes.
- Kubernetes worker: At least two nodes to use as Kubernetes worker nodes.
Before you begin: Complete the prerequisite set up. See Prerequisites.
To do a quick install with an internal load balancer:
- Use the --virtual-ip option when creating the Kubernetes module to nominate a virtual IP address for the primary control plane node. This IP address must not be in use on any node, and is assigned dynamically to the control plane node that the load balancer selects as the primary controller. If the primary node fails, the load balancer reassigns the virtual IP address to another control plane node, which in turn becomes the primary node.
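For example, before you start, you can run a quick (though not conclusive) check that nothing currently answers on the address you plan to nominate. This sketch assumes the example virtual IP address 192.0.2.100 used later in this section:

# Expect no replies (100% packet loss) if the address is unused.
# Hosts that block ICMP can still hold an address, so treat this
# only as a first check.
ping -c 3 192.0.2.100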
- On the operator node, use the olcnectl provision command to start the installation. The mandatory syntax is:

olcnectl provision --api-server host --master-nodes hosts --worker-nodes hosts --environment-name name --name name --virtual-ip IP_address

Use the --api-server option to set the FQDN of the node on which the Platform API Server should be set up.

Use the --master-nodes option to set the FQDN of the nodes that should be set up with the Platform Agent and assigned the role of Kubernetes control plane nodes. This is a comma-separated list.

Use the --worker-nodes option to set the FQDN of the nodes that should be set up with the Platform Agent and assigned the role of Kubernetes worker nodes. This is a comma-separated list.

Use the --environment-name option to set the name that identifies the environment.

Use the --name option to set the name that identifies the Kubernetes module.

Use the --virtual-ip option to set the virtual IP address.

There are a number of other command options that you may need, such as the SSH login credentials, proxy server information, and the option to automatically accept any prompts using the --yes option. For information on the syntax options for the olcnectl provision command, see Platform Command-Line Interface.

For example:

olcnectl provision \
--api-server operator.example.com \
--master-nodes control1.example.com,control2.example.com,control3.example.com \
--worker-nodes worker1.example.com,worker2.example.com,worker3.example.com \
--environment-name myenvironment \
--name mycluster \
--virtual-ip 192.0.2.100
- A list of the steps to be performed on each node is displayed, along with a prompt to proceed. For example, on a control plane node, the changes may look similar to:

? Apply control-plane configuration on control1.example.com:
* Install oracle-olcne-release
...
* Install and enable olcne-agent

Proceed? yes/no(default) yes

Enter yes to continue. The node is set up.

Information about the changes on each node is displayed. You need to confirm the setup steps for each node.
Tip:
If you want to avoid accepting the changes on each node, use the --yes command option with the olcnectl provision command.
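For example, the provision command from the previous step can be run without confirmation prompts by appending the --yes option:

olcnectl provision \
--api-server operator.example.com \
--master-nodes control1.example.com,control2.example.com,control3.example.com \
--worker-nodes worker1.example.com,worker2.example.com,worker3.example.com \
--environment-name myenvironment \
--name mycluster \
--virtual-ip 192.0.2.100 \
--yes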
- The nodes are set up with the Oracle Cloud Native Environment platform and a Kubernetes module is installed to set up a Kubernetes cluster. You can show information about the environment using the syntax:

olcnectl module instances --api-server host_name:8091 --environment-name name

Tip:
To avoid having to enter the --api-server option in future olcnectl commands, add the --update-config option.

For example:

olcnectl module instances \
--api-server operator.example.com:8091 \
--environment-name myenvironment \
--update-config

The output looks similar to:

INFO[...] Global flag configuration for myenvironment has been written to the local Platform config and you don't need to specify them for any future calls

INSTANCE                   MODULE      STATE
control1.example.com:8090  node        installed
...
mycluster                  kubernetes  installed

If you want to see more information about the deployment, use the olcnectl module report command. For example:

olcnectl module report \
--environment-name myenvironment \
--name mycluster \
--children
- Set up the Kubernetes CLI (kubectl) on a control plane node. The kubectl command is installed on each control plane node in the cluster. To use it to access the cluster, you need to configure it using the Kubernetes configuration file.

Log into a control plane node and copy and paste these commands to a terminal in your home directory:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config
echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc
Verify that kubectl works by running any kubectl command, such as:

kubectl get pods --all-namespaces
The output looks similar to:
NAMESPACE              NAME                          READY   STATUS    RESTARTS     AGE
externalip-validat...  externalip-validation-...     1/1     Running   0            1h
kube-system            coredns-...                   1/1     Running   0            1h
kube-system            coredns-...                   1/1     Running   0            1h
kube-system            etcd-...                      1/1     Running   2            1h
kube-system            etcd-...                      1/1     Running   2            1h
kube-system            kube-apiserver-...            1/1     Running   2            1h
kube-system            kube-apiserver-...            1/1     Running   2            1h
kube-system            kube-controller-manager-...   1/1     Running   5 (1h ago)   1h
kube-system            kube-controller-manager-...   1/1     Running   2            1h
kube-system            kube-flannel-...              1/1     Running   0            1h
kube-system            kube-flannel-...              1/1     Running   0            1h
kube-system            kube-flannel-...              1/1     Running   0            1h
kube-system            kube-proxy-...                1/1     Running   0            1h
kube-system            kube-proxy-...                1/1     Running   0            1h
kube-system            kube-proxy-...                1/1     Running   0            1h
kube-system            kube-scheduler-...            1/1     Running   5 (1h ago)   1h
kube-system            kube-scheduler-...            1/1     Running   2            1h
kubernetes-dashboard   kubernetes-dashboard-...      1/1     Running   0            1h
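You can also list the cluster nodes to confirm that every control plane and worker node has joined the cluster, for example:

kubectl get nodes

Each node should report a STATUS of Ready.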
Note:
After the deployment, a Kubernetes configuration file is created in the local directory of the operator node. The file is named kubeconfig.environment_name.cluster_name and contains information about the Kubernetes cluster. This file is created for your convenience and is not required to set up kubectl on the control plane nodes.

You may want to merge this file into a larger Kubernetes configuration file if you have multiple clusters. See the upstream Kubernetes documentation for more information on configuring access to multiple clusters.
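As a sketch of how you might use this file (assuming the example environment and module names from this section, and that the file is in the current directory on the operator node), you can either point kubectl at it directly or list it alongside an existing configuration in the KUBECONFIG variable:

# Use the generated file directly
export KUBECONFIG=$PWD/kubeconfig.myenvironment.mycluster
kubectl get nodes

# Or combine it with an existing configuration and list the available contexts
export KUBECONFIG=$HOME/.kube/config:$PWD/kubeconfig.myenvironment.mycluster
kubectl config get-contexts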