9 Quick HA Install using Configuration File on Oracle Cloud Infrastructure
Install a basic deployment of Oracle Cloud Native Environment on Oracle Cloud Infrastructure, including a Kubernetes cluster. Any extra modules you want to install can be added to a configuration file. The example in this topic installs all modules available for Oracle Cloud Infrastructure: it sets up a Kubernetes cluster and the Oracle Cloud Infrastructure Cloud Controller Manager module.
Nodes Required: As many nodes as required for High Availability (see Kubernetes High Availability Requirements).

- Operator Node: One node to use as the operator node, which is used to perform the installation using the Platform CLI (`olcnectl`), and to host the Platform API Server.
- Kubernetes control plane: At least three nodes to use as Kubernetes control plane nodes.
- Kubernetes worker: At least two nodes to use as Kubernetes worker nodes.
Before you begin: Complete the prerequisite set up. See Prerequisites.
To do a quick HA install on Oracle Cloud Infrastructure using a configuration file:

- Set up the Oracle Cloud Infrastructure load balancer.
  - Sign in to Oracle Cloud Infrastructure.
  - Create a load balancer.
  - Add a backend set to the load balancer using weighted round robin. Set the health check to TCP port 6443.
  - Add the control plane nodes to the backend set. Set the port for the control plane nodes to port 6443.
  - Create a listener for the backend set using TCP port 6443.
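Before relying on the backend set's health check, it can help to confirm that each control plane node actually answers on TCP port 6443, because that is exactly what the load balancer probes. The following is a minimal sketch using only bash builtins and `timeout`; the host names are placeholders matching the example configuration used later in this topic.

```shell
#!/usr/bin/env bash
# Sketch: probe TCP port 6443 on each control plane backend, the same
# check the load balancer health check performs. Host names are
# placeholders; substitute the nodes in your environment.
probe_tcp() {
  local host=$1 port=$2
  # bash's /dev/tcp pseudo-device opens a TCP connection; timeout
  # bounds the attempt so an unreachable host does not hang the loop.
  if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "${host}:${port} reachable"
  else
    echo "${host}:${port} NOT reachable"
  fi
}

for node in control1.example.com control2.example.com control3.example.com; do
  probe_tcp "$node" 6443
done
```

Run this from a host in the same VCN as the backends so the result reflects what the load balancer will see.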
- On the operator node, create an Oracle Cloud Native Environment configuration file for the deployment. For information on creating a configuration file, see Platform Command-Line Interface. This example uses the file name `myenvironment.yaml` for the configuration file.

  A basic example configuration file that installs the Kubernetes module and the Oracle Cloud Infrastructure Cloud Controller Manager module is:

  ```yaml
  environments:
    - environment-name: myenvironment
      globals:
        api-server: operator.example.com:8091
        selinux: enforcing
      modules:
        - module: kubernetes
          name: mycluster
          args:
            container-registry: container-registry.oracle.com/olcne
            load-balancer: lb.example.com:6443
            control-plane-nodes:
              - control1.example.com:8090
              - control2.example.com:8090
              - control3.example.com:8090
            worker-nodes:
              - worker1.example.com:8090
              - worker2.example.com:8090
              - worker3.example.com:8090
        - module: oci-ccm
          name: myoci
          args:
            oci-ccm-kubernetes-module: mycluster
            oci-region: us-ashburn-1
            oci-tenancy: ocid1.tenancy.oc1..unique_ID
            oci-compartment: ocid1.compartment.oc1..unique_ID
            oci-user: ocid1.user.oc1..unique_ID
            oci-fingerprint: b5:52:...
            oci-private-key-file: /home/opc/.oci/oci_api_key.pem
            oci-vcn: ocid1.vcn.oc1..unique_ID
            oci-lb-subnet1: ocid1.subnet.oc1..unique_ID
  ```
  This example configuration file uses the default settings to create a Kubernetes cluster with three control plane nodes and three worker nodes, and uses an external load balancer that's already set up on Oracle Cloud Infrastructure. Change the nodes listed to reflect the ones in your environment, and change the load balancer address to that of the Oracle Cloud Infrastructure load balancer. Several values are required to set up the Oracle Cloud Infrastructure Cloud Controller Manager module; for information about what to provide for this module, see Oracle Cloud Infrastructure Cloud Controller Manager Module.
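A typo in the OCI-related arguments typically only surfaces partway through provisioning, so a quick pre-flight check of the configuration file can save a failed run. The helper below is a hypothetical sketch, not part of `olcnectl`; it simply greps the file for the keys the `oci-ccm` module example above uses.

```shell
#!/usr/bin/env bash
# Sketch: pre-flight check that the oci-ccm module arguments are all
# present in a configuration file before running olcnectl provision.
# The key list mirrors the example configuration in this topic; this
# helper is not part of olcnectl.
check_ccm_keys() {
  local config=$1 key missing=0
  for key in oci-ccm-kubernetes-module oci-region oci-tenancy oci-compartment \
             oci-user oci-fingerprint oci-private-key-file oci-vcn oci-lb-subnet1; do
    if ! grep -q "[[:space:]]${key}:" "$config"; then
      echo "missing: ${key}"
      missing=1
    fi
  done
  if [ "$missing" -eq 0 ]; then
    echo "all oci-ccm keys present"
  fi
  return "$missing"
}

# Demonstration on a deliberately incomplete fragment:
cat > /tmp/fragment.yaml <<'EOF'
        args:
          oci-region: us-ashburn-1
          oci-tenancy: ocid1.tenancy.oc1..unique_ID
EOF
check_ccm_keys /tmp/fragment.yaml || echo "pre-flight failed: fix the file first"
```

This only checks for key presence, not that the OCID values themselves are valid; provisioning still validates the actual credentials.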
  Tip: Private CA Certificates are automatically generated for communication between the Kubernetes nodes and for the Kubernetes `externalIPs` service. To use your own CA Certificates, or to add more modules to the configuration file, see the information about these options in Quick Install using Configuration File.
- On the operator node, use the `olcnectl provision` command with the `--config-file` option to start the installation. For example:

  ```shell
  olcnectl provision --config-file myenvironment.yaml
  ```
  Several other command options might be required, such as the SSH login credentials, proxy server information, and the option to automatically accept any prompts using the `--yes` option. For information on the syntax options for the `olcnectl provision` command, see Platform Command-Line Interface.
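As a concrete sketch of a fuller invocation: `--ssh-login-name`, `--ssh-identity-file`, and `--yes` are `olcnectl provision` options for the SSH credentials and prompt handling mentioned above (check the Platform CLI reference for the full list). The `opc` login name and key path below are assumptions typical of an OCI compute instance, and the command is echoed here as a dry run.

```shell
#!/usr/bin/env bash
# Sketch: assemble a provision command with explicit SSH credentials.
# The login name (opc) and key path are placeholder assumptions for a
# typical OCI compute instance; adjust for your environment.
cmd="olcnectl provision \
  --config-file myenvironment.yaml \
  --ssh-login-name opc \
  --ssh-identity-file /home/opc/.ssh/id_rsa \
  --yes"
echo "$cmd"   # echoed as a dry run; drop the echo to execute
```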
- A list of the steps to be performed on each node is displayed and a prompt is displayed to proceed. For example, on a control plane node, the changes might look similar to:

  ```
  ? Apply control-plane configuration on control1.example.com:
  * Install oracle-olcne-release
  ...
  * Install and enable olcne-agent
  Proceed? yes/no(default) yes
  ```
  Enter `yes` to continue. The node is set up.

  Information about the changes on each node is displayed. You need to confirm the setup steps for each node.
  Tip: To avoid accepting the changes on each node, use the `--yes` command option with the `olcnectl provision` command.
- The nodes are set up with the Oracle Cloud Native Environment platform and the modules are installed. You can show information about the environment using the syntax:

  ```shell
  olcnectl module instances --api-server host_name:8091 --environment-name name
  ```
  Tip: To avoid using the `--api-server` option in future `olcnectl` commands, add the `--update-config` option. For example:

  ```shell
  olcnectl module instances \
    --api-server operator.example.com:8091 \
    --environment-name myenvironment \
    --update-config
  ```
  The output looks similar to:

  ```
  INFO[...] Global flag configuration for myenvironment has been written to the local Platform config and you don't need to specify them for any future calls

  INSTANCE                   MODULE      STATE
  control1.example.com:8090  node        installed
  ...
  mycluster                  kubernetes  installed
  ```
  To see more information about the deployment, use the `olcnectl module report` command. For example:

  ```shell
  olcnectl module report \
    --environment-name myenvironment \
    --name mycluster \
    --children
  ```
- Set up the Kubernetes CLI (`kubectl`) on a control plane node. The `kubectl` command is installed on each control plane node in the cluster. To use it to access the cluster, you need to configure it using the Kubernetes configuration file.

  Log in to a control plane node and run these commands in the user's home directory:

  ```shell
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
  export KUBECONFIG=$HOME/.kube/config
  echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc
  ```
  Verify that you can use `kubectl` by running any `kubectl` command, such as:

  ```shell
  kubectl get deployments --all-namespaces
  ```
  The output looks similar to:

  ```
  NAMESPACE                      NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
  externalip-validation-system   externalip-validation-webhook   1/1     1            1           29m
  kube-system                    coredns                         2/2     2            2           30m
  kubernetes-dashboard           kubernetes-dashboard            1/1     1            1           29m
  ocne-modules                   ocne-module-operator            1/1     1            1           29m
  ```
  Note: After the deployment, a Kubernetes configuration file is created in the local directory of the operator node. The file is named `kubeconfig.environment_name.cluster_name` and contains information about the Kubernetes cluster. This file is created for convenience and isn't required to set up `kubectl` on the control plane nodes.

  You might want to merge this file into a larger Kubernetes configuration file if you have multiple clusters. See the upstream Kubernetes documentation for more information on configuring access to multiple clusters.
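One upstream-documented way to work with multiple clusters is to list several files in the `KUBECONFIG` environment variable, which `kubectl` merges in order. A minimal sketch, using the example environment and cluster names from this topic for the operator-node file:

```shell
#!/usr/bin/env bash
# Sketch: point kubectl at both an existing config and the file the
# deployment wrote on the operator node. kubectl merges the listed
# files in order; the file name here uses this topic's example
# environment (myenvironment) and cluster (mycluster) names.
export KUBECONFIG="$HOME/.kube/config:$PWD/kubeconfig.myenvironment.mycluster"
echo "$KUBECONFIG"

# With kubectl installed, the merged view can then be inspected with:
#   kubectl config view --flatten
```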
  Tip: Adding and Removing Nodes to Scale a Kubernetes Cluster

  To change the nodes in the Kubernetes cluster, run the `olcnectl provision` command again with updated control plane and worker node lists: any nodes you omit from a new node list are removed from the cluster, while any new nodes you specify are added to it.
If you're adding nodes, new certificates are automatically generated for you and installed on the new nodes, the Oracle Cloud Native Environment software is installed, and the nodes are added to the Kubernetes cluster. However, you still need to ensure that all new nodes have been set up with the required prerequisites (see Prerequisites), and that any new control plane nodes have been added to the load balancer if you're using an external load balancer.
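As a concrete sketch of the scale-out case: append one worker to the `worker-nodes` list in the configuration file, then re-run provisioning so the new node is added to the cluster. The `worker4.example.com` node is a placeholder, the fragment below stands in for the full configuration file, and the final provision command is echoed rather than executed.

```shell
#!/usr/bin/env bash
# Sketch: add a fourth worker to the worker-nodes list, then re-run
# provision. worker4.example.com is a placeholder node; /tmp is used
# here so the edit can be demonstrated on a stand-in fragment.
config=/tmp/myenvironment.yaml
cat > "$config" <<'EOF'
          worker-nodes:
            - worker1.example.com:8090
            - worker2.example.com:8090
            - worker3.example.com:8090
EOF

# Append the new node after the last existing worker entry (GNU sed):
sed -i 's|- worker3.example.com:8090|&\n            - worker4.example.com:8090|' "$config"
cat "$config"

echo "olcnectl provision --config-file ${config} --yes"   # dry run
```

Remember that the new node must already meet the prerequisites, and that any new control plane node must also be added to the load balancer's backend set.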