The software described in this documentation is either no longer supported or is in extended support.
Oracle recommends that you upgrade to a current supported release.

Chapter 2 Creating a Kubernetes Cluster

This chapter shows you how to use the Platform CLI (olcnectl) to create a Kubernetes cluster. This chapter assumes you have installed the Oracle Cloud Native Environment software packages on the nodes, configured the nodes for use in a cluster, and created an environment in which to install the Kubernetes module, as discussed in Getting Started.

The high-level steps to create a Kubernetes cluster are:

  • Create a Kubernetes module to specify information about the cluster.

  • Validate the Kubernetes module to make sure Kubernetes can be installed on the nodes.

  • Install the Kubernetes module to install the Kubernetes packages on the nodes and create the cluster.

The olcnectl command is used to perform these steps. For more information on the syntax for the olcnectl command, see Platform Command-Line Interface.
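In outline, and using the environment and module names from the examples later in this chapter, the workflow is three commands; the trailing ellipsis in the first command stands for the module create options described in Section 2.1:

olcnectl module create --environment-name myenvironment --module kubernetes --name mycluster ...
olcnectl module validate --environment-name myenvironment --name mycluster
olcnectl module install --environment-name myenvironment --name mycluster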

2.1 Creating a Kubernetes Module

The Kubernetes module can be set up to create a:

  • Highly available (HA) cluster with an external load balancer

  • HA cluster with an internal load balancer

  • Cluster with a single control plane node (non-HA cluster)

To create an HA cluster, you need at least three control plane nodes and two worker nodes.

For information on setting up an external load balancer, or for information on preparing the control plane nodes to use the internal load balancer installed by the Platform CLI, see Getting Started.

A number of additional ports must be open on the control plane nodes in an HA cluster. For information on opening the required ports for an HA cluster, see Getting Started.
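The exact list of ports and protocols is in Getting Started. As an illustrative sketch only, opening one such port with firewalld on a control plane node follows this pattern (6444 is an example port; confirm the full list against Getting Started):

sudo firewall-cmd --add-port=6444/tcp --permanent
sudo firewall-cmd --reload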

Use the olcnectl module create command to create a Kubernetes module. If you do not include all the required options when using this command, you are prompted to provide them. For the full list of options available for the Kubernetes module, see Platform Command-Line Interface.

2.1.1 Creating an HA Cluster with External Load Balancer

This section shows you how to create a Kubernetes module to create an HA cluster using an external load balancer.

The following example creates an HA cluster using your own load balancer, available on the host lb.example.com and running on port 6443.

olcnectl module create \
--environment-name myenvironment \
--module kubernetes \
--name mycluster \
--container-registry container-registry.oracle.com/olcne \
--load-balancer lb.example.com:6443 \
--master-nodes control1.example.com:8090,control2.example.com:8090,control3.example.com:8090 \
--worker-nodes worker1.example.com:8090,worker2.example.com:8090,worker3.example.com:8090,worker4.example.com:8090 \
--restrict-service-externalip-ca-cert=/etc/olcne/configs/certificates/restrict_external_ip/production/ca.cert \
--restrict-service-externalip-tls-cert=/etc/olcne/configs/certificates/restrict_external_ip/production/node.cert \
--restrict-service-externalip-tls-key=/etc/olcne/configs/certificates/restrict_external_ip/production/node.key

The --environment-name option sets the name of the environment in which to create the Kubernetes module. This example sets it to myenvironment.

The --module option sets the module type to create. To create a Kubernetes module this must be set to kubernetes.

The --name option sets the name used to identify the Kubernetes module. This example sets it to mycluster.

The --container-registry option specifies the container registry from which to pull the Kubernetes images. This example uses the Oracle Container Registry, but you may also use an Oracle Container Registry mirror, or a local registry with the Kubernetes images mirrored from the Oracle Container Registry. For information on using an Oracle Container Registry mirror, or creating a local registry, see Getting Started.

You can also set a new default container registry value later, during an update or upgrade of the Kubernetes module.
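For example, a later update might set a new default registry with the olcnectl module update command. This is a sketch only; the registry host is hypothetical:

olcnectl module update \
--environment-name myenvironment \
--name mycluster \
--container-registry myregistry.example.com:5000/olcne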

The --load-balancer option sets the hostname and port of an external load balancer. This example sets it to lb.example.com:6443.

The --master-nodes option includes a comma-separated list of the hostnames or IP addresses of the control plane nodes to be included in the cluster, and the port number on which the Platform Agent is available. The default port number is 8090.

Note

You can create a cluster that uses an external load balancer with a single control plane node. However, HA and failover features are not available until the cluster contains at least three control plane nodes. To increase the number of control plane nodes, scale up the cluster, as sketched below. For information on scaling up the cluster, see Section 6.1, “Scaling Up a Kubernetes Cluster”.
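As a sketch (the authoritative procedure is in Section 6.1), scaling up passes the complete new lists of control plane and worker nodes to the olcnectl module update command; nodes omitted from the lists may be removed from the cluster:

olcnectl module update \
--environment-name myenvironment \
--name mycluster \
--master-nodes control1.example.com:8090,control2.example.com:8090,control3.example.com:8090 \
--worker-nodes worker1.example.com:8090,worker2.example.com:8090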

The --worker-nodes option includes a comma-separated list of the hostnames or IP addresses of the worker nodes to be included in the cluster, and the port number on which the Platform Agent is available. If a worker node is behind a NAT gateway, use the public IP address for the node. The worker node's interface behind the NAT gateway must have a public IP address using the /32 subnet mask that is reachable by the Kubernetes cluster. The /32 subnet mask restricts the subnet to one IP address, so that all traffic from the Kubernetes cluster flows through this public IP address (for more information about configuring NAT, see Getting Started). The default port number is 8090.
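For illustration only, assigning such a /32 address to a worker node's interface might look like the following, using a hypothetical public address (203.0.113.10) and interface name (ens0). An address added this way does not persist across reboots; Getting Started has the authoritative NAT configuration steps:

sudo ip addr add 203.0.113.10/32 dev ens0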

You must also include the location of the certificates for the externalip-validation-webhook-service Kubernetes service. These certificates must be located on the operator node. The --restrict-service-externalip-ca-cert option sets the location of the CA certificate. The --restrict-service-externalip-tls-cert option sets the location of the node certificate. The --restrict-service-externalip-tls-key option sets the location of the node key. For information on setting up these certificates, see Getting Started.

Important

In Releases 1.2.0 and 1.1.8 or earlier, the options for the externalip-validation-webhook-service Kubernetes service are not required and cannot be used. These options are available, and required, only in Releases 1.2.2 and 1.1.10 or later.

You can optionally use the --restrict-service-externalip-cidrs option to set the external IP addresses that can be accessed by Kubernetes services. For example:

--restrict-service-externalip-cidrs=192.0.2.0/24,198.51.100.0/24

In this example, the IP ranges that are allowed are within the 192.0.2.0/24 and 198.51.100.0/24 CIDR blocks.

You can optionally set the network interface to use for the Kubernetes data plane (the interface used by the pods running on Kubernetes). By default, the interface used by the Platform Agent (set with the --master-nodes and --worker-nodes options) is used for both the Kubernetes control plane and the data plane. To specify a separate network interface for the data plane, include the --pod-network-iface option. For example, --pod-network-iface ens1. This results in the control plane using the network interface used by the Platform Agent, and the data plane using a separate network interface, which in this example is ens1.

Note

You can also use a regular expression with the --pod-network-iface option. For example:

--pod-network-iface "ens[1-5]|eth5"

If you use a regular expression to set the interface name, the first matching interface returned by the kernel is used.

If you set SELinux to enforcing mode (the operating system default and the recommended mode) on the control plane and worker nodes, you must also use the --selinux enforcing option when you create the Kubernetes module. When you validate the module, the Platform CLI checks whether SELinux is set to enforcing mode on the Kubernetes control plane and worker nodes. If enforcing mode is enabled and this option is not set, the Platform CLI exits with an error requesting that you set SELinux to permissive mode (the Oracle Cloud Native Environment default). To avoid this error, use the --selinux enforcing option when you want to use enforcing mode.
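For example, append the option to the olcnectl module create command shown earlier:

--selinux enforcing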

2.1.2 Creating an HA Cluster with Internal Load Balancer

This section shows you how to create a Kubernetes module to create an HA cluster using an internal load balancer, installed by the Platform CLI on the control plane nodes.

This example creates an HA cluster using the internal load balancer installed by the Platform CLI.

olcnectl module create \
--environment-name myenvironment \
--module kubernetes \
--name mycluster \
--container-registry container-registry.oracle.com/olcne \
--virtual-ip 192.0.2.100 \
--master-nodes control1.example.com:8090,control2.example.com:8090,control3.example.com:8090 \
--worker-nodes worker1.example.com:8090,worker2.example.com:8090,worker3.example.com:8090,worker4.example.com:8090 \
--restrict-service-externalip-ca-cert=/etc/olcne/configs/certificates/restrict_external_ip/production/ca.cert \
--restrict-service-externalip-tls-cert=/etc/olcne/configs/certificates/restrict_external_ip/production/node.cert \
--restrict-service-externalip-tls-key=/etc/olcne/configs/certificates/restrict_external_ip/production/node.key

The --virtual-ip option sets the virtual IP address to be used for the primary control plane node, for example, 192.0.2.100. This IP address must be available on the network and must not be assigned to any other host on the network. It is dynamically assigned to the control plane node elected as the primary controller by the load balancer.

If you are using a container registry mirror, you must also set the location of the NGINX image using the --nginx-image option. This option must be set to the location of your registry mirror in the format:

registry:port/olcne/nginx:version

For example:

--nginx-image myregistry.example.com:5000/olcne/nginx:1.17.7

All other options used in this example are described in Section 2.1.1, “Creating an HA Cluster with External Load Balancer”.

2.1.3 Creating a Cluster with a Single Control Plane Node

This section shows you how to create a Kubernetes module to create a cluster with a single control plane node. No load balancer is used or required with this type of cluster.

This example creates a cluster with a single control plane node.

olcnectl module create \
--environment-name myenvironment \
--module kubernetes \
--name mycluster \
--container-registry container-registry.oracle.com/olcne \
--master-nodes control1.example.com:8090 \
--worker-nodes worker1.example.com:8090,worker2.example.com:8090 \
--restrict-service-externalip-ca-cert=/etc/olcne/configs/certificates/restrict_external_ip/production/ca.cert \
--restrict-service-externalip-tls-cert=/etc/olcne/configs/certificates/restrict_external_ip/production/node.cert \
--restrict-service-externalip-tls-key=/etc/olcne/configs/certificates/restrict_external_ip/production/node.key

The --master-nodes option should contain only one node.

All other options used in this example are described in Section 2.1.1, “Creating an HA Cluster with External Load Balancer”.

2.2 Validating a Kubernetes Module

When you have created a Kubernetes module in an environment, you should validate that the nodes are configured correctly before you install the module.

Use the olcnectl module validate command to validate that the nodes are configured correctly. For example, to validate the Kubernetes module named mycluster in the myenvironment environment:

olcnectl module validate \
--environment-name myenvironment \
--name mycluster

If there are any validation errors, the commands required to fix the nodes are provided in the output. If you want to save the commands as scripts, use the --generate-scripts option. For example:

olcnectl module validate \
--environment-name myenvironment \
--name mycluster \
--generate-scripts

A script is created for each node in the module, saved to the local directory, and named hostname:8090.sh. You can copy the script to the appropriate node, and run it to fix any validation errors.
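For example, to copy and run the script generated for the node control1.example.com (the ./ prefix stops scp from treating the colon in the file name as a host separator; the oracle user name is hypothetical):

scp ./control1.example.com:8090.sh oracle@control1.example.com:/tmp/
ssh oracle@control1.example.com 'sudo bash /tmp/control1.example.com:8090.sh'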

2.3 Installing a Kubernetes Module

When you have created and validated a Kubernetes module, you use it to install Kubernetes on the nodes and create a cluster.

Use the olcnectl module install command to install Kubernetes on the nodes to create a cluster.

As part of installing the Kubernetes module:

  • The Kubernetes packages are installed on the nodes. The kubeadm package installs the packages required to run CRI-O and Kata Containers. CRI-O is needed to delegate containers to a runtime engine (either runc or kata-runtime). For more information about container runtimes, see Container Runtimes.

  • The crio and kubelet services are enabled and started.

  • If you are installing an internal load balancer, the olcne-nginx and keepalived services are enabled and started on the control plane nodes.

For example, use the following command to install the Kubernetes module named mycluster in the myenvironment environment and create the cluster:

olcnectl module install \
--environment-name myenvironment \
--name mycluster

Kubernetes is installed on the nodes, and the cluster is started and validated for health.

Important

Installing Kubernetes may take several minutes to complete.
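When the installation completes, one way to check the cluster from the operator node is to fetch the kubeconfig file and list the nodes. This sketch assumes the kubecfg module property and a host with kubectl installed; see Platform Command-Line Interface for the olcnectl module property get syntax:

olcnectl module property get \
--environment-name myenvironment \
--name mycluster \
--property kubecfg | base64 -d > kubeconfig.yaml
export KUBECONFIG=$(pwd)/kubeconfig.yaml
kubectl get nodes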