2 Creating a Kubernetes Cluster
This chapter shows you how to use the Platform CLI (olcnectl) to create a Kubernetes cluster. This chapter assumes you have installed the Oracle Cloud Native Environment software packages on the nodes, configured them to be used in a cluster, and created an environment in which to install the Kubernetes module, as discussed in Getting Started.
The high-level steps to create a Kubernetes cluster are:
- Create a Kubernetes module to specify information about the cluster.
- Validate the Kubernetes module to make sure Kubernetes can be installed on the nodes.
- Install the Kubernetes module to install the Kubernetes packages on the nodes and create the cluster.
The olcnectl command is used to perform these steps. For more information on the syntax for the olcnectl command, see Platform Command-Line Interface.
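For example, a minimal sketch of the three steps, using an environment named myenvironment and a Kubernetes module named mycluster (both names are illustrative, and the create command takes more options, described later in this chapter):
olcnectl module create \
  --environment-name myenvironment \
  --module kubernetes \
  --name mycluster
olcnectl module validate \
  --environment-name myenvironment \
  --name mycluster
olcnectl module install \
  --environment-name myenvironment \
  --name mycluster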
Tip:
You can also use a configuration file to create modules. The configuration file is a YAML file that contains the information about the environments and modules you want to deploy. Using a configuration file reduces the information you need to provide with olcnectl commands. For information on creating and using a configuration file, see Platform Command-Line Interface.
Setting the Kubernetes Pod Network
The following pod networking options are available for a Kubernetes cluster:
- Flannel. The default networking option when you create a Kubernetes module.
- Calico. Calico can be set up when you create the Kubernetes module, or afterwards as the Calico module, using your own configuration.
- Multus. Multus can be set up as the Multus module after the Kubernetes module is installed. Multus is installed as a module on top of either Flannel or Calico.
Flannel Networking
Flannel is the default networking option when you create a Kubernetes module. You do not need to set any command options to use Flannel; it is installed by default.
Calico Networking
Calico is an optional pod networking technology. For more information on Calico, see the upstream documentation.
Prerequisites
This section contains the prerequisite information you need to set up Calico.
Disabling firewalld Service
To use Calico, the firewalld service must be stopped and disabled. On each Kubernetes node, run:
sudo systemctl stop firewalld.service
sudo systemctl disable firewalld.service
Updating Proxy Configuration
If you are using a proxy server in your environment, edit the CRI-O proxy configuration file and add the Kubernetes service IP (the default is 10.96.0.1) to the NO_PROXY variable. For example, on each Kubernetes node, edit the /etc/systemd/system/crio.service.d/proxy.conf file:
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=https://proxy.example.com:3128"
Environment="NO_PROXY=mydomain.example.com,10.96.0.1"
Reload the configuration file and restart the crio service:
sudo systemctl daemon-reload
sudo systemctl restart crio.service
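To confirm the proxy variables are applied to the service, you can optionally query the loaded unit configuration (a standard systemd command, shown here as an illustration):
sudo systemctl show --property=Environment crio.service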
Note:
You don't need to perform this step if you're using the olcnectl provision command to perform a quick installation. This is set up for you automatically when using that installation method and you provide any proxy information.
Creating a Calico Configuration File
If you are installing the Calico module, you should create a Calico configuration file to configure Calico to your requirements. This file should contain the spec portion of an operator.tigera.io/v1/Installation. This file should be available on the operator node.
For information on the Calico configuration file, see the upstream documentation.
An example Calico configuration file is:
installation:
  cni:
    type: Calico
  calicoNetwork:
    bgp: Disabled
    ipPools:
      - cidr: 198.51.100.0/24
        encapsulation: VXLAN
  registry: container-registry.oracle.com
  imagePath: olcne
Deploying Calico with the Kubernetes Module
If you want to use the default configuration for Calico, you can set this as an option when you create the Kubernetes module, using the --pod-network calico option of the olcnectl module create --module kubernetes command. This sets Calico as the networking option instead of Flannel.
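For example, a minimal sketch (the environment and module names are illustrative, and other required options, such as the container registry and node lists, are omitted for brevity):
olcnectl module create \
  --environment-name myenvironment \
  --module kubernetes \
  --name mycluster \
  --pod-network calico \
  ...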
A minimum default configuration is used for Calico with this installation method. If you want to configure Calico, you should install the Calico module.
Deploying the Calico Module
You can optionally install the Calico module. This allows you to use your own configuration file for Calico.
To use this method, you must create the Kubernetes module using the --pod-network none option. This option sets no networking for pods in the Kubernetes cluster. You then install the Calico module to configure the pod networking.
For the syntax to use to create a Calico module, see the calico option of the olcnectl module create command in Platform Command-Line Interface.
To deploy the Calico module:
- Create and install a Kubernetes module using the --pod-network none option of the olcnectl module create --module kubernetes command. This option sets no networking for pods in the Kubernetes cluster. The name of the Kubernetes module in this example is mycluster.
Note:
When you install the Kubernetes module with the --pod-network none option, all kube-system pods are marked as pending until you install the Calico module. When Calico is installed, these kube-system pods are marked as running.
- Create a Calico module and associate it with the Kubernetes module named mycluster using the --calico-kubernetes-module option. In this example, the Calico module is named mycalico.
olcnectl module create \
  --environment-name myenvironment \
  --module calico \
  --name mycalico \
  --calico-kubernetes-module mycluster \
  --calico-installation-config calico-config.yaml
The --module option sets the module type to create, which is calico. You define the name of the Calico module using the --name option, which in this case is mycalico.
The --calico-kubernetes-module option sets the name of the Kubernetes module.
The --calico-installation-config option sets the location of the Calico configuration file. This file must be available on the operator node under the provided path. For information on creating this configuration file, see Creating a Calico Configuration File.
If you do not include all the required options when adding the module, you are prompted to provide them.
- Use the olcnectl module validate command to validate the Calico module can be deployed to the nodes. For example:
olcnectl module validate \
  --environment-name myenvironment \
  --name mycalico
- Use the olcnectl module install command to install the Calico module. For example:
olcnectl module install \
  --environment-name myenvironment \
  --name mycalico
The Calico module is deployed into the Kubernetes cluster.
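To confirm the pod network is up, you can optionally list the cluster pods from a node where kubectl is configured, and check that the kube-system pods have moved from pending to running (a standard kubectl query, shown as an illustration):
kubectl get pods --all-namespaces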
Multus Networking
Multus is an optional pod networking technology that creates a networking bridge to either Flannel or Calico. For more information on Multus, see the upstream documentation.
You can install Multus as a module to create a network bridge to Flannel or Calico. You can create a Multus module using the default configuration, or write a configuration file to suit your own requirements.
Prerequisites
This section contains the prerequisite information you need to set up Multus.
Updating Proxy Configuration
If you are using a proxy server in your environment, edit the CRI-O proxy configuration file and add the Kubernetes service IP (the default is 10.96.0.1) to the NO_PROXY variable. For example, on each Kubernetes node, edit the /etc/systemd/system/crio.service.d/proxy.conf file:
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=https://proxy.example.com:3128"
Environment="NO_PROXY=mydomain.example.com,10.96.0.1"
Reload the configuration file and restart the crio service:
sudo systemctl daemon-reload
sudo systemctl restart crio.service
Note:
You don't need to perform this step if you're using the olcnectl provision command to perform a quick installation. This is set up for you automatically when using that installation method and you provide any proxy information.
Creating a Multus Configuration File
If you are installing the Multus module, it is recommended that you create a configuration file to set up the networking to suit your requirements. The default configuration of Multus is not recommended for a production environment.
The configuration file should contain zero or more Kubernetes NetworkAttachmentDefinition Custom Resource Definitions. These definitions set up the network attachment, that is, the secondary interface for the pods. This file should be available on the operator node.
For information on creating the Multus configuration file, see the upstream documentation.
An example Multus configuration file is:
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: bridge-conf
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "bridge",
      "bridge": "mybr0",
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.12.0/24",
        "rangeStart": "192.168.12.10",
        "rangeEnd": "192.168.12.200"
      }
    }'
Deploying the Multus Module
This section contains the information on how to install the Multus module. You must have a Kubernetes module installed before you install Multus. The Kubernetes module can use either Flannel or Calico as the Kubernetes pod networking technology.
For the syntax to use to create a Multus module, see the multus option of the olcnectl module create command in Platform Command-Line Interface.
To deploy the Multus module:
- Create and install a Kubernetes module using either Flannel or Calico. The name of the Kubernetes module in this example is mycluster.
- Create a Multus module and associate it with the Kubernetes module named mycluster using the --multus-kubernetes-module option. In this example, the Multus module is named mymultus.
olcnectl module create \
  --environment-name myenvironment \
  --module multus \
  --name mymultus \
  --multus-kubernetes-module mycluster \
  --multus-config multus-config.conf
The --module option sets the module type to create, which is multus. You define the name of the Multus module using the --name option, which in this case is mymultus.
The --multus-kubernetes-module option sets the name of the Kubernetes module.
The --multus-config option sets the location of the Multus configuration file. This file must be available on the operator node under the provided path. For information on creating this configuration file, see Creating a Multus Configuration File.
If you do not include all the required options when adding the module, you are prompted to provide them.
- Use the olcnectl module validate command to validate the Multus module can be deployed to the nodes. For example:
olcnectl module validate \
  --environment-name myenvironment \
  --name mymultus
- Use the olcnectl module install command to install the Multus module. For example:
olcnectl module install \
  --environment-name myenvironment \
  --name mymultus
The Multus module is deployed into the Kubernetes cluster.
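Once Multus is deployed, a pod can request a secondary interface by referencing a NetworkAttachmentDefinition in an annotation. A minimal sketch, using the bridge-conf attachment from the earlier example (the pod name and image are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: multus-test
  annotations:
    k8s.v1.cni.cncf.io/networks: bridge-conf
spec:
  containers:
    - name: test
      image: container-registry.oracle.com/os/oraclelinux:8
      command: ["sleep", "infinity"]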
Creating a Kubernetes Module
The Kubernetes module can be set up to create a:
- Highly available (HA) cluster with an external load balancer
- HA cluster with an internal load balancer
- Cluster with a single control plane node (non-HA cluster)
To create an HA cluster you need at least three control plane nodes and two worker nodes.
For information on setting up an external load balancer, or for information on preparing the control plane nodes to use the internal load balancer installed by the Platform CLI, see Getting Started.
A number of additional ports are required to be open on control plane nodes in an HA cluster. For information on opening the required ports for an HA cluster, see Getting Started.
Use the olcnectl module create command to create a Kubernetes module. If you do not include all the required options when using this command, you are prompted to provide them. For the full list of the options available for the Kubernetes module, see Platform Command-Line Interface.
Creating an HA Cluster with External Load Balancer
This section shows you how to create a Kubernetes module to create an HA cluster using an external load balancer.
The following example creates an HA cluster using your own load balancer, available on the host lb.example.com and running on port 6443.
olcnectl module create \
  --environment-name myenvironment \
  --module kubernetes \
  --name mycluster \
  --container-registry container-registry.oracle.com/olcne \
  --load-balancer lb.example.com:6443 \
  --control-plane-nodes control1.example.com:8090,control2.example.com:8090,control3.example.com:8090 \
  --worker-nodes worker1.example.com:8090,worker2.example.com:8090,worker3.example.com:8090,worker4.example.com:8090 \
  --selinux enforcing \
  --restrict-service-externalip-ca-cert /etc/olcne/certificates/restrict_external_ip/ca.cert \
  --restrict-service-externalip-tls-cert /etc/olcne/certificates/restrict_external_ip/node.cert \
  --restrict-service-externalip-tls-key /etc/olcne/certificates/restrict_external_ip/node.key
The --environment-name option sets the name of the environment in which to create the Kubernetes module. This example sets it to myenvironment.
The --module option sets the module type to create. To create a Kubernetes module this must be set to kubernetes.
The --name option sets the name used to identify the Kubernetes module. This example sets it to mycluster.
The --container-registry option specifies the container registry from which to pull the Kubernetes images. This example uses the Oracle Container Registry, but you can also use an Oracle Container Registry mirror, or a local registry with the Kubernetes images mirrored from the Oracle Container Registry. For information on using an Oracle Container Registry mirror, or creating a local registry, see Getting Started. You can set a new default container registry value during an update or upgrade of the Kubernetes module.
The --load-balancer option sets the hostname and port of an external load balancer. This example sets it to lb.example.com:6443.
The --control-plane-nodes option includes a comma-separated list of the hostnames or IP addresses of the control plane nodes to be included in the cluster, and the port number on which the Platform Agent is available. The default port number is 8090.
Note:
You can create a cluster that uses an external load balancer with a single control plane node. However, HA and failover features are not available until you reach at least three control plane nodes in the cluster. To increase the number of control plane nodes, scale up the cluster. For information on scaling up the cluster, see Scaling Up a Kubernetes Cluster.
The --worker-nodes option includes a comma-separated list of the hostnames or IP addresses of the worker nodes to be included in the cluster, and the port number on which the Platform Agent is available. If a worker node is behind a NAT gateway, use the public IP address for the node. The worker node's interface behind the NAT gateway must have a public IP address using the /32 subnet mask that is reachable by the Kubernetes cluster. The /32 subnet restricts the subnet to one IP address, so that all traffic from the Kubernetes cluster flows through this public IP address (for more information about configuring NAT, see Getting Started). The default port number is 8090.
If SELinux is set to enforcing mode (the operating system default and the recommended mode) on the control plane node and worker nodes, you must also use the --selinux enforcing option when you create the Kubernetes module.
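To confirm the current SELinux mode on a node, you can run the standard getenforce command (shown as an illustration; it is not part of the olcnectl workflow):
getenforce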
You must also include the location of the certificates for the externalip-validation-webhook-service Kubernetes service. These certificates must be located on the operator node. The --restrict-service-externalip-ca-cert option sets the location of the CA certificate. The --restrict-service-externalip-tls-cert option sets the location of the node certificate. The --restrict-service-externalip-tls-key option sets the location of the node key. For information on setting up these certificates, see Getting Started.
You can optionally use the --restrict-service-externalip-cidrs option to set the external IP addresses that can be accessed by Kubernetes services. For example:
--restrict-service-externalip-cidrs 192.0.2.0/24,198.51.100.0/24
In this example, the IP ranges that are allowed are within the 192.0.2.0/24 and 198.51.100.0/24 CIDR blocks.
The default pod networking uses Flannel. You can optionally set the pod networking technology to Calico or to none. Set the pod networking using the --pod-network option. Using --pod-network calico sets Calico to be the CNI for pods instead of Flannel. Using --pod-network none sets no CNI, which allows you to use the Calico module to install Calico with a configuration file that suits your pod networking requirements. For more information on pod networking options, see Setting the Kubernetes Pod Network.
You can optionally set the network interface to use for the Kubernetes data plane (the interface used by the pods running on Kubernetes). By default, the interface used by the Platform Agent (set with the --control-plane-nodes and --worker-nodes options) is used for both the Kubernetes control plane node and the data plane. If you want to specify a separate network interface to use for the data plane, include the --pod-network-iface option. For example, --pod-network-iface ens1. This results in the control plane node using the network interface used by the Platform Agent, and the data plane using a separate network interface, which in this example is ens1.
Note:
You can also use a regular expression with the --pod-network-iface option. For example:
--pod-network-iface "ens[1-5]|eth5"
If you use a regular expression to set the interface name, the first matching interface returned by the kernel is used.
Creating an HA Cluster with Internal Load Balancer
This section shows you how to create a Kubernetes module to create an HA cluster using an internal load balancer, installed by the Platform CLI on the control plane nodes.
This example creates an HA cluster using the internal load balancer installed by the Platform CLI.
olcnectl module create \
  --environment-name myenvironment \
  --module kubernetes \
  --name mycluster \
  --container-registry container-registry.oracle.com/olcne \
  --virtual-ip 192.0.2.100 \
  --control-plane-nodes control1.example.com:8090,control2.example.com:8090,control3.example.com:8090 \
  --worker-nodes worker1.example.com:8090,worker2.example.com:8090,worker3.example.com:8090,worker4.example.com:8090 \
  --selinux enforcing \
  --restrict-service-externalip-ca-cert /etc/olcne/certificates/restrict_external_ip/ca.cert \
  --restrict-service-externalip-tls-cert /etc/olcne/certificates/restrict_external_ip/node.cert \
  --restrict-service-externalip-tls-key /etc/olcne/certificates/restrict_external_ip/node.key
The --virtual-ip option sets the virtual IP address to be used for the primary control plane node, for example, 192.0.2.100. This IP address should be available on the network and should not be assigned to any hosts on the network. This IP address is dynamically assigned to the control plane node assigned as the primary controller by the load balancer.
If you are using a container registry mirror, you must also set the location of the NGINX image using the --nginx-image option. This option must be set to the location of your registry mirror in the format:
registry:port/olcne/nginx:version
For example:
--nginx-image myregistry.example.com:5000/olcne/nginx:1.17.7
All other options used in this example are described in Creating an HA Cluster with External Load Balancer.
Creating a Cluster with a Single Control Plane Node
This section shows you how to create a Kubernetes module to create a cluster with a single control plane node. No load balancer is used or required with this type of cluster.
This example creates a cluster with a single control plane node.
olcnectl module create \
  --environment-name myenvironment \
  --module kubernetes \
  --name mycluster \
  --container-registry container-registry.oracle.com/olcne \
  --control-plane-nodes control1.example.com:8090 \
  --worker-nodes worker1.example.com:8090,worker2.example.com:8090 \
  --selinux enforcing \
  --restrict-service-externalip-ca-cert /etc/olcne/certificates/restrict_external_ip/ca.cert \
  --restrict-service-externalip-tls-cert /etc/olcne/certificates/restrict_external_ip/node.cert \
  --restrict-service-externalip-tls-key /etc/olcne/certificates/restrict_external_ip/node.key
The --control-plane-nodes option should contain only one node.
All other options used in this example are described in Creating an HA Cluster with External Load Balancer.
Validating a Kubernetes Module
When you have created a Kubernetes module in an environment, you should validate the nodes are configured correctly to install the module.
Use the olcnectl module validate command to validate the nodes are configured correctly. For example, to validate the Kubernetes module named mycluster in the myenvironment environment:
olcnectl module validate \
  --environment-name myenvironment \
  --name mycluster
If there are any validation errors, the commands required to fix the nodes are provided in the output. If you want to save the commands as scripts, use the --generate-scripts option. For example:
olcnectl module validate \
  --environment-name myenvironment \
  --name mycluster \
  --generate-scripts
A script is created for each node in the module, saved to the local directory, and named hostname:8090.sh. You can copy the script to the appropriate node, and run it to fix any validation errors.
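For example, a sketch of copying and running the generated script for a control plane node (the hostname and user are illustrative; the ./ prefix stops scp from treating the colon in the file name as a host separator):
scp ./control1.example.com:8090.sh oracle@control1.example.com:/tmp/
ssh oracle@control1.example.com 'bash /tmp/control1.example.com:8090.sh'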
Installing a Kubernetes Module
When you have created and validated a Kubernetes module, you use it to install Kubernetes on the nodes and create a cluster.
Use the olcnectl module install command to install Kubernetes on the nodes to create a cluster.
As part of installing the Kubernetes module:
- The Kubernetes packages are installed on the nodes. The kubeadm package installs the packages required to run CRI-O and Kata Containers. CRI-O is needed to delegate containers to a runtime engine (either runc or kata-runtime). For more information about container runtimes, see Container Runtimes.
- The crio and kubelet services are enabled and started.
- If you are installing an internal load balancer, the olcne-nginx and keepalived services are enabled and started on the control plane nodes.
For example, use the following command to use the Kubernetes module named mycluster in the myenvironment environment to create a cluster:
olcnectl module install \
  --environment-name myenvironment \
  --name mycluster
The Kubernetes module is used to install Kubernetes on the nodes and the cluster is started and validated for health.
Important:
Installing Kubernetes may take several minutes to complete.
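When the installation completes, you can optionally confirm the nodes have joined the cluster from a node where kubectl access is configured (a standard kubectl query, shown as an illustration):
kubectl get nodes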
Reporting Information about the Kubernetes Module
When you have installed a Kubernetes module, you can review information about the Kubernetes module and its properties.
Use the olcnectl module report command to review information about the module.
For example, use the following command to review the Kubernetes module named mycluster in myenvironment:
olcnectl module report \
  --environment-name myenvironment \
  --name mycluster \
  --children
For more information on the syntax for the olcnectl module report command, see Platform Command-Line Interface.