The software described in this documentation is either no longer supported or is in extended support.
Oracle recommends that you upgrade to a current supported release.

Chapter 2 Oracle Cloud Native Environment Prerequisites

This chapter describes the prerequisites for the systems to be used in an installation of Oracle Cloud Native Environment. This chapter also discusses how to enable the repositories to install the Oracle Cloud Native Environment packages.

2.1 Enabling Access to the Oracle Cloud Native Environment Packages

This section describes how to set up the package repository locations on the operating systems where you want to install the Oracle Cloud Native Environment software packages.

2.1.1 Oracle Linux 7

The Oracle Cloud Native Environment packages for Oracle Linux 7 are available on the Oracle Linux yum server in the ol7_olcne12 and ol7_olcne13 repositories, or on the Unbreakable Linux Network (ULN) in the ol7_x86_64_olcne12 and ol7_x86_64_olcne13 channels. However, there are also dependencies across other repositories and channels, and these must also be enabled on each system where Oracle Cloud Native Environment is installed.

Warning

Oracle does not support Kubernetes on systems where the ol7_preview, ol7_developer or ol7_developer_EPEL yum repositories or ULN channels are enabled, or where software from these repositories or channels is currently installed on the systems where Kubernetes runs. Even if you follow the instructions in this document, you may render your platform unsupported if these repositories or channels are enabled or software from these channels or repositories is installed on your system.

2.1.1.1 Enabling Channels with ULN

If you are registered to use ULN, use the ULN web interface to subscribe the system to the appropriate channels.

To subscribe to the ULN channels:
  1. Log in to https://linux.oracle.com with your ULN user name and password.

  2. On the Systems tab, click the link named for the system in the list of registered machines.

  3. On the System Details page, click Manage Subscriptions.

  4. On the System Summary page, select each required channel from the list of available channels and click the right arrow to move the channel to the list of subscribed channels.

    Oracle Cloud Native Environment Release 1.3

    To install Oracle Cloud Native Environment Release 1.3, subscribe the system to the following channels:

    • ol7_x86_64_olcne13

    • ol7_x86_64_kvm_utils

    • ol7_x86_64_addons

    • ol7_x86_64_latest

    • ol7_x86_64_UEKR5 or ol7_x86_64_UEKR6

    Make sure the systems are not subscribed to the following channels:

    • ol7_x86_64_olcne

    • ol7_x86_64_olcne11

    • ol7_x86_64_olcne12

    • ol7_x86_64_developer

    Oracle Cloud Native Environment Release 1.2

    To install Oracle Cloud Native Environment Release 1.2, subscribe the system to the following channels:

    • ol7_x86_64_olcne12

    • ol7_x86_64_kvm_utils

    • ol7_x86_64_addons

    • ol7_x86_64_latest

    • ol7_x86_64_UEKR5 or ol7_x86_64_UEKR6

    Make sure the systems are not subscribed to the following channels:

    • ol7_x86_64_olcne

    • ol7_x86_64_olcne11

    • ol7_x86_64_olcne13

    • ol7_x86_64_developer

  5. Click Save Subscriptions.

2.1.1.2 Enabling Repositories with the Oracle Linux Yum Server

If you are using the Oracle Linux yum server for system updates, enable the required yum repositories.

To enable the yum repositories:
  1. Install the oracle-olcne-release-el7 release package to set up the Oracle Cloud Native Environment yum repository configuration.

    sudo yum install oracle-olcne-release-el7
  2. Set up the repositories for the release you want to install.

    Oracle Cloud Native Environment Release 1.3

    To install Oracle Cloud Native Environment Release 1.3, enable the following yum repositories:

    • ol7_olcne13

    • ol7_kvm_utils

    • ol7_addons

    • ol7_latest

    • ol7_UEKR5 or ol7_UEKR6

    Use the yum-config-manager tool to enable the yum repositories:

    sudo yum-config-manager --enable ol7_olcne13 ol7_kvm_utils ol7_addons ol7_latest

    Make sure the ol7_olcne, ol7_olcne11, ol7_olcne12, and ol7_developer yum repositories are disabled:

    sudo yum-config-manager --disable ol7_olcne ol7_olcne11 ol7_olcne12 ol7_developer

    Oracle Cloud Native Environment Release 1.2

    To install Oracle Cloud Native Environment Release 1.2, enable the following yum repositories:

    • ol7_olcne12

    • ol7_kvm_utils

    • ol7_addons

    • ol7_latest

    • ol7_UEKR5 or ol7_UEKR6

    Use the yum-config-manager tool to enable the yum repositories:

    sudo yum-config-manager --enable ol7_olcne12 ol7_kvm_utils ol7_addons ol7_latest

    Make sure the ol7_olcne, ol7_olcne11, ol7_olcne13, and ol7_developer yum repositories are disabled:

    sudo yum-config-manager --disable ol7_olcne ol7_olcne11 ol7_olcne13 ol7_developer
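
    Whichever release you install, you can verify that the expected repositories are enabled before you continue. For example:

    sudo yum repolist enabled | grep ol7_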

2.1.2 Oracle Linux 8

The Oracle Cloud Native Environment packages for Oracle Linux 8 are available on the Oracle Linux yum server in the ol8_olcne12 and ol8_olcne13 repositories, or on the Unbreakable Linux Network (ULN) in the ol8_x86_64_olcne12 and ol8_x86_64_olcne13 channels. However, there are also dependencies across other repositories and channels, and these must also be enabled on each system where Oracle Cloud Native Environment is installed.

Warning

Oracle does not support Kubernetes on systems where the ol8_developer or ol8_developer_EPEL yum repositories or ULN channels are enabled, or where software from these repositories or channels is currently installed on the systems where Kubernetes runs. Even if you follow the instructions in this document, you may render your platform unsupported if these repositories or channels are enabled or software from these channels or repositories is installed on your system.

2.1.2.1 Enabling Channels with ULN

If you are registered to use ULN, use the ULN web interface to subscribe the system to the appropriate channels.

To subscribe to the ULN channels:
  1. Log in to https://linux.oracle.com with your ULN user name and password.

  2. On the Systems tab, click the link named for the system in the list of registered machines.

  3. On the System Details page, click Manage Subscriptions.

  4. On the System Summary page, select each required channel from the list of available channels and click the right arrow to move the channel to the list of subscribed channels.

    Oracle Cloud Native Environment Release 1.3

    To install Oracle Cloud Native Environment Release 1.3, subscribe the system to the following channels:

    • ol8_x86_64_olcne13

    • ol8_x86_64_addons

    • ol8_x86_64_baseos_latest

    • ol8_x86_64_appstream

    • ol8_x86_64_UEKR6

    Make sure the systems are not subscribed to the following channels:

    • ol8_x86_64_olcne12

    • ol8_x86_64_developer

    Oracle Cloud Native Environment Release 1.2

    To install Oracle Cloud Native Environment Release 1.2, subscribe the system to the following channels:

    • ol8_x86_64_olcne12

    • ol8_x86_64_addons

    • ol8_x86_64_baseos_latest

    • ol8_x86_64_appstream

    • ol8_x86_64_UEKR6

    Make sure the systems are not subscribed to the following channels:

    • ol8_x86_64_olcne13

    • ol8_x86_64_developer

  5. Click Save Subscriptions.

2.1.2.2 Enabling Repositories with the Oracle Linux Yum Server

If you are using the Oracle Linux yum server for system updates, enable the required yum repositories.

To enable the yum repositories:
  1. Install the oracle-olcne-release-el8 release package to set up the Oracle Cloud Native Environment yum repository configuration.

    sudo dnf install oracle-olcne-release-el8
  2. Set up the repositories for the release you want to install.

    Oracle Cloud Native Environment Release 1.3

    To install Oracle Cloud Native Environment Release 1.3, enable the following yum repositories:

    • ol8_olcne13

    • ol8_addons

    • ol8_baseos_latest

    • ol8_appstream

    • ol8_UEKR6

    Use the dnf config-manager tool to enable the yum repositories:

    sudo dnf config-manager --enable ol8_olcne13 ol8_addons ol8_baseos_latest ol8_appstream ol8_UEKR6

    Make sure the ol8_olcne12 and ol8_developer yum repositories are disabled:

    sudo dnf config-manager --disable ol8_olcne12 ol8_developer

    Oracle Cloud Native Environment Release 1.2

    To install Oracle Cloud Native Environment Release 1.2, enable the following yum repositories:

    • ol8_olcne12

    • ol8_addons

    • ol8_baseos_latest

    • ol8_appstream

    • ol8_UEKR6

    Use the dnf config-manager tool to enable the yum repositories:

    sudo dnf config-manager --enable ol8_olcne12 ol8_addons ol8_baseos_latest ol8_appstream ol8_UEKR6

    Make sure the ol8_olcne13 and ol8_developer yum repositories are disabled:

    sudo dnf config-manager --disable ol8_olcne13 ol8_developer
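
    Whichever release you install, you can verify that the expected repositories are enabled before you continue. For example:

    sudo dnf repolist --enabled | grep ol8_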

2.2 Accessing the Oracle Container Registry

The container images that are deployed by the Platform CLI are hosted on the Oracle Container Registry. For more information about the Oracle Container Registry, see the Oracle® Linux: Oracle Container Runtime for Docker User's Guide.

For a deployment to use the Oracle Container Registry, each node within the environment must be provisioned with direct access to the Internet.

You can optionally use an Oracle Container Registry mirror, or create your own private registry mirror within your network.

When you create a Kubernetes module, you must specify the registry from which to pull the container images. You set this using the --container-registry option of the olcnectl module create command. If you use the Oracle Container Registry, the container registry must be set to:

container-registry.oracle.com/olcne

If you use a private registry that mirrors the Oracle Cloud Native Environment container images on the Oracle Container Registry, make sure you set the container registry to the domain name and port of the private registry, for example:

myregistry.example.com:5000/olcne

When you set the container registry to use during an installation, it becomes the default registry from which to pull images during updates and upgrades of the Kubernetes module. You can set a new default value during an update or upgrade using the --container-registry option.
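
For example, a minimal sketch of setting the registry when creating the Kubernetes module might look like the following. The environment and module names are placeholders, and other required options, such as the node lists, are omitted:

olcnectl module create \
--environment-name myenvironment \
--module kubernetes \
--name mycluster \
--container-registry container-registry.oracle.com/olcne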

2.2.1 Using an Oracle Container Registry Mirror

The Oracle Container Registry has many mirror servers located around the world. You can use a registry mirror in your global region to improve download performance of container images. While the Oracle Container Registry mirrors are hosted on Oracle Cloud Infrastructure, they are also accessible external to Oracle Cloud Infrastructure. Using a mirror that is closest to your geographical location should result in faster download speeds.

To use an Oracle Container Registry mirror to pull images, use the format:

container-registry-region-key.oracle.com/olcne

For example, to use the Oracle Container Registry mirror in the US East (Ashburn) region, which has a region key of IAD, set the registry (using the --container-registry option) to:

container-registry-iad.oracle.com/olcne

For more information on Oracle Container Registry mirrors and finding the region key for a mirror in your location, see the Oracle Cloud Infrastructure documentation at:

https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm

2.2.2 Using a Private Registry

In some cases, nodes within your environment may not be provisioned with direct access to the Internet. In these cases, you can use a private registry that mirrors the Oracle Cloud Native Environment container images on the Oracle Container Registry. Each node requires direct access to the mirror registry host in this scenario.

You can use an existing container registry in your network, or create a private registry using Podman on an Oracle Linux 8 host. If you use an existing private container registry, skip the first step in the following procedure, which creates a registry.

To create a private registry:
  1. Select an Oracle Linux 8 host to use for your Oracle Container Registry mirror service. The mirror host must have access to the Internet and should be able to pull images directly from the Oracle Container Registry, or alternatively have access to the correct image files stored locally. Ideally, the host should not be a node in your Oracle Cloud Native Environment, but it should be accessible to all of the nodes that are part of the environment.

    On the mirror host, install Podman and set up a private registry, following the instructions in the Setting up a Local Container Registry section in Oracle® Linux: Podman User's Guide.

  2. On the mirror host, enable access to the Oracle Cloud Native Environment software packages. For information on enabling access to the packages, see Section 2.1, “Enabling Access to the Oracle Cloud Native Environment Packages”.

  3. Install the olcne-utils package so you have access to the registry mirroring utility.

    sudo dnf install olcne-utils

    If you are using an existing container registry in your network that is running on Oracle Linux 7, use yum instead of dnf to install olcne-utils.

  4. Copy the required container images from the Oracle Container Registry to the private registry using the registry-image-helper.sh script with the required options:

    registry-image-helper.sh --to host.example.com:5000/olcne

    Where host.example.com:5000 is the resolvable domain name and port on which your private registry is available.

    You can optionally use the --from option to specify an alternate registry from which to pull the images. For example, to pull the images from an Oracle Container Registry mirror:

    registry-image-helper.sh \
    --from container-registry-iad.oracle.com/olcne \
    --to host.example.com:5000/olcne

    If the host where you are running the script does not have access to the Internet, you can replace the --from option with the --local option to load the container images directly from a local directory. The local directory that contains the images should be either:

    • /usr/local/share/kubeadm/

    • /usr/local/share/olcne/

    The image files should be archives in TAR format. All TAR files in the directory are loaded into the private registry when the script is run with the --local option.

    You can use the --version option to specify the Kubernetes version you want to mirror. If not specified, the latest release is used. The available versions you can pull are those listed in the Release Notes.
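
    For example, to mirror the images for a specific Kubernetes version, you might run the following. The version number shown is illustrative; use a version listed in the Release Notes:

    registry-image-helper.sh \
    --version 1.18.10 \
    --to host.example.com:5000/olcne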

2.3 Setting up the Operating System

The following sections describe the requirements that must be met to install and configure Oracle Cloud Native Environment on Oracle Linux 7 and Oracle Linux 8 systems.

2.3.1 Setting up a Network Time Service

As a clustering environment, Oracle Cloud Native Environment requires that the system time is synchronized across each Kubernetes control plane and worker node within the cluster. Typically, this can be achieved by installing and configuring a Network Time Protocol (NTP) daemon on each node. Oracle recommends installing and setting up the chronyd daemon for this purpose.

The chronyd service is enabled and started by default on Oracle Linux 8 systems.

Systems running on Oracle Cloud Infrastructure are configured to use the chronyd time service by default, so there is no requirement to add or configure NTP if you are installing into an Oracle Cloud Infrastructure environment.

To set up chronyd on Oracle Linux 7:
  1. On each Kubernetes control plane and worker node, install the chrony package, if it is not already installed:

    sudo yum install chrony
  2. Edit the NTP configuration in /etc/chrony.conf. Your requirements may vary. If you are using DHCP to configure the networking for each node, it may be possible to configure the NTP servers automatically. If you do not have a locally configured NTP service that your systems can synchronize with, and your systems have Internet access, you can configure them to use the public pool.ntp.org service. See https://www.ntppool.org/.
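
    For example, a minimal /etc/chrony.conf excerpt that uses the public pool might contain the following directives. Adjust the servers for your site:

    pool pool.ntp.org iburst
    driftfile /var/lib/chrony/drift
    makestep 1.0 3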

  3. Make sure the NTP service is enabled to start at boot and that it is running before you proceed with the Oracle Cloud Native Environment installation. For example:

    sudo systemctl enable --now chronyd.service
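
    To confirm that chronyd is running and synchronizing, you can check its time sources. For example:

    chronyc sources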

For information on configuring a Network Time Service, see the Oracle® Linux 7: Administrator's Guide.

2.3.2 Disabling Swap

You must disable swap on the Kubernetes control plane and worker nodes. To disable swap, enter:

sudo swapoff -a

To make this permanent over reboots, edit the /etc/fstab file to remove or comment out any swap disks.
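
For example, a commented-out swap entry in /etc/fstab might look like the following. The device name is illustrative and varies by system:

#/dev/mapper/ol-swap   none   swap   defaults   0 0

With swap disabled, the swapon --show command produces no output:

sudo swapon --show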

2.4 Setting up the Network

This section contains information about the networking requirements for Oracle Cloud Native Environment nodes.

The following table shows the network ports used by the services in a deployment of Kubernetes in an environment.

From Node Type     To Node Type       Port         Protocol     Reason
Worker             Operator           8091         TCP(6)       Platform API Server
Control plane      Operator           8091         TCP(6)       Platform API Server
Control plane      Control plane      2379-2380    TCP(6)       Kubernetes etcd (highly available clusters)
Operator           Control plane      6443         TCP(6)       Kubernetes API server
Worker             Control plane      6443         TCP(6)       Kubernetes API server
Control plane      Control plane      6443         TCP(6)       Kubernetes API server
Control plane      Control plane      6444         TCP(6)       Alternate Kubernetes API server (highly available clusters)
Operator           Control plane      8090         TCP(6)       Platform Agent
Control plane      Control plane      10250        TCP(6)       Kubernetes kubelet API server
Control plane      Control plane      10251        TCP(6)       Kubernetes kube-scheduler (highly available clusters)
Control plane      Control plane      10252        TCP(6)       Kubernetes kube-controller-manager (highly available clusters)
Control plane      Control plane      10255        TCP(6)       Kubernetes kubelet API server for read-only access with no authentication
Control plane      Control plane      8472         UDP(11)      Flannel
Control plane      Worker             8472         UDP(11)      Flannel
Worker             Control plane      8472         UDP(11)      Flannel
Worker             Worker             8472         UDP(11)      Flannel
Control plane      Control plane      N/A          VRRP(112)    Keepalived for Kubernetes API server (highly available clusters)
Operator           Worker             8090         TCP(6)       Platform Agent
Control plane      Worker             10250        TCP(6)       Kubernetes kubelet API server
Control plane      Worker             10255        TCP(6)       Kubernetes kubelet API server for read-only access with no authentication

The following sections show you how to set up the network on each node to enable the communication between nodes in an environment.

2.4.1 Setting up the Firewall Rules

Oracle Linux 7 installs and enables firewalld by default. The Platform CLI notifies you of any rules that you may need to add during the deployment of the Kubernetes module. The Platform CLI also provides the commands to run to modify your firewall configuration to meet the requirements.

Make sure that all required ports are open. The ports required for a Kubernetes deployment are:

  • 2379/tcp: Kubernetes etcd server client API (on control plane nodes in highly available clusters)

  • 2380/tcp: Kubernetes etcd server client API (on control plane nodes in highly available clusters)

  • 6443/tcp: Kubernetes API server (control plane nodes)

  • 8090/tcp: Platform Agent (control plane and worker nodes)

  • 8091/tcp: Platform API Server (operator node)

  • 8472/udp: Flannel overlay network, VxLAN backend (control plane and worker nodes)

  • 10250/tcp: Kubernetes kubelet API server (control plane and worker nodes)

  • 10251/tcp: Kubernetes kube-scheduler (on control plane nodes in highly available clusters)

  • 10252/tcp: Kubernetes kube-controller-manager (on control plane nodes in highly available clusters)

  • 10255/tcp: Kubernetes kubelet API server for read-only access with no authentication (control plane and worker nodes)

The commands to open the ports and to set up the firewall rules are provided below.

2.4.1.1 Non-HA Cluster Firewall Rules

For a cluster with a single control plane node, the following ports must be open in the firewall.

Operator Node

On the operator node, run:

sudo firewall-cmd --add-port=8091/tcp --permanent

Restart the firewall for these rules to take effect:

sudo systemctl restart firewalld.service

Worker Nodes

On the Kubernetes worker nodes, run:

sudo firewall-cmd --zone=trusted --add-interface=cni0 --permanent
sudo firewall-cmd --add-port=8090/tcp --permanent
sudo firewall-cmd --add-port=10250/tcp --permanent
sudo firewall-cmd --add-port=10255/tcp --permanent
sudo firewall-cmd --add-port=8472/udp --permanent

If you are installing Oracle Cloud Native Environment Release 1.3.0 on Oracle Linux 7, you must also enable masquerading. This is not required for any other installation type or for later releases. On the Kubernetes worker nodes, run:

sudo firewall-cmd --add-masquerade --permanent

Restart the firewall for these rules to take effect:

sudo systemctl restart firewalld.service

Control Plane Nodes

On the Kubernetes control plane nodes, run:

sudo firewall-cmd --zone=trusted --add-interface=cni0 --permanent
sudo firewall-cmd --add-port=8090/tcp --permanent
sudo firewall-cmd --add-port=10250/tcp --permanent
sudo firewall-cmd --add-port=10255/tcp --permanent
sudo firewall-cmd --add-port=8472/udp --permanent
sudo firewall-cmd --add-port=6443/tcp --permanent

If you are installing Oracle Cloud Native Environment Release 1.3.0 on Oracle Linux 7, you must also enable masquerading. This is not required for any other installation type or for later releases. On the Kubernetes control plane nodes, run:

sudo firewall-cmd --add-masquerade --permanent

Restart the firewall for these rules to take effect:

sudo systemctl restart firewalld.service

2.4.1.2 Highly Available Cluster Firewall Rules

For a highly available cluster, open all the firewall ports described in Section 2.4.1.1, “Non-HA Cluster Firewall Rules”, along with the following additional ports on the control plane nodes.

On the Kubernetes control plane nodes, run:

sudo firewall-cmd --add-port=10251/tcp --permanent
sudo firewall-cmd --add-port=10252/tcp --permanent
sudo firewall-cmd --add-port=2379/tcp --permanent
sudo firewall-cmd --add-port=2380/tcp --permanent

Restart the firewall for these rules to take effect:

sudo systemctl restart firewalld.service
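
To confirm that the rules are in place on a node, you can list the ports that are open in the default zone. For example, on a control plane node in a highly available cluster, the output should be similar to:

sudo firewall-cmd --list-ports
8090/tcp 10250/tcp 10255/tcp 8472/udp 6443/tcp 10251/tcp 10252/tcp 2379/tcp 2380/tcp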

2.4.2 Setting up Other Network Options

This section contains information on other network-related configuration that affects an Oracle Cloud Native Environment deployment. You may not need to make any of the changes described in this section, but the information is provided to help you understand issues you may encounter that relate to network configuration.

2.4.2.1 Internet Access

The Platform CLI checks that it can access the container registry, and possibly other Internet resources, to pull any required container images. Unless you intend to set up a local registry mirror for container images, the systems on which you intend to install Oracle Cloud Native Environment must either have direct Internet access or be configured to use a proxy.

2.4.2.2 Flannel Network

The Platform CLI configures a flannel network as the network fabric used for communications between Kubernetes pods. This overlay network uses VxLANs to facilitate network connectivity. For more information on flannel, see the upstream documentation at:

https://github.com/coreos/flannel

By default, the Platform CLI creates a network in the 10.244.0.0/16 range to host this network. The Platform CLI provides an option to set the network range to an alternate range, if required, during installation. Systems in an Oracle Cloud Native Environment deployment must not have any network devices configured for this reserved IP range.
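
For example, a sketch of overriding the default pod network range when creating the Kubernetes module might look like the following. The --pod-cidr option name and the range shown are assumptions; check the olcnectl module create options for your release:

olcnectl module create \
--environment-name myenvironment \
--module kubernetes \
--name mycluster \
--pod-cidr 10.200.0.0/16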

2.4.2.3 br_netfilter Module

The Platform CLI checks whether the br_netfilter module is loaded and exits if it is not available. This module is required to enable transparent masquerading and to facilitate Virtual Extensible LAN (VxLAN) traffic for communication between Kubernetes pods across the cluster. If you need to check whether it is loaded, run:

sudo lsmod | grep br_netfilter
br_netfilter           24576  0
bridge                155648  2 br_netfilter,ebtable_broute

If you see output similar to this, the br_netfilter module is loaded. Kernel modules are usually loaded as they are needed, and it is unlikely that you need to load this module manually. If necessary, you can load the module manually and make it persistent across reboots by running:

sudo modprobe br_netfilter
sudo sh -c 'echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf'

2.4.2.4 Bridge Tunable Parameters

Kubernetes requires that packets traversing a network bridge are processed for filtering and port forwarding. To achieve this, tunable parameters in the kernel bridge module are set automatically when the kubeadm package is installed, and a sysctl file is created at /etc/sysctl.d/k8s.conf that contains the following lines:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

If you modify this file, or create anything similar yourself, run the following command to load the bridge tunable parameters:

sudo /sbin/sysctl -p /etc/sysctl.d/k8s.conf
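
You can confirm that the parameters are in effect by querying them. For example:

sudo sysctl net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-iptables = 1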

2.4.2.5 Network Address Translation

Network Address Translation (NAT) is sometimes required when one or more Kubernetes worker nodes in a cluster are behind a NAT gateway. For example, you might want a control plane node in a secure company network, while other worker nodes are in a less secure, publicly accessible demilitarized zone. Or, you might have a worker node on a legacy network that you want to use in a cluster that primarily resides on a newer network. In these cases, the NAT gateway translates requests for an IP address that is accessible to the Kubernetes cluster into the IP address on the subnet behind the NAT gateway.

Note

Only worker nodes can be behind a NAT. Control plane nodes cannot be behind a NAT.

Regardless of what switches or network equipment you use to set up your NAT gateway, you must configure the following for a node behind a NAT gateway:

  • The node's interface behind the NAT gateway must have a public IP address with a /32 subnet mask that is reachable by the Kubernetes cluster. The /32 subnet mask restricts the subnet to one IP address, so that all traffic from the Kubernetes cluster flows through this public IP address.

  • The node's interface must also have a private IP address behind the NAT gateway, which your switch maps to the public IP address using NAT tables.

For example, you can use the following command to add the reachable IP address on the ens5 interface:

sudo ip addr add 192.168.64.6/32 dev ens5

You can then use the following command to add the private IP address on the same interface:

sudo ip addr add 192.168.192.2/18 dev ens5
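
Note that addresses added with the ip addr command do not persist across reboots. As a sketch, assuming the node uses NetworkManager and the connection profile is named ens5, you could persist the addresses with:

sudo nmcli connection modify ens5 +ipv4.addresses 192.168.64.6/32
sudo nmcli connection modify ens5 +ipv4.addresses 192.168.192.2/18
sudo nmcli connection up ens5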

2.5 Setting FIPS Mode

You can optionally configure Oracle Cloud Native Environment operator, control plane, and worker hosts to run in Federal Information Processing Standards (FIPS) mode as described in Oracle® Linux 8: Enhancing System Security. Oracle Cloud Native Environment uses the cryptographic binaries of OpenSSL from Oracle Linux 8 when the host runs in FIPS mode.
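
For reference, a typical sequence to enable and then verify FIPS mode on Oracle Linux 8 is shown below; see the guide referenced above for the authoritative steps:

sudo fips-mode-setup --enable
sudo reboot

After the system reboots, check that FIPS mode is enabled:

fips-mode-setup --check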

Note

You cannot use Oracle Cloud Native Environment on Oracle Linux 7 hosts running in FIPS mode.