The software described in this documentation is either no longer supported or is in extended support.
Oracle recommends that you upgrade to a current supported release.

Chapter 1 Introduction to Updating and Upgrading

This document shows you how to update Oracle Cloud Native Environment and Kubernetes to the latest errata release, or upgrade from Release 1.3 to Release 1.4. This chapter uses the term upgrade to mean both upgrade and update as the overall process is the same.

The first step to upgrading is to upgrade the Oracle Cloud Native Environment software packages. This involves stopping the Platform API Server or Platform Agent on the node, upgrading the Oracle Cloud Native Environment packages, and restarting the Platform API Server or Platform Agent.
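The stop, update, and restart sequence above can be sketched as the following commands. This is a sketch only: the exact package names and the yum repository to enable depend on your Oracle Linux release and the release notes for the target version, and the commands assume a live Oracle Cloud Native Environment host.

```shell
# On the operator node: stop the Platform API Server, update the
# platform packages, then restart the service. The repository name
# ol8_olcne14 is an assumption for Oracle Linux 8 / Release 1.4;
# confirm it for your platform.
sudo systemctl stop olcne-api-server.service
sudo dnf --enablerepo=ol8_olcne14 update -y olcnectl olcne-api-server olcne-utils
sudo systemctl start olcne-api-server.service

# On each Kubernetes node: the same pattern for the Platform Agent.
sudo systemctl stop olcne-agent.service
sudo dnf --enablerepo=ol8_olcne14 update -y olcne-agent olcne-utils
sudo systemctl start olcne-agent.service
```

Repeat the agent steps on every control plane and worker node before updating the Kubernetes module.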

The next step is to upgrade the Kubernetes software packages. This is performed by the Platform API Server when you issue the appropriate olcnectl module update command.

You can upgrade a highly available cluster without bringing down the cluster. Control plane nodes are upgraded serially; as one control plane node is taken offline, another control plane node takes control of the cluster. In a cluster with a single control plane node, the control plane node is offline for a short time while the upgrade is performed.

Worker nodes are also upgraded serially. If your applications are running on more than one worker node, they should remain up and available during an upgrade.

Note

Certain Kubernetes objects may prevent a node from being taken offline for upgrade. A PodDisruptionBudget is one such object. To allow a node to be taken offline, increase the number of running pods so that it exceeds the minAvailable value. For more information about PodDisruptionBudgets, see the upstream documentation at:

https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#how-disruption-budgets-work
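A quick way to check whether a PodDisruptionBudget would block a drain is sketched below. These commands require access to a running cluster, and the deployment name myapp is hypothetical.

```shell
# List PodDisruptionBudgets in all namespaces. If ALLOWED DISRUPTIONS
# is 0 for a budget, draining a node that runs its pods will block.
kubectl get poddisruptionbudgets --all-namespaces

# Raising the replica count of the affected workload (myapp is a
# hypothetical example) increases the number of running pods above
# the budget's minAvailable value, allowing the drain to proceed.
kubectl scale deployment myapp --replicas=3
```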

Before an upgrade begins, a backup of the control plane nodes is taken to assist in any recovery that might be needed if a failure occurs.

Important

In the event of a module update failure, you can recover control plane nodes using the backup. For information on restoring from a control plane node backup, see Container Orchestration.

The Kubernetes release (either an errata or a new release) is then upgraded on each node. Control plane nodes are upgraded first, then the worker nodes. During the node upgrade process, the following steps are performed:

  1. The node is drained from the cluster (using the kubectl drain command), which evicts its pods.

  2. The kubeadm package is upgraded.

  3. The node is upgraded using the kubeadm upgrade command.

  4. The kubectl and kubelet packages are upgraded.

  5. The kubelet service is restarted.

  6. The node is returned to the cluster (using the kubectl uncordon command) and is made available to run pods.
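The drain and uncordon steps that the Platform API Server performs are equivalent to the following kubectl commands. This is an illustrative sketch: the node name worker1.example.com is hypothetical, the commands require a live cluster, and during a module update the platform runs these steps for you.

```shell
# Step 1: cordon the node and evict its pods. DaemonSet-managed pods
# cannot be evicted, so they are skipped.
kubectl drain worker1.example.com --ignore-daemonsets

# Steps 2-5 (package upgrades and the kubelet restart) happen on the
# node itself and are handled by the Platform Agent.

# Step 6: return the node to the cluster so pods can be scheduled
# on it again.
kubectl uncordon worker1.example.com
```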

To update or upgrade Kubernetes, you update the Kubernetes module in an environment using the olcnectl module update command. The olcnectl module update command options shown in the following chapters are the minimum commands required to upgrade Kubernetes. You may also want to use these additional options:

  • The --generate-scripts option generates scripts you can run for each node in the event of any validation failures encountered during the update of the module. A script is created for each node in the module, saved to the local directory, and named hostname:8090.sh.

  • The --force option suppresses the prompt displayed to confirm you want to update the module.

  • The --container-registry option allows you to specify a container registry that becomes the default registry when performing updates or upgrades. For example:

    --container-registry container-registry-austin-mirror.oracle.com/olcne/
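Putting the options above together, a module update invocation might look like the following sketch. The environment name, module name, and Platform API Server address are hypothetical placeholders; consult the chapters that follow for the exact options required for your upgrade path.

```shell
# Update the Kubernetes module; myenvironment and mycluster are
# illustrative names, and --api-server is only needed if it is not
# already configured for olcnectl.
olcnectl module update \
  --api-server operator.example.com:8091 \
  --environment-name myenvironment \
  --name mycluster \
  --generate-scripts \
  --force \
  --container-registry container-registry-austin-mirror.oracle.com/olcne/
```

The --generate-scripts and --force options are optional; omitting --force causes olcnectl to prompt for confirmation before updating the module.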