Upgrading Managed Nodes to Kubernetes version 1.35 (or later)
Find out how to upgrade managed nodes that currently use an OKE OL7 image, an OL7 platform image, or a custom image based on an OL7 image, to run Kubernetes version 1.35 (or later) and OL8, using Kubernetes Engine (OKE).
This section applies to managed nodes only. For information about upgrading self-managed nodes, see Upgrading Self-Managed Nodes to a Newer Kubernetes Version by Replacing an Existing Self-Managed Node.
Starting with Kubernetes version 1.35, Kubernetes requires cgroups v2 for container resource management.
Control Groups (cgroups) is a Linux kernel feature that provides a mechanism for managing and controlling resource allocation for processes or groups of processes. Control groups version 2 (cgroups v2) provides a single control group hierarchy against which all resource controllers are mounted. In this hierarchy, you can coordinate resource use across different resource controllers.
Oracle Linux 7 (OL7) supports cgroups v1, but does not support cgroups v2. Oracle Linux 8 (OL8) and later versions support both cgroups v1 and cgroups v2, but cgroups v2 is not always enabled by default in OL8.
In summary, Kubernetes version 1.35 requires OL8 (or later) with cgroups v2 enabled.
Cgroups v2 is enabled by default in OKE OL8 images that have a build number of 1367 or greater. However, in OKE OL8 images that have a build number lower than 1367 (and in OL8 platform images), cgroups v1 is enabled by default.
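To confirm which cgroup version a worker node's kernel is using, you can check the filesystem type mounted at /sys/fs/cgroup. This is a general Linux check rather than an OKE-specific tool: cgroup2fs indicates the unified cgroups v2 hierarchy, while tmpfs indicates the legacy cgroups v1 hierarchy. A minimal sketch:

```shell
#!/bin/sh
# Classify the cgroup version from the filesystem type mounted at /sys/fs/cgroup.
# "cgroup2fs" means the unified cgroups v2 hierarchy; "tmpfs" means cgroups v1.
cgroup_version() {
  case "$1" in
    cgroup2fs) echo "v2" ;;
    tmpfs)     echo "v1" ;;
    *)         echo "unknown" ;;
  esac
}

# Run on the node itself (for example, over SSH):
cgroup_version "$(stat -fc %T /sys/fs/cgroup)"
```

Running this on a node before and after an image change is a quick way to verify that the cgroups v2 requirement is actually met.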
For worker nodes on clusters running Kubernetes version 1.35 (and later), Kubernetes Engine supports the following images:
- OKE OL8 images, and custom images based on OKE OL8 images, that have a build number of 1367 or greater.
- OKE OL8 images, and custom images based on OKE OL8 images, that have a build number lower than 1367, but only if you enable cgroups v2 (see Enabling Cgroups v2 on OL8 Worker Nodes Using Custom Images).
- Custom images based on OL8 platform images, but only if you enable cgroups v2 (see Enabling Cgroups v2 on OL8 Worker Nodes Using Custom Images).
Note that the following images are not supported with Kubernetes version 1.35:
- OKE OL7 images.
- Platform images (both OL7 platform images and OL8 platform images).
- Custom images based on OKE OL7 images.
- Custom images based on OL7 platform images.
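Because the build number determines whether cgroups v2 is enabled by default, it can be useful to check it programmatically when reviewing images. The helper below is a hedged sketch that assumes the build number is the trailing numeric segment of the OKE image display name (for example, Oracle-Linux-8.10-2025.01.01-0-OKE-1.35.0-1400); verify this naming convention against the image names in your own tenancy before relying on it:

```shell
#!/bin/sh
# Hypothetical helper (naming convention is an assumption): extract the trailing
# build number from an OKE image display name and compare it against the 1367
# minimum for cgroups v2 being enabled by default.
build_number() {
  printf '%s\n' "$1" | awk -F- '{ print $NF }'
}

meets_min_build() {
  if [ "$(build_number "$1")" -ge 1367 ]; then echo "yes"; else echo "no"; fi
}

meets_min_build "Oracle-Linux-8.10-2025.01.01-0-OKE-1.35.0-1400"   # prints "yes"
```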
When upgrading managed nodes that currently use an OL8 image to Kubernetes version 1.35 (and later), note the following:
- You can upgrade managed nodes that currently use an OKE OL8 image (or a custom image based on an OKE OL8 image) with a build number lower than 1367, by performing an in-place upgrade and specifying an OKE OL8 image with a build number of 1367 or greater.
- You can upgrade managed nodes that currently use an OL8 platform image by performing an in-place upgrade and specifying an OKE OL8 image with a build number of 1367 or greater.
- You can perform in-place upgrades of managed nodes that currently use an OL8 image, regardless of whether the Linux kernels of compute instances hosting the nodes already have cgroups v2 enabled.
- For instructions to perform in-place upgrades, see Upgrading Managed Nodes to a Newer Kubernetes Version.
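As a sketch of what an in-place upgrade invocation looks like, the helper below only prints the oci ce node-pool update command so you can review it before running it. The --node-pool-id and --kubernetes-version flags are standard, but check `oci ce node-pool update --help` for the flag that sets the replacement node image on your CLI version; the OCID below is a placeholder:

```shell
#!/bin/sh
# Dry-run sketch: emit (rather than execute) the in-place upgrade command for
# review. The OCID argument is a placeholder, not a real resource.
upgrade_cmd() {
  node_pool_ocid="$1"
  k8s_version="$2"
  echo "oci ce node-pool update --node-pool-id ${node_pool_ocid} --kubernetes-version ${k8s_version}"
}

upgrade_cmd "<ol8-nodepool-ocid>" "v1.35.0"
```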
To upgrade managed nodes that currently use an OKE OL7 image, an OL7 platform image, or a custom image based on an OL7 image, to run Kubernetes version 1.35 (or later), perform an out-of-place upgrade to replace the existing node pool with a new node pool:
- Identify existing node pools that use OL7, using the Console or the CLI as follows:
  - Using the Console:
    - On the Clusters list page, select the name of the cluster containing the node pools you want to see. If you need help finding the list page or the cluster, see Listing Clusters.
    - Select the Node pools tab.
    - Use the Image name column to identify node pools that use Oracle-Linux-7.9.x images.
  - Using the CLI: Identify existing node pools that use OL7 by entering a command similar to the following:
    oci ce node-pool list --cluster-id <cluster-ocid> --compartment-id <compartment-ocid> --query 'data[*].{name:"name", image:"node-source"."image-id"}'
- Create a replacement node pool that uses an OL8 image, using the Console or the CLI as follows:
  - Using the Console:
    - On the Node pools tab, select Add node pool, and enter details for the new node pool.
    - Select an OKE Worker Node Image based on an Oracle Linux 8.x image.
    - Configure the new node pool with the same shape, size, and placement as the OL7 node pool.
    - Select Create.
  - Using the CLI: Create a replacement node pool that uses an OL8 image by entering a command similar to the following:
    oci ce node-pool create --cluster-id <cluster-ocid> --compartment-id <compartment-ocid> --name "nodepool-ol8-workers" --node-shape "VM.Standard.E4.Flex" --kubernetes-version "v1.35.0" --node-image-id <ol8-image-ocid> --size 3 --placement-configs '[{"availabilityDomain":"AD-1","subnetId":"<subnet-ocid>"}]'
- Cordon the OL7 nodes:
  - Get the names of the OL7 nodes by entering:
    kubectl get nodes -l node.kubernetes.io/os=ol7
  - Cordon each node by entering:
    kubectl cordon <node-name>
- Drain the OL7 nodes to gracefully migrate workloads:
  - Drain each OL7 node by entering:
    kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data --force
  - Verify that workloads have been rescheduled on the new nodes by entering:
    kubectl get pods -A -o wide | grep <node-name>
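The cordon and drain steps can also be scripted across all OL7 nodes at once. The sketch below only prints the kubectl commands (a dry run) so you can review them before executing anything; substitute real node names taken from the kubectl get nodes output:

```shell
#!/bin/sh
# Dry-run sketch: print the cordon/drain commands for each node name passed as
# an argument, so the commands can be reviewed before being run for real.
print_drain_cmds() {
  for node in "$@"; do
    echo "kubectl cordon ${node}"
    echo "kubectl drain ${node} --ignore-daemonsets --delete-emptydir-data --force"
  done
}

# Substitute real node names from the kubectl get nodes output:
print_drain_cmds "ol7-worker-0" "ol7-worker-1"
```

Printing the commands first keeps the migration reviewable; once the output looks right, pipe it to sh or run the kubectl commands directly inside the loop.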
- Delete the OL7 node pool using the Console or the CLI, as follows:
  - Using the Console:
    - On the Node pools tab, select Delete node pool from the Actions menu (three dots) beside the OL7 node pool.
    - Confirm that you want to delete the node pool, and select Delete.
  - Using the CLI: Delete the OL7 node pool by entering:
    oci ce node-pool delete --node-pool-id <ol7-nodepool-ocid>
- Upgrade the cluster (for example, to Kubernetes version 1.35) by entering:
  oci ce cluster update --cluster-id <cluster-ocid> --kubernetes-version "v1.35.0"
For more information, see Performing an Out-of-Place Managed Node Kubernetes Upgrade by Replacing an Existing Node Pool with a New Node Pool.