Upgrading the Kubernetes Version on Worker Nodes in a Cluster

You can upgrade the version of Kubernetes running on the worker nodes in a cluster in two ways:

  • Perform an 'in-place' upgrade of a node pool in the cluster, by specifying a more recent Kubernetes version for the existing node pool. First, you modify the existing node pool's properties to specify the more recent Kubernetes version. Then, you delete each worker node in turn, selecting appropriate cordon and drain options to prevent new pods from starting and to delete existing pods. A new worker node is started to take the place of each worker node you delete. When new worker nodes are started in the existing node pool, they run the more recent Kubernetes version you specified. See Performing an In-Place Worker Node Upgrade by Updating an Existing Node Pool.
  • Perform an 'out-of-place' upgrade of a node pool in the cluster, by replacing the original node pool with a new node pool. First, you create a new node pool with a more recent Kubernetes version. Then, you drain existing worker nodes in the original node pool to prevent new pods from starting and to delete existing pods. Finally, you delete the original node pool. When new worker nodes are started in the new node pool, they run the more recent Kubernetes version you specified. See Performing an Out-of-Place Worker Node Upgrade by Replacing an Existing Node Pool with a New Node Pool.

Note that in both cases:

  • The more recent Kubernetes version you specify for the worker nodes in the node pool must be compatible with the Kubernetes version running on the control plane nodes in the cluster. See Upgrading Clusters to Newer Kubernetes Versions.
  • You must drain existing worker nodes in the original node pool. If you don't drain the worker nodes, workloads running on the cluster are subject to disruption.
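Before starting either upgrade path, you can confirm the versions currently in play. A minimal sketch, assuming kubectl is configured with a kubeconfig file for the cluster:

```shell
# Show the Kubernetes version running on the control plane
# (reported as the server version)
kubectl version

# Show the Kubernetes version running on each worker node
# (reported in the VERSION column)
kubectl get nodes
```

The version you set for the node pool must be compatible with the server version reported by the first command.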

Performing an In-Place Worker Node Upgrade by Updating an Existing Node Pool

You can upgrade the version of Kubernetes running on worker nodes in a node pool by specifying a more recent Kubernetes version for the existing node pool.

You delete each worker node in turn, selecting appropriate cordon and drain options to prevent new pods from starting and to delete existing pods. A new worker node is started to take the place of each worker node you delete. When new worker nodes are started in the existing node pool, they run the more recent Kubernetes version you specified.

To perform an in-place upgrade of a node pool in a cluster, by specifying a more recent Kubernetes version for the existing node pool:

  1. In the Console, open the navigation menu and click Developer Services. Under Containers, click Kubernetes Clusters (OKE).
  2. Choose a Compartment you have permission to work in.
  3. On the Cluster List page, click the name of the cluster where you want to change the Kubernetes version running on worker nodes.
  4. On the Cluster page, display the Node Pools tab, and click the name of the node pool where you want to upgrade the Kubernetes version running on the worker nodes.

  5. On the Node Pool page, click Edit and in the Version field, specify the required Kubernetes version for worker nodes.

    The Kubernetes version you specify must be compatible with the version that is running on the control plane nodes.

  6. Click Save changes to save the change.

    You now have to delete existing worker nodes so that new worker nodes are started, running the Kubernetes version you specified.

    Recommended: Leverage pod disruption budgets as appropriate for your application to ensure that there's a sufficient number of replica pods running throughout the deletion operation. For more information, see Specifying a Disruption Budget for your Application in the Kubernetes documentation.
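A pod disruption budget can be created directly with kubectl. The names and label selector below are hypothetical, not part of this procedure; adapt them to your application:

```shell
# Hypothetical example: keep at least 2 replicas of pods labeled
# app=my-app available while nodes are drained
kubectl create poddisruptionbudget my-app-pdb \
  --selector=app=my-app \
  --min-available=2

# Confirm the budget exists and shows the expected minimum
kubectl get poddisruptionbudgets
```

With this budget in place, an eviction that would drop the application below two available replicas is refused until other replicas become ready.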

  7. For the first worker node in the node pool:

    1. On the Node Pool page, display the Nodes tab and select Delete Node from the Actions menu beside the node you want to delete.
    2. Either accept the defaults for advanced options, or click Show Advanced Options and specify alternatives as follows:

      • Cordon and drain: Specify when and how to cordon and drain worker nodes before terminating them.

        • Eviction grace period (mins): The length of time to allow to cordon and drain worker nodes before terminating them. Either accept the default (60 minutes), or specify an alternative. For example, you might want to allow 30 minutes to cordon worker nodes and drain them of their workloads. To terminate worker nodes immediately, without cordoning and draining them, specify 0 minutes.
        • Force terminate after grace period: Whether to terminate worker nodes at the end of the eviction grace period, even if they have not been successfully cordoned and drained. By default, this option is not selected.

          Select this option if you always want worker nodes terminated at the end of the eviction grace period, even if they have not been successfully cordoned and drained.

          De-select this option if you do not want worker nodes that have not been successfully cordoned and drained to be terminated at the end of the eviction grace period. Node pools containing worker nodes that could not be terminated within the eviction grace period have the Needs attention status (see Monitoring Clusters). The status of the work request that initiated the termination operation is set to Failed and the termination operation is cancelled.

        For more information, see Notes on cordoning and draining worker nodes before termination.

    3. Click Delete to delete the worker node.

    The worker node is deleted and a new worker node is started in its place, running the Kubernetes version you specified.

  8. Repeat the previous step for each remaining worker node in the node pool, until all worker nodes in the node pool are running the Kubernetes version you specified.
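To confirm that the in-place upgrade is complete, you can list the worker nodes and their versions. A minimal sketch, assuming kubectl is configured for the cluster:

```shell
# Every worker node in the pool should now report the more recent
# Kubernetes version you specified in the VERSION column
kubectl get nodes
```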

Performing an Out-of-Place Worker Node Upgrade by Replacing an Existing Node Pool with a New Node Pool

You can 'upgrade' the version of Kubernetes running on worker nodes in a node pool by replacing the original node pool with a new node pool that has new worker nodes running the appropriate Kubernetes version. Having drained existing worker nodes in the original node pool to prevent new pods starting and to delete existing pods, you can then delete the original node pool. When new worker nodes are started in the new node pool, they run the more recent Kubernetes version you specified.

To perform an 'out-of-place' upgrade of a node pool in a cluster, by creating a new node pool to 'upgrade' the Kubernetes version on worker nodes:

  1. In the Console, open the navigation menu and click Developer Services. Under Containers, click Kubernetes Clusters (OKE).
  2. Choose a Compartment you have permission to work in.
  3. On the Cluster List page, click the name of the cluster where you want to change the Kubernetes version running on worker nodes.
  4. On the Cluster page, display the Node Pools tab, and then click Add Node Pool to create a new node pool and specify the required Kubernetes version for its worker nodes.

    The Kubernetes version you specify must be compatible with the version that is running on the control plane nodes.

  5. If there are labels attached to worker nodes in the original node pool and those labels are used by selectors (for example, to determine the nodes on which to run pods), then use the kubectl label nodes command to attach the same labels to the new worker nodes in the new node pool. See Assigning Pods to Nodes in the Kubernetes documentation.
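The label step above can be sketched as follows. The label key/value and node name are hypothetical placeholders:

```shell
# Inspect the labels currently attached to the original worker nodes
kubectl get nodes --show-labels

# Hypothetical example: the original nodes carried disktype=ssd,
# used by a nodeSelector; apply the same label to a new worker node
kubectl label nodes <new_node_name> disktype=ssd
```

Repeat the label command for each new worker node that must match the selectors used by your workloads.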
  6. For the first worker node in the original node pool, prevent new pods from starting and delete existing pods by entering:

    kubectl drain <node_name>

    Recommended: Leverage pod disruption budgets as appropriate for your application to ensure that there's a sufficient number of replica pods running throughout the drain operation. For more information, see Specifying a Disruption Budget for your Application in the Kubernetes documentation.
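In practice, a bare kubectl drain often needs additional flags. A hedged sketch (flag availability depends on your kubectl version; older releases used --delete-local-data instead of --delete-emptydir-data):

```shell
# Drain a worker node: skip DaemonSet-managed pods (which cannot be
# evicted) and allow pods using emptyDir volumes to be deleted
kubectl drain <node_name> \
  --ignore-daemonsets \
  --delete-emptydir-data \
  --timeout=300s

# Confirm the node is cordoned (its status shows SchedulingDisabled)
kubectl get node <node_name>
```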

  7. Repeat the previous step for each remaining worker node in the node pool, until all the worker nodes have been drained from the original node pool.

    When you have drained all the worker nodes from the original node pool and pods are running on worker nodes in the new node pool, you can delete the original node pool.

  8. On the Cluster page, display the Node Pools tab, and then select Delete Node Pool from the Actions menu beside the original node pool.

    The original node pool and all its worker nodes are deleted.
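After the original node pool is deleted, you can verify that only the new worker nodes remain and that workloads have been rescheduled onto them. A minimal sketch, assuming kubectl is configured for the cluster:

```shell
# Only nodes from the new node pool should be listed, running the
# more recent Kubernetes version
kubectl get nodes

# Confirm pods are running on the new worker nodes (the NODE column)
kubectl get pods --all-namespaces -o wide
```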