Modifying Node Pool and Worker Node Properties

You can use Container Engine for Kubernetes to modify the properties of node pools and worker nodes in existing Kubernetes clusters.

You can change:

  • the name of a node pool
  • the version of Kubernetes to run on new worker nodes
  • the number of worker nodes in a node pool, and the availability domains and subnets in which to place them
  • the image to use for new worker nodes
  • the shape to use for new worker nodes
  • the boot volume size to use for new worker nodes
  • the public SSH key to use to access new worker nodes
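Several of these node pool properties can also be changed programmatically rather than through the Console. As a rough sketch (assuming the OCI CLI is installed and configured, and using a placeholder node pool OCID), an update request to rename a node pool and set the Kubernetes version for new worker nodes might look like:

```shell
# Sketch only: the OCID below is a placeholder; substitute your node pool's OCID.
# Renames the node pool and sets the Kubernetes version used for NEW worker nodes.
oci ce node-pool update \
  --node-pool-id ocid1.nodepool.oc1..exampleuniqueID \
  --name "pool-prod" \
  --kubernetes-version "v1.26.2"
```

As noted below, such changes affect only worker nodes started after the update; existing worker nodes keep their current configuration. The exact set of parameters available can depend on your CLI version, so check `oci ce node-pool update --help` before relying on a particular option.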

Note that you must not change the auto-generated names of resources that Container Engine for Kubernetes has created (such as the names of worker nodes).

Also note the following:

  • Any changes you make to worker node properties apply only to new worker nodes. You cannot change the properties of existing worker nodes.
  • In some situations, you might want to update the properties of all the worker nodes in a node pool at the same time, rather than just the properties of new worker nodes that start in the node pool (for example, to upgrade all worker nodes to a new version of Oracle Linux). In this case, you can create a new node pool with worker nodes that have the required properties, and shift work from the original node pool to the new node pool using the kubectl drain command and pod disruption budgets. For more information, see Updating Worker Nodes by Creating a New Node Pool.
  • If you use the UpdateNodePool API operation to modify properties of an existing node pool, be aware of the Worker node properties out-of-sync with updated node pool properties known issue and its workarounds.
  • Do not use the kubectl delete node command to scale down or terminate worker nodes in a cluster created by Container Engine for Kubernetes. Instead, reduce the number of worker nodes by changing the corresponding node pool properties using the Console or the API. The kubectl delete node command does not change a node pool's properties, which determine the desired state (including the number of worker nodes). Also, although the kubectl delete node command removes the worker node from the cluster's etcd key-value store, it does not delete the underlying compute instance.
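The drain-based migration described above can be sketched with kubectl. The node name below is a placeholder (OKE worker node names are typically the nodes' private IP addresses; list the actual names with kubectl get nodes):

```shell
# List worker nodes so you can identify those in the original node pool.
kubectl get nodes

# Cordon and drain one node from the original node pool so that its pods
# are rescheduled onto nodes in the new node pool, respecting any
# pod disruption budgets you have defined.
kubectl drain 10.0.10.2 \
  --ignore-daemonsets \
  --delete-emptydir-data

# Repeat for each remaining node in the original node pool. Then scale the
# original node pool down using the Console or the API -- not with
# 'kubectl delete node', for the reasons given above.
```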

Using the Console

To modify the properties of node pools and worker nodes of existing Kubernetes clusters:

  1. In the Console, open the navigation menu and click Developer Services. Under Containers, click Kubernetes Clusters (OKE).
  2. Choose a Compartment you have permission to work in.
  3. On the Cluster List page, click the name of the cluster you want to modify.
  4. On the Cluster page, click the name of the node pool that you want to modify.
  5. Use the Node Pool Details tab to view information about the node pool, including:

    • The status of the node pool.
    • The node pool's OCID.
    • The configuration currently used when starting new worker nodes in the node pool, including:
      • the version of Kubernetes to run on worker nodes
      • the shape to use for worker nodes
      • the image to use on worker nodes
    • The availability domains, and the regional subnets (recommended) or AD-specific subnets, that host the worker nodes.
  6. (optional) Change properties of the node pool and worker nodes by clicking Edit and specifying:

    • Name: A different name for the node pool. Avoid entering confidential information.
    • Version: A different version of Kubernetes to run on new worker nodes in the node pool when performing an in-place upgrade. The Kubernetes version on worker nodes must be either the same version as that on the control plane nodes, or an earlier version that is still compatible (see Kubernetes Versions and Container Engine for Kubernetes). To start new worker nodes running the Kubernetes version you specify, 'drain' existing worker nodes in the node pool (to prevent new pods from starting and to delete existing pods) and then terminate each of the existing worker nodes in turn.

      You can also specify a different version of Kubernetes to run on new worker nodes by performing an out-of-place upgrade. For more information about upgrading worker nodes, see Upgrading the Kubernetes Version on Worker Nodes in a Cluster.

    • Image: A different image to use on the nodes in the node pool. An image is a template of a virtual hard drive that determines the operating system and other software for the node. See Supported Images (Including Custom Images) and Shapes for Worker Nodes.
    • Shape: A different shape to use for the nodes in the node pool. The shape determines the number of CPUs and the amount of memory allocated to each node. The list shows only those shapes available in your tenancy that are supported by Container Engine for Kubernetes. See Supported Images (Including Custom Images) and Shapes for Worker Nodes.
    • Boot Volume Size in GB: A different boot volume size for worker nodes. The default size of worker node boot volumes is determined from the image specified for worker nodes, but you can specify a custom boot volume size. If you do specify a custom boot volume size, it must be larger than the image's default boot volume size. The minimum and maximum sizes you can specify are 50 GB and 32 TB respectively (see Custom Boot Volume Sizes). If you change the boot volume size for worker nodes, consider extending the partition for the boot volume to take advantage of the larger size (see Extending the Partition for a Boot Volume).
    • Public SSH Key: (Optional) A different public key portion of the key pair you want to use for SSH access to the nodes in the node pool. The public key is installed on all worker nodes in the cluster. Note that if you don't specify a public SSH key, Container Engine for Kubernetes provides one. However, since you won't have the corresponding private key, you will not have SSH access to the worker nodes. Note that you cannot use SSH to directly access worker nodes in private subnets (see Connecting to Worker Nodes in Private Subnets Using SSH).
  7. (optional) Change the number and placement of worker nodes in the node pool by clicking Scale and specifying:

    • the number of worker nodes you want in the node pool after the scale operation is complete
    • the availability domains in which to place the worker nodes
    • the regional subnets (recommended) or AD-specific subnets to host the worker nodes
  8. Use the Nodes tab to see information about specific worker nodes in the node pool. Optionally edit the configuration details of a specific worker node by clicking the worker node's name.
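The scale operation in step 7 can also be performed outside the Console. As a rough sketch (the node pool OCID is a placeholder), scaling a node pool to five worker nodes with the OCI CLI might look like:

```shell
# Sketch only: scale the node pool to five worker nodes.
# Substitute your node pool's OCID for the placeholder below.
oci ce node-pool update \
  --node-pool-id ocid1.nodepool.oc1..exampleuniqueID \
  --size 5
```

Changing the placement of worker nodes (availability domains and subnets) through the CLI involves passing a JSON structure of placement configurations rather than a simple flag; run `oci ce node-pool update --help` for the exact parameter names supported by your CLI version.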