Modifying Node Pool and Worker Node Properties
You can use Container Engine for Kubernetes to modify the properties of node pools and worker nodes in existing Kubernetes clusters.
You can change:
- the name of a node pool
- the version of Kubernetes to run on new worker nodes
- the number of worker nodes in a node pool, and the availability domains, fault domains, and subnets in which to place them
- the image to use for new worker nodes
- the shape to use for new worker nodes
- the boot volume size and encryption settings to use for new worker nodes
- the cordon and drain options to use when terminating worker nodes
- the cloud-init script to use for instances hosting worker nodes
- the public SSH key to use to access new worker nodes
Note that you must not change the auto-generated names of resources that Container Engine for Kubernetes has created (such as the names of worker nodes).
Any changes you make to worker node properties apply only to new worker nodes. You cannot change the properties of existing worker nodes. If you want the changes to take effect immediately, consider creating a new node pool with the necessary settings and shifting work from the original node pool to the new node pool (see Updating Worker Nodes by Creating a New Node Pool).
Also note the following:
- In some situations, you might want to update the properties of all the worker nodes in a node pool simultaneously, rather than just those of new worker nodes that start in the node pool (for example, to upgrade all worker nodes to a new version of Oracle Linux). In this case, you can create a new node pool with worker nodes that have the required properties, and shift work from the original node pool to the new node pool using the kubectl drain command and pod disruption budgets. For more information, see Updating Worker Nodes by Creating a New Node Pool.
- If you change a node pool's placement configuration (the availability domains, fault domains, and subnets in which worker nodes are placed), existing worker nodes are terminated and new worker nodes are created in the new locations.
- If you use the UpdateNodePool API operation to modify properties of an existing node pool, be aware of the Worker node properties out-of-sync with updated node pool properties known issue and its workarounds.
- Do not use the kubectl delete node command to scale down or terminate worker nodes in a cluster created by Container Engine for Kubernetes. Instead, reduce the number of worker nodes by changing the corresponding node pool properties using the Console or the API. The kubectl delete node command does not change a node pool's properties, which determine the desired state (including the number of worker nodes). Also, although the kubectl delete node command removes the worker node from the cluster's etcd key-value store, it does not delete the underlying compute instance.
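Shifting work between node pools with kubectl drain can be sketched as follows (the node name is hypothetical; this assumes kubectl is already configured against the cluster, and that any pod disruption budgets you have defined will limit how many replicas are evicted at once):

```shell
# List worker nodes to find the real node names in the original node pool.
kubectl get nodes

# Cordon the node so no new pods are scheduled onto it.
# "oldpool-node-1" is a hypothetical node name.
kubectl cordon oldpool-node-1

# Evict the node's pods, honoring pod disruption budgets. DaemonSet pods
# are skipped, and pods using emptyDir volumes are allowed to be evicted.
kubectl drain oldpool-node-1 --ignore-daemonsets --delete-emptydir-data --timeout=300s
```

Once workloads have rescheduled onto the new node pool, scale the original node pool down using the Console or the API rather than kubectl delete node.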
Using the Console
To modify the properties of node pools and worker nodes of existing Kubernetes clusters:
- In the Console, open the navigation menu and click Developer Services. Under Containers, click Kubernetes Clusters (OKE).
- Choose a Compartment you have permission to work in.
- On the Cluster List page, click the name of the cluster you want to modify.
- On the Cluster page, click the name of the node pool that you want to modify.
Use the Node Pool Details tab to view information about the node pool, including:
- The status of the node pool.
- The node pool's OCID.
- The configuration currently used when starting new worker nodes in the node pool, including:
- the version of Kubernetes to run on worker nodes
- the shape to use for worker nodes
- the image to use on worker nodes
- The availability domains, fault domains, and regional subnets (recommended) or AD-specific subnets hosting worker nodes.
(optional) Change properties of the node pool and worker nodes.
- Click Edit and specify:
- Name: A different name for the node pool. Avoid entering confidential information.
- Version: A different version of Kubernetes to run on new worker nodes in the node pool when performing an in-place upgrade. The Kubernetes version on worker nodes must be either the same version as that on the control plane nodes, or an earlier version that is still compatible (see Kubernetes Versions and Container Engine for Kubernetes).
Note that if you specify an OKE image for worker nodes, the Kubernetes version you select here must be the same as the version of Kubernetes in the OKE image.
To start new worker nodes running the Kubernetes version you specify, 'drain' existing worker nodes in the node pool (to prevent new pods from starting and to delete existing pods), and then terminate each of the existing worker nodes in turn.
You can also specify a different version of Kubernetes to run on new worker nodes by performing an out-of-place upgrade. For more information about upgrading worker nodes, see Upgrading the Kubernetes Version on Worker Nodes in a Cluster.
- Image: A different image to use on worker nodes in the node pool. An image is a template of a virtual hard drive that determines the operating system and other software for the node.
To change the image, click Change image. In the Browse all images window, choose an Image source and select an image as follows:
OKE images: Recommended. Provided by Oracle and built on top of platform images. OKE images are optimized to serve as base images for worker nodes, with all the necessary configurations and required software. Select an OKE image if you want to minimize the time it takes to provision worker nodes at runtime when compared to platform images and custom images.
OKE image names include the version number of the Kubernetes version they contain. Note that if you specify a Kubernetes version for the node pool, the OKE image you select here must have the same version number as the node pool's Kubernetes version.
- Platform images: Provided by Oracle and only contain an Oracle Linux operating system. Select a platform image if you want Container Engine for Kubernetes to download, install, and configure required software when the compute instance hosting a worker node boots up for the first time.
- Shape: A different shape to use for worker nodes in the node pool. The shape determines the number of CPUs and the amount of memory allocated to each node.
To change the shape, click Change shape. In the Browse all shapes window, select a shape. Only those shapes available in your tenancy that are supported by Container Engine for Kubernetes are shown.
If you select a flexible shape, you can explicitly specify the number of CPUs and the amount of memory.
- Network Security Group: Control access to the node pool using security rules defined for one or more network security groups (NSGs) that you specify (up to a maximum of five). You can use security rules defined for NSGs instead of, or as well as, those defined for security lists. For more information about the security rules to specify for the NSG, see Security Rules for Worker Nodes.
- Boot volume: Change the size and encryption options for the worker node's boot volume:
- To specify a custom size for the boot volume, select the Specify a custom boot volume size check box. Then, enter a custom size from 50 GB to 32 TB. The specified size must be larger than the default boot volume size for the selected image. See Custom Boot Volume Sizes for more information. If you change the boot volume size for worker nodes, consider extending the partition for the boot volume to take advantage of the larger size (see Extending the Partition for a Boot Volume).
- For VM instances, you can optionally select the Use in-transit encryption check box. For bare metal instances that support in-transit encryption, it is enabled by default and is not configurable. See Block Volume Encryption for more information about in-transit encryption. If you are using your own Vault service encryption key for the boot volume, then this key is also used for in-transit encryption. Otherwise, the Oracle-provided encryption key is used.
- Boot volumes are encrypted by default, but you can optionally use your own Vault service encryption key to encrypt the data in this volume. To use the Vault service for your encryption needs, select the Encrypt this volume with a key that you manage check box. Then, select the Vault compartment and Vault that contain the master encryption key you want to use. Also select the Master encryption key compartment and Master encryption key. For more information about encryption, see Overview of Vault. If you enable this option, this key is used for both data at rest encryption and in-transit encryption.
Important
The Block Volume service does not support encrypting volumes with keys encrypted using the Rivest-Shamir-Adleman (RSA) algorithm. When using your own keys, you must use keys encrypted using the Advanced Encryption Standard (AES) algorithm. This applies to block volumes and boot volumes.
Note that to use your own Vault service encryption key to encrypt data, an IAM policy must grant access to the service encryption key. See Create Policy to Access User-Managed Encryption Keys for Encrypting Boot and Block Volumes.
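Extending the boot volume partition after increasing its size can be sketched as follows (run on the worker node itself; this assumes the node's Oracle Linux image includes the oci-utils oci-growfs tool, as OKE and platform images typically do):

```shell
# Check the current root filesystem size and the block devices,
# to confirm the boot volume is larger than the partition.
df -h /
lsblk

# Expand the root partition and filesystem to use the full boot volume.
# The -y flag answers yes to the confirmation prompt.
sudo /usr/libexec/oci-growfs -y
```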
Either accept the existing values for advanced node pool options, or click Show Advanced Options and specify alternatives as follows:
- Cordon and drain: Change when and how to cordon and drain worker nodes before terminating them.
- Eviction grace period (mins): The length of time to allow to cordon and drain worker nodes before terminating them. Either accept the default (60 minutes), or specify an alternative. For example, when scaling down a node pool or changing its placement configuration, you might want to allow 30 minutes to cordon worker nodes and drain them of their workloads. To terminate worker nodes immediately, without cordoning and draining them, specify 0 minutes.
- Force terminate after grace period: Whether to terminate worker nodes at the end of the eviction grace period, even if they have not been successfully cordoned and drained. By default, this option is not selected.
Select this option if you always want worker nodes terminated at the end of the eviction grace period, even if they have not been successfully cordoned and drained.
Deselect this option if you do not want worker nodes that have not been successfully cordoned and drained to be terminated at the end of the eviction grace period. Node pools containing worker nodes that could not be terminated within the eviction grace period have the Needs attention status (see Monitoring Clusters). The status of the work request that initiated the termination operation is set to Failed and the termination operation is cancelled.
For more information, see Notes on cordoning and draining worker nodes before termination.
- Pod communication: When the cluster's Network type is VCN-native pod networking, change how pods in the node pool communicate with each other using a pod subnet:
- Subnet: A regional subnet configured to host pods. The pod subnet you specify can be public or private. In some situations, the worker node subnet and the pod subnet can be the same subnet (in which case, Oracle recommends defining security rules in network security groups rather than in security lists). See Subnet Configuration.
- Network Security Group: Control access to the pod subnet using security rules defined for one or more network security groups (NSGs) that you specify (up to a maximum of five). You can use security rules defined for NSGs instead of, or as well as, those defined for security lists. For more information about the security rules to specify for the NSG, see Security Rules for Worker Nodes and Pods.
Optionally click Show Advanced Options to specify the maximum number of pods that you want to run on a single worker node in a node pool, up to a limit of 110. The limit of 110 is imposed by Kubernetes. If you want more than 31 pods on a single worker node, the shape you specify for the node pool must support three or more VNICs (one VNIC to connect to the worker node subnet, and at least two VNICs to connect to the pod subnet). See Maximum Number of VNICs and Pods Supported by Different Shapes.
For more information about pod communication, see Pod Networking.
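The relationship between a shape's VNICs and the per-node pod limit can be sketched with a small helper (this is an illustration only: it assumes one VNIC is reserved for the worker node subnet and that each remaining VNIC serves up to 31 pods, with the Kubernetes ceiling of 110 applied on top):

```shell
# Hypothetical helper: effective maximum pods per worker node, given the
# number of VNICs the node's shape supports. One VNIC connects to the
# worker node subnet; each remaining VNIC is assumed to serve up to 31
# pods. Kubernetes imposes a hard ceiling of 110 pods per node.
max_pods() {
  local vnics=$1
  local by_vnics=$(( (vnics - 1) * 31 ))
  echo $(( by_vnics < 110 ? by_vnics : 110 ))
}

max_pods 2   # prints 31  -- one VNIC for the node, one pod VNIC
max_pods 3   # prints 62  -- more than 31 pods needs three or more VNICs
max_pods 8   # prints 110 -- capped by the Kubernetes limit
```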
- Initialization Script: (Optional) A different script for cloud-init to run on instances hosting worker nodes when the instance boots up for the first time. The script you specify must be written in one of the formats supported by cloud-init (for example, cloud-config), and must be a supported filetype (for example, .yaml). Specify the script as follows:
- Choose Cloud-Init Script: Select a file containing the cloud-init script, or drag and drop the file into the box.
- Paste Cloud-Init Script: Copy the contents of a cloud-init script, and paste it into the box.
If you have not previously written cloud-init scripts for initializing worker nodes in clusters created by Container Engine for Kubernetes, you might find it helpful to click Download Custom Cloud-Init Script Template. The downloaded file contains the default logic provided by Container Engine for Kubernetes. You can add your own custom logic either before or after the default logic, but do not modify the default logic. For examples, see Example Usecases for Custom Cloud-init Scripts.
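As a sketch, a custom cloud-init script in shell-script format (one of the formats cloud-init accepts) might wrap custom logic around the default logic from the downloaded template; the log path here is hypothetical, and in practice the default logic must come from the template, unmodified:

```shell
#!/bin/bash
# Hypothetical custom cloud-init script. Start from the file downloaded via
# "Download Custom Cloud-Init Script Template" and keep its default logic
# intact; add custom logic only before or after it.

LOG=/tmp/oke-custom-init.log   # hypothetical log path for illustration

echo "custom: before default logic" >> "$LOG"

# ... default Container Engine for Kubernetes bootstrap logic from the
# downloaded template goes here, unmodified ...

echo "custom: after default logic" >> "$LOG"
```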
- Public SSH Key: (Optional) A different public key portion of the key pair you want to use for SSH access to the nodes in the node pool. The public key is installed on all worker nodes in the cluster. Note that if you don't specify a public SSH key, Container Engine for Kubernetes will provide one. However, since you won't have the corresponding private key, you will not have SSH access to the worker nodes. Note that you cannot use SSH to directly access worker nodes in private subnets (see Connecting to Worker Nodes in Private Subnets Using SSH).
- Click Save Changes to save the updated properties.
(optional) Click Scale to change:
- the number of worker nodes in the node pool
- the network security groups with security rules to control traffic into and out of the node pool
- the availability domains and fault domains in which to place the worker nodes
- the regional subnets (recommended) or AD-specific subnets to host the worker nodes
- a capacity reservation to use
See Scaling Node Pools.
- Use the Node Pool tags tab and the Node tags tab to add or modify tags applied to the node pool, and tags applied to compute instances hosting worker nodes in the node pool. Tagging enables you to group disparate resources across compartments, and also enables you to annotate resources with your own metadata. See Tagging Kubernetes Cluster-Related Resources.
- Use the Nodes tab to see information about specific worker nodes in the node pool. Optionally edit the configuration details of a specific worker node by clicking the worker node's name.
Using the API
For information about using the API and signing requests, see REST APIs and Security Credentials. For information about SDKs, see Software Development Kits and Command Line Interface.
Use the UpdateNodePool operation to modify an existing node pool.
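As a sketch, the same operation is also exposed through the OCI CLI's node-pool update command (the OCID, name, version, and size below are placeholder values to substitute with your own):

```shell
# Rename a node pool and change the Kubernetes version used for new
# worker nodes. The OCID is a placeholder -- use your node pool's OCID.
oci ce node-pool update \
  --node-pool-id ocid1.nodepool.oc1..exampleuniqueID \
  --name "my-node-pool-v2" \
  --kubernetes-version "v1.26.2"

# Scale the same node pool to three worker nodes.
oci ce node-pool update \
  --node-pool-id ocid1.nodepool.oc1..exampleuniqueID \
  --size 3
```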