Creating an OKE Worker Node Pool

These procedures describe how to create a pool of worker nodes for an OKE workload cluster. Nodes are Private Cloud Appliance compute instances.

You cannot customize the OKE cloud-init scripts.

If your network requires proxy settings, for example to enable worker nodes to reach outside registries or repositories, use the OCI CLI or an OCI SDK to create the node pool.

Using the Compute Web UI

  1. On the dashboard, click Containers / View Kubernetes Clusters (OKE).

    If the cluster to which you want to attach a node pool is not listed, select a different compartment from the compartment menu above the list.

  2. Click the name of the cluster to which you want to add a node pool.

  3. On the cluster details page, scroll to the Resources section, and click Node Pools.

  4. On the Node Pools list, click the Add Node Pool button.

  5. In the Add Node Pool dialog, provide the following information:

    • Name: The name of the new node pool. Avoid using confidential information.

    • Compartment: The compartment in which to create the new node pool.

    • Node pool options: In the Node Count field, enter the number of nodes you want in this node pool. The default is 0. The maximum is 128 nodes per cluster; those nodes can be distributed across multiple node pools.

    • Network Security Group: If you check the box to enable network security groups, click the Add Network Security Group button and select an NSG from the drop-down list. You might need to change the compartment to find the NSG you want.

    • Placement configuration

      • Subnet: Select a subnet that is configured like the "worker" subnet described in Creating an OKE Worker Subnet. Select only one subnet. The subnet must have rules that allow it to communicate with the control plane endpoint, must use the private route table, and must have a security list like the worker-seclist security list described in Creating an OKE Worker Subnet.

      • Fault domain: Select a fault domain or select "Automatically select the best fault domain," which is the default option.

    • Source Image: Select an image.

      1. Select the Platform Image Source Type.

      2. Select an image from the list.

        The image list has columns Operating System, OS Version, and Kubernetes Version. You can use the drop-down menu arrow to the right of the OS Version or Kubernetes Version to select a different version.

        If the image that you want to use is not listed, use the OCI CLI procedure and specify the OCID of the image. To get the OCID of the image you want, use the ce node-pool get command for a node pool where you used this image before.

        Note:

        The image that you specify must not have a Kubernetes version that is newer than the Kubernetes version that you specified when you created the cluster. The Kubernetes Version for the cluster is in a column of the cluster list table.

    • Shape: Select a shape for the worker nodes. For a description of each shape, see Compute Shapes in the Oracle Private Cloud Appliance Concepts Guide. For Private Cloud Appliance X10 systems, the shape is VM.PCAStandard.E5.Flex and you cannot change it.

      If you select a shape that is not a flexible shape, the amount of memory and number of OCPUs are displayed. These numbers match the numbers shown for this shape in the table in the Oracle Private Cloud Appliance Concepts Guide.

      If you select a flexible shape, then you must specify the number of OCPUs you want. You can optionally specify the total amount of memory you want. The default value for gigabytes of memory is 16 times the number you specify for OCPUs. Click inside each value field to see the minimum and maximum allowed values.

    • Boot Volume: (Optional) Check the box to specify a custom boot volume size.

      Boot volume size (GB): The default boot volume size for the selected image is shown. To specify a larger size, enter a value from 50 to 16384 in gigabytes (50 GB to 16 TB) or use the increment and decrement arrows.

      If you specify a custom boot volume size, you need to extend the partition to take advantage of the larger size. Oracle Linux platform images include the oci-utils package. Use the oci-growfs command from that package to extend the root partition and then grow the file system. See oci-growfs.

    • Cordon and Drain: (Optional) Use the arrows to increase or decrease the eviction grace duration, in minutes. You cannot deselect "Force terminate after grace period." Nodes are deleted after their pods are evicted or at the end of the eviction grace duration, even if not all pods are evicted.

      For descriptions of cordon and drain and eviction grace duration, see "Node and node pool deletion settings" in "Using the OCI CLI".

    • SSH Key: The public SSH key for the worker nodes. Either upload the public key file or copy and paste the content of the file.

    • Node Pool Tags: Defined or free-form tags for the node pool resource.

      Note:

      Do not specify values for the OraclePCA-OKE.cluster_id defined tag or for the ClusterResourceIdentifier free-form tag. These tag values are system-generated and only applied to nodes (instances), not to the node pool resource.

    • Node Tags: Defined or free-form tags that are applied to every node in the node pool.

      Important:

      Do not specify values for the OraclePCA-OKE.cluster_id defined tag or for the ClusterResourceIdentifier free-form tag. These tag values are system-generated.

  6. Click the Add Node Pool button.

    The details page for the node pool is displayed. Scroll to the Resources section and click Work Requests to see the progress of the node pool creation and see nodes being added to the Nodes list.

    To identify these nodes in a list of instances, note that the names of these nodes are in the format oke-ID, where ID is the first 32 characters after the pca_name in the node pool OCID. Search for the instances in the list whose names contain the ID string from this node pool OCID.

Using the OCI CLI

  1. Get the information you need to run the command.

    • The OCID of the compartment where you want to create the node pool: oci iam compartment list

    • The OCID of the cluster for this node pool: oci ce cluster list

    • The name of the node pool. Avoid using confidential information.

    • The placement configuration for the nodes, including the subnet and fault domain. See the "Placement configuration" description in the Compute Web UI procedure. Use the following command to show the content and format of this option:

      $ oci ce node-pool create --generate-param-json-input placement-configs

      Use the following command to list fault domains: oci iam fault-domain list. Do not specify more than one fault domain or more than one subnet in the placement configuration. To allow the system to select the best fault domains, do not specify any fault domain.
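
      For example, a fault domain listing might look like the following (the compartment OCID is a placeholder; the availability domain name AD-1 matches the placement example later in this procedure):

      $ oci iam fault-domain list --compartment-id ocid1.compartment.unique_ID \
      --availability-domain AD-1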

    • The OCID of the image to use for the nodes in this node pool.

      Use the following command to get the OCID of the image that you want to use:

      $ oci compute image list --compartment-id compartment_OCID

      If the image that you want to use is not listed, you can get the OCID of the image from the output of the ce node-pool get command for a node pool where you used this image before.
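
      For example, a command like the following (the node pool OCID is a placeholder) returns the node pool details; the image OCID typically appears in the node-source-details section of the output:

      $ oci ce node-pool get --node-pool-id ocid1.nodepool.unique_ID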

      Note:

      The image that you specify must have "-OKE-" in its display-name and must not have a Kubernetes version that is newer than the Kubernetes version that you specified when you created the cluster.

      The Kubernetes version for the cluster is shown in cluster list output. The Kubernetes version for the image is shown in the display-name property in image list output. The Kubernetes version of the following image is 1.28.3.

      "display-name": "uln-pca-Oracle-Linux8-OKE-1.28.3-20240210.oci"

      Do not specify the --kubernetes-version option in the node-pool create command.

      You can specify a custom boot volume size in gigabytes. The default boot volume size is 50 GB. To specify a custom boot volume size, use the --node-source-details option to specify both the boot volume size and the image. You cannot specify both --node-image-id and --node-source-details. Use the following command to show the content and format of the node source details option.

      $ oci ce node-pool create --generate-param-json-input node-source-details

      If you specify a custom boot volume size, you need to extend the partition to take advantage of the larger size. Oracle Linux platform images include the oci-utils package. Use the oci-growfs command from that package to extend the root partition and then grow the file system. See oci-growfs.
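
      For example, a node source details argument that selects an image and a 100 GB boot volume might look like the following (the image OCID is a placeholder):

      --node-source-details '{"sourceType": "IMAGE", "imageId": "ocid1.image.unique_ID", "bootVolumeSizeInGBs": 100}'

      On a running node, a command like the following (assuming the default oci-utils installation path on Oracle Linux) extends the root partition and grows the file system:

      $ sudo /usr/libexec/oci-growfs -y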

    • The name of the shape of the worker nodes in this node pool. For Private Cloud Appliance X10 systems, the shape is VM.PCAStandard.E5.Flex and you cannot change it. For all other Private Cloud Appliance systems, the default shape is VM.PCAStandard1.1, and you can specify a different shape.

      If you specify a flexible shape, then you must also specify the shape configuration, as shown in the following example. You must provide a value for ocpus. The memoryInGBs property is optional; the default value in gigabytes is 16 times the number of ocpus.

      --node-shape-config '{"ocpus": 32, "memoryInGBs": 512}'

      If you specify a shape that is not a flexible shape, do not specify --node-shape-config. The number of OCPUs and amount of memory are set to the values shown for this shape in "Standard Shapes" in Compute Shapes in the Oracle Private Cloud Appliance Concepts Guide.

    • (Optional) The OCID of the Network Security Group to use for the nodes in this node pool. Do not specify more than one NSG.

    • (Optional) Proxy settings. If your network requires proxy settings, for example to enable worker nodes to reach outside registries or repositories, create an argument for the --node-metadata option.

      In the --node-metadata option argument, provide values for crio-proxy and crio-noproxy as shown in the following example file argument:

      {
        "crio-proxy": "http://your_proxy.your_domain_name:your_port",
        "crio-noproxy": "localhost,127.0.0.1,your_domain_name,ocir.io,Kubernetes_cidr,pods_cidr"
      }
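
      If, for example, this JSON is saved in a file named node-metadata.json (a hypothetical file name), you can pass it to the create command as follows:

      --node-metadata file://node-metadata.json
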
    • (Optional) Node and node pool deletion settings. You can specify how to handle node deletion when you delete a node pool, delete a specified node, decrement the size of the node pool, or change the placement configuration of the nodes in the node pool. These node deletion parameters can also be set or changed when you update the node pool, delete a specified node, or delete the node pool.

      To specify node pool deletion settings, create an argument for the --node-eviction-node-pool-settings option. You can specify the eviction grace duration (evictionGraceDuration) for nodes. Nodes are always deleted after their pods are evicted or at the end of the eviction grace duration.

      • Eviction grace duration. This value specifies how much time to allow for cordoning and draining the worker nodes.

        A node that is cordoned cannot have new pods placed on it. Existing pods on that node are not affected.

        When a node is drained, each pod's containers terminate gracefully and perform any necessary cleanup.

        The eviction grace duration value is expressed in ISO 8601 format. The default value and the maximum value are 60 minutes (PT60M). The minimum value is 20 seconds (PT20S). OKE always attempts to drain nodes for at least 20 seconds.

      • Force delete. Nodes are always deleted after their pods are evicted or at the end of the eviction grace duration. After the default or specified eviction grace duration, the node is deleted, even if one or more pod containers are not completely drained.

      The following shows an example argument for the --node-eviction-node-pool-settings option. If you include the isForceDeleteAfterGraceDuration property, then its value must be true. Nodes are always deleted after their pods are evicted or at the end of the eviction grace duration.

      --node-eviction-node-pool-settings '{"evictionGraceDuration": "PT30M", "isForceDeleteAfterGraceDuration": true}'

      Note:

      If you use Terraform and you specify node_eviction_node_pool_settings, then you must explicitly set is_force_delete_after_grace_duration to true, even though true is the default value. The is_force_delete_after_grace_duration property setting is not optional if you are using Terraform.

    • (Optional) Tags. Add defined or free-form tags for the node pool resource by using the --defined-tags or --freeform-tags options. Do not specify values for the OraclePCA-OKE.cluster_id defined tag or for the ClusterResourceIdentifier free-form tag. These tag values are system-generated and only applied to nodes (instances), not to the node pool resource.

      To add defined or free-form tags to all nodes in the node pool, use the --node-defined-tags and --node-freeform-tags options.

      Important:

      Do not specify values for the OraclePCA-OKE.cluster_id defined tag or for the ClusterResourceIdentifier free-form tag. These tag values are system-generated.
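
      For example (the tag names and values are illustrative only), the following options add a free-form tag to the node pool resource and a different free-form tag to every node in the pool:

      --freeform-tags '{"project": "demo"}' --node-freeform-tags '{"role": "worker"}'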

  2. Run the create node pool command.

    Example:

    See the preceding Compute Web UI procedure for information about the options shown in this example and other options such as --node-boot-volume-size-in-gbs and nsg-ids.

    $ oci ce node-pool create \
    --cluster-id ocid1.cluster.unique_ID --compartment-id ocid1.compartment.unique_ID \
    --name node_pool_name --node-shape shape_name --node-image-id ocid1.image.unique_ID \
    --placement-configs '[{"availabilityDomain":"AD-1","subnetId":"ocid1.subnet.unique_ID"}]' \
    --size 10 --ssh-public-key "public_key_text"

    The output from this node-pool create command is the same as the output from the node-pool get command. The cluster OCID is shown, and a brief summary of each node is shown. For more information about a node, use the compute instance get command with the OCID of the node.

    Use the work-request get command to check the status of the node pool create operation. The work request OCID is in created-by-work-request-id in the metadata section of the cluster get output.

    $ oci ce work-request get --work-request-id workrequest_OCID

    To identify these nodes in a list of instances, note that the names of these nodes are in the format oke-ID, where ID is the first 32 characters after the pca_name in the node pool OCID. Search for the instances in the list whose names contain the ID string from this node pool OCID.
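
    For example, a query like the following (the compartment OCID and ID_string are placeholders) lists only the instances whose display names contain the node pool ID string:

    $ oci compute instance list --compartment-id ocid1.compartment.unique_ID \
    --query "data[?contains(\"display-name\", 'ID_string')].{name: \"display-name\", id: id}" --output table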

Node Pool Next Steps

  1. Configure any registries or repositories that the worker nodes need. Ensure you have access to a self-managed public or intranet container registry to use with the OKE service and your application images.

  2. Create a service to expose containerized applications outside the Private Cloud Appliance. See Exposing Containerized Applications.

  3. Create persistent storage for applications to use. See Adding Storage for Containerized Applications.