Creating an OKE Cluster

These procedures describe how to create an OKE cluster.

The Network Load Balancer and public IP address are created and assigned as part of cluster creation.

Important:

Before you can create a cluster, the following conditions must be met:

  • The OraclePCA-OKE/cluster_id defined tag must exist in the tenancy.

  • All fault domains must be healthy.

  • Each fault domain must have at least one healthy compute instance.

  • Sufficient resources must be available to create a cluster.

  • Ensure that no appliance upgrade is scheduled to run while the cluster is being created.

The OraclePCA-OKE/cluster_id defined tag is required to create or update an OKE cluster or node pool. This tag is also used to identify instances that need to be in a dynamic group. To verify that the tag exists, in the Compute Web UI select Governance > Tag Namespaces and make sure the tenancy (root compartment) is selected in the compartment menu above the list. In the OCI CLI, use the following command:

$ oci iam tag-namespace list --compartment-id $OCI_CLI_TENANCY
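
For example, to confirm that the OraclePCA-OKE namespace is present, you can filter the output with a JMESPath query (a minimal sketch; adjust to your environment):

$ oci iam tag-namespace list --compartment-id $OCI_CLI_TENANCY \
--query 'data[?name==`OraclePCA-OKE`].name'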

If notifications are configured for operations such as system upgrade, ensure you are on the list to be notified of such planned outages.

After you create a cluster, see the Cluster Next Steps section.

Using the Compute Web UI

  1. On the dashboard, click Containers / View Kubernetes Clusters (OKE).

  2. On the clusters list page, click the Create Cluster button.

  3. On the Cluster page in the Create Cluster dialog, provide the following information:

    • Name: The name of the new cluster. Avoid entering confidential information.

    • Compartment: The compartment in which to create the new cluster.

    • Kubernetes Version: The version of Kubernetes to run on the control plane nodes. Accept the default version or select a different version.

    • Tagging: Add defined or free-form tags for the cluster resource.

      Note:

      Do not specify values for the OraclePCA-OKE defined tag or for the ClusterResourceIdentifier free-form tag. These tag values are system-generated and only applied to nodes (instances), not to the cluster resource.

      Use free-form tags to provide the following information for control plane nodes:

      • Your public SSH key.

        Specify sshkey for the tag key. Paste your public SSH key into the Value field.

        Important:

        You cannot add an SSH key after the cluster is created.

      • Number of nodes.

        By default, the number of nodes in the control plane is 3. You can specify 1, 3, or 5 nodes. To specify the number of control plane nodes, specify cp_node_count for the tag key, and enter 1, 3, or 5 in the Value field.

      • Node shape.

        For Private Cloud Appliance X10 systems, the shape of the control plane nodes is VM.PCAStandard.E5.Flex and you cannot change it. For all other Private Cloud Appliance systems, the default shape is VM.PCAStandard1.1, and you can specify a different shape.

        To use a different shape, specify cp_node_shape for the tag key, and enter the name of the shape in the Value field. For a description of each shape, see Compute Shapes in the Oracle Private Cloud Appliance Concepts Guide.

      • Node shape configuration.

        If you specify a shape that is not a flexible shape, do not specify a shape configuration. The number of OCPUs and amount of memory are set to the values shown for this shape in "Standard Shapes" in Compute Shapes in the Oracle Private Cloud Appliance Concepts Guide.

        If you specify a flexible shape, you can change the default shape configuration.

        To provide shape configuration information, specify cp_node_shape_config for the tag key. You must specify the number of OCPUs (ocpus) you want. You can optionally specify the total amount of memory you want (memoryInGBs). The default value for gigabytes of memory is 16 times the number you specify for OCPUs.

        The following are examples of node shape configuration values. Enter everything, including the surrounding single quotation marks, in the Value field for the tag. In the first example, the default amount of memory will be configured.

        '{"ocpus":1}'
        '{"ocpus":2, "memoryInGBs":24}'
  4. Click Next.

  5. On the Network page in the Create Cluster dialog, provide the following information:

    • Network Type. Specifies how pods running on nodes in the cluster communicate with each other, with the cluster's control plane nodes, with pods on other clusters, with other services (such as storage services), and with the internet.

      The Flannel overlay network type encapsulates communication between pods in the flannel overlay network. The flannel overlay network is a simple private overlay virtual network that satisfies the requirements of the OKE networking model by attaching IP addresses to containers. The pods in the private overlay network are only accessible from other pods in the same cluster.

    • VCN. Select the VCN that has the configuration of the "oke_vcn" VCN described in Creating an OKE VCN.

    • Kubernetes Service LB Subnet. The subnet that is configured to host the load balancer in an OKE cluster. Select the subnet that has configuration like the "service-lb" subnet described in Creating an OKE Worker Load Balancer Subnet.

    • Kubernetes API Endpoint Subnet. The regional subnet in which to place the cluster endpoint. Select the subnet that has configuration like the "control-plane-endpoint" subnet described in Creating an OKE Control Plane Load Balancer Subnet.

    • Kubernetes Service CIDR Block. (Optional) The default value is 10.96.0.0/16.

    • Pods CIDR Block. (Optional) The default value is 10.244.0.0/16.

    • Network Security Group. If you check the box to enable network security groups, click the Add Network Security Group button and select an NSG from the drop-down list. You might need to change the compartment to find the NSG you want.

  6. Click Next.

  7. Review your entries and click Submit.

    The details page for the cluster is displayed. Scroll to the Resources section and click Work Requests to see the progress of the cluster creation. When the cluster is in the Active state, click Node Pools to add a node pool. See the Cluster Next Steps section.

    The cluster details page does not list the cluster control plane nodes. To view the control plane nodes, view the list of instances in the compartment where you created this cluster. Names of control plane nodes are in the following format:

    oke-ID1-control-plane-ID2
    • ID1 - The first 32 characters after the pca_name in the cluster OCID.

    • ID2 - A unique identifier added when the cluster has more than one control plane node.

    Search for the instances in the list whose names contain the ID1 string from this cluster OCID.

Using the OCI CLI

  1. Get the information you need to run the command.

    • The OCID of the compartment where you want to create the cluster: oci iam compartment list

    • The name of the cluster. Avoid using confidential information.

    • OCID of the virtual cloud network (VCN) in which you want to create the cluster. Specify the VCN that has the configuration of the "oke_vcn" VCN described in Creating an OKE VCN.

    • OCID of the OKE service LB subnet. Specify the subnet that has configuration like the "service-lb" subnet described in Creating an OKE Worker Load Balancer Subnet. Specify only one OKE Service LB subnet.

    • OCID of the Kubernetes API endpoint subnet. Specify the subnet that has configuration like the "control-plane-endpoint" subnet described in Creating an OKE Control Plane Load Balancer Subnet.
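
      For example, you can look up the VCN and subnet OCIDs with commands like the following (the compartment and VCN OCIDs are placeholders):

      $ oci network vcn list --compartment-id ocid1.compartment.unique_ID
      $ oci network subnet list --compartment-id ocid1.compartment.unique_ID \
      --vcn-id ocid1.vcn.unique_ID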

    • OKE service CIDR block. (Optional) The default value is 10.96.0.0/16.

    • Pods CIDR block. (Optional) The default value is 10.244.0.0/16.
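
      To override either default, the cluster create command accepts optional CIDR flags; confirm the exact option names with oci ce cluster create --help. For example:

      --services-cidr 10.96.0.0/16 --pods-cidr 10.244.0.0/16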

    • (Optional) The OCID of the Network Security Group to apply to the cluster endpoint. Do not specify more than one NSG. If you specify an NSG, use the following syntax:

      --endpoint-nsg-ids '["ocid1.networksecuritygroup.unique_ID"]'
    • (Optional) Your public SSH key in RSA format. You cannot add or update an SSH key after the cluster is created.

    • The network type. You do not need to specify the network type because FLANNEL_OVERLAY is used by default. See the descriptions in the Compute Web UI procedure. If you specify the network type, you must specify the following:

      --cluster-pod-network-options '{"cniType":"FLANNEL_OVERLAY"}'
  2. (Optional) Add defined or free-form tags for the cluster resource by using the --defined-tags and --freeform-tags options.

    Note:

    Do not specify values for the OraclePCA-OKE defined tag or for the ClusterResourceIdentifier free-form tag. These tag values are system-generated and only applied to nodes (instances), not to the cluster resource.

    Define an argument for the --freeform-tags option to provide the following information for control plane nodes:

    • Your public SSH key.

      Specify sshkey for the tag key, and paste your public SSH key as the value.

      Important:

      You cannot add an SSH key after the cluster is created.

    • Number of nodes.

      By default, the number of nodes in the control plane is 3. You can specify 1, 3, or 5 nodes. To specify the number of control plane nodes, specify cp_node_count for the tag key, and enter 1, 3, or 5 as the value.

    • Node shape.

      For Private Cloud Appliance X10 systems, the shape of the control plane nodes is VM.PCAStandard.E5.Flex and you cannot change it. For all other Private Cloud Appliance systems, the default shape is VM.PCAStandard1.1, and you can specify a different shape.

      To use a different shape, specify cp_node_shape for the tag key, and enter the name of the shape as the value. To list the available shapes and their characteristics, you can use the compute shape list command, for example:
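
      $ oci compute shape list --compartment-id ocid1.compartment.unique_ID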

    • Node shape configuration.

      If you specify a shape that is not a flexible shape, do not specify a shape configuration. The number of OCPUs and amount of memory are set to the values shown for this shape in "Standard Shapes" in Compute Shapes in the Oracle Private Cloud Appliance Concepts Guide.

      If you specify a flexible shape, you can change the default shape configuration.

      To provide shape configuration information, specify cp_node_shape_config for the tag key. You must specify the number of OCPUs (ocpus) you want. You can optionally specify the total amount of memory you want (memoryInGBs). The default value for gigabytes of memory is 16 times the number you specify for OCPUs.

      Specify free-form tags either inline or in a file in JSON format, such as the following example file:

      {
        "sshkey": "ssh-rsa remainder_of_key_text",
        "cp_node_count": 1,
        "cp_node_shape": "VM.PCAStandard1.Flex",
        "cp_node_shape_config": {
          "ocpus": 2,
          "memoryInGBs": 24
        }
      }

      Use the following syntax to specify a file of tags. Specify the full path to the .json file unless the file is in the same directory where you are running the command.

      --freeform-tags file://cluster_tags.json
  3. Run the create cluster command.

    Example:

    The --endpoint-public-ip-enabled true option is required when --endpoint-subnet-id or --endpoint-nsg-ids is specified.

    $ oci ce cluster create \
    --compartment-id ocid1.compartment.unique_ID --kubernetes-version version \
    --name "Cluster One" --vcn-id ocid1.vcn.unique_ID \
    --endpoint-public-ip-enabled true \
    --endpoint-subnet-id control-plane-endpoint_subnet_OCID \
    --service-lb-subnet-ids '["service-lb_subnet_OCID"]' \
    --freeform-tags '{"sshkey":"ssh-rsa remainder_of_key_text"}'

    The output from this cluster create command is the same as the output from the cluster get command.

    Use the work-request get command to check the status of the create operation. The work request OCID is in created-by-work-request-id in the metadata section of the cluster create output.

    $ oci ce work-request get --work-request-id workrequest_OCID

    When the cluster is in the ACTIVE state, you can add a node pool. See the Cluster Next Steps section.
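
    For example, you can check the state directly with the cluster get command (the cluster OCID is in the create output):

    $ oci ce cluster get --cluster-id cluster_OCID --query 'data."lifecycle-state"'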

    To identify the control plane nodes for this cluster, list instances in the compartment where you created the cluster. Names of control plane nodes are in the following format:

    oke-ID1-control-plane-ID2
    • ID1 - The first 32 characters after the pca_name in the cluster OCID.

    • ID2 - A unique identifier added when the cluster has more than one control plane node.

    Search for the instances in the list whose names contain the ID1 string from this cluster OCID.
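
    For example, the following command lists instance display names in the compartment and filters for the ID1 string (the compartment OCID and ID1 are placeholders):

    $ oci compute instance list --compartment-id ocid1.compartment.unique_ID \
    --query 'data[]."display-name"' | grep ID1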

Cluster Next Steps

  1. Create a Kubernetes configuration file for the cluster. See Creating a Kubernetes Configuration File. A brief CLI example appears after this list.

  2. Deploy a Kubernetes Dashboard to manage the cluster and to manage and troubleshoot applications running in the cluster. On the https://kubernetes.io/ site, see Deploy and Access the Kubernetes Dashboard.

  3. Create a node pool for the cluster. See Creating an OKE Worker Node Pool.

  4. Create a backup for the workload cluster. For example, see Backing up an etcd cluster and Restoring an etcd cluster in Operating etcd clusters for Kubernetes. Use the etcd backup to recover OKE clusters under disaster scenarios such as losing all control plane nodes. An etcd backup contains all OKE states and critical information. An etcd backup does not back up applications or other content on cluster nodes.
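
  The following is a minimal sketch of step 1, generating the kubeconfig file with the OCI CLI and verifying access; the cluster OCID and file path are placeholders, and Creating a Kubernetes Configuration File has the complete procedure:

  $ oci ce cluster create-kubeconfig --cluster-id cluster_OCID --file $HOME/.kube/config
  $ kubectl get nodes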