Creating a Kubernetes Cluster

You can use Container Engine for Kubernetes to create new Kubernetes clusters. To create a cluster, you must either belong to the tenancy's Administrators group or belong to a group to which a policy grants the CLUSTER_MANAGE permission. See Policy Configuration for Cluster Creation and Deployment.
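
For example, a policy statement like the following grants a group the ability to create and manage clusters in a compartment (the group and compartment names here are illustrative):

    Allow group acme-k8s-admins to manage clusters in compartment acme-k8s

A tenancy administrator can broaden such a statement to the cluster-family aggregate resource type to cover node pools as well.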

Using the Console, you first specify basic details for the new cluster (the cluster name and the Kubernetes version to install on control plane nodes). You can then create the cluster in one of two ways:

  • Using default settings in the 'Quick Create' workflow to create a cluster with new network resources as required. This approach is the fastest way to create a new cluster. If you accept all the default values, you can create a new cluster in just a few clicks. New network resources for the cluster are created automatically, including regional subnets for the Kubernetes API endpoint, for worker nodes, and for load balancers. The regional subnet for load balancers is public, but you specify whether the regional subnets for the Kubernetes API endpoint and for worker nodes are public or private. To create a cluster in the 'Quick Create' workflow, you must belong to a group to which a policy grants the necessary permissions to create the new network resources (see Create One or More Additional Policies for Groups).

  • Using custom settings in the 'Custom Create' workflow. This approach gives you the most control over the new cluster. You can explicitly define the new cluster's properties, and explicitly specify which existing network resources to use, including the existing public or private subnets in which to create the Kubernetes API endpoint, worker nodes, and load balancers (see the sketch following this list).

    Note that although you will usually define node pools immediately when defining a new cluster in the 'Custom Create' workflow, you don't have to. You can create a cluster with no node pools, and add node pools later. One reason to create a cluster that initially has no node pools is that you intend to install and configure a CNI network provider like Calico to support Kubernetes NetworkPolicy resources. If you install Calico on a cluster that has existing node pools in which pods are already running, you have to recreate those pods when the Calico installation is complete (for example, by running the kubectl rollout restart command). If you install Calico on a cluster before creating any node pools in the cluster (recommended), you can be sure that there are no pods to recreate. See Example: Installing Calico and Setting Up Network Policies.
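
As a rough sketch of the 'Custom Create' approach outside the Console, the following uses the OCI Python SDK to create a cluster in existing subnets with no node pools. This is illustrative only: every OCID, the cluster name, and the Kubernetes version are placeholder values to replace with your own.

    import oci

    # Load credentials from the default OCI configuration file (~/.oci/config).
    config = oci.config.from_file()
    ce = oci.container_engine.ContainerEngineClient(config)

    # All OCIDs below are placeholders; the VCN and subnets must already exist.
    details = oci.container_engine.models.CreateClusterDetails(
        name="example-cluster",
        compartment_id="ocid1.compartment.oc1..exampleuniqueID",
        vcn_id="ocid1.vcn.oc1..exampleuniqueID",
        kubernetes_version="v1.29.1",  # placeholder; use a supported version
        # Existing subnet (public or private) for the Kubernetes API endpoint.
        endpoint_config=oci.container_engine.models.CreateClusterEndpointConfigDetails(
            subnet_id="ocid1.subnet.oc1..exampleendpointsubnet",
            is_public_ip_enabled=False,
        ),
        # Existing subnet(s) in which load balancers are created.
        options=oci.container_engine.models.ClusterCreateOptions(
            service_lb_subnet_ids=["ocid1.subnet.oc1..examplelbsubnet"],
        ),
    )

    # Cluster creation is asynchronous; poll the returned work request
    # to track progress.
    response = ce.create_cluster(details)
    print(response.headers["opc-work-request-id"])

Because this sketch defines no node pools, you could install Calico at this point and add node pools afterwards, as recommended above.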

Regardless of how you create a cluster, Container Engine for Kubernetes gives names to worker nodes in the following format:

oke-c<part-of-cluster-OCID>-n<part-of-node-pool-OCID>-s<part-of-subnet-OCID>-<slot>

where:

  • oke is the standard prefix for all worker nodes created by Container Engine for Kubernetes
  • c<part-of-cluster-OCID> is a portion of the cluster's OCID, prefixed with the letter c
  • n<part-of-node-pool-OCID> is a portion of the node pool's OCID, prefixed with the letter n
  • s<part-of-subnet-OCID> is a portion of the subnet's OCID, prefixed with the letter s
  • <slot> is the ordinal number of the node in the subnet (for example, 0 or 1)

For example, if you specify that a cluster is to have two nodes in a node pool, the two nodes might be named:

  • oke-cywiqripuyg-nsgagklgnst-st2qczvnmba-0
  • oke-cywiqripuyg-nsgagklgnst-st2qczvnmba-1

Do not change the auto-generated names that Container Engine for Kubernetes gives to worker nodes.

To ensure high availability, Container Engine for Kubernetes:

  • creates the Kubernetes control plane on multiple Oracle-managed control plane nodes (distributing the control plane nodes across different availability domains in a region, where supported)
  • creates worker nodes in each of the fault domains in an availability domain (distributing the worker nodes as evenly as possible across the fault domains, subject to any other infrastructure restrictions), as illustrated in the sketch below
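
Continuing the illustrative OCI Python SDK sketch, the following adds a node pool whose placement configurations put one worker node in each availability domain in the region; Container Engine for Kubernetes then spreads the nodes in each availability domain across its fault domains. The shape, image, version, and OCID values are placeholders:

    import oci

    config = oci.config.from_file()
    ce = oci.container_engine.ContainerEngineClient(config)
    identity = oci.identity.IdentityClient(config)

    COMPARTMENT_ID = "ocid1.compartment.oc1..exampleuniqueID"  # placeholder

    # One placement configuration per availability domain in the region,
    # so the pool's worker nodes are distributed across all of them.
    ads = identity.list_availability_domains(COMPARTMENT_ID).data
    placement = [
        oci.container_engine.models.NodePoolPlacementConfigDetails(
            availability_domain=ad.name,
            subnet_id="ocid1.subnet.oc1..exampleworkersubnet",  # placeholder
        )
        for ad in ads
    ]

    ce.create_node_pool(oci.container_engine.models.CreateNodePoolDetails(
        compartment_id=COMPARTMENT_ID,
        cluster_id="ocid1.cluster.oc1..exampleuniqueID",  # placeholder
        name="pool1",
        kubernetes_version="v1.29.1",  # placeholder; match the cluster version
        node_shape="VM.Standard.E4.Flex",  # placeholder shape
        node_shape_config=oci.container_engine.models.CreateNodeShapeConfigDetails(
            ocpus=1, memory_in_gbs=16,
        ),
        node_source_details=oci.container_engine.models.NodeSourceViaImageDetails(
            image_id="ocid1.image.oc1..exampleuniqueID",  # placeholder node image
        ),
        node_config_details=oci.container_engine.models.CreateNodePoolNodeConfigDetails(
            size=len(ads),  # one node per availability domain
            placement_configs=placement,
        ),
    ))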