Using the Console to Create a Cluster with Default Settings in the 'Quick Create' Workflow

Find out how to use the 'Quick Create' workflow to create a Kubernetes cluster with default settings and new network resources using Container Engine for Kubernetes (OKE).

To create a cluster with default settings and new network resources in the 'Quick Create' workflow using Container Engine for Kubernetes:

  1. Open the navigation menu and click Developer Services. Under Containers & Artifacts, click Kubernetes Clusters (OKE).
  2. Choose a Compartment you have permission to work in.

  3. On the Cluster List page, click Create cluster.
  4. In the Create cluster dialog, select Quick create and click Submit.
  5. On the Create cluster page, either just accept the default configuration details for the new cluster, or specify alternatives as follows:

    • Name: The name of the new cluster. Either accept the default name or enter a name of your choice. Avoid entering confidential information.
    • Compartment: The compartment in which to create the new cluster and the associated network resources.
    • Kubernetes version: The version of Kubernetes to run on the control plane nodes and worker nodes of the cluster. Either accept the default version or select a version of your choice. Among other things, the Kubernetes version determines the default set of admission controllers that are turned on in the created cluster (see Supported Admission Controllers).
    • Kubernetes API endpoint: The type of access to the cluster's Kubernetes API endpoint. The Kubernetes API endpoint is either private (accessible by other subnets in the VCN) or public (accessible directly from the internet):

      • Private endpoint: A private regional subnet is created and the Kubernetes API endpoint is hosted in that subnet. The Kubernetes API endpoint is assigned a private IP address.
      • Public endpoint: A public regional subnet is created and the Kubernetes API endpoint is hosted in that subnet. The Kubernetes API endpoint is assigned a public IP address as well as a private IP address.

      Private and public endpoints are assigned a security rule (as part of a security list) that grants access to the Kubernetes API endpoint (TCP/6443).

      For more information, see Kubernetes Cluster Control Plane and Kubernetes API. A programmatic sketch of this endpoint choice appears after this list.

    • Node type: Specify the type of worker nodes in the first node pool in the cluster (see Virtual Nodes and Managed Nodes). Select one of the following options:
      • Managed: Select this option when you want to have responsibility for managing the worker nodes in the node pool. Managed nodes run on compute instances (either bare metal or virtual machine) in your tenancy. Because you manage these nodes yourself, you have the flexibility to configure them to meet your specific requirements. You are responsible for upgrading Kubernetes on managed nodes, and for managing cluster capacity.
      • Virtual: Select this option when you want to benefit from a 'serverless' Kubernetes experience. Virtual nodes enable you to run Kubernetes pods at scale without the operational overhead of upgrading the data plane infrastructure and managing the capacity of clusters.

      For more information, see Comparing Virtual Nodes with Managed Nodes.
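
    If you want to script the same endpoint choice instead of using the Console, the cluster APIs expose it at creation time. The following is a minimal, illustrative sketch using the OCI Python SDK; all names, versions, and OCIDs are placeholders, and in the 'Quick Create' workflow the VCN and subnets are normally created for you:

    ```python
    # Minimal sketch: create a cluster with a public Kubernetes API endpoint
    # using the OCI Python SDK. All OCIDs and names below are placeholders.
    import oci

    config = oci.config.from_file()  # reads ~/.oci/config by default
    ce_client = oci.container_engine.ContainerEngineClient(config)

    details = oci.container_engine.models.CreateClusterDetails(
        name="my-quick-cluster",                       # hypothetical name
        compartment_id="ocid1.compartment.oc1..aaaa",  # placeholder OCID
        vcn_id="ocid1.vcn.oc1..aaaa",                  # placeholder OCID
        kubernetes_version="v1.28.2",                  # any supported version
        endpoint_config=oci.container_engine.models.CreateClusterEndpointConfigDetails(
            subnet_id="ocid1.subnet.oc1..aaaa",  # subnet hosting the endpoint
            is_public_ip_enabled=True,           # False for a private endpoint
        ),
    )

    # Cluster creation is asynchronous; the response carries a work request ID.
    response = ce_client.create_cluster(details)
    print(response.headers.get("opc-work-request-id"))
    ```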

  6. If you selected Managed as the Node type:

    1. Specify managed node details:
      • Kubernetes worker nodes: The type of access to the cluster's worker nodes. The worker nodes are either private (accessible through other VCN subnets) or public (accessible directly from the internet):

        • Private workers: Recommended. A private regional subnet is created to host worker nodes. The worker nodes are assigned a private IP address.
        • Public workers: A public regional subnet is created to host worker nodes. The worker nodes are assigned a public IP address as well as a private IP address.

        Note that a public regional subnet is always created to host load balancers in clusters created in the 'Quick Create' workflow, regardless of your selection here.

      • Node shape: The shape to use for each node in the node pool. The shape determines the number of CPUs and the amount of memory allocated to each node. If you select a flexible shape, you can explicitly specify the number of CPUs and the amount of memory (a programmatic sketch follows this list). The list shows only those shapes available in your tenancy that are supported by Container Engine for Kubernetes. See Supported Images (Including Custom Images) and Shapes for Worker Nodes.
      • Image: The image to use on worker nodes in the managed node pool. An image is a template of a virtual hard drive that determines the operating system and other software for the managed node pool.

        To change the default image, click Change image. In the Browse all images window, choose an Image source and select an image as follows:

        • OKE Worker Node Images: Recommended. Provided by Oracle and built on top of platform images. OKE images are optimized to serve as base images for worker nodes, with all the necessary configurations and required software. Select an OKE image if you want to minimize the time it takes to provision worker nodes at runtime when compared to platform images and custom images.

          OKE image names include the Kubernetes version they contain. Note that if you specify a Kubernetes version for the node pool, the OKE image you select here must have the same version number as the node pool's Kubernetes version.

        • Platform images: Provided by Oracle and only contain an Oracle Linux operating system. Select a platform image if you want Container Engine for Kubernetes to download, install, and configure required software when the compute instance hosting a worker node boots up for the first time.

        See Supported Images (Including Custom Images) and Shapes for Worker Nodes.

      • Node count: The number of worker nodes to create in the node pool, placed in the regional subnet created for the cluster. The nodes are distributed as evenly as possible across the availability domains in a region (or in the case of a region with a single availability domain, across the fault domains in that availability domain).
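
      When you select a flexible shape, the OCPU count and memory become explicit parameters. As a point of reference, this is roughly how the same choice looks in the OCI Python SDK (the shape name and values are illustrative):

      ```python
      # Sketch: explicit OCPU and memory values for a flexible worker-node shape.
      import oci

      shape_config = oci.container_engine.models.CreateNodeShapeConfigDetails(
          ocpus=2.0,           # OCPUs per worker node
          memory_in_gbs=16.0,  # memory per worker node, in GB
      )
      # Passed as node_shape_config alongside a flexible shape such as
      # "VM.Standard.E4.Flex" when creating the node pool (see the sketch
      # at the end of this step).
      ```
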
    2. Either accept the defaults for advanced cluster options, or click Show advanced options and specify alternatives as follows:

      • Boot volume: Configure the size and encryption options for the worker node's boot volume:

        • To specify a custom size for the boot volume, select the Specify a custom boot volume size check box. Then, enter a custom size from 50 GB to 32 TB. The specified size must be larger than the default boot volume size for the selected image. See Custom Boot Volume Sizes for more information. If you increase the boot volume size, extend the partition for the boot volume to take advantage of the larger size (see Extending the Partition for a Boot Volume). A sketch that sets a custom boot volume size programmatically appears at the end of this step.
        • For VM instances, you can optionally select the Use in-transit encryption check box. For bare metal instances that support in-transit encryption, it is enabled by default and is not configurable. See Block Volume Encryption for more information about in-transit encryption. If you are using your own Vault service encryption key for the boot volume, then this key is also used for in-transit encryption. Otherwise, the Oracle-provided encryption key is used.
        • Boot volumes are encrypted by default, but you can optionally use your own Vault service encryption key to encrypt the data in this volume. To use the Vault service for your encryption needs, select the Encrypt this volume with a key that you manage check box. Select the vault compartment and vault that contains the master encryption key that you want to use, and then select the master encryption key compartment and master encryption key. If you enable this option, this key is used for both data at rest encryption and in-transit encryption.
          Important

          The Block Volume service does not support encrypting volumes with keys encrypted using the Rivest-Shamir-Adleman (RSA) algorithm. When using your own keys, you must use keys encrypted using the Advanced Encryption Standard (AES) algorithm. This applies to block volumes and boot volumes.

        Note that to use your own Vault service encryption key to encrypt data, an IAM policy must grant access to the service encryption key. See Create Policy to Access User-Managed Encryption Keys for Encrypting Boot Volumes, Block Volumes, and/or File Systems.

      • Enable image verification policies on this cluster: (Optional) Whether to allow only the deployment of images from Oracle Cloud Infrastructure Registry that have been signed by particular master encryption keys. Specify the encryption key and the vault that contains it. See Enforcing the Use of Signed Images from Registry.
      • Public SSH Key: (Optional) The public key portion of the key pair you want to use for SSH access to each node in the node pool. The public key is installed on all worker nodes in the cluster. If you don't specify a public SSH key, Container Engine for Kubernetes will provide one. However, because you won't have the corresponding private key, you will not have SSH access to the worker nodes. Also note that if the worker nodes in the cluster are hosted in a private regional subnet, you cannot use SSH to access them directly (see Connecting to Managed Nodes in Private Subnets Using SSH).
      • Kubernetes Labels: (Optional) One or more labels (in addition to a default label) to add to worker nodes in the node pool to enable the targeting of workloads at specific node pools. For example, to exclude all the nodes in a node pool from the list of backend servers in a load balancer backend set, specify node.kubernetes.io/exclude-from-external-load-balancers=true (see node.kubernetes.io/exclude-from-external-load-balancers).
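
      Several of the advanced options above map directly onto node pool creation parameters. The sketch below, again using the OCI Python SDK, sets a custom boot volume size, an SSH public key, and the load balancer exclusion label; every OCID, the image, the availability domain, and the key are placeholders:

      ```python
      # Sketch: a managed node pool with a custom boot volume size, an SSH key,
      # and an extra Kubernetes label. All OCIDs and values are placeholders.
      import oci

      models = oci.container_engine.models

      details = models.CreateNodePoolDetails(
          cluster_id="ocid1.cluster.oc1..aaaa",
          compartment_id="ocid1.compartment.oc1..aaaa",
          name="pool1",
          kubernetes_version="v1.28.2",
          node_shape="VM.Standard.E4.Flex",
          node_shape_config=models.CreateNodeShapeConfigDetails(
              ocpus=2.0, memory_in_gbs=16.0),
          node_source_details=models.CreateNodeSourceViaImageDetails(
              image_id="ocid1.image.oc1..aaaa",  # an OKE worker node image
              boot_volume_size_in_gbs=100,       # must exceed the image default
          ),
          ssh_public_key="ssh-rsa AAAA... user@host",  # enables SSH to the nodes
          initial_node_labels=[
              # Exclude these nodes from load balancer backend sets.
              models.KeyValue(
                  key="node.kubernetes.io/exclude-from-external-load-balancers",
                  value="true",
              ),
          ],
          node_config_details=models.CreateNodePoolNodeConfigDetails(
              size=3,  # node count
              placement_configs=[
                  models.NodePoolPlacementConfigDetails(
                      availability_domain="Uocm:PHX-AD-1",  # placeholder AD
                      subnet_id="ocid1.subnet.oc1..aaaa",   # worker subnet
                  )
              ],
          ),
      )

      oci.container_engine.ContainerEngineClient(
          oci.config.from_file()).create_node_pool(details)
      ```
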
  7. If you selected Virtual as the Node type:

    1. Specify virtual node details:
      • Node count: The number of virtual nodes to create in the virtual node pool, placed in the regional subnet created for the cluster. The nodes are distributed as evenly as possible across the availability domains in a region (or in the case of a region with a single availability domain, across the fault domains in that availability domain).
      • Pod shape: The shape to use for pods running on virtual nodes in the virtual node pool. The shape determines the processor type on which to run the pod.

        Only those shapes available in your tenancy that are supported by Container Engine for Kubernetes are shown. See Supported Images (Including Custom Images) and Shapes for Worker Nodes.

        Note that you explicitly specify the CPU and memory resource requirements for virtual nodes in the pod spec (see Assign Memory Resources to Containers and Pods and Assign CPU Resources to Containers and Pods in the Kubernetes documentation).
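
        For example, a pod destined for a virtual node might declare its CPU and memory needs as follows. This sketch uses the official Kubernetes Python client; the pod name and image are illustrative:

        ```python
        # Sketch: declaring CPU and memory requests in a pod spec, which is how
        # capacity is sized on virtual nodes. Names and images are illustrative.
        from kubernetes import client, config

        config.load_kube_config()  # uses the kubeconfig created for the cluster

        pod = client.V1Pod(
            metadata=client.V1ObjectMeta(name="example-pod"),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="app",
                        image="nginx:1.25",
                        resources=client.V1ResourceRequirements(
                            requests={"cpu": "500m", "memory": "512Mi"},
                            limits={"cpu": "1", "memory": "1Gi"},
                        ),
                    )
                ],
            ),
        )

        client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
        ```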

    2. Either accept the defaults for advanced cluster options, or click Show advanced options and specify alternatives as follows:

      • Kubernetes labels and taints: (Optional) Enable the targeting of workloads at specific node pools by adding labels and taints to virtual nodes:
        • Labels: One or more labels (in addition to a default label) to add to virtual nodes in the virtual node pool to enable the targeting of workloads at specific node pools.
        • Taints: One or more taints to add to virtual nodes in the virtual node pool. Taints enable virtual nodes to repel pods, thereby ensuring that pods do not run on virtual nodes in a particular virtual node pool. Note that you can only apply taints to virtual nodes. A sketch of a matching pod toleration appears after this list.

        For more information, see Assigning Pods to Nodes in the Kubernetes documentation.
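
        To illustrate, suppose the virtual nodes carry a hypothetical taint app=batch:NoSchedule. A pod must declare a matching toleration before the scheduler will place it on those nodes; a sketch using the Kubernetes Python client:

        ```python
        # Sketch: a toleration matching a hypothetical taint (app=batch:NoSchedule)
        # on virtual nodes. Without it, the taint repels the pod.
        from kubernetes import client

        pod_spec = client.V1PodSpec(
            containers=[client.V1Container(name="app", image="nginx:1.25")],
            tolerations=[
                client.V1Toleration(
                    key="app", operator="Equal", value="batch", effect="NoSchedule",
                )
            ],
        )
        ```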

  8. Click Next to review the details you entered for the new cluster.
  9. If you have not selected any of the enhanced cluster features and you want to create the new cluster as a basic cluster rather than as an enhanced cluster, choose the Create a Basic cluster option on the Review page. See Working with Enhanced Clusters and Basic Clusters.
  10. Click Create cluster to create the new network resources and the new cluster now.

    Container Engine for Kubernetes starts creating resources (as shown in the Creating cluster and associated network resources dialog):

    • the network resources (such as the VCN, internet gateway, NAT gateway, route tables, security lists, a regional subnet for worker nodes and another regional subnet for load balancers), with auto-generated names in the format oke-<resource-type>-quick-<cluster-name>-<creation-date>
    • the cluster, with the name you specified
    • the node pool, named pool1
    • worker nodes, with auto-generated names (managed node names have the format oke-c<part-of-cluster-OCID>-n<part-of-node-pool-OCID>-s<part-of-subnet-OCID>-<slot>, virtual node names are the same as the node's private IP address)

    Do not change the resource names that Container Engine for Kubernetes has auto-generated. Note that if the cluster is not created successfully for some reason (for example, if you have insufficient permissions or if you've exceeded the cluster limit for the tenancy), any network resources created during the cluster creation process are not deleted automatically. You will have to manually delete any such unused network resources.

    Note that rather than creating the new network resources and the new cluster immediately, you can create them later using Resource Manager and Terraform, by clicking Save as stack to save the resource definitions as a Terraform configuration. For more information about saving stacks from resource definitions, see Creating a Stack from a Resource Creation Page.

  11. Click Close to return to the Console.

Initially, the new cluster appears in the Console with a status of Creating. When the cluster has been created, it has a status of Active.

Container Engine for Kubernetes also creates a kubeconfig configuration file that you use to access the cluster using kubectl.
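
If you prefer to script this step, the kubeconfig file can also be downloaded programmatically. The following is a sketch using the OCI Python SDK; the cluster OCID and output path are placeholders:

```python
# Sketch: download the cluster's kubeconfig with the OCI Python SDK
# (the CLI equivalent is 'oci ce cluster create-kubeconfig').
import oci

config = oci.config.from_file()
ce_client = oci.container_engine.ContainerEngineClient(config)

response = ce_client.create_kubeconfig(
    cluster_id="ocid1.cluster.oc1..aaaa",  # placeholder OCID
    create_cluster_kubeconfig_content_details=(
        oci.container_engine.models.CreateClusterKubeconfigContentDetails(
            token_version="2.0.0",
        )
    ),
)

with open("kubeconfig", "wb") as f:
    f.write(response.data.content)

# Then, for example: kubectl --kubeconfig ./kubeconfig get nodes
```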