Using the Console to Create a Cluster with Explicitly Defined Settings in the 'Custom Create' Workflow

To create a cluster with explicitly defined settings and existing network resources in the 'Custom Create' workflow using Container Engine for Kubernetes:

  1. In the Console, open the navigation menu and click Developer Services. Under Containers, click Kubernetes Clusters (OKE).
  2. Choose a Compartment you have permission to work in.
  3. On the Cluster List page, click Create Cluster.
  4. In the Create Cluster dialog, select Custom Create and click Launch Workflow.
  5. On the Create Cluster page, either accept the default configuration details for the new cluster, or specify alternatives as follows:

    • Name: The name of the new cluster. Either accept the default name or enter a name of your choice. Avoid entering confidential information.
    • Compartment: The compartment in which to create the new cluster.
    • Kubernetes Version: The version of Kubernetes to run on the cluster's control plane nodes and worker nodes. Either accept the default version or select a version of your choice. Among other things, the Kubernetes version you select determines the default set of admission controllers that are turned on in the created cluster (see Supported Admission Controllers).
  6. Either accept the defaults for advanced cluster options, or click Show Advanced Options and set the options as follows:

    1. Specify whether to only allow the deployment of images from Oracle Cloud Infrastructure Registry that have been signed by particular master encryption keys. To enforce the use of signed images, select Enable image verification policies on this cluster, and then specify the encryption key and the vault that contains it. See Enforcing the Use of Signed Images from Registry.

    2. Specify how to encrypt Kubernetes secrets at rest in the etcd key-value store for the cluster:

      • Encrypt using an Oracle-managed key: Encrypt Kubernetes secrets in the etcd key-value store using a master encryption key that is managed by Oracle.
      • Encrypt using a key that you manage: Encrypt Kubernetes secrets in the etcd key-value store using a master encryption key (stored in the Vault service) that you manage. If you select this option, specify:

        • Choose a Vault in <compartment-name>: The vault that contains the master encryption key, from the list of vaults in the specified compartment. By default, <compartment-name> is the compartment in which you are creating the cluster, but you can select a different compartment by clicking Change Compartment.
        • Choose a Key in <compartment-name>: The name of the master encryption key, from the list of keys in the specified compartment. By default, <compartment-name> is the compartment in which you are creating the cluster, but you can select a different compartment by clicking Change Compartment. Note that you cannot change the master encryption key after the cluster has been created.

      Note that if you want to manage the master encryption key, a suitable key, dynamic group, and policy must already exist before you can create the cluster. For more information, see Encrypting Kubernetes Secrets at Rest in Etcd.
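      As an illustrative sketch only (the dynamic group name, compartment name, and OCIDs are placeholders; the exact statements you need are described in Encrypting Kubernetes Secrets at Rest in Etcd), the prerequisite dynamic group rule and policy might take a form like:

      ```
      # Hypothetical dynamic group rule matching clusters in a compartment:
      #   ALL {resource.type = 'cluster', resource.compartment.id = '<compartment-OCID>'}

      # Hypothetical policy allowing those clusters to use the master encryption key:
      Allow dynamic-group acme-oke-clusters to use keys in compartment acme-compartment where target.key.id = '<key-OCID>'
      ```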

    3. Specify whether to control the operations that pods are allowed to perform on the cluster by enforcing pod security policies:

      • Not Enforced: Do not enforce pod security policies.
      • Enforced: Do enforce pod security policies, by enabling the PodSecurityPolicy admission controller. Only pods that meet the conditions in a pod security policy are accepted by the cluster. For more information, see Using Pod Security Policies with Container Engine for Kubernetes.
      Caution

      When you enable a cluster's PodSecurityPolicy admission controller, no application pods can start on the cluster unless suitable pod security policies exist, along with roles (or clusterroles) and rolebindings (or clusterrolebindings) to associate pods with those policies. Until these prerequisites are met, you cannot run application pods on the cluster.

      We strongly recommend that you use the PodSecurityPolicy admission controller as follows:

      • Whenever you create a new cluster, enable the PodSecurityPolicy admission controller.
      • Immediately after creating a new cluster, create pod security policies, along with roles (or clusterroles) and rolebindings (or clusterrolebindings).
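      As an illustration, a minimal pod security policy, together with a clusterrole and clusterrolebinding that allow all service accounts to use it, might look like the following sketch (all names are placeholders; see Using Pod Security Policies with Container Engine for Kubernetes for complete guidance):

      ```yaml
      apiVersion: policy/v1beta1
      kind: PodSecurityPolicy
      metadata:
        name: restricted-example        # hypothetical policy name
      spec:
        privileged: false               # disallow privileged pods
        runAsUser:
          rule: RunAsAny
        seLinux:
          rule: RunAsAny
        supplementalGroups:
          rule: RunAsAny
        fsGroup:
          rule: RunAsAny
        volumes:
          - 'configMap'
          - 'emptyDir'
          - 'secret'
          - 'persistentVolumeClaim'
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      metadata:
        name: restricted-example-psp
      rules:
      - apiGroups: ['policy']
        resources: ['podsecuritypolicies']
        verbs: ['use']
        resourceNames: ['restricted-example']
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: restricted-example-psp
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: restricted-example-psp
      subjects:
      - kind: Group
        apiGroup: rbac.authorization.k8s.io
        name: system:serviceaccounts    # every service account may use the policy
      ```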
    4. Specify whether to add Cluster tags to the cluster, Initial load balancer tags to load balancers created by Kubernetes services of type LoadBalancer, and Initial block volume tags to block volumes created by Kubernetes persistent volume claims. Tagging enables you to group disparate resources across compartments, and also enables you to annotate resources with your own metadata. See Tagging Kubernetes Cluster-Related Resources.
  7. Click Next and specify the existing network resources to use for the new cluster on the Network Setup page:

    • VCN in <compartment-name>: The existing virtual cloud network that has been configured for cluster creation and deployment. By default, <compartment-name> is the compartment in which you are creating the cluster, but you can select a different compartment by clicking Change Compartment. See VCN Configuration.
    • Kubernetes Service LB Subnets: Optionally, the existing subnets that have been configured to host load balancers. Load balancer subnets must be different from worker node subnets, can be public or private, and can be regional (recommended) or AD-specific. You don't have to specify any load balancer subnets. However, if you do, the number to specify depends on the region in which you are creating the cluster and on whether the subnets are regional or AD-specific.

      If you are creating a cluster in a region with three availability domains, you can specify:

      • Zero or one load balancer regional subnet (recommended).
      • Zero or two load balancer AD-specific subnets. If you specify two AD-specific subnets, the two subnets must be in different availability domains.

      If you are creating a cluster in a region with a single availability domain, you can specify:

      • Zero or one load balancer regional subnet (recommended).
      • Zero or one load balancer AD-specific subnet.

      See Subnet Configuration.
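      The load balancer subnets you specify here are where Container Engine for Kubernetes provisions load balancers for Kubernetes services of type LoadBalancer deployed to the cluster. As a sketch, a service manifest can also select a load balancer subnet explicitly with an annotation (the subnet OCID and all names are placeholders):

      ```yaml
      apiVersion: v1
      kind: Service
      metadata:
        name: example-lb-service
        annotations:
          # Hypothetical subnet OCID; overrides the cluster-level LB subnet choice
          service.beta.kubernetes.io/oci-load-balancer-subnet1: "ocid1.subnet.oc1..exampleuniqueID"
      spec:
        type: LoadBalancer
        selector:
          app: example
        ports:
        - port: 80
          targetPort: 8080
      ```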

    • Kubernetes API Endpoint Subnet: A regional subnet to host the cluster's Kubernetes API endpoint. The Kubernetes API endpoint is assigned a private IP address. The subnet you specify can be public or private. To simplify access management, Oracle recommends placing the Kubernetes API endpoint in a different subnet from worker nodes and load balancers. For more information, see Kubernetes Cluster Control Plane and Kubernetes API.
    • Use network security groups to control traffic: Control access to the cluster's Kubernetes API endpoint using security rules defined for one or more network security groups (NSGs) that you specify. You can use security rules defined for NSGs instead of, or as well as, those defined for security lists. For more information about the security rules to specify for the NSG, see Security Rules for the Kubernetes API Endpoint.
    • Assign a public IP address to the API endpoint: If you selected a public subnet for the Kubernetes API endpoint, you can assign a public IP address to the Kubernetes API endpoint, in addition to the private IP address.
  8. Either accept the defaults for advanced cluster options, or click Show Advanced Options and specify alternatives as follows:

    • Kubernetes Service CIDR Block: The available group of network addresses that can be exposed as Kubernetes services (ClusterIPs), expressed as a single, contiguous IPv4 CIDR block. For example, 10.96.0.0/16. The CIDR block you specify must not overlap with the CIDR block for the VCN. See CIDR Blocks and Container Engine for Kubernetes.
    • Pods CIDR Block: The available group of network addresses that can be allocated to pods running in the cluster, expressed as a single, contiguous IPv4 CIDR block. For example, 10.244.0.0/16. The CIDR block you specify must not overlap with the CIDR blocks for subnets in the VCN, and can be outside the VCN CIDR block. See CIDR Blocks and Container Engine for Kubernetes.
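    Before entering these values, you can sanity-check the CIDR blocks for overlaps programmatically. A minimal sketch using Python's standard ipaddress module (the CIDR values are illustrative, not requirements):

    ```python
    import ipaddress

    # Illustrative values: a VCN CIDR plus the two cluster CIDR blocks
    vcn_cidr = ipaddress.ip_network("10.0.0.0/16")
    services_cidr = ipaddress.ip_network("10.96.0.0/16")   # Kubernetes Service CIDR Block
    pods_cidr = ipaddress.ip_network("10.244.0.0/16")      # Pods CIDR Block

    # The services CIDR block must not overlap the VCN CIDR block
    print(vcn_cidr.overlaps(services_cidr))  # False means no overlap
    # The pods CIDR block can lie outside the VCN, but must not overlap VCN subnets
    print(vcn_cidr.overlaps(pods_cidr))      # False means no overlap
    ```
    
    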
  9. Click Next and specify configuration details for the first node pool in the cluster on the Node Pools page:

    • Name: A name of your choice for the new node pool. Avoid entering confidential information.
    • Version: The version of Kubernetes to run on each worker node in the node pool. By default, the version of Kubernetes specified for the control plane nodes is selected. The Kubernetes version on worker nodes must be either the same version as that on the control plane nodes, or an earlier version that is still compatible. See Kubernetes Versions and Container Engine for Kubernetes.
    • Shape: The shape to use for each node in the node pool. The shape determines the number of CPUs and the amount of memory allocated to each node. If you select a flexible shape, you can explicitly specify the number of CPUs and the amount of memory. The list shows only those shapes available in your tenancy that are supported by Container Engine for Kubernetes. See Supported Images (Including Custom Images) and Shapes for Worker Nodes.
    • Image: The image to use on each node in the node pool. An image is a template of a virtual hard drive that determines the operating system and other software for the node. See Supported Images (Including Custom Images) and Shapes for Worker Nodes.
    • Number of Nodes: The number of worker nodes to create in the node pool, placed in the availability domains you select, and in the regional subnet (recommended) or AD-specific subnet you specify for each availability domain.
    • Network Security Group: Control access to the node pool using security rules defined for one or more network security groups (NSGs) that you specify (up to a maximum of five). You can use security rules defined for NSGs instead of, or as well as, those defined for security lists. For more information about the security rules to specify for the NSG, see Security Rules for Worker Nodes.
    • Boot volume: Configure the size and encryption options for the worker node's boot volume:

      • To specify a custom size for the boot volume, select the Specify a custom boot volume size check box. Then, enter a custom size from 50 GB to 32 TB. The specified size must be larger than the default boot volume size for the selected image. See Custom Boot Volume Sizes for more information.
      • For VM instances, you can optionally select the Use in-transit encryption check box. For bare metal instances that support in-transit encryption, it is enabled by default and is not configurable. See Block Volume Encryption for more information about in-transit encryption. If you are using your own Vault service encryption key for the boot volume, then this key is also used for in-transit encryption. Otherwise, the Oracle-provided encryption key is used.
      • Boot volumes are encrypted by default, but you can optionally use your own Vault service encryption key to encrypt the data in this volume. To use the Vault service for your encryption needs, select the Encrypt this volume with a key that you manage check box. Then, select the Vault compartment and Vault that contain the master encryption key you want to use. Also select the Master encryption key compartment and Master encryption key. For more information about encryption, see Overview of Vault. If you enable this option, this key is used for both data at rest encryption and in-transit encryption.
        Important

        The Block Volume service does not support encrypting volumes with keys encrypted using the Rivest-Shamir-Adleman (RSA) algorithm. When using your own keys, you must use keys encrypted using the Advanced Encryption Standard (AES) algorithm. This applies to block volumes and boot volumes.

      Note that to use your own Vault service encryption key to encrypt data, an IAM policy must grant access to the service encryption key. See Create Policy to Access User-Managed Encryption Keys for Encrypting Boot and Block Volumes.

    • Placement Configuration:
      • Availability Domain: An availability domain in which to place worker nodes.
      • Subnet: A regional subnet (recommended) or AD-specific subnet configured to host worker nodes. If you specified load balancer subnets, the worker node subnets must be different. The subnets you specify can be public or private, and can be regional (recommended) or AD-specific. See Subnet Configuration.

      Optionally click Show Advanced Options to specify a capacity reservation to use (see Using Capacity Reservations to Provision Worker Nodes).

      Optionally click Another Row to select additional domains and subnets in which to place worker nodes.

      When they are created, the worker nodes are distributed as evenly as possible across the availability domains you select (or in the case of a single availability domain, across the fault domains in that availability domain).
    • Kubernetes Labels: (Optional) One or more labels (in addition to a default label) to add to worker nodes in the node pool to enable the targeting of workloads at specific node pools.
    • Initialization Script: (Optional) A script for cloud-init to run on each instance hosting worker nodes when the instance boots up for the first time. The script you specify must be written in one of the formats supported by cloud-init (for example, cloud-config), and must be a supported filetype (for example, .yaml). Specify the script as follows:
      • Choose Cloud-Init Script: Select a file containing the cloud-init script, or drag and drop the file into the box.
      • Paste Cloud-Init Script: Copy the contents of a cloud-init script, and paste it into the box.

      If you have not previously written cloud-init scripts for initializing worker nodes in clusters created by Container Engine for Kubernetes, you might find it helpful to click Download Custom Cloud-Init Script Template. The downloaded file contains the default logic provided by Container Engine for Kubernetes. You can add your own custom logic either before or after the default logic, but do not modify the default logic. For examples, see Example Usecases for Custom Cloud-init Scripts.
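      As a structural sketch only (the real default logic comes from the downloaded template and must be left unmodified), a custom cloud-init file in cloud-config format might be organized like this:

      ```yaml
      #cloud-config
      runcmd:
        - echo "custom logic that runs before the default logic"
        # ... default logic from the downloaded template, unmodified ...
        - echo "custom logic that runs after the default logic"
      ```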

    • Node Pool tags and Node tags: (Optional) One or more tags to add to the node pool, and to compute instances hosting worker nodes in the node pool. Tagging enables you to group disparate resources across compartments, and also enables you to annotate resources with your own metadata. See Tagging Kubernetes Cluster-Related Resources.
    • Public SSH Key: (Optional) The public key portion of the key pair you want to use for SSH access to each node in the node pool. The public key is installed on all worker nodes in the cluster. Note that if you don't specify a public SSH key, Container Engine for Kubernetes will provide one. However, since you won't have the corresponding private key, you will not have SSH access to the worker nodes. Note that you cannot use SSH to directly access worker nodes in private subnets (see Connecting to Worker Nodes in Private Subnets Using SSH).
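    For example, the Kubernetes Labels you add to the node pool can later be used to target workloads at that pool with a nodeSelector. A minimal sketch (the label key and value are hypothetical):

    ```yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: example-pod
    spec:
      # Schedule this pod only onto nodes carrying the hypothetical label pool: backend
      nodeSelector:
        pool: backend
      containers:
      - name: app
        image: nginx:latest
    ```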
  10. (Optional) Click Another node pool and specify configuration details for a second and subsequent node pools in the cluster.

    If you define multiple node pools in a cluster, you can host all of them on a single AD-specific subnet. However, it's best practice to host different node pools for a cluster on a regional subnet (recommended) or on different AD-specific subnets (one in each availability domain in the region).

  11. Click Next to review the details you entered for the new cluster.
  12. Click Create Cluster to create the new cluster.

    Container Engine for Kubernetes starts creating the cluster with the name you specified.

    If you specified details for one or more node pools, Container Engine for Kubernetes creates:

    • node pools with the names you specified
    • worker nodes with auto-generated names in the format oke-c<part-of-cluster-OCID>-n<part-of-node-pool-OCID>-s<part-of-subnet-OCID>-<slot>

    Do not change the auto-generated names of worker nodes.

  13. Click Close to return to the Console.

Initially, the new cluster appears in the Console with a status of Creating. When the cluster has been created, it has a status of Active.

Container Engine for Kubernetes also creates a Kubernetes kubeconfig configuration file that you use to access the cluster using kubectl.
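For example, once the cluster is Active, you can typically download the kubeconfig file and verify access with the OCI CLI and kubectl (the cluster OCID and region shown are placeholders; these commands require a configured OCI CLI and a provisioned cluster):

```shell
# Merge the new cluster's kubeconfig into $HOME/.kube/config
oci ce cluster create-kubeconfig \
  --cluster-id ocid1.cluster.oc1..exampleuniqueID \
  --file $HOME/.kube/config \
  --region us-phoenix-1 \
  --token-version 2.0.0

# Confirm the cluster is reachable
kubectl get nodes
```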