Creating an OKE Cluster
These procedures describe how to create an OKE cluster.
If you create a public cluster, the Network Load Balancer and public IP address are created and assigned as part of cluster creation.
Important:
Before you can create a cluster, the following conditions must be met:
- The OraclePCA-OKE.cluster_id defined tag must exist in the tenancy. See Creating the OraclePCA-OKE.cluster_id Tag.
- All fault domains must be healthy.
- Each fault domain must have at least one healthy compute instance.
- Sufficient resources must be available to create a cluster.
- No appliance upgrade is scheduled during cluster creation. If notifications are configured for operations such as system upgrade, ensure that you are on the list to be notified of such planned outages.
To create a node pool at the same time that you create the cluster, you must use the Compute Web UI.
To specify tags to be applied to all load balancers created by Kubernetes services, you must use the OCI CLI.
After you create a cluster, see the Cluster Next Steps section.
Using the Compute Web UI
- On the dashboard, select Containers / View Kubernetes Clusters (OKE).
- On the clusters list page, select the Create Cluster button.
- On the Cluster page in the Create Cluster dialog, provide the following information:
- Name: The name of the new cluster. Avoid entering confidential information.
- Compartment: The compartment in which to create the new cluster.
- Kubernetes Version: The version of Kubernetes to run on the control plane nodes. Accept the default version or select a different version.
If the Kubernetes version that you want to use is not listed, use the OCI CLI or the OCI API to create the cluster and specify the Kubernetes version.
- Tagging: Add defined or free-form tags for the cluster resource.
Note:
Do not specify values for the OraclePCA-OKE.cluster_id defined tag or for the ClusterResourceIdentifier free-form tag. These tag values are system-generated and only applied to nodes (instances), not to the cluster resource.
Use OraclePCA defined tags to provide the following information for control plane nodes. If these tags are not listed in the Compute Web UI Tagging menus, you must create them. See Creating OraclePCA Tags.
Important:
If you are using Private Cloud Appliance Release 3.0.2-b1081557, these defined tags are not recognized. You must use free-form tags to specify these values as described in the workaround in Create Cluster Does Not Support Extension Parameters. In Private Cloud Appliance Release 3.0.2-b1185392 and later, those free-form tags are deprecated; use the defined tags described below for the SSH key, number of control plane nodes, node shape, and node shape configuration.
Note:
None of these values (SSH key, number of nodes, node shape, or node shape configuration) can be set or changed after the cluster is created. If you set these tags when you update the cluster, the new values are ignored.
- Your public SSH key.
Specify sshkey for the tag key (OraclePCA.sshkey). Paste your public SSH key into the Value field.
Important:
You cannot add an SSH key after the cluster is created.
- Number of nodes.
By default, the number of nodes in the control plane is 3. You can specify 1, 3, or 5 nodes. To specify the number of control plane nodes, specify cpNodeCount for the tag key (OraclePCA.cpNodeCount), and select 1, 3, or 5 in the Value field.
- Node shape.
For Private Cloud Appliance X10 systems, the shape of the control plane nodes is VM.PCAStandard.E5.Flex and you cannot change it. For all other Private Cloud Appliance systems, the default shape is VM.PCAStandard1.1, and you can specify a different shape.
To use a different shape, specify cpNodeShape for the tag key (OraclePCA.cpNodeShape), and enter the name of the shape in the Value field. For a description of each shape, see Compute Shapes in the Oracle Private Cloud Appliance Concepts Guide.
- Node shape configuration.
If you specify a shape that is not a flexible shape, do not specify a shape configuration. The number of OCPUs and amount of memory are set to the values shown for this shape in "Standard Shapes" in Compute Shapes in the Oracle Private Cloud Appliance Concepts Guide.
If you specify a flexible shape, you can change the default shape configuration.
To provide shape configuration information, specify cpNodeShapeConfig for the tag key (OraclePCA.cpNodeShapeConfig). You must specify the number of OCPUs (ocpus) you want. You can optionally specify the total amount of memory you want (memoryInGBs). The default value for gigabytes of memory is 16 times the number you specify for OCPUs.
Note:
If the cluster will have 1-10 worker nodes, specify at least 16 GB memory. If the cluster will have 11-128 worker nodes, specify at least 2 OCPUs and 32 GB memory. Note that you cannot change the number of OCPUs or amount of memory when you update the cluster.
In the Value field for the tag, enter the node shape configuration value as shown in the following examples.
In the following example, the default amount of memory will be configured:
{"ocpus":1}
In the following example, the amount of memory is specified:
{"ocpus":2, "memoryInGBs":48}
Note:
If you use Terraform to specify a complex value (a value that is a key/value pair), then you must escape the double quotation marks in the value as shown in the following example:
"OraclePCA.cpNodeShapeConfig"="{\"ocpus\":2,\"memoryInGBs\":48}"
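The 16 GB-per-OCPU default described above can be illustrated with a short sketch. This is illustrative only; the variable names are arbitrary and nothing here contacts the appliance.

```shell
# Sketch: the default memoryInGBs for a flexible control plane node shape is
# 16 times the OCPU count, so {"ocpus":2} implies 32 GB of memory.
ocpus=2
default_memory_gbs=$((16 * ocpus))
printf '{"ocpus":%s, "memoryInGBs":%s}\n' "$ocpus" "$default_memory_gbs"
```

Specifying memoryInGBs explicitly in the tag value overrides this default.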
- Add-ons: This section shows a tile for each add-on that is available for this cluster. In the Create Cluster dialog, all add-ons are Disabled. See Installing the WebLogic Kubernetes Operator Add-on.
- Select Next.
- On the Network page in the Create Cluster dialog, provide the following information:
- Network Type. Specifies how pods running on nodes in the cluster communicate with each other, with the cluster's control plane nodes, with pods on other clusters, with other services (such as storage services), and with the internet.
The Flannel Overlay network type encapsulates communication between pods in the Flannel Overlay network. The Flannel Overlay network is a simple private overlay virtual network that satisfies the requirements of the OKE networking model by attaching IP addresses to containers. The pods in the private overlay network are only accessible from other pods in the same cluster. For more description, see Creating Flannel Overlay Network Resources.
VCN-Native Pod Networking connects nodes in a Kubernetes cluster to pod subnets in the OKE VCN. As a result, pod IP addresses within the OKE VCN are directly routable from other VCNs that are connected (peered) to the OKE VCN, and from on-premises networks. For more description, see Creating VCN-Native Pod Networking Resources.
Note:
If you specify VCN-Native Pod Networking, then the VCN you specify must have a subnet named "pod". See Creating VCN-Native Pod Networking Resources.
- VCN. Select the VCN that has the configuration of the "oke_vcn" VCN described in Creating a Flannel Overlay VCN or Creating a VCN-Native Pod Networking VCN.
- Kubernetes Service LB Subnet. The subnet that is configured to host the load balancer in an OKE cluster. To create a public cluster, create and specify here the public version of the "service-lb" subnet described in Creating a Flannel Overlay Worker Load Balancer Subnet or Creating a VCN-Native Pod Networking Worker Load Balancer Subnet. To create a private cluster, create and specify here the private version of the "service-lb" subnet.
- Kubernetes API Endpoint Subnet. The regional subnet in which to place the cluster endpoint. To create a public cluster, create and specify here the public version of the "control-plane-endpoint" subnet described in Creating a Flannel Overlay Control Plane Load Balancer Subnet or Creating a VCN-Native Pod Networking Control Plane Load Balancer Subnet. To create a private cluster, create and specify here the private version of the "control-plane-endpoint" subnet.
- Kubernetes Service CIDR Block. (Optional) The default value is 10.96.0.0/16.
- Pods CIDR Block. (Optional) The default value is 10.244.0.0/16.
- Network Security Group. If you check the box to enable network security groups, select the Add Network Security Group button and select an NSG from the drop-down list. You might need to change the compartment to find the NSG you want.
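The service CIDR and pods CIDR ranges above must be distinct from each other (and, generally, from address ranges already used in your VCN). A quick local check of the two defaults, using python3's ipaddress module as a helper, can be sketched as:

```shell
# Sketch: confirm the default service CIDR (10.96.0.0/16) and pods CIDR
# (10.244.0.0/16) do not overlap. Runs entirely locally; no appliance access.
svc_cidr="10.96.0.0/16"
pods_cidr="10.244.0.0/16"
overlap_check=$(python3 -c 'import ipaddress,sys
a = ipaddress.ip_network(sys.argv[1])
b = ipaddress.ip_network(sys.argv[2])
print("overlap" if a.overlaps(b) else "ok")' "$svc_cidr" "$pods_cidr")
echo "$overlap_check"
```

Repeat the check against your own VCN subnet ranges if you change either default.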
- Select Next.
- On the Node Pool page, select the Add Node Pool button to optionally add a node pool as part of creating this cluster. To add node pools after the cluster is created, see Creating an OKE Worker Node Pool.
If you select the Add Node Pool button, enter the following information in the Add Node Pool section:
- Name: The name of the new node pool. Avoid using confidential information.
- Compartment: The compartment in which to create the new node pool.
- Node Count: Enter the number of nodes you want in this node pool. The default is 0. The maximum number is 128 per cluster, which can be distributed across multiple node pools.
- See Creating an OKE Worker Node Pool for information about Network Security Groups, Placement Configuration, Source Image, Shape, and Pod Communication.
- Review your entries.
If you created a node pool, you can edit or delete the node pool during this review.
- Select Submit.
The details page for the cluster is displayed. Scroll to the Resources section and select Work Requests to see the progress of the cluster creation. If you created a node pool, the NODEPOOL_CREATE work request might still be In Progress for a time after the cluster is Active and the CLUSTER_CREATE work request is Succeeded.
The cluster details page does not list OraclePCA tags on the Tags tab (and you cannot filter a list of clusters by the values of OraclePCA tags). To review the settings of the OraclePCA tags, use the CLI.
The cluster details page does not list the cluster control plane nodes. To view the control plane nodes, view the list of instances in the compartment where you created this cluster. Names of control plane nodes are in the following format:
oke-ID1-control-plane-ID2
- ID1 - The first 32 characters after the pca_name in the cluster OCID.
- ID2 - A unique identifier added when the cluster has more than one control plane node.
Search for the instances in the list whose names contain the ID1 string from this cluster OCID.
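The search for control plane nodes can be sketched in shell. The OCID below is made up, and the assumption that the unique string is the dot-separated field immediately after the pca_name is an illustration of the rule above, not a guaranteed OCID layout.

```shell
# Sketch: derive the ID1 prefix of control plane node names from a cluster
# OCID (fabricated example value; real OCIDs differ).
cluster_ocid="ocid1.cluster.oc1.mypca.abcdef0123456789abcdef0123456789xyz"
id1=$(printf '%s' "$cluster_ocid" | cut -d. -f5 | cut -c1-32)
echo "oke-${id1}-control-plane"
```

You can then filter the instance list for that prefix, for example with `oci compute instance list --compartment-id compartment_OCID` piped through `grep "oke-${id1}"`.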
Using the OCI CLI
To install a cluster add-on, use the cluster install-addon command after you have created the cluster. See Installing the WebLogic Kubernetes Operator Add-on.
- Get the information you need to run the command.
- The OCID of the compartment where you want to create the cluster:
oci iam compartment list
- The name of the cluster. Avoid using confidential information.
- The Kubernetes version that you want to use. Use the following command to show a list of available Kubernetes versions:
oci ce cluster-options get --cluster-option-id all
You might be able to list more Kubernetes versions by using the compute image list command and looking at the display name. In the following example, the Kubernetes version in the image is 1.29.9:
"display-name": "uln-pca-Oracle-Linux8-OKE-1.29.9-20250325.oci"
Another way to specify a version that is not listed is to use the OCID of an older cluster, instead of the keyword all, as the argument of the --cluster-option-id option to list the Kubernetes version used for that cluster:
oci ce cluster-options get --cluster-option-id cluster_OCID
If you are using Private Cloud Appliance Release 3.0.2-b1081557, the cluster-options get command is not available. Use the compute image list command to get the Kubernetes version from the image display name.
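Extracting the version from a display name can be sketched as follows. The display name matches the doc's example; the assumption that the version always sits between "-OKE-" and the next hyphen is an illustration, not a guaranteed naming contract.

```shell
# Sketch: pull the Kubernetes version out of an OKE image display name.
display_name="uln-pca-Oracle-Linux8-OKE-1.29.9-20250325.oci"
k8s_version=$(printf '%s' "$display_name" | sed -n 's/.*-OKE-\([0-9.]*\)-.*/\1/p')
echo "$k8s_version"
```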
- OCID of the virtual cloud network (VCN) in which you want to create the cluster. Specify the VCN that has the configuration of the "oke_vcn" VCN described in Creating a Flannel Overlay VCN or Creating a VCN-Native Pod Networking VCN.
- OCID of the OKE service LB subnet. Specify the subnet that has configuration like the "service-lb" subnet described in Creating a Flannel Overlay Worker Load Balancer Subnet or Creating a VCN-Native Pod Networking Worker Load Balancer Subnet. For a public cluster, follow the instructions to create the public version of the "service-lb" subnet. For a private cluster, create the private version of the "service-lb" subnet. Specify only one OKE Service LB subnet.
- OCID of the Kubernetes API endpoint subnet. Specify the subnet that has configuration like the "control-plane-endpoint" subnet described in Creating a Flannel Overlay Control Plane Load Balancer Subnet or Creating a VCN-Native Pod Networking Control Plane Load Balancer Subnet. For a public cluster, follow the instructions to create the public version of the "control-plane-endpoint" subnet. For a private cluster, create the private version of the "control-plane-endpoint" subnet.
- OKE service CIDR block. (Optional) The default value is 10.96.0.0/16.
- Pods CIDR block. (Optional) The default value is 10.244.0.0/16.
- (Optional) The OCID of the Network Security Group to apply to the cluster endpoint. Do not specify more than one NSG. If you specify an NSG, use the following syntax:
--endpoint-nsg-ids '["ocid1.networksecuritygroup.unique_ID"]'
- (Optional) Your public SSH key in RSA format. You cannot add or update an SSH key after the cluster is created.
- The network type. (Optional) Specify either OCI_VCN_IP_NATIVE or FLANNEL_OVERLAY for the value of the cniType parameter in the argument for the --cluster-pod-network-options option. See the descriptions of Flannel Overlay and VCN-Native Pod Networking in the Compute Web UI procedure. If you do not specify the --cluster-pod-network-options option, FLANNEL_OVERLAY is used.
--cluster-pod-network-options '[{"cniType": "OCI_VCN_IP_NATIVE"}]'
Note:
If you specify OCI_VCN_IP_NATIVE, then the VCN you specify must have a subnet named "pod". See Creating VCN-Native Pod Networking Resources.
- (Optional) Add defined or free-form tags for the cluster resource by using the --defined-tags or --freeform-tags options.
Note:
Do not specify values for the OraclePCA-OKE.cluster_id defined tag or for the ClusterResourceIdentifier free-form tag. These tag values are system-generated and only applied to nodes (instances), not to the cluster resource.
Define an argument for the --defined-tags option to provide the following information for control plane nodes. Specify OraclePCA as the tag namespace.
Important:
If you are using Private Cloud Appliance Release 3.0.2-b1081557, these defined tags are not recognized. You must use free-form tags to specify these values as described in the workaround in Create Cluster Does Not Support Extension Parameters. In Private Cloud Appliance Release 3.0.2-b1185392 and later, those free-form tags are deprecated; use the defined tags described below for the SSH key, number of control plane nodes, node shape, and node shape configuration.
Note:
None of these values (SSH key, number of nodes, node shape, or node shape configuration) can be set or changed after the cluster is created. If you set these tags when you update the cluster, the new values are ignored.
- Your public SSH key.
Specify sshkey for the tag key, and paste your public SSH key as the value.
Important:
You cannot add an SSH key after the cluster is created.
- Number of nodes.
By default, the number of nodes in the control plane is 3. You can specify 1, 3, or 5 nodes. To specify the number of control plane nodes, specify cpNodeCount for the tag key, and specify 1, 3, or 5 as the value.
- Node shape.
For Private Cloud Appliance X10 systems, the shape of the control plane nodes is VM.PCAStandard.E5.Flex and you cannot change it. For all other Private Cloud Appliance systems, the default shape is VM.PCAStandard1.1, and you can specify a different shape.
To use a different shape, specify cpNodeShape for the tag key, and enter the name of the shape as the value. Use the following command to list the available shapes and their characteristics:
$ oci compute shape list --compartment-id compartment_OCID
- Node shape configuration.
If you specify a shape that is not a flexible shape, do not specify a shape configuration. The number of OCPUs and amount of memory are set to the values shown for this shape in "Standard Shapes" in Compute Shapes in the Oracle Private Cloud Appliance Concepts Guide.
If you specify a flexible shape, you can change the default shape configuration.
To provide shape configuration information, specify cpNodeShapeConfig for the tag key. You must specify the number of OCPUs (ocpus) you want. You can optionally specify the total amount of memory you want (memoryInGBs). The default value for gigabytes of memory is 16 times the number you specify for OCPUs.
Note:
If the cluster will have 1-10 worker nodes, specify at least 16 GB memory. If the cluster will have 11-128 worker nodes, specify at least 2 OCPUs and 32 GB memory. Note that you cannot change the number of OCPUs or amount of memory when you update the cluster.
Specify defined tags either inline or in a file in JSON format, such as the following example file:
{
  "OraclePCA": {
    "sshkey": "ssh-rsa remainder_of_key_text",
    "cpNodeCount": 1,
    "cpNodeShape": "VM.PCAStandard1.Flex",
    "cpNodeShapeConfig": {
      "ocpus": 2,
      "memoryInGBs": 48
    }
  }
}
Use the following syntax to specify a file of tags. Specify the full path to the .json file unless the file is in the same directory where you are running the command.
--defined-tags file://cluster_tags.json
- (Optional) You can use the --service-lb-defined-tags or --service-lb-freeform-tags options to specify tags to be applied to all load balancers created by Kubernetes services. Ensure that the applicable dynamic group includes the use tag-namespaces policy. See Exposing Containerized Applications.
- Run the create cluster command.
If the --endpoint-subnet-id that you specify is a public subnet, then a public endpoint is created, and the --endpoint-public-ip-enabled option must be set to true.
If the --endpoint-subnet-id that you specify is a private subnet, then a private endpoint is created, and the --endpoint-public-ip-enabled option must be set to false.
Example:
$ oci ce cluster create \
--compartment-id ocid1.compartment.unique_ID \
--kubernetes-version version \
--name "Native Cluster" \
--vcn-id ocid1.vcn.unique_ID \
--cluster-pod-network-options '{"cniType":"OCI_VCN_IP_NATIVE"}' \
--endpoint-subnet-id control-plane-endpoint_subnet_OCID \
--endpoint-public-ip-enabled false \
--service-lb-subnet-ids '["service-lb_subnet_OCID"]' \
--defined-tags '{"OraclePCA":{"sshkey":"ssh-rsa remainder_of_key_text"}}'
The output from this cluster create command is the same as the output from the cluster get command.
Use the work-request get command to check the status of the create operation. The work request OCID is in created-by-work-request-id in the metadata section of the cluster create output.
$ oci ce work-request get --work-request-id workrequest_OCID
To identify the control plane nodes for this cluster, list instances in the compartment where you created the cluster. Names of control plane nodes are in the following format:
oke-ID1-control-plane-ID2
- ID1 - The first 32 characters after the pca_name in the cluster OCID.
- ID2 - A unique identifier added when the cluster has more than one control plane node.
Search for the instances in the list whose names contain the ID1 string from this cluster OCID.
Cluster Next Steps
- Create a Kubernetes configuration file for the cluster. See Creating a Kubernetes Configuration File.
- Deploy a Kubernetes Dashboard to manage the cluster and to manage and troubleshoot applications running in the cluster. On the https://kubernetes.io/ site, see Deploy and Access the Kubernetes Dashboard.
- Create a node pool for the cluster. See Creating an OKE Worker Node Pool.
- Create a backup for the workload cluster. For example, see Backing up an etcd cluster and Restoring an etcd cluster in Operating etcd clusters for Kubernetes. Use the etcd backup to recover OKE clusters under disaster scenarios such as losing all control plane nodes. An etcd backup contains all OKE states and critical information. An etcd backup does not back up applications or other content on cluster nodes.