Creating VCN-Native Pod Networking Resources

VCN-Native Pod Networking enables you to manage pod traffic directly because pod IP addresses come from the VCN CIDR block rather than from a network overlay such as Flannel Overlay. VCN-Native Pod Networking offers more flexibility and control over traffic and lets you apply different security rules.

VCN-Native Pod Networking connects nodes in a Kubernetes cluster to pod subnets in the OKE VCN. Pod IP addresses within the OKE VCN are directly routable from other VCNs that are connected (peered) to the OKE VCN, and from on-premises networks.

When you create a cluster that uses VCN-Native Pod Networking, the VCN that you specify must have a subnet named "pod"; the system looks up the subnet by that name. The pod subnet has security rules that enable pods on control plane nodes to communicate directly with pods on worker nodes and with other pods and resources. See Creating a VCN-Native Pod Networking Pod Subnet. If you select VCN-Native Pod Networking and the VCN does not have a subnet named "pod", cluster creation fails.

When you create a node pool for a cluster that uses VCN-Native Pod Networking, the pod subnet that you specify (Pod Communication > Pod Communication Subnet or --pod-subnet-ids) serves as the pod subnet for pods on worker nodes. That pod subnet should have security rules that enable pods on worker nodes to communicate directly with other pods on worker nodes and on control plane nodes. You can optionally specify the worker node subnet as the pod subnet. The CIDR of the pod subnet that you specify must be larger than /25, and the pod subnet should be larger than the worker node subnet.
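As a quick sanity check, the two sizing rules can be verified with Python's standard ipaddress module. The CIDRs below are the example values used later in this topic, not requirements; note that a /20 is "larger than /25" because a smaller prefix length means more addresses.

```python
import ipaddress

# Example values from the CIDR table later in this topic.
pod = ipaddress.ip_network("172.31.16.0/20")    # pod subnet CIDR
worker = ipaddress.ip_network("172.31.8.0/21")  # worker subnet CIDR

# "Larger than /25" means a numerically smaller prefix length,
# which yields more addresses.
assert pod.prefixlen < 25

# The pod subnet should be larger than the worker node subnet.
assert pod.num_addresses > worker.num_addresses
print(pod.num_addresses, worker.num_addresses)  # 4096 2048
```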

In general, when you use VCN-Native Pod Networking, security rules can enable pods to communicate directly with other pods on the same node or on other nodes in the cluster, with other clusters, with other services, and with the internet.

Node Shapes and Number of Pods

When using the OCI VCN-Native Pod Networking CNI plugin, each pod needs a private IP address. By default, 31 IP addresses are assigned to a VNIC for use by pods running on the worker node.

You can specify the maximum number of pods that you want to run on a worker node. The default maximum is 31 pods per worker node. You can specify up to 110.

A node shape, and therefore a worker node, has a minimum of two VNICs. The first VNIC is connected to the worker subnet. The second VNIC is connected to the pod subnet. Therefore a worker node can support at least 31 pods. If you want more than 31 pods on a single worker node, specify a shape for the node pool that supports three or more VNICs: one VNIC to connect to the worker node subnet, and at least two VNICs to connect to the pod subnet.

A VM.PCAStandard1.4 node shape can have a maximum of four VNICs, so a worker node of that shape can support up to 93 pods. A VM.PCAStandard.E5.Flex node shape with five OCPUs can have a maximum of five VNICs, so a worker node of that shape can support up to 110 pods. A node cannot have more than 110 pods (see OKE Service Limits).

The following formula summarizes the maximum number of pods supported per node:

MIN( (Number of VNICs - 1) * 31, 110 )
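This formula can be exercised with a short Python sketch (max_pods is a hypothetical helper used only for illustration, not part of any Oracle tooling):

```python
# Maximum pods per worker node with the OCI VCN-Native Pod Networking
# CNI plugin: one VNIC attaches to the worker subnet, each remaining
# VNIC serves up to 31 pod IP addresses, and a node is capped at 110 pods.
def max_pods(num_vnics: int) -> int:
    return min((num_vnics - 1) * 31, 110)

print(max_pods(2))  # 31  (minimum two-VNIC configuration)
print(max_pods(4))  # 93  (for example, VM.PCAStandard1.4)
print(max_pods(5))  # 110 (capped at the 110-pod limit)
```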

For information about all node shapes, see "Compute Shapes" in the Compute Instance Concepts chapter in the Oracle Private Cloud Appliance Concepts Guide.

VCN-Native Pod Networking Resources

The resource definitions in the following sections create a working example set of network resources for workload clusters that use VCN-Native Pod Networking. Use this configuration as a guide when you create these resources. You can change the values of properties such as CIDR blocks and IP addresses. Do not change the values of properties such as the network protocol, the stateful setting, or the private/public setting.

See Workload Cluster Network Ports for VCN-Native Pod Networking for specific ports that must be open for specific purposes.

Create the following network resources. To use Terraform, see Example Terraform Scripts for VCN-Native Pod Networking Resources.

Note:

Create all of these network resources in the same compartment on the appliance.

Workload Cluster Network CIDR Ranges for VCN-Native Pod Networking

Throughout this documentation, variables are used to represent CIDR ranges for instances in different subnets. The following table lists the CIDR variables and example values for use with VCN-Native Pod Networking.

Note:

These are examples only. The CIDR ranges you use depend on the number of clusters you have, the number of nodes in each cluster, the shape you select for the worker nodes, and the type of networking you are using.

For VCN-Native Pod Networking, every pod gets an IP address assigned from the IP address pool that is defined in the pod subnet CIDR. The shape you specify for the node pool determines the maximum number of VNICs, and therefore the maximum number of pods, for each worker node, as described in Node Shapes and Number of Pods.
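As a rough sizing illustration only (pod_subnet_prefix is a hypothetical helper, not an Oracle formula, and it ignores addresses that the VCN reserves in each subnet), the smallest pod subnet prefix for a given node count can be estimated as follows:

```python
import math

# Worst case: every worker node consumes 31 pod IP addresses per
# attached pod VNIC; here we assume one pod VNIC (31 pods) per node.
def pod_subnet_prefix(nodes: int, pods_per_node: int = 31) -> int:
    needed = nodes * pods_per_node
    # Smallest prefix length whose address block holds `needed` addresses.
    return 32 - math.ceil(math.log2(needed))

print(pod_subnet_prefix(128))  # 20, matching the /20 pod_cidr example
```

With 128 worker nodes (the maximum noted for the example worker subnet), 128 * 31 = 3968 addresses are needed, which fits in a /20 such as the example pod_cidr.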

The primary difference between the IP address requirements of VCN-Native Pod Networking and Flannel Overlay networking is that VCN-Native Pod Networking requires more IP addresses to be available. The table in Workload Cluster Network CIDR Ranges for Flannel Overlay Networking shows smaller CIDR ranges than the following table for VCN-Native Pod Networking.

Note:

The pod subnet CIDR must be larger than /25. The pod subnet should be larger than the worker node subnet.

Table 4-11 Example CIDR Values to Use with VCN-Native Pod Networking

Variable Name Description Example Value

vcn_cidr

VCN CIDR range

This is a small VCN with 8192 IP addresses for creating OKE infrastructure.

172.31.0.0/19

worker_cidr

Worker subnet CIDR

172.31.8.0/21

workerlb_cidr

Worker load balancer subnet CIDR

172.31.0.0/23

kmi_cidr

OKE control plane subnet CIDR

172.31.4.0/22

kmilb_cidr

OKE control plane load balancer subnet CIDR

172.31.2.0/23

pod_cidr

Pod subnet CIDR

172.31.16.0/20

kube_client_cidr

CIDR for clients that are allowed to contact the Kubernetes API server

10.0.0.0/8

public_ip_cidr

Public IP CIDR configured in the Private Cloud Appliance Service Enclave

10.0.0.0/8

kube_internal_cidr

CIDR used by the Kubernetes infrastructure to allocate IP addresses for various internal services and components

253.255.0.0/16

The IP Subnet Calculator on Calculator.net is one tool for finding all available networks for a given IP address and prefix length.
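The example values in Table 4-11 can also be cross-checked with Python's standard ipaddress module. This sketch only validates the example layout above: each subnet carved from the VCN must fall inside the VCN CIDR, and no two of them can overlap (kube_client_cidr, public_ip_cidr, and kube_internal_cidr are outside the VCN and are not included):

```python
import ipaddress

vcn = ipaddress.ip_network("172.31.0.0/19")
subnets = {
    "workerlb": "172.31.0.0/23",
    "kmilb": "172.31.2.0/23",
    "kmi": "172.31.4.0/22",
    "worker": "172.31.8.0/21",
    "pod": "172.31.16.0/20",
}
nets = {name: ipaddress.ip_network(cidr) for name, cidr in subnets.items()}

# Every subnet must fall inside the VCN CIDR block.
assert all(net.subnet_of(vcn) for net in nets.values())

# No two subnets can overlap.
names = list(nets)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        assert not nets[a].overlaps(nets[b]), (a, b)

print("all example subnets fit inside", vcn)
```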

Workload Cluster Network Ports for VCN-Native Pod Networking

The following table lists ports that are used by workload clusters when you use VCN-Native Pod Networking. These ports must be available to configure workload cluster networking. You might need to open additional ports for other purposes.

All protocols are TCP. All port states are Stateful. Port 6443 is the port used for the Kubernetes API and is also known as kubernetes_api_port in this guide.

See also the tables in Port Matrix in the Oracle Private Cloud Appliance Security Guide.

If you are using a separate administration network, see OKE Cluster Management with Administration Network.

Table 4-12 Ports that Must Be Available for Use by Workload Clusters for VCN-Native Pod Networking

Source IP Address Destination IP Address Port Description

bastion host: vcn_cidr

Worker nodes subnet: worker_cidr

22

Outbound connections from the bastion host to the worker CIDR.

bastion host: vcn_cidr

Control plane subnet: kmi_cidr

22

Outbound connections from the bastion host to the control plane nodes.

Worker nodes subnet: worker_cidr

yum repository

80

Outbound connections from the worker CIDR to external applications such as a yum repository.

Worker nodes subnet: worker_cidr

Control plane subnet: kmi_cidr

6443

Outbound connections from the worker CIDR to the Kubernetes API. This is necessary to allow nodes to join through either a public IP address on one of the nodes or the load balancer public IP address.

Port 6443 is called the kubernetes_api_port.

Worker nodes subnet: worker_cidr

Control plane load balancer

6443

Inbound connections from the worker CIDR to the Kubernetes API.

CIDR for clients: kube_client_cidr

Control plane load balancer

6443

Inbound connections from clients to the Kubernetes API server.

Worker nodes subnet: worker_cidr

Control plane subnet: kmi_cidr

6443

Private outbound connections from the worker CIDR to kubeapi on the control plane subnet.

kube_client_cidr

Worker nodes subnet: worker_cidr

30000-32767

Inbound traffic for applications from Kubernetes clients.

kmi_cidr

worker_cidr, pod_cidr

10250

Kubernetes API endpoint to worker node communication.

kmi_cidr

worker_cidr, pod_cidr

10256

Allow load balancer or network load balancer to communicate with kube-proxy on worker nodes or pod subnet.

pod_cidr

kmilb_cidr

12250

Pod to Kubernetes API endpoint communication.

kmi_cidr

kmi_cidr

2379-2381

Communication between the etcd server and metrics services. Ports 2379 and 2380 are used by Kubernetes to communicate with the etcd server. Port 2381 is used by Kubernetes to collect metrics from etcd.

kmi_cidr

kmi_cidr

10257-10260

Inbound connection for Kubernetes components.

Example Terraform Scripts for VCN-Native Pod Networking Resources

The following Terraform scripts create the network resources that are required by OKE when you are using VCN-Native Pod Networking. Other sections in this chapter show other ways to define these same network resources.

Most of the values shown in these scripts, such as resource display names and CIDRs, are examples. Some ports must be specified as shown (see Workload Cluster Network Ports for VCN-Native Pod Networking), the OKE pod subnet must be named pod, and the OKE control plane subnet must be named control-plane. See Workload Cluster Network CIDR Ranges for VCN-Native Pod Networking for comments about CIDR values.

variables.tf

This file declares several variables that are used to configure OKE network resources when you are using VCN-Native Pod Networking. Many of these variables are not assigned values in this file. One port and six CIDRs are assigned default values. The kubernetes_api_port, port 6443, is the port used to access the Kubernetes API. See also Workload Cluster Network Ports for VCN-Native Pod Networking. The six CIDRs that are defined in this file are for the OKE VCN, pod subnet, worker subnet, worker load balancer subnet, control plane subnet, and control plane load balancer subnet.

variable "oci_config_file_profile" {
  type    = string
  default = "DEFAULT"
}

variable "tenancy_ocid" {
  description = "tenancy OCID"
  type        = string
  nullable    = false
}

variable "compartment_id" {
  description = "compartment OCID"
  type        = string
  nullable    = false
}

variable "vcn_name" {
  description = "VCN name"
  nullable    = false
}

variable "kube_client_cidr" {
  description = "CIDR of Kubernetes API clients"
  type        = string
  nullable    = false
}

variable "kubernetes_api_port" {
  description = "Port used for Kubernetes API"
  type        = string
  default     = "6443"
}

# IP network addressing
variable "vcn_cidr" {
  default = "172.31.0.0/19"
}

# Subnet for KMIs where kube-apiserver and other control
# plane applications run, max 9 nodes
variable "kmi_cidr" {
  description = "Kubernetes control plane subnet CIDR"
  default     = "172.31.4.0/22"
}

# Subnet for KMI load balancer
variable "kmilb_cidr" {
  description = "Kubernetes control plane LB subnet CIDR"
  default     = "172.31.2.0/23"
}

# Subnet CIDR configured for VCN public IP for NAT in Network
variable "public_ip_cidr" {
  description = "Public IP CIDR configured in the Service Enclave"
  type        = string
  nullable    = false
}

# Subnet for worker nodes, max 128 nodes
variable "worker_cidr" {
  description = "Kubernetes worker subnet CIDR"
  default     = "172.31.8.0/21"
}

# Subnet for worker load balancer (for use by CCM)
variable "workerlb_cidr" {
  description = "Kubernetes worker LB subnet CIDR"
  default     = "172.31.0.0/23"
}

# Subnet for pod communication
variable "pod_cidr" {
  description = "Kubernetes pod communication subnet CIDR"
  default     = "172.31.16.0/20"
}

# Flag to Enable private endpoint
variable "enable_private_endpoint" {
  description = "Flag to create private control plane endpoint/service-lb"
  type = bool
  default = false
  nullable = false
}

terraform.tfvars

This file assigns values to some of the variables that were created in variables.tf.

# name of the profile to use from $HOME/.oci/config
oci_config_file_profile = "DEFAULT"

# tenancy ocid from the above profile
tenancy_ocid = "tenancy_OCID"

# compartment in which to build the OKE cluster
compartment_id = "compartment_OCID"

# display-name for the OKE VCN
vcn_name = "oketest"

provider.tf

This file is required to use the OCI provider. It initializes the OCI provider using the specified profile from the OCI configuration file.

provider "oci" {
  config_file_profile = var.oci_config_file_profile
  tenancy_ocid        = var.tenancy_ocid
}

main.tf

This file specifies the provider to use (oracle/oci), defines several security list rules, and initializes required local variables.

The version of the OCI provider that you use must be at least v4.50.0 but no greater than v6.36.0.

terraform {
  required_providers {
    oci = {
      source  = "oracle/oci"
      version = ">= 4.50.0, <= 6.36.0"
      # If necessary, you can pin a specific version here
      # version = "4.71.0"
    }
  }
  required_version = ">= 1.1"
}

locals {
  kube_internal_cidr      = "253.255.0.0/16"
  worker_lb_ingress_rules = [
    {
      source   = var.kube_client_cidr
      port_min = 80
      port_max = 80
    },
    {
      source   = var.kube_client_cidr
      port_min = 443
      port_max = 443
    }
  ]
  worker_ingress_rules = [
    {
      source   = var.kube_client_cidr
      port_min = 30000
      port_max = 32767
    },
    {
      source   = var.kmi_cidr
      port_min = 22
      port_max = 22
    },
    {
      source   = var.worker_cidr
      port_min = 22
      port_max = 22
    },
    {
      source   = var.worker_cidr
      port_min = 10250
      port_max = 10250
    },
    {
      source   = var.worker_cidr
      port_min = 10256
      port_max = 10256
    },
    {
      source   = var.worker_cidr
      port_min = 30000
      port_max = 32767
    },
    {
      source   = var.workerlb_cidr
      port_min = 10256
      port_max = 10256
    },
    {
      source   = var.workerlb_cidr
      port_min = 30000
      port_max = 32767
    },
    {
      source   = var.kmi_cidr
      port_min = 10250
      port_max = 10250
    },
    {
      source   = var.kmi_cidr
      port_min = 10256
      port_max = 10256
    },
    {
      source   = var.pod_cidr
      port_min = 30000
      port_max = 32767
    },
  ]
  kmi_lb_ingress_rules = [
    {
      source   = local.kube_internal_cidr
      port_min = var.kubernetes_api_port
      port_max = var.kubernetes_api_port
    },
    {
      source   = var.kube_client_cidr
      port_min = var.kubernetes_api_port
      port_max = var.kubernetes_api_port
    },
    {
      source   = var.kmi_cidr
      port_min = var.kubernetes_api_port
      port_max = var.kubernetes_api_port
    },
    {
      source   = var.worker_cidr
      port_min = var.kubernetes_api_port
      port_max = var.kubernetes_api_port
    },
    {
      source   = var.worker_cidr
      port_min = 12250
      port_max = 12250
    },
    {
      source   = var.pod_cidr
      port_min = 12250
      port_max = 12250
    },
  ]
  kmi_ingress_rules = [
    {
      source   = var.kube_client_cidr
      port_min = var.kubernetes_api_port
      port_max = var.kubernetes_api_port
    },
    {
      source   = var.kmilb_cidr
      port_min = var.kubernetes_api_port
      port_max = var.kubernetes_api_port
    },
    {
      source   = var.kmilb_cidr
      port_min = 12250
      port_max = 12250
    },
    {
      source   = var.worker_cidr
      port_min = var.kubernetes_api_port
      port_max = var.kubernetes_api_port
    },
    {
      source   = var.worker_cidr
      port_min = 12250
      port_max = 12250
    },
    {
      source   = var.kmi_cidr
      port_min = var.kubernetes_api_port
      port_max = var.kubernetes_api_port
    },
    {
      source   = var.kmi_cidr
      port_min = 2379
      port_max = 2381
    },
    {
      source   = var.kmi_cidr
      port_min = 10250
      port_max = 10250
    },
    {
      source   = var.kmi_cidr
      port_min = 10257
      port_max = 10260
    },
    {
      source   = var.pod_cidr
      port_min = var.kubernetes_api_port
      port_max = var.kubernetes_api_port
    },
    {
      source   = var.pod_cidr
      port_min = 12250
      port_max = 12250
    },
  ]
  pod_ingress_rules = [
    {
      source   = var.vcn_cidr
      port_min = 22
      port_max = 22
    },
    {
      source   = var.workerlb_cidr
      port_min = 10256
      port_max = 10256
    },
    {
      source   = var.worker_cidr
      port_min = 10250
      port_max = 10250
    },
    {
      source   = var.worker_cidr
      port_min = 10256
      port_max = 10256
    },
    {
      source   = var.worker_cidr
      port_min = 80
      port_max = 80
    },
  ]
}

oke_vcn.tf

This file defines a VCN, NAT gateway, internet gateway, private route table, and public route table. The private route table is the default route table for the VCN.

resource "oci_core_vcn" "oke_vcn" {
  cidr_block     = var.vcn_cidr
  dns_label      = var.vcn_name
  compartment_id = var.compartment_id
  display_name   = "${var.vcn_name}-vcn"
}

resource "oci_core_nat_gateway" "vcn_ngs" {
  compartment_id = var.compartment_id
  vcn_id         = oci_core_vcn.oke_vcn.id
  count          = var.enable_private_endpoint ? 0:1

  display_name = "VCN nat g6s"
}

resource "oci_core_internet_gateway" "vcn_igs" {
  compartment_id = var.compartment_id
  vcn_id         = oci_core_vcn.oke_vcn.id
  count          = var.enable_private_endpoint ? 0:1

  display_name = "VCN i6t g6s"
  enabled      = true
}

resource "oci_core_default_route_table" "default_private" {
  manage_default_resource_id = oci_core_vcn.oke_vcn.default_route_table_id
  display_name   = "Default - private"
  count          = var.enable_private_endpoint ? 1:0
}

resource "oci_core_default_route_table" "private" {
  count          = var.enable_private_endpoint ? 0:1
  manage_default_resource_id = oci_core_vcn.oke_vcn.default_route_table_id
  display_name   = "Default - private"

  route_rules {
    destination       = "0.0.0.0/0"
    destination_type  = "CIDR_BLOCK"
    network_entity_id = oci_core_nat_gateway.vcn_ngs[0].id
  }
}

resource "oci_core_route_table" "public" {
  count          = var.enable_private_endpoint ? 0:1
  compartment_id = var.compartment_id
  vcn_id         = oci_core_vcn.oke_vcn.id

  display_name = "public"
  route_rules {
    destination       = "0.0.0.0/0"
    destination_type  = "CIDR_BLOCK"
    network_entity_id = oci_core_internet_gateway.vcn_igs[0].id
  }
}

oke_pod_seclist.tf

This file defines the security list for the pod subnet. The rules for this security list are defined in the locals block in main.tf.

resource "oci_core_security_list" "pod" {
  compartment_id = var.compartment_id
  vcn_id         = oci_core_vcn.oke_vcn.id

  display_name = "${var.vcn_name}-pod"

  dynamic "ingress_security_rules" {
    iterator = port
    for_each = local.pod_ingress_rules

    content {
      source      = port.value.source
      source_type = "CIDR_BLOCK"
      protocol    = "6"
      tcp_options {
        min = port.value.port_min
        max = port.value.port_max
      }
    }
  }

  dynamic "ingress_security_rules" {
    iterator = icmp_type
    for_each = [0, 8]

    content {
      # ICMP echo reply (type 0) and echo request (type 8) from the control plane subnet
      source      = var.kmi_cidr
      source_type = "CIDR_BLOCK"
      protocol    = "1"
      icmp_options {
        type = icmp_type.value
      }
    }
  }

  dynamic "ingress_security_rules" {
    for_each = var.pod_cidr != null ? [var.pod_cidr] : []

    content {
      source      = ingress_security_rules.value
      source_type = "CIDR_BLOCK"
      protocol    = "all"
    }
  }
}

oke_pod_subnet.tf

This file defines the pod subnet.

Important:

The name of the pod subnet must be exactly pod.

resource "oci_core_subnet" "pod" {
  cidr_block     = var.pod_cidr
  compartment_id = var.compartment_id
  vcn_id         = oci_core_vcn.oke_vcn.id

  display_name               = "pod"
  dns_label                  = "pod"
  prohibit_public_ip_on_vnic = true

  security_list_ids = [
    oci_core_default_security_list.oke_vcn.id,
    oci_core_security_list.pod.id
  ]
}

oke_worker_seclist.tf

This file defines the security lists for both the worker subnet and the worker load balancer subnet. The rules for these security lists are defined in the locals block in main.tf.

resource "oci_core_security_list" "workerlb" {
  display_name   = "${var.vcn_name}-workerlb"
  compartment_id = var.compartment_id
  vcn_id         = oci_core_vcn.oke_vcn.id

  dynamic "ingress_security_rules" {
    iterator = port
    for_each = local.worker_lb_ingress_rules

    content {
      source      = port.value.source
      source_type = "CIDR_BLOCK"
      protocol    = "6"
      tcp_options {
        min = port.value.port_min
        max = port.value.port_max
      }
    }
  }
}

resource "oci_core_security_list" "worker" {
  display_name   = "${var.vcn_name}-worker"
  compartment_id = var.compartment_id
  vcn_id         = oci_core_vcn.oke_vcn.id

  dynamic "ingress_security_rules" {
    iterator = port
    for_each = local.worker_ingress_rules

    content {
      source      = port.value.source
      source_type = "CIDR_BLOCK"
      protocol    = "6"
      tcp_options {
        min = port.value.port_min
        max = port.value.port_max
      }
    }
  }

  dynamic "ingress_security_rules" {
    iterator = icmp_type
    for_each = [0, 8]

    content {
      # ICMP echo reply (type 0) and echo request (type 8) from the control plane subnet
      source      = var.kmi_cidr
      source_type = "CIDR_BLOCK"
      protocol    = "1"
      icmp_options {
        type = icmp_type.value
      }
    }
  }
}

oke_worker_subnet.tf

This file defines the worker and worker load balancer subnets. The worker load balancer subnet is named service-lb.

resource "oci_core_subnet" "worker" {
  cidr_block     = var.worker_cidr
  compartment_id = var.compartment_id
  vcn_id         = oci_core_vcn.oke_vcn.id

  display_name               = "worker"
  dns_label                  = "worker"
  prohibit_public_ip_on_vnic = true

  security_list_ids = [
    oci_core_default_security_list.oke_vcn.id,
    oci_core_security_list.worker.id
  ]
}

resource "oci_core_subnet" "worker_lb" {
  cidr_block     = var.workerlb_cidr
  compartment_id = var.compartment_id
  vcn_id         = oci_core_vcn.oke_vcn.id

  display_name               = "service-lb"
  dns_label                  = "servicelb"
  prohibit_public_ip_on_vnic = var.enable_private_endpoint
  route_table_id             = var.enable_private_endpoint==false ? oci_core_route_table.public[0].id : oci_core_vcn.oke_vcn.default_route_table_id

  security_list_ids = [
    oci_core_default_security_list.oke_vcn.id,
    oci_core_security_list.workerlb.id
  ]
}

oke_kmi_seclist.tf

This file defines the security lists for the control plane and control plane load balancer subnets. This file also defines updates to make to the default security list for the VCN.

resource "oci_core_default_security_list" "oke_vcn" {
  manage_default_resource_id = oci_core_vcn.oke_vcn.default_security_list_id

  egress_security_rules {
    destination      = "0.0.0.0/0"
    destination_type = "CIDR_BLOCK"
    protocol         = "all"
  }

  dynamic "ingress_security_rules" {
    iterator = icmp_type
    for_each = [3, 8, 11]

    content {
      # ping from VCN; unreachable/TTL from anywhere
      source      = (icmp_type.value == 8 ? var.vcn_cidr : "0.0.0.0/0")
      source_type = "CIDR_BLOCK"
      protocol    = "1"
      icmp_options {
        type = icmp_type.value
      }
    }
  }
}

resource "oci_core_security_list" "kmilb" {
  compartment_id = var.compartment_id
  vcn_id         = oci_core_vcn.oke_vcn.id

  display_name = "${var.vcn_name}-kmilb"

  dynamic "ingress_security_rules" {
    iterator = port
    for_each = local.kmi_lb_ingress_rules

    content {
      source      = port.value.source
      source_type = "CIDR_BLOCK"
      protocol    = "6"
      tcp_options {
        min = port.value.port_min
        max = port.value.port_max
      }
    }
  }

  dynamic "ingress_security_rules" {
    for_each = var.enable_private_endpoint ? [1] : []
    content {
      source   = var.kmilb_cidr
      source_type = "CIDR_BLOCK"
      protocol = "6"
      tcp_options {
        min = var.kubernetes_api_port
        max = var.kubernetes_api_port
      }
    }
  }

  dynamic "ingress_security_rules" {
    for_each = var.enable_private_endpoint ? [] : [0]
    content {
      source   = var.public_ip_cidr
      source_type = "CIDR_BLOCK"
      protocol = "6"
      tcp_options {
        min = var.kubernetes_api_port
        max = var.kubernetes_api_port
      }
    }
  }

}

resource "oci_core_security_list" "kmi" {
  compartment_id = var.compartment_id
  vcn_id         = oci_core_vcn.oke_vcn.id

  display_name = "${var.vcn_name}-kmi"

  dynamic "ingress_security_rules" {
    iterator = port
    for_each = local.kmi_ingress_rules

    content {
      source      = port.value.source
      source_type = "CIDR_BLOCK"
      protocol    = "6"
      tcp_options {
        min = port.value.port_min
        max = port.value.port_max
      }
    }
  }
}

oke_kmi_subnet.tf

This file defines the control plane and control plane load balancer subnets.

Important:

The name of the kmi subnet must be exactly control-plane.

resource "oci_core_subnet" "kmi" {
  cidr_block                 = var.kmi_cidr
  compartment_id             = var.compartment_id
  display_name               = "control-plane"
  dns_label                  = "kmi"
  vcn_id                     = oci_core_vcn.oke_vcn.id
  prohibit_public_ip_on_vnic = true
  security_list_ids = [
    oci_core_default_security_list.oke_vcn.id,
    oci_core_security_list.kmi.id
  ]
}

resource "oci_core_subnet" "kmi_lb" {
  cidr_block                 = var.kmilb_cidr
  compartment_id             = var.compartment_id
  dns_label                  = "kmilb"
  vcn_id                     = oci_core_vcn.oke_vcn.id
  display_name               = "control-plane-endpoint"
  prohibit_public_ip_on_vnic = var.enable_private_endpoint
  route_table_id             = var.enable_private_endpoint==false ? oci_core_route_table.public[0].id : oci_core_default_route_table.default_private[0].id
  security_list_ids = [
    oci_core_default_security_list.oke_vcn.id,
    oci_core_security_list.kmilb.id
  ]
}

Creating a VCN-Native Pod Networking VCN

Create the following resources in the order listed:

  1. VCN

  2. Route rules

    • Public clusters:

      • Internet gateway and a route table with a route rule that references that internet gateway.

      • NAT gateway and a route table with a route rule that references that NAT gateway.

    • Private clusters:

      • Route table with no route rules.

      • (Optional) Dynamic Routing Gateway (DRG), attach the OKE VCN to that DRG, and create a route table with a route rule that references that DRG. See Private Clusters.

      • (Optional) Local Peering Gateway (LPG) and a route table with a route rule that references that LPG. See Private Clusters.

  3. Security list: modify the VCN default security list.

Resource names and CIDR blocks are example values.

VCN

To create the VCN, use the instructions in Creating a VCN in the Oracle Private Cloud Appliance User Guide. For Terraform input, see Example Terraform Scripts for VCN-Native Pod Networking Resources.

For this example, use the following input to create the VCN. The VCN covers one contiguous CIDR block. The CIDR block cannot be changed after the VCN is created.

Compute Web UI property OCI CLI property
  • Name: oketest-vcn

  • CIDR Block: vcn_cidr

  • DNS Label: oketest

    This label must be unique across all VCNs in the tenancy.

  • --display-name: oketest-vcn

  • --cidr-blocks: '["vcn_cidr"]'

  • --dns-label: oketest

    This label must be unique across all VCNs in the tenancy.

Note the OCID of the new VCN. In the examples in this guide, this VCN OCID is ocid1.vcn.oke_vcn_id.

Next Steps

  • Public internet access. For traffic on a public subnet that connects to the internet using public IP addresses, create an internet gateway and a route rule that references that internet gateway.

  • Private internet access. For traffic on a private subnet that needs to connect to the internet without exposing private IP addresses, create a NAT gateway and a route rule that references that NAT gateway.

  • VCN-only access. To restrict communication to only other resources on the same VCN, use the default route table, which has no route rules.

  • Instances in another VCN. To enable communication between the cluster and an instance running on a different VCN, create a Local Peering Gateway (LPG) and a route rule that references that LPG.

  • Data center IP address space. To enable communication between the cluster and the on-premises network IP address space, create a Dynamic Routing Gateway (DRG) and a route rule that references that DRG.

VCN Private Route Table

Edit the default route table that was created when you created the VCN. Change the name of the route table to vcn_private. This route table has no route rules; do not add any.

NAT Private Route Table

Create a NAT gateway and a route table with a route rule that references the NAT gateway.

NAT Gateway

To create the NAT gateway, use the instructions in Enabling Public Connections through a NAT Gateway in the Oracle Private Cloud Appliance User Guide. For Terraform input, see Example Terraform Scripts for VCN-Native Pod Networking Resources.

Note the name and OCID of the NAT gateway for assignment to the private route rule.

Private Route Rule

To create a route table, use the instructions in "Creating a Route Table" in Working with Route Tables in the Oracle Private Cloud Appliance User Guide. For Terraform input, see Example Terraform Scripts for VCN-Native Pod Networking Resources.

For this example, use the following input to create the route table with a private route rule that references the NAT gateway that was created in the preceding step.

Compute Web UI property OCI CLI property
  • Name: nat_private

Route rule

  • Target Type: NAT Gateway

  • NAT Gateway: Name of the NAT gateway that was created in the preceding step

  • CIDR Block: 0.0.0.0/0

  • Description: NAT private route rule

  • --display-name: nat_private

--route-rules

  • networkEntityId: OCID of the NAT gateway that was created in the preceding step

  • destinationType: CIDR_BLOCK

  • destination: 0.0.0.0/0

  • description: NAT private route rule

Note the name and OCID of this route table for assignment to private subnets.
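The --route-rules parameter takes its value as a JSON array of rule objects. As an illustration only, the NAT private route rule above can be assembled with Python's json module; the gateway OCID below is a placeholder for the OCID noted in the preceding step:

```python
import json

# Build the JSON array that --route-rules expects. The networkEntityId
# value is a placeholder; substitute the OCID of your NAT gateway.
rule = [{
    "networkEntityId": "ocid1.natgateway.example",  # placeholder OCID
    "destinationType": "CIDR_BLOCK",
    "destination": "0.0.0.0/0",
    "description": "NAT private route rule",
}]
print(json.dumps(rule, indent=2))
```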

Local Peering Gateway

Create a Local Peering Gateway (LPG) and a route table with a route rule that references the LPG.

Local Peering Gateway

To create the LPG, use the instructions in "Connecting VCNs through a Local Peering Gateway" in the Networking chapter of the Oracle Private Cloud Appliance User Guide.

Note the name and OCID of the LPG for assignment to the private route rule.

Private Route Rule

To create a route table, use the instructions in "Creating a Route Table" in Working with Route Tables in the Oracle Private Cloud Appliance User Guide.

For this example, use the following input to create the route table with a private route rule that references the LPG that was created in the preceding step.

Compute Web UI property OCI CLI property
  • Name: lpg_rt

Route rule

  • Target Type: Local Peering Gateway

  • Local Peering Gateway: Name of the LPG that was created in the preceding step

  • CIDR Block: CIDR_for_the_second_VCN

  • Description: LPG private route rule

  • --vcn-id: ocid1.vcn.oke_vcn_id

  • --display-name: lpg_rt

--route-rules

  • networkEntityId: OCID of the LPG that was created in the preceding step

  • destinationType: CIDR_BLOCK

  • destination: CIDR_for_the_second_VCN

  • description: LPG private route rule

Note the name and OCID of this route table for assignment to the "control-plane-endpoint" subnet (Creating a VCN-Native Pod Networking Control Plane Load Balancer Subnet).

Add a corresponding route rule on the second VCN (the peered VCN), specifying the OKE VCN CIDR as the destination and that VCN's own LPG as the target.
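To make the reciprocal rule concrete, the following sketch builds the route rule for the second (peered) VCN. The LPG OCID and the OKE VCN CIDR shown are placeholders (assumptions); note that the rule on each side of the peering targets that side's own LPG.

```shell
# Placeholder values (assumptions) -- replace with your tenancy's OCIDs and CIDR.
PEER_LPG_ID="ocid1.localpeeringgateway.unique_ID"   # LPG in the second VCN
OKE_VCN_CIDR="10.0.0.0/16"                          # example OKE VCN CIDR

# On the peered VCN, route traffic destined for the OKE VCN CIDR
# through the peered VCN's LPG.
cat > lpg-peer-route-rules.json <<EOF
[
  {
    "networkEntityId": "${PEER_LPG_ID}",
    "destinationType": "CIDR_BLOCK",
    "destination": "${OKE_VCN_CIDR}",
    "description": "LPG private route rule"
  }
]
EOF

# Sketch only (requires a configured OCI CLI):
# oci network route-table create \
#   --compartment-id ocid1.compartment.unique_ID \
#   --vcn-id ocid1.vcn.second_vcn_id \
#   --display-name lpg_rt \
#   --route-rules file://lpg-peer-route-rules.json
```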

Dynamic Routing Gateway

Create a Dynamic Routing Gateway (DRG) and a route table with a route rule that references the DRG.

Dynamic Routing Gateway

To create the DRG and attach the OKE VCN to that DRG, use the instructions in "Connecting to the On-Premises Network through a Dynamic Routing Gateway" in the Networking chapter of the Oracle Private Cloud Appliance User Guide. Create the DRG in the OKE VCN compartment, and then attach the OKE VCN to that DRG.

Note the name and OCID of the DRG for assignment to the private route rule.

Private Route Rule

To create a route table, use the instructions in "Creating a Route Table" in Working with Route Tables in the Oracle Private Cloud Appliance User Guide.

For this example, use the following input to create the route table with a private route rule that references the DRG that was created in the preceding step.

Compute Web UI property OCI CLI property
  • Name: drg_rt

Route rule

  • Target Type: Dynamic Routing Gateway

  • Dynamic Routing Gateway: Name of the DRG that was created in the preceding step

  • CIDR Block: 0.0.0.0/0

  • Description: DRG private route rule

  • --vcn-id: ocid1.vcn.oke_vcn_id

  • --display-name: drg_rt

--route-rules

  • networkEntityId: OCID of the DRG that was created in the preceding step

  • destinationType: CIDR_BLOCK

  • destination: 0.0.0.0/0

  • description: DRG private route rule

Note the name and OCID of this route table for assignment to the "control-plane-endpoint" subnet (Creating a VCN-Native Pod Networking Control Plane Load Balancer Subnet).

Public Route Table

Create an internet gateway and a route table with a route rule that references the internet gateway.

Internet Gateway

To create the internet gateway, use the instructions in Providing Public Access through an Internet Gateway in the Oracle Private Cloud Appliance User Guide. For Terraform input, see Example Terraform Scripts for VCN-Native Pod Networking Resources.

Note the name and OCID of the internet gateway for assignment to the public route rule.

Public Route Rule

To create a route table, use the instructions in "Creating a Route Table" in Working with Route Tables in the Oracle Private Cloud Appliance User Guide. For Terraform input, see Example Terraform Scripts for VCN-Native Pod Networking Resources.

For this example, use the following input to create the route table with a public route rule that references the internet gateway that was created in the preceding step.

Compute Web UI property OCI CLI property
  • Name: public

Route rule

  • Target Type: Internet Gateway

  • Internet Gateway: Name of the internet gateway that was created in the preceding step

  • CIDR Block: 0.0.0.0/0

  • Description: OKE public route rule

  • --vcn-id: ocid1.vcn.oke_vcn_id

  • --display-name: public

--route-rules

  • networkEntityId: OCID of the internet gateway that was created in the preceding step

  • destinationType: CIDR_BLOCK

  • destination: 0.0.0.0/0

  • description: OKE public route rule

Note the name and OCID of this route table for assignment to public subnets.

VCN Default Security List

Modify the default security list: delete all of the default rules, and create the rules shown in the following table.

To modify a security list, use the instructions in "Updating a Security List" in Controlling Traffic with Security Lists in the Oracle Private Cloud Appliance User Guide. For Terraform input, see Example Terraform Scripts for VCN-Native Pod Networking Resources.

Compute Web UI property OCI CLI property
  • Name: Default Security List for oketest-vcn

--security-list-id: ocid1.securitylist.default_securitylist_id

One egress security rule:

  • Stateless: uncheck the box

  • Egress CIDR: 0.0.0.0/0

  • IP Protocol: All protocols

  • Description: "Allow all outgoing traffic."

One egress security rule:

--egress-security-rules

  • isStateless: false

  • destination: 0.0.0.0/0

  • destinationType: CIDR_BLOCK

  • protocol: all

  • description: "Allow all outgoing traffic."

Three ingress security rules:

Three ingress security rules:

--ingress-security-rules

Ingress Rule 1

  • Stateless: uncheck the box

  • Ingress CIDR: vcn_cidr

  • IP Protocol: ICMP

    • Parameter Type: 8: Echo

  • Description: "Allow ping from VCN."

Ingress Rule 1

  • isStateless: false

  • source: vcn_cidr

  • sourceType: CIDR_BLOCK

  • protocol: 1

  • icmpOptions

    • type: 8

  • description: "Allow ping from VCN."

Ingress Rule 2

  • Stateless: uncheck the box

  • Ingress CIDR: 0.0.0.0/0

  • IP Protocol: ICMP

    • Parameter Type: 3: Destination Unreachable

  • Description: "Allow ICMP Destination Unreachable messages from any source."

Ingress Rule 2

  • isStateless: false

  • source: 0.0.0.0/0

  • sourceType: CIDR_BLOCK

  • protocol: 1

  • icmpOptions

    • type: 3

  • description: "Allow ICMP Destination Unreachable messages from any source."

Ingress Rule 3

  • Stateless: uncheck the box

  • Ingress CIDR: 0.0.0.0/0

  • IP Protocol: ICMP

    • Parameter Type: 11: Time Exceeded

  • Description: "Allow ICMP Time Exceeded messages from any source."

Ingress Rule 3

  • isStateless: false

  • source: 0.0.0.0/0

  • sourceType: CIDR_BLOCK

  • protocol: 1

  • icmpOptions

    • type: 11

  • description: "Allow ICMP Time Exceeded messages from any source."

Note the name and OCID of this default security list for assignment to subnets.
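As a hedged sketch, the rules above can be expressed as CLI JSON input for oci network security-list update. Protocol 1 is ICMP, and the three icmpOptions type values (8, 3, 11) correspond to the three ingress rules above. The VCN CIDR and security list OCID are placeholders (assumptions), and the update command, which replaces all existing rules in the list, is shown commented out.

```shell
VCN_CIDR="10.0.0.0/16"   # example VCN CIDR (assumption)

# Single stateful egress rule: allow all outgoing traffic.
cat > default-egress.json <<EOF
[
  { "isStateless": false, "destination": "0.0.0.0/0",
    "destinationType": "CIDR_BLOCK", "protocol": "all",
    "description": "Allow all outgoing traffic." }
]
EOF

# Three ICMP ingress rules (protocol 1): Echo (8), Destination
# Unreachable (3), and Time Exceeded (11).
cat > default-ingress.json <<EOF
[
  { "isStateless": false, "source": "${VCN_CIDR}", "sourceType": "CIDR_BLOCK",
    "protocol": "1", "icmpOptions": { "type": 8 },
    "description": "Allow ping from VCN." },
  { "isStateless": false, "source": "0.0.0.0/0", "sourceType": "CIDR_BLOCK",
    "protocol": "1", "icmpOptions": { "type": 3 },
    "description": "Allow ICMP Destination Unreachable messages from any source." },
  { "isStateless": false, "source": "0.0.0.0/0", "sourceType": "CIDR_BLOCK",
    "protocol": "1", "icmpOptions": { "type": 11 },
    "description": "Allow ICMP Time Exceeded messages from any source." }
]
EOF

# Sketch only (requires a configured OCI CLI):
# oci network security-list update \
#   --security-list-id ocid1.securitylist.default_securitylist_id \
#   --egress-security-rules file://default-egress.json \
#   --ingress-security-rules file://default-ingress.json
```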

Creating a VCN-Native Pod Networking Pod Subnet

The instructions in this topic create a pod subnet named "pod" in the VCN that provides the private IP addresses for pods running on the control plane nodes. The number of IP addresses in this subnet should be equal to or greater than the number of IP addresses in the control plane subnet. The pod subnet must be a private subnet.

The pod subnet supports communication between pods and direct access to individual pods using private pod IP addresses. The pod subnet enables pods to communicate with other pods on the same worker node, with pods on other worker nodes, with OCI services (through a service gateway), and with the internet (through a NAT gateway).

Create the following resources in the order listed:

  1. Pod security list

  2. Pod subnet

Create a Pod Security List

To create a security list, use the instructions in "Creating a Security List" in Controlling Traffic with Security Lists in the Oracle Private Cloud Appliance User Guide. For Terraform input, see Example Terraform Scripts for VCN-Native Pod Networking Resources.

The security rules shown in the following table define traffic that is allowed to contact pods directly. Use these security rules as part of network security groups (NSGs) or in security lists. Oracle recommends using NSGs. See Security Best Practices.

The security rules apply to all pods on all worker nodes that are connected to the pod subnet specified for a node pool.

Incoming requests are routed to pods according to the route rules in the route tables. See the route tables defined in Creating a VCN-Native Pod Networking VCN.

For this example, use the following input for the pod subnet security list.

Compute Web UI property OCI CLI property
  • Name: pod-seclist

  • --vcn-id: ocid1.vcn.oke_vcn_id

  • --display-name: pod-seclist

One egress security rule:

  • Stateless: uncheck the box

  • Egress CIDR: 0.0.0.0/0

  • IP Protocol: All protocols

  • Description: "Allow all outgoing traffic."

One egress security rule:

--egress-security-rules

  • isStateless: false

  • destination: 0.0.0.0/0

  • destinationType: CIDR_BLOCK

  • protocol: all

  • description: "Allow all outgoing traffic."

Eight ingress security rules:

Eight ingress security rules:

--ingress-security-rules

Ingress Rule 1

  • Stateless: uncheck the box

  • Ingress CIDR: vcn_cidr

  • IP Protocol: TCP

    • Destination Port Range: 22

  • Description: "Allow SSH connection to the pod subnet from all subnets in the VCN."

Ingress Rule 1

  • isStateless: false

  • source: vcn_cidr

  • sourceType: CIDR_BLOCK

  • protocol: 6

  • tcpOptions

    destinationPortRange

    • max: 22

    • min: 22

  • description: "Allow SSH connection to the pod subnet from all subnets in the VCN."

Ingress Rule 2

  • Stateless: uncheck the box

  • Ingress CIDR: workerlb_cidr

  • IP Protocol: TCP

    • Destination Port Range: 10256

  • Description: "Allow the worker load balancer to contact the pods."

Ingress Rule 2

  • isStateless: false

  • source: workerlb_cidr

  • sourceType: CIDR_BLOCK

  • protocol: 6

  • tcpOptions

    destinationPortRange

    • max: 10256

    • min: 10256

  • description: "Allow the worker load balancer to contact the pods."

Ingress Rule 3

  • Stateless: uncheck the box

  • Ingress CIDR: worker_cidr

  • IP Protocol: TCP

    • Destination Port Range: 10250

  • Description: "Allow Kubernetes API endpoint to pod (via worker node) communication."

Ingress Rule 3

  • isStateless: false

  • source: worker_cidr

  • sourceType: CIDR_BLOCK

  • protocol: 6

  • tcpOptions

    destinationPortRange

    • max: 10250

    • min: 10250

  • description: "Allow Kubernetes API endpoint to pod (via worker node) communication."

Ingress Rule 4

  • Stateless: uncheck the box

  • Ingress CIDR: worker_cidr

  • IP Protocol: TCP

    • Destination Port Range: 10256

  • Description: "Allow Load Balancer or Network Load Balancer to communicate with the kube-proxy pod (via the worker subnet)."

Ingress Rule 4

  • isStateless: false

  • source: worker_cidr

  • sourceType: CIDR_BLOCK

  • protocol: 6

  • tcpOptions

    destinationPortRange

    • max: 10256

    • min: 10256

  • description: "Allow Load Balancer or Network Load Balancer to communicate with the kube-proxy pod (via the worker subnet)."

Ingress Rule 5

  • Stateless: uncheck the box

  • Ingress CIDR: worker_cidr

  • IP Protocol: TCP

    • Destination Port Range: 80

  • Description: "Allow the worker node to contact the pods."

This ingress rule is optional. Port 80 is opened here for an end-user application; adjust this rule for the applications that you deploy.

Ingress Rule 5

  • isStateless: false

  • source: worker_cidr

  • sourceType: CIDR_BLOCK

  • protocol: 6

  • tcpOptions

    destinationPortRange

    • max: 80

    • min: 80

  • description: "Allow the worker node to contact the pods."

This ingress rule is optional. Port 80 is opened here for an end-user application; adjust this rule for the applications that you deploy.

Ingress Rule 6

  • Stateless: uncheck the box

  • Ingress CIDR: kmi_cidr

  • IP Protocol: ICMP

    • Parameter Type: 8: Echo

  • Description: "Test the reachability of a network pod from kmi_cidr by sending a request."

Ingress Rule 6

  • isStateless: false

  • source: kmi_cidr

  • sourceType: CIDR_BLOCK

  • protocol: 1

  • icmpOptions

    • type: 8

  • description: "Test the reachability of a network pod from kmi_cidr by sending a request."

Ingress Rule 7

  • Stateless: uncheck the box

  • Ingress CIDR: kmi_cidr

  • IP Protocol: ICMP

    • Parameter Type: 0: Echo Reply

  • Description: "If the destination pod is reachable from kmi_cidr, respond with an ICMP Echo Reply."

Ingress Rule 7

  • isStateless: false

  • source: kmi_cidr

  • sourceType: CIDR_BLOCK

  • protocol: 1

  • icmpOptions

    • type: 0

  • description: "If the destination pod is reachable from kmi_cidr, respond with an ICMP Echo Reply."

Ingress Rule 8

  • Stateless: uncheck the box

  • Ingress CIDR: pod_cidr

  • IP Protocol: All protocols

  • Description: "Allow the pod CIDR to communicate with itself."

Ingress Rule 8

  • isStateless: false

  • source: pod_cidr

  • sourceType: CIDR_BLOCK

  • protocol: all

  • description: "Allow the pod CIDR to communicate with itself."
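As a sketch of how the TCP rules above translate to CLI JSON (the CIDR values are placeholder assumptions): a single port is expressed as a destinationPortRange with equal min and max inside tcpOptions, and protocol 6 is TCP. Only two of the eight ingress rules are shown; the full list is passed in one file.

```shell
# Example CIDRs (assumptions) -- replace with your subnet CIDRs.
VCN_CIDR="10.0.0.0/16"
POD_CIDR="10.0.4.0/24"

# Two of the eight ingress rules: SSH (single port 22, so min == max)
# and the all-protocols rule for pod-to-pod traffic.
cat > pod-ingress-sample.json <<EOF
[
  { "isStateless": false, "source": "${VCN_CIDR}", "sourceType": "CIDR_BLOCK",
    "protocol": "6",
    "tcpOptions": { "destinationPortRange": { "min": 22, "max": 22 } },
    "description": "Allow SSH connection to the pod subnet from all subnets in the VCN." },
  { "isStateless": false, "source": "${POD_CIDR}", "sourceType": "CIDR_BLOCK",
    "protocol": "all",
    "description": "Allow the pod CIDR to communicate with itself." }
]
EOF

# Sketch only (requires a configured OCI CLI); pass all eight rules in
# the ingress file and the single egress rule in its own file:
# oci network security-list create \
#   --compartment-id ocid1.compartment.unique_ID \
#   --vcn-id ocid1.vcn.oke_vcn_id \
#   --display-name pod-seclist \
#   --ingress-security-rules file://pod-ingress-sample.json \
#   --egress-security-rules file://pod-egress.json
```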

Create the Pod Subnet

To create a subnet, use the instructions in Creating a Subnet in the Oracle Private Cloud Appliance User Guide. For Terraform input, see Example Terraform Scripts for VCN-Native Pod Networking Resources.

For this example, use the following input to create the pod subnet. Use the OCID of the VCN that was created in Creating a VCN-Native Pod Networking VCN. Create the pod subnet in the same compartment where you created the VCN.

Important:

The name of this subnet must be exactly "pod".

Compute Web UI property OCI CLI property
  • Name: pod

  • CIDR Block: pod_cidr

  • Route Table: Select "nat_private" from the list

  • Private Subnet: check the box

  • DNS Hostnames:

    Use DNS Hostnames in this Subnet: check the box

    • DNS Label: pod

  • Security Lists: Select "pod-seclist" and "Default Security List for oketest-vcn" from the list

  • --vcn-id: ocid1.vcn.oke_vcn_id

  • --display-name: pod

  • --cidr-block: pod_cidr

  • --dns-label: pod

  • --prohibit-public-ip-on-vnic: true

  • --route-table-id: OCID of the "nat_private" route table

  • --security-list-ids: OCIDs of the "pod-seclist" security list and the "Default Security List for oketest-vcn" security list
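A sketch of the full create command assembled from the CLI properties above; all OCIDs and the CIDR are placeholders (assumptions). The command is written to a file for review rather than executed, because it requires a configured OCI CLI.

```shell
# Write the create command to a file for review. The subnet name must be
# exactly "pod", and --prohibit-public-ip-on-vnic true makes it private.
# The two security list OCIDs are passed as a JSON array (placeholders).
cat > pod-subnet-create.sh <<'EOF'
oci network subnet create \
  --compartment-id ocid1.compartment.unique_ID \
  --vcn-id ocid1.vcn.oke_vcn_id \
  --display-name pod \
  --cidr-block 10.0.4.0/24 \
  --dns-label pod \
  --prohibit-public-ip-on-vnic true \
  --route-table-id ocid1.routetable.nat_private_id \
  --security-list-ids '["ocid1.securitylist.pod_seclist_id","ocid1.securitylist.default_seclist_id"]'
EOF

# Syntax-check the file, then run it manually against a configured OCI CLI.
sh -n pod-subnet-create.sh && echo "pod-subnet-create.sh syntax OK"
```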

Creating a VCN-Native Pod Networking Worker Subnet

Create the following resources in the order listed:

  1. Worker security list

  2. Worker subnet

Create a Worker Security List

To create a security list, use the instructions in "Creating a Security List" in Controlling Traffic with Security Lists in the Oracle Private Cloud Appliance User Guide. For Terraform input, see Example Terraform Scripts for VCN-Native Pod Networking Resources.

This security list defines traffic that is allowed to contact worker nodes directly.

For this example, use the following input for the worker subnet security list.

Compute Web UI property OCI CLI property
  • Name: worker-seclist

  • --vcn-id: ocid1.vcn.oke_vcn_id

  • --display-name: worker-seclist

One egress security rule:

  • Stateless: uncheck the box

  • Egress CIDR: 0.0.0.0/0

  • IP Protocol: All protocols

  • Description: "Allow all outgoing traffic."

One egress security rule:

--egress-security-rules

  • isStateless: false

  • destination: 0.0.0.0/0

  • destinationType: CIDR_BLOCK

  • protocol: all

  • description: "Allow all outgoing traffic."

Thirteen ingress security rules:

Thirteen ingress security rules:

--ingress-security-rules

Ingress Rule 1

  • Stateless: uncheck the box

  • Ingress CIDR: kube_client_cidr

  • IP Protocol: TCP

    • Destination Port Range: 30000-32767

  • Description: "Allow worker nodes to receive connections from Kubernetes clients."

Ingress Rule 1

  • isStateless: false

  • source: kube_client_cidr

  • sourceType: CIDR_BLOCK

  • protocol: 6

  • tcpOptions

    destinationPortRange

    • max: 32767

    • min: 30000

  • description: "Allow worker nodes to receive connections from Kubernetes clients."

Ingress Rule 2

  • Stateless: uncheck the box

  • Ingress CIDR: kmi_cidr

  • IP Protocol: TCP

    • Destination Port Range: 22

  • Description: "Allow SSH connection from the control plane subnet."

Ingress Rule 2

  • isStateless: false

  • source: kmi_cidr

  • sourceType: CIDR_BLOCK

  • protocol: 6

  • tcpOptions

    destinationPortRange

    • max: 22

    • min: 22

  • description: "Allow SSH connection from the control plane subnet."

Ingress Rule 3

  • Stateless: uncheck the box

  • Ingress CIDR: worker_cidr

  • IP Protocol: TCP

    • Destination Port Range: 22

  • Description: "Allow SSH connection from the worker subnet."

Ingress Rule 3

  • isStateless: false

  • source: worker_cidr

  • sourceType: CIDR_BLOCK

  • protocol: 6

  • tcpOptions

    destinationPortRange

    • max: 22

    • min: 22

  • description: "Allow SSH connection from the worker subnet."

Ingress Rule 4

  • Stateless: uncheck the box

  • Ingress CIDR: worker_cidr

  • IP Protocol: TCP

    • Destination Port Range: 10250

  • Description: "Allow Kubernetes API endpoint to worker node communication."

Ingress Rule 4

  • isStateless: false

  • source: worker_cidr

  • sourceType: CIDR_BLOCK

  • protocol: 6

  • tcpOptions

    destinationPortRange

    • max: 10250

    • min: 10250

  • description: "Allow Kubernetes API endpoint to worker node communication."

Ingress Rule 5

  • Stateless: uncheck the box

  • Ingress CIDR: worker_cidr

  • IP Protocol: TCP

    • Destination Port Range: 10256

  • Description: "Allow Load Balancer or Network Load Balancer to communicate with kube-proxy on worker nodes."

Ingress Rule 5

  • isStateless: false

  • source: worker_cidr

  • sourceType: CIDR_BLOCK

  • protocol: 6

  • tcpOptions

    destinationPortRange

    • max: 10256

    • min: 10256

  • description: "Allow Load Balancer or Network Load Balancer to communicate with kube-proxy on worker nodes."

Ingress Rule 6

  • Stateless: uncheck the box

  • Ingress CIDR: worker_cidr

  • IP Protocol: TCP

    • Destination Port Range: 30000-32767

  • Description: "Allow traffic to worker nodes."

Ingress Rule 6

  • isStateless: false

  • source: worker_cidr

  • sourceType: CIDR_BLOCK

  • protocol: 6

  • tcpOptions

    destinationPortRange

    • max: 32767

    • min: 30000

  • description: "Allow traffic to worker nodes."

Ingress Rule 7

  • Stateless: uncheck the box

  • Ingress CIDR: workerlb_cidr

  • IP Protocol: TCP

    • Destination Port Range: 10256

  • Description: "Allow Load Balancer or Network Load Balancer to communicate with kube-proxy on worker nodes."

Ingress Rule 7

  • isStateless: false

  • source: workerlb_cidr

  • sourceType: CIDR_BLOCK

  • protocol: 6

  • tcpOptions

    destinationPortRange

    • max: 10256

    • min: 10256

  • description: "Allow Load Balancer or Network Load Balancer to communicate with kube-proxy on worker nodes."

Ingress Rule 8

  • Stateless: uncheck the box

  • Ingress CIDR: workerlb_cidr

  • IP Protocol: TCP

    • Destination Port Range: 30000-32767

  • Description: "Allow worker nodes to receive connections through Network Load Balancer."

Ingress Rule 8

  • isStateless: false

  • source: workerlb_cidr

  • sourceType: CIDR_BLOCK

  • protocol: 6

  • tcpOptions

    destinationPortRange

    • max: 32767

    • min: 30000

  • description: "Allow worker nodes to receive connections through Network Load Balancer."

Ingress Rule 9

  • Stateless: uncheck the box

  • Ingress CIDR: kmi_cidr

  • IP Protocol: TCP

    • Destination Port Range: 10250

  • Description: "Allow Kubernetes API endpoint to worker node communication."

Ingress Rule 9

  • isStateless: false

  • source: kmi_cidr

  • sourceType: CIDR_BLOCK

  • protocol: 6

  • tcpOptions

    destinationPortRange

    • max: 10250

    • min: 10250

  • description: "Allow Kubernetes API endpoint to worker node communication."

Ingress Rule 10

  • Stateless: uncheck the box

  • Ingress CIDR: kmi_cidr

  • IP Protocol: TCP

    • Destination Port Range: 10256

  • Description: "Allow Load Balancer or Network Load Balancer to communicate with kube-proxy on worker nodes."

Ingress Rule 10

  • isStateless: false

  • source: kmi_cidr

  • sourceType: CIDR_BLOCK

  • protocol: 6

  • tcpOptions

    destinationPortRange

    • max: 10256

    • min: 10256

  • description: "Allow Load Balancer or Network Load Balancer to communicate with kube-proxy on worker nodes."

Ingress Rule 11

  • Stateless: uncheck the box

  • Ingress CIDR: pod_cidr

  • IP Protocol: TCP

    • Destination Port Range: 30000-32767

  • Description: "Allow worker nodes to receive connections through the pod subnet."

Ingress Rule 11

  • isStateless: false

  • source: pod_cidr

  • sourceType: CIDR_BLOCK

  • protocol: 6

  • tcpOptions

    destinationPortRange

    • max: 32767

    • min: 30000

  • description: "Allow worker nodes to receive connections through the pod subnet."

Ingress Rule 12

  • Stateless: uncheck the box

  • Ingress CIDR: kmi_cidr

  • IP Protocol: ICMP

    • Parameter Type: 8: Echo

  • Description: "Test the reachability of a network pod from kmi_cidr by sending a request."

Ingress Rule 12

  • isStateless: false

  • source: kmi_cidr

  • sourceType: CIDR_BLOCK

  • protocol: 1

  • icmpOptions

    • type: 8

  • description: "Test the reachability of a network pod from kmi_cidr by sending a request."

Ingress Rule 13

  • Stateless: uncheck the box

  • Ingress CIDR: kmi_cidr

  • IP Protocol: ICMP

    • Parameter Type: 0: Echo Reply

  • Description: "If the destination pod is reachable from kmi_cidr, respond with an ICMP Echo Reply."

Ingress Rule 13

  • isStateless: false

  • source: kmi_cidr

  • sourceType: CIDR_BLOCK

  • protocol: 1

  • icmpOptions

    • type: 0

  • description: "If the destination pod is reachable from kmi_cidr, respond with an ICMP Echo Reply."

Create the Worker Subnet

To create a subnet, use the instructions in Creating a Subnet in the Oracle Private Cloud Appliance User Guide. For Terraform input, see Example Terraform Scripts for VCN-Native Pod Networking Resources.

For this example, use the following input to create the worker subnet. Use the OCID of the VCN that was created in Creating a VCN-Native Pod Networking VCN. Create the worker subnet in the same compartment where you created the VCN.

Create either a NAT private worker subnet or a VCN private worker subnet. Create a NAT private worker subnet if worker nodes need to communicate outside the VCN.

Table 4-13 Create a NAT Private Worker Subnet

Compute Web UI property OCI CLI property
  • Name: worker

  • CIDR Block: worker_cidr

  • Route Table: Select "nat_private" from the list

  • Private Subnet: check the box

  • DNS Hostnames:

    Use DNS Hostnames in this Subnet: check the box

    • DNS Label: worker

  • Security Lists: Select "worker-seclist" and "Default Security List for oketest-vcn" from the list

  • --vcn-id: ocid1.vcn.oke_vcn_id

  • --display-name: worker

  • --cidr-block: worker_cidr

  • --dns-label: worker

  • --prohibit-public-ip-on-vnic: true

  • --route-table-id: OCID of the "nat_private" route table

  • --security-list-ids: OCIDs of the "worker-seclist" security list and the "Default Security List for oketest-vcn" security list

The only difference in the following private subnet is that the VCN private route table is used instead of the NAT private route table.

Table 4-14 Create a VCN Private Worker Subnet

Compute Web UI property OCI CLI property
  • Name: worker

  • CIDR Block: worker_cidr

  • Route Table: Select "vcn_private" from the list

  • Private Subnet: check the box

  • DNS Hostnames:

    Use DNS Hostnames in this Subnet: check the box

    • DNS Label: worker

  • Security Lists: Select "worker-seclist" and "Default Security List for oketest-vcn" from the list

  • --vcn-id: ocid1.vcn.oke_vcn_id

  • --display-name: worker

  • --cidr-block: worker_cidr

  • --dns-label: worker

  • --prohibit-public-ip-on-vnic: true

  • --route-table-id: OCID of the "vcn_private" route table

  • --security-list-ids: OCIDs of the "worker-seclist" security list and the "Default Security List for oketest-vcn" security list

Creating a VCN-Native Pod Networking Worker Load Balancer Subnet

Create the following resources in the order listed:

  1. Worker load balancer security list

  2. Worker load balancer subnet

Create a Worker Load Balancer Security List

To create a security list, use the instructions in "Creating a Security List" in Controlling Traffic with Security Lists in the Oracle Private Cloud Appliance User Guide. For Terraform input, see Example Terraform Scripts for VCN-Native Pod Networking Resources.

This security list defines the traffic, such as application requests, that is allowed to contact the worker load balancer.

For this example, use the following input for the worker load balancer subnet security list. These sources and destinations are examples; adjust these for your applications.

Note:

When you create an external load balancer for your containerized applications (see Exposing Containerized Applications), remember to add that load balancer service front-end port to this security list.

Compute Web UI property OCI CLI property
  • Name: workerlb-seclist

  • --vcn-id: ocid1.vcn.oke_vcn_id

  • --display-name: workerlb-seclist

One egress security rule:

  • Stateless: uncheck the box

  • Egress CIDR: 0.0.0.0/0

  • IP Protocol: All protocols

  • Description: "Allow all outgoing traffic."

One egress security rule:

--egress-security-rules

  • isStateless: false

  • destination: 0.0.0.0/0

  • destinationType: CIDR_BLOCK

  • protocol: all

  • description: "Allow all outgoing traffic."

Two ingress security rules:

Two ingress security rules:

--ingress-security-rules

Ingress Rule 1

  • Stateless: uncheck the box

  • Ingress CIDR: kube_client_cidr

  • IP Protocol: TCP

    • Destination Port Range: 80

  • Description: "Allow inbound traffic for applications."

Ingress Rule 1

  • isStateless: false

  • source: kube_client_cidr

  • sourceType: CIDR_BLOCK

  • protocol: 6

  • tcpOptions

    destinationPortRange

    • max: 80

    • min: 80

  • description: "Allow inbound traffic for applications."

Ingress Rule 2

  • Stateless: uncheck the box

  • Ingress CIDR: kube_client_cidr

  • IP Protocol: TCP

    • Destination Port Range: 443

  • Description: "Allow inbound traffic for applications."

Ingress Rule 2

  • isStateless: false

  • source: kube_client_cidr

  • sourceType: CIDR_BLOCK

  • protocol: 6

  • tcpOptions

    destinationPortRange

    • max: 443

    • min: 443

  • description: "Allow inbound traffic for applications."
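Per the earlier note, when you expose an application on a new load balancer front-end port you must add that port to this security list. The following sketch (the client CIDR and port 8443 are assumptions) appends a rule to a local copy of the ingress rules before running oci network security-list update.

```shell
KUBE_CLIENT_CIDR="0.0.0.0/0"   # example client CIDR (assumption)

# Start from the two rules defined above (ports 80 and 443).
cat > workerlb-ingress.json <<EOF
[
  { "isStateless": false, "source": "${KUBE_CLIENT_CIDR}", "sourceType": "CIDR_BLOCK",
    "protocol": "6",
    "tcpOptions": { "destinationPortRange": { "min": 80, "max": 80 } },
    "description": "Allow inbound traffic for applications." },
  { "isStateless": false, "source": "${KUBE_CLIENT_CIDR}", "sourceType": "CIDR_BLOCK",
    "protocol": "6",
    "tcpOptions": { "destinationPortRange": { "min": 443, "max": 443 } },
    "description": "Allow inbound traffic for applications." }
]
EOF

# Append a rule for a new load balancer front-end port (8443 is an example).
python3 - <<'PY'
import json
with open("workerlb-ingress.json") as f:
    rules = json.load(f)
rules.append({
    "isStateless": False, "source": "0.0.0.0/0", "sourceType": "CIDR_BLOCK",
    "protocol": "6",
    "tcpOptions": {"destinationPortRange": {"min": 8443, "max": 8443}},
    "description": "Allow inbound traffic for applications.",
})
with open("workerlb-ingress.json", "w") as f:
    json.dump(rules, f, indent=2)
PY

# Sketch only (requires a configured OCI CLI):
# oci network security-list update \
#   --security-list-id ocid1.securitylist.workerlb_seclist_id \
#   --ingress-security-rules file://workerlb-ingress.json
```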

Create the Worker Load Balancer Subnet

To create a subnet, use the instructions in Creating a Subnet in the Oracle Private Cloud Appliance User Guide. For Terraform input, see Example Terraform Scripts for VCN-Native Pod Networking Resources.

For this example, use the following input to create the worker load balancer subnet. Use the OCID of the VCN that was created in Creating a VCN-Native Pod Networking VCN. Create the worker load balancer subnet in the same compartment where you created the VCN.

Create either a private or a public worker load balancer subnet. Create a public worker load balancer subnet to use with a public cluster. Create a private worker load balancer subnet to expose applications in a private cluster.

Table 4-15 Create a Public Worker Load Balancer Subnet

Compute Web UI property OCI CLI property
  • Name: service-lb

  • CIDR Block: workerlb_cidr

  • Route Table: Select "public" from the list

  • Public Subnet: check the box

  • DNS Hostnames:

    Use DNS Hostnames in this Subnet: check the box

    • DNS Label: servicelb

  • Security Lists: Select "workerlb-seclist" and "Default Security List for oketest-vcn" from the list

  • --vcn-id: ocid1.vcn.oke_vcn_id

  • --display-name: service-lb

  • --cidr-block: workerlb_cidr

  • --dns-label: servicelb

  • --prohibit-public-ip-on-vnic: false

  • --route-table-id: OCID of the "public" route table

  • --security-list-ids: OCIDs of the "workerlb-seclist" security list and the "Default Security List for oketest-vcn" security list

The only difference in the following private subnet is that the VCN private route table is used instead of the public route table.

Table 4-16 Create a VCN Private Worker Load Balancer Subnet

Compute Web UI property OCI CLI property
  • Name: service-lb

  • CIDR Block: workerlb_cidr

  • Route Table: Select "vcn_private" from the list

  • Private Subnet: check the box

  • DNS Hostnames:

    Use DNS Hostnames in this Subnet: check the box

    • DNS Label: servicelb

  • Security Lists: Select "workerlb-seclist" and "Default Security List for oketest-vcn" from the list

  • --vcn-id: ocid1.vcn.oke_vcn_id

  • --display-name: service-lb

  • --cidr-block: workerlb_cidr

  • --dns-label: servicelb

  • --prohibit-public-ip-on-vnic: true

  • --route-table-id: OCID of the "vcn_private" route table

  • --security-list-ids: OCIDs of the "workerlb-seclist" security list and the "Default Security List for oketest-vcn" security list

Creating a VCN-Native Pod Networking Control Plane Subnet

Create the following resources in the order listed:

  1. Control plane security list

  2. Control plane subnet

Create a Control Plane Security List

To create a security list, use the instructions in "Creating a Security List" in Controlling Traffic with Security Lists in the Oracle Private Cloud Appliance User Guide. For Terraform input, see Example Terraform Scripts for VCN-Native Pod Networking Resources.

For this example, use the following input for the control plane subnet security list. The kubernetes_api_port is the port used to access the Kubernetes API (port 6443). See also Workload Cluster Network Ports for VCN-Native Pod Networking.

Compute Web UI property OCI CLI property
  • Name: kmi-seclist

  • --vcn-id: ocid1.vcn.oke_vcn_id

  • --display-name: kmi-seclist

One egress security rule:

  • Stateless: uncheck the box

  • Egress CIDR: 0.0.0.0/0

  • IP Protocol: All protocols

  • Description: "Allow all outgoing traffic."

One egress security rule:

--egress-security-rules

  • isStateless: false

  • destination: 0.0.0.0/0

  • destinationType: CIDR_BLOCK

  • protocol: all

  • description: "Allow all outgoing traffic."

Eleven ingress security rules:

Eleven ingress security rules:

--ingress-security-rules

Ingress Rule 1

  • Stateless: uncheck the box

  • Ingress CIDR: kube_client_cidr

  • IP Protocol: TCP

    • Destination Port Range: kubernetes_api_port

  • Description: "Allow clients to communicate with Kubernetes API."

Ingress Rule 1
  • isStateless: false

  • source: kube_client_cidr

  • sourceType: CIDR_BLOCK

  • protocol: 6

  • tcpOptions

    destinationPortRange

    • max: kubernetes_api_port

    • min: kubernetes_api_port

  • description: "Allow clients to communicate with Kubernetes API."

Ingress Rule 2
  • Stateless: uncheck the box

  • Ingress CIDR: kmilb_cidr

  • IP Protocol: TCP

    • Destination Port Range: kubernetes_api_port

  • Description: "Allow the load balancer to communicate with Kubernetes control plane APIs."

Ingress Rule 2
  • isStateless: false

  • source: kmilb_cidr

  • sourceType: CIDR_BLOCK

  • protocol: 6

  • tcpOptions

    destinationPortRange

    • max: kubernetes_api_port

    • min: kubernetes_api_port

  • description: "Allow the load balancer to communicate with Kubernetes control plane APIs."

Ingress Rule 3
  • Stateless: uncheck the box

  • Ingress CIDR: kmilb_cidr

  • IP Protocol: TCP

    • Destination Port Range: 12250

  • Description: "Allow Kubernetes worker to Kubernetes API endpoint communication via the control plane load balancer."

Ingress Rule 3
  • isStateless: false

  • source: kmilb_cidr

  • sourceType: CIDR_BLOCK

  • protocol: 6

  • tcpOptions

    destinationPortRange

    • max: 12250

    • min: 12250

  • description: "Allow Kubernetes worker to Kubernetes API endpoint communication via the control plane load balancer."

Ingress Rule 4
  • Stateless: uncheck the box

  • Ingress CIDR: worker_cidr

  • IP Protocol: TCP

    • Destination Port Range: kubernetes_api_port

  • Description: "Allow worker nodes to access the Kubernetes API."

Ingress Rule 4
  • isStateless: false

  • source: worker_cidr

  • sourceType: CIDR_BLOCK

  • protocol: 6

  • tcpOptions

    destinationPortRange

    • max: kubernetes_api_port

    • min: kubernetes_api_port

  • description: "Allow worker nodes to access the Kubernetes API."

Ingress Rule 5
  • Stateless: uncheck the box

  • Ingress CIDR: worker_cidr

  • IP Protocol: TCP

    • Destination Port Range: 12250

  • Description: "Allow Kubernetes worker to Kubernetes API endpoint communication."

Ingress Rule 5
  • isStateless: false

  • source: worker_cidr

  • sourceType: CIDR_BLOCK

  • protocol: 6

  • tcpOptions

    destinationPortRange

    • max: 12250

    • min: 12250

  • description: "Allow Kubernetes worker to Kubernetes API endpoint communication."

Ingress Rule 6
  • Stateless: uncheck the box

  • Ingress CIDR: kmi_cidr

  • IP Protocol: TCP

    • Destination Port Range: kubernetes_api_port

  • Description: "Allow the control plane to reach itself."

Ingress Rule 6
  • isStateless: false

  • source: kmi_cidr

  • sourceType: CIDR_BLOCK

  • protocol: 6

  • tcpOptions

    destinationPortRange

    • max: kubernetes_api_port

    • min: kubernetes_api_port

  • description: "Allow the control plane to reach itself."

Ingress Rule 7
  • Stateless: uncheck the box

  • Ingress CIDR: kmi_cidr

  • IP Protocol: TCP

    • Destination Port Range: 2379-2381

  • Description: "Allow the control plane to reach etcd services and metrics. Ports 2379 and 2380 are used by Kubernetes to communicate with the etcd server. Port 2381 is used by Kubernetes to collect metrics from etcd."

Ingress Rule 7
  • isStateless: false

  • source: kmi_cidr

  • sourceType: CIDR_BLOCK

  • protocol: 6

  • tcpOptions

    destinationPortRange

    • max: 2381

    • min: 2379

  • description: "Allow the control plane to reach etcd services and metrics. Ports 2379 and 2380 are used by Kubernetes to communicate with the etcd server. Port 2381 is used by Kubernetes to collect metrics from etcd."

Ingress Rule 8
  • Stateless: uncheck the box

  • Ingress CIDR: kmi_cidr

  • IP Protocol: TCP

    • Destination Port Range: 10250

  • Description: "Allow Kubernetes API endpoint to control plane node communication."

Ingress Rule 8
  • isStateless: false

  • source: kmi_cidr

  • sourceType: CIDR_BLOCK

  • protocol: 6

  • tcpOptions

    destinationPortRange

    • max: 10250

    • min: 10250

  • description: "Allow Kubernetes API endpoint to control plane node communication."

Ingress Rule 9
  • Stateless: uncheck the box

  • Ingress CIDR: kmi_cidr

  • IP Protocol: TCP

    • Destination Port Range: 10257-10260

  • Description: "Allow inbound connection for Kubernetes components."

Ingress Rule 9
  • isStateless: false

  • source: kmi_cidr

  • sourceType: CIDR_BLOCK

  • protocol: 6

  • tcpOptions

    destinationPortRange

    • max: 10260

    • min: 10257

  • description: "Allow inbound connection for Kubernetes components."

Ingress Rule 10
  • Stateless: uncheck the box

  • Ingress CIDR: pod_cidr

  • IP Protocol: TCP

    • Destination Port Range: kubernetes_api_port

  • Description: "Allow pods to communicate with Kubernetes APIs."

Ingress Rule 10
  • isStateless: false

  • source: pod_cidr

  • sourceType: CIDR_BLOCK

  • protocol: 6

  • tcpOptions

    destinationPortRange

    • max: kubernetes_api_port

    • min: kubernetes_api_port

  • description: "Allow pods to communicate with Kubernetes APIs."

Ingress Rule 11
  • Stateless: uncheck the box

  • Ingress CIDR: pod_cidr

  • IP Protocol: TCP

    • Destination Port Range: 12250

  • Description: "Allow Kubernetes pods to Kubernetes API endpoint communication."

Ingress Rule 11
  • isStateless: false

  • source: pod_cidr

  • sourceType: CIDR_BLOCK

  • protocol: 6

  • tcpOptions

    destinationPortRange

    • max: 12250

    • min: 12250

  • description: "Allow Kubernetes pods to Kubernetes API endpoint communication."
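The ingress rules above can be generated programmatically rather than typed out one by one. The following is a minimal sketch that builds the JSON structure the `--ingress-security-rules` option expects for this security list; the CIDR values are placeholders (substitute the values planned for your VCN), and 6443 stands in for kubernetes_api_port. Some rule descriptions are abbreviated here for brevity.

```python
# Sketch: build the --ingress-security-rules JSON for the control plane
# security list described above. CIDR values are PLACEHOLDERS; replace them
# with the values planned for your VCN.
import json

KUBERNETES_API_PORT = 6443   # kubernetes_api_port in this guide

# Placeholder CIDRs -- substitute your planned values.
KUBE_CLIENT_CIDR = "10.0.0.0/16"
KMILB_CIDR = "10.1.0.0/28"
WORKER_CIDR = "10.1.1.0/24"
KMI_CIDR = "10.1.2.0/28"
POD_CIDR = "10.2.0.0/16"

def tcp_ingress(source, port_min, port_max, description):
    """One stateful TCP ingress rule in the shape the OCI CLI expects."""
    return {
        "isStateless": False,
        "source": source,
        "sourceType": "CIDR_BLOCK",
        "protocol": "6",  # 6 = TCP
        "tcpOptions": {"destinationPortRange": {"min": port_min, "max": port_max}},
        "description": description,
    }

# (source, min port, max port, description) for Ingress Rules 1-11 above.
rules = [
    (KUBE_CLIENT_CIDR, KUBERNETES_API_PORT, KUBERNETES_API_PORT,
     "Allow clients to communicate with Kubernetes API."),
    (KMILB_CIDR, KUBERNETES_API_PORT, KUBERNETES_API_PORT,
     "Allow the load balancer to communicate with Kubernetes control plane APIs."),
    (KMILB_CIDR, 12250, 12250,
     "Allow worker to API endpoint communication via the control plane load balancer."),
    (WORKER_CIDR, KUBERNETES_API_PORT, KUBERNETES_API_PORT,
     "Allow worker nodes to access the Kubernetes API."),
    (WORKER_CIDR, 12250, 12250,
     "Allow Kubernetes worker to Kubernetes API endpoint communication."),
    (KMI_CIDR, KUBERNETES_API_PORT, KUBERNETES_API_PORT,
     "Allow the control plane to reach itself."),
    (KMI_CIDR, 2379, 2381,
     "Allow the control plane to reach etcd services and metrics."),
    (KMI_CIDR, 10250, 10250,
     "Allow Kubernetes API endpoint to control plane node communication."),
    (KMI_CIDR, 10257, 10260,
     "Allow inbound connection for Kubernetes components."),
    (POD_CIDR, KUBERNETES_API_PORT, KUBERNETES_API_PORT,
     "Allow pods to communicate with Kubernetes APIs."),
    (POD_CIDR, 12250, 12250,
     "Allow Kubernetes pods to Kubernetes API endpoint communication."),
]

ingress_rules = [tcp_ingress(*r) for r in rules]
print(json.dumps(ingress_rules, indent=2))  # pass as --ingress-security-rules
```

Writing the JSON to a file and passing it as `--ingress-security-rules file://rules.json` avoids quoting problems on the command line.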

Create the Control Plane Subnet

To create a subnet, use the instructions in Creating a Subnet in the Oracle Private Cloud Appliance User Guide. For Terraform input, see Example Terraform Scripts for VCN-Native Pod Networking Resources.

Use the following input to create the control plane subnet. Use the OCID of the VCN that was created in Creating a VCN-Native Pod Networking VCN. Create the control plane subnet in the same compartment where you created the VCN.

Create either a NAT private control plane subnet or a VCN private control plane subnet. Create a NAT private control plane subnet if the control plane must be able to communicate outside the VCN.

Important:

The name of this subnet must be exactly "control-plane".

Table 4-17 Create a NAT Private Control Plane Subnet

Compute Web UI property OCI CLI property
  • Name: control-plane

  • CIDR Block: kmi_cidr

  • Route Table: Select "nat_private" from the list

  • Private Subnet: check the box

  • DNS Hostnames:

    Use DNS Hostnames in this Subnet: check the box

    • DNS Label: kmi

  • Security Lists: Select "kmi-seclist" and "Default Security List for oketest-vcn" from the list

  • --vcn-id: ocid1.vcn.oke_vcn_id

  • --display-name: control-plane

  • --cidr-block: kmi_cidr

  • --dns-label: kmi

  • --prohibit-public-ip-on-vnic: true

  • --route-table-id: OCID of the "nat_private" route table

  • --security-list-ids: OCIDs of the "kmi-seclist" security list and the "Default Security List for oketest-vcn" security list

The only difference in the following private subnet is that the "vcn_private" route table is used instead of the "nat_private" route table.

Table 4-18 Create a VCN Private Control Plane Subnet

Compute Web UI property OCI CLI property
  • Name: control-plane

  • CIDR Block: kmi_cidr

  • Route Table: Select "vcn_private" from the list

  • Private Subnet: check the box

  • DNS Hostnames:

    Use DNS Hostnames in this Subnet: check the box

    • DNS Label: kmi

  • Security Lists: Select "kmi-seclist" and "Default Security List for oketest-vcn" from the list

  • --vcn-id: ocid1.vcn.oke_vcn_id

  • --display-name: control-plane

  • --cidr-block: kmi_cidr

  • --dns-label: kmi

  • --prohibit-public-ip-on-vnic: true

  • --route-table-id: OCID of the "vcn_private" route table

  • --security-list-ids: OCIDs of the "kmi-seclist" security list and the "Default Security List for oketest-vcn" security list
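The two control plane subnet variants in Tables 4-17 and 4-18 differ only in the route table. The following sketch composes the `oci network subnet create` argument list from the CLI properties above; all OCIDs and the CIDR are placeholders.

```python
# Sketch: compose the `oci network subnet create` call for the control plane
# subnet (Tables 4-17 and 4-18). All OCIDs and the CIDR are PLACEHOLDERS.
VCN_OCID = "ocid1.vcn.oc1..example"       # placeholder: your OKE VCN OCID
KMI_CIDR = "10.1.2.0/28"                  # placeholder: kmi_cidr
SECLIST_OCIDS = '["ocid1.securitylist...kmi-seclist", "ocid1.securitylist...default"]'

def subnet_create_args(route_table_ocid):
    """Pass the "nat_private" route table OCID for Table 4-17,
    or the "vcn_private" route table OCID for Table 4-18."""
    return [
        "oci", "network", "subnet", "create",
        "--vcn-id", VCN_OCID,
        "--display-name", "control-plane",       # must be exactly "control-plane"
        "--cidr-block", KMI_CIDR,
        "--dns-label", "kmi",
        "--prohibit-public-ip-on-vnic", "true",  # private subnet
        "--route-table-id", route_table_ocid,
        "--security-list-ids", SECLIST_OCIDS,
    ]

print(" ".join(subnet_create_args("ocid1.routetable.oc1..natprivate")))
```

The same function covers both variants, which makes it easy to see that only `--route-table-id` changes between the two tables.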

Creating a VCN-Native Pod Networking Control Plane Load Balancer Subnet

Create the following resources in the order listed:

  1. Control plane load balancer security list

  2. Control plane load balancer subnet

Create a Control Plane Load Balancer Security List

To create a security list, use the instructions in "Creating a Security List" in Controlling Traffic with Security Lists in the Oracle Private Cloud Appliance User Guide. For Terraform input, see Example Terraform Scripts for VCN-Native Pod Networking Resources.

The control plane load balancer accepts traffic on port 6443, which this guide calls kubernetes_api_port. Adjust this security list so that it accepts connections only from the networks where you expect clients to run. Port 6443 must accept connections from the cluster control plane instances and worker instances.

For this example, use the following input for the control plane load balancer subnet security list.

Compute Web UI property OCI CLI property
  • Name: kmilb-seclist

  • --vcn-id: ocid1.vcn.oke_vcn_id

  • --display-name: kmilb-seclist

One egress security rule:

  • Stateless: uncheck the box

  • Egress CIDR: 0.0.0.0/0

  • IP Protocol: All protocols

  • Description: "Allow all outgoing traffic."

One egress security rule:

--egress-security-rules

  • isStateless: false

  • destination: 0.0.0.0/0

  • destinationType: CIDR_BLOCK

  • protocol: all

  • description: "Allow all outgoing traffic."

Eight ingress security rules:

Eight ingress security rules:

--ingress-security-rules

Ingress Rule 1:

  • Stateless: uncheck the box

  • Ingress CIDR: kube_internal_cidr

    This value is required. Do not change this CIDR value.

  • IP Protocol: TCP

    • Destination Port Range: kubernetes_api_port

  • Description: "Allow a Kubernetes container to communicate with Kubernetes APIs."

Ingress Rule 1:

  • isStateless: false

  • source: kube_internal_cidr

    This value is required. Do not change this CIDR value.

  • sourceType: CIDR_BLOCK

  • protocol: 6

  • tcpOptions

    destinationPortRange

    • max: kubernetes_api_port

    • min: kubernetes_api_port

  • description: "Allow a Kubernetes container to communicate with Kubernetes APIs."

Ingress Rule 2:

  • Stateless: uncheck the box

  • Ingress CIDR: kube_client_cidr

  • IP Protocol: TCP

    • Destination Port Range: kubernetes_api_port

  • Description: "Allow clients to connect with the Kubernetes cluster."

Ingress Rule 2:

  • isStateless: false

  • source: kube_client_cidr

  • sourceType: CIDR_BLOCK

  • protocol: 6

  • tcpOptions

    destinationPortRange

    • max: kubernetes_api_port

    • min: kubernetes_api_port

  • description: "Allow clients to connect with the Kubernetes cluster."

Ingress Rule 3:

  • Stateless: uncheck the box

  • Ingress CIDR: kmi_cidr

  • IP Protocol: TCP

    • Destination Port Range: kubernetes_api_port

  • Description: "Allow the control plane to reach itself via the load balancer."

Ingress Rule 3:

  • isStateless: false

  • source: kmi_cidr

  • sourceType: CIDR_BLOCK

  • protocol: 6

  • tcpOptions

    destinationPortRange

    • max: kubernetes_api_port

    • min: kubernetes_api_port

  • description: "Allow the control plane to reach itself via the load balancer."

Ingress Rule 4:

  • Stateless: uncheck the box

  • Ingress CIDR: worker_cidr

  • IP Protocol: TCP

    • Destination Port Range: kubernetes_api_port

  • Description: "Allow worker nodes to connect with the cluster via the control plane load balancer."

Ingress Rule 4:

  • isStateless: false

  • source: worker_cidr

  • sourceType: CIDR_BLOCK

  • protocol: 6

  • tcpOptions

    destinationPortRange

    • max: kubernetes_api_port

    • min: kubernetes_api_port

  • description: "Allow worker nodes to connect with the cluster via the control plane load balancer."

Ingress Rule 5:

  • Stateless: uncheck the box

  • Ingress CIDR: worker_cidr

  • IP Protocol: TCP

    • Destination Port Range: 12250

  • Description: "Allow Kubernetes worker to Kubernetes API endpoint communication via the load balancer."

Ingress Rule 5:

  • isStateless: false

  • source: worker_cidr

  • sourceType: CIDR_BLOCK

  • protocol: 6

  • tcpOptions

    destinationPortRange

    • max: 12250

    • min: 12250

  • description: "Allow Kubernetes worker to Kubernetes API endpoint communication via the load balancer."

Ingress Rule 6:

  • Stateless: uncheck the box

  • Ingress CIDR: pod_cidr

  • IP Protocol: TCP

    • Destination Port Range: 12250

  • Description: "Allow Kubernetes pods to Kubernetes API endpoint communication via the load balancer."

Ingress Rule 6:

  • isStateless: false

  • source: pod_cidr

  • sourceType: CIDR_BLOCK

  • protocol: 6

  • tcpOptions

    destinationPortRange

    • max: 12250

    • min: 12250

  • description: "Allow Kubernetes pods to Kubernetes API endpoint communication via the load balancer."

Ingress Rule 7: Private Endpoint

  • Stateless: uncheck the box

  • Ingress CIDR: kmilb_cidr

  • IP Protocol: TCP

    • Destination Port Range: kubernetes_api_port

  • Description: "Used to create a private control plane endpoint."

Ingress Rule 7: Private Endpoint

  • isStateless: false

  • source: kmilb_cidr

  • sourceType: CIDR_BLOCK

  • protocol: 6

  • tcpOptions

    destinationPortRange

    • max: kubernetes_api_port

    • min: kubernetes_api_port

  • description: "Used to create a private control plane endpoint."

Ingress Rule 8: Public Endpoint

  • Stateless: uncheck the box

  • Ingress CIDR: public_ip_cidr

  • IP Protocol: TCP

    • Destination Port Range: kubernetes_api_port

  • Description: "Used to access the control plane endpoint from the public CIDR. The public IP CIDR is configured in the Service Enclave. If you do not know what your public IP CIDR is, ask your Service Enclave administrator."

Ingress Rule 8: Public Endpoint

  • isStateless: false

  • source: public_ip_cidr

  • sourceType: CIDR_BLOCK

  • protocol: 6

  • tcpOptions

    destinationPortRange

    • max: kubernetes_api_port

    • min: kubernetes_api_port

  • description: "Used to access the control plane endpoint from the public CIDR. The public IP CIDR is configured in the Service Enclave. If you do not know what your public IP CIDR is, ask your Service Enclave administrator."

Create the Control Plane Load Balancer Subnet

To create a subnet, use the instructions in Creating a Subnet in the Oracle Private Cloud Appliance User Guide. For Terraform input, see Example Terraform Scripts for VCN-Native Pod Networking Resources.

For this example, use the following input to create the control plane load balancer subnet. Use the OCID of the VCN that was created in Creating a VCN-Native Pod Networking VCN. Create the control plane load balancer subnet in the same compartment where you created the VCN.

Create either a private or a public control plane load balancer subnet. Create a public control plane load balancer subnet to use with a public cluster. Create a private control plane load balancer subnet to use with a private cluster.

See Private Clusters for information about using Local Peering Gateways to connect a private cluster to other instances on the Private Cloud Appliance and using Dynamic Routing Gateways to connect a private cluster to the on-premises IP address space. To create a private control plane load balancer subnet, specify one of the following route tables (see Creating a VCN-Native Pod Networking VCN):

  • vcn_private

  • lpg_rt

  • drg_rt

Table 4-19 Create a Public Control Plane Load Balancer Subnet

Compute Web UI property OCI CLI property
  • Name: control-plane-endpoint

  • CIDR Block: kmilb_cidr

  • Route Table: Select "public" from the list

  • Public Subnet: check the box

  • DNS Hostnames:

    Use DNS Hostnames in this Subnet: check the box

    • DNS Label: kmilb

  • Security Lists: Select "kmilb-seclist" and "Default Security List for oketest-vcn" from the list

  • --vcn-id: ocid1.vcn.oke_vcn_id

  • --display-name: control-plane-endpoint

  • --cidr-block: kmilb_cidr

  • --dns-label: kmilb

  • --prohibit-public-ip-on-vnic: false

  • --route-table-id: OCID of the "public" route table

  • --security-list-ids: OCIDs of the "kmilb-seclist" security list and the "Default Security List for oketest-vcn" security list

The only difference in the following private subnet is that the "vcn_private" route table is used instead of the "public" route table. Depending on your needs, you could specify the LPG route table or the DRG route table instead.

Table 4-20 Create a Private Control Plane Load Balancer Subnet

Compute Web UI property OCI CLI property
  • Name: control-plane-endpoint

  • CIDR Block: kmilb_cidr

  • Route Table: Select "vcn_private" from the list

  • Private Subnet: check the box

  • DNS Hostnames:

    Use DNS Hostnames in this Subnet: check the box

    • DNS Label: kmilb

  • Security Lists: Select "kmilb-seclist" and "Default Security List for oketest-vcn" from the list

  • --vcn-id: ocid1.vcn.oke_vcn_id

  • --display-name: control-plane-endpoint

  • --cidr-block: kmilb_cidr

  • --dns-label: kmilb

  • --prohibit-public-ip-on-vnic: true

  • --route-table-id: OCID of the "vcn_private" route table

  • --security-list-ids: OCIDs of the "kmilb-seclist" security list and the "Default Security List for oketest-vcn" security list
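The public (Table 4-19) and private (Table 4-20) load balancer subnets differ only in two parameters: `--prohibit-public-ip-on-vnic` and `--route-table-id`. The following sketch makes that difference explicit; every OCID and the CIDR are placeholders.

```python
# Sketch: the control-plane-endpoint subnet call, parameterized over the
# public/private choice (Tables 4-19 and 4-20). OCIDs and CIDR are
# PLACEHOLDERS; for a private subnet the route table can be "vcn_private",
# "lpg_rt", or "drg_rt".
def lb_subnet_args(public, route_table_ocid):
    """Arguments for `oci network subnet create` for the LB subnet."""
    return [
        "oci", "network", "subnet", "create",
        "--vcn-id", "ocid1.vcn.oc1..example",            # placeholder
        "--display-name", "control-plane-endpoint",
        "--cidr-block", "10.1.0.0/28",                   # placeholder: kmilb_cidr
        "--dns-label", "kmilb",
        # Public subnet: VNICs may receive public IPs. Private: prohibited.
        "--prohibit-public-ip-on-vnic", "false" if public else "true",
        "--route-table-id", route_table_ocid,
        "--security-list-ids",
        '["ocid1.securitylist...kmilb-seclist", "ocid1.securitylist...default"]',
    ]

public_cmd = lb_subnet_args(True, "ocid1.routetable.oc1..public")
private_cmd = lb_subnet_args(False, "ocid1.routetable.oc1..vcnprivate")
print(" ".join(public_cmd))
```

Keeping the two variants in one function documents the intent: a public cluster needs the "public" route table and public IPs allowed; a private cluster prohibits public IPs and routes through the VCN, LPG, or DRG route table.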