Creating VCN-Native Pod Networking Resources
VCN-Native Pod Networking enables you to directly manage the traffic from pods because pod IP addresses come directly from the VCN CIDR block and not from a network overlay such as Flannel Overlay. VCN-Native Pod Networking offers more flexibility and control over the traffic and allows you to use different security rules.
VCN-Native Pod Networking connects nodes in a Kubernetes cluster to pod subnets in the OKE VCN. Pod IP addresses within the OKE VCN are directly routable from other VCNs that are connected (peered) to the OKE VCN, and from on-premises networks.
When you create a cluster that uses VCN-Native Pod Networking, the VCN that you specify must have a subnet named "pod"; the system looks up the subnet by that name. The pod subnet has security rules that enable pods on control plane nodes to communicate directly with pods on worker nodes and with other pods and resources. See Creating a VCN-Native Pod Networking Pod Subnet. If you select VCN-Native Pod Networking and the VCN does not have a subnet named "pod", cluster creation fails.
When you create a node pool for a cluster that uses VCN-Native Pod Networking, the pod subnet that you specify (Pod Communication > Pod Communication Subnet in the Compute Web UI, or --pod-subnet-ids in the OCI CLI) serves as the pod subnet for pods on worker nodes. That pod subnet should have security rules that enable pods on worker nodes to communicate directly with other pods on worker nodes and on control plane nodes. You can optionally specify the worker node subnet as the pod subnet. The CIDR of the pod subnet that you specify must be larger than /25, and the pod subnet should be larger than the worker node subnet.
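For reference, the following is a minimal Terraform sketch of a node pool that sets a pod subnet; the cluster reference, Kubernetes version, availability domain, and shape values are illustrative assumptions, and node_pool_pod_network_option_details is the OCI Terraform provider's counterpart to the CLI option above.

```
# Sketch only: a node pool that uses VCN-Native Pod Networking.
# The cluster reference, version, AD name, and shape values are assumptions.
resource "oci_containerengine_node_pool" "workers" {
  cluster_id         = oci_containerengine_cluster.oke.id   # or an existing cluster OCID
  compartment_id     = var.compartment_id
  name               = "pool1"
  kubernetes_version = "v1.28.2"
  node_shape         = "VM.PCAStandard.E5.Flex"

  node_shape_config {
    ocpus         = 5
    memory_in_gbs = 80
  }

  node_config_details {
    size = 3
    placement_configs {
      availability_domain = "AD-1"
      subnet_id           = oci_core_subnet.worker.id       # worker node VNICs
    }
    node_pool_pod_network_option_details {
      cni_type          = "OCI_VCN_IP_NATIVE"
      pod_subnet_ids    = [oci_core_subnet.pod.id]          # the subnet named "pod"
      max_pods_per_node = 31
    }
  }
}
```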
In general, when you use VCN-Native Pod Networking, security rules can enable pods to communicate directly with other pods on the same node or on other nodes in the cluster, with other clusters, with other services, and with the internet.
Node Shapes and Number of Pods
When using the OCI VCN-Native Pod Networking CNI plugin, each pod needs a private IP address. By default, 31 IP addresses are assigned to a VNIC for use by pods running on the worker node.
You can specify the maximum number of pods that you want to run on a worker node. The default maximum is 31 pods per worker node. You can specify up to 110.
A node shape, and therefore a worker node, has a minimum of two VNICs. The first VNIC is connected to the worker subnet. The second VNIC is connected to the pod subnet. Therefore a worker node can support at least 31 pods. If you want more than 31 pods on a single worker node, specify a shape for the node pool that supports three or more VNICs: one VNIC to connect to the worker node subnet, and at least two VNICs to connect to the pod subnet.
A VM.PCAStandard1.4 standard node shape can have a maximum of four VNICs, and the worker node can support up to 93 pods. A VM.PCAStandard.E5.Flex node shape with five OCPUs can have a maximum of five VNICs, and the worker node can support up to 110 pods. A node cannot have more than 110 pods (see OKE Service Limits).
The following formula summarizes the maximum number of pods supported per node:
MIN((Number of VNICs - 1) * 31, 110)
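For example, the formula can be checked with a small Terraform expression (vnics_per_node is a hypothetical input; 4 VNICs matches VM.PCAStandard1.4):

```
locals {
  vnics_per_node = 4                                         # e.g., VM.PCAStandard1.4
  max_pods       = min((local.vnics_per_node - 1) * 31, 110) # = 93
}

output "max_pods_per_node" {
  value = local.max_pods
}
```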
For information about all node shapes, see "Compute Shapes" in the Compute Instance Concepts chapter in the Oracle Private Cloud Appliance Concepts Guide.
VCN-Native Pod Networking Resources
The resource definitions in the following sections in this topic create a working example set of network resources for workload clusters when you are using VCN-Native Pod Networking. Use this configuration as a guide when you create these resources. You can change the values of properties such as CIDR blocks and IP addresses. You should not change the values of properties such as the network protocol, the stateful setting, or the private/public setting.
See Workload Cluster Network Ports for VCN-Native Pod Networking for specific ports that must be open for specific purposes.
Create the following network resources. To use Terraform, see Example Terraform Scripts for VCN-Native Pod Networking Resources.
Note:
Create all of these network resources in the same compartment on the appliance.
- VCN. See Creating a VCN-Native Pod Networking VCN.
- Internet Gateway
- NAT Gateway
- Dynamic Routing Gateway
- Local Peering Gateway
- Route rules
- Security lists
- The following five subnets:
  - Pod. See Creating a VCN-Native Pod Networking Pod Subnet.
  - Worker. See Creating a VCN-Native Pod Networking Worker Subnet.
  - Worker load balancer. See Creating a VCN-Native Pod Networking Worker Load Balancer Subnet.
  - Control plane. See Creating a VCN-Native Pod Networking Control Plane Subnet.
  - Control plane load balancer. See Creating a VCN-Native Pod Networking Control Plane Load Balancer Subnet.
Workload Cluster Network CIDR Ranges for VCN-Native Pod Networking
Throughout this documentation, variables are used to represent CIDR ranges for instances in different subnets. The following table lists the CIDR variables and example values for use with VCN-Native Pod Networking.
Note:
These are examples only. The CIDR ranges you use depend on the number of clusters you have, the number of nodes in each cluster, the shape you select for the worker nodes, and the type of networking you are using.
For VCN-Native Pod Networking, every pod gets an IP address assigned from the IP address pool that is defined in the pod subnet CIDR. The shape you specify for the node pool determines the maximum number of VNICs (pods) for each worker node, as described in Node Shapes and Number of Pods.
The primary difference in IP address requirements between VCN-Native Pod Networking and Flannel Overlay networking is that VCN-Native Pod Networking requires more IP addresses to be available. Compare the following table with the smaller CIDR ranges shown in Workload Cluster Network CIDR Ranges for Flannel Overlay Networking.
Note:
The pod subnet CIDR must be larger than /25. The pod subnet should be larger than the worker node subnet.
Table 4-11 Example CIDR Values to Use with VCN-Native Pod Networking
Variable Name | Description | Example Value
---|---|---
vcn_cidr | VCN CIDR range. This is a small VCN with 8192 IP addresses for creating OKE infrastructure. | 172.31.0.0/19
worker_cidr | Worker subnet CIDR | 172.31.8.0/21
workerlb_cidr | Worker load balancer subnet CIDR | 172.31.0.0/23
kmi_cidr | OKE control plane subnet CIDR | 172.31.4.0/22
kmilb_cidr | OKE control plane load balancer subnet CIDR | 172.31.2.0/23
pod_cidr | Pod subnet CIDR | 172.31.16.0/20
kube_client_cidr | CIDR for clients that are allowed to contact the Kubernetes API server | 10.0.0.0/8
public_ip_cidr | Public IP CIDR configured in the Private Cloud Appliance Service Enclave | 10.0.0.0/8
kube_internal_cidr | CIDR used by the Kubernetes infrastructure to allocate IP addresses for various internal services and components | 253.255.0.0/16
The IP Subnet Calculator on Calculator.net is one tool for finding all available networks for a given IP address and prefix length.
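If you prefer to derive the example subnet CIDRs from the VCN CIDR rather than hard-coding them, Terraform's built-in cidrsubnet function produces exactly the ranges in the preceding table. This is a sketch; the local value names are illustrative.

```
locals {
  vcn_cidr = "172.31.0.0/19"

  workerlb_cidr = cidrsubnet(local.vcn_cidr, 4, 0) # 172.31.0.0/23
  kmilb_cidr    = cidrsubnet(local.vcn_cidr, 4, 1) # 172.31.2.0/23
  kmi_cidr      = cidrsubnet(local.vcn_cidr, 3, 1) # 172.31.4.0/22
  worker_cidr   = cidrsubnet(local.vcn_cidr, 2, 1) # 172.31.8.0/21
  pod_cidr      = cidrsubnet(local.vcn_cidr, 1, 1) # 172.31.16.0/20
}
```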
Workload Cluster Network Ports for VCN-Native Pod Networking
The following table lists ports that are used by workload clusters when you use VCN-Native Pod Networking. These ports must be available to configure workload cluster networking. You might need to open additional ports for other purposes.
All protocols are TCP. All port states are Stateful. Port 6443 is the port used for the Kubernetes API and is also known as kubernetes_api_port in this guide.
See also the tables in Port Matrix in the Oracle Private Cloud Appliance Security Guide.
If you are using a separate administration network, see OKE Cluster Management with Administration Network.
Table 4-12 Ports that Must Be Available for Use by Workload Clusters for VCN-Native Pod Networking
Source IP Address | Destination IP Address | Port | Description
---|---|---|---
Bastion host | Worker subnet (172.31.8.0/21) | 22 | Outbound connections from the bastion host to the worker CIDR.
Bastion host | Control plane subnet (172.31.4.0/22) | 22 | Outbound connections from the bastion host to the control plane nodes.
Worker subnet (172.31.8.0/21) | yum repository | 80 | Outbound connections from the worker CIDR to external applications.
Worker subnet (172.31.8.0/21) | Control plane subnet (172.31.4.0/22) | 6443 | Outbound connections from the worker CIDR to the Kubernetes API. This is necessary to allow nodes to join through either a public IP address on one of the nodes or the load balancer public IP address. Port 6443 is called the kubernetes_api_port in this guide.
Worker subnet (172.31.8.0/21) | Control plane load balancer (172.31.2.0/23) | 6443 | Inbound connections from the worker CIDR to the Kubernetes API.
CIDR for clients (10.0.0.0/8) | Control plane load balancer (172.31.2.0/23) | 6443 | Inbound connections from clients to the Kubernetes API server.
Worker subnet (172.31.8.0/21) | Control plane subnet (172.31.4.0/22) | 6443 | Private outbound connections from the worker CIDR to the Kubernetes API.
CIDR for clients (10.0.0.0/8) | Worker subnet (172.31.8.0/21) | 30000-32767 | Inbound traffic for applications from Kubernetes clients.
Control plane subnet (172.31.4.0/22) | Worker subnet (172.31.8.0/21) | 10250 | Kubernetes API endpoint to worker node communication.
Worker load balancer subnet (172.31.0.0/23) | Worker subnet (172.31.8.0/21) | 10256 | Allow the load balancer or network load balancer to communicate with kube-proxy on the worker nodes.
Pod subnet (172.31.16.0/20) | Control plane subnet (172.31.4.0/22) | 12250 | Pod to Kubernetes API endpoint communication.
Control plane subnet (172.31.4.0/22) | Control plane subnet (172.31.4.0/22) | 2379-2381 | Communication between the etcd members on the control plane nodes.
Control plane subnet (172.31.4.0/22) | Control plane subnet (172.31.4.0/22) | 10257-10260 | Inbound connections for Kubernetes control plane components.
Example Terraform Scripts for VCN-Native Pod Networking Resources
The following Terraform scripts create the network resources that are required by OKE when you are using VCN-Native Pod Networking. Other sections in this chapter show other ways to define these same network resources.
Most of the values shown in these scripts, such as resource display names and CIDRs, are examples. Some ports must be specified as shown (see Workload Cluster Network Ports for VCN-Native Pod Networking), the OKE pod subnet must be named pod, and the OKE control plane subnet must be named control-plane. See Workload Cluster Network CIDR Ranges for VCN-Native Pod Networking for comments about CIDR values.
variables.tf
This file creates several variables that are used to configure OKE network resources when you are using VCN-Native Pod Networking. Many of these variables are not assigned values in this file. One port and six CIDRs are assigned values. The kubernetes_api_port, port 6443, is the port used to access the Kubernetes API. See also Workload Cluster Network Ports for VCN-Native Pod Networking. The six CIDRs that are defined in this file are for the OKE VCN, pod subnet, worker subnet, worker load balancer subnet, control plane subnet, and control plane load balancer subnet.
variable "oci_config_file_profile" { type = string default = "DEFAULT" } variable "tenancy_ocid" { description = "tenancy OCID" type = string nullable = false } variable "compartment_id" { description = "compartment OCID" type = string nullable = false } variable "vcn_name" { description = "VCN name" nullable = false } variable "kube_client_cidr" { description = "CIDR of Kubernetes API clients" type = string nullable = false } variable "kubernetes_api_port" { description = "Port used for Kubernetes API" type = string default = "6443" } # IP network addressing variable "vcn_cidr" { default = "172.31.0.0/19" } # Subnet for KMIs where kube-apiserver and other control # plane applications run, max 9 nodes variable "kmi_cidr" { description = "Kubernetes control plane subnet CIDR" default = "172.31.4.0/22" } # Subnet for KMI load balancer variable "kmilb_cidr" { description = "Kubernetes control plane LB subnet CIDR" default = "172.31.2.0/23" } # Subnet CIDR configured for VCN public IP for NAT in Network variable "public_ip_cidr" { description = "Public IP CIDR configured in the Service Enclave" type = string nullable = false } # Subnet for worker nodes, max 128 nodes variable "worker_cidr" { description = "Kubernetes worker subnet CIDR" default = "172.31.8.0/21" } # Subnet for worker load balancer (for use by CCM) variable "workerlb_cidr" { description = "Kubernetes worker LB subnet CIDR" default = "172.31.0.0/23" } # Subnet for pod communication variable "pod_cidr" { description = "Kubernetes pod communication subnet CIDR" default = "172.31.16.0/20" } # Flag to Enable private endpoint variable "enable_private_endpoint" { description = "Flag to create private control plane endpoint/service-lb" type = bool default = false nullable = false }
terraform.tfvars
This file assigns values to some of the variables that were created in variables.tf.
```
# name of the profile to use from $HOME/.oci/config
oci_config_file_profile = "DEFAULT"

# tenancy ocid from the above profile
tenancy_ocid = "tenancy_OCID"

# compartment in which to build the OKE cluster
compartment_id = "compartment_OCID"

# display-name for the OKE VCN
vcn_name = "oketest"
```
provider.tf
This file is required to use the OCI provider. The file initializes the OCI provider using the named profile from the OCI configuration file.
provider "oci" { config_file_profile = var.oci_config_file_profile tenancy_ocid = var.tenancy_ocid }
main.tf
This file specifies the provider to use (oracle/oci), defines several security list rules, and initializes required local variables.
The version of the OCI provider that you use must be at least v4.50.0 but no greater than v6.36.0.
```
terraform {
  required_providers {
    oci = {
      source  = "oracle/oci"
      version = ">= 4.50.0, <= 6.36.0"
      # If necessary, you can pin a specific version here
      # version = "4.71.0"
    }
  }
  required_version = ">= 1.1"
}

locals {
  kube_internal_cidr = "253.255.0.0/16"

  worker_lb_ingress_rules = [
    { source = var.kube_client_cidr, port_min = 80, port_max = 80 },
    { source = var.kube_client_cidr, port_min = 443, port_max = 443 },
  ]

  worker_ingress_rules = [
    { source = var.kube_client_cidr, port_min = 30000, port_max = 32767 },
    { source = var.kmi_cidr, port_min = 22, port_max = 22 },
    { source = var.worker_cidr, port_min = 22, port_max = 22 },
    { source = var.worker_cidr, port_min = 10250, port_max = 10250 },
    { source = var.worker_cidr, port_min = 10256, port_max = 10256 },
    { source = var.worker_cidr, port_min = 30000, port_max = 32767 },
    { source = var.workerlb_cidr, port_min = 10256, port_max = 10256 },
    { source = var.workerlb_cidr, port_min = 30000, port_max = 32767 },
    { source = var.kmi_cidr, port_min = 10250, port_max = 10250 },
    { source = var.kmi_cidr, port_min = 10256, port_max = 10256 },
    { source = var.pod_cidr, port_min = 30000, port_max = 32767 },
  ]

  kmi_lb_ingress_rules = [
    { source = local.kube_internal_cidr, port_min = var.kubernetes_api_port, port_max = var.kubernetes_api_port },
    { source = var.kube_client_cidr, port_min = var.kubernetes_api_port, port_max = var.kubernetes_api_port },
    { source = var.kmi_cidr, port_min = var.kubernetes_api_port, port_max = var.kubernetes_api_port },
    { source = var.worker_cidr, port_min = var.kubernetes_api_port, port_max = var.kubernetes_api_port },
    { source = var.worker_cidr, port_min = 12250, port_max = 12250 },
    { source = var.pod_cidr, port_min = 12250, port_max = 12250 },
  ]

  kmi_ingress_rules = [
    { source = var.kube_client_cidr, port_min = var.kubernetes_api_port, port_max = var.kubernetes_api_port },
    { source = var.kmilb_cidr, port_min = var.kubernetes_api_port, port_max = var.kubernetes_api_port },
    { source = var.kmilb_cidr, port_min = 12250, port_max = 12250 },
    { source = var.worker_cidr, port_min = var.kubernetes_api_port, port_max = var.kubernetes_api_port },
    { source = var.worker_cidr, port_min = 12250, port_max = 12250 },
    { source = var.kmi_cidr, port_min = var.kubernetes_api_port, port_max = var.kubernetes_api_port },
    { source = var.kmi_cidr, port_min = 2379, port_max = 2381 },
    { source = var.kmi_cidr, port_min = 10250, port_max = 10250 },
    { source = var.kmi_cidr, port_min = 10257, port_max = 10260 },
    { source = var.pod_cidr, port_min = var.kubernetes_api_port, port_max = var.kubernetes_api_port },
    { source = var.pod_cidr, port_min = 12250, port_max = 12250 },
  ]

  pod_ingress_rules = [
    { source = var.vcn_cidr, port_min = 22, port_max = 22 },
    { source = var.workerlb_cidr, port_min = 10256, port_max = 10256 },
    { source = var.worker_cidr, port_min = 10250, port_max = 10250 },
    { source = var.worker_cidr, port_min = 10256, port_max = 10256 },
    { source = var.worker_cidr, port_min = 80, port_max = 80 },
  ]
}
```
oke_vcn.tf
This file defines a VCN, NAT gateway, internet gateway, private route table, and public route table. The private route table is the default route table for the VCN.
resource "oci_core_vcn" "oke_vcn" { cidr_block = var.vcn_cidr dns_label = var.vcn_name compartment_id = var.compartment_id display_name = "${var.vcn_name}-vcn" } resource "oci_core_nat_gateway" "vcn_ngs" { compartment_id = var.compartment_id vcn_id = oci_core_vcn.oke_vcn.id count = var.enable_private_endpoint ? 0:1 display_name = "VCN nat g6s" } resource "oci_core_internet_gateway" "vcn_igs" { compartment_id = var.compartment_id vcn_id = oci_core_vcn.oke_vcn.id count = var.enable_private_endpoint ? 0:1 display_name = "VCN i6t g6s" enabled = true } resource "oci_core_default_route_table" "default_private" { manage_default_resource_id = oci_core_vcn.oke_vcn.default_route_table_id display_name = "Default - private" count = var.enable_private_endpoint ? 1:0 } resource "oci_core_default_route_table" "private" { count = var.enable_private_endpoint ? 0:1 manage_default_resource_id = oci_core_vcn.oke_vcn.default_route_table_id display_name = "Default - private" route_rules { destination = "0.0.0.0/0" destination_type = "CIDR_BLOCK" network_entity_id = oci_core_nat_gateway.vcn_ngs[0].id } } resource "oci_core_route_table" "public" { count = var.enable_private_endpoint ? 0:1 compartment_id = var.compartment_id vcn_id = oci_core_vcn.oke_vcn.id display_name = "public" route_rules { destination = "0.0.0.0/0" destination_type = "CIDR_BLOCK" network_entity_id = oci_core_internet_gateway.vcn_igs[0].id } }
oke_pod_seclist.tf
This file defines the security list for the pod subnet. The rules for this security list were defined in other Terraform files in this set.
resource "oci_core_security_list" "pod" { compartment_id = var.compartment_id vcn_id = oci_core_vcn.oke_vcn.id display_name = "${var.vcn_name}-pod" dynamic "ingress_security_rules" { iterator = port for_each = local.pod_ingress_rules content { source = port.value.source source_type = "CIDR_BLOCK" protocol = "6" tcp_options { min = port.value.port_min max = port.value.port_max } } } dynamic "ingress_security_rules" { iterator = icmp_type for_each = [0, 8] content { # ping from VCN; unreachable/TTL from anywhere source = var.kmi_cidr source_type = "CIDR_BLOCK" protocol = "1" icmp_options { type = icmp_type.value } } } dynamic "ingress_security_rules" { for_each = var.pod_cidr != null ? [var.pod_cidr] : [] content { source = ingress_security_rules.value source_type = "CIDR_BLOCK" protocol = "all" } } }
oke_pod_subnet.tf
This file defines the pod subnet.
Important: The name of the pod subnet must be exactly pod.
resource "oci_core_subnet" "pod" { cidr_block = var.pod_cidr compartment_id = var.compartment_id vcn_id = oci_core_vcn.oke_vcn.id display_name = "pod" dns_label = "pod" prohibit_public_ip_on_vnic = true security_list_ids = [ oci_core_default_security_list.oke_vcn.id, oci_core_security_list.pod.id ] }
oke_worker_seclist.tf
This file defines the security lists for both the worker subnet and the worker load balancer subnet. The rules for these security lists were defined in other Terraform files in this set.
resource "oci_core_security_list" "workerlb" { display_name = "${var.vcn_name}-workerlb" compartment_id = var.compartment_id vcn_id = oci_core_vcn.oke_vcn.id dynamic "ingress_security_rules" { iterator = port for_each = local.worker_lb_ingress_rules content { source = port.value.source source_type = "CIDR_BLOCK" protocol = "6" tcp_options { min = port.value.port_min max = port.value.port_max } } } } resource "oci_core_security_list" "worker" { display_name = "${var.vcn_name}-worker" compartment_id = var.compartment_id vcn_id = oci_core_vcn.oke_vcn.id dynamic "ingress_security_rules" { iterator = port for_each = local.worker_ingress_rules content { source = port.value.source source_type = "CIDR_BLOCK" protocol = "6" tcp_options { min = port.value.port_min max = port.value.port_max } } } dynamic "ingress_security_rules" { iterator = icmp_type for_each = [0, 8] content { # ping from VCN; unreachable/TTL from anywhere source = var.kmi_cidr source_type = "CIDR_BLOCK" protocol = "1" icmp_options { type = icmp_type.value } } } }
oke_worker_subnet.tf
This file defines the worker and worker load balancer subnets. The worker load balancer subnet is named service-lb.
resource "oci_core_subnet" "worker" { cidr_block = var.worker_cidr compartment_id = var.compartment_id vcn_id = oci_core_vcn.oke_vcn.id display_name = "worker" dns_label = "worker" prohibit_public_ip_on_vnic = true security_list_ids = [ oci_core_default_security_list.oke_vcn.id, oci_core_security_list.worker.id ] } resource "oci_core_subnet" "worker_lb" { cidr_block = var.workerlb_cidr compartment_id = var.compartment_id vcn_id = oci_core_vcn.oke_vcn.id display_name = "service-lb" dns_label = "servicelb" prohibit_public_ip_on_vnic = var.enable_private_endpoint route_table_id = var.enable_private_endpoint==false ? oci_core_route_table.public[0].id : oci_core_vcn.oke_vcn.default_route_table_id security_list_ids = [ oci_core_default_security_list.oke_vcn.id, oci_core_security_list.workerlb.id ] }
oke_kmi_seclist.tf
This file defines the security lists for the control plane and control plane load balancer subnets. This file also defines updates to make to the default security list for the VCN.
resource "oci_core_default_security_list" "oke_vcn" { manage_default_resource_id = oci_core_vcn.oke_vcn.default_security_list_id egress_security_rules { destination = "0.0.0.0/0" destination_type = "CIDR_BLOCK" protocol = "all" } dynamic "ingress_security_rules" { iterator = icmp_type for_each = [3, 8, 11] content { # ping from VCN; unreachable/TTL from anywhere source = (icmp_type.value == "8" ? var.vcn_cidr : "0.0.0.0/0") source_type = "CIDR_BLOCK" protocol = "1" icmp_options { type = icmp_type.value } } } } resource "oci_core_security_list" "kmilb" { compartment_id = var.compartment_id vcn_id = oci_core_vcn.oke_vcn.id display_name = "${var.vcn_name}-kmilb" dynamic "ingress_security_rules" { iterator = port for_each = local.kmi_lb_ingress_rules content { source = port.value.source source_type = "CIDR_BLOCK" protocol = "6" tcp_options { min = port.value.port_min max = port.value.port_max } } } dynamic "ingress_security_rules" { for_each = var.enable_private_endpoint ? [1] : [] content { source = var.kmilb_cidr source_type = "CIDR_BLOCK" protocol = "6" tcp_options { min = var.kubernetes_api_port max = var.kubernetes_api_port } } } dynamic "ingress_security_rules" { for_each = var.enable_private_endpoint ? [] : [0] content { source = var.public_ip_cidr source_type = "CIDR_BLOCK" protocol = "6" tcp_options { min = var.kubernetes_api_port max = var.kubernetes_api_port } } } depends_on = [] } resource "oci_core_security_list" "kmi" { compartment_id = var.compartment_id vcn_id = oci_core_vcn.oke_vcn.id display_name = "${var.vcn_name}-kmi" dynamic "ingress_security_rules" { iterator = port for_each = local.kmi_ingress_rules content { source = port.value.source source_type = "CIDR_BLOCK" protocol = "6" tcp_options { min = port.value.port_min max = port.value.port_max } } } }
oke_kmi_subnet.tf
This file defines the control plane and control plane load balancer subnets.
Important: The name of the kmi subnet must be exactly control-plane.
resource "oci_core_subnet" "kmi" { cidr_block = var.kmi_cidr compartment_id = var.compartment_id display_name = "control-plane" dns_label = "kmi" vcn_id = oci_core_vcn.oke_vcn.id prohibit_public_ip_on_vnic = true security_list_ids = [ oci_core_default_security_list.oke_vcn.id, oci_core_security_list.kmi.id ] } resource "oci_core_subnet" "kmi_lb" { cidr_block = var.kmilb_cidr compartment_id = var.compartment_id dns_label = "kmilb" vcn_id = oci_core_vcn.oke_vcn.id display_name = "control-plane-endpoint" prohibit_public_ip_on_vnic = var.enable_private_endpoint route_table_id = var.enable_private_endpoint==false ? oci_core_route_table.public[0].id : oci_core_default_route_table.default_private[0].id security_list_ids = [ oci_core_default_security_list.oke_vcn.id, oci_core_security_list.kmilb.id ] }
Creating a VCN-Native Pod Networking VCN
Create the following resources in the order listed:
- VCN
- Route rules
  - Public clusters:
    - Internet gateway and a route table with a route rule that references that internet gateway.
    - NAT gateway and a route table with a route rule that references that NAT gateway.
  - Private clusters:
    - Route table with no route rules.
    - (Optional) Dynamic Routing Gateway (DRG): create the DRG, attach the OKE VCN to that DRG, and create a route table with a route rule that references that DRG. See Private Clusters.
    - (Optional) Local Peering Gateway (LPG) and a route table with a route rule that references that LPG. See Private Clusters.
- Security list. Modify the VCN default security list.
Resource names and CIDR blocks are example values.
VCN
To create the VCN, use the instructions in Creating a VCN in the Oracle Private Cloud Appliance User Guide. For Terraform input, see Example Terraform Scripts for VCN-Native Pod Networking Resources.
For this example, use the following input to create the VCN. The VCN covers one contiguous CIDR block. The CIDR block cannot be changed after the VCN is created.
Compute Web UI property | OCI CLI property
---|---
Name: oketest-vcn | --display-name oketest-vcn
CIDR Block: 172.31.0.0/19 | --cidr-block 172.31.0.0/19
DNS Label: oketest | --dns-label oketest
Note the OCID of the new VCN. In the examples in this guide, this VCN OCID is ocid1.vcn.oke_vcn_id.
Next Steps
- Public internet access. For traffic on a public subnet that connects to the internet using public IP addresses, create an internet gateway and a route rule that references that internet gateway.
- Private internet access. For traffic on a private subnet that needs to connect to the internet without exposing private IP addresses, create a NAT gateway and a route rule that references that NAT gateway.
- VCN-only access. To restrict communication to only other resources on the same VCN, use the default route table, which has no route rules.
- Instances in another VCN. To enable communication between the cluster and an instance running on a different VCN, create a Local Peering Gateway (LPG) and a route rule that references that LPG.
- Data center IP address space. To enable communication between the cluster and the on-premises network IP address space, create a Dynamic Routing Gateway (DRG) and a route rule that references that DRG.
VCN Private Route Table
Edit the default route table that was created when you created the VCN. Change the name of the route table to vcn_private. This route table does not have any route rules. Do not add any route rules.
NAT Private Route Table
Create a NAT gateway and a route table with a route rule that references the NAT gateway.
NAT Gateway
To create the NAT gateway, use the instructions in Enabling Public Connections through a NAT Gateway in the Oracle Private Cloud Appliance User Guide. For Terraform input, see Example Terraform Scripts for VCN-Native Pod Networking Resources.
Note the name and OCID of the NAT gateway for assignment to the private route rule.
Private Route Rule
To create a route table, use the instructions in "Creating a Route Table" in Working with Route Tables in the Oracle Private Cloud Appliance User Guide. For Terraform input, see Example Terraform Scripts for VCN-Native Pod Networking Resources.
For this example, use the following input to create the route table with a private route rule that references the NAT gateway that was created in the preceding step.
Compute Web UI property | OCI CLI property
---|---
Name: for example, nat_private | --display-name nat_private
Route rule: Target Type NAT Gateway; Destination CIDR Block 0.0.0.0/0; Target: the NAT gateway created in the preceding step | --route-rules: destination 0.0.0.0/0, destinationType CIDR_BLOCK, networkEntityId set to the NAT gateway OCID
Note the name and OCID of this route table for assignment to private subnets.
Local Peering Gateway
Create a Local Peering Gateway (LPG) and a route table with a route rule that references the LPG.
Local Peering Gateway
To create the LPG, use the instructions in "Connecting VCNs through a Local Peering Gateway" in the Networking chapter of the Oracle Private Cloud Appliance User Guide.
Note the name and OCID of the LPG for assignment to the private route rule.
Private Route Rule
To create a route table, use the instructions in "Creating a Route Table" in Working with Route Tables in the Oracle Private Cloud Appliance User Guide.
For this example, use the following input to create the route table with a private route rule that references the LPG that was created in the preceding step.
Compute Web UI property | OCI CLI property
---|---
Name: for example, lpg_rt | --display-name lpg_rt
Route rule: Target Type Local Peering Gateway; Destination CIDR Block: the CIDR of the peered VCN; Target: the LPG created in the preceding step | --route-rules: destination set to the peered VCN CIDR, destinationType CIDR_BLOCK, networkEntityId set to the LPG OCID
Note the name and OCID of this route table for assignment to the "control-plane-endpoint" subnet (Creating a VCN-Native Pod Networking Control Plane Load Balancer Subnet).
Add the same route rule on the second VCN (the peered VCN), specifying the OKE VCN CIDR as the destination.
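The example Terraform scripts in this chapter do not define the LPG. The following is a minimal sketch, assuming hypothetical variables peer_lpg_ocid (the OCID of the LPG in the peered VCN) and peer_vcn_cidr:

```
# Sketch: LPG plus the lpg_rt route table (peer_lpg_ocid/peer_vcn_cidr are hypothetical variables)
resource "oci_core_local_peering_gateway" "oke_lpg" {
  compartment_id = var.compartment_id
  vcn_id         = oci_core_vcn.oke_vcn.id
  display_name   = "oke-lpg"
  peer_id        = var.peer_lpg_ocid   # OCID of the LPG in the peered VCN
}

resource "oci_core_route_table" "lpg_rt" {
  compartment_id = var.compartment_id
  vcn_id         = oci_core_vcn.oke_vcn.id
  display_name   = "lpg_rt"

  route_rules {
    destination       = var.peer_vcn_cidr   # CIDR of the peered VCN
    destination_type  = "CIDR_BLOCK"
    network_entity_id = oci_core_local_peering_gateway.oke_lpg.id
  }
}
```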
Dynamic Routing Gateway
Create a Dynamic Routing Gateway (DRG) and a route table with a route rule that references the DRG.
Dynamic Routing Gateway
To create the DRG and attach the OKE VCN to that DRG, use the instructions in "Connecting to the On-Premises Network through a Dynamic Routing Gateway" in the Networking chapter of the Oracle Private Cloud Appliance User Guide. Create the DRG in the OKE VCN compartment, and then attach the OKE VCN to that DRG.
Note the name and OCID of the DRG for assignment to the private route rule.
Private Route Rule
To create a route table, use the instructions in "Creating a Route Table" in Working with Route Tables in the Oracle Private Cloud Appliance User Guide.
For this example, use the following input to create the route table with a private route rule that references the DRG that was created in the preceding step.
Compute Web UI property | OCI CLI property
---|---
Name: for example, drg_rt | --display-name drg_rt
Route rule: Target Type Dynamic Routing Gateway; Destination CIDR Block: the on-premises network CIDR; Target: the DRG created in the preceding step | --route-rules: destination set to the on-premises network CIDR, destinationType CIDR_BLOCK, networkEntityId set to the DRG OCID
Note the name and OCID of this route table for assignment to the "control-plane-endpoint" subnet (Creating a VCN-Native Pod Networking Control Plane Load Balancer Subnet).
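The example Terraform scripts in this chapter do not define the DRG. The following is a minimal sketch, assuming a hypothetical variable on_prem_cidr for the on-premises network:

```
# Sketch: DRG, VCN attachment, and the drg_rt route table (on_prem_cidr is a hypothetical variable)
resource "oci_core_drg" "oke_drg" {
  compartment_id = var.compartment_id
  display_name   = "oke-drg"
}

resource "oci_core_drg_attachment" "oke_drg_attachment" {
  drg_id       = oci_core_drg.oke_drg.id
  vcn_id       = oci_core_vcn.oke_vcn.id
  display_name = "oke-drg-attachment"
}

resource "oci_core_route_table" "drg_rt" {
  compartment_id = var.compartment_id
  vcn_id         = oci_core_vcn.oke_vcn.id
  display_name   = "drg_rt"

  route_rules {
    destination       = var.on_prem_cidr   # on-premises network CIDR
    destination_type  = "CIDR_BLOCK"
    network_entity_id = oci_core_drg.oke_drg.id
  }
}
```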
Public Route Table
Create an Internet gateway and a route table with a route rule that references the Internet gateway.
Internet Gateway
To create the internet gateway, use the instructions in Providing Public Access through an Internet Gateway in the Oracle Private Cloud Appliance User Guide. For Terraform input, see Example Terraform Scripts for VCN-Native Pod Networking Resources.
Note the name and OCID of the internet gateway for assignment to the public route rule.
Public Route Rule
To create a route table, use the instructions in "Creating a Route Table" in Working with Route Tables in the Oracle Private Cloud Appliance User Guide. For Terraform input, see Example Terraform Scripts for VCN-Native Pod Networking Resources.
For this example, use the following input to create the route table with a public route rule that references the internet gateway that was created in the preceding step.
Compute Web UI property | OCI CLI property
---|---
Name: for example, public | --display-name public
Route rule: Target Type Internet Gateway; Destination CIDR Block 0.0.0.0/0; Target: the internet gateway created in the preceding step | --route-rules: destination 0.0.0.0/0, destinationType CIDR_BLOCK, networkEntityId set to the internet gateway OCID
Note the name and OCID of this route table for assignment to public subnets.
VCN Default Security List
Modify the default security list: delete all of the default rules, and create the rules shown in the following table.
To modify a security list, use the instructions in "Updating a Security List" in Controlling Traffic with Security Lists in the Oracle Private Cloud Appliance User Guide. For Terraform input, see Example Terraform Scripts for VCN-Native Pod Networking Resources.
Compute Web UI property | OCI CLI property
---|---
One egress security rule: Stateful; Destination 0.0.0.0/0; Protocol: All | One egress security rule: destination 0.0.0.0/0, destinationType CIDR_BLOCK, protocol all, isStateless false
Three ingress security rules (all stateful ICMP): | Three ingress security rules (protocol 1, isStateless false):
Ingress Rule 1: Source 0.0.0.0/0; ICMP Type 3 | source 0.0.0.0/0, icmpOptions type 3
Ingress Rule 2: Source 172.31.0.0/19 (the VCN CIDR); ICMP Type 8 | source 172.31.0.0/19, icmpOptions type 8
Ingress Rule 3: Source 0.0.0.0/0; ICMP Type 11 | source 0.0.0.0/0, icmpOptions type 11
Note the name and OCID of this default security list for assignment to subnets.
Creating a VCN-Native Pod Networking Pod Subnet
The instructions in this topic create a pod subnet named "pod" in the VCN; this subnet provides the private IP addresses for pods running on the control plane nodes. The number of IP addresses in this subnet should be equal to or greater than the number of IP addresses in the control plane subnet. The pod subnet must be a private subnet.
The pod subnet supports communication between pods and direct access to individual pods using private pod IP addresses. It enables pods to communicate with other pods on the same worker node, with pods on other worker nodes, with OCI services (through a service gateway), and with the internet (through a NAT gateway).
Create the following resources in the order listed:
- Pod security list
- Pod subnet
Create a Pod Security List
To create a security list, use the instructions in "Creating a Security List" in Controlling Traffic with Security Lists in the Oracle Private Cloud Appliance User Guide. For Terraform input, see Example Terraform Scripts for VCN-Native Pod Networking Resources.
The security rules shown in the following table define traffic that is allowed to contact pods directly. Use these security rules as part of network security groups (NSGs) or in security lists. Oracle recommends using NSGs. See Security Best Practices.
The security rules apply to all pods in all the worker nodes connected to the pod subnet specified for a node pool.
Incoming requests are routed to pods based on the routing policies specified by route rules and route tables. See the route tables defined in Creating a VCN-Native Pod Networking VCN.
For this example, use the following input for the pod subnet security list.
Name: oketest-pod

One egress security rule:
- Stateful; Destination 0.0.0.0/0; Protocol: All.

Eight ingress security rules (all stateful):
- Ingress Rule 1: TCP from source 172.31.0.0/19 (the VCN CIDR), destination port 22.
- Ingress Rule 2: TCP from source 172.31.0.0/23 (worker load balancer subnet), destination port 10256.
- Ingress Rule 3: TCP from source 172.31.8.0/21 (worker subnet), destination port 10250.
- Ingress Rule 4: TCP from source 172.31.8.0/21 (worker subnet), destination port 10256.
- Ingress Rule 5: TCP from source 172.31.8.0/21 (worker subnet), destination port 80. This ingress is optional. This port is open for an end user application. This rule could be different based on what applications are deployed.
- Ingress Rule 6: ICMP type 0 from source 172.31.4.0/22 (control plane subnet).
- Ingress Rule 7: ICMP type 8 from source 172.31.4.0/22 (control plane subnet).
- Ingress Rule 8: All protocols from source 172.31.16.0/20 (pod subnet).
Create the Pod Subnet
To create a subnet, use the instructions in Creating a Subnet in the Oracle Private Cloud Appliance User Guide. For Terraform input, see Example Terraform Scripts for VCN-Native Pod Networking Resources.
For this example, use the following input to create the pod subnet. Use the OCID of the VCN that was created in Creating a VCN-Native Pod Networking VCN. Create the pod subnet in the same compartment where you created the VCN.
Important:
The name of this subnet must be exactly "pod".
Compute Web UI property | OCI CLI property
---|---
Name: pod | --display-name pod
CIDR Block: 172.31.16.0/20 | --cidr-block 172.31.16.0/20
Route Table: the NAT private route table, or the vcn_private route table for VCN-only access | --route-table-id set to the route table OCID
Private Subnet: Yes | --prohibit-public-ip-on-vnic true
DNS Label: pod | --dns-label pod
Security Lists: the default security list and the pod security list | --security-list-ids: OCIDs of the default security list and the pod security list
Creating a VCN-Native Pod Networking Worker Subnet
Create the following resources in the order listed:
- Worker security list
- Worker subnet
Create a Worker Security List
To create a security list, use the instructions in "Creating a Security List" in Controlling Traffic with Security Lists in the Oracle Private Cloud Appliance User Guide. For Terraform input, see Example Terraform Scripts for VCN-Native Pod Networking Resources.
This security list defines traffic that is allowed to contact worker nodes directly.
For this example, use the following input for the worker subnet security list.
Name: oketest-worker

One egress security rule:
- Stateful; Destination 0.0.0.0/0; Protocol: All.

Thirteen ingress security rules (all stateful):
- Ingress Rule 1: TCP from source 10.0.0.0/8 (Kubernetes API clients), destination ports 30000-32767.
- Ingress Rule 2: TCP from source 172.31.4.0/22 (control plane subnet), destination port 22.
- Ingress Rule 3: TCP from source 172.31.8.0/21 (worker subnet), destination port 22.
- Ingress Rule 4: TCP from source 172.31.8.0/21 (worker subnet), destination port 10250.
- Ingress Rule 5: TCP from source 172.31.8.0/21 (worker subnet), destination port 10256.
- Ingress Rule 6: TCP from source 172.31.8.0/21 (worker subnet), destination ports 30000-32767.
- Ingress Rule 7: TCP from source 172.31.0.0/23 (worker load balancer subnet), destination port 10256.
- Ingress Rule 8: TCP from source 172.31.0.0/23 (worker load balancer subnet), destination ports 30000-32767.
- Ingress Rule 9: TCP from source 172.31.4.0/22 (control plane subnet), destination port 10250.
- Ingress Rule 10: TCP from source 172.31.4.0/22 (control plane subnet), destination port 10256.
- Ingress Rule 11: TCP from source 172.31.16.0/20 (pod subnet), destination ports 30000-32767.
- Ingress Rule 12: ICMP type 0 from source 172.31.4.0/22 (control plane subnet).
- Ingress Rule 13: ICMP type 8 from source 172.31.4.0/22 (control plane subnet).
Create the Worker Subnet
To create a subnet, use the instructions in Creating a Subnet in the Oracle Private Cloud Appliance User Guide. For Terraform input, see Example Terraform Scripts for VCN-Native Pod Networking Resources.
For this example, use the following input to create the worker subnet. Use the OCID of the VCN that was created in Creating a VCN-Native Pod Networking VCN. Create the worker subnet in the same compartment where you created the VCN.
Create either a NAT private worker subnet or a VCN private worker subnet. Create a NAT private worker subnet to communicate outside the VCN.
Table 4-13 Create a NAT Private Worker Subnet
Compute Web UI property | OCI CLI property
---|---
Name: worker | --display-name worker
CIDR Block: 172.31.8.0/21 | --cidr-block 172.31.8.0/21
Route Table: the NAT private route table | --route-table-id set to the NAT private route table OCID
Private Subnet: Yes | --prohibit-public-ip-on-vnic true
DNS Label: worker | --dns-label worker
Security Lists: the default security list and the worker security list | --security-list-ids: OCIDs of the default security list and the worker security list
The only difference in the following private subnet is that the VCN private route table is used instead of the NAT private route table.
Table 4-14 Create a VCN Private Worker Subnet
Compute Web UI property | OCI CLI property
---|---
Route Table: the vcn_private route table | --route-table-id set to the vcn_private route table OCID

All other properties are the same as in Table 4-13.
Creating a VCN-Native Pod Networking Worker Load Balancer Subnet
Create the following resources in the order listed:
- Worker load balancer security list
- Worker load balancer subnet
Create a Worker Load Balancer Security List
To create a security list, use the instructions in "Creating a Security List" in Controlling Traffic with Security Lists in the Oracle Private Cloud Appliance User Guide. For Terraform input, see Example Terraform Scripts for VCN-Native Pod Networking Resources.
This security list defines the traffic, such as application traffic, that is allowed to contact the worker load balancer.
For this example, use the following input for the worker load balancer subnet security list. These sources and destinations are examples; adjust these for your applications.
Note:
When you create an external load balancer for your containerized applications (see Exposing Containerized Applications), remember to add that load balancer service front-end port to this security list.
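For example, if an application's load balancer front end listens on port 8080 (a hypothetical port), the corresponding rule can be appended to worker_lb_ingress_rules in main.tf:

```
locals {
  worker_lb_ingress_rules = [
    { source = var.kube_client_cidr, port_min = 80,   port_max = 80 },
    { source = var.kube_client_cidr, port_min = 443,  port_max = 443 },
    # Hypothetical application front-end port; adjust to your service
    { source = var.kube_client_cidr, port_min = 8080, port_max = 8080 },
  ]
}
```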
Name: oketest-workerlb

One egress security rule:
- Stateful; Destination 0.0.0.0/0; Protocol: All.

Two ingress security rules (all stateful):
- Ingress Rule 1: TCP from source 10.0.0.0/8 (Kubernetes API clients), destination port 80.
- Ingress Rule 2: TCP from source 10.0.0.0/8 (Kubernetes API clients), destination port 443.
Create the Worker Load Balancer Subnet
To create a subnet, use the instructions in Creating a Subnet in the Oracle Private Cloud Appliance User Guide. For Terraform input, see Example Terraform Scripts for VCN-Native Pod Networking Resources.
For this example, use the following input to create the worker load balancer subnet. Use the OCID of the VCN that was created in Creating a VCN-Native Pod Networking VCN. Create the worker load balancer subnet in the same compartment where you created the VCN.
Create either a private or a public worker load balancer subnet. Create a public worker load balancer subnet to use with a public cluster. Create a private worker load balancer subnet to expose applications in a private cluster.
Table 4-15 Create a Public Worker Load Balancer Subnet
Compute Web UI property | OCI CLI property
---|---
Name: service-lb | --display-name service-lb
CIDR Block: 172.31.0.0/23 | --cidr-block 172.31.0.0/23
Route Table: the public route table | --route-table-id set to the public route table OCID
Public Subnet: Yes | --prohibit-public-ip-on-vnic false
DNS Label: servicelb | --dns-label servicelb
Security Lists: the default security list and the worker load balancer security list | --security-list-ids: OCIDs of the default security list and the workerlb security list
The differences in the following private subnet are that the VCN private route table is used instead of the public route table and that public IP addresses are prohibited.
Table 4-16 Create a VCN Private Worker Load Balancer Subnet
Compute Web UI property | OCI CLI property
---|---
Route Table: the vcn_private route table | --route-table-id set to the vcn_private route table OCID
Private Subnet: Yes | --prohibit-public-ip-on-vnic true

All other properties are the same as in Table 4-15.
Creating a VCN-Native Pod Networking Control Plane Subnet
Create the following resources in the order listed:
- Control plane security list
- Control plane subnet
Create a Control Plane Security List
To create a security list, use the instructions in "Creating a Security List" in Controlling Traffic with Security Lists in the Oracle Private Cloud Appliance User Guide. For Terraform input, see Example Terraform Scripts for VCN-Native Pod Networking Resources.
For this example, use the following input for the control plane subnet security list. The kubernetes_api_port is the port used to access the Kubernetes API: port 6443. See also Workload Cluster Network Ports for VCN-Native Pod Networking.
Name: oketest-kmi

One egress security rule:
- Stateful; Destination 0.0.0.0/0; Protocol: All.

Eleven ingress security rules (all stateful TCP):
- Ingress Rule 1: source 10.0.0.0/8 (Kubernetes API clients), destination port 6443 (the kubernetes_api_port).
- Ingress Rule 2: source 172.31.2.0/23 (control plane load balancer subnet), destination port 6443.
- Ingress Rule 3: source 172.31.2.0/23 (control plane load balancer subnet), destination port 12250.
- Ingress Rule 4: source 172.31.8.0/21 (worker subnet), destination port 6443.
- Ingress Rule 5: source 172.31.8.0/21 (worker subnet), destination port 12250.
- Ingress Rule 6: source 172.31.4.0/22 (control plane subnet), destination port 6443.
- Ingress Rule 7: source 172.31.4.0/22 (control plane subnet), destination ports 2379-2381.
- Ingress Rule 8: source 172.31.4.0/22 (control plane subnet), destination port 10250.
- Ingress Rule 9: source 172.31.4.0/22 (control plane subnet), destination ports 10257-10260.
- Ingress Rule 10: source 172.31.16.0/20 (pod subnet), destination port 6443.
- Ingress Rule 11: source 172.31.16.0/20 (pod subnet), destination port 12250.
Create the Control Plane Subnet
To create a subnet, use the instructions in Creating a Subnet in the Oracle Private Cloud Appliance User Guide. For Terraform input, see Example Terraform Scripts for VCN-Native Pod Networking Resources.
Use the following input to create the control plane subnet. Use the OCID of the VCN that was created in Creating a VCN-Native Pod Networking VCN. Create the control plane subnet in the same compartment where you created the VCN.
Create either a NAT private control plane subnet or a VCN private control plane subnet. Create a NAT private control plane subnet to communicate outside the VCN.
Important:
The name of this subnet must be exactly "control-plane".
Table 4-17 Create a NAT Private Control Plane Subnet
Compute Web UI property | OCI CLI property
---|---
Name: control-plane | --display-name control-plane
CIDR Block: 172.31.4.0/22 | --cidr-block 172.31.4.0/22
Route Table: the NAT private route table | --route-table-id set to the NAT private route table OCID
Private Subnet: Yes | --prohibit-public-ip-on-vnic true
DNS Label: kmi | --dns-label kmi
Security Lists: the default security list and the control plane (kmi) security list | --security-list-ids: OCIDs of the default security list and the kmi security list
The only difference in the following private subnet is that the VCN private route table is used instead of the NAT private route table.
Table 4-18 Create a VCN Private Control Plane Subnet
Compute Web UI property | OCI CLI property
---|---
Route Table: the vcn_private route table | --route-table-id set to the vcn_private route table OCID

All other properties are the same as in Table 4-17.
Creating a VCN-Native Pod Networking Control Plane Load Balancer Subnet
Create the following resources in the order listed:
- Control plane load balancer security list
- Control plane load balancer subnet
Create a Control Plane Load Balancer Security List
To create a security list, use the instructions in "Creating a Security List" in Controlling Traffic with Security Lists in the Oracle Private Cloud Appliance User Guide. For Terraform input, see Example Terraform Scripts for VCN-Native Pod Networking Resources.
The control plane load balancer accepts traffic on port 6443, which is also called kubernetes_api_port in this guide. Adjust this security list to accept connections only from the locations where you expect clients to run. Port 6443 must accept connections from the cluster control plane instances and worker instances.
For this example, use the following input for the control plane load balancer subnet security list.
Name: oketest-kmilb

One egress security rule:
- Stateful; Destination 0.0.0.0/0; Protocol: All.

Eight ingress security rules (all stateful TCP):
- Ingress Rule 1: source 253.255.0.0/16 (Kubernetes internal CIDR), destination port 6443.
- Ingress Rule 2: source 10.0.0.0/8 (Kubernetes API clients), destination port 6443.
- Ingress Rule 3: source 172.31.4.0/22 (control plane subnet), destination port 6443.
- Ingress Rule 4: source 172.31.8.0/21 (worker subnet), destination port 6443.
- Ingress Rule 5: source 172.31.8.0/21 (worker subnet), destination port 12250.
- Ingress Rule 6: source 172.31.16.0/20 (pod subnet), destination port 12250.
- Ingress Rule 7 (private endpoint only): source 172.31.2.0/23 (control plane load balancer subnet), destination port 6443.
- Ingress Rule 8 (public endpoint only): source 10.0.0.0/8 (public IP CIDR configured in the Service Enclave), destination port 6443.
Create the Control Plane Load Balancer Subnet
To create a subnet, use the instructions in Creating a Subnet in the Oracle Private Cloud Appliance User Guide. For Terraform input, see Example Terraform Scripts for VCN-Native Pod Networking Resources.
For this example, use the following input to create the control plane load balancer subnet. Use the OCID of the VCN that was created in Creating a VCN-Native Pod Networking VCN. Create the control plane load balancer subnet in the same compartment where you created the VCN.
Create either a private or a public control plane load balancer subnet. Create a public control plane load balancer subnet to use with a public cluster. Create a private control plane load balancer subnet to use with a private cluster.
See Private Clusters for information about using Local Peering Gateways to connect a private cluster to other instances on the Private Cloud Appliance and using Dynamic Routing Gateways to connect a private cluster to the on-premises IP address space. To create a private control plane load balancer subnet, specify one of the following route tables (see Creating a VCN-Native Pod Networking VCN):
- vcn_private
- lpg_rt
- drg_rt
Table 4-19 Create a Public Control Plane Load Balancer Subnet
Compute Web UI property | OCI CLI property
---|---
Name: control-plane-endpoint | --display-name control-plane-endpoint
CIDR Block: 172.31.2.0/23 | --cidr-block 172.31.2.0/23
Route Table: the public route table | --route-table-id set to the public route table OCID
Public Subnet: Yes | --prohibit-public-ip-on-vnic false
DNS Label: kmilb | --dns-label kmilb
Security Lists: the default security list and the control plane load balancer (kmilb) security list | --security-list-ids: OCIDs of the default security list and the kmilb security list
The differences in the following private subnet are that the VCN private route table is used instead of the public route table and that public IP addresses are prohibited. Depending on your needs, you could specify the LPG route table or the DRG route table instead.
Table 4-20 Create a Private Control Plane Load Balancer Subnet
Compute Web UI property | OCI CLI property
---|---
Route Table: the vcn_private route table (or lpg_rt or drg_rt) | --route-table-id set to the selected route table OCID
Private Subnet: Yes | --prohibit-public-ip-on-vnic true

All other properties are the same as in Table 4-19.