Public and Private Clusters

Before you create a cluster, decide what kind of network access the cluster requires: a public cluster or a private cluster. A single VCN cannot contain both public and private clusters.

The key difference between a public cluster and a private cluster is whether you configure public or private subnets for the Kubernetes API endpoint and the worker load balancer.

Note:

The subnets for the worker nodes and control plane nodes are always private.

For the worker nodes and control plane nodes, you can configure route rules that allow access only within the VCN, or route rules that also allow outbound access beyond the VCN. This documentation names those route tables "vcn_private" and "nat_private," respectively. You can use either of these private subnet configurations for your worker nodes and control plane nodes whether the cluster is private or public.
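The difference between the two private route tables can be sketched as follows. This is an illustrative sketch only: the structure mirrors OCI route rule fields, but the NAT gateway OCID is a placeholder, not a value from a real appliance.

```python
# "vcn_private": no route rules, so subnet traffic stays within the VCN.
vcn_private = {
    "displayName": "vcn_private",
    "routeRules": [],
}

# "nat_private": a single rule that sends all outbound traffic to a NAT
# gateway, allowing access outside the VCN without exposing the subnet.
nat_private = {
    "displayName": "nat_private",
    "routeRules": [
        {
            "destination": "0.0.0.0/0",
            "destinationType": "CIDR_BLOCK",
            "networkEntityId": "ocid1.natgateway.unique_id",  # placeholder OCID
        }
    ],
}

print(len(vcn_private["routeRules"]))  # 0: access within the VCN only
print(len(nat_private["routeRules"]))  # 1: outbound access through the NAT gateway
```

Either shape can back the worker node and control plane node subnets; the choice depends on whether those nodes need to reach destinations outside the VCN.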

Public Clusters

A public cluster requires the following network resources:

  • A public subnet for the Kubernetes API endpoint. See Creating a Flannel Overlay Control Plane Load Balancer Subnet and Creating a VCN-Native Pod Networking Control Plane Load Balancer Subnet.

  • A public subnet for the worker load balancer. See Creating a Flannel Overlay Worker Load Balancer Subnet and Creating a VCN-Native Pod Networking Worker Load Balancer Subnet.

Private Clusters

If you create multiple OKE VCNs, each VCN CIDR must be unique: the CIDR of a private cluster's VCN cannot overlap with any other VCN CIDR or with any on-premises CIDR.
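The overlap requirement can be verified with the standard library before you provision a VCN. A minimal sketch, with hypothetical example CIDRs:

```python
import ipaddress

def cidrs_overlap(cidr_a: str, cidr_b: str) -> bool:
    """Return True if the two CIDR blocks share any addresses."""
    return ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

# Hypothetical address plan: an OKE VCN, a second VCN, and on-premises space.
oke_vcn = "10.0.0.0/16"
second_vcn = "10.1.0.0/16"
on_prem = "10.0.128.0/20"

print(cidrs_overlap(oke_vcn, second_vcn))  # False: safe to use together
print(cidrs_overlap(oke_vcn, on_prem))     # True: choose a different VCN CIDR
```

Checking every pairing of VCN and on-premises CIDRs this way avoids peering and routing failures later.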

A private cluster has the following network resources:

  • A private subnet for the Kubernetes API endpoint. See the instructions for creating a private "control-plane-endpoint" subnet in Creating a Flannel Overlay Control Plane Load Balancer Subnet and Creating a VCN-Native Pod Networking Control Plane Load Balancer Subnet.

  • A private subnet for the worker load balancer. See the instructions for creating a private "service-lb" subnet in Creating a Flannel Overlay Worker Load Balancer Subnet and Creating a VCN-Native Pod Networking Worker Load Balancer Subnet.

  • A route table with no route rules. This route table allows access only within the VCN.

  • (Optional) A Local Peering Gateway (LPG). Use an LPG to allow access to the cluster from an instance running in a different VCN. Create an LPG on the OKE VCN and another LPG on a second VCN on the Private Cloud Appliance, then use the LPG connect command to peer the two LPGs. Peered VCNs can be in different tenancies, but their CIDRs cannot overlap. See "Connecting VCNs through a Local Peering Gateway" in the Networking chapter of the Oracle Private Cloud Appliance User Guide.

    Create a route rule to steer VCN subnet traffic to and from the LPGs, and create security rules to allow or deny specific types of traffic. See Creating a Flannel Overlay VCN or Creating a VCN-Native Pod Networking VCN for the route table to add to the OKE VCN and for a similar route table to add to the second VCN. Add the same route rule on the second VCN, specifying the OKE VCN CIDR as the destination.

    Install the OCI SDK and kubectl on the instance on the second VCN and connect to the private cluster. See Creating a Kubernetes Configuration File.

  • (Optional) A Dynamic Routing Gateway (DRG). Use a DRG to enable access from the on-premises network. A DRG allows traffic between the OKE VCN and the on-premises network's IP address space. Create the DRG in the OKE VCN compartment, and then attach the OKE VCN to that DRG. See "Connecting to the On-Premises Network through a Dynamic Routing Gateway" in the Networking chapter of the Oracle Private Cloud Appliance User Guide.

    Create a route rule to steer traffic to the on-premises data center network's IP address space. See Creating a Flannel Overlay VCN or Creating a VCN-Native Pod Networking VCN for the route table to add to the OKE VCN.
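The LPG and DRG route rules described above share one shape: a destination CIDR steered to a gateway. The following sketch shows that structure; all CIDRs and OCIDs are placeholders to substitute with values from your environment.

```python
# Rule on the OKE VCN route table: steer traffic bound for the peered
# VCN's CIDR to the Local Peering Gateway.
lpg_rule = {
    "destination": "10.1.0.0/16",  # CIDR of the second (peered) VCN
    "destinationType": "CIDR_BLOCK",
    "networkEntityId": "ocid1.localpeeringgateway.unique_id",  # placeholder
}

# Rule on the OKE VCN route table: steer traffic bound for the
# on-premises address space to the Dynamic Routing Gateway.
drg_rule = {
    "destination": "192.168.0.0/16",  # on-premises data center CIDR
    "destinationType": "CIDR_BLOCK",
    "networkEntityId": "ocid1.drg.unique_id",  # placeholder
}

# Mirror-image rule on the second VCN: point back at the OKE VCN CIDR
# through that VCN's own LPG.
mirror_rule = {
    "destination": "10.0.0.0/16",  # CIDR of the OKE VCN
    "destinationType": "CIDR_BLOCK",
    "networkEntityId": "ocid1.localpeeringgateway.second_vcn_id",  # placeholder
}
```

The mirror rule is what L-shaped routing problems usually come down to: traffic reaches the cluster but replies are dropped because the second VCN has no route back to the OKE VCN CIDR.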