Container Engine and Kubernetes Concepts

This topic describes key concepts you need to understand when using Container Engine for Kubernetes.

Kubernetes Clusters

A Kubernetes cluster is a group of nodes (machines running applications). Each node can be a physical machine or a virtual machine. The node's capacity (its number of CPUs and amount of memory) is defined when the node is created. A cluster comprises:

  • Control plane nodes (previously referred to as 'master nodes'). Typically, there are three control plane nodes for high availability.
  • Worker nodes, organized into node pools.

Kubernetes Cluster Control Plane and Kubernetes API

The Kubernetes cluster control plane implements core Kubernetes functionality. It runs on compute instances (known as 'control plane nodes') in the Container Engine for Kubernetes service tenancy. The cluster control plane is fully managed by Oracle.

The cluster control plane runs a number of processes, including:

  • kube-apiserver to support Kubernetes API operations requested from the Kubernetes command line tool (kubectl) and other command line tools, as well as from direct REST calls. The kube-apiserver includes admission controllers required for advanced Kubernetes operations.
  • kube-controller-manager to manage different Kubernetes components (for example, replication controller, endpoints controller, namespace controller, and serviceaccounts controller)
  • kube-scheduler to determine the worker nodes on which to run newly created pods
  • etcd to store the cluster's configuration data

The Kubernetes API enables end users to query and manipulate Kubernetes resources (such as pods, namespaces, configmaps, and events).
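
For example, you might define a configmap in a file and create it through the API using kubectl. A minimal sketch; the name and data shown here are hypothetical:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: app-settings        # hypothetical name
      namespace: default
    data:
      log-level: info           # hypothetical key/value pair

Applying this file with kubectl apply -f sends the request to the kube-apiserver, which validates the object and persists it in etcd.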

You access the Kubernetes API on the cluster control plane through an endpoint hosted in a subnet of your VCN. This Kubernetes API endpoint subnet can be a private or public subnet. If you specify a public subnet for the Kubernetes API endpoint, you can optionally assign a public IP address to the Kubernetes API endpoint (in addition to the private IP address). You control access to the Kubernetes API endpoint subnet using security rules defined for security lists or network security groups.
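
When you set up a kubeconfig file for the cluster, its server entry points at this endpoint. A minimal kubeconfig excerpt, assuming the endpoint listens on port 6443 and using placeholder values throughout:

    apiVersion: v1
    kind: Config
    clusters:
    - name: oke-cluster                  # placeholder cluster name
      cluster:
        # Placeholder address; a private endpoint would use a private IP
        # address from the Kubernetes API endpoint subnet instead.
        server: https://203.0.113.10:6443
        certificate-authority-data: <base64-encoded-CA-certificate>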

Note

In earlier releases, clusters were provisioned with public Kubernetes API endpoints that were not integrated into your VCN.

You can continue to create such clusters using the CLI or API, but not the Console.

Kubernetes Worker Nodes and Node Pools

Worker nodes constitute the cluster's data plane; they are where you run the applications that you deploy in a cluster.

Each worker node runs a number of processes, including:

  • kubelet to communicate with the cluster control plane
  • kube-proxy to maintain networking rules

The cluster control plane processes monitor and record the state of the worker nodes and distribute requested operations across them.

A node pool is a subset of worker nodes within a cluster that all have the same configuration. Node pools enable you to group machines with different configurations within the same cluster. For example, you might create one pool of nodes in a cluster as virtual machines, and another pool of nodes as bare metal machines. A cluster must have at least one node pool, but a node pool need not contain any worker nodes.
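
For example, to steer a workload onto the nodes of a particular pool, you can use a nodeSelector in the pod specification. A sketch assuming the pool's worker nodes carry a hypothetical label pool: bare-metal:

    apiVersion: v1
    kind: Pod
    metadata:
      name: compute-intensive-app      # hypothetical pod name
    spec:
      nodeSelector:
        pool: bare-metal               # hypothetical label on the bare metal pool's nodes
      containers:
      - name: app
        image: compute-app:1.0         # hypothetical image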

Worker nodes in a node pool are connected to a worker node subnet in your VCN.

Pods

Where an application running on a worker node comprises multiple containers, Kubernetes groups the containers into a single logical unit called a pod for easy management and discovery. The containers in a pod share the same network namespace and the same storage, and can be managed as a single object by the cluster control plane. A number of pods providing the same functionality can be grouped into a single logical set known as a service.
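
A minimal sketch of a two-container pod; the names and images are hypothetical. The containers share the pod's network namespace (so they can reach each other on localhost) and share storage through a common volume:

    apiVersion: v1
    kind: Pod
    metadata:
      name: web-with-log-shipper       # hypothetical pod name
    spec:
      volumes:
      - name: shared-logs
        emptyDir: {}                   # scratch volume shared by both containers
      containers:
      - name: web
        image: nginx                   # example web server image
        volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx
      - name: log-shipper
        image: log-shipper:1.0         # hypothetical sidecar image
        volumeMounts:
        - name: shared-logs
          mountPath: /logs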

For more information about pods, see the Kubernetes documentation.

Services

In Kubernetes, a service is an abstraction that defines a logical set of pods and a policy by which to access them. The set of pods targeted by a service is usually determined by a selector.

For some parts of an application (for example, frontends), you might want to expose a service on an external IP address outside of a cluster.

Kubernetes ServiceTypes enable you to specify the kind of service you want to expose. A LoadBalancer ServiceType creates an Oracle Cloud Infrastructure load balancer on load balancer subnets in your VCN.
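
A minimal sketch of such a service; the names, labels, and ports are hypothetical:

    apiVersion: v1
    kind: Service
    metadata:
      name: frontend                   # hypothetical service name
    spec:
      type: LoadBalancer               # causes an OCI load balancer to be provisioned
      selector:
        app: frontend                  # targets pods labeled app: frontend
      ports:
      - port: 80                       # port exposed by the load balancer
        targetPort: 8080               # port the selected pods listen on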

For more information about services in general, see the Kubernetes documentation. For more information about creating load balancer services with Container Engine for Kubernetes, see Creating Load Balancers to Distribute Traffic Between Cluster Nodes.

Manifest Files (or Pod Specs)

A Kubernetes manifest file comprises instructions in a YAML or JSON file that specify how to deploy an application to the node or nodes in a Kubernetes cluster. The instructions include information about the Kubernetes deployment, the Kubernetes service, and other Kubernetes objects to be created on the cluster. The manifest is commonly also referred to as a pod spec, or as a deployment.yaml file (although other filenames are allowed). The parameters to include in a Kubernetes manifest file are described in the Kubernetes documentation.
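
A minimal manifest sketch describing a Kubernetes deployment; the names, image, and replica count are hypothetical:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hello-app                  # hypothetical deployment name
    spec:
      replicas: 3                      # desired number of pod replicas
      selector:
        matchLabels:
          app: hello
      template:
        metadata:
          labels:
            app: hello                 # pods created from this template carry this label
        spec:
          containers:
          - name: hello
            image: hello-app:1.0       # hypothetical container image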

Admission Controllers

A Kubernetes admission controller intercepts authenticated and authorized requests to the Kubernetes API server before admitting an object (such as a pod) to the cluster. An admission controller can validate an object, or modify it, or both. Many advanced features in Kubernetes require an enabled admission controller. For more information, see the Kubernetes documentation.
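
For example, the LimitRanger admission controller mutates pods that do not specify resource requests and limits, filling in defaults from a LimitRange object in the pod's namespace. A sketch with hypothetical values:

    apiVersion: v1
    kind: LimitRange
    metadata:
      name: default-limits             # hypothetical name
      namespace: default
    spec:
      limits:
      - type: Container
        default:                       # limits injected into containers that omit them
          cpu: 500m
          memory: 256Mi
        defaultRequest:                # requests injected into containers that omit them
          cpu: 250m
          memory: 128Mi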

The Kubernetes version you select when you create a cluster using Container Engine for Kubernetes determines the admission controllers supported by that cluster. To find out the supported admission controllers, the order in which they run in the Kubernetes API server, and the Kubernetes versions in which they are supported, see Supported Admission Controllers.

Namespaces

A Kubernetes cluster can be organized into namespaces to divide the cluster's resources among multiple users. Initially, a cluster has the following namespaces:

  • default, for resources with no other namespace
  • kube-system, for resources created by the Kubernetes system
  • kube-node-lease, for one lease object per node to help determine node availability
  • kube-public, usually used for resources that have to be accessible across the cluster
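
You can create additional namespaces as needed. A minimal sketch; the namespace name is hypothetical:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: team-a                     # hypothetical namespace for one team's resources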

For more information about namespaces, see the Kubernetes documentation.