4 Using Kubernetes
Important:
The software described in this documentation is either in Extended Support or Sustaining Support. See Oracle Open Source Support Policies for more information.
We recommend that you upgrade the software described by this documentation as soon as possible.
This chapter describes how to get started using Kubernetes to deploy, maintain, and scale your containerized applications. It covers basic usage of the kubectl command to get you started creating and managing containers and services within your environment.
The kubectl utility is fully documented in the upstream documentation at:
https://kubernetes.io/docs/reference/kubectl/
About Runtime Engines
runc is the default runtime engine when you
create containers. You can also use the
kata-runtime runtime engine to create Kata
containers. For information on Kata containers and how to create
them, see Container Runtimes.
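If a RuntimeClass for the kata-runtime engine has already been configured in your cluster, you can select it in a pod specification. The following is a minimal sketch only: it assumes a RuntimeClass named kata exists, and the pod name kata-nginx is a hypothetical example:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: kata-nginx
spec:
  # Assumes a RuntimeClass named "kata" is configured as described in Container Runtimes
  runtimeClassName: kata
  containers:
  - name: nginx
    image: nginx
EOF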
Getting Information about Nodes
To get a listing of all of the nodes in a cluster and the status
of each node, use the kubectl get command. This
command can be used to obtain listings of any kind of resource
that Kubernetes supports. In this case, the nodes
resource:
kubectl get nodes
NAME                  STATUS   ROLES           AGE   VERSION
control.example.com   Ready    control-plane   1h    v1.21.x+x.x.x.el8
worker1.example.com   Ready    <none>          1h    v1.21.x+x.x.x.el8
worker2.example.com   Ready    <none>          1h    v1.21.x+x.x.x.el8
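You can add the -o wide option to include extra columns in the listing, such as each node's internal IP address, operating system image, and container runtime. For example:
kubectl get nodes -o wide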
You can get more detailed information about any resource by using the
kubectl describe command. If you specify the
name of the resource, the output is limited to information about
that resource alone; otherwise, full details of all resources of
that type are printed to the screen. For example:
kubectl describe nodes worker1.example.com
Name: worker1.example.com
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=worker1.example.com
kubernetes.io/os=linux
Annotations: flannel.alpha.coreos.com/backend-data: {"VtepMAC":"fe:78:5f:ea:7c:c0"}
flannel.alpha.coreos.com/backend-type: vxlan
flannel.alpha.coreos.com/kube-subnet-manager: true
flannel.alpha.coreos.com/public-ip: 192.0.2.11
kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
...
Running an Application in a Pod
To create a pod with a single running container, you can use the
kubectl create command. For example:
kubectl create deployment --image nginx hello-world
deployment.apps/hello-world created
Substitute nginx with a container image.
Substitute hello-world with a name for your
deployment. Your pods are named by using the deployment name as a
prefix.
Tip:
Deployment, pod, and service names must conform to the requirements
for a DNS-1123 label. These names must consist of lowercase
alphanumeric characters or -, and must start
and end with an alphanumeric character. The regular expression
that is used to validate names is
'[a-z0-9]([-a-z0-9]*[a-z0-9])?'. If you use a
name for your deployment that does not validate, an error is
returned.
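As a quick local check, you can test a candidate name against this regular expression by using a standard tool such as grep. This is purely illustrative and is not part of kubectl:
echo "hello-world" | grep -Ex '[a-z0-9]([-a-z0-9]*[a-z0-9])?' && echo "valid name"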
There are many additional optional parameters that you can use when
you create a new application deployment within Kubernetes. For instance,
you can specify how many replica pods should be started at creation time,
or you can apply labels to the deployment to make it easier to
identify pod components. To see a full list of available options,
run kubectl create deployment --help.
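For example, to create the same deployment with three replica pods from the start, you could run a command similar to the following, where the image and deployment name are the same placeholders used above:
kubectl create deployment hello-world --image nginx --replicas 3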
To check that your new application deployment has created one or
more pods, use the kubectl get pods command:
kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
hello-world-5f55779987-wd857   1/1     Running   0          1m
Use kubectl describe to show a more detailed
view of your pods, including which containers are running and what
image they are based on, as well as which node is currently
hosting the pod:
kubectl describe pods
Name: hello-world-5f55779987-wd857
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: worker1.example.com/192.0.2.11
Start Time: Fri, 16 Aug 2019 08:48:33 +0100
Labels: app=hello-world
pod-template-hash=5f55779987
Annotations: <none>
Status: Running
IP: 10.244.1.3
Controlled By: ReplicaSet/hello-world-5f55779987
Containers:
nginx:
Container ID: cri-o://417b4b59f7005eb4b1754a1627e01f957e931c0cf24f1780cd94fa9949be1d31
Image: nginx
Image ID: docker-pullable://nginx@sha256:5d32f60db294b5deb55d078cd4feb410ad88e6fe7...
Port: <none>
Host Port: <none>
State: Running
Started: Mon, 10 Dec 2018 08:25:25 -0800
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-s8wj4 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-s8wj4:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-s8wj4
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
...
Scaling a Pod Deployment
To change the number of replica pods that are running for a
deployment, use the kubectl scale
deployment command. For example:
kubectl scale deployment --replicas=3 hello-world
deployment.apps/hello-world scaled
You can check that the number of pod instances has been scaled appropriately:
kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
hello-world-5f55779987-tswmg   1/1     Running   0          18s
hello-world-5f55779987-v8w5h   1/1     Running   0          26m
hello-world-5f55779987-wd857   1/1     Running   0          18s
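You can also confirm the scaling at the deployment level. The READY column in the output of the following command shows how many of the requested replica pods are available, for example 3/3 when all three pods are running:
kubectl get deployment hello-world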
Exposing a Service Object for an Application
While many applications only need to communicate internally within a pod, or even across pods, you might need to expose your application externally so that clients outside of the Kubernetes cluster can interface with it. You can do this by creating a service definition for the deployment.
Note:
Prerequisite for LoadBalancer service:
The Oracle Cloud Infrastructure Cloud Controller Manager module
(oci-ccm) is used to create and manage Oracle Cloud Infrastructure load
balancers for Kubernetes applications. Hence, the following example assumes you have
completed installation of the oci-ccm module as described in Application Load Balancers.
To expose a deployment using a service object, you must define the service type that should
be used. The following example shows how you might use the kubectl expose
deployment command to expose the application via a LoadBalancer
service:
kubectl expose deployment hello-world --port 80 --type=LoadBalancer
service/hello-world exposed
Use kubectl get services to list the different services that the cluster
is running as shown in the following example. Note that the EXTERNAL-IP field
of the LoadBalancer service initially shows as
<pending> whilst the setup of the service is still in progress:
kubectl get services
NAME          TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
hello-world   LoadBalancer   10.102.42.160   <pending>     80:31847/TCP   3s
kubernetes    ClusterIP      10.96.0.1       <none>        443/TCP        5h13m
You can see the load balancer in the Oracle Cloud Infrastructure console. Initially, its state in the console is shown as Creating.
Wait a few minutes for the setup of the service to complete. Run the kubectl get
services command again, and note that the EXTERNAL-IP field is
now populated with the IP address assigned to the LoadBalancer service:
kubectl get services
NAME          TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
hello-world   LoadBalancer   10.102.42.160   192.0.2.250   80:31847/TCP   85s
kubernetes    ClusterIP      10.96.0.1       <none>        443/TCP        5h15m
The PORT(S) field contains the following ports:
- Port 80: The port at which the LoadBalancer service can be accessed. In this example, the service would be accessed at the following URL: http://192.0.2.250
- Port 31847: The port assigned to the NodePort service. The NodePort service enables the application to be accessed by using the URL format worker_node:NodePort, for example: http://worker1.example.com:31847/
Note:
Kubernetes creates the NodePort service as part of its LoadBalancer setup.
You can verify that the service is working by running curl
commands as shown in the following examples:
- For the LoadBalancer service:
curl http://192.0.2.250
<html>
  <head>
    <title>Welcome to this service</title>
  </head>
  <body>
    <h1>Welcome to this service</h1>
    ...
  </body>
</html>
- For each worker node, verify the NodePort service by running a curl command as illustrated below:
curl http://worker1.example.com:31847/
<html>
  <head>
    <title>Welcome to this service</title>
    ...
  </head>
  <body>
    <h1>Welcome to this service</h1>
    ...
  </body>
</html>
Deleting a Service or Deployment
Objects can be deleted easily within Kubernetes so that you can
clean up your environment. Use the kubectl
delete command to remove an object.
To delete a service, specify the services object and the name of the service that you want to remove. For example:
kubectl delete services hello-world
service "hello-world" deleted
To delete an entire deployment, and all of the pod replicas running for that deployment, specify the deployment object and the name that you used to create the deployment:
kubectl delete deployment hello-world
deployment.apps "hello-world" deleted
Working With Namespaces
Namespaces can be used to further separate resource usage and to provide limited environments for particular use cases. By default, Kubernetes configures a namespace for Kubernetes system components (kube-system) and a default namespace to be used for all other deployments for which no namespace is defined.
To view existing namespaces, use the kubectl get
namespaces and kubectl describe
namespaces commands.
The kubectl command only displays resources in the
default namespace, unless you set the namespace specifically for a request.
Therefore, if you need to view the pods specific to the Kubernetes system, you would use the
--namespace option to set the namespace to kube-system for
the request. For example, in a cluster with a single control plane node:
kubectl get pods --namespace=kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-5bc65d7f4b-qzfcc                1/1     Running   0          23h
coredns-5bc65d7f4b-z64f2                1/1     Running   0          23h
etcd-control1.example.com               1/1     Running   0          23h
kube-apiserver-control1.example.com     1/1     Running   0          23h
kube-controller-control1.example.com    1/1     Running   0          23h
kube-flannel-ds-2sjbx                   1/1     Running   0          23h
kube-flannel-ds-njg9r                   1/1     Running   0          23h
kube-proxy-m2rt2                        1/1     Running   0          23h
kube-proxy-tbkxd                        1/1     Running   0          23h
kube-scheduler-control1.example.com     1/1     Running   0          23h
kubernetes-dashboard-7646bf6898-d6x2m   1/1     Running   0          23h
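You can also create namespaces of your own and deploy into them by setting the --namespace option on each request. The following is a minimal sketch that uses a hypothetical namespace named test:
kubectl create namespace test
kubectl create deployment --image nginx hello-world --namespace=test
kubectl get pods --namespace=test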
Using Deployment Files
To simplify the creation of pods and their related requirements, you can create a deployment file that defines all of the elements that comprise the deployment. This deployment file defines which images are used to generate the containers within the pod, along with any runtime requirements, as well as Kubernetes networking and storage requirements in the form of services that should be configured and volumes that might need to be mounted.
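For example, the following is a minimal sketch of a deployment file for the hello-world deployment used earlier in this chapter, applied directly from standard input. In practice, you would usually save the manifest to a file and run kubectl apply -f filename:
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  labels:
    app: hello-world
spec:
  # Run three replica pods, equivalent to scaling the deployment to 3
  replicas: 3
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF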
Deployments are described in detail at:
https://kubernetes.io/docs/concepts/workloads/controllers/deployment/