4 Using Kubernetes
This chapter describes how to get started using Kubernetes to deploy, maintain, and scale containerized applications. It describes basic usage of the kubectl command to get you started creating and managing containers and services within the environment. The kubectl utility is fully documented in the upstream Kubernetes documentation.
About Runtime Engines
runc is the default runtime engine when you create containers. You can also use the kata-runtime runtime engine to create Kata containers. For information on Kata containers and how to create them, see Container Runtimes.
Getting Information about Nodes
To get a listing of all nodes in a cluster and the status of each node, use the kubectl get command. This command can be used to obtain listings of any kind of Kubernetes resource; in this case, the nodes resource:
kubectl get nodes
The output looks similar to:
NAME                  STATUS   ROLES           AGE   VERSION
control.example.com   Ready    control-plane   1h    version
worker1.example.com   Ready    <none>          1h    version
worker2.example.com   Ready    <none>          1h    version
You can get more detailed information about any resource by using the kubectl describe command. If you specify the name of the resource, the output is limited to information about that resource alone; otherwise, full details of all resources are printed to screen. For example:
kubectl describe nodes worker1.example.com
The output looks similar to:
Name:               worker1.example.com
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=worker1.example.com
                    kubernetes.io/os=linux
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"fe:78:5f:ea:7c:c0"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.0.2.11
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
...
Running an Application in a Pod
To create a pod with a single running container, you can use the kubectl create command. For example:
kubectl create deployment --image nginx hello-world
Substitute nginx with a container image. Substitute hello-world with a name for the deployment. The pods are named by using the deployment name as a prefix.
Tip:
Deployment, pod, and service names must conform to the DNS-1123 label format: they must consist of lowercase alphanumeric characters or -, and must start and end with an alphanumeric character. The regular expression that's used to validate names is '[a-z0-9]([-a-z0-9]*[a-z0-9])?'. If you use a name for the deployment that doesn't validate, an error is returned.
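You can try the same validation outside of Kubernetes. The following is a minimal sketch that anchors the DNS-1123 regular expression with ^ and $ (so that the whole name must match, as Kubernetes requires) and checks a few sample names with grep; the sample names are illustrative:

```shell
# Anchored form of the DNS-1123 label regular expression.
regex='^[a-z0-9]([-a-z0-9]*[a-z0-9])?$'

# Check some sample names: valid names are lowercase alphanumeric
# with optional hyphens, starting and ending with an alphanumeric.
for name in hello-world my-app-01 Hello_World -bad-; do
  if printf '%s\n' "$name" | grep -Eq "$regex"; then
    echo "$name: valid"
  else
    echo "$name: invalid"
  fi
done
```

Here hello-world and my-app-01 pass, while Hello_World (uppercase and underscore) and -bad- (leading and trailing hyphens) fail validation.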
Many more optional parameters can be used when you run a new application within Kubernetes. For example, at run time, you can specify how many replica pods are to be started, or you might apply a label to the deployment to make it easier to identify pod components. To see a full list of available options, run kubectl create deployment --help.
To check that a new application deployment has created one or more pods, use the kubectl get pods command:
kubectl get pods
The output looks similar to:
NAME                           READY   STATUS    RESTARTS   AGE
hello-world-5f55779987-wd857   1/1     Running   0          1m
Use kubectl describe to show a more detailed view of pods, including which containers are running, what image they're based on, and which node is hosting the pod:
kubectl describe pods
The output looks similar to:
Name:               hello-world-5f55779987-wd857
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               worker1.example.com/192.0.2.11
Start Time:         <date> 08:48:33 +0100
Labels:             app=hello-world
                    pod-template-hash=5f55779987
Annotations:        <none>
Status:             Running
IP:                 10.244.1.3
Controlled By:      ReplicaSet/hello-world-5f55779987
Containers:
  nginx:
    Container ID:   cri-o://417b4b59f7005eb4b1754a1627e01f957e931c0cf24f1780cd94fa9949be1d31
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:5d32f60db294b5deb55d078cd4feb410ad88e6fe7...
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Mon, 10 Dec 2018 08:25:25 -0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-s8wj4 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-s8wj4:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-s8wj4
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  ...
Scaling a Pod Deployment
To change the number of instances of the same pod that you're running, you can use the kubectl scale deployment command. For example:
kubectl scale deployment --replicas=3 hello-world
You can check that the number of pod instances has been scaled appropriately:
kubectl get pods
The output looks similar to:
NAME                           READY   STATUS    RESTARTS   AGE
hello-world-5f55779987-tswmg   1/1     Running   0          18s
hello-world-5f55779987-v8w5h   1/1     Running   0          26m
hello-world-5f55779987-wd857   1/1     Running   0          18s
Exposing a Service Object for an Application
Many applications only need to communicate internally within a pod, or even across pods. However, you might need to expose an application externally so that clients outside of the Kubernetes cluster can interface with it. You can do this by creating a service definition for the deployment.
Note:
The Oracle Cloud Infrastructure Cloud Controller Manager module is used to create and manage Oracle Cloud Infrastructure load balancers for Kubernetes applications. The following example assumes you have installed this module as described in Oracle Cloud Infrastructure Cloud Controller Manager Module.
To expose a deployment using a service object, you must define the service type to be used. The following example shows how you might use the kubectl expose deployment command to expose the application using a LoadBalancer service:
kubectl expose deployment hello-world --port 80 --type=LoadBalancer
Use kubectl get services to list the different services that the cluster is running, as shown in the following example. Note that the EXTERNAL-IP field of the LoadBalancer service initially shows as <pending> while the setup of the service is still in progress:
kubectl get services
The output looks similar to:
NAME          TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
hello-world   LoadBalancer   10.102.42.160   <pending>     80:31847/TCP   3s
kubernetes    ClusterIP      10.96.0.1       <none>        443/TCP        5h13m
You can see the load balancer in the Oracle Cloud Infrastructure console. Initially, its state in the console is shown as Creating.
Wait a few minutes for the setup of the service to complete. Run the kubectl get services command again, and note that the EXTERNAL-IP field is now populated with the IP address assigned to the LoadBalancer service:
kubectl get services
The output looks similar to:
NAME          TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
hello-world   LoadBalancer   10.102.42.160   192.0.2.250   80:31847/TCP   85s
kubernetes    ClusterIP      10.96.0.1       <none>        443/TCP        5h15m
The PORT(S) field contains the following ports:

- Port 80: The port at which the LoadBalancer service can be accessed. In this example, the service would be accessed at the following URL: http://192.0.2.250
- Port 31847: The port assigned to the NodePort service. The NodePort service enables the application to be accessed using the URL format worker_node:NodePort, for example: http://worker1.example.com:31847/

Note:
Kubernetes creates the NodePort service as part of its LoadBalancer setup.
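The PORT(S) value therefore encodes both ports in one field. As an illustrative sketch (using the value 80:31847/TCP from the example output above), standard shell parameter expansion splits it into the two ports:

```shell
# Illustrative sketch: split the PORT(S) value from the example
# kubectl get services output into its two component ports.
ports="80:31847/TCP"

service_port=${ports%%:*}      # text before the first ":"  -> the LoadBalancer port
node_port=${ports#*:}          # text after the first ":"   -> "31847/TCP"
node_port=${node_port%%/*}     # strip the "/TCP" protocol  -> the NodePort

echo "LoadBalancer port: $service_port"
echo "NodePort: $node_port"
```

This prints the LoadBalancer port 80 and the NodePort 31847 for the example service.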
You can verify that the application is accessible by using curl commands as shown in the following examples:

- For the LoadBalancer service:
  curl http://192.0.2.250
  The output looks similar to:
  <html> <head> <title>Welcome to this service</title> </head> <body> <h1>Welcome to this service</h1> ... </body> </html>
- For each worker node, verify the NodePort service by running a curl command:
  curl http://worker1.example.com:31847/
  The output looks similar to:
  <html> <head> <title>Welcome to this service</title> ... </head> <body> <h1>Welcome to this service</h1> ... </body> </html>
Deleting a Service or Deployment
Objects can be deleted easily within Kubernetes so that the environment can be cleaned up. Use the kubectl delete command to remove an object.
To delete a service, specify the services object and the name of the service that you want to remove. For example:
kubectl delete services hello-world
To delete an entire deployment, and all pod replicas running for that deployment, specify the deployment object and the name that you used to create the deployment:
kubectl delete deployment hello-world
Working With Namespaces
Namespaces can be used to further separate resource usage and to provide limited environments for particular use cases. By default, Kubernetes configures a namespace for Kubernetes system components and a standard namespace to be used for all other deployments for which no namespace is defined.
To view existing namespaces, use the kubectl get namespaces and kubectl describe namespaces commands.
The kubectl command only displays resources in the default namespace, unless you set the namespace for a request. Therefore, if you need to view the pods specific to the Kubernetes system, you would use the --namespace option to set the namespace to kube-system for the request. For example:
kubectl get pods --namespace kube-system
The output looks similar to:
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-5bc65d7f4b-qzfcc                1/1     Running   0          23h
coredns-5bc65d7f4b-z64f2                1/1     Running   0          23h
etcd-control1.example.com               1/1     Running   0          23h
kube-apiserver-control1.example.com     1/1     Running   0          23h
kube-controller-control1.example.com    1/1     Running   0          23h
kube-flannel-ds-2sjbx                   1/1     Running   0          23h
kube-flannel-ds-njg9r                   1/1     Running   0          23h
kube-proxy-m2rt2                        1/1     Running   0          23h
kube-proxy-tbkxd                        1/1     Running   0          23h
kube-scheduler-control1.example.com     1/1     Running   0          23h
kubernetes-dashboard-7646bf6898-d6x2m   1/1     Running   0          23h
Using Deployment Files
To simplify the creation of pods and their related requirements, you can create a deployment file that defines all the elements of the deployment. This deployment file defines which images are used to generate the containers within the pod, along with any runtime requirements, as well as Kubernetes networking and storage requirements in the form of services to be configured and volumes that might need to be mounted.
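As a minimal sketch, a deployment file for the hello-world nginx deployment used earlier in this chapter might look like the following; the field values mirror that example and are illustrative. The heredoc writes the manifest to hello-world.yaml, which you would then create with kubectl apply -f hello-world.yaml:

```shell
# Write a minimal deployment file for the hello-world nginx example.
# Apply it with: kubectl apply -f hello-world.yaml
cat > hello-world.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  labels:
    app: hello-world
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF
```

Because the replica count, labels, and image are all declared in the file, the earlier kubectl create deployment and kubectl scale steps collapse into a single kubectl apply.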
Deployments are described in detail in the upstream Kubernetes documentation.