The software described in this documentation is either no longer supported or is in extended support.
Oracle recommends that you upgrade to a current supported release.
Chapter 5 Getting Started with Kubernetes
This chapter describes how to get started using Kubernetes to deploy, maintain and scale your containerized applications.
5.1 kubectl Basics
The kubectl utility is a command line tool that interfaces with the API Server to run commands against the cluster. The tool is typically run on the master node of the cluster. It effectively grants full administrative rights to the cluster and all of the nodes in the cluster.
The kubectl utility is documented fully at:
https://kubernetes.io/docs/reference/kubectl/overview/
In this section, we describe basic usage of the tool to get you started creating and managing pods and services within your environment.
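To confirm that kubectl is able to communicate with the API Server before you start, you can run commands such as:
$ kubectl cluster-info
$ kubectl version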
Get Information About the Nodes in a Cluster
To get a listing of all of the nodes in a cluster and the status of each node, use the kubectl get command. This command can be used to obtain listings of any kind of resource that Kubernetes supports. In this case, the nodes resource:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master.example.com Ready master 1h v1.12.5+2.1.1.el7
worker1.example.com Ready <none> 1h v1.12.5+2.1.1.el7
worker2.example.com Ready <none> 1h v1.12.5+2.1.1.el7
You can get more detailed information about any resource by using the kubectl describe command. If you specify the name of the resource, the output is limited to information about that resource alone; otherwise, full details of all resources of that type are printed to the screen:
$ kubectl describe nodes worker1.example.com
Name: worker1.example.com
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=worker1.example.com
Annotations: flannel.alpha.coreos.com/backend-data: {"VtepMAC":"f2:24:33:ab:be:82"}
flannel.alpha.coreos.com/backend-type: vxlan
flannel.alpha.coreos.com/kube-subnet-manager: true
flannel.alpha.coreos.com/public-ip: 10.147.25.196
kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
...
Run an Application in a Pod
To create a pod with a single running Docker container, you can use the kubectl create command:
$ kubectl create deployment --image nginx hello-world
deployment.apps/hello-world created
Substitute hello-world with a name for your deployment. Your pods are named by using the deployment name as a prefix. Substitute nginx with a Docker image that can be pulled by the Docker engine.
Deployment, pod and service names must conform to the requirements for a DNS-1123 label: they must consist of lower case alphanumeric characters or '-', and must start and end with an alphanumeric character. The regular expression that is used to validate names is '[a-z0-9]([-a-z0-9]*[a-z0-9])?'. If the name that you use for your deployment does not validate, an error is returned.
There are many additional optional parameters that can be used when you run a new application within Kubernetes. For instance, at run time, you can specify how many replica pods should be started, or you might apply a label to the deployment to make it easier to identify pod components. To see a full list of options available to you, run kubectl run -h.
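As an illustration only (this command is not required for the rest of this example, and the hello-web name is hypothetical), a kubectl run invocation that starts two replica pods and applies labels might look similar to the following; check kubectl run -h to confirm the flags that are available in your release:
$ kubectl run hello-web --image nginx --replicas=2 --labels="app=hello-web,tier=web"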
To check that your new application deployment has created one or more pods, use the kubectl get pods command:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-world-5f55779987-wd857 1/1 Running 0 1m
Use kubectl describe to show a more detailed view of your pods, including which containers are running and what image they are based on, as well as which node is currently hosting the pod:
$ kubectl describe pods
Name: hello-world-5f55779987-wd857
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: worker1.example.com/192.0.2.11
Start Time: Mon, 10 Dec 2018 08:25:17 -0800
Labels: app=hello-world
pod-template-hash=5f55779987
Annotations: <none>
Status: Running
IP: 10.244.1.3
Controlled By: ReplicaSet/hello-world-5f55779987
Containers:
nginx:
Container ID: docker://417b4b59f7005eb4b1754a1627e01f957e931c0cf24f1780cd94fa9949be1d31
Image: nginx
Image ID: docker-pullable://nginx@sha256:5d32f60db294b5deb55d078cd4feb410ad88e6fe77500c87d3970eca97f54dba
Port: <none>
Host Port: <none>
State: Running
Started: Mon, 10 Dec 2018 08:25:25 -0800
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-s8wj4 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-s8wj4:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-s8wj4
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
....
Scale a Pod Deployment
To change the number of instances of the same pod that you are running, you can use the kubectl scale deployment command:
$ kubectl scale deployment --replicas=3 hello-world
deployment.apps/hello-world scaled
You can check that the number of pod instances has been scaled appropriately:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-world-5f55779987-tswmg 1/1 Running 0 18s
hello-world-5f55779987-v8w5h 1/1 Running 0 26m
hello-world-5f55779987-wd857 1/1 Running 0 18s
Expose a Service Object for Your Application
While many applications may only need to communicate internally within a pod, or even across pods, you may need to expose your application externally so that clients outside of the Kubernetes cluster can interface with it. You can do this by creating a service definition for the deployment.
To expose a deployment using a service object, you must define the service type that should be used. If you are not using a cloud-based load balancing service, you can set the service type to NodePort. The NodePort service exposes the application running within the cluster on a dedicated port on the public IP address of every node within the cluster. The LoadBalancer type used in the following example behaves in the same way when no cloud load balancer is available: a node port is still allocated, but the external IP address remains pending. Use the kubectl expose deployment command to create a new service:
$ kubectl expose deployment hello-world --port 80 --type=LoadBalancer
service/hello-world exposed
Use kubectl get services to list the different services that the cluster is running, and to obtain the port information required to access the service:
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-world LoadBalancer 10.102.42.160 <pending> 80:31847/TCP 3s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5h13m
In this example output, you can see that traffic to port 80 inside the cluster is mapped to the NodePort 31847. The external IP for the service is listed as <pending> because no cloud load balancer is available; however, if you connect to the public IP address of any of the nodes within the cluster on port 31847, you are able to access the service.
For the sake of the example in this guide, you can open a web browser to point at any of the nodes in the cluster, such as http://worker1.example.com:31847/, and it should display the NGINX demonstration application.
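You can also check the same page from the command line on any host that is able to reach the cluster nodes, for example by using curl:
$ curl http://worker1.example.com:31847/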
Delete a Service or Deployment
Objects can be deleted easily within Kubernetes so that your environment can be cleaned up. Use the kubectl delete command to remove an object.
To delete a service, specify the services object and the name of the service that you want to remove:
$ kubectl delete services hello-world
To delete an entire deployment, and all of the pod replicas running for that deployment, specify the deployment object and the name that you used to create the deployment:
$ kubectl delete deployment hello-world
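To confirm that the objects have been removed, you can list the remaining resources, for example:
$ kubectl get deployments,services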
Work With Namespaces
Namespaces can be used to further separate resource usage and to provide limited environments for particular use cases. By default, Kubernetes configures a namespace for Kubernetes system components and a standard namespace to be used for all other deployments for which no namespace is defined.
To view existing namespaces, use the kubectl get namespaces and kubectl describe namespaces commands.
The kubectl command only displays resources in the default namespace, unless you set the namespace specifically for a request. Therefore, if you need to view the pods specific to the Kubernetes system, you would use the --namespace option to set the namespace to kube-system for the request. For example, in a cluster with a single master node:
$ kubectl get pods --namespace=kube-system
NAME READY STATUS RESTARTS AGE
coredns-6c77847dcf-77grm 1/1 Running 2 5h26m
coredns-6c77847dcf-vtk8k 1/1 Running 2 5h26m
etcd-master.example.com 1/1 Running 3 5h25m
kube-apiserver-master.example.com 1/1 Running 4 5h25m
kube-controller-manager-master.example.com 1/1 Running 4 5h25m
kube-flannel-ds-4c285 1/1 Running 0 115m
kube-flannel-ds-ds66r 1/1 Running 0 115m
kube-proxy-5lssw 1/1 Running 0 117m
kube-proxy-tv2mj 1/1 Running 3 5h26m
kube-scheduler-master.example.com 1/1 Running 3 5h25m
kubernetes-dashboard-64458f66b6-q8dzh 1/1 Running 4 5h26m
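You can also create additional namespaces to separate your own workloads. As a brief illustration (the test namespace name is arbitrary), you could run:
$ kubectl create namespace test
namespace/test created
$ kubectl create deployment --image nginx hello-world --namespace=test
$ kubectl get pods --namespace=test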
5.2 Pod Configuration Using a YAML Deployment
To simplify the creation of pods and their related requirements, you can create a deployment file that defines all of the elements that comprise the deployment. This deployment defines which images should be used to generate the containers within the pod, along with any runtime requirements, as well as Kubernetes networking and storage requirements in the form of services that should be configured and volumes that may need to be mounted.
Deployments are described in detail at https://kubernetes.io/docs/concepts/workloads/controllers/deployment/.
Kubernetes deployment files can be easily shared and Kubernetes is also capable of creating a deployment based on a remotely hosted file, allowing anyone to get a deployment running in minutes. You can create a deployment by running the following command:
$ kubectl create -f https://example.com/deployment.yaml
In the following example, you will create two YAML deployment files. The first is used to create a deployment that runs MySQL Server with a persistent volume for its data store. You will also configure the services that allow other pods in the cluster to consume this resource.
The second deployment will run a phpMyAdmin container in a separate pod that will access the MySQL Server directly. That deployment will also create a NodePort service so that the phpMyAdmin interface can be accessed from outside of the Kubernetes cluster.
The following example illustrates how you can use YAML deployment files to define the scope and resources that you need to run a complete application.
The examples here are provided for demonstration purposes only. They are not intended for production use and do not represent a preferred method of deployment or configuration.
MySQL Server Deployment
To create the MySQL Server Deployment, create a single text file mysql-db.yaml in an editor. The description here provides a breakdown of each of the objects as they are defined in the text file. All of these definitions can appear in the same file.
One problem when running databases within containers is that containers are not persistent. This means that data hosted in the database must be stored outside of the container itself. Kubernetes handles setting up these persistent data stores in the form of Persistent Volumes. There are a wide variety of Persistent Volume types. In a production environment, some kind of shared file system that is accessible to all nodes in the cluster would be the most appropriate implementation choice; however, for this simple example you will use the hostPath type. The hostPath type allows you to use a local disk on the node where the container is running.
In the Persistent Volume specification, we can define the size of the storage that should be dedicated for this purpose and the access modes that should be supported. For the hostPath type, the path where the data should be stored is also defined. In this case, we use the path /tmp/data for demonstration purposes. These parameters should be changed according to your own requirements.
The definition in the YAML file for the Persistent Volume object should appear similarly to the following:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/data"
A Persistent Volume object is an entity within Kubernetes that stands on its own as a resource. For a pod to use this resource, it must request access and abide by the rules applied to its claim for access. This is defined in the form of a Persistent Volume Claim. Pods effectively mount Persistent Volume Claims as their storage.
The definition in the YAML file for the Persistent Volume Claim object should appear similarly to the following:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
It is important to define a service for the deployment. This specifies the TCP ports used by the application that we intend to run in our pod. In this case, the MySQL server listens on port 3306. Most importantly, the name of the service can be used by other deployments to access this service within the cluster, regardless of the node where it is running. This service does not specify a service type, as it uses the default ClusterIP type, so that it is only accessible to other components running on the cluster internal network. In this way, the MySQL server is isolated to requests from containers running in pods within the Kubernetes cluster.
The Service definition in the YAML file might look as follows:
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  labels:
    app: mysql
spec:
  selector:
    app: mysql
  ports:
    - port: 3306
  clusterIP: None
A MySQL Server instance can be easily created as a Docker container running in a pod, using the mysql/mysql-server:latest Docker image. In the pod definition, specify the volume information to attach the Persistent Volume Claim that was defined previously for this purpose. Also, specify the container parameters, including the image that should be used, the container ports that are used, volume mount points and any environment variables required to run the container. In this case, we mount the Persistent Volume Claim onto /var/lib/mysql in each running container instance, and we specify the MYSQL_ROOT_PASSWORD value as an environment variable, as required by the image.
---
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  volumes:
    - name: mysql-pv-storage
      persistentVolumeClaim:
        claimName: mysql-pv-claim
  containers:
    - image: mysql/mysql-server:latest
      name: mysql
      ports:
        - containerPort: 3306
          name: mysql
      volumeMounts:
        - mountPath: /var/lib/mysql
          name: mysql-pv-storage
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: "password"
Replace the password value specified for the MYSQL_ROOT_PASSWORD environment variable with a better alternative, suited to your security requirements.
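As an alternative to embedding the password directly in the YAML file, you could store it in a Kubernetes Secret and reference it from the env section of the container. The following is a minimal sketch; the mysql-root-secret name and its password key are illustrative only:
$ kubectl create secret generic mysql-root-secret --from-literal=password='MyStrongPassword'
      env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-root-secret
              key: password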
When you have created your YAML deployment file, save it and then run:
$ kubectl create -f mysql-db.yaml
persistentvolume/mysql-pv-volume created
persistentvolumeclaim/mysql-pv-claim created
service/mysql-service created
pod/mysql created
All of the resources and components defined in the file are created and loaded in Kubernetes. You can use the kubectl command to view details of each component as you require.
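For example, you can list and inspect the objects that have just been created:
$ kubectl get pv,pvc,services,pods
$ kubectl describe pod mysql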
phpMyAdmin Deployment
To demonstrate how deployments can interconnect and consume services provided by one another, it is possible to set up a phpMyAdmin Docker instance that connects to the backend MySQL server that you deployed in the first part of this example.
The phpMyAdmin deployment uses a standard Docker image to create a container running in a pod, and also defines a NodePort service that allows the web interface to be accessed from any node in the cluster.
Create a new file called phpmyadmin.yaml and open it in an editor to add the two component definitions described in the following text.
First, create the Service definition. This service defines the port that is used in the container and the targetPort that this is mapped to within the internal Kubernetes cluster network. Also specify the Service type and set it to NodePort, to make the service accessible from outside of the cluster network via any of the cluster nodes and the port forwarding service that the NodePort service type provides.
The declaration should look similar to the following:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: phpmyadmin
  name: phpmyadmin
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    name: phpmyadmin
  type: NodePort
Finally, define the pod where the phpMyAdmin container is loaded. Here, you can specify the Docker image that should be used for this container and the port that the container uses. You can also specify the environment variables required to run this image. Notably, the Docker image requires you to set the environment variable PMA_HOST, which should provide the IP address or resolvable domain name for the MySQL server. Since we cannot guess which IP address should be used here, we can rely on Kubernetes to take care of this, by providing the mysql-service name as the value here. Kubernetes automatically links the two pods using this service definition.
The Pod definition should look similar to the following:
---
apiVersion: v1
kind: Pod
metadata:
  name: phpmyadmin
  labels:
    name: phpmyadmin
spec:
  containers:
    - name: phpmyadmin
      image: phpmyadmin/phpmyadmin
      env:
        - name: PMA_HOST
          value: mysql-service
      ports:
        - containerPort: 80
          name: phpmyadmin
Save the file and then run the kubectl create command to load the YAML file into a deployment.
$ kubectl create -f phpmyadmin.yaml
service/phpmyadmin created
pod/phpmyadmin created
To check that this is working as expected, you need to determine what port is being used for the port forwarding provided by the NodePort service:
$ kubectl get services phpmyadmin
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
phpmyadmin 10.110.16.56 <nodes> 80:31485/TCP 1d
In this example output, port 80 on the cluster network is mapped to port 31485 on each of the cluster nodes. Open a browser to point to any of the cluster nodes on the specified port mapping, for example: http://master.example.com:31485/. You should be presented with the phpMyAdmin login page, and you should be able to log into phpMyAdmin as root with the password that you specified as the MYSQL_ROOT_PASSWORD environment variable when you deployed the MySQL server.
5.3 Using Persistent Storage
The concept of using persistent storage for a database deployment was introduced in the previous section, Section 5.2, “Pod Configuration Using a YAML Deployment”. Persistent storage is essential when working with stateful applications like databases, as it is important that you are able to retain data beyond the lifecycle of the container, or even of the pod, itself.
Persistent storage in Kubernetes is handled in the form of PersistentVolume objects, which are bound to pods using PersistentVolumeClaims. PersistentVolumes can be hosted locally or can be hosted on networked storage devices or services.
While it is convenient to use the hostPath persistent volume type to store data on the local disk in a demonstration or small-scale deployment, a typical Kubernetes environment involves multiple hosts and usually includes some type of networked storage. Using networked storage helps to ensure resilience and allows you to take full advantage of a clustered environment. In the case where the node where a pod is running fails, a new pod can be started on an alternate node and storage access can be resumed. This is particularly important for database environments where replica setup has been properly configured.
In this section, we continue to explore the Kubernetes components that are used to configure persistent storage, with the focus on using networked storage to host data.
5.3.1 Persistent Storage Concepts
Persistent storage is provided in Kubernetes using the PersistentVolume subsystem. To configure persistent storage, you should be familiar with the following terms:
- PersistentVolume. A PersistentVolume defines the type of storage that is being used and the method used to connect to it. This is the real disk or networked storage service that is used to store data.
- PersistentVolumeClaim. A PersistentVolumeClaim defines the parameters that a consumer, like a pod, uses to bind the PersistentVolume. The claim may specify quota and access modes that should be applied to the resource for a consumer. A pod can use a PersistentVolumeClaim to gain access to the volume and mount it.
- StorageClass. A StorageClass is an object that specifies a volume plugin, known as a provisioner, that allows users to define PersistentVolumeClaims without needing to preconfigure the storage for a PersistentVolume. This can be used to provide access to similar volume types as a pooled resource that can be dynamically provisioned for the lifecycle of a PersistentVolumeClaim.
PersistentVolumes can be provisioned either statically or dynamically.
Static PersistentVolumes are manually created and contain the details required to access real storage and can be consumed directly by any pod that has an associated PersistentVolumeClaim.
Dynamic PersistentVolumes can be automatically generated if a PersistentVolumeClaim does not match an existing static PersistentVolume and an existing StorageClass is requested in the claim. A StorageClass can be defined to host a pool of storage that can be accessed dynamically. Creating a StorageClass is an optional step that is only required if you intend to use dynamic provisioning.
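For example, a minimal StorageClass definition might look similar to the following sketch; the example-storage-class name is arbitrary and the provisioner value shown is a hypothetical external provisioner, which you would replace with a provisioner that is actually deployed in your environment. A PersistentVolumeClaim can then request dynamic provisioning by setting storageClassName: example-storage-class.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-storage-class
provisioner: example.com/external-provisioner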
The process to provision persistent storage is as follows:
- Create a PersistentVolume or StorageClass.
- Create PersistentVolumeClaims.
- Configure a pod to use the PersistentVolumeClaim.
The examples here assume that you have configured storage manually and that you are using static provisioning. In each case, a PersistentVolume is configured, the PersistentVolumeClaim is created, and finally a pod is created to use the PersistentVolumeClaim.
5.3.2 Configuring NFS
In this example, it is assumed that an NFS appliance is already configured to allow access to all of the nodes in the cluster. Note that if your NFS appliance is hosted on Oracle Cloud Infrastructure, you must create ingress rules in the security list for the Virtual Cloud Network (VCN) subnet that you are using to host your Kubernetes nodes. The rules must be set to allow traffic on ports 2049 and 20049 for NFS Access and NFS Mount.
Each worker node within the cluster must also have the nfs-utils package installed:
# yum install nfs-utils
The following steps describe a deployment using YAML files for each object:
- Create a PersistentVolume object in a YAML file. For example, on the master node, create a file pv-nfs.yml and open it in an editor to include the following content:

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: nfs
  spec:
    capacity:
      storage: 1Gi
    accessModes:
      - ReadWriteMany
    nfs:
      server: 192.0.2.100
      path: "/nfsshare"

  Replace 1Gi with the size of the storage available. Replace 192.0.2.100 with the IP address of the NFS appliance in your environment. Replace /nfsshare with the exported share name on your NFS appliance.
- Create the PersistentVolume using the YAML file you have just created, by running the following command on the master node:

  $ kubectl create -f pv-nfs.yml
  persistentvolume/nfs created
- Create a PersistentVolumeClaim object in a YAML file. For example, on the master node, create a file pvc-nfs.yml and open it in an editor to include the following content:

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: nfs
  spec:
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: 1Gi

  Note that you can change the accessModes by changing the ReadWriteMany value, as required. You can also change the quota available in this claim, by changing the value of the storage option from 1Gi to some other value.
- Create the PersistentVolumeClaim using the YAML file you have just created, by running the following command on the master node:

  $ kubectl create -f pvc-nfs.yml
  persistentvolumeclaim/nfs created
- Check that the PersistentVolume and PersistentVolumeClaim have been created properly and that the PersistentVolumeClaim is bound to the correct volume:

  $ kubectl get pv,pvc
  NAME     CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS   CLAIM         STORAGECLASS   REASON   AGE
  pv/nfs   1Gi        RWX           Retain          Bound    default/nfs                           7m

  NAME      STATUS   VOLUME   CAPACITY   ACCESSMODES   STORAGECLASS   AGE
  pvc/nfs   Bound    nfs      1Gi        RWX                          2m
- At this point, you can set up pods that can use the PersistentVolumeClaim to bind to the PersistentVolume and use the resources that are available there. In the example steps that follow, a ReplicationController is used to set up two replica pods running web servers that use the PersistentVolumeClaim to mount the PersistentVolume onto a mountpath containing shared resources.
- Create a ReplicationController object in a YAML file. For example, on the master node, create a file rc-nfs.yml and open it in an editor to include the following content:

  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: rc-nfs-test
  spec:
    replicas: 2
    selector:
      app: nginx
    template:
      metadata:
        labels:
          app: nginx
      spec:
        containers:
          - name: nginx
            image: nginx
            ports:
              - name: nginx
                containerPort: 80
            volumeMounts:
              - name: nfs
                mountPath: "/usr/share/nginx/html"
        volumes:
          - name: nfs
            persistentVolumeClaim:
              claimName: nfs
- Create the ReplicationController using the YAML file you have just created, by running the following command on the master node:

  $ kubectl create -f rc-nfs.yml
  replicationcontroller/rc-nfs-test created
- Check that the pods have been created:

  $ kubectl get pods
  NAME                READY   STATUS    RESTARTS   AGE
  rc-nfs-test-c5440   1/1     Running   0          54s
  rc-nfs-test-8997k   1/1     Running   0          54s
- On the NFS appliance, create an index file in the /nfsshare export, to test that the web server pods have access to this resource. For example:

  $ echo "This file is available on NFS" > /nfsshare/index.html
- You can either create a service to expose the web server ports so that you are able to check the output of the web server, or you can simply view the contents in the /usr/share/nginx/html folder on each pod, since the NFS share should be mounted onto this directory in each instance. For example, on the master node:

  $ kubectl exec rc-nfs-test-c5440 cat /usr/share/nginx/html/index.html
  This file is available on NFS
  $ kubectl exec rc-nfs-test-8997k cat /usr/share/nginx/html/index.html
  This file is available on NFS
- You can experiment further by shutting down a node where a pod is running. A new pod is spawned on a running node and instantly has access to the data on the NFS share. In this way, you can demonstrate data persistence and resilience during node failure, as shown in the example that follows this procedure.
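For example, you can watch the pods while the node is shut down and see the replacement pod being scheduled onto a surviving node:
$ kubectl get pods -o wide --watch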
5.3.3 Configuring iSCSI
In this example, it is assumed that an iSCSI service is already configured to expose a block device, as an iSCSI LUN, to all of the nodes in the cluster. Note that if your iSCSI server is hosted on Oracle Cloud Infrastructure, you must create ingress rules in the security list for the Virtual Cloud Network (VCN) subnet that you are using to host your Kubernetes nodes. The rules must be set to allow traffic on ports 860 and 3260.
Each worker node within the cluster must also have the iscsi-initiator-utils package installed:
# yum install iscsi-initiator-utils
You must manually edit the /etc/iscsi/initiatorname.iscsi file on all nodes of the cluster to add the initiator name (iqn) of the device. Restart the iscsid service once you have edited this file.
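For example, a sketch of this step on one node might look as follows; the InitiatorName value shown is purely illustrative and must be replaced with the iqn that is appropriate for your environment:
# vi /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2017-10.local.example:initiator1
# systemctl restart iscsid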
For more information on configuring iSCSI on Oracle Linux 7, see Oracle® Linux 7: Administrator's Guide.
The following steps describe a deployment using YAML files for each object:
- Create a PersistentVolume object in a YAML file. For example, on the master node, create a file pv-iscsi.yml and open it in an editor to include the following content:

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: iscsi-pv
  spec:
    capacity:
      storage: 12Gi
    accessModes:
      - ReadWriteOnce
    iscsi:
      targetPortal: 192.0.2.100:3260
      iqn: iqn.2017-10.local.example.server:disk1
      lun: 0
      fsType: 'ext4'
      readOnly: false

  Replace 12Gi with the size of the storage available. Replace 192.0.2.100:3260 with the IP address and port number of the iSCSI target in your environment. Replace iqn.2017-10.local.example.server:disk1 with the iqn for the device that you wish to use via iSCSI.
- Create the PersistentVolume using the YAML file you have just created, by running the following command on the master node:

  $ kubectl create -f pv-iscsi.yml
  persistentvolume/iscsi-pv created
- Create a PersistentVolumeClaim object in a YAML file. For example, on the master node, create a file pvc-iscsi.yml and open it in an editor to include the following content:

  kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: iscsi-pvc
  spec:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 12Gi

  Note that you can change the accessModes by changing the ReadWriteOnce value, as required. Supported modes for iSCSI include ReadWriteOnce and ReadOnlyMany. You can also change the quota available in this claim, by changing the value of the storage option from 12Gi to some other value.

  Note that with iSCSI, support for both read and write operations limits you to hosting all of your pods on a single node. The scheduler automatically ensures that pods with the same PersistentVolumeClaim run on the same worker node.
- Create the PersistentVolumeClaim using the YAML file you have just created, by running the following command on the master node:

  $ kubectl create -f pvc-iscsi.yml
  persistentvolumeclaim/iscsi-pvc created
- Check that the PersistentVolume and PersistentVolumeClaim have been created properly and that the PersistentVolumeClaim is bound to the correct volume:

  $ kubectl get pv,pvc
  NAME          CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
  pv/iscsi-pv   12Gi       RWX           Retain          Bound    default/iscsi-pvc                           25s

  NAME            STATUS   VOLUME     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
  pvc/iscsi-pvc   Bound    iscsi-pv   12Gi       RWX                          21s
- At this point you can set up pods that can use the PersistentVolumeClaim to bind to the PersistentVolume and use the resources available there. In the following example, a ReplicationController is used to set up two replica pods running web servers that use the PersistentVolumeClaim to mount the PersistentVolume onto a mountpath containing shared resources.
- Create a ReplicationController object in a YAML file. For example, on the master node, create a file rc-iscsi.yml and open it in an editor to include the following content:

  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: rc-iscsi-test
  spec:
    replicas: 2
    selector:
      app: nginx
    template:
      metadata:
        labels:
          app: nginx
      spec:
        containers:
          - name: nginx
            image: nginx
            ports:
              - name: nginx
                containerPort: 80
            volumeMounts:
              - name: iscsi
                mountPath: "/usr/share/nginx/html"
        volumes:
          - name: iscsi
            persistentVolumeClaim:
              claimName: iscsi-pvc
- Create the ReplicationController using the YAML file you have just created, by running the following command on the master node:

  $ kubectl create -f rc-iscsi.yml
  replicationcontroller "rc-iscsi-test" created
- Check that the pods have been created:

  $ kubectl get pods
  NAME                  READY   STATUS    RESTARTS   AGE
  rc-iscsi-test-05kdr   1/1     Running   0          9m
  rc-iscsi-test-wv4p5   1/1     Running   0          9m
- On any host where the iSCSI LUN can be mounted, mount the LUN and create an index file, to test that the web server pods have access to this resource. For example:

  # mount /dev/disk/by-path/ip-192.0.2.100\:3260-iscsi-iqn.2017-10.local.example.server\:disk1-lun-0 /mnt
  # echo "This file is available on iSCSI" > /mnt/index.html
- You can either create a service to expose the web server ports so that you are able to check the output of the web server, or you can simply view the contents in the /usr/share/nginx/html folder on each pod, since the iSCSI volume should be mounted onto this directory in each instance. For example, on the master node:

  $ kubectl exec rc-iscsi-test-05kdr cat /usr/share/nginx/html/index.html
  This file is available on iSCSI
  $ kubectl exec rc-iscsi-test-wv4p5 cat /usr/share/nginx/html/index.html
  This file is available on iSCSI