5.2 Pod Configuration Using a YAML Deployment

To simplify the creation of pods and their related requirements, you can create a deployment file that defines all of the elements that comprise the deployment. This deployment defines which images are used to generate the containers within the pod, along with any runtime requirements, as well as Kubernetes networking and storage requirements in the form of services that should be configured and volumes that may need to be mounted.

Deployments are described in detail at https://kubernetes.io/docs/concepts/workloads/controllers/deployment/.

Kubernetes deployment files are easily shared, and Kubernetes can also create a deployment from a remotely hosted file, allowing anyone to get a deployment running in minutes. For example, you can create a deployment by running the following command:

$ kubectl create -f https://example.com/deployment.yaml

In the following example, you will create two YAML deployment files. The first is used to create a deployment that runs MySQL Server with a persistent volume for its data store. You will also configure the services that allow other pods in the cluster to consume this resource.

The second deployment will run a phpMyAdmin container in a separate pod that will access the MySQL Server directly. That deployment will also create a NodePort service so that the phpMyAdmin interface can be accessed from outside of the Kubernetes cluster.

The following example illustrates how you can use YAML deployment files to define the scope and resources that you need to run a complete application.

Important

The examples here are provided for demonstration purposes only. They are not intended for production use and do not represent a preferred method of deployment or configuration.

MySQL Server Deployment

To create the MySQL Server Deployment, create a single text file mysql-db.yaml in an editor. The description here provides a breakdown of each of the objects as they are defined in the text file. All of these definitions can appear in the same file.

One problem when running databases within containers is that container storage is ephemeral: any data stored inside the container is lost when the container is removed. This means that data hosted in the database must be stored outside of the container itself. Kubernetes handles setting up these persistent data stores in the form of Persistent Volumes. There are a wide variety of Persistent Volume types. In a production environment, some kind of shared file system that is accessible to all nodes in the cluster would be the most appropriate implementation choice; however, for this simple example you will use the hostPath type. The hostPath type allows you to use a local disk on the node where the container is running.

In the Persistent Volume specification, we can define the size of the storage that should be dedicated for this purpose and the access modes that should be supported. For the hostPath type, the path where the data should be stored is also defined. In this case, we use the path /tmp/data for demonstration purposes. These parameters should be changed according to your own requirements.

The definition in the YAML file for the Persistent Volume object should appear similarly to the following:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/data"
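Once the deployment file has been loaded into Kubernetes (as shown later in this example), you can confirm that the Persistent Volume object was created and inspect its configuration:

```shell
$ kubectl get pv mysql-pv-volume
$ kubectl describe pv mysql-pv-volume
```

The STATUS column reports Available until a matching claim binds the volume, after which it reports Bound.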

A Persistent Volume object is an entity within Kubernetes that stands on its own as a resource. For a pod to use this resource, it must request access and abide by the rules applied to its claim for access. This is defined in the form of a Persistent Volume Claim. Pods effectively mount Persistent Volume Claims as their storage.

The definition in the YAML file for the Persistent Volume Claim object should appear similarly to the following:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
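Kubernetes binds the claim to a Persistent Volume with a matching storageClassName whose capacity and access modes satisfy the request. Once both objects are loaded, you can verify that the claim has bound successfully:

```shell
$ kubectl get pvc mysql-pv-claim
```

The STATUS column should report Bound, with mysql-pv-volume shown in the VOLUME column.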

It is important to define a service for the deployment. This specifies the TCP port used by the application that we intend to run in our pod. In this case, the MySQL server listens on port 3306. Most importantly, the name of the service can be used by other deployments to access this service within the cluster, regardless of the node where it is running. The service does not specify a service type, so it uses the default ClusterIP type and is only accessible to other components running on the cluster internal network. In addition, setting clusterIP: None makes this a headless service, so the service name resolves directly to the IP address of the pod that matches its selector. In this way, the MySQL server is isolated to requests from containers running in pods within the Kubernetes cluster.

The Service definition in the YAML file might look as follows:

---
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  labels:
    app: mysql
spec:
  selector:
    app: mysql
  ports:
    - port: 3306
  clusterIP: None
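Because clusterIP is set to None, the cluster DNS resolves the name mysql-service directly to the IP address of the pod selected by app: mysql, so any pod in the cluster can use mysql-service as a host name. As an illustrative check only, you could start a temporary client pod and connect by service name (the mysql-client pod name is arbitrary, and the pod is removed automatically on exit because of the --rm option):

```shell
$ kubectl run mysql-client --image=mysql:5.6 -it --rm --restart=Never -- \
      mysql -h mysql-service -u root -p
```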

A MySQL Server instance can be easily created as a Docker container running in a pod, using an official MySQL Docker image; this example uses the mysql:5.6 image specified in the pod definition. In the pod definition, specify the volume information to attach the Persistent Volume Claim that was defined previously for this purpose. Also, specify the container parameters, including the image that should be used, the container ports that are used, volume mount points and any environment variables required to run the container. In this case, we mount the Persistent Volume Claim onto /var/lib/mysql in each running container instance and we specify the MYSQL_ROOT_PASSWORD value as an environment variable, as required by the image.

---
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  volumes:
    - name: mysql-pv-storage
      persistentVolumeClaim:
       claimName: mysql-pv-claim
  containers:
    - image: mysql:5.6
      name: mysql
      ports:
        - containerPort: 3306
          name: mysql
      volumeMounts:
        - mountPath: /var/lib/mysql
          name: mysql-pv-storage
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: "password"

Replace the password value specified for the MYSQL_ROOT_PASSWORD environment variable with a better alternative, suited to your security requirements.
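A better alternative to a literal value in the deployment file is to store the password in a Kubernetes Secret and reference it from the container definition. The following sketch shows the general approach; the secret name mysql-root-pass is an assumption for illustration and is not part of this example's deployment file:

```yaml
# Create the secret from the command line first:
#   kubectl create secret generic mysql-root-pass --from-literal=password='<your-password>'
# Then replace the literal env value in the pod definition with a reference:
      env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-root-pass   # assumed secret name
              key: password
```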

When you have created your YAML deployment file, save it and then run:

$ kubectl create -f mysql-db.yaml
persistentvolume/mysql-pv-volume created
persistentvolumeclaim/mysql-pv-claim created
service/mysql-service created
pod/mysql created

All of the resources and components defined in the file are created and loaded in Kubernetes. You can use the kubectl command to view details of each component as you require.
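For example, the following commands list the objects created from the file and show the detailed state and logs of the MySQL pod:

```shell
$ kubectl get pv,pvc,service,pod
$ kubectl describe pod mysql
$ kubectl logs mysql
```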

phpMyAdmin Deployment

To demonstrate how deployments can interconnect and consume services provided by one another, it is possible to set up a phpMyAdmin Docker instance that connects to the backend MySQL server that you deployed in the first part of this example.

The phpMyAdmin deployment uses a standard Docker image to create a container running in a pod, and also defines a NodePort service that allows the web interface to be accessed from any node in the cluster.

Create a new file called phpmyadmin.yaml and open it in an editor to add the two component definitions described in the following text.

First, create the Service definition. This service maps a service port, which clients within the internal Kubernetes cluster network use, to the targetPort on which the container listens. Also specify the Service type and set it to NodePort, to make the service accessible from outside of the cluster network via any of the cluster nodes and the port forwarding service that the NodePort service type provides.

The declaration should look similar to the following:

apiVersion: v1
kind: Service
metadata:
  labels:
    name: phpmyadmin
  name: phpmyadmin
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    name: phpmyadmin
  type: NodePort
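By default, Kubernetes assigns the node port automatically from the cluster's node port range (30000-32767, unless reconfigured). If you need a predictable port, you can set it explicitly in the port entry; the value shown here is an assumption for illustration:

```yaml
  ports:
    - port: 80
      targetPort: 80
      nodePort: 31485   # must fall within the cluster's node port range
```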

Finally, define the pod where the phpMyAdmin container is loaded. Here, you can specify the Docker image that should be used for this container and the port that the container uses. You can also specify the environment variables required to run this image. Notably, the Docker image requires you to set the environment variable PMA_HOST, which should provide the IP address or resolvable domain name for the MySQL server. Since we cannot predict which IP address the MySQL pod will receive, we can rely on Kubernetes to take care of this by providing the mysql-service name as the value here. Kubernetes automatically links the two pods using this service definition.

The Pod definition should look similar to the following:

---
apiVersion: v1
kind: Pod
metadata:
  name: phpmyadmin
  labels:
    name: phpmyadmin
spec:
  containers:
    - name: phpmyadmin
      image: phpmyadmin/phpmyadmin
      env:
        - name: PMA_HOST
          value: mysql-service
      ports:
        - containerPort: 80
          name: phpmyadmin

Save the file and then run the kubectl create command to load the YAML file into a deployment.

$ kubectl create -f phpmyadmin.yaml
service/phpmyadmin created
pod/phpmyadmin created

To check that this is working as expected, you need to determine what port is being used for the port forwarding provided by the NodePort service:

$ kubectl get services phpmyadmin
NAME         CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
phpmyadmin   10.110.16.56   <nodes>       80:31485/TCP   1d

In this example output, port 80 on the cluster network is being mapped to port 31485 on each of the cluster nodes. Open a browser to point to any of the cluster nodes on the specified port mapping. For example: http://master.example.com:31485/. You should be presented with the phpMyAdmin login page, and you should be able to log into phpMyAdmin as root with the password that you specified as the MYSQL_ROOT_PASSWORD environment variable when you deployed the MySQL server.
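You can also check from the command line that the interface is being served on the mapped port before opening a browser; the host name master.example.com is taken from the example above:

```shell
$ curl -I http://master.example.com:31485/
```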