5.3.2 Configuring NFS

In this example, it is assumed that an NFS appliance is already configured and accessible to all of the nodes in the cluster. Note that if your NFS appliance is hosted on Oracle Cloud Infrastructure, you must create ingress rules in the security list for the Virtual Cloud Network (VCN) subnet that you are using to host your Kubernetes nodes. The rules must allow inbound traffic on ports 2049 and 20049 for NFS access and NFS mount operations.
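
For reference, ingress rules for these two ports, expressed in the JSON format used by the OCI security list API, might resemble the following sketch. This is an illustration only: the source CIDR 10.0.0.0/16 is an assumed value for the VCN subnet hosting your Kubernetes nodes, and protocol 6 is TCP; substitute values that match your environment.

[
  {
    "protocol": "6",
    "source": "10.0.0.0/16",
    "isStateless": false,
    "tcpOptions": { "destinationPortRange": { "min": 2049, "max": 2049 } }
  },
  {
    "protocol": "6",
    "source": "10.0.0.0/16",
    "isStateless": false,
    "tcpOptions": { "destinationPortRange": { "min": 20049, "max": 20049 } }
  }
]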

Each worker node within the cluster must also have the nfs-utils package installed:

# yum install nfs-utils
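
Before creating any Kubernetes objects, you can optionally confirm that a worker node is able to reach the export. A minimal check, assuming the example appliance address 192.0.2.100 and export path /nfsshare used later in this section:

# showmount -e 192.0.2.100
# mount -t nfs 192.0.2.100:/nfsshare /mnt && ls /mnt && umount /mnt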

The following steps describe a deployment using YAML files for each object:

  1. Create a PersistentVolume object in a YAML file. For example, on the master node, create a file pv-nfs.yml and open it in an editor to include the following content:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: nfs
    spec:
      capacity:
        storage: 1Gi
      accessModes:
        - ReadWriteMany
      nfs:
        server: 192.0.2.100
        path: "/nfsshare"

    Replace 1Gi with the size of the storage available. Replace 192.0.2.100 with the IP address of the NFS appliance in your environment. Replace /nfsshare with the exported share name on your NFS appliance.

  2. Create the PersistentVolume using the YAML file you have just created, by running the following command on the master node:

    $ kubectl create -f pv-nfs.yml
    persistentvolume/nfs created
  3. Create a PersistentVolumeClaim object in a YAML file. For example, on the master node, create a file pvc-nfs.yml and open it in an editor to include the following content:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: nfs
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Gi

    Note that you can change the access mode, as required, by replacing the ReadWriteMany value with another supported mode, such as ReadWriteOnce or ReadOnlyMany. You can also change the amount of storage requested by this claim by changing the value of the storage option from 1Gi to some other value.

  4. Create the PersistentVolumeClaim using the YAML file you have just created, by running the following command on the master node:

    $ kubectl create -f pvc-nfs.yml
    persistentvolumeclaim/nfs created
  5. Check that the PersistentVolume and PersistentVolumeClaim have been created properly and that the PersistentVolumeClaim is bound to the correct volume:

    $ kubectl get pv,pvc
    NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY  STATUS   CLAIM         STORAGECLASS   REASON    AGE
    pv/nfs    1Gi        RWX           Retain         Bound    default/nfs                            7m
    
    NAME          STATUS    VOLUME    CAPACITY   ACCESSMODES   STORAGECLASS   AGE
    pvc/nfs       Bound     nfs       1Gi        RWX                          2m
  6. At this point, you can set up pods that use the PersistentVolumeClaim to bind to the PersistentVolume and access the resources that are available there. In the example steps that follow, a ReplicationController is used to set up two replica pods running web servers that use the PersistentVolumeClaim to mount the PersistentVolume onto a mount path containing the shared resources.

    1. Create a ReplicationController object in a YAML file. For example, on the master node, create a file rc-nfs.yml and open it in an editor to include the following content:

      apiVersion: v1
      kind: ReplicationController
      metadata:
        name: rc-nfs-test
      spec:
        replicas: 2
        selector:
          app: nginx
        template:
          metadata:
            labels:
              app: nginx
          spec:
            containers:
            - name: nginx
              image: nginx
              ports:
                - name: nginx
                  containerPort: 80
              volumeMounts:
                - name: nfs
                  mountPath: "/usr/share/nginx/html"
            volumes:
            - name: nfs
              persistentVolumeClaim:
                claimName: nfs
    2. Create the ReplicationController using the YAML file you have just created, by running the following command on the master node:

      $ kubectl create -f rc-nfs.yml
      replicationcontroller/rc-nfs-test created
    3. Check that the pods have been created:

      $ kubectl get pods
      NAME                READY     STATUS    RESTARTS   AGE
      rc-nfs-test-c5440   1/1       Running   0          54s
      rc-nfs-test-8997k   1/1       Running   0          54s
    4. On the NFS appliance, create an index file in the /nfsshare export, to test that the web server pods have access to this resource. For example:

      $ echo "This file is available on NFS" > /nfsshare/index.html
    5. You can either create a Service to expose the web server ports so that you can check the output of the web server (a NodePort Service sketch is provided after these steps), or you can simply view the contents of the /usr/share/nginx/html directory on each pod, since the NFS share is mounted onto this directory in each instance. For example, on the master node:

      $ kubectl exec rc-nfs-test-c5440 -- cat /usr/share/nginx/html/index.html
      This file is available on NFS
      $ kubectl exec rc-nfs-test-8997k -- cat /usr/share/nginx/html/index.html
      This file is available on NFS
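
If you choose to create a Service in the final sub-step above, a NodePort Service similar to the following sketch would expose the web servers outside of the cluster. This is an illustrative example only: the file name svc-nfs.yml and the Service name svc-nfs-test are arbitrary, the selector matches the app: nginx label used by the example ReplicationController, and 30080 is an arbitrary port within the default NodePort range.

apiVersion: v1
kind: Service
metadata:
  name: svc-nfs-test
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080

Create the Service with kubectl create -f svc-nfs.yml on the master node. You should then be able to retrieve the shared index.html content from port 30080 on any worker node, for example with curl http://<worker-node-ip>:30080/.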

You can experiment further by shutting down a node where a pod is running. Once the node is detected as unavailable, a new pod is spawned on a running node and has immediate access to the data on the NFS share. In this way, you can demonstrate data persistence and resilience during node failure.
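
One way to run this experiment without powering off a machine is to drain a worker node, which evicts its pods and lets the ReplicationController reschedule them on another node. The following sketch uses standard kubectl commands; worker1.example.com is a placeholder for one of your worker node names, and the pod name in the fourth command is whichever replacement pod appears in the output of the previous command.

$ kubectl get pods -o wide                  # note which node each pod is running on
$ kubectl drain worker1.example.com --ignore-daemonsets
$ kubectl get pods -o wide                  # a replacement pod is scheduled on another node
$ kubectl exec <replacement-pod-name> -- cat /usr/share/nginx/html/index.html
$ kubectl uncordon worker1.example.com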