5.3.3 Configuring iSCSI

In this example, it is assumed that an iSCSI service is already configured to expose a block device, as an iSCSI LUN, to all of the nodes in the cluster. Note that if your iSCSI server is hosted on Oracle Cloud Infrastructure, you must create ingress rules in the security list for the Virtual Cloud Network (VCN) subnet that you are using to host your Kubernetes nodes. The rules must be set to allow traffic on ports 860 and 3260.

Each worker node within the cluster must also have the iscsi-initiator-utils package installed:

# yum install iscsi-initiator-utils

You must manually edit the /etc/iscsi/initiatorname.iscsi file on each node in the cluster to set the node's initiator name (IQN), which the iSCSI target uses to identify the node. Restart the iscsid service after you have edited this file.
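For example, the /etc/iscsi/initiatorname.iscsi file contains a single InitiatorName entry. The IQN shown here is an illustrative placeholder; substitute the value used in your environment:

# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2017-10.local.example.server:node1

# systemctl restart iscsid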

For more information on configuring iSCSI on Oracle Linux 7, see Oracle® Linux 7: Administrator's Guide.

The following steps describe a deployment using YAML files for each object:

  1. Create a PersistentVolume object in a YAML file. For example, on the master node, create a file pv-iscsi.yml and open it in an editor to include the following content:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: iscsi-pv
    spec:
      capacity:
        storage: 12Gi
      accessModes:
        - ReadWriteOnce
      iscsi:
        targetPortal: 192.0.2.100:3260
        iqn: iqn.2017-10.local.example.server:disk1
        lun: 0
        fsType: 'ext4'
        readOnly: false

    Replace 12Gi with the size of the storage available. Replace 192.0.2.100:3260 with the IP address and port number of the iSCSI target in your environment. Replace iqn.2017-10.local.example.server:disk1 with the IQN of the target device that you wish to use over iSCSI.
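    If you do not already know the target's IQN, you can usually discover it from any node with the iscsiadm utility. The portal address below is the example value used in this section, and the output is reported in a form similar to the following:

    # iscsiadm -m discovery -t sendtargets -p 192.0.2.100
    192.0.2.100:3260,1 iqn.2017-10.local.example.server:disk1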

  2. Create the PersistentVolume using the YAML file you have just created, by running the following command on the master node:

    $ kubectl create -f pv-iscsi.yml
    persistentvolume/iscsi-pv created
  3. Create a PersistentVolumeClaim object in a YAML file. For example, on the master node, create a file pvc-iscsi.yml and open it in an editor to include the following content:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: iscsi-pvc
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 12Gi

    Note that you can change the accessModes by changing the ReadWriteOnce value, as required. Supported modes for iSCSI include ReadWriteOnce and ReadOnlyMany. You can also change the quota available in this claim, by changing the value of the storage option from 12Gi to some other value.

    Note that with iSCSI, the ReadWriteOnce access mode, which supports both read and write operations, limits you to hosting all of your pods on a single node. The scheduler automatically ensures that pods with the same PersistentVolumeClaim run on the same worker node.
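    For example, a claim requesting read-only access from many nodes might look like the following. The name iscsi-pvc-ro is illustrative, and the PersistentVolume that the claim binds to must also list ReadOnlyMany in its accessModes:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: iscsi-pvc-ro
    spec:
      accessModes:
        - ReadOnlyMany
      resources:
        requests:
          storage: 12Gi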

  4. Create the PersistentVolumeClaim using the YAML file you have just created, by running the following command on the master node:

    $ kubectl create -f pvc-iscsi.yml
    persistentvolumeclaim/iscsi-pvc created
  5. Check that the PersistentVolume and PersistentVolumeClaim have been created properly and that the PersistentVolumeClaim is bound to the correct volume:

    $ kubectl get pv,pvc
    NAME         CAPACITY ACCESSMODES  RECLAIMPOLICY STATUS  CLAIM          STORAGECLASS REASON  AGE
    pv/iscsi-pv  12Gi     RWO          Retain        Bound   default/iscsi-pvc                   25s
    
    NAME            STATUS    VOLUME     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
    pvc/iscsi-pvc   Bound     iscsi-pv   12Gi       RWO                          21s
    
  6. At this point you can set up pods that can use the PersistentVolumeClaim to bind to the PersistentVolume and use the resources available there. In the following example, a ReplicationController is used to set up two replica pods running web servers that use the PersistentVolumeClaim to mount the PersistentVolume at a mount path containing shared resources.

    1. Create a ReplicationController object in a YAML file. For example, on the master node, create a file rc-iscsi.yml and open it in an editor to include the following content:

      apiVersion: v1
      kind: ReplicationController
      metadata:
        name: rc-iscsi-test
      spec:
        replicas: 2
        selector:
          app: nginx
        template:
          metadata:
            labels:
              app: nginx
          spec:
            containers:
            - name: nginx
              image: nginx
              ports:
                - name: nginx
                  containerPort: 80
              volumeMounts:
              - name: iscsi
                mountPath: "/usr/share/nginx/html"
            volumes:
            - name: iscsi
              persistentVolumeClaim:
                claimName: iscsi-pvc
    2. Create the ReplicationController using the YAML file you have just created, by running the following command on the master node:

      $ kubectl create -f rc-iscsi.yml
      replicationcontroller "rc-iscsi-test" created
    3. Check that the pods have been created:

      $ kubectl get pods
      NAME                  READY     STATUS    RESTARTS   AGE
      rc-iscsi-test-05kdr   1/1       Running   0          9m
      rc-iscsi-test-wv4p5   1/1       Running   0          9m
    4. On any host where the iSCSI LUN can be mounted, mount the LUN and create an index file, to test that the web server pods have access to this resource. For example:

      # mount /dev/disk/by-path/ip-192.0.2.100\:3260-iscsi-iqn.2017-10.local.example.server\:disk1-lun-0 /mnt
      # echo "This file is available on iSCSI" > /mnt/index.html
    5. You can either create a service to expose the web server ports so that you are able to check the output of the web server, or you can simply view the contents of the /usr/share/nginx/html folder on each pod, since the iSCSI volume should be mounted on this directory in each instance. For example, on the master node:

      $ kubectl exec rc-iscsi-test-05kdr cat /usr/share/nginx/html/index.html
      This file is available on iSCSI
      $ kubectl exec rc-iscsi-test-wv4p5 cat /usr/share/nginx/html/index.html
      This file is available on iSCSI
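      Alternatively, a minimal NodePort Service could expose the web server pods. The name iscsi-test-svc, the file name svc-iscsi.yml, and the nodePort value 30080 are arbitrary choices for this sketch:

      kind: Service
      apiVersion: v1
      metadata:
        name: iscsi-test-svc
      spec:
        type: NodePort
        selector:
          app: nginx
        ports:
        - port: 80
          targetPort: 80
          nodePort: 30080

      After creating the Service with kubectl create -f svc-iscsi.yml, the index.html file should be served on port 30080 of any worker node.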