Creating a Persistent Volume Claim (PVC)
Storage in a container's root file system is ephemeral: data stored there can disappear when the container is deleted and recreated. To provide a durable location and prevent data from being lost, you can create and use persistent volumes to store data outside of containers.
A persistent volume offers persistent storage that enables your data to remain intact, regardless of whether the containers to which the storage is connected are terminated.
A persistent volume claim (PVC) is a request for storage, which is met by binding the PVC to a persistent volume (PV). A PVC provides an abstraction layer to the underlying storage.
With Oracle Cloud Infrastructure, you can provision persistent volume claims:
- By attaching volumes from the Oracle Cloud Infrastructure Block Volume service. The volumes are connected to clusters created by Container Engine for Kubernetes using FlexVolume and CSI (Container Storage Interface) volume plugins deployed on the clusters. See Provisioning PVCs on the Block Volume Service.
- By mounting file systems in the Oracle Cloud Infrastructure File Storage service. The File Storage service file systems are mounted inside containers running on clusters created by Container Engine for Kubernetes using a CSI (Container Storage Interface) driver deployed on the clusters. See Provisioning PVCs on the File Storage Service.
By default, Oracle encrypts customer data at rest in persistent storage. Oracle manages this default encryption with no action required on your part.
For more information about persistent volumes, persistent volume claims, and volume plugins, see the Kubernetes documentation.
Provisioning PVCs on the Block Volume Service
You can provision PVCs on the Block Volume service using either the CSI volume plugin or the FlexVolume volume plugin. Note the following differences between the two volume plugins:
- New functionality is only being added to the CSI volume plugin, not to the FlexVolume volume plugin (although Kubernetes developers will continue to maintain the FlexVolume volume plugin).
- Unlike the FlexVolume volume plugin, the CSI volume plugin does not require access to the underlying operating system or root file system dependencies.
The StorageClass specified for a PVC controls which volume plugin to use to connect to Block Volume service volumes. Two storage classes are defined by default: oci-bv for the CSI volume plugin, and oci for the FlexVolume volume plugin. If you don't explicitly specify a value for storageClassName in the yaml file that defines the PVC, the cluster's default storage class is used. In clusters created by Container Engine for Kubernetes, the oci storage class (used by the FlexVolume volume plugin) is initially set up as the default.
In the case of the CSI volume plugin, the CSI topology feature ensures that worker nodes and volumes are located in the same availability domain. In the case of the FlexVolume volume plugin, you can use the matchLabels element to select the availability domain in which a persistent volume claim is provisioned. Note that you do not use the matchLabels element with the CSI volume plugin.
Regardless of the volume plugin you choose to use, if a cluster is in a different compartment to its worker nodes, you must create an additional policy to enable access to Block Volume service volumes. This situation arises when the subnet specified for a node pool belongs to a different compartment to the cluster. To enable the worker nodes to access Block Volume service volumes, create the additional policy with both the following policy statements:
ALLOW any-user to manage volumes in TENANCY where request.principal.type = 'cluster'
ALLOW any-user to manage volume-attachments in TENANCY where request.principal.type = 'cluster'
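The two statements can also be put in place from the command line. The following is a minimal sketch using the OCI CLI, under the assumption that the CLI is configured for your tenancy; the policy name, description, and TENANCY_OCID variable are placeholders, not values from this topic:

```shell
# Sketch: create the additional policy with the OCI CLI.
# The policy name, description, and TENANCY_OCID are placeholders.
STMT1="ALLOW any-user to manage volumes in TENANCY where request.principal.type = 'cluster'"
STMT2="ALLOW any-user to manage volume-attachments in TENANCY where request.principal.type = 'cluster'"
printf '%s\n' "$STMT1" "$STMT2"

# Uncomment to create the policy in the tenancy's root compartment:
# oci iam policy create \
#   --compartment-id "$TENANCY_OCID" \
#   --name "oke-block-volume-access" \
#   --description "Allow OKE cluster principals to manage block volumes" \
#   --statements "[\"$STMT1\", \"$STMT2\"]"
```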
To explicitly specify the volume plugin to use to connect to the Block Volume service when provisioning a persistent volume claim, specify a value for storageClassName in the yaml file that defines the PVC:
- to use the CSI volume plugin, specify storageClassName: "oci-bv"
- to use the FlexVolume volume plugin, specify storageClassName: "oci"
Note the following:
- The minimum amount of persistent storage that a PVC can request is 50 gigabytes. If the request is for less than 50 gigabytes, the request is rounded up to 50 gigabytes.
- If you want to be able to increase the amount of persistent storage that a PVC can request, set allowVolumeExpansion: true in the definition of the storage class specified for the PVC. See Expanding a Block Volume.
- When you create a cluster, you can optionally define tags to apply to block volumes created when persistent volume claims (PVCs) are defined. Tagging enables you to group disparate resources across compartments, and also enables you to annotate resources with your own metadata. See Applying Tags to Block Volumes.
Creating a PVC on a Block Volume Using the CSI Volume Plugin
You can dynamically provision a block volume using the CSI volume plugin specified by the oci-bv storage class's definition (provisioner: blockvolume.csi.oraclecloud.com). Dynamic provisioning is useful when, for example, the cluster administrator has not created any suitable PVs that match the PVC request.
You define a PVC in a file called csi-bvs-pvc.yaml. For example:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mynginxclaim
spec:
  storageClassName: "oci-bv"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
Enter the following command to create the PVC from the csi-bvs-pvc.yaml file:
kubectl create -f csi-bvs-pvc.yaml
The output from the above command confirms the creation of the PVC:
persistentvolumeclaim "mynginxclaim" created
Verify that the PVC has been created by running kubectl get pvc:
kubectl get pvc
The output from the above command shows the current status of the PVC:
NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE
mynginxclaim Pending oci-bv 4m
The PVC has a status of Pending because the oci-bv storage class's definition includes volumeBindingMode: WaitForFirstConsumer.
You can use this PVC when creating other objects, such as pods. For example, you could create a new pod from the following pod definition, which instructs the system to use the mynginxclaim PVC as the pod's data volume, mounted by the pod at /usr/share/nginx/html.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx:latest
      ports:
        - name: http
          containerPort: 80
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: mynginxclaim
Having created the new pod, you can verify that the PVC has been bound to a new persistent volume by entering:
kubectl get pvc
The output from the above command confirms that the PVC has been bound:
NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE
mynginxclaim Bound ocid1.volume.oc1.iad.<unique_ID> 50Gi RWO oci-bv 4m
You can verify that the pod is using the new persistent volume claim by entering:
kubectl describe pod nginx
Creating a PVC From an Existing Block Volume or a Backup
You can create a PVC from an existing block volume or a block volume backup for use by the FlexVolume volume plugin. This is useful when, for example, the cluster administrator has created a block volume backup for you to use when provisioning a new persistent volume claim. Such a block volume backup might contain data ready for use by other objects, such as pods.
You define a PVC in a file called flex-pvcfrombackup.yaml, using the volume.beta.kubernetes.io/oci-volume-source annotation element to specify the source of the block volume to use when provisioning a new persistent volume claim with the FlexVolume volume plugin. You can specify the OCID of either a block volume or a block volume backup as the value of the annotation. In this example, you specify the OCID of the block volume backup created by the cluster administrator. For example:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myvolume
  annotations:
    volume.beta.kubernetes.io/oci-volume-source: ocid1.volumebackup.oc1.iad.abuw...
spec:
  selector:
    matchLabels:
      failure-domain.beta.kubernetes.io/zone: US-ASHBURN-AD-1
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
Note that the flex-pvcfrombackup.yaml file includes the matchLabels element, which is only applicable in the case of the FlexVolume volume plugin.
Enter the following command to create the PVC from the flex-pvcfrombackup.yaml file:
kubectl create -f flex-pvcfrombackup.yaml
The output from the above command confirms the creation of the PVC:
persistentvolumeclaim "myvolume" created
Verify that the PVC has been created and bound to a new persistent volume created from the volume backup by entering:
kubectl get pvc
The output from the above command shows the current status of the PVC:
NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE
myvolume Bound ocid1.volume.oc1.iad.<unique_ID> 50Gi RWO oci 4m
You can use the new persistent volume created from the volume backup when defining other objects, such as pods. For example, the following pod definition instructs the system to use the myvolume PVC as the pod's data volume, mounted by the pod at /usr/share/nginx/html.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx:latest
      ports:
        - name: http
          containerPort: 80
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: myvolume
Having created the new pod, you can verify that it is running and using the new persistent volume claim by entering:
kubectl describe pod nginx
In the FlexVolume example in this topic, the PVC requests storage in an availability domain in the Ashburn region using the failure-domain.beta.kubernetes.io/zone label. For more information about using this label (and the shortened versions of availability domain names to specify), see failure-domain.beta.kubernetes.io/zone.
Encrypting Data At Rest and Data In Transit with the Block Volume Service
The Oracle Cloud Infrastructure Block Volume service always encrypts all block volumes and volume backups at rest by using the Advanced Encryption Standard (AES) algorithm with 256-bit encryption. By default all volumes and their backups are encrypted using the Oracle-provided encryption keys. Each time a volume is cloned or restored from a backup the volume is assigned a new unique encryption key.
You have the option to encrypt all of your volumes and their backups using keys that you own and manage in the Vault service (for more information, see Overview of Vault). If you do not configure a volume to use the Vault service, or if you later unassign a key from the volume, the Block Volume service uses the Oracle-provided encryption key instead. This applies to both encryption at rest and paravirtualized in-transit encryption.
All the data moving between the instance and the block volume is transferred over an internal and highly secure network. If you have specific compliance requirements related to the encryption of the data while it is moving between the instance and the block volume, the Block Volume service provides the option to enable in-transit encryption for paravirtualized volume attachments on virtual machine (VM) instances. Some bare metal shapes support in-transit encryption for the instance's iSCSI-attached block volumes.
For more information about block volume encryption, and in-transit encryption support, see Block Volume Encryption.
When Kubernetes PVCs are backed by the Block Volume service, you choose how block volumes are encrypted by specifying:
- The master encryption key to use, by setting the kms-key-id property in the Kubernetes storage class's definition. You can specify the OCID of a master encryption key in the Vault service.
- How the block volume is attached to the compute instance, by setting the attachment-type property in the Kubernetes storage class's definition to either iscsi or paravirtualized.
- Whether in-transit encryption is enabled for each node pool in a cluster, by setting the node pool's isPvEncryptionInTransitEnabled property (using the CLI, the API, or the node pool's Use in transit encryption: option in the Console).
The interaction of the settings you specify determines how block volumes are encrypted, as shown in the table:
| Node pool isPvEncryptionInTransitEnabled property set to: | Storage class kms-key-id property set to: | Storage class attachment-type property set to: | Is data encrypted at rest? | Is data encrypted in transit? | Notes |
|---|---|---|---|---|---|
| true | OCID of a key in Vault | paravirtualized | Yes (user-managed key) | Yes (user-managed key) | |
| true | OCID of a key in Vault | iscsi | Error | Error | The PV cannot be provisioned because the attachment-type property must be set to paravirtualized when isPvEncryptionInTransitEnabled is set to true. |
| true | not set | paravirtualized | Yes (Oracle-managed key) | Yes (Oracle-managed key) | |
| true | not set | iscsi | Error | Error | The PV cannot be provisioned because the attachment-type property must be set to paravirtualized when isPvEncryptionInTransitEnabled is set to true. |
| false | OCID of a key in Vault | paravirtualized | Yes (user-managed key) | No | |
| false | OCID of a key in Vault | iscsi | Yes (user-managed key) | No | |
| false | not set | paravirtualized | Yes (Oracle-managed key) | No | |
| false | not set | iscsi | Yes (Oracle-managed key) | No | |
Before you can create a cluster for which you want to manage the master encryption key yourself, you have to:
- Create a suitable master encryption key in Vault (or obtain the OCID of such a key). See Managing Keys.
- Create a policy granting access to the master encryption key. See Create Policy to Access User-Managed Encryption Keys for Encrypting Boot and Block Volumes.
For more information about key rotation in the Vault service, see To rotate a master encryption key.
Example: Configuring a storage class to enable at-rest and in-transit encryption using the default Oracle-managed key
To provision a PVC on a block volume, using a master encryption key managed by Oracle to encrypt data at rest (and optionally in transit), define a storage class and set:
- provisioner to blockvolume.csi.oraclecloud.com
- attachment-type to paravirtualized
For example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: bv-encrypted-storage-class
provisioner: blockvolume.csi.oraclecloud.com
parameters:
  attachment-type: "paravirtualized"
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
You can then create a PVC that is provisioned by the storage class you have created.
Having defined the storage class and created the PVC, set each node pool's isPvEncryptionInTransitEnabled property to true (using the CLI, the API, or the node pool's Use in transit encryption: option in the Console). Note that encryption of data in transit is only supported in some situations (see Encrypting Data At Rest and Data In Transit with the Block Volume Service).
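If you prefer the CLI to the Console for this step, the node pool property can be set with the Container Engine for Kubernetes CLI. The following is a sketch only; the node pool OCID is a placeholder, and you should verify the flag name against your installed CLI version:

```shell
# Sketch: enable in-transit PV encryption on an existing node pool using the OCI CLI.
# NODE_POOL_ID is a placeholder; verify the flag name against your CLI version.
NODE_POOL_ID="ocid1.nodepool.oc1.iad.aaaaexample"
echo "would update node pool: $NODE_POOL_ID"

# Uncomment to apply:
# oci ce node-pool update \
#   --node-pool-id "$NODE_POOL_ID" \
#   --is-pv-encryption-in-transit-enabled true
```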
Example: Configuring a storage class to enable at-rest and in-transit encryption using a key that you manage
To provision a PVC on a block volume, using a master encryption key managed by you to encrypt data at rest (and optionally in transit), you have to:
- Create a suitable master encryption key in Vault (or obtain the OCID of such a key). See Managing Keys.
- Create a policy granting access to the master encryption key. See Create Policy to Access User-Managed Encryption Keys for Encrypting Boot and Block Volumes.
Having created a suitable master encryption key and policy, define a storage class and set:
- provisioner to blockvolume.csi.oraclecloud.com
- attachment-type to paravirtualized
- kms-key-id to the OCID of the master encryption key in the Vault service that you want to use to encrypt data
For example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: bv-user-encrypted-storage-class
provisioner: blockvolume.csi.oraclecloud.com
parameters:
  attachment-type: "paravirtualized"
  kms-key-id: "ocid1.key.oc1.iad.anntl______usjh"
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
You can then create a PVC that is provisioned by the storage class you have created.
Having defined the storage class and created the PVC, set each node pool's isPvEncryptionInTransitEnabled property to true (using the CLI, the API, or the node pool's Use in transit encryption: option in the Console). Note that encryption of data in transit is only supported in some situations (see Encrypting Data At Rest and Data In Transit with the Block Volume Service).
Expanding a Block Volume
When a PVC is created using the CSI volume plugin (provisioner: blockvolume.csi.oraclecloud.com), you can expand the volume size online. By doing so, you make it possible to initially deploy applications with a certain amount of storage, and then subsequently increase the available storage without any downtime.
If you want to support storage request increases, set allowVolumeExpansion: true in the definition of the storage class that you specify for the PVC. For example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-bv
provisioner: blockvolume.csi.oraclecloud.com
parameters:
  attachment-type: "paravirtualized"
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
The default oci-bv storage class for the CSI volume plugin already includes allowVolumeExpansion: true.
To expand the size of a volume, edit the PVC manifest and update the volume size, and then apply the manifest. When the disk is next rescanned to enable the operating system to identify the expanded volume size (which can take a few minutes), the increased storage automatically becomes available to pods using the PVC. The pods do not have to be restarted.
Enter the following command to confirm the PVC has been bound to a newly-enlarged block volume:
kubectl get pvc <pvc_name> -o yaml
Note the following:
- Volume expansion is supported in clusters running Kubernetes 1.19 or later.
- The default oci-bv storage class for the CSI volume plugin is configured with allowVolumeExpansion: true in clusters running Kubernetes 1.19 or later. Definitions of oci-bv storage classes in existing clusters running Kubernetes 1.19 or later are automatically edited to set allowVolumeExpansion: true.
- You cannot reduce the size of a block volume. You can only specify a larger value than the block volume's current size. If you update a PVC manifest to request less storage than previously requested, the storage request fails.
- For more information about increasing block volume sizes in the Block Volume service, see Resizing a Volume. In particular, note the recommendation to create a backup before resizing a block volume.
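As an alternative to editing and re-applying the full PVC manifest, the storage request can also be updated in place with kubectl patch. The following is a sketch; the PVC name (my-pvc) and target size are illustrative:

```shell
# Sketch: request more storage for an existing PVC by patching it in place.
# The PVC name (my-pvc) and target size (200Gi) are illustrative values.
PATCH='{"spec":{"resources":{"requests":{"storage":"200Gi"}}}}'
echo "$PATCH"

# Uncomment to apply against a live cluster:
# kubectl patch pvc my-pvc --type merge -p "$PATCH"
```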
Example: Expanding the block volume backing a PVC
Create a manifest for a PVC provisioned by the oci-bv storage class and include a request for storage. For example, you might initially set storage: 100Gi to request 100 GB of storage for the PVC, in a file called csi-bvs-pvc-exp.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  storageClassName: oci-bv
  resources:
    requests:
      storage: 100Gi
  volumeName: pvc-bv1
Enter the following command to create the PVC from the csi-bvs-pvc-exp.yaml file:
kubectl apply -f csi-bvs-pvc-exp.yaml
Subsequently, you might find you need to increase the amount of storage available to the PVC. For example, you might change the manifest and set storage: 200Gi:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  storageClassName: oci-bv
  resources:
    requests:
      storage: 200Gi
  volumeName: pvc-bv1
After you apply the manifest, the PV that provisions the PVC is increased to 200 GB. The manifest update triggers the Block Volume service to increase the size of the existing block volume to 200 GB. When the disk is next rescanned (which can take a few minutes), the increased storage automatically becomes available to pods using the PVC.
Specifying Block Volume Performance
Block volumes in the Block Volume service can be configured for different levels of performance, according to expected workload I/O requirements. Block volume performance is expressed in volume performance units (VPUs). A number of performance levels are available, including:
- Lower Cost (0 VPUs)
- Balanced (10 VPUs)
- Higher Performance (20 VPUs)
By default, block volumes are configured for the Balanced performance level (10 VPUs). For more information about the different block volume performance levels, see Block Volume Performance Levels.
When you define a PVC using the CSI volume plugin (provisioner: blockvolume.csi.oraclecloud.com), you can specify a different block volume performance level in the storage class definition that is appropriate for the expected workload.
To create a PVC backed by a block volume with a Lower Cost, Balanced, or Higher Performance level, set vpusPerGB in the storage class definition as follows:
- for a Lower Cost performance level, set vpusPerGB: "0"
- for a Balanced performance level, set vpusPerGB: "10"
- for a Higher Performance level, set vpusPerGB: "20"
For example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: oci-high
provisioner: blockvolume.csi.oraclecloud.com
parameters:
  vpusPerGB: "20"
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
The value of vpusPerGB must be "0", "10", or "20". Other values are not supported.
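Because an unsupported value prevents the block volume from being provisioned, it can be worth validating vpusPerGB before writing it into a storage class definition. A small sketch (the helper name is illustrative):

```shell
# Sketch: reject unsupported vpusPerGB values before they reach a storage class
# definition. Only "0", "10", and "20" are accepted; anything else fails.
check_vpus() {
  case "$1" in
    0|10|20) return 0 ;;
    *) echo "unsupported vpusPerGB: $1" >&2; return 1 ;;
  esac
}

check_vpus 20 && echo "vpusPerGB 20 is supported"  # prints "vpusPerGB 20 is supported"
```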
Create a manifest for a PVC provisioned by the oci-high storage class and include a request for storage. For example, in a file called csi-bvs-pvc-perf.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: oci-pvc-high
spec:
  storageClassName: oci-high
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
Enter the following command to create the PVC from the csi-bvs-pvc-perf.yaml file:
kubectl apply -f csi-bvs-pvc-perf.yaml
Having created the PVC, you can use it when creating other objects, such as pods. For example, you could create a new pod from the following pod definition, which instructs the system to use the oci-pvc-high PVC:
apiVersion: v1
kind: Pod
metadata:
  name: pod-high
spec:
  containers:
    - name: app
      image: busybox:latest
      command: ["/bin/sh"]
      args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: oci-pvc-high
When you create the pod, a new block volume is created in the Block Volume service to back the PVC. The new block volume has the performance level you specified in the oci-high storage class definition.
Note that you cannot subsequently change the performance level of a block volume backing a PVC. Instead, you have to define a new storage class, set the performance level as required, and create a new PVC provisioned by that new storage class.
Provisioning PVCs on the File Storage Service
The Oracle Cloud Infrastructure File Storage service provides a durable, scalable, distributed, enterprise-grade network file system.
You use the File Storage service to provision PVCs by manually creating a file system and a mount target in the File Storage service, then defining and creating a PV backed by the new file system, and finally defining a new PVC. When you create the PVC, Container Engine for Kubernetes binds the PVC to the PV backed by the File Storage service.
Note that provisioning PVCs by binding them to PVs backed by the File Storage service is available for clusters running Kubernetes version 1.18 or later.
Provisioning a PVC on a File System
To create a PVC on a file system in the File Storage service (using Oracle-managed encryption keys to encrypt data at rest):
- Create a file system with a mount target in the File Storage service, selecting the Encrypt using Oracle-managed keys encryption option. See To create a file system.
- Create security rules in either a security list or a network security group, both for the mount target that exports the file system and for the cluster's worker nodes. The security rules to create depend on the relative network locations of the mount target and the worker nodes. The possible scenarios, the security rules to create, and where to create them, are fully described in the File Storage service documentation (see Configuring VCN Security Rules for File Storage).
- Create a PV backed by the file system in the File Storage service, as follows:
  - Create a manifest file to define a PV and, in the csi: section, set:
    - driver to fss.csi.oraclecloud.com
    - volumeHandle to <FileSystemOCID>:<MountTargetIP>:<path>, where:
      - <FileSystemOCID> is the OCID of the file system defined in the File Storage service.
      - <MountTargetIP> is the IP address assigned to the mount target.
      - <path> is the mount path to the file system relative to the mount target IP address, starting with a slash.
    For example:
    ocid1.filesystem.oc1.iad.aaaa______j2xw:10.0.0.6:/FileSystem1
    The following manifest file (named fss-pv.yaml) defines a PV called fss-pv backed by a file system in the File Storage service:
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: fss-pv
    spec:
      capacity:
        storage: 50Gi
      volumeMode: Filesystem
      accessModes:
        - ReadWriteMany
      persistentVolumeReclaimPolicy: Retain
      csi:
        driver: fss.csi.oraclecloud.com
        volumeHandle: ocid1.filesystem.oc1.iad.aaaa______j2xw:10.0.0.6:/FileSystem1
  - Create the PV from the manifest file by entering:
    kubectl create -f <filename>
    For example:
    kubectl create -f fss-pv.yaml
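The volumeHandle string is easy to get subtly wrong. The following sketch assembles the handle from its three components and sanity-checks them before the value is pasted into the PV manifest (the values shown are the example placeholders from this topic):

```shell
# Sketch: assemble the volumeHandle expected by the FSS CSI driver in the form
# <FileSystemOCID>:<MountTargetIP>:<path>. Values are the example placeholders.
FS_OCID="ocid1.filesystem.oc1.iad.aaaa______j2xw"
MT_IP="10.0.0.6"
FS_PATH="/FileSystem1"

# Basic sanity checks before using the handle in a manifest.
case "$FS_OCID" in ocid1.filesystem.*) ;; *) echo "not a file system OCID" >&2; exit 1 ;; esac
case "$FS_PATH" in /*) ;; *) echo "path must start with a slash" >&2; exit 1 ;; esac

VOLUME_HANDLE="${FS_OCID}:${MT_IP}:${FS_PATH}"
echo "$VOLUME_HANDLE"
```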
- Create a PVC that is provisioned by the PV you have created, as follows:
  - Create a manifest file to define the PVC and set:
    - storageClassName to ""
    - volumeName to the name of the PV you created (for example, fss-pv)
    For example, the following manifest file (named fss-pvc.yaml) defines a PVC named fss-pvc that is provisioned by a PV named fss-pv:
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: fss-pvc
    spec:
      accessModes:
        - ReadWriteMany
      storageClassName: ""
      resources:
        requests:
          storage: 50Gi
      volumeName: fss-pv
    Note that the requests: storage: element must be present in the PVC's manifest file, and its value must match the value specified for the capacity: storage: element in the PV's manifest file. Apart from that, the value of the requests: storage: element is ignored.
  - Create the PVC from the manifest file by entering:
    kubectl create -f <filename>
    For example:
    kubectl create -f fss-pvc.yaml
The PVC is bound to the PV backed by the File Storage service file system. Data is encrypted at rest, using encryption keys managed by Oracle.
Encrypting Data At Rest with the File Storage Service
The File Storage service always encrypts data at rest, using Oracle-managed encryption keys by default. However, you have the option to encrypt file systems using your own master encryption keys that you manage yourself in the Vault service.
Depending on how you want to encrypt data at rest, follow the appropriate instructions below:
- To create a PVC on a file system using Oracle-managed encryption keys to encrypt data at rest, follow the steps in Provisioning a PVC on a File System and select the Encrypt using Oracle-managed keys encryption option as described. Data is encrypted at rest, using encryption keys managed by Oracle.
- To create a PVC on a file system using master encryption keys that you manage to encrypt data at rest, follow the steps in Provisioning a PVC on a File System but select the Encrypt using customer-managed keys encryption option and specify the master encryption key in the Vault service. Data is encrypted at rest, using the encryption key you specify.
Encrypting Data In Transit with the File Storage Service
In-transit encryption secures data being transferred between instances and mounted file systems using TLS v. 1.2 (Transport Layer Security) encryption. For more information about in-transit encryption and the File Storage service, see Using In-transit Encryption.
You specify in-transit encryption independently of at-rest encryption. Data in transit is encrypted using a TLS certificate that is always Oracle-managed, regardless of whether data at rest is encrypted using Oracle-managed keys or using user-managed keys.
To create a PVC on a file system where data is encrypted in transit:
- Follow the instructions in Setting up In-transit Encryption for Linux to set up in-transit encryption on the file system. More specifically:
  - Complete the prerequisites by setting up the following security rules in either a security list or a network security group for the mount target that exports the file system:
    - A stateful ingress rule allowing TCP traffic to a Destination Port Range of 2051, either from all ports of a source IP address or CIDR block of your choice, or from all sources.
    - A stateful egress rule allowing TCP traffic from a Source Port Range of 2051, either to all ports of a destination IP address or CIDR block of your choice, or to all destinations.
    For more information, see Scenario C: Mount target and instance use in-transit encryption.
  - Download the oci-fss-utils package on each worker node. Note that you have to agree to the License Agreement. See Task 1: Download the OCI-FSS-UTILS package.
  - Install the oci-fss-utils package on each worker node. See Task 2: Install the OCI-FSS-UTILS package on Oracle Linux or CentOS.
- Follow the instructions in Provisioning a PVC on a File System, selecting either the Encrypt using Oracle-managed keys option or the Encrypt using customer-managed keys option as required for data encryption at rest. However, when creating the manifest file to define a PV, set encryptInTransit to True in the csi section of the file. For example:
  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: fss-encrypted-it-pv
  spec:
    capacity:
      storage: 50Gi
    volumeMode: Filesystem
    accessModes:
      - ReadWriteMany
    persistentVolumeReclaimPolicy: Retain
    csi:
      driver: fss.csi.oraclecloud.com
      volumeHandle: ocid1.filesystem.oc1.iad.aaaa______j2xw:10.0.0.6:/FileSystem1
      encryptInTransit: True