5 Using Exascale Block Storage with EDV
This topic outlines the procedure for using Exascale block storage with Exascale Direct Volumes (EDV).
The Exascale block store enables you to create and manage arbitrarily sized raw block volumes based on Exascale storage.
EDV is recommended for use cases where clients want to use storage volumes inside the Exadata RDMA Network Fabric. For example:
- You can use EDV to support Oracle Advanced Cluster File System (ACFS) or local Linux file systems (such as XFS, EXT4, and so on) on the Exadata compute nodes.
- You can use EDV raw devices to support earlier Oracle Database versions that are not natively supported on Exascale. For example, you can create an Oracle ASM disk group based on EDV raw devices and use the disk group to contain the Oracle Database data files (see the sketch after this list). Or, you can create a file system on an EDV volume and use it to contain the Oracle Database software and data files.
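For instance, an Oracle ASM disk group could be layered on an EDV device along the following lines. This is a hedged sketch only: the disk group name (EDVDATA) and device path (/dev/exc/my-edv1) are illustrative, it assumes the ASM disk discovery string includes /dev/exc/*, and it assumes the device permissions have been adjusted so that the Grid user can access the device (see the udev notes later in this topic). EXTERNAL REDUNDANCY is shown because Exascale volumes already provide their own redundancy:
SQL> -- Connect to the ASM instance as SYSASM before running these statements.
SQL> ALTER SYSTEM SET asm_diskstring = '/dev/exc/*' SCOPE=BOTH;
SQL> CREATE DISKGROUP EDVDATA EXTERNAL REDUNDANCY DISK '/dev/exc/my-edv1';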
Before you can use Exascale block storage with EDV, note the following:
- The block store manager (BSM) and block store worker (BSW) services must be running in the Exascale cluster.
To display information about the Exascale storage services running across the Exascale cluster, use the ESCLI lsservice command. For example:
@> lsservice --detail
- The Exascale Direct Volume (EDV) service must be running on each Exadata compute node that you want to host an EDV attachment. If you are planning to use a cluster-wide attachment, then the EDV service must be running on every node in the Oracle Grid Infrastructure (GI) cluster.
To display information about client-side Exascale services, including Exascale Node Proxy (ESNP) and Exascale Direct Volume (EDV), use the DBMCLI LIST DBSERVER command, as follows:
DBMCLI> list dbserver detail
The command output displays status information for all of the client-side Exascale services running on the Exadata compute node. You must run the command separately on each compute node (or across all nodes at once, as shown in the sketch after this list).
- During initial system deployment with Oracle Exadata Deployment Assistant (OEDA), the Exascale Direct Volume (EDV) service is configured on each Exadata compute node (bare-metal or VM) and runs with the permissions of the Exascale user that manages the Oracle Grid Infrastructure (GI) cluster. To create an EDV attachment, you must use the Exascale user linked with the EDV service.
If the GI cluster uses a non-role-separated configuration with a single Oracle OS user account, then the EDV service is linked to the Exascale user associated with that account. If the GI cluster uses a role-separated configuration with separate Grid and Oracle OS user accounts, then the EDV service is linked to the Exascale user associated with the Grid OS account.
To find the Exascale user linked with the EDV service, use the ESCLI lsinitiator command with the --detail option and examine the user attribute.
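To check the client-side Exascale services on every compute node in one step, you could run DBMCLI remotely using the Exadata dcli utility. This is a minimal sketch, not a definitive procedure: it assumes a dcli group file named dbs_group listing your compute nodes, SSH equivalence for the root user, and that the EDV-related attributes in the LIST DBSERVER output contain the string "edv":
# # Run LIST DBSERVER on every node and filter for EDV-related attributes.
# dcli -g dbs_group -l root "dbmcli -e list dbserver detail | grep -i edv"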
To begin, you can create an Exascale block volume and an EDV attachment.
For example:
@>mkvault my-vault
Vault @my-vault created.
@>mkvolume 1g --attributes name=edv1 --vault my-vault
Created volume with id vol0002_7eb2b5cc5d1a47f09abed0fa83514a36
@>lsvolume vol0002_7eb2b5cc5d1a47f09abed0fa83514a36 --detail
id vol0002_7eb2b5cc5d1a47f09abed0fa83514a36
name edv1
bandwidthProvisioned unlimited
contentType DATA
creationTime 2025-06-06T05:06:46+00:00
filePath @my-vault/vol.e2173df2021a4368bbf986483f8332b8
iopsProvisioned unlimited
mediaType HC
numAttachments 0
owners exa01
redundancy high
size 1G
state AVAILABLE
vault @my-vault
@>lsinitiator --detail
id e7e0db8c-9a2a-0279-e7e0-db8c9a2a0279
edvBaseDriverVersion 25.2.0.0.0.250602
edvEffectiveDriverVersion 25.2.0.0.0.250602
giClusterId deadbeef-badc-0fee-dead-beefbadc0fee
giClusterName edvTestCluster
hostName exa01
lastHeartbeat 2025-06-06T05:08:20+00:00
registerTime 2025-06-03T07:41:10+00:00
state ONLINE
user exa01
version 25.2.0.0.0.250602
@>mkvolumeattachment vol0002_7eb2b5cc5d1a47f09abed0fa83514a36 my-edv1 --attributes giClusterId=deadbeef-badc-0fee-dead-beefbadc0fee
Created edv attachment with id att0001_9deed3d3f7f944838f5767151d9f06de
@>lsvolumeattachment att0001_9deed3d3f7f944838f5767151d9f06de --detail
id att0001_9deed3d3f7f944838f5767151d9f06de
attachTime 2025-06-06T05:11:20+00:00
deviceName my-edv1
devicePath /dev/exc/my-edv1
giClusterId deadbeef-badc-0fee-dead-beefbadc0fee
giClusterName edvTestCluster
hostName
initiator
kernelDeviceName exc-dev1
logicalSectorSize 512
volume vol0002_7eb2b5cc5d1a47f09abed0fa83514a36
volumeSnapshot
@>
The EDV attachment creates an association between the volume and an EDV device file, which resides on the Exadata compute nodes hosting the attachment. A node hosting an EDV attachment is also known as an EDV initiator. If you create a cluster-wide attachment, then the EDV device file is created on every node in the Oracle Grid Infrastructure (GI) cluster. If you create a node-specific attachment, then the corresponding EDV device is only created on that node.
In the example, the EDV attachment is a cluster-wide attachment, and the EDV device name is my-edv1, so the corresponding device file is located at /dev/exc/my-edv1 on each cluster node. The volume identifier (vol0002_7eb2b5cc5d1a47f09abed0fa83514a36) was reported during volume creation. Volume identifiers can also be discovered using the ESCLI lsvolume command. The GI cluster identifier (deadbeef-badc-0fee-dead-beefbadc0fee) was found by using the ESCLI lsinitiator command.
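Because this attachment is cluster-wide, the device file should be present on every node in the GI cluster. As a quick, hedged check (again assuming a dcli group file named dbs_group and SSH equivalence for the root user):
# # Confirm the EDV device file exists on every compute node.
# dcli -g dbs_group -l root "ls -l /dev/exc/my-edv1"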
Alternatively, to create a node-specific attachment, you must specify the EDV initiator identifier of the specific node instead of the GI cluster identifier. For example:
@>lsinitiator --detail
id e7e0db8c-9a2a-0279-e7e0-db8c9a2a0279
edvBaseDriverVersion 25.2.0.0.0.250602
edvEffectiveDriverVersion 25.2.0.0.0.250602
giClusterId deadbeef-badc-0fee-dead-beefbadc0fee
giClusterName edvTestCluster
hostName exa01
lastHeartbeat 2025-06-06T05:21:18+00:00
registerTime 2025-06-03T07:41:10+00:00
state ONLINE
user exa01
version 25.2.0.0.0.250602
@>mkvolumeattachment vol0002_7eb2b5cc5d1a47f09abed0fa83514a36 my-edv1 --attributes initiator=e7e0db8c-9a2a-0279-e7e0-db8c9a2a0279
Created edv attachment with id att0001_4118794cebae49d6836139b34df08b6b
@>lsvolumeattachment att0001_4118794cebae49d6836139b34df08b6b --detail
id att0001_4118794cebae49d6836139b34df08b6b
attachTime 2025-06-06T05:22:47+00:00
deviceName my-edv1
devicePath /dev/exc/my-edv1
giClusterId
giClusterName
hostName exa01
initiator e7e0db8c-9a2a-0279-e7e0-db8c9a2a0279
kernelDeviceName exc-dev1
logicalSectorSize 512
volume vol0002_7eb2b5cc5d1a47f09abed0fa83514a36
volumeSnapshot
@>
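For a node-specific attachment, the EDV device file exists only on the attached node (exa01 in this example). A hedged illustration, where exa02 is a hypothetical second cluster node:
# # On the attached node (exa01), the device file is present.
# ls /dev/exc/my-edv1
/dev/exc/my-edv1
# # On any other cluster node, the device file does not exist.
# ssh exa02 ls /dev/exc/my-edv1
ls: cannot access '/dev/exc/my-edv1': No such file or directory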
After attachment, the EDV device can be used on the Exadata compute node or GI cluster hosting the attachment.
For example, you could use the following command sequence to create and mount an ACFS file system using the EDV device defined in the previous examples (/dev/exc/my-edv1):
# # Confirmation of the EDV device at /dev/exc/my-edv1.
# lsblk /dev/exc/my-edv1
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
exc-dev1 251:1 0 1G 0 disk
# # Note the default ownership and permissions for the EDV device.
# ls -l /dev/exc/my-edv1
brw-rw---- 2 root disk 251, 1 Jun 6 05:11 /dev/exc/my-edv1
# # Now use the device to support ACFS.
# mkfs -t acfs /dev/exc/my-edv1
mkfs.acfs: version = 23.0.0.0.0
mkfs.acfs: ACFS compatibility = 23.0.0.0.0
mkfs.acfs: on-disk version = 53.0
mkfs.acfs: volume = /dev/exc/my-edv1
mkfs.acfs: volume size = 1073741824 ( 1.00 GB )
mkfs.acfs: file system size = 1073741824 ( 1.00 GB )
mkfs.acfs: Format complete.
# mkdir /mnt/my-acfs1
# mount /dev/exc/my-edv1 /mnt/my-acfs1
# df /mnt/my-acfs1
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/exc/my-edv1 1048576 318180 730396 31% /mnt/my-acfs1
#
# # Mount the file system on other cluster nodes as required.
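To mount the file system on the remaining cluster nodes, you could repeat the mkdir and mount commands on each node, or script the operation. A hedged sketch using the Exadata dcli utility, assuming a group file (dbs_group) listing the other compute nodes and that the ACFS drivers are loaded on each of them:
# dcli -g dbs_group -l root "mkdir -p /mnt/my-acfs1 && mount -t acfs /dev/exc/my-edv1 /mnt/my-acfs1"
For ongoing production use, you might instead register the file system with Oracle Clusterware (for example, using srvctl add filesystem) so that it is mounted automatically on all nodes.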
Note:
- Each EDV attachment also has a kernel device file at /dev/exc-devN, where N is the minor number of the device. The kernel device name is recorded in the kernelDeviceName attribute of the EDV attachment and is visible using the ESCLI lsvolumeattachment command.
Note that most Linux tools, such as iostat, display the kernel device file (/dev/exc-devN), while Exascale commands use the user-named device file (under /dev/exc/).
The relationship between the kernel device file and the user-named device file is also recorded in the udev database and is visible using the following Linux command:
# udevadm info device-file
In the udevadm command, for the device-file value, you can specify either the kernel device file (/dev/exc-devN) or the user-named device file (under /dev/exc/).
- By default, read and write access to EDV device files is only available to the root operating system user and members of the disk operating system group. Depending on your use case, you may need to modify the permissions on the EDV device files before using them.
For example, to make the EDV device file at /dev/exc/my-vol readable and writable by the oracle user and the dba group, you could use a udev rule similar to the following (see also the sketch after these notes for applying udev rules):
# cat /etc/udev/rules.d/57-edv-user.rules
KERNEL=="exc-*", ENV{EXC_ALIAS}=="my-vol", OWNER="oracle", GROUP="dba", MODE="0660"
- To facilitate the management of udev rules related to EDV devices, each EDV client node is configured with a template udev rules file at /etc/udev/rules.d/57-edv-user.rules, which you can modify to fulfill your requirements. To maintain your existing udev rules, /etc/udev/rules.d/57-edv-user.rules is preserved whenever the EDV client software is updated.
- Each EDV client node can support a maximum of 3000 concurrent attachments. This limit includes all cluster-wide attachments involving the node, as well as node-specific attachments on that node.
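After you create or modify a udev rule, such as the example rule shown in the notes above, the rule must be reloaded and applied before it takes effect. A minimal, hedged sketch using standard udev commands; re-triggering all devices is shown for simplicity:
# udevadm control --reload-rules
# udevadm trigger
# # Verify the resulting ownership and permissions.
# ls -l /dev/exc/my-vol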
An ACFS file system on EDV can also be exported to clients outside the Exadata RDMA Network Fabric by using ACFS High Availability NFS (HANFS). For more information about ACFS, see Oracle Advanced Cluster File System Administrator's Guide.
EDV devices can also be used as raw devices or in conjunction with other Linux file systems, such as XFS, EXT4, and so on.
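For instance, the following hedged sketch creates and mounts an XFS file system on the EDV device from the earlier examples. The mount point (/mnt/my-xfs1) is illustrative, and the sketch assumes the device is no longer in use by ACFS or any other consumer:
# mkfs -t xfs /dev/exc/my-edv1
# mkdir /mnt/my-xfs1
# mount /dev/exc/my-edv1 /mnt/my-xfs1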