5.1 Using Exascale Block Storage with EDV
This topic outlines the procedure for using Exascale block storage with Exascale Direct Volumes (EDV).
The Exascale block store provides capabilities to create and manage arbitrary-sized raw block volumes based on Exascale storage.
EDV is recommended for use cases where clients require storage volumes inside the Exadata RDMA Network Fabric. For example:
- You can use EDV to support Oracle Advanced Cluster File System (ACFS) or local Linux file systems (such as XFS, EXT4, and so on) on the Exadata compute nodes.
- You can use EDV raw devices to support earlier Oracle Database versions that are not natively supported on Exascale. For example, you can create an Oracle ASM disk group based on EDV raw devices and use the disk group to contain the Oracle Database data files. Or, you can create a file system on an EDV volume and use it to contain the Oracle Database software and data files.
Before you can use Exascale block storage with EDV, note the following:
- The block store manager (BSM) and block store worker (BSW) services must be running in the Exascale cluster.
To display information about Exascale storage services running across the Exascale cluster, use the ESCLI lsservice command. For example:
@> lsservice --detail
- The Exascale Direct Volume (EDV) service must be running on each Exadata compute node that you want to host an EDV attachment. If you are planning to use a cluster-wide attachment, then the EDV service must be running on every node in the Oracle Grid Infrastructure (GI) cluster.
To display information about client-side Exascale services, including Exascale Node Proxy (ESNP) and Exascale Direct Volume (EDV), use the DBMCLI LIST DBSERVER command, as follows:
DBMCLI> list dbserver detail
The command output displays status information for all of the client-side Exascale services running on the Exadata compute node. You must run the command separately on each compute node.
- During initial system deployment with Oracle Exadata Deployment Assistant (OEDA), the Exascale Direct Volume (EDV) service is configured on each Exadata compute node (bare-metal or VM) and is associated with the Exascale user that manages the Oracle Grid Infrastructure (GI) cluster. To create an EDV attachment, you must use the Exascale user associated with the EDV service.
If the GI cluster uses a non-role-separated configuration with a single Oracle OS user account, then the EDV service is associated with the Exascale user that corresponds to the Oracle OS account. If the GI cluster uses a role-separated configuration with separate Grid and Oracle OS user accounts, then the EDV service is associated with the Exascale user that corresponds to the Grid OS account.
To find the Exascale user associated with the EDV service, use the ESCLI lsinitiator command with the --detail option and examine the user attribute.
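As a quick illustration, the user attribute can be pulled out of saved lsinitiator --detail output with standard text tools. This is a minimal sketch that copies the sample attribute layout shown later in this topic into a file; the temporary file path is arbitrary:

```shell
# Save abbreviated 'lsinitiator --detail' output (sample values copied
# from the example output in this topic) to a working file.
cat > /tmp/initiator.txt <<'EOF'
id e7e0db8c-9a2a-0279-e7e0-db8c9a2a0279
hostName exa01
state ONLINE
user exa01
EOF

# Print the Exascale user linked with the EDV service on this node.
awk '$1 == "user" { print $2 }' /tmp/initiator.txt
```

With the sample output above, the command prints exa01, which is the Exascale user to authenticate as when creating EDV attachments.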
To begin, you can create an Exascale block volume and EDV attachment.
For example:
@>mkvault myvault
Vault @myvault created.
@>mkvolume 1g --attributes name=edv1 --vault myvault
Created volume with id 2:72ebb8a56fba4e97b4716c25981702cb
@>lsvolume 2:72ebb8a56fba4e97b4716c25981702cb --detail
id 2:72ebb8a56fba4e97b4716c25981702cb
name edv1
blockSize 8192
contentType DATA
creationTime 2024-12-06T04:46:07+00:00
filePath @myvault/vol.5ceabbae36d04e31bc79f9c34658c22e
mediaType HC
numAttachments 0
iopsProvisioned unlimited
bandwidthProvisioned unlimited
redundancyType none
size 1G
state AVAILABLE
owners exa01
vault @myvault
vipId 111:02ae5dee-55a6-4dab-beb6-69a1b5525d60
@>lsinitiator --detail
id e7e0db8c-9a2a-0279-e7e0-db8c9a2a0279
edvBaseDriverVersion 25.1.0.0.0.241130
edvEffectiveDriverVersion 25.1.0.0.0.241130
giClusterId deadbeef-badc-0fee-dead-beefbadc0fee
giClusterName edvTestCluster
hostName exa01
lastHeartbeat 2024-12-06T04:45:33+00:00
registerTime 2024-12-06T04:41:18+00:00
state ONLINE
user exa01
version 25.1.0.0.0.241130
@>mkvolumeattachment 2:72ebb8a56fba4e97b4716c25981702cb myedv1 --attributes giClusterId=deadbeef-badc-0fee-dead-beefbadc0fee
Created edv attachment with id 1:cc9435ab951a49119dbb7e249a0b0219
@>lsvolumeattachment 1:cc9435ab951a49119dbb7e249a0b0219 --detail
id 1:cc9435ab951a49119dbb7e249a0b0219
attachTime 2024-12-06T04:47:41+00:00
kernelDeviceName exc-dev1
deviceName myedv1
devicePath /dev/exc/myedv1
giClusterId deadbeef-badc-0fee-dead-beefbadc0fee
giClusterName edvTestCluster
hostName
initiator
logicalSectorSize 512
volume 2:72ebb8a56fba4e97b4716c25981702cb
volumeSnapshot
@>
The EDV attachment creates an association between the volume and an EDV device file, which resides on the Exadata compute nodes hosting the attachment. A node hosting an EDV attachment is also known as an EDV initiator.
If you create a cluster-wide attachment to support a cluster file system such as ACFS, then the EDV device file is created on every node in the Oracle Grid Infrastructure (GI) cluster. If you create a node-specific attachment, then the corresponding EDV device is only created on that node.
In the example, the EDV attachment is a cluster-wide attachment, and the EDV device name is myedv1, so the corresponding device file is located at /dev/exc/myedv1 on each cluster node. The volume identifier (2:72ebb8a56fba4e97b4716c25981702cb) was reported during volume creation. Volume identifiers can also be discovered by using the ESCLI lsvolume command. The GI cluster identifier (deadbeef-badc-0fee-dead-beefbadc0fee) was found by using the ESCLI lsinitiator command.
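For instance, if only the volume name is known, the identifier can be recovered from saved lsvolume --detail output. A minimal sketch, assuming the attribute layout shown above (the file path and the abbreviated sample are illustrative):

```shell
# Saved 'lsvolume --detail' output (abbreviated sample from this topic).
cat > /tmp/volumes.txt <<'EOF'
id 2:72ebb8a56fba4e97b4716c25981702cb
name edv1
size 1G
state AVAILABLE
EOF

# Remember each id, and print it when the matching volume name is found.
awk '$1 == "id" { id = $2 }
     $1 == "name" && $2 == "edv1" { print id }' /tmp/volumes.txt
```

With the sample data, this prints 2:72ebb8a56fba4e97b4716c25981702cb, the identifier used in the mkvolumeattachment commands.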
Alternatively, for a node-specific attachment you must specify the node-specific EDV initiator identifier instead of the GI cluster identifier. For example:
@>lsinitiator --detail
id e7e0db8c-9a2a-0279-e7e0-db8c9a2a0279
edvBaseDriverVersion 25.1.0.0.0.241130
edvEffectiveDriverVersion 25.1.0.0.0.241130
giClusterId deadbeef-badc-0fee-dead-beefbadc0fee
giClusterName edvTestCluster
hostName exa01
lastHeartbeat 2024-12-06T04:45:33+00:00
registerTime 2024-12-06T04:41:18+00:00
state ONLINE
user exa01
version 25.1.0.0.0.241130
@>mkvolumeattachment 2:72ebb8a56fba4e97b4716c25981702cb myedv1 --attributes initiator=e7e0db8c-9a2a-0279-e7e0-db8c9a2a0279
Created edv attachment with id 1:50e52177583f4be4bad68ac20b65001e
@>lsvolumeattachment 1:50e52177583f4be4bad68ac20b65001e --detail
id 1:50e52177583f4be4bad68ac20b65001e
attachTime 2024-12-06T04:47:41+00:00
kernelDeviceName exc-dev1
deviceName myedv1
devicePath /dev/exc/myedv1
giClusterId
giClusterName
hostName exa01
initiator e7e0db8c-9a2a-0279-e7e0-db8c9a2a0279
logicalSectorSize 512
volume 2:72ebb8a56fba4e97b4716c25981702cb
volumeSnapshot
@>
After attachment, the EDV device can be used on the Exadata compute node or GI cluster hosting the attachment.
For example, you could use the following command sequence to create and mount an ACFS file system using the EDV device defined in the previous examples (/dev/exc/myedv1):
# # Confirmation of the EDV device at /dev/exc/myedv1.
# lsblk /dev/exc/myedv1
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
exc-dev1 251:1 0 1G 0 disk
# # Note the default ownership and permissions for the EDV device.
# ls -l /dev/exc/myedv1
brw-rw---- 1 root disk 251, 1 Oct 6 17:41 /dev/exc/myedv1
# # Now use the device to support ACFS.
# mkfs -t acfs /dev/exc/myedv1
mkfs.acfs: version = 23.0.0.0.0
mkfs.acfs: ACFS compatibility = 23.0.0.0.0
mkfs.acfs: on-disk version = 53.0
mkfs.acfs: volume = /dev/exc/myedv1
mkfs.acfs: volume size = 1073741824 ( 1.00 GB )
mkfs.acfs: file system size = 1073741824 ( 1.00 GB )
mkfs.acfs: Format complete.
# mkdir /mnt/myedv1
# mount /dev/exc/myedv1 /mnt/myedv1
# df /mnt/myedv1
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/exc/myedv1 1048576 318436 730140 31% /mnt/myedv1
#
# # Mount the file system on other cluster nodes as required.
Note:
- Each EDV attachment also has a kernel device file at /dev/exc-devN, where N is the minor number of the device. The kernel device name is contained as an attribute of the EDV attachment and is visible by using the ESCLI lsvolumeattachment command. The relationship between the kernel device file and the user-named device file (under /dev/exc/) is also recorded in the udev database and is visible by using the following Linux command:
# udevadm info device-file
For the device-file value, you can specify either the kernel device file (/dev/exc-devN) or the user-named device file (under /dev/exc/).
- By default, read and write access to EDV device files is only available to the root operating system user and members of the disk group. Depending on your use case, you may need to modify the permissions on the EDV device files before using them.
For example, to make the EDV device file at /dev/exc/myvol readable and writable by the oracle user and dba group, you could configure a udev rule similar to the following:
# cat /etc/udev/rules.d/57-edv-user.rules
KERNEL=="exc-*", ENV{EXC_ALIAS}=="myvol", OWNER="oracle", GROUP="dba", MODE="0660"
- To facilitate the management of udev rules related to EDV devices, each EDV client node is configured with a template udev rules file at /etc/udev/rules.d/57-edv-user.rules, which you can modify to fulfill your requirements. To maintain existing udev rules, /etc/udev/rules.d/57-edv-user.rules is preserved whenever the EDV client software is updated.
- Each EDV client node can support a maximum of 1024 concurrent attachments. This limit includes all cluster-wide attachments involving the node, as well as node-specific attachments local to the node.
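To see how close a node is to the attachment limit, you can count its kernel device files, since each attachment hosted on the node has one /dev/exc-devN entry. A minimal sketch (the DEVDIR override is not part of the product; it exists only so the commands can be tried without real EDV devices):

```shell
# Count EDV attachments hosted on this node; each one has a kernel
# device file named exc-devN (in /dev by default).
DEVDIR="${DEVDIR:-/dev}"
count=$(ls "${DEVDIR}" 2>/dev/null | grep -c '^exc-dev')
echo "EDV attachments on this node: ${count} (limit 1024)"
```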
An ACFS file system on EDV can also be exported to clients outside the Exadata RDMA Network Fabric by using ACFS HANFS. For more information about ACFS, see Oracle Advanced Cluster File System Administrator's Guide.
EDV devices can also be used as raw devices or in conjunction with other Linux file systems, such as XFS, EXT4, and so on.
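As a sketch of the XFS case, the following commands format and mount an EDV device as a local file system. The device path comes from the earlier examples; the mount point and fstab options are illustrative assumptions, and the block-device check makes the sketch a no-op on nodes where the device is not present. Run as root on a node hosting the attachment:

```shell
DEV=/dev/exc/myedv1    # EDV device from the earlier examples
MNT=/mnt/myedv1        # hypothetical mount point
if [ -b "$DEV" ]; then
  mkfs -t xfs "$DEV"   # create the XFS file system on the EDV device
  mkdir -p "$MNT"
  mount "$DEV" "$MNT"
  # Persist across reboots; 'nofail' is suggested because the device
  # appears only after the EDV service starts on the node.
  echo "$DEV $MNT xfs defaults,nofail 0 0" >> /etc/fstab
else
  echo "EDV device $DEV not present on this node"
fi
```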
Parent topic: Using Exascale Block Storage