Before You Begin
The procedure relies on the following assumptions:
Perform the steps in this procedure only if the directory or project is meant to be protected by cluster fencing, restricting access to read-only for nodes that leave the cluster.
Your cluster is operating.
The Oracle ZFS Storage Appliance NAS device is properly configured.
See Requirements, Recommendations, and Restrictions for Oracle ZFS Storage Appliance NAS Devices for details about the required device configuration.
You have added the device to the cluster by performing the steps in How to Install an Oracle ZFS Storage Appliance in a Cluster.
An NFS file system or directory from the Oracle ZFS Storage Appliance already exists in a project, which itself resides in one of the device's storage pools. For a directory (for example, an NFS file system) to be used by the cluster, you must perform the configuration at the project level, as described below.
To perform this procedure, assume the root role or a role that provides solaris.cluster.read and solaris.cluster.modify authorization.
After you have identified the appropriate project, click Edit for that project.
In the Oracle ZFS Storage Appliance GUI, select the Protocols tab in the Edit Project page.
In the Protocols tab of the project, you can set the Share Mode to None, Read/Write, or Read only. Although all three share modes are supported, as a best practice use None rather than Read/Write.
You can set the Share Mode to Read/Write if you must make the project world-writable, but this configuration is not recommended.
Use a CIDR mask of /32. For example, 192.168.254.254/32.
Root Access is required when configuring applications, such as Oracle RAC or HA for Oracle Database.
If you are adding multiple directories within the same project, verify that each directory that needs to be protected by cluster fencing has the Inherit from project property set.
# clnasdevice show -v

=== NAS Devices ===

Nas Device:        device1.us.example.com
  Type:            sun_uss
  userid:          osc_agent
  nodeIPs{node1}:  10.111.11.111
  nodeIPs{node2}:  10.111.11.112
  nodeIPs{node3}:  10.111.11.113
  nodeIPs{node4}:  10.111.11.114
  Project:         pool-0/local/projecta
  Project:         pool-0/local/projectb
# clnasdevice add-dir -d project1,project2 myfiler
Specifies the project or projects that you are adding. Specify the full path name of the project, including the pool. For example, pool-0/local/projecta.
Specifies the name of the NAS device containing the projects.
For example:
# clnasdevice add-dir -d pool-0/local/projecta device1.us.example.com
# clnasdevice add-dir -d pool-0/local/projectb device1.us.example.com
# clnasdevice find-dir -v

=== NAS Devices ===

Nas Device:                 device1.us.example.com
  Type:                     sun_uss
  Unconfigured Project:     pool-0/local/projecta
    File System:            /export/projecta/filesystem-1
    File System:            /export/projecta/filesystem-2
  Unconfigured Project:     pool-0/local/projectb
    File System:            /export/projectb/filesystem-1
For more information about the clnasdevice command, see the clnasdevice(1CL) man page.
# clnasdevice add-dir -d project1,project2 -Z zcname myfiler
Specifies the name of the zone cluster where the NAS projects are being added.
Perform this command from any cluster node:
# clnasdevice show -v -d all
For example:
# clnasdevice show -v -d all

=== NAS Devices ===

Nas Device:        device1.us.example.com
  Type:            sun_uss
  nodeIPs{node1}:  10.111.11.111
  nodeIPs{node2}:  10.111.11.112
  nodeIPs{node3}:  10.111.11.113
  nodeIPs{node4}:  10.111.11.114
  userid:          osc_agent
  Project:         pool-0/local/projecta
    File System:   /export/projecta/filesystem-1
    File System:   /export/projecta/filesystem-2
  Project:         pool-0/local/projectb
    File System:   /export/projectb/filesystem-1
# clnasdevice show -v -Z zcname
You can also run zone cluster-related commands inside the zone cluster by omitting the -Z option. For more information about the clnasdevice command, see the clnasdevice(1CL) man page.
After you confirm that a project name is associated with the desired NFS file system, use that project name in the configuration command.
# mkdir -p /path-to-mountpoint
Name of the directory on which to mount the project.
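As a minimal sketch, assuming a hypothetical mount-point path (substitute your own; a real deployment would typically use a path such as /global/oradata, created identically on every node):

```shell
# Create the mount point. The -p flag creates any missing parent
# directories and succeeds even if the directory already exists,
# so the command is safe to rerun on every node.
mkdir -p /var/tmp/global/oradata   # hypothetical example path
ls -d /var/tmp/global/oradata      # verify the directory was created
```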
If you are using your Oracle ZFS Storage Appliance NAS device for Oracle RAC or HA for Oracle Database, consult your Oracle Database guide or log into My Oracle Support for a current list of supported files and mount options. After you log into My Oracle Support, click the Knowledge tab and search for Bulletin 359515.1.
When mounting Oracle ZFS Storage Appliance NAS directories, select the mount options appropriate to your cluster applications. Mount the directories on each node that will access the directories. Oracle Solaris Cluster places no additional restrictions or requirements on the options that you use.
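As one hedged illustration only (the device name, share path, mount point, mount-at-boot setting, and options below are hypothetical examples, not requirements), an /etc/vfstab entry on a node for such a directory might look like the following; confirm the appropriate options against your application's documentation before use:

```
# device to mount                                      fsck  mount point      FS   pass  boot  options
device1.us.example.com:/export/projecta/filesystem-1   -     /global/oradata  nfs  -     yes   rw,hard
```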
For more information, see Configuring Failover and Scalable Data Services on Shared File Systems in Oracle Solaris Cluster 4.3 Data Services Planning and Administration Guide.