This section includes only restrictions and requirements that have a direct impact on the procedures in this chapter. For general support information, contact your Oracle service provider.
When you configure an Oracle ZFS Storage Appliance, you must meet the following requirements.
Do not use the default project. File systems created in the default project will not be fenced from failing cluster nodes.
Ensure that NFS file systems created within a project for cluster use with fencing control inherit their NFS properties from the parent project. To protect a file system within a cluster-used project with cluster fencing, select the Inherit from project property for NFS on the file system's Protocols tab. If you do not want the file system to be protected (for example, to allow file system access while cluster nodes are in noncluster mode), deselect the Inherit from project setting. You can change this setting as needed, so that some file systems within a project are fenced while other file systems in the same project are not. When you deselect the Inherit from project setting, verify that the file system has the desired NFS exception settings for the IP address of each cluster node.
For any projects that have file systems to be protected by cluster fencing, perform the following actions:
In the Protocols tab of the project, set the Share Mode to None or Read only.
If systems outside the cluster will access the file systems in this project, grant them access with an NFS exception entry for each system. Use a Host entry for each such system, or a Network entry only if the system is on a different subnet than the one the cluster uses to access the NAS device. Specify each IP address in the format xxx.xxx.xxx.xxx/32, set the Access Mode for the entry to Read/Write, and select Root Access for the entry.
For the cluster nodes themselves, explicitly grant access to the projects by using Network exceptions only. Add an exception, in the format xxx.xxx.xxx.xxx/32, for each public IP address within the cluster that might be used to access the storage. If a node has multiple active public network adapters, add the IP address of each one.
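As an illustration only, the resulting exception list for a two-node cluster might resemble the following. The addresses come from the RFC 5737 documentation range and the host name is hypothetical; substitute your own values.

```
Type     Entity          Access Mode  Root Access
Network  192.0.2.11/32   Read/Write   Yes        (cluster node 1, public adapter)
Network  192.0.2.12/32   Read/Write   Yes        (cluster node 2, public adapter)
Host     backuphost      Read/Write   Yes        (system outside the cluster)
```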
An Oracle ZFS Storage Appliance NAS device must be directly connected (through the same subnet) to all nodes of the cluster.
The cluster can be connected to multiple public networks to communicate with external systems. However, only one network can directly access a specific Oracle ZFS Storage Appliance device. The IP addresses configured in the NFS exception entries for the cluster nodes must be in that subnet, and the Oracle ZFS Storage Appliance device's network interface that is connected to that network must also be configured with an IP address in that subnet. In the Network tab of the Oracle ZFS Storage Appliance Configuration panel, ensure that the subnet's network interface has Allow Administration selected.
Ensure that the Oracle ZFS Storage Appliance is running a qualified firmware release.
When you configure your Oracle ZFS Storage Appliance NAS device for use with the HA for Oracle Database data service, you must meet the following requirements:
To guarantee data integrity, configure the Oracle ZFS Storage Appliance NAS device with fencing support.
You can install Oracle Database and Oracle Clusterware software, and place files used by these installations, on NFS shares from the NAS device. However, you must ensure that the NFS shares used to store the files are mounted with the required mount options, which differ by file type. Do not mix file types with different mount requirements on the same NFS share.
Consult your Oracle Database guide or log into My Oracle Support for the most current list of supported files and mount options. After you log into My Oracle Support, click the Knowledge tab and search for Bulletin 359515.1.
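As a hedged illustration only (the authoritative and current list of mount options is in MOS Bulletin 359515.1, and the options vary by file type and Oracle Database release), an /etc/vfstab entry for an NFS share holding Oracle data files on Oracle Solaris might resemble the following. The appliance host name nas1 and the paths are placeholders.

```
# Illustrative only -- confirm options against MOS Bulletin 359515.1
# device to mount       fsck  mount point   FS   pass  boot  mount options
nas1:/export/oradata    -     /u02/oradata  nfs  -     yes   rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3,forcedirectio
```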
When you configure your Oracle ZFS Storage Appliance NAS device for use with Oracle RAC, you must also comply with the requirements listed above.
The administrator can optionally create iSCSI LUNs on the Oracle ZFS Storage Appliance for use as quorum devices.
If you boot the quorum device after booting the cluster nodes, your nodes cannot find the quorum device, and thus cannot count the quorum votes of the quorum device. This lack of quorum votes may result in the partition failing to form a cluster. If that situation occurs, reboot the cluster nodes.
The Oracle ZFS Storage Appliance NAS device must be located on the same network as the cluster nodes. If an Oracle ZFS Storage Appliance NAS quorum device is not on the same network as the cluster nodes, it is at risk of not responding at boot time. This lack of response could leave the cluster nodes unable to form a cluster because they do not acquire enough votes (for example, when one node cannot be booted). The same risk exists at quorum acquisition time, when the cluster resolves split-brain situations; in that case it could cause the cluster to fail to stay up.
When you use an iSCSI LUN from an Oracle ZFS Storage Appliance NAS device as a cluster quorum device, the device appears to the quorum subsystem as a regular SCSI shared disk. The iSCSI connection to the NAS device is completely invisible to the quorum subsystem.
For instructions on adding an Oracle ZFS Storage Appliance NAS quorum device, see How to Add an Oracle ZFS Storage Appliance NAS Quorum Device in Administering an Oracle Solaris Cluster 4.4 Configuration.
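A hedged sketch of the overall flow follows; the target name, appliance IP address, and DID device name are placeholders, and the referenced procedure is the authoritative source for the full steps. After the iSCSI LUN is visible to every node, it is added like any other shared disk quorum device.

```shell
# On each cluster node: make the iSCSI LUN visible
# (static discovery shown; target name and appliance address are examples)
iscsiadm add static-config iqn.1986-03.com.sun:02:example-target,192.0.2.50
devfsadm -i iscsi

# On one node: update the cluster device namespace, then identify the DID device
cldevice populate
cldevice list -v

# Add the LUN (here assumed to be DID device d5) as a quorum device
clquorum add d5
```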
The Oracle Solaris Cluster interface for configuring NFS file systems from the Oracle ZFS Storage Appliance does not support configuration at the level of individual file systems. You configure such file systems only through the Oracle ZFS Storage Appliance projects that contain them.