This section provides guidelines for planning global devices and cluster file systems.
For more information about global devices and about cluster file systems, see Sun Cluster Overview for Solaris OS and Sun Cluster Concepts Guide for Solaris OS.
Sun Cluster software does not require any specific disk layout or file system size. Consider the following points when you plan your layout for global devices and for cluster file systems.
Mirroring – You must mirror all global devices for them to be considered highly available. You do not need software mirroring if the storage device provides hardware RAID as well as redundant paths to disks.
Disks – When you mirror, lay out file systems so that the file systems are mirrored across disk arrays.
Availability – You must physically connect a global device to more than one node in the cluster for the global device to be considered highly available. A global device with multiple physical connections can tolerate a single-node failure. A global device with only one physical connection is supported, but the global device becomes inaccessible from other nodes if the node with the connection is down.
Swap devices – Do not create a swap file on a global device.
Consider the following points when you plan cluster file systems.
Loopback file system (LOFS) – Sun Cluster software does not support the use of the loopback file system (LOFS) on cluster nodes.
Communication end-points – The cluster file system does not support any of the Solaris file-system features that place a communication end-point in the file-system name space.
Although you can create a UNIX domain socket whose name is a path name into the cluster file system, the socket would not survive a node failover.
Any FIFOs or named pipes that you create on a cluster file system would not be globally accessible.
Therefore, do not attempt to use the fattach command from any node other than the local node.
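To illustrate why such end-points are node-local, the following sketch (hypothetical paths; runnable in any POSIX shell) shows a named pipe behaving normally on a local file system. On a cluster file system, the pipe's name would be globally visible, but the pipe itself could be used for I/O only on the node that created it:

```shell
# A named pipe is an ordinary rendezvous point on a local file system.
# A pipe created at, for example, a path under /global would NOT be
# accessible for I/O from other cluster nodes.
pipe=/tmp/demo.$$.pipe               # hypothetical local path
mkfifo "$pipe"
echo "node-local data" > "$pipe" &   # writer blocks until a reader opens the pipe
cat "$pipe"                          # prints: node-local data
wait
rm "$pipe"
```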
Add this planning information to the Disk Device Group Configurations Worksheet.
You must configure all volume-manager disk groups as Sun Cluster disk device groups. This configuration enables a secondary node to host multihost disks if the primary node fails. Consider the following points when you plan disk device groups.
Failover – You can configure multihost disks and properly configured volume-manager devices as failover devices. Proper configuration includes the use of multihost disks and correct setup of the volume manager itself, which together ensure that multiple nodes can host the exported device. You cannot configure tape drives, CD-ROMs, or single-ported devices as failover devices.
Mirroring – You must mirror the disks to protect the data from disk failure. See Mirroring Guidelines for additional guidelines. See Installing and Configuring Solstice DiskSuite or Solaris Volume Manager Software or SPARC: Installing and Configuring VxVM Software and your volume-manager documentation for instructions on mirroring.
For more information about disk device groups, see “Devices” in Sun Cluster Overview for Solaris OS and Sun Cluster Concepts Guide for Solaris OS.
Consider the following points when you plan mount points for cluster file systems.
Mount-point location – Create mount points for cluster file systems in the /global directory, unless you are prohibited by other software products. By using the /global directory, you can more easily distinguish cluster file systems, which are globally available, from local file systems.
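As an example of this convention, an /etc/vfstab entry for a cluster file system might look like the following sketch. The metadevice names and the /global/oracle mount point are hypothetical; the global mount option is what makes the file system available on all nodes:

```
#device to mount        device to fsck           mount point     FS type  fsck pass  mount at boot  mount options
/dev/md/oradg/dsk/d100  /dev/md/oradg/rdsk/d100  /global/oracle  ufs      2          yes            global,logging
```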
SPARC: VxFS mount requirement – If you use VERITAS File System (VxFS), globally mount and unmount a VxFS file system from the primary node. The primary node is the node that masters the disk on which the VxFS file system resides. This method ensures that the mount or unmount operation succeeds. A VxFS file-system mount or unmount operation that is performed from a secondary node might fail.
The following VxFS features are not supported in a Sun Cluster 3.1 cluster file system. They are, however, supported in a local file system.
Quick I/O
Snapshots
Storage checkpoints
convosync (Convert O_SYNC)
mincache
qlog, delaylog, tmplog
VERITAS cluster file system (requires the VxVM cluster feature and VERITAS Cluster Server)
Cache advisories can be used, but their effect is observed only on the node that sets them.
All other VxFS features and options that are supported in a cluster file system are supported by Sun Cluster 3.1 software. See the VxFS documentation for details about VxFS options that are supported in a cluster configuration.
Nesting mount points – Normally, you should not nest the mount points for cluster file systems. For example, do not set up one file system that is mounted on /global/a and another file system that is mounted on /global/a/b. Ignoring this rule can cause availability and node boot-order problems. These problems would occur if the parent mount point is not present when the system attempts to mount a child of that file system. The only exception to this rule is if the devices for the two file systems have the same physical node connectivity. An example is different slices on the same disk.
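The same-disk exception might look like the following hypothetical /etc/vfstab excerpt, where both mount points are backed by slices of the same global device and therefore share identical node connectivity:

```
#device to mount         device to fsck            mount point  FS type  fsck pass  mount at boot  mount options
/dev/global/dsk/d12s3    /dev/global/rdsk/d12s3    /global/a    ufs      2          yes            global,logging
/dev/global/dsk/d12s4    /dev/global/rdsk/d12s4    /global/a/b  ufs      2          yes            global,logging
```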
forcedirectio – Sun Cluster software does not support the execution of binaries from cluster file systems that are mounted with the forcedirectio mount option.
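A forcedirectio mount might look like the following sketch (the device and /global/dbdata mount point are hypothetical). Keep application binaries on a file system that is mounted without this option:

```
#device to mount         device to fsck            mount point     FS type  fsck pass  mount at boot  mount options
/dev/global/dsk/d20s2    /dev/global/rdsk/d20s2    /global/dbdata  ufs      2          yes            global,forcedirectio
```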