This section provides guidelines for planning global devices, device groups, and cluster file systems.
For information about the purpose and function of global devices, see Shared Devices, Local Devices, and Device Groups in Sun Cluster Overview for Solaris OS and Global Devices in Sun Cluster Concepts Guide for Solaris OS.
Sun Cluster software does not require any specific disk layout or file system size. Consider the following points when you plan your layout for global devices.
Mirroring – You must mirror all global devices for them to be considered highly available. You do not need software mirroring if the storage device provides hardware RAID as well as redundant paths to disks.
Disks – When you mirror, lay out file systems so that the file systems are mirrored across disk arrays.
Availability – You must physically connect a global device to more than one voting node in the cluster for the global device to be considered highly available. A global device with multiple physical connections can tolerate a single-node failure. A global device with only one physical connection is supported, but the global device becomes inaccessible from other voting nodes if the node with the connection is down.
Swap devices – Do not create a swap file on a global device.
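Instead, place swap on a local disk slice on each node. A minimal /etc/vfstab sketch, assuming a hypothetical local slice c0t0d0s1:

```
# /etc/vfstab entry: swap on a local slice, not on a global device
# (slice name c0t0d0s1 is hypothetical)
/dev/dsk/c0t0d0s1   -   -   swap   -   no   -
```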
Non-global zones – Global devices are not directly accessible from a non-global zone. Only cluster-file-system data is accessible from a non-global zone.
For information about the purpose and function of device groups, see Shared Devices, Local Devices, and Device Groups in Sun Cluster Overview for Solaris OS and Device Groups in Sun Cluster Concepts Guide for Solaris OS.
Add this planning information to the Device Group Configurations Worksheet.
Consider the following points when you plan device groups.
Failover – You can configure multihost disks and properly configured volume-manager devices as failover devices. Proper configuration of a volume-manager device means that it uses multihost disks and that the volume manager itself is set up correctly. This configuration ensures that multiple voting nodes can host the exported device. You cannot configure tape drives, CD-ROMs or DVD-ROMs, or single-ported devices as failover devices.
Mirroring – You must mirror the disks to protect the data from disk failure. See Mirroring Guidelines for additional guidelines. See Configuring Solaris Volume Manager Software or Installing and Configuring VxVM Software and your volume-manager documentation for instructions about mirroring.
Storage-based replication – Disks in a device group must be either all replicated or none replicated. A device group cannot use a mix of replicated and nonreplicated disks.
For information about the purpose and function of cluster file systems, see Cluster File Systems in Sun Cluster Overview for Solaris OS and Cluster File Systems in Sun Cluster Concepts Guide for Solaris OS.
You can alternatively configure highly available local file systems. A highly available local file system can provide better performance for a data service with high I/O, or can permit the use of certain file-system features that are not supported in a cluster file system. For more information, see Enabling Highly Available Local File Systems in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
Consider the following points when you plan cluster file systems.
Quotas – Quotas are not supported on cluster file systems. However, quotas are supported on highly available local file systems.
Non-global zones – If a cluster file system is to be accessed from a non-global zone, it must first be mounted in the global zone. The cluster file system is then mounted in the non-global zone by using a loopback mount. Therefore, the loopback file system (LOFS) must be enabled in a cluster that contains non-global zones.
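One way to configure the loopback mount is to add an fs resource of type lofs to the zone configuration. A minimal zonecfg sketch, assuming a hypothetical zone named myzone and a cluster file system mounted at /global/data in the global zone:

```
# Loopback-mount the cluster file system into the non-global zone
# (zone name "myzone" and path /global/data are hypothetical)
zonecfg -z myzone
zonecfg:myzone> add fs
zonecfg:myzone:fs> set dir=/global/data
zonecfg:myzone:fs> set special=/global/data
zonecfg:myzone:fs> set type=lofs
zonecfg:myzone:fs> end
zonecfg:myzone> commit
zonecfg:myzone> exit
```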
Zone clusters – You cannot configure cluster file systems that use UFS or VxFS for use in a zone cluster. Use highly available local file systems instead. You can use a QFS shared file system in a zone cluster, but only to support Oracle RAC.
Loopback file system (LOFS) – During cluster creation with the Solaris 9 version of Sun Cluster software, LOFS is disabled by default. During cluster creation with the Solaris 10 version of Sun Cluster software, LOFS is enabled by default.
You must manually disable LOFS on each voting cluster node if the cluster meets both of the following conditions:
Sun Cluster HA for NFS is configured on a highly available local file system.
The automountd daemon is running.
If the cluster meets both of these conditions, you must disable LOFS to avoid switchover problems or other failures. If the cluster meets only one of these conditions, you can safely enable LOFS.
If you require both LOFS and the automountd daemon to be enabled, exclude from the automounter map all files that are part of the highly available local file system that is exported by Sun Cluster HA for NFS.
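For example, to disable LOFS on a voting node, you can exclude the LOFS module at boot time. A minimal sketch of the /etc/system change, which takes effect after the node is rebooted:

```
# Add to /etc/system on each voting cluster node, then reboot the node
exclude:lofs
```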
Process accounting log files – Do not locate process accounting log files on a cluster file system or on a highly available local file system. A switchover would be blocked by writes to the log file, which would cause the node to hang. Use only a local file system to contain process accounting log files.
Communication endpoints – The cluster file system does not support any of the Solaris file-system features that place a communication endpoint in the file-system namespace.
Although you can create a UNIX domain socket whose name is a path name into the cluster file system, the socket would not survive a node failover.
Any FIFOs or named pipes that you create on a cluster file system would not be globally accessible.
Therefore, do not attempt to use the fattach command from any node other than the local node.
Device special files – Neither block special files nor character special files are supported in a cluster file system. To specify a path name to a device node in a cluster file system, create a symbolic link to the device name in the /dev directory. Do not use the mknod command for this purpose.
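For example, a minimal sketch; the device and path names are hypothetical:

```
# Reference the device through a symbolic link in the cluster file system;
# do not create the device node itself with mknod
# (device c1t0d0s2 and path /global/data/mydev are hypothetical)
ln -s /dev/dsk/c1t0d0s2 /global/data/mydev
```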
atime – Cluster file systems do not maintain atime.
ctime – When a file on a cluster file system is accessed, the update of the file's ctime might be delayed.
Installing applications – If you want the binaries of a highly available application to reside on a cluster file system, wait to install the application until after the cluster file system is configured. Also, if the application is installed by using the Sun Java System installer program and depends on any shared components, install those shared components on all cluster nodes on which the application is not installed.
This section describes requirements and restrictions for UFS and VxFS cluster file systems.
You can alternatively configure these and other types of file systems as highly available local file systems. For more information, see Enabling Highly Available Local File Systems in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
Follow these guidelines to determine what mount options to use when you create your cluster file systems.
See the mount_ufs(1M) man page for more information about UFS mount options.
| Mount Option | Usage | Description |
|---|---|---|
| global | Required | This option makes the file system globally visible to all nodes in the cluster. |
| logging | Required | This option enables UFS logging. |
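As an illustration, a /etc/vfstab entry for a UFS cluster file system might look like the following sketch; the diskset, metadevice, and mount-point names are hypothetical:

```
# /etc/vfstab entry for a UFS cluster file system
# (diskset "nfsds", metadevice d30, and mount point /global/nfs are hypothetical)
/dev/md/nfsds/dsk/d30  /dev/md/nfsds/rdsk/d30  /global/nfs  ufs  2  yes  global,logging
```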
See the VxFS mount_vxfs man page and Overview of Administering Cluster File Systems in Sun Cluster System Administration Guide for Solaris OS for more information about VxFS mount options.
Consider the following points when you plan mount points for cluster file systems.
Mount-point location – Create mount points for cluster file systems in the /global directory, unless you are prohibited by other software products. By using the /global directory, you can more easily distinguish cluster file systems, which are globally available, from local file systems.
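A minimal sketch of creating such a mount point; the directory name /global/oracle is hypothetical:

```
# Run on each cluster node so the mount point exists everywhere
# (directory name /global/oracle is hypothetical)
mkdir -p /global/oracle
```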
SPARC: VxFS mount requirement – If you use Veritas File System (VxFS), globally mount and unmount a VxFS file system from the primary node. The primary node is the Solaris host that masters the disk on which the VxFS file system resides. This method ensures that the mount or unmount operation succeeds. A VxFS file-system mount or unmount operation that is performed from a secondary node might fail.
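For example, a minimal sketch that identifies the primary node and performs the global mount from it. The device group, volume, and mount-point names are hypothetical, and the cldevicegroup command assumes the Sun Cluster 3.2 command set:

```
# Identify the primary node of the device group (group name "oradg" is hypothetical)
cldevicegroup status oradg
# From the primary node, globally mount the VxFS file system
# (volume "oravol" and mount point /global/oracle are hypothetical)
mount -F vxfs -o global /dev/vx/dsk/oradg/oravol /global/oracle
```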
SPARC: VxFS feature restrictions – The following VxFS features are not supported in a Sun Cluster 3.2 cluster file system. They are, however, supported in a local file system.
Quick I/O
Snapshots
Storage checkpoints
VxFS-specific mount options:
convosync (Convert O_SYNC)
mincache
qlog, delaylog, tmplog
Veritas cluster file system (requires the VxVM cluster feature and Veritas Cluster Server). The VxVM cluster feature is not supported on x86 based systems.
Cache advisories can be used, but the effect is observed on the given node only.
All other VxFS features and options that are supported in a cluster file system are supported by Sun Cluster 3.2 software. See VxFS documentation for details about VxFS options that are supported in a cluster configuration.
Nesting mount points – Normally, you should not nest the mount points for cluster file systems. For example, do not set up one file system that is mounted on /global/a and another file system that is mounted on /global/a/b. Ignoring this rule can cause availability and node boot-order problems, which occur if the parent mount point is not present when the system attempts to mount a child of that file system.
The only exception to this rule, for cluster file systems on UFS or VxFS, is if the devices for the two file systems have the same physical host connectivity. An example is different slices on the same disk.
This restriction still applies to QFS shared file systems, even if the two file-system devices have the same physical host connectivity.
forcedirectio – Sun Cluster software does not support the execution of binaries off cluster file systems that are mounted by using the forcedirectio mount option.