Oracle Solaris Cluster Software Installation Guide, Oracle Solaris Cluster 4.1
1. Planning the Oracle Solaris Cluster Configuration

Planning Global Devices, Device Groups, and Cluster File Systems
This section provides the following information:

Planning Global Devices

Planning Device Groups

Planning Cluster File Systems

Choosing Mount Options for UFS Cluster File Systems

Mount Information for Cluster File Systems
Planning Global Devices

For information about the purpose and function of global devices, see Global Devices in Oracle Solaris Cluster Concepts Guide.
Oracle Solaris Cluster software does not require any specific disk layout or file system size. Consider the following points when you plan your layout for global devices:
Mirroring – You must mirror all global devices for the global device to be considered highly available. You do not need to use software mirroring if the storage device provides hardware RAID as well as redundant paths to disks.
Disks – When you mirror, lay out file systems so that the file systems are mirrored across disk arrays.
Availability – You must physically connect a global device to more than one node in the cluster for the global device to be considered highly available. A global device with multiple physical connections can tolerate a single-node failure. A global device with only one physical connection is supported, but the global device becomes inaccessible from other nodes if the node with the connection is down. (See the example after this list.)
Swap devices – Do not create a swap file on a global device.
Non-global zones – Global devices are not directly accessible from a non-global zone. Only data from a cluster file system is accessible from a non-global zone.
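For example, after the cluster is established you can check which nodes have a physical path to each shared disk with the cldevice command. The output below is only an illustrative sketch; the DID instance names and node names are hypothetical. A device such as d3, which is reachable from both nodes, is the kind of device that satisfies the Availability guideline above.

# cldevice list -v
DID Device          Full Device Path
----------          ----------------
d1                  phys-schost-1:/dev/rdsk/c0t0d0
d3                  phys-schost-1:/dev/rdsk/c1t1d0
d3                  phys-schost-2:/dev/rdsk/c1t1d0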
Planning Device Groups

For information about the purpose and function of device groups, see Device Groups in Oracle Solaris Cluster Concepts Guide.
Consider the following points when you plan device groups:
Failover – You can configure multihost disks and properly configured volume-manager devices as failover devices. Proper configuration of a volume-manager device includes multihost disks and correct setup of the volume manager itself. This configuration ensures that multiple nodes can host the exported device. You cannot configure tape drives, CD-ROMs or DVD-ROMs, or single-ported devices as failover devices.
Mirroring – You must mirror the disks to protect the data from disk failure. See Mirroring Guidelines for additional guidelines. See Configuring Solaris Volume Manager Software and your volume-manager documentation for instructions about mirroring.
Storage-based replication – Disks in a device group must be either all replicated or none replicated. A device group cannot use a mix of replicated and nonreplicated disks.
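As a quick check after device groups are created, you can list their status and properties with the cldevicegroup command. This is only a sketch; the device group name oradg is hypothetical.

# cldevicegroup status
# cldevicegroup show oradg

The show output reports, among other properties, the node list that can master the device group, which is useful when you verify the Failover guideline above.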
Planning Cluster File Systems

For information about the purpose and function of cluster file systems, see Cluster File Systems in Oracle Solaris Cluster Concepts Guide.
Note - You can alternatively configure highly available local file systems. This can provide better performance to support a data service with high I/O, or to permit use of certain file system features that are not supported in a cluster file system. For more information, see Enabling Highly Available Local File Systems in Oracle Solaris Cluster Data Services Planning and Administration Guide.
Consider the following points when you plan cluster file systems:
Quotas – Quotas are not supported on cluster file systems. However, quotas are supported on highly available local file systems.
Zone clusters – You cannot configure cluster file systems that use UFS for use in a zone cluster. Use highly available local file systems instead.
Loopback file system (LOFS) – During cluster creation, LOFS is enabled by default. You must manually disable LOFS on each cluster node if the cluster meets both of the following conditions:
HA for NFS is configured on a highly available local file system.
The automountd daemon is running.
If the cluster meets both of these conditions, you must disable LOFS to avoid switchover problems or other failures; see the example after this list. If the cluster meets only one of these conditions, you can safely enable LOFS.
If you require both LOFS and the automountd daemon to be enabled, exclude from the automounter map all files that are part of the highly available local file system that is exported by HA for NFS.
Process accounting log files – Do not locate process accounting log files on a cluster file system or on a highly available local file system. A switchover would be blocked by writes to the log file, which would cause the node to hang. Use only a local file system to contain process accounting log files.
Communication endpoints – The cluster file system does not support any of the file system features of Oracle Solaris software by which one would put a communication endpoint in the file system namespace. Therefore, do not attempt to use the fattach command from any node other than the local node.
Although you can create a UNIX domain socket whose name is a path name into the cluster file system, the socket would not survive a node failover.
Any FIFOs or named pipes that you create on a cluster file system would not be globally accessible.
Device special files – Neither block special files nor character special files are supported in a cluster file system. To specify a path name to a device node in a cluster file system, create a symbolic link to the device name in the /dev directory, as shown in the example after this list. Do not use the mknod command for this purpose.
atime – Cluster file systems do not maintain atime.
ctime – When a file on a cluster file system is accessed, the update of the file's ctime might be delayed.
Installing applications – If you want the binaries of a highly available application to reside on a cluster file system, wait to install the application until after the cluster file system is configured.
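The following sketches illustrate two of the guidelines in this list. Verify the exact steps against the installation procedures for your release, and treat all path names as hypothetical.

To disable LOFS, a commonly documented approach is to add the following entry to the /etc/system file on each cluster node and then reboot the node:

exclude:lofs

To reference a device node from a cluster file system without using the mknod command, create a symbolic link that points to the device name under /dev, for example:

# ln -s /dev/rdsk/c1t1d0s7 /global/app/rawdev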
Choosing Mount Options for UFS Cluster File Systems

This section describes requirements and restrictions for mount options of UFS cluster file systems.
Note - You can alternatively configure this and other types of file systems as highly available local file systems. For more information, see Enabling Highly Available Local File Systems in Oracle Solaris Cluster Data Services Planning and Administration Guide.
Follow these guidelines to determine which mount options to use when you create your UFS cluster file systems.
global – Required. This option makes the file system globally visible to all nodes in the cluster.
logging – Required. This option enables logging.
forcedirectio – Conditional. This option is required only for cluster file systems that will host Oracle RAC RDBMS data files, log files, and control files.
onerror=panic – Required. You do not have to explicitly specify the onerror=panic mount option in the /etc/vfstab file. This mount option is already the default value if no other onerror mount option is specified.
Note - Only the onerror=panic mount option is supported by Oracle Solaris Cluster software. Do not use the onerror=umount or onerror=lock mount options. These mount options are not supported on cluster file systems for the following reasons:
Use of the onerror=umount or onerror=lock mount option might cause the cluster file system to lock or become inaccessible. This condition might occur if the cluster file system experiences file corruption.
The onerror=umount or onerror=lock mount option might cause the cluster file system to become unmountable. This condition might thereby cause applications that use the cluster file system to hang or prevent the applications from being killed.
A node might require rebooting to recover from these states.
syncdir – Optional. If you specify syncdir, you are guaranteed POSIX-compliant file system behavior for the write() system call. If a write() succeeds, then this mount option ensures that sufficient space is on the disk.
If you do not specify syncdir, the behavior is the same as that of UFS file systems. When you do not specify syncdir, performance of writes that allocate disk blocks, such as when appending data to a file, can significantly improve. However, in some cases, without syncdir you would not discover an out-of-space condition (ENOSPC) until you close a file.
You see ENOSPC on close only during a very short time after a failover. With syncdir, as with POSIX behavior, the out-of-space condition would be discovered before the close.
See the mount_ufs(1M) man page for more information about UFS mount options.
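For example, an /etc/vfstab entry for a UFS cluster file system might look like the following line. This is only a sketch; the Solaris Volume Manager device names and the mount point are hypothetical, and you should confirm the field values against the procedure for creating cluster file systems.

/dev/md/oradg/dsk/d1 /dev/md/oradg/rdsk/d1 /global/oracle ufs 2 yes global,logging

The fields follow the standard vfstab format: device to mount, device to fsck, mount point, file system type, fsck pass, mount at boot, and mount options.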
Mount Information for Cluster File Systems

Consider the following points when you plan mount points for cluster file systems:
Mount-point location – Create mount points for cluster file systems in the /global directory unless you are prohibited by other software products. By using the /global directory, you can more easily distinguish cluster file systems, which are globally available, from local file systems. (See the example after this list.)
Nesting mount points – Normally, you should not nest the mount points for cluster file systems. For example, do not set up one file system that is mounted on /global/a and another file system that is mounted on /global/a/b. Ignoring this rule can cause availability and node boot-order problems. These problems would occur if the parent mount point is not present when the system attempts to mount a child of that file system.
The only exception to this rule is for cluster file systems on UFS. You can nest the mount points if the devices for the two file systems have the same physical host connectivity, for example, different slices on the same disk.
forcedirectio – Oracle Solaris Cluster software does not support the execution of binaries off cluster file systems that are mounted by using the forcedirectio mount option.
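For example, to follow the /global convention, you might create the mount-point directory on each node that will mount the cluster file system; the path name is hypothetical:

# mkdir -p /global/oracle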