Add this planning information to the Device Group Configurations Worksheet and the Volume-Manager Configurations Worksheet. For Solaris Volume Manager, also add this planning information to the Volumes Worksheet (Solaris Volume Manager).
This section provides the following guidelines for planning volume management of your cluster configuration:
Oracle Solaris Cluster software uses volume-manager software to group disks into device groups, which can then be administered as one unit. Oracle Solaris Cluster software supports Solaris Volume Manager software and Veritas Volume Manager (VxVM) software that you install or use in the following ways.
Table 1-4 Supported Use of Volume Managers With Oracle Solaris Cluster Software
See your volume-manager documentation and Configuring Solaris Volume Manager Software or Installing and Configuring VxVM Software for instructions about how to install and configure the volume-manager software. For more information about the use of volume management in a cluster configuration, see Multihost Devices in Oracle Solaris Cluster Concepts Guide and Device Groups in Oracle Solaris Cluster Concepts Guide.
Consider the following general guidelines when you configure your disks with volume-manager software:
Software RAID – Oracle Solaris Cluster software does not support software RAID 5.
Mirrored multihost disks – You must mirror all multihost disks across disk expansion units. See Guidelines for Mirroring Multihost Disks for more information. You do not need to use software mirroring if the storage device provides hardware RAID as well as redundant paths to devices.
Mirrored root – Mirroring the root disk ensures high availability, but such mirroring is not required. See Mirroring Guidelines for guidelines about deciding whether to mirror the root disk.
Unique naming – You might have local Solaris Volume Manager or VxVM volumes that are used as devices on which the /global/.devices/node@nodeid file systems are mounted. If so, the name of each local volume on which a /global/.devices/node@nodeid file system is to be mounted must be unique throughout the cluster.
Node lists – To ensure high availability of a device group, make its node list of potential masters and its failback policy identical to those of any associated resource group. Or, if a scalable resource group uses more nodes than its associated device group, make the scalable resource group's node list a superset of the device group's node list. See the resource group planning information in the Oracle Solaris Cluster Data Services Planning and Administration Guide for information about node lists. An example of aligning these settings follows this list.
Multihost disks – You must connect, or port, all devices that are used to construct a device group to all of the nodes that are configured in the node list for that device group. Solaris Volume Manager software can automatically check for this connection at the time that devices are added to a disk set. However, configured VxVM disk groups do not have an association to any particular set of nodes.
Hot-spare disks – You can use hot-spare disks to increase availability, but they are not required.
See your volume-manager documentation for disk layout recommendations and any additional restrictions.
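For example, the following commands sketch one way to align a device group's failback policy and node list with those of its associated resource group, as described in the Node lists guideline above. The names dg-schost-1, rg-schost-1, phys-schost-1, and phys-schost-2 are placeholders, and property names can vary by release; see the cldevicegroup(1CL) and clresourcegroup(1CL) man pages for exact syntax.
phys-schost# cldevicegroup set -p failback=true dg-schost-1
phys-schost# cldevicegroup show dg-schost-1
phys-schost# clresourcegroup create -n phys-schost-1,phys-schost-2 -p Failback=true rg-schost-1
phys-schost# clresourcegroup show rg-schost-1
The show output for both objects lets you confirm that the two node lists contain the same nodes in the same order and that the failback behavior matches.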
Consider the following points when you plan Solaris Volume Manager configurations:
Local volume names – The name of each local Solaris Volume Manager volume on which a global-devices file system, /global/.devices/node@nodeid, is mounted must be unique throughout the cluster. Also, the name cannot be the same as any device-ID name.
Dual-string mediators – A disk string consists of a disk enclosure, its physical disks, cables from the enclosure to the host or hosts, and the interface adapter cards. Each disk set configured with exactly two disk strings and mastered by exactly two Solaris hosts is called a dual-string disk set. Such a disk set must have Solaris Volume Manager dual-string mediators configured. Observe the following rules when you configure dual-string mediators:
You must configure each disk set with two or three hosts that act as mediator hosts.
You must use the hosts that can master a disk set as mediators for that disk set. If you have a campus cluster, you can also configure a third node or a non-clustered host on the cluster network as a third mediator host to improve availability.
Mediators cannot be configured for disk sets that do not meet the two-string and two-host requirements.
See the mediator(7D) man page for details.
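As an illustration of these mediator rules, the following commands add the two hosts that can master a dual-string disk set as its mediator hosts and then check the mediator status. The disk set name mydiskset and the host names are placeholders; see the metaset(1M) and medstat(1M) man pages for details.
phys-schost-1# metaset -s mydiskset -a -m phys-schost-1 phys-schost-2
phys-schost-1# medstat -s mydiskset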
Consider the following points when you plan Veritas Volume Manager (VxVM) configurations:
Accessibility to nodes – You must configure all volume-manager disk groups as either Oracle Solaris Cluster device groups or as local-only disk groups. If you do not configure the disk group in one of these ways, the devices in the disk group will not be accessible to any node in the cluster. A registration example follows this list.
A device group enables a secondary node to host multihost disks if the primary node fails.
A local-only disk group functions outside the control of Oracle Solaris Cluster software and can be accessed from only one node at a time.
Enclosure-Based Naming – If you use Enclosure-Based Naming of devices, ensure that you use consistent device names on all cluster nodes that share the same storage. VxVM does not coordinate these names, so the administrator must ensure that VxVM assigns the same names to the same devices from different nodes. Failure to assign consistent names does not interfere with correct cluster behavior. However, inconsistent names greatly complicate cluster administration and greatly increase the possibility of configuration errors, potentially leading to loss of data.
Root disk group – The creation of a root disk group is optional.
A root disk group can be created on the following disks:
The root disk, which must be encapsulated
One or more local nonroot disks, which you can encapsulate or initialize
A combination of root and local nonroot disks
The root disk group must be local to the Solaris host.
Simple root disk groups – Simple root disk groups, which are created on a single slice of the root disk, are not supported as disk types with VxVM on Oracle Solaris Cluster software. This is a general VxVM software restriction.
Encapsulation – Disks to be encapsulated must have two disk-slice table entries free.
Number of volumes – Estimate the maximum number of volumes any given device group can use at the time the device group is created.
If the number of volumes is less than 1000, you can use default minor numbering.
If the number of volumes is 1000 or greater, you must carefully plan the way in which minor numbers are assigned to device group volumes. No two device groups can have overlapping minor number assignments. A re-minoring example follows this list.
Dirty Region Logging – The use of Dirty Region Logging (DRL) decreases volume recovery time after a node failure. Using DRL might decrease I/O throughput.
Dynamic Multipathing (DMP) – The use of DMP alone to manage multiple I/O paths per Solaris host to the shared storage is not supported. The use of DMP is supported only in the following configurations:
A single I/O path per host is configured to the cluster's shared storage.
A supported multipathing solution is used, such as Solaris I/O multipathing software (MPxIO) or EMC PowerPath, that manages multiple I/O paths per host to the shared cluster storage.
ZFS – Root-disk encapsulation is incompatible with a ZFS root file system.
See your VxVM installation documentation for additional information.
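As a sketch of the Accessibility to nodes guideline, the following commands initialize a VxVM disk group and register it as an Oracle Solaris Cluster device group so that a surviving node can host its disks. The disk group name dg1, the disk device name, and the node names are placeholders, and the exact cldevicegroup create arguments can vary by release; see the cldevicegroup(1CL) man page, or use the clsetup utility to register the disk group.
phys-schost-1# vxdg init dg1 dg1-d01=c1t1d0
phys-schost-1# cldevicegroup create -t vxvm -n phys-schost-1,phys-schost-2 dg1
phys-schost-1# cldevicegroup status dg1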
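For the Number of volumes guideline, the following sketch shows one way to keep minor-number assignments from overlapping by re-minoring each disk group to its own base value. The disk group names and base minor numbers are only examples; see the vxdg(1M) man page.
phys-schost-1# vxdg reminor dg1 1000
phys-schost-1# vxdg reminor dg2 2000
phys-schost-1# ls -l /dev/vx/dsk/dg1
The ls -l output shows the minor numbers that are now in use for the volumes in dg1.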
Logging is required for UFS and VxFS cluster file systems. Oracle Solaris Cluster software supports the following choices of file-system logging:
Solaris UFS logging – See the mount_ufs(1M) man page for more information.
SPARC: Veritas File System (VxFS) logging – See the mount_vxfs man page provided with VxFS software for more information.
Both Solaris Volume Manager and Veritas Volume Manager support both types of file-system logging.
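For example, a UFS cluster file system that is built on a Solaris Volume Manager volume might use an /etc/vfstab entry similar to the following, which enables both the global and logging mount options. The device paths and mount point are placeholders; see Choosing Mount Options for Cluster File Systems and the mount_ufs(1M) man page.
#device                   device                    mount        FS    fsck  mount    mount
#to mount                 to fsck                   point        type  pass  at boot  options
/dev/md/oraset/dsk/d30    /dev/md/oraset/rdsk/d30   /global/ora  ufs   2     yes      global,logging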
This section provides the following guidelines for planning the mirroring of your cluster configuration:
Mirroring all multihost disks in an Oracle Solaris Cluster configuration enables the configuration to tolerate single-device failures. Oracle Solaris Cluster software requires that you mirror all multihost disks across disk expansion units. You do not need to use software mirroring if the storage device provides hardware RAID as well as redundant paths to devices.
Consider the following points when you mirror multihost disks:
Separate disk expansion units – Each submirror of a given mirror or plex should reside in a different multihost expansion unit.
Disk space – Mirroring doubles the amount of necessary disk space.
Three-way mirroring – Solaris Volume Manager software and Veritas Volume Manager (VxVM) software support three-way mirroring. However, Oracle Solaris Cluster software requires only two-way mirroring.
Differing device sizes – If you mirror to a device of a different size, your mirror capacity is limited to the size of the smallest submirror or plex.
For more information about multihost disks, see Multihost Disk Storage in Oracle Solaris Cluster Overview and Oracle Solaris Cluster Concepts Guide.
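As a minimal Solaris Volume Manager sketch of these guidelines, the following commands build a two-way mirror in a disk set from two DID devices, where /dev/did/rdsk/d5 and /dev/did/rdsk/d12 are assumed to reside in different disk expansion units. The disk set name oraset, the volume names, and the DID device names are placeholders; see Configuring Solaris Volume Manager Software for the complete procedures.
phys-schost-1# metainit -s oraset d11 1 1 /dev/did/rdsk/d5s0
phys-schost-1# metainit -s oraset d12 1 1 /dev/did/rdsk/d12s0
phys-schost-1# metainit -s oraset d10 -m d11
phys-schost-1# metattach -s oraset d10 d12
The d11 and d12 submirrors are created on disks in separate expansion units, and d10 is the two-way mirror that you would then use for a file system or raw device.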
Add this planning information to the Local File System Layout Worksheet.
For maximum availability, mirror root (/), /usr, /var, /opt, and swap on the local disks. Under VxVM, you encapsulate the root disk and mirror the generated subdisks. However, Oracle Solaris Cluster software does not require that you mirror the root disk.
Before you decide whether to mirror the root disk, consider the risks, complexity, cost, and service time for the various alternatives that concern the root disk. No single mirroring strategy works for all configurations. You might want to consider your local Oracle service representative's preferred solution when you decide whether to mirror root.
See your volume-manager documentation and Configuring Solaris Volume Manager Software or Installing and Configuring VxVM Software for instructions about how to mirror the root disk.
Consider the following points when you decide whether to mirror the root disk; a Solaris Volume Manager sketch of a typical root-disk mirror follows these considerations:
Boot disk – You can set up the mirror to be a bootable root disk. You can then boot from the mirror if the primary boot disk fails.
Complexity – Mirroring the root disk adds complexity to system administration and complicates booting in single-user mode.
Backups – Regardless of whether you mirror the root disk, you also should perform regular backups of root. Mirroring alone does not protect against administrative errors. Only a backup plan enables you to restore files that have been accidentally altered or deleted.
Quorum devices – Do not use a disk that was configured as a quorum device to mirror a root disk.
Quorum – Under Solaris Volume Manager software, in failure scenarios in which state database quorum is lost, you cannot reboot the system until maintenance is performed. See your Solaris Volume Manager documentation for information about the state database and state database replicas.
Separate controllers – For the highest availability, mirror the root disk on a separate controller.
Secondary root disk – With a mirrored root disk, the primary root disk can fail but work can continue on the secondary (mirror) root disk. Later, the primary root disk might return to service, for example after a power cycle or transient I/O errors. Subsequent boots are then performed by using the primary root disk that is specified for the eeprom(1M) boot-device parameter. In this situation, no manual repair task occurs; the drive simply starts working well enough to boot. With Solaris Volume Manager software, a resync does occur, but it requires a manual step when the drive is returned to service.
If changes were made to any files on the secondary (mirror) root disk, they would not be reflected on the primary root disk during boot time. This condition would cause a stale submirror. For example, changes to the /etc/system file would be lost. With Solaris Volume Manager software, some administrative commands might have changed the /etc/system file while the primary root disk was out of service.
The boot program does not check whether the system is booting from a mirror or from an underlying physical device. The mirroring becomes active partway through the boot process, after the volumes are loaded. Before this point, the system is therefore vulnerable to stale submirror problems.
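As a sketch of a typical Solaris Volume Manager root-disk mirror, assume that c0t0d0 is the boot disk, that c1t0d0 is an identically partitioned disk on a separate controller, and that slice 7 of each disk is reserved for state database replicas; all of these names are placeholders. See Configuring Solaris Volume Manager Software and your Solaris Volume Manager documentation for the supported procedures.
phys-schost# metadb -a -f -c 3 c0t0d0s7 c1t0d0s7
phys-schost# metainit -f d11 1 1 c0t0d0s0
phys-schost# metainit d12 1 1 c1t0d0s0
phys-schost# metainit d10 -m d11
phys-schost# metaroot d10
phys-schost# lockfs -fa
phys-schost# init 6
phys-schost# metattach d10 d12
The metadb command creates state database replicas on both disks, the metainit commands create a submirror from each root slice, metaroot updates the /etc/vfstab and /etc/system files so that the system boots from the d10 mirror, and metattach attaches the second submirror after the reboot, which starts a resync.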