This chapter provides planning information and guidelines for installing a Sun Cluster configuration.
It provides an overview of where to find installation task instructions, followed by guidelines for planning the Solaris operating environment, the Sun Cluster environment, global devices and cluster file systems, and volume management.
The following table shows where to find instructions for various Sun Cluster software installation tasks and the order in which you should perform them.
Table 1-1 Location of Sun Cluster Software Installation Task Information
| Task | For Instructions, Go To ... |
|---|---|
| Setting up cluster hardware | Sun Cluster 3.0 Hardware Guide; documentation shipped with your server and storage devices |
| Planning cluster software installation | Chapter 1, Planning the Sun Cluster Configuration; "Configuration Worksheets and Examples" in Sun Cluster 3.0 Release Notes |
| Installing cluster framework, volume manager, and data service software packages | Chapter 2, Installing and Configuring Sun Cluster Software |
| Configuring cluster framework and volume manager software | Chapter 2, Installing and Configuring Sun Cluster Software; Appendix A, Configuring Solstice DiskSuite Software or Appendix B, Configuring VERITAS Volume Manager; volume manager documentation |
| Upgrading cluster framework, data services, and volume manager software | Chapter 3, Upgrading Sun Cluster Software; Appendix A, Configuring Solstice DiskSuite Software or Appendix B, Configuring VERITAS Volume Manager; volume manager documentation |
| Planning, installing, and configuring data services and resource groups | Sun Cluster 3.0 Data Services Installation and Configuration Guide |
| Using the API | Sun Cluster 3.0 Data Services Developers' Guide |
This section provides guidelines for planning Solaris software installation in a cluster configuration. For more information about Solaris software, refer to the Solaris installation documentation.
You can install Solaris software from a local CD-ROM or from a network install server by using the JumpStart installation method. In addition, Sun Cluster software provides a method for installing both the Solaris operating environment and Sun Cluster software by using custom JumpStart. If you are installing several cluster nodes, consider a network install.
Refer to "How to Use JumpStart to Install the Solaris Operating Environment and Establish New Cluster Nodes" for details about the custom JumpStart installation method. Refer to Solaris installation documentation for details about standard Solaris installation methods.
Add this information to the "Local File System Layout Worksheet" in Sun Cluster 3.0 Release Notes.
When the Solaris operating environment is installed, ensure that the required Sun Cluster partitions are created, and that all partitions meet minimum space requirements.
swap - Allocate at least 750 Mbytes or twice the physical memory, whichever is greater.
/globaldevices - Create a 100-Mbyte file system that will be used by the scinstall(1M) utility for global devices.
Volume manager - Create a 10-Mbyte partition for volume manager use on a slice at the end of the disk (slice 7). If your cluster uses VERITAS Volume Manager (VxVM) and you intend to encapsulate the root disk, you need to have two unused slices available for use by VxVM.
To meet these requirements, you must customize the partitioning if you are performing interactive installation of the Solaris operating environment.
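If you use the custom JumpStart method, these requirements translate into profile entries. The following fragment is only a sketch: the disk name (c0t0d0), slice assignments, sizes, and the End User software group are illustrative assumptions that you must adapt to your own hardware and software group.

```
# Example JumpStart profile fragment (illustrative values only)
install_type    initial_install
system_type     standalone
partitioning    explicit
cluster         SUNWCuser                     # End User System Support software group
filesys         c0t0d0s1  750   swap          # at least 750 Mbytes or 2x physical memory
filesys         c0t0d0s3  100   /globaldevices
filesys         c0t0d0s7  10    unnamed       # small slice reserved for volume manager use
filesys         c0t0d0s0  free  /             # remaining space to root (/)
```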
Refer to the following guidelines for additional partition planning information.
As with any other system running the Solaris operating environment, you can configure the root (/), /var, /usr, and /opt directories as separate file systems, or you can include all the directories in the root (/) file system. The following describes the software contents of the root (/), /var, /usr, and /opt directories in a Sun Cluster configuration. Consider this information when planning your partitioning scheme.
root (/) - The Sun Cluster software itself occupies less than 40 Mbytes of space in the root (/) file system. Solstice DiskSuite software requires less than 5 Mbytes, and VxVM software requires less than 15 Mbytes. For best results, configure ample additional space and inode capacity for the creation of both block special devices and character special devices used by either Solstice DiskSuite or VxVM software, especially if a large number of shared disks are in the cluster. Therefore, add at least 100 Mbytes to the amount of space you would normally allocate for your root (/) file system.
/var - The Sun Cluster software occupies a negligible amount of space in /var at installation time. However, set aside ample space for log files. Also, more messages might be logged on a clustered node than would be found on a typical standalone server. Therefore, allow at least 100 Mbytes for /var.
/usr - Sun Cluster software occupies less than 25 Mbytes of space in /usr. Solstice DiskSuite and VxVM software each require less than 15 Mbytes.
/opt - Sun Cluster framework software uses less than 2 Mbytes in /opt. However, each Sun Cluster data service might use between 1 Mbyte and 5 Mbytes. Solstice DiskSuite software does not use any space in /opt. VxVM software can use over 40 Mbytes if all of its packages and tools are installed. In addition, most database and applications software is installed in /opt. If you use Sun Management Center software (formerly named Sun Enterprise SyMON) to monitor the cluster, you need an additional 25 Mbytes of space on each node to support the Sun Management Center agent and Sun Cluster module packages.
The minimum size of the swap partition must be either 750 Mbytes or twice the amount of physical memory on the machine, whichever is greater. In addition, any third-party applications you install might also have swap requirements. Refer to third-party application documentation for any swap requirements.
Sun Cluster software requires that you set aside a special file system on one of the local disks for use in managing global devices. This file system must be separate from your other local file systems because it is later mounted as a cluster file system. Name this file system /globaldevices, which is the default name recognized by the scinstall(1M) command. The scinstall(1M) command later renames the file system /global/.devices/node@nodeid, where nodeid represents the number assigned to a node when it becomes a cluster member, and the original /globaldevices mount point is removed. The /globaldevices file system must have ample space and inode capacity for creating both block special devices and character special devices, especially if a large number of disks are in the cluster. A file system size of 100 Mbytes should be more than enough for most cluster configurations.
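If you did not create the /globaldevices file system during Solaris installation, you can create it afterward on a spare slice, before you install Sun Cluster software. The following is a minimal sketch; the slice name c0t0d0s3 is an assumption.

```
# Build a UFS file system on the spare slice (slice name is an example)
newfs /dev/rdsk/c0t0d0s3

# Create the mount point, add a vfstab entry so the file system mounts
# at boot, and then mount it
mkdir /globaldevices
# Add to /etc/vfstab:
# /dev/dsk/c0t0d0s3  /dev/rdsk/c0t0d0s3  /globaldevices  ufs  2  yes  -
mount /globaldevices
```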
If you use Solstice DiskSuite software, you must set aside a slice on the root disk for use in creating the state database replicas. Specifically, set aside such a slice on each local disk. However, if you have only one local disk on a node, you might need to create three state database replicas in the same slice for Solstice DiskSuite software to function properly. Refer to the Solstice DiskSuite documentation for more information.
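The replica slice is used with the metadb(1M) command after Solstice DiskSuite software is installed. For example, on a node with only one local disk, a sketch such as the following places three state database replicas in a single slice (the slice name is illustrative):

```
# Add three state database replicas to slice 7 of the root disk
# -a adds replicas, -f forces creation of the first replicas,
# -c 3 places three copies in the slice
metadb -a -f -c 3 c0t0d0s7
```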
If you use VxVM and you intend to encapsulate the root disk, you need two unused slices available for use by VxVM, as well as some additional unassigned free space at either the beginning or end of the disk. Refer to the VxVM documentation for more information about encapsulation.
Table 1-2 shows a partitioning scheme for a cluster node that has less than 750 Mbytes of physical memory. The node will be installed with the End User System Support software group of the Solaris operating environment, Sun Cluster software, and the Sun Cluster HA for NFS data service. The last slice on the disk, slice 7, is allocated a small amount of space for volume manager use.
This layout allows for the use of either Solstice DiskSuite software or VxVM. If you use Solstice DiskSuite software, you use slice 7 for the replica database. If you use VxVM, you later free slice 7 by assigning it a zero length. This layout provides the two free slices that VxVM needs, 4 and 7, as well as unused space at the end of the disk.
Table 1-2 Sample File-System Allocation
| Slice | Contents | Allocation (in Mbytes) | Description |
|---|---|---|---|
| 0 | / | 1168 | 441 Mbytes for Solaris operating environment software. 100 Mbytes extra for root (/). 100 Mbytes extra for /var. 25 Mbytes for Sun Cluster software. 55 Mbytes for volume manager software. 1 Mbyte for Sun Cluster HA for NFS software. 25 Mbytes for the Sun Management Center agent and Sun Cluster module agent packages. 421 Mbytes (the remaining free space on the disk) for possible future use by database and application software. |
| 1 | swap | 750 | Minimum size when physical memory is less than 750 Mbytes. |
| 2 | overlap | 2028 | The entire disk. |
| 3 | /globaldevices | 100 | The Sun Cluster software later assigns this slice a different mount point and mounts it as a cluster file system. |
| 4 | unused | - | Available as a free slice for encapsulating the root disk under VxVM. |
| 5 | unused | - | |
| 6 | unused | - | |
| 7 | volume manager | 10 | If Solstice DiskSuite software is used, this slice holds the replica database. If VxVM is used, the slice is later freed, along with some space at the end of the disk. |
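After you install the Solaris operating environment, you can confirm that the disk label matches the planned layout. A minimal check, assuming the boot disk is c0t0d0:

```
# Display the partition map of the boot disk
prtvtoc /dev/rdsk/c0t0d0s2

# Confirm the configured swap space and the /globaldevices file system
swap -l
df -k /globaldevices
```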
This section provides guidelines for planning and preparing for Sun Cluster software installation. For detailed information about Sun Cluster components, refer to Sun Cluster 3.0 Concepts.
Ensure that you have any necessary license certificates available before you begin software installation. Sun Cluster software does not require a license certificate, but each node installed with Sun Cluster software must be covered under your Sun Cluster software license agreement.
For licensing requirements for volume manager software and applications software, refer to the installation documentation for those products.
After installing each software product, you must also install any required patches. For the current list of required patches, refer to Sun Cluster 3.0 Release Notes or consult your Enterprise Services representative or service provider. Refer to Sun Cluster 3.0 System Administration Guide for general guidelines and procedures for applying patches.
You must set up a number of IP addresses for various Sun Cluster components, depending on your cluster configuration. Each node in the cluster configuration must have at least one public network connection to the same set of public subnets.
The following table lists the components that need IP addresses assigned to them. Add these IP addresses to any naming services used. Also add these IP addresses to the local /etc/inet/hosts file on each cluster node after Sun Cluster software is installed (a sample hosts file fragment follows the table).
Table 1-3 Sun Cluster Components That Use IP Addresses
| Component | IP Addresses Needed |
|---|---|
| Administrative console | 1 per subnet |
| Cluster nodes | 1 per node, per subnet |
| Terminal concentrator or System Service Processor | 1 |
| Logical addresses | 1 per logical host resource, per subnet |
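The following fragment illustrates the kinds of entries you might add to each node's /etc/inet/hosts file. All hostnames and addresses shown are hypothetical examples, not required values.

```
# Cluster nodes (public network)
192.168.10.11   phys-host1
192.168.10.12   phys-host2
# Administrative console and terminal concentrator
192.168.10.20   admincon
192.168.10.21   tc
# Logical address used by a failover data service
192.168.10.30   lh-nfs
```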
A terminal concentrator communicates between the administrative console and the cluster node consoles. Sun Enterprise E10000 servers use a System Service Processor (SSP) instead of a terminal concentrator. For more information about console access, refer to Sun Cluster 3.0 Concepts.
Each data service resource group that uses a logical address must have a hostname specified for each public network from which the logical address can be accessed. Refer to Sun Cluster 3.0 Data Services Installation and Configuration Guide for information and worksheets for planning resource groups. For more information about data services and resources, also refer to Sun Cluster 3.0 Concepts.
This section provides guidelines for the Sun Cluster components that you configure during installation.
Add this planning information to the "Cluster and Node Names Worksheet" in Sun Cluster 3.0 Release Notes.
You specify a name for the cluster during Sun Cluster installation. The cluster name should be unique throughout the enterprise.
Add this planning information to the "Cluster and Node Names Worksheet" in Sun Cluster 3.0 Release Notes. Information for most other worksheets is grouped by node name.
The node name is the name you assign to a machine during installation of the Solaris operating environment. During Sun Cluster installation, you specify the names of all nodes that you are installing as a cluster.
Add this planning information to the "Cluster and Node Names Worksheet" in Sun Cluster 3.0 Release Notes.
Sun Cluster software uses the private network for internal communication between nodes. Sun Cluster requires at least two connections to the cluster interconnect on the private network. You specify the private network address and netmask when installing Sun Cluster software on the first node of the cluster. You can choose to accept the default private network address (172.16.0.0) and netmask (255.255.0.0), or type different choices if the default network address is already in use elsewhere in the enterprise.
After you have successfully installed the node as a cluster member, you cannot change the private network address and netmask.
If you specify a private network address other than the default, it must meet the following requirements.
Use zeroes for the last two octets of the address
Follow the guidelines in RFC 1597 for network address assignments
Refer to TCP/IP and Data Communications Administration Guide for instructions on obtaining copies of RFCs.
If you specify a netmask other than the default, it must meet the following requirements.
Minimally mask all bits given in the private network address
Have no "holes"
Add this planning information to the "Cluster Interconnect Worksheet" in Sun Cluster 3.0 Release Notes.
The cluster interconnect provides the hardware pathway for private network communication between cluster nodes. Each interconnect consists of a cable between two transport adapters, a transport adapter and a transport junction, or two transport junctions. During Sun Cluster installation, you specify the following configuration information for two cluster interconnects.
Transport adapters - For the transport adapters, such as ports on network interfaces, specify the transport adapter names and transport type. If your configuration is a two-node cluster, you also specify whether your interconnect is direct connected (adapter to adapter) or uses a transport junction.
Transport junctions - If transport junctions, such as a network switch, are used, specify the transport junction name for each interconnect. The default name is switchN, where N is a number automatically assigned during installation. Also specify the junction port name, or accept the default name. The default port name is the same as the node ID of the node hosting the adapter end of the cable. However, you cannot use the default port name for certain adapter types, such as SCI.
Clusters with three or more nodes must use transport junctions. Direct connection between cluster nodes is supported only for two-node clusters.
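For example, a two-node cluster that uses transport junctions might be cabled as follows; the adapter and junction names are hypothetical, and scstat(1M) is one way to confirm the transport paths after installation.

```
# Hypothetical two-node, two-junction interconnect layout
#   Interconnect 1:  phys-host1:hme1 -- switch1 -- phys-host2:hme1
#   Interconnect 2:  phys-host1:qfe0 -- switch2 -- phys-host2:qfe0

# After installation, check the status of the cluster transport paths
scstat -W
```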
You can configure additional private network connections after installation by using the scsetup(1M) utility.
For more information about the cluster interconnect, refer to Sun Cluster 3.0 Concepts.
Add this planning information to the "Cluster and Node Names Worksheet" in Sun Cluster 3.0 Release Notes.
The private hostname is the name used for inter-node communication over the private network interface. Private hostnames are automatically created during Sun Cluster installation, and follow the naming convention clusternodenodeid-priv, where nodeid is the numeral of the internal node ID. This node ID number is automatically assigned during Sun Cluster installation to each node when it becomes a cluster member. After installation, you can rename private hostnames by using the scsetup(1M) utility.
Add this planning information to the "Public Networks Worksheet" in Sun Cluster 3.0 Release Notes.
Public networks communicate outside the cluster. Consider the following points when planning your public network configuration.
Public networks and the private network (cluster interconnect) must use separate adapters.
You must have at least one public network that is connected to all cluster nodes.
You can have as many additional public network connections as your hardware configuration allows.
See also "NAFO Groups" for guidelines on planning public network adapter backup groups. For more information about public network interfaces, refer to Sun Cluster 3.0 Concepts.
Add this planning information to the "Disk Device Group Configurations Worksheet" in Sun Cluster 3.0 Release Notes.
You must configure all volume manager disk groups as Sun Cluster disk device groups. This configuration enables multihost disks to be hosted by a secondary node if the primary node fails. Consider the following points when planning disk device groups.
Failover - You can configure multiported disks and properly configured volume manager devices as failover devices. Proper configuration of a volume manager device includes multiported disks and correct setup of the volume manager itself so that the exported device can be hosted by multiple nodes. You cannot configure tape drives, CD-ROMs, or single-ported disks as failover devices.
Mirroring - You must mirror the disks to protect the data from disk failure. Refer to your volume manager documentation for instructions on mirroring.
For more information about disk device groups, refer to Sun Cluster 3.0 Concepts.
Add this planning information to the "Public Networks Worksheet" in Sun Cluster 3.0 Release Notes.
A Network Adapter Failover (NAFO) group provides public network adapter monitoring and failover, and is the foundation for a network address resource. If the active adapter of a NAFO group that is configured with two or more adapters fails, all of its addresses fail over to another adapter in the NAFO group. In this way, the active NAFO group adapter maintains public network connectivity to the subnet to which the adapters in the NAFO group connect.
Consider the following points when planning your NAFO groups.
Each public network adapter must belong to a NAFO group.
Each node can have only one NAFO group per subnet.
Only one adapter in a given NAFO group can have a hostname association, in the form of an /etc/hostname.adapter file.
NAFO group naming convention is nafoN, where N is the number you supply when you create the NAFO group.
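For example, on a node with two public network adapters on the same subnet grouped into a single NAFO group (named nafo0 by convention), only the active adapter has a hostname association; the adapter names and hostname below are hypothetical. NAFO groups themselves are created after installation with the pnmset(1M) utility.

```
# /etc/hostname.qfe0 -- the one adapter in the group with a hostname
# association; the backup adapter qfe1 has no /etc/hostname.qfe1 file
phys-host1
```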
For more information about Network Adapter Failover, refer to Sun Cluster 3.0 Concepts.
Sun Cluster configurations use quorum devices to maintain data and resource integrity. If the cluster temporarily loses connection to a node, the quorum device prevents amnesia or split-brain problems when the cluster node attempts to rejoin the cluster. You assign quorum devices by using the scsetup(1M) utility.
Consider the following points when planning quorum devices.
Minimum - A two-node cluster must have at least one shared disk assigned as a quorum device. For other topologies, quorum devices are optional.
Odd number rule - If more than one quorum device is configured in a two-node cluster, or in a pair of nodes directly connected to the quorum device, configure an odd number of quorum devices so that the quorum devices have completely independent failure pathways.
Connection - A quorum device cannot be connected to more than two nodes.
For more information about quorum, refer to Sun Cluster 3.0 Concepts.
This section provides guidelines for planning global devices and cluster file systems. For more information about global devices and cluster files systems, refer to Sun Cluster 3.0 Concepts.
Sun Cluster does not require any specific disk layout or file system size. Consider the following points when planning your global device and cluster file system layout.
Mirroring - All global devices must be mirrored to be considered highly available.
Disks - When mirroring, lay out disks so that they are mirrored across disk expansion units.
Availability - A global device must have a physical connection to more than one node in the cluster to be considered highly available. This configuration can tolerate a single-node failure. A global device with only one physical connection is supported, but it is inaccessible from other nodes if the node with the connection is down.
Consider the following points when planning mount points for cluster file systems.
Mount point location - Create mount points in the /global directory, unless prohibited by other software products. Using a /global directory enables you to easily distinguish cluster file systems, which are globally available, from local file systems.
Nesting mount points - Normally, you should not nest the mount points for cluster file systems. For example, do not set up one file system mounted on /global/a and another file system mounted on /global/a/b. Ignoring this rule can cause availability and node boot order problems, because the parent mount point might not be present. The only exception to this rule is if the devices for the two file systems have the same physical node connectivity (for example, different slices on the same disk).
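As an illustration of these guidelines, a cluster file system built on a Solstice DiskSuite metadevice might have an /etc/vfstab entry such as the following on every node; the diskset name, metadevice, and mount point are hypothetical.

```
# device to mount        device to fsck            mount point  FS   fsck  mount    mount
#                                                               type pass  at boot  options
/dev/md/nfsset/dsk/d100  /dev/md/nfsset/rdsk/d100  /global/nfs  ufs  2     yes      global,logging
```

The mount point sits directly under /global and is not nested within another cluster file system.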
This section provides guidelines for planning volume management of your cluster configuration.
Sun Cluster uses volume manager software to group disks into disk device groups that can then be administered as one unit. Sun Cluster supports Solstice DiskSuite software and VERITAS Volume Manager (VxVM). You can use only one volume manager within a single cluster configuration. Refer to your volume manager documentation and to either Appendix A, Configuring Solstice DiskSuite Software or Appendix B, Configuring VERITAS Volume Manager for instructions on configuring the volume manager software. For more information about volume management in a cluster configuration, refer to Sun Cluster 3.0 Concepts.
Add this planning information to the "Disk Device Group Configurations Worksheet" and the "Volume Manager Configurations Worksheet" in Sun Cluster 3.0 Release Notes, and to the "Metadevices Worksheet (Solstice DiskSuite)" in Sun Cluster 3.0 Release Notes, if applicable.
Consider the following general guidelines when configuring your disks.
Mirrored multihost disks - You must mirror all multihost disks across disk expansion units. See "Mirroring Multihost Disks" for guidelines on mirroring multihost disks.
Mirrored root - Mirroring the root disk ensures high availability, but such mirroring is not required. See "Mirroring Guidelines" for guidelines on deciding whether to mirror the root disk.
Unique naming - On any cluster node, if a local Solstice DiskSuite metadevice or VxVM volume is used as the device on which the /global/.devices/node@nodeid file system is mounted, the name of that metadevice or volume must be unique throughout the cluster.
Node lists - To ensure high availability of a disk device group, make its node lists of potential masters and its failback policy identical to any associated resource group. Or, if a scalable resource group uses more nodes than its associated disk device group, make the scalable resource group's node list a superset of the disk device group's node list. Refer to the resource group planning information in Sun Cluster 3.0 Data Services Installation and Configuration Guide for information about node lists.
Multiported disks - You must connect, or port, all disks used to construct a device group within the cluster to all of the nodes configured in the node list for that device group. Solstice DiskSuite software automatically checks for this when disks are added to a diskset. However, configured VxVM disk groups do not have an association to any particular set of nodes. In addition, when you use the cluster software to register Solstice DiskSuite disksets, VxVM disk groups, or individual sets of global devices as global device groups, only limited connectivity checking can be performed.
Hot spare disks - You can use hot spare disks to increase availability, but they are not required.
Refer to your volume manager documentation for disk layout recommendations and any additional restrictions.
Consider the following points when planning Solstice DiskSuite configurations.
Mediators - Each diskset configured with exactly two disk strings and mastered by exactly two nodes must have Solstice DiskSuite mediators configured for the diskset. A disk string consists of a disk enclosure, its physical disks, cables from the enclosure to the node(s), and the interface adapter cards. You must configure each diskset with exactly two nodes acting as mediator hosts. You must use the same two nodes for all disksets requiring mediators and those two nodes must master those disksets. Mediators cannot be configured for disksets that do not meet the two-string and two-host requirements. See the mediator(7) man page for details.
/kernel/drv/md.conf settings - All metadevices used by each diskset are created in advance, at reconfiguration boot time, based on configuration parameters found in the /kernel/drv/md.conf file. The fields in the md.conf file are described in the Solstice DiskSuite documentation. You must modify the nmd and md_nsets fields as follows to support a Sun Cluster configuration.
nmd - The nmd field defines the number of metadevices created for each diskset. You must set the value of nmd to the predicted largest number of metadevices used by any one of the disksets in the cluster. For example, if a cluster uses 10 metadevices in its first 15 disksets, but 1000 metadevices in the 16th diskset, you must set the value of nmd to at least 1000. The maximum number of metadevices allowed per diskset is 8192.
md_nsets - The md_nsets field defines the total number of disksets that can be created for a system to meet the needs of the entire cluster. You must set the value of md_nsets to the expected number of disksets in the cluster, plus one to allow Solstice DiskSuite software to manage the private disks on the local host (that is, those metadevices that are not in the local diskset). The maximum number of disksets allowed per cluster is 32.
Set these fields at installation time to allow for all predicted future expansion of the cluster. Increasing these values after the cluster is in production is time consuming because it requires a reconfiguration reboot for each node. Raising these values later also increases the possibility of inadequate space allocation in the root (/) file system to create all of the requested devices.
All cluster nodes must have identical /kernel/drv/md.conf files, regardless of the number of disksets served by each node. Failure to follow this guideline can result in serious Solstice DiskSuite errors and possible loss of data.
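For example, a cluster that is expected to grow to 15 disksets with at most 500 metadevices in any one diskset might use settings such as the following on every node. The values are illustrative, and the exact surrounding syntax of the md.conf file can vary by Solaris release, so treat this only as a sketch.

```
# /kernel/drv/md.conf (excerpt) -- must be identical on every cluster node
# md_nsets = expected disksets (15) + 1 for the local diskset
# nmd      = largest number of metadevices expected in any one diskset
name="md" parent="pseudo" nmd=500 md_nsets=16;
```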
Consider the following points when planning VERITAS Volume Manager (VxVM) configurations.
Root disk group - You must create a default root disk group (rootdg) on each node. The rootdg disk group can be created on the following disks.
The root disk, which must be encapsulated
One or more local non-root disks, which can be encapsulated or initialized
A combination of root and local non-root disks
The rootdg disk group must be local to the node.
Encapsulation - Disks to be encapsulated must have two disk-slice table entries free.
Number of volumes - Estimate the maximum number of volumes any given disk device group will use at the time the disk device group is created.
If the number of volumes is less than 1000, you can use default minor numbering.
If the number of volumes is 1000 or greater, you must carefully plan the way in which minor numbers are assigned to disk device group volumes. No two disk device groups can have overlapping minor number assignments (see the sketch after this list).
Dirty Region Logging - Using Dirty Region Logging (DRL) is highly recommended but not required. Using DRL decreases volume recovery time after a node failure. Using DRL might decrease I/O throughput.
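If two disk device groups do end up with overlapping minor numbers, you can inspect and reassign them. The following is a sketch, assuming a hypothetical disk group named appdg:

```
# List the minor numbers currently used by the volumes in the disk group
ls -l /dev/vx/dsk/appdg

# Give the disk group a new base minor number so that its volumes no
# longer overlap with those of another disk device group
vxdg reminor appdg 2000
```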
Logging is required for cluster file systems. Sun Cluster supports the following logging file systems.
Solstice DiskSuite trans-metadevice UNIX file system (UFS) logging
Solaris UFS logging
For information about Solstice DiskSuite trans-metadevice UFS logging, refer to your Solstice DiskSuite documentation. For information about Solaris UFS logging, refer to the mount_ufs(1M) man page and Solaris Transition Guide.
The following table lists the logging file systems supported by each volume manager.
Table 1-4 Supported File-System Logging Matrix
| Volume Manager | Supported File-System Logging |
|---|---|
| Solstice DiskSuite | Solstice DiskSuite trans-metadevice UFS logging, Solaris UFS logging |
| VERITAS Volume Manager | Solaris UFS logging |
Consider the following points when choosing between Solaris UFS logging and Solstice DiskSuite trans-metadevice UFS logging for your Solstice DiskSuite volume manager.
Solaris UFS log size - Solaris UFS logging always allocates the log from free space on the UFS file system, and the size of the log depends on the size of the file system.
On file systems of less than 1 Gbyte, the log occupies 1 Mbyte.
On file systems of 1 Gbyte or greater, the log occupies 1 Mbyte per Gbyte of file system, up to a maximum of 64 Mbytes.
Log metadevice - With Solstice DiskSuite trans-metadevice UFS logging, the trans device used for logging is itself a metadevice, and the log is yet another metadevice that you can mirror and stripe. Furthermore, with Solstice DiskSuite software you can create a logging file system of up to 1 Tbyte.
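In practice, the two logging choices look like the following. The metadevice names, slice names, and mount point are hypothetical, and the commands are a sketch only.

```
# Solaris UFS logging: no separate log device; enable it with the
# "logging" mount option, typically in the /etc/vfstab entry
mount -o global,logging /global/nfs

# Solstice DiskSuite trans-metadevice UFS logging: the trans device d10
# combines a master metadevice (d11) with a separate log metadevice (d12),
# each of which can itself be mirrored or striped
metainit d11 1 1 c1t0d0s0
metainit d12 1 1 c2t0d0s6
metainit d10 -t d11 d12
```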
This section provides guidelines for planning the mirroring of your cluster configuration.
Mirroring all multihost disks in a Sun Cluster configuration enables the configuration to tolerate single-disk failures. Sun Cluster software requires that you mirror all multihost disks across disk expansion units.
Consider the following points when mirroring multihost disks.
Separate disk expansion units - Each submirror of a given mirror or plex should reside in a different multihost disk expansion unit (see the sketch after this list).
Disk space - Mirroring doubles the amount of necessary disk space.
Three-way mirroring - Solstice DiskSuite software and VERITAS Volume Manager (VxVM) support three-way mirroring. However, Sun Cluster requires only two-way mirroring.
Number of metadevices - Under Solstice DiskSuite software, mirrors consist of other metadevices such as concatenations or stripes. Large configurations might contain a large number of metadevices. For example, seven metadevices are created for each logging UFS file system.
Differing disk sizes - If you mirror to a disk of a different size, your mirror capacity is limited to the size of the smallest submirror or plex.
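For example, with Solstice DiskSuite software, a two-way mirror whose submirrors reside in different disk expansion units might be built as follows. The diskset, metadevice, and disk names are hypothetical, and the commands are a sketch only.

```
# Submirror d21 uses a disk in the first expansion unit (controller c1);
# submirror d22 uses a disk in the second expansion unit (controller c2)
metainit -s nfsset d21 1 1 c1t0d0s0
metainit -s nfsset d22 1 1 c2t0d0s0

# Create the mirror with the first submirror, then attach the second
metainit -s nfsset d20 -m d21
metattach -s nfsset d20 d22
```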
For more information about multihost disks, refer to Sun Cluster 3.0 Concepts.
For maximum availability, you should mirror root (/), /usr, /var, /opt, and swap on the local disks. Under VxVM, you encapsulate the root disk and mirror the generated subdisks. However, mirroring the root disk is not a requirement of Sun Cluster.
Before deciding whether to mirror the root disk, consider the risks, complexity, cost, and service time for the various alternatives concerning the root disk. There is no single mirroring strategy that works for all configurations. You might want to consider your local Enterprise Services representative's preferred solution when deciding whether to mirror root.
Refer to your volume manager documentation and to either Appendix A, Configuring Solstice DiskSuite Software or Appendix B, Configuring VERITAS Volume Manager for instructions on mirroring the root disk.
Consider the following issues when deciding whether to mirror the root disk.
Complexity - Mirroring the root disk adds complexity to system administration and complicates booting in single-user mode.
Backups - Regardless of whether or not you mirror the root disk, you also should perform regular backups of root. Mirroring alone does not protect against administrative errors. Only a backup plan enables you to restore files that have been accidentally altered or deleted.
Quorum - Under Solstice DiskSuite software, in failure scenarios in which metadevice state database quorum is lost, you cannot reboot the system until maintenance is performed. Refer to the Solstice DiskSuite documentation for information about the metadevice state database and state database replicas.
Separate controllers - Highest availability includes mirroring the root disk on a separate controller.
Boot disk - You can set up the mirror to be a bootable root disk so that you can boot from the mirror if the primary boot disk fails.
Secondary root disk - With a mirrored root disk, the primary root disk can fail but work can continue on the secondary (mirror) root disk. At a later point, the primary root disk might return to service (perhaps after a power cycle or transient I/O errors) and subsequent boots are performed by using the primary root disk specified in the OpenBoot PROM boot-device field. In this situation, no manual repair task occurs, but the drive starts working well enough to boot. Note that a Solstice DiskSuite resync does occur. A resync requires a manual step when the drive is returned to service.
If changes were made to any files on the secondary (mirror) root disk, they would not be reflected on the primary root disk during boot time (causing a stale submirror). For example, changes to the /etc/system file would be lost. Some Solstice DiskSuite administrative commands might have changed the /etc/system file while the primary root disk was out of service.
The boot program does not check whether it is booting from a mirror or an underlying physical device, and the mirroring becomes active partway through the boot process (after the metadevices are loaded). Before this point, the system is vulnerable to stale submirror problems.
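If you do mirror the root disk and want to be able to boot from the mirror when the primary boot disk fails, you can record an alternate boot path in the OpenBoot PROM. The device path below is entirely hypothetical; take the real path from your own hardware.

```
# At the OpenBoot PROM ok prompt, create an alias for the root mirror disk
ok nvalias rootmirror /pci@1f,4000/scsi@3/disk@1,0:a

# Try the primary boot device first, then the mirror
ok setenv boot-device disk rootmirror
```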