Oracle® Solaris Cluster 4.3 Software Installation Guide

Planning Volume Management

This section provides guidelines for planning volume management of your cluster configuration.

Oracle Solaris Cluster software uses volume manager software to group disks into device groups that can then be administered as one unit. You must install Solaris Volume Manager software on all nodes of the cluster.

See your volume manager documentation and Configuring Solaris Volume Manager Software for instructions about how to install and configure the volume manager software. For more information about the use of volume management in a cluster configuration, see Multihost Devices in Oracle Solaris Cluster 4.3 Concepts Guide and Device Groups in Oracle Solaris Cluster 4.3 Concepts Guide.
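
As a brief, hedged illustration, on an Oracle Solaris 11 node the volume manager can typically be installed from the configured IPS publisher and then verified; the package name shown here (storage/svm) and the node prompt are assumptions to confirm against your Oracle Solaris release.

  # Install Solaris Volume Manager from the IPS repository.
  phys-schost# pkg install storage/svm

  # Verify that the package is now installed on this node.
  phys-schost# pkg info storage/svm

Repeat the installation on each node of the cluster.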

Guidelines for Volume Manager Software

Consider the following general guidelines when you configure your disks with volume manager software:

  • Software RAID – Oracle Solaris Cluster software does not support software RAID 5.

  • Mirrored multihost disks – You must mirror all multihost disks across disk expansion units. See Guidelines for Mirroring Multihost Disks for details. You do not need to use software mirroring if the storage device provides hardware RAID as well as redundant paths to devices.

  • Mirrored root – Mirroring the ZFS root pool ensures high availability, but such mirroring is not required. See Mirroring Guidelines for guidelines to help determine whether to mirror the ZFS root pool.

  • Node lists – To ensure high availability of a device group, make its node list of potential masters and its failback policy identical to those of any associated resource group. Or, if a scalable resource group uses more nodes than its associated device group, make the scalable resource group's node list a superset of the device group's node list. See the resource group planning information in the Oracle Solaris Cluster 4.3 Data Services Planning and Administration Guide for information about node lists.

  • Multihost disks – You must connect, or port, all devices that are used to construct a device group to all of the nodes that are configured in the node list for that device group. Solaris Volume Manager software can automatically check for this connection at the time that devices are added to a disk set (see the example after this list).

  • Hot-spare disks – You can use hot-spare disks to increase availability, but they are not required.
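
As a sketch of the node-list and multihost-disk guidelines above, the following hypothetical sequence creates a Solaris Volume Manager disk set whose hosts match the node list of the associated resource group, adds shared DID devices that are connected to both nodes, and displays the resulting device group. The disk set name (oradg), node names, and DID device names are placeholders for your own configuration.

  # Create the disk set and add the nodes that can master it.
  phys-schost# metaset -s oradg -a -h phys-schost-1 phys-schost-2

  # Add shared DID devices that are connected to both nodes.
  phys-schost# metaset -s oradg -a /dev/did/rdsk/d4 /dev/did/rdsk/d5

  # Confirm the device group's node list and failback policy.
  phys-schost# cldevicegroup show oradg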

See your volume manager software documentation for disk layout recommendations and any additional restrictions.

Guidelines for Solaris Volume Manager Software

Consider the following points when you plan Solaris Volume Manager configurations:

  • Unique naming – Disk set names must be unique throughout the cluster.

  • Disk set reserved names – Do not name a disk set admin or shared.

  • Dual-string mediators – A disk string consists of a disk enclosure, its physical disks, cables from the enclosure to the host or hosts, and the interface adapter cards. Each disk set configured with exactly two disk strings and mastered by exactly two or three Oracle Solaris hosts is called a dual-string disk set. This type of disk set must have Solaris Volume Manager dual-string mediators configured. Observe the following rules when you configure dual-string mediators:

    • You must configure each disk set with two or three hosts that act as mediator hosts.

    • You must use the hosts that can master a disk set as mediators for that disk set. If you have a campus cluster, you can also configure a third node or a non-clustered host on the cluster network as a third mediator host to improve availability.

    • Mediators cannot be configured for disk sets that do not meet the two-string and two-host requirements.

    See the mediator(7D) man page for details.
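
As a minimal sketch, assuming a dual-string disk set named oradg that is mastered by phys-schost-1 and phys-schost-2, the mediator hosts could be added and checked as follows. All names are placeholders.

  # Add both hosts that master the disk set as its mediator hosts.
  phys-schost# metaset -s oradg -a -m phys-schost-1 phys-schost-2

  # Check the status of the mediator data for the disk set.
  phys-schost# medstat -s oradg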

UFS Cluster File System Logging

Logging is required for UFS cluster file systems. Oracle Solaris Cluster software supports Oracle Solaris UFS logging. See the mount_ufs(1M) man page for more information.
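
For example, a UFS cluster file system that is built on a Solaris Volume Manager metadevice might be listed in /etc/vfstab with the global and logging mount options, along the lines of the following entry. The disk set name, metadevice, and mount point are placeholders; verify the exact fields for your configuration.

  #device                  device                   mount           FS    fsck  mount    mount
  #to mount                to fsck                  point           type  pass  at boot  options
  /dev/md/oradg/dsk/d10    /dev/md/oradg/rdsk/d10   /global/oracle  ufs   2     yes      global,logging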

Mirroring Guidelines

This section provides guidelines for planning the mirroring of your cluster configuration.

Guidelines for Mirroring Multihost Disks

Mirroring all multihost disks in an Oracle Solaris Cluster configuration enables the configuration to tolerate single-device failures. Oracle Solaris Cluster software requires that you mirror all multihost disks across expansion units. You do not need to use software mirroring if the storage device provides hardware RAID as well as redundant paths to devices.

Consider the following points when you mirror multihost disks:

  • Separate disk expansion units – Each submirror of a given mirror or plex should reside in a different multihost expansion unit (see the example after this list).

  • Disk space – Mirroring doubles the amount of necessary disk space.

  • Three-way mirroring – Solaris Volume Manager software supports three-way mirroring. However, Oracle Solaris Cluster software requires only two-way mirroring.

  • Differing device sizes – If you mirror to a device of a different size, your mirror capacity is limited to the size of the smallest submirror or plex.
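
As a hedged sketch of these guidelines, the following commands build a two-way mirror in a disk set from two submirrors whose DID devices reside in different expansion units. The disk set name (oradg), metadevice names, and DID device names are placeholders.

  # Create one submirror on a device in each expansion unit.
  phys-schost# metainit -s oradg d11 1 1 /dev/did/rdsk/d3s0
  phys-schost# metainit -s oradg d12 1 1 /dev/did/rdsk/d8s0

  # Create the mirror with the first submirror, then attach the second.
  phys-schost# metainit -s oradg d10 -m d11
  phys-schost# metattach -s oradg d10 d12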

For more information about multihost disks, see Multihost Devices in Oracle Solaris Cluster 4.3 Concepts Guide.

Guidelines for Mirroring the ZFS Root Pool

Oracle Solaris ZFS is the default root file system in the Oracle Solaris release. See How to Configure a Mirrored Root Pool (SPARC or x86/VTOC) in Managing ZFS File Systems in Oracle Solaris 11.3 for instructions about how to mirror the ZFS root pool. Also see Chapter 6, Managing the ZFS Root Pool in Managing ZFS File Systems in Oracle Solaris 11.3 for information about how to manage the different root pool components.
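
As a minimal sketch, attaching a second local disk to an existing root pool creates a two-way mirror. The device names below are placeholders, and the exact naming (whole disk or slice) depends on the disk label and platform, so follow the referenced procedure for the details.

  # Attach a second disk to the root pool to form a two-way mirror.
  phys-schost# zpool attach rpool c1t0d0s0 c1t1d0s0

  # Verify that the resilver completes and that both devices are online.
  phys-schost# zpool status rpool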

For maximum availability, mirror root (/), /usr, /var, /opt, and swap on the local disks. However, Oracle Solaris Cluster software does not require that you mirror the ZFS root pool.

Consider the following points when you decide whether to mirror the ZFS root pool:

  • Boot disk – You can set up the mirror to be a bootable root pool. You can then boot from the mirror if the primary boot disk fails.

  • Backups – Regardless of whether you mirror the root pool, you should also perform regular backups of root. Mirroring alone does not protect against administrative errors. Only a backup plan enables you to restore files that have been accidentally altered or deleted.

  • Quorum devices – Do not use a disk that was configured as a quorum device to mirror a root pool (see the check after this list).

  • Separate controllers – For the highest availability, place the mirror of the root pool on a separate controller.
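
For the quorum-device guideline above, a quick, hedged check before you choose disks for the root pool mirror might look like the following. The commands only list and describe the configured quorum devices; map DID names to physical device paths as needed.

  # List the configured quorum devices; do not use any listed disk in the root pool mirror.
  phys-schost# clquorum list

  # Display details for each configured quorum device.
  phys-schost# clquorum show

  # Map DID device names to physical device paths if needed.
  phys-schost# cldevice list -v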