Sun Cluster Software Installation Guide for Solaris OS

Chapter 1 Planning the Sun Cluster Configuration

This chapter provides planning information and guidelines specific to a Sun Cluster 3.2 11/09 configuration.

The following overview information is in this chapter:

  • Finding Sun Cluster Installation Tasks
  • Planning the Solaris OS
  • Planning the Sun Cluster Environment
  • Planning the Global Devices, Device Groups, and Cluster File Systems
  • Planning Volume Management

Finding Sun Cluster Installation Tasks

The following table shows where to find instructions for various installation tasks for Sun Cluster software installation and the order in which you should perform the tasks.

Table 1–1 Sun Cluster Software Installation Task Information

Task: Set up cluster hardware.
Instructions: Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS; documentation that shipped with your server and storage devices.

Task: Plan global-cluster software installation.
Instructions: Chapter 1, Planning the Sun Cluster Configuration; Installation and Configuration Worksheets.

Task: Install software packages. Optionally, install and configure Sun QFS software.
Instructions: Installing the Software; Using SAM-QFS With Sun Cluster.

Task: Establish a new global cluster or a new global-cluster node.
Instructions: Establishing a New Global Cluster or New Global-Cluster Node.

Task: Configure Solaris Volume Manager software.
Instructions: Configuring Solaris Volume Manager Software; Solaris Volume Manager documentation.

Task: Install and configure Veritas Volume Manager (VxVM) software.
Instructions: Installing and Configuring VxVM Software; VxVM documentation.

Task: Configure cluster file systems, if used.
Instructions: How to Create Cluster File Systems.

Task: (Optional) On the Solaris 10 OS, create non-global zones.
Instructions: Configuring a Non-Global Zone on a Global-Cluster Node.

Task: (Optional) On the Solaris 10 OS, create zone clusters.
Instructions: Configuring a Zone Cluster.

Task: (Optional) SPARC: Install and configure the Sun Cluster module to Sun Management Center.
Instructions: SPARC: Installing the Sun Cluster Module for Sun Management Center; Sun Management Center documentation.

Task: Plan, install, and configure resource groups and data services. Create highly available local file systems, if used.
Instructions: Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

Task: Develop custom data services.
Instructions: Sun Cluster Data Services Developer’s Guide for Solaris OS.

Planning the Solaris OS

This section provides the following guidelines for planning Solaris software installation in a cluster configuration.

For more information about Solaris software, see your Solaris installation documentation.

Guidelines for Selecting Your Solaris Installation Method

You can install Solaris software from a local DVD-ROM or from a network installation server by using the JumpStart™ installation method. In addition, Sun Cluster software provides a custom method for installing both the Solaris OS and Sun Cluster software by using the JumpStart installation method. If you are installing several cluster nodes, consider a network installation.

See How to Install Solaris and Sun Cluster Software (JumpStart) for details about the scinstall JumpStart installation method. See your Solaris installation documentation for details about standard Solaris installation methods.

Solaris OS Feature Restrictions

Consider the following points when you plan the use of the Solaris OS in a Sun Cluster configuration:

Solaris Software Group Considerations

Sun Cluster 3.2 11/09 software requires at least the End User Solaris Software Group (SUNWCuser). However, other components of your cluster configuration might have their own Solaris software requirements as well. Consider the following information when you decide which Solaris software group you are installing.


Tip –

To avoid the need to manually install Solaris software packages, install the Entire Solaris Software Group Plus OEM Support.


System Disk Partitions

Add this information to the appropriate Local File System Layout Worksheet.

When you install the Solaris OS, ensure that you create the required Sun Cluster partitions and that all partitions meet minimum space requirements.

To meet these requirements, you must customize the partitioning if you are performing interactive installation of the Solaris OS.

See the following guidelines for additional partition planning information:

Guidelines for the Root (/) File System

As with any other system running the Solaris OS, you can configure the root (/), /var, /usr, and /opt directories as separate file systems. Or, you can include all the directories in the root (/) file system.

The following describes the software contents of the root (/), /var, /usr, and /opt directories in a Sun Cluster configuration. Consider this information when you plan your partitioning scheme.

Guidelines for the /globaldevices File System

Sun Cluster software offers two choices of locations to host the global-devices namespace:

This section describes the guidelines for using a dedicated partition. This information does not apply if you instead host the global-devices namespace on a lofi device.

The /globaldevices file system is usually located on your root disk. However, if you locate the global-devices file system on different storage, such as a Logical Volume Manager volume, that storage must not be part of a Solaris Volume Manager shared disk set or part of a VxVM disk group other than a root disk group. This file system is later mounted as a UFS cluster file system. Name this file system /globaldevices, which is the default name that is recognized by the scinstall(1M) command.


Note –

No file-system type other than UFS is valid for the global-devices file system. Do not attempt to change the file-system type after the global-devices file system is created.

However, a UFS global-devices file system can coexist on a node with other root file systems that use ZFS.


The scinstall command later renames the file system /global/.devices/node@nodeid, where nodeid represents the number that is assigned to a Solaris host when it becomes a global-cluster member. The original /globaldevices mount point is removed.

The /globaldevices file system must have ample space and ample inode capacity for creating both block special devices and character special devices. This guideline is especially important if a large number of disks are in the cluster. A file system size of 512 Mbytes should suffice for most cluster configurations.
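For reference, a dedicated /globaldevices partition appears in /etc/vfstab as an ordinary UFS file system before you run the scinstall command. The following entry is only a sketch; the device name c0t0d0s3 is an assumption for illustration and depends on your disk layout.

/dev/dsk/c0t0d0s3   /dev/rdsk/c0t0d0s3   /globaldevices   ufs   2   yes   -

After the node becomes a global-cluster member, scinstall changes this entry so that the slice is mounted at /global/.devices/node@nodeid with the global mount option.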

Volume Manager Requirements

If you use Solaris Volume Manager software, you must set aside a slice on the root disk for use in creating the state database replica. Specifically, set aside a slice for this purpose on each local disk. But, if you have only one local disk on a Solaris host, you might need to create three state database replicas in the same slice for Solaris Volume Manager software to function properly. See your Solaris Volume Manager documentation for more information.
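As a hedged illustration of the single-disk case, the following Solaris Volume Manager command creates three state database replicas in one slice; the slice name c0t0d0s7 is an assumed placeholder.

phys-schost# metadb -af -c 3 c0t0d0s7

On hosts that have more than one local disk, spread the replicas across disks instead of concentrating them in a single slice.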

If you use Veritas Volume Manager (VxVM) and you intend to encapsulate the root disk, you need to have two unused slices that are available for use by VxVM. Additionally, you need to have some additional unassigned free space at either the beginning or the end of the disk. See your VxVM documentation for more information about root disk encapsulation.

Example – Sample File-System Allocations

Table 1–2 shows a partitioning scheme for a Solaris host that has less than 750 Mbytes of physical memory. This scheme is to be installed with the End User Solaris Software Group, Sun Cluster software, and the Sun Cluster HA for NFS data service. The last slice on the disk, slice 7, is allocated with a small amount of space for volume-manager use.

This layout allows for the use of either Solaris Volume Manager software or VxVM software. If you use Solaris Volume Manager software, you use slice 7 for the state database replica. If you use VxVM, you later free slice 7 by assigning the slice a zero length. This layout provides the necessary two free slices, 4 and 7, as well as provides for unused space at the end of the disk.

Table 1–2 Example File-System Allocation

Slice 0, / (6.75GB) – Remaining free space on the disk after allocating space to slices 1 through 7. Used for the Solaris OS, Sun Cluster software, data-services software, volume-manager software, Sun Management Center agent and Sun Cluster module agent packages, root file systems, and database and application software.

Slice 1, swap (1GB) – 512 Mbytes for the Solaris OS and 512 Mbytes for Sun Cluster software.

Slice 2, overlap (8.43GB) – The entire disk.

Slice 3, /globaldevices (512MB) – The Sun Cluster software later assigns this slice a different mount point and mounts the slice as a cluster file system. On the Solaris 10 OS, if you choose to use a lofi device instead of a dedicated partition, leave slice 3 as Unused.

Slice 4, unused – Available as a free slice for encapsulating the root disk under VxVM.

Slice 5, unused.

Slice 6, unused.

Slice 7, volume manager (20MB) – Used by Solaris Volume Manager software for the state database replica, or used by VxVM for installation after you free the slice.

Guidelines for Non-Global Zones in a Global Cluster

For information about the purpose and function of Solaris 10 zones in a cluster, see Support for Solaris Zones in Sun Cluster Concepts Guide for Solaris OS.

For guidelines about configuring a cluster of non-global zones, see Zone Clusters.

Consider the following points when you create a Solaris 10 non-global zone, simply referred to as a zone, on a global-cluster node.

SPARC: Guidelines for Sun Logical Domains in a Cluster

Consider the following points when you create a Sun Logical Domains (LDoms) I/O domain or guest domain on a physically clustered machine that is SPARC hypervisor capable:

For more information about Sun Logical Domains, see the Logical Domains (LDoms) 1.0.3 Administration Guide.

Planning the Sun Cluster Environment

This section provides guidelines for planning and preparing the following components for Sun Cluster software installation and configuration:

For detailed information about Sun Cluster components, see the Sun Cluster Overview for Solaris OS and the Sun Cluster Concepts Guide for Solaris OS.

Licensing

Ensure that you have available all necessary license certificates before you begin software installation. Sun Cluster software does not require a license certificate, but each node installed with Sun Cluster software must be covered under your Sun Cluster software license agreement.

For licensing requirements for volume-manager software and applications software, see the installation documentation for those products.

Software Patches

After installing each software product, you must also install any required patches. For proper cluster operation, ensure that all cluster nodes maintain the same patch level.
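One simple way to compare patch levels is to list the installed patches on each node and compare the output, for example with diff. The node name shown is a placeholder.

phys-schost# showrev -p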

Public-Network IP Addresses

For information about the use of public networks by the cluster, see Public Network Adapters and IP Network Multipathing in Sun Cluster Concepts Guide for Solaris OS.

You must set up a number of public-network IP addresses for various Sun Cluster components, depending on your cluster configuration. Each Solaris host in the cluster configuration must have at least one public-network connection to the same set of public subnets.

The following table lists the components that need public-network IP addresses assigned. Add these IP addresses to the following locations:

Table 1–3 Sun Cluster Components That Use Public-Network IP Addresses

Administrative console – 1 IP address per subnet.

Global-cluster nodes – 1 IP address per node, per subnet.

Zone-cluster nodes – 1 IP address per node, per subnet.

Domain console network interface (Sun Fire™ 15000) – 1 IP address per domain.

(Optional) Non-global zones – 1 IP address per subnet.

Console-access device – 1 IP address.

Logical addresses – 1 IP address per logical host resource, per subnet.
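For planning purposes, the following /etc/inet/hosts fragment sketches how such addresses might be recorded on a node. All hostnames and addresses shown are hypothetical examples; record the authoritative entries in whatever naming service your site uses.

192.168.10.11   phys-schost-1      # global-cluster node
192.168.10.12   phys-schost-2      # global-cluster node
192.168.10.20   schost-lh-nfs      # logical address for a data service
192.168.10.30   schost-console     # console-access device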

For more information about planning IP addresses, see Chapter 3, Planning Your TCP/IP Network (Task), in System Administration Guide: IP Services (Solaris 9) or Chapter 2, Planning Your TCP/IP Network (Tasks), in System Administration Guide: IP Services (Solaris 10).

Console-Access Devices

You must have console access to all cluster nodes. If you install Cluster Control Panel software on an administrative console, you must provide the hostname and port number of the console-access device that is used to communicate with the cluster nodes.

For more information about console access, see the Sun Cluster Concepts Guide for Solaris OS.

Alternatively, if you connect an administrative console directly to cluster nodes or through a management network, you instead provide the hostname of each global-cluster node and the serial port number that each node uses to connect to the administrative console or the management network.

Logical Addresses

Each data-service resource group that uses a logical address must have a hostname specified for each public network from which the logical address can be accessed.
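As a hedged sketch of how such a hostname is later used, the following command creates a logical hostname resource in a resource group. The resource-group name, hostname, and resource name are assumptions for illustration.

phys-schost# clreslogicalhostname create -g nfs-rg -h schost-lh-nfs nfs-lh-rs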

For more information, see the Sun Cluster Data Services Planning and Administration Guide for Solaris OS. For additional information about data services and resources, also see the Sun Cluster Overview for Solaris OS and the Sun Cluster Concepts Guide for Solaris OS.

Public Networks

Public networks communicate outside the cluster. Consider the following points when you plan your public-network configuration:

For more information about public-network interfaces, see Sun Cluster Concepts Guide for Solaris OS.

Quorum Servers

You can use Sun Cluster Quorum Server software to configure a machine as a quorum server and then configure the quorum server as your cluster's quorum device. You can use a quorum server instead of or in addition to shared disks and NAS filers.

Consider the following points when you plan the use of a quorum server in a Sun Cluster configuration.

NFS Guidelines

Consider the following points when you plan the use of Network File System (NFS) in a Sun Cluster configuration.

Service Restrictions

Observe the following service restrictions for Sun Cluster configurations:

Network Time Protocol (NTP)

Synchronization – The primary requirement when you configure NTP, or any time synchronization facility within the cluster, is that all cluster nodes must be synchronized to the same time.

Accuracy – Consider accuracy of time on individual nodes to be of secondary importance to the synchronization of time among nodes. You are free to configure NTP as best meets your individual needs if this basic requirement for synchronization is met.

Error messages about nonexistent nodes – Unless you have installed your own /etc/inet/ntp.conf file, the scinstall command installs a default ntp.conf file for you. The default file is shipped with references to the maximum number of nodes. Therefore, the xntpd(1M) daemon might issue error messages regarding some of these references at boot time. You can safely ignore these messages. See How to Configure Network Time Protocol (NTP) for information about how to suppress these messages under otherwise normal cluster conditions.
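For example, the default ntp.conf file typically contains peer entries for private hostnames up to the maximum supported node count, similar to the following sketch; the exact contents vary by release. Entries for node IDs that do not exist in your cluster are the source of the harmless boot-time messages and can be removed.

peer clusternode1-priv prefer
peer clusternode2-priv
peer clusternode3-priv
# ... additional entries up to the maximum supported node count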

See the Sun Cluster Concepts Guide for Solaris OS for further information about cluster time. See the /etc/inet/ntp.cluster template file for additional guidelines about how to configure NTP for a Sun Cluster configuration.

Sun Cluster Configurable Components

This section provides guidelines for the following Sun Cluster components that you configure:

Add this information to the appropriate configuration planning worksheet.

Global-Cluster Name

Specify a name for the global cluster during Sun Cluster configuration. The global cluster name should be unique throughout the enterprise.

For information about naming a zone cluster, see Zone Clusters.

Global-Cluster Voting-Node Names

The name of a voting node in a global cluster is the same name that you assign to the physical or virtual host when you install it with the Solaris OS. See the hosts(4) man page for information about naming requirements.

In single-host cluster installations, the default cluster name is the name of the voting node.

During Sun Cluster configuration, you specify the names of all voting nodes that you are installing in the global cluster.

For information about node names in a zone cluster, see Zone Clusters.

Zone Names

On Solaris 10 OS releases that support Solaris brands, a non-global zone of brand native is a valid potential node of a resource-group node list. Use the naming convention nodename:zonename to specify a non-global zone to a Sun Cluster command.

To specify the global zone, you need to specify only the voting-node name.
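For example, the following hedged command creates a resource group whose node list contains a non-global zone named zone1 on each of two hypothetical hosts; the host and zone names are assumptions for illustration.

phys-schost# clresourcegroup create -n phys-schost-1:zone1,phys-schost-2:zone1 nfs-rg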

For information about a cluster of non-global zones, see Zone Clusters.

Private Network


Note –

You do not need to configure a private network for a single-host global cluster. The scinstall utility automatically assigns the default private-network address and netmask, even though a private network is not used by the cluster.


Sun Cluster software uses the private network for internal communication among nodes and among non-global zones that are managed by Sun Cluster software. A Sun Cluster configuration requires at least two connections to the cluster interconnect on the private network. When you configure Sun Cluster software on the first node of the cluster, you specify the private-network address and netmask in one of the following ways:

If you choose to specify a different netmask, the scinstall utility prompts you for the number of nodes and the number of private networks that you want the IP address range to support. On the Solaris 10 OS, the utility also prompts you for the number of zone clusters that you want to support. The number of global-cluster nodes that you specify should also include the expected number of unclustered non-global zones that will use the private network.

The utility calculates the netmask for the minimum IP address range that will support the number of nodes, zone clusters, and private networks that you specified. The calculated netmask might support more than the supplied number of nodes, including non-global zones, zone clusters, and private networks. The scinstall utility also calculates a second netmask that would be the minimum to support twice the number of nodes, zone clusters, and private networks. This second netmask would enable the cluster to accommodate future growth without the need to reconfigure the IP address range.

The utility then asks you what netmask to choose. You can specify either of the calculated netmasks or provide a different one. The netmask that you specify must minimally support the number of nodes and private networks that you specified to the utility.


Note –

Changing the cluster private IP-address range might be necessary to support the addition of voting nodes, non-global zones, zone clusters, or private networks.

To change the private-network address and netmask after the cluster is established, see How to Change the Private Network Address or Address Range of an Existing Cluster in Sun Cluster System Administration Guide for Solaris OS. You must bring down the cluster to make these changes.

However, on the Solaris 10 OS the cluster can remain in cluster mode if you use the cluster set-netprops command to change only the netmask. For any zone cluster that is already configured in the cluster, the private IP subnets and the corresponding private IP addresses that are allocated for that zone cluster will also be updated.


If you specify a private-network address other than the default, the address must meet the following requirements:

See Planning Your TCP/IP Network (Tasks), in System Administration Guide: IP Services (Solaris 9 or Solaris 10) for more information about private networks.
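The following sketch shows the netmask-only change that the preceding note describes for an established Solaris 10 cluster. The netmask value is an assumption for illustration; confirm the exact property names against the cluster(1CL) man page.

phys-schost# cluster set-netprops -p private_netmask=255.255.248.0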

Private Hostnames

The private hostname is the name that is used for internode communication over the private-network interface. Private hostnames are automatically created during Sun Cluster configuration of a global cluster or a zone cluster. These private hostnames follow the naming convention clusternodenodeid-priv, where nodeid is the numeral of the internal node ID. For example, the voting node that is assigned node ID 3 has the private hostname clusternode3-priv. During Sun Cluster configuration, the node ID number is automatically assigned to each voting node when the node becomes a cluster member. A voting node of the global cluster and a node of a zone cluster can both have the same private hostname, but each hostname resolves to a different private-network IP address.

After a global cluster is configured, you can rename its private hostnames by using the clsetup(1CL) utility. Currently, you cannot rename the private hostname of a zone-cluster node.

For the Solaris 10 OS, the creation of a private hostname for a non-global zone is optional. There is no required naming convention for the private hostname of a non-global zone.

Cluster Interconnect

The cluster interconnects provide the hardware pathways for private-network communication between cluster nodes. Each interconnect consists of a cable that is connected in one of the following ways:

For more information about the purpose and function of the cluster interconnect, see Cluster Interconnect in Sun Cluster Concepts Guide for Solaris OS.


Note –

You do not need to configure a cluster interconnect for a single-host cluster. However, if you anticipate eventually adding more voting nodes to a single-host cluster configuration, you might want to configure the cluster interconnect for future use.


During Sun Cluster configuration, you specify configuration information for one or two cluster interconnects.

You can configure additional cluster interconnects, up to six interconnects total, after the cluster is established by using the clsetup(1CL) utility.

For guidelines about cluster interconnect hardware, see Interconnect Requirements and Restrictions in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS. For general information about the cluster interconnect, see Cluster-Interconnect Components in Sun Cluster Overview for Solaris OS and Sun Cluster Concepts Guide for Solaris OS.

Transport Adapters

For the transport adapters, such as ports on network interfaces, specify the transport adapter names and transport type. If your configuration is a two-host cluster, you also specify whether your interconnect is a point-to-point connection (adapter to adapter) or uses a transport switch.

Consider the following guidelines and restrictions:

See the scconf_trans_adap_*(1M) family of man pages for information about a specific transport adapter.

Transport Switches

If you use transport switches, such as a network switch, specify a transport switch name for each interconnect. You can use the default name switchN, where N is a number that is automatically assigned during configuration, or create another name.

Also specify the switch port name or accept the default name. The default port name is the same as the internal node ID number of the Solaris host that hosts the adapter end of the cable. However, you cannot use the default port name for certain adapter types, such as SCI-PCI.


Note –

Clusters with three or more voting nodes must use transport switches. Direct connection between voting cluster nodes is supported only for two-host clusters.


If your two-host cluster is direct connected, you can still specify a transport switch for the interconnect.


Tip –

If you specify a transport switch, you can more easily add another voting node to the cluster in the future.


Global Fencing

Fencing is a mechanism that is used by the cluster to protect the data integrity of a shared disk during split-brain situations. By default, the scinstall utility in Typical Mode leaves global fencing enabled, and each shared disk in the configuration uses the default global fencing setting of pathcount. With the pathcount setting, the fencing protocol for each shared disk is chosen based on the number of DID paths that are attached to the disk.

In Custom Mode, the scinstall utility prompts you whether to disable global fencing. For most situations, respond No to keep global fencing enabled. However, you can disable global fencing to support the following situations:


Caution –

If you disable fencing in situations other than the following, your data might be vulnerable to corruption during application failover. Examine this possibility of data corruption carefully when you consider turning off fencing.


If you disable global fencing during cluster configuration, fencing is turned off for all shared disks in the cluster. After the cluster is configured, you can change the global fencing protocol or override the fencing protocol of individual shared disks. However, to change the fencing protocol of a quorum device, you must first unconfigure the quorum device. Then set the new fencing protocol of the disk and reconfigure it as a quorum device.
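As a hedged illustration of that sequence, the following commands unconfigure a quorum device, change its fencing protocol, and reconfigure it as a quorum device. The device name d3 and the protocol value are assumptions; the exact property names and values are documented in the cldevice(1CL) and clquorum(1CL) man pages.

phys-schost# clquorum remove d3
phys-schost# cldevice set -p default_fencing=scsi3 d3
phys-schost# clquorum add d3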

For more information about fencing behavior, see Failfast Mechanism in Sun Cluster Concepts Guide for Solaris OS. For more information about setting the fencing protocol of individual shared disks, see the cldevice(1CL) man page. For more information about the global fencing setting, see the cluster(1CL) man page.

Quorum Devices

Sun Cluster configurations use quorum devices to maintain data and resource integrity. If the cluster temporarily loses connection to a voting node, the quorum device prevents amnesia or split-brain problems when the voting cluster node attempts to rejoin the cluster. For more information about the purpose and function of quorum devices, see Quorum and Quorum Devices in Sun Cluster Concepts Guide for Solaris OS.

During Sun Cluster installation of a two-host cluster, you can choose to let the scinstall utility automatically configure an available shared disk in the configuration as a quorum device. Shared disks include any Sun NAS device that is configured for use as a shared disk. The scinstall utility assumes that all available shared disks are supported as quorum devices.

If you want to use a quorum server or a Network Appliance NAS device as the quorum device, you configure it after scinstall processing is completed.

After installation, you can also configure additional quorum devices by using the clsetup(1CL) utility.
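For example, the clquorum(1CL) command can add a shared disk as a quorum device noninteractively and then display quorum status. The DID device name d4 below is an assumption for illustration.

phys-schost# clquorum add d4
phys-schost# clquorum status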


Note –

You do not need to configure quorum devices for a single-host cluster.


If your cluster configuration includes third-party shared storage devices that are not supported for use as quorum devices, you must use the clsetup utility to configure quorum manually.

Consider the following points when you plan quorum devices.

For more information about quorum devices, see Quorum and Quorum Devices in Sun Cluster Concepts Guide for Solaris OS and Quorum Devices in Sun Cluster Overview for Solaris OS.

Zone Clusters

On the Solaris 10 OS, a zone cluster is a cluster of non-global zones. All nodes of a zone cluster are configured as non-global zones of the cluster brand. No other brand type is permitted in a zone cluster. You can run supported services on the zone cluster similar to a global cluster, with the isolation that is provided by Solaris zones.

Consider the following points when you plan the creation of a zone cluster.

Global-Cluster Requirements and Guidelines

Zone-Cluster Requirements and Guidelines

Planning the Global Devices, Device Groups, and Cluster File Systems

This section provides the following guidelines for planning global devices and for planning cluster file systems:

Global Devices

For information about the purpose and function of global devices, see Shared Devices, Local Devices, and Device Groups in Sun Cluster Overview for Solaris OS and Global Devices in Sun Cluster Concepts Guide for Solaris OS.

Sun Cluster software does not require any specific disk layout or file system size. Consider the following points when you plan your layout for global devices.

Device Groups

For information about the purpose and function of device groups, see Shared Devices, Local Devices, and Device Groups in Sun Cluster Overview for Solaris OS and Device Groups in Sun Cluster Concepts Guide for Solaris OS.

Add this planning information to the Device Group Configurations Worksheet.

Consider the following points when you plan device groups.

Cluster File Systems

For information about the purpose and function of cluster file systems, see Cluster File Systems in Sun Cluster Overview for Solaris OS and Cluster File Systems in Sun Cluster Concepts Guide for Solaris OS.


Note –

You can alternatively configure highly available local file systems. This can provide better performance to support a data service with high I/O, or to permit use of certain file-system features that are not supported in a cluster file system. For more information, see Enabling Highly Available Local File Systems in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.


Consider the following points when you plan cluster file systems.

Choosing Mount Options for Cluster File Systems

This section describes requirements and restrictions for the following types of cluster file systems:


Note –

You can alternatively configure these and other types of file systems as highly available local file systems. For more information, see Enabling Highly Available Local File Systems in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.


Follow these guidelines to determine what mount options to use when you create your cluster file systems.

UFS Cluster File Systems

global (Required) – This option makes the file system globally visible to all nodes in the cluster.

logging (Required) – This option enables logging.

forcedirectio (Conditional) – This option is required only for cluster file systems that will host Oracle Real Application Clusters RDBMS data files, log files, and control files.


Note –

Oracle Real Application Clusters is supported for use only in SPARC based clusters.


onerror=panic (Required) – You do not have to explicitly specify the onerror=panic mount option in the /etc/vfstab file. This mount option is already the default value if no other onerror mount option is specified.


Note –

Only the onerror=panic mount option is supported by Sun Cluster software. Do not use the onerror=umount or onerror=lock mount options. These mount options are not supported on cluster file systems for the following reasons:

  • Use of the onerror=umount or onerror=lock mount option might cause the cluster file system to lock or become inaccessible. This condition might occur if the cluster file system experiences file corruption.

  • The onerror=umount or onerror=lock mount option might cause the cluster file system to become unmountable. This condition might thereby cause applications that use the cluster file system to hang or prevent the applications from being killed.

A node might require rebooting to recover from these states.


syncdir (Optional) – If you specify syncdir, you are guaranteed POSIX-compliant file-system behavior for the write() system call. If a write() succeeds, then this mount option ensures that sufficient space is on the disk.

If you do not specify syncdir, you see the same behavior as with UFS file systems. When you do not specify syncdir, performance of writes that allocate disk blocks, such as when appending data to a file, can significantly improve. However, in some cases, without syncdir you would not discover an out-of-space condition (ENOSPC) until you close a file.

You see ENOSPC on close only during a very short time after a failover. With syncdir, as with POSIX behavior, the out-of-space condition would be discovered before the close.
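A typical /etc/vfstab entry for a UFS cluster file system combines the required mount options. The device and mount-point names in this sketch are assumptions for illustration.

/dev/md/nfsset/dsk/d100  /dev/md/nfsset/rdsk/d100  /global/nfs  ufs  2  yes  global,logging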

See the mount_ufs(1M) man page for more information about UFS mount options.

VxFS Cluster File Systems

global (Required) – This option makes the file system globally visible to all nodes in the cluster.

log (Required) – This option enables logging.

See the VxFS mount_vxfs man page and Overview of Administering Cluster File Systems in Sun Cluster System Administration Guide for Solaris OS for more information about VxFS mount options.
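Similarly, the following is a hedged sketch of an /etc/vfstab entry for a VxFS cluster file system. The disk-group, volume, and mount-point names, as well as the fsck pass number, are illustrative assumptions.

/dev/vx/dsk/oradg/vol01  /dev/vx/rdsk/oradg/vol01  /global/oracle  vxfs  2  yes  global,log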

Mount Information for Cluster File Systems

Consider the following points when you plan mount points for cluster file systems.

Planning Volume Management

Add this planning information to the Device Group Configurations Worksheet and the Volume-Manager Configurations Worksheet. For Solaris Volume Manager, also add this planning information to the Volumes Worksheet (Solaris Volume Manager).

This section provides the following guidelines for planning volume management of your cluster configuration:

Sun Cluster software uses volume-manager software to group disks into device groups, which can then be administered as one unit. Sun Cluster software supports Solaris Volume Manager software and Veritas Volume Manager (VxVM) software that you install or use in the following ways.

Table 1–4 Supported Use of Volume Managers With Sun Cluster Software

Solaris Volume Manager – You must install Solaris Volume Manager software on all voting nodes of the cluster, regardless of whether you use VxVM on some nodes to manage disks.

SPARC: VxVM with the cluster feature – You must install and license VxVM with the cluster feature on all voting nodes of the cluster.

VxVM without the cluster feature – You are only required to install and license VxVM on those voting nodes that are attached to storage devices that VxVM manages.

Both Solaris Volume Manager and VxVM – If you install both volume managers on the same voting node, you must use Solaris Volume Manager software to manage disks that are local to each node. Local disks include the root disk. Use VxVM to manage all shared disks.

See your volume-manager documentation and Configuring Solaris Volume Manager Software or Installing and Configuring VxVM Software for instructions about how to install and configure the volume-manager software. For more information about the use of volume management in a cluster configuration, see Multihost Devices in Sun Cluster Concepts Guide for Solaris OS and Device Groups in Sun Cluster Concepts Guide for Solaris OS.

Guidelines for Volume-Manager Software

Consider the following general guidelines when you configure your disks with volume-manager software:

See your volume-manager documentation for disk layout recommendations and any additional restrictions.

Guidelines for Solaris Volume Manager Software

Consider the following points when you plan Solaris Volume Manager configurations:

Guidelines for Veritas Volume Manager Software

Consider the following points when you plan Veritas Volume Manager (VxVM) configurations.

See your VxVM installation documentation for additional information.

File-System Logging

Logging is required for UFS and VxFS cluster file systems. Sun Cluster software supports the following choices of file-system logging:

Both Solaris Volume Manager and Veritas Volume Manager support both types of file-system logging.

Mirroring Guidelines

This section provides the following guidelines for planning the mirroring of your cluster configuration:

Guidelines for Mirroring Multihost Disks

Mirroring all multihost disks in a Sun Cluster configuration enables the configuration to tolerate single-device failures. Sun Cluster software requires that you mirror all multihost disks across expansion units. You do not need to use software mirroring if the storage device provides hardware RAID as well as redundant paths to devices.

Consider the following points when you mirror multihost disks:

For more information about multihost disks, see Multihost Disk Storage in Sun Cluster Overview for Solaris OS and Sun Cluster Concepts Guide for Solaris OS.

Guidelines for Mirroring the Root Disk

Add this planning information to the Local File System Layout Worksheet.

For maximum availability, mirror root (/), /usr, /var, /opt, and swap on the local disks. Under VxVM, you encapsulate the root disk and mirror the generated subdisks. However, Sun Cluster software does not require that you mirror the root disk.

Before you decide whether to mirror the root disk, consider the risks, complexity, cost, and service time for the various alternatives that concern the root disk. No single mirroring strategy works for all configurations. You might want to consider your local Sun service representative's preferred solution when you decide whether to mirror root.

See your volume-manager documentation and Configuring Solaris Volume Manager Software or Installing and Configuring VxVM Software for instructions about how to mirror the root disk.
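If you do choose to mirror root with Solaris Volume Manager software, the overall flow resembles the following hedged sketch. The metadevice and slice names are assumptions, and the procedure in Configuring Solaris Volume Manager Software is the authoritative reference.

phys-schost# metainit -f d10 1 1 c0t0d0s0   # submirror on the existing root slice
phys-schost# metainit d20 1 1 c1t0d0s0      # submirror on the second local disk
phys-schost# metainit d0 -m d10             # one-way mirror that contains the root submirror
phys-schost# metaroot d0                    # update /etc/vfstab and /etc/system for the mirrored root
phys-schost# lockfs -fa                     # flush file-system logs before the reboot
phys-schost# init 6                         # reboot so that root mounts from the mirror
phys-schost# metattach d0 d20               # attach the second submirror after the reboot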

Consider the following points when you decide whether to mirror the root disk.