Sun Cluster 3.1 Software Installation Guide

Chapter 1 Planning the Sun Cluster Configuration

This chapter provides planning information and guidelines for installing a Sun Cluster configuration.

The following overview information is in this chapter:

- Where to Find Sun Cluster Installation Tasks
- Planning the Solaris Operating Environment
- Planning the Sun Cluster Environment
- Planning the Global Devices and Cluster File Systems
- Planning Volume Management

Where to Find Sun Cluster Installation Tasks

The following table shows where to find instructions for various Sun Cluster software installation tasks and the order in which you should perform them.

Table 1–1 Location of Sun Cluster Software Installation Task Information

Task: Set up cluster hardware.
Instructions: Sun Cluster 3.x Hardware Administration Manual; the documentation shipped with your server and storage devices.

Task: Plan cluster software installation.
Instructions: This chapter; “Sun Cluster Installation and Configuration Worksheets” in Sun Cluster 3.1 Release Notes.

Task: Install a new cluster or add nodes to an existing cluster: install the Solaris operating environment, Cluster Control Panel (optional), SunPlex Manager (optional), cluster framework, and data service software packages.
Instructions: Installing the Software.

Task: Install and configure Solstice DiskSuite/Solaris Volume Manager software.
Instructions: Installing and Configuring Solstice DiskSuite/Solaris Volume Manager Software; Solstice DiskSuite/Solaris Volume Manager documentation.

Task: Install and configure VERITAS Volume Manager (VxVM) software.
Instructions: Installing and Configuring VxVM Software; VxVM documentation.

Task: Configure cluster framework software and optionally install and configure Sun Management Center.
Instructions: Configuring the Cluster.

Task: Plan, install, and configure resource groups and data services.
Instructions: Sun Cluster 3.1 Data Service Planning and Administration Guide; “Sun Cluster Installation and Configuration Worksheets” in Sun Cluster 3.1 Data Service Release Notes.

Task: Develop custom data services.
Instructions: Sun Cluster 3.1 Data Services Developer's Guide.

Task: Upgrade from Sun Cluster 3.0 to Sun Cluster 3.1 software (Solaris operating environment, cluster framework, data services, and volume manager software).
Instructions: Upgrading From Sun Cluster 3.0 to Sun Cluster 3.1 Software; Installing and Configuring Solstice DiskSuite/Solaris Volume Manager Software or Installing and Configuring VxVM Software; volume manager documentation.

Planning the Solaris Operating Environment

This section provides guidelines for planning Solaris software installation in a cluster configuration. For more information about Solaris software, see the Solaris installation documentation.

Guidelines for Selecting Your Solaris Installation Method

You can install Solaris software from a local CD-ROM or from a network installation server by using the JumpStart™ installation method. In addition, Sun Cluster software provides a custom method for installing both the Solaris operating environment and Sun Cluster software by using JumpStart. If you are installing several cluster nodes, consider a network installation.

See How to Install Solaris and Sun Cluster Software (JumpStart) for details about the scinstall JumpStart installation method. See the Solaris installation documentation for details about standard Solaris installation methods.

Solaris Software Group Considerations

Sun Cluster 3.1 software requires at least the Solaris End User System Support software group. However, other components of your cluster configuration might have their own Solaris software requirements as well. Consider the following information when you decide which Solaris software group you will install.
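You can confirm which Solaris software group is installed on a node by checking the metacluster name that Solaris records during installation. A minimal check, assuming a standard Solaris installation:

    # The installed Solaris software group (metacluster) is recorded here;
    # the End User System Support group appears as CLUSTER=SUNWCuser.
    cat /var/sadm/system/admin/CLUSTER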

System Disk Partitions

Add this information to “Local File Systems With Mirrored Root Worksheet” in Sun Cluster 3.1 Release Notes or “Local File Systems with Non-Mirrored Root Worksheet” in Sun Cluster 3.1 Release Notes.

When you install the Solaris operating environment, ensure that you create the required Sun Cluster partitions and that all partitions meet minimum space requirements.

To meet these requirements, you must customize the partitioning if you are performing interactive installation of the Solaris operating environment.

See the following guidelines for additional partition planning information.

Guidelines for the Root (/) File System

As with any other system running the Solaris operating environment, you can configure the root (/), /var, /usr, and /opt directories as separate file systems, or you can include all the directories in the root (/) file system. The following describes the software contents of the root (/), /var, /usr, and /opt directories in a Sun Cluster configuration. Consider this information when you plan your partitioning scheme.

Guidelines for the swap Partition

The amount of swap space allocated for Solaris and Sun Cluster software combined must be no less than 750 Mbytes. For best results, add at least 512 Mbytes for Sun Cluster software to the amount required by the Solaris operating environment. In addition, allocate swap space for any third-party applications that you install on the node and that have their own swap requirements. See your third-party application documentation for any swap requirements.
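When you size swap, it can help to review a node's physical memory and currently configured swap. The following standard Solaris commands are one way to check; output formats vary by Solaris release.

    # Display physical memory, reported in Mbytes.
    prtconf | grep 'Memory size'
    # List configured swap devices; sizes are in 512-byte blocks.
    swap -l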

Guidelines for the /globaldevices File System

Sun Cluster software requires that you set aside a special file system on one of the local disks for use in managing global devices. This file system must be separate, as it will later be mounted as a cluster file system. Name this file system /globaldevices, which is the default name recognized by the scinstall(1M) command. The scinstall command later renames the file system /global/.devices/node@nodeid, where nodeid represents the number assigned to a node when it becomes a cluster member, and the original /globaldevices mount point is removed. The /globaldevices file system must have ample space and inode capacity for creating both block special devices and character special devices, especially if a large number of disks are in the cluster. A file system size of 512 Mbytes should be more than enough for most cluster configurations.
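For illustration, before Sun Cluster installation the /globaldevices slice appears in /etc/vfstab as an ordinary UFS file system. The disk slice name below (c0t0d0s3) is hypothetical; use the slice that you set aside on your own root disk.

    # Hypothetical /etc/vfstab entry for the /globaldevices slice, in the
    # standard field order: device to mount, device to fsck, mount point,
    # file system type, fsck pass, mount at boot, mount options.
    /dev/dsk/c0t0d0s3  /dev/rdsk/c0t0d0s3  /globaldevices  ufs  2  yes  -

After scinstall processes the node, this entry is replaced by a global mount of /global/.devices/node@nodeid.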

Volume Manager Requirements

If you use Solstice DiskSuite/Solaris Volume Manager software, you must set aside a slice on the root disk for use in creating the state database replica. Specifically, set aside a slice for this purpose on each local disk. However, if a node has only one local disk, you might need to create three state database replicas in the same slice for Solstice DiskSuite/Solaris Volume Manager software to function properly. See the Solstice DiskSuite/Solaris Volume Manager documentation for more information.
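For the single-local-disk case, the following sketch shows how the metadb(1M) command can place three replicas in one slice; the slice name c0t0d0s7 is illustrative.

    # Create three state database replicas in a single slice:
    # -a adds replicas, -f forces creation of the initial replica set,
    # -c 3 sets the number of replicas to create in the slice.
    metadb -a -f -c 3 c0t0d0s7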

If you use VxVM and you intend to encapsulate the root disk, you need two unused slices available for use by VxVM, as well as some additional unassigned free space at either the beginning or the end of the disk. See the VxVM documentation for more information about root disk encapsulation.

Example—Sample File System Allocations

Table 1–2 shows a partitioning scheme for a cluster node that has less than 750 Mbytes of physical memory. This node will be installed with the Solaris End User System Support software group, Sun Cluster software, and the Sun Cluster HA for NFS data service. The last slice on the disk, slice 7, is allocated with a small amount of space for volume manager use.

This layout allows for the use of either Solstice DiskSuite/Solaris Volume Manager software or VxVM. If you use Solstice DiskSuite/Solaris Volume Manager software, you use slice 7 for the state database replica. If you use VxVM, you later free slice 7 by assigning it a zero length. This layout provides the necessary two free slices, 4 and 7, and it provides for unused space at the end of the disk.

Table 1–2 Example File System Allocation

Slice 0, /, 6.75GB: Remaining free space on the disk after allocating space to slices 1 through 7. Used for Solaris operating environment software, Sun Cluster software, data services software, volume manager software, Sun Management Center agent and Sun Cluster module agent packages, root file systems, and database and application software.

Slice 1, swap, 1GB: 512 Mbytes for Solaris operating environment software and 512 Mbytes for Sun Cluster software.

Slice 2, overlap, 8.43GB: The entire disk.

Slice 3, /globaldevices, 512MB: The Sun Cluster software later assigns this slice a different mount point and mounts the slice as a cluster file system.

Slice 4, unused: Available as a free slice for encapsulating the root disk under VxVM.

Slice 5, unused.

Slice 6, unused.

Slice 7, volume manager, 20MB: Used by Solstice DiskSuite/Solaris Volume Manager software for the state database replica, or used by VxVM for installation after you free the slice.
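If you install the Solaris operating environment with JumpStart, a profile can express the layout in Table 1–2. The following sketch uses standard JumpStart profile syntax with the rootdisk keyword; adjust the sizes and software group to your configuration.

    # Hypothetical JumpStart profile matching Table 1-2 (sizes in Mbytes).
    install_type    initial_install
    system_type     standalone
    partitioning    explicit
    # Solaris End User System Support software group.
    cluster         SUNWCuser
    # Slice 0: root receives the remaining free space.
    filesys         rootdisk.s0  free  /
    # Slice 1: 512 Mbytes for Solaris plus 512 Mbytes for Sun Cluster.
    filesys         rootdisk.s1  1024  swap
    # Slice 3: global devices file system.
    filesys         rootdisk.s3  512   /globaldevices
    # Slice 7: reserved for volume manager use (no mount point).
    filesys         rootdisk.s7  20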

Planning the Sun Cluster Environment

This section provides guidelines for planning and preparing for Sun Cluster software installation. For detailed information about Sun Cluster components, see the Sun Cluster 3.1 Concepts Guide.

Licensing

Ensure that you have any necessary license certificates available before you begin software installation. Sun Cluster software does not require a license certificate, but each node installed with Sun Cluster software must be covered under your Sun Cluster software license agreement.

For licensing requirements for volume manager software and applications software, see the installation documentation for those products.

Software Patches

After installing each software product, you must also install any required patches. For information about current required patches, see “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes or consult your Sun service provider. See “Patching Sun Cluster Software and Firmware” in Sun Cluster 3.1 System Administration Guide for general guidelines and procedures for applying patches.
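To see which patches are already installed on a node, you can use the standard Solaris patch listing command:

    # List the patches currently installed on this node.
    showrev -p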

IP Addresses

You must set up a number of IP addresses for various Sun Cluster components, depending on your cluster configuration. Each node in the cluster configuration must have at least one public network connection to the same set of public subnets.

The following table lists the components that need IP addresses assigned to them. Add these IP addresses to any naming services that you use. Also add these IP addresses to the local /etc/inet/hosts file on each cluster node after you install Solaris software; example entries follow Table 1–3.

Table 1–3 Sun Cluster Components That Use IP Addresses

Component: Administrative console.
IP addresses needed: 1 per subnet.

Component: Network adapters.
IP addresses needed: 2 per adapter (1 primary IP address and 1 test IP address).

Component: Cluster nodes.
IP addresses needed: 1 per node, per subnet.

Component: Domain console network interface (Sun Fire 15000).
IP addresses needed: 1 per domain.

Component: Console-access device.
IP addresses needed: 1.

Component: Logical addresses.
IP addresses needed: 1 per logical host resource, per subnet.
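As an illustration, entries like the following might be added to /etc/inet/hosts on each node. All names and addresses here are hypothetical.

    # Example /etc/inet/hosts entries; names and addresses are illustrative.
    192.168.10.11   phys-schost-1    # cluster node 1
    192.168.10.12   phys-schost-2    # cluster node 2
    192.168.10.50   schost-lh-nfs    # logical address for a failover data service
    192.168.10.90   admincon         # administrative console
    192.168.10.91   tc-schost        # console-access device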

Console-Access Devices

You must have console access to all cluster nodes. If you install Cluster Control Panel software on your administrative console, you must provide the hostname of the console-access device used to communicate with the cluster nodes. A terminal concentrator can be used to communicate between the administrative console and the cluster node consoles. A Sun Enterprise 10000 server uses a System Service Processor (SSP) instead of a terminal concentrator. A Sun Fire™ server uses a system controller. For more information about console access, see the Sun Cluster 3.1 Concepts Guide.

Logical Addresses

Each data service resource group that uses a logical address must have a hostname specified for each public network from which the logical address can be accessed. See the Sun Cluster 3.1 Data Service Planning and Administration Guide for information and “Sun Cluster Installation and Configuration Worksheets” in Sun Cluster 3.1 Data Service Release Notes for worksheets for planning resource groups. For more information about data services and resources, also see the Sun Cluster 3.1 Concepts Guide.

Sun Cluster Configurable Components

This section provides guidelines for the Sun Cluster components that you configure during installation.

Cluster Name

Add this planning information to the “Cluster and Node Names Worksheet” in Sun Cluster 3.1 Release Notes.

You specify a name for the cluster during Sun Cluster installation. The cluster name should be unique throughout the enterprise.

Node Names

Add this planning information to the “Cluster and Node Names Worksheet” in Sun Cluster 3.1 Release Notes. Information for most other worksheets is grouped by node name.

The node name is the name you assign to a machine when you install the Solaris operating environment. During Sun Cluster installation, you specify the names of all nodes that you are installing as a cluster.

Private Network

Add this planning information to the “Cluster and Node Names Worksheet” in Sun Cluster 3.1 Release Notes.

Sun Cluster software uses the private network for internal communication between nodes. Sun Cluster requires at least two connections to the cluster interconnect on the private network. You specify the private network address and netmask when you install Sun Cluster software on the first node of the cluster. You can either accept the default private network address (172.16.0.0) and netmask (255.255.0.0) or specify a different address and netmask if the default network address is already in use elsewhere in the enterprise.
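Before you accept the default, a quick heuristic check for existing use of the 172.16.0.0 range in a node's routing tables is:

    # Look for routes already defined in the default private network range.
    netstat -rn | grep '^172\.16\.'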


Note –

After you have successfully installed the node as a cluster member, you cannot change the private network address and netmask.


If you specify a private network address other than the default, it must meet the following requirements:

If you specify a netmask other than the default, it must meet the following requirements:

Private Hostnames

Add this planning information to the “Cluster and Node Names Worksheet” in Sun Cluster 3.1 Release Notes.

The private hostname is the name used for internode communication over the private network interface. Private hostnames are automatically created during Sun Cluster installation and follow the naming convention clusternodenodeid-priv, where nodeid is the numeral of the node's internal node ID. During Sun Cluster installation, this node ID number is automatically assigned to each node when the node becomes a cluster member. After installation, you can rename private hostnames by using the scsetup(1M) utility.
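For example, in a two-node cluster the default private hostnames are clusternode1-priv and clusternode2-priv. After installation, you can confirm reachability over the private network from one node to another:

    # Run from node 1 of a two-node cluster to verify internode
    # communication over the private network.
    ping clusternode2-priv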

Cluster Interconnect

Add this planning information to the “Cluster Interconnect Worksheet” in Sun Cluster 3.1 Release Notes.

The cluster interconnects provide the hardware pathways for private network communication between cluster nodes. Each interconnect consists of a cable connected between two transport adapters, a transport adapter and a transport junction, or two transport junctions. During Sun Cluster installation, you specify the following configuration information for two cluster interconnects.


Note –

Clusters with three or more nodes must use transport junctions. Direct connection between cluster nodes is supported only for two-node clusters.


You can configure additional private network connections after installation by using the scsetup(1M) utility.

For more information about the cluster interconnect, see the Sun Cluster 3.1 Concepts Guide.
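After installation, one way to confirm that both configured interconnect paths are online is the Sun Cluster status command:

    # Display the status of the cluster transport paths.
    scstat -W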

Public Networks

Add this planning information to the “Public Networks Worksheet” in Sun Cluster 3.1 Release Notes.

Public networks communicate outside the cluster. Consider the following points when you plan your public network configuration.

See also IP Network Multipathing Groups for guidelines on planning public network adapter backup groups. For more information about public network interfaces, see the Sun Cluster 3.1 Concepts Guide.

Disk Device Groups

Add this planning information to the “Disk Device Groups Worksheet” in Sun Cluster 3.1 Release Notes.

You must configure all volume manager disk groups as Sun Cluster disk device groups. This configuration enables a secondary node to host multihost disks if the primary node fails. Consider the following points when you plan disk device groups.

For more information about disk device groups, see the Sun Cluster 3.1 Concepts Guide.

IP Network Multipathing Groups

Add this planning information to the “Public Networks Worksheet” in Sun Cluster 3.1 Release Notes.

Internet Protocol (IP) Network Multipathing groups, which replace Network Adapter Failover (NAFO) groups, provide public network adapter monitoring and failover, and are the foundation for a network address resource. If a multipathing group is configured with two or more adapters and an adapter fails, all of the addresses on the failed adapter fail over to another adapter in the multipathing group. In this way, the multipathing group adapters maintain public network connectivity to the subnet to which the adapters in the multipathing group connect.

Consider the following points when you plan your multipathing groups.

For more information about IP Network Multipathing, see “Deploying Network Multipathing” in IP Network Multipathing Administration Guide or “Administering Network Multipathing (Task)” in System Administration Guide: IP Services.
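As a sketch, on Solaris 8 or 9 a multipathing group is typically configured through /etc/hostname.adapter files. The adapter (qfe0), group name (sc_ipmp0), and hostnames below are hypothetical; the two non-comment lines are the file contents, and each line is applied as ifconfig arguments at boot.

    # Contents of a hypothetical /etc/hostname.qfe0: a primary address in
    # group sc_ipmp0 plus a dedicated test address that is marked
    # deprecated and excluded from failover.
    phys-schost-1 netmask + broadcast + group sc_ipmp0 up
    addif phys-schost-1-test deprecated -failover netmask + broadcast + up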

Quorum Devices

Sun Cluster configurations use quorum devices to maintain data and resource integrity. If the cluster temporarily loses connection to a node, the quorum device prevents amnesia or split-brain problems when the cluster node attempts to rejoin the cluster. You assign quorum devices by using the scsetup(1M) utility.

Consider the following points when you plan quorum devices.

For more information about quorum, see the Sun Cluster 3.1 Concepts Guide.
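After you assign quorum devices with the scsetup utility, you can verify the quorum configuration and vote counts:

    # Display quorum device status and node vote counts.
    scstat -q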

Planning the Global Devices and Cluster File Systems

This section provides guidelines for planning global devices and cluster file systems. For more information about global devices and cluster file systems, see the Sun Cluster 3.1 Concepts Guide.

Guidelines for Highly Available Global Devices and Cluster File Systems

Sun Cluster software does not require any specific disk layout or file system size. Consider the following points when you plan your global device and cluster file system layout.

Mount Information for Cluster File Systems

Consider the following points when you plan mount points for cluster file systems.
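For illustration, a cluster file system is mounted through /etc/vfstab on every node with the global mount option, conventionally under /global. The metadevice path, disk set name (nfs-ds), and mount point below are hypothetical.

    # Hypothetical /etc/vfstab entry for a cluster file system built on a
    # Solstice DiskSuite/Solaris Volume Manager metadevice in disk set
    # nfs-ds, mounted globally with UFS logging enabled.
    /dev/md/nfs-ds/dsk/d10  /dev/md/nfs-ds/rdsk/d10  /global/nfs  ufs  2  yes  global,logging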

Planning Volume Management

Add this planning information to the “Disk Device Groups Worksheet” in Sun Cluster 3.1 Release Notes and the “Volume Manager Configurations Worksheet” in Sun Cluster 3.1 Release Notes. For Solstice DiskSuite/Solaris Volume Manager, also add this planning information to the “Metadevices Worksheet (Solstice DiskSuite/Solaris Volume Manager)” in Sun Cluster 3.1 Release Notes.

This section provides guidelines for planning volume management of your cluster configuration.

Sun Cluster uses volume manager software to group disks into disk device groups that can then be administered as one unit. Sun Cluster supports Solstice DiskSuite/Solaris Volume Manager software and VERITAS Volume Manager (VxVM).

See your volume manager documentation and Installing and Configuring Solstice DiskSuite/Solaris Volume Manager Software or Installing and Configuring VxVM Software for instructions on how to install and configure the volume manager software. For more information about volume management in a cluster configuration, see the Sun Cluster 3.1 Concepts Guide.

Guidelines for Volume Manager Software

Consider the following general guidelines when configuring your disks.

See your volume manager documentation for disk layout recommendations and any additional restrictions.

Guidelines for Solstice DiskSuite/Solaris Volume Manager Software

Consider the following points when you plan Solstice DiskSuite/Solaris Volume Manager configurations.

Guidelines for VERITAS Volume Manager Software

Consider the following points when you plan VERITAS Volume Manager (VxVM) configurations.

File-System Logging

Logging is required for cluster file systems. Sun Cluster software supports the following logging file systems:

The following table lists the logging file systems supported by each volume manager.

Table 1–4 Supported File System Logging Matrix

Volume manager: Solstice DiskSuite/Solaris Volume Manager.
Supported file system logging: Solaris UFS logging; Solstice DiskSuite trans-metadevice logging or Solaris Volume Manager transactional-volume logging; VxFS logging.

Volume manager: VERITAS Volume Manager.
Supported file system logging: Solaris UFS logging; VxFS logging.

Consider the following points when you choose between Solaris UFS logging and trans-metadevice logging.

Mirroring Guidelines

This section provides guidelines for planning the mirroring of your cluster configuration.

Mirroring Multihost Disks

Mirroring all multihost disks in a Sun Cluster configuration enables the configuration to tolerate single-disk failures. Sun Cluster software requires that you mirror all multihost disks across disk expansion units. You do not need to use software mirroring if the storage device provides hardware RAID as well as redundant paths to disks.

Consider the following points when you mirror multihost disks.

For more information about multihost disks, see the Sun Cluster 3.1 Concepts Guide.

Mirroring the Root Disk

Add this planning information to the “Local File Systems With Mirrored Root Worksheet” in Sun Cluster 3.1 Release Notes.

For maximum availability, you should mirror root (/), /usr, /var, /opt, and swap on the local disks. Under VxVM, you encapsulate the root disk and mirror the generated subdisks. However, Sun Cluster software does not require that you mirror the root disk.

Before you decide whether to mirror the root disk, consider the risks, complexity, cost, and service time for the various alternatives concerning the root disk. No single mirroring strategy works for all configurations. You might want to consider your local Sun service representative's preferred solution when you decide whether to mirror root.

See your volume manager documentation and Installing and Configuring Solstice DiskSuite/Solaris Volume Manager Software or Installing and Configuring VxVM Software for instructions on how to mirror the root disk.
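As a rough sketch of the Solstice DiskSuite/Solaris Volume Manager approach, root mirroring builds a mirror metadevice over the existing root slice. The device names below are hypothetical, state database replicas must already exist, and the full supported procedure (including the required reboot) is in the instructions referenced above.

    # Submirror built from the existing root slice; -f forces use of a
    # mounted file system.
    metainit -f d11 1 1 c0t0d0s0
    # Submirror on the second local disk.
    metainit d12 1 1 c1t0d0s0
    # One-way mirror that contains the root submirror.
    metainit d10 -m d11
    # Update /etc/vfstab and /etc/system to boot from the mirror.
    metaroot d10
    # After rebooting, attach the second submirror:
    # metattach d10 d12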

Consider the following points when you decide whether to mirror the root disk.