Sun Cluster 3.0 Installation Guide

Chapter 1 Planning the Sun Cluster Configuration

This chapter provides planning information and guidelines for installing a Sun Cluster configuration.

This chapter contains the following overview information.

Where to Find Sun Cluster Installation Tasks

The following table shows where to find instructions for various Sun Cluster software installation tasks and the order in which you should perform them.

Table 1-1 Location of Sun Cluster Software Installation Task Information

Task: Setting up cluster hardware
For instructions, go to: Sun Cluster 3.0 Hardware Guide, and the documentation shipped with your server and storage devices

Task: Planning cluster software installation
For instructions, go to: Chapter 1, Planning the Sun Cluster Configuration, and "Configuration Worksheets and Examples" in Sun Cluster 3.0 Release Notes

Task: Installing cluster framework, volume manager, and data service software packages
For instructions, go to: Chapter 2, Installing and Configuring Sun Cluster Software

Task: Configuring cluster framework and volume manager software
For instructions, go to: Chapter 2, Installing and Configuring Sun Cluster Software; Appendix A, Configuring Solstice DiskSuite Software, or Appendix B, Configuring VERITAS Volume Manager; and your volume manager documentation

Task: Upgrading cluster framework, data services, and volume manager software
For instructions, go to: Chapter 3, Upgrading Sun Cluster Software; Appendix A, Configuring Solstice DiskSuite Software, or Appendix B, Configuring VERITAS Volume Manager; and your volume manager documentation

Task: Planning, installing, and configuring data services and resource groups
For instructions, go to: Sun Cluster 3.0 Data Services Installation and Configuration Guide

Task: Using the API
For instructions, go to: Sun Cluster 3.0 Data Services Developers' Guide

Planning the Solaris Operating Environment

This section provides guidelines for planning Solaris software installation in a cluster configuration. For more information about Solaris software, refer to the Solaris installation documentation.

Guidelines for Selecting Your Solaris Installation Method

You can install Solaris software from a local CD-ROM or from a network install server by using the JumpStart(TM) installation method. In addition, Sun Cluster software provides a method for installing both the Solaris operating environment and Sun Cluster software by using custom JumpStart. If you are installing several cluster nodes, consider a network install.
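If you choose a network install, a cluster node is typically registered as an install client on a Solaris install server. The following sketch shows the standard add_install_client command; the install server name, JumpStart directory, node name, and platform group are hypothetical and will differ on your systems.

# cd /cdrom/cdrom0/s0/Solaris_8/Tools          (on the install server)
# ./add_install_client -c installserver:/export/jumpstart phys-schost-1 sun4u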

Refer to "How to Use JumpStart to Install the Solaris Operating Environment and Establish New Cluster Nodes" for details about the custom JumpStart installation method. Refer to Solaris installation documentation for details about standard Solaris installation methods.

System Disk Partitions

Add this information to the "Local File System Layout Worksheet" in Sun Cluster 3.0 Release Notes.

When the Solaris operating environment is installed, ensure that the required Sun Cluster partitions are created, and that all partitions meet minimum space requirements.

If you perform an interactive installation of the Solaris operating environment, you must customize the partitioning to meet these requirements.

Refer to the following guidelines for additional partition planning information.

Guidelines for the Root (/) File System

As with any other system running the Solaris operating environment, you can configure the root (/), /var, /usr, and /opt directories as separate file systems, or you can include all the directories in the root (/) file system. The following describes the software contents of the root (/), /var, /usr, and /opt directories in a Sun Cluster configuration. Consider this information when planning your partitioning scheme.

Guidelines for the swap Partition

The swap partition must be at least 750 Mbytes or twice the amount of physical memory on the machine, whichever is greater. In addition, any third-party applications you install might have their own swap requirements. Refer to the third-party application documentation for any swap requirements.
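For example, a node with 512 Mbytes of physical memory needs at least 1024 Mbytes of swap (twice physical memory), while a node with 256 Mbytes of physical memory needs the 750-Mbyte minimum. As a quick check, you can display the installed physical memory with the prtconf(1M) command; the output shown here is illustrative.

# prtconf | grep Memory
Memory size: 512 Megabytes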

Guidelines for the /globaldevices File System

Sun Cluster software requires that you set aside a special file system on one of the local disks for use in managing global devices. This file system must be separate, because it is later mounted as a cluster file system. Name this file system /globaldevices, which is the default name recognized by the scinstall(1M) command. The scinstall(1M) command later renames the file system /global/.devices/node@nodeid, where nodeid represents the number assigned to a node when it becomes a cluster member, and removes the original /globaldevices mount point.

The /globaldevices file system must have ample space and inode capacity for creating both block special devices and character special devices, especially if a large number of disks are in the cluster. A file system size of 100 Mbytes should be more than enough for most cluster configurations.
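As an illustration only (the disk and slice names are hypothetical and vary by system), the following commands create a file system on slice c0t0d0s3 and mount it on /globaldevices, after an /etc/vfstab entry such as the one shown has been added.

# newfs /dev/rdsk/c0t0d0s3
# mkdir /globaldevices
# mount /globaldevices

/etc/vfstab entry:

/dev/dsk/c0t0d0s3   /dev/rdsk/c0t0d0s3   /globaldevices   ufs   2   yes   -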

Volume Manager Requirements

If you use Solstice DiskSuite software, you must set aside a slice on the root disk for use in creating the replica database. Specifically, set aside a slice for this purpose on each local disk. However, if you have only one local disk on a node, you might need to create three replica databases in the same slice for Solstice DiskSuite software to function properly. Refer to the Solstice DiskSuite documentation for more information.
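As a hypothetical sketch (the slice name will differ on your system), the following Solstice DiskSuite command creates three state database replicas in a single slice on a node that has only one local disk.

# metadb -a -f -c 3 c0t0d0s7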

If you use VxVM and you intend to encapsulate the root disk, you need two unused slices available for use by VxVM, as well as some additional unassigned free space at either the beginning or end of the disk. Refer to the VxVM documentation for more information about encapsulation.
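To verify that a root disk has two unused slices and unassigned free space available for VxVM encapsulation, you can inspect its label with the prtvtoc(1M) command; the device name here is an example.

# prtvtoc /dev/rdsk/c0t0d0s2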

Example--Sample File-System Allocations

Table 1-2 shows a partitioning scheme for a cluster node that has less than 750 Mbytes of physical memory. This node will be installed with the End User System Support software group of the Solaris operating environment, Sun Cluster software, and the Sun Cluster HA for NFS data service. The last slice on the disk, slice 7, is allocated a small amount of space for volume manager use.

This layout allows for the use of either Solstice DiskSuite software or VxVM. If you use Solstice DiskSuite software, you use slice 7 for the replica database. If you use VxVM, you can later free slice 7 by assigning it a zero length. The layout then provides two free slices, 4 and 7, as well as unused space at the end of the disk.

Table 1-2 Sample File-System Allocation

Slice 0: / (1168 Mbytes)
    441 Mbytes for Solaris operating environment software.
    100 Mbytes extra for root (/).
    100 Mbytes extra for /var.
    25 Mbytes for Sun Cluster software.
    55 Mbytes for volume manager software.
    1 Mbyte for Sun Cluster HA for NFS software.
    25 Mbytes for the Sun Management Center agent and Sun Cluster module agent packages.
    421 Mbytes (the remaining free space on the disk) for possible future use by database and application software.

Slice 1: swap (750 Mbytes)
    Minimum size when physical memory is less than 750 Mbytes.

Slice 2: overlap (2028 Mbytes)
    The entire disk.

Slice 3: /globaldevices (100 Mbytes)
    The Sun Cluster software later assigns this slice a different mount point and mounts it as a cluster file system.

Slice 4: unused
    Available as a free slice for encapsulating the root disk under VxVM.

Slice 5: unused

Slice 6: unused

Slice 7: volume manager (10 Mbytes)
    Used by Solstice DiskSuite software for the replica database. If you use VxVM, you later free this slice and some space at the end of the disk.

Planning the Sun Cluster Environment

This section provides guidelines for planning and preparing for Sun Cluster software installation. For detailed information about Sun Cluster components, refer to Sun Cluster 3.0 Concepts.

Licensing

Ensure that you have any necessary license certificates available before you begin software installation. Sun Cluster software does not require a license certificate, but each node installed with Sun Cluster software must be covered under your Sun Cluster software license agreement.

For licensing requirements for volume manager software and applications software, refer to the installation documentation for those products.

Software Patches

After installing each software product, you must also install any required patches. For the current list of required patches, refer to Sun Cluster 3.0 Release Notes or consult your Enterprise Services representative or service provider. Refer to Sun Cluster 3.0 System Administration Guide for general guidelines and procedures for applying patches.

IP Addresses

You must set up a number of IP addresses for various Sun Cluster components, depending on your cluster configuration. Each node in the cluster configuration must have at least one public network connection to the same set of public subnets.

The following table lists the components that need IP addresses assigned to them. Add these IP addresses to any naming services used. Also add these IP addresses to the local /etc/inet/hosts file on each cluster node after Sun Cluster software is installed.
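For example, entries in the /etc/inet/hosts file for a two-node cluster might look like the following; all hostnames and addresses shown are hypothetical.

192.168.10.50   admin-console    # administrative console
192.168.10.11   phys-schost-1    # cluster node 1
192.168.10.12   phys-schost-2    # cluster node 2
192.168.10.30   tc-schost        # terminal concentrator
192.168.10.20   schost-nfs-lh    # logical host resource address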

Table 1-3 Sun Cluster Components That Use IP Addresses

Component: Administrative console
IP addresses needed: 1 per subnet

Component: Cluster nodes
IP addresses needed: 1 per node, per subnet

Component: Terminal concentrator or System Service Processor
IP addresses needed: 1

Component: Logical addresses
IP addresses needed: 1 per logical host resource, per subnet

Terminal Concentrator or System Service Processor

A terminal concentrator provides communication between the administrative console and the cluster node consoles. Sun Enterprise(TM) E10000 servers use a System Service Processor (SSP) instead of a terminal concentrator. For more information about console access, refer to Sun Cluster 3.0 Concepts.

Logical Addresses

Each data service resource group that uses a logical address must have a hostname specified for each public network from which the logical address can be accessed. Refer to Sun Cluster 3.0 Data Services Installation and Configuration Guide for information and worksheets for planning resource groups. For more information about data services and resources, also refer to Sun Cluster 3.0 Concepts.

Sun Cluster Configurable Components

This section provides guidelines for the Sun Cluster components that you configure during installation.

Cluster Name

Add this planning information to the "Cluster and Node Names Worksheet" in Sun Cluster 3.0 Release Notes.

You specify a name for the cluster during Sun Cluster installation. The cluster name should be unique throughout the enterprise.

Node Names

Add this planning information to the "Cluster and Node Names Worksheet" in Sun Cluster 3.0 Release Notes. Information for most other worksheets is grouped by node name.

The node name is the name you assign to a machine during installation of the Solaris operating environment. During Sun Cluster installation, you specify the names of all nodes that you are installing as a cluster.

Private Network

Add this planning information to the "Cluster and Node Names Worksheet" in Sun Cluster 3.0 Release Notes.

Sun Cluster software uses the private network for internal communication between nodes. Sun Cluster requires at least two connections to the cluster interconnect on the private network. You specify the private network address and netmask when you install Sun Cluster software on the first node of the cluster. You can accept the default private network address (172.16.0.0) and netmask (255.255.0.0), or specify different values if the default network address is already in use elsewhere in the enterprise.


Note -

After you have successfully installed the node as a cluster member, you cannot change the private network address and netmask.


If you specify a private network address other than the default, it must meet the following requirements.

If you specify a netmask other than the default, it must meet the following requirements.

Cluster Interconnect

Add this planning information to the "Cluster Interconnect Worksheet" in Sun Cluster 3.0 Release Notes.

The cluster interconnect provides the hardware pathway for private network communication between cluster nodes. Each interconnect consists of a cable between two transport adapters, a transport adapter and a transport junction, or two transport junctions. During Sun Cluster installation, you specify the following configuration information for two cluster interconnects.


Note -

Clusters with three or more nodes must use transport junctions. Direct connection between cluster nodes is supported only for two-node clusters.


You can configure additional private network connections after installation by using the scsetup(1M) utility.

For more information about the cluster interconnect, refer to Sun Cluster 3.0 Concepts.

Private Hostnames

Add this planning information to the "Cluster and Node Names Worksheet" in Sun Cluster 3.0 Release Notes.

The private hostname is the name used for inter-node communication over the private network interface. Private hostnames are created automatically during Sun Cluster installation and follow the naming convention clusternodenodeid-priv, where nodeid is the internal node ID number. This node ID number is automatically assigned to each node when it becomes a cluster member. After installation, you can rename private hostnames by using the scsetup(1M) utility.
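For example, the node that is assigned node ID 1 receives the private hostname clusternode1-priv, and the node that is assigned node ID 2 receives clusternode2-priv.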

Public Networks

Add this planning information to the "Public Networks Worksheet" in Sun Cluster 3.0 Release Notes.

Public networks communicate outside the cluster. Consider the following points when planning your public network configuration.

See also "NAFO Groups" for guidelines on planning public network adapter backup groups. For more information about public network interfaces, refer to Sun Cluster 3.0 Concepts.

Disk Device Groups

Add this planning information to the "Disk Device Group Configurations Worksheet" in Sun Cluster 3.0 Release Notes.

You must configure all volume manager disk groups as Sun Cluster disk device groups. This configuration enables multihost disks to be hosted by a secondary node if the primary node fails. Consider the following points when planning disk device groups.

For more information about disk device groups, refer to Sun Cluster 3.0 Concepts.

NAFO Groups

Add this planning information to the "Public Networks Worksheet" in Sun Cluster 3.0 Release Notes.

A Network Adapter Failover (NAFO) group provides public network adapter monitoring and failover, and is the foundation for a network address resource. If the active adapter of a NAFO group that is configured with two or more adapters fails, all of its addresses fail over to another adapter in the NAFO group. In this way, the active NAFO group adapter maintains public network connectivity to the subnet to which the adapters in the NAFO group connect.

Consider the following points when planning your NAFO groups.

For more information about Network Adapter Failover, refer to Sun Cluster 3.0 Concepts.

Quorum Devices

Sun Cluster configurations use quorum devices to maintain data and resource integrity. If the cluster temporarily loses connection to a node, the quorum device prevents amnesia or split-brain problems when the cluster node attempts to rejoin the cluster. You assign quorum devices by using the scsetup(1M) utility.

Consider the following points when planning quorum devices.

For more information about quorum, refer to Sun Cluster 3.0 Concepts.

Planning the Global Devices and Cluster File Systems

This section provides guidelines for planning global devices and cluster file systems. For more information about global devices and cluster file systems, refer to Sun Cluster 3.0 Concepts.

Guidelines for Highly Available Global Devices and Cluster File Systems

Sun Cluster does not require any specific disk layout or file system size. Consider the following points when planning your global device and cluster file system layout.

Mount Information for Cluster File Systems

Consider the following points when planning mount points for cluster file systems.

Planning Volume Management

This section provides guidelines for planning volume management of your cluster configuration.

Sun Cluster uses volume manager software to group disks into disk device groups that can then be administered as one unit. Sun Cluster supports Solstice DiskSuite software and VERITAS Volume Manager (VxVM). You can use only one volume manager within a single cluster configuration. Refer to your volume manager documentation and to either Appendix A, Configuring Solstice DiskSuite Software or Appendix B, Configuring VERITAS Volume Manager for instructions on configuring the volume manager software. For more information about volume management in a cluster configuration, refer to Sun Cluster 3.0 Concepts.

Add this planning information to the "Disk Device Group Configurations Worksheet" and the "Volume Manager Configurations Worksheet" in Sun Cluster 3.0 Release Notes, and to the "Metadevices Worksheet (Solstice DiskSuite)" in Sun Cluster 3.0 Release Notes, if applicable.

Guidelines for Volume Manager Software

Consider the following general guidelines when configuring your disks.

Refer to your volume manager documentation for disk layout recommendations and any additional restrictions.

Guidelines for Solstice DiskSuite

Consider the following points when planning Solstice DiskSuite configurations.

Guidelines for VERITAS Volume Manager

Consider the following points when planning VERITAS Volume Manager (VxVM) configurations.

File-System Logging

Logging is required for cluster file systems. Sun Cluster supports the following logging file systems.

For information about Solstice DiskSuite trans-metadevice UFS logging, refer to your Solstice DiskSuite documentation. For information about Solaris UFS logging, refer to the mount_ufs(1M) man page and Solaris Transition Guide.
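As an illustrative sketch (the disk group, volume names, and mount point are hypothetical), an /etc/vfstab entry for a cluster file system built on a VxVM volume that uses Solaris UFS logging might look like the following; the logging mount option enables UFS logging, and the global option mounts the file system as a cluster file system.

/dev/vx/dsk/nfsdg/vol01   /dev/vx/rdsk/nfsdg/vol01   /global/nfs   ufs   2   yes   global,logging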

The following table lists the logging file systems supported by each volume manager.

Table 1-4 Supported File-System Logging Matrix

Volume Manager: Solstice DiskSuite
Supported file-system logging: Solstice DiskSuite trans-metadevice UFS logging, Solaris UFS logging

Volume Manager: VERITAS Volume Manager
Supported file-system logging: Solaris UFS logging

Consider the following points when choosing between Solaris UFS logging and Solstice DiskSuite trans-metadevice UFS logging when you use Solstice DiskSuite as your volume manager.

Mirroring Guidelines

This section provides guidelines for planning the mirroring of your cluster configuration.

Mirroring Multihost Disks

Mirroring all multihost disks in a Sun Cluster configuration enables the configuration to tolerate single-disk failures. Sun Cluster software requires that you mirror all multihost disks across disk expansion units.

Consider the following points when mirroring multihost disks.

For more information about multihost disks, refer to Sun Cluster 3.0 Concepts.

Mirroring the Root Disk

For maximum availability, you should mirror root (/), /usr, /var, /opt, and swap on the local disks. Under VxVM, you encapsulate the root disk and mirror the generated subdisks. However, mirroring the root disk is not a requirement of Sun Cluster.

Before deciding whether to mirror the root disk, consider the risks, complexity, cost, and service time for the various alternatives concerning the root disk. There is no single mirroring strategy that works for all configurations. You might want to consider your local Enterprise Services representative's preferred solution when deciding whether to mirror root.

Refer to your volume manager documentation and to either Appendix A, Configuring Solstice DiskSuite Software or Appendix B, Configuring VERITAS Volume Manager for instructions on mirroring the root disk.
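As a rough sketch of the Solstice DiskSuite approach only (the metadevice names and disk slices are hypothetical; follow the procedure in Appendix A for your configuration), mirroring root (/) involves creating a submirror from the existing root slice, a second submirror on another local disk, and a one-way mirror that is then made the root metadevice.

# metainit -f d11 1 1 c0t0d0s0    (submirror from the existing root slice)
# metainit d12 1 1 c1t0d0s0       (submirror on the second local disk)
# metainit d10 -m d11             (one-way mirror containing the root submirror)
# metaroot d10                    (update the root entries in /etc/vfstab and /etc/system)
(reboot the node, then attach the second submirror)
# metattach d10 d12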

Consider the following issues when deciding whether to mirror the root disk.