Sun Cluster Software Installation Guide for Solaris OS

Chapter 1 Planning the Sun Cluster Configuration

This chapter provides planning information and guidelines for installing a Sun Cluster configuration.

The following overview information is in this chapter:

  • Where to Find Sun Cluster Installation Tasks
  • Planning the Solaris OS
  • Planning the Sun Cluster Environment
  • Planning the Global Devices and Cluster File Systems
  • Planning Volume Management

Where to Find Sun Cluster Installation Tasks

The following table shows where to find instructions for the various Sun Cluster software installation tasks and the order in which you should perform them.

Table 1–1 Sun Cluster Software Installation Task Information

Task: Set up cluster hardware.
Instructions: -

Task: Plan cluster software installation.
Instructions: This chapter

Task: Install a new cluster or add nodes to an existing cluster. Optionally, install and configure Sun StorEdge QFS software.
Instructions: Installing the Software; Sun StorEdge QFS and Sun StorEdge SAM-FS Software Installation and Configuration Guide

Task: Install and configure Solstice DiskSuite™ or Solaris Volume Manager software.
Instructions: Installing and Configuring Solstice DiskSuite or Solaris Volume Manager Software; your Solstice DiskSuite or Solaris Volume Manager documentation

Task: SPARC: Install and configure VERITAS Volume Manager (VxVM) software.
Instructions: SPARC: Installing and Configuring VxVM Software; your VxVM documentation

Task: Configure cluster framework software, and optionally install and configure the Sun Cluster module to Sun Management Center (available on SPARC based systems only).
Instructions: Configuring the Cluster

Task: Plan, install, and configure resource groups and data services.
Instructions: Sun Cluster Data Service Planning and Administration Guide for Solaris OS

Task: Develop custom data services.
Instructions: Sun Cluster Data Services Developer's Guide for Solaris OS

Task: Upgrade to Sun Cluster 3.1 9/04 software.
Instructions: -

Planning the Solaris OS

This section provides guidelines for planning Solaris software installation in a cluster configuration. For more information about Solaris software, see your Solaris installation documentation.

Guidelines for Selecting Your Solaris Installation Method

You can install Solaris software from a local CD-ROM or from a network installation server by using the JumpStart™ installation method. In addition, Sun Cluster software provides a custom method for installing both the Solaris OS and Sun Cluster software by using the JumpStart installation method. If you are installing several cluster nodes, consider a network installation.

See How to Install Solaris and Sun Cluster Software (JumpStart) for details about the scinstall JumpStart installation method. See your Solaris installation documentation for details about standard Solaris installation methods.
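
For orientation, a minimal JumpStart profile for a cluster node might resemble the following sketch. The disk name (c0t0d0), slice sizes, and software group are assumptions, not a tested configuration; adapt them to your hardware and to the guidelines in System Disk Partitions.

    # Minimal JumpStart profile sketch; disk name, sizes, and software
    # group are assumptions, not a tested configuration.
    install_type    initial_install
    system_type     standalone
    partitioning    explicit
    cluster         SUNWCuser
    filesys         c0t0d0s0 free /
    filesys         c0t0d0s1 1024 swap
    # File system that Sun Cluster requires for global devices
    filesys         c0t0d0s3 512 /globaldevices
    # Small raw slice reserved for volume-manager use
    filesys         c0t0d0s7 20 unnamed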

Solaris OS Feature Restrictions

The following Solaris OS features are not supported in a Sun Cluster configuration:

Solaris Software Group Considerations

Sun Cluster 3.1 9/04 software requires at least the End User Solaris Software Group. However, other components of your cluster configuration might have their own Solaris software requirements as well. Consider the following information when you decide which Solaris software group to install.


Tip –

To avoid the need to manually install Solaris software packages, install the Entire Solaris Software Group Plus OEM Support.
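
If you use the JumpStart method, the software group is selected by the cluster keyword in the profile. As a one-line sketch, the Entire Plus OEM group corresponds to the SUNWCXall metacluster:

    cluster    SUNWCXall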


System Disk Partitions

Add this information to the appropriate Local File System Layout Worksheet.

When you install the Solaris OS, ensure that you create the required Sun Cluster partitions and that all partitions meet minimum space requirements.

If you perform an interactive installation of the Solaris OS, you must customize the partitioning to meet these requirements.

See the following guidelines for additional partition planning information:

Guidelines for the Root (/) File System

As with any other system running the Solaris OS, you can configure the root (/), /var, /usr, and /opt directories as separate file systems. Or, you can include all the directories in the root (/) file system. The following describes the software contents of the root (/), /var, /usr, and /opt directories in a Sun Cluster configuration. Consider this information when you plan your partitioning scheme.

Guidelines for the /globaldevices File System

Sun Cluster software requires you to set aside a special file system on one of the local disks for use in managing global devices. This file system is later mounted as a cluster file system. Name this file system /globaldevices, which is the default name that is recognized by the scinstall(1M) command.

The scinstall command later renames the file system to /global/.devices/node@nodeid, where nodeid represents the number that is assigned to a node when it becomes a cluster member. The original /globaldevices mount point is removed.

The /globaldevices file system must have ample space and ample inode capacity for creating both block special devices and character special devices. This guideline is especially important if a large number of disks are in the cluster. A file system size of 512 Mbytes should suffice for most cluster configurations.
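
Before Sun Cluster installation, /globaldevices is an ordinary local UFS file system. As a sketch, its /etc/vfstab entry might look like the following, assuming the file system was created on slice 3 of the boot disk:

    /dev/dsk/c0t0d0s3  /dev/rdsk/c0t0d0s3  /globaldevices  ufs  2  yes  -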

Volume Manager Requirements

If you use Solstice DiskSuite or Solaris Volume Manager software, you must set aside a slice on the root disk for use in creating the state database replica. Specifically, set aside a slice for this purpose on each local disk. If a node has only one local disk, you might need to create three state database replicas in the same slice for Solstice DiskSuite or Solaris Volume Manager software to function properly. See your Solstice DiskSuite or Solaris Volume Manager documentation for more information.
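
For example, the following sketch creates the three replicas in a single reserved slice; the slice name c0t0d0s7 is an assumption:

    # metadb -af -c 3 c0t0d0s7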

SPARC: If you use VERITAS Volume Manager (VxVM) and you intend to encapsulate the root disk, you need two unused slices that are available for use by VxVM, as well as some unassigned free space at either the beginning or the end of the disk. See your VxVM documentation for more information about root disk encapsulation.

Example—Sample File-System Allocations

Table 1–2 shows a partitioning scheme for a cluster node that has less than 750 Mbytes of physical memory. The scheme assumes installation of the End User Solaris Software Group, Sun Cluster software, and the Sun Cluster HA for NFS data service. The last slice on the disk, slice 7, is allocated with a small amount of space for volume-manager use.

This layout allows for the use of either Solstice DiskSuite or Solaris Volume Manager software or VxVM software. If you use Solstice DiskSuite or Solaris Volume Manager software, you use slice 7 for the state database replica. If you use VxVM, you later free slice 7 by assigning the slice a zero length. This layout provides the necessary two free slices, 4 and 7, and also leaves unused space at the end of the disk.

Table 1–2 Example File-System Allocation

Slice  Contents        Allocation  Description
0      /               6.75 GB     Remaining free space on the disk after allocating space to slices 1 through 7. Used for the Solaris OS, Sun Cluster software, data-services software, volume-manager software, Sun Management Center agent and Sun Cluster module agent packages, root file systems, and database and application software.
1      swap            1 GB        512 Mbytes for the Solaris OS and 512 Mbytes for Sun Cluster software.
2      overlap         8.43 GB     The entire disk.
3      /globaldevices  512 MB      The Sun Cluster software later assigns this slice a different mount point and mounts the slice as a cluster file system.
4      unused          -           Available as a free slice for encapsulating the root disk under VxVM.
5      unused          -           -
6      unused          -           -
7      volume manager  20 MB       Used by Solstice DiskSuite or Solaris Volume Manager software for the state database replica, or used by VxVM for installation after you free the slice.
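
To compare a node's actual disk label against a planned layout such as this one, print the disk's volume table of contents with the prtvtoc command. The device name below is an assumption:

    # prtvtoc /dev/rdsk/c0t0d0s2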

Planning the Sun Cluster Environment

This section provides guidelines for planning and preparing the following components for Sun Cluster software installation and configuration:

For detailed information about Sun Cluster components, see the Sun Cluster Overview for Solaris OS and the Sun Cluster Concepts Guide for Solaris OS.

Licensing

Ensure that you have available all necessary license certificates before you begin software installation. Sun Cluster software does not require a license certificate, but each node installed with Sun Cluster software must be covered under your Sun Cluster software license agreement.

For licensing requirements for volume-manager software and applications software, see the installation documentation for those products.

Software Patches

After installing each software product, you must also install any required patches.

IP Addresses

You must set up a number of IP addresses for various Sun Cluster components, depending on your cluster configuration. Each node in the cluster configuration must have at least one public-network connection to the same set of public subnets.

The following table lists the components that need IP addresses assigned. Add these IP addresses to any naming services that are used. Also add these IP addresses to the local /etc/inet/hosts file on each cluster node after you install Solaris software.

Table 1–3 Sun Cluster Components That Use IP Addresses

Component                                            Number of IP Addresses Needed
Administrative console                               1 per subnet
Cluster nodes                                        1 per node, per subnet
Domain console network interface (Sun Fire™ 15000)   1 per domain
Console-access device                                1
Logical addresses                                    1 per logical host resource, per subnet
IP Network Multipathing groups:
  • Single-adapter groups – 1
  • Multiple-adapter groups – 1 primary IP address plus 1 test IP address for each adapter in the group
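
As an illustration, the /etc/inet/hosts entries for a two-node cluster might resemble the following sketch. All hostnames and addresses are placeholders:

    192.168.10.11   phys-schost-1   # cluster node 1
    192.168.10.12   phys-schost-2   # cluster node 2
    192.168.10.20   schost-nfs-lh   # logical address for a data service
    192.168.10.30   admincon        # administrative console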

Console-Access Devices

You must have console access to all cluster nodes. If you install Cluster Control Panel software on your administrative console, you must provide the hostname of the console-access device that is used to communicate with the cluster nodes.

For more information about console access, see the Sun Cluster Concepts Guide for Solaris OS.

Logical Addresses

Each data-service resource group that uses a logical address must have a hostname specified for each public network from which the logical address can be accessed.

Public Networks

Public networks communicate outside the cluster. Consider the following points when you plan your public-network configuration:

See IP Network Multipathing Groups for guidelines on planning public-network-adapter backup groups. For more information about public-network interfaces, see Sun Cluster Concepts Guide for Solaris OS.

Guidelines for NFS

Consider the following points when you plan the use of Network File System (NFS) in a Sun Cluster configuration.

Service Restrictions

Observe the following service restrictions for Sun Cluster configurations:

Sun Cluster Configurable Components

This section provides guidelines for the following Sun Cluster components that you configure:

Add this information to the appropriate configuration worksheet.

Table 1–4 Worksheets for Sun Cluster Configuration

Configuration Worksheet: Table 2–2 (to use defaults) or Table 2–3 (to customize)
Location: How to Configure Sun Cluster Software on All Nodes (scinstall)

Configuration Worksheet: Table 2–6
Location: How to Install and Configure Sun Cluster Software (SunPlex Installer)

Configuration Worksheet: Table 2–7
Location: How to Install Solaris and Sun Cluster Software (JumpStart)

Configuration Worksheet: Table 2–8
Location: How to Configure Sun Cluster Software on Additional Cluster Nodes (scinstall)

Cluster Name

Specify a name for the cluster during Sun Cluster configuration. The cluster name should be unique throughout the enterprise.

Node Names

The node name is the name that you assign to a machine when you install the Solaris OS. During Sun Cluster configuration, you specify the names of all nodes that you are installing as a cluster. In single-node cluster installations, the default node name is the same as the cluster name.

Private Network


Note –

You do not need to configure a private network for a single-node cluster.


Sun Cluster software uses the private network for internal communication between nodes. A Sun Cluster configuration requires at least two connections to the cluster interconnect on the private network. You specify the private-network address and netmask when you configure Sun Cluster software on the first node of the cluster. You can either accept the default private-network address (172.16.0.0) and netmask (255.255.0.0) or type different choices if the default network address is already in use elsewhere in the same enterprise.


Note –

After the installation utility (scinstall, SunPlex Installer, or JumpStart) has finished processing and the cluster is established, you cannot change the private-network address and netmask. You must uninstall and reinstall the cluster software to use a different private-network address or netmask.


If you specify a private-network address other than the default, the address must meet the following requirements:

Although the scinstall utility lets you specify an alternate netmask, best practice is to accept the default netmask, 255.255.0.0. Specifying a netmask that represents a larger network provides no benefit, and the scinstall utility does not accept a netmask that represents a smaller network.

See “Planning Your TCP/IP Network” in System Administration Guide, Volume 3 (Solaris 8) or “Planning Your TCP/IP Network (Task)” in System Administration Guide: IP Services (Solaris 9) for more information about private networks.

Private Hostnames

The private hostname is the name that is used for internode communication over the private-network interface. Private hostnames are automatically created during Sun Cluster configuration. These private hostnames follow the naming convention clusternodenodeid-priv, where nodeid is the numeral of the internal node ID. During Sun Cluster configuration, the node ID number is automatically assigned to each node when the node becomes a cluster member. After the cluster is configured, you can rename private hostnames by using the scsetup(1M) utility.
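
For example, assuming a peer node was assigned node ID 2, you can verify communication over the private interconnect from another cluster node as follows:

    # ping clusternode2-priv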

Cluster Interconnect


Note –

You do not need to configure a cluster interconnect for a single-node cluster. However, if you anticipate eventually adding nodes to a single-node cluster configuration, you might want to configure the cluster interconnect for future use.


The cluster interconnects provide the hardware pathways for private-network communication between cluster nodes. Each interconnect consists of a cable that is connected in one of the following ways:

During Sun Cluster configuration, you specify the following information for two cluster interconnects:

You can configure additional private-network connections after the cluster is established by using the scsetup(1M) utility.

For more information about the cluster interconnect, see “Cluster Interconnect” in Sun Cluster Overview for Solaris OS and Sun Cluster Concepts Guide for Solaris OS.

IP Network Multipathing Groups

Add this planning information to the Public Networks Worksheet.

Internet Protocol (IP) Network Multipathing groups, which replace Network Adapter Failover (NAFO) groups, provide public-network adapter monitoring and failover, and are the foundation for a network-address resource. A multipathing group provides high availability when it is configured with two or more adapters. If one adapter fails, all of the addresses on the failed adapter fail over to another adapter in the multipathing group. In this way, the multipathing group maintains public-network connectivity to the subnet to which its adapters connect.

Consider the following points when you plan your multipathing groups.

Most procedures, guidelines, and restrictions that are identified in the Solaris documentation for IP Network Multipathing are the same for both cluster and noncluster environments. Therefore, see the appropriate Solaris document for additional information about IP Network Multipathing:

Also see “IP Network Multipathing Groups” in Sun Cluster Overview for Solaris OS and Sun Cluster Concepts Guide for Solaris OS.
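
As a hedged sketch of a two-adapter multipathing group on Solaris 9, the /etc/hostname.qfe0 file for one adapter might resemble the following. The adapter name, group name (sc_ipmp0), and hostnames are assumptions:

    phys-schost-1 group sc_ipmp0 netmask + broadcast + up addif phys-schost-1-test deprecated -failover netmask + broadcast + up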

Quorum Devices

Sun Cluster configurations use quorum devices to maintain data and resource integrity. If the cluster temporarily loses connection to a node, the quorum device prevents amnesia or split-brain problems when the cluster node attempts to rejoin the cluster. You configure quorum devices by using the scsetup(1M) utility.


Note –

You do not need to configure quorum devices for a single-node cluster.


Consider the following points when you plan quorum devices.

For more information about quorum devices, see “Quorum and Quorum Devices” in Sun Cluster Concepts Guide for Solaris OS and “Quorum Devices” in Sun Cluster Overview for Solaris OS.
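
After the cluster is established and quorum devices are assigned, you can verify the quorum configuration and current vote counts with the scstat command:

    # scstat -q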

Planning the Global Devices and Cluster File Systems

This section provides the following guidelines for planning global devices and for planning cluster file systems:

For more information about global devices and about cluster files systems, see Sun Cluster Overview for Solaris OS and Sun Cluster Concepts Guide for Solaris OS.

Guidelines for Highly Available Global Devices and Cluster File Systems

Sun Cluster software does not require any specific disk layout or file system size. Consider the following points when you plan your layout for global devices and for cluster file systems.

Cluster File Systems

Consider the following points when you plan cluster file systems.

Disk Device Groups

Add this planning information to the Disk Device Group Configurations Worksheet.

You must configure all volume-manager disk groups as Sun Cluster disk device groups. This configuration enables a secondary node to host multihost disks if the primary node fails. Consider the following points when you plan disk device groups.

For more information about disk device groups, see “Devices” in Sun Cluster Overview for Solaris OS and Sun Cluster Concepts Guide for Solaris OS.
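
When you plan disk device groups, note that the cluster-wide device identifiers (DIDs) that underlie the groups can be listed, once the cluster is installed, with the scdidadm command:

    # scdidadm -L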

Mount Information for Cluster File Systems

Consider the following points when you plan mount points for cluster file systems.
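
As one sketch of the resulting mount information (the disk set name, metadevice name, and mount point are assumptions), each node's /etc/vfstab entry for a cluster file system uses the global mount option, here combined with UFS logging:

    /dev/md/nfsset/dsk/d100  /dev/md/nfsset/rdsk/d100  /global/nfs  ufs  2  yes  global,logging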

Planning Volume Management

Add this planning information to the Disk Device Group Configurations Worksheet and the Volume-Manager Configurations Worksheet. For Solstice DiskSuite or Solaris Volume Manager, also add this planning information to the Metadevices Worksheet (Solstice DiskSuite or Solaris Volume Manager).

This section provides the following guidelines for planning volume management of your cluster configuration:

Sun Cluster software uses volume-manager software to group disks into disk device groups, which can then be administered as one unit. Sun Cluster software supports Solstice DiskSuite or Solaris Volume Manager software and VERITAS Volume Manager (VxVM) software, which you install or use in the following ways.

Table 1–5 Supported Use of Volume Managers with Sun Cluster Software

Volume-Manager Software: Solstice DiskSuite or Solaris Volume Manager
Requirements: You must install Solstice DiskSuite or Solaris Volume Manager software on all nodes of the cluster, regardless of whether you use VxVM on some nodes to manage disks.

Volume-Manager Software: SPARC: VxVM with the cluster feature
Requirements: You must install and license VxVM with the cluster feature on all nodes of the cluster.

Volume-Manager Software: SPARC: VxVM without the cluster feature
Requirements: You are required to install and license VxVM only on those nodes that are attached to storage devices that VxVM manages.

Volume-Manager Software: SPARC: Both Solstice DiskSuite or Solaris Volume Manager and VxVM
Requirements: If you install both volume managers on the same node, you must use Solstice DiskSuite or Solaris Volume Manager software to manage disks that are local to each node. Local disks include the root disk. Use VxVM to manage all shared disks.

See your volume-manager documentation and Installing and Configuring Solstice DiskSuite or Solaris Volume Manager Software or SPARC: Installing and Configuring VxVM Software for instructions on how to install and configure the volume-manager software. For more information about volume management in a cluster configuration, see the Sun Cluster Concepts Guide for Solaris OS.

Guidelines for Volume-Manager Software

Consider the following general guidelines when you configure your disks with volume-manager software:

See your volume-manager documentation for disk layout recommendations and any additional restrictions.

Guidelines for Solstice DiskSuite or Solaris Volume Manager Software

Consider the following points when you plan Solstice DiskSuite or Solaris Volume Manager configurations:

SPARC: Guidelines for VERITAS Volume Manager Software

Consider the following points when you plan VERITAS Volume Manager (VxVM) configurations.

See your VxVM installation documentation for additional information.

File-System Logging

Logging is required for UFS and VxFS cluster file systems. This requirement does not apply to QFS shared file systems. Sun Cluster software supports the following choices of file-system logging:

The following table lists the file-system logging supported by each volume manager.

Table 1–6 Supported File-System Logging Matrix

Volume Manager: Solstice DiskSuite or Solaris Volume Manager
Supported File-System Logging:
  • Solaris UFS logging
  • Solstice DiskSuite trans-metadevice logging
  • Solaris Volume Manager transactional-volume logging
  • VxFS logging

Volume Manager: SPARC: VERITAS Volume Manager
Supported File-System Logging:
  • Solaris UFS logging
  • VxFS logging

Consider the following points when you choose between Solaris UFS logging and Solstice DiskSuite trans-metadevice logging or Solaris Volume Manager transactional-volume logging for UFS cluster file systems:
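
If you choose trans-metadevice logging, the logging device is configured with the metainit command. A sketch, assuming a master device d11 and a log device d12 that already exist:

    # metainit d10 -t d11 d12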

Mirroring Guidelines

This section provides the following guidelines for planning the mirroring of your cluster configuration:

Guidelines for Mirroring Multihost Disks

Mirroring all multihost disks in a Sun Cluster configuration enables the configuration to tolerate single-device failures. Sun Cluster software requires that you mirror all multihost disks across expansion units. You do not need to use software mirroring if the storage device provides hardware RAID as well as redundant paths to devices.

Consider the following points when you mirror multihost disks.

For more information about multihost disks, see “Multihost Disk Storage” in Sun Cluster Overview for Solaris OS and Sun Cluster Concepts Guide for Solaris OS.
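
As a sketch of mirroring across expansion units with Solstice DiskSuite or Solaris Volume Manager software, the following assumes a disk set named dg-schost-1 and DID devices d4 and d5 that reside in different expansion units:

    # metainit -s dg-schost-1 d10 1 1 /dev/did/rdsk/d4s0
    # metainit -s dg-schost-1 d20 1 1 /dev/did/rdsk/d5s0
    # metainit -s dg-schost-1 d100 -m d10
    # metattach -s dg-schost-1 d100 d20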

Guidelines for Mirroring the Root Disk

Add this planning information to the Local File System Layout Worksheet.

For maximum availability, mirror root (/), /usr, /var, /opt, and swap on the local disks. Under VxVM, you encapsulate the root disk and mirror the generated subdisks. However, Sun Cluster software does not require that you mirror the root disk.

Before you decide whether to mirror the root disk, consider the risks, complexity, cost, and service time of the various alternatives. No single mirroring strategy works for all configurations. You might want to consider your local Sun service representative's preferred solution when you decide whether to mirror root.

See your volume-manager documentation and Installing and Configuring Solstice DiskSuite or Solaris Volume Manager Software or SPARC: Installing and Configuring VxVM Software for instructions on how to mirror the root disk.
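
As orientation only, a Solstice DiskSuite or Solaris Volume Manager root-mirror sketch follows. The slice names are assumptions; metainit -f forces use of the mounted root slice, metaroot updates /etc/vfstab and /etc/system, and the second submirror is attached only after the reboot:

    # metainit -f d11 1 1 c0t0d0s0
    # metainit d12 1 1 c1t0d0s0
    # metainit d10 -m d11
    # metaroot d10
    # lockfs -fa
    # reboot
    # metattach d10 d12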

Consider the following points when you decide whether to mirror the root disk.