Sun Cluster 3.1 Software Installation Guide

Planning the Sun Cluster Environment

This section provides guidelines for planning and preparing for Sun Cluster software installation. For detailed information about Sun Cluster components, see the Sun Cluster 3.1 Concepts Guide.

Licensing

Ensure that you have any necessary license certificates available before you begin software installation. Sun Cluster software does not require a license certificate, but each node installed with Sun Cluster software must be covered under your Sun Cluster software license agreement.

For licensing requirements for volume manager software and applications software, see the installation documentation for those products.

Software Patches

After installing each software product, you must also install any required patches. For information about current required patches, see “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes or consult your Sun service provider. See “Patching Sun Cluster Software and Firmware” in Sun Cluster 3.1 System Administration Guide for general guidelines and procedures for applying patches.

IP Addresses

You must set up a number of IP addresses for various Sun Cluster components, depending on your cluster configuration. Each node in the cluster configuration must have at least one public network connection to the same set of public subnets.

The following table lists the components that need IP addresses assigned to them. Add these IP addresses to any naming services used. Also add these IP addresses to the local /etc/inet/hosts file on each cluster node after you install Solaris software.

Table 1–3 Sun Cluster Components That Use IP Addresses

Component                                           IP Addresses Needed
--------------------------------------------------  -----------------------------------------------------------
Administrative console                              1 per subnet
Network adapters                                    2 per adapter (1 primary IP address and 1 test IP address)
Cluster nodes                                       1 per node, per subnet
Domain console network interface (Sun Fire 15000)   1 per domain
Console-access device                               1
Logical addresses                                   1 per logical host resource, per subnet
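For example, the /etc/inet/hosts file on each cluster node might contain entries like the following. All hostnames and addresses shown here are illustrative only:

```
#
# /etc/inet/hosts (example entries only)
#
192.168.10.11   phys-schost-1     # cluster node 1
192.168.10.12   phys-schost-2     # cluster node 2
192.168.10.50   schost-lh-1       # logical host resource
192.168.10.60   schost-tc         # console-access device (terminal concentrator)
```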

Console-Access Devices

You must have console access to all cluster nodes. If you install Cluster Control Panel software on your administrative console, you must provide the hostname of the console-access device that is used to communicate with the cluster nodes. You can use a terminal concentrator for this communication. A Sun Enterprise 10000 server uses a System Service Processor (SSP) instead of a terminal concentrator, and a Sun Fire server uses a system controller. For more information about console access, see the Sun Cluster 3.1 Concepts Guide.

Logical Addresses

Each data service resource group that uses a logical address must have a hostname specified for each public network from which the logical address can be accessed. See the Sun Cluster 3.1 Data Service Planning and Administration Guide for information, and see “Sun Cluster Installation and Configuration Worksheets” in Sun Cluster 3.1 Data Service Release Notes for worksheets to use when you plan resource groups. For more information about data services and resources, see also the Sun Cluster 3.1 Concepts Guide.

Sun Cluster Configurable Components

This section provides guidelines for the Sun Cluster components that you configure during installation.

Cluster Name

Add this planning information to the “Cluster and Node Names Worksheet” in Sun Cluster 3.1 Release Notes.

You specify a name for the cluster during Sun Cluster installation. The cluster name should be unique throughout the enterprise.

Node Names

Add this planning information to the “Cluster and Node Names Worksheet” in Sun Cluster 3.1 Release Notes. Information for most other worksheets is grouped by node name.

The node name is the name you assign to a machine when you install the Solaris operating environment. During Sun Cluster installation, you specify the names of all nodes that you are installing as a cluster.

Private Network

Add this planning information to the “Cluster and Node Names Worksheet” in Sun Cluster 3.1 Release Notes.

Sun Cluster software uses the private network for internal communication between nodes. Sun Cluster requires at least two connections to the cluster interconnect on the private network. You specify the private network address and netmask when you install Sun Cluster software on the first node of the cluster. You can either accept the default private network address (172.16.0.0) and netmask (255.255.0.0) or type different choices if the default network address is already in use elsewhere in the enterprise.
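If you plan to use a non-default private network address, one way to sanity-check your choice before installation is to confirm that the address and netmask describe a valid RFC 1918 private range. The following sketch uses Python's standard ipaddress module; the helper name is ours, not part of Sun Cluster:

```python
import ipaddress

def check_private_network(addr: str, mask: str) -> ipaddress.IPv4Network:
    """Return the network if addr/mask describes an RFC 1918 private range."""
    net = ipaddress.IPv4Network(f"{addr}/{mask}")
    if not net.is_private:
        raise ValueError(f"{net} is not an RFC 1918 private network")
    return net

# The Sun Cluster defaults describe a 65536-address private range.
default = check_private_network("172.16.0.0", "255.255.0.0")
print(default, default.num_addresses)  # 172.16.0.0/16 65536
```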


Note –

After you have successfully installed the node as a cluster member, you cannot change the private network address and netmask.


If you specify a private network address other than the default, it must meet the following requirements:

If you specify a netmask other than the default, it must meet the following requirements:

Private Hostnames

Add this planning information to the “Cluster and Node Names Worksheet” in Sun Cluster 3.1 Release Notes.

The private hostname is the name that is used for internode communication over the private network interface. Private hostnames are created automatically during Sun Cluster installation and follow the naming convention clusternodeN-priv, where N is the internal node ID number. This node ID number is assigned automatically to each node when the node becomes a cluster member. After installation, you can rename private hostnames by using the scsetup(1M) utility.
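The naming convention is simple enough to sketch. This illustration only mirrors the names that the installation creates automatically; it is not a Sun Cluster interface:

```python
def private_hostname(node_id: int) -> str:
    """Return the private hostname for a given internal node ID."""
    return f"clusternode{node_id}-priv"

# A two-node cluster gets these private hostnames:
print([private_hostname(n) for n in (1, 2)])
# ['clusternode1-priv', 'clusternode2-priv']
```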

Cluster Interconnect

Add this planning information to the “Cluster Interconnect Worksheet” in Sun Cluster 3.1 Release Notes.

The cluster interconnects provide the hardware pathways for private network communication between cluster nodes. Each interconnect consists of a cable connected between two transport adapters, a transport adapter and a transport junction, or two transport junctions. During Sun Cluster installation, you specify the following configuration information for two cluster interconnects.


Note –

Clusters with three or more nodes must use transport junctions. Direct connection between cluster nodes is supported only for two-node clusters.


You can configure additional private network connections after installation by using the scsetup(1M) utility.

For more information about the cluster interconnect, see the Sun Cluster 3.1 Concepts Guide.

Public Networks

Add this planning information to the “Public Networks Worksheet” in Sun Cluster 3.1 Release Notes.

Public networks communicate outside the cluster. Consider the following points when you plan your public network configuration.

See also “IP Network Multipathing Groups” for guidelines on planning public network adapter backup groups. For more information about public network interfaces, see the Sun Cluster 3.1 Concepts Guide.

Disk Device Groups

Add this planning information to the “Disk Device Groups Worksheet” in Sun Cluster 3.1 Release Notes.

You must configure all volume manager disk groups as Sun Cluster disk device groups. This configuration enables a secondary node to host multihost disks if the primary node fails. Consider the following points when you plan disk device groups.

For more information about disk device groups, see the Sun Cluster 3.1 Concepts Guide.

IP Network Multipathing Groups

Add this planning information to the “Public Networks Worksheet” in Sun Cluster 3.1 Release Notes.

Internet Protocol (IP) Network Multipathing groups, which replace Network Adapter Failover (NAFO) groups, provide public network adapter monitoring and failover, and are the foundation for a network address resource. If a multipathing group is configured with two or more adapters and an adapter fails, all of the addresses on the failed adapter fail over to another adapter in the group. In this way, the multipathing group maintains public network connectivity to its subnet.
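In the Solaris 8 and Solaris 9 operating environments, a multipathing group is typically configured in the /etc/hostname.<adapter> file for each adapter in the group. The following sketch shows one adapter with a primary address and a dedicated, non-failover test address; the interface name, group name, and hostnames are illustrative only:

```
# /etc/hostname.qfe0 (example): primary address plus a non-failover test address
phys-schost-1 netmask + broadcast + group sc_ipmp0 up \
    addif phys-schost-1-test deprecated -failover netmask + broadcast + up
```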

Consider the following points when you plan your multipathing groups.

For more information about IP Network Multipathing, see “Deploying Network Multipathing” in IP Network Multipathing Administration Guide or “Administering Network Multipathing (Task)” in System Administration Guide: IP Services.

Quorum Devices

Sun Cluster configurations use quorum devices to maintain data and resource integrity. If the cluster temporarily loses connection to a node, the quorum device prevents amnesia or split-brain problems when the cluster node attempts to rejoin the cluster. You assign quorum devices by using the scsetup(1M) utility.

Consider the following points when you plan quorum devices.

For more information about quorum, see the Sun Cluster 3.1 Concepts Guide.