This section provides guidelines for planning and preparing for Sun Cluster software installation. For detailed information about Sun Cluster components, see the Sun Cluster 3.1 Concepts Guide.
Ensure that you have any necessary license certificates available before you begin software installation. Sun Cluster software does not require a license certificate, but each node installed with Sun Cluster software must be covered under your Sun Cluster software license agreement.
For licensing requirements for volume manager software and applications software, see the installation documentation for those products.
After installing each software product, you must also install any required patches. For information about current required patches, see “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes or consult your Sun service provider. See “Patching Sun Cluster Software and Firmware” in Sun Cluster 3.1 System Administration Guide for general guidelines and procedures for applying patches.
You must set up a number of IP addresses for various Sun Cluster components, depending on your cluster configuration. Each node in the cluster configuration must have at least one public network connection to the same set of public subnets.
The following table lists the components that need IP addresses assigned to them. Add these IP addresses to any naming services used. Also add these IP addresses to the local /etc/inet/hosts file on each cluster node after you install Solaris software.
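As an illustration, the entries in the local /etc/inet/hosts file might look like the following. All hostnames and addresses here are hypothetical examples, not values from your configuration.

```
# Hypothetical /etc/inet/hosts entries on each cluster node
# (names and addresses are examples only)
192.168.10.11   phys-schost-1    # cluster node 1
192.168.10.12   phys-schost-2    # cluster node 2
192.168.10.20   schost-lh        # logical hostname used by a data service
192.168.10.30   admincon         # administrative console
192.168.10.40   tc               # console-access device (terminal concentrator)
```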
Table 1–3 Sun Cluster Components That Use IP Addresses
You must have console access to all cluster nodes. If you install Cluster Control Panel software on your administrative console, you must provide the hostname of the console-access device used to communicate with the cluster nodes. A terminal concentrator can be used to communicate between the administrative console and the cluster node consoles. A Sun Enterprise 10000 server uses a System Service Processor (SSP) instead of a terminal concentrator. A Sun Fire server uses a system controller. For more information about console access, see the Sun Cluster 3.1 Concepts Guide.
Each data service resource group that uses a logical address must have a hostname specified for each public network from which the logical address can be accessed. See the Sun Cluster 3.1 Data Service Planning and Administration Guide for information and “Sun Cluster Installation and Configuration Worksheets” in Sun Cluster 3.1 Data Service Release Notes for worksheets for planning resource groups. For more information about data services and resources, also see the Sun Cluster 3.1 Concepts Guide.
This section provides guidelines for the Sun Cluster components that you configure during installation.
Add this planning information to the “Cluster and Node Names Worksheet” in Sun Cluster 3.1 Release Notes.
You specify a name for the cluster during Sun Cluster installation. The cluster name should be unique throughout the enterprise.
Add this planning information to the “Cluster and Node Names Worksheet” in Sun Cluster 3.1 Release Notes. Information for most other worksheets is grouped by node name.
The node name is the name you assign to a machine when you install the Solaris operating environment. During Sun Cluster installation, you specify the names of all nodes that you are installing as a cluster.
Add this planning information to the “Cluster and Node Names Worksheet” in Sun Cluster 3.1 Release Notes.
Sun Cluster software uses the private network for internal communication between nodes. A Sun Cluster configuration requires at least two connections to the cluster interconnect on the private network. You specify the private network address and netmask when you install Sun Cluster software on the first node of the cluster. You can either accept the default private network address (172.16.0.0) and netmask (255.255.0.0), or specify different values if the default network address is already in use elsewhere in the enterprise.
After you have successfully installed the node as a cluster member, you cannot change the private network address and netmask.
If you specify a private network address other than the default, it must meet the following requirements:
Use zeroes for the last two octets of the address.
Follow the guidelines in RFC 1597 for network address assignments.
See “Planning Your TCP/IP Network” in System Administration Guide, Volume 3 (Solaris 8) or “Planning Your TCP/IP Network (Task)” in System Administration Guide: IP Services (Solaris 9) for instructions on how to contact the InterNIC to obtain copies of RFCs.
If you specify a netmask other than the default, it must meet the following requirements:
Minimally mask all bits that are set in the private network address
Contain no “holes”, that is, the mask bits must be contiguous
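The address and netmask rules above can be checked mechanically. The following is a sketch in Python, not an official Sun Cluster tool; it relies on the standard ipaddress module, which rejects non-contiguous (“holey”) netmasks and, in strict mode, masks that do not cover every bit set in the address.

```python
import ipaddress

def check_private_net(addr: str, mask: str) -> bool:
    """Sketch of the private network address and netmask rules above."""
    octets = [int(o) for o in addr.split(".")]
    if octets[2] != 0 or octets[3] != 0:   # last two octets must be zero
        return False
    try:
        # IPv4Network rejects non-contiguous netmasks, and with strict=True
        # it also rejects a mask that leaves host bits set in the address.
        ipaddress.IPv4Network((addr, mask), strict=True)
    except ValueError:
        return False
    return True

# The default address and netmask pass:
print(check_private_net("172.16.0.0", "255.255.0.0"))   # True
# A netmask with a "hole" fails:
print(check_private_net("172.16.0.0", "255.0.255.0"))   # False
# A nonzero third octet fails:
print(check_private_net("172.16.4.0", "255.255.0.0"))   # False
```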
Add this planning information to the “Cluster and Node Names Worksheet” in Sun Cluster 3.1 Release Notes.
The private hostname is the name used for internode communication over the private network interface. Private hostnames are created automatically during Sun Cluster installation and follow the naming convention clusternodenodeid-priv, where nodeid is the internal node ID number. This node ID number is automatically assigned to each node when the node becomes a cluster member during Sun Cluster installation. After installation, you can rename private hostnames by using the scsetup(1M) utility.
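The naming convention can be illustrated with a short sketch; the node IDs here are examples.

```python
def private_hostname(node_id: int) -> str:
    """Default private hostname for a given internal node ID,
    following the clusternode<nodeid>-priv convention described above."""
    return f"clusternode{node_id}-priv"

print(private_hostname(1))   # clusternode1-priv
print(private_hostname(2))   # clusternode2-priv
```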
Add this planning information to the “Cluster Interconnect Worksheet” in Sun Cluster 3.1 Release Notes.
The cluster interconnects provide the hardware pathways for private network communication between cluster nodes. Each interconnect consists of a cable connected between two transport adapters, a transport adapter and a transport junction, or two transport junctions. During Sun Cluster installation, you specify the following configuration information for two cluster interconnects.
Transport adapters – For the transport adapters, such as ports on network interfaces, specify the transport adapter names and the transport type. If your configuration is a two-node cluster, you also specify whether your interconnect is direct connected (adapter to adapter) or uses a transport junction. If your two-node cluster is direct connected, you can still specify a transport junction for the interconnect. Specifying a transport junction makes it easier to add another node to the cluster in the future.
Transport junctions – If you use transport junctions, such as a network switch, specify a transport junction name for each interconnect. You can use the default name switchN, where N is a number automatically assigned during installation, or create other names.
Also specify the junction port name, or accept the default name. The default port name is the same as the internal node ID number of the node that hosts the adapter end of the cable. However, you cannot use the default port name for certain adapter types, such as SCI.
Clusters with three or more nodes must use transport junctions. Direct connection between cluster nodes is supported only for two-node clusters.
You can configure additional private network connections after installation by using the scsetup(1M) utility.
For more information about the cluster interconnect, see the Sun Cluster 3.1 Concepts Guide.
Add this planning information to the “Public Networks Worksheet” in Sun Cluster 3.1 Release Notes.
Public networks provide communication outside the cluster. Consider the following points when you plan your public network configuration.
Public networks and the private network (cluster interconnect) must use separate adapters.
You must have at least one public network that is connected to all cluster nodes.
You can have as many additional public network connections as your hardware configuration allows.
The local-mac-address? variable must use the default value true for Ethernet adapters. Sun Cluster 3.1 software does not support a local-mac-address? value of false for Ethernet adapters. This requirement is a change from Sun Cluster 3.0 software, which required a local-mac-address? value of false.
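On SPARC based nodes, you can check and set this OpenBoot PROM variable with the eeprom(1M) command. The following console transcript is a sketch; run the commands as superuser on each cluster node.

```
# Display the current value of the variable
# eeprom local-mac-address?
local-mac-address?=true

# If the value is false, set it to true (takes effect at the next boot)
# eeprom local-mac-address?=true
```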
See also IP Network Multipathing Groups for guidelines on planning public network adapter backup groups. For more information about public network interfaces, see the Sun Cluster 3.1 Concepts Guide.
Add this planning information to the “Disk Device Groups Worksheet” in Sun Cluster 3.1 Release Notes.
You must configure all volume manager disk groups as Sun Cluster disk device groups. This configuration enables a secondary node to host multihost disks if the primary node fails. Consider the following points when you plan disk device groups.
Failover – You can configure multiported disks and properly configured volume manager devices as failover devices. Proper configuration of a volume manager device includes multiported disks and correct setup of the volume manager itself, so that multiple nodes can host the exported device. You cannot configure tape drives, CD-ROMs, or single-ported disks as failover devices.
Mirroring – You must mirror the disks to protect the data from disk failure. See Mirroring Guidelines for additional guidelines. See Installing and Configuring Solstice DiskSuite/Solaris Volume Manager Software or Installing and Configuring VxVM Software and your volume manager documentation for instructions on mirroring.
For more information about disk device groups, see the Sun Cluster 3.1 Concepts Guide.
Add this planning information to the “Public Networks Worksheet” in Sun Cluster 3.1 Release Notes.
Internet Protocol (IP) Network Multipathing groups, which replace Network Adapter Failover (NAFO) groups, provide public network adapter monitoring and failover, and are the foundation for a network address resource. If a multipathing group is configured with two or more adapters and one adapter fails, all of the addresses on the failed adapter fail over to another adapter in the group. In this way, the multipathing group maintains public network connectivity to the subnet to which its adapters connect.
Consider the following points when you plan your multipathing groups.
Each public network adapter must belong to a multipathing group.
The local-mac-address? variable must have a value of true for Ethernet adapters. This is a change from the requirement for Sun Cluster 3.0 software.
You must configure a test IP address for each multipathing group adapter.
Test IP addresses for all adapters in the same multipathing group must belong to a single IP subnet.
Test IP addresses must not be used by normal applications because they are not highly available.
There are no requirements or restrictions for the name of a multipathing group.
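To illustrate the test address requirement, a Solaris 9 style /etc/hostname.* file for one adapter in a multipathing group might look like the following. The hostname, adapter name, and group name here are hypothetical examples; the deprecated and -failover keywords mark the test address so that applications do not use it and so that it does not fail over.

```
# Hypothetical /etc/hostname.qfe0 on node phys-schost-1:
# the node's public address, plus a test address in group sc_ipmp0
phys-schost-1 netmask + broadcast + group sc_ipmp0 up \
addif phys-schost-1-test deprecated -failover netmask + broadcast + up
```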
For more information about IP Network Multipathing, see “Deploying Network Multipathing” in IP Network Multipathing Administration Guide or “Administering Network Multipathing (Task)” in System Administration Guide: IP Services.
Sun Cluster configurations use quorum devices to maintain data and resource integrity. If the cluster temporarily loses connection to a node, the quorum device prevents amnesia or split-brain problems when the cluster node attempts to rejoin the cluster. You assign quorum devices by using the scsetup(1M) utility.
Consider the following points when you plan quorum devices.
Minimum – A two-node cluster must have at least one shared disk assigned as a quorum device. For other topologies, quorum devices are optional.
Odd-number rule – If more than one quorum device is configured in a two-node cluster, or in a pair of nodes directly connected to the quorum device, configure an odd number of quorum devices so that the quorum devices have completely independent failure pathways.
Connection – Do not connect a quorum device to more than two nodes.
For more information about quorum, see the Sun Cluster 3.1 Concepts Guide.