This section provides guidelines for planning and preparing for Sun Cluster software installation. For detailed information about Sun Cluster components, refer to Sun Cluster 3.0 Concepts.
Ensure that you have any necessary license certificates available before you begin software installation. Sun Cluster software does not require a license certificate, but each node installed with Sun Cluster software must be covered under your Sun Cluster software license agreement.
For licensing requirements for volume manager software and applications software, refer to the installation documentation for those products.
After installing each software product, you must also install any required patches. For the current list of required patches, refer to Sun Cluster 3.0 Release Notes or consult your Enterprise Services representative or service provider. Refer to Sun Cluster 3.0 System Administration Guide for general guidelines and procedures for applying patches.
You must set up a number of IP addresses for various Sun Cluster components, depending on your cluster configuration. Each node in the cluster configuration must have at least one public network connection to the same set of public subnets.
The following table lists the components that need IP addresses assigned to them. Add these IP addresses to any naming services that are used. Also add these IP addresses to the local /etc/inet/hosts file on each cluster node after Sun Cluster software is installed. A worked example follows the table.
Table 1-3 Sun Cluster Components That Use IP Addresses

| Component | IP Addresses Needed |
|---|---|
| Administrative console | 1 per subnet |
| Cluster nodes | 1 per node, per subnet |
| Terminal concentrator or System Service Processor | 1 |
| Logical addresses | 1 per logical host resource, per subnet |
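For example, a two-node cluster on a single public subnet, with an administrative console, a terminal concentrator, and one logical host resource, needs five IP addresses: one for the console, one for each of the two nodes, one for the terminal concentrator, and one for the logical address. The corresponding /etc/inet/hosts entries might look like the following, where all addresses and hostnames are hypothetical:

    192.168.10.50   admincon        # administrative console
    192.168.10.51   phys-schost-1   # cluster node 1
    192.168.10.52   phys-schost-2   # cluster node 2
    192.168.10.53   tc              # terminal concentrator
    192.168.10.54   schost-lh       # logical host address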
A terminal concentrator communicates between the administrative console and the cluster node consoles. Sun Enterprise™ E10000 servers use a System Service Processor (SSP) instead of a terminal concentrator. For more information about console access, refer to Sun Cluster 3.0 Concepts.
Each data service resource group that uses a logical address must have a hostname specified for each public network from which the logical address can be accessed. Refer to Sun Cluster 3.0 Data Services Installation and Configuration Guide for information and worksheets for planning resource groups. For more information about data services and resources, also refer to Sun Cluster 3.0 Concepts.
This section provides guidelines for the Sun Cluster components that you configure during installation.
Add this planning information to the "Cluster and Node Names Worksheet" in Sun Cluster 3.0 Release Notes.
You specify a name for the cluster during Sun Cluster installation. The cluster name should be unique throughout the enterprise.
Add this planning information to the "Cluster and Node Names Worksheet" in Sun Cluster 3.0 Release Notes. Information for most other worksheets is grouped by node name.
The node name is the name you assign to a machine during installation of the Solaris operating environment. During Sun Cluster installation, you specify the names of all nodes that you are installing as a cluster.
Add this planning information to the "Cluster and Node Names Worksheet" in Sun Cluster 3.0 Release Notes.
Sun Cluster software uses the private network for internal communication between nodes. A Sun Cluster configuration requires at least two connections to the cluster interconnect on the private network. You specify the private network address and netmask when you install Sun Cluster software on the first node of the cluster. You can accept the default private network address (172.16.0.0) and netmask (255.255.0.0), or specify different values if the default network address is already in use elsewhere in the enterprise.
After you have successfully installed the node as a cluster member, you cannot change the private network address and netmask.
If you specify a private network address other than the default, it must meet the following requirements.
Use zeroes for the last two octets of the address
Follow the guidelines in RFC 1597 for network address assignments
Refer to TCP/IP and Data Communications Administration Guide for instructions on obtaining copies of RFCs.
If you specify a netmask other than the default, it must meet the following requirements (see the example after this list).
Minimally mask all bits given in the private network address
Have no "holes" (the mask bits must be contiguous)
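For example, under one reading of these rules, choosing the RFC 1597 address 192.168.0.0 (note the zeroes in the last two octets) gives results such as the following; the specific values are illustrative only:

    Private network address:  192.168.0.0
    Acceptable netmasks:      255.255.0.0 (the default), 255.255.255.0
    Unacceptable netmasks:    255.0.0.0      (does not mask all bits of the address)
                              255.255.0.255  (mask bits are not contiguous - a "hole")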
Add this planning information to the "Cluster Interconnect Worksheet" in Sun Cluster 3.0 Release Notes.
The cluster interconnect provides the hardware pathway for private network communication between cluster nodes. Each interconnect consists of a cable between two transport adapters, between a transport adapter and a transport junction, or between two transport junctions. During Sun Cluster installation, you specify the following configuration information for two cluster interconnects; a sample set of worksheet entries follows the list.
Transport adapters - For the transport adapters, such as ports on network interfaces, specify the transport adapter names and transport type. If your configuration is a two-node cluster, you also specify whether your interconnect is direct connected (adapter to adapter) or uses a transport junction.
Transport junctions - If transport junctions, such as a network switch, are used, specify the transport junction name for each interconnect. The default name is switchN, where N is a number automatically assigned during installation. Also specify the junction port name, or accept the default name. The default port name is the same as the node ID of the node hosting the adapter end of the cable. However, you cannot use the default port name for certain adapter types, such as SCI.
Clusters with three or more nodes must use transport junctions. Direct connection between cluster nodes is supported only for two-node clusters.
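As a hypothetical illustration, a two-node cluster that uses two transport junctions and accepts the default junction and port names might record the following worksheet entries (node and adapter names are examples only):

    Node            Adapter   Junction   Port
    phys-schost-1   hme1      switch1    1
    phys-schost-1   hme2      switch2    1
    phys-schost-2   hme1      switch1    2
    phys-schost-2   hme2      switch2    2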
You can configure additional private network connections after installation by using the scsetup(1M) utility.
For more information about the cluster interconnect, refer to Sun Cluster 3.0 Concepts.
Add this planning information to the "Cluster and Node Names Worksheet" in Sun Cluster 3.0 Release Notes.
The private hostname is the name used for internode communication over the private network interface. Private hostnames are automatically created during Sun Cluster installation and follow the naming convention clusternodeN-priv, where N is the internal node ID number. This node ID number is automatically assigned to each node when it becomes a cluster member during Sun Cluster installation. After installation, you can rename private hostnames by using the scsetup(1M) utility.
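For example, the node that is assigned node ID 1 receives the private hostname clusternode1-priv, and the node that is assigned node ID 2 receives clusternode2-priv.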
Add this planning information to the "Public Networks Worksheet" in Sun Cluster 3.0 Release Notes.
Public networks communicate outside the cluster. Consider the following points when planning your public network configuration.
Public networks and the private network (cluster interconnect) must use separate adapters.
You must have at least one public network that is connected to all cluster nodes.
You can have as many additional public network connections as your hardware configuration allows.
See also "NAFO Groups" for guidelines on planning public network adapter backup groups. For more information about public network interfaces, refer to Sun Cluster 3.0 Concepts.
Add this planning information to the "Disk Device Group Configurations Worksheet" in Sun Cluster 3.0 Release Notes.
You must configure all volume manager disk groups as Sun Cluster disk device groups. This configuration enables multihost disks to be hosted by a secondary node if the primary node fails. Consider the following points when planning disk device groups; a sample registration command follows the list.
Failover - You can configure multiported disks and properly configured volume manager devices as failover devices. Proper configuration of a volume manager device includes multiported disks and correct setup of the volume manager itself, so that the exported device can be hosted by multiple nodes. You cannot configure tape drives, CD-ROMs, or single-ported disks as failover devices.
Mirroring - You must mirror the disks to protect the data from disk failure. Refer to your volume manager documentation for instructions on mirroring.
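As a sketch only, assuming a VERITAS Volume Manager disk group named nfs-dg and hypothetical node names, you can register a disk group as a Sun Cluster disk device group either through the scsetup(1M) utility or with a scconf(1M) command of the following general form (verify the exact syntax against the scconf(1M) man page):

    # scconf -a -D type=vxvm,name=nfs-dg,nodelist=phys-schost-1:phys-schost-2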
For more information about disk device groups, refer to Sun Cluster 3.0 Concepts.
Add this planning information to the "Public Networks Worksheet" in Sun Cluster 3.0 Release Notes.
A Network Adapter Failover (NAFO) group provides public network adapter monitoring and failover, and is the foundation for a network address resource. If the active adapter of a NAFO group that is configured with two or more adapters fails, all of its addresses fail over to another adapter in the group. In this way, the NAFO group maintains public network connectivity to the subnet to which its adapters connect.
Consider the following points when planning your NAFO groups.
Each public network adapter must belong to a NAFO group.
Each node can have only one NAFO group per subnet.
Only one adapter in a given NAFO group can have a hostname association, in the form of an /etc/hostname.adapter file, as the example after this list shows.
The NAFO group naming convention is nafoN, where N is the number that you supply when you create the group.
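For example, assuming hypothetical adapter names, a node whose public hostname is phys-schost-1 and whose NAFO group nafo0 contains adapters hme0 and qfe0 would have a hostname association on only the active adapter:

    # cat /etc/hostname.hme0
    phys-schost-1

The backup adapter, qfe0, has no /etc/hostname.qfe0 file. NAFO groups themselves are created after installation with the pnmset(1M) utility.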
For more information about Network Adapter Failover, refer to Sun Cluster 3.0 Concepts.
Sun Cluster configurations use quorum devices to maintain data and resource integrity. If the cluster temporarily loses connection to a node, the quorum device prevents amnesia or split-brain problems when the cluster node attempts to rejoin the cluster. You assign quorum devices by using the scsetup(1M) utility.
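For example, you can run the interactive scsetup(1M) utility from one node and select its quorum option to assign a shared disk as a quorum device. The equivalent low-level scconf(1M) command takes the following general form, where the global device name d12 is hypothetical (verify the exact syntax against the scconf(1M) man page):

    # scconf -a -q globaldev=d12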
Consider the following points when planning quorum devices.
Minimum - A two-node cluster must have at least one shared disk assigned as a quorum device. For other topologies, quorum devices are optional.
Odd-number rule - If more than one quorum device is configured in a two-node cluster, or in a pair of nodes directly connected to the quorum device, configure an odd number of quorum devices so that the quorum devices have completely independent failure pathways.
Connection - A quorum device cannot be connected to more than two nodes.
For more information about quorum, refer to Sun Cluster 3.0 Concepts.