This section provides guidelines for the following Sun Cluster components that you configure during installation:
Add this planning information to the Cluster and Node Names Worksheet.
Specify a name for the cluster during Sun Cluster installation. The cluster name should be unique throughout the enterprise.
Add this planning information to the Cluster and Node Names Worksheet. Information for most other worksheets is grouped by node name.
The node name is the name that you assign to a machine when you install the Solaris operating environment. During Sun Cluster installation, you specify the names of all nodes that you are installing as a cluster. In single-node cluster installations, the default node name is the same as the cluster name.
Add this planning information to the Cluster and Node Names Worksheet.
You do not need to configure a private network for a single-node cluster.
Sun Cluster software uses the private network for internal communication between nodes. A Sun Cluster configuration requires at least two connections to the cluster interconnect on the private network. You specify the private network address and netmask when you install Sun Cluster software on the first node of the cluster. You can either accept the default private network address (172.16.0.0) and netmask (255.255.0.0) or type different choices if the default network address is already in use elsewhere in the enterprise.
After you have successfully installed the node as a cluster member, you cannot change the private network address and netmask.
If you specify a private network address other than the default, the address must meet the following requirements:
Use zeroes for the last two octets of the address.
Follow the guidelines in RFC 1597 for network address assignments.
You can contact the InterNIC to obtain copies of RFCs. See “Planning Your TCP/IP Network” in System Administration Guide, Volume 3 (Solaris 8) or “Planning Your TCP/IP Network (Task)” in System Administration Guide: IP Services (Solaris 9) for instructions.
If you specify a netmask other than the default, the netmask must minimally mask all bits that are given in the private network address.
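The two address rules and the netmask rule above can be sketched as a small check. This is an illustrative helper, not part of Sun Cluster software; it uses Python's standard `ipaddress` module, and the "RFC 1597 guidelines" check is approximated by the module's private-range test.

```python
import ipaddress

# Hypothetical helper illustrating the stated rules for a custom private
# network address: the last two octets must be zero, the address should
# come from a private (RFC 1597/1918) range, and the netmask must mask
# at least all bits that are set in the address itself.
def valid_private_network(addr: str, mask: str) -> bool:
    a = ipaddress.IPv4Address(addr)
    m = ipaddress.IPv4Address(mask)
    octets = str(a).split(".")
    if octets[2] != "0" or octets[3] != "0":
        return False                      # last two octets must be zero
    if not a.is_private:
        return False                      # use a private address range
    return int(a) & int(m) == int(a)      # mask covers all address bits

print(valid_private_network("172.16.0.0", "255.255.0.0"))  # default: True
print(valid_private_network("172.16.0.0", "255.0.0.0"))    # mask too short: False
```

For example, the default pair (172.16.0.0, 255.255.0.0) passes all three checks, while a netmask of 255.0.0.0 fails because it does not cover the bits set in the second octet of the address.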
Add this planning information to the Cluster and Node Names Worksheet.
The private hostname is the name that is used for internode communication over the private-network interface. Private hostnames are created automatically during Sun Cluster installation and follow the naming convention clusternodenodeid-priv, where nodeid is the internal node ID number. The node ID number is automatically assigned to each node when the node becomes a cluster member during installation. After installation, you can rename private hostnames by using the scsetup(1M) utility.
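The naming convention above can be sketched as a one-line function. The node IDs shown are illustrative; in a real cluster they are assigned automatically at installation.

```python
# Sketch of the clusternodenodeid-priv naming convention. Node IDs are
# assigned by Sun Cluster software at installation; values here are
# examples only.
def private_hostname(node_id: int) -> str:
    return f"clusternode{node_id}-priv"

print(private_hostname(1))  # clusternode1-priv
print(private_hostname(2))  # clusternode2-priv
```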
Add this planning information to the Cluster Interconnect Worksheet.
You do not need to configure a cluster interconnect for a single-node cluster. However, if you anticipate eventually adding nodes to a single-node cluster configuration, you might want to configure the cluster interconnect for future use.
The cluster interconnects provide the hardware pathways for private network communication between cluster nodes. Each interconnect consists of a cable that is connected in one of the following ways:
Between two transport adapters
Between a transport adapter and a transport junction
Between two transport junctions
During Sun Cluster installation, you specify the following configuration information for two cluster interconnects:
Transport adapters – For the transport adapters, such as ports on network interfaces, specify the transport adapter names and transport type. If your configuration is a two-node cluster, you also specify whether your interconnect is direct connected (adapter to adapter) or uses a transport junction. If your two-node cluster is direct connected, you can still specify a transport junction for the interconnect.
If you specify a transport junction, you can more easily add another node to the cluster in the future.
See the scconf_trans_adap_*(1M) family of man pages for information about a specific transport adapter.
Transport junctions – If you use transport junctions, such as a network switch, specify a transport junction name for each interconnect. You can use the default name switchN, where N is a number that is automatically assigned during installation, or create another name.
Also specify the junction port name or accept the default name. The default port name is the same as the internal node ID number of the node that hosts the adapter end of the cable. However, you cannot use the default port name for certain adapter types, such as SCI-PCI.
Clusters with three or more nodes must use transport junctions. Direct connection between cluster nodes is supported only for two-node clusters.
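The default names described above can be illustrated with a small sketch that lays out the cabling table for a hypothetical two-node cluster that uses transport junctions. The adapter names (qfe1, qfe2) are examples only; the defaults follow the text: junctions are named switchN, and a junction port name defaults to the node ID of the node whose adapter the cable reaches.

```python
# Illustrative sketch of default interconnect naming for a two-node
# cluster with two interconnects through transport junctions.
# Adapter names are hypothetical; junction and port defaults follow
# the conventions described in the text.
def junction_cabling(node_ids, adapters_per_node, n_interconnects=2):
    cables = []
    for i in range(n_interconnects):
        junction = f"switch{i + 1}"            # default junction name
        for node, adapters in zip(node_ids, adapters_per_node):
            # Default port name is the node ID of the adapter's node.
            cables.append((adapters[i], junction, str(node)))
    return cables

for cable in junction_cabling([1, 2], [["qfe1", "qfe2"], ["qfe1", "qfe2"]]):
    print(cable)  # (adapter, junction, port)
```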
You can configure additional private-network connections after installation by using the scsetup(1M) utility.
For more information about the cluster interconnect, see the Sun Cluster 3.1 10/03 Concepts Guide.
Add this planning information to the Public Networks Worksheet.
Public networks communicate outside the cluster. Consider the following points when you plan your public network configuration.
Public networks and the private network (cluster interconnect) must use separate adapters.
You must have at least one public network that is connected to all cluster nodes.
You can have as many additional public network connections as your hardware configuration allows.
The local-mac-address? variable must use the default value true for Ethernet adapters. Sun Cluster 3.1 software does not support a local-mac-address? value of false for Ethernet adapters. This requirement is a change from Sun Cluster 3.0 software, which required a local-mac-address? value of false.
See IP Network Multipathing Groups for guidelines on planning public-network-adapter backup groups. For more information about public network interfaces, see the Sun Cluster 3.1 10/03 Concepts Guide.
Add this planning information to the Disk Device Group Configurations Worksheet.
You must configure all volume-manager disk groups as Sun Cluster disk device groups. This configuration enables a secondary node to host multihost disks if the primary node fails. Consider the following points when you plan disk device groups.
Failover – You can configure multiported disks, and volume-manager devices that are properly configured on multiported disks, as failover devices. This configuration ensures that multiple nodes can host the exported device. You cannot configure tape drives, CD-ROMs, or single-ported disks as failover devices.
Mirroring – You must mirror the disks to protect the data from disk failure. See Mirroring Guidelines for additional guidelines. See Installing and Configuring Solstice DiskSuite/Solaris Volume Manager Software or Installing and Configuring VxVM Software and your volume-manager documentation for instructions on mirroring.
For more information about disk device groups, see the Sun Cluster 3.1 10/03 Concepts Guide.
Add this planning information to the Public Networks Worksheet.
Internet Protocol (IP) Network Multipathing groups, which replace Network Adapter Failover (NAFO) groups, provide public-network-adapter monitoring and failover, and are the foundation for a network-address resource. A multipathing group provides high availability when it is configured with two or more adapters. If one adapter fails, all of the addresses on the failed adapter fail over to another adapter in the group. In this way, the multipathing group maintains public-network connectivity to the subnet to which its adapters connect.
Consider the following points when you plan your multipathing groups.
Each public network adapter must belong to a multipathing group.
For multipathing groups that contain two or more adapters, you must configure a test IP address for each adapter in the group. If a multipathing group contains only one adapter, you do not need to configure a test IP address.
Test IP addresses for all adapters in the same multipathing group must belong to a single IP subnet.
Test IP addresses must not be used by normal applications because the test IP addresses are not highly available.
In the /etc/default/mpathd file, do not change the value of TRACK_INTERFACES_ONLY_WITH_GROUPS from yes to no.
The name of a multipathing group has no requirements or restrictions.
For more information about IP Network Multipathing, see “Deploying Network Multipathing” in IP Network Multipathing Administration Guide (Solaris 8) or “Administering Network Multipathing (Task)” in System Administration Guide: IP Services (Solaris 9).
Sun Cluster configurations use quorum devices to maintain data and resource integrity. If the cluster temporarily loses connection to a node, the quorum device prevents amnesia or split-brain problems when the cluster node attempts to rejoin the cluster. You assign quorum devices by using the scsetup(1M) utility.
You do not need to configure quorum devices for a single-node cluster.
Consider the following points when you plan quorum devices.
Minimum – A two-node cluster must have at least one shared disk assigned as a quorum device. For other topologies, quorum devices are optional.
Odd-number rule – If more than one quorum device is configured in a two-node cluster, or in a pair of nodes directly connected to the quorum device, configure an odd number of quorum devices. This configuration ensures that the quorum devices have completely independent failure pathways.
Connection – You must connect a quorum device to at least two nodes.
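The vote arithmetic behind these rules, as described in the Sun Cluster Concepts Guide, can be sketched briefly: each node contributes one vote, a quorum device contributes one fewer vote than the number of nodes connected to it, and the cluster must hold a majority of all configured votes. The function below is an illustration of that arithmetic, not a Sun Cluster interface.

```python
# Illustrative sketch of quorum vote counting: each node has one vote,
# a quorum device has (connected nodes - 1) votes, and a majority of
# the total is required for the cluster to operate.
def votes_needed(node_count, quorum_device_connections):
    total = node_count + sum(n - 1 for n in quorum_device_connections)
    return total // 2 + 1

# Two-node cluster with one shared-disk quorum device connected to both
# nodes: 2 node votes + 1 device vote = 3 total, so 2 votes form a quorum.
print(votes_needed(2, [2]))  # 2
```

This is why a two-node cluster requires a quorum device: without one, the two node votes alone cannot form a majority when one node is down.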
For more information about quorum devices, see the Sun Cluster 3.1 10/03 Concepts Guide.