This section provides guidelines for the Sun Cluster components that you configure during installation.
Add this planning information to the "Cluster and Node Names Worksheet" in the Sun Cluster 3.0 12/01 Release Notes.
You specify a name for the cluster during Sun Cluster installation. The cluster name should be unique throughout the enterprise.
Add this planning information to the "Cluster and Node Names Worksheet" in the Sun Cluster 3.0 12/01 Release Notes. Information for most other worksheets is grouped by node name.
The node name is the name you assign to a machine when you install the Solaris operating environment. During Sun Cluster installation, you specify the names of all nodes that you are installing as a cluster.
Add this planning information to the "Cluster and Node Names Worksheet" in the Sun Cluster 3.0 12/01 Release Notes.
Sun Cluster software uses the private network for internal communication between nodes. Sun Cluster requires at least two connections to the cluster interconnect on the private network. You specify the private network address and netmask when you install Sun Cluster software on the first node of the cluster. You can either accept the default private network address (172.16.0.0) and netmask (255.255.0.0) or type different choices if the default network address is already in use elsewhere in the enterprise.
After you have successfully installed the node as a cluster member, you cannot change the private network address and netmask.
If you specify a private network address other than the default, it must meet the following requirements.
Use zeroes for the last two octets of the address.
Follow the guidelines in RFC 1597 for network address assignments.
See the TCP/IP and Data Communications Administration Guide for instructions on obtaining copies of RFCs.
If you specify a netmask other than the default, it must meet the following requirements.
Minimally mask all bits given in the private network address.
Have no "holes", that is, the set bits of the mask must be contiguous from the high end.
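The two netmask rules above can be checked mechanically. The following POSIX shell sketch is illustrative only (the function names are not Sun Cluster commands): one function verifies that a dotted-quad mask has no holes, and another verifies that the mask covers every bit set in the network address.

```shell
# Sketch only: validate a candidate private netmask. Function names are
# illustrative, not part of Sun Cluster software.

# Convert a dotted-quad string such as 255.255.0.0 to a 32-bit integer.
quad_to_int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# A mask has no "holes" when its set bits are contiguous from the high end;
# equivalently, the complement of the mask is one less than a power of two.
mask_has_no_holes() {
  inv=$(( $(quad_to_int "$1") ^ 0xFFFFFFFF ))
  [ $(( inv & (inv + 1) )) -eq 0 ]
}

# The mask must also cover every bit that is set in the network address.
mask_covers_address() {
  addr=$(quad_to_int "$1")
  mask=$(quad_to_int "$2")
  [ $(( addr & ~mask & 0xFFFFFFFF )) -eq 0 ]
}

mask_has_no_holes 255.255.0.0 && echo "255.255.0.0: no holes"
mask_has_no_holes 255.0.255.0 || echo "255.0.255.0: has holes"
mask_covers_address 172.16.0.0 255.255.0.0 && echo "default mask covers default address"
```

The default address and netmask (172.16.0.0 with 255.255.0.0) pass both checks.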
Add this planning information to the "Cluster and Node Names Worksheet" in the Sun Cluster 3.0 12/01 Release Notes.
The private hostname is the name used for internode communication over the private network interface. Private hostnames are created automatically during Sun Cluster installation and follow the naming convention clusternodenodeid-priv, where nodeid is the internal node ID number of the node. This node ID number is automatically assigned to each node when the node becomes a cluster member during Sun Cluster installation. After installation, you can rename private hostnames by using the scsetup(1M) utility.
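As a sketch of the naming convention only (the node IDs shown are illustrative, and the function is not a Sun Cluster command), the default private hostname is derived from a node ID like this:

```shell
# Sketch of the default private hostname convention, clusternodenodeid-priv.
# The node ID values used here are illustrative.
priv_hostname() {
  echo "clusternode${1}-priv"
}

priv_hostname 1   # clusternode1-priv
priv_hostname 2   # clusternode2-priv
```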
Add this planning information to the "Cluster Interconnect Worksheet" in the Sun Cluster 3.0 12/01 Release Notes.
The cluster interconnect provides the hardware pathway for private network communication between cluster nodes. Each interconnect consists of a cable connected between two transport adapters, a transport adapter and a transport junction, or two transport junctions. During Sun Cluster installation, you specify the following configuration information for two cluster interconnects.
Transport adapters - For the transport adapters, such as ports on network interfaces, specify the transport adapter names and the transport type. If your configuration is a two-node cluster, you also specify whether your interconnect is direct connected (adapter to adapter) or uses a transport junction. Even if your two-node cluster is direct connected, you can still specify a transport junction for the interconnect. Specifying a transport junction makes it easier to add another node to the cluster in the future.
Transport junctions - If you use transport junctions, such as a network switch, specify a transport junction name for each interconnect. You can use the default name switchN, where N is a number automatically assigned during installation, or create other names.
Also specify the junction port name, or accept the default name. The default port name is the same as the internal node ID number of the node that hosts the adapter end of the cable. However, you cannot use the default port name for certain adapter types, such as SCI.
Clusters with three or more nodes must use transport junctions. Direct connection between cluster nodes is supported only for two-node clusters.
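For example, a two-node cluster that uses transport junctions might record its two interconnects as follows. The adapter names (hme1, qfe1) are hypothetical; the junction names use the default switchN convention, and the default port names match the node IDs of the attached nodes.

```
Interconnect 1: node 1, adapter hme1 -- switch1, port 1
                node 2, adapter hme1 -- switch1, port 2
Interconnect 2: node 1, adapter qfe1 -- switch2, port 1
                node 2, adapter qfe1 -- switch2, port 2
```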
You can configure additional private network connections after installation by using the scsetup(1M) utility.
For more information about the cluster interconnect, see Sun Cluster 3.0 12/01 Concepts.
Add this planning information to the "Public Networks Worksheet" in the Sun Cluster 3.0 12/01 Release Notes.
Public networks communicate outside the cluster. Consider the following points when you plan your public network configuration.
Public networks and the private network (cluster interconnect) must use separate adapters.
You must have at least one public network that is connected to all cluster nodes.
You can have as many additional public network connections as your hardware configuration allows.
The local-mac-address variable must use the default value false. Sun Cluster software does not support a local-mac-address value of true.
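On Solaris, this OpenBoot PROM variable (named local-mac-address? in the PROM, with a trailing question mark) can be inspected and set with the eeprom(1M) command. The transcript below is illustrative; see the eeprom(1M) man page for exact usage.

```
# eeprom "local-mac-address?"
local-mac-address?=false

# eeprom "local-mac-address?=false"
```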
See also "NAFO Groups" for guidelines on planning public network adapter backup groups. For more information about public network interfaces, see Sun Cluster 3.0 12/01 Concepts.
Add this planning information to the "Disk Device Group Configurations Worksheet" in the Sun Cluster 3.0 12/01 Release Notes.
You must configure all volume manager disk groups as Sun Cluster disk device groups. This configuration enables a secondary node to host multihost disks if the primary node fails. Consider the following points when you plan disk device groups.
Failover - You can configure multiported disks and properly configured volume manager devices as failover devices. Proper configuration requires both multiported disks and correct setup of the volume manager itself, so that multiple nodes can host the exported device. You cannot configure tape drives, CD-ROMs, or single-ported disks as failover devices.
Mirroring - You must mirror the disks to protect the data from disk failure. See "Mirroring Guidelines" for additional guidelines. See "Installing and Configuring Solstice DiskSuite Software" or "Installing and Configuring VxVM Software" and your volume manager documentation for instructions on mirroring.
For more information about disk device groups, see Sun Cluster 3.0 12/01 Concepts.
Add this planning information to the "Public Networks Worksheet" in the Sun Cluster 3.0 12/01 Release Notes.
A Network Adapter Failover (NAFO) group provides public network adapter monitoring and failover, and is the foundation for a network address resource. If a NAFO group is configured with two or more adapters and the active adapter fails, all of the NAFO group's addresses fail over to another adapter in the NAFO group. In this way, the active NAFO group adapter maintains public network connectivity to the subnet to which the adapters in the NAFO group connect.
Consider the following points when you plan your NAFO groups.
Each public network adapter must belong to a NAFO group.
Each node can have only one NAFO group per subnet.
No more than one adapter in a given NAFO group can have a hostname association, in the form of an /etc/hostname.adapter file.
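For example, if a NAFO group on a node contains adapters hme0 and hme1 (the adapter and host names here are hypothetical), only one of the two adapters would have a corresponding /etc/hostname.adapter file:

```
/etc/hostname.hme0:
    phys-schost-1

(no /etc/hostname.hme1 file exists for the backup adapter in the group)
```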
The NAFO group naming convention is nafoN, where N is the number you supply when you create the NAFO group.
For more information about Network Adapter Failover, see Sun Cluster 3.0 12/01 Concepts.
Sun Cluster configurations use quorum devices to maintain data and resource integrity. If the cluster temporarily loses connection to a node, the quorum device prevents amnesia or split-brain problems when the cluster node attempts to rejoin the cluster. You assign quorum devices by using the scsetup(1M) utility.
Consider the following points when you plan quorum devices.
Minimum - A two-node cluster must have at least one shared disk assigned as a quorum device. For other topologies, quorum devices are optional.
Odd-number rule - If more than one quorum device is configured in a two-node cluster, or in a pair of nodes directly connected to the quorum device, configure an odd number of quorum devices so that the quorum devices have completely independent failure pathways.
Connection - Do not connect a quorum device to more than two nodes.
For more information about quorum, see Sun Cluster 3.0 12/01 Concepts.