This section provides guidelines for the following Sun Cluster components that you configure:
Cluster name
Node names and zone names
Private network and private hostnames
Cluster interconnect (transport adapters and transport switches)
Quorum devices
Add this information to the appropriate configuration planning worksheet.
Specify a name for the cluster during Sun Cluster configuration. The cluster name should be unique throughout the enterprise.
The cluster node name is the same name that you assign to the machine when you install it with the Solaris OS. See the hosts(4) man page for information about naming requirements.
In single-node cluster installations, the default cluster name is the name of the node.
During Sun Cluster configuration, you specify the names of all nodes that you are installing in the cluster.
On the Solaris 10 OS, use the naming convention nodename:zonename to specify a non-global zone to a Sun Cluster command.
The nodename is the name of the cluster node.
The zonename is the name that you assign to the non-global zone when you create the zone on the node. The zone name must be unique on the node. However, you can use the same zone name on different nodes, because the different node name in nodename:zonename makes the complete non-global zone name unique in the cluster.
To specify the global zone, you only need to specify the node name.
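For example, assume a hypothetical node named phys-schost-1 that hosts a non-global zone named zone1. You would write phys-schost-1 to specify the global zone and phys-schost-1:zone1 to specify the non-global zone. Commands that accept node lists, such as clresourcegroup(1CL), use the same form; the following command is a sketch with illustrative names only, so verify the option syntax against the man pages for your release.
    # clresourcegroup create -n phys-schost-1:zone1,phys-schost-2:zone1 test-rg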
You do not need to configure a private network for a single-node cluster. The scinstall utility automatically assigns the default private-network address and netmask, even though a private network is not used by the cluster.
Sun Cluster software uses the private network for internal communication among nodes and among non-global zones that are managed by Sun Cluster software. A Sun Cluster configuration requires at least two connections to the cluster interconnect on the private network. When you configure Sun Cluster software on the first node of the cluster, you specify the private-network address and netmask in one of the following ways:
Accept the default private-network address (172.16.0.0) and netmask (255.255.248.0). This IP address range supports a combined maximum of 64 nodes and non-global zones and a maximum of 10 private networks.
The maximum number of nodes that an IP address range can support does not reflect the maximum number of nodes that the hardware configuration can support.
Specify a different allowable private-network address and accept the default netmask.
Accept the default private-network address and specify a different netmask.
Specify both a different private-network address and a different netmask.
If you choose to specify a different netmask, the scinstall utility prompts you for the number of nodes and the number of private networks that you want the IP address range to support. The number of nodes that you specify should also include the expected number of non-global zones that will use the private network.
The utility calculates the netmask for the minimum IP address range that will support the number of nodes and private networks that you specified. The calculated netmask might support more than the supplied number of nodes (including non-global zones) and private networks. The scinstall utility also calculates a second netmask that would be the minimum to support twice the number of nodes and private networks. This second netmask would enable the cluster to accommodate future growth without the need to reconfigure the IP address range.
The utility then asks you what netmask to choose. You can specify either of the calculated netmasks or provide a different one. The netmask that you specify must minimally support the number of nodes and private networks that you specified to the utility.
To change the private-network address and netmask after the cluster is established, see How to Change the Private Network Address or Address Range of an Existing Cluster in Sun Cluster System Administration Guide for Solaris OS. You must bring down the cluster to make these changes.
Changing the cluster private IP address range might be necessary to support the addition of nodes, non-global zones, or private networks.
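If you need to inspect or change these settings later, the cluster(1CL) command provides netprops subcommands. The following is a minimal sketch that assumes the Sun Cluster 3.2 command set and example values; the private-network properties can be changed only while the nodes are in noncluster mode, so follow the referenced procedure and check the cluster(1CL) man page before you use it.
    # cluster show-netprops
    # cluster set-netprops -p private_netaddr=172.16.0.0 -p private_netmask=255.255.240.0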
If you specify a private-network address other than the default, the address must meet the following requirements:
Address and netmask sizes - The private network address must be a valid network address for the netmask that you specify; that is, the address cannot have bits set in the host portion that the netmask defines. For example, you can use a private network address of 172.16.10.0 with a netmask of 255.255.255.0. But you cannot use a private network address of 172.16.10.0 with a netmask of 255.255.0.0, because 172.16.10.0 is not the base address of that network.
Acceptable addresses - The address must be included in the block of addresses that RFC 1918 reserves for use in private networks. You can contact the InterNIC to obtain copies of RFCs or view RFCs online at http://www.rfcs.org.
Use in multiple clusters - You can use the same private network address in more than one cluster. Private IP network addresses are not accessible from outside the cluster.
IPv6 - Sun Cluster software does not support IPv6 addresses for the private interconnect. The system does configure IPv6 addresses on the private network adapters to support scalable services that use IPv6 addresses. But internode communication on the private network does not use these IPv6 addresses.
The private hostname is the name that is used for internode communication over the private-network interface. Private hostnames are automatically created during Sun Cluster configuration. These private hostnames follow the naming convention clusternodenodeid-priv, where nodeid is the numeral of the internal node ID. During Sun Cluster configuration, the node ID number is automatically assigned to each node when the node becomes a cluster member. After the cluster is configured, you can rename private hostnames by using the clsetup(1CL) utility.
For the Solaris 10 OS, the creation of a private hostname for a non-global zone is optional. There is no required naming convention for the private hostname of a non-global zone.
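For example, in a two-node cluster the automatically created private hostnames are clusternode1-priv and clusternode2-priv. A quick way to confirm from one node that the private interconnect resolves and responds, assuming you have not renamed the private hostnames, is:
    # ping clusternode2-priv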
The cluster interconnects provide the hardware pathways for private-network communication between cluster nodes. Each interconnect consists of a cable that is connected in one of the following ways:
Between two transport adapters
Between a transport adapter and a transport switch
For more information about the purpose and function of the cluster interconnect, see Cluster Interconnect in Sun Cluster Concepts Guide for Solaris OS.
You do not need to configure a cluster interconnect for a single-node cluster. However, if you anticipate eventually adding nodes to a single-node cluster configuration, you might want to configure the cluster interconnect for future use.
During Sun Cluster configuration, you specify configuration information for one or two cluster interconnects.
The use of two cluster interconnects provides higher availability than one interconnect. If the number of available adapter ports is limited, you can use tagged VLANs to share the same adapter with both the private and public network. For more information, see the guidelines for tagged VLAN adapters in Transport Adapters.
The use of one cluster interconnect reduces the number of adapter ports that are used for the private interconnect but provides less availability. In addition, the cluster would spend more time in automatic recovery if the single private interconnect fails.
You can configure additional cluster interconnects after the cluster is established by using the clsetup(1CL) utility.
For guidelines about cluster interconnect hardware, see Interconnect Requirements and Restrictions in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS. For general information about the cluster interconnect, see Cluster-Interconnect Components in Sun Cluster Overview for Solaris OS and Sun Cluster Concepts Guide for Solaris OS.
For the transport adapters, such as ports on network interfaces, specify the transport adapter names and transport type. If your configuration is a two-node cluster, you also specify whether your interconnect is a point-to-point connection (adapter to adapter) or uses a transport switch.
Consider the following guidelines and restrictions:
Local MAC address assignment - All private network adapters must use network interface cards (NICs) that support local MAC address assignment. Link-local IPv6 addresses, which are required on private network adapters to support IPv6 public network addresses, are derived from the local MAC addresses.
Tagged VLAN adapters – Sun Cluster software supports tagged Virtual Local Area Networks (VLANs) to share an adapter between the private cluster interconnect and the public network. To configure a tagged VLAN adapter for the cluster interconnect, specify the adapter name and its VLAN ID (VID) in one of the following ways:
Specify the usual adapter name, which is the device name plus the instance number or physical point of attachment (PPA). For example, the name of instance 2 of a Cassini Gigabit Ethernet adapter would be ce2. If the scinstall utility asks whether the adapter is part of a shared virtual LAN, answer yes and specify the adapter's VID number.
Specify the adapter by its VLAN virtual device name. This name is composed of the adapter name plus the VLAN instance number. The VLAN instance number is derived from the formula (1000*V)+N, where V is the VID number and N is the PPA.
As an example, for VID 73 on adapter ce2, the VLAN instance number would be calculated as (1000*73)+2 = 73002. You would therefore specify the adapter name as ce73002 to indicate that it is part of a shared virtual LAN.
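The same calculation can be scripted if you prefer not to do the arithmetic by hand. The following one-liner, run in a POSIX shell such as ksh, is only an illustration of the naming formula, using the VID 73 and ce instance 2 values from the example above:
    # echo ce$(( (1000 * 73) + 2 ))
    ce73002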
For information about configuring VLAN in a cluster, see Configuring VLANs as Private Interconnect Networks in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS. For general information about VLAN, see Solaris 9 9/05 Sun Hardware Platform Guide.
See the scconf_trans_adap_*(1M) family of man pages for information about a specific transport adapter.
If you use transport switches, such as a network switch, specify a transport switch name for each interconnect. You can use the default name switchN, where N is a number that is automatically assigned during configuration, or create another name.
Also specify the switch port name or accept the default name. The default port name is the same as the internal node ID number of the node that hosts the adapter end of the cable. However, you cannot use the default port name for certain adapter types, such as SCI-PCI.
Clusters with three or more nodes must use transport switches. Direct connection between cluster nodes is supported only for two-node clusters.
If your two-node cluster is direct connected, you can still specify a transport switch for the interconnect.
If you specify a transport switch, you can more easily add another node to the cluster in the future.
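If you later use the command line instead of the clsetup utility to add interconnect components, the cable endpoints are expressed in node:adapter and switch form. The following is a sketch that assumes the clinterconnect(1CL) command and the hypothetical names phys-schost-3, qfe1, and switch1; verify the endpoint syntax against the clinterconnect(1CL) man page for your release.
    # clinterconnect add phys-schost-3:qfe1
    # clinterconnect add switch1
    # clinterconnect add phys-schost-3:qfe1,switch1
    # clinterconnect status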
Sun Cluster configurations use quorum devices to maintain data and resource integrity. If the cluster temporarily loses connection to a node, the quorum device prevents amnesia or split-brain problems when the cluster node attempts to rejoin the cluster. For more information about the purpose and function of quorum devices, see Quorum and Quorum Devices in Sun Cluster Concepts Guide for Solaris OS.
During Sun Cluster installation of a two-node cluster, you can choose to let the scinstall utility automatically configure a SCSI quorum device. This quorum device is chosen from the available shared SCSI storage disks. The scinstall utility assumes that all available shared SCSI storage disks are supported for use as quorum devices.
If you want to use a quorum server or a Network Appliance NAS device as the quorum device, you configure it after scinstall processing is completed.
After installation, you can also configure additional quorum devices by using the clsetup(1CL) utility.
You do not need to configure quorum devices for a single-node cluster.
If your cluster configuration includes third-party shared storage devices that are not supported for use as quorum devices, you must use the clsetup utility to configure quorum manually.
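Alternatively, after installation you can administer quorum devices directly with the clquorum(1CL) command rather than through the clsetup menus. A minimal sketch, assuming a shared DID device with the hypothetical name d4 that is connected to at least two nodes:
    # clquorum add d4
    # clquorum status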
Consider the following points when you plan quorum devices.
Minimum – A two-node cluster must have at least one quorum device, which can be a shared SCSI disk, a quorum server, or a Network Appliance NAS device. For other topologies, quorum devices are optional.
Odd-number rule – If more than one quorum device is configured in a two-node cluster, or in a pair of nodes directly connected to the quorum device, configure an odd number of quorum devices. This configuration ensures that the quorum devices have completely independent failure pathways.
Distribution of quorum votes - For highest availability of the cluster, ensure that the total number of votes that are contributed by quorum devices is less than the total number of votes that are contributed by nodes. Otherwise, the nodes cannot form a cluster if all quorum devices are unavailable, even if all nodes are functioning.
Connection – You must connect a quorum device to at least two nodes.
SCSI fencing protocol – When a SCSI quorum device is configured, its SCSI protocol is automatically set to SCSI-2 in a two-node cluster or SCSI-3 in a cluster with three or more nodes. You cannot change the SCSI protocol of a device after it is configured as a quorum device.
ZFS storage pools - Do not add a configured quorum device to a Zettabyte File System (ZFS) storage pool. When a configured quorum device is added to a ZFS storage pool, the disk is relabeled as an EFI disk and quorum configuration information is lost. The disk can then no longer provide a quorum vote to the cluster.
Once a disk is in a storage pool, you can configure that disk as a quorum device. Or, you can unconfigure the quorum device, add it to the storage pool, then reconfigure the disk as a quorum device.
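As an illustration of that ordering, assume the quorum device d5 corresponds to the shared disk c1t3d0 (both names are hypothetical) and that you want the disk in a ZFS storage pool named tank. Unconfigure the quorum device first, add the disk to the pool, and then reconfigure it; verify the exact steps against the clquorum(1CL) and zpool(1M) man pages for your release.
    # clquorum remove d5
    # zpool create tank c1t3d0
    # clquorum add d5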
For more information about quorum devices, see Quorum and Quorum Devices in Sun Cluster Concepts Guide for Solaris OS and Quorum Devices in Sun Cluster Overview for Solaris OS.