This section provides guidelines for planning and preparing the following components for Sun Cluster software installation and configuration:
Ensure that you have all necessary license certificates available before you begin software installation. Sun Cluster software does not require a license certificate, but each node installed with Sun Cluster software must be covered under your Sun Cluster software license agreement.
For licensing requirements for volume-manager software and applications software, see the installation documentation for those products.
After installing each software product, you must also install any required patches.
For information about current required patches, see Patches and Required Firmware Levels in Sun Cluster 3.2 Release Notes for Solaris OS or consult your Sun service provider.
For general guidelines and procedures for applying patches, see Chapter 10, Patching Sun Cluster Software and Firmware, in Sun Cluster System Administration Guide for Solaris OS.
For information about the use of public networks by the cluster, see Public Network Adapters and Internet Protocol (IP) Network Multipathing in Sun Cluster Concepts Guide for Solaris OS.
You must set up a number of public network IP addresses for various Sun Cluster components, depending on your cluster configuration. Each node in the cluster configuration must have at least one public-network connection to the same set of public subnets.
The following table lists the components that need public-network IP addresses assigned. Add these IP addresses to the following locations:
Any naming services that are used
The local /etc/inet/hosts file on each cluster node, after you install Solaris software
For Solaris 10, the local /etc/inet/ipnodes file on each cluster node, after you install Solaris software
Component                                            Number of IP Addresses Needed
Administrative console                               1 IP address per subnet.
Cluster nodes                                        1 IP address per node, per subnet.
Domain console network interface (Sun Fire 15000)    1 IP address per domain.
(Optional) Non-global zones                          1 IP address per subnet.
Console-access device                                1 IP address.
Logical addresses                                    1 IP address per logical host resource, per subnet.
(Solaris 10) Quorum server                           1 IP address.
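For illustration, the following sketch tallies the addresses that the table implies for a hypothetical small configuration. The component counts used here (a two-node cluster on one public subnet, with an administrative console, a console-access device, and two logical host resources) are assumptions for the example, not requirements.

```python
# Hypothetical address tally based on the table above.
subnets = 1          # public subnets in use (assumed)
nodes = 2            # cluster nodes (assumed)
logical_hosts = 2    # logical host resources (assumed)

addresses = (
    1 * subnets                 # administrative console: 1 per subnet
    + nodes * subnets           # cluster nodes: 1 per node, per subnet
    + 1                         # console-access device: 1 address
    + logical_hosts * subnets   # logical addresses: 1 per resource, per subnet
)
print(addresses)  # 6 public-network IP addresses for this example
```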
You must have console access to all cluster nodes. If you install Cluster Control Panel software on an administrative console, you must provide the hostname and port number of the console-access device that is used to communicate with the cluster nodes.
A terminal concentrator is used to communicate between the administrative console and the cluster node consoles.
A Sun Enterprise 10000 server uses a System Service Processor (SSP) instead of a terminal concentrator.
A Sun Fire server uses a system controller instead of a terminal concentrator.
For more information about console access, see the Sun Cluster Concepts Guide for Solaris OS.
Alternatively, if you connect an administrative console directly to the cluster nodes or through a management network, you instead provide the hostname of each cluster node and the serial port number that each node uses to connect to the administrative console or the management network.
Each data-service resource group that uses a logical address must have a hostname specified for each public network from which the logical address can be accessed.
For more information, see the Sun Cluster Data Services Planning and Administration Guide for Solaris OS. For additional information about data services and resources, also see the Sun Cluster Overview for Solaris OS and the Sun Cluster Concepts Guide for Solaris OS.
Public networks communicate outside the cluster. Consider the following points when you plan your public-network configuration:
Separation of public and private network - Public networks and the private network (cluster interconnect) must use separate adapters. Alternatively, to use the same adapter for both the private interconnect and the public network, you must configure tagged VLANs on tagged-VLAN capable adapters and VLAN-capable switches.
Minimum - All cluster nodes must be connected to at least one public network. Public-network connections can use different subnets for different nodes.
Scalable services - All nodes that run a scalable service must either use the same subnet or set of subnets or use different subnets that are routable among themselves.
IPv4 - Sun Cluster software supports IPv4 addresses on the public network.
IPv6 - Sun Cluster software supports IPv6 addresses on the public network under the following conditions or restrictions:
Sun Cluster software does not support IPv6 addresses on the public network if the private interconnect uses SCI adapters.
Sun Cluster software supports IPv6 addresses for both failover and scalable data services.
IPMP groups - Each public network adapter that is used for data-service traffic must belong to an IP network multipathing (IPMP) group. If a public-network adapter is not used for data-service traffic, you do not have to configure it in an IPMP group.
In the Sun Cluster 3.2 release, the scinstall utility no longer automatically configures a single-adapter IPMP group on each unconfigured public-network adapter during Sun Cluster creation. Instead, the scinstall utility automatically configures a multiple-adapter IPMP group for each set of public-network adapters in the cluster that uses the same subnet. On the Solaris 10 OS, these groups are probe based. However, the scinstall utility ignores adapters that are already configured in an IPMP group. If any adapter in an IPMP group that the scinstall utility configures will not be used for data-service traffic, you can remove that adapter from the group.
For guidelines and instructions to configure IPMP groups, follow the procedures in Part VI, IPMP, in System Administration Guide: IP Services. To modify IPMP groups after cluster installation, follow the guidelines in How to Administer IP Network Multipathing Groups in a Cluster in Sun Cluster System Administration Guide for Solaris OS and procedures in Administering IPMP (Tasks) in System Administration Guide: IP Services (Solaris 9 or Solaris 10).
local-mac-address setting - The local-mac-address? variable must use the default value true for Ethernet adapters. Sun Cluster software does not support a local-mac-address? value of false for Ethernet adapters. This requirement is a change from Sun Cluster 3.0, which required a local-mac-address? value of false.
For more information about public-network interfaces, see Sun Cluster Concepts Guide for Solaris OS.
Consider the following points when you plan the use of Network File System (NFS) in a Sun Cluster configuration.
NFS client - No Sun Cluster node can be an NFS client of a Sun Cluster HA for NFS-exported file system that is being mastered on a node in the same cluster. Such cross-mounting of Sun Cluster HA for NFS is prohibited. Use the cluster file system to share files among cluster nodes.
NFSv3 protocol - If you are mounting file systems on the cluster nodes from external NFS servers, such as NAS filers, and you are using the NFSv3 protocol, you cannot run NFS client mounts and the Sun Cluster HA for NFS data service on the same cluster node. If you do, certain Sun Cluster HA for NFS data-service activities might cause the NFS daemons to stop and restart, interrupting NFS services. However, you can safely run the Sun Cluster HA for NFS data service if you use the NFSv4 protocol to mount external NFS file systems on the cluster nodes.
Locking - Applications that run locally on the cluster must not lock files on a file system that is exported through NFS. Otherwise, local locking (for example, flock(3UCB) or fcntl(2)) might interfere with the ability to restart the lock manager (lockd(1M)). During restart, a blocked local process might be granted a lock that is intended to be reclaimed by a remote client, which would cause unpredictable behavior.
NFS security features - Sun Cluster software does not support the following options of the share_nfs(1M) command:
secure
sec=dh
However, Sun Cluster software does support the following security features for NFS:
The use of secure ports for NFS. You enable secure ports for NFS by adding the entry set nfssrv:nfs_portmon=1 to the /etc/system file on cluster nodes.
The use of Kerberos with NFS. For more information, see Securing Sun Cluster HA for NFS With Kerberos V5 in Sun Cluster Data Service for NFS Guide for Solaris OS.
Observe the following service restrictions for Sun Cluster configurations:
Boot and install servers - Do not use a Sun Cluster configuration to provide a highly available boot or installation service on client systems.
RPC program numbers - If you install an RPC service on the cluster, the service must not use the program numbers 100141, 100142, or 100143. These numbers are reserved for the Sun Cluster daemons rgmd_receptionist, fed, and pmfd, respectively.
If the RPC service that you install also uses one of these program numbers, you must change that RPC service to use a different program number.
Scheduling classes - Sun Cluster software does not support the running of high-priority process scheduling classes on cluster nodes. Do not run either of the following types of processes on cluster nodes:
Processes that run in the time-sharing scheduling class with a high priority
Processes that run in the real-time scheduling class
Sun Cluster software relies on kernel threads that do not run in the real-time scheduling class. Other time-sharing processes that run at higher-than-normal priority or real-time processes can prevent the Sun Cluster kernel threads from acquiring needed CPU cycles.
This section provides guidelines for the following Sun Cluster components that you configure:
Add this information to the appropriate configuration planning worksheet.
Specify a name for the cluster during Sun Cluster configuration. The cluster name should be unique throughout the enterprise.
The cluster node name is the same name that you assign to the machine when you install it with the Solaris OS. See the hosts(4) man page for information about naming requirements.
In single-node cluster installations, the default cluster name is the name of the node.
During Sun Cluster configuration, you specify the names of all nodes that you are installing in the cluster.
On the Solaris 10 OS, use the naming convention nodename:zonename to specify a non-global zone to a Sun Cluster command.
The nodename is the name of the cluster node.
The zonename is the name that you assign to the non-global zone when you create the zone on the node. The zone name must be unique on the node. However, you can use the same zone name on different nodes, because the different node name in nodename:zonename makes the complete non-global zone name unique in the cluster.
To specify the global zone, you only need to specify the node name.
You do not need to configure a private network for a single-node cluster. The scinstall utility automatically assigns the default private-network address and netmask, even though a private network is not used by the cluster.
Sun Cluster software uses the private network for internal communication among nodes and among non-global zones that are managed by Sun Cluster software. A Sun Cluster configuration requires at least two connections to the cluster interconnect on the private network. When you configure Sun Cluster software on the first node of the cluster, you specify the private-network address and netmask in one of the following ways:
Accept the default private-network address (172.16.0.0) and netmask (255.255.248.0). This IP address range supports a combined maximum of 64 nodes and non-global zones and a maximum of 10 private networks.
The maximum number of nodes that an IP address range can support does not reflect the maximum number of nodes that the hardware configuration can support.
Specify a different allowable private-network address and accept the default netmask.
Accept the default private-network address and specify a different netmask.
Specify both a different private-network address and a different netmask.
If you choose to specify a different netmask, the scinstall utility prompts you for the number of nodes and the number of private networks that you want the IP address range to support. The number of nodes that you specify should also include the expected number of non-global zones that will use the private network.
The utility calculates the netmask for the minimum IP address range that will support the number of nodes and private networks that you specified. The calculated netmask might support more than the supplied number of nodes, including non-global zones, and private networks. The scinstall utility also calculates a second netmask that would be the minimum to support twice the number of nodes and private networks. This second netmask would enable the cluster to accommodate future growth without the need to reconfigure the IP address range.
The utility then asks you what netmask to choose. You can specify either of the calculated netmasks or provide a different one. The netmask that you specify must minimally support the number of nodes and private networks that you specified to the utility.
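The following sketch illustrates the kind of sizing arithmetic described above: choose enough host bits to address every node on a subnet and enough subnet bits to hold every private network, then derive the netmask. This is a simplified model of the calculation, not the scinstall implementation itself.

```python
import math

def minimal_netmask(nodes, private_nets):
    """Approximate the smallest netmask whose range holds the requested
    nodes (and non-global zones) on each of the requested private
    networks. Simplified model; the actual scinstall algorithm may differ."""
    host_bits = math.ceil(math.log2(nodes + 2))       # +2 for network/broadcast
    subnet_bits = math.ceil(math.log2(private_nets))  # one subnet per private network
    prefix = 32 - (host_bits + subnet_bits)
    mask = (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF
    return ".".join(str((mask >> s) & 0xFF) for s in (24, 16, 8, 0))

print(minimal_netmask(64, 10))    # 255.255.248.0, the default described above
print(minimal_netmask(128, 20))   # 255.255.224.0, sized for twice the growth
```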
To change the private-network address and netmask after the cluster is established, see How to Change the Private Network Address or Address Range of an Existing Cluster in Sun Cluster System Administration Guide for Solaris OS. You must bring down the cluster to make these changes.
Changing the cluster private IP address range might be necessary to support the addition of nodes, non-global zones, or private networks.
If you specify a private-network address other than the default, the address must meet the following requirements:
Address and netmask sizes - The private network address cannot be smaller than the netmask. For example, you can use a private network address of 172.16.10.0 with a netmask of 255.255.255.0, but you cannot use a private network address of 172.16.10.0 with a netmask of 255.255.0.0. A quick check of this rule appears after this list.
Acceptable addresses - The address must be included in the block of addresses that RFC 1918 reserves for use in private networks. You can contact the InterNIC to obtain copies of RFCs or view RFCs online at http://www.rfcs.org.
Use in multiple clusters - You can use the same private network address in more than one cluster. Private IP network addresses are not accessible from outside the cluster.
IPv6 - Sun Cluster software does not support IPv6 addresses for the private interconnect. The system does configure IPv6 addresses on the private network adapters to support scalable services that use IPv6 addresses. But internode communication on the private network does not use these IPv6 addresses.
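As a quick check of the address-and-netmask rule above, the standard Python ipaddress module enforces the same constraint: a network address must not have bits set below its netmask.

```python
import ipaddress

# Valid: 172.16.10.0 falls on a /24 (255.255.255.0) boundary.
print(ipaddress.ip_network("172.16.10.0/255.255.255.0"))

# Invalid: with netmask 255.255.0.0, the address 172.16.10.0 has host
# bits set, so strict parsing rejects it, mirroring the rule above.
try:
    ipaddress.ip_network("172.16.10.0/255.255.0.0")
except ValueError as err:
    print(err)  # "172.16.10.0/16 has host bits set"
```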
The private hostname is the name that is used for internode communication over the private-network interface. Private hostnames are automatically created during Sun Cluster configuration. These private hostnames follow the naming convention clusternodenodeid-priv, where nodeid is the numeral of the internal node ID (for example, clusternode1-priv for node ID 1). During Sun Cluster configuration, the node ID number is automatically assigned to each node when the node becomes a cluster member. After the cluster is configured, you can rename private hostnames by using the clsetup(1CL) utility.
For the Solaris 10 OS, the creation of a private hostname for a non-global zone is optional. There is no required naming convention for the private hostname of a non-global zone.
The cluster interconnects provide the hardware pathways for private-network communication between cluster nodes. Each interconnect consists of a cable that is connected in one of the following ways:
Between two transport adapters
Between a transport adapter and a transport switch
For more information about the purpose and function of the cluster interconnect, see Cluster Interconnect in Sun Cluster Concepts Guide for Solaris OS.
You do not need to configure a cluster interconnect for a single-node cluster. However, if you anticipate eventually adding nodes to a single-node cluster configuration, you might want to configure the cluster interconnect for future use.
During Sun Cluster configuration, you specify configuration information for one or two cluster interconnects.
The use of two cluster interconnects provides higher availability than one interconnect. If the number of available adapter ports is limited, you can use tagged VLANs to share the same adapter with both the private and public network. For more information, see the guidelines for tagged VLAN adapters in Transport Adapters.
The use of one cluster interconnect reduces the number of adapter ports that are used for the private interconnect but provides less availability. In addition, the cluster would spend more time in automatic recovery if the single private interconnect fails.
You can configure additional cluster interconnects after the cluster is established by using the clsetup(1CL) utility.
For guidelines about cluster interconnect hardware, see Interconnect Requirements and Restrictions in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS. For general information about the cluster interconnect, see Cluster-Interconnect Components in Sun Cluster Overview for Solaris OS and Sun Cluster Concepts Guide for Solaris OS.
For the transport adapters, such as ports on network interfaces, specify the transport adapter names and transport type. If your configuration is a two-node cluster, you also specify whether your interconnect is a point-to-point connection (adapter to adapter) or uses a transport switch.
Consider the following guidelines and restrictions:
Local MAC address assignment - All private network adapters must use network interface cards (NICs) that support local MAC address assignment. Link-local IPv6 addresses, which are required on private network adapters to support IPv6 public network addresses, are derived from the local MAC addresses.
Tagged VLAN adapters – Sun Cluster software supports tagged Virtual Local Area Networks (VLANs) to share an adapter between the private cluster interconnect and the public network. To configure a tagged VLAN adapter for the cluster interconnect, specify the adapter name and its VLAN ID (VID) in one of the following ways:
Specify the usual adapter name, which is the device name plus the instance number or physical point of attachment (PPA). For example, the name of instance 2 of a Cassini Gigabit Ethernet adapter would be ce2. If the scinstall utility asks whether the adapter is part of a shared virtual LAN, answer yes and specify the adapter's VID number.
Specify the adapter by its VLAN virtual device name. This name is composed of the adapter name plus the VLAN instance number. The VLAN instance number is derived from the formula (1000*V)+N, where V is the VID number and N is the PPA.
As an example, for VID 73 on adapter ce2, the VLAN instance number would be calculated as (1000*73)+2. You would therefore specify the adapter name as ce73002 to indicate that it is part of a shared virtual LAN.
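The following minimal helper reflects the formula in the preceding paragraph; the driver name, VID, and PPA values are taken from the example above.

```python
def vlan_device_name(driver, vid, ppa):
    """Compose a Solaris VLAN virtual device name from the formula
    instance = (1000 * VID) + PPA described above."""
    return f"{driver}{1000 * vid + ppa}"

print(vlan_device_name("ce", 73, 2))  # ce73002
```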
For information about configuring VLAN in a cluster, see Configuring VLANs as Private Interconnect Networks in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS. For general information about VLAN, see Solaris 9 9/05 Sun Hardware Platform Guide.
See the scconf_trans_adap_*(1M) family of man pages for information about a specific transport adapter.
If you use transport switches, such as a network switch, specify a transport switch name for each interconnect. You can use the default name switchN, where N is a number that is automatically assigned during configuration, or create another name.
Also specify the switch port name or accept the default name. The default port name is the same as the internal node ID number of the node that hosts the adapter end of the cable. However, you cannot use the default port name for certain adapter types, such as SCI-PCI.
Clusters with three or more nodes must use transport switches. Direct connection between cluster nodes is supported only for two-node clusters.
If your two-node cluster is direct connected, you can still specify a transport switch for the interconnect.
If you specify a transport switch, you can more easily add another node to the cluster in the future.
Sun Cluster configurations use quorum devices to maintain data and resource integrity. If the cluster temporarily loses connection to a node, the quorum device prevents amnesia or split-brain problems when the cluster node attempts to rejoin the cluster. For more information about the purpose and function of quorum devices, see Quorum and Quorum Devices in Sun Cluster Concepts Guide for Solaris OS.
During Sun Cluster installation of a two-node cluster, you can choose to let the scinstall utility automatically configure a SCSI quorum device. This quorum device is chosen from the available shared SCSI storage disks. The scinstall utility assumes that all available shared SCSI storage disks are supported for use as quorum devices.
If you want to use a quorum server or a Network Appliance NAS device as the quorum device, you configure it after scinstall processing is completed.
After installation, you can also configure additional quorum devices by using the clsetup(1CL) utility.
You do not need to configure quorum devices for a single-node cluster.
If your cluster configuration includes third-party shared storage devices that are not supported for use as quorum devices, you must use the clsetup utility to configure quorum manually.
Consider the following points when you plan quorum devices.
Minimum – A two-node cluster must have at least one quorum device, which can be a shared SCSI disk, a quorum server, or a Network Appliance NAS device. For other topologies, quorum devices are optional.
Odd-number rule – If more than one quorum device is configured in a two-node cluster, or in a pair of nodes directly connected to the quorum device, configure an odd number of quorum devices. This configuration ensures that the quorum devices have completely independent failure pathways.
Distribution of quorum votes - For highest availability of the cluster, ensure that the total number of votes that are contributed by quorum devices is less than the total number of votes that are contributed by nodes. Otherwise, the nodes cannot form a cluster if all quorum devices are unavailable, even if all nodes are functioning. An arithmetic sketch after this list illustrates this rule.
Connection – You must connect a quorum device to at least two nodes.
SCSI fencing protocol - When a SCSI quorum device is configured, its SCSI protocol is automatically set to SCSI-2 in a two-node cluster or SCSI-3 in a cluster with three or more nodes. You cannot change the SCSI protocol of a device after it is configured as a quorum device.
ZFS storage pools - Do not add a configured quorum device to a Zettabyte File System (ZFS) storage pool. When a configured quorum device is added to a ZFS storage pool, the disk is relabeled as an EFI disk and quorum configuration information is lost. The disk can then no longer provide a quorum vote to the cluster.
To use a disk both in a ZFS storage pool and as a quorum device, add the disk to the storage pool first, and then configure the disk as a quorum device. Alternatively, unconfigure the quorum device, add the disk to the storage pool, and then reconfigure the disk as a quorum device.
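The quorum-vote distribution rule earlier in this list follows from simple majority arithmetic. Here is a minimal sketch, assuming each node contributes one vote, per the quorum model described in the concepts documentation:

```python
def can_form_cluster(node_votes, device_votes):
    """Return True if the nodes alone can reach quorum, that is, a
    majority of all configured votes, when every quorum device is down."""
    total = node_votes + device_votes
    quorum = total // 2 + 1
    return node_votes >= quorum

# Two nodes plus one single-vote quorum device: total 3 votes, quorum 2.
print(can_form_cluster(node_votes=2, device_votes=1))  # True

# If device votes equal node votes, losing all devices halts the cluster.
print(can_form_cluster(node_votes=2, device_votes=2))  # False
```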
For more information about quorum devices, see Quorum and Quorum Devices in Sun Cluster Concepts Guide for Solaris OS and Quorum Devices in Sun Cluster Overview for Solaris OS.