This section provides guidelines for planning and preparing the following components for Sun Cluster software installation and configuration:
For detailed information about Sun Cluster components, see the Sun Cluster Overview for Solaris OS and the Sun Cluster Concepts Guide for Solaris OS.
Ensure that you have available all necessary license certificates before you begin software installation. Sun Cluster software does not require a license certificate, but each node installed with Sun Cluster software must be covered under your Sun Cluster software license agreement.
For licensing requirements for volume-manager software and applications software, see the installation documentation for those products.
After installing each software product, you must also install any required patches.
For information about current required patches, see Patches and Required Firmware Levels in Sun Cluster 3.1 8/05 Release Notes for Solaris OS or consult your Sun service provider.
For general guidelines and procedures for applying patches, see Chapter 8, Patching Sun Cluster Software and Firmware, in Sun Cluster System Administration Guide for Solaris OS.
You must set up a number of IP addresses for various Sun Cluster components, depending on your cluster configuration. Each node in the cluster configuration must have at least one public-network connection to the same set of public subnets.
The following table lists the components that need IP addresses assigned. Add these IP addresses to the following locations:
Any naming services that are used
The local /etc/inet/hosts file on each cluster node, after you install Solaris software
For Solaris 10, the local /etc/inet/ipnodes file on each cluster node, after you install Solaris software (example entries for both files follow this list)
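For example, entries of the following form are added to the /etc/inet/hosts file, and on Solaris 10 to the /etc/inet/ipnodes file as well. The hostnames and addresses shown here are hypothetical placeholders, not values that Sun Cluster software requires:

    # Public hostnames of the cluster nodes
    192.168.10.11   phys-schost-1
    192.168.10.12   phys-schost-2
    # Logical hostname used by a data-service resource group
    192.168.10.20   schost-lh-nfs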
For more information about planning IP addresses, see System Administration Guide, Volume 3 (Solaris 8) or System Administration Guide: IP Services (Solaris 9 or Solaris 10).
For more information about test IP addresses to support IP Network Multipathing, see IP Network Multipathing Administration Guide.
You must have console access to all cluster nodes. If you install Cluster Control Panel software on your administrative console, you must provide the hostname of the console-access device that is used to communicate with the cluster nodes.
A terminal concentrator is used to communicate between the administrative console and the cluster node consoles.
A Sun Enterprise 10000 server uses a System Service Processor (SSP) instead of a terminal concentrator.
A Sun Fire server uses a system controller instead of a terminal concentrator.
For more information about console access, see the Sun Cluster Concepts Guide for Solaris OS.
Consider the following points when you plan your logical addresses:
Each data-service resource group that uses a logical address must have a hostname specified for each public network from which the logical address can be accessed.
The IP address must be on the same subnet as the test IP address that is used by the IP Network Multipathing group that hosts the logical address.
For more information, see the Sun Cluster Data Services Planning and Administration Guide for Solaris OS. For additional information about data services and resources, also see the Sun Cluster Overview for Solaris OS and the Sun Cluster Concepts Guide for Solaris OS.
Public networks communicate outside the cluster. Consider the following points when you plan your public-network configuration:
Public networks and the private network (cluster interconnect) must use separate adapters. Alternatively, to use the same adapter for both the private interconnect and the public network, you must configure tagged VLANs on tagged-VLAN capable adapters and VLAN-capable switches.
You must have at least one public network that is connected to all cluster nodes.
You can have as many additional public-network connections as your hardware configuration allows.
Sun Cluster software supports IPv4 addresses on the public network.
Sun Cluster software supports IPv6 addresses on the public network under the following conditions or restrictions:
Sun Cluster software does not support IPv6 addresses on the public network if the private interconnect uses SCI adapters.
On Solaris 9 OS and Solaris 10 OS, Sun Cluster software supports IPv6 addresses for both failover and scalable data services.
On Solaris 8 OS, Sun Cluster software supports IPv6 addresses for failover data services only.
Each public network adapter must belong to an Internet Protocol (IP) Network Multipathing group. See IP Network Multipathing Groups for additional guidelines.
All public network adapters must use network interface cards (NICs) that support local MAC address assignment. Local MAC address assignment is a requirement of IP Network Multipathing.
The local-mac-address? variable must use the default value true for Ethernet adapters. Sun Cluster software does not support a local-mac-address? value of false for Ethernet adapters. This requirement is a change from Sun Cluster 3.0, which required a local-mac-address? value of false. A sketch of how to check this variable appears after this list.
During Sun Cluster installation on the Solaris 9 or Solaris 10 OS, the scinstall utility automatically configures a single-adapter IP Network Multipathing group for each public-network adapter. To modify these backup groups after installation, follow the procedures in Administering IPMP (Tasks) in System Administration Guide: IP Services (Solaris 9 or Solaris 10).
Sun Cluster configurations do not support filtering with Solaris IP Filter.
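One way to check the local-mac-address? variable on a SPARC based node is with the eeprom(1M) command. The following is only a sketch; run the commands as superuser, and reboot the node if you change the value:

    # eeprom "local-mac-address?"
    local-mac-address?=true

If the command reports a value of false, set the variable to true:

    # eeprom "local-mac-address?=true"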
See IP Network Multipathing Groups for guidelines on planning public-network-adapter backup groups. For more information about public-network interfaces, see Sun Cluster Concepts Guide for Solaris OS.
Add this planning information to the Public Networks Worksheet.
Internet Protocol (IP) Network Multipathing groups, which replace Network Adapter Failover (NAFO) groups, provide public-network adapter monitoring and failover, and are the foundation for a network-address resource. A multipathing group provides high availability when the multipathing group is configured with two or more adapters. If one adapter fails, all of the addresses on the failed adapter fail over to another adapter in the multipathing group. In this way, the adapters in the multipathing group maintain public-network connectivity to the subnet to which they connect.
The following describes the circumstances when you must manually configure IP Network Multipathing groups during a Sun Cluster software installation:
For Sun Cluster software installations on the Solaris 8 OS, you must manually configure all public network adapters in IP Network Multipathing groups, with test IP addresses.
If you use SunPlex Installer to install Sun Cluster software on the Solaris 9 or Solaris 10 OS, some but not all public network adapters might need to be manually configured in IP Network Multipathing groups.
For Sun Cluster software installations on the Solaris 9 or Solaris 10 OS, except when using SunPlex Installer, the scinstall utility automatically configures all public network adapters as single-adapter IP Network Multipathing groups.
Consider the following points when you plan your multipathing groups.
Each public network adapter must belong to a multipathing group.
In the following kinds of multipathing groups, you must configure a test IP address for each adapter in the group:
On the Solaris 8 OS, all multipathing groups require a test IP address for each adapter.
On the Solaris 9 or Solaris 10 OS, multipathing groups that contain two or more adapters require test IP addresses. If a multipathing group contains only one adapter, you do not need to configure a test IP address.
Test IP addresses for all adapters in the same multipathing group must belong to a single IP subnet.
Test IP addresses must not be used by normal applications because the test IP addresses are not highly available.
In the /etc/default/mpathd file, the value of TRACK_INTERFACES_ONLY_WITH_GROUPS must be yes. Example configuration entries follow this list.
The name of a multipathing group has no requirements or restrictions.
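The following sketch shows one way to configure a two-adapter multipathing group with test IP addresses by using standard Solaris IPMP /etc/hostname.interface files. The adapter names (qfe0, qfe1), the group name (sc_ipmp0), and the hostnames are hypothetical examples; each hostname must resolve to an address in /etc/inet/hosts or your naming service.

An example /etc/hostname.qfe0 file, which configures the data address plus a test address:

    phys-schost-1 netmask + broadcast + group sc_ipmp0 up \
    addif test-qfe0-node1 deprecated -failover netmask + broadcast + up

An example /etc/hostname.qfe1 file, which configures only a test address:

    test-qfe1-node1 netmask + broadcast + group sc_ipmp0 deprecated -failover up

The related entry in the /etc/default/mpathd file:

    TRACK_INTERFACES_ONLY_WITH_GROUPS=yes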
Most procedures, guidelines, and restrictions that are identified in the Solaris documentation for IP Network Multipathing are the same for both cluster and noncluster environments. Therefore, see the appropriate Solaris document for additional information about IP Network Multipathing:
For the Solaris 8 OS, see Deploying Network Multipathing in IP Network Multipathing Administration Guide.
For the Solaris 9 OS, see Chapter 28, Administering Network Multipathing (Task), in System Administration Guide: IP Services.
For the Solaris 10 OS, see Chapter 30, Administering IPMP (Tasks), in System Administration Guide: IP Services.
Also see IP Network Multipathing Groups in Sun Cluster Overview for Solaris OS and Sun Cluster Concepts Guide for Solaris OS.
Consider the following points when you plan the use of Network File System (NFS) in a Sun Cluster configuration.
No Sun Cluster node can be an NFS client of a Sun Cluster HA for NFS-exported file system being mastered on a node in the same cluster. Such cross-mounting of Sun Cluster HA for NFS is prohibited. Use the cluster file system to share files among cluster nodes.
Applications that run locally on the cluster must not lock files on a file system that is exported through NFS. Otherwise, local locking (for example, flock(3UCB) or fcntl(2)) might interfere with the ability to restart the lock manager (lockd(1M)). During restart, a blocked local process might be granted a lock that is intended to be reclaimed by a remote client, which would cause unpredictable behavior.
Sun Cluster software does not support the following options of the share_nfs(1M) command:
secure
sec=dh
However, Sun Cluster software does support the following security features for NFS:
The use of secure ports for NFS. You enable secure ports for NFS by adding the entry set nfssrv:nfs_portmon=1 to the /etc/system file on cluster nodes.
The use of Kerberos with NFS. For more information, see Securing Sun Cluster HA for NFS With Kerberos V5 in Sun Cluster Data Service for NFS Guide for Solaris OS.
Observe the following service restrictions for Sun Cluster configurations:
Do not configure cluster nodes as routers (gateways). If the system goes down, the clients cannot find an alternate router and cannot recover.
Do not configure cluster nodes as NIS or NIS+ servers. There is no data service available for NIS or NIS+. However, cluster nodes can be NIS or NIS+ clients.
Do not use a Sun Cluster configuration to provide a highly available boot or installation service on client systems.
Do not use a Sun Cluster configuration to provide an rarpd service.
If you install an RPC service on the cluster, the service must not use any of the following program numbers:
100141
100142
100248
These numbers are reserved for the Sun Cluster daemons rgmd_receptionist, fed, and pmfd, respectively.
If the RPC service that you install also uses one of these program numbers, you must change that RPC service to use a different program number.
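To check whether a service already registers one of these reserved program numbers on a node, you can query the local rpcbind daemon with rpcinfo(1M). This is only a sketch; the output columns are program number, version, protocol, port, and service name:

    # rpcinfo -p | egrep '100141|100142|100248'

If the output lists one of these program numbers for a service other than the Sun Cluster daemons, reassign that service to a different program number before you install it on the cluster.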
Sun Cluster software does not support the running of high-priority process scheduling classes on cluster nodes. Do not run either of the following types of processes on cluster nodes:
Processes that run in the time-sharing scheduling class with a high priority
Processes that run in the real-time scheduling class
Sun Cluster software relies on kernel threads that do not run in the real-time scheduling class. Other time-sharing processes that run at higher-than-normal priority or real-time processes can prevent the Sun Cluster kernel threads from acquiring needed CPU cycles.
This section provides guidelines for the following Sun Cluster components that you configure:
Add this information to the appropriate configuration planning worksheet.
Specify a name for the cluster during Sun Cluster configuration. The cluster name should be unique throughout the enterprise.
The node name is the name that you assign to a machine when you install the Solaris OS. During Sun Cluster configuration, you specify the names of all nodes that you are installing as a cluster. In single-node cluster installations, the default cluster name is the node name.
You do not need to configure a private network for a single-node cluster.
Sun Cluster software uses the private network for internal communication between nodes. A Sun Cluster configuration requires at least two connections to the cluster interconnect on the private network. You specify the private-network address and netmask when you configure Sun Cluster software on the first node of the cluster. You can either accept the default private-network address (172.16.0.0) and netmask (255.255.0.0) or type different choices.
After the installation utility (scinstall, SunPlex Installer, or JumpStart) has finished processing and the cluster is established, you cannot change the private-network address and netmask. You must uninstall and reinstall the cluster software to use a different private-network address or netmask.
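After the cluster is established, one way to confirm the private-network address and netmask that were configured is to list the cluster configuration with the scconf(1M) command. This is only a sketch; the exact labels in the output can differ between releases:

    # scconf -p | grep -i "private net"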
If you specify a private-network address other than the default, the address must meet the following requirements:
The address must use zeroes for the last two octets of the address, as in the default address 172.16.0.0. Sun Cluster software requires the last 16 bits of the address space for its own use.
The address must be included in the block of addresses that RFC 1918 reserves for use in private networks. You can contact the InterNIC to obtain copies of RFCs or view RFCs online at http://www.rfcs.org.
You can use the same private network address in more than one cluster. Private IP network addresses are not accessible from outside the cluster.
Sun Cluster software does not support IPv6 addresses for the private interconnect. The system does configure IPv6 addresses on the private-network adapters to support scalable services that use IPv6 addresses, but internode communication on the private network does not use these IPv6 addresses.
Although the scinstall utility lets you specify an alternate netmask, best practice is to accept the default netmask, 255.255.0.0. There is no benefit if you specify a netmask that represents a larger network. And the scinstall utility does not accept a netmask that represents a smaller network.
See Planning Your TCP/IP Network in System Administration Guide, Volume 3 (Solaris 8) or Planning Your TCP/IP Network (Tasks), in System Administration Guide: IP Services (Solaris 9 or Solaris 10) for more information about private networks.
The private hostname is the name that is used for internode communication over the private-network interface. Private hostnames are automatically created during Sun Cluster configuration. These private hostnames follow the naming convention clusternodenodeid-priv, where nodeid is the numeral of the internal node ID; for example, the private hostname of the node with node ID 3 is clusternode3-priv. During Sun Cluster configuration, the node ID number is automatically assigned to each node when the node becomes a cluster member. After the cluster is configured, you can rename private hostnames by using the scsetup(1M) utility.
You do not need to configure a cluster interconnect for a single-node cluster. However, if you anticipate eventually adding nodes to a single-node cluster configuration, you might want to configure the cluster interconnect for future use.
The cluster interconnects provide the hardware pathways for private-network communication between cluster nodes. Each interconnect consists of a cable that is connected in one of the following ways:
Between two transport adapters
Between a transport adapter and a transport junction
Between two transport junctions
During Sun Cluster configuration, you specify configuration information for two cluster interconnects. You can configure additional private-network connections after the cluster is established by using the scsetup(1M) utility.
For guidelines about cluster interconnect hardware, see Interconnect Requirements and Restrictions in Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS. For general information about the cluster interconnect, see Cluster Interconnect in Sun Cluster Overview for Solaris OS and Sun Cluster Concepts Guide for Solaris OS.
For the transport adapters, such as ports on network interfaces, specify the transport adapter names and transport type. If your configuration is a two-node cluster, you also specify whether your interconnect is direct connected (adapter to adapter) or uses a transport junction.
Consider the following guidelines and restrictions:
IPv6 - Sun Cluster software does not support IPv6 communications over the private interconnects.
Local MAC address assignment - All private network adapters must use network interface cards (NICs) that support local MAC address assignment. Link-local IPv6 addresses, which are required on private network adapters to support IPv6 public network addresses, are derived from the local MAC addresses.
Tagged VLAN adapters – Sun Cluster software supports tagged Virtual Local Area Networks (VLANs) to share an adapter between the private interconnect and the public network. To configure a tagged VLAN adapter for the private interconnect, specify the adapter name and its VLAN ID (VID) in one of the following ways:
Specify the usual adapter name, which is the device name plus the instance number or physical point of attachment (PPA). For example, the name of instance 2 of a Cassini Gigabit Ethernet adapter would be ce2. If the scinstall utility asks whether the adapter is part of a shared virtual LAN, answer yes and specify the adapter's VID number.
Specify the adapter by its VLAN virtual device name. This name is composed of the adapter name plus the VLAN instance number. The VLAN instance number is derived from the formula (1000*V)+N, where V is the VID number and N is the PPA.
As an example, for VID 73 on adapter ce2, the VLAN instance number would be calculated as (1000*73)+2 = 73002. You would therefore specify the adapter name as ce73002 to indicate that it is part of a shared virtual LAN.
For more information about VLAN, see Configuring VLANs in Solaris 9 9/04 Sun Hardware Platform Guide.
SBus SCI adapters – The SBus Scalable Coherent Interface (SCI) is not supported as a cluster interconnect. However, the SCI–PCI interface is supported.
Logical network interfaces - Logical network interfaces are reserved for use by Sun Cluster software.
See the scconf_trans_adap_*(1M) family of man pages for information about a specific transport adapter.
If you use transport junctions, such as a network switch, specify a transport junction name for each interconnect. You can use the default name switchN, where N is a number that is automatically assigned during configuration, or create another name. The exception is the Sun Fire Link adapter, which requires the junction name sw-rsmN. The scinstall utility automatically uses this junction name after you specify a Sun Fire Link adapter (wrsmN).
Also specify the junction port name or accept the default name. The default port name is the same as the internal node ID number of the node that hosts the adapter end of the cable. However, you cannot use the default port name for certain adapter types, such as SCI-PCI.
Clusters with three or more nodes must use transport junctions. Direct connection between cluster nodes is supported only for two-node clusters.
If your two-node cluster is direct connected, you can still specify a transport junction for the interconnect.
If you specify a transport junction, you can more easily add another node to the cluster in the future.
Sun Cluster configurations use quorum devices to maintain data and resource integrity. If the cluster temporarily loses connection to a node, the quorum device prevents amnesia or split-brain problems when the cluster node attempts to rejoin the cluster. During Sun Cluster installation of a two-node cluster, the scinstall utility automatically configures a quorum device. The quorum device is chosen from the available shared storage disks. The scinstall utility assumes that all available shared storage disks are supported as quorum devices. After installation, you can also configure additional quorum devices by using the scsetup(1M) utility.
You do not need to configure quorum devices for a single-node cluster.
If your cluster configuration includes third-party shared storage devices that are not supported for use as quorum devices, you must use the scsetup utility to configure quorum manually.
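The scsetup utility is menu driven and calls the scconf(1M) command for you. As an illustration only, the equivalent non-interactive command to add a quorum device, assuming a hypothetical shared disk whose device ID (DID) name is d20, is the following:

    # scconf -a -q globaldev=d20

You can list the DID names of the shared disks that are attached to the cluster nodes with the scdidadm -L command.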
Consider the following points when you plan quorum devices.
Minimum – A two-node cluster must have at least one quorum device, which can be a shared disk or a Network Appliance NAS device. For other topologies, quorum devices are optional.
Odd-number rule – If more than one quorum device is configured in a two-node cluster, or in a pair of nodes directly connected to the quorum device, configure an odd number of quorum devices. This configuration ensures that the quorum devices have completely independent failure pathways.
Connection – You must connect a quorum device to at least two nodes.
For more information about quorum devices, see Quorum and Quorum Devices in Sun Cluster Concepts Guide for Solaris OS and Quorum Devices in Sun Cluster Overview for Solaris OS.