Sun Cluster Software Installation Guide for Solaris OS

Planning the Sun Cluster Environment

This section provides guidelines for planning and preparing the following components for Sun Cluster software installation and configuration:

For detailed information about Sun Cluster components, see the Sun Cluster Overview for Solaris OS and the Sun Cluster Concepts Guide for Solaris OS.

Licensing

Ensure that all necessary license certificates are available before you begin software installation. Sun Cluster software does not require a license certificate, but each node installed with Sun Cluster software must be covered under your Sun Cluster software license agreement.

For licensing requirements for volume-manager software and applications software, see the installation documentation for those products.

Software Patches

After installing each software product, you must also install any required patches.

Public Network IP Addresses

For information about the use of public networks by the cluster, see Public Network Adapters and Internet Protocol (IP) Network Multipathing in Sun Cluster Concepts Guide for Solaris OS.

You must set up a number of public network IP addresses for various Sun Cluster components, depending on your cluster configuration. Each node in the cluster configuration must have at least one public-network connection to the same set of public subnets.

The following table lists the components that need public network IP addresses assigned. Add these IP addresses to any naming service that is used, as well as to the local /etc/inet/hosts file on each cluster node.

Table 1–3 Sun Cluster Components That Use Public Network IP Addresses

Component: Number of IP Addresses Needed

Administrative console: 1 IP address per subnet.

Cluster nodes: 1 IP address per node, per subnet.

Domain console network interface (Sun Fire 15000): 1 IP address per domain.

(Optional) Non-global zones: 1 IP address per subnet.

Console-access device: 1 IP address.

Logical addresses: 1 IP address per logical host resource, per subnet.

Quorum server: 1 IP address.
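As a planning aid, the per-component counts in Table 1–3 can be tallied for a specific configuration. The sketch below totals the addresses for a hypothetical two-node cluster on a single public subnet; the component mix and counts chosen here are illustrative assumptions, not a requirement of Sun Cluster.

```python
# Tally of public-network IP addresses for a hypothetical two-node
# cluster on one public subnet, using the per-component counts from
# Table 1-3.  Adjust the entries to match your own configuration.
components = {
    "administrative console": 1,   # 1 per subnet
    "cluster nodes": 2,            # 1 per node, per subnet
    "console-access device": 1,    # 1 total
    "logical addresses": 3,        # 1 per logical host resource, per subnet
}

total = sum(components.values())
print(total)  # 7 addresses to reserve on the public subnet
```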

For more information about planning IP addresses, see System Administration Guide: IP Services (Solaris 9 or Solaris 10).

Console-Access Devices

You must have console access to all cluster nodes. If you install Cluster Control Panel software on an administrative console, you must provide the hostname and port number of the console-access device that is used to communicate with the cluster nodes.

For more information about console access, see the Sun Cluster Concepts Guide for Solaris OS.

Alternatively, if you connect an administrative console directly to cluster nodes or through a management network, you instead provide the hostname of each cluster node and the serial port number that is used to connect to the administrative console or to the management network.

Logical Addresses

Each data-service resource group that uses a logical address must have a hostname specified for each public network from which the logical address can be accessed.

For more information, see the Sun Cluster Data Services Planning and Administration Guide for Solaris OS. For additional information about data services and resources, also see the Sun Cluster Overview for Solaris OS and the Sun Cluster Concepts Guide for Solaris OS.

Public Networks

Public networks communicate outside the cluster. Consider the following points when you plan your public-network configuration:

For more information about public-network interfaces, see Sun Cluster Concepts Guide for Solaris OS.

Guidelines for NFS

Consider the following points when you plan the use of Network File System (NFS) in a Sun Cluster configuration.

Service Restrictions

Observe the following service restrictions for Sun Cluster configurations:

Sun Cluster Configurable Components

This section provides guidelines for the following Sun Cluster components that you configure:

Add this information to the appropriate configuration planning worksheet.

Cluster Name

Specify a name for the cluster during Sun Cluster configuration. The cluster name should be unique throughout the enterprise.

Node Names

The cluster node name is the same name that you assign to the machine when you install it with the Solaris OS. See the hosts(4) man page for information about naming requirements.

In single-node cluster installations, the default cluster name is the name of the node.

During Sun Cluster configuration, you specify the names of all nodes that you are installing in the cluster.

Zone Names

On the Solaris 10 OS, use the naming convention nodename:zonename to specify a non-global zone to a Sun Cluster command.

To specify the global zone, you only need to specify the node name.
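The nodename:zonename convention can be sketched as a tiny formatter; the node and zone names used here are hypothetical.

```python
# Build the target string that identifies a zone to a Sun Cluster
# command: just the node name for the global zone, or
# nodename:zonename for a non-global zone.
def zone_target(nodename, zonename=None):
    return nodename if zonename is None else f"{nodename}:{zonename}"

print(zone_target("phys-schost-1"))           # phys-schost-1  (global zone)
print(zone_target("phys-schost-1", "zone1"))  # phys-schost-1:zone1
```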

Private Network


Note –

You do not need to configure a private network for a single-node cluster. The scinstall utility automatically assigns the default private-network address and netmask, even though a private network is not used by the cluster.


Sun Cluster software uses the private network for internal communication among nodes and among non-global zones that are managed by Sun Cluster software. A Sun Cluster configuration requires at least two connections to the cluster interconnect on the private network. When you configure Sun Cluster software on the first node of the cluster, you specify the private-network address and netmask in one of two ways: you can accept the default values, or you can specify different values.

If you choose to specify a different netmask, the scinstall utility prompts you for the number of nodes and the number of private networks that you want the IP address range to support. The number of nodes that you specify should also include the expected number of non-global zones that will use the private network.

The utility calculates the netmask for the minimum IP address range that will support the number of nodes and private networks that you specified. The calculated netmask might support more nodes, non-global zones, and private networks than the numbers that you supplied. The scinstall utility also calculates a second netmask that would be the minimum to support twice the number of nodes and private networks. This second netmask would enable the cluster to accommodate future growth without reconfiguring the IP address range.

The utility then asks you what netmask to choose. You can specify either of the calculated netmasks or provide a different one. The netmask that you specify must minimally support the number of nodes and private networks that you specified to the utility.
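The sizing idea behind these calculations can be illustrated with a simplified sketch. This is not the actual scinstall algorithm, which may reserve additional addresses; it only shows how a minimum netmask follows from the node and network counts.

```python
import math

def min_netmask(nodes, private_nets):
    """Smallest prefix length whose address range can hold `nodes`
    addresses on each of `private_nets` subnets, with each subnet
    rounded up to a power of two.  A simplified illustration only;
    the real scinstall calculation may differ."""
    # Round each subnet up to a power of two, allowing for the
    # network and broadcast addresses.
    subnet_size = 2 ** math.ceil(math.log2(nodes + 2))
    total = subnet_size * private_nets
    return 32 - math.ceil(math.log2(total))

# Sizing for the specified capacity and for twice that capacity,
# mirroring the two netmasks that scinstall proposes.
print(min_netmask(4, 2))  # prefix length for 4 nodes on 2 private networks
print(min_netmask(8, 4))  # prefix length for twice the nodes and networks
```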


Note –

To change the private-network address and netmask after the cluster is established, see How to Change the Private Network Address or Address Range of an Existing Cluster in Sun Cluster System Administration Guide for Solaris OS. You must bring down the cluster to make these changes.

Changing the cluster private IP address range might be necessary to support the addition of nodes, non-global zones, or private networks.


If you specify a private-network address other than the default, the address must meet the following requirements:

See Planning Your TCP/IP Network (Tasks), in System Administration Guide: IP Services (Solaris 9 or Solaris 10) for more information about private networks.

Private Hostnames

The private hostname is the name that is used for internode communication over the private-network interface. Private hostnames are created automatically during Sun Cluster configuration and follow the naming convention clusternodenodeid-priv, where nodeid is the internal node ID number. The node ID number is assigned automatically to each node when the node becomes a cluster member. After the cluster is configured, you can rename private hostnames by using the clsetup(1CL) utility.

For the Solaris 10 OS, the creation of a private hostname for a non-global zone is optional. There is no required naming convention for the private hostname of a non-global zone.
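The default naming convention can be demonstrated with a trivial sketch; the node IDs shown are hypothetical.

```python
# Generate the default private hostname for a node, following the
# clusternode<nodeid>-priv convention described above.
def private_hostname(node_id):
    return "clusternode{}-priv".format(node_id)

for node_id in (1, 2):
    print(private_hostname(node_id))
# clusternode1-priv
# clusternode2-priv
```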

Cluster Interconnect

The cluster interconnects provide the hardware pathways for private-network communication between cluster nodes. Each interconnect consists of a cable that is connected in one of the following ways:

For more information about the purpose and function of the cluster interconnect, see Cluster Interconnect in Sun Cluster Concepts Guide for Solaris OS.


Note –

You do not need to configure a cluster interconnect for a single-node cluster. However, if you anticipate eventually adding nodes to a single-node cluster configuration, you might want to configure the cluster interconnect for future use.


During Sun Cluster configuration, you specify configuration information for one or two cluster interconnects.

You can configure additional cluster interconnects after the cluster is established by using the clsetup(1CL) utility.

For guidelines about cluster interconnect hardware, see Interconnect Requirements and Restrictions in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS. For general information about the cluster interconnect, see Cluster-Interconnect Components in Sun Cluster Overview for Solaris OS and Sun Cluster Concepts Guide for Solaris OS.

Transport Adapters

For the transport adapters, such as ports on network interfaces, specify the transport adapter names and transport type. If your configuration is a two-node cluster, you also specify whether your interconnect is a point-to-point connection (adapter to adapter) or uses a transport switch.

Consider the following guidelines and restrictions:

See the scconf_trans_adap_*(1M) family of man pages for information about a specific transport adapter.

Transport Switches

If you use transport switches, such as network switches, specify a transport switch name for each interconnect. You can use the default name switchN, where N is a number that is automatically assigned during configuration, or specify another name.

Also specify the switch port name or accept the default name. The default port name is the same as the internal node ID number of the node that hosts the adapter end of the cable. However, you cannot use the default port name for certain adapter types, such as SCI-PCI.


Note –

Clusters with three or more nodes must use transport switches. Direct connection between cluster nodes is supported only for two-node clusters.


If your two-node cluster is directly connected, you can still specify a transport switch for the interconnect.


Tip –

If you specify a transport switch, you can more easily add another node to the cluster in the future.


Quorum Devices

Sun Cluster configurations use quorum devices to maintain data and resource integrity. If the cluster temporarily loses connection to a node, the quorum device prevents amnesia or split-brain problems when the node attempts to rejoin the cluster. For more information about the purpose and function of quorum devices, see Quorum and Quorum Devices in Sun Cluster Concepts Guide for Solaris OS.

During Sun Cluster installation of a two-node cluster, you can choose to let the scinstall utility automatically configure a SCSI quorum device. This quorum device is chosen from the available shared SCSI storage disks. The scinstall utility assumes that all available shared SCSI storage disks are supported for use as quorum devices.

If you want to use a quorum server or a Network Appliance NAS device as the quorum device, you configure it after scinstall processing is completed.

After installation, you can also configure additional quorum devices by using the clsetup(1CL) utility.


Note –

You do not need to configure quorum devices for a single-node cluster.


If your cluster configuration includes third-party shared storage devices that are not supported for use as quorum devices, you must use the clsetup utility to configure quorum manually.

Consider the following points when you plan quorum devices.

For more information about quorum devices, see Quorum and Quorum Devices in Sun Cluster Concepts Guide for Solaris OS and Quorum Devices in Sun Cluster Overview for Solaris OS.