Oracle® Solaris Cluster Software Installation Guide

Updated: September 2014, E39580-02

Oracle Solaris Cluster Configurable Components

This section provides guidelines for the following Oracle Solaris Cluster components that you configure:

Global-Cluster Name

Specify a name for the global cluster during Oracle Solaris Cluster configuration. The global cluster name should be unique throughout the enterprise.

For information about naming a zone cluster, see Zone Clusters.

Global-Cluster Node Names and Node IDs

The name of a node in a global cluster is the same name that you assign to the physical or virtual host when you install it with the Oracle Solaris OS. See the hosts(4) man page for information about naming requirements.

In single-host cluster installations, the default cluster name is the name of the node.

During Oracle Solaris Cluster configuration, you specify the names of all nodes that you are installing in the global cluster. The node name must be the same as the output of the uname -n command.
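For example, you can confirm the expected node name on each host before you begin configuration. The node name phys-schost-1 in the comment is only a sample value.

    # Print the node name that Oracle Solaris Cluster configuration expects
    # for this host (for example, phys-schost-1).
    uname -n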

A node ID number is assigned to each cluster node for intracluster use, beginning with the number 1. Node ID numbers are assigned to each cluster node in the order that the node becomes a cluster member. If you configure all cluster nodes in one operation, the node from which you run the scinstall utility is the last node assigned a node ID number. You cannot change a node ID number after it is assigned to a cluster node.

A node that becomes a cluster member is assigned the lowest available node ID number. If a node is removed from the cluster, its node ID becomes available for assignment to a new node. For example, if in a four-node cluster the node that is assigned node ID 3 is removed and a new node is added, the new node is assigned node ID 3, not node ID 5.

If you want the assigned node ID numbers to correspond to certain cluster nodes, configure the cluster nodes one node at a time in the order that you want the node ID numbers to be assigned. For example, to have the cluster software assign node ID 1 to phys-schost-1, configure that node as the sponsoring node of the cluster. If you next add phys-schost-2 to the cluster established by phys-schost-1, phys-schost-2 is assigned node ID 2.

For information about node names in a zone cluster, see Zone Clusters.

Private Network Configuration


Note -  You do not need to configure a private network for a single-host global cluster. The scinstall utility automatically assigns the default private-network address and netmask even though a private network is not used by the cluster.

Oracle Solaris Cluster software uses the private network for internal communication among nodes and among non-global zones that are managed by Oracle Solaris Cluster software. An Oracle Solaris Cluster configuration requires at least two connections to the cluster interconnect on the private network. When you configure Oracle Solaris Cluster software on the first node of the cluster, you specify the private-network address and netmask in one of the following ways:

  • Accept the default private-network address (172.16.0.0) and default netmask (255.255.240.0). This IP address range supports a combined maximum of 64 nodes and non-global zones, a maximum of 12 zone clusters, and a maximum of 10 private networks.


    Note -  The maximum number of nodes that an IP address range can support does not reflect the maximum number of nodes that the hardware or software configuration can currently support.
  • Specify a different allowable private-network address and accept the default netmask.

  • Accept the default private-network address and specify a different netmask.

  • Specify both a different private-network address and a different netmask.

If you choose to specify a different netmask, the scinstall utility prompts you for the number of nodes and the number of private networks that you want the IP address range to support. The utility also prompts you for the number of zone clusters that you want to support. The number of global-cluster nodes that you specify should also include the expected number of unclustered non-global zones that will use the private network.

The utility calculates the netmask for the minimum IP address range that will support the number of nodes, zone clusters, and private networks that you specified. The calculated netmask might support more than the supplied number of nodes, including non-global zones, zone clusters, and private networks. The scinstall utility also calculates a second netmask that would be the minimum to support twice the number of nodes, zone clusters, and private networks. This second netmask would enable the cluster to accommodate future growth without the need to reconfigure the IP address range.

The utility then asks you what netmask to choose. You can specify either of the calculated netmasks or provide a different one. The netmask that you specify must minimally support the number of nodes and private networks that you specified to the utility.


Note -  Changing the cluster private IP address range might be necessary to support the addition of nodes, non-global zones, zone clusters, or private networks.

To change the private-network address and netmask after the cluster is established, see How to Change the Private Network Address or Address Range of an Existing Cluster in Oracle Solaris Cluster System Administration Guide. You must bring down the cluster to make these changes.

However, the cluster can remain in cluster mode if you use the cluster set-netprops command to change only the netmask. For any zone cluster that is already configured in the cluster, the private IP subnets and the corresponding private IP addresses that are allocated for that zone cluster will also be updated.
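The following sketch shows what a netmask-only change might look like while the cluster remains in cluster mode. The property name private_netmask and the example netmask value are assumptions for illustration; verify them in the cluster(1CL) man page for your release.

    # Change only the private-network netmask while the cluster stays in
    # cluster mode.  The property name and netmask value shown here are
    # illustrative assumptions; confirm them in the cluster(1CL) man page.
    cluster set-netprops -p private_netmask=255.255.224.0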


If you specify a private-network address other than the default, the address must meet the following requirements:

  • Address and netmask sizes – The private network address cannot be smaller than the netmask. For example, you can use a private network address of 172.16.10.0 with a netmask of 255.255.255.0. However, you cannot use a private network address of 172.16.10.0 with a netmask of 255.255.0.0.

  • Acceptable addresses – The address must be included in the block of addresses that RFC 1918 reserves for use in private networks. You can contact the InterNIC to obtain copies of RFCs or view RFCs online at http://www.rfcs.org.

  • Use in multiple clusters – You can use the same private-network address in more than one cluster provided that the clusters are on different private networks. Private IP network addresses are not accessible from outside the physical cluster.

  • Oracle VM Server for SPARC – When guest domains are created on the same physical machine and are connected to the same virtual switch, the private network is shared by such guest domains and is visible to all these domains. Proceed with caution before you specify a private-network IP address range to the scinstall utility for use by a cluster of guest domains. Ensure that the address range is not already in use by another guest domain that exists on the same physical machine and shares its virtual switch.

  • VLANs shared by multiple clusters – Oracle Solaris Cluster configurations support the sharing of the same private-interconnect VLAN among multiple clusters. You do not have to configure a separate VLAN for each cluster. However, for the highest level of fault isolation and interconnect resilience, limit the use of a VLAN to a single cluster.

  • IPv6 – Oracle Solaris Cluster software does not support IPv6 addresses for the private interconnect. The system does configure IPv6 addresses on the private-network adapters to support scalable services that use IPv6 addresses. However, internode communication on the private network does not use these IPv6 addresses.

See Planning for Network Deployment in Oracle Solaris 11.2 for more information about private networks.

Private Hostnames

The private hostname is the name that is used for internode communication over the private-network interface. Private hostnames are automatically created during Oracle Solaris Cluster configuration of a global cluster or a zone cluster. These private hostnames follow the naming convention clusternodenode-id-priv, where node-id is the node ID number of the node. For example, the node that is assigned node ID 1 has the private hostname clusternode1-priv. During Oracle Solaris Cluster configuration, the node ID number is automatically assigned to each node when the node becomes a cluster member. A node of the global cluster and a node of a zone cluster can both have the same private hostname, but each hostname resolves to a different private-network IP address.

After a global cluster is configured, you can rename its private hostnames by using the clsetup(1CL) utility. Currently, you cannot rename the private hostname of a zone-cluster node.

The creation of a private hostname for a non-global zone is optional. There is no required naming convention for the private hostname of a non-global zone.
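As a quick check after configuration, you can confirm that a private hostname resolves to a private-network IP address on a cluster node. The node ID in the hostname below is only an example.

    # Confirm that the private hostname of the node with node ID 1
    # resolves to a private-network IP address.
    getent hosts clusternode1-priv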

Cluster Interconnect

The cluster interconnects provide the hardware pathways for private-network communication between cluster nodes. Each interconnect consists of a cable that is connected in one of the following ways:

  • Between two transport adapters

  • Between a transport adapter and a transport switch

For more information about the purpose and function of the cluster interconnect, see Cluster Interconnect in Oracle Solaris Cluster Concepts Guide.


Note -  You do not need to configure a cluster interconnect for a single-host cluster. However, if you anticipate eventually adding more nodes to a single-host cluster configuration, you might want to configure the cluster interconnect for future use.

During Oracle Solaris Cluster configuration, you specify configuration information for one or two cluster interconnects.

  • If the number of available adapter ports is limited, you can use tagged VLANs to share the same adapter with both the private and public network. For more information, see the guidelines for tagged VLAN adapters in Transport Adapters.

  • You can set up from one to six cluster interconnects in a cluster. While a single cluster interconnect reduces the number of adapter ports that are used for the private interconnect, it provides no redundancy and lowers availability. If a single interconnect fails, the cluster is at a higher risk of having to perform automatic recovery. Whenever possible, install two or more cluster interconnects to provide redundancy and scalability, and therefore higher availability, by avoiding a single point of failure.

You can configure additional cluster interconnects, up to six interconnects total, after the cluster is established by using the clsetup utility.

For guidelines about cluster interconnect hardware, see Interconnect Requirements and Restrictions in Oracle Solaris Cluster 4.2 Hardware Administration Manual. For general information about the cluster interconnect, see Cluster Interconnect in Oracle Solaris Cluster Concepts Guide.

Transport Adapters

For the transport adapters, such as ports on network interfaces, specify the transport adapter names and transport type. If your configuration is a two-host cluster, you also specify whether your interconnect is a point-to-point connection (adapter to adapter) or uses a transport switch.

Consider the following guidelines and restrictions:

  • IPv6 – Oracle Solaris Cluster software does not support IPv6 communications over the private interconnects.

  • Local MAC address assignment – All private network adapters must use network interface cards (NICs) that support local MAC address assignment. Link-local IPv6 addresses, which are required on private-network adapters to support IPv6 public-network addresses for scalable data services, are derived from the local MAC addresses.

  • Tagged VLAN adapters – Oracle Solaris Cluster software supports tagged Virtual Local Area Networks (VLANs) to share an adapter between the private cluster interconnect and the public network. You must use the dladm create-vlan command to configure the adapter as a tagged VLAN adapter before you configure it with the cluster.

    To configure a tagged VLAN adapter for the cluster interconnect, specify the adapter by its VLAN virtual device name. This name is composed of the adapter name plus the VLAN instance number. The VLAN instance number is derived from the formula (1000*V)+N, where V is the VID number and N is the PPA.

    As an example, for VID 73 on adapter net2, the VLAN instance number is calculated as (1000*73)+2 = 73002. You would therefore specify the adapter name as net73002 to indicate that it is part of a shared virtual LAN (see the command sketch after this list).

    For information about configuring VLAN in a cluster, see Configuring VLANs as Private Interconnect Networks in Oracle Solaris Cluster 4.2 Hardware Administration Manual. For information about creating and administering VLANs, see the dladm(1M) man page and Chapter 3, Configuring Virtual Networks by Using Virtual Local Area Networks, in Managing Network Datalinks in Oracle Solaris 11.2.

  • SPARC: Oracle VM Server for SPARC guest domains – Specify adapter names by their virtual names, vnetN, such as vnet0 and vnet1. Virtual adapter names are recorded in the /etc/path_to_inst file.

  • Logical network interfaces – Logical network interfaces are reserved for use by Oracle Solaris Cluster software.
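
The following sketch shows how the tagged VLAN adapter from the earlier example might be created before you configure it with the cluster. The link name net2 and VID 73 are taken from that example; substitute your own values.

    # Create a tagged VLAN adapter for VID 73 over the physical link net2.
    # The VLAN link name net73002 follows the (1000*VID)+PPA formula
    # described above.
    dladm create-vlan -l net2 -v 73 net73002

    # Verify the new VLAN link.
    dladm show-vlan net73002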

Transport Switches

If you use transport switches, such as a network switch, specify a transport switch name for each interconnect. You can use the default name switchN, where N is a number that is automatically assigned during configuration, or create another name.

Also specify the switch port name or accept the default name. The default port name is the same as the internal node ID number of the Oracle Solaris host that hosts the adapter end of the cable. However, you cannot use the default port name for certain adapter types.

Clusters with three or more nodes must use transport switches. Direct connection between cluster nodes is supported only for two-host clusters. If your two-host cluster is directly connected, you can still specify a transport switch for the interconnect.


Tip  -  If you specify a transport switch, you can more easily add another node to the cluster in the future.

Global Fencing

Fencing is a mechanism that is used by the cluster to protect the data integrity of a shared disk during split-brain situations. By default, the scinstall utility in Typical Mode leaves global fencing enabled, and each shared disk in the configuration uses the default global fencing setting of prefer3. With the prefer3 setting, the SCSI-3 protocol is used.

If any device is unable to use the SCSI-3 protocol, use the pathcount setting instead, where the fencing protocol for the shared disk is chosen based on the number of DID paths that are attached to the disk. Non-SCSI-3 capable devices are limited to two DID device paths within the cluster. Fencing can be turned off for devices that support neither SCSI-3 nor SCSI-2 fencing. However, data integrity for such devices cannot be guaranteed during split-brain situations.

In Custom Mode, the scinstall utility asks you whether to disable global fencing. For most situations, respond No to keep global fencing enabled. However, you can disable global fencing in certain situations.


Caution  -  If you disable fencing in situations other than the ones described, your data might be vulnerable to corruption during application failover. Examine this data corruption possibility carefully when you consider turning off fencing.


The situations in which you can disable global fencing are as follows:

  • The shared storage does not support SCSI reservations.

    If you turn off fencing for a shared disk that you then configure as a quorum device, the device uses the software quorum protocol. This is true regardless of whether the disk supports SCSI-2 or SCSI-3 protocols. Software quorum is a protocol in Oracle Solaris Cluster software that emulates a form of SCSI Persistent Group Reservations (PGR).

  • You want to enable systems that are outside the cluster to gain access to storage that is attached to the cluster.

If you disable global fencing during cluster configuration, fencing is turned off for all shared disks in the cluster. After the cluster is configured, you can change the global fencing protocol or override the fencing protocol of individual shared disks. However, to change the fencing protocol of a quorum device, you must first unconfigure the quorum device. Then set the new fencing protocol of the disk and reconfigure it as a quorum device.
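
As an illustration, the following hedged sketch changes the global fencing setting, overrides fencing for one shared disk, and shows the unconfigure-then-reconfigure sequence for a quorum device. The DID device name d3 is a hypothetical example; confirm the property names and values in the cluster(1CL) and cldevice(1CL) man pages for your release.

    # Change the cluster-wide default fencing setting, for example from
    # prefer3 to pathcount.
    cluster set -p global_fencing=pathcount

    # Override the fencing protocol of a single shared disk.  The DID
    # device name d3 is a hypothetical example.
    cldevice set -p default_fencing=nofencing d3

    # To change the fencing protocol of a quorum device, unconfigure the
    # quorum device first, then reconfigure it after the change.
    clquorum remove d3
    cldevice set -p default_fencing=nofencing d3
    clquorum add d3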

For more information about fencing behavior, see Failfast Mechanism in Oracle Solaris Cluster Concepts Guide. For more information about setting the fencing protocol of individual shared disks, see the cldevice(1CL) man page. For more information about the global fencing setting, see the cluster(1CL) man page.

Quorum Devices

Oracle Solaris Cluster configurations use quorum devices to maintain data and resource integrity. If the cluster temporarily loses connection to a node, the quorum device prevents amnesia or split-brain problems when the cluster node attempts to rejoin the cluster. For more information about the purpose and function of quorum devices, see Quorum and Quorum Devices in Oracle Solaris Cluster Concepts Guide.

During Oracle Solaris Cluster installation of a two-host cluster, you can choose to have the scinstall utility automatically configure an available shared disk in the configuration as a quorum device. The scinstall utility assumes that all available shared disks are supported as quorum devices.

If you want to use a quorum server or an Oracle ZFS Storage Appliance NAS device as the quorum device, you configure it after scinstall processing is completed.

After installation, you can also configure additional quorum devices by using the clsetup utility.
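
In addition to the interactive clsetup menus, one command-line way to add a shared-disk quorum device after installation is sketched below. The DID device name d4 is a hypothetical example.

    # Configure the shared disk d4 (a hypothetical DID device) as an
    # additional quorum device, then check the quorum configuration.
    clquorum add d4
    clquorum status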


Note -  You do not need to configure quorum devices for a single-host cluster.

If your cluster configuration includes third-party shared storage devices that are not supported for use as quorum devices, you must use the clsetup utility to configure quorum manually.

Consider the following points when you plan quorum devices:

  • Minimum – A two-host cluster must have at least one quorum device, which can be a shared disk, a quorum server, or a NAS device. For other topologies, quorum devices are optional.

  • Odd-number rule – If more than one quorum device is configured in a two-host cluster or in a pair of hosts directly connected to the quorum device, configure an odd number of quorum devices. This configuration ensures that the quorum devices have completely independent failure pathways.

  • Distribution of quorum votes – For highest availability of the cluster, ensure that the total number of votes that are contributed by quorum devices is less than the total number of votes that are contributed by nodes. Otherwise, the nodes cannot form a cluster if all quorum devices are unavailable even if all nodes are functioning.

  • Connection – You must connect a quorum device to at least two nodes.

  • SCSI fencing protocol – When a SCSI shared-disk quorum device is configured, its fencing protocol is automatically set to SCSI-2 in a two-host cluster or SCSI-3 in a cluster with three or more nodes.

  • Changing the fencing protocol of quorum devices – For SCSI disks that are configured as a quorum device, you must unconfigure the quorum device before you can enable or disable its SCSI fencing protocol.

  • Software quorum protocol – You can configure supported shared disks that do not support SCSI protocol, such as SATA disks, as quorum devices. You must disable fencing for such disks. The disks would then use the software quorum protocol, which emulates SCSI PGR.

    The software quorum protocol is also used by SCSI shared disks if fencing is disabled for such disks.

  • Replicated devices – Oracle Solaris Cluster software does not support replicated devices as quorum devices.

  • ZFS storage pools – Do not add a configured quorum device to a ZFS storage pool. When a configured quorum device is added to a ZFS storage pool, the disk is relabeled as an EFI disk and quorum configuration information is lost. The disk can then no longer provide a quorum vote to the cluster.

    After a disk is added to a storage pool, however, you can configure that disk as a quorum device. Or, you can unconfigure the quorum device, add it to the storage pool, and then reconfigure the disk as a quorum device.

For more information about quorum devices, see Quorum and Quorum Devices in Oracle Solaris Cluster Concepts Guide.