Planning the Oracle Solaris Cluster Environment

This section provides guidelines for planning and preparing the following components for Oracle Solaris Cluster software installation and configuration:

- Licensing
- Software Patches
- Public-Network IP Addresses
- Console-Access Devices
- Logical Addresses
- Public Networks
- Quorum Servers
- NFS Guidelines
- Service Restrictions
- Network Time Protocol (NTP)
- Oracle Solaris Cluster Configurable Components

For detailed information about Oracle Solaris Cluster components, see the Oracle Solaris Cluster Overview and the Oracle Solaris Cluster Concepts Guide.

Licensing

Ensure that all necessary license certificates are available before you begin software installation. Oracle Solaris Cluster software does not require a license certificate, but each node installed with Oracle Solaris Cluster software must be covered under your Oracle Solaris Cluster software license agreement.

For licensing requirements for volume-manager software and application software, see the installation documentation for those products.

Software Patches

After installing each software product, you must also install any required patches. For proper cluster operation, ensure that all cluster nodes maintain the same patch level.
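As a hedged sketch of one way to check this on Solaris 10, you can capture the installed patch list on each node with showrev(1M) and compare the results; the output file path is only illustrative.

    phys-schost# showrev -p | sort > /tmp/patches.phys-schost-1

Run the equivalent command on every node, then compare the files (for example, with diff) to confirm that all nodes report the same patch level.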

Public-Network IP Addresses

For information about the use of public networks by the cluster, see Public Network Adapters and IP Network Multipathing in Oracle Solaris Cluster Concepts Guide.

You must set up a number of public-network IP addresses for various Oracle Solaris Cluster components, depending on your cluster configuration. Each Solaris host in the cluster configuration must have at least one public-network connection to the same set of public subnets.

The following table lists the components that need public-network IP addresses assigned. Add these IP addresses to the following locations:

- Any naming services that are used
- The local /etc/inet/hosts file on each global-cluster node, after you install the Solaris software

Table 1-3 Oracle Solaris Cluster Components That Use Public-Network IP Addresses

Component                                            Number of IP Addresses Needed
----------------------------------------------------------------------------------
Administrative console                               1 IP address per subnet
Global-cluster nodes                                 1 IP address per node, per subnet
Zone-cluster nodes                                   1 IP address per node, per subnet
Domain console network interface (Sun Fire 15000)    1 IP address per domain
(Optional) Non-global zones                          1 IP address per subnet
Console-access device                                1 IP address
Logical addresses                                    1 IP address per logical host resource, per subnet

For more information about planning IP addresses, see Chapter 2, Planning Your TCP/IP Network (Tasks), in System Administration Guide: IP Services.
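As a minimal sketch, the following /etc/inet/hosts entries illustrate how the components in Table 1-3 might be covered for a hypothetical two-node cluster; every hostname and address shown is a placeholder, not a required value.

    192.168.10.5    admincon        # administrative console
    192.168.10.11   phys-schost-1   # global-cluster node 1
    192.168.10.12   phys-schost-2   # global-cluster node 2
    192.168.10.20   tc0             # console-access device
    192.168.10.30   schost-lh       # logical address for a data-service resource

Remember to also register these addresses in any naming service that the cluster nodes use.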

Console-Access Devices

You must have console access to all cluster nodes. If you install Cluster Control Panel software on an administrative console, you must provide the hostname and port number of the console-access device that is used to communicate with the cluster nodes.

Alternatively, if you connect an administrative console directly to cluster nodes or through a management network, you instead provide the hostname of each global-cluster node and the serial port number that it uses to connect to the administrative console or the management network.

For more information about console access, see the Oracle Solaris Cluster Concepts Guide.

Logical Addresses

Each data-service resource group that uses a logical address must have a hostname specified for each public network from which the logical address can be accessed.

For more information, see the Oracle Solaris Cluster Data Services Planning and Administration Guide. For additional information about data services and resources, also see the Oracle Solaris Cluster Overview and the Oracle Solaris Cluster Concepts Guide.

Public Networks

Public networks communicate outside the cluster. Consider the following points when you plan your public-network configuration:

For more information about public-network interfaces, see Oracle Solaris Cluster Concepts Guide.

Quorum Servers

You can use Oracle Solaris Cluster Quorum Server software to configure a machine as a quorum server and then configure the quorum server as your cluster's quorum device. You can use a quorum server instead of or in addition to shared disks and NAS filers.

Consider the following points when you plan the use of a quorum server in an Oracle Solaris Cluster configuration.

NFS Guidelines

Consider the following points when you plan the use of Network File System (NFS) in an Oracle Solaris Cluster configuration.

Service Restrictions

Observe the following service restrictions for Oracle Solaris Cluster configurations:

Network Time Protocol (NTP)

Observe the following guidelines for NTP:

See the Oracle Solaris Cluster Concepts Guide for further information about cluster time. See the /etc/inet/ntp.cluster template file for additional guidelines about how to configure NTP for an Oracle Solaris Cluster configuration.
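As a hedged sketch modeled on that template, the peer entries among cluster nodes typically reference the private hostnames that are described later in Private Hostnames; verify the exact entries against the template file that ships with your release.

    peer clusternode1-priv prefer
    peer clusternode2-priv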

Oracle Solaris Cluster Configurable Components

This section provides guidelines for the following Oracle Solaris Cluster components that you configure:

- Global-Cluster Name
- Global-Cluster Voting-Node Names and Node IDs
- Zone Names
- Private Network
- Private Hostnames
- Cluster Interconnect
- Global Fencing
- Quorum Devices
- Zone Clusters

Add this information to the appropriate configuration planning worksheet.

Global-Cluster Name

Specify a name for the global cluster during Oracle Solaris Cluster configuration. The global-cluster name should be unique throughout the enterprise.

For information about naming a zone cluster, see Zone Clusters.

Global-Cluster Voting-Node Names and Node IDs

The name of a voting node in a global cluster is the same name that you assign to the physical or virtual host when you install it with the Solaris OS. See the hosts(4) man page for information about naming requirements.

In single-host cluster installations, the default cluster name is the name of the voting node.

During Oracle Solaris Cluster configuration, you specify the names of all voting nodes that you are installing in the global cluster.

A node ID number is assigned to each cluster node for intracluster use, beginning with the number 1. Node ID numbers are assigned to each cluster node in the order that the node becomes a cluster member. If you configure all cluster nodes in one operation, the node from which you run the scinstall utility is the last node assigned a node ID number. You cannot change a node ID number after it is assigned to a cluster node.

A node that becomes a cluster member is assigned the lowest available node ID number. If a node is removed from the cluster, its node ID becomes available for assignment to a new node. For example, if in a four-node cluster the node that is assigned node ID 3 is removed and a new node is added, the new node is assigned node ID 3, not node ID 5.

If you want the assigned node ID numbers to correspond to certain cluster nodes, configure the cluster nodes one node at a time in the order that you want the node ID numbers to be assigned. For example, to have the cluster software assign node ID 1 to phys-schost-1, configure that node as the sponsoring node of the cluster. If you next add phys-schost-2 to the cluster established by phys-schost-1, phys-schost-2 is assigned node ID 2.
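After the cluster is established, you can confirm the node ID assignments. As a minimal sketch, the clnode(1CL) command displays node properties, including the node ID; the exact output layout varies by release.

    phys-schost# clnode show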

For information about node names in a zone cluster, see Zone Clusters.

Zone Names

A non-global zone of brand native is a valid potential node of a resource-group node list. Use the naming convention nodename:zonename to specify a non-global zone to an Oracle Solaris Cluster command.

To specify the global zone, you need to specify only the voting-node name.
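As a hedged illustration of the nodename:zonename convention, the following clresourcegroup(1CL) command creates a resource group whose node list names a non-global zone on each of two hypothetical hosts; all names are placeholders.

    phys-schost# clresourcegroup create -n phys-schost-1:zoneA,phys-schost-2:zoneA rg-example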

For information about a cluster of non-global zones, see Zone Clusters.

You can turn off cluster functionality for a selected non-global zone, so that a root user who is logged in to one of these zones cannot discover or disrupt the operation of the cluster. For instructions, see

Private Network


Note - You do not need to configure a private network for a single-host global cluster. The scinstall utility automatically assigns the default private-network address and netmask, even though a private network is not used by the cluster.


Oracle Solaris Cluster software uses the private network for internal communication among nodes and among non-global zones that are managed by Oracle Solaris Cluster software. An Oracle Solaris Cluster configuration requires at least two connections to the cluster interconnect on the private network. When you configure Oracle Solaris Cluster software on the first node of the cluster, you specify the private-network address and netmask in one of the following ways:

- Accept the default private-network address and netmask.
- Specify a different private-network address, a different netmask, or both.

If you choose to specify a different netmask, the scinstall utility prompts you for the number of nodes and the number of private networks that you want the IP address range to support. The utility also prompts you for the number of zone clusters that you want to support. The number of global-cluster nodes that you specify should also include the expected number of unclustered non-global zones that will use the private network.

The utility calculates the netmask for the minimum IP address range that will support the number of nodes, zone clusters, and private networks that you specified. The calculated netmask might support more than the supplied number of nodes, including non-global zones, zone clusters, and private networks. The scinstall utility also calculates a second netmask that would be the minimum to support twice the number of nodes, zone clusters, and private networks. This second netmask would enable the cluster to accommodate future growth without the need to reconfigure the IP address range.

The utility then asks you what netmask to choose. You can specify either of the calculated netmasks or provide a different one. The netmask that you specify must minimally support the number of nodes and private networks that you specified to the utility.


Note - Changing the cluster private IP-address range might be necessary to support the addition of voting nodes, non-global zones, zone clusters, or private networks.

To change the private-network address and netmask after the cluster is established, see How to Change the Private Network Address or Address Range of an Existing Cluster in Oracle Solaris Cluster System Administration Guide. You must bring down the cluster to make these changes.

However, the cluster can remain in cluster mode if you use the cluster set-netprops command to change only the netmask. For any zone cluster that is already configured in the cluster, the private IP subnets and the corresponding private IP addresses that are allocated for that zone cluster will also be updated.
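The following is a minimal sketch of such a netmask-only change, assuming the property name that is documented in the cluster(1CL) man page; the netmask value is a placeholder, and you should verify the property names against your installed release.

    phys-schost# cluster set-netprops -p private_netmask=255.255.248.0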


If you specify a private-network address other than the default, the address must meet the following requirements:

See Chapter 2, Planning Your TCP/IP Network (Tasks), in System Administration Guide: IP Services for more information about private networks.

Private Hostnames

The private hostname is the name that is used for internode communication over the private-network interface. Private hostnames are automatically created during Oracle Solaris Cluster configuration of a global cluster or a zone cluster. These private hostnames follow the naming convention clusternodenodeid-priv, where nodeid is the numeral of the internal node ID (for example, clusternode1-priv for node ID 1). During Oracle Solaris Cluster configuration, the node ID number is automatically assigned to each voting node when the node becomes a cluster member. A voting node of the global cluster and a node of a zone cluster can both have the same private hostname, but each hostname resolves to a different private-network IP address.

After a global cluster is configured, you can rename its private hostnames by using the clsetup(1CL) utility. Currently, you cannot rename the private hostname of a zone-cluster node.

The creation of a private hostname for a non-global zone is optional. There is no required naming convention for the private hostname of a non-global zone.

Cluster Interconnect

The cluster interconnects provide the hardware pathways for private-network communication between cluster nodes. Each interconnect consists of a cable that is connected in one of the following ways:

- Between two transport adapters
- Between a transport adapter and a transport switch

For more information about the purpose and function of the cluster interconnect, see Cluster Interconnect in Oracle Solaris Cluster Concepts Guide.


Note - You do not need to configure a cluster interconnect for a single-host cluster. However, if you anticipate eventually adding more voting nodes to a single-host cluster configuration, you might want to configure the cluster interconnect for future use.


During Oracle Solaris Cluster configuration, you specify configuration information for one or two cluster interconnects.

After the cluster is established, you can use the clsetup(1CL) utility to configure additional cluster interconnects, up to six interconnects total.
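As an alternative to the interactive clsetup menus, the clinterconnect(1CL) command can add interconnect components directly. The following hedged sketch adds a cable between a hypothetical adapter endpoint and a hypothetical switch endpoint; verify the endpoint syntax in the man page before use.

    phys-schost# clinterconnect add phys-schost-1:e1000g1,switch2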

For guidelines about cluster interconnect hardware, see Interconnect Requirements and Restrictions in Oracle Solaris Cluster 3.3 Hardware Administration Manual. For general information about the cluster interconnect, see Cluster-Interconnect Components in Oracle Solaris Cluster Overview and Oracle Solaris Cluster Concepts Guide.

Transport Adapters

For the transport adapters, such as ports on network interfaces, specify the transport adapter names and transport type. If your configuration is a two-host cluster, you also specify whether your interconnect is a point-to-point connection (adapter to adapter) or uses a transport switch.

Consider the following guidelines and restrictions:

See the scconf_trans_adap_*(1M) family of man pages for information about a specific transport adapter.

Transport Switches

If you use transport switches, such as network switches, specify a transport switch name for each interconnect. You can use the default name switchN, where N is a number that is automatically assigned during configuration, or create another name.

Also specify the switch port name or accept the default name. The default port name is the same as the internal node ID number of the Solaris host that hosts the adapter end of the cable. However, you cannot use the default port name for certain adapter types.


Note - Clusters with three or more voting nodes must use transport switches. Direct connection between voting cluster nodes is supported only for two-host clusters.


Even if your two-host cluster is directly connected, you can still specify a transport switch for the interconnect.


Tip - If you specify a transport switch, you can more easily add another voting node to the cluster in the future.


Global Fencing

Fencing is a mechanism that is used by the cluster to protect the data integrity of a shared disk during split-brain situations. By default, the scinstall utility in Typical Mode leaves global fencing enabled, and each shared disk in the configuration uses the default global fencing setting of pathcount. With the pathcount setting, the fencing protocol for each shared disk is chosen based on the number of DID paths that are attached to the disk.

In Custom Mode, the scinstall utility asks whether you want to disable global fencing. For most situations, respond No to keep global fencing enabled. However, you can disable global fencing to support the following situations:


Caution - If you disable fencing in situations other than the following, your data might be vulnerable to corruption during application failover. Examine this possibility of data corruption carefully when you consider turning off fencing.


If you disable global fencing during cluster configuration, fencing is turned off for all shared disks in the cluster. After the cluster is configured, you can change the global fencing protocol or override the fencing protocol of individual shared disks. However, to change the fencing protocol of a quorum device, you must first unconfigure the quorum device. Then set the new fencing protocol of the disk and reconfigure it as a quorum device.
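As a hedged sketch of these post-configuration changes, assuming the property names that are documented in the cluster(1CL) and cldevice(1CL) man pages, the first command resets the global default and the second overrides the protocol of a single shared disk; d5 is a hypothetical DID device name.

    phys-schost# cluster set -p global_fencing=pathcount
    phys-schost# cldevice set -p default_fencing=nofencing d5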

For more information about fencing behavior, see Failfast Mechanism in Oracle Solaris Cluster Concepts Guide. For more information about setting the fencing protocol of individual shared disks, see the cldevice(1CL) man page. For more information about the global fencing setting, see the cluster(1CL) man page.

Quorum Devices

Oracle Solaris Cluster configurations use quorum devices to maintain data and resource integrity. If the cluster temporarily loses connection to a voting node, the quorum device prevents amnesia or split-brain problems when the voting cluster node attempts to rejoin the cluster. For more information about the purpose and function of quorum devices, see Quorum and Quorum Devices in Oracle Solaris Cluster Concepts Guide.

During Oracle Solaris Cluster installation of a two-host cluster, you can choose to let the scinstall utility automatically configure an available shared disk in the configuration as a quorum device. Shared disks include any Sun NAS device that is configured for use as a shared disk. The scinstall utility assumes that all available shared disks are supported as quorum devices.

If you want to use a quorum server, an Oracle Sun Storage 7000 Unified Storage System NAS device, or a Network Appliance NAS device as the quorum device, you configure it after scinstall processing is completed.

After installation, you can also configure additional quorum devices by using the clsetup(1CL) utility.
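As a direct, noninteractive alternative to clsetup, the following is a minimal sketch of adding quorum devices with clquorum(1CL); the DID device name, quorum-server host, port, and device name are all hypothetical, and the type and property names should be verified against the man page for your release.

    phys-schost# clquorum add d4
    phys-schost# clquorum add -t quorum_server -p qshost=10.11.114.81 -p port=9000 qs1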


Note - You do not need to configure quorum devices for a single-host cluster.


If your cluster configuration includes third-party shared storage devices that are not supported for use as quorum devices, you must use the clsetup utility to configure quorum manually.

Consider the following points when you plan quorum devices.

For more information about quorum devices, see Quorum and Quorum Devices in Oracle Solaris Cluster Concepts Guide and Quorum Devices in Oracle Solaris Cluster Overview.

Zone Clusters

A zone cluster is a cluster of non-global Solaris zones. All nodes of a zone cluster are configured as non-global zones of the cluster brand; no other brand type is permitted in a zone cluster. You can run supported services in a zone cluster much as you would in a global cluster, with the isolation that Solaris zones provide.
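As a brief hedged sketch, zone clusters are created and managed with the clzonecluster(1CL) command; sczone is a hypothetical zone-cluster name, and the configure subcommand opens an interactive session that is similar to zonecfg(1M).

    phys-schost# clzonecluster configure sczone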

Consider the following points when you plan the creation of a zone cluster.

Global-Cluster Requirements and Guidelines

Zone-Cluster Requirements and Guidelines

Guidelines for Trusted Extensions in a Zone Cluster

Consider the following points when you use the Trusted Extensions feature of Oracle Solaris in a zone cluster: