Sun Cluster Software Installation Guide for Solaris OS

Establishing a New Global Cluster or New Global-Cluster Node

This section provides information and procedures to establish a new global cluster or to add a node to an existing cluster. Global-cluster nodes can be physical machines, (SPARC only) Sun Logical Domains (LDoms) I/O domains, or Sun LDoms guest domains. A cluster can consist of a combination of any of these node types. Before you start to perform these tasks, ensure that you have installed the software packages for the Solaris OS, Sun Cluster framework, and other products as described in Installing the Software.


Note –

You can alternatively deploy the Sun Cluster Plug-in for Sun N1™ Service Provisioning System to create a multiple-node cluster or add a node to an existing cluster. Follow instructions in the documentation that is provided with the plug-in. You can also access this information at http://wikis.sun.com/display/SunCluster/Sun+Cluster+Framework+Plug-in.


The following task map lists the tasks to perform. Complete the procedures in the order that is indicated.

Table 3–1 Task Map: Establish the Global Cluster

In this task map, each method or task is followed by a cross-reference to its instructions.

Use one of the following methods to establish a new global cluster or add a node to an existing global cluster:

  • (New clusters only) Use the scinstall utility to establish the cluster.

How to Configure Sun Cluster Software on All Nodes (scinstall)

  • (New clusters only) Use an XML configuration file to establish the cluster.

How to Configure Sun Cluster Software on All Nodes (XML)

  • (New clusters or added nodes) Set up a JumpStart install server. Then create a flash archive of the installed system. Finally, use the scinstall JumpStart option to install the flash archive on each node and establish the cluster.

How to Install Solaris and Sun Cluster Software (JumpStart)

  • (Added nodes only) Use the clsetup command to add the new node to the cluster authorized-nodes list. If necessary, also configure the cluster interconnect and reconfigure the private network address range.

    Configure Sun Cluster software on a new node by using the scinstall utility or by using an XML configuration file.

How to Prepare the Cluster for Additional Global-Cluster Nodes

How to Change the Private Network Configuration When Adding Nodes or Private Networks

How to Configure Sun Cluster Software on Additional Global-Cluster Nodes (scinstall)

How to Configure Sun Cluster Software on Additional Global-Cluster Nodes (XML)

If you added a node to a cluster, update the quorum configuration information. 

How to Update Quorum Devices After Adding a Node to a Global Cluster

Assign quorum votes and remove the cluster from installation mode, if this operation was not already performed. 

How to Configure Quorum Devices

Validate the quorum configuration. 

How to Verify the Quorum Configuration and Installation Mode

(Optional) Change a node's private hostname.

How to Change Private Hostnames

Create or modify the NTP configuration file, if not already configured. 

How to Configure Network Time Protocol (NTP)

(Optional) Configure IPsec to secure the private interconnect.

How to Configure IP Security Architecture (IPsec) on the Cluster Private Interconnect

If using a volume manager, install the volume management software. 

Chapter 4, Configuring Solaris Volume Manager Software or Chapter 5, Installing and Configuring Veritas Volume Manager

Create cluster file systems or highly available local file systems as needed. 

How to Create Cluster File Systems or Enabling Highly Available Local File Systems in Sun Cluster Data Services Planning and Administration Guide for Solaris OS

(Optional) SPARC: Configure Sun Management Center to monitor the cluster.

SPARC: Installing the Sun Cluster Module for Sun Management Center

Install third-party applications, register resource types, set up resource groups, and configure data services. 

Sun Cluster Data Services Planning and Administration Guide for Solaris OS

Documentation that is supplied with the application software 

Take a baseline recording of the finished cluster configuration. 

How to Record Diagnostic Data of the Cluster Configuration

Procedure: How to Configure Sun Cluster Software on All Nodes (scinstall)

Perform this procedure from one node of the global cluster to configure Sun Cluster software on all nodes of the cluster.


Note –

This procedure uses the interactive form of the scinstall command. To use the noninteractive forms of the scinstall command, such as when developing installation scripts, see the scinstall(1M) man page.

Ensure that Sun Cluster software packages are installed on the node, either manually or by using the silent-mode form of the Java ES installer program, before you run the scinstall command. For information about running the Java ES installer program from an installation script, see Chapter 5, Installing in Silent Mode, in Sun Java Enterprise System 5 Update 1 Installation Guide for UNIX.


Before You Begin

Perform the following tasks:

Follow these guidelines to use the interactive scinstall utility in this procedure:

  1. If you disabled remote configuration during Sun Cluster software installation, re-enable remote configuration.

    Enable remote shell (rsh(1M)) or secure shell (ssh(1)) access for superuser to all cluster nodes.
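
    For example, if you use ssh, you can confirm that superuser access works from the node where you will run scinstall by running a trivial remote command on each of the other nodes. This is a sketch only; the node names are placeholders, and it assumes that your configuration permits direct root logins over ssh.


    phys-schost-1# ssh -l root phys-schost-2 hostname
    phys-schost-2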

  2. If you are using switches in the private interconnect of your new cluster, ensure that Neighbor Discovery Protocol (NDP) is disabled.

    Follow the procedures in the documentation for your switches to determine whether NDP is enabled and to disable NDP.

    During cluster configuration, the software checks that there is no traffic on the private interconnect. If NDP sends any packets to a private adapter while the private interconnect is being checked for traffic, the software assumes that the interconnect is not private and cluster configuration is interrupted. NDP must therefore be disabled during cluster creation.

    After the cluster is established, you can re-enable NDP on the private-interconnect switches if you want to use that feature.

  3. From one cluster node, start the scinstall utility.


    phys-schost# /usr/cluster/bin/scinstall
    
  4. Type the option number for Create a New Cluster or Add a Cluster Node and press the Return key.


     *** Main Menu ***
    
        Please select from one of the following (*) options:
    
          * 1) Create a new cluster or add a cluster node
          * 2) Configure a cluster to be JumpStarted from this install server
            3) Manage a dual-partition upgrade
            4) Upgrade this cluster node
          * 5) Print release information for this cluster node
     
          * ?) Help with menu options
          * q) Quit
    
        Option:  1
    

    The New Cluster and Cluster Node Menu is displayed.

  5. Type the option number for Create a New Cluster and press the Return key.

    The Typical or Custom Mode menu is displayed.

  6. Type the option number for either Typical or Custom and press the Return key.

    The Create a New Cluster screen is displayed. Read the requirements, then press Control-D to continue.

  7. Follow the menu prompts to supply your answers from the configuration planning worksheet.

    The scinstall utility installs and configures all cluster nodes and reboots the cluster. The cluster is established when all nodes have successfully booted into the cluster. Sun Cluster installation output is logged in a /var/cluster/logs/install/scinstall.log.N file.

  8. For the Solaris 10 OS, verify on each node that multiuser services for the Service Management Facility (SMF) are online.

    If services are not yet online for a node, wait until the state changes to online before you proceed to the next step.


    phys-schost# svcs multi-user-server node
    STATE          STIME    FMRI
    online         17:52:55 svc:/milestone/multi-user-server:default
  9. From one node, verify that all nodes have joined the cluster.


    phys-schost# clnode status
    

    Output resembles the following.


    === Cluster Nodes ===
    
    --- Node Status ---
    
    Node Name                                       Status
    ---------                                       ------
    phys-schost-1                                   Online
    phys-schost-2                                   Online
    phys-schost-3                                   Online

    For more information, see the clnode(1CL) man page.

  10. (Optional) Enable the automatic node reboot feature.

    This feature automatically reboots a node if all monitored shared-disk paths fail, provided that at least one of the disks is accessible from a different node in the cluster.

    1. Enable automatic reboot.


      phys-schost# clnode set -p reboot_on_path_failure=enabled
      
      -p

      Specifies the property to set

      reboot_on_path_failure=enabled

      Enables automatic node reboot if failure of all monitored shared-disk paths occurs.

    2. Verify that automatic reboot on disk-path failure is enabled.


      phys-schost# clnode show
      === Cluster Nodes ===                          
      
      Node Name:                                      node
      …
        reboot_on_path_failure:                          enabled
      …
  11. If you intend to use Sun Cluster HA for NFS on a highly available local file system, ensure that the loopback file system (LOFS) is disabled.

    To disable LOFS, add the following entry to the /etc/system file on each node of the cluster.


    exclude:lofs

    The change to the /etc/system file becomes effective after the next system reboot.
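
    As an illustration, the following commands append the entry on one node and confirm that it is present. This is a sketch only; you can equally add the line with a text editor.


    phys-schost# echo "exclude:lofs" >> /etc/system
    phys-schost# grep lofs /etc/system
    exclude:lofs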


    Note –

    You cannot have LOFS enabled if you use Sun Cluster HA for NFS on a highly available local file system and have automountd running. LOFS can cause switchover problems for Sun Cluster HA for NFS. If you choose to add Sun Cluster HA for NFS on a highly available local file system, you must make one of the following configuration changes.

    However, if you configure non-global zones in your cluster, you must enable LOFS on all cluster nodes. If Sun Cluster HA for NFS on a highly available local file system must coexist with LOFS, use one of the other solutions instead of disabling LOFS.

    • Disable LOFS.

    • Disable the automountd daemon.

    • Exclude from the automounter map all files that are part of the highly available local file system that is exported by Sun Cluster HA for NFS. This choice enables you to keep both LOFS and the automountd daemon enabled.


    See The Loopback File System in System Administration Guide: Devices and File Systems (Solaris 9 or Solaris 10) for more information about loopback file systems.


Example 3–1 Configuring Sun Cluster Software on All Nodes

The following example shows the scinstall progress messages that are logged as scinstall completes configuration tasks on the two-node cluster, schost. The cluster is installed from phys-schost-1 by using the scinstall utility in Typical Mode. The other cluster node is phys-schost-2. The adapter names are qfe2 and qfe3. The automatic selection of a quorum device is enabled. Both nodes use the partition /globaldevices for the global-devices namespace.


  Installation and Configuration

    Log file - /var/cluster/logs/install/scinstall.log.24747

    Testing for "/globaldevices" on "phys-schost-1" … done
    Testing for "/globaldevices" on "phys-schost-2" … done
    Checking installation status … done

    The Sun Cluster software is already installed on "phys-schost-1".
    The Sun Cluster software is already installed on "phys-schost-2".
    Starting discovery of the cluster transport configuration.

    The following connections were discovered:

        phys-schost-1:qfe2  switch1  phys-schost-2:qfe2
        phys-schost-1:qfe3  switch2  phys-schost-2:qfe3

    Completed discovery of the cluster transport configuration.

    Started cluster check on "phys-schost-1".
    Started cluster check on "phys-schost-2".

    cluster check completed with no errors or warnings for "phys-schost-1".
    cluster check completed with no errors or warnings for "phys-schost-2".

    Removing the downloaded files … done

    Configuring "phys-schost-2" … done
    Rebooting "phys-schost-2" … done

    Configuring "phys-schost-1" … done
    Rebooting "phys-schost-1" …

Log file - /var/cluster/logs/install/scinstall.log.24747

Rebooting …

Troubleshooting

Unsuccessful configuration – If one or more nodes cannot join the cluster, or if the wrong configuration information was specified, first attempt to rerun this procedure. If that does not correct the problem, perform the procedure How to Unconfigure Sun Cluster Software to Correct Installation Problems on each misconfigured node to remove it from the cluster configuration. You do not need to uninstall the Sun Cluster software packages. Then rerun this procedure.

Next Steps

If you intend to configure any quorum devices in your cluster, go to How to Configure Quorum Devices.

Otherwise, go to How to Verify the Quorum Configuration and Installation Mode.

Procedure: How to Configure Sun Cluster Software on All Nodes (XML)

Perform this procedure to configure a new global cluster by using an XML cluster configuration file. The new cluster can be a duplication of an existing cluster that runs Sun Cluster 3.2 11/09 software.

This procedure configures the following cluster components:

Before You Begin

Perform the following tasks:

  1. Ensure that Sun Cluster 3.2 11/09 software is not yet configured on each potential cluster node.

    1. Become superuser on a potential node that you want to configure in the new cluster.

    2. Determine whether Sun Cluster software is already configured on the potential node.


      phys-schost# /usr/sbin/clinfo -n
      
      • If the command returns the following message, proceed to Step c.


        clinfo: node is not configured as part of a cluster: Operation not applicable

        This message indicates that Sun Cluster software is not yet configured on the potential node.

      • If the command returns the node ID number, do not perform this procedure.

        The return of a node ID indicates that Sun Cluster software is already configured on the node.

        If the cluster is running an older version of Sun Cluster software and you want to install Sun Cluster 3.2 11/09 software, instead perform upgrade procedures in Sun Cluster Upgrade Guide for Solaris OS.

    3. Repeat Step a and Step b on each remaining potential node that you want to configure in the new cluster.

      If Sun Cluster software is not yet configured on any of the potential cluster nodes, proceed to Step 2.

  2. If you are using switches in the private interconnect of your new cluster, ensure that Neighbor Discovery Protocol (NDP) is disabled.

    Follow the procedures in the documentation for your switches to determine whether NDP is enabled and to disable NDP.

    During cluster configuration, the software checks that there is no traffic on the private interconnect. If NDP sends any packets to a private adapter while the private interconnect is being checked for traffic, the software assumes that the interconnect is not private and cluster configuration is interrupted. NDP must therefore be disabled during cluster creation.

    After the cluster is established, you can re-enable NDP on the private-interconnect switches if you want to use that feature.

  3. If you are duplicating an existing cluster that runs Sun Cluster 3.2 11/09 software, use a node in that cluster to create a cluster configuration XML file.

    1. Become superuser on an active member of the cluster that you want to duplicate.

    2. Export the existing cluster's configuration information to a file.


      phys-schost# cluster export -o clconfigfile
      
      -o

      Specifies the output destination.

      clconfigfile

      The name of the cluster configuration XML file. The specified file name can be an existing file or a new file that the command will create.

      For more information, see the cluster(1CL) man page.

    3. Copy the configuration file to the potential node from which you will configure the new cluster.

      You can store the file in any directory that is accessible to the other hosts that you will configure as cluster nodes.
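
      For example, you might copy the file to the potential node with scp. The host name and directory shown here are placeholders; substitute values that apply to your site.


      phys-oldhost-1# scp clconfigfile phys-newhost-1:/var/tmp/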

  4. Become superuser on the potential node from which you will configure the new cluster.

  5. Modify the cluster configuration XML file as needed.

    1. Open your cluster configuration XML file for editing.

      • If you are duplicating an existing cluster, open the file that you created with the cluster export command.

      • If you are not duplicating an existing cluster, create a new file.

        Base the file on the element hierarchy that is shown in the clconfiguration(5CL) man page. You can store the file in any directory that is accessible to the other hosts that you will configure as cluster nodes.

    2. Modify the values of the XML elements to reflect the cluster configuration that you want to create.

      • To establish a cluster, the following components must have valid values in the cluster configuration XML file:

        • Cluster name

        • Cluster nodes

        • Cluster transport

      • The cluster is created with the assumption that the partition /globaldevices exists on each node that you configure as a cluster node. The global-devices namespace is created on this partition. If you need to use a different file system on which to create the global-devices namespace, add the following property to the <propertyList> element for each node that does not have a partition that is named /globaldevices.


        …
          <nodeList>
            <node name="node" id="N">
              <propertyList>
        …
                <property name="globaldevfs" value="/filesystem-name">
        …
              </propertyList>
            </node>
        …

        To instead use a lofi device for the global-devices namespace, set the value of the globaldevfs property to lofi.


                
        <property name="globaldevfs" value="lofi">
        
      • If you are modifying configuration information that was exported from an existing cluster, some values that you must change to reflect the new cluster, such as node names, are used in the definitions of more than one cluster object.

      See the clconfiguration(5CL) man page for details about the structure and content of the cluster configuration XML file.

  6. Validate the cluster configuration XML file.


    phys-schost# xmllint --valid --noout clconfigfile
    

    See the xmllint(1) man page for more information.

  7. From the potential node that contains the cluster configuration XML file, create the cluster.


    phys-schost# cluster create -i clconfigfile
    
    -i clconfigfile

    Specifies the name of the cluster configuration XML file to use as the input source.

  8. For the Solaris 10 OS, verify on each node that multiuser services for the Service Management Facility (SMF) are online.

    If services are not yet online for a node, wait until the state changes to online before you proceed to the next step.


    phys-schost# svcs multi-user-server node
    STATE          STIME    FMRI
    online         17:52:55 svc:/milestone/multi-user-server:default
  9. From one node, verify that all nodes have joined the cluster.


    phys-schost# clnode status
    

    Output resembles the following.


    === Cluster Nodes ===
    
    --- Node Status ---
    
    Node Name                                       Status
    ---------                                       ------
    phys-schost-1                                   Online
    phys-schost-2                                   Online
    phys-schost-3                                   Online

    For more information, see the clnode(1CL) man page.

  10. Install any necessary patches to support Sun Cluster software, if you have not already done so.

    See Patches and Required Firmware Levels in Sun Cluster Release Notes for the location of patches and installation instructions.
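
    For example, after you download and unpack a patch into a local directory, you might apply it with the patchadd command. The directory and patch-id shown here are placeholders.


    phys-schost# patchadd /var/tmp/patch-id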

  11. If you intend to use Sun Cluster HA for NFS on a highly available local file system, ensure that the loopback file system (LOFS) is disabled.

    To disable LOFS, add the following entry to the /etc/system file on each node of the cluster.


    exclude:lofs

    The change to the /etc/system file becomes effective after the next system reboot.


    Note –

    You cannot have LOFS enabled if you use Sun Cluster HA for NFS on a highly available local file system and have automountd running. LOFS can cause switchover problems for Sun Cluster HA for NFS. If you choose to add Sun Cluster HA for NFS on a highly available local file system, you must make one of the following configuration changes.

    However, if you configure non-global zones in your cluster, you must enable LOFS on all cluster nodes. If Sun Cluster HA for NFS on a highly available local file system must coexist with LOFS, use one of the other solutions instead of disabling LOFS.

    • Disable LOFS.

    • Disable the automountd daemon.

    • Exclude from the automounter map all files that are part of the highly available local file system that is exported by Sun Cluster HA for NFS. This choice enables you to keep both LOFS and the automountd daemon enabled.


    See The Loopback File System in System Administration Guide: Devices and File Systems (Solaris 9 or Solaris 10) for more information about loopback file systems.

  12. To duplicate quorum information from an existing cluster, configure the quorum device by using the cluster configuration XML file.

    You must configure a quorum device if you created a two-node cluster. If you choose not to use the cluster configuration XML file to create a required quorum device, go instead to How to Configure Quorum Devices.

    1. If you are using a quorum server for the quorum device, ensure that the quorum server is set up and running.

      Follow instructions in How to Install and Configure Quorum Server Software.

    2. If you are using a NAS device for the quorum device, ensure that the NAS device is set up and operational.

      1. Observe the requirements for using a NAS device as a quorum device.

        See Sun Cluster 3.1 - 3.2 With Network-Attached Storage Devices Manual for Solaris OS.

      2. Follow instructions in your device's documentation to set up the NAS device.

    3. Ensure that the quorum configuration information in the cluster configuration XML file reflects valid values for the cluster that you created.

    4. If you made changes to the cluster configuration XML file, validate the file.


      phys-schost# xmllint --valid --noout clconfigfile
      
    5. Configure the quorum device.


      phys-schost# clquorum add -i clconfigfile devicename
      
      devicename

      Specifies the name of the device to configure as a quorum device.

  13. Remove the cluster from installation mode.


    phys-schost# clquorum reset
    
  14. Close access to the cluster configuration by machines that are not configured cluster members.


    phys-schost# claccess deny-all
    
  15. (Optional) Enable automatic node reboot if all monitored shared-disk paths fail.

    1. Enable automatic reboot.


      phys-schost# clnode set -p reboot_on_path_failure=enabled
      
      -p

      Specifies the property to set

      reboot_on_path_failure=enabled

      Enables automatic node reboot if failure of all monitored shared-disk paths occurs.

    2. Verify that automatic reboot on disk-path failure is enabled.


      phys-schost# clnode show
      === Cluster Nodes ===                          
      
      Node Name:                                      node
      …
        reboot_on_path_failure:                          enabled
      …

Example 3–2 Configuring Sun Cluster Software on All Nodes By Using an XML File

The following example duplicates the cluster configuration and quorum configuration of an existing two-node cluster to a new two-node cluster. The new cluster is installed with the Solaris 10 OS and is not configured with non-global zones. The cluster configuration is exported from the existing cluster node, phys-oldhost-1, to the cluster configuration XML file clusterconf.xml. The node names of the new cluster are phys-newhost-1 and phys-newhost-2. The device that is configured as a quorum device in the new cluster is d3.

The prompt name phys-newhost-N in this example indicates that the command is performed on both cluster nodes.


phys-newhost-N# /usr/sbin/clinfo -n
clinfo: node is not configured as part of a cluster: Operation not applicable
 
phys-oldhost-1# cluster export -o clusterconf.xml
Copy clusterconf.xml to phys-newhost-1 and modify the file with valid values
 
phys-newhost-1# xmllint --valid --noout clusterconf.xml
No errors are reported
 
phys-newhost-1# cluster create -i clusterconf.xml
phys-newhost-N# svcs multi-user-server phys-newhost-N
STATE          STIME    FMRI
online         17:52:55 svc:/milestone/multi-user-server:default
phys-newhost-1# clnode status
Output shows that both nodes are online
 
phys-newhost-1# clquorum add -i clusterconf.xml d3
phys-newhost-1# clquorum reset

Troubleshooting

Unsuccessful configuration – If one or more nodes cannot join the cluster, or if the wrong configuration information was specified, first attempt to rerun this procedure. If that does not correct the problem, perform the procedure How to Unconfigure Sun Cluster Software to Correct Installation Problems on each misconfigured node to remove it from the cluster configuration. You do not need to uninstall the Sun Cluster software packages. Then rerun this procedure.

Next Steps

Go to How to Verify the Quorum Configuration and Installation Mode.

See Also

After the cluster is fully established, you can duplicate the configuration of the other cluster components from the existing cluster. If you did not already do so, modify the values of the XML elements that you want to duplicate so that they reflect the cluster configuration to which you are adding the component. For example, if you are duplicating resource groups, ensure that the <resourcegroupNodeList> entry contains the valid node names for the new cluster, and not the node names from the cluster that you duplicated, unless the node names are the same.

To duplicate a cluster component, run the export subcommand of the object-oriented command for the cluster component that you want to duplicate. For more information about the command syntax and options, see the man page for the cluster object that you want to duplicate. The following list shows the cluster components that you can create from a cluster configuration XML file after the cluster is established, the man page for the command that you use to duplicate each component, and any special instructions.


Note –

This list provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands, in Sun Cluster System Administration Guide for Solaris OS.


  • Device groups (Solaris Volume Manager and Veritas Volume Manager): cldevicegroup(1CL)

    For Solaris Volume Manager, first create the disk sets that you specify in the cluster configuration XML file.

    For VxVM, first install and configure VxVM software and create the disk groups that you specify in the cluster configuration XML file.

  • Resources: clresource(1CL)

    You can use the -a option of the clresource, clressharedaddress, or clreslogicalhostname command to also duplicate the resource type and resource group that are associated with the resource that you duplicate. Otherwise, you must first add the resource type and resource group to the cluster before you add the resource.

  • Shared address resources: clressharedaddress(1CL)

  • Logical hostname resources: clreslogicalhostname(1CL)

  • Resource types: clresourcetype(1CL)

  • Resource groups: clresourcegroup(1CL)

  • NAS devices: clnasdevice(1CL)

    You must first set up the NAS device as described in the device's documentation.

  • SNMP hosts: clsnmphost(1CL)

    The clsnmphost create -i command requires that you specify a user password file with the -f option.

  • SNMP users: clsnmpuser(1CL)

  • Thresholds for monitoring system resources on cluster objects: cltelemetryattribute(1CL)


Procedure: How to Install Solaris and Sun Cluster Software (JumpStart)

This procedure describes how to set up and use the scinstall(1M) custom JumpStart installation method. This method installs both Solaris OS and Sun Cluster software on all global-cluster nodes and establishes the cluster. You can also use this procedure to add new nodes to an existing cluster.

Before You Begin

Perform the following tasks:

Follow these guidelines to use the interactive scinstall utility in this procedure:

  1. Set up your JumpStart install server.

    Ensure that the JumpStart install server meets the following requirements.

    • The install server is on the same subnet as the cluster nodes, or on the Solaris boot server for the subnet that the cluster nodes use.

    • The install server is not itself a cluster node.

    • The install server installs a release of the Solaris OS that is supported by the Sun Cluster software.

    • A custom JumpStart directory exists for JumpStart installation of Sun Cluster software. This jumpstart-dir directory must meet the following requirements:

      • Contain a copy of the check utility.

      • Be NFS exported for reading by the JumpStart install server.

    • Each new cluster node is configured as a custom JumpStart installation client that uses the custom JumpStart directory that you set up for Sun Cluster installation.

    Follow the appropriate instructions for your software platform and OS version to set up the JumpStart install server. See Creating a Profile Server for Networked Systems in Solaris 9 9/04 Installation Guide or Creating a Profile Server for Networked Systems in Solaris 10 10/09 Installation Guide: Custom JumpStart and Advanced Installations.

    See also the setup_install_server(1M) and add_install_client(1M) man pages.

  2. If you are installing a new node to an existing cluster, add the node to the list of authorized cluster nodes.

    1. Switch to another cluster node that is active and start the clsetup utility.

    2. Use the clsetup utility to add the new node's name to the list of authorized cluster nodes.

    For more information, see How to Add a Node to the Authorized Node List in Sun Cluster System Administration Guide for Solaris OS.

  3. On a cluster node or another machine of the same server platform, install the Solaris OS and any necessary patches, if you have not already done so.

    If Solaris software is already installed on the server, you must ensure that the Solaris installation meets the requirements for Sun Cluster software and any other software that you intend to install on the cluster. See How to Install Solaris Software for more information about installing Solaris software to meet Sun Cluster software requirements.

    Follow procedures in How to Install Solaris Software.

  4. (Optional) SPARC: On the installed system, install Sun Logical Domains (LDoms) software and create domains, if you have not already done so.

    Follow the procedures in SPARC: How to Install Sun Logical Domains Software and Create Domains.

  5. On the installed system, install Sun Cluster software and any necessary patches, if you have not already done so.

    Follow procedures in How to Install Sun Cluster Framework and Data-Service Software Packages.

    See Patches and Required Firmware Levels in Sun Cluster Release Notes for the location of patches and installation instructions.

  6. Enable the common agent container daemon to start automatically during system boots.


    machine# cacaoadm enable
    
  7. On the installed system, update the /etc/inet/hosts file and, if also needed, the /etc/inet/ipnodes file with all public IP addresses that are used in the cluster.

    Perform this step regardless of whether you are using a naming service. See Public-Network IP Addresses for a listing of Sun Cluster components whose IP addresses you must add.
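
    The following lines sketch what such entries might look like. The IP addresses and host names are placeholders for your own public addresses, cluster node names, and logical hostnames.


    192.168.10.11   phys-schost-1
    192.168.10.12   phys-schost-2
    192.168.10.20   schost-lh-nfs     # logical hostname used by a data service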

  8. On the installed system, reset Sun Java Web Console to its initial unconfigured state.

    The following command removes configuration information from the web console. Some of this configuration information is specific to the installed system. You must remove this information before you create the flash archive. Otherwise, the configuration information that is transferred to the cluster node might prevent the web console from starting or from interacting correctly with the cluster node.


    # /usr/share/webconsole/private/bin/wcremove -i console
    

    After you install the unconfigured web console on the cluster node and start the web console for the first time, the web console automatically runs its initial configuration and uses information from the cluster node.

    For more information about the wcremove command, see Java Web Console User Identity in System Administration Guide: Basic Administration.

  9. Create the flash archive of the installed system.


    machine# flarcreate -n name archive
    
    -n name

    Name to give the flash archive.

    archive

    File name to give the flash archive, with the full path. By convention, the file name ends in .flar.
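
    For instance, a command like the following might create an archive in an NFS-shared directory. The archive name and path are placeholders.


    machine# flarcreate -n sc32u3-node /export/flash/sc32u3-node.flar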

    Follow procedures in one of the following manuals:

  10. Ensure that the flash archive is NFS exported for reading by the JumpStart install server.

    See Managing Network File Systems (Overview), in System Administration Guide: Network Services (Solaris 9 or Solaris 10) for more information about automatic file sharing.

    See also the share(1M) and dfstab(4) man pages.
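
    For example, to share the directory that holds the archive persistently, you might add a share entry to /etc/dfs/dfstab on the machine that holds the archive and then run shareall. The directory name is a placeholder.


    machine# echo "share -F nfs -o ro,anon=0 /export/flash" >> /etc/dfs/dfstab
    machine# shareall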

  11. On the JumpStart install server, become superuser.

  12. From the JumpStart install server, start the scinstall(1M) utility.

    In the media path, replace arch with sparc or x86 (Solaris 10 only) and replace ver with 9 for Solaris 9 or 10 for Solaris 10.


    installserver# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/ \
    Solaris_ver/Tools/
    
    installserver# ./scinstall
    

    The scinstall Main Menu is displayed.

  13. Type the option number for Configure a Cluster to be JumpStarted From This Install Server and press the Return key.

    This option is used to configure custom JumpStart finish scripts. JumpStart uses these finish scripts to install the Sun Cluster software.


     *** Main Menu ***
     
        Please select from one of the following (*) options:
    
          * 1) Create a new cluster or add a cluster node
          * 2) Configure a cluster to be JumpStarted from this install server
            3) Manage a dual-partition upgrade
            4) Upgrade this cluster node
          * 5) Print release information for this cluster node 
    
          * ?) Help with menu options
          * q) Quit
     
        Option:  2
    
  14. Follow the menu prompts to supply your answers from the configuration planning worksheet.

    The scinstall command stores your configuration information and copies the autoscinstall.class default class file to the jumpstart-dir/autoscinstall.d/3.2/ directory. This file is similar to the following example.


    install_type    initial_install
    system_type     standalone
    partitioning    explicit
    filesys         rootdisk.s0 free /
    filesys         rootdisk.s1 750  swap
    filesys         rootdisk.s3 512  /globaldevices
    filesys         rootdisk.s7 20
    cluster         SUNWCuser        add
    package         SUNWman          add
  15. If necessary, make adjustments to the autoscinstall.class file to configure JumpStart to install the flash archive.

    1. Modify entries as necessary to match configuration choices that you made when you installed the Solaris OS on the flash archive machine or when you ran the scinstall utility.

      For example, if you assigned slice 4 for the global-devices file system and specified to scinstall that the file-system name is /gdevs, you would change the /globaldevices entry of the autoscinstall.class file to the following:


      filesys         rootdisk.s4 512  /gdevs
    2. Change the following entries in the autoscinstall.class file.

      Existing Entry to Replace         New Entry to Add
      install_type  initial_install     install_type  flash_install
      system_type   standalone          archive_location  retrieval_type location

      See archive_location Keyword in Solaris 9 9/04 Installation Guide or archive_location Keyword in Solaris 10 10/09 Installation Guide: Custom JumpStart and Advanced Installations for information about valid values for retrieval_type and location when used with the archive_location keyword.
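
      As a sketch, if the archive is shared over NFS, the modified lines in the class file might look like the following. The server name and path are placeholders.


      install_type        flash_install
      archive_location    nfs machine:/export/flash/sc32u3-node.flar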

    3. Remove all entries that would install a specific package, such as the following entries.


      cluster         SUNWCuser        add
      package         SUNWman          add
    4. To use a lofi device for the global-devices namespace, delete the filesys entry for the /globaldevices partition.

    5. If your configuration has additional Solaris software requirements, change the autoscinstall.class file accordingly.

      The autoscinstall.class file installs the End User Solaris Software Group (SUNWCuser).

    6. If you install the End User Solaris Software Group (SUNWCuser), add to the autoscinstall.class file any additional Solaris software packages that you might need.

      The following table lists Solaris packages that are required to support some Sun Cluster functionality. These packages are not included in the End User Solaris Software Group. See Solaris Software Group Considerations for more information.

      Feature                                        Mandatory Solaris Software Packages
      RSMAPI, RSMRDT drivers, or SCI-PCI adapters    SPARC, Solaris 9: SUNWrsm SUNWrsmx SUNWrsmo SUNWrsmox
      (SPARC based clusters only)                    Solaris 10: SUNWrsm SUNWrsmo
      scsnapshot                                     SUNWp15u SUNWp15v SUNWp15p
      Sun Cluster Manager                            SUNWapchr SUNWapchu

    You can change the default class file in one of the following ways:

    • Edit the autoscinstall.class file directly. These changes are applied to all nodes in all clusters that use this custom JumpStart directory.

    • Update the rules file to point to other profiles, then run the check utility to validate the rules file.
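
    For the second approach, a rules file entry that maps one node to its own profile, followed by a validation run from the jumpstart-dir directory, might look like the following sketch. The host name, profile name, and finish-script name are placeholders; the fields are the rule keyword and value, the begin script (- for none), the profile, and the finish script.


    hostname phys-schost-3  -  phys-schost-3.profile  finish-script

    installserver# ./check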

    As long as the Solaris OS installation profile meets minimum Sun Cluster file-system allocation requirements, Sun Cluster software places no restrictions on other changes to the installation profile. See System Disk Partitions for partitioning guidelines and requirements to support Sun Cluster software.

    For more information about JumpStart profiles, see Chapter 26, Preparing Custom JumpStart Installations (Tasks), in Solaris 9 9/04 Installation Guide or Chapter 3, Preparing Custom JumpStart Installations (Tasks), in Solaris 10 10/09 Installation Guide: Custom JumpStart and Advanced Installations.

  16. To install required packages for any of the following features or to perform other postinstallation tasks, set up your own finish script.

    • Remote Shared Memory Application Programming Interface (RSMAPI)

    • SCI-PCI adapters for the interconnect transport

    • RSMRDT drivers


    Note –

    Use of the RSMRDT driver is restricted to clusters that run an Oracle9i release 2 SCI configuration with RSM enabled. Refer to Oracle9i release 2 user documentation for detailed installation and configuration instructions.


    Your own finish script runs after the standard finish script that is installed by the scinstall command. See Preparing Custom JumpStart Installations in Chapter 26, Preparing Custom JumpStart Installations (Tasks), in Solaris 9 9/04 Installation Guide or Chapter 3, Preparing Custom JumpStart Installations (Tasks), in Solaris 10 10/09 Installation Guide: Custom JumpStart and Advanced Installations for information about creating a JumpStart finish script.

    1. Ensure that any dependency Solaris packages will be installed by the default class file.

      See Step 15.

    2. Name your finish script finish.

    3. Modify the finish script to install the software packages listed in the following table that support the features that you intend to use.

      Feature             Additional Sun Cluster 3.2 11/09 Packages to Install
      RSMAPI              SUNWscrif
      SCI-PCI adapters    Solaris 9: SUNWsci SUNWscid SUNWscidx
                          Solaris 10: SUNWscir SUNWsci SUNWscidr SUNWscid
      RSMRDT drivers      SUNWscrdt

      • Install the packages in the order that is used in the table.

      • Install the packages from the /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/ directory, where arch is sparc or x86, and where ver is 9 for Solaris 9 or 10 for Solaris 10.

    4. Make any additional modifications for other postinstallation tasks that you want the finish script to perform.

    5. Copy your finish script to each jumpstart-dir/autoscinstall.d/nodes/node directory.

      Create one node directory for each node in the cluster. Or, use this naming convention to create symbolic links to a shared finish script.
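
    The following fragment sketches what a custom finish script might contain if you stage the feature packages and a pkgadd admin file in the JumpStart directory. The staging path, the admin file, and the package list (here, the RSMAPI and RSMRDT packages) are assumptions for illustration; this is not the standard finish script that scinstall installs.


    #!/bin/sh
    # Hypothetical finish script: add feature packages to the newly installed
    # system, which JumpStart mounts at /a during installation.
    PKGDIR=${SI_CONFIG_DIR}/Packages      # packages staged in the JumpStart directory

    for pkg in SUNWscrif SUNWscrdt; do
        pkgadd -n -a ${SI_CONFIG_DIR}/admin -R /a -d ${PKGDIR} ${pkg}
    done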

  17. Exit from the JumpStart install server.

  18. If you are using switches in the private interconnect of your new cluster, ensure that Neighbor Discovery Protocol (NDP) is disabled.

    Follow the procedures in the documentation for your switches to determine whether NDP is enabled and to disable NDP.

    During cluster configuration, the software checks that there is no traffic on the private interconnect. If NDP sends any packets to a private adapter while the private interconnect is being checked for traffic, the software assumes that the interconnect is not private and cluster configuration is interrupted. NDP must therefore be disabled during cluster creation.

    After the cluster is established, you can re-enable NDP on the private-interconnect switches if you want to use that feature.

  19. If you are using a cluster administrative console, display a console screen for each node in the cluster.

    • If Cluster Control Panel (CCP) software is installed and configured on your administrative console, use the cconsole(1M) utility to display the individual console screens.

      As superuser, use the following command to start the cconsole utility:


      adminconsole# /opt/SUNWcluster/bin/cconsole clustername &
      

      The cconsole utility also opens a master window from which you can send your input to all individual console windows at the same time.

    • If you do not use the cconsole utility, connect to the consoles of each node individually.

  20. Shut down each node.


    phys-schost# shutdown -g0 -y -i0
    
  21. Boot each node to start the JumpStart installation.

    • On SPARC based systems, do the following:


      ok boot net - install
      

      Note –

      Surround the dash (-) in the command with a space on each side.


    • On x86 based systems, do the following:

      1. Press any key to begin the booting sequence.


        Press any key to reboot.
        keystroke
        
      2. As soon as the BIOS information screen appears, immediately press Esc+2 or press the F2 key.

        After the initialization sequence completes, the BIOS Setup Utility screen appears.

      3. In the BIOS Setup Utility menu bar, navigate to the Boot menu item.

        The list of boot devices is displayed.

      4. Navigate to the listed IBA that is connected to the same network as the JumpStart PXE install server and move it to the top of the boot order.

        The lowest number to the right of the IBA boot choices corresponds to the lower Ethernet port number. The higher number to the right of the IBA boot choices corresponds to the higher Ethernet port number.

      5. Save your change and exit the BIOS.

        The boot sequence begins again. After further processing, the GRUB menu is displayed.

      6. Immediately select the Solaris JumpStart entry and press Enter.


        Note –

        If the Solaris JumpStart entry is the only entry listed, you can alternatively wait for the selection screen to time out. If you do not respond in 30 seconds, the system automatically continues the boot sequence.



        GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
        +-------------------------------------------------------------------------+
        | Solaris_10 Jumpstart                                                    |
        |                                                                         |
        |                                                                         |
        +-------------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press enter to boot the selected OS, 'e' to edit the
        commands before booting, or 'c' for a command-line.

        After further processing, the installation type menu is displayed.

      7. From the installation type menu, immediately type the menu number for Custom JumpStart.


        Note –

        If you do not type the number for Custom JumpStart before the 30–second timeout period ends, the system automatically begins the Solaris interactive installation.



              Select the type of installation you want to perform:
        
                 1 Solaris Interactive
                 2 Custom JumpStart
                 3 Solaris Interactive Text (Desktop session)
                 4 Solaris Interactive Text (Console session)
                 5 Apply driver updates
                 6 Single user shell
        
                 Enter the number of your choice.
        2
        

        JumpStart installs the Solaris OS and Sun Cluster software on each node. When the installation is successfully completed, each node is fully installed as a new cluster node. Sun Cluster installation output is logged in a /var/cluster/logs/install/scinstall.log.N file.

      8. When the BIOS screen again appears, immediately press Esc+2 or press the F2 key.


        Note –

        If you do not interrupt the BIOS at this point, it automatically returns to the installation type menu. There, if no choice is typed within 30 seconds, the system automatically begins an interactive installation.


        After further processing, the BIOS Setup Utility is displayed.

      9. In the menu bar, navigate to the Boot menu.

        The list of boot devices is displayed.

      10. Navigate to the Hard Drive entry and move it back to the top of the boot order.

      11. Save your change and exit the BIOS.

        The boot sequence begins again. No further interaction with the GRUB menu is needed to complete booting into cluster mode.

  22. For the Solaris 10 OS, verify on each node that multiuser services for the Service Management Facility (SMF) are online.

    If services are not yet online for a node, wait until the state changes to online before you proceed to the next step.


    phys-schost# svcs multi-user-server node
    STATE          STIME    FMRI
    online         17:52:55 svc:/milestone/multi-user-server:default
  23. If you are installing a new node to an existing cluster, create mount points on the new node for all existing cluster file systems.

    1. From another cluster node that is active, display the names of all cluster file systems.


      phys-schost# mount | grep global | egrep -v node@ | awk '{print $1}'
      
    2. On the node that you added to the cluster, create a mount point for each cluster file system in the cluster.


      phys-schost-new# mkdir -p mountpoint
      

      For example, if a file-system name that is returned by the mount command is /global/dg-schost-1, run mkdir -p /global/dg-schost-1 on the node that is being added to the cluster.


      Note –

      The mount points become active after you reboot the cluster in Step 28.


    3. If Veritas Volume Manager (VxVM) is installed on any nodes that are already in the cluster, view the vxio number on each VxVM–installed node.


      phys-schost# grep vxio /etc/name_to_major
      vxio NNN
      
      • Ensure that the same vxio number is used on each of the VxVM-installed nodes.

      • Ensure that the vxio number is available for use on each of the nodes that do not have VxVM installed.

      • If the vxio number is already in use on a node that does not have VxVM installed, free the number on that node. Change the /etc/name_to_major entry to use a different number.

  24. (Optional) To use dynamic reconfiguration on Sun Enterprise 10000 servers, add the following entry to the /etc/system file on each node in the cluster.


    set kernel_cage_enable=1

    This entry becomes effective after the next system reboot. See the Sun Cluster System Administration Guide for Solaris OS for procedures to perform dynamic reconfiguration tasks in a Sun Cluster configuration. See your server documentation for more information about dynamic reconfiguration.

  25. If you intend to use Sun Cluster HA for NFS on a highly available local file system, ensure that the loopback file system (LOFS) is disabled.

    To disable LOFS, add the following entry to the /etc/system file on each node of the cluster.


    exclude:lofs

    The change to the /etc/system file becomes effective after the next system reboot.


    Note –

    You cannot have LOFS enabled if you use Sun Cluster HA for NFS on a highly available local file system and have automountd running. LOFS can cause switchover problems for Sun Cluster HA for NFS. If you choose to add Sun Cluster HA for NFS on a highly available local file system, you must make one of the following configuration changes.

    However, if you configure non-global zones in your cluster, you must enable LOFS on all cluster nodes. If Sun Cluster HA for NFS on a highly available local file system must coexist with LOFS, use one of the other solutions instead of disabling LOFS.

    • Disable LOFS.

    • Disable the automountd daemon.

    • Exclude from the automounter map all files that are part of the highly available local file system that is exported by Sun Cluster HA for NFS. This choice enables you to keep both LOFS and the automountd daemon enabled.


    See The Loopback File System in System Administration Guide: Devices and File Systems (Solaris 9 or Solaris 10) for more information about loopback file systems.

  26. If you will use any of the following adapters for the cluster interconnect, uncomment the relevant entry in the /etc/system file on each node.

    Adapter    Entry
    ce         set ce:ce_taskq_disable=1
    ipge       set ipge:ipge_taskq_disable=1
    ixge       set ixge:ixge_taskq_disable=1

    This entry becomes effective after the next system reboot.

  27. x86: Set the default boot file.

    The setting of this value enables you to reboot the node if you are unable to access a login prompt.

    • On the Solaris 9 OS, set the default to kadb.


      phys-schost# eeprom boot-file=kadb
      
    • On the Solaris 10 OS, set the default to kmdb in the GRUB boot parameters menu.


      grub edit> kernel /platform/i86pc/multiboot kmdb
      
  28. If you performed a task that requires a cluster reboot, follow these steps to reboot the cluster.

    The following are some of the tasks that require a reboot:

    • Adding a new node to an existing cluster

    • Installing patches that require a node or cluster reboot

    • Making configuration changes that require a reboot to become active

    1. On one node, become superuser.

    2. Shut down the cluster.


      phys-schost-1# cluster shutdown -y -g0 clustername
      

      Note –

      Do not reboot the first-installed node of the cluster until after the cluster is shut down. Until cluster installation mode is disabled, only the first-installed node, which established the cluster, has a quorum vote. In an established cluster that is still in installation mode, if the cluster is not shut down before the first-installed node is rebooted, the remaining cluster nodes cannot obtain quorum. The entire cluster then shuts down.

      Cluster nodes remain in installation mode until the first time that you run the clsetup command. You run this command during the procedure How to Configure Quorum Devices.


    3. Reboot each node in the cluster.

      • On SPARC based systems, do the following:


        ok boot
        
      • On x86 based systems, do the following:

        When the GRUB menu is displayed, select the appropriate Solaris entry and press Enter. The GRUB menu appears similar to the following:


        GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
        +-------------------------------------------------------------------------+
        | Solaris 10 /sol_10_x86                                                  |
        | Solaris failsafe                                                        |
        |                                                                         |
        +-------------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press enter to boot the selected OS, 'e' to edit the
        commands before booting, or 'c' for a command-line.

        For more information about GRUB based booting, see Chapter 11, GRUB Based Booting (Tasks), in System Administration Guide: Basic Administration.

    The scinstall utility installs and configures all cluster nodes and reboots the cluster. The cluster is established when all nodes have successfully booted into the cluster. Sun Cluster installation output is logged in a /var/cluster/logs/install/scinstall.log.N file.

  29. (Optional) If you did not perform Step 28 to reboot the nodes, start the Sun Java Web Console web server manually on each node.


    phys-schost# smcwebserver start
    

    For more information, see the smcwebserver(1M) man page.

  30. From one node, verify that all nodes have joined the cluster.


    phys-schost# clnode status
    

    Output resembles the following.


    === Cluster Nodes ===
    
    --- Node Status ---
    
    Node Name                                       Status
    ---------                                       ------
    phys-schost-1                                   Online
    phys-schost-2                                   Online
    phys-schost-3                                   Online

    For more information, see the clnode(1CL) man page.

  31. (Optional) On each node, enable automatic node reboot if all monitored shared-disk paths fail.

    1. Enable automatic reboot.


      phys-schost# clnode set -p reboot_on_path_failure=enabled
      
      -p

      Specifies the property to set

      reboot_on_path_failure=enabled

      Enables automatic node reboot if failure of all monitored shared-disk paths occurs.

    2. Verify that automatic reboot on disk-path failure is enabled.


      phys-schost# clnode show
      === Cluster Nodes ===                          
      
      Node Name:                                      node
      …
        reboot_on_path_failure:                          enabled
      …
Next Steps

If you added a node to a two-node cluster, go to How to Update Quorum Devices After Adding a Node to a Global Cluster.

Otherwise, go to the next appropriate procedure:

Troubleshooting

Disabled scinstall option – If the JumpStart option of the scinstall command does not have an asterisk in front, the option is disabled. This condition indicates that JumpStart setup is not complete or that the setup has an error. To correct this condition, first quit the scinstall utility. Repeat Step 1 through Step 16 to correct JumpStart setup, then restart the scinstall utility.

Procedure: How to Prepare the Cluster for Additional Global-Cluster Nodes

Perform this procedure on existing global-cluster nodes to prepare the cluster for the addition of new cluster nodes.

Before You Begin

Perform the following tasks:

  1. If you use the Cluster Control Panel (CCP), update the configuration files on the administrative console.

    1. Add to the cluster's entry in the /etc/clusters file the name of the node that you are adding.

    2. Add to the /etc/serialports files an entry with the new node name, the host name of the node's console-access device, and the port number.
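
    For example, if you are adding a node that is named phys-schost-3 to the cluster sc-cluster, the updated entries might look like the following. The cluster name, node names, console-access device name, and port number are placeholders.


    /etc/clusters entry:
    sc-cluster phys-schost-1 phys-schost-2 phys-schost-3

    /etc/serialports entry:
    phys-schost-3 ca-dev-hostname 5003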

  2. Add the name of the new node to the cluster's authorized-nodes list.

    1. On any node, become superuser.

    2. Start the clsetup utility.


      phys-schost# clsetup
      

      The Main Menu is displayed.

    3. Choose the menu item, New nodes.

    4. Choose the menu item, Specify the name of a machine which may add itself.

    5. Follow the prompts to add the node's name to the list of recognized machines.

      The clsetup utility displays the message Command completed successfully if the task is completed without error.

    6. Quit the clsetup utility.

  3. If you are adding a node to a single-node cluster, ensure that two cluster interconnects already exist by displaying the interconnect configuration.


    phys-schost# clinterconnect show
    

    You must have at least two cables or two adapters configured before you can add a node.

    • If the output shows configuration information for two cables or for two adapters, proceed to Step 4.

    • If the output shows no configuration information for either cables or adapters, or shows configuration information for only one cable or adapter, configure new cluster interconnects.

      1. On one node, start the clsetup utility.


        phys-schost# clsetup
        
      2. Choose the menu item, Cluster interconnect.

      3. Choose the menu item, Add a transport cable.

        Follow the instructions to specify the name of the node to add to the cluster, the name of a transport adapter, and whether to use a transport switch.

      4. If necessary, repeat Step c to configure a second cluster interconnect.

      5. When finished, quit the clsetup utility.

      6. Verify that the cluster now has two cluster interconnects configured.


        phys-schost# clinterconnect show
        

        The command output should show configuration information for at least two cluster interconnects.

  4. Ensure that the private-network configuration can support the nodes and private networks that you are adding.

    1. Display the maximum numbers of nodes and private networks, and zone clusters on the Solaris 10 OS, that the current private-network configuration supports.


      phys-schost# cluster show-netprops
      

      The output looks similar to the following, which shows the default values on the Solaris 10 OS:


      === Private Network ===                        
      
      private_netaddr:                                172.16.0.0
        private_netmask:                                255.255.240.0
        max_nodes:                                      64
        max_privatenets:                                10
        max_zoneclusters:                               12
    2. Determine whether the current private-network configuration can support the increased number of nodes, including non-global zones, and private networks. For example, if the cluster still uses the default values that are shown above and you are adding one node and no additional private networks, the current IP address range is sufficient and no change is needed.

Next Steps

Configure Sun Cluster software on the new cluster nodes. Go to How to Configure Sun Cluster Software on Additional Global-Cluster Nodes (scinstall) or How to Configure Sun Cluster Software on Additional Global-Cluster Nodes (XML).

ProcedureHow to Change the Private Network Configuration When Adding Nodes or Private Networks

Perform this task to change the global-cluster's private IP-address range to accommodate an increase in one or more of the following cluster components:

You can also use this procedure to decrease the private IP-address range.


Note –

This procedure requires you to shut down the entire cluster. On the Solaris 10 OS, if you need to change only the netmask, for example, to add support for zone clusters, do not perform this procedure. Instead, run the following command from a global-cluster node that is running in cluster mode to specify the expected number of zone clusters:


phys-schost> cluster set-netprops num_zoneclusters=N

This command does not require you to shut down the cluster.
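
For example, to declare that you expect to configure as many as 12 zone clusters in the cluster (an illustrative count that replaces N in the command above), you would run:


phys-schost> cluster set-netprops num_zoneclusters=12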


Before You Begin

Ensure that remote shell (rsh(1M)) or secure shell (ssh(1)) access for superuser is enabled for all cluster nodes.

  1. Become superuser on a node of the cluster.

  2. From one node, start the clsetup utility.


    # clsetup
    

    The clsetup Main Menu is displayed.

  3. Switch each resource group offline.

    If the node contains non-global zones, any resource groups in the zones are also switched offline.

    1. Type the number that corresponds to the option for Resource groups and press the Return key.

      The Resource Group Menu is displayed.

    2. Type the number that corresponds to the option for Online/Offline or Switchover a resource group and press the Return key.

    3. Follow the prompts to take offline all resource groups and to put them in the unmanaged state.

    4. When all resource groups are offline, type q to return to the Resource Group Menu.

  4. Disable all resources in the cluster.

    1. Type the number that corresponds to the option for Enable/Disable a resource and press the Return key.

    2. Choose a resource to disable and follow the prompts.

    3. Repeat the previous step for each resource to disable.

    4. When all resources are disabled, type q to return to the Resource Group Menu.

  5. Quit the clsetup utility.

  6. Verify that all resources on all nodes are Offline and that all resource groups are in the Unmanaged state.


    # cluster status -t resource,resourcegroup
    
    -t

    Limits output to the specified cluster object

    resource

    Specifies resources

    resourcegroup

    Specifies resource groups

  7. From one node, shut down the cluster.


    # cluster shutdown -g0 -y
    
    -g

    Specifies the wait time in seconds

    -y

    Suppresses the prompt that asks you to confirm the shutdown

  8. Boot each node into noncluster mode.

    • On SPARC based systems, perform the following command:


      ok boot -x
      
    • On x86 based systems, perform the following commands:

      1. In the GRUB menu, use the arrow keys to select the appropriate Solaris entry and type e to edit its commands.

        The GRUB menu appears similar to the following:


        GNU GRUB version 0.97 (639K lower / 1047488K upper memory)
        +----------------------------------------------------------------------+
        | Solaris 10 /sol_10_x86                                               | 
        | Solaris failsafe                                                     |
        |                                                                      |
        +----------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press enter to boot the selected OS, 'e' to edit the
        commands before booting, or 'c' for a command-line.

        For more information about GRUB based booting, see Booting an x86 Based System by Using GRUB (Task Map) in System Administration Guide: Basic Administration.

      2. In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.

        The GRUB boot parameters screen appears similar to the following:


        GNU GRUB version 0.97 (639K lower / 1047488K upper memory)
        +----------------------------------------------------------------------+
        | root (hd0,0,a)                                                       | 
        | kernel /platform/i86pc/multiboot                                     | 
        | module /platform/i86pc/boot_archive                                  | 
        +----------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press 'b' to boot, 'e' to edit the selected command in the
        boot sequence, 'c' for a command-line, 'o' to open a new line
        after ('O' for before) the selected line, 'd' to remove the
        selected line, or escape to go back to the main menu.
      3. Add -x to the command to specify that the system boot into noncluster mode.


        [ Minimal BASH-like line editing is supported. For the first word, TAB
        lists possible command completions. Anywhere else TAB lists the possible
        completions of a device/filename. ESC at any time exits. ]
        
        grub edit> kernel /platform/i86pc/multiboot -x
        
      4. Press Enter to accept the change and return to the boot parameters screen.

        The screen displays the edited command.


        GNU GRUB version 0.97 (639K lower / 1047488K upper memory)
        +----------------------------------------------------------------------+
        | root (hd0,0,a)                                                       |
        | kernel /platform/i86pc/multiboot -x                                  |
        | module /platform/i86pc/boot_archive                                  |
        +----------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press 'b' to boot, 'e' to edit the selected command in the
        boot sequence, 'c' for a command-line, 'o' to open a new line
        after ('O' for before) the selected line, 'd' to remove the
        selected line, or escape to go back to the main menu.
      5. Type b to boot the node into noncluster mode.


        Note –

        This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps to again add the -x option to the kernel boot parameter command.


  9. From one node, start the clsetup utility.

    When run in noncluster mode, the clsetup utility displays the Main Menu for noncluster-mode operations.

  10. Type the number that corresponds to the option for Change IP Address Range and press the Return key.

    The clsetup utility displays the current private-network configuration, then asks if you would like to change this configuration.

  11. To change either the private-network IP address or the IP address range, type yes and press the Return key.

    The clsetup utility displays the default private-network IP address, 172.16.0.0, and asks if it is okay to accept this default.

  12. Change or accept the private-network IP address.

    • To accept the default private-network IP address and proceed to changing the IP address range, type yes and press the Return key.

      The clsetup utility will ask if it is okay to accept the default netmask. Skip to the next step to enter your response.

    • To change the default private-network IP address, perform the following substeps.

      1. Type no in response to the clsetup utility question about whether it is okay to accept the default address, then press the Return key.

        The clsetup utility will prompt for the new private-network IP address.

      2. Type the new IP address and press the Return key.

        The clsetup utility displays the default netmask and then asks if it is okay to accept the default netmask.

  13. Change or accept the default private-network IP address range.

    On the Solaris 9 OS, the default netmask is 255.255.248.0. This default IP address range supports up to 64 nodes and up to 10 private networks in the cluster. On the Solaris 10 OS, the default netmask is 255.255.240.0. This default IP address range supports up to 64 nodes, up to 12 zone clusters, and up to 10 private networks in the cluster.

    • To accept the default IP address range, type yes and press the Return key.

      Then skip to the next step.

    • To change the IP address range, perform the following substeps.

      1. Type no in response to the clsetup utility's question about whether it is okay to accept the default address range, then press the Return key.

        When you decline the default netmask, the clsetup utility prompts you for the number of nodes and private networks, and zone clusters on the Solaris 10 OS, that you expect to configure in the cluster.

      2. Enter the number of nodes and private networks, and zone clusters on the Solaris 10 OS, that you expect to configure in the cluster.

        From these numbers, the clsetup utility calculates two proposed netmasks:

        • The first netmask is the minimum netmask to support the number of nodes and private networks, and zone clusters on the Solaris 10 OS, that you specified.

        • The second netmask supports twice the number of nodes and private networks, and zone clusters on the Solaris 10 OS, that you specified, to accommodate possible future growth.

      3. Specify either of the calculated netmasks, or specify a different netmask that supports the expected number of nodes and private networks, and zone clusters on the Solaris 10 OS.

  14. Type yes in response to the clsetup utility's question about proceeding with the update.

  15. When finished, exit the clsetup utility.

  16. Reboot each node back into the cluster.

    1. Shut down each node.


      # shutdown -g0 -y
      
    2. Boot each node into cluster mode.

      • On SPARC based systems, do the following:


        ok boot
        
      • On x86 based systems, do the following:

        When the GRUB menu is displayed, select the appropriate Solaris entry and press Enter. The GRUB menu appears similar to the following:


        GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
        +-------------------------------------------------------------------------+
        | Solaris 10 /sol_10_x86                                                  |
        | Solaris failsafe                                                        |
        |                                                                         |
        +-------------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press enter to boot the selected OS, 'e' to edit the
        commands before booting, or 'c' for a command-line.

        For more information about GRUB based booting, see Chapter 11, GRUB Based Booting (Tasks), in System Administration Guide: Basic Administration.

  17. From one node, start the clsetup utility.


    # clsetup
    

    The clsetup Main Menu is displayed.

  18. Re-enable all disabled resources.

    1. Type the number that corresponds to the option for Resource groups and press the Return key.

      The Resource Group Menu is displayed.

    2. Type the number that corresponds to the option for Enable/Disable a resource and press the Return key.

    3. Choose a resource to enable and follow the prompts.

    4. Repeat for each disabled resource.

    5. When all resources are re-enabled, type q to return to the Resource Group Menu.

  19. Bring each resource group back online.

    If the node contains non-global zones, also bring online any resource groups that are in those zones.

    1. Type the number that corresponds to the option for Online/Offline or Switchover a resource group and press the Return key.

    2. Follow the prompts to put each resource group into the managed state and then bring the resource group online.

  20. When all resource groups are back online, exit the clsetup utility.

    Type q to back out of each submenu, or press Ctrl-C.

Next Steps

To add a node to an existing cluster, go to one of the following procedures:

To create a non-global zone on a cluster node, go to Configuring a Non-Global Zone on a Global-Cluster Node.

ProcedureHow to Configure Sun Cluster Software on Additional Global-Cluster Nodes (scinstall)

Perform this procedure to add a new node to an existing global cluster. To use JumpStart to add a new node, instead follow procedures in How to Install Solaris and Sun Cluster Software (JumpStart).


Note –

This procedure uses the interactive form of the scinstall command. To use the noninteractive forms of the scinstall command, such as when developing installation scripts, see the scinstall(1M) man page.

Ensure that Sun Cluster software packages are installed on the node, either manually or by using the silent-mode form of the Java ES installer program, before you run the scinstall command. For information about running the Java ES installer program from an installation script, see Chapter 5, Installing in Silent Mode, in Sun Java Enterprise System 5 Update 1 Installation Guide for UNIX.


Before You Begin

Perform the following tasks:

Follow these guidelines to use the interactive scinstall utility in this procedure:

  1. On the cluster node to configure, become superuser.

  2. Start the scinstall utility.


    phys-schost-new# /usr/cluster/bin/scinstall
    

    The scinstall Main Menu is displayed.

  3. Type the option number for Create a New Cluster or Add a Cluster Node and press the Return key.


      *** Main Menu ***
    
        Please select from one of the following (*) options:
    
          * 1) Create a new cluster or add a cluster node
            2) Configure a cluster to be JumpStarted from this install server
            3) Manage a dual-partition upgrade
            4) Upgrade this cluster node
          * 5) Print release information for this cluster node
    
          * ?) Help with menu options
          * q) Quit
    
        Option:  1
    

    The New Cluster and Cluster Node Menu is displayed.

  4. Type the option number for Add This Machine as a Node in an Existing Cluster and press the Return key.

  5. Follow the menu prompts to supply your answers from the configuration planning worksheet.

    The scinstall utility configures the node and boots the node into the cluster.

  6. Unload the Sun Java Availability Suite DVD-ROM from the DVD-ROM drive.

    1. To ensure that the DVD-ROM is not being used, change to a directory that does not reside on the DVD-ROM.

    2. Eject the DVD-ROM.


      phys-schost# eject cdrom
      
  7. Repeat this procedure on any other node to add to the cluster until all additional nodes are fully configured.

  8. For the Solaris 10 OS, verify on each node that multiuser services for the Service Management Facility (SMF) are online.

    If services are not yet online for a node, wait until the state changes to online before you proceed to the next step.


    phys-schost# svcs multi-user-server node
    STATE          STIME    FMRI
    online         17:52:55 svc:/milestone/multi-user-server:default
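
    If a service remains offline longer than expected, a general SMF diagnostic such as the following can help identify the cause. This is a standard Solaris 10 command rather than a Sun Cluster command:


    phys-schost# svcs -xv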
  9. From an active cluster member, prevent any other nodes from joining the cluster.


    phys-schost# claccess deny-all
    

    Alternately, you can use the clsetup utility. See How to Add a Node to the Authorized Node List in Sun Cluster System Administration Guide for Solaris OS for procedures.
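
    If you later need to permit another machine to add itself to the cluster, you can reopen the authorized-nodes list for that machine. For example, for a hypothetical fourth node named phys-schost-4:


    phys-schost# claccess allow -h phys-schost-4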

  10. From one node, verify that all nodes have joined the cluster.


    phys-schost# clnode status
    

    Output resembles the following.


    === Cluster Nodes ===
    
    --- Node Status ---
    
    Node Name                                       Status
    ---------                                       ------
    phys-schost-1                                   Online
    phys-schost-2                                   Online
    phys-schost-3                                   Online

    For more information, see the clnode(1CL) man page.

  11. Verify that all necessary patches are installed.


    phys-schost# showrev -p
    
  12. (Optional) Enable automatic node reboot if all monitored shared-disk paths fail.

    1. Enable automatic reboot.


      phys-schost# clnode set -p reboot_on_path_failure=enabled
      
      -p

      Specifies the property to set

      reboot_on_path_failure=enabled

      Enables automatic node reboot if failure of all monitored shared-disk paths occurs.

    2. Verify that automatic reboot on disk-path failure is enabled.


      phys-schost# clnode show
      === Cluster Nodes ===                          
      
      Node Name:                                      node
      …
        reboot_on_path_failure:                          enabled
      …
  13. If you intend to use Sun Cluster HA for NFS on a highly available local file system, ensure that the loopback file system (LOFS) is disabled.

    To disable LOFS, add the following entry to the /etc/system file on each node of the cluster.


    exclude:lofs

    The change to the /etc/system file becomes effective after the next system reboot.
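
    As a minimal sketch, one way to append this entry from a shell on each node is the following. You can equally make the edit directly with a text editor:


    phys-schost# echo "exclude:lofs" >> /etc/system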


    Note –

    You cannot have LOFS enabled if you use Sun Cluster HA for NFS on a highly available local file system and have automountd running. LOFS can cause switchover problems for Sun Cluster HA for NFS. If you choose to add Sun Cluster HA for NFS on a highly available local file system, you must make one of the following configuration changes.

    However, if you configure non-global zones in your cluster, you must enable LOFS on all cluster nodes. If Sun Cluster HA for NFS on a highly available local file system must coexist with LOFS, use one of the other solutions instead of disabling LOFS.

    • Disable LOFS.

    • Disable the automountd daemon.

    • Exclude from the automounter map all files that are part of the highly available local file system that is exported by Sun Cluster HA for NFS. This choice enables you to keep both LOFS and the automountd daemon enabled.


    See The Loopback File System in System Administration Guide: Devices and File Systems (Solaris 9 or Solaris 10) for more information about loopback file systems.


Example 3–3 Configuring Sun Cluster Software on an Additional Node

The following example shows the node phys-schost-3 added to the cluster schost. The sponsoring node is phys-schost-1.


*** Adding a Node to an Existing Cluster ***
Fri Feb  4 10:17:53 PST 2005


scinstall -ik -C schost -N phys-schost-1 -A trtype=dlpi,name=qfe2 -A trtype=dlpi,name=qfe3 
-m endpoint=:qfe2,endpoint=switch1 -m endpoint=:qfe3,endpoint=switch2


Checking device to use for global devices file system ... done

Adding node "phys-schost-3" to the cluster configuration ... done
Adding adapter "qfe2" to the cluster configuration ... done
Adding adapter "qfe3" to the cluster configuration ... done
Adding cable to the cluster configuration ... done
Adding cable to the cluster configuration ... done

Copying the config from "phys-schost-1" ... done

Copying the postconfig file from "phys-schost-1" if it exists ... done
Copying the Common Agent Container keys from "phys-schost-1" ... done


Setting the node ID for "phys-schost-3" ... done (id=1)

Setting the major number for the "did" driver ... 
Obtaining the major number for the "did" driver from "phys-schost-1" ... done
"did" driver major number set to 300

Checking for global devices global file system ... done
Updating vfstab ... done

Verifying that NTP is configured ... done
Initializing NTP configuration ... done

Updating nsswitch.conf ... 
done

Adding clusternode entries to /etc/inet/hosts ... done


Configuring IP Multipathing groups in "/etc/hostname.<adapter>" files

Updating "/etc/hostname.hme0".

Verifying that power management is NOT configured ... done

Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ... done
The "local-mac-address?" parameter setting has been changed to "true".

Ensure network routing is disabled ... done

Updating file ("ntp.conf.cluster") on node phys-schost-1 ... done
Updating file ("hosts") on node phys-schost-1 ... done

Rebooting ... 

Troubleshooting

Unsuccessful configuration – If one or more nodes cannot join the cluster, or if the wrong configuration information was specified, first attempt to rerun this procedure. If that does not correct the problem, perform the procedure How to Unconfigure Sun Cluster Software to Correct Installation Problems on each misconfigured node to remove it from the cluster configuration. You do not need to uninstall the Sun Cluster software packages. Then rerun this procedure.

Next Steps

If you added a node to an existing cluster that uses a quorum device, go to How to Update Quorum Devices After Adding a Node to a Global Cluster.

Otherwise, go to How to Verify the Quorum Configuration and Installation Mode.

ProcedureHow to Configure Sun Cluster Software on Additional Global-Cluster Nodes (XML)

Perform this procedure to configure a new global-cluster node by using an XML cluster configuration file. The new node can be a duplication of an existing cluster node that runs Sun Cluster 3.2 11/09 software.

This procedure configures the following cluster components on the new node:

Before You Begin

Perform the following tasks:

  1. Ensure that Sun Cluster software is not yet configured on the potential node that you want to add to a cluster.

    1. Become superuser on the potential node.

    2. Determine whether Sun Cluster software is configured on the potential node.


      phys-schost-new# /usr/sbin/clinfo -n
      
      • If the command fails, go to Step 2.

        Sun Cluster software is not yet configured on the node. You can add the potential node to the cluster.

      • If the command returns a node ID number, proceed to Step c.

        Sun Cluster software is already configured on the node. Before you can add the node to a different cluster, you must remove the existing cluster configuration information.

    3. Boot the potential node into noncluster mode.

      • On SPARC based systems, perform the following command:


        ok boot -x
        
      • On x86 based systems, perform the following commands:

        1. In the GRUB menu, use the arrow keys to select the appropriate Solaris entry and type e to edit its commands.

          The GRUB menu appears similar to the following:


          GNU GRUB version 0.97 (639K lower / 1047488K upper memory)
          +----------------------------------------------------------------------+
          | Solaris 10 /sol_10_x86                                               | 
          | Solaris failsafe                                                     |
          |                                                                      |
          +----------------------------------------------------------------------+
          Use the ^ and v keys to select which entry is highlighted.
          Press enter to boot the selected OS, 'e' to edit the
          commands before booting, or 'c' for a command-line.

          For more information about GRUB based booting, see Booting an x86 Based System by Using GRUB (Task Map) in System Administration Guide: Basic Administration.

        2. In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.

          The GRUB boot parameters screen appears similar to the following:


          GNU GRUB version 0.97 (639K lower / 1047488K upper memory)
          +----------------------------------------------------------------------+
          | root (hd0,0,a)                                                       | 
          | kernel /platform/i86pc/multiboot                                     | 
          | module /platform/i86pc/boot_archive                                  | 
          +----------------------------------------------------------------------+
          Use the ^ and v keys to select which entry is highlighted.
          Press 'b' to boot, 'e' to edit the selected command in the
          boot sequence, 'c' for a command-line, 'o' to open a new line
          after ('O' for before) the selected line, 'd' to remove the
          selected line, or escape to go back to the main menu.
        3. Add -x to the command to specify that the system boot into noncluster mode.


          [ Minimal BASH-like line editing is supported. For the first word, TAB
          lists possible command completions. Anywhere else TAB lists the possible
          completions of a device/filename. ESC at any time exits. ]
          
          grub edit> kernel /platform/i86pc/multiboot -x
          
        4. Press Enter to accept the change and return to the boot parameters screen.

          The screen displays the edited command.


          GNU GRUB version 0.97 (639K lower / 1047488K upper memory)
          +----------------------------------------------------------------------+
          | root (hd0,0,a)                                                       |
          | kernel /platform/i86pc/multiboot -x                                  |
          | module /platform/i86pc/boot_archive                                  |
          +----------------------------------------------------------------------+
          Use the ^ and v keys to select which entry is highlighted.
          Press 'b' to boot, 'e' to edit the selected command in the
          boot sequence, 'c' for a command-line, 'o' to open a new line
          after ('O' for before) the selected line, 'd' to remove the
          selected line, or escape to go back to the main menu.
        5. Type b to boot the node into noncluster mode.


          Note –

          This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps to again add the -x option to the kernel boot parameter command.


    4. Unconfigure Sun Cluster software from the potential node.


      phys-schost-new# /usr/cluster/bin/clnode remove
      
  2. If you are duplicating a node that runs Sun Cluster 3.2 11/09 software, create a cluster configuration XML file.

    1. Become superuser on the cluster node that you want to duplicate.

    2. Export the existing node's configuration information to a file.


      phys-schost# clnode export -o clconfigfile
      
      -o

      Specifies the output destination.

      clconfigfile

      The name of the cluster configuration XML file. The specified file name can be an existing file or a new file that the command will create.

      For more information, see the clnode(1CL) man page.

    3. Copy the cluster configuration XML file to the potential node that you will configure as a new cluster node.

  3. Become superuser on the potential node.

  4. Modify the cluster configuration XML file as needed.

    1. Open your cluster configuration XML file for editing.

      • If you are duplicating an existing cluster node, open the file that you created with the clnode export command.

      • If you are not duplicating an existing cluster node, create a new file.

        Base the file on the element hierarchy that is shown in the clconfiguration(5CL) man page. You can store the file in any directory.

    2. Modify the values of the XML elements to reflect the node configuration that you want to create.

      See the clconfiguration(5CL) man page for details about the structure and content of the cluster configuration XML file.

  5. Validate the cluster configuration XML file.


    phys-schost-new# xmllint --valid --noout clconfigfile
    
  6. Configure the new cluster node.


    phys-schost-new# clnode add -n sponsornode -i clconfigfile
    
    -n sponsornode

    Specifies the name of an existing cluster member to act as the sponsor for the new node.

    -i clconfigfile

    Specifies the name of the cluster configuration XML file to use as the input source.
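
    For example, with phys-schost-1 as the sponsor node and a cluster configuration XML file that is stored in /var/tmp (both choices are illustrative), the command might look like the following:


    phys-schost-new# clnode add -n phys-schost-1 -i /var/tmp/clconfigfile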

  7. (Optional) Enable automatic node reboot if all monitored shared-disk paths fail.

    1. Enable automatic reboot.


      phys-schost# clnode set -p reboot_on_path_failure=enabled
      
      -p

      Specifies the property to set

      reboot_on_path_failure=enabled

      Enables automatic node reboot if failure of all monitored shared-disk paths occurs.

    2. Verify that automatic reboot on disk-path failure is enabled.


      phys-schost# clnode show
      === Cluster Nodes ===                          
      
      Node Name:                                      node
      …
        reboot_on_path_failure:                          enabled
      …
Troubleshooting

Unsuccessful configuration – If one or more nodes cannot join the cluster, or if the wrong configuration information was specified, first attempt to rerun this procedure. If that does not correct the problem, perform the procedure How to Unconfigure Sun Cluster Software to Correct Installation Problems on each misconfigured node to remove it from the cluster configuration. You do not need to uninstall the Sun Cluster software packages. Then rerun this procedure.

Next Steps

If you added a node to a cluster that uses a quorum device, go to How to Update Quorum Devices After Adding a Node to a Global Cluster.

Otherwise, go to How to Verify the Quorum Configuration and Installation Mode.

ProcedureHow to Update Quorum Devices After Adding a Node to a Global Cluster

If you added a node to a global cluster, you must update the configuration information of the quorum devices, regardless of whether you use shared disks, NAS devices, a quorum server, or a combination. To do this, you remove all quorum devices and update the global-devices namespace. You can optionally reconfigure any quorum devices that you still want to use. This registers the new node with each quorum device, which can then recalculate its vote count based on the new number of nodes in the cluster.

Any newly configured SCSI quorum devices will be set to SCSI-3 reservations.

Before You Begin

Ensure that you have completed installation of Sun Cluster software on the added node.

  1. On any node of the cluster, become superuser.

  2. Ensure that all cluster nodes are online.


    phys-schost# cluster status -t node
    
  3. View the current quorum configuration.

    Command output lists each quorum device and each node. The following example output shows the current SCSI quorum device, d3.


    phys-schost# clquorum list
    d3
    …
  4. Note the name of each quorum device that is listed.

  5. Remove the original quorum device.

    Perform this step for each quorum device that is configured.


    phys-schost# clquorum remove devicename
    
    devicename

    Specifies the name of the quorum device.

  6. Verify that all original quorum devices are removed.

    If the removal of the quorum devices was successful, no quorum devices are listed.


    phys-schost# clquorum status
    
  7. Update the global-devices namespace.


    phys-schost# cldevice populate
    

    Note –

    This step is necessary to prevent possible node panic.


  8. On each node, verify that the cldevice populate command has completed processing before you attempt to add a quorum device.

    The cldevice populate command executes remotely on all nodes, even though the command is issued from just one node. To determine whether the cldevice populate command has completed processing, run the following command on each node of the cluster.


    phys-schost# ps -ef | grep scgdevs
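
    As a minimal sketch, the following shell loop waits until no scgdevs process remains before it returns. This is plain shell usage rather than a Sun Cluster command:


    phys-schost# while ps -ef | grep scgdevs | grep -v grep > /dev/null; do sleep 5; done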
    
  9. (Optional) Add a quorum device.

    You can configure the same device that was originally configured as the quorum device or choose a new shared device to configure.

    1. (Optional) If you want to choose a new shared device to configure as a quorum device, display all devices that the system checks.

      Otherwise, skip to Step c.


      phys-schost# cldevice list -v
      

      Output resembles the following:


      DID Device          Full Device Path
      ----------          ----------------
      d1                  phys-schost-1:/dev/rdsk/c0t0d0
      d2                  phys-schost-1:/dev/rdsk/c0t6d0
      d3                  phys-schost-2:/dev/rdsk/c1t1d0
      d3                  phys-schost-1:/dev/rdsk/c1t1d0 
      …
    2. From the output, choose a shared device to configure as a quorum device.

    3. Configure the shared device as a quorum device.


      phys-schost# clquorum add -t type devicename
      
      -t type

      Specifies the type of quorum device. If this option is not specified, the default type shared_disk is used.

    4. Repeat for each quorum device that you want to configure.

    5. Verify the new quorum configuration.


      phys-schost# clquorum list
      

      Output should list each quorum device and each node.


Example 3–4 Updating SCSI Quorum Devices After Adding a Node to a Two-Node Cluster

The following example identifies the original SCSI quorum device d2, removes that quorum device, lists the available shared devices, updates the global-devices namespace, configures d3 as a new SCSI quorum device, and verifies the new device.


phys-schost# clquorum list
d2
phys-schost-1
phys-schost-2

phys-schost# clquorum remove d2
phys-schost# clquorum status
…
--- Quorum Votes by Device ---

Device Name       Present      Possible      Status
-----------       -------      --------      ------

phys-schost# cldevice list -v
DID Device          Full Device Path
----------          ----------------
…
d3                  phys-schost-2:/dev/rdsk/c1t1d0
d3                  phys-schost-1:/dev/rdsk/c1t1d0
…
phys-schost# cldevice populate
phys-schost# ps -ef | grep scgdevs
phys-schost# clquorum add d3
phys-schost# clquorum list
d3
phys-schost-1
phys-schost-2

Next Steps

Go to How to Verify the Quorum Configuration and Installation Mode.

ProcedureHow to Configure Quorum Devices


Note –

You do not need to configure quorum devices in the following circumstances:

Instead, proceed to How to Verify the Quorum Configuration and Installation Mode.


Perform this procedure one time only, after the new cluster is fully formed. Use this procedure to assign quorum votes and then to remove the cluster from installation mode.

Before You Begin
  1. If both of the following conditions apply, modify the netmask file entries for the public network on each cluster node.

    • You intend to use a quorum server.

    • The public network uses variable-length subnet masking, also called classless inter-domain routing (CIDR).

    If you use a quorum server but the public network uses classful subnets, as defined in RFC 791, you do not need to perform this step.

    1. Add to the /etc/inet/netmasks file an entry for each public subnet that the cluster uses.

      The following is an example entry that contains a public-network IP address and netmask:


      10.11.30.0	255.255.255.0
    2. Append netmask + broadcast + to the hostname entry in each /etc/hostname.adapter file.


      nodename netmask + broadcast +
      
  2. On one node, become superuser.

  3. Ensure that all cluster nodes are online.


    phys-schost# cluster status -t node
    
  4. To use a shared disk as a quorum device, verify device connectivity to the cluster nodes and choose the device to configure.

    1. From one node of the cluster, display a list of all the devices that the system checks.

      You do not need to be logged in as superuser to run this command.


      phys-schost-1# cldevice list -v
      

      Output resembles the following:


      DID Device          Full Device Path
      ----------          ----------------
      d1                  phys-schost-1:/dev/rdsk/c0t0d0
      d2                  phys-schost-1:/dev/rdsk/c0t6d0
      d3                  phys-schost-2:/dev/rdsk/c1t1d0
      d3                  phys-schost-1:/dev/rdsk/c1t1d0
      …
    2. Ensure that the output shows all connections between cluster nodes and storage devices.

    3. Determine the global device-ID name of each shared disk that you are configuring as a quorum device.


      Note –

      Any shared disk that you choose must be qualified for use as a quorum device. See Quorum Devices for further information about choosing quorum devices.


      Use the cldevice output from Step a to identify the device-ID name of each shared disk that you are configuring as a quorum device. For example, the output in Step a shows that global device d3 is shared by phys-schost-1 and phys-schost-2.

  5. To use a shared disk that does not support the SCSI protocol, ensure that fencing is disabled for that shared disk.

    1. Display the fencing setting for the individual disk.


      phys-schost# cldevice show device
      
      === DID Device Instances ===
      DID Device Name:                                      /dev/did/rdsk/dN
      …
        default_fencing:                                     nofencing
      • If fencing for the disk is set to nofencing or nofencing-noscrub, fencing is disabled for that disk. Go to Step 6.

      • If fencing for the disk is set to pathcount or scsi, disable fencing for the disk. Skip to Step c.

      • If fencing for the disk is set to global, determine whether fencing is also disabled globally. Proceed to Step b.

        Alternatively, you can simply disable fencing for the individual disk, which overrides the value of the global_fencing property for that disk. Skip to Step c to disable fencing for the individual disk.

    2. Determine whether fencing is disabled globally.


      phys-schost# cluster show -t global
      
      === Cluster ===
      Cluster name:                                         cluster
      …
         global_fencing:                                      nofencing
      • If global fencing is set to nofencing or nofencing-noscrub, fencing is disabled for the shared disk whose default_fencing property is set to global. Go to Step 6.

      • If global fencing is set to pathcount or prefer3, disable fencing for the shared disk. Proceed to Step c.


      Note –

      If an individual disk has its default_fencing property set to global, the fencing for that individual disk is disabled only while the cluster-wide global_fencing property is set to nofencing or nofencing-noscrub. If the global_fencing property is changed to a value that enables fencing, then fencing becomes enabled for all disks whose default_fencing property is set to global.


    3. Disable fencing for the shared disk (an example sequence follows these substeps).


      phys-schost# cldevice set \
      -p default_fencing=nofencing-noscrub device
      
    4. Verify that fencing for the shared disk is now disabled.


      phys-schost# cldevice show device
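
    For example, if you chose the shared device d3 that is shown in the output in Step 4 (an illustrative choice), the full check-and-disable sequence from these substeps might look like the following:


      phys-schost# cldevice show d3
      phys-schost# cldevice set -p default_fencing=nofencing-noscrub d3
      phys-schost# cldevice show d3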
      
  6. Start the clsetup utility.


    phys-schost# clsetup
    

    The Initial Cluster Setup screen is displayed.


    Note –

    If the Main Menu is displayed instead, initial cluster setup was already successfully performed. Skip to Step 11.


  7. Answer the prompt Do you want to add any quorum disks?.

    • If your cluster is a two-node cluster, you must configure at least one shared quorum device. Type Yes to configure one or more quorum devices.

    • If your cluster has three or more nodes, quorum device configuration is optional.

      • Type No if you do not want to configure additional quorum devices. Then skip to Step 10.

      • Type Yes to configure additional quorum devices. Then proceed to Step 8.

  8. Specify what type of device you want to configure as a quorum device.


    Note –

    NAS devices are not a supported option for quorum devices in a Sun Cluster 3.2 11/09 configuration. References to NAS devices in the following table are for information only.


    Quorum Device Type     Description
    ------------------     -----------
    shared_disk            Sun NAS device or shared disk
    quorum_server          Quorum server
    netapp_nas             Network Appliance NAS device

  9. Specify the name of the device to configure as a quorum device.

    • For a quorum server, also specify the following information:

      • The IP address of the quorum server host

      • The port number that is used by the quorum server to communicate with the cluster nodes

    • For a Network Appliance NAS device, also specify the following information:

      • The name of the NAS device

      • The LUN ID of the NAS device

  10. At the prompt Is it okay to reset "installmode"?, type Yes.

    After the clsetup utility sets the quorum configurations and vote counts for the cluster, the message Cluster initialization is complete is displayed. The utility returns you to the Main Menu.

  11. Quit the clsetup utility.

Next Steps

Verify the quorum configuration and that installation mode is disabled. Go to How to Verify the Quorum Configuration and Installation Mode.

Troubleshooting

Interrupted clsetup processing – If the quorum setup process is interrupted or fails to complete successfully, rerun clsetup.

Changes to quorum vote count – If you later increase or decrease the number of node attachments to a quorum device, the quorum vote count is not automatically recalculated. You can reestablish the correct quorum vote by removing each quorum device and then adding it back into the configuration, one quorum device at a time. For a two-node cluster, temporarily add a new quorum device before you remove and add back the original quorum device. Then remove the temporary quorum device. See the procedure “How to Modify a Quorum Device Node List” in Chapter 6, Administering Quorum, in Sun Cluster System Administration Guide for Solaris OS.

ProcedureHow to Verify the Quorum Configuration and Installation Mode

Perform this procedure to verify that quorum configuration was completed successfully, if quorum was configured, and that cluster installation mode is disabled.

You do not need to be superuser to run these commands.

  1. From any global-cluster node, verify the device and node quorum configurations.


    phys-schost% clquorum list
    

    Output lists each quorum device and each node.

  2. From any node, verify that cluster installation mode is disabled.


    phys-schost% cluster show -t global | grep installmode
      installmode:                                    disabled

    Cluster installation and creation is complete.

Next Steps

Determine from the following list the next task to perform that applies to your cluster configuration. If you need to perform more than one task from this list, go to the first of those tasks in this list.

See Also

Make a backup of your cluster configuration.

An archived backup of your cluster configuration facilitates easier recovery of your cluster configuration. For more information, see How to Back Up the Cluster Configuration in Sun Cluster System Administration Guide for Solaris OS.

ProcedureHow to Change Private Hostnames

Perform this task if you do not want to use the default private hostnames, clusternodenodeid-priv (where nodeid is the node ID number), that are assigned during Sun Cluster software installation.


Note –

Do not perform this procedure after applications and data services have been configured and have been started. Otherwise, an application or data service might continue to use the old private hostname after the hostname is renamed, which would cause hostname conflicts. If any applications or data services are running, stop them before you perform this procedure.


Perform this procedure on one active node of the cluster.

  1. Become superuser on a global-cluster node.

  2. Start the clsetup utility.


    phys-schost# clsetup
    

    The clsetup Main Menu is displayed.

  3. Type the option number for Private Hostnames and press the Return key.

    The Private Hostname Menu is displayed.

  4. Type the option number for Change a Private Hostname and press the Return key.

  5. Follow the prompts to change the private hostname.

    Repeat for each private hostname to change.

  6. Verify the new private hostnames.


    phys-schost# clnode show -t node | grep privatehostname
      privatehostname:                                clusternode1-priv
      privatehostname:                                clusternode2-priv
      privatehostname:                                clusternode3-priv
Next Steps

Determine from the following list the next task to perform that applies to your cluster configuration. If you need to perform more than one task from this list, go to the first of those tasks in this list.

ProcedureHow to Configure Network Time Protocol (NTP)


Note –

If you installed your own /etc/inet/ntp.conf file before you installed Sun Cluster software, you do not need to perform this procedure. Determine your next step:


Perform this task to create or modify the NTP configuration file after you perform any of the following tasks:

If you added a node to a single-node cluster, you must ensure that the NTP configuration file that you use is copied to the original cluster node as well as to the new node.

  1. Become superuser on a cluster node.

  2. If you have your own /etc/inet/ntp.conf file, copy your file to each node of the cluster.

  3. If you do not have your own /etc/inet/ntp.conf file to install, use the /etc/inet/ntp.conf.cluster file as your NTP configuration file.


    Note –

    Do not rename the ntp.conf.cluster file as ntp.conf.


    If the /etc/inet/ntp.conf.cluster file does not exist on the node, you might have an /etc/inet/ntp.conf file from an earlier installation of Sun Cluster software. Sun Cluster software creates the /etc/inet/ntp.conf.cluster file as the NTP configuration file if an /etc/inet/ntp.conf file is not already present on the node. If so, perform the following edits instead on that ntp.conf file.

    1. Use your preferred text editor to open the NTP configuration file on one node of the cluster for editing.

    2. Ensure that an entry exists for the private hostname of each cluster node, as in the illustrative entries that are shown after these substeps.

      If you changed any node's private hostname, ensure that the NTP configuration file contains the new private hostname.

    3. If necessary, make other modifications to meet your NTP requirements.

    4. Copy the NTP configuration file to all nodes in the cluster.

      The contents of the NTP configuration file must be identical on all cluster nodes.
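
    As an illustration only, entries for the private hostnames of a three-node cluster might resemble the following. The exact directives depend on your NTP requirements and on the template that is shipped with your Sun Cluster release:


    peer clusternode1-priv
    peer clusternode2-priv
    peer clusternode3-priv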

  4. Stop the NTP daemon on each node.

    Wait for the command to complete successfully on each node before you proceed to Step 5.

    • SPARC: For the Solaris 9 OS, use the following command:


      phys-schost# /etc/init.d/xntpd stop
      
    • For the Solaris 10 OS, use the following command:


      phys-schost# svcadm disable ntp
      
  5. Restart the NTP daemon on each node.

    • If you use the ntp.conf.cluster file, run the following command:


      phys-schost# /etc/init.d/xntpd.cluster start
      

      The xntpd.cluster startup script first looks for the /etc/inet/ntp.conf file.

      • If the ntp.conf file exists, the script exits immediately without starting the NTP daemon.

      • If the ntp.conf file does not exist but the ntp.conf.cluster file does exist, the script starts the NTP daemon. In this case, the script uses the ntp.conf.cluster file as the NTP configuration file.

    • If you use the ntp.conf file, run one of the following commands:

      • SPARC: For the Solaris 9 OS, use the following command:


        phys-schost# /etc/init.d/xntpd start
        
      • For the Solaris 10 OS, use the following command:


        phys-schost# svcadm enable ntp
        
Next Steps

Determine from the following list the next task to perform that applies to your cluster configuration. If you need to perform more than one task from this list, go to the first of those tasks in this list.

ProcedureHow to Configure IP Security Architecture (IPsec) on the Cluster Private Interconnect

You can configure IP Security Architecture (IPsec) for the clprivnet interface to provide secure TCP/IP communication on the cluster interconnect.

For information about IPsec, see Part IV, IP Security, in System Administration Guide: IP Services and the ipsecconf(1M) man page. For information about the clprivnet interface, see the clprivnet(7) man page.

Perform this procedure on each global-cluster voting node that you want to configure to use IPsec.

  1. Become superuser.

  2. On each node, determine the IP address of the clprivnet interface of the node.


    phys-schost# ifconfig clprivnet0
    
  3. On each node, configure the /etc/inet/ipsecinit.conf policy file and add Security Associations (SAs) between each pair of private-interconnect IP addresses that you want to use IPsec.

    Follow the instructions in How to Secure Traffic Between Two Systems With IPsec in System Administration Guide: IP Services. In addition, observe the following guidelines:

    • Ensure that the values of the configuration parameters for these addresses are consistent on all the partner nodes.

    • Configure each policy as a separate line in the configuration file.

    • To implement IPsec without rebooting, follow the instructions in the procedure's example, Securing Traffic With IPsec Without Rebooting.

    For more information about the sa unique policy, see the ipsecconf(1M) man page.

    1. In each file, add one entry for each clprivnet IP address in the cluster to use IPsec.

      Include the clprivnet IP address of the local node.

    2. If you use VNICs, also add one entry for the IP address of each physical interface that is used by the VNICs.

    3. (Optional) To enable striping of data over all links, include the sa unique policy in the entry.

      This feature helps the driver to optimally utilize the bandwidth of the cluster private network, which provides a high granularity of distribution and better throughput. The clprivnet interface uses the Security Parameter Index (SPI) of the packet to stripe the traffic.
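
    As an illustration only, a policy entry between two clprivnet addresses might resemble the following. The addresses are placeholders for the clprivnet IP addresses that you determined in Step 2, and the algorithms are examples rather than recommendations:


    {laddr 172.16.4.1 raddr 172.16.4.2} ipsec {encr_algs aes encr_auth_algs sha1 sa unique}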

  4. On each node, edit the /etc/inet/ike/config file to set the p2_idletime_secs parameter.

    Add this entry to the policy rules that are configured for cluster transports. This setting provides the time for security associations to be regenerated when a cluster node reboots, and limits how quickly a rebooted node can rejoin the cluster. A value of 30 seconds should be adequate.


    phys-schost# vi /etc/inet/ike/config
    …
    {
        label "clust-priv-interconnect1-clust-priv-interconnect2"
    …
    p2_idletime_secs 30
    }
    …
Next Steps

Determine from the following list the next task to perform that applies to your cluster configuration. If you need to perform more than one task from this list, go to the first of those tasks in this list.

ProcedureHow to Record Diagnostic Data of the Cluster Configuration

After you finish configuring the global cluster but before you put it into production, use the Sun Explorer utility to record baseline information about the cluster. This data can be used if there is a future need to troubleshoot the cluster.

  1. Become superuser.

  2. Run the explorer utility on each node in the cluster.

    Use the appropriate command for your platform:

    Server                          Command
    ------                          -------
    Sun Fire 3800 through 6800      # explorer -i -w default,scextended
    Sun Fire V1280 and E2900        # explorer -i -w default,1280extended
    Sun Fire T1000 and T2000        # explorer -i -w default,Tx000
    Sun Fire X4x00 and X8x00        # explorer -i -w default,ipmi
    All other platforms             # explorer -i

    For more information, see the explorer(1M) man page in the /opt/SUNWexplo/man/man1m/ directory and Sun Explorer User’s Guide.

    The explorer output file is saved in the /opt/SUNWexplo/output/ directory as explorer.hostid.hostname-date.tar.gz.

  3. Save the files to a location that you can access if the entire cluster is down.
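
    For example, one way to copy the archives to an administrative host outside the cluster (the host name and directory are hypothetical) is the following:


    phys-schost# scp /opt/SUNWexplo/output/explorer.*.tar.gz admin@adminhost:/export/cluster-baseline/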

  4. Send all explorer files by email to the Sun Explorer database alias for your geographic location.

    This database makes your explorer output available to Sun technical support if the data is needed to help diagnose a technical problem with your cluster.

    Location                                            Email Address
    --------                                            -------------
    North, Central, and South America (AMER)            explorer-database-americas@sun.com
    Europe, Middle East, and Africa (EMEA)              explorer-database-emea@sun.com
    Asia, Australia, New Zealand, and Pacific (APAC)    explorer-database-apac@sun.com