Sun Cluster Software Installation Guide for Solaris OS

How to Configure Sun Cluster Software on All Nodes (XML)

Perform this procedure to configure a new global cluster by using an XML cluster configuration file. The new cluster can be a duplicate of an existing cluster that runs Sun Cluster 3.2 11/09 software.

This procedure configures the following cluster components:

  • Cluster name

  • Cluster nodes

  • Cluster transport (the private interconnect)

  • Global-devices namespace

Before You Begin

Perform the following tasks:

  • Ensure that the Solaris OS is installed on each potential cluster node to support Sun Cluster software.

  • Ensure that the Sun Cluster 3.2 11/09 software packages are installed on each potential cluster node.

  1. Ensure that Sun Cluster 3.2 11/09 software is not yet configured on any potential cluster node.

    1. Become superuser on a potential node that you want to configure in the new cluster.

    2. Determine whether Sun Cluster software is already configured on the potential node.


      phys-schost# /usr/sbin/clinfo -n
      
      • If the command returns the following message, proceed to Step 3.


        clinfo: node is not configured as part of a cluster: Operation not applicable

        This message indicates that Sun Cluster software is not yet configured on the potential node.

      • If the command returns the node ID number, do not perform this procedure.

        The return of a node ID indicates that Sun Cluster software is already configured on the node.

        If the cluster is running an older version of Sun Cluster software and you want to install Sun Cluster 3.2 11/09 software, instead perform upgrade procedures in Sun Cluster Upgrade Guide for Solaris OS.

    3. Repeat Step 1 and Step 2 on each remaining potential node that you want to configure in the new cluster.

      If Sun Cluster software is not yet configured on any of the potential cluster nodes, proceed to Step 2.
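
      For example, you can run this check on every potential node from a single administrative host. The following is only a sketch: it assumes the hypothetical host names phys-newhost-1 and phys-newhost-2, that superuser ssh access to each node is configured, and uses admin-console# as a hypothetical prompt.


      admin-console# for host in phys-newhost-1 phys-newhost-2; do ssh $host /usr/sbin/clinfo -n; done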

  2. If you are using switches in the private interconnect of your new cluster, ensure that Neighbor Discovery Protocol (NDP) is disabled.

    Follow the procedures in the documentation for your switches to determine whether NDP is enabled and to disable NDP.

    During cluster configuration, the software checks that there is no traffic on the private interconnect. If NDP sends any packets to a private adapter while the private interconnect is being checked for traffic, the software assumes that the interconnect is not private, and cluster configuration is interrupted. NDP must therefore be disabled during cluster creation.

    After the cluster is established, you can re-enable NDP on the private-interconnect switches if you want to use that feature.

  3. If you are duplicating an existing cluster that runs Sun Cluster 3.2 11/09 software, use a node in that cluster to create a cluster configuration XML file.

    1. Become superuser on an active member of the cluster that you want to duplicate.

    2. Export the existing cluster's configuration information to a file.


      phys-schost# cluster export -o clconfigfile
      
      -o

      Specifies the output destination.

      clconfigfile

      The name of the cluster configuration XML file. The specified file can be an existing file or a new file that the command will create.

      For more information, see the cluster(1CL) man page.

    3. Copy the configuration file to the potential node from which you will configure the new cluster.

      You can store the file in any directory that is accessible to the other hosts that you will configure as cluster nodes.
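
      For example, assuming the hypothetical destination directory /var/tmp on the new node phys-newhost-1, you might copy the file with scp (any other file-transfer method works equally well):


      phys-oldhost-1# scp clconfigfile phys-newhost-1:/var/tmp/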

  4. Become superuser on the potential node from which you will configure the new cluster.

  5. Modify the cluster configuration XML file as needed.

    1. Open your cluster configuration XML file for editing.

      • If you are duplicating an existing cluster, open the file that you created with the cluster export command.

      • If you are not duplicating an existing cluster, create a new file.

        Base the file on the element hierarchy that is shown in the clconfiguration(5CL) man page. You can store the file in any directory that is accessible to the other hosts that you will configure as cluster nodes.

    2. Modify the values of the XML elements to reflect the cluster configuration that you want to create.

      • To establish a cluster, the following components must have valid values in the cluster configuration XML file:

        • Cluster name

        • Cluster nodes

        • Cluster transport

      • The cluster is created with the assumption that the partition /globaldevices exists on each node that you configure as a cluster node. The global-devices namespace is created on this partition. If you need to use a different file system on which to create the global-devices namespace, add the following property to the <propertyList> element for each node that does not have a partition that is named /globaldevices.


        …
          <nodeList>
            <node name="node" id="N">
              <propertyList>
        …
                <property name="globaldevfs" value="/filesystem-name">
        …
              </propertyList>
            </node>
        …

        To instead use a lofi device for the global-devices namespace, set the value of the globaldevfs property to lofi.


                
        <property name="globaldevfs" value="lofi">
        
      • If you are modifying configuration information that was exported from an existing cluster, some values that you must change to reflect the new cluster, such as node names, are used in the definitions of more than one cluster object. Be sure to change every occurrence of such values; see the sketch that follows this list.

      See the clconfiguration(5CL) man page for details about the structure and content of the cluster configuration XML file.
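
      For example, to change every occurrence of the old node names in one pass, you can use sed. The following sketch assumes the hypothetical host and file names that are used in Example 3–2; always review the edited file afterward:


      phys-newhost-1# sed -e 's/phys-oldhost-1/phys-newhost-1/g' \
            -e 's/phys-oldhost-2/phys-newhost-2/g' \
            clusterconf.xml > clusterconf.new.xml
      phys-newhost-1# mv clusterconf.new.xml clusterconf.xml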

  6. Validate the cluster configuration XML file.


    phys-schost# xmllint --valid --noout clconfigfile
    

    See the xmllint(1) man page for more information.
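
    When the file is valid, xmllint prints no output and exits with status 0. For example, you can make a successful validation explicit:


    phys-schost# xmllint --valid --noout clconfigfile && echo "clconfigfile is valid"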

  7. From the potential node that contains the cluster configuration XML file, create the cluster.


    phys-schost# cluster create -i clconfigfile
    
    -i clconfigfile

    Specifies the name of the cluster configuration XML file to use as the input source.

  8. For the Solaris 10 OS, verify on each node that multiuser services for the Service Management Facility (SMF) are online.

    If services are not yet online for a node, wait until the state changes to online before you proceed to the next step.


    phys-schost# svcs multi-user-server node
    STATE          STIME    FMRI
    online         17:52:55 svc:/milestone/multi-user-server:default
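
    If you prefer to poll rather than re-check manually, a minimal sketch such as the following waits until the milestone reports online (it assumes the default Bourne shell and uses only the svcs options already shown):


    phys-schost# until [ "`svcs -H -o state multi-user-server`" = "online" ]; do sleep 10; done
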
  9. From one node, verify that all nodes have joined the cluster.


    phys-schost# clnode status
    

    Output resembles the following.


    === Cluster Nodes ===
    
    --- Node Status ---
    
    Node Name                                       Status
    ---------                                       ------
    phys-schost-1                                   Online
    phys-schost-2                                   Online
    phys-schost-3                                   Online

    For more information, see the clnode(1CL) man page.

  10. Install any necessary patches to support Sun Cluster software, if you have not already done so.

    See Patches and Required Firmware Levels in Sun Cluster Release Notes for the location of patches and installation instructions.

  11. If you intend to use Sun Cluster HA for NFS on a highly available local file system, ensure that the loopback file system (LOFS) is disabled.

    To disable LOFS, add the following entry to the /etc/system file on each node of the cluster.


    exclude:lofs

    The change to the /etc/system file becomes effective after the next system reboot.
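
    For example, you can append the entry as superuser on each node (a sketch; editing the /etc/system file directly in a text editor works equally well):


    phys-schost# echo 'exclude:lofs' >> /etc/system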


    Note –

    You cannot have LOFS enabled if you use Sun Cluster HA for NFS on a highly available local file system and have automountd running. LOFS can cause switchover problems for Sun Cluster HA for NFS. If you choose to add Sun Cluster HA for NFS on a highly available local file system, you must make one of the following configuration changes.

    • Disable LOFS.

    • Disable the automountd daemon.

    • Exclude from the automounter map all files that are part of the highly available local file system that is exported by Sun Cluster HA for NFS. This choice enables you to keep both LOFS and the automountd daemon enabled.

    However, if you configure non-global zones in your cluster, you must enable LOFS on all cluster nodes. If Sun Cluster HA for NFS on a highly available local file system must coexist with LOFS, use one of the other solutions instead of disabling LOFS.
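
    For example, if you instead choose to disable the automountd daemon on the Solaris 10 OS, you can disable the autofs service (on the Solaris 9 OS, stop the autofs service script instead):


    phys-schost# svcadm disable svc:/system/filesystem/autofs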


    See The Loopback File System in System Administration Guide: Devices and File Systems (Solaris 9 or Solaris 10) for more information about loopback file systems.

  12. To duplicate quorum information from an existing cluster, configure the quorum device by using the cluster configuration XML file.

    You must configure a quorum device if you created a two-node cluster. If you choose not to use the cluster configuration XML file to create a required quorum device, go instead to How to Configure Quorum Devices.

    1. If you are using a quorum server for the quorum device, ensure that the quorum server is set up and running.

      Follow instructions in How to Install and Configure Quorum Server Software.

    2. If you are using a NAS device for the quorum device, ensure that the NAS device is set up and operational.

      1. Observe the requirements for using a NAS device as a quorum device.

        See Sun Cluster 3.1 - 3.2 With Network-Attached Storage Devices Manual for Solaris OS.

      2. Follow instructions in your device's documentation to set up the NAS device.

    3. Ensure that the quorum configuration information in the cluster configuration XML file reflects valid values for the cluster that you created.

    4. If you made changes to the cluster configuration XML file, validate the file.


      phys-schost# xmllint --valid --noout clconfigfile
      
    5. Configure the quorum device.


      phys-schost# clquorum add -i clconfigfile devicename
      
      devicename

      Specifies the name of the device to configure as a quorum device.
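
      To confirm the new quorum configuration, you can list the configured quorum devices. For example:


      phys-schost# clquorum list -v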

  13. Remove the cluster from installation mode.


    phys-schost# clquorum reset
    
  14. Close access to the cluster configuration by machines that are not configured cluster members.


    phys-schost# claccess deny-all
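
    You can verify the resulting access policy with the show subcommand:


    phys-schost# claccess show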
    
  15. (Optional) Enable automatic node reboot if all monitored shared-disk paths fail.

    1. Enable automatic reboot.


      phys-schost# clnode set -p reboot_on_path_failure=enabled
      
      -p

      Specifies the property to set.

      reboot_on_path_failure=enabled

      Enables automatic node reboot if failure of all monitored shared-disk paths occurs.

    2. Verify that automatic reboot on disk-path failure is enabled.


      phys-schost# clnode show
      === Cluster Nodes ===                          
      
      Node Name:                                      node
      …
        reboot_on_path_failure:                          enabled
      …

Example 3–2 Configuring Sun Cluster Software on All Nodes By Using an XML File

The following example duplicates the cluster configuration and quorum configuration of an existing two-node cluster to a new two-node cluster. The new cluster is installed with the Solaris 10 OS and is not configured with non-global zones. The cluster configuration is exported from the existing cluster node, phys-oldhost-1, to the cluster configuration XML file clusterconf.xml. The node names of the new cluster are phys-newhost-1 and phys-newhost-2. The device that is configured as a quorum device in the new cluster is d3.

The prompt name phys-newhost-N in this example indicates that the command is performed on both cluster nodes.


phys-newhost-N# /usr/sbin/clinfo -n
clinfo: node is not configured as part of a cluster: Operation not applicable
 
phys-oldhost-1# cluster export -o clusterconf.xml
Copy clusterconf.xml to phys-newhost-1 and modify the file with valid values
 
phys-newhost-1# xmllint --valid --noout clusterconf.xml
No errors are reported
 
phys-newhost-1# cluster create -i clusterconf.xml
phys-newhost-N# svcs multi-user-server phys-newhost-N
STATE          STIME    FMRI
online         17:52:55 svc:/milestone/multi-user-server:default
phys-newhost-1# clnode status
Output shows that both nodes are online
 
phys-newhost-1# clquorum add -i clusterconf.xml d3
phys-newhost-1# clquorum reset

Troubleshooting

Unsuccessful configuration – If one or more nodes cannot join the cluster, or if the wrong configuration information was specified, first attempt to rerun this procedure. If that does not correct the problem, perform the procedure How to Unconfigure Sun Cluster Software to Correct Installation Problems on each misconfigured node to remove it from the cluster configuration. You do not need to uninstall the Sun Cluster software packages. Then rerun this procedure.

Next Steps

Go to How to Verify the Quorum Configuration and Installation Mode.

See Also

After the cluster is fully established, you can duplicate the configuration of the other cluster components from the existing cluster. If you did not already do so, modify the values of the XML elements that you want to duplicate to reflect the configuration of the cluster to which you are adding the component. For example, if you are duplicating resource groups, ensure that the <resourcegroupNodeList> entry contains the valid node names for the new cluster, not the node names from the cluster that you duplicated, unless the node names are the same.

To duplicate a cluster component, run the export subcommand of the object-oriented command for the cluster component that you want to duplicate. For more information about the command syntax and options, see the man page for the cluster object that you want to duplicate.
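
For example, to duplicate the resource groups of the existing cluster, you might proceed as follows. This is a sketch that reuses the hypothetical host and file names from Example 3–2; see the clresourcegroup(1CL) man page for the authoritative syntax and operands before you run these commands.


phys-oldhost-1# clresourcegroup export -o rgconfig.xml
Copy rgconfig.xml to phys-newhost-1 and update the node names in <resourcegroupNodeList>
phys-newhost-1# clresourcegroup create -i rgconfig.xml +

The following list shows the cluster components that you can create from a cluster configuration XML file after the cluster is established, the man page for the command that you use to duplicate each component, and any special instructions.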


Note –

This list provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands, in Sun Cluster System Administration Guide for Solaris OS.


Device groups: Solaris Volume Manager and Veritas Volume Manager

    Man page: cldevicegroup(1CL)

    Special instructions: For Solaris Volume Manager, first create the disk sets that you specify in the cluster configuration XML file. For VxVM, first install and configure VxVM software and create the disk groups that you specify in the cluster configuration XML file.

Resources

    Man page: clresource(1CL)

    Special instructions: You can use the -a option of the clresource, clressharedaddress, or clreslogicalhostname command to also duplicate the resource type and resource group that are associated with the resource that you duplicate. Otherwise, you must first add the resource type and resource group to the cluster before you add the resource.

Shared address resources

    Man page: clressharedaddress(1CL)

Logical hostname resources

    Man page: clreslogicalhostname(1CL)

Resource types

    Man page: clresourcetype(1CL)

Resource groups

    Man page: clresourcegroup(1CL)

NAS devices

    Man page: clnasdevice(1CL)

    Special instructions: You must first set up the NAS device as described in the device's documentation.

SNMP hosts

    Man page: clsnmphost(1CL)

    Special instructions: The clsnmphost create -i command requires that you specify a user password file with the -f option.

SNMP users

    Man page: clsnmpuser(1CL)

Thresholds for monitoring system resources on cluster objects

    Man page: cltelemetryattribute(1CL)