Oracle® Solaris Cluster 4.3 Software Installation Guide

Updated: June 2019

How to Configure Oracle Solaris Cluster Software on All Nodes (XML)

Perform this procedure to configure a new global cluster by using an XML cluster configuration file. The new cluster can be a duplication of an existing cluster that runs the Oracle Solaris Cluster software.

This procedure configures the following cluster components:

  • Cluster name

  • Cluster node membership

  • Cluster interconnect

Before You Begin

Perform the following tasks:

  • Ensure that the Oracle Solaris OS is installed to support the Oracle Solaris Cluster software.

    If the Oracle Solaris software is already installed on the node, you must ensure that the Oracle Solaris installation meets the requirements for the Oracle Solaris Cluster software and any other software that you intend to install on the cluster. See How to Install Oracle Solaris Software for more information about installing the Oracle Solaris software to meet Oracle Solaris Cluster software requirements.

  • Ensure that NWAM is disabled. See How to Install Oracle Solaris Cluster Software Packages for instructions.

  • SPARC: If you are configuring Oracle VM Server for SPARC logical domains as cluster nodes, ensure that the Oracle VM Server for SPARC software is installed on each physical machine and that the domains meet Oracle Solaris Cluster requirements. See How to Install Oracle VM Server for SPARC Software and Create Domains.

  • Ensure that any adapters that you want to use as tagged VLAN adapters are configured and that you have their VLAN IDs. An example check is shown after this list.

  • Ensure that Oracle Solaris Cluster 4.3 software and updates are installed on each node that you will configure. See How to Install Oracle Solaris Cluster Software Packages.
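
For example, on Oracle Solaris 11 you can confirm a tagged VLAN link and its VLAN ID with the dladm command. The link name, VLAN name, and VLAN ID in this output are hypothetical:

  # dladm show-vlan
  LINK                VID      OVER         FLAGS
  vlan58              58       net2         -----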

  1. Ensure that the Oracle Solaris Cluster 4.3 software is not yet configured on any potential cluster node.
    1. Assume the root role on a potential node that you want to configure in the new cluster.
    2. Determine whether the Oracle Solaris Cluster software is already configured on the potential node.
      phys-schost# /usr/sbin/clinfo -n
      • If the command returns the following message, proceed to Step c.
        clinfo: node is not configured as part of a cluster: Operation not applicable

        This message indicates that the Oracle Solaris Cluster software is not yet configured on the potential node.

      • If the command returns the node ID number, do not perform this procedure.

        The return of a node ID indicates that the Oracle Solaris Cluster software is already configured on the node.

        If the cluster is running an older version of Oracle Solaris Cluster software and you want to install Oracle Solaris Cluster 4.3 software, instead perform upgrade procedures in Oracle Solaris Cluster 4.3 Upgrade Guide.

    3. Repeat Step a and Step b on each remaining potential node that you want to configure in the new cluster.

      If the Oracle Solaris Cluster software is not yet configured on any of the potential cluster nodes, proceed to Step 2.
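
      If the potential nodes are reachable over ssh, you can also run this check from a single host with a simple loop. This is only a sketch; the node names are placeholders for your own host names.

      phys-schost# for node in phys-schost-1 phys-schost-2 phys-schost-3; do ssh $node /usr/sbin/clinfo -n; done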

  2. Ensure that TCP wrappers for RPC are disabled on all nodes of the cluster.

    The Oracle Solaris TCP wrappers for RPC feature prevents internode communication that is necessary for cluster configuration.

    1. On each node, display the status of TCP wrappers for RPC.

      TCP wrappers are enabled if config/enable_tcpwrappers is set to true, as shown in the following example command output.

      # svccfg -s rpc/bind listprop config/enable_tcpwrappers
      config/enable_tcpwrappers  boolean true
    2. If TCP wrappers for RPC are enabled on a node, disable TCP wrappers and refresh the RPC bind service.
      # svccfg -s rpc/bind setprop config/enable_tcpwrappers = false
      # svcadm refresh rpc/bind
      # svcadm restart rpc/bind
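
      To confirm the change, query the property value directly. It should now report false.

      # svcprop -p config/enable_tcpwrappers rpc/bind
      false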
  3. If you are using switches in the private interconnect of your new cluster, ensure that Neighbor Discovery Protocol (NDP) is disabled.

    Follow the procedures in the documentation for your switches to determine whether NDP is enabled and to disable NDP.

    During cluster configuration, the software checks that there is no traffic on the private interconnect. If NDP sends any packets to a private adapter while the private interconnect is being checked for traffic, the software assumes that the interconnect is not private, and cluster configuration is interrupted. NDP must therefore be disabled during cluster creation.

    After the cluster is established, you can re-enable NDP on the private-interconnect switches if you want to use that feature.

  4. If you are duplicating an existing cluster that runs the Oracle Solaris Cluster 4.3 software, use a node in that cluster to create a cluster configuration XML file.
    1. Assume the root role on an active member of the cluster that you want to duplicate.
    2. Export the existing cluster's configuration information to a file.
      phys-schost# cluster export -o clconfigfile
      -o

      Specifies the output destination.

      clconfigfile

      The name of the cluster configuration XML file. The specified file can be an existing file or a new file that the command creates.

      For more information, see the cluster(1CL) man page.

    3. Copy the configuration file to the potential node from which you will configure the new cluster.

      You can store the file in any directory that is accessible to the other hosts that you will configure as cluster nodes.
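
      For example, you might copy the file with scp. The target host name and directory here are placeholders:

      phys-schost# scp clconfigfile phys-newhost-1:/var/tmp/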

  5. Assume the root role on the potential node from which you will configure the new cluster.
  6. Modify or create the cluster configuration XML file as needed.

    Include or modify the values of the XML elements to reflect the cluster configuration that you want to create.

    • If you are duplicating an existing cluster, open the file that you created with the cluster export command.

    • If you are not duplicating an existing cluster, create a new file.

      Base the file on the element hierarchy that is shown in the clconfiguration(5CL) man page. You can store the file in any directory that is accessible to the other hosts that you will configure as cluster nodes.

    • To establish a cluster, the following components must have valid values in the cluster configuration XML file:

      • Cluster name

      • Cluster nodes

      • Cluster transport

    • If you are modifying configuration information that was exported from an existing cluster, some values that you must change to reflect the new cluster, such as node names, are used in the definitions of more than one cluster object.

    See the clconfiguration(5CL) man page for details about the structure and content of the cluster configuration XML file.
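
    The following fragment sketches the overall shape of such a file, using hypothetical cluster and node names. It is illustrative only; a file generated by the cluster export command is the safest template, and the authoritative element hierarchy is defined in the clconfiguration(5CL) man page.

    <?xml version="1.0"?>
    <!DOCTYPE cluster SYSTEM "/usr/cluster/lib/xml/cluster.dtd">
    <!-- Illustrative skeleton only; verify every element name and
         attribute against the clconfiguration(5CL) man page. -->
    <cluster name="newcluster">
      <nodeList>
        <node name="phys-newhost-1" id="1"/>
        <node name="phys-newhost-2" id="2"/>
      </nodeList>
      <clusterTransport>
        <!-- Transport adapters, switches, and cables go here. -->
      </clusterTransport>
    </cluster>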

  7. Validate the cluster configuration XML file.
    phys-schost# xmllint --valid --noout clconfigfile

    See the xmllint(1) man page for more information.

  8. Authorize acceptance of cluster configuration commands by the control node.
    1. Determine which system to use to issue the cluster creation command.

      This system is the control node.

    2. On all systems that you will configure in the cluster, other than the control node, authorize acceptance of commands from the control node.
      phys-schost# clauth enable -n control-node

      If you want to use the des (Diffie-Hellman) authentication protocol instead of the sys (unix) protocol, include -p des in the command.

      phys-schost# clauth enable -p des -n control-node

      For information about setting up DES authentication, see Administering Authentication With Secure RPC in Managing Kerberos and Other Authentication Services in Oracle Solaris 11.3.

  9. From the potential node that contains the cluster configuration XML file, create the cluster.
    phys-schost# cluster create -i clconfigfile
    -i clconfigfile

    Specifies the name of the cluster configuration XML file to use as the input source.

  10. Verify on each node that multiuser services for the Service Management Facility (SMF) are online.

    If services are not yet online for a node, wait until the state changes to online before you proceed to the next step.

    phys-schost# svcs multi-user-server
    STATE          STIME    FMRI
    online         17:52:55 svc:/milestone/multi-user-server:default
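
    If you check from a script, you can poll until the milestone reports online. This is a minimal sketch:

    phys-schost# until svcs -H -o state svc:/milestone/multi-user-server:default | grep -q online; do sleep 10; done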
  11. From one node, verify that all nodes have joined the cluster.
    phys-schost# clnode status

    Output resembles the following.

    === Cluster Nodes ===
    
    --- Node Status ---
    
    Node Name                                       Status
    ---------                                       ------
    phys-schost-1                                   Online
    phys-schost-2                                   Online
    phys-schost-3                                   Online

    For more information, see the clnode(1CL) man page.

  12. Perform any necessary updates to the Oracle Solaris Cluster software.

    For instructions on updating your software, see Chapter 11, Updating Your Software in Oracle Solaris Cluster 4.3 System Administration Guide.

  13. If you plan to enable RPC use of TCP wrappers, add all clprivnet0 IP addresses to the /etc/hosts.allow file on each cluster node.

    Without this addition to the /etc/hosts.allow file, TCP wrappers prevent internode communication over RPC for cluster administration utilities.

    1. On each node, display the IP addresses for all clprivnet0 devices on the node.
      # /usr/sbin/ipadm show-addr
      ADDROBJ           TYPE     STATE        ADDR
      clprivnet0/N      static   ok           ip-address/netmask-length
    2. On each cluster node, add to the /etc/hosts.allow file the IP addresses of all clprivnet0 devices in the cluster.
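
      As a sketch, you can list just the addresses in parsable form and then append a single rpcbind line to /etc/hosts.allow. The addresses shown are placeholders for your cluster's clprivnet0 addresses:

      # ipadm show-addr -p -o addr clprivnet0/N | cut -d/ -f1
      172.16.4.1
      # echo "rpcbind: 172.16.4.1 172.16.4.2" >> /etc/hosts.allow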
  14. If you intend to use the HA for NFS data service (HA for NFS) on a highly available local file system, exclude from the automounter map all shares that are part of the highly available local file system that is exported by HA for NFS.

    See Administration Tasks Involving Maps in Managing Network File Systems in Oracle Solaris 11.3 for more information about modifying the automounter map.

  15. To duplicate quorum information from an existing cluster, configure the quorum device by using the cluster configuration XML file.

    You must configure a quorum device if you created a two-node cluster. If you choose not to use the cluster configuration XML file to create a required quorum device, go instead to How to Configure Quorum Devices.

    1. If you are using a quorum server for the quorum device, ensure that the quorum server is set up and running.

      Follow instructions in How to Install and Configure Oracle Solaris Cluster Quorum Server Software.

    2. If you are using a NAS device for the quorum device, ensure that the NAS device is set up and operational.
      1. Observe the requirements for using a NAS device as a quorum device.

        See Oracle Solaris Cluster With Network-Attached Storage Device Manual.

      2. Follow instructions in your device's documentation to set up the NAS device.
    3. Ensure that the quorum configuration information in the cluster configuration XML file reflects valid values for the cluster that you created.
    4. If you made changes to the cluster configuration XML file, validate the file.
      phys-schost# xmllint --valid --noout clconfigfile
    5. Configure the quorum device.
      phys-schost# clquorum add -i clconfigfile device-name
      device-name

      Specifies the name of the device to configure as a quorum device.

  16. Remove the cluster from installation mode.
    phys-schost# clquorum reset
  17. Close access to the cluster configuration by machines that are not configured cluster members.
    phys-schost# claccess deny-all
  18. (Optional) Enable automatic node reboot if all monitored shared-disk paths fail.

    Note -  At initial configuration time, disk-path monitoring is enabled by default for all discovered devices.
    1. Enable automatic reboot.
      phys-schost# clnode set -p reboot_on_path_failure=enabled +
      -p

      Specifies the property to set.

      reboot_on_path_failure=enabled

      Enables automatic node reboot if failure of all monitored shared-disk paths occurs.

    2. Verify that automatic reboot on disk-path failure is enabled.
      phys-schost# clnode show
      === Cluster Nodes ===
      
      Node Name:                                      node
      …
      reboot_on_path_failure:                          enabled
      …
Example 2  Configuring Oracle Solaris Cluster Software on All Nodes By Using an XML File

The following example duplicates the cluster configuration and quorum configuration of an existing two-node cluster to a new two-node cluster. The new cluster is installed with the Oracle Solaris OS. The cluster configuration is exported from the existing cluster node, phys-oldhost-1, to the cluster configuration XML file clusterconf.xml. The node names of the new cluster are phys-newhost-1 and phys-newhost-2. The device that is configured as a quorum device in the new cluster is d3.

The prompt name phys-newhost-N in this example indicates that the command is performed on both cluster nodes.

phys-newhost-N# /usr/sbin/clinfo -n
clinfo: node is not configured as part of a cluster: Operation not applicable
 
phys-oldhost-1# cluster export -o clusterconf.xml
Copy clusterconf.xml to phys-newhost-1 and modify the file with valid values
 
phys-newhost-1# xmllint --valid --noout clusterconf.xml
No errors are reported
 
phys-newhost-1# cluster create -i clusterconf.xml
phys-newhost-N# svcs multi-user-server
STATE          STIME    FMRI
online         17:52:55 svc:/milestone/multi-user-server:default
phys-newhost-1# clnode status
Output shows that both nodes are online
 
phys-newhost-1# clquorum add -i clusterconf.xml d3
phys-newhost-1# clquorum reset

Configuring Additional Components

After the cluster is fully established, you can duplicate the configuration of the other cluster components from the existing cluster. If you did not already do so, modify the values of the XML elements that you want to duplicate to reflect the cluster configuration that you are adding the component to. For example, if you are duplicating resource groups, ensure that the resourcegroupNodeList entry contains the valid node names for the new cluster, not the node names from the cluster that you duplicated, unless the node names are the same.
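
For a consistent node-name change, a global substitution over the exported file can handle most occurrences at once. This is only a sketch, reusing the host and file names from the example above; review the result by hand before you use it:

phys-newhost-1# sed -e 's/phys-oldhost-/phys-newhost-/g' clusterconf.xml > clusterconf-new.xml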

To duplicate a cluster component, run the export subcommand of the object-oriented command for the cluster component that you want to duplicate. For more information about the command syntax and options, see the man page for the cluster object that you want to duplicate.
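
As a hedged illustration of that pattern for a device group, the following sketch assumes a Solaris Volume Manager disk set named dg-schost-1; see the cldevicegroup(1CL) and metaset(1M) man pages for the exact subcommands and options:

phys-oldhost-1# cldevicegroup export -o dgconfig.xml dg-schost-1
(copy dgconfig.xml to the new cluster)
phys-newhost-1# metaset -s dg-schost-1 -a -h phys-newhost-1 phys-newhost-2
phys-newhost-1# cldevicegroup create -i dgconfig.xml dg-schost-1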

The following list describes the cluster components that you can create from a cluster configuration XML file after the cluster is established, along with the man page for the command that you use to duplicate each component:

  • Device groups: Solaris Volume Manager: cldevicegroup(1CL)

    For Solaris Volume Manager, first create the disk sets that you specify in the cluster configuration XML file.

  • Resource Group Manager components

    You can use the -a option of the clresource, clressharedaddress, or clreslogicalhostname command to also duplicate the resource type and resource group that are associated with the resource that you duplicate. Otherwise, you must first add the resource type and resource group to the cluster before you add the resource.

  • NAS devices: clnasdevice(1CL)

    You must first set up the NAS device as described in the device's documentation.

  • SNMP hosts: clsnmphost(1CL)

    The clsnmphost create -i command requires that you specify a user password file with the -f option.

  • SNMP users: clsnmpuser(1CL)

  • Thresholds for monitoring system resources on cluster objects: cltelemetryattribute(1CL)

Troubleshooting

Unsuccessful configuration – If one or more nodes cannot join the cluster, or if the wrong configuration information was specified, first attempt to perform this procedure again. If that does not correct the problem, perform the procedure How to Unconfigure Oracle Solaris Cluster Software to Correct Installation Problems on each misconfigured node to remove it from the cluster configuration. You do not need to uninstall the Oracle Solaris Cluster software packages. Then perform this procedure again.

Next Steps

Go to How to Verify the Quorum Configuration and Installation Mode.