Oracle Solaris Cluster Software Installation Guide (Oracle Solaris Cluster 4.1)

Establishing a New Global Cluster or New Global-Cluster Node

This section describes how to establish a new global cluster or add a node to an existing cluster. Global-cluster nodes can be physical machines, Oracle VM Server for SPARC I/O domains, or Oracle VM Server for SPARC guest domains. A cluster can consist of a combination of any of these node types. Before you start to perform these tasks, ensure that you installed software packages for the Oracle Solaris OS, Oracle Solaris Cluster framework, and other products as described in Installing the Software.

This section contains the following information and procedures:

• Configuring Oracle Solaris Cluster Software on All Nodes (scinstall)
• How to Configure Oracle Solaris Cluster Software on All Nodes (scinstall)
• How to Configure Oracle Solaris Cluster Software on All Nodes (XML)
• Installing and Configuring Oracle Solaris and Oracle Solaris Cluster Software (Automated Installer)
• How to Install and Configure Oracle Solaris and Oracle Solaris Cluster Software (Automated Installer)
• How to Prepare the Cluster for Additional Global-Cluster Nodes
• How to Change the Private Network Configuration When Adding Nodes or Private Networks
• Configuring Oracle Solaris Cluster Software on Additional Global-Cluster Nodes (scinstall)
• How to Configure Oracle Solaris Cluster Software on Additional Global-Cluster Nodes (scinstall)
• How to Configure Oracle Solaris Cluster Software on Additional Global-Cluster Nodes (XML File)
• How to Update Quorum Devices After Adding a Node to a Global Cluster
• How to Configure Quorum Devices
• How to Verify the Quorum Configuration and Installation Mode
• How to Change Private Hostnames
• Configuring Network Time Protocol (NTP)
• How to Validate the Cluster
• How to Record Diagnostic Data of the Cluster Configuration

Configuring Oracle Solaris Cluster Software on All Nodes (scinstall)

The scinstall utility runs in two modes of installation, Typical or Custom. For the Typical installation of Oracle Solaris Cluster software, scinstall automatically specifies the following configuration defaults.

Private-network address
  172.16.0.0

Private-network netmask
  255.255.240.0

Cluster-transport adapters
  Exactly two adapters

Cluster-transport switches
  switch1 and switch2

Global fencing
  Enabled

Installation security (DES)
  Limited

Complete one of the following cluster configuration worksheets to plan your Typical mode or Custom mode installation:

How to Configure Oracle Solaris Cluster Software on All Nodes (scinstall)

Perform this procedure from one node of the global cluster to configure Oracle Solaris Cluster software on all nodes of the cluster.


Note - This procedure uses the interactive form of the scinstall command. For information about how to use the noninteractive forms of the scinstall command, such as when developing installation scripts, see the scinstall(1M) man page.


Follow these guidelines to use the interactive scinstall utility in this procedure:

Before You Begin

Perform the following tasks:

  1. If you are using switches in the private interconnect of your new cluster, ensure that Neighbor Discovery Protocol (NDP) is disabled.

    Follow the procedures in the documentation for your switches to determine whether NDP is enabled and to disable NDP.

    During cluster configuration, the software checks that there is no traffic on the private interconnect. If NDP sends any packets to a private adapter while the private interconnect is being checked for traffic, the software assumes that the interconnect is not private and cluster configuration is interrupted. NDP must therefore be disabled during cluster creation.

    After the cluster is established, you can re-enable NDP on the private-interconnect switches if you want to use that feature.

  2. On each node to configure in a cluster, assume the root role.

    Alternatively, if your user account is assigned the System Administrator profile, issue commands as nonroot through a profile shell, or prefix the command with the pfexec command.
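
    For example, a nonroot user who is assigned the System Administrator profile might run a privileged command through pfexec, as in the following illustrative command (which also appears later in this procedure):

      phys-schost$ pfexec svcadm refresh rpc/bind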

  3. Ensure that TCP wrappers for RPC are disabled on all nodes of the cluster.

    The Oracle Solaris TCP wrappers for RPC feature prevents internode communication that is necessary for cluster configuration.

    1. On each node, display the status of TCP wrappers for RPC.

      TCP wrappers are enabled if config/enable_tcpwrappers is set to true, as shown in the following example command output.

      # svccfg -s rpc/bind listprop config/enable_tcpwrappers
      config/enable_tcpwrappers  boolean true
    2. If TCP wrappers for RPC are enabled on a node, disable TCP wrappers and refresh the RPC bind service.
      # svccfg -s rpc/bind setprop config/enable_tcpwrappers = false
      # svcadm refresh rpc/bind
      # svcadm restart rpc/bind
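
      To confirm the change, you can display the property again; the value should now be reported as false:

      # svccfg -s rpc/bind listprop config/enable_tcpwrappers
      config/enable_tcpwrappers  boolean false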
  4. Prepare public-network interfaces.
    1. Create static IP addresses for each public-network interface.
      # ipadm create-ip interface
      # ipadm create-addr -T static -a local=address/prefix-length addrobj
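
      For example, the following commands plumb a hypothetical interface named net0 and assign it the placeholder static address 192.168.10.11/24 through the address object net0/v4static:

      # ipadm create-ip net0
      # ipadm create-addr -T static -a local=192.168.10.11/24 net0/v4static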

      For more information, see How to Configure an IP Interface in Connecting Systems Using Fixed Network Configuration in Oracle Solaris 11.1.

    2. (Optional) Create IPMP groups for public-network interfaces.

      During initial cluster configuration, unless non-link-local IPv6 public network interfaces exist in the cluster, IPMP groups are automatically created based on matching subnets. These groups use transitive probes for interface monitoring and no test addresses are required.

      If these automatically created IPMP groups would not meet your needs, or if IPMP groups would not be created because your configuration includes one or more non-link-local IPv6 public network interfaces, do one of the following:

      • Create the IPMP groups you need before you establish the cluster.
      • After the cluster is established, use the ipadm command to edit the IPMP groups.

      For more information, see Configuring IPMP Groups in Managing Oracle Solaris 11.1 Network Performance.
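
      As a minimal sketch, the following commands create an IPMP group named sc_ipmp0 (a placeholder name) and place the interface net0 in it; adapt the names to your configuration:

      # ipadm create-ipmp sc_ipmp0
      # ipadm add-ipmp -i net0 sc_ipmp0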

  5. From one cluster node, start the scinstall utility.
    phys-schost# scinstall
  6. Type the option number for Create a New Cluster or Add a Cluster Node and press the Return key.
     *** Main Menu ***
    
        Please select from one of the following (*) options:
    
          * 1) Create a new cluster or add a cluster node
          * 2) Print release information for this cluster node
     
          * ?) Help with menu options
          * q) Quit
    
        Option:  1

    The New Cluster and Cluster Node Menu is displayed.

  7. Type the option number for Create a New Cluster and press the Return key.

    The Typical or Custom Mode menu is displayed.

  8. Type the option number for either Typical or Custom and press the Return key.

    The Create a New Cluster screen is displayed. Read the requirements, then press Control-D to continue.

  9. Follow the menu prompts to supply your answers from the configuration planning worksheet.

    The scinstall utility installs and configures all cluster nodes and reboots the cluster. The cluster is established when all nodes have successfully booted into the cluster. Oracle Solaris Cluster installation output is logged in a /var/cluster/logs/install/scinstall.log.N file.

  10. Verify on each node that multiuser services for the Service Management Facility (SMF) are online.

    If services are not yet online for a node, wait until the state changes to online before you proceed to the next step.

    phys-schost# svcs multi-user-server node
    STATE          STIME    FMRI
    online         17:52:55 svc:/milestone/multi-user-server:default
  11. From one node, verify that all nodes have joined the cluster.
    phys-schost# clnode status

    Output resembles the following.

    === Cluster Nodes ===
    
    --- Node Status ---
    
    Node Name                                       Status
    ---------                                       ------
    phys-schost-1                                   Online
    phys-schost-2                                   Online
    phys-schost-3                                   Online

    For more information, see the clnode(1CL) man page.

  12. Take the cluster out of installmode.
    phys-schost# clquorum reset
  13. (Optional) Enable the automatic node reboot feature.

    This feature automatically reboots a node if all monitored shared-disk paths fail, provided that at least one of the disks is accessible from a different node in the cluster.


    Note - At initial configuration time, disk-path monitoring is enabled by default for all discovered devices.


    1. Enable automatic reboot.
      phys-schost# clnode set -p reboot_on_path_failure=enabled
      -p

      Specifies the property to set

      reboot_on_path_failure=enabled

      Enables automatic node reboot if failure of all monitored shared-disk paths occurs.

    2. Verify that automatic reboot on disk-path failure is enabled.
      phys-schost# clnode show
      === Cluster Nodes ===                          
      
      Node Name:                                      node
      …
        reboot_on_path_failure:                          enabled
      …
  14. If you plan to enable RPC use of TCP wrappers, add all clprivnet0 IP addresses to the /etc/hosts.allow file on each cluster node.

    Without this addition to the /etc/hosts.allow file, TCP wrappers prevent internode communication over RPC for cluster administration utilities.

    1. On each node, display the IP addresses for all clprivnet0 devices on the node.
      # /usr/sbin/ipadm show-addr
      ADDROBJ           TYPE     STATE        ADDR
      clprivnet0/N      static   ok           ip-address/netmask-length
    2. On each cluster node, add to the /etc/hosts.allow file the IP addresses of all clprivnet0 devices in the cluster.
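
      For example, if the addresses reported for the clprivnet0 devices were 172.16.4.1 and 172.16.4.2 (placeholder values), the corresponding /etc/hosts.allow entry might look like the following:

      rpcbind: 172.16.4.1 172.16.4.2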
  15. If you intend to use the HA for NFS data service (HA for NFS) on a highly available local file system, exclude from the automounter map all shares that are part of the highly available local file system that is exported by HA for NFS.

    See Administrative Tasks Involving Maps in Managing Network File Systems in Oracle Solaris 11.1 for more information about modifying the automounter map.

Example 3-1 Configuring Oracle Solaris Cluster Software on All Nodes

The following example shows the scinstall progress messages that are logged as scinstall completes configuration tasks on the two-node cluster, schost. The cluster is installed from phys-schost-1 by using the scinstall utility in Typical Mode. The other cluster node is phys-schost-2. The adapter names are net2 and net3. The automatic selection of a quorum device is enabled.

    Log file - /var/cluster/logs/install/scinstall.log.24747

    Configuring global device using lofi on pred1: done
    Starting discovery of the cluster transport configuration.

    The following connections were discovered:

        phys-schost-1:net2  switch1  phys-schost-2:net2
        phys-schost-1:net3  switch2  phys-schost-2:net3

    Completed discovery of the cluster transport configuration.

    Started cluster check on "phys-schost-1".
    Started cluster check on "phys-schost-2".

    cluster check completed with no errors or warnings for "phys-schost-1".
    cluster check completed with no errors or warnings for "phys-schost-2".

    Configuring "phys-schost-2" … done
    Rebooting "phys-schost-2" … done

    Configuring "phys-schost-1" … done
    Rebooting "phys-schost-1" …

Log file - /var/cluster/logs/install/scinstall.log.24747

Troubleshooting

Unsuccessful configuration – If one or more nodes cannot join the cluster, or if the wrong configuration information was specified, first attempt to perform this procedure again. If that does not correct the problem, perform the procedure How to Unconfigure Oracle Solaris Cluster Software to Correct Installation Problems on each misconfigured node to remove it from the cluster configuration. You do not need to uninstall the Oracle Solaris Cluster software packages. Then perform this procedure again.

Next Steps

If you intend to configure any quorum devices in your cluster, go to How to Configure Quorum Devices.

Otherwise, go to How to Verify the Quorum Configuration and Installation Mode.

How to Configure Oracle Solaris Cluster Software on All Nodes (XML)

Perform this procedure to configure a new global cluster by using an XML cluster configuration file. The new cluster can be a duplication of an existing cluster that runs Oracle Solaris Cluster 4.1 software.

This procedure configures the following cluster components:

Before You Begin

Perform the following tasks:

  1. Ensure that the Oracle Solaris Cluster 4.1 software is not yet configured on each potential cluster node.
    1. Assume the root role on a potential node that you want to configure in the new cluster.
    2. Determine whether the Oracle Solaris Cluster software is already configured on the potential node.
      phys-schost# /usr/sbin/clinfo -n
      • If the command returns the following message, proceed to Step c.
        clinfo: node is not configured as part of a cluster: Operation not applicable

        This message indicates that the Oracle Solaris Cluster software is not yet configured on the potential node.

      • If the command returns the node ID number, do not perform this procedure.

        The return of a node ID indicates that the Oracle Solaris Cluster software is already configured on the node.

        If the cluster is running an older version of Oracle Solaris Cluster software and you want to install Oracle Solaris Cluster 4.1 software, instead perform upgrade procedures in Oracle Solaris Cluster Upgrade Guide.

    3. Repeat Step a and Step b on each remaining potential node that you want to configure in the new cluster.

      If the Oracle Solaris Cluster software is not yet configured on any of the potential cluster nodes, proceed to Step 2.

  2. Ensure that TCP wrappers for RPC are disabled on all nodes of the cluster.

    The Oracle Solaris TCP wrappers for RPC feature prevents internode communication that is necessary for cluster configuration.

    1. On each node, display the status of TCP wrappers for RPC.

      TCP wrappers are enabled if config/enable_tcpwrappers is set to true, as shown in the following example command output.

      # svccfg -s rpc/bind listprop config/enable_tcpwrappers
      config/enable_tcpwrappers  boolean true
    2. If TCP wrappers for RPC are enabled on a node, disable TCP wrappers and refresh the RPC bind service.
      # svccfg -s rpc/bind setprop config/enable_tcpwrappers = false
      # svcadm refresh rpc/bind
      # svcadm restart rpc/bind
  3. If you are using switches in the private interconnect of your new cluster, ensure that Neighbor Discovery Protocol (NDP) is disabled.

    Follow the procedures in the documentation for your switches to determine whether NDP is enabled and to disable NDP.

    During cluster configuration, the software checks that there is no traffic on the private interconnect. If NDP sends any packets to a private adapter while the private interconnect is being checked for traffic, the software assumes that the interconnect is not private and cluster configuration is interrupted. NDP must therefore be disabled during cluster creation.

    After the cluster is established, you can re-enable NDP on the private-interconnect switches if you want to use that feature.

  4. If you are duplicating an existing cluster that runs the Oracle Solaris Cluster 4.1 software, use a node in that cluster to create a cluster configuration XML file.
    1. Assume the root role on an active member of the cluster that you want to duplicate.
    2. Export the existing cluster's configuration information to a file.
      phys-schost# cluster export -o clconfigfile
      -o

      Specifies the output destination.

      clconfigfile

      The name of the cluster configuration XML file. The specified file name can be an existing file or a new file that the command will create.

      For more information, see the cluster(1CL) man page.

    3. Copy the configuration file to the potential node from which you will configure the new cluster.

      You can store the file in any directory that is accessible to the other hosts that you will configure as cluster nodes.

  5. Assume the root role on the potential node from which you will configure the new cluster.
  6. Modify or create the cluster configuration XML file as needed.

    Include or modify the values of the XML elements to reflect the cluster configuration that you want to create.

    • If you are duplicating an existing cluster, open the file that you created with the cluster export command.

    • If you are not duplicating an existing cluster, create a new file.

      Base the file on the element hierarchy that is shown in the clconfiguration(5CL) man page. You can store the file in any directory that is accessible to the other hosts that you will configure as cluster nodes.

    • To establish a cluster, the following components must have valid values in the cluster configuration XML file:

      • Cluster name

      • Cluster nodes

      • Cluster transport

    • If you are modifying configuration information that was exported from an existing cluster, some values that you must change to reflect the new cluster, such as node names, are used in the definitions of more than one cluster object.

    See the clconfiguration(5CL) man page for details about the structure and content of the cluster configuration XML file.

  7. Validate the cluster configuration XML file.
    phys-schost# xmllint --valid --noout clconfigfile

    See the xmllint(1) man page for more information.

  8. From the potential node that contains the cluster configuration XML file, create the cluster.
    phys-schost# cluster create -i clconfigfile
    -i clconfigfile

    Specifies the name of the cluster configuration XML file to use as the input source.

  9. Verify on each node that multiuser services for the Service Management Facility (SMF) are online.

    If services are not yet online for a node, wait until the state changes to online before you proceed to the next step.

    phys-schost# svcs multi-user-server node
    STATE          STIME    FMRI
    online         17:52:55 svc:/milestone/multi-user-server:default
  10. From one node, verify that all nodes have joined the cluster.
    phys-schost# clnode status

    Output resembles the following.

    === Cluster Nodes ===
    
    --- Node Status ---
    
    Node Name                                       Status
    ---------                                       ------
    phys-schost-1                                   Online
    phys-schost-2                                   Online
    phys-schost-3                                   Online

    For more information, see the clnode(1CL) man page.

  11. Perform any necessary updates to the Oracle Solaris Cluster software.

    See Chapter 11, Updating Your Software, in Oracle Solaris Cluster System Administration Guide for installation instructions.

  12. If you plan to enable RPC use of TCP wrappers, add all clprivnet0 IP addresses to the /etc/hosts.allow file on each cluster node.

    Without this addition to the /etc/hosts.allow file, TCP wrappers prevent internode communication over RPC for cluster administration utilities.

    1. On each node, display the IP addresses for all clprivnet0 devices on the node.
      # /usr/sbin/ipadm show-addr
      ADDROBJ           TYPE     STATE        ADDR
      clprivnet0/N      static   ok           ip-address/netmask-length
    2. On each cluster node, add to the /etc/hosts.allow file the IP addresses of all clprivnet0 devices in the cluster.
  13. If you intend to use the HA for NFS data service (HA for NFS) on a highly available local file system, exclude from the automounter map all shares that are part of the highly available local file system that is exported by HA for NFS.

    See Administrative Tasks Involving Maps in Managing Network File Systems in Oracle Solaris 11.1 for more information about modifying the automounter map.

  14. To duplicate quorum information from an existing cluster, configure the quorum device by using the cluster configuration XML file.

    You must configure a quorum device if you created a two-node cluster. If you choose not to use the cluster configuration XML file to create a required quorum device, go instead to How to Configure Quorum Devices.

    1. If you are using a quorum server for the quorum device, ensure that the quorum server is set up and running.

      Follow instructions in How to Install and Configure Oracle Solaris Cluster Quorum Server Software.

    2. If you are using a NAS device for the quorum device, ensure that the NAS device is set up and operational.
      1. Observe the requirements for using a NAS device as a quorum device.

        See Oracle Solaris Cluster With Network-Attached Storage Device Manual.

      2. Follow instructions in your device's documentation to set up the NAS device.
    3. Ensure that the quorum configuration information in the cluster configuration XML file reflects valid values for the cluster that you created.
    4. If you made changes to the cluster configuration XML file, validate the file.
      phys-schost# xmllint --valid --noout clconfigfile
    5. Configure the quorum device.
      phys-schost# clquorum add -i clconfigfile device-name
      device-name

      Specifies the name of the device to configure as a quorum device.

  15. Remove the cluster from installation mode.
    phys-schost# clquorum reset
  16. Close access to the cluster configuration by machines that are not configured cluster members.
    phys-schost# claccess deny-all
  17. (Optional) Enable automatic node reboot if all monitored shared-disk paths fail.

    Note - At initial configuration time, disk-path monitoring is enabled by default for all discovered devices.


    1. Enable automatic reboot.
      phys-schost# clnode set -p reboot_on_path_failure=enabled
      -p

      Specifies the property to set

      reboot_on_path_failure=enabled

      Enables automatic node reboot if failure of all monitored shared-disk paths occurs.

    2. Verify that automatic reboot on disk-path failure is enabled.
      phys-schost# clnode show
      === Cluster Nodes ===                          
      
      Node Name:                                      node
      …
        reboot_on_path_failure:                          enabled
      …

Example 3-2 Configuring Oracle Solaris Cluster Software on All Nodes By Using an XML File

The following example duplicates the cluster configuration and quorum configuration of an existing two-node cluster to a new two-node cluster. The new cluster is installed with the Solaris 11.1 OS. The cluster configuration is exported from the existing cluster node, phys-oldhost-1, to the cluster configuration XML file clusterconf.xml. The node names of the new cluster are phys-newhost-1 and phys-newhost-2. The device that is configured as a quorum device in the new cluster is d3.

The prompt name phys-newhost-N in this example indicates that the command is performed on both cluster nodes.

phys-newhost-N# /usr/sbin/clinfo -n
clinfo: node is not configured as part of a cluster: Operation not applicable
 
phys-oldhost-1# cluster export -o clusterconf.xml
Copy clusterconf.xml to phys-newhost-1 and modify the file with valid values
 
phys-newhost-1# xmllint --valid --noout clusterconf.xml
No errors are reported
 
phys-newhost-1# cluster create -i clusterconf.xml
phys-newhost-N# svcs multi-user-server
STATE          STIME    FMRI
online         17:52:55 svc:/milestone/multi-user-server:default
phys-newhost-1# clnode status
Output shows that both nodes are online
 
phys-newhost-1# clquorum add -i clusterconf.xml d3
phys-newhost-1# clquorum reset

Configuring Additional Components

After the cluster is fully established, you can duplicate the configuration of the other cluster components from the existing cluster. If you did not already do so, modify the values of the XML elements that you want to duplicate to reflect the cluster configuration that you are adding the component to. For example, if you are duplicating resource groups, ensure that the <resourcegroupNodeList> entry contains the valid node names for the new cluster, not the node names from the cluster that you duplicated, unless the node names are the same.
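
For example, one quick way to substitute the new node names throughout an exported file is a global replacement, shown here with the host names from Example 3-2 and a placeholder output file name:

phys-newhost-1# sed -e 's/phys-oldhost-1/phys-newhost-1/g' \
    -e 's/phys-oldhost-2/phys-newhost-2/g' \
    clusterconf.xml > clusterconf-new.xml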

To duplicate a cluster component, run the export subcommand of the object-oriented command for the cluster component that you want to duplicate. For more information about the command syntax and options, see the man page for the cluster object that you want to duplicate.
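
For example, the following sketch duplicates resource group configuration; it assumes that the export subcommand writes to a file that the matching create subcommand reads with -i, so verify the exact options in the clresourcegroup(1CL) man page:

phys-oldhost-1# clresourcegroup export -o rgconfig.xml
phys-newhost-1# clresourcegroup create -i rgconfig.xml +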

The following is a list of the cluster components that you can create from a cluster configuration XML file after the cluster is established. The list includes the man page for the command that you use to duplicate each component:

Troubleshooting

Unsuccessful configuration – If one or more nodes cannot join the cluster, or if the wrong configuration information was specified, first attempt to perform this procedure again. If that does not correct the problem, perform the procedure How to Unconfigure Oracle Solaris Cluster Software to Correct Installation Problems on each misconfigured node to remove it from the cluster configuration. You do not need to uninstall the Oracle Solaris Cluster software packages. Then perform this procedure again.

Next Steps

Go to How to Verify the Quorum Configuration and Installation Mode.

Installing and Configuring Oracle Solaris and Oracle Solaris Cluster Software (Automated Installer)

During the scinstall Automated Installer (AI) installation of a cluster, you choose to run installation of the Oracle Solaris software in one of the following ways:

See Installing With the Text Installer in Installing Oracle Solaris 11.1 Systems for more information about interactive installation of Oracle Solaris software.

The scinstall utility runs in two modes of installation, Typical or Custom. For the Typical installation of Oracle Solaris Cluster software, scinstall automatically specifies the following configuration defaults.

Private-network address
  172.16.0.0

Private-network netmask
  255.255.240.0

Cluster-transport adapters
  Exactly two adapters

Cluster-transport switches
  switch1 and switch2

Global fencing
  Enabled

Installation security (DES)
  Limited

Complete one of the following cluster configuration worksheets to plan your Typical mode or Custom mode installation:

How to Install and Configure Oracle Solaris and Oracle Solaris Cluster Software (Automated Installer)

This procedure describes how to set up and use the scinstall(1M) custom Automated Installer installation method. This method installs both Oracle Solaris OS and Oracle Solaris Cluster framework and data services software on all global-cluster nodes in the same operation and establishes the cluster. These nodes can be physical machines or (SPARC only) Oracle VM Server for SPARC I/O domains or guest domains, or a combination of any of these types of nodes.


Note - If your physically clustered machines are configured with Oracle VM Server for SPARC, install the Oracle Solaris Cluster software only in I/O domains or guest domains.


Follow these guidelines to use the interactive scinstall utility in this procedure:

Before You Begin

Perform the following tasks:

  1. Set up your Automated Installer (AI) install server and DHCP server.

    Ensure that the AI install server meets the following requirements.

    • The install server is on the same subnet as the cluster nodes.

    • The install server is not itself a cluster node.

    • The install server runs a release of the Oracle Solaris OS that is supported by the Oracle Solaris Cluster software.

    • Each new cluster node is configured as a custom AI installation client that uses the custom AI directory that you set up for Oracle Solaris Cluster installation.

    Follow the appropriate instructions for your software platform and OS version to set up the AI install server and DHCP server. See Chapter 8, Setting Up an Install Server, in Installing Oracle Solaris 11.1 Systems and Working With DHCP in Oracle Solaris 11.1.
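
    For example, after the setup is complete you can list the install services that exist on the AI install server to confirm that it is operational (the output varies by configuration):

      installserver# installadm list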

  2. On the AI install server, assume the root role.
  3. On the AI install server, install the Oracle Solaris Cluster AI support package.
    1. Ensure that the solaris and ha-cluster publishers are valid.
      installserver# pkg publisher
      PUBLISHER        TYPE     STATUS   URI
      solaris          origin   online   solaris-repository
      ha-cluster       origin   online   ha-cluster-repository
    2. Install the cluster AI support package.
      installserver# pkg install ha-cluster/system/install
  4. On the AI install server, start the scinstall utility.
    installserver# /usr/cluster/bin/scinstall

    The scinstall Main Menu is displayed.

  5. Choose the Install and Configure a Cluster From This Automated Installer Install Server menu item.
     *** Main Menu ***
     
        Please select from one of the following (*) options:
    
          * 1) Install and configure a cluster from this Automated Installer install server
          * 2) Print release information for this Automated Installer install server 
    
          * ?) Help with menu options
          * q) Quit
     
        Option:  1
  6. Follow the menu prompts to supply your answers from the configuration planning worksheet.
  7. To perform any other postinstallation tasks, set up your own AI manifest.

    See Chapter 13, Running a Custom Script During First Boot, in Installing Oracle Solaris 11.1 Systems.

  8. Exit from the AI install server.
  9. If you are using a cluster administrative console, display a console screen for each node in the cluster.
    • If pconsole software is installed and configured on your administrative console, use the pconsole utility to display the individual console screens.

      As the root role, use the following command to start the pconsole utility:

      adminconsole# pconsole host[:port] […]  &

      The pconsole utility also opens a master window from which you can send your input to all individual console windows at the same time.
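
      For example, to open console windows for the three example node names used elsewhere in this chapter:

      adminconsole# pconsole phys-schost-1 phys-schost-2 phys-schost-3 &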

    • If you do not use the pconsole utility, connect to the consoles of each node individually.
  10. Ensure that TCP wrappers for RPC are disabled on all nodes of the cluster.

    The Oracle Solaris TCP wrappers for RPC feature prevents internode communication that is necessary for cluster configuration.

    1. On each node, display the status of TCP wrappers for RPC.

      TCP wrappers are enabled if config/enable_tcpwrappers is set to true, as shown in the following example command output.

      # svccfg -s rpc/bind listprop config/enable_tcpwrappers
      config/enable_tcpwrappers  boolean true
    2. If TCP wrappers for RPC are enabled on a node, disable TCP wrappers and refresh the RPC bind service.
      # svccfg -s rpc/bind setprop config/enable_tcpwrappers = false
      # svcadm refresh rpc/bind
      # svcadm restart rpc/bind
  11. Shut down and boot each node to start the AI installation.

    The Oracle Solaris software is installed with the default configuration.


    Note - You cannot use this method if you want to customize the Oracle Solaris installation. If you choose the Oracle Solaris interactive installation, the Automated Installer is bypassed and Oracle Solaris Cluster software is not installed and configured. To customize Oracle Solaris during installation, instead follow instructions in How to Install Oracle Solaris Software, then install and configure the cluster by following instructions in How to Install Oracle Solaris Cluster Framework and Data Service Software Packages.


    • SPARC:
      1. Shut down each node.
        phys-schost# shutdown -g0 -y -i0
      2. Boot the node with the following command.
        ok boot net:dhcp - install

        Note - Surround the dash (-) in the command with a space on each side.


    • x86:
      1. Reboot the node.
        # reboot -p
      2. During PXE boot, press Control-N.

        The GRUB menu is displayed.

      3. Immediately select the Automated Install entry and press Return.

        Note - If you do not select the Automated Install entry within 20 seconds, installation proceeds using the default interactive text installer method, which will not install and configure the Oracle Solaris Cluster software.


        On each node, a new boot environment (BE) is created and Automated Installer installs the Oracle Solaris OS and Oracle Solaris Cluster software. When the installation is successfully completed, each node is fully installed as a new cluster node. Oracle Solaris Cluster installation output is logged in a /var/cluster/logs/install/scinstall.log.N file on each node.

  12. Verify on each node that multiuser services for the Service Management Facility (SMF) are online.

    If services are not yet online for a node, wait until the state changes to online before you proceed to the next step.

    phys-schost# svcs multi-user-server node
    STATE          STIME    FMRI
    online         17:52:55 svc:/milestone/multi-user-server:default
  13. On each node, activate the installed BE and boot into cluster mode.
    1. Activate the installed BE.
      # beadm activate BE-name
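
      For example, to identify the BE that the AI installation created and then activate it (the BE name solaris-ai is a placeholder):

      # beadm list
      # beadm activate solaris-ai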
    2. Shut down the node.
      # shutdown -y -g0 -i0

      Note - Do not use the reboot or halt command. These commands do not activate a new BE.


    3. Boot the node into cluster mode.
  14. If you intend to use the HA for NFS data service (HA for NFS) on a highly available local file system, exclude from the automounter map all shares that are part of the highly available local file system that is exported by HA for NFS.

    See Administrative Tasks Involving Maps in Managing Network File Systems in Oracle Solaris 11.1 for more information about modifying the automounter map.

  15. x86: Set the default boot file.

    The setting of this value enables you to reboot the node if you are unable to access a login prompt.

    grub edit> kernel /platform/i86pc/kernel/amd64/unix -B $ZFS-BOOTFS -k

    For more information, see How to Boot a System With the Kernel Debugger Enabled (kmdb) in Booting and Shutting Down Oracle Solaris on x86 Platforms.

  16. If you performed a task that requires a cluster reboot, reboot the cluster.

    The following tasks require a reboot:

    • Installing software updates that require a node or cluster reboot

    • Making configuration changes that require a reboot to become active

    1. On one node, assume the root role.
    2. Shut down the cluster.
      phys-schost-1# cluster shutdown -y -g0 cluster-name

      Note - Do not reboot the first-installed node of the cluster until after the cluster is shut down. Until cluster installation mode is disabled, only the first-installed node, which established the cluster, has a quorum vote. In an established cluster that is still in installation mode, if the cluster is not shut down before the first-installed node is rebooted, the remaining cluster nodes cannot obtain quorum. The entire cluster then shuts down.

      Cluster nodes remain in installation mode until the first time that you run the clsetup command. You run this command during the procedure How to Configure Quorum Devices.


    3. Reboot each node in the cluster.

    The cluster is established when all nodes have successfully booted into the cluster. Oracle Solaris Cluster installation output is logged in a /var/cluster/logs/install/scinstall.log.N file.

  17. From one node, verify that all nodes have joined the cluster.
    phys-schost# clnode status

    Output resembles the following.

    === Cluster Nodes ===
    
    --- Node Status ---
    
    Node Name                                       Status
    ---------                                       ------
    phys-schost-1                                   Online
    phys-schost-2                                   Online
    phys-schost-3                                   Online

    For more information, see the clnode(1CL) man page.

  18. If you plan to enable RPC use of TCP wrappers, add all clprivnet0 IP addresses to the /etc/hosts.allow file on each cluster node.

    Without this addition to the /etc/hosts.allow file, TCP wrappers prevent internode communication over RPC for cluster administration utilities.

    1. On each node, display the IP addresses for all clprivnet0 devices on the node.
      # /usr/sbin/ipadm show-addr
      ADDROBJ           TYPE     STATE        ADDR
      clprivnet0/N      static   ok           ip-address/netmask-length
    2. On each cluster node, add to the /etc/hosts.allow file the IP addresses of all clprivnet0 devices in the cluster.
  19. (Optional) On each node, enable automatic node reboot if all monitored shared-disk paths fail.

    Note - At initial configuration time, disk-path monitoring is enabled by default for all discovered devices.


    1. Enable automatic reboot.
      phys-schost# clnode set -p reboot_on_path_failure=enabled
      -p
      Specifies the property to set
      reboot_on_path_failure=enabled

      Enables automatic node reboot if failure of all monitored shared-disk paths occurs.

    2. Verify that automatic reboot on disk-path failure is enabled.
      phys-schost# clnode show
      === Cluster Nodes ===                          
      
      Node Name:                                      node
      …
        reboot_on_path_failure:                          enabled
      …

Next Steps

1. Perform all of the following procedures that are appropriate for your cluster configuration.

2. Configure quorum, if not already configured, and perform postinstallation tasks.

Troubleshooting

Disabled scinstall option – If the AI option of the scinstall command is not preceded by an asterisk, the option is disabled. This condition indicates that AI setup is not complete or that the setup has an error. To correct this condition, first quit the scinstall utility. Repeat Step 1 through Step 7 to correct the AI setup, then restart the scinstall utility.

How to Prepare the Cluster for Additional Global-Cluster Nodes

Perform this procedure on existing global-cluster nodes to prepare the cluster for the addition of new cluster nodes.

Before You Begin

Perform the following tasks:

  1. Add the name of the new node to the cluster's authorized-nodes list.
    1. On any node, assume the root role.
    2. Start the clsetup utility.
      phys-schost# clsetup

      The Main Menu is displayed.

    3. Choose the New Nodes menu item.
    4. Choose the Specify the Name of a Machine Which May Add Itself menu item.
    5. Follow the prompts to add the node's name to the list of recognized machines.

      The clsetup utility displays the message Command completed successfully if the task is completed without error.

    6. Quit the clsetup utility.
  2. If you are adding a node to a single-node cluster, ensure that two cluster interconnects already exist by displaying the interconnect configuration.
    phys-schost# clinterconnect show

    You must have at least two cables or two adapters configured before you can add a node.

    • If the output shows configuration information for two cables or for two adapters, proceed to Step 3.
    • If the output shows no configuration information for either cables or adapters, or shows configuration information for only one cable or adapter, configure new cluster interconnects.
      1. On one node, start the clsetup utility.
        phys-schost# clsetup
      2. Choose the Cluster Interconnect menu item.
      3. Choose the Add a Transport Cable menu item.

        Follow the instructions to specify the name of the node to add to the cluster, the name of a transport adapter, and whether to use a transport switch.

      4. If necessary, repeat Step c to configure a second cluster interconnect.
      5. When finished, quit the clsetup utility.
      6. Verify that the cluster now has two cluster interconnects configured.
        phys-schost# clinterconnect show

        The command output should show configuration information for at least two cluster interconnects.

  3. Ensure that the private-network configuration can support the nodes and private networks that you are adding.
    1. Display the maximum numbers of nodes, private networks, and zone clusters that the current private-network configuration supports.
      phys-schost# cluster show-netprops

      The output looks similar to the following:

      === Private Network ===                        
      
      private_netaddr:                                172.16.0.0
        private_netmask:                                255.255.240.0
        max_nodes:                                      64
        max_privatenets:                                10
        max_zoneclusters:                               12
    2. Determine whether the current private-network configuration can support the increased number of nodes, including non-global zones, and private networks.

Next Steps

Configure Oracle Solaris Cluster software on the new cluster nodes. Go to How to Configure Oracle Solaris Cluster Software on Additional Global-Cluster Nodes (scinstall) or How to Configure Oracle Solaris Cluster Software on Additional Global-Cluster Nodes (XML File).

How to Change the Private Network Configuration When Adding Nodes or Private Networks

Perform this task to change the global-cluster's private IP address range to accommodate an increase in one or more of the following cluster components:

You can also use this procedure to decrease the private IP address range.


Note - This procedure requires you to shut down the entire cluster. If you need to change only the netmask, for example, to add support for zone clusters, do not perform this procedure. Instead, run the following command from a global-cluster node that is running in cluster mode to specify the expected number of zone clusters:

phys-schost# cluster set-netprops num_zoneclusters=N

This command does not require you to shut down the cluster.
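
For example, to allow for as many as four zone clusters (the value 4 is only illustrative):

phys-schost# cluster set-netprops num_zoneclusters=4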


  1. Assume the root role on a node of the cluster.
  2. From one node, start the clsetup utility.
    phys-schost# clsetup

    The clsetup Main Menu is displayed.

  3. Switch each resource group offline.
    1. Choose the Resource Groups menu item.

      The Resource Group Menu is displayed.

    2. Choose the Online/Offline or Switchover a Resource Group menu item.
    3. Follow the prompts to take offline all resource groups and to put them in the unmanaged state.
    4. When all resource groups are offline, type q to return to the Resource Group Menu.
  4. Disable all resources in the cluster.
    1. Choose the Enable/Disable a Resource menu item.
    2. Choose a resource to disable and follow the prompts.
    3. Repeat the previous step for each resource to disable.
    4. When all resources are disabled, type q to return to the Resource Group Menu.
  5. Quit the clsetup utility.
  6. Verify that all resources on all nodes are Offline and that all resource groups are in the Unmanaged state.
    # cluster status -t resource,resourcegroup
    -t

    Limits output to the specified cluster object

    resource

    Specifies resources

    resourcegroup

    Specifies resource groups

  7. From one node, shut down the cluster.
    # cluster shutdown -g0 -y
    -g

    Specifies the wait time in seconds

    -y

    Prevents the prompt that asks you to confirm a shutdown from being issued

  8. Boot each node into noncluster mode.
    • SPARC:
      ok boot -x
    • x86:
      1. In the GRUB menu, use the arrow keys to select the appropriate Oracle Solaris entry and type e to edit its commands.

        For more information about GRUB based booting, see Booting a System in Booting and Shutting Down Oracle Solaris 11.1 Systems.

      2. In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.
      3. Add -x to the command to specify that the system boot into noncluster mode.
      4. Press Enter to accept the change and return to the boot parameters screen.

        The screen displays the edited command.

      5. Type b to boot the node into noncluster mode.

        Note - This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps to again add the -x option to the kernel boot parameter command.


  9. From one node, start the clsetup utility.

    When run in noncluster mode, the clsetup utility displays the Main Menu for noncluster-mode operations.

  10. Choose the Change Network Addressing and Ranges for the Cluster Transport menu item.

    The clsetup utility displays the current private network configuration, then asks if you would like to change this configuration.

  11. To change either the private network IP address or the IP address range, type yes and press the Return key.

    The clsetup utility displays the default private network IP address, 172.16.0.0, and asks if it is okay to accept this default.

  12. Change or accept the private-network IP address.
    • To accept the default private network IP address and proceed to changing the IP address range, type yes and press the Return key.
    • To change the default private network IP address:
      1. Type no in response to the clsetup utility question about whether it is okay to accept the default address, then press the Return key.

        The clsetup utility will prompt for the new private-network IP address.

      2. Type the new IP address and press the Return key.

        The clsetup utility displays the default netmask and then asks if it is okay to accept the default netmask.

  13. Change or accept the default private network IP address range.

    The default netmask is 255.255.240.0. This default IP address range supports up to 64 nodes, 12 zone clusters, and 10 private networks in the cluster.
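
    As a rough check on the arithmetic, the netmask 255.255.240.0 corresponds to a /20 prefix, so the default range 172.16.0.0/20 contains 2^(32-20) = 4096 addresses; that pool is what the cluster software divides into the per-node and per-network subnets that support those maximums.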

    • To accept the default IP address range, type yes and press the Return key.
    • To change the IP address range:
      1. Type no in response to the clsetup utility's question about whether it is okay to accept the default address range, then press the Return key.

        When you decline the default netmask, the clsetup utility prompts you for the number of nodes, private networks, and zone clusters that you expect to configure in the cluster.

      2. Provide the number of nodes, private networks, and zone clusters that you expect to configure in the cluster.

        From these numbers, the clsetup utility calculates two proposed netmasks:

        • The first netmask is the minimum netmask to support the number of nodes, private networks, and zone clusters that you specified.

        • The second netmask supports twice the number of nodes, private networks, and zone clusters that you specified, to accommodate possible future growth.

      3. Specify either of the calculated netmasks, or specify a different netmask that supports the expected number of nodes, private networks, and zone clusters.
  14. Type yes in response to the clsetup utility's question about proceeding with the update.
  15. When finished, exit the clsetup utility.
  16. Reboot each node back into the cluster.
    1. Shut down each node.
      # shutdown -g0 -y
    2. Boot each node into cluster mode.
  17. From one node, start the clsetup utility.
    # clsetup

    The clsetup Main Menu is displayed.

  18. Re-enable all disabled resources.
    1. Choose the Resource Groups menu item.

      The Resource Group Menu is displayed.

    2. Choose the Enable/Disable a Resource menu item.
    3. Choose a resource to enable and follow the prompts.
    4. Repeat for each disabled resource.
    5. When all resources are re-enabled, type q to return to the Resource Group Menu.
  19. Bring each resource group back online.

    If the node contains non-global zones, also bring online any resource groups that are in those zones.

    1. Choose the Online/Offline or Switchover a Resource Group menu item.
    2. Follow the prompts to put each resource group into the managed state and then bring the resource group online.
  20. When all resource groups are back online, exit the clsetup utility.

    Type q to back out of each submenu, or press Control-C.

Next Steps

To add a node to an existing cluster, go to one of the following procedures:

• How to Configure Oracle Solaris Cluster Software on Additional Global-Cluster Nodes (scinstall)
• How to Configure Oracle Solaris Cluster Software on Additional Global-Cluster Nodes (XML File)

Configuring Oracle Solaris Cluster Software on Additional Global-Cluster Nodes (scinstall)

The scinstall utility runs in two modes of installation, Typical or Custom. For the Typical installation of Oracle Solaris Cluster software, scinstall automatically specifies the cluster transport switches as switch1 and switch2.

Complete one of the following configuration planning worksheets. See Planning the Oracle Solaris OS and Planning the Oracle Solaris Cluster Environment for planning guidelines.

How to Configure Oracle Solaris Cluster Software on Additional Global-Cluster Nodes (scinstall)

Perform this procedure to add a new node to an existing global cluster. To use Automated Installer to add a new node, follow the instructions in How to Install and Configure Oracle Solaris and Oracle Solaris Cluster Software (Automated Installer).


Note - This procedure uses the interactive form of the scinstall command. For information about how to use the noninteractive forms of the scinstall command, such as when developing installation scripts, see the scinstall(1M) man page.


Follow these guidelines to use the interactive scinstall utility in this procedure:

Before You Begin

Perform the following tasks:

  1. On the cluster node to configure, assume the root role.
  2. Ensure that TCP wrappers for RPC are disabled on all nodes of the cluster.

    The Oracle Solaris TCP wrappers for RPC feature prevents internode communication that is necessary for cluster configuration.

    1. On each node, display the status of TCP wrappers for RPC.

      TCP wrappers are enabled if config/enable_tcpwrappers is set to true, as shown in the following example command output.

      # svccfg -s rpc/bind listprop config/enable_tcpwrappers
      config/enable_tcpwrappers  boolean true
    2. If TCP wrappers for RPC are enabled on a node, disable TCP wrappers and refresh the RPC bind service.
      # svccfg -s rpc/bind setprop config/enable_tcpwrappers = false
      # svcadm refresh rpc/bind
      # svcadm restart rpc/bind
  3. Prepare public-network interfaces.
    1. Create static IP addresses for each public-network interface.
      # ipadm create-ip interface
      # ipadm create-addr -T static -a local=address/prefix-length addrobj

      For more information, see How to Configure an IP Interface in Connecting Systems Using Fixed Network Configuration in Oracle Solaris 11.1.

    2. (Optional) Create IPMP groups for public-network interfaces.

      During initial cluster configuration, unless non-link-local IPv6 public network interfaces exist in the cluster, IPMP groups are automatically created based on matching subnets. These groups use transitive probes for interface monitoring and no test addresses are required.

      If these automatically created IPMP groups would not meet your needs, or if IPMP groups would not be created because your configuration includes one or more non-link-local IPv6 public network interfaces, do one of the following:

      • Create the IPMP groups you need before you establish the cluster.
      • After the cluster is established, use the ipadm command to edit the IPMP groups.

      For more information, see Configuring IPMP Groups in Managing Oracle Solaris 11.1 Network Performance.

  4. Start the scinstall utility.
    phys-schost-new# /usr/cluster/bin/scinstall

    The scinstall Main Menu is displayed.

  5. Type the option number for Create a New Cluster or Add a Cluster Node and press the Return key.
      *** Main Menu ***
    
        Please select from one of the following (*) options:
    
          * 1) Create a new cluster or add a cluster node
          * 2) Print release information for this cluster node
    
          * ?) Help with menu options
          * q) Quit
    
        Option:  1

    The New Cluster and Cluster Node Menu is displayed.

  6. Type the option number for Add This Machine as a Node in an Existing Cluster and press the Return key.
  7. Follow the menu prompts to supply your answers from the configuration planning worksheet.

    The scinstall utility configures the node and boots the node into the cluster.

  8. Repeat this procedure on any other node to add to the cluster until all additional nodes are fully configured.
  9. Verify on each node that multiuser services for the Service Management Facility (SMF) are online.

    If services are not yet online for a node, wait until the state changes to online before you proceed to the next step.

    phys-schost# svcs multi-user-server node
    STATE          STIME    FMRI
    online         17:52:55 svc:/milestone/multi-user-server:default
  10. From an active cluster member, prevent any other nodes from joining the cluster.
    phys-schost# claccess deny-all

    Alternatively, you can use the clsetup utility. See How to Add a Node to an Existing Cluster in Oracle Solaris Cluster System Administration Guide for procedures.

  11. From one node, verify that all nodes have joined the cluster.
    phys-schost# clnode status

    Output resembles the following.

    === Cluster Nodes ===
    
    --- Node Status ---
    
    Node Name                                       Status
    ---------                                       ------
    phys-schost-1                                   Online
    phys-schost-2                                   Online
    phys-schost-3                                   Online

    For more information, see the clnode(1CL) man page.

  12. If TCP wrappers are used in the cluster, ensure that the clprivnet0 IP addresses for all added nodes are added to the /etc/hosts.allow file on each cluster node.

    Without this addition to the /etc/hosts.allow file, TCP wrappers prevent internode communication over RPC for cluster administration utilities.

    1. On each node, display the IP addresses for all clprivnet0 devices.
      # /usr/sbin/ipadm show-addr
      ADDROBJ           TYPE     STATE        ADDR
      clprivnet0/N      static   ok           ip-address/netmask-length
    2. On each node, edit the /etc/hosts.allow file with the IP addresses of all clprivnet0 devices in the cluster.
  13. Verify that all necessary software updates are installed.
    phys-schost# pkg list
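
    For example, to narrow the listing to the cluster packages, or to check whether newer package versions are available without installing them (the -n option performs a dry run):

    phys-schost# pkg list 'ha-cluster/*'
    phys-schost# pkg update -n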
  14. (Optional) Enable automatic node reboot if all monitored shared-disk paths fail.

    Note - At initial configuration time, disk-path monitoring is enabled by default for all discovered devices.


    1. Enable automatic reboot.
      phys-schost# clnode set -p reboot_on_path_failure=enabled
      -p

      Specifies the property to set

      reboot_on_path_failure=enabled

      Enables automatic node reboot if failure of all monitored shared-disk paths occurs.

    2. Verify that automatic reboot on disk-path failure is enabled.
      phys-schost# clnode show
      === Cluster Nodes ===                          
      
      Node Name:                                      node
      …
        reboot_on_path_failure:                          enabled
      …
  15. If you intend to use the HA for NFS data service (HA for NFS) on a highly available local file system, exclude from the automounter map all shares that are part of the highly available local file system that is exported by HA for NFS.

    See Administrative Tasks Involving Maps in Managing Network File Systems in Oracle Solaris 11.1 for more information about modifying the automounter map.

Example 3-3 Configuring Oracle Solaris Cluster Software on an Additional Node

The following example shows the node phys-schost-3 added to the cluster schost. The sponsoring node is phys-schost-1.

Adding node "phys-schost-3" to the cluster configuration ... done
Adding adapter "net2" to the cluster configuration ... done
Adding adapter "net3" to the cluster configuration ... done
Adding cable to the cluster configuration ... done
Adding cable to the cluster configuration ... done

Copying the config from "phys-schost-1" ... done

Copying the postconfig file from "phys-schost-1" if it exists ... done
Setting the node ID for "phys-schost-3" ... done (id=1)

Verifying the major number for the "did" driver from "phys-schost-1" ... done
Initializing NTP configuration ... done

Updating nsswitch.conf ... done

Adding cluster node entries to /etc/inet/hosts ... done


Configuring IP Multipathing groups in "/etc/hostname.<adapter>" files

Updating "/etc/hostname.hme0".

Verifying that power management is NOT configured ... done

Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ... done

Ensure network routing is disabled ... done
Network routing has been disabled on this node by creating /etc/notrouter.
Having a cluster node act as a router is not supported by Oracle Solaris Cluster.
Please do not re-enable network routing.
Updating file ("ntp.conf.cluster") on node phys-schost-1 ... done
Updating file ("hosts") on node phys-schost-1 ... done

Log file - /var/cluster/logs/install/scinstall.log.6952

Rebooting ... 

Troubleshooting

Unsuccessful configuration – If one or more nodes cannot join the cluster, or if the wrong configuration information was specified, first attempt to perform this procedure again. If that does not correct the problem, perform the procedure How to Unconfigure Oracle Solaris Cluster Software to Correct Installation Problems on each misconfigured node to remove it from the cluster configuration. You do not need to uninstall the Oracle Solaris Cluster software packages. Then perform this procedure again.

Next Steps

If you added a node to an existing cluster that uses a quorum device, go to How to Update Quorum Devices After Adding a Node to a Global Cluster.

Otherwise, go to How to Verify the Quorum Configuration and Installation Mode.

How to Configure Oracle Solaris Cluster Software on Additional Global-Cluster Nodes (XML File)

Perform this procedure to configure a new global-cluster node by using an XML cluster configuration file. The new node can be a duplicate of an existing cluster node that runs the Oracle Solaris Cluster 4.1 software.

This procedure configures the following cluster components on the new node:

Before You Begin

Perform the following tasks:

  1. Ensure that the Oracle Solaris Cluster software is not yet configured on the potential node that you want to add to a cluster.
    1. Assume the root role on the potential node.
    2. Determine whether the Oracle Solaris Cluster software is configured on the potential node.
      phys-schost-new# /usr/sbin/clinfo -n
      • If the command fails, go to Step 2.

        The Oracle Solaris Cluster software is not yet configured on the node. You can add the potential node to the cluster.

      • If the command returns a node ID number, the Oracle Solaris Cluster software is already configured on the node.

        Before you can add the node to a different cluster, you must remove the existing cluster configuration information.

    3. Boot the potential node into noncluster mode.
      • SPARC:
        ok boot -x
      • x86:
        1. In the GRUB menu, use the arrow keys to select the appropriate Oracle Solaris entry and type e to edit its commands.

          For more information about GRUB-based booting, see Booting a System in Booting and Shutting Down Oracle Solaris 11.1 Systems.

        2. In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.
        3. Add -x to the command to specify that the system boot into noncluster mode.
        4. Press Enter to accept the change and return to the boot parameters screen.

          The screen displays the edited command.

        5. Type b to boot the node into noncluster mode.

          Note - This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps to again add the -x option to the kernel boot parameter command.


    4. Unconfigure the Oracle Solaris Cluster software from the potential node.
      phys-schost-new# /usr/cluster/bin/clnode remove
  2. If you are duplicating a node that runs the Oracle Solaris Cluster 4.1 software, create a cluster configuration XML file.
    1. Assume the root role on the cluster node that you want to duplicate.
    2. Export the existing node's configuration information to a file.
      phys-schost# clnode export -o clconfigfile
      -o

      Specifies the output destination.

      clconfigfile

      The name of the cluster configuration XML file. The specified file name can be an existing file or a new file that the command will create.

      For more information, see the clnode(1CL) man page.

    3. Copy the cluster configuration XML file to the potential node that you will configure as a new cluster node.
  3. Assume the root role on the potential node.
  4. Ensure that TCP wrappers for RPC are disabled on all nodes of the cluster.

    The Oracle Solaris TCP wrappers for RPC feature prevents internode communication that is necessary for cluster configuration.

    1. On each node, display the status of TCP wrappers for RPC.

      TCP wrappers are enabled if config/enable_tcpwrappers is set to true, as shown in the following example command output.

      # svccfg -s rpc/bind listprop config/enable_tcpwrappers
      config/enable_tcpwrappers  boolean true
    2. If TCP wrappers for RPC are enabled on a node, disable TCP wrappers and refresh the RPC bind service.
      # svccfg -s rpc/bind setprop config/enable_tcpwrappers = false
      # svcadm refresh rpc/bind
      # svcadm restart rpc/bind
  5. Modify or create the cluster configuration XML file as needed.
    • If you are duplicating an existing cluster node, open the file that you created with the clnode export command.

    • If you are not duplicating an existing cluster node, create a new file.

      Base the file on the element hierarchy that is shown in the clconfiguration(5CL) man page. You can store the file in any directory.

    • Modify the values of the XML elements to reflect the node configuration that you want to create.

      See the clconfiguration(5CL) man page for details about the structure and content of the cluster configuration XML file.

  6. Validate the cluster configuration XML file.
    phys-schost-new# xmllint --valid --noout clconfigfile
  7. Configure the new cluster node.
    phys-schost-new# clnode add -n sponsor-node -i clconfigfile
    -n sponsor-node

    Specifies the name of an existing cluster member to act as the sponsor for the new node.

    -i clconfigfile

    Specifies the name of the cluster configuration XML file to use as the input source.

  8. If TCP wrappers are used in the cluster, ensure that the clprivnet0 IP addresses for all added nodes are added to the /etc/hosts.allow file on each cluster node.

    Without this addition to the /etc/hosts.allow file, TCP wrappers prevent internode communication over RPC for cluster administration utilities.

    1. On each node, display the IP addresses for all clprivnet0 devices.
      # /usr/sbin/ipadm show-addr
      ADDROBJ           TYPE     STATE        ADDR
      clprivnet0/N      static   ok           ip-address/netmask-length
    2. On each node, edit the /etc/hosts.allow file with the IP addresses of all clprivnet0 devices in the cluster.
  9. (Optional) Enable automatic node reboot if all monitored shared-disk paths fail.

    Note - At initial configuration time, disk-path monitoring is enabled by default for all discovered devices.


    1. Enable automatic reboot.
      phys-schost# clnode set -p reboot_on_path_failure=enabled
      -p

      Specifies the property to set

      reboot_on_path_failure=enabled

      Enables automatic node reboot if failure of all monitored shared-disk paths occurs.

    2. Verify that automatic reboot on disk-path failure is enabled.
      phys-schost# clnode show
      === Cluster Nodes ===                          
      
      Node Name:                                      node
      …
        reboot_on_path_failure:                          enabled
      …

Troubleshooting

Unsuccessful configuration – If one or more nodes cannot join the cluster, or if the wrong configuration information was specified, first attempt to perform this procedure again. If that does not correct the problem, perform the procedure How to Unconfigure Oracle Solaris Cluster Software to Correct Installation Problems on each misconfigured node to remove it from the cluster configuration. You do not need to uninstall the Oracle Solaris Cluster software packages. Then perform this procedure again.

Next Steps

If you added a node to a cluster that uses a quorum device, go to How to Update Quorum Devices After Adding a Node to a Global Cluster.

Otherwise, go to How to Verify the Quorum Configuration and Installation Mode.

How to Update Quorum Devices After Adding a Node to a Global Cluster

If you added a node to a global cluster, you must update the configuration information of the quorum devices regardless of whether you use shared disks, NAS devices, a quorum server, or a combination. To do this, you remove all quorum devices and update the global-devices namespace. You can optionally reconfigure any quorum devices that you still want to use. This update registers the new node with each quorum device, which can then recalculate its vote count based on the new number of nodes in the cluster.

Any newly configured SCSI quorum devices will be set to SCSI-3 reservations.

Before You Begin

Ensure that you have completed installation of the Oracle Solaris Cluster software on the added node.

  1. On any node of the cluster, assume the root role.
  2. Ensure that all cluster nodes are online.
    phys-schost# cluster status -t node
  3. View the current quorum configuration.

    Command output lists each quorum device and each node. The following example output shows the current SCSI quorum device, d3.

    phys-schost# clquorum list
    d3
    …
  4. Note the name of each quorum device that is listed.
  5. Remove the original quorum device.

    Perform this step for each quorum device that is configured.

    phys-schost# clquorum remove device-name
    device-name

    Specifies the name of the quorum device.

  6. Verify that all original quorum devices are removed.

    If the removal of the quorum devices was successful, no quorum devices are listed.

    phys-schost# clquorum status
  7. Update the global-devices namespace.
    phys-schost# cldevice populate

    Note - This step is necessary to prevent possible node panic.


  8. On each node, verify that the cldevice populate command has completed processing before you attempt to add a quorum device.

    The cldevice populate command executes remotely on all nodes, even though the command is issued from just one node. To determine whether the cldevice populate command has completed processing, run the following command on each node of the cluster:

    phys-schost# ps -ef | grep scgdevs
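    As a convenience, the following illustrative loop, run on each node, polls until no scgdevs process remains; it is a sketch rather than part of the documented procedure.

    phys-schost# while ps -ef | grep '[s]cgdevs' > /dev/null; do sleep 5; done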
  9. (Optional) Add a quorum device.

    You can configure the same device that was originally configured as the quorum device or choose a new shared device to configure.

    1. (Optional) If you want to choose a new shared device to configure as a quorum device, display all devices that the system checks and choose the shared device from the output.
      phys-schost# cldevice list -v

      Output resembles the following:

      DID Device          Full Device Path
      ----------          ----------------
      d1                  phys-schost-1:/dev/rdsk/c0t0d0
      d2                  phys-schost-1:/dev/rdsk/c0t6d0
      d3                  phys-schost-2:/dev/rdsk/c1t1d0
      d3                  phys-schost-1:/dev/rdsk/c1t1d0 
      …
    2. Configure the shared device as a quorum device.
      phys-schost# clquorum add -t type device-name
      -t type

      Specifies the type of quorum device. If this option is not specified, the default type shared_disk is used.

    3. Repeat for each quorum device that you want to configure.
    4. Verify the new quorum configuration.
      phys-schost# clquorum list

      Output should list each quorum device and each node.

Example 3-4 Updating SCSI Quorum Devices After Adding a Node to a Two-Node Cluster

The following example identifies the original SCSI quorum device d2, removes that quorum device, lists the available shared devices, updates the global-device namespace, configures d3 as a new SCSI quorum device, and verifies the new device.

phys-schost# clquorum list
d2
phys-schost-1
phys-schost-2

phys-schost# clquorum remove d2
phys-schost# clquorum status
…
--- Quorum Votes by Device ---

Device Name       Present      Possible      Status
-----------       -------      --------      ------

phys-schost# cldevice list -v
DID Device          Full Device Path
----------          ----------------
…
d3                  phys-schost-2:/dev/rdsk/c1t1d0
d3                  phys-schost-1:/dev/rdsk/c1t1d0
…
phys-schost# cldevice populate
phys-schost# ps -ef | grep scgdevs
phys-schost# clquorum add d3
phys-schost# clquorum list
d3
phys-schost-1
phys-schost-2

Next Steps

Go to How to Verify the Quorum Configuration and Installation Mode.

How to Configure Quorum Devices


Note - If you chose automatic quorum configuration when you established the cluster, you do not need to configure quorum devices and should not perform this procedure. Instead, proceed to How to Verify the Quorum Configuration and Installation Mode.


Perform this procedure one time only, after the new cluster is fully formed. Use this procedure to assign quorum votes and then to remove the cluster from installation mode.

Before You Begin

  1. If both of the following conditions apply, ensure that the correct prefix length is set for the public-network addresses. A sketch of one way to adjust the prefix length follows this step.
    • You intend to use a quorum server.

    • The public network uses variable-length subnet masking, also called classless inter-domain routing (CIDR).

    # ipadm show-addr
        ADDROBJ           TYPE     STATE        ADDR
        lo0/v4            static   ok           127.0.0.1/8
        ipmp0/v4          static   ok           10.134.94.58/24 

    Note - If you use a quorum server but the public network uses classful subnets as defined in RFC 791, you do not need to perform this step.
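
    If the prefix length must be corrected, the following minimal sketch assumes the address object ipmp0/v4 shown above and a required prefix length of 24; verify the property name and value against the ipadm(1M) man page before you change any address property.

    # ipadm set-addrprop -p prefixlen=24 ipmp0/v4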


  2. On one node, assume the root role.

    Alternatively, if your user account is assigned the System Administrator profile, issue commands as nonroot through a profile shell, or prefix the command with the pfexec command.

  3. Ensure that all cluster nodes are online.
    phys-schost# cluster status -t node
  4. To use a shared disk as a quorum device, verify device connectivity to the cluster nodes and choose the device to configure.
    1. From one node of the cluster, display a list of all the devices that the system checks.

      You do not need to be logged in as the root role to run this command.

      phys-schost-1# cldevice list -v

      Output resembles the following:

      DID Device          Full Device Path
      ----------          ----------------
      d1                  phys-schost-1:/dev/rdsk/c0t0d0
      d2                  phys-schost-1:/dev/rdsk/c0t6d0
      d3                  phys-schost-2:/dev/rdsk/c1t1d0
      d3                  phys-schost-1:/dev/rdsk/c1t1d0
      …
    2. Ensure that the output shows all connections between cluster nodes and storage devices.
    3. Determine the global device ID of each shared disk that you are configuring as a quorum device.

      Note - Any shared disk that you choose must be qualified for use as a quorum device. See Quorum Devices for further information about choosing quorum devices.


      Use the cldevice output from Step a to identify the device ID of each shared disk that you are configuring as a quorum device. For example, the output in Step a shows that global device d3 is shared by phys-schost-1 and phys-schost-2.

  5. To use a shared disk that does not support the SCSI protocol, ensure that fencing is disabled for that shared disk.
    1. Display the fencing setting for the individual disk.
      phys-schost# cldevice show device
      
      === DID Device Instances ===
      DID Device Name:                                      /dev/did/rdsk/dN
      …
        default_fencing:                                     nofencing
      • If fencing for the disk is set to nofencing or nofencing-noscrub, fencing is disabled for that disk. Go to Step 6.
      • If fencing for the disk is set to pathcount or scsi, disable fencing for the disk. Skip to Step c.
      • If fencing for the disk is set to global, determine whether fencing is also disabled globally. Proceed to Step b.

        Alternatively, you can simply disable fencing for the individual disk, which overrides the value of the global_fencing property for that disk. Skip to Step c to disable fencing for the individual disk.

    2. Determine whether fencing is disabled globally.
      phys-schost# cluster show -t global
      
      === Cluster ===
      Cluster name:                                         cluster
      …
         global_fencing:                                      nofencing
      • If global fencing is set to nofencing or nofencing-noscrub, fencing is disabled for the shared disk whose default_fencing property is set to global. Go to Step 6.
      • If global fencing is set to pathcount or prefer3, disable fencing for the shared disk. Proceed to Step c.

      Note - If an individual disk has its default_fencing property set to global, the fencing for that individual disk is disabled only while the cluster-wide global_fencing property is set to nofencing or nofencing-noscrub. If the global_fencing property is changed to a value that enables fencing, then fencing becomes enabled for all disks whose default_fencing property is set to global.


    3. Disable fencing for the shared disk.
      phys-schost# cldevice set \
      -p default_fencing=nofencing-noscrub device
    4. Verify that fencing for the shared disk is now disabled.
      phys-schost# cldevice show device
  6. Start the clsetup utility.
    phys-schost# clsetup

    The Initial Cluster Setup screen is displayed.


    Note - If the Main Menu is displayed instead, the initial cluster setup was already successfully performed. Skip to Step 11.


  7. Indicate whether you want to add any quorum disks.
    • If your cluster is a two-node cluster, you must configure at least one shared quorum device. Type Yes to configure one or more quorum devices.
    • If your cluster has three or more nodes, quorum device configuration is optional.
      • Type No if you do not want to configure additional quorum devices. Then skip to Step 10.
      • Type Yes to configure additional quorum devices.
  8. Specify what type of device you want to configure as a quorum device.
    shared_disk

      Shared LUNs from the following:

      • Shared SCSI disk

      • Serial Advanced Technology Attachment (SATA) storage

      • Sun ZFS Storage Appliance

    quorum_server

      Quorum server
  9. Specify the name of the device to configure as a quorum device and provide any required additional information.
    • For a quorum server, also specify the following information:

      • The IP address of the quorum server host

      • The port number that is used by the quorum server to communicate with the cluster nodes

  10. Type Yes to verify that it is okay to reset installmode.

    After the clsetup utility sets the quorum configurations and vote counts for the cluster, the message Cluster initialization is complete is displayed. The utility returns you to the Main Menu.

  11. Quit the clsetup utility.

Next Steps

Verify the quorum configuration and that installation mode is disabled. Go to How to Verify the Quorum Configuration and Installation Mode.

Troubleshooting

Interrupted clsetup processing – If the quorum setup process is interrupted or fails to be completed successfully, rerun clsetup.

Changes to quorum vote count – If you later increase or decrease the number of node attachments to a quorum device, the quorum vote count is not automatically recalculated. You can reestablish the correct quorum vote by removing each quorum device and then adding it back into the configuration, one quorum device at a time. For a two-node cluster, temporarily add a new quorum device before you remove and add back the original quorum device. Then remove the temporary quorum device. See the procedure “How to Modify a Quorum Device Node List” in Chapter 6, Administering Quorum, in Oracle Solaris Cluster System Administration Guide.

Unreachable quorum device – If you see messages on the cluster nodes that a quorum device is unreachable or if you see failures of cluster nodes with the message CMM: Unable to acquire the quorum device, there might be a problem with the quorum device or the path to it. Check that both the quorum device and the path to it are functional.

If the problem persists, use a different quorum device. Or, if you want to use the same quorum device, increase the quorum timeout to a high value, as follows:


Note - For Oracle Real Application Clusters (Oracle RAC), do not change the default quorum timeout of 25 seconds. In certain split-brain scenarios, a longer timeout period might lead to the failure of Oracle RAC VIP failover, due to the VIP resource timing out. If the quorum device being used does not conform to the default 25-second timeout, use a different quorum device.
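
As a hedged sketch only, raising the timeout typically involves setting a quorum acquisition tunable in the /etc/system file on each node and then rebooting that node; the tunable name and value shown are assumptions that you must verify against the Oracle Solaris Cluster administration documentation for your release.

* Hypothetical /etc/system entry; confirm the tunable name and choose a suitable value
set cl_haci:qd_acquisition_timer=700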


How to Verify the Quorum Configuration and Installation Mode

Perform this procedure to verify that the quorum configuration was completed successfully and that cluster installation mode is disabled.

You do not need to assume the root role to run these commands.

  1. From any global-cluster node, verify the device and node quorum configurations.
    phys-schost$ clquorum list

    Output lists each quorum device and each node.

  2. From any node, verify that cluster installation mode is disabled.
    phys-schost$ cluster show -t global | grep installmode
      installmode:                                    disabled

    Cluster installation and creation are complete.

Next Steps

Determine from the following list the next task to perform that applies to your cluster configuration. If you need to perform more than one task from this list, go to the first of those tasks in this list.

See Also

Make a backup of your cluster configuration.

An archived backup of your cluster configuration facilitates easier recovery of your cluster configuration. For more information, see How to Back Up the Cluster Configuration in Oracle Solaris Cluster System Administration Guide.

How to Change Private Hostnames

Perform this task if you do not want to use the default private hostnames, clusternodenodeID-priv (where nodeID is the node's ID number, for example clusternode1-priv), that are assigned during Oracle Solaris Cluster software installation.


Note - Do not perform this procedure after applications and data services have been configured and have been started. Otherwise, an application or data service might continue to use the old private hostname after the hostname is renamed, which would cause hostname conflicts. If any applications or data services are running, stop them before you perform this procedure.


Perform this procedure on one active node of the cluster.

  1. Assume the root role on a global-cluster node.
  2. Start the clsetup utility.
    phys-schost# clsetup

    The clsetup Main Menu is displayed.

  3. Type the option number for Private Hostnames and press the Return key.

    The Private Hostname Menu is displayed.

  4. Type the option number for Change a Node Private Hostname and press the Return key.
  5. Follow the prompts to change the private hostname.

    Repeat for each private hostname to change.

  6. Verify the new private hostnames.
    phys-schost# clnode show -t node | grep privatehostname
      privatehostname:                                clusternode1-priv
      privatehostname:                                clusternode2-priv
      privatehostname:                                clusternode3-priv

Next Steps

Update the NTP configuration with the changed private hostnames. Go to How to Update NTP After Changing a Private Hostname.

Configuring Network Time Protocol (NTP)

This section contains the following procedures:

How to Use Your Own /etc/inet/ntp.conf File


Note - If you installed your own /etc/inet/ntp.conf file before you installed the Oracle Solaris Cluster software, you do not need to perform this procedure. Proceed to How to Validate the Cluster.


  1. Assume the root role on a cluster node.
  2. Add your /etc/inet/ntp.conf file to each node of the cluster.
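    For example, you might distribute the file with scp from the node that holds your master copy; the node names shown are hypothetical, and the command assumes that root can use scp between the nodes.

    # for node in phys-schost-1 phys-schost-2 phys-schost-3; do scp /etc/inet/ntp.conf $node:/etc/inet/ntp.conf; done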
  3. On each node, determine the state of the NTP service.
    phys-schost# svcs svc:/network/ntp:default
  4. Start the NTP daemon on each node.
    • If the NTP service is disabled, enable the service.
      phys-schost# svcadm enable svc:/network/ntp:default
    • If the NTP service is online, restart the service.
      phys-schost# svcadm restart svc:/network/ntp:default

Next Steps

Determine from the following list the next task to perform that applies to your cluster configuration. If you need to perform more than one task from this list, go to the first of those tasks in this list.

How to Install NTP After Adding a Node to a Single-Node Cluster

When you add a node to a single-node cluster, you must ensure that the NTP configuration file that you use is copied to the original cluster node as well as to the new node.

  1. Assume the root role on a cluster node.
  2. Copy the /etc/inet/ntp.conf and /etc/inet/ntp.conf.sc files from the added node to the original cluster node.

    These files were created on the added node when it was configured with the cluster.
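    A minimal sketch, run from the added node and assuming that the original cluster node is named phys-schost-1 (substitute your node name) and that scp is permitted between the nodes:

    # scp /etc/inet/ntp.conf /etc/inet/ntp.conf.sc phys-schost-1:/etc/inet/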

  3. On the original cluster node, create a symbolic link named /etc/inet/ntp.conf.include that points to the /etc/inet/ntp.conf.sc file.
    phys-schost# ln -s /etc/inet/ntp.conf.sc /etc/inet/ntp.conf.include
  4. On each node, determine the state of the NTP service.
    phys-schost# svcs svc:/network/ntp:default
  5. Start the NTP daemon on each node.
    • If the NTP service is disabled, enable the service.
      phys-schost# svcadm enable svc:/network/ntp:default
    • If the NTP service is online, restart the service.
      phys-schost# svcadm restart svc:/network/ntp:default

Next Steps

Determine from the following list the next task to perform that applies to your cluster configuration. If you need to perform more than one task from this list, go to the first of those tasks in this list.

How to Update NTP After Changing a Private Hostname

  1. Assume the root role on a cluster node.
  2. On each node of the cluster, update the /etc/inet/ntp.conf.sc file with the changed private hostname.
  3. On each node, determine the state of the NTP service.
    phys-schost# svcs svc:/network/ntp:default
  4. Start the NTP daemon on each node.
    • If the NTP service is disabled, enable the service.
      phys-schost# svcadm enable svc:/network/ntp:default
    • If the NTP service is online, restart the service.
      phys-schost# svcadm restart svc:/network/ntp:default

Next Steps

Determine from the following list the next task to perform that applies to your cluster configuration. If you need to perform more than one task from this list, go to the first of those tasks in this list.

How to Validate the Cluster

After you complete all configuration of the cluster, use the cluster check command to validate the cluster configuration and functionality. For more information, see the cluster(1CL) man page.


Tip - For ease of future reference or troubleshooting, for each validation that you run, use the -o outputdir option to specify a subdirectory for log files. Reuse of an existing subdirectory name will remove all existing files in the subdirectory. Therefore, to ensure that log files are available for future reference, specify a unique subdirectory name for each cluster check that you run.
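
For example, one way to guarantee a unique subdirectory for each run is to embed a timestamp in the directory name; the base path shown is illustrative only.

phys-schost# cluster check -v -o /var/cluster/logs/check/$(date +%Y%m%d-%H%M%S)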


Before You Begin

Ensure that you have completed the installation and configuration of all hardware and software components in the cluster, including firmware and software updates.

  1. Assume the root role on a node of the cluster.
  2. Ensure that you have the most current checks.
    1. Go to the Patches & Updates tab of My Oracle Support.
    2. In the Advanced Search, select Solaris Cluster as the Product and type check in the Description field.

      The search locates Oracle Solaris Cluster software updates that contain checks.

    3. Apply any software updates that are not already installed on your cluster.
  3. Run basic validation checks.
    phys-schost# cluster check -v -o outputdir
    -v

    Verbose mode.

    -o outputdir

    Redirects output to the outputdir subdirectory.

    The command runs all available basic checks. No cluster functionality is affected.

  4. Run interactive validation checks.
    phys-schost# cluster check -v -k interactive -o outputdir
    -k interactive

    Specifies running interactive validation checks

    The command runs all available interactive checks and prompts you for needed information about the cluster. No cluster functionality is affected.

  5. Run functional validation checks.
    1. List all available functional checks in nonverbose mode.
      phys-schost# cluster list-checks -k functional
    2. Determine which functional checks perform actions that would interfere with cluster availability or services in a production environment.

      For example, a functional check might trigger a node panic or a failover to another node.

      phys-schost# cluster list-checks -v -C check-ID
      -C check-ID

      Specifies a specific check.

    3. If the functional check that you want to perform might interrupt cluster functioning, ensure that the cluster is not in production.
    4. Start the functional check.
      phys-schost# cluster check -v -k functional -C check-ID -o outputdir
      -k functional

      Specifies running functional validation checks

      Respond to the prompts from the check to confirm that the check should run, and provide any requested information or perform any requested actions.

    5. Repeat Step c and Step d for each remaining functional check to run.

      Note - For record-keeping purposes, specify a unique outputdir subdirectory name for each check you run. If you reuse an outputdir name, output for the new check overwrites the existing contents of the reused outputdir subdirectory.


Example 3-5 Listing Interactive Validation Checks

The following example lists all interactive checks that are available to run on the cluster. Example output shows a sampling of possible checks; actual available checks vary for each configuration.

# cluster list-checks -k interactive
 Some checks might take a few moments to run (use -v to see progress)...
 I6994574  :   (Moderate)   Fix for GLDv3 interfaces on cluster transport vulnerability applied?

Example 3-6 Running a Functional Validation Check

The following example first shows the verbose listing of functional checks. The verbose description is then listed for the check F6968101, which indicates that the check would disrupt cluster services. The cluster is taken out of production. The functional check is then run with verbose output logged to the funct.test.F6968101.12Jan2011 subdirectory. Example output shows a sampling of possible checks; actual available checks vary for each configuration.

# cluster list-checks -k functional
 F6968101  :   (Critical)   Perform resource group switchover
 F6984120  :   (Critical)   Induce cluster transport network failure - single adapter.
 F6984121  :   (Critical)   Perform cluster shutdown
 F6984140  :   (Critical)   Induce node panic
…

# cluster list-checks -v -C F6968101
 F6968101: (Critical) Perform resource group switchover
Keywords: SolarisCluster3.x, functional
Applicability: Applicable if multi-node cluster running live.
Check Logic: Select a resource group and destination node. Perform 
'/usr/cluster/bin/clresourcegroup switch' on specified resource group 
either to specified node or to all nodes in succession.
Version: 1.2
Revision Date: 12/10/10 

Take the cluster out of production

# cluster check -k functional -C F6968101 -o funct.test.F6968101.12Jan2011
F6968101 
  initializing...
  initializing xml output...
  loading auxiliary data...
  starting check run...
     pschost1, pschost2, pschost3, pschost4:     F6968101.... starting:  
Perform resource group switchover           


  ============================================================

   >>> Functional Check <<<

    'Functional' checks exercise cluster behavior. It is recommended that you
    do not run this check on a cluster in production mode. It is recommended
    that you have access to the system console for each cluster node and
    observe any output on the consoles while the check is executed.

    If the node running this check is brought down during execution the check
    must be rerun from this same node after it is rebooted into the cluster in
    order for the check to be completed.

    Select 'continue' for more details on this check.

          1) continue
          2) exit

          choice: 1


  ============================================================

   >>> Check Description <<<
…
Follow onscreen directions

Next Steps

Before you put the cluster into production, make a baseline recording of the cluster configuration for future diagnostic purposes. Go to How to Record Diagnostic Data of the Cluster Configuration.

How to Record Diagnostic Data of the Cluster Configuration

After you finish configuring the global cluster but before you put it into production, use the Oracle Explorer utility to record baseline information about the cluster. This data can be used if you need to troubleshoot the cluster in the future.

  1. Assume the root role.
  2. Install the Oracle Explorer software if it is not already installed.

    The Services Tools Bundle contains the Oracle Explorer packages SUNWexplo and SUNWexplu. See http://www.oracle.com/us/support/systems/premier/services-tools-bundle-sun-systems-163717.html for software download and installation information.

  3. Run the explorer utility on each node in the cluster.

    Use the appropriate command for your platform. For example, to collect information on a Sun Fire T1000 server from Oracle, run the following command:

    # explorer -i -w default,Tx000

    For more information, see the explorer(1M) man page in the /opt/SUNWexplo/man/man1m/ directory and the Oracle Explorer Data Collector User's Guide, which is available through Note 1153444.1 on My Oracle Support:

    https://support.oracle.com

    The explorer output file is saved in the /opt/SUNWexplo/output/ directory as explorer.hostid.hostname-date.tar.gz.

  4. Save the files to a location that you can access if the entire cluster is down.
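    For example, you might copy the archives to an administrative host outside the cluster; the user name, host name, and destination directory shown are hypothetical.

    phys-schost# scp /opt/SUNWexplo/output/explorer.*.tar.gz admin@archive-host:/export/cluster-baselines/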
  5. Send all explorer files to the Oracle Explorer database for your geographic location.

    Follow the procedures in Oracle Explorer Data Collector User's Guide to use FTP or HTTPS to submit Oracle Explorer files.

    The Oracle Explorer database makes your explorer output available to Oracle technical support if the data is needed to help diagnose a technical problem with your cluster.