How to Configure Oracle Solaris Cluster Software on All Nodes (scinstall)

Perform the following tasks:

  • Ensure that the Oracle Solaris OS is installed to support the Oracle Solaris Cluster software.

    If the Oracle Solaris software is already installed on the node, you must ensure that the Oracle Solaris installation meets the requirements for the Oracle Solaris Cluster software and any other software that you intend to install on the cluster. See How to Install Oracle Solaris Software for more information about installing the Oracle Solaris software to meet Oracle Solaris Cluster software requirements.

  • SPARC: If you are configuring Oracle VM Server for SPARC I/O domains or guest domains as cluster nodes, ensure that the Oracle VM Server for SPARC software is installed on each physical machine and that the domains meet Oracle Solaris Cluster requirements. See How to Install Oracle VM Server for SPARC Software and Create Domains.

  • Ensure that Oracle Solaris Cluster software packages and updates are installed on each node. See How to Install Oracle Solaris Cluster Software (pkg).

  • Ensure that any adapters that you want to use as tagged VLAN adapters are configured and that you have their VLAN IDs.
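
    For example, one quick way to review the existing VLAN links and their VLAN IDs on a node is the following illustrative command (the link names and IDs on your systems will differ):

    # dladm show-vlan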

  • Have available your completed Typical Mode or Custom Mode installation worksheet. See Configuring Oracle Solaris Cluster Software on All Nodes (scinstall).

Perform this procedure from one node of the global cluster to configure Oracle Solaris Cluster software on all nodes of the cluster.

This procedure uses the interactive form of the scinstall command. For information about how to use the noninteractive forms of the scinstall command, such as when developing installation scripts, see the scinstall(8) man page.

Note:

Alternatively, you can use the Oracle Solaris Cluster Manager browser interface to perform this task. The cluster configuration wizard can be launched only on a cluster that has not already been configured; you cannot launch it on a node that is already part of a cluster. The clauth command controls the network access policies for machines that are to be configured as nodes of a new cluster. For more information about the clauth command, see the clauth(8CL) man page.

To use Oracle Solaris Cluster Manager to configure the cluster, perform steps 1 through 5 in this section. Then, from a host that runs Oracle Solaris Cluster Manager, log in to the control node and configure the cluster. For Oracle Solaris Cluster Manager login instructions, see How to Access Oracle Solaris Cluster Manager in Administering an Oracle Solaris Cluster 4.4 Configuration. After you log in, if the node is not yet configured as part of a cluster, the wizard displays a screen with a Configure button. Click Configure to launch the cluster configuration wizard.

Follow these guidelines to use the interactive scinstall utility in this procedure:

  • Interactive scinstall enables you to type ahead. Therefore, do not press the Return key more than once if the next menu screen does not appear immediately.

  • Unless otherwise noted, you can press Control-D to return to either the start of a series of related questions or to the Main Menu.

  • Default answers or answers to previous sessions are displayed in brackets ([ ]) at the end of a question. Press Return to enter the response that is in brackets without typing it.

  1. If you are using switches in the private interconnect of your new cluster, ensure that Neighbor Discovery Protocol (NDP) is disabled.

    Follow the procedures in the documentation for your switches to determine whether NDP is enabled and to disable NDP.

    During cluster configuration, the software checks that there is no traffic on the private interconnect. If NDP sends any packets to a private adapter while the private interconnect is being checked for traffic, the software assumes that the interconnect is not private and cluster configuration is interrupted. NDP must therefore be disabled during cluster creation.

    After the cluster is established, you can re-enable NDP on the private-interconnect switches if you want to use that feature.

  2. On each node to configure in a cluster, assume the root role.

    Alternatively, if your user account is assigned the System Administrator profile, you can issue commands as a non-root user through a profile shell, or prefix individual commands with the pfexec command.
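
    For example, an account that is assigned the required rights profile could prefix an individual command from this procedure with pfexec, as in this illustrative invocation:

    phys-schost$ pfexec scinstall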

  3. Ensure that TCP wrappers for RPC are disabled on all nodes of the cluster.

    The Oracle Solaris TCP wrappers for RPC feature prevents internode communication that is necessary for cluster configuration.

    1. On each node, display the status of TCP wrappers for RPC.

      TCP wrappers are enabled if config/enable_tcpwrappers is set to true, as shown in the following example command output.

      # svccfg -s rpc/bind listprop config/enable_tcpwrappers
      config/enable_tcpwrappers  boolean true
    2. If TCP wrappers for RPC are enabled on a node, disable TCP wrappers and refresh the RPC bind service.
      # svccfg -s rpc/bind setprop config/enable_tcpwrappers = false
      # svcadm refresh rpc/bind
  4. Prepare public-network interfaces.
    1. Ensure that static IP addresses for each public-network interface exist.
      # ipadm

      If static IP addresses for each public-network interface do not exist, create them.

      # ipadm create-ip interface
      # ipadm create-addr -T static -a local=address/prefix-length addrobj

      For more information, see How to Configure an IPv4 Interface in Configuring and Managing Network Components in Oracle Solaris 11.4.
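
      For example, the following hypothetical commands configure a static address on an interface named net0 (substitute your own interface name, address, and prefix length):

      # ipadm create-ip net0
      # ipadm create-addr -T static -a local=192.0.2.10/24 net0/v4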

    2. Create IPMP groups for public-network interfaces.

      During initial cluster configuration, unless non-link-local IPv6 public network interfaces exist in the cluster, IPMP groups are automatically created based on matching subnets. These groups use transitive probes for interface monitoring and no test addresses are required.

      If these automatically created IPMP groups would not meet your needs, or if IPMP groups would not be created because your configuration includes one or more non-link-local IPv6 public network interfaces, do one of the following:

      • Create the IPMP groups you need before you establish the cluster.

      • After the cluster is established, use the ipadm command to edit the IPMP groups.

      For more information, see Configuring IPMP Groups in Administering TCP/IP Networks, IPMP, and IP Tunnels in Oracle Solaris 11.4.
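
      For example, to manually create an IPMP group that contains the interfaces net0 and net1 (hypothetical group and interface names), you might run:

      # ipadm create-ipmp sc_ipmp0
      # ipadm add-ipmp -i net0 -i net1 sc_ipmp0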

    3. Create trunk and DLMP link aggregations and VNICs that are directly backed by link aggregations for public-network interfaces.
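
      For example, the following illustrative commands create a DLMP link aggregation over net0 and net1 and a VNIC that is backed by that aggregation (all link names are placeholders):

      # dladm create-aggr -m dlmp -l net0 -l net1 aggr0
      # dladm create-vnic -l aggr0 vnic0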
  5. Authorize acceptance of cluster configuration commands by the control node.
    1. Determine which system to use to issue the cluster creation command.

      This system is the control node.

    2. On all systems that you will configure in the cluster, other than the control node, authorize acceptance of commands from the control node.
      phys-schost# clauth enable -n control-node

      If you want to use the des (Diffie-Hellman) authentication protocol instead of the sys (unix) protocol, include the -p des option in the command.

      phys-schost# clauth enable -p des -n control-node
  6. From one cluster node, start the scinstall utility.

    phys-schost# scinstall
  7. Type the option number for Create a New Cluster or Add a Cluster Node and press the Return key.
     *** Main Menu ***
    
    Please select from one of the following (*) options:
    
    * 1) Create a new cluster or add a cluster node
    * 2) Print release information for this cluster node
    
    * ?) Help with menu options
    * q) Quit
    
    Option:  1

    The New Cluster and Cluster Node Menu is displayed.

  8. Type the option number for Create a New Cluster and press the Return key.

    The Typical or Custom Mode menu is displayed.

  9. Type the option number for either Typical or Custom and press the Return key.

    The Create a New Cluster screen is displayed. Read the requirements, then press Control-D to continue.

  10. Follow the menu prompts to supply your answers from the configuration planning worksheet.

    The scinstall utility installs and configures all cluster nodes and reboots the cluster. The cluster is established when all nodes have successfully booted into the cluster. Oracle Solaris Cluster installation output is logged in a /var/cluster/logs/install/scinstall.log.N file.

  11. Verify on each node that multiuser services for the Service Management Facility (SMF) are online.

    If services are not yet online for a node, wait until the state changes to online before you proceed to the next step.

    phys-schost# svcs multi-user-server node
    STATE          STIME    FMRI
    online         17:52:55 svc:/milestone/multi-user-server:default
  12. From one node, verify that all nodes have joined the cluster.
    phys-schost# clnode status

    Output resembles the following.

    === Cluster Nodes ===
    
    --- Node Status ---
    
    Node Name                                       Status
    ---------                                       ------
    phys-schost-1                                   Online
    phys-schost-2                                   Online
    phys-schost-3                                   Online

    For more information, see the clnode(8CL) man page.

  13. Take the cluster out of installmode.
    phys-schost# clquorum reset
  14. Enable the automatic node reboot feature.

    This feature automatically reboots a node if all monitored shared-disk paths fail, provided that at least one of the disks is accessible from a different node in the cluster.

    Note:

    At initial configuration time, disk-path monitoring is enabled by default for all discovered devices.
    1. Enable automatic reboot.
      phys-schost# clnode set -p reboot_on_path_failure=enabled +
      -p

      Specifies the property to set.

      reboot_on_path_failure=enabled

      Enables automatic node reboot if failure of all monitored shared-disk paths occurs.

    2. Verify that automatic reboot on disk-path failure is enabled.
      phys-schost# clnode show
      === Cluster Nodes ===
      
      Node Name:                                      node
      …
      reboot_on_path_failure:                          enabled
      …
  15. If you plan to enable RPC use of TCP wrappers, add all clprivnet0 IP addresses to the /etc/hosts.allow file on each cluster node.

    Without this addition to the /etc/hosts.allow file, TCP wrappers prevent internode communication over RPC for cluster administration utilities.

    1. On each node, display the IP addresses for all clprivnet0 devices on the node.

      # /usr/sbin/ipadm show-addr
      ADDROBJ           TYPE     STATE        ADDR
      clprivnet0/N      static   ok           ip-address/netmask-length
    2. On each cluster node, add to the /etc/hosts.allow file the IP addresses of all clprivnet0 devices in the cluster.
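
      For example, if the previous sub-step reported the clprivnet0 addresses 172.16.4.1 and 172.16.4.2 (placeholder values), the /etc/hosts.allow entry might look like the following, assuming rpcbind is the daemon name to allow:

      rpcbind : 172.16.4.1 172.16.4.2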
  16. If you intend to use the HA for NFS data service (HA for NFS) on a highly available local file system, exclude from the automounter map all shares that are part of the highly available local file system that is exported by HA for NFS.

    See Administrative Tasks for Autofs Maps in Managing Network File Systems in Oracle Solaris 11.4 for more information about modifying the automounter map.

Example 3-1 Configuring Oracle Solaris Cluster Software on All Nodes

The following example shows the scinstall progress messages that are logged as scinstall completes configuration tasks on the two-node cluster, schost. The cluster is installed from phys-schost-1 by using the scinstall utility in Typical Mode. The other cluster node is phys-schost-2. The adapter names are net2 and net3. The automatic selection of a quorum device is enabled.

Log file - /var/cluster/logs/install/scinstall.log.24747

Configuring global device using lofi on pred1: done
Starting discovery of the cluster transport configuration.

The following connections were discovered:

phys-schost-1:net2  switch1  phys-schost-2:net2
phys-schost-1:net3  switch2  phys-schost-2:net3

Completed discovery of the cluster transport configuration.

Started cluster check on "phys-schost-1".
Started cluster check on "phys-schost-2".

cluster check completed with no errors or warnings for "phys-schost-1".
cluster check completed with no errors or warnings for "phys-schost-2".

Configuring "phys-schost-2" … done
Rebooting "phys-schost-2" … done

Configuring "phys-schost-1" … done
Rebooting "phys-schost-1" …

Log file - /var/cluster/logs/install/scinstall.log.24747

Troubleshooting

Unsuccessful configuration – If one or more nodes cannot join the cluster, or if the wrong configuration information was specified, first attempt to perform this procedure again. If that does not correct the problem, perform the procedure How to Unconfigure Oracle Solaris Cluster Software to Correct Installation Problems on each misconfigured node to remove it from the cluster configuration. You do not need to uninstall the Oracle Solaris Cluster software packages. Then perform this procedure again.

Next Steps

If you intend to configure any quorum devices in your cluster, go to How to Configure Quorum Devices.

Otherwise, go to How to Verify the Quorum Configuration and Installation Mode.