Oracle® Solaris Cluster Software Installation Guide

Updated: September 2014, E39580-02
 
 

Configuring Oracle Solaris Cluster Software on Additional Global-Cluster Nodes (scinstall)

The scinstall utility runs in two installation modes, Typical and Custom. In a Typical installation of Oracle Solaris Cluster software, scinstall automatically specifies switch1 and switch2 as the cluster transport switches.

Complete one of the following configuration planning worksheets. See Planning the Oracle Solaris OS and Planning the Oracle Solaris Cluster Environment for planning guidelines.

  • Typical Mode Worksheet – If you will use Typical mode and accept all defaults, complete the following worksheet.

    Sponsoring Node
      What is the name of the sponsoring node?
      (Choose any node that is active in the cluster.)
      Answer:

    Cluster Name
      What is the name of the cluster that you want the node to join?
      Answer:

    Check
      Do you want to run the cluster check validation utility?
      Answer: Yes | No

    Autodiscovery of Cluster Transport
      Do you want to use autodiscovery to configure the cluster transport?
      (If no, supply the additional transport information requested below.)
      Answer: Yes | No

    Point-to-Point Cables
      Does the node that you are adding to the cluster make this a two-node cluster?
      Answer: Yes | No
      Does the cluster use switches?
      Answer: Yes | No

    Cluster Switches
      If used, what are the names of the two switches?
      (Defaults: switch1 and switch2)
      First:
      Second:

    Cluster Transport Adapters and Cables
      What are the transport adapter names?
      First:
      Second:
      Where does each transport adapter connect to (a switch or another adapter)?
      (Switch defaults: switch1 and switch2)
      First:
      Second:
      For transport switches, do you want to use the default port name?
      First: Yes | No
      Second: Yes | No
      If no, what is the name of the port that you want to use?
      First:
      Second:

    Automatic Reboot
      Do you want scinstall to automatically reboot the node after installation?
      Answer: Yes | No
  • Custom Mode Worksheet – If you will use Custom mode and customize the configuration data, complete the following worksheet.

    Sponsoring Node
      What is the name of the sponsoring node?
      (Choose any node that is active in the cluster.)
      Answer:

    Cluster Name
      What is the name of the cluster that you want the node to join?
      Answer:

    Check
      Do you want to run the cluster check validation utility?
      Answer: Yes | No

    Autodiscovery of Cluster Transport
      Do you want to use autodiscovery to configure the cluster transport?
      (If no, supply the additional transport information requested below.)
      Answer: Yes | No

    Point-to-Point Cables
      Does the node that you are adding to the cluster make this a two-node cluster?
      Answer: Yes | No
      Does the cluster use switches?
      Answer: Yes | No

    Cluster Switches
      If used, what are the names of the transport switches?
      (Defaults: switch1 and switch2)
      First:
      Second:

    Cluster Transport Adapters and Cables
      What are the transport adapter names?
      First:
      Second:
      Where does each transport adapter connect to (a switch or another adapter)?
      (Switch defaults: switch1 and switch2)
      First:
      Second:
      If connecting to a transport switch, do you want to use the default port name?
      First: Yes | No
      Second: Yes | No
      If no, what is the name of the port that you want to use?
      First:
      Second:

    Automatic Reboot
      Do you want scinstall to automatically reboot the node after installation?
      Answer: Yes | No

How to Configure Oracle Solaris Cluster Software on Additional Global-Cluster Nodes (scinstall)

Perform this procedure to add a new node to an existing global cluster. To use Automated Installer to add a new node, follow the instructions in How to Install and Configure Oracle Solaris and Oracle Solaris Cluster Software (IPS Repositories).

This procedure uses the interactive form of the scinstall command. For information about how to use the noninteractive forms of the scinstall command, such as when developing installation scripts, see the scinstall(1M) man page.
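
For example, a node addition like the one in Example 3-3 might be scripted with the noninteractive form sketched below. The option letters follow the scinstall(1M) man page, but the cluster, node, adapter, and switch names here are placeholders; verify the exact syntax against the man page for your release.

phys-schost-new# /usr/cluster/bin/scinstall -i \
    -C schost \
    -N phys-schost-1 \
    -A trtype=dlpi,name=net2 -A trtype=dlpi,name=net3 \
    -B type=switch,name=switch1 -B type=switch,name=switch2 \
    -m endpoint=:net2,endpoint=switch1 \
    -m endpoint=:net3,endpoint=switch2

In this sketch, -C names the cluster to join, -N names the sponsoring node, each -A declares a transport adapter, each -B declares a transport switch, and each -m cables an adapter endpoint to a switch.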

    Follow these guidelines to use the interactive scinstall utility in this procedure:

  • Interactive scinstall enables you to type ahead. Therefore, do not press the Return key more than once if the next menu screen does not appear immediately.

  • Unless otherwise noted, you can press Control-D to return either to the start of a series of related questions or to the Main Menu.

  • Default answers or answers to previous sessions are displayed in brackets ([ ]) at the end of a question. Press Return to enter the response that is in brackets without typing it.

Before You Begin

Perform the following tasks:

  1. On the cluster node to configure, assume the root role.
  2. Ensure that TCP wrappers for RPC are disabled on all nodes of the cluster.

    The Oracle Solaris TCP wrappers for RPC feature prevents internode communication that is necessary for cluster configuration.

    1. On each node, display the status of TCP wrappers for RPC.

      TCP wrappers are enabled if config/enable_tcpwrappers is set to true, as shown in the following example command output.

      # svccfg -s rpc/bind listprop config/enable_tcpwrappers
      config/enable_tcpwrappers  boolean true
    2. If TCP wrappers for RPC are enabled on a node, disable TCP wrappers, then refresh and restart the RPC bind service.
      # svccfg -s rpc/bind setprop config/enable_tcpwrappers = false
      # svcadm refresh rpc/bind
      # svcadm restart rpc/bind
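
      The check and the fix can also be combined in a single conditional, as in the following convenience sketch. It uses only the standard svcprop, svccfg, and svcadm commands shown above; adapt it to your own scripting practices.

      # [ "$(svcprop -p config/enable_tcpwrappers rpc/bind)" = "true" ] && \
        svccfg -s rpc/bind setprop config/enable_tcpwrappers = false && \
        svcadm refresh rpc/bind && svcadm restart rpc/bind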
  3. Prepare public-network interfaces.
    1. Create static IP addresses for each public-network interface.
      # ipadm create-ip interface
      # ipadm create-addr -T static -a local=address/prefix-length addrobj

      For more information, see How to Configure an IPv4 Interface in Configuring and Administering Network Components in Oracle Solaris 11.2.
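
      For example, the following commands configure a hypothetical interface net0 with an example address from the documentation range; substitute your own interface name, address object, and address.

      # ipadm create-ip net0
      # ipadm create-addr -T static -a local=192.0.2.41/24 net0/v4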

    2. (Optional) Create IPMP groups for public-network interfaces.

      During initial cluster configuration, unless non-link-local IPv6 public-network interfaces exist in the cluster, IPMP groups are automatically created based on matching subnets. These groups use transitive probes for interface monitoring, and no test addresses are required.

      If these automatically created IPMP groups would not meet your needs, or if IPMP groups would not be created because your configuration includes one or more non-link-local IPv6 public-network interfaces, do one of the following:

      • Create the IPMP groups you need before you establish the cluster.
      • After the cluster is established, use the ipadm command to edit the IPMP groups.

      For more information, see Configuring IPMP Groups in Administering TCP/IP Networks, IPMP, and IP Tunnels in Oracle Solaris 11.2.
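
      As a sketch, a manual IPMP group can be created with the following ipadm subcommands. The group name sc_ipmp0 and interface net0 are placeholders; use names appropriate to your configuration.

      # ipadm create-ipmp sc_ipmp0
      # ipadm add-ipmp -i net0 sc_ipmp0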

  4. Start the scinstall utility.
    phys-schost-new# /usr/cluster/bin/scinstall

    The scinstall Main Menu is displayed.

  5. Type the option number for Create a New Cluster or Add a Cluster Node and press the Return key.
      *** Main Menu ***
    
    Please select from one of the following (*) options:
    
    * 1) Create a new cluster or add a cluster node
    * 2) Print release information for this cluster node
    
    * ?) Help with menu options
    * q) Quit
    
    Option:  1

    The New Cluster and Cluster Node Menu is displayed.

  6. Type the option number for Add This Machine as a Node in an Existing Cluster and press the Return key.
  7. Follow the menu prompts to supply your answers from the configuration planning worksheet.

    The scinstall utility configures the node and boots the node into the cluster.

  8. Repeat this procedure on each additional node that you want to add to the cluster, until all additional nodes are fully configured.
  9. Verify on each node that multiuser services for the Service Management Facility (SMF) are online.

    If services are not yet online for a node, wait until the state changes to online before you proceed to the next step.

    phys-schost# svcs multi-user-server node
    STATE          STIME    FMRI
    online         17:52:55 svc:/milestone/multi-user-server:default
  10. From an active cluster member, prevent any other nodes from joining the cluster.
    phys-schost# claccess deny-all

    Alternatively, you can use the clsetup utility. See How to Add a Node to an Existing Cluster or Zone Cluster in Oracle Solaris Cluster System Administration Guide for procedures.
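
    If you later add more nodes, reopen access from an active cluster member. The following usage sketch is based on the claccess(1CL) man page; allow -h admits a single named machine, and allow-all reopens access to all machines.

    phys-schost# claccess allow -h hostname
    phys-schost# claccess allow-all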

  11. From one node, verify that all nodes have joined the cluster.
    phys-schost# clnode status

    Output resembles the following.

    === Cluster Nodes ===
    
    --- Node Status ---
    
    Node Name                                       Status
    ---------                                       ------
    phys-schost-1                                   Online
    phys-schost-2                                   Online
    phys-schost-3                                   Online

    For more information, see the clnode(1CL) man page.

  12. If TCP wrappers are used in the cluster, ensure that the clprivnet0 IP addresses for all added nodes are added to the /etc/hosts.allow file on each cluster node.

    Without this addition to the /etc/hosts.allow file, TCP wrappers prevent internode communication over RPC for cluster administration utilities.

    1. On each node, display the IP addresses for all clprivnet0 devices.
      # /usr/sbin/ipadm show-addr
      ADDROBJ           TYPE     STATE        ADDR
      clprivnet0/N      static   ok           ip-address/netmask-length
    2. On each node, edit the /etc/hosts.allow file with the IP addresses of all clprivnet0 devices in the cluster.
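
      For example, on a three-node cluster whose clprivnet0 devices have the hypothetical addresses 172.16.4.1 through 172.16.4.3, each node's /etc/hosts.allow file might contain the following entry; substitute the addresses that ipadm reports on your cluster.

      rpcbind : 172.16.4.1 172.16.4.2 172.16.4.3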
  13. Verify that all necessary software updates are installed.
    phys-schost# pkg list
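
    To list only those installed packages that have newer versions available, you can add the -u option of the standard pkg(1) command:

    phys-schost# pkg list -u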
  14. (Optional) Enable automatic node reboot if all monitored shared-disk paths fail.

    Note -  At initial configuration time, disk-path monitoring is enabled by default for all discovered devices.
    1. Enable automatic reboot.
      phys-schost# clnode set -p reboot_on_path_failure=enabled
      -p

      Specifies the property to set.

      reboot_on_path_failure=enabled

      Enables automatic node reboot if failure of all monitored shared-disk paths occurs.

    2. Verify that automatic reboot on disk-path failure is enabled.
      phys-schost# clnode show
      === Cluster Nodes ===
      
      Node Name:                                      node
      …
      reboot_on_path_failure:                          enabled
      …
  15. If you intend to use the HA for NFS data service (HA for NFS) on a highly available local file system, exclude from the automounter map all shares that are part of the highly available local file system that is exported by HA for NFS.

    See Administrative Tasks Involving Maps in Managing Network File Systems in Oracle Solaris 11.2 for more information about modifying the automounter map.
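
    As a hypothetical example, if HA for NFS exports users' home directories from a highly available local file system, you would comment out or remove the corresponding auto_home entry in the /etc/auto_master map on each node, as in this sketch where the entry has already been disabled:

    phys-schost# grep home /etc/auto_master
    #/home          auto_home       -nobrowse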

Example 3-3  Configuring Oracle Solaris Cluster Software on an Additional Node

The following example shows the node phys-schost-3 added to the cluster schost. The sponsoring node is phys-schost-1.

Adding node "phys-schost-3" to the cluster configuration ... done
Adding adapter "net2" to the cluster configuration ... done
Adding adapter "net3" to the cluster configuration ... done
Adding cable to the cluster configuration ... done
Adding cable to the cluster configuration ... done

Copying the config from "phys-schost-1" ... done

Copying the postconfig file from "phys-schost-1" if it exists ... done
Setting the node ID for "phys-schost-3" ... done (id=3)

Verifying the major number for the "did" driver from "phys-schost-1" ... done
Initializing NTP configuration ... done

Updating nsswitch.conf ... done

Adding cluster node entries to /etc/inet/hosts ... done


Configuring IP Multipathing groups in "/etc/hostname.<adapter>" files

Updating "/etc/hostname.hme0".

Verifying that power management is NOT configured ... done

Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ... done

Ensure network routing is disabled ... done
Network routing has been disabled on this node by creating /etc/notrouter.
Having a cluster node act as a router is not supported by Oracle Solaris Cluster.
Please do not re-enable network routing.
Updating file ("ntp.conf.cluster") on node phys-schost-1 ... done
Updating file ("hosts") on node phys-schost-1 ... done

Log file - /var/cluster/logs/install/scinstall.log.6952

Rebooting ... 

Troubleshooting

Unsuccessful configuration – If one or more nodes cannot join the cluster, or if the wrong configuration information was specified, first attempt to perform this procedure again. If that does not correct the problem, perform the procedure How to Unconfigure Oracle Solaris Cluster Software to Correct Installation Problems on each misconfigured node to remove it from the cluster configuration. You do not need to uninstall the Oracle Solaris Cluster software packages. Then perform this procedure again.

Next Steps

If you added a node to an existing cluster that uses a quorum device, go to How to Update Quorum Devices After Adding a Node to a Global Cluster.

Otherwise, go to How to Verify the Quorum Configuration and Installation Mode.