Sun Cluster Software Installation Guide for Solaris OS

How to Configure Sun Cluster Software on Additional Global-Cluster Nodes (scinstall)

Perform this procedure to add a new node to an existing global cluster. To use JumpStart to add a new node instead, follow the procedures in How to Install Solaris and Sun Cluster Software (JumpStart).


Note –

This procedure uses the interactive form of the scinstall command. To use the noninteractive forms of the scinstall command, such as when developing installation scripts, see the scinstall(1M) man page.

Ensure that Sun Cluster software packages are installed on the node, either manually or by using the silent-mode form of the Java ES installer program, before you run the scinstall command. For information about running the Java ES installer program from an installation script, see Chapter 5, Installing in Silent Mode, in Sun Java Enterprise System 5 Update 1 Installation Guide for UNIX.
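
For example, one quick way to confirm that the framework packages are present before you run the scinstall command is to query the package database. This is only a sketch; the package names SUNWscr and SUNWscu (core framework packages) are used for illustration, so substitute the full package list that your release requires. The pkginfo command reports an error for any package that is not installed.


    phys-schost-new# pkginfo SUNWscr SUNWscu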


Before You Begin

Perform the following tasks:

Follow these guidelines to use the interactive scinstall utility in this procedure:

  1. On the cluster node to configure, become superuser.

  2. Start the scinstall utility.


    phys-schost-new# /usr/cluster/bin/scinstall
    

    The scinstall Main Menu is displayed.

  3. Type the option number for Create a New Cluster or Add a Cluster Node and press the Return key.


      *** Main Menu ***
    
        Please select from one of the following (*) options:
    
          * 1) Create a new cluster or add a cluster node
            2) Configure a cluster to be JumpStarted from this install server
            3) Manage a dual-partition upgrade
            4) Upgrade this cluster node
          * 5) Print release information for this cluster node
    
          * ?) Help with menu options
          * q) Quit
    
        Option:  1
    

    The New Cluster and Cluster Node Menu is displayed.

  4. Type the option number for Add This Machine as a Node in an Existing Cluster and press the Return key.

  5. Follow the menu prompts to supply your answers from the configuration planning worksheet.

    The scinstall utility configures the node and boots the node into the cluster.

  6. Unload the Sun Java Availability Suite DVD-ROM from the DVD-ROM drive.

    1. To ensure that the DVD-ROM is not being used, change to a directory that does not reside on the DVD-ROM.
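
      For example, change to the root directory:


      phys-schost# cd /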

    2. Eject the DVD-ROM.


      phys-schost# eject cdrom
      
  7. Repeat this procedure on any other node that you are adding to the cluster until all additional nodes are fully configured.

  8. For the Solaris 10 OS, verify on each node that multiuser services for the Service Management Facility (SMF) are online.

    If services are not yet online for a node, wait until the state changes to online before you proceed to the next step.


    phys-schost# svcs multi-user-server
    STATE          STIME    FMRI
    online         17:52:55 svc:/milestone/multi-user-server:default
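
    If you script this check, you can poll until the milestone reports online. The following is only a sketch that uses standard svcs options; adjust the polling interval to your environment.


    phys-schost# until svcs -H -o state multi-user-server | grep -q online; do sleep 10; done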
  9. From an active cluster member, prevent any other nodes from joining the cluster.


    phys-schost# claccess deny-all
    

    Alternately, you can use the clsetup utility. See How to Add a Node to the Authorized Node List in Sun Cluster System Administration Guide for Solaris OS for procedures.
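
    To confirm the change, you can display the list of hosts that are currently permitted to add themselves to the cluster; after deny-all the list should be empty. When you later add another node, re-enable access for it with the claccess allow command or through the clsetup utility.


    phys-schost# claccess list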

  10. From one node, verify that all nodes have joined the cluster.


    phys-schost# clnode status
    

    Output resembles the following.


    === Cluster Nodes ===
    
    --- Node Status ---
    
    Node Name                                       Status
    ---------                                       ------
    phys-schost-1                                   Online
    phys-schost-2                                   Online
    phys-schost-3                                   Online

    For more information, see the clnode(1CL) man page.
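
    You can also limit the report to the node that you just added, for example:


    phys-schost# clnode status phys-schost-3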

  11. Verify that all necessary patches are installed.


    phys-schost# showrev -p
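
    To check for one specific patch, you can filter the output. The patch ID 123456 below is only a placeholder for illustration, not a real required patch.


    phys-schost# showrev -p | grep 123456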
    
  12. (Optional) Enable automatic node reboot if all monitored shared-disk paths fail.

    1. Enable automatic reboot.


      phys-schost# clnode set -p reboot_on_path_failure=enabled
      
      -p

      Specifies the property to set

      reboot_on_path_failure=enabled

      Enables automatic node reboot if failure of all monitored shared-disk paths occurs.

    2. Verify that automatic reboot on disk-path failure is enabled.


      phys-schost# clnode show
      === Cluster Nodes ===                          
      
      Node Name:                                      node
      …
        reboot_on_path_failure:                          enabled
      …
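
      To check only this property, you can filter the output, for example:


      phys-schost# clnode show | grep reboot_on_path_failure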
  13. If you intend to use Sun Cluster HA for NFS on a highly available local file system, ensure that the loopback file system (LOFS) is disabled.

    To disable LOFS, add the following entry to the /etc/system file on each node of the cluster.


    exclude:lofs

    The change to the /etc/system file becomes effective after the next system reboot.
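
    For example, you could append the entry and then confirm it on each node as shown in the following sketch; editing the /etc/system file directly with a text editor works equally well.


    phys-schost# echo "exclude:lofs" >> /etc/system
    phys-schost# grep lofs /etc/system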


    Note –

    You cannot have LOFS enabled if you use Sun Cluster HA for NFS on a highly available local file system and have automountd running. LOFS can cause switchover problems for Sun Cluster HA for NFS. If you choose to add Sun Cluster HA for NFS on a highly available local file system, you must make one of the following configuration changes.

    However, if you configure non-global zones in your cluster, you must enable LOFS on all cluster nodes. If Sun Cluster HA for NFS on a highly available local file system must coexist with LOFS, use one of the other solutions instead of disabling LOFS.

    • Disable LOFS.

    • Disable the automountd daemon.

    • Exclude from the automounter map all files that are part of the highly available local file system that is exported by Sun Cluster HA for NFS. This choice enables you to keep both LOFS and the automountd daemon enabled.
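
    For the second option, the automountd daemon on the Solaris 10 OS runs under the SMF autofs service, so a minimal sketch of disabling it on each node follows. On the Solaris 9 OS, stop the automounter by using the autofs init script instead.


    phys-schost# svcadm disable svc:/system/filesystem/autofs
    phys-schost# svcs -H svc:/system/filesystem/autofs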


    See The Loopback File System in System Administration Guide: Devices and File Systems (Solaris 9 or Solaris 10) for more information about loopback file systems.


Example 3–3 Configuring Sun Cluster Software on an Additional Node

The following example shows the node phys-schost-3 added to the cluster schost. The sponsoring node is phys-schost-1.


*** Adding a Node to an Existing Cluster ***
Fri Feb  4 10:17:53 PST 2005


scinstall -ik -C schost -N phys-schost-1 -A trtype=dlpi,name=qfe2 -A trtype=dlpi,name=qfe3 \
-m endpoint=:qfe2,endpoint=switch1 -m endpoint=:qfe3,endpoint=switch2


Checking device to use for global devices file system ... done

Adding node "phys-schost-3" to the cluster configuration ... done
Adding adapter "qfe2" to the cluster configuration ... done
Adding adapter "qfe3" to the cluster configuration ... done
Adding cable to the cluster configuration ... done
Adding cable to the cluster configuration ... done

Copying the config from "phys-schost-1" ... done

Copying the postconfig file from "phys-schost-1" if it exists ... done
Copying the Common Agent Container keys from "phys-schost-1" ... done


Setting the node ID for "phys-schost-3" ... done (id=3)

Setting the major number for the "did" driver ... 
Obtaining the major number for the "did" driver from "phys-schost-1" ... done
"did" driver major number set to 300

Checking for global devices global file system ... done
Updating vfstab ... done

Verifying that NTP is configured ... done
Initializing NTP configuration ... done

Updating nsswitch.conf ... 
done

Adding clusternode entries to /etc/inet/hosts ... done


Configuring IP Multipathing groups in "/etc/hostname.<adapter>" files

Updating "/etc/hostname.hme0".

Verifying that power management is NOT configured ... done

Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ... done
The "local-mac-address?" parameter setting has been changed to "true".

Ensure network routing is disabled ... done

Updating file ("ntp.conf.cluster") on node phys-schost-1 ... done
Updating file ("hosts") on node phys-schost-1 ... done

Rebooting ... 

Troubleshooting

Unsuccessful configuration – If one or more nodes cannot join the cluster, or if the wrong configuration information was specified, first attempt to rerun this procedure. If that does not correct the problem, perform the procedure How to Unconfigure Sun Cluster Software to Correct Installation Problems on each misconfigured node to remove it from the cluster configuration. You do not need to uninstall the Sun Cluster software packages. Then rerun this procedure.

Next Steps

If you added a node to an existing cluster that uses a quorum device, go to How to Update Quorum Devices After Adding a Node to a Global Cluster.

Otherwise, go to How to Verify the Quorum Configuration and Installation Mode.