Sun Cluster Software Installation Guide for Solaris OS

How to Configure Sun Cluster Software on Additional Cluster Nodes (scinstall)

Perform this procedure to add a new node to an existing cluster.

  1. Ensure that all necessary hardware is installed.

  2. Ensure that the Solaris OS is installed to support Sun Cluster software.

    If Solaris software is already installed on the node, you must ensure that the Solaris installation meets the requirements for Sun Cluster software and any other software that you intend to install on the cluster. See How to Install Solaris Software for more information about installing Solaris software to meet Sun Cluster software requirements.

  3. Ensure that Sun Cluster software packages are installed on the node.

    See How to Install Sun Cluster Software Packages.

  4. Complete the following configuration worksheet.

    Table 2–8 Added Node Configuration Worksheet

    Record your answer to each question in the space provided.

    Software Patch Installation
      Do you want scinstall to install patches for you?                 Yes | No
      If yes, what is the patch directory?
      Do you want to use a patch list?                                  Yes | No

    Sponsoring Node
      What is the name of the sponsoring node?
      (Choose any node that is active in the cluster.)

    Cluster Name
      What is the name of the cluster that you want the node to join?

    Check
      Do you want to run the sccheck validation utility?                Yes | No

    Autodiscovery of Cluster Transport
      Do you want to use autodiscovery to configure the cluster
      transport?                                                        Yes | No
      (If no, supply the additional transport information below.)

    Point-to-Point Cables
      Does the node that you are adding to the cluster make this a
      two-node cluster?                                                 Yes | No
      Does the cluster use transport junctions?                         Yes | No

    Cluster-Transport Junctions
      If used, what are the names of the two transport junctions?
      (Defaults: switch1 and switch2)

    Cluster-Transport Adapters and Cables
      What are the names of the two transport adapters?
      Where does each transport adapter connect (to a transport
      junction or another adapter)?
      (Junction defaults: switch1 and switch2)
      For transport junctions, do you want to use the default port
      name?
        First junction:                                                 Yes | No
        Second junction:                                                Yes | No
      If no, what is the name of the port that you want to use?

    Global-Devices File System
      What is the name of the global-devices file system?
      (Default: /globaldevices)

    Automatic Reboot
      Do you want scinstall to automatically reboot the node after
      installation?                                                     Yes | No

    See Planning the Solaris OS and Planning the Sun Cluster Environment for planning guidelines.

  5. If you are adding this node to a single-node cluster, determine whether two cluster interconnects already exist.

    You must have at least two cables or two adapters configured before you can add a node.

    # scconf -p | grep cable
    # scconf -p | grep adapter
    • If the output shows configuration information for two cables or for two adapters, proceed to Step 6.

    • If the output shows no configuration information for either cables or adapters, or shows configuration information for only one cable or adapter, configure new cluster interconnects.

    1. On the existing cluster node, start the scsetup(1M) utility.

      # scsetup

    2. Choose the menu item, Cluster interconnect.

    3. Choose the menu item, Add a transport cable.

      Follow the instructions to specify the name of the node to add to the cluster, the name of a transport adapter, and whether to use a transport junction.

    4. If necessary, repeat Step 3 to configure a second cluster interconnect.

      When finished, quit the scsetup utility.

    5. Verify that the cluster now has two cluster interconnects configured.

      # scconf -p | grep cable
      # scconf -p | grep adapter

      The command output should show configuration information for at least two cluster interconnects.
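      The decision in this step can also be scripted. The following sketch assumes that the output of scconf -p has been saved to a file named scconf-p.out (a hypothetical file name); on a live cluster node you would pipe scconf -p directly into grep instead:

      ```shell
      # Hypothetical helper: decide whether enough cluster interconnects
      # exist before adding a node. Assumes "scconf -p" output was saved
      # to scconf-p.out beforehand (file name is illustrative only).
      cables=$(grep -c "cable" scconf-p.out 2>/dev/null)
      adapters=$(grep -c "adapter" scconf-p.out 2>/dev/null)
      if [ "${cables:-0}" -ge 2 ] || [ "${adapters:-0}" -ge 2 ]; then
          echo "interconnects ok - proceed to Step 6"
      else
          echo "fewer than two interconnects - configure them with scsetup"
      fi
      ```

      The same pattern applies to any scconf -p check in this guide: grep for the object type and count the matching lines.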

  6. If you are adding this node to an existing cluster, add the new node to the cluster authorized-nodes list.

    1. On any active cluster member, start the scsetup(1M) utility.

      # scsetup

      The Main Menu is displayed.

    2. Choose the menu item, New nodes.

    3. Choose the menu item, Specify the name of a machine which may add itself.

    4. Follow the prompts to add the node's name to the list of recognized machines.

      The scsetup utility prints the message Command completed successfully if the task completes without error.

    5. Quit the scsetup utility.

  7. Become superuser on the cluster node to configure.

  8. Install Sun Web Console packages.

    These packages are required by Sun Cluster software, even if you do not use Sun Web Console.

    1. Insert the Sun Cluster 3.1 9/04 CD-ROM in the CD-ROM drive.

    2. Change to the /cdrom/cdrom0/Solaris_arch/Product/sun_web_console/2.1/ directory, where arch is sparc or x86.

    3. Run the setup command.

      # ./setup

      The setup command installs all packages to support Sun Web Console.

  9. Install additional packages if you intend to use any of the following features.

    • Remote Shared Memory Application Programming Interface (RSMAPI)

    • SCI-PCI adapters for the interconnect transport

    • RSMRDT drivers

    Note –

    Use of the RSMRDT driver is restricted to clusters that run an Oracle9i release 2 SCI configuration with RSM enabled. Refer to Oracle9i release 2 user documentation for detailed installation and configuration instructions.

    1. Determine which packages you must install.

      The following table lists the Sun Cluster 3.1 9/04 packages that each feature requires and the order in which you must install each group of packages. The scinstall utility does not automatically install these packages.


      Feature                Additional Sun Cluster 3.1 9/04 Packages to Install

      RSMAPI                 SUNWscrif

      SCI-PCI adapters       SUNWsci SUNWscid SUNWscidx

      RSMRDT drivers         SUNWscrdt

    2. Ensure that any dependency Solaris packages are already installed.

      See Step 8 in How to Install Solaris Software.

    3. On the Sun Cluster 3.1 9/04 CD-ROM, change to the Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/ directory, where arch is sparc or x86, and where ver is 8 (for Solaris 8) or 9 (for Solaris 9).

      # cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/

    4. Install the additional packages.

      # pkgadd -d . packages

    5. If you are adding a node to a single-node cluster, repeat these steps to add the same packages to the original cluster node.
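    As an illustrative sketch of the steps above, installing the SCI-PCI packages on Solaris 9 for SPARC (an assumed configuration) might look like the following. Run these commands only on a node that uses the SCI-PCI feature:

    ```shell
    # Sketch only: assumes Solaris 9 on SPARC and the SCI-PCI feature.
    # Change to the Packages directory on the Sun Cluster 3.1 9/04 CD-ROM.
    cd /cdrom/cdrom0/Solaris_sparc/Product/sun_cluster/Solaris_9/Packages/
    # Install the packages in the order listed in the table.
    pkgadd -d . SUNWsci SUNWscid SUNWscidx
    ```

    Substitute the architecture, Solaris version, and package list that match your configuration.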

  10. On the Sun Cluster 3.1 9/04 CD-ROM, change to the Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/ directory, where arch is sparc or x86 and where ver is 8 (for Solaris 8) or 9 (for Solaris 9).

    # cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/

  11. Start the scinstall utility.

    # /usr/cluster/bin/scinstall

  12. Follow these guidelines to use the interactive scinstall utility:

    • Interactive scinstall enables you to type ahead. Therefore, do not press the Return key more than once if the next menu screen does not appear immediately.

    • Unless otherwise noted, you can press Control-D to return either to the start of a series of related questions or to the Main Menu.

    • Default answers or answers to previous sessions are displayed in brackets ([ ]) at the end of a question. Press Return to enter the response that is in brackets without typing it.

  13. From the Main Menu, choose the menu item, Install a cluster or cluster node.

      *** Main Menu ***
        Please select from one of the following (*) options:
          * 1) Install a cluster or cluster node
            2) Configure a cluster to be JumpStarted from this install server
            3) Add support for new data services to this cluster node
          * 4) Print release information for this cluster node
          * ?) Help with menu options
          * q) Quit
        Option:  1

  14. From the Install Menu, choose the menu item, Add this machine as a node in an existing cluster.

  15. Follow the menu prompts to supply your answers from the worksheet that you completed in Step 4.

    The scinstall utility configures the node and boots the node into the cluster.

  16. Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.

    # eject cdrom

  17. Repeat this procedure on any other node to add to the cluster until all additional nodes are fully configured.

  18. From an active cluster member, prevent any other nodes from joining the cluster.

    # /usr/cluster/bin/scconf -a -T node=.



    -T
      Specifies authentication options

    node=.
      Specifies the node name of dot (.) to add to the authentication list, to prevent any other node from adding itself to the cluster

    Alternatively, you can use the scsetup(1M) utility. See “How to Add a Cluster Node to the Authorized Node List” in “Adding and Removing a Cluster Node” in Sun Cluster System Administration Guide for Solaris OS for procedures.

  19. Update the quorum vote count.

    When you increase or decrease the number of node attachments to a quorum device, the cluster does not automatically recalculate the quorum vote count. This step reestablishes the correct quorum vote.

    Use the scsetup utility to remove each quorum device and then add it back into the configuration. Do this for one quorum device at a time.

    If the cluster has only one quorum device, configure a second quorum device before you remove and re-add the original quorum device. Then remove the second quorum device to return the cluster to its original configuration.
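    If you prefer the command line to the scsetup utility, re-registering a quorum device might look like the following hypothetical sketch, which assumes a shared disk quorum device named d20; verify the exact quorum syntax against the scconf(1M) man page before use:

    ```shell
    # Hypothetical sketch: re-register one quorum device so that its
    # vote count reflects the newly added node. Device name d20 is an
    # assumption; substitute your own quorum device.
    scconf -r -q globaldev=d20    # remove the quorum device
    scconf -a -q globaldev=d20    # add it back; votes are recalculated
    ```

    Repeat the remove-and-add sequence for one quorum device at a time, as described above.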

  20. Install Sun StorEdge QFS file system software.

    Follow the procedures for initial installation in the Sun StorEdge QFS and Sun StorEdge SAM-FS Software Installation and Configuration Guide.

  21. (Optional) SPARC: To install VERITAS File System, go to SPARC: How to Install VERITAS File System Software.

  22. Set up the name-service look-up order.

    Go to How to Configure the Name-Service Switch.

Example – Configuring Sun Cluster Software on an Additional Node

The following example shows the scinstall command executed and the messages that the utility logs as scinstall completes configuration tasks on the node phys-schost-3. The sponsoring node is phys-schost-1.

 >>> Confirmation <<<
    Your responses indicate the following options to scinstall:
      scinstall -ik \
           -C sc-cluster \
           -N phys-schost-1 \
           -A trtype=dlpi,name=hme1 -A trtype=dlpi,name=hme3 \
           -m endpoint=:hme1,endpoint=switch1 \
           -m endpoint=:hme3,endpoint=switch2
    Are these the options you want to use (yes/no) [yes]?
    Do you want to continue with the install (yes/no) [yes]?
Checking device to use for global devices file system ... done
Adding node "phys-schost-3" to the cluster configuration ... done
Adding adapter "hme1" to the cluster configuration ... done
Adding adapter "hme3" to the cluster configuration ... done
Adding cable to the cluster configuration ... done
Adding cable to the cluster configuration ... done
Copying the config from "phys-schost-1" ... done
Setting the node ID for "phys-schost-3" ... done (id=3)
Verifying the major number for the "did" driver with "phys-schost-1" ...done
Checking for global devices global file system ... done
Updating vfstab ... done
Verifying that NTP is configured ... done
Installing a default NTP configuration ... done
Please complete the NTP configuration after scinstall has finished.
Verifying that "cluster" is set for "hosts" in nsswitch.conf ... done
Adding the "cluster" switch to "hosts" in nsswitch.conf ... done
Verifying that "cluster" is set for "netmasks" in nsswitch.conf ... done
Adding the "cluster" switch to "netmasks" in nsswitch.conf ... done
Verifying that power management is NOT configured ... done
Unconfiguring power management ... done 
/etc/power.conf has been renamed to /etc/power.conf.61501001054 
Power management is incompatible with the HA goals of the cluster.
 Please do not attempt to re-configure power management.
Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ...done
Ensure network routing is disabled ... done
Network routing has been disabled on this node by creating /etc/notrouter. 
Having a cluster node act as a router is not supported by Sun Cluster. 
Please do not re-enable network routing.
Log file - /var/cluster/logs/install/scinstall.log.9853
Rebooting ...