Sun Cluster 3.1 10/03 Software Installation Guide

Installing the Software

The following Task Map lists the tasks that you perform to install the software on multinode or single-node clusters. Complete the procedures in the order that is indicated.

Table 2–1 Task Map: Installing the Software

1. Plan the layout of your cluster configuration and prepare to install software. 

How to Prepare for Cluster Software Installation

2. (Optional) Install Cluster Control Panel (CCP) software on the administrative console.

How to Install Cluster Control Panel Software on the Administrative Console

3. Install the Solaris operating environment and Sun Cluster software. Choose one of the following methods: 

  • Method 1 – (New multinode clusters only) Install Solaris software, optionally preinstall Sun Cluster software on all nodes by using the Web Start program, and then use the scinstall utility to establish the cluster.

  1. How to Install Solaris Software

  2. How to Preinstall Sun Cluster Software Packages

  3. How to Install Sun Cluster Software on All Nodes (Typical) or How to Install Sun Cluster Software on All Nodes (Custom)

  • Method 2 – (Added nodes only) Install Solaris software, optionally preinstall Sun Cluster software on the added nodes by using the Web Start program, and then add the nodes to the cluster by using the scinstall utility.

  1. How to Install Solaris Software

  2. How to Preinstall Sun Cluster Software Packages

  3. How to Install Sun Cluster Software on Additional Cluster Nodes (scinstall)

  • Method 3 – (New multinode clusters only) Install Solaris software, then install SunPlex Manager and use it to install Sun Cluster software.

  1. How to Install Solaris Software

  2. Using SunPlex Manager to Install Sun Cluster Software

  • Method 4 – (New multinode clusters or added nodes) Install Solaris software and Sun Cluster software in one operation by using the scinstall utility's custom JumpStart option.

How to Install Solaris and Sun Cluster Software (JumpStart)

  • Method 5 – (New single-node clusters) Install Solaris software and then install Sun Cluster software by using the scinstall -iFo command.

  1. How to Install Solaris Software

  2. How to Install Sun Cluster Software on a Single-Node Cluster

4. Configure the name-service look-up order. 

How to Configure the Name-Service Switch

5. Set up directory paths. 

How to Set Up the Root Environment

6. Install data-service software packages. 

How to Install Data-Service Software Packages (Web Start) or How to Install Data-Service Software Packages (scinstall)

7. Perform postinstallation setup and assign quorum votes. (Multinode clusters only)

How to Perform Postinstallation Setup

8. Install and configure volume-manager software: 

  • Install and configure Solstice DiskSuite/Solaris Volume Manager software.

  • Install and configure VERITAS Volume Manager software.

9. Configure the cluster. 

Configuring the Cluster

How to Prepare for Cluster Software Installation

Before you begin to install software, make the following preparations.

  1. Read the following manuals for information that can help you plan your cluster configuration and prepare your installation strategy.

  2. Have available all related documentation, including third-party documents.

    The following is a partial list of product documentation that you might need for reference during cluster installation.

    • Solaris software

    • Solstice DiskSuite/Solaris Volume Manager software

    • VERITAS Volume Manager

    • Sun Management Center

    • Third-party applications such as ORACLE

  3. Plan your cluster configuration.


    Caution –

    Plan your cluster installation completely. Identify requirements for all data services and third-party products before you begin Solaris and Sun Cluster software installation. Failure to do so might result in installation errors that require that you completely reinstall the Solaris and Sun Cluster software.

    For example, the Oracle Parallel Fail Safe/Real Application Clusters Guard option of Oracle Parallel Server/Real Application Clusters has special requirements for the hostnames that you use in the cluster. Another example with special requirements is Sun Cluster HA for SAP. You must accommodate these requirements before you install Sun Cluster software because you cannot change hostnames after you install Sun Cluster software.


  4. Get all necessary patches for your cluster configuration.

    See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.

    Copy the patches that are required for Sun Cluster into a single directory. The directory must be on a file system that is accessible by all nodes. The default patch directory is /var/cluster/patches.
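
    For example, the following commands create the default patch directory and copy downloaded patches into it. The /tmp/patches source location is hypothetical; substitute the directory where you staged the patches.

    # mkdir -p /var/cluster/patches
    # cp -r /tmp/patches/* /var/cluster/patches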


    Tip –

    After you install Solaris software on a node, you can view the /etc/release file to see the exact version of Solaris software that is installed.
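
    For example (the release string shown is illustrative):

    # cat /etc/release
                      Solaris 9 4/03 s9s_u3wos_08 SPARC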


    1. (Optional) If you are using SunPlex Manager, you can create a patch-list file. If you specify a patch-list file, SunPlex Manager only installs the patches that are listed in the patch-list file.

      For information about creating a patch-list file, refer to the patchadd(1M) man page.

    2. Record the path to the patch directory.

  5. Do you intend to use Cluster Control Panel software to connect from an administrative console to your cluster nodes?

    • If yes, go to How to Install Cluster Control Panel Software on the Administrative Console.

    • If no, go to How to Install Solaris Software.

How to Install Cluster Control Panel Software on the Administrative Console


Note –

You are not required to use an administrative console. If you do not use an administrative console, perform administrative tasks from one designated node in the cluster.


This procedure describes how to install the Cluster Control Panel (CCP) software on the administrative console. The CCP provides a launchpad for the cconsole(1M), ctelnet(1M), and crlogin(1M) tools. Each of these tools provides a multiple-window connection to a set of nodes. These tools also provide a common window that sends input to all nodes at one time.

You can use any desktop machine that runs the Solaris 8 or Solaris 9 operating environment as an administrative console. You can also use the administrative console as a Sun Management Center console or server, or as a documentation server. See Sun Management Center documentation for information on how to install Sun Management Center software. See the Sun Cluster 3.1 10/03 Release Notes for information on how to install Sun Cluster documentation.

  1. Ensure that a supported version of the Solaris operating environment and any Solaris patches are installed on the administrative console.

    All platforms require at least the Solaris End User System Support software group.

  2. (Optional) If you intend to use the Web Start program with a GUI, ensure that the DISPLAY environment variable is set.

  3. Insert the Sun Cluster 3.1 10/03 CD-ROM into the CD-ROM drive of the administrative console.

    If the Volume Management daemon vold(1M) is running and configured to manage CD-ROM devices, it automatically mounts the CD-ROM on the /cdrom/suncluster_3_1_u1 directory.

  4. Become superuser on the administrative console.

  5. Change to the /cdrom/suncluster_3_1_u1 directory.


    # cd /cdrom/suncluster_3_1_u1
    

  6. Start the Web Start program.


    # ./installer
    

  7. Choose Custom installation.

    The utility displays a list of software packages.

  8. Deselect the Sun Cluster Framework package.

  9. Select the Sun Cluster cconsole package.

  10. (Optional) Select the Sun Cluster Documentation package.

    If you do not install the documentation on your administrative console, you can still view an HTML or PDF collection directly from the CD-ROM.

  11. Follow onscreen instructions to continue package installation.

    After installation is finished, you can view any available installation log.

  12. Create an /etc/clusters file on the administrative console.

    Add your cluster name and the physical node name of each cluster node to the file.


    # vi /etc/clusters
    clustername node1 node2
    

    See the /opt/SUNWcluster/bin/clusters(4) man page for details.

  13. Create an /etc/serialports file.

    Add an entry for each node in the cluster to the file. Specify the physical node name, the hostname of the console-access device, and the port number. Examples of a console-access device are a terminal concentrator (TC), a System Service Processor (SSP), and a Sun Fire system controller.

    • For a Sun Fire 15000 system controller, use telnet(1) port number 23 for the serial port number of each entry.

    • For all other console-access devices, use the telnet serial port number, not the physical port number. To determine the telnet serial port number, add 5000 to the physical port number. For example, if a physical port number is 6, the telnet serial port number is 5006.

    • For Sun Enterprise 10000 servers, also see the /opt/SUNWcluster/bin/serialports(4) man page for details and special considerations.


    # vi /etc/serialports
    node1 ca-dev-hostname port
    node2 ca-dev-hostname port
    
    node1, node2       Physical names of the cluster nodes
    ca-dev-hostname    Hostname of the console-access device
    port               Serial port number

  14. (Optional) For convenience, add the /opt/SUNWcluster/bin directory to the PATH and the /opt/SUNWcluster/man directory to the MANPATH on the administrative console.

    If you installed the SUNWscman package, also add the /usr/cluster/man directory to the MANPATH.
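
    For example, if you use the Bourne shell, you can append lines such as the following to the /.profile file. This is a minimal sketch; adjust the file name and syntax for your shell, and include the /usr/cluster/man directory only if you installed the SUNWscman package.

    PATH=$PATH:/opt/SUNWcluster/bin
    MANPATH=$MANPATH:/opt/SUNWcluster/man:/usr/cluster/man
    export PATH MANPATH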

  15. Start the CCP utility.


    # /opt/SUNWcluster/bin/ccp &
    

    Click the cconsole, crlogin, or ctelnet button in the CCP window to launch that tool. Alternately, you can start any of these tools directly. For example, to start ctelnet, type the following command:


    # /opt/SUNWcluster/bin/ctelnet &
    

    See the procedure “How to Remotely Log In to Sun Cluster” in “Beginning to Administer the Cluster” in Sun Cluster 3.1 10/03 System Administration Guide for information about how to use the CCP utility. Also see the ccp(1M) man page.

  16. Is the Solaris operating environment already installed on each cluster node to meet Sun Cluster software requirements?

    • If yes, go to How to Preinstall Sun Cluster Software Packages.

    • If no, go to How to Install Solaris Software.

How to Install Solaris Software

If you do not use the scinstall(1M) custom JumpStart installation method to install software, perform this task to install the Solaris operating environment on each node in the cluster.


Tip –

To speed installation, you can install the Solaris operating environment on each node at the same time.


If your nodes are already installed with the Solaris operating environment but do not meet Sun Cluster installation requirements, you might need to reinstall the Solaris software. If so, follow the steps that are described in this procedure to ensure successful installation of Sun Cluster software. See Planning the Solaris Operating Environment for information about required partitioning and other Sun Cluster installation requirements.

  1. Ensure that the hardware setup is complete and that connections are verified before you install Solaris software.

    See the Sun Cluster 3.1 Hardware Administration Collection and your server and storage device documentation for details.

  2. Ensure that your cluster configuration planning is complete.

    See How to Prepare for Cluster Software Installation for requirements and guidelines.

  3. Have available your completed Local File System Layout Worksheet.

  4. Do you use a naming service?

    • If no, go to Step 5. You set up local hostname information in Step 11.

    • If yes, add address-to-name mappings for all public hostnames and logical addresses to any naming services that clients use for access to cluster services. See IP Addresses for planning guidelines. See your Solaris system-administrator documentation for information about using Solaris naming services.

  5. If you are using a cluster administrative console, display a console screen for each node in the cluster.

    • If Cluster Control Panel (CCP) software is installed and configured on your administrative console, you can use the cconsole(1M) utility to display the individual console screens. The cconsole utility also opens a master window from which you can send your input to all individual console windows at the same time. Use the following command to start cconsole:


      # /opt/SUNWcluster/bin/cconsole clustername &
      

    • If you do not use the cconsole utility, connect to the consoles of each node individually.

  6. Install the Solaris operating environment as instructed in your Solaris installation documentation.


    Note –

    You must install all nodes in a cluster with the same version of the Solaris operating environment.


    You can use any method that is normally used to install Solaris software. These methods include the Solaris interactive installation program, Solaris JumpStart, and the Solaris Web Start program.

    During Solaris software installation, perform the following steps:

    1. Install at least the End User System Support software group.

      • If you intend to use the Remote Shared Memory Application Programming Interface (RSMAPI) or to use SCI-PCI adapters for the interconnect transport, note that the required RSMAPI software packages (SUNWrsm, SUNWrsmx, SUNWrsmo, and SUNWrsmox) are included only in the higher-level software groups. If you install the End User System Support software group, you must install the RSMAPI software packages manually from the Solaris CD-ROM in Step 8.

      • If you intend to use SunPlex Manager, note that the required Apache software packages (SUNWapchr and SUNWapchu) are included only in the higher-level software groups. If you install the End User System Support software group, you must install the Apache software packages manually from the Solaris CD-ROM in Step 9.

      See Solaris Software Group Considerations for information about additional Solaris software requirements.

    2. Choose Manual Layout to set up the file systems.

      • Create a file system of at least 512 Mbytes for use by the global-device subsystem. If you intend to use SunPlex Manager to install Sun Cluster software, you must create the file system with a mount-point name of /globaldevices. The /globaldevices mount-point name is the default that is used by scinstall.


        Note –

        Sun Cluster software requires a global-devices file system for installation to succeed.


      • Specify that slice 7 is at least 20 Mbytes in size. If you intend to use SunPlex Manager to install Solstice DiskSuite software (Solaris 8) or configure Solaris Volume Manager software (Solaris 9), also mount this file system on /sds.

        Otherwise, create any file-system partitions that are needed to support your volume-manager software as described in System Disk Partitions.


        Note –

        If you intend to install Sun Cluster HA for NFS or Sun Cluster HA for Apache, you must also install Solstice DiskSuite software (Solaris 8) or configure Solaris Volume Manager software (Solaris 9).


    3. For ease of administration, set the same root password on each node.

  7. Are you adding a new node to an existing cluster?

    • If no, skip to Step 8.

    • If yes, perform the following steps:

    1. Have you added the new node to the cluster's authorized-node list? If not, add the node by following the procedure “How to Add a Cluster Node to the Authorized Node List” in “Adding and Removing a Cluster Node” in Sun Cluster 3.1 10/03 System Administration Guide.

    2. From another, active node of the cluster, display the names of all cluster file systems.


      % mount | grep global | egrep -v node@ | awk '{print $1}'
      

    3. On the new node, create a mount point for each cluster file system in the cluster.


      % mkdir -p mountpoint
      

      For example, if the mount command returned the file-system name /global/dg-schost-1, run mkdir -p /global/dg-schost-1 on the new node you are adding to the cluster.

    4. Is VERITAS Volume Manager (VxVM) installed on any nodes that are already in the cluster?

      • If no, proceed to Step 8.

      • If yes, ensure that the same vxio number is used on the VxVM-installed nodes. Also ensure that the vxio number is available for use on each of the nodes that do not have VxVM installed.


        # grep vxio /etc/name_to_major
        vxio NNN
        

        If the vxio number is already in use on a node that does not have VxVM installed, free the number on that node. Change the /etc/name_to_major entry to use a different number.
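
        For example, to verify that a candidate major number is free before you assign it to the vxio entry, you can search for it on that node. The number 210 here is hypothetical.

        # grep -w 210 /etc/name_to_major

        If the command returns no output, the number is not in use and is available for the vxio entry.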

  8. Do you intend to use the Remote Shared Memory Application Programming Interface (RSMAPI) or use SCI-PCI adapters for the interconnect transport?

    • If no, proceed to Step 9.

    • If yes and you installed the End User System Support software group, install the RSMAPI software packages from the Solaris CD-ROM.


      # pkgadd -d . SUNWrsm SUNWrsmx SUNWrsmo SUNWrsmox
      

    • If yes and you installed a higher-level software group than the End User System Support software group, proceed to Step 9.

  9. Do you intend to use SunPlex Manager?

    • If no, or if you installed a higher-level software group than the End User System Support software group, proceed to Step 10.

    • If yes and you installed the End User System Support software group, install the Apache software packages from the Solaris CD-ROM.


      # pkgadd -d . SUNWapchr SUNWapchu
      

    • If yes and you installed a higher-level software group than the End User System Support software group, proceed to Step 10. The Apache software packages are already installed.

    The Apache software packages must already be installed before you install SunPlex Manager.

  10. Install any hardware-related patches and download any needed firmware that is contained in the hardware patches.

    See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.

  11. Update the /etc/inet/hosts file on each node with all public hostnames and logical addresses for the cluster.

    Perform this step regardless of whether you are using a naming service.
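
    For example, you can add entries similar to the following. The node names and addresses shown are hypothetical.

    # vi /etc/inet/hosts
    192.168.1.101   phys-schost-1
    192.168.1.102   phys-schost-2
    192.168.1.110   schost-lh    # logical hostname for a data service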

  12. Do you intend to use dynamic reconfiguration on Sun Enterprise 10000 servers?

    • If no, proceed to Step 14.

    • If yes, on each node add the following entry to the /etc/system file.


      set kernel_cage_enable=1

      This entry becomes effective after the next system reboot. See the Sun Cluster 3.1 10/03 System Administration Guide for procedures to perform dynamic reconfiguration tasks in a Sun Cluster configuration. See your server documentation for more information about dynamic reconfiguration.

  13. Do you intend to use VERITAS File System (VxFS) software?

    • If no, proceed to Step 14.

    • If yes, perform the following steps.

    1. Follow the procedures in your VxFS installation documentation to install VxFS software on each node of the cluster.

    2. Install any Sun Cluster patches that are required to support VxFS.

      See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.

    3. In the /etc/system file on each node, set the value for the rpcmod:svc_default_stksize variable to 0x8000 and set the value of the lwp_default_stksize variable to 0x6000.


      set rpcmod:svc_default_stksize=0x8000
      set lwp_default_stksize=0x6000

      Sun Cluster software requires a minimum rpcmod:svc_default_stksize setting of 0x8000. Because VxFS installation sets the value of the rpcmod:svc_default_stksize variable to 0x4000, you must manually set the value to 0x8000 after VxFS installation is complete.

      Also, you must set the lwp_default_stksize variable in the /etc/system file to override the VxFS default value of 0x4000.
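
      After you edit the /etc/system file, you can confirm both settings. For example:

      # grep stksize /etc/system
      set rpcmod:svc_default_stksize=0x8000
      set lwp_default_stksize=0x6000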

  14. Preinstall Sun Cluster software packages.

    Go to How to Preinstall Sun Cluster Software Packages.

How to Preinstall Sun Cluster Software Packages

Perform the following procedure to use the Web Start program to install the Sun Cluster software packages on each node of the cluster. You can run the Web Start program with a command-line interface (CLI) or with a graphical-user interface (GUI). The content and sequence of instructions in both the CLI and the GUI are generally the same. See the installer(1M) man page for more information about the Web Start installation program.


Note –

If you enable remote shell (rsh(1M)) or secure shell (ssh(1)) access for superuser to all cluster nodes, you do not have to perform this procedure. The scinstall utility automatically installs Sun Cluster framework software on all cluster nodes.

However, if you need to install any Sun Cluster software packages in addition to the framework software, install the packages from the Sun Cluster 3.1 10/03 CD-ROM. Do this task before you start the scinstall utility. You can install these additional Sun Cluster packages by using the pkgadd(1M) command or by using the Web Start program.


  1. Ensure that the Solaris operating environment is installed to support Sun Cluster software.

    If Solaris software is already installed on the node, you must ensure that the Solaris installation meets the requirements for Sun Cluster software and any other software that you intend to install on the cluster. See How to Install Solaris Software for more information about installing Solaris software to meet Sun Cluster software requirements.

  2. Become superuser on the cluster node to install.

  3. (Optional) If you intend to use the Web Start program with a GUI, ensure that the DISPLAY environment variable is set.

  4. Insert the Sun Cluster 3.1 10/03 CD-ROM into the CD-ROM drive.

    If the Volume Management daemon vold(1M) is running and configured to manage CD-ROM devices, it automatically mounts the CD-ROM on the /cdrom/suncluster_3_1_u1 directory.

  5. Change to the root directory of the CD-ROM, where the installer(1M) utility resides.
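
    For example:

    # cd /cdrom/suncluster_3_1_u1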

  6. Start the Web Start program.


    # ./installer
    

  7. Choose Typical or Custom installation.

    • Choose Typical to preinstall the default set of Sun Cluster framework software packages.

    • Choose Custom to specify the Sun Cluster software packages to preinstall. The nondefault software packages include packages that support other languages, the RSMAPI, and SCI-PCI adapters.

  8. Follow instructions on the screen to install Sun Cluster software on the node.

    After installation is finished, you can view any available installation log.

  9. Repeat Step 1 through Step 8 on each remaining cluster node to install.

  10. Install Sun Cluster software on the cluster nodes.

    Go to How to Install Sun Cluster Software on All Nodes (Typical) or How to Install Sun Cluster Software on All Nodes (Custom).

How to Install Sun Cluster Software on All Nodes (Typical)

Perform this procedure to install Sun Cluster software on all nodes of the cluster with the default cluster-configuration settings. To specify all cluster configuration settings, instead follow procedures in How to Install Sun Cluster Software on All Nodes (Custom).

The scinstall command checks for patches in the /var/cluster/patches directory or the /var/patches directory. If neither directory exists, no patches are added. If both directories exist, only the patches in the /var/cluster/patches directory are added.

A patch-list file may be included in the patch directory. The default patch-list file is patchlist. For information about creating a patch-list file, refer to the patchadd(1M) man page.
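
For example, a patchlist file in the patch directory might contain entries similar to the following, typically one patch ID per line. The patch IDs shown are hypothetical; see the patchadd(1M) man page for the exact format.

    113801-02
    115553-04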

  1. Ensure that the Solaris operating environment is installed to support Sun Cluster software.

    If Solaris software is already installed on the node, you must ensure that the Solaris installation meets the requirements for Sun Cluster software and any other software that you intend to install on the cluster. See How to Install Solaris Software for more information about installing Solaris software to meet Sun Cluster software requirements.

  2. Did you preinstall Sun Cluster software?

    • If yes, proceed to Step 3.

    • If no, enable remote shell (rsh(1M)) or secure shell (ssh(1)) access for superuser to all cluster nodes. This access enables the scinstall utility to install Sun Cluster software packages on all nodes.

  3. Have available your completed configuration planning worksheets.

    See Planning the Sun Cluster Environment for planning guidelines.

  4. Become superuser on the cluster node from which you intend to install the cluster.

  5. On one node of the cluster, start the scinstall utility.

    • If you preinstalled Sun Cluster software, type the following command:


      # /usr/cluster/bin/scinstall
      

    • If you did not preinstall Sun Cluster software, insert the Sun Cluster 3.1 10/03 CD-ROM and type the following commands, where ver is 8 (for Solaris 8) or 9 (for Solaris 9):


      # cd /cdrom/suncluster_3_1_u1/SunCluster_3.1/Sol_ver/Tools
      # ./scinstall
      

    Follow these guidelines to use the interactive scinstall utility.

    • Interactive scinstall enables you to type ahead. Therefore, do not press Return more than once if the next menu screen does not appear immediately.

    • Unless otherwise noted, you can press Control-D to return to either the start of a series of related questions or to the Main Menu.

  6. From the Main Menu, type 1 (Install a cluster or cluster node).


     *** Main Menu ***
    
        Please select from one of the following (*) options:
    
          * 1) Install a cluster or cluster node
            2) Configure a cluster to be JumpStarted from this install server
            3) Add support for new data services to this cluster node
          * 4) Print release information for this cluster node
    
          * ?) Help with menu options
          * q) Quit
    
        Option:  1
    

  7. From the Install Menu, type 1 (Install all nodes of a new cluster).


      *** Install Menu ***
    
        Please select from any one of the following options:
    
            1) Install all nodes of a new cluster
            2) Install just this machine as the first node of a new cluster
            3) Add this machine as a node in an existing cluster
    
            ?) Help with menu options
            q) Return to the Main Menu
    
        Option:  1
    …
      *** Installing all Nodes of a New Cluster ***
    …
        Do you want to continue (yes/no) [yes]?  y
    

  8. Type 1 to specify the Typical installation option.


      >>> Type of Installation <<<
    …
        Please select from one of the following options:
    
            1) Typical
            2) Custom
    
            ?) Help
            q) Return to the Main Menu
    
        Option [1]:  1
    

    For the Typical installation of Sun Cluster software, scinstall automatically specifies the following configuration defaults.

    Component                                    Default Value

    Private network address                      172.16.0.0
    Cluster transport junctions                  switch1 and switch2
    Global-devices file-system name              /globaldevices
    Installation security (DES)                  Limited
    Solaris and Sun Cluster patch directory      /var/cluster/patches


    Note –

    You cannot change the private network address after cluster installation.


  9. Specify the cluster name.


      >>> Cluster Name <<<
    …
        What is the name of the cluster you want to establish?  clustername
    

  10. Specify the names of the other nodes to become part of this cluster.


      >>> Cluster Nodes <<<
    …
        Node name:  node2
        Node name (Ctrl-D to finish):  Control-D
    
        This is the complete list of nodes:
    …
        Is it correct (yes/no) [yes]?  

  11. Specify the first cluster-interconnect transport adapter.


      >>> Cluster Transport Adapters and Cables <<<
    
        Select the first cluster transport adapter for "node":
    
            1) adapter
            2) adapter
            …
            N) Other
    
        Option:  N
    

    The scinstall utility lists all Ethernet adapters on the node. To configure adapters that are not listed, such as SCI-PCI adapters, type the number for the Other menu option. Then specify the adapter information that is requested in the subsequent prompts.


    Note –

    If your configuration uses SCI-PCI adapters, do not accept the default when you are prompted for the adapter connection (the port name). Instead, provide the port name (0, 1, 2, or 3) that is on the SCI Dolphin switch itself, to which the node is physically cabled. The following example shows the prompts and responses for declining the default port name and specifying the switch port name 0.


    …
        Use the default port name for the "adapter" connection (yes/no) [yes]? n
        What is the name of the port you want to use?  0
    


  12. Specify the second cluster-interconnect transport adapter.


      >>> Cluster Transport Adapters and Cables <<<
    
        Select the second cluster transport adapter for "node":
    
            1) adapter
            2) adapter
            …
            N) Other
    
        Option:  N

    You configure two adapters by using the scinstall command. You can configure additional adapters after Sun Cluster software is installed by using the scsetup utility.

  13. Confirm that the scinstall utility should begin installation.


        Is it okay to begin the installation (yes/no) [yes]?  y
    
  14. Specify whether installation should stop if the sccheck utility detects errors.


        Interrupt the installation for sccheck errors (yes/no) [no]?

    If you choose to interrupt installation and the sccheck utility detects any problems, the utility displays information about the problems and prompts you for your next action. Log files are placed in the /var/cluster/logs/install/sccheck/ directory.

    If the sccheck utility quits with an error message because a version of Sun Explorer software earlier than 3.5.1 is installed, remove the existing SUNWexplo package. Install the SUNWexplo package that is supplied on the Sun Cluster 3.1 10/03 CD-ROM. Then restart the scinstall utility.

    The scinstall utility continues installation of all cluster nodes and reboots the cluster. Sun Cluster installation output is logged in a /var/cluster/logs/install/scinstall.log.N file.

  15. Set up the name-service look-up order.

    Go to How to Configure the Name-Service Switch.

Example – Installing Sun Cluster Software on All Nodes (Typical)

The following example shows the scinstall progress messages that are logged as scinstall completes Typical installation tasks on a two-node cluster. The cluster node names are phys-schost-1 and phys-schost-2. The specified adapter names are qfe2 and hme2. The Sun Cluster software was already installed by using the Web Start program.


  Installation and Configuration

    Log file - /var/cluster/logs/install/scinstall.log.834

    Testing for "/globaldevices" on "phys-schost-1" ... done
    Testing for "/globaldevices" on "phys-schost-2" ... done

    Checking installation status ... done

    The Sun Cluster software is already installed on "phys-schost-1".
    The Sun Cluster software is already installed on "phys-schost-2".

    Starting discovery of the cluster transport configuration.

    Probing ..

    The following connections were discovered:

        phys-schost-1:qfe2  switch1  phys-schost-2:qfe2
        phys-schost-1:hme2  switch2  phys-schost-2:hme2

    Completed discovery of the cluster transport configuration.

    Started sccheck on "phys-schost-1".
    Started sccheck on "phys-schost-2".

    sccheck completed with no errors or warnings for "phys-schost-1".
    sccheck completed with no errors or warnings for "phys-schost-2".

    Configuring "phys-schost-2" ... done
    Rebooting "phys-schost-2" ... done

    Configuring "phys-schost-1" ... done
    Rebooting "phys-schost-1" ... 

Log file - /var/cluster/logs/install/scinstall.log.834

Rebooting ... 

How to Install Sun Cluster Software on All Nodes (Custom)

Perform this procedure to install Sun Cluster software on all nodes of the cluster and to specify all cluster configuration settings. To install Sun Cluster software with the default cluster configuration settings, instead go to How to Install Sun Cluster Software on All Nodes (Typical).

  1. Ensure that the Solaris operating environment is installed to support Sun Cluster software.

    If Solaris software is already installed on the node, you must ensure that the Solaris installation meets the requirements for Sun Cluster software and any other software that you intend to install on the cluster. See How to Install Solaris Software for more information about installing Solaris software to meet Sun Cluster software requirements.

  2. Did you preinstall Sun Cluster software?

    • If yes, proceed to Step 3.

    • If no, enable remote shell (rsh(1M)) or secure shell (ssh(1)) access for superuser to all cluster nodes. This access enables the scinstall utility to install Sun Cluster software packages on all nodes.

  3. Have available your completed configuration planning worksheets.

    See Planning the Sun Cluster Environment for planning guidelines.

  4. Become superuser on the cluster node from which you intend to install the cluster.

  5. On one node of the cluster, start the scinstall utility.

    • If you preinstalled Sun Cluster software, type the following command:


      # /usr/cluster/bin/scinstall
      

    • If you did not preinstall Sun Cluster software, insert the Sun Cluster 3.1 10/03 CD-ROM and type the following commands, where ver is 8 (for Solaris 8) or 9 (for Solaris 9):


      # cd /cdrom/suncluster_3_1_u1/SunCluster_3.1/Sol_ver/Tools
      # ./scinstall
      

    Follow these guidelines to use the interactive scinstall utility.

    • Interactive scinstall enables you to type ahead. Therefore, do not press Return more than once if the next menu screen does not appear immediately.

    • Unless otherwise noted, you can press Control-D to return to either the start of a series of related questions or to the Main Menu.

  6. From the Main Menu, type 1 (Install a cluster or cluster node).


      *** Main Menu ***
    
        Please select from one of the following (*) options:
    
          * 1) Install a cluster or cluster node
            2) Configure a cluster to be JumpStarted from this install server
            3) Add support for new data services to this cluster node
          * 4) Print release information for this cluster node
    
          * ?) Help with menu options
          * q) Quit
    
        Option:  1
    

  7. From the Install Menu, type 1 (Install all nodes of a new cluster).


      *** Install Menu ***
    
        Please select from any one of the following options:
    
            1) Install all nodes of a new cluster
            2) Install just this machine as the first node of a new cluster
            3) Add this machine as a node in an existing cluster
    
            ?) Help with menu options
            q) Return to the Main Menu
    
        Option:  1
    …
      *** Installing all Nodes of a New Cluster ***
    …
        Do you want to continue (yes/no) [yes]?  y
    

  8. Type 2 to specify the Custom installation option.


      >>> Type of Installation <<<
    …
        Please select from one of the following options:
    
            1) Typical
            2) Custom
    
            ?) Help
            q) Return to the Main Menu
    
        Option [1]:  2
    

  9. Specify the cluster name.


      >>> Cluster Name <<<
    …
        What is the name of the cluster you want to establish?  clustername
    

  10. Specify the names of the other nodes to become part of this cluster.


      >>> Cluster Nodes <<<
    …
        Node name:  node2
        Node name (Ctrl-D to finish):  Control-D
    
        This is the complete list of nodes:
    …
        Is it correct (yes/no) [yes]?  

  11. Specify whether to use Data Encryption Standard (DES) authentication.

    DES authentication provides an additional level of security at installation time. DES authentication enables the sponsoring node to authenticate nodes that attempt to contact the sponsoring node to update the cluster configuration.

    If you choose to use DES authentication for additional security, you must configure all necessary encryption keys before any node can join the cluster. See the keyserv(1M) and publickey(4) man pages for details.


     >>> Authenticating Requests to Add Nodes <<<
    …
        Do you need to use DES authentication (yes/no) [no]? 
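
    If you choose DES authentication, you can create the necessary keys with the newkey(1M) command before the other nodes attempt to join. For example, a command similar to the following (the node name is hypothetical) creates a key for the superuser at a host. This is a sketch; see the keyserv(1M) and publickey(4) man pages for the complete key-setup procedure.

    # newkey -h phys-schost-2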
  12. Specify the private network address and netmask.


     >>> Network Address for the Cluster Transport <<<
    …
        Is it okay to accept the default network address (yes/no) [yes]? 
        Is it okay to accept the default netmask (yes/no) [yes]? 


    Note –

    You cannot change the private network address after the cluster is successfully formed.


  13. Specify whether the cluster uses transport junctions.

    • If this is a two-node cluster, specify whether you intend to use transport junctions.


       >>> Point-to-Point Cables <<<
       …
          Does this two-node cluster use transport junctions (yes/no) [yes]? 


      Tip –

      You can specify that the cluster uses transport junctions, regardless of whether the nodes are directly connected to each other. If you specify that the cluster uses transport junctions, you can more easily add new nodes to the cluster in the future.


    • If this cluster has three or more nodes, you must use transport junctions. Press Return to continue to the next screen.


       
      >>> Point-to-Point Cables <<<
       …
          Since this is not a two-node cluster, you will be asked to configure
          two transport junctions.
          
      Hit ENTER to continue: 

  14. Does this cluster use transport junctions?

    • If no, proceed to Step 15.

    • If yes, specify names for the transport junctions. You can use the default names switchN or create your own names.


       >>> Cluster Transport Junctions <<<
       …
          What is the name of the first junction in the cluster [switch1]? 
          What is the name of the second junction in the cluster [switch2]? 

  15. Specify the first cluster-interconnect transport adapter for the node from which you are installing the cluster.


      >>> Cluster Transport Adapters and Cables <<<
    
        Select the first cluster transport adapter for "node":
    
            1) adapter
            2) adapter
            …
            N) Other
    
        Option:  N
    

    The scinstall utility lists all Ethernet adapters that are found on the node. To configure adapters that are not listed, such as SCI-PCI adapters, type the number for the Other menu option. Then specify the adapter information that is requested in the subsequent prompts.


    Note –

    If your configuration uses SCI-PCI adapters, do not accept the default when you are prompted for the adapter connection (the port name). Instead, provide the port name (0, 1, 2, or 3) that is on the SCI Dolphin switch itself, to which the node is physically cabled. The following example shows the prompts and responses for declining the default port name and specifying the switch port name 0.


    …
        Use the default port name for the "adapter" connection (yes/no) [yes]? n
        What is the name of the port you want to use?  0
    


  16. Specify the second cluster-interconnect transport adapter for the node from which you are installing the cluster.


      >>> Cluster Transport Adapters and Cables <<<
    
        Select the second cluster transport adapter for "node":
    
            1) adapter
            2) adapter
            …
            N) Other
    
        Option:  N
    

    You configure two adapters by using the scinstall command. You can configure additional adapters after Sun Cluster software is installed by using the scsetup utility.

  17. Specify whether to use autodiscovery to automatically choose the transport adapters for the other nodes of the cluster.


        Is it okay to use autodiscovery for the other nodes (yes/no) [yes]?  

    • If you type yes to choose to use autodiscovery, proceed to Step 18. The scinstall utility chooses transport adapters, junctions, and ports to configure for the remaining nodes.

    • If you type no to decline autodiscovery, answer the subsequent prompts. Specify the transport adapter names, junction names, and port names that you want to configure for each of the remaining nodes.

  18. Confirm that the scinstall utility should install patches.

    See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.


      >>> Software Patch Installation <<<
    …
        Do you want scinstall to install patches for you (yes/no) [yes]?  y
        What is the name of the patch directory? /var/cluster/patches
        Do you want scinstall to use a patch list file (yes/no) [no]?  n
  19. Specify the global-devices file-system name.


     >>> Global Devices File System <<<
    …
        The default is to use /globaldevices.
    …
        Is it okay to use this default (yes/no) [yes]? 

  20. Confirm that the scinstall utility should begin installation.


        Is it okay to begin the installation (yes/no) [yes]?  y
    
  21. Specify whether installation should stop if the sccheck utility detects errors.


        Interrupt the installation for sccheck errors (yes/no) [no]?

    If you choose to interrupt installation and the sccheck utility detects any problems, the utility displays information about the problems and prompts you for your next action. Log files are placed in the /var/cluster/logs/install/sccheck/ directory.

    If the sccheck utility quits with an error message because a version of Sun Explorer software earlier than 3.5.1 is installed, remove the existing SUNWexplo package. Install the SUNWexplo package that is supplied on the Sun Cluster 3.1 10/03 CD-ROM. Then restart the scinstall utility.

    The scinstall utility continues installation of all cluster nodes and reboots the cluster. Sun Cluster installation output is logged in the /var/cluster/logs/install/scinstall.log.N file.

  22. Set up the name-service look-up order.

    Go to How to Configure the Name-Service Switch.

Example – Installing Sun Cluster Software on All Nodes (Custom)

The following example shows the scinstall progress messages that are logged as scinstall completes Custom installation tasks on a two-node cluster. The cluster node names are phys-schost-1 and phys-schost-2. The specified adapter names are qfe2 and hme2. The Sun Cluster software was already installed by using the Web Start program.


  Installation and Configuration

    Log file - /var/cluster/logs/install/scinstall.log.834

    Testing for "/globaldevices" on "phys-schost-1" ... done
    Testing for "/globaldevices" on "phys-schost-2" ... done

    Checking installation status ... done

    The Sun Cluster software is already installed on "phys-schost-1".
    The Sun Cluster software is already installed on "phys-schost-2".

    Starting discovery of the cluster transport configuration.

    Probing ..

    The following connections were discovered:

        phys-schost-1:qfe2  switch1  phys-schost-2:qfe2
        phys-schost-1:hme2  switch2  phys-schost-2:hme2

    Completed discovery of the cluster transport configuration.

    Started sccheck on "phys-schost-1".
    Started sccheck on "phys-schost-2".

    sccheck completed with no errors or warnings for "phys-schost-1".
    sccheck completed with no errors or warnings for "phys-schost-2".

    Configuring "phys-schost-2" ... done
    Rebooting "phys-schost-2" ... done

    Configuring "phys-schost-1" ... done
    Rebooting "phys-schost-1" ... 

Log file - /var/cluster/logs/install/scinstall.log.834

Rebooting ... 

How to Install Sun Cluster Software on Additional Cluster Nodes (scinstall)

Perform this procedure to add new nodes to an existing cluster.

  1. Ensure that the Solaris operating environment is installed to support Sun Cluster software.

    If Solaris software is already installed on the node, you must ensure that the Solaris installation meets the requirements for Sun Cluster software and any other software that you intend to install on the cluster. See How to Install Solaris Software for more information about installing Solaris software to meet Sun Cluster software requirements.

  2. Prepare the cluster to accept a new node.

    Follow instructions in the procedure “How to Add a Cluster Node to the Authorized Node List” in “Adding and Removing a Cluster Node” in Sun Cluster 3.1 10/03 System Administration Guide.
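
    For example, from an active cluster member, a command similar to the following adds a node to the authorized-node list. The node name phys-schost-3 is hypothetical.

    # scconf -a -T node=phys-schost-3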

  3. Did you preinstall Sun Cluster software?

    • If yes, proceed to Step 4.

    • If no, enable remote shell (rsh(1M)) or secure shell (ssh(1)) access for superuser. This access enables the scinstall utility to install Sun Cluster software packages.

  4. Become superuser on the cluster node to install.

  5. Start the scinstall utility.

    • If you preinstalled Sun Cluster software, type the following command:


      # /usr/cluster/bin/scinstall
      

    • If you did not preinstall Sun Cluster software, insert the Sun Cluster 3.1 10/03 CD-ROM. Then type the following commands, where ver is 8 (for Solaris 8) or 9 (for Solaris 9):


      # cd /cdrom/suncluster_3_1_u1/SunCluster_3.1/Sol_ver/Tools
      # ./scinstall
      

    Follow these guidelines to use the interactive scinstall utility.

    • Interactive scinstall enables you to type ahead. Therefore, do not press Return more than once if the next menu screen does not appear immediately.

    • Unless otherwise noted, you can press Control-D to return to either the start of a series of related questions or to the Main Menu.

  6. From the Main Menu, type 1 (Install a cluster or cluster node).


      *** Main Menu ***
    
        Please select from one of the following (*) options:
    
          * 1) Install a cluster or cluster node
            2) Configure a cluster to be JumpStarted from this install server
            3) Add support for new data services to this cluster node
          * 4) Print release information for this cluster node
    
          * ?) Help with menu options
          * q) Quit
    
        Option:  1
    

  7. From the Install Menu, type 3 (Add this machine as a node in an existing cluster).


      *** Install Menu ***
    
        Please select from any one of the following options:
    
            1) Install all nodes of a new cluster
            2) Install just this machine as the first node of a new cluster
            3) Add this machine as a node in an existing cluster
    
            ?) Help with menu options
            q) Return to the Main Menu
    
        Option:  3
    …
      *** Adding a Node to an Existing Cluster ***
    …
        Do you want to continue (yes/no) [yes]?  y
    

  8. If prompted whether to continue to install Sun Cluster software packages, type yes.


     >>> Software Package Installation <<<
      
        Installation of the Sun Cluster framework software packages will
        take a few minutes to complete.
      
        Is it okay to continue (yes/no) [yes]?  y
      
    ** Installing SunCluster 3.1 **
            SUNWscr.....done
    …
    Hit ENTER to continue:

    After all packages are installed, press Return to continue to the next screen.

  9. Confirm that the scinstall utility should install patches.

    See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.


      >>> Software Patch Installation <<<
    …
        Do you want scinstall to install patches for you (yes/no) [yes]?  y
        What is the name of the patch directory? /var/cluster/patches
        Do you want scinstall to use a patch list file (yes/no) [no]?  n
  10. Specify the name of any existing cluster node to be considered the sponsoring node.


     >>> Sponsoring Node <<<
    …
        What is the name of the sponsoring node?  node1
    

  11. Specify the cluster name.


     >>> Cluster Name <<<
    …
        What is the name of the cluster you want to join?  clustername
    

  12. Specify whether to run the sccheck utility to validate the node.


     >>> Check <<<
    …
        Do you want to run sccheck (yes/no) [yes]?  y
    

    If you choose to run sccheck and the utility detects any problems, the utility displays information about the problems and prompts you for your next action. Log files are placed in the /var/cluster/logs/install/sccheck/ directory.

    If the sccheck utility quits with an error message because a version of Sun Explorer software earlier than 3.5.1 is installed, remove the existing SUNWexplo package. Install the SUNWexplo package that is supplied on the Sun Cluster 3.1 10/03 CD-ROM. Then restart the scinstall utility.

    When the node passes sccheck validation checks, proceed to the next step.

  13. Specify whether to use autodiscovery to configure the cluster transport.

    If your configuration does not use Ethernet adapters, answer no and skip to Step 15.


     >>> Autodiscovery of Cluster Transport <<<
      
        If you are using ethernet adapters as your cluster transport
        adapters, autodiscovery is the best method for configuring the
        cluster transport.
      
        Do you want to use autodiscovery (yes/no) [yes]?
    …
        The following connections were discovered:
      
            node1:adapter1 switch1 node2:adapter1 
            node1:adapter2 switch2 node2:adapter2 
      
        Is it okay to add these connections to the configuration (yes/no) [yes]?

  14. Did you choose to use autodiscovery in Step 13?

    • If yes, skip to Step 22.

    • If no, proceed to Step 15.

  15. Specify whether this is a two-node cluster.


     >>> Point-to-Point Cables <<<
    …
        Is this a two-node cluster (yes/no) [yes]? 

  16. Did you specify that this cluster is a two-node cluster?

    • If yes, specify whether to use transport junctions.


          Does this two-node cluster use transport junctions (yes/no) [yes]? 

    • If no, press Return to continue. You must use transport junctions if a cluster contains three or more nodes.


          Since this is not a two-node cluster, you will be asked to configure
          two transport junctions.
        
      Hit ENTER to continue: 

  17. Did you specify that the cluster is to use transport junctions?

    • If no, proceed to Step 18.

    • If yes, specify the transport junctions.


       >>> Cluster Transport Junctions <<<
      …
          What is the name of the first junction in the cluster [switch1]? 
          What is the name of the second junction in the cluster [switch2]? 

  18. Specify the first cluster-interconnect transport adapter.

    Type help to list all transport adapters that are available to the node.


     >>> Cluster Transport Adapters and Cables <<<
    …
        What is the name of the first cluster transport adapter (help)?  adapter
    

  19. Specify what the first transport adapter connects to.

    • If the transport adapter uses a transport junction, specify the name of the junction and its port.


         Name of the junction to which "adapter" is connected [switch1]? 
      …
          Use the default port name for the "adapter" connection (yes/no) [yes]? 

    • If the transport adapter does not use a transport junction, specify the name of the other transport adapter that the first transport adapter connects to.


          Name of adapter on "node1" to which "adapter" is connected?  adapter
      
  20. Specify the second cluster-interconnect transport adapter.

    Type help to list all transport adapters that are available to the node.


        What is the name of the second cluster transport adapter (help)? adapter
    

  21. Specify what the second transport adapter connects to.

    • If the transport adapter uses a transport junction, specify the name of the junction and its port.


          Name of the junction to which "adapter" is connected [switch2]? 
          Use the default port name for the "adapter" connection (yes/no) [yes]? 
       
      Hit ENTER to continue: 

    • If the transport adapter does not use a transport junction, specify the name of the other transport adapter that the second transport adapter connects to.


          Name of adapter on "node1" to which "adapter" is connected?  adapter
      

  22. Specify the global-devices file-system name.


     >>> Global Devices File System <<<
    …
        The default is to use /globaldevices.
     
        Is it okay to use this default (yes/no) [yes]? 

  23. Specify automatic reboot.


     >>> Automatic Reboot <<<
    …
        Do you want scinstall to reboot for you (yes/no) [yes]? y
    

  24. Accept or decline the generated scinstall command.

    The scinstall command that is generated from your input is displayed for confirmation.


     >>> Confirmation <<<
     
        Your responses indicate the following options to scinstall:
     
          scinstall -i  \
    …
        Are these the options you want to use (yes/no) [yes]? 
        Do you want to continue with the install (yes/no) [yes]? 

    • If you accept the command and continue the installation, scinstall processing continues.

      Sun Cluster installation output is logged in a /var/cluster/logs/install/scinstall.log.N file.


      Note –

      Unless you have installed your own /etc/inet/ntp.conf file, the scinstall command installs a default ntp.conf file for you. The default file is shipped with references to the maximum possible number of nodes. Therefore, the xntpd(1M) daemon might issue error messages regarding some of these references at boot time. You can safely ignore these messages. See How to Configure Network Time Protocol (NTP) for information on how to suppress these messages under otherwise normal cluster conditions.


    • If you decline the command, the scinstall utility asks if you want to deinstall the Sun Cluster software.


          Do you want to de-install the Sun Cluster software (yes/no) [no]? 

      After scinstall returns you to the Main Menu, you can rerun Install Menu option 3 and provide different answers. Your previous session answers display as the defaults.

  25. Repeat this procedure on any additional node to install until all nodes are fully configured.

  26. From an active cluster member, prevent any nodes from joining the cluster.


    # /usr/cluster/bin/scconf -a -T node=.
    
    -a        Add
    -T        Specifies authentication options
    node=.    Specifies the node name of dot (.) to add to the authentication list, to prevent any other node from adding itself to the cluster

    Alternately, you can use the scsetup(1M) utility. See “How to Add a Cluster Node to the Authorized Node List” in “Adding and Removing a Cluster Node” in Sun Cluster 3.1 10/03 System Administration Guide for procedures.

  27. Set up the name-service look-up order.

    Go to How to Configure the Name-Service Switch.

Example – Installing Sun Cluster Software on an Additional Node

The following example shows the scinstall command executed and the messages that the utility logs as scinstall completes installation tasks on the node phys-schost-3. The sponsoring node is phys-schost-1.


 >>> Confirmation <<<
  
    Your responses indicate the following options to scinstall:
  
      scinstall -ik \
           -C sc-cluster \
           -N phys-schost-1 \
           -A trtype=dlpi,name=hme1 -A trtype=dlpi,name=hme3 \
           -m endpoint=:hme1,endpoint=switch1 \
           -m endpoint=:hme3,endpoint=switch2
  
    Are these the options you want to use (yes/no) [yes]?
  
    Do you want to continue with the install (yes/no) [yes]?
  
Checking device to use for global devices file system ... done
  
Adding node "phys-schost-3" to the cluster configuration ... done
Adding adapter "hme1" to the cluster configuration ... done
Adding adapter "hme3" to the cluster configuration ... done
Adding cable to the cluster configuration ... done
Adding cable to the cluster configuration ... done
  
Copying the config from "phys-schost-1" ... done
Setting the node ID for "phys-schost-3" ... done (id=3)
 
Verifying the major number for the "did" driver with "phys-schost-1" ...done
  
Checking for global devices global file system ... done
Updating vfstab ... done
  
Verifying that NTP is configured ... done
Installing a default NTP configuration ... done
Please complete the NTP configuration after scinstall has finished.
  
Verifying that "cluster" is set for "hosts" in nsswitch.conf ... done
Adding the "cluster" switch to "hosts" in nsswitch.conf ... done
  
Verifying that "cluster" is set for "netmasks" in nsswitch.conf ... done
Adding the "cluster" switch to "netmasks" in nsswitch.conf ... done
  
Verifying that power management is NOT configured ... done
Unconfiguring power management ... done
/etc/power.conf has been renamed to /etc/power.conf.61501001054
Power management is incompatible with the HA goals of the cluster.
Please do not attempt to re-configure power management.
  
Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ...done
Ensure network routing is disabled ... done
Network routing has been disabled on this node by creating /etc/notrouter.
Having a cluster node act as a router is not supported by Sun Cluster.
Please do not re-enable network routing.
  
Log file - /var/cluster/logs/install/scinstall.log.9853
  
  
Rebooting ...

Using SunPlex Manager to Install Sun Cluster Software


Note –

To add a new node to an existing cluster, do not use SunPlex Manager. Instead, use the procedure How to Install Sun Cluster Software on Additional Cluster Nodes (scinstall).


This section describes how to install SunPlex Manager software and how to use SunPlex Manager to install Sun Cluster software and establish new cluster nodes. You can also use SunPlex Manager to install or configure one or more of the following additional software products:

  • Solstice DiskSuite/Solaris Volume Manager software

  • Sun Cluster HA for NFS data service

  • Sun Cluster HA for Apache scalable data service

The following table lists SunPlex Manager installation requirements for these additional software products.

Table 2–2 Requirements to Use SunPlex Manager to Install Software

Software Package 

Installation Requirements 

Solstice DiskSuite/Solaris Volume Manager 

A partition that uses /sds as the mount-point name. The partition must be at least 20 Mbytes in size.

Sun Cluster HA for NFS data service 

  • At least two shared disks, of the same size, that are connected to the same set of nodes.

  • Solstice DiskSuite software installed, or Solaris Volume Manager software configured, by SunPlex Manager.

  • A logical hostname for use by Sun Cluster HA for NFS. The logical hostname must have a valid IP address that is accessible by all cluster nodes. The IP address must be on the same subnet as the base hostnames of the cluster nodes.

  • A test IP address for each node of the cluster. SunPlex Manager uses these test IP addresses to create Internet Protocol (IP) Network Multipathing groups for use by Sun Cluster HA for NFS.

Sun Cluster HA for Apache scalable data service 

  • At least two shared disks of the same size that are connected to the same set of nodes.

  • Solstice DiskSuite software installed, or Solaris Volume Manager software configured, by SunPlex Manager.

  • A shared address for use by Sun Cluster HA for Apache. The shared address must have a valid IP address that is accessible by all cluster nodes. The IP address must be on the same subnet as the base hostnames of the cluster nodes.

  • A test IP address for each node of the cluster. SunPlex Manager uses these test IP addresses to create Internet Protocol (IP) Network Multipathing groups for use by Sun Cluster HA for Apache.

The test IP addresses that you supply must meet the following requirements:

The following table lists each metaset name and cluster-file-system mount point that is created by SunPlex Manager. The number of metasets and mount points that SunPlex Manager creates depends on the number of shared disks that are connected to the node. For example, if a node has four shared disks connected, SunPlex Manager creates the mirror-1 and mirror-2 metasets. However, SunPlex Manager does not create the mirror-3 metaset, because the node does not have enough shared disks to create a third metaset.

Table 2–3 Metasets Installed by SunPlex Manager

Shared Disks 

Metaset Name 

Cluster File System Mount Point 

Purpose 

First pair of shared disks 

mirror-1

/global/mirror-1

Sun Cluster HA for NFS or Sun Cluster HA for Apache scalable data service, or both 

Second pair of shared disks 

mirror-2

/global/mirror-2

Unused 

Third pair of shared disks 

mirror-3

/global/mirror-3

Unused 


Note –

If the cluster does not meet the minimum shared-disk requirement, SunPlex Manager still installs the Solstice DiskSuite packages. However, without sufficient shared disks, SunPlex Manager cannot configure the metasets, metadevices, or volumes. SunPlex Manager then cannot configure the cluster file systems that are needed to create instances of the data service.


SunPlex Manager recognizes a limited character set to increase security. Characters that are not a part of the set are silently filtered out when HTML forms are submitted to the SunPlex Manager server. The following characters are accepted by SunPlex Manager:


()+,-./0-9:=@A-Z^_a-z{|}~
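
As an informal way to check whether a string that you plan to type into a SunPlex Manager form contains only accepted characters, you can delete every accepted character from the string by using the tr(1) command. If the command prints only an empty line, nothing in the string would be filtered out. The node name in this sketch is an example only.

# echo "phys-schost-1" | tr -d '()+,./0-9:=@A-Z^_a-z{|}~-'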

This filter can cause problems in the following two areas:

How to Install SunPlex Manager Software

This procedure describes how to install SunPlex Manager software on your cluster.


Note –

If you intend to install Sun Cluster software by using another method, you do not need to perform this procedure. The scinstall command automatically installs SunPlex Manager for you as part of the installation process.


Perform this procedure on each node of the cluster.

  1. Ensure that Solaris software and patches are installed on each node of the cluster.

    You must install Solaris software as described in How to Install Solaris Software.


    Note –

    If Solaris software is already installed on the node, you must ensure that the Solaris installation meets the requirements for Sun Cluster software. You must also ensure that the installation meets the requirements for any other software that you intend to install on the cluster.


  2. Become superuser on a cluster node.

  3. Are Apache software packages installed on the node?

    • If yes, proceed to Step 4.

    • If no, install Apache software packages.

    1. Insert the Solaris 8 Software 2 of 2 CD-ROM into the CD-ROM drive of the node.

      If the Volume Management daemon vold(1M) is running and is configured to manage CD-ROM devices, the daemon automatically mounts the CD-ROM.

    2. Change to the /cdrom/sol_8_sparc/Solaris_8/Product directory.


      # cd /cdrom/sol_8_sparc/Solaris_8/Product
      

      For Solaris 9, change to the /cdrom/cdrom0/Solaris_9/Product directory.


      # cd /cdrom/cdrom0/Solaris_9/Product
      

    3. Install the Apache software packages in the following order.


      # pkgadd -d . SUNWapchr SUNWapchu SUNWapchd
      

    4. Install any Apache software patches.

      See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.

  4. Install the SunPlex Manager software packages.

    1. Insert the Sun Cluster 3.1 10/03 CD-ROM into the CD-ROM drive of the node.

      If the Volume Management daemon vold(1M) is running and configured to manage CD-ROM devices, it automatically mounts the CD-ROM on the /cdrom/suncluster_3_1_u1 directory.

    2. Change to the /cdrom/suncluster_3_1_u1/SunCluster_3.1/Sol_ver/Packages directory, where ver is 8 (for Solaris 8) or 9 (for Solaris 9).

      The following example uses the path to the Solaris 8 version of Sun Cluster software.


      # cd /cdrom/suncluster_3_1_u1/SunCluster_3.1/Sol_8/Packages
      

    3. Install the SunPlex Manager software packages. Answer yes for all prompts.


      # pkgadd -d . SUNWscva SUNWscvr SUNWscvw
      

  5. Repeat Step 2 through Step 4 on each node of the cluster.

  6. Is the root password the same on every node of the cluster?

    • If yes, proceed to Step 7.

    • If no, set the root password to the same value on each node of the cluster. If necessary, also use the chkey(1) command to update the RPC key pair.


      # passwd
      Enter new password
      # chkey -p
      

    The root password must be the same on all nodes in the cluster if you intend to use the root password to access SunPlex Manager.

  7. Use SunPlex Manager to install Sun Cluster software.

    Go to How to Install Sun Cluster Software (SunPlex Manager).

How to Install Sun Cluster Software (SunPlex Manager)

Note –

To add a new node to an existing cluster, do not use SunPlex Manager. Instead, go to How to Install Sun Cluster Software on Additional Cluster Nodes (scinstall).


Perform this procedure to use SunPlex Manager to install Sun Cluster software and patches on all nodes in the cluster in a single operation. In addition, you can use this procedure to install Solstice DiskSuite software and patches (Solaris 8) or to configure Solaris Volume Manager mirrored disksets (Solaris 9).

If you use SunPlex Manager to install Solstice DiskSuite software or to configure Solaris Volume Manager disksets, you can also install one or both of the following data services:

The installation process might take from 30 minutes to two or more hours. The actual length of time depends on the number of cluster nodes, your choice of data services to install, and the number of disks in your cluster configuration.

  1. Ensure that the cluster configuration meets the requirements to use SunPlex Manager to install software.

    See Using SunPlex Manager to Install Sun Cluster Software for installation requirements and restrictions.

  2. Do you intend to install Sun Cluster HA for NFS or Sun Cluster HA for Apache?

  3. Ensure that SunPlex Manager software is installed on each node of the cluster.

    See the installation procedures in How to Install SunPlex Manager Software.

  4. Have available the following completed configuration planning worksheets:

    See Chapter 1, Planning the Sun Cluster Configuration and the Sun Cluster 3.1 Data Service Planning and Administration Guide for planning guidelines.

  5. Prepare file-system paths to a CD-ROM image of each software product that you intend to install.

    1. Provide each CD-ROM image in a location that is available to each node.

      The CD-ROM images must be accessible to all nodes of the cluster from the same file-system path. These paths can be one or more of the following locations:

      • CD-ROM drives that are exported to the network from machines outside the cluster.

      • Exported file systems on machines outside the cluster (see the example share command after this step).

      • CD-ROM images that are copied to local file systems on each node of the cluster. The local file system must use the same name on each node.

    2. Record the path to each CD-ROM image.

      You specify this information in Step 17.
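
      For example, to export a hypothetical directory of CD-ROM images read-only over NFS from a machine outside the cluster, you might run the following command on that machine. Add an equivalent entry to the /etc/dfs/dfstab file on that machine to make the share persist across reboots.

      # share -F nfs -o ro -d "cluster install images" /export/cdrom-images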

  6. Do you intend to use the Remote Shared Memory Application Programming Interface (RSMAPI) or use SCI-PCI adapters for the interconnect transport?

    • If no, proceed to Step 7.

    • If yes, install additional packages from the Sun Cluster 3.1 10/03 CD-ROM that are required to support the RSMAPI or SCI-PCI adapters. SunPlex Manager does not automatically install these packages. The following table lists the Sun Cluster 3.1 10/03 packages and the order in which you must install the packages.

      Feature 

      Additional Sun Cluster 3.1 10/03 Packages to Install  

      RSMAPI 

      SUNWscrif

      SCI-PCI adapters 

      SUNWsci SUNWscid SUNWscidx

      Use the following command to install these additional packages. Replace ver with 8 (for Solaris 8) or 9 (for Solaris 9).


      # cd /cdrom/suncluster_3_1_u1/SunCluster_3.1/Sol_ver/Packages
      # pkgadd -d . packages
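
      For example, to install the packages that support SCI-PCI adapters from the Solaris 8 version of the CD-ROM, in the order that is listed in the table:

      # cd /cdrom/suncluster_3_1_u1/SunCluster_3.1/Sol_8/Packages
      # pkgadd -d . SUNWsci SUNWscid SUNWscidx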
      

  7. Are there any patches that are required to support Sun Cluster or Solstice DiskSuite software?

  8. Do you intend to use SunPlex Manager to install patches?

    • If yes, proceed to Step 9.

    • If no, manually install all patches that are required to support Sun Cluster or Solstice DiskSuite software before you use SunPlex Manager, then skip to Step 10.

  9. Copy patches that are required for Sun Cluster or Solstice DiskSuite software into a single directory. This directory must reside on a file system that is available to each node.

    1. Ensure that only one version of each patch is present in this patch directory.

      If the patch directory contains multiple versions of the same patch, SunPlex Manager cannot determine the correct patch dependency order.

    2. Ensure that the patches are uncompressed.

    3. Record the path to the patch directory.

      You specify this information in Step 17.
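
      For example, with a hypothetical patch directory and patch ID, you might extract a compressed patch and then remove the archive so that only the uncompressed patch remains in the directory:

      # cd /export/scpatches
      # unzip 111111-02.zip
      # rm 111111-02.zip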

  10. Start SunPlex Manager.

    1. From the administrative console or any other machine outside the cluster, launch a browser.

    2. Disable the browser's Web proxy.

      SunPlex Manager installation functionality is incompatible with Web proxies.

    3. Ensure that disk caching and memory caching are enabled.

      The disk cache and memory cache size must be greater than 0.

    4. From the browser, connect to port 3000 on a node of the cluster.


      https://node:3000
      

      The Sun Cluster Installation screen is displayed in the browser window.


      Note –

      If SunPlex Manager displays the administration interface instead of the Sun Cluster Installation screen, Sun Cluster software is already installed on that node. Check that the name of the node in the URL is the correct name of the cluster node to install.


    5. If the browser displays a New Site Certification window, follow the onscreen instructions to accept the certificate.

    6. Log in as superuser.

  11. In the Sun Cluster Installation screen, verify that the cluster meets the listed requirements for using SunPlex Manager.

    • The Solaris End User Software Group or higher is installed.

    • Root-disk partitions include the following:

      • At least 750 Mbytes for swap

      • A 512-Mbyte slice with the mount point /globaldevices

      • A 20-Mbyte slice with the mount point /sds for volume manager use

    • File system paths to all needed CD-ROM images and patches are set up, as described in Step 5 through Step 9.

    If you meet all listed requirements, click Next to continue to the next screen.

  12. Type a name for the cluster and select the number of nodes in your cluster.

    The default number of nodes that are displayed might be higher than the number of nodes you intend to install in your cluster. If so, select the correct number of nodes you intend to install. This situation might occur if other nodes that are ready to be installed by SunPlex Manager use the same public network as the nodes that you intend to install.


    Tip –

    You can use the Back button to return to a previous screen and change your information. However, SunPlex Manager does not save the information that you supplied in the later screens. When you click Next, you must again type or select your configuration information in those screens.


  13. Type the name of each cluster node.

    SunPlex Manager supplies as defaults the names of nodes that it finds on the public network and that are ready to be installed by SunPlex Manager. If you specify a larger number of nodes to install than exist on the network, SunPlex Manager supplies additional default names. These additional names follow the naming convention phys-clustername-N.


    Note –

    SunPlex Manager might list nodes other than the ones you intend to install in your cluster. This situation occurs under the following conditions:

    • The other nodes use the same public network as the nodes that you are installing.

    • The other nodes are installed with SunPlex Manager software but are not yet installed with Sun Cluster software.

    If SunPlex Manager supplies the name of a node that you do not want in your cluster, type over the name with the correct node name.


  14. From the pull-down lists for each node, select the names of the two adapters that are used for the private interconnects.

    Refer to your completed “Cluster Interconnect Worksheet” for the appropriate adapter names for each node.

  15. Choose whether to install Solstice DiskSuite software (Solaris 8) or to configure Solaris Volume Manager mirrored disksets (Solaris 9).

    You must install Solstice DiskSuite software (Solaris 8) or configure Solaris Volume Manager mirrored disksets (Solaris 9) if you intend to install the Sun Cluster HA for NFS or Sun Cluster HA for Apache data service.


    Caution –

    When SunPlex Manager installs Solstice DiskSuite software or configures Solaris Volume Manager disksets, any data on all shared disks is lost.


  16. In Step 15, did you choose to install Solstice DiskSuite software or configure Solaris Volume Manager disksets?

    • If no, proceed to Step 17.

    • If yes, choose whether to install Sun Cluster HA for NFS, Sun Cluster HA for Apache, or both.

      • Refer to your completed “Network Resources” worksheet for the appropriate logical hostname or shared address.

      • For Sun Cluster HA for NFS, also specify the logical hostname that the data service is to use and a test IP address for each node.

      • For Sun Cluster HA for Apache, also specify the shared address that the data service is to use and a test IP address for each node.

  17. Type the path for each CD-ROM image that is needed to install the packages you specified, and optionally the path for the patch directory.

    Type each path in the appropriate path field for each software package, as shown in the following table. If you have already installed the required patches, leave the Patch Directory Path field blank.

    Software Package to Install 

    Name of CD-ROM Image Path Field 

    Solstice DiskSuite 

    Solaris CD-ROM Path 

    Sun Cluster 

    Sun Cluster 3.1 10/03 CD-ROM Path 

    Sun Cluster HA for NFS, Sun Cluster HA for Apache 

    Sun Cluster 3.1 Agents CD-ROM Path 

    Sun Cluster patches, Solstice DiskSuite patches 

    Patch Directory Path 

    Each specified path for a CD-ROM image must be the directory that contains the .cdtoc file for the CD-ROM.

  18. Choose whether to validate the cluster configuration by using the sccheck(1M) utility.

    • If the sccheck utility detects no problems, SunPlex Manager displays the Confirm Information screen. Proceed to Step 19.

    • If the sccheck utility detects problems, SunPlex Manager displays information about the problems found and prompts you for your next action. If you must quit SunPlex Manager to correct the problem, return to Step 10 to restart SunPlex Manager. Otherwise, proceed to Step 19.

    • If the sccheck utility quits with an error message that a version of Sun Explorer software earlier than 3.5.1 is installed, click Cancel to stop installation. Remove the existing SUNWexplo package and install the SUNWexplo package that is supplied on the Sun Cluster 3.1 10/03 CD-ROM. Then restart SunPlex Manager.

  19. Is the information that you supplied correct as displayed in the Confirm Information screen?

    • If yes, proceed to Step 20.

    • If no, perform the following steps to correct the configuration information.

    1. Click Back until you return to the screen with the information to change.


      Note –

      When you click Back to back up to a previous screen, any information that you typed in the subsequent screens is lost.


    2. Type the correct information and click Next.

    3. Retype or reselect the information in each screen until you return to the Confirm Information screen.

    4. Verify that the information in the Confirm Information screen is now correct.

  20. Click Begin Installation to start the installation process.


    Note –

    Do not close the browser window or change the URL during the installation process.


    1. If the browser displays a New Site Certification window, follow the onscreen instructions to accept the certificate.

    2. If the browser prompts for login information, type the appropriate superuser ID and password for the node that you connect to.

    During installation, the screen displays brief messages about the status of the cluster installation process. When installation is complete, the browser displays the cluster monitoring and administration GUI.

    SunPlex Manager installation output is logged in the /var/cluster/spm/messages file.

    Sun Cluster installation output is logged in a /var/cluster/logs/install/scinstall.log.N file.

  21. Log back into SunPlex Manager to verify quorum assignments and to modify the assignments, if necessary.

    For clusters with three or more nodes, the use of shared quorum devices is optional. SunPlex Manager might or might not have assigned quorum votes to any quorum devices, depending on whether appropriate shared disks were available. You can use SunPlex Manager to designate quorum devices and to reassign quorum votes in the cluster.
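
    You can also review the current quorum configuration from the command line by running the scstat(1M) command on any cluster node.

    # /usr/cluster/bin/scstat -q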

  22. Set up the name-service look-up order.

    Go to How to Configure the Name-Service Switch.

How to Install Solaris and Sun Cluster Software (JumpStart)

This procedure describes how to set up and use the scinstall(1M) custom JumpStart installation method. This method installs both Solaris and Sun Cluster software on all cluster nodes in a single operation and establishes the cluster. You can also use this procedure to add new nodes to an existing cluster.

  1. Ensure that the hardware setup is complete and that connections are verified before you install Solaris software.

    See the Sun Cluster 3.1 Hardware Administration Collection and your server and storage device documentation for details on how to set up the hardware.

  2. Ensure that your cluster configuration planning is complete.

    See How to Prepare for Cluster Software Installation for requirements and guidelines.

  3. Have available the following information:

    See Planning the Solaris Operating Environment and Planning the Sun Cluster Environment for planning guidelines.

  4. Do you use a naming service?

    • If no, proceed to Step 5. You set up the necessary hostname information in Step 30.

    • If yes, add the following information to any naming services that clients use to access cluster services:

      • Address-to-name mappings for all public hostnames and logical addresses

      • The IP address and hostname of the JumpStart server

      See IP Addresses for planning guidelines. See your Solaris system-administrator documentation for information about using Solaris naming services.

  5. Are you installing a new node to an existing cluster?

  6. As superuser, set up the JumpStart installation server for Solaris operating-environment installation.

    See “Preparing Custom JumpStart Installations” in Solaris 8 Advanced Installation Guide or “Preparing Custom JumpStart Installations (Tasks)” in Solaris 9 Installation Guide for instructions on how to set up a JumpStart installation server. See also the setup_install_server(1M) and add_install_client(1M) man pages.

    When you set up the installation server, ensure that the following requirements are met.

    • The installation server is on the same subnet as the cluster nodes but is not itself a cluster node.

    • The installation server installs the release of the Solaris operating environment required by the Sun Cluster software.

    • A custom JumpStart directory exists for JumpStart installation of Sun Cluster. This jumpstart-dir directory must contain a copy of the check(1M) utility and be NFS exported for reading by the JumpStart installation server.

    • Each new cluster node is configured as a custom JumpStart install client that uses the custom JumpStart directory set up for Sun Cluster installation.
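
    The referenced guides contain the complete setup procedure. As a rough sketch only, in which the paths, host names, and platform group are examples, setting up the install server and registering one cluster node as a custom JumpStart install client might look like the following:

    # cd /cdrom/sol_8_sparc/s0/Solaris_8/Tools
    # ./setup_install_server /export/install/sol8
    # cd /export/install/sol8/Solaris_8/Tools
    # ./add_install_client -c installserver:/export/jumpstart phys-schost-1 sun4u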

  7. Create a directory on the JumpStart installation server to hold your copy of the Sun Cluster 3.1 10/03 CD-ROM. Skip this step if a directory already exists.

    In the following example, the /export/suncluster directory is created for this purpose.


    # mkdir -m 755 /export/suncluster
    

  8. Copy the Sun Cluster CD-ROM to the JumpStart installation server.

    1. Insert the Sun Cluster 3.1 10/03 CD-ROM into the CD-ROM drive on the JumpStart installation server.

      If the Volume Management daemon vold(1M) is running and configured to manage CD-ROM devices, it automatically mounts the CD-ROM on the /cdrom/suncluster_3_1_u1 directory.

    2. Change to the /cdrom/suncluster_3_1_u1/SunCluster_3.1/Sol_ver/Tools directory, where ver is 8 (for Solaris 8) or 9 (for Solaris 9).

      The following example uses the path to the Solaris 8 version of Sun Cluster software.


      # cd /cdrom/suncluster_3_1_u1/SunCluster_3.1/Sol_8/Tools
      

    3. Copy the CD-ROM to a new directory on the JumpStart installation server.

      The scinstall command creates the new installation directory when the command copies the CD-ROM files. The installation directory name /export/suncluster/sc31 is used here as an example.


      # ./scinstall -a /export/suncluster/sc31
      

    4. Eject the CD-ROM.


      # cd /
      # eject cdrom
      

    5. Ensure that the Sun Cluster 3.1 10/03 CD-ROM image on the JumpStart installation server is NFS exported so that the cluster nodes can read the image during installation.

      See “Solaris NFS Environment” in System Administration Guide, Volume 3 or “Managing Network File Systems (Overview)” in System Administration Guide: Resource Management and Network Services for more information about automatic file sharing. See also the share(1M) and dfstab(4) man pages.

  9. From the JumpStart installation server, start the scinstall(1M) utility.

    The path /export/suncluster/sc31 is used here as an example of the installation directory that you created.


    # cd /export/suncluster/sc31/SunCluster_3.1/Sol_ver/Tools
    # ./scinstall
    


    Note –

    In the CD-ROM path, replace ver with 8 (for Solaris 8) or 9 (for Solaris 9).


    Follow these guidelines to use the interactive scinstall utility.

    • Interactive scinstall enables you to type ahead. Therefore, do not press Return more than once if the next menu screen does not appear immediately.

    • Unless otherwise noted, you can press Control-D to return to either the start of a series of related questions or to the Main Menu.

  10. From the Main Menu, type 2 (Configure a cluster to be JumpStarted from this installation server).

    This option is used to configure custom JumpStart finish scripts. JumpStart uses these finish scripts to install the Sun Cluster software.


     *** Main Menu ***
     
        Please select from one of the following (*) options:
     
          * 1) Install a cluster or cluster node
          * 2) Configure a cluster to be JumpStarted from this install server
            3) Add support for new data services to this cluster node
          * 4) Print release information for this cluster node
     
          * ?) Help with menu options
          * q) Quit
     
        Option:  2
     
     *** Custom JumpStart ***
    …
        Do you want to continue (yes/no) [yes]? 


    Note –

    If option 2 does not have an asterisk in front, the option is disabled. This condition indicates that JumpStart setup is not complete or the setup has an error. Exit the scinstall utility, repeat Step 6 through Step 8 to correct JumpStart setup, then restart the scinstall utility.


  11. Specify the JumpStart directory name.

    The JumpStart directory name /export/suncluster/sc31 is used here as an example.


     >>> Custom JumpStart Directory <<<
    …
        What is your JumpStart directory name?  /export/suncluster/sc31
    

  12. Specify the name of the cluster.


     >>> Cluster Name <<<
    …
        What is the name of the cluster you want to establish?  clustername
    

  13. Specify the names of all cluster nodes.


     >>> Cluster Nodes <<<
    …
        Please list the names of all cluster nodes planned for the initial
        cluster configuration. You must enter at least two nodes. List one
        node name per line. When finished, type Control-D:
     
        Node name:  node1
        Node name:  node2
        Node name (Ctrl-D to finish): <Control-D>
     
        This is the complete list of nodes:
    … 
        Is it correct (yes/no) [yes]? 

  14. Specify whether to use Data Encryption Standard (DES) authentication.

    DES authentication provides an additional level of security at installation time. DES authentication enables the sponsoring node to authenticate nodes that attempt to contact the sponsoring node to update the cluster configuration.

    If you choose to use DES authentication for additional security, you must configure all necessary encryption keys before any node can join the cluster. See the keyserv(1M) and publickey(4) man pages for details.


     >>> Authenticating Requests to Add Nodes <<<
    …
    Do you need to use DES authentication (yes/no) [no]? 

  15. Specify the private network address and netmask.


     >>> Network Address for the Cluster Transport <<<
    …
        Is it okay to accept the default network address (yes/no) [yes]? 
        Is it okay to accept the default netmask (yes/no) [yes]? 


    Note –

    You cannot change the private network address after the cluster is successfully formed.


  16. Specify whether the cluster uses transport junctions.

    • If this is a two-node cluster, specify whether you intend to use transport junctions.


       >>> Point-to-Point Cables <<<
       …
          Does this two-node cluster use transport junctions (yes/no) [yes]? 


      Tip –

      You can specify that the cluster uses transport junctions, regardless of whether the nodes are directly connected to each other. If you specify that the cluster uses transport junctions, you can more easily add new nodes to the cluster in the future.


    • If this cluster has three or more nodes, you must use transport junctions. Press Return to continue to the next screen.


       >>> Point-to-Point Cables <<<
       …
          Since this is not a two-node cluster, you will be asked to configure
          two transport junctions.
          
      Hit ENTER to continue: 

  17. Does this cluster use transport junctions?

    • If no, proceed to Step 18.

    • If yes, specify names for the transport junctions. You can use the default names switchN or create your own names.


       >>> Cluster Transport Junctions <<<
       …
          What is the name of the first junction in the cluster [switch1]? 
          What is the name of the second junction in the cluster [switch2]? 

  18. Specify the first cluster-interconnect transport adapter of the first node.


     >>> Cluster Transport Adapters and Cables <<<
    …
     For node "node1",
        What is the name of the first cluster transport adapter?  adapter
    

  19. Specify the connection endpoint of the first adapter.

    • If the cluster does not use transport junctions, specify the name of the adapter on the second node to which this adapter connects.


      …
          Name of adapter on "node2" to which "adapter" is connected? adapter
      

    • If the cluster uses transport junctions, specify the name of the first transport junction and its port.


      …
       For node "node1",
          Name of the junction to which "adapter" is connected? switch
      …
       For node "node1",
         Use the default port name for the "adapter" connection (yes/no) [yes]? 


      Note –

      If your configuration uses SCI-PCI adapters, do not accept the default when you are prompted for the adapter connection (the port name). Instead, provide the port name (0, 1, 2, or 3) that is on the SCI Dolphin switch itself, to which the node is physically cabled. The following example shows the prompts and responses for declining the default port name and specifying the switch port name 0.


      …
          Use the default port name for the "adapter" connection (yes/no) [yes]? n
          What is the name of the port you want to use?  0
      


  20. Specify the second cluster-interconnect transport adapter of the first node.


    …
     For node "node1",
        What is the name of the second cluster transport adapter? adapter
    

  21. Specify the connection endpoint of the second adapter.

    • If the cluster does not use transport junctions, specify the name of the adapter on the second node to which this adapter connects.


      …
          Name of adapter on "node2" to which "adapter" is connected? adapter
      

    • If the cluster uses transport junctions, specify the name of the second transport junction and its port.


      …
       For node "node1",
          Name of the junction to which "adapter" is connected? switch
      …
       For node "node1",
        Use the default port name for the "adapter" connection (yes/no) [yes]? 


      Note –

      If your configuration uses SCI-PCI adapters, do not accept the default when you are prompted for the adapter connection (the port name). Instead, provide the port name (0, 1, 2, or 3) that is on the SCI Dolphin switch itself, to which the node is physically cabled. The following example shows the prompts and responses for declining the default port name and specifying the switch port name 0.


      …
          Use the default port name for the "adapter" connection (yes/no) [yes]? n
          What is the name of the port you want to use?  0
      


  22. Does this cluster use transport junctions?

  23. Specify the global-devices file-system name for each cluster node.


     
    >>> Global Devices File System <<<
    …
        The default is to use /globaldevices.
     
     For node "node1",
        Is it okay to use this default (yes/no) [yes]? 
     
     For node "node2",
        Is it okay to use this default (yes/no) [yes]? 

  24. Confirm that the scinstall utility should install patches.

    See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.


    Note –

    If you specify a patch directory for the scinstall command, then patches in Solaris patch directories, as specified in Step 29, are not installed.



      >>> Software Patch Installation <<<
    …
        Do you want scinstall to install patches for you (yes/no) [yes]?  y
        What is the name of the patch directory? /export/suncluster/sc31/patches
    Do you want scinstall to use a patch list file (yes/no) [no]?  n

  25. Accept or decline the generated scinstall commands.

    The scinstall command that is generated from your input is displayed for confirmation.


     >>> Confirmation <<<
     
        Your responses indicate the following options to scinstall:
    -----------------------------------------
     For node "node1",
          scinstall -c jumpstart-dir -h node1  \
    …
        Are these the options you want to use (yes/no) [yes]? 
    -----------------------------------------
     For node "node2",
          scinstall -c jumpstart-dir -h node2  \
    …
        Are these the options you want to use (yes/no) [yes]? 
    -----------------------------------------
        Do you want to continue with JumpStart set up (yes/no) [yes]? 

    If you do not accept the generated commands, the scinstall utility returns you to the Main Menu. You can then rerun menu option 2 and provide different answers. Your previous answers display as the defaults.

  26. If necessary, make adjustments to the default class file, or profile, created by scinstall.

    The scinstall command creates the following autoscinstall.class default class file in the jumpstart-dir/autoscinstall.d/3.1 directory.


    install_type    initial_install
    system_type     standalone
    partitioning    explicit
    filesys         rootdisk.s0 free /
    filesys         rootdisk.s1 750  swap
    filesys         rootdisk.s3 512  /globaldevices
    filesys         rootdisk.s7 20
    cluster         SUNWCuser        add
    package         SUNWman          add

    The default class file installs the End User System Support software group (SUNWCuser) of Solaris software. If your configuration has additional Solaris software requirements, change the class file accordingly. See Solaris Software Group Considerations for more information.

    You can change the profile in one of the following ways:

    • Edit the autoscinstall.class file directly. These changes are applied to all nodes in all clusters that use this custom JumpStart directory.

    • Update the rules file to point to other profiles, then run the check utility to validate the rules file.

    If the Solaris operating-environment install profile meets minimum Sun Cluster file-system allocation requirements, no restrictions are placed on other changes to the install profile. See System Disk Partitions for partitioning guidelines and requirements to support Sun Cluster 3.1 software. For more information about JumpStart profiles, see “Preparing Custom JumpStart Installations” in Solaris 8 Advanced Installation Guide or “Preparing Custom JumpStart Installations (Tasks)” in Solaris 9 Installation Guide.
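
    As an illustration of the second approach, a rules entry that points one node at its own profile, validated with the check utility, might look like the following. The profile file name is hypothetical.

    # cd /export/suncluster/sc31
    # cat rules
    hostname phys-schost-1  -  profiles/phys-schost-1.class  -
    # ./check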

  27. Do you intend to use the Remote Shared Memory Application Programming Interface (RSMAPI) or use SCI-PCI adapters for the interconnect transport?

    • If no, proceed to Step 28.

    • If yes and you install the End User System Support software group, add the following entries to the default class file as described in Step 26.


      package         SUNWrsm         add
      package         SUNWrsmx        add
      package         SUNWrsmo        add
      package         SUNWrsmox       add

      In addition, you must create or modify a postinstallation finish script at Step 32 to install the Sun Cluster packages to support the RSMAPI and SCI-PCI adapters.

      If you install a higher software group than End User System Support, the RSMAPI software packages are automatically installed with the Solaris software. You then do not need to add the packages to the class file.

  28. Do you intend to use SunPlex Manager?

    • If no, proceed to Step 29.

    • If yes and you install the End User System Support software group, add the following entries to the default class file as described in Step 26.


      package         SUNWapchr       add
      package         SUNWapchu       add

      These Apache software packages are required for SunPlex Manager. However, if you install a higher software group than End User System Support, the Apache software packages are installed with the Solaris software. You then do not need to add the packages to the class file.

  29. Set up Solaris patch directories.


    Note –

    If you specify a patch directory for the scinstall command in Step 24, patches in Solaris patch directories are not installed.


    1. Create jumpstart-dir/autoscinstall.d/nodes/node/patches directories on the JumpStart installation server.

      Create one directory for each node in the cluster, where node is the name of a cluster node. Alternately, use this naming convention to create symbolic links to a shared patch directory, as shown in the example after this step.


      # mkdir jumpstart-dir/autoscinstall.d/nodes/node/patches
      

    2. Place copies of any Solaris patches into each of these directories.

      Also place copies of any hardware-related patches that must be installed after Solaris software is installed into each of these directories.
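
      For example, to point each node's patches directory at a single shared patch directory by using symbolic links, you might run commands such as the following. The node and directory names are examples only.

      # mkdir -p jumpstart-dir/autoscinstall.d/nodes/phys-schost-1
      # mkdir jumpstart-dir/autoscinstall.d/nodes/patches.common
      # ln -s ../patches.common jumpstart-dir/autoscinstall.d/nodes/phys-schost-1/patches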

  30. Set up files to contain the necessary hostname information locally on each node.

    1. On the JumpStart installation server, create files that are named jumpstart-dir/autoscinstall.d/nodes/node/archive/etc/inet/hosts.

      Create one file for each node, where node is the name of a cluster node. Alternately, use this naming convention to create symbolic links to a shared hosts file.

    2. Add the following entries into each file.

      • IP address and hostname of the NFS server that holds a copy of the Sun Cluster CD-ROM image. The NFS server could be the JumpStart installation server or another machine.

      • IP address and hostname of each node in the cluster.
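
      For example, the hosts file for one node might contain entries such as the following. All addresses and host names in this sketch are hypothetical.

      192.168.1.100   nfs-installserver
      192.168.1.111   phys-schost-1
      192.168.1.112   phys-schost-2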

  31. Do you intend to use the Remote Shared Memory Application Programming Interface (RSMAPI) or use SCI-PCI adapters for the interconnect transport?

    • If no, proceed to Step 32 if you intend to add your own postinstallation finish script. Otherwise, skip to Step 33.

    • If yes, follow instructions in Step 32 to set up a postinstallation finish script to install the following additional packages. Install the appropriate packages from the /cdrom/suncluster_3_1_u1/SunCluster_3.1/Sol_ver/Packages directory of the Sun Cluster 3.1 10/03 CD-ROM in the order that is given in the following table.


      Note –

      In the CD-ROM path, replace ver with 8 (for Solaris 8) or 9 (for Solaris 9).


      Feature 

      Additional Sun Cluster 3.1 10/03 Packages to Install  

      RSMAPI 

      SUNWscrif

      SCI-PCI adapters 

      SUNWsci SUNWscid SUNWscidx

  32. (Optional) Add your own postinstallation finish script.


    Note –

    If you intend to use the Remote Shared Memory Application Programming Interface (RSMAPI) or use SCI-PCI adapters for the interconnect transport, you must modify the finish script to install the Sun Cluster SUNWscrif software package. This package is not automatically installed by scinstall.


    You can add your own finish script, which is run after the standard finish script installed by the scinstall command. See “Preparing Custom JumpStart Installations” in Solaris 8 Advanced Installation Guide or “Preparing Custom JumpStart Installations (Tasks)” in Solaris 9 Installation Guide for information about creating a JumpStart finish script.

    1. Name your finish script finish.

    2. Copy your finish script to the jumpstart-dir/autoscinstall.d/nodes/node directory, one directory for each node in the cluster.

      Alternately, use this naming convention to create symbolic links to a shared finish script.
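
    The following is a minimal sketch of such a finish script for a configuration that uses the RSMAPI. The NFS server name and paths are examples only. During JumpStart installation, the newly installed system is mounted at /a.

    #!/bin/sh
    # finish - example postinstallation finish script (sketch only)
    # Mount the NFS-exported Sun Cluster CD-ROM image, then add the
    # RSMAPI support package to the newly installed system at /a.
    mount -F nfs nfs-installserver:/export/suncluster/sc31 /mnt
    pkgadd -R /a -d /mnt/SunCluster_3.1/Sol_8/Packages SUNWscrif
    umount /mnt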

  33. If you are using a cluster administrative console, display a console screen for each node in the cluster.

    • If Cluster Control Panel (CCP) software is installed and configured on your administrative console, you can use the cconsole(1M) utility to display the individual console screens. The cconsole utility also opens a master window from which you can send your input to all individual console windows at the same time. Use the following command to start cconsole:


      # /opt/SUNWcluster/bin/cconsole clustername &
      

    • If you do not use the cconsole utility, connect to the consoles of each node individually.

  34. From the ok PROM prompt on the console of each node, type the boot net - install command to begin the network JumpStart installation of each node.


    ok boot net - install
    


    Note –

    Surround the dash (-) in the command with a space on each side.


    Sun Cluster installation output is logged in a /var/cluster/logs/install/scinstall.log.N file.


    Note –

    Unless you have installed your own /etc/inet/ntp.conf file, the scinstall command installs a default ntp.conf file for you. The default file is shipped with references to the maximum number of nodes. Therefore, the xntpd(1M) daemon might issue error messages regarding some of these references at boot time. You can safely ignore these messages. See How to Configure Network Time Protocol (NTP) for information on how to suppress these messages under otherwise normal cluster conditions.


    When the installation is successfully completed, each node is fully installed as a new cluster node.

  35. Are you installing a new node to an existing cluster?

    • If no, proceed to Step 36.

    • If yes, create mount points on the new node for all existing cluster file systems.

    1. From another cluster node that is active, display the names of all cluster file systems.


      % mount | grep global | egrep -v node@ | awk '{print $1}'
      

    2. On the node that you added to the cluster, create a mount point for each cluster file system in the cluster.


      % mkdir -p mountpoint
      

      For example, if a file-system name that is returned by the mount command is /global/dg-schost-1, run mkdir -p /global/dg-schost-1 on the node that is being added to the cluster.


      Note –

      The mount points become active after you reboot the cluster in Step 37.


    3. Is VERITAS Volume Manager (VxVM) installed on any nodes that are already in the cluster?

      • If no, proceed to Step 36.

      • If yes, ensure that the same vxio number is used on the VxVM-installed nodes. Also ensure that the vxio number is available for use on each of the nodes that do not have VxVM installed.


        # grep vxio /etc/name_to_major
        vxio NNN
        

        If the vxio number is already in use on a node that does not have VxVM installed, free the number on that node. Change the /etc/name_to_major entry to use a different number.
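
        For example, if the vxio number on the VxVM-installed nodes is 210 (a hypothetical value), the following command on a node without VxVM prints nothing when that number is free:

        # grep ' 210$' /etc/name_to_major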

  36. Do you intend to use dynamic reconfiguration on Sun Enterprise 10000 servers?

    • If no, proceed to Step 37.

    • If yes, on each node add the following entry to the /etc/system file.


      set kernel_cage_enable=1

      This entry becomes effective after the next system reboot. See the Sun Cluster 3.1 10/03 System Administration Guide for procedures to perform dynamic reconfiguration tasks in a Sun Cluster configuration. See your server documentation for more information about dynamic reconfiguration.

  37. Did you add a new node to an existing cluster or install Sun Cluster software patches that require you to reboot the entire cluster, or both?

    • If no, reboot the individual node if any patches that you installed require a node reboot. Also reboot if any other changes that you made require a reboot to become active, then proceed to Step 38.

    • If yes, perform a reconfiguration reboot of the cluster as instructed in the following steps.

    1. From one node, shut down the cluster.


      # scshutdown
      


      Note –

      Do not reboot the first-installed node of the cluster until after the cluster is shut down.


    2. Reboot each node in the cluster.


      ok boot
      

    Until cluster installation mode is disabled, only the first-installed node, which established the cluster, has a quorum vote. In an established cluster that is still in installation mode, if the cluster is not shut down before the first-installed node is rebooted, the remaining cluster nodes cannot obtain quorum. The entire cluster then shuts down. Cluster nodes remain in installation mode until the first time you run the scsetup(1M) command, during the procedure How to Perform Postinstallation Setup.

  38. Set up the name-service look-up order.

    Go to How to Configure the Name-Service Switch.

How to Install Sun Cluster Software on a Single-Node Cluster

Perform this task to install Sun Cluster software and establish the cluster on a single node by using the scinstall command. See the scinstall(1M) man page for details.


Note –

You cannot use SunPlex Manager or the interactive form of the scinstall utility to install Sun Cluster software on a single-node cluster.


The scinstall -iFo command establishes the following defaults during installation.

Some steps that are required for multinode cluster installations are not necessary for single-node cluster installations. When you install a single-node cluster, you do not need to perform the following steps:


Tip –

If you anticipate eventually adding a second node to your cluster, you can configure the transport interconnect during initial cluster installation. The transport interconnect is then available for later use. See the scinstall(1M) man page for details.


  1. Ensure that the Solaris operating environment is installed to support Sun Cluster software.

    If Solaris software is already installed on the node, you must ensure that the Solaris installation meets the requirements for Sun Cluster software and any other software that you intend to install on the cluster. See How to Install Solaris Software for more information about installing Solaris software to meet Sun Cluster software requirements.

  2. Become superuser on the cluster node to install.

  3. Insert the Sun Cluster 3.1 10/03 CD-ROM into the CD-ROM drive of the node to install and configure.

    If the Volume Management daemon vold(1M) is running and configured to manage CD-ROM devices, it automatically mounts the CD-ROM on the /cdrom/suncluster_3_1_u1 directory.

  4. Change to the /cdrom/suncluster_3_1_u1/SunCluster_3.1/Sol_ver/Tools directory, where ver is 8 (for Solaris 8) or 9 (for Solaris 9).

    The following example uses the path to the Solaris 8 version of Sun Cluster software.


    # cd /cdrom/suncluster_3_1_u1/SunCluster_3.1/Sol_8/Tools
    

  5. Install the Sun Cluster software and necessary patches by using the scinstall command.


    # ./scinstall -iFo -M patchdir=dirname
    
    -i

    Specifies the install form of the scinstall command. The scinstall command installs Sun Cluster software and initializes the node as a new cluster.

    -F

    Establishes the node as the first node in a new cluster. All -F options can be used when installing a single-node cluster.

    -o

    Specifies that only one node is being installed for a single-node cluster. The -o option is only legal when used with both the -i and the -F forms of the command. When the -o option is used, cluster installation mode is preset to the disabled state.

    -M patchdir=dirname[,patchlistfile=filename]

    Specifies the path to patch information so that the specified patches can be installed by using the scinstall command. If you do not specify a patch-list file, the scinstall command installs all the patches in the directory dirname, including tarred, jarred, and zipped patches.

    The -M option is not required with the scinstall -iFo command. The -M option is shown in this procedure because it is the most efficient method of installing patches during a single-node cluster installation. However, you can use any method that you prefer to install patches.

  6. Reboot the node.

    The reboot after Sun Cluster software installation establishes the node as a single-node cluster.

  7. Verify the installation by using the scstat command.


    # scstat -n
    

    See the scstat(1M) man page for details.

  8. Set up the name-service look-up order.

    Go to How to Configure the Name-Service Switch.


Tip –

You can expand a single-node cluster into a multinode cluster by following the appropriate procedures provided in “Adding and Removing a Cluster Node” in Sun Cluster 3.1 10/03 System Administration Guide.


Example—Installing Sun Cluster Software on a Single-Node Cluster

The following example shows how to use the scinstall and scstat commands to install and verify a single-node cluster. The example includes installation of all patches. See the scinstall(1M) and scstat(1M) man pages for details.


# scinstall -iFo -M patchdir=/var/cluster/patches 

Checking device to use for global devices file system ... done
** Installing SunCluster 3.1 framework **
...
Installing patches ... done

Initializing cluster name to "phys-schost-1" ... done
Initializing authentication options ... done

Setting the node ID for "phys-schost-1" ... done (id=1)

Checking for global devices global file system ... done
Updating vfstab ... done

Verifying that "cluster" is set for "hosts" in nsswitch.conf ... done
Adding the "cluster" switch to "hosts" in nsswitch.conf ... done

Verifying that "cluster" is set for "netmasks" in nsswitch.conf ... done
Adding the "cluster" switch to "netmasks" in nsswitch.conf ... done

Verifying that power management is NOT configured ... done

Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ... done

Ensure network routing is disabled ... done

Please reboot this machine.

# reboot

# scstat -n

-- Cluster Nodes --

                    Node name           Status
                    ---------           ------
  Cluster node:     phys-schost-1       Online

How to Configure the Name-Service Switch

Perform this task on each node in the cluster.

  1. Become superuser on the cluster node.

  2. Edit the /etc/nsswitch.conf file.

    1. Verify that cluster is the first source look-up for the hosts and netmasks database entries.

      This order is necessary for Sun Cluster software to function properly. The scinstall(1M) command adds cluster to these entries during installation.

    2. (Optional) To increase availability to data services if the naming service becomes unavailable, change the look-up order of the following entries.

      • For the hosts and netmasks database entries, follow cluster with files.

      • For Sun Cluster HA for NFS, also insert [SUCCESS=return] after cluster files and before name services.


        hosts:      cluster files [SUCCESS=return] nis

        This look-up order ensures that, if the node resolves a name locally, the node does not contact the listed name services. Instead, the node returns success immediately.

      • For all other database entries, place files first in the look-up order.

      • If the [NOTFOUND=return] criterion becomes the last item of an entry after you modify the lookup order, the criterion is no longer necessary. You can either delete the [NOTFOUND=return] criterion from the entry or leave the criterion in the entry. A [NOTFOUND=return] criterion at the end of an entry is ignored.

    The following example shows partial contents of an /etc/nsswitch.conf file. The look-up order for the hosts and netmasks database entries is first cluster, then files. The look-up order for other entries begins with files. The [NOTFOUND=return] criterion is removed from the entries.


    # vi /etc/nsswitch.conf
    …
    passwd:     files nis
    group:      files nis
    …
    hosts:      cluster files nis
    …
    netmasks:   cluster files nis
    …

    See the nsswitch.conf(4) man page for more information about nsswitch.conf file entries.

  3. Set up your root user's environment.

    Go to How to Set Up the Root Environment.

How to Set Up the Root Environment

Perform these tasks on each node in the cluster.


Note –

In a Sun Cluster configuration, user initialization files for the various shells must verify that they are run from an interactive shell before they attempt to output to the terminal. Otherwise, unexpected behavior or interference with data services might occur. See “Customizing a User's Work Environment” in System Administration Guide, Volume 1 (Solaris 8) or “Customizing a User's Work Environment” in System Administration Guide: Basic Administration (Solaris 9) for more information.


  1. Become superuser on a cluster node.

  2. Modify PATH and MANPATH entries in the .cshrc or .profile file.

    1. Set the PATH to include /usr/sbin and /usr/cluster/bin. Also include the following volume-manager-specific paths that apply to your configuration:

      Software Product 

      PATH

      VERITAS Volume Manager (VxVM) 

      /etc/vx/bin

      VxVM 3.2 GUI  

      /opt/VRTSvmsa/bin

      VxVM 3.5 GUI 

      /opt/VRTSob/bin

      VERITAS File System (VxFS) 

      /opt/VRTSvxfs/sbin, /usr/lib/fs/vxfs/bin, and /etc/fs/vxfs

    2. Set the MANPATH to include /usr/cluster/man. Also include the following volume-manager-specific paths that apply to your configuration:

      Software Product 

      MANPATH

      Solstice DiskSuite/Solaris Volume Manager 

      /usr/share/man

      VxVM 

      /opt/VRTS/man

      VxVM GUI 

      /opt/VRTSvmsa/man

      VxFS 

      /opt/VRTS/man
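
    For example, a minimal set of Bourne-shell entries in the .profile file for a configuration that uses Solstice DiskSuite/Solaris Volume Manager might look like the following. Add the volume-manager paths from the preceding tables that apply to your configuration.

    PATH=$PATH:/usr/sbin:/usr/cluster/bin
    MANPATH=/usr/cluster/man:/usr/share/man
    export PATH MANPATH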

  3. (Optional) For ease of administration, set the same root password on each node, if you have not already done so.

  4. Repeat Step 1 through Step 3 on each remaining cluster node.

  5. Install data-service software packages.

How to Install Data-Service Software Packages (Web Start)

If you install data services from the Sun Cluster 3.1 10/03 Data Services release, you can use the Web Start program to install the packages. To install data services from an earlier release, follow the procedures in How to Install Data-Service Software Packages (scinstall).

You can run the Web Start program with a command-line interface (CLI) or with a graphical user interface (GUI). The content and sequence of instructions in the CLI and the GUI are similar. For more information about the Web Start program, see the installer(1M) man page.

  1. Become superuser on a cluster node.

  2. (Optional) If you intend to use the Web Start program with a GUI, ensure that the DISPLAY environment variable is set.

  3. Load the Sun Cluster 3.1 Agents CD-ROM into the CD-ROM drive.

    If the Volume Management daemon vold(1M) is running and is configured to manage CD-ROM devices, the daemon automatically mounts the CD-ROM on the /cdrom/suncluster_3_1_u1 directory.

  4. Change to the directory where the CD-ROM is mounted.


    # cd cdrom-mount-point
    

  5. Start the Web Start program.


    # ./installer
    

  6. When you are prompted, select the type of installation.

    • To install all data services on the CD-ROM, select Typical.

    • To install only a subset of the data services on the CD-ROM, select Custom.

  7. When you are prompted, select the locale to install.

    • To install only the C locale, select Typical.

    • To install other locales, select Custom.

  8. Follow instructions on the screen to install the data-service packages on the node.

    After the installation is finished, the Web Start program provides an installation summary. This summary enables you to view logs that the program created during the installation. These logs are located in the /var/sadm/install/logs directory.

  9. Exit the Web Start program.

  10. Unload the Sun Cluster 3.1 Agents CD-ROM from the CD-ROM drive.

    1. To ensure that the CD-ROM is not being used, change to a directory that does not reside on the CD-ROM.

    2. Eject the CD-ROM.


      # eject cdrom
      
  11. Repeat Step 1 through Step 10 on each remaining cluster node.

  12. Install any Sun Cluster data-service patches.

    See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.

    You do not have to reboot after you install Sun Cluster data-service patches unless a reboot is specified in the patch's special instructions. If a patch instruction requires that you reboot, perform the following steps (an example of this sequence follows this procedure):

    1. Shut down the cluster by using the scshutdown(1M) command.

    2. Reboot each node in the cluster.


    Note –

    Until cluster installation mode is disabled, only the first-installed node, which established the cluster, has a quorum vote. If an established cluster that is still in installation mode is not shut down before the first-installed node is rebooted, the remaining cluster nodes cannot obtain quorum, and the entire cluster shuts down. Cluster nodes remain in installation mode until the first time you run the scsetup(1M) command, during the procedure How to Perform Postinstallation Setup.


  13. Perform postinstallation setup and assign quorum votes.

    Go to How to Perform Postinstallation Setup.
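
The following sketch shows the shutdown and reboot sequence that Step 12 describes. Run the scshutdown(1M) command from one node; when all nodes have halted, boot each node from its OpenBoot PROM ok prompt.


    # scshutdown -y -g0
    ok boot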

How to Install Data-Service Software Packages (scinstall)

Perform this task on each cluster node to install data services. If you install data services from the Sun Cluster 3.1 10/03 Data Services release, you can also use the Web Start program to install the packages. See How to Install Data-Service Software Packages (Web Start).


Note –

You do not need to perform this procedure if you used SunPlex Manager to install Sun Cluster HA for NFS or Sun Cluster HA for Apache, or both, and if you do not intend to install any other data services. Instead, go to How to Perform Postinstallation Setup.


  1. Become superuser on a cluster node.

  2. Insert the Sun Cluster 3.1 Agents CD-ROM into the CD-ROM drive on the node.

  3. Change to the directory where the CD-ROM is mounted.


    # cd cdrom-mount-point
    

  4. Start the scinstall(1M) utility.


    # scinstall
    

    Follow these guidelines to use the interactive scinstall utility.

    • Interactive scinstall enables you to type ahead. Therefore, do not press Return more than once if the next menu screen does not appear immediately.

    • Unless otherwise noted, you can press Control-D to return to either the start of a series of related questions or the Main Menu.

  5. To add data services, type 3 (Add support for new data services to this cluster node).

  6. Follow the prompts to select all data services to install.

    You must install the same set of data-service packages on each node. This requirement applies even if a node is not expected to host resources for an installed data service.

  7. After the data services are installed, exit the scinstall utility.

  8. Unload the Sun Cluster 3.1 Agents CD-ROM from the CD-ROM drive.

    1. To ensure that the CD-ROM is not being used, change to a directory that does not reside on the CD-ROM.

    2. Eject the CD-ROM.


      # eject cdrom
      
  9. Repeat Step 1 through Step 8 on each cluster node where you are installing data services.

  10. Install any Sun Cluster data-service patches.

    See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.

    You do not have to reboot after you install Sun Cluster data-service patches unless a reboot is specified in the patch's special instructions. If a patch instruction requires that you reboot, perform the following steps:

    1. Shut down the cluster by using the scshutdown(1M) command.

    2. Reboot each node in the cluster.


    Note –

    Until cluster installation mode is disabled, only the first-installed node, which established the cluster, has a quorum vote. If an established cluster that is still in installation mode is not shut down before the first-installed node is rebooted, the remaining cluster nodes cannot obtain quorum, and the entire cluster shuts down. Cluster nodes remain in installation mode until the first time you run the scsetup(1M) command, during the procedure How to Perform Postinstallation Setup.


  11. Perform postinstallation setup and assign quorum votes.

    Go to How to Perform Postinstallation Setup.

How to Perform Postinstallation Setup

Perform this procedure one time only, after the cluster is fully formed.

  1. From one node, verify that all nodes have joined the cluster.

    Run the scstat(1M) command to display a list of the cluster nodes. You do not need to be logged in as superuser to run this command.


    % scstat -n
    

    Output resembles the following.


    -- Cluster Nodes --
                       Node name      Status
                       ---------      ------
      Cluster node:    phys-schost-1  Online
      Cluster node:    phys-schost-2  Online

  2. On each node, verify device connectivity to the cluster nodes.

    Run the scdidadm(1M) command to display a list of all the devices that the system checks. You do not need to be logged in as superuser to run this command.


    % scdidadm -L
    

    The list on each node should be the same. Output resembles the following.


    1       phys-schost-1:/dev/rdsk/c0t0d0 /dev/did/rdsk/d1
    2       phys-schost-1:/dev/rdsk/c1t1d0 /dev/did/rdsk/d2
    2       phys-schost-2:/dev/rdsk/c1t1d0 /dev/did/rdsk/d2
    3       phys-schost-1:/dev/rdsk/c1t2d0 /dev/did/rdsk/d3
    3       phys-schost-2:/dev/rdsk/c1t2d0 /dev/did/rdsk/d3
    …

  3. Determine the global device-ID name of each shared disk that you are configuring as a quorum device.

    Use the scdidadm output from Step 2 to identify the device-ID name of each shared disk that you are configuring as a quorum device. For example, the output in Step 2 shows that global device d2 is shared by phys-schost-1 and phys-schost-2. You use this information in Step 8. See Quorum Devices for further information about planning quorum devices.

  4. Are you adding a new node to an existing cluster?

  5. Did you use SunPlex Manager to install Sun Cluster software?

    • If no, proceed to Step 6.

    • If yes, skip to Step 12. During Sun Cluster installation, SunPlex Manager assigns quorum votes and removes the cluster from installation mode for you.

  6. Become superuser on one node of the cluster.

  7. Start the scsetup(1M) utility.


    # scsetup
    

    The Initial Cluster Setup screen is displayed.


    Note –

    If the Main Menu is displayed instead, initial cluster setup was already successfully performed. Skip to Step 12.


    If the quorum setup process is interrupted or fails to complete successfully, rerun scsetup.

  8. At the prompt Do you want to add any quorum disks?, configure at least one shared quorum device if your cluster is a two-node cluster.

    If your cluster has three or more nodes, quorum device configuration is optional.

  9. At the prompt Is it okay to reset "installmode"?, answer Yes.

    After the scsetup utility sets the quorum configurations and vote counts for the cluster, the message Cluster initialization is complete is displayed. The utility returns you to the Main Menu. (A command-line equivalent of these quorum steps follows this procedure.)

  10. Quit the scsetup utility.

  11. From any node, verify the device and node quorum configurations.


    % scstat -q
    

  12. From any node, verify that cluster installation mode is disabled.

    You do not need to be superuser to run this command.


    % scconf -p | grep "install mode"
    Cluster install mode:                                disabled

  13. Do you intend to use VERITAS File System (VxFS) software?

    • If no, proceed to Step 14.

    • If yes, perform the following steps.

    1. Follow the procedures in your VxFS installation documentation to install VxFS software on each node of the cluster, if VxFS software is not already installed.

    2. In the /etc/system file on each node, set the value of the rpcmod:svc_default_stksize variable to 0x8000 and set the value of the lwp_default_stksize variable to 0x6000.


      set rpcmod:svc_default_stksize=0x8000
      set lwp_default_stksize=0x6000

      Sun Cluster software requires a minimum rpcmod:svc_default_stksize setting of 0x8000. Because VxFS installation changes the value of the rpcmod:svc_default_stksize variable to 0x4000, you must manually change the value back to 0x8000 after VxFS installation is complete.

      Also, you must set the lwp_default_stksize variable in the /etc/system file to override the VxFS default value of 0x4000.

  14. Install volume management software.
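
For reference, the scsetup utility in Step 7 through Step 9 runs the scconf(1M) command on your behalf. The following command-line sketch is roughly equivalent for a two-node cluster and assumes the shared disk d2 that was identified in Step 3: the first command configures d2 as a quorum device, the second resets quorum vote counts and clears installation mode, and the third verifies the result. Check the exact syntax against the scconf(1M) man page before you use this form.


    # scconf -a -q globaldev=d2
    # scconf -c -q reset
    # scstat -q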

How to Uninstall Sun Cluster Software to Correct Installation Problems

Perform this procedure if the installed node cannot join the cluster or if you need to correct configuration information, such as the transport adapters.


Note –

If the node has already joined the cluster and is no longer in installation mode (see Step 12 of How to Perform Postinstallation Setup), do not perform this procedure. Instead, go to “How to Uninstall Sun Cluster Software From a Cluster Node” in “Adding and Removing a Cluster Node” in Sun Cluster 3.1 10/03 System Administration Guide.


  1. Attempt to reinstall the node.

    You can correct certain failed installations simply by repeating Sun Cluster software installation on the node. If you have already tried to reinstall the node without success, proceed to Step 2 to uninstall Sun Cluster software from the node.

  2. Become superuser on an active cluster member other than the node that you are uninstalling.

  3. From the active cluster member, add the node that you intend to uninstall to the cluster node-authentication list.


    # /usr/cluster/bin/scconf -a -T node=nodename
    
    -a

    Add

    -T

    Specifies authentication options

    node=nodename

    Specifies the name of the node to add to the authentication list

    Alternatively, you can use the scsetup(1M) utility. See “How to Add a Cluster Node to the Authorized Node List” in “Adding and Removing a Cluster Node” in Sun Cluster 3.1 10/03 System Administration Guide for procedures. (A usage example follows this procedure.)

  4. Become superuser on the node you intend to uninstall.

  5. Reboot the node into noncluster mode.


    # shutdown -g0 -y -i0
    ok boot -x
    

  6. Uninstall the node.

    Run the scinstall command from a directory that does not contain any files that are delivered by the Sun Cluster packages.


    # cd /
    # /usr/cluster/bin/scinstall -r
    

    See the scinstall(1M) man page for more information.

  7. Reinstall Sun Cluster software on the node.

    Refer to Table 2–1 for the list of all installation tasks and the order in which to perform the tasks.
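
As a usage example for Step 3, the following command, run on an active cluster member, authorizes a node named phys-schost-2 (an illustrative name) to rejoin the cluster. Step 4 through Step 6 are then performed on phys-schost-2 itself.


    # /usr/cluster/bin/scconf -a -T node=phys-schost-2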