Sun Cluster 3.0 U1 Installation Guide

Installing the Software

The following table lists the tasks you perform to install the software.

Table 2-1 Task Map: Installing the Software

Task 

For Instructions, Go To ... 

Plan the layout of your cluster configuration and prepare to install software. 

"How to Prepare for Cluster Software Installation"

(Optional) Install the Cluster Control Panel (CCP) software on the administrative console.

"How to Install Cluster Control Panel Software on the Administrative Console"

Install the Solaris operating environment and Sun Cluster software to establish new cluster nodes. Choose one of the following three methods. 

  • Method 1 - (New clusters or added nodes) Install Solaris software, then install the Sun Cluster software by using the scinstall utility.

"How to Install Solaris Software"

"How to Install Sun Cluster Software (scinstall)"

  • Method 2 - (New clusters only) Install Solaris software, then install SunPlex Manager and use it to install the Sun Cluster software.

"How to Install Solaris Software"

"Using SunPlex Manager to Install Sun Cluster Software"

  • Method 3 - (New clusters or added nodes) Install Solaris software and Sun Cluster software in one operation by using the scinstall utility's custom JumpStart option.

"How to Install Solaris and Sun Cluster Software (JumpStart)"

Configure the name service look-up order. 

"How to Configure the Name Service Switch"

Set up directory paths. 

"How to Set Up the Root Environment"

Install data service software packages. 

"How to Install Data Service Software Packages"

Perform post-installation setup and assign quorum votes. 

"How to Perform Post-Installation Setup"

Install and configure volume manager software. 

  • Install and configure Solstice DiskSuite software.

"Installing and Configuring Solstice DiskSuite Software"

Solstice DiskSuite documentation 

  • Install and configure VERITAS Volume Manager software.

"Installing and Configuring VxVM Software"

VERITAS Volume Manager documentation 

Configure the cluster. 

"Configuring the Cluster"

How to Prepare for Cluster Software Installation

Before you begin to install software, make the following preparations.

  1. Read the following manuals for information that will help you plan your cluster configuration and prepare your installation strategy.

    • Sun Cluster 3.0 U1 Release Notes--Restrictions, bug workarounds, and other late-breaking information.

    • Sun Cluster 3.0 U1 Release Notes Supplement--Post-release documentation about additional restrictions, bug workarounds, new features, and other late-breaking information. This document is regularly updated and is published online at the following Web site.

      http://docs.sun.com

    • Sun Cluster 3.0 U1 Concepts--Overview of the Sun Cluster product.

    • Sun Cluster 3.0 U1 Installation Guide (this manual)--Planning guidelines and procedures for installing and configuring Solaris, Sun Cluster, and volume manager software.

    • Sun Cluster 3.0 U1 Data Services Installation and Configuration Guide--Planning guidelines and procedures for installing and configuring data services.

  2. Plan your cluster configuration.

    • Use the planning guidelines in Chapter 1, Planning the Sun Cluster Configuration and in the Sun Cluster 3.0 U1 Data Services Installation and Configuration Guide to determine how you will install and configure your cluster.

    • Fill out the cluster framework and data services configuration worksheets in the Sun Cluster 3.0 U1 Release Notes. Use your completed worksheets for reference during the installation and configuration tasks.

  3. Have available all related documentation, including third-party documents.

    The following is a partial list of product documentation you might need for reference during cluster installation.

    • Solaris software

    • Solstice DiskSuite software

    • VERITAS Volume Manager

    • Sun Management Center

    • Third-party applications such as Oracle

  4. Get all necessary patches for your cluster configuration.

    See the Sun Cluster 3.0 U1 Release Notes for the location of patches and installation instructions.

  5. Do you intend to use Cluster Control Panel software to connect from an administrative console to your cluster nodes?

    • If yes, proceed to "How to Install Cluster Control Panel Software on the Administrative Console".

    • If no, proceed to "How to Install Solaris Software".

How to Install Cluster Control Panel Software on the Administrative Console

This procedure describes how to install the Cluster Control Panel (CCP) software on the administrative console. The CCP provides a launchpad for the cconsole(1M), ctelnet(1M), and crlogin(1M) tools. Each of these tools provides a multiple-window connection to a set of nodes, plus a common window that sends input to all nodes at one time.

You can use any desktop machine that runs the Solaris 8 operating environment as an administrative console. You can also use the administrative console as a Sun Management Center console or server, or both, and as an AnswerBook server. See Sun Management Center documentation for information on how to install Sun Management Center software. See the Sun Cluster 3.0 U1 Release Notes for information on how to install an AnswerBook server.


Note -

You are not required to use an administrative console. If you do not use an administrative console, perform administrative tasks from one designated node in the cluster.


  1. Ensure that the Solaris 8 operating environment and any Solaris patches are installed on the administrative console.

    All platforms require Solaris 8 with at least the End User System Support software group.

  2. If you install from the CD-ROM, insert the Sun Cluster 3.0 7/01 CD-ROM into the CD-ROM drive of the administrative console.

    If the Volume Management daemon vold(1M) is running and configured to manage CD-ROM devices, it automatically mounts the CD-ROM on the /cdrom/suncluster_3_0u1 directory.

  3. Change to the /cdrom/suncluster_3_0u1/SunCluster_3.0/Packages directory.


    # cd /cdrom/suncluster_3_0u1/SunCluster_3.0/Packages
    

  4. Install the SUNWccon package.


    # pkgadd -d . SUNWccon
    

  5. (Optional) Install the SUNWscman package.


    # pkgadd -d . SUNWscman
    

    When you install the SUNWscman package on the administrative console, you can view Sun Cluster man pages from the administrative console before you install Sun Cluster software on the cluster nodes.

  6. If you installed from a CD-ROM, eject the CD-ROM.

  7. Create an /etc/clusters file.

    Add your cluster name and the physical node name of each cluster node to the file.


    # vi /etc/clusters
    clustername node1 node2
    

    See the /opt/SUNWcluster/bin/clusters(4) man page for details.

  8. Create an /etc/serialports file.

    Add the physical node name of each cluster node, the terminal concentrator (TC) or System Service Processor (SSP) name, and the serial port numbers to the file.


    # vi /etc/serialports
    node1 TC-hostname 500N
    node2 TC-hostname 500N
    
    node1

    Physical name of the cluster node

    TC-hostname

    Name of the TC or SSP

    500N

    Serial (telnet) port number


    Note -

    Use the telnet(1) port numbers, not the physical port numbers, for the serial port numbers in the /etc/serialports file. To determine the serial port number, add 5000 to the physical port number. For example, if a physical port number is 6, the serial port number should be 5006.


    For Sun Enterprise E10000 servers, see the /opt/SUNWcluster/bin/serialports(4) man page for details and special considerations.

  9. For convenience, add the /opt/SUNWcluster/bin directory to the PATH and the /opt/SUNWcluster/man directory to the MANPATH on the administrative console.

    If you installed the SUNWscman package, also add the /usr/cluster/man directory to the MANPATH.
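
    As a sketch, the corresponding entries in root's .profile might look like the following for a Bourne shell (csh-family shells use setenv instead).


    # Add the Sun Cluster administrative tools and man pages to the search paths.
    # The /usr/cluster/man entry applies only if you installed the SUNWscman package.
    PATH=$PATH:/opt/SUNWcluster/bin
    MANPATH=$MANPATH:/opt/SUNWcluster/man:/usr/cluster/man
    export PATH MANPATH
    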

  10. Start the CCP utility.


    # /opt/SUNWcluster/bin/ccp clustername
    

    See the procedure "How to Remotely Log In to Sun Cluster" in the Sun Cluster 3.0 U1 System Administration Guide and the /opt/SUNWcluster/bin/ccp(1M) man page for information about how to use the CCP.
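
    You can also start an individual tool directly instead of through the launchpad, for example the cconsole tool, by using the cluster name you defined in the /etc/clusters file.


    # /opt/SUNWcluster/bin/cconsole clustername
    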

  11. Install the Solaris operating environment.

    Go to "How to Install Solaris Software".

How to Install Solaris Software

If you do not use the scinstall(1M) custom JumpStart installation method to install software, perform this task to install the Solaris operating environment on each node in the cluster.


Note -

If your nodes are already installed with the Solaris operating environment, you must still reinstall the Solaris software as described in this procedure to ensure successful installation of Sun Cluster software.


  1. Ensure that the hardware setup is complete and connections are verified before you install Solaris software.

    See the Sun Cluster 3.0 U1 Hardware Guide and your server and storage device documentation for details.

  2. Have available your completed "Local File System Layout Worksheet" from the Sun Cluster 3.0 U1 Release Notes.

  3. Are you using a naming service?

    • If no, proceed to Step 4. You will set up local hostname information in Step 12.

    • If yes, add address-to-name mappings for all public hostnames and logical addresses to any naming services (such as NIS, NIS+, or DNS) used by clients for access to cluster services. See "IP Addresses" for planning guidelines. See your Solaris system administrator documentation for information about using Solaris naming services.

  4. If you are using a cluster administrative console, display a console screen for each node in the cluster.

    If Cluster Control Panel (CCP) is installed and configured on your administrative console, you can use the cconsole(1M) utility to display the individual console screens. CCP also opens a master window from which you can send your input to all individual console windows at the same time.

    If you do not use CCP, connect to the consoles of each node individually.


    Tip -

    To save time, you can install the Solaris operating environment on each node at the same time.


  5. On each node of the cluster, determine whether the local-mac-address variable is correctly set to false.

    Sun Cluster software does not support the local-mac-address variable set to true.

    1. Display the value of the local-mac-address variable.

      • If the node is preinstalled with Solaris software, as superuser run the following command.


         # /usr/sbin/eeprom local-mac-address?
        

      • If the node is not yet installed with Solaris software, run the following command from the ok prompt.


        ok printenv local-mac-address?
        

    2. Does the command return local-mac-address?=false on each node?

      • If yes, the variable settings are correct. Proceed to Step 6.

      • If no, change the variable setting on any node that is not set to false.

        • If the node is preinstalled with Solaris software, as superuser run the following command.


           # /usr/sbin/eeprom local-mac-address?=false
          

        • If the node is not yet installed with Solaris software, run the following command from the ok prompt.


          ok setenv local-mac-address? false
          

    3. Repeat Step 1 to verify any changes you made in Step 2.

      The new setting becomes effective at the next system reboot.

  6. Install the Solaris operating environment as instructed in the Solaris installation documentation.


    Note -

    You must install all nodes in a cluster with the same version of the Solaris operating environment.


    You can use any method that is normally used to install the Solaris operating environment to install the software on new nodes that will become part of a clustered environment. These methods include the Solaris interactive installation program, Solaris JumpStart, and Solaris Web Start.

    During Solaris software installation, do the following.

    1. Install at least the End User System Support software group.


      Note -

      Sun Enterprise E10000 servers require the Entire Distribution + OEM software group.


      You might need to install other Solaris software packages which are not part of the End User System Support software group, for example, the Apache HTTP server packages. Third-party software, such as Oracle, might also require additional Solaris packages. See your third-party documentation for any Solaris software requirements.

    2. Choose Manual Layout to set up the file systems.

      • Create a file system of at least 100 Mbytes for use by the global-devices subsystem. If you intend to use SunPlex Manager to install Sun Cluster software, you must create the file system with a mount point of /globaldevices. This mount point is the default used by scinstall.


        Note -

        A global-devices file system is required for Sun Cluster software installation to succeed.


      • If you plan to use SunPlex Manager to install Solstice DiskSuite while installing Sun Cluster software, create a file system on slice 7 of at least 10 Mbytes with a mount point of /sds. Otherwise, create any file system partitions needed to support your volume manager software as described in "System Disk Partitions".
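
      For illustration only, a root disk layout that satisfies these requirements might resemble the following sketch. Only the 100-Mbyte /globaldevices file system and, if applicable, the 10-Mbyte /sds file system on slice 7 are actual requirements; the other slice numbers and sizes shown are assumptions that depend on your own planning.


      slice 0   /                 remaining disk space
      slice 1   swap              sized per your Solaris planning
      slice 4   /globaldevices    100 Mbytes
      slice 7   /sds              10 Mbytes (only if SunPlex Manager installs Solstice DiskSuite)
      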

    3. Choose auto reboot.


      Note -

      Solaris software is installed and the node reboots before the next prompts display.


    4. For ease of administration, set the same root password on each node.

    5. Answer no when asked whether to enable automatic power-saving shutdown.

      You must disable automatic shutdown in Sun Cluster configurations. See the pmconfig(1M) and power.conf(4) man pages for more information.


    Note -

    The Solaris interface groups feature is disabled by default during Solaris software installation. Interface groups are not supported in a Sun Cluster configuration and should not be enabled. See the ifconfig(1M) man page for more information about Solaris interface groups.


  7. Are you installing a new node to an existing cluster?

    • If yes, proceed to Step 8.

    • If no, skip to Step 10.

  8. Have you added the new node to the cluster's authorized-node list?

    • If yes, proceed to Step 9.

    • If no, run scsetup(1M) from another, active cluster node to add the new node's name to the list of authorized cluster nodes. See "How to Add a Cluster Node to the Authorized Node List" in the Sun Cluster 3.0 U1 System Administration Guide for procedures.

  9. Create a mount point on the new node for each cluster file system in the cluster.

    1. From another, active node of the cluster, display the names of all cluster file systems.


      % mount | grep global | egrep -v node@ | awk '{print $1}'
      

    2. On the new node, create a mount point for each cluster file system in the cluster.


      % mkdir -p mountpoint
      

      For example, if the mount command returned the file system name /global/dg-schost-1, run mkdir -p /global/dg-schost-1 on the new node you are adding to the cluster.
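
      If remote shell access from the new node to an active node is already configured, one hedged way to script this step is the following Bourne-shell loop. The node name phys-schost-1 is hypothetical, and the rsh(1) call assumes that .rhosts trust is in place.


      # Run as superuser on the new node; phys-schost-1 is a hypothetical active node.
      for fs in `rsh phys-schost-1 'mount | grep global | egrep -v node@' | awk '{print $1}'`
      do
              mkdir -p $fs
      done
      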

  10. Install any Solaris software patches.

    See the Sun Cluster 3.0 U1 Release Notes for the location of patches and installation instructions. If necessary, view the /etc/release file to see the exact version of Solaris software that is installed on a node.
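
    For example, to display the Solaris release information on a node:


    # cat /etc/release
    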

  11. Install any hardware-related patches and download any needed firmware contained in the hardware patches.

    See the Sun Cluster 3.0 U1 Release Notes for the location of patches and installation instructions.

  12. Update the /etc/inet/hosts file on each node with all public hostnames and logical addresses for the cluster.

    Perform this step regardless of whether you are using a naming service.
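
    A sketch of such entries follows; the node names, logical hostname, and addresses are hypothetical, so substitute the values from your completed configuration worksheets.


    # vi /etc/inet/hosts
    192.168.100.101   phys-schost-1   # hypothetical cluster node
    192.168.100.102   phys-schost-2   # hypothetical cluster node
    192.168.100.110   schost-nfs-lh   # hypothetical logical hostname
    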

  13. Install Sun Cluster software on your cluster nodes.

    Go to "How to Install Sun Cluster Software (scinstall)", or to "Using SunPlex Manager to Install Sun Cluster Software" if you will use SunPlex Manager.

How to Install Sun Cluster Software (scinstall)

After you install the Solaris operating environment, perform this task on each node of the cluster to install Sun Cluster software and establish new cluster nodes. You can also use this procedure to add new nodes to an existing cluster.


Note -

If you used the scinstall(1M) custom JumpStart or SunPlex Manager installation method, the Sun Cluster software is already installed. Proceed to "How to Configure the Name Service Switch".


  1. Have available the following completed configuration planning worksheets from the Sun Cluster 3.0 U1 Release Notes.

    • "Cluster and Node Names Worksheet"

    • "Cluster Interconnect Worksheet"

    See "Planning the Sun Cluster Environment" for planning guidelines.

  2. Become superuser on the cluster node.

  3. If you install from the CD-ROM, insert the Sun Cluster 3.0 7/01 CD-ROM into the CD-ROM drive of the node to install and configure.

    If the Volume Management daemon vold(1M) is running and configured to manage CD-ROM devices, it automatically mounts the CD-ROM on the /cdrom/suncluster_3_0u1 directory.

  4. Change to the /cdrom/suncluster_3_0u1/SunCluster_3.0/Tools directory.


    # cd /cdrom/suncluster_3_0u1/SunCluster_3.0/Tools
    

  5. Are you installing a new node to an existing cluster?

    • If yes, skip to Step 8.

    • If no, proceed to Step 6.

  6. Install the first node and establish the new cluster.

    Follow the prompts to install Sun Cluster software, using the information from your configuration planning worksheets.

    1. Start the scinstall(1M) utility.


      # ./scinstall
      

      Follow these guidelines to use the interactive scinstall utility.

      • Interactive scinstall enables you to type ahead. Therefore, do not press Return more than once if the next menu screen does not appear immediately.

      • Unless otherwise noted, you can press Control-D to return to either the start of a series of related questions or to the Main Menu.

      • Your session answers are stored as defaults for the next time you run this menu option. Default answers display between brackets ([ ]) at the end of the prompt.


      Tip -

      Until the node is successfully booted in cluster mode, you can rerun scinstall and change the configuration information as needed. However, if bad configuration data for the node was pushed over to the established portion of the cluster, you might first need to remove the bad information. To do this, log in to one of the active cluster nodes, then use the scconf(1M) command to remove the bad adapter, junction, or cable information.


    2. From the Main Menu, type 1 (Establish a new cluster).


       *** Main Menu ***
       
          Please select from one of the following (*) options:
       
            * 1) Establish a new cluster using this machine as the first node
            * 2) Add this machine as a node in an established cluster
              3) Configure a cluster to be JumpStarted from this install server
              4) Add support for new data services to this cluster node
              5) Print release information for this cluster node
       
            * ?) Help with menu options
            * q) Quit
       
          Option:  1
       
       *** Establishing a New Cluster ***
      ...
       Do you want to continue (yes/no) [yes]?  y
      

    3. Specify the cluster name.


       >>> Cluster Name <<<
      ...
          What is the name of the cluster you want to establish?  clustername 
      

    4. Specify the names of the other nodes that will become part of this cluster.


       >>> Cluster Nodes <<<
      ...
          Node name:  node2
          Node name (Ctrl-D to finish):  <Control-D>
       
          This is the complete list of nodes:
      ...
          Is it correct (yes/no) [yes]? 

    5. Specify whether to use data encryption standard (DES) authentication.

      By default, Sun Cluster software permits a node to connect to the cluster only if the node is physically connected to the private interconnect and if the node name was specified in Step 4. However, the node actually communicates with the sponsoring node over the public network, since the private interconnect is not yet fully configured. DES authentication provides an additional level of security at installation time by enabling the sponsoring node to more reliably authenticate nodes that attempt to contact it to update the cluster configuration.

      If you choose to use DES authentication for additional security, you must configure all necessary encryption keys before any node can join the cluster. See the keyserv(1M) and publickey(4) man pages for details.


       >>> Authenticating Requests to Add Nodes <<<
      ...
          Do you need to use DES authentication (yes/no) [no]? 
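
      If you do enable DES authentication, one hedged sketch of creating a credential for a node that will join the cluster is the newkey(1M) command; the node name is hypothetical, and the authoritative procedure is in the keyserv(1M) and publickey(4) man pages.


      # newkey -h node2
      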

    6. Specify the private network address and netmask.


      Note -

      You cannot change the private network address after the cluster is successfully formed.



       >>> Network Address for the Cluster Transport <<<
      ...
          Is it okay to accept the default network address (yes/no) [yes]? 
          Is it okay to accept the default netmask (yes/no) [yes]? 

    7. If this is a two-node cluster, specify whether the cluster uses transport junctions.


      Tip -

      You can specify that the cluster uses transport junctions, regardless of whether the nodes are directly connected to each other. If you specify that the cluster uses transport junctions, you can more easily add new nodes to the cluster in the future.



       >>> Point-to-Point Cables <<<
       ...
          Does this two-node cluster use transport junctions (yes/no) [yes]? 

    8. If this cluster uses transport junctions, specify names for the transport junctions.

      You must use transport junctions if a cluster contains three or more nodes. You can use the default names switchN or create your own names.


       >>> Cluster Transport Junctions <<<
       ...
          What is the name of the first junction in the cluster [switch1]? 
          What is the name of the second junction in the cluster [switch2]? 

    9. Specify the cluster interconnect transport adapters and, if used, the names of the transport junctions they connect to.

      You can configure up to two adapters by using the scinstall command. You can configure additional adapters after Sun Cluster software is installed by using the scsetup utility.


       >>> Cluster Transport Adapters and Cables <<<
       ...
          What is the name of the first cluster transport adapter?  adapter
       ...
          Name of the junction to which "adapter" is connected [switch1]? 
       ...
          What is the name of the second cluster transport adapter?  adapter
       ...
          Name of the junction to which "adapter" is connected [switch2]? 
       ...
          Use the default port for the "adapter" connection [yes]? 

    10. Specify the global devices file system name.


       >>> Global Devices File System <<<
      ...
          The default is to use /globaldevices.
       
          Is it okay to use this default (yes/no) [yes]? 

    11. Do you have any Sun Cluster software patches to install?

      • If yes, type no in the Automatic Reboot screen to decline automatic reboot.

      • If no, type yes to accept automatic reboot.


       >>> Automatic Reboot <<<
      ...
          Do you want scinstall to reboot for you (yes/no) [yes]? 

    12. Accept or decline the generated scinstall command.

      The scinstall command generated from your input is displayed for confirmation.


       >>> Confirmation <<<
       
          Your responses indicate the following options to scinstall:
       
            scinstall -i  \
      ...
          Are these the options you want to use (yes/no) [yes]? 
          Do you want to continue with the install (yes/no) [yes]? 

      • If you accept the command and continue the installation, scinstall processing continues. "Example--Installing Sun Cluster Software" shows an example of the output you might see during scinstall processing.

      • If you decline the command, the scinstall utility returns you to the Main Menu. From there you can rerun menu option 1 and provide different answers. Your previous answers display as the defaults.

    Sun Cluster installation output is logged in the /var/cluster/logs/install/scinstall.log.pid file, where pid is the process ID number of the scinstall instance.


    Note -

    Unless you have installed your own /etc/inet/ntp.conf file, the scinstall command installs a default ntp.conf file for you. Because the default file is shipped with references to the maximum possible number of nodes, the xntpd(1M) daemon might issue error messages regarding some of these references at boot time. You can safely ignore these messages. See "How to Update Network Time Protocol (NTP)" for information on how to suppress these messages under otherwise normal cluster conditions.


  7. Do you have any Sun Cluster software patches to install?

    • If yes, install any Sun Cluster software patches on the node and reboot the node. See the Sun Cluster 3.0 U1 Release Notes for the location of patches and installation instructions.

    • If no, and scinstall rebooted the node during installation, go to Step 8. If scinstall did not reboot the node, manually reboot the node to establish the cluster before you proceed to Step 8.

    The first node reboot after Sun Cluster software installation forms the cluster and establishes this node as the first-installed node of the cluster.

  8. Install the second node of the cluster.

    Follow the prompts to install Sun Cluster software. Refer to the information from your configuration planning worksheets.


    Note -

    Do not reboot or shut down the first-installed node while any other nodes are being installed, even if you use another node as the sponsoring node. Until quorum votes are assigned to the cluster nodes and cluster install mode is disabled, the first-installed node, which established the cluster, is the only node that has a quorum vote. Rebooting or shutting down the first-installed node will therefore cause a system panic because of lost quorum.


    1. Start the scinstall utility.

      You can start this step while software is still being installed on the first-installed node. If necessary, the second node waits for the first node to complete installation.


      # ./scinstall
      

    2. From the Main Menu, type 2 (Add this machine as a node).


       *** Main Menu ***
       
          Please select from one of the following (*) options:
       
            * 1) Establish a new cluster using this machine as the first node
            * 2) Add this machine as a node in an established cluster
              3) Configure a cluster to be JumpStarted from this install server
              4) Add support for new data services to this cluster node
              5) Print release information for this cluster node
       
            * ?) Help with menu options
            * q) Quit
       
          Option:  2
       
        *** Adding a Node to an Established Cluster ***
      ...
          Do you want to continue (yes/no) [yes]? y
      

    3. Specify the name of any existing cluster node, referred to as the sponsoring node.


       >>> Sponsoring Node <<<
      ...
          What is the name of the sponsoring node?  node1
      

    4. Specify the cluster name.


       >>> Cluster Name <<<
      ...
          What is the name of the cluster you want to join?  clustername
      

    5. Specify whether this is a two-node cluster and whether the cluster uses transport junctions.

      You must use transport junctions if a cluster contains three or more nodes.


       >>> Point-to-Point Cables <<<
      ...
          Is this a two-node cluster (yes/no) [yes]? 
       
          Does this two-node cluster use transport junctions (yes/no) [yes]? 

    6. Specify the cluster interconnect transport adapters and transport junctions, if any.


       >>> Cluster Transport Adapters and Cables <<<
      ...
          What is the name of the first cluster transport adapter?  adapter
      ...
          Name of adapter on "node1" to which "adapter" is connected?  adapter
       
          What is the name of the second cluster transport adapter?  adapter
          Name of adapter on "node1" to which "adapter" is connected?  adapter
      

    7. Specify the global devices file system name.


       >>> Global Devices File System <<<
      ...
          The default is to use /globaldevices.
       
          Is it okay to use this default (yes/no) [yes]? 

    8. Do you have any Sun Cluster software patches to install?

      • If yes, type no in the Automatic Reboot screen to decline automatic reboot.

      • If no, type yes to accept automatic reboot.


       >>> Automatic Reboot <<<
      ...
          Do you want scinstall to reboot for you (yes/no) [yes]? 

    9. Accept or decline the generated scinstall command.

      The scinstall command generated from your input is displayed for confirmation.


       >>> Confirmation <<<
       
          Your responses indicate the following options to scinstall:
       
            scinstall -i  \
      ...
          Are these the options you want to use (yes/no) [yes]? 
          Do you want to continue with the install (yes/no) [yes]? 

      • If you accept the command and continue the installation, scinstall processing continues. "Example--Installing Sun Cluster Software" shows an example of the output you might see during scinstall processing. If the sponsoring node is not yet established in the cluster, scinstall waits for the sponsoring node to become available.

      • If you decline the command, the scinstall utility returns you to the Main Menu. From there you can rerun menu option 2 and provide different answers. Your previous answers display as the defaults.

    Sun Cluster installation output is logged in the /var/cluster/logs/install/scinstall.log.pid file, where pid is the process ID number of the scinstall instance.


    Note -

    Unless you have installed your own /etc/inet/ntp.conf file, the scinstall command installs a default ntp.conf file for you. Because the default file is shipped with references to eight nodes, the xntpd(1M) daemon might issue error messages regarding some of these references at boot time. You can safely ignore these messages. See "How to Update Network Time Protocol (NTP)" for information on how to suppress these messages under otherwise normal cluster conditions.


  9. Do you have any Sun Cluster software patches to install?

    • If yes, install the Sun Cluster software patches on the node and reboot the node. See the Sun Cluster 3.0 U1 Release Notes for the location of patches and installation instructions.


      Note -

      Do not reboot or shut down the first-installed node while any other nodes are being installed, even if you use another node as the sponsoring node. Until quorum votes are assigned to the cluster nodes and cluster install mode is disabled, the first-installed node, which established the cluster, is the only node that has a quorum vote. Rebooting or shutting down the first-installed node will therefore cause a system panic because of lost quorum. Cluster nodes remain in install mode until the first time you run the scsetup(1M) command, during the procedure "How to Perform Post-Installation Setup".


    • If no, and scinstall rebooted the node during installation, go to Step 10. If scinstall did not reboot the node, manually reboot the node to establish the cluster before you proceed to Step 10.

  10. Repeat Step 8 and Step 9 on each additional node until all nodes are fully configured.

    You do not need to wait for the second node to complete installation and reboot into the cluster before you begin installation on additional nodes.

  11. Set up the name service look-up order.

    Go to "How to Configure the Name Service Switch".

Example--Installing Sun Cluster Software

The following example shows the progress messages displayed as scinstall installation tasks are completed on the node phys-schost-1, which is the first node to be installed in the cluster.


** Installing SunCluster 3.0 **
        SUNWscr.....done.
        SUNWscdev...done.
        SUNWscu.....done.
        SUNWscman...done.
        SUNWscsal...done.
        SUNWscsam...done.
        SUNWscrsmop.done.
        SUNWsci.....done.
        SUNWscid....done.
        SUNWscidx...done.
        SUNWscvm....done.
        SUNWmdm.....done.
 
Initializing cluster name to "sccluster" ... done
Initializing authentication options ... done
Initializing configuration for adapter "hme2" ... done
Initializing configuration for adapter "hme4" ... done
Initializing configuration for junction "switch1" ... done
Initializing configuration for junction "switch2" ... done
Initializing configuration for cable ... done
Initializing configuration for cable ... done
Setting the node ID for "phys-schost-1" ... done (id=1)
 
Checking for global devices global file system ... done
Checking device to use for global devices file system ... done
Updating vfstab ... done
 
Verifying that NTP is configured ... done
Installing a default NTP configuration ... done
Please complete the NTP configuration after scinstall has finished.
 
Verifying that "cluster" is set for "hosts" in nsswitch.conf ... done
Adding the "cluster" switch to "hosts" in nsswitch.conf ... done
 
Verifying that "cluster" is set for "netmasks" in nsswitch.conf ... done
Adding the "cluster" switch to "netmasks" in nsswitch.conf ... done
 
Verifying that power management is NOT configured ... done
Unconfiguring power management ... done
/etc/power.conf has been renamed to /etc/power.conf.060199105132
Power management is incompatible with the HA goals of the cluster.
Please do not attempt to re-configure power management.
 
Ensure routing is disabled ... done
Network routing has been disabled on this node by creating /etc/notrouter.
Having a cluster node act as a router is not supported by Sun Cluster.
Please do not re-enable network routing.
 
Log file - /var/cluster/logs/install/scinstall.log.276
 
Rebooting ... 

Using SunPlex Manager to Install Sun Cluster Software


Note -

To add a new node to an existing cluster, do not use SunPlex Manager. Instead, go to "How to Install Sun Cluster Software (scinstall)".


This section describes how to install SunPlex Manager and use it to install Sun Cluster software and establish new cluster nodes. You can also use SunPlex Manager to install one or more of the following additional software products.

  • Solstice DiskSuite software

  • Sun Cluster HA for NFS data service

  • Sun Cluster HA for Apache scalable data service

The following table lists SunPlex Manager installation requirements for these additional software products.

Table 2-2 Requirements to Use SunPlex Manager to Install Software

Solstice DiskSuite 

  • A 10-Mbyte partition that uses /sds as the file system name.

Sun Cluster HA for NFS data service 

  • At least two shared disks of the same size which are connected to the same set of nodes.

  • Solstice DiskSuite software installed by SunPlex Manager.

  • A logical hostname for use by Sun Cluster HA for NFS. The logical hostname must have a valid IP address that is accessible by all cluster nodes and is on the same subnet as the base hostnames of the cluster nodes.

Sun Cluster HA for Apache scalable data service 

  • At least two shared disks of the same size which are connected to the same set of nodes.

  • Solstice DiskSuite software installed by SunPlex Manager.

  • A shared address for use by Sun Cluster HA for Apache. The shared address must have a valid IP address that is accessible by all cluster nodes and is on the same subnet as the base hostnames of the cluster nodes.

The following table lists each metaset name and cluster file system mount point created by SunPlex Manager, depending on the number of shared disks connected to the node. For example, if a node has four shared disks connected to it, SunPlex Manager creates the mirror-1 and stripe-1 metasets, but does not create the concat-1 metaset because the node does not have enough shared disks to create a third metaset.

Table 2-3 Metasets Installed by SunPlex Manager

Shared Disks                  Metaset Name   Cluster File System Mount Point   Purpose 

First pair of shared disks    mirror-1       /global/mirror-1                  Sun Cluster HA for NFS or Sun Cluster HA for Apache scalable data service, or both 

Second pair of shared disks   stripe-1       /global/stripe-1                  unused 

Third pair of shared disks    concat-1       /global/concat-1                  unused 

If the cluster does not meet the minimum shared disk requirement, SunPlex Manager still installs the Solstice DiskSuite packages. But without sufficient shared disks, SunPlex Manager cannot configure the metasets, metadevices, or cluster file systems needed to create instances of the data service.

How to Install SunPlex Manager Software

The SunPlex Manager graphical user interface (GUI) provides an easy way to install and administer Sun Cluster software. Follow this procedure to install SunPlex Manager software on your cluster.


Note -

If you intend to install Sun Cluster software by using another method, you do not need to perform this procedure. The scinstall command installs SunPlex Manager for you as part of the installation process.


Perform this procedure on each node of the cluster.

  1. Ensure that Solaris software and patches are installed on each node of the cluster.

    See the installation procedures in "How to Install Solaris Software".

  2. Become superuser on a cluster node.

  3. Install Apache software packages.

    The Apache software packages are included in the Solaris Entire Distribution software group and all higher-level software groups. If you installed a lower-level software group, use the pkginfo(1) command to determine whether the software packages listed in Step 3 are already installed. If they are already installed, proceed to Step 4.
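
    As a quick check, you might query the three packages named in Step 3 directly; pkginfo reports an error for any package that is not installed.


    # pkginfo SUNWapchr SUNWapchu SUNWapchd
    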

    1. If you install from the CD-ROM, insert the Solaris 8 Software 2 of 2 CD-ROM into the CD-ROM drive of the node.

      If the Volume Management daemon vold(1M) is running and configured to manage CD-ROM devices, it automatically mounts the CD-ROM.

    2. Change to the /cdrom/sol_8_sparc/Solaris_8/Product directory.


      # cd /cdrom/sol_8_sparc/Solaris_8/Product
      

    3. Install the Apache software packages in the following order.


      # pkgadd -d . SUNWapchr SUNWapchu SUNWapchd
      

    4. Eject the Solaris CD-ROM.

    5. Install any Apache software patches.

      See the Sun Cluster 3.0 U1 Release Notes for the location of patches and installation instructions.

  4. Install the SunPlex Manager software packages.

    1. If you install from the CD-ROM, insert the Sun Cluster 3.0 7/01 CD-ROM into the CD-ROM drive of the node.

      If the Volume Management daemon vold(1M) is running and configured to manage CD-ROM devices, it automatically mounts the CD-ROM on the /cdrom/suncluster_3_0u1 directory.

    2. Change to the /cdrom/suncluster_3_0u1/SunCluster_3.0/Packages directory.


      # cd /cdrom/suncluster_3_0u1/SunCluster_3.0/Packages
      

    3. Install the SunPlex Manager software packages and answer yes for all prompts.


      # pkgadd -d . SUNWscva SUNWscvr SUNWscvw
      

    4. Eject the Sun Cluster CD-ROM.

  5. Repeat Step 2 through Step 4 on each node of the cluster.

  6. Is the root password the same on every node of the cluster?

    • If yes, go to Step 7.

    • If no, set the root password to the same value on each node of the cluster. If necessary, also use chkey(1) to update the RPC key pair.


      # passwd
      Enter new password
      # chkey -p
      

    The root password must be the same on all nodes in the cluster to use the root password to access SunPlex Manager.

  7. Do you intend to set up additional user accounts to access SunPlex Manager?

    Users who do not use the root system account and do not have a user account set up on a particular node cannot access the cluster through SunPlex Manager from that node. Nor can they manage that node through another cluster node to which they do have access.

    • If yes, proceed to Step 8.

    • If no, go to "How to Install Sun Cluster Software (SunPlex Manager)".

  8. Determine how to set up user accounts to access SunPlex Manager.

    In addition to root-user access, users can log in to SunPlex Manager with a user account that has role-based access control (RBAC). Go to one of the procedures listed in the following table to set up user accounts.

    Table 2-4 Methods to Set Up SunPlex Manager User Accounts

    Method 

    Go to This Procedure 

    Add RBAC authorization to an existing user account.  

    "How to Add RBAC Authorization to an Existing User Account"

    Create a new user account that has RBAC authorization. 

    "How to Create a New User Account"


    Note -

    If you assign RBAC authorization to a non-root user account, that user account can perform administrative actions usually performed only by root.


    See "Role-Based Access Control" in the Solaris System Administration Guide, Volume 2 for more information.

How to Add RBAC Authorization to an Existing User Account

Add RBAC authorization to an existing user account. This enables the user to log in to SunPlex Manager by using the user's regular system password and have access to full SunPlex Manager functionality.


Note -

If you assign RBAC authorization to a non-root user account, that user account can perform a set of administrative actions usually performed only by root.


  1. Become superuser on a node of the cluster.

  2. Add the following entry to the /etc/user_attr file.


    # vi /etc/user_attr
    username::::type=normal;auths=solaris.cluster.admin
    

  3. Repeat on each remaining node of the cluster.

  4. Use SunPlex Manager to install Sun Cluster software.

    Go to "How to Install Sun Cluster Software (SunPlex Manager)".

How to Create a New User Account

Create a new user account on all nodes of the cluster.


Note -

If you assign RBAC authorization to a non-root user account, that user account can perform a set of administrative actions usually performed only by root.


  1. Become superuser on a node of the cluster.

  2. Create the new user account.


    # useradd -d dir -A solaris.cluster.admin login      
    
    -d dir

    Specifies the home directory of the new user

    -A solaris.cluster.admin

    Assigns solaris.cluster.admin authorization to the new user account

    login

    Name of the new user account


    Note -

    The user name must be unique and must not already exist either on the local machine or in the network name service.


    See the useradd(1M) man page for more information about creating user accounts.

  3. Set the password.


    # passwd login
    

  4. Repeat on each remaining node of the cluster.

    Ensure that the password for the user account is the same on all nodes of the cluster.

  5. Use SunPlex Manager to install Sun Cluster software.

    Go to "How to Install Sun Cluster Software (SunPlex Manager)".

How to Install Sun Cluster Software (SunPlex Manager)

Note -

To add a new node to an existing cluster, do not use SunPlex Manager. Instead, go to "How to Install Sun Cluster Software (scinstall)".


Perform this procedure to use SunPlex Manager to install Sun Cluster software and patches on all nodes in the cluster in a single operation. In addition, you can use this procedure to install Solstice DiskSuite software and patches, and to install the Sun Cluster HA for NFS data service or scalable Sun Cluster HA for Apache data service or both.

The installation process might take from 30 minutes to two or more hours, depending on the number of cluster nodes, choice of data services, and number of disks in your cluster configuration.

  1. Ensure that SunPlex Manager software is installed on each node of the cluster.

    See the installation procedures in "How to Install SunPlex Manager Software". See "Using SunPlex Manager to Install Sun Cluster Software" for installation requirements.

  2. Do you intend to install Sun Cluster HA for NFS or Sun Cluster HA for Apache?

  3. Prepare file system paths to a CD-ROM image of each software product you intend to install.

    1. Provide each CD-ROM image in a location that is available to each node.

      The CD-ROM images must be accessible to all nodes of the cluster from the same file system path. These paths can be one or more of the following locations.

      • CD-ROM drives exported to the network from machines outside the cluster.

      • Exported file systems on machines outside the cluster.

      • CD-ROM images copied to local file systems on each node of the cluster. The local file system must use the same name on each node.
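
      As a sketch of the first two options, a machine outside the cluster could export an image directory with the share(1M) command, and each cluster node could then mount it. The server name and paths shown here are hypothetical.


      (On the machine outside the cluster that holds the image)
      # share -F nfs -o ro /export/cdimages/suncluster_3_0u1
      
      (On each cluster node)
      # mkdir -p /cdimages/suncluster_3_0u1
      # mount install-server:/export/cdimages/suncluster_3_0u1 /cdimages/suncluster_3_0u1
      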

    2. Record the path to each CD-ROM image.

      You will provide this information to SunPlex Manager in Step 19.

  4. Are there any patches required to support Sun Cluster or Solstice DiskSuite software?

    • If yes, proceed to Step 5.

    • If no, skip to Step 7.

  5. Do you intend to use SunPlex Manager to install patches?

    • If yes, go to Step 6.

    • If no, manually install all patches required to support Sun Cluster or Solstice DiskSuite software before you use SunPlex Manager, then proceed to Step 7.

  6. Copy patches required for Sun Cluster or Solstice DiskSuite software into a single directory on a file system that is available to each node.

    1. Ensure that only one version of each patch is present in this patch directory.

      If the patch directory contains multiple versions of the same patch, SunPlex Manager cannot determine the correct patch dependency order.

    2. Ensure that the patches are uncompressed.

    3. Record the path to the patch directory.

      You will provide this information to SunPlex Manager in Step 19.

  7. Have available the following completed configuration planning worksheets from the Sun Cluster 3.0 U1 Release Notes.

    • "Cluster and Node Names Worksheet"

    • "Cluster Interconnect Worksheet"

    • "Network Resources" worksheet

    See Chapter 1, Planning the Sun Cluster Configuration and the Sun Cluster 3.0 U1 Data Services Installation and Configuration Guide for planning guidelines.

  8. From the administrative console or any other machine outside the cluster, launch a browser.

  9. Disable the browser's Web proxy.

    SunPlex Manager installation functionality is incompatible with Web proxies.

  10. Ensure that disk caching and memory caching are enabled.

    The disk cache and memory cache size must be greater than 0.

  11. From the browser, connect to port 3000 on one node of the cluster.


    https://node:3000/
    

    The Sun Cluster Installation screen displays in the browser window.


    Note -

    If SunPlex Manager displays the administration interface instead of the Sun Cluster Installation screen, Sun Cluster software is already installed on that node. Check that the name of the node in the URL is the correct name of the cluster node to install.


  12. If the browser displays a New Site Certification window, follow the onscreen instructions to accept the certificate.

  13. In the Sun Cluster Installation screen, verify that the cluster meets the listed requirements for using SunPlex Manager.

    • The Solaris End User Software Group or higher is installed.

    • Root disk partitions include a 100-Mbyte slice with the mount point /globaldevices.

    • Root disk partitions include a 10-Mbyte slice with the mount point /sds, if you will install Solstice DiskSuite.

    • File system paths to all needed CD-ROM images and patches are set up, as described in Step 3 through Step 6.

    If you meet all listed requirements, click Next to continue to the next screen.

  14. Type a name for the cluster and select the number of nodes in your cluster.

    Click Next to continue.


    Tip -

    You can use the Back button to return to a previous screen and change your information. However, SunPlex Manager does not save the information you supplied in the later screens. When you click Next, you must again type or select your configuration information in those screens.


  15. Type the name of each cluster node.

    Click Next to continue.

  16. From the pull-down lists for each node, select the names of the two adapters used for the private interconnects.

    Refer to your completed "Cluster Interconnect Worksheet" for the appropriate adapter names for each node.

    Click Next to continue.

  17. Choose whether to install Solstice DiskSuite software.

    You must install Solstice DiskSuite software if you intend to install the Sun Cluster HA for NFS or Sun Cluster HA for Apache data service.


    Caution -

    When Solstice DiskSuite is installed, any data on all shared disks will be lost.


    Click Next to continue.

  18. Choose whether to install Sun Cluster HA for NFS, Sun Cluster HA for Apache, or both.

    Refer to your completed "Network Resources" worksheet for the appropriate logical hostname or shared address.

    • For Sun Cluster HA for NFS, also specify the logical hostname the data service will use.

    • For Sun Cluster HA for Apache, also specify the shared address the data service will use.

    Click Next to continue.

  19. Type the path for each CD-ROM image needed to install the packages you specified, and optionally the path for the patch directory.

    • Type each path in the appropriate path field for each software package, as shown in Table 2-5.

    • Each specified path for a CD-ROM image must be the directory that contains the .cdtoc file for the CD-ROM.

    • For any software package you do not install, leave the relevant path field blank.

    • If you have already installed the required patches, leave the Patch Directory Path field blank.

    Table 2-5 CD-ROM Image Path Fields for Software Packages

    Software Package to Install                 Name of CD-ROM Image Path Field 

    Solstice DiskSuite                          Solaris CD-ROM Path 

    Sun Cluster                                 Sun Cluster 3.0 7/01 CD-ROM Path 

    Sun Cluster HA for NFS,
    Sun Cluster HA for Apache                   Sun Cluster 3.0 Agents 7/01 CD-ROM Path 

    Sun Cluster patches,
    Solstice DiskSuite patches                  Patch Directory Path 

    Click Next to continue.

  20. Is the information you supplied correct as displayed in the Confirm Information screen?

    • If yes, proceed to Step 21.

    • If no, perform the following steps to correct the configuration information.

    1. Click Back until you return to the screen with the information to change.


      Note -

      When you click Back to back up to a previous screen, any information you typed in the subsequent screens is lost.


    2. Type the correct information and click Next.

    3. Retype or reselect the information in each screen until you return to the Confirm Information screen.

    4. Verify that the information in the Confirm Information screen is now correct.

  21. Click Begin Installation to start the installation process.


    Note -

    Do not close the browser window or change the URL during the installation process.


    1. If the browser displays a New Site Certification window, follow the onscreen instructions to accept the certificate.

    2. If the browser prompts for login information, type the appropriate user ID and password for the node you connect to.

    During installation, the screen displays brief messages about the status of the cluster installation process. When installation is complete, the browser displays the cluster monitoring and administration GUI.

    SunPlex Manager installation output is logged in the /var/cluster/spm directory. Sun Cluster installation output is logged in the /var/cluster/logs/install/scinstall.log.pid file, where pid is the process ID number of the scinstall instance.

  22. Use SunPlex Manager to verify quorum assignments and modify them, if necessary.

    For clusters with three or more nodes, using shared quorum devices is optional. SunPlex Manager might or might not have assigned quorum votes to any quorum devices, depending on whether appropriate shared disks were available. You can use SunPlex Manager to designate quorum devices and reassign quorum votes in the cluster.
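
    If you prefer the command line, the scconf(1M) command can also register a quorum device. The following is a hedged sketch with a hypothetical global device name; verify the exact syntax in the scconf(1M) man page and in "How to Perform Post-Installation Setup".


    # scconf -a -q globaldev=d2
    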

  23. Set up the name service look-up order.

    Go to "How to Configure the Name Service Switch".

How to Install Solaris and Sun Cluster Software (JumpStart)

This procedure describes how to set up and use the scinstall(1M) custom JumpStart installation method. This method installs both Solaris and Sun Cluster software on all cluster nodes in a single operation and establishes the cluster. You can also use this procedure to add new nodes to an existing cluster.

  1. Ensure that the hardware setup is complete and connections are verified before you install Solaris software.

    See the Sun Cluster 3.0 U1 Hardware Guide and your server and storage device documentation for details on how to set up the hardware.

  2. Have available the following information.

    • The Ethernet address of each cluster node

    • The following completed configuration planning worksheets from the Sun Cluster 3.0 U1 Release Notes.

      • "Local File System Layout Worksheet"

      • "Cluster and Node Names Worksheet"

      • "Cluster Interconnect Worksheet"

    See "Planning the Solaris Operating Environment" and "Planning the Sun Cluster Environment" for planning guidelines.

  3. Are you using a naming service?

    • If no, proceed to Step 4. You will set up the necessary hostname information in Step 13.

    • If yes, add address-to-name mappings for all public hostnames and logical addresses, as well as the IP address and hostname of the JumpStart server, to any naming services (such as NIS, NIS+, or DNS) used by clients for access to cluster services. See "IP Addresses" for planning guidelines. See your Solaris system administrator documentation for information about using Solaris naming services.

  4. Are you installing a new node to an existing cluster?

    • If yes, run scsetup(1M) from another, active cluster node to add the new node's name to the list of authorized cluster nodes. See "How to Add a Cluster Node to the Authorized Node List" in the Sun Cluster 3.0 U1 System Administration Guide for procedures.

    • If no, go to Step 5.

  5. As superuser, set up the JumpStart install server for Solaris operating environment installation.

    See the setup_install_server(1M) and add_install_client(1M) man pages and the Solaris Advanced Installation Guide for instructions on how to set up a JumpStart install server.

    When you set up the install server, ensure that the following requirements are met.

    • The install server is on the same subnet as the cluster nodes, but is not itself a cluster node.

    • The install server installs the release of the Solaris operating environment required by the Sun Cluster software.

    • A custom JumpStart directory exists for JumpStart installation of Sun Cluster. This jumpstart-dir directory must contain a copy of the check(1M) utility and be NFS exported for reading by the JumpStart install server.

    • Each new cluster node is configured as a custom JumpStart install client that uses the custom JumpStart directory set up for Sun Cluster installation.
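
    For reference, a hedged sketch of registering one cluster node as a custom JumpStart install client follows. Run add_install_client(1M) from the Solaris Tools directory on the install server; the host name, platform group, and directory names shown are hypothetical.


    # ./add_install_client -c install-server:/jumpstart-dir phys-schost-1 sun4u
    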

  6. Create a directory on the JumpStart install server to hold your copy of the Sun Cluster 3.0 7/01 CD-ROM, if one does not already exist.

    In the following example, the /export/suncluster directory is created for this purpose.


    # mkdir -m 755 /export/suncluster
    

  7. Copy the Sun Cluster CD-ROM to the JumpStart install server.

    1. Insert the Sun Cluster 3.0 7/01 CD-ROM into the CD-ROM drive on the JumpStart install server.

      If the Volume Management daemon vold(1M) is running and configured to manage CD-ROM devices, it automatically mounts the CD-ROM on the /cdrom/suncluster_3_0u1 directory.

    2. Change to the /cdrom/suncluster_3_0u1/SunCluster_3.0/Tools directory.


      # cd /cdrom/suncluster_3_0u1/SunCluster_3.0/Tools
      

    3. Copy the CD-ROM to a new directory on the JumpStart install server.

      The scinstall command creates the new installation directory as it copies the CD-ROM files. The installation directory name /export/suncluster/sc30 is used here as an example.


      # ./scinstall -a /export/suncluster/sc30
      

    4. Eject the CD-ROM.


      # cd /
      # eject cdrom
      

    5. Ensure that the Sun Cluster 3.0 7/01 CD-ROM image on the JumpStart install server is NFS exported for reading by the JumpStart install server.

      See the NFS Administration Guide and the share(1M) and dfstab(4) man pages for more information about automatic file sharing.
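
      A sketch of a corresponding /etc/dfs/dfstab entry for the example installation directory follows; run the shareall(1M) command afterward to put the entry into effect.


      share -F nfs -o ro,anon=0 /export/suncluster/sc30
      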

  8. Are you installing a new node to an existing cluster?

    • If yes, proceed to Step 9.

    • If no, skip to Step 10.

  9. Have you added the node to the cluster's authorized-node list?

    • If yes, proceed to Step 10.

    • If no, run scsetup(1M) from any existing cluster node to add the new node's name to the list of authorized cluster nodes. See "How to Add a Cluster Node to the Authorized Node List" in the Sun Cluster 3.0 U1 System Administration Guide for procedures.

  10. Use scinstall to configure custom JumpStart finish scripts.

    JumpStart uses these finish scripts to install the Sun Cluster software.

    1. From the JumpStart install server, start the scinstall(1M) utility.

      The path /export/suncluster/sc30 is used here as an example of the installation directory you created.


      # cd /export/suncluster/sc30/SunCluster_3.0/Tools
      # ./scinstall
      

      Follow these guidelines to use the interactive scinstall utility.

      • Interactive scinstall enables you to type ahead. Therefore, do not press Return more than once if the next menu screen does not appear immediately.

      • Unless otherwise noted, you can press Control-D to return to either the start of a series of related questions or to the Main Menu.

      • Your session answers are stored as defaults for the next time you run this menu option. Default answers display between brackets ([ ]) at the end of the prompt.

    2. From the Main Menu, type 3 (Configure a cluster to be JumpStarted from this install server).

      If option 3 does not have an asterisk in front, the option is disabled because JumpStart setup is not complete or has an error. Exit the scinstall utility, repeat Step 5 through Step 7 to correct JumpStart setup, then restart the scinstall utility.


       *** Main Menu ***
       
          Please select from one of the following (*) options:
       
              1) Establish a new cluster using this machine as the first node
              2) Add this machine as a node in an established cluster
            * 3) Configure a cluster to be JumpStarted from this install server
              4) Add support for new data services to this cluster node
              5) Print release information for this cluster node
       
            * ?) Help with menu options
            * q) Quit
       
          Option:  3
       
       *** Custom JumpStart ***
      ...
          Do you want to continue (yes/no) [yes]? 

    3. Specify the JumpStart directory name.


       >>> Custom JumpStart Directory <<<
      ...
          What is your JumpStart directory name?  jumpstart-dir
      

    4. Specify the name of the cluster.


       >>> Cluster Name <<<
      ...
          What is the name of the cluster you want to establish?  clustername
      

    5. Specify the names of all cluster nodes.


       >>> Cluster Nodes <<<
      ...
          Please list the names of all cluster nodes planned for the initial
          cluster configuration. You must enter at least two nodes. List one
          node name per line. When finished, type Control-D:
       
          Node name:  node1
          Node name:  node2
          Node name (Ctrl-D to finish): <Control-D>
       
          This is the complete list of nodes:
      ... 
          Is it correct (yes/no) [yes]? 

    6. Specify whether to use Data Encryption Standard (DES) authentication.

      By default, Sun Cluster software permits a node to connect to the cluster only if the node is physically connected to the private interconnect and if the node name was specified in Step 5. However, the node actually communicates with the sponsoring node over the public network, because the private interconnect is not yet fully configured. DES authentication provides an additional level of security at installation time by enabling the sponsoring node to more reliably authenticate nodes that attempt to contact it to update the cluster configuration.

      If you choose to use DES authentication for additional security, you must configure all necessary encryption keys before any node can join the cluster. See the keyserv(1M) and publickey(4) man pages for details.


       >>> Authenticating Requests to Add Nodes <<<
      ...
          Do you need to use DES authentication (yes/no) [no]? 
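
      If you plan to answer yes, the following is a minimal sketch of key creation, assuming the publickey(4) database is served by NIS and using the example node names; run newkey(1M) as superuser on the NIS master server for each node.


       # newkey -h node1
       # newkey -h node2
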

    7. Specify the private network address and netmask.


      Note -

      You cannot change the private network address after the cluster is successfully formed.



       >>> Network Address for the Cluster Transport <<<
      ...
          Is it okay to accept the default network address (yes/no) [yes]? 
          Is it okay to accept the default netmask (yes/no) [yes]? 

    8. If this is a two-node cluster, specify whether the cluster uses transport junctions.


      Tip -

      You can specify that the cluster uses transport junctions, regardless of whether the nodes are directly connected to each other. If you specify that the cluster uses transport junctions, you can more easily add new nodes to the cluster in the future.



       >>> Point-to-Point Cables <<<
      ...
          Does this two-node cluster use transport junctions (yes/no) [yes]? 

    9. If this cluster uses transport junctions, specify the transport junction names.

      You must use transport junctions if a cluster contains three or more nodes. You can use the default names switchN or create your own names.


       >>> Cluster Transport Junctions <<<
      ...
          What is the name of the first junction in the cluster [switch1]? 
          What is the name of the second junction in the cluster [switch2]? 

    10. Specify the cluster interconnect transport adapters and, if used, the names of the transport junctions they connect to.

      You can configure up to two adapters by using the scinstall command. You can configure additional adapters after Sun Cluster software is installed by using the scsetup utility.


       >>> Cluster Transport Adapters and Cables <<<
      ...
       For node "node1",
          What is the name of the first cluster transport adapter?  adapter
      ...
       For node "node1",
          Name of the junction to which "adapter" is connected [switch1]? 
      ...
       For node "node1",
          Okay to use the default for the "adapter" connection (yes/no) [yes]? 
       
       For node "node1",
          What is the name of the second cluster transport adapter?  adapter
       For node "node1",
          Name of the junction to which "adapter" is connected [switch2]? 
       For node "node1",
          Use the default port for the "adapter" connection (yes/no) [yes]? 
       
       For node "node2",
          What is the name of the first cluster transport adapter?  adapter
       For node "node2",
          Name of the junction to which "adapter" is connected [switch1]? 
       For node "node2",
          Okay to use the default for the "adapter" connection (yes/no) [yes]? 
       
       For node "node2",
          What is the name of the second cluster transport adapter?  adapter
       For node "node2",
          Name of the junction to which "adapter" is connected [switch2]? 
       For node "node2",
          Use the default port for the "adapter" connection (yes/no) [yes]? 
       

    11. Specify the global devices file system name.


       >>> Global Devices File System <<<
      ...
          The default is to use /globaldevices.
       
       For node "node1",
          Is it okay to use this default (yes/no) [yes]? 
       
       For node "node2",
          Is it okay to use this default (yes/no) [yes]? 

    12. Accept or decline the generated scinstall commands.

      The scinstall command generated from your input is displayed for confirmation.


       >>> Confirmation <<<
       
          Your responses indicate the following options to scinstall:
      -----------------------------------------
       For node "node1",
            scinstall -c jumpstart-dir -h node1  \
      ...
          Are these the options you want to use (yes/no) [yes]? 
      -----------------------------------------
       For node "node2",
            scinstall -c jumpstart-dir -h node2  \
      ...
          Are these the options you want to use (yes/no) [yes]? 
      -----------------------------------------
          Do you want to continue with JumpStart set up (yes/no) [yes]? 

      If you do not accept the generated commands, the scinstall utility returns you to the Main Menu. From there you can rerun menu option 3 and provide different answers. Your previous answers display as the defaults.

  11. If necessary, make adjustments to the default class file, or profile, created by scinstall.

    The scinstall command creates the following autoscinstall.class default class file in the jumpstart-dir/autoscinstall.d/3.0 directory.


    install_type    initial_install
    system_type     standalone
    partitioning    explicit
    filesys         rootdisk.s0 free /
    filesys         rootdisk.s1 750 swap
    filesys         rootdisk.s3 100  /globaldevices
    filesys         rootdisk.s7 10
    cluster         SUNWCuser       add
    package         SUNWman         add


    Note -

    The default class file installs the End User System Support software group (SUNWCuser) of Solaris software. For Sun Enterprise E10000 servers, you must install the Entire Distribution + OEM software group. Also, some third-party software, such as Oracle, might require additional Solaris packages. See your third-party documentation for any Solaris software requirements.


    You can change the profile in one of the following ways.

    • Edit the autoscinstall.class file directly. These changes are applied to all nodes in all clusters that use this custom JumpStart directory.

    • Update the rules file to point to other profiles, then run the check utility to validate the rules file.

    As long as the Solaris operating environment install profile meets minimum Sun Cluster file system allocation requirements, there are no restrictions on other changes to the install profile. See "System Disk Partitions" for partitioning guidelines and requirements to support Sun Cluster 3.0 software.
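
    For example, a sketch of a per-node rules entry and its validation; the profile file profiles/node1.profile is a hypothetical name.


    # cd jumpstart-dir
    # cat rules
    hostname node1     -     profiles/node1.profile     -
    # ./check
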

  12. Set up Solaris patch directories.

    1. Create jumpstart-dir/autoscinstall.d/nodes/node/patches directories on the JumpStart install server.

      Create one directory for each node in the cluster, where node is the name of a cluster node. Alternately, use this naming convention to create symbolic links to a shared patch directory.


      # mkdir jumpstart-dir/autoscinstall.d/nodes/node/patches
      

    2. Place copies of any Solaris patches into each of these directories.

      Also place copies of any hardware-related patches that must be installed after Solaris software is installed into each of these directories.
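
      For example, a sketch of the symbolic-link alternative, using a hypothetical shared directory and the example node names.


      # cd jumpstart-dir/autoscinstall.d/nodes
      # mkdir -p shared/patches node1 node2
      # ln -s ../shared/patches node1/patches
      # ln -s ../shared/patches node2/patches
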

  13. Set up files to contain the necessary hostname information locally on each node.

    1. On the JumpStart install server, create files named jumpstart-dir/autoscinstall.d/nodes/node/archive/etc/inet/hosts.

      Create one file for each node, where node is the name of a cluster node. Alternately, use this naming convention to create symbolic links to a shared hosts file.

    2. Add the following entries into each file.

      • IP address and hostname of the NFS server that holds a copy of the Sun Cluster CD-ROM image. This could be the JumpStart install server or another machine.

      • IP address and hostname of each node in the cluster.
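
      The following sketch shows such a file for the example two-node cluster; all IP addresses and the installserver name are placeholders.


      # cat jumpstart-dir/autoscinstall.d/nodes/node1/archive/etc/inet/hosts
      192.168.1.10   installserver
      192.168.1.11   node1
      192.168.1.12   node2
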

  14. (Optional) Add your own post-installation finish script.

    You can add your own finish script, which is run after the standard finish script installed by the scinstall command.

    1. Name your finish script finish.

    2. Copy your finish script to the jumpstart-dir/autoscinstall.d/nodes/node directory, one directory for each node in the cluster.

      Alternately, use this naming convention to create symbolic links to a shared finish script.
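
      For example, a minimal hypothetical finish script; during a JumpStart installation the client's file systems are mounted under /a, so the script writes its marker file there.


      #!/bin/sh
      # Hypothetical example only: record that the custom finish script ran.
      echo "custom finish script completed" >> /a/var/tmp/finish.log
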

  15. If you use an administrative console, display a console screen for each node in the cluster.

    If cconsole(1M) is installed and configured on your administrative console, you can use it to display the individual console screens. Otherwise, you must connect to the consoles of each node individually.
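
      For example, a sketch assuming the Cluster Control Panel software is installed and clustername is defined in the /etc/clusters file on the administrative console.


      # cconsole clustername &
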

  16. From the ok PROM prompt on the console of each node, type the boot net - install command to begin the network JumpStart installation of each node.


    Note -

    The dash (-) in the command must be surrounded by a space on each side.



    ok boot net - install
    

    Sun Cluster installation output is logged in the /var/cluster/logs/install/scinstall.log.pid file, where pid is the process ID number of the scinstall instance.


    Note -

    Unless you have installed your own ntp.conf file in the /etc/inet directory, the scinstall command installs a default ntp.conf file for you. Because the default file is shipped with references to the maximum possible number of nodes, the xntpd(1M) daemon might issue error messages regarding some of these references at boot time. You can safely ignore these messages. See "How to Update Network Time Protocol (NTP)" for information on how to suppress these messages under otherwise-normal cluster conditions.


    When the installation is successfully completed, each node is fully installed as a new cluster node.


    Note -

    The Solaris interface groups feature is disabled by default during Solaris software installation. Interface groups are not supported in a Sun Cluster configuration and should not be reenabled. See the ifconfig(1M) man page for more information about Solaris interface groups.


  17. Are you installing a new node to an existing cluster?

    • If no, proceed to Step 18.

    • If yes, create mount points on the new node for all existing cluster file systems.

    1. From another, active node of the cluster, display the names of all cluster file systems.


      % mount | grep global | egrep -v node@ | awk '{print $1}'
      

    2. On the node you added to the cluster, create a mount point for each cluster file system in the cluster.


      % mkdir -p mountpoint
      

      For example, if a file system name returned by the mount command is /global/dg-schost-1, run mkdir -p /global/dg-schost-1 on the node being added to the cluster.


      Note -

      The mount points become active after you reboot the cluster in Step 19.


  18. Install any Sun Cluster software patches.

    See the Sun Cluster 3.0 U1 Release Notes for the location of patches and installation instructions.
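
    Patches are applied with the patchadd(1M) command; in the following sketch, the patch directory and patch ID are placeholders.


    # patchadd /var/tmp/patches/123456-01
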

  19. Did you add a new node to an existing cluster, or install Sun Cluster software patches that require you to reboot the entire cluster, or both?

    • If no, reboot the individual node if any patches you installed require a node reboot.

    • If yes, perform a reconfiguration reboot.

    1. From one node, shut down the cluster.


      # scshutdown
      


      Note -

      Do not reboot the first-installed node of the cluster until after the cluster is shut down.


    2. Reboot each node in the cluster.


      ok boot
      

    Until cluster install mode is disabled, only the first-installed node, which established the cluster, has a quorum vote. In an established cluster that is still in install mode, if the cluster is not shut down before the first-installed node is rebooted, the remaining cluster nodes cannot obtain quorum and the entire cluster shuts down. Cluster nodes remain in install mode until the first time you run the scsetup(1M) command, during the procedure "How to Perform Post-Installation Setup".

  20. Set up the name service look-up order.

    Go to "How to Configure the Name Service Switch".

How to Configure the Name Service Switch

Perform this task on each node in the cluster.

  1. Become superuser on the cluster node.

  2. Edit the /etc/nsswitch.conf file.

    1. Verify that cluster is the first source look-up for the hosts and netmasks database entries.

      This order is necessary for Sun Cluster software to function properly. The scinstall(1M) command adds cluster to these entries during installation.

    2. (Optional) To increase availability to data services if the naming service becomes unavailable, change the lookup order of the following entries.

      • For the hosts and netmasks database entries, place files after cluster.

      • For all other database entries, place files first in look-up order.

      If the [NOTFOUND=return] criterion becomes the last item of an entry after you modify the lookup order, the criterion is no longer necessary. You can either delete the [NOTFOUND=return] criterion from the entry or leave it in, in which case it is ignored.

    The following example shows partial contents of an /etc/nsswitch.conf file. The look-up order for the hosts and netmasks database entries is first cluster, then files. The look-up order for other entries begins with files. The [NOTFOUND=return] criterion is removed from the entries.


    # vi /etc/nsswitch.conf
    ...
    passwd:     files nis
    group:      files nis
    ...
    hosts:      cluster files nis
    ...
    netmasks:   cluster files nis
    ...

    See nsswitch.conf(4) for more information about nsswitch.conf entries.

  3. Set up your root user's environment.

    Go to "How to Set Up the Root Environment".

How to Set Up the Root Environment

Perform these tasks on each node in the cluster.


Note -

In a Sun Cluster configuration, user initialization files for the various shells must verify that they are run from an interactive shell before they attempt to output to the terminal. Otherwise, unexpected behavior or interference with data services might occur. See the Solaris System Administration Guide, Volume 1 for more information about how to customize a user's work environment.


  1. Become superuser on a cluster node.

  2. Modify the .cshrc file PATH and MANPATH entries.

    1. Set the PATH to include /usr/sbin and /usr/cluster/bin.

      For VERITAS Volume Manager, also set your PATH to include /etc/vx/bin. If you will install the VRTSvmsa package, also add /opt/VRTSvmsa/bin to your PATH.

    2. Set the MANPATH to include /usr/cluster/man. Also include the volume manager-specific paths.

      • For Solstice DiskSuite software, set your MANPATH to include /usr/share/man.

      • For VERITAS Volume Manager, set your MANPATH to include /opt/VRTSvxvm/man. If you will install the VRTSvmsa package, also add /opt/VRTSvmsa/man to your MANPATH.
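
      Taken together, a sketch of the resulting .cshrc entries, assuming Solstice DiskSuite without the VRTSvmsa package.


      set path = ( /usr/sbin /usr/cluster/bin $path )
      setenv MANPATH /usr/cluster/man:/usr/share/man
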

  3. (Optional) For ease of administration, set the same root password on each node, if you have not already done so.

  4. Repeat Step 1 through Step 3 on each remaining cluster node.

  5. Install data service software packages.

    Go to "How to Install Data Service Software Packages".

How to Install Data Service Software Packages

Perform this task on each cluster node.


Note -

If you used SunPlex Manager to install Sun Cluster HA for NFS or Sun Cluster HA for Apache, or both, and you do not intend to install any other data services, you do not need to perform this procedure. Instead, go to "How to Perform Post-Installation Setup".


  1. Become superuser on a cluster node.

  2. If you install from the CD-ROM, insert the Sun Cluster 3.0 Agents 7/01 CD-ROM into the CD-ROM drive on the node.

  3. Start the scinstall(1M) utility.


    # scinstall
    

    Follow these guidelines to use the interactive scinstall utility.

    • Interactive scinstall enables you to type ahead. Therefore, do not press Return more than once if the next menu screen does not appear immediately.

    • Unless otherwise noted, you can press Control-D to return to either the start of a series of related questions or to the Main Menu.

  4. To add data services, type 4 (Add support for a new data service to this cluster node).

    Follow the prompts to select all data services to install.


    Note -

    You must install the same set of data service packages on each node, even if a node is not expected to host resources for an installed data service.


  5. If you installed from a CD-ROM, eject the CD-ROM.

  6. Install any Sun Cluster data service patches.

    See the Sun Cluster 3.0 U1 Release Notes for the location of patches and installation instructions.

    You do not have to reboot after you install Sun Cluster data service patches unless a reboot is specified by the patch special instructions. If a patch instruction requires that you reboot, first shut down the cluster by using the scshutdown(1M) command, then reboot each node in the cluster.


    Note -

    Until cluster install mode is disabled, only the first-installed node, which established the cluster, has a quorum vote. In an established cluster that is still in install mode, if the cluster is not shut down before the first-installed node is rebooted, the remaining cluster nodes cannot obtain quorum and the entire cluster shuts down. Cluster nodes remain in install mode until the first time you run the scsetup(1M) command, during the procedure "How to Perform Post-Installation Setup".


  7. Repeat Step 1 through Step 6 on each remaining cluster node.

  8. Perform post-installation setup and assign quorum votes.

    Go to "How to Perform Post-Installation Setup".

How to Perform Post-Installation Setup

Perform this procedure one time only, after the cluster is fully formed.

  1. From one node, verify that all nodes have joined the cluster.

    Run the scstat(1M) command to display a list of the cluster nodes. You do not need to be logged in as superuser to run this command.


    % scstat -n
    

    Output resembles the following.


    -- Cluster Nodes --
                       Node name      Status
                       ---------      ------
      Cluster node:    phys-schost-1  Online
      Cluster node:    phys-schost-2  Online

  2. On each node, verify device connectivity to the cluster nodes.

    Run the scdidadm(1M) command to display a list of all the devices that the system checks. You do not need to be logged in as superuser to run this command.


    % scdidadm -L
    

    The list on each node should be the same. Output resembles the following.


    1       phys-schost-1:/dev/rdsk/c0t0d0 /dev/did/rdsk/d1
    2       phys-schost-1:/dev/rdsk/c1t1d0 /dev/did/rdsk/d2
    2       phys-schost-2:/dev/rdsk/c1t1d0 /dev/did/rdsk/d2
    3       phys-schost-1:/dev/rdsk/c1t2d0 /dev/did/rdsk/d3
    3       phys-schost-2:/dev/rdsk/c1t2d0 /dev/did/rdsk/d3
    ...

  3. Determine the global device ID (DID) of each shared disk you will configure as a quorum device.

    Use the scdidadm output from Step 2 to identify the DID name of each shared disk that you will configure as a quorum device. For example, the output in Step 2 shows that global device d2 is shared by phys-schost-1 and phys-schost-2. You will use this information in Step 8. See "Quorum Devices" for further information about planning quorum devices.

  4. Are you adding a new node to an existing cluster?

    • If yes, you might need to update the quorum configuration to accommodate your cluster's new configuration. See Sun Cluster 3.0 U1 Concepts for information about quorum. To change the quorum configuration, follow procedures in the Sun Cluster 3.0 U1 System Administration Guide. When the quorum configuration is satisfactory, go to Step 12.

    • If no, go to Step 6.

  5. Did you use SunPlex Manager to install Sun Cluster software?

    • If yes, skip to Step 11. During Sun Cluster installation, SunPlex Manager assigns quorum votes and removes the cluster from install mode for you.

    • If no, go to Step 6.

  6. Become superuser on one node of the cluster.

  7. Start the scsetup(1M) utility.


    # scsetup
    

    The Initial Cluster Setup screen is displayed.


    Note -

    If the Main Menu is displayed instead, initial cluster setup was already successfully performed. Skip to Step 11.


    If the quorum setup process is interrupted or fails to complete successfully, rerun scsetup.

  8. At the prompt Do you want to add any quorum disks?, configure at least one shared quorum device if your cluster is a two-node cluster.

    A two-node cluster remains in install mode until a shared quorum device is configured. After the scsetup utility configures the quorum device, the message Command completed successfully is displayed. If your cluster has three or more nodes, quorum device configuration is optional.
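
    Behind the menus, the scsetup utility generates and runs an scconf(1M) command. A sketch of the equivalent direct command, using the example DID device d2 identified in Step 3:


    # scconf -a -q globaldev=d2
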

  9. At the prompt Is it okay to reset "installmode"?, answer Yes.

    After the scsetup utility sets quorum configurations and vote counts for the cluster, the message Cluster initialization is complete is displayed and the utility returns you to the Main Menu.

  10. From any node, verify the device and node quorum configurations.


    % scstat -q
    

  11. From any node, verify that cluster install mode is disabled.

    You do not need to be superuser to run this command.


    % scconf -p | grep 'Cluster install mode:'
    Cluster install mode:                                  disabled

  12. Install volume management software.