Sun Cluster Software Installation Guide for Solaris OS

Establishing the Cluster

This section provides information and procedures to establish a new cluster or to add a node to an existing cluster. Before you start to perform these tasks, ensure that you installed software packages for the Solaris OS, Sun Cluster framework, and other products as described in Installing the Software.

The following task map lists the tasks to perform. Complete the procedures in the order that is indicated.

Table 2–2 Task Map: Establish the Cluster

1. Use one of the following methods to establish a new cluster or add a node to an existing cluster:

  • (New clusters only) Use the scinstall utility to establish the cluster. See How to Configure Sun Cluster Software on All Nodes (scinstall).

  • (New clusters or added nodes) Set up a JumpStart installation server. Then create a flash archive of the installed system. Finally, use the scinstall JumpStart option to install the flash archive on each node and establish the cluster. See How to Install Solaris and Sun Cluster Software (JumpStart).

  • (New multiple-node clusters only) Use SunPlex Installer to establish the cluster. Optionally, also configure Solstice DiskSuite or Solaris Volume Manager disk sets, the scalable Sun Cluster HA for Apache data service, and the Sun Cluster HA for NFS data service. See Using SunPlex Installer to Configure Sun Cluster Software and How to Configure Sun Cluster Software (SunPlex Installer).

  • (Added nodes only) Configure Sun Cluster software on the new node by using the scinstall utility. See How to Configure Sun Cluster Software on Additional Cluster Nodes (scinstall).

2. (Oracle Real Application Clusters only) If you added a node to a two-node cluster that runs Sun Cluster Support for Oracle Real Application Clusters and that uses a shared SCSI disk as the quorum device, update the SCSI reservations. See How to Update SCSI Reservations After Adding a Node.

3. Install data-service software packages. See How to Install Data-Service Software Packages (pkgadd), How to Install Data-Service Software Packages (scinstall), or How to Install Data-Service Software Packages (Web Start installer).

4. Assign quorum votes and remove the cluster from installation mode, if this operation was not already performed. See How to Configure Quorum Devices.

5. Validate the quorum configuration. See How to Verify the Quorum Configuration and Installation Mode.

6. Configure the cluster. See Configuring the Cluster.

How to Configure Sun Cluster Software on All Nodes (scinstall)

Perform this procedure from one node of the cluster to configure Sun Cluster software on all nodes of the cluster.

Before You Begin

Perform the following tasks:

Follow these guidelines to use the interactive scinstall utility in this procedure:

Steps
  1. If you disabled remote configuration during Sun Cluster software installation, re-enable remote configuration.

    Enable remote shell (rsh(1M)) or secure shell (ssh(1)) access for superuser to all cluster nodes.

  2. (Optional) To use the scinstall(1M) utility to install patches, download patches to a patch directory.

    • If you use Typical mode to install the cluster, use a directory named either /var/cluster/patches/ or /var/patches/ to contain the patches to install.

      In Typical mode, the scinstall command checks both those directories for patches.

      • If neither of those directories exists, then no patches are added.

      • If both directories exist, then only the patches in the /var/cluster/patches/ directory are added.

    • If you use Custom mode to install the cluster, you specify the path to the patch directory. This path does not need to be one of the directories that scinstall checks in Typical mode.

    You can include a patch-list file in the patch directory. The default patch-list file name is patchlist. For information about creating a patch-list file, refer to the patchadd(1M) man page.
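    For example, a patch directory that uses the default patch-list file name might look like the following listing. The patch IDs shown here are placeholders only, not actual required patches.


    # ls /var/cluster/patches/
    117949-12   118195-05   patchlist
    # cat /var/cluster/patches/patchlist
    117949-12
    118195-05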

  3. Become superuser on the cluster node from which you intend to configure the cluster.

  4. Start the scinstall utility.


    # /usr/cluster/bin/scinstall
    
  5. From the Main Menu, choose the menu item, Install a cluster or cluster node.


     *** Main Menu ***
    
        Please select from one of the following (*) options:
    
          * 1) Install a cluster or cluster node
            2) Configure a cluster to be JumpStarted from this install server
            3) Add support for new data services to this cluster node
            4) Upgrade this cluster node
          * 5) Print release information for this cluster node
          * ?) Help with menu options
          * q) Quit
    
        Option:  1
    
  6. From the Install Menu, choose the menu item, Install all nodes of a new cluster.

  7. From the Type of Installation menu, choose either Typical or Custom.

  8. Follow the menu prompts to supply your answers from the configuration planning worksheet.

    The scinstall utility installs and configures all cluster nodes and reboots the cluster. The cluster is established when all nodes have successfully booted into the cluster. Sun Cluster installation output is logged in a /var/cluster/logs/install/scinstall.log.N file.

  9. For the Solaris 10 OS, verify on each node that multi-user services for the Service Management Facility (SMF) are online.

    If services are not yet online for a node, wait until the state becomes online before you proceed to the next step.


    # svcs multi-user-server
    STATE          STIME    FMRI
    online         17:52:55 svc:/milestone/multi-user-server:default
  10. From one node, verify that all nodes have joined the cluster.

    Run the scstat(1M) command to display a list of the cluster nodes. You do not need to be logged in as superuser to run this command.


    % scstat -n
    

    Output resembles the following.


    -- Cluster Nodes --
                               Node name      Status
                               ---------      ------
      Cluster node:            phys-schost-1  Online
      Cluster node:            phys-schost-2  Online
  11. Install any necessary patches to support Sun Cluster software, if you have not already done so.

  12. To re-enable the loopback file system (LOFS), delete the following entry from the /etc/system file on each node of the cluster.


    exclude:lofs

    The re-enabling of LOFS becomes effective after the next system reboot.
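    To verify that the entry is gone from a node, you can search the file again. This is a quick sanity check, not the only valid method.


    # grep lofs /etc/system

    If the grep command returns no output, the exclude:lofs entry is no longer present.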


    Note –

    You cannot have LOFS enabled if you use Sun Cluster HA for NFS on a highly available local file system and have automountd running. LOFS can cause switchover problems for Sun Cluster HA for NFS. If you enable LOFS and later choose to add Sun Cluster HA for NFS on a highly available local file system, you must do one of the following:

    • Restore the exclude:lofs entry to the /etc/system file on each node of the cluster and reboot each node. This change disables LOFS.

    • Disable the automountd daemon.

    • Exclude from the automounter map all files that are part of the highly available local file system that is exported by Sun Cluster HA for NFS. This choice enables you to keep both LOFS and the automountd daemon enabled.


    See Types of File Systems in System Administration Guide, Volume 1 (Solaris 8) or The Loopback File System in System Administration Guide: Devices and File Systems (Solaris 9 or Solaris 10) for more information about loopback file systems.


Example 2–1 Configuring Sun Cluster Software on All Nodes

The following example shows the scinstall progress messages that are logged as scinstall completes configuration tasks on the two-node cluster, schost. The cluster is installed from phys-schost-1 by using the scinstall Typical mode. The other cluster node is phys-schost-2. The adapter names are qfe2 and qfe3. The automatic selection of a quorum device is enabled.


  Installation and Configuration

    Log file - /var/cluster/logs/install/scinstall.log.24747

    Testing for "/globaldevices" on "phys-schost-1" … done
    Testing for "/globaldevices" on "phys-schost-2" … done
    Checking installation status … done

    The Sun Cluster software is already installed on "phys-schost-1".
    The Sun Cluster software is already installed on "phys-schost-2".
    Starting discovery of the cluster transport configuration.

    The following connections were discovered:

        phys-schost-1:qfe2  switch1  phys-schost-2:qfe2
        phys-schost-1:qfe3  switch2  phys-schost-2:qfe3

    Completed discovery of the cluster transport configuration.

    Started sccheck on "phys-schost-1".
    Started sccheck on "phys-schost-2".

    sccheck completed with no errors or warnings for "phys-schost-1".
    sccheck completed with no errors or warnings for "phys-schost-2".

    Removing the downloaded files … done

    Configuring "phys-schost-2" … done
    Rebooting "phys-schost-2" … done

    Configuring "phys-schost-1" … done
    Rebooting "phys-schost-1" …

Log file - /var/cluster/logs/install/scinstall.log.24747

Rebooting …

Next Steps

If you intend to install data services, go to the appropriate procedure for the data service that you want to install and for your version of the Solaris OS:

 

Sun Cluster 2 of 2 CD-ROM (Sun Java System data services):

  • Solaris 8 or 9 – How to Install Sun Cluster Framework and Data-Service Software Packages (Java ES installer)

  • Solaris 10 – How to Install Data-Service Software Packages (pkgadd)

Sun Cluster Agents CD (all other data services):

  • Solaris 8 or 9, or Solaris 10 – How to Install Data-Service Software Packages (scinstall) or How to Install Data-Service Software Packages (Web Start installer)

Otherwise, go to the next appropriate procedure:

Troubleshooting

You cannot change the private-network address and netmask after scinstall processing is finished. If you need to use a different private-network address or netmask and the node is still in installation mode, follow the procedures in How to Uninstall Sun Cluster Software to Correct Installation Problems. Then perform the procedures in How to Install Sun Cluster Framework and Data-Service Software Packages (Java ES installer), followed by this procedure, to reinstall the software and configure the node with the correct information.

How to Install Solaris and Sun Cluster Software (JumpStart)

This procedure describes how to set up and use the scinstall(1M) custom JumpStart installation method. This method installs both the Solaris OS and Sun Cluster software on all cluster nodes in the same operation and establishes the cluster. You can also use this procedure to add new nodes to an existing cluster.

Before You Begin

Perform the following tasks:

Follow these guidelines to use the interactive scinstall utility in this procedure:

Steps
  1. Set up your JumpStart installation server.

  2. If you are installing a new node to an existing cluster, add the node to the list of authorized cluster nodes.

    1. Switch to another cluster node that is active and start the scsetup(1M) utility.

    2. Use the scsetup utility to add the new node's name to the list of authorized cluster nodes.

    For more information, see How to Add a Node to the Authorized Node List in Sun Cluster System Administration Guide for Solaris OS.

  3. On a cluster node or another machine of the same server platform, install the Solaris OS, if you have not already done so.

    Follow procedures in How to Install Solaris Software.

  4. On the installed system, install Sun Cluster software, if you have not done so already.

    Follow procedures in How to Install Sun Cluster Framework and Data-Service Software Packages (Java ES installer).

  5. Enable the common agent container daemon to start automatically during system boots.


    # cacaoadm enable
    
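    To confirm that the daemon is now enabled, you can check its status. The exact output varies by release.


    # cacaoadm status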
  6. On the installed system, install any necessary patches to support Sun Cluster software.

  7. On the installed system, update the /etc/inet/hosts file with all IP addresses that are used in the cluster.

    Perform this step regardless of whether you are using a naming service. See IP Addresses for a listing of Sun Cluster components whose IP addresses you must add.

  8. For Solaris 10, on the installed system, update the /etc/inet/ipnodes file with all IP addresses that are used in the cluster.

    Perform this step regardless of whether you are using a naming service.

  9. Create the flash archive of the installed system.


    # flarcreate -n name archive
    
    -n name

    Name to give the flash archive.

    archive

    File name to give the flash archive, with the full path. By convention, the file name ends in .flar.
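    For example, the following command creates an archive named sc31-node and stores it in the /export/flash/ directory. Both names are illustrative only.


    # flarcreate -n sc31-node /export/flash/sc31-node.flar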

    Follow procedures in one of the following manuals:

  10. Ensure that the flash archive is NFS exported for reading by the JumpStart installation server.

    See Solaris NFS Environment in System Administration Guide, Volume 3 (Solaris 8) or Managing Network File Systems (Overview), in System Administration Guide: Network Services (Solaris 9 or Solaris 10) for more information about automatic file sharing.

    See also the share(1M) and dfstab(4) man pages.
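    For example, an entry similar to the following in the /etc/dfs/dfstab file shares the archive directory read-only. The directory name is illustrative only.


    share -F nfs -o ro -d "flash archives" /export/flash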

  11. From the JumpStart installation server, start the scinstall(1M) utility.

    The path /export/suncluster/sc31/ is used here as an example of the installation directory that you created. In the CD-ROM path, replace arch with sparc or x86 and replace ver with 8 for Solaris 8, 9 for Solaris 9, or 10 for Solaris 10.


    # cd /export/suncluster/sc31/Solaris_arch/Product/sun_cluster/ \
    Solaris_ver/Tools/
    # ./scinstall
    
  12. From the Main Menu, choose the menu item, Configure a cluster to be JumpStarted from this installation server.

    This option is used to configure custom JumpStart finish scripts. JumpStart uses these finish scripts to install the Sun Cluster software.


     *** Main Menu ***
     
        Please select from one of the following (*) options:
     
          * 1) Install a cluster or cluster node
          * 2) Configure a cluster to be JumpStarted from this install server
            3) Add support for new data services to this cluster node
            4) Upgrade this cluster node
          * 5) Print release information for this cluster node
     
          * ?) Help with menu options
          * q) Quit
     
        Option:  2
    
  13. Follow the menu prompts to supply your answers from the configuration planning worksheet.

    The scinstall command stores your configuration information and copies the default autoscinstall.class class file into the jumpstart-dir/autoscinstall.d/3.1/ directory. This file is similar to the following example.


    install_type    initial_install
    system_type     standalone
    partitioning    explicit
    filesys         rootdisk.s0 free /
    filesys         rootdisk.s1 750  swap
    filesys         rootdisk.s3 512  /globaldevices
    filesys         rootdisk.s7 20
    cluster         SUNWCuser        add
    package         SUNWman          add
  14. Make adjustments to the autoscinstall.class file to configure JumpStart to install the flash archive.

    1. Modify entries as necessary to match configuration choices you made when you installed the Solaris OS on the flash archive machine or when you ran the scinstall utility.

      For example, if you assigned slice 4 for the global-devices file system and specified to scinstall that the file-system name is /gdevs, you would change the /globaldevices entry of the autoscinstall.class file to the following:


      filesys         rootdisk.s4 512  /gdevs
    2. Change the following entries in the autoscinstall.class file.

      • Replace the existing entry install_type initial_install with the new entry install_type flash_install.

      • Replace the existing entry system_type standalone with the new entry archive_location retrieval_type location.

      See archive_location Keyword in Solaris 8 Advanced Installation Guide, Solaris 9 9/04 Installation Guide, or Solaris 10 Installation Guide: Custom JumpStart and Advanced Installations for information about valid values for retrieval_type and location when used with the archive_location keyword.

    3. Remove all entries that would install a specific package, such as the following entries.


      cluster         SUNWCuser        add
      package         SUNWman          add
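    After these edits, the class file might resemble the following sketch. The installation-server name and archive path are examples only.


    install_type      flash_install
    archive_location  nfs installserver:/export/flash/sc31-node.flar
    partitioning      explicit
    filesys           rootdisk.s0 free /
    filesys           rootdisk.s1 750  swap
    filesys           rootdisk.s3 512  /globaldevices
    filesys           rootdisk.s7 20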
  15. Set up Solaris patch directories, if you did not already install the patches on the flash-archived system.


    Note –

    If you specified a patch directory to the scinstall utility, patches that are located in Solaris patch directories are not installed.


    1. Create jumpstart-dir/autoscinstall.d/nodes/node/patches/ directories that are NFS exported for reading by the JumpStart installation server.

      Create one directory for each node in the cluster, where node is the name of a cluster node. Alternately, use this naming convention to create symbolic links to a shared patch directory.


      # mkdir jumpstart-dir/autoscinstall.d/nodes/node/patches/
      
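      For example, the following commands link one node's patch directory to a shared patch directory instead of maintaining a separate copy for each node. The node name and shared-directory path are examples only.


      # mkdir -p jumpstart-dir/autoscinstall.d/nodes/phys-schost-1
      # ln -s /export/patches jumpstart-dir/autoscinstall.d/nodes/phys-schost-1/patches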
    2. Place copies of any Solaris patches into each of these directories.

    3. Place copies of any hardware-related patches that you must install after Solaris software is installed into each of these directories.

  16. If you are using a cluster administrative console, display a console screen for each node in the cluster.

    • If Cluster Control Panel (CCP) software is installed and configured on your administrative console, use the cconsole(1M) utility to display the individual console screens.

      Use the following command to start the cconsole utility:


      # /opt/SUNWcluster/bin/cconsole clustername &
      

      The cconsole utility also opens a master window from which you can send your input to all individual console windows at the same time.

    • If you do not use the cconsole utility, connect to the consoles of each node individually.

  17. Shut down each node.


    # shutdown -g0 -y -i0
    
  18. Boot each node to start the JumpStart installation.

    • On SPARC based systems, do the following:


      ok boot net - install
      

      Note –

      Surround the dash (-) in the command with a space on each side.


    • On x86 based systems, do the following:

      1. When the BIOS information screen appears, press the Esc key.

        The Select Boot Device screen appears.

      2. On the Select Boot Device screen, choose the listed IBA that is connected to the same network as the JumpStart PXE installation server.

        The lowest number to the right of the IBA boot choices corresponds to the lower Ethernet port number. The higher number to the right of the IBA boot choices corresponds to the higher Ethernet port number.

        The node reboots and the Device Configuration Assistant appears.

      3. On the Boot Solaris screen, choose Net.

      4. At the following prompt, choose Custom JumpStart and press Enter:


        Select the type of installation you want to perform:
        
                 1 Solaris Interactive
                 2 Custom JumpStart
        
        Enter the number of your choice followed by the <ENTER> key.
        
        If you enter anything else, or if you wait for 30 seconds,
        an interactive installation will be started.
      5. When prompted, answer the questions and follow the instructions on the screen.

    JumpStart installs the Solaris OS and Sun Cluster software on each node. When the installation is successfully completed, each node is fully installed as a new cluster node. Sun Cluster installation output is logged in a /var/cluster/logs/install/scinstall.log.N file.

  19. For the Solaris 10 OS, verify on each node that multi-user services for the Service Management Facility (SMF) are online.

    If services are not yet online for a node, wait until the state becomes online before you proceed to the next step.


    # svcs multi-user-server
    STATE          STIME    FMRI
    online         17:52:55 svc:/milestone/multi-user-server:default
  20. If you are installing a new node to an existing cluster, create mount points on the new node for all existing cluster file systems.

    1. From another cluster node that is active, display the names of all cluster file systems.


      % mount | grep global | egrep -v node@ | awk '{print $1}'
      
    2. On the node that you added to the cluster, create a mount point for each cluster file system in the cluster.


      % mkdir -p mountpoint
      

      For example, if a file-system name that is returned by the mount command is /global/dg-schost-1, run mkdir -p /global/dg-schost-1 on the node that is being added to the cluster.


      Note –

      The mount points become active after you reboot the cluster in Step 24.


    3. If VERITAS Volume Manager (VxVM) is installed on any nodes that are already in the cluster, view the vxio number on each VxVM–installed node.


      # grep vxio /etc/name_to_major
      vxio NNN
      
      • Ensure that the same vxio number is used on each of the VxVM-installed nodes.

      • Ensure that the vxio number is available for use on each of the nodes that do not have VxVM installed.

      • If the vxio number is already in use on a node that does not have VxVM installed, free the number on that node. Change the /etc/name_to_major entry to use a different number.

  21. (Optional) To use dynamic reconfiguration on Sun Enterprise 10000 servers, add the following entry to the /etc/system file. Add this entry on each node in the cluster.


    set kernel_cage_enable=1

    This entry becomes effective after the next system reboot. See the Sun Cluster System Administration Guide for Solaris OS for procedures to perform dynamic reconfiguration tasks in a Sun Cluster configuration. See your server documentation for more information about dynamic reconfiguration.

  22. To re-enable the loopback file system (LOFS), delete the following entry from the /etc/system file on each node of the cluster.


    exclude:lofs

    The re-enabling of LOFS becomes effective after the next system reboot.


    Note –

    You cannot have LOFS enabled if you use Sun Cluster HA for NFS on a highly available local file system and have automountd running. LOFS can cause switchover problems for Sun Cluster HA for NFS. If you enable LOFS and later choose to add Sun Cluster HA for NFS on a highly available local file system, you must do one of the following:

    • Restore the exclude:lofs entry to the /etc/system file on each node of the cluster and reboot each node. This change disables LOFS.

    • Disable the automountd daemon.

    • Exclude from the automounter map all files that are part of the highly available local file system that is exported by Sun Cluster HA for NFS. This choice enables you to keep both LOFS and the automountd daemon enabled.


    See Types of File Systems in System Administration Guide, Volume 1 (Solaris 8) or The Loopback File System in System Administration Guide: Devices and File Systems (Solaris 9 or Solaris 10) for more information about loopback file systems.

  23. x86: Set the default boot file to kadb.


    # eeprom boot-file=kadb
    

    The setting of this value enables you to reboot the node if you are unable to access a login prompt.

  24. If you performed a task that requires a cluster reboot, follow these steps to reboot the cluster.

    The following are some of the tasks that require a reboot:

    • Adding a new node to an existing cluster

    • Installing patches that require a node or cluster reboot

    • Making configuration changes that require a reboot to become active

    1. From one node, shut down the cluster.


      # scshutdown
      

      Note –

      Do not reboot the first-installed node of the cluster until after the cluster is shut down. Until cluster installation mode is disabled, only the first-installed node, which established the cluster, has a quorum vote. In an established cluster that is still in installation mode, if the cluster is not shut down before the first-installed node is rebooted, the remaining cluster nodes cannot obtain quorum. The entire cluster then shuts down.

      Cluster nodes remain in installation mode until the first time that you run the scsetup(1M) command. You run this command during the procedure How to Configure Quorum Devices.


    2. Reboot each node in the cluster.

      • On SPARC based systems, do the following:


        ok boot
        
      • On x86 based systems, do the following:


                             <<< Current Boot Parameters >>>
        Boot path: /pci@0,0/pci-ide@7,1/ata@1/cmdk@0,0:b
        Boot args:
        
        Type   b [file-name] [boot-flags] <ENTER>  to boot with options
        or     i <ENTER>                           to enter boot interpreter
        or     <ENTER>                             to boot with defaults
        
                         <<< timeout in 5 seconds >>>
        Select (b)oot or (i)nterpreter: b
        

    The cluster is established when all nodes have successfully booted into the cluster. Sun Cluster installation output is logged in a /var/cluster/logs/install/scinstall.log.N file.

  25. From one node, verify that all nodes have joined the cluster.

    Run the scstat(1M) command to display a list of the cluster nodes. You do not need to be logged in as superuser to run this command.


    % scstat -n
    

    Output resembles the following.


    -- Cluster Nodes --
                               Node name      Status
                               ---------      ------
      Cluster node:            phys-schost-1  Online
      Cluster node:            phys-schost-2  Online
Next Steps

If you added a node to a two-node cluster, go to How to Update SCSI Reservations After Adding a Node.

If you intend to install data services, go to the appropriate procedure for the data service that you want to install and for your version of the Solaris OS:

 

Sun Cluster 2 of 2 CD-ROM (Sun Java System data services):

  • Solaris 8 or 9 – How to Install Sun Cluster Framework and Data-Service Software Packages (Java ES installer)

  • Solaris 10 – How to Install Data-Service Software Packages (pkgadd)

Sun Cluster Agents CD (all other data services):

  • Solaris 8 or 9, or Solaris 10 – How to Install Data-Service Software Packages (scinstall) or How to Install Data-Service Software Packages (Web Start installer)

Otherwise, go to the next appropriate procedure:

Troubleshooting

Disabled scinstall option – If the JumpStart option of the scinstall command does not have an asterisk in front, the option is disabled. This condition indicates that JumpStart setup is not complete or that the setup has an error. To correct this condition, first quit the scinstall utility. Repeat Step 1 through Step 10 to correct JumpStart setup, then restart the scinstall utility.

Error messages about nonexistent nodes – Unless you have installed your own /etc/inet/ntp.conf file, the scinstall command installs a default ntp.conf file for you. The default file is shipped with references to the maximum number of nodes. Therefore, the xntpd(1M) daemon might issue error messages regarding some of these references at boot time. You can safely ignore these messages. See How to Configure Network Time Protocol (NTP) for information about how to suppress these messages under otherwise normal cluster conditions.

Changing the private-network address – You cannot change the private-network address and netmask after scinstall processing has finished. If you need to use a different private-network address or netmask and the node is still in installation mode, follow the procedures in How to Uninstall Sun Cluster Software to Correct Installation Problems. Then repeat this procedure to reinstall and configure the node with the correct information.

Using SunPlex Installer to Configure Sun Cluster Software


Note –

Do not use this configuration method in the following circumstances:


This section describes how to use SunPlex Installer, the installation module of SunPlex Manager, to establish a new cluster. You can also use SunPlex Installer to install or configure one or more of the following additional software products:

Installation Requirements

The following table lists SunPlex Installer installation requirements for these additional software products.

Table 2–3 Requirements to Use SunPlex Installer to Install Software

Solstice DiskSuite or Solaris Volume Manager:

  • A partition that uses /sds as the mount-point name. The partition must be at least 20 Mbytes in size.

Sun Cluster HA for NFS data service:

  • At least two shared disks, of the same size, that are connected to the same set of nodes.

  • Solstice DiskSuite software installed, or Solaris Volume Manager software configured, by SunPlex Installer.

  • A logical hostname for use by Sun Cluster HA for NFS. The logical hostname must have a valid IP address that is accessible by all cluster nodes. The IP address must be on the same subnet as any of the adapters in the IP Network Multipathing group that hosts the logical address.

  • A test IP address for each node of the cluster. SunPlex Installer uses these test IP addresses to create IP Network Multipathing groups for use by Sun Cluster HA for NFS.

Sun Cluster HA for Apache scalable data service:

  • At least two shared disks, of the same size, that are connected to the same set of nodes.

  • Solstice DiskSuite software installed, or Solaris Volume Manager software configured, by SunPlex Installer.

  • A shared address for use by Sun Cluster HA for Apache. The shared address must have a valid IP address that is accessible by all cluster nodes. The IP address must be on the same subnet as any of the adapters in the IP Network Multipathing group that hosts the logical address.

  • A test IP address for each node of the cluster. SunPlex Installer uses these test IP addresses to create IP Network Multipathing groups for use by Sun Cluster HA for Apache.

Test IP Addresses

The test IP addresses that you supply must meet the following requirements:

The following table lists each metaset name and cluster-file-system mount point that is created by SunPlex Installer. The number of metasets and mount points that SunPlex Installer creates depends on the number of shared disks that are connected to the node. For example, if a node is connected to four shared disks, SunPlex Installer creates the mirror-1 and mirror-2 metasets. However, SunPlex Installer does not create the mirror-3 metaset, because the node does not have enough shared disks to create a third metaset.

Table 2–4 Metasets Created by SunPlex Installer

  • First pair of shared disks – metaset mirror-1, cluster file system mount point /global/mirror-1. Purpose: Sun Cluster HA for NFS or the Sun Cluster HA for Apache scalable data service, or both.

  • Second pair of shared disks – metaset mirror-2, cluster file system mount point /global/mirror-2. Unused.

  • Third pair of shared disks – metaset mirror-3, cluster file system mount point /global/mirror-3. Unused.


Note –

If the cluster does not meet the minimum shared-disk requirement, SunPlex Installer still installs the Solstice DiskSuite packages. However, without sufficient shared disks, SunPlex Installer cannot configure the metasets, metadevices, or volumes. SunPlex Installer then cannot configure the cluster file systems that are needed to create instances of the data service.


Character-Set Limitations

SunPlex Installer recognizes a limited character set to increase security. Characters that are not a part of the set are silently filtered out when HTML forms are submitted to the SunPlex Installer server. The following characters are accepted by SunPlex Installer:


()+,-./0-9:=@A-Z^_a-z{|}~

This filter can cause problems in the following two areas:

How to Configure Sun Cluster Software (SunPlex Installer)

Perform this procedure to use SunPlex Installer to configure Sun Cluster software and install patches on all nodes in the cluster in a single operation. In addition, you can use this procedure to install Solstice DiskSuite software and patches (Solaris 8) and to configure Solstice DiskSuite or Solaris Volume Manager mirrored disk sets.


Note –

Do not use this configuration method in the following circumstances:


The installation process might take from 30 minutes to two or more hours. The actual length of time depends on the number of nodes that are in the cluster, your choice of data services to install, and the number of disks that are in your cluster configuration.

Before You Begin

Perform the following tasks:

Steps
  1. Prepare file-system paths to a CD-ROM image of each software product that you intend to install.

    Follow these guidelines to prepare the file-system paths:

    • Provide each CD-ROM image in a location that is available to each node.

    • Ensure that the CD-ROM images are accessible to all nodes of the cluster from the same file-system path. These paths can be one or more of the following locations:

      • CD-ROM drives that are exported to the network from machines outside the cluster.

      • Exported file systems on machines outside the cluster.

      • CD-ROM images that are copied to local file systems on each node of the cluster. The local file system must use the same name on each node.

  2. x86: Determine whether you are using the Netscape Navigator™ browser or the Microsoft Internet Explorer browser on your administrative console.

    • If you are using Netscape Navigator, proceed to Step 3.

    • If you are using Internet Explorer, skip to Step 4.

  3. x86: Ensure that the Java plug-in is installed and working on your administrative console.

    1. Start the Netscape Navigator browser on the administrative console that you use to connect to the cluster.

    2. From the Help menu, choose About Plug-ins.

    3. Determine whether the Java plug-in is listed.

    4. Download the latest Java plug-in from http://java.sun.com/products/plugin.

    5. Install the plug-in on your administrative console.

    6. Create a symbolic link to the plug-in.


      % cd ~/.netscape/plugins/
      % ln -s /usr/j2se/plugin/i386/ns4/javaplugin.so .
      
    7. Skip to Step 5.

  4. x86: Ensure that Java 2 Platform, Standard Edition (J2SE) for Windows is installed and working on your administrative console.

    1. On your Microsoft Windows desktop, click Start, point to Settings, and then select Control Panel.

      The Control Panel window appears.

    2. Determine whether the Java Plug-in is listed.

      • If no, proceed to Step c.

      • If yes, double-click the Java Plug-in control panel. When the control panel window opens, click the About tab.

        • If an earlier version is shown, proceed to Step c.

        • If version 1.4.1 or a later version is shown, skip to Step 5.

    3. Download the latest version of J2SE for Windows from http://java.sun.com/j2se/downloads.html.

    4. Install the J2SE for Windows software on your administrative console.

    5. Restart the system on which your administrative console runs.

      The J2SE for Windows control panel is activated.

  5. If patches exist that are required to support Sun Cluster or Solstice DiskSuite software, determine how to install those patches.

    • To manually install patches, use the patchadd command to install all patches before you use SunPlex Installer.

    • To use SunPlex Installer to install patches, copy patches into a single directory.

      Ensure that the patch directory meets the following requirements:

      • The patch directory resides on a file system that is available to each node.

      • Only one version of each patch is present in this patch directory. If the patch directory contains multiple versions of the same patch, SunPlex Installer cannot determine the correct patch dependency order.

      • The patches are uncompressed.

  6. From the administrative console or any other machine outside the cluster, launch a browser.

  7. Disable the browser's Web proxy.

    SunPlex Installer installation functionality is incompatible with Web proxies.

  8. Ensure that disk caching and memory caching are enabled.

    The disk cache and memory cache sizes must be greater than 0.

  9. From the browser, connect to port 3000 on a node of the cluster.


    https://node:3000
    

    The Sun Cluster Installation screen is displayed in the browser window.


    Note –

    If SunPlex Installer displays the data services installation screen instead of the Sun Cluster Installation screen, Sun Cluster framework software is already installed and configured on that node. Check that the name of the node in the URL is the correct name of the cluster node to install.


  10. If the browser displays a New Site Certification window, follow the onscreen instructions to accept the certificate.

  11. Log in as superuser.

  12. In the Sun Cluster Installation screen, verify that the cluster meets the listed requirements for using SunPlex Installer.

    If you meet all listed requirements, click Next to continue to the next screen.

  13. Follow the menu prompts to supply your answers from the configuration planning worksheet.

  14. Click Begin Installation to start the installation process.

    Follow these guidelines to use SunPlex Installer:

    • Do not close the browser window or change the URL during the installation process.

    • If the browser displays a New Site Certification window, follow the onscreen instructions to accept the certificate.

    • If the browser prompts for login information, type the appropriate superuser ID and password for the node that you connect to.

    SunPlex Installer installs and configures all cluster nodes and reboots the cluster. The cluster is established when all nodes have successfully booted into the cluster.

    During installation, the screen displays brief messages about the status of the cluster installation process. When installation and configuration are complete, the browser displays the cluster monitoring and administration GUI.

    SunPlex Installer installation output is logged in the /var/cluster/spm/messages file. Sun Cluster installation output is logged in a /var/cluster/logs/install/scinstall.log.N file.

  15. From one node, verify that all nodes have joined the cluster.

    Run the scstat(1M) command to display a list of the cluster nodes. You do not need to be logged in as superuser to run this command.


    % scstat -n
    

    Output resembles the following.


    -- Cluster Nodes --
                               Node name      Status
                               ---------      ------
      Cluster node:            phys-schost-1  Online
      Cluster node:            phys-schost-2  Online
  16. Verify the quorum assignments and modify those assignments, if necessary.

    For clusters with three or more nodes, the use of shared quorum devices is optional. SunPlex Installer might or might not have assigned quorum votes to any quorum devices, depending on whether appropriate shared disks were available. You can use SunPlex Manager to designate quorum devices and to reassign quorum votes in the cluster. See Chapter 5, Administering Quorum, in Sun Cluster System Administration Guide for Solaris OS for more information.

  17. To re-enable the loopback file system (LOFS), delete the following entry from the /etc/system file on each node of the cluster.


    exclude:lofs

    The re-enabling of LOFS becomes effective after the next system reboot.


    Note –

    You cannot have LOFS enabled if you use Sun Cluster HA for NFS on a highly available local file system and have automountd running. LOFS can cause switchover problems for Sun Cluster HA for NFS. If you enable LOFS and later choose to add Sun Cluster HA for NFS on a highly available local file system, you must do one of the following:

    • Restore the exclude:lofs entry to the /etc/system file on each node of the cluster and reboot each node. This change disables LOFS.

    • Disable the automountd daemon.

    • Exclude from the automounter map all files that are part of the highly available local file system that is exported by Sun Cluster HA for NFS. This choice enables you to keep both LOFS and the automountd daemon enabled.


    See Types of File Systems in System Administration Guide, Volume 1 (Solaris 8) or The Loopback File System in System Administration Guide: Devices and File Systems (Solaris 9 or Solaris 10) for more information about loopback file systems.

Next Steps

If you intend to install data services, go to the appropriate procedure for the data service that you want to install and for your version of the Solaris OS:

 

Sun Cluster 2 of 2 CD-ROM (Sun Java System data services):

  • Solaris 8 or 9 – How to Install Sun Cluster Framework and Data-Service Software Packages (Java ES installer)

  • Solaris 10 – How to Install Data-Service Software Packages (pkgadd)

Sun Cluster Agents CD (all other data services):

  • Solaris 8 or 9, or Solaris 10 – How to Install Data-Service Software Packages (scinstall) or How to Install Data-Service Software Packages (Web Start installer)

Otherwise, go to How to Verify the Quorum Configuration and Installation Mode.

Troubleshooting

You cannot change the private-network address and netmask after scinstall processing has finished. If you need to use a different private-network address or netmask and the node is still in installation mode, follow the procedures in How to Uninstall Sun Cluster Software to Correct Installation Problems. Then repeat this procedure to reinstall and configure the node with the correct information.

How to Configure Sun Cluster Software on Additional Cluster Nodes (scinstall)

Perform this procedure to add a new node to an existing cluster. To add a new node by using JumpStart instead, follow the procedures in How to Install Solaris and Sun Cluster Software (JumpStart).

Before You Begin

Perform the following tasks:

Follow these guidelines to use the interactive scinstall utility in this procedure:

Steps
  1. If you are adding this node to a single-node cluster, ensure that two cluster interconnects already exist by displaying the interconnect configuration.


    # scconf -p | grep cable
    # scconf -p | grep adapter
    

    You must have at least two cables or two adapters configured before you can add a node.

    • If the output shows configuration information for two cables or for two adapters, proceed to Step 2.

    • If the output shows no configuration information for either cables or adapters, or shows configuration information for only one cable or adapter, configure new cluster interconnects.

      1. On the existing cluster node, start the scsetup(1M) utility.


        # scsetup
        
      2. Choose the menu item, Cluster interconnect.

      3. Choose the menu item, Add a transport cable.

        Follow the instructions to specify the name of the node to add to the cluster, the name of a transport adapter, and whether to use a transport junction.

      4. If necessary, repeat Step c to configure a second cluster interconnect.

        When finished, quit the scsetup utility.

      5. Verify that the cluster now has two cluster interconnects configured.


        # scconf -p | grep cable
        # scconf -p | grep adapter
        

        The command output should show configuration information for at least two cluster interconnects.
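    As a command-line alternative to the scsetup menus, the scconf command can also add a transport cable, provided that the transport adapters are already configured. The node, adapter, and junction names shown here are examples only.


    # scconf -a -m endpoint=phys-schost-2:qfe2,endpoint=switch1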

  2. If you are adding this node to an existing cluster, add the new node to the cluster authorized-nodes list.

    1. On any active cluster member, start the scsetup(1M) utility.


      # scsetup
      

      The Main Menu is displayed.

    2. Choose the menu item, New nodes.

    3. Choose the menu item, Specify the name of a machine which may add itself.

    4. Follow the prompts to add the node's name to the list of recognized machines.

      The scsetup utility prints the message Command completed successfully if the task is completed without error.

    5. Quit the scsetup utility.

  3. Become superuser on the cluster node to configure.

  4. Start the scinstall utility.


    # /usr/cluster/bin/scinstall
    
  5. From the Main Menu, choose the menu item, Install a cluster or cluster node.


      *** Main Menu ***
    
        Please select from one of the following (*) options:
    
          * 1) Install a cluster or cluster node
            2) Configure a cluster to be JumpStarted from this install server
            3) Add support for new data services to this cluster node
            4) Upgrade this cluster node
          * 5) Print release information for this cluster node
    
          * ?) Help with menu options
          * q) Quit
    
        Option:  1
    
  6. From the Install Menu, choose the menu item, Add this machine as a node in an existing cluster.

  7. Follow the menu prompts to supply your answers from the configuration planning worksheet.

    The scinstall utility configures the node and boots the node into the cluster.

  8. Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.


    # eject cdrom
    
  9. Install any necessary patches to support Sun Cluster software, if you have not already done so.

  10. Repeat this procedure on any other node to add to the cluster until all additional nodes are fully configured.

  11. For the Solaris 10 OS, verify on each node that multi-user services for the Service Management Facility (SMF) are online.

    If services are not yet online for a node, wait until the state becomes online before you proceed to the next step.


    # svcs multi-user-server
    STATE          STIME    FMRI
    online         17:52:55 svc:/milestone/multi-user-server:default
  12. From an active cluster member, prevent any other nodes from joining the cluster.


    # /usr/cluster/bin/scconf -a -T node=.
    
    -a

    Specifies the add form of the command

    -T

    Specifies authentication options

    node=.

    Specifies the node name of dot (.) to add to the authentication list, to prevent any other node from adding itself to the cluster

    Alternately, you can use the scsetup(1M) utility. See How to Add a Node to the Authorized Node List in Sun Cluster System Administration Guide for Solaris OS for procedures.

  13. From one node, verify that all nodes have joined the cluster.

    Run the scstat(1M) command to display a list of the cluster nodes. You do not need to be logged in as superuser to run this command.


    % scstat -n
    

    Output resembles the following.


    -- Cluster Nodes --
                               Node name      Status
                               ---------      ------
      Cluster node:            phys-schost-1  Online
      Cluster node:            phys-schost-2  Online
  14. To re-enable the loopback file system (LOFS), delete the following entry from the /etc/system file on each node of the cluster.


    exclude:lofs

    The re-enabling of LOFS becomes effective after the next system reboot.


    Note –

    You cannot have LOFS enabled if you use Sun Cluster HA for NFS on a highly available local file system and have automountd running. LOFS can cause switchover problems for Sun Cluster HA for NFS. If you enable LOFS and later choose to add Sun Cluster HA for NFS on a highly available local file system, you must do one of the following:

    • Restore the exclude:lofs entry to the /etc/system file on each node of the cluster and reboot each node. This change disables LOFS.

    • Disable the automountd daemon.

    • Exclude from the automounter map all files that are part of the highly available local file system that is exported by Sun Cluster HA for NFS. This choice enables you to keep both LOFS and the automountd daemon enabled.


    See Types of File Systems in System Administration Guide, Volume 1 (Solaris 8) or The Loopback File System in System Administration Guide: Devices and File Systems (Solaris 9 or Solaris 10) for more information about loopback file systems.


Example 2–2 Configuring Sun Cluster Software on an Additional Node

The following example shows the node phys-schost-3 added to the cluster schost. The sponsoring node is phys-schost-1.


*** Adding a Node to an Existing Cluster ***
Fri Feb  4 10:17:53 PST 2005


scinstall -ik -C schost -N phys-schost-1 -A trtype=dlpi,name=qfe2 -A trtype=dlpi,name=qfe3 
-m endpoint=:qfe2,endpoint=switch1 -m endpoint=:qfe3,endpoint=switch2


Checking device to use for global devices file system ... done

Adding node "phys-schost-3" to the cluster configuration ... done
Adding adapter "qfe2" to the cluster configuration ... done
Adding adapter "qfe3" to the cluster configuration ... done
Adding cable to the cluster configuration ... done
Adding cable to the cluster configuration ... done

Copying the config from "phys-schost-1" ... done

Copying the postconfig file from "phys-schost-1" if it exists ... done
Copying the Common Agent Container keys from "phys-schost-1" ... done


Setting the node ID for "phys-schost-3" ... done (id=3)

Setting the major number for the "did" driver ... 
Obtaining the major number for the "did" driver from "phys-schost-1" ... done
"did" driver major number set to 300

Checking for global devices global file system ... done
Updating vfstab ... done

Verifying that NTP is configured ... done
Initializing NTP configuration ... done

Updating nsswitch.conf ... 
done

Adding clusternode entries to /etc/inet/hosts ... done


Configuring IP Multipathing groups in "/etc/hostname.<adapter>" files

Updating "/etc/hostname.hme0".

Verifying that power management is NOT configured ... done

Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ... done
The "local-mac-address?" parameter setting has been changed to "true".

Ensure network routing is disabled ... done

Updating file ("ntp.conf.cluster") on node phys-schost-1 ... done
Updating file ("hosts") on node phys-schost-1 ... done

Rebooting ... 

Next Steps

Determine your next step:

If you added a node to a two-node cluster, go to How to Update SCSI Reservations After Adding a Node.

If you intend to install data services, go to the appropriate procedure for the data service that you want to install and for your version of the Solaris OS:

 

Sun Cluster 2 of 2 CD-ROM (Sun Java System data services):

  • Solaris 8 or 9 – How to Install Sun Cluster Framework and Data-Service Software Packages (Java ES installer)

  • Solaris 10 – How to Install Data-Service Software Packages (pkgadd)

Sun Cluster Agents CD (all other data services):

  • Solaris 8 or 9, or Solaris 10 – How to Install Data-Service Software Packages (scinstall) or How to Install Data-Service Software Packages (Web Start installer)

Otherwise, go to How to Verify the Quorum Configuration and Installation Mode.

Troubleshooting

When you increase or decrease the number of node attachments to a quorum device, the cluster does not automatically recalculate the quorum vote count. To reestablish the correct quorum vote, use the scsetup utility to remove each quorum device and then add it back into the configuration. Do this on one quorum device at a time.

If the cluster has only one quorum device, configure a second quorum device before you remove and re-add the original quorum device. Then remove the second quorum device to return the cluster to its original configuration.
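A minimal sketch of the remove-and-re-add sequence for one quorum device follows. The device name d3 is an example only; substitute each of your quorum device names in turn.


# scconf -r -q globaldev=d3
# scconf -a -q globaldev=d3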

How to Update SCSI Reservations After Adding a Node

If you added a node to a two-node cluster that uses one or more shared SCSI disks as quorum devices, you must update the SCSI Persistent Group Reservations (PGR). To do so, remove the quorum devices, which have SCSI-2 reservations. If you then add back quorum devices, the newly configured devices use SCSI-3 reservations.

Before You Begin

Ensure that you have completed installation of Sun Cluster software on the added node.

Steps
  1. Become superuser on any node of the cluster.

  2. View the current quorum configuration.

    The output lists the status of each quorum device that is currently configured.


    # scstat -q
    

    Note the name of each quorum device that is listed.

  3. Remove the original quorum device.

    Perform this step for each quorum device that is configured.


    # scconf -r -q globaldev=devicename
    
    -r

    Removes

    -q globaldev=devicename

    Specifies the name of the quorum device

  4. Verify that all original quorum devices are removed.


    # scstat -q
    
  5. (Optional) Add a SCSI quorum device.

    You can configure the same device that was originally configured as the quorum device or choose a new shared device to configure.

    1. (Optional) If you want to choose a new shared device to configure as a quorum device, display all devices that the system checks.

      Otherwise, skip to Step c.


      # scdidadm -L
      

      Output resembles the following:


      1       phys-schost-1:/dev/rdsk/c0t0d0 /dev/did/rdsk/d1
      2       phys-schost-1:/dev/rdsk/c1t1d0 /dev/did/rdsk/d2
      2       phys-schost-2:/dev/rdsk/c1t1d0 /dev/did/rdsk/d2
      3       phys-schost-1:/dev/rdsk/c1t2d0 /dev/did/rdsk/d3
      3       phys-schost-2:/dev/rdsk/c1t2d0 /dev/did/rdsk/d3
      …
    2. From the output, choose a shared device to configure as a quorum device.

    3. Configure the shared device as a quorum device.


      # scconf -a -q globaldev=devicename
      
      -a

      Adds

    4. Repeat for each quorum device that you want to configure.

  6. If you added any quorum devices, verify the new quorum configuration.


    # scstat -q
    

    Each new quorum device should be Online and have an assigned vote.


Example 2–3 Updating SCSI Reservations After Adding a Node

The following example identifies the original quorum device d2, removes that quorum device, lists the available shared devices, and configures d3 as a new quorum device.


(List quorum devices)
# scstat -q
…
-- Quorum Votes by Device --
 
                    Device Name         Present Possible Status
                    -----------         ------- -------- ------
  Device votes:     /dev/did/rdsk/d2s2  1        1       Online

(Remove the original quorum device)
# scconf -r -q globaldev=d2
 
(Verify the removal of the original quorum device)
# scstat -q
…
-- Quorum Votes by Device --
 
                    Device Name         Present Possible Status
                    -----------         ------- -------- ------
 
(List available devices)
# scdidadm -L
…
3       phys-schost-1:/dev/rdsk/c1t2d0 /dev/did/rdsk/d3
3       phys-schost-2:/dev/rdsk/c1t2d0 /dev/did/rdsk/d3
…
 
(Add a quorum device)
# scconf -a -q globaldev=d3
 
(Verify the addition of the new quorum device)
# scstat -q
…
-- Quorum Votes by Device --

                    Device Name         Present Possible Status
                    -----------         ------- -------- ------
  Device votes:     /dev/did/rdsk/d3s2 2        2       Online

Next Steps

How to Install Data-Service Software Packages (pkgadd)

Perform this procedure to install data services for the Solaris 10 OS from the Sun Cluster 2 of 2 CD-ROM. The Sun Cluster 2 of 2 CD-ROM contains the data services for Sun Java System applications. This procedure uses the pkgadd(1M) program to install the packages. Perform this procedure on each node in the cluster on which you want to run a chosen data service.


Note –

Do not use this procedure for the following kinds of data-service packages:


Steps
  1. Become superuser on the cluster node.

  2. Insert the Sun Cluster 2 of 2 CD-ROM in the CD-ROM drive.

    If the volume management daemon vold(1M) is running and is configured to manage CD-ROM devices, the daemon automatically mounts the CD-ROM on the /cdrom/cdrom0/ directory.

  3. Change to the Solaris_arch/Product/sun_cluster_agents/Solaris_10/Packages/ directory, where arch is sparc or x86.


    # cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster_agents/ \
    Solaris_10/Packages/
    
  4. Install the data-service packages in the global zone.


    # pkgadd -G -d . [packages]
    -G

    Adds packages to the current zone only. You must add Sun Cluster packages only to the global zone. This option also specifies that the packages are not propagated to any existing non-global zone or to any non-global zone that is created later.

    -d

    Specifies the location of the packages to install.

    packages

    Optional. Specifies the name of one or more packages to install. If no package name is specified, the pkgadd program displays a pick list of all packages that are available to install.
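
    For example, to install a single data-service package by name, you might use a command like the following. The package name SUNWscnfs (Sun Cluster HA for NFS) is shown only for illustration; use the names of the packages that you actually intend to install.


    # pkgadd -G -d . SUNWscnfs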

  5. Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.


    # eject cdrom
    
  6. Install any patches for the data services that you installed.

    See Patches and Required Firmware Levels in Sun Cluster 3.1 8/05 Release Notes for Solaris OS for the location of patches and installation instructions.

    You do not have to reboot after you install Sun Cluster data-service patches unless a reboot is specified by the patch special instructions. If a patch instruction requires that you reboot, perform the following steps, as shown in the sample sequence that follows:

    1. From one node, shut down the cluster by using the scshutdown(1M) command.

    2. Reboot each node in the cluster.
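
    For example, the shutdown and reboot might resemble the following sequence. The -y and -g0 options shown here are a common choice, not a requirement; see the scshutdown(1M) man page before you use them.


    (From one node, shut down the cluster; -y answers the confirmation
    prompt and -g0 specifies a grace period of zero seconds)
    # scshutdown -y -g0

    (Boot each node; on SPARC based systems, from the OpenBoot PROM)
    ok boot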


    Note –

    Until cluster installation mode is disabled, only the first-installed node, which established the cluster, has a quorum vote. In an established multiple-node cluster that is still in installation mode, if the cluster is not shut down before the first-installed node is rebooted, the remaining cluster nodes cannot obtain quorum. The entire cluster then shuts down.

    If you chose automatic quorum configuration during Sun Cluster installation or used SunPlex Installer to install Sun Cluster software, the installation utility automatically assigns quorum votes and removes the cluster from installation mode during installation reboot. However, if you did not choose one of these methods, cluster nodes remain in installation mode until you run the scsetup(1M) command, during the procedure How to Configure Quorum Devices.


Next Steps

Go to How to Configure Quorum Devices to assign quorum votes and to remove the cluster from installation mode, if this operation was not already performed.

ProcedureHow to Install Data-Service Software Packages (scinstall)

Perform this procedure to install data services from the Sun Cluster Agents CD of the Sun Cluster 3.1 8/05 release. This procedure uses the interactive scinstall utility to install the packages. Perform this procedure on each node in the cluster on which you want to run a chosen data service.


Note –

Do not use this procedure for the following kinds of data-service packages:


You do not need to perform this procedure if you used SunPlex Installer to install Sun Cluster HA for NFS or Sun Cluster HA for Apache or both and you do not intend to install any other data services. Instead, go to How to Configure Quorum Devices.

To install data services from the Sun Cluster 3.1 10/03 release or earlier, you can alternatively use the Web Start installer program to install the packages. See How to Install Data-Service Software Packages (Web Start installer).

Follow these guidelines to use the interactive scinstall utility in this procedure:

Steps
  1. Become superuser on the cluster node.

  2. Insert the Sun Cluster Agents CD in the CD-ROM drive on the node.

    If the volume management daemon vold(1M) is running and is configured to manage CD-ROM devices, the daemon automatically mounts the CD-ROM on the /cdrom/cdrom0/ directory.

  3. Change to the directory where the CD-ROM is mounted.


    # cd /cdrom/cdrom0/
    
  4. Start the scinstall(1M) utility.


    # scinstall
    
  5. From the Main Menu, choose the menu item, Add support for new data services to this cluster node.

  6. Follow the prompts to select the data services to install.

    You must install the same set of data-service packages on each node. This requirement applies even if a node is not expected to host resources for an installed data service.
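
    To confirm that each node has the same set of packages, you can compare the installed package lists on each node. The following sketch assumes that the data-service package names begin with the SUNWsc prefix; adjust the pattern if your packages are named differently.


    # pkginfo | grep SUNWsc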

  7. After the data services are installed, quit the scinstall utility.

  8. Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.


    # eject cdrom
    
  9. Install any Sun Cluster data-service patches.

    See Patches and Required Firmware Levels in Sun Cluster 3.1 8/05 Release Notes for Solaris OS for the location of patches and installation instructions.

    You do not have to reboot after you install Sun Cluster data-service patches unless a reboot is specified by the patch special instructions. If a patch instruction requires that you reboot, perform the following steps:

    1. From one node, shut down the cluster by using the scshutdown(1M) command.

    2. Reboot each node in the cluster.


    Note –

    Until cluster installation mode is disabled, only the first-installed node, which established the cluster, has a quorum vote. In an established multiple-node cluster that is still in installation mode, if the cluster is not shut down before the first-installed node is rebooted, the remaining cluster nodes cannot obtain quorum. The entire cluster then shuts down.

    If you chose automatic quorum configuration during Sun Cluster installation or used SunPlex Installer to install Sun Cluster software, the installation utility automatically assigns quorum votes and removes the cluster from installation mode during installation reboot. However, if you did not choose one of these methods, cluster nodes remain in installation mode until you run the scsetup(1M) command, during the procedure How to Configure Quorum Devices.


Next Steps

Go to How to Configure Quorum Devices to assign quorum votes and to remove the cluster from installation mode, if this operation was not already performed.

ProcedureHow to Install Data-Service Software Packages (Web Start installer)

Perform this procedure to install data services for the Solaris 8 or Solaris 9 OS from the Sun Cluster Agents CD. This procedure uses the Web Start installer program on the CD-ROM to install the packages. Perform this procedure on each node in the cluster on which you want to run a chosen data service.


Note –

Do not use this procedure for the following kinds of data-service packages:

You do not need to perform this procedure if you used SunPlex Installer to install Sun Cluster HA for NFS or Sun Cluster HA for Apache or both and you do not intend to install any other data services. Instead, go to How to Configure Quorum Devices.


To install data services from the Sun Cluster 3.1 10/03 release or earlier, you can alternatively follow the procedures in How to Install Data-Service Software Packages (scinstall).

You can run the installer program with a command-line interface (CLI) or with a graphical user interface (GUI). The content and sequence of instructions in the CLI and the GUI are similar. For more information about the installer program, see the installer(1M) man page.
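
For example, to run the installer with the CLI on a system that has no graphics display, you would typically start the program with the -nodisplay option. Verify this option on the installer(1M) man page for your release.


# ./installer -nodisplay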

Before You Begin

If you intend to use the installer program with a GUI, ensure that the DISPLAY environment variable is set.
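
For example, in the Bourne shell you might set DISPLAY as follows, where myworkstation:0.0 is a placeholder for your own X display:


# DISPLAY=myworkstation:0.0
# export DISPLAY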

Steps
  1. Become superuser on the cluster node.

  2. Insert the Sun Cluster Agents CD in the CD-ROM drive.

    If the volume management daemon vold(1M) is running and is configured to manage CD-ROM devices, the daemon automatically mounts the CD-ROM on the /cdrom/cdrom0/ directory.

  3. Change to the directory of the CD-ROM where the installer program resides.


    # cd /cdrom/cdrom0/Solaris_arch/
    

    In the Solaris_arch/ directory, arch is sparc or x86.

  4. Start the Web Start installer program.


    # ./installer
    
  5. When you are prompted, select the type of installation.

    • To install all data services on the CD-ROM, select Typical.

    • To install only a subset of the data services on the CD-ROM, select Custom.

  6. When you are prompted, select the locale to install.

    See the Sun Cluster Release Notes for a listing of the locales that are available for each data service.

    • To install only the C locale, select Typical.

    • To install other locales, select Custom.

  7. Follow instructions on the screen to install the data-service packages on the node.

    After the installation is finished, the installer program provides an installation summary. This summary enables you to view logs that the program created during the installation. These logs are located in the /var/sadm/install/logs/ directory.
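
    For example, to list the log files that the installer program created:


    # ls -l /var/sadm/install/logs/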

  8. Quit the installer program.

  9. Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.


    # eject cdrom
    
  10. Install any Sun Cluster data-service patches.

    See Patches and Required Firmware Levels in Sun Cluster 3.1 8/05 Release Notes for Solaris OS for the location of patches and installation instructions.

    You do not have to reboot after you install Sun Cluster data-service patches unless a reboot is specified by the patch special instructions. If a patch instruction requires that you reboot, perform the following steps:

    1. From one node, shut down the cluster by using the scshutdown(1M) command.

    2. Reboot each node in the cluster.


    Note –

    Until cluster installation mode is disabled, only the first-installed node, which established the cluster, has a quorum vote. In an established multiple-node cluster that is still in installation mode, if the cluster is not shut down before the first-installed node is rebooted, the remaining cluster nodes cannot obtain quorum. The entire cluster then shuts down.

    If you chose automatic quorum configuration during Sun Cluster installation or used SunPlex Installer to install Sun Cluster software, the installation utility automatically assigns quorum votes and removes the cluster from installation mode during installation reboot. However, if you did not choose one of these methods, cluster nodes remain in installation mode until you run the scsetup(1M) command, during the procedure How to Configure Quorum Devices.


Next Steps

Go to How to Configure Quorum Devices to assign quorum votes and to remove the cluster from installation mode, if this operation was not already performed.

ProcedureHow to Configure Quorum Devices


Note –

You do not need to configure quorum devices in the following circumstances:

Instead, proceed to How to Verify the Quorum Configuration and Installation Mode.


Perform this procedure one time only, after the cluster is fully formed. Use this procedure to assign quorum votes and then to remove the cluster from installation mode.

Before You Begin

If you intend to configure a Network Appliance network-attached storage (NAS) device as a quorum device, do the following:

Steps
  1. If you want to use a shared SCSI disk as a quorum device, verify device connectivity to the cluster nodes and choose the device to configure.

    1. From one node of the cluster, display a list of all the devices that the system checks.

      You do not need to be logged in as superuser to run this command.


      % scdidadm -L
      

      Output resembles the following:


      1       phys-schost-1:/dev/rdsk/c0t0d0 /dev/did/rdsk/d1
      2       phys-schost-1:/dev/rdsk/c1t1d0 /dev/did/rdsk/d2
      2       phys-schost-2:/dev/rdsk/c1t1d0 /dev/did/rdsk/d2
      3       phys-schost-1:/dev/rdsk/c1t2d0 /dev/did/rdsk/d3
      3       phys-schost-2:/dev/rdsk/c1t2d0 /dev/did/rdsk/d3
      …
    2. Ensure that the output shows all connections between cluster nodes and storage devices.

    3. Determine the global device-ID name of each shared disk that you are configuring as a quorum device.


      Note –

      Any shared disk that you choose must be qualified for use as a quorum device. See Quorum Devices for further information about choosing quorum devices.


      Use the scdidadm output from Step a to identify the device-ID name of each shared disk that you are configuring as a quorum device. For example, the output in Step a shows that global device d2 is shared by phys-schost-1 and phys-schost-2.
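
      For example, to confirm which nodes are connected to a particular shared disk, you can filter the scdidadm listing. The device name d2 is taken from the sample output in Step a.


      # scdidadm -L | grep d2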

  2. Become superuser on one node of the cluster.

  3. Start the scsetup(1M) utility.


    # scsetup
    

    The Initial Cluster Setup screen is displayed.


    Note –

    If the Main Menu is displayed instead, initial cluster setup was already successfully performed. Skip to Step 8.


  4. Answer the prompt Do you want to add any quorum disks?.

    • If your cluster is a two-node cluster, you must configure at least one shared quorum device. Type Yes to configure one or more quorum devices.

    • If your cluster has three or more nodes, quorum device configuration is optional.

      • Type No if you do not want to configure additional quorum devices. Then skip to Step 7.

      • Type Yes to configure additional quorum devices. Then proceed to Step 5.

  5. Specify what type of device you want to configure as a quorum device.

    • Choose scsi to configure a shared SCSI disk.

    • Choose netapp_nas to configure a Network Appliance NAS device.

  6. Specify the name of the device to configure as a quorum device.

    For a Network Appliance NAS device, also specify the following information:

    • The name of the NAS device

    • The LUN ID of the NAS device

  7. At the prompt Is it okay to reset "installmode"?, type Yes.

    After the scsetup utility sets the quorum configurations and vote counts for the cluster, the message Cluster initialization is complete is displayed. The utility returns you to the Main Menu.
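
    For reference, it is also possible to clear installation mode outside of the scsetup utility by resetting the quorum configuration with the scconf command. This is a sketch that assumes the reset suboption of -q, described in the scconf(1M) man page, clears installmode as part of the reset.


    # scconf -c -q reset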

  8. Quit the scsetup utility.

Next Steps

Verify the quorum configuration and that installation mode is disabled. Go to How to Verify the Quorum Configuration and Installation Mode.

Troubleshooting

Interrupted scsetup processing — If the quorum setup process is interrupted or fails to be completed successfully, rerun scsetup.

Changes to quorum vote count — If you later increase or decrease the number of node attachments to a quorum device, the quorum vote count is not automatically recalculated. You can reestablish the correct quorum vote by removing each quorum device and then adding it back into the configuration, one quorum device at a time. For a two-node cluster, temporarily add a new quorum device before you remove and add back the original quorum device. Then remove the temporary quorum device. See the procedure “How to Modify a Quorum Device Node List” in Chapter 5, Administering Quorum, in Sun Cluster System Administration Guide for Solaris OS.
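
For example, using the device names from Example 2–3, the remove-and-add sequence for a single quorum device resembles the following:


(Remove the quorum device)
# scconf -r -q globaldev=d2

(Add the same device back; the vote count is recalculated)
# scconf -a -q globaldev=d2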

ProcedureHow to Verify the Quorum Configuration and Installation Mode

Perform this procedure to verify that quorum configuration was completed successfully and that cluster installation mode is disabled.

Steps
  1. From any node, verify the device and node quorum configurations.


    % scstat -q
    
  2. From any node, verify that cluster installation mode is disabled.

    You do not need to be superuser to run this command.


    % scconf -p | grep "install mode"
    Cluster install mode:                disabled

    Cluster installation is complete.

Next Steps

Go to Configuring the Cluster to install volume management software and perform other configuration tasks on the cluster or new cluster node.


Note –

If you added a new node to a cluster that uses VxVM, you must perform steps in SPARC: How to Install VERITAS Volume Manager Software to do one of the following tasks: