Sun Cluster Software Installation Guide for Solaris OS

Chapter 2 Installing and Configuring Sun Cluster Software

This chapter provides procedures for how to install and configure your cluster. You can also use these procedures to add a new node to an existing cluster. This chapter also provides procedures to uninstall certain cluster software.

The following sections are in this chapter.

Installing the Software

This section provides information and procedures to install software on the cluster nodes.

The following task map lists the tasks that you perform to install software on multiple-node or single-node clusters. Complete the procedures in the order that is indicated.

Table 2–1 Task Map: Installing the Software

Task 

Instructions 

1. Plan the layout of your cluster configuration and prepare to install software. 

How to Prepare for Cluster Software Installation

2. (Optional) Install Cluster Control Panel (CCP) software on the administrative console.

How to Install Cluster Control Panel Software on an Administrative Console

3. Install the Solaris OS on all nodes. 

How to Install Solaris Software

4. (Optional) SPARC: Install Sun StorEdge Traffic Manager software.

SPARC: How to Install Sun Multipathing Software

5. (Optional) SPARC: Install VERITAS File System software.

SPARC: How to Install VERITAS File System Software

6. Install Sun Cluster software packages and any Sun Java System data services for the Solaris 8 or Solaris 9 OS that you will use. 

How to Install Sun Cluster Framework and Data-Service Software Packages (Java ES installer)

7. Set up directory paths. 

How to Set Up the Root Environment

8. Establish the cluster or additional cluster nodes. 

Establishing the Cluster

How to Prepare for Cluster Software Installation

Before you begin to install software, make the following preparations.

Steps
  1. Ensure that the hardware and software that you choose for your cluster configuration are supported for this release of Sun Cluster software.

    Contact your Sun sales representative for the most current information about supported cluster configurations.

  2. Read the following manuals for information that can help you plan your cluster configuration and prepare your installation strategy.

  3. Have available all related documentation, including third-party documents.

    The following is a partial list of products whose documentation you might need to reference during cluster installation:

    • Solaris OS

    • Solstice DiskSuite or Solaris Volume Manager software

    • Sun StorEdge QFS software

    • SPARC: VERITAS Volume Manager

    • SPARC: Sun Management Center

    • Third-party applications

  4. Plan your cluster configuration.


    Caution –

    Plan your cluster installation completely. Identify requirements for all data services and third-party products before you begin Solaris and Sun Cluster software installation. Failure to do so might result in installation errors that require that you completely reinstall the Solaris and Sun Cluster software.

    For example, the Oracle Real Application Clusters Guard option of Oracle Real Application Clusters has special requirements for the hostnames that you use in the cluster. Another example with special requirements is Sun Cluster HA for SAP. You must accommodate these requirements before you install Sun Cluster software because you cannot change hostnames after you install Sun Cluster software.

    Also note that neither Oracle Real Application Clusters nor Sun Cluster HA for SAP is supported for use in x86 based clusters.


  5. Obtain all necessary patches for your cluster configuration.

    See Patches and Required Firmware Levels in Sun Cluster 3.1 8/05 Release Notes for Solaris OS for the location of patches and installation instructions.

    1. Copy the patches that are required for Sun Cluster into a single directory.

      The directory must be on a file system that is accessible by all nodes. The default patch directory is /var/cluster/patches/.
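
      The following is a minimal sketch of staging the patches in the default directory. The source path is a hypothetical NFS-mounted location, not one defined by this guide:


      # mkdir -p /var/cluster/patches
      # cp /net/patch-server/export/sc-patches/* /var/cluster/patches/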


      Tip –

      After you install Solaris software on a node, you can view the /etc/release file to see the exact version of Solaris software that is installed.


    2. (Optional) If you are using SunPlex Installer, you can create a patch list file.

      If you specify a patch list file, SunPlex Installer only installs the patches that are listed in the patch list file. For information about creating a patch-list file, refer to the patchadd(1M) man page.
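
      For example, a patch-list file contains one patch ID per line. The IDs shown here are placeholders, not real patch numbers:


      # cat /var/cluster/patches/patchlist
      123456-02
      234567-01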

    3. Record the path to the patch directory.

Next Steps

If you want to use Cluster Control Panel software to connect from an administrative console to your cluster nodes, go to How to Install Cluster Control Panel Software on an Administrative Console.

Otherwise, choose the Solaris installation procedure to use.

How to Install Cluster Control Panel Software on an Administrative Console


Note –

You are not required to use an administrative console. If you do not use an administrative console, perform administrative tasks from one designated node in the cluster.


This procedure describes how to install the Cluster Control Panel (CCP) software on an administrative console. The CCP provides a single interface from which to start the cconsole(1M), ctelnet(1M), and crlogin(1M) tools. Each of these tools provides a multiple-window connection to a set of nodes, as well as a common window. You can use the common window to send input to all nodes at one time.

You can use any desktop machine that runs the Solaris 8 or Solaris 9 OS as an administrative console. In addition, you can also use the administrative console as a documentation server. If you are using Sun Cluster on a SPARC based system, you can use the administrative console as a Sun Management Center console or server as well. See Sun Management Center documentation for information about how to install Sun Management Center software. See the Sun Cluster 3.1 8/05 Release Notes for Solaris OS for additional information about how to install Sun Cluster documentation.

Before You Begin

Ensure that a supported version of the Solaris OS and any Solaris patches are installed on the administrative console. All platforms require at least the End User Solaris Software Group.

Steps
  1. Become superuser on the administrative console.

  2. Insert the Sun Cluster 2 of 2 CD-ROM in the CD-ROM drive of the administrative console.

    If the volume management daemon vold(1M) is running and is configured to manage CD-ROM devices, the daemon automatically mounts the CD-ROM on the /cdrom/cdrom0/ directory.

  3. Change to the Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/ directory, where arch is sparc or x86, and where ver is 8 for Solaris 8, 9 for Solaris 9, or 10 for Solaris 10.


    # cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/
    
  4. Install the SUNWccon package.


    # pkgadd -d . SUNWccon
    
  5. (Optional) Install the SUNWscman package.


    # pkgadd -d . SUNWscman
    

    When you install the SUNWscman package on the administrative console, you can view Sun Cluster man pages from the administrative console before you install Sun Cluster software on the cluster nodes.

  6. (Optional) Install the Sun Cluster documentation packages.


    Note –

    If you do not install the documentation on your administrative console, you can still view HTML or PDF documentation directly from the CD-ROM. Use a web browser to view the Solaris_arch/Product/sun_cluster/index.html file on the Sun Cluster 2 of 2 CD-ROM, where arch is sparc or x86.


    1. Determine whether the SUNWsdocs package is already installed on the administrative console.


      # pkginfo | grep SUNWsdocs
      application SUNWsdocs     Documentation Navigation for Solaris 9

      If the SUNWsdocs package is not yet installed, you must install it before you install the documentation packages.

    2. Choose the Sun Cluster documentation packages to install.

      The following documentation collections are available in both HTML and PDF format:

      Collection Title                                                                     HTML Package Name   PDF Package Name

      Sun Cluster 3.1 9/04 Software Collection for Solaris OS (SPARC Platform Edition)     SUNWscsdoc          SUNWpscsdoc

      Sun Cluster 3.1 9/04 Software Collection for Solaris OS (x86 Platform Edition)       SUNWscxdoc          SUNWpscxdoc

      Sun Cluster 3.x Hardware Collection for Solaris OS (SPARC Platform Edition)          SUNWschw            SUNWpschw

      Sun Cluster 3.x Hardware Collection for Solaris OS (x86 Platform Edition)            SUNWscxhw           SUNWpscxhw

      Sun Cluster 3.1 9/04 Reference Collection for Solaris OS                             SUNWscref           SUNWpscref

    3. Install the SUNWsdocs package, if not already installed, and your choice of Sun Cluster documentation packages.


      Note –

      All documentation packages have a dependency on the SUNWsdocs package. The SUNWsdocs package must exist on the system before you can successfully install a documentation package on that system.



      # pkgadd -d . SUNWsdocs pkg-list
      
  7. Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.


    # eject cdrom
    
  8. Create an /etc/clusters file on the administrative console.

    Add your cluster name and the physical node name of each cluster node to the file.


    # vi /etc/clusters
    clustername node1 node2
    

    See the /opt/SUNWcluster/bin/clusters(4) man page for details.

  9. Create an /etc/serialports file.

    Add an entry for each node in the cluster to the file. Specify the physical node name, the hostname of the console-access device, and the port number. Examples of a console-access device are a terminal concentrator (TC), a System Service Processor (SSP), and a Sun Fire system controller.


    # vi /etc/serialports
    node1 ca-dev-hostname port
    node2 ca-dev-hostname port
    
    node1, node2

    Physical names of the cluster nodes

    ca-dev-hostname

    Hostname of the console-access device

    port

    Serial port number

    Note these special instructions to create an /etc/serialports file:

    • For a Sun Fire 15000 system controller, use telnet(1) port number 23 for the serial port number of each entry.

    • For all other console-access devices, use the telnet serial port number, not the physical port number. To determine the telnet serial port number, add 5000 to the physical port number. For example, if a physical port number is 6, the telnet serial port number is 5006.

    • For Sun Enterprise 10000 servers, also see the /opt/SUNWcluster/bin/serialports(4) man page for details and special considerations.
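
    For example, assuming a hypothetical terminal concentrator named tc-1 with the nodes cabled to physical ports 2 and 3, the telnet serial port numbers would be 5002 and 5003:


    # vi /etc/serialports
    phys-schost-1 tc-1 5002
    phys-schost-2 tc-1 5003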

  10. (Optional) For convenience, set the directory paths on the administrative console.

    1. Add the /opt/SUNWcluster/bin/ directory to the PATH.

    2. Add the /opt/SUNWcluster/man/ directory to the MANPATH.

    3. If you installed the SUNWscman package, also add the /usr/cluster/man/ directory to the MANPATH.
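
    For example, for a Bourne or Korn shell you might add entries such as the following to the root .profile file on the administrative console. This is a sketch only; adjust the syntax for your shell.


    PATH=$PATH:/opt/SUNWcluster/bin
    MANPATH=$MANPATH:/opt/SUNWcluster/man:/usr/cluster/man
    export PATH MANPATH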

  11. Start the CCP utility.


    # /opt/SUNWcluster/bin/ccp &
    

    Click the cconsole, crlogin, or ctelnet button in the CCP window to launch that tool. Alternately, you can start any of these tools directly. For example, to start ctelnet, type the following command:


    # /opt/SUNWcluster/bin/ctelnet &
    

    See the procedure “How to Remotely Log In to Sun Cluster” in Beginning to Administer the Cluster in Sun Cluster System Administration Guide for Solaris OS for additional information about how to use the CCP utility. Also see the ccp(1M) man page.

Next Steps

Determine whether the Solaris OS is already installed to meet Sun Cluster software requirements.

How to Install Solaris Software

Follow these procedures to install the Solaris OS on each node in the cluster or to install the Solaris OS on the master node that you will flash archive for a JumpStart installation. See How to Install Solaris and Sun Cluster Software (JumpStart) for more information about JumpStart installation of a cluster.


Tip –

To speed installation, you can install the Solaris OS on each node at the same time.


If your nodes are already installed with the Solaris OS but do not meet Sun Cluster installation requirements, you might need to reinstall the Solaris software. Follow the steps in this procedure to ensure subsequent successful installation of Sun Cluster software. See Planning the Solaris OS for information about required root-disk partitioning and other Sun Cluster installation requirements.

Before You Begin

Perform the following tasks:

Steps
  1. If you are using a cluster administrative console, display a console screen for each node in the cluster.

    • If Cluster Control Panel (CCP) software is installed and configured on your administrative console, use the cconsole(1M) utility to display the individual console screens.

      Use the following command to start the cconsole utility:


      # /opt/SUNWcluster/bin/cconsole clustername &
      

      The cconsole utility also opens a master window from which you can send your input to all individual console windows at the same time.

    • If you do not use the cconsole utility, connect to the consoles of each node individually.

  2. Install the Solaris OS as instructed in your Solaris installation documentation.


    Note –

    You must install all nodes in a cluster with the same version of the Solaris OS.


    You can use any method that is normally used to install Solaris software. During Solaris software installation, perform the following steps:

    1. Install at least the End User Solaris Software Group.


      Tip –

      To avoid the need to manually install Solaris software packages, install the Entire Solaris Software Group Plus OEM Support.


      See Solaris Software Group Considerations for information about additional Solaris software requirements.

    2. Choose Manual Layout to set up the file systems.

      • Create a file system of at least 512 Mbytes for use by the global-device subsystem.

        If you intend to use SunPlex Installer to install Sun Cluster software, you must create the file system with a mount-point name of /globaldevices. The /globaldevices mount-point name is the default that is used by scinstall.


        Note –

        Sun Cluster software requires a global-devices file system for installation to succeed.


      • Specify that slice 7 is at least 20 Mbytes in size.

        If you intend to use SunPlex Installer to install Solstice DiskSuite software (Solaris 8) or configure Solaris Volume Manager software (Solaris 9 or Solaris 10), also make this file system mount on /sds.


        Note –

        If you intend to use SunPlex Installer to install Sun Cluster HA for NFS or Sun Cluster HA for Apache, SunPlex Installer must also install Solstice DiskSuite software (Solaris 8) or configure Solaris Volume Manager software (Solaris 9 or Solaris 10).


      • Create any other file-system partitions that you need, as described in System Disk Partitions.

    3. For ease of administration, set the same root password on each node.

  3. If you are adding a node to an existing cluster, prepare the cluster to accept the new node.

    1. On any active cluster member, start the scsetup(1M) utility.


      # scsetup
      

      The Main Menu is displayed.

    2. Choose the menu item, New nodes.

    3. Choose the menu item, Specify the name of a machine which may add itself.

    4. Follow the prompts to add the node's name to the list of recognized machines.

      The scsetup utility prints the message Command completed successfully if the task is completed without error.

    5. Quit the scsetup utility.

    6. From the active cluster node, display the names of all cluster file systems.


      % mount | grep global | egrep -v node@ | awk '{print $1}'
      
    7. On the new node, create a mount point for each cluster file system in the cluster.


      % mkdir -p mountpoint
      

      For example, if the mount command returned the file-system name /global/dg-schost-1, run mkdir -p /global/dg-schost-1 on the new node you are adding to the cluster.

  4. If you are adding a node and VxVM is installed on any node in the cluster, perform the following tasks.

    1. Ensure that the same vxio number is used on the VxVM-installed nodes.


      # grep vxio /etc/name_to_major
      vxio NNN
      
    2. Ensure that the vxio number is available for use on each of the nodes that do not have VxVM installed.

    3. If the vxio number is already in use on a node that does not have VxVM installed, change the /etc/name_to_major entry to use a different number.
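
    For example, if the VxVM-installed nodes use major number 280 for vxio (280 is a placeholder value), you would confirm that 280 is free on each node without VxVM and then set the entry to match:


    # grep vxio /etc/name_to_major           (on a node with VxVM installed)
    vxio 280
    # grep -w 280 /etc/name_to_major         (on a node without VxVM; no output means 280 is free)
    # vi /etc/name_to_major                  (set the vxio entry to the same number)
    vxio 280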

  5. If you installed the End User Solaris Software Group, use the pkgadd command to manually install any additional Solaris software packages that you might need.

    The following Solaris packages are required to support some Sun Cluster functionality.


    Note –

    Install packages in the order in which they are listed in the following table.


    Feature                                                                    Mandatory Solaris Software Packages

    RSMAPI, RSMRDT drivers, or SCI-PCI adapters (SPARC based clusters only)    Solaris 8 or Solaris 9: SUNWrsm SUNWrsmx SUNWrsmo SUNWrsmox
                                                                               Solaris 10: SUNWrsm SUNWrsmo

    SunPlex Manager                                                            SUNWapchr SUNWapchu

    • For the Solaris 8 or Solaris 9 OS, use the following command:


      # pkgadd -d . packages
      
    • For the Solaris 10 OS, use the following command:


      # pkgadd -G -d . packages
      

      You must add these packages only to the global zone. The -G option adds packages to the current zone only. This option also specifies that the packages are not propagated to any existing non-global zone or to any non-global zone that is created later.

  6. Install any required Solaris OS patches and hardware-related firmware and patches, including those for storage-array support. Also download any needed firmware that is contained in the hardware patches.

    See Patches and Required Firmware Levels in Sun Cluster 3.1 8/05 Release Notes for Solaris OS for the location of patches and installation instructions.

  7. x86: Set the default boot file to kadb.


    # eeprom boot-file=kadb
    

    The setting of this value enables you to reboot the node if you are unable to access a login prompt.

  8. Update the /etc/inet/hosts file on each node with all IP addresses that are used in the cluster.

    Perform this step regardless of whether you are using a naming service. See IP Addresses for a listing of Sun Cluster components whose IP addresses you must add.
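
    A minimal sketch of such entries follows; the host names and addresses are placeholders for your own cluster values:


    # vi /etc/inet/hosts
    192.168.1.1    phys-schost-1
    192.168.1.2    phys-schost-2
    192.168.1.10   lh-schost-nfs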

  9. If you will use ce adapters for the cluster interconnect, add the following entry to the /etc/system file.


    set ce:ce_taskq_disable=1

    This entry becomes effective after the next system reboot.

  10. (Optional) On Sun Enterprise 10000 servers, configure the /etc/system file to use dynamic reconfiguration.

    Add the following entry to the /etc/system file on each node of the cluster:


    set kernel_cage_enable=1

    This entry becomes effective after the next system reboot. See your server documentation for more information about dynamic reconfiguration.

Next Steps

If you intend to use Sun multipathing software, go to SPARC: How to Install Sun Multipathing Software.

If you intend to install VxFS, go to SPARC: How to Install VERITAS File System Software.

Otherwise, install the Sun Cluster software packages. Go to How to Install Sun Cluster Framework and Data-Service Software Packages (Java ES installer).

See Also

See the Sun Cluster System Administration Guide for Solaris OS for procedures to perform dynamic reconfiguration tasks in a Sun Cluster configuration.

SPARC: How to Install Sun Multipathing Software

Perform this procedure on each node of the cluster to install and configure Sun multipathing software for fiber channel (FC) storage. Multipathing software manages multiple I/O paths to the shared cluster storage.

Before You Begin

Perform the following tasks:

Steps
  1. Become superuser.

  2. For the Solaris 8 or Solaris 9 OS, install on each node Sun StorEdge Traffic Manager software and any necessary patches.

  3. Enable multipathing functionality.

    • For the Solaris 8 or 9 OS, change the value of the mpxio-disable parameter to no.

      Modify this entry in the /kernel/drv/scsi_vhci.conf file on each node.


      mpxio-disable="no";
    • For the Solaris 10 OS, issue the following command on each node:


      Caution –

      If Sun Cluster software is already installed, do not issue this command. Running the stmsboot command on an active cluster node might cause Solaris services to go into the maintenance state. Instead, follow instructions in the stmsboot(1M) man page for using the stmsboot command in a Sun Cluster environment.



      # /usr/sbin/stmsboot -e
      
      -e

      Enables Solaris I/O multipathing

      See the stmsboot(1M) man page for more information.

  4. For the Solaris 8 or Solaris 9 OS, determine whether your version of Sun StorEdge SAN Foundation software includes built-in support for your storage array.

    If the software does not include built-in support for your storage array, edit the /kernel/drv/scsi_vhci.conf file on each node to include the necessary entries. For more information, see the release notes for your storage device.

  5. For the Solaris 8 or Solaris 9 OS, shut down each node and perform a reconfiguration boot.

    The reconfiguration boot creates the new Solaris device files and links.


    # shutdown -y -g0 -i0
    ok boot -r
    
  6. After the reconfiguration reboot is finished on all nodes, perform any additional tasks that are necessary to complete the configuration of your storage array.

    See installation instructions for your storage array in the Sun Cluster Hardware Administration Collection for details.

Troubleshooting

If you installed Sun multipathing software after Sun Cluster software was installed on the cluster, DID mappings might require updating. Issue the following commands on each node of the cluster to regenerate the DID namespace.

# scdidadm -C
# scdidadm -r
(Solaris 8 or 9 only) # cfgadm -c configure
# scgdevs

See the scdidadm(1M) and scgdevs(1M) man pages for more information.

Next Steps

If you intend to install VxFS, go to SPARC: How to Install VERITAS File System Software.

Otherwise, install the Sun Cluster software packages. Go to How to Install Sun Cluster Framework and Data-Service Software Packages (Java ES installer).

SPARC: How to Install VERITAS File System Software

Perform this procedure on each node of the cluster.

Steps
  1. Follow the procedures in your VxFS installation documentation to install VxFS software on each node of the cluster.

  2. Install any Sun Cluster patches that are required to support VxFS.

    See Patches and Required Firmware Levels in Sun Cluster 3.1 8/05 Release Notes for Solaris OS for the location of patches and installation instructions.

  3. In the /etc/system file on each node, set the following values.


    set rpcmod:svc_default_stksize=0x8000
    set lwp_default_stksize=0x6000

    These changes become effective at the next system reboot.

    • Sun Cluster software requires a minimum rpcmod:svc_default_stksize setting of 0x8000. Because VxFS installation sets the value of the rpcmod:svc_default_stksize variable to 0x4000, you must manually set the value to 0x8000 after VxFS installation is complete.

    • You must set the lwp_default_stksize variable in the /etc/system file to override the VxFS default value of 0x4000.

Next Steps

Install the Sun Cluster software packages. Go to How to Install Sun Cluster Framework and Data-Service Software Packages (Java ES installer).

How to Install Sun Cluster Framework and Data-Service Software Packages (Java ES installer)

Follow this procedure to use the Sun Java™ Enterprise System (Java ES) installer program to perform one or more of the following installation tasks:


Note –

Do not use this procedure to install the following kinds of data service packages:


Before You Begin

Perform the following tasks:

Steps
  1. (Optional) To use the installer program with a GUI, ensure that the display environment of the cluster node to install is set to display the GUI.


    % xhost +
    % setenv DISPLAY nodename:0.0
    
  2. Become superuser on the cluster node to install.

  3. Insert the Sun Cluster 1 of 2 CD-ROM in the CD-ROM drive.

  4. Change to the directory of the CD-ROM where the installer program resides.


    # cd /cdrom/cdrom0/Solaris_arch/
    

    In the Solaris_arch/ directory, arch is sparc or x86.

  5. Start the Java ES installer program.


    # ./installer
    
  6. Follow instructions on the screen to install Sun Cluster framework software and data services on the node.

    When prompted whether to configure Sun Cluster framework software, choose Configure Later.

    After installation is finished, you can view any available installation log. See the Sun Java Enterprise System 2005Q5 Installation Guide for additional information about using the Java ES installer program.

  7. Install additional packages if you intend to use any of the following features.

    • Remote Shared Memory Application Programming Interface (RSMAPI)

    • SCI-PCI adapters for the interconnect transport

    • RSMRDT drivers


    Note –

    Use of the RSMRDT driver is restricted to clusters that run an Oracle9i release 2 SCI configuration with RSM enabled. Refer to Oracle9i release 2 user documentation for detailed installation and configuration instructions.


    1. Determine which packages you must install.

      The following table lists the Sun Cluster 3.1 8/05 packages that each feature requires, in the order in which you must install each group of packages. The Java ES installer program does not automatically install these packages.


      Note –

      Install packages in the order in which they are listed in the following table.


      Feature             Additional Sun Cluster 3.1 8/05 Packages to Install

      RSMAPI              SUNWscrif

      SCI-PCI adapters    Solaris 8 and 9: SUNWsci SUNWscid SUNWscidx
                          Solaris 10: SUNWscir SUNWsci SUNWscidr SUNWscid

      RSMRDT drivers      SUNWscrdt

    2. Insert the Sun Cluster 2 of 2 CD-ROM, if it is not already inserted in the CD-ROM drive.

    3. Change to the Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/ directory, where arch is sparc or x86, and where ver is 8 for Solaris 8, 9 for Solaris 9, or 10 for Solaris 10.


      # cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/
      
    4. Install the additional packages.


      # pkgadd -d . packages
      
  8. Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.


    # eject cdrom
    
  9. Ensure that the /usr/java/ directory is a symbolic link to the minimum or latest version of Java software.

    Sun Cluster software requires at least version 1.4.2_03 of Java software.

    1. Determine what directory the /usr/java/ directory is symbolically linked to.


      # ls -l /usr/java
      lrwxrwxrwx   1 root   other    9 Apr 19 14:05 /usr/java -> /usr/j2se/
    2. Determine what version or versions of Java software are installed.

      The following are examples of commands that you can use to display the version of their related releases of Java software.


      # /usr/j2se/bin/java -version
      # /usr/java1.2/bin/java -version
      # /usr/jdk/jdk1.5.0_01/bin/java -version
      
    3. If the /usr/java/ directory is not symbolically linked to a supported version of Java software, recreate the symbolic link to link to a supported version of Java software.

      The following example shows the creation of a symbolic link to the /usr/j2se/ directory, which contains Java 1.4.2_03 software.


      # rm /usr/java
      # ln -s /usr/j2se /usr/java
      
Next Steps

If you want to install Sun StorEdge QFS file system software, follow the procedures for initial installation in the Sun StorEdge QFS and Sun StorEdge SAM-FS Software Installation and Configuration Guide.

Otherwise, to set up the root user environment, go to How to Set Up the Root Environment.

How to Set Up the Root Environment


Note –

In a Sun Cluster configuration, user initialization files for the various shells must verify that they are run from an interactive shell. The files must verify this before they attempt to output to the terminal. Otherwise, unexpected behavior or interference with data services might occur. See Customizing a User's Work Environment in System Administration Guide, Volume 1 (Solaris 8) or in System Administration Guide: Basic Administration (Solaris 9 or Solaris 10) for more information.


Perform this procedure on each node in the cluster.

Steps
  1. Become superuser on a cluster node.

  2. Modify PATH and MANPATH entries in the .cshrc or .profile file.

    1. Set the PATH to include /usr/sbin/ and /usr/cluster/bin/.

    2. Set the MANPATH to include /usr/cluster/man/.

    See your volume manager documentation and other application documentation for additional file paths to set.
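
    For example, in a Bourne-shell .profile file you might add entries such as the following. This is a sketch only; the test of whether the shell is interactive guards any terminal output, as described in the Note at the beginning of this procedure.


    PATH=$PATH:/usr/sbin:/usr/cluster/bin
    MANPATH=$MANPATH:/usr/cluster/man
    export PATH MANPATH
    if [ -t 0 ]; then
            echo "Sun Cluster root environment loaded"
    fi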

  3. (Optional) For ease of administration, set the same root password on each node, if you have not already done so.

Next Steps

Configure Sun Cluster software on the cluster nodes. Go to Establishing the Cluster.

Establishing the Cluster

This section provides information and procedures to establish a new cluster or to add a node to an existing cluster. Before you start to perform these tasks, ensure that you installed software packages for the Solaris OS, Sun Cluster framework, and other products as described in Installing the Software.

The following task map lists the tasks to perform. Complete the procedures in the order that is indicated.

Table 2–2 Task Map: Establish the Cluster

Method 

Instructions 

1. Use one of the following methods to establish a new cluster or add a node to an existing cluster: 

  • (New clusters only) Use the scinstall utility to establish the cluster.

How to Configure Sun Cluster Software on All Nodes (scinstall)

  • (New clusters or added nodes) Set up a JumpStart installation server. Then create a flash archive of the installed system. Finally, use the scinstall JumpStart option to install the flash archive on each node and establish the cluster.

How to Install Solaris and Sun Cluster Software (JumpStart)

  • (New multiple-node clusters only) Use SunPlex Installer to establish the cluster. Optionally, also configure Solstice DiskSuite or Solaris Volume Manager disk sets, scalable Sun Cluster HA for Apache data service, and Sun Cluster HA for NFS data service.

Using SunPlex Installer to Configure Sun Cluster Software

How to Configure Sun Cluster Software (SunPlex Installer)

  • (Added nodes only) Configure Sun Cluster software on the new node by using the scinstall utility.

How to Configure Sun Cluster Software on Additional Cluster Nodes (scinstall)

2. (Oracle Real Application Clusters only) If you added a node to a two-node cluster that runs Sun Cluster Support for Oracle Real Application Clusters and that uses a shared SCSI disk as the quorum device, update the SCSI reservations.

How to Update SCSI Reservations After Adding a Node

3. Install data-service software packages. 

How to Install Data-Service Software Packages (pkgadd)

How to Install Data-Service Software Packages (scinstall)

How to Install Data-Service Software Packages (Web Start installer)

4. Assign quorum votes and remove the cluster from installation mode, if this operation was not already performed. 

How to Configure Quorum Devices

5. Validate the quorum configuration. 

How to Verify the Quorum Configuration and Installation Mode

6. Configure the cluster. 

Configuring the Cluster

How to Configure Sun Cluster Software on All Nodes (scinstall)

Perform this procedure from one node of the cluster to configure Sun Cluster software on all nodes of the cluster.

Before You Begin

Perform the following tasks:

Follow these guidelines to use the interactive scinstall utility in this procedure:

Steps
  1. If you disabled remote configuration during Sun Cluster software installation, re-enable remote configuration.

    Enable remote shell (rsh(1M)) or secure shell (ssh(1)) access for superuser to all cluster nodes.

  2. (Optional) To use the scinstall(1M) utility to install patches, download patches to a patch directory.

    • If you use Typical mode to install the cluster, use a directory named either /var/cluster/patches/ or /var/patches/ to contain the patches to install.

      In Typical mode, the scinstall command checks both those directories for patches.

      • If neither of those directories exists, then no patches are added.

      • If both directories exist, then only the patches in the /var/cluster/patches/ directory are added.

    • If you use Custom mode to install the cluster, you specify the path to the patch directory. Specifying the path ensures that you do not have to use the patch directories that scinstall checks for in Typical mode.

    You can include a patch-list file in the patch directory. The default patch-list file name is patchlist. For information about creating a patch-list file, refer to the patchadd(1M) man page.

  3. Become superuser on the cluster node from which you intend to configure the cluster.

  4. Start the scinstall utility.


    # /usr/cluster/bin/scinstall
    
  5. From the Main Menu, choose the menu item, Install a cluster or cluster node.


     *** Main Menu ***
    
        Please select from one of the following (*) options:
    
          * 1) Install a cluster or cluster node
            2) Configure a cluster to be JumpStarted from this install server
            3) Add support for new data services to this cluster node
            4) Upgrade this cluster node
          * 5) Print release information for this cluster node
          * ?) Help with menu options
          * q) Quit
    
        Option:  1
    
  6. From the Install Menu, choose the menu item, Install all nodes of a new cluster.

  7. From the Type of Installation menu, choose either Typical or Custom.

  8. Follow the menu prompts to supply your answers from the configuration planning worksheet.

    The scinstall utility installs and configures all cluster nodes and reboots the cluster. The cluster is established when all nodes have successfully booted into the cluster. Sun Cluster installation output is logged in a /var/cluster/logs/install/scinstall.log.N file.

  9. For the Solaris 10 OS, verify on each node that multi-user services for the Service Management Facility (SMF) are online.

    If services are not yet online for a node, wait until the state becomes online before you proceed to the next step.


    # svcs multi-user-server
    STATE          STIME    FMRI
    online         17:52:55 svc:/milestone/multi-user-server:default
  10. From one node, verify that all nodes have joined the cluster.

    Run the scstat(1M) command to display a list of the cluster nodes. You do not need to be logged in as superuser to run this command.


    % scstat -n
    

    Output resembles the following.


    -- Cluster Nodes --
                               Node name      Status
                               ---------      ------
      Cluster node:            phys-schost-1  Online
      Cluster node:            phys-schost-2  Online
  11. Install any necessary patches to support Sun Cluster software, if you have not already done so.

  12. To re-enable the loopback file system (LOFS), delete the following entry from the /etc/system file on each node of the cluster.


    exclude:lofs

    The re-enabling of LOFS becomes effective after the next system reboot.


    Note –

    You cannot have LOFS enabled if you use Sun Cluster HA for NFS on a highly available local file system and have automountd running. LOFS can cause switchover problems for Sun Cluster HA for NFS. If you enable LOFS and later choose to add Sun Cluster HA for NFS on a highly available local file system, you must do one of the following:

    • Restore the exclude:lofs entry to the /etc/system file on each node of the cluster and reboot each node. This change disables LOFS.

    • Disable the automountd daemon.

    • Exclude from the automounter map all files that are part of the highly available local file system that is exported by Sun Cluster HA for NFS. This choice enables you to keep both LOFS and the automountd daemon enabled.


    See Types of File Systems in System Administration Guide, Volume 1 (Solaris 8) or The Loopback File System in System Administration Guide: Devices and File Systems (Solaris 9 or Solaris 10) for more information about loopback file systems.


Example 2–1 Configuring Sun Cluster Software on All Nodes

The following example shows the scinstall progress messages that are logged as scinstall completes configuration tasks on the two-node cluster, schost. The cluster is installed from phys-schost-1 by using the scinstall Typical mode. The other cluster node is phys-schost-2. The adapter names are qfe2 and qfe3. The automatic selection of a quorum device is enabled.


  Installation and Configuration

    Log file - /var/cluster/logs/install/scinstall.log.24747

    Testing for "/globaldevices" on "phys-schost-1" … done
    Testing for "/globaldevices" on "phys-schost-2" … done
    Checking installation status … done

    The Sun Cluster software is already installed on "phys-schost-1".
    The Sun Cluster software is already installed on "phys-schost-2".
    Starting discovery of the cluster transport configuration.

    The following connections were discovered:

        phys-schost-1:qfe2  switch1  phys-schost-2:qfe2
        phys-schost-1:qfe3  switch2  phys-schost-2:qfe3

    Completed discovery of the cluster transport configuration.

    Started sccheck on "phys-schost-1".
    Started sccheck on "phys-schost-2".

    sccheck completed with no errors or warnings for "phys-schost-1".
    sccheck completed with no errors or warnings for "phys-schost-2".

    Removing the downloaded files … done

    Configuring "phys-schost-2" … done
    Rebooting "phys-schost-2" … done

    Configuring "phys-schost-1" … done
    Rebooting "phys-schost-1" …

Log file - /var/cluster/logs/install/scinstall.log.24747

Rebooting …

Next Steps

If you intend to install data services, go to the appropriate procedure for the data service that you want to install and for your version of the Solaris OS:

 

Sun Cluster 2 of 2 CD-ROM 

(Sun Java System data services) 

Sun Cluster Agents CD 

(All other data services) 

Procedure 

Solaris 8 or 9 

Solaris 10 

Solaris 8 or 9 

Solaris 10 

How to Install Sun Cluster Framework and Data-Service Software Packages (Java ES installer)

     

How to Install Data-Service Software Packages (pkgadd)

 

   

How to Install Data-Service Software Packages (scinstall)

   

How to Install Data-Service Software Packages (Web Start installer)

   

 

Otherwise, go to the next appropriate procedure:

Troubleshooting

You cannot change the private-network address and netmask after scinstall processing is finished. If you need to use a different private-network address or netmask and the node is still in installation mode, follow the procedures in How to Uninstall Sun Cluster Software to Correct Installation Problems. Then perform the procedures in How to Install Sun Cluster Framework and Data-Service Software Packages (Java ES installer) and then perform this procedure to reinstall the software and configure the node with the correct information.

How to Install Solaris and Sun Cluster Software (JumpStart)

This procedure describes how to set up and use the scinstall(1M) custom JumpStart installation method. This method installs both Solaris OS and Sun Cluster software on all cluster nodes in the same operation and establishes the cluster. You can also use this procedure to add new nodes to an existing cluster.

Before You Begin

Perform the following tasks:

Follow these guidelines to use the interactive scinstall utility in this procedure:

Steps
  1. Set up your JumpStart installation server.

  2. If you are installing a new node to an existing cluster, add the node to the list of authorized cluster nodes.

    1. Switch to another cluster node that is active and start the scsetup(1M) utility.

    2. Use the scsetup utility to add the new node's name to the list of authorized cluster nodes.

    For more information, see How to Add a Node to the Authorized Node List in Sun Cluster System Administration Guide for Solaris OS.

  3. On a cluster node or another machine of the same server platform, install the Solaris OS, if you have not already done so.

    Follow procedures in How to Install Solaris Software.

  4. On the installed system, install Sun Cluster software, if you have not done so already.

    Follow procedures in How to Install Sun Cluster Framework and Data-Service Software Packages (Java ES installer).

  5. Enable the common agent container daemon to start automatically during system boots.


    # cacaoadm enable
    
  6. On the installed system, install any necessary patches to support Sun Cluster software.

  7. On the installed system, update the /etc/inet/hosts file with all IP addresses that are used in the cluster.

    Perform this step regardless of whether you are using a naming service. See IP Addresses for a listing of Sun Cluster components whose IP addresses you must add.

  8. For Solaris 10, on the installed system, update the /etc/inet/ipnodes file with all IP addresses that are used in the cluster.

    Perform this step regardless of whether you are using a naming service.

  9. Create the flash archive of the installed system.


    # flarcreate -n name archive
    
    -n name

    Name to give the flash archive.

    archive

    File name to give the flash archive, with the full path. By convention, the file name ends in .flar.
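
    For example, using placeholder values for the archive name and path:


    # flarcreate -n sc31-flash /export/flash/sc31-flash.flar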

    Follow procedures in one of the following manuals:

  10. Ensure that the flash archive is NFS exported for reading by the JumpStart installation server.

    See Solaris NFS Environment in System Administration Guide, Volume 3 (Solaris 8) or Managing Network File Systems (Overview), in System Administration Guide: Network Services (Solaris 9 or Solaris 10) for more information about automatic file sharing.

    See also the share(1M) and dfstab(4) man pages.
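
    For example, you might add an entry such as the following to the /etc/dfs/dfstab file on the system that holds the flash archive and then share the directory. The directory path is a placeholder.


    # vi /etc/dfs/dfstab
    share -F nfs -o ro,anon=0 /export/flash
    # shareall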

  11. From the JumpStart installation server, start the scinstall(1M) utility.

    The path /export/suncluster/sc31/ is used here as an example of the installation directory that you created. In the CD-ROM path, replace arch with sparc or x86 and replace ver with 8 for Solaris 8, 9 for Solaris 9, or 10 for Solaris 10.


    # cd /export/suncluster/sc31/Solaris_arch/Product/sun_cluster/ \
    Solaris_ver/Tools/
    # ./scinstall
    
  12. From the Main Menu, choose the menu item, Configure a cluster to be JumpStarted from this installation server.

    This option is used to configure custom JumpStart finish scripts. JumpStart uses these finish scripts to install the Sun Cluster software.


     *** Main Menu ***
     
        Please select from one of the following (*) options:
     
          * 1) Install a cluster or cluster node
          * 2) Configure a cluster to be JumpStarted from this install server
            3) Add support for new data services to this cluster node
            4) Upgrade this cluster node
          * 5) Print release information for this cluster node
     
          * ?) Help with menu options
          * q) Quit
     
        Option:  2
    
  13. Follow the menu prompts to supply your answers from the configuration planning worksheet.

    The scinstall command stores your configuration information and copies the autoscinstall.class default class file to the jumpstart-dir/autoscinstall.d/3.1/ directory. This file is similar to the following example.


    install_type    initial_install
    system_type     standalone
    partitioning    explicit
    filesys         rootdisk.s0 free /
    filesys         rootdisk.s1 750  swap
    filesys         rootdisk.s3 512  /globaldevices
    filesys         rootdisk.s7 20
    cluster         SUNWCuser        add
    package         SUNWman          add
  14. Make adjustments to the autoscinstall.class file to configure JumpStart to install the flash archive.

    1. Modify entries as necessary to match configuration choices you made when you installed the Solaris OS on the flash archive machine or when you ran the scinstall utility.

      For example, if you assigned slice 4 for the global-devices file system and specified to scinstall that the file-system name is /gdevs, you would change the /globaldevices entry of the autoscinstall.class file to the following:


      filesys         rootdisk.s4 512  /gdevs
    2. Change the following entries in the autoscinstall.class file.

      Existing Entry to Replace          New Entry to Add

      install_type    initial_install    install_type    flash_install

      system_type     standalone         archive_location retrieval_type location
      See archive_location Keyword in Solaris 8 Advanced Installation Guide, Solaris 9 9/04 Installation Guide, or Solaris 10 Installation Guide: Custom JumpStart and Advanced Installations for information about valid values for retrieval_type and location when used with the archive_location keyword.
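
      For example, if the flash archive is retrieved over NFS, the modified entries might resemble the following. The server name and archive path are placeholders; see the referenced installation guides for the valid retrieval_type values.


      install_type     flash_install
      archive_location nfs jumpstart-server:/export/flash/sc31-flash.flar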

    3. Remove all entries that would install a specific package, such as the following entries.


      cluster         SUNWCuser        add
      package         SUNWman          add
  15. Set up Solaris patch directories, if you did not already install the patches on the flash-archived system.


    Note –

    If you specified a patch directory to the scinstall utility, patches that are located in Solaris patch directories are not installed.


    1. Create jumpstart-dir/autoscinstall.d/nodes/node/patches/ directories that are NFS exported for reading by the JumpStart installation server.

      Create one directory for each node in the cluster, where node is the name of a cluster node. Alternately, use this naming convention to create symbolic links to a shared patch directory.


      # mkdir jumpstart-dir/autoscinstall.d/nodes/node/patches/
      
    2. Place copies of any Solaris patches into each of these directories.

    3. Place copies of any hardware-related patches that you must install after Solaris software is installed into each of these directories.

  16. If you are using a cluster administrative console, display a console screen for each node in the cluster.

    • If Cluster Control Panel (CCP) software is installed and configured on your administrative console, use the cconsole(1M) utility to display the individual console screens.

      Use the following command to start the cconsole utility:


      # /opt/SUNWcluster/bin/cconsole clustername &
      

      The cconsole utility also opens a master window from which you can send your input to all individual console windows at the same time.

    • If you do not use the cconsole utility, connect to the consoles of each node individually.

  17. Shut down each node.


    # shutdown -g0 -y -i0
    
  18. Boot each node to start the JumpStart installation.

    • On SPARC based systems, do the following:


      ok boot net - install
      

      Note –

      Surround the dash (-) in the command with a space on each side.


    • On x86 based systems, do the following:

      1. When the BIOS information screen appears, press the Esc key.

        The Select Boot Device screen appears.

      2. On the Select Boot Device screen, choose the listed IBA that is connected to the same network as the JumpStart PXE installation server.

        The lowest number to the right of the IBA boot choices corresponds to the lower Ethernet port number. The higher number to the right of the IBA boot choices corresponds to the higher Ethernet port number.

        The node reboots and the Device Configuration Assistant appears.

      3. On the Boot Solaris screen, choose Net.

      4. At the following prompt, choose Custom JumpStart and press Enter:


        Select the type of installation you want to perform:
        
                 1 Solaris Interactive
                 2 Custom JumpStart
        
        Enter the number of your choice followed by the <ENTER> key.
        
        If you enter anything else, or if you wait for 30 seconds,
        an interactive installation will be started.
      5. When prompted, answer the questions and follow the instructions on the screen.

    JumpStart installs the Solaris OS and Sun Cluster software on each node. When the installation is successfully completed, each node is fully installed as a new cluster node. Sun Cluster installation output is logged in a /var/cluster/logs/install/scinstall.log.N file.

  19. For the Solaris 10 OS, verify on each node that multi-user services for the Service Management Facility (SMF) are online.

    If services are not yet online for a node, wait until the state becomes online before you proceed to the next step.


    # svcs multi-user-server
    STATE          STIME    FMRI
    online         17:52:55 svc:/milestone/multi-user-server:default
  20. If you are installing a new node to an existing cluster, create mount points on the new node for all existing cluster file systems.

    1. From another cluster node that is active, display the names of all cluster file systems.


      % mount | grep global | egrep -v node@ | awk '{print $1}'
      
    2. On the node that you added to the cluster, create a mount point for each cluster file system in the cluster.


      % mkdir -p mountpoint
      

      For example, if a file-system name that is returned by the mount command is /global/dg-schost-1, run mkdir -p /global/dg-schost-1 on the node that is being added to the cluster.


      Note –

      The mount points become active after you reboot the cluster in Step 24.


    3. If VERITAS Volume Manager (VxVM) is installed on any nodes that are already in the cluster, view the vxio number on each VxVM–installed node.


      # grep vxio /etc/name_to_major
      vxio NNN
      
      • Ensure that the same vxio number is used on each of the VxVM-installed nodes.

      • Ensure that the vxio number is available for use on each of the nodes that do not have VxVM installed.

      • If the vxio number is already in use on a node that does not have VxVM installed, free the number on that node. Change the /etc/name_to_major entry to use a different number.

  21. (Optional) To use dynamic reconfiguration on Sun Enterprise 10000 servers, add the following entry to the /etc/system file. Add this entry on each node in the cluster.


    set kernel_cage_enable=1

    This entry becomes effective after the next system reboot. See the Sun Cluster System Administration Guide for Solaris OS for procedures to perform dynamic reconfiguration tasks in a Sun Cluster configuration. See your server documentation for more information about dynamic reconfiguration.

  22. To re-enable the loopback file system (LOFS), delete the following entry from the /etc/system file on each node of the cluster.


    exclude:lofs

    The re-enabling of LOFS becomes effective after the next system reboot.


    Note –

    You cannot have LOFS enabled if you use Sun Cluster HA for NFS on a highly available local file system and have automountd running. LOFS can cause switchover problems for Sun Cluster HA for NFS. If you enable LOFS and later choose to add Sun Cluster HA for NFS on a highly available local file system, you must do one of the following:

    • Restore the exclude:lofs entry to the /etc/system file on each node of the cluster and reboot each node. This change disables LOFS.

    • Disable the automountd daemon.

    • Exclude from the automounter map all files that are part of the highly available local file system that is exported by Sun Cluster HA for NFS. This choice enables you to keep both LOFS and the automountd daemon enabled.


    See Types of File Systems in System Administration Guide, Volume 1 (Solaris 8) or The Loopback File System in System Administration Guide: Devices and File Systems (Solaris 9 or Solaris 10) for more information about loopback file systems.

  23. x86: Set the default boot file to kadb.


    # eeprom boot-file=kadb
    

    The setting of this value enables you to reboot the node if you are unable to access a login prompt.

  24. If you performed a task that requires a cluster reboot, follow these steps to reboot the cluster.

    The following are some of the tasks that require a reboot:

    • Adding a new node to an existing cluster

    • Installing patches that require a node or cluster reboot

    • Making configuration changes that require a reboot to become active

    1. From one node, shut down the cluster.


      # scshutdown
      

      Note –

      Do not reboot the first-installed node of the cluster until after the cluster is shut down. Until cluster installation mode is disabled, only the first-installed node, which established the cluster, has a quorum vote. In an established cluster that is still in installation mode, if the cluster is not shut down before the first-installed node is rebooted, the remaining cluster nodes cannot obtain quorum. The entire cluster then shuts down.

      Cluster nodes remain in installation mode until the first time that you run the scsetup(1M) command. You run this command during the procedure How to Configure Quorum Devices.


    2. Reboot each node in the cluster.

      • On SPARC based systems, do the following:


        ok boot
        
      • On x86 based systems, do the following:


                             <<< Current Boot Parameters >>>
        Boot path: /pci@0,0/pci-ide@7,1/ata@1/cmdk@0,0:b
        Boot args:
        
        Type   b [file-name] [boot-flags] <ENTER>  to boot with options
        or     i <ENTER>                           to enter boot interpreter
        or     <ENTER>                             to boot with defaults
        
                         <<< timeout in 5 seconds >>>
        Select (b)oot or (i)nterpreter: b
        

    The scinstall utility installs and configures all cluster nodes and reboots the cluster. The cluster is established when all nodes have successfully booted into the cluster. Sun Cluster installation output is logged in a /var/cluster/logs/install/scinstall.log.N file.

  25. From one node, verify that all nodes have joined the cluster.

    Run the scstat(1M) command to display a list of the cluster nodes. You do not need to be logged in as superuser to run this command.


    % scstat -n
    

    Output resembles the following.


    -- Cluster Nodes --
                               Node name      Status
                               ---------      ------
      Cluster node:            phys-schost-1  Online
      Cluster node:            phys-schost-2  Online
Next Steps

If you added a node to a two-node cluster, go to How to Update SCSI Reservations After Adding a Node.

If you intend to install data services, go to the appropriate procedure for the data service that you want to install and for your version of the Solaris OS:

 

Sun Cluster 2 of 2 CD-ROM 

(Sun Java System data services) 

Sun Cluster Agents CD 

(All other data services) 

Procedure 

Solaris 8 or 9 

Solaris 10 

Solaris 8 or 9 

Solaris 10 

How to Install Sun Cluster Framework and Data-Service Software Packages (Java ES installer)

     

How to Install Data-Service Software Packages (pkgadd)

 

   

How to Install Data-Service Software Packages (scinstall)

   

How to Install Data-Service Software Packages (Web Start installer)

   

 

Otherwise, go to the next appropriate procedure:

Troubleshooting

Disabled scinstall option – If the JumpStart option of the scinstall command does not have an asterisk in front, the option is disabled. This condition indicates that JumpStart setup is not complete or that the setup has an error. To correct this condition, first quit the scinstall utility. Repeat Step 1 through Step 10 to correct JumpStart setup, then restart the scinstall utility.

Error messages about nonexistent nodes – Unless you have installed your own /etc/inet/ntp.conf file, the scinstall command installs a default ntp.conf file for you. The default file is shipped with references to the maximum number of nodes. Therefore, the xntpd(1M) daemon might issue error messages regarding some of these references at boot time. You can safely ignore these messages. See How to Configure Network Time Protocol (NTP) for information about how to suppress these messages under otherwise normal cluster conditions.

Changing the private-network address – You cannot change the private-network address and netmask after scinstall processing has finished. If you need to use a different private-network address or netmask and the node is still in installation mode, follow the procedures in How to Uninstall Sun Cluster Software to Correct Installation Problems. Then repeat this procedure to reinstall and configure the node with the correct information.

Using SunPlex Installer to Configure Sun Cluster Software


Note –

Do not use this configuration method in the following circumstances:


This section describes how to use SunPlex Installer, the installation module of SunPlex Manager, to establish a new cluster. You can also use SunPlex Installer to install or configure one or more of the following additional software products:

Installation Requirements

The following table lists SunPlex Installer installation requirements for these additional software products.

Table 2–3 Requirements to Use SunPlex Installer to Install Software

Software Package 

Installation Requirements 

Solstice DiskSuite or Solaris Volume Manager 

A partition that uses /sds as the mount-point name. The partition must be at least 20 Mbytes in size.

Sun Cluster HA for NFS data service 

  • At least two shared disks, of the same size, that are connected to the same set of nodes.

  • Solstice DiskSuite software installed, or Solaris Volume Manager software configured, by SunPlex Installer.

  • A logical hostname for use by Sun Cluster HA for NFS. The logical hostname must have a valid IP address that is accessible by all cluster nodes. The IP address must be on the same subnet as any of the adapters in the IP Network Multipathing group that hosts the logical address.

  • A test IP address for each node of the cluster. SunPlex Installer uses these test IP addresses to create Internet Protocol (IP) Network Multipathing groups for use by Sun Cluster HA for NFS.

Sun Cluster HA for Apache scalable data service 

  • At least two shared disks of the same size that are connected to the same set of nodes.

  • Solstice DiskSuite software installed, or Solaris Volume Manager software configured, by SunPlex Installer.

  • A shared address for use by Sun Cluster HA for Apache. The shared address must have a valid IP address that is accessible by all cluster nodes. The IP address must be on the same subnet as any of the adapters in the IP Network Multipathing group that hosts the logical address.

  • A test IP address for each node of the cluster. SunPlex Installer uses these test IP addresses to create Internet Protocol (IP) Network Multipathing groups for use by Sun Cluster HA for Apache.

Test IP Addresses

The test IP addresses that you supply must meet the following requirements:

The following table lists each metaset name and cluster-file-system mount point that is created by SunPlex Installer. The number of metasets and mount points that SunPlex Installer creates depends on the number of shared disks that are connected to the node. For example, if a node is connected to four shared disks, SunPlex Installer creates the mirror-1 and mirror-2 metasets. However, SunPlex Installer does not create the mirror-3 metaset, because the node does not have enough shared disks to create a third metaset.

Table 2–4 Metasets Created by SunPlex Installer

Shared Disks    Metaset Name    Cluster File System Mount Point    Purpose

First pair      mirror-1        /global/mirror-1                   Sun Cluster HA for NFS or Sun Cluster HA for Apache scalable data service, or both

Second pair     mirror-2        /global/mirror-2                   Unused

Third pair      mirror-3        /global/mirror-3                   Unused


Note –

If the cluster does not meet the minimum shared-disk requirement, SunPlex Installer still installs the Solstice DiskSuite packages. However, without sufficient shared disks, SunPlex Installer cannot configure the metasets, metadevices, or volumes. SunPlex Installer then cannot configure the cluster file systems that are needed to create instances of the data service.


Character-Set Limitations

SunPlex Installer recognizes a limited character set to increase security. Characters that are not a part of the set are silently filtered out when HTML forms are submitted to the SunPlex Installer server. The following characters are accepted by SunPlex Installer:


()+,-./0-9:=@A-Z^_a-z{|}~

This filter can cause problems in the following two areas:

ProcedureHow to Configure Sun Cluster Software (SunPlex Installer)

Perform this procedure to use SunPlex Installer to configure Sun Cluster software and install patches on all nodes in the cluster in a single operation. In addition, you can use this procedure to install Solstice DiskSuite software and patches (Solaris 8) and to configure Solstice DiskSuite or Solaris Volume Manager mirrored disk sets.


Note –

Do not use this configuration method in the following circumstances:


The installation process might take from 30 minutes to two or more hours. The actual length of time depends on the number of nodes that are in the cluster, your choice of data services to install, and the number of disks that are in your cluster configuration.

Before You Begin

Perform the following tasks:

Steps
  1. Prepare file-system paths to a CD-ROM image of each software product that you intend to install.

    Follow these guidelines to prepare the file-system paths (a sketch of one possible approach follows this list):

    • Provide each CD-ROM image in a location that is available to each node.

    • Ensure that the CD-ROM images are accessible to all nodes of the cluster from the same file-system path. These paths can be one or more of the following locations:

      • CD-ROM drives that are exported to the network from machines outside the cluster.

      • Exported file systems on machines outside the cluster.

      • CD-ROM images that are copied to local file systems on each node of the cluster. The local file system must use the same name on each node.
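    For illustration only, one way to provide a copied CD-ROM image from a machine outside the cluster is to export the directory over NFS and mount it on each node under the same path. The host name admin-host and the directory names in this sketch are hypothetical; substitute the values for your environment.


    (on the machine outside the cluster that holds the images)
    # share -F nfs -o ro /export/cdimages

    (on each cluster node)
    # mkdir -p /cdimages
    # mount -F nfs admin-host:/export/cdimages /cdimages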

  2. x86: Determine whether you are using the Netscape Navigator™ browser or the Microsoft Internet Explorer browser on your administrative console.

    • If you are using Netscape Navigator, proceed to Step 3.

    • If you are using Internet Explorer, skip to Step 4.

  3. x86: Ensure that the Java plug-in is installed and working on your administrative console.

    1. Start the Netscape Navigator browser on the administrative console that you use to connect to the cluster.

    2. From the Help menu, choose About Plug-ins.

    3. Determine whether the Java plug-in is listed.

    4. Download the latest Java plug-in from http://java.sun.com/products/plugin.

    5. Install the plug-in on your administrative console.

    6. Create a symbolic link to the plug-in.


      % cd ~/.netscape/plugins/
      % ln -s /usr/j2se/plugin/i386/ns4/javaplugin.so .
      
    7. Skip to Step 5.

  4. x86: Ensure that Java 2 Platform, Standard Edition (J2SE) for Windows is installed and working on your administrative console.

    1. On your Microsoft Windows desktop, click Start, point to Settings, and then select Control Panel.

      The Control Panel window appears.

    2. Determine whether the Java Plug-in is listed.

      • If no, proceed to Step c.

      • If yes, double-click the Java Plug-in control panel. When the control panel window opens, click the About tab.

        • If an earlier version is shown, proceed to Step c.

        • If version 1.4.1 or a later version is shown, skip to Step 5.

    3. Download the latest version of J2SE for Windows from http://java.sun.com/j2se/downloads.html.

    4. Install the J2SE for Windows software on your administrative console.

    5. Restart the system on which your administrative console runs.

      The J2SE for Windows control panel is activated.

  5. If patches exist that are required to support Sun Cluster or Solstice DiskSuite software, determine how to install those patches.

    • To manually install patches, use the patchadd command to install all patches before you use SunPlex Installer (see the sketch after this list).

    • To use SunPlex Installer to install patches, copy patches into a single directory.

      Ensure that the patch directory meets the following requirements:

      • The patch directory resides on a file system that is available to each node.

      • Only one version of each patch is present in this patch directory. If the patch directory contains multiple versions of the same patch, SunPlex Installer cannot determine the correct patch dependency order.

      • The patches are uncompressed.
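    As an illustration of the manual approach, the following sketch applies one patch with the patchadd command before SunPlex Installer is run. The patch directory and patch ID shown are hypothetical.


    # patchadd /var/tmp/patches/123456-07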

  6. From the administrative console or any other machine outside the cluster, launch a browser.

  7. Disable the browser's Web proxy.

    SunPlex Installer installation functionality is incompatible with Web proxies.

  8. Ensure that disk caching and memory caching are enabled.

    The disk cache and memory cache size must be greater than 0.

  9. From the browser, connect to port 3000 on a node of the cluster.


    https://node:3000
    

    The Sun Cluster Installation screen is displayed in the browser window.


    Note –

    If SunPlex Installer displays the data services installation screen instead of the Sun Cluster Installation screen, Sun Cluster framework software is already installed and configured on that node. Check that the name of the node in the URL is the correct name of the cluster node to install.


  10. If the browser displays a New Site Certification window, follow the onscreen instructions to accept the certificate.

  11. Log in as superuser.

  12. In the Sun Cluster Installation screen, verify that the cluster meets the listed requirements for using SunPlex Installer.

    If you meet all listed requirements, click Next to continue to the next screen.

  13. Follow the menu prompts to supply your answers from the configuration planning worksheet.

  14. Click Begin Installation to start the installation process.

    Follow these guidelines to use SunPlex Installer:

    • Do not close the browser window or change the URL during the installation process.

    • If the browser displays a New Site Certification window, follow the onscreen instructions to accept the certificate.

    • If the browser prompts for login information, type the appropriate superuser ID and password for the node that you connect to.

    SunPlex Installer installs and configures all cluster nodes and reboots the cluster. The cluster is established when all nodes have successfully booted into the cluster.

    During installation, the screen displays brief messages about the status of the cluster installation process. When installation and configuration is complete, the browser displays the cluster monitoring and administration GUI.

    SunPlex Installer installation output is logged in the /var/cluster/spm/messages file. Sun Cluster installation output is logged in a /var/cluster/logs/install/scinstall.log.N file.

  15. From one node, verify that all nodes have joined the cluster.

    Run the scstat(1M) command to display a list of the cluster nodes. You do not need to be logged in as superuser to run this command.


    % scstat -n
    

    Output resembles the following.


    -- Cluster Nodes --
                               Node name      Status
                               ---------      ------
      Cluster node:            phys-schost-1  Online
      Cluster node:            phys-schost-2  Online
  16. Verify the quorum assignments and modify those assignments, if necessary.

    For clusters with three or more nodes, the use of shared quorum devices is optional. SunPlex Installer might or might not have assigned quorum votes to any quorum devices, depending on whether appropriate shared disks were available. You can use SunPlex Manager to designate quorum devices and to reassign quorum votes in the cluster. See Chapter 5, Administering Quorum, in Sun Cluster System Administration Guide for Solaris OS for more information.

  17. To re-enable the loopback file system (LOFS), delete the following entry from the /etc/system file on each node of the cluster.


    exclude:lofs

    The re-enabling of LOFS becomes effective after the next system reboot.


    Note –

    You cannot have LOFS enabled if you use Sun Cluster HA for NFS on a highly available local file system and have automountd running. LOFS can cause switchover problems for Sun Cluster HA for NFS. If you enable LOFS and later choose to add Sun Cluster HA for NFS on a highly available local file system, you must do one of the following:

    • Restore the exclude:lofs entry to the /etc/system file on each node of the cluster and reboot each node. This change disables LOFS.

    • Disable the automountd daemon (see the sketch that follows this note).

    • Exclude from the automounter map all files that are part of the highly available local file system that is exported by Sun Cluster HA for NFS. This choice enables you to keep both LOFS and the automountd daemon enabled.


    See Types of File Systems in System Administration Guide, Volume 1 (Solaris 8) or The Loopback File System in System Administration Guide: Devices and File Systems (Solaris 9 or Solaris 10) for more information about loopback file systems.
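    If you choose to disable the automountd daemon, the following sketch shows one possible way to do so. Verify the autofs service and script names on your systems before you rely on them.

    • For the Solaris 8 or Solaris 9 OS, you might run the following command on each node:


      # /etc/init.d/autofs stop

    • For the Solaris 10 OS, you might run the following command on each node:


      # svcadm disable svc:/system/filesystem/autofs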

Next Steps

If you intend to install data services, go to the appropriate procedure for the data service that you want to install and for your version of the Solaris OS:

 

• Sun Cluster 2 of 2 CD-ROM (Sun Java System data services)

  • Solaris 8 or 9: How to Install Sun Cluster Framework and Data-Service Software Packages (Java ES installer)

  • Solaris 10: How to Install Data-Service Software Packages (pkgadd)

• Sun Cluster Agents CD (all other data services)

  • Solaris 8 or 9: How to Install Data-Service Software Packages (scinstall) or How to Install Data-Service Software Packages (Web Start installer)

  • Solaris 10: How to Install Data-Service Software Packages (scinstall)

Otherwise, go to How to Verify the Quorum Configuration and Installation Mode.

Troubleshooting

You cannot change the private-network address and netmask after scinstall processing has finished. If you need to use a different private-network address or netmask and the node is still in installation mode, follow the procedures in How to Uninstall Sun Cluster Software to Correct Installation Problems. Then repeat this procedure to reinstall and configure the node with the correct information.

ProcedureHow to Configure Sun Cluster Software on Additional Cluster Nodes (scinstall)

Perform this procedure to add a new node to an existing cluster. To use JumpStart to add a new node, instead follow procedures in How to Install Solaris and Sun Cluster Software (JumpStart).

Before You Begin

Perform the following tasks:

Follow these guidelines to use the interactive scinstall utility in this procedure:

Steps
  1. If you are adding this node to a single-node cluster, ensure that two cluster interconnects already exist by displaying the interconnect configuration.


    # scconf -p | grep cable
    # scconf -p | grep adapter
    

    You must have at least two cables or two adapters configured before you can add a node.

    • If the output shows configuration information for two cables or for two adapters, proceed to Step 2.

    • If the output shows no configuration information for either cables or adapters, or shows configuration information for only one cable or adapter, configure new cluster interconnects.

      1. On the existing cluster node, start the scsetup(1M) utility.


        # scsetup
        
      2. Choose the menu item, Cluster interconnect.

      3. Choose the menu item, Add a transport cable.

        Follow the instructions to specify the name of the node to add to the cluster, the name of a transport adapter, and whether to use a transport junction.

      4. If necessary, repeat Step c to configure a second cluster interconnect.

        When finished, quit the scsetup utility.

      5. Verify that the cluster now has two cluster interconnects configured.


        # scconf -p | grep cable
        # scconf -p | grep adapter
        

        The command output should show configuration information for at least two cluster interconnects.

  2. If you are adding this node to an existing cluster, add the new node to the cluster authorized-nodes list.

    1. On any active cluster member, start the scsetup(1M) utility.


      # scsetup
      

      The Main Menu is displayed.

    2. Choose the menu item, New nodes.

    3. Choose the menu item, Specify the name of a machine which may add itself.

    4. Follow the prompts to add the node's name to the list of recognized machines.

      The scsetup utility prints the message Command completed successfully if the task is completed without error.

    5. Quit the scsetup utility.

  3. Become superuser on the cluster node to configure.

  4. Start the scinstall utility.


    # /usr/cluster/bin/scinstall
    
  5. From the Main Menu, choose the menu item, Install a cluster or cluster node.


      *** Main Menu ***
    
        Please select from one of the following (*) options:
    
          * 1) Install a cluster or cluster node
            2) Configure a cluster to be JumpStarted from this install server
            3) Add support for new data services to this cluster node
            4) Upgrade this cluster node
          * 5) Print release information for this cluster node
    
          * ?) Help with menu options
          * q) Quit
    
        Option:  1
    
  6. From the Install Menu, choose the menu item, Add this machine as a node in an existing cluster.

  7. Follow the menu prompts to supply your answers from the configuration planning worksheet.

    The scinstall utility configures the node and boots the node into the cluster.

  8. Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.


    # eject cdrom
    
  9. Install any necessary patches to support Sun Cluster software, if you have not already done so.

  10. Repeat this procedure on any other node to add to the cluster until all additional nodes are fully configured.

  11. For the Solaris 10 OS, verify on each node that multi-user services for the Service Management Facility (SMF) are online.

    If services are not yet online for a node, wait until the state becomes online before you proceed to the next step.


    # svcs multi-user-server
    STATE          STIME    FMRI
    online         17:52:55 svc:/milestone/multi-user-server:default
  12. From an active cluster member, prevent any other nodes from joining the cluster.


    # /usr/cluster/bin/scconf -a -T node=.
    
    -a

    Specifies the add form of the command

    -T

    Specifies authentication options

    node=.

    Specifies the node name of dot (.) to add to the authentication list, to prevent any other node from adding itself to the cluster

    Alternately, you can use the scsetup(1M) utility. See How to Add a Node to the Authorized Node List in Sun Cluster System Administration Guide for Solaris OS for procedures.

  13. From one node, verify that all nodes have joined the cluster.

    Run the scstat(1M) command to display a list of the cluster nodes. You do not need to be logged in as superuser to run this command.


    % scstat -n
    

    Output resembles the following.


    -- Cluster Nodes --
                               Node name      Status
                               ---------      ------
      Cluster node:            phys-schost-1  Online
      Cluster node:            phys-schost-2  Online
  14. To re-enable the loopback file system (LOFS), delete the following entry from the /etc/system file on each node of the cluster.


    exclude:lofs

    The re-enabling of LOFS becomes effective after the next system reboot.


    Note –

    You cannot have LOFS enabled if you use Sun Cluster HA for NFS on a highly available local file system and have automountd running. LOFS can cause switchover problems for Sun Cluster HA for NFS. If you enable LOFS and later choose to add Sun Cluster HA for NFS on a highly available local file system, you must do one of the following:

    • Restore the exclude:lofs entry to the /etc/system file on each node of the cluster and reboot each node. This change disables LOFS.

    • Disable the automountd daemon.

    • Exclude from the automounter map all files that are part of the highly available local file system that is exported by Sun Cluster HA for NFS. This choice enables you to keep both LOFS and the automountd daemon enabled.


    See Types of File Systems in System Administration Guide, Volume 1 (Solaris 8) or The Loopback File System in System Administration Guide: Devices and File Systems (Solaris 9 or Solaris 10) for more information about loopback file systems.


Example 2–2 Configuring Sun Cluster Software on an Additional Node

The following example shows the node phys-schost-3 added to the cluster schost. The sponsoring node is phys-schost-1.


*** Adding a Node to an Existing Cluster ***
Fri Feb  4 10:17:53 PST 2005


scinstall -ik -C schost -N phys-schost-1 -A trtype=dlpi,name=qfe2 -A trtype=dlpi,name=qfe3 
-m endpoint=:qfe2,endpoint=switch1 -m endpoint=:qfe3,endpoint=switch2


Checking device to use for global devices file system ... done

Adding node "phys-schost-3" to the cluster configuration ... done
Adding adapter "qfe2" to the cluster configuration ... done
Adding adapter "qfe3" to the cluster configuration ... done
Adding cable to the cluster configuration ... done
Adding cable to the cluster configuration ... done

Copying the config from "phys-schost-1" ... done

Copying the postconfig file from "phys-schost-1" if it exists ... done
Copying the Common Agent Container keys from "phys-schost-1" ... done


Setting the node ID for "phys-schost-3" ... done (id=1)

Setting the major number for the "did" driver ... 
Obtaining the major number for the "did" driver from "phys-schost-1" ... done
"did" driver major number set to 300

Checking for global devices global file system ... done
Updating vfstab ... done

Verifying that NTP is configured ... done
Initializing NTP configuration ... done

Updating nsswitch.conf ... 
done

Adding clusternode entries to /etc/inet/hosts ... done


Configuring IP Multipathing groups in "/etc/hostname.<adapter>" files

Updating "/etc/hostname.hme0".

Verifying that power management is NOT configured ... done

Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ... done
The "local-mac-address?" parameter setting has been changed to "true".

Ensure network routing is disabled ... done

Updating file ("ntp.conf.cluster") on node phys-schost-1 ... done
Updating file ("hosts") on node phys-schost-1 ... done

Rebooting ... 

Next Steps

Determine your next step:

If you added a node to a two-node cluster, go to How to Update SCSI Reservations After Adding a Node.

If you intend to install data services, go to the appropriate procedure for the data service that you want to install and for your version of the Solaris OS:

 

• Sun Cluster 2 of 2 CD-ROM (Sun Java System data services)

  • Solaris 8 or 9: How to Install Sun Cluster Framework and Data-Service Software Packages (Java ES installer)

  • Solaris 10: How to Install Data-Service Software Packages (pkgadd)

• Sun Cluster Agents CD (all other data services)

  • Solaris 8 or 9: How to Install Data-Service Software Packages (scinstall) or How to Install Data-Service Software Packages (Web Start installer)

  • Solaris 10: How to Install Data-Service Software Packages (scinstall)

Otherwise, go to How to Verify the Quorum Configuration and Installation Mode.

Troubleshooting

When you increase or decrease the number of node attachments to a quorum device, the cluster does not automatically recalculate the quorum vote count. To reestablish the correct quorum vote, use the scsetup utility to remove each quorum device and then add it back into the configuration. Do this on one quorum device at a time.

If the cluster has only one quorum device, configure a second quorum device before you remove and re-add the original quorum device. Then remove the second quorum device to return the cluster to its original configuration.
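The following sketch shows the general pattern, using the scconf and scstat commands that appear elsewhere in this chapter and a hypothetical quorum device named d3.


(remove the quorum device)
# scconf -r -q globaldev=d3

(add the same device back)
# scconf -a -q globaldev=d3

(verify the quorum configuration)
# scstat -q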

ProcedureHow to Update SCSI Reservations After Adding a Node

If you added a node to a two-node cluster that uses one or more shared SCSI disks as quorum devices, you must update the SCSI Persistent Group Reservations (PGR). To do this, remove the quorum devices, which have SCSI-2 reservations. If you then add back quorum devices, the newly configured quorum devices use SCSI-3 reservations.

Before You Begin

Ensure that you have completed installation of Sun Cluster software on the added node.

Steps
  1. Become superuser on any node of the cluster.

  2. View the current quorum configuration.

    The output shows the status of each configured quorum device.


    # scstat -q
    

    Note the name of each quorum device that is listed.

  3. Remove the original quorum device.

    Perform this step for each quorum device that is configured.


    # scconf -r -q globaldev=devicename
    
    -r

    Removes

    -q globaldev=devicename

    Specifies the name of the quorum device

  4. Verify that all original quorum devices are removed.


    # scstat -q
    
  5. (Optional) Add a SCSI quorum device.

    You can configure the same device that was originally configured as the quorum device or choose a new shared device to configure.

    1. (Optional) If you want to choose a new shared device to configure as a quorum device, display all devices that the system checks.

      Otherwise, skip to Step c.


      # scdidadm -L
      

      Output resembles the following:


      1       phys-schost-1:/dev/rdsk/c0t0d0 /dev/did/rdsk/d1
      2       phys-schost-1:/dev/rdsk/c1t1d0 /dev/did/rdsk/d2
      2       phys-schost-2:/dev/rdsk/c1t1d0 /dev/did/rdsk/d2
      3       phys-schost-1:/dev/rdsk/c1t2d0 /dev/did/rdsk/d3
      3       phys-schost-2:/dev/rdsk/c1t2d0 /dev/did/rdsk/d3
      …
    2. From the output, choose a shared device to configure as a quorum device.

    3. Configure the shared device as a quorum device.


      # scconf -a -q globaldev=devicename
      
      -a

      Adds

    4. Repeat for each quorum device that you want to configure.

  6. If you added any quorum devices, verify the new quorum configuration.


    # scstat -q
    

    Each new quorum device should be Online and have an assigned vote.


Example 2–3 Updating SCSI Reservations After Adding a Node

The following example identifies the original quorum device d2, removes that quorum device, lists the available shared devices, and configures d3 as a new quorum device.


(List quorum devices)
# scstat -q
…
-- Quorum Votes by Device --
 
                    Device Name         Present Possible Status
                    -----------         ------- -------- ------
  Device votes:     /dev/did/rdsk/d2s2  1        1       Online

(Remove the original quorum device)
# scconf -r -q globaldev=d2
 
(Verify the removal of the original quorum device)
# scstat -q
…
-- Quorum Votes by Device --
 
                    Device Name         Present Possible Status
                    -----------         ------- -------- ------
 
(List available devices)
# scdidadm -L
…
3       phys-schost-1:/dev/rdsk/c1t2d0 /dev/did/rdsk/d3
3       phys-schost-2:/dev/rdsk/c1t2d0 /dev/did/rdsk/d3
…
 
(Add a quorum device)
# scconf -a -q globaldev=d3
 
(Verify the addition of the new quorum device)
# scstat -q
…
-- Quorum Votes by Device --

                    Device Name         Present Possible Status
                    -----------         ------- -------- ------
  Device votes:     /dev/did/rdsk/d3s2 2        2       Online

Next Steps

ProcedureHow to Install Data-Service Software Packages (pkgadd)

Perform this procedure to install data services for the Solaris 10 OS from the Sun Cluster 2 of 2 CD-ROM. The Sun Cluster 2 of 2 CD-ROM contains the data services for Sun Java System applications. This procedure uses the pkgadd(1M) program to install the packages. Perform this procedure on each node in the cluster on which you want to run a chosen data service.


Note –

Do not use this procedure for the following kinds of data-service packages:


Steps
  1. Become superuser on the cluster node.

  2. Insert the Sun Cluster 2 of 2 CD-ROM in the CD-ROM drive.

    If the volume management daemon vold(1M) is running and is configured to manage CD-ROM devices, the daemon automatically mounts the CD-ROM on the /cdrom/cdrom0/ directory.

  3. Change to the Solaris_arch/Product/sun_cluster_agents/Solaris_10/Packages/ directory, where arch is sparc or x86.


    # cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster_agents/ \
    Solaris_10/Packages/
    
  4. Install the data service packages on the global zone.


    # pkgadd -G -d . [packages]
    -G

    Adds packages to the current zone only. You must add Sun Cluster packages only to the global zone. This option also specifies that the packages are not propagated to any existing non-global zone or to any non-global zone that is created later.

    -d

    Specifies the location of the packages to install.

    packages

    Optional. Specifies the name of one or more packages to install. If no package name is specified, the pkgadd program displays a pick list of all packages that are available to install.
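    For example, to have the pkgadd program display the pick list of all available data-service packages and then choose which packages to install, omit the package names:


    # pkgadd -G -d .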

  5. Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.


    # eject cdrom
    
  6. Install any patches for the data services that you installed.

    See Patches and Required Firmware Levels in Sun Cluster 3.1 8/05 Release Notes for Solaris OS for the location of patches and installation instructions.

    You do not have to reboot after you install Sun Cluster data-service patches unless a reboot is specified by the patch special instructions. If a patch instruction requires that you reboot, perform the following steps:

    1. From one node, shut down the cluster by using the scshutdown(1M) command.

    2. Reboot each node in the cluster.
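    For example, you might shut down the cluster from one node and then boot each node (the SPARC ok prompt is shown). Confirm the scshutdown options against the scshutdown(1M) man page before you use them.


    # scshutdown -g0 -y

    ok boot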


    Note –

    Until cluster installation mode is disabled, only the first-installed node, which established the cluster, has a quorum vote. In an established multiple-node cluster which is still in installation mode, if the cluster is not shut down before the first-installed node is rebooted, the remaining cluster nodes cannot obtain quorum. The entire cluster then shuts down.

    If you chose automatic quorum configuration during Sun Cluster installation or used SunPlex Installer to install Sun Cluster software, the installation utility automatically assigns quorum votes and removes the cluster from installation mode during installation reboot. However, if you did not choose one of these methods, cluster nodes remain in installation mode until you run the scsetup(1M) command, during the procedure How to Configure Quorum Devices.


Next Steps

ProcedureHow to Install Data-Service Software Packages (scinstall)

Perform this procedure to install data services from the Sun Cluster Agents CD of the Sun Cluster 3.1 8/05 release. This procedure uses the interactive scinstall utility to install the packages. Perform this procedure on each node in the cluster on which you want to run a chosen data service.


Note –

Do not use this procedure for the following kinds of data-service packages:


You do not need to perform this procedure if you used SunPlex Installer to install Sun Cluster HA for NFS or Sun Cluster HA for Apache or both and you do not intend to install any other data services. Instead, go to How to Configure Quorum Devices.

To install data services from the Sun Cluster 3.1 10/03 release or earlier, you can alternatively use the Web Start installer program to install the packages. See How to Install Data-Service Software Packages (Web Start installer).

Follow these guidelines to use the interactive scinstall utility in this procedure:

Steps
  1. Become superuser on the cluster node.

  2. Insert the Sun Cluster Agents CD in the CD-ROM drive on the node.

    If the volume management daemon vold(1M) is running and is configured to manage CD-ROM devices, the daemon automatically mounts the CD-ROM on the /cdrom/cdrom0/ directory.

  3. Change to the directory where the CD-ROM is mounted.


    # cd /cdrom/cdrom0/
    
  4. Start the scinstall(1M) utility.


    # scinstall
    
  5. From the Main Menu, choose the menu item, Add support for new data services to this cluster node.

  6. Follow the prompts to select the data services to install.

    You must install the same set of data-service packages on each node. This requirement applies even if a node is not expected to host resources for an installed data service.

  7. After the data services are installed, quit the scinstall utility.

  8. Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.


    # eject cdrom
    
  9. Install any Sun Cluster data-service patches.

    See Patches and Required Firmware Levels in Sun Cluster 3.1 8/05 Release Notes for Solaris OS for the location of patches and installation instructions.

    You do not have to reboot after you install Sun Cluster data-service patches unless a reboot is specified by the patch special instructions. If a patch instruction requires that you reboot, perform the following steps:

    1. From one node, shut down the cluster by using the scshutdown(1M) command.

    2. Reboot each node in the cluster.


    Note –

    Until cluster installation mode is disabled, only the first-installed node, which established the cluster, has a quorum vote. In an established multiple-node cluster which is still in installation mode, if the cluster is not shut down before the first-installed node is rebooted, the remaining cluster nodes cannot obtain quorum. This inability to obtain quorum causes the entire cluster to shut down.

    If you chose automatic quorum configuration during Sun Cluster installation or used SunPlex Installer to install Sun Cluster software, the installation utility automatically assigns quorum votes and removes the cluster from installation mode during installation reboot. However, if you did not choose one of these methods, cluster nodes remain in installation mode until you run the scsetup(1M) command, during the procedure How to Configure Quorum Devices.


Next Steps

ProcedureHow to Install Data-Service Software Packages (Web Start installer)

Perform this procedure to install data services for the Solaris 8 or Solaris 9 OS from the Sun Cluster Agents CD. This procedure uses the Web Start installer program on the CD-ROM to install the packages. Perform this procedure on each node in the cluster on which you want to run a chosen data service.


Note –

Do not use this procedure for the following kinds of data-service packages:

You do not need to perform this procedure if you used SunPlex Installer to install Sun Cluster HA for NFS or Sun Cluster HA for Apache or both and you do not intend to install any other data services. Instead, go to How to Configure Quorum Devices.


To install data services from the Sun Cluster 3.1 10/03 release or earlier, you can alternatively follow the procedures in How to Install Data-Service Software Packages (scinstall).

You can run the installer program with a command-line interface (CLI) or with a graphical user interface (GUI). The content and sequence of instructions in the CLI and the GUI are similar. For more information about the installer program, see the installer(1M) man page.

Before You Begin

If you intend to use the installer program with a GUI, ensure that the DISPLAY environment variable is set.

Steps
  1. Become superuser on the cluster node.

  2. Insert the Sun Cluster Agents CD in the CD-ROM drive.

    If the volume management daemon vold(1M) is running and is configured to manage CD-ROM devices, the daemon automatically mounts the CD-ROM on the /cdrom/cdrom0/ directory.

  3. Change to the directory of the CD-ROM where the installer program resides.


    # cd /cdrom/cdrom0/Solaris_arch/
    

    In the Solaris_arch/ directory, arch is sparc or x86.

  4. Start the Web Start installer program.


    # ./installer
    
  5. When you are prompted, select the type of installation.

    See the Sun Cluster Release Notes for a listing of the locales that are available for each data service.

    • To install all data services on the CD-ROM, select Typical.

    • To install only a subset of the data services on the CD-ROM, select Custom.

  6. When you are prompted, select the locale to install.

    • To install only the C locale, select Typical.

    • To install other locales, select Custom.

  7. Follow instructions on the screen to install the data-service packages on the node.

    After the installation is finished, the installer program provides an installation summary. This summary enables you to view logs that the program created during the installation. These logs are located in the /var/sadm/install/logs/ directory.

  8. Quit the installer program.

  9. Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.


    # eject cdrom
    
  10. Install any Sun Cluster data-service patches.

    See Patches and Required Firmware Levels in Sun Cluster 3.1 8/05 Release Notes for Solaris OS for the location of patches and installation instructions.

    You do not have to reboot after you install Sun Cluster data-service patches unless a reboot is specified by the patch special instructions. If a patch instruction requires that you reboot, perform the following steps:

    1. From one node, shut down the cluster by using the scshutdown(1M) command.

    2. Reboot each node in the cluster.


    Note –

    Until cluster installation mode is disabled, only the first-installed node, which established the cluster, has a quorum vote. In an established multiple-node cluster which is still in installation mode, if the cluster is not shut down before the first-installed node is rebooted, the remaining cluster nodes cannot obtain quorum. The entire cluster then shuts down.

    If you chose automatic quorum configuration during Sun Cluster installation or used SunPlex Installer to install Sun Cluster software, the installation utility automatically assigns quorum votes and removes the cluster from installation mode during installation reboot. However, if you did not choose one of these methods, cluster nodes remain in installation mode until you run the scsetup(1M) command, during the procedure How to Configure Quorum Devices.


Next Steps

ProcedureHow to Configure Quorum Devices


Note –

You do not need to configure quorum devices in the following circumstances:

Instead, proceed to How to Verify the Quorum Configuration and Installation Mode.


Perform this procedure one time only, after the cluster is fully formed. Use this procedure to assign quorum votes and then to remove the cluster from installation mode.

Before You Begin

If you intend to configure a Network Appliance network-attached storage (NAS) device as a quorum device, do the following:

Steps
  1. If you want to use a shared SCSI disk as a quorum device, verify device connectivity to the cluster nodes and choose the device to configure.

    1. From one node of the cluster, display a list of all the devices that the system checks.

      You do not need to be logged in as superuser to run this command.


      % scdidadm -L
      

      Output resembles the following:


      1       phys-schost-1:/dev/rdsk/c0t0d0 /dev/did/rdsk/d1
      2       phys-schost-1:/dev/rdsk/c1t1d0 /dev/did/rdsk/d2
      2       phys-schost-2:/dev/rdsk/c1t1d0 /dev/did/rdsk/d2
      3       phys-schost-1:/dev/rdsk/c1t2d0 /dev/did/rdsk/d3
      3       phys-schost-2:/dev/rdsk/c1t2d0 /dev/did/rdsk/d3
      …
    2. Ensure that the output shows all connections between cluster nodes and storage devices.

    3. Determine the global device-ID name of each shared disk that you are configuring as a quorum device.


      Note –

      Any shared disk that you choose must be qualified for use as a quorum device. See Quorum Devices for further information about choosing quorum devices.


      Use the scdidadm output from Step a to identify the device-ID name of each shared disk that you are configuring as a quorum device. For example, the output in Step a shows that global device d2 is shared by phys-schost-1 and phys-schost-2.

  2. Become superuser on one node of the cluster.

  3. Start the scsetup(1M) utility.


    # scsetup
    

    The Initial Cluster Setup screen is displayed.


    Note –

    If the Main Menu is displayed instead, initial cluster setup was already successfully performed. Skip to Step 8.


  4. Answer the prompt Do you want to add any quorum disks?.

    • If your cluster is a two-node cluster, you must configure at least one shared quorum device. Type Yes to configure one or more quorum devices.

    • If your cluster has three or more nodes, quorum device configuration is optional.

      • Type No if you do not want to configure additional quorum devices. Then skip to Step 7.

      • Type Yes to configure additional quorum devices. Then proceed to Step 5.

  5. Specify what type of device you want to configure as a quorum device.

    • Choose scsi to configure a shared SCSI disk.

    • Choose netapp_nas to configure a Network Appliance NAS device.

  6. Specify the name of the device to configure as a quorum device.

    For a Network Appliance NAS device, also specify the following information:

    • The name of the NAS device

    • The LUN ID of the NAS device

  7. At the prompt Is it okay to reset "installmode"?, type Yes.

    After the scsetup utility sets the quorum configurations and vote counts for the cluster, the message Cluster initialization is complete is displayed. The utility returns you to the Main Menu.

  8. Quit the scsetup utility.

Next Steps

Verify the quorum configuration and that installation mode is disabled. Go to How to Verify the Quorum Configuration and Installation Mode.

Troubleshooting

Interrupted scsetup processing — If the quorum setup process is interrupted or fails to be completed successfully, rerun scsetup.

Changes to quorum vote count — If you later increase or decrease the number of node attachments to a quorum device, the quorum vote count is not automatically recalculated. You can reestablish the correct quorum vote by removing each quorum device and then adding it back into the configuration, one quorum device at a time. For a two-node cluster, temporarily add a new quorum device before you remove and add back the original quorum device. Then remove the temporary quorum device. See the procedure “How to Modify a Quorum Device Node List” in Chapter 5, Administering Quorum, in Sun Cluster System Administration Guide for Solaris OS.

ProcedureHow to Verify the Quorum Configuration and Installation Mode

Perform this procedure to verify that quorum configuration was completed successfully and that cluster installation mode is disabled.

Steps
  1. From any node, verify the device and node quorum configurations.


    % scstat -q
    
  2. From any node, verify that cluster installation mode is disabled.

    You do not need to be superuser to run this command.


    % scconf -p | grep "install mode"
    Cluster install mode:                disabled

    Cluster installation is complete.

Next Steps

Go to Configuring the Cluster to install volume management software and perform other configuration tasks on the cluster or new cluster node.


Note –

If you added a new node to a cluster that uses VxVM, you must perform steps in SPARC: How to Install VERITAS Volume Manager Software to do one of the following tasks:


Configuring the Cluster

This section provides information and procedures to configure the software that you installed on the cluster or new cluster node. Before you start to perform these tasks, ensure that you completed the following tasks:

The following table lists the tasks to perform to configure your cluster. Complete the procedures in the order that is indicated.


Note –

If you added a new node to a cluster that uses VxVM, you must perform steps in SPARC: How to Install VERITAS Volume Manager Software to do one of the following tasks:


Table 2–5 Task Map: Configuring the Cluster

Task 

Instructions 

1. Install and configure volume management software: 

  • Install and configure Solstice DiskSuite or Solaris Volume Manager software

Chapter 3, Installing and Configuring Solstice DiskSuite or Solaris Volume Manager Software

Solstice DiskSuite or Solaris Volume Manager documentation 

  • SPARC: Install and configure VERITAS Volume Manager software.

Chapter 4, SPARC: Installing and Configuring VERITAS Volume Manager

VERITAS Volume Manager documentation 

2. Create and mount cluster file systems. 

How to Create Cluster File Systems

3. (Solaris 8 or SunPlex Installer installations) Create Internet Protocol (IP) Network Multipathing groups for each public-network adapter that is not already configured in an IP Network Multipathing group.

How to Configure Internet Protocol (IP) Network Multipathing Groups

4. (Optional) Change a node's private hostname.

How to Change Private Hostnames

5. Create or modify the NTP configuration file. 

How to Configure Network Time Protocol (NTP)

6. (Optional) SPARC: Install the Sun Cluster module to Sun Management Center software.

SPARC: Installing the Sun Cluster Module for Sun Management Center

Sun Management Center documentation 

7. Install third-party applications and configure the applications, data services, and resource groups. 

Sun Cluster Data Services Planning and Administration Guide for Solaris OS

Third-party application documentation 

ProcedureHow to Create Cluster File Systems

Perform this procedure for each cluster file system that you want to create. Unlike a local file system, a cluster file system is accessible from any node in the cluster. If you used SunPlex Installer to install data services, SunPlex Installer might have already created one or more cluster file systems.


Caution – Caution –

Any data on the disks is destroyed when you create a file system. Be sure that you specify the correct disk device name. If you specify the wrong device name, you might erase data that you did not intend to delete.


Before You Begin

Perform the following tasks:

Steps
  1. Become superuser on any node in the cluster.


    Tip –

    For faster file-system creation, become superuser on the current primary of the global device for which you create a file system.


  2. Create a file system.

    • For a UFS file system, use the newfs(1M) command.


      # newfs raw-disk-device
      

      The following table shows examples of names for the raw-disk-device argument. Note that naming conventions differ for each volume manager.

      Volume Manager                                  Sample Disk Device Name     Description

      Solstice DiskSuite or Solaris Volume Manager    /dev/md/nfs/rdsk/d1         Raw disk device d1 within the nfs disk set

      SPARC: VERITAS Volume Manager                   /dev/vx/rdsk/oradg/vol01    Raw disk device vol01 within the oradg disk group

      None                                            /dev/global/rdsk/d1s3       Raw disk device d1s3

    • For a Sun StorEdge QFS file system, follow the procedures for defining the configuration in the Sun StorEdge QFS and Sun StorEdge SAM-FS Software Installation and Configuration Guide.

    • SPARC: For a VERITAS File System (VxFS) file system, follow the procedures that are provided in your VxFS documentation.

  3. On each node in the cluster, create a mount-point directory for the cluster file system.

    A mount point is required on each node, even if the cluster file system is not accessed on that node.


    Tip –

    For ease of administration, create the mount point in the /global/device-group/ directory. This location enables you to easily distinguish cluster file systems, which are globally available, from local file systems.



    # mkdir -p /global/device-group/mountpoint/
    
    device-group

    Name of the directory that corresponds to the name of the device group that contains the device

    mountpoint

    Name of the directory on which to mount the cluster file system

  4. On each node in the cluster, add an entry to the /etc/vfstab file for the mount point.

    See the vfstab(4) man page for details.

    1. In each entry, specify the required mount options for the type of file system that you use.


      Note –

      Do not use the logging mount option for Solstice DiskSuite trans metadevices or Solaris Volume Manager transactional volumes. Trans metadevices and transactional volumes provide their own logging.

      In addition, Solaris Volume Manager transactional-volume logging (formerly Solstice DiskSuite trans-metadevice logging) is scheduled to be removed from the Solaris OS in an upcoming Solaris release. Solaris UFS logging provides the same capabilities but superior performance, as well as lower system administration requirements and overhead.


    2. To automatically mount the cluster file system, set the mount at boot field to yes.

    3. Ensure that, for each cluster file system, the information in its /etc/vfstab entry is identical on each node.

    4. Ensure that the entries in each node's /etc/vfstab file list devices in the same order.

    5. Check the boot order dependencies of the file systems.

      For example, consider the scenario where phys-schost-1 mounts disk device d0 on /global/oracle/, and phys-schost-2 mounts disk device d1 on /global/oracle/logs/. With this configuration, phys-schost-2 can boot and mount /global/oracle/logs/ only after phys-schost-1 boots and mounts /global/oracle/.
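      As an illustration of such a dependency, the corresponding /etc/vfstab entries might look like the following sketch. The device paths are hypothetical and follow the Solstice DiskSuite naming that is shown in Example 2–4.


      /dev/md/oracle/dsk/d0 /dev/md/oracle/rdsk/d0 /global/oracle      ufs 2 yes global,logging
      /dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle/logs ufs 2 yes global,logging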

  5. On any node in the cluster, run the sccheck(1M) utility.

    The sccheck utility verifies that the mount points exist. The utility also verifies that /etc/vfstab file entries are correct on all nodes of the cluster.


    # sccheck
    

    If no errors occur, nothing is returned.

  6. Mount the cluster file system.


    # mount /global/device-group/mountpoint/
    
    • For UFS and QFS, mount the cluster file system from any node in the cluster.

    • SPARC: For VxFS, mount the cluster file system from the current master of device-group to ensure that the file system mounts successfully. In addition, unmount a VxFS file system from the current master of device-group to ensure that the file system unmounts successfully.


      Note –

      To manage a VxFS cluster file system in a Sun Cluster environment, run administrative commands only from the primary node on which the VxFS cluster file system is mounted.


  7. On each node of the cluster, verify that the cluster file system is mounted.

    You can use either the df(1M) or mount(1M) command to list mounted file systems.


Example 2–4 Creating a Cluster File System

The following example creates a UFS cluster file system on the Solstice DiskSuite metadevice /dev/md/oracle/rdsk/d1.


# newfs /dev/md/oracle/rdsk/d1
…
 
(on each node)
# mkdir -p /global/oracle/d1
# vi /etc/vfstab
#device           device        mount   FS      fsck    mount   mount
#to mount         to fsck       point   type    pass    at boot options
#                     
/dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle/d1 ufs 2 yes global,logging
(save and exit)
 
(on one node)
# sccheck
# mount /global/oracle/d1
# mount
…
/global/oracle/d1 on /dev/md/oracle/dsk/d1 read/write/setuid/global/logging/largefiles
on Sun Oct 3 08:56:16 2000

Next Steps

If you installed Sun Cluster software on the Solaris 8 OS or you used SunPlex Installer to install the cluster, go to How to Configure Internet Protocol (IP) Network Multipathing Groups.

If you want to change any private hostnames, go to How to Change Private Hostnames.

If you did not install your own /etc/inet/ntp.conf file before you installed Sun Cluster software, install or create the NTP configuration file. Go to How to Configure Network Time Protocol (NTP).

SPARC: If you want to configure Sun Management Center to monitor the cluster, go to SPARC: Installing the Sun Cluster Module for Sun Management Center.

Otherwise, install third-party applications, register resource types, set up resource groups, and configure data services. Follow procedures in the Sun Cluster Data Services Planning and Administration Guide for Solaris OS and in the documentation that is supplied with your application software.

ProcedureHow to Configure Internet Protocol (IP) Network Multipathing Groups

Perform this task on each node of the cluster. If you used SunPlex Installer to install Sun Cluster HA for Apache or Sun Cluster HA for NFS, SunPlex Installer configured IP Network Multipathing groups for the public-network adapters those data services use. You must configure IP Network Multipathing groups for the remaining public-network adapters.


Note –

All public-network adapters must belong to an IP Network Multipathing group.


Before You Begin

Have available your completed Public Networks Worksheet.

Step

    Configure IP Network Multipathing groups.

    • Perform procedures in Deploying Network Multipathing in IP Network Multipathing Administration Guide (Solaris 8), Configuring Multipathing Interface Groups in System Administration Guide: IP Services (Solaris 9), or Configuring IPMP Groups in System Administration Guide: IP Services (Solaris 10).

    • Follow these additional requirements to configure IP Network Multipathing groups in a Sun Cluster configuration (a configuration sketch follows this list):

      • Each public network adapter must belong to a multipathing group.

      • In the following kinds of multipathing groups, you must configure a test IP address for each adapter in the group:

        • On the Solaris 8 OS, all multipathing groups require a test IP address for each adapter.

        • On the Solaris 9 or Solaris 10 OS, multipathing groups that contain two or more adapters require test IP addresses. If a multipathing group contains only one adapter, you do not need to configure a test IP address.

      • Test IP addresses for all adapters in the same multipathing group must belong to a single IP subnet.

      • Test IP addresses must not be used by normal applications because the test IP addresses are not highly available.

      • In the /etc/default/mpathd file, the value of TRACK_INTERFACES_ONLY_WITH_GROUPS must be yes.

      • The name of a multipathing group has no requirements or restrictions.
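    As a sketch only, the following shows what an /etc/hostname.adapter file that meets these requirements might contain for one adapter of a multipathing group on the Solaris 9 OS. The adapter name qfe0, the group name sc_ipmp0, and the test hostname test-phys-schost-1 are hypothetical; take the actual values from your Public Networks Worksheet and verify the syntax against the multipathing documentation cited above.


    # cat /etc/hostname.qfe0
    phys-schost-1 group sc_ipmp0 netmask + broadcast + up addif test-phys-schost-1 deprecated -failover netmask + broadcast + up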

Next Steps

If you want to change any private hostnames, go to How to Change Private Hostnames.

If you did not install your own /etc/inet/ntp.conf file before you installed Sun Cluster software, install or create the NTP configuration file. Go to How to Configure Network Time Protocol (NTP).

If you are using Sun Cluster on a SPARC based system and you want to use Sun Management Center to monitor the cluster, install the Sun Cluster module for Sun Management Center. Go to SPARC: Installing the Sun Cluster Module for Sun Management Center.

Otherwise, install third-party applications, register resource types, set up resource groups, and configure data services. Follow procedures in the Sun Cluster Data Services Planning and Administration Guide for Solaris OS and in the documentation that is supplied with your application software.

ProcedureHow to Change Private Hostnames

Perform this task if you do not want to use the default private hostnames, clusternodenodeid-priv (where nodeid is the node's ID number), that are assigned during Sun Cluster software installation.


Note –

Do not perform this procedure after applications and data services have been configured and have been started. Otherwise, an application or data service might continue to use the old private hostname after the hostname is renamed, which would cause hostname conflicts. If any applications or data services are running, stop them before you perform this procedure.


Perform this procedure on one active node of the cluster.

Steps
  1. Become superuser on a node in the cluster.

  2. Start the scsetup(1M) utility.


    # scsetup
    
  3. From the Main Menu, choose the menu item, Private hostnames.

  4. From the Private Hostname Menu, choose the menu item, Change a private hostname.

  5. Follow the prompts to change the private hostname.

    Repeat for each private hostname to change.

  6. Verify the new private hostnames.


    # scconf -pv | grep "private hostname"
    (phys-schost-1) Node private hostname:      phys-schost-1-priv
    (phys-schost-3) Node private hostname:      phys-schost-3-priv
    (phys-schost-2) Node private hostname:      phys-schost-2-priv
Next Steps

If you did not install your own /etc/inet/ntp.conf file before you installed Sun Cluster software, install or create the NTP configuration file. Go to How to Configure Network Time Protocol (NTP).

SPARC: If you want to configure Sun Management Center to monitor the cluster, go to SPARC: Installing the Sun Cluster Module for Sun Management Center.

Otherwise, install third-party applications, register resource types, set up resource groups, and configure data services. See the documentation that is supplied with the application software and the Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

ProcedureHow to Configure Network Time Protocol (NTP)


Note –

If you installed your own /etc/inet/ntp.conf file before you installed Sun Cluster software, you do not need to perform this procedure. Instead, determine your next step from the Next Steps section at the end of this procedure.


Perform this task to create or modify the NTP configuration file after you perform any task that directs you to this procedure, such as establishing the cluster, adding a node to an existing cluster, or changing a private hostname.

If you added a node to a single-node cluster, you must ensure that the NTP configuration file that you use is copied to the original cluster node as well as to the new node.

The primary requirement when you configure NTP, or any time synchronization facility within the cluster, is that all cluster nodes must be synchronized to the same time. Consider accuracy of time on individual nodes to be of secondary importance to the synchronization of time among nodes. You are free to configure NTP to best meet your individual needs, provided that this basic requirement for synchronization is met.

See the Sun Cluster Concepts Guide for Solaris OS for further information about cluster time. See the /etc/inet/ntp.cluster template file for additional guidelines on how to configure NTP for a Sun Cluster configuration.

Steps
  1. Become superuser on a cluster node.

  2. If you have your own /etc/inet/ntp.conf file, copy it to each node of the cluster.

  3. If you do not have your own /etc/inet/ntp.conf file to install, use the /etc/inet/ntp.conf.cluster file as your NTP configuration file.


    Note –

    Do not rename the ntp.conf.cluster file to ntp.conf.


    Sun Cluster software creates the /etc/inet/ntp.conf.cluster file as the NTP configuration file only if an /etc/inet/ntp.conf file is not already present on the node. If the /etc/inet/ntp.conf.cluster file does not exist on the node, the node might instead have an /etc/inet/ntp.conf file from an earlier installation of Sun Cluster software. In that case, perform the following edits on that ntp.conf file instead.

    1. Use your preferred text editor to open the /etc/inet/ntp.conf.cluster file on one node of the cluster for editing.

    2. Ensure that an entry exists for the private hostname of each cluster node.

      If you changed any node's private hostname, ensure that the NTP configuration file contains the new private hostname.

    3. If necessary, make other modifications to meet your NTP requirements.
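
      For example, in a three-node cluster that uses the default private hostnames, the peer entries in the NTP configuration file might resemble the following fragment. This fragment is only a sketch; the hostnames must match the actual private hostnames of your cluster nodes, and the remainder of the file should follow the guidelines in the /etc/inet/ntp.cluster template.


        peer clusternode1-priv prefer
        peer clusternode2-priv
        peer clusternode3-priv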

  4. Copy the NTP configuration file to all nodes in the cluster.

    The contents of the NTP configuration file must be identical on all cluster nodes.
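
    For example, from the node on which you edited the file, you might copy it to the other cluster nodes as follows. The node names phys-schost-2 and phys-schost-3 are placeholders for your own node names, and the rcp command assumes that remote-copy access is already configured between the cluster nodes. Use another copy method if it is not.


      # rcp /etc/inet/ntp.conf.cluster phys-schost-2:/etc/inet/
      # rcp /etc/inet/ntp.conf.cluster phys-schost-3:/etc/inet/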

  5. Stop the NTP daemon on each node.

    Wait for the command to complete successfully on each node before you proceed to Step 6.

    • For the Solaris 8 or Solaris 9 OS, use the following command:


      # /etc/init.d/xntpd stop
      
    • For the Solaris 10 OS, use the following command:


      # svcadm disable ntp
      
  6. Restart the NTP daemon on each node.

    • If you use the ntp.conf.cluster file, run the following command:


      # /etc/init.d/xntpd.cluster start
      

      The xntpd.cluster startup script first looks for the /etc/inet/ntp.conf file.

      • If the ntp.conf file exists, the script exits immediately without starting the NTP daemon.

      • If the ntp.conf file does not exist but the ntp.conf.cluster file does exist, the script starts the NTP daemon. In this case, the script uses the ntp.conf.cluster file as the NTP configuration file.

    • If you use the ntp.conf file, run one of the following commands:

      • For the Solaris 8 or Solaris 9 OS, use the following command:


        # /etc/init.d/xntpd start
        
      • For the Solaris 10 OS, use the following command:


        # svcadm enable ntp
        
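
    After the NTP daemon restarts on each node, you can optionally verify that the daemon is running and is reaching its peers. The ntpq command is a standard NTP query utility; this check is a suggestion only and is not part of the documented procedure.


      # ntpq -p
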
Next Steps

SPARC: To configure Sun Management Center to monitor the cluster, go to SPARC: Installing the Sun Cluster Module for Sun Management Center.

Otherwise, install third-party applications, register resource types, set up resource groups, and configure data services. See the documentation that is supplied with the application software and the Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

SPARC: Installing the Sun Cluster Module for Sun Management Center

This section provides information and procedures to install the software for the Sun Cluster module for Sun Management Center.

The Sun Cluster module for Sun Management Center enables you to use Sun Management Center to monitor the cluster. The following table lists the tasks to perform to install the Sun Cluster–module software for Sun Management Center.

Table 2–6 Task Map: Installing the Sun Cluster Module for Sun Management Center

Task 

Instructions 

1. Install Sun Management Center server, help-server, agent, and console packages. 

Sun Management Center documentation 

SPARC: Installation Requirements for Sun Cluster Monitoring

2. Install Sun Cluster–module packages. 

SPARC: How to Install the Sun Cluster Module for Sun Management Center

3. Start Sun Management Center server, console, and agent processes. 

SPARC: How to Start Sun Management Center

4. Add each cluster node as a Sun Management Center agent host object. 

SPARC: How to Add a Cluster Node as a Sun Management Center Agent Host Object

5. Load the Sun Cluster module to begin to monitor the cluster. 

SPARC: How to Load the Sun Cluster Module

SPARC: Installation Requirements for Sun Cluster Monitoring

The Sun Cluster module for Sun Management Center is used to monitor a Sun Cluster configuration. Perform the following tasks before you install the Sun Cluster module packages.

ProcedureSPARC: How to Install the Sun Cluster Module for Sun Management Center

Perform this procedure to install the Sun Cluster–module server and help–server packages.


Note –

The Sun Cluster–module agent packages, SUNWscsal and SUNWscsam, are already added to cluster nodes during Sun Cluster software installation.


Before You Begin

Ensure that all Sun Management Center core packages are installed on the appropriate machines. This task includes installing Sun Management Center agent packages on each cluster node. See your Sun Management Center documentation for installation instructions.
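
To optionally confirm that the Sun Cluster–module agent packages that are mentioned in the preceding note are already present on a cluster node, you can run a quick pkginfo check. This check is only a suggested verification, not part of the documented installation procedure.


# pkginfo SUNWscsal SUNWscsam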

Steps
  1. On the server machine, install the Sun Cluster–module server package SUNWscssv.

    1. Become superuser.

    2. Insert the Sun Cluster 2 of 2 CD-ROM for the SPARC platform in the CD-ROM drive.

      If the volume management daemon vold(1M) is running and is configured to manage CD-ROM devices, the daemon automatically mounts the CD-ROM on the /cdrom/cdrom0/ directory.

    3. Change to the Solaris_sparc/Product/sun_cluster/Solaris_ver/Packages/ directory, where ver is 8 for Solaris 8, 9 for Solaris 9, or 10 for Solaris 10.


      # cd /cdrom/cdrom0/Solaris_sparc/Product/sun_cluster/Solaris_ver/Packages/
      
    4. Install the Sun Cluster–module server package.


      # pkgadd -d . SUNWscssv
      
    5. Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.


      # eject cdrom
      
  2. On the Sun Management Center 3.0 help-server machine or the Sun Management Center 3.5 server machine, install the Sun Cluster–module help–server package SUNWscshl.

    Use the same procedure as in the previous step.
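
    For example, assuming that the CD-ROM is mounted on /cdrom/cdrom0/ as described in the previous step, the commands on that machine would resemble the following sketch.


      # cd /cdrom/cdrom0/Solaris_sparc/Product/sun_cluster/Solaris_ver/Packages/
      # pkgadd -d . SUNWscshl
      # cd /
      # eject cdrom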

  3. Install any Sun Cluster–module patches.

    See Patches and Required Firmware Levels in Sun Cluster 3.1 8/05 Release Notes for Solaris OS for the location of patches and installation instructions.

Next Steps

Start Sun Management Center. Go to SPARC: How to Start Sun Management Center.

ProcedureSPARC: How to Start Sun Management Center

Perform this procedure to start the Sun Management Center server, agent, and console processes.

Steps
  1. As superuser, on the Sun Management Center server machine, start the Sun Management Center server process.

    The install-dir is the directory in which you installed the Sun Management Center software. The default directory is /opt.


    # /install-dir/SUNWsymon/sbin/es-start -S
    
  2. As superuser, on each Sun Management Center agent machine (cluster node), start the Sun Management Center agent process.


    # /install-dir/SUNWsymon/sbin/es-start -a
    
  3. On each Sun Management Center agent machine (cluster node), ensure that the scsymon_srv daemon is running.


    # ps -ef | grep scsymon_srv
    

    If any cluster node is not already running the scsymon_srv daemon, start the daemon on that node.


    # /usr/cluster/lib/scsymon/scsymon_srv
    
  4. On the Sun Management Center console machine (administrative console), start the Sun Management Center console.

    You do not need to be superuser to start the console process.


    % /install-dir/SUNWsymon/sbin/es-start -c
    
Next Steps

Add a cluster node as a monitored host object. Go to SPARC: How to Add a Cluster Node as a Sun Management Center Agent Host Object.

ProcedureSPARC: How to Add a Cluster Node as a Sun Management Center Agent Host Object

Perform this procedure to create a Sun Management Center agent host object for a cluster node.

Steps
  1. Log in to Sun Management Center.

    See your Sun Management Center documentation.

  2. From the Sun Management Center main window, select a domain from the Sun Management Center Administrative Domains pull-down list.

    This domain contains the Sun Management Center agent host object that you create. During Sun Management Center software installation, a Default Domain was automatically created for you. You can use this domain, select another existing domain, or create a new domain.

    See your Sun Management Center documentation for information about how to create Sun Management Center domains.

  3. Choose Edit⇒Create an Object from the pull-down menu.

  4. Click the Node tab.

  5. From the Monitor Via pull-down list, select Sun Management Center Agent - Host.

  6. Fill in the name of the cluster node, for example, phys-schost-1, in the Node Label and Hostname text fields.

    Leave the IP text field blank. The Description text field is optional.

  7. In the Port text field, type the port number that you chose when you installed the Sun Management Center agent machine.

  8. Click OK.

    A Sun Management Center agent host object is created in the domain.

Next Steps

Load the Sun Cluster module. Go to SPARC: How to Load the Sun Cluster Module.

Troubleshooting

You need only one cluster node host object to use Sun Cluster–module monitoring and configuration functions for the entire cluster. However, if that cluster node becomes unavailable, connection to the cluster through that host object also becomes unavailable. Then you need another cluster-node host object to reconnect to the cluster.

ProcedureSPARC: How to Load the Sun Cluster Module

Perform this procedure to start cluster monitoring.

Steps
  1. In the Sun Management Center main window, right-click the icon of a cluster node.

    The pull-down menu is displayed.

  2. Choose Load Module.

    The Load Module window lists each available Sun Management Center module and whether the module is currently loaded.

  3. Choose Sun Cluster: Not Loaded and click OK.

    The Module Loader window shows the current parameter information for the selected module.

  4. Click OK.

    After a few moments, the module is loaded. A Sun Cluster icon is then displayed in the Details window.

  5. Verify that the Sun Cluster module is loaded.

    Under the Operating System category, expand the Sun Cluster subtree in either of the following ways:

    • In the tree hierarchy on the left side of the window, place the cursor over the Sun Cluster module icon and single-click the left mouse button.

    • In the topology view on the right side of the window, place the cursor over the Sun Cluster module icon and double-click the left mouse button.

See Also

See the Sun Cluster module online help for information about how to use Sun Cluster module features.


Note –

The Help button in the Sun Management Center browser accesses online help for Sun Management Center, not the topics specific to the Sun Cluster module.


See Sun Management Center online help and your Sun Management Center documentation for information about how to use Sun Management Center.

Next Steps

Install third-party applications, register resource types, set up resource groups, and configure data services. See the documentation that is supplied with the application software and the Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

Uninstalling the Software

This section provides the following procedures to uninstall or remove Sun Cluster software:

ProcedureHow to Uninstall Sun Cluster Software to Correct Installation Problems

Perform this procedure if the installed node cannot join the cluster or if you need to correct configuration information. For example, perform this procedure to reconfigure the transport adapters or the private-network address.


Note –

If the node has already joined the cluster and is no longer in installation mode, as described in Step 2 of How to Verify the Quorum Configuration and Installation Mode, do not perform this procedure. Instead, go to “How to Uninstall Sun Cluster Software From a Cluster Node” in Adding and Removing a Cluster Node in Sun Cluster System Administration Guide for Solaris OS.


Before You Begin

Attempt to reinstall the node. You can correct certain failed installations by repeating Sun Cluster software installation on the node.

Steps
  1. Add to the cluster's node-authentication list the node that you intend to uninstall.

    If you are uninstalling a single-node cluster, skip to Step 2.

    1. Become superuser on an active cluster member other than the node that you are uninstalling.

    2. Specify the name of the node to add to the authentication list.


      # /usr/cluster/bin/scconf -a -T node=nodename
      
      -a

      Add

      -T

      Specifies authentication options

      node=nodename

      Specifies the name of the node to add to the authentication list

      You can also use the scsetup(1M) utility to perform this task. See How to Add a Node to the Authorized Node List in Sun Cluster System Administration Guide for Solaris OS for procedures.
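
      For example, to add a node that is named phys-schost-2 (a placeholder for your actual node name) to the authentication list:


      # /usr/cluster/bin/scconf -a -T node=phys-schost-2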

  2. Become superuser on the node that you intend to uninstall.

  3. Shut down the node that you intend to uninstall.


    # shutdown -g0 -y -i0
    
  4. Reboot the node into noncluster mode.

    • On SPARC based systems, do the following:


      ok boot -x
      
    • On x86 based systems, do the following:


                          <<< Current Boot Parameters >>>
      Boot path: /pci@0,0/pci-ide@7,1/ata@1/cmdk@0,0:b
      Boot args:
      
      Type   b [file-name] [boot-flags] <ENTER>  to boot with options
      or     i <ENTER>                           to enter boot interpreter
      or     <ENTER>                             to boot with defaults
      
                       <<< timeout in 5 seconds >>>
      Select (b)oot or (i)nterpreter: b -x
      
  5. Change to a directory, such as the root (/) directory, that does not contain any files that are delivered by the Sun Cluster packages.


    # cd /
    
  6. Uninstall Sun Cluster software from the node.


    # /usr/cluster/bin/scinstall -r
    

    See the scinstall(1M) man page for more information.

  7. Reinstall and reconfigure Sun Cluster software on the node.

    Refer to Table 2–1 for the list of all installation tasks and the order in which to perform the tasks.

ProcedureHow to Uninstall the SUNWscrdt Package

Perform this procedure on each node in the cluster.

Before You Begin

Verify that no applications are using the RSMRDT driver before performing this procedure.

Steps
  1. Become superuser on the node from which you want to uninstall the SUNWscrdt package.

  2. Uninstall the SUNWscrdt package.


    # pkgrm SUNWscrdt
    

ProcedureHow to Unload the RSMRDT Driver Manually

If the driver remains loaded in memory after you complete How to Uninstall the SUNWscrdt Package, perform this procedure to unload the driver manually.

Steps
  1. Start the adb utility.


    # adb -kw
    
  2. Set the kernel variable clifrsmrdt_modunload_ok to 1.


    physmem NNNN 
    clifrsmrdt_modunload_ok/W 1
    
  3. Exit the adb utility by pressing Control-D.

  4. Find the clif_rsmrdt and rsmrdt module IDs.


    # modinfo | grep rdt
    
  5. Unload the clif_rsmrdt module.

    You must unload the clif_rsmrdt module before you unload the rsmrdt module.


    # modunload -i clif_rsmrdt_id
    
    clif_rsmrdt_id

    Specifies the numeric ID for the module being unloaded

  6. Unload the rsmrdt module.


    # modunload -i rsmrdt_id
    
    rsmrdt_id

    Specifies the numeric ID for the module being unloaded

  7. Verify that the module was successfully unloaded.


    # modinfo | grep rdt
    

Example 2–5 Unloading the RSMRDT Driver

The following example shows the console output after the RSMRDT driver is manually unloaded.


# adb -kw
physmem fc54
clifrsmrdt_modunload_ok/W 1
clifrsmrdt_modunload_ok: 0x0 = 0x1
^D
# modinfo | grep rsm
 88 f064a5cb 974 - 1 rsmops (RSMOPS module 1.1)
 93 f08e07d4 b95 - 1 clif_rsmrdt (CLUSTER-RSMRDT Interface module)
 94 f0d3d000 13db0 194 1 rsmrdt (Reliable Datagram Transport dri)
# modunload -i 93
# modunload -i 94
# modinfo | grep rsm
 88 f064a5cb 974 - 1 rsmops (RSMOPS module 1.1)
#

Troubleshooting

If the modunload command fails, applications are probably still using the driver. Terminate the applications before you run modunload again.