Sun Cluster Software Installation Guide for Solaris OS

Chapter 2 Installing Software on the Cluster

This chapter provides procedures for how to install software on cluster nodes and the administration console.

Installing the Software

This section provides information and procedures to install software on the cluster nodes.

The following task map lists the tasks that you perform to install software on multiple-node or single-node clusters. Complete the procedures in the order that is indicated.

Table 2–1 Task Map: Installing the Software

  1. Plan the layout of your cluster configuration and prepare to install software.
     Instructions: How to Prepare for Cluster Software Installation

  2. (Optional) Install Cluster Control Panel (CCP) software on the administrative console.
     Instructions: How to Install Cluster Control Panel Software on an Administrative Console

  3. Install the Solaris OS on all nodes.
     Instructions: How to Install Solaris Software

  4. (Optional) Configure internal disk mirroring.
     Instructions: How to Configure Internal Disk Mirroring

  5. (Optional) SPARC: Install and configure Sun multipathing software.
     Instructions: How to Install Sun Multipathing Software

  6. (Optional) SPARC: Install VERITAS File System software.
     Instructions: SPARC: How to Install VERITAS File System Software

  7. Install Sun Cluster software and any data services that you will use.
     Instructions: How to Install Sun Cluster Framework and Data-Service Software Packages

  8. Set up directory paths.
     Instructions: How to Set Up the Root Environment

Procedure: How to Prepare for Cluster Software Installation

Before you begin to install software, make the following preparations.

  1. Ensure that the combination of hardware and software that you choose for your cluster is currently a supported Sun Cluster configuration.

    Contact your Sun sales representative for the most current information about supported cluster configurations.

  2. Read the following manuals for information that can help you plan your cluster configuration and prepare your installation strategy.

  3. Have available all related documentation, including third-party documents.

    The following is a partial list of products whose documentation you might need to reference during cluster installation:

    • Solaris OS

    • Solaris Volume Manager software

    • Sun StorEdge QFS software

    • VERITAS Volume Manager

    • Third-party applications

  4. Plan your cluster configuration.


    Caution –

    Plan your cluster installation completely. Identify requirements for all data services and third-party products before you begin Solaris and Sun Cluster software installation. Failure to do so might result in installation errors that require that you completely reinstall the Solaris and Sun Cluster software.

    For example, the Oracle Real Application Clusters Guard option of Oracle RAC has special requirements for the hostnames that you use in the cluster. Another example with special requirements is Sun Cluster HA for SAP. You must accommodate these requirements before you install Sun Cluster software because you cannot change hostnames after you install Sun Cluster software.


  5. Obtain all necessary patches for your cluster configuration.

    See Patches and Required Firmware Levels in Sun Cluster 3.2 Release Notes for Solaris OS for the location of patches and installation instructions.

Next Steps

If you want to use Cluster Control Panel software to connect from an administrative console to your cluster nodes, go to How to Install Cluster Control Panel Software on an Administrative Console.

Otherwise, choose the Solaris installation procedure to use.

Procedure: How to Install Cluster Control Panel Software on an Administrative Console


Note –

You are not required to use an administrative console. If you do not use an administrative console, perform administrative tasks from one designated node in the cluster.


This procedure describes how to install the Cluster Control Panel (CCP) software on an administrative console. The CCP provides a single interface from which to start the cconsole, cssh, ctelnet, and crlogin tools. Each of these tools provides a multiple-window connection to a set of nodes, as well as a common window. You can use the common window to send input to all nodes at one time. For additional information, see the ccp(1M) man page.

You can use any desktop machine that runs a version of the Solaris OS that is supported by Sun Cluster 3.2 software as an administrative console. If you are using Sun Cluster software on a SPARC-based system, you can also use the administrative console as a Sun Management Center console or server. See the Sun Management Center documentation for information about how to install Sun Management Center software.

Before You Begin

Ensure that a supported version of the Solaris OS and any Solaris patches are installed on the administrative console. All platforms require at least the End User Solaris Software Group.

  1. Become superuser on the administrative console.

  2. Load the Sun Java Availability Suite DVD-ROM into the DVD-ROM drive.

    If the volume management daemon vold(1M) is running and is configured to manage CD-ROM or DVD devices, the daemon automatically mounts the media on the /cdrom/cdrom0/ directory.

  3. Change to the Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/ directory, where arch is sparc or x86 (Solaris 10 only), and where ver is 9 for Solaris 9 or 10 for Solaris 10.


    adminconsole# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/
    
  4. Install the SUNWccon package.


    adminconsole# pkgadd -d . SUNWccon
    
  5. (Optional) Install the SUNWscman package.


    adminconsole# pkgadd -d . SUNWscman
    

    When you install the SUNWscman package on the administrative console, you can view Sun Cluster man pages from the administrative console before you install Sun Cluster software on the cluster nodes.

  6. Unload the Sun Java Availability Suite DVD-ROM from the DVD-ROM drive.

    1. To ensure that the DVD-ROM is not being used, change to a directory that does not reside on the DVD-ROM.

    2. Eject the DVD-ROM.


      adminconsole# eject cdrom
      
  7. Create an /etc/clusters file on the administrative console.

    Add your cluster name and the physical node name of each cluster node to the file.


    adminconsole# vi /etc/clusters
    clustername node1 node2
    

    See the /opt/SUNWcluster/bin/clusters(4) man page for details.

  8. Create an /etc/serialports file.

    Add an entry for each node in the cluster to the file. Specify the physical node name, the hostname of the console-access device, and the port number. Examples of a console-access device are a terminal concentrator (TC), a System Service Processor (SSP), and a Sun Fire system controller.


    adminconsole# vi /etc/serialports
    node1 ca-dev-hostname port
    node2 ca-dev-hostname port
    
    node1, node2

    Physical names of the cluster nodes.

    ca-dev-hostname

    Hostname of the console-access device.

    port

    Serial port number, or the Secure Shell port number for Secure Shell connections.

    Note these special instructions to create an /etc/serialports file:

    • For a Sun Fire 15000 system controller, use telnet(1) port number 23 for the serial port number of each entry.

    • For all other console-access devices, to connect to the console through a telnet connection, use the telnet serial port number, not the physical port number. To determine the telnet serial port number, add 5000 to the physical port number. For example, if a physical port number is 6, the telnet serial port number is 5006.

    • For Sun Enterprise 10000 servers, also see the /opt/SUNWcluster/bin/serialports(4) man page for details and special considerations.

    • For Secure Shell connections to node consoles, specify for each node the name of the console-access device and the port number to use for secure connection. The default port number for Secure Shell is 22.

    • To connect the administrative console directly to the cluster nodes or through a management network, specify for each node its hostname and the port number that the node uses to connect to the administrative console or the management network.
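    Combining these rules, a hypothetical /etc/serialports file might contain entries like the following sketch, where tc-1 is a terminal concentrator (physical port 2, reached on telnet port 5002), sf15k-sc is a Sun Fire 15000 system controller, and ssp-1 is a console-access device that is reached through Secure Shell on port 22. All host names and port numbers in this sketch are examples only; substitute the values for your own console-access devices.

    adminconsole# vi /etc/serialports
    phys-node1 tc-1 5002
    phys-node2 sf15k-sc 23
    phys-node3 ssp-1 22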

  9. (Optional) For convenience, set the directory paths on the administrative console.

    1. Add the /opt/SUNWcluster/bin/ directory to the PATH.

    2. Add the /opt/SUNWcluster/man/ directory to the MANPATH.

    3. If you installed the SUNWscman package, also add the /usr/cluster/man/ directory to the MANPATH.
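    For example, if root on the administrative console uses the Bourne or Korn shell, the following sketch shows lines that you might add to the root .profile file. Adapt the syntax if you use the C shell or different initialization files.

    # Sketch only: directory paths for the CCP software
    PATH=$PATH:/opt/SUNWcluster/bin
    MANPATH=$MANPATH:/opt/SUNWcluster/man
    # Add /usr/cluster/man only if you installed the SUNWscman package
    MANPATH=$MANPATH:/usr/cluster/man
    export PATH MANPATH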

  10. Start the CCP utility.


    adminconsole# /opt/SUNWcluster/bin/ccp &
    

    Click the cconsole, cssh, crlogin, or ctelnet button in the CCP window to launch that tool. Alternately, you can start any of these tools directly. For example, to start ctelnet, type the following command:


    adminconsole# /opt/SUNWcluster/bin/ctelnet &
    

    The CCP software supports the following Secure Shell connections:

    • For secure connection to the node consoles, start the cconsole tool. Then from the Options menu of the Cluster Console window, enable the Use SSH check box.

    • For secure connection to the cluster nodes, use the cssh tool.

    See the procedure “How to Remotely Log In to Sun Cluster” in Beginning to Administer the Cluster in Sun Cluster System Administration Guide for Solaris OS for additional information about how to use the CCP utility. Also see the ccp(1M) man page.

Next Steps

Determine whether the Solaris OS is already installed to meet Sun Cluster software requirements. See Planning the Solaris OS for information about Sun Cluster installation requirements for the Solaris OS.

Procedure: How to Install Solaris Software

If you do not use the scinstall custom JumpStart installation method to install software, perform this procedure to install the Solaris OS on each node in the cluster. See How to Install Solaris and Sun Cluster Software (JumpStart) for more information about JumpStart installation of a cluster.


Tip –

To speed installation, you can install the Solaris OS on each node at the same time.


If your nodes are already installed with the Solaris OS but do not meet Sun Cluster installation requirements, you might need to reinstall the Solaris software. Follow the steps in this procedure to ensure subsequent successful installation of Sun Cluster software. See Planning the Solaris OS for information about required root-disk partitioning and other Sun Cluster installation requirements.

Before You Begin

Perform the following tasks:

  1. If you are using a cluster administrative console, display a console screen for each node in the cluster.

    • If Cluster Control Panel (CCP) software is installed and configured on your administrative console, use the cconsole(1M) utility to display the individual console screens.

      As superuser, use the following command to start the cconsole utility:


      adminconsole# /opt/SUNWcluster/bin/cconsole clustername &
      

      The cconsole utility also opens a master window from which you can send your input to all individual console windows at the same time.

    • If you do not use the cconsole utility, connect to the consoles of each node individually.

  2. Install the Solaris OS as instructed in your Solaris installation documentation.


    Note –

    You must install all nodes in a cluster with the same version of the Solaris OS.


    You can use any method that is normally used to install Solaris software. During Solaris software installation, perform the following steps:

    1. Install at least the End User Solaris Software Group.


      Tip –

      To avoid the need to manually install Solaris software packages, install the Entire Solaris Software Group Plus OEM Support.


      See Solaris Software Group Considerations for information about additional Solaris software requirements.

    2. Choose Manual Layout to set up the file systems.

      • Create a file system of at least 512 Mbytes for use by the global-device subsystem.


        Note –

        Sun Cluster software requires a global-devices file system for installation to succeed.


      • Specify that slice 7 is at least 20 Mbytes in size.

      • Create any other file-system partitions that you need, as described in System Disk Partitions.

    3. For ease of administration, set the same root password on each node.
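      For example, you can run the passwd(1) command as superuser on each node and supply the same password each time:

      phys-schost# passwd root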

  3. If you will use role-based access control (RBAC) instead of superuser to access the cluster nodes, set up an RBAC role that provides authorization for all Sun Cluster commands.

    This series of installation procedures requires the following Sun Cluster RBAC authorizations if the user is not superuser:

    • solaris.cluster.modify

    • solaris.cluster.admin

    • solaris.cluster.read

    See Role-Based Access Control (Overview) in System Administration Guide: Security Services for more information about using RBAC roles. See the Sun Cluster man pages for the RBAC authorization that each Sun Cluster subcommand requires.
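    As an illustration only, the following sketch shows one way to create a role that carries these authorizations and to assign it to a user. The role name clusteradm and the user name admin1 are hypothetical; your site might instead use rights profiles or manage roles through a naming service.

    phys-schost# roleadd -m -d /export/home/clusteradm \
        -A solaris.cluster.modify,solaris.cluster.admin,solaris.cluster.read clusteradm
    phys-schost# passwd clusteradm
    phys-schost# usermod -R clusteradm admin1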

  4. If you are adding a node to an existing cluster, add mount points for cluster file systems to the new node.

    1. From the active cluster node, display the names of all cluster file systems.


      phys-schost-1# mount | grep global | egrep -v node@ | awk '{print $1}'
      
    2. On the new node, create a mount point for each cluster file system in the cluster.


      phys-schost-new# mkdir -p mountpoint
      

      For example, if the mount command returned the file-system name /global/dg-schost-1, run mkdir -p /global/dg-schost-1 on the new node you are adding to the cluster.

  5. If you are adding a node and VxVM is installed on any node in the cluster, perform the following tasks.

    1. Ensure that the same vxio number is used on the VxVM-installed nodes.


      phys-schost# grep vxio /etc/name_to_major
      vxio NNN
      
    2. Ensure that the vxio number is available for use on each of the nodes that do not have VxVM installed.

    3. If the vxio number is already in use on a node that does not have VxVM installed, change the /etc/name_to_major entry to use a different number.
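    For example, the following sketch shows commands that you can run on a node that does not have VxVM installed, where NNN is the vxio major number that you found on the VxVM-installed node. If the first command returns no output, the vxio name is not yet assigned on this node. If the second command returns an entry, the number NNN is already in use by another driver and you must change that driver's entry to a different, unused number.

    phys-schost# grep vxio /etc/name_to_major
    phys-schost# awk '$2 == "NNN"' /etc/name_to_major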

  6. If you installed the End User Solaris Software Group and you want to use any of the following Sun Cluster features, install additional Solaris software packages to support these features.

    • Remote Shared Memory Application Programming Interface (RSMAPI)

    • RSMRDT drivers

    • SPARC: SCI-PCI adapters

    • SPARC: For the Solaris 9 OS, use the following command:


      phys-schost# pkgadd -d . SUNWrsm SUNWrsmc SUNWrsmo SUNWrsmox
      
    • For the Solaris 10 OS, use the following command:


      phys-schost# pkgadd -G -d . SUNWrsm SUNWrsmo
      

      You must add these packages only to the global zone. The -G option adds packages to the current zone only. This option also specifies that the packages are not propagated to any existing non-global zone or to any non-global zone that is created later.

  7. Install any required Solaris OS patches and hardware-related firmware and patches, including those for storage-array support. Also download any needed firmware that is contained in the hardware patches.

    See Patches and Required Firmware Levels in Sun Cluster 3.2 Release Notes for Solaris OS for the location of patches and installation instructions.
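    For example, patches are typically applied with the patchadd(1M) command, as in the following sketch, where patch-id is the directory that contains an unpacked patch. Follow the installation instructions that accompany each patch, because some patches have special requirements.

    phys-schost# patchadd /var/tmp/patch-id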

  8. x86: Set the default boot file.

    The setting of this value enables you to reboot the node if you are unable to access a login prompt.

    • On the Solaris 9 OS, set the default to kadb.


      phys-schost# eeprom boot-file=kadb
      
    • On the Solaris 10 OS, set the default to kmdb in the GRUB boot parameters menu.


      grub edit> kernel /platform/i86pc/multiboot kmdb
      
  9. Update the /etc/inet/hosts or /etc/inet/ipnodes file on each node with all public IP addresses that are used in the cluster.

    Perform this step regardless of whether you are using a naming service. The ipnodes file can contain both IPv4 and IPv6 addresses. See Public Network IP Addresses for a listing of Sun Cluster components whose IP addresses you must add.
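    For example, entries for a two-node cluster that also uses one logical hostname might look like the following sketch. The IP addresses and host names are hypothetical.

    192.168.100.101   phys-schost-1
    192.168.100.102   phys-schost-2
    192.168.100.110   schost-lh-1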


    Note –

    During establishment of a new cluster or new cluster node, the scinstall utility automatically adds the public IP address of each node that is being configured to the /etc/inet/hosts file. Adding these IP addresses to the /etc/inet/ipnodes file is optional.


  10. If you will use ce adapters for the cluster interconnect, add the following entry to the /etc/system file.


    set ce:ce_taskq_disable=1

    This entry becomes effective after the next system reboot.

  11. (Optional) On Sun Enterprise 10000 servers, configure the /etc/system file to use dynamic reconfiguration.

    Add the following entry to the /etc/system file on each node of the cluster:


    set kernel_cage_enable=1

    This entry becomes effective after the next system reboot. See your server documentation for more information about dynamic reconfiguration.

  12. (Optional) Configure public-network adapters in IPMP groups.

    If you do not want to use the multiple-adapter IPMP groups that the scinstall utility configures during cluster creation, configure custom IPMP groups as you would in a standalone system. See Part VI, IPMP, in System Administration Guide: IP Services for details.

    During cluster creation, the scinstall utility configures each set of public-network adapters that use the same subnet and are not already configured in an IPMP group into a single multiple-adapter IPMP group. The scinstall utility ignores any existing IPMP groups.
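    As an illustration, the following sketch shows a custom single-adapter IPMP group that is configured by placing the group option in the /etc/hostname.interface file for the adapter. The adapter name ce0 and the group name sc_ipmp0 are examples only.

    phys-schost# cat /etc/hostname.ce0
    phys-schost-1 netmask + broadcast + group sc_ipmp0 up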

Next Steps

If your server supports the mirroring of internal hard drives and you want to configure internal disk mirroring, go to How to Configure Internal Disk Mirroring.

Otherwise, to use Sun multipathing software, go to How to Install Sun Multipathing Software.

Otherwise, to install VxFS, go to SPARC: How to Install VERITAS File System Software.

Otherwise, install the Sun Cluster software packages. Go to How to Install Sun Cluster Framework and Data-Service Software Packages.

See Also

See the Sun Cluster System Administration Guide for Solaris OS for procedures to perform dynamic reconfiguration tasks in a Sun Cluster configuration.

Procedure: How to Configure Internal Disk Mirroring

Perform this procedure on each node of the cluster to configure internal hardware RAID disk mirroring to mirror the system disk. This procedure is optional.


Note –

Do not perform this procedure under either of the following circumstances:


Before You Begin

Ensure that the Solaris operating system and any necessary patches are installed.

  1. Become superuser.

  2. Configure an internal mirror.


    phys-schost# raidctl -c c0t0d0 c0t1d0
    
    -c c0t0d0 c0t1d0

    Creates a mirror of the primary disk on the mirror disk. Enter the name of your primary disk as the first argument and the name of the mirror disk as the second argument. The device names shown are examples only; use the names of your server's internal disks.

    For specifics about how to configure your server's internal disk mirroring, refer to the documents that shipped with your server and the raidctl(1M) man page.

Next Steps

To use Sun multipathing software, go to How to Install Sun Multipathing Software.

Otherwise, to install VxFS, go to SPARC: How to Install VERITAS File System Software.

Otherwise, install the Sun Cluster software packages. Go to How to Install Sun Cluster Framework and Data-Service Software Packages.

Procedure: How to Install Sun Multipathing Software

Perform this procedure on each node of the cluster to install and configure Sun multipathing software for fiber channel (FC) storage. Multipathing software manages multiple I/O paths to the shared cluster storage. This procedure is optional.

Before You Begin

Perform the following tasks:

  1. Become superuser.

  2. SPARC: For the Solaris 9 OS, install on each node Sun StorEdge Traffic Manager software and any necessary patches.

  3. Enable multipathing functionality.

    • For the Solaris 9 OS, change the value of the mpxio-disable parameter to no.

      Modify this entry in the /kernel/drv/scsi_vhci.conf file on each node.


      mpxio-disable="no";
    • For the Solaris 10 OS, issue the following command on each node:


      Caution –

      If Sun Cluster software is already installed, do not issue this command. Running the stmsboot command on an active cluster node might cause Solaris services to go into the maintenance state. Instead, follow instructions in the stmsboot(1M) man page for using the stmsboot command in a Sun Cluster environment.



      phys-schost# /usr/sbin/stmsboot -e
      
      -e

      Enables Solaris I/O multipathing.

      See the stmsboot(1M) man page for more information.

  4. SPARC: For the Solaris 9 OS, determine whether your version of Sun StorEdge SAN Foundation software includes built-in support for your storage array.

    If the software does not include built-in support for your storage array, edit the /kernel/drv/scsi_vhci.conf file on each node to include the necessary entries. For more information, see the release notes for your storage device.

  5. SPARC: For the Solaris 9 OS, shut down each node and perform a reconfiguration boot.

    The reconfiguration boot creates the new Solaris device files and links.


    phys-schost# shutdown -y -g0 -i0
    ok boot -r
    
  6. After the reconfiguration reboot is finished on all nodes, perform any additional tasks that are necessary to complete the configuration of your storage array.

    See installation instructions for your storage array in the Sun Cluster Hardware Administration Collection for details.

Troubleshooting

If you installed Sun multipathing software after Sun Cluster software was installed on the cluster, DID mappings might require updating. Issue the following commands on each node of the cluster to regenerate the DID namespace.

phys-schost# cldevice clear
phys-schost# cldevice refresh
(Solaris 9 only) phys-schost# cfgadm -c configure
phys-schost# cldevice populate

See the cfgadm(1M) and cldevice(1CL) man pages for more information.

Next Steps

To install VxFS, go to SPARC: How to Install VERITAS File System Software.

Otherwise, install the Sun Cluster software packages. Go to How to Install Sun Cluster Framework and Data-Service Software Packages.

Procedure: SPARC: How to Install VERITAS File System Software

To use VERITAS File System (VxFS) software in the cluster, perform this procedure on each node of the cluster.

  1. Follow the procedures in your VxFS installation documentation to install VxFS software on each node of the cluster.

  2. Install any Sun Cluster patches that are required to support VxFS.

    See Patches and Required Firmware Levels in Sun Cluster 3.2 Release Notes for Solaris OS for the location of patches and installation instructions.

  3. In the /etc/system file on each node, set the following values.


    set rpcmod:svc_default_stksize=0x8000
    set lwp_default_stksize=0x6000

    These changes become effective at the next system reboot.

    • Sun Cluster software requires a minimum rpcmod:svc_default_stksize setting of 0x8000. Because VxFS installation sets the value of the rpcmod:svc_default_stksize variable to 0x4000, you must manually set the value to 0x8000 after VxFS installation is complete.

    • You must set the lwp_default_stksize variable in the /etc/system file to override the VxFS default value of 0x4000.

Next Steps

Install the Sun Cluster software packages. Go to How to Install Sun Cluster Framework and Data-Service Software Packages.

Procedure: How to Install Sun Cluster Framework and Data-Service Software Packages

Follow this procedure to use the Sun Java™ Enterprise System (Java ES) installer program to perform one or more of the following installation tasks:


Note –

This procedure uses the interactive form of the installer program. To use the noninteractive form of the installer program, such as when developing installation scripts, see Chapter 5, Installing in Silent Mode, in Sun Java Enterprise System 5 Installation Guide for UNIX.


Before You Begin

Perform the following tasks:

  1. (Optional) To use the installer program with a GUI, ensure that the display environment of the cluster node to install is set to display the GUI.


    % xhost +
    % setenv DISPLAY nodename:0.0
    

    If you do not make these settings, the installer program runs in text-based mode.

  2. Become superuser on the cluster node to install.

  3. Load the Sun Java Availability Suite DVD-ROM into the DVD-ROM drive.

    If the volume management daemon vold(1M) is running and is configured to manage CD-ROM or DVD devices, the daemon automatically mounts the media on the /cdrom/cdrom0/ directory.

  4. Change to the installation wizard directory of the DVD-ROM.

    • If you are installing the software packages on the SPARC platform, type the following command:


      phys-schost# cd /cdrom/cdrom0/Solaris_sparc
      
    • If you are installing the software packages on the x86 platform, type the following command:


      phys-schost# cd /cdrom/cdrom0/Solaris_x86
      
  5. Start the installation wizard program.


    phys-schost# ./installer
    

    See the Sun Java Enterprise System 5 Installation Guide for UNIX for additional information about using the different forms and features of the Java ES installer program.

  6. Follow instructions on the screen to install Sun Cluster framework software and data services on the node.

    • If you do not want to install Sun Cluster Manager, formerly SunPlex Manager, deselect it.


      Note –

      You must install Sun Cluster Manager either on all nodes of the cluster or on none.


    • If you want to install Sun Cluster Geographic Edition software, select it.

      After the cluster is established, see Sun Cluster Geographic Edition Installation Guide for further installation procedures.

    • Choose Configure Later when prompted whether to configure Sun Cluster framework software.

    After installation is finished, you can view any available installation log.
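    For example, the Java ES installer program typically writes its log files under the /var/sadm/install/logs directory; this location is an assumption, and the installer's summary screen reports the exact location on your system.

    phys-schost# ls /var/sadm/install/logs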

  7. Install additional packages to use any of the following features.

    • Remote Shared Memory Application Programming Interface (RSMAPI)

    • SCI-PCI adapters for the interconnect transport

    • RSMRDT drivers


    Note –

    Use of the RSMRDT driver is restricted to clusters that run an Oracle9i Release 2 SCI configuration with RSM enabled. Refer to Oracle9i Release 2 user documentation for detailed installation and configuration instructions.


    1. Determine which packages you must install.

      The following table lists the Sun Cluster 3.2 packages that each feature requires, in the order in which you must install each group of packages. The Java ES installer program does not automatically install these packages.


      Note –

      Install packages in the order in which they are listed in the following table.


      Feature: RSMAPI
      Additional Sun Cluster 3.2 packages to install: SUNWscrif

      Feature: SCI-PCI adapters
      Additional Sun Cluster 3.2 packages to install:
      • Solaris 9: SUNWsci SUNWscid SUNWscidx
      • Solaris 10: SUNWscir SUNWsci SUNWscidr SUNWscid

      Feature: RSMRDT drivers
      Additional Sun Cluster 3.2 packages to install: SUNWscrdt

    2. Change to the Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/ directory, where arch is sparc or x86 (Solaris 10 only), and where ver is 9 for Solaris 9 or 10 for Solaris 10.


      phys-schost# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/
      
    3. Install the additional packages.

      • SPARC: For the Solaris 9 OS, use the following command:


        phys-schost# pkgadd -d . packages
        
      • For the Solaris 10 OS, use the following command:


        phys-schost# pkgadd -G -d . packages
        
  8. Unload the Sun Java Availability Suite DVD-ROM from the DVD-ROM drive.

    1. To ensure that the DVD-ROM is not being used, change to a directory that does not reside on the DVD-ROM.

    2. Eject the DVD-ROM.


      phys-schost# eject cdrom
      
  9. Apply any necessary patches to support Sun Cluster software.

    See Patches and Required Firmware Levels in Sun Cluster 3.2 Release Notes for Solaris OS for the location of patches and installation instructions.

Next Steps

If you want to install Sun StorEdge QFS file system software, follow the procedures for initial installation in the Sun StorEdge QFS Installation and Upgrade Guide.

Otherwise, to set up the root user environment, go to How to Set Up the Root Environment.

Procedure: How to Set Up the Root Environment


Note –

In a Sun Cluster configuration, user initialization files for the various shells must verify that they are run from an interactive shell. The files must verify this before they attempt to output to the terminal. Otherwise, unexpected behavior or interference with data services might occur. See Customizing a User's Work Environment in System Administration Guide: Basic Administration (Solaris 9 or Solaris 10) for more information.


Perform this procedure on each node in the cluster.

  1. Become superuser on a cluster node.

  2. Modify PATH and MANPATH entries in the .cshrc or .profile file.

    1. Add /usr/sbin/ and /usr/cluster/bin/ to the PATH.

    2. Add /usr/cluster/man/ to the MANPATH.

    See your Solaris OS documentation, volume manager documentation, and other application documentation for additional file paths to set.
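    For example, if root uses the Bourne or Korn shell, the additions to the .profile file might look like the following sketch. If root uses the C shell, make the equivalent path and MANPATH changes in the .cshrc file.

    PATH=$PATH:/usr/sbin:/usr/cluster/bin
    MANPATH=$MANPATH:/usr/cluster/man
    export PATH MANPATH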

  3. (Optional) For ease of administration, set the same root password on each node, if you have not already done so.

Next Steps

Configure Sun Cluster software on the cluster nodes. Go to Establishing a New Cluster or New Cluster Node.