Sun Cluster Software Installation Guide for Solaris OS

Installing the Software

This section provides information and procedures to install software on the cluster nodes.

The following task map lists the tasks that you perform to install software on multiple-host or single-host global clusters. Complete the procedures in the order that is indicated.

Table 2–1 Task Map: Installing the Software

  • Plan the layout of your cluster configuration and prepare to install software. See How to Prepare for Cluster Software Installation.

  • (Optional) Install and configure a quorum server. See How to Install and Configure Quorum Server Software.

  • (Optional) Install Cluster Control Panel (CCP) software on the administrative console. See How to Install Cluster Control Panel Software on an Administrative Console.

  • Install the Solaris OS on all nodes. See How to Install Solaris Software.

  • (Optional) Configure internal disk mirroring. See How to Configure Internal Disk Mirroring.

  • (Optional) Install Sun Logical Domains (LDoms) software and create domains. See SPARC: How to Install Sun Logical Domains Software and Create Domains.

  • (Optional) SPARC: Install and configure Solaris I/O multipathing software. See How to Install Solaris I/O Multipathing Software.

  • (Optional) SPARC: Install Veritas File System software. See How to Install Veritas File System Software.

  • Install Sun Cluster software and any data services that you will use. See How to Install Sun Cluster Framework and Data-Service Software Packages.

  • (Optional) Install Sun QFS software. See How to Install Sun QFS Software.

  • Set up directory paths. See How to Set Up the Root Environment.

  • (Optional) Configure Solaris IP Filter. See How to Configure Solaris IP Filter.

How to Prepare for Cluster Software Installation

Before you begin to install software, make the following preparations.

  1. Ensure that the combination of hardware and software that you choose for your cluster is currently a supported Sun Cluster configuration.

    Contact your Sun sales representative for the most current information about supported cluster configurations.

  2. Read the following manuals for information that can help you plan your cluster configuration and prepare your installation strategy.

  3. Have available all related documentation, including third-party documents.

    The following is a partial list of products whose documentation you might need to reference during cluster installation:

    • Solaris OS

    • Solaris Volume Manager software

    • Sun QFS software

    • Veritas Volume Manager

    • Third-party applications

  4. Plan your cluster configuration.


    Caution –

    Plan your cluster installation completely. Identify requirements for all data services and third-party products before you begin Solaris and Sun Cluster software installation. Failure to do so might result in installation errors that require that you completely reinstall the Solaris and Sun Cluster software.

    For example, the Oracle Real Application Clusters Guard option of Oracle Real Application Clusters has special requirements for the hostnames that you use in the cluster. Another example with special requirements is Sun Cluster HA for SAP. You must accommodate these requirements before you install Sun Cluster software because you cannot change hostnames after you install Sun Cluster software.


  5. Obtain all necessary patches for your cluster configuration.

    See Patches and Required Firmware Levels in Sun Cluster Release Notes for the location of patches and installation instructions.

Next Steps

If you want to use Cluster Control Panel software to connect from an administrative console to your cluster nodes, go to How to Install Cluster Control Panel Software on an Administrative Console.

Otherwise, choose the Solaris installation procedure to use.

How to Install and Configure Quorum Server Software

Perform this procedure to configure a host server as a quorum server.

Before You Begin

Perform the following tasks:

  1. Become superuser on the machine on which you want to install the Quorum Server software.

  2. (Optional) To use the installer program with a GUI, ensure that the display environment of the host server to install is set to display the GUI.


    # xhost +
    # setenv DISPLAY nodename:0.0
    
  3. Load the installation media into the drive.

    If the volume management daemon (vold(1M)) is running and is configured to manage CD-ROM or DVD devices, the daemon automatically mounts the media on the /cdrom/cdrom0/ directory.

  4. Change to the installation wizard directory of the media.

    • If you are installing the software packages on the SPARC platform, type the following command:


      phys-schost# cd /cdrom/cdrom0/Solaris_sparc
      
    • If you are installing the software packages on the x86 platform, type the following command:


      phys-schost# cd /cdrom/cdrom0/Solaris_x86
      
  5. Start the installation wizard.


    phys-schost# ./installer
    
  6. Follow instructions on the screen to install Quorum Server software on the host server.

    Choose the Configure Later option.


    Note –

    If the installer does not allow you to choose the Configure Later option, choose Configure Now.


    After installation is finished, you can view any available installation log. See the Sun Java Enterprise System 2006Q4 Installation Guide for UNIX for additional information about using the Java Enterprise System installer program.

  7. Apply any required Quorum Server patches.

  8. Unload the installation media from the drive.

    1. To ensure that the installation media is not being used, change to a directory that does not reside on the media.

    2. Eject the media.


      phys-schost# eject cdrom
      
  9. Apply any necessary patches to support the Quorum Server software.

    See Patches and Required Firmware Levels in Sun Cluster Release Notes for the location of patches and installation instructions.

  10. (Optional) Add the Quorum Server binary location to your PATH environment variable.


    quorumserver# PATH=$PATH:/usr/cluster/bin
    
  11. (Optional) Add the Quorum Server man-page location to your MANPATH environment variable.


    quorumserver# MANPATH=$MANPATH:/usr/cluster/man
    
  12. Configure the quorum server.

    Add the following entry to the /etc/scqsd/scqsd.conf file to specify configuration information about the quorum server.

    Identify the quorum server by an instance name, a port number, or both. You must provide the port number; the instance name is optional. (A hypothetical example of these entries appears after the last step of this procedure.)

    • If you provide an instance name, that name must be unique among your quorum servers.

    • If you do not provide an instance name, always refer to this quorum server by the port on which it listens.


    /usr/cluster/lib/sc/scqsd [-d quorumdirectory] [-i instancename] -p port
    
    -d quorumdirectory

    The path to the directory where the quorum server can store quorum data.

    The quorum-server process creates one file per cluster in this directory to store cluster-specific quorum information.

    By default, the value of this option is /var/scqsd. This directory must be unique for each quorum server that you configure.

    -i instancename

    A unique name that you choose for the quorum-server instance.

    -p port

    The port number on which the quorum server listens for requests from the cluster.

  13. (Optional) To serve more than one cluster, configure an additional entry, with a different port number or instance name, for each additional quorum-server instance that you need.

  14. Save and close the /etc/scqsd/scqsd.conf file.

  15. Start the newly configured quorum server.


    quorumserver# /usr/cluster/bin/clquorumserver start quorumserver
    
    quorumserver

    Identifies the quorum server. You can use the port number on which the quorum server listens. If you provided an instance name in the configuration file, you can use that name instead.

    • To start a single quorum server, provide either the instance name or the port number.

    • To start all quorum servers when you have multiple quorum servers configured, use the + operand.
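
The following entries are a hypothetical example of the scqsd.conf syntax that is described in Step 12; the directory paths, instance names, and port numbers are invented and show two quorum-server instances that serve two different clusters on different ports.


    /usr/cluster/lib/sc/scqsd -d /var/scqsd/qs1 -i qs1 -p 9000
    /usr/cluster/lib/sc/scqsd -d /var/scqsd/qs2 -i qs2 -p 9001

You could then start one instance by name, or all configured instances at once:


    quorumserver# /usr/cluster/bin/clquorumserver start qs1
    quorumserver# /usr/cluster/bin/clquorumserver start +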

Troubleshooting

The installer performs a simple pkgadd installation of the Sun Cluster Quorum Server packages and sets up the necessary directories. The installation of these packages adds software to the /usr/cluster and /etc/scqsd directories. You cannot modify the location of the Sun Cluster Quorum Server software.

If you receive an installation error message regarding the Sun Cluster Quorum Server software, verify that the packages were properly installed.
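
For example, you can list the installed Quorum Server packages with the pkginfo command. This check is only a sketch; it assumes that the Quorum Server package names share the SUNWscqs prefix, as the SUNWscqsman man-page package does.


    quorumserver# pkginfo | grep SUNWscqs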

Next Steps

If you want to use an administrative console to communicate with the cluster nodes, go to How to Install Cluster Control Panel Software on an Administrative Console.

Otherwise, go to How to Install Solaris Software.

How to Install Cluster Control Panel Software on an Administrative Console


Note –

You are not required to use an administrative console. If you do not use an administrative console, perform administrative tasks from one designated node in the cluster.

You cannot use this software to connect to Sun Logical Domains (LDoms) guest domains.


This procedure describes how to install the Cluster Control Panel (CCP) software on an administrative console. The CCP provides a single interface from which to start the cconsole, cssh, ctelnet, and crlogin tools. Each of these tools provides a multiple-window connection to a set of nodes, as well as a common window. You can use the common window to send input to all nodes at one time. For additional information, see the ccp(1M) man page.

You can use any desktop machine that runs a version of the Solaris OS that is supported by Sun Cluster 3.2 11/09 software as an administrative console. If you are using Sun Cluster software on a SPARC based system, you can also use the administrative console as a Sun Management Center console or server. See Sun Management Center documentation for information about how to install Sun Management Center software.

Before You Begin

Ensure that a supported version of the Solaris OS and any Solaris patches are installed on the administrative console. All platforms require at least the End User Solaris Software Group.

  1. Become superuser on the administrative console.

  2. Load the Sun Java Availability Suite DVD-ROM into the DVD-ROM drive.

    If the volume management daemon vold(1M) is running and is configured to manage CD-ROM or DVD devices, the daemon automatically mounts the media on the /cdrom/cdrom0/ directory.

  3. Change to the Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/ directory, where arch is sparc or x86, and where ver is 10 for Solaris 10.


    adminconsole# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/
    
  4. Install the SUNWccon package.


    adminconsole# pkgadd -d . SUNWccon
    
  5. (Optional) Install Sun Cluster man-page packages.


    adminconsole# pkgadd -d . pkgname
    

    Package Name    Description

    SUNWscman       Sun Cluster framework man pages

    SUNWscdsman     Sun Cluster data-service man pages

    SUNWscqsman     Sun Cluster Quorum Server man pages

    When you install the Sun Cluster man-page packages on the administrative console, you can view them from the administrative console before you install Sun Cluster software on the cluster nodes or quorum server.

  6. Unload the Sun Java Availability Suite DVD-ROM from the DVD-ROM drive.

    1. To ensure that the DVD-ROM is not being used, change to a directory that does not reside on the DVD-ROM.

    2. Eject the DVD-ROM.


      adminconsole# eject cdrom
      
  7. Create an /etc/clusters file on the administrative console.

    Add your cluster name and the physical node name of each cluster node to the file.


    adminconsole# vi /etc/clusters
    clustername node1 node2
    

    See the /opt/SUNWcluster/bin/clusters(4) man page for details.

  8. Create an /etc/serialports file.

    Add an entry for each node in the cluster to the file. Specify the physical node name, the hostname of the console-access device, and the port number. Examples of a console-access device are a terminal concentrator (TC), a System Service Processor (SSP), and a Sun Fire system controller.


    adminconsole# vi /etc/serialports
    node1 ca-dev-hostname port
    node2 ca-dev-hostname port
    
    node1, node2

    Physical names of the cluster nodes.

    ca-dev-hostname

    Hostname of the console-access device.

    port

    Serial port number, or the Secure Shell port number for Secure Shell connections.

    Note these special instructions to create an /etc/serialports file:

    • For a Sun Fire 15000 system controller, use telnet(1) port number 23 for the serial port number of each entry.

    • For all other console-access devices, to connect to the console through a telnet connection, use the telnet serial port number, not the physical port number. To determine the telnet serial port number, add 5000 to the physical port number. For example, if a physical port number is 6, the telnet serial port number is 5006.

    • For Sun Enterprise 10000 servers, also see the /opt/SUNWcluster/bin/serialports(4) man page for details and special considerations.

    • For Secure Shell connections to node consoles, specify for each node the name of the console-access device and the port number to use for secure connection. The default port number for Secure Shell is 22.

    • To connect the administrative console directly to the cluster nodes or through a management network, specify for each node its hostname and the port number that the node uses to connect to the administrative console or the management network.
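
    For example, an /etc/serialports file for a two-node cluster whose consoles are reached through a terminal concentrator on physical ports 2 and 3 might contain the following entries. The hostnames are hypothetical, and the port numbers are the physical port numbers plus 5000, as described above.


    phys-schost-1 tc-schost 5002
    phys-schost-2 tc-schost 5003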

  9. (Optional) For convenience, set the directory paths on the administrative console.

    1. Add the /opt/SUNWcluster/bin/ directory to the PATH.

    2. Add the /opt/SUNWcluster/man/ directory to the MANPATH.

    3. If you installed the SUNWscman package, also add the /usr/cluster/man/ directory to the MANPATH.
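
    For example, following the pattern used for the quorum server earlier in this chapter, you might set the paths as follows. This sketch assumes a Bourne or Korn shell and that the SUNWscman package is installed.


    adminconsole# PATH=$PATH:/opt/SUNWcluster/bin
    adminconsole# MANPATH=$MANPATH:/opt/SUNWcluster/man:/usr/cluster/man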

  10. Start the CCP utility.


    adminconsole# /opt/SUNWcluster/bin/ccp &
    

    Click the cconsole, cssh, crlogin, or ctelnet button in the CCP window to launch that tool. Alternately, you can start any of these tools directly. For example, to start ctelnet, type the following command:


    adminconsole# /opt/SUNWcluster/bin/ctelnet &
    

    The CCP software supports the following Secure Shell connections:

    • For secure connection to the node consoles, start the cconsole tool. Then from the Options menu of the Cluster Console window, enable the Use SSH check box.

    • For secure connection to the cluster nodes, use the cssh tool.

    See the procedure “How to Remotely Log In to Sun Cluster” in Beginning to Administer the Cluster in Sun Cluster System Administration Guide for Solaris OS for additional information about how to use the CCP utility. Also see the ccp(1M) man page.

Next Steps

Determine whether the Solaris OS is already installed to meet Sun Cluster software requirements. See Planning the Solaris OS for information about Sun Cluster installation requirements for the Solaris OS.

How to Install Solaris Software

If you do not use the scinstall custom JumpStart installation method to install software, perform this procedure to install the Solaris OS on each node in the global cluster. See How to Install Solaris and Sun Cluster Software (JumpStart) for more information about JumpStart installation of a cluster.


Tip –

To speed installation, you can install the Solaris OS on each node at the same time.


If your nodes are already installed with the Solaris OS but do not meet Sun Cluster installation requirements, you might need to reinstall the Solaris software. Follow the steps in this procedure to ensure subsequent successful installation of Sun Cluster software. See Planning the Solaris OS for information about required root-disk partitioning and other Sun Cluster installation requirements.

Before You Begin

Perform the following tasks:

  1. If you are using a cluster administrative console, display a console screen for each node in the cluster.

    • If Cluster Control Panel (CCP) software is installed and configured on your administrative console, use the cconsole(1M) utility to display the individual console screens.

      As superuser, use the following command to start the cconsole utility:


      adminconsole# /opt/SUNWcluster/bin/cconsole clustername &
      

      The cconsole utility also opens a master window from which you can send your input to all individual console windows at the same time.

    • If you do not use the cconsole utility, connect to the consoles of each node individually.

  2. Install the Solaris OS as instructed in your Solaris installation documentation.


    Note –

    You must install all nodes in a cluster with the same version of the Solaris OS.


    You can use any method that is normally used to install Solaris software. During Solaris software installation, perform the following steps:

    1. Install at least the End User Solaris Software Group.


      Tip –

      To avoid the need to manually install Solaris software packages, install the Entire Solaris Software Group Plus OEM Support.


      See Solaris Software Group Considerations for information about additional Solaris software requirements.

    2. Choose Manual Layout to set up the file systems.

      • Specify that slice 7 is at least 20 Mbytes in size.

      • (Optional) Create a file system of at least 512 Mbytes for use by the global-device subsystem.


        Note –

        Alternatively, do not create this dedicated file system and instead use a lofi device. You specify the use of a lofi device to the scinstall command when you establish the cluster.


      • Create any other file-system partitions that you need, as described in System Disk Partitions.

    3. For ease of administration, set the same root password on each node.

  3. If you will use role-based access control (RBAC) instead of superuser to access the cluster nodes, set up an RBAC role that provides authorization for all Sun Cluster commands.

    This series of installation procedures requires the following Sun Cluster RBAC authorizations if the user is not superuser:

    • solaris.cluster.modify

    • solaris.cluster.admin

    • solaris.cluster.read

    See Role-Based Access Control (Overview) in System Administration Guide: Security Services for more information about using RBAC roles. See the Sun Cluster man pages for the RBAC authorization that each Sun Cluster subcommand requires.

  4. If you are adding a node to an existing cluster, add mount points for cluster file systems to the new node.

    1. From the active cluster node, display the names of all cluster file systems.


      phys-schost-1# mount | grep global | egrep -v node@ | awk '{print $1}'
      
    2. On the new node, create a mount point for each cluster file system in the cluster.


      phys-schost-new# mkdir -p mountpoint
      

      For example, if the mount command returned the file-system name /global/dg-schost-1, run mkdir -p /global/dg-schost-1 on the new node you are adding to the cluster.

  5. If you are adding a node and VxVM is installed on any node in the cluster, perform the following tasks.

    1. Ensure that the same vxio number is used on the VxVM-installed nodes.


      phys-schost# grep vxio /etc/name_to_major
      vxio NNN
      
    2. Ensure that the vxio number is available for use on each of the nodes that do not have VxVM installed.

    3. If the vxio number is already in use on a node that does not have VxVM installed, change the /etc/name_to_major entry to use a different number.

  6. If you installed the End User Solaris Software Group and you want to use any of the following Sun Cluster features, install additional Solaris software packages to support these features.

    Feature                                       Mandatory Solaris Software Packages

    RSMAPI, RSMRDT drivers, or SCI-PCI adapters   Solaris 9 (SPARC): SUNWrsm SUNWrsmx SUNWrsmo SUNWrsmox
    (SPARC based clusters only)                   Solaris 10: SUNWrsm SUNWrsmo

    scsnapshot                                    SUNWp15u SUNWp15v SUNWp15p

    Sun Cluster Manager                           SUNWapchr SUNWapchu

    • SPARC: For the Solaris 9 OS, use the following command:


      phys-schost# pkgadd -d . package
      
    • For the Solaris 10 OS, use the following command:


      phys-schost# pkgadd -G -d . package
      

      You must add these packages only to the global zone. The -G option adds packages to the current zone only. This option also specifies that the packages are not propagated to any existing non-global zone or to any non-global zone that is created later.

  7. Install any required Solaris OS patches and hardware-related firmware and patches.

    Include those patches for storage-array support. Also download any needed firmware that is contained in the hardware patches.

    See Patches and Required Firmware Levels in Sun Cluster Release Notes for the location of patches and installation instructions.

  8. x86: Set the default boot file.

    The setting of this value enables you to reboot the node if you are unable to access a login prompt.

    • On the Solaris 9 OS, set the default to kadb.


      phys-schost# eeprom boot-file=kadb
      
    • On the Solaris 10 OS, set the default to kmdb in the GRUB boot parameters menu.


      grub edit> kernel /platform/i86pc/multiboot kmdb
      
  9. Update the /etc/inet/hosts file on each node with all public IP addresses that are used in the cluster.

    Perform this step regardless of whether you are using a naming service.


    Note –

    During establishment of a new cluster or new cluster node, the scinstall utility automatically adds the public IP address of each node that is being configured to the /etc/inet/hosts file.
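
    For example, the /etc/inet/hosts entries for a two-node cluster might look like the following. The IP addresses and hostnames are placeholders; substitute the public addresses that are used in your cluster.


    192.168.10.11   phys-schost-1
    192.168.10.12   phys-schost-2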


  10. (Optional) On Sun Enterprise 10000 servers, configure the /etc/system file to use dynamic reconfiguration.

    Add the following entry to the /etc/system file on each node of the cluster:


    set kernel_cage_enable=1

    This entry becomes effective after the next system reboot. See your server documentation for more information about dynamic reconfiguration.

  11. (Optional) Configure public-network adapters in IPMP groups.

    If you do not want to use the multiple-adapter IPMP groups that the scinstall utility configures during cluster creation, configure custom IPMP groups as you would in a stand-alone system. See Chapter 31, Administering IPMP (Tasks), in System Administration Guide: IP Services for details.

    During cluster creation, the scinstall utility configures each set of public-network adapters that use the same subnet and are not already configured in an IPMP group into a single multiple-adapter IPMP group. The scinstall utility ignores any existing IPMP groups.
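
    As an illustration only, a custom link-based IPMP group on the Solaris 10 OS could be defined by entries such as the following. The adapter names, hostname, and group name are hypothetical; see the IPMP chapters in System Administration Guide: IP Services for the supported configurations.


    phys-schost# vi /etc/hostname.qfe0
    phys-schost-1 netmask + broadcast + group sc_ipmp0 up

    phys-schost# vi /etc/hostname.qfe1
    group sc_ipmp0 up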

Next Steps

If your server supports the mirroring of internal hard drives and you want to configure internal disk mirroring, go to How to Configure Internal Disk Mirroring.

Otherwise, to use Solaris I/O multipathing software, go to How to Install Solaris I/O Multipathing Software.

Otherwise, to install VxFS, go to How to Install Veritas File System Software.

Otherwise, install the Sun Cluster software packages. Go to How to Install Sun Cluster Framework and Data-Service Software Packages.

See Also

See the Sun Cluster System Administration Guide for Solaris OS for procedures to perform dynamic reconfiguration tasks in a Sun Cluster configuration.

How to Configure Internal Disk Mirroring

Perform this procedure on each node of the global cluster to configure internal hardware RAID disk mirroring to mirror the system disk. This procedure is optional.


Note –

Do not perform this procedure under either of the following circumstances:


Before You Begin

Ensure that the Solaris operating system and any necessary patches are installed.

  1. Become superuser.

  2. Configure an internal mirror.


    phys-schost# raidctl -c c1t0d0 c1t1d0 
    
    -c c1t0d0 c1t1d0

    Creates a mirror of the primary disk on the mirror disk. Enter the name of your primary disk as the first argument and the name of the mirror disk as the second argument.

    For specifics about how to configure your server's internal disk mirroring, refer to the documents that shipped with your server and the raidctl(1M) man page.

Next Steps

SPARC: To create Sun Logical Domains (LDoms), go to SPARC: How to Install Sun Logical Domains Software and Create Domains.

To use Solaris I/O multipathing software, go to How to Install Solaris I/O Multipathing Software.

Otherwise, to install VxFS, go to How to Install Veritas File System Software.

Otherwise, install the Sun Cluster software packages. Go to How to Install Sun Cluster Framework and Data-Service Software Packages.

SPARC: How to Install Sun Logical Domains Software and Create Domains

Perform this procedure to install Sun Logical Domains (LDoms) software on a physically clustered machine and to create I/O and guest domains.

Before You Begin

Perform the following tasks:

  1. Become superuser on the machine.

  2. Install Sun Logical Domains software and configure domains.

    • Follow the procedures in Installing and Enabling Software in Logical Domains (LDoms) 1.0.3 Administration Guide.

      If you create guest domains, adhere to the Sun Cluster guidelines for creating guest domains in a cluster.

    • Use the mode=sc option for all virtual switch devices that connect the virtual network devices that are used as the cluster interconnect.

    • For shared storage, map only the full SCSI disks into the guest domains.
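
    For example, the mode=sc guideline above might translate into a virtual-switch definition like the following, which is run from the control domain. The network device, virtual-switch name, and domain name are hypothetical; see the Logical Domains (LDoms) administration guide for the full ldm add-vsw syntax.


      primary# ldm add-vsw mode=sc net-dev=e1000g1 private-vsw0 primary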

Next Steps

If your server supports the mirroring of internal hard drives and you want to configure internal disk mirroring, go to How to Configure Internal Disk Mirroring.

Otherwise, to use Solaris I/O multipathing software, go to How to Install Solaris I/O Multipathing Software.

Otherwise, to install VxFS, go to How to Install Veritas File System Software.

Otherwise, install the Sun Cluster software packages. Go to How to Install Sun Cluster Framework and Data-Service Software Packages.

How to Install Solaris I/O Multipathing Software

Perform this procedure on each node of the global cluster to install and configure Solaris I/O multipathing software (MPxIO) for Fibre Channel (FC) storage. Multipathing software manages multiple I/O paths to the shared cluster storage. This procedure is optional.

Before You Begin

Perform the following tasks:

  1. Become superuser.


    Note –

    SPARC: If you installed Sun Logical Domains (LDoms) software, perform this procedure in the I/O domain and export the configured storage to the guest domains. Do not enable Solaris I/O multipathing software directly in guest domains.


  2. SPARC: For the Solaris 9 OS, install on each node Sun StorEdge Traffic Manager software and any necessary patches.

  3. Enable multipathing functionality.

    • SPARC: For the Solaris 9 OS, change the value of the mpxio-disable parameter to no.

      Modify this entry in the /kernel/drv/scsi_vhci.conf file on each node.


      mpxio-disable="no";
    • For the Solaris 10 OS, issue the following command on each node:


      Caution –

      If Sun Cluster software is already installed, do not issue this command. Running the stmsboot command on an active cluster node might cause Solaris services to go into the maintenance state. Instead, follow instructions in the stmsboot(1M) man page for using the stmsboot command in a Sun Cluster environment.



      phys-schost# /usr/sbin/stmsboot -e
      
      -e

      Enables Solaris I/O multipathing.

      See the stmsboot(1M) man page for more information.

  4. SPARC: For the Solaris 9 OS, determine whether your version of Sun StorEdge SAN Foundation software includes built-in support for your storage array.

    If the software does not include built-in support for your storage array, edit the /kernel/drv/scsi_vhci.conf file on each node to include the necessary entries. For more information, see the release notes for your storage device.

  5. SPARC: For the Solaris 9 OS, shut down each node and perform a reconfiguration boot.

    The reconfiguration boot creates the new Solaris device files and links.


    phys-schost# shutdown -y -g0 -i0
    ok boot -r
    
  6. After the reconfiguration reboot is finished on all nodes, perform any additional tasks that are necessary to complete the configuration of your storage array.

    See installation instructions for your storage array in the Sun Cluster Hardware Administration Collection for details.
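
    (Optional) On the Solaris 10 OS, you can display the mapping between the original device names and the new multipathed device names to confirm which devices are now under multipath control. This check is a suggestion only; see the stmsboot(1M) man page for details.


    phys-schost# stmsboot -L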

Troubleshooting

If you installed Solaris I/O multipathing software after Sun Cluster software was installed on the cluster, DID mappings might require updating. Issue the following commands on each node of the cluster to regenerate the DID namespace.

phys-schost# cldevice clear
phys-schost# cldevice refresh
(Solaris 9 only) phys-schost# cfgadm -c configure
phys-schost# cldevice populate

See the cfgadm(1M) and cldevice(1CL) man pages for more information.

Next Steps

To install VxFS, go to How to Install Veritas File System Software.

Otherwise, install the Sun Cluster software packages. Go to How to Install Sun Cluster Framework and Data-Service Software Packages.

How to Install Veritas File System Software

To use Veritas File System (VxFS) software in the cluster, perform this procedure on each node of the global cluster.

  1. Follow the procedures in your VxFS installation documentation to install VxFS software on each node of the cluster.

  2. Install any Sun Cluster patches that are required to support VxFS.

    See Patches and Required Firmware Levels in Sun Cluster Release Notes for the location of patches and installation instructions.

  3. In the /etc/system file on each node, set the following values.


    set rpcmod:svc_default_stksize=0x8000
    set lwp_default_stksize=0x6000

    These changes become effective at the next system reboot.

    • Sun Cluster software requires a minimum rpcmod:svc_default_stksize setting of 0x8000. Because VxFS installation sets the value of the rpcmod:svc_default_stksize variable to 0x4000, you must manually set the value to 0x8000 after VxFS installation is complete.

    • You must set the lwp_default_stksize variable in the /etc/system file to override the VxFS default value of 0x4000.

Next Steps

Install the Sun Cluster software packages. Go to How to Install Sun Cluster Framework and Data-Service Software Packages.

How to Install Sun Cluster Framework and Data-Service Software Packages


Note –

You can alternatively deploy the Sun Cluster Plug-in for Sun N1™ Service Provisioning System to install Sun Cluster framework and data-service software. Follow instructions in the documentation that is provided with the plug-in. You can also access this information at http://wikis.sun.com/display/SunCluster/Sun+Cluster+Framework+Plug-in.


Follow this procedure to use the Sun Java™ Enterprise System (Java ES) installer program to perform one or more of the following installation tasks:


Note –

This procedure uses the interactive form of the installer program. To use the noninteractive form of the installer program, such as when developing installation scripts, see Chapter 5, Installing in Silent Mode, in Sun Java Enterprise System 5 Update 1 Installation Guide for UNIX.


Before You Begin

Perform the following tasks:

  1. (Solaris 10 only) Restore external access to RPC communication and optionally to Sun Java Web Console.

    During the installation of the Solaris 10 OS, if you choose not to enable network services for remote clients, a restricted network profile is used that disables external access for certain network services. The restricted services include the following services that affect cluster functionality:

    • The RPC communication service, which is required for cluster communication

    • The Sun Java Web Console service, which is required to use the Sun Cluster Manager GUI

    The following steps restore Solaris functionality that is used by the Sun Cluster framework but which is prevented if a restricted network profile is used.

    1. Perform the following commands to restore external access to RPC communication.


      phys-schost# svccfg
      svc:> select network/rpc/bind
      svc:/network/rpc/bind> setprop config/local_only=false
      svc:/network/rpc/bind> quit
      phys-schost# svcadm refresh network/rpc/bind:default
      phys-schost# svcprop network/rpc/bind:default | grep local_only
      

      The output of the last command should show that the local_only property is now set to false.

    2. (Optional) Perform the following commands to restore external access to Sun Java Web Console.


      phys-schost# svccfg
      svc:> select system/webconsole
      svc:/system/webconsole> setprop options/tcp_listen=true
      svc:/system/webconsole> quit
      phys-schost# /usr/sbin/smcwebserver restart
      phys-schost# netstat -a | grep 6789
      

      The output of the last command should return an entry for 6789, which is the port number that is used to connect to Sun Java Web Console.

      For more information about what services the restricted network profile restricts to local connections, see Planning Network Security in Solaris 10 10/09 Installation Guide: Planning for Installation and Upgrade.

  2. (Optional) To use the installer program with a GUI, ensure that the display environment of the cluster node to install is set to display the GUI.


    % xhost +
    % setenv DISPLAY nodename:0.0
    

    If you do not make these settings, the installer program runs in text-based mode.

  3. Become superuser on the cluster node to install.


    Note –

    If your physically clustered machines are configured with Sun LDoms, install Sun Cluster software only in I/O domains or guest domains.


  4. Load the Sun Java Availability Suite DVD-ROM into the DVD-ROM drive.

    If the volume management daemon vold(1M) is running and is configured to manage CD-ROM or DVD devices, the daemon automatically mounts the media on the /cdrom/cdrom0/ directory.

  5. Change to the installation wizard directory of the DVD-ROM.

    • If you are installing the software packages on the SPARC platform, type the following command:


      phys-schost# cd /cdrom/cdrom0/Solaris_sparc
      
    • If you are installing the software packages on the x86 platform, type the following command:


      phys-schost# cd /cdrom/cdrom0/Solaris_x86
      
  6. Start the installation wizard program.


    phys-schost# ./installer
    

    See the Sun Java Enterprise System 5 Update 1 Installation Guide for UNIX for additional information about using the different forms and features of the Java ES installer program.

  7. Follow instructions on the screen to install Sun Cluster framework software and data services on the node.

    • If you do not want to install Sun Cluster Manager, formerly SunPlex Manager, deselect it.


      Note –

      You must install Sun Cluster Manager either on all nodes of the cluster or on none.


    • If you want to install Sun Cluster Geographic Edition software, select it.

      After the cluster is established, see Sun Cluster Geographic Edition Installation Guide for further installation procedures.

    • Choose Configure Later when prompted whether to configure Sun Cluster framework software.

    After installation is finished, you can view any available installation log.

  8. Install additional packages to use any of the following features.

    • Remote Shared Memory Application Programming Interface (RSMAPI)

    • SCI-PCI adapters for the interconnect transport

    • RSMRDT drivers


    Note –

    Use of the RSMRDT driver is restricted to clusters that run an Oracle9i Release 2 SCI configuration with RSM enabled. Refer to Oracle9i Release 2 user documentation for detailed installation and configuration instructions.


    1. Determine which packages you must install.

      The following table lists the Sun Cluster 3.2 11/09 packages that each feature requires, in the order in which you must install each group of packages. The Java ES installer program does not automatically install these packages.


      Note –

      Install packages in the order in which they are listed in the following table.


      Feature             Additional Sun Cluster 3.2 11/09 Packages to Install

      RSMAPI              SUNWscrif

      SCI-PCI adapters    Solaris 9: SUNWsci SUNWscid SUNWscidx
                          Solaris 10: SUNWscir SUNWsci SUNWscidr SUNWscid

      RSMRDT drivers      SUNWscrdt

    2. Change to the Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/ directory, where arch is sparc or x86, and where ver is 10 for Solaris 10.


      phys-schost# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/
      
    3. Install the additional packages.

      • SPARC: For the Solaris 9 OS, use the following command:


        phys-schost# pkgadd -d . packages
        
      • For the Solaris 10 OS, use the following command:


        phys-schost# pkgadd -G -d . packages
        
  9. Unload the Sun Java Availability Suite DVD-ROM from the DVD-ROM drive.

    1. To ensure that the DVD-ROM is not being used, change to a directory that does not reside on the DVD-ROM.

    2. Eject the DVD-ROM.


      phys-schost# eject cdrom
      
  10. Apply any necessary patches to support Sun Cluster software.

    See Patches and Required Firmware Levels in Sun Cluster Release Notes for the location of patches and installation instructions.

  11. If you will use any of the following adapters for the cluster interconnect, uncomment the relevant entry in the /etc/system file on each node.

    Adapter    Entry

    ce         set ce:ce_taskq_disable=1

    ipge       set ipge:ipge_taskq_disable=1

    ixge       set ixge:ixge_taskq_disable=1

    This entry becomes effective after the next system reboot.

Next Steps

If you want to install Sun QFS file system software, follow the procedures for initial installation. See How to Install Sun QFS Software.

Otherwise, to set up the root user environment, go to How to Set Up the Root Environment.

How to Install Sun QFS Software

Perform this procedure on each node in the global cluster.

  1. Ensure that Sun Cluster software is installed.

    See How to Install Sun Cluster Framework and Data-Service Software Packages.

  2. Become superuser on a cluster node.

  3. Install Sun QFS file system software.

    Follow procedures for initial installation in Installing Sun QFS.

Next Steps

Set up the root user environment. Go to How to Set Up the Root Environment.

How to Set Up the Root Environment


Note –

In a Sun Cluster configuration, user initialization files for the various shells must verify that they are being run from an interactive shell before they attempt to output to the terminal. Otherwise, unexpected behavior or interference with data services might occur. See Customizing a User's Work Environment in System Administration Guide: Basic Administration (Solaris 9 or Solaris 10) for more information.


Perform this procedure on each node in the global cluster.

  1. Become superuser on a cluster node.

  2. Modify PATH and MANPATH entries in the .cshrc or .profile file.

    1. Add /usr/sbin/ and /usr/cluster/bin/ to the PATH.

    2. Add /usr/cluster/man/ to the MANPATH.

    See your Solaris OS documentation, volume manager documentation, and other application documentation for additional file paths to set.
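
    For example, the root user's .profile file might contain lines like the following. This is a minimal sketch for a Bourne or Korn shell; the exact entries depend on the volume manager and applications that you use. Because these lines do not write to the terminal, they do not require the interactive-shell check that is described in the note at the beginning of this procedure.


    PATH=$PATH:/usr/sbin:/usr/cluster/bin
    MANPATH=$MANPATH:/usr/cluster/man
    export PATH MANPATH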

  3. (Optional) For ease of administration, set the same root password on each node, if you have not already done so.

Next Steps

If you want to use Solaris IP Filter, go to How to Configure Solaris IP Filter.

Otherwise, configure Sun Cluster software on the cluster nodes. Go to Establishing a New Global Cluster or New Global-Cluster Node.

How to Configure Solaris IP Filter

Perform this procedure to configure Solaris IP Filter on the global cluster.


Note –

Use Solaris IP Filter only with failover data services. The use of Solaris IP Filter with scalable data services is not supported.


Observe the following guidelines:

For more information about the Solaris IP Filter feature, see Part IV, IP Security, in System Administration Guide: IP Services.

  1. Become superuser.

  2. Add filter rules to the /etc/ipf/ipf.conf file on all affected nodes.

    Observe the following guidelines and requirements when you add filter rules to Sun Cluster nodes.

    • (Solaris 10 only) In the ipf.conf file on each node, add rules to explicitly allow cluster interconnect traffic to pass unfiltered. Rules that are not interface specific are applied to all interfaces, including cluster interconnects. Ensure that traffic on these interfaces is not blocked mistakenly.

      For example, suppose the following rules are currently used:


      # Default block TCP/UDP unless some later rule overrides
      block return-rst in proto tcp/udp from any to any
      
      # Default block ping unless some later rule overrides
      block return-rst in proto icmp all

      To unblock cluster interconnect traffic, add the following rules. The subnets used are for example only. Derive the subnets to use by using the ifconfig interface command.


      # Unblock cluster traffic on 172.16.0.128/25 subnet (physical interconnect)
      pass in quick proto tcp/udp from 172.16.0.128/25 to any
      pass out quick proto tcp/udp from 172.16.0.128/25 to any
      
      # Unblock cluster traffic on 172.16.1.0/25 subnet (physical interconnect)
      pass in quick proto tcp/udp from 172.16.1.0/25 to any
      pass out quick proto tcp/udp from 172.16.1.0/25 to any
      
      # Unblock cluster traffic on 172.16.4.0/23 (clprivnet0 subnet)
      pass in quick proto tcp/udp from 172.16.4.0/23 to any
      pass out quick proto tcp/udp from 172.16.4.0/23 to any
    • Sun Cluster software fails over network addresses from node to node. No special procedure or code is needed at the time of failover.

    • All filtering rules that reference IP addresses of logical hostname and shared address resources must be identical on all cluster nodes.

    • Rules on a standby node reference an IP address that does not yet exist on that node. These rules are nevertheless part of the IP filter's active rule set and become effective when the node receives the address after a failover.

    • All filtering rules must be the same for all NICs in the same IPMP group. In other words, if a rule is interface-specific, the same rule must also exist for all other interfaces in the same IPMP group.

    For more information about Solaris IP Filter rules, see the ipf(4) man page.
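
    For example, a failover data service that listens on a logical-hostname IP address could be permitted with a rule like the following, added identically on every node. The address and port are placeholders for your own values.


    # Allow client traffic to a hypothetical logical-hostname address on port 80
    pass in quick proto tcp from any to 192.168.20.50 port = 80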

  3. Enable the ipfilter SMF service.


    phys-schost# svcadm enable /network/ipfilter:default
    
Next Steps

Configure Sun Cluster software on the cluster nodes. Go to Establishing a New Global Cluster or New Global-Cluster Node.