Sun Cluster Software Installation Guide for Solaris OS

Procedure: How to Install Solaris Software

Follow this procedure to install the Solaris OS on each node in the cluster, or to install the Solaris OS on the master node from which you will create a flash archive for a JumpStart installation. See How to Install Solaris and Sun Cluster Software (JumpStart) for more information about JumpStart installation of a cluster.


Tip –

To speed installation, you can install the Solaris OS on each node at the same time.


If your nodes are already installed with the Solaris OS but do not meet Sun Cluster installation requirements, you might need to reinstall the Solaris software. Follow the steps in this procedure to ensure subsequent successful installation of Sun Cluster software. See Planning the Solaris OS for information about required root-disk partitioning and other Sun Cluster installation requirements.

Before You Begin

Perform the following tasks:

Steps
  1. If you are using a cluster administrative console, display a console screen for each node in the cluster.

    • If Cluster Control Panel (CCP) software is installed and configured on your administrative console, use the cconsole(1M) utility to display the individual console screens.

      Use the following command to start the cconsole utility:


      # /opt/SUNWcluster/bin/cconsole clustername &
      

      The cconsole utility also opens a master window from which you can send your input to all individual console windows at the same time.

    • If you do not use the cconsole utility, connect to the consoles of each node individually.

  2. Install the Solaris OS as instructed in your Solaris installation documentation.


    Note –

    You must install all nodes in a cluster with the same version of the Solaris OS.


    You can use any method that is normally used to install Solaris software. During Solaris software installation, perform the following steps:

    1. Install at least the End User Solaris Software Group.


      Tip –

      To avoid the need to manually install Solaris software packages, install the Entire Solaris Software Group Plus OEM Support.


      See Solaris Software Group Considerations for information about additional Solaris software requirements.

    2. Choose Manual Layout to set up the file systems. (A hypothetical sample layout appears after these steps.)

      • Create a file system of at least 512 Mbytes for use by the global-device subsystem.

        If you intend to use SunPlex Installer to install Sun Cluster software, you must create the file system with a mount-point name of /globaldevices. The /globaldevices mount-point name is the default that is used by scinstall.


        Note –

        Sun Cluster software requires a global-devices file system for installation to succeed.


      • Specify that slice 7 is at least 20 Mbytes in size.

        If you intend to use SunPlex Installer to install Solstice DiskSuite software (Solaris 8) or to configure Solaris Volume Manager software (Solaris 9 or Solaris 10), also mount this file system on /sds.


        Note –

        If you intend to use SunPlex Installer to install Sun Cluster HA for NFS or Sun Cluster HA for Apache, SunPlex Installer must also install Solstice DiskSuite software (Solaris 8) or configure Solaris Volume Manager software (Solaris 9 or Solaris 10).


      • Create any other file-system partitions that you need, as described in System Disk Partitions.

    3. For ease of administration, set the same root password on each node.
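
    The following layout is a hypothetical example that satisfies these requirements. Only the 512-Mbyte global-devices file system and the 20-Mbyte slice 7 come from this procedure; all other slices and sizes are illustrative. See System Disk Partitions for the actual planning guidelines.


      Slice  File System     Size        Purpose
      0      /               6 Gbytes    Root file system (illustrative size)
      1      swap            1 Gbyte     Swap space (illustrative size)
      4      /globaldevices  512 Mbytes  Global-devices file system
      7      unassigned      20 Mbytes   Volume-manager use (mount on /sds if
                                         you use SunPlex Installer)
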

  3. If you are adding a node to an existing cluster, prepare the cluster to accept the new node.

    1. On any active cluster member, start the scsetup(1M) utility.


      # scsetup
      

      The Main Menu is displayed.

    2. Choose the menu item, New nodes.

    3. Choose the menu item, Specify the name of a machine which may add itself.

    4. Follow the prompts to add the node's name to the list of recognized machines.

      The scsetup utility prints the message Command completed successfully if the task is completed without error.

    5. Quit the scsetup utility.

    6. From the active cluster node, display the names of all cluster file systems.


      % mount | grep global | egrep -v node@ | awk '{print $1}'
      
    7. On the new node, create a mount point for each cluster file system in the cluster.


      % mkdir -p mountpoint
      

      For example, if the mount command returned the file-system name /global/dg-schost-1, run mkdir -p /global/dg-schost-1 on the new node you are adding to the cluster.
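
      The two preceding commands can be combined into a short sketch. The /tmp/cluster-fs file name below is hypothetical:


      (On an active cluster node)
      % mount | grep global | egrep -v node@ | awk '{print $1}' > /tmp/cluster-fs

      (After copying /tmp/cluster-fs to the new node, run on the new node)
      % mkdir -p `cat /tmp/cluster-fs`
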

  4. If you are adding a node and VxVM is installed on any node in the cluster, perform the following tasks.

    1. Ensure that the same vxio number is used on the VxVM-installed nodes.


      # grep vxio /etc/name_to_major
      vxio NNN
      
    2. Ensure that the vxio number is available for use on each of the nodes that do not have VxVM installed.

    3. If the vxio number is already in use on a node that does not have VxVM installed, change the /etc/name_to_major entry to use a different number.
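
    The following sketch illustrates these checks. The major number 270 is hypothetical; use the number that your VxVM-installed nodes actually report.


      (On each node with VxVM installed; all must report the same number)
      # grep vxio /etc/name_to_major
      vxio 270

      (On each node without VxVM, check whether that number is already taken)
      # awk '$2 == 270' /etc/name_to_major

    If the awk command prints an entry for another driver, edit /etc/name_to_major to give that driver an unused major number, then reboot the node.
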

  5. If you installed the End User Solaris Software Group, use the pkgadd command to manually install any additional Solaris software packages that you might need.

    The following Solaris packages are required to support some Sun Cluster functionality.


    Note –

    Install packages in the order in which they are listed in the following table.


    Feature                                         Mandatory Solaris Software Packages

    RSMAPI, RSMRDT drivers, or SCI-PCI adapters     Solaris 8 or Solaris 9: SUNWrsm SUNWrsmx SUNWrsmo SUNWrsmox
    (SPARC-based clusters only)                     Solaris 10: SUNWrsm SUNWrsmo

    SunPlex Manager                                 SUNWapchr SUNWapchu

    • For the Solaris 8 or Solaris 9 OS, use the following command:


      # pkgadd -d . packages
      
    • For the Solaris 10 OS, use the following command:


      # pkgadd -G -d . packages
      

      You must add these packages only to the global zone. The -G option adds packages to the current zone only. This option also specifies that the packages are not propagated to any existing non-global zone or to any non-global zone that is created later.
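
    For example, the following sketch adds the Solaris 9 RSMAPI packages in the required order. The package directory path is hypothetical; substitute the location of the packages on your installation media.


      (The package directory path below is hypothetical)
      # cd /cdrom/cdrom0/Solaris_9/Product
      # pkgadd -d . SUNWrsm SUNWrsmx SUNWrsmo SUNWrsmox
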

  6. Install any required Solaris OS patches and hardware-related firmware and patches, including those for storage-array support. Also download any needed firmware that is contained in the hardware patches.

    See Patches and Required Firmware Levels in Sun Cluster 3.1 8/05 Release Notes for Solaris OS for the location of patches and installation instructions.
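
    Solaris patches are typically applied with the patchadd command. For example:


    (The patch ID and directory below are hypothetical)
    # patchadd /var/tmp/123456-01
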

  7. x86: Set the default boot file to kadb.


    # eeprom boot-file=kadb
    

    Setting this value enables you to reboot the node if you are unable to access a login prompt.
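
    You can verify the setting by printing the parameter:


    # eeprom boot-file
    boot-file=kadb
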

  8. Update the /etc/inet/hosts file on each node with all IP addresses that are used in the cluster.

    Perform this step regardless of whether you are using a naming service. See IP Addresses for a listing of Sun Cluster components whose IP addresses you must add.
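
    The following entries are purely illustrative; substitute the IP addresses and host names that are actually used in your cluster.


    192.168.100.41   phys-schost-1   # cluster node (hypothetical)
    192.168.100.42   phys-schost-2   # cluster node (hypothetical)
    192.168.100.51   schost-lh-1     # logical hostname (hypothetical)
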

  9. If you will use ce adapters for the cluster interconnect, add the following entry to the /etc/system file.


    set ce:ce_taskq_disable=1

    This entry becomes effective after the next system reboot.

  10. (Optional) On Sun Enterprise 10000 servers, configure the /etc/system file to use dynamic reconfiguration.

    Add the following entry to the /etc/system file on each node of the cluster:


    set kernel_cage_enable=1

    This entry becomes effective after the next system reboot. See your server documentation for more information about dynamic reconfiguration.
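
    If you added entries in the preceding two steps, you can verify them before you reboot:


    # egrep 'ce_taskq_disable|kernel_cage_enable' /etc/system
    set ce:ce_taskq_disable=1
    set kernel_cage_enable=1
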

Next Steps

If you intend to use Sun multipathing software, go to SPARC: How to Install Sun Multipathing Software.

If you intend to install VxFS, go to SPARC: How to Install VERITAS File System Software.

Otherwise, install the Sun Cluster software packages. Go to How to Install Sun Cluster Framework and Data-Service Software Packages (Java ES installer).

See Also

See the Sun Cluster System Administration Guide for Solaris OS for procedures to perform dynamic reconfiguration tasks in a Sun Cluster configuration.