Sun Cluster Software Installation Guide for Solaris OS

How to Install Solaris Software

If you do not use the scinstall(1M) custom JumpStart installation method to install software, perform this task. Follow these procedures to install the Solaris OS on each node in the cluster.

Tip –

To speed installation, you can install the Solaris OS on each node at the same time.

If your nodes are already installed with the Solaris OS but do not meet Sun Cluster installation requirements, you might need to reinstall the Solaris software. Follow the steps in this procedure to ensure subsequent successful installation of Sun Cluster software. See Planning the Solaris OS for information about required root-disk partitioning and other Sun Cluster installation requirements.

  1. Ensure that the hardware setup is complete and that connections are verified before you install Solaris software.

    See the Sun Cluster Hardware Administration Collection and your server and storage device documentation for details.

  2. Ensure that your cluster configuration planning is complete.

    See How to Prepare for Cluster Software Installation for requirements and guidelines.

  3. Have available your completed Local File System Layout Worksheet.

  4. If you use a naming service, add address-to-name mappings for all public hostnames and logical addresses to any naming services that clients use for access to cluster services. You set up local hostname information in Step 11.

    See IP Addresses for planning guidelines. See your Solaris system-administrator documentation for information about using Solaris naming services.

  5. If you are using a cluster administrative console, display a console screen for each node in the cluster.

    • If Cluster Control Panel (CCP) software is installed and configured on your administrative console, you can use the cconsole(1M) utility to display the individual console screens. The cconsole utility also opens a master window from which you can send your input to all individual console windows at the same time. Use the following command to start cconsole:

      # /opt/SUNWcluster/bin/cconsole clustername &

    • If you do not use the cconsole utility, connect to the consoles of each node individually.

  6. Install the Solaris OS as instructed in your Solaris installation documentation.

    Note –

    You must install all nodes in a cluster with the same version of the Solaris OS.

    You can use any method that is normally used to install Solaris software. During Solaris software installation, perform the following steps:

    1. Install at least the End User Solaris Software Group.

      See Solaris Software Group Considerations for information about additional Solaris software requirements.

    2. Choose Manual Layout to set up the file systems.

      • Create a file system of at least 512 Mbytes for use by the global-device subsystem. If you intend to use SunPlex Installer to install Sun Cluster software, you must create the file system with a mount-point name of /globaldevices. The /globaldevices mount-point name is the default that is used by scinstall.

        Note –

        Sun Cluster software requires a global-devices file system for installation to succeed.

      • Specify that slice 7 is at least 20 Mbytes in size. If you intend to use SunPlex Installer to install Solstice DiskSuite software (Solaris 8) or configure Solaris Volume Manager software (Solaris 9), create this file system with /sds as the mount point.

      • Create any other file-system partitions that you need, as described in System Disk Partitions.

        Note –

        If you intend to install Sun Cluster HA for NFS or Sun Cluster HA for Apache, you must also install Solstice DiskSuite software (Solaris 8) or configure Solaris Volume Manager software (Solaris 9).

    3. For ease of administration, set the same root password on each node.
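    The partitioning requirements above could be captured in a JumpStart profile fragment such as the following sketch. The disk name c0t0d0, the slice numbers other than 7, and the root and swap sizes are hypothetical; take the actual values from your Local File System Layout Worksheet.

```
# Hypothetical JumpStart profile fragment; disk c0t0d0 and the
# root and swap entries are examples only
install_type    initial_install
system_type     standalone
partitioning    explicit
# root file system takes the remaining space
filesys         c0t0d0s0        free    /
# example swap size in Mbytes
filesys         c0t0d0s1        750     swap
# global-devices file system: at least 512 Mbytes
filesys         c0t0d0s4        512     /globaldevices
# slice 7: at least 20 Mbytes; mount on /sds only if you
# intend to use SunPlex Installer
filesys         c0t0d0s7        20      /sds
```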

  7. If you are adding a node to an existing cluster, prepare the cluster to accept the new node.

    1. On any active cluster member, start the scsetup(1M) utility.

      # scsetup

      The Main Menu is displayed.

    2. Choose the menu item, New nodes.

    3. Choose the menu item, Specify the name of a machine which may add itself.

    4. Follow the prompts to add the node's name to the list of recognized machines.

      The scsetup utility prints the message Command completed successfully if the task completes without error.

    5. Quit the scsetup utility.

    6. From the active cluster node, display the names of all cluster file systems.

      % mount | grep global | egrep -v node@ | awk '{print $1}'

    7. On the new node, create a mount point for each cluster file system in the cluster.

      % mkdir -p mountpoint

      For example, if the mount command returned the file-system name /global/dg-schost-1, run mkdir -p /global/dg-schost-1 on the new node you are adding to the cluster.
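      The two commands above can be combined into a single loop. The sample mount output below is hypothetical; on a real cluster, feed the live mount output from an active node into the same pipeline.

```shell
# Sketch: generate one mkdir command per cluster file system.
# The sample mount output is hypothetical; on a real cluster, run
# `mount` on an active node and pipe its output into this pipeline.
sample_mount_output='/global/dg-schost-1 on /dev/md/dsk/d10 read/write/setuid/global
/global/.devices/node@1 on /dev/dsk/c0t0d0s3 read/write/setuid'

echo "$sample_mount_output" |
  grep global | egrep -v 'node@' | awk '{print $1}' |
  while read fs; do
    # On the new node, this line would be: mkdir -p "$fs"
    echo "mkdir -p $fs"
  done
```

      Note that the `egrep -v node@` step removes each node's private /global/.devices file system, which is not a cluster file system that needs a mount point on the new node.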

    8. Determine whether VERITAS Volume Manager (VxVM) is installed on any nodes that are already in the cluster.

    9. If VxVM is installed on any existing cluster nodes, ensure that the same vxio number is used on the VxVM-installed nodes. Also ensure that the vxio number is available for use on each of the nodes that do not have VxVM installed.

      # grep vxio /etc/name_to_major
      vxio NNN

      If the vxio number is already in use on a node that does not have VxVM installed, free the number on that node. Change the /etc/name_to_major entry to use a different number.
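      Extracting and comparing the vxio major numbers can be scripted. The /etc/name_to_major contents below are hypothetical samples; on a real cluster you would compare the value that each node reports.

```shell
# Sketch: extract the vxio major number from /etc/name_to_major
# on two nodes and compare them. The file contents here are
# hypothetical; on real nodes, read the actual files.
node1_name_to_major='clone 11
vxio 270
sad 12'
node2_name_to_major='clone 11
vxio 270
llc1 13'

n1=$(echo "$node1_name_to_major" | awk '$1 == "vxio" {print $2}')
n2=$(echo "$node2_name_to_major" | awk '$1 == "vxio" {print $2}')

if [ "$n1" = "$n2" ]; then
  echo "vxio major number $n1 matches on both nodes"
else
  echo "vxio major numbers differ: $n1 vs $n2" >&2
fi
```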

  8. If you installed the End User Solaris Software Group, use the pkgadd command to manually install any additional Solaris software packages that you might need.

    The following Solaris packages are required to support some Sun Cluster functionality.


    Feature                                       Required Solaris Software Packages
                                                  (shown in installation order)

    RSMAPI, RSMRDT drivers, or SCI-PCI            SUNWrsm SUNWrsmx SUNWrsmo SUNWrsmox
    adapters (SPARC based clusters only)

    SunPlex Manager                               SUNWapchr SUNWapchu
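    For example, to add the SunPlex Manager prerequisite packages from the Solaris media, the commands might look like the following. The media path is hypothetical; pkgadd installs the packages in the order in which they are named.

```
# cd /cdrom/cdrom0/Solaris_9/Product
# pkgadd -d . SUNWapchr SUNWapchu
```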

  9. Install any hardware-related patches. Also download any needed firmware that is contained in the hardware patches.

    See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.

  10. x86: Set the default boot file to kadb.

    # eeprom boot-file=kadb

    This setting enables you to reboot the node if you are unable to access a login prompt.

  11. Update the /etc/inet/hosts file on each node with all public hostnames and logical addresses for the cluster.

    Perform this step regardless of whether you are using a naming service.
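    For example, the cluster entries in /etc/inet/hosts on each node might look like the following. All addresses and hostnames shown here are hypothetical; use the values from your planning worksheets.

```
# Hypothetical cluster entries in /etc/inet/hosts
192.168.1.101   phys-schost-1    # cluster node
192.168.1.102   phys-schost-2    # cluster node
192.168.1.110   schost-lh-1      # logical hostname used by clients
```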

  12. (Optional) On Sun Enterprise 10000 servers, configure the /etc/system file to use dynamic reconfiguration.

    Add the following entry to the /etc/system file on each node of the cluster:

    set kernel_cage_enable=1

    This entry becomes effective after the next system reboot.

    See the Sun Cluster System Administration Guide for Solaris OS for procedures to perform dynamic reconfiguration tasks in a Sun Cluster configuration. See your server documentation for more information about dynamic reconfiguration.

  13. Install Sun Cluster software packages.

    Go to How to Install Sun Cluster Software Packages.