Sun Cluster 3.1 10/03 Software Installation Guide

How to Install Solaris Software

If you do not use the scinstall(1M) custom JumpStart installation method to install software, perform this task to install the Solaris operating environment on each node in the cluster.


Tip –

To speed installation, you can install the Solaris operating environment on each node at the same time.


If your nodes are already installed with the Solaris operating environment but do not meet Sun Cluster installation requirements, you might need to reinstall the Solaris software. If so, follow the steps that are described in this procedure to ensure successful installation of Sun Cluster software. See Planning the Solaris Operating Environment for information about required partitioning and other Sun Cluster installation requirements.

  1. Ensure that the hardware setup is complete and that connections are verified before you install Solaris software.

    See the Sun Cluster 3.1 Hardware Administration Collection and your server and storage device documentation for details.

  2. Ensure that your cluster configuration planning is complete.

    See How to Prepare for Cluster Software Installation for requirements and guidelines.

  3. Have available your completed Local File System Layout Worksheet.

  4. Do you use a naming service?

    • If no, go to Step 5. You set up local hostname information in Step 11.

    • If yes, add address-to-name mappings for all public hostnames and logical addresses to any naming services that clients use for access to cluster services. See IP Addresses for planning guidelines. See your Solaris system-administrator documentation for information about using Solaris naming services.

  5. If you are using a cluster administrative console, display a console screen for each node in the cluster.

    • If Cluster Control Panel (CCP) software is installed and configured on your administrative console, you can use the cconsole(1M) utility to display the individual console screens. The cconsole utility also opens a master window from which you can send your input to all individual console windows at the same time. Use the following command to start cconsole:


      # /opt/SUNWcluster/bin/cconsole clustername &
      

    • If you do not use the cconsole utility, connect to the consoles of each node individually.

  6. Install the Solaris operating environment as instructed in your Solaris installation documentation.


    Note –

    You must install all nodes in a cluster with the same version of the Solaris operating environment.


    You can use any method that is normally used to install Solaris software. These methods include the Solaris interactive installation program, Solaris JumpStart, and the Solaris Web Start program.

    During Solaris software installation, perform the following steps:

    1. Install at least the End User System Support software group.

      • If you intend to use the Remote Shared Memory Application Programming Interface (RSMAPI) or to use SCI-PCI adapters for the interconnect transport, note that only the higher-level software groups include the required RSMAPI software packages (SUNWrsm, SUNWrsmx, SUNWrsmo, and SUNWrsmox). If you install the End User System Support software group, you must install the RSMAPI software packages manually from the Solaris CD-ROM at Step 8.

      • If you intend to use SunPlex Manager, the required Apache software packages (SUNWapchr and SUNWapchu) are included with the higher-level software groups. If you install the End User System Support software group, you must install the Apache software packages manually from the Solaris CD-ROM at Step 9.

      See Solaris Software Group Considerations for information about additional Solaris software requirements.

    2. Choose Manual Layout to set up the file systems.

      • Create a file system of at least 512 Mbytes for use by the global-device subsystem. If you intend to use SunPlex Manager to install Sun Cluster software, you must create the file system with a mount-point name of /globaldevices. The /globaldevices mount-point name is the default that is used by scinstall.


        Note –

        Sun Cluster software requires a global-devices file system for installation to succeed.


      • Specify that slice 7 is at least 20 Mbytes in size. If you intend to use SunPlex Manager to install Solstice DiskSuite software (Solaris 8) or configure Solaris Volume Manager software (Solaris 9), also mount this file system on /sds.

        Otherwise, create any file-system partitions that are needed to support your volume-manager software as described in System Disk Partitions.


        Note –

        If you intend to install Sun Cluster HA for NFS or Sun Cluster HA for Apache, you must also install Solstice DiskSuite software (Solaris 8) or configure Solaris Volume Manager software (Solaris 9).

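    The slice requirements in this step can be summarized in a sample layout. The disk size and slice assignments below are hypothetical illustrations; only the /globaldevices and slice 7 minimums come from this procedure, and your volume-manager software might impose additional requirements.

```
Slice  Mount point      Size (example)
0      /                remainder of the disk
1      swap             750 Mbytes
4      /globaldevices   512 Mbytes (minimum required)
7      /sds             20 Mbytes (minimum; volume-manager use)
```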

    3. For ease of administration, set the same root password on each node.

  7. Are you installing a new node to an existing cluster?

    • If no, skip to Step 8.

    • If yes, perform the following steps:

    1. Ensure that you have added the new node to the cluster's authorized-node list.

    2. From an active node of the cluster, display the names of all cluster file systems.


      % mount | grep global | egrep -v node@ | awk '{print $1}'
      

    3. On the new node, create a mount point for each cluster file system in the cluster.


      % mkdir -p mountpoint
      

      For example, if the mount command returned the file-system name /global/dg-schost-1, run mkdir -p /global/dg-schost-1 on the new node you are adding to the cluster.
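
      Steps 2 and 3 can be combined into one loop. The sketch below uses a saved file-system list and creates the mount points under a staging root (/tmp/newnode-root) so the loop can be rehearsed off-cluster; on the real new node you would run mkdir -p against / directly. The file-system names are hypothetical examples.

```shell
# Stand-in list of cluster file systems, as produced on an active node by:
#   mount | grep global | egrep -v node@ | awk '{print $1}'
printf '%s\n' /global/dg-schost-1 /global/dg-schost-2 > /tmp/cluster-fs.list

# Create a matching mount point for each entry under a staging root.
STAGE=/tmp/newnode-root
while read -r fs; do
  mkdir -p "${STAGE}${fs}"
done < /tmp/cluster-fs.list

# List what was created for verification.
ls -d "${STAGE}"/global/*
```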

    4. Is VERITAS Volume Manager (VxVM) installed on any nodes that are already in the cluster?

      • If no, proceed to Step 8.

      • If yes, ensure that the same vxio number is used on the VxVM-installed nodes. Also ensure that the vxio number is available for use on each of the nodes that do not have VxVM installed.


        # grep vxio /etc/name_to_major
        vxio NNN
        

        If the vxio number is already in use on a node that does not have VxVM installed, free the number on that node. Change the /etc/name_to_major entry to use a different number.
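
        One way to compare the entries is sketched below. It runs against copied files rather than the live /etc/name_to_major files, and the node names and major number are hypothetical examples.

```shell
# Stand-ins for /etc/name_to_major copied from each node.
# nodeA has VxVM installed; nodeB does not.
printf 'vxio 270\n' > /tmp/name_to_major.nodeA
printf 'clone 11\n' > /tmp/name_to_major.nodeB

# Major number that vxio uses on the VxVM-installed node.
VXIO=$(awk '$1 == "vxio" {print $2}' /tmp/name_to_major.nodeA)

# Report whether any other driver on nodeB already claims that number.
if awk -v n="$VXIO" '$2 == n && $1 != "vxio"' /tmp/name_to_major.nodeB | grep -q .; then
  echo "conflict: major number $VXIO already in use on nodeB"
else
  echo "major number $VXIO is free on nodeB"
fi
```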

  8. Do you intend to use the Remote Shared Memory Application Programming Interface (RSMAPI) or use SCI-PCI adapters for the interconnect transport?

    • If no, proceed to Step 9.

    • If yes and you installed the End User System Support software group, install the RSMAPI software packages from the Solaris CD-ROM.


      # pkgadd -d . SUNWrsm SUNWrsmx SUNWrsmo SUNWrsmox
      

    • If yes and you installed a higher-level software group than the End User System Support software group, proceed to Step 9.

  9. Do you intend to use SunPlex Manager?

    • If no, or if you installed a higher-level software group than the End User System Support software group, proceed to Step 10.

    • If yes and you installed the End User System Support software group, install the Apache software packages from the Solaris CD-ROM.


      # pkgadd -d . SUNWapchr SUNWapchu
      

    The Apache software packages must already be installed before SunPlex Manager is installed.

  10. Install any hardware-related patches and download any needed firmware that is contained in the hardware patches.

    See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.

  11. Update the /etc/inet/hosts file on each node with all public hostnames and logical addresses for the cluster.

    Perform this step regardless of whether you are using a naming service.
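
    The entries use the standard hosts file format. The addresses and hostnames below are hypothetical examples, and a stand-in file is used so the sketch can be run safely; on a cluster node the target file is /etc/inet/hosts.

```shell
# Write example entries to a stand-in file (the real target is /etc/inet/hosts).
HOSTS=/tmp/hosts.example
cat > "$HOSTS" <<'EOF'
192.168.1.10   phys-schost-1   # physical hostname, node 1
192.168.1.11   phys-schost-2   # physical hostname, node 2
192.168.1.20   schost-lh       # logical hostname used by cluster services
EOF

# Count the example entries as a quick sanity check.
grep -c schost "$HOSTS"
```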

  12. Do you intend to use dynamic reconfiguration on Sun Enterprise 10000 servers?

    • If no, proceed to Step 14.

    • If yes, on each node add the following entry to the /etc/system file.


      set kernel_cage_enable=1

      This entry becomes effective after the next system reboot. See the Sun Cluster 3.1 10/03 System Administration Guide for procedures to perform dynamic reconfiguration tasks in a Sun Cluster configuration. See your server documentation for more information about dynamic reconfiguration.
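
      One way to make this change idempotent (so that re-running it does not duplicate the entry) is sketched below. A copy of the file is used here; on a cluster node the target is /etc/system.

```shell
# Stand-in for /etc/system; on a real node edit /etc/system itself.
SYSFILE=/tmp/system.example
touch "$SYSFILE"

# Append the entry only if it is not already present.
grep -q '^set kernel_cage_enable=1' "$SYSFILE" || \
  echo 'set kernel_cage_enable=1' >> "$SYSFILE"

# Confirm the entry exists exactly once.
grep kernel_cage_enable "$SYSFILE"
```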

  13. Do you intend to use VERITAS File System (VxFS) software?

    • If no, proceed to Step 14.

    • If yes, perform the following steps.

    1. Follow the procedures in your VxFS installation documentation to install VxFS software on each node of the cluster.

    2. Install any Sun Cluster patches that are required to support VxFS.

      See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.

    3. In the /etc/system file on each node, set the value for the rpcmod:svc_default_stksize variable to 0x8000 and set the value of the lwp_default_stksize variable to 0x6000.


      set rpcmod:svc_default_stksize=0x8000
      set lwp_default_stksize=0x6000

      Sun Cluster software requires a minimum rpcmod:svc_default_stksize setting of 0x8000. Because VxFS installation sets the value of the rpcmod:svc_default_stksize variable to 0x4000, you must manually set the value to 0x8000 after VxFS installation is complete.

      Also, you must set the lwp_default_stksize variable in the /etc/system file to override the VxFS default value of 0x4000.
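
      Because VxFS installation rewrites these values, it helps to verify them after installation is complete. The sketch below checks a copy of the file; on a cluster node the target is /etc/system.

```shell
# Stand-in for /etc/system after the required edits.
SYSFILE=/tmp/system.vxfs
cat > "$SYSFILE" <<'EOF'
set rpcmod:svc_default_stksize=0x8000
set lwp_default_stksize=0x6000
EOF

# Confirm both required settings are present with the expected values.
grep -q '^set rpcmod:svc_default_stksize=0x8000$' "$SYSFILE" && \
grep -q '^set lwp_default_stksize=0x6000$' "$SYSFILE" && \
echo "stack sizes OK"
```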

  14. Preinstall Sun Cluster software packages.

    Go to How to Preinstall Sun Cluster Software Packages.