Sun Cluster 3.1 Software Installation Guide

How to Install Solaris Software

If you do not use the scinstall(1M) custom JumpStart installation method to install software, perform this task to install the Solaris operating environment on each node in the cluster.


Note –

If your nodes are already installed with the Solaris operating environment, you must still reinstall the Solaris software as described in this procedure to ensure successful installation of Sun Cluster software.


  1. Ensure that the hardware setup is complete and connections are verified before you install Solaris software.

    See the Sun Cluster 3.x Hardware Administration Manual and your server and storage device documentation for details.

  2. Ensure that your cluster configuration planning is complete.

    See How to Prepare for Cluster Software Installation for requirements and guidelines.

  3. Have available your completed “Local File Systems With Mirrored Root Worksheet” in Sun Cluster 3.1 Release Notes or “Local File Systems with Non-Mirrored Root Worksheet” in Sun Cluster 3.1 Release Notes.

  4. Are you using a naming service?

    • If no, go to Step 5. You will set up local hostname information in Step 16.

    • If yes, add address-to-name mappings for all public hostnames and logical addresses to any naming services (such as NIS or DNS) used by clients for access to cluster services. See IP Addresses for planning guidelines. See your Solaris system administrator documentation for information about using Solaris naming services.

  5. If you are using a cluster administrative console, display a console screen for each node in the cluster.

    If Cluster Control Panel (CCP) is installed and configured on your administrative console, you can use the cconsole(1M) utility to display the individual console screens. CCP also opens a master window from which you can send your input to all individual console windows at the same time.

    If you do not use CCP, connect to the consoles of each node individually.
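
    For example, if the CCP software is installed in its default location on the administrative console, you might start the cconsole utility as follows, where clustername is a placeholder for the name of your cluster:

      # /opt/SUNWcluster/bin/cconsole clustername &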


    Tip –

    To save time, you can install the Solaris operating environment on each node at the same time.


  6. Do the nodes in the cluster use Ethernet adapters?

    • If no, go to Step 7.

    • If yes, ensure that the local-mac-address? variable is correctly set to true for Ethernet adapters.

      Sun Cluster 3.1 software does not support the local-mac-address? variable set to false for Ethernet adapters. This is a change from the requirement for Sun Cluster 3.0 software.

    1. Display the value of the local-mac-address? variable.

      • If the node is preinstalled with Solaris software, as superuser run the following command.


         # /usr/sbin/eeprom local-mac-address?
        

      • If the node is not yet installed with Solaris software, run the following command from the ok prompt.


        ok printenv local-mac-address?
        

    2. Does the command return local-mac-address?=true on each node?

      • If yes, the variable settings are correct. Go to Step 7.

      • If no, change the variable setting on any node that is not set to true.

        • If the node is preinstalled with Solaris software, as superuser run the following command.


           # /usr/sbin/eeprom local-mac-address?=true
          

        • If the node is not yet installed with Solaris software, run the following command from the ok prompt.


          ok setenv local-mac-address? true
          

    3. Repeat the command from Step 1 to verify any change that you made in Step 2.

      The new setting becomes effective at the next system reboot.

  7. Install the Solaris operating environment as instructed in the Solaris installation documentation.


    Note –

    You must install all nodes in a cluster with the same version of the Solaris operating environment.


    You can use any method that is normally used to install the Solaris operating environment on new nodes that you will add to a cluster. These methods include the Solaris interactive installation program, Solaris JumpStart, and Solaris Web Start.

    During Solaris software installation, do the following.

    1. Install at least the End User System Support software group.

      • If you intend to use the Remote Shared Memory Application Programming Interface (RSMAPI) or use SCI-PCI adapters for the interconnect transport, the required RSMAPI software packages (SUNWrsm, SUNWrsmx, SUNWrsmo, and SUNWrsmox) are included with the higher-level software groups. If you install the End User System Support software group, you must install the SUNWrsm* packages manually from the Solaris CD-ROM at Step 12.

      • If you intend to use SunPlex Manager, the required Apache software packages (SUNWapchr and SUNWapchu) are included with the higher-level software groups. If you install the End User System Support software group, you must install the SUNWapch* packages manually from the Solaris CD-ROM at Step 13.

      See Solaris Software Group Considerations for information about additional Solaris software requirements.

    2. Choose Manual Layout to set up the file systems.

      • Create a file system of at least 512 Mbytes for use by the global-devices subsystem. If you intend to use SunPlex Manager to install Sun Cluster software, you must create the file system with a mount-point name of /globaldevices. The /globaldevices mount-point name is the default that scinstall uses. A verification example appears at the end of this step.


        Note –

        Sun Cluster software requires a global-devices file system for installation to succeed.


      • If you intend to use SunPlex Manager to install Solstice DiskSuite software (Solaris 8), to configure Solaris Volume Manager software (Solaris 9), or to install Sun Cluster HA for NFS or Sun Cluster HA for Apache in addition to installing Sun Cluster software, create a 20-Mbyte file system on slice 7 with a mount-point name of /sds.

        Otherwise, create any file system partitions needed to support your volume manager software as described in System Disk Partitions.

    3. Choose auto reboot.


      Note –

      The Solaris installation tool installs Solaris software and reboots the node before it displays the next prompts.


    4. For ease of administration, set the same root password on each node.

    5. Answer no when asked whether to enable automatic power-saving shutdown.

      You must disable automatic shutdown in Sun Cluster configurations. See the pmconfig(1M) and power.conf(4) man pages for more information.


    Note –

    The Solaris interface groups feature is disabled by default during Solaris software installation. Interface groups are not supported in a Sun Cluster configuration and should not be enabled. See the ifconfig(1M) man page for more information about Solaris interface groups.
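
    After the Solaris installation is complete, you can verify that the global-devices file system that you created in this step exists and meets the minimum size requirement. The device name in the output depends on your disk layout.

      # df -k /globaldevices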


  8. Are you installing a new node to an existing cluster?

    • If no, skip to Step 12. Steps 9 through 11 apply only when you add a node to an existing cluster.

    • If yes, proceed to Step 9.

  9. Have you added the new node to the cluster's authorized-node list?

    • If no, use the scsetup(1M) utility from an active cluster node to add the new node's name to the list of authorized nodes before you continue.

    • If yes, proceed to Step 10.

  10. Create a mount point on the new node for each cluster file system in the cluster.

    1. From another, active node of the cluster, display the names of all cluster file systems.


      % mount | grep global | egrep -v node@ | awk '{print $1}'
      

    2. On the new node, create a mount point for each cluster file system in the cluster.


      % mkdir -p mountpoint
      

      For example, if the mount command returned the file system name /global/dg-schost-1, run mkdir -p /global/dg-schost-1 on the new node you are adding to the cluster.

  11. Is VERITAS Volume Manager (VxVM) installed on any nodes that are already in the cluster?

    • If yes, ensure that the same vxio number is used on the VxVM-installed nodes and that the vxio number is available for use on each of the nodes that do not have VxVM installed.


      # grep vxio /etc/name_to_major
      vxio NNN
      

      If the vxio number is already in use on a node that does not have VxVM installed, free the number on that node by changing the /etc/name_to_major entry to use a different number. See the example after this list.

    • If no, go to Step 12.
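
    For example, suppose the VxVM-installed nodes use major number 210 for vxio (210 is a hypothetical value). On each node that does not have VxVM installed, the following command shows whether another driver already uses that number. No output means the number is free for vxio.

      # awk '$2 == 210' /etc/name_to_major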

  12. Do you intend to use the Remote Shared Memory Application Programming Interface (RSMAPI) or use SCI-PCI adapters for the interconnect transport?

    • If yes and you installed the End User System Support software group, install the SUNWrsm* packages from the Solaris CD-ROM.


      # pkgadd -d . SUNWrsm SUNWrsmx SUNWrsmo SUNWrsmox
      

    • If no, or if you installed a higher-level software group, go to Step 13.

  13. Do you intend to use SunPlex Manager?

    • If yes and you installed the End User System Support software group, install the SUNWapch* packages from the Solaris CD-ROM.


      # pkgadd -d . SUNWapchr SUNWapchu
      

    • If no, or if you installed a higher-level software group, go to Step 14.

    Apache software packages must already be installed before SunPlex Manager is installed.
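
    You can verify that the Apache packages are installed by using the pkginfo command, which lists each package that is present and reports an error for any package that is not installed.

      # pkginfo SUNWapchr SUNWapchu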

  14. Install any Solaris software patches.

    See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions. If necessary, view the /etc/release file to see the exact version of Solaris software that is installed on a node.
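
    For example, the following command displays the Solaris release information.

      # cat /etc/release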

  15. Install any hardware-related patches and download any needed firmware contained in the hardware patches.

    See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.

  16. Update the /etc/inet/hosts file on each node with all public hostnames and logical addresses for the cluster.

    Perform this step regardless of whether you are using a naming service.
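
    The following entries illustrate the file format. The IP addresses and hostnames are examples only; use the values from your completed configuration worksheets.

      192.168.100.101   phys-schost-1
      192.168.100.102   phys-schost-2
      192.168.100.103   schost-lh-1     # logical hostname used by a data service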

  17. Do you intend to use dynamic reconfiguration on Sun Enterprise 10000 servers?

    • If yes, on each node add the following entry to the /etc/system file.


      set kernel_cage_enable=1

      This entry becomes effective after the next system reboot. See the Sun Cluster 3.1 System Administration Guide for procedures to perform dynamic reconfiguration tasks in a Sun Cluster configuration. See your server documentation for more information about dynamic reconfiguration.

    • If no, go to Step 18.

  18. Install Sun Cluster software on your cluster nodes.