Sun Cluster 3.0 12/01 Software Installation Guide

How to Install Solaris Software

If you do not use the scinstall(1M) custom JumpStart installation method to install software, perform this task to install the Solaris operating environment on each node in the cluster.


Note -

If your nodes are already installed with the Solaris operating environment, you must still reinstall the Solaris software as described in this procedure to ensure successful installation of Sun Cluster software.


  1. Ensure that the hardware setup is complete and connections are verified before you install Solaris software.

    See the Sun Cluster 3.0 12/01 Hardware Guide and your server and storage device documentation for details.

  2. Ensure that your cluster configuration planning is complete.

    See "How to Prepare for Cluster Software Installation" for requirements and guidelines.

  3. Have available your completed "Local File System Layout Worksheet" from the Sun Cluster 3.0 Release Notes.

  4. Are you using a naming service?

    • If no, go to Step 5. You will set up local hostname information in Step 15.

    • If yes, add address-to-name mappings for all public hostnames and logical addresses to any naming services (such as NIS, NIS+, or DNS) used by clients for access to cluster services. See "IP Addresses" for planning guidelines. See your Solaris system administrator documentation for information about using Solaris naming services.

  5. If you are using a cluster administrative console, display a console screen for each node in the cluster.

    If Cluster Control Panel (CCP) is installed and configured on your administrative console, you can use the cconsole(1M) utility to display the individual console screens. CCP also opens a master window from which you can send your input to all individual console windows at the same time.
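
    For example, with the CCP installed in its default location, you can start the cconsole utility by specifying the cluster name. The cluster name sc-cluster and the path shown here are examples only, so substitute your own values.

      # /opt/SUNWcluster/bin/cconsole sc-cluster &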

    If you do not use CCP, connect to the console of each node individually.


    Tip -

    To save time, you can install the Solaris operating environment on each node at the same time.


  6. On each node of the cluster, determine whether the local-mac-address variable is correctly set to false.

    Sun Cluster software does not support the local-mac-address variable set to true.

    1. Display the value of the local-mac-address variable.

      • If the node is preinstalled with Solaris software, as superuser run the following command.


         # /usr/sbin/eeprom local-mac-address?
        

      • If the node is not yet installed with Solaris software, run the following command from the ok prompt.


        ok printenv local-mac-address?
        

    2. Does the command return local-mac-address?=false on each node?

      • If yes, the variable settings are correct. Go to Step 7.

      • If no, change the variable setting on any node that is not set to false.

        • If the node is preinstalled with Solaris software, as superuser run the following command.


           # /usr/sbin/eeprom local-mac-address?=false
          

        • If the node is not yet installed with Solaris software, run the following command from the ok prompt.


          ok setenv local-mac-address? false
          

    3. Display the value of the local-mac-address variable again to verify any changes that you made in the previous step.

      The new setting becomes effective at the next system reboot.

  7. Install the Solaris operating environment as instructed in the Solaris installation documentation.


    Note -

    You must install all nodes in a cluster with the same version of the Solaris operating environment.


    You can install the Solaris operating environment on the new nodes by using any method you normally use, including the Solaris interactive installation program, Solaris JumpStart, and Solaris Web Start.

    During Solaris software installation, do the following.

    1. Install at least the End User System Support software group.

      See "Solaris Software Group Considerations" for information about additional Solaris software requirements.

      If you intend to use the Remote Shared Memory Application Programming Interface (RSMAPI) or SCI-PCI adapters for the interconnect transport, you need the RSMAPI software packages (SUNWrsm, SUNWrsmx, SUNWrsmo, and SUNWrsmox). These packages are included in the higher-level software groups but not in the End User System Support software group. If you install the End User System Support software group, you must install the SUNWrsm* packages manually from the Solaris CD-ROM at Step 12.

    2. Choose Manual Layout to set up the file systems.

      • Create a file system of at least 100 Mbytes for use by the global-devices subsystem. If you intend to use SunPlex Manager to install Sun Cluster software, you must create the file system with a mount point of /globaldevices. This mount point is the default used by scinstall.


        Note -

        A global-devices file system is required for Sun Cluster software installation to succeed.


      • If you plan to use SunPlex Manager to install Solstice DiskSuite while installing Sun Cluster software, create a file system on slice 7 of at least 10 Mbytes with a mount point of /sds. Otherwise, create any file system partitions needed to support your volume manager software as described in "System Disk Partitions".
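
      For illustration only, a boot-disk layout that satisfies these requirements might resemble the following. Except for the slice 7 placement described above, the slice assignments and sizes shown are examples, not requirements.

        slice 0   /                 remainder of the disk
        slice 1   swap              750 Mbytes (example size)
        slice 2   overlap           entire disk
        slice 4   /globaldevices    100 Mbytes
        slice 7   /sds              10 Mbytes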

    3. Choose auto reboot.


      Note -

      Solaris software is installed and the node reboots before the next prompts display.


    4. For ease of administration, set the same root password on each node.

    5. Answer no when asked whether to enable automatic power-saving shutdown.

      You must disable automatic shutdown in Sun Cluster configurations. See the pmconfig(1M) and power.conf(4) man pages for more information.


    Note -

    The Solaris interface groups feature is disabled by default during Solaris software installation. Interface groups are not supported in a Sun Cluster configuration and should not be enabled. See the ifconfig(1M) man page for more information about Solaris interface groups.
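
    After the Solaris installation completes, you can optionally confirm which Solaris software group was installed by viewing the CLUSTER file that the Solaris installer records. In this sketch, SUNWCuser is the metacluster name that corresponds to the End User System Support software group; higher-level software groups record different names.

      # cat /var/sadm/system/admin/CLUSTER
      CLUSTER=SUNWCuser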


  8. Are you installing a new node to an existing cluster?

    • If yes, go to Step 9.

    • If no, go to Step 12.

  9. Have you added the new node to the cluster's authorized-node list?

    • If yes, go to Step 10.

    • If no, run scsetup(1M) from another, active cluster node to add the new node's name to the list of authorized cluster nodes. See "How to Add a Cluster Node to the Authorized Node List" in the Sun Cluster 3.0 12/01 System Administration Guide for procedures.
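
    For the second case, a minimal sketch of the procedure is to run the menu-driven scsetup utility as superuser on an active cluster node, then choose the menu option for adding a new node. See the referenced procedure for the exact menu choices.

      # /usr/cluster/bin/scsetup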

  10. Create a mount point on the new node for each cluster file system in the cluster.

    1. From another, active node of the cluster, display the names of all cluster file systems.


      % mount | grep global | egrep -v node@ | awk '{print $1}'
      

    2. On the new node, create a mount point for each cluster file system in the cluster.


      % mkdir -p mountpoint
      

      For example, if the mount command returned the file system name /global/dg-schost-1, run mkdir -p /global/dg-schost-1 on the new node you are adding to the cluster.

  11. Is VERITAS Volume Manager (VxVM) installed on any nodes that are already in the cluster?

    • If yes, add an entry to the /etc/name_to_major file on this node that sets the vxio driver major number to 210.


      # vi /etc/name_to_major
      vxio 210

    • If no, go to Step 12.
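
    To confirm the major number in use, you can run the following check on an existing VxVM-installed node, and again on this node after you edit the file. The output shown assumes the value 210 used above.

      # grep vxio /etc/name_to_major
      vxio 210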

  12. Do you intend to use the Remote Shared Memory Application Programming Interface (RSMAPI) or use SCI-PCI adapters for the interconnect transport?

    • If yes and you installed the End User System Support software group, install the SUNWrsm* packages from the Solaris CD-ROM.


      # pkgadd -d . SUNWrsm SUNWrsmx SUNWrsmo SUNWrsmox
      

    • If no, or if you installed a higher-level software group, go to Step 13.
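
    If you added the packages, you can optionally verify the installation with the pkginfo utility.

      # pkginfo SUNWrsm SUNWrsmx SUNWrsmo SUNWrsmox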

  13. Install any Solaris software patches.

    See the Sun Cluster 3.0 12/01 Release Notes for the location of patches and installation instructions. If necessary, view the /etc/release file to see the exact version of Solaris software that is installed on a node.
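
    For example, to display the Solaris version information for a node, view the file as follows.

      # cat /etc/release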

  14. Install any hardware-related patches and download any needed firmware contained in the hardware patches.

    See the Sun Cluster 3.0 12/01 Release Notes for the location of patches and installation instructions.

  15. Update the /etc/inet/hosts file on each node with all public hostnames and logical addresses for the cluster.

    Perform this step regardless of whether you are using a naming service.
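
    For example, the entries in /etc/inet/hosts might resemble the following. All of the hostnames and addresses shown are hypothetical, so substitute the values from your completed planning worksheets.

      192.168.10.11   phys-schost-1    # physical hostname of node 1 (example)
      192.168.10.12   phys-schost-2    # physical hostname of node 2 (example)
      192.168.10.21   schost-lh-1      # logical address used by a data service (example)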

  16. Do you intend to use dynamic reconfiguration?


    Note -

    To use dynamic reconfiguration in your cluster configuration, your servers must support the use of dynamic reconfiguration with Sun Cluster software.


    • If yes, on each node add the following entry to the /etc/system file.


      set kernel_cage_enable=1

      This entry becomes effective after the next system reboot. See your server documentation for more information about dynamic reconfiguration.

    • If no, go to Step 17.
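
    For the first case, a minimal way to append the entry and confirm it is shown below as a sketch. You can also add the line with a text editor such as vi.

      # echo 'set kernel_cage_enable=1' >> /etc/system
      # grep kernel_cage_enable /etc/system
      set kernel_cage_enable=1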

  17. Install Sun Cluster software on your cluster nodes.