Sun Cluster 3.0 U1 Installation Guide

How to Install Solaris Software

If you do not use the scinstall(1M) custom JumpStart installation method to install software, perform this task to install the Solaris operating environment on each node in the cluster.


Note -

If your nodes are already installed with the Solaris operating environment, you must still reinstall the Solaris software as described in this procedure to ensure successful installation of Sun Cluster software.


  1. Ensure that the hardware setup is complete and connections are verified before you install Solaris software.

    See the Sun Cluster 3.0 U1 Hardware Guide and your server and storage device documentation for details.

  2. Have available your completed "Local File System Layout Worksheet" from the Sun Cluster 3.0 Release Notes.

  3. Are you using a naming service?

    • If no, proceed to Step 4. You will set up local hostname information in Step 12.

    • If yes, add address-to-name mappings for all public hostnames and logical addresses to any naming services (such as NIS, NIS+, or DNS) used by clients for access to cluster services. See "IP Addresses" for planning guidelines. See your Solaris system administrator documentation for information about using Solaris naming services.
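      For example, a two-node cluster with hypothetical node names phys-schost-1 and phys-schost-2 and one logical hostname schost-lh-1 might require mappings like the following. All names and addresses shown are examples only. You add these same mappings to the /etc/inet/hosts file in Step 12.

      192.168.5.11   phys-schost-1
      192.168.5.12   phys-schost-2
      192.168.5.20   schost-lh-1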

  4. If you are using a cluster administrative console, display a console screen for each node in the cluster.

    If Cluster Control Panel (CCP) is installed and configured on your administrative console, you can use the cconsole(1M) utility to display the individual console screens. CCP also opens a master window from which you can send your input to all individual console windows at the same time.

    If you do not use CCP, connect to the consoles of each node individually.
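    For example, assuming CCP is installed in its default location, you might start the cconsole utility for a cluster named schost (a hypothetical cluster name) as follows.

    # /opt/SUNWcluster/bin/cconsole schost &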


    Tip -

    To save time, you can install the Solaris operating environment on each node at the same time.


  5. On each node of the cluster, determine whether the local-mac-address variable is correctly set to false.

    Sun Cluster software does not support the local-mac-address variable set to true.

    1. Display the value of the local-mac-address variable.

      • If the node is preinstalled with Solaris software, as superuser run the following command.


         # /usr/sbin/eeprom local-mac-address?
        

      • If the node is not yet installed with Solaris software, run the following command from the ok prompt.


        ok printenv local-mac-address?
        

    2. Does the command return local-mac-address?=false on each node?

      • If yes, the variable settings are correct. Proceed to Step 6.

      • If no, change the variable setting on any node that is not set to false.

        • If the node is preinstalled with Solaris software, as superuser run the following command.


           # /usr/sbin/eeprom local-mac-address?=false
          

        • If the node is not yet installed with Solaris software, run the following command from the ok prompt.


          ok setenv local-mac-address? false
          

    3. Repeat Step 1 to verify any changes you made in Step 2.

      The new setting becomes effective at the next system reboot.
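      For example, on a node preinstalled with Solaris software, output similar to the following verifies the setting.

      # /usr/sbin/eeprom local-mac-address?
      local-mac-address?=false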

  6. Install the Solaris operating environment as instructed in the Solaris installation documentation.


    Note -

    You must install all nodes in a cluster with the same version of the Solaris operating environment.


    You can use any method normally used to install the Solaris operating environment on new nodes that will join a clustered environment. These methods include the Solaris interactive installation program, Solaris JumpStart, and Solaris Web Start.

    During Solaris software installation, do the following.

    1. Install at least the End User System Support software group.


      Note -

      Sun Enterprise E10000 servers require the Entire Distribution + OEM software group.


      You might need to install other Solaris software packages that are not part of the End User System Support software group, for example, the Apache HTTP server packages. Third-party software, such as Oracle, might also require additional Solaris packages. See your third-party documentation for any Solaris software requirements.
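      For example, you might add the Apache packages from the Solaris software CD-ROM with pkgadd(1M), as in the following sketch. The CD-ROM path and package names shown are examples only; verify them against your Solaris media.

      # pkgadd -d /cdrom/cdrom0/Solaris_8/Product SUNWapchr SUNWapchu SUNWapchd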

    2. Choose Manual Layout to set up the file systems.

      • Create a file system of at least 100 Mbytes for use by the global-devices subsystem. If you intend to use SunPlex Manager to install Sun Cluster software, you must create the file system with a mount point of /globaldevices. This mount point is the default used by scinstall.


        Note -

        A global-devices file system is required for Sun Cluster software installation to succeed.


      • If you plan to use SunPlex Manager to install Solstice DiskSuite while installing Sun Cluster software, create a file system on slice 7 of at least 10 Mbytes with a mount point of /sds. Otherwise, create any file system partitions needed to support your volume manager software as described in "System Disk Partitions".
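        The following sample layout for a hypothetical system disk shows one way to satisfy these requirements. The slice assignments and sizes are examples only; see "System Disk Partitions" for detailed guidelines.

        Slice   Mount Point       Size
        0       /                 remaining free space
        1       swap              750 Mbytes
        2       overlap           entire disk
        4       /globaldevices    100 Mbytes
        7       /sds              10 Mbytes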

    3. Choose auto reboot.


      Note -

      Solaris software is installed and the node reboots before the next prompts display.


    4. For ease of administration, set the same root password on each node.

    5. Answer no when asked whether to enable automatic power-saving shutdown.

      You must disable automatic shutdown in Sun Cluster configurations. See the pmconfig(1M) and power.conf(4) man pages for more information.
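      If Solaris software is already installed, you can verify this setting in the /etc/power.conf file. In the autoshutdown entry, a behavior value of noshutdown, as in the following example line, disables automatic shutdown. The idle-time and time-of-day values shown are examples only.

      autoshutdown 30 9:00 9:00 noshutdown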


    Note -

    The Solaris interface groups feature is disabled by default during Solaris software installation. Interface groups are not supported in a Sun Cluster configuration and should not be enabled. See the ifconfig(1M) man page for more information about Solaris interface groups.


  7. Are you installing a new node to an existing cluster?

    • If no, proceed to Step 10.

    • If yes, proceed to Step 8.

  8. Have you added the new node to the cluster's authorized-node list?

    • If yes, proceed to Step 9.

    • If no, run scsetup(1M) from another, active cluster node to add the new node's name to the list of authorized cluster nodes. See "How to Add a Cluster Node to the Authorized Node List" in the Sun Cluster 3.0 U1 System Administration Guide for procedures.
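      Alternatively, you can add the node name directly with the scconf(1M) command from an active cluster node, as in the following example for a hypothetical new node named phys-schost-3.

      # scconf -a -T node=phys-schost-3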

  9. Create a mount point on the new node for each cluster file system in the cluster.

    1. From another, active node of the cluster, display the names of all cluster file systems.


      % mount | grep global | egrep -v node@ | awk '{print $1}'
      

    2. On the new node, create a mount point for each cluster file system in the cluster.


      % mkdir -p mountpoint
      

      For example, if the mount command returned the file system name /global/dg-schost-1, run mkdir -p /global/dg-schost-1 on the new node you are adding to the cluster.
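      If the cluster contains many file systems, you can script this step. The following sketch assumes you have copied the output of the command in Step 1 to a file named /var/tmp/fslist (a hypothetical file name) on the new node.

      % xargs mkdir -p < /var/tmp/fslist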

  10. Install any Solaris software patches.

    See the Sun Cluster 3.0 U1 Release Notes for the location of patches and installation instructions. If necessary, view the /etc/release file to see the exact version of Solaris software that is installed on a node.
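    For example, to check the Solaris release and then apply a patch that was unpacked into /var/tmp, you might run commands like the following. The patch ID shown is hypothetical.

    # cat /etc/release
    # patchadd /var/tmp/123456-01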

  11. Install any hardware-related patches and download any needed firmware contained in the hardware patches.

    See the Sun Cluster 3.0 U1 Release Notes for the location of patches and installation instructions.

  12. Update the /etc/inet/hosts file on each node with all public hostnames and logical addresses for the cluster.

    Perform this step regardless of whether you are using a naming service.
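    For example, entries for the hypothetical hostnames and addresses used earlier in this procedure might look like the following.

    127.0.0.1      localhost
    192.168.5.11   phys-schost-1
    192.168.5.12   phys-schost-2
    192.168.5.20   schost-lh-1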

  13. Install Sun Cluster software on your cluster nodes.