Sun Cluster Software Installation Guide for Solaris OS

Procedure: How to Install Solaris Software

If you do not use the scinstall custom JumpStart installation method to install software, perform this procedure to install the Solaris OS on each node in the cluster. See How to Install Solaris and Sun Cluster Software (JumpStart) for more information about JumpStart installation of a cluster.


Tip –

To speed installation, you can install the Solaris OS on each node at the same time.


If your nodes are already installed with the Solaris OS but do not meet Sun Cluster installation requirements, you might need to reinstall the Solaris software. Follow the steps in this procedure to ensure subsequent successful installation of Sun Cluster software. See Planning the Solaris OS for information about required root-disk partitioning and other Sun Cluster installation requirements.

Before You Begin

Perform the following tasks:

  1. If you are using a cluster administrative console, display a console screen for each node in the cluster.

    • If Cluster Control Panel (CCP) software is installed and configured on your administrative console, use the cconsole(1M) utility to display the individual console screens.

      As superuser, use the following command to start the cconsole utility:


      adminconsole# /opt/SUNWcluster/bin/cconsole clustername &
      

      The cconsole utility also opens a master window from which you can send your input to all individual console windows at the same time.

    • If you do not use the cconsole utility, connect to the console of each node individually.

  2. Install the Solaris OS as instructed in your Solaris installation documentation.


    Note –

    You must install all nodes in a cluster with the same version of the Solaris OS.


    You can use any method that is normally used to install Solaris software. During Solaris software installation, perform the following steps:

    1. Install at least the End User Solaris Software Group.


      Tip –

      To avoid the need to manually install Solaris software packages, install the Entire Solaris Software Group Plus OEM Support.


      See Solaris Software Group Considerations for information about additional Solaris software requirements.

    2. Choose Manual Layout to set up the file systems.

      • Create a file system of at least 512 Mbytes for use by the global-devices subsystem.


        Note –

        Sun Cluster software requires a global-devices file system for installation to succeed.


      • Specify that slice 7 is at least 20 Mbytes in size.

      • Create any other file-system partitions that you need, as described in System Disk Partitions.

    3. For ease of administration, set the same root password on each node.
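
    As one illustration of a layout that satisfies these requirements, the root disk might be partitioned as follows. The sizes shown for root and swap are assumptions for example purposes; only the 512-Mbyte global-devices file system and the 20-Mbyte slice 7 come from the requirements above. See System Disk Partitions for the authoritative guidelines.


      Slice  File System      Size        Purpose
      0      /                6 Gbytes    root file system (size is an assumption)
      1      swap             1 Gbyte     swap space (size is an assumption)
      4      /globaldevices   512 Mbytes  global-devices file system
      7      -                20 Mbytes   reserved for volume-manager use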

  3. If you will use role-based access control (RBAC) instead of superuser to access the cluster nodes, set up an RBAC role that provides authorization for all Sun Cluster commands.

    This series of installation procedures requires the following Sun Cluster RBAC authorizations if the user is not superuser:

    • solaris.cluster.modify

    • solaris.cluster.admin

    • solaris.cluster.read

    See Role-Based Access Control (Overview) in System Administration Guide: Security Services for more information about using RBAC roles. See the Sun Cluster man pages for the RBAC authorization that each Sun Cluster subcommand requires.
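
    If you choose the RBAC approach, one way to set up such a role uses the standard Solaris RBAC commands, as sketched below. The role name scadmin and user name jsmith are hypothetical:


      phys-schost# roleadd -A solaris.cluster.modify,solaris.cluster.admin,solaris.cluster.read scadmin
      phys-schost# passwd scadmin
      phys-schost# usermod -R scadmin jsmith


    After the user logs in and assumes the role with the su command, the role's authorizations apply.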

  4. If you are adding a node to an existing cluster, add mount points for cluster file systems to the new node.

    1. From the active cluster node, display the names of all cluster file systems.


      phys-schost-1# mount | grep global | egrep -v node@ | awk '{print $1}'
      
    2. On the new node, create a mount point for each cluster file system in the cluster.


      phys-schost-new# mkdir -p mountpoint
      

      For example, if the mount command returned the file-system name /global/dg-schost-1, run mkdir -p /global/dg-schost-1 on the new node you are adding to the cluster.
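
    When the cluster has many file systems, the two commands above can be combined. The following is a minimal sketch, assuming the file-system names reported by the active node have been saved to a list file; the function name and list-file argument are illustrative and are not Sun Cluster commands:

```shell
# Sketch: create a mount point on the new node for every cluster
# file system named in a list file, one name per line.
# make_global_mounts is a hypothetical helper, not a Sun Cluster command.
make_global_mounts() {
  while read -r fs; do
    mkdir -p "$fs"
  done < "$1"
}

# Example: make_global_mounts /var/tmp/global-fs.list
```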

  5. If you are adding a node and VxVM is installed on any node in the cluster, perform the following tasks.

    1. Ensure that the same vxio number is used on the VxVM-installed nodes.


      phys-schost# grep vxio /etc/name_to_major
      vxio NNN
      
    2. Ensure that the vxio number is available for use on each of the nodes that do not have VxVM installed.

    3. If the vxio number is already in use on a node that does not have VxVM installed, change the /etc/name_to_major entry to use a different number.
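
    To compare the values across nodes, the lookup in the first substep can be wrapped in a small helper. A minimal sketch; the function name is an assumption, and on a live node the file to examine is /etc/name_to_major:

```shell
# Sketch: print the major number assigned to the vxio driver in a
# name_to_major file, so the values can be compared node by node.
vxio_major() {
  awk '$1 == "vxio" { print $2 }' "$1"
}

# Example: vxio_major /etc/name_to_major
```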

  6. If you installed the End User Solaris Software Group and you want to use any of the following Sun Cluster features, install additional Solaris software packages to support these features.

    • Remote Shared Memory Application Programming Interface (RSMAPI)

    • RSMRDT drivers

    • SPARC: SCI-PCI adapters

    Install the additional packages as appropriate for your version of the Solaris OS:

    • SPARC: For the Solaris 9 OS, use the following command:


      phys-schost# pkgadd -d . SUNWrsm SUNWrsmc SUNWrsmo SUNWrsmox
      
    • For the Solaris 10 OS, use the following command:


      phys-schost# pkgadd -G -d . SUNWrsm SUNWrsmo
      

      You must add these packages only to the global zone. The -G option adds packages to the current zone only. This option also specifies that the packages are not propagated to any existing non-global zone or to any non-global zone that is created later.

  7. Install any required Solaris OS patches and hardware-related firmware and patches, including those for storage-array support. Also download any needed firmware that is contained in the hardware patches.

    See Patches and Required Firmware Levels in Sun Cluster 3.2 Release Notes for Solaris OS for the location of patches and installation instructions.

  8. x86: Set the default boot file.

    Setting this value enables you to reboot the node if you are unable to access a login prompt.

    • On the Solaris 9 OS, set the default to kadb.


      phys-schost# eeprom boot-file=kadb
      
    • On the Solaris 10 OS, set the default to kmdb in the GRUB boot parameters menu.


      grub edit> kernel /platform/i86pc/multiboot kmdb
      
  9. Update the /etc/inet/hosts or /etc/inet/ipnodes file on each node with all public IP addresses that are used in the cluster.

    Perform this step regardless of whether you are using a naming service. The ipnodes file can contain both IPv4 and IPv6 addresses. See Public Network IP Addresses for a listing of Sun Cluster components whose IP addresses you must add.


    Note –

    During establishment of a new cluster or new cluster node, the scinstall utility automatically adds the public IP address of each node that is being configured to the /etc/inet/hosts file. Adding these IP addresses to the /etc/inet/ipnodes file is optional.
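
    For illustration only, the /etc/inet/hosts entries for a two-node cluster might look like the following. The host names and addresses are hypothetical:


      192.168.10.11   phys-schost-1
      192.168.10.12   phys-schost-2
      192.168.10.21   schost-lh        # logical hostname for a data service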


  10. If you will use ce adapters for the cluster interconnect, add the following entry to the /etc/system file.


    set ce:ce_taskq_disable=1

    This entry becomes effective after the next system reboot.

  11. (Optional) On Sun Enterprise 10000 servers, configure the /etc/system file to use dynamic reconfiguration.

    Add the following entry to the /etc/system file on each node of the cluster:


    set kernel_cage_enable=1

    This entry becomes effective after the next system reboot. See your server documentation for more information about dynamic reconfiguration.

  12. (Optional) Configure public-network adapters in IPMP groups.

    If you do not want to use the multiple-adapter IPMP groups that the scinstall utility configures during cluster creation, configure custom IPMP groups as you would in a standalone system. See Part VI, IPMP, in System Administration Guide: IP Services for details.

    During cluster creation, the scinstall utility configures each set of public-network adapters that use the same subnet and are not already configured in an IPMP group into a single multiple-adapter IPMP group. The scinstall utility ignores any existing IPMP groups.
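
    As a sketch of a custom probe-based IPMP configuration, assuming a hypothetical ce0 adapter on node phys-schost-1, a test address phys-schost-1-test, and the group name sc_ipmp0, the /etc/hostname.ce0 file might contain the following. Both host names must resolve in the /etc/inet/hosts file:


      phys-schost-1 netmask + broadcast + group sc_ipmp0 up
      addif phys-schost-1-test netmask + broadcast + deprecated -failover up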

Next Steps

If your server supports the mirroring of internal hard drives and you want to configure internal disk mirroring, go to How to Configure Internal Disk Mirroring.

Otherwise, to use Sun multipathing software, go to How to Install Sun Multipathing Software.

Otherwise, to install VxFS, go to SPARC: How to Install VERITAS File System Software.

Otherwise, install the Sun Cluster software packages. Go to How to Install Sun Cluster Framework and Data-Service Software Packages.

See Also

See the Sun Cluster System Administration Guide for Solaris OS for procedures to perform dynamic reconfiguration tasks in a Sun Cluster configuration.