Oracle® Solaris Cluster Software Installation Guide


Updated: September 2014, E39580-02
 
 

How to Install Oracle Solaris Software

Use this procedure to install the Oracle Solaris OS on the following systems, as applicable to your cluster configuration:

  • Each node of the global cluster

  • (Optional) The administrative console

  • (Optional) The quorum server

Before You Begin

Perform the following tasks:

  • Ensure that the hardware setup is complete and that connections are verified before you install Oracle Solaris software. See the Oracle Solaris Cluster 4.2 Hardware Administration Manual and your server and storage device documentation for details.

  • Ensure that your cluster configuration planning is complete. See How to Prepare for Cluster Software Installation for requirements and guidelines.

  • If you use a naming service, add address-to-name mappings for all public hostnames and logical addresses to any naming services that clients use for access to cluster services. See Public-Network IP Addresses for planning guidelines. See your Oracle Solaris system administrator documentation for information about using Oracle Solaris naming services.

  1. Connect to the consoles of each node.
  2. Install the Oracle Solaris OS.

    Follow installation instructions in Installing Oracle Solaris 11.2 Systems.


    Note -  You must install all nodes in a cluster with the same version of the Oracle Solaris OS.

    You can use any method that is normally used to install the Oracle Solaris software. During Oracle Solaris software installation, perform the following steps:

    1. (Cluster nodes) Choose Manual Layout to set up the file systems.
      • Specify a slice that is at least 20 Mbytes in size.
      • Create any other file system partitions that you need, as described in System Disk Partitions.
    2. (Cluster nodes) For ease of administration, set the same root password on each node.

      Note -  This step is required if you plan to use the Oracle Solaris Cluster Manager GUI to administer Geographic Edition components. For more information about Oracle Solaris Cluster Manager, see Chapter 13, Using the Oracle Solaris Cluster GUI, in Oracle Solaris Cluster System Administration Guide.
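
    To confirm that every node runs the same version of the Oracle Solaris OS, you might compare the release information on each node after installation. The commands below are only a quick check; the exact output depends on your release:

    phys-schost# cat /etc/release
    phys-schost# pkg list entire
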
  3. Ensure that the solaris publisher is valid.
    # pkg publisher
    PUBLISHER                           TYPE     STATUS   URI
    solaris                             origin   online   solaris-repository

    For information about setting the solaris publisher, see Adding and Updating Software in Oracle Solaris 11.2.
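
    If the publisher is not valid, you can reset it to point to your package repository. The following is only a sketch; the repository URI shown is a placeholder for your own solaris repository:

    # pkg set-publisher -G '*' -g http://pkg.example.com/solaris/ solaris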

  4. (Cluster nodes) If you will use role-based access control (RBAC) instead of the root role to access the cluster nodes, set up an RBAC role that provides authorization for all Oracle Solaris Cluster commands.

    This series of installation procedures requires the following Oracle Solaris Cluster RBAC authorizations if the user is not the root role:

    • solaris.cluster.modify

    • solaris.cluster.admin

    • solaris.cluster.read

    See User Rights Management in Securing Users and Processes in Oracle Solaris 11.2 for more information about using RBAC roles. See the Oracle Solaris Cluster man pages for the RBAC authorization that each Oracle Solaris Cluster subcommand requires.
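
    As an illustration only, the following sketch creates a hypothetical role named clusadm with the three authorizations listed above and assigns it to an existing user admin1; adapt the role name, user name, and password policy to your site's security practices:

    phys-schost# roleadd -A solaris.cluster.modify,solaris.cluster.admin,solaris.cluster.read clusadm
    phys-schost# passwd clusadm
    phys-schost# usermod -R clusadm admin1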

  5. (Cluster nodes) If you are adding a node to an existing cluster, add mount points for cluster file systems to the new node.
    1. From the active cluster node, display the names of all cluster file systems.
      phys-schost-1# mount | grep global | egrep -v node@ | awk '{print $1}'
    2. On the new node, create a mount point for each cluster file system in the cluster.
      phys-schost-new# mkdir -p mountpoint

      For example, if the mount command returned the file system name /global/dg-schost-1, run mkdir -p /global/dg-schost-1 on the new node you are adding to the cluster.
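
      If the cluster has many file systems, you could capture the list on the active node, copy it to the new node, and create all of the mount points in one pass. This is only a sketch; the temporary file name is a placeholder:

      phys-schost-1# mount | grep global | egrep -v node@ | awk '{print $1}' > /var/tmp/cfs-mounts
      (copy /var/tmp/cfs-mounts to the new node, for example with scp, then run:)
      phys-schost-new# while read mp; do mkdir -p "$mp"; done < /var/tmp/cfs-mounts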

  6. Install any required Oracle Solaris OS software updates and hardware-related firmware and updates.

    Include those updates for storage array support. Also download any needed firmware that is contained in the hardware updates.

    For instructions on updating your software, see Chapter 11, Updating Your Software, in Oracle Solaris Cluster System Administration Guide.
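
    For example, to apply all available Oracle Solaris package updates from your configured publisher, you might run the following commands; whether the reboot is needed depends on whether the update created a new boot environment:

    phys-schost# pkg update
    phys-schost# init 6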

  7. (x86 only) (Cluster nodes) Set the default boot file.

    Setting this value enables you to reboot the node if you are unable to access a login prompt.

    grub edit> kernel /platform/i86pc/kernel/amd64/unix -B $ZFS-BOOTFS -k

    For more information, see How to Boot a System With the Kernel Debugger (kmdb) Enabled in Booting and Shutting Down Oracle Solaris 11.2 Systems.

  8. (Cluster nodes) Update the /etc/inet/hosts file on each node with all public IP addresses that are used in the cluster.

    Perform this step regardless of whether you are using a naming service.


    Note -  During establishment of a new cluster or new cluster node, the scinstall utility automatically adds the public IP address of each node that is being configured to the /etc/inet/hosts file.
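
    The /etc/inet/hosts entries below are purely illustrative; substitute your own addresses, node names, and logical hostnames:

    192.168.10.11   phys-schost-1
    192.168.10.12   phys-schost-2
    192.168.10.20   schost-lh      # example logical hostname used by cluster services
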
  9. (Optional) (Cluster nodes) Configure public-network adapters in IPMP groups.

    If you do not want to use the multiple-adapter IPMP groups that the scinstall utility configures during cluster creation, configure custom IPMP groups as you would in a stand-alone system. See Chapter 3, Administering IPMP, in Administering TCP/IP Networks, IPMP, and IP Tunnels in Oracle Solaris 11.2 for details.

    During cluster creation, the scinstall utility configures each set of public-network adapters that use the same subnet and are not already configured in an IPMP group into a single multiple-adapter IPMP group. The scinstall utility ignores any existing IPMP groups.
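
    As a sketch of one possible custom configuration, the following commands build a two-adapter IPMP group; the group name, interface names, and address are assumptions for illustration only:

    phys-schost# ipadm create-ip net0
    phys-schost# ipadm create-ip net1
    phys-schost# ipadm create-ipmp sc_ipmp0
    phys-schost# ipadm add-ipmp -i net0 -i net1 sc_ipmp0
    phys-schost# ipadm create-addr -T static -a 192.168.10.11/24 sc_ipmp0/v4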

  10. (Optional) (Cluster nodes) If the Oracle Solaris Cluster software is not already installed and you want to use Oracle Solaris I/O multipathing, enable multipathing on each node.

    Caution  -  If the Oracle Solaris Cluster software is already installed, do not issue this command. Running the stmsboot command on an active cluster node might cause Oracle Solaris services to go into the maintenance state. Instead, follow instructions in the stmsboot(1M) man page for using the stmsboot command in an Oracle Solaris Cluster environment.


    phys-schost# /usr/sbin/stmsboot -e

    -e
      Enables Oracle Solaris I/O multipathing.

    See How to Enable Multipathing in Managing SAN Devices and Multipathing in Oracle Solaris 11.2 and the stmsboot(1M) man page for more information.
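
    After the node reboots with multipathing enabled, you can review the resulting device name changes or the multipathed logical units, for example:

    phys-schost# stmsboot -L
    phys-schost# mpathadm list lu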

Next Steps

If you want to use the pconsole utility, go to How to Install pconsole Software on an Administrative Console.

If you want to use a quorum server, go to How to Install and Configure Oracle Solaris Cluster Quorum Server Software.

If your cluster nodes support the mirroring of internal hard drives and you want to configure internal disk mirroring, go to How to Configure Internal Disk Mirroring.

SPARC: If you want to install Oracle VM Server for SPARC, go to How to Install Oracle VM Server for SPARC Software and Create Domains.

Otherwise, install the Oracle Solaris Cluster software on the cluster nodes.

See Also

See the Oracle Solaris Cluster System Administration Guide for procedures to perform dynamic reconfiguration tasks in an Oracle Solaris Cluster configuration.