How to Install and Configure Oracle Solaris and Oracle Solaris Cluster Software (IPS Repositories)

You can set up the AI install server to install both the Oracle Solaris OS and the Oracle Solaris Cluster framework and data service software, from either IPS repositories or Unified Archives, on all global-cluster nodes and to establish the cluster. This procedure describes how to set up and use the scinstall(8) custom Automated Installer installation method to install and configure the cluster from IPS repositories.

  1. Set up your Automated Installer (AI) install server and DHCP server.

    Ensure that the AI install server meets the following requirements.

    • The install server is on the same subnet as the cluster nodes.

    • The install server is not itself a cluster node.

    • The install server runs a release of the Oracle Solaris OS that is supported by the Oracle Solaris Cluster software.

    • Each new cluster node is configured as a custom AI installation client that uses the custom AI directory that you set up for Oracle Solaris Cluster installation.

    Follow the appropriate instructions for your software platform and OS version to set up the AI install server and DHCP server. See Chapter 4, Setting Up the AI Server in Automatically Installing Oracle Solaris 11.4 Systems and Working With DHCP in Oracle Solaris 11.4.
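
    For example, a minimal sketch of creating an AI install service for x86 clients might look like the following. The service name, ISO path, and image directory shown are assumptions; substitute the values for your environment.

    installserver# installadm create-service -n solaris11_4-i386 \
    -s /export/iso/sol-11_4-ai-x86.iso -d /export/ai/solaris11_4-i386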

  2. On the AI install server, assume the root role.
  3. On the AI install server, install the Oracle Solaris Cluster AI support package.
    1. Ensure that the solaris and ha-cluster publishers are valid.
      installserver# pkg publisher
      PUBLISHER        TYPE     STATUS   URI
      solaris          origin   online   solaris-repository
      ha-cluster       origin   online   ha-cluster-repository
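
      If either publisher is missing or its URI is wrong, you can add an origin with the pkg set-publisher command. The following is a sketch only; the repository URIs are assumptions, so substitute the locations of your Oracle Solaris and Oracle Solaris Cluster package repositories.

      installserver# pkg set-publisher -g http://pkg.oracle.com/solaris/release solaris
      installserver# pkg set-publisher -g file:///export/repo/ha-cluster ha-cluster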
    2. Install the cluster AI support package.
      installserver# pkg install ha-cluster/system/install

    Tip:

    If you used the clinstall utility to install cluster software on the cluster nodes, you can skip the next step if you issue the scinstall command from the same control node. The clauth authorizations you made before running the clinstall utility stay in force until you reboot the nodes into the cluster at the end of this procedure.
  4. Authorize acceptance of cluster configuration commands by the control node.
    1. Determine which system to use to issue the cluster creation command.

      This system is the control node.

    2. On all systems that you will configure in the cluster, other than the control node, authorize acceptance of commands from the control node.
      phys-schost# clauth enable -n control-node

      If you want to use the des (Diffie-Hellman) authentication protocol instead of the sys (unix) protocol, include the -p des option in the command.

      phys-schost# clauth enable -p des -n control-node
  5. On the AI install server, start the scinstall utility.
    installserver# /usr/cluster/bin/scinstall

    The scinstall Main Menu is displayed.

  6. Select option 1 or option 2 from the Main Menu.
    *** Main Menu ***
    
        Please select from one of the following (*) options:
    
          * 1) Install, restore, replicate, and configure a cluster from this Automated Installer install server
          * 2) Securely install, restore, replicate, and configure a cluster from this Automated Installer install server
          * 3) Print release information for this Automated Installer install server
    
          * ?) Help with menu options
          * q) Quit
    
        Option:  
  7. Follow the menu prompts to supply your answers from the configuration planning worksheet.
  8. For each node, confirm the options you chose so that the scinstall utility performs the necessary configuration to install the cluster nodes from this AI server.

    The utility also prints instructions to add the DHCP macros on the DHCP server, and adds (if you chose secure installation) or clears (if you chose non-secure installation) the security keys for SPARC nodes. Follow those instructions.

  9. (Optional) To install extra software packages or to customize the target device, update the AI manifest for each node.

    The AI manifest is located in the following directory:

    /var/cluster/logs/install/autoscinstall.d/ \
    cluster-name/node-name/node-name_aimanifest.xml
    1. To install extra software packages, edit the AI manifest as follows:
      • Add the publisher name and the repository information. For example:

        <publisher name="aie">
        <origin name="http://aie.example.com:12345"/> 
        </publisher>
      • Add the package names that you want to install, in the software_data item of the AI manifest.
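
        For example, a hypothetical software_data entry that adds two extra packages (the package names shown are placeholders) might look like the following:

        <software_data action="install">
          <name>pkg:/group/system/solaris-large-server</name>
          <name>pkg:/diagnostic/top</name>
        </software_data>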

    2. To customize the target device, update the target element in the manifest file.

      The scinstall utility assumes that the existing boot disk in the manifest file is the target device. To customize the target device, update the target element based on how you want to use the supported criteria to locate the target device for the installation. For example, you can specify the disk_name sub-element.

      For more information, see Configuring an AI Server in Automatically Installing Oracle Solaris 11.4 Systems and the ai_manifest(5) man page.
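
      For example, a minimal sketch of a target element that selects a specific boot disk by its ctd name might look like the following; the disk name c1t1d0 is an assumption.

      <target>
        <disk whole_disk="true">
          <disk_name name="c1t1d0" name_type="ctd"/>
        </disk>
      </target>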

    3. Run the installadm command for each node.
      # installadm update-manifest -n cluster-name-{sparc|i386} \ 
      -f /var/cluster/logs/install/autoscinstall.d/cluster-name/node-name/node-name_aimanifest.xml \
      -m node-name_manifest

    In the install service name, sparc or i386 is the architecture of the cluster node.
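
    To confirm that the updated manifest is associated with the install service, you can list the manifests for that service. The service name below is the same placeholder that is used in the previous command.

    installserver# installadm list -n cluster-name-{sparc|i386} -m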

  10. If you are using a cluster administrative console, display a console screen for each node in the cluster.
    • If pconsole software is installed and configured on your administrative console, use the pconsole utility to display the individual console screens.

      As the root role, use the following command to start the pconsole utility:

      adminconsole# pconsole host[:port] [...] &

      The pconsole utility also opens a master window from which you can send your input to all individual console windows at the same time.
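
      For example, assuming three nodes named phys-schost-1, phys-schost-2, and phys-schost-3 (substitute your own node or console-access host names), you might start the utility as follows:

      adminconsole# pconsole phys-schost-1 phys-schost-2 phys-schost-3 &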

    • If you do not use the pconsole utility, connect to the consoles of each node individually.

  11. Shut down and boot each node to start the AI installation.

    The Oracle Solaris software is installed with the default configuration.

    Note:

    You cannot use this method if you want to customize the Oracle Solaris installation. If you choose the Oracle Solaris interactive installation, the Automated Installer is bypassed and Oracle Solaris Cluster software is not installed and configured. To customize Oracle Solaris during installation, instead follow instructions in How to Install Oracle Solaris Software, then install and configure the cluster by following instructions in How to Install Oracle Solaris Cluster Software (pkg).
    • SPARC:

      1. Shut down each node.

        phys-schost# shutdown -g0 -y -i0
      2. Boot the node with the following command:

        ok boot net:dhcp - install

        Note:

        Surround the dash (-) in the command with a space on each side.
    • x86:

      1. Reboot the node.

        # reboot -p
      2. During PXE boot, press Control-N.

        The GRUB menu is displayed.

      3. Immediately select the Automated Install entry and press Return.

        Note:

        If you do not select the Automated Install entry within 20 seconds, installation proceeds using the default interactive text installer method, which will not install and configure the Oracle Solaris Cluster software.

    On each node, a new boot environment (BE) is created and Automated Installer installs the Oracle Solaris OS and Oracle Solaris Cluster software. When the installation is successfully completed, each node is fully installed as a new cluster node. Oracle Solaris Cluster installation output is logged in the /var/cluster/logs/install/scinstall.log.N file and the /var/cluster/logs/install/sc_ai_config.log file on each node.

  12. If you intend to use the HA for NFS data service (HA for NFS) on a highly available local file system, exclude from the automounter map all shares that are part of the highly available local file system that is exported by HA for NFS.

    See Administrative Tasks for Autofs Maps in Managing Network File Systems in Oracle Solaris 11.4 for more information about modifying the automounter map.
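
    For example, if /etc/auto_master contained a map entry such as the following for a file system that HA for NFS exports (the mount point and map name are placeholders), you would remove or comment out that line:

    /global/nfsdata   auto_nfsdata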

  13. (x86 only) Set the default boot file.

    Setting this value enables you to reboot the node if you are unable to access a login prompt.

    grub edit> kernel /platform/i86pc/kernel/amd64/unix -B $ZFS-BOOTFS -k

    For more information, see How to Boot a System With the Kernel Debugger (kmdb) Enabled in Booting and Shutting Down Oracle Solaris 11.4 Systems.

  14. If you performed a task that requires a cluster reboot, reboot the cluster.

    The following tasks require a reboot:

    • Installing software updates that require a node or cluster reboot

    • Making configuration changes that require a reboot to become active

    1. On one node, assume the root role.
    2. Shut down the cluster.
      phys-schost-1# cluster shutdown -y -g0 cluster-name

      Note:

      Do not reboot the first-installed node of the cluster until after the cluster is shut down. Until cluster installation mode is disabled, only the first-installed node, which established the cluster, has a quorum vote. In an established cluster that is still in installation mode, if the cluster is not shut down before the first-installed node is rebooted, the remaining cluster nodes cannot obtain quorum. The entire cluster then shuts down. Cluster nodes remain in installation mode until the first time that you run the clsetup command. You run this command during the procedure How to Configure Quorum Devices.
    3. Reboot each node in the cluster.

      The cluster is established when all nodes have successfully booted into the cluster. Oracle Solaris Cluster installation output is logged in a /var/cluster/logs/install/scinstall.log.N file.
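
      For example, on SPARC based systems you can boot each node from the OpenBoot PROM prompt; on x86 based systems, select the appropriate Oracle Solaris entry from the GRUB menu when the node restarts.

      ok boot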

  15. From one node, verify that all nodes have joined the cluster.
    phys-schost# clnode status

    Output resembles the following.

    === Cluster Nodes ===
    
    --- Node Status ---
    
    Node Name                                       Status
    ---------                                       ------
    phys-schost-1                                   Online
    phys-schost-2                                   Online
    phys-schost-3                                   Online

    For more information, see the clnode(8CL) man page.

  16. If you plan to enable RPC use of TCP wrappers, add all clprivnet0 IP addresses to the /etc/hosts.allow file on each cluster node.

    Without this addition to the /etc/hosts.allow file, TCP wrappers prevent internode communication over RPC for cluster administration utilities.

    1. On each node, display the IP addresses for all clprivnet0 devices on the node.

      # /usr/sbin/ipadm show-addr
      ADDROBJ           TYPE     STATE        ADDR
      clprivnet0/N      static   ok           ip-address/netmask-length
    2. On each cluster node, add to the /etc/hosts.allow file the IP addresses of all clprivnet0 devices in the cluster.
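
      For example, if the ipadm output on three nodes showed the clprivnet0 addresses 172.16.4.1, 172.16.4.2, and 172.16.4.3 (placeholder addresses), each node's /etc/hosts.allow file might contain an entry like the following. The daemon list shown (ALL) is only one possible policy; adjust it to match your TCP wrappers configuration.

      ALL: 172.16.4.1 172.16.4.2 172.16.4.3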
  17. On each node, enable automatic node reboot if all monitored shared-disk paths fail.

    Note:

    At initial configuration time, disk-path monitoring is enabled by default for all discovered devices.
    1. Enable automatic reboot.
      phys-schost# clnode set -p reboot_on_path_failure=enabled +
      -p

        Specifies the property to set.

      reboot_on_path_failure=enabled

        Enables automatic node reboot if failure of all monitored shared-disk paths occurs.

    2. Verify that automatic reboot on disk-path failure is enabled.
      phys-schost# clnode show
      === Cluster Nodes ===
      
      Node Name:                                      node
      …
      reboot_on_path_failure:                          enabled
      …
  18. If you use the LDAP naming service, you must manually configure it on the cluster nodes after they boot.
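
    For example, a minimal sketch of initializing the LDAP client on a node might use the ldapclient command; the profile name, domain name, and server address shown are placeholders for your environment.

    phys-schost# ldapclient init -a profileName=default -a domainName=example.com 192.0.2.25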

Next Steps

  1. Perform all of the following procedures that are appropriate for your cluster configuration.

  2. Configure quorum, if not already configured, and perform post-installation tasks.

Troubleshooting

Disabled scinstall option – If the AI option of the scinstall command is not preceded by an asterisk, the option is disabled. This condition indicates that AI setup is not complete or that the setup has an error. To correct this condition, first quit the scinstall utility. Repeat Step 1 through Step 9 to correct the AI setup, then restart the scinstall utility.