
Installing and Configuring an Oracle® Solaris Cluster 4.4 Environment


Updated: September 2019

Establishing a New Cluster With the Automated Installer

The interactive scinstall utility runs in two modes of installation, Typical or Custom. For the Typical installation of Oracle Solaris Cluster software, scinstall automatically specifies the following configuration defaults:

  • Private-network address – 172.16.0.0

  • Private-network netmask – 255.255.240.0

  • Cluster-transport adapters – exactly two adapters

  • Cluster-transport switches – switch1 and switch2

  • Global fencing – enabled

You can install and configure a new cluster by installing the Oracle Solaris and Oracle Solaris Cluster software packages from IPS repositories, or from an Oracle Solaris Unified Archive that is created on an existing cluster.

Besides forming a new cluster, you can also use AI and the Unified Archives to replicate a cluster from an archive or to restore existing cluster nodes. You can also use the clzonecluster command to install a new zone cluster from a Unified Archive. For more information, see How to Install and Configure Oracle Solaris and Oracle Solaris Cluster Software (Unified Archives), How to Replicate a Cluster from the Unified Archives, How to Restore a Node from the Unified Archive in Administering an Oracle Solaris Cluster 4.4 Configuration, and How to Install a Zone Cluster from the Unified Archive in Administering an Oracle Solaris Cluster 4.4 Configuration.

These nodes can be physical machines, supported Oracle VM Server for SPARC logical domains or guest domains (SPARC only), or a combination of these node types.

AI uses a minimal boot image to boot the client. When you install the Oracle Solaris and Oracle Solaris Cluster software packages from IPS repositories, you must provide a source for the installation to obtain the boot image. The boot image content is published in the install-image/solaris-auto-install package. The downloaded boot image ISO file also contains the boot image. You can either specify the repository from which the package can be retrieved, or you can specify the location of the downloaded boot image ISO file.

  • To obtain the boot image from the repository, you need to specify the publisher, the repository URL, and the architecture of the cluster nodes. If the repository uses HTTPS, you also need to specify the locations of the SSL certificate file and the private key file. You can request and download the key and certificate from the http://pkg-register.oracle.com site.

  • To use the downloaded boot image ISO file, you must save it in a directory that can be accessed from the AI install server. The AI boot image must be the same version as the Oracle Solaris software release that you plan to install on the cluster nodes. Also, the boot image file must have the same architecture as that of the cluster nodes.

    If you want to establish a new cluster from Oracle Unified Archives, either to install and configure a new cluster or to replicate a cluster from the archives, you do not need to provide the minimal boot image. The Unified Archive contains an image you can use. You do need to provide the path to access the Unified Archive.

When you install and configure a new cluster from either IPS repositories or Unified Archives, complete one of the following cluster configuration worksheets to plan your Typical mode or Custom mode installation:

  • Typical Mode Worksheet – If you will use Typical mode and accept all defaults, complete the following worksheet.

    Record your answer to each of the following items:

    Custom Automated Installer Boot Image Source
    • If you plan to use a downloaded AI ISO image file, you will need the following information:
      • What is the full path name of the Automated Installer boot image ISO file?
    • If you plan to use a repository to get the AI boot image, you will need the following information:
      • What is the publisher for the boot image install-image/solaris-auto-install package?
      • What is the repository of the publisher?
      • What is the architecture of the cluster nodes?
    • For repositories that use HTTPS:
      • What is the full path of the certificate file for the repository?
      • What is the full path of the private key file for the repository?
      You can request and download the key and certificate from the http://pkg-register.oracle.com site.

    Unified Archives
    • If you plan to use the Unified Archives to install, you will need the following information:
      • What is the location of the unified archive?

    Custom Automated Installer User root Password
    • What is the password for the root account of the cluster nodes?

    Custom Automated Installer Repositories (when not using the Unified Archive)
    • What is the repository of publisher solaris?
    • What is the repository of publisher ha-cluster?
    • For repositories that use HTTPS:
      • What is the full path of the certificate file for the repository?
      • What is the full path of the private key file for the repository?
      You can request and download the key and certificate from the http://pkg-register.oracle.com site.
    • Select the Oracle Solaris Cluster components that you want to install (select one or more group packages).
      • Do you want to select any individual components that are contained in these group packages?  Yes | No

    Cluster Name
    • What is the name of the cluster that you want to establish?

    Cluster Nodes
    • List the names of the cluster nodes that are planned for the initial cluster configuration. (For a single-node cluster, press Control-D alone.)
    • Confirm that the auto-discovered MAC address for each node is correct.

    Cluster Transport Adapters and Cables
    (VLAN cannot be used with cluster AI.)
    • First node name:
      • Transport adapter names:  First:  Second:
    • Specify for each additional node:
      • Node name:
      • Transport adapter names:  First:  Second:

    Quorum Configuration (two-node cluster only)
    • Do you want to disable automatic quorum device selection? (Answer Yes if any shared storage is not qualified to be a quorum device or if you want to configure a quorum server as a quorum device.)
      First:  Yes | No   Second:  Yes | No
  • Custom Mode Worksheet – If you will use Custom mode and customize the configuration data, complete the following worksheet.


    Note -  If you are installing a single-node cluster, the scinstall utility automatically uses the default private network address and netmask, even though the cluster does not use a private network.
    Record your answer to each of the following items:

    Custom Automated Installer Boot Image ISO Source
    • If you plan to use a downloaded AI ISO image file, you will need the following information:
      • What is the full path name of the Automated Installer boot image ISO file?
    • If you plan to use a repository to get the AI boot image, you will need the following information:
      • What is the publisher for the boot image install-image/solaris-auto-install package?
      • What is the repository of the publisher?
      • What is the architecture of the cluster nodes?
    • For repositories that use HTTPS:
      • What is the full path of the certificate file for the repository?
      • What is the full path of the private key file for the repository?
      You can request and download the key and certificate from the http://pkg-register.oracle.com site.

    Unified Archives
    • If you plan to use the Unified Archives to install, you will need the following information:
      • What is the location of the unified archive?

    Custom Automated Installer User root Password
    • What is the password for the root account of the cluster nodes?

    Custom Automated Installer Repositories (when not using the Unified Archive)
    • What is the repository of publisher solaris?
    • What is the repository of publisher ha-cluster?
    • For repositories that use HTTPS:
      • What is the full path of the certificate file for the repository?
      • What is the full path of the private key file for the repository?
      You can request and download the key and certificate from the http://pkg-register.oracle.com site.
    • Select the Oracle Solaris Cluster components that you want to install (select one or more group packages).
      • Do you want to select any individual components that are contained in these group packages?  Yes | No

    Cluster Name
    • What is the name of the cluster that you want to establish?

    Cluster Nodes
    • List the names of the cluster nodes that are planned for the initial cluster configuration. (For a single-node cluster, press Control-D alone.)
    • Confirm that the auto-discovered MAC address for each node is correct.

    Network Address for the Cluster Transport (multiple-node cluster only)
    • Do you want to accept the default network address (172.16.0.0)?  Yes | No
      • If no, which private network address do you want to use?  ___.___.___.___
    • Do you want to accept the default netmask?  Yes | No
      • If no, what are the maximum numbers of nodes, private networks, and zone clusters that you expect to configure in the cluster?
        _____ nodes
        _____ networks
        _____ zone clusters
        _____ exclusive-IP zone clusters
      • Which netmask do you want to use? Choose from the values that are calculated by scinstall or supply your own.  ___.___.___.___

    Minimum Number of Private Networks (multiple-node cluster only)
    • Should this cluster use at least two private networks?  Yes | No

    Point-to-Point Cables (two-node cluster only)
    • Does this cluster use switches?  Yes | No

    Cluster Switches (multiple-node cluster only)
    • Transport switch name, if used (defaults: switch1 and switch2):
      First:  Second:

    Cluster Transport Adapters and Cables (multiple-node cluster only)
    (VLAN cannot be used with cluster AI.)
    • First node name:
      • Transport adapter names:  First:  Second:
      • Where does each transport adapter connect to (a switch or another adapter)? (Switch defaults: switch1 and switch2.)  First:  Second:
      • If a transport switch, do you want to use the default port name?  First: Yes | No  Second: Yes | No
        • If no, what is the name of the port that you want to use?  First:  Second:
    • Specify for each additional node (multiple-node cluster only):
      • Node name:
      • Transport adapter names:  First:  Second:
      • Where does each transport adapter connect to (a switch or another adapter)? (Switch defaults: switch1 and switch2.)  First:  Second:
      • If a transport switch, do you want to use the default port name?  First: Yes | No  Second: Yes | No
        • If no, what is the name of the port that you want to use?  First:  Second:

    Global Fencing
    • Do you want to disable global fencing? Answer No unless the shared storage does not support SCSI reservations or unless you want systems that are outside the cluster to access the shared storage.
      First:  Yes | No   Second:  Yes | No

    Quorum Configuration (two-node cluster only)
    • Do you want to disable automatic quorum device selection? (Answer Yes if any shared storage is not qualified to be a quorum device or if you want to configure a quorum server as a quorum device.)
      First:  Yes | No   Second:  Yes | No
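The netmask questions in the Custom Mode worksheet reflect the tradeoff scinstall makes between private-network address space and the maximum node, network, and zone-cluster counts. As a minimal sketch (not part of the scinstall dialog), the following shell fragment converts a prefix length to the dotted netmask it denotes; the default private-network netmask 255.255.240.0 corresponds to a /20:

```shell
# Convert a prefix length to a dotted-decimal netmask.
# prefix=20 reproduces the default private-network netmask.
prefix=20
mask=$(( 0xFFFFFFFF << (32 - prefix) & 0xFFFFFFFF ))
printf '%d.%d.%d.%d\n' \
  "$(( mask >> 24 & 255 ))" "$(( mask >> 16 & 255 ))" \
  "$(( mask >> 8 & 255 ))"  "$(( mask & 255 ))"
# prints 255.255.240.0
```

A smaller prefix (for example, /18) yields a larger address space and therefore leaves room for more nodes, private networks, and zone clusters.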

Note -  If your physically clustered machines are configured with Oracle VM Server for SPARC, install the Oracle Solaris Cluster software only in I/O domains, control domains, or guest domains.

    Follow these guidelines to use the interactive scinstall utility in this procedure:

  • Interactive scinstall enables you to type ahead. Therefore, do not press the Return key more than once if the next menu screen does not appear immediately.

  • Unless otherwise noted, you can press Control-D to return either to the start of a series of related questions or to the Main Menu.

  • Default answers or answers to previous sessions are displayed in brackets ([ ]) at the end of a question. Press Return to enter the response that is in brackets without typing it.

Perform the following tasks:

  • Ensure that the hardware setup is complete and connections are verified before you install Oracle Solaris software. See Managing Hardware With Oracle Solaris Cluster 4.4 and your server and storage device documentation for details on how to set up the hardware.

  • Ensure that an Automated Installer install server and a DHCP server are configured. See Configuring an AI Server in Automatically Installing Oracle Solaris 11.4 Systems.

  • Determine the Ethernet address of each cluster node and the length of the subnet mask of the subnet to which the address belongs.

  • Determine the MAC address of each cluster node.

  • Ensure that your cluster configuration planning is complete. See How to Prepare for Cluster Software Installation for requirements and guidelines.

  • Have available the root user password for the cluster nodes.

  • SPARC: If you are configuring Oracle VM Server for SPARC I/O domains or guest domains as cluster nodes, ensure that the Oracle VM Server for SPARC software is installed on each physical machine and that the domains meet Oracle Solaris Cluster requirements. See How to Install Oracle VM Server for SPARC Software and Create Domains.

  • If you plan to install from Unified Archives that are created on an existing cluster, have the path to the archive file and ensure that it can be accessed from the AI server.

  • If you plan to install from IPS repositories, determine which Oracle Solaris Cluster software packages you want to install.

    The following table lists the group packages for the Oracle Solaris Cluster 4.4 software that you can choose during an AI installation and the principal features that each group package contains. You must install at least the ha-cluster-framework-minimal group package.

    Feature – Group packages that contain it

    • Framework – all group packages (ha-cluster-framework-full, ha-cluster-data-services-full, ha-cluster-framework-minimal, ha-cluster-geo-full, ha-cluster-manager)
    • Agents – ha-cluster-data-services-full
    • Localization – ha-cluster-framework-full
    • Framework man pages – ha-cluster-framework-full
    • Data Service man pages – ha-cluster-data-services-full
    • Agent Builder – ha-cluster-framework-full
    • Generic Data Service – ha-cluster-framework-full, ha-cluster-data-services-full
    • Graphical User Interface – ha-cluster-manager
    • Disaster Recovery Framework – ha-cluster-geo-full
  • Have available your completed Typical Mode or Custom Mode installation worksheet. See Establishing a New Cluster With the Automated Installer.

How to Install and Configure Oracle Solaris and Oracle Solaris Cluster Software (IPS Repositories)

You can set the AI server to install both Oracle Solaris OS and the Oracle Solaris Cluster framework and data service software from IPS repositories or the Unified Archives on all global-cluster nodes and establish the cluster. This procedure describes how to set up and use the scinstall(8) custom Automated Installer installation method to install and configure the cluster from IPS repositories.

  1. Set up your Automated Installer (AI) install server and DHCP server.

    Ensure that the AI install server meets the following requirements.

    • The install server is on the same subnet as the cluster nodes.

    • The install server is not itself a cluster node.

    • The install server runs a release of the Oracle Solaris OS that is supported by the Oracle Solaris Cluster software.

    • Each new cluster node is configured as a custom AI installation client that uses the custom AI directory that you set up for Oracle Solaris Cluster installation.

    Follow the appropriate instructions for your software platform and OS version to set up the AI install server and DHCP server. See Chapter 4, Setting Up the AI Server in Automatically Installing Oracle Solaris 11.4 Systems and Working With DHCP in Oracle Solaris 11.4.

  2. On the AI install server, assume the root role.
  3. On the AI install server, install the Oracle Solaris Cluster AI support package.
    1. Ensure that the solaris and ha-cluster publishers are valid.
      installserver# pkg publisher
      PUBLISHER        TYPE     STATUS   URI
      solaris          origin   online   solaris-repository
      ha-cluster       origin   online   ha-cluster-repository
    2. Install the cluster AI support package.
      installserver# pkg install ha-cluster/system/install

    Tip  -  If you used the clinstall utility to install cluster software on the cluster nodes, you can skip the next step if you issue the scinstall command from the same control node. The clauth authorizations you made before running the clinstall utility stay in force until you reboot the nodes into the cluster at the end of this procedure.
  4. Authorize acceptance of cluster configuration commands by the control node.
    1. Determine which system to use to issue the cluster creation command.

      This system is the control node.

    2. On all systems that you will configure in the cluster, other than the control node, authorize acceptance of commands from the control node.
      phys-schost# clauth enable -n control-node

      If you want to use the des (Diffie-Hellman) authentication protocol instead of the sys (unix) protocol, include -p des in the command.

      phys-schost# clauth enable -p des -n control-node
  5. On the AI install server, start the scinstall utility.
    installserver# /usr/cluster/bin/scinstall

    The scinstall Main Menu is displayed.

  6. Select option 1 or option 2 from the Main Menu.
    *** Main Menu ***
    
        Please select from one of the following (*) options:
    
          * 1) Install, restore, replicate, and configure a cluster from this Automated Installer install server
          * 2) Securely install, restore, replicate, and configure a cluster from this Automated Installer install server
          * 3) Print release information for this Automated Installer install server
    
          * ?) Help with menu options
          * q) Quit
    
        Option:  
  7. Follow the menu prompts to supply your answers from the configuration planning worksheet.
  8. For each node, confirm the options you chose so that the scinstall utility performs the necessary configuration to install the cluster nodes from this AI server.

    The utility also prints instructions to add the DHCP macros on the DHCP server, and adds (if you chose secure installation) or clears (if you chose non-secure installation) the security keys for SPARC nodes. Follow those instructions.

  9. (Optional) To install extra software packages or to customize the target device, update the AI manifest for each node.

    The AI manifest is located in the following directory:

    /var/cluster/logs/install/autoscinstall.d/ \
    cluster-name/node-name/node-name_aimanifest.xml
    1. To install extra software packages, edit the AI manifest as follows:
      • Add the publisher name and the repository information. For example:

        <publisher name="aie">
        <origin name="http://aie.example.com:12345"/> 
        </publisher>
      • Add the package names that you want to install, in the software_data item of the AI manifest.
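      For illustration only, an added package would be listed inside the manifest's software_data element as follows; the package name here is a placeholder, not something the procedure requires:

      ```xml
      <!-- Hypothetical example: each extra package goes in a name element
           within the software_data element of the AI manifest. -->
      <software_data action="install">
        <name>pkg:/group/feature/storage-server</name>
      </software_data>
      ```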

    2. To customize the target device, update the target element in the manifest file.

      scinstall assumes the existing boot disk in the manifest file to be the target device. To customize the target device, update the target element in the manifest file based on how you want to use the supported criteria to locate the target device for the installation. For example, you can specify the disk_name sub-element.

      For more information, see Configuring an AI Server in Automatically Installing Oracle Solaris 11.4 Systems and the ai_manifest(5) man page.
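      As a sketch only, a customized target element that selects a specific boot disk by name might look like the following; the disk name is an assumption to replace with your own device:

      ```xml
      <!-- Hypothetical target element: install to disk c1t0d0 and build
           the root pool on it. Adjust disk_name to match your hardware. -->
      <target>
        <disk whole_disk="true">
          <disk_name name="c1t0d0" name_type="ctd"/>
        </disk>
        <logical>
          <zpool name="rpool" is_root="true"/>
        </logical>
      </target>
      ```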

    3. Run the installadm command for each node.
      # installadm update-manifest -n cluster-name-{sparc|i386} \ 
      -f /var/cluster/logs/install/autoscinstall.d/cluster-name/node-name/node-name_aimanifest.xml \
      -m node-name_manifest

    Note that sparc or i386 in the service name corresponds to the architecture of the cluster nodes.

  10. If you are using a cluster administrative console, display a console screen for each node in the cluster.
    • If pconsole software is installed and configured on your administrative console, use the pconsole utility to display the individual console screens.

      As the root role, use the following command to start the pconsole utility:

      adminconsole# pconsole host[:port] […]  &

      The pconsole utility also opens a master window from which you can send your input to all individual console windows at the same time.

    • If you do not use the pconsole utility, connect to the consoles of each node individually.
  11. Shut down and boot each node to start the AI installation.

    The Oracle Solaris software is installed with the default configuration.


    Note -  You cannot use this method if you want to customize the Oracle Solaris installation. If you choose the Oracle Solaris interactive installation, the Automated Installer is bypassed and Oracle Solaris Cluster software is not installed and configured. To customize Oracle Solaris during installation, instead follow instructions in How to Install Oracle Solaris Software, then install and configure the cluster by following instructions in How to Install Oracle Solaris Cluster Software (pkg).
    • SPARC:
      1. Shut down each node.
        phys-schost# shutdown -g0 -y -i0
      2. Boot the node with the following command:
        ok boot net:dhcp - install

        Note -  Surround the dash (-) in the command with a space on each side.
    • x86:
      1. Reboot the node.
        # reboot -p
      2. During PXE boot, press Control-N.

        The GRUB menu is displayed.

      3. Immediately select the Automated Install entry and press Return.

        Note -  If you do not select the Automated Install entry within 20 seconds, installation proceeds using the default interactive text installer method, which will not install and configure the Oracle Solaris Cluster software.

        On each node, a new boot environment (BE) is created and Automated Installer installs the Oracle Solaris OS and Oracle Solaris Cluster software. When the installation is successfully completed, each node is fully installed as a new cluster node. Oracle Solaris Cluster installation output is logged in the /var/cluster/logs/install/scinstall.log.N file and the /var/cluster/logs/install/sc_ai_config.log file on each node.

  12. If you intend to use the HA for NFS data service (HA for NFS) on a highly available local file system, exclude from the automounter map all shares that are part of the highly available local file system that is exported by HA for NFS.

    See Administrative Tasks for Autofs Maps in Managing Network File Systems in Oracle Solaris 11.4 for more information about modifying the automounter map.

  13. (x86 only) Set the default boot file.

    The setting of this value enables you to reboot the node if you are unable to access a login prompt.

    grub edit> kernel /platform/i86pc/kernel/amd64/unix -B $ZFS-BOOTFS -k

    For more information, see How to Boot a System With the Kernel Debugger (kmdb) Enabled in Booting and Shutting Down Oracle Solaris 11.4 Systems.

  14. If you performed a task that requires a cluster reboot, reboot the cluster.

    The following tasks require a reboot:

    • Installing software updates that require a node or cluster reboot

    • Making configuration changes that require a reboot to become active

    1. On one node, assume the root role.
    2. Shut down the cluster.
      phys-schost-1# cluster shutdown -y -g0 cluster-name

      Note -  Do not reboot the first-installed node of the cluster until after the cluster is shut down. Until cluster installation mode is disabled, only the first-installed node, which established the cluster, has a quorum vote. In an established cluster that is still in installation mode, if the cluster is not shut down before the first-installed node is rebooted, the remaining cluster nodes cannot obtain quorum. The entire cluster then shuts down.

      Cluster nodes remain in installation mode until the first time that you run the clsetup command. You run this command during the procedure How to Configure Quorum Devices.


    3. Reboot each node in the cluster.

    The cluster is established when all nodes have successfully booted into the cluster. Oracle Solaris Cluster installation output is logged in a /var/cluster/logs/install/scinstall.log.N file.

  15. From one node, verify that all nodes have joined the cluster.
    phys-schost# clnode status

    Output resembles the following.

    === Cluster Nodes ===
    
    --- Node Status ---
    
    Node Name                                       Status
    ---------                                       ------
    phys-schost-1                                   Online
    phys-schost-2                                   Online
    phys-schost-3                                   Online

    For more information, see the clnode(8CL) man page.

  16. If you plan to enable RPC use of TCP wrappers, add all clprivnet0 IP addresses to the /etc/hosts.allow file on each cluster node.

    Without this addition to the /etc/hosts.allow file, TCP wrappers prevent internode communication over RPC for cluster administration utilities.

    1. On each node, display the IP addresses for all clprivnet0 devices on the node.
      # /usr/sbin/ipadm show-addr
      ADDROBJ           TYPE     STATE        ADDR
      clprivnet0/N      static   ok           ip-address/netmask-length
    2. On each cluster node, add to the /etc/hosts.allow file the IP addresses of all clprivnet0 devices in the cluster.
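    If you have many nodes, sub-step 2 can be scripted. This sketch parses a sample line of ipadm output to build a candidate /etc/hosts.allow entry; the address shown is a placeholder, and on a real node you would pipe the output of /usr/sbin/ipadm show-addr into the awk command instead:

    ```shell
    # Sample line in the format printed by 'ipadm show-addr' (placeholder address).
    ipadm_out='clprivnet0/0   static   ok   172.16.4.1/24'

    # Keep only clprivnet0 entries and strip the /netmask-length suffix.
    addrs=$(printf '%s\n' "$ipadm_out" |
      awk '$1 ~ /^clprivnet0/ {sub(/\/.*/, "", $4); print $4}')

    # Candidate line to append to /etc/hosts.allow on each cluster node.
    printf 'ALL: %s\n' "$addrs"
    # prints ALL: 172.16.4.1
    ```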
  17. (Optional) On each node, enable automatic node reboot if all monitored shared-disk paths fail.

    Note -  At initial configuration time, disk-path monitoring is enabled by default for all discovered devices.
    1. Enable automatic reboot.
      phys-schost# clnode set -p reboot_on_path_failure=enabled +
      -p

      Specifies the property to set.

      reboot_on_path_failure=enabled

      Enables automatic node reboot if failure of all monitored shared-disk paths occurs.

    2. Verify that automatic reboot on disk-path failure is enabled.
      phys-schost# clnode show
      === Cluster Nodes ===
      
      Node Name:                                      node
      …
      reboot_on_path_failure:                          enabled
      …
  18. If you use the LDAP naming service, you must manually configure it on the cluster nodes after they boot.

Next Steps

1. Perform all of the following procedures that are appropriate for your cluster configuration.

2. Configure quorum, if not already configured, and perform post-installation tasks.

Troubleshooting

Disabled scinstall option – If the AI option of the scinstall command is not preceded by an asterisk, the option is disabled. This condition indicates that AI setup is not complete or that the setup has an error. To correct this condition, first quit the scinstall utility. Repeat Step 1 through Step 9 to correct the AI setup, then restart the scinstall utility.

How to Install and Configure Oracle Solaris and Oracle Solaris Cluster Software (Unified Archives)

You will use the AI server to install a cluster from the Unified Archives and configure its nodes. This procedure retains all the software packages that are contained in the Unified Archives, but you must provide the new cluster configuration that you designed in the worksheet. Before you perform this procedure, you must first create the archive; see Step 1 for instructions.

The AI server sets up installation of the nodes from the Unified Archives and creates the cluster with the new configuration. Only a Unified Archive created in the global zone is accepted. You can use either a clone archive or a recovery archive. The following list describes the differences between the two archives:

  • When you install from a clone archive, only the global zone is installed. Any zones in the archive are not installed. When you install from a recovery archive, both the global zone and the zones contained in the archive are installed.

  • A clone archive does not contain the system configuration, such as IPMP groups, VLANs, and VNICs.

  • A clone archive only contains the BE that is active when the archive is created, therefore only that BE is installed. A recovery archive can contain multiple BEs, but only the active BE is updated with the new cluster configuration.

This procedure prompts you for the cluster name, node names and their MAC addresses, the path to the Unified Archives, and the cluster configuration you designed in the worksheet.

  1. Assume the root role on a node of the global cluster and create an archive.
    phys-schost# archiveadm create -r archive-location

    Use the archiveadm create command to create a clone archive, or add the -r option to create a recovery archive. For more information about the archiveadm command, see the archiveadm(8) man page.

  2. Set up your Automated Installer (AI) install server and DHCP server.

    Ensure that the AI install server meets the following requirements.

    • The install server is on the same subnet as the cluster nodes.

    • The install server is not itself a cluster node.

    • The install server runs a release of the Oracle Solaris OS that is supported by the Oracle Solaris Cluster software.

    • Each new cluster node is configured as a custom AI installation client that uses the custom AI directory that you set up for Oracle Solaris Cluster installation.

    Follow the appropriate instructions for your software platform and OS version to set up the AI install server and DHCP server. See Chapter 4, Setting Up the AI Server in Automatically Installing Oracle Solaris 11.4 Systems and Working With DHCP in Oracle Solaris 11.4.

  3. Log into the Automated Installer server and assume the root role.
  4. On the AI install server, install the Oracle Solaris Cluster AI support package.
    1. Ensure that the solaris and ha-cluster publishers are valid.
      installserver# pkg publisher
      PUBLISHER        TYPE     STATUS   URI
      solaris          origin   online   solaris-repository
      ha-cluster       origin   online   ha-cluster-repository
    2. Install the cluster AI support package.
      installserver# pkg install ha-cluster/system/install
  5. On the AI install server, start the scinstall utility.
    installserver# /usr/cluster/bin/scinstall

    The scinstall Main Menu is displayed.

  6. Type the option number and press Return.
    *** Main Menu ***
    
    Please select from one of the following (*) options:
    
    * 1) Install, restore, or replicate a cluster from this Automated Installer server
    * 2) Securely install, restore, or replicate a cluster from this Automated Installer server
    * 3) Print release information for this Automated Installer install server
    
    * ?) Help with menu options
    * q) Quit
    
    Option:  2

    Choose Option 1 if you want to install a cluster using a non-secure AI server installation. Choose Option 2 for a secure AI installation.

    The Custom Automated Installer Menu or Custom Secure Automated Installer Menu is displayed.

  7. Type the option number to Install and Configure a New Cluster from Unified Archives and press Return.

    The Custom Automated Installer User screen is displayed.

  8. Type the password and press Return.

    Type the password a second time to confirm it. The Typical or Custom Mode screen is displayed.

  9. Type the option number for the install mode you will use.

    The Cluster Name screen is displayed.

  10. Type the name of the cluster you want to install and press Return.

    The Cluster Nodes screen is displayed.

  11. Type the names of the cluster nodes that you plan to install from the Unified Archives and press Return.

    If the scinstall utility is unable to find the MAC address of the nodes, type in each address when prompted and press Return. You can then choose to install all the nodes from the same archive, or use a different archive for each node.

  12. Type the full path to the archive and press Return.

    The archive can either be a recovery archive or a clone archive.

    The Cluster Transport Adapters and Cables screen is displayed.

  13. Type the names of the cluster transport adapters and press Return.

    Select the type of each transport adapter. The Resource Security Configuration screen is displayed.

  14. Choose whether to enable or disable automatic quorum device selection and press Return.

    The Confirmation screen is displayed.

  15. For each node, confirm the options you chose so that the scinstall utility performs the necessary configuration to install the cluster nodes from this AI server.

    The utility also prints instructions to add the DHCP macros on the DHCP server, and adds (if you chose secure installation) or clears (if you chose non-secure installation) the security keys for SPARC nodes. Follow those instructions.

  16. (Optional) To customize the target device, update the AI manifest for each node.

    The AI manifest is located in the following directory:

    /var/cluster/logs/install/autoscinstall.d/ \
    cluster-name/node-name/node-name_aimanifest.xml
    1. To customize the target device, update the target element in the manifest file.

      scinstall assumes the existing boot disk in the manifest file to be the target device. To customize the target device, update the target element in the manifest file based on how you want to use the supported criteria to locate the target device for the installation. For example, you can specify the disk_name sub-element.

      For more information, see Configuring an AI Server in Automatically Installing Oracle Solaris 11.4 Systems and the ai_manifest(5) man page.

    2. Run the installadm command for each node.
      # installadm update-manifest -n cluster-name-{sparc|i386} \ 
      -f /var/cluster/logs/install/autoscinstall.d/cluster-name/node-name/node-name_aimanifest.xml \
      -m node-name_manifest

    In the -n option, sparc or i386 is the architecture of the cluster node.
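As a sketch of what such a customization might look like, a target element that directs the installation to a specific boot disk could take the following form (the disk name c1t0d0 is hypothetical; see the ai_manifest(5) man page for the supported criteria):

```xml
<!-- Hypothetical target element: install to the disk named c1t0d0 -->
<target>
  <disk whole_disk="true">
    <disk_name name="c1t0d0" name_type="ctd"/>
  </disk>
</target>
```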

  17. If you are using a cluster administrative console, display a console screen for each node in the cluster.
    • If pconsole software is installed and configured on your administrative console, use the pconsole utility to display the individual console screens.

      As the root role, use the following command to start the pconsole utility:

      adminconsole# pconsole host[:port] […]  &

      The pconsole utility also opens a master window from which you can send your input to all individual console windows at the same time.

    • If you do not use the pconsole utility, connect to the consoles of each node individually.
  18. Shut down and boot each node to start the AI installation.

    The Oracle Solaris software is installed with the default configuration.


    Note -  You cannot use this method if you want to customize the Oracle Solaris installation. If you choose the Oracle Solaris interactive installation, the Automated Installer is bypassed and Oracle Solaris Cluster software is not installed and configured. To customize Oracle Solaris during installation, instead follow instructions in How to Install Oracle Solaris Software, then install and configure the cluster by following instructions in How to Install Oracle Solaris Cluster Software (pkg).
    • SPARC:
      1. Shut down each node.
        phys-schost# shutdown -g0 -y -i0
      2. Boot the node with the following command:
        ok boot net:dhcp - install

        Note -  Surround the dash (-) in the command with a space on each side.
    • x86:
      1. Reboot the node.
        # reboot -p
      2. During PXE boot, press Control-N.

        The GRUB menu is displayed.

      3. Immediately select the Automated Install entry and press Return.

        Note -  If you do not select the Automated Install entry within 20 seconds, installation proceeds using the default interactive text installer method, which will not install and configure the Oracle Solaris Cluster software.

        Each node will be automatically rebooted a few times before the node completely joins the cluster. Ignore any error messages from SMF services on the console. On each node, the Automated Installer installs the software that is contained in the Unified Archives. When the installation is successfully completed, each node is fully installed as a new cluster node. Oracle Solaris Cluster installation output is logged in the /var/cluster/logs/install/scinstall.log.N file and the /var/cluster/logs/install/sc_ai_config.log file on each node.

  19. From one node, verify that all nodes have joined the cluster.
    phys-schost# clnode status

    Output resembles the following.

    === Cluster Nodes ===
    
    --- Node Status ---
    
    Node Name                                       Status
    ---------                                       ------
    phys-schost-1                                   Online
    phys-schost-2                                   Online
    phys-schost-3                                   Online

    For more information, see the clnode(8CL) man page.
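If you want to script this verification, the following sketch (the function name and sample rows are ours, not part of the product) scans clnode status-style output and succeeds only when every node row reports Online:

```shell
# Sketch: succeed only if every node row in `clnode status`-style
# output on stdin shows the status Online. Node rows are the
# two-column lines that are not the dashed header rule.
all_online() {
  awk '
    NF == 2 && $1 !~ /^-/ {
      n++
      if ($2 != "Online") bad++
    }
    END { exit (n == 0 || bad > 0) }
  '
}

# On a cluster node you would run:  clnode status | all_online
# Illustrative run against sample output:
printf '%s\n' \
  '=== Cluster Nodes ===' \
  'Node Name        Status' \
  '---------        ------' \
  'phys-schost-1    Online' \
  'phys-schost-2    Online' \
  | all_online && echo "all nodes online"
```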

  20. If you plan to enable RPC use of TCP wrappers, add all clprivnet0 IP addresses to the /etc/hosts.allow file on each cluster node.

    Without this addition to the /etc/hosts.allow file, TCP wrappers prevent internode communication over RPC for cluster administration utilities.

    1. On each node, display the IP addresses for all clprivnet0 devices on the node.
      # /usr/sbin/ipadm show-addr
      ADDROBJ           TYPE     STATE        ADDR
      clprivnet0/N      static   ok           ip-address/netmask-length
    2. On each cluster node, add to the /etc/hosts.allow file the IP addresses of all clprivnet0 devices in the cluster.
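The extraction can be scripted; the sketch below (function name and sample addresses are illustrative, not from the procedure) pulls the clprivnet0 addresses out of ipadm show-addr output and strips the /netmask-length suffix so the addresses are ready to add to /etc/hosts.allow:

```shell
# Sketch: print the address (without the /netmask-length suffix) of
# every clprivnet0 address object in `ipadm show-addr`-style input.
collect_clprivnet() {
  awk '$1 ~ /^clprivnet0\// { sub(/\/.*/, "", $4); print $4 }'
}

# On a cluster node you would run:
#   /usr/sbin/ipadm show-addr | collect_clprivnet
# Illustrative run against sample output (prints one address per line):
printf '%s\n' \
  'ADDROBJ           TYPE     STATE        ADDR' \
  'lo0/v4            static   ok           127.0.0.1/8' \
  'clprivnet0/1      static   ok           172.16.4.1/23' \
  | collect_clprivnet
```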
  21. If you use the LDAP naming service, you must manually configure it on the cluster nodes after they boot.

How to Replicate a Cluster from the Unified Archives

You can use the Unified Archives to replicate a cluster and its nodes. This procedure retains all the software packages in the archives. The new cluster has the exact same configuration as the source cluster, or you can customize the private network properties and host identities, such as zone hostnames and logical hostnames in cluster resources.

Only a Unified Archive created in the global zone is accepted. You can use either a clone archive or a recovery archive. The following list describes the differences between the two archive types:

  • When you install from a clone archive, only the global zone is installed. Any zones in the archive are not installed. When you install from a recovery archive, both the global zone and the zones contained in the archive are installed.

  • A clone archive does not contain the system configuration, including IPMP groups, VLANs, and VNICs.

  • A clone archive contains only the BE that is active when the archive is created, so only that BE is installed. A recovery archive can contain multiple BEs, but only the active BE is updated with the new cluster configuration.

To replicate a cluster from the Unified Archives created on an existing cluster, the hardware configuration of the new cluster must be the same as the source cluster. The number of nodes in the new cluster must be the same as in the source cluster, and the transport adapters must also be the same as in the source cluster.

  1. Assume the root role on a node of the global cluster and create an archive.
    phys-schost# archiveadm create -r archive-location

    By default, the create subcommand creates a clone archive; specify the -r option to create a recovery archive instead. When you create the archive, exclude the ZFS datasets that are on the shared storage. If you plan to migrate the data on the shared storage from the source cluster to the new cluster, use the traditional method.

    For more information on using the archiveadm command, see the archiveadm(8) man page.

  2. Set up your Automated Installer (AI) install server and DHCP server.

    Ensure that the AI install server meets the following requirements.

    • The install server is on the same subnet as the cluster nodes.

    • The install server is not itself a cluster node.

    • The install server runs a release of the Oracle Solaris OS that is supported by the Oracle Solaris Cluster software.

    • Each new cluster node is configured as a custom AI installation client that uses the custom AI directory that you set up for Oracle Solaris Cluster installation.

    Follow the appropriate instructions for your software platform and OS version to set up the AI install server and DHCP server. See Chapter 4, Setting Up the AI Server in Automatically Installing Oracle Solaris 11.4 Systems and Working With DHCP in Oracle Solaris 11.4.

  3. Log into the Automated Installer server and assume the root role.
  4. On the AI install server, install the Oracle Solaris Cluster AI support package.
    1. Ensure that the solaris and ha-cluster publishers are valid.
      installserver# pkg publisher
      PUBLISHER        TYPE     STATUS   URI
      solaris          origin   online   solaris-repository
      ha-cluster       origin   online   ha-cluster-repository
    2. Install the cluster AI support package.
      installserver# pkg install ha-cluster/system/install
  5. On the AI install server, start the scinstall utility.
    installserver# /usr/cluster/bin/scinstall

    The scinstall Main Menu is displayed.

  6. Type the option number and press Return.
    *** Main Menu ***
    
    Please select from one of the following (*) options:
    
    * 1) Install, restore, or replicate a cluster from this Automated Installer server
    * 2) Securely install, restore, or replicate a cluster from this Automated Installer server
    * 3) Print release information for this Automated Installer install server
    
    * ?) Help with menu options
    * q) Quit
    
    Option:  2

    Choose Option 1 if you want to replicate a cluster using a non-secure AI server installation. Choose Option 2 for a secure AI replication.

    The Custom Automated Installer Menu or Custom Secure Automated Installer Menu is displayed.

  7. Type the option number to Replicate a Cluster from Unified Archives and press Return.

    The Custom Automated Installer User screen is displayed.

  8. Type the password and press Return.

    Type the password a second time to confirm it.

    The Cluster Name screen is displayed.

  9. Type the name of the cluster you want to replicate and press Return.

    The Cluster Nodes screen is displayed.

  10. Type the names of the cluster nodes that you plan to replicate from the Unified Archives.

    After you type the node names, press Control-D and then Return. If the scinstall utility is unable to find the MAC address of the nodes, type in each address when prompted and press Return.

  11. Type the full path to the archive for each node.

    A Unified Archive file must be created for each node in the source cluster, and only one archive can be specified per node in the new cluster. This 1:1 mapping ensures that one archive is mapped to one node in the source cluster. Similarly, the archive of one source node must be mapped to only one node in the new cluster.

    Press Return to confirm the archive files.

  12. If you want to use a different private network address and netmask, specify them in the Network Address for the Cluster Transport menu.
  13. Provide the path to the text file that contains the mapping from old host identities in the source cluster to the new host identities in the new cluster.

    To avoid using the same host identities in the new cluster as the source cluster, you can create and provide a text file that contains a 1:1 mapping from the old host identities in the source cluster to the new host identities that you intend to use in the new cluster. The text file can contain multiple lines, where each line has two columns. The first column is the hostname used in the source cluster, and the second column is the corresponding new hostname in the new cluster. The hostnames are for the logical hostnames, shared address resources, and zone clusters. For example:

    old-cluster-zc-host1          new-cluster-zc-host1
    old-cluster-zc-host2          new-cluster-zc-host2
    old-cluster-lh-1              new-cluster-lh1
    old-cluster-lh-2              new-cluster-lh2
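Before feeding the file to scinstall, you might sanity-check that the mapping really is 1:1. The sketch below (function name and file contents are illustrative) flags malformed lines and duplicated old or new hostnames:

```shell
# Sketch: verify a host-identity mapping file is 1:1 -- every line has
# exactly two columns, and no old or new hostname appears twice.
validate_mapping() {
  awk '
    NF != 2        { print "bad line " NR;              bad = 1 }
    seen_old[$1]++ { print "duplicate old name: " $1;   bad = 1 }
    seen_new[$2]++ { print "duplicate new name: " $2;   bad = 1 }
    END { exit bad }
  '
}

# You would run:  validate_mapping < mapping-file
# Illustrative run against sample contents:
printf '%s\n' \
  'old-cluster-zc-host1 new-cluster-zc-host1' \
  'old-cluster-lh-1     new-cluster-lh1' \
  | validate_mapping && echo "mapping OK"
```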

    The Confirmation screen is displayed.

  14. For each node, confirm the options you chose so that the scinstall utility performs the necessary configuration to install the cluster nodes from this AI server.

    The utility also prints instructions to add the DHCP macros on the DHCP server, and adds (if you chose secure installation) or clears (if you chose non-secure installation) the security keys for SPARC nodes. Follow those instructions.

  15. (Optional) To customize the target device, update the AI manifest for each node.

    The AI manifest is located in the following directory:

    /var/cluster/logs/install/autoscinstall.d/ \
    cluster-name/node-name/node-name_aimanifest.xml
    1. To customize the target device, update the target element in the manifest file.

      scinstall assumes the existing boot disk in the manifest file to be the target device. To customize the target device, update the target element in the manifest file based on how you want to use the supported criteria to locate the target device for the installation. For example, you can specify the disk_name sub-element.

      For more information, see Configuring an AI Server in Automatically Installing Oracle Solaris 11.4 Systems and the ai_manifest(5) man page.

    2. Run the installadm command for each node.
      # installadm update-manifest -n cluster-name-{sparc|i386} \ 
      -f /var/cluster/logs/install/autoscinstall.d/cluster-name/node-name/node-name_aimanifest.xml \
      -m node-name_manifest

    In the -n option, sparc or i386 is the architecture of the cluster node.

  16. If you are using a cluster administrative console, display a console screen for each node in the cluster.
    • If pconsole software is installed and configured on your administrative console, use the pconsole utility to display the individual console screens.

      As the root role, use the following command to start the pconsole utility:

      adminconsole# pconsole host[:port] […]  &

      The pconsole utility also opens a master window from which you can send your input to all individual console windows at the same time.

    • If you do not use the pconsole utility, connect to the consoles of each node individually.
  17. Shut down and boot each node to start the AI installation.

    The Oracle Solaris software is installed with the default configuration.


    Note -  You cannot use this method if you want to customize the Oracle Solaris installation. If you choose the Oracle Solaris interactive installation, the Automated Installer is bypassed and Oracle Solaris Cluster software is not installed and configured. To customize Oracle Solaris during installation, instead follow instructions in How to Install Oracle Solaris Software, then install and configure the cluster by following instructions in How to Install Oracle Solaris Cluster Software (pkg).
    • SPARC:
      1. Shut down each node.
        phys-schost# shutdown -g0 -y -i0
      2. Boot the node with the following command:
        ok boot net:dhcp - install

        Note -  Surround the dash (-) in the command with a space on each side.
    • x86:
      1. Reboot the node.
        # reboot -p
      2. During PXE boot, press Control-N.

        The GRUB menu is displayed.

      3. Immediately select the Automated Install entry and press Return.

        Note -  If you do not select the Automated Install entry within 20 seconds, installation proceeds using the default interactive text installer method, which will not install and configure the Oracle Solaris Cluster software.

        Each node will be automatically rebooted a few times before the node completely joins the cluster. Ignore any error messages from SMF services on the console. Each node is installed with the software contained in the Unified Archives. When the installation is successfully completed, each node is booted as a member of the new cluster, with the same cluster configuration as the archive but with a different system identity and system configuration. Oracle Solaris Cluster installation output is logged in the /var/cluster/logs/install/scinstall.log.N file and the /var/cluster/logs/install/sc_ai_config.log file on each node.

  18. From one node, verify that all nodes have joined the cluster.
    phys-schost# clnode status

    Output resembles the following.

    === Cluster Nodes ===
    
    --- Node Status ---
    
    Node Name                                       Status
    ---------                                       ------
    phys-schost-1                                   Online
    phys-schost-2                                   Online
    phys-schost-3                                   Online

    For more information, see the clnode(8CL) man page.

  19. The cluster objects, including resource groups and zone clusters, are offline after the last reboot. Check the configuration and make necessary changes in the new environment before bringing them online.

    If the source cluster uses another system as a cluster object (for example, using a system as a quorum device of the quorum server type), you must manually adjust the configuration both in the new cluster and on the quorum server in order for the device to work. For a quorum server, you can add a new quorum server quorum device and remove the one brought from the archive.


    Note -  If your source cluster uses Oracle Solaris Cluster Disaster Recovery Framework, follow the procedures in Chapter 5, Administering Cluster Partnerships in Administering the Disaster Recovery Framework for Oracle Solaris Cluster 4.4 to rename a cluster and reconstruct the partnership.

    If you need to make any changes to the zone cluster configuration or the resource groups in the cluster, reboot the zone cluster to Offline Running mode:

    phys-schost# clzonecluster reboot -o zoneclustername

    If you do not plan to make changes to the zone cluster configuration, you can reboot the cluster to Online Running mode:

    phys-schost# clzonecluster reboot zoneclustername

    You can also check the log file, /var/cluster/logs/install/sc_ai_config.log, for more information.