How to Replicate a Cluster from the Unified Archives

You can use the Unified Archives to replicate a cluster and its nodes. This procedure retains all the software packages in the archives. The new cluster has the same configuration as the cluster in the archive, or you can customize the private network properties and host identities, such as zone host names and logical host names in cluster resources.

Only Unified Archives created in the global zone are accepted. You can use either a clone archive or a recovery archive. The following list describes the differences between the two types of archives:

  • When you install from a clone archive, only the global zone is installed. Any zones in the archive are not installed. When you install from a recovery archive, both the global zone and the zones contained in the archive are installed.

  • A clone archive does not contain the system configuration, such as IPMP groups, VLANs, and VNICs.

  • A clone archive contains only the BE that is active when the archive is created, so only that BE is installed. A recovery archive can contain multiple BEs, but only the active BE is updated with the new cluster configuration.
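
If you want to verify what an existing archive contains before you deploy it, you can inspect it with the archiveadm info subcommand. The archive path shown here is a placeholder.

  phys-schost# archiveadm info /net/storage/archives/phys-schost-1.uar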

To replicate a cluster from Unified Archives created on an existing cluster, the hardware configuration of the new cluster must match that of the source cluster: the new cluster must have the same number of nodes and the same transport adapters as the source cluster.

  1. Assume the root role on a node of the global cluster and create an archive.
    phys-schost# archiveadm create -r archive-location

    Use the create subcommand to create a clone archive, or add the -r option to create a recovery archive. When you create the archive, exclude the ZFS datasets that are on shared storage. If you plan to migrate the data on shared storage from the source cluster to the new cluster, use a traditional data backup and restore method.
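
    For example, the following sketch creates a recovery archive while excluding a shared-storage dataset. The dataset and archive paths are placeholders, and the -D (--exclude-dataset) option is an assumption to verify against the archiveadm(8) man page for your release.

    phys-schost# archiveadm create -r -D pool1/shared-data /net/storage/archives/phys-schost-1.uar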

    For more information on using the archiveadm command, see the archiveadm(8) man page.

  2. Set up your Automated Installer (AI) install server and DHCP server.

    Ensure that the AI install server meets the following requirements.

    • The install server is on the same subnet as the cluster nodes.

    • The install server is not itself a cluster node.

    • The install server runs a release of the Oracle Solaris OS that is supported by the Oracle Solaris Cluster software.

    • Each new cluster node is configured as a custom AI installation client that uses the custom AI directory that you set up for Oracle Solaris Cluster installation.

    Follow the appropriate instructions for your software platform and OS version to set up the AI install server and DHCP server. See Chapter 4, Setting Up the AI Server in Automatically Installing Oracle Solaris 11.4 Systems and Working With DHCP in Oracle Solaris 11.4.
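
    As a minimal sketch, and assuming the AI boot image ISO and paths shown here (which are placeholders), an AI install service might be created on the install server as follows. Verify the options against installadm(8) for your release.

    installserver# installadm create-service -n solaris11_4-sparc \
        -s /export/isos/sol-11_4-ai-sparc.iso -d /export/auto_install/solaris11_4-sparc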

  3. Log in to the Automated Installer server and assume the root role.
  4. On the AI install server, install the Oracle Solaris Cluster AI support package.
    1. Ensure that the solaris and ha-cluster publishers are valid.
      installserver# pkg publisher
      PUBLISHER        TYPE     STATUS   URI
      solaris          origin   online   solaris-repository
      ha-cluster       origin   online   ha-cluster-repository
    2. Install the cluster AI support package.
      installserver# pkg install ha-cluster/system/install
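
    If either publisher is missing or unreachable in the pkg publisher output, you can add an origin with pkg set-publisher. The repository URI shown here is a placeholder for your ha-cluster repository.

      installserver# pkg set-publisher -g http://pkg-server.example.com/ha-cluster ha-cluster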
  5. On the AI install server, start the scinstall utility.
    installserver# scinstall

    The scinstall Main Menu is displayed.

  6. Type the option number and press Return.
    *** Main Menu ***
    
    Please select from one of the following (*) options:
    
    * 1) Install, restore, or replicate a cluster from this Automated Installer server
    * 2) Securely install, restore, or replicate a cluster from this Automated Installer server
    * 3) Print release information for this Automated Installer install server
    
    * ?) Help with menu options
    * q) Quit
    
    Option:  2

    Choose Option 1 if you want to replicate a cluster using a non-secure AI server installation. Choose Option 2 for a secure AI replication.

    The Custom Automated Installer Menu or Custom Secure Automated Installer Menu is displayed.

  7. Type the option number for Replicate a Cluster from Unified Archives and press Return.

    The Custom Automated Installer User screen is displayed.

  8. Type the password and press Return.

    Type the password a second time to confirm it.

    The Cluster Name screen is displayed.

  9. Type the name of the cluster you want to replicate and press Return.

    The Cluster Nodes screen is displayed.

  10. Type the names of the cluster nodes that you plan to replicate from the Unified Archives.

    After you type the node names, press Control-D and then Return. If the scinstall utility is unable to find the MAC address of the nodes, type in each address when prompted and press Return.
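
    If a node is already running Oracle Solaris, one way to look up its MAC addresses is to run dladm show-phys -m on that node; on SPARC systems, the banner command at the OpenBoot ok prompt also reports the Ethernet address. These commands are suggestions for finding the addresses and are not part of the scinstall dialog.

    phys-schost# dladm show-phys -m
    ok banner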

  11. Type the full path to the archive for each node.

    A Unified Archive must be created for each node in the source cluster, and only one archive can be specified for each node in the new cluster. This establishes a 1:1 mapping: each archive corresponds to exactly one node in the source cluster, and the archive of each source node must be mapped to exactly one node in the new cluster.

    Press Return to confirm the archive files.

  12. If you want to use a different private network address and netmask, specify them in the Network Address for the Cluster Transport menu.
  13. Provide the path to the text file that contains the mapping from old host identities in the source cluster to the new host identities in the new cluster.

    To avoid using the same host identities in the new cluster as in the source cluster, create and provide a text file that contains a 1:1 mapping from the old host identities in the source cluster to the new host identities that you intend to use in the new cluster. Each line of the text file has two columns: the first column is the host name used in the source cluster, and the second column is the corresponding new host name in the new cluster. These host names apply to logical hostnames, shared address resources, and zone clusters. For example:

    old-cluster-zc-host1          new-cluster-zc-host1
    old-cluster-zc-host2          new-cluster-zc-host2
    old-cluster-lh-1              new-cluster-lh1
    old-cluster-lh-2              new-cluster-lh2

    The Confirmation screen is displayed.

  14. For each node, confirm the options you chose so that the scinstall utility performs the necessary configuration to install the cluster nodes from this AI server.

    The utility also prints instructions to add the DHCP macros on the DHCP server, and adds or clears the security keys for SPARC nodes (if you chose secure installation). Follow those instructions.

  15. (Optional) To customize the target device, update the AI manifest for each node.

    The AI manifest is located in the following directory:

    /var/cluster/logs/install/autoscinstall.d/ \
    cluster-name/node-name/node-name_aimanifest.xml
    1. To customize the target device, update the target element in the manifest file.

      The scinstall utility assumes the existing boot disk in the manifest file to be the target device. To customize the target device, update the target element based on the supported criteria for locating the installation target. For example, you can specify the disk_name sub-element, as sketched below.

      For more information, see Configuring an AI Server in Automatically Installing Oracle Solaris 11.4 Systems and the ai_manifest(5) man page.
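
      A minimal sketch of a target element that selects a specific disk by name follows. The disk name c1t0d0 is a placeholder; the criteria that are actually supported are described in ai_manifest(5).

      <target>
        <disk whole_disk="true">
          <disk_name name="c1t0d0" name_type="ctd"/>
        </disk>
      </target>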

    2. Run the installadm command for each node.
      # installadm update-manifest -n cluster-name-{sparc|i386} \ 
      -f /var/cluster/logs/install/autoscinstall.d/cluster-name/node-name/node-name_aimanifest.xml \
      -m node-name_manifest

    In the service name, sparc or i386 is the architecture of the cluster node.
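
    To confirm that the updated manifest is associated with the install service, you can list the manifests for the service. The service name follows the same cluster-name-{sparc|i386} pattern used above.

    # installadm list -n cluster-name-{sparc|i386} -m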

  16. If you are using a cluster administrative console, display a console screen for each node in the cluster.
    • If pconsole software is installed and configured on your administrative console, use the pconsole utility to display the individual console screens.

      As the root role, use the following command to start the pconsole utility:

      adminconsole# pconsole host[:port] [...] &

      The pconsole utility also opens a master window from which you can send your input to all individual console windows at the same time.
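
      For example, to open console windows for the three example nodes shown later in this procedure (substitute the console-access hosts and ports appropriate to your setup):

      adminconsole# pconsole phys-schost-1 phys-schost-2 phys-schost-3 &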

    • If you do not use the pconsole utility, connect to the consoles of each node individually.

  17. Shut down and boot each node to start the AI installation.

    The Oracle Solaris software is installed with the default configuration.

    Note:

    You cannot use this method if you want to customize the Oracle Solaris installation. If you choose the Oracle Solaris interactive installation, the Automated Installer is bypassed and Oracle Solaris Cluster software is not installed and configured. To customize Oracle Solaris during installation, instead follow instructions in How to Install Oracle Solaris Software, then install and configure the cluster by following instructions in How to Install Oracle Solaris Cluster Software (pkg).
    • SPARC:

      1. Shut down each node.

        phys-schost# shutdown -g0 -y -i0
      2. Boot the node with the following command.

        ok boot net:dhcp - install

        Note:

        Surround the dash (-) in the command with a space on each side.
    • x86:

      1. Reboot the node.

        # reboot -p
      2. During PXE boot, press Control-N.

        The GRUB menu is displayed.

      3. Immediately select the Automated Install entry and press Return.

        Note:

        If you do not select the Automated Install entry within 20 seconds, installation proceeds using the default interactive text installer method, which will not install and configure the Oracle Solaris Cluster software.

    Each node is automatically rebooted a few times before it completely joins the cluster. Ignore any error messages from SMF services on the console. Each node is installed with the software contained in the Unified Archives. When the installation completes successfully, each node boots as a member of the new cluster, with the same cluster configuration as the archive but with a different system identity and system configuration. Oracle Solaris Cluster installation output is logged in the /var/cluster/logs/install/scinstall.log.N file and the /var/cluster/logs/install/sc_ai_config.log file on each node.

  18. From one node, verify that all nodes have joined the cluster.
    phys-schost# clnode status

    Output resembles the following.

    === Cluster Nodes ===
    
    --- Node Status ---
    
    Node Name                                       Status
    ---------                                       ------
    phys-schost-1                                   Online
    phys-schost-2                                   Online
    phys-schost-3                                   Online

    For more information, see the clnode(8CL) man page.

  19. The cluster objects, including resource groups and zone clusters, are offline after the last reboot. Check the configuration and make necessary changes in the new environment before bringing them online.
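
    For example, after you verify the configuration, you might check the resource-group states and bring a group online with commands like the following. The resource group name rg1 is a placeholder.

    phys-schost# clresourcegroup status
    phys-schost# clresourcegroup online -M rg1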

    If the source cluster uses another system as part of a cluster object (for example, a system that serves as a quorum device of the quorum server type), you must manually adjust the configuration both in the new cluster and on the quorum server host for the device to work. For a quorum server, you can add a new quorum server quorum device and remove the one brought over from the archive.
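
    A minimal sketch of replacing a quorum server quorum device is shown below. The device names, host, and port are placeholders, and the type and property names (quorum_server, qshost, port) are assumptions to verify against the clquorum(8CL) man page before use.

    phys-schost# clquorum add -t quorum_server -p qshost=qs-host.example.com -p port=9000 new-qs
    phys-schost# clquorum remove old-qs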

    Note:

    If your source cluster uses Oracle Solaris Cluster Disaster Recovery Framework, follow the procedures in Chapter 5, Administering Cluster Partnerships in Administering the Disaster Recovery Framework for Oracle Solaris Cluster 4.4 to rename a cluster and reconstruct the partnership.

    If you need to make any changes to the zone cluster configuration or the resource groups in the cluster, reboot the zone cluster to Offline Running mode:

    phys-schost# clzonecluster reboot -o zoneclustername

    If you do not plan to make changes to the zone cluster configuration, you can reboot the cluster to Online Running mode:

    phys-schost# clzonecluster reboot zoneclustername

    You can also check the log file, /var/cluster/logs/install/sc_ai_config.log, for more information.