Administering an Oracle® Solaris Cluster 4.4 Configuration

Updated: November 2019

Restoring Cluster Nodes

You can use the Unified Archives to restore a cluster node to the exact state that was captured in the archive. Before you restore a node, you must first create a recovery archive on the cluster nodes. Only a recovery archive can be used to restore a cluster node; a clone archive cannot. See Step 1 of the following procedure for instructions on creating the recovery archive.

This procedure prompts you for the cluster name, node names and their MAC addresses, and the path to the Unified Archives. For each archive that you specify, the scinstall utility verifies that the archive's source node name is the same as the node you are restoring. For instructions on restoring the nodes in a cluster from a Unified Archive, see How to Restore a Node from the Unified Archive.

How to Restore a Node from the Unified Archive

This procedure uses the interactive form of the scinstall utility on the Automated Installer server. You must have already set up the AI server and installed the ha-cluster/system/install packages from the Oracle Solaris Cluster repositories. The node name of the archive must be the same as the node that you are restoring.

    Follow these guidelines to use the interactive scinstall utility in this procedure:

  • Interactive scinstall enables you to type ahead. Therefore, do not press the Return key more than once if the next menu screen does not appear immediately.

  • Unless otherwise noted, you can press Control-D to return to either the start of a series of related questions or to the Main Menu.

  • Default answers or answers to previous sessions are displayed in brackets ([ ]) at the end of a question. Press Return to enter the response that is in brackets without typing it.

  1. Assume the root role on a node of the global cluster and create a recovery archive.
    phys-schost# archiveadm create -r archive-location

    When you create an archive, exclude the ZFS datasets that are on the shared storage. If you plan to restore the data on the shared storage, use the traditional method.

    For more information on using the archiveadm command, see the archiveadm(8) man page.
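
    For example, a recovery archive might be written to an NFS location that the AI server can reach. The archive path and dataset name below are illustrative, and the -D (--exclude-dataset) option used to skip a shared-storage dataset is an assumption based on the archiveadm(8) man page, so verify the option name in your release:

    # Illustrative path and dataset; -D (--exclude-dataset) assumed from archiveadm(8)
    phys-schost# archiveadm create -r -D pool1/shared-data \
    /net/installserver/export/archives/phys-schost-1-recovery.uar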

  2. Log in to the Automated Installer server and assume the root role.
  3. Start the scinstall utility.
    phys-schost# scinstall
  4. Type the option number to restore a cluster.
    *** Main Menu ***
    
    Please select from one of the following (*) options:
    
    * 1) Install, restore, or replicate a cluster from this Automated Installer server
    * 2) Securely install, restore, or replicate a cluster from this Automated Installer server
    * 3) Print release information for this Automated Installer install server
    
    * ?) Help with menu options
    * q) Quit
    
    Option:  2

    Choose Option 1 to restore a cluster node by using a non-secure AI server installation. Choose Option 2 to restore a cluster node by using the secure AI server installation.

    The Custom Automated Installer Menu or Custom Secure Automated Installer Menu is displayed.

  5. Type the option number to Restore Cluster Nodes from Unified Archives.

    The Cluster Name screen is displayed.

  6. Type the cluster name that contains the nodes you want to restore.

    The Cluster Nodes screen is displayed.

  7. Type the names of the cluster nodes that you want to restore from the Unified Archives.

    Type one node name per line. When you are done, press Control-D and confirm the list by typing yes and pressing Return. If you want to restore all the nodes in the cluster, specify all the nodes.

    If the scinstall utility is unable to find the MAC address of the nodes, type in each address when prompted.

  8. Type the full path to the recovery archive.

    The archive used to restore a node must be a recovery archive, and the archive file that you use to restore a particular node must have been created on that same node. Repeat this step for each cluster node that you want to restore.

  9. For each node, confirm the options you chose so that the scinstall utility performs the necessary configuration to install the cluster nodes from this AI server.

    The utility also prints instructions to add the DHCP macros on the DHCP server, and adds or clears the security keys for SPARC nodes (if you chose secure installation). Follow those instructions.

  10. (Optional) To customize the target device, update the AI manifest for each node.

    The AI manifest is located in the following directory:

    /var/cluster/logs/install/autoscinstall.d/ \
    cluster-name/node-name/node-name_aimanifest.xml
    1. To customize the target device, update the target element in the manifest file.

      Update the target element in the manifest file based on how you want to use the supported criteria to locate the target device for the installation. For example, you can specify the disk_name sub-element.


      Note -  scinstall assumes the existing boot disk in the manifest file to be the target device. To customize the target device, update the target element in the manifest file. For more information, see the ai_manifest(5) man page.
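
      For example, a target element that selects a specific boot disk by its ctd name might look similar to the following sketch. The disk name c1t0d0 is illustrative; see the ai_manifest(5) man page for the full set of supported criteria.

      <!-- Illustrative target element; c1t0d0 is a placeholder disk name -->
      <target>
        <disk whole_disk="true">
          <disk_name name="c1t0d0" name_type="ctd"/>
        </disk>
      </target>
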
    2. Run the installadm command for each node.
      # installadm update-manifest -n cluster-name-{sparc|i386} \
      -f /var/cluster/logs/install/autoscinstall.d/cluster-name/node-name/node-name_aimanifest.xml \
      -m node-name_manifest

    Note that sparc or i386 in the service name is the architecture of the cluster node; specify the value that matches your nodes.
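
    To confirm that the updated manifest is now associated with the install service, you can optionally list the manifests for that service. This check is an illustrative extra step, not part of the scinstall output:

    # installadm list -n cluster-name-{sparc|i386} -m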

  11. If you are using a cluster administrative console, display a console screen for each node in the cluster.
    • If pconsole software is installed and configured on your administrative console, use the pconsole utility to display the individual console screens.

      As the root role, use the following command to start the pconsole utility:

      adminconsole# pconsole host[:port] […]  &

      The pconsole utility also opens a master window from which you can send your input to all individual console windows at the same time. A sample invocation is shown after this list.

    • If you do not use the pconsole utility, connect to the consoles of each node individually.
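
    For example, to open console windows for three cluster nodes plus the master input window, you might start pconsole as follows. The node names are illustrative:

    adminconsole# pconsole phys-schost-1 phys-schost-2 phys-schost-3 &
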
  12. Shut down and boot each node to start the AI installation.

    The Oracle Solaris software is installed with the default configuration.


    Note -  You cannot use this method if you want to customize the Oracle Solaris installation. If you choose the Oracle Solaris interactive installation, the Automated Installer is bypassed and Oracle Solaris Cluster software is not installed and configured.

    To customize Oracle Solaris during installation, instead follow instructions in How to Install Oracle Solaris Software in Installing and Configuring an Oracle Solaris Cluster 4.4 Environment, then install and configure the cluster by following instructions in How to Install Oracle Solaris Cluster Software (pkg) in Installing and Configuring an Oracle Solaris Cluster 4.4 Environment.


    • SPARC:
      1. Shut down each node.
        phys-schost# cluster shutdown -g 0 -y
      2. Boot the node with the following command:
        ok boot net:dhcp - install

        Note -  Surround the dash (-) in the command with a space on each side.
    • x86:
      1. Reboot the node.
        # reboot -p
      2. During PXE boot, press Control-N.

        The GRUB menu is displayed.

      3. Immediately select the Automated Install entry.

        Note -  If you do not select the Automated Install entry within 20 seconds, installation proceeds by using the default interactive text installer, which will not install and configure the Oracle Solaris Cluster software.

    Each node will be automatically rebooted to join the cluster after the installation is finished. The node is restored to the same state as when the archive was created. Oracle Solaris Cluster installation output is logged in the /var/cluster/logs/install/sc_ai_config.log file on each node.

  13. From one node, verify that all nodes have joined the cluster.
    phys-schost# clnode status

    Output resembles the following.

    === Cluster Nodes ===
    
    --- Node Status ---
    
    Node Name                                       Status
    ---------                                       ------
    phys-schost-1                                   Online
    phys-schost-2                                   Online
    phys-schost-3                                   Online

    For more information, see the clnode(8CL) man page.