This procedure uses the interactive form of the scinstall utility on the Automated Installer server. You must have already set up the AI server and installed the ha-cluster/system/install packages from the Oracle Solaris Cluster repositories. The node name of the archive must be the same as the node that you are restoring.
Follow these guidelines to use the interactive scinstall utility in this procedure:
Interactive scinstall enables you to type ahead. Therefore, do not press the Return key more than once if the next menu screen does not appear immediately.
Unless otherwise noted, you can press Control-D to return to either the start of a series of related questions or to the Main Menu.
Default answers or answers to previous sessions are displayed in brackets ([ ]) at the end of a question. Press Return to enter the response that is in brackets without typing it.
phys-schost# archiveadm create -r archive-location
When you create an archive, exclude the ZFS datasets that are on the shared storage. If you plan to restore the data on the shared storage, use the traditional method.
For more information on using the archiveadm command, see the archiveadm(1M) man page.
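For example, creating a recovery archive for one node might resemble the following sketch. The archive path and the dataset name pool1/shared-data are placeholders, and the --exclude-dataset option is assumed from the archiveadm(1M) man page; verify the exact option name on your system.

```shell
# Create a recovery archive (-r) of this node, excluding a dataset that
# resides on shared storage (names shown are illustrative only).
phys-schost# archiveadm create -r --exclude-dataset pool1/shared-data \
    /net/ai-server/export/archives/phys-schost-1.uar
```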
phys-schost# scinstall
 *** Main Menu ***

    Please select from one of the following (*) options:

      * 1) Install, restore, or replicate a cluster from this Automated Installer server
      * 2) Securely install, restore, or replicate a cluster from this Automated Installer server
      * 3) Print release information for this Automated Installer install server

      * ?) Help with menu options
      * q) Quit

    Option:  2
Choose option 1 to restore a cluster node by using a nonsecure AI server installation, or option 2 to restore a cluster node by using the secure AI server installation.
The Custom Automated Installer Menu or Custom Secure Automated Installer Menu is displayed.
The Cluster Name screen is displayed.
The Cluster Nodes screen is displayed.
Type one node name per line and press Return. When you are done, press Control-D, then confirm the list by typing yes and pressing Return. To restore all the nodes in the cluster, specify every node name.
If the scinstall utility is unable to find the MAC address of the nodes, type in each address when prompted and press Return.
The archive used to restore a node must be a recovery archive, and it must have been created on the same node that you are restoring. Repeat this step for each cluster node that you want to restore.
The utility also prints instructions to add the DHCP macros on the DHCP server, and adds or clears the security keys for SPARC nodes (if you chose secure installation). Follow those instructions.
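For a nonsecure installation, associating a node's MAC address with the AI install service typically resembles the following sketch. The MAC address and service name are placeholders; see the installadm(1M) man page for details.

```shell
# Register each cluster node as a client of the AI install service
# (values shown are placeholders).
aiserver# installadm create-client -e 00:14:4f:xx:xx:xx \
    -n cluster-name-sparc
```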
The AI manifest is located in the following directory:
/var/cluster/logs/install/autoscinstall.d/ \
cluster-name/node-name/node-name_aimanifest.xml
Update the target element in the manifest file based on how you want to use the supported criteria to locate the target device for the installation. For example, you can specify the disk_name sub-element.
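For example, a target element that selects a specific boot disk by its ctd name might look like the following fragment. The disk name c1t0d0 is a placeholder; see the ai_manifest(4) man page for the full set of supported criteria.

```xml
<target>
  <disk whole_disk="true">
    <!-- c1t0d0 is a placeholder; substitute the node's actual boot disk -->
    <disk_name name="c1t0d0" name_type="ctd"/>
  </disk>
</target>
```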
# installadm update-manifest -n cluster-name-{sparc|i386} \
    -f /var/cluster/logs/install/autoscinstall.d/cluster-name/node-name/node-name_aimanifest.xml \
    -m node-name_manifest
Note that sparc or i386 is the architecture of the cluster node.
As the root role, use the following command to start the pconsole utility:
adminconsole# pconsole host[:port] […] &
The pconsole utility also opens a master window from which you can send your input to all individual console windows at the same time.
The Oracle Solaris software is installed with the default configuration.
phys-schost# cluster shutdown -g 0 -y
ok boot net:dhcp - install
# reboot -p
The GRUB menu is displayed.
After the installation is finished, each node automatically reboots and joins the cluster. The node is restored to the same state that it was in when the archive was created. Oracle Solaris Cluster installation output is logged in the /var/cluster/logs/install/sc_ai_config.log file on each node.
phys-schost# clnode status
Output resembles the following:

=== Cluster Nodes ===

--- Node Status ---

Node Name         Status
---------         ------
phys-schost-1     Online
phys-schost-2     Online
phys-schost-3     Online
For more information, see the clnode(1CL) man page.