The interactive scinstall utility runs in two modes of installation, Typical or Custom. For the Typical installation of Oracle Solaris Cluster software, scinstall automatically specifies the following configuration defaults.
Private-network address – 172.16.0.0
Private-network netmask – 255.255.240.0
Cluster-transport adapters – Exactly two adapters
Cluster-transport switches – switch1 and switch2
Global fencing – Enabled
You can install and configure a new cluster by installing the Oracle Solaris and Oracle Solaris Cluster software packages from IPS repositories, or from an Oracle Solaris Unified Archive that is created on an existing cluster.
Besides forming a new cluster, you can also use the AI and the Unified Archives to replicate a cluster from the archive, and restore existing cluster nodes. You can also use the clzonecluster command to install a new zone cluster from the Unified Archives. For more information, see How to Install and Configure Oracle Solaris and Oracle Solaris Cluster Software (Unified Archives), How to Replicate a Cluster from the Unified Archives, How to Restore a Node from the Unified Archive in Oracle Solaris Cluster 4.3 System Administration Guide, and How to Install a Zone Cluster from the Unified Archive in Oracle Solaris Cluster 4.3 System Administration Guide.
You can also use this procedure to add new nodes to an existing cluster. These nodes can be physical machines, (SPARC only) Oracle VM Server for SPARC logical domains, or a combination of these node types.
AI uses a minimal boot image to boot the client. When you install the Oracle Solaris and Oracle Solaris Cluster software packages from IPS repositories, you must provide a source for the installation to obtain the boot image. The boot image content is published in the install-image/solaris-auto-install package. The downloaded boot image ISO file also contains the boot image. You can either specify the repository from which the package can be retrieved, or you can specify the location of the downloaded boot image ISO file.
To obtain the boot image from the repository, you will need to specify the publisher, the repository URL, and the architecture of the cluster nodes. If the repository uses HTTPS, you will also need to specify the SSL certificate and the private key, and provide the location of the files. You can request and download the key and certificate from the http://pkg-register.oracle.com site.
To use the downloaded boot image ISO file, you must save it in a directory that can be accessed from the AI install server. The AI boot image must be the same version as the Oracle Solaris software release that you plan to install on the cluster nodes. Also, the boot image file must have the same architecture as that of the cluster nodes.
If you want to establish a new cluster from Oracle Unified Archives, either to install and configure a new cluster or to replicate a cluster from the archives, you do not need to provide the minimal boot image. The Unified Archive contains an image you can use. You do need to provide the path to access the Unified Archive.
When you install and configure a new cluster from either IPS repositories or Unified Archives, complete one of the following cluster configuration worksheets to plan your Typical mode or Custom mode installation:
Typical Mode Worksheet – If you will use Typical mode and accept all defaults, complete the following worksheet.
[Worksheet: Typical mode cluster configuration]
Custom Mode Worksheet – If you will use Custom mode and customize the configuration data, complete the following worksheet.
[Worksheet: Custom mode cluster configuration]
Follow these guidelines to use the interactive scinstall utility in this procedure:
Interactive scinstall enables you to type ahead. Therefore, do not press the Return key more than once if the next menu screen does not appear immediately.
Unless otherwise noted, you can press Control-D to return to either the start of a series of related questions or to the Main Menu.
Default answers or answers to previous sessions are displayed in brackets ([ ]) at the end of a question. Press Return to enter the response that is in brackets without typing it.
Perform the following tasks:
Ensure that the hardware setup is complete and connections are verified before you install Solaris software. See the Oracle Solaris Cluster Hardware Administration Manual and your server and storage device documentation for details on how to set up the hardware.
Ensure that an Automated Installer install server and a DHCP server are configured. See Part 3, Installing Using an Install Server, in Installing Oracle Solaris 11.3 Systems.
Determine the Ethernet address of the cluster node and the length of the subnet mask of the subnet that the address belongs to.
Determine the MAC address of each cluster node.
Ensure that your cluster configuration planning is complete. See How to Prepare for Cluster Software Installation for requirements and guidelines.
Have available the root user password for the cluster nodes.
SPARC: If you are configuring Oracle VM Server for SPARC logical domains as cluster nodes, ensure that the Oracle VM Server for SPARC software is installed on each physical machine and that the domains meet Oracle Solaris Cluster requirements. See How to Install Oracle VM Server for SPARC Software and Create Domains.
If you plan to install from Unified Archives that are created on an existing cluster, have the path to the archive file and ensure that it can be accessed from the AI server.
If you plan to install from IPS repositories, determine which Oracle Solaris Cluster software packages you want to install.
The following table lists the group packages for the Oracle Solaris Cluster 4.3 software that you can choose during an AI installation and the principal features that each group package contains. You must install at least the ha-cluster-framework-minimal group package.
[Table: Oracle Solaris Cluster 4.3 group packages and the principal features that each group package contains]
Have available your completed Typical Mode or Custom Mode installation worksheet. See Establishing a New Oracle Solaris Cluster With the Automated Installer.
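The planning tasks above ask for the length of the subnet mask. If you have only the dotted-decimal netmask on hand, the prefix length can be derived with a small script. This is a generic sketch, not part of the Oracle tooling; the netmask shown is the Typical-mode private-network default, not necessarily your public subnet's mask.

```shell
# Convert a dotted-decimal netmask to its prefix length.
# Example value only; substitute the netmask of your subnet.
netmask=255.255.240.0
prefix=0
for octet in $(printf '%s' "$netmask" | tr '.' ' '); do
  # Count the 1-bits in each octet.
  while [ "$octet" -gt 0 ]; do
    prefix=$((prefix + octet % 2))
    octet=$((octet / 2))
  done
done
echo "prefix length: $prefix"   # 255.255.240.0 -> 20
```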
You can set the AI server to install both Oracle Solaris OS and the Oracle Solaris Cluster framework and data service software from IPS repositories or the Unified Archives on all global-cluster nodes and establish the cluster. This procedure describes how to set up and use the scinstall(1M) custom Automated Installer installation method to install and configure the cluster from IPS repositories.
Ensure that the AI install server meets the following requirements.
The install server is on the same subnet as the cluster nodes.
The install server is not itself a cluster node.
The install server runs a release of the Oracle Solaris OS that is supported by the Oracle Solaris Cluster software.
Each new cluster node is configured as a custom AI installation client that uses the custom AI directory that you set up for Oracle Solaris Cluster installation.
Follow the appropriate instructions for your software platform and OS version to set up the AI install server and DHCP server. See Chapter 8, Setting Up an AI Server in Installing Oracle Solaris 11.3 Systems and Working With DHCP in Oracle Solaris 11.3.
For more information, see How to Add a Node to an Existing Cluster or Zone Cluster in Oracle Solaris Cluster 4.3 System Administration Guide.
installserver# pkg publisher
PUBLISHER                   TYPE     STATUS   URI
solaris                     origin   online   solaris-repository
ha-cluster                  origin   online   ha-cluster-repository
installserver# pkg install ha-cluster/system/install
installserver# /usr/cluster/bin/scinstall
The scinstall Main Menu is displayed.
 *** Main Menu ***

    Please select from one of the following (*) options:

      * 1) Install, restore, or replicate a cluster from this Automated Installer install server
      * 2) Securely install, restore, or replicate a cluster from this Automated Installer install server
      * 3) Print release information for this Automated Installer install server

      * ?) Help with menu options
      * q) Quit

    Option:
The utility also prints instructions to add the DHCP macros on the DHCP server, and adds (if you chose secure installation) or clears (if you chose non-secure installation) the security keys for SPARC nodes. Follow those instructions.
The AI manifest is located in the following directory:
/var/cluster/logs/install/autoscinstall.d/cluster-name/node-name/node-name_aimanifest.xml
Add the publisher name and the repository information. For example:
<publisher name="aie">
   <origin name="http://aie.us.oracle.com:12345"/>
</publisher>
Add the package names that you want to install, in the software_data item of the AI manifest.
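For example, to install the minimum required group package, the software_data element might look like the following. This is an illustrative sketch; the FMRI shown assumes the standard ha-cluster group-package namespace, and the exact package list depends on your configuration.

```xml
<software_data action="install">
  <!-- Minimum required group package; add further packages as needed. -->
  <name>pkg:/ha-cluster/group-package/ha-cluster-framework-minimal</name>
</software_data>
```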
scinstall assumes the existing boot disk in the manifest file to be the target device. To customize the target device, update the target element in the manifest file based on how you want to use the supported criteria to locate the target device for the installation. For example, you can specify the disk_name sub-element.
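For example, a target element that selects a specific boot disk by its ctd name might look like the following sketch. The disk name c1t0d0 is a placeholder for illustration; substitute the device that applies to your nodes.

```xml
<target>
  <disk whole_disk="true">
    <!-- c1t0d0 is a hypothetical example disk name. -->
    <disk_name name="c1t0d0" name_type="ctd"/>
  </disk>
</target>
```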
For more information, see Part 3, Installing Using an Install Server, in Installing Oracle Solaris 11.3 Systems and the ai_manifest(4) man page.
# installadm update-manifest -n cluster-name-{sparc|i386} \
    -f /var/cluster/logs/install/autoscinstall.d/cluster-name/node-name/node-name_aimanifest.xml \
    -m node-name_manifest
Note that sparc or i386 in the command corresponds to the architecture of the cluster nodes.
As the root role, use the following command to start the pconsole utility:
adminconsole# pconsole host[:port] […] &
The pconsole utility also opens a master window from which you can send your input to all individual console windows at the same time.
The Oracle Solaris software is installed with the default configuration.
ok boot net:dhcp - install
The GRUB menu is displayed.
On each node, a new boot environment (BE) is created and Automated Installer installs the Oracle Solaris OS and Oracle Solaris Cluster software. When the installation is successfully completed, each node is fully installed as a new cluster node. Oracle Solaris Cluster installation output is logged in the /var/cluster/logs/install/scinstall.log.N file and the /var/cluster/logs/install/sc_ai_config.log file on each node.
See How to Configure an IPv4 Interface in Configuring and Managing Network Components in Oracle Solaris 11.3 for more information about modifying the automounter map.
The setting of this value enables you to reboot the node if you are unable to access a login prompt.
grub edit> kernel /platform/i86pc/kernel/amd64/unix -B $ZFS-BOOTFS -k
For more information, see How to Boot a System With the Kernel Debugger (kmdb) Enabled in Booting and Shutting Down Oracle Solaris 11.3 Systems.
The following tasks require a reboot:
Adding a new node to an existing cluster
Installing software updates that require a node or cluster reboot
Making configuration changes that require a reboot to become active
phys-schost-1# cluster shutdown -y -g0 cluster-name
Cluster nodes remain in installation mode until the first time that you run the clsetup command. You run this command during the procedure How to Configure Quorum Devices.
ok boot
When the GRUB menu is displayed, select the appropriate Oracle Solaris entry and press Enter.
For more information about GRUB based booting, see Booting a System in Booting and Shutting Down Oracle Solaris 11.3 Systems.
The cluster is established when all nodes have successfully booted into the cluster. Oracle Solaris Cluster installation output is logged in a /var/cluster/logs/install/scinstall.log.N file.
phys-schost# clnode status
Output resembles the following.
=== Cluster Nodes ===

--- Node Status ---

Node Name                                       Status
---------                                       ------
phys-schost-1                                   Online
phys-schost-2                                   Online
phys-schost-3                                   Online
For more information, see the clnode(1CL) man page.
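If you script this verification, you can check the status column rather than reading it by eye. The following generic sketch parses captured clnode status style output; the sample data is inlined here for illustration, and on a real cluster you would capture the output of the command itself.

```shell
# Report any node in clnode status style output that is not Online.
# Sample data stands in for real command output; this is a generic sketch.
status_output='phys-schost-1 Online
phys-schost-2 Online
phys-schost-3 Online'

offline=$(printf '%s\n' "$status_output" | awk '$2 != "Online" {print $1}')
if [ -z "$offline" ]; then
  result="all nodes online"
else
  result="not online: $offline"
fi
echo "$result"
```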
Without this addition to the /etc/hosts.allow file, TCP wrappers prevent internode communication over RPC for cluster administration utilities.
# /usr/sbin/ipadm show-addr
ADDROBJ           TYPE     STATE        ADDR
clprivnet0/N      static   ok           ip-address/netmask-length
…
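As an illustrative sketch only, an /etc/hosts.allow entry that permits RPC from the default cluster private network might look like the following. Adjust the network and netmask to the actual private-network addresses reported by ipadm on your cluster; the values shown assume the Typical-mode defaults.

```
rpcbind : 172.16.0.0/255.255.240.0
```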
phys-schost# clnode set -p reboot_on_path_failure=enabled +
Specifies the property to set
Enables automatic node reboot if failure of all monitored shared-disk paths occurs.
phys-schost# clnode show

=== Cluster Nodes ===

Node Name:                                      node
  …
  reboot_on_path_failure:                       enabled
  …
Next Steps
1. Perform all of the following procedures that are appropriate for your cluster configuration.
2. Configure quorum, if not already configured, and perform post installation tasks.
If you installed a multiple-node cluster and accepted automatic quorum configuration, post installation setup is complete. Go to How to Verify the Quorum Configuration and Installation Mode.
If you installed a multiple-node cluster and declined automatic quorum configuration, perform post installation setup. Go to How to Configure Quorum Devices.
If you added a node to an existing two-node cluster, go to How to Update Quorum Devices After Adding a Node to a Global Cluster.
If you added a new node to an existing cluster with at least three nodes that uses a quorum device, go to How to Update Quorum Devices After Adding a Node to a Global Cluster.
If you added a new node to an existing cluster with at least three nodes that does not use a quorum device, verify the state of the cluster. Go to How to Verify the Quorum Configuration and Installation Mode.
If you installed a single-node cluster, cluster establishment is complete. Go to Creating Cluster File Systems to install volume management software and configure the cluster.
Troubleshooting
Disabled scinstall option – If the AI option of the scinstall command is not preceded by an asterisk, the option is disabled. This condition indicates that AI setup is not complete or that the setup has an error. To correct this condition, first quit the scinstall utility. Repeat Step 1 through Step 9 to correct the AI setup, then restart the scinstall utility.
You will use the AI server to install a cluster from the Unified Archives and configure its nodes. This procedure retains all the software packages that are contained in the Unified Archives, but you must provide the new cluster configuration that you designed in the worksheet. Before you perform this procedure, you must first create the archive. See Step 1 below for instructions on creating the recovery archive.
The AI server sets up installation of the nodes from the Unified Archives and creates the cluster with the new configuration. Only a Unified Archive created in the global zone is accepted. You can use either a clone archive or a recovery archive. The following list describes the differences between the two archives:
When you install from a clone archive, only the global zone is installed. Any zones in the archive are not installed. When you install from a recovery archive, both the global zone and the zones contained in the archive are installed.
A clone archive does not contain the system configuration, including IPMP groups, VLANs, and VNICs.
A clone archive contains only the BE that is active when the archive is created; therefore, only that BE is installed. A recovery archive can contain multiple BEs, but only the active BE is updated with the new cluster configuration.
This procedure prompts you for the cluster name, node names and their MAC addresses, the path to the Unified Archives, and the cluster configuration you designed in the worksheet.
phys-schost# archiveadm create -r archive-location
Use the create command to create a clone archive, or add the -r option to create a recovery archive. For more information about the archiveadm command, see the archiveadm(1M) man page.
Ensure that the AI install server meets the following requirements.
The install server is on the same subnet as the cluster nodes.
The install server is not itself a cluster node.
The install server runs a release of the Oracle Solaris OS that is supported by the Oracle Solaris Cluster software.
Each new cluster node is configured as a custom AI installation client that uses the custom AI directory that you set up for Oracle Solaris Cluster installation.
Follow the appropriate instructions for your software platform and OS version to set up the AI install server and DHCP server. See Chapter 8, Setting Up an AI Server in Installing Oracle Solaris 11.3 Systems and Working With DHCP in Oracle Solaris 11.3.
installserver# pkg publisher
PUBLISHER                   TYPE     STATUS   URI
solaris                     origin   online   solaris-repository
ha-cluster                  origin   online   ha-cluster-repository
installserver# pkg install ha-cluster/system/install
installserver# /usr/cluster/bin/scinstall
The scinstall Main Menu is displayed.
 *** Main Menu ***

    Please select from one of the following (*) options:

      * 1) Install, restore, or replicate a cluster from this Automated Installer server
      * 2) Securely install, restore, or replicate a cluster from this Automated Installer server
      * 3) Print release information for this Automated Installer install server

      * ?) Help with menu options
      * q) Quit

    Option:  2
Choose Option 1 if you want to install a cluster using a non-secure AI server installation. Choose Option 2 for a secure AI installation.
The Custom Automated Installer Menu or Custom Secure Automated Installer Menu is displayed.
The Custom Automated Installer User screen is displayed.
Type the password a second time to confirm it. The Typical or Custom Mode screen is displayed.
The Cluster Name screen is displayed.
The Cluster Nodes screen is displayed.
If the scinstall utility is unable to find the MAC address of the nodes, type in each address when prompted and press Return. You can then choose to install all the nodes from the same archive, or use a different archive for each node.
The archive can either be a recovery archive or a clone archive.
The Cluster Transport Adapters and Cables screen is displayed.
Select the type of each transport adapter. The Resource Security Configuration screen is displayed.
The Confirmation screen is displayed.
The utility also prints instructions to add the DHCP macros on the DHCP server, and adds (if you chose secure installation) or clears (if you chose non-secure installation) the security keys for SPARC nodes. Follow those instructions.
The AI manifest is located in the following directory:
/var/cluster/logs/install/autoscinstall.d/cluster-name/node-name/node-name_aimanifest.xml
scinstall assumes the existing boot disk in the manifest file to be the target device. To customize the target device, update the target element in the manifest file based on how you want to use the supported criteria to locate the target device for the installation. For example, you can specify the disk_name sub-element.
For more information, see Part 3, Installing Using an Install Server, in Installing Oracle Solaris 11.3 Systems and the ai_manifest(4) man page.
# installadm update-manifest -n cluster-name-{sparc|i386} \
    -f /var/cluster/logs/install/autoscinstall.d/cluster-name/node-name/node-name_aimanifest.xml \
    -m node-name_manifest
Note that sparc or i386 in the command corresponds to the architecture of the cluster nodes.
As the root role, use the following command to start the pconsole utility:
adminconsole# pconsole host[:port] […] &
The pconsole utility also opens a master window from which you can send your input to all individual console windows at the same time.
The Oracle Solaris software is installed with the default configuration.
ok boot net:dhcp - install
The GRUB menu is displayed.
Each node will be automatically rebooted a few times before the node completely joins the cluster. Ignore any error messages from SMF services on the console. On each node, the Automated Installer installs the software that is contained in the Unified Archives. When the installation is successfully completed, each node is fully installed as a new cluster node. Oracle Solaris Cluster installation output is logged in the /var/cluster/logs/install/scinstall.log.N file and the /var/cluster/logs/install/sc_ai_config.log file on each node.
phys-schost# clnode status
Output resembles the following.
=== Cluster Nodes ===

--- Node Status ---

Node Name                                       Status
---------                                       ------
phys-schost-1                                   Online
phys-schost-2                                   Online
phys-schost-3                                   Online
For more information, see the clnode(1CL) man page.
See How to Configure an IPv4 Interface in Configuring and Managing Network Components in Oracle Solaris 11.3 for more information about modifying the automounter map.
Without this addition to the /etc/hosts.allow file, TCP wrappers prevent internode communication over RPC for cluster administration utilities.
# /usr/sbin/ipadm show-addr
ADDROBJ           TYPE     STATE        ADDR
clprivnet0/N      static   ok           ip-address/netmask-length
…
You can use the Unified Archives to replicate a cluster and its nodes. This procedure retains all the software packages in the archives. The new cluster has the same configuration as the archived cluster, although you can customize the private-network properties and host identities, such as zone hostnames and logical hostnames in cluster resources.
Only the Unified Archive created in the global zone is accepted. You can use either a clone archive or a recovery archive. The following list describes the differences between the two archives:
When you install from a clone archive, only the global zone is installed. Any zones in the archive are not installed. When you install from a recovery archive, both the global zone and the zones contained in the archive are installed.
A clone archive does not contain the system configuration, including IPMP groups, VLANs, and VNICs.
A clone archive contains only the BE that is active when the archive is created; therefore, only that BE is installed. A recovery archive can contain multiple BEs, but only the active BE is updated with the new cluster configuration.
To replicate a cluster from the Unified Archives created on an existing cluster, the hardware configuration of the new cluster must be the same as the source cluster. The number of nodes in the new cluster must be the same as in the source cluster, and the transport adapters must also be the same as in the source cluster.
phys-schost# archiveadm create -r archive-location
Use the create command to create a clone archive, or add the -r option to create a recovery archive. When you create the archive, exclude the ZFS datasets that are on the shared storage. If you plan to migrate the data on the shared storage from the source cluster to the new cluster, use a traditional data-migration method.
For more information on using the archiveadm command, see the archiveadm(1M) man page.
Ensure that the AI install server meets the following requirements.
The install server is on the same subnet as the cluster nodes.
The install server is not itself a cluster node.
The install server runs a release of the Oracle Solaris OS that is supported by the Oracle Solaris Cluster software.
Each new cluster node is configured as a custom AI installation client that uses the custom AI directory that you set up for Oracle Solaris Cluster installation.
Follow the appropriate instructions for your software platform and OS version to set up the AI install server and DHCP server. See Chapter 8, Setting Up an AI Server in Installing Oracle Solaris 11.3 Systems and Working With DHCP in Oracle Solaris 11.3.
installserver# pkg publisher
PUBLISHER                   TYPE     STATUS   URI
solaris                     origin   online   solaris-repository
ha-cluster                  origin   online   ha-cluster-repository
installserver# pkg install ha-cluster/system/install
phys-schost# scinstall
The scinstall Main Menu is displayed.
 *** Main Menu ***

    Please select from one of the following (*) options:

      * 1) Install, restore, or replicate a cluster from this Automated Installer server
      * 2) Securely install, restore, or replicate a cluster from this Automated Installer server
      * 3) Print release information for this Automated Installer install server

      * ?) Help with menu options
      * q) Quit

    Option:  2
Choose Option 1 if you want to replicate a cluster using a non-secure AI server installation. Choose Option 2 for a secure AI replication.
The Custom Automated Installer Menu or Custom Secure Automated Installer Menu is displayed.
The Custom Automated Installer User screen is displayed.
Type the password a second time to confirm it.
The Cluster Name screen is displayed.
The Cluster Nodes screen is displayed.
After you type the node names, press Control-D and then Return. If the scinstall utility is unable to find the MAC address of the nodes, type in each address when prompted and press Return.
A Unified Archive file must be created for each node in the source cluster, and only one archive can be specified per node in the new cluster. This 1:1 mapping ensures that one archive is mapped to one node in the source cluster. Similarly, the archive of one source node must be mapped to only one node in the new cluster.
Press Return to confirm the archive files.
To avoid using the same host identities in the new cluster as the source cluster, you can create and provide a text file that contains a 1:1 mapping from the old host identities in the source cluster to the new host identities that you intend to use in the new cluster. The text file can contain multiple lines, where each line has two columns. The first column is the hostname used in the source cluster, and the second column is the corresponding new hostname in the new cluster. The hostnames are for the logical hostnames, shared address resources, and zone clusters. For example:
old-cluster-zc-host1    new-cluster-zc-host1
old-cluster-zc-host2    new-cluster-zc-host2
old-cluster-lh-1        new-cluster-lh1
old-cluster-lh-2        new-cluster-lh2
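Before you supply the mapping file, it can be worth verifying that every line has exactly the two expected columns. The following generic sketch writes a sample file and checks it; the file name and hostnames are illustrative only.

```shell
# Write a sample old-to-new hostname mapping file and verify that every
# line has exactly two columns. File name and contents are hypothetical.
map_file=/tmp/host-mapping.txt
cat > "$map_file" <<'EOF'
old-cluster-zc-host1 new-cluster-zc-host1
old-cluster-lh-1 new-cluster-lh1
EOF

# Any line whose field count differs from 2 is reported by line number.
bad=$(awk 'NF != 2 {print NR}' "$map_file")
if [ -z "$bad" ]; then
  check="mapping file OK"
else
  check="bad lines: $bad"
fi
echo "$check"
```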
The Confirmation screen is displayed.
The utility also prints instructions to add the DHCP macros on the DHCP server, and adds (if you chose secure installation) or clears (if you chose non-secure installation) the security keys for SPARC nodes. Follow those instructions.
The AI manifest is located in the following directory:
/var/cluster/logs/install/autoscinstall.d/cluster-name/node-name/node-name_aimanifest.xml
scinstall assumes the existing boot disk in the manifest file to be the target device. To customize the target device, update the target element in the manifest file based on how you want to use the supported criteria to locate the target device for the installation. For example, you can specify the disk_name sub-element.
For more information, see Part 3, Installing Using an Install Server, in Installing Oracle Solaris 11.3 Systems and the ai_manifest(4) man page.
# installadm update-manifest -n cluster-name-{sparc|i386} \
    -f /var/cluster/logs/install/autoscinstall.d/cluster-name/node-name/node-name_aimanifest.xml \
    -m node-name_manifest
Note that sparc or i386 in the command corresponds to the architecture of the cluster nodes.
As the root role, use the following command to start the pconsole utility:
adminconsole# pconsole host[:port] […] &
The pconsole utility also opens a master window from which you can send your input to all individual console windows at the same time.
The Oracle Solaris software is installed with the default configuration.
ok boot net:dhcp - install
The GRUB menu is displayed.
Each node will be automatically rebooted a few times before the node completely joins the cluster. Ignore any error messages from SMF services on the console. Each node is installed with the software contained in the Unified Archives. When the installation is successfully completed, each node is booted as a member of the new cluster, with the same cluster configuration as the archive but with a different system identity and system configuration. Oracle Solaris Cluster installation output is logged in the /var/cluster/logs/install/scinstall.log.N file and the /var/cluster/logs/install/sc_ai_config.log file on each node.
phys-schost# clnode status
Output resembles the following.
=== Cluster Nodes ===

--- Node Status ---

Node Name                                       Status
---------                                       ------
phys-schost-1                                   Online
phys-schost-2                                   Online
phys-schost-3                                   Online
For more information, see the clnode(1CL) man page.
If the source cluster uses another system as a cluster object (for example, using a system as a quorum device of the quorum server type), you must manually adjust the configuration both in the new cluster and on the quorum server in order for the device to work. For a quorum server, you can add a new quorum server quorum device and remove the one brought from the archive.
If you need to make any changes to the zone cluster configuration or the resource groups in the cluster, reboot the zone cluster to Offline Running mode:
phys-schost# clzonecluster reboot -o zoneclustername
If you do not plan to make changes to the zone cluster configuration, you can reboot the cluster to Online Running mode:
phys-schost# clzonecluster reboot zoneclustername
You can also check the log file, /var/cluster/logs/install/sc_ai_config.log, for more information.