The scinstall utility runs in two installation modes, Typical and Custom. For a Typical installation of Oracle Solaris Cluster software, scinstall automatically specifies the following configuration defaults.
Private-network address – 172.16.0.0
Private-network netmask – 255.255.240.0
Cluster-transport adapters – Exactly two adapters
Cluster-transport switches – switch1 and switch2
Global fencing – Enabled
Installation security (DES) – Limited
Complete one of the following cluster configuration worksheets to plan your Typical mode or Custom mode installation:
Typical Mode Worksheet – If you will use Typical mode and accept all defaults, complete the following worksheet.
Custom Mode Worksheet – If you will use Custom mode and customize the configuration data, complete the following worksheet.
Perform this procedure from one node of the global cluster to configure Oracle Solaris Cluster software on all nodes of the cluster.
This procedure uses the interactive form of the scinstall command. For information about how to use the noninteractive forms of the scinstall command, such as when developing installation scripts, see the scinstall(1M) man page.
The clauth command controls the network access policies for machines that are to be configured as nodes of a new cluster. For more information about the clauth command, see the clauth(1CL) man page.
Before using the browser interface to perform this task, you must install all the cluster packages, including the Oracle Solaris Cluster Manager packages. You can then access the browser on one of the cluster nodes. For Oracle Solaris Cluster Manager log-in instructions, see How to Access Oracle Solaris Cluster Manager in Oracle Solaris Cluster 4.3 System Administration Guide. After logging in, if the node is not configured as part of the cluster, the wizard will display a screen with the Configure button. Click Configure to launch the cluster configuration wizard.
Follow these guidelines to use the interactive scinstall utility in this procedure:
Interactive scinstall enables you to type ahead. Therefore, do not press the Return key more than once if the next menu screen does not appear immediately.
Unless otherwise noted, you can press Control-D to return to either the start of a series of related questions or to the Main Menu.
Default answers or answers to previous sessions are displayed in brackets ([ ]) at the end of a question. Press Return to enter the response that is in brackets without typing it.
Before You Begin
Perform the following tasks:
Ensure that the Oracle Solaris OS is installed to support the Oracle Solaris Cluster software.
If the Oracle Solaris software is already installed on the node, you must ensure that the Oracle Solaris installation meets the requirements for the Oracle Solaris Cluster software and any other software that you intend to install on the cluster. See How to Install Oracle Solaris Software for more information about installing the Oracle Solaris software to meet Oracle Solaris Cluster software requirements.
Ensure that NWAM is disabled. See How to Install Oracle Solaris Cluster Software Packages for instructions.
SPARC: If you are configuring Oracle VM Server for SPARC logical domains as cluster nodes, ensure that the Oracle VM Server for SPARC software is installed on each physical machine and that the domains meet Oracle Solaris Cluster requirements. See How to Install Oracle VM Server for SPARC Software and Create Domains.
Ensure that Oracle Solaris Cluster software packages and updates are installed on each node. See How to Install Oracle Solaris Cluster Software Packages.
Ensure that any adapters that you want to use as tagged VLAN adapters are configured and that you have their VLAN IDs.
Have available your completed Typical Mode or Custom Mode installation worksheet. See Configuring Oracle Solaris Cluster Software on All Nodes (scinstall).
Follow the procedures in the documentation for your switches to determine whether NDP is enabled and to disable NDP.
During cluster configuration, the software checks that there is no traffic on the private interconnect. If NDP sends any packets to a private adapter while the private interconnect is being checked for traffic, the software assumes that the interconnect is not private and cluster configuration is interrupted. NDP must therefore be disabled during cluster creation.
After the cluster is established, you can re-enable NDP on the private-interconnect switches if you want to use that feature.
Alternatively, if your user account is assigned the System Administrator profile, issue commands as a nonroot user through a profile shell, or prefix each command with pfexec.
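For example, a user assigned the System Administrator profile could run a privileged cluster command through pfexec rather than as root (the /usr/cluster/bin path is the standard Oracle Solaris Cluster command location):

phys-schost$ pfexec /usr/cluster/bin/scinstall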
The Oracle Solaris TCP wrappers for RPC feature prevents internode communication that is necessary for cluster configuration.
TCP wrappers are enabled if config/enable_tcpwrappers is set to true, as shown in the following example command output.
# svccfg -s rpc/bind listprop config/enable_tcpwrappers
config/enable_tcpwrappers  boolean  true
# svccfg -s rpc/bind setprop config/enable_tcpwrappers = false
# svcadm refresh rpc/bind
# svcadm restart rpc/bind
# ipadm create-ip interface
# ipadm create-addr -T static -a local=address/prefix-length addrobj
For more information, see How to Configure an IPv4 Interface in Configuring and Managing Network Components in Oracle Solaris 11.3.
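For example, to plumb a static IPv4 address on an interface, the commands above might be used as follows (the interface name net0 and the address shown are hypothetical; substitute values from your worksheet):

phys-schost# ipadm create-ip net0
phys-schost# ipadm create-addr -T static -a local=192.168.10.11/24 net0/v4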
During initial cluster configuration, unless non-link-local IPv6 public network interfaces exist in the cluster, IPMP groups are automatically created based on matching subnets. These groups use transitive probes for interface monitoring and no test addresses are required.
If these automatically created IPMP groups would not meet your needs, or if IPMP groups would not be created because your configuration includes one or more non-link-local IPv6 public network interfaces, do one of the following:
For more information, see Configuring IPMP Groups in Administering TCP/IP Networks, IPMP, and IP Tunnels in Oracle Solaris 11.3.
For more information, see Chapter 2, Configuring High Availability by Using Link Aggregations in Managing Network Datalinks in Oracle Solaris 11.3.
This system is the control node.
phys-schost# clauth enable -n control-node
If you want to use the des (Diffie-Hellman) authentication protocol instead of the sys (unix) protocol, include -p des in the command.
phys-schost# clauth enable -p des -n control-node
For information about setting up DES authentication, see Administering Authentication With Secure RPC in Managing Kerberos and Other Authentication Services in Oracle Solaris 11.3.
phys-schost# scinstall
*** Main Menu ***

    Please select from one of the following (*) options:

      * 1) Create a new cluster or add a cluster node
      * 2) Print release information for this cluster node

      * ?) Help with menu options
      * q) Quit

    Option:  1
The New Cluster and Cluster Node Menu is displayed.
The Typical or Custom Mode menu is displayed.
The Create a New Cluster screen is displayed. Read the requirements, then press Control-D to continue.
The scinstall utility installs and configures all cluster nodes and reboots the cluster. The cluster is established when all nodes have successfully booted into the cluster. Oracle Solaris Cluster installation output is logged in a /var/cluster/logs/install/scinstall.log.N file.
If services are not yet online for a node, wait until the state changes to online before you proceed to the next step.
phys-schost# svcs multi-user-server node
STATE          STIME    FMRI
online         17:52:55 svc:/milestone/multi-user-server:default
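If the milestone is not yet online, you can poll for it rather than rechecking by hand; this sketch assumes the standard svcs -H (no header) and -o state (state column only) options:

phys-schost# until svcs -H -o state multi-user-server | grep -qx online; do sleep 10; done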
phys-schost# clnode status
Output resembles the following.
=== Cluster Nodes ===

--- Node Status ---

Node Name                                       Status
---------                                       ------
phys-schost-1                                   Online
phys-schost-2                                   Online
phys-schost-3                                   Online
For more information, see the clnode(1CL) man page.
phys-schost# clquorum reset
This feature automatically reboots a node if all monitored shared-disk paths fail, provided that at least one of the disks is accessible from a different node in the cluster.
phys-schost# clnode set -p reboot_on_path_failure=enabled +
-p – Specifies the property to set.
reboot_on_path_failure=enabled – Enables automatic node reboot if failure of all monitored shared-disk paths occurs.
phys-schost# clnode show

=== Cluster Nodes ===

Node Name:                                      node
  …
  reboot_on_path_failure:                       enabled
  …
Without this addition to the /etc/hosts.allow file, TCP wrappers prevent internode communication over RPC for cluster administration utilities.
# /usr/sbin/ipadm show-addr
ADDROBJ           TYPE     STATE        ADDR
clprivnet0/N      static   ok           ip-address/netmask-length
…
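For example, if ipadm show-addr reports clprivnet0 addresses of 172.16.0.129 and 172.16.0.130 on a two-node cluster (placeholder values; use the addresses reported on your nodes), the /etc/hosts.allow entry would resemble the following, in the standard hosts_access daemon_list : client_list format:

rpcbind : 172.16.0.129 , 172.16.0.130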
See How to Configure an IPv4 Interface in Configuring and Managing Network Components in Oracle Solaris 11.3 for more information about modifying the automounter map.
The following example shows the scinstall progress messages that are logged as scinstall completes configuration tasks on the two-node cluster, schost. The cluster is installed from phys-schost-1 by using the scinstall utility in Typical Mode. The other cluster node is phys-schost-2. The adapter names are net2 and net3. The automatic selection of a quorum device is enabled.
Log file - /var/cluster/logs/install/scinstall.log.24747

Configuring global device using lofi on pred1: done
Starting discovery of the cluster transport configuration.
The following connections were discovered:
    phys-schost-1:net2  switch1  phys-schost-2:net2
    phys-schost-1:net3  switch2  phys-schost-2:net3
Completed discovery of the cluster transport configuration.
Started cluster check on "phys-schost-1".
Started cluster check on "phys-schost-2".
cluster check completed with no errors or warnings for "phys-schost-1".
cluster check completed with no errors or warnings for "phys-schost-2".
Configuring "phys-schost-2" … done
Rebooting "phys-schost-2" … done
Configuring "phys-schost-1" … done
Rebooting "phys-schost-1" …

Log file - /var/cluster/logs/install/scinstall.log.24747
Troubleshooting
Unsuccessful configuration – If one or more nodes cannot join the cluster, or if the wrong configuration information was specified, first attempt to perform this procedure again. If that does not correct the problem, perform the procedure How to Unconfigure Oracle Solaris Cluster Software to Correct Installation Problems on each misconfigured node to remove it from the cluster configuration. You do not need to uninstall the Oracle Solaris Cluster software packages. Then perform this procedure again.
Next Steps
If you installed a single-node cluster, cluster establishment is complete. Go to Creating Cluster File Systems to install volume management software and configure the cluster.
If you installed a multiple-node cluster and chose automatic quorum configuration, postinstallation setup is complete. Go to How to Verify the Quorum Configuration and Installation Mode.
If you installed a multiple-node cluster and declined automatic quorum configuration, perform postinstallation setup. Go to How to Configure Quorum Devices.
If you intend to configure any quorum devices in your cluster, go to How to Configure Quorum Devices.
Otherwise, go to How to Verify the Quorum Configuration and Installation Mode.