The scinstall utility runs in two modes of installation, Typical or Custom. For the Typical installation of Oracle Solaris Cluster software, scinstall automatically specifies the cluster transport switches as switch1 and switch2.
Typical Mode Worksheet – If you will use Typical mode and accept all defaults, complete the following worksheet.
Custom Mode Worksheet – If you will use Custom mode and customize the configuration data, complete the following worksheet.
Perform this procedure to add a new node to an existing global cluster. To use Automated Installer to add a new node, follow the instructions in How to Install and Configure Oracle Solaris and Oracle Solaris Cluster Software (IPS Repositories).
This procedure uses the interactive form of the scinstall command. For information about how to use the noninteractive forms of the scinstall command, such as when developing installation scripts, see the scinstall(8) man page.
Follow these guidelines to use the interactive scinstall utility in this procedure:
Interactive scinstall enables you to type ahead. Therefore, do not press the Return key more than once if the next menu screen does not appear immediately.
Unless otherwise noted, you can press Control-D to return to either the start of a series of related questions or to the Main Menu.
Default answers or answers to previous sessions are displayed in brackets ([ ]) at the end of a question. Press Return to enter the response that is in brackets without typing it.
Before You Begin
Perform the following tasks:
Ensure that the Oracle Solaris OS is installed to support the Oracle Solaris Cluster software.
If the Oracle Solaris software is already installed on the node, you must ensure that the Oracle Solaris installation meets the requirements for the Oracle Solaris Cluster software and any other software that you intend to install on the cluster. See How to Install Oracle Solaris Software for more information about installing the Oracle Solaris software to meet Oracle Solaris Cluster software requirements.
SPARC: If you are configuring Oracle VM Server for SPARC I/O domains or guest domains as cluster nodes, ensure that the Oracle VM Server for SPARC software is installed on each physical machine and that the domains meet Oracle Solaris Cluster requirements. See How to Install Oracle VM Server for SPARC Software and Create Domains.
Ensure that Oracle Solaris Cluster software packages and updates are installed on the node. See How to Install Oracle Solaris Cluster Software (pkg).
Ensure that the cluster is prepared for the addition of the new node. See How to Prepare the Cluster for Additional Global-Cluster Nodes.
Have available your completed Typical Mode or Custom Mode installation worksheet. See Configuring Oracle Solaris Cluster Software on Additional Global-Cluster Nodes (scinstall).
The Oracle Solaris TCP wrappers for RPC feature prevents internode communication that is necessary for cluster configuration.
TCP wrappers are enabled if config/enable_tcpwrappers is set to true, as shown in the following example command output.
# svccfg -s rpc/bind listprop config/enable_tcpwrappers
config/enable_tcpwrappers  boolean  true
# svccfg -s rpc/bind setprop config/enable_tcpwrappers = false
# svcadm restart rpc/bind
# ipadm create-ip interface
# ipadm create-addr -T static -a local=address/prefix-length addrobj
For more information, see How to Configure an IPv4 Interface in Configuring and Managing Network Components in Oracle Solaris 11.4.
During initial cluster configuration, unless non-link-local IPv6 public network interfaces exist in the cluster, IPMP groups are automatically created based on matching subnets. These groups use transitive probes for interface monitoring and no test addresses are required.
If these automatically created IPMP groups would not meet your needs, or if IPMP groups would not be created because your configuration includes one or more non-link-local IPv6 public network interfaces, do one of the following:
For more information, see Configuring IPMP Groups in Administering TCP/IP Networks, IPMP, and IP Tunnels in Oracle Solaris 11.4.
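If you decide to configure an IPMP group manually, the sequence resembles the following sketch. The group name ipmp0 and the interfaces net0 and net1 are placeholder values for illustration, not values taken from this procedure; substitute your own group name and data links.

```
# ipadm create-ipmp ipmp0
# ipadm add-ipmp -i net0 -i net1 ipmp0
# ipmpstat -g
```

The ipmpstat -g command reports the state of the new group so that you can confirm both underlying interfaces are active before you continue.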
This system is the control node.
phys-schost# clauth enable -n control-node
If you want to use the des (Diffie-Hellman) authentication protocol instead of the sys (unix) protocol, include -p des in the command.
phys-schost# clauth enable -p des -n control-node
The scinstall Main Menu is displayed.
 *** Main Menu ***

    Please select from one of the following (*) options:

      * 1) Create a new cluster or add a cluster node
      * 2) Print release information for this cluster node

      * ?) Help with menu options
      * q) Quit

    Option:  1
The New Cluster and Cluster Node Menu is displayed.
The scinstall utility configures the node and boots the node into the cluster.
If services are not yet online for a node, wait until the state changes to online before you proceed to the next step.
phys-schost# svcs multi-user-server node
STATE          STIME    FMRI
online         17:52:55 svc:/milestone/multi-user-server:default
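The wait described in the previous step can be scripted rather than performed by repeatedly rerunning svcs. The following POSIX shell sketch is illustrative: the wait_online helper is a hypothetical name, and the echo stub stands in for the real check, which on a cluster node would be svcs -H -o state multi-user-server.

```shell
# Hypothetical helper: poll a check command until it reports "online",
# giving up after a timeout in seconds (default 300).
wait_online() {
    check_cmd=$1
    timeout=${2:-300}
    elapsed=0
    while [ "$elapsed" -lt "$timeout" ]; do
        # On a cluster node, the check would be:
        #   state=$(svcs -H -o state multi-user-server)
        state=$($check_cmd 2>/dev/null)
        [ "$state" = "online" ] && return 0
        sleep 1
        elapsed=$((elapsed + 1))
    done
    return 1
}

# Demonstration with a stub command in place of svcs:
wait_online "echo online" 5 && echo "service is online"
```

The helper returns success as soon as the reported state is online, so a calling script can safely proceed to the next configuration step.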
phys-schost# claccess deny-all
Alternatively, you can use the clsetup utility. See How to Add a Node to an Existing Cluster or Zone Cluster in Administering an Oracle Solaris Cluster 4.4 Configuration for procedures.
phys-schost# clnode status
Output resembles the following.
=== Cluster Nodes ===

--- Node Status ---

Node Name                                       Status
---------                                       ------
phys-schost-1                                   Online
phys-schost-2                                   Online
phys-schost-3                                   Online
For more information, see the clnode(8CL) man page.
Without this addition to the /etc/hosts.allow file, TCP wrappers prevent internode communication over RPC for cluster administration utilities.
# /usr/sbin/ipadm show-addr
ADDROBJ           TYPE     STATE        ADDR
clprivnet0/N      static   ok           ip-address/netmask-length
…
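Using the clprivnet0 addresses reported by the ipadm output above, the /etc/hosts.allow entry takes the following general form. The addresses here are placeholders in the style of this procedure, and rpcbind is assumed to be the daemon key that the TCP wrappers library matches for RPC bind traffic; substitute the actual private-network IP address of each cluster node.

```
# /etc/hosts.allow
rpcbind: ip-address-of-node1 ip-address-of-node2 ip-address-of-node3
```

Each node's file must list the private-network addresses of all cluster nodes, including its own, so that RPC-based cluster administration utilities can communicate in both directions.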
phys-schost# pkg list
phys-schost# clnode set -p reboot_on_path_failure=enabled +
Specifies the property to set
Enables automatic node reboot if all monitored shared-disk paths fail.
phys-schost# clnode show
=== Cluster Nodes ===

Node Name:                                      node
  …
  reboot_on_path_failure:                       enabled
  …
See Administrative Tasks for Autofs Maps in Managing Network File Systems in Oracle Solaris 11.4 for more information about modifying the automounter map.
The following example shows the node phys-schost-3 added to the cluster schost. The sponsoring node is phys-schost-1.
Adding node "phys-schost-3" to the cluster configuration ... done
Adding adapter "net2" to the cluster configuration ... done
Adding adapter "net3" to the cluster configuration ... done
Adding cable to the cluster configuration ... done
Adding cable to the cluster configuration ... done

Copying the config from "phys-schost-1" ... done
Copying the postconfig file from "phys-schost-1" if it exists ... done
Setting the node ID for "phys-schost-3" ... done (id=1)

Verifying the major number for the "did" driver from "phys-schost-1" ... done

Initializing NTP configuration ... done
Updating nsswitch.conf ... done

Adding cluster node entries to /etc/inet/hosts ... done

Configuring IP Multipathing groups in "/etc/hostname.<adapter>" files
Updating "/etc/hostname.hme0".

Verifying that power management is NOT configured ... done

Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ... done

Ensure network routing is disabled ... done
Network routing has been disabled on this node.
Having a cluster node act as a router is not supported by Oracle Solaris Cluster.
Please do not re-enable network routing.

Updating file ("ntp.conf.cluster") on node phys-schost-1 ... done
Updating file ("hosts") on node phys-schost-1 ... done

Log file - /var/cluster/logs/install/scinstall.log.6952

Rebooting ...
Unsuccessful configuration – If one or more nodes cannot join the cluster, or if the wrong configuration information was specified, first attempt to perform this procedure again. If that does not correct the problem, perform the procedure How to Unconfigure Oracle Solaris Cluster Software to Correct Installation Problems on each misconfigured node to remove it from the cluster configuration. You do not need to uninstall the Oracle Solaris Cluster software packages. Then perform this procedure again.
If you added a node to an existing cluster that uses a quorum device, go to How to Update Quorum Devices After Adding a Node to a Global Cluster.
Otherwise, go to How to Verify the Quorum Configuration and Installation Mode.