Perform this procedure to add new nodes to an existing cluster.
Install the hardware on the new cluster node.
Install the host adapter on the new node and verify that any existing cluster interconnects can support the new node.
See the Sun Cluster Hardware Administration Manual for Solaris OS.
Install any additional storage.
See the appropriate manual from the Sun Cluster 3.x Hardware Administration Collection.
Ensure that the Solaris operating environment is installed to support Sun Cluster software.
If Solaris software is already installed on the node, you must ensure that the Solaris installation meets the requirements for Sun Cluster software and any other software that you intend to install on the cluster. See How to Install Solaris Software for more information about installing Solaris software to meet Sun Cluster software requirements.
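As a quick pre-flight sketch, you can record the Solaris release before checking it against the Sun Cluster requirements. The release strings in the `case` list below are illustrative only; consult the Sun Cluster release notes for the exact Solaris versions that are supported.

```shell
# Illustrative pre-flight check; the supported-release list is an example.
osrel=$(uname -r)
echo "OS release: $osrel"
case "$osrel" in
  5.8|5.9) echo "release is in the example supported list" ;;
  *)       echo "verify that release $osrel meets Sun Cluster requirements" ;;
esac
```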
Ensure that Sun Cluster software packages are installed on the node.
Complete the following configuration worksheet.
Table 2–9 Added Node Configuration Worksheet
See Planning the Solaris Operating Environment and Planning the Sun Cluster Environment for planning guidelines.
Are you adding this node to a single-node cluster?
From the existing cluster node, determine whether two cluster interconnects already exist.
You must have at least two cables or two adapters configured.

# scconf -p | grep cable
# scconf -p | grep adapter
Configure new cluster interconnects.
On the existing cluster node, start the scsetup(1M) utility.
# scsetup
Select Cluster interconnect.
Select Add a transport cable.
Follow the instructions to specify the name of the node to add to the cluster, the name of a transport adapter, and whether to use a transport junction.
If necessary, repeat Step c to configure a second cluster interconnect.
When finished, quit the scsetup utility.
Verify that the cluster now has two cluster interconnects configured.
# scconf -p | grep cable
# scconf -p | grep adapter
The command output should show configuration information for at least two cluster interconnects.
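The two checks above can be folded into one guarded sketch. The `scconf` command exists only on a cluster node, so on any other host the script prints a note instead of failing; the grep patterns are the same ones shown in the step.

```shell
# Sketch: verify that at least two cables and two adapters are configured.
# scconf is present only on a Sun Cluster node.
if command -v scconf >/dev/null 2>&1; then
  cables=$(scconf -p | grep -c cable)
  adapters=$(scconf -p | grep -c adapter)
  if [ "$cables" -ge 2 ] && [ "$adapters" -ge 2 ]; then
    echo "interconnects OK ($cables cables, $adapters adapters)"
  else
    echo "fewer than two cluster interconnects are configured"
  fi
else
  echo "scconf not found; run this check on a cluster node"
fi
```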
Add the new node to the cluster authorized-node list.
On any active cluster member, start the scsetup(1M) utility.
# scsetup
The Main Menu is displayed.
Select New nodes.
Select Specify the name of a machine which may add itself.
Follow the prompts to add the node's name to the list of recognized machines.
Verify that the task has succeeded.
The scsetup utility prints the message Command completed successfully if the task completes without error.
Quit the scsetup utility.
Become superuser on the cluster node to configure.
Start the scinstall utility.
# /usr/cluster/bin/scinstall
Follow these guidelines to use the interactive scinstall utility:
Interactive scinstall enables you to type ahead. Therefore, do not press the Return key more than once if the next menu screen does not appear immediately.
Unless otherwise noted, you can press Control-D to return either to the start of a series of related questions or to the Main Menu.
Default answers or answers to previous sessions are displayed in brackets ([ ]) at the end of a question. Press Return to enter the response that is in brackets without typing it.
From the Main Menu, choose Install a cluster or cluster node.
 *** Main Menu ***

    Please select from one of the following (*) options:

      * 1) Install a cluster or cluster node
        2) Configure a cluster to be JumpStarted from this install server
        3) Add support for new data services to this cluster node
      * 4) Print release information for this cluster node

      * ?) Help with menu options
      * q) Quit

    Option:  1
From the Install Menu, choose Add this machine as a node in an existing cluster.
Follow the menu prompts to supply the answers from Table 2–9, the configuration worksheet that you completed in Step 4.
The scinstall utility configures the node and boots the node into the cluster.
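After the node boots into the cluster, you can confirm its membership from any cluster member. The sketch below uses the example node name phys-schost-3 from the sample output in this procedure; scstat(1M) exists only on a cluster node, so elsewhere the script prints a note instead.

```shell
# Sketch: confirm that a newly added node (example name: phys-schost-3)
# appears in the cluster node status. scstat is a cluster-node command.
NODE=phys-schost-3
if command -v scstat >/dev/null 2>&1; then
  status=$(scstat -n | grep "$NODE")
else
  status="scstat not found; run this check on a cluster node"
fi
echo "$status"
```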
Repeat this procedure on each additional node until all new nodes are fully configured.
From an active cluster member, prevent any other nodes from joining the cluster.
# /usr/cluster/bin/scconf -a -T node=.

-a
    Add

-T
    Specifies authentication options

node=.
    Specifies the node name of dot (.) to add to the authentication list, to prevent any other node from adding itself to the cluster
Alternately, you can use the scsetup(1M) utility. See “How to Add a Cluster Node to the Authorized Node List” in “Adding and Removing a Cluster Node” in Sun Cluster System Administration Guide for Solaris OS for procedures.
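You can then confirm that the authorized-node list contains only dot (.), which closes the cluster to new nodes. The `"node list"` grep pattern below is an assumption about the labels in the `scconf -p` output; adjust it for your release.

```shell
# Sketch: display the authorized-node list entries from the cluster
# configuration. The grep pattern is an assumed output label.
if command -v scconf >/dev/null 2>&1; then
  out=$(scconf -p | grep -i "node list")
else
  out="scconf not found; run this check on a cluster node"
fi
echo "$out"
```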
SPARC: Do you intend to install VERITAS File System?
If yes, go to SPARC: How to Install VERITAS File System Software.
If no, set up the name-service look-up order. Go to How to Configure the Name-Service Switch.
The following example shows the scinstall command that was executed and the messages that the utility logs as it completes configuration tasks on the node phys-schost-3. The sponsoring node is phys-schost-1.
 >>> Confirmation <<<

Your responses indicate the following options to scinstall:

scinstall -ik \
     -C sc-cluster \
     -N phys-schost-1 \
     -A trtype=dlpi,name=hme1 -A trtype=dlpi,name=hme3 \
     -m endpoint=:hme1,endpoint=switch1 \
     -m endpoint=:hme3,endpoint=switch2

Are these the options you want to use (yes/no) [yes]?

Do you want to continue with the install (yes/no) [yes]?

Checking device to use for global devices file system ... done
Adding node "phys-schost-3" to the cluster configuration ... done
Adding adapter "hme1" to the cluster configuration ... done
Adding adapter "hme3" to the cluster configuration ... done
Adding cable to the cluster configuration ... done
Adding cable to the cluster configuration ... done
Copying the config from "phys-schost-1" ... done
Setting the node ID for "phys-schost-3" ... done (id=3)
Verifying the major number for the "did" driver with "phys-schost-1" ... done
Checking for global devices global file system ... done
Updating vfstab ... done

Verifying that NTP is configured ... done
Installing a default NTP configuration ... done
Please complete the NTP configuration after scinstall has finished.

Verifying that "cluster" is set for "hosts" in nsswitch.conf ... done
Adding the "cluster" switch to "hosts" in nsswitch.conf ... done
Verifying that "cluster" is set for "netmasks" in nsswitch.conf ... done
Adding the "cluster" switch to "netmasks" in nsswitch.conf ... done

Verifying that power management is NOT configured ... done
Unconfiguring power management ... done
/etc/power.conf has been renamed to /etc/power.conf.61501001054
Power management is incompatible with the HA goals of the cluster.
Please do not attempt to re-configure power management.

Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ... done

Ensure network routing is disabled ... done
Network routing has been disabled on this node by creating /etc/notrouter.
Having a cluster node act as a router is not supported by Sun Cluster.
Please do not re-enable network routing.

Log file - /var/cluster/logs/install/scinstall.log.9853

Rebooting ...