Perform this procedure to add a new node to an existing cluster.
Ensure that all necessary hardware is installed.
Ensure that the host adapter is installed on the new node.
See the Sun Cluster Hardware Administration Manual for Solaris OS.
Verify that any existing cluster interconnects can support the new node.
See the Sun Cluster Hardware Administration Manual for Solaris OS.
Ensure that any additional storage is installed.
See the appropriate manual from the Sun Cluster 3.x Hardware Administration Collection.
Ensure that the Solaris OS is installed to support Sun Cluster software.
If Solaris software is already installed on the node, you must ensure that the Solaris installation meets the requirements for Sun Cluster software and any other software that you intend to install on the cluster. See How to Install Solaris Software for more information about installing Solaris software to meet Sun Cluster software requirements.
Ensure that Sun Cluster software packages are installed on the node.
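The Solaris prerequisite above can be sanity-checked before you begin. The following sketch is hypothetical (the helper name is invented); it only confirms that `uname -r` reports one of the release strings that correspond to Solaris 8 or Solaris 9, the releases this procedure covers:

```shell
# Hypothetical check: confirm that uname -r reports a Solaris release
# supported by Sun Cluster 3.1 9/04 (5.8 = Solaris 8, 5.9 = Solaris 9).
supported_release() {
  case "$1" in
    5.8|5.9) return 0 ;;
    *)       return 1 ;;
  esac
}

if supported_release "$(uname -r)"; then
  echo "Solaris release is supported"
else
  echo "Solaris release is NOT supported by Sun Cluster 3.1 9/04"
fi
```

This does not replace the full requirements check in How to Install Solaris Software; it only catches an unsupported release early.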
Complete the following configuration worksheet.
Table 2–8 Added Node Configuration Worksheet
See Planning the Solaris OS and Planning the Sun Cluster Environment for planning guidelines.
If you are adding this node to a single-node cluster, determine whether two cluster interconnects already exist.
You must have at least two cables or two adapters configured before you can add a node.
# scconf -p | grep cable
# scconf -p | grep adapter
If the output shows configuration information for two cables or for two adapters, proceed to Step 6.
If the output shows no configuration information for either cables or adapters, or shows configuration information for only one cable or adapter, configure new cluster interconnects.
On the existing cluster node, start the scsetup(1M) utility.
# scsetup
Choose the menu item, Cluster interconnect.
Choose the menu item, Add a transport cable.
Follow the instructions to specify the name of the node to add to the cluster, the name of a transport adapter, and whether to use a transport junction.
If necessary, repeat Step c to configure a second cluster interconnect.
When finished, quit the scsetup utility.
Verify that the cluster now has two cluster interconnects configured.
# scconf -p | grep cable
# scconf -p | grep adapter
The command output should show configuration information for at least two cluster interconnects.
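The interconnect check in this step can be sketched as a small script that counts matching lines in the `scconf -p` output. The output format shown in the simulated input is an assumption, so adapt the `grep` pattern to what your cluster actually prints:

```shell
# Sketch: decide whether at least two transport cables are configured.
# On a real cluster you would pipe `scconf -p` into this function.
has_two_cables() {
  [ "$(grep -c cable)" -ge 2 ]
}

# Simulated scconf -p output (format is an assumption):
printf 'Cluster transport cables: endpoint1\nCluster transport cables: endpoint2\n' |
  has_two_cables && echo "two interconnects present" || echo "configure more interconnects"
```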
If you are adding this node to an existing cluster, add the new node to the cluster authorized-nodes list.
On any active cluster member, start the scsetup(1M) utility.
# scsetup
The Main Menu is displayed.
Choose the menu item, New nodes.
Choose the menu item, Specify the name of a machine which may add itself.
Follow the prompts to add the node's name to the list of recognized machines.
The scsetup utility prints the message Command completed successfully if the task completes without error.
Quit the scsetup utility.
Become superuser on the cluster node to configure.
Install Sun Web Console packages.
These packages are required by Sun Cluster software, even if you do not use Sun Web Console.
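The Sun Web Console installer ships on the same Sun Cluster 3.1 9/04 CD-ROM as the cluster packages. The sketch below builds the likely installer directory for an architecture; the `sun_web_console/2.1` path segment and the `./setup` script name are assumptions to verify against your media:

```shell
# Hypothetical helper: build the CD-ROM path that holds the Sun Web
# Console setup script for a given architecture (sparc or x86).
# The sun_web_console/2.1 path segment is an assumption.
webconsole_dir() {
  printf '/cdrom/cdrom0/Solaris_%s/Product/sun_web_console/2.1\n' "$1"
}

webconsole_dir sparc
# On the node you would then, as superuser:
#   cd "$(webconsole_dir sparc)" && ./setup
```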
Install additional packages if you intend to use any of the following features.
Remote Shared Memory Application Programming Interface (RSMAPI)
SCI-PCI adapters for the interconnect transport
RSMRDT drivers
Use of the RSMRDT driver is restricted to clusters that run an Oracle9i release 2 SCI configuration with RSM enabled. Refer to Oracle9i release 2 user documentation for detailed installation and configuration instructions.
Determine which packages you must install.
The following table lists the Sun Cluster 3.1 9/04 packages that each feature requires and the order in which you must install each group of packages. The scinstall utility does not automatically install these packages.
| Feature | Additional Sun Cluster 3.1 9/04 Packages to Install |
|---|---|
| RSMAPI | SUNWscrif |
| SCI-PCI adapters | SUNWsci SUNWscid SUNWscidx |
| RSMRDT drivers | SUNWscrdt |
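The table can be expressed as a small lookup that prints the packages for a feature in the required order. The package names and ordering come straight from the table; the helper itself is hypothetical:

```shell
# Map each optional feature to its Sun Cluster 3.1 9/04 packages,
# using the names and install order from the table above.
feature_pkgs() {
  case "$1" in
    RSMAPI)             echo "SUNWscrif" ;;
    "SCI-PCI adapters") echo "SUNWsci SUNWscid SUNWscidx" ;;
    "RSMRDT drivers")   echo "SUNWscrdt" ;;
    *)                  echo "unknown feature: $1" >&2; return 1 ;;
  esac
}

# Example: the pkgadd invocation for the RSMAPI feature
echo "pkgadd -d . $(feature_pkgs RSMAPI)"
```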
Ensure that any dependency Solaris packages are already installed.
On the Sun Cluster 3.1 9/04 CD-ROM, change to the Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/ directory, where arch is sparc or x86, and where ver is 8 (for Solaris 8) or 9 (for Solaris 9).
# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/
Install the additional packages.
# pkgadd -d . packages
If you are adding a node to a single-node cluster, repeat these steps to add the same packages to the original cluster node.
On the Sun Cluster 3.1 9/04 CD-ROM, change to the Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/ directory, where arch is sparc or x86, and where ver is 8 (for Solaris 8) or 9 (for Solaris 9).
# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/
Start the scinstall utility.
# /usr/cluster/bin/scinstall
Follow these guidelines to use the interactive scinstall utility:
Interactive scinstall enables you to type ahead. Therefore, do not press the Return key more than once if the next menu screen does not appear immediately.
Unless otherwise noted, you can press Control-D to return to either the start of a series of related questions or to the Main Menu.
Default answers or answers to previous sessions are displayed in brackets ([ ]) at the end of a question. Press Return to enter the response that is in brackets without typing it.
From the Main Menu, choose the menu item, Install a cluster or cluster node.
 *** Main Menu ***

    Please select from one of the following (*) options:

      * 1) Install a cluster or cluster node
        2) Configure a cluster to be JumpStarted from this install server
        3) Add support for new data services to this cluster node
      * 4) Print release information for this cluster node

      * ?) Help with menu options
      * q) Quit

    Option:  1
From the Install Menu, choose the menu item, Add this machine as a node in an existing cluster.
Follow the menu prompts to supply your answers from the worksheet that you completed in Step 4.
The scinstall utility configures the node and boots the node into the cluster.
Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.
# eject cdrom
Repeat this procedure on any other node to add to the cluster until all additional nodes are fully configured.
From an active cluster member, prevent any other nodes from joining the cluster.
# /usr/cluster/bin/scconf -a -T node=.

-a
    Add.
-T
    Specifies authentication options.
node=.
    Specifies the node name of dot (.) to add to the authentication list, to prevent any other node from adding itself to the cluster.
Alternately, you can use the scsetup(1M) utility. See “How to Add a Cluster Node to the Authorized Node List” in “Adding and Removing a Cluster Node” in Sun Cluster System Administration Guide for Solaris OS for procedures.
Update the quorum vote count.
When you increase or decrease the number of node attachments to a quorum device, the cluster does not automatically recalculate the quorum vote count. This step reestablishes the correct quorum vote.
Use the scsetup utility to remove each quorum device and then add it back into the configuration. Do this for one quorum device at a time.
If the cluster has only one quorum device, configure a second quorum device before you remove and re-add the original quorum device. Then remove the second quorum device to return the cluster to its original configuration.
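As an illustration, the remove-and-re-add sequence for one quorum device might look like the following dry run. The device name d20 and the scconf -q syntax are assumptions; the scsetup Quorum menu performs the same operations interactively:

```shell
# Dry run of re-registering a quorum device so that the cluster
# recalculates the quorum vote count. The leading "echo" makes this a
# dry run; drop it to run the commands for real on a cluster node.
SCCONF="echo /usr/cluster/bin/scconf"

$SCCONF -r -q globaldev=d20   # remove the quorum device
$SCCONF -a -q globaldev=d20   # add it back; votes are recomputed
```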
Install Sun StorEdge QFS file system software.
Follow the procedures for initial installation in the Sun StorEdge QFS and Sun StorEdge SAM-FS Software Installation and Configuration Guide.
(Optional) SPARC: To install VERITAS File System, go to SPARC: How to Install VERITAS File System Software.
Set up the name-service look-up order.
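After this step, the cluster keyword should precede files in the relevant lookup entries of /etc/nsswitch.conf. The lines below are a representative fragment; the trailing sources depend on your name service (nis here is only an example):

```
hosts:      cluster files nis
netmasks:   cluster files nis
```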
The following example shows the scinstall command executed and the messages that the utility logs as scinstall completes configuration tasks on the node phys-schost-3. The sponsoring node is phys-schost-1.
>>> Confirmation <<<

Your responses indicate the following options to scinstall:

scinstall -ik \
     -C sc-cluster \
     -N phys-schost-1 \
     -A trtype=dlpi,name=hme1 -A trtype=dlpi,name=hme3 \
     -m endpoint=:hme1,endpoint=switch1 \
     -m endpoint=:hme3,endpoint=switch2

Are these the options you want to use (yes/no) [yes]?

Do you want to continue with the install (yes/no) [yes]?

Checking device to use for global devices file system ... done

Adding node "phys-schost-3" to the cluster configuration ... done
Adding adapter "hme1" to the cluster configuration ... done
Adding adapter "hme3" to the cluster configuration ... done
Adding cable to the cluster configuration ... done
Adding cable to the cluster configuration ... done

Copying the config from "phys-schost-1" ... done
Setting the node ID for "phys-schost-3" ... done (id=3)

Verifying the major number for the "did" driver with "phys-schost-1" ...done
Checking for global devices global file system ... done
Updating vfstab ... done

Verifying that NTP is configured ... done
Installing a default NTP configuration ... done
Please complete the NTP configuration after scinstall has finished.

Verifying that "cluster" is set for "hosts" in nsswitch.conf ... done
Adding the "cluster" switch to "hosts" in nsswitch.conf ... done
Verifying that "cluster" is set for "netmasks" in nsswitch.conf ... done
Adding the "cluster" switch to "netmasks" in nsswitch.conf ... done

Verifying that power management is NOT configured ... done
Unconfiguring power management ... done
/etc/power.conf has been renamed to /etc/power.conf.61501001054
Power management is incompatible with the HA goals of the cluster.
Please do not attempt to re-configure power management.

Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ...done
Ensure network routing is disabled ... done
Network routing has been disabled on this node by creating /etc/notrouter.
Having a cluster node act as a router is not supported by Sun Cluster.
Please do not re-enable network routing.

Log file - /var/cluster/logs/install/scinstall.log.9853

Rebooting ...