This chapter provides the procedures for administering items that affect the entire cluster.
This chapter includes the following procedures:

"6.2.1 How to Add a Cluster Node to the Authorized Node List"

"6.3.1 How to Remove a Node From the Cluster Software Configuration"
| Task | For Instructions, Go To... |
| --- | --- |
| Change the name of the cluster. | |
| List node IDs and their corresponding node names. | |
| Permit or deny new nodes to add themselves to the cluster. | |
| Change the time for a cluster using the Network Time Protocol (NTP). | |
| Bring down a node and enter the OpenBoot™ PROM. | |
If necessary, you can change the cluster name after initial installation.
Become superuser on a node in the cluster.
Enter the scsetup(1M) utility.
# scsetup
The Main Menu appears.
To change the cluster name, enter 6 (Other cluster properties).
The Other Cluster Properties menu appears.
Make your selection from the menu and follow the onscreen instructions.
The following example shows the scconf(1M) command generated from the scsetup utility to change to the new cluster name, dromedary.
# scconf -c -C cluster=dromedary
During Sun Cluster installation, each node is automatically assigned a unique node ID number. The node ID number is assigned to a node in the order in which it joins the cluster for the first time; once assigned, the number cannot be changed. The node ID number is often used in error messages to identify which cluster node the message concerns. Use this procedure to determine the mapping between node IDs and node names.
You do not need to be superuser to list configuration information.
The following example shows the node ID assignments.

% scconf -pv | grep "Node ID"
(phys-schost-1) Node ID:                                1
(phys-schost-2) Node ID:                                2
(phys-schost-3) Node ID:                                3
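As an illustration of working with this output, the sketch below extracts a single node's ID with awk. The captured sample in the here-document is an assumption standing in for a live cluster; on a cluster node you would pipe `scconf -pv` directly instead.

```shell
# node_id_of: print the node ID for the node name given as $1,
# reading "scconf -pv"-style output on stdin.
node_id_of() {
  awk -v node="($1)" '$1 == node && $2 == "Node" && $3 == "ID:" { print $4 }'
}

# Sample output captured from the example above (an assumption);
# on a live cluster: scconf -pv | node_id_of phys-schost-2
node_id_of phys-schost-2 <<'EOF'
(phys-schost-1) Node ID:                                1
(phys-schost-2) Node ID:                                2
(phys-schost-3) Node ID:                                3
EOF
# prints: 2
```

Matching on the parenthesized first field keeps the filter from also matching unrelated lines that merely contain the node name.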
Sun Cluster enables you to determine if new nodes can add themselves to the cluster and with what type of authentication. You can permit any new node to join the cluster over the public network, deny new nodes from joining the cluster, or indicate a specific node that can join the cluster. New nodes can be authenticated by using either standard UNIX or Diffie-Hellman (DES) authentication. If you select DES authentication, you must also configure all necessary encryption keys before a node can join. See the keyserv(1M) and publickey(4) man pages for more information.
Become superuser on a node in the cluster.
Enter the scsetup(1M) utility.
# scsetup
The Main Menu appears.
To work with cluster authentication, enter 5 (New nodes).
The New Nodes menu appears.
Make your selection from the menu and follow the onscreen instructions.
The following example shows the scconf(1M) command generated from the scsetup utility that would prevent new machines from being added to the cluster.
# scconf -a -T node=.
The following example shows the scconf command generated from the scsetup utility that would enable all new machines to be added to the cluster.
# scconf -r -T all
The following example shows the scconf command generated from the scsetup utility to enable a single new machine to be added to the cluster.
# scconf -a -T node=phys-schost-4
The following example shows the scconf command generated from the scsetup utility to reset to standard UNIX authentication for new nodes joining the cluster.
# scconf -c -T authtype=unix
The following example shows the scconf command generated from the scsetup utility to use DES authentication for new nodes joining the cluster.
# scconf -c -T authtype=des
When using DES authentication, you must also configure all necessary encryption keys before a node can join the cluster. See the keyserv(1M) and publickey(4) man pages for more information.
Sun Cluster uses the Network Time Protocol (NTP) to maintain time synchronization between cluster nodes. Adjustments in the cluster occur automatically as needed when nodes synchronize their time. See the Sun Cluster 3.0 Concepts document and the Network Time Protocol User's Guide for more information.
When using NTP, do not attempt to adjust the cluster time while the cluster is up and running. This includes using the date(1), rdate(1M), or xntpdate(1M) commands interactively or within cron(1M) scripts.
Become superuser on a node in the cluster.
Shut down the cluster.
# scshutdown -g0 -y
Boot each node into non-cluster mode.
ok boot -x
On a single node, set the time of day by running the date(1) command.
# date HHMMSS
On the other machines, synchronize the time to that node by running the rdate(1M) command.
# rdate hostname
Boot each node to restart the cluster.
# reboot
Verify that the change took place on all cluster nodes.
On each node, run the date(1) command.

# date
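As a sketch of the date(1) and rdate(1M) steps above, the commands below build the HHMMSS argument from the current clock and echo the commands you would run rather than executing them, so the sketch is safe to run outside a cluster; phys-schost-1 is an example reference node name.

```shell
# Build the HHMMSS argument for date(1) from this machine's clock.
HHMMSS=$(date +%H%M%S)

# Echo (do not run) the commands from the procedure above.
echo "date $HHMMSS"           # run this on the node chosen as the reference
echo "rdate phys-schost-1"    # run this on every other node, naming that reference
```

The same HHMMSS string is used on the reference node only; all other nodes take their time from it through rdate.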
Use this procedure if you need to configure or change OpenBoot PROM settings.
Connect to the terminal concentrator port.
# telnet tc_name tc_port_number
tc_name
    Specifies the name of the terminal concentrator.

tc_port_number
    Specifies the port number on the terminal concentrator. Port numbers are configuration dependent. Typically, ports 2 and 3 (5002 and 5003) are used for the first cluster installed at a site.
Shut down the cluster node gracefully by using the scswitch(1M) command to evacuate any resource or disk device groups and then shutdown(1M) to bring the node to the OBP prompt.
# scswitch -S -h node
# shutdown -g 0 -y
Send a break to the node.
telnet> send brk
Execute the OpenBoot PROM commands.
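As an illustration only, typical commands at the resulting ok prompt include displaying and setting OpenBoot configuration variables and booting the node back into the cluster; verify these against your server's OpenBoot documentation before use.

```
ok printenv boot-device        \ display a configuration variable
ok setenv auto-boot? true      \ set a configuration variable
ok boot                        \ boot the node back into the cluster
```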
The following table lists the tasks to perform when adding a node to an existing cluster.
Table 6-2 Task Map: Adding a Node
| Task | For Instructions, Go To... |
| --- | --- |
| Add the cluster interconnect to the new node: install the host adapter, add the transport junction, and cable the interconnect. | Sun Cluster 3.0 Hardware Guide: Adding and Replacing Cluster Interconnect and Public Network Hardware |
| Add shared storage. | Sun Cluster 3.0 Hardware Guide: Installing and Replacing the StorEdge MultiPack Enclosure; Installing and Replacing the StorEdge D1000 Disk Array; Installing and Replacing the StorEdge A5x00 Disk Array |
| Add the node to the authorized node list (use scsetup). | Sun Cluster 3.0 System Administration Guide: How to Add a Cluster Node |
| Install and configure the software on the new cluster node: install the Solaris Operating Environment and Sun Cluster software, then configure the node as part of the cluster. | Sun Cluster 3.0 Installation Guide: Installing and Configuring Sun Cluster Software |
Before adding a machine to an existing cluster, be sure the node has all of the necessary software and hardware correctly installed and configured, including a good physical connection to the private cluster interconnect, as indicated in the "Adding a Node" task map. Refer to the Sun Cluster 3.0 Installation Guide and the scinstall(1M) man page for more information regarding software installations. For hardware installations, refer to the Sun Cluster 3.0 Hardware Guide or the hardware documentation that shipped with your server.
Become superuser on a current cluster member node.
Execute the scsetup utility.
# scsetup
The Main Menu appears.
Access the New Nodes Menu option by entering 5 at the Main Menu.
Modify the authorized list by entering 3 (Specify the name of the machine) at the New Nodes Menu.
Specify the name of a machine that can add itself.
Follow the prompts to add the cluster node. You will be asked for the name of the node to be added.
Verify that the node has been added to the authorized list.
# scconf -p | grep "Cluster new node"
The following example shows how to add a node named phys-schost-3 to an existing cluster.
[Become superuser.]
[Execute the scsetup utility:]
# scsetup

*** Main Menu ***

    Please select from one of the following options:

Option:  5

*** New Nodes Menu ***

    Please select from one of the following options:
    ...
    3) Specify the name of a machine which may add itself
    ...

Option:  3

>>> Specify a Machine which may Install itself into the Cluster <<<
    ...
Is it okay to continue (yes/no) [yes]? <Return>

Name of the host to add to the list of recognized machines? phys-schost-3

Is it okay to proceed with the update (yes/no) [yes]? <Return>

scconf -a -T node=phys-schost-3

Command completed successfully.

[Quit the scsetup New Nodes Menu and Main Menu:]
...
Option:  q

[Verify that the node has been added:]
# scconf -p | grep "Cluster new"
Cluster new node authentication:     unix
Cluster new node list:               phys-schost-3
Sun Cluster 3.0 Installation Guide: Installing and Configuring Sun Cluster Software.
The following table lists the tasks to perform when removing a node from an existing cluster.
Table 6-3 Task Map: Removing a Cluster Node
| Task | For Instructions, Go To... |
| --- | --- |
| Place the node being removed into maintenance state (use shutdown and scconf). | Sun Cluster 3.0 System Administration Guide: Chapter 4, Administering Quorum, "How to Put a Cluster Node Into Maintenance State" |
| Remove the node from all resource groups (use scrgadm). | Sun Cluster 3.0 Data Services Installation and Configuration Guide: Chapter 9, Administering Data Service Resources, "How to Remove a Node from an Existing Resource Group" |
| Remove the node from all device groups of which it is a member (use volume manager commands). | Sun Cluster 3.0 System Administration Guide: Chapter 3, Administering Global Devices and Cluster File Systems, "How to Remove a Node from a Disk Device Group (SDS)" and "How to Remove a Node from a Disk Device Group (VxVM)" |
| Remove all logical transport connections to the node being removed (use scsetup). | Sun Cluster 3.0 System Administration Guide: Chapter 5, Administering Cluster Networks, "How to Remove Cluster Transport Cables and Transport Adapters". To remove the physical hardware from the node, see Sun Cluster 3.0 Hardware Guide: Chapter 3, Installing and Maintaining Cluster Interconnect and Public Network Hardware. |
| Remove all quorum devices shared with the node being removed (use scsetup). | Sun Cluster 3.0 System Administration Guide: Chapter 4, Administering Quorum, "How to Remove a Quorum Device" |
| Remove the node from the cluster software configuration (use scconf). | Sun Cluster 3.0 System Administration Guide: Chapter 6, Administering the Cluster, "How to Remove a Cluster Node" |
| Remove required shared storage from the node and cluster (follow the procedures in your volume manager documentation and hardware guide). | SDS or VxVM administration guide; Sun Cluster 3.0 Hardware Guide: "How to Remove a StorEdge MultiPack Enclosure", "How to Remove a StorEdge D1000 Disk Array", "How to Remove a StorEdge A5x00 Disk Array" |
This is the last software configuration procedure in the process of removing a node from a cluster. Before beginning this procedure, you must complete all the prerequisite tasks listed in the "Removing a Cluster Node" task map. When finished with this procedure, remove the hardware connections as described in the Sun Cluster 3.0 Hardware Guide.
Become superuser on a node in the cluster.
Be sure you have placed the node in maintenance state and removed it from all resource groups, device groups, and quorum device configurations before continuing with this procedure.
Determine the boot disks in the cluster.
# scconf -pvv | grep Local_Disk
Identify which boot disks in the cluster are connected to the node being removed.
# scdidadm -L did-name
Remove the localonly flag from each boot disk.
# scconf -c -D name=devicename,localonly=false
Remove the node from all raw disk device groups of which it is a member.
This step must be completed for each disk device group.
# scconf -pvv | grep nodename | grep Device
# scconf -r -D name=devicename,nodelist=nodename
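Because the removal must be repeated for each raw disk device group, it can help to generate the `scconf -r` commands from the listing itself. The sketch below does this with sed against a captured sample of the listing (an assumption); on a live cluster you would pipe `scconf -pvv | grep nodename | grep Device` into `removal_cmds` and review the printed commands before running them.

```shell
NODE=phys-schost-2   # example name of the node being removed

# removal_cmds: read "(dsk/dN) Device group node list: ..." lines on
# stdin and print one "scconf -r" removal command per device group.
removal_cmds() {
  sed -n 's/^[ ]*(\(dsk\/d[0-9]*\)).*/scconf -r -D name=\1,nodelist='"$NODE"'/p'
}

# Sample listing captured from the example output format (an assumption).
removal_cmds <<'EOF'
(dsk/d4) Device group node list: phys-schost-2
(dsk/d2) Device group node list: phys-schost-1, phys-schost-2
EOF
# prints:
#   scconf -r -D name=dsk/d4,nodelist=phys-schost-2
#   scconf -r -D name=dsk/d2,nodelist=phys-schost-2
```

Printing the commands rather than executing them gives you a chance to confirm each device group before the node is removed from it.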
Remove the node from the cluster.
# scconf -r -h node=nodename
Verify the node removal using scstat.
# scstat -n
After the node has been removed from the cluster, you must reinstall the Solaris operating environment on the removed host before it can be placed back into service in any capacity.
This example shows how to remove a node (phys-schost-2) from a cluster.
[Become superuser on any node.]

[Determine the boot disks on the node:]
# scconf -pvv | grep Local_Disk
    (dsk/d4) Device group type:          Local_Disk
    (dsk/d3) Device group type:          Local_Disk
# scdidadm -L d4
...
4        phys-schost-2:/dev/rdsk/c1t3d0   /dev/did/rdsk/d4

[Remove the localonly flag:]
# scconf -c -D name=dsk/d4,localonly=false

[Remove the node from all raw disk device groups:]
# scconf -pvv | grep phys-schost-2 | grep Device
    (dsk/d4) Device group node list:  phys-schost-2
    (dsk/d2) Device group node list:  phys-schost-1, phys-schost-2
    (dsk/d1) Device group node list:  phys-schost-1, phys-schost-2
# scconf -r -D name=dsk/d4,nodelist=phys-schost-2
# scconf -r -D name=dsk/d2,nodelist=phys-schost-2
# scconf -r -D name=dsk/d1,nodelist=phys-schost-2

[Remove the node from the cluster:]
# scconf -r -h node=phys-schost-2

[Verify node removal:]
# scstat -n
-- Cluster Nodes --
                    Node name           Status
                    ---------           ------
  Cluster node:     phys-schost-1       Online
Sun Cluster 3.0 Hardware Guide:
How to Remove a StorEdge MultiPack Enclosure
How to Remove a StorEdge D1000 Disk Array
How to Remove a StorEdge A5x00 Disk Array