1. Introduction to Administering Oracle Solaris Cluster
2. Oracle Solaris Cluster and RBAC
3. Shutting Down and Booting a Cluster
4. Data Replication Approaches
5. Administering Global Devices, Disk-Path Monitoring, and Cluster File Systems
7. Administering Cluster Interconnects and Public Networks
How to Add a Node to the Authorized Node List
Creating a Non-Voting Node (Zone) in a Global Cluster
How to Create a Non-Voting Node in a Global Cluster
Removing a Node From a Cluster
How to Remove a Node From a Zone Cluster
How to Remove a Node From the Cluster Software Configuration
How to Remove a Non-Voting Node (Zone) From a Global Cluster
10. Configuring Control of CPU Usage
11. Patching Oracle Solaris Cluster Software and Firmware
12. Backing Up and Restoring a Cluster
13. Administering Oracle Solaris Cluster With the Graphical User Interfaces
This section provides instructions on how to remove a node from a global cluster or a zone cluster. You can also remove a specific zone cluster from a global cluster. The following table lists the tasks to perform to remove a node from an existing cluster. Perform the tasks in the order shown.
Caution - If your configuration runs Oracle RAC and you remove a node using only this procedure, the removal might cause the node to panic during a reboot. For instructions on how to remove a node from a RAC configuration, see How to Remove Support for Oracle RAC From Selected Nodes in Oracle Solaris Cluster Data Service for Oracle Real Application Clusters Guide. After you complete that process, follow the appropriate steps below.
Table 8-2 Task Map: Removing a Node
You can remove a node from a zone cluster by halting the node, uninstalling it, and removing the node from the configuration. If you decide later to add the node back into the zone cluster, follow the instructions in Table 8-1. Most of these steps are performed from the global-cluster node.
phys-schost# clzonecluster halt -n node zoneclustername
You can also use the clnode evacuate and shutdown commands within a zone cluster.
phys-schost# clzonecluster uninstall -n node zoneclustername
Use the following commands:
phys-schost# clzonecluster configure zoneclustername
clzc:sczone> remove node physical-host=zoneclusternodename
phys-schost# clzonecluster status
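Taken together, the halt, uninstall, configure, and status steps above can be traced as one session. This is a sketch, not the product's documented interface: the zone-cluster name sczone and node phys-schost-2 are placeholders, and the clzonecluster shell function below is a stub that only echoes its arguments so the flow can be dry-run outside a cluster. Delete the stub on a real global-cluster node, where the real command exists, and note that interactively the configure subcommands are typed at the clzc prompt rather than fed on standard input.

```shell
# Placeholder names; substitute your own zone cluster and node.
ZC=sczone
NODE=phys-schost-2

# Stub standing in for the real clzonecluster command so this sequence can
# be traced outside a cluster; delete this function on a real node.
clzonecluster() { echo "clzonecluster $*"; }

clzonecluster halt -n "$NODE" "$ZC"        # stop the zone-cluster node
clzonecluster uninstall -n "$NODE" "$ZC"   # uninstall the halted node
# Remove the node from the zone-cluster configuration.
clzonecluster configure "$ZC" <<EOF
remove node physical-host=$NODE
commit
exit
EOF
clzonecluster status "$ZC"                 # verify the node is gone
```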
Perform this procedure to remove a node from the global cluster.
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
On SPARC based systems, run the following command.
ok boot -x
On x86 based systems, run the following commands.
phys-schost# shutdown -g0 -y -i0
Press any key to continue
The GRUB menu appears similar to the following:
GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
+-------------------------------------------------------------------------+
| Solaris 10 /sol_10_x86                                                  |
| Solaris failsafe                                                        |
|                                                                         |
+-------------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press enter to boot the selected OS, 'e' to edit the
commands before booting, or 'c' for a command-line.
For more information about GRUB based booting, see Booting an x86 Based System by Using GRUB (Task Map) in System Administration Guide: Basic Administration.
The GRUB boot parameters screen appears similar to the following:
GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
+----------------------------------------------------------------------+
| root (hd0,0,a)                                                       |
| kernel /platform/i86pc/multiboot                                     |
| module /platform/i86pc/boot_archive                                  |
+----------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press 'b' to boot, 'e' to edit the selected command in the
boot sequence, 'c' for a command-line, 'o' to open a new line
after ('O' for before) the selected line, 'd' to remove the
selected line, or escape to go back to the main menu.
[ Minimal BASH-like line editing is supported. For the first word, TAB
lists possible command completions. Anywhere else TAB lists the possible
completions of a device/filename. ESC at any time exits. ]

grub edit> kernel /platform/i86pc/multiboot -x
The screen displays the edited command.
GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
+----------------------------------------------------------------------+
| root (hd0,0,a)                                                       |
| kernel /platform/i86pc/multiboot -x                                  |
| module /platform/i86pc/boot_archive                                  |
+----------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press 'b' to boot, 'e' to edit the selected command in the
boot sequence, 'c' for a command-line, 'o' to open a new line
after ('O' for before) the selected line, 'd' to remove the
selected line, or escape to go back to the main menu.
This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.
Note - If the node to be removed is not available or can no longer be booted, run the following command on any active cluster node: clnode clear -F <node-to-be-removed>. Verify the node removal by running clnode status <nodename>.
phys-schost# clnode remove -F
If the clnode remove command fails and a stale node reference exists, run clnode clear -F nodename on an active node.
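The failure handling described above can be sketched as a small conditional. The clnode function below is a stub that simulates a failed remove so the fallback path runs outside a cluster; delete it on a real node, run `clnode remove -F` on the node being removed while it is in noncluster mode, and run the clear step from an active cluster node. The node name is a placeholder.

```shell
NODE=phys-schost-2   # placeholder for the node being removed

# Stub that makes `clnode remove` fail, to exercise the fallback path
# outside a cluster; delete this function on a real node.
clnode() { if [ "$1" = remove ]; then return 1; fi; echo "clnode $*"; }

# If a stale node reference makes the removal fail, clear the reference
# (this clear step belongs on an active cluster node).
if ! clnode remove -F; then
  clnode clear -F "$NODE"
fi
clnode status "$NODE"   # verify the removal
```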
Note - If you are removing the last node in the cluster, the node must be in noncluster mode with no active nodes left in the cluster.
phys-schost# clnode status nodename
Example 8-2 Removing a Node From the Cluster Software Configuration
This example shows how to remove a node (phys-schost-2) from a cluster. The clnode remove command is run in noncluster mode from the node you want to remove from the cluster (phys-schost-2).
[Remove the node from the cluster:]
phys-schost-2# clnode remove
phys-schost-1# clnode clear -F phys-schost-2
[Verify node removal:]
phys-schost-1# clnode status
-- Cluster Nodes --
                    Node name           Status
                    ---------           ------
  Cluster node:     phys-schost-1       Online
See Also
To uninstall Oracle Solaris Cluster software from the removed node, see How to Uninstall Oracle Solaris Cluster Software From a Cluster Node.
For hardware procedures, see the Oracle Solaris Cluster 3.3 Hardware Administration Manual.
For a comprehensive list of tasks for removing a cluster node, see Table 8-2.
To add a node to an existing cluster, see How to Add a Node to the Authorized Node List.
Follow the procedures in Deleting a Non-Global Zone From the System in System Administration Guide: Oracle Solaris Containers-Resource Management and Oracle Solaris Zones.
Use this procedure to detach a storage array from a single cluster node, in a cluster that has three-node or four-node connectivity.
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
phys-schost# clresourcegroup status
phys-schost# cldevicegroup status
Caution (SPARC only) - If your cluster is running Oracle RAC software, shut down the Oracle RAC database instance that is running on the node before you move the groups off the node. For instructions, see the Oracle Database Administration Guide.
phys-schost# clnode evacuate node
The clnode evacuate command switches over all device groups from the specified node to the next-preferred node. The command also switches all resource groups from voting or non-voting nodes on the specified node to the next-preferred voting or non-voting node.
For the procedure on acquiescing I/O activity to Veritas shared disk groups, see your VxVM documentation.
For the procedure on putting a device group in maintenance state, see How to Put a Node Into Maintenance State.
If you use VxVM or a raw disk, use the cldevicegroup(1CL) command to remove the device groups.
phys-schost# clresourcegroup remove-node -z zone -n node + | resourcegroup

node
    The name of the node.

zone
    The name of the non-voting node that can master the resource group. Specify zone only if you specified a non-voting node when you created the resource group.
See the Oracle Solaris Cluster Data Services Planning and Administration Guide for more information about changing a resource group's node list.
Note - Resource type, resource group, and resource property names are case sensitive when clresourcegroup is executed.
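As a hedged illustration of the syntax above, using hypothetical names (node phys-schost-2, non-voting node zone1, and resource group rg-nfs, none of which come from this procedure), the two common invocations look like the following. The clresourcegroup shell function is a stub that only echoes its arguments so the invocations can be checked outside a cluster; delete it on a real node.

```shell
# Dry-run stub; the real clresourcegroup command exists on a cluster node.
clresourcegroup() { echo "clresourcegroup $*"; }

# Remove the node from the node list of every resource group ("+").
clresourcegroup remove-node -n phys-schost-2 +

# Remove a non-voting node (zone1) on that host from one resource group.
clresourcegroup remove-node -n phys-schost-2 -z zone1 rg-nfs
```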
For the procedure on removing host adapters, see the documentation for the node.
phys-schost# pkgrm SUNWscucm
Caution (SPARC only) - If you do not remove the Oracle RAC software from the node that you disconnected, the node panics when the node is reintroduced to the cluster and potentially causes a loss of data availability.
On SPARC based systems, run the following command.
ok boot
On x86 based systems, run the following commands.
When the GRUB menu is displayed, select the appropriate Oracle Solaris entry and press Enter. The GRUB menu appears similar to the following:
GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
+-------------------------------------------------------------------------+
| Solaris 10 /sol_10_x86                                                  |
| Solaris failsafe                                                        |
|                                                                         |
+-------------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press enter to boot the selected OS, 'e' to edit the
commands before booting, or 'c' for a command-line.
phys-schost# devfsadm -C
phys-schost# cldevice refresh
For procedures about bringing a Veritas shared disk group online, see your Veritas Volume Manager documentation.
For information about bringing a device group online, see How to Bring a Node Out of Maintenance State.
To correct any error messages that occurred while attempting to perform any of the cluster node removal procedures, perform the following procedure.
phys-schost# boot
If no, proceed to Step b.
If yes, perform the following steps to remove the node from device groups.
Follow procedures in How to Remove a Node From All Device Groups.
# mv /etc/cluster/ccr /etc/cluster/ccr.old
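A timestamped variant of the move above avoids overwriting an earlier backup if the recovery procedure has to be repeated. This is a sketch, not part of the documented procedure: the CCR variable defaulting and the dry-run guard are assumptions added for illustration.

```shell
# Defaults to the real CCR location; override CCR to rehearse elsewhere.
CCR=${CCR:-/etc/cluster/ccr}
STAMP=$(date +%Y%m%d%H%M%S)
if [ -e "$CCR" ]; then
  mv "$CCR" "$CCR.$STAMP" && echo "CCR saved as $CCR.$STAMP"
else
  echo "no CCR at $CCR (dry run?)"
fi
```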