Oracle Solaris Cluster System Administration Guide (Oracle Solaris Cluster 4.1)
Removing a Node From a Cluster

This section provides instructions on how to remove a node from a global cluster or a zone cluster. You can also remove a specific zone cluster from a global cluster. The following table lists the tasks to perform to remove a node from an existing cluster. Perform the tasks in the order shown.
Caution - If you remove a node using only this procedure for a RAC configuration, the removal might cause the node to panic during a reboot. For instructions on how to remove a node from a RAC configuration, see How to Remove Support for Oracle RAC From Selected Nodes in Oracle Solaris Cluster Data Service for Oracle Real Application Clusters Guide. After you complete that procedure, follow the appropriate steps below.
Table 8-2 Task Map: Removing a Node
How to Remove a Node From a Zone Cluster

You can remove a node from a zone cluster by halting the node, uninstalling it, and removing the node from the configuration. If you decide later to add the node back into the zone cluster, follow the instructions in Table 8-1. Most of these steps are performed from the global-cluster node.
Shut down the zone-cluster node.
phys-schost# clzonecluster halt -n node zoneclustername
You can also use the clnode evacuate and shutdown commands within a zone cluster.
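A minimal sketch of that alternative, assuming a zone cluster named sczone and a zone-cluster node named zc-host-1 (both hypothetical names); the clnode and shutdown commands run from a shell inside the zone-cluster node:
phys-schost# zlogin sczone
zc-host-1# clnode evacuate zc-host-1
zc-host-1# shutdown -g0 -y -i0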
Remove the zone-cluster node from all resource groups in the zone cluster.
phys-schost# clrg remove-node -n zonehostname -Z zoneclustername rg-name
Uninstall the zone-cluster node.
phys-schost# clzonecluster uninstall -n node zoneclustername
Remove the zone-cluster node from the configuration. Use the following commands:
phys-schost# clzonecluster configure zoneclustername
clzc:sczone> remove node physical-host=node
clzc:sczone> exit
Verify that the node was removed from the zone cluster.
phys-schost# clzonecluster status
How to Remove a Node From the Cluster Software Configuration

Perform this procedure to remove a node from the global cluster.
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
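For example, these two commands are equivalent, because clrg is the short form of clresourcegroup:
phys-schost# clresourcegroup status
phys-schost# clrg status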
Perform all steps in this procedure from a node of the global cluster.
Boot the node that you want to remove into noncluster mode. For a zone-cluster node, follow the instructions in How to Remove a Node From a Zone Cluster before you perform this step.
On SPARC based systems, run the following command.
ok boot -x
On x86 based systems, run the following commands.
phys-schost# shutdown -g0 -y -i0
Press any key to continue
For more information about GRUB based booting, see Booting a System in Booting and Shutting Down Oracle Solaris 11.1 Systems.
[ Minimal BASH-like line editing is supported. For the first word, TAB
lists possible command completions. Anywhere else TAB lists the possible
completions of a device/filename. ESC at any time exits. ]

grub edit> kernel$ /platform/i86pc/kernel/#ISADIR/unix -B $ZFS-BOOTFS -x
The screen displays the edited command. Type b to boot the node into noncluster mode.
This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.
Note - If the node to be removed is not available or can no longer be booted, run the following command on any active cluster node: clnode clear -F <node-to-be-removed>. Verify the node removal by running clnode status <nodename>.
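For example, a sketch that assumes the unavailable node is named phys-schost-2 (a hypothetical name):
phys-schost# clnode clear -F phys-schost-2
phys-schost# clnode status phys-schost-2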
Run the following command from an active node:
phys-schost# clnode clear -F nodename
If you have resource groups that have rg_system=true, you must change them to rg_system=false so that the clnode clear -F command will succeed. After you run clnode clear -F, reset the resource groups back to rg_system=true.
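A minimal sketch of that property change, assuming a resource group named rg1 (a hypothetical name):
phys-schost# clresourcegroup set -p RG_system=false rg1
[Run clnode clear -F, then restore the property:]
phys-schost# clresourcegroup set -p RG_system=true rg1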
Run the following command from the node you want to remove:
phys-schost# clnode remove -F
Note - If you are removing the last node in the cluster, the node must be in noncluster mode with no active nodes left in the cluster.
Verify the node removal.
phys-schost# clnode status nodename
Example 8-2 Removing a Node From the Cluster Software Configuration
This example shows how to remove a node (phys-schost-2) from a cluster. The clnode remove command is run in noncluster mode from the node you want to remove from the cluster (phys-schost-2).
[Remove the node from the cluster:]
phys-schost-2# clnode remove
phys-schost-1# clnode clear -F phys-schost-2
[Verify node removal:]
phys-schost-1# clnode status
-- Cluster Nodes --
                 Node name           Status
                 ---------           ------
Cluster node:    phys-schost-1       Online
See Also
To uninstall Oracle Solaris Cluster software from the removed node, see How to Uninstall Oracle Solaris Cluster Software From a Cluster Node.
For hardware procedures, see the Oracle Solaris Cluster 4.1 Hardware Administration Manual.
For a comprehensive list of tasks for removing a cluster node, see Table 8-2.
To add a node to an existing cluster, see How to Add a Node to an Existing Cluster.
How to Remove Connectivity Between an Array and a Single Node, in a Cluster With Greater Than Two-Node Connectivity

Use this procedure to detach a storage array from a single cluster node, in a cluster that has three-node or four-node connectivity.
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
Check the status of the resource groups and device groups.
phys-schost# clresourcegroup status
phys-schost# cldevicegroup status
Caution (SPARC only) - If your cluster is running Oracle RAC software, shut down the Oracle RAC database instance that is running on the node before you move the groups off the node. For instructions, see the Oracle Database Administration Guide.
Move the resource groups and device groups off the node that you want to disconnect.
phys-schost# clnode evacuate node
The clnode evacuate command switches over all device groups from the specified node to the next-preferred node. The command also switches all resource groups from the specified node to the next-preferred node.
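For example, assuming the node to be disconnected is named phys-schost-2 (a hypothetical name):
phys-schost# clnode evacuate phys-schost-2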
For the procedure on putting a device group in maintenance state, see How to Put a Node Into Maintenance State.
If you use a raw disk, use the cldevicegroup(1CL) command to remove the device groups.
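A sketch of that removal, assuming the node phys-schost-2 and a raw-disk device group named dsk/d4 (both hypothetical names):
phys-schost# cldevicegroup remove-node -n phys-schost-2 dsk/d4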
phys-schost# clresourcegroup remove-node -n node + | resourcegroup
-n node
    The name of the node.
See the Oracle Solaris Cluster Data Services Planning and Administration Guide for more information about changing a resource group's node list.
Note - Resource type, resource group, and resource property names are case sensitive when clresourcegroup is executed.
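For example, a sketch that removes the hypothetical node phys-schost-2 from all resource groups by using the + operand:
phys-schost# clresourcegroup remove-node -n phys-schost-2 +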
Otherwise, skip this step.
If you are removing the host adapter from the node that you are disconnecting, skip to Step 11.
For the procedure on removing host adapters, see the documentation for the node.
Remove the Oracle RAC software packages.
phys-schost# pkg uninstall /ha-cluster/library/ucmm
Caution (SPARC only) - If you do not remove the Oracle RAC software from the node that you disconnected, the node panics when the node is reintroduced to the cluster and potentially causes a loss of data availability.
Boot the node into cluster mode. On SPARC based systems, run the following command.
ok boot
On x86 based systems, run the following commands.
When the GRUB menu is displayed, select the appropriate Oracle Solaris entry and press Enter.
Update the device namespace.
phys-schost# devfsadm -C
phys-schost# cldevice refresh
For information about bringing a device group online, see How to Bring a Node Out of Maintenance State.
How to Correct Error Messages

To correct any error messages that occurred while attempting to perform any of the cluster node removal procedures, perform the following procedure.
Perform this procedure only on a global cluster.
Attempt to rejoin the node to the global cluster.
phys-schost# boot
Determine whether the node successfully rejoined the cluster.
If no, proceed to Step b.
If yes, perform the following steps to remove the node from device groups.
Follow procedures in How to Remove a Node From All Device Groups.
Rename the /etc/cluster/ccr file:
# mv /etc/cluster/ccr /etc/cluster/ccr.old