Sun Cluster System Administration Guide for Solaris OS

How to Shut Down a Cluster Node


Caution –

Do not use send brk on a cluster console to shut down a cluster node. The command is not supported within a cluster.


For information about shutting down a non-global zone, see Chapter 20, Installing, Booting, Halting, Uninstalling, and Cloning Non-Global Zones (Tasks), in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands.
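As an illustration, the long and short forms of a status command can be used interchangeably. The short name clrg shown here for clresourcegroup is an assumption based on the standard Sun Cluster command set; see Appendix A for the authoritative list.


    # clresourcegroup status
    # clrg status

Both commands produce identical output; only the command name differs.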

  1. SPARC: If your cluster is running Oracle Parallel Server or Oracle RAC, shut down all instances of the database.

    Refer to the Oracle Parallel Server or Oracle RAC product documentation for shutdown procedures.

  2. Become superuser or assume a role that provides solaris.cluster.admin RBAC authorization on the cluster node to be shut down.
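    One way to confirm that your current user or role carries the required authorization is the Solaris auths command, which lists the authorizations granted to the invoking user. This check is a sketch; the exact output format varies by Solaris release.


    # auths | grep solaris.cluster.admin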

  3. Switch all resource groups, resources, and device groups from the node being shut down to other cluster members.

    On the node to be shut down, type the following command. The clnode evacuate command switches all resource groups and device groups, including all non-global zones, from the specified node to the next-preferred node.


    # clnode evacuate node
    
    node

    Specifies the node from which you are switching resource groups and device groups.
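    Before proceeding, you can confirm that the evacuation completed by checking that no resource groups or device groups remain online on the node. The status commands below are part of the standard Sun Cluster command set; the -n option, which limits output to the named node, is an assumption here and may vary by release.


    # clresourcegroup status -n node
    # cldevicegroup status -n node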

  4. Shut down the cluster node.

    On the node to be shut down, type the following command.


    # shutdown -g0 -y -i0
    

    Verify that the cluster node displays the ok prompt (on a SPARC based system) or the Press any key to continue message in the GRUB menu (on an x86 based system).

  5. If necessary, power off the node.


Example 3–5 SPARC: Shutting Down a Cluster Node

The following example shows the console output when node phys-schost-1 is shut down. The -g0 option sets the grace period to zero, and the -y option provides an automatic yes response to the confirmation question. Shutdown messages for this node appear on the consoles of other nodes in the cluster.


# clnode evacuate phys-schost-1
# shutdown -g0 -y -i0
Wed Mar 10 13:47:32 phys-schost-1 cl_runtime:
WARNING: CMM monitoring disabled.
phys-schost-1# 
INIT: New run level: 0
The system is coming down.  Please wait.
Notice: rgmd is being stopped.
Notice: rpc.pmfd is being stopped.
Notice: rpc.fed is being stopped.
umount: /global/.devices/node@1 busy
umount: /global/phys-schost-1 busy
The system is down.
syncing file systems... done
Program terminated
ok 


Example 3–6 x86: Shutting Down a Cluster Node

The following example shows the console output when node phys-schost-1 is shut down. The -g0 option sets the grace period to zero, and the -y option provides an automatic yes response to the confirmation question. Shutdown messages for this node appear on the consoles of other nodes in the cluster.


# clnode evacuate phys-schost-1
# shutdown -g0 -y -i0
Shutdown started.    Wed Mar 10 13:47:32 PST 2004

Changing to init state 0 - please wait
Broadcast Message from root (console) on phys-schost-1 Wed Mar 10 13:47:32... 
THE SYSTEM phys-schost-1 IS BEING SHUT DOWN NOW ! ! !
Log off now or risk your files being damaged

phys-schost-1#
INIT: New run level: 0
The system is coming down.  Please wait.
System services are now being stopped.
/etc/rc0.d/K05initrgm: Calling scswitch -S (evacuate)
failfasts disabled on node 1
Print services already stopped.
Mar 10 13:47:44 phys-schost-1 syslogd: going down on signal 15
umount: /global/.devices/node@2 busy
umount: /global/.devices/node@1 busy
The system is down.
syncing file systems... done
WARNING: CMM: Node being shut down.
Type any key to continue 

See Also

See How to Boot a Cluster Node to restart a cluster node that has been shut down.