Sun Cluster System Administration Guide for Solaris OS

Procedure: How to Reboot a Node

To shut down or reboot other active nodes in the global cluster or zone cluster, wait until the node that you are rebooting has fully booted and rejoined the cluster (for example, until the login prompt is displayed on its console).

Otherwise, the node will not be available to take over services from other nodes in the cluster that you shut down or reboot. For information about rebooting a non-global zone, see Chapter 20, Installing, Booting, Halting, Uninstalling, and Cloning Non-Global Zones (Tasks), in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.
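For example, the clzonecluster command can also be invoked through its documented short form, clzc, so the following two commands are equivalent:

phys-schost# clzonecluster status
phys-schost# clzc status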

  1. If the global-cluster or zone-cluster node is running the Oracle Real Application Clusters (RAC) database, shut down all instances of the database on the node that you are shutting down.

    Refer to the Oracle RAC product documentation for shutdown procedures.
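    For example, on a configuration that uses Oracle Clusterware, the srvctl utility is commonly used to stop an instance. A minimal sketch, assuming a database named orcl with instance orcl1 on this node (both names are placeholders for your configuration):

      phys-schost# srvctl stop instance -d orcl -i orcl1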

  2. Become superuser or assume a role that provides solaris.cluster.admin RBAC authorization on the node to shut down. Perform all steps in this procedure from a node of the global cluster.

  3. Shut down the global-cluster node by using the clnode evacuate and shutdown commands. Reboot the zone-cluster node by using the clzonecluster reboot command executed on a node of the global cluster. (The clnode evacuate and shutdown commands also work in a zone cluster.)

    For a global cluster, type the following commands on the node to shut down. The clnode evacuate command switches over all device groups from the specified node to the next-preferred node. The command also switches all resource groups from global or non-global zones on the specified node to the next-preferred global or non-global zones on other nodes.
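    In these commands, node is the name of the node that you are rebooting. The shutdown options follow standard Solaris shutdown(1M) semantics, annotated here for reference:

      phys-schost# clnode evacuate node   # switch device groups and resource groups off this node
      phys-schost# shutdown -g0 -y -i6    # -g0: zero grace period, -y: answer prompts yes, -i6: init state 6 (reboot)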

    • On a SPARC based system, run the following commands.


      phys-schost# clnode evacuate node
      

      phys-schost# shutdown -g0 -y -i6
      
    • On an x86 based system, run the following commands.


      phys-schost# clnode evacuate node
      

      phys-schost# shutdown -g0 -y -i6
      

      When the GRUB menu is displayed, select the appropriate Solaris entry and press Enter. The GRUB menu appears similar to the following:


      GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
      +-------------------------------------------------------------------------+
      | Solaris 10 /sol_10_x86                                                  |
      | Solaris failsafe                                                        |
      |                                                                         |
      +-------------------------------------------------------------------------+
      Use the ^ and v keys to select which entry is highlighted.
      Press enter to boot the selected OS, 'e' to edit the
      commands before booting, or 'c' for a command-line.
    • Specify the zone-cluster node to shut down and reboot.


      phys-schost# clzonecluster reboot -n node zoneclustername
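
      To reboot all nodes of the zone cluster at once, you can omit the -n option:

      phys-schost# clzonecluster reboot zoneclustername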
      

    Note –

    Nodes must have a working connection to the cluster interconnect to attain cluster membership.
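
    To check the interconnect from an active cluster node, you can run the clinterconnect status command (short form clintr). For example:

      phys-schost# clinterconnect status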


  4. Verify that the node booted without error and is online.

    • Verify that the global-cluster node is online. (Sample output appears after this list.)


      phys-schost# cluster status -t node
      
    • Verify that the zone-cluster node is online.


      phys-schost# clzonecluster status
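
    For the global-cluster check, output similar to the following indicates that the rebooted node is back online (node names are illustrative):

      phys-schost# cluster status -t node

      === Cluster Nodes ===

      --- Node Status ---

      Node Name                                       Status
      ---------                                       ------
      phys-schost-1                                   Online
      phys-schost-2                                   Online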
      

Example 3–11 SPARC: Rebooting a Global-Cluster Node

The following example shows the console output when node phys-schost-1 is rebooted. Messages for this node, such as shutdown and startup notification, appear on the consoles of other nodes in the global cluster.


phys-schost# clnode evacuate phys-schost-1
phys-schost# shutdown -g0 -y -i6
Shutdown started.    Wed Mar 10 13:47:32 phys-schost-1 cl_runtime: 

WARNING: CMM monitoring disabled.
phys-schost-1# 
INIT: New run level: 6
The system is coming down.  Please wait.
System services are now being stopped.
Notice: rgmd is being stopped.
Notice: rpc.pmfd is being stopped.
Notice: rpc.fed is being stopped.
umount: /global/.devices/node@1 busy
umount: /global/phys-schost-1 busy
The system is down.
syncing file systems... done
rebooting...
Resetting ... 
...
Sun Ultra 1 SBus (UltraSPARC 143MHz), No Keyboard
OpenBoot 3.11, 128 MB memory installed, Serial #5932401.
Ethernet address 8:8:20:99:ab:77, Host ID: 8899ab77.
...
Rebooting with command: boot
...
Hostname: phys-schost-1
Booting as part of a cluster
...
NOTICE: Node phys-schost-1: attempting to join cluster
...
NOTICE: Node phys-schost-1: joined cluster
...
The system is coming up.  Please wait.
The system is ready.
phys-schost-1 console login: 


Example 3–12 x86: Rebooting a Global-Cluster Node

The following example shows the console output when node phys-schost-1 is rebooted. Messages for this node, such as shutdown and startup notification, appear on the consoles of other nodes in the global cluster.


phys-schost# clnode evacuate phys-schost-1
phys-schost# shutdown -g0 -y -i6

GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
+-------------------------------------------------------------------------+
| Solaris 10 /sol_10_x86                                                  |
| Solaris failsafe                                                        |
|                                                                         |
+-------------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press enter to boot the selected OS, 'e' to edit the
commands before booting, or 'c' for a command-line.
Hostname: phys-schost-1
Booting as part of a cluster
...
NOTICE: Node phys-schost-1: attempting to join cluster
...
NOTICE: Node phys-schost-1: joined cluster
...
The system is coming up.  Please wait.
checking ufs filesystems
...
reservation program successfully exiting
Print services started.
volume management starting.
The system is ready.
phys-schost-1 console login: 


Example 3–13 Rebooting a Zone-Cluster Node

The following example shows how to reboot a node on a zone cluster.


phys-schost# clzonecluster reboot -n schost-4 sparse-sczone
Waiting for zone reboot commands to complete on all the nodes of the zone cluster "sparse-sczone"...
Sep  5 19:40:59 schost-4 cl_runtime: NOTICE: Membership : Node 3 of cluster 'sparse-sczone' died.
phys-schost# Sep  5 19:41:27 schost-4 cl_runtime: NOTICE: Membership : Node 3 of cluster 'sparse-sczone' joined.

phys-schost#
phys-schost# clzonecluster status

=== Zone Clusters ===

--- Zone Cluster Status ---
Name            Node Name   Zone HostName   Status   Zone Status
----            ---------   -------------   ------   -----------
sparse-sczone   schost-1    sczone-1        Online   Running
                schost-2    sczone-2        Online   Running
                schost-3    sczone-3        Online   Running
                schost-4    sczone-4        Online   Running

phys-schost#