Sun Cluster System Administration Guide for Solaris OS

How to Reboot a Cluster Node

If you intend to shut down or reboot other, active nodes in the cluster, wait until the node you are rebooting has reached at least the following status:

Otherwise, the node will not be available to take over services from other nodes in the cluster that you shut down or reboot. For information about rebooting a non-global zone, see Chapter 20, Installing, Booting, Halting, Uninstalling, and Cloning Non-Global Zones (Tasks), in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.
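
For example, before you shut down or reboot another node, you can check from the rebooted node that it has come far enough up to take over services. The following sketch assumes the node runs the Solaris 10 OS; the svcs command reports the state of the multiuser-server SMF milestone, and the clnode status command reports cluster membership. The output shown is illustrative.


# svcs multi-user-server
STATE          STIME    FMRI
online         13:52:10 svc:/milestone/multi-user-server:default
# clnode status

The node should be reported with a status of Online before you shut down or reboot other nodes.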

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands.
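
For example, clrg is the short form of the clresourcegroup command; the following two commands are equivalent. The resource group name rg1 is used only for illustration.


# clresourcegroup status rg1
# clrg status rg1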

  1. SPARC: If the cluster node is running Oracle Parallel Server or Oracle RAC, shut down all instances of the database.

    Refer to the Oracle Parallel Server or Oracle RAC product documentation for shutdown procedures.
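
    As a sketch only, many Oracle RAC installations stop all instances of a database with the Oracle srvctl utility. The database name orcl below is hypothetical, and the exact procedure depends on your Oracle release, so follow the Oracle documentation for your installation.


      # srvctl stop database -d orcl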

  2. Become superuser or assume a role that provides solaris.cluster.admin RBAC authorization on the cluster node to be shut down.
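
    For example, you might assume a role to which the solaris.cluster.admin authorization has been granted. The role name cladmin below is hypothetical; the auths command lists the authorizations that the role holds, and the prompts shown are illustrative.


      % su - cladmin
      Password:
      $ auths | grep solaris.cluster.admin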

  3. Shut down the cluster node by using the clnode evacuate and shutdown commands.

    Enter the following commands on the node to be shut down. The clnode evacuate command switches over all device groups from the specified node to the next-preferred node. The command also switches all resource groups from global or non-global zones on the specified node to the next-preferred global or non-global zones on other nodes. A sketch for verifying the switchover appears after the note at the end of this step.

    • On SPARC based systems, run the following commands:


      # clnode evacuate node
      # shutdown -g0 -y -i6
      
    • On x86 based systems, run the following commands:


      # clnode evacuate node
      # shutdown -g0 -y -i6
      

      When the GRUB menu is displayed, select the appropriate Solaris entry and press Enter. The GRUB menu appears similar to the following:


      GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
      +-------------------------------------------------------------------------+
      | Solaris 10 /sol_10_x86                                                  |
      | Solaris failsafe                                                        |
      |                                                                         |
      +-------------------------------------------------------------------------+
      Use the ^ and v keys to select which entry is highlighted.
      Press enter to boot the selected OS, 'e' to edit the
      commands before booting, or 'c' for a command-line.

    Note –

    Cluster nodes must have a working connection to the cluster interconnect to attain cluster membership.
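
    Before you run the shutdown command, you can check that nothing remains mastered or online on the node you evacuated. A minimal sketch, assuming the node is phys-schost-1; the device group and resource group names in the output, and where they are brought online, depend on your configuration.


      # cldevicegroup status
      # clresourcegroup status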


  4. Verify that the node has booted without error and is online.


    # cluster status -t node
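    The output is similar to the following; the exact layout depends on your release, and the second node is shown only for illustration.

    === Cluster Nodes ===

    --- Node Status ---

    Node Name                                       Status
    ---------                                       ------
    phys-schost-1                                   Online
    phys-schost-2                                   Online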
    

Example 3–8 SPARC: Rebooting a Cluster Node

The following example shows the console output when node phys-schost-1 is rebooted. Messages for this node, such as shutdown and startup notification, appear on the consoles of other nodes in the cluster.


# clnode evacuate phys-schost-1
# shutdown -g0 -y -i6
Shutdown started.    Wed Mar 10 13:47:32 phys-schost-1 cl_runtime: 

WARNING: CMM monitoring disabled.
phys-schost-1# 
INIT: New run level: 6
The system is coming down.  Please wait.
System services are now being stopped.
Notice: rgmd is being stopped.
Notice: rpc.pmfd is being stopped.
Notice: rpc.fed is being stopped.
umount: /global/.devices/node@1 busy
umount: /global/phys-schost-1 busy
The system is down.
syncing file systems... done
rebooting...
Resetting ... 
...
Sun Ultra 1 SBus (UltraSPARC 143MHz), No Keyboard
OpenBoot 3.11, 128 MB memory installed, Serial #5932401.
Ethernet address 8:8:20:99:ab:77, Host ID: 8899ab77.
...
Rebooting with command: boot
...
Hostname: phys-schost-1
Booting as part of a cluster
...
NOTICE: Node phys-schost-1: attempting to join cluster
...
NOTICE: Node phys-schost-1: joined cluster
...
The system is coming up.  Please wait.
The system is ready.
phys-schost-1 console login: 


Example 3–9 x86: Rebooting a Cluster Node

The following example shows the console output when rebooting node phys-schost-1. Messages for this node, such as shutdown and startup notification, appear on the consoles of other nodes in the cluster.


# clnode evacuate phys-schost-1
# shutdown -g0 -y -i6
Rebooting with command: boot 
...
Hostname: phys-schost-1
Booting as part of a cluster
...
NOTICE: Node phys-schost-1: attempting to join cluster
...
NOTICE: Node phys-schost-1: joined cluster
...
The system is coming up.  Please wait.
checking ufs filesystems
...
reservation program successfully exiting
Print services started.
volume management starting.
The system is ready.
phys-schost-1 console login: