Sun Cluster System Administration Guide for Solaris OS

How to Reboot a Cluster

To reboot a global cluster, run the cluster shutdown command and then boot the global cluster with the boot command on each node. To reboot a zone cluster, use the clzonecluster halt command and then use the clzonecluster boot command to boot the zone cluster. You can also use the clzonecluster reboot command to reboot a zone cluster in a single step. For more information, see the cluster(1CL), boot(1M), and clzonecluster(1CL) man pages.
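
For example, the following single command, run from a global-cluster node, halts and then boots all nodes of a zone cluster. The name zoneclustername is a placeholder for the name of your zone cluster:


phys-schost# clzonecluster reboot zoneclustername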

The phys-schost# prompt indicates a global-cluster node. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

  1. If your cluster is running Oracle RAC, shut down all instances of the database on the cluster you are shutting down.

    Refer to the Oracle RAC product documentation for shutdown procedures.
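
    For example, with Oracle Clusterware you can typically stop all instances of a database by running the srvctl utility as the Oracle software owner. This is a sketch only; the database name orcl is illustrative, and the exact procedure depends on your Oracle RAC release:


      $ srvctl stop database -d orcl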

  2. Become superuser or assume a role that provides solaris.cluster.admin RBAC authorization on any node in the cluster. Perform all steps in this procedure from a node of the global cluster.
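
    For example, one way to become superuser is to use the su command. The phys-schost% user prompt shown here is illustrative:


      phys-schost% su -
      Password:
      phys-schost#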

  3. Shut down the cluster.

    • Shut down the global cluster.


      phys-schost# cluster shutdown -g0 -y 
      
    • If you have a zone cluster, shut down the zone cluster from a global-cluster node.


      phys-schost# clzonecluster halt zoneclustername
      

    Each node is shut down. You can also use the cluster shutdown command within a zone cluster to shut down the zone cluster.



  4. Boot each node.

    The order in which the nodes are booted is irrelevant unless you make configuration changes between shutdowns. If you make configuration changes between shutdowns, start the node with the most current configuration first.

    • For a global-cluster node on a SPARC based system, run the following command.


      ok boot
      
    • For a global-cluster node on an x86 based system, run the following commands.

      When the GRUB menu is displayed, select the appropriate Solaris entry and press Enter. The GRUB menu appears similar to the following:


      GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
      +-------------------------------------------------------------------------+
      | Solaris 10 /sol_10_x86                                                  |
      | Solaris failsafe                                                        |
      |                                                                         |
      +-------------------------------------------------------------------------+
      Use the ^ and v keys to select which entry is highlighted.
      Press enter to boot the selected OS, 'e' to edit the
      commands before booting, or 'c' for a command-line.

    Note –

    Nodes must have a working connection to the cluster interconnect to attain cluster membership.


    For more information about GRUB-based booting, see Booting an x86 Based System by Using GRUB (Task Map) in System Administration Guide: Basic Administration.

    • For a zone cluster, type the following command on a single node of the global cluster to boot the zone cluster.


      phys-schost# clzonecluster boot zoneclustername
      

    Messages appear on the booted nodes' consoles as cluster components are activated.

  5. Verify that the nodes booted without error and are online.

    • The clnode status command reports the status of the nodes on the global cluster.


      phys-schost# clnode status
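
      Output similar to the following appears. The node names and statuses shown here are illustrative:


      === Cluster Nodes ===

      --- Node Status ---

      Node Name                                       Status
      ---------                                       ------
      phys-schost-1                                   Online
      phys-schost-2                                   Online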
      
    • Running the clzonecluster status command on a global-cluster node reports the status of the zone-cluster nodes.


      phys-schost# clzonecluster status
      

      You can also run the cluster status command within a zone cluster to see the status of the nodes.
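
      For example, from a node of the zone cluster, where zcnode# is an illustrative zone-cluster prompt:


      zcnode# cluster status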


      Note –

      If a node's /var file system fills up, Sun Cluster might not be able to restart on that node. If this problem arises, see How to Repair a Full /var File System.



Example 3–5 Rebooting a Zone Cluster

The following example shows how to halt and boot a zone cluster called sparse-sczone. You can also use the clzonecluster reboot command.


phys-schost# clzonecluster halt sparse-sczone
Waiting for zone halt commands to complete on all the nodes of the zone cluster "sparse-sczone"...
Sep  5 19:17:46 schost-4 cl_runtime: NOTICE: Membership : Node 4 of cluster 'sparse-sczone' died.
Sep  5 19:17:46 schost-4 cl_runtime: NOTICE: Membership : Node 2 of cluster 'sparse-sczone' died.
Sep  5 19:17:46 schost-4 cl_runtime: NOTICE: Membership : Node 1 of cluster 'sparse-sczone' died.
Sep  5 19:17:46 schost-4 cl_runtime: NOTICE: Membership : Node 3 of cluster 'sparse-sczone' died.
phys-schost#
phys-schost# clzonecluster boot sparse-sczone
Waiting for zone boot commands to complete on all the nodes of the zone cluster "sparse-sczone"...
phys-schost# Sep  5 19:18:23 schost-4 cl_runtime: NOTICE: Membership : Node 1 of cluster 'sparse-sczone' joined.
Sep  5 19:18:23 schost-4 cl_runtime: NOTICE: Membership : Node 2 of cluster 'sparse-sczone' joined.
Sep  5 19:18:23 schost-4 cl_runtime: NOTICE: Membership : Node 3 of cluster 'sparse-sczone' joined.
Sep  5 19:18:23 schost-4 cl_runtime: NOTICE: Membership : Node 4 of cluster 'sparse-sczone' joined.

phys-schost#
phys-schost# clzonecluster status

=== Zone Clusters ===

--- Zone Cluster Status ---

Name            Node Name   Zone HostName   Status   Zone Status
----            ---------   -------------   ------   -----------
sparse-sczone   schost-1    sczone-1        Online   Running
                schost-2    sczone-2        Online   Running
                schost-3    sczone-3        Online   Running
                schost-4    sczone-4        Online   Running
phys-schost# 


Example 3–6 SPARC: Rebooting a Global Cluster

The following example shows the console output when normal global-cluster operation is stopped, all nodes are shut down to the ok prompt, and the global cluster is restarted. The -g0 option sets the grace period to zero, and the -y option provides an automatic yes response to the confirmation question. Shutdown messages also appear on the consoles of other nodes in the global cluster.


phys-schost# cluster shutdown -g0 -y
Wed Mar 10 13:47:32 phys-schost-1 cl_runtime: 
WARNING: CMM monitoring disabled.
phys-schost-1# 
INIT: New run level: 0
The system is coming down.  Please wait.
...
The system is down.
syncing file systems... done
Program terminated
ok boot
Rebooting with command: boot 
...
Hostname: phys-schost-1
Booting as part of a cluster
...
NOTICE: Node phys-schost-1: attempting to join cluster
...
NOTICE: Node phys-schost-2 (incarnation # 937690106) has become reachable.
NOTICE: Node phys-schost-3 (incarnation # 937690290) has become reachable.
NOTICE: cluster has reached quorum.
...
NOTICE: Cluster members: phys-schost-1 phys-schost-2 phys-schost-3.
...
NOTICE: Node phys-schost-1: joined cluster
...
The system is coming up.  Please wait.
checking ufs filesystems
...
reservation program successfully exiting
Print services started.
volume management starting.
The system is ready.
phys-schost-1 console login: