
Administering an Oracle® Solaris Cluster 4.4 Configuration


Updated: March 2019
 
 

How to Reboot a Node

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.


Note -  You can also reboot a zone-cluster node by using the Oracle Solaris Cluster Manager browser interface. For Oracle Solaris Cluster Manager log-in instructions, see How to Access Oracle Solaris Cluster Manager.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.


Caution  -  If a method for any resource times out and cannot be killed, the node will be rebooted only if the resource's Failover_mode property is set to HARD. If the Failover_mode property is set to any other value, the node will not be rebooted.


To shut down or reboot other active nodes in the global cluster or zone cluster, wait until the multiuser-server milestone comes online for the node that you are rebooting. Otherwise, the node will not be available to take over services from other nodes in the cluster that you shut down or reboot.
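The milestone check described above can be scripted as a small polling helper. This is a minimal sketch, not part of the documented procedure: the `wait_online` helper and its polling interval are illustrative, while the `svcs` invocation and the `milestone/multi-user-server` service name are standard Oracle Solaris SMF usage.

```shell
#!/bin/sh
# Minimal sketch: poll a status command until it reports "online".
# wait_online CMD - CMD is any command that prints the current state
# on standard output; the loop returns once that state is "online".
wait_online() {
  while [ "$($1)" != "online" ]; do
    sleep 5
  done
}

# On the node being rebooted, you would poll the SMF milestone named in
# the text above (assumption that this is the desired readiness check):
#   wait_online "svcs -H -o state milestone/multi-user-server"
```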

  1. If the global-cluster or zone-cluster node is running Oracle RAC, shut down all instances of the database on the node that you are shutting down.

    Refer to the Oracle RAC product documentation for shutdown procedures.

  2. Assume the root role or a role that provides solaris.cluster.admin authorization on the node to shut down.

    Perform all steps in this procedure from a node of the global cluster.

  3. Shut down the global-cluster node by using the clnode evacuate and shutdown commands.

    Shut down the zone cluster with the clzonecluster halt command executed on a node of the global cluster. (The clnode evacuate and shutdown commands also work in a zone cluster.)

    For a global cluster, type the following commands on the node to shut down. The clnode evacuate command switches over all device groups from the specified node to the next-preferred node. The command also switches all resource groups from global zones on the specified node to the next-preferred global zone on other nodes.


    Note -  To shut down a single node, use the shutdown -g0 -y -i6 command. To shut down multiple nodes at the same time, use the shutdown -g0 -y -i0 command to halt the nodes. After all the nodes have halted, use the boot command on all nodes to boot them back into the cluster.
    • On a SPARC based system, run the following commands to reboot a single node.

      phys-schost# clnode evacuate node
      phys-schost# shutdown -g0 -y -i6
    • On an x86 based system, run the following commands to reboot a single node.

      phys-schost# clnode evacuate node
      phys-schost# shutdown -g0 -y -i6

      When the GRUB menu is displayed, select the appropriate Oracle Solaris entry and press Enter.

    • Shut down all the zone cluster nodes.

      phys-schost# clzonecluster halt -n phys-schost +

    Note -  Nodes must have a working connection to the cluster interconnect to attain cluster membership.
  4. Verify that the node booted without error and is online.
    • Verify that the global-cluster node is online.
      phys-schost# cluster status -t node
    • Verify that the zone-cluster node is online.
      phys-schost# clzonecluster status
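The verification in step 4 can also be scripted. The following is a hedged sketch: the `all_online` helper is illustrative, and the assumption that the node status appears as the last field of each data row is modeled on the status tables shown in this procedure, not on a documented output format guarantee.

```shell
#!/bin/sh
# Hedged sketch: scan node-status output and fail unless every node
# row reports Online. The table layout (status word in the last field
# of data rows) is an assumption based on the examples in this guide.
all_online() {
  awk '
    # Data rows end in a status word; headers and separators do not.
    $NF == "Offline" || $NF == "Unknown" { bad = 1 }
    $NF == "Online"                      { seen = 1 }
    END { exit (bad || !seen) }
  '
}

# Usage on a global-cluster node (assumption):
#   cluster status -t node | all_online && echo "all nodes online"
```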
Example 27  SPARC: Rebooting a Global-Cluster Node

The following example shows the console output when node phys-schost-1 is rebooted. Messages for this node, such as shutdown and startup notification, appear on the consoles of other nodes in the global cluster.

phys-schost# clnode evacuate phys-schost-1
phys-schost# shutdown -g0 -y -i6
Shutdown started.    Wed Mar 10 13:47:32 phys-schost-1 cl_runtime:

WARNING: CMM monitoring disabled.
phys-schost-1#
INIT: New run level: 6
The system is coming down.  Please wait.
System services are now being stopped.
Notice: rgmd is being stopped.
Notice: rpc.pmfd is being stopped.
Notice: rpc.fed is being stopped.
umount: /global/.devices/node@1 busy
umount: /global/phys-schost-1 busy
The system is down.
syncing file systems... done
rebooting...
Resetting ...
...
Sun Ultra 1 SBus (UltraSPARC 143MHz), No Keyboard
OpenBoot 3.11, 128 MB memory installed, Serial #5932401.
Ethernet address 8:8:20:99:ab:77, Host ID: 8899ab77.
...
Rebooting with command: boot
...
Hostname: phys-schost-1
Booting as part of a cluster
...
NOTICE: Node phys-schost-1: attempting to join cluster
...
NOTICE: Node phys-schost-1: joined cluster
...
The system is coming up.  Please wait.
The system is ready.
phys-schost-1 console login: 
Example 28  Rebooting a Zone-Cluster Node

The following example shows how to reboot a node on a zone cluster.

phys-schost# clzonecluster reboot -n schost-4 sparse-sczone
Waiting for zone reboot commands to complete on all the nodes of the zone cluster
"sparse-sczone"...
Sep  5 19:40:59 schost-4 cl_runtime: NOTICE: Membership : Node 3 of cluster
'sparse-sczone' died.
phys-schost# Sep  5 19:41:27 schost-4 cl_runtime: NOTICE: Membership : Node 3 of cluster
'sparse-sczone' joined.

phys-schost#
phys-schost# clzonecluster status

=== Zone Clusters ===

--- Zone Cluster Status ---
Name            Node Name   Zone HostName   Status   Zone Status
----            ---------   -------------   ------   -----------
sparse-sczone   schost-1    sczone-1        Online   Running
                schost-2    sczone-2        Online   Running
                schost-3    sczone-3        Online   Running
                schost-4    sczone-4        Online   Running

phys-schost#