Oracle Solaris Cluster System Administration Guide     Oracle Solaris Cluster 4.1

3.  Shutting Down and Booting a Cluster
Overview of Shutting Down and Booting a Cluster

The Oracle Solaris Cluster cluster shutdown command stops global cluster services in an orderly fashion and cleanly shuts down an entire global cluster. You can use the cluster shutdown command when moving a global cluster from one location to another, or to shut down the global cluster if an application error causes data corruption. The clzonecluster halt command stops a zone cluster that is running on a specific node, or an entire zone cluster on all configured nodes. (You can also use the cluster shutdown command within a zone cluster.) For more information, see the cluster(1CL) man page.

In the procedures in this chapter, phys-schost# reflects a global-cluster prompt. The clzonecluster interactive shell prompt is clzc:schost>.


Note - Use the cluster shutdown command to ensure proper shutdown of the entire global cluster. The Oracle Solaris shutdown command is used with the clnode evacuate command to shut down individual nodes. For more information, see How to Shut Down a Cluster, Shutting Down and Booting a Single Node in a Cluster, or the clnode(1CL) man page.
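
For illustration, a minimal sketch of that single-node sequence (assuming the node being shut down is phys-schost-1; see How to Shut Down a Node for the full procedure):

phys-schost-1# clnode evacuate phys-schost-1   # switch all resource groups and device groups off this node
phys-schost-1# shutdown -g0 -y -i0             # then shut down Oracle Solaris on this node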


The cluster shutdown and the clzonecluster halt commands stop all nodes in a global cluster or zone cluster, respectively, by performing the following actions:

  1. Takes all running resource groups offline.

  2. Unmounts all cluster file systems for a global cluster or a zone cluster.

  3. The cluster shutdown command shuts down active device services on a global cluster or a zone cluster.

  4. The cluster shutdown command runs init 0 and brings all nodes in the cluster to the OpenBoot PROM ok prompt on a SPARC based system or to the Press any key to continue message on the GRUB menu of an x86 based system. For more information about GRUB based booting, see Booting a System in Booting and Shutting Down Oracle Solaris 11.1 Systems. The clzonecluster halt command runs the zoneadm -z zoneclustername halt command to stop (but not shut down) the zones of the zone cluster.
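
For example, for a zone cluster named sczone, the per-node halt that clzonecluster performs is equivalent to running the following on each global-cluster node that hosts a zone of that zone cluster:

phys-schost# zoneadm -z sczone halt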


Note - If necessary, you can boot a node in noncluster mode so that the node does not participate in cluster membership. Noncluster mode is useful when installing cluster software or for performing certain administrative procedures. See How to Boot a Node in Noncluster Mode for more information.


Table 3-1 Task List: Shutting Down and Booting a Cluster

Task                                                      Instructions
----                                                      ------------
Stop the cluster.                                         How to Shut Down a Cluster
Start the cluster by booting all nodes. The nodes
must have a working connection to the cluster
interconnect to attain cluster membership.                How to Boot a Cluster
Reboot the cluster.                                       How to Reboot a Cluster

How to Shut Down a Cluster

You can shut down a global cluster, a zone cluster, or all zone clusters.


Caution - Do not use send brk on a cluster console to shut down a global-cluster node or a zone-cluster node. The command is not supported within a cluster.


The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
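
For example, clzc is the short form of the clzonecluster command, so the following two commands are equivalent (using the zone cluster sczone from the examples below):

phys-schost# clzonecluster halt sczone
phys-schost# clzc halt sczone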

  1. If your global cluster or zone cluster is running Oracle Real Application Clusters (RAC), shut down all instances of the database on the cluster you are shutting down.

    Refer to the Oracle RAC product documentation for shutdown procedures.

  2. Assume a role that provides solaris.cluster.admin RBAC authorization on any node in the cluster.

    Perform all steps in this procedure from a node of the global cluster.

  3. Shut down the global cluster, the zone cluster, or all zone clusters.
    • Shut down the global cluster. This action also shuts down all zone clusters.
      phys-schost# cluster shutdown -g0 -y
    • Shut down a specific zone cluster.
      phys-schost# clzonecluster halt zoneclustername
    • Shut down all zone clusters.
      phys-schost# clzonecluster halt +

      You can also use the cluster shutdown command within a zone cluster to shut down that particular zone cluster; a minimal sketch follows this procedure.

  4. Verify that all nodes of the global cluster or zone cluster show the ok prompt on a SPARC based system or the GRUB menu on an x86 based system.

    Do not power off any nodes until all nodes are at the ok prompt on a SPARC based system or in the GRUB menu on an x86 based system.

    • Check the status of one or more global-cluster nodes from another global-cluster node that is still up and running in the cluster.
      phys-schost# cluster status -t node
    • Use the status subcommand to verify that the zone cluster was shut down.
      phys-schost# clzonecluster status
  5. If necessary, power off the nodes of the global cluster.
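
As mentioned in Step 3, you can run cluster shutdown from inside a zone cluster. A minimal sketch, assuming the zone cluster is named sczone and its underlying zone on this machine is also named sczone (the zcnode# prompt is a placeholder for the zone's shell prompt):

phys-schost# zlogin sczone
zcnode# /usr/cluster/bin/cluster shutdown -g0 -y   # shuts down only this zone cluster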

Example 3-1 Shutting Down a Zone Cluster

The following example shuts down a zone cluster called sczone.

phys-schost# clzonecluster halt sczone
Waiting for zone halt commands to complete on all the nodes of the zone cluster "sczone"...
Sep  5 19:06:01 schost-4 cl_runtime: NOTICE: Membership : Node 2 of cluster 'sczone' died.
Sep  5 19:06:01 schost-4 cl_runtime: NOTICE: Membership : Node 4 of cluster 'sczone' died.
Sep  5 19:06:01 schost-4 cl_runtime: NOTICE: Membership : Node 3 of cluster 'sczone' died.
Sep  5 19:06:01 schost-4 cl_runtime: NOTICE: Membership : Node 1 of cluster 'sczone' died.
phys-schost# 

Example 3-2 SPARC: Shutting Down a Global Cluster

The following example shows the console output when normal global-cluster operation is stopped and all nodes are shut down, enabling the ok prompt to be shown. The -g 0 option sets the shutdown grace period to zero, and the -y option provides an automatic yes response to the confirmation question. Shutdown messages also appear on the consoles of the other nodes in the global cluster.

phys-schost# cluster shutdown -g0 -y
Wed Mar 10 13:47:32 phys-schost-1 cl_runtime: 
WARNING: CMM monitoring disabled.
phys-schost-1# 
INIT: New run level: 0
The system is coming down.  Please wait.
System services are now being stopped.
/etc/rc0.d/K05initrgm: Calling clnode evacuate
The system is down.
syncing file systems... done
Program terminated
ok 

Example 3-3 x86: Shutting Down a Global Cluster

The following example shows the console output when normal global-cluster operation is stopped and all nodes are shut down. In this example, the ok prompt is not displayed on all of the nodes. The -g 0 option sets the shutdown grace period to zero, and the -y option provides an automatic yes response to the confirmation question. Shutdown messages also appear on the consoles of the other nodes in the global cluster.

phys-schost# cluster shutdown -g0 -y
May  2 10:32:57 phys-schost-1 cl_runtime: 
WARNING: CMM: Monitoring disabled.  
root@phys-schost-1#
INIT: New run level: 0
The system is coming down.  Please wait.
System services are now being stopped.
/etc/rc0.d/K05initrgm: Calling clnode evacuate
failfasts already disabled on node 1
Print services already stopped.
May  2 10:33:13 phys-schost-1 syslogd: going down on signal 15
The system is down.
syncing file systems... done
Type any key to continue 

See Also

See How to Boot a Cluster to restart a global cluster or a zone cluster that was shut down.

How to Boot a Cluster

This procedure explains how to start a global cluster or zone cluster whose nodes have been shut down. For global-cluster nodes, the system displays the ok prompt on SPARC based systems or the Press any key to continue message on GRUB based x86 systems.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.


Note - To create a zone cluster, follow the instructions in Creating and Configuring a Zone Cluster in Oracle Solaris Cluster Software Installation Guide.


  1. Boot each node into cluster mode.

    Perform all steps in this procedure from a node of the global cluster.

    • On SPARC based systems, run the following command.
      ok boot
    • On x86 based systems, when the GRUB menu is displayed, select the appropriate Oracle Solaris entry and press Enter.

      For more information about GRUB based booting, see Booting a System in Booting and Shutting Down Oracle Solaris 11.1 Systems.


      Note - Nodes must have a working connection to the cluster interconnect to attain cluster membership.


    • If you have a zone cluster, you can boot the entire zone cluster.
      phys-schost# clzonecluster boot zoneclustername
    • If you have more than one zone cluster, you can boot all zone clusters. Use + instead of the zoneclustername.
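      phys-schost# clzonecluster boot +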
  2. Verify that the nodes booted without error and are online.

    The cluster status command reports the global-cluster nodes' status.

    phys-schost# cluster status -t node

    When you run the clzonecluster status command from a global-cluster node, the command reports the state of the zone-cluster nodes. For more information, see the clzonecluster(1CL) man page.

    phys-schost# clzonecluster status

    Note - If a node's /var file system fills up, Oracle Solaris Cluster might not be able to restart on that node. If this problem arises, see How to Repair a Full /var File System.


Example 3-4 SPARC: Booting a Global Cluster

The following example shows the console output when node phys-schost-1 is booted into the global cluster. Similar messages appear on the consoles of the other nodes in the global cluster. When the autoboot property of a zone cluster is set to true, the system automatically boots the zone-cluster node after booting the global-cluster node on that machine.

When a global-cluster node reboots, all zone-cluster nodes on that machine halt. Any zone-cluster node on the same machine with the autoboot property set to true boots after the global-cluster node restarts. (A sketch of setting the autoboot property follows this example.)

ok boot
Rebooting with command: boot 
...
Hostname: phys-schost-1
Booting as part of a cluster
NOTICE: Node phys-schost-1 with votecount = 1 added.
NOTICE: Node phys-schost-2 with votecount = 1 added.
NOTICE: Node phys-schost-3 with votecount = 1 added.
...
NOTICE: Node phys-schost-1: attempting to join cluster
...
NOTICE: Node phys-schost-2 (incarnation # 937690106) has become reachable.
NOTICE: Node phys-schost-3 (incarnation # 937690290) has become reachable.
NOTICE: cluster has reached quorum.
NOTICE: node phys-schost-1 is up; new incarnation number = 937846227.
NOTICE: node phys-schost-2 is up; new incarnation number = 937690106.
NOTICE: node phys-schost-3 is up; new incarnation number = 937690290.
NOTICE: Cluster members: phys-schost-1 phys-schost-2 phys-schost-3.
...
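
The autoboot property mentioned above is set through the clzonecluster configure interactive shell. A minimal sketch, assuming a zone cluster named sczone (see the clzonecluster(1CL) man page for the authoritative syntax):

phys-schost# clzonecluster configure sczone
clzc:sczone> set autoboot=true
clzc:sczone> commit
clzc:sczone> exit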

How to Reboot a Cluster

To shut down a global cluster, run the cluster shutdown command and then boot the global cluster with the boot command on each node. To shut down a zone cluster, use the clzonecluster halt command and then use the clzonecluster boot command to boot the zone cluster. You can also use the clzonecluster reboot command. For more information, see the cluster(1CL), boot(1M), and clzonecluster(1CL) man pages.
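
For example, to reboot a zone cluster in one step:

phys-schost# clzonecluster reboot zoneclustername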

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.

  1. If your cluster is running Oracle RAC, shut down all instances of the database on the cluster you are shutting down.

    Refer to the Oracle RAC product documentation for shutdown procedures.

  2. Assume a role that provides solaris.cluster.admin RBAC authorization on any node in the cluster.

    Perform all steps in this procedure from a node of the global cluster.

  3. Shut down the cluster.
    • Shut down the global cluster.
      phys-schost# cluster shutdown -g0 -y 
    • If you have a zone cluster, shut down the zone cluster from a global-cluster node.
      phys-schost# clzonecluster halt zoneclustername

    Each node is shut down. You can also use the cluster shutdown command within a zone cluster to shut down the zone cluster.


    Note - Nodes must have a working connection to the cluster interconnect to attain cluster membership.


  4. Boot each node.

    The order in which the nodes are booted does not matter unless you made configuration changes between shutdowns. In that case, start the node that has the most current configuration first.

    • For a global-cluster node on a SPARC based system, run the following command.

      ok boot
    • For a global-cluster node on an x86 based system, when the GRUB menu is displayed, select the appropriate Oracle Solaris OS entry and press Enter. For more information about GRUB based booting, see Booting a System in Booting and Shutting Down Oracle Solaris 11.1 Systems.


    Note - Nodes must have a working connection to the cluster interconnect to attain cluster membership.

    • For a zone cluster, type the following command on a single node of the global cluster to boot the zone cluster.

      phys-schost# clzonecluster boot zoneclustername

    Messages appear on the booted nodes' consoles as cluster components are activated.

  5. Verify that the nodes booted without error and are online.
    • The clnode status command reports the status of the nodes on the global cluster.
      phys-schost# clnode status
    • Running the clzonecluster status command on a global-cluster node reports the status of the zone-cluster nodes.
      phys-schost# clzonecluster status

      You can also run the cluster status command within a zone cluster to see the status of the nodes.


      Note - If a node's /var file system fills up, Oracle Solaris Cluster might not be able to restart on that node. If this problem arises, see How to Repair a Full /var File System.


Example 3-5 Rebooting a Zone Cluster

The following example shows how to halt and boot a zone cluster called sparse-sczone. You can also use the clzonecluster reboot command.

phys-schost# clzonecluster halt sparse-sczone
Waiting for zone halt commands to complete on all the nodes of the zone cluster "sparse-sczone"...
Sep  5 19:17:46 schost-4 cl_runtime: NOTICE: Membership : Node 4 of cluster 'sparse-sczone' died.
Sep  5 19:17:46 schost-4 cl_runtime: NOTICE: Membership : Node 2 of cluster 'sparse-sczone' died.
Sep  5 19:17:46 schost-4 cl_runtime: NOTICE: Membership : Node 1 of cluster 'sparse-sczone' died.
Sep  5 19:17:46 schost-4 cl_runtime: NOTICE: Membership : Node 3 of cluster 'sparse-sczone' died.
phys-schost#
phys-schost# clzonecluster boot sparse-sczone
Waiting for zone boot commands to complete on all the nodes of the zone cluster "sparse-sczone"...
phys-schost# Sep  5 19:18:23 schost-4 cl_runtime: NOTICE: Membership : Node 1 of cluster 'sparse-sczone' joined.
Sep  5 19:18:23 schost-4 cl_runtime: NOTICE: Membership : Node 2 of cluster 'sparse-sczone' joined.
Sep  5 19:18:23 schost-4 cl_runtime: NOTICE: Membership : Node 3 of cluster 'sparse-sczone' joined.
Sep  5 19:18:23 schost-4 cl_runtime: NOTICE: Membership : Node 4 of cluster 'sparse-sczone' joined.

phys-schost#
phys-schost# clzonecluster status

=== Zone Clusters ===

--- Zone Cluster Status ---

Name            Node Name   Zone HostName   Status   Zone Status
----            ---------   -------------   ------   -----------
sparse-sczone   schost-1    sczone-1        Online   Running
                schost-2    sczone-2        Online   Running
                schost-3    sczone-3        Online   Running
                schost-4    sczone-4        Online   Running
phys-schost# 

Example 3-6 SPARC: Rebooting a Global Cluster

The following example shows the console output when normal global-cluster operation is stopped, all nodes are shut down to the ok prompt, and the global cluster is restarted. The -g 0 option sets the grace period to zero, and the -y option provides an automatic yes response to the confirmation question. Shutdown messages also appear on the consoles of other nodes in the global cluster.

phys-schost# cluster shutdown -g0 -y
Wed Mar 10 13:47:32 phys-schost-1 cl_runtime: 
WARNING: CMM monitoring disabled.
phys-schost-1# 
INIT: New run level: 0
The system is coming down.  Please wait.
...
The system is down.
syncing file systems... done
Program terminated
ok boot
Rebooting with command: boot 
...
Hostname: phys-schost-1
Booting as part of a cluster
...
NOTICE: Node phys-schost-1: attempting to join cluster
...
NOTICE: Node phys-schost-2 (incarnation # 937690106) has become reachable.
NOTICE: Node phys-schost-3 (incarnation # 937690290) has become reachable.
NOTICE: cluster has reached quorum.
...
NOTICE: Cluster members: phys-schost-1 phys-schost-2 phys-schost-3.
...
NOTICE: Node phys-schost-1: joined cluster
...
The system is coming up.  Please wait.
checking ufs filesystems
...
reservation program successfully exiting
Print services started.
volume management starting.
The system is ready.
phys-schost-1 console login: