The Sun Cluster scshutdown(1M) command stops cluster services in an orderly fashion and cleanly shuts down the cluster.
Use scshutdown instead of the shutdown or halt commands to ensure proper shutdown of the entire cluster. The Solaris shutdown command is used to shut down individual nodes.
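For comparison, a single node is brought down with the standard Solaris shutdown command. A minimal sketch, using the same zero grace period and automatic confirmation as the cluster examples in this chapter (run level 0 halts the node; verify the options against your Solaris release):

```
# shutdown -g0 -y -i0
```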
The scshutdown command stops a cluster by:
Taking all running resource groups offline
Unmounting all cluster file systems
Shutting down active device services
Running init 0 and bringing all nodes to the ok PROM prompt
You might do this when moving a cluster from one location to another or if there was data corruption caused by an application error.
If necessary, you can boot a node so that it does not participate in the cluster membership, that is, in non-cluster mode. This is useful when installing cluster software or for performing certain administrative procedures. See "2.2.4 How to Boot a Cluster Node in Non-Cluster Mode" for more information.
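As a sketch, non-cluster mode is typically entered by passing the -x option to the boot command at the ok PROM prompt (confirm the flag against the procedure referenced above for your Sun Cluster release):

```
ok boot -x
```

The node boots Solaris normally but does not attempt to join the cluster, so cluster software can be installed or maintained without affecting cluster membership.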
Table 2-1 Task Map: Shutting Down and Booting a Cluster
Task | For Instructions, Go To...
---|---
Stop the cluster - Use scshutdown. |
Start the cluster by booting all nodes. The nodes must have a working connection to the cluster interconnect to attain cluster membership. |
Reboot the cluster - Use scshutdown to shut down the cluster, then boot each node individually with the boot command at the ok prompt. The nodes must have a working connection to the cluster interconnect to attain cluster membership. |
(Optional). For a cluster running Oracle Parallel Server (OPS), shut down all OPS database instances.
Refer to the Oracle Parallel Server product documentation for shutdown procedures.
Become superuser on a node in the cluster.
Shut down the cluster immediately by using the scshutdown(1M) command.
From a single node in the cluster, enter the following command.
# scshutdown -g 0 -y
Verify that all nodes have reached the ok PROM prompt.
If necessary, power off the nodes.
The following example shows the console output when stopping normal cluster operation and bringing all nodes down to the ok prompt. The -g 0 option sets the shutdown grace period to zero, and the -y option provides an automatic yes response to the confirmation question. Shutdown messages also appear on the consoles of the other nodes in the cluster.
# scshutdown -g 0 -y
Sep 2 10:08:46 phys-schost-1 cl_runtime: WARNING: CMM monitoring disabled.
phys-schost-1#
INIT: New run level: 0
The system is coming down.  Please wait.
System services are now being stopped.
/etc/rc0.d/K05initrgm: Calling scswitch -S (evacuate)
The system is down.
syncing file systems... done
Program terminated
ok
See "2.1.2 How to Boot a Cluster" to restart a cluster that has been shut down.
To start a cluster whose nodes have been shut down and are at the ok PROM prompt, boot each node.
The order in which the nodes are booted does not matter unless you make configuration changes between shutdowns. In that case, start the node that has the most current configuration first.
ok boot
Messages appear on the booted nodes' consoles as cluster components are activated.
Cluster nodes must have a working connection to the cluster interconnect to attain cluster membership.
Verify that the nodes booted without error and are online.
The scstat(1M) command reports the nodes' status.
# scstat -n
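The check above can be scripted. The following sketch confirms that every node reported by scstat -n is Online; the sample output stored in the variable is an assumption modeled on the scstat -n report format, so on a live cluster replace it with the output of the real command:

```shell
# Illustrative only: the here-string below stands in for `scstat -n` output.
# On a live cluster, use:  offline=$(scstat -n | awk '/Cluster node:/ && $NF != "Online"')
scstat_output='  Cluster node:     phys-schost-1       Online
  Cluster node:     phys-schost-2       Online
  Cluster node:     phys-schost-3       Online'

# Collect any "Cluster node:" lines whose last field is not "Online".
offline=$(printf '%s\n' "$scstat_output" | awk '/Cluster node:/ && $NF != "Online"')

if [ -z "$offline" ]; then
    echo "all nodes online"
else
    echo "nodes not online:"
    printf '%s\n' "$offline"
fi
```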
The following example shows the console output when booting node phys-schost-1 into the cluster. Similar messages appear on the consoles of the other nodes in the cluster.
ok boot
Rebooting with command: boot
...
Hostname: phys-schost-1
Booting as part of a cluster
NOTICE: Node 1 with votecount = 1 added.
NOTICE: Node 2 with votecount = 1 added.
NOTICE: Node 3 with votecount = 1 added.
...
NOTICE: Node 1: attempting to join cluster
...
NOTICE: Node 2 (incarnation # 937690106) has become reachable.
NOTICE: Node 3 (incarnation # 937690290) has become reachable.
NOTICE: cluster has reached quorum.
NOTICE: node 1 is up; new incarnation number = 937846227.
NOTICE: node 2 is up; new incarnation number = 937690106.
NOTICE: node 3 is up; new incarnation number = 937690290.
NOTICE: Cluster members: 1 2 3
...
NOTICE: Node 1: joined cluster
...
The system is coming up.  Please wait.
checking ufs filesystems
...
reservation program successfully exiting
Print services started.
volume management starting.
The system is ready.
phys-schost-1 console login:
Run the scshutdown(1M) command to shut down the cluster, then boot the cluster with the boot command on each node.
(Optional). For a cluster running Oracle Parallel Server (OPS), shut down all OPS database instances.
Refer to the Oracle Parallel Server product documentation for shutdown procedures.
Become superuser on a node in the cluster.
Shut down the cluster by using the scshutdown command.
From a single node in the cluster, enter the following command.
# scshutdown -g 0 -y
This shuts down each node to the ok PROM prompt.
Cluster nodes must have a working connection to the cluster interconnect to attain cluster membership.
Boot each node.
The order in which the nodes are booted does not matter unless you make configuration changes between shutdowns. In that case, start the node that has the most current configuration first.
ok boot
Messages appear on the booted nodes' consoles as cluster components are activated.
Verify that the nodes booted without error and are online.
The scstat command reports the nodes' status.
# scstat -n
The following example shows the console output when stopping normal cluster operation, bringing all nodes down to the ok prompt, and then restarting the cluster. The -g 0 option sets the grace period to zero, and the -y option provides an automatic yes response to the confirmation question. Shutdown messages also appear on the consoles of other nodes in the cluster.
# scshutdown -g 0 -y
Sep 2 10:08:46 phys-schost-1 cl_runtime: WARNING: CMM monitoring disabled.
phys-schost-1#
INIT: New run level: 0
The system is coming down.  Please wait.
...
The system is down.
syncing file systems... done
Program terminated
ok boot
Rebooting with command: boot
...
Hostname: phys-schost-1
Booting as part of a cluster
...
NOTICE: Node 1: attempting to join cluster
...
NOTICE: Node 2 (incarnation # 937690106) has become reachable.
NOTICE: Node 3 (incarnation # 937690290) has become reachable.
NOTICE: cluster has reached quorum.
...
NOTICE: Cluster members: 1 2 3
...
NOTICE: Node 1: joined cluster
...
The system is coming up.  Please wait.
checking ufs filesystems
...
reservation program successfully exiting
Print services started.
volume management starting.
The system is ready.
phys-schost-1 console login: