Sun Cluster 3.0 System Administration Guide

2.2 Shutting Down and Booting a Single Cluster Node


Note -

Use the scswitch command in conjunction with the Solaris shutdown command to shut down an individual node. Use the scshutdown command only when shutting down an entire cluster.
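
For example, to shut down a single node (node is a placeholder for the name of the node), run the following commands on that node, as shown in the procedures later in this section:

# scswitch -S -h node
# shutdown -g 0 -y

To shut down every node in the cluster instead, run scshutdown from any active cluster node (the same -g and -y options apply):

# scshutdown -g 0 -y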


Table 2-2 Task Map: Shutting Down and Booting a Cluster Node

Task: Stop a cluster node
    - Use scswitch(1M) and shutdown(1M)
    For instructions, go to "2.2.1 How to Shut Down a Cluster Node"

Task: Start a node by booting it
    The node must have a working connection to the cluster interconnect to attain cluster membership.
    For instructions, go to "2.2.2 How to Boot a Cluster Node"

Task: Stop and restart (reboot) a cluster node
    - Use scswitch and shutdown
    The node must have a working connection to the cluster interconnect to attain cluster membership.
    For instructions, go to "2.2.3 How to Reboot a Cluster Node"

Task: Boot a node so that it does not participate in cluster membership
    - Use scswitch and shutdown, then boot -x
    For instructions, go to "2.2.4 How to Boot a Cluster Node in Non-Cluster Mode"

2.2.1 How to Shut Down a Cluster Node

  1. (Optional). For a cluster node running Oracle Parallel Server (OPS), shut down all OPS database instances.

    Refer to the Oracle Parallel Server product documentation for shutdown procedures.

  2. Become superuser on the cluster node to be shut down.

  3. Shut down the cluster node by using the scswitch and shutdown commands.

    On the node to be shut down, enter the following commands.


    # scswitch -S -h node
    # shutdown -g 0 -y
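
    The scswitch -S option evacuates all resource groups and disk device groups from the node before it is shut down. As an optional check before running these commands, you can see what the node is currently hosting with scstat (scstat -g shows resource group status, scstat -D shows device group status):

    # scstat -g
    # scstat -D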
    
  4. Verify that the cluster node has reached the ok PROM prompt.

  5. If necessary, power off the node.

2.2.1.1 Example--Shutting Down a Cluster Node

The following example shows the console output when shutting down node phys-schost-1. The -g 0 option sets the grace period to zero, and -y provides an automatic yes response to the confirmation question. Shutdown messages for this node appear on the consoles of other nodes in the cluster.


# scswitch -S -h phys-schost-1
# shutdown -g 0 -y
Sep  2 10:08:46 phys-schost-1 cl_runtime: WARNING: CMM monitoring disabled.
phys-schost-1# 
INIT: New run level: 0
The system is coming down.  Please wait.
Notice: rgmd is being stopped.
Notice: rpc.pmfd is being stopped.
Notice: rpc.fed is being stopped.
umount: /global/.devices/node@1 busy
umount: /global/phys-schost-1 busy
The system is down.
syncing file systems... done
Program terminated
ok 

2.2.1.2 Where to Go From Here

See "2.2.2 How to Boot a Cluster Node" to restart a cluster node that has been shut down.

2.2.2 How to Boot a Cluster Node


Note -

Starting a cluster node can be affected by the quorum configuration. In a two-node cluster, you must have a quorum device configured so that the total quorum count for the cluster is three (one vote for each node and one for the quorum device). In this configuration, if the first node is shut down, the second node retains quorum and runs as the sole cluster member. For the first node to rejoin the cluster as a cluster member, the second node must be up and running and the required cluster quorum count (two) must be present.
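
You can check the cluster's current quorum configuration and vote counts from a node that is already a cluster member by using the scstat(1M) command with the -q option:

# scstat -q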


  1. To start a cluster node that has been shut down, boot the node.


    ok boot
    

    Messages appear on the booted node's console, and on the member nodes' consoles, as cluster components are activated.


    Note -

    A cluster node must have a working connection to the cluster interconnect to attain cluster membership.


  2. Verify that the node has booted without error and is online.

    The scstat(1M) command reports the status of a node.


    # scstat -n
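
    For a two-node cluster, the output resembles the following (illustrative only; node names and states depend on your configuration):

    -- Cluster Nodes --

                      Node name           Status
                      ---------           ------
      Cluster node:   phys-schost-1       Online
      Cluster node:   phys-schost-2       Online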
    

2.2.2.1 Example--Booting a Cluster Node

The following example shows the console output when booting node phys-schost-1 into the cluster.


ok boot
Rebooting with command: boot 
...
Hostname: phys-schost-1
Booting as part of a cluster
...
NOTICE: Node 1: attempting to join cluster
...
NOTICE: Node 1: joined cluster
...
The system is coming up.  Please wait.
checking ufs filesystems
...
reservation program successfully exiting
Print services started.
volume management starting.
The system is ready.
phys-schost-1 console login:

2.2.3 How to Reboot a Cluster Node

  1. (Optional). For a cluster node running Oracle Parallel Server (OPS), shut down all OPS database instances.

    Refer to the Oracle Parallel Server product documentation for shutdown procedures.

  2. Become superuser on the cluster node to be shut down.

  3. Shut down the cluster node by using the scswitch and shutdown commands.

    Enter these commands on the node to be shut down.


    # scswitch -S -h node
    # shutdown -g 0 -y -i 6
    

    The -i 6 option with the shutdown command causes the node to reboot after it shuts down to the ok PROM prompt.


    Note -

    Cluster nodes must have a working connection to the cluster interconnect to attain cluster membership.


  4. Verify that the node has booted without error and is online.

    The scstat(1M) command reports the status of a node.


    # scstat -n
    

2.2.3.1 Example--Rebooting a Cluster Node

The following example shows the console output when shutting down and restarting node phys-schost-1. The -g 0 option sets the grace period to zero, and -y provides an automatic yes response to the confirmation question. Shutdown and startup messages for this node appear on the consoles of other nodes in the cluster.


# scswitch -S -h phys-schost-1
# shutdown -g 0 -y -i 6
Sep  2 10:08:46 phys-schost-1 cl_runtime: WARNING: CMM monitoring disabled.
phys-schost-1# 
INIT: New run level: 6
The system is coming down.  Please wait.
System services are now being stopped.
Notice: rgmd is being stopped.
Notice: rpc.pmfd is being stopped.
Notice: rpc.fed is being stopped.
umount: /global/.devices/node@1 busy
umount: /global/phys-schost-1 busy
The system is down.
syncing file systems... done
rebooting...
Resetting ... 
...
Sun Ultra 1 SBus (UltraSPARC 143MHz), No Keyboard
OpenBoot 3.11, 128 MB memory installed, Serial #7982421.
Ethernet address 8:0:20:79:cd:55, Host ID: 8079cd55.
...
Rebooting with command: boot
...
Hostname: phys-schost-1
Booting as part of a cluster
...
NOTICE: Node 1: attempting to join cluster
...
NOTICE: Node 1: joined cluster
...
The system is coming up.  Please wait.
The system is ready.
phys-schost-1 console login: 

2.2.4 How to Boot a Cluster Node in Non-Cluster Mode

You can boot a node so that it does not participate in cluster membership, that is, in non-cluster mode. This is useful when installing the cluster software or when performing certain administrative procedures, such as patching a node.

  1. Become superuser on the cluster node to be started in non-cluster mode.

  2. Shut down the node by using the scswitch and shutdown commands.


    # scswitch -S -h node
    # shutdown -g 0 -y
    
  3. Verify that the node is at the ok PROM prompt.

  4. Boot the node in non-cluster mode by using the boot(1M) command with the -x option.


    ok boot -x
    

    Messages appear on the node's console stating that the node is not part of the cluster.
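
    When you have finished working on the node, you can return it to cluster membership by shutting it down again and booting it without the -x option, as described in "2.2.2 How to Boot a Cluster Node".

    # shutdown -g 0 -y
    ok boot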

2.2.4.1 Example--Booting a Cluster Node in Non-Cluster Mode

The following example shows the console output when shutting down node phys-schost-1 and then restarting it in non-cluster mode. The -g 0 option sets the grace period to zero, and -y provides an automatic yes response to the confirmation question. Shutdown messages for this node appear on the consoles of other nodes in the cluster.


# scswitch -S -h phys-schost-1
# shutdown -g 0 -y
Sep  2 10:08:46 phys-schost-1 cl_runtime: WARNING: CMM monitoring disabled.
phys-schost-1# 
...
rg_name = schost-sa-1 ...
offline node = phys-schost-2 ...
num of  node = 0 ...
phys-schost-1# 
INIT: New run level: 0
The system is coming down.  Please wait.
System services are now being stopped.
Print services stopped.
syslogd: going down on signal 15
...
The system is down.
syncing file systems... done
WARNING: node 1 is being shut down.
Program terminated
ok boot -x
...
Not booting as part of cluster
...
The system is ready.
phys-schost-1 console login: