Oracle® Solaris Cluster 4.3 System Administration Guide

Updated: June 2017

How to Reboot a Cluster

To reboot a global cluster, shut it down with the cluster shutdown command and then boot each node with the boot command. To reboot a zone cluster, halt it with the clzonecluster halt command and then boot it with the clzonecluster boot command, or reboot it in a single step with the clzonecluster reboot command. For more information, see the cluster(1CL), boot(1M), and clzonecluster(1CL) man pages.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
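For example, the clzonecluster command has the short form clzc, so the following two commands are equivalent:

phys-schost# clzonecluster status
phys-schost# clzc status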

  1. If your cluster is running Oracle RAC, shut down all instances of the database on the cluster you are shutting down.

    Refer to the Oracle RAC product documentation for shutdown procedures.
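    For example, if the database is managed through Oracle Clusterware, the instances are typically stopped with the srvctl command. In this sketch, orcl is a hypothetical database name; use the name and shutdown procedure from your own RAC documentation.

      phys-schost# srvctl stop database -d orcl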

  2. Assume a role that provides solaris.cluster.admin authorization on any node in the cluster.

    Perform all steps in this procedure from a node of the global cluster.
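    For example, if a role named cladmin has been granted the solaris.cluster.admin authorization (the role name here is hypothetical), you can assume it with the su command:

      phys-schost$ su - cladmin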

  3. Shut down the cluster.
    • Shut down the global cluster.
      phys-schost# cluster shutdown -g0 -y 
    • If you have a zone cluster, shut down the zone cluster from a global-cluster node.
      phys-schost# clzonecluster halt zone-cluster-name

    Each node is shut down. You can also use the cluster shutdown command within a zone cluster to shut down the zone cluster.
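    For example, from within a node of the zone cluster itself (zcnode# is a hypothetical zone-cluster node prompt):

      zcnode# cluster shutdown -g0 -y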


  4. Boot each node.

    The order in which you boot the nodes does not matter, unless you made configuration changes between shutdowns. In that case, start the node that has the most current configuration first.

    • For a global-cluster node on a SPARC based system, run the following command.

      ok boot
    • For a global-cluster node on an x86 based system, run the following commands.

      When the GRUB menu is displayed, select the appropriate Oracle Solaris OS entry and press Enter.

      For more information about GRUB based booting, see Booting a System in Booting and Shutting Down Oracle Solaris 11.3 Systems.
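      If you want to review the available GRUB menu entries before rebooting, you can list them from a running system with the bootadm command:

      phys-schost# bootadm list-menu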

    • For a zone cluster, type the following command on a single node of the global cluster to boot the zone cluster.

      phys-schost# clzonecluster boot zone-cluster-name

    Note -  Nodes must have a working connection to the cluster interconnect to attain cluster membership.

    Messages appear on the booted nodes' consoles as cluster components are activated.

  5. Verify that the nodes booted without error and are online.
    • The clnode status command reports the status of the nodes on the global cluster.
      phys-schost# clnode status
    • Running the clzonecluster status command on a global-cluster node reports the status of the zone-cluster nodes.
      phys-schost# clzonecluster status

      You can also run the cluster status command within a zone cluster to see the status of the nodes.
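      For example, from a zone-cluster node (again using the hypothetical zcnode# prompt):

        zcnode# cluster status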


      Note -  If a node's /var file system fills up, Oracle Solaris Cluster might not be able to restart on that node. If this problem arises, see How to Repair a Full /var File System.
Example 19  Rebooting a Zone Cluster

The following example shows how to halt and boot a zone cluster called sparse-sczone. You can also use the clzonecluster reboot command.

phys-schost# clzonecluster halt sparse-sczone
Waiting for zone halt commands to complete on all the nodes of the zone cluster "sparse-sczone"...
Sep  5 19:17:46 schost-4 cl_runtime: NOTICE: Membership : Node 4 of cluster 'sparse-sczone' died.
Sep  5 19:17:46 schost-4 cl_runtime: NOTICE: Membership : Node 2 of cluster 'sparse-sczone' died.
Sep  5 19:17:46 schost-4 cl_runtime: NOTICE: Membership : Node 1 of cluster 'sparse-sczone' died.
Sep  5 19:17:46 schost-4 cl_runtime: NOTICE: Membership : Node 3 of cluster 'sparse-sczone' died.
phys-schost#
phys-schost# clzonecluster boot sparse-sczone
Waiting for zone boot commands to complete on all the nodes of the zone cluster "sparse-sczone"...
phys-schost# Sep  5 19:18:23 schost-4 cl_runtime: NOTICE: Membership : Node 1 of cluster 'sparse-sczone' joined.
Sep  5 19:18:23 schost-4 cl_runtime: NOTICE: Membership : Node 2 of cluster 'sparse-sczone' joined.
Sep  5 19:18:23 schost-4 cl_runtime: NOTICE: Membership : Node 3 of cluster 'sparse-sczone' joined.
Sep  5 19:18:23 schost-4 cl_runtime: NOTICE: Membership : Node 4 of cluster 'sparse-sczone' joined.

phys-schost#
phys-schost# clzonecluster status

=== Zone Clusters ===

--- Zone Cluster Status ---

Name            Node Name   Zone HostName   Status   Zone Status
----            ---------   -------------   ------   -----------
sparse-sczone   schost-1    sczone-1        Online   Running
                schost-2    sczone-2        Online   Running
                schost-3    sczone-3        Online   Running
                schost-4    sczone-4        Online   Running
phys-schost# 
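As noted above, the halt and boot steps can be combined into the single clzonecluster reboot command; the same membership messages appear on the console:

phys-schost# clzonecluster reboot sparse-sczone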
Example 20  SPARC: Rebooting a Global Cluster

The following example shows the console output when normal global-cluster operation is stopped, all nodes are shut down to the ok prompt, and the global cluster is restarted. The -g0 option sets the grace period to zero, and the -y option provides an automatic yes response to the confirmation question. Shutdown messages also appear on the consoles of other nodes in the global cluster.

phys-schost# cluster shutdown -g0 -y
Wed Mar 10 13:47:32 phys-schost-1 cl_runtime:
WARNING: CMM monitoring disabled.
phys-schost-1#
INIT: New run level: 0
The system is coming down.  Please wait.
...
The system is down.
syncing file systems... done
Program terminated
ok boot
Rebooting with command: boot
...
Hostname: phys-schost-1
Booting as part of a cluster
...
NOTICE: Node phys-schost-1: attempting to join cluster
...
NOTICE: Node phys-schost-2 (incarnation # 937690106) has become reachable.
NOTICE: Node phys-schost-3 (incarnation # 937690290) has become reachable.
NOTICE: cluster has reached quorum.
...
NOTICE: Cluster members: phys-schost-1 phys-schost-2 phys-schost-3.
...
NOTICE: Node phys-schost-1: joined cluster
...
The system is coming up.  Please wait.
checking ufs filesystems
...
reservation program successfully exiting
Print services started.
volume management starting.
The system is ready.
phys-schost-1 console login:
Example 21  x86: Rebooting a Cluster

The following example shows the console output when normal cluster operation is stopped, all nodes are shut down, and the cluster is restarted. The -g0 option sets the grace period to zero, and the -y option provides an automatic yes response to the confirmation question. Shutdown messages also appear on the consoles of other nodes in the cluster.

# cluster shutdown -g0 -y
May  2 10:32:57 phys-schost-1 cl_runtime:
WARNING: CMM: Monitoring disabled.
root@phys-schost-1#
INIT: New run level: 0
The system is coming down.  Please wait.
System services are now being stopped.
/etc/rc0.d/K05initrgm: Calling clnode evacuate
failfasts already disabled on node 1
Print services already stopped.
May  2 10:33:13 phys-schost-1 syslogd: going down on signal 15
The system is down.
syncing file systems... done
Type any key to continue

ATI RAGE SDRAM BIOS P/N GR-xlint.007-4.330
*                                        BIOS Lan-Console 2.0
Copyright (C) 1999-2001  Intel Corporation
MAC ADDR: 00 02 47 31 38 3C
AMIBIOS (C)1985-2002 American Megatrends Inc.,
Copyright 1996-2002 Intel Corporation
SCB20.86B.1064.P18.0208191106
SCB2 Production BIOS Version 2.08
BIOS Build 1064
2 X Intel(R) Pentium(R) III CPU family      1400MHz
Testing system memory, memory size=2048MB
2048MB Extended Memory Passed
512K L2 Cache SRAM Passed
ATAPI CD-ROM SAMSUNG CD-ROM SN-124

Press <F2> to enter SETUP, <F12> Network

Adaptec AIC-7899 SCSI BIOS v2.57S4
(c) 2000 Adaptec, Inc. All Rights Reserved.
Press <Ctrl><A> for SCSISelect(TM) Utility!

Ch B,  SCSI ID: 0 SEAGATE  ST336605LC        160
SCSI ID: 1 SEAGATE  ST336605LC        160
SCSI ID: 6 ESG-SHV  SCA HSBP M18      ASYN
Ch A,  SCSI ID: 2 SUN      StorEdge 3310     160
SCSI ID: 3 SUN      StorEdge 3310     160

AMIBIOS (C)1985-2002 American Megatrends Inc.,
Copyright 1996-2002 Intel Corporation
SCB20.86B.1064.P18.0208191106
SCB2 Production BIOS Version 2.08
BIOS Build 1064

2 X Intel(R) Pentium(R) III CPU family      1400MHz
Testing system memory, memory size=2048MB
2048MB Extended Memory Passed
512K L2 Cache SRAM Passed
ATAPI CD-ROM SAMSUNG CD-ROM SN-124

SunOS - Intel Platform Edition             Primary Boot Subsystem, vsn 2.0

Current Disk Partition Information

Part#   Status    Type      Start       Length
================================================
1     Active   X86 BOOT     2428       21852
2              SOLARIS     24280     71662420
3              <unused> 
4              <unused>
Please select the partition you wish to boot: *       *

Solaris DCB

loading /solaris/boot.bin

SunOS Secondary Boot version 3.00

Solaris Intel Platform Edition Booting System

Autobooting from bootpath: /pci@0,0/pci8086,2545@3/pci8086,1460@1d/
pci8086,341a@7,1/sd@0,0:a

If the system hardware has changed, or to boot from a different
device, interrupt the autoboot process by pressing ESC.
Press ESCape to interrupt autoboot in 2 seconds.
Initializing system
Please wait...
Warning: Resource Conflict - both devices are added

NON-ACPI device: ISY0050
Port: 3F0-3F5, 3F7; IRQ: 6; DMA: 2
ACPI device: ISY0050
Port: 3F2-3F3, 3F4-3F5, 3F7; IRQ: 6; DMA: 2

<<< Current Boot Parameters >>>
Boot path: /pci@0,0/pci8086,2545@3/pci8086,1460@1d/pci8086,341a@7,1/
sd@0,0:a
Boot args:

Type    b [file-name] [boot-flags] <ENTER>  to boot with options
or      i <ENTER>                           to enter boot interpreter
or      <ENTER>                             to boot with defaults

<<< timeout in 5 seconds >>>

Select (b)oot or (i)nterpreter: b
Size: 275683 + 22092 + 150244 Bytes
/platform/i86pc/kernel/unix loaded - 0xac000 bytes used
SunOS Release 5.9 Version Generic_112234-07 32-bit
Copyright 1983-2003 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
configuring IPv4 interfaces: e1000g2.
Hostname: phys-schost-1
Booting as part of a cluster
NOTICE: CMM: Node phys-schost-1 (nodeid = 1) with votecount = 1 added.
NOTICE: CMM: Node phys-schost-2 (nodeid = 2) with votecount = 1 added.
NOTICE: CMM: Quorum device 1 (/dev/did/rdsk/d1s2) added; votecount = 1, bitmask
of nodes with configured paths = 0x3.
NOTICE: clcomm: Adapter e1000g3 constructed
NOTICE: clcomm: Path phys-schost-1:e1000g3 - phys-schost-2:e1000g3 being constructed
NOTICE: clcomm: Path phys-schost-1:e1000g3 - phys-schost-2:e1000g3 being initiated
NOTICE: clcomm: Path phys-schost-1:e1000g3 - phys-schost-2:e1000g3 online
NOTICE: clcomm: Adapter e1000g0 constructed
NOTICE: clcomm: Path phys-schost-1:e1000g0 - phys-schost-2:e1000g0 being constructed
NOTICE: CMM: Node phys-schost-1: attempting to join cluster.
NOTICE: clcomm: Path phys-schost-1:e1000g0 - phys-schost-2:e1000g0 being initiated
NOTICE: CMM: Quorum device /dev/did/rdsk/d1s2: owner set to node 1.
NOTICE: CMM: Cluster has reached quorum.
NOTICE: CMM: Node phys-schost-1 (nodeid = 1) is up; new incarnation number = 1068496374.
NOTICE: CMM: Node phys-schost-2 (nodeid = 2) is up; new incarnation number = 1068496374.
NOTICE: CMM: Cluster members: phys-schost-1 phys-schost-2.
NOTICE: CMM: node reconfiguration #1 completed.
NOTICE: CMM: Node phys-schost-1: joined cluster.
WARNING: mod_installdrv: no major number for rsmrdt
ip: joining multicasts failed (18) on clprivnet0 - will use link layer
broadcasts for multicast
The system is coming up.  Please wait.
checking ufs filesystems
/dev/rdsk/c1t0d0s5: is clean.
NOTICE: clcomm: Path phys-schost-1:e1000g0 - phys-schost-2:e1000g0 online
NIS domain name is dev.eng.mycompany.com
starting rpc services: rpcbind keyserv ypbind done.
Setting netmask of e1000g2 to 192.168.255.0
Setting netmask of e1000g3 to 192.168.255.128
Setting netmask of e1000g0 to 192.168.255.128
Setting netmask of clprivnet0 to 192.168.255.0
Setting default IPv4 interface for multicast: add net 224.0/4: gateway phys-schost-1
syslog service starting.
obtaining access to all attached disks


*****************************************************************************
*
* The X-server can not be started on display :0...
*
*****************************************************************************
volume management starting.
Starting Fault Injection Server...
The system is ready.

phys-schost-1 console login: