Sun Cluster System Administration Guide for Solaris OS

Chapter 3 Shutting Down and Booting a Cluster

This chapter provides information about and procedures for shutting down and booting a cluster and individual cluster nodes. For information about booting a non-global zone, see Chapter 18, Planning and Configuring Non-Global Zones (Tasks), in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.

For a high-level description of the related procedures in this chapter, see Table 3–1 and Table 3–2.

Overview of Shutting Down and Booting a Cluster

The Sun Cluster cluster(1CL) shutdown command stops cluster services in an orderly fashion and cleanly shuts down the entire cluster. You can use the cluster shutdown command when moving the location of a cluster. You can also use the command to shut down the cluster if an application error causes data corruption.


Note –

Use the cluster shutdown command instead of the shutdown or halt commands to ensure proper shutdown of the entire cluster. The Solaris shutdown command is used with the clnode(1CL) evacuate command to shut down individual nodes. See How to Shut Down a Cluster or Shutting Down and Booting a Single Cluster Node for more information.


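For example, the two forms look like the following. This is a minimal sketch; the node name phys-schost-1 is hypothetical.


# cluster shutdown -g0 -y           # entire cluster, run from any one node

# clnode evacuate phys-schost-1     # single node: evacuate it first,
# shutdown -g0 -y -i0               # then shut down only that node
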
The cluster shutdown command stops all nodes in a cluster by performing the following actions:

  1. Takes offline all running resource groups.

  2. Unmounts all cluster file systems.

  3. Shuts down active device services.

  4. Runs init 0 and brings all nodes to the OpenBoot™ PROM ok prompt on a SPARC based system or to the GRUB menu on an x86 based system. The GRUB menus are described in more detail in Chapter 11, GRUB Based Booting (Tasks), in System Administration Guide: Basic Administration.

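Before you run cluster shutdown, you can preview what will be taken offline. This check is a suggestion rather than part of the documented sequence; clresourcegroup(1CL) and cldevicegroup(1CL) are the standard status commands.


# clresourcegroup status     # resource groups that will be taken offline
# cldevicegroup status       # device services that will be shut down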

Note –

If necessary, you can boot a node in noncluster mode so that the node does not participate in cluster membership. Noncluster mode is useful when installing cluster software or for performing certain administrative procedures. See How to Boot a Cluster Node in Noncluster Mode for more information.


Table 3–1 Task List: Shutting Down and Booting a Cluster

Task: Stop the cluster.

    - Use cluster(1CL) shutdown.

    For instructions, see How to Shut Down a Cluster.

Task: Start the cluster by booting all nodes.

    The nodes must have a working connection to the cluster interconnect to attain cluster membership.

    For instructions, see How to Boot a Cluster.

Task: Reboot the cluster.

    - Use cluster shutdown.

    - At the Press any key to continue message, boot each node individually by pressing a key.

    The nodes must have a working connection to the cluster interconnect to attain cluster membership.

    For instructions, see How to Reboot a Cluster.

Procedure: How to Shut Down a Cluster


Caution –

Do not use send brk on a cluster console to shut down a cluster node. The command is not supported within a cluster.


This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands.

  1. SPARC: If your cluster is running Oracle Parallel Server or Oracle Real Application Clusters (RAC), shut down all instances of the database.

    Refer to the Oracle Parallel Server or Oracle RAC product documentation for shutdown procedures.

  2. Become superuser or assume a role that provides solaris.cluster.admin RBAC authorization on any node in the cluster.
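
    As a quick check, you can list the RBAC authorizations that are granted to your current identity. The grep pattern shown here is only illustrative.


    $ auths | grep solaris.cluster.admin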

  3. Shut down the cluster immediately.

    From a single node in the cluster, type the following command.


    # cluster shutdown -g0 -y
    
  4. Verify that all nodes are showing the ok prompt on a SPARC based system or the GRUB menu on an x86 based system.

    Do not power off any nodes until all cluster nodes are at the ok prompt on a SPARC based system or at the GRUB menu on an x86 based system.


    # cluster status -t node
    
  5. If necessary, power off the nodes.


Example 3–1 SPARC: Shutting Down a Cluster

The following example shows the console output when normal cluster operation is stopped and all nodes are shut down so that the ok prompt is shown. The -g0 option sets the shutdown grace period to zero, and the -y option provides an automatic yes response to the confirmation question. Shutdown messages also appear on the consoles of the other nodes in the cluster.


# cluster shutdown -g0 -y
Wed Mar 10 13:47:32 phys-schost-1 cl_runtime: 
WARNING: CMM monitoring disabled.
phys-schost-1# 
INIT: New run level: 0
The system is coming down.  Please wait.
System services are now being stopped.
/etc/rc0.d/K05initrgm: Calling scswitch -S (evacuate)
The system is down.
syncing file systems... done
Program terminated
ok 


Example 3–2 x86: Shutting Down a Cluster

The following example shows the console output when normal cluster operation is stopped and all nodes are shut down. In this example, the ok prompt is not displayed on all of the nodes. The -g0 option sets the shutdown grace period to zero, and the -y option provides an automatic yes response to the confirmation question. Shutdown messages also appear on the consoles of the other nodes in the cluster.


# cluster shutdown -g0 -y
May  2 10:32:57 phys-schost-1 cl_runtime: 
WARNING: CMM: Monitoring disabled.  
root@phys-schost-1#
INIT: New run level: 0
The system is coming down.  Please wait.
System services are now being stopped.
/etc/rc0.d/K05initrgm: Calling scswitch -S (evacuate)
failfasts already disabled on node 1
Print services already stopped.
May  2 10:33:13 phys-schost-1 syslogd: going down on signal 15
The system is down.
syncing file systems... done
Type any key to continue 

See Also

See How to Boot a Cluster to restart a cluster that has been shut down.

Procedure: How to Boot a Cluster

This procedure explains how to start a cluster whose nodes have been shut down and are at the ok prompt on SPARC based systems or at the Press any key to continue message on GRUB based x86 systems.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands.

  1. Boot each node into cluster mode.

    • On SPARC based systems, do the following:


      ok boot
      
    • On x86 based systems, do the following:

      When the GRUB menu is displayed, select the appropriate Solaris entry and press Enter. The GRUB menu appears similar to the following:


      GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
      +-------------------------------------------------------------------------+
      | Solaris 10 /sol_10_x86                                                  |
      | Solaris failsafe                                                        |
      |                                                                         |
      +-------------------------------------------------------------------------+
      Use the ^ and v keys to select which entry is highlighted.
      Press enter to boot the selected OS, 'e' to edit the
      commands before booting, or 'c' for a command-line.

    Note –

    Cluster nodes must have a working connection to the cluster interconnect to attain cluster membership.


    For more information about GRUB based booting, see Chapter 11, GRUB Based Booting (Tasks), in System Administration Guide: Basic Administration.
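
    After the nodes come up, you can confirm that the interconnect paths are online with the clinterconnect(1CL) status subcommand. This check is a suggestion, not part of the documented procedure.


    # clinterconnect status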

  2. Verify that the nodes booted without error and are online.

    The cluster(1CL) status command reports the nodes' status.


    # cluster status -t node
    

    Note –

    If a cluster node's /var file system fills up, Sun Cluster might not be able to restart on that node. If this problem arises, see How to Repair a Full /var File System.



Example 3–3 SPARC: Booting a Cluster

The following example shows the console output when node phys-schost-1 is booted into the cluster. Similar messages appear on the consoles of the other nodes in the cluster.


ok boot
Rebooting with command: boot 
...
Hostname: phys-schost-1
Booting as part of a cluster
NOTICE: Node phys-schost-1 with votecount = 1 added.
NOTICE: Node phys-schost-2 with votecount = 1 added.
NOTICE: Node phys-schost-3 with votecount = 1 added.
...
NOTICE: Node phys-schost-1: attempting to join cluster
...
NOTICE: Node phys-schost-2 (incarnation # 937690106) has become reachable.
NOTICE: Node phys-schost-3 (incarnation # 937690290) has become reachable.
NOTICE: cluster has reached quorum.
NOTICE: node phys-schost-1 is up; new incarnation number = 937846227.
NOTICE: node phys-schost-2 is up; new incarnation number = 937690106.
NOTICE: node phys-schost-3 is up; new incarnation number = 937690290.
NOTICE: Cluster members: phys-schost-1 phys-schost-2 phys-schost-3.
...

Procedure: How to Reboot a Cluster

Run the cluster(1CL) shutdown command to shut down the cluster, then boot the cluster with the boot(1M) command on each node.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands.

  1. SPARC: If your cluster is running Oracle Parallel Server or Oracle RAC, shut down all instances of the database.

    Refer to the Oracle Parallel Server or Oracle RAC product documentation for shutdown procedures.

  2. Become superuser or assume a role that provides solaris.cluster.admin RBAC authorization on any node in the cluster.

  3. Shut down the cluster.

    From a single node in the cluster, type the following command.


    # cluster shutdown -g0 -y 
    

    Each node is shut down.


    Note –

    Cluster nodes must have a working connection to the cluster interconnect to attain cluster membership.


  4. Boot each node.

    The order in which the nodes are booted is irrelevant unless you make configuration changes between shutdowns. If you make configuration changes between shutdowns, start the node with the most current configuration first.

    • On SPARC based systems, do the following:


      ok boot
      
    • On x86 based systems, do the following:

      When the GRUB menu is displayed, select the appropriate Solaris entry and press Enter. The GRUB menu appears similar to the following:


      GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
      +-------------------------------------------------------------------------+
      | Solaris 10 /sol_10_x86                                                  |
      | Solaris failsafe                                                        |
      |                                                                         |
      +-------------------------------------------------------------------------+
      Use the ^ and v keys to select which entry is highlighted.
      Press enter to boot the selected OS, 'e' to edit the
      commands before booting, or 'c' for a command-line.

    Note –

    Cluster nodes must have a working connection to the cluster interconnect to attain cluster membership.


    For more information about GRUB based booting, see Chapter 11, GRUB Based Booting (Tasks), in System Administration Guide: Basic Administration.

    Messages appear on the booted nodes' consoles as cluster components are activated.

  5. Verify that the nodes booted without error and are online.

    The cluster status command reports the nodes' status.


    # cluster status -t node
    

    Note –

    If a cluster node's /var file system fills up, Sun Cluster might not be able to restart on that node. If this problem arises, see How to Repair a Full /var File System.



Example 3–4 SPARC: Rebooting a Cluster

The following example shows the console output when normal cluster operation is stopped, all nodes are shut down to the ok prompt, and the cluster is restarted. The -g0 option sets the grace period to zero, and -y provides an automatic yes response to the confirmation question. Shutdown messages also appear on the consoles of other nodes in the cluster.


# cluster shutdown -g0 -y
Wed Mar 10 13:47:32 phys-schost-1 cl_runtime: 
WARNING: CMM monitoring disabled.
phys-schost-1# 
INIT: New run level: 0
The system is coming down.  Please wait.
...
The system is down.
syncing file systems... done
Program terminated
ok boot
Rebooting with command: boot 
...
Hostname: phys-schost-1
Booting as part of a cluster
...
NOTICE: Node phys-schost-1: attempting to join cluster
...
NOTICE: Node phys-schost-2 (incarnation # 937690106) has become reachable.
NOTICE: Node phys-schost-3 (incarnation # 937690290) has become reachable.
NOTICE: cluster has reached quorum.
...
NOTICE: Cluster members: phys-schost-1 phys-schost-2 phys-schost-3.
...
NOTICE: Node phys-schost-1: joined cluster
...
The system is coming up.  Please wait.
checking ufs filesystems
...
reservation program successfully exiting
Print services started.
volume management starting.
The system is ready.
phys-schost-1 console login:

Shutting Down and Booting a Single Cluster Node


Note –

Use the clnode(1CL) evacuate command in conjunction with the Solaris shutdown(1M) command to shut down an individual node. Use the cluster shutdown command only when shutting down an entire cluster. For information on shutting down and booting a non-global zone, see Chapter 20, Installing, Booting, Halting, Uninstalling, and Cloning Non-Global Zones (Tasks), in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.


Table 3–2 Task Map: Shutting Down and Booting a Cluster Node

Task: Stop a cluster node.

    Tool: Use the clnode(1CL) evacuate command and the shutdown command.

    For instructions, see How to Shut Down a Cluster Node.

Task: Start a node.

    The node must have a working connection to the cluster interconnect to attain cluster membership.

    Tool: Use the boot or b commands.

    For instructions, see How to Boot a Cluster Node.

Task: Stop and restart (reboot) a cluster node.

    The node must have a working connection to the cluster interconnect to attain cluster membership.

    Tool: Use the clnode evacuate and shutdown commands.

    For instructions, see How to Reboot a Cluster Node.

Task: Boot a node so that the node does not participate in cluster membership.

    Tool: Use the clnode evacuate and shutdown commands, then use the boot -x or shutdown -g0 -y -i0 command.

    For instructions, see How to Boot a Cluster Node in Noncluster Mode.

Procedure: How to Shut Down a Cluster Node


Caution –

Do not use send brk on a cluster console to shut down a cluster node. The command is not supported within a cluster.


For information about shutting down a non-global zone, see Chapter 20, Installing, Booting, Halting, Uninstalling, and Cloning Non-Global Zones (Tasks), in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands.

  1. SPARC: If your cluster is running Oracle Parallel Server or Oracle RAC, shut down all instances of the database.

    Refer to the Oracle Parallel Server or Oracle RAC product documentation for shutdown procedures.

  2. Become superuser or assume a role that provides solaris.cluster.admin RBAC authorization on the cluster node to be shut down.

  3. Switch all resource groups, resources, and device groups from the node being shut down to other cluster members.

    On the node to be shut down, type the following command. The clnode evacuate command switches over all resource groups and device groups, including all non-global zones, from the specified node to the next preferred node.


    # clnode evacuate node
    
    node

    Specifies the node from which you are switching resource groups and device groups.
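
    To confirm that the evacuation completed, you can list resource group and device group status from another cluster member. This verification is a suggestion, not part of the documented procedure.


    # clresourcegroup status
    # cldevicegroup status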

  4. Shut down the cluster node.

    On the node to be shut down, type the following command.


    # shutdown -g0 -y -i0
    

    Verify that the cluster node is showing the ok prompt on a SPARC based system or the Press any key to continue message on the GRUB menu on an x86 based system.

  5. If necessary, power off the node.


Example 3–5 SPARC: Shutting Down a Cluster Node

The following example shows the console output when node phys-schost-1 is shut down. The -g0 option sets the grace period to zero, and the -y option provides an automatic yes response to the confirmation question. Shutdown messages for this node appear on the consoles of other nodes in the cluster.


# clnode evacuate phys-schost-1
# shutdown -g0 -y
Wed Mar 10 13:47:32 phys-schost-1 cl_runtime:
WARNING: CMM monitoring disabled.
phys-schost-1# 
INIT: New run level: 0
The system is coming down.  Please wait.
Notice: rgmd is being stopped.
Notice: rpc.pmfd is being stopped.
Notice: rpc.fed is being stopped.
umount: /global/.devices/node@1 busy
umount: /global/phys-schost-1 busy
The system is down.
syncing file systems... done
Program terminated
ok 


Example 3–6 x86: Shutting Down a Cluster Node

The following example shows the console output when node phys-schost-1 is shut down. The -g0 option sets the grace period to zero, and the -y option provides an automatic yes response to the confirmation question. Shutdown messages for this node appear on the consoles of other nodes in the cluster.


# clnode evacuate phys-schost-1
# shutdown -g0 -y
Shutdown started.    Wed Mar 10 13:47:32 PST 2004

Changing to init state 0 - please wait
Broadcast Message from root (console) on phys-schost-1 Wed Mar 10 13:47:32... 
THE SYSTEM phys-schost-1 IS BEING SHUT DOWN NOW ! ! !
Log off now or risk your files being damaged

phys-schost-1#
INIT: New run level: 0
The system is coming down.  Please wait.
System services are now being stopped.
/etc/rc0.d/K05initrgm: Calling scswitch -S (evacuate)
failfasts disabled on node 1
Print services already stopped.
Mar 10 13:47:44 phys-schost-1 syslogd: going down on signal 15
umount: /global/.devices/node@2 busy
umount: /global/.devices/node@1 busy
The system is down.
syncing file systems... done
WARNING: CMM: Node being shut down.
Type any key to continue 

See Also

See How to Boot a Cluster Node to restart a cluster node that has been shut down.

Procedure: How to Boot a Cluster Node

If you intend to shut down or reboot other, active nodes in the cluster, wait until the node you are booting is fully up and online. Otherwise, the node will not be available to take over services from other nodes in the cluster that you shut down or reboot.

For information about booting a non-global zone, see Chapter 20, Installing, Booting, Halting, Uninstalling, and Cloning Non-Global Zones (Tasks), in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.


Note –

Starting a cluster node can be affected by the quorum configuration. In a two-node cluster, you must have a quorum device configured so that the total quorum count for the cluster is three. You should have one quorum count for each node and one quorum count for the quorum device. In this situation, if the first node is shut down, the second node continues to have quorum and runs as the sole cluster member. For the first node to come back in the cluster as a cluster node, the second node must be up and running. The required cluster quorum count (two) must be present.
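
You can inspect the configured vote counts and the current quorum status with the clquorum(1CL) status subcommand. This check is a suggestion, not part of the documented procedure.


# clquorum status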


This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands.

  1. To start a cluster node that has been shut down, boot the node.

    • On SPARC based systems, do the following:


      ok boot
      
    • On x86 based systems, do the following:

      When the GRUB menu is displayed, select the appropriate Solaris entry and press Enter. The GRUB menu appears similar to the following:


      GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
      +-------------------------------------------------------------------------+
      | Solaris 10 /sol_10_x86                                                  |
      | Solaris failsafe                                                        |
      |                                                                         |
      +-------------------------------------------------------------------------+
      Use the ^ and v keys to select which entry is highlighted.
      Press enter to boot the selected OS, 'e' to edit the
      commands before booting, or 'c' for a command-line.

    Messages appear on the booted nodes' consoles as cluster components are activated.


    Note –

    A cluster node must have a working connection to the cluster interconnect to attain cluster membership.


  2. Verify that the node has booted without error, and is online.

    The cluster status command reports the status of a node.


    # cluster status -t node
    

    Note –

    If a cluster node's /var file system fills up, Sun Cluster might not be able to restart on that node. If this problem arises, see How to Repair a Full /var File System.



Example 3–7 SPARC: Booting a Cluster Node

The following example shows the console output when node phys-schost-1 is booted into the cluster.


ok boot
Rebooting with command: boot 
...
Hostname: phys-schost-1
Booting as part of a cluster
...
NOTICE: Node phys-schost-1: attempting to join cluster
...
NOTICE: Node phys-schost-1: joined cluster
...
The system is coming up.  Please wait.
checking ufs filesystems
...
reservation program successfully exiting
Print services started.
volume management starting.
The system is ready.
phys-schost-1 console login:

Procedure: How to Reboot a Cluster Node

If you intend to shut down or reboot other, active nodes in the cluster, wait until the node you are rebooting is fully up and online. Otherwise, the node will not be available to take over services from other nodes in the cluster that you shut down or reboot.

For information about rebooting a non-global zone, see Chapter 20, Installing, Booting, Halting, Uninstalling, and Cloning Non-Global Zones (Tasks), in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands.

  1. SPARC: If the cluster node is running Oracle Parallel Server or Oracle RAC, shut down all instances of the database.

    Refer to the Oracle Parallel Server or Oracle RAC product documentation for shutdown procedures.

  2. Become superuser or assume a role that provides solaris.cluster.admin RBAC authorization on the cluster node to be shut down.

  3. Shut down the cluster node by using the clnode evacuate and shutdown commands.

    Enter the following commands on the node to be shut down. The clnode evacuate command switches over all device groups from the specified node to the next preferred node. The command also switches all resource groups from global or non-global zones on the specified node to the next-preferred global or non-global zones on other nodes.

    • On SPARC based systems, do the following:


      # clnode evacuate node
      # shutdown -g0 -y -i6
      
    • On x86 based systems, do the following:


      # clnode evacuate node
      # shutdown -g0 -y -i6
      

      When the GRUB menu is displayed, select the appropriate Solaris entry and press Enter. The GRUB menu appears similar to the following:


      GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
      +-------------------------------------------------------------------------+
      | Solaris 10 /sol_10_x86                                                  |
      | Solaris failsafe                                                        |
      |                                                                         |
      +-------------------------------------------------------------------------+
      Use the ^ and v keys to select which entry is highlighted.
      Press enter to boot the selected OS, 'e' to edit the
      commands before booting, or 'c' for a command-line.

    Note –

    Cluster nodes must have a working connection to the cluster interconnect to attain cluster membership.


  4. Verify that the node has booted without error, and is online.


    # cluster status -t node
    

Example 3–8 SPARC: Rebooting a Cluster Node

The following example shows the console output when node phys-schost-1 is rebooted. Messages for this node, such as shutdown and startup notification, appear on the consoles of other nodes in the cluster.


# clnode evacuate phys-schost-1
# shutdown -g0 -y -i6
Shutdown started.    Wed Mar 10 13:47:32 phys-schost-1 cl_runtime: 

WARNING: CMM monitoring disabled.
phys-schost-1# 
INIT: New run level: 6
The system is coming down.  Please wait.
System services are now being stopped.
Notice: rgmd is being stopped.
Notice: rpc.pmfd is being stopped.
Notice: rpc.fed is being stopped.
umount: /global/.devices/node@1 busy
umount: /global/phys-schost-1 busy
The system is down.
syncing file systems... done
rebooting...
Resetting ... 
...
Sun Ultra 1 SBus (UltraSPARC 143MHz), No Keyboard
OpenBoot 3.11, 128 MB memory installed, Serial #5932401.
Ethernet address 8:8:20:99:ab:77, Host ID: 8899ab77.
...
Rebooting with command: boot
...
Hostname: phys-schost-1
Booting as part of a cluster
...
NOTICE: Node phys-schost-1: attempting to join cluster
...
NOTICE: Node phys-schost-1: joined cluster
...
The system is coming up.  Please wait.
The system is ready.
phys-schost-1 console login: 


Example 3–9 x86: Rebooting a Cluster Node

The following example shows the console output when rebooting node phys-schost-1. Messages for this node, such as shutdown and startup notification, appear on the consoles of other nodes in the cluster.


# clnode evacuate phys-schost-1
# shutdown -g0 -y -i6
...
Hostname: phys-schost-1
Booting as part of a cluster
...
NOTICE: Node phys-schost-1: attempting to join cluster
...
NOTICE: Node phys-schost-1: joined cluster
...
The system is coming up.  Please wait.
checking ufs filesystems
...
reservation program successfully exiting
Print services started.
volume management starting.
The system is ready.
phys-schost-1 console login: 

Procedure: How to Boot a Cluster Node in Noncluster Mode

You can boot a node so that the node does not participate in the cluster membership, that is, in noncluster mode. Noncluster mode is useful when installing the cluster software or performing certain administrative procedures, such as patching a node.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands.

  1. Become superuser or assume a role that provides solaris.cluster.admin RBAC authorization on the cluster node to be started in noncluster mode.

  2. Shut down the node by using the clnode evacuate and shutdown commands.

    The clnode evacuate command switches over all device groups from the specified node to the next preferred node. The command also switches all resource groups from global or non-global zones on the specified node to the next-preferred global or non-global zones on other nodes.


    # clnode evacuate node
    # shutdown -g0 -y
    
  3. Verify that the node is showing the ok prompt on a SPARC based system or the Press any key to continue message on a GRUB menu on an x86 based system.

  4. Boot the node in noncluster mode.

    • On SPARC based systems, perform the following command:


      phys-schost# boot -xs
      
    • On x86 based system, perform the following commands:


      phys-schost# shutdown -g0 -y -i0
      
      Press any key to continue
    1. In the GRUB menu, use the arrow keys to select the appropriate Solaris entry and type e to edit its commands.

      The GRUB menu appears similar to the following:


      GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
      +-------------------------------------------------------------------------+
      | Solaris 10 /sol_10_x86                                                  |
      | Solaris failsafe                                                        |
      |                                                                         |
      +-------------------------------------------------------------------------+
      Use the ^ and v keys to select which entry is highlighted.
      Press enter to boot the selected OS, 'e' to edit the
      commands before booting, or 'c' for a command-line.

      For more information about GRUB based booting, see Chapter 11, GRUB Based Booting (Tasks), in System Administration Guide: Basic Administration.

    2. In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.

      The GRUB boot parameters screen appears similar to the following:


      GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
      +----------------------------------------------------------------------+
      | root (hd0,0,a)                                                       |
      | kernel /platform/i86pc/multiboot                                     |
      | module /platform/i86pc/boot_archive                                  |
      +----------------------------------------------------------------------+
      Use the ^ and v keys to select which entry is highlighted.
      Press 'b' to boot, 'e' to edit the selected command in the
      boot sequence, 'c' for a command-line, 'o' to open a new line
      after ('O' for before) the selected line, 'd' to remove the
      selected line, or escape to go back to the main menu.
    3. Add -x to the command to specify that the system boot into noncluster mode.


      [ Minimal BASH-like line editing is supported. For the first word, TAB
      lists possible command completions. Anywhere else TAB lists the possible
      completions of a device/filename. ESC at any time exits. ]
      
      grub edit> kernel /platform/i86pc/multiboot -x
    4. Press the Enter key to accept the change and return to the boot parameters screen.

      The screen displays the edited command.


      GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
      +----------------------------------------------------------------------+
      | root (hd0,0,a)                                                       |
      | kernel /platform/i86pc/multiboot -x                                  |
      | module /platform/i86pc/boot_archive                                  |
      +----------------------------------------------------------------------+
      Use the ^ and v keys to select which entry is highlighted.
      Press 'b' to boot, 'e' to edit the selected command in the
      boot sequence, 'c' for a command-line, 'o' to open a new line
      after ('O' for before) the selected line, 'd' to remove the
      selected line, or escape to go back to the main menu.
    5. Type b to boot the node into noncluster mode.


      Note –

      This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.



Example 3–10 SPARC: Booting a Cluster Node in Noncluster Mode

The following example shows the console output when node phys-schost-1 is shut down and restarted in noncluster mode. The -g0 option sets the grace period to zero, the -y option provides an automatic yes response to the confirmation question, and -i0 invokes run level 0 (zero). Shutdown messages for this node appear on the consoles of other nodes in the cluster.


# clnode evacuate phys-schost-1
# shutdown -g0 -y -i0
Shutdown started.    Wed Mar 10 13:47:32 phys-schost-1 cl_runtime: 

WARNING: CMM monitoring disabled.
phys-schost-1# 
...
rg_name = schost-sa-1 ...
offline node = phys-schost-2 ...
num of node = 0 ...
phys-schost-1# 
INIT: New run level: 0
The system is coming down.  Please wait.
System services are now being stopped.
Print services stopped.
syslogd: going down on signal 15
...
The system is down.
syncing file systems... done
WARNING: node phys-schost-1 is being shut down.
Program terminated

ok boot -x
...
Not booting as part of cluster
...
The system is ready.
phys-schost-1 console login:

Repairing a Full /var File System

Both Solaris software and Sun Cluster software write error messages to the /var/adm/messages file, which over time can fill the /var file system. If a cluster node's /var file system fills up, Sun Cluster might not be able to restart on that node. Additionally, you might not be able to log in to the node.
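
To catch the condition early, you can check how full /var is and identify the largest files under /var/adm. The following is a minimal sketch that uses standard Solaris commands.


# df -k /var                     # capacity and percent used for /var
# du -sk /var/adm/* | sort -n    # sizes in KB, largest last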

Procedure: How to Repair a Full /var File System

If a node reports a full /var file system and continues to run Sun Cluster services, use this procedure to clear the full file system. Refer to Viewing System Messages in System Administration Guide: Advanced Administration for more information.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands.

  1. Become superuser on the cluster node with the full /var file system.

  2. Clear the full file system.

    For example, delete nonessential files that are contained in the file system.
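
    For instance, /var/adm/messages often accounts for much of the space. Copying /dev/null over the file truncates it in place, which frees the disk blocks even though syslogd keeps the file open. This is one hedged example of a cleanup, not the only valid approach.


    # cp /dev/null /var/adm/messages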