Sun Cluster System Administration Guide for Solaris OS

Chapter 8 Administering the Cluster

This chapter provides the procedures for administering items that affect an entire global cluster or a zone cluster.

Overview of Administering the Cluster

This section describes how to perform administrative tasks for the entire global cluster or zone cluster. The following table lists these administrative tasks and the associated procedures. For the Solaris 10 OS, you generally perform cluster administrative tasks in the global zone. To administer a zone cluster, at least one machine that will host the zone cluster must be up in cluster mode. Not all zone-cluster nodes need to be up and running; Sun Cluster replays any configuration changes when a node that is currently out of the cluster rejoins the cluster.

In this chapter, phys-schost# reflects a global-cluster prompt. The clzonecluster interactive shell prompt is clzc:schost>.

Table 8–1 Task List: Administering the Cluster

Task 

Instructions 

Change the name of the cluster 

How to Change the Cluster Name

List node IDs and their corresponding node names 

How to Map Node ID to Node Name

Permit or deny new nodes to add themselves to the cluster 

How to Work With New Cluster Node Authentication

Change the time for a cluster by using the Network Time Protocol (NTP) 

How to Reset the Time of Day in a Cluster

Shut down a node to the OpenBoot PROM ok prompt on a SPARC based system or to the Press any key to continue message in a GRUB menu on an x86 based system

SPARC: How to Display the OpenBoot PROM (OBP) on a Node

Change the private hostname 

How to Change the Node Private Hostname

Put a cluster node in maintenance state 

How to Put a Node Into Maintenance State

Bring a cluster node out of maintenance state 

How to Bring a Node Out of Maintenance State

Add a node to a cluster 

Adding a Node

Remove a node from a cluster 

Removing a Node on a Global Cluster or a Zone Cluster

Moving a zone cluster; preparing a zone cluster for applications 

Performing Zone-Cluster Administrative Tasks

Uninstall Sun Cluster software from a node 

How to Uninstall Sun Cluster Software From a Cluster Node

Correct error messages 

How to Correct Error Messages

How to Change the Cluster Name

If necessary, you can change the cluster name after initial installation.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

  1. Become superuser on any node in the global cluster.

  2. Start the clsetup utility.


    phys-schost# clsetup
    

    The Main Menu is displayed.

  3. To change the cluster name, type the number that corresponds to the option for Other Cluster Properties.

    The Other Cluster Properties menu is displayed.

  4. Make your selection from the menu and follow the onscreen instructions.

  5. If you want the service tag for Sun Cluster to reflect the new cluster name, delete the existing Sun Cluster tag and restart the cluster. To delete the Sun Cluster service tag instance, complete the following substeps on all nodes in the cluster.

    1. List all of the service tags.


      phys-schost# stclient -x
      
    2. Find the Sun Cluster service tag instance number, then run the following command.


      phys-schost# stclient -d -i service_tag_instance_number
      
    3. Reboot all the nodes in the cluster.


      phys-schost# reboot
      

Example 8–1 Changing the Cluster Name

The following example shows the cluster(1CL) command that the clsetup(1CL) utility generates to change the cluster name to dromedary.


phys-schost# cluster -c dromedary
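
To confirm the change, you can display the cluster name afterward; the output line shown here is illustrative.

phys-schost# cluster list
dromedary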

How to Map Node ID to Node Name

During Sun Cluster installation, each node is automatically assigned a unique node ID number. The node ID number is assigned to a node in the order in which it joins the cluster for the first time. After the node ID number is assigned, the number cannot be changed. The node ID number is often used in error messages to identify which cluster node the message concerns. Use this procedure to determine the mapping between node IDs and node names.

You do not need to be superuser to list configuration information for a global cluster or a zone cluster. One step in this procedure is performed from a node of the global cluster. The other step is performed from a zone-cluster node.

  1. Use the clnode(1CL) command to list the cluster configuration information for the global cluster.


    phys-schost# clnode show | grep Node
    
  2. You can also list the node IDs for a zone cluster. A zone-cluster node has the same node ID as the global-cluster node where it is running.


    phys-schost# zlogin sczone clnode -v | grep Node
    

Example 8–2 Mapping the Node ID to the Node Name

The following example shows the node ID assignments for a global cluster.


phys-schost# clnode show | grep Node
=== Cluster Nodes ===
Node Name:				phys-schost1
  Node ID:				1
Node Name: 				phys-schost2
  Node ID:				2
Node Name:				phys-schost3
  Node ID:				3
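
For a zone cluster, the output format is similar; the zone-cluster name sczone and the node names below are illustrative only.

phys-schost# zlogin sczone clnode -v | grep Node
Node Name:				sczone-host-1
  Node ID:				1
Node Name:				sczone-host-2
  Node ID:				2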

How to Work With New Cluster Node Authentication

Sun Cluster enables you to determine if new nodes can add themselves to the global cluster and the type of authentication to use. You can permit any new node to join the cluster over the public network, deny new nodes from joining the cluster, or indicate a specific node that can join the cluster. New nodes can be authenticated by using either standard UNIX or Diffie-Hellman (DES) authentication. If you select DES authentication, you must also configure all necessary encryption keys before a node can join. See the keyserv(1M) and publickey(4) man pages for more information.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

  1. Become superuser on any node in the global cluster.

  2. Start the clsetup(1CL) utility.


    phys-schost# clsetup
    

    The Main Menu is displayed.

  3. To work with cluster authentication, type the number that corresponds to the option for new nodes.

    The New Nodes menu is displayed.

  4. Make your selection from the menu and follow the onscreen instructions.


Example 8–3 Preventing a New Machine From Being Added to the Global Cluster

The clsetup utility generates the claccess command. The following example shows the claccess command that prevents new machines from being added to the cluster.


phys-schost# claccess deny -h hostname


Example 8–4 Permitting All New Machines to Be Added to the Global Cluster

The clsetup utility generates the claccess command. The following example shows the claccess command that enables all new machines to be added to the cluster.


phys-schost# claccess allow-all


Example 8–5 Specifying a New Machine to Be Added to the Global Cluster

The clsetup utility generates the claccess command. The following example shows the claccess command that enables a single new machine to be added to the cluster.


phys-schost# claccess allow -h hostname


Example 8–6 Setting the Authentication to Standard UNIX

The clsetup utility generates the claccess command. The following example shows the claccess command that resets to standard UNIX authentication for new nodes that are joining the cluster.


phys-schost# claccess set -p protocol=sys


Example 8–7 Setting the Authentication to DES

The clsetup utility generates the claccess command. The following example shows the claccess command that uses DES authentication for new nodes that are joining the cluster.


phys-schost# claccess set -p protocol=des

When using DES authentication, you must also configure all necessary encryption keys before a node can join the cluster. For more information, see the keyserv(1M) and publickey(4) man pages.
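
To review the node-access policy and authentication protocol that are currently in effect, you can run the claccess show command from any node; output is omitted here.

phys-schost# claccess show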


How to Reset the Time of Day in a Cluster

Sun Cluster software uses the Network Time Protocol (NTP) to maintain time synchronization between cluster nodes. Adjustments in the global cluster occur automatically as needed when nodes synchronize their time. For more information, see the Sun Cluster Concepts Guide for Solaris OS and the Network Time Protocol User's Guide.


Caution –

When using NTP, do not attempt to adjust the cluster time while the cluster is up and running. Do not adjust the time by using the date(1), rdate(1M), xntpd(1M), or svcadm(1M) commands interactively or within cron(1M) scripts.


The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

  1. Become superuser on any node in the global cluster.

  2. Shut down the global cluster.


    phys-schost# cluster shutdown -g0 -y -i 0
    
  3. Verify that the node is showing the ok prompt on a SPARC based system or the Press any key to continue message on the GRUB menu on an x86 based system.

  4. Boot the node in noncluster mode.

    • On SPARC based systems, run the following command.


      ok boot -x
      
    • On x86 based systems, run the following commands.


      # shutdown -g0 -y -i0
      
      Press any key to continue
    1. In the GRUB menu, use the arrow keys to select the appropriate Solaris entry and type e to edit its commands.

      The GRUB menu appears similar to the following:


      GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
      +-------------------------------------------------------------------------+
      | Solaris 10 /sol_10_x86                                                  |
      | Solaris failsafe                                                        |
      |                                                                         |
      +-------------------------------------------------------------------------+
      Use the ^ and v keys to select which entry is highlighted.
      Press enter to boot the selected OS, 'e' to edit the
      commands before booting, or 'c' for a command-line.

      For more information about GRUB based booting, see Booting an x86 Based System by Using GRUB (Task Map) in System Administration Guide: Basic Administration.

    2. In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.

      The GRUB boot parameters screen appears similar to the following:


      GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
      +----------------------------------------------------------------------+
      | root (hd0,0,a)                                                       |
      | kernel /platform/i86pc/multiboot                                     |
      | module /platform/i86pc/boot_archive                                  |
      +----------------------------------------------------------------------+
      Use the ^ and v keys to select which entry is highlighted.
      Press 'b' to boot, 'e' to edit the selected command in the
      boot sequence, 'c' for a command-line, 'o' to open a new line
      after ('O' for before) the selected line, 'd' to remove the
      selected line, or escape to go back to the main menu.
    3. Add the -x option to the command to specify that the system boot into noncluster mode.


      [ Minimal BASH-like line editing is supported. For the first word, TAB
      lists possible command completions. Anywhere else TAB lists the possible
      completions of a device/filename. ESC at any time exits. ]
      
      grub edit> kernel /platform/i86pc/multiboot -x
    4. Press the Enter key to accept the change and return to the boot parameters screen.

      The screen displays the edited command.


      GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
      +----------------------------------------------------------------------+
      | root (hd0,0,a)                                                       |
      | kernel /platform/i86pc/multiboot -x                                  |
      | module /platform/i86pc/boot_archive                                  |
      +----------------------------------------------------------------------+
      Use the ^ and v keys to select which entry is highlighted.
      Press 'b' to boot, 'e' to edit the selected command in the
      boot sequence, 'c' for a command-line, 'o' to open a new line
      after ('O' for before) the selected line, 'd' to remove the
      selected line, or escape to go back to the main menu.
    5. Type b to boot the node into noncluster mode.


      Note –

      This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.


  5. On a single node, set the time of day by running the date command.


    phys-schost# date HHMM.SS
    
  6. On the other machines, synchronize the time to that node by running the rdate(1M) command.


    phys-schost# rdate hostname
    
  7. Boot each node to restart the cluster.


    phys-schost# reboot
    
  8. Verify that the change occurred on all cluster nodes.

    On each node, run the date command.


    phys-schost# date
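
    The following compressed sequence illustrates steps 5 through 8 on a three-node cluster; the host names and time value are hypothetical, and every node is rebooted after its time is synchronized.

    phys-schost-1# date 0730.00
    phys-schost-2# rdate phys-schost-1
    phys-schost-3# rdate phys-schost-1
    phys-schost-1# reboot
    phys-schost-1# date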
    

SPARC: How to Display the OpenBoot PROM (OBP) on a Node

Use this procedure if you need to configure or change OpenBoot™ PROM settings.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

  1. Connect to the console on the node to be shut down.


    # telnet tc_name tc_port_number
    
    tc_name

    Specifies the name of the terminal concentrator.

    tc_port_number

    Specifies the port number on the terminal concentrator. Port numbers are configuration dependent. Typically, ports 2 and 3 (5002 and 5003) are used for the first cluster installed at a site.

  2. Shut down the cluster node gracefully by using the clnode evacuate command, then the shutdown command. The clnode evacuate command switches over all device groups from the specified node to the next-preferred node. The command also switches all resource groups from the global cluster's specified voting or non-voting node to the next-preferred voting or non-voting node.


    phys-schost# clnode evacuate node
    # shutdown -g0 -y
    

    Caution –

    Do not use send brk on a cluster console to shut down a cluster node.


  3. Execute the OBP commands.
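
    Typical OBP tasks include inspecting or setting boot-related configuration variables and then booting the node; the variables shown here are only an illustration.

    ok printenv boot-device
    ok setenv auto-boot? true
    ok boot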

How to Change the Node Private Hostname

Use this procedure to change the private hostname of a cluster node after installation has been completed.

Default private host names are assigned during initial cluster installation. The default private hostname takes the form clusternode<nodeid>-priv, for example: clusternode3-priv. Change a private hostname only if the name is already in use in the domain.


Caution –

Do not attempt to assign IP addresses to new private host names. The clustering software assigns them.


The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

  1. Disable, on all nodes in the cluster, any data service resources or other applications that might cache private host names.


    phys-schost# clresource disable resource[,...]
    

    Include the following in the applications you disable.

    • HA-DNS and HA-NFS services, if configured

    • Any application that has been custom-configured to use the private hostname

    • Any application that is being used by clients over the private interconnect

    For information about using the clresource command, see the clresource(1CL) man page and the Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
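
    For example, if HA-DNS and HA-NFS resources named dns-rs and nfs-rs are configured (the resource names are hypothetical), you would disable them as follows.

    phys-schost# clresource disable dns-rs,nfs-rs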

  2. If your NTP configuration file refers to the private hostname that you are changing, bring down the Network Time Protocol (NTP) daemon on each node of the cluster.

    • SPARC: If you are using Solaris 9 OS, use the xntpd command to shut down the Network Time Protocol (NTP) daemon. See the xntpd(1M) man page for more information about the NTP daemon.


      phys-schost# /etc/init.d/xntpd.cluster stop
      
    • If you are using Solaris 10 OS, use the svcadm command to shut down the Network Time Protocol (NTP) daemon. See the svcadm(1M) man page for more information about the NTP daemon.


      phys-schost# svcadm disable ntp
      
  3. Run the clsetup(1CL) utility to change the private hostname of the appropriate node.

    Run the utility from only one of the nodes in the cluster.


    Note –

    When selecting a new private hostname, ensure that the name is unique to the cluster node.


  4. Type the number that corresponds to the option for the private hostname.

  5. Type the number that corresponds to the option for changing a private hostname.

    Answer the questions when prompted. You are asked for the name of the node whose private hostname you are changing (clusternode<nodeid>-priv) and for the new private hostname.

  6. Flush the name service cache.

    Perform this step on each node in the cluster. Flushing prevents the cluster applications and data services from trying to access the old private hostname.


    phys-schost# nscd -i hosts
    
  7. If you changed a private hostname in your NTP configuration file, update your NTP configuration file (ntp.conf or ntp.conf.cluster) on each node.

    1. Use the editing tool of your choice.

      If you perform this step at installation, also remember to remove names for nodes that are not configured. The default template is preconfigured with 16 nodes. Typically, the ntp.conf.cluster file is identical on each cluster node.

    2. Verify that you can successfully ping the new private hostname from all cluster nodes.

    3. Restart the NTP daemon.

      Perform this step on each node of the cluster.

      • SPARC: If you are using Solaris 9 OS, use the xntpd command to restart the NTP daemon.

        If you are using the ntp.conf.cluster file, type the following:


        # /etc/init.d/xntpd.cluster start
        

        If you are using the ntp.conf file, type the following:


        # /etc/init.d/xntpd start
        
      • If you are using Solaris 10 OS, use the svcadm command to restart the NTP daemon.


        # svcadm enable ntp
        
  8. Enable all data service resources and other applications that were disabled in Step 1.


    phys-schost# clresource enable resource[,...]
    

    For information about using the clresource command, see the clresource(1CL) man page and the Sun Cluster Data Services Planning and Administration Guide for Solaris OS.


Example 8–8 Changing the Private Hostname

The following example changes the private hostname from clusternode2-priv to clusternode4-priv, on node phys-schost-2 .


[Disable all applications and data services as necessary.]
phys-schost-1# /etc/init.d/xntpd stop
phys-schost-1# clnode show | grep node
 ...
 private hostname:                           clusternode1-priv
 private hostname:                           clusternode2-priv
 private hostname:                           clusternode3-priv
 ...
phys-schost-1# clsetup
phys-schost-1# nscd -i hosts
phys-schost-1# vi /etc/inet/ntp.conf
 ...
 peer clusternode1-priv
 peer clusternode4-priv
 peer clusternode3-priv
phys-schost-1# ping clusternode4-priv
phys-schost-1# /etc/init.d/xntpd start
[Enable all applications and data services disabled at the beginning of the procedure.]

How to Add a Private Hostname for a Non-Voting Node on a Global Cluster

Use this procedure to add a private hostname for a non-voting node on a global cluster after installation has been completed. In the procedures in this chapter, phys-schost# reflects a global-cluster prompt. Perform this procedure only on a global cluster.

  1. Run the clsetup(1CL) utility to add a private hostname on the appropriate zone.


    phys-schost# clsetup
    
  2. Type the number that corresponds to the option for private host names and press the Return key.

  3. Type the number that corresponds to the option for adding a zone private hostname and press the Return key.

    Answer the questions when prompted. There is no default for a global-cluster non-voting node private hostname. You will need to provide a hostname.
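
    Behind the menus, the private hostname is applied with a clnode set command of the following form (the same command appears later in this chapter for shared-IP zones); the node, zone, and host-alias names here are hypothetical.

    phys-schost# clnode set -p zprivatehostname=myzone-priv phys-schost-1:myzone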

How to Change the Private Hostname on a Non-Voting Node on a Global Cluster

Use this procedure to change the private hostname of a non-voting node after installation has been completed.

Private host names are assigned during initial cluster installation. The private hostname takes the form clusternode<nodeid>-priv, for example: clusternode3-priv. Change a private hostname only if the name is already in use in the domain.


Caution –

Do not attempt to assign IP addresses to new private hostnames. The clustering software assigns them.


The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

  1. On all nodes in the global cluster, disable any data service resources or other applications that might cache private host names.


    phys-schost# clresource disable resource1, resource2
    

    Include the following in the applications you disable.

    • HA-DNS and HA-NFS services, if configured

    • Any application that has been custom-configured to use the private hostname

    • Any application that is being used by clients over the private interconnect

    For information about using the clresource command, see the clresource(1CL) man page and the Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

  2. Run the clsetup(1CL) utility to change the private hostname of the appropriate non-voting node on the global cluster.


    phys-schost# clsetup
    

    You need to perform this step only from one of the nodes in the cluster.


    Note –

    When selecting a new private hostname, ensure that the name is unique to the cluster.


  3. Type the number that corresponds to the option for private hostnames and press the Return key.

  4. Type the number that corresponds to the option for adding a zone private hostname and press the Return key.

    No default exists for the private hostname of a non-voting node of a global cluster. You must provide a hostname.

  5. Type the number that corresponds to the option for changing a zone private hostname.

    Answer the questions when prompted. You are asked for the name of the non-voting node whose private hostname is being changed (clusternode<nodeid>-priv), and the new private hostname.

  6. Flush the name service cache.

    Perform this step on each node in the cluster. Flushing prevents the cluster applications and data services from trying to access the old private hostname.


    phys-schost# nscd -i hosts
    
  7. Enable all data service resources and other applications that were disabled in Step 1.

How to Delete the Private Hostname for a Non-Voting Node on a Global Cluster

Use this procedure to delete a private hostname for a non-voting node on a global cluster. Perform this procedure only on a global cluster.

  1. Run the clsetup(1CL) utility to delete a private hostname on the appropriate zone.

  2. Type the number that corresponds to the option for zone private hostname.

  3. Type the number that corresponds to the option for deleting a zone private hostname.

  4. Type the name of the non-voting node's private hostname that you are deleting.

How to Put a Node Into Maintenance State

Put a global-cluster node into maintenance state when taking the node out of service for an extended period of time. This way, the node does not contribute to the quorum count while it is being serviced. To put a node into maintenance state, the node must be shut down with clnode(1CL) evacuate and cluster(1CL) shutdown commands.


Note –

Use the Solaris shutdown command to shut down a single node. Use the cluster shutdown command only when shutting down an entire cluster.


When a cluster node is shut down and put in maintenance state, all quorum devices that are configured with ports to the node have their quorum vote counts decremented by one. The node and quorum device vote counts are incremented by one when the node is removed from maintenance mode and brought back online.

Use the clquorum(1CL) disable command to put a cluster node into maintenance state.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

  1. Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization on the global-cluster node that you are putting into maintenance state.

  2. Evacuate any resource groups and device groups from the node. The clnode evacuate command switches over all resource groups and device groups, including all non-voting nodes, from the specified node to the next-preferred node.


    phys-schost# clnode evacuate node
    
  3. Shut down the node that you evacuated.


    phys-schost# shutdown -g0 -y -i 0
    
  4. Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization on another node in the cluster and put the node that you shut down in Step 3 in maintenance state.


    phys-schost# clquorum disable node
    
    node

    Specifies the name of a node that you want to put into maintenance mode.

  5. Verify that the global-cluster node is now in maintenance state.


    phys-schost# clquorum status node
    

    The node that you put into maintenance state should have a Status of offline and 0 (zero) for Present and Possible quorum votes.


Example 8–9 Putting a Global-Cluster Node Into Maintenance State

The following example puts a cluster node into maintenance state and verifies the results. The clquorum status output shows the quorum votes for phys-schost-1 to be 0 (zero) and the status to be Offline. The Quorum Summary should also show reduced vote counts. Depending on your configuration, the Quorum Votes by Device output might indicate that some quorum disk devices are offline as well.


[On the node to be put into maintenance state:]
phys-schost-1# clnode evacuate phys-schost-1
phys-schost-1# shutdown -g0 -y -i0

[On another node in the cluster:]
phys-schost-2# clquorum disable phys-schost-1
phys-schost-2# clquorum status phys-schost-1

-- Quorum Votes by Node --

Node Name           Present       Possible       Status
---------           -------       --------       ------
phys-schost-1       0             0              Offline
phys-schost-2       1             1              Online
phys-schost-3       1             1              Online

See Also

To bring a node back online, see How to Bring a Node Out of Maintenance State.

How to Bring a Node Out of Maintenance State

Use the following procedure to bring a global-cluster node back online and reset the quorum vote count to the default. For cluster nodes, the default quorum count is one. For quorum devices, the default quorum count is N-1, where N is the number of nodes with nonzero vote counts that have ports to the quorum device.

When a node has been put in maintenance state, the node's quorum vote count is decremented by one. All quorum devices that are configured with ports to the node will also have their quorum vote counts decremented. When the quorum vote count is reset and a node removed from maintenance state, both the node's quorum vote count and the quorum device vote count are incremented by one.

Run this procedure any time a global-cluster node has been put in maintenance state and you are removing it from maintenance state.


Caution –

If you do not specify either the globaldev or node options, the quorum count is reset for the entire cluster.


The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

  1. Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization on any node of the global cluster other than the one in maintenance state.

  2. Depending on the number of nodes that you have in your global cluster configuration, perform one of the following steps:

    • If you have two nodes in your cluster configuration, go to Step 4.

    • If you have more than two nodes in your cluster configuration, go to Step 3.

  3. If the node that you are removing from maintenance state will have quorum devices, reset the cluster quorum count from a node other than the one in maintenance state.

    You must reset the quorum count from a node other than the node in maintenance state before rebooting the node, or the node might hang while waiting for quorum.


    phys-schost# clquorum reset
    
    reset

    The change flag that resets quorum.

  4. Boot the node that you are removing from maintenance state.
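
    On a SPARC based system, for example, boot the node from the OpenBoot PROM ok prompt; on an x86 based system, select the standard Solaris entry in the GRUB menu.

    ok boot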

  5. Verify the quorum vote count.


    phys-schost# clquorum status
    

    The node that you removed from maintenance state should have a status of online and show the appropriate vote count for Present and Possible quorum votes.


Example 8–10 Removing a Cluster Node From Maintenance State and Resetting the Quorum Vote Count

The following example resets the quorum count for a cluster node and its quorum devices to their defaults and verifies the result. The clquorum status output shows the quorum votes for phys-schost-1 to be 1 and the status to be online. The Quorum Summary should also show an increase in vote counts.


phys-schost-2# clquorum reset

phys-schost-1# clquorum status

--- Quorum Votes Summary ---

            Needed   Present   Possible
            ------   -------   --------
            4        6         6


--- Quorum Votes by Node ---

Node Name        Present       Possible      Status
---------        -------       --------      ------
phys-schost-1    1             1             Online
phys-schost-2    1             1             Online
phys-schost-3    1             1             Online


--- Quorum Votes by Device ---

Device Name           Present      Possible      Status
-----------           -------      --------      ------
/dev/did/rdsk/d3s2    1            1             Online
/dev/did/rdsk/d17s2   0            1             Online
/dev/did/rdsk/d31s2   1            1             Online

Adding a Node

This section provides instructions on adding a node to a global cluster or a zone cluster. You can create a new zone-cluster node on a node of the global cluster that hosts the zone cluster, as long as that global-cluster node does not already host a node of that particular zone cluster. You cannot convert an existing non-voting node on a global cluster into a zone-cluster node.

The following table lists the tasks to perform to add a node to an existing cluster. Perform the tasks in the order shown.

Table 8–2 Task Map: Adding a Node to an Existing Global or Zone Cluster

Task 

Instructions 

Install the host adapter on the node and verify that the existing cluster interconnects can support the new node 

Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS

Add shared storage 

Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS

Prepare the cluster for additional nodes 

How to Prepare the Cluster for Additional Global-Cluster Nodes in Sun Cluster Software Installation Guide for Solaris OS

Add the node to the authorized node list by using clsetup

How to Add a Node to the Authorized Node List

Install and configure the software on the new cluster node 

Chapter 2, Installing Software on Global-Cluster Nodes, in Sun Cluster Software Installation Guide for Solaris OS

If the cluster is configured in a Sun Cluster Geographic Edition partnership, configure the new node as an active participant in the configuration 

How to Add a New Node to a Cluster in a Partnership in Sun Cluster Geographic Edition System Administration Guide

How to Add a Node to the Authorized Node List

Before adding a Solaris host or a virtual machine to an existing global cluster or a zone cluster, ensure that the node has all of the necessary hardware correctly installed and configured, including an operational physical connection to the private cluster interconnect.

For hardware installation information, refer to the Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS or the hardware documentation that shipped with your server.

This procedure enables a machine to install itself into a cluster by adding its node name to the list of authorized nodes for that cluster.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

  1. Become superuser on a current global-cluster member. Perform these steps from a node of the global cluster.

  2. Ensure that you have correctly completed all prerequisite hardware installation and configuration tasks that are listed in the task map for Adding a Node.

  3. Start the clsetup utility.


    phys-schost# clsetup
    

    The Main Menu is displayed.


    Note –

    To add a node to a zone cluster, use the clzonecluster utility. See Step 9 for instructions to manually add a node to a zone cluster.


  4. Type the number that corresponds to the option for displaying the New Nodes Menu and press the Return key.

  5. Type the number that corresponds to the option to modify the authorized list and press the Return key. Specify the name of a machine that can add itself.

    Follow the prompts to add the node's name to the cluster. You are asked for the name of the node to be added.

  6. Verify that the task has been performed successfully.

    The clsetup utility prints a “Command completed successfully” message if it completes the task without error.

  7. To prevent any new machines from being added to the cluster, type the number that corresponds to the option to instruct the cluster to ignore requests to add new machines. Press the Return key.

    Follow the clsetup prompts. This option tells the cluster to ignore all requests over the public network from any new machine that is trying to add itself to the cluster.
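
    The clsetup utility generates the corresponding claccess command, similar to the following.

    phys-schost# claccess deny-all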

  8. Quit the clsetup utility.

  9. To manually add a node to a zone cluster, you must specify the Solaris host and the virtual node name. You must also specify a network resource to be used for public network communication on each node. In the following example, the zone name is sczone, and bge0 is the public network adapter on both machines.


    clzc:sczone> add node
    clzc:sczone:node> set physical-host=phys-cluster-1
    clzc:sczone:node> set hostname=hostname1
    clzc:sczone:node> add net
    clzc:sczone:node:net> set address=hostname1
    clzc:sczone:node:net> set physical=bge0
    clzc:sczone:node:net> end
    clzc:sczone:node> end
    clzc:sczone> add node
    clzc:sczone:node> set physical-host=phys-cluster-2
    clzc:sczone:node> set hostname=hostname2
    clzc:sczone:node> add net
    clzc:sczone:node:net> set address=hostname2
    clzc:sczone:node:net> set physical=bge0
    clzc:sczone:node:net> end
    clzc:sczone:node> end

    For detailed instructions on configuring the node, see Configuring a Zone Cluster in Sun Cluster Software Installation Guide for Solaris OS.

  10. Install and configure the software on the new cluster node.

    Use either scinstall or JumpStart™ software to complete the installation and configuration of the new node, as described in the Sun Cluster Software Installation Guide for Solaris OS.


Example 8–11 Adding a Global-Cluster Node to the Authorized Node List

The following example shows how to add a node named phys-schost-3 to the authorized node list in an existing cluster.


[Become superuser and execute the clsetup utility.]
phys-schost# clsetup
[Select New nodes>Specify the name of a machine which may add itself.]
[Answer the questions when prompted.]
[Verify that the claccess command completed successfully.]
 
claccess allow -h phys-schost-3
 
    Command completed successfully.
[Select Prevent any new machines from being added to the cluster.]
[Quit the clsetup New Nodes Menu and Main Menu.]
[Install the cluster software.]

See Also

clsetup(1CL)

For a complete list of tasks for adding a cluster node, see Table 8–2, “Task Map: Adding a Node to an Existing Global or Zone Cluster.”

To add a node to an existing resource group, see the Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

Administering a Non-Voting Node in a Global Cluster

This section provides information and procedures for creating a non-voting node, simply referred to as a zone, on a global-cluster node.

How to Create a Non-Voting Node in a Global Cluster

  1. Become superuser on the global-cluster node where you are creating the non-voting node.

    You must be working in the global zone.

  2. For the Solaris 10 OS, verify on each node that multiuser services for the Service Management Facility (SMF) are online.

    If services are not yet online for a node, wait until the state changes to online before you proceed to the next step.


    phys-schost# svcs multi-user-server node
    STATE          STIME    FMRI
    online         17:52:55 svc:/milestone/multi-user-server:default
  3. Configure, install, and boot the new zone.


    Note –

    You must set the autoboot property to true to support resource-group functionality in the non-voting node on the global cluster.


    Follow procedures in the Solaris documentation:

    1. Perform procedures in Chapter 18, Planning and Configuring Non-Global Zones (Tasks), in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.

    2. Perform procedures in Installing and Booting Zones in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.

    3. Perform procedures in How to Boot a Zone in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.
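
    As a minimal sketch of this step, the following zonecfg and zoneadm sequence configures, installs, and boots a zone named my-zone; the zone name and zone path are placeholders, and autoboot is set to true as required by the note above.

    phys-schost# zonecfg -z my-zone
    zonecfg:my-zone> create
    zonecfg:my-zone> set zonepath=/zone-path
    zonecfg:my-zone> set autoboot=true
    zonecfg:my-zone> commit
    zonecfg:my-zone> exit
    phys-schost# zoneadm -z my-zone install
    phys-schost# zoneadm -z my-zone boot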

  4. Verify that the zone is in the ready state.


    phys-schost# zoneadm list -v
    ID  NAME     STATUS       PATH
     0  global   running      /
     1  my-zone  ready        /zone-path
    
  5. For a whole-root zone with the ip-type property set to exclusive: If the zone might host a logical-hostname resource, configure a file system resource that mounts the method directory from the global zone.


    phys-schost# zonecfg -z sczone
    zonecfg:sczone> add fs
    zonecfg:sczone:fs> set dir=/usr/cluster/lib/rgm
    zonecfg:sczone:fs> set special=/usr/cluster/lib/rgm
    zonecfg:sczone:fs> set type=lofs
    zonecfg:sczone:fs> end
    zonecfg:sczone> exit
    
  6. (Optional) For a shared-IP zone, assign a private IP address and a private hostname to the zone.

    The following command chooses and assigns an available IP address from the cluster's private IP-address range. The command also assigns the specified private hostname, or host alias, to the zone and maps it to the assigned private IP address.


    phys-schost# clnode set -p zprivatehostname=hostalias node:zone
    
    -p

    Specifies a property.

    zprivatehostname=hostalias

    Specifies the zone private hostname, or host alias.

    node

    The name of the node.

    zone

    The name of the global-cluster non-voting node.

  7. Perform the initial internal zone configuration.

    Follow the procedures in Performing the Initial Internal Zone Configuration in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones. Choose either of the following methods:

    • Log in to the zone.

    • Use an /etc/sysidcfg file.

  8. In the non-voting node, modify the nsswitch.conf file.

    These changes enable the zone to resolve searches for cluster-specific hostnames and IP addresses.

    1. Log in to the zone.


      phys-schost# zlogin -C zonename
      
    2. Open the /etc/nsswitch.conf file for editing.


      sczone# vi /etc/nsswitch.conf
      
    3. Add the cluster switch to the beginning of the lookups for the hosts and netmasks entries, followed by the files switch.

      The modified entries should appear similar to the following:


      …
      hosts:      cluster files nis [NOTFOUND=return]
      …
      netmasks:   cluster files nis [NOTFOUND=return]
      …
    4. For all other entries, ensure that the files switch is the first switch that is listed in the entry.

    5. Exit the zone.

  9. If you created an exclusive-IP zone, configure IPMP groups in each /etc/hostname.interface file that is on the zone.

    You must configure an IPMP group for each public-network adapter that is used for data-service traffic in the zone. This information is not inherited from the global zone. See Public Networks in Sun Cluster Software Installation Guide for Solaris OS for more information about configuring IPMP groups in a cluster.
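
    For example, a single-adapter IPMP group could be defined in the zone's /etc/hostname.bge0 file as follows; the adapter, hostname, and group names are hypothetical.

    zone-host-1 netmask + broadcast + group sc_ipmp0 up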

  10. Set up name-to-address mappings for all logical hostname resources that are used by the zone.

    1. Add name-to-address mappings to the /etc/inet/hosts file on the zone.

      This information is not inherited from the global zone.
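
      For example, an entry of the following form maps a hypothetical logical hostname to its IP address; both values are illustrative.

      192.168.10.33   apache-lh-1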

    2. If you use a name server, add the name-to-address mappings.

How to Remove a Non-Voting Node on a Global Cluster

  1. Become superuser on the global-cluster node where the non-voting node that you want to remove resides.

  2. Delete the non-voting node from the system.

    Follow the procedures in Deleting a Non-Global Zone From the System in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.

Performing Zone-Cluster Administrative Tasks

You can perform other administrative tasks on a zone cluster, such as moving the zone path, preparing a zone cluster to run applications, and cloning a zone cluster. All of these commands must be performed from the voting node of the global cluster.


Note –

The Sun Cluster commands that you run only from the voting node in the global cluster are not valid for use with zone clusters. See the appropriate Sun Cluster man page for information about the valid use of a command in zones.


Table 8–3 Other Zone-Cluster Tasks

Task 

Instructions 

Move the zone path to a new zone path 

clzonecluster move -f zonepath zoneclustername

Prepare the zone cluster to run applications 

clzonecluster ready -n nodename zoneclustername

Clone a zone cluster 

clzonecluster clone -Z source-zoneclustername [-m copymethod] zoneclustername

Halt the source zone cluster before you use the clone subcommand. The target zone cluster must already be configured.
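
The following sketch illustrates cloning; the source and target zone-cluster names are hypothetical, and the target zone cluster is assumed to be configured already.

phys-schost# clzonecluster halt sczone-old
phys-schost# clzonecluster clone -Z sczone-old sczone-new
phys-schost# clzonecluster boot sczone-new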

Removing a Node on a Global Cluster or a Zone Cluster

This section provides instructions on how to remove a node on a global cluster or a zone cluster. You can also remove a specific zone cluster from a global cluster. The following table lists the tasks to perform to remove a node from an existing cluster. Perform the tasks in the order shown.


Caution –

If you remove a node using only this procedure for a RAC configuration, the removal might cause the node to panic during a reboot. For instructions on how to remove a node from a RAC configuration, see How to Remove Sun Cluster Support for Oracle RAC From Selected Nodes in Sun Cluster Data Service for Oracle RAC Guide for Solaris OS. After you complete that process, follow the appropriate steps below.


Table 8–4 Task Map: Removing a Node

Task 

Instructions 

Move all resource groups and device groups off the node to be removed 

clnode evacuate node

Verify that the node can be removed by checking the allowed hosts 

If the node cannot be removed, give the node access to the cluster configuration 

claccess show node

claccess allow -h node-to-remove

Remove the node from all device groups 

How to Remove a Node From a Device Group (Solaris Volume Manager)

 

Remove all quorum devices connected to the node being removed 

This step is optional if you are removing a node from a two-node cluster.

How to Remove a Quorum Device

Note that although you must remove the quorum device before you remove the storage device in the next step, you can add the quorum device back immediately afterward. 

How to Remove the Last Quorum Device From a Cluster

Put the node being removed into noncluster mode 

How to Put a Node Into Maintenance State

Remove a node from a zone cluster 

How to Remove a Node From a Zone Cluster

Remove a node from the cluster software configuration 

How to Remove a Node From the Cluster Software Configuration

(Optional) Uninstall Sun Cluster software from a cluster node 

How to Uninstall Sun Cluster Software From a Cluster Node

Remove an entire zone cluster 

How to Remove a Zone Cluster

How to Remove a Node From a Zone Cluster

You can remove a node from a zone cluster by halting the node, uninstalling it, and removing the node from the configuration. If you decide later to add the node back into the zone cluster, follow the instructions in Adding a Node. Most of these steps are performed from the global-cluster node.

  1. Become superuser on a node of the global cluster.

  2. Shut down the zone-cluster node you want to remove by specifying the node and its zone cluster.


    phys-schost# clzonecluster halt -n node zoneclustername
    

    You can also use the clnode evacuate and shutdown commands within a zone cluster.

  3. Uninstall the zone-cluster node.


    phys-schost# clzonecluster uninstall -n node zoneclustername
    
  4. Remove the zone-cluster node from the configuration.

    Use the following commands:


    phys-schost# clzonecluster configure zoneclustername
    

    clzc:sczone> remove node physical-host=zoneclusternodename
    
  5. Verify that the node was removed from the zone cluster.


    phys-schost# clzonecluster status
    

How to Remove a Node From the Cluster Software Configuration

Perform this procedure to remove a node from the global cluster.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

  1. Ensure that you have removed the node from all resource groups, device groups, and quorum device configurations and put it into maintenance state before you continue with this procedure.

  2. Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization on the node that you want to remove. Perform all steps in this procedure from a node of the global cluster.

  3. Boot the global-cluster node that you want to remove into noncluster mode. For a zone-cluster node, follow the instructions in How to Remove a Node From a Zone Cluster before you perform this step.

    • On SPARC based systems, run the following command.


      ok boot -x
      
    • On x86 based systems, run the following commands.


      shutdown -g0 -y -i0
      
      Press any key to continue
    1. In the GRUB menu, use the arrow keys to select the appropriate Solaris entry and type e to edit its commands.

      The GRUB menu appears similar to the following:


      GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
      +-------------------------------------------------------------------------+
      | Solaris 10 /sol_10_x86                                                  |
      | Solaris failsafe                                                        |
      |                                                                         |
      +-------------------------------------------------------------------------+
      Use the ^ and v keys to select which entry is highlighted.
      Press enter to boot the selected OS, 'e' to edit the
      commands before booting, or 'c' for a command-line.

      For more information about GRUB based booting, see Booting an x86 Based System by Using GRUB (Task Map) in System Administration Guide: Basic Administration.

    2. In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.

      The GRUB boot parameters screen appears similar to the following:


      GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
      +----------------------------------------------------------------------+
      | root (hd0,0,a)                                                       |
      | kernel /platform/i86pc/multiboot                                     |
      | module /platform/i86pc/boot_archive                                  |
      +----------------------------------------------------------------------+
      Use the ^ and v keys to select which entry is highlighted.
      Press 'b' to boot, 'e' to edit the selected command in the
      boot sequence, 'c' for a command-line, 'o' to open a new line
      after ('O' for before) the selected line, 'd' to remove the
      selected line, or escape to go back to the main menu.
    3. Add the -x option to the command to specify that the system boot into noncluster mode.


      [ Minimal BASH-like line editing is supported. For the first word, TAB
      lists possible command completions. Anywhere else TAB lists the possible
      completions of a device/filename. ESC at any time exits. ]
      
      grub edit> kernel /platform/i86pc/multiboot -x
    4. Press the Enter key to accept the change and return to the boot parameters screen.

      The screen displays the edited command.


      GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
      +----------------------------------------------------------------------+
      | root (hd0,0,a)                                                       |
      | kernel /platform/i86pc/multiboot -x                                  |
      | module /platform/i86pc/boot_archive                                  |
      +----------------------------------------------------------------------+
      Use the ^ and v keys to select which entry is highlighted.
      Press 'b' to boot, 'e' to edit the selected command in the
      boot sequence, 'c' for a command-line, 'o' to open a new line
      after ('O' for before) the selected line, 'd' to remove the
      selected line, or escape to go back to the main menu.
    5. Type b to boot the node into noncluster mode.

      This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.


      Note –

      If the node to be removed is not available or can no longer be booted, run the following command on any active cluster node: clnode clear -F <node-to-be-removed>. Verify the node removal by running clnode status <nodename>.


  4. From the node you want to remove, delete the node from the cluster.


    phys-schost# clnode remove -F
    

    If the clnode remove command fails and a stale node reference exists, run clnode clear -F nodename on an active node.


    Note –

    If you are removing the last node in the cluster, the node must be in noncluster mode with no active nodes left in the cluster.


  5. From another cluster node, verify the node removal.


    phys-schost# clnode status nodename
    
  6. Complete the node removal.


Example 8–12 Removing a Node From the Cluster Software Configuration

This example shows how to remove a node (phys-schost-2) from a cluster. The clnode remove command is run in noncluster mode from the node you want to remove from the cluster (phys-schost-2).


[Remove the node from the cluster:]
phys-schost-2# clnode remove
phys-schost-1# clnode clear -F phys-schost-2
[Verify node removal:]
phys-schost-1# clnode status
-- Cluster Nodes --
                    Node name           Status
                    ---------           ------
  Cluster node:     phys-schost-1       Online

See Also

To uninstall Sun Cluster software from the removed node, see How to Uninstall Sun Cluster Software From a Cluster Node.

For hardware procedures, see the Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

For a comprehensive list of tasks for removing a cluster node, see Table 8–4.

To add a node to an existing cluster, see How to Add a Node to the Authorized Node List.

How to Remove Connectivity Between an Array and a Single Node, in a Cluster With Greater Than Two-Node Connectivity

Use this procedure to detach a storage array from a single cluster node, in a cluster that has three-node or four-node connectivity.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

  1. Back up all database tables, data services, and volumes that are associated with the storage array that you are removing.

  2. Determine the resource groups and device groups that are running on the node to be disconnected.


    phys-schost# clresourcegroup status
    phys-schost# cldevicegroup status
    
  3. If necessary, move all resource groups and device groups off the node to be disconnected.


    Caution (SPARC only) –

    If your cluster is running Oracle RAC software, shut down the Oracle RAC database instance that is running on the node before you move the groups off the node. For instructions, see the Oracle Database Administration Guide.



    phys-schost# clnode evacuate node
    

    The clnode evacuate command switches over all device groups from the specified node to the next-preferred node. The command also switches all resource groups from voting or non-voting nodes on the specified node to the next-preferred voting or non-voting node.

  4. Put the device groups into maintenance state.

    For the procedure on acquiescing I/O activity to Veritas shared disk groups, see your VxVM documentation.

    For the procedure on putting a device group in maintenance state, see How to Put a Node Into Maintenance State.

  5. Remove the node from the device groups.

    • If you use VxVM or a raw disk, use the cldevicegroup(1CL) command to remove the device groups.

    • If you use Solstice DiskSuite, use the metaset command to remove the device groups.
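
    The following commands are an illustrative sketch of both cases, using a hypothetical node phys-schost-2, a hypothetical VxVM or raw-disk device group dg-schost-1, and a hypothetical Solstice DiskSuite disk set dg-schost-2. Substitute your own node and device group names.


    phys-schost# cldevicegroup remove-node -n phys-schost-2 dg-schost-1
    phys-schost# metaset -s dg-schost-2 -d -h phys-schost-2
    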

  6. For each resource group that contains an HAStoragePlus resource, remove the node from the resource group's node list.


    phys-schost# clresourcegroup remove-node -z zone -n node + | resourcegroup
    
    node

    The name of the node.

    zone

    The name of the non-voting node that can master the resource group. Specify zone only if you specified a non-voting node when you created the resource group.

    See the Sun Cluster Data Services Planning and Administration Guide for Solaris OS for more information about changing a resource group's node list.


    Note –

    Resource type, resource group, and resource property names are case sensitive when clresourcegroup is executed.


  7. If the storage array that you are removing is the last storage array that is connected to the node, disconnect the fiber-optic cable between the node and the hub or switch that is connected to this storage array (otherwise, skip this step).

  8. If you are removing the host adapter from the node that you are disconnecting, shut down and power off the node. If you are not removing the host adapter from the node that you are disconnecting, skip to Step 11.

  9. Remove the host adapter from the node.

    For the procedure on removing host adapters, see the documentation for the node.

  10. Without booting the node, power on the node.

  11. If Oracle RAC software has been installed, remove the Oracle RAC software package from the node that you are disconnecting.


    phys-schost# pkgrm SUNWscucm 
    

    Caution (SPARC only) –

    If you do not remove the Oracle RAC software from the node that you disconnected, the node panics when the node is reintroduced to the cluster and potentially causes a loss of data availability.


  12. Boot the node in cluster mode.

    • On SPARC based systems, run the following command.


      ok boot
      
    • On x86 based systems, run the following commands.

      When the GRUB menu is displayed, select the appropriate Solaris entry and press Enter. The GRUB menu appears similar to the following:


      GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
      +-------------------------------------------------------------------------+
      | Solaris 10 /sol_10_x86                                                  |
      | Solaris failsafe                                                        |
      |                                                                         |
      +-------------------------------------------------------------------------+
      Use the ^ and v keys to select which entry is highlighted.
      Press enter to boot the selected OS, 'e' to edit the
      commands before booting, or 'c' for a command-line.
  13. On the node, update the device namespace by updating the /devices and /dev entries.


    phys-schost# devfsadm -C
    phys-schost# cldevice refresh
    
  14. Bring the device groups back online.

    For procedures about bringing a Veritas shared disk group online, see your Veritas Volume Manager documentation.

    For information about bringing a device group online, see How to Bring a Node Out of Maintenance State.

ProcedureHow to Remove a Zone Cluster

You can delete a specific zone cluster or use a wildcard to remove all zone clusters that are configured on the global cluster. The zone cluster must be configured before you remove it.

  1. Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization on a node of the global cluster. Perform all steps in this procedure from a node of the global cluster.

  2. Delete all resource groups and their resources from the zone cluster.


    phys-schost# clresourcegroup delete -F -Z zoneclustername +
    

    Note –

    This step is performed from a global-cluster node. To perform this step from a node of the zone cluster instead, log in to the zone-cluster node and omit the -Z zoneclustername option from the command.


  3. Halt the zone cluster.


    phys-schost# clzonecluster halt zoneclustername
    
  4. Uninstall the zone cluster.


    phys-schost# clzonecluster uninstall zoneclustername
    
  5. Unconfigure the zone cluster.


    phys-schost# clzonecluster delete zoneclustername
    

Example 8–13 Removing a Zone Cluster From a Global Cluster


phys-schost# clresourcegroup delete -F -Z sczone +

phys-schost# clzonecluster halt sczone

phys-schost# clzonecluster uninstall sczone

phys-schost# clzonecluster delete sczone

ProcedureHow to Remove a File System From a Zone Cluster

Perform this procedure to remove a file system from a zone cluster. Supported file system types in a zone cluster include UFS, VxFS, stand-alone QFS, shared QFS, ZFS (exported as a data set), and loopback file systems. For instructions on adding a file system to a zone cluster, see Adding File Systems to a Zone Cluster in Sun Cluster Software Installation Guide for Solaris OS.

The phys-schost# prompt reflects a global-cluster prompt. This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

  1. Become superuser on a node of the global cluster that hosts the zone cluster. Some steps in this procedure are performed from a node of the global cluster. Other steps are performed from a node of the zone cluster.

  2. Delete the resources related to the file system being removed.

    1. Identify and remove the Sun Cluster resource types, such as HAStoragePlus and SUNW.ScalMountPoint, that are configured for the zone cluster's file system that you are removing.


      phys-schost# clresource delete -F -Z zoneclustername fs_zone_resources
      
    2. If applicable, identify and remove the Sun Cluster resources of type SUNW.qfs that are configured in the global cluster for the file system that you are removing.


      phys-schost# clresource delete -F fs_global_resources
      

      Use the -F option carefully because it forces the deletion of all the resources you specify, even if you did not disable them first. All the resources you specified are removed from the resource-dependency settings of other resources, which can cause a loss of service in the cluster. Dependent resources that are not deleted can be left in an invalid state or in an error state. For more information, see the clresource(1CL) man page.


    Tip –

    If the resource group for the removed resource later becomes empty, you can safely delete the resource group.
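
    For example, if a resource group named fs-rg (a hypothetical name) is left empty after the resource is removed, you could delete it as follows:


    phys-schost# clresourcegroup delete fs-rg
    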


  3. Determine the path to the file-system mount point directory. For example:


    phys-schost# clzonecluster configure zoneclustername
    
  4. Remove the file system from the zone-cluster configuration.


    phys-schost# clzonecluster configure zoneclustername
    

    clzc:zoneclustername> remove fs dir=filesystemdirectory
    

    clzc:zoneclustername> commit
    

    The file system mount point is specified by dir=.

  5. Verify the removal of the file system.


    phys-schost# clzonecluster show -v zoneclustername
    

Example 8–14 Removing a Highly Available File System in a Zone Cluster

This example shows how to remove a file system with a mount-point directory (/local/ufs-1) that is configured in a zone cluster called sczone. The resource is hasp-rs and is of the type HAStoragePlus.


phys-schost# clzonecluster show -v sczone
...
 Resource Name:                           fs
   dir:                                     /local/ufs-1
   special:                                 /dev/md/ds1/dsk/d0
   raw:                                     /dev/md/ds1/rdsk/d0
   type:                                    ufs
   options:                                 [logging]
 ...
phys-schost# clresource delete -F -Z sczone hasp-rs
phys-schost# clzonecluster configure sczone
clzc:sczone> remove fs dir=/local/ufs-1
clzc:sczone> commit
phys-schost# clzonecluster show -v sczone


Example 8–15 Removing a Highly Available ZFS File System in a Zone Cluster

This example shows how to remove a ZFS file system in a ZFS pool called HAzpool, which is configured in the sczone zone cluster in the resource hasp-rs of type SUNW.HAStoragePlus.


phys-schost# clzonecluster show -v sczone
...
 Resource Name:                           dataset
   name:                                     HAzpool
...
phys-schost# clresource delete -F -Z sczone hasp-rs
phys-schost# clzonecluster configure sczone
clzc:sczone> remove dataset name=HAzpool
clzc:sczone> commit
phys-schost# clzonecluster show -v sczone


Example 8–16 Removing a Shared QFS File System in a Zone Cluster

This example shows how to remove a configured shared QFS file system with a mount-point directory of /db_qfs/Data. The file system is managed by the scal-Data-rs resource in the zone cluster and by the Data-rs resource in the global cluster, and its configuration appears in the following clzonecluster output:


phys-schost# clzonecluster show -v sczone
...
 Resource Name:                           fs
   dir:                                     /db_qfs/Data
   special:                                 Data
   type:                                    samfs
...
phys-schost# clresource delete -F -Z sczone scal-Data-rs
phys-schost# clresource delete -F Data-rs
phys-schost# clzonecluster configure sczone
clzc:sczone> remove fs dir=/db_qfs/Data
clzc:sczone> commit
phys-schost# clzonecluster show -v sczone

ProcedureHow to Remove a Storage Device From a Zone Cluster

You can remove storage devices, such as SVM disksets and DID devices, from a zone cluster. Perform this procedure to remove a storage device from a zone cluster.

  1. Become superuser on a node of the global cluster that hosts the zone cluster. Some steps in this procedure are performed from a node of the global cluster. Other steps can be performed from a node of the zone cluster.

  2. Delete the resources related to the devices being removed. Identify and remove the Sun Cluster resource types, such as SUNW.HAStoragePlus and SUNW.ScalDeviceGroup, that are configured for the zone cluster's devices that you are removing.


    phys-schost# clresource delete -F -Z zoneclustername dev_zone_resources
    
  3. Determine the match entry for the devices to be removed.


    phys-schost# clzonecluster show -v zoneclustername
    ...
     Resource Name:       device
        match:              <device_match>
     ...
  4. Remove the devices from the zone-cluster configuration.


    phys-schost# clzonecluster configure zoneclustername
    clzc:zoneclustername> remove device match=<device_match>
    clzc:zoneclustername> commit
    clzc:zoneclustername> end
    
  5. Reboot the zone cluster.


    phys-schost# clzonecluster reboot zoneclustername
    
  6. Verify the removal of the devices.


    phys-schost# clzonecluster show -v zoneclustername
    

Example 8–17 Removing an SVM Disk Set From a Zone Cluster

This example shows how to remove an SVM disk set called apachedg configured in a zone cluster called sczone. The set number of the apachedg disk set is 3. The devices are used by the zc_rs resource that is configured in the cluster.


phys-schost# clzonecluster show -v sczone
...
  Resource Name:      device
     match:             /dev/md/apachedg/*dsk/*
  Resource Name:      device
     match:             /dev/md/shared/3/*dsk/*
...
phys-schost# clresource delete -F -Z sczone zc_rs

phys-schost# ls -l /dev/md/apachedg
lrwxrwxrwx 1 root root 8 Jul 22 23:11 /dev/md/apachedg -> shared/3
phys-schost# clzonecluster configure sczone
clzc:sczone> remove device match=/dev/md/apachedg/*dsk/*
clzc:sczone> remove device match=/dev/md/shared/3/*dsk/*
clzc:sczone> commit
clzc:sczone> end
phys-schost# clzonecluster reboot sczone
phys-schost# clzonecluster show -v sczone


Example 8–18 Removing a DID Device From a Zone Cluster

This example shows how to remove DID devices d10 and d11, which are configured in a zone cluster called sczone. The devices are used by the zc_rs resource that is configured in the cluster.


phys-schost# clzonecluster show -v sczone
...
 Resource Name:       device
     match:             /dev/did/*dsk/d10*
 Resource Name:       device
    match:              /dev/did/*dsk/d11*
...
phys-schost# clresource delete -F -Z sczone zc_rs
phys-schost# clzonecluster configure sczone
clzc:sczone> remove device match=/dev/did/*dsk/d10*
clzc:sczone> remove device match=/dev/did/*dsk/d11*
clzc:sczone> commit
clzc:sczone> end
phys-schost# clzonecluster reboot sczone
phys-schost# clzonecluster show -v sczone

ProcedureHow to Uninstall Sun Cluster Software From a Cluster Node

Perform this procedure to uninstall Sun Cluster software from a global-cluster node before you disconnect it from a fully established cluster configuration. You can use this procedure to uninstall software from the last remaining node of a cluster.


Note –

To uninstall Sun Cluster software from a node that has not yet joined the cluster or is still in installation mode, do not perform this procedure. Instead, go to “How to Uninstall Sun Cluster Software to Correct Installation Problems” in the Sun Cluster Software Installation Guide for Solaris OS.


The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

  1. Ensure that you have correctly completed all prerequisite tasks in the task map to remove a cluster node.

    See Table 8–4.


    Note –

    Ensure that you have removed the node from the cluster configuration by using clnode remove before you continue with this procedure.


  2. Become superuser on an active member of the global cluster other than the global-cluster node that you are uninstalling. Perform this procedure from a global-cluster node.

  3. From the active cluster member, add the node that you intend to uninstall to the cluster's node authentication list.


    phys-schost# claccess allow -h hostname
    
    -h

    Specifies the name of the node to be added to the cluster's node-authentication list.

    Alternately, you can use the clsetup(1CL) utility. See How to Add a Node to the Authorized Node List for procedures.

  4. Become superuser on the node to uninstall.

  5. If you have a zone cluster, uninstall it.


    phys-schost# clzonecluster uninstall -F zoneclustername
    

    For specific steps, see How to Remove a Zone Cluster.

  6. Reboot the global-cluster node into noncluster mode.

    • On a SPARC based system, run the following command.


      # shutdown -g0 -y -i0
      ok boot -x
      
    • On an x86 based system, run the following commands.


      # shutdown -g0 -y -i0
      ...
                            <<< Current Boot Parameters >>>
      Boot path: /pci@0,0/pci8086,2545@3/pci8086,1460@1d/pci8086,341a@7,1/
      sd@0,0:a
      Boot args:
      
      Type    b [file-name] [boot-flags] <ENTER>  to boot with options
      or      i <ENTER>                           to enter boot interpreter
      or      <ENTER>                             to boot with defaults
      
                        <<< timeout in 5 seconds >>>
      Select (b)oot or (i)nterpreter: b -x
      
  7. In the /etc/vfstab file, remove all globally mounted file-system entries except the /global/.devices global mounts.
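
    For example, an entry similar to the following hypothetical line, which mounts a cluster file system with the global mount option, would be removed; the /global/.devices entries remain in place. The device names shown are illustrative only.


    /dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle ufs 2 yes global,logging
    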

  8. If you intend to reinstall Sun Cluster software on this node, remove the Sun Cluster entry from the Sun Java Enterprise System (Java ES) product registry.

    If the Java ES product registry contains a record that Sun Cluster software was installed, the Java ES installer shows the Sun Cluster component grayed out and does not permit reinstallation.

    1. Start the Java ES uninstaller.

      Run the following command, where ver is the version of the Java ES distribution from which you installed Sun Cluster software.


      # /var/sadm/prod/SUNWentsysver/uninstall
      
    2. Follow the prompts to select Sun Cluster to uninstall.

      For more information about using the uninstall command, see Chapter 8, Uninstalling, in Sun Java Enterprise System 5 Installation Guide for UNIX.

  9. If you do not intend to reinstall the Sun Cluster software on this cluster, disconnect the transport cables and the transport switch, if any, from the other cluster devices.

    1. If the uninstalled node is connected to a storage device that uses a parallel SCSI interface, install a SCSI terminator to the open SCSI connector of the storage device after you disconnect the transport cables.

      If the uninstalled node is connected to a storage device that uses Fibre Channel interfaces, no termination is necessary.

    2. Follow the documentation that shipped with your host adapter and server for disconnection procedures.

ProcedureHow to Correct Error Messages

To correct any error messages that occurred while you attempted to perform any of the cluster-node removal procedures, perform the following procedure.

  1. Attempt to rejoin the node to the global cluster. Perform this procedure only on a global cluster.


    phys-schost# boot
    
  2. Did the node successfully rejoin the cluster?

    • If no, proceed to Step 3.

    • If yes, perform the following steps to remove the node from device groups.

    1. Remove the node from the remaining device group or groups.

      Follow procedures in How to Remove a Node From All Device Groups.

    2. After you remove the node from all device groups, return to How to Uninstall Sun Cluster Software From a Cluster Node and repeat the procedure.

  3. If the node could not rejoin the cluster, rename the node's /etc/cluster/ccr file to any other name you choose, for example, ccr.old.


    # mv /etc/cluster/ccr /etc/cluster/ccr.old
    
  4. Return to How to Uninstall Sun Cluster Software From a Cluster Node and repeat the procedure.

Troubleshooting a Node Uninstallation

This section describes error messages that you might receive when you run the scinstall -r command and the corrective actions to take.

Unremoved Cluster File System Entries

The following error messages indicate that the global-cluster node you removed still has cluster file systems referenced in its vfstab file.


Verifying that no unexpected global mounts remain in /etc/vfstab ... failed
scinstall:  global-mount1 is still configured as a global mount.
scinstall:  global-mount1 is still configured as a global mount.
scinstall:  /global/dg1 is still configured as a global mount.
 
scinstall:  It is not safe to uninstall with these outstanding errors.
scinstall:  Refer to the documentation for complete uninstall instructions.
scinstall:  Uninstall failed.

To correct this error, return to How to Uninstall Sun Cluster Software From a Cluster Node and repeat the procedure. Ensure that you successfully complete Step 7 in the procedure before you rerun the clnode remove command.

Unremoved Listing in Device Groups

The following error messages indicate that the node you removed is still listed with a device group.


Verifying that no device services still reference this node ... failed
scinstall:  This node is still configured to host device service "
service".
scinstall:  This node is still configured to host device service "
service2".
scinstall:  This node is still configured to host device service "
service3".
scinstall:  This node is still configured to host device service "
dg1".
 
scinstall:  It is not safe to uninstall with these outstanding errors.          
scinstall:  Refer to the documentation for complete uninstall instructions.
scinstall:  Uninstall failed.

Creating, Setting Up, and Managing the Sun Cluster SNMP Event MIB

This section describes how to create, set up, and manage the Simple Network Management Protocol (SNMP) event Management Information Base (MIB). This section also describes how to enable, disable, and change the Sun Cluster SNMP event MIB.

The Sun Cluster software currently supports one MIB, the event MIB. The SNMP manager software traps cluster events in real time. When enabled, the SNMP manager automatically sends trap notifications to all hosts that are defined by the clsnmphost command. The MIB maintains a read-only table of the most current 50 events. Because clusters generate numerous notifications, only events with a severity of warning or greater are sent as trap notifications. This information does not persist across reboots.

The SNMP event MIB is defined in the sun-cluster-event-mib.mib file and is located in the /usr/cluster/lib/mib directory. You can use this definition to interpret the SNMP trap information.

The default port number for the event SNMP module is 11161, and the default port for the SNMP traps is 11162. These port numbers can be changed by modifying the Common Agent Container property file, which is /etc/cacao/instances/default/private/cacao.properties.
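
Before you edit the file, you might confirm the current settings; the exact property names can vary by release, so treat the following command as a sketch. After you change the file, restart the common agent container (for example, by using the cacaoadm command) so that the new port numbers take effect.


phys-schost# grep -i snmp /etc/cacao/instances/default/private/cacao.properties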

Creating, setting up, and managing a Sun Cluster SNMP event MIB can involve the following tasks.

Table 8–5 Task Map: Creating, Setting Up, and Managing the Sun Cluster SNMP Event MIB

Task 

Instructions 

Enable an SNMP event MIB 

How to Enable an SNMP Event MIB

Disable an SNMP event MIB 

How to Disable an SNMP Event MIB

Change an SNMP event MIB 

How to Change an SNMP Event MIB

Add an SNMP host to the list of hosts that will receive trap notifications for the MIBs 

How to Enable an SNMP Host to Receive SNMP Traps on a Node

Remove an SNMP host 

How to Disable an SNMP Host From Receiving SNMP Traps on a Node

Add an SNMP user 

How to Add an SNMP User on a Node

Remove an SNMP user 

How to Remove an SNMP User From a Node

ProcedureHow to Enable an SNMP Event MIB

This procedure shows how to enable an SNMP event MIB.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

  1. Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Enable the SNMP event MIB.


    phys-schost-1# clsnmpmib enable [-n node] MIB
    
    [-n node]

    Specifies the node on which the event MIB that you want to enable is located. You can specify a node ID or a node name. If you do not specify this option, the current node is used by default.

    MIB

    Specifies the name of the MIB that you want to enable. In this case, the MIB name must be event.
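
    For example, the following command enables the event MIB on the node where the command is run:


    phys-schost-1# clsnmpmib enable event
    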

ProcedureHow to Disable an SNMP Event MIB

This procedure shows how to disable an SNMP event MIB.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

  1. Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Disable the SNMP event MIB.


    phys-schost-1# clsnmpmib disable -n node MIB
    
    -n node

    Specifies the node on which the event MIB that you want to disable is located. You can specify a node ID or a node name. If you do not specify this option, the current node is used by default.

    MIB

    Specifies the type of the MIB that you want to disable. In this case, you must specify event.
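
    For example, the following command disables the event MIB on a hypothetical node named phys-schost-2:


    phys-schost-1# clsnmpmib disable -n phys-schost-2 event
    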

ProcedureHow to Change an SNMP Event MIB

This procedure shows how to change the protocol for an SNMP event MIB.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

  1. Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Change the protocol of the SNMP event MIB.


    phys-schost-1# clsnmpmib set -n node -p version=value MIB
    
    -n node

    Specifies the node on which the event MIB that you want to change is located. You can specify a node ID or a node name. If you do not specify this option, the current node is used by default.

    -p version=value

    Specifies the version of SNMP protocol to use with the MIBs. You specify value as follows:

    • version=SNMPv2

    • version=snmpv2

    • version=2

    • version=SNMPv3

    • version=snmpv3

    • version=3

    MIB

    Specifies the name of the MIB or MIBs to which to apply the subcommand. In this case, you must specify event.
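
    For example, the following command sets the event MIB on a hypothetical node named phys-schost-1 to use version 3 of the SNMP protocol:


    phys-schost-1# clsnmpmib set -n phys-schost-1 -p version=3 event
    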

ProcedureHow to Enable an SNMP Host to Receive SNMP Traps on a Node

This procedure shows how to add an SNMP host on a node to the list of hosts that will receive trap notifications for the MIBs.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

  1. Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Add the host to the SNMP host list of a community on another node.


    phys-schost-1# clsnmphost add -c SNMPcommunity [-n node] host
    
    -c SNMPcommunity

    Specifies the SNMP community name that is used in conjunction with the hostname.

    You must specify the SNMP community name SNMPcommunity when you add a host to a community other than public. If you use the add subcommand without the -c option, the subcommand uses public as the default community name.

    If the specified community name does not exist, this command creates the community.

    -n node

    Specifies the name of the node of the SNMP host that is provided access to the SNMP MIBs in the cluster. You can specify a node name or a node ID. If you do not specify this option, the current node is used by default.

    host

    Specifies the name, IP address, or IPv6 address of a host that is provided access to the SNMP MIBs in the cluster.
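
    For example, the following command adds a hypothetical management host with the IP address 192.168.100.11 to the public community on the current node:


    phys-schost-1# clsnmphost add -c public 192.168.100.11
    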

ProcedureHow to Disable an SNMP Host From Receiving SNMP Traps on a Node

This procedure shows how to remove an SNMP host on a node from the list of hosts that will receive trap notifications for the MIBs.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

  1. Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Remove the host from the SNMP host list of a community on the specified node.


    phys-schost-1# clsnmphost remove -c SNMPcommunity -n node host
    
    remove

    Removes the specified SNMP host from the specified node.

    -c SNMPcommunity

    Specifies the name of the SNMP community from which the SNMP host is removed.

    -n node

    Specifies the name of the node on which the SNMP host is removed from the configuration. You can specify a node name or a node ID. If you do not specify this option, the current node is used by default.

    host

    Specifies the name, IP address, or IPv6 address of the host that is removed from the configuration.

    To remove all hosts in the specified SNMP community, use a plus sign (+) for host with the -c option. To remove all hosts, use the plus sign (+) for host.
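
    For example, the first of the following commands removes a hypothetical host with the IP address 192.168.100.11 from the public community on the current node, and the second removes all hosts in that community:


    phys-schost-1# clsnmphost remove -c public 192.168.100.11
    phys-schost-1# clsnmphost remove -c public +
    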

ProcedureHow to Add an SNMP User on a Node

This procedure shows how to add an SNMP user to the SNMP user configuration on a node.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

  1. Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Add the SNMP user.


    phys-schost-1# clsnmpuser create -n node -a authentication \
                  -f password user
    
    -n node

    Specifies the node on which the SNMP user is added. You can specify a node ID or a node name. If you do not specify this option, the current node is used by default.

    -a authentication

    Specifies the authentication protocol that is used to authorize the user. The value of the authentication protocol can be SHA or MD5.

    -f password

    Specifies a file that contains the SNMP user passwords. If you do not specify this option when you create a new user, the command prompts for a password. This option is valid only with the create subcommand.

    You must specify user passwords on separate lines in the following format:

    user:password
    

    Passwords cannot contain the following characters or a space:

    • ; (semicolon)

    • : (colon)

    • \ (backslash)

    • \n (newline)

    user

    Specifies the name of the SNMP user that you want to add.
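
    For example, the following commands create a hypothetical password file and then add a user named mibuser with SHA authentication on the current node. The file name, user name, and password are illustrative only; protect the password file appropriately and remove it when you are finished.


    phys-schost-1# echo "mibuser:Mib-pass-01" > /tmp/snmp-passwd
    phys-schost-1# clsnmpuser create -a SHA -f /tmp/snmp-passwd mibuser
    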

ProcedureHow to Remove an SNMP User From a Node

This procedure shows how to remove an SNMP user from the SNMP user configuration on a node.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

  1. Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Remove the SNMP user.


    phys-schost-1# clsnmpuser delete -n node user
    
    -n node

    Specifies the node from which the SNMP user is removed. You can specify a node ID or a node name. If you do not specify this option, the current node is used by default.

    user

    Specifies the name of the SNMP user that you want to remove.
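
    For example, the following command removes a hypothetical user named mibuser from the current node:


    phys-schost-1# clsnmpuser delete mibuser
    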

Troubleshooting

This section contains a troubleshooting procedure that you can use for testing purposes.

ProcedureHow to Take a Solaris Volume Manager Metaset From Nodes Booted in Noncluster Mode

Use this procedure to run an application outside the global cluster for testing purposes.

  1. Determine if the quorum device is used in the Solaris Volume Manager metaset, and determine if the quorum device uses SCSI2 or SCSI3 reservations.


    phys-schost# clquorum show
    
    1. If the quorum device is in the Solaris Volume Manager metaset, add a new quorum device that is not part of the metaset that you will later take in noncluster mode.


      phys-schost# clquorum add did
      
    2. Remove the old quorum device.


      phys-schost# clquorum remove did
      
    3. If the quorum device uses a SCSI2 reservation, scrub the SCSI2 reservation from the old quorum device and verify that no SCSI2 reservations remain.


      phys-schost# /usr/cluster/lib/sc/pgre -c pgre_scrub -d /dev/did/rdsk/dids2
      phys-schost# /usr/cluster/lib/sc/pgre -c pgre_inkeys -d /dev/did/rdsk/dids2
      
  2. Evacuate the global-cluster node that you want to boot in noncluster mode.


    phys-schost# clresourcegroup evacuate -n targetnode
    
  3. Take offline any resource group or resource groups that contain HAStorage or HAStoragePlus resources and that contain devices or file systems affected by the metaset that you want to take later in noncluster mode.


    phys-schost# clresourcegroup offline resourcegroupname
    
  4. Disable all the resources in the resource groups that you took offline.


    phys-schost# clresource disable resourcename
    
  5. Unmanage the resource groups.


    phys-schost# clresourcegroup unmanage resourcegroupname
    
  6. Take offline the corresponding device group or device groups.


    phys-schost# cldevicegroup offline devicegroupname
    
  7. Disable the device group or device groups.


    phys-schost# cldevicegroup disable devicegroupname
    
  8. Boot the passive node into noncluster mode.


    phys-schost# reboot -x
    
  9. Verify that the boot process has been completed on the passive node before proceeding.

    • Solaris 9

      The login prompt appears only after the boot process has been completed, so no action is required.

    • Solaris 10


      phys-schost# svcs -x
      
  10. Determine if any SCSI3 reservations exist on the disks in the metasets. Run the following command on all disks in the metasets.


    phys-schost# /usr/cluster/lib/sc/scsi -c inkeys -d /dev/did/rdsk/dids2
    
  11. If any SCSI3 reservations exist on the disks, scrub them.


    phys-schost# /usr/cluster/lib/sc/scsi -c scrub -d /dev/did/rdsk/dids2
    
  12. Take the metaset on the evacuated node.


    phys-schost# metaset -s name -C take -f
    
  13. Mount the file system or file systems that contain the defined device on the metaset.


    phys-schost# mount device mountpoint
    
  14. Start the application and perform the desired test. After finishing the test, stop the application.

  15. Reboot the node and wait until the boot process has ended.


    phys-schost# reboot
    
  16. Bring online the device group or device groups.


    phys-schost# cldevicegroup online -e devicegroupname
    
  17. Start the resource group or resource groups.


    phys-schost# clresourcegroup online -eM resourcegroupname