Sun Cluster System Administration Guide for Solaris OS

Overview of Administering the Cluster

This section describes how to perform administrative tasks for the entire global cluster or zone cluster. The following table lists these administrative tasks and the associated procedures. For Solaris 10 OS, you generally perform cluster administrative tasks in the global zone. To administer a zone cluster, at least one machine that will host the zone cluster must be up in cluster mode. Not all zone-cluster nodes need to be up and running; Sun Cluster replays any configuration changes when a node that is currently out of the cluster rejoins it.

In this chapter, phys-schost# reflects a global-cluster prompt. The clzonecluster interactive shell prompt is clzc:schost>.

Table 8–1 Task List: Administering the Cluster

  • Change the name of the cluster. See How to Change the Cluster Name.

  • List node IDs and their corresponding node names. See How to Map Node ID to Node Name.

  • Permit or deny new nodes to add themselves to the cluster. See How to Work With New Cluster Node Authentication.

  • Change the time for a cluster by using the Network Time Protocol (NTP). See How to Reset the Time of Day in a Cluster.

  • Shut down a node to the OpenBoot PROM ok prompt on a SPARC based system or to the Press any key to continue message in a GRUB menu on an x86 based system. See SPARC: How to Display the OpenBoot PROM (OBP) on a Node.

  • Change the private hostname. See How to Change the Node Private Hostname.

  • Put a cluster node in maintenance state. See How to Put a Node Into Maintenance State.

  • Bring a cluster node out of maintenance state. See How to Bring a Node Out of Maintenance State.

  • Add a node to a cluster. See Adding a Node.

  • Remove a node from a cluster. See Removing a Node on a Global Cluster or a Zone Cluster.

  • Move a zone cluster or prepare a zone cluster for applications. See Performing Zone-Cluster Administrative Tasks.

  • Uninstall Sun Cluster software from a node. See How to Uninstall Sun Cluster Software From a Cluster Node.

  • Correct error messages. See How to Correct Error Messages.

How to Change the Cluster Name

If necessary, you can change the cluster name after initial installation.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.
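
For example, the short form of the clquorum command is clq, and the short form of clresource is clrs.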

  1. Become superuser on any node in the global cluster.

  2. Start the clsetup utility.


    phys-schost# clsetup
    

    The Main Menu is displayed.

  3. To change the cluster name, type the number that corresponds to the option for Other Cluster Properties.

    The Other Cluster Properties menu is displayed.

  4. Make your selection from the menu and follow the onscreen instructions.

  5. If you want the service tag for Sun Cluster to reflect the new cluster name, delete the existing Sun Cluster service tag and restart the cluster. To delete the Sun Cluster service tag instance, complete the following substeps on all nodes in the cluster.

    1. List all of the service tags.


      phys-schost# stclient -x
      
    2. Find the Sun Cluster service tag instance number, then run the following command.


      phys-schost# stclient -d -i service_tag_instance_number
      
    3. Reboot all the nodes in the cluster.


      phys-schost# reboot
      

Example 8–1 Changing the Cluster Name

The following example shows the cluster(1CL) command that the clsetup(1CL) utility generates to change the cluster name to the new name, dromedary.


phys-schost# cluster rename -c dromedary

How to Map Node ID to Node Name

During Sun Cluster installation, each node is automatically assigned a unique node ID number. The node ID number is assigned to a node in the order in which it joins the cluster for the first time. After the node ID number is assigned, the number cannot be changed. The node ID number is often used in error messages to identify which cluster node the message concerns. Use this procedure to determine the mapping between node IDs and node names.

You do not need to be superuser to list configuration information for a global cluster or a zone cluster. Both steps in this procedure are performed from a node of the global cluster.

  1. Use the clnode(1CL) command to list the cluster configuration information for the global cluster.


    phys-schost# clnode show | grep Node
    
  2. You can also list the node IDs for a zone cluster. The zone-cluster node has the same node ID as the global-cluster node where it is running.


    phys-schost# zlogin sczone clnode -v | grep Node
    

Example 8–2 Mapping the Node ID to the Node Name

The following example shows the node ID assignments for a global cluster.


phys-schost# clnode show | grep Node
=== Cluster Nodes ===
Node Name:				phys-schost1
  Node ID:				1
Node Name: 				phys-schost2
  Node ID:				2
Node Name:				phys-schost3
  Node ID:				3

How to Work With New Cluster Node Authentication

Sun Cluster enables you to determine if new nodes can add themselves to the global cluster and the type of authentication to use. You can permit any new node to join the cluster over the public network, deny new nodes from joining the cluster, or indicate a specific node that can join the cluster. New nodes can be authenticated by using either standard UNIX or Diffie-Hellman (DES) authentication. If you select DES authentication, you must also configure all necessary encryption keys before a node can join. See the keyserv(1M) and publickey(4) man pages for more information.
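
Before you change these settings, you can display the current authentication policy and the list of machines that are currently permitted to join. The following commands are a sketch; see the claccess(1CL) man page for the exact subcommands that your release supports.


phys-schost# claccess show
phys-schost# claccess list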

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

  1. Become superuser on any node in the global cluster.

  2. Start the clsetup(1CL) utility.


    phys-schost# clsetup
    

    The Main Menu is displayed.

  3. To work with cluster authentication, type the number that corresponds to the option for new nodes.

    The New Nodes menu is displayed.

  4. Make your selection from the menu and follow the onscreen instructions.


Example 8–3 Preventing a New Machine From Being Added to the Global Cluster

The clsetup utility generates the claccess command. The following example shows the claccess command that prevents new machines from being added to the cluster.


phys-schost# claccess deny -h hostname


Example 8–4 Permitting All New Machines to Be Added to the Global Cluster

The clsetup utility generates the claccess command. The following example shows the claccess command that enables all new machines to be added to the cluster.


phys-schost# claccess allow-all


Example 8–5 Specifying a New Machine to Be Added to the Global Cluster

The clsetup utility generates the claccess command. The following example shows the claccess command that enables a single new machine to be added to the cluster.


phys-schost# claccess allow -h hostname


Example 8–6 Setting the Authentication to Standard UNIX

The clsetup utility generates the claccess command. The following example shows the claccess command that resets to standard UNIX authentication for new nodes that are joining the cluster.


phys-schost# claccess set -p protocol=sys


Example 8–7 Setting the Authentication to DES

The clsetup utility generates the claccess command. The following example shows the claccess command that uses DES authentication for new nodes that are joining the cluster.


phys-schost# claccess set -p protocol=des

When using DES authentication, you must also configure all necessary encryption keys before a node can join the cluster. For more information, see the keyserv(1M) and publickey(4) man pages.


How to Reset the Time of Day in a Cluster

Sun Cluster software uses the Network Time Protocol (NTP) to maintain time synchronization between cluster nodes. Adjustments in the global cluster occur automatically as needed when nodes synchronize their time. For more information, see the Sun Cluster Concepts Guide for Solaris OS and the Network Time Protocol User's Guide.
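
To verify that the cluster nodes are synchronizing, you can query the NTP daemon's peer status from any node. The ntpq utility is part of the bundled NTP software, although the exact output depends on your Solaris release.


phys-schost# ntpq -p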


Caution –

When using NTP, do not attempt to adjust the cluster time while the cluster is up and running. Do not adjust the time by using the date(1), rdate(1M), xntpd(1M), or svcadm(1M) commands interactively or within cron(1M) scripts.


The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

  1. Become superuser on any node in the global cluster.

  2. Shut down the global cluster.


    phys-schost# cluster shutdown -g0 -y -i 0
    
  3. Verify that the node is showing the ok prompt on a SPARC based system or the Press any key to continue message in the GRUB menu on an x86 based system.

  4. Boot the node in noncluster mode.

    • On SPARC based systems, run the following command.


      ok boot -x
      
    • On x86 based systems, run the following commands.


      # shutdown -g0 -y -i0
      
      Press any key to continue
    1. In the GRUB menu, use the arrow keys to select the appropriate Solaris entry and type e to edit its commands.

      The GRUB menu appears similar to the following:


      GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
      +-------------------------------------------------------------------------+
      | Solaris 10 /sol_10_x86                                                  |
      | Solaris failsafe                                                        |
      |                                                                         |
      +-------------------------------------------------------------------------+
      Use the ^ and v keys to select which entry is highlighted.
      Press enter to boot the selected OS, 'e' to edit the
      commands before booting, or 'c' for a command-line.

      For more information about GRUB based booting, see Booting an x86 Based System by Using GRUB (Task Map) in System Administration Guide: Basic Administration.

    2. In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.

      The GRUB boot parameters screen appears similar to the following:


      GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
      +----------------------------------------------------------------------+
      | root (hd0,0,a)                                                       |
      | kernel /platform/i86pc/multiboot                                     |
      | module /platform/i86pc/boot_archive                                  |
      +----------------------------------------------------------------------+
      Use the ^ and v keys to select which entry is highlighted.
      Press 'b' to boot, 'e' to edit the selected command in the
      boot sequence, 'c' for a command-line, 'o' to open a new line
      after ('O' for before) the selected line, 'd' to remove the
      selected line, or escape to go back to the main menu.
    3. Add -x to the command to specify that the system boot into noncluster mode.


      [ Minimal BASH-like line editing is supported. For the first word, TAB
      lists possible command completions. Anywhere else TAB lists the possible
      completions of a device/filename. ESC at any time exits. ]
      
      grub edit> kernel /platform/i86pc/multiboot -x
    4. Press the Enter key to accept the change and return to the boot parameters screen.

      The screen displays the edited command.


      GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
      +----------------------------------------------------------------------+
      | root (hd0,0,a)                                                       |
      | kernel /platform/i86pc/multiboot -x                                  |
      | module /platform/i86pc/boot_archive                                  |
      +----------------------------------------------------------------------+
      Use the ^ and v keys to select which entry is highlighted.
      Press 'b' to boot, 'e' to edit the selected command in the
      boot sequence, 'c' for a command-line, 'o' to open a new line
      after ('O' for before) the selected line, 'd' to remove the
      selected line, or escape to go back to the main menu.
    5. Type b to boot the node into noncluster mode.


      Note –

      This change to the kernel boot parameter command does not persist across system boots. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.


  5. On a single node, set the time of day by running the date command.


    phys-schost# date HHMM.SS
    
  6. On the other machines, synchronize the time to that node by running the rdate(1M) command.


    phys-schost# rdate hostname
    
  7. Boot each node to restart the cluster.


    phys-schost# reboot
    
  8. Verify that the change occurred on all cluster nodes.

    On each node, run the date command.


    phys-schost# date
    

SPARC: How to Display the OpenBoot PROM (OBP) on a Node

Use this procedure if you need to configure or change OpenBoot™ PROM settings.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

  1. Connect to the console on the node to be shut down.


    # telnet tc_name tc_port_number
    
    tc_name

    Specifies the name of the terminal concentrator.

    tc_port_number

    Specifies the port number on the terminal concentrator. Port numbers are configuration dependent. Typically, ports 2 and 3 (5002 and 5003) are used for the first cluster installed at a site.
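
    For example, to connect to the console that is attached to port 2 of a terminal concentrator named tc-schost (a hypothetical name), you would type:


    # telnet tc-schost 5002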

  2. Shut down the cluster node gracefully by using the clnode evacuate command, then the shutdown command. The clnode evacuate command switches over all device groups from the specified node to the next-preferred node. The command also switches all resource groups from the global cluster's specified voting or non-voting node to the next-preferred voting or non-voting node.


    phys-schost# clnode evacuate node
    # shutdown -g0 -y
    

    Caution –

    Do not use send brk on a cluster console to shut down a cluster node.


  3. Execute the OBP commands.

How to Change the Node Private Hostname

Use this procedure to change the private hostname of a cluster node after installation has been completed.

Default private hostnames are assigned during initial cluster installation. The default private hostname takes the form clusternode<nodeid>-priv, for example clusternode3-priv. Change a private hostname only if the name is already in use in the domain.
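
To display the private hostnames that are currently assigned, you can filter the output of the clnode show command, as in the following sketch:


phys-schost# clnode show | grep "private hostname"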


Caution –

Do not attempt to assign IP addresses to new private host names. The clustering software assigns them.


The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

  1. On all nodes in the cluster, disable any data service resources or other applications that might cache private hostnames.


    phys-schost# clresource disable resource[,...]
    

    Include the following in the applications you disable.

    • HA-DNS and HA-NFS services, if configured

    • Any application that has been custom-configured to use the private hostname

    • Any application that is being used by clients over the private interconnect

    For information about using the clresource command, see the clresource(1CL) man page and the Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

  2. If your NTP configuration file refers to the private hostname that you are changing, bring down the Network Time Protocol (NTP) daemon on each node of the cluster.

    • SPARC: If you are using Solaris 9 OS, use the xntpd command to shut down the Network Time Protocol (NTP) daemon. See the xntpd(1M) man page for more information about the NTP daemon.


      phys-schost# /etc/init.d/xntpd.cluster stop
      
    • If you are using Solaris 10 OS, use the svcadm command to shut down the Network Time Protocol (NTP) daemon. See the svcadm(1M) man page for more information about the NTP daemon.


      phys-schost# svcadm disable ntp
      
  3. Run the clsetup(1CL) utility to change the private hostname of the appropriate node.
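
    phys-schost# clsetup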

    Run the utility from only one of the nodes in the cluster.


    Note –

    When selecting a new private hostname, ensure that the name is unique to the cluster node.


  4. Type the number that corresponds to the option for the private hostname.

  5. Type the number that corresponds to the option for changing a private hostname.

    Answer the questions when prompted. You are asked the name of the node whose private hostname you are changing (clusternode<nodeid>-priv), and the new private hostname.

  6. Flush the name service cache.

    Perform this step on each node in the cluster. Flushing prevents the cluster applications and data services from trying to access the old private hostname.


    phys-schost# nscd -i hosts
    
  7. If you changed a private hostname in your NTP configuration file, update your NTP configuration file (ntp.conf or ntp.conf.cluster) on each node.

    1. Use the editing tool of your choice.

      If you perform this step at installation, also remember to remove names for nodes that are not configured. The default template is preconfigured with 16 nodes. Typically, the ntp.conf.cluster file is identical on each cluster node.

    2. Verify that you can successfully ping the new private hostname from all cluster nodes.

    3. Restart the NTP daemon.

      Perform this step on each node of the cluster.

      • SPARC: If you are using Solaris 9 OS, use the xntpd command to restart the NTP daemon.

        If you are using the ntp.conf.cluster file, type the following:


        # /etc/init.d/xntpd.cluster start
        

        If you are using the ntp.conf file, type the following:


        # /etc/init.d/xntpd start
        
      • If you are using Solaris 10 OS, use the svcadm command to restart the NTP daemon.


        # svcadm enable ntp
        
  8. Enable all data service resources and other applications that were disabled in Step 1.


    phys-schost# clresource enable resource[,...]
    

    For information about using the clresource command, see the clresource(1CL) man page and the Sun Cluster Data Services Planning and Administration Guide for Solaris OS.


Example 8–8 Changing the Private Hostname

The following example changes the private hostname from clusternode2-priv to clusternode4-priv on node phys-schost-2.


[Disable all applications and data services as necessary.]
phys-schost-1# /etc/init.d/xntpd stop
phys-schost-1# clnode show | grep node
 ...
 private hostname:                           clusternode1-priv
 private hostname:                           clusternode2-priv
 private hostname:                           clusternode3-priv
 ...
phys-schost-1# clsetup
phys-schost-1# nscd -i hosts
phys-schost-1# vi /etc/inet/ntp.conf
 ...
 peer clusternode1-priv
 peer clusternode4-priv
 peer clusternode3-priv
phys-schost-1# ping clusternode4-priv
phys-schost-1# /etc/init.d/xntpd start
[Enable all applications and data services disabled at the beginning of the procedure.]

How to Add a Private Hostname for a Non-Voting Node on a Global Cluster

Use this procedure to add a private hostname for a non-voting node on a global cluster after installation has been completed. In the procedures in this chapter, phys-schost# reflects a global-cluster prompt. Perform this procedure only on a global cluster.

  1. Run the clsetup(1CL) utility to add a private hostname on the appropriate zone.


    phys-schost# clsetup
    
  2. Type the number that corresponds to the option for private host names and press the Return key.

  3. Type the number that corresponds to the option for adding a zone private hostname and press the Return key.

    Answer the questions when prompted. There is no default for a global-cluster non-voting node private hostname. You will need to provide a hostname.

How to Change the Private Hostname on a Non-Voting Node on a Global Cluster

Use this procedure to change the private hostname of a non-voting node after installation has been completed.

Private hostnames are assigned during initial cluster installation. The private hostname takes the form clusternode<nodeid>-priv, for example clusternode3-priv. Change a private hostname only if the name is already in use in the domain.


Caution –

Do not attempt to assign IP addresses to new private hostnames. The clustering software assigns them.


The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

  1. On all nodes in the global cluster, disable any data service resources or other applications that might cache private host names.


    phys-schost# clresource disable resource1,resource2
    

    Include the following in the applications you disable.

    • HA-DNS and HA-NFS services, if configured

    • Any application that has been custom-configured to use the private hostname

    • Any application that is being used by clients over the private interconnect

    For information about using the clresource command, see the clresource(1CL) man page and the Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

  2. Run the clsetup(1CL) utility to change the private hostname of the appropriate non-voting node on the global cluster.


    phys-schost# clsetup
    

    You need to perform this step only from one of the nodes in the cluster.


    Note –

    When selecting a new private hostname, ensure that the name is unique to the cluster.


  3. Type the number that corresponds to the option for private hostnames and press the Return key.

  4. Type the number that corresponds to the option for changing a zone private hostname.

    Answer the questions when prompted. You are asked for the name of the non-voting node whose private hostname is being changed (clusternode<nodeid>-priv), and the new private hostname.

  5. Flush the name service cache.

    Perform this step on each node in the cluster. Flushing prevents the cluster applications and data services from trying to access the old private hostname.


    phys-schost# nscd -i hosts
    
  6. Enable all data service resources and other applications that were disabled in Step 1.
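
    phys-schost# clresource enable resource1,resource2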

How to Delete the Private Hostname for a Non-Voting Node on a Global Cluster

Use this procedure to delete a private hostname for a non-voting node on a global cluster. Perform this procedure only on a global cluster.

  1. Run the clsetup(1CL) utility to delete a private hostname on the appropriate zone.
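
    phys-schost# clsetup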

  2. Type the number that corresponds to the option for zone private hostname.

  3. Type the number that corresponds to the option for deleting a zone private hostname.

  4. Type the name of the non-voting node's private hostname that you are deleting.

How to Put a Node Into Maintenance State

Put a global-cluster node into maintenance state when taking the node out of service for an extended period of time. This way, the node does not contribute to the quorum count while it is being serviced. To put a node into maintenance state, the node must be shut down by using the clnode(1CL) evacuate and shutdown(1M) commands.


Note –

Use the Solaris shutdown command to shut down a single node. Use the cluster shutdown command only when shutting down an entire cluster.


When a cluster node is shut down and put in maintenance state, all quorum devices that are configured with ports to the node have their quorum vote counts decremented by one. The node and quorum device vote counts are incremented by one when the node is removed from maintenance mode and brought back online.
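
For example, consider a three-node cluster with one quorum device that has ports to all three nodes. The quorum device has the default N-1 = 2 votes, so the cluster has a total of 3 node votes + 2 device votes = 5. Putting one node into maintenance state removes that node's vote and decrements the quorum device's count by one, leaving 2 + 1 = 3 votes until the node is brought back online.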

Use the clquorum(1CL) disable command to put a cluster node into maintenance state.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

  1. Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization on the global-cluster node that you are putting into maintenance state.

  2. Evacuate any resource groups and device groups from the node. The clnode evacuate command switches over all resource groups and device groups, including those on all non-voting nodes, from the specified node to the next-preferred node.


    phys-schost# clnode evacuate node
    
  3. Shut down the node that you evacuated.


    phys-schost# shutdown -g0 -y -i 0
    
  4. Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization on another node in the cluster and put the node that you shut down in Step 3 in maintenance state.


    phys-schost# clquorum disable node
    
    node

    Specifies the name of a node that you want to put into maintenance mode.

  5. Verify that the global-cluster node is now in maintenance state.


    phys-schost# clquorum status node
    

    The node that you put into maintenance state should have a Status of offline and 0 (zero) for Present and Possible quorum votes.


Example 8–9 Putting a Global-Cluster Node Into Maintenance State

The following example puts a cluster node into maintenance state and verifies the results. The clquorum status output shows the Node votes for phys-schost-1 to be 0 (zero) and the status to be Offline. The Quorum Summary should also show reduced vote counts. Depending on your configuration, the Quorum Votes by Device output might indicate that some quorum disk devices are offline as well.


[On the node to be put into maintenance state:]
phys-schost-1# clnode evacuate phys-schost-1
phys-schost-1# shutdown -g0 -y -i0

[On another node in the cluster:]
phys-schost-2# clquorum disable phys-schost-1
phys-schost-2# clquorum status phys-schost-1

-- Quorum Votes by Node --

Node Name           Present       Possible       Status
---------           -------       --------       ------
phys-schost-1       0             0              Offline
phys-schost-2       1             1              Online
phys-schost-3       1             1              Online

See Also

To bring a node back online, see How to Bring a Node Out of Maintenance State.

How to Bring a Node Out of Maintenance State

Use the following procedure to bring a global-cluster node back online and reset the quorum vote count to the default. For cluster nodes, the default quorum count is one. For quorum devices, the default quorum count is N-1, where N is the number of nodes with nonzero vote counts that have ports to the quorum device.

When a node has been put in maintenance state, the node's quorum vote count is decremented by one. All quorum devices that are configured with ports to the node also have their quorum vote counts decremented. When the quorum vote count is reset and a node is removed from maintenance state, both the node's quorum vote count and the quorum device vote count are incremented by one.

Run this procedure any time a global-cluster node has been put in maintenance state and you are removing it from maintenance state.


Caution – Caution –

If you do not specify either the globaldev or node options, the quorum count is reset for the entire cluster.


The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

  1. Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization on any node of the global cluster other than the one in maintenance state.

  2. Depending on the number of nodes that you have in your global cluster configuration, perform one of the following steps:

    • If you have two nodes in your cluster configuration, go to Step 4.

    • If you have more than two nodes in your cluster configuration, go to Step 3.

  3. If the node that you are removing from maintenance state has quorum devices configured with ports to it, reset the cluster quorum count from a node other than the one in maintenance state.

    You must reset the quorum count from a node other than the node in maintenance state before rebooting the node, or the node might hang while waiting for quorum.


    phys-schost# clquorum reset
    
    reset

    The change flag that resets quorum.

  4. Boot the node that you are removing from maintenance state.

  5. Verify the quorum vote count.


    phys-schost# clquorum status
    

    The node that you removed from maintenance state should have a status of online and show the appropriate vote count for Present and Possible quorum votes.


Example 8–10 Removing a Cluster Node From Maintenance State and Resetting the Quorum Vote Count

The following example resets the quorum count for a cluster node and its quorum devices to their defaults and verifies the result. The clquorum status output shows the Node votes for phys-schost-1 to be 1 and the status to be online. The Quorum Summary should also show an increase in vote counts.


phys-schost-2# clquorum reset

phys-schost-1# clquorum status

--- Quorum Votes Summary ---

            Needed   Present   Possible
            ------   -------   --------
            4        6         6


--- Quorum Votes by Node ---

Node Name        Present       Possible      Status
---------        -------       --------      ------
phys-schost-1    1             1             Online
phys-schost-2    1             1             Online
phys-schost-3    1             1             Online


--- Quorum Votes by Device ---

Device Name           Present      Possible      Status
-----------           -------      --------      ------
/dev/did/rdsk/d3s2    1            1             Online
/dev/did/rdsk/d17s2   1            1             Online
/dev/did/rdsk/d31s2   1            1             Online