Sun Cluster System Administration Guide for Solaris OS

Removing a Node on a Global Cluster or a Zone Cluster

This section provides instructions on how to remove a node on a global cluster or a zone cluster. You can also remove a specific zone cluster from a global cluster. The following table lists the tasks to perform to remove a node from an existing cluster. Perform the tasks in the order shown.


Caution –

If you use only this procedure to remove a node from a RAC configuration, the removal might cause the node to panic during a reboot. For instructions on how to remove a node from a RAC configuration, see How to Remove Sun Cluster Support for Oracle RAC From Selected Nodes in Sun Cluster Data Service for Oracle RAC Guide for Solaris OS. After you complete that process, follow the appropriate steps below.


Table 8–4 Task Map: Removing a Node

Task: Move all resource groups and device groups off the node to be removed.
Instructions: clnode evacuate node

Task: Verify that the node can be removed by checking the allowed hosts. If the node cannot be removed, give the node access to the cluster configuration.
Instructions: claccess show node
              claccess allow -h node-to-remove

Task: Remove the node from all device groups.
Instructions: How to Remove a Node From a Device Group (Solaris Volume Manager)

Task: Remove all quorum devices connected to the node being removed. This step is optional if you are removing a node from a two-node cluster. Note that although you must remove the quorum device before you remove the storage device in the next step, you can add the quorum device back immediately afterward.
Instructions: How to Remove a Quorum Device
              How to Remove the Last Quorum Device From a Cluster

Task: Put the node being removed into noncluster mode.
Instructions: How to Put a Node Into Maintenance State

Task: Remove a node from a zone cluster.
Instructions: How to Remove a Node From a Zone Cluster

Task: Remove a node from the cluster software configuration.
Instructions: How to Remove a Node From the Cluster Software Configuration

Task: (Optional) Uninstall Sun Cluster software from a cluster node.
Instructions: How to Uninstall Sun Cluster Software From a Cluster Node

Task: Remove an entire zone cluster.
Instructions: How to Remove a Zone Cluster

Procedure: How to Remove a Node From a Zone Cluster

You can remove a node from a zone cluster by halting the node, uninstalling it, and removing the node from the configuration. If you decide later to add the node back into the zone cluster, follow the instructions in Adding a Node. Most of these steps are performed from the global-cluster node.

  1. Become superuser on a node of the global cluster.

  2. Shut down the zone-cluster node you want to remove by specifying the node and its zone cluster.


    phys-schost# clzonecluster halt -n node zoneclustername
    

    You can also use the clnode evacuate and shutdown commands within a zone cluster.
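
    For example, a minimal sketch of that alternative, run from inside the zone-cluster node being removed (the node name zc-node-2 is hypothetical):

    zc-node-2# clnode evacuate zc-node-2
    zc-node-2# shutdown -g0 -y -i0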

  3. Uninstall the zone-cluster node.


    phys-schost# clzonecluster uninstall -n node zoneclustername
    
  4. Remove the zone-cluster node from the configuration.

    Use the following commands:


    phys-schost# clzonecluster configure zoneclustername
    

    clzc:zoneclustername> remove node physical-host=zoneclusternodename
    
  5. Verify that the node was removed from the zone cluster.


    phys-schost# clzonecluster status
    

Procedure: How to Remove a Node From the Cluster Software Configuration

Perform this procedure to remove a node from the global cluster.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

  1. Ensure that you have removed the node from all resource groups, device groups, and quorum device configurations and put it into maintenance state before you continue with this procedure.
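
    To check before you continue, you can run the following status commands from an active cluster node. This is a hedged sketch: the clquorum status subcommand is assumed from the naming of the other object-oriented commands, and the node name phys-schost-2 is hypothetical.

    phys-schost# clresourcegroup status
    phys-schost# cldevicegroup status
    phys-schost# clquorum status
    phys-schost# clnode status phys-schost-2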

  2. Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization on the node that you want to remove. Perform all steps in this procedure from a node of the global cluster.

  3. Boot the global-cluster node that you want to remove into noncluster mode. For a zone-cluster node, follow the instructions in How to Remove a Node From a Zone Cluster before you perform this step.

    • On SPARC based systems, run the following command.


      ok boot -x
      
    • On x86 based systems, run the following commands.


      shutdown -g0 -y -i0
      
      Press any key to continue
    1. In the GRUB menu, use the arrow keys to select the appropriate Solaris entry and type e to edit its commands.

      The GRUB menu appears similar to the following:


      GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
      +-------------------------------------------------------------------------+
      | Solaris 10 /sol_10_x86                                                  |
      | Solaris failsafe                                                        |
      |                                                                         |
      +-------------------------------------------------------------------------+
      Use the ^ and v keys to select which entry is highlighted.
      Press enter to boot the selected OS, 'e' to edit the
      commands before booting, or 'c' for a command-line.

      For more information about GRUB based booting, see Booting an x86 Based System by Using GRUB (Task Map) in System Administration Guide: Basic Administration.

    2. In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.

      The GRUB boot parameters screen appears similar to the following:


      GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
      +----------------------------------------------------------------------+
      | root (hd0,0,a)                                                       |
      | kernel /platform/i86pc/multiboot                                     |
      | module /platform/i86pc/boot_archive                                  |
      +----------------------------------------------------------------------+
      Use the ^ and v keys to select which entry is highlighted.
      Press 'b' to boot, 'e' to edit the selected command in the
      boot sequence, 'c' for a command-line, 'o' to open a new line
      after ('O' for before) the selected line, 'd' to remove the
      selected line, or escape to go back to the main menu.
    3. Add -x to the command to specify that the system boot into noncluster mode.


      [ Minimal BASH-like line editing is supported. For the first word, TAB
      lists possible command completions. Anywhere else TAB lists the possible
      completions of a device/filename. ESC at any time exits. ]
      
      grub edit> kernel /platform/i86pc/multiboot -x
    4. Press the Enter key to accept the change and return to the boot parameters screen.

      The screen displays the edited command.


      GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
      +----------------------------------------------------------------------+
      | root (hd0,0,a)                                                       |
      | kernel /platform/i86pc/multiboot -x                                  |
      | module /platform/i86pc/boot_archive                                  |
      +----------------------------------------------------------------------+
      Use the ^ and v keys to select which entry is highlighted.
      Press 'b' to boot, 'e' to edit the selected command in the
      boot sequence, 'c' for a command-line, 'o' to open a new line
      after ('O' for before) the selected line, 'd' to remove the
      selected line, or escape to go back to the main menu.
    5. Type b to boot the node into noncluster mode.

      This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.


      Note –

      If the node to be removed is not available or can no longer be booted, run the following command on any active cluster node: clnode clear -F <node-to-be-removed>. Verify the node removal by running clnode status <nodename>.


  4. From the node you want to remove, delete the node from the cluster.


    phys-schost# clnode remove -F
    

    If the clnode remove command fails and a stale node reference exists, run clnode clear -F nodename on an active node.


    Note –

    If you are removing the last node in the cluster, the node must be in noncluster mode with no active nodes left in the cluster.


  5. From another cluster node, verify the node removal.


    phys-schost# clnode status nodename
    
  6. Complete the node removal.


Example 8–12 Removing a Node From the Cluster Software Configuration

This example shows how to remove a node (phys-schost-2) from a cluster. The clnode remove command is run in noncluster mode from the node you want to remove from the cluster (phys-schost-2).


[Remove the node from the cluster:]
phys-schost-2# clnode remove
phys-schost-1# clnode clear -F phys-schost-2
[Verify node removal:]
phys-schost-1# clnode status
-- Cluster Nodes --
                    Node name           Status
                    ---------           ------
  Cluster node:     phys-schost-1       Online

See Also

To uninstall Sun Cluster software from the removed node, see How to Uninstall Sun Cluster Software From a Cluster Node.

For hardware procedures, see the Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

For a comprehensive list of tasks for removing a cluster node, see Table 8–4.

To add a node to an existing cluster, see How to Add a Node to the Authorized Node List.

Procedure: How to Remove Connectivity Between an Array and a Single Node, in a Cluster With Greater Than Two-Node Connectivity

Use this procedure to detach a storage array from a single cluster node, in a cluster that has three-node or four-node connectivity.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

  1. Back up all database tables, data services, and volumes that are associated with the storage array that you are removing.

  2. Determine the resource groups and device groups that are running on the node to be disconnected.


    phys-schost# clresourcegroup status
    phys-schost# cldevicegroup status
    
  3. If necessary, move all resource groups and device groups off the node to be disconnected.


    Caution (SPARC only) –

    If your cluster is running Oracle RAC software, shut down the Oracle RAC database instance that is running on the node before you move the groups off the node. For instructions, see the Oracle Database Administration Guide.



    phys-schost# clnode evacuate node
    

    The clnode evacuate command switches over all device groups from the specified node to the next-preferred node. The command also switches all resource groups from voting or non-voting nodes on the specified node to the next-preferred voting or non-voting node.

  4. Put the device groups into maintenance state.

    For the procedure on acquiescing I/O activity to Veritas shared disk groups, see your VxVM documentation.

    For the procedure on putting a device group in maintenance state, see How to Put a Node Into Maintenance State.

  5. Remove the node from the device groups.

    • If you use VxVM or a raw disk, use the cldevicegroup(1CL) command to remove the device groups.

    • If you use Solstice DiskSuite, use the metaset command to remove the device groups.
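
    For example, the following sketch removes a hypothetical node phys-schost-3 from a VxVM or raw-disk device group named dg1 and from a Solstice DiskSuite disk set named apachedg; the group and set names are illustrative, and the exact options depend on your configuration.

    phys-schost# cldevicegroup remove-node -n phys-schost-3 dg1
    phys-schost# metaset -s apachedg -d -h phys-schost-3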

  6. For each resource group that contains an HAStoragePlus resource, remove the node from the resource group's node list.


    phys-schost# clresourcegroup remove-node -z zone -n node + | resourcegroup
    
    node

    The name of the node.

    zone

    The name of the non-voting node that can master the resource group. Specify zone only if you specified a non-voting node when you created the resource group.

    See the Sun Cluster Data Services Planning and Administration Guide for Solaris OS for more information about changing a resource group's node list.


    Note –

    Resource type, resource group, and resource property names are case sensitive when clresourcegroup is executed.
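
    For example, a sketch that removes a hypothetical voting node phys-schost-3 from the node list of a hypothetical resource group hastp-rg (the -z option is omitted because no non-voting node is involved):

    phys-schost# clresourcegroup remove-node -n phys-schost-3 hastp-rg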


  7. If the storage array that you are removing is the last storage array that is connected to the node, disconnect the fiber-optic cable between the node and the hub or switch that is connected to this storage array (otherwise, skip this step).

  8. If you are removing the host adapter from the node that you are disconnecting, shut down and power off the node. If you are not removing the host adapter, skip to Step 11.

  9. Remove the host adapter from the node.

    For the procedure on removing host adapters, see the documentation for the node.

  10. Without booting the node, power on the node.

  11. If Oracle RAC software has been installed, remove the Oracle RAC software package from the node that you are disconnecting.


    phys-schost# pkgrm SUNWscucm 
    

    Caution (SPARC only) –

    If you do not remove the Oracle RAC software from the node that you disconnected, the node panics when the node is reintroduced to the cluster and potentially causes a loss of data availability.


  12. Boot the node in cluster mode.

    • On SPARC based systems, run the following command.


      ok boot
      
    • On x86 based systems, run the following commands.

      When the GRUB menu is displayed, select the appropriate Solaris entry and press Enter. The GRUB menu appears similar to the following:


      GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
      +-------------------------------------------------------------------------+
      | Solaris 10 /sol_10_x86                                                  |
      | Solaris failsafe                                                        |
      |                                                                         |
      +-------------------------------------------------------------------------+
      Use the ^ and v keys to select which entry is highlighted.
      Press enter to boot the selected OS, 'e' to edit the
      commands before booting, or 'c' for a command-line.
  13. On the node, update the device namespace by updating the /devices and /dev entries.


    phys-schost# devfsadm -C
    phys-schost# cldevice refresh
    
  14. Bring the device groups back online.

    For procedures about bringing a Veritas shared disk group online, see your Veritas Volume Manager documentation.

    For information about bringing a device group online, see How to Bring a Node Out of Maintenance State.

Procedure: How to Remove a Zone Cluster

You can delete a specific zone cluster or use a wildcard to remove all zone clusters that are configured on the global cluster. The zone cluster must be configured before you remove it.
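
For example, a sketch of the wildcard form, assuming that the + operand selects all configured zone clusters as it does for other Sun Cluster object-oriented commands (that usage is an assumption, not confirmed by this procedure), and after all zone-cluster resource groups have already been deleted as described in Step 2:

phys-schost# clzonecluster halt +
phys-schost# clzonecluster uninstall +
phys-schost# clzonecluster delete +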

  1. Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization on a node of the global cluster. Perform all steps in this procedure from a node of the global cluster.

  2. Delete all resource groups and their resources from the zone cluster.


    phys-schost# clresourcegroup delete -F -Z zoneclustername +
    

    Note –

    This step is performed from a global-cluster node. To perform this step from a node of the zone cluster instead, log into the zone-cluster node and omit -Z zonecluster from the command.
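
    For example, a sketch of that zone-cluster-node form (the zone-cluster node name zc-node-1 is hypothetical):

    zc-node-1# clresourcegroup delete -F +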


  3. Halt the zone cluster.


    phys-schost# clzonecluster halt zoneclustername
    
  4. Uninstall the zone cluster.


    phys-schost# clzonecluster uninstall zoneclustername
    
  5. Unconfigure the zone cluster.


    phys-schost# clzonecluster delete zoneclustername
    

Example 8–13 Removing a Zone Cluster From a Global Cluster


phys-schost# clresourcegroup delete -F -Z sczone +

phys-schost# clzonecluster halt sczone

phys-schost# clzonecluster uninstall sczone

phys-schost# clzonecluster delete sczone

Procedure: How to Remove a File System From a Zone Cluster

Perform this procedure to remove a file system from a zone cluster. Supported file system types in a zone cluster include UFS, VxFS, stand-alone QFS, shared QFS, ZFS (exported as a data set), and loopback file systems. For instructions on adding a file system to a zone cluster, see Adding File Systems to a Zone Cluster in Sun Cluster Software Installation Guide for Solaris OS.

The phys-schost# prompt reflects a global-cluster prompt. This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

  1. Become superuser on a node of the global cluster that hosts the zone cluster. Some steps in this procedure are performed from a node of the global cluster. Other steps are performed from a node of the zone cluster.

  2. Delete the resources related to the file system being removed.

    1. Identify and remove the Sun Cluster resource types, such as HAStoragePlus and SUNW.ScalMountPoint, that are configured for the zone cluster's file system that you are removing.


      phys-schost# clresource delete -F -Z zoneclustername fs_zone_resources
      
    2. If applicable, identify and remove the Sun Cluster resources of type SUNW.qfs that are configured in the global cluster for the file system that you are removing.


      phys-schost# clresource delete -F fs_global_resources
      

      Use the -F option carefully because it forces the deletion of all the resources you specify, even if you did not disable them first. All the resources you specified are removed from the resource-dependency settings of other resources, which can cause a loss of service in the cluster. Dependent resources that are not deleted can be left in an invalid state or in an error state. For more information, see the clresource(1CL) man page.
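
      If you prefer not to force the deletion, a more conservative sequence is to disable the resource first and then delete it. This is a sketch only; the resource name qfs-rs is hypothetical.

      phys-schost# clresource disable qfs-rs
      phys-schost# clresource delete qfs-rs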


    Tip –

    If the resource group for the removed resource later becomes empty, you can safely delete the resource group.
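
    For example, a sketch that deletes a hypothetical, now-empty resource group named hasp-rg:

    phys-schost# clresourcegroup delete hasp-rg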


  3. Determine the path to the file-system mount point directory. For example:


    phys-schost# clzonecluster configure zoneclustername
    
  4. Remove the file system from the zone-cluster configuration.


    phys-schost# clzonecluster configure zoneclustername
    

    clzc:zoneclustername> remove fs dir=filesystemdirectory
    

    clzc:zoneclustername> commit
    

    The file system mount point is specified by dir=.

  5. Verify the removal of the file system.


    phys-schost# clzonecluster show -v zoneclustername
    

Example 8–14 Removing a Highly Available File System in a Zone Cluster

This example shows how to remove a file system with a mount-point directory (/local/ufs-1) that is configured in a zone cluster called sczone. The resource is hasp-rs and is of the type HAStoragePlus.


phys-schost# clzonecluster show -v sczone
...
 Resource Name:                           fs
   dir:                                     /local/ufs-1
   special:                                 /dev/md/ds1/dsk/d0
   raw:                                     /dev/md/ds1/rdsk/d0
   type:                                    ufs
   options:                                 [logging]
 ...
phys-schost# clresource delete -F -Z sczone hasp-rs
phys-schost# clzonecluster configure sczone
clzc:sczone> remove fs dir=/local/ufs-1
clzc:sczone> commit
phys-schost# clzonecluster show -v sczone


Example 8–15 Removing a Highly Available ZFS File System in a Zone Cluster

This example shows how to remove a ZFS file system in a ZFS pool called HAzpool, which is configured in the sczone zone cluster in the resource hasp-rs of type SUNW.HAStoragePlus.


phys-schost# clzonecluster show -v sczone
...
 Resource Name:                           dataset
   name:                                     HAzpool
...
phys-schost# clresource delete -F -Z sczone hasp-rs
phys-schost# clzonecluster configure sczone
clzc:sczone> remove dataset name=HAzpool
clzc:sczone> commit
phys-schost# clzonecluster show -v sczone


Example 8–16 Removing a Shared QFS File System in a Zone Cluster

This example shows how to remove a configured shared file system with a mount-point directory of /db_qfs/Data. The file system has the following characteristics:


phys-schost# clzonecluster show -v sczone
...
 Resource Name:                           fs
   dir:                                     /db_qfs/Data
   special:                                 Data
   type:                                    samfs
...
phys-schost# clresource delete -F -Z sczone scal-Data-rs
phys-schost# clresource delete -F Data-rs
phys-schost# clzonecluster configure sczone
clzc:sczone> remove fs dir=/db_qfs/Data
clzc:sczone> commit
phys-schost# clzonecluster show -v sczone

Procedure: How to Remove a Storage Device From a Zone Cluster

You can remove storage devices, such as SVM disk sets and DID devices, from a zone cluster. Perform this procedure to remove a storage device from a zone cluster.

  1. Become superuser on a node of the global cluster that hosts the zone cluster. Some steps in this procedure are performed from a node of the global cluster. Other steps can be performed from a node of the zone cluster.

  2. Delete the resources related to the devices being removed. Identify and remove the Sun Cluster resource types, such as SUNW.HAStoragePlus and SUNW.ScalDeviceGroup, that are configured for the zone cluster's devices that you are removing.


    phys-schost# clresource delete -F -Z zoneclustername dev_zone_resources
    
  3. Determine the match entry for the devices to be removed.


    phys-schost# clzonecluster show -v zoneclustername
    ...
     Resource Name:       device
        match:              <device_match>
     ...
  4. Remove the devices from the zone-cluster configuration.


    phys-schost# clzonecluster configure zoneclustername
    clzc:zoneclustername> remove device match=<device_match>
    clzc:zoneclustername> commit
    clzc:zoneclustername> end
    
  5. Reboot the zone cluster.


    phys-schost# clzonecluster reboot zoneclustername
    
  6. Verify the removal of the devices.


    phys-schost# clzonecluster show -v zoneclustername
    

Example 8–17 Removing an SVM Disk Set From a Zone Cluster

This example shows how to remove an SVM disk set called apachedg configured in a zone cluster called sczone. The set number of the apachedg disk set is 3. The devices are used by the zc_rs resource that is configured in the cluster.


phys-schost# clzonecluster show -v sczone
...
  Resource Name:      device
     match:             /dev/md/apachedg/*dsk/*
  Resource Name:      device
     match:             /dev/md/shared/3/*dsk/*
...
phys-schost# clresource delete -F -Z sczone zc_rs

phys-schost# ls -l /dev/md/apachedg
lrwxrwxrwx 1 root root 8 Jul 22 23:11 /dev/md/apachedg -> shared/3
phys-schost# clzonecluster configure sczone
clzc:sczone> remove device match=/dev/md/apachedg/*dsk/*
clzc:sczone> remove device match=/dev/md/shared/3/*dsk/*
clzc:sczone> commit
clzc:sczone> end
phys-schost# clzonecluster reboot sczone
phys-schost# clzonecluster show -v sczone


Example 8–18 Removing a DID Device From a Zone Cluster

This example shows how to remove DID devices d10 and d11, which are configured in a zone cluster called sczone. The devices are used by the zc_rs resource that is configured in the cluster.


phys-schost# clzonecluster show -v sczone
...
 Resource Name:       device
     match:             /dev/did/*dsk/d10*
 Resource Name:       device
    match:              /dev/did/*dsk/d11*
...
phys-schost# clresource delete -F -Z sczone zc_rs
phys-schost# clzonecluster configure sczone
clzc:sczone> remove device match=/dev/did/*dsk/d10*
clzc:sczone> remove device match=/dev/did/*dsk/d11*
clzc:sczone> commit
clzc:sczone> end
phys-schost# clzonecluster reboot sczone
phys-schost# clzonecluster show -v sczone

Procedure: How to Uninstall Sun Cluster Software From a Cluster Node

Perform this procedure to uninstall Sun Cluster software from a global-cluster node before you disconnect it from a fully established cluster configuration. You can use this procedure to uninstall software from the last remaining node of a cluster.


Note –

To uninstall Sun Cluster software from a node that has not yet joined the cluster or is still in installation mode, do not perform this procedure. Instead, go to “How to Uninstall Sun Cluster Software to Correct Installation Problems” in the Sun Cluster Software Installation Guide for Solaris OS.


The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

  1. Ensure that you have correctly completed all prerequisite tasks in the task map to remove a cluster node.

    See Table 8–4.


    Note –

    Ensure that you have removed the node from the cluster configuration by using clnode remove before you continue with this procedure.


  2. Become superuser on an active member of the global cluster other than the global-cluster node that you are uninstalling. Perform this procedure from a global-cluster node.

  3. From the active cluster member, add the node that you intend to uninstall to the cluster's node authentication list.


    phys-schost# claccess allow -h hostname
    
    -h

    Specifies the name of the node to be added to the cluster's node authentication list.

    Alternately, you can use the clsetup(1CL) utility. See How to Add a Node to the Authorized Node List for procedures.

  4. Become superuser on the node to uninstall.

  5. If you have a zone cluster, uninstall it.


    phys-schost# clzonecluster uninstall -F zoneclustername
    

    For specific steps, see How to Remove a Zone Cluster.

  6. Reboot the global-cluster node into noncluster mode.

    • On a SPARC based system, run the following command.


      # shutdown -g0 -y -i0
      ok boot -x
      
    • On an x86 based system, run the following commands.


      # shutdown -g0 -y -i0
      ...
                            <<< Current Boot Parameters >>>
      Boot path: /pci@0,0/pci8086,2545@3/pci8086,1460@1d/pci8086,341a@7,1/
      sd@0,0:a
      Boot args:
      
      Type    b [file-name] [boot-flags] <ENTER>  to boot with options
      or      i <ENTER>                           to enter boot interpreter
      or      <ENTER>                             to boot with defaults
      
                        <<< timeout in 5 seconds >>>
      Select (b)oot or (i)nterpreter: b -x
      
  7. In the /etc/vfstab file, remove all globally mounted file-system entries except the /global/.devices global mounts.
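
    For example, in the following hedged sketch of /etc/vfstab content, the /global/dg1 entry would be removed and the /global/.devices/node@2 entry would be kept; the device paths and node ID are illustrative only.

    /dev/md/ds1/dsk/d0  /dev/md/ds1/rdsk/d0  /global/dg1              ufs  2  yes  global,logging
    /dev/did/dsk/d2s5   /dev/did/rdsk/d2s5   /global/.devices/node@2  ufs  2  no   global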

  8. If you intend to reinstall Sun Cluster software on this node, remove the Sun Cluster entry from the Sun Java Enterprise System (Java ES) product registry.

    If the Java ES product registry contains a record that Sun Cluster software was installed, the Java ES installer shows the Sun Cluster component grayed out and does not permit reinstallation.

    1. Start the Java ES uninstaller.

      Run the following command, where ver is the version of the Java ES distribution from which you installed Sun Cluster software.


      # /var/sadm/prod/SUNWentsysver/uninstall
      
    2. Follow the prompts to select Sun Cluster to uninstall.

      For more information about using the uninstall command, see Chapter 8, Uninstalling, in Sun Java Enterprise System 5 Installation Guide for UNIX.

  9. If you do not intend to reinstall the Sun Cluster software on this cluster, disconnect the transport cables and the transport switch, if any, from the other cluster devices.

    1. If the uninstalled node is connected to a storage device that uses a parallel SCSI interface, install a SCSI terminator to the open SCSI connector of the storage device after you disconnect the transport cables.

      If the uninstalled node is connected to a storage device that uses Fibre Channel interfaces, no termination is necessary.

    2. Follow the documentation that shipped with your host adapter and server for disconnection procedures.

Procedure: How to Correct Error Messages

To correct any error messages that occurred while you attempted to perform any of the cluster-node removal procedures, perform the following procedure.

  1. Attempt to rejoin the node to the global cluster. Perform this procedure only on a global cluster.


    phys-schost# boot
    
  2. Did the node successfully rejoin the cluster?

    • If no, proceed to Step 3.

    • If yes, perform the following steps to remove the node from device groups.

    1. Remove the node from the remaining device group or groups.

      Follow procedures in How to Remove a Node From All Device Groups.

    2. After you remove the node from all device groups, return to How to Uninstall Sun Cluster Software From a Cluster Node and repeat the procedure.

  3. If the node could not rejoin the cluster, rename the node's /etc/cluster/ccr file to any other name you choose, for example, ccr.old.


    # mv /etc/cluster/ccr /etc/cluster/ccr.old
    
  4. Return to How to Uninstall Sun Cluster Software From a Cluster Node and repeat the procedure.

Troubleshooting a Node Uninstallation

This section describes error messages that you might receive when you run the scinstall -r command and the corrective actions to take.

Unremoved Cluster File System Entries

The following error messages indicate that the global-cluster node you removed still has cluster file systems referenced in its vfstab file.


Verifying that no unexpected global mounts remain in /etc/vfstab ... failed
scinstall:  global-mount1 is still configured as a global mount.
scinstall:  /global/dg1 is still configured as a global mount.
 
scinstall:  It is not safe to uninstall with these outstanding errors.
scinstall:  Refer to the documentation for complete uninstall instructions.
scinstall:  Uninstall failed.

To correct this error, return to How to Uninstall Sun Cluster Software From a Cluster Node and repeat the procedure. Ensure that you successfully complete Step 7 in the procedure before you rerun the clnode remove command.

Unremoved Listing in Device Groups

The following error messages indicate that the node you removed is still listed with a device group.


Verifying that no device services still reference this node ... failed
scinstall:  This node is still configured to host device service "service".
scinstall:  This node is still configured to host device service "service2".
scinstall:  This node is still configured to host device service "service3".
scinstall:  This node is still configured to host device service "dg1".
 
scinstall:  It is not safe to uninstall with these outstanding errors.          
scinstall:  Refer to the documentation for complete uninstall instructions.
scinstall:  Uninstall failed.