Sun Cluster System Administration Guide for Solaris OS

Troubleshooting

This section contains a troubleshooting procedure that you can use for testing purposes.

Procedure: How to Take a Solaris Volume Manager Metaset From Nodes Booted in Noncluster Mode

Use this procedure to run an application outside the global cluster for testing purposes.

  1. Determine whether the quorum device is part of the Solaris Volume Manager metaset, and whether that quorum device uses SCSI2 or SCSI3 reservations.


    phys-schost# clquorum show
    
    1. If the quorum device is in the Solaris Volume Manager metaset, add a new quorum device that is not part of the metaset that you will take later in noncluster mode. A combined example follows these substeps.


      phys-schost# clquorum add did
      
    2. Remove the old quorum device.


      phys-schost# clquorum remove did
      
    3. If the quorum device uses a SCSI2 reservation, scrub the SCSI2 reservation from the old quorum and verify that no SCSI2 reservations remain.


      phys-schost# /usr/cluster/lib/sc/pgre -c pgre_scrub -d /dev/did/rdsk/dids2
      phys-schost# /usr/cluster/lib/sc/pgre -c pgre_inkeys -d /dev/did/rdsk/dids2
      
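    For example, assuming a hypothetical old quorum device d3 that is inside the metaset and a hypothetical replacement device d20 that is outside it, the sequence might look like the following. The device names are placeholders, not values from your configuration.

    phys-schost# clquorum add d20
    phys-schost# clquorum remove d3
    phys-schost# /usr/cluster/lib/sc/pgre -c pgre_scrub -d /dev/did/rdsk/d3s2
    phys-schost# /usr/cluster/lib/sc/pgre -c pgre_inkeys -d /dev/did/rdsk/d3s2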
  2. Evacuate the global-cluster node that you want to boot in noncluster mode.


    phys-schost# clresourcegroup evacuate -n targetnode
    
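    For example, if the node that you plan to boot in noncluster mode is hypothetically named phys-schost-1:

    phys-schost# clresourcegroup evacuate -n phys-schost-1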
  3. Take offline any resource groups that contain HAStorage or HAStoragePlus resources and that control devices or file systems affected by the metaset that you want to take later in noncluster mode.


    phys-schost# clresourcegroup offline resourcegroupname
    
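    For example, assuming a hypothetical resource group apache-rg that contains an HAStoragePlus resource for the metaset:

    phys-schost# clresourcegroup offline apache-rg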
  4. Disable all the resources in the resource groups that you took offline.


    phys-schost# clresource disable resourcename
    
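    For example, assuming hypothetical resources apache-rs and apache-stor-rs in the resource group that you took offline:

    phys-schost# clresource disable apache-rs
    phys-schost# clresource disable apache-stor-rs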
  5. Unmanage the resource groups.


    phys-schost# clresourcegroup unmanage resourcegroupname
    
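    Continuing the hypothetical example from the previous steps:

    phys-schost# clresourcegroup unmanage apache-rg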
  6. Take offline the corresponding device group or device groups.


    phys-schost# cldevicegroup offline devicegroupname
    
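    For example, assuming a hypothetical device group apachedg that corresponds to the metaset:

    phys-schost# cldevicegroup offline apachedg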
  7. Disable the device group or device groups.


    phys-schost# cldevicegroup disable devicegroupname
    
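    Continuing the hypothetical example:

    phys-schost# cldevicegroup disable apachedg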
  8. Boot the evacuated node in noncluster mode.


    phys-schost# reboot -x
    
  9. Verify that the boot process has completed on the evacuated node before proceeding.

    • Solaris 9

      The login prompt appears only after the boot process has been completed, so no action is required.

    • Solaris 10


      phys-schost# svcs -x
      
  10. Determine if any SCSI3 reservations exist on the disks in the metasets. Run the following command on all disks in the metasets.


    phys-schost# /usr/cluster/lib/sc/scsi -c inkeys -d /dev/did/rdsk/dids2
    
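    For example, assuming the metaset contains hypothetical DID devices d21 and d22:

    phys-schost# /usr/cluster/lib/sc/scsi -c inkeys -d /dev/did/rdsk/d21s2
    phys-schost# /usr/cluster/lib/sc/scsi -c inkeys -d /dev/did/rdsk/d22s2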
  11. If any SCSI3 reservations exist on the disks, scrub them.


    phys-schost# /usr/cluster/lib/sc/scsi -c scrub -d /dev/did/rdsk/dids2
    
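    For example, to scrub the hypothetical devices d21 and d22 from the previous step:

    phys-schost# /usr/cluster/lib/sc/scsi -c scrub -d /dev/did/rdsk/d21s2
    phys-schost# /usr/cluster/lib/sc/scsi -c scrub -d /dev/did/rdsk/d22s2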
  12. Take the metaset on the evacuated node.


    phys-schost# metaset -s name -C take -f
    
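    For example, assuming a hypothetical metaset named apachedg:

    phys-schost# metaset -s apachedg -C take -f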
  13. Mount the file system or file systems that are defined on the devices in the metaset.


    phys-schost# mount device mountpoint
    
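    For example, assuming a hypothetical metadevice d0 in the apachedg metaset and a hypothetical mount point /global/apache:

    phys-schost# mount /dev/md/apachedg/dsk/d0 /global/apache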
  14. Start the application and perform the desired test. After finishing the test, stop the application.

  15. Reboot the node and wait until the boot process has finished.


    phys-schost# reboot
    
  16. Bring online the device group or device groups.


    phys-schost# cldevicegroup online -e devicegroupname
    
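    Continuing the hypothetical example:

    phys-schost# cldevicegroup online -e apachedg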
  17. Start the resource group or resource groups.


    phys-schost# clresourcegroup online -eM resourcegroupname
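    Continuing the hypothetical example:

    phys-schost# clresourcegroup online -eM apache-rg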