
Administering an Oracle® Solaris Cluster 4.4 Configuration


Updated: November 2019

Running an Application Outside the Global Cluster

How to Take a Solaris Volume Manager Metaset From Nodes Booted in Noncluster Mode

Use this procedure to run an application outside the global cluster for testing purposes.

  1. Determine whether a quorum device is used in the Solaris Volume Manager metaset, and whether that quorum device uses SCSI-2 or SCSI-3 reservations.
    phys-schost# clquorum show
    1. If the quorum device is in the Solaris Volume Manager metaset, add a new quorum device that is not part of the metaset that you will later take in noncluster mode.
      phys-schost# clquorum add did
    2. Remove the old quorum device.
      phys-schost# clquorum remove did
    3. If the quorum device uses a SCSI-2 reservation, scrub the SCSI-2 reservation from the old quorum device and verify that no SCSI-2 reservations remain.

      The following command finds the Persistent Group Reservation Emulation (PGRE) keys. If there are no keys on the disk, an errno=22 message is displayed.

      # /usr/cluster/lib/sc/pgre -c pgre_inkeys -d /dev/did/rdsk/dids2

      After you locate the keys, scrub the PGRE keys.

      # /usr/cluster/lib/sc/pgre -c pgre_scrub -d /dev/did/rdsk/dids2


      Caution  -  If you scrub the active quorum device keys from the disk, the cluster will panic on the next reconfiguration with a Lost operational quorum message.
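The PGRE key check and scrub in the sub-steps above can be collected into a small dry-run script. The DID name d4 and the run wrapper are illustrative assumptions; with DRYRUN=1 (the default) the script only prints the commands it would execute.

```shell
#!/bin/sh
# Dry-run sketch of the SCSI-2 (PGRE) key check and scrub.
# DID is an assumed DID device name; set DRYRUN=0 to actually run the commands.
DID=${DID:-d4}
DRYRUN=${DRYRUN:-1}

run() {
    [ "$DRYRUN" -eq 1 ] && { echo "would run: $*"; return; }
    "$@"
}

# Look for PGRE keys; an errno=22 message means no keys are on the disk.
run /usr/cluster/lib/sc/pgre -c pgre_inkeys -d /dev/did/rdsk/${DID}s2

# Scrub the keys only after confirming this is NOT the active quorum device.
run /usr/cluster/lib/sc/pgre -c pgre_scrub -d /dev/did/rdsk/${DID}s2
```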

  2. Evacuate the global-cluster node that you want to boot in noncluster mode.
    phys-schost# clresourcegroup evacuate -n target-node
  3. Take offline any resource group that contains HAStorage or HAStoragePlus resources and uses devices or file systems affected by the metaset that you want to take later in noncluster mode.
    phys-schost# clresourcegroup offline resource-group
  4. Disable all the resources in the resource groups that you took offline.
    phys-schost# clresource disable resource
  5. Unmanage the resource groups.
    phys-schost# clresourcegroup unmanage resource-group
  6. Take offline the corresponding device group or device groups.
    phys-schost# cldevicegroup offline device-group
  7. Disable the device group or device groups.
    phys-schost# cldevicegroup disable device-group
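Steps 2 through 7 can be sketched as one quiesce script. The node, resource-group, and device-group names are placeholders, and the "clresource disable -g ... +" form (disable every resource in a group at once) is an assumption; with DRYRUN=1 the commands are only printed.

```shell
#!/bin/sh
# Dry-run sketch of the quiesce sequence (steps 2-7).
DRYRUN=${DRYRUN:-1}
TARGET_NODE=${TARGET_NODE:-phys-schost-2}
RG_LIST=${RG_LIST:-"nfs-rg"}   # placeholder: resource groups tied to the metaset
DG_LIST=${DG_LIST:-"nfs-dg"}   # placeholder: corresponding device groups

run() {
    [ "$DRYRUN" -eq 1 ] && { echo "would run: $*"; return; }
    "$@"
}

run clresourcegroup evacuate -n "$TARGET_NODE"
for rg in $RG_LIST; do
    run clresourcegroup offline "$rg"
    run clresource disable -g "$rg" +    # assumed form: all resources in the group
    run clresourcegroup unmanage "$rg"
done
for dg in $DG_LIST; do
    run cldevicegroup offline "$dg"
    run cldevicegroup disable "$dg"
done
```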
  8. Boot the passive node into noncluster mode.
    phys-schost# shutdown -g0 -i0 -y
    ok boot -x
  9. Verify that the boot process has completed on the passive node before proceeding.
    phys-schost# svcs -x
  10. Determine whether any SCSI-3 reservations exist on the disks in the metasets.

    Run the following command on all disks in the metasets.

    phys-schost# /usr/cluster/lib/sc/scsi -c inkeys -d /dev/did/rdsk/dids2
  11. If any SCSI-3 reservations exist on the disks, scrub them.
    phys-schost# /usr/cluster/lib/sc/scsi -c scrub -d /dev/did/rdsk/dids2
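Steps 10 and 11 apply to every disk in the metaset, so a loop is natural. DISK_LIST holds assumed DID names; with DRYRUN=1 the reservation commands are only printed, and in real use you would scrub only the disks on which inkeys reports reservation keys.

```shell
#!/bin/sh
# Dry-run sketch of the per-disk SCSI-3 check and scrub (steps 10-11).
DRYRUN=${DRYRUN:-1}
DISK_LIST=${DISK_LIST:-"d4 d5"}   # placeholder: DID names of the metaset disks

run() {
    [ "$DRYRUN" -eq 1 ] && { echo "would run: $*"; return; }
    "$@"
}

for d in $DISK_LIST; do
    dev=/dev/did/rdsk/${d}s2
    run /usr/cluster/lib/sc/scsi -c inkeys -d "$dev"
    # Scrub only if inkeys reported reservation keys on this disk.
    run /usr/cluster/lib/sc/scsi -c scrub -d "$dev"
done
```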
  12. Take the metaset on the evacuated node.
    phys-schost# metaset -s name -C take -f
  13. Mount the file system or file systems that reside on devices in the metaset.
    phys-schost# mount device mountpoint
  14. Start the application and perform the desired test. After finishing the test, stop the application.
  15. Reboot the node and wait until the boot process has ended.
    phys-schost# reboot
  16. Bring online the device group or device groups.
    phys-schost# cldevicegroup online -e device-group
  17. Start the resource group or resource groups.
    phys-schost# clresourcegroup online -eM resource-group
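The restore sequence in steps 16 and 17 can be sketched the same way: bring the device groups back online with their enabled state restored, then bring the resource groups online managed and enabled. The group names are placeholders, and with DRYRUN=1 the commands are only printed.

```shell
#!/bin/sh
# Dry-run sketch of the restore sequence (steps 16-17).
DRYRUN=${DRYRUN:-1}
DG_LIST=${DG_LIST:-"nfs-dg"}   # placeholder device groups
RG_LIST=${RG_LIST:-"nfs-rg"}   # placeholder resource groups

run() {
    [ "$DRYRUN" -eq 1 ] && { echo "would run: $*"; return; }
    "$@"
}

for dg in $DG_LIST; do
    run cldevicegroup online -e "$dg"
done
for rg in $RG_LIST; do
    run clresourcegroup online -eM "$rg"
done
```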