Sun Cluster System Administration Guide for Solaris OS

How to Add and Register a Replicated Device Group (ZFS)

To replicate a ZFS storage pool (zpool), you must create a named device group and list the disks that belong to the zpool. A device can belong to only one device group at a time, so if a Sun Cluster device group already contains the device, you must delete that group before you add the device to a new ZFS device group.
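
If you are not sure which device group currently contains a device, you can list the DID devices and the registered device groups before you begin (the exact output format varies by release):

# cldevice list -v
# cldevicegroup list -v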

The name of the Sun Cluster device group that you create (Solaris Volume Manager, Veritas Volume Manager, or raw-disk) must be the same as the name of the replicated device group.


Caution –

Full support for ZFS with third-party data-replication technologies is pending. See the latest Sun Cluster Release Notes for updates on ZFS support.


  1. Delete the default device groups that correspond to the devices in the zpool.

    For example, if you have a zpool called mypool that contains two devices, /dev/did/dsk/d2 and /dev/did/dsk/d13, you must delete the two default device groups, dsk/d2 and dsk/d13.


    # cldevicegroup offline dsk/d2 dsk/d13
    # cldevicegroup delete dsk/d2 dsk/d13
    
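    To confirm that the default device groups are gone, you can list the remaining device groups; dsk/d2 and dsk/d13 should no longer appear in the output:

    # cldevicegroup list
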
  2. Create a named device group with the DIDs that correspond to those in the device groups you removed in Step 1.


    # cldevicegroup create -d d2,d13 -t rawdisk mypool
    

    This action creates a device group called mypool (with the same name as the zpool), which manages the raw devices /dev/did/dsk/d2 and /dev/did/dsk/d13.
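
    You can verify the new group's type, node list, and member devices with the show subcommand:

    # cldevicegroup show mypool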

  3. Create a zpool that contains those devices.


    # zpool create mypool mirror /dev/did/dsk/d2 /dev/did/dsk/d13
    
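    To confirm that the pool was created on the mirrored DID devices, you can check its status:

    # zpool status mypool
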
  4. Create a resource group, with only global zones in its node list, to manage migration of the replicated devices in the device group.


    # clrg create -n pnode1,pnode2 migrate_truecopydg-rg
    
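    You can confirm that the resource group was created with the expected node list by displaying its configuration:

    # clrg show -v migrate_truecopydg-rg
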
  5. Create an HAStoragePlus resource (hasp-rs) in the resource group you created in Step 4, setting the globaldevicepaths extension property to the raw-disk device group that you created in Step 2.


    # clrs create -g migrate_truecopydg-rg -t HAStoragePlus \
    -x globaldevicepaths=mypool hasp2migrate_mypool
    
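    To check that the resource was created with the expected property value, you can display its configuration:

    # clrs show -v hasp2migrate_mypool
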
  6. If the application resource group will run in local zones, create a new resource group whose node list contains the appropriate local zones. The global zones that correspond to those local zones must be in the node list of the resource group you created in Step 4. Set a strong positive affinity (+++) in the RG_affinities property from this resource group to the resource group you created in Step 4.


    # clrg create -n pnode1:zone-1,pnode2:zone-2 -p \
    RG_affinities=+++migrate_truecopydg-rg sybase-rg
    
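    To confirm the affinity, you can display the RG_affinities property of the new resource group:

    # clrg show -p RG_affinities sybase-rg
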
  7. Create an HAStoragePlus resource (hasp-rs) for the zpool you created in Step 3 in the resource group that you created in either Step 4 or Step 6. Set the resource_dependencies property to the hasp-rs resource that you created in Step 5.


    # clrs create -g sybase-rg -t HAStoragePlus -p zpools=mypool \
    -p resource_dependencies=hasp2migrate_mypool \
    -p ZpoolsSearchDir=/dev/did/dsk hasp2import_mypool
    
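    At this point you can bring both resource groups online in a managed state. If the groups are still unmanaged, one way to do this is with the -eM options (enable resources, manage the group):

    # clrg online -eM migrate_truecopydg-rg
    # clrg online -eM sybase-rg
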
  8. Use the new resource group name wherever a device group name is required.
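
    For example, to move the replicated devices to another node, you would switch the resource group rather than the device group (pnode2 here is one of the nodes from Step 4):

    # clrg switch -n pnode2 migrate_truecopydg-rg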