Oracle® Solaris Cluster Geographic Edition Data Replication Guide for EMC Symmetrix Remote Data Facility

Updated: July 2014, E39668-01
 
 

Configuring the Other Entities on the Secondary Cluster

Next, you need to configure any volume manager, the Geographic Edition device groups, and the highly available cluster file system.

How to Replicate the Configuration Information From the Primary Cluster, When Using Raw-Disk Device Groups

  1. On the primary cluster, start replication for the devgroup1 device group.
    phys-paris-1# symrdf -g devgroup1 -noprompt establish
    
    An RDF 'Incremental Establish' operation execution is in progress for device group
    'devgroup1'. Please wait...
    Write Disable device(s) on RA at target (R2)..............Done.
    Suspend RDF link(s).......................................Done.
    Mark target (R2) devices to refresh from source (R1)......Started.
    Device: 054 ............................................. Marked.
    Mark target (R2) devices to refresh from source (R1)......Done.
    Suspend RDF link(s).......................................Done.
    Merge device track tables between source and target.......Started.
    Device: 09C ............................................. Merged.
    Merge device track tables between source and target.......Done.
    Resume RDF link(s)........................................Done.
    
    The RDF 'Incremental Establish' operation successfully initiated for device group
    'devgroup1'. 
  2. Confirm that the state of the SRDF pair is synchronized.
    phys-newyork-1# symrdf -g devgroup1 verify
    
    All devices in the RDF group 'devgroup1' are in the 'Synchronized' state.
  3. On the primary cluster, split the pair by using the symrdf split command.
    phys-paris-1# symrdf -g devgroup1 -noprompt split
    
    An RDF 'Split' operation execution is in progress for device group 'devgroup1'.
    Please wait...
    
    Suspend RDF link(s).......................................Done.
    Read/Write Enable device(s) on RA at target (R2)..........Done.
    The RDF 'Split' operation successfully executed for device group 'devgroup1'. 
  4. Map the EMC disk drives to the corresponding DID numbers.

    You use these mappings when you create the raw-disk device group.

    1. Use the symrdf command to find devices in the SRDF device group.
      phys-paris-1# symrdf -g devgroup1 query
      …
      DEV001  00DD RW       0        3 NR 00DD RW       0        0 S..   Split
      DEV002  00DE RW       0        3 NR 00DE RW       0        0 S..   Split
      …
    2. Display detailed information about all devices.
      phys-paris-1# symdev show 00DD
      …
      Symmetrix ID: 000187990182
      
      Device Physical Name     : /dev/rdsk/c6t5006048ACCC81DD0d18s2
      
      Device Symmetrix Name    : 00DD 
    3. Once you know the ctd label, use the cldevice command to see more information about that device.
      phys-paris-1# cldevice show c6t5006048ACCC81DD0d18
      
      === DID Device Instances ===
      
      DID Device Name:                                /dev/did/rdsk/d5
      Full Device Path:
      pemc3:/dev/rdsk/c8t5006048ACCC81DEFd18
      Full Device Path:
      pemc3:/dev/rdsk/c6t5006048ACCC81DD0d18
      Full Device Path:
      pemc4:/dev/rdsk/c6t5006048ACCC81DD0d18
      Full Device Path:
      pemc4:/dev/rdsk/c8t5006048ACCC81DEFd18
      Replication:                                     none
      default_fencing:                                 global

      In this example, you see that the ctd label c6t5006048ACCC81DD0d18 maps to /dev/did/rdsk/d5.

    4. Repeat the previous steps for each of the disks in the device group, and repeat them on each cluster. A shortcut for listing all of the mappings at once is shown below.
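
      Rather than running symdev show and cldevice show for every device, you can list all of the DID instances and their device paths in one pass with the standard cldevice list command, and then match the ctd labels reported by symdev show against that output.
      phys-paris-1# cldevice list -v
      …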
  5. Create the device group, file system, or ZFS storage pool you want to use.

    Use the LUNs in the SRDF device group.

    If you create a ZFS storage pool, observe the following requirements and restrictions:

    • Mirrored and unmirrored ZFS storage pools are supported.

    • ZFS storage pool spares are not supported with storage-based replication in a Geographic Edition configuration. Information about the spare is stored in the storage pool itself, which makes the replicated pool incompatible with the remote system.

    • ZFS can be used with either synchronous or asynchronous mode. If you use asynchronous mode, ensure that SRDF is configured to preserve write ordering, even after a rolling failure.

    For more information, see the Oracle Solaris Cluster System Administration Guide.
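
    The commands below are a sketch only. They assume that you are working on the secondary cluster node phys-newyork-1 with a second node phys-newyork-2, that the DID devices d5 and d6 correspond to the LUNs in devgroup1, that rawdg is the name chosen for the raw-disk device group, and that a UFS file system on slice s2 of d5 will be mounted at /mounts/sample. Substitute your own node names, DID devices, slices, and device group name. If the file system will be managed by the HAStoragePlus resource in the next step, also add a /mounts/sample entry for /dev/did/dsk/d5s2 to /etc/vfstab with the mount-at-boot field set to no. If you create a ZFS storage pool instead, build it on the corresponding devices and observe the restrictions listed above.
    phys-newyork-1# cldevicegroup create -t rawdisk -n phys-newyork-1,phys-newyork-2 -d d5,d6 rawdg
    phys-newyork-1# newfs /dev/did/rdsk/d5s2
    phys-newyork-1# mkdir -p /mounts/sample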

  6. Create an HAStoragePlus resource for the device group, file system, or ZFS storage pool you will use.

    For more information, see the Oracle Solaris Cluster Data Services Planning and Administration Guide.
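
    The following sketch assumes the /mounts/sample file system from the previous example, an application resource group named apprg1 (the group used in the next step), and hasp-rs as an illustrative resource name; adjust these names to match your configuration. Register the SUNW.HAStoragePlus resource type and create the application resource group first if they do not already exist on this cluster. If you created a ZFS storage pool instead, set the Zpools extension property to the pool name rather than FilesystemMountPoints.
    phys-newyork-1# clresourcetype register SUNW.HAStoragePlus
    phys-newyork-1# clresourcegroup create apprg1
    phys-newyork-1# clresource create -g apprg1 -t SUNW.HAStoragePlus -p FilesystemMountPoints=/mounts/sample hasp-rs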

  7. Confirm that the application resource group is correctly configured by bringing it online and taking it offline again.
    phys-newyork-1# clresourcegroup online -emM apprg1
    phys-newyork-1# clresourcegroup offline apprg1
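    To confirm that the group and its resources actually came online before you take them offline again, you can check their status; for example:
    phys-newyork-1# clresourcegroup status apprg1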
  8. Unmount the file system.
    phys-newyork-1# umount /mounts/sample
  9. Take the Geographic Edition device group offline.
    phys-newyork-1# cldevicegroup offline raw-device-group
  10. Reestablish the SRDF pair.
    phys-newyork-1# symrdf -g devgroup1 -noprompt establish

    Initial configuration on the secondary cluster is now complete.