Oracle Solaris Cluster Geographic Edition Data Replication Guide for EMC Symmetrix Remote Data Facility (Oracle Solaris Cluster 4.1)

Initial Configuration of SRDF Software

This section describes the steps you need to perform to configure SRDF software on the primary and secondary clusters. It also includes information about the preconditions for creating SRDF protection groups.

Initial configuration of the primary and secondary clusters includes setting up SRDF device groups on the primary cluster and creating the corresponding RDF2 device groups and other entities on the secondary cluster, as described in the sections that follow.

Geographic Edition software supports the hardware configurations that are supported by the Oracle Solaris Cluster software. Contact your Oracle service representative for information about current supported Oracle Solaris Cluster configurations.

Enabling the SRDF -symforce Option

All nodes of both clusters must have the SRDF property SYMAPI_ALLOW_RDF_SYMFORCE enabled. This setting is required for proper function of certain geopg operations. Ensure that the SRDF /var/symapi/config/options file has the following entry:

SYMAPI_ALLOW_RDF_SYMFORCE=TRUE
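
For example, you can confirm that the entry is present on each node of both clusters; a minimal check, assuming the default SYMAPI options file location shown above:

# grep SYMAPI_ALLOW_RDF_SYMFORCE /var/symapi/config/options
SYMAPI_ALLOW_RDF_SYMFORCE=TRUE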

See your EMC Symmetrix Remote Data Facility documentation for more information.

Configuring Data Replication With SRDF Software on the Primary Cluster

This section describes the steps you must perform on the primary cluster before you can configure SRDF data replication with Geographic Edition software. It covers setting up SRDF device groups, checking the configuration of SRDF devices, and creating an RDF1 device group.

Setting Up SRDF Device Groups

SRDF devices are configured in pairs. The mirroring relationship between the pairs becomes operational as soon as the SRDF links are online. If dynamic SRDF is available, you can change the R1/R2 relationship of the volumes in a device pairing on the fly, without a BIN file configuration change.
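
With dynamic SRDF, for example, you can swap the R1 and R2 personalities of the devices in a device group. The following is a sketch, assuming the devgroup1 group used throughout this chapter and that the device pairs are in a state, such as Suspended, that permits a swap:

phys-paris-1# symrdf -g devgroup1 -noprompt swap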


Note - Do not configure a replicated volume as a quorum device. Locate any quorum devices on a shared, unreplicated volume or use a quorum server.


The EMC Symmetrix database file on each host stores configuration information about the EMC Symmetrix units attached to the host. The EMC Symmetrix global memory stores information about the pair state of operating EMC SRDF devices.

EMC SRDF device groups are the entities that you add to Geographic Edition protection groups to enable the Geographic Edition software to manage EMC Symmetrix pairs.
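
For example, after you create an SRDF device group as described below, you later add it to a protection group with the geopg add-device-group command; a sketch, where srdfpg is an illustrative protection group name (protection group administration is covered in Chapter 2):

phys-paris-1# geopg add-device-group devgroup1 srdfpg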

The SRDF device group can hold one of two types of devices:

• RDF1 source devices, which act as the primary

• RDF2 target devices, which act as the secondary

As a result, you can create two types of SRDF device group, RDF1 and RDF2. An SRDF device can be moved to another device group only if the source and destination groups are of the same group type.

You can create RDF1 device groups on a host attached to the EMC Symmetrix unit that contains the RDF1 devices, and RDF2 device groups on a host attached to the EMC Symmetrix unit that contains the RDF2 devices. You can perform the same SRDF operations from either the primary or the secondary cluster, using the device group that was built on that side.

When you add remote data facility devices to a device group, all of the devices must adhere to the same set of restrictions. See your EMC Symmetrix Remote Data Facility documentation for the specific device requirements.

Checking the Configuration of SRDF Devices

Before adding SRDF devices to a device group, use the symrdf list command to list the EMC Symmetrix devices configured on the EMC Symmetrix units attached to your host.

# symrdf list

By default, the command displays devices by their EMC Symmetrix device name, a hexadecimal number that the EMC Symmetrix software assigns to each physical device. To display devices by their physical host name, use the pd argument with the symrdf command.

# symrdf list pd

How to Create an RDF1 Device Group

The following steps create a device group of type RDF1 and add an RDF1 EMC Symmetrix device to the group.

  1. Create a device group named devgroup1.
    phys-paris-1# symdg create devgroup1 -type rdf1
  2. Add an RDF1 device, with the EMC Symmetrix device name of 085, to the device group on the EMC Symmetrix storage unit identified by the number 000000003264.

    A default logical name of the form DEV001 is assigned to the RDF1 device.

    phys-paris-1# symld -g devgroup1 -sid 3264 add dev 085
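
    You can verify the contents of the new group with standard SYMCLI commands; for example:

    phys-paris-1# symdg show devgroup1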

Next Steps

Create the Oracle Solaris Cluster device groups, file systems, or ZFS storage pools you want to use, specifying the LUNs in the SRDF device group. You also need to create an HAStoragePlus resource for the device group, file system, or ZFS storage pool you use.
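
For example, on the primary cluster you might place a ZFS storage pool and an HAStoragePlus resource on a replicated LUN. The following is a minimal sketch; the disk name, the pool name srdfpool, and the resource name srdf-hasp-rs are illustrative:

phys-paris-1# zpool create srdfpool c6t5006048ACCC81DD0d18
phys-paris-1# clresourcetype register SUNW.HAStoragePlus
phys-paris-1# clresourcegroup create apprg1
phys-paris-1# clresource create -g apprg1 -t SUNW.HAStoragePlus \
-p Zpools=srdfpool srdf-hasp-rs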

If you create a ZFS storage pool, observe the following requirements and restrictions:

• Mirrored and unmirrored ZFS storage pools are supported.

• ZFS storage pool spares are not supported with storage-based replication in a Geographic Edition configuration. The information about the spare that is stored in the storage pool results in the storage pool being incompatible with the remote system after it has been replicated.

• ZFS can be used with either Synchronous or Asynchronous mode. If you use Asynchronous mode, ensure that SRDF is configured to preserve write ordering, even after a rolling failure.

For more information about creating device groups, file systems, and ZFS storage pools in a cluster configuration, see Oracle Solaris Cluster System Administration Guide. For information about creating an HAStoragePlus resource, see Oracle Solaris Cluster Data Services Planning and Administration Guide.

Configuring Data Replication With SRDF Software on the Secondary Cluster

This section describes the steps you must complete on the secondary cluster before you can configure SRDF data replication in Geographic Edition software.

How to Create the RDF2 Device Group on the Secondary Cluster

Before You Begin

Before you can issue the SRDF commands on the secondary cluster, you must create an RDF2-type device group on the secondary cluster that contains the same definitions as the RDF1 device group.


Note - Do not configure a replicated volume as a quorum device. Locate any quorum devices on a shared, unreplicated volume or use a quorum server.


  1. Use the symdg export command to create a text file, devgroup1.txt, that contains the RDF1 group definitions.
    phys-paris-1# symdg export devgroup1 -f devgroup1.txt -rdf
  2. Use the rcp or ftp command to transfer the file to the secondary cluster.
    phys-paris-1# rcp devgroup1.txt phys-newyork-1:/.
    phys-paris-1# rcp devgroup1.txt phys-newyork-2:/.
  3. On the secondary cluster, use the symdg import command to create the RDF2 device group by using the definitions from the text file.

    Run the following command on each node in the newyork cluster.

    # symdg import devgroup1 -f devgroup1.txt
    
    Adding standard device 054 as DEV001...
    Adding standard device 055 as DEV002...

Configuring the Other Entities on the Secondary Cluster

Next, you need to configure any volume manager, the Oracle Solaris Cluster device groups, and the highly available cluster file system.
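
For example, to build a highly available cluster file system on a replicated LUN, you might use commands such as the following sketch. The DID device d5 and the mount point /mounts/sample are taken from the examples later in this chapter:

phys-newyork-1# newfs /dev/global/rdsk/d5s2
phys-newyork-1# mkdir -p /mounts/sample

Then add an entry like this one to /etc/vfstab on each node of the newyork cluster, so that the file system can be mounted globally:

/dev/global/dsk/d5s2 /dev/global/rdsk/d5s2 /mounts/sample ufs 2 no global,logging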

How to Replicate the Configuration Information From the Primary Cluster, When Using Raw-Disk Device Groups

  1. On the primary cluster, start replication for the devgroup1 device group.
    phys-paris-1# symrdf -g devgroup1 -noprompt establish
    
    An RDF 'Incremental Establish' operation execution is in progress for device group 
    'devgroup1'. Please wait...
    Write Disable device(s) on RA at target (R2)..............Done.
    Suspend RDF link(s).......................................Done.
    Mark target (R2) devices to refresh from source (R1)......Started.
    Device: 054 ............................................. Marked.
    Mark target (R2) devices to refresh from source (R1)......Done.
    Suspend RDF link(s).......................................Done.
    Merge device track tables between source and target.......Started.
    Device: 09C ............................................. Merged.
    Merge device track tables between source and target.......Done.
    Resume RDF link(s)........................................Done.
    
    The RDF 'Incremental Establish' operation successfully initiated for device group 
    'devgroup1'. 
  2. On the primary cluster, confirm that the state of the SRDF pair is synchronized.
    phys-paris-1# symrdf -g devgroup1 verify
    
    All devices in the RDF group 'devgroup1' are in the 'Synchronized' state.
  3. On the primary cluster, split the pair by using the symrdf split command.
    phys-paris-1# symrdf -g devgroup1 -noprompt split
    
    An RDF 'Split' operation execution is in progress for device group 'devgroup1'.
    Please wait...
    
    Suspend RDF link(s).......................................Done.
    Read/Write Enable device(s) on RA at target (R2)..........Done.
    The RDF 'Split' operation successfully executed for device group 'devgroup1'.
  4. Map the EMC disk drive to the corresponding DID numbers.

    You use these mappings when you create the raw-disk device group.

    1. Use the symrdf command to find devices in the SRDF device group.
      phys-paris-1# symrdf -g devgroup1 query
      …
      DEV001  00DD RW       0        3 NR 00DD RW       0        0 S..   Split       
      DEV002  00DE RW       0        3 NR 00DE RW       0        0 S..   Split       
      …
    2. Display detailed information about all devices.
      phys-paris-1# symdev show 00DD
      …
      Symmetrix ID: 000187990182
      
         Device Physical Name     : /dev/rdsk/c6t5006048ACCC81DD0d18s2
      
         Device Symmetrix Name    : 00DD 
    3. Once you know the ctd label, use the cldevice command to see more information about that device.
      phys-paris-1# cldevice show c6t5006048ACCC81DD0d18
      
      === DID Device Instances ===                   
      
      DID Device Name:                                /dev/did/rdsk/d5
        Full Device Path:                                
      pemc3:/dev/rdsk/c8t5006048ACCC81DEFd18
        Full Device Path:                                
      pemc3:/dev/rdsk/c6t5006048ACCC81DD0d18
        Full Device Path:                                
      pemc4:/dev/rdsk/c6t5006048ACCC81DD0d18
        Full Device Path:                                
      pemc4:/dev/rdsk/c8t5006048ACCC81DEFd18
        Replication:                                     none
        default_fencing:                                 global

      In this example, you see that the ctd label c6t5006048ACCC81DD0d18 maps to /dev/did/rdsk/d5.

    4. Repeat these steps as needed for each disk in the device group, on each cluster.
  5. Create the device group, file system, or ZFS storage pool you want to use.

    Use the LUNs in the SRDF device group.

    If you create a ZFS storage pool, observe the following requirements and restrictions:

    • Mirrored and unmirrored ZFS storage pools are supported.

    • ZFS storage pool spares are not supported with storage-based replication in a Geographic Edition configuration. The information about the spare that is stored in the storage pool results in the storage pool being incompatible with the remote system after it has been replicated.

    • ZFS can be used with either Synchronous or Asynchronous mode. If you use Asynchronous mode, ensure that SRDF is configured to preserve write ordering, even after a rolling failure.

    For more information, see Oracle Solaris Cluster System Administration Guide.
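
    For example, for a raw-disk device group you might combine the DID devices that you mapped in Step 4 into a single device group; a sketch, where d5 and d6 are illustrative DID names and rawdg is the group name used later in this procedure (you might first need to remove the existing per-DID device groups and specify a node list with -n):

    phys-newyork-1# cldevicegroup create -t rawdisk -d d5,d6 rawdg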

  6. Create an HAStoragePlus resource for the device group, file system, or ZFS storage pool you will use.

    For more information, see Oracle Solaris Cluster Data Services Planning and Administration Guide.
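
    A minimal sketch, similar to the primary-cluster example earlier, that puts the rawdg device group under HAStoragePlus control in the apprg1 resource group used in the next step; the resource name hasp-rs is illustrative:

    phys-newyork-1# clresourcetype register SUNW.HAStoragePlus
    phys-newyork-1# clresourcegroup create apprg1
    phys-newyork-1# clresource create -g apprg1 -t SUNW.HAStoragePlus \
    -p GlobalDevicePaths=rawdg hasp-rs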

  7. Confirm that the application resource group is correctly configured by bringing it online and taking it offline again.
    phys-newyork-1# clresourcegroup online -emM apprg1
    phys-newyork-1# clresourcegroup offline apprg1
  8. Unmount the file system.
    phys-newyork-1# umount /mounts/sample
  9. Take the Oracle Solaris Cluster device group offline.
    phys-newyork-1# cldevicegroup offline rawdg
  10. Reestablish the SRDF pair.
    phys-newyork-1# symrdf -g devgroup1 -noprompt establish

    Initial configuration on the secondary cluster is now complete.