Sun Cluster Geographic Edition Data Replication Guide for EMC Symmetrix Remote Data Facility

Initial Configuration of EMC Symmetrix Remote Data Facility Software

This section describes the steps you need to perform to configure EMC Symmetrix Remote Data Facility software on the primary and secondary clusters. It also includes information about the preconditions for creating EMC Symmetrix Remote Data Facility protection groups.

Initial configuration of the primary and secondary clusters includes the following:

Sun Cluster Geographic Edition software supports the hardware configurations that are supported by the Sun Cluster software. Contact your Sun service representative for information about current supported Sun Cluster configurations.

Configuring Data Replication With EMC Symmetrix Remote Data Facility Software on the Primary Cluster

This section describes the steps you must perform on the primary cluster before you can configure EMC Symmetrix Remote Data Facility data replication with Sun Cluster Geographic Edition software.

Setting Up Device Groups

EMC Symmetrix Remote Data Facility devices are configured in pairs. The mirroring relationship between the pairs becomes operational as soon as the EMC Symmetrix Remote Data Facility links are online. If you enable two-way mirrored disks with the dynamic EMC Symmetrix Remote Data Facility functionality, you can set up the device pairs at any time.

The EMC Symmetrix global memory stores information about the pair state of operating EMC Symmetrix Remote Data Facility devices.

Sun Cluster device groups are the entities that you can create and use to manage and control EMC Symmetrix Remote Data Facility pairs. The SYMCLI database file of the host stores information about the device group and the devices that are contained by the group.

The EMC Symmetrix Remote Data Facility device group can hold one of two types of devices:

  RDF1 source devices, which operate as the primary devices

  RDF2 target devices, which operate as the secondary devices

As a result, you can create two types of EMC Symmetrix Remote Data Facility device group, RDF1 and RDF2. An EMC Symmetrix Remote Data Facility device can be moved to another device group only if the source and destination groups are of the same group type.

You can create RDF1 device groups on a host attached to the EMC Symmetrix software that contains the RDF1 devices. You can create RDF2 device groups on a host attached to the EMC Symmetrix software that contains the RDF2 devices. You can perform the same EMC Symmetrix Remote Data Facility operations from the primary or secondary cluster, using the device group that was built on that side.

When you add EMC Symmetrix Remote Data Facility devices to a device group, all of the devices must adhere to the following restrictions:

Checking the Configuration of EMC Symmetrix Remote Data Facility Devices

Before adding EMC Symmetrix Remote Data Facility devices to a device group, use the symrdf list command to list the EMC Symmetrix Remote Data Facility devices configured on the EMC Symmetrix units attached to your host.


# symrdf list

By default, the command displays devices by their EMC Symmetrix device name, a hexadecimal number that the EMC Symmetrix software assigns to each physical device. To display devices by their physical host name, use the pd argument with the symrdf command.


# symrdf list pd

Creating an RDF1 Device Group

The following steps create a device group of type RDF1 and add an RDF1 EMC Symmetrix device to the group.

  1. Create a device group named devgroup1.


    phys-paris-1# symdg create devgroup1 -type rdf1
    
  2. Add an RDF1 device, with the EMC Symmetrix device name of 085, to the device group on the EMC Symmetrix storage unit identified by the number 000000003264.

    A default logical name of the form DEV001 is assigned to the RDF1 device.


    phys-paris-1# symld -g devgroup1 -sid 3264 add dev 085
    

How to Configure the Volumes for Use With EMC Symmetrix Remote Data Facility Replication

EMC Symmetrix Remote Data Facility supports VERITAS Volume Manager volumes. You must configure VERITAS Volume Manager volumes on the disks you selected for your EMC Symmetrix Remote Data Facility device group.

  1. Create VERITAS Volume Manager disk groups on shared disks in cluster-paris.

    For example, the d1 and d2 disks are configured as part of a VERITAS Volume Manager disk group called dg1 by using commands such as vxdiskadm and vxdg.

  2. After configuration is complete, verify that the disk group was created by using the vxdg list command.

    This command should list dg1 as a disk group.

  3. Create the VERITAS Volume Manager volume.

    For example, a volume that is called vol1 is created in the dg1 disk group. The appropriate VERITAS Volume Manager commands, such as vxassist, are used to configure the volume.
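Assembled as a sketch, the three steps above might look like the following. The device names c1t1d0 and c1t2d0 are placeholders for whatever shared disks you selected; the sizes and disk media names are illustrative, not prescriptive:

```shell
# Initialize the shared disks for VERITAS Volume Manager use
# (c1t1d0 and c1t2d0 are hypothetical device names).
/etc/vx/bin/vxdisksetup -i c1t1d0
/etc/vx/bin/vxdisksetup -i c1t2d0

# Create the dg1 disk group containing both disks, using the
# d1 and d2 disk media names from this guide.
vxdg init dg1 d1=c1t1d0 d2=c1t2d0

# Verify that dg1 was created.
vxdg list

# Create an example 1-Gbyte volume, vol1, in the dg1 disk group.
vxassist -g dg1 make vol1 1g
```

You can accomplish the same result interactively with vxdiskadm; the noninteractive commands are shown here only so that each step is explicit.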

How to Configure the Sun Cluster Device Group That Is Controlled by EMC Symmetrix Remote Data Facility

  1. Register the VERITAS Volume Manager disk group that you configured in the previous procedure with Sun Cluster.

    Use the Sun Cluster scsetup or scconf command.

    For more information about these commands, refer to the scsetup(1M) or the scconf(1M) man page.

  2. Synchronize the VERITAS Volume Manager configuration with Sun Cluster software, again by using the scsetup or scconf commands.

  3. After configuration is complete, verify the disk group registration.


    phys-paris-1# scstat -D
    

    The VERITAS Volume Manager disk group, dg1, should be displayed in the output.

    For more information about the scstat command, see the scstat(1M) man page.
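As a noninteractive sketch of the registration and synchronization steps above, the scconf forms used later in this guide can be run directly. The node names phys-paris-1 and phys-paris-2 are assumed here for the cluster-paris nodes:

```shell
# Register the dg1 VERITAS Volume Manager disk group as a
# Sun Cluster device group on the cluster-paris nodes
# (phys-paris-1 and phys-paris-2 are assumed node names).
scconf -a -D type=vxvm,name=dg1,nodelist=phys-paris-1:phys-paris-2

# Synchronize the VERITAS Volume Manager configuration with the
# Sun Cluster device group.
scconf -c -D name=dg1,sync

# Verify the registration; dg1 should appear in the output.
scstat -D
```

The scsetup utility performs the same operations through interactive menus.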

How to Configure a Highly Available File System for EMC Symmetrix Remote Data Facility Replication

Before You Begin

Before you configure the file system on cluster-paris, ensure that the Sun Cluster entities you require, such as application resource groups, device groups, volumes, and mount points, have already been configured.

  1. Create the required file system on the vol1 volume at the command line.

  2. Create the required mount points on all cluster-paris nodes.

  3. Add an entry to the /etc/vfstab file that contains information such as the mount location.

    Whether the file system is to be mounted locally or globally depends on various factors, such as your performance requirements, or the type of application resource group you are using.


    Note –

    You must set the mount at boot field in this file to no. This value prevents the file system from mounting on the secondary cluster at cluster startup. Instead, the Sun Cluster software and the Sun Cluster Geographic Edition framework handle mounting the file system by using the HAStoragePlus resource when the application is brought online on the primary cluster.


  4. Add the HAStoragePlus resource to the application resource group, apprg1.

    Adding the resource to the application resource group ensures that the necessary file systems are mounted before the application is brought online.

    For more information about the HAStoragePlus resource type, refer to the Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

  5. Verify that the disk group was registered properly.


    phys-paris-1# scstat -D
    

    The output should display the VERITAS Volume Manager disk group dg1.
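The mount-at-boot requirement from the note in step 3 can be checked mechanically. This sketch parses a vfstab-style entry (the sample entry used in this procedure) and prints its sixth field, which must be no:

```shell
# Sample /etc/vfstab entry from this procedure; field 6 is "mount at boot".
entry='/dev/vx/dsk/dg1/vol1 /dev/vx/rdsk/dg1/vol1 /mounts/sample ufs 2 no logging'

# Extract the mount-at-boot field and confirm that it is "no".
mount_at_boot=$(echo "$entry" | awk '{print $6}')
echo "$mount_at_boot"   # prints "no"
```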


Example 1–1 Configuring a Highly Available Cluster File System

This example creates a locally mounted file system with HAStoragePlus. The file system created in this example is mounted locally every time the resource is brought online.

This example assumes that the apprg1 resource group already exists.

  1. Create a UNIX file system (UFS).


    phys-paris-1# newfs /dev/vx/rdsk/dg1/vol1
    
  2. Create mount points on all cluster-paris nodes.


    phys-paris-1# mkdir /mounts/sample
    
  3. Add the following entry to the /etc/vfstab file:


    phys-paris-1# /dev/vx/dsk/dg1/vol1 /dev/vx/rdsk/dg1/vol1 /mounts/sample \
    ufs 2 no logging
    
  4. Add the HAStoragePlus resource type.


    phys-paris-1# scrgadm -a -j rs-hasp -g apprg1 -t SUNW.HAStoragePlus \
    -x FilesystemMountPoints=/mounts/sample -x AffinityOn=TRUE \
    -x GlobalDevicePaths=dg1
    

Configuring Data Replication With EMC Symmetrix Remote Data Facility Software on the Secondary Cluster

This section describes the steps you must complete on the secondary cluster before you can configure EMC Symmetrix Remote Data Facility data replication in Sun Cluster Geographic Edition software.

How to Create the RDF2 Device Group on the Secondary Cluster

Before You Begin

Before you can issue the EMC Symmetrix Remote Data Facility commands on the secondary cluster, you need to create an RDF2-type device group on the secondary cluster that contains the same definitions as the RDF1 device group.

  1. Use the symdg export command to create a text file, devgroup1.txt, that contains the RDF1 group definitions.


    phys-paris-1# symdg export devgroup1 -f devgroup1.txt -rdf
    
  2. Use the rcp or ftp command to transfer the file to the secondary cluster.


    phys-paris-1# rcp devgroup1.txt phys-newyork-2:/.
    
  3. On the secondary cluster, use the symdg import command to create the RDF2 device group by using the definitions from the text file.

    Run the following command on each node of cluster-newyork.


    phys-newyork-1# symdg import devgroup1 -f devgroup1.txt
    
    Adding standard device 054 as DEV001...
    Adding standard device 055 as DEV002...

Configuring the Other Entities on the Secondary Cluster

Next, you need to configure the volume manager, the Sun Cluster device groups, and the highly available cluster file system. You can configure these entities in two ways:

Each of these methods is described in the following procedures.

How to Replicate the Volume Manager Configuration Information From the Primary Cluster

  1. Start replication for the devgroup1 device group.


    phys-paris-1# symrdf -g devgroup1 -noprompt establish
    
    An RDF 'Incremental Establish' operation execution is in progress for device group 
    'devgroup1'. Please wait... 
    Write Disable device(s) on RA at target (R2)..............Done. 
    Suspend RDF link(s).......................................Done.
    Mark target (R2) devices to refresh from source (R1)......Started. 
    Device: 054 ............................................. Marked. 
    Mark target (R2) devices to refresh from source (R1)......Done. 
    Suspend RDF link(s).......................................Done. 
    Merge device track tables between source and target.......Started. 
    Device: 09C ............................................. Merged. 
    Merge device track tables between source and target.......Done. 
    Resume RDF link(s)........................................Done. 
    
    The RDF 'Incremental Establish' operation successfully initiated for device group 
    'devgroup1'. 
  2. Confirm that the state of the EMC Symmetrix Remote Data Facility pair is synchronized.


    phys-newyork-1# symrdf -g devgroup1 verify
    
    All devices in the RDF group 'devgroup1' are in the 'Synchronized' state.
  3. Split the pair by using the symrdf split command.


    phys-paris-1# symrdf -g devgroup1 -noprompt split
    
    An RDF 'Split' operation execution is in progress for device group 'devgroup1'. 
    Please wait... 
    
    Suspend RDF link(s).......................................Done. 
    Read/Write Enable device(s) on RA at target (R2)..........Done. 
    The RDF 'Split' operation successfully executed for device group 'devgroup1'. 
  4. Enable all the volumes to be scanned.


    phys-newyork-1# vxdctl enable
    
  5. Import the VERITAS Volume Manager disk group, dg1.


    phys-newyork-1# vxdg -C import dg1
    
  6. Verify that the VERITAS Volume Manager disk group was successfully imported.


    phys-newyork-1# vxdg list
    
  7. Enable the VERITAS Volume Manager volume.


    phys-newyork-1# /usr/sbin/vxrecover -g dg1 -s -b
    
  8. Verify that the VERITAS Volume Manager volumes are recognized and enabled.


    phys-newyork-1# vxprint
    
  9. Register the VERITAS Volume Manager disk group, dg1, in Sun Cluster software.


    phys-newyork-1# scconf -a -D type=vxvm,name=dg1,\
    nodelist=phys-newyork-1:phys-newyork-2
    
  10. Add an entry to the /etc/vfstab file on phys-newyork-1.


    phys-newyork-1# /dev/vx/dsk/dg1/vol1 /dev/vx/rdsk/dg1/vol1 \
    /mounts/sample ufs 2 no logging
    
  11. Create a mount directory on both cluster-newyork nodes.


    phys-newyork-1# mkdir -p /mounts/sample
    phys-newyork-2# mkdir -p /mounts/sample
    
  12. Create an application resource group, apprg1, by using the scrgadm command.


    phys-newyork-1# scrgadm -a -g apprg1
    
  13. Create the HAStoragePlus resource in apprg1.


    phys-newyork-1# scrgadm -a -j rs-hasp -g apprg1 -t SUNW.HAStoragePlus \
    -x FilesystemMountPoints=/mounts/sample -x AffinityOn=TRUE \
    -x GlobalDevicePaths=dg1
  14. Confirm that the application resource group is correctly configured by bringing it online and taking it offline again.


    phys-newyork-1# scswitch -Z -g apprg1
    phys-newyork-1# scswitch -F -g apprg1
    
  15. Unmount the file system.


    phys-newyork-1# umount /mounts/sample
    
  16. Take the Sun Cluster device group offline.


    phys-newyork-1# scswitch -F -D dg1
    
  17. Verify that the VERITAS Volume Manager disk group was deported.


    phys-newyork-1# vxdg list
    
  18. Reestablish the EMC Symmetrix Remote Data Facility pair.


    phys-newyork-1# symrdf -g devgroup1 -noprompt establish
    

    Initial configuration on the secondary cluster is now complete.

How to Create a Copy of the Volume Manager Configuration

This task copies the volume manager configuration from the primary cluster, cluster-paris, to LUNs on the secondary cluster, cluster-newyork, by using VERITAS Volume Manager commands such as vxdiskadm and vxassist.


Note –

The device group, devgroup1, must be in the Split state throughout this procedure.


  1. Confirm that the pair is in the Split state.


    phys-newyork-1# symrdf -g devgroup1 query
    
            Source (R1) View                 Target (R2) View     MODES           
    --------------------------------    ------------------------ ----- ------------
                 ST                  LI      ST                                    
    Standard      A                   N       A                                   
    Logical       T  R1 Inv   R2 Inv  K       T  R1 Inv   R2 Inv       RDF Pair    
    Device  Dev   E  Tracks   Tracks  S Dev   E  Tracks   Tracks MDA   STATE       
    -------------------------------- -- ------------------------ ----- ------------
    
    DEV001  00EC RW       0        0 NR 00EC RW       0        0 S..   Split       
    DEV002  00ED RW       0        0 NR 00ED RW       0        0 S..   Split      
  2. Import the VERITAS Volume Manager disk group.


    phys-newyork-1# vxdg -C import dg1
    
  3. Verify that the VERITAS Volume Manager disk group was successfully imported.


    phys-newyork-1# vxdg list
    
  4. Enable the VERITAS Volume Manager volume.


    phys-newyork-1# /usr/sbin/vxrecover -g dg1 -s -b
    
  5. Verify that the VERITAS Volume Manager volumes are recognized and enabled.


    phys-newyork-1# vxprint
    
  6. Register the VERITAS Volume Manager disk group, dg1, in Sun Cluster software.


    phys-newyork-1# scconf -a -D type=vxvm,name=dg1,\
    nodelist=phys-newyork-1:phys-newyork-2
    
  7. Create a VERITAS Volume Manager volume.

  8. Synchronize the VERITAS Volume Manager information with the Sun Cluster device group and verify the output.


    phys-newyork-1# scconf -c -D name=dg1,sync
    phys-newyork-1# scstat -D
    
  9. Create a mount directory on phys-newyork-1.


    phys-newyork-1# mkdir -p /mounts/sample
    
  10. Create an application resource group, apprg1, by using the scrgadm command.


    phys-newyork-1# scrgadm -a -g apprg1
    
  11. Create the HAStoragePlus resource in apprg1.


    phys-newyork-1# scrgadm -a -j rs-hasp -g apprg1 -t SUNW.HAStoragePlus \
    -x FilesystemMountPoints=/mounts/sample -x AffinityOn=TRUE \
    -x GlobalDevicePaths=dg1
    
  12. If necessary, confirm that the application resource group is correctly configured by bringing it online and taking it offline again.


    phys-newyork-1# scswitch -z -g apprg1 -h phys-newyork-1
    phys-newyork-1# scswitch -F -g apprg1
    
  13. Unmount the file system.


    phys-newyork-1# umount /mounts/sample
    
  14. Take the Sun Cluster device group offline.


    phys-newyork-1# scswitch -F -D dg1
    
  15. Verify that the VERITAS Volume Manager disk group was deported.


    phys-newyork-1# vxdg list