Sun Cluster Geographic Edition Data Replication Guide for EMC Symmetrix Remote Data Facility

Configuring Data Replication With EMC Symmetrix Remote Data Facility Software on the Primary Cluster

This section describes the steps you must perform on the primary cluster before you can configure EMC Symmetrix Remote Data Facility data replication with Sun Cluster Geographic Edition software.

Setting Up EMC Symmetrix Remote Data Facility Device Groups

EMC Symmetrix Remote Data Facility devices are configured in pairs. The mirroring relationship between the pairs becomes operational as soon as the EMC Symmetrix Remote Data Facility links are online. If dynamic SRDF is available, you can change the relationship between the R1 and R2 volumes in your device pairings on the fly, without requiring a BIN file configuration change.

The EMC Symmetrix database file on each host stores configuration information about the EMC Symmetrix units attached to the host. The EMC Symmetrix global memory stores information about the pair state of operating EMC SRDF devices.

EMC SRDF device groups are the entities that you add to Sun Cluster Geographic Edition protection groups to enable the Sun Cluster Geographic Edition software to manage EMC Symmetrix pairs.

The EMC Symmetrix Remote Data Facility device group can hold one of two types of devices:

  * RDF1 source devices, which operate as the primary (R1) side of the replication pair

  * RDF2 target devices, which operate as the secondary (R2) side of the replication pair

As a result, you can create two types of EMC Symmetrix Remote Data Facility device group, RDF1 and RDF2. An EMC Symmetrix Remote Data Facility device can be moved to another device group only if the source and destination groups are of the same group type.

You can create RDF1 device groups on a host attached to the EMC Symmetrix software that contains the RDF1 devices. You can create RDF2 device groups on a host attached to the EMC Symmetrix software that contains the RDF2 devices. You can perform the same EMC Symmetrix Remote Data Facility operations from the primary or secondary cluster, using the device group that was built on that side.

When you add remote data facility devices to a device group, all of the devices must be of the same RDF type, RDF1 or RDF2, as the device group itself.

Checking the Configuration of EMC Symmetrix Remote Data Facility Devices

Before adding EMC Symmetrix Remote Data Facility devices to a device group, use the symrdf list command to list the EMC Symmetrix devices configured on the EMC Symmetrix units attached to your host.


# symrdf list

By default, the command displays devices by their EMC Symmetrix device name, a hexadecimal number that the EMC Symmetrix software assigns to each physical device. To display devices by their physical host name, use the pd argument with the symrdf command.


# symrdf list pd

Creating an RDF1 Device Group

The following steps create a device group of type RDF1 and add an RDF1 EMC Symmetrix device to the group.

  1. Create a device group named devgroup1.


    phys-paris-1# symdg create devgroup1 -type rdf1
    
  2. Add an RDF1 device, with the EMC Symmetrix device name of 085, to the device group on the EMC Symmetrix storage unit identified by the number 000000003264.

    A default logical name of the form DEV001 is assigned to the RDF1 device.


    phys-paris-1# symld -g devgroup1 -sid 3264 add dev 085
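After the device is added, you can optionally confirm the group's membership and the pair state of its devices. The following is a sketch, assuming EMC Solutions Enabler SYMCLI is installed on the host; devgroup1 is the device group created above.

```shell
# Display the devices that belong to devgroup1,
# including their logical names (DEV001, ...)
phys-paris-1# symdg show devgroup1

# Query the replication pair state of all devices in the group
phys-paris-1# symrdf -g devgroup1 query
```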
    

How to Set Up Raw-Disk Device Groups for Sun Cluster Geographic Edition Systems

Sun Cluster Geographic Edition supports the use of raw-disk device groups in addition to various volume managers. When you initially configure Sun Cluster, device groups are automatically configured for each raw device in the cluster. Use this procedure to reconfigure these automatically created device groups for use with Sun Cluster Geographic Edition.

  1. For the devices that you want to use, unconfigure the predefined device groups.

    The following commands remove the predefined device groups for d7 and d8.


    phys-paris-1# cldevicegroup disable dsk/d7 dsk/d8
    phys-paris-1# cldevicegroup offline dsk/d7 dsk/d8
    phys-paris-1# cldevicegroup delete dsk/d7 dsk/d8
    
  2. Create the new raw-disk device group, including the desired devices.

Ensure that the name of the new device group does not contain any slashes. The following command creates a global device group, rawdg, which contains d7 and d8.


    phys-paris-1# cldevicegroup create -n phys-paris-1,phys-paris-2 \
    -t rawdisk -d d7,d8 rawdg
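Once the group is created, a quick status check can confirm that it exists and is mastered by the expected node. This is a sketch using standard Sun Cluster commands; rawdg is the device group created above.

```shell
# Verify that the new raw-disk device group is online
phys-paris-1# cldevicegroup status rawdg

# Show the full configuration, including the node list and device list
phys-paris-1# cldevicegroup show rawdg
```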
    

Example 1–1 Configuring a Raw-Disk Device Group

This example illustrates configuring the device group on the primary cluster, configuring the same device group on the partner cluster, and adding the group to an EMC Symmetrix protection group.


Remove the automatically created device groups from the primary cluster.
phys-paris-1# cldevicegroup disable dsk/d7 dsk/d8 
phys-paris-1# cldevicegroup offline dsk/d7 dsk/d8
phys-paris-1# cldevicegroup delete dsk/d7 dsk/d8

Create the raw-disk device group on the primary cluster.
phys-paris-1# cldevicegroup create -n phys-paris-1,phys-paris-2 \ 
-t rawdisk -d d7,d8 rawdg

Remove the automatically created device groups from the partner cluster.
phys-newyork-1# cldevicegroup disable dsk/d5 dsk/d6
phys-newyork-1# cldevicegroup offline dsk/d5 dsk/d6
phys-newyork-1# cldevicegroup delete dsk/d5 dsk/d6

Create the raw-disk device group on the partner cluster.
phys-newyork-1# cldevicegroup create -n phys-newyork-1,phys-newyork-2 \
-t rawdisk -d d5,d6 rawdg

Add the raw-disk device group to the protection group rawpg.
phys-paris-1# geopg create -d srdf -p Nodelist=phys-paris-1,phys-paris-2 \
-o Primary -p cluster_dgs=rawdg -s paris-newyork-ps rawpg

Next Steps

When configuring the partner cluster, create a raw-disk device group of the same name as the one you created here. See How to Replicate the Configuration Information From the Primary Cluster, When Using Raw-Disk Device Groups for the instructions about this task.

After you have configured the device group on both clusters, you can use the device group name wherever one is required in Sun Cluster Geographic Edition commands such as geopg.

How to Configure VERITAS Volume Manager Volumes for Use With EMC Symmetrix Remote Data Facility Replication

EMC Symmetrix Remote Data Facility data replication is supported with VERITAS Volume Manager volumes and raw-disk device groups. If you are using VERITAS Volume Manager, you must configure VERITAS Volume Manager volumes on the disks you selected for your EMC Symmetrix Remote Data Facility device group.

  1. On cluster-paris, create VERITAS Volume Manager disk groups on shared disks that will be replicated to the partner cluster cluster-newyork.

    For example, the d1 and d2 disks are configured as part of a VERITAS Volume Manager disk group called dg1, by using commands such as vxdiskadm and vxdg. These are the disks that will be replicated to the partner cluster.

  2. After configuration is complete, verify that the disk group was created by using the vxdg list command.

    This command should list dg1 as a disk group.

  3. Create the VERITAS Volume Manager volume.

    For example, a volume that is called vol1 is created in the dg1 disk group. The appropriate VERITAS Volume Manager commands, such as vxassist, are used to configure the volume.
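The steps above can be sketched as a single command sequence. The disk access names (c1t1d0s2, c1t2d0s2) and the 1-gigabyte volume size are placeholders; substitute the disks you selected for your EMC Symmetrix Remote Data Facility device group.

```shell
# Initialize the dg1 disk group with two disks (placeholder access names)
phys-paris-1# vxdg init dg1 dg101=c1t1d0s2 dg102=c1t2d0s2

# Confirm that the disk group was created
phys-paris-1# vxdg list

# Create the vol1 volume in the dg1 disk group (placeholder size)
phys-paris-1# vxassist -g dg1 make vol1 1g
```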

Next Steps

Perform the steps in How to Configure the Sun Cluster Device Group for a VERITAS Volume Manager Disk Group to configure the VERITAS Volume Manager volume as a Sun Cluster device group.

How to Configure the Sun Cluster Device Group for a VERITAS Volume Manager Disk Group

  1. Register the VERITAS Volume Manager disk group that you configured in the previous procedure with Sun Cluster.

    Use the Sun Cluster commands clsetup or cldevice and cldevicegroup.

    For more information about these commands, refer to the clsetup(1CL) man page or the cldevice(1CL) and cldevicegroup(1CL) man pages.

  2. Synchronize the VERITAS Volume Manager configuration with Sun Cluster software, again by using the clsetup or cldevice and cldevicegroup commands.

  3. After configuration is complete, verify the disk group registration.


    phys-paris-1# cldevicegroup show devicegroupname
    

    The VERITAS Volume Manager disk group, dg1, should be displayed in the output.

    For more information about the cldevicegroup command, see the cldevicegroup(1CL) man page.
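The registration and synchronization steps above can also be performed noninteractively at the command line. This is a sketch; the node names and the dg1 disk group match the earlier examples, but verify the options against the cldevicegroup(1CL) man page for your release.

```shell
# Register the VERITAS Volume Manager disk group dg1
# as a Sun Cluster device group
phys-paris-1# cldevicegroup create -t vxvm -n phys-paris-1,phys-paris-2 dg1

# Synchronize the device group after any VxVM configuration change
phys-paris-1# cldevicegroup sync dg1

# Verify the registration
phys-paris-1# cldevicegroup show dg1
```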

How to Configure a Highly Available File System for EMC Symmetrix Remote Data Facility Replication

Before You Begin

Before you configure the file system on cluster-paris, ensure that the Sun Cluster entities you require, such as application resource groups, device groups, and volumes, have already been configured.

  1. Create the required file system on the vol1 volume at the command line.

  2. On each node in the cluster, create mount points for the file system you just created.


    # mkdir -p /mounts/sample
    
    where /mounts/sample is your mount point.

  3. Add an entry to the /etc/vfstab file that contains information such as the mount location.

    Whether the file system is to be mounted locally or globally depends on various factors, such as your performance requirements, or the type of application resource group you are using.


    Note –

    You must set the mount at boot field in this file to no. This value prevents the file system from mounting on the secondary cluster at cluster startup. Instead, the Sun Cluster software and the Sun Cluster Geographic Edition framework handle mounting the file system by using the HAStoragePlus resource when the application is brought online on the primary cluster.


  4. Add the HAStoragePlus resource to the application resource group, apprg1.

    Adding the resource to the application resource group ensures that the necessary file systems are mounted before the application is brought online.

    For more information about the HAStoragePlus resource type, refer to the Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

  5. Verify that the device group was registered properly.

    The following command should display the device group dg1.


    phys-paris-1# cldevicegroup show dg1
    

Example 1–2 Configuring a Highly Available Cluster File System

This example creates a locally mounted file system with HAStoragePlus. The file system created in this example is mounted locally every time the resource is brought online.

This example assumes that the following already exist:

  * The application resource group apprg1

  * The VERITAS Volume Manager disk group dg1, containing the volume vol1

  1. Create a UNIX file system (UFS).


    phys-paris-1# newfs /dev/vx/rdsk/dg1/vol1
    
  2. On each node in the cluster, create mount points for the file system.


    phys-paris-1# mkdir -p /mounts/sample
    phys-paris-2# mkdir -p /mounts/sample
    
  3. Add the following entry to the /etc/vfstab file:


    /dev/vx/dsk/dg1/vol1 /dev/vx/rdsk/dg1/vol1 /mounts/sample \
    ufs 2 no logging
    
  4. Add the HAStoragePlus resource type.


    phys-paris-1# clresource create -g apprg1 -t SUNW.HAStoragePlus \
    -p FilesystemMountPoints=/mounts/sample -p AffinityOn=TRUE \
    -p GlobalDevicePaths=dg1 rs-hasp