Sun Cluster Geographic Edition Data Replication Guide for Hitachi TrueCopy

Configuring Data Replication With Hitachi TrueCopy Software on the Primary Cluster

This section describes the steps you must perform on the primary cluster before you can configure Hitachi TrueCopy data replication in Sun Cluster Geographic Edition software. To illustrate each step, this section uses an example of two disks, or LUNs, that are called d1 and d2. These disks are in a Hitachi TrueCopy array that holds data for an application that is called apprg1.

Configuring the /etc/horcm.conf File

Configure Hitachi TrueCopy device groups on shared disks in the primary cluster by editing the /etc/horcm.conf file on each node of the cluster that has access to the Hitachi array. Disks d1 and d2 are configured to belong to a Hitachi TrueCopy device group, devgroup1. The application, apprg1, can run on all nodes that have Hitachi TrueCopy device groups configured.

For more information about how to configure the /etc/horcm.conf file, see the Sun StorEdge SE 9900 V Series Command and Control Interface User and Reference Guide.

The following table describes the configuration information from our example that is found in the /etc/horcm.conf file.

Table 1–2 Example Section of the /etc/horcm.conf File on the Primary Cluster

dev_group    dev_name    port number    TargetID    LU number    MU number
devgroup1    pair1       CL1–A          0           1
devgroup1    pair2       CL1–A          0           2

The configuration information in the table indicates that the Hitachi TrueCopy device group, devgroup1, contains two pairs. The first pair, pair1, is from the d1 disk and is identified by the tuple <CL1–A, 0, 1>. The second pair, pair2, is from the d2 disk and is identified by the tuple <CL1–A, 0, 2>. The replicas of disks d1 and d2 are located in a geographically separated Hitachi TrueCopy array. The remote Hitachi TrueCopy array is connected to the partner cluster.
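
Based on the example values in Table 1–2, the HORCM_DEV section of the /etc/horcm.conf file on each node might look similar to the following sketch. The port, target ID, and LU numbers are the example values used throughout this section; substitute the values that apply to your Hitachi array.

HORCM_DEV
#dev_group      dev_name        port#           TargetID        LU#     MU#
devgroup1       pair1           CL1-A           0               1
devgroup1       pair2           CL1-A           0               2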

How to Set Up Raw-Disk Device Groups for Sun Cluster Geographic Edition Systems

Sun Cluster Geographic Edition supports the use of raw-disk device groups in addition to various volume managers. When you initially configure Sun Cluster, device groups are automatically configured for each raw device in the cluster. Use this procedure to reconfigure these automatically created device groups for use with Sun Cluster Geographic Edition.

  1. For the devices that you want to use, unconfigure the predefined device groups.

    The following commands remove the predefined device groups for d7 and d8.


    phys-paris-1# cldevicegroup disable dsk/d7 dsk/d8
    phys-paris-1# cldevicegroup offline dsk/d7 dsk/d8
    phys-paris-1# cldevicegroup delete dsk/d7 dsk/d8
    
  2. Create the new raw-disk device group, including the desired devices.

    Ensure that the new DID does not contain any slashes. The following command creates a global device group rawdg containing d7 and d8.


    phys-paris-1# cldevicegroup create -n phys-paris-1,phys-paris-2 \
    -t rawdisk -d d7,d8 rawdg
    

Example 1–1 Configuring a Raw-Disk Device Group

The following commands illustrate configuring the device group on the primary cluster, configuring the same device group on the partner cluster, and adding the group to a Hitachi TrueCopy protection group.


Remove the automatically created device groups from the primary cluster.
phys-paris-1# cldevicegroup disable dsk/d7 dsk/d8
phys-paris-1# cldevicegroup offline dsk/d7 dsk/d8
phys-paris-1# cldevicegroup delete dsk/d7 dsk/d8

Create the raw-disk device group on the primary cluster.
phys-paris-1# cldevicegroup create -n phys-paris-1,phys-paris-2 \
-t rawdisk -d d7,d8 rawdg

Remove the automatically created device groups from the partner cluster.
phys-newyork-1# cldevicegroup disable dsk/d5 dsk/d6
phys-newyork-1# cldevicegroup offline dsk/d5 dsk/d6
phys-newyork-1# cldevicegroup delete dsk/d5 dsk/d6

Create the raw-disk device group on the partner cluster.
phys-newyork-1# cldevicegroup create -n phys-newyork-1,phys-newyork-2 \
-t rawdisk -d d5,d6 rawdg

Add the raw-disk device group to the protection group rawpg.
phys-paris-1# geopg create -d truecopy -p Nodelist=phys-paris-1,phys-paris-2 \
-o Primary -p cluster_dgs=rawdg -s paris-newyork-ps rawpg

Next Steps

When configuring the partner cluster, create a raw-disk device group with the same name as the one you created here. See How to Replicate the Configuration Information From the Primary Cluster When Using Raw-Disk Device Groups for instructions.

Once you have configured the device group on both clusters, you can use the device group name wherever one is required in Sun Cluster Geographic Edition commands such as geopg.

How to Configure the VERITAS Volume Manager Volumes for Use With Hitachi TrueCopy Replication

Hitachi TrueCopy supports VERITAS Volume Manager volumes and raw-disk device groups. If you are using VERITAS Volume Manager, you must configure VERITAS Volume Manager volumes on disks d1 and d2.


Caution –

If you are using storage-based replication, do not configure a replicated volume as a quorum device. The Sun Cluster Geographic Edition software does not support Hitachi TrueCopy S-VOL and Command Device as a Sun Cluster quorum device. See Using Storage-Based Data Replication in Sun Cluster System Administration Guide for Solaris OS for more information.


  1. Create VERITAS Volume Manager disk groups on shared disks in cluster-paris.

    For example, the d1 and d2 disks are configured as part of a VERITAS Volume Manager disk group that is called oradg1 by using commands such as vxdiskadm and vxdg.

  2. After configuration is complete, verify that the disk group was created by using the vxdg list command.

    This command should list oradg1 as a disk group.

  3. Create the VERITAS Volume Manager volume.

    For example, a volume that is called vol1 is created in the oradg1 disk group. The appropriate VERITAS Volume Manager commands, such as vxassist, are used to configure the volume, as shown in the sketch that follows these steps.
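
The following is a minimal command sketch for steps 1 through 3. The device names c1t1d0 and c1t2d0 and the 1-Gbyte volume size are illustrative assumptions; substitute the device names and size that correspond to disks d1 and d2 in your configuration, and ensure that the disks have already been initialized for VERITAS Volume Manager use, for example with vxdiskadm.

Initialize the oradg1 disk group (step 1).
phys-paris-1# vxdg init oradg1 oradg101=c1t1d0 oradg102=c1t2d0

Verify that the disk group was created (step 2).
phys-paris-1# vxdg list

Create the vol1 volume (step 3).
phys-paris-1# vxassist -g oradg1 make vol1 1g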

Next Steps

To complete your configuration, proceed to How to Configure the Sun Cluster Device Group That Is Controlled by Hitachi TrueCopy Software to create the Sun Cluster device group for this disk group.

How to Configure the Sun Cluster Device Group That Is Controlled by Hitachi TrueCopy Software

Before You Begin

If you are using storage-based replication, do not configure a replicated volume as a quorum device. The Sun Cluster Geographic Edition software does not support Hitachi TrueCopy S-VOL and Command Device as a Sun Cluster quorum device. See Using Storage-Based Data Replication in Sun Cluster System Administration Guide for Solaris OS for more information.

  1. Register the VERITAS Volume Manager disk group that you configured in the previous procedure.

    Use the Sun Cluster command cldevicegroup. A command sketch for this procedure follows step 4.

    For more information about this command, refer to the cldevicegroup(1CL) man page.

  2. Create a mount directory on each node of the cluster.


    phys-newyork-1# mkdir -p /mounts/sample
    phys-newyork-2# mkdir -p /mounts/sample
    
  3. Synchronize the VERITAS Volume Manager configuration with Sun Cluster software, again by using the cldevicegroup command.

  4. After configuration is complete, verify the disk group registration.


    # cldevicegroup status
    

    The VERITAS Volume Manager disk group, oradg1, should be displayed in the output.

    For more information about the cldevicegroup command, see the cldevicegroup(1CL) man page.
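
The following is a minimal command sketch for steps 1, 3, and 4 of this procedure. The -t vxvm form of the create subcommand and the node list are assumptions; depending on your Sun Cluster release, registration of a VERITAS Volume Manager disk group might instead be performed through the clsetup utility. See the cldevicegroup(1CL) man page for the syntax that your release supports.

Register the oradg1 disk group that you created in the previous procedure (step 1).
phys-paris-1# cldevicegroup create -n phys-paris-1,phys-paris-2 \
-t vxvm oradg1

Synchronize the VERITAS Volume Manager configuration with Sun Cluster software (step 3).
phys-paris-1# cldevicegroup sync oradg1

Verify the disk group registration (step 4).
phys-paris-1# cldevicegroup status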

How to Configure a Highly Available File System for Hitachi TrueCopy Replication

Before You Begin

Before you configure the file system on cluster-paris, ensure that the Sun Cluster entities you require, such as application resource groups, device groups, and mount points, have already been configured.

If you are using storage-based replication, do not configure a replicated volume as a quorum device. The Sun Cluster Geographic Edition software does not support Hitachi TrueCopy S-VOL and Command Device as a Sun Cluster quorum device. See Using Storage-Based Data Replication in Sun Cluster System Administration Guide for Solaris OS for more information.

  1. Create the required file system on the vol1 volume at the command line.

  2. Add an entry to the /etc/vfstab file that contains information such as the mount location.

    Whether the file system is to be mounted locally or globally depends on various factors, such as your performance requirements or the type of application resource group you are using.


    Note –

    You must set the mount at boot field in this file to no. This value prevents the file system from mounting on the secondary cluster at cluster startup. Instead, the Sun Cluster software and the Sun Cluster Geographic Edition framework handle mounting the file system by using the HAStoragePlus resource when the application is brought online on the primary cluster. If the file system is mounted on the secondary cluster, the data is not replicated from the primary cluster to the secondary cluster.


  3. Add the HAStoragePlus resource to the application resource group, apprg1.

    Adding the resource to the application resource group ensures that the necessary file systems are remounted before the application is brought online.

    For more information about the HAStoragePlus resource type, refer to the Sun Cluster Data Services Planning and Administration Guide for Solaris OS.


Example 1–2 Configuring a Highly Available Cluster Global File System

This example assumes that the apprg1 resource group already exists.

  1. Create a UNIX file system (UFS).


    phys-paris-1# newfs /dev/vx/rdsk/oradg1/vol1
    

    The following entry is created in the /etc/vfstab file:


    /dev/vx/dsk/oradg1/vol1 /dev/vx/rdsk/oradg1/vol1 /mounts/sample \
    ufs 2 no logging
  2. Add the HAStoragePlus resource type.


    phys-paris-1# clresource create -g apprg1 -t SUNW.HAStoragePlus \
    -p FilesystemMountPoints=/mounts/sample -p AffinityOn=TRUE \
    -p GlobalDevicePaths=oradg1 rs-has