Sun Cluster Geographic Edition Data Replication Guide for Hitachi TrueCopy

Chapter 1 Replicating Data With Hitachi TrueCopy Software

During data replication, data from a primary cluster is copied to a backup or secondary cluster. The secondary cluster can be located at a site that is geographically separated from the primary cluster. The supported distance between the two sites depends on your data replication product.

The Sun Cluster Geographic Edition software supports the use of Hitachi TrueCopy software for data replication. Before you start replicating data with Hitachi TrueCopy software, you must be familiar with the Hitachi TrueCopy documentation, have the Hitachi TrueCopy product, and have the latest Hitachi TrueCopy patches installed on your system. For information about installing the Hitachi TrueCopy software, see the Hitachi TrueCopy product documentation.

This chapter contains the procedures for configuring and administering data replication with Hitachi TrueCopy software.

For information about creating and deleting data replication device groups, see Administering Hitachi TrueCopy Data Replication Device Groups. For information about obtaining a global and a detailed runtime status of replication, see Checking the Runtime Status of Hitachi TrueCopy Data Replication.

Administering Data Replication in a Hitachi TrueCopy Protection Group

This section summarizes the steps for configuring Hitachi TrueCopy data replication in a protection group.

Table 1–1 Administration Tasks for Hitachi TrueCopy Data Replication

Task: Perform an initial configuration of the Hitachi TrueCopy software.
    See Initial Configuration of Hitachi TrueCopy Software.

Task: Create a protection group that is configured for Hitachi TrueCopy data replication.
    See How to Create and Configure a Hitachi TrueCopy Protection Group That Does Not Use Oracle Real Application Clusters.

Task: Add a device group that is controlled by Hitachi TrueCopy.
    See How to Add a Data Replication Device Group to a Hitachi TrueCopy Protection Group.

Task: Add an application resource group to the protection group.
    See How to Add an Application Resource Group to a Hitachi TrueCopy Protection Group.

Task: Replicate the protection group configuration to a secondary cluster.
    See How to Replicate the Hitachi TrueCopy Protection Group Configuration to a Secondary Cluster.

Task: Test the configured partnership and protection groups to validate the setup.
    Perform a trial switchover or takeover and test some simple failure scenarios. See Chapter 3, Migrating Services That Use Hitachi TrueCopy Data Replication.

Task: Activate the protection group.
    See How to Activate a Hitachi TrueCopy Protection Group.

Task: Check the runtime status of replication.
    See Checking the Runtime Status of Hitachi TrueCopy Data Replication.

Task: Detect failure.
    See Detecting Cluster Failure on a System That Uses Hitachi TrueCopy Data Replication.

Task: Migrate services by using a switchover.
    See Migrating Services That Use Hitachi TrueCopy Data Replication With a Switchover.

Task: Migrate services by using a takeover.
    See Forcing a Takeover on a System That Uses Hitachi TrueCopy Data Replication.

Task: Recover data after forcing a takeover.
    See Recovering Services to a Cluster on a System That Uses Hitachi TrueCopy Replication.

Task: Detect and recover from a data replication error.
    See Recovering From a Hitachi TrueCopy Data Replication Error.

Initial Configuration of Hitachi TrueCopy Software

This section describes how to configure Hitachi TrueCopy software on the primary and secondary clusters. It also includes information about the preconditions for creating Hitachi TrueCopy protection groups.

Initial configuration of the primary and secondary clusters is described in the rest of this section. Before you begin, note the following requirements:

If you use the Hitachi TrueCopy Command Control Interface (CCI) for data replication, you must use RAID Manager. For information about which version you should use, see the Sun Cluster Geographic Edition Installation Guide.


Note –

This model requires specific hardware configurations with Sun StorEdge 9970/9980 Array or Hitachi Lightning 9900 Series Storage. Contact your Sun service representative for information about Sun Cluster configurations that are currently supported.


Sun Cluster Geographic Edition software supports the hardware configurations that are supported by the Sun Cluster software. Contact your Sun service representative for information about current supported Sun Cluster configurations.


Caution –

If you are using storage-based replication, do not configure a replicated volume as a quorum device. The Sun Cluster Geographic Edition software does not support Hitachi TrueCopy S-VOL and Command Device as a Sun Cluster quorum device. See Using Storage-Based Data Replication in Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS for more information.


Configuring Data Replication With Hitachi TrueCopy Software on the Primary Cluster

This section describes the steps you must perform on the primary cluster before you can configure Hitachi TrueCopy data replication in Sun Cluster Geographic Edition software. To illustrate each step, this section uses an example of two disks, or LUNs, that are called d1 and d2. These disks are in a Hitachi TrueCopy array that holds data for an application that is called apprg1.

Configuring the /etc/horcm.conf File

Configure Hitachi TrueCopy device groups on shared disks in the primary cluster by editing the /etc/horcm.conf file on each node of the cluster that has access to the Hitachi array. Disks d1 and d2 are configured to belong to a Hitachi TrueCopy device group, devgroup1. The application, apprg1, can run on all nodes that have Hitachi TrueCopy device groups configured.

For more information about how to configure the /etc/horcm.conf file, see the Sun StorEdge SE 9900 V Series Command and Control Interface User and Reference Guide.

The following table describes the configuration information from our example that is found in the /etc/horcm.conf file.

Table 1–2 Example Section of the /etc/horcm.conf File on the Primary Cluster

dev_group     dev_name     port number     TargetID     LU number     MU number
devgroup1     pair1        CL1-A           0            1
devgroup1     pair2        CL1-A           0            2

The configuration information in the table indicates that the Hitachi TrueCopy device group, devgroup1, contains two pairs. The first pair, pair1, is created from the d1 disk, which is identified by the tuple <CL1-A, 0, 1>. The second pair, pair2, is created from the d2 disk, which is identified by the tuple <CL1-A, 0, 2>. The replicas of disks d1 and d2 are located in a geographically separated Hitachi TrueCopy array. The remote Hitachi TrueCopy array is connected to the partner cluster.
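
For reference, the corresponding device group entries in the /etc/horcm.conf file on a cluster-paris node might look like the following sketch. Only the HORCM_DEV values come from Table 1–2; the HORCM_INST entries, the remote host name phys-newyork-1, and the horcm service name are assumptions for illustration, so adjust them to match your configuration.

HORCM_DEV
#dev_group     dev_name     port#     TargetID     LU#     MU#
devgroup1      pair1        CL1-A     0            1
devgroup1      pair2        CL1-A     0            2

HORCM_INST
#dev_group     ip_address          service
devgroup1      phys-newyork-1      horcm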

How to Configure the Volumes for Use With Hitachi TrueCopy Replication

Hitachi TrueCopy supports VERITAS Volume Manager volumes. You must configure VERITAS Volume Manager volumes on disks d1 and d2.


Caution –

If you are using storage-based replication, do not configure a replicated volume as a quorum device. The Sun Cluster Geographic Edition software does not support Hitachi TrueCopy S-VOL and Command Device as a Sun Cluster quorum device. See Using Storage-Based Data Replication in Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS for more information.


  1. Create VERITAS Volume Manager disk groups on shared disks in cluster-paris.

    For example, the d1 and d2 disks are configured as part of a VERITAS Volume Manager disk group that is called oradg1 by using commands such as vxdiskadm and vxdg.

  2. After configuration is complete, verify that the disk group was created by using the vxdg list command.

    This command should list oradg1 as a disk group.

  3. Create the VERITAS Volume Manager volume.

    For example, a volume that is called vol1 is created in the oradg1 disk group. The appropriate VERITAS Volume Manager commands, such as vxassist, are used to configure the volume. A hypothetical command sequence for these steps is sketched after this procedure.
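
The following is a minimal sketch of this procedure. The disk access names c1t1d0 and c1t2d0, the disk media names oradg101 and oradg102, and the 1-Gbyte volume size are hypothetical values, and the sketch assumes that the disks have already been initialized for VERITAS Volume Manager use.

phys-paris-1# vxdg init oradg1 oradg101=c1t1d0 oradg102=c1t2d0
phys-paris-1# vxdg list
phys-paris-1# vxassist -g oradg1 make vol1 1g
phys-paris-1# vxprint -g oradg1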

How to Configure the Sun Cluster Device Group That Is Controlled by Hitachi TrueCopy Software

Before You Begin

If you are using storage-based replication, do not configure a replicated volume as a quorum device. The Sun Cluster Geographic Edition software does not support Hitachi TrueCopy S-VOL and Command Device as a Sun Cluster quorum device. See Using Storage-Based Data Replication in Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS for more information.

  1. Register the VERITAS Volume Manager disk group that you configured in the previous procedure.

    Use the Sun Cluster commands, scsetup or scconf.

    For more information about these commands, refer to the scsetup(1M) or the scconf(1M) man page.

  2. Synchronize the VERITAS Volume Manager configuration with the Sun Cluster software, again by using the scsetup or scconf command. Example commands are sketched after this procedure.

  3. After configuration is complete, verify the disk group registration.


    # scstat -D

    The VERITAS Volume Manager disk group, oradg1, should be displayed in the output.

    For more information about the scstat command, see the scstat(1M) man page.
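
The following is a minimal sketch of the equivalent scconf commands for registering and synchronizing the oradg1 disk group. The node list assumes that the cluster-paris nodes are named phys-paris-1 and phys-paris-2.

phys-paris-1# scconf -a -D type=vxvm,name=oradg1,nodelist=phys-paris-1:phys-paris-2
phys-paris-1# scconf -c -D name=oradg1,sync
phys-paris-1# scstat -D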

How to Configure a Highly Available File System for Hitachi TrueCopy Replication

Before You Begin

Before you configure the file system on cluster-paris, ensure that the Sun Cluster entities you require, such as application resource groups, device groups, and mount points, have already been configured.

If you are using storage-based replication, do not configure a replicated volume as a quorum device. The Sun Cluster Geographic Edition software does not support Hitachi TrueCopy S-VOL and Command Device as a Sun Cluster quorum device. See Using Storage-Based Data Replication in Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS for more information.

  1. Create the required file system on the vol1 volume at the command line.

  2. Add an entry to the /etc/vfstab file that contains information such as the mount location.

    Whether the file system is to be mounted locally or globally depends on various factors, such as your performance requirements, or the type of application resource group you are using.


    Note –

    You must set the mount at boot field in this file to no. This value prevents the file system from being mounted on the secondary cluster at cluster startup. Instead, the Sun Cluster software and the Sun Cluster Geographic Edition framework handle mounting the file system by using the HAStoragePlus resource when the application is brought online on the primary cluster. The file system must not be mounted on the secondary cluster; otherwise, the data will not be replicated from the primary cluster to the secondary cluster.


  3. Add the HAStoragePlus resource to the application resource group, apprg1.

    Adding the resource to the application resource group ensures that the necessary file systems are remounted before the application is brought online.

    For more information about the HAStoragePlus resource type, refer to the Sun Cluster Data Services Planning and Administration Guide for Solaris OS.


Example 1–1 Configuring a Highly Available Cluster Global File System

This example assumes that the apprg1 resource group already exists.

  1. Create a UNIX file system (UFS).


    phys-paris-1# newfs /dev/vx/rdsk/oradg1/vol1

    The following entry is created in the /etc/vfstab file:


    /dev/vx/dsk/oradg1/vol1 /dev/vx/rdsk/oradg1/vol1 /mounts/sample \
    ufs 2 no logging
  2. Add the HAStoragePlus resource type.


    phys-paris-1# scrgadm -a -j rs-hasp -g apprg1 -t SUNW.HAStoragePlus \
    -x FilesystemMountPoints=/mounts/sample -x AffinityOn=TRUE \
    -x GlobalDevicePaths=oradg1

Configuring Data Replication With Hitachi TrueCopy Software on the Secondary Cluster

This section describes the steps you must complete on the secondary cluster before you can configure Hitachi TrueCopy data replication in Sun Cluster Geographic Edition software.

Configuring the /etc/horcm.conf File

You must configure the Hitachi TrueCopy device group on shared disks in the secondary cluster as you did on the primary cluster by editing the /etc/horcm.conf file on each node of the cluster that has access to the Hitachi array. Disks d1 and d2 are configured to belong to a Hitachi TrueCopy device group that is called devgroup1. The application, apprg1, can run on all nodes that have Hitachi TrueCopy device groups configured.

For more information about how to configure the /etc/horcm.conf file, see the Sun StorEdge SE 9900 V Series Command and Control Interface User and Reference Guide.

The following table describes the configuration information from the example that is found in the /etc/horcm.conf file.

Table 1–3 Example Section of the /etc/horcm.conf File on the Secondary Cluster

dev_group     dev_name     port number     TargetID     LU number     MU number
devgroup1     pair1        CL1-C           0            20
devgroup1     pair2        CL1-C           0            21

The configuration information in the table indicates that the Hitachi TrueCopy device group, devgroup1, contains two pairs. The first pair, pair1, is created from the d1 disk, which is identified by the tuple <CL1-C, 0, 20>. The second pair, pair2, is created from the d2 disk, which is identified by the tuple <CL1-C, 0, 21>.
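
As on the primary cluster, the corresponding device group entries in the /etc/horcm.conf file on a cluster-newyork node might look like the following sketch. Only the HORCM_DEV values come from Table 1–3; the HORCM_INST entries, the remote host name phys-paris-1, and the horcm service name are assumptions for illustration.

HORCM_DEV
#dev_group     dev_name     port#     TargetID     LU#     MU#
devgroup1      pair1        CL1-C     0            20
devgroup1      pair2        CL1-C     0            21

HORCM_INST
#dev_group     ip_address        service
devgroup1      phys-paris-1      horcm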

After you have configured the /etc/horcm.conf file on the secondary cluster, you can see the status of the pairs by using the pairdisplay command as follows:


phys-paris-1# pairdisplay -g devgroup1
Group PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
devgroup1 pair1(L) (CL1-A , 0, 1) 54321 1..  SMPL ----  ------,----- ----  -
devgroup1 pair1(R) (CL1-C , 0, 20)12345 609..SMPL ----  ------,----- ----  -
devgroup1 pair2(L) (CL1-A , 0, 2) 54321 2..  SMPL ----  ------,----- ----  -
devgroup1 pair2(R) (CL1-C , 0, 21)12345 610..SMPL ----  ------,----- ----  -

Configuring the Other Entities on the Secondary Cluster

Next, you need to configure the volume manager, the Sun Cluster device groups, and the highly available cluster global file system. You can configure these entities in two ways: by replicating the volume manager configuration information from the primary cluster, or by creating a copy of the volume manager configuration on the secondary cluster. Each of these methods is described in one of the following procedures.

How to Replicate the Volume Manager Configuration Information From the Primary Cluster

Before You Begin

If you are using storage-based replication, do not configure a replicated volume as a quorum device. The Sun Cluster Geographic Edition software does not support Hitachi TrueCopy S-VOL and Command Device as a Sun Cluster quorum device. See Using Storage-Based Data Replication in Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS for more information.

  1. Start replication for the devgroup1 device group.


    phys-paris-1# paircreate -g devgroup1 -vl -f async
    
    phys-paris-1# pairdisplay -g devgroup1
    Group PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M 
    devgroup1 pair1(L) (CL1-A , 0, 1) 54321   1..P-VOL COPY ASYNC ,12345 609   -
    devgroup1 pair1(R) (CL1-C , 0, 20)12345 609..S-VOL COPY ASYNC ,-----   1   -
    devgroup1 pair2(L) (CL1-A , 0, 2) 54321   2..P-VOL COPY ASYNC ,12345 610   -
    devgroup1 pair2(R) (CL1-C , 0, 21)12345 610..S-VOL COPY ASYNC ,-----   2   -
  2. Wait for the state of the pair to become PAIR on the secondary cluster.


    phys-newyork-1# pairdisplay -g devgroup1
    Group PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
    devgroup1 pair1(L) (CL1-C , 0, 20)12345 609..S-VOL PAIR ASYNC,-----, 1     - 
    devgroup1 pair1(R) (CL1-A , 0, 1) 54321   1..P-VOL PAIR ASYNC,12345, 609   - 
    devgroup1 pair2(L) (CL1-C , 0, 21)12345 610..S-VOL PAIR ASYNC,-----, 2     - 
    devgroup1 pair2(R) (CL1-A , 0, 2)54321    2..P-VOL PAIR ASYNC,12345, 610   -
  3. Split the pair by using the pairsplit command with the -rw option, which makes the secondary volumes on cluster-newyork writable.


    phys-newyork-1# pairsplit -g devgroup1 -rw 
    phys-newyork-1# pairdisplay -g devgroup1
    Group PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M 
    devgroup1 pair1(L) (CL1-C , 0, 20)12345 609..S-VOL SSUS ASYNC, -----  1    - 
    devgroup1 pair1(R) (CL1-A , 0, 1) 54321   1..P-VOL PSUS ASYNC,12345  609   W 
    devgroup1 pair2(L) (CL1-C , 0,21) 12345 610..S-VOL SSUS ASYNC,-----   2    - 
    devgroup1 pair2(R) (CL1-A , 0, 2) 54321   2..P-VOL PSUS ASYNC,12345  610   W
  4. Import the VERITAS Volume Manager disk group, oradg1.


    phys-newyork-1# vxdg -C import oradg1
  5. Verify that the VERITAS Volume Manager disk group was successfully imported.


    phys-newyork-1# vxdg list
  6. Enable the VERITAS Volume Manager volume.


    phys-newyork-1# /usr/sbin/vxrecover -g oradg1 -s -b
  7. Verify that the VERITAS Volume Manager volumes are recognized and enabled.


    phys-newyork-1# vxprint
  8. Register the VERITAS Volume Manager disk group, oradg1, in Sun Cluster.


    phys-newyork-1# scconf -a -D \
    type=vxvm,name=oradg1,nodelist=phys-newyork-1:phys-newyork-2
  9. Synchronize the volume manager information with the Sun Cluster device group and verify the output.


    phys-newyork-1# scconf -c -D name=oradg1,sync
    phys-newyork-1# scstat -D
  10. Add an entry to the /etc/vfstab file on phys-newyork-1.


    /dev/vx/dsk/oradg1/vol1 /dev/vx/rdsk/oradg1/vol1 /mounts/sample \
    ufs 2 no logging
  11. Create a mount directory on phys-newyork-1.


    phys-newyork-1# mkdir -p /mounts/sample
  12. Create an application resource group, apprg1, by using the scrgadm command.


    phys-newyork-1# scrgadm -a -g apprg1
  13. Create the HAStoragePlus resource in apprg1.


    phys-newyork-1# scrgadm -a -j rs-hasp -g apprg1 -t SUNW.HAStoragePlus \
    -x FilesystemMountPoints=/mounts/sample -x AffinityOn=TRUE \
    -x GlobalDevicePaths=oradg1
  14. If necessary, confirm that the application resource group is correctly configured by bringing it online and taking it offline again.


    phys-newyork-1# scswitch -z -g apprg1 -h phys-newyork-1
    phys-newyork-1# scswitch -F -g apprg1
  15. Unmount the file system.


    phys-newyork-1# umount /mounts/sample
  16. Take the Sun Cluster device group offline.


    phys-newyork-1# scswitch -F -D oradg1
  17. Verify that the VERITAS Volume Manager disk group was deported.


    phys-newyork-1# vxdg list
  18. Reestablish the Hitachi TrueCopy pair.


    phys-newyork-1# pairresync -g devgroup1 
    phys-newyork-1# pairdisplay -g devgroup1 
    Group PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M 
    devgroup1 pair1(L) (CL1-C , 0, 20)12345 609..S-VOL PAIR ASYNC,-----   1    - 
    devgroup1 pair1(R) (CL1-A , 0, 1) 54321   1..P-VOL PAIR ASYNC,12345  609   W 
    devgroup1 pair2(L) (CL1-C , 0,21) 12345 610..S-VOL PAIR ASYNC,-----   2    - 
    devgroup1 pair2(R) (CL1-A , 0, 2) 54321   2..P-VOL PAIR ASYNC,12345  610   W

    Initial configuration on the secondary cluster is now complete.

How to Create a Copy of the Volume Manager Configuration

This task copies the volume manager configuration from the primary cluster, cluster-paris, to LUNs on the secondary cluster, cluster-newyork, by using VERITAS Volume Manager commands such as vxdiskadm and vxassist.


Note –

The device group, devgroup1, must be in the SMPL state throughout this procedure.


  1. Confirm that the pair is in the SMPL state.


    phys-newyork-1# pairdisplay -g devgroup1
    Group PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M 
    devgroup1 pair1(L) (CL1-C , 0, 20)12345 609..SMPL ---- ------,----- ----   - 
    devgroup1 pair1(R) (CL1-A , 0, 1) 54321   1..SMPL ---- ------,----- ----   - 
    devgroup1 pair2(L) (CL1-C , 0, 21)12345 610..SMPL ---- ------,----- ----   - 
    devgroup1 pair2(R) (CL1-A, 0, 2) 54321    2..SMPL ---- ------,----- ----   -
  2. Create VERITAS Volume Manager disk groups on shared disks in cluster-newyork.

    For example, the d1 and d2 disks are configured as part of a VERITAS Volume Manager disk group that is called oradg1 by using commands such as vxdiskadm and vxdg.

  3. After configuration is complete, verify that the disk group was created by using the vxdg list command.

    This command should list oradg1 as a disk group.

  4. Create the VERITAS Volume Manager volume.

    For example, a volume that is called vol1 is created in the oradg1 disk group. The appropriate VERITAS Volume Manager commands, such as vxassist, are used to configure the volume.

  5. Import the VERITAS Volume Manager disk group.


    phys-newyork-1# vxdg -C import oradg1
  6. Verify that the VERITAS Volume Manager disk group was successfully imported.


    phys-newyork-1# vxdg list
  7. Enable the VERITAS Volume Manager volume.


    phys-newyork-1# /usr/sbin/vxrecover -g oradg1 -s -b
  8. Verify that the VERITAS Volume Manager volumes are recognized and enabled.


    phys-newyork-1# vxprint
  9. Register the VERITAS Volume Manager disk group, oradg1, in Sun Cluster.


    phys-newyork-1# scconf -a -D \
    type=vxvm,name=oradg1,nodelist=phys-newyork-1:phys-newyork-2
  10. Synchronize the VERITAS Volume Manager information with the Sun Cluster device group and verify the output.


    phys-newyork-1# scconf -c -D name=oradg1,sync
    phys-newyork-1# scstat -D
  11. Create a UNIX file system.


    phys-newyork-1# newfs /dev/vx/rdsk/oradg1/vol1
  12. Add an entry to the /etc/vfstab file on phys-newyork-1.


    /dev/vx/dsk/oradg1/vol1 /dev/vx/rdsk/oradg1/vol1 /mounts/sample \
    ufs 2 no logging
  13. Create a mount directory on phys-newyork-1.


    phys-newyork-1# mkdir -p /mounts/sample
  14. Create an application resource group, apprg1, by using the scrgadm command.


    phys-newyork-1# scrgadm -a -g apprg1
  15. Create the HAStoragePlus resource in apprg1.


    phys-newyork-1# scrgadm -a -j rs-hasp -g apprg1 -t SUNW.HAStoragePlus \
    -x FilesystemMountPoints=/mounts/sample -x AffinityOn=TRUE \
    -x GlobalDevicePaths=oradg1
  16. If necessary, confirm that the application resource group is correctly configured by bringing it online and taking it offline again.


    phys-newyork-1# scswitch -z -g apprg1 -h phys-newyork-1
    phys-newyork-1# scswitch -F -g apprg1
  17. Unmount the file system.


    phys-newyork-1# umount /mounts/sample
  18. Take the Sun Cluster device group offline.


    phys-newyork-1# scswitch -F -D oradg1
  19. Verify that the VERITAS Volume Manager disk group was deported.


    phys-newyork-1# vxdg list
  20. Verify that the pair is still in the SMPL state.


    phys-newyork-1# pairdisplay -g devgroup1 
    Group PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M 
    devgroup1 pair1(L) (CL1-C , 0, 20)12345 609..SMPL ---- ------,-----  ----  - 
    devgroup1 pair1(R) (CL1-A , 0, 1) 54321   1..SMPL ---- ------,-----  ----  - 
    devgroup1 pair2(L) (CL1-C , 0, 21)12345 610..SMPL ---- ------,-----  ----  - 
    devgroup1 pair2(R) (CL1-A, 0, 2)  54321   2..SMPL ---- ------,-----  ----  -