Oracle® Solaris Cluster Geographic Edition Data Replication Guide for Hitachi TrueCopy and Universal Replicator

Configuring Data Replication With Hitachi TrueCopy or Universal Replicator Software on the Secondary Cluster

This section describes the steps that you must complete on the secondary cluster before you can configure Hitachi TrueCopy or Universal Replicator data replication in the Geographic Edition framework.

Configuring the /etc/horcm.conf File on the Nodes of the Secondary Cluster

For more information about how to configure the /etc/horcm.conf file, see the documentation for Hitachi TrueCopy and Universal Replicator.

On each node of the secondary cluster, you must configure the /etc/horcm.conf file with the same Hitachi TrueCopy or Universal Replicator data replication component names and device names that are configured on the primary cluster, and assign them to LUNs and to journal volumes on the local shared storage array.

Table 5, Example HORCM_LDEV Section of the /etc/horcm.conf File on the Secondary Cluster and Table 6, Example HORCM_DEV Section of the /etc/horcm.conf File on the Secondary Cluster show the entries in the /etc/horcm.conf file on the nodes of the secondary cluster for the data replication components configured on the primary cluster in Configuring the /etc/horcm.conf File on the Nodes of the Primary Cluster. Table 5 shows the HORCM_LDEV parameter configured with two locally attached LUNs, designated by their serial numbers and logical device (ldev) numbers and associated with a journal ID, as they were on the primary cluster.


Note -  If you want to ensure the consistency of replicated data with Hitachi Universal Replicator on both the primary cluster and the secondary cluster, you must specify a journal volume ID in the third property configuration field of HORCM_LDEV for each device in a Hitachi Universal Replicator data replication component. Otherwise, journalling does not occur and Hitachi Universal Replicator's functionality in Geographic Edition configurations is no greater than the functionality of Hitachi TrueCopy.
Table 5  Example HORCM_LDEV Section of the /etc/horcm.conf File on the Secondary Cluster

# dev_group     dev_name     serial#:jid#     ldev
devgroup1       pair1        10132:1          00:14
devgroup1       pair2        10132:1          00:15

The following table shows the HORCM_DEV parameter configured with two LUNs designated by their port, CL1-C, target 0, and LU numbers 22 and 23.

Table 6  Example HORCM_DEV Section of the /etc/horcm.conf File on the Secondary Cluster

# dev_group     dev_name     port number     TargetID     LU number     MU number
devgroup2       pair3        CL1-C           0            22
devgroup2       pair4        CL1-C           0            23
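
In the /etc/horcm.conf file itself, these entries appear under the HORCM_LDEV and HORCM_DEV section keywords. The following fragment is an illustrative sketch that combines the two tables; a complete /etc/horcm.conf file also contains other site-specific sections, which are omitted here, and the comment lines simply mirror the table headings.

HORCM_LDEV
# dev_group     dev_name     serial#:jid#     ldev
devgroup1       pair1        10132:1          00:14
devgroup1       pair2        10132:1          00:15

HORCM_DEV
# dev_group     dev_name     port number     TargetID     LU number     MU number
devgroup2       pair3        CL1-C           0            22
devgroup2       pair4        CL1-C           0            23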

After you have configured the /etc/horcm.conf file on the secondary cluster, you can view the status of the pairs by using the pairdisplay command, as follows. At this point the volumes are not yet paired, so each volume reports the SMPL (simplex) state.

phys-paris-1# pairdisplay -g devgroup1
Group PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
devgroup1 pair1(L) (CL1-A , 0, 1) 54321 1..  SMPL ----  ------,----- ----  -
devgroup1 pair1(R) (CL1-C , 0, 20)12345 609..SMPL ----  ------,----- ----  -
devgroup1 pair2(L) (CL1-A , 0, 2) 54321 2..  SMPL ----  ------,----- ----  -
devgroup1 pair2(R) (CL1-C , 0, 21)12345 610..SMPL ----  ------,----- ----  -

Configuring the Other Entities on the Secondary Cluster

Next, you need to configure any volume manager, the Oracle Solaris Cluster device groups, and the highly available cluster file system.

How to Replicate the Configuration Information From the Primary Cluster When Using Raw-Disk Device Groups

Before You Begin

If you are using storage-based replication, do not configure a replicated volume as a quorum device. The Geographic Edition framework does not support using a Hitachi TrueCopy or Universal Replicator S-VOL or Command Device as an Oracle Solaris Cluster quorum device. See Using Storage-Based Data Replication Within a Campus Cluster in Oracle Solaris Cluster 4.3 System Administration Guide for more information.

  1. Start replication for the devgroup1 device group. The paircreate –vl option designates the local volumes as the primary (P-VOL) volumes, and the –f async option sets the fence level to asynchronous.
    phys-paris-1# paircreate -g devgroup1 -vl -f async
    
    phys-paris-1# pairdisplay -g devgroup1
    Group PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M 
    devgroup1 pair1(L) (CL1-A , 0, 1) 54321   1..P-VOL COPY ASYNC ,12345 609   -
    devgroup1 pair1(R) (CL1-C , 0, 20)12345 609..S-VOL COPY ASYNC ,-----   1   -
    devgroup1 pair2(L) (CL1-A , 0, 2) 54321   2..P-VOL COPY ASYNC ,12345 610   -
    devgroup1 pair2(R) (CL1-C , 0, 21)12345 610..S-VOL COPY ASYNC ,-----   2   -
  2. Wait for the state of the pair to become PAIR on the secondary cluster. If you want to script this wait, see the polling sketch that follows this procedure.
    phys-newyork-1# pairdisplay -g devgroup1
    Group PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
    devgroup1 pair1(L) (CL1-C , 0, 20)12345 609..S-VOL PAIR ASYNC,-----, 1     - 
    devgroup1 pair1(R) (CL1-A , 0, 1) 54321   1..P-VOL PAIR ASYNC,12345, 609   - 
    devgroup1 pair2(L) (CL1-C , 0, 21)12345 610..S-VOL PAIR ASYNC,-----, 2     - 
    devgroup1 pair2(R) (CL1-A , 0, 2)54321    2..P-VOL PAIR ASYNC,12345, 610   -
  3. Split the pair by using the pairsplit command with the –rw option, which makes the secondary volumes on cluster-newyork writable, and confirm the result by using the pairdisplay command.
    phys-newyork-1# pairsplit -g devgroup1 -rw 
    phys-newyork-1# pairdisplay -g devgroup1
    Group PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M 
    devgroup1 pair1(L) (CL1-C , 0, 20)12345 609..S-VOL SSUS ASYNC, -----  1    - 
    devgroup1 pair1(R) (CL1-A , 0, 1) 54321   1..P-VOL PSUS ASYNC,12345  609   W 
    devgroup1 pair2(L) (CL1-C , 0,21) 12345 610..S-VOL SSUS ASYNC,-----   2    - 
    devgroup1 pair2(R) (CL1-A , 0, 2) 54321   2..P-VOL PSUS ASYNC,12345  610   W
  4. Create a raw-disk device group on the partner cluster.

    Use the same device group name that you used on the primary cluster.

    You can use the same DIDs on each cluster. In the following commands, the newyork cluster is the partner of the paris cluster, and the cldevicegroup disable, offline, and delete commands first remove the existing device groups for DIDs d5 and d6 so that those devices can be placed in the new raw-disk device group.

    phys-newyork-1# cldevicegroup disable dsk/d5 dsk/d6
    phys-newyork-1# cldevicegroup offline dsk/d5 dsk/d6
    phys-newyork-1# cldevicegroup delete dsk/d5 dsk/d6
    phys-newyork-1# cldevicegroup create -n phys-newyork-1,phys-newyork-2 \
    -t rawdisk -d d5,d6 rawdg
  5. Verify that the device group rawdg was created.
    phys-newyork-1# cldevicegroup show rawdg
  6. Synchronize the volume manager information with the Oracle Solaris Cluster device group and verify the output.
    phys-newyork-1# cldevicegroup sync rawdg
    phys-newyork-1# cldevicegroup status
  7. Add an entry to the /etc/vfstab file on each node of the newyork cluster.
    /dev/global/dsk/d5s2 /dev/global/rdsk/d5s2 /mounts/sample ufs 2 no logging
  8. Create a mount directory on each node of the newyork cluster.
    phys-newyork-1# mkdir -p /mounts/sample
    phys-newyork-2# mkdir -p /mounts/sample
  9. Create an application resource group, apprg1, by using the clresourcegroup command.
    phys-newyork-1# clresourcegroup create apprg1
  10. Create the HAStoragePlus resource in apprg1.
    phys-newyork-1# clresource create -g apprg1 -t SUNW.HAStoragePlus \
    -p FilesystemMountPoints=/mounts/sample -p AffinityOn=TRUE \
    -p GlobalDevicePaths=rawdg rs-hasp

    This HAStoragePlus resource is required for Geographic Edition configurations, because the framework relies on the resource to bring the device groups and file systems online when the protection group starts on the primary cluster.

  11. If necessary, confirm that the application resource group is correctly configured by bringing it online and taking it offline again.
    phys-newyork-1# clresourcegroup switch -emM -n phys-newyork-1 apprg1
    phys-newyork-1# clresourcegroup offline apprg1
  12. Unmount the file system.
    phys-newyork-1# umount /mounts/sample
  13. Take the Oracle Solaris Cluster device group offline.
    phys-newyork-1# cldevicegroup offline rawdg
  14. Reestablish the Hitachi TrueCopy or Universal Replicator pair.
    phys-newyork-1# pairresync -g devgroup1
    phys-newyork-1# pairdisplay -g devgroup1
    Group PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M 
    devgroup1 pair1(L) (CL1-C , 0, 20)12345 609..S-VOL PAIR ASYNC,-----   1    - 
    devgroup1 pair1(R) (CL1-A , 0, 1) 54321   1..P-VOL PAIR ASYNC,12345  609   W 
    devgroup1 pair2(L) (CL1-C , 0,21) 12345 610..S-VOL PAIR ASYNC,-----   2    - 
    devgroup1 pair2(R) (CL1-A , 0, 2) 54321   2..P-VOL PAIR ASYNC,12345  610   W

    Initial configuration on the secondary cluster is now complete.
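
If you want to script the wait in Step 2 rather than rerun the pairdisplay command manually, a minimal sketch such as the following polls until no volume in devgroup1 still reports the COPY state. It reuses only the pairdisplay command shown in this procedure; the 30-second interval is an arbitrary choice, and you might want to add a timeout that suits your site.

phys-newyork-1# while pairdisplay -g devgroup1 | grep ' COPY ' > /dev/null; do sleep 30; done
phys-newyork-1# pairdisplay -g devgroup1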