This section describes the steps that you must complete on the secondary cluster before you can configure Hitachi TrueCopy or Universal Replicator data replication in the Sun Cluster Geographic Edition software.
For more information about how to configure the /etc/horcm.conf file, see the documentation for Hitachi TrueCopy and Universal Replicator.
On each node of the secondary cluster, you must configure the /etc/horcm.conf file with the same Hitachi TrueCopy or Universal Replicator device group names and device names that are configured on the primary cluster, and assign them to LUNs and to journal volumes on the local shared storage array.
Table 1–4 and Table 1–5 show the entries in the /etc/horcm.conf file on the nodes of the secondary cluster for the device groups configured on the primary cluster in Configuring the /etc/horcm.conf File on the Nodes of the Primary Cluster. Table 1–4 shows the HORCM_LDEV parameter configured with two locally attached LUNs, designated by their serial numbers and logical device (ldev) numbers, and associated with a journal ID, as they were on the primary cluster.
If you want to ensure the consistency of replicated data with Hitachi Universal Replicator on both the primary cluster and the secondary cluster, you must specify a journal volume ID in the third property configuration field of HORCM_LDEV for each device in a Hitachi Universal Replicator device group. Otherwise, journaling does not occur and Hitachi Universal Replicator's functionality in Sun Cluster Geographic Edition configurations is no greater than the functionality of Hitachi TrueCopy.
Table 1–4 Example HORCM_LDEV Section of the /etc/horcm.conf File on the Secondary Cluster

# dev_group    dev_name    serial#:jid#    ldev
devgroup1      pair1       10132:1         00:14
devgroup1      pair2       10132:1         00:15
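In the /etc/horcm.conf file itself, the Table 1–4 entries appear as whitespace-delimited columns under the HORCM_LDEV keyword. The following fragment is a sketch that uses the example serial, journal, and ldev numbers from the table, not values to copy verbatim:

```
HORCM_LDEV
# dev_group    dev_name    serial#:jid#    ldev
devgroup1      pair1       10132:1         00:14
devgroup1      pair2       10132:1         00:15
```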
The following table shows the HORCM_DEV parameter configured with two LUNs designated by their port, CL1-C, target 0, and LU numbers 22 and 23.
Table 1–5 Example HORCM_DEV Section of the /etc/horcm.conf File on the Secondary Cluster
# dev_group    dev_name    port number    TargetID    LU number    MU number
devgroup2      pair3       CL1-C          0           22
devgroup2      pair4       CL1-C          0           23
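Similarly, the Table 1–5 entries appear in the file under the HORCM_DEV keyword. This fragment is a sketch built from the example values above; the MU number column is left empty, as it is in the table:

```
HORCM_DEV
# dev_group    dev_name    port#    TargetID    LU#    MU#
devgroup2      pair3       CL1-C    0           22
devgroup2      pair4       CL1-C    0           23
```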
After you have configured the /etc/horcm.conf file on the secondary cluster, you can view the status of the pairs by using the pairdisplay command as follows:
phys-paris-1# pairdisplay -g devgroup1
Group PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
devgroup1 pair1(L) (CL1-A , 0, 1) 54321    1..SMPL ----  ------,----- ----  -
devgroup1 pair1(R) (CL1-C , 0, 20)12345  609..SMPL ----  ------,----- ----  -
devgroup1 pair2(L) (CL1-A , 0, 2) 54321    2..SMPL ----  ------,----- ----  -
devgroup1 pair2(R) (CL1-C , 0, 21)12345  610..SMPL ----  ------,----- ----  -
Next, you need to configure the volume manager, the Sun Cluster device groups, and the highly available cluster file system. This process differs slightly depending on whether you are using Veritas Volume Manager or raw-disk device groups. The following procedures provide instructions for each configuration.
If you are using storage-based replication, do not configure a replicated volume as a quorum device. The Sun Cluster Geographic Edition software does not support Hitachi TrueCopy or Universal Replicator S-VOL and Command Device as a Sun Cluster quorum device. See Using Storage-Based Data Replication Within a Cluster in Sun Cluster System Administration Guide for Solaris OS for more information.
Start replication for the devgroup1 device group.
phys-paris-1# paircreate -g devgroup1 -vl -f async
phys-paris-1# pairdisplay -g devgroup1
Group PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
devgroup1 pair1(L) (CL1-A , 0, 1) 54321    1..P-VOL COPY ASYNC ,12345  609  -
devgroup1 pair1(R) (CL1-C , 0, 20)12345  609..S-VOL COPY ASYNC ,-----    1  -
devgroup1 pair2(L) (CL1-A , 0, 2) 54321    2..P-VOL COPY ASYNC ,12345  610  -
devgroup1 pair2(R) (CL1-C , 0, 21)12345  610..S-VOL COPY ASYNC ,-----    2  -
Wait for the state of the pair to become PAIR on the secondary cluster.
phys-newyork-1# pairdisplay -g devgroup1
Group PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
devgroup1 pair1(L) (CL1-C , 0, 20)12345  609..S-VOL PAIR ASYNC,-----,    1  -
devgroup1 pair1(R) (CL1-A , 0, 1) 54321    1..P-VOL PAIR ASYNC,12345,  609  -
devgroup1 pair2(L) (CL1-C , 0, 21)12345  610..S-VOL PAIR ASYNC,-----,    2  -
devgroup1 pair2(R) (CL1-A , 0, 2) 54321    2..P-VOL PAIR ASYNC,12345,  610  -
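Rather than rerunning pairdisplay by hand, you can script the wait. The helper below is a sketch, not part of the product: it parses pairdisplay-style output in the format shown above and prints the Status field of each local (L) volume, so a loop can poll until every pair reports PAIR.

```shell
# pair_states: read pairdisplay output on stdin and print the Status
# column (the second field after the ".." separator) of each (L) line.
pair_states() {
    awk '/\(L\)/ {
        sub(/.*\.\./, "")   # drop everything through "LDEV#.."
        print $2            # P/S is $1 (e.g. S-VOL), Status is $2
    }'
}

# Illustrative polling loop (requires the CCI pairdisplay command,
# so it is shown here as a comment only):
#   while pairdisplay -g devgroup1 | pair_states | grep -qv '^PAIR$'; do
#       sleep 30
#   done

# Demonstration against a captured sample line:
printf '%s\n' \
  'devgroup1 pair1(L) (CL1-C , 0, 20)12345  609..S-VOL PAIR ASYNC,-----    1  -' \
  | pair_states
```

When every local volume prints PAIR, the polling loop exits and you can proceed to the next step.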
Split the pair by using the pairsplit command with the -rw option, which makes the secondary volumes on cluster-newyork writable, and confirm the split in the pairdisplay output.
phys-newyork-1# pairsplit -g devgroup1 -rw
phys-newyork-1# pairdisplay -g devgroup1
Group PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
devgroup1 pair1(L) (CL1-C , 0, 20)12345  609..S-VOL SSUS ASYNC,-----    1  -
devgroup1 pair1(R) (CL1-A , 0, 1) 54321    1..P-VOL PSUS ASYNC,12345  609  W
devgroup1 pair2(L) (CL1-C , 0, 21)12345  610..S-VOL SSUS ASYNC,-----    2  -
devgroup1 pair2(R) (CL1-A , 0, 2) 54321    2..P-VOL PSUS ASYNC,12345  610  W
Import the Veritas Volume Manager disk group, oradg1.
phys-newyork-1# vxdg -C import oradg1
Verify that the Veritas Volume Manager disk group was successfully imported.
phys-newyork-1# vxdg list
Enable the Veritas Volume Manager volume.
phys-newyork-1# /usr/sbin/vxrecover -g oradg1 -s -b
Verify that the Veritas Volume Manager volumes are recognized and enabled.
phys-newyork-1# vxprint
Register the Veritas Volume Manager disk group, oradg1, in Sun Cluster.
phys-newyork-1# cldevicegroup create -t vxvm -n phys-newyork-1,phys-newyork-2 oradg1
Synchronize the volume manager information with the Sun Cluster device group and verify the output.
phys-newyork-1# cldevicegroup sync oradg1
phys-newyork-1# cldevicegroup status
Add an entry to the /etc/vfstab file on phys-newyork-1.
/dev/vx/dsk/oradg1/vol1 /dev/vx/rdsk/oradg1/vol1 /mounts/sample ufs 2 no logging
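A Solaris vfstab entry has seven fields: device to mount, device to fsck, mount point, FS type, fsck pass, mount-at-boot, and mount options. Before relying on the new entry, you can sanity-check it with a small awk helper; this is a sketch only, and it checks the field count, not the validity of each field:

```shell
# check_vfstab_line: print "ok" if the line has the seven fields that
# vfstab(4) requires; otherwise report how many fields it has.
check_vfstab_line() {
    echo "$1" | awk 'NF == 7 { print "ok"; exit }
                     { print "bad: " NF " fields" }'
}

check_vfstab_line \
  '/dev/vx/dsk/oradg1/vol1 /dev/vx/rdsk/oradg1/vol1 /mounts/sample ufs 2 no logging'
```

The example entry above passes the check; a line with a missing field reports its field count instead.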
Create a mount directory on phys-newyork-1.
phys-newyork-1# mkdir -p /mounts/sample
Create an application resource group, apprg1, by using the clresourcegroup command.
phys-newyork-1# clresourcegroup create apprg1
Create the HAStoragePlus resource in apprg1.
phys-newyork-1# clresource create -g apprg1 -t SUNW.HAStoragePlus \
-p FilesystemMountPoints=/mounts/sample -p AffinityOn=TRUE \
-p GlobalDevicePaths=oradg1 rs-hasp
This HAStoragePlus resource is required for Sun Cluster Geographic Edition systems, because the software relies on the resource to bring the device groups and file systems online when the protection group starts on the primary cluster.
If necessary, confirm that the application resource group is correctly configured by bringing it online and taking it offline again.
phys-newyork-1# clresourcegroup switch -emM -n phys-newyork-1 apprg1
phys-newyork-1# clresourcegroup offline apprg1
Unmount the file system.
phys-newyork-1# umount /mounts/sample
Take the Sun Cluster device group offline.
phys-newyork-1# cldevicegroup offline oradg1
Verify that the Veritas Volume Manager disk group was deported.
phys-newyork-1# vxdg list
Reestablish the Hitachi TrueCopy or Universal Replicator pair.
phys-newyork-1# pairresync -g devgroup1
phys-newyork-1# pairdisplay -g devgroup1
Group PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
devgroup1 pair1(L) (CL1-C , 0, 20)12345  609..S-VOL PAIR ASYNC,-----    1  -
devgroup1 pair1(R) (CL1-A , 0, 1) 54321    1..P-VOL PAIR ASYNC,12345  609  W
devgroup1 pair2(L) (CL1-C , 0, 21)12345  610..S-VOL PAIR ASYNC,-----    2  -
devgroup1 pair2(R) (CL1-A , 0, 2) 54321    2..P-VOL PAIR ASYNC,12345  610  W
Initial configuration on the secondary cluster is now complete.
The following steps apply if you are using raw-disk device groups instead of Veritas Volume Manager. If you are using storage-based replication, do not configure a replicated volume as a quorum device. The Sun Cluster Geographic Edition software does not support Hitachi TrueCopy or Universal Replicator S-VOL and Command Device as a Sun Cluster quorum device. See Using Storage-Based Data Replication Within a Cluster in Sun Cluster System Administration Guide for Solaris OS for more information.
Start replication for the devgroup1 device group.
phys-paris-1# paircreate -g devgroup1 -vl -f async
phys-paris-1# pairdisplay -g devgroup1
Group PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
devgroup1 pair1(L) (CL1-A , 0, 1) 54321    1..P-VOL COPY ASYNC ,12345  609  -
devgroup1 pair1(R) (CL1-C , 0, 20)12345  609..S-VOL COPY ASYNC ,-----    1  -
devgroup1 pair2(L) (CL1-A , 0, 2) 54321    2..P-VOL COPY ASYNC ,12345  610  -
devgroup1 pair2(R) (CL1-C , 0, 21)12345  610..S-VOL COPY ASYNC ,-----    2  -
Wait for the state of the pair to become PAIR on the secondary cluster.
phys-newyork-1# pairdisplay -g devgroup1
Group PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
devgroup1 pair1(L) (CL1-C , 0, 20)12345  609..S-VOL PAIR ASYNC,-----,    1  -
devgroup1 pair1(R) (CL1-A , 0, 1) 54321    1..P-VOL PAIR ASYNC,12345,  609  -
devgroup1 pair2(L) (CL1-C , 0, 21)12345  610..S-VOL PAIR ASYNC,-----,    2  -
devgroup1 pair2(R) (CL1-A , 0, 2) 54321    2..P-VOL PAIR ASYNC,12345,  610  -
Split the pair by using the pairsplit command with the -rw option, which makes the secondary volumes on cluster-newyork writable, and confirm the split in the pairdisplay output.
phys-newyork-1# pairsplit -g devgroup1 -rw
phys-newyork-1# pairdisplay -g devgroup1
Group PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
devgroup1 pair1(L) (CL1-C , 0, 20)12345  609..S-VOL SSUS ASYNC,-----    1  -
devgroup1 pair1(R) (CL1-A , 0, 1) 54321    1..P-VOL PSUS ASYNC,12345  609  W
devgroup1 pair2(L) (CL1-C , 0, 21)12345  610..S-VOL SSUS ASYNC,-----    2  -
devgroup1 pair2(R) (CL1-A , 0, 2) 54321    2..P-VOL PSUS ASYNC,12345  610  W
Create a raw-disk device group on the partner cluster.
Use the same device group name that you used on the primary cluster.
You can use the same DIDs on each cluster. In the following command, the newyork cluster is the partner of the paris cluster.
phys-newyork-1# cldevicegroup disable dsk/d5 dsk/d6
phys-newyork-1# cldevicegroup offline dsk/d5 dsk/d6
phys-newyork-1# cldevicegroup delete dsk/d5 dsk/d6
phys-newyork-1# cldevicegroup create -n phys-newyork-1,phys-newyork-2 \
-t rawdisk -d d5,d6 rawdg
Check that the device group rawdg was created.
phys-newyork-1# cldevicegroup show rawdg
Synchronize the volume manager information with the Sun Cluster device group and verify the output.
phys-newyork-1# cldevicegroup sync rawdg
phys-newyork-1# cldevicegroup status
Add an entry to the /etc/vfstab file on each node of the newyork cluster.
/dev/global/dsk/d5s2 /dev/global/rdsk/d5s2 /mounts/sample ufs 2 no logging
Create a mount directory on each node of the newyork cluster.
phys-newyork-1# mkdir -p /mounts/sample
phys-newyork-2# mkdir -p /mounts/sample
Create an application resource group, apprg1, by using the clresourcegroup command.
phys-newyork-1# clresourcegroup create apprg1
Create the HAStoragePlus resource in apprg1.
phys-newyork-1# clresource create -g apprg1 -t SUNW.HAStoragePlus \
-p FilesystemMountPoints=/mounts/sample -p AffinityOn=TRUE \
-p GlobalDevicePaths=rawdg rs-hasp
This HAStoragePlus resource is required for Sun Cluster Geographic Edition systems, because the software relies on the resource to bring the device groups and file systems online when the protection group starts on the primary cluster.
If necessary, confirm that the application resource group is correctly configured by bringing it online and taking it offline again.
phys-newyork-1# clresourcegroup switch -emM -n phys-newyork-1 apprg1
phys-newyork-1# clresourcegroup offline apprg1
Unmount the file system.
phys-newyork-1# umount /mounts/sample
Take the Sun Cluster device group offline.
phys-newyork-1# cldevicegroup offline rawdg
Reestablish the Hitachi TrueCopy or Universal Replicator pair.
phys-newyork-1# pairresync -g devgroup1
phys-newyork-1# pairdisplay -g devgroup1
Group PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
devgroup1 pair1(L) (CL1-C , 0, 20)12345  609..S-VOL PAIR ASYNC,-----    1  -
devgroup1 pair1(R) (CL1-A , 0, 1) 54321    1..P-VOL PAIR ASYNC,12345  609  W
devgroup1 pair2(L) (CL1-C , 0, 21)12345  610..S-VOL PAIR ASYNC,-----    2  -
devgroup1 pair2(R) (CL1-A , 0, 2) 54321    2..P-VOL PAIR ASYNC,12345  610  W
Initial configuration on the secondary cluster is now complete.