This section describes the steps that you must complete on the secondary cluster before you can configure Hitachi TrueCopy or Universal Replicator data replication in the Geographic Edition framework.
On each node of the secondary cluster, you must configure the /etc/horcm.conf file with the same Hitachi TrueCopy or Universal Replicator data replication component names and device names that are configured on the primary cluster, and assign them to LUNs and to journal volumes on the local shared storage array.
For more information about how to configure the /etc/horcm.conf file, see the documentation for Hitachi TrueCopy and Universal Replicator.
Table 5, Example HORCM_LDEV Section of the /etc/horcm.conf File on the Secondary Cluster, and Table 6, Example HORCM_DEV Section of the /etc/horcm.conf File on the Secondary Cluster, show the entries in the /etc/horcm.conf file on the nodes of the secondary cluster for the data replication components that were configured on the primary cluster in Configuring the /etc/horcm.conf File on the Nodes of the Primary Cluster. Table 5 shows the HORCM_LDEV parameter configured with two locally attached LUNs, designated by their serial numbers and logical device (ldev) numbers and associated with a journal ID, as they were on the primary cluster.
[Table 5, Example HORCM_LDEV Section of the /etc/horcm.conf File on the Secondary Cluster, appears here in the original document.]
The following table shows the HORCM_DEV parameter configured with two LUNs, designated by their port (CL1-C), target ID (0), and LU numbers (22 and 23).
[Table 6, Example HORCM_DEV Section of the /etc/horcm.conf File on the Secondary Cluster, appears here in the original document.]
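As a rough illustration only, a fragment of the /etc/horcm.conf file might look like the following sketch. The serial number (12345), LDEV numbers (609, 610), and LU numbers (22, 23) echo the examples in this section; the journal ID (0) and the scratch path are placeholders, and the exact layout must follow the Hitachi documentation for your array.

```shell
# Illustrative sketch only: write an example horcm.conf fragment to a
# scratch path. Serial number, LDEV numbers, and LU numbers follow the
# examples in this section; the journal ID (0) is a placeholder.
cat > /tmp/horcm.conf.example <<'EOF'
HORCM_LDEV
#dev_group   dev_name   Serial#:jid#   LDEV#
devgroup1    pair1      12345:0        609
devgroup1    pair2      12345:0        610

HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#
devgroup1    pair1      CL1-C   0          22
devgroup1    pair2      CL1-C   0          23
EOF
grep -c '^devgroup1' /tmp/horcm.conf.example   # 4 data lines
```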
After you have configured the /etc/horcm.conf file on the secondary cluster, you can view the status of the pairs by using the pairdisplay command as follows:
phys-paris-1# pairdisplay -g devgroup1
Group     PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
devgroup1 pair1(L)     (CL1-A , 0, 1) 54321   1..SMPL ---- ------,----- ----   -
devgroup1 pair1(R)     (CL1-C , 0, 20)12345 609..SMPL ---- ------,----- ----   -
devgroup1 pair2(L)     (CL1-A , 0, 2) 54321   2..SMPL ---- ------,----- ----   -
devgroup1 pair2(R)     (CL1-C , 0, 21)12345 610..SMPL ---- ------,----- ----   -
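Before the pairs are created, every volume should still report the unpaired state SMPL. The pairdisplay command itself requires the Hitachi CCI environment, so the following hypothetical sketch scans a saved copy of its output instead; the scratch path is an assumption for illustration.

```shell
# Hypothetical sanity check: scan saved pairdisplay output and confirm
# that every volume line still reports SMPL (unpaired) before paircreate.
cat > /tmp/pairstate.out <<'EOF'
devgroup1 pair1(L) (CL1-A , 0, 1) 54321   1..SMPL ---- ------,----- ---- -
devgroup1 pair1(R) (CL1-C , 0, 20)12345 609..SMPL ---- ------,----- ---- -
devgroup1 pair2(L) (CL1-A , 0, 2) 54321   2..SMPL ---- ------,----- ---- -
devgroup1 pair2(R) (CL1-C , 0, 21)12345 610..SMPL ---- ------,----- ---- -
EOF
total=$(grep -c '^devgroup1' /tmp/pairstate.out)
smpl=$(grep -c '\.\.SMPL' /tmp/pairstate.out)
if [ "$total" -eq "$smpl" ]; then
    echo "all $total volumes unpaired (SMPL)"
fi
```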
Next, you need to configure any volume manager, the Oracle Solaris Cluster device groups, and the highly available cluster file system.
Before You Begin
If you are using storage-based replication, do not configure a replicated volume as a quorum device. The Geographic Edition framework does not support Hitachi TrueCopy or Universal Replicator S-VOL and Command Device as an Oracle Solaris Cluster quorum device. See Using Storage-Based Data Replication Within a Campus Cluster in Oracle Solaris Cluster 4.3 System Administration Guide for more information.
phys-paris-1# paircreate -g devgroup1 -vl -f async

phys-paris-1# pairdisplay -g devgroup1
Group     PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
devgroup1 pair1(L)     (CL1-A , 0, 1) 54321   1..P-VOL COPY ASYNC ,12345 609   -
devgroup1 pair1(R)     (CL1-C , 0, 20)12345 609..S-VOL COPY ASYNC ,----- 1     -
devgroup1 pair2(L)     (CL1-A , 0, 2) 54321   2..P-VOL COPY ASYNC ,12345 610   -
devgroup1 pair2(R)     (CL1-C , 0, 21)12345 610..S-VOL COPY ASYNC ,----- 2     -
phys-newyork-1# pairdisplay -g devgroup1
Group     PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
devgroup1 pair1(L)     (CL1-C , 0, 20)12345 609..S-VOL PAIR ASYNC,-----, 1     -
devgroup1 pair1(R)     (CL1-A , 0, 1) 54321   1..P-VOL PAIR ASYNC,12345, 609   -
devgroup1 pair2(L)     (CL1-C , 0, 21)12345 610..S-VOL PAIR ASYNC,-----, 2     -
devgroup1 pair2(R)     (CL1-A , 0, 2) 54321   2..P-VOL PAIR ASYNC,12345, 610   -
phys-newyork-1# pairsplit -g devgroup1 -rw

phys-newyork-1# pairdisplay -g devgroup1
Group     PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
devgroup1 pair1(L)     (CL1-C , 0, 20)12345 609..S-VOL SSUS ASYNC,----- 1      -
devgroup1 pair1(R)     (CL1-A , 0, 1) 54321   1..P-VOL PSUS ASYNC,12345 609    W
devgroup1 pair2(L)     (CL1-C , 0, 21)12345 610..S-VOL SSUS ASYNC,----- 2      -
devgroup1 pair2(R)     (CL1-A , 0, 2) 54321   2..P-VOL PSUS ASYNC,12345 610    W
Use the same device group name that you used on the primary cluster.
You can use the same DIDs on each cluster. In the following command, the newyork cluster is the partner of the paris cluster.
phys-newyork-1# cldevicegroup disable dsk/d5 dsk/d6
phys-newyork-1# cldevicegroup offline dsk/d5 dsk/d6
phys-newyork-1# cldevicegroup delete dsk/d5 dsk/d6
phys-newyork-1# cldevicegroup create -n phys-newyork-1,phys-newyork-2 \
-t rawdisk -d d5,d6 rawdg
phys-newyork-1# cldevicegroup show rawdg
phys-newyork-1# cldevicegroup sync rawdg
phys-newyork-1# cldevicegroup status
On each node, add an entry for the file system to the /etc/vfstab file, for example:

/dev/global/dsk/d5s2 /dev/global/rdsk/d5s2 /mounts/sample ufs 2 no logging
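The entry must be identical on every node. As a hypothetical sketch of scripting that step (writing to a scratch file here rather than the live /etc/vfstab):

```shell
# Hypothetical sketch: append the cluster file-system entry to a scratch
# copy of /etc/vfstab. On a real node you would edit /etc/vfstab itself,
# and repeat the edit on each node of the secondary cluster.
VFSTAB=/tmp/vfstab.example   # stand-in path for /etc/vfstab
printf '%s\t%s\t%s\tufs\t2\tno\tlogging\n' \
    /dev/global/dsk/d5s2 /dev/global/rdsk/d5s2 /mounts/sample >> "$VFSTAB"
grep /mounts/sample "$VFSTAB"
```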
phys-newyork-1# mkdir -p /mounts/sample
phys-newyork-2# mkdir -p /mounts/sample
phys-newyork-1# clresourcegroup create apprg1
phys-newyork-1# clresource create -g apprg1 -t SUNW.HAStoragePlus \
-p FilesystemMountPoints=/mounts/sample -p AffinityOn=TRUE \
-p GlobalDevicePaths=rawdg rs-hasp
This HAStoragePlus resource is required in Geographic Edition configurations because the framework relies on it to bring the device groups and file systems online when the protection group starts on the primary cluster.
phys-newyork-1# clresourcegroup switch -emM -n phys-newyork-1 apprg1
phys-newyork-1# clresourcegroup offline apprg1
phys-newyork-1# umount /mounts/sample
phys-newyork-1# cldevicegroup offline rawdg
phys-newyork-1# pairresync -g devgroup1

phys-newyork-1# pairdisplay -g devgroup1
Group     PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
devgroup1 pair1(L)     (CL1-C , 0, 20)12345 609..S-VOL PAIR ASYNC,----- 1      -
devgroup1 pair1(R)     (CL1-A , 0, 1) 54321   1..P-VOL PAIR ASYNC,12345 609    W
devgroup1 pair2(L)     (CL1-C , 0, 21)12345 610..S-VOL PAIR ASYNC,----- 2      -
devgroup1 pair2(R)     (CL1-A , 0, 2) 54321   2..P-VOL PAIR ASYNC,12345 610    W
Initial configuration on the secondary cluster is now complete.