This section describes how to configure Hitachi TrueCopy software on the primary and secondary clusters. It also describes the preconditions for creating Hitachi TrueCopy protection groups.
Initial configuration of the primary and secondary clusters includes the following:
Configuring a Hitachi TrueCopy device group, devgroup1, with the required number of disks
Configuring the VERITAS Volume Manager disk group, oradg1
Configuring the VERITAS Volume Manager volume, vol1
Configuring the file system, which includes creating the file system, creating mount points, and adding entries to the /etc/vfstab file
Creating an application resource group, apprg1, which contains a HAStoragePlus resource
If you use the Hitachi TrueCopy Command Control Interface (CCI) for data replication, you must use RAID Manager. For information about which version you should use, see the Sun Cluster Geographic Edition Installation Guide.
This model requires specific hardware configurations with Sun StorEdge 9970/9980 Array or Hitachi Lightning 9900 Series Storage. Contact your Sun service representative for information about current supported Sun Cluster configurations.
Sun Cluster Geographic Edition software supports the hardware configurations that are supported by the Sun Cluster software. Contact your Sun service representative for information about current supported Sun Cluster configurations.
If you are using storage-based replication, do not configure a replicated volume as a quorum device. The Sun Cluster Geographic Edition software does not support Hitachi TrueCopy S-VOL and Command Device as a Sun Cluster quorum device. See Using Storage-Based Data Replication in Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS for more information.
This section describes the steps you must perform on the primary cluster before you can configure Hitachi TrueCopy data replication in Sun Cluster Geographic Edition software. To illustrate each step, this section uses an example of two disks, or LUNs, that are called d1 and d2. These disks are in a Hitachi TrueCopy array that holds data for an application that is called apprg1.
First, configure the Hitachi TrueCopy device groups on shared disks in the primary cluster. Disks d1 and d2 are configured to belong to a Hitachi TrueCopy device group that is called devgroup1. This configuration information is specified in the /etc/horcm.conf file on each cluster node that has access to the Hitachi array. The application, apprg1, can run on these cluster nodes.
For more information about how to configure the /etc/horcm.conf file, see the Sun StorEdge SE 9900 V Series Command and Control Interface User and Reference Guide.
The following table describes the configuration information from our example that is found in the /etc/horcm.conf file.
Table 9–2 Example Section of the /etc/horcm.conf File on the Primary Cluster
dev_group   dev_name   port number   TargetID   LU number   MU number
devgroup1   pair1      CL1-A         0          1
devgroup1   pair2      CL1-A         0          2
The configuration information in the table indicates that the Hitachi TrueCopy device group, devgroup1, contains two pairs. The first pair, pair1, is from the d1 disk, which is identified by the tuple <CL1-A, 0, 1>. The second pair, pair2, is from the d2 disk and is identified by the tuple <CL1-A, 0, 2>. The replicas of disks d1 and d2 are located in a geographically separated Hitachi TrueCopy array. The remote Hitachi TrueCopy array is connected to the partner cluster.
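Based on the table above, the HORCM_DEV section of the /etc/horcm.conf file might look like the following sketch. The surrounding sections of the file, such as HORCM_MON and HORCM_INST, are site specific and are omitted here.

```
HORCM_DEV
#dev_group    dev_name    port#    TargetID    LU#    MU#
devgroup1     pair1       CL1-A    0           1
devgroup1     pair2       CL1-A    0           2
```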
Hitachi TrueCopy supports VERITAS Volume Manager volumes. You must configure VERITAS Volume Manager volumes on disks d1 and d2.
If you are using storage-based replication, do not configure a replicated volume as a quorum device. The Sun Cluster Geographic Edition software does not support Hitachi TrueCopy S-VOL and Command Device as a Sun Cluster quorum device. See Using Storage-Based Data Replication in Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS for more information.
Create VERITAS Volume Manager disk groups on shared disks in cluster-paris.
For example, the d1 and d2 disks are configured as part of a VERITAS Volume Manager disk group, which is called oradg1, by using commands, such as vxdiskadm and vxdg.
After configuration is complete, verify that the disk group was created by using the vxdg list command.
The output of this command should show oradg1 as a disk group.
Create the VERITAS Volume Manager volume.
For example, a volume that is called vol1 is created in the oradg1 disk group. The appropriate VERITAS Volume Manager commands, such as vxassist, are used to configure the volume.
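The disk group and volume creation steps above might be sketched as follows. The device names c1t1d0 and c1t2d0 and the volume size are hypothetical; substitute the values for your configuration.

```shell
# Initialize the shared disks and create the oradg1 disk group
# (the c1t1d0 and c1t2d0 device names are examples only)
phys-paris-1# vxdg init oradg1 oradg101=c1t1d0 oradg102=c1t2d0

# Create the vol1 volume in the oradg1 disk group (the 2g size is an example)
phys-paris-1# vxassist -g oradg1 make vol1 2g

# Verify that the disk group and volume were created
phys-paris-1# vxdg list
phys-paris-1# vxprint -g oradg1
```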
If you are using storage-based replication, do not configure a replicated volume as a quorum device. The Sun Cluster Geographic Edition software does not support Hitachi TrueCopy S-VOL and Command Device as a Sun Cluster quorum device. See Using Storage-Based Data Replication in Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS for more information.
Register the VERITAS Volume Manager disk group that you configured in the previous procedure.
Use the Sun Cluster commands, scsetup or scconf.
For more information about these commands, refer to the scsetup(1M) or the scconf(1M) man page.
Synchronize the VERITAS Volume Manager configuration with Sun Cluster software, again by using the scsetup or scconf commands.
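The registration and synchronization steps can also be performed noninteractively with scconf. The following sketch assumes the cluster-paris node names used elsewhere in this example:

```shell
# Register the oradg1 disk group as a Sun Cluster device group
phys-paris-1# scconf -a -D \
type=vxvm,name=oradg1,nodelist=phys-paris-1:phys-paris-2

# Synchronize the volume manager configuration with the device group
phys-paris-1# scconf -c -D name=oradg1,sync
```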
After configuration is complete, verify the disk group registration.
# scstat -D
The VERITAS Volume Manager disk group, oradg1, should be displayed in the output.
For more information about the scstat command, see the scstat(1M) man page.
Before you configure the file system on cluster-paris, ensure that the Sun Cluster entities you require, such as application resource groups, device groups, and mount points, have already been configured.
If you are using storage-based replication, do not configure a replicated volume as a quorum device. The Sun Cluster Geographic Edition software does not support Hitachi TrueCopy S-VOL and Command Device as a Sun Cluster quorum device. See Using Storage-Based Data Replication in Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS for more information.
Create the required file system on the vol1 volume at the command line.
Add an entry to the /etc/vfstab file that contains information such as the mount location.
Whether the file system is to be mounted locally or globally depends on various factors, such as your performance requirements, or the type of application resource group you are using.
You must set the mount at boot field in this file to no. This value prevents the file system from mounting on the secondary cluster at cluster startup. Instead, the Sun Cluster software and the Sun Cluster Geographic Edition framework handle mounting the file system by using the HAStoragePlus resource when the application is brought online on the primary cluster. The file system must not be mounted on the secondary cluster; otherwise, data is not replicated from the primary cluster to the secondary cluster.
Add the HAStoragePlus resource to the application resource group, apprg1.
Adding the resource to the application resource group ensures that the necessary file systems are remounted before the application is brought online.
For more information about the HAStoragePlus resource type, refer to the Sun Cluster 3.1 Data Service Planning and Administration Guide.
This example assumes that the apprg1 resource group already exists.
Create a UNIX file system (UFS).
# newfs /dev/vx/dsk/oradg1/vol1
An entry in the /etc/vfstab file is created as follows:
/dev/vx/dsk/oradg1/vol1 /dev/vx/rdsk/oradg1/vol1 /mounts/sample ufs 2 no logging
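Because a wrong mount at boot value silently breaks replication, it is worth checking the entry after you add it. The following sketch verifies that field 6 of the entry is no; the sample entry is embedded here for illustration, but you could pipe in the matching line from /etc/vfstab instead.

```shell
# Verify that the "mount at boot" field (field 6) of the vfstab entry is "no".
# The sample entry mirrors the one added above.
entry="/dev/vx/dsk/oradg1/vol1 /dev/vx/rdsk/oradg1/vol1 /mounts/sample ufs 2 no logging"
echo "$entry" | awk '{
    if ($6 == "no")
        print "OK: mount at boot is no"
    else
        print "ERROR: mount at boot must be no"
}'
```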
Add the HAStoragePlus resource type.
# scrgadm -a -j rs-hasp -g apprg1 -t SUNW.HAStoragePlus \
-x FilesystemMountPoints=/mounts/sample -x AffinityOn=TRUE \
-x GlobalDevicePaths=oradg1
This section describes the steps you must complete on the secondary cluster before you can configure Hitachi TrueCopy data replication in Sun Cluster Geographic Edition software.
You must configure the Hitachi TrueCopy device group on shared disks in the secondary cluster just as you did on the primary cluster. Disks d1 and d2 are configured to belong to a Hitachi TrueCopy device group that is called devgroup1. This configuration information is specified in the /etc/horcm.conf file on each cluster node that has access to the Hitachi array. The application, apprg1, can run on these cluster nodes.
For more information about how to configure the /etc/horcm.conf file, see the Sun StorEdge SE 9900 V Series Command and Control Interface User and Reference Guide.
The following table describes the configuration information from our example that is found in the /etc/horcm.conf file.
Table 9–3 Example Section of the /etc/horcm.conf File on the Secondary Cluster
dev_group   dev_name   port number   TargetID   LU number   MU number
devgroup1   pair1      CL1-C         0          20
devgroup1   pair2      CL1-C         0          21
The configuration information in the table indicates that the Hitachi TrueCopy device group, devgroup1, contains two pairs. The first pair, pair1, is from the d1 disk, which is identified by the tuple <CL1-C, 0, 20>. The second pair, pair2, is from the d2 disk and is identified by the tuple <CL1-C, 0, 21>.
After you have configured the /etc/horcm.conf file on the secondary cluster, you can see the status of the pairs by using the pairdisplay command as follows:
phys-paris-1# pairdisplay -g devgroup1
Group   PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
devgroup1 pair1(L) (CL1-A , 0, 1) 54321   1..SMPL ---- ------,----- ----   -
devgroup1 pair1(R) (CL1-C , 0, 20)12345 609..SMPL ---- ------,----- ----   -
devgroup1 pair2(L) (CL1-A , 0, 2) 54321   2..SMPL ---- ------,----- ----   -
devgroup1 pair2(R) (CL1-C , 0, 21)12345 610..SMPL ---- ------,----- ----   -
Next, you need to configure the volume manager, the Sun Cluster device groups, and the highly available cluster global file system. You can configure these entities in two ways:
By replicating the volume manager information from cluster-paris
By creating a copy of the volume manager configuration on the LUNs of cluster-newyork by using the VERITAS Volume Manager commands vxdiskadm and vxassist
Each of these methods is described in the following procedures.
If you are using storage-based replication, do not configure a replicated volume as a quorum device. The Sun Cluster Geographic Edition software does not support Hitachi TrueCopy S-VOL and Command Device as a Sun Cluster quorum device. See Using Storage-Based Data Replication in Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS for more information.
Start replication for the devgroup1 device group.
phys-paris-1# paircreate -g devgroup1 -vl -f async
phys-paris-1# pairdisplay -g devgroup1
Group   PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
devgroup1 pair1(L) (CL1-A , 0, 1) 54321   1..P-VOL COPY ASYNC ,12345 609   -
devgroup1 pair1(R) (CL1-C , 0, 20)12345 609..S-VOL COPY ASYNC ,----- 1     -
devgroup1 pair2(L) (CL1-A , 0, 2) 54321   2..P-VOL COPY ASYNC ,12345 610   -
devgroup1 pair2(R) (CL1-C , 0, 21)12345 610..S-VOL COPY ASYNC ,----- 2     -
Wait for the state of the pair to become PAIR on the secondary cluster.
phys-newyork-1# pairdisplay -g devgroup1
Group   PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
devgroup1 pair1(L) (CL1-C , 0, 20)12345 609..S-VOL PAIR ASYNC,-----, 1     -
devgroup1 pair1(R) (CL1-A , 0, 1) 54321   1..P-VOL PAIR ASYNC,12345, 609   -
devgroup1 pair2(L) (CL1-C , 0, 21)12345 610..S-VOL PAIR ASYNC,-----, 2     -
devgroup1 pair2(R) (CL1-A , 0, 2) 54321   2..P-VOL PAIR ASYNC,12345, 610   -
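Rather than scanning the pairdisplay output by eye, you can count the volumes that have not yet reached the PAIR state. The following sketch filters captured output with awk; the here-document stands in for a live pairdisplay run, which you would substitute in practice.

```shell
# Count data lines (skipping the header) that do not contain the PAIR state.
# The here-document is sample pairdisplay output; replace it with
# `pairdisplay -g devgroup1` on a live system.
not_paired=$(awk 'NR > 1 && $0 !~ /PAIR/ { n++ } END { print n + 0 }' <<'EOF'
Group PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
devgroup1 pair1(L) (CL1-C , 0, 20)12345 609..S-VOL PAIR ASYNC,-----, 1 -
devgroup1 pair1(R) (CL1-A , 0, 1) 54321 1..P-VOL PAIR ASYNC,12345, 609 -
devgroup1 pair2(L) (CL1-C , 0, 21)12345 610..S-VOL PAIR ASYNC,-----, 2 -
devgroup1 pair2(R) (CL1-A , 0, 2) 54321 2..P-VOL PAIR ASYNC,12345, 610 -
EOF
)
if [ "$not_paired" -eq 0 ]; then
    echo "all volumes are in the PAIR state"
else
    echo "$not_paired volume(s) still copying"
fi
```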
Split the pair by using the pairsplit command and confirm that the secondary volumes on cluster-newyork are writable by using the -rw option.
phys-newyork-1# pairsplit -g devgroup1 -rw
phys-newyork-1# pairdisplay -g devgroup1
Group   PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
devgroup1 pair1(L) (CL1-C , 0, 20)12345 609..S-VOL SSUS ASYNC,----- 1      -
devgroup1 pair1(R) (CL1-A , 0, 1) 54321   1..P-VOL PSUS ASYNC,12345 609    W
devgroup1 pair2(L) (CL1-C , 0, 21)12345 610..S-VOL SSUS ASYNC,----- 2      -
devgroup1 pair2(R) (CL1-A , 0, 2) 54321   2..P-VOL PSUS ASYNC,12345 610    W
Import the VERITAS Volume Manager disk group, oradg1.
phys-newyork-1# vxdg -C import oradg1
Verify that the VERITAS Volume Manager disk group was successfully imported.
phys-newyork-1# vxdg list
Enable the VERITAS Volume Manager volume.
phys-newyork-1# /usr/sbin/vxrecover -g oradg1 -s -b
Verify that the VERITAS Volume Manager volumes are recognized and enabled.
phys-newyork-1# vxprint
Register the VERITAS Volume Manager disk group, oradg1, in Sun Cluster.
phys-newyork-1# scconf -a -D \
type=vxvm,name=oradg1,nodelist=phys-newyork-1:phys-newyork-2
Synchronize the volume manager information with the Sun Cluster device group and verify the output.
phys-newyork-1# scconf -c -D name=oradg1,sync
phys-newyork-1# scstat -D
Add an entry to the /etc/vfstab file on phys-newyork-1.
/dev/vx/dsk/oradg1/vol1 /dev/vx/rdsk/oradg1/vol1 /mounts/sample ufs 2 no logging
Create a mount directory on phys-newyork-1.
phys-newyork-1# mkdir -p /mounts/sample
Create an application resource group, apprg1, by using the scrgadm command.
phys-newyork-1# scrgadm -a -g apprg1
Create the HAStoragePlus resource in apprg1.
phys-newyork-1# scrgadm -a -j rs-hasp -g apprg1 -t SUNW.HAStoragePlus \
-x FilesystemMountPoints=/mounts/sample -x AffinityOn=TRUE \
-x GlobalDevicePaths=oradg1
If necessary, confirm that the application resource group is correctly configured by bringing it online and taking it offline again.
phys-newyork-1# scswitch -z -g apprg1 -h phys-newyork-1
phys-newyork-1# scswitch -F -g apprg1
Unmount the file system.
phys-newyork-1# umount /mounts/sample
Take the Sun Cluster device group offline.
phys-newyork-1# scswitch -F -D oradg1
Verify that the VERITAS Volume Manager disk group was deported.
phys-newyork-1# vxdg list
Reestablish the Hitachi TrueCopy pair.
phys-newyork-1# pairresync -g devgroup1
phys-newyork-1# pairdisplay -g devgroup1
Group   PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
devgroup1 pair1(L) (CL1-C , 0, 20)12345 609..S-VOL PAIR ASYNC,----- 1      -
devgroup1 pair1(R) (CL1-A , 0, 1) 54321   1..P-VOL PAIR ASYNC,12345 609    W
devgroup1 pair2(L) (CL1-C , 0, 21)12345 610..S-VOL PAIR ASYNC,----- 2      -
devgroup1 pair2(R) (CL1-A , 0, 2) 54321   2..P-VOL PAIR ASYNC,12345 610    W
Initial configuration on the secondary cluster is now complete.
This task copies the volume manager configuration from the primary cluster, cluster-paris, to LUNs of the secondary cluster, cluster-newyork, by using the VERITAS Volume Manager commands vxdiskadm and vxassist.
The device group, devgroup1, must be in the SMPL state throughout this procedure.
Confirm that the pair is in the SMPL state.
phys-newyork-1# pairdisplay -g devgroup1
Group   PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
devgroup1 pair1(L) (CL1-C , 0, 20)12345 609..SMPL ---- ------,----- ----   -
devgroup1 pair1(R) (CL1-A , 0, 1) 54321   1..SMPL ---- ------,----- ----   -
devgroup1 pair2(L) (CL1-C , 0, 21)12345 610..SMPL ---- ------,----- ----   -
devgroup1 pair2(R) (CL1-A , 0, 2) 54321   2..SMPL ---- ------,----- ----   -
Create VERITAS Volume Manager disk groups on shared disks in cluster-newyork.
For example, the d1 and d2 disks are configured as part of a VERITAS Volume Manager disk group, which is called oradg1, by using commands, such as vxdiskadm and vxdg.
After configuration is complete, verify that the disk group was created by using the vxdg list command.
The output of this command should show oradg1 as a disk group.
Create the VERITAS Volume Manager volume.
For example, a volume that is called vol1 is created in the oradg1 disk group. The appropriate VERITAS Volume Manager commands, such as vxassist, are used to configure the volume.
Import the VERITAS Volume Manager disk group.
phys-newyork-1# vxdg -C import oradg1
Verify that the VERITAS Volume Manager disk group was successfully imported.
phys-newyork-1# vxdg list
Enable the VERITAS Volume Manager volume.
phys-newyork-1# /usr/sbin/vxrecover -g oradg1 -s -b
Verify that the VERITAS Volume Manager volumes are recognized and enabled.
phys-newyork-1# vxprint
Register the VERITAS Volume Manager disk group, oradg1, in Sun Cluster.
phys-newyork-1# scconf -a -D \
type=vxvm,name=oradg1,nodelist=phys-newyork-1:phys-newyork-2
Synchronize the VERITAS Volume Manager information with the Sun Cluster device group and verify the output.
phys-newyork-1# scconf -c -D name=oradg1,sync
phys-newyork-1# scstat -D
Create a UNIX file system.
phys-newyork-1# newfs /dev/vx/dsk/oradg1/vol1
Add an entry to the /etc/vfstab file on phys-newyork-1.
/dev/vx/dsk/oradg1/vol1 /dev/vx/rdsk/oradg1/vol1 /mounts/sample ufs 2 no logging
Create a mount directory on phys-newyork-1.
phys-newyork-1# mkdir -p /mounts/sample
Create an application resource group, apprg1, by using the scrgadm command.
phys-newyork-1# scrgadm -a -g apprg1
Create the HAStoragePlus resource in apprg1.
phys-newyork-1# scrgadm -a -j rs-hasp -g apprg1 -t SUNW.HAStoragePlus \
-x FilesystemMountPoints=/mounts/sample -x AffinityOn=TRUE \
-x GlobalDevicePaths=oradg1
If necessary, confirm that the application resource group is correctly configured by bringing it online and taking it offline again.
phys-newyork-1# scswitch -z -g apprg1 -h phys-newyork-1
phys-newyork-1# scswitch -F -g apprg1
Unmount the file system.
phys-newyork-1# umount /mounts/sample
Take the Sun Cluster device group offline.
phys-newyork-1# scswitch -F -D oradg1
Verify that the VERITAS Volume Manager disk group was deported.
phys-newyork-1# vxdg list
Verify that the pair is still in the SMPL state.
phys-newyork-1# pairdisplay -g devgroup1
Group   PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
devgroup1 pair1(L) (CL1-C , 0, 20)12345 609..SMPL ---- ------,----- ----   -
devgroup1 pair1(R) (CL1-A , 0, 1) 54321   1..SMPL ---- ------,----- ----   -
devgroup1 pair2(L) (CL1-C , 0, 21)12345 610..SMPL ---- ------,----- ----   -
devgroup1 pair2(R) (CL1-A , 0, 2) 54321   2..SMPL ---- ------,----- ----   -