This section describes how to configure Hitachi TrueCopy or Universal Replicator software on the primary and secondary cluster. It also includes information about the preconditions for creating Hitachi TrueCopy and Universal Replicator protection groups. This section provides the following information:
Starting in the Sun Cluster Geographic Edition 3.2 11/09 release, Hitachi Universal Replicator can provide guaranteed data consistency in asynchronous mode replication, in which the replication fence level is set to async. Asynchronous mode replication is commonly used between a primary data center and a distant disaster recovery site. Guaranteed data consistency in asynchronous mode is therefore critical to the functioning of a disaster recovery system.
Guaranteed data consistency in asynchronous replication mode requires the following:
You must run Hitachi Universal Replicator. Hitachi TrueCopy cannot always guarantee data consistency in asynchronous mode.
On both clusters of the Sun Cluster Geographic Edition partnership, you must have Hitachi storage arrays that are supported for use with Hitachi Universal Replicator. Contact your Sun representative for a list of currently supported hardware.
You must configure journal volumes on the Hitachi storage arrays at both sites. For instructions, see the Hitachi documentation for your array.
A journal volume must be associated with each asynchronously replicated paired device in the /etc/horcm.conf file. You configure this association in /etc/horcm.conf as a property of the parameter HORCM_LDEV. You cannot use the property HORCM_DEV. For details, see Configuration of the /etc/horcm.conf File and Journal Volumes.
Each asynchronously replicated Hitachi device group that is used by one particular service or application must be assigned the same consistency group ID (CTGID) as the protection group that manages it. To do so, you can complete the following steps:
Create the protection group with the CTGID that you want to use.
Add uninitialized Hitachi device groups to the protection group.
Start the protection group.
For details, see Ensuring Data Consistency for Hitachi Universal Replicator in Asynchronous Mode.
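For example, the steps above might look like the following sketch, which assumes a partnership named paris-newyork-ps, a protection group app-pg using CTGID 5, and an asynchronously replicated device group devgroup1; all of these names, and the CTGID value, are illustrative only. Verify the exact property names and syntax against the geopg(1M) man page for your release.

```
phys-paris-1# geopg create -d truecopy -o Primary -s paris-newyork-ps \
-p Ctgid=5 -p Nodelist=phys-paris-1,phys-paris-2 app-pg
phys-paris-1# geopg add-device-group -p Fence_level=async devgroup1 app-pg
phys-paris-1# geopg start -e global app-pg
```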
Initial configuration of the primary and secondary clusters includes the following:
Configuring a Hitachi TrueCopy or Universal Replicator device group, devgroup1, with the required number of disks
If you are using raw-disk device groups, configuring a raw-disk device group, rawdg
If you are using Veritas Volume Manager:
Configuring the Veritas Volume Manager disk group, oradg1
Configuring the Veritas Volume Manager volume, vol1
Configuring the Sun Cluster device group for the Veritas Volume Manager disk group, oradg1
Configuring the file system, which includes creating the file system, creating mount points, and adding entries to the /etc/vfstab file
Creating an application resource group, apprg1, which contains a HAStoragePlus resource
Observe the following requirements and guidelines:
If you are using storage-based replication, do not configure a replicated volume as a quorum device. The Sun Cluster Geographic Edition software does not support using a Hitachi TrueCopy or Universal Replicator S-VOL and Command Device as a Sun Cluster quorum device. See Using Storage-Based Data Replication Within a Cluster in Sun Cluster System Administration Guide for Solaris OS for more information.
If you use the Hitachi TrueCopy and Universal Replicator Command Control Interface (CCI) for data replication, you must use RAID Manager. For information about which version you should use, see the Sun Cluster Geographic Edition Installation Guide.
This model requires specific hardware configurations with Sun StorEdgeTM 9970/9980 Array or Hitachi Lightning 9900 Series Storage. Contact your Sun service representative for information about Sun Cluster configurations that are currently supported.
All Hitachi TrueCopy or Universal Replicator device groups with the same consistency group ID (CTGID) must be added to the same protection group.
Sun Cluster Geographic Edition software uses the default CCI instance to manage the Hitachi TrueCopy or Universal Replicator devices. Sun Cluster Geographic Edition software starts the default CCI instance whenever a TrueCopy device group is managed by Sun Cluster Geographic Edition software. Applications that are not under the control of Sun Cluster Geographic Edition software can also use the default CCI instance or any other instances without risk to Sun Cluster Geographic Edition or application processes or data.
Sun Cluster Geographic Edition software supports the hardware configurations that are supported by the Sun Cluster software. Contact your Sun service representative for information about current supported Sun Cluster configurations.
The Sun Cluster device groups that are listed in the cluster_dgs protection group property must exist and have the same device group name on both the primary cluster and the secondary cluster.
This section describes the tasks that you must perform on the primary cluster before you can configure Hitachi TrueCopy or Universal Replicator data replication in the Sun Cluster Geographic Edition software.
In all examples in this document, the “primary” cluster is the cluster on which the application data service is started during routine operations. The partner cluster is “secondary.” The primary cluster is named cluster-paris, and the secondary cluster is named cluster-newyork. The cluster-paris cluster consists of two nodes, phys-paris-1 and phys-paris-2. The cluster-newyork cluster also consists of two nodes, phys-newyork-1 and phys-newyork-2. Two device groups are configured on each cluster. The devgroup1 device group contains the paired devices pair1 and pair2. The devgroup2 device group contains the paired devices pair3 and pair4.
As used with the Sun Cluster Geographic Edition configuration, a Hitachi TrueCopy or Universal Replicator device group is a named entity consisting of sets of paired Logical Unit Numbers (LUNs). One member of each pair of LUNs is located in local storage on the primary cluster and the other member is located in local storage on a Sun Cluster Geographic Edition partner cluster. Data is written to one member of a pair of LUNs in local storage on the primary cluster and replicated to the other member of the pair on local storage on the secondary cluster. Both LUNs in a pair are assigned the same device name. Thus, data that is written to the LUN assigned the pair1 device name on the primary cluster is replicated to the LUN assigned the pair1 device name on the secondary cluster. Data that is written to the LUN assigned the pair2 device name on the primary cluster is replicated to the LUN assigned the pair2 device name on the secondary cluster.
On each storage-attached node of each cluster, pairs are given names and assigned to a device group in the /etc/horcm.conf file. Additionally, in this file, each device group is assigned a name that is the same on all storage-attached nodes of all clusters that are participating in a Sun Cluster Geographic Edition partnership.
In the /etc/horcm.conf file, you configure each Hitachi TrueCopy or Universal Replicator device group as a property of either the HORCM_DEV parameter or the HORCM_LDEV parameter. Depending on their intended use, you might configure one device group in the /etc/horcm.conf file as a property of HORCM_DEV and another device group as a property of HORCM_LDEV. However, a single device group can only be configured as a property of HORCM_DEV or of HORCM_LDEV. For any one device group, the selected parameter, HORCM_DEV or HORCM_LDEV, must be consistent on all storage-attached nodes of all clusters that are participating in the Sun Cluster Geographic Edition partnership.
Of the parameters that are configured in the /etc/horcm.conf file, only HORCM_DEV and HORCM_LDEV have requirements that are specific to the Sun Cluster Geographic Edition configuration. For information about configuring other parameters in the /etc/horcm.conf file, see the documentation for Hitachi TrueCopy and Universal Replicator.
Entries in the /etc/horcm.conf file for Hitachi Universal Replicator device groups can associate journal volumes with data LUNs. Journal volumes are specially configured LUNs on the storage system array. On both the primary and secondary arrays, local journal volumes store data that has been written to application data storage on the primary cluster, but not yet replicated to application data storage on the secondary cluster. Journal volumes thereby enable Hitachi Universal Replicator to maintain the consistency of data even if the connection between the paired clusters in a Sun Cluster Geographic Edition partnership temporarily fails. A journal volume can be used by more than one device group on the local cluster, but typically is assigned to just one device group. Hitachi TrueCopy does not support journaling.
If you want to implement journaling, you must configure Hitachi Universal Replicator device groups as properties of the HORCM_LDEV parameter because only that parameter supports the association of data LUNs with journal volumes in the Sun Cluster Geographic Edition Hitachi Universal Replicator module. If you configure Hitachi Universal Replicator device groups by using the HORCM_DEV parameter, no journaling occurs, and Hitachi Universal Replicator has no greater functionality than does Hitachi TrueCopy.
On each storage-attached node of the primary cluster, you configure Hitachi TrueCopy and Universal Replicator device groups as properties of the HORCM_DEV or HORCM_LDEV parameter in the /etc/horcm.conf file, and associate them with LUNs and, if appropriate, journal volumes. All devices that are configured in this file, including journal volumes, must be in locally attached storage. The /etc/horcm.conf file is read by the HORCM daemon when it starts, which occurs during reboot or when the Sun Cluster Geographic Edition software is started. If you change the /etc/horcm.conf file on any node after the Sun Cluster Geographic Edition software is started, and you do not anticipate rebooting, you must restart the HORCM daemon on that node by using the commands:
phys-paris-1# horcm-installation-directory/usr/bin/horcmshutdown.sh
phys-paris-1# horcm-installation-directory/usr/bin/horcmstart.sh
Table 1–2 shows the configuration of one journaling Hitachi Universal Replicator device group in the /etc/horcm.conf file as a property of the HORCM_LDEV parameter. Each LUN in the device group is described on a single line consisting of four space-delimited entries. The LUNs in the devgroup1 device group are named pair1 and pair2. The administrator chooses the device group and paired device names. In the third field of the file, each LUN is described by its serial number, followed by a colon, followed by the journal ID of its associated journal volume. In the logical device number (ldev) field, the controller unit (CU) is followed by a colon, which is followed by the logical device number. Both values are in hexadecimal format. All entries are supplied by the raidscan command, which is described in more detail in Hitachi's documentation. The ldev value that is supplied by the raidscan command is in decimal format, so you must convert the value to base 16 to obtain the correct format for the entry in the ldev field. You can only use the configuration shown in Table 1–2 with Hitachi Universal Replicator, as Hitachi TrueCopy does not support journaling.
If you want to ensure the consistency of replicated data with Hitachi Universal Replicator on both the primary cluster and the secondary cluster, you must specify a journal volume ID in the third property configuration field of HORCM_LDEV for each device in a Hitachi Universal Replicator device group. Otherwise, journaling does not occur and Hitachi Universal Replicator's functionality in Sun Cluster Geographic Edition configurations is no greater than the functionality of Hitachi TrueCopy.
Table 1–2 Example HORCM_LDEV Section of the /etc/horcm.conf File on the Primary Cluster

# dev_group     dev_name     serial#:jid#     ldev
devgroup1       pair1        10136:0          00:12
devgroup1       pair2        10136:0          00:13
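Because raidscan reports ldev values in decimal while the ldev field requires hexadecimal CU:LDEV notation, the conversion described above can be sketched in shell. The helper name and sample values here are illustrative, not part of the Hitachi tooling:

```shell
#!/bin/sh
# Convert a decimal ldev number (as reported by raidscan) to the
# hexadecimal CU:LDEV notation used in the ldev field of /etc/horcm.conf.
dec_to_ldev() {
  dec=$1
  cu=$((dec / 256))    # controller unit (CU): high byte
  ld=$((dec % 256))    # logical device number: low byte
  printf '%02x:%02x\n' "$cu" "$ld"
}

dec_to_ldev 18    # decimal 18 is hexadecimal 12, written as 00:12
dec_to_ldev 19    # decimal 19 is hexadecimal 13, written as 00:13
```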
Table 1–3 shows the configuration of one non-journaling Hitachi TrueCopy or Universal Replicator device group in the /etc/horcm.conf file as a property of the HORCM_DEV parameter. Each LUN in the device group is described on a single line consisting of five space-delimited entries. The table describes a device group named devgroup2 that is composed of two LUNs in a single shared storage array that is attached to the nodes of the primary cluster. The LUNs have the device names pair3 and pair4 and are designated by their port, CL1-A, target 0, and LU numbers, 3 and 4. The port number, target ID, and LU numbers are supplied by the raidscan command, which is described in more detail in Hitachi's documentation. For Hitachi TrueCopy and Universal Replicator, there is no entry in the MU number field.
Table 1–3 Example HORCM_DEV Section of the /etc/horcm.conf File on the Primary Cluster
# dev_group     dev_name     port number     TargetID     LU number     MU number
devgroup2       pair3        CL1-A           0            3             -
devgroup2       pair4        CL1-A           0            4             -
Sun Cluster Geographic Edition supports the use of raw-disk device groups in addition to various volume managers. When you initially configure Sun Cluster, device groups are automatically configured for each raw device in the cluster. Use this procedure to reconfigure these automatically created device groups for use with Sun Cluster Geographic Edition.
For the devices that you want to use, unconfigure the predefined device groups.
The following commands remove the predefined device groups for d7 and d8.
phys-paris-1# cldevicegroup disable dsk/d7 dsk/d8
phys-paris-1# cldevicegroup offline dsk/d7 dsk/d8
phys-paris-1# cldevicegroup delete dsk/d7 dsk/d8
Create the new raw-disk device group, including the desired devices.
Ensure that the new device group name does not contain any slashes. The following command creates a global device group rawdg containing d7 and d8.
phys-paris-1# cldevicegroup create -n phys-paris-1,phys-paris-2 \
-t rawdisk -d d7,d8 rawdg
The following commands illustrate configuring the device group on the primary cluster, configuring the same device group on the partner cluster, and adding the group to a Hitachi TrueCopy or Universal Replicator protection group.
Remove the automatically created device groups from the primary cluster.

phys-paris-1# cldevicegroup disable dsk/d7 dsk/d8
phys-paris-1# cldevicegroup offline dsk/d7 dsk/d8
phys-paris-1# cldevicegroup delete dsk/d7 dsk/d8

Create the raw-disk device group on the primary cluster.

phys-paris-1# cldevicegroup create -n phys-paris-1,phys-paris-2 \
-t rawdisk -d d7,d8 rawdg

Remove the automatically created device groups from the partner cluster.

phys-newyork-1# cldevicegroup disable dsk/d5 dsk/d6
phys-newyork-1# cldevicegroup offline dsk/d5 dsk/d6
phys-newyork-1# cldevicegroup delete dsk/d5 dsk/d6

Create the raw-disk device group on the partner cluster.

phys-newyork-1# cldevicegroup create -n phys-newyork-1,phys-newyork-2 \
-t rawdisk -d d5,d6 rawdg

Add the raw-disk device group to the protection group rawpg.

phys-paris-1# geopg create -d truecopy -p Nodelist=phys-paris-1,phys-paris-2 \
-o Primary -p cluster_dgs=rawdg -s paris-newyork-ps rawpg
When configuring the partner cluster, create a raw-disk device group of the same name as the one you created here. See How to Replicate the Configuration Information From the Primary Cluster When Using Raw-Disk Device Groups for instructions.
Once you have configured the device group on both clusters, you can use the device group name wherever one is required in Sun Cluster Geographic Edition commands such as geopg.
If you intend to mirror data service storage by using Veritas Volume Manager, you must configure a Veritas Volume Manager disk group on the primary cluster containing the LUNs in a single Hitachi TrueCopy or Universal Replicator device group, and create a mirrored volume from those LUNs. For example, the previously configured pair1 device in the devgroup1 device group on the primary cluster is mirrored with the pair2 device in the devgroup1 device group on the primary cluster. See Configuration of the /etc/horcm.conf File and Configuring the /etc/horcm.conf File on the Nodes of the Primary Cluster. For details on the configuration of Veritas disk groups and volumes, see the Veritas Volume Manager documentation.
If you are using storage-based replication, do not configure a replicated volume as a quorum device. The Sun Cluster Geographic Edition software does not support Hitachi TrueCopy or Universal Replicator S-VOL and Command Device as a Sun Cluster quorum device. See Using Storage-Based Data Replication Within a Cluster in Sun Cluster System Administration Guide for Solaris OS for more information.
Register the Veritas Volume Manager disk group that you previously configured.
Use the Sun Cluster command cldevicegroup.
For more information about this command, refer to the cldevicegroup(1CL) man page.
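This step does not show a command; registration of the oradg1 disk group on the primary cluster might look like the following, modeled on the secondary-cluster example later in this section. Verify the node list against your own configuration.

```
phys-paris-1# cldevicegroup create -t vxvm -n phys-paris-1,phys-paris-2 oradg1
```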
Create a mount directory on each node of the cluster.
phys-newyork-1# mkdir -p /mounts/sample
phys-newyork-2# mkdir -p /mounts/sample
Synchronize the Veritas Volume Manager configuration with Sun Cluster software, again by using the cldevicegroup command.
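For example, as in the secondary-cluster procedure later in this section:

```
phys-paris-1# cldevicegroup sync oradg1
```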
After configuration is complete, verify the disk group registration.
# cldevicegroup status
The Veritas Volume Manager disk group, oradg1, should be displayed in the output.
For more information about the cldevicegroup command, see the cldevicegroup(1CL) man page.
Before you configure the file system on cluster-paris, ensure that the Sun Cluster entities you require, such as application resource groups, device groups, and mount points, have already been configured.
If you are using storage-based replication, do not configure a replicated volume as a quorum device. The Sun Cluster Geographic Edition software does not support Hitachi TrueCopy or Universal Replicator S-VOL and Command Device as a Sun Cluster quorum device. See Using Storage-Based Data Replication Within a Cluster in Sun Cluster System Administration Guide for Solaris OS for more information.
Create the required file system on the vol1 volume at the command line.
Add an entry to the /etc/vfstab file that contains information such as the mount location.
Whether the file system is to be mounted locally or globally depends on various factors, such as your performance requirements, or the type of application resource group you are using.
You must set the mount at boot field in this file to no. This value prevents the file system from mounting on the secondary cluster at cluster startup. Instead, the Sun Cluster software and the Sun Cluster Geographic Edition framework mount the file system by using the HAStoragePlus resource when the application is brought online on the primary cluster. If the file system is mounted on the secondary cluster, data is not replicated from the primary cluster to the secondary cluster.
Add the HAStoragePlus resource to the application resource group, apprg1.
Adding the resource to the application resource group ensures that the necessary file systems are remounted before the application is brought online.
For more information about the HAStoragePlus resource type, refer to the Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
This example assumes that the apprg1 resource group already exists.
Create a UNIX file system (UFS).
phys-paris-1# newfs /dev/vx/rdsk/oradg1/vol1
The following entry is created in the /etc/vfstab file:
/dev/vx/dsk/oradg1/vol1 /dev/vx/rdsk/oradg1/vol1 /mounts/sample ufs 2 no logging
Add the HAStoragePlus resource type.
phys-paris-1# clresource create -g apprg1 -t SUNW.HAStoragePlus \
-p FilesystemMountPoints=/mounts/sample -p Affinityon=TRUE \
-p GlobalDevicePaths=oradg1 rs-has
This section describes the steps that you must complete on the secondary cluster before you can configure Hitachi TrueCopy or Universal Replicator data replication in the Sun Cluster Geographic Edition software.
For more information about how to configure the /etc/horcm.conf file, see the documentation for Hitachi TrueCopy and Universal Replicator.
On each node of the secondary cluster, you must configure the /etc/horcm.conf file with the same Hitachi TrueCopy or Universal Replicator device group names and device names that are configured on the primary cluster, and assign them to LUNs and to journal volumes on the local shared storage array.
Table 1–4 and Table 1–5 show the entries in the /etc/horcm.conf file on the nodes of the secondary cluster for the device groups configured on the primary cluster in Configuring the /etc/horcm.conf File on the Nodes of the Primary Cluster. Table 1–4 shows the HORCM_LDEV parameter configured with two locally attached LUNs, designated by their serial numbers and logical device (ldev) numbers, and associated with a journal ID, as they were on the primary cluster.
If you want to ensure the consistency of replicated data with Hitachi Universal Replicator on both the primary cluster and the secondary cluster, you must specify a journal volume ID in the third property configuration field of HORCM_LDEV for each device in a Hitachi Universal Replicator device group. Otherwise, journaling does not occur and Hitachi Universal Replicator's functionality in Sun Cluster Geographic Edition configurations is no greater than the functionality of Hitachi TrueCopy.
Table 1–4 Example HORCM_LDEV Section of the /etc/horcm.conf File on the Secondary Cluster

# dev_group     dev_name     serial#:jid#     ldev
devgroup1       pair1        10132:1          00:14
devgroup1       pair2        10132:1          00:15
The following table shows the HORCM_DEV parameter configured with two LUNs designated by their port, CL1-C, target 0, and LU numbers 22 and 23.
Table 1–5 Example HORCM_DEV Section of the /etc/horcm.conf File on the Secondary Cluster
# dev_group     dev_name     port number     TargetID     LU number     MU number
devgroup2       pair3        CL1-C           0            22
devgroup2       pair4        CL1-C           0            23
After you have configured the /etc/horcm.conf file on the secondary cluster, you can view the status of the pairs by using the pairdisplay command as follows:
phys-paris-1# pairdisplay -g devgroup1
Group PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
devgroup1 pair1(L) (CL1-A , 0, 1) 54321   1..SMPL ---- ------,----- ---- -
devgroup1 pair1(R) (CL1-C , 0, 20)12345 609..SMPL ---- ------,----- ---- -
devgroup1 pair2(L) (CL1-A , 0, 2) 54321   2..SMPL ---- ------,----- ---- -
devgroup1 pair2(R) (CL1-C , 0, 21)12345 610..SMPL ---- ------,----- ---- -
Next, you need to configure any volume manager, the Sun Cluster device groups, and the highly available cluster file system. This process is slightly different depending upon whether you are using Veritas Volume Manager or raw-disk device groups. The following procedures provide instructions:
If you are using storage-based replication, do not configure a replicated volume as a quorum device. The Sun Cluster Geographic Edition software does not support Hitachi TrueCopy or Universal Replicator S-VOL and Command Device as a Sun Cluster quorum device. See Using Storage-Based Data Replication Within a Cluster in Sun Cluster System Administration Guide for Solaris OS for more information.
Start replication for the devgroup1 device group.
phys-paris-1# paircreate -g devgroup1 -vl -f async
phys-paris-1# pairdisplay -g devgroup1
Group PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
devgroup1 pair1(L) (CL1-A , 0, 1) 54321   1..P-VOL COPY ASYNC ,12345 609 -
devgroup1 pair1(R) (CL1-C , 0, 20)12345 609..S-VOL COPY ASYNC ,----- 1 -
devgroup1 pair2(L) (CL1-A , 0, 2) 54321   2..P-VOL COPY ASYNC ,12345 610 -
devgroup1 pair2(R) (CL1-C , 0, 21)12345 610..S-VOL COPY ASYNC ,----- 2 -
Wait for the state of the pair to become PAIR on the secondary cluster.
phys-newyork-1# pairdisplay -g devgroup1
Group PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
devgroup1 pair1(L) (CL1-C , 0, 20)12345 609..S-VOL PAIR ASYNC,-----, 1 -
devgroup1 pair1(R) (CL1-A , 0, 1) 54321   1..P-VOL PAIR ASYNC,12345, 609 -
devgroup1 pair2(L) (CL1-C , 0, 21)12345 610..S-VOL PAIR ASYNC,-----, 2 -
devgroup1 pair2(R) (CL1-A , 0, 2) 54321   2..P-VOL PAIR ASYNC,12345, 610 -
Split the pair by using the pairsplit command and confirm that the secondary volumes on cluster-newyork are writable by using the -rw option.
phys-newyork-1# pairsplit -g devgroup1 -rw
phys-newyork-1# pairdisplay -g devgroup1
Group PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
devgroup1 pair1(L) (CL1-C , 0, 20)12345 609..S-VOL SSUS ASYNC,----- 1 -
devgroup1 pair1(R) (CL1-A , 0, 1) 54321   1..P-VOL PSUS ASYNC,12345 609 W
devgroup1 pair2(L) (CL1-C , 0, 21)12345 610..S-VOL SSUS ASYNC,----- 2 -
devgroup1 pair2(R) (CL1-A , 0, 2) 54321   2..P-VOL PSUS ASYNC,12345 610 W
Import the Veritas Volume Manager disk group, oradg1.
phys-newyork-1# vxdg -C import oradg1
Verify that the Veritas Volume Manager disk group was successfully imported.
phys-newyork-1# vxdg list
Enable the Veritas Volume Manager volume.
phys-newyork-1# /usr/sbin/vxrecover -g oradg1 -s -b
Verify that the Veritas Volume Manager volumes are recognized and enabled.
phys-newyork-1# vxprint
Register the Veritas Volume Manager disk group, oradg1, in Sun Cluster.
phys-newyork-1# cldevicegroup create -t vxvm -n phys-newyork-1,phys-newyork-2 oradg1
Synchronize the volume manager information with the Sun Cluster device group and verify the output.
phys-newyork-1# cldevicegroup sync oradg1
phys-newyork-1# cldevicegroup status
Add an entry to the /etc/vfstab file on phys-newyork-1.
/dev/vx/dsk/oradg1/vol1 /dev/vx/rdsk/oradg1/vol1 /mounts/sample ufs 2 no logging
Create a mount directory on phys-newyork-1.
phys-newyork-1# mkdir -p /mounts/sample
Create an application resource group, apprg1, by using the clresourcegroup command.
phys-newyork-1# clresourcegroup create apprg1
Create the HAStoragePlus resource in apprg1.
phys-newyork-1# clresource create -g apprg1 -t SUNW.HAStoragePlus \
-p FilesystemMountPoints=/mounts/sample -p Affinityon=TRUE \
-p GlobalDevicePaths=oradg1 rs-hasp
This HAStoragePlus resource is required for Sun Cluster Geographic Edition systems, because the software relies on the resource to bring the device groups and file systems online when the protection group starts on the primary cluster.
If necessary, confirm that the application resource group is correctly configured by bringing it online and taking it offline again.
phys-newyork-1# clresourcegroup switch -emM -n phys-newyork-1 apprg1
phys-newyork-1# clresourcegroup offline apprg1
Unmount the file system.
phys-newyork-1# umount /mounts/sample
Take the Sun Cluster device group offline.
phys-newyork-1# cldevicegroup offline oradg1
Verify that the Veritas Volume Manager disk group was deported.
phys-newyork-1# vxdg list
Reestablish the Hitachi TrueCopy or Universal Replicator pair.
phys-newyork-1# pairresync -g devgroup1
phys-newyork-1# pairdisplay -g devgroup1
Group PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
devgroup1 pair1(L) (CL1-C , 0, 20)12345 609..S-VOL PAIR ASYNC,----- 1 -
devgroup1 pair1(R) (CL1-A , 0, 1) 54321   1..P-VOL PAIR ASYNC,12345 609 W
devgroup1 pair2(L) (CL1-C , 0, 21)12345 610..S-VOL PAIR ASYNC,----- 2 -
devgroup1 pair2(R) (CL1-A , 0, 2) 54321   2..P-VOL PAIR ASYNC,12345 610 W
Initial configuration on the secondary cluster is now complete.
If you are using storage-based replication, do not configure a replicated volume as a quorum device. The Sun Cluster Geographic Edition software does not support Hitachi TrueCopy or Universal Replicator S-VOL and Command Device as a Sun Cluster quorum device. See Using Storage-Based Data Replication Within a Cluster in Sun Cluster System Administration Guide for Solaris OS for more information.
Start replication for the devgroup1 device group.
phys-paris-1# paircreate -g devgroup1 -vl -f async
phys-paris-1# pairdisplay -g devgroup1
Group PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
devgroup1 pair1(L) (CL1-A , 0, 1) 54321   1..P-VOL COPY ASYNC ,12345 609 -
devgroup1 pair1(R) (CL1-C , 0, 20)12345 609..S-VOL COPY ASYNC ,----- 1 -
devgroup1 pair2(L) (CL1-A , 0, 2) 54321   2..P-VOL COPY ASYNC ,12345 610 -
devgroup1 pair2(R) (CL1-C , 0, 21)12345 610..S-VOL COPY ASYNC ,----- 2 -
Wait for the state of the pair to become PAIR on the secondary cluster.
phys-newyork-1# pairdisplay -g devgroup1
Group PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
devgroup1 pair1(L) (CL1-C , 0, 20)12345 609..S-VOL PAIR ASYNC,-----, 1 -
devgroup1 pair1(R) (CL1-A , 0, 1) 54321   1..P-VOL PAIR ASYNC,12345, 609 -
devgroup1 pair2(L) (CL1-C , 0, 21)12345 610..S-VOL PAIR ASYNC,-----, 2 -
devgroup1 pair2(R) (CL1-A , 0, 2) 54321   2..P-VOL PAIR ASYNC,12345, 610 -
Split the pair by using the pairsplit command and confirm that the secondary volumes on cluster-newyork are writable by using the -rw option.
phys-newyork-1# pairsplit -g devgroup1 -rw
phys-newyork-1# pairdisplay -g devgroup1
Group PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
devgroup1 pair1(L) (CL1-C , 0, 20)12345 609..S-VOL SSUS ASYNC,----- 1 -
devgroup1 pair1(R) (CL1-A , 0, 1) 54321   1..P-VOL PSUS ASYNC,12345 609 W
devgroup1 pair2(L) (CL1-C , 0, 21)12345 610..S-VOL SSUS ASYNC,----- 2 -
devgroup1 pair2(R) (CL1-A , 0, 2) 54321   2..P-VOL PSUS ASYNC,12345 610 W
Create a raw-disk device group on the partner cluster.
Use the same device group name that you used on the primary cluster.
You can use the same DIDs on each cluster. In the following command, the newyork cluster is the partner of the paris cluster.
phys-newyork-1# cldevicegroup disable dsk/d5 dsk/d6
phys-newyork-1# cldevicegroup offline dsk/d5 dsk/d6
phys-newyork-1# cldevicegroup delete dsk/d5 dsk/d6
phys-newyork-1# cldevicegroup create -n phys-newyork-1,phys-newyork-2 \
-t rawdisk -d d5,d6 rawdg
Check that the device group rawdg was created.
phys-newyork-1# cldevicegroup show rawdg
Synchronize the volume manager information with the Sun Cluster device group and verify the output.
phys-newyork-1# cldevicegroup sync rawdg
phys-newyork-1# cldevicegroup status
Add an entry to the /etc/vfstab file on each node of the newyork cluster.
/dev/global/dsk/d5s2 /dev/global/rdsk/d5s2 /mounts/sample ufs 2 no logging
Create a mount directory on each node of the newyork cluster.
phys-newyork-1# mkdir -p /mounts/sample
phys-newyork-2# mkdir -p /mounts/sample
Create an application resource group, apprg1, by using the clresourcegroup command.
phys-newyork-1# clresourcegroup create apprg1
Create the HAStoragePlus resource in apprg1.
phys-newyork-1# clresource create -g apprg1 -t SUNW.HAStoragePlus \
-p FilesystemMountPoints=/mounts/sample -p Affinityon=TRUE \
-p GlobalDevicePaths=rawdg rs-hasp
This HAStoragePlus resource is required for Sun Cluster Geographic Edition systems, because the software relies on the resource to bring the device groups and file systems online when the protection group starts on the primary cluster.
If necessary, confirm that the application resource group is correctly configured by bringing it online and taking it offline again.
phys-newyork-1# clresourcegroup switch -emM -n phys-newyork-1 apprg1
phys-newyork-1# clresourcegroup offline apprg1
Unmount the file system.
phys-newyork-1# umount /mounts/sample
Take the Sun Cluster device group offline.
phys-newyork-1# cldevicegroup offline rawdg
Reestablish the Hitachi TrueCopy or Universal Replicator pair.
phys-newyork-1# pairresync -g devgroup1
phys-newyork-1# pairdisplay -g devgroup1
Group PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
devgroup1 pair1(L) (CL1-C , 0, 20)12345 609..S-VOL PAIR ASYNC,----- 1 -
devgroup1 pair1(R) (CL1-A , 0, 1) 54321   1..P-VOL PAIR ASYNC,12345 609 W
devgroup1 pair2(L) (CL1-C , 0, 21)12345 610..S-VOL PAIR ASYNC,----- 2 -
devgroup1 pair2(R) (CL1-A , 0, 2) 54321   2..P-VOL PAIR ASYNC,12345 610 W
Initial configuration on the secondary cluster is now complete.