The following table lists the tasks you must perform to set up a Hitachi TrueCopy storage-based replicated device.

Table 5–2 Task Map: Administering a Hitachi TrueCopy Storage-Based Replicated Device

Task | Instructions
---|---
Install the TrueCopy software on your storage device and nodes | The documentation that shipped with your Hitachi storage device
Configure the Hitachi replication group | How to Configure a Hitachi TrueCopy Replication Group
Configure the DID device | How to Configure DID Devices for Replication Using Hitachi TrueCopy
Register the replicated group | How to Add and Register a Device Group (Solaris Volume Manager) or SPARC: How to Register a Disk Group as a Device Group (Veritas Volume Manager)
Verify the configuration | How to Verify a Hitachi TrueCopy Replicated Global Device Group Configuration
First, configure the Hitachi TrueCopy device groups on shared disks in the primary cluster. This configuration information is specified in the /etc/horcm.conf file on each of the cluster's nodes that has access to the Hitachi array. For more information about how to configure the /etc/horcm.conf file, see the Sun StorEdge SE 9900 V Series Command and Control Interface User and Reference Guide.
The name of the Sun Cluster device group that you create (Solaris Volume Manager, Veritas Volume Manager, or raw-disk) must be the same as the name of the replicated device group.
Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization on all nodes connected to the storage array.
Add the horcm entry to the /etc/services file.
horcm   9970/udp
Specify a port number and protocol name for the new entry.
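Because the entry must be identical on every node, you can script the addition. The following is a minimal sketch, not part of the official procedure; the node names and the use of root ssh between nodes are assumptions, and the port number must match the entry that you chose above.

# Sketch only: append the horcm entry on each node if it is not already present.
# Node names and root ssh access are assumptions; substitute your own values.
for node in node-1 node-2 node-3; do
    ssh root@$node 'grep -q "^horcm" /etc/services || \
        echo "horcm   9970/udp" >> /etc/services'
done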
Specify the Hitachi TrueCopy device group configuration information in the /etc/horcm.conf file.
For instructions, refer to the documentation that shipped with your TrueCopy software.
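As a preview of what the file contains, the following minimal sketch shows the two sections that define a device group. The group name, port, LU number, and remote host shown here are placeholders; complete, working examples appear later in this section.

HORCM_DEV
#dev_group    dev_name    port#    TargetID    LU#    MU#
VG01          pair1       CL1-A    0           29

HORCM_INST
#dev_group    ip_address    service
VG01          remote-node   horcm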
Start the TrueCopy CCI daemon by running the horcmstart.sh command on all nodes.
# /usr/bin/horcmstart.sh
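If you prefer to start the daemon on every node from a single host, a loop such as the following can help. This is a sketch only; it assumes passwordless root ssh between the cluster nodes and uses clnode list to obtain the node names.

# Sketch only: start the TrueCopy CCI daemon on each cluster node.
for node in $(clnode list); do
    ssh root@$node /usr/bin/horcmstart.sh
done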
If you have not already created the replica pairs, create them now.
Use the paircreate command to create your replica pairs with the desired fence level. For instructions on creating the replica pairs, refer to your TrueCopy documentation.
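For illustration only, an invocation might look like the following. The group name, the -vl option (which makes the local volumes the primary volumes), and the data fence level are assumptions; verify the exact options against your TrueCopy CCI documentation.

# paircreate -g VG01 -vl -f data   # hypothetical values; confirm options in your CCI guide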
On each node configured with replicated devices, verify that data replication is set up correctly by using the pairdisplay command.
# pairdisplay -g group-name
Group       PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
group-name  pair1(L)     (CL1-C , 0, 9) 54321   58..P-VOL PAIR DATA ,12345 29   -
group-name  pair1(R)     (CL1-A , 0, 29)12345   29..S-VOL PAIR DATA ,----- 58   -
Verify that all nodes can master the replication groups.
Determine which node contains the primary replica and which node contains the secondary replica by using the pairdisplay command.
# pairdisplay -g group-name
Group       PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
group-name  pair1(L)     (CL1-C , 0, 9) 54321   58..P-VOL PAIR DATA ,12345 29   -
group-name  pair1(R)     (CL1-A , 0, 29)12345   29..S-VOL PAIR DATA ,----- 58   -
The node with the local (L) device in the P-VOL state contains the primary replica and the node with the local (L) device in the S-VOL state contains the secondary replica.
Make the secondary node the master by running the horctakeover command on the node that contains the secondary replica.
# horctakeover -g group-name
Wait for the initial data copy to complete before proceeding to the next step.
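Rather than polling with pairdisplay, you can block until the pair reaches the PAIR state by using the CCI pairevtwait command. The following one-liner is a sketch; the 3600-second timeout is an assumption, and you should verify the option syntax in your TrueCopy documentation.

# pairevtwait -g group-name -s pair -t 3600   # sketch; timeout value is an assumption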
Verify that the node that performed the horctakeover now has the local (L) device in the P-VOL state.
# pairdisplay -g group-name
Group       PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
group-name  pair1(L)     (CL1-C , 0, 9) 54321   58..S-VOL PAIR DATA ,12345 29   -
group-name  pair1(R)     (CL1-A , 0, 29)12345   29..P-VOL PAIR DATA ,----- 58   -
Run the horctakeover command on the node that originally contained the primary replica.
# horctakeover -g group-name
Verify that the primary node has changed back to the original configuration by running the pairdisplay command.
# pairdisplay -g group-name
Group       PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
group-name  pair1(L)     (CL1-C , 0, 9) 54321   58..P-VOL PAIR DATA ,12345 29   -
group-name  pair1(R)     (CL1-A , 0, 29)12345   29..S-VOL PAIR DATA ,----- 58   -
Continue the configuration of your replicated device by following the instructions in How to Configure DID Devices for Replication Using Hitachi TrueCopy.
After you have configured a device group for your replicated device, you must configure the device identifier (DID) driver that the replicated device uses.
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.
Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization on any node of the cluster.
Verify that the horcm daemon is running on all nodes.
The following command will start the daemon if it is not running. The system will display a message if the daemon is already running.
# /usr/bin/horcmstart.sh
Determine which node contains the secondary replica by running the pairdisplay command.
# pairdisplay -g group-name
Group       PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#,P/S,Status,Fence,Seq#,P-LDEV# M
group-name  pair1(L)     (CL1-C , 0, 9) 54321   58..P-VOL PAIR DATA ,12345 29   -
group-name  pair1(R)     (CL1-A , 0, 29)12345   29..S-VOL PAIR DATA ,----- 58   -
The node with the local (L) device in the S-VOL state contains the secondary replica.
On the node with the secondary replica (as determined in the previous step), configure the DID devices for use with storage-based replication.
This command combines the two separate DID instances for the device replica pairs into a single, logical DID instance. The single instance enables the device to be used by volume management software from both sides.
If multiple nodes are connected to the secondary replica, run this command on only one of these nodes.
# cldevice replicate -D primary-replica-nodename -S secondary-replica-nodename

-D primary-replica-nodename
Specifies the name of the remote node that contains the primary replica.

-S secondary-replica-nodename
Specifies the name of the remote node that contains the secondary replica. By default, the current node is the source node; use the -S option to specify a different source node.
Verify that the DID instances have been combined.
# cldevice list -v logical_DID_device
Verify that the TrueCopy replication is set.
# cldevice show logical_DID_device
The command output should indicate that TrueCopy is the replication type.
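The relevant portion of the output looks roughly like the following sketch. The exact field names and values vary by release and configuration, so treat this as an illustration rather than a literal transcript.

# cldevice show logical_DID_device
=== DID Device Instances ===
DID Device Name:       /dev/did/rdsk/dN
  Replication:         truecopy
  default_fencing:     global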
If the DID remapping did not successfully combine all replicated devices, combine the individual replicated devices manually.
Exercise extreme care when combining DID instances manually. Improper device remapping can cause data corruption.
On all nodes that contain the secondary replica, run the cldevice combine command.
# cldevice combine -d destination-instance source-instance
destination-instance
The remote DID instance, which corresponds to the primary replica.

source-instance
The local DID instance, which corresponds to the secondary replica.
Verify that the DID remapping occurred successfully.
# cldevice list destination-instance source-instance
One of the DID instances should not be listed.
On all nodes, verify that the DID devices for all combined DID instances are accessible.
# cldevice list -v
To complete the configuration of your replicated device group, perform the steps in the following procedures.
How to Add and Register a Device Group (Solaris Volume Manager) or SPARC: How to Register a Disk Group as a Device Group (Veritas Volume Manager)
When registering the device group, make sure to give it the same name as the TrueCopy replication group.
How to Verify a Hitachi TrueCopy Replicated Global Device Group Configuration
Before you verify the global device group, you must first create it. You can use a Solaris Volume Manager device group, a Veritas Volume Manager device group, or raw-disk device group. For information about creating a Solaris Volume Manager device group, see How to Add and Register a Device Group (Solaris Volume Manager). For information about creating a Veritas Volume Manager device group, see SPARC: How to Create a New Disk Group When Encapsulating Disks (Veritas Volume Manager).
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.
Verify that the primary device group corresponds to the same node as the node that contains the primary replica.
# pairdisplay -g group-name
# cldevicegroup status -n nodename group-name
Verify that the replication property is set for the device group.
# cldevicegroup show -n nodename group-name
Verify that the replicated property is set for the device.
# /usr/cluster/bin/cldevice status [-s state] [-n node[,...]] [+ | [disk-device]]
Perform a trial switchover to ensure that the device groups are configured correctly and the replicas can move between nodes.
If the device group is offline, bring it online.
# cldevicegroup switch -n nodename group-name
-n nodename
The node to which the device group is switched. This node becomes the new primary.
Verify that the switchover was successful by comparing the output of the following commands.
# pairdisplay -g group-name
# cldevicegroup status -n nodename group-name
This example completes the Sun Cluster-specific steps necessary to set up TrueCopy replication in your cluster. The example assumes that you have already performed the following tasks:
Set up your Hitachi LUNs
Installed the TrueCopy software on your storage device and cluster nodes
Configured the replication pairs on your cluster nodes
For instructions about configuring your replication pairs, see How to Configure a Hitachi TrueCopy Replication Group.
This example involves a three-node cluster that uses TrueCopy. The cluster is spread across two remote sites, with two nodes at one site and one node at the other site. Each site has its own Hitachi storage device.
The following examples show the TrueCopy /etc/horcm.conf configuration file on each node.
On Node 1 and Node 2 (the two files are identical):

HORCM_DEV
#dev_group    dev_name    port#    TargetID    LU#    MU#
VG01          pair1       CL1-A    0           29
VG01          pair2       CL1-A    0           30
VG01          pair3       CL1-A    0           31

HORCM_INST
#dev_group    ip_address    service
VG01          node-3        horcm
On Node 3:

HORCM_DEV
#dev_group    dev_name    port#    TargetID    LU#    MU#
VG01          pair1       CL1-C    0           09
VG01          pair2       CL1-C    0           10
VG01          pair3       CL1-C    0           11

HORCM_INST
#dev_group    ip_address    service
VG01          node-1        horcm
VG01          node-2        horcm
In the preceding examples, three LUNs are replicated between the two sites. The LUNs are all in a replication group named VG01. The pairdisplay command verifies this information and shows that Node 3 has the primary replica.
On Node 1 and Node 2 (the output is identical):

# pairdisplay -g VG01
Group  PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#.P/S,Status,Fence, Seq#,P-LDEV# M
VG01   pair1(L)     (CL1-A , 0, 29)61114   29..S-VOL PAIR DATA  ,-----   58  -
VG01   pair1(R)     (CL1-C , 0,  9)20064   58..P-VOL PAIR DATA  ,61114   29  -
VG01   pair2(L)     (CL1-A , 0, 30)61114   30..S-VOL PAIR DATA  ,-----   59  -
VG01   pair2(R)     (CL1-C , 0, 10)20064   59..P-VOL PAIR DATA  ,61114   30  -
VG01   pair3(L)     (CL1-A , 0, 31)61114   31..S-VOL PAIR DATA  ,-----   60  -
VG01   pair3(R)     (CL1-C , 0, 11)20064   60..P-VOL PAIR DATA  ,61114   31  -
On Node 3:

# pairdisplay -g VG01
Group  PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#.P/S,Status,Fence, Seq#,P-LDEV# M
VG01   pair1(L)     (CL1-C , 0,  9)20064   58..P-VOL PAIR DATA  ,61114   29  -
VG01   pair1(R)     (CL1-A , 0, 29)61114   29..S-VOL PAIR DATA  ,-----   58  -
VG01   pair2(L)     (CL1-C , 0, 10)20064   59..P-VOL PAIR DATA  ,61114   30  -
VG01   pair2(R)     (CL1-A , 0, 30)61114   30..S-VOL PAIR DATA  ,-----   59  -
VG01   pair3(L)     (CL1-C , 0, 11)20064   60..P-VOL PAIR DATA  ,61114   31  -
VG01   pair3(R)     (CL1-A , 0, 31)61114   31..S-VOL PAIR DATA  ,-----   60  -
To see which disks are being used, use the -fd option of the pairdisplay command as shown in the following examples.
On Node 1:

# pairdisplay -fd -g VG01
Group  PairVol(L/R) Device_File                            ,Seq#,LDEV#.P/S,Status,Fence,Seq#,P-LDEV# M
VG01   pair1(L) c6t500060E8000000000000EEBA0000001Dd0s2 61114  29..S-VOL PAIR DATA ,-----  58  -
VG01   pair1(R) c5t50060E800000000000004E600000003Ad0s2 20064  58..P-VOL PAIR DATA ,61114  29  -
VG01   pair2(L) c6t500060E8000000000000EEBA0000001Ed0s2 61114  30..S-VOL PAIR DATA ,-----  59  -
VG01   pair2(R) c5t50060E800000000000004E600000003Bd0s2 20064  59..P-VOL PAIR DATA ,61114  30  -
VG01   pair3(L) c6t500060E8000000000000EEBA0000001Fd0s2 61114  31..S-VOL PAIR DATA ,-----  60  -
VG01   pair3(R) c5t50060E800000000000004E600000003Cd0s2 20064  60..P-VOL PAIR DATA ,61114  31  -
On Node 2:

# pairdisplay -fd -g VG01
Group  PairVol(L/R) Device_File                            ,Seq#,LDEV#.P/S,Status,Fence,Seq#,P-LDEV# M
VG01   pair1(L) c5t500060E8000000000000EEBA0000001Dd0s2 61114  29..S-VOL PAIR DATA ,-----  58  -
VG01   pair1(R) c5t50060E800000000000004E600000003Ad0s2 20064  58..P-VOL PAIR DATA ,61114  29  -
VG01   pair2(L) c5t500060E8000000000000EEBA0000001Ed0s2 61114  30..S-VOL PAIR DATA ,-----  59  -
VG01   pair2(R) c5t50060E800000000000004E600000003Bd0s2 20064  59..P-VOL PAIR DATA ,61114  30  -
VG01   pair3(L) c5t500060E8000000000000EEBA0000001Fd0s2 61114  31..S-VOL PAIR DATA ,-----  60  -
VG01   pair3(R) c5t50060E800000000000004E600000003Cd0s2 20064  60..P-VOL PAIR DATA ,61114  31  -
On Node 3:

# pairdisplay -fd -g VG01
Group  PairVol(L/R) Device_File                            ,Seq#,LDEV#.P/S,Status,Fence,Seq#,P-LDEV# M
VG01   pair1(L) c5t50060E800000000000004E600000003Ad0s2 20064  58..P-VOL PAIR DATA ,61114  29  -
VG01   pair1(R) c6t500060E8000000000000EEBA0000001Dd0s2 61114  29..S-VOL PAIR DATA ,-----  58  -
VG01   pair2(L) c5t50060E800000000000004E600000003Bd0s2 20064  59..P-VOL PAIR DATA ,61114  30  -
VG01   pair2(R) c6t500060E8000000000000EEBA0000001Ed0s2 61114  30..S-VOL PAIR DATA ,-----  59  -
VG01   pair3(L) c5t50060E800000000000004E600000003Cd0s2 20064  60..P-VOL PAIR DATA ,61114  31  -
VG01   pair3(R) c6t500060E8000000000000EEBA0000001Fd0s2 61114  31..S-VOL PAIR DATA ,-----  60  -
These examples show that the following disks are being used:
On Node 1:
c6t500060E8000000000000EEBA0000001Dd0s2
c6t500060E8000000000000EEBA0000001Ed0s2
c6t500060E8000000000000EEBA0000001Fd0s2
On Node 2:
c5t500060E8000000000000EEBA0000001Dd0s2
c5t500060E8000000000000EEBA0000001Ed0s2
c5t500060E8000000000000EEBA0000001Fd0s2
On Node 3:
c5t50060E800000000000004E600000003Ad0s2
c5t50060E800000000000004E600000003Bd0s2
c5t50060E800000000000004E600000003Cd0s2
To see the DID devices that correspond to these disks, use the cldevice list command as shown in the following examples.
# cldevice list -v
DID Device  Full Device Path
----------  ----------------
1           node-1:/dev/rdsk/c0t0d0 /dev/did/rdsk/d1
2           node-1:/dev/rdsk/c0t6d0 /dev/did/rdsk/d2
11          node-1:/dev/rdsk/c6t500060E8000000000000EEBA00000020d0 /dev/did/rdsk/d11
11          node-2:/dev/rdsk/c5t500060E8000000000000EEBA00000020d0 /dev/did/rdsk/d11
12          node-1:/dev/rdsk/c6t500060E8000000000000EEBA0000001Fd0 /dev/did/rdsk/d12
12          node-2:/dev/rdsk/c5t500060E8000000000000EEBA0000001Fd0 /dev/did/rdsk/d12
13          node-1:/dev/rdsk/c6t500060E8000000000000EEBA0000001Ed0 /dev/did/rdsk/d13
13          node-2:/dev/rdsk/c5t500060E8000000000000EEBA0000001Ed0 /dev/did/rdsk/d13
14          node-1:/dev/rdsk/c6t500060E8000000000000EEBA0000001Dd0 /dev/did/rdsk/d14
14          node-2:/dev/rdsk/c5t500060E8000000000000EEBA0000001Dd0 /dev/did/rdsk/d14
18          node-3:/dev/rdsk/c0t0d0 /dev/did/rdsk/d18
19          node-3:/dev/rdsk/c0t6d0 /dev/did/rdsk/d19
20          node-3:/dev/rdsk/c5t50060E800000000000004E6000000013d0 /dev/did/rdsk/d20
21          node-3:/dev/rdsk/c5t50060E800000000000004E600000003Dd0 /dev/did/rdsk/d21
22          node-3:/dev/rdsk/c5t50060E800000000000004E600000003Cd0 /dev/did/rdsk/d22
23          node-3:/dev/rdsk/c5t50060E800000000000004E600000003Bd0 /dev/did/rdsk/d23
24          node-3:/dev/rdsk/c5t50060E800000000000004E600000003Ad0 /dev/did/rdsk/d24
When combining the DID instances for each pair of replicated devices, cldevice list should combine DID instance 12 with 22, instance 13 with 23, and instance 14 with 24. Because Node 3 has the primary replica, run the cldevice replicate command from either Node 1 or Node 2. Always combine the instances from a node that has the secondary replica. Run this command from a single node only, not on both nodes.
The following example shows the output when combining DID instances by running the command on Node 1.
# cldevice replicate -D node-3
Remapping instances for devices replicated with node-3...
VG01 pair1 L node-1:/dev/rdsk/c6t500060E8000000000000EEBA0000001Dd0
VG01 pair1 R node-3:/dev/rdsk/c5t50060E800000000000004E600000003Ad0
Combining instance 14 with 24
VG01 pair2 L node-1:/dev/rdsk/c6t500060E8000000000000EEBA0000001Ed0
VG01 pair2 R node-3:/dev/rdsk/c5t50060E800000000000004E600000003Bd0
Combining instance 13 with 23
VG01 pair3 L node-1:/dev/rdsk/c6t500060E8000000000000EEBA0000001Fd0
VG01 pair3 R node-3:/dev/rdsk/c5t50060E800000000000004E600000003Cd0
Combining instance 12 with 22
The cldevice list output now shows the same DID instance for the LUNs from both sites. Having the same DID instance makes each replica pair look like a single DID device, as the following example shows.
# cldevice list -v
DID Device  Full Device Path
----------  ----------------
1           node-1:/dev/rdsk/c0t0d0 /dev/did/rdsk/d1
2           node-1:/dev/rdsk/c0t6d0 /dev/did/rdsk/d2
11          node-1:/dev/rdsk/c6t500060E8000000000000EEBA00000020d0 /dev/did/rdsk/d11
11          node-2:/dev/rdsk/c5t500060E8000000000000EEBA00000020d0 /dev/did/rdsk/d11
18          node-3:/dev/rdsk/c0t0d0 /dev/did/rdsk/d18
19          node-3:/dev/rdsk/c0t6d0 /dev/did/rdsk/d19
20          node-3:/dev/rdsk/c5t50060E800000000000004E6000000013d0 /dev/did/rdsk/d20
21          node-3:/dev/rdsk/c5t50060E800000000000004E600000003Dd0 /dev/did/rdsk/d21
22          node-1:/dev/rdsk/c6t500060E8000000000000EEBA0000001Fd0 /dev/did/rdsk/d22
22          node-2:/dev/rdsk/c5t500060E8000000000000EEBA0000001Fd0 /dev/did/rdsk/d22
22          node-3:/dev/rdsk/c5t50060E800000000000004E600000003Cd0 /dev/did/rdsk/d22
23          node-1:/dev/rdsk/c6t500060E8000000000000EEBA0000001Ed0 /dev/did/rdsk/d23
23          node-2:/dev/rdsk/c5t500060E8000000000000EEBA0000001Ed0 /dev/did/rdsk/d23
23          node-3:/dev/rdsk/c5t50060E800000000000004E600000003Bd0 /dev/did/rdsk/d23
24          node-1:/dev/rdsk/c6t500060E8000000000000EEBA0000001Dd0 /dev/did/rdsk/d24
24          node-2:/dev/rdsk/c5t500060E8000000000000EEBA0000001Dd0 /dev/did/rdsk/d24
24          node-3:/dev/rdsk/c5t50060E800000000000004E600000003Ad0 /dev/did/rdsk/d24
The next step is to create the volume manager device group. Issue these commands from the node that has the primary replica, in this example Node 3. Give the device group the same name as the replica group, as the following example shows.
# metaset -s VG01 -ah phys-deneb-3
# metaset -s VG01 -ah phys-deneb-1
# metaset -s VG01 -ah phys-deneb-2
# metaset -s VG01 -a /dev/did/rdsk/d22
# metaset -s VG01 -a /dev/did/rdsk/d23
# metaset -s VG01 -a /dev/did/rdsk/d24
# metaset
Set name = VG01, Set number = 1

Host                Owner
  phys-deneb-3       Yes
  phys-deneb-1
  phys-deneb-2

Drive Dbase
d22   Yes
d23   Yes
d24   Yes
At this point the device group is usable, metadevices can be created, and the device group can be moved to any of the three nodes. However, to make switchovers and failovers more efficient, run cldevicegroup sync to mark the device group as replicated in the cluster configuration.
# cldevicegroup sync VG01
# cldevicegroup show VG01
=== Device Groups ===

Device Group Name:    VG01
  Type:               SVM
  failback:           no
  Node List:          phys-deneb-3, phys-deneb-1, phys-deneb-2
  preferenced:        yes
  numsecondaries:     1
  device names:       VG01
  Replication type:   truecopy
Configuration of the replication group is complete with this step. To verify that the configuration was successful, perform the steps in How to Verify a Hitachi TrueCopy Replicated Global Device Group Configuration.