Oracle Solaris Cluster Geographic Edition Data Replication Guide for EMC Symmetrix Remote Data Facility (Oracle Solaris Cluster 4.1)
Initial Configuration of SRDF Software
This section describes the steps you need to perform to configure SRDF software on the primary and secondary clusters. It also includes information about the preconditions for creating SRDF protection groups. This section contains the following information:
Enabling the SRDF -symforce Option
Configuring Data Replication With SRDF Software on the Primary Cluster
Configuring Data Replication With SRDF Software on the Secondary Cluster
Initial configuration of the primary and secondary clusters includes the following:
Configuring an SRDF device group, devgroup1, with the required number of disks
If using a raw-disk device group, configuring a raw-disk group, rawdg
Configuring the file system, which includes creating the file system, creating mount points, and adding entries to the /etc/vfstab file (a sample entry follows this list)
Creating an application resource group, apprg1, which contains an HAStoragePlus resource
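The following /etc/vfstab line is a minimal sketch of such an entry for a UFS file system mounted at /mounts/sample, assuming the replicated LUN corresponds to DID device d5; the device paths, mount point, and mount options in your configuration will differ.
/dev/global/dsk/d5s2  /dev/global/rdsk/d5s2  /mounts/sample  ufs  2  no  logging
In this sketch the mount-at-boot field is set to no because the file system is mounted by the HAStoragePlus resource rather than at boot time.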
Geographic Edition software supports the hardware configurations that are supported by the Oracle Solaris Cluster software. Contact your Oracle service representative for information about current supported Oracle Solaris Cluster configurations.
Enabling the SRDF -symforce Option
All nodes of both clusters must have the SRDF property SYMAPI_ALLOW_RDF_SYMFORCE enabled. This setting is required for proper function of certain geopg operations. Ensure that the SRDF /var/symapi/config/options file has the following entry:
SYMAPI_ALLOW_RDF_SYMFORCE=TRUE
See your EMC Symmetrix Remote Data Facility documentation for more information.
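One quick way to confirm the setting on each node is to search the options file for the entry; this check is a convenience suggestion rather than a required step.
phys-paris-1# grep SYMAPI_ALLOW_RDF_SYMFORCE /var/symapi/config/options
SYMAPI_ALLOW_RDF_SYMFORCE=TRUE
Repeat the check on every node of both clusters, and add the entry if it is missing.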
Configuring Data Replication With SRDF Software on the Primary Cluster
This section describes the steps you must perform on the primary cluster before you can configure SRDF data replication with Geographic Edition software. The following information is in this section:
Checking the Configuration of SRDF Devices
How to Create an RDF1 Device Group
Checking the Configuration of SRDF Devices
SRDF devices are configured in pairs. The mirroring relationship between the pairs becomes operational as soon as the SRDF links are online. If dynamic SRDF is available, you can change the relationship between the R1 and R2 volumes in your device pairings on the fly, without requiring a BIN file configuration change.
Note - Do not configure a replicated volume as a quorum device. Locate any quorum devices on a shared, unreplicated volume or use a quorum server.
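To confirm that no existing quorum device sits on a replicated volume, you can review the current quorum configuration; the following check is a convenience sketch rather than a documented step.
phys-paris-1# clquorum show
Compare the quorum devices listed in the output against the LUNs that belong to your SRDF device groups.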
The EMC Symmetrix database file on each host stores configuration information about the EMC Symmetrix units attached to the host. The EMC Symmetrix global memory stores information about the pair state of operating EMC SRDF devices.
EMC SRDF device groups are the entities that you add to Geographic Edition protection groups to enable the Geographic Edition software to manage EMC Symmetrix pairs.
The SRDF device group can hold one of two types of devices:
RDF1 source device, which acts as the primary
RDF2 target device, which acts as the secondary
As a result, you can create two types of SRDF device groups: RDF1 and RDF2. An SRDF device can be moved to another device group only if the source and destination groups are of the same group type.
You can create RDF1 device groups on a host attached to the EMC Symmetrix software that contains the RDF1 devices. You can create RDF2 device groups on a host attached to the EMC Symmetrix software that contains the RDF2 devices. You can perform the same SRDF operations from the primary or secondary cluster, using the device group that was built on that side.
When you add remote data facility devices to a device group, all of the devices must adhere to the following restrictions:
The device must be an SRDF device.
The device must be either an RDF1 or RDF2 type device, as specified by the device group type.
The device must belong to the same SRDF group number.
The SRDF device group configuration must be the same on all nodes of both the primary and secondary clusters. For example, if you have a device group DG1, which is configured as RDF1, on node1 of clusterA, then node2 of clusterA should also have a device group called DG1 with the same disk set. Also, clusterB should have an SRDF device group called DG1, which is configured as RDF2, defined on all nodes.
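A simple way to compare the device group configuration across nodes, assuming the Solutions Enabler (SYMCLI) commands are installed on each node, is to run the same queries everywhere and compare the output; DG1 here is the example group name used above.
phys-paris-1# symdg list
phys-paris-1# symdg show DG1
On every node of both clusters, the group name, group type (RDF1 on the primary cluster, RDF2 on the secondary cluster), and member devices should match the scheme described above.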
Before adding SRDF devices to a device group, use the symrdf list command to list the EMC Symmetrix devices configured on the EMC Symmetrix units attached to your host.
# symrdf list
By default, the command displays devices by their EMC Symmetrix device name, a hexadecimal number that the EMC Symmetrix software assigns to each physical device. To display devices by their physical host name, use the pd argument with the symrdf command.
# symrdf list pd
How to Create an RDF1 Device Group
The following steps create a device group of type RDF1 and add an RDF1 EMC Symmetrix device to the group. First, create the device group:
phys-paris-1# symdg create devgroup1 -type rdf1
Then add an RDF1 EMC Symmetrix device to the device group. A default logical name of the form DEV001 is assigned to the RDF1 device.
phys-paris-1# symld -g devgroup1 -sid 3264 add dev 085
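To confirm that the device was added, you can display the device group; this verification is a suggestion rather than a documented step.
phys-paris-1# symdg show devgroup1
The output lists the group type and the member devices with their logical names, such as DEV001.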
Next Steps
Create the Oracle Solaris Cluster device groups, file systems, or ZFS storage pools you want to use, specifying the LUNs in the SRDF device group. You also need to create an HAStoragePlus resource for the device group, file system, or ZFS storage pool you use. A sketch of these steps follows the requirements below.
If you create a ZFS storage pool, observe the following requirements and restrictions:
Mirrored and unmirrored ZFS storage pools are supported.
ZFS storage pool spares are not supported with storage-based replication in a Geographic Edition configuration. The information about the spare that is stored in the storage pool results in the storage pool being incompatible with the remote system after it has been replicated.
ZFS can be used with either Synchronous or Asynchronous mode. If you use Asynchronous mode, ensure that SRDF is configured to preserve write ordering, even after a rolling failure.
For more information about creating device groups, file systems, and ZFS storage pools in a cluster configuration, see Oracle Solaris Cluster System Administration Guide. For information about creating an HAStoragePlus resource, see Oracle Solaris Cluster Data Services Planning and Administration Guide.
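The following commands are a minimal sketch of these steps for a ZFS storage pool, assuming the SRDF LUN corresponds to DID device d5 and using the hypothetical names srdfpool for the pool and hasp-rs for the HAStoragePlus resource; apprg1 is the application resource group described earlier. Adapt the names and devices to your configuration.
phys-paris-1# zpool create srdfpool /dev/did/dsk/d5s2
phys-paris-1# clresourcetype register SUNW.HAStoragePlus
phys-paris-1# clresourcegroup create apprg1
phys-paris-1# clresource create -g apprg1 -t SUNW.HAStoragePlus -p Zpools=srdfpool hasp-rs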
Configuring Data Replication With SRDF Software on the Secondary Cluster
This section describes the steps you must complete on the secondary cluster before you can configure SRDF data replication in Geographic Edition software.
How to Create the RDF2 Device Group on the Secondary Cluster
Before You Begin
Before you can issue SRDF commands on the secondary cluster, you need to create an RDF2 type device group on the secondary cluster that contains the same definitions as the RDF1 device group.
Note - Do not configure a replicated volume as a quorum device. Locate any quorum devices on a shared, unreplicated volume or use a quorum server.
On the primary cluster, export the RDF1 device group definition to a text file.
phys-paris-1# symdg export devgroup1 -f devgroup1.txt -rdf
Copy the file to each node of the secondary cluster.
phys-paris-1# rcp devgroup1.txt phys-newyork-1:/.
phys-paris-1# rcp devgroup1.txt phys-newyork-2:/.
Run the following command on each node in the newyork cluster.
# symdg import devgroup1 -f devgroup1.txt
Adding standard device 054 as DEV001...
Adding standard device 055 as DEV002...
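To confirm that the import created the group with the expected type, you can list the device groups on each newyork node; this check is a suggestion rather than a documented step.
phys-newyork-1# symdg list
The imported group devgroup1 should appear with type RDF2 and the same number of devices on every node.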
Next, you need to configure any volume manager, the Oracle Solaris Cluster device groups, and the highly available cluster file system.
Establish the SRDF device group pairs so that the data on the source (R1) devices is replicated to the target (R2) devices.
phys-paris-1# symrdf -g devgroup1 -noprompt establish
An RDF 'Incremental Establish' operation execution is in progress for device group 'devgroup1'. Please wait...
Write Disable device(s) on RA at target (R2)..............Done.
Suspend RDF link(s).......................................Done.
Mark target (R2) devices to refresh from source (R1)......Started.
Device: 054 ............................................. Marked.
Mark target (R2) devices to refresh from source (R1)......Done.
Suspend RDF link(s).......................................Done.
Merge device track tables between source and target.......Started.
Device: 09C ............................................. Merged.
Merge device track tables between source and target.......Done.
Resume RDF link(s)........................................Done.
The RDF 'Incremental Establish' operation successfully initiated for device group 'devgroup1'.
Confirm that the device group pairs are synchronized.
phys-newyork-1# symrdf -g devgroup1 verify
All devices in the RDF group 'devgroup1' are in the 'Synchronized' state.
Split the device group pairs so that the target (R2) devices become writable on the secondary cluster.
phys-paris-1# symrdf -g devgroup1 -noprompt split
An RDF 'Split' operation execution is in progress for device group 'devgroup1'. Please wait...
Suspend RDF link(s).......................................Done.
Read/Write Enable device(s) on RA at target (R2)..........Done.
The RDF 'Split' operation successfully executed for device group 'devgroup1'.
Map the EMC Symmetrix device names to Oracle Solaris Cluster DID device names. You use these mappings when you create the raw-disk device group.
phys-paris-1# symrdf -g devgroup1 query
…
DEV001  00DD RW  0  3 NR  00DD RW  0  0 S..  Split
DEV002  00DE RW  0  3 NR  00DE RW  0  0 S..  Split
…
phys-paris-1# symdev show 00DD
…
Symmetrix ID:            000187990182
Device Physical Name   : /dev/rdsk/c6t5006048ACCC81DD0d18s2
Device Symmetrix Name  : 00DD
phys-paris-1# cldevice show c6t5006048ACCC81DD0d18
=== DID Device Instances ===
DID Device Name:      /dev/did/rdsk/d5
  Full Device Path:      pemc3:/dev/rdsk/c8t5006048ACCC81DEFd18
  Full Device Path:      pemc3:/dev/rdsk/c6t5006048ACCC81DD0d18
  Full Device Path:      pemc4:/dev/rdsk/c6t5006048ACCC81DD0d18
  Full Device Path:      pemc4:/dev/rdsk/c8t5006048ACCC81DEFd18
  Replication:           none
  default_fencing:       global
In this example, you see that the ctd label c6t5006048ACCC81DD0d18 maps to /dev/did/rdsk/d5.
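If you are using a raw-disk device group, the following commands are a minimal sketch of creating the rawdg group on the secondary cluster, assuming the replicated LUN also maps to DID device d5 there; any automatically created dsk/d5 device group must be removed before the DID device can be placed in the new group. Adapt the node names and DID devices to your configuration.
phys-newyork-1# cldevicegroup disable dsk/d5
phys-newyork-1# cldevicegroup offline dsk/d5
phys-newyork-1# cldevicegroup delete dsk/d5
phys-newyork-1# cldevicegroup create -n phys-newyork-1,phys-newyork-2 -t rawdisk -d d5 rawdg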
On the secondary cluster, create the Oracle Solaris Cluster device groups, file systems, or ZFS storage pools that you want to use, specifying the LUNs in the SRDF device group.
If you create a ZFS storage pool, observe the following requirements and restrictions:
Mirrored and unmirrored ZFS storage pools are supported.
ZFS storage pool spares are not supported with storage-based replication in a Geographic Edition configuration. The information about the spare that is stored in the storage pool results in the storage pool being incompatible with the remote system after it has been replicated.
ZFS can be used with either Synchronous or Asynchronous mode. If you use Asynchronous mode, ensure that SRDF is configured to preserve write ordering, even after a rolling failure.
For more information about creating device groups, file systems, and ZFS storage pools in a cluster configuration, see Oracle Solaris Cluster System Administration Guide. For information about creating an HAStoragePlus resource, see Oracle Solaris Cluster Data Services Planning and Administration Guide.
Confirm that the application resource group is correctly configured by bringing it online and then taking it offline again.
phys-newyork-1# clresourcegroup online -emM apprg1
phys-newyork-1# clresourcegroup offline apprg1
Unmount the file system.
phys-newyork-1# umount /mounts/sample
Take the Oracle Solaris Cluster device group offline.
phys-newyork-1# cldevicegroup offline rawdg
Reestablish the SRDF pairs.
phys-newyork-1# symrdf -g devgroup1 -noprompt establish
Initial configuration on the secondary cluster is now complete.