Oracle Solaris Cluster Geographic Edition Data Replication Guide for EMC Symmetrix Remote Data Facility
1. Replicating Data With EMC Symmetrix Remote Data Facility Software
Administering Data Replication in an SRDF Protection Group
Initial Configuration of SRDF Software
Setting the Path to the SRDF SYMCLI
How to Set the Path to the SRDF SYMCLI
Configuring Data Replication With SRDF Software on the Primary Cluster
Checking the Configuration of SRDF Devices
How to Set Up Raw-Disk Device Groups for Geographic Edition Systems
How to Configure Veritas Volume Manager Volumes for Use With SRDF Replication
How to Configure the Oracle Solaris Cluster Device Group for a Veritas Volume Manager Disk Group
How to Configure a Highly Available File System for SRDF Replication
Configuring Data Replication With SRDF Software on the Secondary Cluster
How to Create the RDF2 Device Group on the Secondary Cluster
Configuring the Other Entities on the Secondary Cluster
How to Replicate the Veritas Volume Manager Configuration Information From the Primary Cluster
2. Administering SRDF Protection Groups
3. Migrating Services That Use SRDF Data Replication
This section describes the steps you need to perform to configure SRDF software on the primary and secondary clusters. It also includes information about the preconditions for creating SRDF protection groups.
Configuring Data Replication With SRDF Software on the Primary Cluster
Configuring Data Replication With SRDF Software on the Secondary Cluster
Initial configuration of the primary and secondary clusters includes the following:
Configuring an SRDF device group, devgroup1, with the required number of disks
If using a raw-disk device group, configuring a raw-disk group, rawdg
If using Veritas Volume Manager:
Configuring the Veritas Volume Manager disk group, dg1
Configuring the Veritas Volume Manager volume, vol1
Configuring the Oracle Solaris Cluster device group for the Veritas Volume Manager volume
Configuring the file system, which includes creating the file system, creating mount points, and adding entries to the /etc/vfstab file
Creating an application resource group, apprg1, which contains an HAStoragePlus resource
Geographic Edition software supports the hardware configurations that are supported by the Oracle Solaris Cluster software. Contact your Oracle service representative for information about current supported Oracle Solaris Cluster configurations.
The Geographic Edition software installation process on a single-node cluster creates the /var/cluster/rgm/physnode_affinities file. Its existence causes positive and negative resource group affinities to be enforced at the level of the physical node, as they are in all multi-node clusters. Without this file, a single-node cluster uses resource group affinities at the level of the zone-node. The absence of this file can cause the malfunction of clustered applications, so do not remove it unless you clearly understand the potential consequences of its removal.
Table 1-2 Task Map: Steps in Configuring SRDF Data Replication for Geographic Edition Systems
To ensure that the Geographic Edition infrastructure uses a current, supported version of SRDF, you must manually set the location of the correct SYMCLI on all nodes of all clusters in the partnership.
Perform this procedure on each cluster node, in each partner cluster.
# ln -s /opt/emc/SYMCLI/srdfversion /opt/emc/SYMCLI/scgeo_default
If /opt/emc/SYMCLI/scgeo_default is not found, Geographic Edition software uses the SYMCLI of the latest version of SRDF software that is currently installed on the node and that is supported by Geographic Edition software.
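The symlink step can be sketched as a small script that refuses to link a SYMCLI version that is not actually installed on the node. This is an illustrative sketch only: SYMCLI_BASE and SRDF_VERSION are stand-in values, and the demo writes into a scratch directory rather than the real /opt/emc/SYMCLI.

```shell
# Sketch of the scgeo_default symlink step. SYMCLI_BASE and SRDF_VERSION
# are assumed values; this demo uses a scratch directory so it can run
# anywhere. On a cluster node, set SYMCLI_BASE=/opt/emc/SYMCLI instead.
SYMCLI_BASE="$(mktemp -d)/SYMCLI"
SRDF_VERSION="7.3.1"   # hypothetical installed SRDF version
mkdir -p "$SYMCLI_BASE/$SRDF_VERSION"

# Only create the link if that SYMCLI version is actually present.
if [ -d "$SYMCLI_BASE/$SRDF_VERSION" ]; then
    # -sfn replaces any existing scgeo_default link in place
    ln -sfn "$SYMCLI_BASE/$SRDF_VERSION" "$SYMCLI_BASE/scgeo_default"
    echo "scgeo_default -> $(readlink "$SYMCLI_BASE/scgeo_default")"
else
    echo "SRDF version $SRDF_VERSION is not installed under $SYMCLI_BASE" >&2
fi
```

Remember that the link must exist on every node of every partner cluster, so a script like this would be run on each node in turn.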
This section describes the steps you must perform on the primary cluster before you can configure SRDF data replication with Geographic Edition software.
SRDF devices are configured in pairs. The mirroring relationship between the pairs becomes operational as soon as the SRDF links are online. If dynamic SRDF is available, you can change the R1 and R2 relationships of your device pairings on the fly, without requiring a BIN file configuration change.
Note - Do not configure a replicated volume as a quorum device. Locate any quorum devices on a shared, unreplicated volume or use a quorum server.
The EMC Symmetrix database file on each host stores configuration information about the EMC Symmetrix units attached to the host. The EMC Symmetrix global memory stores information about the pair state of operating EMC SRDF devices.
EMC SRDF device groups are the entities that you add to Geographic Edition protection groups to enable the Geographic Edition software to manage EMC Symmetrix pairs.
The SRDF device group can hold one of two types of devices:
RDF1 source device, which acts as the primary
RDF2 target device, which acts as the secondary
As a result, you can create two types of SRDF device group, RDF1 and RDF2. An SRDF device can be moved to another device group only if the source and destination groups are of the same group type.
You can create RDF1 device groups on a host attached to the EMC Symmetrix software that contains the RDF1 devices. You can create RDF2 device groups on a host attached to the EMC Symmetrix software that contains the RDF2 devices. You can perform the same SRDF operations from the primary or secondary cluster, using the device group that was built on that side.
When you add remote data facility devices to a device group, all of the devices must adhere to the following restrictions:
The device must be an SRDF device.
The device must be either an RDF1 or RDF2 type device, as specified by the device group type.
The device must belong to the same SRDF group number.
The SRDF device group configuration must be the same on all nodes of both the primary and secondary clusters. For example, if you have a device group DG1, which is configured as RDF1, on node1 of clusterA, then node2 of clusterA should also have a device group called DG1 with the same disk set. Also, clusterB should have an SRDF device group called DG1, which is configured as RDF2, defined on all nodes.
Before adding SRDF devices to a device group, use the symrdf list command to list the EMC Symmetrix devices configured on the EMC Symmetrix units attached to your host.
# symrdf list
By default, the command displays devices by their EMC Symmetrix device name, a hexadecimal number that the EMC Symmetrix software assigns to each physical device. To display devices by their physical host name, use the pd argument with the symrdf command.
# symrdf list pd
The following steps create a device group of type RDF1 and add an RDF1 EMC Symmetrix device to the group.
Create a device group named devgroup1.
phys-paris-1# symdg create devgroup1 -type rdf1
Add an RDF1 device, with the EMC Symmetrix device name of 085, to the device group on the EMC Symmetrix storage unit identified by the number 000000003264.
A default logical name of the form DEV001 is assigned to the RDF1 device.
phys-paris-1# symld -g devgroup1 -sid 3264 add dev 085
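The two steps above can be combined into a short script with basic error checking. In this sketch, symdg and symld are stubs (they only echo and succeed) standing in for the real SYMCLI binaries so the script is runnable outside a SYMCLI host; the device group name, storage unit ID, and device number come from the example above.

```shell
# Sketch: create an RDF1 device group and add a device to it. symdg and
# symld are stubs standing in for the real SYMCLI binaries; remove them
# on a host with SYMCLI installed.
symdg() { echo "symdg $*"; }
symld() { echo "symld $*"; }

DG=devgroup1
SID=3264
DEV=085

symdg create "$DG" -type rdf1 || { echo "symdg create failed" >&2; exit 1; }
# A DEV001-style logical name is assigned automatically as the device is added.
symld -g "$DG" -sid "$SID" add dev "$DEV" || { echo "symld add failed" >&2; exit 1; }
echo "added device $DEV to $DG"
```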
Geographic Edition supports the use of raw-disk device groups in addition to various volume managers. When you initially configure Oracle Solaris Cluster, device groups are automatically configured for each raw device in the cluster. Use this procedure to reconfigure these automatically created device groups for use with Geographic Edition.
The following commands remove the predefined device groups for d7 and d8.
phys-paris-1# cldevicegroup disable dsk/d7 dsk/d8
phys-paris-1# cldevicegroup offline dsk/d7 dsk/d8
phys-paris-1# cldevicegroup delete dsk/d7 dsk/d8
Ensure that the DIDs you specify do not contain any slashes (for example, use d7, not dsk/d7). The following command creates a global device group, rawdg, which contains d7 and d8.
phys-paris-1# cldevicegroup create -n phys-paris-1,phys-paris-2 \
-t rawdisk -d d7,d8 rawdg
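The disable, offline, delete, and create sequence can be wrapped in a short script. In this sketch, cldevicegroup is stubbed (it only echoes) so the sequence is runnable outside a cluster node; the device names and node list are taken from the example above.

```shell
# Sketch: reclaim automatically created DID device groups into one
# raw-disk device group. cldevicegroup is a stub standing in for the
# real cluster command; remove it on an actual cluster node.
cldevicegroup() { echo "cldevicegroup $*"; }

# Release each automatically created per-disk device group first.
for action in disable offline delete; do
    cldevicegroup "$action" dsk/d7 dsk/d8
done

# Create the shared raw-disk group. The DIDs are given without the
# dsk/ prefix, and the group name must not contain slashes.
cldevicegroup create -n phys-paris-1,phys-paris-2 -t rawdisk -d d7,d8 rawdg
```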
Example 1-1 Configuring a Raw-Disk Device Group
This example illustrates configuring the device group on the primary cluster, configuring the same device group on the partner cluster, and adding the group to an EMC Symmetrix protection group. Geographic Edition requires that the same Oracle Solaris Cluster device group, in this example rawdg, exists on both clusters.
Remove the automatically created device groups from the primary cluster.
phys-paris-1# cldevicegroup disable dsk/d7 dsk/d8
phys-paris-1# cldevicegroup offline dsk/d7 dsk/d8
phys-paris-1# cldevicegroup delete dsk/d7 dsk/d8
Create the raw-disk device group on the primary cluster.
phys-paris-1# cldevicegroup create -n phys-paris-1,phys-paris-2 \
-t rawdisk -d d7,d8 rawdg
Remove the automatically created device groups from the partner cluster.
phys-newyork-1# cldevicegroup disable dsk/d5 dsk/d6
phys-newyork-1# cldevicegroup offline dsk/d5 dsk/d6
phys-newyork-1# cldevicegroup delete dsk/d5 dsk/d6
Create the raw-disk device group on the partner cluster.
phys-newyork-1# cldevicegroup create -n phys-newyork-1,phys-newyork-2 \
-t rawdisk -d d5,d6 rawdg
Add the raw-disk device group to the protection group rawpg.
phys-paris-1# geopg create -d srdf -p Nodelist=phys-paris-1,phys-paris-2 \
-o Primary -p cluster_dgs=rawdg -s paris-newyork-ps rawpg
Next Steps
Create a raw-disk device group on the partner cluster with the same name as the one you created here. See How to Replicate the Configuration Information From the Primary Cluster When Using Raw-Disk Device Groups for instructions about this task.
After you have configured the device group on both clusters, you can use the device group name wherever one is required in Geographic Edition commands such as geopg.
SRDF data replication is supported with Veritas Volume Manager volumes and raw-disk device groups. If you are using Veritas Volume Manager, you must configure Veritas Volume Manager volumes on the disks you selected for your SRDF device group.
For example, the d1 and d2 disks are configured as part of a Veritas Volume Manager disk group called dg1 by using commands such as vxdiskadm and vxdg. These disks are the ones that will be replicated to the partner cluster.
This command should list dg1 as a disk group.
For example, a volume that is called vol1 is created in the dg1 disk group. The appropriate Veritas Volume Manager commands, such as vxassist, are used to configure the volume.
Next Steps
Perform the steps in How to Configure the Oracle Solaris Cluster Device Group for a Veritas Volume Manager Disk Group to configure the Veritas Volume Manager volume as an Oracle Solaris Cluster device group.
Use the Oracle Solaris Cluster commands clsetup or cldevice and cldevicegroup.
For more information about these commands, refer to the clsetup(1CL) man page or the cldevice(1CL) and cldevicegroup(1CL) man pages.
phys-paris-1# cldevicegroup show devicegroupname
The Veritas Volume Manager disk group, dg1, should be displayed in the output.
For more information about the cldevicegroup command, see the cldevicegroup(1CL) man page.
Before You Begin
Before you configure the file system on cluster-paris, ensure that the Oracle Solaris Cluster entities you require, such as application resource groups, device groups, and volumes, have already been configured.
# mkdir -p /mounts/sample
Your mount point.
Whether the file system is to be mounted locally or globally depends on various factors, such as your performance requirements, or the type of application resource group you are using.
Note - You must set the mount at boot field in this file to no. This value prevents the file system from mounting on the secondary cluster at cluster startup. Instead, the Oracle Solaris Cluster software and the Geographic Edition framework handle mounting the file system by using the HAStoragePlus resource when the application is brought online on the primary cluster.
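One quick way to confirm the note above is to scan the vfstab file for entries whose mount at boot field (the sixth field) is not set to no. The sketch below runs against a sample entry written to a temporary file, so the path and entry are illustrative; on a cluster node you would point VFSTAB at /etc/vfstab.

```shell
# Sketch: flag vfstab entries whose mount-at-boot field (field 6) is
# not "no". VFSTAB points at a temporary sample file here; on a cluster
# node, set VFSTAB=/etc/vfstab instead.
VFSTAB="$(mktemp)"
cat > "$VFSTAB" <<'EOF'
/dev/vx/dsk/dg1/vol1 /dev/vx/rdsk/dg1/vol1 /mounts/sample ufs 2 no logging
EOF

# Skip comment lines and short lines; print the mount point of any
# entry that would mount at boot.
bad=$(awk '$1 !~ /^#/ && NF >= 6 && $6 != "no" {print $3}' "$VFSTAB")
if [ -z "$bad" ]; then
    echo "vfstab check: all entries have mount at boot = no"
else
    echo "set mount at boot to no for: $bad" >&2
fi
```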
Adding the resource to the application resource group ensures that the necessary file systems are mounted before the application is brought online.
For more information about the HAStoragePlus resource type, refer to the Oracle Solaris Cluster Data Services Planning and Administration Guide.
The following command should display the device group dg1.
phys-paris-1# cldevicegroup show dg1
Example 1-2 Configuring a Highly Available Cluster File System
This example creates a locally mounted file system with HAStoragePlus. The file system created in this example is mounted locally each time the resource is brought online.
This example assumes that the following already exist:
The apprg1 resource group
The dg1 VxVM device group
The vol1 VxVM volume
Create a UNIX file system (UFS).
phys-paris-1# newfs /dev/vx/rdsk/dg1/vol1
On each node in the cluster, create mount points for the file system.
phys-paris-1# mkdir -p /mounts/sample
phys-paris-2# mkdir -p /mounts/sample
Add the following entry to the /etc/vfstab file:
/dev/vx/dsk/dg1/vol1 /dev/vx/rdsk/dg1/vol1 /mounts/sample \
ufs 2 no logging
Add the HAStoragePlus resource type.
phys-paris-1# clresource create -g apprg1 -t SUNW.HAStoragePlus \
-p FilesystemMountPoints=/mounts/sample -p AffinityOn=TRUE \
-p GlobalDevicePaths=dg1 rs-hasp
This section describes the steps you must complete on the secondary cluster before you can configure SRDF data replication in Geographic Edition software.
Before You Begin
Before you can issue the SRDF commands on the secondary cluster, you need to create an RDF2 type device group on the secondary cluster that contains the same definitions as the RDF1 device group.
Note - Do not configure a replicated volume as a quorum device. Locate any quorum devices on a shared, unreplicated volume or use a quorum server.
phys-paris-1# symdg export devgroup1 -f devgroup1.txt -rdf
phys-paris-1# rcp devgroup1.txt phys-newyork-1:/.
phys-paris-1# rcp devgroup1.txt phys-newyork-2:/.
Run the following command on each node in the newyork cluster.
# symdg import devgroup1 -f devgroup1.txt

Adding standard device 054 as DEV001...
Adding standard device 055 as DEV002...
Next, you need to configure any volume manager, the Oracle Solaris Cluster device groups, and the highly available cluster file system. This process is slightly different depending on whether you are using Veritas Volume Manager or raw-disk device groups. The following procedures provide instructions:
phys-paris-1# symrdf -g devgroup1 -noprompt establish

An RDF 'Incremental Establish' operation execution is
in progress for device group 'devgroup1'. Please wait...

Write Disable device(s) on RA at target (R2)..............Done.
Suspend RDF link(s).......................................Done.
Mark target (R2) devices to refresh from source (R1)......Started.
Device: 054 ............................................. Marked.
Mark target (R2) devices to refresh from source (R1)......Done.
Suspend RDF link(s).......................................Done.
Merge device track tables between source and target.......Started.
Device: 09C ............................................. Merged.
Merge device track tables between source and target.......Done.
Resume RDF link(s)........................................Done.

The RDF 'Incremental Establish' operation successfully initiated
for device group 'devgroup1'.
phys-newyork-1# symrdf -g devgroup1 verify

All devices in the RDF group 'devgroup1' are in the 'Synchronized' state.
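Rather than rerunning verify by hand until the pairs synchronize, you can poll in a loop. In this sketch, symrdf is stubbed with a function that prints the synchronized message so the loop is runnable anywhere; on a cluster node, delete the stub so the real SYMCLI command runs, and tune the retry count and sleep interval to your environment.

```shell
# Sketch: wait for all devices in an SRDF device group to reach the
# Synchronized state. symrdf is a stub standing in for the real SYMCLI
# binary so the loop can run anywhere; remove it on a real node.
symrdf() {
    echo "All devices in the RDF group 'devgroup1' are in the 'Synchronized' state."
}

DG=devgroup1
tries=0
until symrdf -g "$DG" verify | grep -q "'Synchronized' state"; do
    tries=$((tries + 1))
    if [ "$tries" -ge 30 ]; then
        echo "timed out waiting for $DG to synchronize" >&2
        break
    fi
    sleep 10
done
echo "device group $DG reported Synchronized"
```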
phys-paris-1# symrdf -g devgroup1 -noprompt split

An RDF 'Split' operation execution is
in progress for device group 'devgroup1'. Please wait...

Suspend RDF link(s).......................................Done.
Read/Write Enable device(s) on RA at target (R2)..........Done.

The RDF 'Split' operation successfully executed for
device group 'devgroup1'.
phys-newyork-1# vxdctl enable
phys-newyork-1# vxdg -C import dg1
phys-newyork-1# vxdg list
phys-newyork-1# /usr/sbin/vxrecover -g dg1 -s -b
phys-newyork-1# vxprint
phys-newyork-1# cldevicegroup create -n phys-newyork-1,phys-newyork-2 \
-t vxvm dg1
/dev/vx/dsk/dg1/vol1 /dev/vx/rdsk/dg1/vol1 /mounts/sample ufs 2 no logging
phys-newyork-1# mkdir -p /mounts/sample
phys-newyork-2# mkdir -p /mounts/sample
phys-newyork-1# clresourcegroup create apprg1
phys-newyork-1# clresource create -g apprg1 -t SUNW.HAStoragePlus \
-p FilesystemMountPoints=/mounts/sample -p AffinityOn=TRUE \
-p GlobalDevicePaths=dg1 rs-hasp
This HAStoragePlus resource is required for Geographic Edition systems because the software relies on the resource to bring the device groups and file systems online when the protection group starts on the primary cluster.
phys-newyork-1# clresourcegroup online -emM apprg1
phys-newyork-1# clresourcegroup offline apprg1
phys-newyork-1# umount /mounts/sample
phys-newyork-1# cldevicegroup offline dg1
phys-newyork-1# vxdg list
phys-newyork-1# symrdf -g devgroup1 -noprompt establish
Initial configuration on the secondary cluster is now complete.
phys-paris-1# symrdf -g devgroup1 -noprompt establish

An RDF 'Incremental Establish' operation execution is
in progress for device group 'devgroup1'. Please wait...

Write Disable device(s) on RA at target (R2)..............Done.
Suspend RDF link(s).......................................Done.
Mark target (R2) devices to refresh from source (R1)......Started.
Device: 054 ............................................. Marked.
Mark target (R2) devices to refresh from source (R1)......Done.
Suspend RDF link(s).......................................Done.
Merge device track tables between source and target.......Started.
Device: 09C ............................................. Merged.
Merge device track tables between source and target.......Done.
Resume RDF link(s)........................................Done.

The RDF 'Incremental Establish' operation successfully initiated
for device group 'devgroup1'.
phys-newyork-1# symrdf -g devgroup1 verify

All devices in the RDF group 'devgroup1' are in the 'Synchronized' state.
phys-paris-1# symrdf -g devgroup1 -noprompt split

An RDF 'Split' operation execution is
in progress for device group 'devgroup1'. Please wait...

Suspend RDF link(s).......................................Done.
Read/Write Enable device(s) on RA at target (R2)..........Done.

The RDF 'Split' operation successfully executed for
device group 'devgroup1'.
You use these mappings when you create the raw-disk device group.
phys-paris-1# symrdf -g devgroup1 query
…
DEV001  00DD RW      0     3 NR 00DD RW      0     0 S..   Split
DEV002  00DE RW      0     3 NR 00DE RW      0     0 S..   Split
…
phys-paris-1# /etc/powermt display dev=all > /tmp/file
Logical device ID=00DD
state=alive; policy=BasicFailover; priority=0; queued-IOs=0
==============================================================================
---------------- Host ---------------   - Stor -   -- I/O Path -  -- Stats ---
###  HW Path              I/O Paths                Interf.  Mode   State  Q-IOs Errors
==============================================================================
3073 pci@1d/SUNW,qlc@1    c6t5006048ACCC81DD0d18s0 FA  1dA  active alive      0      0
3075 pci@1d/SUNW,qlc@2    c8t5006048ACCC81DEFd18s0 FA 16cB  unlic  alive      0      0
In this example, you see that the logical device ID 00DD maps to the ctd label c6t5006048ACCC81DD0d18.
phys-paris-1# cldevice show c6t5006048ACCC81DD0d18

=== DID Device Instances ===
DID Device Name:        /dev/did/rdsk/d5
  Full Device Path:       pemc3:/dev/rdsk/c8t5006048ACCC81DEFd18
  Full Device Path:       pemc3:/dev/rdsk/c6t5006048ACCC81DD0d18
  Full Device Path:       pemc4:/dev/rdsk/c6t5006048ACCC81DD0d18
  Full Device Path:       pemc4:/dev/rdsk/c8t5006048ACCC81DEFd18
Replication:            none
default_fencing:        global
In this example, you see that the ctd label c6t5006048ACCC81DD0d18 maps to /dev/did/rdsk/d5.
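The three-step mapping (SRDF logical device to ctd label to DID device) can be scripted by parsing the command output. The sketch below parses sample output captured in here-documents, so the device names come from the example above; on a cluster node you would feed it live powermt and cldevice output instead.

```shell
# Sketch: map an SRDF logical device ID to its DID device by parsing
# sample powermt and cldevice output. The here-documents stand in for
# live command output on a cluster node.
LDEV=00DD

# Step 1: logical device ID -> ctd label, from powermt display output.
CTD=$(awk -v id="$LDEV" '
    $0 ~ "Logical device ID=" id {found=1}
    found && match($0, /c[0-9]+t[0-9A-F]+d[0-9]+/) {
        print substr($0, RSTART, RLENGTH); exit
    }' <<'EOF'
Logical device ID=00DD
3073 pci@1d/SUNW,qlc@1 c6t5006048ACCC81DD0d18s0 FA 1dA active alive 0 0
EOF
)

# Step 2: ctd label -> DID device, from cldevice show output.
DID=$(awk '/DID Device Name:/ {print $NF; exit}' <<'EOF'
DID Device Name:    /dev/did/rdsk/d5
Full Device Path:   pemc3:/dev/rdsk/c6t5006048ACCC81DD0d18
EOF
)

echo "$LDEV -> $CTD -> $DID"
```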
Use the same device group name as you used for the one on the primary cluster.
In the following command, the newyork cluster is the partner of the paris cluster.
phys-newyork-1# cldevicegroup disable dsk/d5 dsk/d6
phys-newyork-1# cldevicegroup offline dsk/d5 dsk/d6
phys-newyork-1# cldevicegroup delete dsk/d5 dsk/d6
phys-newyork-1# cldevicegroup create -n phys-newyork-1,phys-newyork-2 \
-t rawdisk -d d5,d6 rawdg
phys-newyork-1# cldevicegroup show rawdg
/dev/global/dsk/d5s2 /dev/global/rdsk/d5s2 /mounts/sample ufs 2 no logging
phys-newyork-1# mkdir -p /mounts/sample
phys-newyork-2# mkdir -p /mounts/sample
phys-newyork-1# newfs /dev/global/rdsk/d5s2
phys-newyork-1# mount /mounts/sample
phys-newyork-1# clresourcegroup create apprg1
phys-newyork-1# clresource create -g apprg1 -t SUNW.HAStoragePlus \
-p FilesystemMountPoints=/mounts/sample -p AffinityOn=TRUE \
-p GlobalDevicePaths=rawdg rs-hasp
This HAStoragePlus resource is required for Geographic Edition systems, because the software relies on the resource to bring the device groups and file systems online when the protection group starts on the primary cluster.
phys-newyork-1# clresourcegroup online -emM apprg1
phys-newyork-1# clresourcegroup offline apprg1
phys-newyork-1# umount /mounts/sample
phys-newyork-1# cldevicegroup offline rawdg
phys-newyork-1# symrdf -g devgroup1 -noprompt establish
Initial configuration on the secondary cluster is now complete.