Next, you need to configure any volume manager, the Sun Cluster device groups, and the highly available cluster file system. This process is slightly different depending on whether you are using VERITAS Volume Manager or raw-disk device groups. The following procedures provide instructions:
Start replication for the devgroup1 device group.
phys-paris-1# symrdf -g devgroup1 -noprompt establish

An RDF 'Incremental Establish' operation execution is
in progress for device group 'devgroup1'. Please wait...

    Write Disable device(s) on RA at target (R2)..............Done.
    Suspend RDF link(s).......................................Done.
    Mark target (R2) devices to refresh from source (R1)......Started.
    Device: 054 ............................................. Marked.
    Mark target (R2) devices to refresh from source (R1)......Done.
    Suspend RDF link(s).......................................Done.
    Merge device track tables between source and target.......Started.
    Device: 09C ............................................. Merged.
    Merge device track tables between source and target.......Done.
    Resume RDF link(s)........................................Done.

The RDF 'Incremental Establish' operation successfully initiated for
device group 'devgroup1'.
Confirm that the state of the EMC Symmetrix Remote Data Facility pair is synchronized.
phys-newyork-1# symrdf -g devgroup1 verify
All devices in the RDF group 'devgroup1' are in the 'Synchronized' state.
Split the pair by using the symrdf split command.
phys-paris-1# symrdf -g devgroup1 -noprompt split

An RDF 'Split' operation execution is
in progress for device group 'devgroup1'. Please wait...

    Suspend RDF link(s).......................................Done.
    Read/Write Enable device(s) on RA at target (R2)..........Done.

The RDF 'Split' operation successfully executed for
device group 'devgroup1'.
Enable all the volumes to be scanned.
phys-newyork-1# vxdctl enable
Import the VERITAS Volume Manager disk group, dg1.
phys-newyork-1# vxdg -C import dg1
Verify that the VERITAS Volume Manager disk group was successfully imported.
phys-newyork-1# vxdg list
Enable the VERITAS Volume Manager volume.
phys-newyork-1# /usr/sbin/vxrecover -g dg1 -s -b
Verify that the VERITAS Volume Manager volumes are recognized and enabled.
phys-newyork-1# vxprint
Create the VERITAS Volume Manager disk group, dg1, in Sun Cluster software.
phys-newyork-1# cldevicegroup create -n phys-newyork-1,phys-newyork-2 \
-t vxvm dg1
Add an entry to the /etc/vfstab file on phys-newyork-1.
/dev/vx/dsk/dg1/vol1 /dev/vx/rdsk/dg1/vol1 /mounts/sample ufs 2 no logging
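A vfstab line contains exactly seven whitespace-separated fields: device to mount, device to fsck, mount point, file system type, fsck pass, mount-at-boot flag, and mount options. The following sketch, which is not part of the documented procedure, shows one way to sanity-check an entry before adding it:

```shell
# Illustrative helper only: verify that a candidate vfstab entry has the
# expected seven fields before appending it to /etc/vfstab.
entry='/dev/vx/dsk/dg1/vol1 /dev/vx/rdsk/dg1/vol1 /mounts/sample ufs 2 no logging'

nfields=$(printf '%s\n' "$entry" | awk '{print NF}')
if [ "$nfields" -eq 7 ]; then
    echo "vfstab entry OK"
else
    echo "vfstab entry malformed: $nfields fields" >&2
fi
```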
Create a mount directory on newyork.
phys-newyork-1# mkdir -p /mounts/sample
phys-newyork-2# mkdir -p /mounts/sample
Create an application resource group, apprg1, by using the clresourcegroup command.
phys-newyork-1# clresourcegroup create apprg1
Create the HAStoragePlus resource in apprg1.
phys-newyork-1# clresource create -g apprg1 -t SUNW.HAStoragePlus \
-p FilesystemMountPoints=/mounts/sample -p AffinityOn=TRUE \
-p GlobalDevicePaths=dg1 rs-hasp
This HAStoragePlus resource is required for Sun Cluster Geographic Edition systems because the software relies on the resource to bring the device groups and file systems online when the protection group starts on the primary cluster.
Confirm that the application resource group is correctly configured by bringing it online and taking it offline again.
phys-newyork-1# clresourcegroup online -emM apprg1
phys-newyork-1# clresourcegroup offline apprg1
Unmount the file system.
phys-newyork-1# umount /mounts/sample
Take the Sun Cluster device group offline.
phys-newyork-1# cldevicegroup offline dg1
Verify that the VERITAS Volume Manager disk group was deported.
phys-newyork-1# vxdg list
Reestablish the EMC Symmetrix Remote Data Facility pair.
phys-newyork-1# symrdf -g devgroup1 -noprompt establish
Initial configuration on the secondary cluster is now complete.
On the primary cluster, start replication for the devgroup1 device group.
phys-paris-1# symrdf -g devgroup1 -noprompt establish

An RDF 'Incremental Establish' operation execution is
in progress for device group 'devgroup1'. Please wait...

    Write Disable device(s) on RA at target (R2)..............Done.
    Suspend RDF link(s).......................................Done.
    Mark target (R2) devices to refresh from source (R1)......Started.
    Device: 054 ............................................. Marked.
    Mark target (R2) devices to refresh from source (R1)......Done.
    Suspend RDF link(s).......................................Done.
    Merge device track tables between source and target.......Started.
    Device: 09C ............................................. Merged.
    Merge device track tables between source and target.......Done.
    Resume RDF link(s)........................................Done.

The RDF 'Incremental Establish' operation successfully initiated for
device group 'devgroup1'.
On the primary cluster, confirm that the state of the EMC Symmetrix Remote Data Facility pair is synchronized.
phys-newyork-1# symrdf -g devgroup1 verify
All devices in the RDF group 'devgroup1' are in the 'Synchronized' state.
On the primary cluster, split the pair by using the symrdf split command.
phys-paris-1# symrdf -g devgroup1 -noprompt split

An RDF 'Split' operation execution is
in progress for device group 'devgroup1'. Please wait...

    Suspend RDF link(s).......................................Done.
    Read/Write Enable device(s) on RA at target (R2)..........Done.

The RDF 'Split' operation successfully executed for
device group 'devgroup1'.
Map the EMC disk drive to the corresponding DID numbers.
You use these mappings when you create the raw-disk device group.
Use the symrdf command to find devices in the SRDF device group.
phys-paris-1# symrdf -g devgroup1 query
…
DEV001  00DD RW       0        3 NR 00DD RW       0        0 S..   Split
DEV002  00DE RW       0        3 NR 00DE RW       0        0 S..   Split
…
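If you need to process the query output in a script, the Symmetrix device IDs and pair states can be extracted with standard text tools. The following is an illustrative sketch, not part of the documented procedure; the sample lines mirror the output above, and the field positions can differ between SYMCLI versions, so verify them against your own output:

```shell
# Sample data mirroring the 'symrdf query' output shown above.
cat > /tmp/query.out <<'EOF'
DEV001  00DD RW       0        3 NR 00DD RW       0        0 S..   Split
DEV002  00DE RW       0        3 NR 00DE RW       0        0 S..   Split
EOF

# Print "logical-device-name symm-device-ID pair-state" for each DEVnnn line.
awk '/^DEV[0-9]+/ {print $1, $2, $NF}' /tmp/query.out
```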
Use the powermt command to write detailed information about all devices into a temporary file.
phys-paris-1# /etc/powermt display dev=all > /tmp/file
Open the temporary file and look for the ctd label that applies to the appropriate device.
Logical device ID=00DD
state=alive; policy=BasicFailover; priority=0; queued-IOs=0
==============================================================================
---------------- Host ---------------   - Stor -   -- I/O Path -  -- Stats ---
###  HW Path               I/O Paths    Interf.   Mode    State   Q-IOs Errors
==============================================================================
3073 pci@1d/SUNW,qlc@1     c6t5006048ACCC81DD0d18s0 FA  1dA active  alive 0 0
3075 pci@1d/SUNW,qlc@2     c8t5006048ACCC81DEFd18s0 FA 16cB unlic   alive 0 0
In this example, you see that the logical device ID 00DD maps to the ctd label c6t5006048ACCC81DD0d18.
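Scanning the temporary file by hand becomes tedious when the device group contains many disks. The following hedged sketch, not part of the documented procedure, shows one way to extract the ctd label for a given logical device ID from saved powermt output; the sample data mirrors the output above, and the patterns may need adjustment for your PowerPath version:

```shell
# Sample data mirroring the 'powermt display' output shown above
# (header rows omitted for brevity).
cat > /tmp/file <<'EOF'
Logical device ID=00DD
state=alive; policy=BasicFailover; priority=0; queued-IOs=0
3073 pci@1d/SUNW,qlc@1 c6t5006048ACCC81DD0d18s0 FA 1dA active alive 0 0
3075 pci@1d/SUNW,qlc@2 c8t5006048ACCC81DEFd18s0 FA 16cB unlic alive 0 0
EOF

dev_id=00DD

# Find the "Logical device ID=" header, then print the first ctd label
# (cNtWWNdN, without the slice suffix) that follows it.
ctd=$(awk -v id="$dev_id" '
    $0 ~ "Logical device ID=" id { found = 1 }
    found && match($0, /c[0-9]+t[0-9A-F]+d[0-9]+/) {
        print substr($0, RSTART, RLENGTH); exit
    }' /tmp/file)
echo "$ctd"
```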
Once you know the ctd label, use the cldevice command to see more information about that device.
phys-paris-1# cldevice show c6t5006048ACCC81DD0d18

=== DID Device Instances ===

DID Device Name:                         /dev/did/rdsk/d5
  Full Device Path:                      pemc3:/dev/rdsk/c8t5006048ACCC81DEFd18
  Full Device Path:                      pemc3:/dev/rdsk/c6t5006048ACCC81DD0d18
  Full Device Path:                      pemc4:/dev/rdsk/c6t5006048ACCC81DD0d18
  Full Device Path:                      pemc4:/dev/rdsk/c8t5006048ACCC81DEFd18
Replication:                             none
default_fencing:                         global
In this example, you see that the ctd label c6t5006048ACCC81DD0d18 maps to /dev/did/rdsk/d5.
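The DID device name can likewise be pulled from saved cldevice output with a short script. This is an illustrative sketch only, not part of the documented procedure; the sample data mirrors the output shown above:

```shell
# Sample data mirroring the 'cldevice show' output shown above.
cat > /tmp/cldevice.out <<'EOF'
=== DID Device Instances ===
DID Device Name:     /dev/did/rdsk/d5
  Full Device Path:  pemc3:/dev/rdsk/c6t5006048ACCC81DD0d18
EOF

# Split on "colon plus spaces" and print the value of the
# "DID Device Name" field.
did=$(awk -F': *' '/DID Device Name/ {print $2; exit}' /tmp/cldevice.out)
echo "$did"
```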
Repeat the preceding mapping steps for each disk in the device group, on each cluster.
Create a raw-disk device group on the partner cluster.
Use the same device group name that you used on the primary cluster.
In the following command, the newyork cluster is the partner of the paris cluster.
phys-newyork-1# cldevicegroup disable dsk/d5 dsk/d6
phys-newyork-1# cldevicegroup offline dsk/d5 dsk/d6
phys-newyork-1# cldevicegroup delete dsk/d5 dsk/d6
phys-newyork-1# cldevicegroup create -n phys-newyork-1,phys-newyork-2 \
-t rawdisk -d d5,d6 rawdg
phys-newyork-1# /usr/cluster/lib/dcs/dgconv -d d5 rawdg
phys-newyork-1# /usr/cluster/lib/dcs/dgconv -d d6 rawdg
Add an entry to the /etc/vfstab file on phys-newyork-1.
/dev/global/dsk/d5s2 /dev/global/rdsk/d5s2 /mounts/sample ufs 2 no logging
Create a mount directory on newyork.
phys-newyork-1# mkdir -p /mounts/sample
phys-newyork-2# mkdir -p /mounts/sample
Make a file system for the new device.
phys-newyork-1# newfs /dev/global/rdsk/d5s2
phys-newyork-1# mount /mounts/sample
Create an application resource group, apprg1, by using the clresourcegroup command.
phys-newyork-1# clresourcegroup create apprg1
Create the HAStoragePlus resource in apprg1.
phys-newyork-1# clresource create -g apprg1 -t SUNW.HAStoragePlus \
-p FilesystemMountPoints=/mounts/sample -p AffinityOn=TRUE \
-p GlobalDevicePaths=rawdg rs-hasp
This HAStoragePlus resource is required for Sun Cluster Geographic Edition systems, because the software relies on the resource to bring the device groups and file systems online when the protection group starts on the primary cluster.
Confirm that the application resource group is correctly configured by bringing it online and taking it offline again.
phys-newyork-1# clresourcegroup online -emM apprg1
phys-newyork-1# clresourcegroup offline apprg1
Unmount the file system.
phys-newyork-1# umount /mounts/sample
Take the Sun Cluster device group offline.
phys-newyork-1# cldevicegroup offline rawdg
Reestablish the EMC Symmetrix Remote Data Facility pair.
phys-newyork-1# symrdf -g devgroup1 -noprompt establish
Initial configuration on the secondary cluster is now complete.