This chapter provides guidelines for configuring data replication between clusters by using Sun StorEdge Availability Suite 3.1 software.
This chapter also contains an example of how data replication was configured for an NFS application by using Sun StorEdge Availability Suite 3.1 software. This example uses a specific cluster configuration and provides detailed information about how individual tasks can be performed. It does not include all of the steps that are required by other applications or other cluster configurations.
The following procedures are in this chapter:
How to Configure a Disk Device Group on the Secondary Cluster
How to Configure the File System on the Primary Cluster for the NFS Application
How to Configure the File System on the Secondary Cluster for the NFS Application
How to Create a Replication Resource Group on the Primary Cluster
How to Create a Replication Resource Group on the Secondary Cluster
How to Create an Application Resource Group on the Primary Cluster
How to Create an Application Resource Group on the Secondary Cluster
How to Configure the Application to Read and Write to the Secondary Volume
This section introduces disaster tolerance and describes the data replication methods used by Sun StorEdge Availability Suite 3.1 software.
Disaster tolerance is the ability of a system to restore an application on an alternate cluster when the primary cluster fails. Disaster tolerance is based on data replication and failover.
Data replication is the copying of data from a primary cluster to a backup or secondary cluster. Through data replication, the secondary cluster has an up-to-date copy of the data on the primary cluster. The secondary cluster can be located far away from the primary cluster.
Failover is the automatic relocation of a resource group or device group from a primary cluster to a secondary cluster. If the primary cluster fails, the application and the data are immediately available on the secondary cluster.
This section describes the remote mirror replication method and the point-in-time snapshot method used by Sun StorEdge Availability Suite 3.1 software. This software uses the sndradm(1RPC) and iiadm(1II) commands to replicate data. For more information about these commands, see the Sun Cluster 3.0 and Sun StorEdge Software Integration Guide.
Remote mirror replication is illustrated in Figure 6–1. Data from the master volume of the primary disk is replicated to the master volume of the secondary disk through a TCP/IP connection. A remote mirror bitmap tracks differences between the master volume on the primary disk and the master volume on the secondary disk.
Remote mirror replication can be performed synchronously in real time or asynchronously. Each volume set can be configured individually for synchronous or asynchronous replication.
In synchronous data replication, a write operation is not confirmed as complete until the remote volume has been updated.
In asynchronous data replication, a write operation is confirmed as complete before the remote volume is updated. Asynchronous data replication provides greater flexibility over long distances and with low bandwidth.
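The replication mode of a volume set is selected when the set is enabled: the final argument to the sndradm command is sync or async. As a sketch that reuses the volume set from the example configuration later in this chapter, asynchronous replication would be enabled as follows:

nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -e lhost-reprg-prim \
/dev/vx/rdsk/devicegroup/vol01 \
/dev/vx/rdsk/devicegroup/vol04 lhost-reprg-sec \
/dev/vx/rdsk/devicegroup/vol01 \
/dev/vx/rdsk/devicegroup/vol04 ip async

The example configuration in this chapter uses ip sync throughout.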
Point-in-time snapshot is illustrated in Figure 6–2. Data from the master volume of each disk is copied to the shadow volume on the same disk. The point-in-time bitmap tracks differences between the master volume and the shadow volume. When data is copied to the shadow volume, the point-in-time bitmap is reset.
The following figure illustrates how remote mirror replication and point-in-time snapshot are used in Example Configuration.
This section provides guidelines for configuring data replication between clusters. This section also contains tips for configuring replication resource groups and application resource groups. Use these guidelines when you are configuring data replication for your cluster.
This section discusses the following topics:
Replication resource groups collocate the device group under Sun StorEdge Availability Suite 3.1 software control with the logical hostname resource. A replication resource group must have the following characteristics:
Be a failover resource group
A failover resource can run on only one node at a time. When a failover occurs, failover resources take part in the failover.
Have a logical hostname resource
The logical hostname must be hosted by the primary cluster. After a failover or switchover, the logical hostname must be hosted by the secondary cluster. The Domain Name System (DNS) is used to associate the logical hostname with a cluster.
Have an HAStoragePlus resource
The HAStoragePlus resource enforces the switchover of the device group when the replication resource group is switched over or failed over. Sun Cluster software also enforces the switchover of the replication resource group when the device group is switched over. In this way, the replication resource group and the device group are always collocated, or mastered by the same node.
The following extension properties must be defined in the HAStoragePlus resource:
GlobalDevicePaths. This extension property defines the device group to which a volume belongs.
AffinityOn property = True. This extension property causes the device group to switch over or fail over when the replication resource group switches over or fails over. This feature is called an affinity switchover.
For more information about HAStoragePlus, see the SUNW.HAStoragePlus(5) man page.
Be named after the device group with which it is collocated, followed by -stor-rg
For example, devicegroup-stor-rg.
Be online on both the primary cluster and the secondary cluster
To be highly available, an application must be managed as a resource in an application resource group. An application resource group can be configured for a failover application or a scalable application.
Application resources and application resource groups configured on the primary cluster must also be configured on the secondary cluster. Also, the data accessed by the application resource must be replicated to the secondary cluster.
This section provides guidelines for configuring the following application resource groups:
In a failover application, an application runs on one node at a time. If that node fails, the application fails over to another node in the same cluster. A resource group for a failover application must have the following characteristics:
Have an HAStoragePlus resource to enforce the switchover of the device group when the application resource group is switched over or failed over
The device group is collocated with the replication resource group and the application resource group. Therefore, the switchover of the application resource group enforces the switchover of the device group and replication resource group. The application resource group, the replication resource group, and the device group are mastered by the same node.
Note, however, that a switchover or failover of the device group or the replication resource group does not cause a switchover or failover of the application resource group.
If the application data is globally mounted, the presence of an HAStoragePlus resource in the application resource group is not compulsory but is advised.
If the application data is mounted locally, the presence of an HAStoragePlus resource in the application resource group is compulsory.
Without an HAStoragePlus resource, the switchover or failover of the application resource group would not trigger the switchover or failover of the replication resource group and device group. After a switchover or failover, the application resource group, replication resource group, and device group would not be mastered by the same node.
For more information about HAStoragePlus, see the SUNW.HAStoragePlus(5) man page.
Must be online on the primary cluster and offline on the secondary cluster
The application resource group must be brought online on the secondary cluster when the secondary cluster takes over as the primary cluster.
The following figure illustrates the configuration of an application resource group and a replication resource group in a failover application.
In a scalable application, an application runs on several nodes to create a single, logical service. If a node that is running a scalable application fails, failover does not occur. The application continues to run on the other nodes.
When a scalable application is managed as a resource in an application resource group, it is not necessary to collocate the application resource group with the device group. Therefore, it is not necessary to create an HAStoragePlus resource for the application resource group.
A resource group for a scalable application must have the following characteristics:
Have a dependency on the shared address resource group
The shared address is used by the nodes that are running the scalable application to distribute incoming data. A command sketch of this configuration appears at the end of this section.
Be online on the primary cluster and offline on the secondary cluster
The following figure illustrates the configuration of resource groups in a scalable application.
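The following is a minimal sketch of how such a pair of resource groups might be created with scrgadm. The names sharedaddress-rg, scalable-app-rg, and lhost-shared, as well as the property values, are illustrative assumptions, not part of the example configuration in this chapter:

nodeA# scrgadm -a -g sharedaddress-rg -h nodeA,nodeB
nodeA# scrgadm -a -S -g sharedaddress-rg -l lhost-shared
nodeA# scrgadm -a -g scalable-app-rg \
-y Maximum_primaries=2 -y Desired_primaries=2 \
-y RG_dependencies=sharedaddress-rg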
If the primary cluster fails, the application must be switched over to the secondary cluster as soon as possible. To enable the secondary cluster to take over, the DNS must be updated. In addition, the secondary volume must be mounted on the mount point directory for the application file system.
The DNS associates a client with the logical hostname of an application. After a failover or switchover, the DNS mapping to the primary cluster must be removed, and a DNS mapping to the secondary cluster must be created. The following figure shows how the DNS maps a client to a cluster.
To update the DNS, use the nsupdate command. For information, see the nsupdate(1M) man page. For an example of how to cope with a failover or switchover, see Example of How to Cope With a Failover or Switchover.
After repair, the primary cluster can be brought back online. To switch back to the original primary cluster, perform the following steps:
Synchronize the primary cluster with the secondary cluster to ensure that the primary volume is up-to-date.
Update the DNS so that clients can access the application on the primary cluster.
Mount the primary volume onto the mount point directory for the application file system.
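The following is a condensed sketch of these steps, using the names from the example configuration in this chapter. The reverse-update option (-u -r), which copies the changes made on the secondary back to the primary, is an assumption; confirm the correct synchronization direction for your configuration in the sndradm documentation.

# Step 1: Resynchronize the primary with the secondary (reverse update; assumption)
nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -u -r lhost-reprg-prim \
/dev/vx/rdsk/devicegroup/vol01 \
/dev/vx/rdsk/devicegroup/vol04 lhost-reprg-sec \
/dev/vx/rdsk/devicegroup/vol01 \
/dev/vx/rdsk/devicegroup/vol04 ip sync
# Step 2: Update the DNS with nsupdate (see the example later in this chapter)
# Step 3: Mount the primary volume on the application mount point
nodeA# mount /global/mountpoint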
This section provides a step-by-step example of how data replication was configured for an NFS application by using Sun StorEdge Availability Suite 3.1 software.
Figure 6–7 illustrates the cluster configuration used in the example configuration. The secondary cluster in the example configuration contains one node, but other cluster configurations can be used.
Table 6–1 summarizes the hardware and software required by the example configuration. The operating environment, Sun Cluster software, and volume manager software must be installed on the cluster nodes before you install Sun StorEdge Availability Suite 3.1 software and patches.
Table 6–1 Required Hardware and Software
| Hardware or Software | Requirement |
|---|---|
| Node hardware | Sun StorEdge Availability Suite 3.1 software is supported on all servers using the Solaris operating environment. For information about which hardware to use, see the Sun Cluster 3.x Hardware Administration Manual. |
| Disk space | Approximately 11 Mbytes. |
| Operating environment | Solaris 8 or Solaris 9 releases that are supported by Sun Cluster software. All nodes must use the same version of the operating environment. For information about installation, see Installing the Software. |
| Sun Cluster software | Sun Cluster 3.1 4/04 software. For information about installation, see Chapter 2, Installing and Configuring Sun Cluster Software and How to Install Sun Cluster Software on a Single-Node Cluster. |
| Volume manager software | Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager (VxVM). All nodes must use the same version of volume manager software. Information about installation is in Installing and Configuring Solstice DiskSuite/Solaris Volume Manager Software and SPARC: Installing and Configuring VxVM Software. |
| Sun StorEdge Availability Suite 3.1 software | For information about how to install the software, see the Sun StorEdge Availability Suite 3.1 Point-in-Time Copy Software Installation Guide and the Sun StorEdge Availability Suite 3.1 Remote Mirror Software Installation Guide. |
| Sun StorEdge Availability Suite 3.1 software patches | For information about the latest patches, see http://sunsolve.sun.com. |
This chapter describes how disk device groups and resource groups were configured for an NFS application. The following table lists the names of the groups and resources that were created for the example configuration.
Table 6–2 Summary of the Groups and Resources in the Example Configuration
| Group or Resource | Name | Description |
|---|---|---|
| Disk device group | devicegroup | The disk device group. |
| Replication resource group and resources | devicegroup-stor-rg | The replication resource group. |
| | lhost-reprg-prim, lhost-reprg-sec | The logical hostnames for the replication resource group on the primary cluster and the secondary cluster. |
| | devicegroup-stor | The HAStoragePlus resource for the replication resource group. |
| Application resource group and resources | nfs-rg | The application resource group. |
| | lhost-nfsrg-prim, lhost-nfsrg-sec | The logical hostnames for the application resource group on the primary cluster and the secondary cluster. |
| | nfs-dg-rs | The HAStoragePlus resource for the application. |
| | nfs-rs | The NFS resource. |
With the exception of devicegroup-stor-rg, the names of the groups and resources are example names that can be changed as required. The replication resource group must have a name with the format devicegroup-stor-rg.
This section describes how to configure a disk device group on the primary cluster and the secondary cluster. This example configuration uses VxVM software. For information about Solstice DiskSuite/Solaris Volume Manager software, see Chapter 3, Installing and Configuring Solstice DiskSuite/Solaris Volume Manager Software.
The following figure illustrates the volumes that were created in the disk device group.
The volumes defined in this section must not include disk label private areas, for example, cylinder 0. The VxVM software manages this constraint automatically.
Create a disk group that contains four volumes, volume 1 through volume 4.
For information about configuring a disk group by using the VxVM software, see Chapter 4, SPARC: Installing and Configuring VERITAS Volume Manager.
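As a sketch, the disk group and volumes could be created with commands such as the following. The disk name (disk1=c1t1d0) and the volume sizes are assumptions; see the referenced chapter for the supported procedure. Volume 2 (the shadow volume) should be at least as large as volume 1 (the master volume), while volumes 3 and 4 hold the point-in-time and remote mirror bitmaps and can be much smaller:

nodeA# vxdg init devicegroup disk1=c1t1d0
nodeA# /usr/sbin/vxassist -g devicegroup make vol01 2g disk1
nodeA# /usr/sbin/vxassist -g devicegroup make vol02 2g disk1
nodeA# /usr/sbin/vxassist -g devicegroup make vol03 30m disk1
nodeA# /usr/sbin/vxassist -g devicegroup make vol04 30m disk1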
Access nodeA as superuser.
nodeA is the first node of the primary cluster. For a reminder of which node is nodeA, see Figure 6–7.
Configure the disk group to create a disk device group.
nodeA# /usr/cluster/bin/scconf -a -D type=vxvm,name=devicegroup \
,nodelist=nodeA:nodeB
The disk device group is called devicegroup.
Start the disk device group.
nodeA# /usr/cluster/bin/scswitch -z -D devicegroup -h nodeA
Synchronize the disk device group with the Sun Cluster software.
nodeA# /usr/cluster/bin/scconf -c -D name=devicegroup,sync
Create the file system for the disk device group.
nodeA# /usr/sbin/newfs /dev/vx/rdsk/devicegroup/vol01 < /dev/null
nodeA# /usr/sbin/newfs /dev/vx/rdsk/devicegroup/vol02 < /dev/null
nodeA# /usr/sbin/newfs /dev/vx/rdsk/devicegroup/vol03 < /dev/null
nodeA# /usr/sbin/newfs /dev/vx/rdsk/devicegroup/vol04 < /dev/null
Enable remote access between the nodes in the primary cluster and secondary cluster by adding the following entries to the /.rhosts file on nodeA and nodeB.
nodeC + + root
Follow the procedure in How to Configure a Disk Device Group on the Primary Cluster, with these exceptions:
Replace nodeA with nodeC.
Do not use nodeB.
This section describes how the file systems were configured for the NFS application.
On nodeA and nodeB, create a mount point directory for the NFS file system.
For example:
nodeA# mkdir /global/mountpoint
On nodeA and nodeB, configure the master volume to be mounted automatically on the mount point.
Add or replace the following text in the /etc/vfstab file on nodeA and nodeB. The text must be on a single line.
/dev/vx/dsk/devicegroup/vol01 /dev/vx/rdsk/devicegroup/vol01 \
/global/mountpoint ufs 3 no global,logging
For a reminder of the volume names and volume numbers used in the disk device group, see Figure 6–8.
On nodeA, create a volume for the file system information that is used by Sun StorEdge Availability Suite 3.1 software.
nodeA# /usr/sbin/vxassist -g devicegroup make vol05 120m disk1
Volume 5 contains the file system information that is used by Sun StorEdge Availability Suite 3.1 software.
On nodeA, resynchronize the device group with the Sun Cluster software.
nodeA# /usr/cluster/bin/scconf -c -D name=devicegroup,sync
On nodeA, create the file system for volume 5.
nodeA# /usr/sbin/newfs /dev/vx/rdsk/devicegroup/vol05
On nodeA and nodeB, create a mount point for volume 5.
For example:
nodeA# mkdir /global/etc
On nodeA and nodeB, configure volume 5 to be mounted automatically on the mount point.
Add or replace the following text in the /etc/vfstab file on nodeA and nodeB. The text must be on a single line.
/dev/vx/dsk/devicegroup/vol05 /dev/vx/rdsk/devicegroup/vol05 \
/global/etc ufs 3 yes global,logging
Mount volume 5 on nodeA.
nodeA# mount /global/etc
Make volume 5 accessible to remote systems.
Create a directory called /global/etc/SUNW.nfs on nodeA.
nodeA# mkdir -p /global/etc/SUNW.nfs
Create the file /global/etc/SUNW.nfs/dfstab.nfs-rs on nodeA.
nodeA# touch /global/etc/SUNW.nfs/dfstab.nfs-rs
Add the following line to the /global/etc/SUNW.nfs/dfstab.nfs-rs file on nodeA:
share -F nfs -o rw -d "HA NFS" /global/mountpoint
Repeat the procedure in How to Configure the File System on the Primary Cluster for the NFS Application, with these exceptions:
Replace nodeA with nodeC.
Do not use nodeB.
This section describes how a replication resource group was created on the primary cluster and on the secondary cluster.
Access nodeA as superuser.
Register SUNW.HAStoragePlus as a resource type.
nodeA# /usr/cluster/bin/scrgadm -a -t SUNW.HAStoragePlus
Create a replication resource group for the disk device group.
nodeA# /usr/cluster/bin/scrgadm -a -g devicegroup-stor-rg -h nodeA,nodeB
devicegroup: The name of the disk device group.
devicegroup-stor-rg: The name of the replication resource group.
nodeA,nodeB: Specifies the cluster nodes that can master the replication resource group.
Add a SUNW.HAStoragePlus resource to the replication resource group.
nodeA# /usr/cluster/bin/scrgadm -a -j devicegroup-stor \
-g devicegroup-stor-rg -t SUNW.HAStoragePlus \
-x GlobalDevicePaths=devicegroup \
-x AffinityOn=True
devicegroup-stor: The HAStoragePlus resource for the replication resource group.
GlobalDevicePaths=devicegroup: Specifies the extension property that Sun StorEdge Availability Suite 3.1 software relies on.
AffinityOn=True: Specifies that the SUNW.HAStoragePlus resource must perform an affinity switchover for the global devices and cluster file systems defined by -x GlobalDevicePaths=. Therefore, when the replication resource group fails over or is switched over, the associated device group is switched over.
For more information about these extension properties, see the SUNW.HAStoragePlus(5) man page.
Add a logical hostname resource to the replication resource group.
nodeA# /usr/cluster/bin/scrgadm -a -L \
-j lhost-reprg-prim -g devicegroup-stor-rg -l lhost-reprg-prim
Where lhost-reprg-prim is the logical hostname for the replication resource group on the primary cluster.
Enable the resources, manage the resource group, and bring the resource group online.
nodeA# /usr/cluster/bin/scswitch -Z -g devicegroup-stor-rg
nodeA# /usr/cluster/bin/scswitch -z -g devicegroup-stor-rg -h nodeA
Verify that the resource group is online.
nodeA# /usr/cluster/bin/scstat -g
Examine the resource group state field to confirm that the replication resource group is online for nodeA and nodeB.
Repeat the procedure in How to Create a Replication Resource Group on the Primary Cluster, with these exceptions:
Replace nodeA with nodeC.
Do not use nodeB.
Replace references to lhost-reprg-prim with lhost-reprg-sec.
This section describes how application resource groups were created for an NFS application. The procedures in this section are specific to the application. The procedures cannot be used for another type of application.
Access nodeA as superuser.
Register SUNW.nfs as a resource type.
nodeA# scrgadm -a -t SUNW.nfs
If SUNW.HAStoragePlus has not been registered as a resource type, register it.
nodeA# scrgadm -a -t SUNW.HAStoragePlus
Create an application resource group for the devicegroup.
nodeA# scrgadm -a -g nfs-rg \
-y Pathprefix=/global/etc \
-y Auto_start_on_new_cluster=False \
-y RG_dependencies=devicegroup-stor-rg
nfs-rg: The name of the application resource group.
Pathprefix=/global/etc: Specifies a directory into which the resources in the group can write administrative files.
Auto_start_on_new_cluster=False: Specifies that the application resource group is not started automatically.
RG_dependencies=devicegroup-stor-rg: Specifies the resource groups that the application resource group depends on. In this example, the application resource group depends on the replication resource group.
If the application resource group is switched over to a new primary node, the replication resource group is automatically switched over. However, if the replication resource group is switched over to a new primary node, the application resource group must be manually switched over.
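For example, if the replication resource group has been switched to nodeB, the following sketch moves the application resource group to match:

nodeA# /usr/cluster/bin/scswitch -z -g nfs-rg -h nodeB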
Add a SUNW.HAStoragePlus resource to the application resource group.
nodeA# scrgadm -a -j nfs-dg-rs -g nfs-rg \
-t SUNW.HAStoragePlus \
-x FileSystemMountPoints=/global/mountpoint \
-x AffinityOn=True
nfs-dg-rs: The name of the HAStoragePlus resource for the NFS application.
FileSystemMountPoints=/global/mountpoint: Specifies that the mount point for the file system is global.
SUNW.HAStoragePlus: Specifies that the resource is of the type SUNW.HAStoragePlus.
AffinityOn=True: Specifies that the application resource must perform an affinity switchover for the global devices and cluster file systems defined by -x FileSystemMountPoints=. Therefore, when the application resource group fails over or is switched over, the associated device group is switched over.
For more information about these extension properties, see the SUNW.HAStoragePlus(5) man page.
Add a logical hostname resource to the application resource group.
nodeA# /usr/cluster/bin/scrgadm -a -L -j lhost-nfsrg-prim -g nfs-rg \
-l lhost-nfsrg-prim
Where lhost-nfsrg-prim is the logical hostname of the application resource group on the primary cluster.
Add an NFS resource to the application resource group. The NFS resource depends on the HAStoragePlus resource nfs-dg-rs.
nodeA# /usr/cluster/bin/scrgadm -a -g nfs-rg \
-j nfs-rs -t SUNW.nfs -y Resource_dependencies=nfs-dg-rs
Enable the resources, manage the application resource group, and bring the application resource group online on nodeA.
nodeA# /usr/cluster/bin/scswitch -Z -g nfs-rg
nodeA# /usr/cluster/bin/scswitch -z -g nfs-rg -h nodeA
Verify that the application resource group is online.
nodeA# /usr/cluster/bin/scstat -g
Examine the resource group state field to determine whether the application resource group is online for nodeA and nodeB.
Create the application resource group as described in Step 1 through Step 6 of How to Create an Application Resource Group on the Primary Cluster, with the following exceptions:
Replace nodeA with nodeC.
Ignore references to nodeB.
Replace references to lhost-nfsrg-prim with lhost-nfsrg-sec.
Ensure that the application resource group does not come online on nodeC.
nodeC# /usr/cluster/bin/scswitch -n -j nfs-rs
nodeC# /usr/cluster/bin/scswitch -n -j nfs-dg-rs
nodeC# /usr/cluster/bin/scswitch -n -j lhost-nfsrg-sec
nodeC# /usr/cluster/bin/scswitch -z -g nfs-rg -h ""
The resource group remains offline after a reboot, because Auto_start_on_new_cluster=False.
If the global volume is mounted on the primary cluster, unmount the global volume from the secondary cluster.
nodeC# umount /global/mountpoint
If the volume is mounted on a secondary cluster, the synchronization fails.
This section describes how data replication was enabled for the example configuration. This section uses the Sun StorEdge Availability Suite 3.1 software commands sndradm and iiadm. For more information about these commands, see the Sun Cluster 3.0 and Sun StorEdge Software Integration Guide.
Access nodeA as superuser.
Flush all transactions.
nodeA# /usr/sbin/lockfs -a -f
Confirm that the logical hostnames lhost-reprg-prim and lhost-reprg-sec are online.
nodeA# /usr/cluster/bin/scstat -g
Examine the state field of the resource group.
Enable remote mirror replication from the primary cluster to the secondary cluster.
This step enables replication from the master volume of the primary cluster to the master volume of the secondary cluster. In addition, this step enables replication to the remote mirror bitmap on volume 4.
If the primary cluster and secondary cluster are unsynchronized, run this command:
nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -e lhost-reprg-prim \
/dev/vx/rdsk/devicegroup/vol01 \
/dev/vx/rdsk/devicegroup/vol04 lhost-reprg-sec \
/dev/vx/rdsk/devicegroup/vol01 \
/dev/vx/rdsk/devicegroup/vol04 ip sync
If the primary cluster and secondary cluster are synchronized, run this command:
nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -E lhost-reprg-prim \
/dev/vx/rdsk/devicegroup/vol01 \
/dev/vx/rdsk/devicegroup/vol04 lhost-reprg-sec \
/dev/vx/rdsk/devicegroup/vol01 \
/dev/vx/rdsk/devicegroup/vol04 ip sync
Enable autosynchronization.
nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -a on lhost-reprg-prim \
/dev/vx/rdsk/devicegroup/vol01 \
/dev/vx/rdsk/devicegroup/vol04 lhost-reprg-sec \
/dev/vx/rdsk/devicegroup/vol01 \
/dev/vx/rdsk/devicegroup/vol04 ip sync
This step enables autosynchronization. When the active state of autosynchronization is set to on, the volume sets are resynchronized if the system reboots or a failure occurs.
Verify that the cluster is in logging mode.
nodeA# /usr/opt/SUNWesm/sbin/sndradm -P
The output should look like this:
/dev/vx/rdsk/devicegroup/vol01 ->
lhost-reprg-sec:/dev/vx/rdsk/devicegroup/vol01
autosync: off, max q writes:4194304, max q fbas:16384, mode:sync,ctag:
devicegroup, state: logging
In logging mode, the state is logging, and the active state of autosynchronization is off. When the data volume on the disk is written to, the bitmap file on the same disk is updated.
Enable point-in-time snapshot.
nodeA# /usr/opt/SUNWesm/sbin/iiadm -e ind \
/dev/vx/rdsk/devicegroup/vol01 \
/dev/vx/rdsk/devicegroup/vol02 \
/dev/vx/rdsk/devicegroup/vol03
nodeA# /usr/opt/SUNWesm/sbin/iiadm -w \
/dev/vx/rdsk/devicegroup/vol02
This step enables the master volume of the primary disk to be copied to the shadow volume on the same disk. In this example, the master volume is volume 1, the shadow volume is volume 2, and the point-in-time bitmap volume is volume 3.
Attach the point-in-time snapshot to the remote mirror set.
nodeA# /usr/opt/SUNWesm/sbin/sndradm -I a \
/dev/vx/rdsk/devicegroup/vol01 \
/dev/vx/rdsk/devicegroup/vol02 \
/dev/vx/rdsk/devicegroup/vol03
This step associates the point-in-time snapshot with the remote mirror volume set. Sun StorEdge Availability Suite 3.1 software ensures that a point-in-time snapshot is taken before remote mirror replication can occur.
Access nodeC as superuser.
Flush all transactions.
nodeC# /usr/sbin/lockfs -a -f
Enable remote mirror replication from the primary cluster to the secondary cluster.
nodeC# /usr/opt/SUNWesm/sbin/sndradm -n -e lhost-reprg-prim \
/dev/vx/rdsk/devicegroup/vol01 \
/dev/vx/rdsk/devicegroup/vol04 lhost-reprg-sec \
/dev/vx/rdsk/devicegroup/vol01 \
/dev/vx/rdsk/devicegroup/vol04 ip sync
The primary cluster detects the presence of the secondary cluster and starts synchronization. Refer to the system log file /var/opt/SUNWesm/ds.log for information about the status of the clusters.
Enable independent point-in-time snapshot.
nodeC# /usr/opt/SUNWesm/sbin/iiadm -e ind \
/dev/vx/rdsk/devicegroup/vol01 \
/dev/vx/rdsk/devicegroup/vol02 \
/dev/vx/rdsk/devicegroup/vol03
nodeC# /usr/opt/SUNWesm/sbin/iiadm -w \
/dev/vx/rdsk/devicegroup/vol02
Attach the point-in-time snapshot to the remote mirror set.
nodeC# /usr/opt/SUNWesm/sbin/sndradm -I a \
/dev/vx/rdsk/devicegroup/vol01 \
/dev/vx/rdsk/devicegroup/vol02 \
/dev/vx/rdsk/devicegroup/vol03
This section describes how data replication was performed for the example configuration. This section uses the Sun StorEdge Availability Suite 3.1 software commands sndradm and iiadm. For more information about these commands, see the Sun Cluster 3.0 and Sun StorEdge Software Integration Guide.
In this procedure, the master volume of the primary disk is replicated to the master volume on the secondary disk. The master volume is volume 1 and the remote mirror bitmap volume is volume 4.
Access nodeA as superuser.
Verify that the cluster is in logging mode.
nodeA# /usr/opt/SUNWesm/sbin/sndradm -P
The output should look like this:
/dev/vx/rdsk/devicegroup/vol01 ->
lhost-reprg-sec:/dev/vx/rdsk/devicegroup/vol01
autosync: off, max q writes:4194304, max q fbas:16384, mode:sync,ctag:
devicegroup, state: logging
In logging mode, the state is logging, and the active state of autosynchronization is off. When the data volume on the disk is written to, the bitmap file on the same disk is updated.
Flush all transactions.
nodeA# /usr/sbin/lockfs -a -f
Copy the master volume of nodeA to the master volume of nodeC.
nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -m lhost-reprg-prim \
/dev/vx/rdsk/devicegroup/vol01 \
/dev/vx/rdsk/devicegroup/vol04 lhost-reprg-sec \
/dev/vx/rdsk/devicegroup/vol01 \
/dev/vx/rdsk/devicegroup/vol04 ip sync
Wait until the replication is complete and the volumes are synchronized.
nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -w lhost-reprg-prim \
/dev/vx/rdsk/devicegroup/vol01 \
/dev/vx/rdsk/devicegroup/vol04 lhost-reprg-sec \
/dev/vx/rdsk/devicegroup/vol01 \
/dev/vx/rdsk/devicegroup/vol04 ip sync
Confirm that the cluster is in replicating mode.
nodeA# /usr/opt/SUNWesm/sbin/sndradm -P
The output should look like this:
/dev/vx/rdsk/devicegroup/vol01 ->
lhost-reprg-sec:/dev/vx/rdsk/devicegroup/vol01
autosync: on, max q writes:4194304, max q fbas:16384, mode:sync,ctag:
devicegroup, state: replicating
In replicating mode, the state is replicating, and the active state of autosynchronization is on. When the primary volume is written to, the secondary volume is updated by Sun StorEdge Availability Suite 3.1 software.
In this procedure, point-in-time snapshot was used to synchronize the shadow volume of the primary cluster to the master volume of the primary cluster. The master volume is volume 1 and the shadow volume is volume 2.
Access nodeA as superuser.
Quiesce the application that is running on nodeA.
nodeA# /usr/cluster/bin/scswitch -n -j nfs-rs
Put the primary cluster into logging mode.
nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -l lhost-reprg-prim \
/dev/vx/rdsk/devicegroup/vol01 \
/dev/vx/rdsk/devicegroup/vol04 lhost-reprg-sec \
/dev/vx/rdsk/devicegroup/vol01 \
/dev/vx/rdsk/devicegroup/vol04 ip sync
When the data volume on the disk is written to, the bitmap file on the same disk is updated. No replication occurs.
Synchronize the shadow volume of the primary cluster to the master volume of the primary cluster.
nodeA# /usr/opt/SUNWesm/sbin/iiadm -u s /dev/vx/rdsk/devicegroup/vol02
nodeA# /usr/opt/SUNWesm/sbin/iiadm -w /dev/vx/rdsk/devicegroup/vol02
Synchronize the shadow volume of the secondary cluster to the master volume of the secondary cluster.
nodeC# /usr/opt/SUNWesm/sbin/iiadm -u s /dev/vx/rdsk/devicegroup/vol02
nodeC# /usr/opt/SUNWesm/sbin/iiadm -w /dev/vx/rdsk/devicegroup/vol02
Restart the application on nodeA.
nodeA# /usr/cluster/bin/scswitch -e -j nfs-rs
Resynchronize the secondary volume with the primary volume.
nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -u lhost-reprg-prim \
/dev/vx/rdsk/devicegroup/vol01 \
/dev/vx/rdsk/devicegroup/vol04 lhost-reprg-sec \
/dev/vx/rdsk/devicegroup/vol01 \
/dev/vx/rdsk/devicegroup/vol04 ip sync
This section describes how the replication configuration was confirmed in the example configuration.
Verify that the primary cluster is in replicating mode, with autosynchronization on.
nodeA# /usr/opt/SUNWesm/sbin/sndradm -P
The output should look like this:
/dev/vx/rdsk/devicegroup/vol01 ->
lhost-reprg-sec:/dev/vx/rdsk/devicegroup/vol01
autosync: on, max q writes:4194304, max q fbas:16384, mode:sync,ctag:
devicegroup, state: replicating
In replicating mode, the state is replicating, and the active state of autosynchronization is on. When the primary volume is written to, the secondary volume is updated by Sun StorEdge Availability Suite 3.1 software.
If the primary cluster is not in replicating mode, put it into replicating mode, as follows:
nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -u lhost-reprg-prim \
/dev/vx/rdsk/devicegroup/vol01 \
/dev/vx/rdsk/devicegroup/vol04 lhost-reprg-sec \
/dev/vx/rdsk/devicegroup/vol01 \
/dev/vx/rdsk/devicegroup/vol04 ip sync
Make a directory on a client machine.
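For example, create the /dir directory that the later steps in this example use:

client-machine# mkdir /dir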
Mount the directory to the application on the primary cluster, and display the mounted directory.
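A sketch of this step, assuming the /dir directory created above and the logical hostname lhost-nfsrg-prim of the application resource group on the primary cluster:

client-machine# mount -o rw lhost-nfsrg-prim:/global/mountpoint /dir
client-machine# ls /dir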
Mount the directory to the application on the secondary cluster, and display the mounted directory.
Unmount the directory from the application on the primary cluster.
client-machine# umount /dir
Take the application resource group offline on the primary cluster.
nodeA# /usr/cluster/bin/scswitch -n -j nfs-rs
nodeA# /usr/cluster/bin/scswitch -n -j nfs-dg-rs
nodeA# /usr/cluster/bin/scswitch -n -j lhost-nfsrg-prim
nodeA# /usr/cluster/bin/scswitch -z -g nfs-rg -h ""
Put the primary cluster into logging mode.
nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -l lhost-reprg-prim \
/dev/vx/rdsk/devicegroup/vol01 \
/dev/vx/rdsk/devicegroup/vol04 lhost-reprg-sec \
/dev/vx/rdsk/devicegroup/vol01 \
/dev/vx/rdsk/devicegroup/vol04 ip sync
When the data volume on the disk is written to, the bitmap file on the same disk is updated. No replication occurs.
Bring the application resource group online on the secondary cluster.
nodeC# /usr/cluster/bin/scswitch -Z -g nfs-rg
Access the client machine as superuser.
You see a prompt like this:
client-machine#
Mount the directory that was created in Step 2 to the application on the secondary cluster.
client-machine# mount -o rw lhost-nfsrg-sec:/global/mountpoint /dir
Display the mounted directory.
client-machine# ls /dir
Ensure that the directory displayed in Step 3 is the same as that displayed in Step 4.
Return the application on the primary cluster to the mounted directory.
Take the application resource group offline on the secondary cluster.
nodeC# /usr/cluster/bin/scswitch -n -j nfs-rs
nodeC# /usr/cluster/bin/scswitch -n -j nfs-dg-rs
nodeC# /usr/cluster/bin/scswitch -n -j lhost-nfsrg-sec
nodeC# /usr/cluster/bin/scswitch -z -g nfs-rg -h ""
Ensure that the global volume is unmounted from the secondary cluster.
nodeC# umount /global/mountpoint
Bring the application resource group online on the primary cluster.
nodeA# /usr/cluster/bin/scswitch -Z -g nfs-rg
Put the primary cluster into replicating mode.
nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -u lhost-reprg-prim \
/dev/vx/rdsk/devicegroup/vol01 \
/dev/vx/rdsk/devicegroup/vol04 lhost-reprg-sec \
/dev/vx/rdsk/devicegroup/vol01 \
/dev/vx/rdsk/devicegroup/vol04 ip sync
When the primary volume is written to, the secondary volume is updated by Sun StorEdge Availability Suite 3.1 software.
This section describes how a switchover was provoked and how the application was transferred to the secondary cluster. After a switchover or failover, you must update the DNS entry and configure the application to read and write to the secondary volume.
Put the primary cluster into logging mode.
nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -l lhost-reprg-prim \
/dev/vx/rdsk/devicegroup/vol01 \
/dev/vx/rdsk/devicegroup/vol04 lhost-reprg-sec \
/dev/vx/rdsk/devicegroup/vol01 \
/dev/vx/rdsk/devicegroup/vol04 ip sync
When the data volume on the disk is written to, the bitmap file on the same disk is updated. No replication occurs.
Confirm that the primary cluster and the secondary cluster are in logging mode, with autosynchronization off.
On nodeA, run this command:
nodeA# /usr/opt/SUNWesm/sbin/sndradm -P
The output should look like this:
/dev/vx/rdsk/devicegroup/vol01 ->
lhost-reprg-sec:/dev/vx/rdsk/devicegroup/vol01
autosync:off, max q writes:4194304,max q fbas:16384,mode:sync,ctag:
devicegroup, state: logging
On nodeC, run this command:
nodeC# /usr/opt/SUNWesm/sbin/sndradm -P
The output should look like this:
/dev/vx/rdsk/devicegroup/vol01 <-
lhost-reprg-prim:/dev/vx/rdsk/devicegroup/vol01
autosync:off, max q writes:4194304,max q fbas:16384,mode:sync,ctag:
devicegroup, state: logging
For nodeA and nodeC, the state should be logging, and the active state of autosynchronization should be off.
Confirm that the secondary cluster is ready to take over from the primary cluster.
nodeC# /usr/sbin/fsck -y /dev/vx/rdsk/devicegroup/vol01
Switch over to the secondary cluster.
nodeC# scswitch -Z -g nfs-rg
nodeC# scswitch -z -g nfs-rg -h nodeC
For an illustration of how the DNS maps a client to a cluster, see Figure 6–6.
Start the nsupdate command.
For information, see the nsupdate(1M) man page.
Remove the current DNS mapping between the client machine and the logical hostname of the application resource group on the primary cluster.
> update delete client-machine A
> update delete IPaddress1.in-addr.arpa TTL PTR client-machine
client-machine: The full name of the client. For example, mymachine.mycompany.com.
IPaddress1.in-addr.arpa: The IP address of the logical hostname lhost-nfsrg-prim, in reverse order.
TTL: The time to live, in seconds. A typical value is 3600.
Create the new DNS mapping between the client machine and the logical hostname of the application resource group on the secondary cluster.
> update add client-machine TTL A IPaddress2
> update add IPaddress3.in-addr.arpa TTL PTR client-machine
IPaddress2: The IP address of the logical hostname lhost-nfsrg-sec, in forward order.
IPaddress3: The IP address of the logical hostname lhost-nfsrg-sec, in reverse order.
Configure the secondary volume to be mounted to the mount point directory for the NFS file system.
client-machine# mount -o rw lhost-nfsrg-sec:/global/mountpoint /xxx
The mount point was created in Step 1 of How to Configure the File System on the Primary Cluster for the NFS Application.
Confirm that the secondary cluster has write access to the mount point.
client-machine# touch /xxx/data.1
client-machine# umount /xxx