This appendix provides an alternative to host-based replication that does not use Sun Cluster Geographic Edition. Sun recommends that you use Sun Cluster Geographic Edition for host-based replication to simplify the configuration and operation of host-based replication within a cluster. See Understanding Data Replication.
The example in this appendix shows how to configure host-based data replication between clusters by using Sun StorEdge Availability Suite 3.1 or 3.2 software or Sun StorageTek Availability Suite 4.0 software. The example illustrates a complete cluster configuration for an NFS application and provides detailed information about how individual tasks are performed. All tasks must be performed in the global-cluster voting node. The example does not include all of the steps that are required by other applications or other cluster configurations.
If you use role-based access control (RBAC) instead of superuser to access the cluster nodes, ensure that you can assume an RBAC role that provides authorization for all Sun Cluster commands. This series of data replication procedures requires the following Sun Cluster RBAC authorizations if the user is not superuser:
solaris.cluster.modify
solaris.cluster.admin
solaris.cluster.read
See the System Administration Guide: Security Services for more information about using RBAC roles. See the Sun Cluster man pages for the RBAC authorization that each Sun Cluster subcommand requires.
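Before starting the procedures, it can be worth confirming that the role you assume actually carries the three authorizations listed above. The following sketch checks a captured auths(1) listing for them; the AUTHS_OUTPUT value is hypothetical sample data, not output from a real cluster node.

```shell
#!/bin/sh
# Sketch: verify that an auths(1) listing contains the three Sun Cluster
# RBAC authorizations these procedures require. AUTHS_OUTPUT stands in for
# `auths` output captured on a cluster node (hypothetical sample data).
AUTHS_OUTPUT="solaris.cluster.modify,solaris.cluster.admin,solaris.cluster.read"

missing=0
for auth in solaris.cluster.modify solaris.cluster.admin solaris.cluster.read
do
    case ",$AUTHS_OUTPUT," in
        *",$auth,"*) ;;                          # authorization present
        *) echo "missing: $auth"; missing=1 ;;
    esac
done
[ "$missing" -eq 0 ] && echo "all required RBAC authorizations present"
```

On a real node, you would replace the sample string with `AUTHS_OUTPUT=$(auths)`.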
This section introduces disaster tolerance and describes the data replication methods that Sun StorageTek Availability Suite software uses.
Disaster tolerance is the ability of a system to restore an application on an alternate cluster when the primary cluster fails. Disaster tolerance is based on data replication and failover. Failover is the automatic relocation of a resource group or device group from a primary cluster to a secondary cluster. If the primary cluster fails, the application and the data are immediately available on the secondary cluster.
This section describes the remote mirror replication method and the point-in-time snapshot method used by Sun StorageTek Availability Suite software. This software uses the sndradm(1RPC) and iiadm(1II) commands to replicate data.
Figure A–1 shows remote mirror replication. Data from the master volume of the primary disk is replicated to the master volume of the secondary disk through a TCP/IP connection. A remote mirror bitmap tracks differences between the master volume on the primary disk and the master volume on the secondary disk.
Remote mirror replication can be performed synchronously in real time or asynchronously. Each volume set in each cluster can be configured individually for synchronous or asynchronous replication.
In synchronous data replication, a write operation is not confirmed as complete until the remote volume has been updated.
In asynchronous data replication, a write operation is confirmed as complete before the remote volume is updated. Asynchronous data replication provides greater flexibility over long distances and with limited bandwidth.
Figure A–2 shows point-in-time snapshot. Data from the master volume of each disk is copied to the shadow volume on the same disk. The point-in-time bitmap tracks differences between the master volume and the shadow volume. When data is copied to the shadow volume, the point-in-time bitmap is reset.
Figure A–3 illustrates how remote mirror replication and point-in-time snapshot are used in this example configuration.
This section provides guidelines for configuring data replication between clusters. This section also contains tips for configuring replication resource groups and application resource groups. Use these guidelines when you are configuring data replication for your cluster.
This section discusses the following topics:
Replication resource groups collocate the device group under Sun StorageTek Availability Suite software control with the logical hostname resource. A replication resource group must have the following characteristics:
Be a failover resource group
A failover resource can run on only one node at a time. When a failover occurs, failover resources take part in the failover.
Have a logical hostname resource
The logical hostname must be hosted by the primary cluster. After a failover, the logical hostname must be hosted by the secondary cluster. The Domain Name System (DNS) is used to associate the logical hostname with a cluster.
Have an HAStoragePlus resource
The HAStoragePlus resource enforces the failover of the device group when the replication resource group is switched over or failed over. Sun Cluster software also enforces the failover of the replication resource group when the device group is switched over. In this way, the replication resource group and the device group are always colocated, or mastered by the same node.
The following extension properties must be defined in the HAStoragePlus resource:
GlobalDevicePaths. This extension property defines the device group to which a volume belongs.
AffinityOn property = True. This extension property causes the device group to switch over or fail over when the replication resource group switches over or fails over. This feature is called an affinity switchover.
ZPoolsSearchDir. This extension property is required when using a ZFS file system.
For more information about HAStoragePlus, see the SUNW.HAStoragePlus(5) man page.
Be named after the device group with which it is colocated, followed by -stor-rg
For example, devgrp-stor-rg.
Be online on both the primary cluster and the secondary cluster
To be highly available, an application must be managed as a resource in an application resource group. An application resource group can be configured for a failover application or a scalable application.
Application resources and application resource groups configured on the primary cluster must also be configured on the secondary cluster. Also, the data accessed by the application resource must be replicated to the secondary cluster.
This section provides guidelines for configuring the following application resource groups:
In a failover application, an application runs on one node at a time. If that node fails, the application fails over to another node in the same cluster. A resource group for a failover application must have the following characteristics:
Have an HAStoragePlus resource to enforce the failover of the device group when the application resource group is switched over or failed over
The device group is colocated with the replication resource group and the application resource group. Therefore, the failover of the application resource group enforces the failover of the device group and replication resource group. The application resource group, the replication resource group, and the device group are mastered by the same node.
Note, however, that a failover of the device group or the replication resource group does not cause a failover of the application resource group.
If the application data is globally mounted, the presence of an HAStoragePlus resource in the application resource group is not required but is advised.
If the application data is mounted locally, the presence of an HAStoragePlus resource in the application resource group is required.
Without an HAStoragePlus resource, the failover of the application resource group would not trigger the failover of the replication resource group and device group. After a failover, the application resource group, replication resource group, and device group would not be mastered by the same node.
For more information about HAStoragePlus, see the SUNW.HAStoragePlus(5) man page.
Must be online on the primary cluster and offline on the secondary cluster
The application resource group must be brought online on the secondary cluster when the secondary cluster takes over as the primary cluster.
Figure A–4 illustrates the configuration of an application resource group and a replication resource group in a failover application.
In a scalable application, an application runs on several nodes to create a single, logical service. If a node that is running a scalable application fails, failover does not occur. The application continues to run on the other nodes.
When a scalable application is managed as a resource in an application resource group, it is not necessary to collocate the application resource group with the device group. Therefore, it is not necessary to create an HAStoragePlus resource for the application resource group.
A resource group for a scalable application must have the following characteristics:
Have a dependency on the shared address resource group
The nodes that are running the scalable application use the shared address to distribute incoming data.
Be online on the primary cluster and offline on the secondary cluster
Figure A–5 illustrates the configuration of resource groups in a scalable application.
If the primary cluster fails, the application must be switched over to the secondary cluster as soon as possible. To enable the secondary cluster to take over, the DNS must be updated.
The DNS associates a client with the logical hostname of an application. After a failover, the DNS mapping to the primary cluster must be removed, and a DNS mapping to the secondary cluster must be created. Figure A–6 shows how the DNS maps a client to a cluster.
To update the DNS, use the nsupdate command. For information, see the nsupdate(1M) man page. For an example of how to manage a failover, see Example of How to Manage a Failover.
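As a hedged illustration of the nsupdate step, the following sketch generates an nsupdate batch file that deletes the old A record for the application's logical hostname and adds one that points at the secondary cluster. The hostname, address, and TTL are invented example values, not taken from a real deployment.

```shell
#!/bin/sh
# Sketch: generate an nsupdate(1M) batch file that repoints the NFS logical
# hostname from the failed primary cluster to the secondary cluster.
# LOGICAL_HOST, SECONDARY_ADDR, and TTL are example values.
LOGICAL_HOST=lhost-nfsrg-prim.example.com
SECONDARY_ADDR=192.168.20.27        # address hosted by the secondary cluster
TTL=3600

BATCH=/tmp/nsupdate.failover.$$
cat > "$BATCH" <<EOF
update delete $LOGICAL_HOST A
update add $LOGICAL_HOST $TTL A $SECONDARY_ADDR
send
EOF

# On a real system you would now run: nsupdate "$BATCH"
grep -c '^update' "$BATCH"    # → 2
```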
After repair, the primary cluster can be brought back online. To switch back to the original primary cluster, perform the following tasks:
Synchronize the primary cluster with the secondary cluster to ensure that the primary volume is up-to-date.
Update the DNS so that clients can access the application on the primary cluster.
Table A–1 lists the tasks in this example of how data replication was configured for an NFS application by using Sun StorageTek Availability Suite software.
Table A–1 Task Map: Example of a Data Replication Configuration
Task | Instructions
---|---
1. Connect and install the clusters | Connecting and Installing the Clusters
2. Configure device groups, file systems for the NFS application, and resource groups on the primary cluster and on the secondary cluster | Example of How to Configure Device Groups and Resource Groups
3. Enable data replication on the primary cluster and on the secondary cluster | Example of How to Enable Data Replication
4. Perform data replication |
5. Verify the data replication configuration |
Figure A–7 illustrates the cluster configuration the example configuration uses. The secondary cluster in the example configuration contains one node, but other cluster configurations can be used.
Table A–2 summarizes the hardware and software that the example configuration requires. The Solaris OS, Sun Cluster software, and volume manager software must be installed on the cluster nodes before Sun StorageTek Availability Suite software and patches are installed.
Table A–2 Required Hardware and Software
Hardware or Software | Requirement
---|---
Node hardware | Sun StorageTek Availability Suite software is supported on all servers that use the Solaris OS. For information about which hardware to use, see the Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.
Disk space | Approximately 15 Mbytes.
Solaris OS | Solaris OS releases that are supported by Sun Cluster software. All nodes must use the same version of the Solaris OS. For information about installation, see the Sun Cluster Software Installation Guide for Solaris OS.
Sun Cluster software | Sun Cluster 3.2 2/08 software. For information about installation, see the Sun Cluster Software Installation Guide for Solaris OS.
Volume manager software | Solaris Volume Manager software or Veritas Volume Manager (VxVM) software. All nodes must use the same version of volume manager software. For information about installation, see Chapter 4, Configuring Solaris Volume Manager Software, and Chapter 5, Installing and Configuring Veritas Volume Manager, in the Sun Cluster Software Installation Guide for Solaris OS.
Sun StorageTek Availability Suite software | For information about how to install the software, see the installation manuals for your release of Sun StorEdge Availability Suite or Sun StorageTek Availability Suite software.
Sun StorageTek Availability Suite software patches | For information about the latest patches, see http://www.sunsolve.com.
This section describes how device groups and resource groups are configured for an NFS application. For additional information, see Configuring Replication Resource Groups and Configuring Application Resource Groups.
This section contains the following procedures:
How to Configure the File System on the Primary Cluster for the NFS Application
How to Configure the File System on the Secondary Cluster for the NFS Application
How to Create a Replication Resource Group on the Primary Cluster
How to Create a Replication Resource Group on the Secondary Cluster
How to Create an NFS Application Resource Group on the Primary Cluster
How to Create an NFS Application Resource Group on the Secondary Cluster
The following table lists the names of the groups and resources that are created for the example configuration.
Table A–3 Summary of the Groups and Resources in the Example Configuration
Group or Resource | Name | Description
---|---|---
Device group | devgrp | The device group
Replication resource group and resources | devgrp-stor-rg | The replication resource group
 | lhost-reprg-prim, lhost-reprg-sec | The logical host names for the replication resource group on the primary cluster and the secondary cluster
 | devgrp-stor | The HAStoragePlus resource for the replication resource group
Application resource group and resources | nfs-rg | The application resource group
 | lhost-nfsrg-prim, lhost-nfsrg-sec | The logical host names for the application resource group on the primary cluster and the secondary cluster
 | nfs-dg-rs | The HAStoragePlus resource for the application
 | nfs-rs | The NFS resource
With the exception of devgrp-stor-rg, the names of the groups and resources are example names that can be changed as required. The replication resource group must have a name with the format devicegroupname-stor-rg.
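The naming rule above can be applied mechanically in a setup script. A minimal sketch, using the example names from Table A–3:

```shell
#!/bin/sh
# Sketch: derive the replication resource group name from the device group
# name, following the required devicegroupname-stor-rg format.
DEVICEGROUP=devgrp                    # device group name from Table A-3
REPL_RG="${DEVICEGROUP}-stor-rg"      # replication RG name is fixed by format
APP_RG=nfs-rg                         # application RG name is free-form

echo "$REPL_RG"                       # → devgrp-stor-rg
```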
This example configuration uses VxVM software. For information about Solaris Volume Manager software, see Chapter 4, Configuring Solaris Volume Manager Software, in Sun Cluster Software Installation Guide for Solaris OS.
The following figure illustrates the volumes that are created in the device group.
The volumes that are defined in this procedure must not include disk-label private areas, for example, cylinder 0. The VxVM software manages this constraint automatically.
Ensure that you have completed the following tasks:
Read the guidelines and requirements in the following sections:
Set up the primary and secondary clusters as described in Connecting and Installing the Clusters.
Access nodeA as superuser or assume a role that provides solaris.cluster.modify RBAC authorization.
The node nodeA is the first node of the primary cluster. For a reminder of which node is nodeA, see Figure A–7.
Create a disk group on nodeA that contains four volumes: vol01 through vol04.
For information about configuring a disk group by using the VxVM software, see the Chapter 5, Installing and Configuring Veritas Volume Manager, in Sun Cluster Software Installation Guide for Solaris OS.
Configure the disk group to create a device group.
nodeA# cldevicegroup create -t vxvm -n nodeA,nodeB devgrp
The device group is called devgrp.
Create the file system for the device group.
nodeA# newfs /dev/vx/rdsk/devgrp/vol01 < /dev/null
nodeA# newfs /dev/vx/rdsk/devgrp/vol02 < /dev/null
No file system is needed for vol03 or vol04, which are instead used as raw volumes.
Go to How to Configure a Device Group on the Secondary Cluster.
Complete the procedure How to Configure a Device Group on the Primary Cluster.
Access nodeC as superuser or assume a role that provides solaris.cluster.modify RBAC authorization.
Create a disk group on nodeC that contains four volumes: vol01 through vol04.
Configure the disk group to create a device group.
nodeC# cldevicegroup create -t vxvm -n nodeC devgrp
The device group is named devgrp.
Create the file system for the device group.
nodeC# newfs /dev/vx/rdsk/devgrp/vol01 < /dev/null
nodeC# newfs /dev/vx/rdsk/devgrp/vol02 < /dev/null
No file system is needed for vol03 or vol04, which are instead used as raw volumes.
Go to How to Configure the File System on the Primary Cluster for the NFS Application.
Complete the procedure How to Configure a Device Group on the Secondary Cluster.
On nodeA and nodeB, become superuser or assume a role that provides solaris.cluster.admin RBAC authorization.
On nodeA and nodeB, create a mount-point directory for the NFS file system.
For example:
nodeA# mkdir /global/mountpoint
On nodeA and nodeB, configure the master volume to be mounted automatically on the mount point.
Add or replace the following text in the /etc/vfstab file on nodeA and nodeB. The text must be on a single line.
/dev/vx/dsk/devgrp/vol01 /dev/vx/rdsk/devgrp/vol01 /global/mountpoint ufs 3 no global,logging
For a reminder of the volume names and volume numbers that are used in the device group, see Figure A–8.
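A vfstab entry like the one above must be a single line with seven fields. As a quick sanity check (a sketch that operates on a copy of the entry rather than the live /etc/vfstab):

```shell
#!/bin/sh
# Sketch: sanity-check the vfstab entry shown above. A valid entry is a
# single line with seven fields: device-to-mount, device-to-fsck,
# mount-point, fstype, fsck-pass, mount-at-boot, options.
ENTRY='/dev/vx/dsk/devgrp/vol01 /dev/vx/rdsk/devgrp/vol01 /global/mountpoint ufs 3 no global,logging'

FIELDS=$(echo "$ENTRY" | awk '{print NF}')    # count whitespace-separated fields
OPTIONS=$(echo "$ENTRY" | awk '{print $7}')   # mount options column

[ "$FIELDS" -eq 7 ] && echo "field count ok"
case "$OPTIONS" in
    *global*) echo "global mount option present" ;;
esac
```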
On nodeA, create a volume for the file system information that is used by the Sun Cluster HA for NFS data service.
nodeA# vxassist -g devgrp make vol05 120m disk1
Volume 5, vol05, contains the file system information that is used by the Sun Cluster HA for NFS data service.
On nodeA, resynchronize the device group with the Sun Cluster software.
nodeA# cldevicegroup sync devgrp
On nodeA, create the file system for vol05.
nodeA# newfs /dev/vx/rdsk/devgrp/vol05
On nodeA and nodeB, create a mount point for vol05.
The following example creates the mount point /global/etc.
nodeA# mkdir /global/etc
On nodeA and nodeB, configure vol05 to be mounted automatically on the mount point.
Add or replace the following text in the /etc/vfstab file on nodeA and nodeB. The text must be on a single line.
/dev/vx/dsk/devgrp/vol05 /dev/vx/rdsk/devgrp/vol05 /global/etc ufs 3 yes global,logging
Mount vol05 on nodeA.
nodeA# mount /global/etc
Make vol05 accessible to remote systems.
Create a directory called /global/etc/SUNW.nfs on nodeA.
nodeA# mkdir -p /global/etc/SUNW.nfs
Create the file /global/etc/SUNW.nfs/dfstab.nfs-rs on nodeA.
nodeA# touch /global/etc/SUNW.nfs/dfstab.nfs-rs
Add the following line to the /global/etc/SUNW.nfs/dfstab.nfs-rs file on nodeA.
share -F nfs -o rw -d "HA NFS" /global/mountpoint
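Because this procedure may be re-run, it can help to add the share line only if it is not already present. A sketch, using a temporary file to stand in for /global/etc/SUNW.nfs/dfstab.nfs-rs:

```shell
#!/bin/sh
# Sketch: add the HA-NFS share entry to dfstab.nfs-rs only if it is not
# already there, so the step can be re-run safely. DFSTAB is a temporary
# stand-in for /global/etc/SUNW.nfs/dfstab.nfs-rs.
DFSTAB=/tmp/dfstab.nfs-rs.$$
SHARE_LINE='share -F nfs -o rw -d "HA NFS" /global/mountpoint'

touch "$DFSTAB"
grep -qF "$SHARE_LINE" "$DFSTAB" || echo "$SHARE_LINE" >> "$DFSTAB"
grep -qF "$SHARE_LINE" "$DFSTAB" || echo "$SHARE_LINE" >> "$DFSTAB"   # re-run is a no-op

grep -cF "$SHARE_LINE" "$DFSTAB"    # → 1, not 2
```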
Go to How to Configure the File System on the Secondary Cluster for the NFS Application.
Complete the procedure How to Configure the File System on the Primary Cluster for the NFS Application.
On nodeC, become superuser or assume a role that provides solaris.cluster.admin RBAC authorization.
On nodeC, create a mount-point directory for the NFS file system.
For example:
nodeC# mkdir /global/mountpoint
On nodeC, configure the master volume to be mounted automatically on the mount point.
Add or replace the following text in the /etc/vfstab file on nodeC. The text must be on a single line.
/dev/vx/dsk/devgrp/vol01 /dev/vx/rdsk/devgrp/vol01 /global/mountpoint ufs 3 no global,logging
On nodeC, create a volume for the file system information that is used by the Sun Cluster HA for NFS data service.
nodeC# vxassist -g devgrp make vol05 120m disk1
Volume 5, vol05, contains the file system information that is used by the Sun Cluster HA for NFS data service.
On nodeC, resynchronize the device group with the Sun Cluster software.
nodeC# cldevicegroup sync devgrp
On nodeC, create the file system for vol05.
nodeC# newfs /dev/vx/rdsk/devgrp/vol05
On nodeC, create a mount point for vol05.
The following example creates the mount point /global/etc.
nodeC# mkdir /global/etc
On nodeC, configure vol05 to be mounted automatically on the mount point.
Add or replace the following text in the /etc/vfstab file on nodeC. The text must be on a single line.
/dev/vx/dsk/devgrp/vol05 /dev/vx/rdsk/devgrp/vol05 /global/etc ufs 3 yes global,logging
Mount vol05 on nodeC.
nodeC# mount /global/etc
Make vol05 accessible to remote systems.
Create a directory called /global/etc/SUNW.nfs on nodeC.
nodeC# mkdir -p /global/etc/SUNW.nfs
Create the file /global/etc/SUNW.nfs/dfstab.nfs-rs on nodeC.
nodeC# touch /global/etc/SUNW.nfs/dfstab.nfs-rs
Add the following line to the /global/etc/SUNW.nfs/dfstab.nfs-rs file on nodeC:
share -F nfs -o rw -d "HA NFS" /global/mountpoint
Go to How to Create a Replication Resource Group on the Primary Cluster.
Complete the procedure How to Configure the File System on the Secondary Cluster for the NFS Application.
Access nodeA as superuser or assume a role that provides solaris.cluster.modify, solaris.cluster.admin, and solaris.cluster.read RBAC authorization.
Register the SUNW.HAStoragePlus resource type.
nodeA# clresourcetype register SUNW.HAStoragePlus
Create a replication resource group for the device group.
nodeA# clresourcegroup create -n nodeA,nodeB devgrp-stor-rg
-n nodeA,nodeB: Specifies that cluster nodes nodeA and nodeB can master the replication resource group.
devgrp-stor-rg: The name of the replication resource group. In this name, devgrp specifies the name of the device group.
Add a SUNW.HAStoragePlus resource to the replication resource group.
nodeA# clresource create -g devgrp-stor-rg -t SUNW.HAStoragePlus \
-p GlobalDevicePaths=devgrp \
-p AffinityOn=True \
devgrp-stor
-g devgrp-stor-rg: Specifies the resource group to which the resource is added.
-p GlobalDevicePaths=devgrp: Specifies the extension property that Sun StorageTek Availability Suite software relies on.
-p AffinityOn=True: Specifies that the SUNW.HAStoragePlus resource must perform an affinity switchover for the global devices and cluster file systems defined by -p GlobalDevicePaths=. Therefore, when the replication resource group fails over or is switched over, the associated device group is switched over.
For more information about these extension properties, see the SUNW.HAStoragePlus(5) man page.
Add a logical hostname resource to the replication resource group.
nodeA# clreslogicalhostname create -g devgrp-stor-rg lhost-reprg-prim
The logical hostname for the replication resource group on the primary cluster is named lhost-reprg-prim.
Enable the resources, manage the resource group, and bring the resource group online.
nodeA# clresourcegroup online -e -M -n nodeA devgrp-stor-rg
-e: Enables associated resources.
-M: Manages the resource group.
-n nodeA: Specifies the node on which to bring the resource group online.
Verify that the resource group is online.
nodeA# clresourcegroup status devgrp-stor-rg
Examine the resource group state field to confirm that the replication resource group is online on nodeA.
Go to How to Create a Replication Resource Group on the Secondary Cluster.
Complete the procedure How to Create a Replication Resource Group on the Primary Cluster.
Access nodeC as superuser or assume a role that provides solaris.cluster.modify, solaris.cluster.admin, and solaris.cluster.read RBAC authorization.
Register SUNW.HAStoragePlus as a resource type.
nodeC# clresourcetype register SUNW.HAStoragePlus
Create a replication resource group for the device group.
nodeC# clresourcegroup create -n nodeC devgrp-stor-rg
create: Creates the resource group.
-n nodeC: Specifies the node list for the resource group.
devgrp-stor-rg: The name of the replication resource group. In this name, devgrp specifies the name of the device group.
Add a SUNW.HAStoragePlus resource to the replication resource group.
nodeC# clresource create -g devgrp-stor-rg \
-t SUNW.HAStoragePlus \
-p GlobalDevicePaths=devgrp \
-p AffinityOn=True \
devgrp-stor
create: Creates the resource.
-t SUNW.HAStoragePlus: Specifies the resource type.
-p GlobalDevicePaths=devgrp: Specifies the extension property that Sun StorageTek Availability Suite software relies on.
-p AffinityOn=True: Specifies that the SUNW.HAStoragePlus resource must perform an affinity switchover for the global devices and cluster file systems defined by -p GlobalDevicePaths=. Therefore, when the replication resource group fails over or is switched over, the associated device group is switched over.
devgrp-stor: The HAStoragePlus resource for the replication resource group.
For more information about these extension properties, see the SUNW.HAStoragePlus(5) man page.
Add a logical hostname resource to the replication resource group.
nodeC# clreslogicalhostname create -g devgrp-stor-rg lhost-reprg-sec
The logical hostname for the replication resource group on the secondary cluster is named lhost-reprg-sec.
Enable the resources, manage the resource group, and bring the resource group online.
nodeC# clresourcegroup online -e -M -n nodeC devgrp-stor-rg
online: Brings the resource group online.
-e: Enables associated resources.
-M: Manages the resource group.
-n nodeC: Specifies the node on which to bring the resource group online.
Verify that the resource group is online.
nodeC# clresourcegroup status devgrp-stor-rg
Examine the resource group state field to confirm that the replication resource group is online on nodeC.
Go to How to Create an NFS Application Resource Group on the Primary Cluster.
This procedure describes how application resource groups are created for NFS. This procedure is specific to this application and cannot be used for another type of application.
Complete the procedure How to Create a Replication Resource Group on the Secondary Cluster.
Access nodeA as superuser or assume a role that provides solaris.cluster.modify, solaris.cluster.admin, and solaris.cluster.read RBAC authorization.
Register SUNW.nfs as a resource type.
nodeA# clresourcetype register SUNW.nfs
If SUNW.HAStoragePlus has not been registered as a resource type, register it.
nodeA# clresourcetype register SUNW.HAStoragePlus
Create an application resource group for the device group devgrp.
nodeA# clresourcegroup create \
-p Pathprefix=/global/etc \
-p Auto_start_on_new_cluster=False \
-p RG_dependencies=devgrp-stor-rg \
nfs-rg
-p Pathprefix=/global/etc: Specifies the directory into which the resources in the group can write administrative files.
-p Auto_start_on_new_cluster=False: Specifies that the application resource group is not started automatically.
-p RG_dependencies=devgrp-stor-rg: Specifies the resource group that the application resource group depends on. In this example, the application resource group depends on the replication resource group devgrp-stor-rg. If the application resource group is switched over to a new primary node, the replication resource group is automatically switched over. However, if the replication resource group is switched over to a new primary node, the application resource group must be manually switched over.
nfs-rg: The name of the application resource group.
Add a SUNW.HAStoragePlus resource to the application resource group.
nodeA# clresource create -g nfs-rg \
-t SUNW.HAStoragePlus \
-p FileSystemMountPoints=/global/mountpoint \
-p AffinityOn=True \
nfs-dg-rs
create: Creates the resource.
-g nfs-rg: Specifies the resource group to which the resource is added.
-t SUNW.HAStoragePlus: Specifies that the resource is of the type SUNW.HAStoragePlus.
-p FileSystemMountPoints=/global/mountpoint: Specifies that the mount point for the file system is global.
-p AffinityOn=True: Specifies that the application resource must perform an affinity switchover for the global devices and cluster file systems defined by -p FileSystemMountPoints=. Therefore, when the application resource group fails over or is switched over, the associated device group is switched over.
nfs-dg-rs: The name of the HAStoragePlus resource for the NFS application.
For more information about these extension properties, see the SUNW.HAStoragePlus(5) man page.
Add a logical hostname resource to the application resource group.
nodeA# clreslogicalhostname create -g nfs-rg lhost-nfsrg-prim
The logical hostname of the application resource group on the primary cluster is named lhost-nfsrg-prim.
Enable the resources, manage the application resource group, and bring the application resource group online.
Enable the HAStoragePlus resource for the NFS application.
nodeA# clresource enable nfs-dg-rs
Bring the application resource group online on nodeA .
nodeA# clresourcegroup online -e -M -n nodeA nfs-rg
online: Brings the resource group online.
-e: Enables the associated resources.
-M: Manages the resource group.
-n nodeA: Specifies the node on which to bring the resource group online.
nfs-rg: The name of the resource group.
Verify that the application resource group is online.
nodeA# clresourcegroup status
Examine the resource group state field to confirm that the application resource group is online on nodeA.
Go to How to Create an NFS Application Resource Group on the Secondary Cluster.
Complete the procedure How to Create an NFS Application Resource Group on the Primary Cluster.
Access nodeC as superuser or assume a role that provides solaris.cluster.modify, solaris.cluster.admin, and solaris.cluster.read RBAC authorization.
Register SUNW.nfs as a resource type.
nodeC# clresourcetype register SUNW.nfs
If SUNW.HAStoragePlus has not been registered as a resource type, register it.
nodeC# clresourcetype register SUNW.HAStoragePlus
Create an application resource group for the device group.
nodeC# clresourcegroup create \
-p Pathprefix=/global/etc \
-p Auto_start_on_new_cluster=False \
-p RG_dependencies=devgrp-stor-rg \
nfs-rg
create: Creates the resource group.
-p: Specifies a property of the resource group.
Pathprefix=/global/etc: Specifies a directory into which the resources in the group can write administrative files.
Auto_start_on_new_cluster=False: Specifies that the application resource group is not started automatically.
RG_dependencies=devgrp-stor-rg: Specifies the resource groups that the application resource group depends on. In this example, the application resource group depends on the replication resource group. If the application resource group is switched over to a new primary node, the replication resource group is automatically switched over. However, if the replication resource group is switched over to a new primary node, the application resource group must be manually switched over.
nfs-rg: The name of the application resource group.
Add a SUNW.HAStoragePlus resource to the application resource group.
nodeC# clresource create -g nfs-rg \
-t SUNW.HAStoragePlus \
-p FileSystemMountPoints=/global/mountpoint \
-p AffinityOn=True \
nfs-dg-rs
create: Creates the resource.
-g nfs-rg: Specifies the resource group to which the resource is added.
-t SUNW.HAStoragePlus: Specifies that the resource is of the type SUNW.HAStoragePlus.
-p: Specifies a property of the resource.
FileSystemMountPoints=/global/mountpoint: Specifies that the mount point for the file system is global.
AffinityOn=True: Specifies that the application resource must perform an affinity switchover for the global devices and cluster file systems defined by -p FileSystemMountPoints=. Therefore, when the application resource group fails over or is switched over, the associated device group is switched over.
nfs-dg-rs: The name of the HAStoragePlus resource for the NFS application.
For more information about these extension properties, see the SUNW.HAStoragePlus(5) man page.
Add a logical hostname resource to the application resource group.
nodeC# clreslogicalhostname create -g nfs-rg lhost-nfsrg-sec
The logical hostname of the application resource group on the secondary cluster is named lhost-nfsrg-sec.
Add an NFS resource to the application resource group.
nodeC# clresource create -g nfs-rg \
-t SUNW.nfs -p Resource_dependencies=nfs-dg-rs nfs-rs
Ensure that the application resource group does not come online on nodeC.
nodeC# clresource disable -n nodeC nfs-rs
nodeC# clresource disable -n nodeC nfs-dg-rs
nodeC# clresource disable -n nodeC lhost-nfsrg-sec
nodeC# clresourcegroup online -n "" nfs-rg
The resource group remains offline after a reboot, because Auto_start_on_new_cluster=False.
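The disable commands above run from the top of the dependency chain down: the NFS resource first, then the HAStoragePlus resource, then the logical hostname. A sketch that generates the commands in that order as a dry run (nothing here talks to a real cluster):

```shell
#!/bin/sh
# Sketch: emit the clresource disable commands in dependency order
# (NFS resource first, logical hostname last) as a dry run. On a real
# secondary node you could pipe the output to sh to execute it.
NODE=nodeC
CMDS=""
for r in nfs-rs nfs-dg-rs lhost-nfsrg-sec
do
    CMDS="${CMDS}clresource disable -n $NODE $r
"
done
printf '%s' "$CMDS"
```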
If the global volume is mounted on the primary cluster, unmount the global volume from the secondary cluster.
nodeC# umount /global/mountpoint
If the volume is mounted on a secondary cluster, the synchronization fails.
Go to Example of How to Enable Data Replication.
This section describes how data replication is enabled for the example configuration. This section uses the Sun StorageTek Availability Suite software commands sndradm and iiadm. For more information about these commands, see the Sun StorageTek Availability Suite documentation.
This section contains the following procedures:
Access nodeA as superuser or assume a role that provides solaris.cluster.read RBAC authorization.
Flush all transactions.
nodeA# lockfs -a -f
Confirm that the logical host names lhost-reprg-prim and lhost-reprg-sec are online.
nodeA# clresourcegroup status
nodeC# clresourcegroup status
Examine the state field of the resource group.
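The exact layout of the status output depends on the Sun Cluster release. As an illustration only, with a hypothetical replication resource group name, a group that is online on nodeA is reported with a status of Online:

```
=== Cluster Resource Groups ===

Group Name          Node Name     Suspended     Status
----------          ---------     ---------     ------
devgrp-stor-rg      nodeA         No            Online
```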
Enable remote mirror replication from the primary cluster to the secondary cluster.
This step enables replication from the master volume on the primary cluster to the master volume on the secondary cluster. In addition, this step enables replication to the remote mirror bitmap on vol04.
If the primary cluster and secondary cluster are unsynchronized, run this command:
For Sun StorEdge Availability Suite software:
nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -e lhost-reprg-prim \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 lhost-reprg-sec \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 ip sync
For Sun StorageTek Availability Suite software:
nodeA# /usr/sbin/sndradm -n -e lhost-reprg-prim \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 lhost-reprg-sec \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 ip sync
If the primary cluster and secondary cluster are synchronized, run this command:
For Sun StorEdge Availability Suite software:
nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -E lhost-reprg-prim \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 lhost-reprg-sec \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 ip sync
For Sun StorageTek Availability Suite software:
nodeA# /usr/sbin/sndradm -n -E lhost-reprg-prim \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 lhost-reprg-sec \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 ip sync
Enable autosynchronization.
For Sun StorEdge Availability Suite software:
nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -a on lhost-reprg-prim \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 lhost-reprg-sec \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 ip sync
For Sun StorageTek Availability Suite software:
nodeA# /usr/sbin/sndradm -n -a on lhost-reprg-prim \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 lhost-reprg-sec \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 ip sync
This step enables autosynchronization. When the active state of autosynchronization is set to on, the volume sets are resynchronized if the system reboots or a failure occurs.
Verify that the cluster is in logging mode.
For Sun StorEdge Availability Suite software:
nodeA# /usr/opt/SUNWesm/sbin/sndradm -P
For Sun StorageTek Availability Suite software:
nodeA# /usr/sbin/sndradm -P
The output should resemble the following:
/dev/vx/rdsk/devgrp/vol01 -> lhost-reprg-sec:/dev/vx/rdsk/devgrp/vol01
autosync: off, max q writes: 4194304, max q fbas: 16384, mode: sync, ctag: devgrp, state: logging
In logging mode, the state is logging, and the active state of autosynchronization is off. When the data volume on the disk is written to, the bitmap file on the same disk is updated.
Enable point-in-time snapshot.
For Sun StorEdge Availability Suite software:
nodeA# /usr/opt/SUNWesm/sbin/iiadm -e ind \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol02 \
/dev/vx/rdsk/devgrp/vol03
nodeA# /usr/opt/SUNWesm/sbin/iiadm -w \
/dev/vx/rdsk/devgrp/vol02
For Sun StorageTek Availability Suite software:
nodeA# /usr/sbin/iiadm -e ind \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol02 \
/dev/vx/rdsk/devgrp/vol03
nodeA# /usr/sbin/iiadm -w \
/dev/vx/rdsk/devgrp/vol02
This step enables the master volume on the primary cluster to be copied to the shadow volume on the same cluster. The master volume, shadow volume, and point-in-time bitmap volume must be in the same device group. In this example, the master volume is vol01, the shadow volume is vol02, and the point-in-time bitmap volume is vol03.
Attach the point-in-time snapshot to the remote mirror set.
For Sun StorEdge Availability Suite software:
nodeA# /usr/opt/SUNWesm/sbin/sndradm -I a \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol02 \
/dev/vx/rdsk/devgrp/vol03
For Sun StorageTek Availability Suite software:
nodeA# /usr/sbin/sndradm -I a \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol02 \
/dev/vx/rdsk/devgrp/vol03
This step associates the point-in-time snapshot with the remote mirror volume set. Sun StorageTek Availability Suite software ensures that a point-in-time snapshot is taken before remote mirror replication can occur.
Go to How to Enable Replication on the Secondary Cluster.
Complete the procedure How to Enable Replication on the Primary Cluster.
Access nodeC as superuser.
Flush all transactions.
nodeC# lockfs -a -f
Enable remote mirror replication from the primary cluster to the secondary cluster.
For Sun StorEdge Availability Suite software:
nodeC# /usr/opt/SUNWesm/sbin/sndradm -n -e lhost-reprg-prim \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 lhost-reprg-sec \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 ip sync
For Sun StorageTek Availability Suite software:
nodeC# /usr/sbin/sndradm -n -e lhost-reprg-prim \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 lhost-reprg-sec \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 ip sync
The primary cluster detects the presence of the secondary cluster and starts synchronization. For information about the status of the clusters, see the system log file: /var/opt/SUNWesm/ds.log for Sun StorEdge Availability Suite software, or /var/adm for Sun StorageTek Availability Suite software.
Enable independent point-in-time snapshot.
For Sun StorEdge Availability Suite software:
nodeC# /usr/opt/SUNWesm/sbin/iiadm -e ind \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol02 \
/dev/vx/rdsk/devgrp/vol03
nodeC# /usr/opt/SUNWesm/sbin/iiadm -w \
/dev/vx/rdsk/devgrp/vol02
For Sun StorageTek Availability Suite software:
nodeC# /usr/sbin/iiadm -e ind \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol02 \
/dev/vx/rdsk/devgrp/vol03
nodeC# /usr/sbin/iiadm -w \
/dev/vx/rdsk/devgrp/vol02
Attach the point-in-time snapshot to the remote mirror set.
For Sun StorEdge Availability Suite software:
nodeC# /usr/opt/SUNWesm/sbin/sndradm -I a \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol02 \
/dev/vx/rdsk/devgrp/vol03
For Sun StorageTek Availability Suite software:
nodeC# /usr/sbin/sndradm -I a \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol02 \
/dev/vx/rdsk/devgrp/vol03
Go to Example of How to Perform Data Replication.
This section describes how data replication is performed for the example configuration. This section uses the Sun StorageTek Availability Suite software commands sndradm and iiadm. For more information about these commands, see the Sun StorageTek Availability Suite documentation.
This section contains the following procedures:
In this procedure, the master volume of the primary disk is replicated to the master volume on the secondary disk. The master volume is vol01 and the remote mirror bitmap volume is vol04.
Access nodeA as superuser.
Verify that the cluster is in logging mode.
For Sun StorEdge Availability Suite software:
nodeA# /usr/opt/SUNWesm/sbin/sndradm -P
For Sun StorageTek Availability Suite software:
nodeA# /usr/sbin/sndradm -P
The output should resemble the following:
/dev/vx/rdsk/devgrp/vol01 -> lhost-reprg-sec:/dev/vx/rdsk/devgrp/vol01
autosync: off, max q writes: 4194304, max q fbas: 16384, mode: sync, ctag: devgrp, state: logging
In logging mode, the state is logging, and the active state of autosynchronization is off. When the data volume on the disk is written to, the bitmap file on the same disk is updated.
Flush all transactions.
nodeA# lockfs -a -f
Copy the master volume of nodeA to the master volume of nodeC.
For Sun StorEdge Availability Suite software:
nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -m lhost-reprg-prim \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 lhost-reprg-sec \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 ip sync
For Sun StorageTek Availability Suite software:
nodeA# /usr/sbin/sndradm -n -m lhost-reprg-prim \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 lhost-reprg-sec \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 ip sync
Wait until the replication is complete and the volumes are synchronized.
For Sun StorEdge Availability Suite software:
nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -w lhost-reprg-prim \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 lhost-reprg-sec \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 ip sync
For Sun StorageTek Availability Suite software:
nodeA# /usr/sbin/sndradm -n -w lhost-reprg-prim \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 lhost-reprg-sec \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 ip sync
Confirm that the cluster is in replicating mode.
For Sun StorEdge Availability Suite software:
nodeA# /usr/opt/SUNWesm/sbin/sndradm -P
For Sun StorageTek Availability Suite software:
nodeA# /usr/sbin/sndradm -P
The output should resemble the following:
/dev/vx/rdsk/devgrp/vol01 -> lhost-reprg-sec:/dev/vx/rdsk/devgrp/vol01
autosync: on, max q writes: 4194304, max q fbas: 16384, mode: sync, ctag: devgrp, state: replicating
In replicating mode, the state is replicating, and the active state of autosynchronization is on. When the primary volume is written to, the secondary volume is updated by Sun StorageTek Availability Suite software.
Go to How to Perform a Point-in-Time Snapshot.
In this procedure, point-in-time snapshot is used to synchronize the shadow volume of the primary cluster to the master volume of the primary cluster. The master volume is vol01, the bitmap volume is vol04, and the shadow volume is vol02.
Complete the procedure How to Perform a Remote Mirror Replication.
Access nodeA as superuser or assume a role that provides solaris.cluster.modify and solaris.cluster.admin RBAC authorization.
Disable the resource that is running on nodeA.
nodeA# clresource disable -n nodeA nfs-rs
Change the primary cluster to logging mode.
For Sun StorEdge Availability Suite software:
nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -l lhost-reprg-prim \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 lhost-reprg-sec \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 ip sync
For Sun StorageTek Availability Suite software:
nodeA# /usr/sbin/sndradm -n -l lhost-reprg-prim \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 lhost-reprg-sec \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 ip sync
When the data volume on the disk is written to, the bitmap file on the same disk is updated. No replication occurs.
Synchronize the shadow volume of the primary cluster to the master volume of the primary cluster.
For Sun StorEdge Availability Suite software:
nodeA# /usr/opt/SUNWesm/sbin/iiadm -u s /dev/vx/rdsk/devgrp/vol02
nodeA# /usr/opt/SUNWesm/sbin/iiadm -w /dev/vx/rdsk/devgrp/vol02
For Sun StorageTek Availability Suite software:
nodeA# /usr/sbin/iiadm -u s /dev/vx/rdsk/devgrp/vol02
nodeA# /usr/sbin/iiadm -w /dev/vx/rdsk/devgrp/vol02
Synchronize the shadow volume of the secondary cluster to the master volume of the secondary cluster.
For Sun StorEdge Availability Suite software:
nodeC# /usr/opt/SUNWesm/sbin/iiadm -u s /dev/vx/rdsk/devgrp/vol02
nodeC# /usr/opt/SUNWesm/sbin/iiadm -w /dev/vx/rdsk/devgrp/vol02
For Sun StorageTek Availability Suite software:
nodeC# /usr/sbin/iiadm -u s /dev/vx/rdsk/devgrp/vol02
nodeC# /usr/sbin/iiadm -w /dev/vx/rdsk/devgrp/vol02
Restart the application on nodeA.
nodeA# clresource enable -n nodeA nfs-rs
Resynchronize the secondary volume with the primary volume.
For Sun StorEdge Availability Suite software:
nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -u lhost-reprg-prim \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 lhost-reprg-sec \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 ip sync
For Sun StorageTek Availability Suite software:
nodeA# /usr/sbin/sndradm -n -u lhost-reprg-prim \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 lhost-reprg-sec \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 ip sync
Go to How to Verify That Replication Is Configured Correctly.
Complete the procedure How to Perform a Point-in-Time Snapshot.
Access nodeA and nodeC as superuser or assume a role that provides solaris.cluster.admin RBAC authorization.
Verify that the primary cluster is in replicating mode, with autosynchronization on.
For Sun StorEdge Availability Suite software:
nodeA# /usr/opt/SUNWesm/sbin/sndradm -P
For Sun StorageTek Availability Suite software:
nodeA# /usr/sbin/sndradm -P
The output should resemble the following:
/dev/vx/rdsk/devgrp/vol01 -> lhost-reprg-sec:/dev/vx/rdsk/devgrp/vol01
autosync: on, max q writes: 4194304, max q fbas: 16384, mode: sync, ctag: devgrp, state: replicating
In replicating mode, the state is replicating, and the active state of autosynchronization is on. When the primary volume is written to, the secondary volume is updated by Sun StorageTek Availability Suite software.
If the primary cluster is not in replicating mode, put it into replicating mode.
For Sun StorEdge Availability Suite software:
nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -u lhost-reprg-prim \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 lhost-reprg-sec \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 ip sync
For Sun StorageTek Availability Suite software:
nodeA# /usr/sbin/sndradm -n -u lhost-reprg-prim \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 lhost-reprg-sec \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 ip sync
Create a directory on a client machine.
Mount the directory to the application on the primary cluster, and display the mounted directory.
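The preceding two steps can be sketched as follows. The mount point /dir is illustrative; it matches the umount command used later in this procedure, and the export path /global/mountpoint matches the example configuration:

```shell
# Create a mount point on the client machine (the name /dir is illustrative).
client-machine# mkdir /dir

# Mount the NFS export through the primary cluster's logical hostname,
# then display the mounted directory.
client-machine# mount -o rw lhost-nfsrg-prim:/global/mountpoint /dir
client-machine# ls /dir
```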
Mount the directory to the application on the secondary cluster, and display the mounted directory.
Unmount the directory from the application on the primary cluster.
client-machine# umount /dir
Take the application resource group offline on the primary cluster.
nodeA# clresource disable -n nodeA nfs-rs
nodeA# clresource disable -n nodeA nfs-dg-rs
nodeA# clresource disable -n nodeA lhost-nfsrg-prim
nodeA# clresourcegroup online -n "" nfs-rg
Change the primary cluster to logging mode.
For Sun StorEdge Availability Suite software:
nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -l lhost-reprg-prim \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 lhost-reprg-sec \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 ip sync
For Sun StorageTek Availability Suite software:
nodeA# /usr/sbin/sndradm -n -l lhost-reprg-prim \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 lhost-reprg-sec \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 ip sync
When the data volume on the disk is written to, the bitmap file on the same disk is updated. No replication occurs.
Ensure that the PathPrefix directory is available.
nodeC# mount | grep /global/etc
Bring the application resource group online on the secondary cluster.
nodeC# clresourcegroup online -n nodeC nfs-rg
Access the client machine as superuser.
You see a prompt that resembles the following:
client-machine#
Mount the directory that was created in Step 4 to the application on the secondary cluster.
client-machine# mount -o rw lhost-nfsrg-sec:/global/mountpoint /dir
Display the mounted directory.
client-machine# ls /dir
Ensure that the directory displayed in Step 5 is the same as the directory displayed in Step 6.
Return the application on the primary cluster to the mounted directory.
Take the application resource group offline on the secondary cluster.
nodeC# clresource disable -n nodeC nfs-rs
nodeC# clresource disable -n nodeC nfs-dg-rs
nodeC# clresource disable -n nodeC lhost-nfsrg-sec
nodeC# clresourcegroup online -n "" nfs-rg
Ensure that the global volume is unmounted from the secondary cluster.
nodeC# umount /global/mountpoint
Bring the application resource group online on the primary cluster.
nodeA# clresourcegroup online -n nodeA nfs-rg
Change the primary cluster to replicating mode.
For Sun StorEdge Availability Suite software:
nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -u lhost-reprg-prim \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 lhost-reprg-sec \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 ip sync
For Sun StorageTek Availability Suite software:
nodeA# /usr/sbin/sndradm -n -u lhost-reprg-prim \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 lhost-reprg-sec \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 ip sync
When the primary volume is written to, the secondary volume is updated by Sun StorageTek Availability Suite software.
Example of How to Manage a Failover
This section describes how to provoke a failover and how the application is transferred to the secondary cluster. After a failover, update the DNS entries. For additional information, see Guidelines for Managing a Failover.
This section contains the following procedures:
Access nodeA and nodeC as superuser or assume a role that provides solaris.cluster.admin RBAC authorization.
Change the primary cluster to logging mode.
For Sun StorEdge Availability Suite software:
nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -l lhost-reprg-prim \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 lhost-reprg-sec \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 ip sync
For Sun StorageTek Availability Suite software:
nodeA# /usr/sbin/sndradm -n -l lhost-reprg-prim \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 lhost-reprg-sec \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 ip sync
When the data volume on the disk is written to, the bitmap volume on the same device group is updated. No replication occurs.
Confirm that the primary cluster and the secondary cluster are in logging mode, with autosynchronization off.
On nodeA, confirm the mode and setting:
For Sun StorEdge Availability Suite software:
nodeA# /usr/opt/SUNWesm/sbin/sndradm -P
For Sun StorageTek Availability Suite software:
nodeA# /usr/sbin/sndradm -P
The output should resemble the following:
/dev/vx/rdsk/devgrp/vol01 -> lhost-reprg-sec:/dev/vx/rdsk/devgrp/vol01
autosync: off, max q writes: 4194304, max q fbas: 16384, mode: sync, ctag: devgrp, state: logging
On nodeC, confirm the mode and setting:
For Sun StorEdge Availability Suite software:
nodeC# /usr/opt/SUNWesm/sbin/sndradm -P
For Sun StorageTek Availability Suite software:
nodeC# /usr/sbin/sndradm -P
The output should resemble the following:
/dev/vx/rdsk/devgrp/vol01 <- lhost-reprg-prim:/dev/vx/rdsk/devgrp/vol01
autosync: off, max q writes: 4194304, max q fbas: 16384, mode: sync, ctag: devgrp, state: logging
For nodeA and nodeC, the state should be logging, and the active state of autosynchronization should be off.
Confirm that the secondary cluster is ready to take over from the primary cluster.
nodeC# fsck -y /dev/vx/rdsk/devgrp/vol01
Switch over to the secondary cluster.
nodeC# clresourcegroup switch -n nodeC nfs-rg
Go to How to Update the DNS Entry.
For an illustration of how DNS maps a client to a cluster, see Figure A–6.
Complete the procedure How to Provoke a Switchover.
Start the nsupdate command.
For information, see the nsupdate(1M) man page.
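The invocation itself is a single interactive command. As an illustration, assuming the command is run from an administrative host that is permitted to update the DNS server:

```shell
# Illustrative only: start an interactive nsupdate session. The "update"
# commands in the following steps are typed at the ">" prompt and are
# submitted to the DNS server with "send" (or a blank line).
client-machine# nsupdate
>
```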
Remove the current DNS mapping between the logical hostname of the application resource group and the cluster IP address, for both clusters.
> update delete lhost-nfsrg-prim A
> update delete lhost-nfsrg-sec A
> update delete ipaddress1rev.in-addr.arpa ttl PTR lhost-nfsrg-prim
> update delete ipaddress2rev.in-addr.arpa ttl PTR lhost-nfsrg-sec
The IP address of the primary cluster, in reverse order.
The IP address of the secondary cluster, in reverse order.
The time to live, in seconds. A typical value is 3600.
Create a new DNS mapping between the logical hostname of the application resource group and the cluster IP address, for both clusters.
Map the primary logical hostname to the IP address of the secondary cluster and map the secondary logical hostname to the IP address of the primary cluster.
> update add lhost-nfsrg-prim ttl A ipaddress2fwd
> update add lhost-nfsrg-sec ttl A ipaddress1fwd
> update add ipaddress2rev.in-addr.arpa ttl PTR lhost-nfsrg-prim
> update add ipaddress1rev.in-addr.arpa ttl PTR lhost-nfsrg-sec
The IP address of the secondary cluster, in forward order.
The IP address of the primary cluster, in forward order.
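As an illustration only: with hypothetical addresses 192.168.1.10 for the primary cluster and 192.168.2.10 for the secondary cluster, and a TTL of 3600 seconds, the substituted add commands would resemble the following:

```shell
# Hypothetical addresses; substitute your clusters' real IP addresses.
# The reverse (PTR) names list the octets of each address in reverse order.
> update add lhost-nfsrg-prim 3600 A 192.168.2.10
> update add lhost-nfsrg-sec 3600 A 192.168.1.10
> update add 10.2.168.192.in-addr.arpa 3600 PTR lhost-nfsrg-prim
> update add 10.1.168.192.in-addr.arpa 3600 PTR lhost-nfsrg-sec
> send
```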