This section describes how to provoke a switchover and how the application is transferred to the secondary cluster. After a switchover or failover, update the DNS entries. For additional information, see Guidelines for Managing a Failover or Switchover.
This section contains the following procedures:
Access nodeA and nodeC as superuser or assume a role that provides solaris.cluster.admin RBAC authorization.
Change the primary cluster to logging mode.
For Sun StorEdge Availability Suite software:
nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -l lhost-reprg-prim \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 lhost-reprg-sec \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 ip sync
For Sun StorageTek Availability Suite software:
nodeA# /usr/sbin/sndradm -n -l lhost-reprg-prim \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 lhost-reprg-sec \
/dev/vx/rdsk/devgrp/vol01 \
/dev/vx/rdsk/devgrp/vol04 ip sync
While the device group is in logging mode, writes to the data volume update the bitmap volume in the same device group, but no replication occurs.
Confirm that the primary cluster and the secondary cluster are in logging mode, with autosynchronization off.
On nodeA, confirm the mode and setting:
For Sun StorEdge Availability Suite software:
nodeA# /usr/opt/SUNWesm/sbin/sndradm -P
For Sun StorageTek Availability Suite software:
nodeA# /usr/sbin/sndradm -P
The output should resemble the following:
/dev/vx/rdsk/devgrp/vol01 -> lhost-reprg-sec:/dev/vx/rdsk/devgrp/vol01
autosync:off, max q writes:4194304, max q fbas:16384, mode:sync,
ctag: devgrp, state: logging
On nodeC, confirm the mode and setting:
For Sun StorEdge Availability Suite software:
nodeC# /usr/opt/SUNWesm/sbin/sndradm -P
For Sun StorageTek Availability Suite software:
nodeC# /usr/sbin/sndradm -P
The output should resemble the following:
/dev/vx/rdsk/devgrp/vol01 <- lhost-reprg-prim:/dev/vx/rdsk/devgrp/vol01
autosync:off, max q writes:4194304, max q fbas:16384, mode:sync,
ctag: devgrp, state: logging
For nodeA and nodeC, the state should be logging, and the active state of autosynchronization should be off.
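The two conditions above can be checked in a script rather than by eye. The following is a minimal sketch that parses captured sndradm -P output and confirms both; the sample text embedded in the variable mirrors the output format shown above, and on a real node you would capture it with output=$(/usr/sbin/sndradm -P) instead. Adjust the grep patterns if your release formats the fields differently.

```shell
#!/bin/sh
# Minimal sketch: verify that a replication set is in logging mode with
# autosynchronization off, by parsing captured `sndradm -P` output.
# The sample text below mirrors the output shown above; on a real node,
# capture it instead with:  output=$(/usr/sbin/sndradm -P)
output='/dev/vx/rdsk/devgrp/vol01 -> lhost-reprg-sec:/dev/vx/rdsk/devgrp/vol01
autosync:off, max q writes:4194304, max q fbas:16384, mode:sync,
ctag: devgrp, state: logging'

# Both checks must pass before the switchover is safe to perform.
echo "$output" | grep -q 'state: logging' || { echo 'not in logging mode'; exit 1; }
echo "$output" | grep -q 'autosync:off'   || { echo 'autosync is still on'; exit 1; }
echo 'ready for switchover'
```

Run the same check on both nodeA and nodeC before proceeding.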
Confirm that the secondary cluster is ready to take over from the primary cluster.
nodeC# fsck -y /dev/vx/rdsk/devgrp/vol01
Switch over to the secondary cluster.
nodeC# clresourcegroup switch -n nodeC nfs-rg
Go to How to Update the DNS Entry.
For an illustration of how DNS maps a client to a cluster, see Figure 4–8.
Complete the procedure How to Provoke a Switchover.
Start the nsupdate command.
For information, see the nsupdate(1M) man page.
Remove the current DNS mapping between the logical host name of the application resource group and the cluster IP address, for both clusters.
> update delete lhost-nfsrg-prim A
> update delete lhost-nfsrg-sec A
> update delete ipaddress1rev.in-addr.arpa ttl PTR lhost-nfsrg-prim
> update delete ipaddress2rev.in-addr.arpa ttl PTR lhost-nfsrg-sec
ipaddress1rev
    The IP address of the primary cluster, in reverse order.
ipaddress2rev
    The IP address of the secondary cluster, in reverse order.
ttl
    The time to live, in seconds. A typical value is 3600.
Create a new DNS mapping between the logical host name of the application resource group and the cluster IP address, for both clusters.
Map the primary logical host name to the IP address of the secondary cluster and map the secondary logical host name to the IP address of the primary cluster.
> update add lhost-nfsrg-prim ttl A ipaddress2fwd
> update add lhost-nfsrg-sec ttl A ipaddress1fwd
> update add ipaddress2rev.in-addr.arpa ttl PTR lhost-nfsrg-prim
> update add ipaddress1rev.in-addr.arpa ttl PTR lhost-nfsrg-sec
ipaddress2fwd
    The IP address of the secondary cluster, in forward order.
ipaddress1fwd
    The IP address of the primary cluster, in forward order.
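The delete and add steps above can also be collected into a single batch file and passed to nsupdate in one run. The sketch below only writes such a file; the IP addresses, TTL value, and file path are hypothetical examples, so substitute your clusters' real values, then review the file and feed it to nsupdate.

```shell
#!/bin/sh
# Minimal sketch: build an nsupdate(1M) batch file for the remapping above.
# All values here are hypothetical placeholders for ipaddress1fwd,
# ipaddress2fwd, ipaddress1rev, ipaddress2rev, and ttl -- substitute the
# real addresses and TTL for your clusters.
TTL=3600
PRIM_IP=192.168.1.10        # primary cluster, forward order (ipaddress1fwd)
SEC_IP=192.168.2.10         # secondary cluster, forward order (ipaddress2fwd)
PRIM_REV=10.1.168.192       # primary cluster, reverse order (ipaddress1rev)
SEC_REV=10.2.168.192        # secondary cluster, reverse order (ipaddress2rev)

# Delete the old mappings, add the swapped ones, and submit with `send`.
cat > /tmp/nsupdate.cmds <<EOF
update delete lhost-nfsrg-prim A
update delete lhost-nfsrg-sec A
update delete ${PRIM_REV}.in-addr.arpa ${TTL} PTR lhost-nfsrg-prim
update delete ${SEC_REV}.in-addr.arpa ${TTL} PTR lhost-nfsrg-sec
update add lhost-nfsrg-prim ${TTL} A ${SEC_IP}
update add lhost-nfsrg-sec ${TTL} A ${PRIM_IP}
update add ${SEC_REV}.in-addr.arpa ${TTL} PTR lhost-nfsrg-prim
update add ${PRIM_REV}.in-addr.arpa ${TTL} PTR lhost-nfsrg-sec
send
EOF

# After reviewing the file, apply it with:  nsupdate /tmp/nsupdate.cmds
```

Running the updates as one batch keeps the forward and reverse records consistent, because nsupdate submits everything before the `send` as a single transaction.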