This section describes how data replication was performed for the example configuration, using the Sun StorEdge Availability Suite 3.1 software sndradm and iiadm commands. For more information about these commands, see the Sun Cluster 3.0 and Sun StorEdge Software Integration Guide.
In this procedure, the master volume of the primary disk is replicated to the master volume on the secondary disk. The master volume is volume 1 and the remote mirror bitmap volume is volume 4.
Access nodeA as superuser.
Verify that the cluster is in logging mode.
nodeA# /usr/opt/SUNWesm/sbin/sndradm -P
The output should look like this:
/dev/vx/rdsk/devicegroup/vol01 ->
lhost-reprg-sec:/dev/vx/rdsk/devicegroup/vol01
autosync: off, max q writes:4194304, max q fbas:16384, mode:sync,ctag:
devicegroup, state: logging
In logging mode, the state is logging, and the active state of autosynchronization is off. When the data volume on the disk is written to, the bitmap file on the same disk is updated.
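The state check described above can be scripted. The following is a minimal sketch, not part of the original procedure: it parses the state field out of sndradm -P output. The sample text assigned to OUTPUT below is the example output from this section; in practice you would capture the live output, for example with OUTPUT=$(/usr/opt/SUNWesm/sbin/sndradm -P).

```shell
# Sample sndradm -P output (the example from this section).
OUTPUT='/dev/vx/rdsk/devicegroup/vol01 -> lhost-reprg-sec:/dev/vx/rdsk/devicegroup/vol01
autosync: off, max q writes:4194304, max q fbas:16384, mode:sync,ctag: devicegroup, state: logging'

# Extract the value that follows "state: " on the status line.
state=$(printf '%s\n' "$OUTPUT" | sed -n 's/.*state: //p')

if [ "$state" = "logging" ]; then
    echo "remote mirror set is in logging mode"
else
    echo "unexpected state: $state" >&2
fi
```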
Flush all transactions.
nodeA# /usr/sbin/lockfs -a -f
Copy the master volume of nodeA to the master volume of nodeC.
nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -m lhost-reprg-prim \
/dev/vx/rdsk/devicegroup/vol01 \
/dev/vx/rdsk/devicegroup/vol04 lhost-reprg-sec \
/dev/vx/rdsk/devicegroup/vol01 \
/dev/vx/rdsk/devicegroup/vol04 ip sync
Wait until the replication is complete and the volumes are synchronized.
nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -w lhost-reprg-prim \
/dev/vx/rdsk/devicegroup/vol01 \
/dev/vx/rdsk/devicegroup/vol04 lhost-reprg-sec \
/dev/vx/rdsk/devicegroup/vol01 \
/dev/vx/rdsk/devicegroup/vol04 ip sync
Confirm that the cluster is in replicating mode.
nodeA# /usr/opt/SUNWesm/sbin/sndradm -P
The output should look like this:
/dev/vx/rdsk/devicegroup/vol01 ->
lhost-reprg-sec:/dev/vx/rdsk/devicegroup/vol01
autosync: on, max q writes:4194304, max q fbas:16384, mode:sync,ctag:
devicegroup, state: replicating
In replicating mode, the state is replicating, and the active state of autosynchronization is on. When the primary volume is written to, the secondary volume is updated by Sun StorEdge Availability Suite 3.1 software.
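The confirmation step above checks two fields at once. The following is a minimal sketch, not part of the original procedure: it verifies both the state and the autosync flag from sndradm -P output. The sample text assigned to OUTPUT is the example output from this section; in practice you would capture the live output with /usr/opt/SUNWesm/sbin/sndradm -P.

```shell
# Sample sndradm -P output (the example from this section).
OUTPUT='/dev/vx/rdsk/devicegroup/vol01 -> lhost-reprg-sec:/dev/vx/rdsk/devicegroup/vol01
autosync: on, max q writes:4194304, max q fbas:16384, mode:sync,ctag: devicegroup, state: replicating'

# Extract the state and autosync values from the status line.
state=$(printf '%s\n' "$OUTPUT" | sed -n 's/.*state: //p')
autosync=$(printf '%s\n' "$OUTPUT" | sed -n 's/.*autosync: \([a-z]*\).*/\1/p')

if [ "$state" = "replicating" ] && [ "$autosync" = "on" ]; then
    echo "set is replicating with autosynchronization enabled"
else
    echo "not replicating: state=$state autosync=$autosync" >&2
fi
```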
In this procedure, point-in-time snapshot was used to synchronize the shadow volume of the primary cluster to the master volume of the primary cluster. The master volume is volume 1 and the shadow volume is volume 2.
Access nodeA as superuser.
Quiesce the application that is running on nodeA.
nodeA# /usr/cluster/bin/scswitch -n -j nfs-rs
Put the primary cluster into logging mode.
nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -l lhost-reprg-prim \
/dev/vx/rdsk/devicegroup/vol01 \
/dev/vx/rdsk/devicegroup/vol04 lhost-reprg-sec \
/dev/vx/rdsk/devicegroup/vol01 \
/dev/vx/rdsk/devicegroup/vol04 ip sync
When the data volume on the disk is written to, the bitmap file on the same disk is updated. No replication occurs.
Synchronize the shadow volume of the primary cluster to the master volume of the primary cluster.
nodeA# /usr/opt/SUNWesm/sbin/iiadm -u s /dev/vx/rdsk/devicegroup/vol02
nodeA# /usr/opt/SUNWesm/sbin/iiadm -w /dev/vx/rdsk/devicegroup/vol02
Synchronize the shadow volume of the secondary cluster to the master volume of the secondary cluster.
nodeC# /usr/opt/SUNWesm/sbin/iiadm -u s /dev/vx/rdsk/devicegroup/vol02
nodeC# /usr/opt/SUNWesm/sbin/iiadm -w /dev/vx/rdsk/devicegroup/vol02
Restart the application on nodeA.
nodeA# /usr/cluster/bin/scswitch -e -j nfs-rs
Resynchronize the secondary volume with the primary volume.
nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -u lhost-reprg-prim \
/dev/vx/rdsk/devicegroup/vol01 \
/dev/vx/rdsk/devicegroup/vol04 lhost-reprg-sec \
/dev/vx/rdsk/devicegroup/vol01 \
/dev/vx/rdsk/devicegroup/vol04 ip sync