Sun Cluster Software Installation Guide for Solaris OS

Example of How to Perform Data Replication

This section describes how data replication is performed for the example configuration, using the Sun StorEdge Availability Suite software commands sndradm and iiadm. For more information about these commands, see the Sun Cluster 3.0 and Sun StorEdge Software Integration Guide.

This section contains the following procedures:

  * How to Perform a Remote Mirror Replication
  * How to Perform a Point-in-Time Snapshot
  * How to Verify That Replication Is Configured Correctly

How to Perform a Remote Mirror Replication

In this procedure, the master volume of the primary disk is replicated to the master volume on the secondary disk. The master volume is vol01 and the remote mirror bitmap volume is vol04.

Steps
  1. Access nodeA as superuser.

  2. Verify that the cluster is in logging mode.


    nodeA# /usr/opt/SUNWesm/sbin/sndradm -P
    

    The output should resemble the following:


    /dev/vx/rdsk/devicegroup/vol01 ->
    lhost-reprg-sec:/dev/vx/rdsk/devicegroup/vol01
    autosync: off, max q writes:4194304, max q fbas:16384, mode:sync,ctag:
    devicegroup, state: logging

    In logging mode, the state is logging, and the active state of autosynchronization is off. When the data volume on the disk is written to, the bitmap file on the same disk is updated.
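
    If the output instead reports a state of replicating, place the volume set into logging mode before you continue. The command is the same logging command that is used in Step 3 of How to Perform a Point-in-Time Snapshot:

    nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -l lhost-reprg-prim \
    /dev/vx/rdsk/devicegroup/vol01 \
    /dev/vx/rdsk/devicegroup/vol04 lhost-reprg-sec \
    /dev/vx/rdsk/devicegroup/vol01 \
    /dev/vx/rdsk/devicegroup/vol04 ip sync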

  3. Flush all transactions.


    nodeA# /usr/sbin/lockfs -a -f
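
    The -a option applies the operation to all mounted UFS file systems, and the -f option forces all pending transactions to be flushed to disk.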
    
  4. Repeat Step 1 through Step 3 on nodeC.

  5. Copy the master volume of nodeA to the master volume of nodeC.


    nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -m lhost-reprg-prim \
    /dev/vx/rdsk/devicegroup/vol01 \
    /dev/vx/rdsk/devicegroup/vol04 lhost-reprg-sec \
    /dev/vx/rdsk/devicegroup/vol01 \
    /dev/vx/rdsk/devicegroup/vol04 ip sync
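
    The initial full copy can take some time on large volumes. While the copy is in progress, the status report for the volume set shows a state of syncing:

    nodeA# /usr/opt/SUNWesm/sbin/sndradm -P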
    
  6. Wait until the replication is complete and the volumes are synchronized.


    nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -w lhost-reprg-prim \
    /dev/vx/rdsk/devicegroup/vol01 \
    /dev/vx/rdsk/devicegroup/vol04 lhost-reprg-sec \
    /dev/vx/rdsk/devicegroup/vol01 \
    /dev/vx/rdsk/devicegroup/vol04 ip sync
    
  7. Confirm that the cluster is in replicating mode.


    nodeA# /usr/opt/SUNWesm/sbin/sndradm -P
    

    The output should resemble the following:


    /dev/vx/rdsk/devicegroup/vol01 ->
    lhost-reprg-sec:/dev/vx/rdsk/devicegroup/vol01
    autosync: on, max q writes:4194304, max q fbas:16384, mode:sync,ctag:
    devicegroup, state: replicating

    In replicating mode, the state is replicating, and the active state of autosynchronization is on. When the primary volume is written to, the secondary volume is updated by Sun StorEdge Availability Suite software.
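
    If autosync is reported as off, you can enable it for the volume set. The following command is a sketch that assumes the volume set used in this example; the -a option sets the autosynchronization state:

    nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -a on lhost-reprg-prim \
    /dev/vx/rdsk/devicegroup/vol01 \
    /dev/vx/rdsk/devicegroup/vol04 lhost-reprg-sec \
    /dev/vx/rdsk/devicegroup/vol01 \
    /dev/vx/rdsk/devicegroup/vol04 ip sync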

Next Steps

Go to How to Perform a Point-in-Time Snapshot.

How to Perform a Point-in-Time Snapshot

In this procedure, a point-in-time snapshot is used to synchronize the shadow volume of the primary cluster with the master volume of the primary cluster. The master volume is vol01, the bitmap volume is vol04, and the shadow volume is vol02.

Before You Begin

Ensure that you have completed the steps in How to Perform a Remote Mirror Replication.
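
You can also confirm that the point-in-time volume set is configured. The following sketch assumes that the set is identified by its shadow volume, vol02, as in the example configuration; the -i option of iiadm displays the status of a set:

    nodeA# /usr/opt/SUNWesm/sbin/iiadm -i /dev/vx/rdsk/devicegroup/vol02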

Steps
  1. Access nodeA as superuser.

  2. Disable the resource that is running on nodeA.


    nodeA# /usr/cluster/bin/scswitch -n -j nfs-rs
    
  3. Change the primary cluster to logging mode.


    nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -l lhost-reprg-prim \
    /dev/vx/rdsk/devicegroup/vol01 \
    /dev/vx/rdsk/devicegroup/vol04 lhost-reprg-sec \
    /dev/vx/rdsk/devicegroup/vol01 \
    /dev/vx/rdsk/devicegroup/vol04 ip sync
    

    When the data volume on the disk is written to, the bitmap file on the same disk is updated. No replication occurs.
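
    You can confirm the mode change with the status command that was used in How to Perform a Remote Mirror Replication; the state field of the output should now read logging:

    nodeA# /usr/opt/SUNWesm/sbin/sndradm -P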

  4. Synchronize the shadow volume of the primary cluster with the master volume of the primary cluster.


    nodeA# /usr/opt/SUNWesm/sbin/iiadm -u s /dev/vx/rdsk/devicegroup/vol02
    nodeA# /usr/opt/SUNWesm/sbin/iiadm -w /dev/vx/rdsk/devicegroup/vol02
    
  5. Synchronize the shadow volume of the secondary cluster with the master volume of the secondary cluster.


    nodeC# /usr/opt/SUNWesm/sbin/iiadm -u s /dev/vx/rdsk/devicegroup/vol02
    nodeC# /usr/opt/SUNWesm/sbin/iiadm -w /dev/vx/rdsk/devicegroup/vol02
    
  6. Restart the application on nodeA.


    nodeA# /usr/cluster/bin/scswitch -e -j nfs-rs
    
  7. Resynchronize the secondary volume with the primary volume.


    nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -u lhost-reprg-prim \
    /dev/vx/rdsk/devicegroup/vol01 \
    /dev/vx/rdsk/devicegroup/vol04 lhost-reprg-sec \
    /dev/vx/rdsk/devicegroup/vol01 \
    /dev/vx/rdsk/devicegroup/vol04 ip sync
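
    The -u option resynchronizes only the blocks that were flagged in the bitmap volume while the cluster was in logging mode. To block until the resynchronization is complete, you can reuse the wait form of the command from How to Perform a Remote Mirror Replication:

    nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -w lhost-reprg-prim \
    /dev/vx/rdsk/devicegroup/vol01 \
    /dev/vx/rdsk/devicegroup/vol04 lhost-reprg-sec \
    /dev/vx/rdsk/devicegroup/vol01 \
    /dev/vx/rdsk/devicegroup/vol04 ip sync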
    
Next Steps

Go to How to Verify That Replication Is Configured Correctly.

How to Verify That Replication Is Configured Correctly

Before You Begin

Ensure that you have completed the steps in How to Perform a Point-in-Time Snapshot.

Steps
  1. Verify that the primary cluster is in replicating mode, with autosynchronization on.


    nodeA# /usr/opt/SUNWesm/sbin/sndradm -P
    

    The output should resemble the following:


    /dev/vx/rdsk/devicegroup/vol01 ->
    lhost-reprg-sec:/dev/vx/rdsk/devicegroup/vol01
    autosync: on, max q writes:4194304, max q fbas:16384, mode:sync,ctag:
    devicegroup, state: replicating

    In replicating mode, the state is replicating, and the active state of autosynchronization is on. When the primary volume is written to, the secondary volume is updated by Sun StorEdge Availability Suite software.

  2. If the primary cluster is not in replicating mode, put it into replicating mode.


    nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -u lhost-reprg-prim \
    /dev/vx/rdsk/devicegroup/vol01 \
    /dev/vx/rdsk/devicegroup/vol04 lhost-reprg-sec \
    /dev/vx/rdsk/devicegroup/vol01 \
    /dev/vx/rdsk/devicegroup/vol04 ip sync
    
  3. Create a directory on a client machine.

    1. Log in to a client machine as superuser.

      You see a prompt that resembles the following:


      client-machine#
    2. Create a directory on the client machine.


      client-machine# mkdir /dir
      
  4. Mount the application file system from the primary cluster on the client directory, and display the mounted directory.

    1. Mount the application file system from the primary cluster on the directory that you created in Step 3.


      client-machine# mount -o rw lhost-nfsrg-prim:/global/mountpoint /dir
      
    2. Display the mounted directory.


      client-machine# ls /dir
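
    To make the comparison in Step 6 concrete, you can also create a marker file in the mounted directory. The file name replication-test is a hypothetical example:

      client-machine# touch /dir/replication-test
      client-machine# ls /dir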
      
  5. Mount the application file system from the secondary cluster, and display the mounted directory.

    1. Unmount the application file system from the primary cluster.


      client-machine# umount /dir
      
    2. Take the application resource group offline on the primary cluster.


      nodeA# /usr/cluster/bin/scswitch -n -j nfs-rs
      nodeA# /usr/cluster/bin/scswitch -n -j nfs-dg-rs
      nodeA# /usr/cluster/bin/scswitch -n -j lhost-nfsrg-prim
      nodeA# /usr/cluster/bin/scswitch -z -g nfs-rg -h ""
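
      You can verify that the resources are offline by checking the resource group status; the -g option of scstat reports the status of resource groups and resources:

      nodeA# /usr/cluster/bin/scstat -g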
      
    3. Change the primary cluster to logging mode.


      nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -l lhost-reprg-prim \
      /dev/vx/rdsk/devicegroup/vol01 \
      /dev/vx/rdsk/devicegroup/vol04 lhost-reprg-sec \
      /dev/vx/rdsk/devicegroup/vol01 \
      /dev/vx/rdsk/devicegroup/vol04 ip sync
      

      When the data volume on the disk is written to, the bitmap file on the same disk is updated. No replication occurs.

    4. Ensure that the file system that contains the PathPrefix directory is mounted.


      nodeC# mount | grep /global/etc
      
    5. Bring the application resource group online on the secondary cluster.


      nodeC# /usr/cluster/bin/scswitch -Z -g nfs-rg
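
      Before you mount the file system from the client machine, you can confirm that the resource group is online on the secondary cluster:

      nodeC# /usr/cluster/bin/scstat -g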
      
    6. Access the client machine as superuser.

      You see a prompt that resembles the following:


      client-machine#
    7. Mount the application file system from the secondary cluster on the directory that was created in Step 3.


      client-machine# mount -o rw lhost-nfsrg-sec:/global/mountpoint /dir
      
    8. Display the mounted directory.


      client-machine# ls /dir
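
      If you created the hypothetical marker file in Step 4, it should appear in this listing, which shows that the data was replicated to the secondary cluster.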
      
  6. Ensure that the directory displayed in Step 4 is the same as that displayed in Step 5.

  7. Return the application resource group to the primary cluster.

    1. Take the application resource group offline on the secondary cluster.


      nodeC# /usr/cluster/bin/scswitch -n -j nfs-rs
      nodeC# /usr/cluster/bin/scswitch -n -j nfs-dg-rs
      nodeC# /usr/cluster/bin/scswitch -n -j lhost-nfsrg-sec
      nodeC# /usr/cluster/bin/scswitch -z -g nfs-rg -h ""
      
    2. Ensure that the global volume is unmounted from the secondary cluster.


      nodeC# umount /global/mountpoint
      
    3. Bring the application resource group online on the primary cluster.


      nodeA# /usr/cluster/bin/scswitch -Z -g nfs-rg
      
    4. Change the primary cluster to replicating mode.


      nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -u lhost-reprg-prim \
      /dev/vx/rdsk/devicegroup/vol01 \
      /dev/vx/rdsk/devicegroup/vol04 lhost-reprg-sec \
      /dev/vx/rdsk/devicegroup/vol01 \
      /dev/vx/rdsk/devicegroup/vol04 ip sync
      

      When the primary volume is written to, the secondary volume is updated by Sun StorEdge Availability Suite software.
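
      As a final check, you can confirm that the volume set has returned to replicating mode; the state field of the output should read replicating, as in Step 1:

      nodeA# /usr/opt/SUNWesm/sbin/sndradm -P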

See Also

Example of How to Manage a Failover or Switchover