
Example of How to Verify That Replication Is Configured Correctly

This section describes how to confirm that replication is configured correctly in the example configuration.

How to Verify That Replication Is Configured Correctly
  1. Verify that the primary cluster is in replicating mode, with autosynchronization on.


    nodeA# /usr/opt/SUNWesm/sbin/sndradm -P
    

    The output should look like this:


    /dev/vx/rdsk/devicegroup/vol01 ->
    lhost-reprg-sec:/dev/vx/rdsk/devicegroup/vol01
    autosync: on, max q writes:4194304, max q fbas:16384, mode:sync,ctag:
    devicegroup, state: replicating

    In replicating mode, the state is replicating, and the active state of autosynchronization is on. When the primary volume is written to, the secondary volume is updated by Sun StorEdge Availability Suite software.

    If the primary cluster is not in replicating mode, put it into replicating mode, as follows:


    nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -u lhost-reprg-prim \
    /dev/vx/rdsk/devicegroup/vol01 \
    /dev/vx/rdsk/devicegroup/vol04 lhost-reprg-sec \
    /dev/vx/rdsk/devicegroup/vol01 \
    /dev/vx/rdsk/devicegroup/vol04 ip sync
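
    You can also script this check. The following is a minimal sketch that assumes the output format shown above; the grep command prints the matching line and returns a nonzero exit status if the volume set is not in replicating mode.

    nodeA# /usr/opt/SUNWesm/sbin/sndradm -P | grep "state: replicating"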
    
  2. Make a directory on a client machine.

    1. Log in to a client machine as superuser.

      You see a prompt like this:


      client-machine#
    2. Make a directory on the client machine.


      client-machine# mkdir /dir
      
  3. Mount the file system that the application exports from the primary cluster on the client directory, and display the mounted directory.

    1. Mount the file system that the application exports from the primary cluster on the client directory.


      client-machine# mount -o rw lhost-nfsrg-prim:/global/mountpoint /dir
      
    2. Display the mounted directory.


      client-machine# ls /dir
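
      To simplify the comparison in Step 5, you can also save this listing to a file. The file name /tmp/dir-prim.out is only an example.

      client-machine# ls /dir > /tmp/dir-prim.out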
      
  4. Mount the file system that the application exports from the secondary cluster on the client directory, and display the mounted directory.

    1. Unmount the file system that was mounted from the primary cluster.


      client-machine# umount /dir
      
    2. Take the application resource group offline on the primary cluster.


      nodeA# /usr/cluster/bin/scswitch -n -j nfs-rs
      nodeA# /usr/cluster/bin/scswitch -n -j nfs-dg-rs
      nodeA# /usr/cluster/bin/scswitch -n -j lhost-nfsrg-prim
      nodeA# /usr/cluster/bin/scswitch -z -g nfs-rg -h ""
      
    3. Put the primary cluster into logging mode.


      nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -l lhost-reprg-prim \
      /dev/vx/rdsk/devicegroup/vol01 \
      /dev/vx/rdsk/devicegroup/vol04 lhost-reprg-sec \
      /dev/vx/rdsk/devicegroup/vol01 \
      /dev/vx/rdsk/devicegroup/vol04 ip sync
      

      When the data volume on the disk is written to, the bitmap file on the same disk is updated. No replication occurs.
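
      You can confirm that the volume set is in logging mode by rerunning the status command from Step 1. This check is a sketch that assumes the state is reported as "logging" in the same output format.

      nodeA# /usr/opt/SUNWesm/sbin/sndradm -P | grep "state: logging"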

    4. Bring the application resource group online on the secondary cluster.


      nodeC# /usr/cluster/bin/scswitch -Z -g nfs-rg
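
      You can verify that the resource group is online on the secondary cluster by checking the resource group status. Review the output to confirm that nfs-rg is online on nodeC.

      nodeC# /usr/cluster/bin/scstat -g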
      
    5. Log in to the client machine as superuser.

      You see a prompt like this:


      client-machine#
    6. Mount the file system that the application exports from the secondary cluster on the directory that was created in Step 2.


      client-machine# mount -o rw lhost-nfsrg-sec:/global/mountpoint /dir
      
    7. Display the mounted directory.


      client-machine# ls /dir
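
      You can save this listing to a file as well, again using an example file name, for the comparison in Step 5.

      client-machine# ls /dir > /tmp/dir-sec.out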
      
  5. Ensure that the directory listing displayed in Step 3 is the same as the listing displayed in Step 4.
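
    If you saved the listings to files as suggested in Step 3 and Step 4, a simple comparison reports any differences. The file names below are the example names used in those steps; no output means the listings match.

    client-machine# diff /tmp/dir-prim.out /tmp/dir-sec.out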

  6. Return the application to the primary cluster.

    1. Take the application resource group offline on the secondary cluster.


      nodeC# /usr/cluster/bin/scswitch -n -j nfs-rs
      nodeC# /usr/cluster/bin/scswitch -n -j nfs-dg-rs
      nodeC# /usr/cluster/bin/scswitch -n -j lhost-nfsrg-sec
      nodeC# /usr/cluster/bin/scswitch -z -g nfs-rg -h ""
      
    2. Ensure that the global volume is unmounted from the secondary cluster.


      nodeC# umount /global/mountpoint
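
      To confirm that the volume is no longer mounted, you can check the mount table. This sketch assumes the mount point shown above; the command produces no output if the file system is unmounted.

      nodeC# mount | grep /global/mountpoint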
      
    3. Bring the application resource group online on the primary cluster.


      nodeA# /usr/cluster/bin/scswitch -Z -g nfs-rg
      
    4. Put the primary cluster into replicating mode.


      nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -u lhost-reprg-prim \
      /dev/vx/rdsk/devicegroup/vol01 \
      /dev/vx/rdsk/devicegroup/vol04 lhost-reprg-sec \
      /dev/vx/rdsk/devicegroup/vol01 \
      /dev/vx/rdsk/devicegroup/vol04 ip sync
      

      When the primary volume is written to, the secondary volume is updated by Sun StorEdge Availability Suite software.
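
      To confirm that replication has resumed, you can repeat the check from Step 1. The grep form shown there prints output only when the volume set has returned to the replicating state.

      nodeA# /usr/opt/SUNWesm/sbin/sndradm -P | grep "state: replicating"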