Example of How to Manage a Failover or Switchover

This section describes how to provoke a switchover and how to transfer the application to the secondary cluster. After a switchover or failover, you must update the DNS entry and configure the application to read and write to the secondary volume.

How to Provoke a Switchover
  1. Put the primary cluster into logging mode.


    nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -l lhost-reprg-prim \
    /dev/vx/rdsk/devicegroup/vol01 \
    /dev/vx/rdsk/devicegroup/vol04 lhost-reprg-sec \
    /dev/vx/rdsk/devicegroup/vol01 \
    /dev/vx/rdsk/devicegroup/vol04 ip sync
    

    When the data volume on the disk is written to, the bitmap file on the same disk is updated. No replication occurs.

  2. Confirm that the primary cluster and the secondary cluster are in logging mode, with autosynchronization off.

    1. On nodeA, run this command:


      nodeA# /usr/opt/SUNWesm/sbin/sndradm -P
      

      The output should look like this:


      /dev/vx/rdsk/devicegroup/vol01 ->
      lhost-reprg-sec:/dev/vx/rdsk/devicegroup/vol01
      autosync:off, max q writes:4194304,max q fbas:16384,mode:sync,ctag:
      devicegroup, state: logging
    2. On nodeC, run this command:


      nodeC# /usr/opt/SUNWesm/sbin/sndradm -P
      

      The output should look like this:


      /dev/vx/rdsk/devicegroup/vol01 <-
      lhost-reprg-prim:/dev/vx/rdsk/devicegroup/vol01
      autosync:off, max q writes:4194304,max q fbas:16384,mode:sync,ctag:
      devicegroup, state: logging

    On both nodeA and nodeC, the state should be logging, and autosynchronization (autosync) should be off.
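
    If you prefer a quick scripted check, you can filter the status output for just these two fields. This is a sketch; the exact layout of the sndradm -P output can vary between Sun StorEdge Availability Suite releases.


      nodeA# /usr/opt/SUNWesm/sbin/sndradm -P | egrep 'autosync|state:'
      nodeC# /usr/opt/SUNWesm/sbin/sndradm -P | egrep 'autosync|state:'

    On both nodes, the filtered output should show autosync:off and state: logging.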

  3. Confirm that the secondary cluster is ready to take over from the primary cluster.


    nodeC# /usr/sbin/fsck -y /dev/vx/rdsk/devicegroup/vol01
    
  4. Switch over to the secondary cluster.


    nodeC# scswitch -Z -g nfs-rg
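
    As an optional check that is not part of the original procedure, you can verify that the switchover succeeded by examining the resource group status on the secondary cluster:


    nodeC# scstat -g

    The output should report the nfs-rg resource group as online on nodeC.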
    
How to Update the DNS Entry

For an illustration of how the DNS maps a client to a cluster, see Figure 6–6.

  1. Start the nsupdate command.

    For information, see the nsupdate(1M) man page.
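
    For example, running nsupdate with no arguments opens an interactive session; the commands in the next two steps are entered at its > prompt. The host name admin-host below is a placeholder; run nsupdate on any host that is permitted to send dynamic updates to the DNS server. Some servers also require authentication options such as a TSIG key.


    admin-host# nsupdate
    >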

  2. Remove the current DNS mapping between the client machine and the logical hostname of the application resource group on the primary cluster.


    > update delete client-machine A
    > update delete IPaddress1.in-addr.arpa TTL PTR client-machine
    
    client-machine

    Is the full name of the client. For example, mymachine.mycompany.com.

    IPaddress1

    Is the IP address of the logical hostname lhost-nfsrg-prim, in reverse order.

    TTL

    Is the time to live, in seconds. A typical value is 3600.

  3. Create the new DNS mapping between the client machine and the logical hostname of the application resource group on the secondary cluster.


    > update add client-machine TTL A IPaddress2
    > update add IPaddress3.in-addr.arpa TTL PTR client-machine
    
    IPaddress2

    Is the IP address of the logical hostname lhost-nfsrg-sec, in forward order.

    IPaddress3

    Is the IP address of the logical hostname lhost-nfsrg-sec, in reverse order.
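
Taken together, Step 2 and Step 3 might look like the following session. The values are hypothetical: mymachine.mycompany.com is the client, 192.168.3.5 stands in for the address of lhost-nfsrg-prim, and 192.168.6.5 for the address of lhost-nfsrg-sec, so the reversed forms are 5.3.168.192 and 5.6.168.192. Each send submits the batched changes for one zone, because the forward (A) and reverse (PTR) records live in different zones.


    > update delete mymachine.mycompany.com A
    > send
    > update delete 5.3.168.192.in-addr.arpa 3600 PTR mymachine.mycompany.com
    > send
    > update add mymachine.mycompany.com 3600 A 192.168.6.5
    > send
    > update add 5.6.168.192.in-addr.arpa 3600 PTR mymachine.mycompany.com
    > send

End the session by pressing Control-D. You can then confirm the new mapping with a lookup such as nslookup mymachine.mycompany.com.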

How to Configure the Application to Read and Write to the Secondary Volume
  1. Mount the secondary volume on the mount point directory for the NFS file system.


    client-machine# mount -o rw lhost-nfsrg-sec:/global/mountpoint /xxx
    

    The mount point was created in Step 1 of How to Configure the File System on the Primary Cluster for the NFS Application.
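
    Optionally, you can confirm that the mount succeeded before testing write access. This check is not part of the original procedure.


    client-machine# df -k /xxx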

  2. Confirm that the client machine can write to the secondary volume.


    client-machine# touch /xxx/data.1
    client-machine# umount /xxx
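
    If you want further confirmation that the write reached the secondary volume, a final optional check, not part of the original procedure, is to remount the file system and list the test file:


    client-machine# mount -o rw lhost-nfsrg-sec:/global/mountpoint /xxx
    client-machine# ls -l /xxx/data.1
    client-machine# umount /xxx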