Sun Cluster Software Installation Guide for Solaris OS

Chapter 6 Configuring Data Replication With Sun StorEdge Availability Suite Software

This chapter provides guidelines for configuring data replication between clusters by using Sun StorEdge Availability Suite 3.1 or 3.2 software.

This chapter also contains an example of how data replication was configured for an NFS application by using Sun StorEdge Availability Suite software. This example uses a specific cluster configuration and provides detailed information about how individual tasks can be performed. It does not include all of the steps that are required by other applications or other cluster configurations.

This chapter contains the following sections:

Introduction to Data Replication

This section introduces disaster tolerance and describes the data replication methods that Sun StorEdge Availability Suite software uses.

What Is Disaster Tolerance?

Disaster tolerance is the ability of a system to restore an application on an alternate cluster when the primary cluster fails. Disaster tolerance is based on data replication and failover.

Data replication is the copying of data from a primary cluster to a backup or secondary cluster. Through data replication, the secondary cluster has an up-to-date copy of the data on the primary cluster. The secondary cluster can be located far away from the primary cluster.

Failover is the automatic relocation of a resource group or device group from a primary cluster to a secondary cluster. If the primary cluster fails, the application and the data are immediately available on the secondary cluster.

Data Replication Methods Used by Sun StorEdge Availability Suite Software

This section describes the remote mirror replication method and the point-in-time snapshot method used by Sun StorEdge Availability Suite software. This software uses the sndradm(1RPC) and iiadm(1II) commands to replicate data. For more information about these commands, see one of the following manuals:

Remote Mirror Replication

Remote mirror replication is illustrated in Figure 6–1. Data from the master volume of the primary disk is replicated to the master volume of the secondary disk through a TCP/IP connection. A remote mirror bitmap tracks differences between the master volume on the primary disk and the master volume on the secondary disk.

Figure 6–1 Remote Mirror Replication

Figure illustrates remote mirror replication from the master volume of the primary disk to the master volume of the secondary disk.

Remote mirror replication can be performed synchronously in real time, or asynchronously. Each volume set in each cluster can be configured individually for synchronous replication or asynchronous replication.
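The replication method is chosen per volume set when the set is enabled: the trailing argument of the sndradm volume-set definition selects the mode. The following sketch uses placeholder names (phost/shost for the primary and secondary logical hostnames, pvol/svol for the master volumes, pbmp/sbmp for the bitmap volumes); the chapter's concrete sndradm commands appear in the replication procedures later.

```shell
# Sketch with placeholder names only -- these commands run only on a
# configured cluster with Sun StorEdge Availability Suite installed.
# The last argument picks the replication mode for this volume set.
sndradm -n -e phost pvol pbmp shost svol sbmp ip sync    # synchronous
sndradm -n -e phost pvol pbmp shost svol sbmb ip async   # asynchronous
```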

Point-in-Time Snapshot

Point-in-time snapshot is illustrated in Figure 6–2. Data from the master volume of each disk is copied to the shadow volume on the same disk. The point-in-time bitmap tracks differences between the master volume and the shadow volume. When data is copied to the shadow volume, the point-in-time bitmap is reset.

Figure 6–2 Point-in-Time Snapshot

Figure shows point-in-time snapshot.
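The bitmap's role can be illustrated with a small conceptual sketch. This is plain shell, not Sun StorEdge Availability Suite code: writes to the master volume set bits in the point-in-time bitmap, and updating the shadow volume clears them.

```shell
# Conceptual sketch only -- models how a point-in-time bitmap tracks
# changed chunks of the master volume and is reset when the shadow
# volume is updated.
bitmap="00000000"                  # one bit per chunk; 0 = clean, 1 = changed

mark_dirty() {                     # a write to chunk $1 sets its bit
  bitmap=$(printf '%s' "$bitmap" | sed "s/./1/$(( $1 + 1 ))")
}

update_snapshot() {                # copy the dirty chunks to the shadow
  bitmap="00000000"                # volume, then reset the bitmap
}

mark_dirty 2
mark_dirty 5
echo "$bitmap"                     # 00100100
update_snapshot
echo "$bitmap"                     # 00000000
```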

Replication in the Example Configuration

The following figure illustrates how remote mirror replication and point-in-time snapshot are used in this example configuration.

Figure 6–3 Replication in the Example Configuration

Figure shows how remote mirror replication and point-in-time snapshot are used by the configuration example.

Guidelines for Configuring Data Replication

This section provides guidelines for configuring data replication between clusters. This section also contains tips for configuring replication resource groups and application resource groups. Use these guidelines when you are configuring data replication for your cluster.

This section discusses the following topics:

Configuring Replication Resource Groups

Replication resource groups colocate the device group under Sun StorEdge Availability Suite software control with the logical hostname resource. A replication resource group must have the following characteristics:

Configuring Application Resource Groups

To be highly available, an application must be managed as a resource in an application resource group. An application resource group can be configured for a failover application or a scalable application.

Application resources and application resource groups configured on the primary cluster must also be configured on the secondary cluster. Also, the data accessed by the application resource must be replicated to the secondary cluster.

This section provides guidelines for configuring the following application resource groups:

Configuring Resource Groups for a Failover Application

In a failover application, an application runs on one node at a time. If that node fails, the application fails over to another node in the same cluster. A resource group for a failover application must have the following characteristics:

The following figure illustrates the configuration of an application resource group and a replication resource group in a failover application.

Figure 6–4 Configuration of Resource Groups in a Failover Application

Figure illustrates the configuration of an application resource group and a replication resource group in a failover application.

Configuring Resource Groups for a Scalable Application

In a scalable application, an application runs on several nodes to create a single, logical service. If a node that is running a scalable application fails, failover does not occur. The application continues to run on the other nodes.

When a scalable application is managed as a resource in an application resource group, it is not necessary to colocate the application resource group with the device group. Therefore, it is not necessary to create an HAStoragePlus resource for the application resource group.

A resource group for a scalable application must have the following characteristics:

The following figure illustrates the configuration of resource groups in a scalable application.

Figure 6–5 Configuration of Resource Groups in a Scalable Application

Figure illustrates the configuration of resource groups in a scalable application.

Guidelines for Managing a Failover or Switchover

If the primary cluster fails, the application must be switched over to the secondary cluster as soon as possible. To enable the secondary cluster to take over, the DNS must be updated.

The DNS associates a client with the logical hostname of an application. After a failover or switchover, the DNS mapping to the primary cluster must be removed, and a DNS mapping to the secondary cluster must be created. The following figure shows how the DNS maps a client to a cluster.

Figure 6–6 DNS Mapping of a Client to a Cluster

Figure shows how the DNS maps a client to a cluster.

To update the DNS, use the nsupdate command. For information, see the nsupdate(1M) man page. For an example of how to manage a failover or switchover, see Example of How to Manage a Failover or Switchover.
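As a hedged illustration, an nsupdate session after a switchover replaces the DNS A record for the hostname that clients use so that it resolves to the secondary cluster's address. The domain, TTL, and IP address below are hypothetical placeholders, not values from this guide.

```shell
# Hypothetical sketch: remove the mapping to the primary cluster and add
# a mapping to the secondary cluster. Domain, TTL, and address are
# placeholders; requires update access to the DNS server for the zone.
nsupdate <<'EOF'
update delete lhost-nfsrg.example.com A
update add lhost-nfsrg.example.com 300 A 192.168.2.10
send
EOF
```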

After repair, the primary cluster can be brought back online. To switch back to the original primary cluster, perform the following tasks:

  1. Synchronize the primary cluster with the secondary cluster to ensure that the primary volume is up-to-date.

  2. Update the DNS so that clients can access the application on the primary cluster.

Task Map: Example of a Data Replication Configuration

The following task map lists the tasks in this example of how data replication was configured for an NFS application by using Sun StorEdge Availability Suite software.

Table 6–1 Task Map: Example of a Data Replication Configuration

1. Connect and install the clusters.
   Instructions: Connecting and Installing the Clusters

2. Configure disk device groups, file systems for the NFS application, and resource groups on the primary cluster and on the secondary cluster.
   Instructions: Example of How to Configure Device Groups and Resource Groups

3. Enable data replication on the primary cluster and on the secondary cluster.
   Instructions: How to Enable Replication on the Primary Cluster and How to Enable Replication on the Secondary Cluster

4. Perform data replication.
   Instructions: How to Perform a Remote Mirror Replication and How to Perform a Point-in-Time Snapshot

5. Verify the data replication configuration.
   Instructions: How to Verify That Replication Is Configured Correctly

Connecting and Installing the Clusters

Figure 6–7 illustrates the cluster configuration used in the example configuration. The secondary cluster in the example configuration contains one node, but other cluster configurations can be used.

Figure 6–7 Example Cluster Configuration

Figure illustrates the cluster configuration used in the example configuration.

Table 6–2 summarizes the hardware and software required by the example configuration. The Solaris OS, Sun Cluster software, and volume manager software must be installed on the cluster nodes before you install Sun StorEdge Availability Suite software and patches.

Table 6–2 Required Hardware and Software

Node hardware
  Sun StorEdge Availability Suite software is supported on all servers that use the Solaris OS.
  For information about which hardware to use, see the Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS.

Disk space
  Approximately 15 Mbytes.

Solaris OS
  Solaris OS releases that are supported by Sun Cluster software.
  All nodes must use the same version of the Solaris OS.
  For information about installation, see Installing the Software.

Sun Cluster software
  Sun Cluster 3.1 8/05 software.
  For information about installation, see Chapter 2, Installing and Configuring Sun Cluster Software.

Volume manager software
  Solstice DiskSuite or Solaris Volume Manager software, or VERITAS Volume Manager (VxVM) software.
  All nodes must use the same version of volume manager software.
  For information about installation, see Installing and Configuring Solstice DiskSuite or Solaris Volume Manager Software and SPARC: Installing and Configuring VxVM Software.

Sun StorEdge Availability Suite software
  For information about how to install the software, see the installation manuals for your release of Sun StorEdge Availability Suite software:

Sun StorEdge Availability Suite software patches
  For information about the latest patches, see http://www.sunsolve.com.

Example of How to Configure Device Groups and Resource Groups

This section describes how disk device groups and resource groups are configured for an NFS application. For additional information, see Configuring Replication Resource Groups and Configuring Application Resource Groups.

This section contains the following procedures:

The following table lists the names of the groups and resources that are created for the example configuration.

Table 6–3 Summary of the Groups and Resources in the Example Configuration

Disk device group
  devicegroup: The disk device group

Replication resource group and resources
  devicegroup-stor-rg: The replication resource group
  lhost-reprg-prim, lhost-reprg-sec: The logical hostnames for the replication resource group on the primary cluster and the secondary cluster
  devicegroup-stor: The HAStoragePlus resource for the replication resource group

Application resource group and resources
  nfs-rg: The application resource group
  lhost-nfsrg-prim, lhost-nfsrg-sec: The logical hostnames for the application resource group on the primary cluster and the secondary cluster
  nfs-dg-rs: The HAStoragePlus resource for the application
  nfs-rs: The NFS resource

With the exception of devicegroup-stor-rg, the names of the groups and resources are example names that can be changed as required. The replication resource group must have a name with the format devicegroup-stor-rg.
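Because only the -stor-rg suffix is fixed, the required replication resource group name can be derived mechanically from the device group name. The helper below is purely illustrative and is not part of Sun Cluster.

```shell
# Illustrative only: build the mandatory replication resource group name
# from a disk device group name. The -stor-rg suffix is required; the
# device group name itself is site-specific.
repl_rg_name() {
  printf '%s-stor-rg\n' "$1"
}

repl_rg_name devicegroup    # prints: devicegroup-stor-rg
```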

This example configuration uses VxVM software. For information about Solstice DiskSuite or Solaris Volume Manager software, see Chapter 3, Installing and Configuring Solstice DiskSuite or Solaris Volume Manager Software.

The following figure illustrates the volumes that are created in the disk device group.

Figure 6–8 Volumes for the Disk Device Group

Figure shows the volumes created in the disk device group.


Note –

The volumes defined in this procedure must not include disk-label private areas, for example, cylinder 0. The VxVM software manages this constraint automatically.


Procedure: How to Configure a Disk Device Group on the Primary Cluster

Before You Begin

Ensure that you have completed the following tasks:

Steps
  1. Access nodeA as superuser.

    nodeA is the first node of the primary cluster. For a reminder of which node is nodeA, see Figure 6–7.

  2. Create a disk group on nodeA that contains four volumes, vol01 through vol04.

    For information about configuring a disk group by using the VxVM software, see Chapter 4, SPARC: Installing and Configuring VERITAS Volume Manager.

  3. Configure the disk group to create a disk device group.


    nodeA# /usr/cluster/bin/scconf -a \
    -D type=vxvm,name=devicegroup,nodelist=nodeA:nodeB
    

    The disk device group is called devicegroup.

  4. Create the file system for the disk device group.


    nodeA# /usr/sbin/newfs /dev/vx/rdsk/devicegroup/vol01 < /dev/null
    nodeA# /usr/sbin/newfs /dev/vx/rdsk/devicegroup/vol02 < /dev/null
    

    No file system is needed for vol03 or vol04, which are instead used as raw volumes.

Next Steps

Go to How to Configure a Disk Device Group on the Secondary Cluster.

Procedure: How to Configure a Disk Device Group on the Secondary Cluster

Before You Begin

Ensure that you completed steps in How to Configure a Disk Device Group on the Primary Cluster.

Steps
  1. Access nodeC as superuser.

  2. Create a disk group on nodeC that contains four volumes, vol01 through vol04.

  3. Configure the disk group to create a disk device group.


    nodeC# /usr/cluster/bin/scconf -a \
    -D type=vxvm,name=devicegroup,nodelist=nodeC
    

    The disk device group is called devicegroup.

  4. Create the file system for the disk device group.


    nodeC# /usr/sbin/newfs /dev/vx/rdsk/devicegroup/vol01 < /dev/null
    nodeC# /usr/sbin/newfs /dev/vx/rdsk/devicegroup/vol02 < /dev/null
    

    No file system is needed for vol03 or vol04, which are instead used as raw volumes.

Next Steps

Go to How to Configure the File System on the Primary Cluster for the NFS Application.

Procedure: How to Configure the File System on the Primary Cluster for the NFS Application

Before You Begin

Ensure that you completed steps in How to Configure a Disk Device Group on the Secondary Cluster.

Steps
  1. On nodeA and nodeB, create a mount point directory for the NFS file system.

    For example:


    nodeA# mkdir /global/mountpoint
    
  2. On nodeA and nodeB, configure the master volume to be mounted automatically on the mount point.

    Add or replace the following text in the /etc/vfstab file on nodeA and nodeB. The text must be on a single line.


    /dev/vx/dsk/devicegroup/vol01 /dev/vx/rdsk/devicegroup/vol01 \
    /global/mountpoint ufs 3 no global,logging

    For a reminder of the volume names and volume numbers used in the disk device group, see Figure 6–8.

  3. On nodeA, create a volume for the file system information that is used by the Sun Cluster HA for NFS data service.


    nodeA# /usr/sbin/vxassist -g devicegroup make vol05 120m disk1
    

    Volume 5, vol05, contains the file system information that is used by the Sun Cluster HA for NFS data service.

  4. On nodeA, resynchronize the device group with the Sun Cluster software.


    nodeA# /usr/cluster/bin/scconf -c -D name=devicegroup,sync
    
  5. On nodeA, create the file system for vol05.


    nodeA# /usr/sbin/newfs /dev/vx/rdsk/devicegroup/vol05
    
  6. On nodeA and nodeB, create a mount point for vol05.

    For example:


    nodeA# mkdir /global/etc
    
  7. On nodeA and nodeB, configure vol05 to be mounted automatically on the mount point.

    Add or replace the following text in the /etc/vfstab file on nodeA and nodeB. The text must be on a single line.


    /dev/vx/dsk/devicegroup/vol05 /dev/vx/rdsk/devicegroup/vol05 \
    /global/etc ufs 3 yes global,logging
  8. Mount vol05 on nodeA.


    nodeA# mount /global/etc
    
  9. Make vol05 accessible to remote systems.

    1. Create a directory called /global/etc/SUNW.nfs on nodeA.


      nodeA# mkdir -p /global/etc/SUNW.nfs
      
    2. Create the file /global/etc/SUNW.nfs/dfstab.nfs-rs on nodeA.


      nodeA# touch /global/etc/SUNW.nfs/dfstab.nfs-rs
      
    3. Add the following line to the /global/etc/SUNW.nfs/dfstab.nfs-rs file on nodeA:


      share -F nfs -o rw -d "HA NFS" /global/mountpoint
      
Next Steps

Go to How to Configure the File System on the Secondary Cluster for the NFS Application.

Procedure: How to Configure the File System on the Secondary Cluster for the NFS Application

Before You Begin

Ensure that you completed steps in How to Configure the File System on the Primary Cluster for the NFS Application.

Steps
  1. On nodeC, create a mount point directory for the NFS file system.

    For example:


    nodeC# mkdir /global/mountpoint
    
  2. On nodeC, configure the master volume to be mounted automatically on the mount point.

    Add or replace the following text in the /etc/vfstab file on nodeC. The text must be on a single line.


    /dev/vx/dsk/devicegroup/vol01 /dev/vx/rdsk/devicegroup/vol01 \
    /global/mountpoint ufs 3 no global,logging
  3. On nodeC, create a volume for the file system information that is used by the Sun Cluster HA for NFS data service.


    nodeC# /usr/sbin/vxassist -g devicegroup make vol05 120m disk1
    

    Volume 5, vol05, contains the file system information that is used by the Sun Cluster HA for NFS data service.

  4. On nodeC, resynchronize the device group with the Sun Cluster software.


    nodeC# /usr/cluster/bin/scconf -c -D name=devicegroup,sync
    
  5. On nodeC, create the file system for vol05.


    nodeC# /usr/sbin/newfs /dev/vx/rdsk/devicegroup/vol05
    
  6. On nodeC, create a mount point for vol05.

    For example:


    nodeC# mkdir /global/etc
    
  7. On nodeC, configure vol05 to be mounted automatically on the mount point.

    Add or replace the following text in the /etc/vfstab file on nodeC. The text must be on a single line.


    /dev/vx/dsk/devicegroup/vol05 /dev/vx/rdsk/devicegroup/vol05 \
    /global/etc ufs 3 yes global,logging
  8. Mount vol05 on nodeC.


    nodeC# mount /global/etc
    
  9. Make vol05 accessible to remote systems.

    1. Create a directory called /global/etc/SUNW.nfs on nodeC.


      nodeC# mkdir -p /global/etc/SUNW.nfs
      
    2. Create the file /global/etc/SUNW.nfs/dfstab.nfs-rs on nodeC.


      nodeC# touch /global/etc/SUNW.nfs/dfstab.nfs-rs
      
    3. Add the following line to the /global/etc/SUNW.nfs/dfstab.nfs-rs file on nodeC:


      share -F nfs -o rw -d "HA NFS" /global/mountpoint
      
Next Steps

Go to How to Create a Replication Resource Group on the Primary Cluster.

Procedure: How to Create a Replication Resource Group on the Primary Cluster

Before You Begin

Ensure that you completed steps in How to Configure the File System on the Secondary Cluster for the NFS Application.

Steps
  1. Access nodeA as superuser.

  2. Register SUNW.HAStoragePlus as a resource type.


    nodeA# /usr/cluster/bin/scrgadm -a -t SUNW.HAStoragePlus
    
  3. Create a replication resource group for the disk device group.


    nodeA# /usr/cluster/bin/scrgadm -a -g devicegroup-stor-rg -h nodeA,nodeB
    
    devicegroup

    The name of the disk device group

    devicegroup-stor-rg

    The name of the replication resource group

    -h nodeA, nodeB

    Specifies the cluster nodes that can master the replication resource group

  4. Add a SUNW.HAStoragePlus resource to the replication resource group.


    nodeA# /usr/cluster/bin/scrgadm -a -j devicegroup-stor \
    -g devicegroup-stor-rg -t SUNW.HAStoragePlus \
    -x GlobalDevicePaths=devicegroup \
    -x AffinityOn=True
    
    devicegroup-stor

    The HAStoragePlus resource for the replication resource group.

    -x GlobalDevicePaths=

    Specifies the extension property that Sun StorEdge Availability Suite software relies on.

    -x AffinityOn=True

    Specifies that the SUNW.HAStoragePlus resource must perform an affinity switchover for the global devices and cluster file systems defined by -x GlobalDevicePaths=. Therefore, when the replication resource group fails over or is switched over, the associated device group is switched over.

    For more information about these extension properties, see the SUNW.HAStoragePlus(5) man page.

  5. Add a logical hostname resource to the replication resource group.


    nodeA# /usr/cluster/bin/scrgadm -a -L -j lhost-reprg-prim \
    -g devicegroup-stor-rg -l lhost-reprg-prim
    

    lhost-reprg-prim is the logical hostname for the replication resource group on the primary cluster.

  6. Enable the resources, manage the resource group, and bring the resource group online.


    nodeA# /usr/cluster/bin/scswitch -Z -g devicegroup-stor-rg
    nodeA# /usr/cluster/bin/scswitch -z -g devicegroup-stor-rg -h nodeA
    
  7. Verify that the resource group is online.


    nodeA# /usr/cluster/bin/scstat -g
    

    Examine the resource group state field to confirm that the replication resource group is online on nodeA.

Next Steps

Go to How to Create a Replication Resource Group on the Secondary Cluster.

Procedure: How to Create a Replication Resource Group on the Secondary Cluster

Before You Begin

Ensure that you completed steps in How to Create a Replication Resource Group on the Primary Cluster.

Steps
  1. Access nodeC as superuser.

  2. Register SUNW.HAStoragePlus as a resource type.


    nodeC# /usr/cluster/bin/scrgadm -a -t SUNW.HAStoragePlus
    
  3. Create a replication resource group for the disk device group.


    nodeC# /usr/cluster/bin/scrgadm -a -g devicegroup-stor-rg -h nodeC
    
    devicegroup

    The name of the disk device group

    devicegroup-stor-rg

    The name of the replication resource group

    -h nodeC

    Specifies the cluster node that can master the replication resource group

  4. Add a SUNW.HAStoragePlus resource to the replication resource group.


    nodeC# /usr/cluster/bin/scrgadm -a -j devicegroup-stor \
    -g devicegroup-stor-rg -t SUNW.HAStoragePlus \
    -x GlobalDevicePaths=devicegroup \
    -x AffinityOn=True
    
    devicegroup-stor

    The HAStoragePlus resource for the replication resource group.

    -x GlobalDevicePaths=

    Specifies the extension property that Sun StorEdge Availability Suite software relies on.

    -x AffinityOn=True

    Specifies that the SUNW.HAStoragePlus resource must perform an affinity switchover for the global devices and cluster file systems defined by -x GlobalDevicePaths=. Therefore, when the replication resource group fails over or is switched over, the associated device group is switched over.

    For more information about these extension properties, see the SUNW.HAStoragePlus(5) man page.

  5. Add a logical hostname resource to the replication resource group.


    nodeC# /usr/cluster/bin/scrgadm -a -L -j lhost-reprg-sec \
    -g devicegroup-stor-rg -l lhost-reprg-sec
    

    lhost-reprg-sec is the logical hostname for the replication resource group on the secondary cluster.

  6. Enable the resources, manage the resource group, and bring the resource group online.


    nodeC# /usr/cluster/bin/scswitch -Z -g devicegroup-stor-rg
    
  7. Verify that the resource group is online.


    nodeC# /usr/cluster/bin/scstat -g
    

    Examine the resource group state field to confirm that the replication resource group is online on nodeC.

Next Steps

Go to How to Create an NFS Application Resource Group on the Primary Cluster.

Procedure: How to Create an NFS Application Resource Group on the Primary Cluster

This procedure describes how application resource groups are created for NFS. This procedure is specific to this application and cannot be used for another type of application.

Before You Begin

Ensure that you completed steps in How to Create a Replication Resource Group on the Secondary Cluster.

Steps
  1. Access nodeA as superuser.

  2. Register SUNW.nfs as a resource type.


    nodeA# scrgadm -a -t SUNW.nfs
    
  3. If SUNW.HAStoragePlus has not been registered as a resource type, register it.


    nodeA# scrgadm -a -t SUNW.HAStoragePlus
    
  4. Create an application resource group for the devicegroup.


    nodeA# scrgadm -a -g nfs-rg \
    -y Pathprefix=/global/etc \
    -y Auto_start_on_new_cluster=False \
    -y RG_dependencies=devicegroup-stor-rg
    
    nfs-rg

    The name of the application resource group.

    Pathprefix=/global/etc

    Specifies a directory into which the resources in the group can write administrative files.

    Auto_start_on_new_cluster=False

    Specifies that the application resource group is not started automatically.

    RG_dependencies=devicegroup-stor-rg

    Specifies the resource groups that the application resource group depends on. In this example, the application resource group depends on the replication resource group.

    If the application resource group is switched over to a new primary node, the replication resource group is automatically switched over. However, if the replication resource group is switched over to a new primary node, the application resource group must be manually switched over.

  5. Add a SUNW.HAStoragePlus resource to the application resource group.


    nodeA# scrgadm -a -j nfs-dg-rs -g nfs-rg \
    -t SUNW.HAStoragePlus \
    -x FileSystemMountPoints=/global/mountpoint \
    -x AffinityOn=True
    
    nfs-dg-rs

    The name of the HAStoragePlus resource for the NFS application.

    -x FileSystemMountPoints=/global/mountpoint

    Specifies that the mount point for the file system is global.

    -t SUNW.HAStoragePlus

    Specifies that the resource is of the type SUNW.HAStoragePlus.

    -x AffinityOn=True

    Specifies that the application resource must perform an affinity switchover for the global devices and cluster file systems defined by -x FileSystemMountPoints=. Therefore, when the application resource group fails over or is switched over, the associated device group is switched over.

    For more information about these extension properties, see the SUNW.HAStoragePlus(5) man page.

  6. Add a logical hostname resource to the application resource group.


    nodeA# /usr/cluster/bin/scrgadm -a -L -j lhost-nfsrg-prim -g nfs-rg \
    -l lhost-nfsrg-prim
    

    lhost-nfsrg-prim is the logical hostname of the application resource group on the primary cluster.

  7. Enable the resources, manage the application resource group, and bring the application resource group online.

    1. Add an NFS resource to the application resource group.


      nodeA# /usr/cluster/bin/scrgadm -a -g nfs-rg \
      -j nfs-rs -t SUNW.nfs -y Resource_dependencies=nfs-dg-rs
      
    2. Bring the application resource group online on nodeA .


      nodeA# /usr/cluster/bin/scswitch -Z -g nfs-rg
      nodeA# /usr/cluster/bin/scswitch -z -g nfs-rg -h nodeA
      
  8. Verify that the application resource group is online.


    nodeA# /usr/cluster/bin/scstat -g
    

    Examine the resource group state field to confirm that the application resource group is online on nodeA.

Next Steps

Go to How to Create an NFS Application Resource Group on the Secondary Cluster.

Procedure: How to Create an NFS Application Resource Group on the Secondary Cluster

Before You Begin

Ensure that you completed steps in How to Create an NFS Application Resource Group on the Primary Cluster.

Steps
  1. Access nodeC as superuser.

  2. Register SUNW.nfs as a resource type.


    nodeC# scrgadm -a -t SUNW.nfs
    
  3. If SUNW.HAStoragePlus has not been registered as a resource type, register it.


    nodeC# scrgadm -a -t SUNW.HAStoragePlus
    
  4. Create an application resource group for the devicegroup.


    nodeC# scrgadm -a -g nfs-rg \
    -y Pathprefix=/global/etc \
    -y Auto_start_on_new_cluster=False \
    -y RG_dependencies=devicegroup-stor-rg
    
    nfs-rg

    The name of the application resource group.

    Pathprefix=/global/etc

    Specifies a directory into which the resources in the group can write administrative files.

    Auto_start_on_new_cluster=False

    Specifies that the application resource group is not started automatically.

    RG_dependencies=devicegroup-stor-rg

    Specifies the resource groups that the application resource group depends on. In this example, the application resource group depends on the replication resource group.

    If the application resource group is switched over to a new primary node, the replication resource group is automatically switched over. However, if the replication resource group is switched over to a new primary node, the application resource group must be manually switched over.

  5. Add a SUNW.HAStoragePlus resource to the application resource group.


    nodeC# scrgadm -a -j nfs-dg-rs -g nfs-rg \
    -t SUNW.HAStoragePlus \
    -x FileSystemMountPoints=/global/mountpoint \
    -x AffinityOn=True
    
    nfs-dg-rs

    The name of the HAStoragePlus resource for the NFS application.

    -x FileSystemMountPoints=/global/mountpoint

    Specifies that the mount point for the file system is global.

    -t SUNW.HAStoragePlus

    Specifies that the resource is of the type SUNW.HAStoragePlus.

    -x AffinityOn=True

    Specifies that the application resource must perform an affinity switchover for the global devices and cluster file systems defined by -x FileSystemMountPoints=. Therefore, when the application resource group fails over or is switched over, the associated device group is switched over.

    For more information about these extension properties, see the SUNW.HAStoragePlus(5) man page.

  6. Add a logical hostname resource to the application resource group.


    nodeC# /usr/cluster/bin/scrgadm -a -L -j lhost-nfsrg-sec -g nfs-rg \
    -l lhost-nfsrg-sec
    

    lhost-nfsrg-sec is the logical hostname of the application resource group on the secondary cluster.

  7. Add an NFS resource to the application resource group.


    nodeC# /usr/cluster/bin/scrgadm -a -g nfs-rg \
    -j nfs-rs -t SUNW.nfs -y Resource_dependencies=nfs-dg-rs
    
  8. Ensure that the application resource group does not come online on nodeC.


    nodeC# /usr/cluster/bin/scswitch -n -j nfs-rs
    nodeC# /usr/cluster/bin/scswitch -n -j nfs-dg-rs
    nodeC# /usr/cluster/bin/scswitch -n -j lhost-nfsrg-sec
    nodeC# /usr/cluster/bin/scswitch -z -g nfs-rg -h ""
    

    The resource group remains offline after a reboot, because Auto_start_on_new_cluster=False.

  9. If the global volume is mounted on the primary cluster, unmount the global volume from the secondary cluster.


    nodeC# umount /global/mountpoint
    

    If the volume is mounted on a secondary cluster, the synchronization fails.

Next Steps

Go to Example of How to Enable Data Replication.

Example of How to Enable Data Replication

This section describes how data replication is enabled for the example configuration. This section uses the Sun StorEdge Availability Suite software commands sndradm and iiadm. For more information about these commands, see the Sun Cluster 3.0 and Sun StorEdge Software Integration Guide.

This section contains the following procedures:

Procedure: How to Enable Replication on the Primary Cluster

Steps
  1. Access nodeA as superuser.

  2. Flush all transactions.


    nodeA# /usr/sbin/lockfs -a -f
    
  3. Confirm that the logical hostnames lhost-reprg-prim and lhost-reprg-sec are online.


    nodeA# /usr/cluster/bin/scstat -g
    nodeC# /usr/cluster/bin/scstat -g
    

    Examine the state field of the resource group.

  4. Enable remote mirror replication from the primary cluster to the secondary cluster.

    This step enables replication from the master volume on the primary cluster to the master volume on the secondary cluster. In addition, this step enables replication to the remote mirror bitmap on vol04.

    • If the primary cluster and secondary cluster are unsynchronized, run this command:


      nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -e lhost-reprg-prim \
      /dev/vx/rdsk/devicegroup/vol01 \
      /dev/vx/rdsk/devicegroup/vol04 lhost-reprg-sec \
      /dev/vx/rdsk/devicegroup/vol01 \
      /dev/vx/rdsk/devicegroup/vol04 ip sync
      
    • If the primary cluster and secondary cluster are synchronized, run this command:


      nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -E lhost-reprg-prim \
      /dev/vx/rdsk/devicegroup/vol01 \
      /dev/vx/rdsk/devicegroup/vol04 lhost-reprg-sec \
      /dev/vx/rdsk/devicegroup/vol01 \
      /dev/vx/rdsk/devicegroup/vol04 ip sync
      
  5. Enable autosynchronization.


    nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -a on lhost-reprg-prim \
    /dev/vx/rdsk/devicegroup/vol01 \
    /dev/vx/rdsk/devicegroup/vol04 lhost-reprg-sec \
    /dev/vx/rdsk/devicegroup/vol01 \
    /dev/vx/rdsk/devicegroup/vol04 ip sync
    

    This step enables autosynchronization. When the active state of autosynchronization is set to on, the volume sets are resynchronized if the system reboots or a failure occurs.

  6. Verify that the cluster is in logging mode.


    nodeA# /usr/opt/SUNWesm/sbin/sndradm -P
    

    The output should resemble the following:


    /dev/vx/rdsk/devicegroup/vol01 ->
    lhost-reprg-sec:/dev/vx/rdsk/devicegroup/vol01
    autosync: off, max q writes:4194304, max q fbas:16384, mode:sync,ctag:
    devicegroup, state: logging

    In logging mode, the state is logging, and the active state of autosynchronization is off. When the data volume on the disk is written to, the bitmap file on the same disk is updated.
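    The state field can also be checked programmatically from captured `sndradm -P` output. This is a sketch under the assumption that the field layout matches the sample above; the helper name is ours.

    ```shell
    # Hypothetical helper: extract the value after "state:" from captured
    # sndradm -P output.
    replication_state() {
      printf '%s\n' "$1" | sed -n 's/.*state: *\([a-z]*\).*/\1/p' | head -1
    }

    output='autosync: off, max q writes:4194304, max q fbas:16384, mode:sync,ctag:
    devicegroup, state: logging'
    replication_state "$output"   # logging
    ```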

  7. Enable point-in-time snapshot.


    nodeA# /usr/opt/SUNWesm/sbin/iiadm -e ind \
    /dev/vx/rdsk/devicegroup/vol01 \
    /dev/vx/rdsk/devicegroup/vol02 \
    /dev/vx/rdsk/devicegroup/vol03
    nodeA# /usr/opt/SUNWesm/sbin/iiadm -w \
    /dev/vx/rdsk/devicegroup/vol02
    

    This step enables the master volume on the primary cluster to be copied to the shadow volume on the same cluster. The master volume, shadow volume, and point-in-time bitmap volume must be in the same device group. In this example, the master volume is vol01, the shadow volume is vol02, and the point-in-time bitmap volume is vol03.

  8. Attach the point-in-time snapshot to the remote mirror set.


    nodeA# /usr/opt/SUNWesm/sbin/sndradm -I a \
    /dev/vx/rdsk/devicegroup/vol01 \
    /dev/vx/rdsk/devicegroup/vol02 \
    /dev/vx/rdsk/devicegroup/vol03
    

    This step associates the point-in-time snapshot with the remote mirror volume set. Sun StorEdge Availability Suite software ensures that a point-in-time snapshot is taken before remote mirror replication can occur.

Next Steps

Go to How to Enable Replication on the Secondary Cluster.

How to Enable Replication on the Secondary Cluster

Before You Begin

Ensure that you completed the steps in How to Enable Replication on the Primary Cluster.

Steps
  1. Access nodeC as superuser.

  2. Flush all transactions.


    nodeC# /usr/sbin/lockfs -a -f
    
  3. Enable remote mirror replication from the primary cluster to the secondary cluster.


    nodeC# /usr/opt/SUNWesm/sbin/sndradm -n -e lhost-reprg-prim \
    /dev/vx/rdsk/devicegroup/vol01 \
    /dev/vx/rdsk/devicegroup/vol04 lhost-reprg-sec \
    /dev/vx/rdsk/devicegroup/vol01 \
    /dev/vx/rdsk/devicegroup/vol04 ip sync
    

    The primary cluster detects the presence of the secondary cluster and starts synchronization. Refer to the system log file /var/opt/SUNWesm/ds.log for information about the status of the clusters.
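    To watch progress from a script rather than by reading the file manually, the relevant entries can be filtered from the log text. This is a sketch; the sample entries are illustrative and do not show the exact ds.log message format — on nodeC you would read /var/opt/SUNWesm/ds.log itself.

    ```shell
    # Hypothetical helper: show the last few log entries mentioning a volume.
    log_status() {
      printf '%s\n' "$1" | grep "$2" | tail -5
    }

    # Illustrative entries; real ds.log lines differ in format.
    log='sndr: /dev/vx/rdsk/devicegroup/vol01 sync started
    sndr: /dev/vx/rdsk/devicegroup/vol01 sync completed'
    log_status "$log" devicegroup/vol01
    ```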

  4. Enable independent point-in-time snapshot.


    nodeC# /usr/opt/SUNWesm/sbin/iiadm -e ind \
    /dev/vx/rdsk/devicegroup/vol01 \
    /dev/vx/rdsk/devicegroup/vol02 \
    /dev/vx/rdsk/devicegroup/vol03
    nodeC# /usr/opt/SUNWesm/sbin/iiadm -w \
    /dev/vx/rdsk/devicegroup/vol02
    
  5. Attach the point-in-time snapshot to the remote mirror set.


    nodeC# /usr/opt/SUNWesm/sbin/sndradm -I a \
    /dev/vx/rdsk/devicegroup/vol01 \
    /dev/vx/rdsk/devicegroup/vol02 \
    /dev/vx/rdsk/devicegroup/vol03
    
Next Steps

Go to Example of How to Perform Data Replication.

Example of How to Perform Data Replication

This section describes how data replication is performed for the example configuration. This section uses the Sun StorEdge Availability Suite software commands sndradm and iiadm. For more information about these commands, see the Sun Cluster 3.0 and Sun StorEdge Software Integration Guide.

This section contains the following procedures:

How to Perform a Remote Mirror Replication

In this procedure, the master volume of the primary cluster is replicated to the master volume of the secondary cluster. The master volume is vol01 and the remote mirror bitmap volume is vol04.

Steps
  1. Access nodeA as superuser.

  2. Verify that the cluster is in logging mode.


    nodeA# /usr/opt/SUNWesm/sbin/sndradm -P
    

    The output should resemble the following:


    /dev/vx/rdsk/devicegroup/vol01 ->
    lhost-reprg-sec:/dev/vx/rdsk/devicegroup/vol01
    autosync: off, max q writes:4194304, max q fbas:16384, mode:sync,ctag:
    devicegroup, state: logging

    In logging mode, the state is logging, and the active state of autosynchronization is off. When the data volume on the disk is written to, the bitmap file on the same disk is updated.

  3. Flush all transactions.


    nodeA# /usr/sbin/lockfs -a -f
    
  4. Repeat Step 1 through Step 3 on nodeC.

  5. Copy the master volume of nodeA to the master volume of nodeC.


    nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -m lhost-reprg-prim \
    /dev/vx/rdsk/devicegroup/vol01 \
    /dev/vx/rdsk/devicegroup/vol04 lhost-reprg-sec \
    /dev/vx/rdsk/devicegroup/vol01 \
    /dev/vx/rdsk/devicegroup/vol04 ip sync
    
  6. Wait until the replication is complete and the volumes are synchronized.


    nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -w lhost-reprg-prim \
    /dev/vx/rdsk/devicegroup/vol01 \
    /dev/vx/rdsk/devicegroup/vol04 lhost-reprg-sec \
    /dev/vx/rdsk/devicegroup/vol01 \
    /dev/vx/rdsk/devicegroup/vol04 ip sync
    
  7. Confirm that the cluster is in replicating mode.


    nodeA# /usr/opt/SUNWesm/sbin/sndradm -P
    

    The output should resemble the following:


    /dev/vx/rdsk/devicegroup/vol01 ->
    lhost-reprg-sec:/dev/vx/rdsk/devicegroup/vol01
    autosync: on, max q writes:4194304, max q fbas:16384, mode:sync,ctag:
    devicegroup, state: replicating

    In replicating mode, the state is replicating, and the active state of autosynchronization is on. When the primary volume is written to, the secondary volume is updated by Sun StorEdge Availability Suite software.
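    Both conditions that define replicating mode can be confirmed together from captured output. This sketch makes the same format assumptions as the sample above; the helper name is ours.

    ```shell
    # Hypothetical helper: true only when captured sndradm -P output shows
    # state replicating AND autosync on.
    in_replicating_mode() {
      printf '%s\n' "$1" | grep -q 'state: replicating' &&
        printf '%s\n' "$1" | grep -q 'autosync: on'
    }

    out='autosync: on, max q writes:4194304, max q fbas:16384, mode:sync,ctag:
    devicegroup, state: replicating'
    in_replicating_mode "$out" && echo "replicating, autosync on"
    ```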

Next Steps

Go to How to Perform a Point-in-Time Snapshot.

How to Perform a Point-in-Time Snapshot

In this procedure, point-in-time snapshot is used to synchronize the shadow volume of the primary cluster to the master volume of the primary cluster. The master volume is vol01, the remote mirror bitmap volume is vol04, and the shadow volume is vol02.

Before You Begin

Ensure that you completed the steps in How to Perform a Remote Mirror Replication.

Steps
  1. Access nodeA as superuser.

  2. Disable the resource that is running on nodeA.


    nodeA# /usr/cluster/bin/scswitch -n -j nfs-rs
    
  3. Change the primary cluster to logging mode.


    nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -l lhost-reprg-prim \
    /dev/vx/rdsk/devicegroup/vol01 \
    /dev/vx/rdsk/devicegroup/vol04 lhost-reprg-sec \
    /dev/vx/rdsk/devicegroup/vol01 \
    /dev/vx/rdsk/devicegroup/vol04 ip sync
    

    When the data volume on the disk is written to, the bitmap file on the same disk is updated. No replication occurs.

  4. Synchronize the shadow volume of the primary cluster to the master volume of the primary cluster.


    nodeA# /usr/opt/SUNWesm/sbin/iiadm -u s /dev/vx/rdsk/devicegroup/vol02
    nodeA# /usr/opt/SUNWesm/sbin/iiadm -w /dev/vx/rdsk/devicegroup/vol02
    
  5. Synchronize the shadow volume of the secondary cluster to the master volume of the secondary cluster.


    nodeC# /usr/opt/SUNWesm/sbin/iiadm -u s /dev/vx/rdsk/devicegroup/vol02
    nodeC# /usr/opt/SUNWesm/sbin/iiadm -w /dev/vx/rdsk/devicegroup/vol02
    
  6. Restart the application on nodeA.


    nodeA# /usr/cluster/bin/scswitch -e -j nfs-rs
    
  7. Resynchronize the secondary volume with the primary volume.


    nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -u lhost-reprg-prim \
    /dev/vx/rdsk/devicegroup/vol01 \
    /dev/vx/rdsk/devicegroup/vol04 lhost-reprg-sec \
    /dev/vx/rdsk/devicegroup/vol01 \
    /dev/vx/rdsk/devicegroup/vol04 ip sync
    
Next Steps

Go to How to Verify That Replication Is Configured Correctly.

How to Verify That Replication Is Configured Correctly

Before You Begin

Ensure that you completed the steps in How to Perform a Point-in-Time Snapshot.

Steps
  1. Verify that the primary cluster is in replicating mode, with autosynchronization on.


    nodeA# /usr/opt/SUNWesm/sbin/sndradm -P
    

    The output should resemble the following:


    /dev/vx/rdsk/devicegroup/vol01 ->
    lhost-reprg-sec:/dev/vx/rdsk/devicegroup/vol01
    autosync: on, max q writes:4194304, max q fbas:16384, mode:sync,ctag:
    devicegroup, state: replicating

    In replicating mode, the state is replicating, and the active state of autosynchronization is on. When the primary volume is written to, the secondary volume is updated by Sun StorEdge Availability Suite software.

  2. If the primary cluster is not in replicating mode, put it into replicating mode.


    nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -u lhost-reprg-prim \
    /dev/vx/rdsk/devicegroup/vol01 \
    /dev/vx/rdsk/devicegroup/vol04 lhost-reprg-sec \
    /dev/vx/rdsk/devicegroup/vol01 \
    /dev/vx/rdsk/devicegroup/vol04 ip sync
    
  3. Create a directory on a client machine.

    1. Log in to a client machine as superuser.

      You see a prompt that resembles the following:


      client-machine#
    2. Create a directory on the client machine.


      client-machine# mkdir /dir
      
  4. Mount the directory to the application on the primary cluster, and display the mounted directory.

    1. Mount the directory to the application on the primary cluster.


      client-machine# mount -o rw lhost-nfsrg-prim:/global/mountpoint /dir
      
    2. Display the mounted directory.


      client-machine# ls /dir
      
  5. Mount the directory to the application on the secondary cluster, and display the mounted directory.

    1. Unmount the directory to the application on the primary cluster.


      client-machine# umount /dir
      
    2. Take the application resource group offline on the primary cluster.


      nodeA# /usr/cluster/bin/scswitch -n -j nfs-rs
      nodeA# /usr/cluster/bin/scswitch -n -j nfs-dg-rs
      nodeA# /usr/cluster/bin/scswitch -n -j lhost-nfsrg-prim
      nodeA# /usr/cluster/bin/scswitch -z -g nfs-rg -h ""
      
    3. Change the primary cluster to logging mode.


      nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -l lhost-reprg-prim \
      /dev/vx/rdsk/devicegroup/vol01 \
      /dev/vx/rdsk/devicegroup/vol04 lhost-reprg-sec \
      /dev/vx/rdsk/devicegroup/vol01 \
      /dev/vx/rdsk/devicegroup/vol04 ip sync
      

      When the data volume on the disk is written to, the bitmap file on the same disk is updated. No replication occurs.

    4. Ensure that the PathPrefix directory is available.


      nodeC# mount | grep /global/etc
      
    5. Bring the application resource group online on the secondary cluster.


      nodeC# /usr/cluster/bin/scswitch -Z -g nfs-rg
      
    6. Access the client machine as superuser.

      You see a prompt that resembles the following:


      client-machine#
    7. Mount the directory that was created in Step 3 to the application on the secondary cluster.


      client-machine# mount -o rw lhost-nfsrg-sec:/global/mountpoint /dir
      
    8. Display the mounted directory.


      client-machine# ls /dir
      
  6. Ensure that the directory displayed in Step 4 is the same as that displayed in Step 5.

  7. Return the application on the primary cluster to the mounted directory.

    1. Take the application resource group offline on the secondary cluster.


      nodeC# /usr/cluster/bin/scswitch -n -j nfs-rs
      nodeC# /usr/cluster/bin/scswitch -n -j nfs-dg-rs
      nodeC# /usr/cluster/bin/scswitch -n -j lhost-nfsrg-sec
      nodeC# /usr/cluster/bin/scswitch -z -g nfs-rg -h ""
      
    2. Ensure that the global volume is unmounted from the secondary cluster.


      nodeC# umount /global/mountpoint
      
    3. Bring the application resource group online on the primary cluster.


      nodeA# /usr/cluster/bin/scswitch -Z -g nfs-rg
      
    4. Change the primary cluster to replicating mode.


      nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -u lhost-reprg-prim \
      /dev/vx/rdsk/devicegroup/vol01 \
      /dev/vx/rdsk/devicegroup/vol04 lhost-reprg-sec \
      /dev/vx/rdsk/devicegroup/vol01 \
      /dev/vx/rdsk/devicegroup/vol04 ip sync
      

      When the primary volume is written to, the secondary volume is updated by Sun StorEdge Availability Suite software.

See Also

Example of How to Manage a Failover or Switchover

Example of How to Manage a Failover or Switchover

This section describes how to provoke a switchover and how the application is transferred to the secondary cluster. After a switchover or failover, update the DNS entries. For additional information, see Guidelines for Managing a Failover or Switchover.

This section contains the following procedures:

How to Provoke a Switchover

Steps
  1. Change the primary cluster to logging mode.


    nodeA# /usr/opt/SUNWesm/sbin/sndradm -n -l lhost-reprg-prim \
    /dev/vx/rdsk/devicegroup/vol01 \
    /dev/vx/rdsk/devicegroup/vol04 lhost-reprg-sec \
    /dev/vx/rdsk/devicegroup/vol01 \
    /dev/vx/rdsk/devicegroup/vol04 ip sync
    

    When the data volume on the disk is written to, the bitmap volume on the same device group is updated. No replication occurs.

  2. Confirm that the primary cluster and the secondary cluster are in logging mode, with autosynchronization off.

    1. On nodeA, confirm the mode and setting:


      nodeA# /usr/opt/SUNWesm/sbin/sndradm -P
      

      The output should resemble the following:


      /dev/vx/rdsk/devicegroup/vol01 ->
      lhost-reprg-sec:/dev/vx/rdsk/devicegroup/vol01
      autosync:off, max q writes:4194304,max q fbas:16384,mode:sync,ctag:
      devicegroup, state: logging
    2. On nodeC, confirm the mode and setting:


      nodeC# /usr/opt/SUNWesm/sbin/sndradm -P
      

      The output should resemble the following:


      /dev/vx/rdsk/devicegroup/vol01 <-
      lhost-reprg-prim:/dev/vx/rdsk/devicegroup/vol01
      autosync:off, max q writes:4194304,max q fbas:16384,mode:sync,ctag:
      devicegroup, state: logging

    For nodeA and nodeC, the state should be logging, and the active state of autosynchronization should be off.

  3. Confirm that the secondary cluster is ready to take over from the primary cluster.


    nodeC# /usr/sbin/fsck -y /dev/vx/rdsk/devicegroup/vol01
    
  4. Switch over to the secondary cluster.


    nodeC# /usr/cluster/bin/scswitch -Z -g nfs-rg
    
Next Steps

Go to How to Update the DNS Entry.

How to Update the DNS Entry

For an illustration of how DNS maps a client to a cluster, see Figure 6–6.

Before You Begin

Ensure that you completed all steps in How to Provoke a Switchover.

Steps
  1. Start the nsupdate command.

    For information, see the nsupdate(1M) man page.

  2. Remove the current DNS mapping between the logical hostname of the application resource group and the cluster IP address, for both clusters.


    > update delete lhost-nfsrg-prim A
    > update delete lhost-nfsrg-sec A
    > update delete ipaddress1rev.in-addr.arpa ttl PTR lhost-nfsrg-prim
    > update delete ipaddress2rev.in-addr.arpa ttl PTR lhost-nfsrg-sec
    
    ipaddress1rev

    The IP address of the primary cluster, in reverse order.

    ipaddress2rev

    The IP address of the secondary cluster, in reverse order.

    ttl

    The time to live, in seconds. A typical value is 3600.
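    The in-addr.arpa name that the PTR lines above expect can be built from a dotted-quad address with a small helper. This is a sketch; the helper name is ours and is not part of nsupdate.

    ```shell
    # Hypothetical helper: reverse the octets of an IPv4 address and append
    # the in-addr.arpa suffix, e.g. 192.168.1.10 -> 10.1.168.192.in-addr.arpa
    reverse_ip() {
      echo "$1" | awk -F. '{ printf "%s.%s.%s.%s.in-addr.arpa\n", $4, $3, $2, $1 }'
    }

    reverse_ip 192.168.1.10   # 10.1.168.192.in-addr.arpa
    ```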

  3. Create a new DNS mapping between the logical hostname of the application resource group and the cluster IP address, for both clusters.

    Map the primary logical hostname to the IP address of the secondary cluster and map the secondary logical hostname to the IP address of the primary cluster.


    > update add lhost-nfsrg-prim ttl A ipaddress2fwd
    > update add lhost-nfsrg-sec ttl A ipaddress1fwd
    > update add ipaddress2rev.in-addr.arpa ttl PTR lhost-nfsrg-prim
    > update add ipaddress1rev.in-addr.arpa ttl PTR lhost-nfsrg-sec
    
    ipaddress2fwd

    The IP address of the secondary cluster, in forward order.

    ipaddress1fwd

    The IP address of the primary cluster, in forward order.

    ipaddress2rev

    The IP address of the secondary cluster, in reverse order.

    ipaddress1rev

    The IP address of the primary cluster, in reverse order.