Sun Cluster Geographic Edition Data Replication Guide for Sun StorEdge Availability Suite

Chapter 1 Replicating Data With Sun StorEdge Availability Suite 3.2.1 Software

During data replication, data from a primary cluster is copied to a backup or secondary cluster. The secondary cluster can be located at a site that is geographically separated from the primary cluster. The distance between the two sites depends on the distance support that is available from your data replication product.

Sun Cluster Geographic Edition software supports the use of Sun StorEdge Availability Suite 3.2.1 remote mirror software for data replication. Before you can replicate data with Sun StorEdge Availability Suite 3.2.1 software, you must be familiar with the Sun StorEdge Availability Suite 3.2.1 documentation and have the Sun StorEdge Availability Suite 3.2.1 product and the latest Sun StorEdge Availability Suite 3.2.1 patches installed on your system. For information about installing Sun StorEdge Availability Suite 3.2.1 software and its latest patches, see the Sun StorEdge Availability Suite 3.2 Software Installation Guide.

This chapter describes the procedures for configuring data replication with Sun StorEdge Availability Suite 3.2.1 software. This chapter contains the following sections: Task Summary of Replicating Data in a Sun StorEdge Availability Suite 3.2.1 Protection Group, Overview of Sun StorEdge Availability Suite 3.2.1 Data Replication, and Initial Configuration of Sun StorEdge Availability Suite 3.2.1 Software.

Task Summary of Replicating Data in a Sun StorEdge Availability Suite 3.2.1 Protection Group

This section summarizes the steps for configuring Sun StorEdge Availability Suite 3.2.1 data replication in a protection group.

Table 1–1 Administration Tasks for Sun StorEdge Availability Suite 3.2.1 Data Replication

Task: Perform an initial configuration of the Sun StorEdge Availability Suite 3.2.1 software.
Description: See Initial Configuration of Sun StorEdge Availability Suite 3.2.1 Software.

Task: Create a protection group that is configured for Sun StorEdge Availability Suite 3.2.1 data replication.
Description: See How to Create and Configure a Sun StorEdge Availability Suite 3.2.1 Protection Group.

Task: Add a device group that is controlled by Sun StorEdge Availability Suite 3.2.1.
Description: See How to Add a Data Replication Device Group to a Sun StorEdge Availability Suite 3.2.1 Protection Group.

Task: Add an application resource group to the protection group.
Description: See How to Add an Application Resource Group to a Sun StorEdge Availability Suite 3.2.1 Protection Group.

Task: Replicate the protection group configuration to a secondary cluster.
Description: See How to Replicate the Sun StorEdge Availability Suite 3.2.1 Protection Group Configuration to a Partner Cluster.

Task: Activate the protection group.
Description: See How to Activate a Sun StorEdge Availability Suite 3.2.1 Protection Group.

Task: Verify the protection group configuration.
Description: Perform a trial switchover or takeover and test some simple failure scenarios before bringing your system online. See Chapter 3, Migrating Services That Use Sun StorEdge Availability Suite 3.2.1 Data Replication.

Task: Check the runtime status of replication.
Description: See Checking the Runtime Status of Sun StorEdge Availability Suite 3.2.1 Data Replication.

Task: Detect failure.
Description: See Detecting Cluster Failure on a System That Uses Sun StorEdge Availability Suite 3.2.1 Data Replication.

Task: Migrate services by using a switchover.
Description: See Migrating Services That Use Sun StorEdge Availability Suite 3.2.1 With a Switchover.

Task: Migrate services by using a takeover.
Description: See Forcing a Takeover on Systems That Use Sun StorEdge Availability Suite 3.2.1.

Task: Recover data after forcing a takeover.
Description: See Recovering Sun StorEdge Availability Suite 3.2.1 Data After a Takeover.

Overview of Sun StorEdge Availability Suite 3.2.1 Data Replication

This section provides an overview of Sun StorEdge Availability Suite 3.2.1 resource groups and outlines some limitations of Sun StorEdge Availability Suite 3.2.1 replication on clusters of more than two nodes.

Sun StorEdge Availability Suite 3.2.1 Lightweight Resource Groups

You can add a device group that is controlled by the Sun StorEdge Availability Suite 3.2.1 software to a protection group. The Sun Cluster Geographic Edition software creates a lightweight resource group for each device group. The name of a lightweight resource group has the following format:

AVSdevicegroupname-stor-rg

For example, a device group named avsdg that is controlled by the Sun StorEdge Availability Suite 3.2.1 software has a lightweight resource group named avsdg-stor-rg.

The lightweight resource group collocates the logical host and the device group, a requirement of data replication with the Sun StorEdge Availability Suite 3.2.1 remote mirror software.

Each lightweight resource group contains two resources: a logical hostname resource for the local logical host that is used for replication of the device group, and an HAStoragePlus resource that controls the collocation of the device group with the resource group.

For more information about lightweight resource groups, see the Sun StorEdge Availability Suite 3.2.1 documentation.

Sun StorEdge Availability Suite 3.2.1 Replication Resource Groups

When a device group that is controlled by the Sun StorEdge Availability Suite 3.2.1 software is added to a protection group, the Sun Cluster Geographic Edition software creates a special replication resource for that device group in the replication resource group. One replication resource group is created for each protection group, and it contains one replication resource for each device group in the protection group. By monitoring these replication resource groups, the Sun Cluster Geographic Edition software monitors the overall status of replication.

The name of the replication resource group has the following format:

AVSprotectiongroupname-rep-rg

The replication resource in the replication resource group monitors the replication status of the device group on the local cluster, which is reported by the Sun StorEdge Availability Suite 3.2.1 remote mirror software.

The name of a replication resource has the following format:

AVSdevicegroupname-rep-rs
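
You can check what these resources monitor directly from a cluster node. The following is a minimal sketch that assumes the default Sun StorEdge Availability Suite 3.2.1 installation path that is used elsewhere in this chapter; the resource and resource group names on your clusters depend on your own device group and protection group names.

phys-paris-1# scstat -g                          # list resource groups, including the lightweight and replication resource groups
phys-paris-1# /usr/opt/SUNWesm/sbin/sndradm -P   # print the remote mirror volume sets and their replication status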

Initial Configuration of Sun StorEdge Availability Suite 3.2.1 Software

This section describes the initial steps you must perform before you can configure Sun StorEdge Availability Suite 3.2.1 replication in the Sun Cluster Geographic Edition product.

The example protection group, avspg, in this section has been configured on a partnership that consists of two clusters, cluster-paris and cluster-newyork. An application, which is encapsulated in the apprg1 resource group, is protected by the avspg protection group. The application data is contained in the avsdg device group. The volumes in the avsdg device group can be Solaris Volume Manager volumes, VERITAS Volume Manager volumes, or raw device volumes.

The resource group, apprg1, and the device group, avsdg, are present on both the cluster-paris cluster and the cluster-newyork cluster. The avspg protection group protects the application data by replicating data between the cluster-paris cluster and the cluster-newyork cluster.


Note –

Replication of each device group requires a logical host on the local cluster and a logical host on the partner cluster.


You cannot use the slash character (/) in a cluster tag in the Sun Cluster Geographic Edition software. If you are using raw DID devices, you cannot use predefined DID device group names such as dsk/d3.

To use DIDs with raw device groups, see How to Use DIDs With Raw Device Groups.

Sun StorEdge Availability Suite Volume Set

Before you can define the Sun StorEdge Availability Suite 3.2.1 volume set, you must determine the following: the logical hostnames to use for replication on the primary and secondary clusters, the data volumes to replicate, and the bitmap volume that is associated with each data volume on both clusters.

The volset file is located at /var/cluster/geo/avs/devicegroupname-volset.ini on all nodes of the primary and secondary clusters of the protection group. For example, the volset file for the device group avsdg is located at /var/cluster/geo/avs/avsdg-volset.ini.

The fields in the volume set file that are handled by the Sun Cluster Geographic Edition software are described in the following table. The Sun Cluster Geographic Edition software does not handle other parameters of the volume set, including disk queue, size of memory queue, and number of asynchronous threads. You must adjust these parameters manually by using Sun StorEdge Availability Suite 3.2.1 commands.

phost (Primary host): The logical host of the server on which the primary volume resides.

pdev (Primary device): Primary volume partition. Specify full path names only.

pbitmap (Primary bitmap): Volume partition in which the bitmap of the primary partition is stored. Specify full path names only.

shost (Secondary host): The logical host of the server on which the secondary volume resides.

sdev (Secondary device): Secondary volume partition. Specify full path names only.

sbitmap (Secondary bitmap): Volume partition in which the bitmap of the secondary partition is stored. Specify full path names only.

ip (Network transfer protocol): IP address.

sync | async (Operating mode): sync is the mode in which the I/O operation is confirmed as complete only when the volume on the secondary cluster has been updated. async is the mode in which the primary host I/O operation is confirmed as complete before the volumes on the secondary cluster are updated.

g iogroupname (I/O group name): An I/O group name. The set must be configured in the same I/O group on both the primary and the secondary cluster. This parameter is optional and need only be configured if you have an I/O group.

C tag (Cluster tag): The device group name or resource tag of the local data and bitmap volumes in cases where this information is not implied by the name of the volume. For example, /dev/md/avsset/rdsk/vol indicates a device group named avsset, and /dev/vx/rdsk/avsdg/vol indicates a device group named avsdg.

The Sun Cluster Geographic Edition software does not modify the value of the Sun StorEdge Availability Suite 3.2.1 parameters. The software controls only the role of the volume set during switchover and takeover operations.

For more information about the format of the volume set files, refer to the Sun StorEdge Availability Suite 3.2.1 documentation.

How to Use DIDs With Raw Device Groups

  1. Remove the DIDs you want to use from the predefined DID device group.

  2. Add the DIDs to a raw device group. Ensure that the new device group name does not contain any slashes.

  3. Create the same group name on each cluster of the partnership. You can use the same DIDs on each cluster.

  4. Use the new group name where a device group name is required. You can verify the result as shown in the example that follows this procedure.
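
The following verification sketch uses the example names from this chapter (DID device d3 and raw device group rawdg); substitute the names from your own configuration. These commands only display the configuration and do not change it.

phys-paris-1# scdidadm -L | grep d3    # confirm the DID device-to-path mapping on each node
phys-paris-1# scstat -D                # confirm that the new raw device group is known to Sun Cluster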

How to Configure the Sun StorEdge Availability Suite 3.2.1 Volume in Sun Cluster

This procedure configures Sun StorEdge Availability Suite 3.2.1 volumes in a Sun Cluster environment. These volumes can be Solaris Volume Manager volumes, VERITAS Volume Manager volumes, or raw device volumes.

The volumes are encapsulated at the Sun Cluster device-group level. The Sun StorEdge Availability Suite 3.2.1 software interacts with the Solaris Volume Manager disksets, VERITAS Volume Manager disk groups, or raw devices through this device group interface. The path to the volumes depends on the volume type, as follows:

Solaris Volume Manager: /dev/md/disksetname/rdsk/d#, where # represents a number

VERITAS Volume Manager: /dev/vx/rdsk/diskgroupname/volumename

Raw device: /dev/did/rdsk/d#s#

  1. Create a diskset, avsset, by using Solaris Volume Manager or a disk group, avsdg, by using VERITAS Volume Manager or a raw device on cluster-paris and cluster-newyork.

    For example, if you configure the volume by using a raw device, choose a raw device group, dsk/d3, on cluster-paris and cluster-newyork.

  2. Create two volumes in the diskset or disk group on cluster-paris.

    The Sun StorEdge Availability Suite software requires a dedicated bitmap volume for each data volume to track the modifications that are made to the data volume when the system is in logging mode. For a Solaris Volume Manager sketch of steps 1 and 2, see the example that follows this procedure.

    If you use a raw device to configure the volumes, create two partitions, /dev/did/rdsk/d3s3 and /dev/did/rdsk/d3s4, on the /dev/did/rdsk/d3 device on cluster-paris.

  3. Create two volumes in the diskset or disk group on cluster-newyork.

    If you use a raw device to configure the volumes, create two partitions, /dev/did/rdsk/d3s5 and /dev/did/rdsk/d3s6, on the /dev/did/rdsk/d3 device on cluster-newyork.
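
The following sketch shows steps 1 and 2 for a Solaris Volume Manager configuration. The diskset name (avsset), the DID devices (d3 and d4), and the volume names (d100 and d101) match the examples in this chapter but are otherwise assumptions; adjust them to your own storage layout.

phys-paris-1# metaset -s avsset -a -h phys-paris-1 phys-paris-2        # create the diskset and add its hosts
phys-paris-1# metaset -s avsset -a /dev/did/rdsk/d3 /dev/did/rdsk/d4   # add the shared disks to the diskset
phys-paris-1# metainit -s avsset d100 1 1 /dev/did/rdsk/d3s0           # create the data volume
phys-paris-1# metainit -s avsset d101 1 1 /dev/did/rdsk/d4s0           # create the bitmap volume

Run equivalent commands on cluster-newyork to create the volumes for step 3.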

Enabling a Sun StorEdge Availability Suite 3.2.1 Volume Set

You can enable the Sun StorEdge Availability Suite 3.2.1 volume sets in one of two ways: automatically, by providing a devicegroupname-volset.ini file so that the volume sets are enabled when the device group is added to the protection group, or manually, by running the sndradm command after the device group has been added to the protection group.

Automatically Enabling a Solaris Volume Manager Volume Set

In this example, the cluster-paris cluster is the primary and avsset is a device group that contains a Solaris Volume Manager diskset.


Example 1–1 Automatically Enabling a Solaris Volume Manager Volume Set

This example has the following entries in the /var/cluster/geo/avs/avsset-volset.ini file:


logicalhost-paris-1 /dev/md/avsset/rdsk/d100 /dev/md/avsset/rdsk/d101 
logicalhost-newyork-1 /dev/md/avsset/rdsk/d100 /dev/md/avsset/rdsk/d101 
ip async C avsset

In this file, logicalhost-paris-1 and logicalhost-newyork-1 are the logical hosts that are used for replication, d100 is the data volume and d101 is the bitmap volume on each cluster, ip async specifies asynchronous replication over IP, and C avsset identifies the device group. The configuration defines a volume set that replicates d100 from cluster-paris to d100 on cluster-newyork by using the bitmap volumes and logical hostnames that are specified in the file.


Automatically Enabling a VERITAS Volume Manager Volume Set

In this example, the cluster-paris cluster is the primary and avsdg is a device group that contains a VERITAS Volume Manager disk group.


Example 1–2 Automatically Enabling a VERITAS Volume Manager Volume Set

This example has the following entries in the /var/cluster/geo/avs/avsdg-volset.ini file:


logicalhost-paris-1 /dev/vx/rdsk/avsdg/vol-data-paris \
/dev/vx/rdsk/avsdg/vol-bitmap-paris 
logicalhost-newyork-1 /dev/vx/rdsk/avsdg/vol-data-newyork \
/dev/vx/rdsk/avsdg/vol-bitmap-ny 
ip async C avsdg

In this file, logicalhost-paris-1 and logicalhost-newyork-1 are the logical hosts that are used for replication, vol-data-paris and vol-data-newyork are the data volumes, vol-bitmap-paris and vol-bitmap-ny are the bitmap volumes, ip async specifies asynchronous replication over IP, and C avsdg identifies the device group. The configuration defines a volume set that replicates vol-data-paris from cluster-paris to vol-data-newyork on cluster-newyork by using the bitmap volumes and logical hostnames that are specified in the file.


Automatically Enabling a Raw Device Volume Set

In this example, the cluster-paris cluster is the primary and rawdg is the name of the device group that contains a raw device disk group, /dev/did/rdsk/d3.


Example 1–3 Automatically Enabling a Raw Device Volume Set

This example has the following entries in the /var/cluster/geo/avs/rawdg-volset.ini file:


logicalhost-paris-1 /dev/did/rdsk/d3s3 /dev/did/rdsk/d3s4 
logicalhost-newyork-1 /dev/did/rdsk/d3s5 /dev/did/rdsk/d3s6 
ip async C rawdg

In this file, logicalhost-paris-1 and logicalhost-newyork-1 are the logical hosts that are used for replication, d3s3 and d3s5 are the data partitions, d3s4 and d3s6 are the bitmap partitions, ip async specifies asynchronous replication over IP, and C rawdg identifies the device group. The configuration defines a volume set that replicates d3s3 from cluster-paris to d3s5 on cluster-newyork by using the bitmap volumes and logical hostnames that are specified in the file.


Manually Enabling Volume Sets

After you have added the device group to the protection group, avspg, you can manually enable the Sun StorEdge Availability Suite 3.2.1 volume sets.


Example 1–4 Manually Enabling the Sun StorEdge Availability Suite 3.2.1 Volume Set

This example manually enables a Solaris Volume Manager volume set.


phys-paris-1# /usr/opt/SUNWesm/sbin/sndradm -e logicalhost-paris-1 \
/dev/md/avsset/rdsk/d100 /dev/md/avsset/rdsk/d101 \
logicalhost-newyork-1 /dev/md/avsset/rdsk/d100 \
/dev/md/avsset/rdsk/d101 ip async C avsset


Example 1–5 Manually Enabling a VERITAS Volume Manager Volume Set

This example manually enables a VERITAS Volume Manager volume set.


phys-paris-1# /usr/opt/SUNWesm/sbin/sndradm -e logicalhost-paris-1 \
/dev/vx/rdsk/avsdg/vol-data-paris /dev/vx/rdsk/avsdg/vol-bitmap-paris \
logicalhost-newyork-1 /dev/vx/rdsk/avsdg/vol-data-newyork \
/dev/vx/rdsk/avsdg/vol-bitmap-newyork ip async C avsdg


Example 1–6 Manually Enabling a Raw Device Volume Set

This example manually enables a raw device volume set.


phys-paris-1# /usr/opt/SUNWesm/sbin/sndradm -e logicalhost-paris-1 \
/dev/did/rdsk/d3s3 /dev/did/rdsk/d3s4 logicalhost-newyork-1 /dev/did/rdsk/d3s5 \
/dev/did/rdsk/d3s6 ip async C rawdg

Information about the sndradm command execution is written to the Sun StorEdge Availability Suite 3.2.1 log file, /var/opt/SUNWesm/ds.log. Refer to this file if errors occur while manually enabling the volume set.
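
After you manually enable a volume set, you can confirm the result before you continue. This is a minimal check that assumes the default installation and log file paths shown in this chapter.

phys-paris-1# /usr/opt/SUNWesm/sbin/sndradm -P    # print the configured volume sets and their status
phys-paris-1# tail -20 /var/opt/SUNWesm/ds.log    # review recent Sun StorEdge Availability Suite log messages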

How to Configure the Sun Cluster Device Group That Is Controlled by Sun StorEdge Availability Suite 3.2.1

Sun StorEdge Availability Suite 3.2.1 software supports Solaris Volume Manager, VERITAS Volume Manager, and raw device volumes.

  1. Ensure that the device group that contains the volume set that you want to replicate is registered with Sun Cluster software.

    For more information about registering device groups, refer to the scsetup(1M) and scconf(1M) man pages.

  2. If you are using a VERITAS Volume Manager device group, synchronize the VERITAS Volume Manager configuration by using the Sun Cluster command scsetup or scconf, as shown in the example that follows this procedure.

  3. Ensure that the device group is displayed in the output of the scstat -D command.

    For more information about this command, see the scstat(1M) man page.

  4. Repeat steps 1–3 on both clusters, cluster-paris and cluster-newyork.
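
The following sketch shows steps 1 through 3 for the example VERITAS Volume Manager device group avsdg. The scconf options shown are an assumption; verify the exact syntax against the scconf(1M) man page for your Sun Cluster release.

phys-paris-1# scconf -a -D type=vxvm,name=avsdg,nodelist=phys-paris-1:phys-paris-2   # register the disk group (step 1)
phys-paris-1# scconf -c -D name=avsdg,sync                                           # synchronize the VxVM configuration (step 2)
phys-paris-1# scstat -D                                                              # verify that avsdg is listed (step 3)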

How to Configure a Highly Available Cluster Global File System for Use With Sun StorEdge Availability Suite 3.2.1

  1. Create the required file system on the data volume that you created previously, for example, vol-data-paris.

    The application writes to this file system.

  2. Add an entry to the /etc/vfstab file that contains information such as the mount location.


    Note –

    You must set the mount at boot field in this file to no. This value prevents the file system from being mounted on the secondary cluster at cluster startup. Instead, the Sun Cluster software and the Sun Cluster Geographic Edition framework handle mounting the file system by using the HAStoragePlus resource when the application is brought online on the primary cluster. Do not mount the file system on the secondary cluster; if you do, data from the primary cluster is not replicated to the secondary cluster.


  3. To handle the new file system, add the HAStoragePlus resource to the application resource group, apprg1.

    Adding this resource ensures that the necessary file systems are remounted before the application is started.

    For more information about the HAStoragePlus resource type, refer to the Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

  4. Repeat steps 1–3 on both cluster-paris and cluster-newyork.


Example 1–7 Configuring a Highly Available Cluster Global File System for Solaris Volume Manager Volumes

This example configures a highly available cluster global file system for Solaris Volume Manager volumes. This example assumes that the resource group apprg1 already exists.

  1. Create a UNIX file system (UFS).


    # newfs /dev/md/avsset/rdsk/d100

    Add the following entry to the /etc/vfstab file:


    /dev/md/avsset/dsk/d100 /dev/md/avsset/rdsk/d100 /global/sample ufs 2 no logging
  2. Add the HAStoragePlus resource.


    # scrgadm -a -j rs-hasp -g apprg1 -t SUNW.HAStoragePlus \
    -x FilesystemMountPoints=/global/sample -x AffinityOn=TRUE


Example 1–8 Configuring a Highly Available Cluster Global File System for VERITAS Volume Manager Volumes

This example assumes that the apprg1 resource group already exists.

  1. Create a UNIX file system (UFS).


    # newfs /dev/vx/rdsk/avsdg/vol-data-paris

    Add the following entry to the /etc/vfstab file:


    /dev/vx/dsk/avsdg/vol-data-paris /dev/vx/rdsk/avsdg/vol-data-paris /global/sample ufs 2 no logging
  2. Add the HAStoragePlus resource.


    # scrgadm -a -j rs-hasp -g apprg1 -t SUNW.HAStoragePlus \
    -x FilesystemMountPoints=/global/sample -x AffinityOn=TRUE


Example 1–9 Configuring a Highly Available Cluster Global File System for Raw Device Volumes

This example assumes that the apprg1 resource group already exists.

  1. Create a UNIX file system (UFS).


    # newfs /dev/did/rdsk/d3s3

    Add the following entry to the /etc/vfstab file:


    /dev/did/dsk/d3s3 /dev/did/rdsk/d3s3 /global/sample ufs 2 no logging
  2. Add the HAStoragePlus resource.


    # scrgadm -a -j rs-hasp -g apprg1 -t SUNW.HAStoragePlus \
    -x FilesystemMountPoints=/global/sample -x AffinityOn=TRUE
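

After you add the HAStoragePlus resource on both clusters, you can bring the application resource group online on the primary cluster to confirm that the file system mounts as expected. This is a sketch that uses the example names from this chapter; substitute your own resource group, resource, and mount point names.

phys-paris-1# scswitch -Z -g apprg1    # bring apprg1 and its resources online
phys-paris-1# scstat -g                # verify that apprg1 and rs-hasp are online
phys-paris-1# df -k /global/sample     # confirm that the file system is mounted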