Sun Cluster Geographic Edition System Administration Guide

Initial Configuration of Sun StorEdge Availability Suite 3.2.1 Software

This section describes the initial steps you must perform before you can configure Sun StorEdge Availability Suite 3.2.1 replication in the Sun Cluster Geographic Edition product.

This section uses an example of a protection group, avspg, that is configured on a partnership that consists of two clusters, cluster-paris and cluster-newyork. An application, which is encapsulated in the apprg1 resource group, is to be protected by the avspg protection group. The application data is contained in volumes in the avsdg device group. These volumes can be Solaris Volume Manager volumes, VERITAS Volume Manager volumes, or raw device volumes.

The resource group, apprg1, and the device group, avsdg, are present on both cluster-paris and cluster-newyork. The avspg protection group protects the application data by replicating it between cluster-paris and cluster-newyork.


Note –

Replication of each device group requires one logical host on the local cluster and one logical host on the partner cluster.


You cannot use the slash character (/) in a cluster tag in Sun Cluster Geographic Edition software. If you are using raw DID devices, you cannot use predefined DID device group names such as dsk/d3.

To use DIDs with raw device groups, complete the following procedure.

Sun StorEdge Availability Suite Volume Set

Before you can define the Sun StorEdge Availability Suite 3.2.1 volume set, you must determine the values of the volume set fields that are described in this section and record them in a volume set file.

The volset file is located at /var/cluster/geo/avs/device-group-name-volset.ini on all the nodes of the protection group's primary cluster and secondary cluster. For example, the volset file for the device group avsdg would be located at /var/cluster/geo/avs/avsdg-volset.ini.

The fields in the volume set file that are handled by the Sun Cluster Geographic Edition software are described in the following table. The Sun Cluster Geographic Edition software does not handle other parameters of the volume set, including disk queue, size of memory queue, and number of asynchronous threads. You must adjust these parameters manually by using Sun StorEdge Availability Suite 3.2.1 commands.

phost (Primary host): The logical host of the server on which the primary volume resides.

pdev (Primary device): The primary volume partition. Specify full path names only.

pbitmap (Primary bitmap): The volume partition in which the bitmap of the primary partition is stored. Specify full path names only.

shost (Secondary host): The logical host of the server on which the secondary volume resides.

sdev (Secondary device): The secondary volume partition. Specify full path names only.

sbitmap (Secondary bitmap): The volume partition in which the bitmap of the secondary partition is stored. Specify full path names only.

ip (Network transfer protocol): Specify ip.

sync | async (Operating mode): sync is the mode in which the I/O operation is confirmed as complete only when the volume on the secondary cluster has been updated. async is the mode in which the primary host I/O operation is confirmed as complete before the volumes on the secondary cluster are updated.

g io-groupname (I/O group name): An I/O group name can be specified with the g character. The set must be configured in the same I/O group on both the primary and the secondary cluster.

C tag: Specifies the device group name or resource tag of the local data and bitmap volumes in cases where this information is not implied by the name of the volume. For example, /dev/md/avsset/rdsk/vol indicates a device group named avsset, and /dev/vx/rdsk/avsdg/vol indicates a device group named avsdg.
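
Taken together, these fields appear in the volume set file in the order shown in the following schematic. This layout is inferred from Examples 6–1 through 6–3 later in this section; as those examples show, a dash (-) can be used as a placeholder for the I/O group name.


phost pdev pbitmap
shost sdev sbitmap
ip {sync | async} g io-groupname C tag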

Sun Cluster Geographic Edition software does not modify the values of the Sun StorEdge Availability Suite 3.2.1 parameters. The software controls only the role of the volume set during switchover and takeover operations.

For more information about the format of the volume set files, refer to the Sun StorEdge Availability Suite 3.2.1 documentation.

How to Use DIDs With Raw Device Groups

Steps
  1. Remove the DIDs that you want to use from their predefined DID device groups.

  2. Add the DIDs to a raw device group with a name that does not contain any slashes.

  3. Create the same group name on each cluster of the partnership. You can use the same DIDs on each cluster.

  4. Use this new name where a device group name is required.
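
The following sketch shows how this procedure might look with the scconf command, assuming a DID device d3 that belongs to the predefined group dsk/d3 and a new raw device group named rawdg. The rawdisk options shown here are illustrative, so verify them against the scconf(1M) man page for your release.


# Remove the predefined DID device group, then re-create the device
# in a raw device group whose name contains no slashes.
# phys-paris-2 is an assumed second node name.
phys-paris-1# scconf -r -D name=dsk/d3
phys-paris-1# scconf -a -D type=rawdisk,name=rawdg,\
nodelist=phys-paris-1:phys-paris-2,preferenced=false,\
failback=disabled,globaldev=d3

Run the same commands on cluster-newyork so that both clusters use the group name rawdg.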

How to Configure Sun StorEdge Availability Suite 3.2.1 Volumes in Sun Cluster

This procedure provides an example of how to configure Sun StorEdge Availability Suite 3.2.1 volumes in Sun Cluster. These volumes can be Solaris Volume Manager volumes, VERITAS Volume Manager volumes, or raw device volumes.

The volumes are encapsulated at the Sun Cluster device-group level. The Sun StorEdge Availability Suite 3.2.1 software interacts with Solaris Volume Manager disksets, VERITAS Volume Manager disk groups, and raw devices through this device group interface. The path to a volume depends on the volume type, as described in the following table.

Volume Type                Path

Solaris Volume Manager     /dev/md/diskset-name/rdsk/d#, where # represents a number

VERITAS Volume Manager     /dev/vx/rdsk/disk-group-name/volume-name

Raw device                 /dev/did/rdsk/d#s#

Steps
  1. On cluster-paris and cluster-newyork, create a diskset, avsset, by using Solaris Volume Manager, a disk group, avsdg, by using VERITAS Volume Manager, or a raw device group.

    For example, if you configure the volume by using a raw device, choose a raw device group, dsk/d3, on cluster-paris and cluster-newyork.

  2. Create two volumes in the diskset or disk group on cluster-paris.

    The Sun StorEdge Availability Suite software requires a dedicated bitmap volume for each data volume to track modifications to the data volume when the system is in logging mode.

    If you use a raw device to configure the volumes, create two partitions, /dev/did/rdsk/d3s3 and /dev/did/rdsk/d3s4, on the /dev/did/rdsk/d3 device on cluster-paris.

  3. Create two volumes in the diskset or disk group on cluster-newyork.

    If you use a raw device to configure the volumes, create two partitions, /dev/did/rdsk/d3s5 and /dev/did/rdsk/d3s6, on the /dev/did/rdsk/d3 device on cluster-newyork.
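
For example, with Solaris Volume Manager, the data and bitmap volumes on cluster-paris might be created as follows. This is a minimal sketch: the DID devices d10 and d11, and the one-way concatenations, are illustrative assumptions.


# d10 and d11 are hypothetical DID devices in the avsset diskset;
# d100 is the data volume and d101 is its bitmap volume
phys-paris-1# metainit -s avsset d100 1 1 /dev/did/rdsk/d10s0
phys-paris-1# metainit -s avsset d101 1 1 /dev/did/rdsk/d11s0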

Enabling a Sun StorEdge Availability Suite 3.2.1 Volume Set

The Sun StorEdge Availability Suite 3.2.1 volume sets can be enabled in two ways: automatically, by providing a volume set file that the Sun Cluster Geographic Edition software uses when the device group is added to the protection group, or manually, by running the sndradm command after the device group has been added to the protection group. Both approaches are illustrated in the examples that follow.

Automatically Enabling a Solaris Volume Manager Volume Set

In this example, the cluster-paris cluster is the primary and avsset is a device group that contains a Solaris Volume Manager diskset.


Example 6–1 Automatically Enabling a Solaris Volume Manager Volume Set

This example has the following entries in /var/cluster/geo/avs/avsset-volset.ini:


logicalhost-paris-1 /dev/md/avsset/rdsk/d100 /dev/md/avsset/rdsk/d101 
logicalhost-newyork-1 /dev/md/avsset/rdsk/d100 /dev/md/avsset/rdsk/d101 
ip async g - C avsset

The sample configuration file defines a volume set that replicates d100 from cluster-paris to d100 on cluster-newyork by using the bitmap volumes and logical hostnames that are specified in the file.


Automatically Enabling a VERITAS Volume Manager Volume Set

In this example, the cluster-paris cluster is the primary and avsdg is a device group that contains a VERITAS Volume Manager disk group.


Example 6–2 Automatically Enabling a VERITAS Volume Manager Volume Set

This example has the following entries in the /var/cluster/geo/avs/avsdg-volset.ini file:


logicalhost-paris-1 /dev/vx/rdsk/avsdg/vol-data-paris \
/dev/vx/rdsk/avsdg/vol-bitmap-paris 
logicalhost-newyork-1 /dev/vx/rdsk/avsdg/vol-data-newyork \
/dev/vx/rdsk/avsdg/vol-bitmap-ny 
ip async g - C avsdg

The sample configuration file defines a volume set that replicates vol-data-paris from cluster-paris to vol-data-newyork on cluster-newyork. The volume set uses the bitmap volumes and logical hostnames that are specified in the file.


Automatically Enabling a Raw Device Volume Set

In this example, the cluster-paris cluster is the primary, and rawdg is the name of the device group that contains the raw device /dev/did/rdsk/d3.


Example 6–3 Automatically Enabling a Raw Device Volume Set

This example has the following entries in the /var/cluster/geo/avs/rawdg-volset.ini file:


logicalhost-paris-1 /dev/did/rdsk/d3s3 /dev/did/rdsk/d3s4 
logicalhost-newyork-1 /dev/did/rdsk/d3s5 /dev/did/rdsk/d3s6 
ip async g - C rawdg

The sample configuration file defines a volume set that replicates d3s3 from cluster-paris to d3s5 on cluster-newyork. The volume set uses the bitmap volumes and logical hostnames that are specified in the file.


Manually Enabling Volume Sets

After you have added the device group to the protection group, avspg, you can manually enable the Sun StorEdge Availability Suite 3.2.1 volume sets.
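
For instance, the device group might be added to the protection group as follows. This is a sketch of the basic command form only; replication-specific properties are typically required as well, so see the geopg(1M) man page for the full syntax.


# additional -p property arguments may be required; see geopg(1M)
phys-paris-1# geopg add-device-group avsdg avspg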


Example 6–4 Manually Enabling the Sun StorEdge Availability Suite 3.2.1 Volume Set

The following example illustrates how to manually enable a Solaris Volume Manager volume set:


phys-paris-1# /usr/opt/SUNWesm/sbin/sndradm -e logicalhost-paris-1 \
/dev/md/avsset/rdsk/d100 /dev/md/avsset/rdsk/d101 \
logicalhost-newyork-1 /dev/md/avsset/rdsk/d100 \
/dev/md/avsset/rdsk/d101 ip async C avsset


Example 6–5 Manually Enabling a VERITAS Volume Manager Volume Set

The following example illustrates how to manually enable a VERITAS Volume Manager volume set:


phys-paris-1# /usr/opt/SUNWesm/sbin/sndradm -e logicalhost-paris-1 \
/dev/vx/rdsk/avsdg/vol-data-paris /dev/vx/rdsk/avsdg/vol-bitmap-paris \
logicalhost-newyork-1 /dev/vx/rdsk/avsdg/vol-data-newyork \
/dev/vx/rdsk/avsdg/vol-bitmap-newyork ip async C avsdg


Example 6–6 Manually Enabling a Raw Device Volume Set

The following example illustrates how to manually enable a raw device volume set:


phys-paris-1# /usr/opt/SUNWesm/sbin/sndradm -e logicalhost-paris-1 \
/dev/did/rdsk/d3s3 /dev/did/rdsk/d3s4 logicalhost-newyork-1 /dev/did/rdsk/d3s5 \
/dev/did/rdsk/d3s6 ip async C rawdg

Information about sndradm command execution is printed in the Sun StorEdge Availability Suite 3.2.1 log file, /var/opt/SUNWesm/ds.log. Refer to this file if you encounter errors while manually enabling the volume set.
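
To confirm that a volume set was enabled, you can list the configured volume sets. A minimal check, assuming the same installation path as in the preceding examples:


phys-paris-1# /usr/opt/SUNWesm/sbin/sndradm -P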

How to Configure the Sun Cluster Device Group That Is Controlled by Sun StorEdge Availability Suite 3.2.1

Sun StorEdge Availability Suite 3.2.1 software supports Solaris Volume Manager, VERITAS Volume Manager, and raw device volumes.

Steps
  1. Ensure that the device group that contains the volume set that you want to replicate is registered with Sun Cluster.

    For more information about these commands, refer to the scsetup(1M) or the scconf(1M) man page.

  2. If you are using a VERITAS Volume Manager device group, synchronize the VERITAS Volume Manager configuration by using one of the Sun Cluster commands, scsetup or scconf, as shown in the sketch after this procedure.

  3. After you have finished configuring the device group, verify that it appears in the output of the scstat -D command.

    For more information about this command, see the scstat(1M) man page.

  4. Repeat steps 1–3 on both cluster-paris and cluster-newyork.
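
The following sketch illustrates steps 2 and 3 for the avsdg device group. The sync option shown here is the scconf device-group synchronization form; verify it against the scconf(1M) man page for your release.


phys-paris-1# scconf -c -D name=avsdg,sync
phys-paris-1# scstat -D

The avsdg device group should appear in the Device Group Servers and Device Group Status sections of the scstat -D output.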

How to Configure a Highly Available Cluster Global File System for Use With Sun StorEdge Availability Suite 3.2.1

Steps
  1. Create the required file system on the volume set that you created in the previous procedure, vol-data-paris.

    The application writes to this file system.

  2. Add an entry to the /etc/vfstab file that contains information such as the mount location.


    Note –

    You must set the mount at boot field in this file to no. This value prevents the file system from being mounted on the secondary cluster at cluster startup. Instead, when the application is brought online on the primary cluster, the Sun Cluster software and the Sun Cluster Geographic Edition framework mount the file system by using the HAStoragePlus resource. The file system must not be mounted on the secondary cluster. Otherwise, the data will not be replicated from the primary cluster to the secondary cluster.


  3. To handle the new file system, add the HAStoragePlus resource to the application resource group, apprg1.

    Adding this resource ensures that the necessary file systems are remounted before the application is started.

    For more information about the HAStoragePlus resource type, refer to the Sun Cluster 3.1 Data Service Planning and Administration Guide.

  4. Repeat steps 1–3 on both cluster-paris and cluster-newyork.


Example 6–7 Configuring a Highly Available Cluster Global File System for Solaris Volume Manager Volumes

This example assumes that the resource group apprg1 already exists.

  1. Create a UNIX file system (UFS).


    phys-paris-1# newfs /dev/md/avsset/rdsk/d100
  2. Add an entry to the /etc/vfstab file, as follows:


    /dev/md/avsset/dsk/d100 /dev/md/avsset/rdsk/d100 /global/sample ufs 2 no logging
  3. Add the HAStoragePlus resource.


    phys-paris-1# scrgadm -a -j rs-hasp -g apprg1 -t SUNW.HAStoragePlus \
    -x FilesystemMountPoints=/global/sample -x AffinityOn=TRUE


Example 6–8 Configuring a Highly Available Cluster Global File System for VERITAS Volume Manager Volumes

This example assumes that the apprg1 resource group already exists.

  1. Create a UNIX file system (UFS).


    phys-paris-1# newfs /dev/vx/rdsk/avsdg/vol-data-paris
  2. Add an entry to the /etc/vfstab file, as follows:


    /dev/vx/dsk/avsdg/vol-data-paris /dev/vx/rdsk/avsdg/vol-data-paris /global/sample ufs 2 no logging
  3. Add the HAStoragePlus resource.


    phys-paris-1# scrgadm -a -j rs-hasp -g apprg1 -t SUNW.HAStoragePlus \
    -x FilesystemMountPoints=/global/sample -x AffinityOn=TRUE


Example 6–9 Configuring a Highly Available Cluster Global File System for Raw Device Volumes

This example assumes that the apprg1 resource group already exists.

  1. Create a UNIX file system (UFS).


    phys-paris-1# newfs /dev/did/rdsk/d3s3
  2. Add an entry to the /etc/vfstab file, as follows:


    /dev/did/dsk/d3s3 /dev/did/rdsk/d3s3 /global/sample ufs 2 no logging
  3. Add the HAStoragePlus resource.


    phys-paris-1# scrgadm -a -j rs-hasp -g apprg1 -t SUNW.HAStoragePlus \
    -x FilesystemMountPoints=/global/sample -x AffinityOn=TRUE
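
After the HAStoragePlus resource has been added on both clusters, you can confirm the state of the application resource group with the scstat command. A minimal check, with grep used only to filter the output:


phys-paris-1# scstat -g | grep apprg1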