This section describes the initial steps you must perform before you can configure Sun StorageTek Availability Suite replication in the Sun Cluster Geographic Edition product.
The example protection group, avspg, in this section has been configured on a partnership that consists of two clusters, cluster-paris and cluster-newyork. An application, which is encapsulated in the apprg1 resource group, is protected by the avspg protection group. The application data is contained in the avsdg device group. The volumes in the avsdg device group can be Solaris Volume Manager volumes, VERITAS Volume Manager volumes, or raw device volumes.
The resource group, apprg1, and the device group, avsdg, are present on both the cluster-paris cluster and the cluster-newyork cluster. The avspg protection group protects the application data by replicating data between the cluster-paris cluster and the cluster-newyork cluster.
Replication of each device group requires a logical host on the local cluster and a logical host on the partner cluster.
You cannot use the slash character (/) in a cluster tag in the Sun Cluster Geographic Edition software. If you are using raw DID devices, you cannot use predefined DID device group names such as dsk/s3.
To use DIDs with raw device groups, see How to Set Up Raw-Disk Device Groups for Sun Cluster Geographic Edition Systems.
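The no-slash rule can be expressed as a simple validity check. The following Python sketch is illustrative only and is not part of the Sun Cluster Geographic Edition software; the function name is invented for the example:

```python
def is_valid_avs_device_group(name: str) -> bool:
    """Return True if a device-group name is usable with Sun Cluster
    Geographic Edition (the name must not contain a slash character)."""
    return "/" not in name

# A predefined DID device group name such as dsk/d3 fails the check;
# a renamed raw-disk device group such as rawdg passes.
```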
This section provides the following information:
How to Set Up Raw-Disk Device Groups for Sun Cluster Geographic Edition Systems
How to Configure a Sun StorageTek Availability Suite Volume in Sun Cluster
Before you can define a Sun StorageTek Availability Suite volume set, you must determine the following:
The data volumes to replicate such as vol-data-paris in avsdg on cluster-paris and vol-data-newyork in avsdg on cluster-newyork.
The bitmap volume that is needed for replication, such as vol-bitmap-paris in avsdg on cluster-paris and vol-bitmap-newyork in avsdg on cluster-newyork.
The logical host to use exclusively for replication of the device group avsdg, such as the logical host logicalhost-paris-1 on cluster-paris and the logical host logicalhost-newyork-1 on cluster-newyork.
The logical host that is used for Sun StorageTek Availability Suite replication must be different from the Sun Cluster Geographic Edition infrastructure logical host. For more information, see Configuring Logical Hostnames in the Sun Cluster Geographic Edition System Administration Guide.
The volset file is located at /var/cluster/geo/avs/devicegroupname-volset.ini on all nodes of the primary and secondary clusters of the protection group. For example, the volset file for the device group avsdg is located at /var/cluster/geo/avs/avsdg-volset.ini.
The fields in the volume set file that are handled by the Sun Cluster Geographic Edition software are described in the following table. The Sun Cluster Geographic Edition software does not handle other parameters of the volume set, including disk queue, size of memory queue, and number of asynchronous threads. You must adjust these parameters manually by using Sun StorageTek Availability Suite commands.
| Field | Meaning | Description |
|---|---|---|
| phost | Primary host | Logical host of the server on which the primary volume resides. |
| pdev | Primary device | Primary volume partition. Specify full path names only. |
| pbitmap | Primary bitmap | Volume partition in which the bitmap of the primary partition is stored. Specify full path names only. |
| shost | Secondary host | Logical host of the server on which the secondary volume resides. |
| sdev | Secondary device | Secondary volume partition. Specify full path names only. |
| sbitmap | Secondary bitmap | Volume partition in which the bitmap of the secondary partition is stored. Specify full path names only. |
| ip | Network transfer protocol | IP address. |
| sync \| async | Operating mode | In sync mode, the I/O operation is confirmed as complete only when the volume on the secondary cluster has been updated. In async mode, the primary host I/O operation is confirmed as complete before the volumes on the secondary cluster are updated. |
| g iogroupname | I/O group name | An I/O group name. The set must be configured in the same I/O group on both the primary and the secondary cluster. This parameter is optional; configure it only if you use an I/O group. |
| C | C tag | The device group name or resource tag of the local data and bitmap volumes, in cases where this information is not implied by the volume name. For example, /dev/md/avsset/rdsk/vol indicates a device group named avsset, and /dev/vx/rdsk/avsdg/vol indicates a device group named avsdg. |
The Sun Cluster Geographic Edition software does not modify the value of the Sun StorageTek Availability Suite parameters. The software controls only the role of the volume set during switchover and takeover operations.
For more information about the format of the volume set files, refer to the Sun StorageTek Availability Suite documentation.
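For illustration, the documented field order can be parsed with a small helper. This Python sketch is hypothetical and not part of the Sun Cluster Geographic Edition software; it assumes one volume set per line, in the field order described in the preceding table, with `g iogroupname` and `C tag` as optional trailing fields:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VolSet:
    phost: str      # primary logical host
    pdev: str       # primary data volume
    pbitmap: str    # primary bitmap volume
    shost: str      # secondary logical host
    sdev: str       # secondary data volume
    sbitmap: str    # secondary bitmap volume
    proto: str      # network transfer protocol ("ip")
    mode: str       # operating mode ("sync" or "async")
    iogroup: Optional[str] = None
    ctag: Optional[str] = None

def parse_volset_line(line: str) -> VolSet:
    """Split one volume-set definition into its documented fields."""
    tokens = line.split()
    vs = VolSet(*tokens[:8])          # eight mandatory positional fields
    rest = tokens[8:]
    while rest:                       # optional "g iogroupname" and "C tag"
        flag = rest.pop(0)
        if flag == "g":
            vs.iogroup = rest.pop(0)
        elif flag == "C":
            vs.ctag = rest.pop(0)
        else:
            raise ValueError("unexpected token: " + flag)
    return vs
```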
Sun Cluster Geographic Edition supports the use of raw-disk device groups in addition to various volume managers. When you initially configure Sun Cluster, device groups are automatically configured for each raw device in the cluster. Use this procedure to reconfigure these automatically created device groups for use with Sun Cluster Geographic Edition.
For the devices that you want to use, unconfigure the predefined device groups.
The following commands remove the predefined device groups for d7 and d8.
```
phys-paris-1# cldevicegroup disable dsk/d7 dsk/d8
phys-paris-1# cldevicegroup offline dsk/d7 dsk/d8
phys-paris-1# cldevicegroup delete dsk/d7 dsk/d8
```
Create the new raw-disk device group, including the desired devices.
Ensure that the new device group name does not contain any slashes. The following commands create a global device group, rawdg, which contains d7 and d8.
```
phys-paris-1# cldevicegroup create -n phys-paris-1,phys-paris-2 \
-t rawdisk -d d7,d8 rawdg
phys-paris-1# /usr/cluster/lib/dcs/dgconv -d d7 rawdg
phys-paris-1# /usr/cluster/lib/dcs/dgconv -d d8 rawdg
```
On the partner cluster, unconfigure the predefined device groups for the devices that you want to use.
You can use the same DIDs on each cluster. In the following command, the newyork cluster is the partner of the paris cluster.
```
phys-newyork-1# cldevicegroup disable dsk/d5 dsk/d6
phys-newyork-1# cldevicegroup offline dsk/d5 dsk/d6
phys-newyork-1# cldevicegroup delete dsk/d5 dsk/d6
```
Create the raw-disk device group on the partner cluster.
Use the same device group name that you used on the primary cluster.
```
phys-newyork-1# cldevicegroup create -n phys-newyork-1,phys-newyork-2 \
-t rawdisk -d d5,d6 rawdg
```
Use the new group name where a device group name is required.
The following command adds rawdg to the AVS protection group rawpg.
```
phys-paris-1# geopg add-device-group -p local_logical_host=paris-1h \
-p remote_logical_host=newyork-1h rawdg rawpg
```
This procedure configures Sun StorageTek Availability Suite volumes in a Sun Cluster environment. These volumes can be Solaris Volume Manager volumes, VERITAS Volume Manager volumes, or raw device volumes.
The volumes are encapsulated at the Sun Cluster device-group level. The Sun StorageTek Availability Suite software interacts with the Solaris Volume Manager disksets, or VERITAS Volume Manager disk group, or raw device through this device group interface. The path to the volumes depends on the volume type, as described in the following table.
| Volume Type | Path |
|---|---|
| Solaris Volume Manager | /dev/md/disksetname/rdsk/d#, where # represents a number |
| VERITAS Volume Manager | /dev/vx/rdsk/diskgroupname/volumename |
| Raw device | /dev/did/rdsk/d#s# |
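The path conventions in the preceding table can be sketched as a small helper. This Python function is purely illustrative and not part of the product; the type keys (svm, vxvm, raw) are invented for the example:

```python
def raw_volume_path(volume_type: str, group: str, volume: str) -> str:
    """Return the raw-device path for a volume, per the path table above."""
    if volume_type == "svm":      # Solaris Volume Manager disk set
        return f"/dev/md/{group}/rdsk/{volume}"
    if volume_type == "vxvm":     # VERITAS Volume Manager disk group
        return f"/dev/vx/rdsk/{group}/{volume}"
    if volume_type == "raw":      # raw DID device; volume is of the form d#s#
        return f"/dev/did/rdsk/{volume}"
    raise ValueError("unknown volume type: " + volume_type)
```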
Create a disk set, avsset, by using Solaris Volume Manager or a disk group, avsdg, by using VERITAS Volume Manager or a raw device on cluster-paris and cluster-newyork.
For example, if you configure the volume by using a raw device, choose a raw device group, dsk/d3, on cluster-paris and cluster-newyork.
Create two volumes in the disk set or disk group on cluster-paris.
The Sun StorageTek Availability Suite software requires a dedicated bitmap volume for each data volume, which tracks modifications to the data volume while the system is in logging mode.
If you use a raw device to configure the volumes, create two partitions, /dev/did/rdsk/d3s3 and /dev/did/rdsk/d3s4, on the /dev/did/rdsk/d3 device on cluster-paris.
Create two volumes in the disk set or disk group on cluster-newyork.
If you use a raw device to configure the volumes, create two partitions, /dev/did/rdsk/d3s5 and /dev/did/rdsk/d3s6, on the /dev/did/rdsk/d3 device on cluster-newyork.
You can enable the Sun StorageTek Availability Suite volume sets in one of two ways:
Automatically, when the device group is added to the protection group, avspg
Use the automatic procedures to prepare the devicegroupname-volset.ini file when you are setting up Sun StorageTek Availability Suite software for the first time. After you have prepared the file, when you add the device group to the protection group, set the Enable_volume_set property of a device group to True. The Sun StorageTek Availability Suite software reads the information in the devicegroupname-volset.ini file to automatically enable the device group.
Manually, after the device group is added to the protection group, avspg
Use the manual procedures to enable the volume sets when you are creating volumes on a system that has been configured.
In this example, the cluster-paris cluster is the primary and avsset is a device group that contains a Solaris Volume Manager disk set.
This example has the following entries in the /var/cluster/geo/avs/avsset-volset.ini file:
```
logicalhost-paris-1 /dev/md/avsset/rdsk/d100 /dev/md/avsset/rdsk/d101 logicalhost-newyork-1 /dev/md/avsset/rdsk/d100 /dev/md/avsset/rdsk/d101 ip async C avsset
```
The avsset-volset.ini file contains the following entries:
logicalhost-paris-1 – Primary host
/dev/md/avsset/rdsk/d100 – Primary data
/dev/md/avsset/rdsk/d101 – Primary bitmap
logicalhost-newyork-1 – Secondary host
/dev/md/avsset/rdsk/d100 – Secondary data
/dev/md/avsset/rdsk/d101 – Secondary bitmap
ip – Protocol
async – Mode
C – C tag
avsset – Disk set
The sample configuration file defines a volume set that replicates d100 from cluster-paris to d100 on cluster-newyork by using the bitmap volumes and logical hostnames that are specified in the file.
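The entries listed above can be assembled back into a volset.ini line in the documented field order. The following Python sketch is hypothetical and for illustration only (the function name is invented):

```python
def volset_line(phost: str, pdev: str, pbitmap: str,
                shost: str, sdev: str, sbitmap: str,
                mode: str, ctag: str) -> str:
    """Assemble one devicegroupname-volset.ini entry: the six host and
    volume fields, the ip protocol, the operating mode, and the C tag."""
    return " ".join([phost, pdev, pbitmap,
                     shost, sdev, sbitmap,
                     "ip", mode, "C", ctag])
```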
In this example, the cluster-paris cluster is the primary and avsdg is a device group that contains a VERITAS Volume Manager disk group.
This example has the following entries in the /var/cluster/geo/avs/avsdg-volset.ini file:
```
logicalhost-paris-1 /dev/vx/rdsk/avsdg/vol-data-paris /dev/vx/rdsk/avsdg/vol-bitmap-paris logicalhost-newyork-1 /dev/vx/rdsk/avsdg/vol-data-newyork /dev/vx/rdsk/avsdg/vol-bitmap-ny ip async C avsdg
```
The avsdg-volset.ini file contains the following entries:
logicalhost-paris-1 – Primary host
/dev/vx/rdsk/avsdg/vol-data-paris – Primary data
/dev/vx/rdsk/avsdg/vol-bitmap-paris – Primary bitmap
logicalhost-newyork-1 – Secondary host
/dev/vx/rdsk/avsdg/vol-data-newyork – Secondary data
/dev/vx/rdsk/avsdg/vol-bitmap-ny – Secondary bitmap
ip – Protocol
async – Mode
C – C flag
avsdg – Device group
The sample configuration file defines a volume set that replicates vol-data-paris from cluster-paris to vol-data-newyork on cluster-newyork. The volume set uses the bitmap volumes and logical hostnames that are specified in the file.
In this example, the cluster-paris cluster is the primary and rawdg is the name of the device group that contains a raw device disk group, /dev/did/rdsk/d3.
This example has the following entries in the /var/cluster/geo/avs/rawdg-volset.ini file:
```
logicalhost-paris-1 /dev/did/rdsk/d3s3 /dev/did/rdsk/d3s4 logicalhost-newyork-1 /dev/did/rdsk/d3s5 /dev/did/rdsk/d3s6 ip async C rawdg
```
The rawdg-volset.ini file contains the following entries:
logicalhost-paris-1 – Primary host
/dev/did/rdsk/d3s3 – Primary data
/dev/did/rdsk/d3s4 – Primary bitmap
logicalhost-newyork-1 – Secondary host
/dev/did/rdsk/d3s5 – Secondary data
/dev/did/rdsk/d3s6 – Secondary bitmap
ip – Protocol
async – Mode
C – C flag
rawdg – Device group
The sample configuration file defines a volume set that replicates d3s3 from cluster-paris to d3s5 on cluster-newyork. The volume set uses the bitmap volumes and logical hostnames that are specified in the file.
After you have added the device group to the protection group, avspg, you can manually enable the Sun StorageTek Availability Suite volume sets. Because the Availability Suite commands are installed in different locations in the supported software versions, the following examples illustrate how to enable volume sets for each version.
This example manually enables a Solaris Volume Manager volume set when using Sun StorageTek Availability Suite 4.0.
```
phys-paris-1# /usr/sbin/sndradm -e logicalhost-paris-1 \
/dev/md/avsset/rdsk/d100 /dev/md/avsset/rdsk/d101 \
logicalhost-newyork-1 /dev/md/avsset/rdsk/d100 \
/dev/md/avsset/rdsk/d101 ip async C avsset
```
This example manually enables a Solaris Volume Manager volume set when using Sun StorEdge Availability Suite 3.2.1.
```
phys-paris-1# /usr/opt/SUNWesm/sbin/sndradm -e logicalhost-paris-1 \
/dev/md/avsset/rdsk/d100 /dev/md/avsset/rdsk/d101 \
logicalhost-newyork-1 /dev/md/avsset/rdsk/d100 \
/dev/md/avsset/rdsk/d101 ip async C avsset
```
This example manually enables a VERITAS Volume Manager volume set when using Sun StorageTek Availability Suite 4.0.
```
phys-paris-1# /usr/sbin/sndradm -e logicalhost-paris-1 \
/dev/vx/rdsk/avsdg/vol-data-paris /dev/vx/rdsk/avsdg/vol-bitmap-paris \
logicalhost-newyork-1 /dev/vx/rdsk/avsdg/vol-data-newyork \
/dev/vx/rdsk/avsdg/vol-bitmap-newyork ip async C avsdg
```
This example manually enables a VERITAS Volume Manager volume set when using Sun StorEdge Availability Suite 3.2.1.
```
phys-paris-1# /usr/opt/SUNWesm/sbin/sndradm -e logicalhost-paris-1 \
/dev/vx/rdsk/avsdg/vol-data-paris /dev/vx/rdsk/avsdg/vol-bitmap-paris \
logicalhost-newyork-1 /dev/vx/rdsk/avsdg/vol-data-newyork \
/dev/vx/rdsk/avsdg/vol-bitmap-newyork ip async C avsdg
```
This example manually enables a raw device volume set when using Sun StorageTek Availability Suite 4.0.
```
phys-paris-1# /usr/sbin/sndradm -e logicalhost-paris-1 \
/dev/did/rdsk/d3s3 /dev/did/rdsk/d3s4 logicalhost-newyork-1 \
/dev/did/rdsk/d3s5 /dev/did/rdsk/d3s6 ip async C dsk/d3
```
This example manually enables a raw device volume set when using Sun StorEdge Availability Suite 3.2.1.
```
phys-paris-1# /usr/opt/SUNWesm/sbin/sndradm -e logicalhost-paris-1 \
/dev/did/rdsk/d3s3 /dev/did/rdsk/d3s4 logicalhost-newyork-1 \
/dev/did/rdsk/d3s5 /dev/did/rdsk/d3s6 ip async C dsk/d3
```
Information about the sndradm command execution is written to the Sun StorageTek Availability Suite log file at the following locations:
When using Sun StorageTek Availability Suite 4.0, /var/adm/ds.log
When using Sun StorEdge Availability Suite 3.2.1, /var/opt/SUNWesm/ds.log
Refer to this file if errors occur while manually enabling the volume set.
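The sndradm enable invocations above all follow the same argument order. The following Python helper, which assembles that argument list, is purely illustrative and not part of the product (the function name is invented):

```python
def sndradm_enable_cmd(sndradm_path: str,
                       phost: str, pdev: str, pbitmap: str,
                       shost: str, sdev: str, sbitmap: str,
                       mode: str, ctag: str) -> list:
    """Assemble the argument list for manually enabling a volume set,
    in the order shown in the examples above. The sndradm binary path
    differs between the 4.0 and 3.2.1 software versions."""
    return [sndradm_path, "-e",
            phost, pdev, pbitmap,
            shost, sdev, sbitmap,
            "ip", mode, "C", ctag]
```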
Sun StorageTek Availability Suite software supports Solaris Volume Manager, VERITAS Volume Manager, and raw device volumes.
Ensure that the device group that contains the volume set that you want to replicate is registered with Sun Cluster software.
```
# cldevicegroup show -v dg1
```
For more information about this command, refer to the cldevicegroup(1CL) man page.
If you are using a VERITAS Volume Manager device group, synchronize the VERITAS Volume Manager configuration by using the Sun Cluster command clsetup or cldevicegroup.
Ensure that the device group is displayed in the output of the cldevicegroup show command.
```
# cldevicegroup show -v dg1
```
For more information about this command, see the cldevicegroup(1CL) man page.
Repeat steps 1–3 on both clusters, cluster-paris and cluster-newyork.
Create the required file system on the volume set that you created in the previous step, vol-data-paris.
The application writes to this file system.
Add an entry to the /etc/vfstab file that contains information such as the mount location.
You must set the mount-at-boot field in this file to no. This value prevents the file system from mounting on the secondary cluster at cluster startup. Instead, the Sun Cluster software and the Sun Cluster Geographic Edition framework handle mounting the file system by using the HAStoragePlus resource when the application is brought online on the primary cluster. You must not mount the file system on the secondary cluster; otherwise, data on the primary cluster cannot be replicated to the secondary cluster.
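The /etc/vfstab entries in the following examples share one layout, with the mount-at-boot field set to no. This Python sketch of that layout is illustrative only and not part of the product:

```python
def vfstab_entry(block_dev: str, raw_dev: str, mount_point: str,
                 fsck_pass: int = 2, mount_opts: str = "logging") -> str:
    """Return a UFS /etc/vfstab line with mount-at-boot set to "no",
    so the file system is not mounted at cluster startup."""
    return f"{block_dev} {raw_dev} {mount_point} ufs {fsck_pass} no {mount_opts}"
```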
To handle the new file system, add the HAStoragePlus resource to the application resource group, apprg1.
Adding this resource ensures that the necessary file systems are remounted before the application is started.
For more information about the HAStoragePlus resource type, refer to the Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
Repeat steps 1–3 on both cluster-paris and cluster-newyork.
This example configures a highly available cluster global file system for Solaris Volume Manager volumes. This example assumes that the resource group apprg1 already exists.
Create a UNIX file system (UFS).

```
# newfs /dev/md/avsset/rdsk/d100
```

Then add the following entry to the /etc/vfstab file:

```
/dev/md/avsset/dsk/d100 /dev/md/avsset/rdsk/d100 /global/sample ufs 2 no logging
```

Add the HAStoragePlus resource.

```
# clresource create -g apprg1 -t SUNWHAStoragePlus \
-p FilesystemMountPoints=/global/sample -p Affinityon=TRUE rs-hasp
```
This example assumes that the apprg1 resource group already exists.
Create a UNIX file system (UFS).

```
# newfs /dev/vx/rdsk/avsdg/vol-data-paris
```

Then add the following entry to the /etc/vfstab file:

```
/dev/vx/dsk/avsdg/vol-data-paris /dev/vx/rdsk/avsdg/vol-data-paris /global/sample ufs 2 no logging
```

Add the HAStoragePlus resource.

```
# clresource create -g apprg1 -t SUNWHAStoragePlus \
-p FilesystemMountPoints=/global/sample -p Affinityon=TRUE rs-hasp
```
This example assumes that the apprg1 resource group already exists.
Create a UNIX file system (UFS).

```
# newfs /dev/did/rdsk/d3s3
```

Then add the following entry to the /etc/vfstab file:

```
/dev/did/dsk/d3s3 /dev/did/rdsk/d3s3 /global/sample ufs 2 no logging
```

Add the HAStoragePlus resource.

```
# clresource create -g apprg1 -t SUNWHAStoragePlus \
-p FilesystemMountPoints=/global/sample -p Affinityon=TRUE rs-hasp
```