This section describes the initial steps you must perform before you can configure Sun StorEdge Availability Suite 3.2.1 replication in the Sun Cluster Geographic Edition product.
This section uses an example of a protection group, avspg, that is configured on a partnership that consists of two clusters, cluster-paris and cluster-newyork. An application, which is encapsulated in the apprg1 resource group, is protected by the avspg protection group. The application data is contained in volumes in the avsdg device group. These volumes can be Solaris Volume Manager volumes, VERITAS Volume Manager volumes, or raw device volumes.
The resource group, apprg1, and the device group, avsdg, are present on both cluster-paris and cluster-newyork. The application data is protected by avspg by data replication between cluster-paris and cluster-newyork.
Replication of each device group requires one logical host on the local cluster and one logical host on the partner cluster.
You cannot use the slash character (/) in a cluster tag with the Sun Cluster Geographic Edition software. If you are using raw DID devices, you cannot use predefined DID device group names such as dsk/d3.
To use DIDs with raw device groups, complete the following procedure.
Before you can define the Sun StorEdge Availability Suite 3.2.1 volume set, you must determine the following:
The data volumes to be replicated. Examples are vol-data-paris in avsdg on cluster-paris and vol-data-newyork in avsdg on cluster-newyork.
The bitmap volume that is needed for replication. Examples are vol-bitmap-paris in avsdg on cluster-paris and vol-bitmap-newyork in avsdg on cluster-newyork.
The logical host to be used exclusively for replication of the device group avsdg. Examples are the logical host logicalhost-paris-1 on cluster-paris and the logical host logicalhost-newyork-1 on cluster-newyork.
The logical host that is used for Sun StorEdge Availability Suite 3.2.1 replication cannot be the same as the Sun Cluster Geographic Edition infrastructure logical host. For more information about configuring logical hostnames, see Configuring Logical Hostnames.
The volset file is located at /var/cluster/geo/avs/device-group-name-volset.ini on all the nodes of the protection group's primary cluster and secondary cluster. For example, the volset file for the device group avsdg would be located at /var/cluster/geo/avs/avsdg-volset.ini.
The fields in the volume set file that are handled by the Sun Cluster Geographic Edition software are described in the following table. The Sun Cluster Geographic Edition software does not handle other parameters of the volume set, including disk queue, size of memory queue, and number of asynchronous threads. You must adjust these parameters manually by using Sun StorEdge Availability Suite 3.2.1 commands.
| Field | Meaning | Description |
|---|---|---|
| phost | Primary host | The logical host of the server on which the primary volume resides. |
| pdev | Primary device | Primary volume partition. Specify full path names only. |
| pbitmap | Primary bitmap | Volume partition in which the bitmap of the primary partition is stored. Specify full path names only. |
| shost | Secondary host | The logical host of the server on which the secondary volume resides. |
| sdev | Secondary device | Secondary volume partition. Specify full path names only. |
| sbitmap | Secondary bitmap | Volume partition in which the bitmap of the secondary partition is stored. Specify full path names only. |
| ip | Network transfer protocol | Specify ip. |
| sync \| async | Operating mode | sync is the mode in which the I/O operation is confirmed as complete only when the volume on the secondary cluster has been updated. async is the mode in which the primary host I/O operation is confirmed as complete before the volumes on the secondary cluster are updated. |
| g io-groupname | I/O group name | An I/O group name can be specified with the g character. The set must be configured in the same I/O group on both the primary and the secondary cluster. |
| C | C tag | Specifies the device group name or resource tag of the local data and bitmap volumes in cases where this information is not implied by the name of the volume. For example, /dev/md/avsset/rdsk/vol indicates a device group named avsset. As another example, /dev/vx/rdsk/avsdg/vol indicates a device group named avsdg. |
Sun Cluster Geographic Edition software does not modify the value of the Sun StorEdge Availability Suite 3.2.1 parameters. The software only controls the role of the volume set during switchover and takeover operations.
For more information about the format of the volume set files, refer to the Sun StorEdge Availability Suite 3.2.1 documentation.
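To make the format concrete, the following sketch builds a one-line definition for the avsset example in a scratch directory (standing in for /var/cluster/geo/avs) and checks that each non-empty line carries the twelve whitespace-separated fields described in the table above:

```shell
#!/bin/sh
# Sketch: build a volset.ini line in a scratch directory and
# sanity-check its field count before copying it to every node.
DIR=$(mktemp -d)
INI="$DIR/avsset-volset.ini"

# One volume set per line: phost pdev pbitmap shost sdev sbitmap,
# then protocol, mode, g, io-group, C, tag -- 12 fields in total.
cat > "$INI" <<'EOF'
logicalhost-paris-1 /dev/md/avsset/rdsk/d100 /dev/md/avsset/rdsk/d101 logicalhost-newyork-1 /dev/md/avsset/rdsk/d100 /dev/md/avsset/rdsk/d101 ip async g - C avsset
EOF

# Every non-empty line must have exactly 12 fields.
awk 'NF > 0 && NF != 12 { bad = 1 } END { exit bad }' "$INI" \
  && echo "volset file OK"
```

A file that passes this check must still be copied to every node of both the primary and the secondary cluster.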
1. Remove all the DIDs that you want to use from their predefined DID device groups.
2. Add the DIDs to a raw device group with a name that does not contain any slashes.
3. Create the same group name on each cluster of the partnership. You can use the same DIDs on each cluster.
4. Use this new name wherever a device group name is required.
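The procedure might be scripted along the following lines. This is a dry-run sketch: it checks the proposed group name for slashes and prints, rather than executes, the Sun Cluster commands. The scconf argument syntax shown here is an assumption; confirm it against the scconf(1M) man page for your release. The DID instance d3 and the name rawdg are the example values from this section.

```shell
#!/bin/sh
# Dry-run sketch: validate the proposed raw device group name, then
# print (rather than run) the Sun Cluster commands to review.
# The scconf syntax is assumed; confirm it against scconf(1M).
DID=d3          # example DID instance
DG=rawdg        # proposed group name; must not contain a slash

case "$DG" in
  */*) echo "error: group name $DG contains a slash" >&2; exit 1 ;;
esac

# Step 1: remove the DID from its predefined device group.
REMOVE_CMD="scconf -r -D name=dsk/$DID"
# Step 2: re-add it to a rawdisk-type group with the slash-free name.
ADD_CMD="scconf -a -D type=rawdisk,name=$DG,globaldev=$DID"

echo "$REMOVE_CMD"
echo "$ADD_CMD"
```

Run the printed commands on each cluster of the partnership so that both clusters use the same group name.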
This procedure provides an example of how to configure Sun StorEdge Availability Suite 3.2.1 volumes in Sun Cluster. These volumes can be Solaris Volume Manager volumes, VERITAS Volume Manager volumes, or raw device volumes.
The volumes are encapsulated at the Sun Cluster device-group level. The Sun StorEdge Availability Suite 3.2.1 software interacts with the Solaris Volume Manager disksets, VERITAS Volume Manager disk groups, or raw devices through this device group interface. The path to the volumes depends on the volume type, as described in the following table.
| Volume Type | Path |
|---|---|
| Solaris Volume Manager | /dev/md/diskset-name/rdsk/d#, where # represents a number |
| VERITAS Volume Manager | /dev/vx/rdsk/disk-group-name/volume-name |
| Raw device | /dev/did/rdsk/d#s# |
Create a diskset, avsset, by using Solaris Volume Manager or a disk group, avsdg, by using VERITAS Volume Manager or a raw device on cluster-paris and cluster-newyork.
For example, if you configure the volume by using a raw device, choose a raw device group with a slash-free name, such as rawdg containing the DID device /dev/did/rdsk/d3, on cluster-paris and cluster-newyork.
Create two volumes in the diskset or disk group on cluster-paris.
The Sun StorEdge Availability Suite software requires a dedicated bitmap volume for each data volume to track modifications to the data volume when the system is in logging mode.
If you use a raw device to configure the volumes, create two partitions, /dev/did/rdsk/d3s3 and /dev/did/rdsk/d3s4, on the /dev/did/rdsk/d3 device on cluster-paris.
Create two volumes in the diskset or disk group on cluster-newyork.
If you use a raw device to configure the volumes, create two partitions, /dev/did/rdsk/d3s5 and /dev/did/rdsk/d3s6, on the /dev/did/rdsk/d3 device on cluster-newyork.
The Sun StorEdge Availability Suite 3.2.1 volume sets can be enabled in two ways:
Automatically, when the device group is added to the protection group, avspg
Use the automatic procedures to prepare the device-group-name-volset.ini file when you are setting up Sun StorEdge Availability Suite 3.2.1 software for the first time. After you have prepared the file, set the device group's Enable_volume_set property to True when you add the device group to the protection group. The Sun Cluster Geographic Edition software then reads the information in the device-group-name-volset.ini file and runs the Sun StorEdge Availability Suite commands to enable the volume set automatically.
Manually, after the device group is added to the protection group, avspg
Use the manual procedures to enable the volume sets when you are creating volumes on a system that has already been configured.
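For the automatic path, the step that sets the property might look like the following dry-run sketch, which only prints the command. The geopg subcommand and option syntax shown here are assumptions; verify them against the geopg(1M) man page for your release.

```shell
#!/bin/sh
# Dry-run sketch: print the command for the automatic path. The
# geopg syntax is assumed from the geopg(1M) man page; verify it
# for your release before running the command for real.
DG=avsdg
PG=avspg
INI="/var/cluster/geo/avs/${DG}-volset.ini"
CMD="geopg add-device-group -p Enable_volume_set=True $DG $PG"

# The ini file must already exist on every node of both clusters.
[ -f "$INI" ] || echo "note: $INI not found on this node" >&2
echo "$CMD"
```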
In this example, the cluster-paris cluster is the primary and avsset is a device group that contains a Solaris Volume Manager diskset.
This example has the following entries in /var/cluster/geo/avs/avsset-volset.ini:
```
logicalhost-paris-1 /dev/md/avsset/rdsk/d100 /dev/md/avsset/rdsk/d101 logicalhost-newyork-1 /dev/md/avsset/rdsk/d100 /dev/md/avsset/rdsk/d101 ip async g - C avsset
```
The avsset-volset.ini file contains the following entries:
logicalhost-paris-1 is the primary host.
/dev/md/avsset/rdsk/d100 is the primary data.
/dev/md/avsset/rdsk/d101 is the primary bitmap.
logicalhost-newyork-1 is the secondary host.
/dev/md/avsset/rdsk/d100 is the secondary data.
/dev/md/avsset/rdsk/d101 is the secondary bitmap.
ip is the protocol.
async is the mode.
g is the flag that introduces the I/O group field.
- indicates that no I/O group is configured.
C is the C tag.
avsset is the diskset.
The sample configuration file defines a volume set that replicates d100 from cluster-paris to d100 on cluster-newyork by using the bitmap volumes and logical hostnames that are specified in the file.
In this example, the cluster-paris cluster is the primary and avsdg is a device group that contains a VERITAS Volume Manager disk group.
This example has the following entries in the /var/cluster/geo/avs/avsdg-volset.ini file:
```
logicalhost-paris-1 /dev/vx/rdsk/avsdg/vol-data-paris \
/dev/vx/rdsk/avsdg/vol-bitmap-paris logicalhost-newyork-1 /dev/vx/rdsk/avsdg/vol-data-newyork \
/dev/vx/rdsk/avsdg/vol-bitmap-ny ip async g - C avsdg
```
The avsdg-volset.ini file contains the following entries:
logicalhost-paris-1 is the primary host.
/dev/vx/rdsk/avsdg/vol-data-paris is the primary data.
/dev/vx/rdsk/avsdg/vol-bitmap-paris is the primary bitmap.
logicalhost-newyork-1 is the secondary host.
/dev/vx/rdsk/avsdg/vol-data-newyork is the secondary data.
/dev/vx/rdsk/avsdg/vol-bitmap-ny is the secondary bitmap.
ip is the protocol.
async is the mode.
g is the flag that introduces the I/O group field.
- indicates that no I/O group is configured.
C is the C tag.
avsdg is the device group.
The sample configuration file defines a volume set that replicates vol-data-paris from cluster-paris to vol-data-newyork on cluster-newyork. The volume set uses the bitmap volumes and logical hostnames that are specified in the file.
In this example, the cluster-paris cluster is the primary and rawdg is the name of the device group that contains a raw device disk group, /dev/did/rdsk/d3.
This example has the following entries in the /var/cluster/geo/avs/rawdg-volset.ini file:
```
logicalhost-paris-1 /dev/did/rdsk/d3s3 /dev/did/rdsk/d3s4 logicalhost-newyork-1 /dev/did/rdsk/d3s5 /dev/did/rdsk/d3s6 ip async g - C rawdg
```
The rawdg-volset.ini file contains the following entries:
logicalhost-paris-1 is the primary host.
/dev/did/rdsk/d3s3 is the primary data.
/dev/did/rdsk/d3s4 is the primary bitmap.
logicalhost-newyork-1 is the secondary host.
/dev/did/rdsk/d3s5 is the secondary data.
/dev/did/rdsk/d3s6 is the secondary bitmap.
ip is the protocol.
async is the mode.
g is the flag that introduces the I/O group field.
- indicates that no I/O group is configured.
C is the C tag.
rawdg is the device group.
The sample configuration file defines a volume set that replicates d3s3 from cluster-paris to d3s5 on cluster-newyork. The volume set uses the bitmap volumes and logical hostnames that are specified in the file.
After you have added the device group to the protection group, avspg, you can manually enable the Sun StorEdge Availability Suite 3.2.1 volume sets.
The following example illustrates how to manually enable a Solaris Volume Manager volume set:
```
phys-paris-1# /usr/opt/SUNWesm/sbin/sndradm -e logicalhost-paris-1 \
/dev/md/avsset/rdsk/d100 /dev/md/avsset/rdsk/d101 \
logicalhost-newyork-1 /dev/md/avsset/rdsk/d100 \
/dev/md/avsset/rdsk/d101 ip async C avsset
```
The following example illustrates how to manually enable a VERITAS Volume Manager volume set:
```
phys-paris-1# /usr/opt/SUNWesm/sbin/sndradm -e logicalhost-paris-1 \
/dev/vx/rdsk/avsdg/vol-data-paris /dev/vx/rdsk/avsdg/vol-bitmap-paris \
logicalhost-newyork-1 /dev/vx/rdsk/avsdg/vol-data-newyork \
/dev/vx/rdsk/avsdg/vol-bitmap-newyork ip async C avsdg
```
The following example illustrates how to manually enable a raw device volume set:
```
phys-paris-1# /usr/opt/SUNWesm/sbin/sndradm -e logicalhost-paris-1 \
/dev/did/rdsk/d3s3 /dev/did/rdsk/d3s4 logicalhost-newyork-1 \
/dev/did/rdsk/d3s5 /dev/did/rdsk/d3s6 ip async C rawdg
```
Information about sndradm command execution is printed in the Sun StorEdge Availability Suite 3.2.1 log file, /var/opt/SUNWesm/ds.log. Refer to this file if you encounter errors while manually enabling the volume set.
Sun StorEdge Availability Suite 3.2.1 software supports Solaris Volume Manager, VERITAS Volume Manager, and raw device volumes.
Ensure that the device group that contains the volume set that you want to replicate is registered with Sun Cluster.
For more information about these commands, refer to the scsetup(1M) or the scconf(1M) man page.
If you are using a VERITAS Volume Manager device group, synchronize the VERITAS Volume Manager configuration by using one of the Sun Cluster commands, scsetup or scconf.
After you have finished configuring the device group, it should be displayed in the output of the scstat -D command.
For more information about this command, see the scstat(1M) man page.
Repeat steps 1–3 on both cluster-paris and cluster-newyork.
Create the required file system on the volume that you created in the previous step, vol-data-paris.
The application writes to this file system.
Add an entry to the /etc/vfstab file that contains information such as the mount location.
You must set the mount at boot field in this file to no. This value prevents the file system from being mounted on the secondary cluster at cluster startup. Instead, the Sun Cluster software and the Sun Cluster Geographic Edition framework mount the file system by using the HAStoragePlus resource when the application is brought online on the primary cluster. The file system must not be mounted on the secondary cluster; otherwise, the data will not be replicated from the primary cluster to the secondary cluster.
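This requirement can be checked mechanically. The following sketch appends the example entry to a scratch copy of /etc/vfstab and verifies that the mount-at-boot column (field 6) is no; the mount point /global/sample and the d100 devices are the example values from this section.

```shell
#!/bin/sh
# Sketch: append the example entry to a scratch copy of /etc/vfstab
# and verify that the mount-at-boot field (column 6) is "no".
VFSTAB=$(mktemp)

cat >> "$VFSTAB" <<'EOF'
/dev/md/avsset/dsk/d100 /dev/md/avsset/rdsk/d100 /global/sample ufs 2 no logging
EOF

# Fail if any entry for /global/sample mounts at boot.
awk '$3 == "/global/sample" && $6 != "no" { bad = 1 }
     END { exit bad }' "$VFSTAB" \
  && echo "mount-at-boot is no"
```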
To handle the new file system, add the HAStoragePlus resource to the application resource group, apprg1.
Adding this resource ensures that the necessary file systems are remounted before the application is started.
For more information about the HAStoragePlus resource type, refer to the Sun Cluster 3.1 Data Service Planning and Administration Guide.
Repeat steps 1–3 on both cluster-paris and cluster-newyork.
This example assumes that the resource group apprg1 already exists.
Create a UNIX file system (UFS).
```
phys-paris-1# newfs /dev/md/avsset/rdsk/d100
```
An entry is created in the /etc/vfstab file as follows.
```
/dev/md/avsset/dsk/d100 /dev/md/avsset/rdsk/d100 /global/sample ufs 2 no logging
```
Add the HAStoragePlus resource.
```
phys-paris-1# scrgadm -a -j rs-hasp -g apprg1 -t SUNW.HAStoragePlus \
-x FilesystemMountPoints=/global/sample -x AffinityOn=TRUE
```
This example assumes that the apprg1 resource group already exists.
Create a UNIX file system (UFS).
```
phys-paris-1# newfs /dev/vx/rdsk/avsdg/vol-data-paris
```
An entry is created in the /etc/vfstab file as follows:
```
/dev/vx/dsk/avsdg/vol-data-paris /dev/vx/rdsk/avsdg/vol-data-paris /global/sample ufs 2 no logging
```
Add the HAStoragePlus resource.
```
phys-paris-1# scrgadm -a -j rs-hasp -g apprg1 -t SUNW.HAStoragePlus \
-x FilesystemMountPoints=/global/sample -x AffinityOn=TRUE
```
This example assumes that the apprg1 resource group already exists.
Create a UNIX file system (UFS).
```
phys-paris-1# newfs /dev/did/rdsk/d3s3
```
An entry is created in the /etc/vfstab file as follows:
```
/dev/did/dsk/d3s3 /dev/did/rdsk/d3s3 /global/sample ufs 2 no logging
```
Add the HAStoragePlus resource.
```
phys-paris-1# scrgadm -a -j rs-hasp -g apprg1 -t SUNW.HAStoragePlus \
-x FilesystemMountPoints=/global/sample -x AffinityOn=TRUE
```