This section describes the initial steps you must perform before you can configure Sun StorEdge Availability Suite 3.2.1 replication in the Sun Cluster Geographic Edition product.
The example protection group, avspg, in this section has been configured on a partnership that consists of two clusters, cluster-paris and cluster-newyork. An application, which is encapsulated in the apprg1 resource group, is protected by the avspg protection group. The application data is contained in the avsdg device group. The volumes in the avsdg device group can be Solaris Volume Manager volumes, VERITAS Volume Manager volumes, or raw device volumes.
The resource group, apprg1, and the device group, avsdg, are present on both the cluster-paris cluster and the cluster-newyork cluster. The avspg protection group protects the application data by replicating data between the cluster-paris cluster and the cluster-newyork cluster.
Replication of each device group requires a logical host on the local cluster and a logical host on the partner cluster.
You cannot use the slash character (/) in a cluster tag in the Sun Cluster Geographic Edition software. If you are using raw DID devices, you cannot use predefined DID device group names, such as dsk/d3, because these names contain a slash.
To use DIDs with raw device groups, see How to Use DIDs With Raw Device Groups.
Before you can define the Sun StorEdge Availability Suite 3.2.1 volume set, you must determine the following:
The data volumes to replicate such as vol-data-paris in avsdg on cluster-paris and vol-data-newyork in avsdg on cluster-newyork.
The bitmap volume that is needed for replication, such as vol-bitmap-paris in avsdg on cluster-paris and vol-bitmap-newyork in avsdg on cluster-newyork.
The logical host to use exclusively for replication of the device group avsdg, such as the logical host logicalhost-paris-1 on cluster-paris and the logical host logicalhost-newyork-1 on cluster-newyork.
The logical host that is used for Sun StorEdge Availability Suite 3.2.1 replication must be different from the Sun Cluster Geographic Edition infrastructure logical host. For more information about configuring logical hostnames, see Configuring Logical Hostnames in the Sun Cluster Geographic Edition System Administration Guide.
The volset file is located at /var/cluster/geo/avs/devicegroupname-volset.ini on all nodes of the primary and secondary clusters of the protection group. For example, the volset file for the device group avsdg is located at /var/cluster/geo/avs/avsdg-volset.ini.
The fields in the volume set file that are handled by the Sun Cluster Geographic Edition software are described in the following table. The Sun Cluster Geographic Edition software does not handle other parameters of the volume set, including disk queue, size of memory queue, and number of asynchronous threads. You must adjust these parameters manually by using Sun StorEdge Availability Suite 3.2.1 commands.
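For example, assuming a volume set that is already enabled and identified by its secondary host and device, the following Sun StorEdge Availability Suite commands adjust two of these parameters. This is a sketch only: the thread count and the disk queue volume d102 are assumptions, so verify the options against the sndradm(1M) man page for your release before using them.

```shell
# Set the number of asynchronous service threads for the set
# (the value 4 and the set identifier are illustrative)
phys-paris-1# /usr/opt/SUNWesm/sbin/sndradm -A 4 \
logicalhost-newyork-1:/dev/md/avsset/rdsk/d100

# Attach a disk-based queue volume to the set
# (d102 is an assumed spare volume in the avsset diskset)
phys-paris-1# /usr/opt/SUNWesm/sbin/sndradm -q a /dev/md/avsset/rdsk/d102 \
logicalhost-newyork-1:/dev/md/avsset/rdsk/d100
```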
| Field | Meaning | Description |
|---|---|---|
| phost | Primary host | The logical host of the server on which the primary volume resides. |
| pdev | Primary device | Primary volume partition. Specify full path names only. |
| pbitmap | Primary bitmap | Volume partition in which the bitmap of the primary partition is stored. Specify full path names only. |
| shost | Secondary host | The logical host of the server on which the secondary volume resides. |
| sdev | Secondary device | Secondary volume partition. Specify full path names only. |
| sbitmap | Secondary bitmap | Volume partition in which the bitmap of the secondary partition is stored. Specify full path names only. |
| ip | Network transfer protocol | IP address. |
| sync \| async | Operating mode | sync is the mode in which the I/O operation is confirmed as complete only when the volume on the secondary cluster has been updated. async is the mode in which the primary host I/O operation is confirmed as complete before the volumes on the secondary cluster are updated. |
| g iogroupname | I/O group name | An I/O group name. The set must be configured in the same I/O group on both the primary and the secondary cluster. This parameter is optional and need only be configured if you have an I/O group. |
| C | C tag | The device group name or resource tag of the local data and bitmap volumes in cases where this information is not implied by the name of the volume. For example, /dev/md/avsset/rdsk/vol indicates a device group named avsset. As another example, /dev/vx/rdsk/avsdg/vol indicates a device group named avsdg. |
The Sun Cluster Geographic Edition software does not modify the value of the Sun StorEdge Availability Suite 3.2.1 parameters. The software controls only the role of the volume set during switchover and takeover operations.
For more information about the format of the volume set files, refer to the Sun StorEdge Availability Suite 3.2.1 documentation.
1. Remove the DIDs that you want to use from the predefined DID device group.
2. Add the DIDs to a raw device group. Ensure that the new group name does not contain any slashes.
3. Create the same group name on each cluster of the partnership. You can use the same DIDs on each cluster.
4. Use the new group name wherever a device group name is required.
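Applied to the example DID device /dev/did/rdsk/d3, the steps above might look like the following. This is a sketch: the group name rawdg and the node names phys-paris-1 and phys-paris-2 are assumptions, so check the exact rawdisk options in the scconf(1M) man page before running the commands.

```shell
# Remove the predefined DID device group for d3
phys-paris-1# scconf -r -D name=dsk/d3

# Re-register the device in a rawdisk device group named rawdg
# (the nodelist is an assumption for the cluster-paris nodes)
phys-paris-1# scconf -a -D type=rawdisk,name=rawdg, \
nodelist=phys-paris-1:phys-paris-2,preferenced=false, \
failback=disabled,globaldev=d3

# Repeat the same commands, with the appropriate nodelist, on cluster-newyork
```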
This procedure configures Sun StorEdge Availability Suite 3.2.1 volumes in a Sun Cluster environment. These volumes can be Solaris Volume Manager volumes, VERITAS Volume Manager volumes, or raw device volumes.
The volumes are encapsulated at the Sun Cluster device-group level. The Sun StorEdge Availability Suite 3.2.1 software interacts with the Solaris Volume Manager disksets, or VERITAS Volume Manager disk group, or raw device through this device group interface. The path to the volumes depends on the volume type, as described in the following table.
| Volume Type | Path |
|---|---|
| Solaris Volume Manager | /dev/md/disksetname/rdsk/d#, where # represents a number |
| VERITAS Volume Manager | /dev/vx/rdsk/diskgroupname/volumename |
| Raw device | /dev/did/rdsk/d#s# |
On cluster-paris and cluster-newyork, create a Solaris Volume Manager diskset, avsset, a VERITAS Volume Manager disk group, avsdg, or a raw device group.
For example, if you configure the volume by using a raw device, choose a raw device group, dsk/d3, on cluster-paris and cluster-newyork.
Create two volumes in the diskset or disk group on cluster-paris.
The Sun StorEdge Availability Suite software requires a dedicated bitmap volume for each data volume to track modifications to the data volume while the system is in logging mode.
If you use a raw device to configure the volumes, create two partitions, /dev/did/rdsk/d3s3 and /dev/did/rdsk/d3s4, on the /dev/did/rdsk/d3 device on cluster-paris.
Create two volumes in the diskset or disk group on cluster-newyork.
If you use a raw device to configure the volumes, create two partitions, /dev/did/rdsk/d3s5 and /dev/did/rdsk/d3s6, on the /dev/did/rdsk/d3 device on cluster-newyork.
You can enable the Sun StorEdge Availability Suite 3.2.1 volume sets in one of two ways:
Automatically, when the device group is added to the protection group, avspg
Use the automatic procedure to prepare the devicegroupname-volset.ini file when you are setting up the Sun StorEdge Availability Suite 3.2.1 software for the first time. After you have prepared the file, set the Enable_volume_set property of the device group to True when you add the device group to the protection group. The Sun Cluster Geographic Edition software then reads the information in the devicegroupname-volset.ini file and enables the volume set automatically.
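For example, with the avsdg-volset.ini file in place on all nodes, the device group might be added to the protection group with the property set as follows. This is a sketch: confirm the geopg syntax in the Sun Cluster Geographic Edition documentation for your release.

```shell
# Add the device group to the protection group and request automatic
# enabling of the volume sets listed in avsdg-volset.ini
phys-paris-1# geopg add-device-group -p Enable_volume_set=True avsdg avspg
```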
Manually, after the device group is added to the protection group, avspg
Use the manual procedures to enable the volume sets when you are creating volumes on a system that has been configured.
In this example, the cluster-paris cluster is the primary and avsset is a device group that contains a Solaris Volume Manager diskset.
This example has the following entries in the /var/cluster/geo/avs/avsset-volset.ini file:
logicalhost-paris-1 /dev/md/avsset/rdsk/d100 /dev/md/avsset/rdsk/d101 logicalhost-newyork-1 /dev/md/avsset/rdsk/d100 /dev/md/avsset/rdsk/d101 ip async C avsset
The avsset-volset.ini file contains the following entries:
logicalhost-paris-1 – Primary host
/dev/md/avsset/rdsk/d100 – Primary data
/dev/md/avsset/rdsk/d101 – Primary bitmap
logicalhost-newyork-1 – Secondary host
/dev/md/avsset/rdsk/d100 – Secondary data
/dev/md/avsset/rdsk/d101 – Secondary bitmap
ip – Protocol
async – Mode
C – C tag
avsset – Diskset
The sample configuration file defines a volume set that replicates d100 from cluster-paris to d100 on cluster-newyork by using the bitmap volumes and logical hostnames that are specified in the file.
In this example, the cluster-paris cluster is the primary and avsdg is a device group that contains a VERITAS Volume Manager disk group.
This example has the following entries in the /var/cluster/geo/avs/avsdg-volset.ini file:
logicalhost-paris-1 /dev/vx/rdsk/avsdg/vol-data-paris \
/dev/vx/rdsk/avsdg/vol-bitmap-paris logicalhost-newyork-1 /dev/vx/rdsk/avsdg/vol-data-newyork \
/dev/vx/rdsk/avsdg/vol-bitmap-ny ip async C avsdg
The avsdg-volset.ini file contains the following entries:
logicalhost-paris-1 – Primary host
/dev/vx/rdsk/avsdg/vol-data-paris – Primary data
/dev/vx/rdsk/avsdg/vol-bitmap-paris – Primary bitmap
logicalhost-newyork-1 – Secondary host
/dev/vx/rdsk/avsdg/vol-data-newyork – Secondary data
/dev/vx/rdsk/avsdg/vol-bitmap-ny – Secondary bitmap
ip – Protocol
async – Mode
C – C tag
avsdg – Device group
The sample configuration file defines a volume set that replicates vol-data-paris from cluster-paris to vol-data-newyork on cluster-newyork. The volume set uses the bitmap volumes and logical hostnames that are specified in the file.
In this example, the cluster-paris cluster is the primary and rawdg is the name of the device group that contains a raw device disk group, /dev/did/rdsk/d3.
This example has the following entries in the /var/cluster/geo/avs/rawdg-volset.ini file:
logicalhost-paris-1 /dev/did/rdsk/d3s3 /dev/did/rdsk/d3s4 logicalhost-newyork-1 /dev/did/rdsk/d3s5 /dev/did/rdsk/d3s6 ip async C rawdg
The rawdg-volset.ini file contains the following entries:
logicalhost-paris-1 – Primary host
/dev/did/rdsk/d3s3 – Primary data
/dev/did/rdsk/d3s4 – Primary bitmap
logicalhost-newyork-1 – Secondary host
/dev/did/rdsk/d3s5 – Secondary data
/dev/did/rdsk/d3s6 – Secondary bitmap
ip – Protocol
async – Mode
C – C tag
rawdg – Device group
The sample configuration file defines a volume set that replicates d3s3 from cluster-paris to d3s5 on cluster-newyork. The volume set uses the bitmap volumes and logical hostnames that are specified in the file.
After you have added the device group to the protection group, avspg, you can manually enable the Sun StorEdge Availability Suite 3.2.1 volume sets.
This example manually enables a Solaris Volume Manager volume set.
phys-paris-1# /usr/opt/SUNWesm/sbin/sndradm -e logicalhost-paris-1 \
/dev/md/avsset/rdsk/d100 /dev/md/avsset/rdsk/d101 \
logicalhost-newyork-1 /dev/md/avsset/rdsk/d100 \
/dev/md/avsset/rdsk/d101 ip async C avsset
This example manually enables a VERITAS Volume Manager volume set.
phys-paris-1# /usr/opt/SUNWesm/sbin/sndradm -e logicalhost-paris-1 \
/dev/vx/rdsk/avsdg/vol-data-paris /dev/vx/rdsk/avsdg/vol-bitmap-paris \
logicalhost-newyork-1 /dev/vx/rdsk/avsdg/vol-data-newyork \
/dev/vx/rdsk/avsdg/vol-bitmap-newyork ip async C avsdg
This example manually enables a raw device volume set.
phys-paris-1# /usr/opt/SUNWesm/sbin/sndradm -e logicalhost-paris-1 \
/dev/did/rdsk/d3s3 /dev/did/rdsk/d3s4 logicalhost-newyork-1 \
/dev/did/rdsk/d3s5 /dev/did/rdsk/d3s6 ip async C rawdg
Information about the sndradm command execution is written to the Sun StorEdge Availability Suite 3.2.1 log file, /var/opt/SUNWesm/ds.log. Refer to this file if errors occur while manually enabling the volume set.
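To confirm that a set was enabled, you can print the current state of all enabled volume sets and inspect the log, for example:

```shell
# Print the state of all enabled Sun StorEdge Availability Suite volume sets
phys-paris-1# /usr/opt/SUNWesm/sbin/sndradm -P

# Review recent log entries if the enable operation reported an error
phys-paris-1# tail -20 /var/opt/SUNWesm/ds.log
```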
Sun StorEdge Availability Suite 3.2.1 software supports Solaris Volume Manager, VERITAS Volume Manager, and raw device volumes.
Ensure that the device group that contains the volume set that you want to replicate is registered with Sun Cluster software.
If you are using a VERITAS Volume Manager device group, synchronize the VERITAS Volume Manager configuration by using the Sun Cluster command scsetup or scconf. For more information about these commands, refer to the scsetup(1M) or the scconf(1M) man page.
Ensure that the device group is displayed in the output of the scstat -D command.
For more information about this command, see the scstat(1M) man page.
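For example, to check that the avsdg device group is registered and online:

```shell
# The device group should appear in the device group status section
phys-paris-1# scstat -D | grep avsdg
```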
Repeat steps 1–3 on both clusters, cluster-paris and cluster-newyork.
Create the required file system on the volume that you created in the previous step, vol-data-paris.
The application writes to this file system.
Add an entry to the /etc/vfstab file that contains information such as the mount location.
You must set the mount at boot field in this file to no. This value prevents the file system from being mounted on the secondary cluster at cluster startup. Instead, the Sun Cluster software and the Sun Cluster Geographic Edition framework mount the file system by using the HAStoragePlus resource when the application is brought online on the primary cluster. Do not mount the file system on the secondary cluster, because doing so would interfere with the replication of data from the primary cluster to the secondary cluster.
To handle the new file system, add the HAStoragePlus resource to the application resource group, apprg1.
Adding this resource ensures that the necessary file systems are remounted before the application is started.
For more information about the HAStoragePlus resource type, refer to the Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
Repeat steps 1–3 on both cluster-paris and cluster-newyork.
This example configures a highly available cluster global file system for Solaris Volume Manager volumes. This example assumes that the resource group apprg1 already exists.
Create a UNIX file system (UFS).
# newfs /dev/md/avsset/rdsk/d100
Then add the following entry to the /etc/vfstab file:
/dev/md/avsset/dsk/d100 /dev/md/avsset/rdsk/d100 /global/sample ufs 2 no logging
Add the HAStoragePlus resource.
# scrgadm -a -j rs-hasp -g apprg1 -t SUNW.HAStoragePlus \
-x FilesystemMountPoints=/global/sample -x AffinityOn=TRUE
This example assumes that the apprg1 resource group already exists.
Create a UNIX file system (UFS).
# newfs /dev/vx/rdsk/avsdg/vol-data-paris
Then add the following entry to the /etc/vfstab file:
/dev/vx/dsk/avsdg/vol-data-paris /dev/vx/rdsk/avsdg/vol-data-paris /global/sample ufs 2 no logging
Add the HAStoragePlus resource.
# scrgadm -a -j rs-hasp -g apprg1 -t SUNW.HAStoragePlus \
-x FilesystemMountPoints=/global/sample -x AffinityOn=TRUE
This example assumes that the apprg1 resource group already exists.
Create a UNIX file system (UFS).
# newfs /dev/did/rdsk/d3s3
Then add the following entry to the /etc/vfstab file:
/dev/did/dsk/d3s3 /dev/did/rdsk/d3s3 /global/sample ufs 2 no logging
Add the HAStoragePlus resource.
# scrgadm -a -j rs-hasp -g apprg1 -t SUNW.HAStoragePlus \
-x FilesystemMountPoints=/global/sample -x AffinityOn=TRUE