Oracle Solaris Cluster Geographic Edition Data Replication Guide for Sun StorageTek Availability Suite


Initial Configuration of Sun StorageTek Availability Suite Software

This section describes the initial steps you must perform before you can configure Sun StorageTek Availability Suite replication in the Geographic Edition product.

The example protection group, avspg, in this section has been configured on a partnership that consists of two clusters, cluster-paris and cluster-newyork. An application, which is encapsulated in the apprg1 resource group, is protected by the avspg protection group. The application data is contained in the avsdg device group. The volumes in the avsdg device group can be Solaris Volume Manager volumes, Veritas Volume Manager (VxVM) volumes, or raw device volumes.

The resource group, apprg1, and the device group, avsdg, are present on both the cluster-paris cluster and the cluster-newyork cluster. The avspg protection group protects the application data by replicating data between the cluster-paris cluster and the cluster-newyork cluster.


Note - Replication of each device group requires a logical host on the local cluster and a logical host on the partner cluster.


You cannot use the slash character (/) in a cluster tag in the Geographic Edition software. If you are using raw DID devices, you therefore cannot use predefined DID device group names such as dsk/d3.

To use DIDs with raw device groups, see How to Set Up Raw-Disk Device Groups for Geographic Edition Systems.

This section provides the following information:

Sun StorageTek Availability Suite Volume Sets

How to Set Up Raw-Disk Device Groups for Geographic Edition Systems

How to Configure a Sun StorageTek Availability Suite Volume in Oracle Solaris Cluster

Enabling a Sun StorageTek Availability Suite Volume Set

Managing Fallback Snapshots Manually

How to Configure the Oracle Solaris Cluster Device Group That Is Controlled by Sun StorageTek Availability Suite

How to Configure a Highly Available Cluster Global File System for Use With Sun StorageTek Availability Suite

Sun StorageTek Availability Suite Volume Sets

This section describes the storage resources and the files required to configure a volume set by using the Sun StorageTek Availability Suite software.

Resources Required For A Volume Set

Before you can define a Sun StorageTek Availability Suite volume set, you must determine the following:

The data volume to be replicated on each cluster

The bitmap volume that accompanies each data volume on each cluster

The logical hostname that is used for replication on each cluster

The operating mode, sync or async, and any optional parameters such as an I/O group name or a disk queue volume

Automatic Configuration of Volume Sets

One devicegroupname-volset.ini file is required for each device group that will be replicated. The volset file is located at /var/cluster/geo/avs/devicegroupname-volset.ini on all nodes of the primary and secondary clusters of the protection group. For example, the volset file for the device group avsdg is located at /var/cluster/geo/avs/avsdg-volset.ini.

The fields in the volume set file that are handled by the Geographic Edition software are described in the following table. The Geographic Edition software does not handle other parameters of the volume set, such as the size of the memory queue and the number of asynchronous threads. You must adjust these parameters manually by using Sun StorageTek Availability Suite commands.

phost (Primary host): The logical host of the server on which the primary volume resides.

pdev (Primary device): Primary volume partition. Specify full path names only.

pbitmap (Primary bitmap): Volume partition in which the bitmap of the primary partition is stored. Specify full path names only.

shost (Secondary host): The logical host of the server on which the secondary volume resides.

sdev (Secondary device): Secondary volume partition. Specify full path names only.

sbitmap (Secondary bitmap): Volume partition in which the bitmap of the secondary partition is stored. Specify full path names only.

ip (Network transfer protocol): IP address.

sync | async (Operating mode): sync is the mode in which the I/O operation is confirmed as complete only when the volume on the secondary cluster has been updated. async is the mode in which the primary host I/O operation is confirmed as complete before the volumes on the secondary cluster are updated.

g iogroupname (I/O group name): An I/O group name. The set must be configured in the same I/O group on both the primary and the secondary cluster. This parameter is optional and needs to be configured only if you have an I/O group.

q qdev (Disk queue volume): Volume to be used as a disk-based I/O queue for an asynchronous volume set. Specify full path names only.

C tag (Device group tag): The device group name or resource tag of the local data and bitmap volumes, in cases where this information is not implied by the volume path. For example, /dev/md/avsset/rdsk/vol indicates a device group named avsset, and /dev/vx/rdsk/avsdg/vol indicates a device group named avsdg.
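
Putting these fields together, one volume set occupies a single line in the volset file. The following is a sketch of the general layout, consistent with the examples later in this chapter; the bracketed g and q fields are optional and, when present, follow the operating mode:

phost pdev pbitmap shost sdev sbitmap ip {sync|async} [g iogroupname] [q qdev] C tag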

Details on sizing the disk queue volume can be found in the Sun StorageTek Availability Suite 4.0 Software Installation and Configuration Guide.

The Geographic Edition software does not modify the value of the Sun StorageTek Availability Suite parameters. The software controls only the role of the volume set during switchover and takeover operations.

For more information about the format of the volume set files, refer to the Sun StorageTek Availability Suite documentation.

Automatically Enabling Fallback Snapshots

In Geographic Edition software you can automatically enable fallback snapshots to protect your replicated secondary volumes from corruption by an incomplete resynchronization as described in Protecting Data on Replicated Volumes From Resynchronization Failure. To do so, on each cluster you will configure one /var/cluster/geo/avs/devicegroupname-snapshot.ini file for each device group whose volumes you want to protect. The devicegroupname-snapshot.ini files are read when the device group is added to a protection group, at the same time that the /var/cluster/geo/avs/devicegroupname-volset.ini files of the device group are read. You can also add fallback snapshots to the volumes of a device group after the device group is added to a protection group, as described in Manually Enabling Fallback Snapshots, but automatic configuration is simpler.

A fallback snapshot for one volume in a device group is enabled by using a single line in the devicegroupname-snapshot.ini file in the following format:

master_vol shadow_vol bitmap_shadow_vol

The volumes used by the fallback snapshot are described in Sun StorageTek Availability Suite Volume Sets. The variable master_vol is the path name of the replicated volume, shadow_vol is the path name of the compact dependent shadow volume that acts as a fallback for the secondary volume, and bitmap_shadow_vol is the path name of the bitmap volume for the compact dependent shadow volume. Full path names for each volume are required, and all three volumes must be in the same device group. For a single replicated volume it is easiest to use the same volume names on each cluster, but it is not required that you do so. For example, the shadow volume on cluster-paris might be /dev/md/avsset/rdsk/d102, while the shadow volume on cluster-newyork might be /dev/md/avsset/rdsk/d108.

The following example shows one line from the /var/cluster/geo/avs/avsset-snapshot.ini file that enables a fallback snapshot on one cluster for the secondary volume /dev/md/avsset/rdsk/d100 in the device group avsset. The device group avsset was created by using Solaris Volume Manager software, but any type of device group supported by the Geographic Edition software can be used with fallback snapshots.

/dev/md/avsset/rdsk/d100  /dev/md/avsset/rdsk/d102  /dev/md/avsset/rdsk/d103

This example line contains the following types of entries:

/dev/md/avsset/rdsk/d100 - The replicated (master) volume that the fallback snapshot protects

/dev/md/avsset/rdsk/d102 - The compact dependent shadow volume

/dev/md/avsset/rdsk/d103 - The bitmap volume for the compact dependent shadow volume

How to Set Up Raw-Disk Device Groups for Geographic Edition Systems

Geographic Edition supports the use of raw-disk device groups in addition to various volume managers. When you initially configure Oracle Solaris Cluster, device groups are automatically configured for each raw device in the cluster. Use this procedure to reconfigure these automatically created device groups for use with Geographic Edition.

  1. For the devices that you want to use, unconfigure the predefined device groups.

    The following commands remove the predefined device groups for d7 and d8.

    phys-paris-1# cldevicegroup disable dsk/d7 dsk/d8
    phys-paris-1# cldevicegroup offline dsk/d7 dsk/d8
    phys-paris-1# cldevicegroup delete dsk/d7 dsk/d8
  2. Create the new raw-disk device group, including the desired devices.

    Ensure that the new device group name does not contain any slashes. The following command creates a global device group, rawdg, which contains d7 and d8.

    phys-paris-1# cldevicegroup create -n phys-paris-1,phys-paris-2 \
    -t rawdisk -d d7,d8 rawdg
  3. Verify that the device group rawdg was created.
    phys-paris-1# cldevicegroup show rawdg
  4. On the partner cluster, unconfigure the predefined device groups for the devices that you want to use.

    You can use the same DIDs on each cluster. In the following command, the newyork cluster is the partner of the paris cluster.

    phys-newyork-1# cldevicegroup disable dsk/d5 dsk/d6
    phys-newyork-1# cldevicegroup offline dsk/d5 dsk/d6
    phys-newyork-1# cldevicegroup delete dsk/d5 dsk/d6
  5. Create the raw-disk device group on the partner cluster.

    Use the same device group name that you used on the primary cluster.

    phys-newyork-1# cldevicegroup create -n phys-newyork-1,phys-newyork-2 \
    -t rawdisk -d d5,d6 rawdg
  6. Use the new group name where a device group name is required.

    The following command adds rawdg to the AVS protection group rawpg. The device group to be added must exist and must have the same name, in this case rawdg, on both clusters.

    phys-paris-1# geopg add-device-group -p local_logical_host=paris-1h \
    -p remote_logical_host=newyork-1h rawdg rawpg
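
As a quick check, you can list the protection groups that are configured on the local cluster and confirm that rawdg now appears under rawpg; the same command is used in the snapshot procedures later in this chapter:

    phys-paris-1# geopg list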

How to Configure a Sun StorageTek Availability Suite Volume in Oracle Solaris Cluster

This procedure configures Sun StorageTek Availability Suite volumes in an Oracle Solaris Cluster environment. These volumes can be Solaris Volume Manager volumes, VxVM volumes, or raw device volumes.

The volumes are encapsulated at the Oracle Solaris Cluster device-group level. The Sun StorageTek Availability Suite software interacts with the Solaris Volume Manager disk sets, VxVM disk groups, or raw devices through this device-group interface. The path to the volumes depends on the volume type, as described in the following table.

Solaris Volume Manager: /dev/md/disksetname/rdsk/d#, where # represents a number

VxVM: /dev/vx/rdsk/diskgroupname/volumename

Raw device: /dev/did/rdsk/d#s#
  1. On cluster-paris and cluster-newyork, create a disk set, avsset, by using Solaris Volume Manager; a disk group, avsdg, by using VxVM; or a raw device group.

    For example, if you configure the volume by using a raw device, choose a raw device group, dsk/d3, on cluster-paris and cluster-newyork.

  2. Create two volumes in the disk set or disk group on cluster-paris.

    The Sun StorageTek Availability Suite software requires a dedicated bitmap volume for each data volume to track modifications to the data volume while the system is in logging mode. A Solaris Volume Manager sketch of this step follows the procedure.

    If you use a raw device to configure the volumes, create two partitions, /dev/did/rdsk/d3s3 and /dev/did/rdsk/d3s4, on the /dev/did/rdsk/d3 device on cluster-paris.

  3. Create two volumes in the disk set or disk group on cluster-newyork.

    If you use a raw device to configure the volumes, create two partitions, /dev/did/rdsk/d3s5 and /dev/did/rdsk/d3s6, on the /dev/did/rdsk/d3 device on cluster-newyork.

  4. (Optional) Create two volumes on cluster-paris and two volumes on cluster-newyork for the fallback snapshots.

    You can optionally create two additional volumes on each cluster for each data volume for which a fallback snapshot will be created, as described in Sun StorageTek Availability Suite Volume Sets. The compact dependent shadow volume can normally be 10% of the size of the volume it will protect. The bitmap shadow volume is sized according to the rules described in the Sun StorageTek Availability Suite 4.0 Point-in-Time Copy Software Administration Guide. The volumes used by the fallback snapshot must be in the same device group as the replicated volume they protect.
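
The following sketch illustrates steps 1 and 2 with Solaris Volume Manager commands. The disk set name avsset and the volumes d100 (data) and d101 (bitmap) match the examples in this chapter, but the node names, the DID device d3, and the slice layout are assumptions for illustration only:

    phys-paris-1# metaset -s avsset -a -h phys-paris-1 phys-paris-2
    phys-paris-1# metaset -s avsset -a /dev/did/rdsk/d3
    phys-paris-1# metainit -s avsset d100 1 1 /dev/did/rdsk/d3s0
    phys-paris-1# metainit -s avsset d101 1 1 /dev/did/rdsk/d3s1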

Enabling a Sun StorageTek Availability Suite Volume Set

You can enable the Sun StorageTek Availability Suite volume sets and fallback snapshots in one of two ways:

Automatically, by creating the devicegroupname-volset.ini and devicegroupname-snapshot.ini files before the device group is added to the protection group, as shown in the following examples

Manually, by running the Sun StorageTek Availability Suite commands after the device group has been added to the protection group, as described in Manually Enabling Volume Sets

Automatically Enabling a Solaris Volume Manager Volume Set

In this example, the cluster-paris cluster is the primary and avsset is a device group that contains a Solaris Volume Manager disk set.

Example 1-1 Automatically Enabling a Solaris Volume Manager Volume Set

This example has the following entries in the /var/cluster/geo/avs/avsset-volset.ini file. Each volume set must be defined on a single line in the file; the entry is wrapped here for readability:

logicalhost-paris-1 /dev/md/avsset/rdsk/d100 /dev/md/avsset/rdsk/d101
logicalhost-newyork-1 /dev/md/avsset/rdsk/d100 /dev/md/avsset/rdsk/d101
ip async C avsset

The avsset-volset.ini file entries specify, in order, the primary logical host, data volume, and bitmap volume (logicalhost-paris-1, /dev/md/avsset/rdsk/d100, /dev/md/avsset/rdsk/d101); the secondary logical host, data volume, and bitmap volume (logicalhost-newyork-1, /dev/md/avsset/rdsk/d100, /dev/md/avsset/rdsk/d101); the network transfer protocol (ip); the operating mode (async); and the C tag that identifies the device group (C avsset).

The sample configuration file defines a volume set that replicates d100 from cluster-paris to d100 on cluster-newyork by using the bitmap volumes and logical hostnames that are specified in the file.
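
The file is read, and the volume set enabled, when the device group is added to the protection group. The following is a sketch of that command for this example, using the logical hostnames from the volset file and the same syntax shown in the raw-disk procedure earlier in this chapter:

    phys-paris-1# geopg add-device-group -p local_logical_host=logicalhost-paris-1 \
    -p remote_logical_host=logicalhost-newyork-1 avsset avspg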

Automatically Enabling a VxVM Volume Set

In this example, the cluster-paris cluster is the primary and avsdg is a device group that contains a VxVM disk group.

Example 1-2 Automatically Enabling a VxVM Volume Set

This example has the following entries in the /var/cluster/geo/avs/avsdg-volset.ini file. Each volume set must be defined on a single line in the file; the entry is wrapped here for readability:

logicalhost-paris-1 /dev/vx/rdsk/avsdg/vol-data-paris /dev/vx/rdsk/avsdg/vol-bitmap-paris 
logicalhost-newyork-1 /dev/vx/rdsk/avsdg/vol-data-newyork /dev/vx/rdsk/avsdg/vol-bitmap-ny 
ip async C avsdg

The avsdg-volset.ini file entries specify, in order, the primary logical host, data volume, and bitmap volume; the secondary logical host, data volume, and bitmap volume; the network transfer protocol (ip); the operating mode (async); and the C tag that identifies the device group (C avsdg).

The sample configuration file defines a volume set that replicates vol-data-paris from cluster-paris to vol-data-newyork on cluster-newyork. The volume set uses the bitmap volumes and logical hostnames that are specified in the file.

Automatically Enabling a Raw Device Volume Set

In this example, the cluster-paris cluster is the primary and rawdg is the name of the device group that contains a raw device disk group, /dev/did/rdsk/d3.

Example 1-3 Automatically Enabling a Raw Device Volume Set

This example has the following entries in the /var/cluster/geo/avs/rawdg-volset.ini file. Each volume set must be defined on a single line in the file; the entry is wrapped here for readability:

logicalhost-paris-1 /dev/did/rdsk/d3s3 /dev/did/rdsk/d3s4 
logicalhost-newyork-1 /dev/did/rdsk/d3s5 /dev/did/rdsk/d3s6 
ip async C rawdg

The rawdg-volset.ini file entries follow the same order: the logical hosts, data volumes, and bitmap volumes for the primary and secondary clusters, the network transfer protocol (ip), the operating mode (async), and the C tag that identifies the device group (C rawdg).

The sample configuration file defines a volume set that replicates d3s3 from cluster-paris to d3s5 on cluster-newyork. The volume set uses the bitmap volumes and logical hostnames that are specified in the file.

Manually Enabling Volume Sets

After you have added the device group to the protection group, avspg, you can manually enable the Sun StorageTek Availability Suite volume sets and fallback snapshots. Because the Sun StorageTek Availability Suite commands are installed in different locations depending on the software version, the following examples show the command path, /usr/sbin, that is used by Sun StorageTek Availability Suite 4.0.

Example 1-4 Manually Enabling a Sun StorageTek Availability Suite 4.0 Volume Set

This example manually enables a Solaris Volume Manager volume set when using Sun StorageTek Availability Suite 4.0.

phys-paris-1# /usr/sbin/sndradm -e logicalhost-paris-1 \
/dev/md/avsset/rdsk/d100 /dev/md/avsset/rdsk/d101 \
logicalhost-newyork-1 /dev/md/avsset/rdsk/d100 \
/dev/md/avsset/rdsk/d101 ip async C avsset

Example 1-5 Manually Enabling a VxVM Volume Set

This example manually enables a VxVM volume set when using Sun StorageTek Availability Suite 4.0.

phys-paris-1# /usr/sbin/sndradm -e logicalhost-paris-1 \
/dev/vx/rdsk/avsdg/vol-data-paris /dev/vx/rdsk/avsdg/vol-bitmap-paris \
logicalhost-newyork-1 /dev/vx/rdsk/avsdg/vol-data-newyork \
/dev/vx/rdsk/avsdg/vol-bitmap-newyork ip async C avsdg

Example 1-6 Manually Enabling a Raw Device Volume Set

This example manually enables a raw device volume set when using Sun StorageTek Availability Suite 4.0.

phys-paris-1# /usr/sbin/sndradm -e logicalhost-paris-1 \
/dev/did/rdsk/d3s3 /dev/did/rdsk/d3s4 logicalhost-newyork-1 /dev/did/rdsk/d3s5 \
/dev/did/rdsk/d3s6 ip async C dsk/d3

Information about the sndradm command execution is written to the Sun StorageTek Availability Suite log file at /var/adm/ds.log. Refer to this file if errors occur while manually enabling the volume set.
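
After enabling a volume set, you can list the configured sets and their replication status with the sndradm -P command, which is also used in the snapshot procedures that follow:

    phys-paris-1# /usr/sbin/sndradm -P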

Managing Fallback Snapshots Manually

Fallback snapshots are described in Protecting Data on Replicated Volumes From Resynchronization Failure. The easiest way to enable a fallback snapshot for a volume is to use the automatic configuration procedures described in Automatically Enabling Fallback Snapshots. However, if a device group was added to a protection group without automatic fallback snapshots configured for its volumes, you can still configure the snapshots manually. This section describes the procedures for manually enabling, disabling, and modifying a fallback snapshot for a volume in such a device group.

The Snapshot_volume Property

One replication resource group, containing one replication resource, is automatically created for a device group on each cluster when it is added to a protection group, as described in Sun StorageTek Availability Suite Replication Resource Groups. The Snapshot_volume property of the replication resource can be used to configure fallback snapshots for its device group. The Snapshot_volume property is a string array, so it can be set to as many fallback snapshot configurations as you have volumes in the device group.

You can enable a fallback snapshot on any of the volumes configured on the device group by appending an entry to those already assigned to the Snapshot_volume property. Each entry is a string of the format:

master_vol:shadow_vol:bitmap_shadow_vol

The variable master_vol is set to the full path name of the secondary volume, shadow_vol is set to the full path name of the compact dependent shadow volume that serves as a fallback snapshot for the secondary volume, and bitmap_shadow_vol is set to the full path name of the bitmap volume for the shadow volume. The three fields are separated by colons, and no spaces are permitted anywhere in the entry.


Note - The Snapshot_volume property is set on the replication resource associated with a device group, not on the device group itself. To view the value of the Snapshot_volume property, you must therefore use the clresource show command on the replication resource devicegroupname-rep-rs.
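
For example, to display the fallback snapshot entries that are configured for the device group avsset used in this chapter, you might run the following command on one cluster node; avsset-rep-rs is the replication resource name assumed throughout these examples:

    phys-newyork-1# clresource show -p Snapshot_volume avsset-rep-rs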


Manually Enabling Fallback Snapshots

To manually enable a fallback snapshot, the replicated volume must already be configured and added to a protection group as described in How to Add a Data Replication Device Group to a Sun StorageTek Availability Suite Protection Group. You must also prepare two volumes on each cluster to use for the fallback snapshot as described in Sun StorageTek Availability Suite Volume Sets.

Because the Snapshot_volume property can contain multiple values in the format master_vol:shadow_vol:bitmap_shadow_vol, you append a new entry to those already assigned to the property by using the += (plus-equal) operator, as shown in this example:

-p Snapshot_volume+=/dev/md/avsset/rdsk/d100:/dev/md/avsset/rdsk/d102:/dev/md/avsset/rdsk/d103

In this entry the replicated volume is /dev/md/avsset/rdsk/d100, in the device group avsset. The fallback snapshot uses the shadow volume /dev/md/avsset/rdsk/d102. Its bitmap shadow volume is /dev/md/avsset/rdsk/d103.

Example 1-7 Manually Enabling a Fallback Snapshot

This example configures fallback snapshots on both clusters for a replicated volume /dev/md/avsset/rdsk/d100 in the Sun StorageTek Availability Suite device group avsset. For simplicity, this example assumes that you are enabling fallback snapshots for the replicated volume on both clusters. It also assumes the same path names for the replicated volume, the shadow volume and the bitmap shadow volume on both clusters. In practice you can use different volume names on each cluster in a partnership as long as the volumes on any one cluster are in the same device group, and the device group to which they belong has the same name on both clusters.

In this example a fallback snapshot on each cluster is configured by using the compact dependent shadow volume /dev/md/avsset/rdsk/d102 and the bitmap shadow volume /dev/md/avsset/rdsk/d103. The protection group of the replicated volume is avspg. The device group avsset is created by using Solaris Volume Manager software, but any type of device group supported by the Geographic Edition software can be used with fallback snapshots.

  1. Perform this step on one node of either cluster.

    Verify which cluster is the current primary and which is the current secondary for the device group containing the volume for which you are enabling a fallback snapshot:

    phys-newyork-1# /usr/sbin/sndradm -P
  2. Perform this step on one node of either cluster.

    Identify the resource group used for the replication of the device group avsset. It will have a name of the form protectiongroupname-rep-rg and it will contain a resource named devicegroupname-rep-rs, as described in Sun StorageTek Availability Suite Replication Resource Groups. In this example the replication resource group is called avspg-rep-rg, and the replication resource is called avsset-rep-rs.

    phys-newyork-1# geopg list
  3. Perform this step on one node of each cluster on which you want to configure fallback snapshots.

    Append the entry /dev/md/avsset/rdsk/d100:/dev/md/avsset/rdsk/d102:/dev/md/avsset/rdsk/d103 to the Snapshot_volume property on the resource avsset-rep-rs. Do not put spaces adjacent to the colons, and ensure that you include the + sign in the operator:

    phys-newyork-1# clresource set -g avspg-rep-rg \
    -p Snapshot_volume+=/dev/md/avsset/rdsk/d100:/dev/md/avsset/rdsk/d102:/dev/md/avsset/rdsk/d103 \
    avsset-rep-rs
  4. To enable the fallback snapshot, perform this step on one node of the cluster that is currently secondary for the device group.

    Attach the snapshot volume to the secondary replicated volume. In this command you will again specify the master volume, shadow volume and bitmap shadow volume, separated by spaces:

    phys-newyork-1# /usr/sbin/sndradm -C avsset -I a /dev/md/avsset/rdsk/d100 \
    /dev/md/avsset/rdsk/d102 /dev/md/avsset/rdsk/d103

Manually Disabling Fallback Snapshots

A Snapshot_volume property can contain multiple entries, one for each replicated volume in its associated device group. If you want to disable the fallback snapshot for just one of the replicated volumes in a device group, you must identify the exact entry for that volume and explicitly remove it by using the -= (minus-equal) operator as shown in this example:

-p Snapshot_volume-=/dev/md/avsset/rdsk/d100:/dev/md/avsset/rdsk/d102:/dev/md/avsset/rdsk/d103

You can locate the specific entry for the fallback snapshot you want to disable by using the clresource show command on the devicegroupname-rep-rs resource.

Example 1-8 Manually Disabling a Fallback Snapshot

This example disables the fallback snapshot for the secondary replicated volume /dev/md/avsset/rdsk/d100. This fallback snapshot was enabled in Example 1-7.

  1. Perform this step on one node of either cluster.

    Verify which cluster is the current primary and which is the current secondary for the device group containing the volume for which you are disabling a fallback snapshot:

    phys-newyork-1# /usr/sbin/sndradm -P
  2. Perform this step on one node of either cluster.

    Identify the resource group used for the replication of the device group avsset. It will have a name of the form protectiongroupname-rep-rg and it will contain a resource named devicegroupname-rep-rs, as described in Sun StorageTek Availability Suite Replication Resource Groups. In this example the replication resource group is called avspg-rep-rg, and the replication resource is called avsset-rep-rs.

    phys-newyork-1# geopg list
  3. Perform this step on one node of each cluster.

    Locate the entry you want to delete from those configured on the Snapshot_volume property of the replication resource:

    phys-newyork-1# clresource show -p Snapshot_volume avsset-rep-rs
  4. Perform this step on one node of each cluster.

    Unconfigure the Snapshot_volume property. The operator -= removes the specified value from the property. Ensure that you include the - sign in the operator, and that you specify the Snapshot_volume entry exactly as it appears in the output of the clresource show command:

    phys-newyork-1# clresource set -g avspg-rep-rg \
    -p Snapshot_volume-=/dev/md/avsset/rdsk/d100:/dev/md/avsset/rdsk/d102:/dev/md/avsset/rdsk/d103 \
    avsset-rep-rs
  5. Perform this step on one node of the cluster that is currently secondary for the device group.

    Detach the snapshot volume from the replicated data volume. In this command you will again specify the master volume, shadow volume and bitmap shadow volume, separated by spaces:

    phys-newyork-1# /usr/sbin/sndradm -C avsset -I d /dev/md/avsset/rdsk/d100 \
    /dev/md/avsset/rdsk/d102 /dev/md/avsset/rdsk/d103

Manually Modifying Fallback Snapshots

To manually modify a fallback snapshot, delete the entry you want to change from the Snapshot_volume property, then add the new entry. Follow the procedures that are described in Manually Disabling Fallback Snapshots and in Manually Enabling Fallback Snapshots.
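
As a sketch, modifying the fallback snapshot from Example 1-7 to use a hypothetical new shadow volume, /dev/md/avsset/rdsk/d104, combines the two operations on one node of each cluster; you would then detach and reattach the snapshot on the current secondary as shown in the surrounding procedures:

    phys-newyork-1# clresource set -g avspg-rep-rg \
    -p Snapshot_volume-=/dev/md/avsset/rdsk/d100:/dev/md/avsset/rdsk/d102:/dev/md/avsset/rdsk/d103 \
    avsset-rep-rs
    phys-newyork-1# clresource set -g avspg-rep-rg \
    -p Snapshot_volume+=/dev/md/avsset/rdsk/d100:/dev/md/avsset/rdsk/d104:/dev/md/avsset/rdsk/d103 \
    avsset-rep-rs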

How to Configure the Oracle Solaris Cluster Device Group That Is Controlled by Sun StorageTek Availability Suite

Sun StorageTek Availability Suite software supports Solaris Volume Manager, VxVM, and raw device volumes.

  1. Ensure that the device group that contains the volume set that you want to replicate is registered with Oracle Solaris Cluster software.
    # cldevicegroup show -v dg1

    For more information about this command, refer to the cldevicegroup(1CL) man page.

  2. If you are using a VxVM device group, synchronize the VxVM configuration by using the Oracle Solaris Cluster clsetup or cldevicegroup command, as sketched after this procedure.
  3. Ensure that the device group is displayed in the output of the cldevicegroup show command.
    # cldevicegroup show -v dg1

    For more information about this command, see the cldevicegroup(1CL) man page.

  4. Repeat steps 1–3 on both clusters, cluster-paris and cluster-newyork.
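
A minimal sketch of the synchronization in step 2, assuming the VxVM device group is dg1 as in steps 1 and 3:

    phys-paris-1# cldevicegroup sync dg1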

How to Configure a Highly Available Cluster Global File System for Use With Sun StorageTek Availability Suite

  1. Create the required file system on the volume set that you created in the previous procedure, vol-data-paris.

    The application writes to this file system.

  2. Add an entry to the /etc/vfstab file that contains information such as the mount location.

    Note - You must set the mount at boot field in this file to no. This value prevents the file system from mounting on the secondary cluster at cluster startup. Instead, the Oracle Solaris Cluster software and the Geographic Edition framework handle mounting the file system by using the HAStoragePlus resource when the application is brought online on the primary cluster. You must not mount the file system on the secondary cluster; otherwise, data on the primary will not be replicated to the secondary cluster.


  3. To handle the new file system, add the HAStoragePlus resource to the application resource group, apprg1.

    Adding this resource ensures that the necessary file systems are remounted before the application is started.

    For more information about the HAStoragePlus resource type, refer to the Oracle Solaris Cluster Data Services Planning and Administration Guide.

  4. Repeat steps 1–3 on both cluster-paris and cluster-newyork.

Example 1-9 Configuring a Highly Available Cluster Global File System for Solaris Volume Manager Volumes

This example configures a highly available cluster global file system for Solaris Volume Manager volumes. This example assumes that the resource group apprg1 already exists.

  1. Create a UNIX file system (UFS).

    # newfs /dev/md/avsset/rdsk/d100

    Then add the following entry to the /etc/vfstab file:

    /dev/md/avsset/dsk/d100 /dev/md/avsset/rdsk/d100 /global/sample ufs 2 no logging
  2. Add the HAStoragePlus resource.

    # clresource create -g apprg1 -t SUNWHAStoragePlus \
    -p FilesystemMountPoints=/global/sample -p AffinityOn=TRUE rs-hasp
     

Example 1-10 Configuring a Highly Available Cluster Global File System for VxVM Volumes

This example assumes that the apprg1 resource group already exists.

  1. Create a UNIX file system (UFS).

    # newfs /dev/vx/rdsk/avsdg/vol-data-paris

    Then add the following entry to the /etc/vfstab file:

    /dev/vx/dsk/avsdg/vol-data-paris /dev/vx/rdsk/avsdg/vol-data-paris 
    /global/sample ufs 2 no logging
  2. Add the HAStoragePlus resource.

    # clresource create -g apprg1 -t SUNWHAStoragePlus \
    -p FilesystemMountPoints=/global/sample -p AffinityOn=TRUE rs-hasp
     

Example 1-11 Configuring a Highly Available Cluster Global File System for Raw Device Volumes

This example assumes that the apprg1 resource group already exists.

  1. Create a UNIX file system (UFS).

    # newfs /dev/did/rdsk/d3s3

    Then add the following entry to the /etc/vfstab file:

    /dev/did/dsk/d3s3 /dev/did/rdsk/d3s3 
    /global/sample ufs 2 no logging
  2. Add the HAStoragePlus resource.

    # clresource create -g apprg1 -t SUNWHAStoragePlus \
    -p FilesystemMountPoints=/global/sample -p AffinityOn=TRUE rs-hasp