CHAPTER 3

Preparing to Use the Software

This chapter provides information about how to set up the software before you first use it. The topics include the following:

Increasing the Default Number of Volumes Allowed
Setting Up Bitmap Volumes
Customizing Volume Sets
Mounting and Unmounting Replicated Volumes
dsbitmap Bitmap Sizing Utility


Increasing the Default Number of Volumes Allowed

The following sections describe how to change the default number of volumes you can use with the software.

The default number of Remote Mirror volume sets you can enable is 64. Follow these procedures to increase this number.

The default number of storage volume (SV) driver devices you can configure is 4096. This number of devices is divided for use between the Remote Mirror and Point-in-Time Copy software. Follow these procedures to increase this number.



Note - After editing the files in this section, restart the Remote Mirror data services using the dscfgadm -d -r command, followed by the dscfgadm -e -r command, for changes to take effect. Also, if you edit the rdc.conf file to use more than 64 volume sets, ensure that you have enough system resources.



Using More Than 64 Volume Sets

To configure more than 64 volume sets, edit the rdc_max_sets field in the /usr/kernel/drv/rdc.conf file on each machine running the Remote Mirror software. The default number of configured volume sets is 64. For example, to use 128 sets, change the file as follows:


#
# rdc_max_sets
# - Configure the maximum number of RDC sets that can be enabled on
# this host. The actual maximum number of sets that can be
# enabled will be the minimum of this value and nsc_max_devices
# (see nsctl.conf) at the time the rdc kernel module is loaded.
#
rdc_max_sets=128;

Be sure to include the semicolon character (;) at the end of the rdc_max_sets field.

Save and close this file, then restart the Remote Mirror data services using the dscfgadm -d -r command followed by the dscfgadm -e -r command.
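
For example, the restart sequence is as follows, run as superuser:


# dscfgadm -d -r
# dscfgadm -e -r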

Change the number of storage volume (sv) driver devices, as described in Increasing the Storage Volume Device Limit.

Increasing the Storage Volume Device Limit

The default number of sv driver devices (that is, volumes) you can configure is 4096, as set by the nsc_max_devices setting in the nsctl.conf file. If you use the Remote Mirror and Point-in-Time Copy software products together, this allowance is divided between the two products.

The following procedure describes how to increase this default limit.


To Increase the Storage Volume Limit

1. Log in as superuser.

2. Open the /usr/kernel/drv/nsctl.conf file using a text editor.

3. Search for the nsc_max_devices field.

4. Edit the number in this field to increase your volume limit.

The default number is 4096.

5. Save and exit the file.

6. Restart the Remote Mirror data services using the dscfgadm -d -r command followed by the dscfgadm -e -r command.
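
For example, to double the limit, the edited field might look like the following. This is a sketch: it assumes the nsc_max_devices field in nsctl.conf uses the same name=value; syntax as the rdc.conf fields shown earlier, so verify the exact form against the comments in your file:


nsc_max_devices=8192;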


Setting Up Bitmap Volumes

The Remote Mirror software does not support bitmap files. Instead, it uses raw devices to store bitmaps.

These raw devices must be stored on a disk separate from the disk that contains the data from the replicated volumes. Configure RAID (such as mirrored partitions) for these bitmap devices and ensure that you mirror the bitmap to another disk in a different array. The bitmap must not be stored on the same disk as the replicated volumes.

Another configuration consideration is the persistence of Remote Mirror bitmaps. By default, Remote Mirror bitmaps are written only to memory and destaged to disk on an orderly shutdown. This improves application performance by saving the service time of writing a bit to the bitmap volume for every local write, but there is a trade-off: if the active site's server crashes, the bitmap is lost and a full synchronization is required.

The alternative to writing bitmap data to memory is to configure bitmap writes to go to a disk volume during runtime. In this configuration, there is a performance penalty of one additional I/O per local write through Remote Mirror. However, if the server crashes, the bitmap data is retained, and no full resynchronization is required after reboot. In this configuration, placing the bitmap volumes on a caching array is highly recommended.

For additional information about configuring Remote Mirror bitmap behavior by setting rdc_bitmap_mode in the rdc.conf file, see Setting the Bitmap Operation Mode.




Caution - When creating volume sets, do not create secondary or bitmap volumes using partitions that include cylinder 0. Data loss might occur. See VTOC Information.



If the bitmap and the replicated volumes reside on the same disk or array, a single point of failure exists: if that disk or array fails, the risk of data loss is greater and the bitmap might become corrupted.

In a clustered environment, the bitmap volume must be part of the same disk group or cluster resource group as the corresponding primary or secondary data volume.

The bitmap size can be calculated using the following formula:

1 Kbyte + 4 Kbyte per Gbyte of data device storage

For example, a 2-Gbyte data device requires a bitmap size of 1 Kbyte + (2 x 4 Kbyte) = 9 Kbyte. (You can create bitmaps that are larger than the calculated size.)

See dsbitmap Bitmap Sizing Utility for information about a utility that provides correct sizes for bitmap volumes.

Setting the Bitmap Operation Mode

A bitmap maintained on disk can persist across a system crash, depending on the setting of rdc_bitmap_mode in /usr/kernel/drv/rdc.conf. The default setting is 1.



Note - In earlier versions of the Remote Mirror software, the default setting for rdc_bitmap_mode was 0.



If your server is configured in a clustered environment, the bitmap mode should be set to 1.

Edit the rdc.conf file and locate the following section. Edit the value for the bitmap mode, save and close the file, then restart the Remote Mirror data services using the dscfgadm -d -r command followed by the dscfgadm -e -r command.


#
# rdc_bitmap_mode
# - Sets the mode of the RDC bitmap operation, acceptable values are:
#   0 - autodetect bitmap mode depending on the state of SDBC (default).
#   1 - force bitmap writes for every write operation, so an update resync
#       can be performed after a crash or reboot.
#   2 - only write the bitmap on shutdown, so a full resync is
#       required after a crash, but an update resync is required after
#       a reboot.
#
rdc_bitmap_mode=0;
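
For example, in a clustered environment you would change the last line to rdc_bitmap_mode=1; and then restart the data services. The grep pipeline below is merely one way to confirm the edit; the output line shows the expected result:


# grep rdc_bitmap_mode /usr/kernel/drv/rdc.conf | grep -v '^#'
rdc_bitmap_mode=1;
# dscfgadm -d -r
# dscfgadm -e -r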


Customizing Volume Sets

Before you start creating volume sets, see the following topics:

Restricted Access To Volume Sets
Setting Up a Volume Set File

See also Reconfiguring or Modifying a Volume Set.

Restricted Access To Volume Sets




Caution - In a clustered environment, only one system administrator or root user at a time is allowed to create and configure Sun StorageTek volume sets. This restriction helps avoid creating an inconsistent Sun StorageTek Availability Suite volume set configuration.



The operations that access the configuration include, but are not limited to:




Caution - When configuring a volume set, do not use the same volume set as a Point-in-Time Copy shadow volume and as a Remote Mirror secondary volume. If you attempt to configure a volume set for both purposes, the data contained on the volume might not be valid for the application that accesses the volume.



Setting Up a Volume Set File

When you enable the Remote Mirror software, you can specify an optional volume set file containing information about the volume set: volumes, primary and secondary hosts, bitmaps, operating mode, and so on. Use the sndradm -f volset-file option when you use a volume set file.

You can also type information about each volume set from the command line, but it might be more convenient to put this information in a file when you have multiple volume sets.

One advantage of using volume set files is that you can operate on specific volume sets while excluding other sets from the operation. Unlike adding volume sets to an I/O group, a volume set file lets you mix replication modes.

The fields for the volume set file specified using the -f volset-file option are:


phost pdev pbitmap shost sdev sbitmap ip {sync|async} [g io-groupname] [C tag] [q qdev]

An example file entry is as follows:


atm10 /dev/vx/rdsk/oracle816/oratest /dev/vx/rdsk/oracle816/oratest_bm \
atm20 /dev/vx/rdsk/oracle816/oratest /dev/vx/rdsk/oracle816/oratest_bm \
ip sync g oragroup

See TABLE 3-1 for descriptions of the format fields. See the rdc.cf man page for more information about the volume set file format.
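
After the sets are described in a file, a single command can operate on all of them. For example, the following sketch enables every set listed in the file; the file path here is illustrative, and the enable option is described on the sndradm man page:


# sndradm -e -f /etc/opt/volset.cf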


TABLE 3-1   Volume Set File Format Fields

Field            Meaning                     Description
phost            Primary host                Server on which the primary volume resides.
pdev             Primary device              Primary volume partition. Specify full path names only (for example, /dev/rdsk/c0t1d0s4).
pbitmap          Primary bitmap              Volume partition in which the bitmap of the primary partition is stored. Specify full path names only.
shost            Secondary host              Server on which the secondary volume resides.
sdev             Secondary device            Secondary volume partition. Specify full path names only.
sbitmap          Secondary bitmap            Volume partition in which the bitmap of the secondary partition is stored. Specify full path names only.
ip               Network transfer protocol   Specify ip.
sync | async     Operating mode              sync confirms the I/O operation as complete only when the remote volume has been updated. async confirms the primary host I/O operation as complete before the remote volume is updated.
g io-groupname   I/O group name              An I/O group name can be specified using the g character. In the preceding example, it is oragroup. The set must be configured in the same io-groupname on both the primary and secondary hosts.
q qdev           Disk queue                  Volume to use for a disk-based queue.



Commands and I/O Group Operations

Adding Remote Mirror volume sets to an I/O group enables you to issue a single command that operates on all volume sets in the specified I/O group, excluding all other volume sets. Most commands allow group operations and perform them when you include -g io-groupname in the command syntax.

The operations performed are independent of each other. Operations performed on I/O group A, volume set 1 are independent of operations performed on I/O group A, volume set 2.

Write ordering is preserved among sets within a group. This requires that all the asynchronous sets within a group share the same queue, which may be held either in memory or on disk.
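
For example, assuming the oragroup group from the earlier volume set file example, one command can perform an update synchronization on every set in the group. The -u option shown is sndradm's update synchronization option; see the sndradm man page:


# sndradm -g oragroup -u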

Failed Operations in an I/O Group

If an operation fails on one or more volume sets in an I/O group, the state of the data on the failed volumes in the volume sets is unknown. To correct this:

1. Correct any known problems with the failing sets.

2. Reissue the command on the I/O group.


Commands and Sun Cluster Operations

Use the C tag and -C tag options, described in Chapter 5, in Sun Cluster operating environments only. If you accidentally use these options in a noncluster environment, the Remote Mirror operation does not execute.


Mounting and Unmounting Replicated Volumes

When the Remote Mirror software replicates a volume, the source, which is usually the primary volume, can be mounted. After the replication is complete, the target, which is usually the unmounted secondary volume, contains on-disk metadata that states that the volume is currently mounted even though it is not.

When a replication is created in this way and the target volume is first mounted, the software detects that a currently dismounted volume has mounted metadata. The software usually forces fsck to run under these conditions, because the assumption is that the only time a volume contains mounted metadata, but is not currently mounted, is after a system crash.

Because Remote Mirror replicates the mounted metadata, the assumption that a crash occurred is no longer correct. However, flushing the cache on the source volume (usually the primary) by running sync or the database's flush command, and then running fsck or the database recovery mechanism on the target, should return no errors. The target of a replication operation (usually the secondary volume) must not be mounted until fsck has been run. If the target is mounted before that, the application accessing it reads inconsistent and changing data.
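
A typical sequence, sketched here with hypothetical device and mount-point names, is to flush the source on the primary host, then check and mount the target on the secondary host:


primary# sync
secondary# fsck /dev/rdsk/c1t2d0s4
secondary# mount /dev/dsk/c1t2d0s4 /mnt/target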


dsbitmap Bitmap Sizing Utility

The dsbitmap utility is installed with the Sun StorageTek Availability Suite software. Use it to calculate the required size of a bitmap for a Point-in-Time Copy shadow volume set or a Remote Mirror volume set.

The dsbitmap utility is typically used by the system administrator during the initial stages of configuring Sun StorageTek Availability Suite software. The utility determines the required bitmap volume size and then verifies that the bitmap volumes are suitable.

dsbitmap

This utility enables you to determine the size of the bitmap volume that is required for a Remote Mirror bitmap or a Point-in-Time Copy bitmap. If you include a proposed bitmap volume in the command, dsbitmap tests its suitability as a bitmap volume for the proposed data volume.

Syntax

To obtain the size of a Point-in-Time Copy bitmap, use this command:

dsbitmap -p datavolume [bitmap_volume]

To obtain the size of a Remote Mirror bitmap, use this command:

dsbitmap -r datavolume [bitmap_volume]

Usage for dsbitmap


# dsbitmap -h
usage: dsbitmap -h
       dsbitmap { -p | -r } data_volume [bitmap_volume]
       -h : This usage message
       -p : Calculate size of Point in Time bitmap
       -r : Calculate size of Remote Mirror bitmap

Examples for dsbitmap

For Remote Mirror volumes, dsbitmap displays sizes for both memory and disk queues:


# dsbitmap -r /dev/md/rdsk/d100
Remote Mirror bitmap sizing
 
Data volume (/dev/md/rdsk/d100) size: 2064384 blocks
Required bitmap volume size:
  Sync replication: 9 blocks
  Async replication with memory queue: 9 blocks
  Async replication with disk queue: 73 blocks
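
To have dsbitmap also test whether an existing volume is suitable to hold the bitmap, name that volume as the optional second argument. Both device names in this sketch are illustrative:


# dsbitmap -r /dev/md/rdsk/d100 /dev/md/rdsk/d101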