CHAPTER 3

Using the Sun StorageTek Availability Suite iiadm and sndradm Commands

This chapter describes using the Sun StorageTek Availability Suite commands iiadm and sndradm in a Sun Cluster environment. The Sun StorageTek Availability Suite Software Administration Guides, listed in Related Documentation, describe the full command syntax and options for iiadm and sndradm.

The Sun StorageTek Availability Suite software can use volumes that are global or local devices.

The topics in this chapter include:

Mounting and Replicating Global Volume File Systems
Global Device Command Syntax
Local Device Command Syntax
Which Host To Issue Remote Mirror Commands From?
Putting All Cluster Volume Sets in an I/O Group
Preserving Point-in-Time Copy Volume Data


Mounting and Replicating Global Volume File Systems

If a volume contains a file system and you wish to replicate the file system using the Sun StorageTek Availability Suite software, you must create and mount a related global file system on all cluster nodes. These steps ensure that the file system is available to all nodes and hosts when you copy or update the volume sets.



Note - See the Sun Cluster documentation for information about administering cluster file systems, including creating and mounting global file systems. See also the mount(1M) and mount_ufs(1M) commands.



Use the following steps to create and mount a related global file system on all cluster nodes:

1. Create the file systems on the appropriate diskset metadevices or disk group volumes.


# newfs raw-disk-device

For example, using the VERITAS Volume Manager, you might specify raw-disk-device as /dev/vx/rdsk/sndrdg/vol01.

2. On each node, create a mount point directory for the file system.


# mkdir -p /global/device-group/mount-point

3. On each node, add an entry to the /etc/vfstab file for the mount point and use the global mount option.
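For example, the new /etc/vfstab entry might resemble the following; the device group and mount point names are illustrative placeholders, and the global keyword in the mount options field is what makes the file system available to all cluster nodes:

```shell
#device to mount          device to fsck             mount point           FS   fsck  mount    mount
#                                                                          type pass  at boot  options
/dev/vx/dsk/sndrdg/vol01  /dev/vx/rdsk/sndrdg/vol01  /global/sndrdg/vol01  ufs  2     yes      global,logging
```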

4. On a cluster node, use sccheck(1M) to verify the mount points and other entries.

5. From any node in the cluster, mount the file system.


# mount /global/device-group/mount-point

6. Verify that the file system is mounted using the mount command with no options.
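For example, filtering the mount output for the mount point should show the global keyword in the option list. The mount point, device, and option list below are illustrative, not taken from this guide:

```shell
# mount | grep /global/sndrdg/vol01
/global/sndrdg/vol01 on /dev/vx/dsk/sndrdg/vol01 read/write/setuid/global/...
```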


Global Device Command Syntax



Note - During the initial enable of the remote mirror or point-in-time copy volume sets, you can optionally specify the global device disk group with the -C tag cluster option when you use the iiadm or sndradm commands. As this section shows, however, you do not have to use the -C tag cluster option. Also see The C tag and -C tag Options.



The Sun StorageTek Availability Suite software automatically derives the disk device group name from the volume path when you first enable volume sets. During this initial enable operation, the Remote Mirror and Point-in-Time Copy software creates a configuration entry for each volume set. Part of the entry is the disk device group name for use in a cluster.

The Remote Mirror software shows this name as C tag, where tag is the disk device group name. The Point-in-Time Copy software shows this name as Cluster tag: tag.

The C tag and -C tag Options

C tag is displayed as part of a volume set's configuration information as shown in Global Device Command Syntax.

Typically, the Sun StorageTek Availability Suite software derives the disk device group name from the volume path and does not require the -C tag option.

Use the -C tag command option (or the C tag volume set option) to execute the iiadm and sndradm commands on the enabled volume sets in disk device group tag when the disk device group name cannot be derived from the volume path. The commands are not executed on any other volume sets in your configuration; -C tag excludes volume sets outside the tag disk device group from the specified operation.

For example, the following command makes a point-in-time copy volume set in the iigrp2 disk device group wait for all copy or update operations to finish before you can issue other point-in-time copy commands.


# iiadm -w /dev/vx/rdsk/iigrp2/nfsvol-shadow -C iigrp2

Remote Mirror Example

When you enable this remote mirror volume set where host1 is a logical failover host name:


# sndradm -e host1 /dev/vx/rdsk/sndrdg/datavol /dev/vx/rdsk/sndrdg/datavolbm1 \
host2 /dev/rdsk/c1t3d0s0 /dev/rdsk/c1t2d0s4 ip sync

The corresponding configuration information, as shown by the sndradm -i command, is:


# sndradm -i
 
host1 /dev/vx/rdsk/sndrdg/datavol /dev/vx/rdsk/sndrdg/datavolbm1 \
host2 /dev/rdsk/c1t3d0s0 /dev/rdsk/c1t2d0s4 ip sync \
C sndrdg

The C portion of the entry shows a disk device group name sndrdg.

Point-in-Time Copy Example

When you enable a point-in-time copy volume set on a cluster node (logical failover host):


# iiadm -e ind /dev/vx/rdsk/iidg/c1t3d0s0 /dev/vx/rdsk/iidg/c1t3d0s4 \
/dev/vx/rdsk/iidg/c1t2d0s5

the corresponding configuration, as shown by the iiadm -i command, is:


# iiadm -i
 
/dev/vx/rdsk/iidg/c1t3d0s0: (master volume)
/dev/vx/rdsk/iidg/c1t3d0s4: (shadow volume)
/dev/vx/rdsk/iidg/c1t2d0s5: (bitmap volume)
Cluster tag: iidg
Independent copy
Volume size:   208278
Percent of bitmap set: 0

The Cluster tag entry shows the derived disk device group name iidg.


Local Device Command Syntax



Note - Enabling a local disk device group named local prevents you from configuring a cluster disk device group named local.



To place a volume set in the local disk device group when you enable it, include C local in the volume set definition, where vol-set is:


phost pdev pbitmap shost sdev sbitmap ip {sync | async} [g io-groupname][C local]

The local disk device group is local to the individual cluster node and is not defined in a cluster disk or resource group. Local devices do not fail over and switch back. This initial configuration is similar to using the Sun StorageTek Availability Suite software in a non-clustered environment.

When you enable a volume set with the local disk device group, its configuration entry includes the name of its host machine.
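As a hypothetical sketch of the vol-set syntax above, a remote mirror volume set whose volumes are local to one node could be enabled with the C local tag. Every host name and device path here is a placeholder, not an example from this guide:

```shell
# sndradm -e host1 /dev/rdsk/c1t90d0s0 /dev/rdsk/c1t90d0s1 \
host2 /dev/rdsk/c1t90d0s0 /dev/rdsk/c1t90d0s1 ip sync C local
```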




Caution - Volumes and bitmaps used in a local remote mirror volume set cannot reside in a shared disk device group or metaset.



Point-in-Time Copy Example

When you enable this Point-in-Time Copy volume set where local indicates a disk device group:


# iiadm -C local -e ind /dev/rdsk/c1t90d0s5 /dev/rdsk/c1t90d0s6 \
/dev/rdsk/c1t90d0s7

The corresponding configuration, as shown by the iiadm -i command, is:


# iiadm -i
/dev/rdsk/c1t90d0s5: (master volume)
/dev/rdsk/c1t90d0s6: (shadow volume)
/dev/rdsk/c1t90d0s7: (bitmap volume)
Cluster tag: localhost
Independent copy
Volume size:   208278
Percent of bitmap set: 0

where localhost is the local host name as returned by the hostname(1) command.

The corresponding configuration information as shown by the dscfg -l command is:


# dscfg -l | grep /dev/rdsk/c1t90d0s5
ii: /dev/rdsk/c1t90d0s5 /dev/rdsk/c1t90d0s6 /dev/rdsk/c1t90d0s7 I - - - -

Which Host To Issue Remote Mirror Commands From?

The Sun StorageTek Availability Suite software requires that you issue the iiadm or sndradm commands from the node that is the current primary host for the disk device group that the command applies to.

In a clustered environment, you can issue the command from the node mastering the disk device group you specified in Step 2 in To configure Sun Cluster for HAStorage or HAStoragePlus.

When you enable the Remote Mirror software for the first time, issue the sndradm enable command from the primary and secondary hosts. See TABLE 3-1.


TABLE 3-1 Which Host to Issue Remote Mirror Commands From

Assign a new bitmap to a volume set.
  Issue from: primary and secondary hosts.
  Perform this command first on the host where the new bitmap resides and is being assigned, and then perform it on the other host.

Disable the Remote Mirror software.
  Issue from: primary or secondary host.
  You can disable on one host, leave the other host enabled, and then re-enable the disabled host. Perform this operation on both hosts if you are deleting a volume set.

Enable the Remote Mirror software.
  Issue from: primary and secondary hosts.
  When enabling the Remote Mirror software for the first time, issue the command from both hosts.

Full forward or reverse synchronization (copy).
  Issue from: primary host.
  Ensure that both hosts are enabled.

Forward or reverse synchronization (update).
  Issue from: primary host.
  Ensure that both hosts are enabled.

Log.
  Issue from: primary host, or primary or secondary host.
  Perform on the primary host only if a synchronization is in progress. Perform on the secondary host if the primary host failed. Perform on either host if no synchronization is in progress.

Toggle the autosynchronization state.
  Issue from: primary host.

Update an I/O group.
  Issue from: primary and secondary hosts.


Putting All Cluster Volume Sets in an I/O Group



Note - Placing volume sets in an I/O group does not affect the cluster operation of volume sets configured in disk device and resource groups.






Caution - Do not reverse synchronize the primary volume from more than one secondary volume or host at a time. You can group one-to-many sets that share a common primary volume into a single I/O group to forward synchronize all sets simultaneously instead of issuing a separate command for each set.

You cannot use the same technique to reverse synchronize volume sets, however. In this case, you must issue a separate command for each set and reverse update the primary volume by using a specific secondary volume.
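As an illustrative sketch of this distinction: a one-to-many configuration can be forward synchronized with a single group command, while reverse updates must name one set at a time. The group name and set specification below are placeholders, and the option letters should be confirmed against the sndradm man page:

```shell
# Forward update every volume set in the I/O group with a single command:
# sndradm -g onetomany -u

# Reverse updates must be issued per set, naming one specific secondary:
# sndradm -u -r shost:sdev
```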



The Remote Mirror and Point-in-Time Copy software enables you to assign volume sets to I/O groups. Instead of issuing one command for each volume set, you can assign the volume sets to a single I/O group and then issue one command that operates on every volume set in that group.

Like the -C tag and C tag options, the I/O group name excludes all other enabled volume sets from the operations you specify.

In a clustered environment, you can assign some or all volume sets in a specific disk device group to an I/O group when you enable each volume set.


To Place Volume Sets in an I/O Group

1. Enable three Point-in-Time Copy volume sets and place them in an I/O group named cluster1.


# iiadm -g cluster1 -e ind /dev/rdsk/iigrp2/c1t3d0s0 \
/dev/rdsk/iigrp2/c1t3d0s4 /dev/rdsk/iigrp2/c1t2d0s5
 
# iiadm -g cluster1 -e dep /dev/rdsk/iigrp2/c1t4d0s0 \
/dev/rdsk/iigrp2/c1t4d0s4 /dev/rdsk/iigrp2/c1t3d0s5
 
# iiadm -g cluster1 -e ind /dev/rdsk/iigrp2/c1t5d0s0 \
/dev/rdsk/iigrp2/c1t5d0s4 /dev/rdsk/iigrp2/c1t4d0s5

2. Wait for any disk write operations to complete before issuing another command.


# iiadm -g cluster1 -w

3. Allow your applications to write to the master volumes.

4. Update the shadow volumes.


# iiadm -g cluster1 -u s


Preserving Point-in-Time Copy Volume Data

If a Solaris operating environment system failure or Sun Cluster failover occurs during a point-in-time copy or update operation to the master volume (that is, while the shadow volume is copying data to the master volume with iiadm -c m or updating it with iiadm -u m), the master volume might be left in an inconsistent state because the copy or update operation is incomplete.

To avoid or reduce the risk of inconsistent data if a system failover occurs during such a copy or update operation, perform the following before performing the shadow volume-to-master volume copy or update operation:

1. Create a second independent shadow volume copy of the master volume by issuing an iiadm -e ind command.

This operation results in a full shadow volume copy of the master volume data.

2. Ensure that all copy or update operations to this second shadow volume are finished by issuing a wait command (iiadm -w shadowvol) after issuing the iiadm -e ind command.

You can now perform the copy or update operation from the original shadow volume to the master volume. If a system failure or failover occurs during this operation, you at least have a known good copy of your original master volume data. When this operation is complete, you can keep the second shadow volume under point-in-time copy control or return it to your storage pool.
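The precaution above can be sketched as the following command sequence; all volume paths here are hypothetical placeholders:

```shell
# 1. Create a second, independent shadow copy of the master volume:
# iiadm -e ind /dev/rdsk/c1t1d0s0 /dev/rdsk/c1t1d0s1 /dev/rdsk/c1t1d0s2

# 2. Wait until the full copy to that second shadow volume finishes:
# iiadm -w /dev/rdsk/c1t1d0s1

# 3. Only then update the master from the original shadow volume; if a
#    failover interrupts this step, the second shadow still holds a
#    known good copy of the master volume data:
# iiadm -u m /dev/rdsk/c1t1d0s3
```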