Solaris Volume Manager Administration Guide

Chapter 19 Disk Sets (Overview)

This chapter provides conceptual information about disk sets. For information about performing related tasks, see Chapter 20, Disk Sets (Tasks).

This chapter includes the following information:

    What Do Disk Sets Do?

    How Does Solaris Volume Manager Manage Disk Sets?

    Disk Set Name Requirements

    Example—Two Shared Disk Sets

    Background Information for Disk Sets

    Administering Disk Sets

    Scenario—Disk Sets

What Do Disk Sets Do?

A shared disk set, or simply disk set, is a set of disk drives that contain volumes and hot spares that can be shared exclusively, but not concurrently, by multiple hosts. Additionally, disk sets provide a separate namespace within which Solaris Volume Manager volumes can be managed.

A disk set supports data redundancy and data availability. If one host fails, another host can take over the failed host's disk set. (This type of configuration is known as a failover configuration.) Although each host can control the set of disks, only one host can control it at a time.


Note –

Disk sets are supported on both SPARC based and x86 based platforms.



Note –

Disk sets are intended, in part, for use with Sun Cluster, Solstice HA (High Availability), or another supported third-party HA framework. Solaris Volume Manager by itself does not provide all the functionality necessary to implement a failover configuration.


How Does Solaris Volume Manager Manage Disk Sets?

In addition to the shared disk set, each host has a local disk set. The local disk set consists of all of the disks on a host that are not in a shared disk set. A local disk set belongs exclusively to a specific host. The local disk set contains the state database for that specific host's configuration.
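
For example, you can display the state database replicas in a host's local disk set by running the metadb command without a set name:

# metadb -i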

Volumes and hot spare pools in a shared disk set must be built on drives from within that disk set. Once you have created a volume within the disk set, you can use the volume just as you would a physical slice. However, disk sets do not support mounting file systems from the /etc/vfstab file.

A file system that resides on a volume in a disk set cannot be mounted automatically at boot with the /etc/vfstab file. The necessary disk set RPC daemons (rpc.metad and rpc.metamhd) do not start early enough in the boot process to permit this. Additionally, the ownership of a disk set is lost during a reboot.
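
For example, assuming a disk set named blue that already contains the drive c1t6d0, you could create a simple concatenation, build a file system on it, and mount the file system manually. The volume name d0 and the mount point /export/shared are used here only for illustration:

# metainit -s blue d0 1 1 c1t6d0s0
# newfs /dev/md/blue/rdsk/d0
# mount /dev/md/blue/dsk/d0 /export/shared

Because the /etc/vfstab file cannot be used for volumes in a disk set, such file systems must be mounted by hand, or by the HA framework, after the disk set is taken.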

Similarly, volumes and hot spare pools in the local disk set can consist only of drives from within the local disk set.

When you add disks to a disk set, Solaris Volume Manager automatically creates the state database replicas on the disk set. When a drive is accepted into a disk set, Solaris Volume Manager might repartition the drive so that the state database replica for the disk set can be placed on the drive (see Automatic Disk Partitioning).

Unlike local disk set administration, you do not need to manually create or delete disk set state databases. Solaris Volume Manager places one state database replica (on slice 7) on each drive across all drives in the disk set, up to a maximum of 50 total replicas in the disk set.
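
As a brief sketch, assuming an existing disk set named blue, adding two drives and then displaying the replicas that Solaris Volume Manager created automatically might look like the following (the drive names are placeholders):

# metaset -s blue -a c1t6d0 c2t6d0
# metadb -s blue -i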


Note –

Although disk sets are supported in single-host configurations, they are often not appropriate for “local” (not dual-connected) use. Two common exceptions are the use of disk sets to provide a more manageable namespace for logical volumes, and to more easily manage storage on a Storage Area Network (SAN) fabric (see Scenario—Disk Sets).


Automatic Disk Partitioning

When you add a new disk to a disk set, Solaris Volume Manager checks the disk format and, if necessary, repartitions the disk to ensure that the disk has an appropriately configured slice 7 with adequate space for a state database replica. The precise size of slice 7 depends on the disk geometry, but it will be no less than 4 Mbytes, and probably closer to 6 Mbytes (depending on where the cylinder boundaries lie).


Note –

The minimum size for slice 7 will likely change in the future, based on a variety of factors, including the size of the state database replica and the information to be stored in the state database replica.


For use in disk sets, disks must have a slice 7 that meets these criteria:

    The slice starts at cylinder 0.

    The slice includes enough space for a state database replica (at least 4 Mbytes).

If the existing partition table does not meet these criteria, Solaris Volume Manager will repartition the disk. A small portion of each drive is reserved in slice 7 for use by Solaris Volume Manager. The remainder of the space on each drive is placed into slice 0. Any existing data on the disks is lost by repartitioning.


Tip –

After you add a drive to a disk set, you may repartition it as necessary, with the exception that slice 7 is not altered in any way.


The minimum size for slice 7 varies with disk geometry, but it is always equal to or greater than 4 Mbytes.

The following output from the prtvtoc command shows a disk before it is added to a disk set.


[root@lexicon:apps]$ prtvtoc /dev/rdsk/c1t6d0s0
* /dev/rdsk/c1t6d0s0 partition map
*
* Dimensions:
*     512 bytes/sector
*     133 sectors/track
*      27 tracks/cylinder
*    3591 sectors/cylinder
*    4926 cylinders
*    4924 accessible cylinders
*
* Flags:
*   1: unmountable
*  10: read-only
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       0      2    00          0   4111695   4111694
       1      3    01    4111695   1235304   5346998
       2      5    01          0  17682084  17682083
       3      0    00    5346999   4197879   9544877
       4      0    00    9544878   4197879  13742756
       5      0    00   13742757   3939327  17682083


Note –

If you have disk sets that you upgraded from Solstice DiskSuite software, the default state database replica size on those sets will be 1034 blocks, not the 8192 block size from Solaris Volume Manager. Also, slice 7 on the disks that were added under Solstice DiskSuite will be correspondingly smaller than slice 7 on disks that were added under Solaris Volume Manager.


After you add the disk to a disk set, the output of prtvtoc looks like the following:


[root@lexicon:apps]$ prtvtoc /dev/rdsk/c1t6d0s0
* /dev/rdsk/c1t6d0s0 partition map
*
* Dimensions:
*     512 bytes/sector
*     133 sectors/track
*      27 tracks/cylinder
*    3591 sectors/cylinder
*    4926 cylinders
*    4924 accessible cylinders
*
* Flags:
*   1: unmountable
*  10: read-only
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       0      0    00      10773  17671311  17682083
       7      0    01          0     10773     10772
[root@lexicon:apps]$ 
If the disks that you add to a disk set have an acceptable slice 7 (one that starts at cylinder 0 and has sufficient space for the state database replica), they are not repartitioned.

Disk Set Name Requirements

Disk set component names are similar to other Solaris Volume Manager component names, but the disk set name is included as part of the name.

Table 19–1 Example Volume Names

/dev/md/blue/dsk/d0       Block volume d0 in disk set blue
/dev/md/blue/dsk/d1       Block volume d1 in disk set blue
/dev/md/blue/rdsk/d126    Raw volume d126 in disk set blue
/dev/md/blue/rdsk/d127    Raw volume d127 in disk set blue

Similarly, hot spare pools have the disk set name as part of the hot spare name.
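
For example, assuming a disk set named blue, you can display the volumes and hot spare pools that belong to the set by passing the set name to the status commands:

# metastat -s blue
# metahs -s blue -i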

Example—Two Shared Disk Sets

Figure 19–1 shows an example configuration that uses two disk sets.

In this configuration, Host A and Host B share disk sets A and B. They each have their own local disk set, which is not shared. If Host A fails, Host B can take over control of Host A's shared disk set (Disk set A). Likewise, if Host B fails, Host A can take control of Host B's shared disk set (Disk set B).

Figure 19–1 Disk Sets Example

Diagram shows how two hosts can share some disks through shared disk sets and retain exclusive use of other disks in local disk sets.

Background Information for Disk Sets

When working with disk sets, consider the following requirements and guidelines:

Requirements for Disk Sets

Guidelines for Disk Sets

Administering Disk Sets

Disk sets can be created and configured by using the Solaris Volume Manager command-line interface (the metaset command) or the Enhanced Storage tool within the Solaris Management Console.

After drives are added to a disk set, the disk set can be reserved (or taken) and released by hosts in the disk set. When a disk set is reserved by a host, the other host in the disk set cannot access the data on the drives in the disk set. To perform maintenance on a disk set, a host must be the owner of the disk set or have reserved the disk set. A host takes implicit ownership of the disk set by putting the first drives into the set.
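
The following sketch shows this sequence with the metaset command. The set name blue, the host names host1 and host2, and the drive names are examples only. The first command creates the set and adds both hosts, the second adds drives (making host1 the implicit owner), and the third displays the status of the set:

# metaset -s blue -a -h host1 host2
# metaset -s blue -a c1t6d0 c2t6d0
# metaset -s blue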

Reserving a Disk Set

Before a host can use the drives in a disk set, the host must reserve the disk set. There are two methods of reserving a disk set: safely, in which the reservation fails if another host already has the set reserved, and forcibly, in which the set is taken even if another host currently has it reserved (generally used when the other host is down or not communicating).


Note –

If a host determines unexpectedly that a drive is not reserved (perhaps because another host using the disk set forcibly took the drive), the host will panic. This behavior helps to minimize data loss, which would occur if two hosts were to access the same drive simultaneously.


For more information about taking or reserving a disk set, see How to Take a Disk Set.
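
For example, assuming a disk set named blue, the following commands take the set safely and take it forcibly, respectively:

# metaset -s blue -t
# metaset -s blue -t -f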

Releasing a Disk Set

Releasing a disk set can be useful when you perform maintenance on the physical drives in the disk set. When a host releases a disk set, that host can no longer access the drives in the set. If both hosts in a disk set release the set, neither host can access the drives in the set.

For more information about releasing a disk set, see How to Release a Disk Set.
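
For example, assuming a disk set named blue, the owning host releases the set with the -r option:

# metaset -s blue -r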

Scenario—Disk Sets

The following example, drawing on the sample system shown in Chapter 4, Configuring and Using Solaris Volume Manager (Scenario), describes how disk sets should be used to manage storage that resides on a SAN (Storage Area Network) fabric.

Assume that the sample system has an additional controller that connects to a fiber switch and SAN storage. Storage on the SAN fabric is not available to the system as early in the boot process as other devices, such as SCSI and IDE disks, so Solaris Volume Manager would report logical volumes on the fabric as unavailable at boot. However, by adding the storage to a disk set and then using the disk set tools to manage it, this problem with boot-time availability is avoided. In addition, the fabric-attached storage can be managed within its own disk set namespace, separate from the local storage.
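
As a sketch of this scenario, the fabric-attached drives might be placed in their own disk set and managed there. The set name sanset, the host name lexicon, and the controller c3 drive names below are hypothetical:

# metaset -s sanset -a -h lexicon
# metaset -s sanset -a c3t1d0 c3t2d0 c3t3d0
# metainit -s sanset d10 1 1 c3t1d0s0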