Solaris Volume Manager Administration Guide

Chapter 3 Solaris Volume Manager Overview

This chapter explains the overall structure of Solaris Volume Manager and contains the following information:

What's New in Solaris Volume Manager

This section describes new features for working with Solaris Volume Manager in this Solaris release.

For a complete listing of new Solaris features and a description of Solaris releases, see What’s New in Solaris Express.

Support for Descriptive Names

Solaris Express 4/06: Solaris Volume Manager has been enhanced to include the use of descriptive names for both volumes and hot spare pools. System administrators can now name volumes and hot spare pools by using any name that follows the naming guidelines.

A new option has been added to the metastat command to assist in identifying volumes and hot spare pools with descriptive names. The metastat -D command lists all of the volumes and hot spare pools with descriptive names. This information is useful if it becomes necessary to move your storage to a previous release of the Solaris OS that does not support the use of descriptive names.
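For instance, before moving storage to an earlier Solaris release, you might check for descriptive names with commands such as the following (the disk set name shown is hypothetical):

# metastat -D
# metastat -s admins -D

The first command reports descriptively named volumes and hot spare pools in the local set; the second checks a specific disk set.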

For more information, see Volume Names.

Introduction to Solaris Volume Manager

Solaris Volume Manager is a software product that lets you manage large numbers of disks and the data on those disks. Although there are many ways to use Solaris Volume Manager, most tasks include the following:

In some instances, Solaris Volume Manager can also improve I/O performance.

For information on the types of disks supported in the Solaris operating system, see Chapter 11, Managing Disks (Overview), in System Administration Guide: Devices and File Systems.

How Solaris Volume Manager Manages Storage

Solaris Volume Manager uses virtual disks to manage physical disks and their associated data. In Solaris Volume Manager, a virtual disk is called a volume. For historical reasons, some command-line utilities also refer to a volume as a metadevice.

From the perspective of an application or a file system, a volume is functionally identical to a physical disk. Solaris Volume Manager converts I/O requests directed at a volume into I/O requests to the underlying member disks.

Solaris Volume Manager volumes are built from disk slices or from other Solaris Volume Manager volumes. An easy way to build volumes is to use the graphical user interface (GUI) that is built into the Solaris Management Console. The Enhanced Storage tool within the Solaris Management Console presents you with a view of all the existing volumes. By following the steps in wizards, you can easily build any kind of Solaris Volume Manager volume or component. You can also build and modify volumes by using Solaris Volume Manager command-line utilities.

For example, if you need more storage capacity as a single volume, you could use Solaris Volume Manager to make the system treat a collection of slices as one larger volume. After you create a volume from these slices, you can immediately begin using the volume just as you would use any “real” slice or device.
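As a sketch, the following command combines two slices (the slice names are hypothetical) into one larger concatenated volume:

# metainit d25 2 1 c0t1d0s2 1 c0t2d0s2

The arguments 2 1 ... 1 ... tell the metainit command to build the volume from two stripes of one slice each, which it concatenates.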

For a more detailed discussion of volumes, see Overview of Volumes.

Solaris Volume Manager can increase the reliability and availability of data by using RAID-1 (mirror) volumes and RAID-5 volumes. Solaris Volume Manager hot spares can provide another level of data availability for mirrors and RAID-5 volumes.

Once you have set up your configuration, you can use the Enhanced Storage tool within the Solaris Management Console to report on its operation.

How to Administer Solaris Volume Manager

Use either of these methods to administer Solaris Volume Manager:

Note –

Do not attempt to administer Solaris Volume Manager with the command line and the GUI at the same time. Conflicting changes could be made to the configuration, and its behavior would be unpredictable. You can use both tools to administer Solaris Volume Manager, but not concurrently.

Figure 3–1 View of the Enhanced Storage Tool (Solaris Volume Manager) in the Solaris Management Console

Screen capture shows the Enhanced Storage tool. Components
are listed at the right, with the various Solaris Volume Manager tools at
the left.

Procedure: How to Access the Solaris Volume Manager Graphical User Interface (GUI)

The Solaris Volume Manager GUI (Enhanced Storage) is part of the Solaris Management Console. To access the GUI, use the following instructions:

  1. Start the Solaris Management Console on the host system by using the following command:

    % /usr/sbin/smc
  2. Double-click This Computer in the Navigation pane.

  3. Double-click Storage in the Navigation pane.

  4. Double-click Enhanced Storage in the Navigation pane to load the Solaris Volume Manager tools.

  5. If prompted to log in, log in as root or as a user who has equivalent access.

  6. Double-click the appropriate icon to manage volumes, hot spare pools, state database replicas, and disk sets.

    Tip –

    All tools in the Solaris Management Console display information in the bottom section of the console window or at the left side of a wizard panel. Choose Help at any time to find additional information about performing tasks in this interface.

Solaris Volume Manager Requirements

Solaris Volume Manager requirements include the following:

Overview of Solaris Volume Manager Components

The five basic types of components that you create with Solaris Volume Manager are volumes, soft partitions, disk sets, state database replicas, and hot spare pools. The following table gives an overview of these Solaris Volume Manager features.

Table 3–1 Summary of Solaris Volume Manager Features

Solaris Volume Manager Feature:

  • RAID-0 volume (stripe, concatenation, concatenated stripe)

  • RAID-1 (mirror) volume

  • RAID-5 volume

Definition: A group of physical slices that appear to the system as a single, logical device
Purpose: To increase storage capacity, performance, or data availability
For More Information: Overview of Volumes

Solaris Volume Manager Feature: Soft partition
Definition: A subdivision of physical slices or logical volumes that provides smaller, more manageable storage units
Purpose: To improve manageability of large storage volumes
For More Information: Chapter 12, Soft Partitions (Overview)

Solaris Volume Manager Feature: State database (state database replicas)
Definition: A database that contains configuration and status information for all volumes, hot spares, and disk sets. Solaris Volume Manager cannot operate until you have created the state database replicas.
Purpose: To store information about the state of your Solaris Volume Manager configuration
For More Information: State Database and State Database Replicas

Solaris Volume Manager Feature: Hot spare pool
Definition: A collection of reserved slices (hot spares) that are automatically substituted when a submirror or RAID-5 volume component fails
Purpose: To increase data availability for RAID-1 and RAID-5 volumes
For More Information: Hot Spare Pools

Solaris Volume Manager Feature: Disk set
Definition: A set of shared disk drives in a separate namespace that contains volumes and hot spares and that can be shared non-concurrently by multiple hosts
Purpose: To provide data redundancy and data availability and to provide a separate namespace for easier administration
For More Information: Disk Sets

Overview of Volumes

A volume is a group of physical slices that appears to the system as a single, logical device. Volumes are actually pseudo, or virtual, devices in standard UNIX® terms.

Note –

Historically, the Solstice DiskSuite™ product referred to these logical devices as metadevices. However, for simplicity and standardization, this book refers to these devices as volumes.

Classes of Volumes

You create a volume as a RAID-0 (concatenation or stripe) volume, a RAID-1 (mirror) volume, a RAID-5 volume, or a soft partition.

You can use either the Enhanced Storage tool within the Solaris Management Console or the command-line utilities to create and administer volumes.

The following table summarizes the classes of volumes.

Table 3–2 Classes of Volumes

Volume: RAID-0 (stripe or concatenation)
Description: Can be used directly, or as the basic building block for mirrors. RAID-0 volumes do not directly provide data redundancy.

Volume: RAID-1 (mirror)
Description: Replicates data by maintaining multiple copies. A RAID-1 volume is composed of one or more RAID-0 volumes that are called submirrors.

Volume: RAID-5
Description: Replicates data by using parity information. In the case of disk failure, the missing data can be regenerated by using available data and the parity information. A RAID-5 volume is generally composed of slices. One slice's worth of space is allocated to parity information, but the parity is distributed across all slices in the RAID-5 volume.

Volume: Soft partition
Description: Divides a slice or logical volume into one or more smaller, extensible volumes.

How Volumes Are Used

You use volumes to increase storage capacity and data availability. In some instances, volumes can also increase I/O performance. Functionally, volumes behave the same way as slices. Because volumes look like slices, the volumes are transparent to end users, applications, and file systems. As with physical devices, volumes are accessed through block or raw device names. The volume name changes, depending on whether the block or raw device is used. See Volume Names for details about volume names.

You can use most file system commands, including mkfs, mount, umount, ufsdump, ufsrestore, and others, on volumes. You cannot use the format command, however. You can read, write, and copy files to and from a volume, as long as the volume contains a mounted file system.

Example—Volume That Consists of Two Slices

Figure 3–2 shows a volume that contains two slices, one slice from Disk A and one slice from Disk B. An application or UFS treats the volume as if it were one physical disk. Adding more slices to the volume increases its storage capacity.

Figure 3–2 Relationship Among a Volume, Physical Disks, and Slices

Diagram shows two disks, and how slices on those disks
are presented by Solaris Volume Manager as a single logical volume.

Volume and Disk Space Expansion Using the growfs Command

Solaris Volume Manager enables you to expand a volume by adding additional slices. You can use either the Enhanced Storage tool within the Solaris Management Console or the command-line interface to add a slice to an existing volume.

You can expand a mounted or unmounted UFS file system that is contained within a volume without having to halt or back up your system. Nevertheless, backing up your data is always a good idea. After you expand the volume, use the growfs command to grow the file system.

Note –

After a file system has been expanded, the file system cannot be reduced in size. The inability to reduce the size of a file system is a UFS limitation. Similarly, after a Solaris Volume Manager partition has been increased in size, it cannot be reduced.

Applications and databases that use the raw volume must have their own method to “grow” the added space so that applications can recognize it. Solaris Volume Manager does not provide this capability.

You can expand the disk space in volumes in the following ways:

The growfs command expands a UFS file system without loss of service or data. However, write access to the volume is suspended while the growfs command is running. You can expand the file system to the size of the slice or the volume that contains the file system.

The file system can be expanded to use only part of the additional disk space by using the -s size option to the growfs command.

Note –

When you expand a mirror, space is added to the mirror's underlying submirrors. The growfs command is then run on the RAID-1 volume. The general rule is that space is added to the underlying devices, and the growfs command is run on the top-level device.
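Following that rule, a mirror expansion might be sketched as follows, where d20 is a hypothetical mirror with submirrors d21 and d22 and /files is a hypothetical mount point:

# metattach d21 c0t2d0s5
# metattach d22 c1t2d0s5
# growfs -M /files /dev/md/rdsk/d20

The metattach commands add a slice to each underlying submirror, and growfs is then run on the top-level RAID-1 volume.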

Volume Names

As with physical slices, volumes have logical names that appear in the file system. Logical volume names have entries in the /dev/md/dsk directory for block devices and the /dev/md/rdsk directory for raw devices. Instead of specifying the full volume name, such as /dev/md/dsk/volume-name, you can often use an abbreviated volume name, such as d1, with any meta* command. You can generally rename a volume, as long as the volume is not currently being used and the new name is not being used by another volume. For more information, see Exchanging Volume Names.
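For example, assuming volume d10 exists and is not currently in use, it could be renamed with the metarename command:

# metarename d10 d100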

Originally, volume names had to begin with the letter “d” followed by a number (for example, d0). This format is still acceptable. The following are examples of volume names that use the “d*” naming construct:

  • /dev/md/dsk/d0 – Block volume d0

  • /dev/md/dsk/d1 – Block volume d1

  • /dev/md/rdsk/d126 – Raw volume d126

  • /dev/md/rdsk/d127 – Raw volume d127

Beginning with the Solaris Express 4/06 release, Solaris Volume Manager has been enhanced to include the use of descriptive names for naming volumes and hot spare pools. A descriptive name for a volume is a name that can be composed of a combination of the following:
  • Alphanumeric characters

  • “-” (a dash)

  • “_” (an underscore)

  • “.” (a period)

Descriptive names must begin with a letter. The words “all” and “none” are reserved and cannot be used as names for volumes or hot spare pools. You also cannot use only a “.” (period) or “..” (two periods) as the entire name. Finally, you cannot create a descriptive name that looks like a physical disk name, such as c0t0d0s0. As noted previously, you can also continue to use the “d*” naming convention. The following are examples of descriptive volume names:

  • account_stripe_1

  • mirror.data.a
When descriptive names are used in disk sets, each descriptive name must be unique within that disk set. Hot spare pools and volumes within the same disk set cannot have the same name. However, you can reuse names within different disk sets. For example, if you have two disk sets, one disk set called admins and one disk set called managers, you can create a volume named employee_files in each disk set.

The functionality of the Solaris Volume Manager commands that are used to administer volumes with descriptive names remains unchanged. You can substitute a descriptive name in any meta* command where you previously used the “d*” format. For example, to create a single-stripe volume of one slice with the name employee_files, you would type the following command at the command line:

# metainit employee_files 1 1 c0t1d0s4

If you create volumes and hot spare pools using descriptive names and then later determine that you need to use Solaris Volume Manager under previous releases of the Solaris OS, you must remove the components that are defined with descriptive names. To determine if the Solaris Volume Manager configuration on your system contains descriptive names, you can use the -D option of the metastat command. The metastat -D command lists volumes and hot spare pools that were created using descriptive names. These components must be removed from the Solaris Volume Manager configuration before the remaining configuration can be used with a release prior to the Solaris Express 4/06 release. If these components are not removed, the Solaris Volume Manager in these prior Solaris releases does not start. For more information about the -D option, see the metastat(1M) man page. For information about removing components from a configuration, see Removing RAID-1 Volumes (Unmirroring) and Removing a RAID-0 Volume.

Volume Name Guidelines

The use of a standard for your volume names can simplify administration and enable you at a glance to identify the volume type. Here are a few suggestions:

State Database and State Database Replicas

The state database is a database that stores information about the state of your Solaris Volume Manager configuration. The state database records and tracks changes made to your configuration. Solaris Volume Manager automatically updates the state database when a configuration or state change occurs. Creating a new volume is an example of a configuration change. A submirror failure is an example of a state change.

The state database is actually a collection of multiple, replicated database copies. Each copy, referred to as a state database replica, ensures that the data in the database is always valid. Multiple copies of the state database protect against data loss from single points-of-failure. The state database tracks the location and status of all known state database replicas.

Solaris Volume Manager cannot operate until you have created the state database and its state database replicas. A Solaris Volume Manager configuration must have an operating state database.

When you set up your configuration, you can locate the state database replicas on either of the following:

Solaris Volume Manager recognizes when a slice contains a state database replica, and automatically skips over the replica if the slice is used in a volume. The part of a slice reserved for the state database replica should not be used for any other purpose.

You can keep more than one copy of a state database on one slice. However, you might make the system more vulnerable to a single point-of-failure by doing so.
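As a sketch, an initial set of replicas could be created with the metadb command, placing two copies on a single dedicated slice (the slice name is hypothetical):

# metadb -a -f -c 2 c0t0d0s7

The -a option adds replicas, -f forces creation of the initial state database, and -c 2 places two copies on the slice.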

The Solaris operating system continues to function correctly if all state database replicas are deleted. However, the system loses all Solaris Volume Manager configuration data if a reboot occurs with no existing state database replicas on disk.

Hot Spare Pools

A hot spare pool is a collection of slices (hot spares) reserved by Solaris Volume Manager to be automatically substituted for failed components. These hot spares can be used in either a submirror or RAID-5 volume. Hot spares provide increased data availability for RAID-1 and RAID-5 volumes. You can create a hot spare pool with either the Enhanced Storage tool within the Solaris Management Console or the command-line interface.

When component errors occur, Solaris Volume Manager checks for the first available hot spare whose size is equal to or greater than the size of the failed component. If found, Solaris Volume Manager automatically replaces the component and resynchronizes the data. If a slice of adequate size is not found in the list of hot spares, the submirror or RAID-5 volume is considered to have failed. For more information, see Chapter 16, Hot Spare Pools (Overview).
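As an illustration, a hot spare pool containing two hypothetical slices could be created and then associated with a volume named d10 as follows:

# metainit hsp001 c2t2d0s2 c3t2d0s2
# metaparam -h hsp001 d10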

Disk Sets

A disk set is a set of shared disk drives that contains logical volumes and hot spares. Volumes and hot spare pools must be built from drives within that disk set. Once you have created a volume within the disk set, you can use the volume just as you would a physical slice.

A disk set provides data availability in a clustered environment. If one host fails, another host can take over the failed host's disk set. (This type of configuration is known as a failover configuration.) Additionally, disk sets can be used to help manage the Solaris Volume Manager namespace, and to provide ready access to network-attached storage devices.
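For example, a disk set shared by two hosts might be sketched as follows, where the set name, host names, and drive name are all hypothetical:

# metaset -s blue -a -h host1 host2
# metaset -s blue -a c1t6d0

The first command creates the disk set blue and adds the two hosts that can take ownership of it; the second adds a drive to the set.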

For more information, see Chapter 18, Disk Sets (Overview).

Solaris Volume Manager Configuration Guidelines

A poorly designed Solaris Volume Manager configuration can degrade performance. This section offers tips for achieving good performance from Solaris Volume Manager. For information on storage configuration performance guidelines, see General Performance Guidelines.

General Guidelines

File System Guidelines

Do not mount file systems on a volume's underlying slice. If a slice is used for a volume of any kind, you must not mount that slice as a file system. If possible, unmount any physical device that you intend to use as a volume before you activate the volume.

Overview of Creating Solaris Volume Manager Components

When you create a Solaris Volume Manager component, you assign physical slices to a logical Solaris Volume Manager name, such as d0. The Solaris Volume Manager components that you can create include the following:

Note –

For suggestions on how to name volumes, see Volume Names.

Prerequisites for Creating Solaris Volume Manager Components

The prerequisites for creating Solaris Volume Manager components are as follows:

Overview of Multi-Terabyte Support in Solaris Volume Manager

Starting with the Solaris 9 4/03 release, Solaris Volume Manager supports storage devices and logical volumes greater than 1 terabyte (Tbyte) on systems running a 64-bit kernel.

Note –

Use isainfo -v to determine if your system is running a 64-bit kernel. If the string “64-bit” appears, you are running a 64-bit kernel.

Solaris Volume Manager allows you to do the following:

Support for large volumes is automatic. If a device greater than 1 Tbyte is created, Solaris Volume Manager configures it appropriately and without user intervention.

Large Volume Support Limitations

Solaris Volume Manager supports large volumes (greater than 1 Tbyte) only on the Solaris 9 4/03 or later release when running a 64-bit kernel. Running a system with large volumes under a 32-bit kernel, or under a Solaris release prior to Solaris 9 4/03, affects Solaris Volume Manager functionality. Specifically, note the following:

Caution –

Do not create large volumes if you expect to run the Solaris software with a 32-bit kernel or if you expect to use a version of the Solaris OS prior to the Solaris 9 4/03 release.

Using Large Volumes

All Solaris Volume Manager commands work with large volumes. No syntax differences or special tasks are required to take advantage of large volume support. Thus, system administrators who are familiar with Solaris Volume Manager can immediately work with Solaris Volume Manager large volumes.

Tip –

If you create large volumes and later determine that you need to use Solaris Volume Manager under a previous Solaris release, or that you need to run the Solaris 9 4/03 or later release with a 32-bit kernel, you will need to remove the large volumes. Use the metaclear command under the 64-bit kernel to remove the large volumes from your Solaris Volume Manager configuration before rebooting under a previous Solaris release or under a 32-bit kernel.
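For example, assuming d80 is a large volume that is no longer in use, it could be removed under the 64-bit kernel before rebooting:

# metaclear d80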

Upgrading to Solaris Volume Manager

Solaris Volume Manager fully supports a seamless upgrade from Solstice DiskSuite versions 4.1, 4.2, and 4.2.1. Before you upgrade, make sure that all volumes are in the Okay state (not “Needs Maintenance” or “Last Erred”) and that no hot spares are in use. You do not need to do anything else for the upgrade to work: it is not necessary to change the configuration or to break down the root mirror. When you upgrade your system, the Solstice DiskSuite configuration is brought forward and is accessible after the upgrade through the Solaris Volume Manager tools.

The Solaris 10 OS introduced the Service Management Facility (SMF), which provides an infrastructure that augments the traditional UNIX start-up scripts, init run levels, and configuration files. When upgrading from a previous version of the Solaris OS, verify that the SMF services associated with Solaris Volume Manager are online. If the SMF services are not online, you might encounter problems when administering Solaris Volume Manager.

To check the SMF services associated with Solaris Volume Manager, use the following form of the svcs command:

# svcs -a |egrep "md|meta"
disabled       12:05:45 svc:/network/rpc/mdcomm:default
disabled       12:05:45 svc:/network/rpc/metamed:default
disabled       12:05:45 svc:/network/rpc/metamh:default
online         12:05:39 svc:/system/metainit:default
online         12:05:46 svc:/network/rpc/meta:default
online         12:05:48 svc:/system/fmd:default
online         12:05:51 svc:/system/mdmonitor:default

If the Solaris Volume Manager configuration consists of a local set only, then these services should be online:

  • svc:/system/metainit

  • svc:/network/rpc/meta

  • svc:/system/mdmonitor

If the Solaris Volume Manager configuration includes disk sets, then these additional services should be online:

  • svc:/network/rpc/metamed

  • svc:/network/rpc/metamh

If the Solaris Volume Manager configuration includes multi-node disk sets, then this service should be online in addition to the other services already mentioned:

  • svc:/network/rpc/mdcomm
For more information on SMF, see Chapter 14, Managing Services (Overview), in System Administration Guide: Basic Administration.