This chapter explains the overall structure of Solaris Volume Manager. This chapter contains the following information:
This section describes new features for working with Solaris Volume Manager in this Solaris release.
For a complete listing of new Solaris features and a description of Solaris releases, see What’s New in Solaris Express.
Solaris Express 4/06: Solaris Volume Manager has been enhanced to include the use of descriptive names for both volumes and hot spare pools. System administrators can now name volumes and hot spare pools by using any name that follows the naming guidelines.
A new option has been added to the metastat command to assist in identifying volumes and hot spare pools with descriptive names. The metastat -D command lists all of the volumes and hot spare pools with descriptive names. This information is useful if it becomes necessary to move your storage to a previous release of the Solaris OS that does not support the use of descriptive names.
For more information, see Volume Names.
Solaris Volume Manager is a software product that lets you manage large numbers of disks and the data on those disks. Although there are many ways to use Solaris Volume Manager, most tasks include the following:
Increasing storage capacity
Increasing data availability
Easing administration of large storage devices
In some instances, Solaris Volume Manager can also improve I/O performance.
For information on the types of disks supported in the Solaris operating system, see Chapter 11, Managing Disks (Overview), in System Administration Guide: Devices and File Systems.
Solaris Volume Manager uses virtual disks to manage physical disks and their associated data. In Solaris Volume Manager, a virtual disk is called a volume. For historical reasons, some command-line utilities also refer to a volume as a metadevice.
From the perspective of an application or a file system, a volume is functionally identical to a physical disk. Solaris Volume Manager converts I/O requests directed at a volume into I/O requests to the underlying member disks.
Solaris Volume Manager volumes are built from disk slices or from other Solaris Volume Manager volumes. An easy way to build volumes is to use the graphical user interface (GUI) that is built into the Solaris Management Console. The Enhanced Storage tool within the Solaris Management Console presents you with a view of all the existing volumes. By following the steps in wizards, you can easily build any kind of Solaris Volume Manager volume or component. You can also build and modify volumes by using Solaris Volume Manager command-line utilities.
For example, if you need more storage capacity as a single volume, you could use Solaris Volume Manager to make the system treat a collection of slices as one larger volume. After you create a volume from these slices, you can immediately begin using the volume just as you would use any “real” slice or device.
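For instance, the metainit command can combine two slices into one concatenation volume. The following is a minimal sketch; the volume name d25 and the slice names are examples only, and you should substitute slices from your own configuration:

```shell
# Create volume d25 as a concatenation of two slices
# (two stripes of one slice each form a concatenation).
# Slice names are examples; use slices available on your system.
metainit d25 2 1 c1t1d0s2 1 c1t2d0s2
```

Once created, d25 can be used anywhere a slice name is expected, such as with the newfs command.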
For a more detailed discussion of volumes, see Overview of Volumes.
Solaris Volume Manager can increase the reliability and availability of data by using RAID-1 (mirror) volumes and RAID-5 volumes. Solaris Volume Manager hot spares can provide another level of data availability for mirrors and RAID-5 volumes.
Once you have set up your configuration, you can use the Enhanced Storage tool within the Solaris Management Console to report on its operation.
Use either of these methods to administer Solaris Volume Manager:
Solaris Management Console – This tool provides a GUI to administer volume management functions. Use the Enhanced Storage tool within the Solaris Management Console. See Figure 3–1 for an example of the Enhanced Storage tool. This interface provides a graphical view of Solaris Volume Manager components, including volumes, hot spare pools, and state database replicas. This interface offers wizard-based manipulation of Solaris Volume Manager components, enabling you to quickly configure your disks or change an existing configuration.
The command line – You can use several commands to perform volume management functions. The Solaris Volume Manager core commands begin with meta, for example the metainit and metastat commands. For a list of Solaris Volume Manager commands, see Appendix B, Solaris Volume Manager Quick Reference.
Do not attempt to administer Solaris Volume Manager with the command line and the GUI at the same time. Conflicting changes could be made to the configuration, and its behavior would be unpredictable. You can use both tools to administer Solaris Volume Manager, but not concurrently.
The Solaris Volume Manager GUI (Enhanced Storage) is part of the Solaris Management Console. To access the GUI, use the following instructions:
Start the Solaris Management Console on the host system by using the following command:
% /usr/sbin/smc
Double-click This Computer in the Navigation pane.
Double-click Storage in the Navigation pane.
Double-click Enhanced Storage in the Navigation pane to load the Solaris Volume Manager tools.
If prompted to log in, log in as root or as a user who has equivalent access.
Double-click the appropriate icon to manage volumes, hot spare pools, state database replicas, and disk sets.
All tools in the Solaris Management Console display information in the bottom section of the console window or at the left side of a wizard panel. Choose Help at any time to find additional information about performing tasks in this interface.
Solaris Volume Manager requirements include the following:
You must have root privilege to administer Solaris Volume Manager. Equivalent privileges granted through the User Profile feature in the Solaris Management Console allow administration through the Solaris Management Console. However, only the root user can use the Solaris Volume Manager command-line interface.
Before you can create volumes with Solaris Volume Manager, state database replicas must exist on the Solaris Volume Manager system. A state database replica contains configuration and status information for all volumes, hot spares, and disk sets. At least three replicas should exist, and the replicas should be placed on different controllers and different disks for maximum reliability. See About the Solaris Volume Manager State Database and Replicas for more information about state database replicas. See Creating State Database Replicas for instructions on how to create state database replicas.
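The following sketch shows how initial replicas might be created with the metadb command. The slice names are examples; choosing slices on different controllers, as recommended above, improves reliability:

```shell
# Create the first state database replicas (-a adds replicas,
# -f forces creation when none yet exist).
metadb -a -f c0t0d0s7

# Add replicas on slices attached to other controllers.
metadb -a c1t0d0s7 c2t0d0s7

# Verify the replicas and their status.
metadb -i
```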
The five basic types of components that you create with Solaris Volume Manager are volumes, soft partitions, disk sets, state database replicas, and hot spare pools. The following table gives an overview of these Solaris Volume Manager features.
Table 3–1 Summary of Solaris Volume Manager Features
A volume is a group of physical slices that appears to the system as a single, logical device. Volumes are actually pseudo, or virtual, devices in standard UNIX® terms.
Historically, the Solstice DiskSuite™ product referred to these logical devices as metadevices. However, for simplicity and standardization, this book refers to these devices as volumes.
You create a volume as a RAID-0 (concatenation or stripe) volume, a RAID-1 (mirror) volume, a RAID-5 volume, or a soft partition.
You can use either the Enhanced Storage tool within the Solaris Management Console or the command-line utilities to create and administer volumes.
The following table summarizes the classes of volumes.
Table 3–2 Classes of Volumes
Volume | Description
---|---
RAID-0 volume (stripe or concatenation) | Can be used directly, or as the basic building block for mirrors. RAID-0 volumes do not directly provide data redundancy.
RAID-1 (mirror) volume | Replicates data by maintaining multiple copies. A RAID-1 volume is composed of one or more RAID-0 volumes that are called submirrors.
RAID-5 volume | Replicates data by using parity information. In the case of disk failure, the missing data can be regenerated by using available data and the parity information. A RAID-5 volume is generally composed of slices. One slice's worth of space is allocated to parity information, but the parity is distributed across all slices in the RAID-5 volume.
Soft partition | Divides a slice or logical volume into one or more smaller, extensible volumes.
You use volumes to increase storage capacity, performance, and data availability. In some instances, volumes can also increase I/O performance. Functionally, volumes behave the same way as slices. Because volumes look like slices, the volumes are transparent to end users, applications, and file systems. As with physical devices, volumes are accessed through block or raw device names. The volume name changes, depending on whether the block or raw device is used. See Volume Names for details about volume names.
You can use most file system commands, including mkfs, mount, umount, ufsdump, ufsrestore, and others, on volumes. You cannot use the format command, however. You can read, write, and copy files to and from a volume, as long as the volume contains a mounted file system.
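As a sketch of this, the commands below operate on an assumed existing volume named d25 and an example mount point; note the use of the raw device for newfs and ufsdump, and the block device for mount:

```shell
# Standard file system commands work on a volume; format(1M) does not.
newfs /dev/md/rdsk/d25                     # create a UFS file system (raw device)
mount /dev/md/dsk/d25 /files               # mount via the block device
ufsdump 0uf /dev/rmt/0 /dev/md/rdsk/d25    # back up via the raw device
```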
Figure 3–2 shows a volume that contains two slices, one slice from Disk A and one slice from Disk B. An application or UFS treats the volume as if it were one physical disk. Adding more slices to the volume increases its storage capacity.
Solaris Volume Manager enables you to expand a volume by adding additional slices. You can use either the Enhanced Storage tool within the Solaris Management Console or the command-line interface to add a slice to an existing volume.
You can expand a mounted or unmounted UFS file system that is contained within a volume without having to halt or back up your system. Nevertheless, backing up your data is always a good idea. After you expand the volume, use the growfs command to grow the file system.
After a file system has been expanded, the file system cannot be reduced in size. The inability to reduce the size of a file system is a UFS limitation. Similarly, after a Solaris Volume Manager partition has been increased in size, it cannot be reduced.
Applications and databases that use the raw volume must have their own method to “grow” the added space so that applications can recognize it. Solaris Volume Manager does not provide this capability.
You can expand the disk space in volumes in the following ways:
Adding one or more slices to a RAID-0 volume
Adding one or more slices to all submirrors of a RAID-1 volume
Expanding a soft partition with additional space from the underlying component
The growfs command expands a UFS file system without loss of service or data. However, write access to the volume is suspended while the growfs command is running. You can expand the file system to the size of the slice or the volume that contains the file system.
The file system can be expanded to use only part of the additional disk space by using the -s size option to the growfs command.
When you expand a mirror, space is added to the mirror's underlying submirrors. The growfs command is then run on the RAID-1 volume. The general rule is that space is added to the underlying devices, and the growfs command is run on the top-level device.
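Sketched with example names, the rule looks like this: space is attached to the submirrors, and growfs is run on the mirror. Here d10 is a mirror with submirrors d11 and d12, mounted at /files; all names are assumptions:

```shell
# Add a slice to each underlying submirror of mirror d10.
metattach d11 c2t2d0s2
metattach d12 c3t2d0s2

# Grow the mounted UFS file system on the top-level mirror.
growfs -M /files /dev/md/rdsk/d10
```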
As with physical slices, volumes have logical names that appear in the file system. Logical volume names have entries in the /dev/md/dsk directory for block devices and the /dev/md/rdsk directory for raw devices. Instead of specifying the full volume name, such as /dev/md/dsk/volume-name, you can often use an abbreviated volume name, such as d1, with any meta* command. You can generally rename a volume, as long as the volume is not currently being used and the new name is not being used by another volume. For more information, see Exchanging Volume Names.
Originally, volume names had to begin with the letter “d” followed by a number (for example, d0). This format is still acceptable. The following are examples of volume names that use the “d*” naming construct:
Block volume d0
Block volume d1
Raw volume d126
Raw volume d127
Beginning with the Solaris Express 4/06 release, Solaris Volume Manager has been enhanced to include the use of descriptive names for naming volumes and hot spare pools. A descriptive name for a volume is a name that can be composed of a combination of the following:
Alphanumeric characters
“-” (a dash)
“_” (an underscore)
“.” (a period)
Descriptive names must begin with a letter. The words “all” and “none” are reserved and cannot be used as names for volumes or hot spare pools. You also cannot use only a “.” (period) or “..” (two periods) as the entire name. Finally, you cannot create a descriptive name that looks like a physical disk name, such as c0t0d0s0. As noted previously, you can also continue to use the “d*” naming convention. The following are examples of descriptive volume names:
account_stripe_1
mirror.3
d100
d-100
When descriptive names are used in disk sets, each descriptive name must be unique within that disk set. Hot spare pools and volumes within the same disk set cannot have the same name. However, you can reuse names within different disk sets. For example, if you have two disk sets, one disk set called admins and one disk set called managers, you can create a volume named employee_files in each disk set.
The functionality of the Solaris Volume Manager commands that are used to administer volumes with descriptive names remains unchanged. You can substitute a descriptive name in any meta* command where you previously used the “d*” format. For example, to create a single-stripe volume of one slice with the name employee_files, you would type the following command at the command line:
# metainit employee_files 1 1 c0t1d0s4
If you create volumes and hot spare pools using descriptive names and then later determine that you need to use Solaris Volume Manager under previous releases of the Solaris OS, you must remove the components that are defined with descriptive names. To determine if the Solaris Volume Manager configuration on your system contains descriptive names, you can use the -D option of the metastat command. The metastat -D command lists volumes and hot spare pools that were created using descriptive names. These components must be removed from the Solaris Volume Manager configuration before the remaining configuration can be used with a release prior to the Solaris Express 4/06 release. If these components are not removed, the Solaris Volume Manager in these prior Solaris releases does not start. For more information about the -D option, see the metastat(1M) man page. For information about removing components from a configuration, see Removing RAID-1 Volumes (Unmirroring) and Removing a RAID-0 Volume.
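As a sketch, the check and cleanup might look as follows; employee_files is an example descriptive name, not a component that necessarily exists on your system:

```shell
# List only components that were created with descriptive names.
metastat -D

# Remove a descriptively named volume before moving the
# configuration to a release that does not support such names.
metaclear employee_files
```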
The use of a standard for your volume names can simplify administration and enable you at a glance to identify the volume type. Here are a few suggestions:
Use ranges for each type of volume. For example, assign numbers 0–20 for RAID-1 volumes, 21–40 for RAID-0 volumes, and so on.
Use a naming relationship for mirrors. For example, name mirrors with a number that ends in zero (0), and submirrors that end in one (1), two (2), and so on. For example, you might name mirrors as follows: mirror d10, submirrors d11 and d12; mirror d20, submirrors d21, d22, d23, and d24. In an example using descriptive names, you could use a naming relationship such as employee_mirror1 for a mirror with employee_sub1 and employee_sub2 comprising the submirrors.
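The mirror-naming convention described above might be put into practice as follows. This is a sketch with example slice names; the numbering (d10, d11, d12) follows the "mirror ends in zero" rule:

```shell
# Create the two submirrors as single-slice RAID-0 volumes.
metainit d11 1 1 c1t0d0s2
metainit d12 1 1 c2t0d0s2

# Create mirror d10 with one submirror, then attach the second
# so that it is synchronized with the first.
metainit d10 -m d11
metattach d10 d12
```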
Use a naming method that maps the slice number and disk number to volume numbers.
The state database is a database that stores information about the state of your Solaris Volume Manager configuration. The state database records and tracks changes made to your configuration. Solaris Volume Manager automatically updates the state database when a configuration or state change occurs. Creating a new volume is an example of a configuration change. A submirror failure is an example of a state change.
The state database is actually a collection of multiple, replicated database copies. Each copy, referred to as a state database replica, ensures that the data in the database is always valid. Multiple copies of the state database protect against data loss from single points-of-failure. The state database tracks the location and status of all known state database replicas.
Solaris Volume Manager cannot operate until you have created the state database and its state database replicas. A Solaris Volume Manager configuration must have an operating state database.
When you set up your configuration, you can locate the state database replicas on either of the following:
On dedicated slices
On slices that will later become part of volumes
Solaris Volume Manager recognizes when a slice contains a state database replica, and automatically skips over the replica if the slice is used in a volume. The part of a slice reserved for the state database replica should not be used for any other purpose.
You can keep more than one copy of a state database on one slice. However, you might make the system more vulnerable to a single point-of-failure by doing so.
The Solaris operating system continues to function correctly if all state database replicas are deleted. However, the system loses all Solaris Volume Manager configuration data if a reboot occurs with no existing state database replicas on disk.
A hot spare pool is a collection of slices (hot spares) reserved by Solaris Volume Manager to be automatically substituted for failed components. These hot spares can be used in either a submirror or RAID-5 volume. Hot spares provide increased data availability for RAID-1 and RAID-5 volumes. You can create a hot spare pool with either the Enhanced Storage tool within the Solaris Management Console or the command-line interface.
When component errors occur, Solaris Volume Manager checks for the first available hot spare whose size is equal to or greater than the size of the failed component. If found, Solaris Volume Manager automatically replaces the component and resynchronizes the data. If a slice of adequate size is not found in the list of hot spares, the submirror or RAID-5 volume is considered to have failed. For more information, see Chapter 16, Hot Spare Pools (Overview).
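A hot spare pool might be set up as in the following sketch; hsp001, the slice, and the submirror names d11 and d12 are all example names:

```shell
# Create hot spare pool hsp001 containing one spare slice.
metainit hsp001 c3t1d0s2

# Associate the pool with each submirror so that a spare is
# substituted automatically if a component fails.
metaparam -h hsp001 d11
metaparam -h hsp001 d12
```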
A disk set is a set of physical storage volumes that contain logical volumes and hot spares. Volumes and hot spare pools must be built on drives from within that disk set. Once you have created a volume within the disk set, you can use the volume just as you would a physical slice.
A disk set provides data availability in a clustered environment. If one host fails, another host can take over the failed host's disk set. (This type of configuration is known as a failover configuration.) Additionally, disk sets can be used to help manage the Solaris Volume Manager namespace, and to provide ready access to network-attached storage devices.
For more information, see Chapter 18, Disk Sets (Overview).
A poorly designed Solaris Volume Manager configuration can degrade performance. This section offers tips for achieving good performance from Solaris Volume Manager. For information on storage configuration performance guidelines, see General Performance Guidelines.
Disk and controllers – Place drives in a volume on separate drive paths, or for SCSI drives, separate host adapters. An I/O load distributed over several controllers improves volume performance and availability.
System files – Never edit or remove the /etc/lvm/mddb.cf or /etc/lvm/md.cf files.
Make sure these files are backed up regularly.
Volume Integrity – If a slice is defined as a volume, do not use the underlying slice for any other purpose, including using the slice as a dump device.
Information about disks and partitions – Keep a copy of output from the prtvtoc and metastat -p commands in case you need to reformat a bad disk or recreate your Solaris Volume Manager configuration.
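A sketch of saving this information follows; the disk name and destination paths are examples, and the copies should ideally also be kept off the system:

```shell
# Save the partition table of a disk used by Solaris Volume Manager.
prtvtoc /dev/rdsk/c0t0d0s2 > /etc/lvm/c0t0d0s2.vtoc

# Save a re-creatable description of the current configuration.
metastat -p > /etc/lvm/md.config.txt
```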
Do not mount file systems on a volume's underlying slice. If a slice is used for a volume of any kind, you must not mount that slice as a file system. If possible, unmount any physical device that you intend to use as a volume before you activate the volume.
When you create a Solaris Volume Manager component, you assign physical slices to a logical Solaris Volume Manager name, such as d0. The Solaris Volume Manager components that you can create include the following:
State database replicas
Volumes (RAID-0 (stripes, concatenations), RAID-1 (mirrors), RAID-5, soft partitions)
Hot spare pools
Disk sets
For suggestions on how to name volumes, see Volume Names.
The prerequisites for creating Solaris Volume Manager components are as follows:
Create initial state database replicas. If you have not done so, see Creating State Database Replicas.
Identify slices that are available for use by Solaris Volume Manager. If necessary, use the format command, the fmthard command, or the Solaris Management Console to repartition existing disks.
Make sure you have root privilege.
Have a current backup of all data.
If you are using the GUI, start the Solaris Management Console and navigate to the Solaris Volume Manager feature. For information, see How to Access the Solaris Volume Manager Graphical User Interface (GUI).
Starting with the Solaris 9 4/03 release, Solaris Volume Manager supports storage devices and logical volumes greater than 1 terabyte (Tbyte) on systems running a 64-bit kernel.
Use isainfo -v to determine if your system is running a 64-bit kernel. If the string “64-bit” appears, you are running a 64-bit kernel.
Solaris Volume Manager allows you to do the following:
Create, modify, and delete logical volumes built on or from logical storage units (LUNs) greater than 1 Tbyte in size.
Create, modify, and delete logical volumes that exceed 1 Tbyte in size.
Support for large volumes is automatic. If a device greater than 1 Tbyte is created, Solaris Volume Manager configures it appropriately and without user intervention.
Solaris Volume Manager only supports large volumes (greater than 1 Tbyte) on the Solaris 9 4/03 or later release when running a 64-bit kernel. Running a system with large volumes under a 32-bit kernel or under a Solaris release prior to Solaris 9 4/03 affects Solaris Volume Manager functionality. Specifically, note the following:
If a system with large volumes is rebooted under a 32-bit Solaris 9 4/03 or later kernel, the large volumes will be visible through metastat output, but they cannot be accessed, modified or deleted. In addition, new large volumes cannot be created. Any volumes or file systems on a large volume will also be unavailable.
If a system with large volumes is rebooted under a Solaris release prior to Solaris 9 4/03, Solaris Volume Manager will not start. All large volumes must be removed before Solaris Volume Manager will run under another version of the Solaris platform.
Do not create large volumes if you expect to run the Solaris software with a 32-bit kernel or if you expect to use a version of the Solaris OS prior to the Solaris 9 4/03 release.
All Solaris Volume Manager commands work with large volumes. No syntax differences or special tasks are required to take advantage of large volume support. Thus, system administrators who are familiar with Solaris Volume Manager can immediately work with Solaris Volume Manager large volumes.
If you create large volumes, then later determine that you need to use Solaris Volume Manager under a previous release of Solaris or under the 32-bit Solaris 9 4/03 or later kernel, you will need to remove the large volumes. Use the metaclear command under the 64-bit kernel to remove the large volumes from your Solaris Volume Manager configuration before rebooting under a previous Solaris release or under a 32-bit kernel.
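Sketched with an example volume name, the removal is a single metaclear invocation per large volume, run while the 64-bit kernel is still booted:

```shell
# d80 is an example name for a volume greater than 1 Tbyte.
# Run this under the 64-bit kernel, before rebooting into a
# 32-bit or earlier kernel.
metaclear d80
```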
Solaris Volume Manager fully supports seamless upgrade from Solstice DiskSuite versions 4.1, 4.2, and 4.2.1. Make sure that all volumes are in Okay state (not “Needs Maintenance” or “Last Erred”) and that no hot spares are in use. You do not need to do anything else special to Solaris Volume Manager for the upgrade to work—it is not necessary to change the configuration or break down the root mirror. When you upgrade your system, the Solstice DiskSuite configuration will be brought forward and will be accessible after upgrade through Solaris Volume Manager tools.
The Solaris 10 OS introduced the Service Management Facility (SMF), which provides an infrastructure that augments the traditional UNIX start-up scripts, init run levels, and configuration files. When upgrading from a previous version of the Solaris OS, verify that the SMF services associated with Solaris Volume Manager are online. If the SMF services are not online, you might encounter problems when administering Solaris Volume Manager.
To check the SMF services associated with Solaris Volume Manager, use the following form of the svcs command:
# svcs -a | egrep "md|meta"
disabled       12:05:45 svc:/network/rpc/mdcomm:default
disabled       12:05:45 svc:/network/rpc/metamed:default
disabled       12:05:45 svc:/network/rpc/metamh:default
online         12:05:39 svc:/system/metainit:default
online         12:05:46 svc:/network/rpc/meta:default
online         12:05:48 svc:/system/fmd:default
online         12:05:51 svc:/system/mdmonitor:default
If the Solaris Volume Manager configuration consists of a local set only, then these services should be online:
svc:/system/metainit
svc:/network/rpc/meta
svc:/system/mdmonitor
If the Solaris Volume Manager configuration includes disk sets, then these additional services should be online:
svc:/network/rpc/metamed
svc:/network/rpc/metamh
If the Solaris Volume Manager configuration includes multi-node disk sets, then this service should be online in addition to the services already mentioned:
svc:/network/rpc/mdcomm
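If a required service is found disabled, it can be brought online with svcadm. The sketch below uses the metamed service as an example; enable whichever service your configuration requires:

```shell
# Enable a disabled service and confirm it has come online.
svcadm enable svc:/network/rpc/metamed:default
svcs svc:/network/rpc/metamed:default
```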
For more information on SMF, see Chapter 14, Managing Services (Overview), in System Administration Guide: Basic Administration.