Solaris Volume Manager Administration Guide

Chapter 22 Maintaining Solaris Volume Manager (Tasks)

This chapter provides information about performing general storage administration maintenance tasks with Solaris Volume Manager.

This is a list of the information in this chapter:

  • Solaris Volume Manager Maintenance (Task Map)

  • Viewing the Solaris Volume Manager Configuration

  • Renaming Volumes

  • Working with Configuration Files

  • Changing Solaris Volume Manager Defaults

  • Expanding a File System Using the growfs Command

  • Overview of Replacing and Enabling Components in RAID 1 and RAID 5 Volumes

Solaris Volume Manager Maintenance (Task Map)

The following task map identifies the procedures needed to maintain Solaris Volume Manager.

Task: View the Solaris Volume Manager configuration
Description: Use the Solaris Volume Manager GUI or the metastat command to view the system configuration.
Instructions: How to View the Solaris Volume Manager Volume Configuration

Task: Rename a volume
Description: Use the Solaris Volume Manager GUI or the metarename command to rename a volume.
Instructions: How to Rename a Volume

Task: Create configuration files
Description: Use the metastat -p command and the metadb command to create configuration files.
Instructions: How to Create Configuration Files

Task: Initialize Solaris Volume Manager from configuration files
Description: Use the metainit command to initialize Solaris Volume Manager from configuration files.
Instructions: How to Initialize Solaris Volume Manager From a Configuration File

Task: Increase the number of possible volumes
Description: Edit the /kernel/drv/md.conf file to increase the number of possible volumes.
Instructions: How to Increase the Number of Default Volumes

Task: Increase the number of possible disk sets
Description: Edit the /kernel/drv/md.conf file to increase the number of possible disk sets.
Instructions: How to Increase the Number of Default Disk Sets

Task: Grow a file system
Description: Use the growfs command to grow a file system.
Instructions: How to Expand a File System

Task: Enable components
Description: Use the Solaris Volume Manager GUI or the metareplace command to enable components.
Instructions: Enabling a Component

Task: Replace components
Description: Use the Solaris Volume Manager GUI or the metareplace command to replace components.
Instructions: Replacing a Component With Another Available Component

Viewing the Solaris Volume Manager Configuration


Tip –

The metastat command does not sort output. Pipe the output of the metastat -p command to the sort or grep commands for a more manageable listing of your configuration.
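
For example, to produce a sorted listing, or to display only the entries for a single volume (the volume name d10 here is only an illustration), you could use pipelines similar to the following:


# metastat -p | sort
# metastat -p | grep d10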


How to View the Solaris Volume Manager Volume Configuration

Step

    To view the volume configuration, use one of the following methods:

    • From the Enhanced Storage tool within the Solaris Management Console, open the Volumes node. For more information, see the online help.

    • Use the following format of the metastat command:


      metastat -p -i component-name
      
      • -p specifies to output a condensed summary, suitable for use in creating the md.tab file.

      • -i specifies to verify that all devices can be accessed.

      • component-name is the name of the volume to view. If no volume name is specified, a complete list of components will be displayed.


Example 22–1 Viewing the Solaris Volume Manager Volume Configuration

The following example illustrates output from the metastat command.


# metastat
d50: RAID
    State: Okay         
    Interlace: 32 blocks
    Size: 20985804 blocks
Original device:
    Size: 20987680 blocks
        Device              Start Block  Dbase State        Reloc  Hot Spare
        c1t4d0s5                 330     No    Okay         Yes    
        c1t5d0s5                 330     No    Okay         Yes    
        c2t4d0s5                 330     No    Okay         Yes    
        c2t5d0s5                 330     No    Okay         Yes    
        c1t1d0s5                 330     No    Okay         Yes    
        c2t1d0s5                 330     No    Okay         Yes    

d1: Concat/Stripe
    Size: 4197879 blocks
    Stripe 0:
        Device              Start Block  Dbase  Reloc
        c1t2d0s3                   0     No     Yes

d2: Concat/Stripe
    Size: 4197879 blocks
    Stripe 0:
        Device              Start Block  Dbase  Reloc
        c2t2d0s3                   0     No     Yes


d80: Soft Partition
    Device: d70
    State: Okay
    Size: 2097152 blocks
        Extent              Start Block              Block count
             0                        1                  2097152

d81: Soft Partition
    Device: d70
    State: Okay
    Size: 2097152 blocks
        Extent              Start Block              Block count
             0                  2097154                  2097152

d70: Mirror
    Submirror 0: d71
      State: Okay         
    Submirror 1: d72
      State: Okay         
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 12593637 blocks

d71: Submirror of d70
    State: Okay         
    Size: 12593637 blocks
    Stripe 0:
        Device              Start Block  Dbase State        Reloc  Hot Spare
        c1t3d0s3                   0     No    Okay         Yes    
    Stripe 1:
        Device              Start Block  Dbase State        Reloc  Hot Spare
        c1t3d0s4                   0     No    Okay         Yes    
    Stripe 2:
        Device              Start Block  Dbase State        Reloc  Hot Spare
        c1t3d0s5                   0     No    Okay         Yes    


d72: Submirror of d70
    State: Okay         
    Size: 12593637 blocks
    Stripe 0:
        Device              Start Block  Dbase State        Reloc  Hot Spare
        c2t3d0s3                   0     No    Okay         Yes    
    Stripe 1:
        Device              Start Block  Dbase State        Reloc  Hot Spare
        c2t3d0s4                   0     No    Okay         Yes    
    Stripe 2:
        Device              Start Block  Dbase State        Reloc  Hot Spare
        c2t3d0s5                   0     No    Okay         Yes    

hsp010: is empty

hsp014: 2 hot spares
        Device              Status      Length          Reloc
        c1t2d0s1            Available    617652 blocks  Yes
        c2t2d0s1            Available    617652 blocks  Yes

hsp050: 2 hot spares
        Device              Status      Length          Reloc
        c1t2d0s5            Available    4197879 blocks Yes
        c2t2d0s5            Available    4197879 blocks Yes

hsp070: 2 hot spares
        Device              Status      Length          Reloc
        c1t2d0s4            Available    4197879 blocks Yes
        c2t2d0s4            Available    4197879 blocks Yes

Device Relocation Information:
Device              Reloc       Device ID
c1t2d0              Yes         id1,sd@SSEAGATE_ST39204LCSUN9.0G3BV0N1S200002103AF29
c2t2d0              Yes         id1,sd@SSEAGATE_ST39204LCSUN9.0G3BV0P64Z00002105Q6J7
c1t1d0              Yes         id1,sd@SSEAGATE_ST39204LCSUN9.0G3BV0N1EM00002104NP2J
c2t1d0              Yes         id1,sd@SSEAGATE_ST39204LCSUN9.0G3BV0N93J000071040L3S
c0t0d0              Yes         id1,dad@s53554e575f4154415f5f53543339313430412525415933
 

Example—Viewing a Multi-Terabyte Solaris Volume Manager Volume

The following example illustrates output from the metastat command for a large storage volume (11 TB).


# metastat d0
 d0: Concat/Stripe
    Size: 25074708480 blocks (11 TB)
    Stripe 0: (interlace: 32 blocks)
        Device      Start Block  Dbase  Reloc
        c27t8d3s0          0     No     Yes
        c4t7d0s0       12288     No     Yes
    Stripe 1: (interlace: 32 blocks)
        Device      Start Block  Dbase  Reloc
        c13t2d1s0      16384     No     Yes
        c13t4d1s0      16384     No     Yes
        c13t6d1s0      16384     No     Yes
        c13t8d1s0      16384     No     Yes
        c16t3d0s0      16384     No     Yes
        c16t5d0s0      16384     No     Yes
        c16t7d0s0      16384     No     Yes
        c20t4d1s0      16384     No     Yes
        c20t6d1s0      16384     No     Yes
        c20t8d1s0      16384     No     Yes
        c9t1d0s0       16384     No     Yes
        c9t3d0s0       16384     No     Yes
        c9t5d0s0       16384     No     Yes
        c9t7d0s0       16384     No     Yes
    Stripe 2: (interlace: 32 blocks)
        Device      Start Block  Dbase  Reloc
        c27t8d2s0      16384     No     Yes
        c4t7d1s0       16384     No     Yes
    Stripe 3: (interlace: 32 blocks)
        Device      Start Block  Dbase  Reloc
        c10t7d0s0      32768     No     Yes
        c11t5d0s0      32768     No     Yes
        c12t2d1s0      32768     No     Yes
        c14t1d0s0      32768     No     Yes
        c15t8d1s0      32768     No     Yes
        c17t3d0s0      32768     No     Yes
        c18t6d1s0      32768     No     Yes
        c19t4d1s0      32768     No     Yes
        c1t5d0s0       32768     No     Yes
        c2t6d1s0       32768     No     Yes
        c3t4d1s0       32768     No     Yes
        c5t2d1s0       32768     No     Yes
        c6t1d0s0       32768     No     Yes
        c8t3d0s0       32768     No     Yes

Where To Go From Here

For more information, see metastat(1M).

Renaming Volumes

Background Information for Renaming Volumes

The metarename command with the -x option can exchange the names of volumes that have a parent-child relationship. For more information, see How to Rename a Volume and the metarename(1M) man page.

Solaris Volume Manager enables you to rename most types of volumes at any time, subject to some constraints.

Renaming volumes or switching volume names is an administrative convenience for management of volume names. For example, you could arrange all file system mount points in a desired numeric range. You might rename volumes to maintain a naming scheme for your logical volumes or to allow a transactional volume to use the same name as the underlying volume had been using.

Before you rename a volume, make sure that it is not currently in use. For a file system, make sure it is not mounted or being used as swap. Other applications using the raw device, such as a database, should have their own way of stopping access to the data.

Specific considerations for renaming volumes include the following:

  • You can use either the Enhanced Storage tool within the Solaris Management Console or the command line (the metarename(1M) command) to rename volumes.

Exchanging Volume Names

When used with the -x option, the metarename command exchanges the name of an existing layered volume with the name of one of its subdevices. This exchange can occur between a mirror and one of its submirrors, or between a transactional volume and its master device.


Note –

You must use the command line to exchange volume names. This functionality is currently unavailable in the Solaris Volume Manager GUI. However, you can rename a volume with either the command line or the GUI.


The metarename -x command can make it easier to mirror or unmirror an existing volume, and to create or remove a transactional volume of an existing volume.
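
For example, assuming that d20 is an existing mirror and d10 is one of its submirrors (hypothetical volume names), you could unmount any file system on the mirror and then exchange the two names with a command similar to the following:


# metarename -x d20 d10
(d20 and d10 are example names only)

As with a simple rename, edit the /etc/vfstab file if it references the old name, and then remount the file system.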


Caution –

Solaris Volume Manager transactional volumes do not support large volumes. In all cases, UFS logging (see the mount_ufs(1M) man page) is recommended instead of transactional volumes.


How to Rename a Volume

Steps
  1. Check the volume name requirements (Volume Names) and Background Information for Renaming Volumes.

  2. Unmount the file system that uses the volume.

  3. To rename the volume, use one of the following methods:

    • From the Enhanced Storage tool within the Solaris Management Console, open the Volumes node and select the volume you want to rename. Right-click the icon and choose the Properties option, then follow the instructions on screen. For more information, see the online help.

    • Use the following format of the metarename command:


      metarename old-volume-name new-volume-name
      
      • old-volume-name is the name of the existing volume.

      • new-volume-name is the new name for the existing volume.

      See metarename(1M) for more information.

  4. Edit the /etc/vfstab file to refer to the new volume name, if necessary.

  5. Remount the file system.


Example 22–2 Renaming a Volume Used for a File System


# umount /home
# metarename d10 d100
d10: has been renamed to d100
(Edit the /etc/vfstab file so that the file system references the new volume)
# mount /home

In this example, the volume d10 is renamed to d100. Because d10 contains a mounted file system, the file system must be unmounted before the rename can occur. If the volume is used for a file system with an entry in the /etc/vfstab file, the entry must be changed to reference the new volume name. For example, the following line:


/dev/md/dsk/d10 /dev/md/rdsk/d10 /docs ufs 2 yes -

should be changed to:


/dev/md/dsk/d100 /dev/md/rdsk/d100 /docs ufs 2 yes -

Then, the file system should be remounted.

If you have an existing mirror or transactional volume, you can use the metarename -x command to remove the mirror or transactional volume and keep data on the underlying volume. For a transactional volume, as long as the master device is a volume (RAID 0, RAID 1, or RAID 5 volume), you can keep data on that volume.


Working with Configuration Files

Solaris Volume Manager configuration files contain basic Solaris Volume Manager information, as well as most of the data necessary to reconstruct a configuration. The following sections illustrate how to work with these files.

How to Create Configuration Files

Step

    Once you have defined all appropriate parameters for the Solaris Volume Manager environment, use the metastat -p command to create the /etc/lvm/md.tab file.


    # metastat -p > /etc/lvm/md.tab
    

    This file contains all parameters for use by the metainit and metahs commands, in case you need to set up several similar environments or to recreate the configuration after a system failure.

    For more information about the md.tab file, see Overview of the md.tab File.
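
    For example, on a system with one two-way mirror built from two single-slice submirrors (the volume names and slices here are hypothetical), the captured md.tab file might contain entries similar to the following:


    # hypothetical two-way mirror d70 with submirrors d71 and d72
    d70 -m d71 d72 1
    d71 1 1 c1t3d0s3
    d72 1 1 c2t3d0s3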

How to Initialize Solaris Volume Manager From a Configuration File


Caution –

Use this procedure only if you have experienced a complete loss of your Solaris Volume Manager configuration, or if you have no configuration yet and you want to create a configuration from a saved configuration file.


If your system loses the information maintained in the state database (for example, because the system was rebooted after all state database replicas were deleted), and as long as no volumes were created since the state database was lost, you can use the md.cf or md.tab files to recover your Solaris Volume Manager configuration.


Note –

The md.cf file does not maintain information on active hot spares. Thus, if hot spares were in use when the Solaris Volume Manager configuration was lost, those volumes that were using active hot spares will likely be corrupted.


For more information about these files, see the md.cf(4) and the md.tab(4) man pages.

Steps
  1. Create state database replicas.

    See Creating State Database Replicas for more information.

  2. Create or update the /etc/lvm/md.tab file.

    • If you are attempting to recover the last known Solaris Volume Manager configuration, copy the md.cf file to the md.tab file.

    • If you are creating a new Solaris Volume Manager configuration based on a copy of the md.tab file that you preserved, put a copy of your preserved file at /etc/lvm/md.tab.

  3. Edit the “new” md.tab file and do the following:

    • If you are creating a new configuration or recovering a configuration after a crash, configure the mirrors as one-way mirrors. If a mirror's submirrors are not the same size, be sure to use the smallest submirror for this one-way mirror. Otherwise, data could be lost.

    • If you are recovering an existing configuration and Solaris Volume Manager was cleanly stopped, leave the mirror configuration as multi-way mirrors.

    • Specify RAID 5 volumes with the -k option to prevent reinitialization of the device. See the metainit(1M) man page for more information.

  4. Check the syntax of the md.tab file entries without committing changes by using the following form of the metainit command:


    # metainit -n -a component-name
    

    The metainit command does not maintain a hypothetical state of the devices that might have been created while running with the -n option. Because of this, creating volumes that rely on other, nonexistent volumes will result in errors with -n, even though the command would succeed without the -n option.

    • -n specifies not to actually create the devices. Use this option to verify that the results will be as you expect.

    • -a specifies to activate the devices.

    • component-name specifies the name of the component to initialize. If no component is specified, all components will be created.

  5. If no problems were apparent from the previous step, recreate the volumes and hot spare pools from the md.tab file:


    # metainit -a component-name
    
    • -a specifies to activate the devices.

    • component-name specifies the name of the component to initialize. If no component is specified, all components will be created.

  6. As needed, make the one-way mirrors into multi-way mirrors by using the metattach command, as shown in the sketch after this procedure.

  7. Validate the data on the volumes.
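
The following sketch of step 6 assumes the mirror d70 and the submirror d72 that appear in the examples earlier in this chapter (hypothetical names), with d72 left unattached when d70 was recreated as a one-way mirror:


# metattach d70 d72
(d70 and d72 are example names; the newly attached submirror is resynchronized automatically)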

Changing Solaris Volume Manager Defaults

The Solaris Volume Manager configuration has the following default values:

  • 128 volumes per disk set, named d0 through d127

  • 4 disk sets

The default values of total volumes, namespace, and number of disk sets can be changed, if necessary. The tasks in this section tell you how to change these values.
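
To check the current settings before changing them, you can examine the configuration line in the /kernel/drv/md.conf file. The values shown here are the defaults:


# grep "^name" /kernel/drv/md.conf
name="md" parent="pseudo" nmd=128 md_nsets=4;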

How to Increase the Number of Default Volumes

The nmd field in the /kernel/drv/md.conf file allocates the number of volumes allowed and the namespace available for volumes. This task describes how to increase the number of volumes from the default value of 128 and the namespace from the default range of d0 through d127. If you need to configure more volumes than the default allows, you can increase this value up to 8192.


Caution –

If you lower this number at any point, any volume existing between the old number and the new number might not be available, potentially resulting in data loss. If you see a message such as “md: d200: not configurable, check /kernel/drv/md.conf,” you must edit the md.conf file and increase the value, as explained in this task.


Before You Begin

Review Prerequisites for Troubleshooting the System.

Steps
  1. Edit the /kernel/drv/md.conf file.

  2. Change the value of the nmd field. Values up to 8192 are supported.

  3. Save your changes.

  4. Perform a reconfiguration reboot to build the volume names.


    # reboot -- -r
    

Example 22–3 Increasing the Number of Default Volumes

Here is a sample md.conf file that is configured for 256 volumes.


#
#ident "@(#)md.conf   1.7     94/04/04 SMI"
#
# Copyright (c) 1992, 1993, 1994 by Sun Microsystems, Inc.
#
#
#pragma ident   "@(#)md.conf    2.1     00/07/07 SMI"
#
# Copyright (c) 1992-1999 by Sun Microsystems, Inc.
# All rights reserved.
#
name="md" parent="pseudo" nmd=256 md_nsets=4;

How to Increase the Number of Default Disk Sets

This task shows you how to increase the number of disk sets from the default value of 4.


Caution –

Do not decrease the number of default disk sets if you have already configured disk sets. Lowering this number could make existing disk sets unavailable or unusable.


Before You Begin

Review Prerequisites for Troubleshooting the System.

Steps
  1. Edit the /kernel/drv/md.conf file.

  2. Change the value of the md_nsets field. Values up to 32 are supported.

  3. Save your changes.

  4. Perform a reconfiguration reboot to build the volume names.


    # reboot -- -r
    

Example 22–4 Increasing the Number of Default Disk Sets

Here is a sample md.conf file that is configured for five shared disk sets. The value of md_nsets is 6, which results in five shared disk sets and one local disk set.


#
#
#pragma ident   "@(#)md.conf    2.1     00/07/07 SMI"
#
# Copyright (c) 1992-1999 by Sun Microsystems, Inc.
# All rights reserved.
#
name="md" parent="pseudo" nmd=128 md_nsets=6;
# Begin MDD database info (do not edit)
...
# End MDD database info (do not edit)

Expanding a File System Using the growfs Command

After a volume that contains a UFS file system is expanded (more space is added), you also need to “grow” the file system to recognize the added space. You must manually grow the file system with the growfs command. The growfs command expands the file system, even while the file system is mounted. However, write access to the file system is not possible while the growfs command is running.

An application, such as a database, that uses the raw device must have its own method to grow added space. Solaris Volume Manager does not provide this capability.

The growfs command will “write-lock” a mounted file system as it expands the file system. The length of time the file system is write-locked can be shortened by expanding the file system in stages. For instance, to expand a 1 Gbyte file system to 2 Gbytes, the file system can be grown in 16 Mbyte stages using the -s option to specify the total size of the new file system at each stage.
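
For example, to grow a 1-Gbyte file system in 16-Mbyte stages, you could run the growfs command repeatedly, increasing the size passed to the -s option by 32768 sectors (16 Mbytes) each time, and omit the -s option on the final pass so that the file system fills the entire volume. The mount point /files and the volume d10 shown here are only an illustration:


# growfs -M /files -s 2129920 /dev/md/rdsk/d10
# growfs -M /files -s 2162688 /dev/md/rdsk/d10
(repeat, adding 32768 sectors to the -s value each time)
# growfs -M /files /dev/md/rdsk/d10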

During the expansion, the file system is not available for write access because of write-lock. Write accesses are transparently suspended and are restarted when the growfs command unlocks the file system. Read accesses are not affected, though access times are not kept while the lock is in effect.

Background Information for Expanding Slices and Volumes


Note –

Solaris Volume Manager volumes can be expanded, but not shrunk.


How to Expand a File System

Steps
  1. Check Prerequisites for Creating Solaris Volume Manager Components.

  2. Use the growfs command to grow a UFS on a logical volume.


    # growfs -M /mount-point /dev/md/rdsk/volumename
    

    See the following example and the growfs(1M) man page for more information.


Example 22–5 Expanding a File System


# df -k
Filesystem            kbytes    used   avail capacity  Mounted on
...
/dev/md/dsk/d10        69047   65426       0   100%    /home2
...
# growfs -M /home2 /dev/md/rdsk/d10
/dev/md/rdsk/d10:       295200 sectors in 240 cylinders of 15 tracks, 82 sectors
        144.1MB in 15 cyl groups (16 c/g, 9.61MB/g, 4608 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
 32, 19808, 39584, 59360, 79136, 98912, 118688, 138464, 158240, 178016, 197792,
 217568, 237344, 257120, 276896,
# df -k
Filesystem            kbytes    used   avail capacity  Mounted on
...
/dev/md/dsk/d10       138703   65426   59407    53%    /home2
...

In this example, a new slice was added to the volume d10, which contains the mounted file system /home2. The -M option to the growfs command specifies the mount point /home2, and the file system is expanded onto the raw volume /dev/md/rdsk/d10. The file system spans the entire volume when the growfs command is complete. You can use the df -k command before and after the expansion to verify the total disk capacity.

For mirror and transactional volumes, always run the growfs command on the top-level volume, not a submirror or master device, even though space is added to the submirror or master device.


Overview of Replacing and Enabling Components in RAID 1 and RAID 5 Volumes

Solaris Volume Manager has the capability to replace and enable components within RAID 1 (mirror) and RAID 5 volumes.

In Solaris Volume Manager terms, replacing a component is a way to substitute an available component on the system for a selected component in a submirror or RAID 5 volume. You can think of this process as logical replacement, as opposed to physically replacing the component. (See Replacing a Component With Another Available Component.)

Enabling a component means to “activate” or substitute a component with itself (that is, the component name is the same). See Enabling a Component.


Note –

When recovering from disk errors, scan /var/adm/messages to see what kind of errors occurred. If the errors are transitory and the disks themselves do not have problems, try enabling the failed components. You can also use the format command to test a disk.
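
For example, to check the system log for messages about a suspect disk (the device name c1t3d0 is only an illustration), you could use a command similar to the following:


# grep c1t3d0 /var/adm/messages | more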


Enabling a Component

You can enable a component when any of the following conditions exist:


Note –

Always check for state database replicas and hot spares on the drive being replaced. Any state database replicas shown to be in error should be deleted before you replace the disk. After you enable the component, the replicas should be recreated (at the same size). You should treat hot spares in the same manner.
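
As a sketch, assuming the mirror d70 and the slice c1t3d0s3 that appear in the examples earlier in this chapter (hypothetical names), you would enable a failed component in place with the metareplace -e command:


# metareplace -e d70 c1t3d0s3
(d70 and c1t3d0s3 are example names only)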


Replacing a Component With Another Available Component

You use the metareplace command when you replace or swap an existing component with a different component that is available and not in use on the system.
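
As a sketch, assuming that the failed slice c1t3d0s3 in mirror d70 is to be replaced with the unused slice c2t6d0s3 (all names hypothetical), the command would be similar to the following:


# metareplace d70 c1t3d0s3 c2t6d0s3
(all device and volume names here are examples only)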

You can use this command when any of the following conditions exist:

Maintenance and Last Erred States

When a component in a mirror or RAID 5 volume experiences errors, Solaris Volume Manager puts the component in the “Maintenance” state. No further reads or writes are performed to a component in the “Maintenance” state. Subsequent errors on other components in the same volume are handled differently, depending on the type of volume. A RAID 1 volume might be able to tolerate many components in the “Maintenance” state and still be read from and written to. A RAID 5 volume, by definition, can only tolerate a single component in the “Maintenance” state.

When a component in a RAID 0 or RAID 5 volume experiences errors and there are no redundant components to read from (for example, in a RAID 5 volume, after one component goes into the “Maintenance” state, no redundancy is available), the next component to fail is put into the “Last Erred” state. When either a mirror or a RAID 5 volume has a component in the “Last Erred” state, I/O is still attempted on the component marked “Last Erred.” This attempt occurs because a “Last Erred” component contains the last good copy of data from Solaris Volume Manager's point of view. With a component in the “Last Erred” state, the volume behaves like a normal device (disk) and returns I/O errors to an application. Usually, at this point, some data has been lost.

Always replace components in the “Maintenance” state first, followed by those in the “Last Erred” state. After a component is replaced and resynchronized, use the metastat command to verify its state, then validate the data to make sure it is good.

Mirrors – If components are in the “Maintenance” state, no data has been lost. You can safely replace or enable the components in any order. If a component is in the “Last Erred” state, you cannot replace it until you first replace all the other mirrored components in the “Maintenance” state. Replacing or enabling a component in the “Last Erred” state usually means that some data has been lost. Be sure to validate the data on the mirror after you repair it.

RAID 5 Volumes – A RAID 5 volume can tolerate a single component failure. You can safely replace a single component in the “Maintenance” state without losing data. If an error on another component occurs, it is put into the “Last Erred” state. At this point, the RAID 5 volume is a read-only device. You need to perform some type of error recovery so that the state of the RAID 5 volume is stable and the possibility of data loss is reduced. If a RAID 5 volume reaches a “Last Erred” state, there is a good chance it has lost data. Be sure to validate the data on the RAID 5 volume after you repair it.

Background Information For Replacing and Enabling Slices in RAID 1 and RAID 5 Volumes

When you replace components in a mirror or a RAID 5 volume, follow these guidelines:


Note –

A submirror or RAID 5 volume might be using a hot spare in place of a failed component. When that failed component is enabled or replaced by using the procedures in this section, the hot spare is marked “Available” in the hot spare pool, and is ready for use.