Solaris Volume Manager Administration Guide

Chapter 20 Maintaining Solaris Volume Manager (Tasks)

This chapter provides information about performing general storage administration maintenance tasks with Solaris Volume Manager.

Solaris Volume Manager Maintenance (Task Map)

The following task map identifies the procedures that are needed to maintain Solaris Volume Manager.

Task: View the Solaris Volume Manager configuration
Description: Use the Solaris Volume Manager GUI or the metastat command to view the system configuration.
For Instructions: How to View the Solaris Volume Manager Volume Configuration

Task: Rename a volume
Description: Use the Solaris Volume Manager GUI or the metarename command to rename a volume.
For Instructions: How to Rename a Volume

Task: Create configuration files
Description: Use the metastat -p command and the metadb command to create configuration files.
For Instructions: How to Create Configuration Files

Task: Initialize Solaris Volume Manager from configuration files
Description: Use the metainit command to initialize Solaris Volume Manager from configuration files.
For Instructions: How to Initialize Solaris Volume Manager From a Configuration File

Task: Expand a file system
Description: Use the growfs command to expand a file system.
For Instructions: How to Expand a File System

Task: Enable components
Description: Use the Solaris Volume Manager GUI or the metareplace command to enable components.
For Instructions: Enabling a Component

Task: Replace components
Description: Use the Solaris Volume Manager GUI or the metareplace command to replace components.
For Instructions: Replacing a Component With Another Available Component

Viewing the Solaris Volume Manager Configuration


Tip –

The metastat command does not sort output. Pipe the output of the metastat -p command to the sort or grep commands for a more manageable listing of your configuration.
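For example, the following commands produce a sorted condensed listing, or filter the listing to lines that mention a particular hot spare pool (hsp014 appears in the listings later in this chapter):


# metastat -p | sort
# metastat -p | grep hsp014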


How to View the Solaris Volume Manager Volume Configuration

Step

    To view the volume configuration, use one of the following methods:

    • From the Enhanced Storage tool within the Solaris Management Console, open the Volumes node. For more information, see the online help.

    • Use the following form of the metastat command:


      # metastat -p -i component-name
      
      -p

      Specifies to show output in a condensed summary. This output is suitable for use in creating the md.tab file.

      -i

      Specifies to verify that RAID-1 (mirror) volumes, RAID-5 volumes, and hot spares can be accessed.

      component-name

      Specifies the name of the volume to view. If no volume name is specified, a complete list of components is displayed.


Example 20–1 Viewing the Solaris Volume Manager Volume Configuration

The following example illustrates output from the metastat command.


# metastat
d50: RAID
    State: Okay         
    Interlace: 32 blocks
    Size: 20985804 blocks
Original device:
    Size: 20987680 blocks
        Device              Start Block  Dbase State        Reloc  Hot Spare
        c1t4d0s5                 330     No    Okay         Yes    
        c1t5d0s5                 330     No    Okay         Yes    
        c2t4d0s5                 330     No    Okay         Yes    
        c2t5d0s5                 330     No    Okay         Yes    
        c1t1d0s5                 330     No    Okay         Yes    
        c2t1d0s5                 330     No    Okay         Yes    

d1: Concat/Stripe
    Size: 4197879 blocks
    Stripe 0:
        Device              Start Block  Dbase  Reloc
        c1t2d0s3                   0     No     Yes

d2: Concat/Stripe
    Size: 4197879 blocks
    Stripe 0:
        Device              Start Block  Dbase  Reloc
        c2t2d0s3                   0     No     Yes


d80: Soft Partition
    Device: d70
    State: Okay
    Size: 2097152 blocks
        Extent              Start Block              Block count
             0                        1                  2097152

d81: Soft Partition
    Device: d70
    State: Okay
    Size: 2097152 blocks
        Extent              Start Block              Block count
             0                  2097154                  2097152

d70: Mirror
    Submirror 0: d71
      State: Okay         
    Submirror 1: d72
      State: Okay         
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 12593637 blocks

d71: Submirror of d70
    State: Okay         
    Size: 12593637 blocks
    Stripe 0:
        Device              Start Block  Dbase State        Reloc  Hot Spare
        c1t3d0s3                   0     No    Okay         Yes    
    Stripe 1:
        Device              Start Block  Dbase State        Reloc  Hot Spare
        c1t3d0s4                   0     No    Okay         Yes    
    Stripe 2:
        Device              Start Block  Dbase State        Reloc  Hot Spare
        c1t3d0s5                   0     No    Okay         Yes    


d72: Submirror of d70
    State: Okay         
    Size: 12593637 blocks
    Stripe 0:
        Device              Start Block  Dbase State        Reloc  Hot Spare
        c2t3d0s3                   0     No    Okay         Yes    
    Stripe 1:
        Device              Start Block  Dbase State        Reloc  Hot Spare
        c2t3d0s4                   0     No    Okay         Yes    
    Stripe 2:
        Device              Start Block  Dbase State        Reloc  Hot Spare
        c2t3d0s5                   0     No    Okay         Yes    

hsp010: is empty

hsp014: 2 hot spares
        Device              Status      Length          Reloc
        c1t2d0s1            Available    617652 blocks  Yes
        c2t2d0s1            Available    617652 blocks  Yes

hsp050: 2 hot spares
        Device              Status      Length          Reloc
        c1t2d0s5            Available    4197879 blocks Yes
        c2t2d0s5            Available    4197879 blocks Yes

hsp070: 2 hot spares
        Device              Status      Length          Reloc
        c1t2d0s4            Available    4197879 blocks Yes
        c2t2d0s4            Available    4197879 blocks Yes

Device Relocation Information:
Device              Reloc       Device ID
c1t2d0              Yes         id1,sd@SSEAGATE_ST39204LCSUN9.0G3BV0N1S200002103AF29
c2t2d0              Yes         id1,sd@SSEAGATE_ST39204LCSUN9.0G3BV0P64Z00002105Q6J7
c1t1d0              Yes         id1,sd@SSEAGATE_ST39204LCSUN9.0G3BV0N1EM00002104NP2J
c2t1d0              Yes         id1,sd@SSEAGATE_ST39204LCSUN9.0G3BV0N93J000071040L3S
c0t0d0              Yes         id1,dad@s53554e575f4154415f5f53543339313430412525415933
 


Example 20–2 Viewing a Multi-Terabyte Solaris Volume Manager Volume

The following example illustrates output from the metastat command for a multi-terabyte storage volume (11 Tbytes).


# metastat d0
 d0: Concat/Stripe
    Size: 25074708480 blocks (11 TB)
    Stripe 0: (interlace: 32 blocks)
        Device      Start Block  Dbase  Reloc
        c27t8d3s0          0     No     Yes
        c4t7d0s0       12288     No     Yes
    Stripe 1: (interlace: 32 blocks)
        Device      Start Block  Dbase  Reloc
        c13t2d1s0      16384     No     Yes
        c13t4d1s0      16384     No     Yes
        c13t6d1s0      16384     No     Yes
        c13t8d1s0      16384     No     Yes
        c16t3d0s0      16384     No     Yes
        c16t5d0s0      16384     No     Yes
        c16t7d0s0      16384     No     Yes
        c20t4d1s0      16384     No     Yes
        c20t6d1s0      16384     No     Yes
        c20t8d1s0      16384     No     Yes
        c9t1d0s0       16384     No     Yes
        c9t3d0s0       16384     No     Yes
        c9t5d0s0       16384     No     Yes
        c9t7d0s0       16384     No     Yes
    Stripe 2: (interlace: 32 blocks)
        Device      Start Block  Dbase  Reloc
        c27t8d2s0      16384     No     Yes
        c4t7d1s0       16384     No     Yes
    Stripe 3: (interlace: 32 blocks)
        Device      Start Block  Dbase  Reloc
        c10t7d0s0      32768     No     Yes
        c11t5d0s0      32768     No     Yes
        c12t2d1s0      32768     No     Yes
        c14t1d0s0      32768     No     Yes
        c15t8d1s0      32768     No     Yes
        c17t3d0s0      32768     No     Yes
        c18t6d1s0      32768     No     Yes
        c19t4d1s0      32768     No     Yes
        c1t5d0s0       32768     No     Yes
        c2t6d1s0       32768     No     Yes
        c3t4d1s0       32768     No     Yes
        c5t2d1s0       32768     No     Yes
        c6t1d0s0       32768     No     Yes
        c8t3d0s0       32768     No     Yes

Where To Go From Here

For more information, see the metastat(1M) man page.

Renaming Volumes

Background Information for Renaming Volumes

Solaris Volume Manager enables you to rename most types of volumes at any time, subject to some constraints. You can use either the Enhanced Storage tool within the Solaris Management Console or the command line (the metarename(1M) command) to rename volumes.

Renaming volumes or switching volume names is an administrative convenience for the management of volume names. For example, you could arrange all file system mount points in a desired numeric range. You might rename volumes to maintain a naming scheme for your logical volumes or to allow a transactional volume to use the same name as the name of the underlying volume.


Note –

Transactional volumes are no longer valid in Solaris Volume Manager. You can rename transactional volumes to replace them.


Before you rename a volume, make sure that it is not currently in use. For a file system, make sure that it is not mounted or being used as swap. Other applications that use the raw device, such as a database, should have their own way of stopping access to the data.

Specific considerations for renaming volumes are described in the sections that follow.

Exchanging Volume Names

Using the metarename command with the -x option exchanges the names of volumes that have a parent-child relationship. For more information, see How to Rename a Volume and the metarename(1M) man page. The name of an existing volume is exchanged with one of its subcomponents. For example, this type of exchange can occur between a mirror and one of its submirrors. The metarename -x command can make it easier to mirror or unmirror an existing volume.


Note –

You must use the command line to exchange volume names. This functionality is currently unavailable in the Solaris Volume Manager GUI. However, you can rename a volume with either the command line or the GUI.

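For example, a minimal sketch, assuming an existing mirror d10 with a submirror d20 (both names are illustrative), that exchanges the two names so that the mirror becomes d20 and the submirror becomes d10:


# metarename -x d10 d20
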

With these considerations in mind, use the following procedure to rename a volume.

How to Rename a Volume

Before You Begin

Check the volume name requirements in Volume Names, and review Background Information for Renaming Volumes.

Steps
  1. Unmount the file system that uses the volume.


    # umount /filesystem
    
  2. To rename the volume, use one of the following methods:

    • From the Enhanced Storage tool within the Solaris Management Console, open the Volumes node. Select the volume that you want to rename. Right-click the icon and choose the Properties option. Then, follow the onscreen instructions. For more information, see the online help.

    • Use the following form of the metarename command:


      # metarename old-volume-name new-volume-name
      
      old-volume-name

      Specifies the name of the existing volume.

      new-volume-name

      Specifies the new name for the existing volume.

      See the metarename(1M) man page for more information.

  3. Edit the /etc/vfstab file to refer to the new volume name, if necessary.

  4. Remount the file system.


    # mount /filesystem
    

Example 20–3 Renaming a Volume Used for a File System

In the following example, the volume, d10, is renamed to d100.


# umount /home
# metarename d10 d100
d10: has been renamed to d100
(Edit the /etc/vfstab file so that the file system references the new volume)
# mount /home

Because d10 contains a mounted file system, the file system must be unmounted before the volume can be renamed. If the volume is used for a file system with an entry in the /etc/vfstab file, the entry must be changed to reference the new volume name.

For example, if the /etc/vfstab file contains the following entry for the file system:


/dev/md/dsk/d10 /dev/md/rdsk/d10 /home ufs 2 yes -

Change the entry to read as follows:


/dev/md/dsk/d100 /dev/md/rdsk/d100 /home ufs 2 yes -

Then, remount the file system.

If you have an existing mirror or transactional volume, you can use the metarename -x command to remove the mirror or transactional volume and keep data on the underlying volume. For a transactional volume, as long as the master device is a volume (a RAID-0, RAID-1, or RAID-5 volume), you can keep data on that volume.


Working With Configuration Files

Solaris Volume Manager configuration files contain basic Solaris Volume Manager information, as well as most of the data that is necessary to reconstruct a configuration. The following procedures illustrate how to work with these files.

How to Create Configuration Files

Step

    Once you have defined all appropriate parameters for the Solaris Volume Manager environment, use the metastat -p command to create the /etc/lvm/md.tab file.


    # metastat -p > /etc/lvm/md.tab
    

    This file contains all parameters for use by the metainit command and metahs command. Use this file if you need to set up several similar environments or if you need to recreate the configuration after a system failure.

    For more information about the md.tab file, see Overview of the md.tab File and the md.tab(4) man page.
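    The task map above also mentions the metadb command. As a hedged sketch, you might additionally capture the locations and status of your state database replicas for reference (the output file name here is illustrative, not a standard path):


    # metadb -i > /etc/lvm/metadb.out


    Note that this listing is for reference when you re-create replicas; it is not an input format that the metadb command can read back.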

How to Initialize Solaris Volume Manager From a Configuration File


Caution –

Use this procedure only in the circumstances described below.


On occasion, your system loses the information maintained in the state database. For example, this loss might occur if the system was rebooted after all of the state database replicas were deleted. As long as no volumes were created after the state database was lost, you can use the md.cf or md.tab files to recover your Solaris Volume Manager configuration.


Note –

The md.cf file does not maintain information on active hot spares. Thus, if hot spares were in use when the Solaris Volume Manager configuration was lost, those volumes that were using active hot spares are likely corrupted.


For more information about these files, see the md.cf(4) and the md.tab(4) man pages.

Steps
  1. Create state database replicas.

    See Creating State Database Replicas for more information.

  2. Create or update the /etc/lvm/md.tab file.

    • If you are attempting to recover the last known Solaris Volume Manager configuration, copy the md.cf file into the /etc/lvm/md.tab file.

    • If you are creating a new Solaris Volume Manager configuration based on a copy of the md.tab file that you have preserved, copy the preserved file into the /etc/lvm/md.tab file.
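
    For example, to recover the last known configuration (the first option above):


    # cp /etc/lvm/md.cf /etc/lvm/md.tab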

  3. Edit the “new” /etc/lvm/md.tab file and do the following:

    • If you are creating a new configuration or recovering a configuration after a crash, configure the mirrors as one-way mirrors. For example:


      d80 -m d81 1
      d81 1 1 c1t6d0s3

      If the submirrors of a mirror are not the same size, be sure to use the smallest submirror for this one-way mirror. Otherwise, data could be lost.

    • If you are recovering an existing configuration and Solaris Volume Manager was cleanly stopped, leave the mirror configuration as multi-way mirrors. For example:


      d70 -m d71 d72 1
      d71 1 1 c1t6d0s2
      d72 1 1 c1t5d0s0
    • Specify RAID-5 volumes with the -k option, to prevent reinitialization of the device. For example:


      d45 -r c1t3d0s5 c1t3d0s3 c1t3d0s4 -k -i 32b

      See the metainit(1M) man page for more information.

  4. Check the syntax of the /etc/lvm/md.tab file entries without committing changes by using one of the following forms of the metainit command:


    # metainit -n md.tab-entry
    

    # metainit -n -a
    

    When run with the -n option, the metainit command does not maintain a hypothetical state of the devices that it would have created. Consequently, an entry for a volume that relies on another, not-yet-created volume produces an error with -n even though the same command would succeed without the -n option.

    -n

    Specifies not to actually create the devices. Use this option to verify that the results are as you expected.

    md.tab-entry

    Specifies the name of the component to initialize.

    -a

    Specifies to check all components.
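
    For example, to check only the d80 one-way mirror entry from Step 3 without creating the device:


    # metainit -n d80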

  5. If no problems were apparent from the previous step, recreate the volumes and hot spare pools from the md.tab file:


    # metainit -a
    
    -a

    Specifies to activate the entries in the /etc/lvm/md.tab file.

  6. As needed, make the one-way mirrors into multi-way mirrors by using the metattach command.


    # metattach mirror submirror
    
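    For example, a sketch that assumes a second submirror, d82, was defined in the md.tab file and created in Step 5; this command attaches it to the one-way mirror d80 from Step 3:


    # metattach d80 d82
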
  7. Validate the data on the volumes to confirm that the configuration has been reconstructed accurately.


    # metastat
    

Changing Solaris Volume Manager Default Values

With the Solaris 10 release, Solaris Volume Manager has been enhanced to configure volumes dynamically. You no longer need to edit the nmd and the md_nsets parameters in the /kernel/drv/md.conf file. New volumes are dynamically created, as needed.

The maximum Solaris Volume Manager configuration values remain unchanged:

  • The maximum number of volumes that is supported is 8192.

  • The maximum number of disk sets that is supported is 32.

Expanding a File System Using the growfs Command

After a volume that contains a UFS file system is expanded (meaning that more space is added), you also need to expand the file system so that it recognizes the added space. You must manually expand the file system with the growfs command. The growfs command expands the file system, even while the file system is mounted. However, write access to the file system is not possible while the growfs command is running.

An application, such as a database, that uses the raw device must have its own method to incorporate the added space. Solaris Volume Manager does not provide this capability.

The growfs command “write-locks” a mounted file system as it expands the file system. The length of time the file system is write-locked can be shortened by expanding the file system in stages. For instance, to expand a 1-Gbyte file system to 2 Gbytes, the file system can be grown in 16 Mbyte stages by using the -s option. This option specifies the total size of the new file system at each stage.
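
A minimal sketch of staged expansion, assuming a file system mounted at /export on volume d10 (the mount point and sector counts are illustrative):


# growfs -M /export -s 2097152 /dev/md/rdsk/d10
# growfs -M /export -s 4194304 /dev/md/rdsk/d10
# growfs -M /export /dev/md/rdsk/d10

The -s option gives the total size of the file system, in sectors, after each stage. The final command, without -s, grows the file system to the full size of the volume.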

During the expansion, the file system is not available for write access because of the write-lock feature. Write accesses are transparently suspended and are restarted when the growfs command unlocks the file system. Read accesses are not affected. However, access times are not kept while the lock is in effect.

Background Information for Expanding Slices and Volumes


Note –

Solaris Volume Manager volumes can be expanded. However, volumes cannot be reduced in size.


How to Expand a File System

Before You Begin

Check Prerequisites for Creating Solaris Volume Manager Components.

Steps
  1. Review the disk space associated with a file system.


    # df -hk
    

    See the df(1M) man page for more information.

  2. Expand a UFS file system on a logical volume.


    # growfs -M /mount-point /dev/md/rdsk/volume-name
    
    -M /mount-point

    Specifies the mount point for the file system to be expanded.

    /dev/md/rdsk/volume-name

    Specifies the name of the volume that you want to expand.

    See the following example and the growfs(1M) man page for more information.


Example 20–4 Expanding a File System

In the following example, a new slice is added to a volume, d10, which contains the mounted file system /home2. The -M option of the growfs command specifies the mount point, /home2, and the file system is expanded onto the raw volume /dev/md/rdsk/d10. The file system spans the entire volume when the growfs command completes. You can use the df -hk command before and after expanding the file system to verify the total disk capacity.


# df -hk
Filesystem            kbytes    used   avail capacity  Mounted on
...
/dev/md/dsk/d10        69047   65426       0   100%    /home2
...
# growfs -M /home2 /dev/md/rdsk/d10
/dev/md/rdsk/d10:       295200 sectors in 240 cylinders of 15 tracks, 82 sectors
        144.1MB in 15 cyl groups (16 c/g, 9.61MB/g, 4608 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
 32, 19808, 39584, 59360, 79136, 98912, 118688, 138464, 158240, 178016, 197792,
 217568, 237344, 257120, 276896,
# df -hk
Filesystem            kbytes    used   avail capacity  Mounted on
...
/dev/md/dsk/d10       138703   65426   59407    53%    /home2
...

For mirror volumes, always run the growfs command on the top-level volume. Do not run the command on a submirror or master device, even though space is added to the submirror or master device.
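
For example, assuming the mirror d70 from the earlier listing were mounted at /files (an illustrative mount point), you would run the command against the mirror itself, never against submirror d71 or d72:


# growfs -M /files /dev/md/rdsk/d70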


Overview of Replacing and Enabling Components in RAID-1 and RAID-5 Volumes

Solaris Volume Manager can replace and enable components within RAID-1 (mirror) and RAID-5 volumes.

In Solaris Volume Manager terminology, replacing a component is a way to substitute an available component on the system for a selected component in a submirror or RAID-5 volume. You can think of this process as logical replacement, as opposed to physically replacing the component. For more information, see Replacing a Component With Another Available Component.

Enabling a component means to “activate” or substitute a component with itself (that is, the component name is the same). For more information, see Enabling a Component.


Note –

When recovering from disk errors, scan /var/adm/messages to see what kind of errors occurred. If the errors are transitory and the disks themselves do not have problems, try enabling the failed components. You can also use the format command to test a disk.


Enabling a Component

You can enable a component when any of the following conditions exist:


Note –

Always check for state database replicas and hot spares on the disk that is being replaced. Any state database replica in an erred state should be deleted before you replace the disk. Then, after you enable the component, recreate the state database replicas using the same size. You should treat hot spares in the same manner.

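For example, assuming the mirror d70 from the earlier listing reported a transient error on the component c1t3d0s3 (names are illustrative), the following command enables the component in place and triggers a resynchronization:


# metareplace -e d70 c1t3d0s3
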

Replacing a Component With Another Available Component

You use the metareplace command when you replace or swap an existing component with a different component that is available and not in use on the system.
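
For example, a minimal sketch, assuming the failed component c1t3d0s3 in mirror d70 is replaced with an unused slice c2t2d0s3 (all names are illustrative):


# metareplace d70 c1t3d0s3 c2t2d0s3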

You can use this command when any of the following conditions exist:

Maintenance and Last Erred States

When a component in a RAID-1 or RAID-5 volume experiences errors, Solaris Volume Manager puts the component in the “Maintenance” state. No further reads or writes are performed to a component in the “Maintenance” state.

Sometimes a component goes into a “Last Erred” state. For a RAID-1 volume, this usually occurs with a one-sided mirror: the volume experiences errors, but there are no redundant components to read from. For a RAID-5 volume, this state occurs after one component goes into the “Maintenance” state and another component fails. The second component to fail goes into the “Last Erred” state.

When either a RAID-1 volume or a RAID-5 volume has a component in the “Last Erred” state, I/O is still attempted to the component marked “Last Erred.” This I/O attempt occurs because a “Last Erred” component contains the last good copy of data from Solaris Volume Manager's point of view. With a component in the “Last Erred” state, the volume behaves like a normal device (disk) and returns I/O errors to an application. Usually, at this point, some data has been lost.

The subsequent errors on other components in the same volume are handled differently, depending on the type of volume.

RAID-1 Volume

A RAID-1 volume might be able to tolerate many components in the “Maintenance” state and still be read from and written to. If components are in the “Maintenance” state, no data has been lost. You can safely replace or enable the components in any order. If a component is in the “Last Erred” state, you cannot replace it until you first replace the components in the “Maintenance” state. Replacing or enabling a component in the “Last Erred” state usually means that some data has been lost. Be sure to validate the data on the mirror after you repair it.

RAID-5 Volume

A RAID-5 volume can tolerate a single component in the “Maintenance” state. You can safely replace a single component in the “Maintenance” state without losing data. If an error on another component occurs, it is put into the “Last Erred” state. At this point, the RAID-5 volume is a read-only device. You need to perform some type of error recovery so that the state of the RAID-5 volume is stable and the possibility of data loss is reduced. If a RAID-5 volume reaches a “Last Erred” state, there is a good chance it has lost data. Be sure to validate the data on the RAID-5 volume after you repair it.

Always replace components in the “Maintenance” state first, followed by those in the “Last Erred” state. After a component is replaced and resynchronized, use the metastat command to verify its state. Then, validate the data.

Background Information for Replacing and Enabling Components in RAID-1 and RAID-5 Volumes

When you replace components in a RAID-1 volume or a RAID-5 volume, follow these guidelines: