CHAPTER 3

Hot-Plug Procedures for FC-AL Disks and Disk Arrays

Hot-plugging is the process of installing or removing an individual FC-AL disk drive or an entire Sun StorEdge A5x00 enclosure while the power is on and the operating system is running. This chapter describes how to hot-plug individual
FC-AL disk drives installed in a Sun StorEdge A5x00 disk array or in a Sun Fire 880 internal storage subsystem.

This chapter covers hot-plug procedures for systems running UNIX File System (UFS) operations, VERITAS Volume Manager, or Solstice DiskSuite software.




Caution - As with other products designed for high reliability, availability, and serviceability (RAS), you should not remove disk drives at random. If the drive is active, you must stop all activity to it before removing it. You can do this without bringing down the operating system or powering down the unit; however, there are software considerations that you must take into account. Follow the procedures in this chapter when removing, replacing, or adding disk drives.



This chapter covers the following topics and procedures:

About Hot-Plugging FC-AL Disks and Disk Arrays

How to Add an FC-AL Disk Drive

How to Configure a New FC-AL Disk Drive

How to Prepare an FC-AL Drive for Removal

How to Remove an FC-AL Disk Drive

How to Replace an FC-AL Disk Drive

How to Reconfigure an FC-AL Disk Drive


About Hot-Plugging FC-AL Disks and Disk Arrays

Three specific cases exist where the hot-plug feature is useful:

When you add a disk drive to increase storage capacity

When you replace a faulty disk drive while the system is running

When you remove a disk drive from the system

The way in which you hot-plug a disk drive depends on the application you are using. Each application is different, but each requires that you:

Identify the disk by its logical device name or by its enclosure name and slot number

Stop all activity to the disk and unconfigure it from the application before removing it

Configure the disk for the application after installing or replacing it

Identifying a Faulty Drive

Different applications provide various levels of error logging. In general, you can find messages about failing or failed disks in your system console window. The information is also logged in the /usr/adm/messages file. See the documentation that came with your application for more information.
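
For example, you might scan the messages file for recent entries about a failing drive; the commands below are only an illustration, and the search pattern may need to be adjusted for your system:

# tail -50 /usr/adm/messages
# grep -i warning /usr/adm/messages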

Preparing Spare Drives

If possible, prepare replacement disk drives in advance. Format, label, and partition each replacement disk drive in the same way as the disk it will replace. See the documentation for your application for instructions on how to format and partition the disk and add that disk to your application.
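
For example, one way to duplicate the partition layout of an existing disk onto a spare of the same geometry is to combine prtvtoc and fmthard; the device names shown here are hypothetical:

# prtvtoc /dev/rdsk/c1t2d0s2 | fmthard -s - /dev/rdsk/c1t3d0s2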

Adding, Removing, and Replacing Drives

The FC-AL disk hot-plug procedures use the luxadm insert_device and remove_device subcommands to add, remove, and replace disk drives. For detailed information about the syntax of these commands, see Removing, Inserting, and Replacing Enclosures and Disks in this manual.

Refer to the disk enclosure's installation or service manual for details on physically adding or removing disk drives.

If you are replacing a faulty drive, install the new drive in the same slot from which you removed the faulty drive.

Finding the Logical Device Name

When you unconfigure or configure a disk drive for an application, you may need to specify the drive by using its logical device name.

The naming convention for disks attached to a host port or host adapter is cwtxdysz, the logical device name, where:

w corresponds to the FC-AL controller

x corresponds to the disk slot

y is the logical unit for the disk drive (always 0)

z is the slice or partition on the disk

To obtain the logical device name for a mounted disk drive, use the df command. Refer to the df(1M) man page for more information. To obtain the logical device name for an unmounted drive, use the luxadm display command. You can also use the format command. Refer to the format(1M) man page for more information.
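
For example, for a drive mounted at a hypothetical mount point /export/home1, df reports the logical device name alongside the mount point; the output line below is illustrative only, and the block and file counts are invented:

# df /export/home1
/export/home1      (/dev/dsk/c1t2d0s2 ):  4194302 blocks   524288 files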

Assigning a Box Name to an Enclosure

You can specify a device to the luxadm subcommands by using a path name, WWN, or enclosure name and slot number.

If you use an enclosure name, you need to assign a box name.

The enclosure name for a Sun Fire 880 SES is specified as:

box_name,[s]slot_number

The enclosure name for a Sun StorEdge A5x00 IB is specified as:

box_name,[f|r]slot_number

A box_name is the name you assign to the enclosure with the luxadm enclosure_name subcommand or, if you are using a Sun StorEdge A5x00, with the front panel module. When used without the optional slot_number, the box_name identifies the Sun StorEdge A5x00 subsystem IB or a Sun Fire 880 internal storage array.

To assign a box_name and determine the slot_number, follow these steps:

1. Use the probe subcommand to determine the enclosure name, type:

# luxadm probe

A list of all attached subsystems and disks is displayed, including the logical path names, WWNs, and enclosure names.

2. Use the enclosure_name subcommand to assign a box_name to the enclosure name, type:

# luxadm enclosure_name new-name enclosure|pathname

TABLE 3-1 enclosure_name Options and Arguments

new-name
    The name you assign to the enclosure. The new name must be 16 or fewer alphanumeric characters and specifies the box_name of the enclosure or interface board.

enclosure
    The enclosure name of a Sun StorEdge A5x00 disk array or a Sun Fire 880 internal storage array. Use the probe subcommand to display the enclosure name.

pathname
    The physical or logical path name of a Sun StorEdge A5x00 disk array or a Sun Fire 880 internal storage array. Use a path name instead of enclosure if you do not know the enclosure name. Use the probe (or probe -p) subcommand to display the path names and World Wide Names.


3. Use the display subcommand to determine the slot number for an individual disk.

The display subcommand returns a list of slot numbers and WWNs for each disk. Use the box_name from Step 2 and the slot_number from Step 3 to specify an individual disk to a luxadm subcommand.

Example:

The following command assigns the box name dak to a Sun Fire 880 enclosure using the enclosure_name subcommand with a logical path name.

# luxadm enclosure_name dak /dev/es/ses1
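
To then list the slot numbers and WWNs of the individual disks in that enclosure, you could run the display subcommand against the box name assigned above (output omitted here):

# luxadm display dak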


How to Add an FC-AL Disk Drive

This procedure describes how to add a disk drive while the power is on and the operating system is running. Use this procedure to add a new FC-AL disk drive to a Sun Fire 880 system or to a Sun StorEdge A5x00 array.

After you install a new drive, you need to configure the file system so that the Solaris Operating Environment recognizes the new drive. If you are running Volume Manager or Solstice DiskSuite software, you need to configure your application to recognize the new drive.




Caution - You must be a qualified system administrator to perform this procedure.



Before You Begin

What to Do

1. Become superuser.

2. Select any available slot for the new disk drive.

For reference when you configure the software environment, make a note of which slot (and enclosure) you choose.

3. Determine the address for the new device.

You need to specify the new device to the luxadm insert_device command. To specify an individual Sun Fire 880 disk, use box_name,[s]slot_number. To specify an individual Sun StorEdge A5x00 disk, use box_name,[f|r]slot_number. Use a box name without a slot number to specify an enclosure. To determine the box name and slot number, use the probe, enclosure_name, and display subcommands.

For more information, see Assigning a Box Name to an Enclosure . For more detailed information about all of the addressing options, see About Addressing a Disk or Disk Array .

4. Use the luxadm insert_device command to insert the new device.

This command is interactive. You are informed when you can insert the new device and guided through the procedure for creating a new device entry or chain of devices.

    a. Type the luxadm insert_device command:

    # luxadm insert_device [enclosure,dev...]

    where enclosure,dev is the box name and slot number determined in Step 3.

    After you press Return, luxadm displays the list of device(s) to be inserted and asks you to verify that the list is correct.

    The following example inserts a new drive into slot 5 of a Sun Fire 880 enclosure named dak.

    # luxadm insert_device dak,s5

    The following example inserts a new drive into the first slot in the front of a Sun StorEdge A5x00 array named macs1.

    # luxadm insert_device macs1,f1

    b. Type c at the prompt or press Return if the list of devices to be added is correct.

    A message similar to the following is displayed.

    Searching directory /dev/es for links to enclosures
    Hit <Return> after inserting the device(s)

    c. Physically insert the new drive, then press Return.

    Refer to the disk enclosure's installation or service manual for information about installing a disk drive.

    The luxadm insert_device subcommand configures the drive for the Solaris Operating Environment by creating a new device entry for the drive in the /dev/dsk and /dev/rdsk directories. The new drive is assigned a WWN.

    After you insert the drive and press Return, the luxadm command informs you that the disk has been inserted and displays the logical device names for the device, for example:

     Device dak5 inserted
    Drive in Box Name "dak" slot 5
      Logical Nodes under /dev/dsk and /dev/rdsk:
            c2t5d0s0
            c2t5d0s1
            c2t5d0s2
            c2t5d0s3
            c2t5d0s4
            c2t5d0s5
            c2t5d0s6
            c2t5d0s7



    Note - For reference when you configure the application, make a note of the logical device name (cwtxdysz) for the disk you just added. You need to enter this device name when you configure the disk drive for your application.



5. Configure the new disk drive for your application.

Continue the procedure for adding a drive by configuring the disk drive for your application. The procedure you use depends on whether your system is running UFS, VERITAS Volume Manager, or Solstice DiskSuite software. See How to Configure a New FC-AL Disk Drive .


How to Configure a New FC-AL Disk Drive


Caution - You must be a qualified system administrator to perform this procedure. Performing a hot-plug operation on an active disk drive can result in data loss or data corruption.



After you install a new disk drive into a Sun Fire 880 enclosure or a Sun StorEdge A5x00 array, you need to configure your application to accept the new drive. Each application is different. This section provides procedures for UFS, VERITAS Volume Manager, and Solstice DiskSuite software. Select the appropriate procedure for your application and follow the steps.



Note - To configure a disk drive, you need the logical device name (cwtxdysz) of the new disk. The logical device name is displayed after you use the luxadm insert_device subcommand to physically install the disk.




Configuring a New FC-AL Drive for UFS

1. Become superuser.

2. Verify that the device label meets your requirements.

Use the prtvtoc command to inspect the label for your disk. To modify the label, use the format command. Refer to the prtvtoc(1M) and format(1M) man pages for more information.
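
For example, using the same hypothetical device name as the example in Step 3:

# prtvtoc /dev/rdsk/c1t2d0s2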

3. Select a disk slice for your UFS file system and check if it has a clean file system, type:

# fsck /dev/rdsk/cwtxdysz

where cwtxdysz is the logical device name for the new disk.

For example:

# fsck /dev/rdsk/c1t2d0s2

If you get an error message, you need to use the newfs command to create a new file system on the slice, type:

# newfs /dev/rdsk/cwtxdysz

Refer to the newfs(1M) man page for more information.

4. If necessary, create a mount point for the new file system, type:

# mkdir mount_point

where mount_point is a fully qualified path name. Refer to the mount(1M) man page for more information.

5. Mount the new file system, type:

# mount mount_point

where mount_point is the directory you created in Step 4.

6. After you have created the file system and mount point, modify the /etc/vfstab file to reflect the new file system.

See the vfstab(4) man page for more details.
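
For example, a vfstab entry for the hypothetical slice c1t2d0s2 mounted at /export/home2 might look like the following; the fields are the block device, the raw device, the mount point, the file system type, the fsck pass, the mount-at-boot flag, and the mount options:

/dev/dsk/c1t2d0s2   /dev/rdsk/c1t2d0s2   /export/home2   ufs   2   yes   -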

The new disk is ready to be used.


Configuring a New FC-AL Disk Drive for Volume Manager

1. Become superuser.

2. Configure the Volume Manager to recognize the disk drive, type:

# vxdctl enable

3. Add the new disk to a new or existing Volume Manager disk group, type:

# vxdiskadd cwtxdysz

where cwtxdysz is the logical device name of the new disk. This command is interactive. You are guided through the procedure for adding a new disk to Volume Manager.

Refer to the vxdiskadd(1M) man page for further details.

The disk is now ready for use with Volume Manager as part of a new volume, added to an existing volume as a plex, or to increase an existing volume. Refer to your Sun StorEdge Volume Manager User's Guide for more information.

4. Quit the vxdiskadd utility.
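
As an optional illustration of using the newly added disk, you could create a simple volume on it with vxassist; the disk group, disk media name, volume name, and size shown here are all hypothetical:

# vxassist -g rootdg make newvol 2g disk02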

Configuring a New FC-AL Disk Drive for Solstice DiskSuite

Refer to the Solstice DiskSuite documentation for information about configuring the new disk drive.

How to Prepare an FC-AL Drive for Removal

Before you remove a device from a Sun StorEdge A5x00 array or a Sun Fire 880 enclosure, you need to stop activity to the drive and remove the drive from the application. The way you prepare a disk drive for removal depends on whether you are using UFS, VERITAS Volume Manager, or Solstice DiskSuite software. Each application is different.

This section provides procedures for UFS, VERITAS Volume Manager, and Solstice DiskSuite software. Select the appropriate procedure for your application and follow the steps.




Caution - You must be a qualified system administrator to perform this procedure. Performing a hot-plug operation on an active disk drive can result in data loss or data corruption.



Preparing a Disk Drive for Removal From UFS

Use this procedure to unconfigure a disk that is being used by one or more UFS file systems.

1. Become superuser.

2. Identify activities or applications attached to the device you plan to remove.

Commands to use are mount, showmount -a, df, and ps -ef. See the mount(1M), showmount(1M), and ps(1) man pages for more details.

For example, where the device to be removed is c0t11d0 :

# mount | grep c0t11d0
/export/home1 on /dev/dsk/c0t11d0s2 setuid/read/write on
# showmount -a | grep /export/home1
cinnamon:/export/home1/archive
austin:/export/home1
swlab1:/export/home1/doc
# ps -ef | grep c0t11d0
root  1225   450   4 13:09:58  pts/2   0:00 grep c0t11

In this example, the file system /export/home1 on the faulty disk is being remotely mounted by three different systems: cinnamon, austin, and swlab1. The only process running is grep, which has already completed.

3. Stop any activity or application processes on the file systems to be unconfigured.

4. Back up your system.

5. Determine and save the partition table for the disk.

If you are replacing the disk and the replacement disk is the same type as the faulty disk, you can use the format command to save the partition table of the disk. Use the format save command to save a copy of the partition table to the
/etc/format.dat file. This enables you to configure the replacement disk so that its layout matches the current disk.

Refer to the format(1M) man page for more information.
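
A sketch of that sequence follows; the device name is hypothetical and the format prompts are abbreviated and may differ slightly on your system:

# format c0t11d0
format> save
Enter file name ["./format.dat"]:  /etc/format.dat
format> quit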

6. Unmount any file systems on the disk.


Note - If the file systems are on a disk that is failing or has failed, the umount operation may not unmount the file systems. A large number of error messages may be displayed in the system console and in the /var directory during the umount operation. If this happens and the umount command does not complete its operation, you may have to restart the system.



For each file system, type:

# umount filesystem

where filesystem is the first field for each entry displayed in Step 2.

For example:

# umount /export/home1

7. Verify that the file system has been unmounted, type:

# df

The disk is now ready to be removed or replaced. See How to Remove an FC-AL Disk Drive .

Preparing a Disk Drive for Removal From Volume Manager

You will need the logical device name of the disk to complete this procedure.

1. Become superuser.

2. Identify the faulty disk drive.

Different applications provide various levels of error logging. In general, you can find messages about failing or failed disks in your system console window. The information is also logged in the /usr/adm/messages file. See the documentation that came with your application for more information.

3. Back up your system.

Refer to the documentation that came with your system for backup details.

4. Identify the disk media name for the disk you intend to replace, type:

# vxdisk list | grep cwtxdysz

For example, if the disk to be removed is c2t1d0 , type:

# vxdisk list | grep c2t1d0
c2t1d0s2     sliced    disk01       rootdg       online

The disk media name is the third field in the output above: disk01 .

You can use the vxdiskadm utility to prepare the disk for replacement.

5. Type vxdiskadm in a shell window.

# vxdiskadm

This operation is interactive and requires your confirmation of the operation.

6. If you are planning to replace the disk, select the "Remove a disk for replacement" option. Otherwise select the "Remove a disk" option.

When prompted for a disk name to replace or remove, type the disk media name. The vxdiskadm utility marks the disk for replacement and saves the subdisk information to be rebuilt on the replacement disk.

Redundant data is automatically recovered after the replacement disk has been reattached to Volume Manager. Nonredundant data is identified as unusable and must be re-created from backups.

Refer to the vxdiskadm(1M) man page for further details.

7. Quit the vxdiskadm utility.

The disk is now ready to be removed or replaced. See How to Remove an FC-AL Disk Drive .

Preparing a Disk Drive for Removal From Solstice DiskSuite

1. Become superuser.

2. Identify the disk to be replaced by examining the /var/adm/messages file and metastat output.

3. Use the metadb command to locate any local metadevice state database replicas that may have been placed on the problem disk.

Errors may be reported for the replicas located on the failed disk. In this example, c0t1d0 is the problem device.

 # metadb
          flags       first blk        block count
         a m     u        16               1034            /dev/dsk/c0t0d0s4
         a       u        1050             1034            /dev/dsk/c0t0d0s4
         a       u        2084             1034            /dev/dsk/c0t0d0s4
         W   pc luo       16               1034            /dev/dsk/c0t1d0s4
         W   pc luo       1050             1034            /dev/dsk/c0t1d0s4
         W   pc luo       2084             1034            /dev/dsk/c0t1d0s4

The output above shows three state database replicas on slice 4 of each of the local disks, c0t0d0 and c0t1d0 . The W in the flags field of the c0t1d0s4 slice indicates that the device has write errors. Three replicas on the c0t0d0s4 slice are still good.




Caution - If, after deleting the bad state database replicas, you are left with three or fewer replicas, add more state database replicas before continuing. This will ensure that your system reboots correctly.



4. Record the slice name where the replicas reside and the number of replicas, then delete the state database replicas.

Obtain the number of replicas by counting the number of times the slice appears in the metadb output in Step 3. In this example, the three state database replicas that exist on c0t1d0s4 are deleted.

# metadb -d c0t1d0s4

5. Locate any submirrors using slices on the problem disk and detach them.

    a. Use the metastat command to show the affected mirrors.

     # metastat 
      
      d5: Mirror
        Submirror 0: d4
          State: Okay         
        Submirror 1: d3
          State: Okay         
        Pass: 1
        Read option: roundrobin (default)
        Write option: parallel (default)
        Size: 1213380 blocks
    d4: Submirror of d5
        State: Okay         
        Size: 1213380 blocks
        Stripe 0:
            Device              Start Block  Dbase State        Hot Spare
            c1t117d0s3                 0     No    Okay         
        Stripe 1:
            Device              Start Block  Dbase State        Hot Spare
            c3t112d0s3                 0     No    Okay         

    b. Use the metadetach command to detach the submirrors identified in the previous step.

     # metadetach d5 d3
        d5: submirror d3 is detached

6. Delete hot spares on the problem disk.

 # metahs -d hsp000 c0t1d0s6
hsp000: Hotspare is deleted

7. Preserve the disk label if the disk is using multiple partitions.

# prtvtoc /dev/rdsk/c2t17d0s2 > /var/tmp/c2t17d0.vtoc

Perform this step if you are using a slice other than s2 .

See the prtvtoc(1M) man page for more information.

8. Use the metareplace command to replace the disk slices that are not hot spares.

# metareplace d1 c2t17d0s2 c2t16d0s2
d1: device c2t17d0s2 is replaced with c2t16d0s2

The disk is now ready to be removed or replaced. See How to Remove an FC-AL Disk Drive .

How to Remove an FC-AL Disk Drive

This procedure describes how to remove a disk drive or an entire array while the power is on and the operating system is running. Use this procedure to remove an FC-AL disk drive from a Sun Fire 880 server or a Sun StorEdge A5x00 array.




Caution - You must be a qualified system administrator to perform this procedure. Performing a hot-plug operation on an active disk drive can result in data loss or data corruption.



Before You Begin

What to Do

1. Determine an address for the disk to be removed.

You need to specify the device to the luxadm remove_device command by using a path name, a WWN, or a box_name and slot_number. Use the probe, enclosure_name, and display subcommands to determine an address.

For more information about using a box name and slot number, see Assigning a Box Name to an Enclosure . For information about all of the addressing options, see About Addressing a Disk or Disk Array .

2. Stop any activity to the drive and unconfigure the drive from your application.

See How to Prepare an FC-AL Drive for Removal and follow the steps for your application.

3. Use the luxadm remove_device command to remove the device.

This command is interactive. You are guided through the procedure for removing a device or chain of devices. This command checks whether the device is busy, takes the device offline, and informs you when the device can be removed.

    a. Type the luxadm remove_device command:

    # luxadm remove_device [-F] enclosure[,dev]...| pathname...

    where enclosure [ ,dev ]...| pathname ... is the address determined in Step 1.



    Note - If you are running VERITAS Volume Manager or Solstice DiskSuite software, use the luxadm remove_device -F command to remove the disk drive. The -F option is required to take disks offline.






    Caution - Removing devices that are in use will cause unpredictable results. Try to hot-plug the devices normally (without -F) first, resorting to this option only when you are sure of the consequences of overriding normal hot-plug checks.



    After you press Return, luxadm displays a list of the devices to be removed and asks you to verify that the list is correct.

    The following example shows the command to remove a drive from slot 10 in a Sun Fire 880 enclosure named newdak.

    # luxadm remove_device newdak,s10

    The following example shows the command to remove a disk in slot 1 in the front of a Sun StorEdge A5x00 array named macs.

    # luxadm remove_device macs,f1

    b. Type c at the prompt or press Return if the list of devices to be removed is correct.

    luxadm prepares the disk(s) or enclosure(s) for removal and displays a message similar to the following:

    Searching directory /dev/es for links to enclosures
    stopping: Drive in "DAK1" slot 1....Done
    offlining:Drive in "DAK1" slot 1....Done
    Hit <Return> after removing the device(s).



    Note - If a message is displayed indicating that the list of devices is being used by the host, you will need to take the devices offline. See How to Prepare an FC-AL Drive for Removal and follow the steps for your application.



    c. Physically remove the drive, then press Return.

    The luxadm command indicates which device you can remove by the status of the LEDs.

    On a Sun StorEdge A5x00 array, the yellow LED on the designated disk drive(s) will be flashing. On a Sun Fire 880 enclosure, the disk's OK-to-Remove LED will light.

    For a Sun Fire 880 system, you may remove the disk drive when the OK-to-Remove LED is lit. The green power LED may also be lit or blinking.

    For a Sun StorEdge A5x00 array, you may remove the disk drive when the OK-to-Remove LED is blinking.




    Caution - When the OK-to-Remove LED is lit on a Sun Fire 880 system or blinking on a Sun StorEdge A5x00 system, the disk is logically ready to be removed. However, the spindle will continue to rotate for 30 seconds or more. It is safe to remove the disk before it completely stops spinning if you are careful. Do not use sudden movements and do not drop the drive.



    See your service manual for more information about removing a disk drive.

    After you remove the disk drive and press Return, luxadm informs you that the disk has been removed and displays the logical device names for the removed device. For example, after you remove a disk from slot 10 of the Sun Fire 880 enclosure dak and press Return, a message similar to the following is displayed:

    Device DISK10 removed
    Drive in Box Name "dak" slot 10
      Logical Nodes being removed under /dev/dsk/ and /dev/rdsk:
            c1t12d0s0
            c1t12d0s1
            c1t12d0s2
            c1t12d0s3
            c1t12d0s4
            c1t12d0s5
            c1t12d0s6
            c1t12d0s7
    # 

    This drive is now removed from the enclosure and your application.

What Next

If you are replacing the drive, go to How to Replace an FC-AL Disk Drive and continue the procedure at Step 3. Otherwise, if you are running UFS, edit the /etc/vfstab file to delete any references to the removed devices. See the vfstab(4) man page for additional details.

How to Replace an FC-AL Disk Drive

This procedure describes how to replace an FC-AL disk drive while the power is on and the operating system is running. Before you remove a disk drive, you need to stop activity to the drive and remove the drive from your application. After you replace the drive you need to reconfigure the drive for your application.



Note - If you are familiar with the luxadm command and the procedures for hot-plugging a disk, see the quick reference checklists in Appendix B for a summary of the tasks required for disk replacement.






Caution - You must be a qualified system administrator to perform this procedure. Performing a hot-plug operation on an active disk drive can result in data loss or data corruption.



Before You Begin

What to Do

1. Determine an address for the disk to be removed.

You need to specify the disk to luxadm. You can specify the disk with a path name, a WWN, or a box_name and slot_number. To determine an address, use the probe, enclosure_name, and display subcommands.

To specify a disk or an array by box_name and slot_number , see Assigning a Box Name to an Enclosure . For more detailed information about all of the addressing options, see About Addressing a Disk or Disk Array .

2. Stop all activity to the drive and unconfigure the drive from your application, if you have not already done so.

Your system may be running the UNIX File System (UFS), VERITAS Volume Manager, or Solstice DiskSuite software. You must stop activity to the disk and notify the application that you are removing the disk drive.

See How to Prepare an FC-AL Drive for Removal and follow the steps for your application.

3. Use the luxadm remove_device command to remove the device.

See How to Remove an FC-AL Disk Drive and follow the steps.

4. Use the luxadm insert_device command to add the new device.

See How to Add an FC-AL Disk Drive and follow the steps. Insert the new drive into the same slot as the one you removed.

5. Reconfigure the disk drive for your application.

Continue the disk replacement procedure by reconfiguring the disk drive within your application. The procedure you use depends on whether your system is running UFS or Volume Manager or Solstice DiskSuite software. See How to Reconfigure an FC-AL Disk Drive .

How to Reconfigure an FC-AL Disk Drive

After you replace a faulty FC-AL disk drive, it is necessary to reconfigure the drive for the application running on your system.

This section provides procedures for UFS, VERITAS Volume Manager, and Solstice DiskSuite software. Use the reconfiguration procedure appropriate for the application running on your system.




Caution - You must be a qualified system administrator to perform this procedure. Performing a hot-plug operation on an active disk drive can result in data loss or data corruption.



Reconfiguring a Disk Drive for UFS

1. Verify that the device's partition table satisfies the requirements of the file system(s) you intend to re-create.

You can use the prtvtoc command to inspect the label for your device. If you need to modify the label, use the format command. Refer to the prtvtoc(1M) and format(1M) man pages for more information.
For example:

# prtvtoc /dev/rdsk/cwtxdysz

If you have saved a disk partition table using the format utility and the replacement disk type matches the old disk type, then you can use the format utility's partition section to configure the partition table of the replacement disk. See the select and label commands in the partition section.
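
A sketch of that flow follows, assuming a hypothetical replacement disk c2t16d0 and a partition table previously saved to /etc/format.dat; the format prompts are abbreviated:

# format c2t16d0
format> partition
partition> select
partition> label
partition> quit
format> quit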

If the replacement disk is of a different type than the disk it replaced, you can use the partition size information from the previous disk to set the partition table for the replacement disk. Refer to the prtvtoc(1M) and format(1M) man pages for more information.

2. Select a disk slice for your UFS file system and create a new file system on the slice:

# newfs /dev/rdsk/cwtxdysz

Refer to the newfs(1M) man page for more information.

3. Mount the new file system using the mount command, type:

# mount mount_point

where mount_point is the directory on which the faulty disk was mounted.

The new disk is ready to be used. You can now restore data from your backups.
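
For example, if your backups were made with ufsdump, restoring onto the newly mounted file system might resemble the following; the mount point and tape device are hypothetical:

# cd /export/home1
# ufsrestore rvf /dev/rmt/0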

Reconfiguring a Disk Drive for Volume Manager

To re-create the replaced disk on the new drive:

1. Configure the Volume Manager to recognize the disk drive, type:

# vxdctl enable

2. Use the vxdiskadm utility.

Select the "Replace a failed or removed disk" option.

vxdiskadm supplies a list of available disks to be used as replacements.

3. Select the replacement drive.

vxdiskadm automatically configures the replacement drive to match the failed drive.

Redundant data is recovered automatically. Space for nonredundant data is created and identified. Nonredundant data must be restored from backups.

Reconfiguring a Disk Drive for Solstice DiskSuite

1. Restore the disk label, if necessary.

# cat /var/tmp/c2t17d0.vtoc | fmthard -s - /dev/rdsk/c2t17d0s2

2. If you deleted replicas, add the same number back to the appropriate slice. In this example, /dev/dsk/c0t1d0s4 is used.

# metadb -a -c 3 c0t1d0s4

3. Depending on how the disk was used, you may have a variety of tasks to do.

Use the following table to decide what to do next.

TABLE 3-2 Disk Replacement Decision Table

Slice
    Use normal data recovery procedures.

Unmirrored Stripe or Concatenation
    If the stripe or concatenation is used for a file system, run newfs(1M), mount the file system, and then restore data from backup. If the stripe or concatenation is used by an application that accesses the raw device, that application must have its own recovery procedures.

Mirror (Submirror)
    Run metattach(1M) to reattach a detached submirror.

RAID5 Metadevice
    Run metareplace(1M) to re-enable the slice. This causes the resync to start.

Trans Metadevice
    Run fsck(1M) to repair the trans metadevice.
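
For example, to reattach the submirror that was detached in the removal procedure, or to re-enable the slice of a hypothetical RAID5 metadevice d10, the commands resemble the following:

# metattach d5 d3
# metareplace -e d10 c2t17d0s2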


4. Replace hot spares that were deleted, and add them to the appropriate hot spare pool(s).

 # metahs -a hsp000 c0t0d0s6
hsp000: Hotspare is added

5. Validate the data.

Check the user and application data on all metadevices. You may have to run an application-level consistency checker or use some other method to check the data.