Sun Cluster 2.2 System Administration Guide

Recovering From Power Loss

When power is lost to a Sun StorEdge A5000 disk enclosure, I/O operations generate errors that are detected by your volume management software. The errors are not reported until I/O transactions are made to the disk.

You should monitor the configuration for these events using the commands described in Chapter 2, Sun Cluster Administration Tools.

How to Recover From Power Loss (Solstice DiskSuite)

These are the high-level steps to recover from power loss to a disk enclosure in a Solstice DiskSuite environment:

  - Identify the errored metadevice state database replicas
  - Return the errored replicas to service
  - Identify the errored metadevices
  - Return the errored metadevices to service and resync the disks

These are the detailed steps to recover from power loss to a disk enclosure in a Solstice DiskSuite environment.

  1. When power is restored, use the metadb(1M) command to identify the errored replicas:


    # metadb -s diskset
    

  2. Return replicas to service.

    After the loss of power, all metadevice state database replicas on the affected disk enclosure chassis enter an errored state. Because metadevice state database replica recovery is not automatic, it is safest to perform the recovery immediately after the disk enclosure returns to service. Otherwise, a new failure can cause a majority of replicas to be out of service and cause a kernel panic. This is the expected behavior of Solstice DiskSuite when too few replicas are available.

    While these errored replicas will be reclaimed at the next takeover (haswitch(1M) or reboot(1M)), you might want to return them to service manually by first deleting them and then adding them back, as shown in the example following the note below.


    Note -

    Make sure that you add back the same number of replicas that were deleted on each slice. You can delete multiple replicas with a single metadb(1M) command. If you need multiple copies of replicas on one slice, you must add them in one invocation of the metadb(1M) command using the -c flag.
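
    For example, if three replicas on one slice were errored, you might delete them and then add the same number back, each with a single invocation of metadb(1M); the slice name and replica count shown here are illustrative only:


    # metadb -s diskset -d /dev/dsk/c1t0d0s7
    # metadb -s diskset -a -c 3 /dev/dsk/c1t0d0s7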


  3. Use the metastat(1M) command to identify the errored metadevices.


    # metastat -s diskset
    

  4. Return errored metadevices to service using the metareplace(1M) command, and resync the disks.


    # metareplace -s diskset -e mirror component
    

    The -e option transitions the component (slice) to the available state and performs a resync; see the example later in this step.

    Components that have been replaced by a hot spare should be the last devices you replace with the metareplace(1M) command. If a hot-spared component is replaced first, the freed hot spare could be used to replace another errored component as soon as it becomes available.

    You can perform a resync on only one component of a submirror (metadevice) at a time. If all components of a submirror were affected by the power outage, each component must be replaced separately. It takes approximately 10 minutes to resync a 1.05GB disk.

    If both disksets in a symmetric configuration were affected by the power outage, you can resync each diskset's affected submirrors concurrently. Log in to each host separately and run metareplace(1M) there to recover that host's diskset.
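
    For example, to return a single errored component to service, the command might look like the following; the mirror name (d20) and the component (c2t1d0s2) are hypothetical, and diskset is again a placeholder for the diskset name:


    # metareplace -s diskset -e d20 c2t1d0s2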


    Note -

    Depending on the number of submirrors and the number of components in those submirrors, the resync actions can require a considerable amount of time. A single submirror made up of thirty 1.05GB drives might take about five hours to complete, whereas a more manageable configuration of five-component submirrors might take only 50 minutes to complete.


How to Recover From Power Loss (VxVM)

Power failures can detach disk drives, causing their plexes to become detached and therefore unavailable. The volume remains active, however, because the remaining plexes in a mirrored volume are still available. It is possible to reattach the disk drives and recover from this condition without halting nodes in the cluster.

These are the high-level steps to recover from power loss to a disk enclosure in a VxVM configuration:

  - Identify the errored plexes and disks
  - Restore power to the failed disks
  - Make the nodes rediscover the drives and rescan the disk configuration
  - Reattach the disks, reconnect any replaced media, and recover the volumes

These are the detailed steps to recover from power loss to a disk enclosure in a VxVM configuration.

  1. Use the vxprint command to view the errored plexes.

    Optionally, specify a disk group with the -g diskgroup option.
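
    For example, to display the records for a single disk group (diskgroup is a placeholder for the name of your disk group):


    # vxprint -g diskgroup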

  2. Use the vxdisk command to identify the errored disks.


    # vxdisk list
    DEVICE       TYPE      DISK         GROUP        STATUS
    ..
    -            -         c1t5d0       toi          failed was:c1t5d0s2
    ...

  3. Fix the condition that resulted in the problem so that power is restored to all failed disks.

    Be sure that the disks are spun up before proceeding.

  4. Enter the following commands on all nodes in the cluster.

    In some cases, the drive(s) must be rediscovered by the node(s).


    # drvconfig
    # disks
    

  5. Enter the following commands on all nodes in the cluster.

    The volume manager must scan the current disk configuration again.


    # vxdctl enable
    # vxdisk -a online
    

  6. Enter the following command first on the master node, then on the remaining nodes in the cluster.

    This will reattach disks that had transitory failures.


    # vxreattach
    

  7. Check the output of the vxdisk command to see whether any errors remain.


    # vxdisk list
    

  8. If media was replaced, enter the following command from the master node for each disk that was disconnected.

    The volume manager disk media name must be reconnected to the physical disk's access name.


    # vxdg -g diskgroup -k adddisk medianame=accessname
    

    The values for medianame and accessname appear at the end of the vxdisk list command output: the media name in the DISK column and the former access name in the failed was: field of the STATUS column.

    For example:


    # vxdg -g toi -k adddisk c1t5d0=c1t5d0s2
    # vxdg -g toi -k adddisk c1t5d1=c1t5d1s2
    # vxdg -g toi -k adddisk c1t5d2=c1t5d2s2
    # vxdg -g toi -k adddisk c1t5d3=c1t5d3s2
    # vxdg -g toi -k adddisk c1t5d4=c1t5d4s2
    

    You can also use the vxdiskadm command, or the graphical user interface, to reattach the disks.

  9. From the node, start volume recovery.

    If you have shared disk groups, use the -svc options to the vxrecover command.


    # vxrecover -bv [-g diskgroup]
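
    For shared disk groups, the command with the -svc options described above might look like this sketch (diskgroup is still a placeholder):


    # vxrecover -svc [-g diskgroup]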
    

  10. (Optional) Use the vxprint -g diskgroup command to view the changes.