Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS

Procedure: How to Replace a Failed Boot Disk in a Running Cluster

Use this procedure to replace a failed, mirrored boot disk on a node in a running cluster. This procedure applies to both Solstice DiskSuite/Solaris Volume Manager and VERITAS Volume Manager, and it assumes that at least one mirror of the boot disk is still available. This procedure defines Node N as the node on which you are replacing the failed boot disk.

If no mirror is available, see your Sun Cluster system administration documentation for instructions on restoring data to the boot disk.

  1. Is Node N up and running?

  2. Is your disk drive hot-pluggable?

  3. Determine the resource groups and device groups running on Node N.

    Record this information; you use it in Step 6 of this procedure to return the resource groups and device groups to Node N.


    # scstat
    

    For more information, see your Sun Cluster system administration documentation.
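
    If you want a more focused report, the scstat command's -g and -D options limit the output to resource group status and device group status, respectively, which can make this step's record easier to capture.


    # scstat -g
    # scstat -D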

  4. Move all resource groups and device groups off Node N.


    # scswitch -S -h from-node
    

    For more information, see your Sun Cluster system administration documentation.
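
    For example, if Node N is named phys-schost-1 (an illustrative node name), you would run the following command.


    # scswitch -S -h phys-schost-1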

  5. Replace the failed boot disk by using the procedure that is outlined in your volume manager documentation.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation. For the specific procedure for replacing a boot disk, see "Recovering From Boot Problems" in the Solaris Volume Manager Administration Guide.
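
    The exact replacement steps depend on your volume manager and disk layout. The following Solstice DiskSuite/Solaris Volume Manager commands are an illustrative sketch only: the mirror and submirror names (d0, d20), the disk names (c0t0d0, c0t1d0), and the slice assignments are hypothetical, so substitute the values from your own configuration and treat the Solaris Volume Manager Administration Guide as the authoritative procedure. First, for each mirror that has a submirror on the failed disk, detach that submirror (only the root mirror d0 is shown), and remove any state database replicas on the failed disk.


    # metadetach -f d0 d20
    # metadb -d c0t1d0s7


    After you physically replace the failed disk (run devfsadm if the new device does not appear), copy the surviving disk's partition table to the new disk, re-create the state database replicas, reattach the submirror so that it resynchronizes, and reinstall the boot block (SPARC example shown).


    # prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2
    # metadb -a -c 3 c0t1d0s7
    # metattach d0 d20
    # installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t1d0s0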

  6. If you moved resource groups and device groups off Node N in Step 4, return to Node N the resource groups and device groups that you identified in Step 3.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

    For more information, see your Sun Cluster system administration documentation.
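
    For example, if Step 3 recorded a resource group named nfs-rg and a device group named nfs-dg (both illustrative names), and Node N is named phys-schost-1, you would run the following commands.


    # scswitch -z -g nfs-rg -h phys-schost-1
    # scswitch -z -D nfs-dg -h phys-schost-1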