Sun Cluster 3.0 U1 Release Notes Supplement

How to Remove a StorEdge 9910/9960 Array Logical Volume

Use this procedure to remove a logical volume. This procedure assumes all cluster nodes are booted and attached to the StorEdge 9910/9960 array that hosts the logical volume you are removing.

This procedure defines Node A as the node you begin working with, and Node B as the remaining node.


Caution -

This procedure removes all data on the logical volume you are removing.


  1. If necessary, migrate all data and volumes off the logical volume that you are removing. Otherwise, proceed to Step 2.

  2. Are you running VERITAS Volume Manager?

    • If no, go to Step 3.

    • If yes, update the list of devices on all cluster nodes attached to the logical volume that you are removing.

    See your VERITAS Volume Manager documentation for information about using the vxdisk rm command to remove devices (volumes) from your VERITAS Volume Manager device list.
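    As a sketch only, assuming the logical volume appears in the VxVM device list under the disk access name c1t0d0 (a hypothetical device name; use the name reported by vxdisk list on your system), the removal might look like this on each attached node:

```shell
# vxdisk rm c1t0d0
```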

  3. Run the appropriate Solstice DiskSuite or VERITAS Volume Manager commands to remove the reference to the logical unit number (LUN) from any diskset or disk group.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
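    As an illustration only, with a hypothetical Solstice DiskSuite diskset named nfs-set containing DID device d4, and a hypothetical VERITAS disk group named oradg containing disk media name oradg01, the commands might resemble the following (run the one that matches your volume manager):

```shell
# metaset -s nfs-set -d /dev/did/rdsk/d4
# vxdg -g oradg rmdisk oradg01
```

    Substitute the diskset, disk group, and device names that actually reference the LUN you are removing.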

  4. Remove the logical volume.

    Contact your service provider to remove the logical volume.

  5. Determine the resource groups and device groups that are running on Node A and Node B.

    Record this information because you will use it in Step 12 of this procedure to return resource groups and device groups to these nodes.


    # scstat
    
  6. Move all resource groups and device groups off Node A.


    # scswitch -S -h from-node
    
  7. Shut down and reboot Node A by using the shutdown command with the -i6 option.

    The -i6 option with the shutdown command causes the node to shut down and then reboot.


    # shutdown -y -g0 -i6
    

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  8. On Node A, update the /devices and /dev entries.


    # devfsadm -C
    # scdidadm -C
    
  9. Move all resource groups and device groups off Node B.


    # scswitch -S -h from-node
    
  10. Shut down and reboot Node B by using the shutdown command with the -i6 option.

    The -i6 option with the shutdown command causes the node to shut down and then reboot.


    # shutdown -y -g0 -i6
    

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  11. On Node B, update the /devices and /dev entries.


    # devfsadm -C
    # scdidadm -C
    
  12. Return the resource groups and device groups you identified in Step 5 to Node A and Node B.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

Where to Go From Here

To create a logical volume, see "How to Add a StorEdge 9910/9960 Array Logical Volume to a Cluster".