Sun Cluster 3.0 12/01 Hardware Guide

How to Remove a StorEdge T3/T3+ Array

Use this procedure to permanently remove a StorEdge T3/T3+ array and its submirrors from a running cluster. This procedure also gives you the option of removing the host adapters from the nodes that are connected to the StorEdge T3/T3+ array you are removing.

This procedure defines Node A as the node you begin working with, and Node B as the remaining node.


Caution -

During this procedure, you will lose access to the data that resides on the StorEdge T3/T3+ array you are removing.


  1. Back up all database tables, data services, and volumes that are associated with the StorEdge T3/T3+ array that you are removing.

  2. Detach the submirrors from the StorEdge T3/T3+ array you are removing to stop all I/O activity to the array.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  3. Run the appropriate Solstice DiskSuite or VERITAS Volume Manager commands to remove the references to the LUN(s) from any diskset or disk group.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  4. Determine the resource groups and device groups that are running on all cluster nodes.


    # scstat
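
    If the full scstat output is lengthy, you can limit it to what this step needs. Assuming the standard Sun Cluster 3.0 options, -g reports resource group status and -D reports device group status:

    ```shell
    # Report resource group status only
    scstat -g

    # Report device group status only
    scstat -D
    ```

    Record which node masters each group; you return the groups to their nodes in Step 20.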
    
  5. Move all resource groups and device groups off Node A.


    # scswitch -S -h nodename
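
    The -S option evacuates all resource groups and device groups from the named node. For example, if Node A were named phys-node-a (a hypothetical name; substitute your own):

    ```shell
    # Evacuate all resource groups and device groups from Node A
    # (phys-node-a is a hypothetical node name)
    scswitch -S -h phys-node-a
    ```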
    
  6. Stop the Sun Cluster software on Node A, and shut down Node A.


    # shutdown -y -g0 -i0
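
    Run this command on Node A. The flags follow standard Solaris shutdown semantics:

    ```shell
    # -y   answer all confirmation prompts yes
    # -g0  zero-second grace period before shutdown
    # -i0  go to init state 0 (the OpenBoot ok prompt)
    shutdown -y -g0 -i0
    ```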
    

    For the procedure on shutting down a node, see the Sun Cluster 3.0 12/01 System Administration Guide.

  7. Is the StorEdge T3/T3+ array you are removing the last StorEdge T3/T3+ array that is connected to Node A?

    • If yes, disconnect the fiber-optic cable between Node A and the Sun StorEdge FC-100 hub that is connected to this StorEdge T3/T3+ array, then disconnect the fiber-optic cable between the Sun StorEdge FC-100 hub and this StorEdge T3/T3+ array.

    • If no, proceed to Step 8.

    For the procedure on removing a fiber-optic cable, see the Sun StorEdge T3 and T3+ Array Configuration Guide.


    Note -

    If you are using your StorEdge T3/T3+ arrays in a SAN-configured cluster, you must keep two FC switches configured in parallel to maintain cluster availability. See "StorEdge T3 and T3+ Array (Single-Controller) SAN Considerations" for more information.


  8. Do you want to remove the host adapter from Node A?

    • If yes, power off Node A.

    • If no, skip to Step 11.

  9. Remove the host adapter from Node A.

    For the procedure on removing host adapters, see the documentation that shipped with your nodes.

  10. Without allowing the node to boot, power on Node A.

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  11. Boot Node A into cluster mode.


    {0} ok boot
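
    On a node with Sun Cluster software installed, a plain boot at the ok prompt joins the cluster by default; boot -x is the usual way to boot in non-cluster mode if you need to troubleshoot first:

    ```shell
    {0} ok boot        # boots the node into cluster mode (default)
    {0} ok boot -x     # boots the node in non-cluster mode instead
    ```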
    
  12. Move all resource groups and device groups off Node B.


    # scswitch -S -h nodename
    
  13. Stop the Sun Cluster software on Node B, and shut down Node B.


    # shutdown -y -g0 -i0
    
  14. Is the StorEdge T3/T3+ array you are removing the last StorEdge T3/T3+ array that is connected to the Sun StorEdge FC-100 hub?

    • If yes, disconnect the fiber-optic cable that connects this Sun StorEdge FC-100 hub and Node B.

    • If no, proceed to Step 15.

    For the procedure on removing a fiber-optic cable, see the Sun StorEdge T3 and T3+ Array Configuration Guide.


    Note -

    If you are using your StorEdge T3/T3+ arrays in a SAN-configured cluster, you must keep two FC switches configured in parallel to maintain cluster availability. See "StorEdge T3 and T3+ Array (Single-Controller) SAN Considerations" for more information.


  15. Do you want to remove the host adapter from Node B?

    • If yes, power off Node B.

    • If no, skip to Step 18.

  16. Remove the host adapter from Node B.

    For the procedure on removing host adapters, see the documentation that shipped with your nodes.

  17. Without allowing the node to boot, power on Node B.

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  18. Boot Node B into cluster mode.


    {0} ok boot
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  19. On all cluster nodes, update the /devices and /dev entries.


    # devfsadm -C
    # scdidadm -C
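
    After the cleanup, you can confirm that the stale device IDs are gone. Assuming the standard Sun Cluster 3.0 options, scdidadm -L lists the DID mappings for all cluster nodes:

    ```shell
    # List the remaining DID device mappings for all nodes;
    # the LUN(s) of the removed array should no longer appear
    scdidadm -L
    ```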
    
  20. Return the resource groups and device groups you identified in Step 4 to Node A and Node B.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
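
    For example, with a hypothetical resource group oracle-rg, device group t3-dg, and node name phys-node-a (substitute the names you recorded in Step 4):

    ```shell
    # Bring a resource group online on its node
    scswitch -z -g oracle-rg -h phys-node-a

    # Switch a device group back to its node
    scswitch -z -D t3-dg -h phys-node-a
    ```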