Sun Cluster 3.0 12/01 Release Notes Supplement

Appendix D Installing and Maintaining a Sun StorEdge 9910 or StorEdge 9960 Array

This chapter contains a limited set of procedures for installing, configuring, and maintaining Sun StorEdge 9910 and Sun StorEdge 9960 arrays. Contact your service provider to perform tasks not documented in this chapter.

This chapter contains the following procedures:

  • "How to Install the StorEdge 9910/9960 Array in a Campus Cluster Configuration"

  • "How to Add a StorEdge 9910/9960 Array Logical Volume to a Cluster"

  • "How to Remove a StorEdge 9910/9960 Array Logical Volume"

  • "How to Add a StorEdge 9910/9960 Array"

  • "How to Remove a StorEdge 9910/9960 Array"

For conceptual information on multihost disks, see the Sun Cluster 3.0 12/01 Concepts document.

Installing a StorEdge 9910/9960 Array

This section provides references for installing the StorEdge 9910/9960 array in a campus cluster configuration. Only your certified service provider should perform the installation of the StorEdge 9910/9960 array.

How to Install the StorEdge 9910/9960 Array in a Campus Cluster Configuration

You can configure the StorEdge 9910/9960 as a two-room or three-room campus cluster. For more information about configuring a campus cluster, see Appendix E, Campus Clustering with Sun Cluster 3.0 Software - Concepts.

For supported campus cluster hardware configurations, contact your Sun sales representative.

Configuring a StorEdge 9910/9960 Array

This section contains the procedures for configuring a StorEdge 9910/9960 array in a running cluster. The following table lists these procedures.

Table D-1 Task Map: Configuring a StorEdge 9910/9960 Array

  • Add a StorEdge 9910/9960 array logical volume: see "How to Add a StorEdge 9910/9960 Array Logical Volume to a Cluster".

  • Remove a StorEdge 9910/9960 array logical volume: see "How to Remove a StorEdge 9910/9960 Array Logical Volume".

How to Add a StorEdge 9910/9960 Array Logical Volume to a Cluster

Use this procedure to add a logical volume to a cluster. This procedure assumes that your service provider has created your logical volume and that all cluster nodes are booted and attached to the StorEdge 9910/9960 array.

  1. On all cluster nodes, update the /devices and /dev entries.


    # devfsadm
    
  2. On one node connected to the array, use the format(1M) command to verify that the new logical volume is visible to the system, and then label and partition the volume.


    # format
    

    See the format command man page for more information about using the command.

  3. Are you running VERITAS Volume Manager?

    • If not, go to Step 4.

    • If you are running VERITAS Volume Manager, update its list of devices on all cluster nodes attached to the logical volume that you created in Step 2.

    See your VERITAS Volume Manager documentation for information about using the vxdctl enable command to update new devices (volumes) in your VERITAS Volume Manager list of devices.
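
    As a sketch only, the following command rescans the device list on a node; run it on each node attached to the new volume, and verify the exact procedure against your VERITAS Volume Manager documentation:


    # vxdctl enable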

  4. From any node in the cluster, update the global device namespace.


    # scgdevs
    

    If a volume management daemon such as vold is running on your node, and you have a CD-ROM drive that is connected to the node, a device busy error might be returned even if no disk is in the drive. This error is expected behavior.

Where to Go From Here

To create a new resource or reconfigure a running resource to use the new StorEdge 9910/9960 Array logical volume, see the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide.

How to Remove a StorEdge 9910/9960 Array Logical Volume

Use this procedure to remove a logical volume. This procedure assumes all cluster nodes are booted and connected to the StorEdge 9910/9960 array that hosts the logical volume you are removing.

This procedure defines Node A as the node you begin working with, and Node B as the remaining nodes in the cluster. If you remove a logical volume from a cluster with more than two nodes, repeat Step 9 through Step 11 for each additional node that connects to the logical volume.


Caution -

This procedure removes all data on the logical volume that you are removing.


  1. If necessary, back up all data and migrate all resource groups and disk device groups to another node.
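
    As a sketch only, the following command evacuates all resource groups and device groups from one node; nodename is the node you are evacuating:


    # scswitch -S -h nodename

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.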

  2. Determine if the logical volume that you plan to remove is configured as a quorum device.


    # scstat -q
    
    • If the logical volume is not a quorum device, go to Step 3.

    • If the logical volume is configured as a quorum device, choose and configure another device to be the new quorum device. Then remove the old quorum device.

    To add and remove a quorum device in your configuration, see the Sun Cluster 3.0 12/01 System Administration Guide.
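
    As a sketch only, the scconf command adds and removes quorum devices; globaldev takes a DID device name such as dN. Confirm the syntax in the Sun Cluster 3.0 12/01 System Administration Guide:


    # scconf -a -q globaldev=dN
    # scconf -r -q globaldev=dN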

  3. Run the appropriate Solstice DiskSuite or VERITAS Volume Manager commands to remove the reference to the logical volume from any diskset or disk group.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
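
    As a sketch only, with placeholder names, the following commands remove a drive from a Solstice DiskSuite diskset and a disk from a VERITAS Volume Manager disk group, respectively:


    # metaset -s diskset-name -d /dev/did/rdsk/dN
    # vxdg -g disk-group-name rmdisk disk-name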

  4. If the cluster is running VERITAS Volume Manager, update the list of devices on all cluster nodes attached to the logical volume that you are removing.

    See your VERITAS Volume Manager documentation for information about using the vxdisk rm command to remove devices (volumes) in your VERITAS Volume Manager device list.
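
    As a sketch only, the following command removes one device from the VERITAS Volume Manager device list; cNtXdY is a placeholder for the device name:


    # vxdisk rm cNtXdY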

  5. Remove the logical volume.

    Contact your service provider to remove the logical volume.

  6. Determine the resource groups and device groups that are running on Node A and Node B.

    Record this information because you will use it in Step 11 of this procedure to return resource groups and device groups to these nodes.


    # scstat
    
  7. Shut down and reboot Node A by using the shutdown command with the -i6 option.


    # shutdown -y -g0 -i6
    

    For procedures on shutting down a node, see the Sun Cluster 3.0 12/01 System Administration Guide.

  8. On Node A, update the /devices and /dev entries.


    # devfsadm -C
    # scdidadm -C
    
  9. Shut down and reboot Node B by using the shutdown command with the -i6 option.


    # shutdown -y -g0 -i6
    

    For procedures on shutting down a node, see the Sun Cluster 3.0 12/01 System Administration Guide.

  10. On Node B, update the /devices and /dev entries.


    # devfsadm -C
    # scdidadm -C
    
  11. Return the resource groups and device groups you identified in Step 6 to Node B.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  12. Repeat Step 9 through Step 11 for each additional node that connects to the logical volume.

Where to Go From Here

To create a logical volume, see "How to Add a StorEdge 9910/9960 Array Logical Volume to a Cluster".

Maintaining a StorEdge 9910/9960 Array

This section contains a limited set of procedures for maintaining a StorEdge 9910/9960 array. Contact your service provider to add, remove, or replace any StorEdge 9910/9960 components.

Table D-2 Task Map: Maintaining a StorEdge 9910/9960 Array

  • Add an array: see "How to Add a StorEdge 9910/9960 Array".

  • Remove an array: see "How to Remove a StorEdge 9910/9960 Array".

  • Add a node to the array: see "Adding a Cluster Node".

  • Remove a node from the array: see "Removing a Cluster Node".

How to Add a StorEdge 9910/9960 Array

Use this procedure to add a new StorEdge 9910/9960 array to a running cluster.

This procedure defines Node A as the node you begin working with, and Node B as the remaining nodes. If you add an array to more than two nodes, repeat Step 19 through Step 30 for each additional node that connects to the array.

  1. Power on the StorEdge 9910/9960 array.


    Note -

    The StorEdge 9910/9960 array will require approximately 10 minutes to boot.


    Contact your service provider to power on the StorEdge 9910/9960 array.

  2. If you plan to use Sun StorEdge Traffic Manager software, verify that the array is configured for multipathing.

    Contact your service provider to verify that the array is configured for multipathing.

  3. Configure the new StorEdge 9910/9960 array.

    Contact your service provider to create the desired logical volumes.

  4. Determine if you need to install a host adapter in Node A.


    Note -

    If you use Sun StorEdge Traffic Manager, each node will need two host adapters that connect to each array.


  5. Is the host adapter the first host adapter of its type on Node A?

    • If no, skip to Step 6.

    • If yes, contact your service provider to install the support packages and configure the drivers before you proceed to Step 6.

  6. Determine whether your node is enabled with the Solaris dynamic reconfiguration (DR) feature. One way to check is sketched after the following options.

    • If your node is enabled with DR, install the host adapter and proceed to Step 12.

      For the procedure on installing a host adapter, see the documentation that shipped with your host adapter or updated information on the manufacturer's web site.

    • If your node does not have DR enabled, you must shut down this node to install the host adapter(s). Proceed to Step 7.
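
    As a sketch only, one way to check for DR support is to list the system's attachment points; on a DR-capable node, the system boards appear as configurable attachment points. See the cfgadm(1M) man page and your server documentation for how to interpret the output:


    # cfgadm -al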

  7. Determine the resource groups and device groups that are running on Node A and Node B.

    Record this information because you will use it in Step 30 of this procedure to return resource groups and device groups to these nodes.


    # scstat
    
  8. Stop the Sun Cluster software on Node A and shut down Node A.


    # shutdown -y -g0 -i0
    

    For the procedure on shutting down a node, see the Sun Cluster 3.0 12/01 System Administration Guide.

  9. Power off Node A.

  10. Install the host adapter in Node A.

    For the procedure on installing a host adapter, see the documentation that shipped with your host adapter or updated information on the manufacturer's web site.

  11. Power on and boot Node A.


    {0} ok boot -x
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  12. Attach the StorEdge 9910/9960 array to Node A.

    Contact your service provider to install a fiber-optic cable between the StorEdge 9910/9960 array and your cluster node.

  13. Configure the host adapter and the StorEdge 9910/9960 array.

    Contact your service provider to configure the adapter and StorEdge 9910/9960 array.

  14. If necessary, install any required patches or software for Sun StorEdge Traffic Manager software support on Node A from the Sun Download Center Web site, http://www.sun.com/storage/san/

    For instructions on installing the software, see the Sun StorEdge Traffic Manager Software Installation and Configuration Guide.
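
    As a sketch only, the following command checks whether multipathing (MPxIO) is enabled on the node; the file location and setting are described in the Sun StorEdge Traffic Manager documentation, and a value of "no" means multipathing is enabled:


    # grep mpxio-disable /kernel/drv/scsi_vhci.conf
    mpxio-disable="no";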

  15. Shut down Node A.


    # shutdown -y -g0 -i0
    
  16. Perform a reconfiguration boot to create the new Solaris device files and links on Node A.


    {0} ok boot -r
    
  17. On one node connected to the array, use the format(1M) command to verify that the new logical volume is visible to the system, and then label and partition the volume.


    # format
    

    See the format command man page for more information about using the command.

  18. (Optional) On Node A, verify that the device IDs (DIDs) are assigned to the new StorEdge 9910/9960 array.


    # scdidadm -l
    

  19. Determine if you need to install a host adapter in Node B.

  20. Is the host adapter the first host adapter of its type on Node B?

    • If no, skip to Step 21.

    • If yes, contact your service provider to install the support packages and configure the drivers before you proceed to Step 21.

  21. Determine whether your node is enabled with the Solaris dynamic reconfiguration (DR) feature.

    • If your node is enabled with DR, install the host adapter and proceed to Step 25.

      For the procedure on installing a host adapter, see the documentation that shipped with your host adapter or updated information on the manufacturer's web site.

    • If your node does not have DR enabled, proceed to Step 22.

  22. Stop the Sun Cluster software on Node B and shut down Node B.


    # shutdown -y -g0 -i0
    

    For the procedure on shutting down a node, see the Sun Cluster 3.0 12/01 System Administration Guide.

  23. Power off Node B and install the host adapter in Node B.

    For the procedure on installing a host adapter, see the documentation that shipped with your host adapter and node.

  24. Power on and boot Node B.


    {0} ok boot -x
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  25. Attach the StorEdge 9910/9960 array to Node B.

    Contact your service provider to install a fiber-optic cable between the StorEdge 9910/9960 array and your cluster node.

  26. If necessary, install any required patches or software for Sun StorEdge Traffic Manager software support on Node B from the Sun Download Center Web site, http://www.sun.com/storage/san/

    For instructions on installing the software, see the Sun StorEdge Traffic Manager Software Installation and Configuration Guide.

  27. Shut down Node B.


    # shutdown -y -g0 -i0
    
  28. Perform a reconfiguration boot to create the new Solaris device files and links on Node B.


    {0} ok boot -r
    
  29. (Optional) On Node B, verify that the device IDs (DIDs) are assigned to the new StorEdge 9910/9960 array.


    # scdidadm -l
    

  30. Return the resource groups and device groups you identified in Step 7 to Node B.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  31. Repeat Step 19 through Step 30 for each additional node that connects to the array.

  32. Perform volume management administration to incorporate the new logical volumes into the cluster.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
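
    As a sketch only, with placeholder names, the following commands add a drive to a Solstice DiskSuite diskset and add a disk to VERITAS Volume Manager, respectively:


    # metaset -s diskset-name -a /dev/did/rdsk/dN
    # vxdiskadd cNtXdY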

How to Remove a StorEdge 9910/9960 Array

Use this procedure to permanently remove a StorEdge 9910/9960 array. This procedure also gives you the option of removing the host adapters from the nodes that were attached to the array you are removing.

This procedure defines Node A as the node you begin working with, and Node B as the remaining nodes. If you remove an array from more than two nodes, repeat Step 14 through Step 21 for each additional node that connects to the array.


Caution -

During this procedure, you will lose access to the data that resides on the StorEdge 9910/9960 array you are removing.


  1. If necessary, back up all data and migrate all resource groups and disk device groups to another node.

  2. Determine if the array that you plan to remove is configured as a quorum device.


    # scstat -q
    
    • If the array is not a quorum device, go to Step 3.

    • If the array is configured as a quorum device, choose and configure another device to be the new quorum device. Then remove the old quorum device.

    To add and remove a quorum device in your configuration, see the Sun Cluster 3.0 12/01 System Administration Guide.

  3. If necessary, detach the submirrors from the StorEdge 9910/9960 array you are removing in order to stop all I/O activity to the array.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
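
    As a sketch only, with placeholder names, the following commands detach a Solstice DiskSuite submirror and disassociate a VERITAS Volume Manager plex, respectively:


    # metadetach mirror-name submirror-name
    # vxplex -g disk-group-name dis plex-name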

  4. Run the appropriate Solstice DiskSuite or VERITAS Volume Manager commands to remove the references to the logical volume(s) from any diskset or disk group.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  5. Determine whether your nodes are enabled with the Solaris dynamic reconfiguration (DR) feature.

    • If your nodes are enabled with DR, disconnect the fiber-optic cables and, if desired, remove the host adapters from both nodes. Then perform Step 21 on each node that was connected to the array.

      For the procedure on removing a host adapter, see the documentation that shipped with your host adapter or updated information on the manufacturer's web site.

    • If your nodes do not have DR enabled, proceed to Step 6.

  6. Determine the resource groups and device groups that are running on Node A and Node B.

    Record this information because you will use it in Step 20 of this procedure to return resource groups and device groups to these nodes.


    # scstat
    
  7. Stop the Sun Cluster software on Node A, and shut down Node A.


    # shutdown -y -g0 -i0
    

    For the procedure on shutting down a node, see the Sun Cluster 3.0 12/01 System Administration Guide.

  8. Disconnect the fiber-optic cable between Node A and the StorEdge 9910/9960 array.

  9. Do you want to remove the host adapter from Node A?

    • If no, skip to Step 12.

    • If yes, power off Node A.

  10. Remove the host adapter from Node A.

    For the procedure on removing host adapters, see the documentation that shipped with your host adapter or updated information on the manufacturer's web site.

  11. Without allowing the node to boot, power on Node A.

    For more information, see the documentation that shipped with your server.

  12. Boot Node A into cluster mode.


    {0} ok boot
    
  13. On Node A, update the device namespace.


    # devfsadm -C
    
  14. Stop the Sun Cluster software on Node B, and shut down Node B.


    # shutdown -y -g0 -i0
    

    For the procedure on shutting down a node, see the Sun Cluster 3.0 12/01 System Administration Guide.

  15. Disconnect the fiber-optic cable between Node B and the StorEdge 9910/9960 array.

  16. Do you want to remove the host adapter from Node B?

    • If no, skip to Step 19.

    • If yes, power off Node B.

  17. Remove the host adapter from Node B.

    For the procedure on removing host adapters, see the documentation that shipped with your server and host adapter.

  18. Without allowing the node to boot, power on Node B.

    For more information, see the documentation that shipped with your server.

  19. Boot Node B into cluster mode.


    {0} ok boot
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  20. Return the resource groups and device groups you identified in Step 6 to Node B.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    
  21. On Node B, update the device namespace.


    # devfsadm -C
    
  22. Repeat Step 14 through Step 21 for each additional node that connects to the array.

  23. From one node, remove DID references to the array that was removed.


    # scdidadm -C