Sun Cluster 3.0 5/02 Supplement

Appendix C Installing and Maintaining a Sun StorEdge 9910 or StorEdge 9960 Array

This appendix contains the procedures for installing, configuring, and maintaining Sun StorEdge 9910 and Sun StorEdge 9960 arrays.

This appendix contains the following procedures:

  • "How to Add a StorEdge 9910/9960 Array Logical Volume to a Cluster"

  • "How to Remove a StorEdge 9910/9960 Array Logical Volume"

  • "How to Add a StorEdge 9910/9960 Array"

  • "How to Remove a StorEdge 9910/9960 Array"

  • "How to Replace a Host-to-Array Fiber-Optic Cable"

  • "How to Replace a Host Adapter"

For conceptual information on multihost disks, see the Sun Cluster 3.0 12/01 Concepts document.

Configuring a StorEdge 9910/9960 Array

This section contains the procedures for configuring a StorEdge 9910/9960 array in a running cluster. The following table lists these procedures.

Table C-1 Task Map: Configuring a StorEdge 9910/9960 Array

  Task                                                For Instructions, Go To
  --------------------------------------------------  -------------------------------------------------------------------
  Add a StorEdge 9910/9960 array logical volume       "How to Add a StorEdge 9910/9960 Array Logical Volume to a Cluster"
  Remove a StorEdge 9910/9960 array logical volume    "How to Remove a StorEdge 9910/9960 Array Logical Volume"

How to Add a StorEdge 9910/9960 Array Logical Volume to a Cluster

Use this procedure to add a logical volume to a cluster. This procedure assumes that your service provider has created your logical volumes and that all cluster nodes are booted and attached to the StorEdge 9910/9960 array.

  1. On all cluster nodes, update the /devices and /dev entries.


    # devfsadm
    

  2. On one node that is connected to the StorEdge 9910/9960 array, use the format command to verify that the new logical volume is visible to the system.


    # format
    

    See the format command man page for more information about using the command.
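
    If you only need to confirm that the new volume appears in the disk list, a quick non-interactive check (a common Solaris idiom, not part of the original procedure) is to let format read end-of-file from standard input, which prints the available disk selections and exits:


    # format < /dev/null
    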

  3. Are you running VERITAS Volume Manager?

    • If no, go to Step 4.

    • If yes, update the VERITAS Volume Manager list of devices on all cluster nodes that are attached to the logical volume you verified in Step 2.

    See your VERITAS Volume Manager documentation for information about using the vxdctl enable command to update new devices (volumes) in your VERITAS Volume Manager list of devices.
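
    For example, you would typically run the following command on each node that is attached to the new logical volume:


    # vxdctl enable
    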

  4. If necessary, partition the logical volume.

    Contact your service provider to partition the logical volume.

  5. From any node in the cluster, update the global device namespace.


    # scgdevs
    

    If a volume management daemon such as vold is running on your node, and you have a CD-ROM drive that is connected to the node, a device busy error might be returned even if no disk is in the drive. This error is expected behavior.
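
    To confirm that the new logical volume is now part of the global device namespace, you can list the DID mappings on the node; the new volume should appear with its own DID instance:


    # scdidadm -l
    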

Where to Go From Here

To create a new resource or reconfigure a running resource to use the new StorEdge 9910/9960 array logical volume, see the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide.

How to Remove a StorEdge 9910/9960 Array Logical Volume

Use this procedure to remove a logical volume. This procedure assumes all cluster nodes are booted and attached to the StorEdge 9910/9960 array that hosts the logical volume you are removing.

This procedure defines Node A as the node you begin working with, and Node B as the remaining node.


Caution -

This procedure removes all data on the logical volume you are removing.


  1. If necessary, migrate all data and volumes off the logical volume that you are removing. Otherwise, proceed to Step 2.

  2. Are you running VERITAS Volume Manager?

    • If no, go to Step 3.

    • If yes, update the list of devices on all cluster nodes attached to the logical volume that you are removing.

    See your VERITAS Volume Manager documentation for information about using the vxdisk rm command to remove devices (volumes) in your VERITAS Volume Manager device list.
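
    As a sketch, if the logical volume appears to the node as device c2t0d0 (a hypothetical device name), you would run the following on each attached node:


    # vxdisk rm c2t0d0
    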

  3. Run the appropriate Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager commands to remove the reference to the logical unit number (LUN) from any diskset or disk group.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.
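
    As a minimal sketch, assuming the LUN is DID device d4 in a diskset named setname, or VxVM disk disk01 in disk group dg1 (all hypothetical names), the commands would resemble the following. For Solstice DiskSuite/Solaris Volume Manager (add the -f option to force removal of the last drive in the diskset, if necessary):


    # metaset -s setname -d /dev/did/rdsk/d4
    

    For VERITAS Volume Manager:


    # vxdg -g dg1 rmdisk disk01
    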

  4. Remove the logical volume.

    Contact your service provider to remove the logical volume.

  5. Determine the resource groups and device groups that are running on Node A and Node B.

    Record this information because you will use it in Step 12 of this procedure to return resource groups and device groups to these nodes.


    # scstat
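    

    If you want to limit the output, scstat also accepts the -g option (resource groups only) and the -D option (device groups only):


    # scstat -g
    # scstat -D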
    

  6. Move all resource groups and device groups off Node A.


    # scswitch -S -h from-node
    

  7. Shut down and reboot Node A by using the shutdown command with the -i6 option.

    The -i6 option with the shutdown command causes the node to shut down and then reboot.


    # shutdown -y -g0 -i6
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  8. On Node A, update the /devices and /dev entries.

    The -C option used below cleans up stale entries: devfsadm -C removes dangling /dev links, and scdidadm -C removes DID instances for devices that are no longer attached.


    # devfsadm -C
    # scdidadm -C
    

  9. Move all resource groups and device groups off Node B.


    # scswitch -S -h from-node
    

  10. Shut down and reboot Node B by using the shutdown command with the -i6 option.

    The -i6 option with the shutdown command causes the node to shut down and then reboot.


    # shutdown -y -g0 -i6
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  11. On Node B, update the /devices and /dev entries.


    # devfsadm -C
    # scdidadm -C
    

  12. Return the resource groups and device groups you identified in Step 5 to Node A and Node B.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.
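
    For example, if Step 5 showed a resource group named nfs-rg and a device group named dg-schost-1 running on a node named phys-schost-2 (all hypothetical names), you would run:


    # scswitch -z -g nfs-rg -h phys-schost-2
    # scswitch -z -D dg-schost-1 -h phys-schost-2
    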

Where to Go From Here

To create a logical volume, see "How to Add a StorEdge 9910/9960 Array Logical Volume to a Cluster".

Maintaining a StorEdge 9910/9960 Array

This section contains a limited set of procedures for maintaining a StorEdge 9910/9960 array. Contact your service provider to add, remove, or replace any StorEdge 9910/9960 components.

Table C-2 Task Map: Maintaining a StorEdge 9910/9960 Array

  Task                                         For Instructions, Go To
  -------------------------------------------  -----------------------------------------------------
  Add a StorEdge 9910/9960 array               "How to Add a StorEdge 9910/9960 Array"
  Remove a StorEdge 9910/9960 array            "How to Remove a StorEdge 9910/9960 Array"
  Replace a host-to-array fiber-optic cable    "How to Replace a Host-to-Array Fiber-Optic Cable"
  Replace a host adapter                       "How to Replace a Host Adapter"

How to Add a StorEdge 9910/9960 Array

Use this procedure to add a new StorEdge 9910/9960 array to a running cluster.

This procedure defines Node A as the node you begin working with, and Node B as the remaining node.

  1. Power on the StorEdge 9910/9960 array.


    Note -

    The StorEdge 9910/9960 array will require a few minutes to boot.


    Contact your service provider to power on the StorEdge 9910/9960 array.

  2. Configure the new StorEdge 9910/9960 array.

    Contact your service provider to create the desired logical volumes.

  3. Determine the resource groups and device groups that are running on Node A and Node B.

    Record this information because you will use it in Step 28 of this procedure to return resource groups and device groups to these nodes.


    # scstat
    

  4. Move all resource groups and device groups off Node A.


    # scswitch -S -h from-node
    

  5. Do you need to install a host adapter in Node A?

    • If no, skip to Step 11.

    • If yes, proceed to Step 6.

  6. Is the host adapter the first JNI host adapter on Node A?

    • If no, skip to Step 7.

    • If yes, contact your service provider to install the support packages and configure the drivers.

  7. Stop the Sun Cluster software on Node A and shut down Node A.


    # shutdown -y -g0 -i0
    

    For the procedure on shutting down a node, see the Sun Cluster 3.0 12/01 System Administration Guide.

  8. Power off Node A.

  9. Install the host adapter in Node A.

    For the procedure on installing a host adapter, see the documentation that shipped with your host adapter or updated information on the manufacturer's web site.

  10. If necessary, power on Node A and boot it into non-cluster mode.


    {0} ok boot -x
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  11. Attach the StorEdge 9910/9960 array to Node A.

    Contact your service provider to install a fiber-optic cable between the StorEdge 9910/9960 array and your cluster node.

  12. Configure the Fibre Channel adapter and the StorEdge 9910/9960 array.

    Contact your service provider to configure the adapter and StorEdge 9910/9960 array.

  13. Shut down Node A.


    # shutdown -y -g0 -i0
    

  14. Perform a reconfiguration boot to create the new Solaris device files and links on Node A.


    {0} ok boot -r
    

  15. Label the new logical volume.

    Contact your service provider to label the new logical volume.

  16. (Optional) On Node A, verify that the device IDs (DIDs) are assigned to the new StorEdge 9910/9960 array.


    # scdidadm -l
    

  17. Do you need to install a host adapter in Node B?

    • If no, skip to Step 24.

    • If yes, proceed to Step 18.

  18. Is the host adapter the first JNI host adapter on Node B?

    • If no, skip to Step 19.

    • If yes, contact your service provider to install the support packages and configure the drivers.

  19. Move all resource groups and device groups off Node B.


    # scswitch -S -h from-node
    

  20. Stop the Sun Cluster software on Node B, and shut down the node.


    # shutdown -y -g0 -i0
    

    For the procedure on shutting down a node, see the Sun Cluster 3.0 12/01 System Administration Guide.

  21. Power off Node B.

    For more information, see the documentation that shipped with your server.

  22. Install the host adapter in Node B.

    For the procedure on installing a host adapter, see the documentation that shipped with your host adapter and node.

  23. If necessary, power on Node B and boot it into non-cluster mode.


    {0} ok boot -x
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  24. Attach the StorEdge 9910/9960 array to Node B.

    Contact your service provider to install a fiber-optic cable between the StorEdge 9910/9960 array and your cluster node.

  25. Shut down Node B.


    # shutdown -y -g0 -i0
    

  26. Perform a reconfiguration boot to create the new Solaris device files and links on Node B.


    {0} ok boot -r
    

  27. (Optional) On Node B, verify that the device IDs (DIDs) are assigned to the new StorEdge 9910/9960 array.


    # scdidadm -l
    

  28. Return the resource groups and device groups you identified in Step 3 to Node A and Node B.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  29. Perform volume management administration to incorporate the new logical volumes into the cluster.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.
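
    As a minimal sketch, assuming Solstice DiskSuite/Solaris Volume Manager and a new LUN that appears as DID device d10 to be placed in an existing diskset named setname (hypothetical names), you would add the drive to the diskset as follows; with VERITAS Volume Manager you would instead initialize the disk and add it to a disk group (for example, with vxdiskadd):


    # metaset -s setname -a /dev/did/rdsk/d10
    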

How to Remove a StorEdge 9910/9960 Array

Use this procedure to permanently remove a StorEdge 9910/9960 array and its submirrors from a running cluster. This procedure also gives you the option of removing the host adapters from the nodes that are connected to the StorEdge 9910/9960 array you are removing.

This procedure defines Node A as the node you begin working with, and Node B as the remaining node.


Caution -

During this procedure, you will lose access to the data that resides on the StorEdge 9910/9960 array you are removing.


  1. Back up all database tables, data services, and volumes that are associated with the StorEdge 9910/9960 array that you are removing.

  2. Detach the submirrors from the StorEdge 9910/9960 array you are removing in order to stop all I/O activity to the array.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.

  3. Run the appropriate Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager commands to remove the references to the LUN(s) from any diskset or disk group.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.

  4. Determine the resource groups and device groups that are running on Node A.


    # scstat
    

  5. Move all resource groups and device groups off Node A.


    # scswitch -S -h from-node
    

  6. Stop the Sun Cluster software on Node A, and shut down Node A.


    # shutdown -y -g0 -i0
    

    For the procedure on shutting down a node, see the Sun Cluster 3.0 12/01 System Administration Guide.

  7. Disconnect the fiber-optic cable between Node A and the StorEdge 9910/9960 array.

  8. Do you want to remove the host adapter from Node A?

    • If no, skip to Step 11.

    • If yes, power off Node A.

  9. Remove the host adapter from Node A.

    For the procedure on removing host adapters, see the documentation that shipped with your host adapter or updated information on the manufacturer's web site.

  10. Without allowing the node to boot, power on Node A.

    For more information, see the documentation that shipped with your server.

  11. Boot Node A into cluster mode.


    {0} ok boot
    

  12. Determine the resource groups and device groups that are running on Node B.

    Record this information because you will use it in Step 21 of this procedure to return resource groups and device groups to these nodes.


    # scstat
    

  13. Move all resource groups and device groups off Node B.


    # scswitch -S -h from-node
    

  14. Stop the Sun Cluster software on Node B, and shut down Node B.


    # shutdown -y -g0 -i0
    

  15. Disconnect the fiber-optic cable between Node B and the StorEdge 9910/9960 array.

  16. Do you want to remove the host adapter from Node B?

    • If no, skip to Step 19.

    • If yes, power off Node B.

  17. Remove the host adapter from Node B.

    For the procedure on removing host adapters, see the documentation that shipped with your server and host adapter.

  18. Without allowing the node to boot, power on Node B.

    For more information, see the documentation that shipped with your server.

  19. Boot Node B into cluster mode.


    {0} ok boot
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  20. On all cluster nodes, update the /devices and /dev entries.


    # devfsadm -C
    # scdidadm -C
    

  21. Return the resource groups and device groups you identified in Step 4 and Step 12 to Node A and Node B.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

How to Replace a Host-to-Array Fiber-Optic Cable

Use this procedure to replace the host-to-array fiber-optic cables. Node A in this procedure refers to the node with the failed fiber-optic cable that you are replacing.

  1. On Node A, determine the resource groups and device groups that are running on this node.

    Record this information because you will use it in Step 4 of this procedure to return resource groups and device groups to this node.


    # scstat
    

  2. Move all resource groups and device groups off Node A.


    # scswitch -S -h from-node
    

  3. Replace the host-to-array fiber-optic cable.

  4. Return the resource groups and device groups you identified in Step 1 to Node A.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

How to Replace a Host Adapter

Use this procedure to replace a failed host adapter in a running cluster. Node A in this procedure refers to the node with the failed host adapter you are replacing. Node B is a backup node.

  1. Determine the resource groups and device groups that are running on Node A and Node B.

    Record this information because you will use it in Step 8 of this procedure to return resource groups and device groups to Node A.


    # scstat
    

  2. Move all resource groups and device groups off Node A.


    # scswitch -S -h from-node
    

  3. Shut down Node A.


    # shutdown -y -g0 -i0
    

  4. Power off Node A.

    For more information, see the documentation that shipped with your server.

  5. Replace the failed host adapter.

    Contact your service provider to replace the failed host adapter.

  6. Power on Node A.

    For more information, see the documentation that shipped with your server.

  7. Boot Node A into cluster mode.


    {0} ok boot
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  8. Return the resource groups and device groups you identified in Step 1 to Node A.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.