Sun Cluster 3.0-3.1 With Sun StorEdge 9900 Series Storage Device Manual

Chapter 1 Installing and Configuring a Sun StorEdge 9900 Series Storage Array

This chapter contains a limited set of procedures for installing and configuring Sun StorEdge 9900 Series storage arrays in a Sun Cluster environment. Contact your Sun service provider to perform tasks that are not documented in this chapter.

The StorEdge 9900 Series includes the following storage arrays:

You can perform all the procedures in this chapter on all StorEdge 9900 Series storage arrays unless noted otherwise.

This chapter contains the following sections.

For conceptual information on multihost disks, see your Sun Cluster concepts documentation.

Restrictions

When using storage-based replication, do not configure a replicated volume as a quorum device. Locate any quorum devices on an unreplicated volume. See Using Storage-Based Data Replication in Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS for more information on storage-based replication.

Installing Storage Arrays

The initial installation of a storage array in a new cluster must be performed by your Sun service provider.

Procedure: How to Add a Storage Array to an Existing Cluster

Use this procedure to add a new storage array to a running cluster.

This procedure defines Node A as the node with which you begin working. Node B is another node in the cluster.

If you need to add a storage array to more than two nodes, repeat Step 22 through Step 37 for each additional node that connects to the storage array.
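For clusters with more than two nodes, it can help to track the per-node repeat work explicitly. A dry-run sketch, with hypothetical node names (phys-schost-3 and phys-schost-4 are placeholders, not names from this manual):

```shell
# Dry-run checklist sketch: list the repeat work for each additional
# node that connects to the storage array. Node names are placeholders.
for node in phys-schost-3 phys-schost-4; do
  echo "Repeat Step 22 through Step 37 on ${node}"
done
```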

Steps
  1. Power on the storage array.


    Note –

    The storage array requires approximately 10 minutes to boot.


    Contact your service provider to power on the storage array.

  2. If you plan to use multipathing software, verify that the storage array is configured for multipathing.

    Contact your service provider to verify that the storage array is configured for multipathing.

  3. Configure the new storage array.

    Contact your service provider to create the desired logical volumes.

  4. Do you need to install a host adapter in Node A?


    Note –

    If you use multipathing software, each node requires two paths to the same set of LUNs.


    For the procedure about how to install host adapters, see the documentation that shipped with your host adapters and nodes.

  5. Is the host adapter that you are installing the first host adapter on Node A?

    • If no, proceed to Step 6.

    • If yes, contact your service provider to install the support packages and configure the drivers before you proceed to Step 6.

  6. Is your node enabled with the Solaris dynamic reconfiguration (DR) feature?

    • If yes, install the host adapter.

      For the procedure about how to install a host adapter, see the documentation that shipped with your host adapter or updated information on the manufacturer's web site.

    • If no, you must shut down the node to install the host adapter. Proceed to Step 7; you shut down the node in Step 8 and install the host adapter in Step 9.

  7. Determine the resource groups and device groups that are running on Node A and Node B.

    Record this information because you use it in Step 37 of this procedure to return resource groups and device groups to these nodes.


    # scstat
    

    For more information, see your Sun Cluster system administration documentation.
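The status recorded here is what you restore in Step 37, so it is worth saving. A dry-run sketch (the output path is a hypothetical choice, and the surrounding echo keeps the command illustrative, since scstat exists only on cluster nodes):

```shell
# Dry-run sketch: record the resource-group and device-group status
# before maintenance. The file path is a placeholder; remove the quotes
# and 'echo' to run scstat for real on a cluster node.
echo "scstat > /var/tmp/groups-before-maintenance.txt"
```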

  8. Shut down and power off Node A.

    For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation.

  9. Install the host adapter in Node A.

    For the procedure about how to install a host adapter, see the documentation that shipped with your host adapter or updated information on the manufacturer's web site.

  10. Power on and boot Node A into noncluster mode.

    For the procedure about how to boot a node in noncluster mode, see your Sun Cluster system administration documentation.

  11. If necessary, upgrade the host adapter firmware on Node A.

    PatchPro is a patch-management tool that eases the selection and download of patches required for installation or maintenance of Sun Cluster software. PatchPro provides an Interactive Mode tool, made especially for Sun Cluster, that simplifies patch installation, and an Expert Mode tool that helps you maintain your configuration with the latest set of patches. Expert Mode is especially useful for obtaining all of the latest patches, not just the high-availability and security patches.

    To access the PatchPro tool for Sun Cluster software, go to http://www.sun.com/PatchPro/, click Sun Cluster, then choose either Interactive Mode or Expert Mode. Follow the instructions in the PatchPro tool to describe your cluster configuration and download the patches.

    For third-party firmware patches, see the SunSolve Online site at http://sunsolve.ebay.sun.com.

  12. Attach the storage array to Node A.

    Contact your service provider to install a fiber-optic cable between the storage array and your node.

  13. Configure the host adapter and the storage array.

    Contact your service provider to configure the adapter and storage array.

  14. If you plan to install multipathing software, determine which multipathing solution you plan to install. If you plan to use Sun SDLM software, proceed to Step 15.

  15. SPARC: Install Sun StorEdge 9900 Dynamic Link Manager (Sun SDLM) software and any required patches for Sun SDLM software support on Node A.

    For the procedure about how to install the Sun SDLM software, see the documentation that shipped with your storage array.

  16. Shut down Node A.

    For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation.

  17. Perform a reconfiguration boot to create the new Solaris device files and links on Node A.


    # boot -r
    
  18. On Node A, configure all controllers that are affected by the new physical path. In the following command, replace N with the controller number. Repeat the command for each affected controller.


    # cfgadm -c configure cN
    
  19. On Node A, verify that the same set of LUNs is visible to the expected controllers.


    # format
    

    See the format command man page for more information about how to use the command.

  20. On Node A, update the paths to the device ID instances.


    # scdidadm -C
    # scdidadm -r
    
  21. (Optional) On Node A, verify that the device IDs are assigned to the new storage array.


    # scdidadm -l
    
  22. Do you need to install a host adapter in Node B?


    Note –

    If you use multipathing software, each node requires two paths to the same set of LUNs.


    For the procedure about how to install host adapters, see the documentation that shipped with your host adapters and nodes.

  23. Is the host adapter that you are installing the first host adapter on Node B?

    • If no, proceed to Step 24.

    • If yes, contact your service provider to install the support packages and configure the drivers before you proceed to Step 24.

  24. Is your node enabled with the Solaris dynamic reconfiguration (DR) feature?

    • If yes, install the host adapter.

      For the procedure about how to install a host adapter, see the documentation that shipped with your host adapter or updated information on the manufacturer's web site.

    • If no, proceed to Step 25.

  25. Shut down and power off Node B.

    For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation.

  26. Install the host adapter in Node B.

    For the procedure about how to install a host adapter, see the documentation that shipped with your host adapter or updated information on the manufacturer's web site.

  27. If necessary, upgrade the host adapter firmware on Node B.

    PatchPro is a patch-management tool that eases the selection and download of patches required for installation or maintenance of Sun Cluster software. PatchPro provides an Interactive Mode tool, made especially for Sun Cluster, that simplifies patch installation, and an Expert Mode tool that helps you maintain your configuration with the latest set of patches. Expert Mode is especially useful for obtaining all of the latest patches, not just the high-availability and security patches.

    To access the PatchPro tool for Sun Cluster software, go to http://www.sun.com/PatchPro/, click Sun Cluster, then choose either Interactive Mode or Expert Mode. Follow the instructions in the PatchPro tool to describe your cluster configuration and download the patches.

    For third-party firmware patches, see the SunSolve Online site at http://sunsolve.ebay.sun.com.

  28. Power on and boot Node B into noncluster mode.

    For the procedure about how to boot a node in noncluster mode, see your Sun Cluster system administration documentation.

  29. Attach the storage array to Node B.

    Contact your service provider to install a fiber-optic cable between the storage array and your node.

  30. If you plan to install multipathing software, which multipathing solution do you plan to install?

    • If Sun StorEdge Traffic Manager software, proceed to Step 31.

    • If Sun SDLM software, proceed to Step 32.

  31. SPARC: Install any required patches or software for Sun StorEdge Traffic Manager software support on Node B.

  32. SPARC: Install Sun StorEdge 9900 Dynamic Link Manager (Sun SDLM) software and any required patches for Sun SDLM software support on Node B.

    For the procedure about how to install the Sun SDLM software, see the documentation that shipped with your storage array.

  33. Shut down Node B.

    For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation.

  34. Perform a reconfiguration boot to create the new Solaris device files and links on Node B.


    # boot -r
    

  35. On Node B, update the paths to the device ID instances.


    # scdidadm -C
    # scdidadm -r
    
  36. (Optional) On Node B, verify that the device IDs are assigned to the new LUNs.


    # scdidadm -l
    
  37. Return the resource groups and device groups that you identified in Step 7 to Node B.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

    For more information, see your Sun Cluster system administration documentation.
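As a concrete illustration of Step 37, suppose Step 7 recorded resource group oracle-rg and device group dg-schost-1 on node phys-schost-2 (all three names are hypothetical). A dry-run sketch, with echo keeping the commands illustrative:

```shell
# Dry-run sketch of Step 37 with placeholder names. Remove 'echo' and
# the quotes to perform the actual switchover on a cluster node.
RG=oracle-rg          # hypothetical resource group from Step 7
DG=dg-schost-1        # hypothetical device group from Step 7
NODE=phys-schost-2    # hypothetical node name
echo "scswitch -z -g ${RG} -h ${NODE}"
echo "scswitch -z -D ${DG} -h ${NODE}"
```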

  38. Repeat Step 22 through Step 37 for each additional node that connects to the storage array.

  39. Perform volume management administration to incorporate the new logical volumes into the cluster.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.

Next Steps

The best way to enable multipathing for a cluster is to install and enable it before installing the Sun Cluster software and enabling the cluster. For this procedure, see How to Install Sun Multipathing Software in Sun Cluster Software Installation Guide for Solaris OS. If you need to add multipathing software to an established cluster, see How to Add Sun StorEdge Traffic Manager Software.

Configuring Storage Arrays

This section contains the procedures about how to configure a storage array in a Sun Cluster environment. The following table lists these procedures. For configuration tasks that are not cluster-specific, see the documentation that shipped with your storage array.


Note –

When you upgrade firmware on a storage device or on an enclosure, redefine the stripe size of a LUN, or perform other LUN operations, a device ID might change unexpectedly. When you perform a check of the device ID configuration by running the scdidadm -c command, the following error message appears on your console if the device ID changed unexpectedly.


device id for nodename:/dev/rdsk/cXtYdZsN does not match physical 
device's id for ddecimalnumber, device may have been replaced.

To fix device IDs that report this error, run the scdidadm -R command for each affected device.
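When several devices report the error, the repair can be scripted over the affected DID instances. A dry-run sketch with placeholder instance names (d3 and d5 are hypothetical):

```shell
# Dry-run sketch: repair each affected DID instance reported by the
# 'scdidadm -c' check. The instance names are placeholders; remove
# 'echo' and the quotes to run the actual repairs on a cluster node.
for did in d3 d5; do
  echo "scdidadm -R ${did}"
done
```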


Table 1–1 Task Map: Configuring a Storage Array

Task 

Information 

Add a logical volume. 

See How to Add a Logical Volume.

Remove a logical volume. 

See How to Remove a Logical Volume.

Procedure: How to Add a Logical Volume

Use this procedure to add a logical volume to a cluster. This procedure assumes that your service provider created your logical volume. This procedure also assumes that all nodes are booted and are attached to the storage array.

Steps
  1. On all nodes, update the /devices and /dev entries.


    # devfsadm
    
  2. On each node connected to the storage array, verify that the same set of LUNs is visible to the expected controllers.


    # format
    

    See the format command man page for more information about how to use the command.

  3. Determine if you are running VERITAS Volume Manager.

    • If not, proceed to Step 4.

    • If you are running VERITAS Volume Manager, update the list of devices on all nodes that are attached to the new logical volume.

      See your VERITAS Volume Manager documentation for information about how to use the vxdctl enable command. Use this command to update new devices (volumes) in your VERITAS Volume Manager list of devices.

  4. From any node in the cluster, update the global device namespace.


    # scgdevs
    

    If a volume management daemon such as vold is running on your node, and you have a CD-ROM drive that is connected to the node, a device busy error might be returned even if no disk is in the drive. This error is expected behavior.
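Taken together, the commands in Steps 1 through 4 form a short per-node sequence. A dry-run sketch that keeps every command behind echo, since format is interactive and the cluster commands exist only on cluster nodes:

```shell
# Dry-run summary of this procedure's commands on one cluster node.
echo devfsadm     # Step 1: rebuild the /devices and /dev entries
echo format       # Step 2: verify that the LUNs are visible
echo scgdevs      # Step 4: update the global device namespace
```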

See Also

To create a new resource or reconfigure a running resource to use the new logical volume, see your Sun Cluster data services collection.

Procedure: How to Remove a Logical Volume

Use this procedure to remove a logical volume. This procedure assumes that all nodes are booted and are connected to the storage array that hosts the logical volume that you are removing.

This procedure defines Node A as the node with which you begin working. Node B is the remaining node.

If you need to remove the logical volume from more than two nodes, repeat Step 9 through Step 11 for each additional node that connects to the logical volume.


Caution –

This procedure destroys all data on the logical volume that you are removing.


Steps
  1. If necessary, back up all data. Migrate all resource groups and disk device groups to another node.

  2. Is the logical volume that you plan to remove configured as a quorum device?


    # scstat -q
    
    • If no, proceed to Step 3.

    • If yes, choose and configure another device to be the new quorum device. Then remove the old quorum device.

      For procedures about how to add and remove quorum devices, see your Sun Cluster system administration documentation.

  3. Run the appropriate Solstice DiskSuite/Solaris Volume Manager commands or VERITAS Volume Manager commands to remove the reference to the logical volume from any diskset or disk group.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.

  4. If the cluster is running VERITAS Volume Manager, update the list of devices on all nodes that are attached to the logical volume that you are removing.

    See your VERITAS Volume Manager documentation for information about how to use the vxdisk rm command to remove devices (volumes) in your VERITAS Volume Manager device list.

  5. Remove the logical volume.

    Contact your service provider to remove the logical volume.

  6. Determine the resource groups and device groups that are running on Node A and Node B.

    Record this information because you use it in Step 11 of this procedure to return resource groups and device groups to these nodes.


    # scstat
    
  7. Shut down and reboot Node A by using the shutdown command with the -i6 option.

    For the procedure about how to shut down and reboot a node, see your Sun Cluster system administration documentation.

  8. On Node A, update the /devices and /dev entries.


    # devfsadm -C
    # scdidadm -C
    
  9. Shut down and reboot Node B by using the shutdown command with the -i6 option.

    For the procedure about how to shut down and reboot a node, see your Sun Cluster system administration documentation.

  10. On Node B, update the /devices and /dev entries.


    # devfsadm -C
    # scdidadm -C
    
  11. Return the resource groups and device groups that you identified in Step 6 to Node B.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

    For more information, see your Sun Cluster system administration documentation.

  12. Repeat Step 9 through Step 11 for each additional node that connects to the logical volume.

See Also

To create a logical volume, see How to Add a Logical Volume.