Sun Cluster 3.1 - 3.2 With Sun StorEdge or StorageTek 9900 Series Storage Device Manual for Solaris OS

Chapter 1 Installing and Configuring a Sun StorEdge or StorageTek 9900 Series Storage Array

This chapter contains a limited set of procedures about how to install and configure Sun StorEdge 9900 Series storage arrays in a Sun Cluster environment. Contact your Sun service provider to perform tasks that are not documented in this chapter.

The StorEdge and StorageTek 9900 Series includes the following storage arrays:

You can perform all the procedures in this chapter on all StorEdge 9900 Series storage arrays unless noted otherwise.

This chapter contains the following sections.

For conceptual information on multihost disks, see your Sun Cluster concepts documentation.

Restrictions

When using storage-based replication, do not configure a replicated volume as a quorum device. Locate any quorum devices on an unreplicated volume. See Using Storage-Based Data Replication in Sun Cluster System Administration Guide for Solaris OS for more information on storage-based replication.

Installing Storage Arrays

The initial installation of a storage array in a new cluster must be performed by your Sun service provider.

How to Add a Storage Array to an Existing Cluster

Use this procedure to add a new storage array to a running cluster.

This procedure defines Node A as the node with which you begin working. Node B is another node in the cluster.

If you need to add a storage array to more than two nodes, repeat Step 20 through Step 36 for each additional node that connects to the storage array.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify role-based access control (RBAC) authorization.

  1. Power on the storage array.


    Note –

    The storage array requires approximately 10 minutes to boot.


    Contact your service provider to power on the storage array.

  2. If you plan to use multipathing software, verify that the storage array is configured for multipathing.

    Contact your service provider to verify that the storage array is configured for multipathing.

  3. Configure the new storage array.

    Contact your service provider to create the desired logical volumes.

  4. If you need to install a host adapter in Node A, and if this host adapter is the first on Node A, contact your service provider to install the support packages and configure the drivers before you proceed to Step 5.


    Note –

    If you use multipathing software, each node requires two paths to the same set of LUNs.


    If you do not need to install a host adapter, skip to Step 11.

  5. If your node is enabled with the Solaris dynamic reconfiguration (DR) feature, install the host adapter.

    For the procedure about how to install a host adapter, see the documentation that shipped with your host adapter or updated information on the manufacturer's web site.

    If your node is not enabled with DR, you must shut down this node to install the host adapter. Proceed to Step 6.

  6. Determine the resource groups and device groups that are running on Node A and Node B.

    Record this information because you will use it in Step 35 and Step 36 of this procedure to return resource groups and device groups to these nodes.

    • If you are using Sun Cluster 3.2, use the following commands:


      # clresourcegroup status -n NodeA[ NodeB ...] 
      # cldevicegroup status -n NodeA[ NodeB ...]
      
      -n NodeA[ NodeB ...]

      The node or nodes for which you are determining resource groups and device groups.

    • If you are using Sun Cluster 3.1, use the following command:


      # scstat
      

    For more information, see your Sun Cluster system administration documentation.

  7. Shut down and power off Node A.

    For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation.

  8. Install the host adapter in Node A.

    For the procedure about how to install a host adapter, see the documentation that shipped with your host adapter or updated information on the manufacturer's web site.

  9. Power on and boot Node A into noncluster mode by adding -x to your boot instruction.

    For the procedure about how to boot a node in noncluster mode, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.
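
    For example, on a SPARC based system you might boot into noncluster mode from the OpenBoot PROM prompt as shown below. On an x86 based system that uses GRUB, you would instead add the -x option to the kernel boot command in the GRUB menu.


    ok boot -x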

  10. If necessary, upgrade the host adapter firmware on Node A.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.

  11. Attach the storage array to Node A.

    Contact your service provider to install a fiber-optic cable between the storage array and your node.

  12. Configure the storage array.

    Contact your service provider to configure the storage array.

  13. If you plan to install the Solaris I/O multipathing software, use the procedure in How to Install Sun Multipathing Software in Sun Cluster Software Installation Guide for Solaris OS.

  14. SPARC: If you plan to install Sun StorEdge 9900 Dynamic Link Manager (Sun SDLM) software, install it and any required patches for Sun SDLM software support on Node A.

    For the procedure about how to install the Sun SDLM software, see the documentation that shipped with your storage array.

  15. To create the new Solaris device files and links on Node A, perform a reconfiguration boot.

    For the procedure about how to boot a cluster node, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  16. If you are using Solaris 8 or 9, on Node A configure all controllers that are affected by the new physical path.
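
    For example, you might first list the attachment points to identify the controller numbers (cN) that correspond to the new paths before you run the configure command that follows.


    # cfgadm -al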


    # cfgadm -c configure cN
    
  17. On Node A, use the appropriate multipathing software commands to verify that the same set of LUNs is visible to the expected controllers.
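
    For example, if Node A uses Solaris I/O multipathing on Solaris 10, the following command lists each multipathed logical unit and its path counts. If you use Sun SDLM, use the equivalent Sun SDLM display command instead; the command below is only one possibility.


    # mpathadm list lu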

  18. On Node A, update the paths to the device ID instances.

    • If you are using Sun Cluster 3.2, use the following command:


       # cldevice populate
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scgdevs
      
  19. (Optional) On Node A, verify that the device IDs are assigned to the new storage array.

    • If you are using Sun Cluster 3.2, use the following command:


       # cldevice list -n NodeA -v
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scdidadm -l
      
  20. If you need to install a host adapter in Node B, and if this host adapter is the first in Node B, contact your service provider to install the support packages and configure the drivers before you proceed to Step 21.


    Note –

    If you use multipathing software, each node requires two paths to the same set of LUNs.


    For the procedure about how to install host adapters, see the documentation that shipped with your host adapters and nodes.

    If you do not need to install host adapters, skip to Step 26.

  21. If your node is enabled with the Solaris dynamic reconfiguration (DR) feature, install the host adapter.

    For the procedure about how to install a host adapter, see the documentation that shipped with your host adapter or updated information on the manufacturer's web site.

  22. If your node is not enabled with DR, shut down and power off Node B.

    For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation.

  23. Install the host adapter in Node B.

    For the procedure about how to install a host adapter, see the documentation that shipped with your host adapter or updated information on the manufacturer's web site.

  24. If necessary, upgrade the host adapter firmware on Node B.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.

  25. Power on and boot Node B into noncluster mode by adding -x to your boot instruction.

    For the procedure about how to boot a node in noncluster mode, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  26. Attach the storage array to Node B.

    Contact your service provider to install a fiber-optic cable between the storage array and your node.

  27. If you plan to install Solaris I/O multipathing software, use the procedure in How to Install Sun Multipathing Software in Sun Cluster Software Installation Guide for Solaris OS.

  28. Install any required patches or software for Solaris I/O multipathing software support on Node B.

  29. SPARC: If you plan to use Sun StorEdge 9900 Dynamic Link Manager (Sun SDLM) multipathing software, install the software and any required patches for Sun SDLM software support on Node B.

    For the procedure about how to install the Sun SDLM software, see the documentation that shipped with your storage array.

  30. To create the new Solaris device files and links on Node B, perform a reconfiguration boot.

    For the procedure about how to boot a cluster node, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  31. If you are using Solaris 8 or 9, on Node B configure all controllers that are affected by the new physical path.


    # cfgadm -c configure cN
    
  32. On Node B, use the appropriate multipathing software commands to verify that the same set of LUNs is visible to the expected controllers.

  33. On Node B, update the paths to the device ID instances.

    • If you are using Sun Cluster 3.2, use the following command:


       # cldevice populate
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scgdevs
      
  34. (Optional) On Node B, verify that the device IDs are assigned to the new LUNs.

    • If you are using Sun Cluster 3.2, use the following command:


       # cldevice show
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scdidadm -l
      
  35. (Optional) Restore the device groups to the original node.

    Perform the following step for each device group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 ...]
      
      -n nodename

      The node to which you are restoring device groups.

      devicegroup1[ devicegroup2 …]

      The device group or groups that you are restoring to the node.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -D devicegroup -h nodename
      
  36. (Optional) Restore the resource groups to the original node.

    Perform the following step for each resource group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
      
      nodename

      For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

      resourcegroup1[ resourcegroup2 …]

      The resource group or groups that you are returning to the node or nodes.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -g resourcegroup -h nodename
      
  37. Repeat Step 20 through Step 36 for each additional node that connects to the storage array.

  38. Perform volume management administration to incorporate the new logical volumes into the cluster.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
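
    As a sketch only, the following Solaris Volume Manager commands create a disk set that contains both nodes and add one of the new logical volumes to it. The disk set name newset, the node names phys-node-a and phys-node-b, and the DID device d5 are hypothetical; substitute the names that apply to your configuration.


    # metaset -s newset -a -h phys-node-a phys-node-b
    # metaset -s newset -a /dev/did/rdsk/d5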

Next Steps

The best way to enable multipathing for a cluster is to install and enable the multipathing software before you install the Sun Cluster software and establish the cluster. For this procedure, see How to Install Sun Multipathing Software in Sun Cluster Software Installation Guide for Solaris OS. If you need to add multipathing software to an established cluster, see How to Add Solaris I/O multipathing Software.

Configuring Storage Arrays

This section contains the procedures about how to configure a storage array in a Sun Cluster environment. The following table lists these procedures. For configuration tasks that are not cluster-specific, see the documentation that shipped with your storage array.


Note –

When you upgrade firmware on a storage device or on an enclosure, redefine the stripe size of a LUN, or perform other LUN operations, a device ID might change unexpectedly. When you perform a check of the device ID configuration by running the cldevice check or scdidadm -c command, the following error message appears on your console if the device ID changed unexpectedly.


device id for nodename:/dev/rdsk/cXtYdZsN does not match physical 
device's id for ddecimalnumber, device may have been replaced.

To fix device IDs that report this error, run the cldevice repair or scdidadm -R command for each affected device.
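
For example, if the affected device corresponds to the DID instance d5 (a hypothetical instance), you might run one of the following commands, depending on your Sun Cluster release. See the cldevice(1CL) and scdidadm(1M) man pages for the device argument forms that your release accepts.


# cldevice repair d5
# scdidadm -R d5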


Table 1–1 Task Map: Configuring a Storage Array

Task                         Information

Add a logical volume.        See How to Add a Logical Volume.

Remove a logical volume.     See How to Remove a Logical Volume.

How to Add a Logical Volume

Use this procedure to add a logical volume to a cluster. This procedure assumes that your service provider created your logical volume. This procedure also assumes that all nodes are booted and are attached to the storage array.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

Before You Begin

To perform this procedure, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  1. On all nodes, update the /devices and /dev entries.


    # devfsadm
    
  2. On each node connected to the storage array, use the appropriate multipathing software commands to verify that the same set of LUNs is visible to the expected controllers.

  3. If you are running Veritas Volume Manager, update the list of devices on all nodes that are attached to the new logical volume.

    See your Veritas Volume Manager documentation for information about how to use the vxdctl enable command. Use this command to update new devices (volumes) in your Veritas Volume Manager list of devices.
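
    For example, you might run the following command on each node that runs Veritas Volume Manager:


    # vxdctl enable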


    Note –

    You might need to install the Veritas Array Support Library (ASL) package that corresponds to the array. For more information, see your Veritas Volume Manager documentation.


    If you are not running Veritas Volume Manager, proceed to Step 4.

  4. From any node in the cluster, update the global device namespace.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice populate
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scgdevs
      

    If a volume management daemon such as vold is running on your node, and you have a CD-ROM drive that is connected to the node, a device busy error might be returned even if no disk is in the drive. This error is expected behavior.

See Also

To create a new resource or reconfigure a running resource to use the new logical volume, see your Sun Cluster data services collection.

How to Remove a Logical Volume

Use this procedure to remove a logical volume. This procedure assumes that all nodes are booted and are connected to the storage array that hosts the logical volume that you are removing.

This procedure defines Node A as the node with which you begin working. Node B is the remaining node.

If you need to remove a logical volume from a cluster with more than two nodes, repeat Step 9 through Step 12 for each additional node that connects to the logical volume.


Caution –

During this procedure, you lose access to the data that resides on the logical volume that you are removing.


This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

Before You Begin

To perform this procedure, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  1. If necessary, back up all data. Migrate all resource groups and disk device groups to another node.

  2. If the logical volume that you plan to remove is configured as a quorum device, choose and configure another device to be the new quorum device. Then remove the old quorum device.

    To determine whether this logical volume is configured as a quorum device, use one of the following commands.

    • If you are using Sun Cluster 3.2, use the following command:


      # clquorum show
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat -q
      

    For procedures about how to add and remove quorum devices, see your Sun Cluster system administration documentation.
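
    For example, on Sun Cluster 3.2 you might add a different shared device as the new quorum device and then remove the old one. The DID instances d4 and d5 are hypothetical; substitute the devices in your configuration.


    # clquorum add d4
    # clquorum remove d5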

  3. Run the appropriate Solaris Volume Manager commands or Veritas Volume Manager commands to remove the reference to the logical volume from any diskset or disk group.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
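
    As a sketch only, the following commands remove a hypothetical Solaris Volume Manager metadevice d10 from the disk set newset and then remove the underlying DID drive from the set, or remove a hypothetical disk from a Veritas Volume Manager disk group. Substitute the names that apply to your configuration.


    # metaclear -s newset d10
    # metaset -s newset -d /dev/did/rdsk/d5
    # vxdg -g newdg rmdisk disk01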

  4. If the cluster is running Veritas Volume Manager, update the list of devices on all nodes that are attached to the logical volume that you are removing.

    See your Veritas Volume Manager documentation for information about how to use the vxdisk rm command to remove devices (volumes) in your Veritas Volume Manager device list.
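
    For example, assuming the device is known to Veritas Volume Manager by the hypothetical disk access name c3t0d0s2, you might run the following command on each attached node:


    # vxdisk rm c3t0d0s2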

  5. Remove the logical volume.

    Contact your service provider to remove the logical volume.

  6. Determine the resource groups and device groups that are running on Node A and Node B.

    Record this information because you will use it in Step 11 and Step 12 of this procedure to return resource groups and device groups to these nodes.

    • If you are using Sun Cluster 3.2, use the following commands:


      # clresourcegroup status -n NodeA[ NodeB ...] 
      # cldevicegroup status -n NodeA[ NodeB ...]
      
      -n NodeA[ NodeB…]

      The node or nodes for which you are determining resource groups and device groups.

    • If you are using Sun Cluster 3.1, use the following command:


      # scstat
      
  7. Shut down and reboot Node A by using the shutdown command with the -i6 option.

    For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation.
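
    For example:


    # shutdown -g0 -y -i6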

  8. On Node A, update the /devices and /dev entries.

    • If you are using Sun Cluster 3.2, use the following commands:


      # devfsadm -C 
      # cldevice clear
      
    • If you are using Sun Cluster 3.1, use the following commands:


      # devfsadm -C
      # scdidadm -C
      
  9. Shut down and reboot Node B by using the shutdown command with the -i6 option.

    For the procedure about how to shut down and power off a node, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  10. On Node B, update the /devices and /dev entries.

    • If you are using Sun Cluster 3.2, use the following commands:


      # devfsadm -C 
      # cldevice clear
      
    • If you are using Sun Cluster 3.1, use the following commands:


      # devfsadm -C
      # scdidadm -C
      
  11. (Optional) Restore the device groups to the original node.

    Perform the following step for each device group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 ...]
      
      -n nodename

      The node to which you are restoring device groups.

      devicegroup1[ devicegroup2 …]

      The device group or groups that you are restoring to the node.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -D devicegroup -h nodename
      
  12. (Optional) Restore the resource groups to the original node.

    Perform the following step for each resource group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
      
      nodename

      For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are restored.

      resourcegroup1[ resourcegroup2 …]

      The resource group or groups that you are restoring to the node or nodes.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -g resourcegroup -h nodename
      
  13. Repeat Step 9 through Step 12 for each additional node that connects to the logical volume.

See Also

To create a logical volume, see How to Add a Logical Volume.