Oracle Solaris Cluster 3.3 With Sun StorEdge 9900 Series Storage Device Manual
Installing Storage Arrays

The initial installation of a storage array in a new cluster must be performed by your Oracle service provider.

How to Add a Storage Array to an Existing Cluster

Use this procedure to add a new storage array to a running cluster.

This procedure defines Node A as the node with which you begin working. Node B is another node in the cluster.

If you need to add a storage array to more than two nodes, repeat Step 18 through Step 31 for each additional node that connects to the storage array.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify role-based access control (RBAC) authorization.

  1. Power on the storage array.

    Note - The storage array requires approximately 10 minutes to boot.


    Contact your service provider to power on the storage array.

  2. If you plan to use multipathing software, verify that the storage array is configured for multipathing.

    Contact your service provider to verify that the storage array is configured for multipathing.

  3. Configure the new storage array.

    Contact your service provider to create the desired logical volumes.

  4. If you need to install a host adapter in Node A, and if this host adapter is the first on Node A, contact your service provider to install the support packages and configure the drivers before you proceed to Step 5.

    Note - If you use multipathing software, each node requires two paths to the same set of LUNs.


    If you do not need to install a host adapter, skip to Step 11.

  5. If your node is enabled with the Oracle Solaris dynamic reconfiguration (DR) feature, install the host adapter.

    For the procedure about how to install a host adapter, see the documentation that shipped with your host adapter or updated information on the manufacturer's web site.

    If your node is not enabled with DR, you must shut down this node to install the host adapter. Proceed to Step 6.

  6. Determine the resource groups and device groups that are running on Node A and Node B.

    Record this information because you use this information in Step 30 and Step 31 of this procedure to return resource groups and device groups to these nodes.

    Use the following command:

    # clresourcegroup status -n NodeA[ NodeB ...] 
    # cldevicegroup status -n NodeA[ NodeB ...]
    -n NodeA[ NodeB ...]

    The node or nodes for which you are determining resource groups and device groups.

    For more information, see your Oracle Solaris Cluster system administration documentation.

  7. Shut down and power off Node A.

    For the procedure about how to shut down and power off a node, see your Oracle Solaris Cluster system administration documentation.
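
    For example, assuming Node A is named phys-schost-1 (a hypothetical node name), one way to follow that general pattern is to evacuate the node and then shut it down. The clnode evacuate command switches all resource groups and device groups from Node A to other cluster nodes.

    # clnode evacuate phys-schost-1
    # shutdown -g0 -y -i0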

  8. Install the host adapter in Node A.

    For the procedure about how to install a host adapter, see the documentation that shipped with your host adapter or updated information on the manufacturer's web site.

  9. Power on and boot Node A into noncluster mode by adding -x to your boot instruction.

    For the procedure about how to boot a node in noncluster mode, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.
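
    For example, on a SPARC based system you might boot into noncluster mode from the OpenBoot PROM prompt:

    ok boot -x

    On an x86 based system, add the -x option to the kernel boot command in the GRUB menu instead.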

  10. If necessary, upgrade the host adapter firmware on Node A.

    The Oracle Enterprise Manager Ops Center 2.5 software helps you patch and monitor your data center assets, improves operational efficiency, and ensures that you have the latest software patches. Contact your Oracle representative to purchase Oracle Enterprise Manager Ops Center 2.5.

    Additional information for using the Oracle patch management tools is provided in Oracle Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Oracle Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Oracle Solaris Cluster System Administration Guide to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For required firmware, see the Sun System Handbook.
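
    If a firmware update is delivered as a standard Solaris patch, a minimal sketch of applying and verifying it on Node A might look like the following. The patch ID shown is hypothetical; use the ID that the Sun System Handbook lists for your host adapter.

    # patchadd /var/tmp/123456-78
    # showrev -p | grep 123456-78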

  11. Attach the storage array to Node A.

    Contact your service provider to install a fiber-optic cable between the storage array and your node.

  12. Configure the storage array.

    Contact your service provider to configure the storage array.

  13. If you plan to use Solaris I/O multipathing (MPxIO), which the Oracle Solaris 10 OS installs automatically, verify that the paths to the storage device are functioning. See How to Install Solaris Software in Oracle Solaris Cluster Software Installation Guide and the Solaris Fibre Channel Storage Configuration and Multipathing Support Guide.
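
    For example, one way to confirm that each LUN is reachable over the expected paths is to list the MPxIO device-name mappings and the multipathed logical units; both commands ship with the Solaris 10 OS.

    # stmsboot -L
    # mpathadm list lu
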
  14. To create the new Oracle Solaris device files and links on Node A, perform a reconfiguration boot.

    For the procedure about how to boot a cluster node, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.
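
    For example, on a SPARC based system you might perform the reconfiguration boot from the OpenBoot PROM prompt:

    ok boot -r

    Alternatively, you can create the /reconfigure file and then reboot the node:

    # touch /reconfigure
    # shutdown -g0 -y -i6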

  15. On Node A, use the appropriate multipathing software commands to verify that the same set of LUNs is visible to the expected controllers.
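
    For example, assuming Solaris I/O multipathing (MPxIO), you might compare the disks that the format utility reports with the LUNs that you configured on the storage array, or probe for Fibre Channel devices with luxadm:

    # format < /dev/null
    # luxadm probe
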
  16. On Node A, update the paths to the device ID instances.
     # cldevice populate
  17. (Optional) On Node A, verify that the device IDs are assigned to the new storage array.
     # cldevice list -n NodeA -v
  18. If you need to install a host adapter in Node B, and if this host adapter is the first in Node B, contact your service provider to install the support packages and configure the drivers before you proceed to Step 19.

    Note - If you use multipathing software, each node requires two paths to the same set of LUNs.


    For the procedure about how to install host adapters, see the documentation that shipped with your host adapters and nodes.

    If you do not need to install host adapters, skip to Step 24.

  19. If your node is enabled with the Oracle Solaris dynamic reconfiguration (DR) feature, install the host adapter.

    For the procedure about how to install a host adapter, see the documentation that shipped with your host adapter or updated information on the manufacturer's web site.

  20. If your node is not enabled with DR, shut down and power off Node B.

    For the procedure about how to shut down and power off a node, see your Oracle Solaris Cluster system administration documentation.

  21. Install the host adapter in Node B.

    For the procedure about how to install a host adapter, see the documentation that shipped with your host adapter or updated information on the manufacturer's web site.

  22. If necessary, upgrade the host adapter firmware on Node B.

    The Oracle Enterprise Manager Ops Center 2.5 software helps you patch and monitor your data center assets, improves operational efficiency, and ensures that you have the latest software patches. Contact your Oracle representative to purchase Oracle Enterprise Manager Ops Center 2.5.

    Additional information for using the Oracle patch management tools is provided in Oracle Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Oracle Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Oracle Solaris Cluster System Administration Guide to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For required firmware, see the Sun System Handbook.

  23. Power on and boot Node B into noncluster mode by adding -x to your boot instruction.

    For the procedure about how to boot a node in noncluster mode, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.

  24. Attach the storage array to Node B.

    Contact your service provider to install a fiber-optic cable between the storage array and your node.

  25. Verify that the multipathing paths to the storage device are functioning.

    The Oracle Solaris 10 OS automatically installs Solaris I/O multipathing (MPxIO), which was previously called Sun StorEdge Traffic Manager in the Solaris 9 OS. For more information, see the Solaris Fibre Channel Storage Configuration and Multipathing Support Guide.

  26. To create the new Oracle Solaris device files and links on Node B, perform a reconfiguration boot.

    For the procedure about how to boot a cluster node, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.

  27. On Node B, use the appropriate multipathing software commands to verify that the same set of LUNs is visible to the expected controllers.
  28. On Node B, update the paths to the device ID instances.
     # cldevice populate
  29. (Optional) On Node B, verify that the device IDs are assigned to the new LUNs.
     # cldevice show
  30. (Optional) Restore the device groups to the original node.

    Perform the following step for each device group you want to return to the original node.

    # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 ...]
    -n nodename

    The node to which you are restoring device groups.

    devicegroup1[ devicegroup2 …]

    The device group or groups that you are restoring to the node.

  31. (Optional) Restore the resource groups to the original node.

    Perform the following step for each resource group you want to return to the original node.

    # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
    -n nodename

    For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

    resourcegroup1[ resourcegroup2 …]

    The resource group or groups that you are returning to the node or nodes.

  32. Repeat Step 18 through Step 31 for each additional node that connects to the storage array.
  33. Perform volume management administration to incorporate the new logical volumes into the cluster.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
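
    For example, a minimal Solaris Volume Manager sketch might create a disk set that contains the new devices and build a volume on them. The set name, node names, and DID device are hypothetical; substitute the values from your configuration.

    # metaset -s newset -a -h phys-schost-1 phys-schost-2
    # metaset -s newset -a /dev/did/rdsk/d10
    # metainit -s newset d100 1 1 /dev/did/rdsk/d10s0

    In a cluster, the disk set is registered automatically as a device group when you create it.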