Oracle Solaris Cluster 3.3 With Sun StorEdge 6120 Array Manual

Maintaining Storage Arrays

This section contains the procedures for maintaining a storage array in a running cluster. Table 1-3 lists these procedures.


Note - When you upgrade firmware on a storage device or on an enclosure, redefine the stripe size of a LUN, or perform other LUN operations, a device ID might change unexpectedly. When you perform a check of the device ID configuration by running the cldevice check command, the following error message appears on your console if the device ID changed unexpectedly.

device id for nodename:/dev/rdsk/cXtYdZsN does not match physical 
device's id for ddecimalnumber, device may have been replaced.

To fix device IDs that report this error, run the cldevice repair command for each affected device.
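For example, a repair session might look like the following sketch, in which the node name phys-node-1, the device c1t3d0, and the DID instance d4 are hypothetical:

# cldevice check
device id for phys-node-1:/dev/rdsk/c1t3d0s2 does not match physical 
device's id for d4, device may have been replaced.
# cldevice repair c1t3d0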


Table 1-3 Task Map: Maintaining a Storage Array

  • Upgrade storage array firmware. See How to Upgrade Storage Array Firmware.

  • Remove a storage array or partner group. See How to Remove a Single-Controller Configuration or How to Remove a Dual-Controller Configuration.

  • Replace a node-to-switch component: a node-to-switch fiber-optic cable, an FC host adapter, an FC switch, or a GBIC or SFP. See Replacing a Node-to-Switch Component.

  • Replace a node's host adapter. See How to Replace a Host Adapter.

  • Add a node to the storage array. See the Oracle Solaris Cluster system administration documentation.

  • Remove a node from the storage device. See the Oracle Solaris Cluster system administration documentation.

StorEdge 6120 Array FRUs

Some administrative tasks, such as replacing storage array FRUs, require no cluster-specific procedures. See the Sun StorEdge 6020 and 6120 Array System Manual for those procedures.

How to Upgrade Storage Array Firmware

Use this procedure to upgrade storage array firmware in a running cluster. Storage array firmware includes controller firmware, unit interconnect card (UIC) firmware, EPROM firmware, and disk drive firmware.


Note - When you upgrade firmware on a storage device or on an enclosure, redefine the stripe size of a LUN, or perform other LUN operations, a device ID might change unexpectedly. When you perform a check of the device ID configuration by running the cldevice check command, the following error message appears on your console if the device ID changed unexpectedly.

device id for nodename:/dev/rdsk/cXtYdZsN does not match physical 
device's id for ddecimalnumber, device may have been replaced.

To fix device IDs that report this error, run the cldevice repair command for each affected device.


  1. Stop all I/O to the storage arrays you are upgrading.
  2. Apply the controller, disk drive, and loop-card firmware patches by using the arrays' GUI tools.

    For the list of required patches, see the Sun StorEdge 6120 Array Release Notes. For the procedure about how to apply firmware patches, see the firmware patch README file. For the procedure about how to verify the firmware level, see the Sun StorEdge 6020 and 6120 Array System Manual.

    For specific instructions, see your storage array's documentation.

  3. Confirm that all storage arrays that you upgraded are visible to all nodes.
    # luxadm probe
  4. Restart all I/O to the storage arrays.

    You stopped I/O to these storage arrays in Step 1.
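    In Step 3, each storage array that is visible to the node appears in the luxadm probe output with its node WWN and a logical path, similar to the following sketch (the WWN and path are hypothetical):

    # luxadm probe
    Found Fibre Channel device(s):
      Node WWN:50020f2300003c94  Device Type:Disk device
        Logical Path:/dev/rdsk/c4t50020F2300003C94d0s2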

How to Remove a Single-Controller Configuration

Use this procedure to permanently remove a storage array in a single-controller configuration from a running cluster. This procedure also gives you the option of removing the host adapters from the nodes that are connected to the storage array that you are removing.

This procedure defines Node A as the node with which you begin working. Node B is another node in the cluster.


Caution

Caution - During this procedure, you lose access to the data that resides on the storage array that you are removing.


Before You Begin

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify role-based access control (RBAC) authorization.
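For example, clresourcegroup has the short form clrg and cldevicegroup has the short form cldg, so each pair of the following commands is equivalent:

# clresourcegroup status -n NodeA
# clrg status -n NodeA
# cldevicegroup status -n NodeA
# cldg status -n NodeA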

  1. Back up all database tables, data services, and volumes that are associated with the storage array that you are removing.
  2. Detach the submirrors from the storage array that you are removing, to stop all I/O activity to the storage array.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

  3. Remove the references to the LUN(s) from any diskset or disk group.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

  4. Determine the resource groups and device groups that are running on Node A.

    You will need this information to restore resource groups and device groups to the original node in Step 18 and Step 19 of this procedure.

    Use the following command:

    # clresourcegroup status -n NodeA
    # cldevicegroup status -n NodeA
  5. Shut down Node A.

    For the procedure about how to shut down and power off a node, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.

  6. If the storage array that you are removing is the last storage array that is connected to Node A, disconnect the fiber-optic cable between Node A and the FC switch that is connected to this storage array. Then disconnect the fiber-optic cable between the FC switch and this storage array.

    For the procedure about how to remove a fiber-optic cable, see the Sun StorEdge 6020 and 6120 Array System Manual.

  7. If you want to remove the host adapter from Node A, power off Node A.

    If you do not want to remove the host adapter, skip to Step 10.

  8. Remove the host adapter from Node A.

    For the procedure about how to remove host adapters, see the documentation that shipped with your nodes.

  9. Without enabling the node to boot, power on Node A.

    For more information, see your Oracle Solaris Cluster system administration documentation.

  10. Boot Node A into cluster mode.

    For the procedure about how to boot nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.

  11. Shut down Node B.

    For the procedure about how to shut down and power off a node, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.

  12. If the storage array that you are removing is the last storage array that is connected to the FC switch, disconnect the fiber-optic cable that connects this FC switch and Node B.

    For the procedure about how to remove a fiber-optic cable, see the Sun StorEdge 6020 and 6120 Array System Manual.

  13. If you want to remove the host adapter from Node B, power off Node B.

    If you do not want to remove the host adapter, skip to Step 16.

  14. Remove the host adapter from Node B.

    For the procedure about how to remove host adapters, see the documentation that shipped with your nodes.

  15. Without enabling the node to boot, power on Node B.

    For more information, see your Oracle Solaris Cluster system administration documentation.

  16. Boot Node B into cluster mode.

    For the procedure about how to boot nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.

  17. On all nodes, update the /devices and /dev entries.
    # devfsadm -C
    # cldevice clear
  18. (Optional) Restore the device groups to the original node.

    Perform the following step for each device group you want to return to the original node.

    # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 …]
    -n nodename

    The node to which you are restoring device groups.

    devicegroup1[ devicegroup2 …]

    The device group or groups that you are restoring to the node.

  19. (Optional) Restore the resource groups to the original node.

    Perform the following step for each resource group you want to return to the original node.

    # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
    nodename

    For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

    resourcegroup1[ resourcegroup2 …]

    The resource group or groups that you are returning to the node or nodes.
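    For example, to return one device group (Step 18) and one resource group (Step 19) to Node A, you might run the following commands, in which the group names dg-schost-1 and rg-schost-1 are hypothetical:

    # cldevicegroup switch -n NodeA dg-schost-1
    # clresourcegroup switch -n NodeA rg-schost-1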

How to Remove a Dual-Controller Configuration

Use this procedure to remove a dual-controller configuration from a running cluster. This procedure defines Node A as the node with which you begin working. Node B is another node in the cluster.


Caution

Caution - During this procedure, you lose access to the data that resides on each partner group that you are removing.


Before You Begin

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify role-based access control (RBAC) authorization.

  1. If necessary, back up all database tables, data services, and volumes that are associated with each partner group that you are removing.
  2. If necessary, detach the submirrors from each storage array or partner group that you are removing, to stop all I/O activity to the storage array or partner group.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

  3. Remove references to each LUN that belongs to the storage array or partner group that you are removing.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

  4. Determine the resource groups and device groups that are running on Node A.

    You will need this information to restore resource groups and device groups to the original node in Step 19 and Step 20 of this procedure.

    Use the following command:

    # clresourcegroup status -n NodeA 
    # cldevicegroup status -n NodeA
  5. Shut down Node A.

    For the procedure about how to shut down and power off a node, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.

  6. Disconnect the fiber-optic cables that connect both storage arrays to the FC switches. Then disconnect the Ethernet cables.
  7. If any storage array that you are removing is the last storage array that is connected to an FC switch that is attached to Node A, disconnect the fiber-optic cable between Node A and that FC switch.

    If no storage array that you are removing is the last array connected to any switch, skip to Step 11.

    For the procedure about how to remove a fiber-optic cable, see the Sun StorEdge 6020 and 6120 Array System Manual.

  8. If you want to remove the host adapters from Node A, power off the node.

    If you do not want to remove host adapters from the node, skip to Step 11.

  9. Remove the host adapters from Node A.

    For the procedure about how to remove host adapters, see the documentation that shipped with your host adapter and nodes.

  10. Without enabling the node to boot, power on Node A.

    For more information, see your Oracle Solaris Cluster system administration documentation.

  11. Boot Node A into cluster mode.

    For more information about how to boot nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.

  12. Shut down Node B.

    For the procedure about how to shut down and power off a node, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.

  13. If any storage array that you are removing is the last storage array that is connected to an FC switch that is attached to Node B, disconnect the fiber-optic cable that connects this FC switch to Node B.

    For the procedure about how to remove a fiber-optic cable, see the Sun StorEdge 6020 and 6120 Array System Manual.

  14. If you want to remove the host adapters from Node B, power off the node.

    If you do not want to remove host adapters, skip to Step 17.

  15. Remove the host adapters from Node B.

    For the procedure about how to remove host adapters, see the documentation that shipped with your nodes.

  16. Without enabling the node to boot, power on Node B.

    For more information, see your Oracle Solaris Cluster system administration documentation.

  17. Boot Node B into cluster mode.

    For more information about how to boot nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.

  18. On all nodes, update the /devices and /dev entries.
    # devfsadm -C
    # cldevice clear
  19. (Optional) Restore the device groups to the original node.

    Perform the following step for each device group you want to return to the original node.

    # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 …]
    -n nodename

    The node to which you are restoring device groups.

    devicegroup1[ devicegroup2 …]

    The device group or groups that you are restoring to the node.

  20. (Optional) Restore the resource groups to the original node.

    Perform the following step for each resource group you want to return to the original node.

    # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
    nodename

    For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

    resourcegroup1[ resourcegroup2 …]

    The resource group or groups that you are returning to the node or nodes.

Replacing a Node-to-Switch Component

Use this procedure to replace a node-to-switch component that has failed or that you suspect might be contributing to a problem.


Note - Node-to-switch components that are covered by this procedure include the following components:

  • Node-to-switch fiber-optic cables

  • FC switches

  • GBICs or SFPs

To replace a host adapter, see How to Replace a Host Adapter.


This procedure defines Node A as the node that is connected to the node-to-switch component that you are replacing. This procedure assumes that, except for the component you are replacing, your cluster is operational.

Ensure that you are following the appropriate instructions: if your cluster uses multipathing, follow How to Replace a Node-to-Switch Component in a Cluster That Uses Multipathing; if it does not, follow How to Replace a Node-to-Switch Component in a Cluster Without Multipathing.

How to Replace a Node-to-Switch Component in a Cluster That Uses Multipathing

  1. If your configuration is active-passive, and if the active path is the path that needs a component replaced, make that path passive.
  2. Replace the component.

    Refer to your hardware documentation for any component-specific instructions.

  3. (Optional) If your configuration is active-passive and you changed your configuration in Step 1, switch your original data path back to active.

How to Replace a Node-to-Switch Component in a Cluster Without Multipathing

Before You Begin

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.

  1. Become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.
  2. If the physical data path has failed, do the following:
    1. Replace the component.
    2. Fix the volume manager error that was caused by the failed data path. For a Solaris Volume Manager example, see the sketch after this list.
    3. (Optional) If necessary, return resource groups and device groups to this node.

    You have completed this procedure.
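    If you use Solaris Volume Manager, fixing the volume manager error from the preceding list might mean re-enabling the errored components of the affected mirror, as in the following sketch, in which the mirror d10 and the component c1t3d0s0 are hypothetical:

    # metareplace -e d10 c1t3d0s0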

  3. If the physical data path has not failed, determine the resource groups and device groups that are running on Node A.
    # clresourcegroup status -n NodeA
    # cldevicegroup status -n NodeA
    -n NodeA

    The node for which you are determining resource groups and device groups.

  4. Move all resource groups and device groups to another node.
    # clnode evacuate nodename
  5. Replace the node-to-switch component.

    Refer to your hardware documentation for any component-specific instructions.

  6. (Optional) Restore the device groups to the original node.

    Do the following for each device group that you want to return to the original node.

    # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 …]
    -n nodename

    The node to which you are restoring device groups.

    devicegroup1[ devicegroup2 …]

    The device group or groups that you are restoring to the node.

  7. (Optional) Restore the resource groups to the original node.

    Do the following for each resource group that you want to return to the original node.

    # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
    nodename

    For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

    resourcegroup1[ resourcegroup2 …]

    The resource group or groups that you are returning to the node or nodes.

How to Replace a Host Adapter

Use this procedure to replace a failed host adapter in a running cluster. This procedure defines Node A as the node with the failed host adapter that you are replacing.

Before You Begin

This procedure relies on the following prerequisites and assumptions.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.

  1. Become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.
  2. Determine the resource groups and device groups that are running on Node A.

    Record this information; you will use it in Step 10 and Step 11 of this procedure to return resource groups and device groups to Node A.

    # clresourcegroup status -n NodeA 
    # cldevicegroup status -n NodeA
    -n NodeA

    The node for which you are determining resource groups and device groups.

  3. Move all resource groups and device groups off Node A.
    # clnode evacuate nodename
  4. Shut down Node A.

    For the full procedure about how to shut down and power off a node, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.

  5. Power off Node A.
  6. Replace the failed host adapter.

    To remove and add host adapters, see the documentation that shipped with your nodes.

  7. If you need to upgrade the node's host adapter firmware, boot Node A into noncluster mode by adding -x to your boot instruction. Proceed to Step 8.

    If you do not need to upgrade firmware, skip to Step 9.
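    For example, on a SPARC based node you can boot into noncluster mode from the OpenBoot PROM; on an x86 based node, you instead add -x to the kernel boot command in the GRUB menu:

    ok boot -x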

  8. Upgrade the host adapter firmware on Node A.

    The Oracle Enterprise Manager Ops Center 2.5 software helps you patch and monitor your data center assets. Oracle Enterprise Manager Ops Center 2.5 helps improve operational efficiency and ensures that you have the latest software patches. Contact your Oracle representative to purchase Oracle Enterprise Manager Ops Center 2.5.

    Additional information for using the Oracle patch management tools is provided in Oracle Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Oracle Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Oracle Solaris Cluster System Administration Guide to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For required firmware, see the Sun System Handbook.
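    For example, if the host adapter firmware is delivered as a standard Solaris patch, you might apply it with the patchadd command while the node is in noncluster mode, where the patch ID 123456-78 is hypothetical:

    # patchadd /var/tmp/123456-78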

  9. Boot Node A into cluster mode.

    For more information about how to boot nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.

  10. (Optional) Restore the device groups to the original node.

    Do the following for each device group that you want to return to the original node.

    # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 …]
    -n nodename

    The node to which you are restoring device groups.

    devicegroup1[ devicegroup2 …]

    The device group or groups that you are restoring to the node.

  11. (Optional) Restore the resource groups to the original node.

    Do the following for each resource group that you want to return to the original node.

    # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
    nodename

    For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

    resourcegroup1[ resourcegroup2 …]

    The resource group or groups that you are returning to the node or nodes.