Oracle Solaris Cluster 3.3 With Sun StorEdge 3510 or 3511 FC RAID Array Manual


Maintaining Storage Arrays

This section contains the procedures for maintaining a storage array in an Oracle Solaris Cluster environment. The maintenance tasks listed in Table 1-3 are cluster-specific. Tasks that are not cluster-specific are listed after the table.


Note - When you upgrade firmware on a storage device or on an enclosure, redefine the stripe size of a LUN, or perform other LUN operations, a device ID might change unexpectedly. When you perform a check of the device ID configuration by running the cldevice check command, the following error message appears on your console if the device ID changed unexpectedly.

device id for nodename:/dev/rdsk/cXtYdZsN does not match physical 
device's id for ddecimalnumber, device may have been replaced.

To fix device IDs that report this error, run the cldevice repair command for each affected device.
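
For example, a check and repair might look like the following; the device name is illustrative, and you would repeat cldevice repair for each device that reports the mismatch.

# cldevice check
# cldevice repair /dev/rdsk/c1t3d0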


Table 1-3 Task Map: Maintaining a Storage Array

Task | Information
Remove a storage array from a running cluster. | How to Remove a Storage Array From a Running Cluster
Upgrade array firmware. | How to Upgrade Storage Array Firmware
Replace a disk drive in a storage array. | How to Replace a Disk Drive
Replace a host adapter. | How to Replace a Host Adapter
Replace a node-to-switch fiber-optic cable. | Replacing a Node-to-Switch Component
Replace a gigabit interface converter (GBIC) or Small Form-Factor Pluggable (SFP) on a node's host adapter. | Replacing a Node-to-Switch Component
Replace a GBIC or an SFP on an FC switch, connecting to a node. | Replacing a Node-to-Switch Component
Replace a storage array-to-switch fiber-optic cable. | StorEdge 3510 and 3511 FC RAID Array FRUs
Replace a GBIC or an SFP on an FC switch, connecting to a storage array. | StorEdge 3510 and 3511 FC RAID Array FRUs
Replace an FC switch. | StorEdge 3510 and 3511 FC RAID Array FRUs
Replace the power cord of an FC switch. | StorEdge 3510 and 3511 FC RAID Array FRUs
Replace the controller. | StorEdge 3510 and 3511 FC RAID Array FRUs
Replace the chassis. | How to Replace a Chassis in a Running Cluster
Add a node to the storage array. | Oracle Solaris Cluster system administration documentation
Remove a node from the storage array. | Oracle Solaris Cluster system administration documentation

StorEdge 3510 and 3511 FC RAID Array FRUs

The following is a list of administrative tasks that require no cluster-specific procedures. See the Sun StorEdge 3000 Family Installation, Operation, and Service Manual for the following procedures.

    Replace a storage array-to-switch fiber-optic cable.

    Replace a GBIC or an SFP on an FC switch, connecting to a storage array.

    Replace an FC switch.

    Replace the power cord of an FC switch.

    Replace a controller.

How to Remove a Storage Array From a Running Cluster

Use this procedure to permanently remove storage arrays and their submirrors from a running cluster.

If you need to remove a storage array from more than two nodes, repeat Step 6 to Step 13 for each additional node that connects to the storage array.



Caution - During this procedure, you lose access to the data that resides on each storage array that you are removing.


Before You Begin

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  1. If the storage array you are removing contains any quorum devices, choose another disk drive to configure as the quorum device. Then remove the old quorum device.

    To determine whether the LUN is configured as a quorum device, use the following command.

    # clquorum show 

    For procedures on adding and removing quorum devices, see Chapter 6, Administering Quorum, in Oracle Solaris Cluster System Administration Guide.

  2. If necessary, back up all database tables, data services, and drives associated with each storage array that you are removing.
  3. Determine the resource groups and device groups that are running on all nodes.

    Record this information because you will use it in Step 17 and Step 18 of this procedure to return resource groups and device groups to these nodes.

    # clresourcegroup status + 
    # cldevicegroup status + 
  4. If necessary, run the appropriate Veritas Volume Manager commands to detach the submirrors from each storage array that you are removing to stop all I/O activity to the storage array.

    For more information, see your Veritas Volume Manager documentation.

  5. Run the appropriate volume manager commands to remove references to each LUN that belongs to the storage array that you are removing.

    For more information, see your Veritas Volume Manager documentation.

  6. Shut down the node.

    For the full procedure on shutting down and powering off a node, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.

  7. If necessary, disconnect the storage arrays from the nodes or the FC switches.
  8. If the storage array that you are removing is not the last storage array connected to the node, skip to Step 10.
  9. If the storage array that you are removing is the last storage array connected to the node, disconnect the fiber-optic cable between the node and the FC switch that was connected to this storage array.
  10. If you do not want to remove the host adapters from the node, skip to Step 13.
  11. If you want to remove the host adapters from the node, power off the node.
  12. Remove the host adapters from the node.

    For the procedure on removing host adapters, see the documentation that shipped with your host adapter and nodes.

  13. Boot the node into cluster mode.

    For more information on booting nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.

  14. Repeat Step 6 through Step 13 on each additional node that you need to disconnect from the storage array.
  15. On all cluster nodes, remove the paths to the devices that you are deleting.
    # devfsadm -C
  16. On all cluster nodes, remove all obsolete device IDs.
    # cldevice clear
  17. (Optional) Restore the device groups to the original node.

    Perform the following step for each device group you want to return to the original node.

    # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 …]
    -n nodename

    The node to which you are restoring device groups.

    devicegroup1[ devicegroup2 …]

    The device group or groups that you are restoring to the node.

  18. (Optional) Restore the resource groups to the original node.

    Perform the following step for each resource group you want to return to the original node.

    # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
    nodename

    For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

    resourcegroup1[ resourcegroup2 …]

    The resource group or groups that you are returning to the node or nodes.
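
For example, replacing a quorum device (Step 1), cleaning up the device configuration (Step 15 and Step 16), and returning groups to a node (Step 17 and Step 18) might look like the following sequence. The DID device, node, device group, and resource group names are hypothetical; run devfsadm -C and cldevice clear on every cluster node.

    # clquorum show
    # clquorum add d20
    # clquorum remove d4
    # devfsadm -C
    # cldevice clear
    # cldevicegroup switch -n phys-schost-1 dg-schost-1
    # clresourcegroup switch -n phys-schost-1 rg-schost-1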

See Also

To prepare the storage array for later use, unmap and delete all LUNs and logical drives. See How to Unmap and Remove a LUN for information about LUN removal. For more information about removing logical drives, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.

How to Upgrade Storage Array Firmware

Use this procedure to upgrade storage array firmware in a running cluster.


Note - When you upgrade firmware on a storage device or on an enclosure, redefine the stripe size of a LUN, or perform other LUN operations, a device ID might change unexpectedly. When you perform a check of the device ID configuration by running the cldevice check command, the following error message appears on your console if the device ID changed unexpectedly.

device id for nodename:/dev/rdsk/cXtYdZsN does not match physical 
device's id for ddecimalnumber, device may have been replaced.

To fix device IDs that report this error, run the cldevice repair command for each affected device.


  1. Stop all I/O to the storage arrays you are upgrading.
  2. Download the firmware to the storage arrays.

    Refer to the Sun StorEdge 3000 Family RAID Firmware 3.25 and 3.27 User's Guide and to any patch readme files for more information.

  3. Confirm that all storage arrays that you upgraded are visible to all nodes.
    # luxadm probe
  4. Restart all I/O to the storage arrays.

    You stopped I/O to these storage arrays in Step 1.
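
After the upgrade, you can also confirm from each cluster node that the arrays remain visible and that the device ID configuration is still consistent, for example:

    # luxadm probe
    # cldevice check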

How to Replace a Disk Drive

Use this procedure to replace a failed disk drive in a storage array in a running cluster.

Before You Begin

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read RBAC authorization.

  1. If the failed disk drive does not affect the storage array LUN's availability, skip to Step 4.
  2. If the failed disk drive affects the storage array LUN's availability, use volume manager commands to detach the submirror or plex.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

  3. If the LUN (in Step 1) is configured as a quorum device, choose and configure another device to be the new quorum device. Remove the old quorum device.

    To determine whether the LUN is configured as a quorum device, use the following command.

    # clquorum show 

    For procedures about how to add and remove quorum devices, see your Oracle Solaris Cluster system administration documentation.

  4. Replace the failed disk drive.

    For instructions, refer to the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.

  5. (Optional) If you reconfigured a quorum device in Step 3, restore the original quorum configuration.

    For the procedure about how to add a quorum device, see your Oracle Solaris Cluster system administration documentation.

  6. If you detached a submirror or plex in Step 2, use volume manager commands to reattach the submirror or plex.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
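
For example, if the LUN is mirrored with Solaris Volume Manager, you might detach the affected submirror before the replacement and reattach it afterward. The mirror d10 and submirror d11 are hypothetical names.

    # metadetach d10 d11
    # metattach d10 d11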

How to Replace a Host Adapter

Use this procedure to replace a failed host adapter in a running cluster. This procedure defines Node A as the node with the failed host adapter that you are replacing.

Before You Begin

This procedure relies on the following prerequisites and assumptions.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.

  1. Become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.
  2. Determine the resource groups and device groups that are running on Node A.

    Record this information because you will use it in Step 10 and Step 11 of this procedure to return resource groups and device groups to Node A.

    # clresourcegroup status -n NodeA 
    # cldevicegroup status -n NodeA
    -n NodeA

    The node for which you are determining resource groups and device groups.

  3. Move all resource groups and device groups off Node A.
    # clnode evacuate nodename
  4. Shut down Node A.

    For the full procedure about how to shut down and power off a node, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.

  5. Power off Node A.
  6. Replace the failed host adapter.

    To remove and add host adapters, see the documentation that shipped with your nodes.

  7. If you need to upgrade the node's host adapter firmware, boot Node A into noncluster mode by adding -x to your boot instruction. Proceed to Step 8.

    If you do not need to upgrade firmware, skip to Step 9.

  8. Upgrade the host adapter firmware on Node A.

    The Oracle Enterprise Manager Ops Center 2.5 software helps you patch and monitor your data center assets. Oracle Enterprise Manager Ops Center 2.5 helps improve operational efficiency and ensures that you have the latest software patches for your software. Contact your Oracle representative to purchase Oracle Enterprise Manager Ops Center 2.5.

    Additional information for using the Oracle patch management tools is provided in Oracle Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Oracle Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Oracle Solaris Cluster System Administration Guide to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For required firmware, see the Sun System Handbook.

  9. Boot Node A into cluster mode.

    For more information about how to boot nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.

  10. (Optional) Restore the device groups to the original node.

    Do the following for each device group that you want to return to the original node.

    # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 ...]
    -n nodename

    The node to which you are restoring device groups.

    devicegroup1[ devicegroup2 …]

    The device group or groups that you are restoring to the node.

  11. (Optional) Restore the resource groups to the original node.

    Do the following for each resource group that you want to return to the original node.

    # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
    nodename

    For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

    resourcegroup1[ resourcegroup2 …]

    The resource group or groups that you are returning to the node or nodes.
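
For example, the evacuation in Step 3 and the optional restore in Step 10 and Step 11 might look like the following; the node and group names are hypothetical, and the shutdown, hardware replacement, and reboot of Node A take place between the first command and the last two. (On a SPARC based node, the noncluster-mode boot in Step 7 is typically boot -x at the OpenBoot prompt.)

    # clnode evacuate phys-schost-1
    # cldevicegroup switch -n phys-schost-1 dg-schost-1
    # clresourcegroup switch -n phys-schost-1 rg-schost-1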

Replacing a Node-to-Switch Component

Use this procedure to replace a node-to-switch component that has failed or that you suspect might be contributing to a problem.


Note - Node-to-switch components that are covered by this procedure include the following components:

    Node-to-switch fiber-optic cables

    GBICs or SFPs on a node's host adapter

    GBICs or SFPs on an FC switch, connecting to a node

To replace a host adapter, see How to Replace a Host Adapter.


This procedure defines Node A as the node that is connected to the node-to-switch component that you are replacing. This procedure assumes that, except for the component you are replacing, your cluster is operational.

Ensure that you are following the appropriate instructions:

    If your cluster uses multipathing, see How to Replace a Node-to-Switch Component in a Cluster That Uses Multipathing.

    If your cluster does not use multipathing, see How to Replace a Node-to-Switch Component in a Cluster Without Multipathing.

How to Replace a Node-to-Switch Component in a Cluster That Uses Multipathing

  1. If your configuration is active-passive, and if the active path is the path that needs a component replaced, make that path passive.
  2. Replace the component.

    Refer to your hardware documentation for any component-specific instructions.

  3. (Optional) If your configuration is active-passive and you changed your configuration in Step 1, switch your original data path back to active.
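
If your cluster uses Solaris I/O multipathing (MPxIO), one way to confirm the path states before and after the replacement is with mpathadm; this is an illustrative check, not a required part of the procedure.

    # mpathadm list lu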

How to Replace a Node-to-Switch Component in a Cluster Without Multipathing

Before You Begin

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.

  1. Become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.
  2. If the physical data path has failed, do the following:
    1. Replace the component.
    2. Fix the volume manager error that was caused by the failed data path, as shown in the example after this procedure.
    3. (Optional) If necessary, return resource groups and device groups to this node.

    You have completed this procedure.

  3. If the physical data path has not failed, determine the resource groups and device groups that are running on Node A.
    # clresourcegroup status -n NodeA
    # cldevicegroup status -n NodeA
    -n NodeA

    The node for which you are determining resource groups and device groups.

  4. Move all resource groups and device groups to another node.
    # clnode evacuate nodename
  5. Replace the node-to-switch component.

    Refer to your hardware documentation for any component-specific instructions.

  6. (Optional) Restore the device groups to the original node.

    Do the following for each device group that you want to return to the original node.

    # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 ...]
    -n nodename

    The node to which you are restoring device groups.

    devicegroup1[ devicegroup2 …]

    The device group or groups that you are restoring to the node.

  7. (Optional) Restore the resource groups to the original node.

    Do the following for each resource group that you want to return to the original node.

    # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
    nodename

    For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

    resourcegroup1[ resourcegroup2 …]

    The resource group or groups that you are returning to the node or nodes.
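
For example, if you use Solaris Volume Manager, fixing the volume manager error from Step 2 might amount to re-enabling the errored components once the new hardware is in place; the metadevice and component names below are hypothetical.

    # metastat
    # metareplace -e d10 c2t0d0s0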

How to Replace a Chassis in a Running Cluster

Use this procedure to replace a storage array chassis in a running cluster. This procedure assumes that you want to retain all FRUs other than the chassis and the backplane.

  1. To stop all I/O activity to this storage array, detach the submirrors that are connected to the chassis you are replacing.

    For more information, see your Veritas Volume Manager documentation.

  2. If this storage array is not made redundant by host-based mirroring, shut down the cluster.

    For the full procedure on shutting down a cluster, see the Oracle Solaris Cluster system administration documentation.

  3. Replace the chassis and backplane.

    For the procedure on replacing a chassis, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.

  4. If you shut down the cluster in Step 2, boot it back into cluster mode.

    For the full procedure on booting a cluster, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.

  5. Reattach the submirrors that you detached in Step 1 to resynchronize them.


    Caution - The world wide names (WWNs) might change as a result of this procedure. If the WWNs change, you must reconfigure your volume manager software to recognize the new WWNs.


    For more information, see your Veritas Volume Manager documentation.
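
For example, if you mirror with Veritas Volume Manager, you can watch the resynchronization that follows the reattach and then confirm the plex states; the disk group name is hypothetical.

    # vxtask list
    # vxprint -g dg-schost-1 -ht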