Oracle Solaris Cluster 3.3 With Sun StorEdge T3 or T3+ Array Manual SPARC Platform Edition
Upgrading Sun StorEdge T3 or T3+ Storage Arrays

This section contains the procedures for upgrading storage arrays. The following table lists these procedures.


Note - When you upgrade firmware on a storage device or on an enclosure, redefine the stripe size of a LUN, or perform other LUN operations, a device ID might change unexpectedly. When you perform a check of the device ID configuration by running the cldevice check command, the following error message appears on your console if the device ID changed unexpectedly.

device id for nodename:/dev/rdsk/cXtYdZsN does not match physical 
device's id for ddecimalnumber, device may have been replaced.

To fix device IDs that report this error, run the cldevice repair command for each affected device.
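For example, you might check the device ID configuration and then repair one affected device as follows. The device name is hypothetical.

# cldevice check
# cldevice repair /dev/rdsk/c1t3d0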


Table 2-2 Task Map: Upgrading a Storage Array

Task: Upgrade storage array firmware.
Information: How to Upgrade Storage Array Firmware (No Submirrors); How to Upgrade Storage Array Firmware When Using Mirroring

Task: Upgrade a StorEdge T3 array controller to a StorEdge T3+ array controller.
Information: How to Upgrade a StorEdge T3 Controller to a StorEdge T3+ Controller

Task: Migrate to a partner group.
Information: How to Migrate From a Single-Controller Configuration to a Partner-Group Configuration

How to Upgrade Storage Array Firmware (No Submirrors)

Use this procedure to upgrade storage array firmware in a running cluster when your arrays are not configured to support submirrors. To upgrade firmware when you are using submirrors, see How to Upgrade Storage Array Firmware When Using Mirroring. Firmware includes controller firmware, unit interconnect card (UIC) firmware, and disk drive firmware.


Caution - Perform this procedure on one storage array at a time. This procedure requires that you reset the storage arrays that you are upgrading. If you reset more than one storage array at a time, your cluster loses access to data.



Note - For all firmware installations, always read any README files that accompany the firmware patch for the latest information and special notes.


  1. On one node that is attached to the storage array that you are upgrading, detach the submirrors.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
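    For example, with Solaris Volume Manager you might detach each submirror that resides on this array. The mirror d10 and submirror d12 names are hypothetical.
    # metadetach d10 d12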

  2. Apply the controller, disk drive, and UIC firmware patches.

    The Oracle Enterprise Manager Ops Center 2.5 software helps you patch and monitor your data center assets. Oracle Enterprise Manager Ops Center 2.5 helps improve operational efficiency and ensures that you have the latest software patches for your software. Contact your Oracle representative to purchase Oracle Enterprise Manager Ops Center 2.5.

    Additional information for using the Oracle patch management tools is provided in Oracle Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Oracle Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Oracle Solaris Cluster System Administration Guide to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For required firmware, see the Sun System Handbook.
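    For example, an unpacked Oracle Solaris patch is conventionally applied with the patchadd command. The patch ID shown is hypothetical.
    # patchadd /var/tmp/112276-08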

  3. Reset the storage array, if you have not already done so.

    For the procedure about how to reboot a storage array, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
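    For illustration, the reset is typically issued from a telnet session to the storage array. The prompt and command shown are assumptions based on the StorEdge T3 CLI; confirm the exact syntax in the service manual.
    t3:/: reset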

  4. Reattach the submirrors to resynchronize the submirrors.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
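    For example, to reattach the hypothetical submirror from the Step 1 example and start resynchronization:
    # metattach d10 d12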

How to Upgrade Storage Array Firmware When Using Mirroring

Use this procedure to upgrade out-of-date controller firmware, disk drive firmware, or unit interconnect card (UIC) firmware. This procedure assumes that your cluster is operational. This procedure defines Node A as the node on which you are upgrading firmware. Node B is another node in the cluster.


Caution - Perform this procedure on one storage array at a time. This procedure requires that you reset the storage arrays that you are upgrading. If you reset more than one storage array at a time, your cluster loses access to data.


  1. On the node that currently owns the disk group or diskset to which the mirror belongs, detach the logical volume of the storage array on which you are upgrading firmware.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
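    For example, with Veritas Volume Manager you might dissociate the plex that resides on this storage array. The disk group and plex names are hypothetical.
    # vxplex -g datadg dis vol01-02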

  2. Apply the controller, disk drive, and UIC firmware patches.

    For the list of required storage array patches, see the Sun StorEdge T3 Disk Tray Release Notes. To apply firmware patches, see the firmware patch README file. To verify the firmware level, see the Sun StorEdge T3 Disk Tray Release Notes.

  3. Disable the storage array controller that is attached to Node B so that all logical volumes are managed by the remaining controller.

    For more information, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.
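    For illustration, a controller in a partner group is typically disabled from a telnet session to the array. The unit name is hypothetical and the command syntax is an assumption; verify it against the administrator's guide.
    t3:/: disable u1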

  4. On one node that is connected to the partner group, verify that the storage array controllers are visible to the node.
    # format
  5. Enable the storage array controller that you disabled in Step 3.
  6. Reattach the mirrors that you detached in Step 1 to resynchronize the mirrors.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
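    For example, to reattach the hypothetical plex from the Step 1 example and start resynchronization:
    # vxplex -g datadg att vol01 vol01-02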

How to Upgrade a StorEdge T3 Controller to a StorEdge T3+ Controller

Use the following procedure to upgrade a StorEdge T3 storage array controller to a StorEdge T3+ storage array controller in a running cluster.


Caution - Perform this procedure on one storage array at a time. This procedure requires you to reset the storage arrays that you are upgrading. If you reset more than one storage array at a time, your cluster loses access to data.


  1. On one node that is attached to the StorEdge T3 storage array in which you are upgrading the controller, detach that storage array's submirrors.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

  2. Upgrade the StorEdge T3 storage array controller to a StorEdge T3+ storage array controller.

    For instructions, see the Sun StorEdge T3 Array Controller Upgrade Manual.

  3. Reattach the submirrors to resynchronize the submirrors.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

How to Migrate From a Single-Controller Configuration to a Partner-Group Configuration

Use this procedure to migrate your storage arrays from a single-controller (noninterconnected) configuration to a partner-group (interconnected) configuration. This procedure assumes that the two storage arrays in the partner-group configuration are correctly isolated from each other on separate FC switches. Do not disconnect the cables from the FC switches or the nodes.


Caution - You must be an Oracle service provider to perform this procedure. If you need to migrate from a single-controller configuration to a partner-group configuration, contact your Oracle service provider.


This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.

Before You Begin

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  1. Back up all data on the storage arrays before you remove the storage arrays from the Oracle Solaris Cluster configuration.
  2. Remove from the cluster configuration the noninterconnected storage arrays that will form your partner group.

    Follow the procedure in How to Remove a Storage Array in a Single-Controller Configuration.

  3. Connect and configure the single storage arrays to form a partner group.

    Follow the procedure in the Sun StorEdge T3 and T3+ Array Field Service Manual.

  4. Ensure that each storage array has a unique target address.

    For the procedure about how to verify and assign a target address to a storage array, see the Sun StorEdge T3 and T3+ Array Configuration Guide.

  5. Ensure that the cache and mirror settings for each storage array are set to auto.
  6. Ensure that the mp_support parameter for each storage array is set to mpxio.
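    For illustration, the settings in Steps 5 and 6 are typically made from a telnet session to each array by using the T3 sys command; the prompt shown is illustrative.
    t3:/: sys cache auto
    t3:/: sys mirror auto
    t3:/: sys mp_support mpxio
    t3:/: sys list
    The sys list output shows the cache, mirror, and mp_support values so that you can verify the changes.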
  7. If necessary, upgrade the host adapter firmware on Node A.

    The Oracle Enterprise Manager Ops Center 2.5 software helps you patch and monitor your data center assets. Oracle Enterprise Manager Ops Center 2.5 helps improve operational efficiency and ensures that you have the latest software patches for your software. Contact your Oracle representative to purchase Oracle Enterprise Manager Ops Center 2.5.

    Additional information for using the Oracle patch management tools is provided in Oracle Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Oracle Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Oracle Solaris Cluster System Administration Guide to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For required firmware, see the Sun System Handbook.

  8. If necessary, install the required Oracle Solaris patches for storage array support on Node A.

    The Oracle Enterprise Manager Ops Center 2.5 software helps you patch and monitor your data center assets. Oracle Enterprise Manager Ops Center 2.5 helps improve operational efficiency and ensures that you have the latest software patches for your software. Contact your Oracle representative to purchase Oracle Enterprise Manager Ops Center 2.5.

    Additional information for using the Oracle patch management tools is provided in Oracle Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Oracle Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Oracle Solaris Cluster System Administration Guide to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For required firmware, see the Sun System Handbook.

  9. Shut down Node A.

    For the procedure about how to shut down and power off a node, see your Oracle Solaris Cluster system administration documentation.

  10. Perform a reconfiguration boot on Node A to create the new Oracle Solaris device files and links.
    ok boot -r
  11. On Node A, update the /devices and /dev entries.
    # devfsadm -C 
  12. On Node A, update the paths to the DID instances.
    # cldevice clear
  13. Label the new storage array logical volume.

    For the procedure about how to label a logical volume, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.

  14. If necessary, upgrade the host adapter firmware on Node B.

    The Oracle Enterprise Manager Ops Center 2.5 software helps you patch and monitor your data center assets. Oracle Enterprise Manager Ops Center 2.5 helps improve operational efficiency and ensures that you have the latest software patches for your software. Contact your Oracle representative to purchase Oracle Enterprise Manager Ops Center 2.5.

    Additional information for using the Oracle patch management tools is provided in Oracle Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Oracle Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Oracle Solaris Cluster System Administration Guide to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For required firmware, see the Sun System Handbook.

  15. If necessary, install the required Oracle Solaris patches for storage array support on Node B.

    For a list of required Oracle Solaris patches for storage array support, see the Sun StorEdge T3 Disk Tray Release Notes.

  16. Shut down Node B.

    For the procedure about how to shut down and power off a node, see your Oracle Solaris Cluster system administration documentation.

  17. Perform a reconfiguration boot to create the new Oracle Solaris device files and links on Node B.
    ok boot -r
  18. On Node B, update the /devices and /dev entries.
    # devfsadm -C 
  19. On Node B, update the paths to the DID instances.
    # cldevice clear
  20. (Optional) On Node B, verify that the DIDs are assigned to the new LUNs.
    # cldevice list -n NodeB -v
  21. On one node that is attached to the new storage arrays, reset the SCSI reservation state.
    # cldevice repair

    Note - Repeat this command on the same node for each storage array LUN that you are adding to the cluster.


  22. Perform volume management administration to incorporate the new logical volumes into the cluster.

    For more information, see your Veritas Volume Manager documentation.