Oracle Solaris Cluster 3.3 With StorEdge A1000 Array, Netra st A1000 Array, or StorEdge A3500 System Manual For Solaris OS (SPARC Platform Edition)

Maintaining Storage Arrays

The maintenance procedures in FRUs That Do Not Require Oracle Solaris Cluster Maintenance Procedures are performed the same as in a noncluster environment. Table 1-3 lists the procedures that require cluster-specific steps.


Note - When you upgrade firmware on a storage device or on an enclosure, redefine the stripe size of a LUN, or perform other LUN operations, a device ID might change unexpectedly. When you perform a check of the device ID configuration by running the cldevice check command, the following error message appears on your console if the device ID changed unexpectedly.

device id for nodename:/dev/rdsk/cXtYdZsN does not match physical 
device's id for ddecimalnumber, device may have been replaced.

To fix device IDs that report this error, run the cldevice repair command for each affected device.
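
For example, if the check flags the hypothetical device /dev/rdsk/c1t3d0, you might run the following commands to confirm the mismatch and then repair the device ID. The device path is an assumption for illustration only.

# cldevice check
# cldevice repair /dev/rdsk/c1t3d0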


Table 1-3 Task Map: Maintaining a Storage Array or Storage System

Task: Remove a storage array or storage system
Information: How to Remove a Storage Array

Task: Replace a storage array or storage system
Information: How to Remove a Storage Array, then How to Add a Storage Array to an Existing Cluster

Replacing a storage array or storage system requires first removing the storage array or storage system, then adding a new storage array or storage system to the configuration.

Task: Replace a failed controller module or restore an offline controller module
Information: How to Replace a Failed Controller or Restore an Offline Controller

Task: Upgrade controller module firmware and NVSRAM file
Information: How to Upgrade Controller Module Firmware

Task: Add a disk drive
Information: How to Add a Disk Drive

Task: Replace a disk drive
Information: How to Replace a Disk Drive

Task: Remove a disk drive
Information: How to Remove a Disk Drive

Task: Upgrade disk drive firmware
Information: How to Upgrade Disk Drive Firmware

Task: Replace a host adapter
Information: How to Replace a Host Adapter

FRUs That Do Not Require Oracle Solaris Cluster Maintenance Procedures

Each storage device has a different set of FRUs that do not require cluster-specific procedures. Choose among the following storage devices:

Sun StorEdge A1000 Array and Netra st A1000 Array FRUs

The following is a list of administrative tasks that require no cluster-specific procedures. See the Sun StorEdge A1000 and D1000 Installation, Operations, and Service Manual and the Netra st A1000/D1000 Installation and Maintenance Manual for these procedures.

Replacing a storage array-to-host SCSI cable requires no cluster-specific procedures. See the Sun StorEdge RAID Manager User’s Guide and the Sun StorEdge RAID Manager Release Notes for these procedures.

Sun StorEdge A3500 System FRUs

With the exception of one item, the following administrative tasks require no cluster-specific procedures. Shut down the cluster, and then see the Sun StorEdge A3500/A3500FC Controller Module Guide, the Sun StorEdge A1000 and D1000 Installation, Operations, and Service Manual, and the Sun StorEdge Expansion Cabinet Installation and Service Manual for these procedures. See the Oracle Solaris Cluster system administration documentation for procedures about how to shut down a cluster. For a list of Oracle Solaris Cluster documentation, see Related Documentation.

The following administrative tasks require no cluster-specific procedures. See the Sun StorEdge A3500/A3500FC Controller Module Guide, the Sun StorEdge RAID Manager User’s Guide, the Sun StorEdge RAID Manager Release Notes, the Sun StorEdge FC-100 Hub Installation and Service Manual, and the documentation that shipped with your FC hub or FC switch for these procedures.

How to Remove a Storage Array


Caution

Caution - This procedure removes all data that is on the storage array or storage system you are removing.


Before You Begin

This procedure relies on the following prerequisites and assumptions.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  1. Migrate any Oracle Real Application Clusters tables, data services, or volumes off the storage array or storage system.
  2. If no LUNs in the storage array that you are removing are quorum devices, proceed to Step 4.

    Note - Your storage array or storage system might not support LUNs as quorum devices. To determine if this restriction applies to your storage array or storage system, see Restrictions and Requirements.


    To determine whether any LUNs in the storage array are quorum devices, use the following command.

    # clquorum show 
  3. If one of the LUNs in the storage array is a quorum device, relocate that quorum device to another suitable storage array.

    For procedures about how to add and remove quorum devices, see your Oracle Solaris Cluster system administration documentation. For a list of Oracle Solaris Cluster documentation, see Related Documentation.
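
    As an illustration only, assuming the quorum device on the array being removed is the DID device d4 and that d12 is a suitable DID device on another shared storage array, the relocation might look like the following. Add the new quorum device before you remove the old one so that the cluster keeps its quorum votes.

    # clquorum add d12
    # clquorum remove d4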

  4. Halt all activity to the controller module.

    For instructions, see your storage device documentation and your operating system documentation.

  5. If a volume manager does not manage any of the LUNs on the controller module you are removing, proceed to Step 12.
  6. If a volume manager manages any LUNs on the controller module that you are removing, remove the LUN from any diskset or disk group.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

    You must completely remove LUNs that were managed by Veritas Volume Manager from Veritas Volume Manager control before you can delete the LUNs.

    # vxdisk offline cNtXdY
    # vxdisk rm cNtXdY
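
    If Solaris Volume Manager manages the LUN instead, a minimal sketch of removing it from a disk set might look like the following, where dg-schost-1 and /dev/did/rdsk/d4 are hypothetical disk set and DID device names.

    # metaset -s dg-schost-1 -d /dev/did/rdsk/d4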
  7. Delete the LUN.

    For the procedure about how to delete a LUN, see your storage device's documentation.

  8. Remove the paths to the LUNs you deleted in Step 7.
    # rm /dev/rdsk/cNtXdY*
    # rm /dev/dsk/cNtXdY*
    
    # rm /dev/osa/dev/dsk/cNtXdY*
    # rm /dev/osa/dev/rdsk/cNtXdY*
  9. On all nodes, remove references to the storage array.
    # cldevice clear
  10. (StorEdge A3500 Only) Use the lad command to determine the alternate paths to the LUN you are deleting.

    The RAID Manager software creates two paths to the LUN in the /dev/osa/dev/rdsk directory. Substitute the cNtXdY number from the other controller module in the storage array to determine the alternate path.

    For example:

    # lad
    c0t5d0 1T93600714 LUNS: 0 1
    c1t4d0 1T93500595 LUNS: 2

    Therefore, the alternate paths are as follows:

    /dev/osa/dev/dsk/c1t4d1*
    /dev/osa/dev/rdsk/c1t4d1*
  11. (StorEdge A3500 Only) Remove the alternate paths to the LUNs you deleted in Step 7.
    # rm /dev/osa/dev/dsk/cNtXdY*
    # rm /dev/osa/dev/rdsk/cNtXdY*
  12. Disconnect all cables from the storage array and storage system. Remove the hardware from your cluster.
  13. If you plan to remove a host adapter that has an entry in the nvramrc script, delete the references to the host adapters in the nvramrc script.

    Note - If no other parallel SCSI devices are connected to the nodes, you can delete the contents of the nvramrc script. At the OpenBoot PROM, use the setenv command to set use-nvramrc? to false.
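
    For example, to disable the script from the OpenBoot PROM prompt:

    ok setenv use-nvramrc? false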


  14. Remove any unused host adapter from nodes that were attached to the storage array or storage system.
    1. Shut down and power off Node A, from which you are removing a host adapter.

      For the procedure about how to shut down and power off a node, see your Oracle Solaris Cluster system administration documentation. For a list of Oracle Solaris Cluster documentation, see Related Documentation.

    2. Remove the host adapter from Node A.

      For the procedure about how to remove a host adapter, see the documentation that shipped with your node hardware.

    3. Perform a reconfiguration boot to create the new Solaris device files and links.
    4. Repeat Step a through Step c for Node B that was attached to the storage array or storage system.
  15. Restore resource groups to their primary nodes.

    Use the following command for each resource group you want to return to the original node.

    # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
    nodename

    For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

    resourcegroup1[ resourcegroup2 …]

    The resource group or groups that you are returning to the node or nodes.
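
    For example, with a hypothetical node phys-schost-1 and resource group rg-oracle:

    # clresourcegroup switch -n phys-schost-1 rg-oracle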

  16. If this is not the last storage array or storage system in your cluster, you are finished with this procedure.
  17. If this is the last storage array or storage system in the cluster, remove RAID Manager patches, then remove RAID Manager software packages.

    Caution

    Caution - If you improperly remove RAID Manager packages, the next reboot of the node fails. Before you remove RAID Manager software packages, see the Sun StorEdge RAID Manager Release Notes.


    For the procedure about how to remove software packages, see the documentation that shipped with your storage array or storage system.

How to Replace a Failed Controller or Restore an Offline Controller

This procedure assumes that your cluster is operational. For conceptual information about SCSI reservations and failure fencing, see your Oracle Solaris Cluster concepts documentation. For a list of Oracle Solaris Cluster documentation, see Related Documentation.

Before You Begin

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  1. (StorEdge A1000 Only) If none of the LUNs in the storage array is a quorum device, proceed to Step 3.

    Note - Your storage array or storage system might not support LUNs as quorum devices. To determine if this restriction applies to your storage array or storage system, see Restrictions and Requirements.


    To determine whether any LUNs in the storage array are quorum devices, use the following command.

    # clquorum show 
  2. (StorEdge A1000 Only) If any of the LUNs in the storage array is a quorum device, relocate that quorum device to another suitable storage array.

    For procedures about how to add and remove quorum devices, see your Oracle Solaris Cluster system administration documentation. For a list of Oracle Solaris Cluster documentation, see Related Documentation.

  3. (StorEdge A3500 Only) On both nodes, set the System_LunReDistribution parameter in the /etc/raid/rmparams file to false to prevent LUNs from being automatically assigned to the controller that is being brought online.

    Caution

    Caution - You must set the System_LunReDistribution parameter in the /etc/raid/rmparams file to false so that no LUNs are assigned to the controller being brought online. After you verify in Step 8 that the controller has the correct SCSI reservation state, you can balance LUNs between both controllers.


    For the procedure about how to modify the rmparams file, see the Sun StorEdge RAID Manager Installation and Support Guide.
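
    As a hedged illustration only (the exact syntax and capitalization are defined in the RAID Manager documentation), the entry in the /etc/raid/rmparams file might resemble the following after the change.

    System_LunReDistribution=false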

  4. Restart the RAID Manager daemon.
    # /etc/init.d/amdemon stop
    # /etc/init.d/amdemon start
  5. If your controller module is offline, but does not have a failed controller, proceed to Step 7.
  6. If you have a failed controller, replace the failed controller with a new controller. Do not bring the controller online.

    For the procedure about how to replace controllers, see the Sun StorEdge A3500/A3500FC Controller Module Guide and the Sun StorEdge RAID Manager Installation and Support Guide for additional considerations.

  7. On one node, use the RAID Manager GUI's Recovery application to bring the controller online.

    Note - You must use the RAID Manager GUI's Recovery application to bring the controller online. Do not use the Redundant Disk Array Controller Utility (rdacutil) because this utility ignores the value of the System_LunReDistribution parameter in the /etc/raid/rmparams file.


    For information about the Recovery application, see the Sun StorEdge RAID Manager User’s Guide. If you have problems with bringing the controller online, see the Sun StorEdge RAID Manager Installation and Support Guide.

  8. On one node that is connected to the storage array or storage system, verify that the controller has the correct SCSI reservation state.

    Use the following command on LUN 0 of the controller you want to bring online.

    In the following command, devicename is the full UNIX path name of the device, for example, /dev/dsk/c1tXdY.

    # cldevice repair devicename
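
    For example, if LUN 0 of the controller corresponds to the hypothetical device path /dev/dsk/c1t4d0:

    # cldevice repair /dev/dsk/c1t4d0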
  9. (StorEdge A3500 Only) Set the controller to active/active mode. Assign LUNs to the controller.

    For more information about controller modes, see the Sun StorEdge RAID Manager Installation and Support Guide and the Sun StorEdge RAID Manager User’s Guide.

  10. (StorEdge A3500 Only) Reset the System_LunReDistribution parameter in the /etc/raid/rmparams file to true.

    For the procedure about how to change the rmparams file, see the Sun StorEdge RAID Manager Installation and Support Guide.

  11. (StorEdge A3500 Only) Restart the RAID Manager daemon.
    # /etc/init.d/amdemon stop
    # /etc/init.d/amdemon start

How to Upgrade Controller Module Firmware

Use either the online method or the offline method to upgrade the controller module firmware. The method that you choose depends on whether you are also upgrading the NVSRAM file.

Before You Begin

This procedure assumes that your cluster is operational.

  1. Are you upgrading the NVSRAM firmware file?
    • If you are not upgrading the NVSRAM file, you can use the online method.

      Upgrade the firmware by using the online method, as described in the Sun StorEdge RAID Manager User’s Guide. No special steps are required for a cluster environment.

    • If you are upgrading the NVSRAM file, you must use an offline method. Use one of the following procedures.

      • If the data on your controller module is mirrored on another controller module, use the procedure in Step 2.

      • If the data on your controller module is not mirrored on another controller module, use the procedure in Step 3.

  2. Use this step if you are upgrading the NVSRAM and other firmware files on a controller module. This controller module must have mirrored data.
    1. Halt all activity to the controller module.

      For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

    2. Update the firmware files by using the offline method, as described in the Sun StorEdge RAID Manager User’s Guide.
    3. Restore all activity to the controller module.

      For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

      This step completes the firmware upgrade.

  3. Use this step if you are upgrading the NVSRAM and other firmware files on a controller module. This controller module must not have mirrored data.
    1. Shut down the entire cluster.

      For the procedure about how to shut down a cluster, see your Oracle Solaris Cluster system administration documentation.

    2. Boot one node that is attached to the controller module into noncluster mode.

      For the procedure about how to boot a node in noncluster mode, see your Oracle Solaris Cluster system administration documentation. For a list of Oracle Solaris Cluster documentation, see Related Documentation.

    3. Update the firmware files by using the offline method, as described in the Sun StorEdge RAID Manager User’s Guide.
    4. Boot both nodes into cluster mode.

      For more information about how to boot nodes, see your Oracle Solaris Cluster system administration documentation. For a list of Oracle Solaris Cluster documentation, see Related Documentation.

      This step completes the firmware upgrade.

How to Add a Disk Drive

Adding a disk drive enables you to increase your storage space after a storage array has been added to your cluster.


Caution

Caution - If the disk drive that you are adding was previously owned by another controller module, reformat the disk drive to wipe clean the old DacStore information before adding the disk drive to this storage array. If any old DacStore information remains, it can cause aberrant behavior including the appearance of ghost disks or LUNs in the RAID Manager interfaces.


Before You Begin

This procedure relies on the following prerequisites and assumptions.

  1. Verify that the new disk drive is formatted.

    For information about how to move drives between storage arrays, see the Sun StorEdge RAID Manager Release Notes.

  2. Install the new disk drive to the storage array.

    For the procedure about how to install a disk drive, see your storage documentation. For a list of storage documentation, see Related Documentation.

  3. Allow the disk drive approximately 30 seconds to spin up.
  4. Run Health Check to ensure that the new disk drive is not defective.

    For instructions about how to run Recovery Guru and Health Check, see the Sun StorEdge RAID Manager User’s Guide.

  5. Fail the new drive, then revive the drive to update DacStore on the drive.

    For the procedure about how to fail and revive drives, see the Sun StorEdge RAID Manager User’s Guide.

  6. Repeat Step 1 through Step 5 for each disk drive you are adding.

See Also

To create LUNs for the new drives, see How to Create a LUN.

How to Replace a Disk Drive

You might want to perform this procedure if a disk drive has failed or is behaving in an unreliable manner.

For conceptual information about quorum, quorum devices, global devices, and device IDs, see your Oracle Solaris Cluster concepts documentation. For a list of Oracle Solaris Cluster documentation, see Related Documentation.

Before You Begin

This procedure relies on the following prerequisites and assumptions.

  1. Does replacing the disk drive affect any LUN's availability?
    • If no, proceed to Step 2.
    • If yes, remove the LUNs from volume management control. For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
  2. Replace the disk drive in the storage array.

    For the procedure about how to replace a disk drive, see your storage documentation. For a list of storage documentation, see Related Documentation.

  3. Run Health Check to ensure that the new disk drive is not defective.

    For the procedure about how to run Recovery Guru and Health Check, see the Sun StorEdge RAID Manager User’s Guide.

  4. Does the failed drive belong to a drive group?
    • If the drive does not belong to a drive group, proceed to Step 5.
    • If the drive is part of a drive group, reconstruction starts automatically. If reconstruction does not start automatically for any reason, select Reconstruct from the Manual Recovery application. Do not select Revive. When reconstruction is complete, skip to Step 6.
  5. Fail the new drive, then revive the drive to update DacStore on the drive.

    For the procedure about how to fail and revive drives, see the Sun StorEdge RAID Manager User’s Guide.

  6. If you removed LUNs from volume management control in Step 1, return the LUNs to volume management control.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

How to Remove a Disk Drive

Removing a disk drive enables you to reduce or reallocate your existing storage pool. You might want to perform this procedure in the following scenarios.

For conceptual information about quorum, quorum devices, global devices, and device IDs, see your Oracle Solaris Cluster concepts documentation.

Before You Begin

This procedure relies on the following prerequisites and assumptions.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  1. If the LUN that is associated with the disk drive you are removing is not a quorum device, proceed to Step 3.

    Note - Your storage array or storage system might not support LUNs as quorum devices. To determine if this restriction applies to your storage array or storage system, see Restrictions and Requirements.


    To determine whether any LUNs in the storage array are quorum devices, use the following command.

    # clquorum show 
  2. If the LUN that is associated with the disk drive you are removing is a quorum device, relocate that quorum device to another suitable storage array.

    For procedures about how to add and remove quorum devices, see your Oracle Solaris Cluster system administration documentation.

  3. Remove the LUN that is associated with the disk drive you are removing.

    For the procedure about how to remove a LUN, see How to Delete a LUN.

  4. Remove the disk drive from the storage array.

    For the procedure about how to remove a disk drive, see your storage documentation. For a list of storage documentation, see Related Documentation.


    Caution

    Caution - After you remove the disk drive, install a dummy drive to maintain proper cooling.


How to Upgrade Disk Drive Firmware


Caution

Caution - You must be an Oracle service provider to perform disk drive firmware updates. If you need to upgrade drive firmware, contact your Oracle service provider.


How to Replace a Host Adapter


Note - Several steps in this procedure require you to halt I/O activity. To halt I/O activity, take the controller module offline by using the RAID Manager GUI's manual recovery procedure in the Sun StorEdge RAID Manager User’s Guide.


Before You Begin

This procedure relies on the following prerequisites and assumptions.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  1. Determine the resource groups and device groups that are running on Node A.
    # clresourcegroup status -n nodename
    # cldevicegroup status -n nodename

    Note the device groups, the resource groups, and the node list for the resource groups. You will need this information to restore the cluster to its original configuration in Step 25 of this procedure.

  2. Move all resource groups and device groups off Node A.
    # clnode evacuate fromnode
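
    For example, if Node A is the hypothetical node phys-schost-1:

    # clnode evacuate phys-schost-1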
  3. Without powering off the node, shut down Node A.

    For the procedure about how to shut down and power off a node, see your Oracle Solaris Cluster system administration documentation. For a list of Oracle Solaris Cluster documentation, see Related Documentation.

  4. From Node B, halt I/O activity to SCSI bus A.

    For instructions, see the Sun StorEdge RAID Manager User’s Guide.

  5. From the controller module end of the SCSI cable, disconnect the SCSI bus A cable. This cable connects the controller module to Node A. Afterward, replace this cable with a differential SCSI terminator.
  6. Restart I/O activity on SCSI bus A.

    For instructions, see the Sun StorEdge RAID Manager User’s Guide.

  7. If servicing the failed host adapter affects SCSI bus B, proceed to Step 9.
  8. If servicing the failed host adapter does not affect SCSI bus B, skip to Step 12.
  9. From Node B, halt I/O activity to the controller module on SCSI bus B.

    For instructions, see the Sun StorEdge RAID Manager User’s Guide.

  10. From the controller module end of the SCSI cable, disconnect the SCSI bus B cable. This cable connects the controller module to Node A. Afterward, replace this cable with a differential SCSI terminator.
  11. Restart I/O activity on SCSI bus B.

    For instructions, see the Sun StorEdge RAID Manager User’s Guide.

  12. Power off Node A.
  13. Replace Node A's host adapter.

    For the procedure about how to replace a host adapter, see the documentation that shipped with your node hardware.

  14. Power on Node A, but do not allow the node to boot. If necessary, halt the system.
  15. From Node B, halt I/O activity to the controller module on SCSI bus A.

    For instructions, see the Sun StorEdge RAID Manager User’s Guide.

  16. Remove the differential SCSI terminator from SCSI bus A. Afterward, reinstall the SCSI cable to connect the controller module to Node A.
  17. Restart I/O activity on SCSI bus A.

    For instructions, see the Sun StorEdge RAID Manager User’s Guide.

  18. Did you install a differential SCSI terminator to SCSI bus B in Step 10?
    • If no, skip to Step 21.
    • If yes, halt I/O activity on SCSI bus B, then continue with Step 19.
  19. Remove the differential SCSI terminator from SCSI bus B. Afterward, reinstall the SCSI cable to connect the controller module to Node A.
  20. Restart I/O activity on SCSI bus B.

    For instructions, see the Sun StorEdge RAID Manager User’s Guide.

  21. Bring the controller module online.

    For instructions, see the Sun StorEdge RAID Manager User’s Guide.

  22. Rebalance all logical unit numbers (LUNs).

    For instructions, see the Sun StorEdge RAID Manager User’s Guide.

  23. Boot Node A into cluster mode.
  24. (Optional) Return resource groups and device groups to Node A.
  25. If you moved device groups off their original node in Step 2, restore the device groups that you identified in Step 1 to their original node.

    Perform the following step for each device group you want to return to the original node.

    # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 …]
    -n nodename

    The node to which you are restoring device groups.

    devicegroup1[ devicegroup2 …]

    The device group or groups that you are restoring to the node.

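    For example, with a hypothetical node phys-schost-1 and device group dg-schost-1:

    # cldevicegroup switch -n phys-schost-1 dg-schost-1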

  26. If you moved resource groups off their original node in Step 2, restore the resource groups that you identified in Step 1 to their original node.

    Perform the following step for each resource group you want to return to the original node.

    # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
    nodename

    For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

    resourcegroup1[ resourcegroup2 …]

    The resource group or groups that you are returning to the node or nodes.