Sun Storage 6580 and 6780 Arrays

Hardware Release Notes, Release 6.10

Restrictions and Known Issues

The following sections provide information about restrictions, known issues, and bugs (or CRs) filed against this product release. If a recommended workaround is available for a bug, it follows the bug description.

For information about bug fixes in this release, see the Sun Storage Common Array Manager Software Release Notes.

Installation and Hardware Related Issues

This section describes known issues and bugs related to installing and initially configuring Sun Storage 6580 and 6780 arrays, as well as general issues related to Sun Storage 6580 and 6780 array hardware and firmware.

Single Path Data Connections

In a single path data connection, a group of heterogeneous servers is connected to an array through a single connection. Although this connection is technically possible, there is no redundancy, and a connection failure will result in loss of access to the array.


Caution - Because of the single point of failure, single path data connections are not recommended.


Setting the Tray Link Rate

When you set the tray link rate for an expansion tray, all expansion trays connected to the same drive channel must be set to operate at the same data transfer rate (speed).

For details about how to set the tray link rate, see “Setting the Tray Link Rate” in the Hardware Installation Guide for Sun Storage 6580 and 6780 Arrays.

Controller Issues

Log Events Using SLES 11.1 With smartd Monitoring Enabled

Bug 15693183 (CR7014293) – When volumes are mapped to a SLES 11.1 host with smartd monitoring enabled, on either a Sun Storage 2500-M2 or 6780 array, it is possible to receive “IO FAILURE” and “Illegal Request ASC/ASCQ” log events.

Workaround – Either disable smartd monitoring or disregard the messages. This is an issue with the host OS.
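
The commands below show one way to disable smartd monitoring on a SLES 11 host. This is only a sketch; the service name and init mechanism are assumptions and can differ depending on how smartmontools is installed.

  # Stop the SMART monitoring daemon immediately (assumes a SysV init script)
  /etc/init.d/smartd stop

  # Prevent smartd from starting again at boot
  chkconfig smartd off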

After Re-Installing the Oracle Virtual Machine (OVM) Manager, International Standards Organizations (ISO) Files Are Listed by Universally Unique Identifier (UUID) Rather Than by Friendly Name

Problem or Restriction

This problem occurs when you re-install the OVM manager on the host using the same ID as the previous installation. ISO file systems that were imported with the previous OVM manager are now renamed with their UUIDs rather than their friendly names. This makes it difficult to identify the ISO file systems.

Workaround

None.

After Un-Mapping a Volume from an Oracle Virtual Machine (OVM) Server, the Volume Continues to Appear in the Storage Database on the Server

Problem or Restriction

This problem occurs when you un-map a volume on an OVM server. The OVM manager continues to show the volume along with those that are still mapped to the server. When you try to assign one of the affected volumes to a virtual machine, you see this error message:

disk doesn't exist

Workaround

After you un-map the volumes, use the OVM manager to remove those volumes from the storage database on the server.

In the Oracle Virtual Machine (OVM) Manager User Interface, Only One Drive at a Time Can Be Selected for Deletion

Problem or Restriction

In the OVM user interface, only one drive at a time can be selected for deletion.

Workaround

None.

Kernel Panics During Controller Firmware (CFW) Download

Problem or Restriction

This problem occurs when you upgrade CFW. The kernel panics on an attached host when downloading the CFW and shows the following message:

Kernel panic - not syncing: Fatal exception
BUG: unable to handle kernel NULL pointer dereference at 0000000000000180
IP: [<ffffffff8123450a>] kref_get+0xc/0x2a
PGD 3c275067 PUD 3c161067 PMD 0
Oops: 0000 [#1] SMP
last sysfs file: /sys/block/sdc/dev

Workaround

To avoid this problem, do not perform a CFW upgrade on a storage array that is attached to hosts running the affected operating system version. If the problem occurs, power cycle the host.

BCM Driver Fails to Load

Problem or Restriction

This problem occurs when you attempt to install the BCM driver on a server. The driver installs, but the component reports one of the following errors:

This device is not configured correctly. (Code 1) The system cannot find the file specified.

or

The drivers for this device are not installed. (Code 28) The system cannot find the file specified.

Workaround

None.

Kernel Panics During Controller Firmware Download

Problem or Restriction

This problem occurs when you upgrade controller firmware. A host with the affected kernel with UEK support experiences a devloss error for one of the world-wide port numbers (WWPNs) followed by a kernel panic.

Workaround

To avoid this problem, upgrade the host kernel to release 2.6.32-300.23.1.

If the problem occurs, power cycle the host.
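
As a sketch only, on an Oracle Linux host running the Unbreakable Enterprise Kernel (UEK), checking the running kernel and updating it might look like the following. The package name and repository setup are assumptions; confirm them for your environment.

  # Show the kernel release currently running on the host
  uname -r

  # Update the UEK kernel package (assumes yum and the UEK channel are configured)
  yum update kernel-uek

  # Reboot so the updated kernel (for example, 2.6.32-300.23.1) takes effect
  reboot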

Network Interface on Device eth0 Fails to Come Online When Booting a Host

Problem or Restriction

This problem occurs during a host boot process when a large number (112 or more) of volumes are mapped to the host. At the point in the boot process where the network interface should be brought online, the host displays the following message:

Bringing up interface eth0: Device eth0 has different MAC address than expected. [FAILED]

The network interface does not come online during the boot process, and cannot subsequently be brought online.

Workaround

To avoid this problem, reduce the number of volumes mapped to a host running the affected version of Oracle Linux. You can map additional volumes to the host after it boots.
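
As a rough check (a sketch that assumes the Linux host uses device-mapper multipath), you can count how many multipath volumes the host currently sees before deciding whether to unmap some of them:

  # Count the multipath devices presented to the host
  multipath -ll | grep -c "dm-"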

When Over 128 Volumes are Mapped to a Host, Paths to Only the First 128 Volumes are Restored after the Controller is Reset

Problem or Restriction

This problem occurs when you have more than 128 volumes mapped to a host, both controllers reboot, and only one controller comes back online. Only the first 128 volumes mapped to the host are accessible to the host for input/output (I/O) operations after the reboot. During the controller reboot, there might be a delay before any of the volumes are accessible to the host. I/O timeouts occur when the host tries to communicate with the inaccessible volumes.

Workaround

You can avoid this problem by mapping no more than 128 volumes to a host with the affected operating system release. If the problem occurs, run the multipath command again after the controller comes back online.
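
On a Linux host that uses device-mapper multipath, re-running path discovery might look like the following sketch; verify the options against your multipath-tools version.

  # Rebuild the multipath maps so paths to the remaining volumes are restored
  multipath -v2

  # Confirm that all expected volumes and paths are present
  multipath -ll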

Unable to Add More Than 117 Volumes to the Oracle Virtual Machine (OVM) Manager Database

Problem or Restriction

This problem occurs when you attempt to add more than 117 volumes to the database of the OVM manager. When the OVM manager scans for the additional volumes, it returns the following error:

OSCPlugin.OperationFailedEx:'Unable to query ocfs2 devices'

Workaround

You can avoid this problem by deleting volumes from the OVM manager database when those volumes are no longer mapped to the OVM server.

Write-Back Cache is Disabled after Controllers Reboot with Multiple Failed Volumes in a Storage Array

Problem or Restriction

This problem occurs when power is turned off and then back on to a controller-drive tray while there are failed volumes in the storage array. When the controllers reboot after the power cycle, they attempt to flush restored cache data to disk. If the controllers cannot flush the cache data because of the failed volumes, all of the volumes in the storage array remain in write-through mode after the controllers reboot, which substantially reduces input/output (I/O) performance.

Workaround

None.

During Multiple Node Failover/Failback Events, Input/Output (I/O) Operations Time Out Because a Resource is Not Available to a Cluster

Problem or Restriction

This problem occurs when a cluster loses access to a file system resource. A message similar to the following appears in the cluster log:

Device /dev/mapper/mpathaa not found. Will retry wait to see if it appears.
The device node /dev/mapper/mpathaa was not found or did not appear in the udev create time limit of 60 seconds
Fri Apr 27 18:45:08 CDT 2012 restore: END restore of file system /home/smashmnt11 (err=1)
ERROR: restore action failed for resource /home/smashmnt11
/opt/LifeKeeper/bin/lcdmachfail: restore in parallel of resource "dmmp19021" has failed; will re-try serially
END vertical parallel recovery with return code -1

You might experience I/O timeouts.

Workaround

If this problem occurs, restart I/O operations on the storage array.
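
Before restarting I/O, it can help to confirm that the multipath device node named in the cluster log is present again. The device name below is taken from the example log and is only illustrative.

  # Check that the previously missing device node now exists
  ls -l /dev/mapper/mpathaa

  # List the multipath maps to confirm the paths are healthy
  multipath -ll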

After an NVSRAM Download, a Controller Reboots a Second Time when the NVSRAM is Activated

Hardware/Software/Firmware – All controllers

Problem or Restriction

This problem occurs when a controller detects corruption in the signature of the NVSRAM loaded on the controller. The controller restores the NVSRAM from the physical drive, and then reboots.

Workaround

The controller recovers and continues normal operations.

When a Controller is Not Set Offline Before Being Replaced, an Exception Occurs when the Replacement Controller is Brought Online

Problem or Restriction

This problem occurs when you do not follow the standard procedure for replacing a controller. If you do not set a controller offline before you replace it, and the replacement controller has a different firmware level from the remaining controller, the firmware mismatch is not properly detected.

Workaround

You can avoid this problem by following the standard procedure for replacing a controller. If this problem occurs, the replacement controller reboots after the exception and the storage array returns to normal operations.

Input/Output (I/O) Errors Occur when a Cable is Disconnected between a Host and a Controller, and the Alternate Controller is Unavailable

Problem or Restriction

This problem occurs when the maximum number of volumes (256) is mapped to a host. If you disconnect the cable between a controller and a host, and then reconnect the cable, I/O errors occur if the alternate controller becomes unavailable before the host can rediscover all of the volumes on the connection.

Workaround

After some delay, the host will rediscover all of the volumes and normal operations will resume.

Backup Failure or I/O Errors with Snapshot Creation or Mounting Failure During Backup of Cluster Shared Volumes (CSV)

Problem or Restriction

This problem occurs when a backup operation of CSVs begins. The backup application contacts the VSS provider and initiates the backup operation, but the creation or mounting of a snapshot volume fails. The backup application then tries to back up the CSVs themselves instead of a snapshot of the CSVs. If the Retry option is set with lock, the application hosted on the CSVs might report errors when data is written to or read from these volumes. If the Retry option is set without lock, the backup skips files. The error occurs because both the backup application and the application hosted on the CSVs try to lock the volume or file, which results in a conflict.

Users encounter this issue whenever there is a resource conflict between the backup operation and an application performing read or write operations on the volume being backed up.

Depending on whether the Retry option is set with lock or without lock, the backup operation either reports errors from the hosted application or skips files.

Workaround

Run the backup operation at a time when the application is not doing write or read intensive work on the CSV undergoing backup.

Also, when using the option "Without Lock," files are skipped, and the user can then create another backup operation for the skipped files. For more information, see http://www.symantec.com/docs/TECH195868.

Data is Misread when a Physical Drive Has an Unreadable Sector

Problem or Restriction

This problem occurs when issuing a read to a location where the length of the read includes an unreadable sector. The host operating system assumes that data up to the unreadable sector was read correctly, but this might not be the case. A bug has been opened with Red Hat.

Workaround

Replace any drives that have media errors.

Solaris 10 Guest in Fault Tolerant Mode Is Unable to Relocate Secondary Virtual Machine (VM) Upon Host Failure

Problem or Restriction

This problem occurs when the host fails while it is running a secondary VM for a Solaris 10 (u10) guest. The message in the event log for that VM reads as follows:

No compatible host for the Fault Tolerant secondary VM

When this problem occurs, the secondary VM for the guest is stuck in an Unknown status, and Fault Tolerance cannot be re-enabled for this VM. An attempt to disable and then re-enable Fault Tolerance fails because the secondary VM cannot be relocated from a host that is not responding. For the same reason, Fault Tolerance cannot be completely turned off on the VM.

The main problem is that the HA service reports that there are not enough resources available to restart the secondary VM. However, even after reducing resource use in the cluster to a level that leaves an overabundance of resources, the HA service still reports that there are not enough resources, and therefore no host in the cluster is available to run the secondary VM. After the VM fails completely, however, the VM can be restarted and put into Fault Tolerance mode again.

This shutdown of the VM always happens when a Fault Tolerance-enabled VM is running unprotected, without a linked secondary VM, and the host on which the primary VM is running fails for any reason. The failure of the secondary VM in a node-failure scenario for Solaris 10 guests can be reproduced regularly.

When a node failure happens, Solaris 10 guests can have issues restoring a secondary VM for Fault Tolerance-enabled VMs. This can be seen in the vSphere client, both in the cluster VM view and in the event log for the VM.

Workaround

In most cases, the customer can correct the problem by performing the following actions in the order shown. Perform one action, and if that does not work, proceed to the next until the problem is resolved.

  1. Disable and re-enable fault tolerance on the affected VM.

  2. Turn off fault tolerance for the VM altogether and turn it back on.

  3. Attempt to live vMotion the VM and try action 1 and action 2 again.

It is possible that either the host CPU model is not compatible with turning Fault Tolerance off and on for running VMs, or that, even after performing the previous action, a secondary VM still does not start. If the secondary VM does not start, the customer needs to briefly shut down the affected VM, perform action 2, and then restart the VM.

Documentation Issues

This section describes issues related to Sun Storage 6580 and 6780 array documentation.

Total Cache Size Specification for Sun Storage 6780 Array

In Table 1-1 of the Hardware Installation Guide for Sun Storage 6580 and 6780 Arrays (820-5773-11), the value for “Total cache size” is incorrectly reported as “16 Gbytes or 32 Gbytes.” As of the CAM 6.6 release, the revised value is “8, 16, 32, or 64 Gbytes.” The revised value is documented in the Comparison of Sun Storage 6580 and 6780 Array Configurations section of this document.

Inaccurate Cabling Diagrams in Hardware Installation Guide

The Hardware Installation Guide for Sun Storage 6580 and 6780 Arrays (p/n 820-5773-11) shows inaccurate cabling for Controller A. Figure B-13 shows two cables that are incorrectly routed and two other cables that are missing. On Controller A, drive ports 2 and 3 go to Array 7 and Array 6, respectively, but they should go to Array 11 and Array 10. Also, the data path cable from Controller A to either Array 6 or Array 7 is missing.

Workaround: Use the cabling matrix (Figure B-14) as your cabling guide. The cabling matrix is correct.