Sun Storage 2500-M2 Arrays

Hardware Release Notes, Release 6.10

Document Information

1.  Sun Storage 2500-M2 Arrays Hardware Release Notes

What's New in This Release

Product Overview

About the Management Software

System Requirements

Firmware Requirements

Supported Disk Drives and Tray Capacity

Array Expansion Module Support

Data Host Requirements

Multipathing Software

Supported Host Bus Adaptors (HBAs)

Supported FC and Multilayer Switches

Expansion Tray Specifications

ALUA/TPGS Multipathing with VMware

Procedure for ESX4.1U2 and ESXi5.0

Procedure for ESX4.1U3 and ESXi5.0U1

Restrictions and Known Issues

Restrictions

Single Path Data Connections

SAS Host Ports on the Sun Storage 2540-M2

Controller Issues

Log Events With smartd Monitoring Enabled

After Re-Installing the Oracle Virtual Machine (OVM) Manager, International Standards Organization (ISO) Files Are Listed by Universally Unique Identifier (UUID) Rather Than by Friendly Name

After Un-Mapping a Volume from an Oracle Virtual Machine (OVM) Server, the Volume Continues to Appear in the Storage Database on the Server

In the Oracle Virtual Machine (OVM) Manager User Interface, Only One Drive at a Time Can Be Selected for Deletion

Kernel Panics During Controller Firmware (CFW) Download

BCM Driver Fails to Load

Kernel Panics During Controller Firmware Download

Network Interface on Device eth0 Fails to Come Online When Booting a Host

When Over 128 Volumes are Mapped to a Host, Paths to Only the First 128 Volumes are Restored after the Controller is Reset

Task Aborts Are Logged During a Controller Firmware Upgrade

Unable to Add More Than 117 Volumes to the Oracle Virtual Machine (OVM) Manager Database

Write-Back Cache is Disabled after Controllers Reboot with Multiple Failed Volumes in a Storage Array

During Multiple Node Failover/Failback Events, Input/Output (I/O) Operations Time Out Because a Resource is Not Available to a Cluster

After an NVSRAM Download, a Controller Reboots a Second Time when the NVSRAM is Activated

When a Controller is Not Set Offline Before Being Replaced, an Exception Occurs when the Replacement Controller is Brought Online

Input/Output (I/O) Errors Occur when Disconnection of Devices from a SAS Switch Is Not Detected

A Path Failure and Premature Failover Occur when a Cable is Disconnected between a Host and a Controller

Input/Output (I/O) Errors Occur when a Cable is Disconnected between a Host and a Controller, and the Alternate Controller is Unavailable

With 3 Gb/s SAS Host Bus Adapters (HBAs) and Heavy Input/Output (I/O), I/O Timeouts Occur During a Controller Firmware Upgrade

Host Operating System Logs "Hung Task" During a Path Failure

Backup Failure or I/O Errors with Snapshot Creation or Mounting Failure During Backup of Cluster Shared Volumes (CSV)

With Multiple SAS Hosts Using Single-PHY, a Host Cable Pull During Input/Output (I/O) Operations Causes a Controller Reboot

Data is Misread when a Physical Drive Has an Unreadable Sector

Solaris 10 Guest in Fault Tolerant Mode Is Unable to Relocate Secondary Virtual Machine (VM) Upon Host Failure

Documentation Bugs

Hardware Installation Guide

Related Documentation

Documentation, Support, and Training

Restrictions and Known Issues

The following are restrictions and known issues applicable to this product release.

Restrictions

Single Path Data Connections

In a single path data connection, a group of heterogeneous servers is connected to an array through a single connection. Although this connection is technically possible, there is no redundancy, and a connection failure will result in loss of access to the array.


Caution - Because of the single point of failure, single path data connections are not recommended.


SAS Host Ports on the Sun Storage 2540-M2

Although SAS host ports are physically present on the Sun Storage 2540-M2 array controller tray, they are not supported for use and are capped at the factory. The following figure shows the location of these ports. The Sun Storage 2540-M2 supports only Fibre Channel host connectivity.

[Figure: location of the capped SAS host ports on the controller tray]

Controller Issues

Log Events With smartd Monitoring Enabled

Bug 15693183 (7014293) – When volumes are mapped to a Linux host with smartd monitoring enabled, on either a Sun Storage 2500-M2 or 6780 array, it is possible to receive “IO FAILURE” and “Illegal Request ASC/ASCQ” log events. This bug has been observed on SLES 11.1, but occurs also on other Linux platforms and versions.

Workaround – Either disable smartd monitoring or disregard the messages. This is an issue with the host OS.
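If smartd monitoring is still wanted for the local disks, a narrower alternative is to stop smartd from probing the array volumes at all. This is a minimal sketch of an /etc/smartd.conf change, assuming a standard smartmontools installation; the local device name (/dev/sda) is an assumption for your host:

```
# /etc/smartd.conf - sketch: monitor only local disks, not array volumes.
# Comment out the catch-all directive that probes every block device:
# DEVICESCAN
# Monitor the local system disk explicitly (device name is an assumption):
/dev/sda -a
```

Restart the smartd service after editing the file for the change to take effect.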

After Re-Installing the Oracle Virtual Machine (OVM) Manager, International Standards Organization (ISO) Files Are Listed by Universally Unique Identifier (UUID) Rather Than by Friendly Name

Operating System

Hardware/Software/Firmware

Problem or Restriction

This problem occurs when you re-install the OVM manager on the host using the same ID as the previous installation. ISO file systems that were imported with the previous OVM manager are now renamed with their UUIDs rather than their friendly names. This makes it difficult to identify the ISO file systems.

Workaround

None.

After Un-Mapping a Volume from an Oracle Virtual Machine (OVM) Server, the Volume Continues to Appear in the Storage Database on the Server

Operating System

Hardware/Software/Firmware

Problem or Restriction

This problem occurs when you un-map a volume on an OVM server. The OVM manager continues to show the volume along with those that are still mapped to the server. When you try to assign one of the affected volumes to a virtual machine, you see this error message:

disk doesn't exist

Workaround

After you un-map the volumes, use the OVM manager to remove those volumes from the storage database on the server.

In the Oracle Virtual Machine (OVM) Manager User Interface, Only One Drive at a Time Can Be Selected for Deletion

Operating System

Hardware/Software/Firmware

Problem or Restriction

In the OVM user interface, only one drive at a time can be selected for deletion.

Workaround

None.

Kernel Panics During Controller Firmware (CFW) Download

Operating System

Hardware/Software/Firmware

Problem or Restriction

This problem occurs when you upgrade CFW. During the download, the kernel on an attached host panics and displays the following message:

Kernel panic - not syncing: Fatal exception
BUG: unable to handle kernel NULL pointer dereference at 0000000000000180
IP: [<ffffffff8123450a>] kref_get+0xc/0x2a
PGD 3c275067 PUD 3c161067 PMD 0
Oops: 0000 [#1] SMP
last sysfs file: /sys/block/sdc/dev

Workaround

To avoid this problem, do not perform a CFW upgrade on a storage array that is attached to hosts running the affected operating system version. If the problem occurs, power cycle the host.

BCM Driver Fails to Load

Operating System

Hardware/Software/Firmware

Problem or Restriction

This problem occurs when you attempt to install the BCM driver on a server. The driver installs, but the component reports one of the following errors:

This device is not configured correctly. (Code 1) The system cannot find the file specified.

or

The drivers for this device are not installed. (Code 28) The system cannot find the file specified.

Workaround

None.

Kernel Panics During Controller Firmware Download

Operating System

Hardware/Software/Firmware

Problem or Restriction

This problem occurs when you upgrade controller firmware. A host running the affected kernel with UEK support experiences a devloss error for one of its world-wide port names (WWPNs), followed by a kernel panic.

Workaround

To avoid this problem, upgrade the host kernel to release 2.6.32-300.23.1.

If the problem occurs, power cycle the host.
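The kernel prerequisite above can be checked before starting the upgrade. A minimal sketch, assuming a POSIX shell and GNU coreutils (for sort -V) on the host; the version-string format is taken from the workaround above:

```shell
#!/bin/sh
# Sketch: check whether the running kernel is at or above the fixed release.
required="2.6.32-300.23.1"

kernel_ok() {
    # Succeeds if the version in $1 is the same as or newer than $required.
    lowest="$(printf '%s\n%s\n' "$required" "$1" | sort -V | head -n 1)"
    [ "$lowest" = "$required" ]
}

if kernel_ok "$(uname -r)"; then
    echo "kernel $(uname -r) includes the fix"
else
    echo "kernel $(uname -r) is older than $required; upgrade before the firmware download"
fi
```

The comparison relies on sort -V ordering release strings numerically, which handles the dotted-dashed kernel version format used here.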

Network Interface on Device eth0 Fails to Come Online When Booting a Host

Operating System

Hardware/Software/Firmware

Problem or Restriction

This problem occurs during a host boot process when a large number (112 or more) of volumes are mapped to the host. At the point in the boot process where the network interface should be brought online, the host displays the following message:

Bringing up interface eth0: Device eth0 has different MAC address than expected. [FAILED]

The network interface does not come online during the boot process, and cannot subsequently be brought online.

Workaround

To avoid this problem, reduce the number of volumes mapped to a host with the affected version of Oracle Linux. You can map additional volumes to the host after it boots.

When Over 128 Volumes are Mapped to a Host, Paths to Only the First 128 Volumes are Restored after the Controller is Reset

Operating System

Hardware/Software/Firmware

Problem or Restriction

This problem occurs when you have more than 128 volumes mapped to a host, both controllers reboot, and only one controller comes back online. Only the first 128 volumes mapped to the host are accessible to the host for input/output (I/O) operations after the reboot. During the controller reboot, there might be a delay before any of the volumes are accessible to the host. I/O timeouts occur when the host tries to communicate with the inaccessible volumes.

Workaround

You can avoid this problem by mapping no more than 128 volumes to a host with the affected operating system release. If the problem occurs, run the multipath command again after the controller comes back online.
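After rerunning the multipath command, the restored paths can be counted from the map listing. A hedged sketch, assuming the Linux device-mapper multipath tools (multipath -r reloads the maps, multipath -ll lists them); captured sample output stands in for a live run here, and the device names and WWID are illustrative:

```shell
#!/bin/sh
# Sketch: after "multipath -r" rescans the maps, count restored paths in the
# "multipath -ll" listing. Sample output stands in for the real command below.
sample_output='mpathaa (360080e500017b8a2000004a34f2a9e1b) dm-2
size=100G features=0 hwhandler=0
  \_ 0:0:0:1 sdb 8:16  active ready running
  \_ 1:0:0:1 sdc 8:32  active ready running'

# Each healthy path reports "active ready" in the listing.
active_paths=$(printf '%s\n' "$sample_output" | grep -c 'active ready')
echo "active paths: $active_paths"
```

In practice, pipe the real listing instead: multipath -ll | grep -c 'active ready', and compare the count against the expected number of paths per volume.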

Task Aborts Are Logged During a Controller Firmware Upgrade

Operating System

Hardware/Software/Firmware

Problem or Restriction

This problem occurs during a controller firmware upgrade. The operating system logs task abort messages similar to those shown below.

May 3 21:30:51 ictc-eats kernel: [118114.764601] sd 0:0:101:3: task abort: SUCCESS scmd(ffff88012383c6c0)
May 3 21:30:51 ictc-eats kernel: [118114.764606] sd 0:0:101:1: attempting task abort! scmd(ffff88022705c0c0)
May 3 21:30:51 ictc-eats kernel: [118114.764609] sd 0:0:101:1: CDB: Test Unit Ready: 00 00 00 00 00 00
May 3 21:30:51 ictc-eats kernel: [118114.764617] scsi target0:0:101: handle(0x000c), sas_address(0x50080e51b0bae000), phy(4)
May 3 21:30:51 ictc-eats kernel: [118114.764620] scsi target0:0:101: enclosure_logical_id(0x500062b10000a8ff), slot(4)
May 3 21:30:51 ictc-eats kernel: [118114.767084] sd 0:0:101:1: task abort: SUCCESS scmd(ffff88022705c0c0)

You might experience input/output (I/O) timeouts or read/write errors after the upgrade.

Workaround

If this problem occurs, restart input/output (I/O) operations. The affected resources will come back online without further intervention.

Unable to Add More Than 117 Volumes to the Oracle Virtual Machine (OVM) Manager Database

Operating System

Hardware/Software/Firmware

Problem or Restriction

This problem occurs when you attempt to add more than 117 volumes to the database of the OVM manager. When the OVM manager scans for the additional volumes, it returns the following error:

OSCPlugin.OperationFailedEx:'Unable to query ocfs2 devices'

Workaround

You can avoid this problem by deleting volumes from the OVM manager database when those volumes are no longer mapped to the OVM server.

Write-Back Cache is Disabled after Controllers Reboot with Multiple Failed Volumes in a Storage Array

Operating System

Hardware/Software/Firmware

Problem or Restriction

This problem occurs when power is turned off and then back on to a controller-drive tray while there are failed volumes in the storage array. When the controllers reboot after the power cycle, they attempt to flush restored cache data to disk. If the controllers are unable to flush the cache data because of failed volumes, all of the volumes in the storage array remain in write-through mode after the controllers reboot. This substantially reduces performance for input/output (I/O) operations.

Workaround

None.

During Multiple Node Failover/Failback Events, Input/Output (I/O) Operations Time Out Because a Resource is Not Available to a Cluster

Operating System

Hardware/Software/Firmware

Problem or Restriction

This problem occurs when a cluster loses access to a file system resource. A message similar to the following appears in the cluster log:

Device /dev/mapper/mpathaa not found. Will retry wait to see if it appears.
The device node /dev/mapper/mpathaa was not found or did not appear in the udev create time limit of 60 seconds
Fri Apr 27 18:45:08 CDT 2012 restore: END restore of file system /home/smashmnt11 (err=1)
ERROR: restore action failed for resource /home/smashmnt11
/opt/LifeKeeper/bin/lcdmachfail: restore in parallel of resource "dmmp19021" has failed; will re-try serially
END vertical parallel recovery with return code -1

You might experience I/O timeouts.

Workaround

If this problem occurs, restart I/O operations on the storage array.

After an NVSRAM Download, a Controller Reboots a Second Time when the NVSRAM is Activated

Operating System

Hardware/Software/Firmware

Problem or Restriction

This problem occurs when a controller detects corruption in the signature of the NVSRAM loaded on the controller. The controller restores the NVSRAM from the physical drive, and then reboots.

Workaround

The controller recovers and continues normal operations.

When a Controller is Not Set Offline Before Being Replaced, an Exception Occurs when the Replacement Controller is Brought Online

Operating System

Hardware/Software/Firmware

Problem or Restriction

This problem occurs when you fail to follow standard procedures when replacing a controller. If you do not set a controller offline before you replace it, and the replacement controller has a different firmware level from the remaining controller, the firmware mismatch is not properly detected.

Workaround

You can avoid this problem by following the standard procedure for replacing a controller. If this problem occurs, the replacement controller reboots after the exception and the storage array returns to normal operations.

Input/Output (I/O) Errors Occur when Disconnection of Devices from a SAS Switch Is Not Detected

Operating System

Hardware/Software/Firmware

Problem or Restriction

This problem occurs when there is a heavy load of I/O operations between hosts and storage arrays that are connected through a SAS switch. The switch fails to notify the host when a volume is no longer available. A host experiences I/O errors or application timeouts.

Workaround

To avoid this problem, reduce some or all of the following factors:

A Path Failure and Premature Failover Occur when a Cable is Disconnected between a Host and a Controller

Operating System

Hardware/Software/Firmware

Problem or Restriction

This problem occurs when you disconnect a SAS cable between a controller and a host. Even if you reconnect the cable before the normal failover timeout, the path fails and the controller fails over to the alternate.

Workaround

If this problem occurs, reconnect the cable. The path will be restored.

Input/Output (I/O) Errors Occur when a Cable is Disconnected between a Host and a Controller, and the Alternate Controller is Unavailable

Operating System

Hardware/Software/Firmware

Problem or Restriction

This problem occurs when the maximum number of volumes (256) is mapped to a host. If you disconnect the cable between a controller and a host, and then reconnect the cable, I/O errors occur if the alternate controller becomes unavailable before the host can rediscover all of the volumes on the connection.

Workaround

After some delay, the host will rediscover all of the volumes and normal operations will resume.

With 3 Gb/s SAS Host Bus Adapters (HBAs) and Heavy Input/Output (I/O), I/O Timeouts Occur During a Controller Firmware Upgrade

Operating System

Hardware/Software/Firmware

Problem or Restriction

This problem occurs when you upgrade controller firmware during a heavy load of I/O operations. The host experiences I/O timeouts during firmware activation.

Workaround

Do not perform an online controller firmware upgrade while the system is under heavy I/O load. If this problem occurs, restart I/O operations on the host.

Host Operating System Logs "Hung Task" During a Path Failure

Operating System

Hardware/Software/Firmware

Problem or Restriction

This problem occurs when there is a path failure through a host connection. The operating system logs a "Hung Task" message in /var/log/messages before the MPP driver marks the path failed and fails over to the alternate path.

Workaround

The logging of this message does not affect normal operation. You can disable the log message by entering the following command on the host command line:

echo 0 > /proc/sys/kernel/hung_task_timeout_secs
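This change does not survive a reboot. To make it persistent, the same kernel parameter can be set through sysctl configuration; a sketch, assuming a standard /etc/sysctl.conf on the affected host:

```
# /etc/sysctl.conf - suppress hung-task warnings (0 disables the timeout)
kernel.hung_task_timeout_secs = 0
```

Run sysctl -p to apply the setting without a reboot.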

Backup Failure or I/O Errors with Snapshot Creation or Mounting Failure During Backup of Cluster Shared Volumes (CSV)

Operating System

Problem or Restriction

This problem occurs when a backup operation of CSVs begins. The backup application communicates with the VSS provider and initiates the backup operation, but creation or mounting of a snapshot volume fails. The backup application then tries to back up the CSVs themselves instead of a snapshot of the CSVs. If the Retry option is set with lock, the application hosted on the CSVs, or data written to or read from these volumes, might report an error. If the Retry option is set without lock, the backup skips files. The error occurs because the backup application and the application hosted on the CSVs (or the data being written to or read from the CSVs) each try to "lock" the volume or file, which results in a conflict.

Users encounter this issue whenever there is a resource conflict between the backup operation and the application trying to perform write or read operations to the volume undergoing a backup operation.

Depending on the option chosen, the backup operation reports one of these conditions:

Workaround

Run the backup operation at a time when the application is not doing write or read intensive work on the CSV undergoing backup.

Also, when you use the "Without Lock" option, skipped files can be captured in a subsequent backup operation. For more information, see http://www.symantec.com/docs/TECH195868

With Multiple SAS Hosts Using Single-PHY, a Host Cable Pull During Input/Output (I/O) Operations Causes a Controller Reboot

Operating System

Hardware/Software/Firmware

Problem or Restriction

This problem rarely occurs when multiple hosts are connected by a quadfurcated cable to a single wide port on the controller. If the cable is disconnected, the controller reboots.

Workaround

The controller reboots and returns to normal operations when the cable is reconnected.

Data is Misread when a Physical Drive Has an Unreadable Sector

Operating System

Hardware/Software/Firmware

Problem or Restriction

This problem occurs when issuing a read to a location where the length of the read includes an unreadable sector. The host operating system assumes that data up to the unreadable sector was read correctly, but this might not be the case. A bug has been opened with Red Hat: http://bugzilla.redhat.com/show_bug.cgi?id=845135

Workaround

Replace any drives that have media errors.
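Drives with media errors can often be spotted through the SMART attributes that count bad sectors. A hedged sketch, assuming smartmontools is available; captured sample output stands in for a live smartctl -A /dev/sdX run, and the attribute values are illustrative, not from a real drive:

```shell
#!/bin/sh
# Sketch: flag media-error SMART attributes that have a nonzero raw value.
# In practice, pipe real output: smartctl -A /dev/sdX | awk '...'
sample_output='  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       12
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       3
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0'

# Attribute name is field 2; the raw value is the last field on each row.
printf '%s\n' "$sample_output" | awk '$2 ~ /Sector|Uncorrectable/ && $NF+0 > 0 {print $2 " = " $NF}'
```

A nonzero reallocated or pending sector count is a reasonable signal that the drive should be replaced under this workaround.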

Solaris 10 Guest in Fault Tolerant Mode Is Unable to Relocate Secondary Virtual Machine (VM) Upon Host Failure

Operating System

Hardware/Software/Firmware

Problem or Restriction

This problem occurs when a host fails while it is running a secondary VM for a Solaris 10 (u10) guest. The event log for that VM contains the following message:

No compatible host for the Fault Tolerant secondary VM

When this problem occurs, the secondary VM for the guest is stuck in an Unknown status, and Fault Tolerance cannot be re-enabled for this VM. An attempt to disable and then re-enable Fault Tolerance fails because the secondary VM cannot be relocated from a host that is not responding. For the same reason, Fault Tolerance cannot be completely turned off on the VM.

The main problem is that the HA service reports that there are not enough resources available to restart the secondary VM. However, even after resource usage in the cluster is reduced to the point where resources are abundant, the HA service still reports insufficient resources and, therefore, no available host in the cluster on which to run the secondary VM. After the VM fails completely, however, it can be restarted and put into Fault Tolerance mode again.

The shutdown of the VM always happens if a Fault Tolerance-enabled VM is running unprotected without a linked secondary VM and the host on which the primary VM is running fails for any reason. The failure of the secondary VM in a node-failure scenario for Solaris 10 guests can be reproduced regularly.

When a node failure happens, the customer sees that Solaris 10 guests can have issues restoring a secondary VM for Fault Tolerance enabled VMs. This is seen by reviewing the vSphere client in the cluster VM view as well as in the event log for the VM.

Workaround

In most cases, the customer can correct the problem by performing one of the following actions in the order shown. Perform one action and if that does not work, proceed to the next until the problem is resolved.

  1. Disable and re-enable fault tolerance on the affected VM.

  2. Turn off fault tolerance for the VM altogether and turn it back on.

  3. Attempt to live vMotion the VM and try action 1 and action 2 again.

It is possible that either the host CPU model is not compatible with turning Fault Tolerance off and on for running VMs, or that, even after performing the previous action, a secondary VM still does not start. If the secondary VM does not start, the customer needs to briefly shut down the affected VM, perform action 2, and then restart the VM.

Documentation Bugs

Hardware Installation Guide

Page 38 of the Sun Storage 2500-M2 Arrays Hardware Installation Guide mistakenly refers to AIX and HP-UX as supported data host platforms. Disregard HP-UX and AIX referenced in the following note:

"The data host multipathing software for Red Hat Linux, HP-UX, AIX, and Windows platforms is Sun Redundant Dual Array Controller (RDAC), also known as MPP."