Sun Storage 6580 and 6780 Array Hardware Release Notes
This document contains important release information about the Sun Storage 6580 and 6780 arrays running Sun Storage Common Array Manager (CAM), Version 6.8.x. Read about issues or requirements that can affect the installation and operation of the arrays.
The release notes consist of the following sections:
Array controller firmware version 7.77.xx.xx remains the same as delivered with CAM 6.8.0, and provides the following updates for Sun Storage 6580 and Sun Storage 6780 arrays:
For information about Sun Storage Common Array Manager enhancements and bug fixes for this release, see the Sun Storage Common Array Manager Software Release Notes.
To download Sun Storage Common Array Manager, as well as server patches pertaining to the Sun Storage 6580 and 6780 arrays, follow this procedure.
1. Sign in to My Oracle Support:
2. At the top of the page, click the Patches & Updates tab.
3. Search for CAM software and patches in one of two ways:
a. Under the Patch Search section, click the Search tab.
b. In the Patch Name or Number field, enter the patch number. For example, 10272123 or 141474-01.
a. Under the Patch Search section, click the Search tab, and then click the Product or Family (Advanced Search) link.
b. Check Include all products in a family.
c. In the Product field, start typing the product name. For example, “Sun Storage Common Array Manager (CAM)” or “Sun Storage 6580 array.”
d. Select the product name when it appears.
e. In the Release field, expand the product name, check the release and patches you want to download, and then click Close.
4. Select the patch you want to download.
5. Click ReadMe for a patch description and installation instructions.
6. Click Download for a single patch, or Add to Plan to download a group of patches.
Sun Storage 6580 array and Sun Storage 6780 array disk drives can now be replaced by customers. Previously designated as field replaceable units (FRUs), disk drives are now customer-replaceable units (CRUs).
When inserting a replacement disk drive, be sure the replacement drive is "unassigned," that is, not assigned to a virtual disk. All data on the replacement drive is erased before the controller reconstructs the data onto it.
Sun Storage 6580 and 6780 arrays use smart battery technology, in which the battery maintains and reports its own status, providing more accurate battery status reporting. When a battery can no longer hold a charge, it is flagged for replacement, rather than relying on a battery expiration report from the array firmware.
The Sun Storage 6580 and 6780 array models are compared in TABLE 1.
[TABLE 1, which compares the Sun Storage 6580 and 6780 array models, is not fully reproduced here; among the compared specifications is an IOPS[1] value of 115K.]
Note - Upgrading from a 61x0 array to a Sun Storage 6580 or 6780 array is a data-in-place migration.
The software and hardware products that have been tested and qualified to work with the Sun Storage 6580 and 6780 arrays are described in the following sections.
The firmware version for the Sun Storage 6580 and 6780 array features described in these release notes is 07.77.xx.xx. This firmware version (or higher) is installed on the array controllers prior to shipment and is also delivered with the latest version of Sun Storage Common Array Manager (CAM).
To update controller firmware on an existing array:
1. Download the software as described in Downloading Patches and Updates.
2. Log into Sun Storage Common Array Manager.
3. Select the check box to the left of the array you want to update.
4. Click Install Firmware Baseline.
5. Follow the wizard instructions.
TABLE 2 lists the size, spindle speed, type, interface speed, and tray capacity for the supported Fibre Channel (FC), Serial Advanced Technology Attachment (SATA), and Serial Attached SCSI (SAS) disk drives for Sun Storage 6580 and 6780 arrays. Additional legacy drives might also be supported with this product.
Note - For special requirements concerning FC Solid State Disks (SSDs), see Solid State Disk Requirements.
Solid State Drives (SSDs) have the following installation requirements:
TABLE 3 lists supported expansion modules. To add capacity to a Sun Storage 6580 or 6780 array, refer to the following Service Advisor procedures:
Caution - To add trays with existing stored data, contact Oracle Support for assistance to avoid data loss.
For additional baseline firmware information, such as controller, NVSRAM, disk drive, version, and firmware file, see Sun Storage Array Baseline Firmware Reference.
This section describes supported data host software, HBAs, and switches.
TABLE 4 provides a summary of the data host requirements for the Sun Storage 6580 and 6780 arrays. It lists the current multipathing software and supported host bus adapters (HBAs) by operating system.
You must install multipathing software on each data host that communicates with Sun Storage 6580 and 6780 arrays.
Note - Single path data connections are not recommended. For more information, see Single Path Data Connections.
TABLE 4 lists supported multipathing software by operating system.
[TABLE 4, which lists supported multipathing software by operating system, is not fully reproduced here. Recoverable entries include Solaris 10[2], Oracle VM 2.2.2 (RDAC version 09.03.0C02.0331 is included with Oracle VM 2.2.2), and Oracle Linux 6.0, 5.6, and 5.5[3].]
Note - Download the multipathing drivers from My Oracle Support at https://support.oracle.com. Search for the driver using one of the keywords "MPIO," "RDAC," or "MPP." See Downloading Patches and Updates.
Note - The multipathing driver for the IBM AIX platform is Veritas DMP, bundled in Veritas Storage Foundation 5.0 for Sun Storage 6580 and 6780 arrays. Download the Array Support Library (ASL) from http://support.veritas.com/.
TABLE 5, TABLE 6, and TABLE 7 list supported HBAs and other data host platform elements by operating system.
To obtain the latest HBA firmware:
Download operating system updates from the operating system vendor's web site.
Note - Always install the multipathing software before you install any OS patches.
[TABLE 5, TABLE 6, and TABLE 7, which list supported HBAs[5][6][7] and other data host platform elements by operating system, are not fully reproduced here. Recoverable entries include Solaris 10 SPARC[4] and Microsoft Windows Server 2008 R2 (64-bit only) on AMD x86 and EM64T.]
The following FC fabric and multilayer switches are compatible for connecting data hosts and Sun Storage 6580 and 6780 arrays:
The Sun Storage 6180 array supports the Tier 1 licensable features. Tier 1 classified arrays include the StorageTek 6140 and Sun Storage 6180 arrays.
Available licenses for the Sun Storage 6180:
The Sun Storage 6580 and 6780 arrays support the following Tier 2 licensable features. Tier 2 classified arrays include the StorageTek 6540, Sun Storage 6580, and Sun Storage 6780 arrays.
Available licenses for the Sun Storage 6580 and 6780 arrays:
Device Mapper (DM) is a generic framework for block devices provided by the Linux operating system. It supports concatenation, striping, snapshots, mirroring, and multipathing. The multipath function is provided by the combination of the kernel modules and user space tools.
The DMMP is supported on SUSE Linux Enterprise Server (SLES) Version 11 and 11.1. The SLES installation must have components at or above the version levels shown in the following table before you install the DMMP.
To update a component, download the appropriate package from the Novell website at http://download.novell.com/patch/finder. The Novell publication, SUSE Linux Enterprise Server 11 Installation and Administration Guide, describes how to install and upgrade the operating system.
1. Use the media supplied by your operating system vendor to install SLES 11.
2. Install the errata kernel 2.6.27.29-0.1.
Refer to the SUSE Linux Enterprise Server 11 Installation and Administration Guide for the installation procedure.
3. Reboot your system to boot the 2.6.27.29-0.1 kernel.
4. On the command line, enter rpm -qa |grep device-mapper, and check the system output to see if the correct level of the device mapper component is installed.
5. On the command line, enter rpm -qa |grep multipath-tools and check the system output to see if the correct level of the multipath tools is installed.
6. Update the configuration file /etc/multipath.conf.
See Setting Up the multipath.conf File for detailed information about the /etc/multipath.conf file.
7. On the command line, enter chkconfig multipathd on.
This command enables multipathd daemon when the system boots.
8. Edit the /etc/sysconfig/kernel file to add the directive scsi_dh_rdac to the INITRD_MODULES section of the file.
9. Download the KMP package for scsi_dh_rdac for the SLES 11 architecture from the website http://forgeftp.novell.com/driver-process/staging/pub/update/lsi/sle11/common/, and install the package on the host.
10. Update the boot loader to point to the new initrd image, and reboot the host with the new initrd image. (A combined command-line sketch of these steps follows this procedure.)
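The following is a minimal command-line sketch of the verification and configuration steps above (steps 4, 5, 7, 8, 9, and 10), assuming a SLES 11 host with the errata kernel already installed. The module names, KMP package file name, and kernel versions shown are illustrative assumptions, not values taken from these release notes.

# Steps 4 and 5: verify the device-mapper and multipath-tools package levels
rpm -qa | grep device-mapper
rpm -qa | grep multipath-tools

# Step 7: start the multipath daemon at boot time
chkconfig multipathd on

# Step 8: add scsi_dh_rdac to the INITRD_MODULES line of /etc/sysconfig/kernel,
# for example (the other module names are illustrative):
#   INITRD_MODULES="ata_piix mptsas scsi_dh_rdac"

# Step 9: install the downloaded scsi_dh_rdac KMP package (file name is illustrative)
rpm -ivh lsi-scsi_dh_rdac-kmp-default-<version>.rpm

# Step 10: rebuild the initrd, update the boot loader if needed, and reboot
mkinitrd
reboot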
The multipath.conf file is the configuration file for the multipath daemon, multipathd. The multipath.conf file overwrites the built-in configuration table for multipathd. Any line in the file whose first non-white-space character is # is considered a comment line. Empty lines are ignored.
All of the components required for DMMP are included in the SUSE Linux Enterprise Server (SLES) version 11.1 installation media. However, you might need to select the specific components based on the storage hardware type. By default, DMMP is disabled in SLES. Follow these steps to enable DMMP components on the host.
1. On the command line, type chkconfig multipath on.
The multipathd daemon is enabled when the system starts again.
2. Edit the /etc/sysconfig/kernel file to add the directive scsi_dh_rdac to the INITRD_MODULES section of the file.
3. Create a new initrd image using the following command to include scsi_dh_rdac into ram disk:
mkinitrd -i /boot/initrd-`uname -r`-rdac -k /boot/vmlinuz-`uname -r`
4. Update the boot loader to point to the new initrd image, and reboot the host with the new initrd image. (A sketch of the boot loader entry follows this procedure.)
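As an illustration of step 4, the rebuilt initrd can be referenced from the boot configuration. The sketch below assumes a GRUB-based SLES 11.1 host; the kernel version and file names are illustrative only, not values from these release notes.

# Example /boot/grub/menu.lst entry pointing at the rebuilt initrd
title SLES 11.1 with scsi_dh_rdac
    root (hd0,0)
    kernel /boot/vmlinuz-2.6.32.12-0.7-default root=/dev/sda2
    initrd /boot/initrd-2.6.32.12-0.7-default-rdac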
Copy and rename the sample file located at /usr/share/doc/packages/multipath-tools/multipath.conf.synthetic to /etc/multipath.conf. Configuration changes are then made by editing the new /etc/multipath.conf file. All entries for multipath devices are commented out initially. The configuration file is divided into five sections: defaults, blacklist, blacklist_exceptions, multipaths, and devices.
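For example, the copy-and-rename step can be done as follows, using the sample file path given above:

cp /usr/share/doc/packages/multipath-tools/multipath.conf.synthetic /etc/multipath.conf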
To determine the attributes of a multipath device, check the multipaths section of the /etc/multipath.conf file, then the devices section, then the defaults section. The model settings used for multipath devices are listed for each storage array and include matching vendor and product values. Add matching storage vendor and product values for each type of volume used in your storage array.
For each UTM LUN mapped to the host, include an entry in the blacklist section of the /etc/multipath.conf file. The entries should follow the pattern of the following example.
blacklist {
    device {
        vendor  "*"
        product "Universal Xport"
    }
}
The following example shows the devices section for LSI storage from the sample /etc/multipath.conf file. Update the vendor ID, which is LSI in the sample file, and the product ID, which is INF-01-00 in the sample file, to match the equipment in the storage array.
devices {
    device {
        vendor                "LSI"
        product               "INF-01-00"
        path_grouping_policy  group_by_prio
        prio                  rdac
        getuid_callout        "/lib/udev/scsi_id -g -u -d /dev/%n"
        polling_interval      5
        path_checker          rdac
        path_selector         "round-robin 0"
        hardware_handler      "1 rdac"
        failback              immediate
        features              "2 pg_init_retries 50"
        no_path_retry         30
        rr_min_io             100
    }
}
The following table explains the attributes and values in the devices section of the /etc/multipath.conf file.
Multipath devices are created under the /dev/ directory with the prefix dm-. These devices are the same as any other block devices on the host. To list all of the multipath devices, run the multipath -ll command. The following example shows system output from the multipath -ll command for one of the multipath devices.
mpathp (3600a0b80005ab177000017544a8d6b92) dm-0 LSI,INF-01-00
[size=5.0G][features=3 queue_if_no_path pg_init_retries 50][hwhandler=1 rdac][rw]
\_ round-robin 0 [prio=6][active]
 \_ 5:0:0:0 sdc 8:32 [active][ready]
\_ round-robin 0 [prio=1][enabled]
 \_ 4:0:0:0 sdb 8:16 [active][ghost]
In this example, the multipath device node for this device is /dev/mapper/mpathp and /dev/dm-0. The following table lists some basic options and parameters for the multipath command.
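For reference, a few commonly used invocations of the multipath command are shown below; this is a general illustration of the standard device-mapper-multipath tool, not a reproduction of the table.

multipath -ll        # show the current multipath topology and path status
multipath -v2        # rescan for paths and create multipath maps, with verbose output
multipath -f mpathp  # flush (remove) the named multipath device map
multipath -F         # flush all unused multipath device maps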
The following sections provide information about restrictions, known issues, and bugs filed against this product release:
If a recommended workaround is available for a bug, it follows the bug description.
This section describes known issues and bugs related to installing and initially configuring Sun Storage 6580 and 6780 arrays, as well as general issues related to the array hardware and firmware.
In a single path data connection, a group of heterogeneous servers is connected to an array through a single connection. Although this connection is technically possible, there is no redundancy, and a connection failure will result in loss of access to the array.
Caution - Because of the single point of failure, single path data connections are not recommended.
When setting the tray link rate for an expansion tray, all expansion trays connected to the same drive channel must be set to operate at the same data transfer rate (speed).
For details about how to set the tray link rate, see “Setting the Tray Link Rate” in the Hardware Installation Guide for Sun Storage 6580 and 6780 Arrays.
CR 6783749--When upgrading a StorageTek 6540 array to a Sun Storage 6580 or 6780 Array, you cannot change the tray ID 85 to tray ID 99 using CAM.
Workaround: You can use controller tray ID 85 for array configurations up to a maximum of 256 drives.
CR 6362850--The cfgadm -c unconfigure command unconfigures only Universal Transport Mechanism (UTM) LUNs, not other data LUNs. As a result, you cannot unconfigure the data LUNs.
Workaround: Obtain Solaris 10 patch 118833-20 (SPARC) or patch 118855-16 (x86) to fix this issue.
CR 6760395--CAM logEvent messages intermittently report power supply failures that change to optimal 12 seconds later. This is caused by devices not responding to polling.
Workaround: No workaround required. You can ignore the failure messages.
See Appendix C, Troubleshooting and Operational Procedures, in the Hardware Installation Guide for Sun Storage 6580 and 6780 Arrays for a description of the controller tray and expansion tray diagnostic codes.
Note - This problem does not occur in RHEL version 6.0 with kernel 2.6.33.
Problem or Restriction: An I/O error occurs during an online controller firmware upgrade.
Workaround: To avoid this problem, quiesce the host I/O before performing controller firmware upgrades. To recover from this problem, make sure that the host reports that it has optimal paths available to the storage array controllers, and then resume I/O.
CR 6872995, 6949589--Both RAID controllers reboot after 828.5 days of continuous operation. A timer in the firmware (vxWorks) called “vxAbsTicks” is a 32-bit (double word) integer that keeps count in the 0x0000 0000 format. When this timer rolls over from 0xffffffff to 0x00000000 (after approximately 828.5 days), if there is host I/O to volumes, the associated drives fail with a write failure.
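For reference, the 828.5-day figure is consistent with a 32-bit tick counter running at the common vxWorks system clock rate of 60 ticks per second (the tick rate is an assumption, not stated in these notes): 2^32 ticks / 60 ticks per second = 71,582,788 seconds, which is approximately 828.5 days.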
Original Resolution: Every 24 hours, firmware spawns a task--cfgMonitorTask--that checks the value of the vxworks kernel timing counter. For controllers with 03.xx-06.60 firmware (6000 series) and 03.xx-6.70 firmware (2500 series): Both controllers reboot if counter is greater than 825 days.
Final Resolution: Every 24 hours, firmware spawns a task--cfgMonitorTask--that checks the value of the vxworks kernel timing counter.
This fix staggers the reboots of the controllers for approximately five days so the only impact is a small performance degradation while the reboot occurs.
For controllers with firmware 07.15.11.12 or later (6000 series) and firmware 07.35.10.10 or later (2500 series): Controller A reboots if counter is greater than 820 days. Controller B reboots if counter is greater than 825 days.
Note - There is no redundancy for failover in a simplex 2500 configuration or any duplex configuration where a controller is already offline for any reason.
Problem or Restriction: CR 7042297--Before running a "make" on the RDAC driver, the following kernel packages are required:
CR 7014293--When a SLES 11.1 host with smartd monitoring enabled is mapped to volumes on either a Sun Storage 2500-M2 or Sun Storage 6780 array, it is possible to receive “IO FAILURE” and “Illegal Request ASC/ASCQ” log events.
Workaround: Either disable smartd monitoring or disregard the messages. This is an issue with the host OS.
CR 7038184, 7028670, 7028672: When booting an Oracle Linux 6.0 host mapped to volumes on Sun Storage 2500-M2 and Sun Storage 6780 arrays, it is possible to receive one of these messages:
FIXME driver has no support for subenclosures (1)
FIXME driver has no support for subenclosures (3)
Failed to bind enclosure -19
Workaround: This is a cosmetic issue with no impact to the I/O path. There is no workaround.
Operating System: SUSE Linux Enterprise Server (SLES) 11 SP1
Problem or Restriction: CR 7014293--Several IO FAILURE and Illegal Request log events with ASC/ASCQ SCSI errors appear in /var/log/messages while running vdbench on 25 LUNs.
An application client may request any one or all of the supported mode pages from the device server. If an application client issues a MODE SENSE command with a page code or subpage code value not implemented by the logical unit, the command shall be terminated with CHECK CONDITION status, with the sense key set to ILLEGAL REQUEST, and the additional sense code set to INVALID FIELD IN CDB.
The controller responds correctly (05h/24h/00h - INVALID FIELD IN CDB). The smartctl tool may need to request all supported mode pages first before sending an unsupported mode page request.
Workaround: Disable SLES11 smartd monitoring service to stop these messages.
System Services (Runlevel) > smartd Disable
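Alternatively, smartd can be disabled on the command line. This is a sketch assuming the standard SLES init scripts, not a procedure taken from these release notes:

chkconfig smartd off     # do not start smartd at boot
/etc/init.d/smartd stop  # stop the currently running smartd service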
Problem or Restriction: This problem occurs when the DMMP failover driver is used with the RHEL version 6.0 OS. If you try to set up a Red Hat cluster with the DMMP failover driver, cluster startup might fail during the unfencing stage, where each host registers itself with the SCSI devices. The devices are in a Unit Attention state, which causes the SCSI registration command issued by the host during startup to fail. When the cluster manager (cman) service starts, the logs show that the nodes failed to unfence themselves, which causes the cluster startup to fail.
Workaround: To avoid this problem, do not use the DMMP failover driver with RHEL version 6.0. To recover from this problem, open a terminal window, and run:
where <device> is a SCSI device that is virtualized by the DMMP failover driver. Run this command on every /dev/sd device that the DMMP failover driver manages. It issues a Test Unit Ready command to clear the Unit Attention state and allow node registration on the device to succeed.
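As a hypothetical illustration only (the exact command is not shown above), a Test Unit Ready can be issued with the sg_turs utility from the sg3_utils package:

sg_turs /dev/sdc   # example device; issues a Test Unit Ready to clear the Unit Attention state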
Operating System: Red Hat Enterprise Linux 6 with Native Cluster
Problem or Restriction: This problem occurs the first time a cluster is set up when the cluster.conf file does not have manually defined host keys. When the cluster.conf file is first defined to set up a cluster with SCSI reservation fencing, the cluster services are started on the nodes. With SCSI reservation fencing, the hosts try to generate and register a key on the clustered devices as part of the cluster manager's startup. The cluster manager service (cman) fails to start, and a "key cannot be zero" error message appears in the host log.
Workaround: To avoid this problem, use only power fencing. Do not use SCSI reservation fencing. To recover from this problem, change to manually defined host keys, and restart the cluster services.
Operating System: Red Hat Enterprise Linux 6 Native Cluster
Problem or Restriction: This problem occurs during an attempt to transfer a cluster service manually when a client is connected using NFSv4. The Global File System (GFS) 2 mount points failed to unmount, which caused the Red Hat Cluster Suite Services to go to the Failed state. The mount point, and all other mount points exported from the same virtual IP address, becomes inaccessible.
Workaround: To avoid this problem, configure the cluster nodes to not allow mount requests from NFS version 4 (NFSv4) clients. To recover from this problem, restart the failed service on the node that previously owned it.
Operating System: Red Hat Enterprise Linux version 6.0
Problem or Restriction: This problem occurs during an online controller firmware upgrade. The controller is not responding quickly enough to a host read or write to satisfy the host. After 30 seconds, the host sends a command to abort the I/O. The I/O aborts, and then starts again successfully.
Workaround: Quiesce the host I/O before performing the controller firmware upgrade. To recover from this problem, either reset the server, or wait until the host returns an I/O error.
Operating System: Red Hat Enterprise Linux version 6.0 with kernel 2.6.32
Red Hat Bugzilla Number: 620391
Note - This problem does not occur in Red Hat Enterprise Linux version 6.0 with kernel 2.6.33.
Problem or Restriction: This problem occurs under situations of heavy stress when storage arrays take longer than expected to return the status of a read or write. The storage array must be sufficiently stressed that the controller response is more than 30 seconds, at which time a command is issued to abort if no response is received. The abort will be retried indefinitely even when the abort is successful. The application either times out or hangs indefinitely on the read or write that is being aborted. The messages file reports the aborts, and resets might occur on the LUN, the host, or the bus.
Factors affecting controller response include Remote Volume Mirroring, the controller state, the number of attached hosts, and the total throughput.
Workaround: To recover from this problem, reset the power on the server.
Problem or Restriction: When a Red Hat Enterprise Linux 5.1 host has more than two new volumes mapped to it, it hangs during reboot.
Workaround: Try one of the following options:
Problem or Restriction: An I/O timeout error occurs after you enable a switch port. This problem occurs when two or more Brocade switches are used, and both the active and the alternative paths from the host are located on one switch, and both the active path and the alternative path from the storage array are located on another switch. For the host to detect the storage array on the other switch, the switches are cascaded, and a shared zone is defined between the switches. This problem occurs on fabrics managing high I/O traffic.
Workaround: Reconfigure the switch zoning to avoid the need for cascading. Limit the zones within each switch, and do not create zones across the switches. Configure the active paths from the host and the storage array on one switch, and all of the alternative paths from the host and the storage array on the other switch.
Problem or Restriction: Red Hat Enterprise Linux 5.2 PowerPC (PPC) only. On rare occasions, the host hangs during reboot.
Problem or Restriction: Linux Red Hat 5 and Linux SLES 10 SP1 only. After a controller failover in an open SAN environment, a controller comes back online, but the path is not rediscovered by the multi-path proxy (MPP). After a controller comes online in a fabric connection (through a SAN switch), it is possible that a link will not be established by the Emulex HBA driver. This behavior is seen only if the SAN switch is “default” zoned (all ports see all other ports). This condition can result in an I/O error if the other path is taken offline.
Workaround: Set all of the SAN switches to be “default” zoned.
Problem or Restriction: Linux SLES 10 SP2 only. I/O errors occur during a system reboot, and the host resets.
Problem or Restriction: Red Hat Enterprise Linux 4.7 only. When the controller is going through the start-of-day sequence, the drive channel does not achieve link speed detection and logs a Major Event Log (MEL) event. This event recovers within a few seconds, and a second MEL event occurs. The second MEL event indicates that the link speed detection was achieved.
This section describes issues related to Sun Storage 6580 and 6780 array documentation.
In Table 1-1 of the Hardware Installation Guide for Sun Storage 6580 and 6780 Arrays (820-5773-11), the value for “Total cache size” is incorrectly reported as “16 Gbytes or 32 Gbytes.” As of the CAM 6.6 release, the revised value is “8, 16, 32, or 64 Gbytes.” The revised value is documented in TABLE 1 of this release note document.
These web sites provide additional resources:
Copyright © 2011, Oracle and/or its affiliates. All rights reserved.