Sun Storage 6180 Array
This document contains important release information about Oracle’s Sun Storage 6180 array running Sun Storage Common Array Manager (CAM), Version 6.9.x. Read about issues or requirements that can affect the installation and operation of the array.
The release notes consist of the following sections:
Array controller firmware version 7.80.xx.xx provides Sun Storage Common Array Manager enhancements and bug fixes as described in the Sun Storage Common Array Manager Software Release Notes.
To download Sun Storage Common Array Manager, as well as server patches pertaining to the Sun Storage 6180 array, follow this procedure.
1. Sign in to My Oracle Support:
2. At the top of the page, click the Patches & Updates tab.
3. Search for CAM software and patches in one of two ways:
a. Under the Patch Search section, click the Search tab.
b. In the Patch Name or Number field, enter the patch number. For example, 10272123 or 141474-01.
a. Under the Patch Search section, click the Search tab, and then click the Product or Family (Advanced Search) link.
b. Check Include all products in a family.
c. In the Product field, start typing the product name. For example, “Sun Storage Common Array Manager (CAM)” or “Sun Storage 6180 array.”
d. Select the product name when it appears.
e. In the Release field, expand the product name, check the release and patches you want to download, and then click Close.
4. Select the patch you want to download.
5. Click ReadMe for a patch description and installation instructions.
6. Click Download for a single patch, or Add to Plan to download a group of patches.
Sun Storage 6180 arrays use smart battery technology, in which the battery maintains and reports its own status, providing more accurate battery status reporting. When a battery can no longer hold a charge, it is flagged for replacement, rather than being replaced on a fixed expiration date reported by the array firmware.
The Sun Storage 6180 array is a high-performance, enterprise-class, full 8 Gigabit per second (Gb/s) I/O Fibre Channel solution (with backend loop speeds of 2 or 4 Gb/s) that combines outstanding performance with the highest reliability, availability, flexibility, and manageability.
The Sun Storage 6180 array is modular, rackmountable, and scalable from a single dual-controller tray (1x1) configuration to a maximum configuration of 1x7 with six additional CSM200 expansion trays behind one controller tray.
The software and hardware products that have been tested and qualified to work with the Sun Storage 6180 array are described in the following sections.
The firmware version for Sun Storage 6180 array features described in this release note is version 07.80.xx.xx. This firmware version (or higher) is installed on the array controllers prior to shipment and is also delivered with the latest version of Sun Storage Common Array Manager (CAM).
To update controller firmware on an existing array:
1. Download the software as described in Downloading Patches and Updates.
2. Log into Sun Storage Common Array Manager.
3. Select the check box to the left of the array you want to update.
4. Click Install Firmware Baseline.
5. Follow the wizard instructions.
TABLE 1 lists the size, spindle speed, type, interface speed, and tray capacity for supported Fibre Channel (FC), Serial Advanced Technology Attachment (SATA), and Solid State Disk (SSD) disk drives for the Sun Storage 6180 array. Additional legacy drives might also be supported with this product.
The following list of supported disk drives replaces the listing in the Sun Storage 6180 Array Hardware Installation Guide.
The CSM200 is the only expansion tray supported by the Sun Storage 6180 array. To add capacity to a 6180 array, refer to the following Service Advisor procedures:
Caution - To add trays with existing stored data, contact Oracle Support for assistance to avoid data loss.
For additional baseline firmware information, such as controller, NVSRAM, disk drive, version, and firmware file, see Sun Storage Array Baseline Firmware Reference.
This section describes supported data host software, HBAs, and switches.
TABLE 3 provides a summary of the data host requirements for the Sun Storage 6180 array. It lists the current multipathing software and supported host bus adapters (HBAs) by operating system.
You must install multipathing software on each data host that communicates with the Sun Storage 6180 array.
Note - Single path data connections are not recommended. For more information, see Single Path Data Connections.
TABLE 3 lists supported multipathing software by operating system.
Solaris 10[1]
Update 6, or Update 5 with patch 140919-04 (SPARC) or 140920-04 (x64/x86)
RDAC version 09.03.0C02.0331 is included with Oracle VM 2.2.2
Not supported with CAM 6.9, firmware 7.80.xx.xx, but supported with CAM 6.8.1, firmware 7.77.xx.xx
Note - Download the multipathing drivers from My Oracle Support at https://support.oracle.com. Search for the appropriate driver using one of the keywords “MPIO,” “RDAC,” or “MPP.” See Downloading Patches and Updates.
Note - The multipathing driver for the IBM AIX platform is Veritas DMP, bundled in Veritas Storage Foundation 5.0 for the Sun Storage 6180 array. Download the Array Support Library (ASL) from http://support.veritas.com/.
TABLE 4, TABLE 5, and TABLE 6 list supported HBAs and other data host platform elements by operating system.
To obtain the latest HBA firmware:
Download operating system updates from the web site of your operating system vendor.
Note - You must install the multipathing software before you install any OS patches.
Minimum OS Patches[2]
HBAs[3]
Microsoft Windows Server 2008 R2 SP1 (64-bit only) / AMD x86 and EM64T
HBAs[4]
Oracle Linux 6.0, 5.6, 5.5; Oracle VM 2.2.2; RHEL 6.0, 5.6, 5.5
HBAs[5]
The following FC fabric and multilayer switches are compatible for connecting data hosts to the Sun Storage 6180 array:
The Sun Storage 6180 array supports the Tier 1 licensable features. Tier 1 classified arrays include the StorageTek 6140 and Sun Storage 6180 arrays.
Available licenses for the Sun Storage 6180:
The Sun Storage 6580 and 6780 arrays support the Tier 2 licensable features. Tier 2 classified arrays include the StorageTek 6540, Sun Storage 6580, and Sun Storage 6780 arrays.
Available licenses for the Sun Storage 6580 and 6780 arrays:
Device Mapper (DM) is a generic framework for block devices provided by the Linux operating system. It supports concatenation, striping, snapshots, mirroring, and multipathing. The multipath function is provided by the combination of the kernel modules and user space tools.
The DMMP is supported on SUSE Linux Enterprise Server (SLES) Version 11 and 11.1. The SLES installation must have components at or above the version levels shown in the following table before you install the DMMP.
To update a component, download the appropriate package from the Novell website at http://download.novell.com/patch/finder. The Novell publication, SUSE Linux Enterprise Server 11 Installation and Administration Guide, describes how to install and upgrade the operating system.
1. Use the media supplied by your operating system vendor to install SLES 11.
2. Install the errata kernel 2.6.27.29-0.1.
Refer to the SUSE Linux Enterprise Server 11 Installation and Administration Guide for the installation procedure.
3. Reboot your system to boot the 2.6.27.29-0.1 kernel.
4. On the command line, enter rpm -qa | grep device-mapper, and check the system output to see if the correct level of the device mapper component is installed.
5. On the command line, enter rpm -qa | grep multipath-tools, and check the system output to see if the correct level of the multipath tools is installed.
6. Update the configuration file /etc/multipath.conf.
See Setting Up the multipath.conf File for detailed information about the /etc/multipath.conf file.
7. On the command line, enter chkconfig multipathd on.
This command enables the multipathd daemon when the system boots.
8. Edit the /etc/sysconfig/kernel file to add directive scsi_dh_rdac to the INITRD_MODULES section of the file.
9. Download the KMP package for scsi_dh_rdac for the SLES 11 architecture from the website http://forgeftp.novell.com/driver-process/staging/pub/update/lsi/sle11/common/, and install the package on the host.
10. Update the boot loader to point to the new initrd image, and reboot the host with the new initrd image.
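The edit to /etc/sysconfig/kernel in step 8 can be scripted. The sketch below works on a temporary copy of the file with hypothetical sample contents (your INITRD_MODULES line will differ), and appends scsi_dh_rdac only if it is not already present:

```shell
# Work on a temporary copy of /etc/sysconfig/kernel; the sample contents
# below are hypothetical -- your file will differ.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
INITRD_MODULES="ata_piix mptsas"
DOMU_INITRD_MODULES="xennet xenblk"
EOF

# Append scsi_dh_rdac to the INITRD_MODULES list (step 8), unless it is
# already there.
grep -q '^INITRD_MODULES=".*scsi_dh_rdac' "$cfg" || \
  sed -i 's/^INITRD_MODULES="\([^"]*\)"/INITRD_MODULES="\1 scsi_dh_rdac"/' "$cfg"

grep '^INITRD_MODULES' "$cfg"
```

To apply the same change for real, point the script at /etc/sysconfig/kernel itself (after backing it up), then rebuild the initrd as described in the steps that follow.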
The multipath.conf file is the configuration file for the multipath daemon, multipathd. The multipath.conf file overwrites the built-in configuration table for multipathd. Any line in the file whose first non-white-space character is # is considered a comment line. Empty lines are ignored.
All of the components required for DMMP are included on the SUSE Linux Enterprise Server (SLES) version 11.1 installation media, but you might need to select specific components based on your storage hardware type. By default, DMMP is disabled in SLES. Complete the following steps to enable the DMMP components on the host.
Note - Make sure no LUNs are mapped to your host, or unplug the host cables, before performing this step; otherwise it takes a very long time to complete.
1. On the command line, type chkconfig multipathd on.
The multipathd daemon is enabled when the system starts again.
2. Edit the /etc/sysconfig/kernel file to add the directive scsi_dh_rdac to the INITRD_MODULES section of the file.
3. Run mkinitrd using one of the following commands, depending on your architecture:
mkinitrd -i /boot/initrd -k /boot/vmlinuz (x86/x86-64)
mkinitrd -i /boot/initrd -k /boot/vmlinux (PowerPC)
4. After creating the initial ram disk, make sure the initial ram disk size is set correctly in the /etc/yaboot.conf file. If it is not set correctly, the host might not boot. The initial ram disk size can be found by running:
ls -al /boot/<the initrd that you are using>
5. Run mkinitrd with the following command:
mkinitrd /boot/initrd-`uname -r`.img `uname -r` (no space between initrd- and `uname, but a space between uname and -r)
6. Run dracut to recompile the initramfs image. For example:
dracut --force /boot/initramfs-`uname -r`.img `uname -r`
7. Reboot the host.
8. After the reboot, run lsmod and check the output to make sure the proper kernel modules are loaded.
scsi_dh_rdac and dm_multipath should both show up in the output.
Copy and rename the sample file located at /usr/share/doc/packages/multipath-tools/multipath.conf.synthetic to /etc/multipath.conf. Configuration changes are now accomplished by editing the new /etc/multipath.conf file. All entries for multipath devices are commented out initially. The configuration file is divided into five sections: defaults, blacklist, blacklist_exceptions, multipaths, and devices.
To determine the attributes of a multipath device, check the multipaths section of the /etc/multipath.conf file, then the devices section, then the defaults section. The model settings used for multipath devices are listed for each storage array and include matching vendor and product values. Add matching storage vendor and product values for each type of volume used in your storage array.
For each UTM LUN mapped to the host, include an entry in the blacklist section of the /etc/multipath.conf file. The entries should follow the pattern of the following example.
blacklist {
    device {
        vendor "*"
        product "Universal Xport"
    }
}
Modify Vendor ID and Product ID
The following example shows the devices section from the /etc/multipath.conf file. Be sure the vendor ID and the product ID for the Sun Storage 6180 array are set as shown in this example:
devices {
    device {
        vendor "SUN"
        product "SUN_6180"
        path_grouping_policy group_by_prio
        prio rdac
        getuid_callout "/lib/udev/scsi_id -g -u -d /dev/%n"
        polling_interval 5
        path_checker rdac
        path_selector "round-robin 0"
        hardware_handler "1 rdac"
        failback immediate
        features "2 pg_init_retries 50"
        no_path_retry 30
        rr_min_io 100
    }
}
The following table explains the attributes and values in the devices section of the /etc/multipath.conf file.
Multipath devices are created under the /dev/ directory with the prefix dm-. These devices are the same as any other block devices on the host. To list all of the multipath devices, run the multipath -ll command. The following example shows system output from the multipath -ll command for one of the multipath devices.
mpathp (3600a0b80005ab177000017544a8d6b92) dm-0 LSI,INF-01-00
[size=5.0G][features=3 queue_if_no_path pg_init_retries 50][hwhandler=1 rdac][rw]
\_ round-robin 0 [prio=6][active]
 \_ 5:0:0:0 sdc 8:32 [active][ready]
\_ round-robin 0 [prio=1][enabled]
 \_ 4:0:0:0 sdb 8:16 [active][ghost]
In this example, the multipath device node for this device is /dev/mapper/mpathp and /dev/dm-0. The following table lists some basic options and parameters for the multipath command.
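The path lines in that listing can be pulled out with a short script, which is handy for a quick health check. The sketch below parses a captured copy of the sample output above (path lines are the ones carrying a host:bus:target:lun address); the file name is arbitrary:

```shell
# Capture of the sample `multipath -ll` output from this section.
cat > /tmp/mpath.out <<'EOF'
mpathp (3600a0b80005ab177000017544a8d6b92) dm-0 LSI,INF-01-00
[size=5.0G][features=3 queue_if_no_path pg_init_retries 50][hwhandler=1 rdac][rw]
\_ round-robin 0 [prio=6][active]
 \_ 5:0:0:0 sdc 8:32 [active][ready]
\_ round-robin 0 [prio=1][enabled]
 \_ 4:0:0:0 sdb 8:16 [active][ghost]
EOF

# Print each path's H:B:T:L address, sd device, and state. Only path lines
# contain a colon-separated H:B:T:L quad, so they are easy to match.
awk '/[0-9]+:[0-9]+:[0-9]+:[0-9]+/ { print $2, $3, $5 }' /tmp/mpath.out
```

On a live host, pipe `multipath -ll` directly into the same awk filter instead of reading from a file.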
The following sections provide information about restrictions, known issues, and bugs (or CRs) filed against this product release. If a recommended workaround is available for a bug, it follows the bug description.
For information about bug fixes in this release, see the Sun Storage Common Array Manager Software Release Notes.
Bug 7097416--When an OVM 2.2.2 or OEL 5.5 host with the Oracle Hardware Management Package (OHMP) daemon enabled is mapped to volumes on a 6180 array, it is possible to receive IO FAILURE and Illegal Request ASC/ASCQ log events.
Workaround--Either disable OHMP or disregard the messages. This is an issue with the host OS.
Bug 7110592--Firmware 07.80.51.10 can cause ancient I/O reboots if the cache block size does not match the application I/O size.
Workaround--Ensure the application I/O size fits into one cache block. If the cache block size is too small for the application I/O size, it results in a shortage of an internal structure known as a buf_t. Setting the cache block size to match the I/O size makes the correct number of buf_t structures available, and the ancient I/O condition is avoided.
To set the cache block size, go to the Administration page for the selected array.
Firmware revision 07.80.x.x supports the following cache block sizes:
Note - This problem does not occur in RHEL version 6.0 with kernel 2.6.33.
Problem or Restriction: An I/O error occurs during an online controller firmware upgrade.
Workaround: To avoid this problem, quiesce the host I/O before performing controller firmware upgrades. To recover from this problem, make sure that the host reports that it has optimal paths available to the storage array controllers, and then resume I/O.
CR 6872995, 6949589--Both RAID controllers reboot after 828.5 days of continuous operation. A timer in the firmware's vxWorks kernel, “vxAbsTicks,” is a 32-bit integer that counts up in hexadecimal from 0x00000000. When this timer rolls over from 0xffffffff to 0x00000000 (after approximately 828.5 days), if there is host I/O to volumes, the associated drives fail with a write failure.
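The 828.5-day figure follows from a 32-bit counter wrapping after 2^32 ticks at a rate of 60 ticks per second (the 60 Hz rate is the common vxWorks default, assumed here because it is consistent with the stated figure):

```shell
# 2^32 ticks / 60 ticks-per-second / 86400 seconds-per-day
# = days of continuous operation until the counter rolls over.
awk 'BEGIN { printf "%.1f\n", 2^32 / 60 / 86400 }'
```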
Original Resolution: Every 24 hours, firmware spawns a task--cfgMonitorTask--that checks the value of the vxworks kernel timing counter. For controllers with 03.xx-06.60 firmware (6000 series) and 03.xx-6.70 firmware (2500 series): Both controllers reboot if counter is greater than 825 days.
Final Resolution: Every 24 hours, firmware spawns a task--cfgMonitorTask--that checks the value of the vxworks kernel timing counter.
This fix staggers the reboots of the controllers for approximately five days so the only impact is a small performance degradation while the reboot occurs.
For controllers with firmware 07.15.11.12 or later (6000 series) and firmware 07.35.10.10 or later (2500 series): Controller A reboots if counter is greater than 820 days. Controller B reboots if counter is greater than 825 days.
Note - There is no redundancy for failover in a simplex 2500 configuration or any duplex configuration where a controller is already offline for any reason.
Problem or Restriction: After removing a second I/O Module from a storage array, the controller panics.
Workaround: After removing an I/O Module, wait at least 10 minutes before removing another I/O Module from the same storage array.
Problem or Restriction: Cache restore is attempted when the controller is attached to foreign drive modules, and there is data on the USB devices that the cache has not written to the drive modules.
Caution - Possible loss of data--Failure to perform this workaround could result in data loss.
Workaround: Quiesce the system before power to the system is turned off, and before the controller or the drive module is moved. This process does not back up the cache, nor does it attempt to restore the data from the USB devices to the foreign drive modules.
Problem or Restriction: With power-on diagnostics, some host interface card hardware defects are not found, including problems transferring data across the PCI express bus, interrupt failures, and issues with the internal buffers in the chip.
Workaround: Verify that the host interface cable connections into the Small Form-factor Pluggable (SFP) transceivers are secure. If the problem remains, replace the host interface card.
Problem or Restriction: If the controllers are running firmware that uses 64-bit addressing, you cannot load firmware that uses 32-bit addressing if your storage array has these conditions:
Recent code changes have been implemented to fix a 32-bit addressing issue by using 64-bit addressing. After you have updated to a firmware version that uses the 64-bit addressing, do not attempt to reload firmware version that uses 32-bit addressing.
Workaround: If you must replace a firmware version that uses 64-bit addressing with a firmware version that uses 32-bit addressing, contact a Sun Technical Support representative. The Technical Support representative will delete all snapshots before starting the downgrade process. Snapshots of any size will not survive the downgrade process. After the firmware that uses 32-bit addressing boots and runs, no snapshot records will be available to cause errors. After the 32-bit addressing firmware is running, you can re-create the snapshots.
Problem or Restriction: This problem occurs when Internet Protocol Version 6 (IPV6) addresses have been disabled on a Sun Storage 6180 array. If the Internet Storage Name Service (iSNS) is enabled and set to obtain configuration data automatically from the Dynamic Host Configuration Protocol (DHCP) server, the IPV6 addresses will be discovered even though they were disabled on the ports of the controllers in the Sun Storage 6180 array.
Problem or Restriction: This problem occurs when you change the configuration for all of the ports in a storage array from using Dynamic Host Configuration Protocol (DHCP) to using static IP addresses or vice versa. If you are using Internet Storage Name Service (iSNS), the registration of the IP addresses for the ports will be lost.
Workaround: Use one of the following workarounds after you change the IP addresses:
In a single path data connection, a group of heterogeneous servers is connected to an array through a single connection. Although this connection is technically possible, there is no redundancy, and a connection failure will result in loss of access to the array.
Caution - Because of the single point of failure, single path data connections are not recommended.
Bug 7006425--If you create a storage pool with no volumes, a replacement disk drive role is reported as “unassigned.”
Workaround--Delete the empty storage pool and create a new storage pool containing at least one volume.
Problem or Restriction: Because of the potential conflict between a drive module intentionally set to 0 (zero) and a drive module ID switch error that causes a drive module ID to be accidentally set to 0, do not set your drive module ID to 0.
Workaround: Change drive module ID to a value other than zero.
Problem or Restriction: Removing and reinserting drives during the drive firmware download process might cause the drive to be shown as unavailable, failed, or missing.
Workaround: Remove the drive, and either reinsert it or reboot the controllers to recover the drive.
Problem or Restriction: If you add a drive module using the loop topology option during Environmental Services Monitor (I/O Module) firmware download, the I/O Module firmware download process might fail due to a disconnected loop.
Workaround: When adding the drive module, do not follow the loop topology option. If you add the drive module by connecting the ports to the end of the storage array without disconnecting the loop, the I/O Module firmware download is successful.
Problem or Restriction: Removing drives while a storage array is online and then waiting to reinsert the drives until the storage array is starting after a reboot might cause the drives to be marked as failed after the storage array comes back online.
Workaround: Wait until the storage array is back online before reinserting the drives. If the storage array still does not recognize the drives, reconstruct the drives using Sun Storage Common Array Manager software.
Problem or Restriction: CR 7042297--Before running a "make" on the RDAC driver, the following kernel packages are required:
Operating System: SUSE Linux Enterprise Server 11.1 SP1
Problem or Restriction: CR 7026018--Support for SUN and SUN_6180 is missing from the rdac_dev_list in the device handler scsi_dh_rdac.c file. For more information, refer to https://bugzilla.novell.com/show_bug.cgi?id=682738.
1. Verify DMMP is installed (see Installing the Device Mapper Multi-Path).
2. Download the scsi_dh_rdac KMP package for the SLES 11 architecture:
http://drivers.suse.com/driver-process/pub/update/LSI/sle11sp1/common/
3. Add the vendor ID and product ID to the /etc/multipath.conf file:
b. Copy a device block of code starting with "device {", and ending with "}" and paste a copy of it at the end of the file, within the "devices {" and "}" block.
c. Change the vendor ID and product ID to the values "SUN" and "SUN_6180", as shown in the following example:
vendor "SUN" product "SUN_6180"
d. Save your changes and exit the file.
For more information about the DMMP device handler, see Device Mapper Multipath (DMMP) for the Linux Operating System.
Operating System: SUSE Linux Enterprise Server 11.1 SP1
Problem or Restriction: Several IO FAILURE and Illegal Request log events with ASC/ASCQ SCSI errors appear in /var/log/messages while running vdbench on 25 LUNs.
An application client may request any one or all of the supported mode pages from the device server. If an application client issues a MODE SENSE command with a page code or subpage code value not implemented by the logical unit, the command shall be terminated with CHECK CONDITION status, with the sense key set to ILLEGAL REQUEST, and the additional sense code set to INVALID FIELD IN CDB.
The controller responds correctly (05h/24h/00h - INVALID FIELD IN CDB). The smartctl tool may need to request all supported mode pages first before sending an unsupported mode page request.
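The sense triplet the controller returns (05h/24h/00h) decodes exactly as the paragraph above states. A minimal lookup sketch, covering only that one combination (it is illustrative, not a complete SCSI sense decoder):

```shell
# Map a SCSI sense-key/ASC/ASCQ triplet (hex strings, no 0x prefix) to a
# human-readable description. Only the combination discussed above is
# handled; everything else falls through to a generic message.
decode_sense() {
  case "$1/$2/$3" in
    05/24/00) echo "ILLEGAL REQUEST: INVALID FIELD IN CDB" ;;
    *)        echo "unrecognized sense: $1/$2/$3" ;;
  esac
}

decode_sense 05 24 00
```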
Workaround: Disable SLES11 smartd monitoring service to stop these messages.
System Services (Runlevel) > smartd Disable
Problem or Restriction: This problem occurs when the DMMP failover driver is used with the RHEL version 6.0 OS. If you try to set up a Red Hat cluster with the DMMP failover driver, cluster startup might fail during the unfencing stage, where each host registers itself with the SCSI devices. The devices are in a Unit Attention state, which causes the SCSI registration command issued by the host during startup to fail. When the cluster manager (cman) service starts, the logs show that the nodes failed to unfence themselves, which causes the cluster startup to fail.
Workaround: To avoid this problem, do not use the DMMP failover driver with RHEL version 6.0. To recover from this problem, open a terminal window, and run:
sg_turs <device>
where <device> is a SCSI device that is virtualized by the DMMP failover driver. Run this command on every /dev/sd device that the DMMP failover driver manages. It issues a Test Unit Ready command to clear the Unit Attention state and allow node registration on the device to succeed.
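The per-device step described above can be wrapped in a loop. In this sketch the device names are hypothetical and `echo` is left in so nothing is actually sent; drop the `echo` and substitute the sd paths DMMP manages on your host to issue the real commands (sg_turs is part of the sg3_utils package):

```shell
# Issue (here: preview) a Test Unit Ready to each DMMP-managed sd path to
# clear the Unit Attention state. Device names are placeholders.
for dev in /dev/sdb /dev/sdc; do
  echo sg_turs "$dev"     # remove `echo` to actually run sg_turs
done
```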
Operating System: Red Hat Enterprise Linux 6 with Native Cluster
Problem or Restriction: This problem occurs the first time a cluster is set up when the cluster.conf file does not have manually defined host keys. When the cluster.conf file was first defined to set up a cluster with SCSI reservation fencing, the cluster services were started on the nodes. With SCSI reservation fencing, the hosts try to generate and register a key on the clustered devices as part of the cluster manager's startup. The cluster manager service (cman) fails to start, and a “key cannot be zero” error message appears in the host log.
Workaround: To avoid this problem, use only power fencing. Do not use SCSI reservation fencing. To recover from this problem, change to manually defined host keys, and restart the cluster services.
Operating System: Red Hat Enterprise Linux 6 Native Cluster
Problem or Restriction: This problem occurs during an attempt to transfer a cluster service manually when a client is connected using NFSv4. The Global File System (GFS) 2 mount points failed to unmount, which caused the Red Hat Cluster Suite Services to go to the Failed state. The mount point, and all other mount points exported from the same virtual IP address, becomes inaccessible.
Workaround: To avoid this problem, configure the cluster nodes to not allow mount requests from NFS version 4 (NFSv4) clients. To recover from this problem, restart the failed service on the node that previously owned it.
Operating System: Red Hat Enterprise Linux version 6.0
Problem or Restriction: This problem occurs during an online controller firmware upgrade. The controller is not responding quickly enough to a host read or write to satisfy the host. After 30 seconds, the host sends a command to abort the I/O. The I/O aborts, and then starts again successfully.
Workaround: Quiesce the host I/O before performing the controller firmware upgrade. To recover from this problem, either reset the server, or wait until the host returns an I/O error.
Operating System: Red Hat Enterprise Linux version 6.0 with kernel 2.6.32
Red Hat Bugzilla Number: 620391
Note - This problem does not occur in Red Hat Enterprise Linux version 6.0 with kernel 2.6.33.
Problem or Restriction: This problem occurs under situations of heavy stress when storage arrays take longer than expected to return the status of a read or write. The storage array must be sufficiently stressed that the controller response is more than 30 seconds, at which time a command is issued to abort if no response is received. The abort will be retried indefinitely even when the abort is successful. The application either times out or hangs indefinitely on the read or write that is being aborted. The messages file reports the aborts, and resets might occur on the LUN, the host, or the bus.
Factors affecting controller response include Remote Volume Mirroring, the controller state, the number of attached hosts, and the total throughput.
Workaround: To recover from this problem, reset the power on the server.
Problem or Restriction: When a Red Hat Enterprise Linux 5.1 host has more than two new volumes mapped to it, it hangs during reboot.
Workaround: Try one of the following options:
Problem or Restriction: An I/O timeout error occurs after you enable a switch port. This problem occurs when two or more Brocade switches are used, and both the active and the alternative paths from the host are located on one switch, and both the active path and the alternative path from the storage array are located on another switch. For the host to detect the storage array on the other switch, the switches are cascaded, and a shared zone is defined between the switches. This problem occurs on fabrics managing high I/O traffic.
Workaround: Reconfigure the switch zoning to avoid the need for cascading. Limit the zones within each switch, and do not create zones across the switches. Configure the active paths from the host and the storage array on one switch, and all of the alternative paths from the host and the storage array on the other switch.
Problem or Restriction: Red Hat Enterprise Linux 5.2 PowerPC (PPC) only. On rare occasions, the host hangs during reboot.
Problem or Restriction: Linux Red Hat 5 and Linux SLES 10 SP1 only. After a controller failover in an open SAN environment, a controller comes back online, but the path is not rediscovered by the multi-path proxy (MPP). After a controller comes online in a fabric connection (through a SAN switch), it is possible that a link will not be established by the Emulex HBA driver. This behavior is seen only if the SAN switch is “default” zoned (all ports see all other ports). This condition can result in an I/O error if the other path is taken offline.
Workaround: Configure zoning on the SAN switches so that they are not “default” zoned.
Problem or Restriction: SLES 10 SP2 only. I/O errors occur during a system reboot, and the host resets.
Problem or Restriction: Red Hat Enterprise Linux 4.7 only. When the controller is going through the start-of-day sequence, the drive channel does not achieve link speed detection and logs a Major Event Log (MEL) event. This event recovers within a few seconds, and a second MEL event occurs. The second MEL event indicates that the link speed detection was achieved.
Problem or Restriction: Windows Server 2003 only. When you configure a storage array as a boot device, the system shows a blue screen and does not respond when it is manually or automatically set to hibernate.
Workaround: If you use a storage array as a boot device for the Windows Server 2003 operating system, you cannot use the hibernation feature.
Problem or Restriction: Windows Server 2003 only. No Automatic Synchronization MEL events are received when the controllers go through autocode synchronization (ACS) and a deferred lockdown.
Workaround: You must verify the firmware on the controllers.
Problem or Restriction: AIX only. When you perform a firmware download under a heavy load, the download fails because the volumes take too long to transfer to the alternate controller.
Workaround: Execute the download again. To avoid this problem, perform the firmware updates during non-peak I/O activity times.
Problem: The Sun Storage 6180 Site Preparation Guide contains discrepancies for certain array specifications.
Workaround: Note the following corrected capacity, environment, and physical values.
Problem: The Note on page 15 of the Sun Storage 6180 Array Hardware Installation Guide incorrectly references the Common Array Manager Release Notes for information about Installing Firmware for Additional Expansion Modules.
Correction: Refer to the “Adding Expansion Trays” procedure in Service Advisor. If you need to upgrade to the latest firmware revision, see “Upgrade Firmware” in Service Advisor.
Related product documentation is available at:
http://download.oracle.com/docs/cd/E19373-01/index.html
These web sites provide additional resources:
Copyright © 2011, Oracle and/or its affiliates. All rights reserved.