Sun Storage 6180 Array Release Notes

This document contains important release information about Oracle’s Sun Storage 6180 array running Sun Storage Common Array Manager (CAM), Version 6.9.x. Read about issues or requirements that can affect the installation and operation of the array.

The release notes consist of the following sections:

  • What's In This Firmware Release
  • Downloading Patches and Updates
  • Cache Battery Expiration Notification
  • About the Array
  • System Requirements
  • Device Mapper Multipath (DMMP) for the Linux Operating System
  • Restrictions and Known Issues
  • Product Documentation
  • Documentation, Support, and Training


What’s In This Firmware Release

Array controller firmware version 7.80.xx.xx provides Sun Storage Common Array Manager enhancements and bug fixes as described in the Sun Storage Common Array Manager Software Release Notes.

Downloading Patches and Updates

To download Sun Storage Common Array Manager, as well as server patches pertaining to the Sun Storage 6180 array, follow this procedure.

1. Sign in to My Oracle Support:

https://support.oracle.com/

2. At the top of the page, click the Patches & Updates tab.

3. Search for CAM software and patches in one of two ways:

To search by patch name or number:

a. Under the Patch Search section, click the Search tab.

b. In the Patch Name or Number field, enter the patch number. For example, 10272123 or 141474-01.

c. Click Search.

To search by product name and release:

a. Under the Patch Search section, click the Search tab, and then click the Product or Family (Advanced Search) link.

b. Check Include all products in a family.

c. In the Product field, start typing the product name. For example, “Sun Storage Common Array Manager (CAM)” or “Sun Storage 6180 array.”

d. Select the product name when it appears.

e. In the Release field, expand the product name, check the release and patches you want to download, and then click Close.

f. Click Search.

4. Select the patch you want to download.

5. Click ReadMe for a patch description and installation instructions.

6. Click Download for a single patch, or Add to Plan to download a group of patches.

Cache Battery Expiration Notification

Sun Storage 6180 arrays use smart battery technology, in which the battery maintains and reports its own status, providing more accurate reporting of battery status. When a battery can no longer hold a charge, it is flagged for replacement rather than relying on a battery expiration report from the array firmware.


About the Array

The Sun Storage 6180 array is a high-performance, enterprise-class, full 8 Gigabit per second (Gb/s) I/O Fibre Channel solution (with backend loop speeds of 2 or 4 Gb/s) that combines outstanding performance with the highest reliability, availability, flexibility, and manageability.

The Sun Storage 6180 array is modular, rackmountable, and scalable from a single dual-controller tray (1x1) configuration to a maximum configuration of 1x7 with six additional CSM200 expansion trays behind one controller tray.


System Requirements

The software and hardware products that have been tested and qualified to work with the Sun Storage 6180 array are described in the following sections.

Firmware Requirements

The Sun Storage 6180 array features described in these release notes require firmware version 07.80.xx.xx. This firmware version (or higher) is installed on the array controllers prior to shipment and is also delivered with the latest version of Sun Storage Common Array Manager (CAM).

To update controller firmware on an existing array:

1. Download the software as described in Downloading Patches and Updates.

2. Log into Sun Storage Common Array Manager.

3. Select the check box to the left of the array you want to update.

4. Click Install Firmware Baseline.

5. Follow the wizard instructions.

Disk Drives and Tray Capacity

TABLE 1 lists the size, spindle speed, type, interface speed, and tray capacity for supported Fibre Channel (FC), Serial Advanced Technology Attachment (SATA), and Solid State Disk (SSD) disk drives for the Sun Storage 6180 array. Additional legacy drives might also be supported with this product.

The following list of supported disk drives replaces the listing in the Sun Storage 6180 Array Hardware Installation Guide.


TABLE 1 Supported Disk Drives

Drive | Description
FC, 73G15K | 73-Gbyte 15,000-RPM FC drives (4 Gbits/sec); 1168 Gbytes per tray
FC, 146G10K | 146-Gbyte 10,000-RPM FC drives (4 Gbits/sec); 2336 Gbytes per tray
FC, 146G15K | 146-Gbyte 15,000-RPM FC drives (4 Gbits/sec); 2336 Gbytes per tray
FC, 300G10K | 300-Gbyte 10,000-RPM FC drives (4 Gbits/sec); 4800 Gbytes per tray
FC, 300G15K | 300-Gbyte 15,000-RPM FC drives (4 Gbits/sec); 4800 Gbytes per tray
FC, 400G10K | 400-Gbyte 10,000-RPM FC drives (4 Gbits/sec); 6400 Gbytes per tray
FC, 450G15K | 450-Gbyte 15,000-RPM FC drives (4 Gbits/sec); 7200 Gbytes per tray
SATA-2, 500G7.2K | 500-Gbyte 7,200-RPM SATA drives (3 Gbits/sec); 8000 Gbytes per tray
FC, 600GB15K, Encryption Capable | 600-Gbyte 15,000-RPM FC drives, Encryption Capable (4 Gbits/sec); 9600 Gbytes per tray
SATA-2, 750G7.2K | 750-Gbyte 7,200-RPM SATA drives (3 Gbits/sec); 12000 Gbytes per tray
SATA-2, 1T7.2K | 1-Tbyte 7,200-RPM SATA drives (3 Gbits/sec); 16000 Gbytes per tray
SATA-2, 2TB7.2K | 2-Tbyte 7,200-RPM SATA drives (3 Gbits/sec); 32000 Gbytes per tray


Array Expansion Module Support

The CSM200 is the only expansion tray supported by the Sun Storage 6180 array. To add capacity to a 6180 array, refer to the following Service Advisor procedures:



Caution - To add trays with existing stored data, contact Oracle Support for assistance to avoid data loss.



TABLE 2 IOM Code for the Sun Storage 6180 Expansion Module

Array Controller | Firmware | Supported Expansion Tray | IOM Code
Sun Storage 6180 | 07.80.51.10 | CSM200 | 98D6


For additional baseline firmware information, such as controller, NVSRAM, disk drive, version, and firmware file, see Sun Storage Array Baseline Firmware Reference.

Data Host Requirements

This section describes supported data host software, HBAs, and switches.

Multipathing Software

TABLE 3 provides a summary of the data host requirements for the Sun Storage 6180 array, listing the supported multipathing software and required host type settings by operating system. Supported host bus adapters (HBAs) are listed in TABLE 4 through TABLE 7.

You must install multipathing software on each data host that communicates with the Sun Storage 6180 array.



Note - Single path data connections are not recommended. For more information, see Single Path Data Connections.


TABLE 3 lists supported multipathing software by operating system.


TABLE 3 Multipathing Software

Operating System | Multipathing Software | Minimum Version | Host Type Setting | Notes
Solaris 10[1] | STMS/MPxIO | Update 6 or Update 5 with patch 140919-04 (SPARC), 140920-04 (x64/x86) | Solaris with MPxIO | Multipathing software included in Solaris 10 OS
Solaris 10 with DMP | Symantec Veritas Dynamic Multi-Pathing (DMP) | 5.0MP3 | Solaris with DMP |
Windows 2003 SP2 R2 Non-clustered | MPIO | 01.03.0302.0504 | Windows 2003 Non-clustered |
Windows 2003/2008 MSCS Cluster | MPIO | 01.03.0302.0504 | Windows Server 2003 Clustered | You must use MPIO for 7.10 and above
Windows 2003 Non-clustered with DMP | DMP | 5.1 | Windows Server 2003 Non-clustered (with Veritas DMP) | See Symantec Hardware Compatibility List (HCL)
Windows 2003 Clustered with DMP | DMP | 5.1 | Windows Server 2003 Clustered (with Veritas DMP) | See Symantec HCL
Windows Server 2008 R2 SP1 (64-bit only) | MPIO | 01.03.0302.0504 | Windows Server 2003 |
Oracle VM 2.2.2 | RDAC | 09.03.0C02.0331 | Linux | RDAC version 09.03.0C02.0331 is included with Oracle VM 2.2.2
Oracle Linux 6.0, 5.6, 5.5 | RDAC | 09.03.0C02.0453 | Linux |
Unbreakable Linux | DMMP | | Unbreakable | DMMP is included with the Unbreakable OS.
SUSE Linux Enterprise Server 11 SP1 and 10 SP3 | RDAC/MPP, DMMP | 09.03.0C00.0453 | Linux |
SLES 10.4, 10 SP1 | RDAC/MPP | 09.03.0C02.0453 | Linux |
Red Hat 6.0, 5.6, 5.5 | RDAC | 09.03.0C02.0453 | Linux |
Red Hat 4, SLES 10 | RDAC/MPP | 09.03.0C02.0453 | Linux |
Red Hat SLES with DMP | DMP | 5.0MP3 | Linux with DMP | See Symantec HCL
VMware ESX(i) 4.1 U1 and 3.5 | Native Multipathing (NMP) | | VMware |
HPUX | Veritas DMP | 5.0MP3 | HP-UX | See Symantec HCL
AIX 6.1, 5.3 | Cambex DPF | 6.1.0.63 | AIX | Not supported with CAM 6.9, firmware 7.80.xx.xx, but is supported with CAM 6.8.1, firmware 7.77.xx.xx
AIX 6.1, 5.3 with DMP | DMP | 5.0 | AIX with DMP | Not supported with CAM 6.9, firmware 7.80.xx.xx, but is supported with CAM 6.8.1, firmware 7.77.xx.xx




Note - Download the multipathing drivers from My Oracle Support at https://support.oracle.com. Search for the appropriate driver using one of the keywords “MPIO,” “RDAC,” or “MPP.” See Downloading Patches and Updates.




Note - The multipathing driver for the IBM AIX platform is Veritas DMP, bundled in Veritas Storage Foundation 5.0 for the Sun Storage 6180 array. Download the Array Support Library (ASL) from http://support.veritas.com/.


Supported Host Bus Adaptors (HBAs)

TABLE 4 through TABLE 7 list supported HBAs and other data host platform elements by operating system.

To obtain the latest HBA firmware:

Download operating system updates from the web site of the operating system company.



Note - You must install the multipathing software before you install any OS patches.



TABLE 4 Supported HBAs for Solaris Data Host Platforms

Solaris 10 SPARC
  Minimum OS patches[2]: Update 6 or Update 5 with patch 140919-04
  Sun 2-Gbit HBAs: SG-XPCI1FC-QL2 (6767A), SG-XPCI2FC-QF2-Z (6768A), SG-XPCI1FC-EM2, SG-XPCI2FC-EM2
  Sun 4-Gbit HBAs: SG-XPCIE1FC-QF4, SG-XPCIE2FC-QF4, SG-XPCIE1FC-EM4, SG-XPCIE2FC-EM4, SG-XPCI1FC-QF4, SG-XPCI2FC-QF4, SG-XPCI1FC-EM4, SG-XPCI2FC-EM4, SG-XPCIE2FCGBE-Q-Z, SG-XPCIE2FCGBE-E-Z
  Sun 8-Gb HBAs: SG-XPCIE1FC-QF8-Z, SG-XPCIE2FC-QF8-Z, SG-XPCIE1FC-EM8-Z, SG-XPCIE2FC-EM8-Z, SG-XPCIEFCGBE-Q8, SG-XPCIEFCGBE-E8

Solaris 10 x64/x86
  Minimum OS patches[2]: Update 6 or Update 5 with patch 140920-04
  Sun 2-Gbit HBAs: SG-XPCI1FC-QL2 (6767A), SG-XPCI2FC-QF2-Z (6768A), SG-XPCI1FC-EM2, SG-XPCI2FC-EM2
  Sun 4-Gbit HBAs: SG-XPCIE1FC-QF4, SG-XPCIE2FC-QF4, SG-XPCIE1FC-EM4, SG-XPCIE2FC-EM4, SG-XPCI1FC-QF4, SG-XPCI2FC-QF4, SG-XPCI1FC-EM4, SG-XPCI2FC-EM4, SG-XPCIE2FCGBE-Q-Z, SG-XPCIE2FCGBE-E-Z
  Sun 8-Gb HBAs: SG-XPCIE1FC-QF8-Z, SG-XPCIE2FC-QF8-Z, SG-XPCIE1FC-EM8-Z, SG-XPCIE2FC-EM8-Z, SG-XPCIEFCGBE-E8, SG-XPCIEFCGBE-Q8



TABLE 5 Supported HBAs for Microsoft Windows Data Host Platforms

Microsoft Windows Server 2008 R2 SP1 (64-bit only) / AMD x86 and EM64T
  HBAs[3]: QLogic QLE 256x, QLE 246x, QLA 246x, QLA 234x, QLA 2310F; Emulex LPe12000/LPe12002/LPe1250, LPe11000/LPe11002/LPe1150, LP11000/LP11002/LP1150, LP9802/9802DC/982, LP952/LP9002/LP9002DC, LP10000/10000DC/LP1050
  Sun 2-Gb HBAs: SG-XPCI1FC-EM2, SG-XPCI2FC-EM2, SG-XPCI1FC-QL2, SG-XPCI2FC-QF2-Z
  Sun 4-Gb HBAs: SG-XPCIE1FC-QF4, SG-XPCIE2FC-QF4, SG-XPCIE1FC-EM4, SG-XPCIE2FC-EM4, SG-XPCI1FC-QF4, SG-XPCI2FC-QF4, SG-XPCI1FC-EM4, SG-XPCI2FC-EM4, SG-XPCIE2FCGBE-Q-Z, SG-XPCIE2FCGBE-E-Z
  Sun 8-Gb HBAs: SG-XPCIE1FC-QF8-Z, SG-XPCIE2FC-QF8-Z, SG-XPCIE1FC-EM8-Z, SG-XPCIE2FC-EM8-Z, SG-XPCIEFCGBE-Q8, SG-XPCIEFCGBE-E8

Microsoft Windows Server 2003 SP2 R2 / AMD x86 and EM64T
  HBAs[3]: QLogic QLE 256x, QLE 246x, QLA 246x, QLA 234x, QLA 2310F; Emulex LPe12000/LPe12002/LPe1250, LPe11000/LPe11002/LPe1150, LP11000/LP11002/LP1150, LP9802/9802DC/982, LP952/LP9002/LP9002DC, LP10000/10000DC/LP1050
  Sun 2-Gb HBAs: SG-XPCI1FC-EM2, SG-XPCI2FC-EM2, SG-XPCI1FC-QL2, SG-XPCI2FC-QF2-Z
  Sun 4-Gb HBAs: SG-XPCIE1FC-QF4, SG-XPCIE2FC-QF4, SG-XPCIE1FC-EM4, SG-XPCIE2FC-EM4, SG-XPCI1FC-QF4, SG-XPCI2FC-QF4, SG-XPCI1FC-EM4, SG-XPCI2FC-EM4, SG-XPCIE2FCGBE-Q-Z, SG-XPCIE2FCGBE-E-Z
  Sun 8-Gb HBAs: SG-XPCIE1FC-QF8-Z, SG-XPCIE2FC-QF8-Z, SG-XPCIE1FC-EM8-Z, SG-XPCIE2FC-EM8-Z, SG-XPCIEFCGBE-E8, SG-XPCIEFCGBE-Q8

Microsoft Windows 2003 64-bit with SP2 R2 / x64 (AMD), EM64T, IA64
  HBAs[3]: QLogic QLE 256x, QLE 246x, QLA 246x, QLA 234x, QLA 2310F; Emulex LPe12000/LPe12002/LPe1250, LPe11000/LPe11002/LPe1150, LP11000/LP11002/LP1150, LP9802/9802DC/982, LP952/LP9002/LP9002DC, LP10000/10000DC/LP1050
  Sun 2-Gb HBAs: SG-XPCI1FC-EM2, SG-XPCI2FC-EM2, SG-XPCI1FC-QL2, SG-XPCI2FC-QF2-Z
  Sun 4-Gb HBAs: SG-XPCIE1FC-QF4, SG-XPCIE2FC-QF4, SG-XPCIE1FC-EM4, SG-XPCIE2FC-EM4, SG-XPCI1FC-QF4, SG-XPCI2FC-QF4, SG-XPCI1FC-EM4, SG-XPCI2FC-EM4, SG-XPCIE2FCGBE-Q-Z, SG-XPCIE2FCGBE-E-Z
  Sun 8-Gb HBAs: SG-XPCIE1FC-QF8-Z, SG-XPCIE2FC-QF8-Z, SG-XPCIE1FC-EM8-Z, SG-XPCIE2FC-EM8-Z, SG-XPCIEFCGBE-Q8, SG-XPCIEFCGBE-E8



TABLE 6 Supported HBAs for Linux Data Host Platforms

SLES 11 SP1, 10.4, 10 SP3
  HBAs[4]: QLogic QLE 256x, QLE 246x, QLA 246x, QLA 234x, QLA 2310F; Emulex LP982/LP9802/9802DC, LP9002/LP9002DC/LP952, LP10000/10000DC/LP1050, LP11000/LP11002/LP1150, LPe11000/LPe11002/LPe1150, LPe12000/LPe12002/LPe1250
  Sun 2-Gb HBAs: SG-XPCI1FC-EM2, SG-XPCI2FC-EM2, SG-XPCI1FC-QL2, SG-XPCI2FC-QF2-Z
  Sun 4-Gb HBAs: SG-XPCIE1FC-QF4, SG-XPCIE2FC-QF4, SG-XPCIE1FC-EM4, SG-XPCIE2FC-EM4, SG-XPCI1FC-QF4, SG-XPCI2FC-QF4, SG-XPCI1FC-EM4, SG-XPCI2FC-EM4, SG-XPCIE2FCGBE-Q-Z, SG-XPCIE2FCGBE-E-Z
  Sun 8-Gb HBAs: SG-XPCIE1FC-QF8-Z, SG-XPCIE2FC-QF8-Z, SG-XPCIE1FC-EM8-Z, SG-XPCIE2FC-EM8-Z, SG-XPCIEFCGBE-E8, SG-XPCIEFCGBE-Q8

Oracle Linux 6.0, 5.6, 5.5; Oracle VM 2.2.2; RHEL 6.0, 5.6, 5.5
  HBAs[4]: QLogic QLE 256x, QLE 246x, QLA 246x, QLA 234x, QLA 2310F; Emulex LP982/LP9802/9802DC, LP9002/LP9002DC/LP952, LP10000/10000DC/LP1050, LPe11000/LPe11002/LPe1150, LPe12000/LPe12002/LPe1250
  Sun 2-Gb HBAs: SG-XPCI1FC-EM2, SG-XPCI2FC-EM2, SG-XPCI1FC-QL2, SG-XPCI2FC-QF2-Z
  Sun 4-Gb HBAs: SG-XPCIE1FC-QF4, SG-XPCIE2FC-QF4, SG-XPCIE1FC-EM4, SG-XPCIE2FC-EM4, SG-XPCI1FC-QF4, SG-XPCI2FC-QF4, SG-XPCI1FC-EM4-Z, SG-XPCI2FC-EM4-Z, SG-XPCIE2FCGBE-Q-Z, SG-XPCIE2FCGBE-E-Z
  Sun 8-Gb HBAs: SG-XPCIE1FC-QF8-Z, SG-XPCIE2FC-QF8-Z, SG-XPCIE1FC-EM8-Z, SG-XPCIE2FC-EM8-Z, SG-XPCIEFCGBE-Q8, SG-XPCIEFCGBE-E8

RHEL 4u7, RHEL 4.8
  HBAs[4]: QLogic QLE 256x, QLE 246x, QLA 246x, QLA 234x, QLA 2310F; Emulex LP982/LP9802/9802DC, LP9002/LP9002DC/LP952, LP10000/10000DC/LP1050, LPe11000/LPe11002/LPe1150, LPe12000/LPe12002/LPe1250
  Sun 2-Gb HBAs: SG-XPCI1FC-EM2, SG-XPCI2FC-EM2, SG-XPCI1FC-QL2, SG-XPCI2FC-QF2-Z
  Sun 4-Gb HBAs: SG-XPCIE1FC-QF4, SG-XPCIE2FC-QF4, SG-XPCIE1FC-EM4, SG-XPCIE2FC-EM4, SG-XPCI1FC-QF4, SG-XPCI2FC-QF4, SG-XPCI1FC-EM4-Z, SG-XPCI2FC-EM4-Z, SG-XPCIE2FCGBE-Q-Z, SG-XPCIE2FCGBE-E-Z
  Sun 8-Gb HBAs: SG-XPCIE1FC-QF8-Z, SG-XPCIE2FC-QF8-Z, SG-XPCIE1FC-EM8-Z, SG-XPCIE2FC-EM8-Z, SG-XPCIEFCGBE-Q8, SG-XPCIEFCGBE-E8



TABLE 7 Other Supported Data Host Platforms

HP-UX 11.31
  Host servers: HP RISC, IA64
  HBAs[5]: HP A6795A, HP A6826A, HP A6684A, HP A6685A, HP AB378A, HP AB379A, HP AD300A, HP AD355A, AH400A (IA64), AH401A (IA64)

HP-UX B.11.23
  Host servers: HP RISC, IA64
  HBAs[5]: HP A6795A, HP A6826A, HP A9784A, HP AB378A, HP AB379A, HP AD300A, HP AD355A

IBM AIX 5.2, 5.3, 6.1
  Host servers: Power
  HBAs[5]: IBM 5716, IBM 5758, IBM 5759, IBM 6228, IBM 6239


Supported FC and Multilayer Switches

The following FC fabric and multilayer switches are compatible for connecting data hosts to the Sun Storage 6180 array:

Supported Premium Features

Tier 1 Support

The Sun Storage 6180 array supports Tier 1 licensable features. Tier 1 classified arrays include the StorageTek 6140 and Sun Storage 6180 arrays.

Available licenses for the Sun Storage 6180:

Tier 2 Support

The Sun Storage 6580 and 6780 arrays support Tier 2 licensable features. Tier 2 classified arrays include the StorageTek 6540, Sun Storage 6580, and Sun Storage 6780 arrays.

Available licenses for the Sun Storage 6580 and 6780 arrays:


Device Mapper Multipath (DMMP) for the Linux Operating System

Device Mapper (DM) is a generic framework for block devices provided by the Linux operating system. It supports concatenation, striping, snapshots, mirroring, and multipathing. The multipath function is provided by the combination of the kernel modules and user space tools.

The DMMP is supported on SUSE Linux Enterprise Server (SLES) Version 11 and 11.1. The SLES installation must have components at or above the version levels shown in the following table before you install the DMMP.


TABLE 8 Minimum Supported Configurations for the SLES 11 Operating System

Component | Minimum Version
Kernel version | kernel-default-2.6.27.29-0.1.1
scsi_dh_rdac kmp | lsi-scsi_dh_rdac-kmp-default-0.0_2.6.27.19_5-1
Device Mapper library | device-mapper-1.02.27-8.6
Multipath-tools | multipath-tools-0.4.8-40.6.1


To update a component, download the appropriate package from the Novell website at http://download.novell.com/patch/finder. The Novell publication, SUSE Linux Enterprise Server 11 Installation and Administration Guide, describes how to install and upgrade the operating system.

Device Mapper Features

Known Limitations and Issues of the Device Mapper

Installing the Device Mapper Multi-Path

1. Use the media supplied by your operating system vendor to install SLES 11.

2. Install the errata kernel 2.6.27.29-0.1.

Refer to the SUSE Linux Enterprise Server 11 Installation and Administration Guide for the installation procedure.

3. Reboot your system so that it boots the 2.6.27.29-0.1 kernel.

4. On the command line, enter rpm -qa |grep device-mapper, and check the system output to see if the correct level of the device mapper component is installed.

5. On the command line, enter rpm -qa |grep multipath-tools and check the system output to see if the correct level of the multipath tools is installed.

6. Update the configuration file /etc/multipath.conf.

See Setting Up the multipath.conf File for detailed information about the /etc/multipath.conf file.

7. On the command line, enter chkconfig multipathd on.

This command enables the multipathd daemon when the system boots.

8. Edit the /etc/sysconfig/kernel file to add the directive scsi_dh_rdac to the INITRD_MODULES section of the file (see the sketch after this procedure).

9. Download the KMP package for scsi_dh_rdac for the SLES 11 architecture from the website http://forgeftp.novell.com/driver-process/staging/pub/update/lsi/sle11/common/, and install the package on the host.

10. Update the boot loader to point to the new initrd image, and reboot the host with the new initrd image.
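
The following command-line sketch summarizes steps 4 through 8. The module names shown on the INITRD_MODULES line other than scsi_dh_rdac are placeholders; keep whatever modules are already listed in your /etc/sysconfig/kernel file and append scsi_dh_rdac.

# Steps 4 and 5: verify component levels against TABLE 8
rpm -qa | grep device-mapper
rpm -qa | grep multipath-tools

# Step 7: enable the multipathd daemon at boot
chkconfig multipathd on

# Step 8: example INITRD_MODULES line in /etc/sysconfig/kernel
INITRD_MODULES="ata_piix scsi_dh_rdac"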

Setting Up the multipath.conf File

The multipath.conf file is the configuration file for the multipath daemon, multipathd. The multipath.conf file overwrites the built-in configuration table for multipathd. Any line in the file whose first non-white-space character is # is considered a comment line. Empty lines are ignored.

Installing the Device Mapper Multi-Path for SLES 11.1, SLES11 SP1

All of the components required for DMMP are included on the SUSE Linux Enterprise Server (SLES) version 11.1 installation media. However, you might need to select specific components based on your storage hardware type. By default, DMMP is disabled in SLES. Complete the following steps to enable the DMMP components on the host.



Note - Make sure that no LUNs are mapped to your host, or unplug the host cables, before performing this step; otherwise, the step can take a very long time to complete.


1. On the command line, type chkconfig multipathd on.

The multipathd daemon is enabled when the system starts again.

2. Edit the /etc/sysconfig/kernel file to add the directive scsi_dh_rdac to the INITRD_MODULES section of the file.

3. Run mkinitrd using one of the following commands, depending on your architecture:

mkinitrd -i /boot/initrd -k /boot/vmlinuz (x86/x86-64)

mkinitrd -i /boot/initrd -k /boot/vmlinux (PowerPC)

4. After creating the initial ram disk, make sure that the initial ram disk size is set correctly in the /etc/yaboot.conf file. If it is not set correctly, the host might not boot up. The initial ram disk size can be found with:

ls -al /boot/<the initrd that you are using>

For RHEL 5.x:

5. Run mkinitrd with the following command:

mkinitrd /boot/initrd-`uname -r`.img `uname -r` (no space between initrd- and `uname, but there is a space between uname and -r)

For RHEL 6:

6. Run dracut to recompile the initramfs image using the following command:

dracut -f

The installation is complete.

7. Reboot the system.

8. After the reboot, check to make sure the proper kernel modules are loaded by running the following command:

lsmod | grep scsi_dh_rdac

scsi_dh_rdac and dm_multipath should both show up in the output.

Copy and Rename the Sample File

Copy the sample file located at /usr/share/doc/packages/multipath-tools/multipath.conf.synthetic to /etc/multipath.conf. Configuration changes are now accomplished by editing the new /etc/multipath.conf file. All entries for multipath devices are commented out initially. The configuration file is divided into five sections: defaults, blacklist, blacklist_exceptions, multipaths, and devices.
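
For example, a single copy command (using the sample file path cited above) creates the working configuration file:

cp /usr/share/doc/packages/multipath-tools/multipath.conf.synthetic /etc/multipath.conf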

Determine the Attributes of a MultiPath Device

To determine the attributes of a multipath device, check the multipaths section of the /etc/multipath.conf file, then the devices section, then the defaults section. The model settings used for multipath devices are listed for each storage array and include matching vendor and product values. Add matching storage vendor and product values for each type of volume used in your storage array.

For each UTM LUN mapped to the host, include an entry in the blacklist section of the /etc/multipath.conf file. The entries should follow the pattern of the following example.

blacklist {
   device {
      vendor "*"
      product "Universal Xport"
   }
}

Modify Vendor ID and Product ID

The following example shows the devices section from the /etc/multipath.conf file. Be sure the vendor ID and the product ID for the Sun Storage 6180 array are set as shown in this example:

devices {
   device {
      vendor                "SUN"
      product               "SUN_6180"
      path_grouping_policy  group_by_prio
      prio                  rdac
      getuid_callout        "/lib/udev/scsi_id -g -u -d /dev/%n"
      polling_interval      5
      path_checker          rdac
      path_selector         "round-robin 0"
      hardware_handler      "1 rdac"
      failback              immediate
      features              "2 pg_init_retries 50"
      no_path_retry         30
      rr_min_io             100
   }
}

The following table explains the attributes and values in the devices section of the /etc/multipath.conf file.


TABLE 9 Attributes and Values in the multipath.conf File

Attribute | Parameter Value | Description
path_grouping_policy | group_by_prio | The path grouping policy to be applied to this specific vendor and product storage.
prio | rdac | The program and arguments to determine the path priority routine. The specified routine should return a numeric value specifying the relative priority of this path. Higher numbers have a higher priority.
getuid_callout | "/lib/udev/scsi_id -g -u -d /dev/%n" | The program and arguments to call out to obtain a unique path identifier.
polling_interval | 5 | The interval between two path checks, in seconds.
path_checker | rdac | The method used to determine the state of the path.
path_selector | "round-robin 0" | The path selector algorithm to use when there is more than one path in a path group.
hardware_handler | "1 rdac" | The hardware handler to use for handling device-specific knowledge.
failback | 10 | A parameter to tell the daemon how to manage path group failback. In this example, the parameter is set to 10 seconds, so failback occurs 10 seconds after a device comes online. To disable the failback, set this parameter to manual. Set it to immediate to force failback to occur immediately.
features | "2 pg_init_retries 50" | Features to be enabled. This parameter sets the kernel parameter pg_init_retries to 50. The pg_init_retries parameter is used to retry the mode select commands.
no_path_retry | 30 | Specify the number of retries before queuing is disabled. Set this parameter to fail for immediate failure (no queuing). When this parameter is set to queue, queuing continues indefinitely.
rr_min_io | 100 | The number of I/Os to route to a path before switching to the next path in the same path group. This setting applies if there is more than one path in a path group.


Using the Device Mapper Devices

Multipath devices are created under the /dev/ directory with the prefix dm-. These devices are the same as any other block devices on the host. To list all of the multipath devices, run the multipath -ll command. The following example shows system output from the multipath -ll command for one of the multipath devices.

mpathp (3600a0b80005ab177000017544a8d6b92) dm-0 LSI,INF-01-00
[size=5.0G][features=3 queue_if_no_path pg_init_retries 50][hwhandler=1 rdac][rw]
\_ round-robin 0 [prio=6][active]
 \_ 5:0:0:0   sdc  8:32   [active][ready]
\_ round-robin 0 [prio=1][enabled]
 \_ 4:0:0:0   sdb  8:16   [active][ghost]

In this example, the multipath device node for this device is /dev/mapper/mpathp and /dev/dm-0. The following table lists some basic options and parameters for the multipath command.
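
Because the multipath node behaves like any other block device, it can be formatted and mounted directly. The following is a minimal sketch; the mpathp name is taken from the example above and will differ on your system, and the /mnt/6180vol mount point is hypothetical.

# Create a filesystem on the multipath device and mount it
mkfs.ext3 /dev/mapper/mpathp
mkdir -p /mnt/6180vol
mount /dev/mapper/mpathp /mnt/6180vol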


TABLE 10 Options and Parameters for the multipath Command

Command | Description
multipath -h | Prints usage information
multipath -ll | Shows the current multipath topology from all available information (sysfs, the device mapper, path checkers, and so on)
multipath -f map | Flushes the multipath device map specified by the map option, if the map is unused
multipath -F | Flushes all unused multipath device maps


Troubleshooting the Device Mapper


TABLE 11 Troubleshooting the Device Mapper

Situation | Resolution
Is the multipath daemon, multipathd, running? | At the command prompt, enter the command: /etc/init.d/multipathd status.
Why are no devices listed when you run the multipath -ll command? | At the command prompt, enter the command: #cat /proc/scsi/scsi. The system output displays all of the devices that are already discovered. Verify that the multipath.conf file has been updated with proper settings.



Restrictions and Known Issues

The following sections provide information about restrictions, known issues, and bugs (or CRs) filed against this product release. If a recommended workaround is available for a bug, it follows the bug description.

For information about bug fixes in this release, see the Sun Storage Common Array Manager Software Release Notes.

Controller Issues

I/O FAILURE Messages and Illegal Requests in Logs

Bug 7097416--When an OVM 2.2.2 or OEL 5.5 SLES host with the Oracle Hardware Management Package (OHMP) daemon enabled is mapped to volumes on a 6180 array, IO FAILURE and Illegal Request ASC/ASCQ log events can occur.

Workaround--Either disable OHMP or disregard the messages. This is an issue with the host OS.

Incorrect controller cache block size can cause an ancient I/O

Bug 7110592--Firmware 07.80.51.10 can cause ancient I/O reboots if the cache block size does not match the application I/O size.

Workaround--Ensure the application I/O size fits into one cache block. If the cache block size is too small for the application I/O size, it results in a shortage of an internal structure known as a buf_t. For example, if an application issues 32k I/Os, set the cache block size to 32k rather than 4k. By setting the cache block size to match the I/O size, the correct number of buf_t structures is available and the ancient I/O is avoided.

To set the cache block size, go to the Administration page for the selected array.

Firmware revision 07.80.x.x supports the following cache block sizes:

2500-M2: 4k, 8k, 16k, 32k

6x80: 4k, 8k, 16k, 32k

I/O Errors Occur During Controller Firmware Download

Configuration:



Note - This problem does not occur in RHEL version 6.0 with kernel 2.6.33.


Problem or Restriction: An I/O error occurs during an online controller firmware upgrade.

Workaround: To avoid this problem, quiesce the host I/O before performing controller firmware upgrades. To recover from this problem, make sure that the host reports that it has optimal paths available to the storage array controllers, and then resume I/O.

Both RAID Controllers Reboot After 828.5 Days--2500/6000 Arrays

CR 6872995, 6949589-Both RAID controllers reboot after 828.5 days of continuous operation. A timer in the firmware (vxWorks) called “vxAbsTicks” is a 32-bit (double word) integer that keeps count in the 0x0000 0000 format. When this timer rolls over from 0xffffffff to 0x00000000 (after approximately 828.5 days), if there is host I/O to volumes, the associated drives fail with a write failure.

Original Resolution: Every 24 hours, firmware spawns a task--cfgMonitorTask--that checks the value of the vxworks kernel timing counter. For controllers with 03.xx-06.60 firmware (6000 series) and 03.xx-6.70 firmware (2500 series): Both controllers reboot if counter is greater than 825 days.

Final Resolution: Every 24 hours, firmware spawns a task--cfgMonitorTask--that checks the value of the vxworks kernel timing counter.

This fix staggers the reboots of the controllers for approximately five days so the only impact is a small performance degradation while the reboot occurs.

For controllers with firmware 07.15.11.12 or later (6000 series) and firmware 07.35.10.10 or later (2500 series): Controller A reboots if counter is greater than 820 days. Controller B reboots if counter is greater than 825 days.



Note - There is no redundancy for failover in a simplex 2500 configuration or any duplex configuration where a controller is already offline for any reason.


Controller Panics After Removing the Last I/O Module

Problem or Restriction: After removing a second I/O Module from a storage array, the controller panics.

Workaround: After removing an I/O Module, wait at least 10 minutes before removing another I/O Module from the same storage array.

Cache Attempts to Restore the Backup Data on Foreign Devices

Problem or Restriction: Cache restore is attempted when the controller is attached to foreign drive modules, and there is data on the USB devices that the cache has not written to the drive modules.

Workaround:



Caution - Possible loss of data--Failure to perform this workaround could result in data loss.


Before the power is turned off to the system, quiesce the system. You should quiesce the system before the controller or the drive module is moved. This process does not back up the cache, and it does not attempt to restore the data from the USB devices to the foreign drive modules.

Controller Does Not Detect All Hardware Defects on a Newly Replaced Host Interface Card

Problem or Restriction: With power-on diagnostics, some host interface card hardware defects are not found, including problems transferring data across the PCI express bus, interrupt failures, and issues with the internal buffers in the chip.

Workaround: Verify that the host interface cable connections into the Small Form-factor Pluggable (SFP) transceivers are secure. If the problem remains, replace the host interface card.

Unable to Load a Previous Firmware Version

Problem or Restriction: If the controllers are running firmware that uses 64-bit addressing, you cannot load firmware that uses 32-bit addressing if your storage array has these conditions:

Recent code changes have been implemented to fix a 32-bit addressing issue by using 64-bit addressing. After you have updated to a firmware version that uses 64-bit addressing, do not attempt to reload a firmware version that uses 32-bit addressing.

Workaround: If you must replace a firmware version that uses 64-bit addressing with a firmware version that uses 32-bit addressing, contact a Sun Technical Support representative. The Technical Support representative will delete all snapshots before starting the downgrade process. Snapshots of any size will not survive the downgrade process. After the firmware that uses 32-bit addressing boots and runs, no snapshot records will be available to cause errors. After the 32-bit addressing firmware is running, you can re-create the snapshots.

Controller Registers Disabled IPV6 Addresses When Using iSNS with DHCP

Problem or Restriction: This problem occurs when Internet Protocol Version 6 (IPV6) addresses have been disabled on a Sun Storage 6180 array. If the Internet Storage Name Service (iSNS) is enabled and set to obtain configuration data automatically from the Dynamic Host Configuration Protocol (DHCP) server, the IPV6 addresses will be discovered even though they were disabled on the ports of the controllers in the Sun Storage 6180 array.

Workaround: None.

iSNS Does Not Update the iSNS Registration Data When You Change the iSCSI Host Port IP Addresses

Problem or Restriction: This problem occurs when you change the configuration for all of the ports in a storage array from using Dynamic Host Configuration Protocol (DHCP) to using static IP addresses or vice versa. If you are using Internet Storage Name Service (iSNS), the registration of the IP addresses for the ports will be lost.

Workaround: Use one of the following workarounds after you change the IP addresses:

Single Path Data Connections

In a single path data connection, a group of heterogeneous servers is connected to an array through a single connection. Although this connection is technically possible, there is no redundancy, and a connection failure will result in loss of access to the array.



Caution - Because of the single point of failure, single path data connections are not recommended.


Drive Issues

Replacement drive comes in unassigned in an empty storage pool

Bug 7006425--If you create a storage pool with no volumes, a replacement disk drive role is reported as “unassigned.”

Workaround--Delete the empty storage pool and create a new storage pool containing at least one volume.

Drive Module ID of 0 (Zero) Is Restricted

Problem or Restriction: Because of the potential conflict between a drive module intentionally set to 0 (zero) and a drive module ID switch error that causes a drive module ID to be accidentally set to 0, do not set your drive module ID to 0.

Workaround: Change drive module ID to a value other than zero.

Drives Cannot Be Removed During a Drive Firmware Download

Problem or Restriction: Removing and reinserting drives during the drive firmware download process might cause the drive to be shown as unavailable, failed, or missing.

Workaround: Remove the drive, and either reinsert it or reboot the controllers to recover the drive.

Drive Modules Cannot Be Added During an I/O Module Firmware Download

Problem or Restriction: If you add a drive module using the loop topology option during Environmental Services Monitor (I/O Module) firmware download, the I/O Module firmware download process might fail due to a disconnected loop.

Workaround: When adding the drive module, do not follow the loop topology option. If you add the drive module by connecting the ports to the end of the storage array without disconnecting the loop, the I/O Module firmware download is successful.

Drives Fail to Spin Up if Inserted While the Storage Array Reboots

Problem or Restriction: Removing drives while a storage array is online and then waiting to reinsert the drives until the storage array is starting after a reboot might cause the drives to be marked as failed after the storage array comes back online.

Workaround: Wait until the storage array is back online before reinserting the drives. If the storage array still does not recognize the drives, reconstruct the drives using Sun Storage Common Array Manager software.

Linux Issues

Linux RDAC 09.03.0C02.0453 - Make Install Dependencies

Configuration:

Problem or Restriction: CR 7042297--Before running a "make" on the RDAC driver, the following kernel packages are required:

DMMP Device Handler scsi_dh_rdac.c Missing SUN, SUN_6180

Operating System: SUSE Linux Enterprise Server 11.1 SP1

Problem or Restriction: CR 7026018--Support for SUN and SUN_6180 is missing from the rdac_dev_list in the device handler scsi_dh_rdac.c file. For more information, refer to https://bugzilla.novell.com/show_bug.cgi?id=682738.

Workaround:

1. Verify DMMP is installed (see Installing the Device Mapper Multi-Path).

2. Download the scsi_dh_rdac KMP package for the SLES 11 architecture:

http://drivers.suse.com/driver-process/pub/update/LSI/sle11sp1/common/

3. Add the vendor ID and product ID to the /etc/multipath.conf file:

a. Open /etc/multipath.conf.

b. Copy a device block of code starting with "device {" and ending with "}", and paste the copy at the end of the file, within the "devices {" and "}" block (see the sketch after this procedure).

c. Change the vendor ID and product ID to the values "SUN" and "SUN_6180", as shown in the following example:

vendor		"SUN"
product		"SUN_6180"

d. Save your changes and exit the file.

4. Reboot the host.
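
A minimal sketch of the resulting entry is shown below; only the vendor and product lines change, and the remaining attributes are copied unchanged from the device block you duplicated (a complete Sun Storage 6180 entry appears in Modify Vendor ID and Product ID):

device {
   vendor   "SUN"
   product  "SUN_6180"
   # ...remaining attributes copied from the duplicated device block...
}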

For more information about the DMMP device handler, see Device Mapper Multipath (DMMP) for the Linux Operating System.

I/O FAILURE Messages and Illegal Requests in Logs

Operating System: SUSE Linux Enterprise Server 11.1 SP1

Problem or Restriction: Several IO FAILURE and Illegal Request log events with ASC/ASCQ SCSI errors appear in /var/log/messages while running vdbench on 25 LUNs.

An application client may request any one or all of the supported mode pages from the device server. If an application client issues a MODE SENSE command with a page code or subpage code value not implemented by the logical unit, the command shall be terminated with CHECK CONDITION status, with the sense key set to ILLEGAL REQUEST, and the additional sense code set to INVALID FIELD IN CDB.

The controller responds correctly (05h/24h/00h - INVALID FIELD IN CDB). The smartctl tool may need to request all supported mode pages first before sending an unsupported mode page request.

Workaround: Disable the SLES 11 smartd monitoring service to stop these messages:

System Services (Runlevel) > smartd > Disable

Cluster Startup Fails When Devices Are in a Unit Attention State

Configuration:

Problem or Restriction: This problem occurs when the DMMP failover driver is used with the RHEL version 6.0 OS. If you try to set up a Red Hat cluster with the DMMP failover driver, cluster startup might fail during the unfencing stage, where each host registers itself with the SCSI devices. The devices are in a Unit Attention state, which causes the SCSI registration command issued by the host during startup to fail. When the cluster manager (cman) service starts, the logs show that the nodes failed to unfence themselves, which causes the cluster startup to fail.

Workaround: To avoid this problem, do not use the DMMP failover driver with RHEL version 6.0. To recover from this problem, open a terminal window, and run:

sg_turs -n 5 <device>

where <device> is a SCSI device that is virtualized by the DMMP failover driver. Run this command on every /dev/sd device that the DMMP failover driver manages. It issues a Test Unit Ready command to clear the Unit Attention state and allow node registration on the device to succeed.
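
For example, the following loop is a minimal sketch that issues the command to a set of SCSI disk nodes; adjust the /dev/sd[a-z] pattern so that it covers only the devices managed by the DMMP failover driver on your system:

for dev in /dev/sd[a-z]; do
   sg_turs -n 5 "$dev"
done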

Node Unfencing Fails when Automatically Generated Host Keys Are Used during a Red Hat Cluster Suite Services Startup

Operating System: Red Hat Enterprise Linux 6 with Native Cluster

Problem or Restriction: This problem occurs the first time a cluster is set up when the cluster.conf file does not have manually defined host keys. When the cluster.conf file was first defined to set up a cluster with SCSI reservation fencing, the cluster services were started on the nodes. With SCSI reservation fencing, the hosts try to generate and register a key on the clustered devices as part of the cluster manager's startup. The cluster manager service (cman) fails to start, and a "key cannot be zero" error message appears in the host log.

Workaround: To avoid this problem, use only power fencing. Do not use SCSI reservation fencing. To recover from this problem, change to manually defined host keys, and restart the cluster services.

Red Hat Cluster Suite Services with GFS2 Mounts Cannot Transfer Between Nodes when the Client Mounts with NFSv4

Operating System: Red Hat Enterprise Linux 6 Native Cluster

Problem or Restriction: This problem occurs during an attempt to transfer a cluster service manually when a client is connected using NFSv4. The Global File System (GFS) 2 mount points failed to unmount, which caused the Red Hat Cluster Suite Services to go to the Failed state. The mount point, and all other mount points exported from the same virtual IP address, becomes inaccessible.

Workaround: To avoid this problem, configure the cluster nodes to not allow mount requests from NFS version 4 (NFSv4) clients. To recover from this problem, restart the failed service on the node that previously owned it.

Host Aborts I/O Operations

Operating System: Red Hat Enterprise Linux version 6.0

Problem or Restriction: This problem occurs during an online controller firmware upgrade. The controller is not responding quickly enough to a host read or write to satisfy the host. After 30 seconds, the host sends a command to abort the I/O. The I/O aborts, and then starts again successfully.

Workaround: Quiesce the host I/O before performing the controller firmware upgrade. To recover from this problem, either reset the server, or wait until the host returns an I/O error.

Host Attempts to Abort I/O Indefinitely

Operating System: Red Hat Enterprise Linux version 6.0 with kernel 2.6.32

Red Hat Bugzilla Number: 620391



Note - This problem does not occur in Red Hat Enterprise Linux version 6.0 with kernel 2.6.33.


Problem or Restriction: This problem occurs under situations of heavy stress when storage arrays take longer than expected to return the status of a read or write. The storage array must be sufficiently stressed that the controller response is more than 30 seconds, at which time a command is issued to abort if no response is received. The abort will be retried indefinitely even when the abort is successful. The application either times out or hangs indefinitely on the read or write that is being aborted. The messages file reports the aborts, and resets might occur on the LUN, the host, or the bus.

Factors affecting controller response include Remote Volume Mirroring, the controller state, the number of attached hosts, and the total throughput.

Workaround: To recover from this problem, reset the power on the server.

Linux Host Hangs During Reboot After New Volumes Are Added

Problem or Restriction: When a Red Hat Enterprise Linux 5.1 host has more than two new volumes mapped to it, it hangs during reboot.

Workaround: Try one of the following options:

Linux I/O Timeout Error Occurs After Enabling a Switch Port

Problem or Restriction: An I/O timeout error occurs after you enable a switch port. This problem occurs when two or more Brocade switches are used, and both the active and the alternative paths from the host are located on one switch, and both the active path and the alternative path from the storage array are located on another switch. For the host to detect the storage array on the other switch, the switches are cascaded, and a shared zone is defined between the switches. This problem occurs on fabrics managing high I/O traffic.

Workaround: Reconfigure the switch zoning to avoid the need for cascading. Limit the zones within each switch, and do not create zones across the switches. Configure the active paths from the host and the storage array on one switch, and all of the alternative paths from the host and the storage array on the other switch.



Note - Configuring the active paths from all of the hosts on one switch will not provide optimal performance. To resolve this performance issue, alternate the hosts in terms of using active and alternative paths.
For switch 1, connect to storage array 1, and use the following arrangement: Host A_Active port, Host B_Alternative port, Host C_Active port, Host D_Alternative port.
For switch 2, connect to storage array 2, and use the following arrangement: Host A_Alternative port, Host B_Active port, Host C_Alternative port, Host D_Active port.


Linux Host Hangs During Reboot

Problem or Restriction: Red Hat Enterprise Linux 5.2 PowerPC (PPC) only. On rare occasions, the host hangs during reboot.

Workaround: Reset the host.

Cannot Find an Online Path After a Controller Failover

Problem or Restriction: Linux Red Hat 5 and Linux SLES 10 SP1 only. After a controller failover in an open SAN environment, a controller comes back online, but the path is not rediscovered by the multi-path proxy (MPP). After a controller comes online in a fabric connection (through a SAN switch), it is possible that a link will not be established by the Emulex HBA driver. This behavior is seen only if the SAN switch is “default” zoned (all ports see all other ports). This condition can result in an I/O error if the other path is taken offline.

Workaround: Set all of the SAN switches to be “default” zoned.

I/O Errors Occur During a Linux System Reboot

Problem or Restriction: SLES 10 SP2 only. I/O errors occur during a system reboot, and the host resets.

Workaround: None.

MEL Events Occur During the Start-of-Day Sequence

Problem or Restriction: Red Hat Enterprise Linux 4.7 only. When the controller is going through the start-of-day sequence, the drive channel does not achieve link speed detection and logs a Major Event Log (MEL) event. This event recovers within a few seconds, and a second MEL event occurs. The second MEL event indicates that the link speed detection was achieved.

Workaround: None.

Windows Issues

Hibernate Does Not Work in a Root Boot Environment for Windows Server 2003

Problem or Restriction: Windows Server 2003 only. When you configure a storage array as a boot device, the system shows a blue screen and does not respond when it is manually or automatically set to hibernate.

Workaround: If you use a storage array as a boot device for the Windows Server 2003 operating system, you cannot use the hibernation feature.

No Automatic Synchronization MEL Events on ACS and Deferred Lockdown

Problem or Restriction: Windows Server 2003 only. No Automatic Synchronization MEL events are received when the controllers go through autocode synchronization (ACS) and a deferred lockdown.

Workaround: You must verify the firmware on the controllers.

AIX Issues

Volume Transfer Fails

Problem or Restriction: AIX only. When you perform a firmware download under a heavy I/O load, the download fails because the volumes take too long to transfer to the alternate controller.

Workaround: Execute the download again. To avoid this problem, perform the firmware updates during non-peak I/O activity times.

Documentation Issues

Sun Storage 6180 Site Preparation Guide

Problem: The Sun Storage 6180 Site Preparation Guide contains discrepancies for certain array specifications.

Workaround: Note the following corrected capacity, environment, and physical values.


TABLE 12 Hardware Specifications

Capacity

  • For controller trays with four host ports, up to three expansion trays can be added.
  • For controller trays with eight host ports, up to six expansion trays can be added.
  • The array configuration supports unlimited global hot-spare drives, and each spare can be used for any disk in the array configuration.

Environment

  • Controller tray AC input: 50/60 Hz, 3.96 A max. operating @ 115 VAC, 2.06 A max. operating @ 230 VAC (115 to 230 VAC range)
  • Expansion tray AC input: 50/60 Hz, 3.90 A max. operating @ 115 VAC, 2.06 A max. operating @ 230 VAC (90 to 264 VAC range)

Tray Dimensions

  • 5.1 in. x 17.6 in. x 22.5 in. (12.95 cm x 44.7 cm x 57.15 cm)

Weight

  • The maximum weight of a fully populated controller or expansion tray is 93 pounds (42.18 kilograms).


Sun Storage 6180 Array Hardware Installation Guide

Problem: The Note on page 15 of the Sun Storage 6180 Array Hardware Installation Guide incorrectly references the Common Array Manager Release Notes for information about Installing Firmware for Additional Expansion Modules.

Correction: Refer to the “Adding Expansion Trays” procedure in Service Advisor. If you need to upgrade to the latest firmware revision, see “Upgrade Firmware” in Service Advisor.


Product Documentation

Related product documentation is available at:

http://download.oracle.com/docs/cd/E19373-01/index.html


Application | Title
Site planning information | Sun Storage 6180 Array Site Planning Guide
Regulatory and safety information | Sun Storage 6180 Array Safety and Compliance Manual
Installation overview for rack-mounted arrays | Getting Started Guide for Sun Storage 6180 Rack Ready Arrays
Array installation instructions | Sun Storage 6180 Array Hardware Installation Guide
Rack installation instructions | Sun Rack II User’s Guide
Rail kit installation instructions | Sun Modular Storage Rail Kit Installation Guide
PDU installation instructions | Sun Cabinet Power Distribution Unit (PDU) Installation Guide
CAM software installation and initial configuration instructions | Sun Storage Common Array Manager Quick Start Guide; Sun Storage Common Array Manager Software Installation and Setup Guide
Command line management interface reference | Sun Storage Common Array Manager CLI Guide
Release-specific information for the Sun Storage Common Array Manager | Sun Storage Common Array Manager Release Notes
Multipath failover driver installation and configuration | Sun StorageTek MPIO Device Specific Module Installation Guide For Microsoft Windows OS; Sun StorageTek RDAC Multipath Failover Driver Installation Guide For Linux OS


Documentation, Support, and Training

These web sites provide additional resources:


[1] Oracle recommends installing the latest Solaris update.
[2] Oracle recommends installing the latest Solaris update.
[3] Refer to the HBA manufacturer’s web site for support information.
[4] Refer to the HBA manufacturer’s web site for support information.
[5] Refer to the HBA manufacturer’s web site for support information.