Sun Storage 6580 and 6780 Array Hardware Release Notes

This document contains important release information about the Sun Storage 6580 and 6780 arrays running Sun Storage Common Array Manager (CAM), Version 6.8.x. Read about issues or requirements that can affect the installation and operation of the arrays.

The release notes consist of the following sections:


What’s In This Firmware Release

Array controller firmware version 07.77.xx.xx remains the same as delivered with CAM 6.8.0 and provides the following updates for Sun Storage 6580 and Sun Storage 6780 arrays:

For information about Sun Storage Common Array Manager enhancements and bug fixes for this release, see the Sun Storage Common Array Manager Software Release Notes.

Downloading Patches and Updates

To download Sun Storage Common Array Manager, as well as server patches pertaining to the Sun Storage 6580 and 6780 arrays, follow this procedure.

1. Sign in to My Oracle Support:

https://support.oracle.com/

2. At the top of the page, click the Patches & Updates tab.

3. Search for CAM software and patches in one of two ways:

To search by patch name or number:

a. Under the Patch Search section, click the Search tab.

b. In the Patch Name or Number field, enter the patch number. For example, 10272123 or 141474-01.

c. Click Search.

To search by product or family:

a. Under the Patch Search section, click the Search tab, and then click the Product or Family (Advanced Search) link.

b. Check Include all products in a family.

c. In the Product field, start typing the product name. For example, “Sun Storage Common Array Manager (CAM)” or “Sun Storage 6580 array.”

d. Select the product name when it appears.

e. In the Release field, expand the product name, check the release and patches you want to download, and then click Close.

f. Click Search.

4. Select the patch you want to download.

5. Click ReadMe for a patch description and installation instructions.

6. Click Download for a single patch, or Add to Plan to download a group of patches.

Disk Drive Replacement Changes

Sun Storage 6580 array and Sun Storage 6780 array disk drives can now be replaced by customers. Previously designated as field replaceable units (FRUs), disk drives are now customer-replaceable units (CRUs).

When inserting a replacement disk drive, be sure the replacement drive is not assigned to a virtual disk (that is, its role is “unassigned”). All data on the replacement drive will be erased before the controller reconstructs the data onto it.



Caution - Potential for data loss--Use care when determining which disk drive to use as a replacement for a failed disk drive. All data on the replacement disk drive will be erased before data reconstruction occurs.


Cache Battery Expiration Notification

Sun Storage 6580 and 6780 arrays use smart battery technology, in which the battery maintains and reports its own status, providing more accurate battery status reporting. When a battery can no longer hold a charge, it is flagged for replacement rather than being reported as expired by the array firmware.


About the Array

The Sun Storage 6580 and 6780 array models are compared in TABLE 1.


TABLE 1 Comparison of Sun Storage 6580 and 6780 Array Configurations

Feature                              6580                                6780
Total cache size per array           8 or 16 Gbytes                      8, 16, 32, or 64 Gbytes
Number of host ports                 8 (4-Gbit/sec or 8-Gbit/sec)        8 or 16 (4-Gbit/sec or 8-Gbit/sec)
Host interface cards                 2                                   2 or 4
Maximum number of drives supported   256                                 448
Disk reads                           IOPS[1]: 115K                       IOPS[1]: 175K
                                     Throughput: 3000 MB/second          Throughput: 6400 MB/second
Maximum array configuration          1x16                                1x28
Maximum raw capacity                 512 Tbytes                          896 Tbytes
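The maximum raw capacities follow from the drive counts and the largest supported drives: 256 drives x 2 Tbytes = 512 Tbytes for the Sun Storage 6580, and 448 drives x 2 Tbytes = 896 Tbytes for the Sun Storage 6780.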




Note - Upgrading from a 61x0 array to a Sun Storage 6580 or 6780 array is a data-in-place migration.



System Requirements

The software and hardware products that have been tested and qualified to work with the Sun Storage 6580 and 6780 arrays are described in the following sections.

Firmware Requirements

The features described in these release notes require Sun Storage 6580 and 6780 array firmware version 07.77.xx.xx. This firmware version (or higher) is installed on the array controllers prior to shipment and is also delivered with the latest version of Sun Storage Common Array Manager (CAM).

To update controller firmware on an existing array:

1. Download the software as described in Downloading Patches and Updates.

2. Log into Sun Storage Common Array Manager.

3. Select the check box to the left of the array you want to update.

4. Click Install Firmware Baseline.

5. Follow the wizard instructions.

Disk Drives and Tray Capacity

TABLE 2 lists the size, spindle speed, type, interface speed, and tray capacity for the supported Fibre Channel (FC), Serial Advanced Technology Attachment (SATA), and Serial Attached SCSI (SAS) disk drives for Sun Storage 6580 and 6780 arrays. Additional legacy drives might also be supported with this product.



Note - For special requirements concerning FC Solid State Disks (SSDs), see Solid State Disk Requirements.



TABLE 2 Supported Disk Drives

Drive                               Description
FC, 73GB, Solid State Disk          73-Gbyte SSD drives (4 Gbits/sec); 1168 Gbytes per tray
FC, 146G15K                         146-Gbyte 15,000-RPM FC drives (4 Gbits/sec); 2336 Gbytes per tray
FC, 300G15K                         300-Gbyte 15,000-RPM FC drives (4 Gbits/sec); 4800 Gbytes per tray
FC, 400G10K                         400-Gbyte 10,000-RPM FC drives (4 Gbits/sec); 6400 Gbytes per tray
FC, 450G15K                         450-Gbyte 15,000-RPM FC drives (4 Gbits/sec); 7200 Gbytes per tray
SATA-2, 500G7.2K                    500-Gbyte 7,200-RPM SATA drives (3 Gbits/sec); 8000 Gbytes per tray
FC, 600GB15K, Encryption Capable    600-Gbyte 15,000-RPM FC drives, Encryption Capable (4 Gbits/sec); 9600 Gbytes per tray
SATA-2, 750G7.2K                    750-Gbyte 7,200-RPM SATA drives (3 Gbits/sec); 12000 Gbytes per tray
SATA-2, 1T7.2K                      1-Tbyte 7,200-RPM SATA drives (3 Gbits/sec); 16000 Gbytes per tray
SATA-2, 2TB7.2K                     2-Tbyte 7,200-RPM SATA drives (3 Gbits/sec); 32000 Gbytes per tray
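The per-tray capacities in TABLE 2 reflect trays of 16 drives: for example, 16 x 146 Gbytes = 2336 Gbytes per tray, and 16 x 2 Tbytes = 32000 Gbytes per tray.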


Solid State Disk Requirements

Solid State Drives (SSDs) have the following installation requirements:

Array Expansion Module Support

TABLE 3 lists supported expansion modules. To add capacity to a Sun Storage 6580 or 6780 array, refer to the following Service Advisor procedures:



Caution - To add trays with existing stored data, contact Oracle Support for assistance to avoid data loss.



TABLE 3 Supported Expansion Modules and IOM Codes

Array controller: Sun Storage 6580 and Sun Storage 6780
Firmware: 07.77.13.11

Supported Expansion Module    IOM Code
CSM200                        98E4
CSM100 FC                     9682
CSM100 SATA                   9728
FLA200                        9330
FLC200-dSATA                  9566
FLC200-iSATA                  9728


For additional baseline firmware information, such as controller, NVSRAM, disk drive, version, and firmware file, see Sun Storage Array Baseline Firmware Reference.

Data Host Requirements

This section describes supported data host software, HBAs, and switches.

Multipathing Software

TABLE 4 provides a summary of the data host requirements for the Sun Storage 6580 and 6780 arrays. It lists the current multipathing software by operating system; supported host bus adapters (HBAs) are listed separately in TABLE 5 through TABLE 8.

You must install multipathing software on each data host that communicates with Sun Storage 6580 and 6780 arrays.



Note - Single path data connections are not recommended. For more information, see Single Path Data Connections.


TABLE 4 lists supported multipathing software by operating system.


TABLE 4 Multipathing Software

Solaris 10[2]
  Multipathing software: STMS/MPxIO; minimum version: Update 6
  Host type setting: Solaris with MPxIO
  Notes: Multipathing software is included in the Solaris 10 OS.

Solaris 10 with DMP
  Multipathing software: Symantec Veritas Dynamic Multi-Pathing (DMP); minimum version: 5.0MP3
  Host type setting: Solaris with DMP

Windows 2003 SP2, R2, non-clustered
  Multipathing software: MPIO; minimum version: 01.03.0302.0504
  Host type setting: Windows 2003 Non-clustered

Windows 2003/2008 MSCS Cluster
  Multipathing software: MPIO; minimum version: 01.03.0302.0504
  Host type setting: Windows Server 2003 Clustered
  Notes: You must use MPIO for 7.10 and above.

Windows 2003 non-clustered with DMP
  Multipathing software: DMP; minimum version: 5.1
  Host type setting: Windows Server 2003 Non-clustered (with Veritas DMP)
  Notes: See the Symantec Hardware Compatibility List (HCL).

Windows 2003 clustered with DMP
  Multipathing software: DMP; minimum version: 5.1
  Host type setting: Windows Server 2003 Clustered (with Veritas DMP)
  Notes: See the Symantec HCL.

Windows 2008 R2 (64-bit only)
  Multipathing software: MPIO; minimum version: 01.03.0302.0504
  Host type setting: Windows Server 2003

Oracle VM 2.2.2
  Multipathing software: RDAC; minimum version: 09.03.0C02.0331
  Host type setting: Linux
  Notes: RDAC version 09.03.0C02.0331 is included with Oracle VM 2.2.2.

Oracle Linux 6.0, 5.6, 5.5[3]
  Multipathing software: RDAC; minimum version: 09.03.0C02.0453
  Host type setting: Linux

SUSE Linux Enterprise Server 11 and 11.1
  Multipathing software: RDAC/MPP or DMMP; minimum version: 09.03.0C00.0453
  Host type setting: Linux

SLES 10.4, 10 SP1
  Multipathing software: RDAC/MPP; minimum version: 09.03.0C02.0453
  Host type setting: Linux

Red Hat 6.0, 5.6, 5.5
  Multipathing software: RDAC; minimum version: 09.03.0C00.0453
  Host type setting: Linux

Red Hat 4, SLES 10
  Multipathing software: RDAC/MPP; minimum version: 09.03.0C00.0453
  Host type setting: Linux

Red Hat or SLES with DMP
  Multipathing software: DMP; minimum version: 5.0MP3
  Host type setting: Linux with DMP
  Notes: See the Symantec HCL.

HP-UX
  Multipathing software: Veritas DMP; minimum version: 5.0MP3
  Host type setting: HP-UX
  Notes: See the Symantec HCL.

AIX 6.1, 5.3
  Multipathing software: Cambex DPF; minimum version: 6.1.0.63
  Host type setting: AIX

AIX 5.3, 6.1 with DMP
  Multipathing software: DMP; minimum version: 5.0
  Host type setting: AIX with DMP
  Notes: See the Symantec HCL.




Note - Download the multipathing drivers from My Oracle Support at https://support.oracle.com. Search for the driver using one of the keywords “MPIO,” “RDAC,” or “MPP.” See Downloading Patches and Updates.




Note - The multipathing driver for the IBM AIX platform is Veritas DMP, bundled in Veritas Storage Foundation 5.0 for Sun Storage 6580 and 6780 arrays. Download the Array Support Library (ASL) from http://support.veritas.com/.


Supported Host Bus Adaptors (HBAs)

TABLE 5 through TABLE 8 list supported HBAs and other data host platform elements by operating system.

To obtain the latest HBA firmware, download operating system updates from the web site of the operating system vendor.



Note - Always install the multipathing software before you install any OS patches.



TABLE 5 Supported HBAs for Solaris Data Host Platforms

Solaris 10 SPARC[4] (minimum OS: Update 6)
  Sun 2-Gbit HBAs: SG-XPCI1FC-QL2 (6767A), SG-XPCI2FC-QF2-Z (6768A), SG-XPCI1FC-EM2, SG-XPCI2FC-EM2
  Sun 4-Gbit HBAs: SG-XPCIE1FC-QF4, SG-XPCIE2FC-QF4, SG-XPCIE1FC-EM4, SG-XPCIE2FC-EM4, SG-XPCI1FC-QF4, SG-XPCI2FC-QF4, SG-XPCI1FC-EM4, SG-XPCI2FC-EM4, SG-XPCIE2FCGBE-Q-Z, SG-XPCIE2FCGBE-E-Z
  Sun 8-Gbit HBAs: SG-XPCIE1FC-QF8-Z, SG-XPCIE2FC-QF8-Z, SG-XPCIE1FC-EM8-Z, SG-XPCIE2FC-EM8-Z, SG-XPCIEFCGBE-Q8, SG-XPCIEFCGBE-E8

Solaris 10 x64/x86 (minimum OS: Update 6)
  Sun 2-Gbit HBAs: SG-XPCI1FC-QL2 (6767A), SG-XPCI2FC-QF2-Z (6768A), SG-XPCI1FC-EM2, SG-XPCI2FC-EM2
  Sun 4-Gbit HBAs: SG-XPCIE1FC-QF4, SG-XPCIE2FC-QF4, SG-XPCIE1FC-EM4, SG-XPCIE2FC-EM4, SG-XPCI1FC-QF4, SG-XPCI2FC-QF4, SG-XPCI1FC-EM4, SG-XPCI2FC-EM4, SG-XPCIE2FCGBE-Q-Z, SG-XPCIE2FCGBE-E-Z
  Sun 8-Gbit HBAs: SG-XPCIE1FC-QF8-Z, SG-XPCIE2FC-QF8-Z, SG-XPCIE1FC-EM8-Z, SG-XPCIE2FC-EM8-Z, SG-XPCIEFCGBE-E8, SG-XPCIEFCGBE-Q8



TABLE 6 Supported HBAs for Microsoft Windows Data Host Platforms

Microsoft Windows Server 2008, R2 (64-bit only) / AMD x86 and EM64T
  HBAs[5]: QLogic QLE 256x, QLE 246x, QLA 246x, QLA 234x, QLA 2310F; Emulex LPe12000/LPe12002/LPe1250, LPe11000/LPe11002/LPe1150, LP11000/LP11002/LP1150, LP9802/9802DC/982, LP952/LP9002/LP9002DC, LP10000/10000DC/LP1050
  Sun 2-Gb HBAs: SG-XPCI1FC-EM2, SG-XPCI2FC-EM2, SG-XPCI1FC-QL2, SG-XPCI2FC-QF2-Z
  Sun 4-Gb HBAs: SG-XPCIE1FC-QF4, SG-XPCIE2FC-QF4, SG-XPCIE1FC-EM4, SG-XPCIE2FC-EM4, SG-XPCI1FC-QF4, SG-XPCI2FC-QF4, SG-XPCI1FC-EM4, SG-XPCI2FC-EM4, SG-XPCIE2FCGBE-Q-Z, SG-XPCIE2FCGBE-E-Z
  Sun 8-Gb HBAs: SG-XPCIE1FC-QF8-Z, SG-XPCIE2FC-QF8-Z, SG-XPCIE1FC-EM8-Z, SG-XPCIE2FC-EM8-Z, SG-XPCIEFCGBE-Q8, SG-XPCIEFCGBE-E8

Microsoft Windows Server 2003 SP2, R2 / AMD x86 and EM64T
  HBAs[5]: QLogic QLE 256x, QLE 246x, QLA 246x, QLA 234x, QLA 2310F; Emulex LPe12000/LPe12002/LPe1250, LPe11000/LPe11002/LPe1150, LP11000/LP11002/LP1150, LP9802/9802DC/982, LP952/LP9002/LP9002DC, LP10000/10000DC/LP1050
  Sun 2-Gb HBAs: SG-XPCI1FC-EM2, SG-XPCI2FC-EM2, SG-XPCI1FC-QL2, SG-XPCI2FC-QF2-Z
  Sun 4-Gb HBAs: SG-XPCIE1FC-QF4, SG-XPCIE2FC-QF4, SG-XPCIE1FC-EM4, SG-XPCIE2FC-EM4, SG-XPCI1FC-QF4, SG-XPCI2FC-QF4, SG-XPCI1FC-EM4, SG-XPCI2FC-EM4, SG-XPCIE2FCGBE-Q-Z, SG-XPCIE2FCGBE-E-Z
  Sun 8-Gb HBAs: SG-XPCIE1FC-QF8-Z, SG-XPCIE2FC-QF8-Z, SG-XPCIE1FC-EM8-Z, SG-XPCIE2FC-EM8-Z, SG-XPCIEFCGBE-E8, SG-XPCIEFCGBE-Q8

Microsoft Windows 2003 64-bit with SP2, R2 / x64 (AMD), EM64T, and IA64
  HBAs[5]: QLogic QLE 256x, QLE 246x, QLA 246x, QLA 234x, QLA 2310F; Emulex LPe12000/LPe12002/LPe1250, LPe11000/LPe11002/LPe1150, LP11000/LP11002/LP1150, LP9802/9802DC/982, LP952/LP9002/LP9002DC, LP10000/10000DC/LP1050
  Sun 2-Gb HBAs: SG-XPCI1FC-EM2, SG-XPCI2FC-EM2, SG-XPCI1FC-QL2, SG-XPCI2FC-QF2-Z
  Sun 4-Gb HBAs: SG-XPCIE1FC-QF4, SG-XPCIE2FC-QF4, SG-XPCIE1FC-EM4, SG-XPCIE2FC-EM4, SG-XPCI1FC-QF4, SG-XPCI2FC-QF4, SG-XPCI1FC-EM4, SG-XPCI2FC-EM4, SG-XPCIE2FCGBE-Q-Z, SG-XPCIE2FCGBE-E-Z
  Sun 8-Gb HBAs: SG-XPCIE1FC-QF8-Z, SG-XPCIE2FC-QF8-Z, SG-XPCIE1FC-EM8-Z, SG-XPCIE2FC-EM8-Z, SG-XPCIEFCGBE-Q8, SG-XPCIEFCGBE-E8



TABLE 7 Supported HBAs for Linux Data Host Platforms

SLES 11.1, 11, 10.4, 10.1
  HBAs[6]: QLogic QLE 256x, QLE 246x, QLA 246x, QLA 234x, QLA 2310F; Emulex LP982/LP9802/9802DC, LP9002/LP9002DC/LP952, LP10000/10000DC/LP1050, LP11000/LP11002/LP1150, LPe11000/LPe11002/LPe1150, LPe12000/LPe12002/LPe1250
  Sun 2-Gb HBAs: SG-XPCI1FC-EM2, SG-XPCI2FC-EM2, SG-XPCI1FC-QL2, SG-XPCI2FC-QF2-Z
  Sun 4-Gb HBAs: SG-XPCIE1FC-QF4, SG-XPCIE2FC-QF4, SG-XPCIE1FC-EM4, SG-XPCIE2FC-EM4, SG-XPCI1FC-QF4, SG-XPCI2FC-QF4, SG-XPCI1FC-EM4, SG-XPCI2FC-EM4, SG-XPCIE2FCGBE-Q-Z, SG-XPCIE2FCGBE-E-Z
  Sun 8-Gb HBAs: SG-XPCIE1FC-QF8-Z, SG-XPCIE2FC-QF8-Z, SG-XPCIE1FC-EM8-Z, SG-XPCIE2FC-EM8-Z, SG-XPCIEFCGBE-E8, SG-XPCIEFCGBE-Q8

Oracle Linux 6.0, 5.6, 5.5; Oracle VM 2.2.2; RHEL 6.0, 5.6, 5.5
  HBAs[6]: QLogic QLE 256x, QLE 246x, QLA 246x, QLA 234x, QLA 2310F; Emulex LP982/LP9802/9802DC, LP9002/LP9002DC/LP952, LP10000/10000DC/LP1050, LPe11000/LPe11002/LPe1150, LPe12000/LPe12002/LPe1250
  Sun 2-Gb HBAs: SG-XPCI1FC-EM2, SG-XPCI2FC-EM2, SG-XPCI1FC-QL2, SG-XPCI2FC-QF2-Z
  Sun 4-Gb HBAs: SG-XPCIE1FC-QF4, SG-XPCIE2FC-QF4, SG-XPCIE1FC-EM4, SG-XPCIE2FC-EM4, SG-XPCI1FC-QF4, SG-XPCI2FC-QF4, SG-XPCI1FC-EM4-Z, SG-XPCI2FC-EM4-Z, SG-XPCIE2FCGBE-Q-Z, SG-XPCIE2FCGBE-E-Z
  Sun 8-Gb HBAs: SG-XPCIE1FC-QF8-Z, SG-XPCIE2FC-QF8-Z, SG-XPCIE1FC-EM8-Z, SG-XPCIE2FC-EM8-Z, SG-XPCIEFCGBE-Q8, SG-XPCIEFCGBE-E8

RHEL 4u7, RHEL 4.8
  HBAs[6]: QLogic QLE 256x, QLE 246x, QLA 246x, QLA 234x, QLA 2310F; Emulex LP982/LP9802/9802DC, LP9002/LP9002DC/LP952, LP10000/10000DC/LP1050, LPe11000/LPe11002/LPe1150, LPe12000/LPe12002/LPe1250
  Sun 2-Gb HBAs: SG-XPCI1FC-EM2, SG-XPCI2FC-EM2, SG-XPCI1FC-QL2, SG-XPCI2FC-QF2-Z
  Sun 4-Gb HBAs: SG-XPCIE1FC-QF4, SG-XPCIE2FC-QF4, SG-XPCIE1FC-EM4, SG-XPCIE2FC-EM4, SG-XPCI1FC-QF4, SG-XPCI2FC-QF4, SG-XPCI1FC-EM4-Z, SG-XPCI2FC-EM4-Z, SG-XPCIE2FCGBE-Q-Z, SG-XPCIE2FCGBE-E-Z
  Sun 8-Gb HBAs: SG-XPCIE1FC-QF8-Z, SG-XPCIE2FC-QF8-Z, SG-XPCIE1FC-EM8-Z, SG-XPCIE2FC-EM8-Z, SG-XPCIEFCGBE-Q8, SG-XPCIEFCGBE-E8



TABLE 8 Other Supported Data Host Platforms

HP-UX 11.31 (HP RISC and IA64 servers)
  HBAs[7]: HP A6795A, HP A6826A, HP A6684A, HP A6685A, HP AB378A, HP AB379A, HP AD300A, HP AD355A, AH400A (IA64), AH401A (IA64)

HP-UX B.11.23 (HP RISC and IA64 servers)
  HBAs[7]: HP A6795A, HP A6826A, HP A9784A, HP AB378A, HP AB379A, HP AD300A, HP AD355A

IBM AIX 5.2, 5.3, 6.1 (Power servers)
  HBAs[7]: IBM 5716, IBM 5758, IBM 5759, IBM 6228, IBM 6239


Supported FC and Multilayer Switches

The following FC fabric and multilayer switches are compatible for connecting data hosts and Sun Storage 6580 and 6780 arrays:

Supported Premium Features

Tier 1 Support

The Sun Storage 6180 arrays support the Tier 1 classified licensable features. Tier 1 classified arrays include the StorageTek 6140 and Sun Storage 6180 arrays.

Available licenses for the Sun Storage 6180:

Tier 2 Support

The Sun Storage 6580 and 6780 arrays support the Tier 2 classified licensable features. Tier 2 classified arrays include the StorageTek 6540, Sun Storage 6580, and Sun Storage 6780 arrays.

Available licenses for the Sun Storage 6580 and 6780 arrays:


Device Mapper Multipath (DMMP) for the Linux Operating System

Device Mapper (DM) is a generic framework for block devices provided by the Linux operating system. It supports concatenation, striping, snapshots, mirroring, and multipathing. The multipath function is provided by the combination of the kernel modules and user space tools.

The DMMP is supported on SUSE Linux Enterprise Server (SLES) Version 11 and 11.1. The SLES installation must have components at or above the version levels shown in the following table before you install the DMMP.


TABLE 9 Minimum Supported Configurations for the SLES 11 Operating System

Component                Version
Kernel                   kernel-default-2.6.27.29-0.1.1
scsi_dh_rdac KMP         lsi-scsi_dh_rdac-kmp-default-0.0_2.6.27.19_5-1
Device Mapper library    device-mapper-1.02.27-8.6
Multipath-tools          multipath-tools-0.4.8-40.6.1


To update a component, download the appropriate package from the Novell website at http://download.novell.com/patch/finder. The Novell publication, SUSE Linux Enterprise Server 11 Installation and Administration Guide, describes how to install and upgrade the operating system.

Device Mapper Features

Known Limitations and Issues of the Device Mapper

Installing the Device Mapper Multi-Path

1. Use the media supplied by your operating system vendor to install SLES 11.

2. Install the errata kernel 2.6.27.29-0.1.

Refer to the SUSE Linux Enterprise Server 11 Installation and Administration Guide for the installation procedure.

3. Reboot your system so that it boots the 2.6.27.29-0.1 kernel.

4. On the command line, enter rpm -qa |grep device-mapper, and check the system output to see if the correct level of the device mapper component is installed.

5. On the command line, enter rpm -qa |grep multipath-tools and check the system output to see if the correct level of the multipath tools is installed.

6. Update the configuration file /etc/multipath.conf.

See Setting Up the multipath.conf File for detailed information about the /etc/multipath.conf file.

7. On the command line, enter chkconfig multipathd on.

This command enables the multipathd daemon when the system boots.

8. Edit the /etc/sysconfig/kernel file to add the directive scsi_dh_rdac to the INITRD_MODULES section of the file, as shown in the example after this procedure.

9. Download the KMP package for scsi_dh_rdac for the SLES 11 architecture from the website http://forgeftp.novell.com/driver-process/staging/pub/update/lsi/sle11/common/, and install the package on the host.

10. Update the boot loader to point to the new initrd image, and reboot the host with the new initrd image.
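The INITRD_MODULES edit in Step 8 adds scsi_dh_rdac to the list of modules already present in /etc/sysconfig/kernel. A sketch of the resulting line is shown below; the other module names are placeholders and will differ on your system:

INITRD_MODULES="ata_piix qla2xxx scsi_dh_rdac"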

Setting Up the multipath.conf File

The multipath.conf file is the configuration file for the multipath daemon, multipathd. The multipath.conf file overwrites the built-in configuration table for multipathd. Any line in the file whose first non-white-space character is # is considered a comment line. Empty lines are ignored.

Installing the Device Mapper Multi-Path for SLES 11.1

All of the components required for DMMP are included on the SUSE Linux Enterprise Server (SLES) version 11.1 installation media. However, you might need to select specific components based on the storage hardware type. By default, DMMP is disabled in SLES. Complete the following steps to enable the DMMP components on the host.

1. On the command line, type chkconfig multipath on.

The multipathd daemon is enabled when the system starts again.

2. Edit the /etc/sysconfig/kernel file to add the directive scsi_dh_rdac to the INITRD_MODULES section of the file.

3. Create a new initrd image to include scsi_dh_rdac in the RAM disk, using a command similar to the following:

mkinitrd -i /boot/initrd-rdac -k /boot/vmlinuz

4. Update the boot loader to point to the new initrd image, and reboot the host with the new initrd image.
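After the host reboots with the new initrd image, you can confirm that the multipath devices are present; for example, the multipath -ll command (described in Using the Device Mapper Devices) should list each mapped volume and its paths.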

Copy and Rename the Sample File

Copy and rename the sample file located at /usr/share/doc/packages/multipath-tools/multipath.conf.synthetic to /etc/multipath.conf. Configuration changes are now accomplished by editing the new /etc/multipath.conf file. All entries for multipath devices are commented out initially. The configuration file is divided into five sections: defaults, blacklist, blacklist_exceptions, multipaths, and devices.
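A command sketch of the copy-and-rename step described above, using the paths given in this section:

cp /usr/share/doc/packages/multipath-tools/multipath.conf.synthetic /etc/multipath.conf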

Determine the Attributes of a MultiPath Device

To determine the attributes of a multipath device, check the multipaths section of the /etc/multipath.conf file, then the devices section, then the defaults section. The model settings used for multipath devices are listed for each storage array and include matching vendor and product values. Add matching storage vendor and product values for each type of volume used in your storage array.
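For illustration, the following is a hypothetical multipaths entry that assigns an alias to a single volume; the WWID is the one shown in the multipath -ll example later in this section, and the alias name is arbitrary:

multipaths {
    multipath {
        wwid   3600a0b80005ab177000017544a8d6b92
        alias  array1-vol1
    }
}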

For each UTM LUN mapped to the host, include an entry in the blacklist section of the /etc/multipath.conf file. The entries should follow the pattern of the following example.

blacklist {
    device {
        vendor  "*"
        product "Universal Xport"
    }
}

The following example shows the devices section for LSI storage from the sample /etc/multipath.conf file. Update the vendor ID, which is LSI in the sample file, and the product ID, which is INF-01-00 in the sample file, to match the equipment in the storage array.

devices {
    device {
        vendor                "LSI"
        product               "INF-01-00"
        path_grouping_policy  group_by_prio
        prio                  rdac
        getuid_callout        "/lib/udev/scsi_id -g -u -d /dev/%n"
        polling_interval      5
        path_checker          rdac
        path_selector         "round-robin 0"
        hardware_handler      "1 rdac"
        failback              immediate
        features              "2 pg_init_retries 50"
        no_path_retry         30
        rr_min_io             100
    }
}

The following table explains the attributes and values in the devices section of the /etc/multipath.conf file.


TABLE 10 Attributes and Values in the multipath.conf File

path_grouping_policy  group_by_prio
  The path grouping policy applied to this specific vendor and product storage.

prio  rdac
  The program and arguments that determine the path priority routine. The specified routine should return a numeric value specifying the relative priority of this path. Higher numbers have a higher priority.

getuid_callout  "/lib/udev/scsi_id -g -u -d /dev/%n"
  The program and arguments to call out to obtain a unique path identifier.

polling_interval  5
  The interval between two path checks, in seconds.

path_checker  rdac
  The method used to determine the state of the path.

path_selector  "round-robin 0"
  The path selector algorithm to use when there is more than one path in a path group.

hardware_handler  "1 rdac"
  The hardware handler to use for handling device-specific knowledge.

failback  10
  A parameter that tells the daemon how to manage path group failback. In this example, the parameter is set to 10 seconds, so failback occurs 10 seconds after a device comes online. To disable failback, set this parameter to manual. Set it to immediate to force failback to occur immediately.

features  "2 pg_init_retries 50"
  The features to be enabled. This parameter sets the kernel parameter pg_init_retries to 50. The pg_init_retries parameter is used to retry the mode select commands.

no_path_retry  30
  The number of retries before queuing is disabled. Set this parameter to fail for immediate failure (no queuing). When this parameter is set to queue, queuing continues indefinitely.

rr_min_io  100
  The number of I/Os to route to a path before switching to the next path in the same path group. This setting applies when there is more than one path in a path group.


Using the Device Mapper Devices

Multipath devices are created under the /dev directory with the prefix dm-. These devices are the same as any other block devices on the host. To list all of the multipath devices, run the multipath -ll command. The following example shows system output from the multipath -ll command for one of the multipath devices.

mpathp (3600a0b80005ab177000017544a8d6b92) dm-0 LSI,INF-01-00
[size=5.0G][features=3 queue_if_no_path pg_init_retries 50][hwhandler=1 rdac][rw]
\_ round-robin 0 [prio=6][active]
 \_ 5:0:0:0   sdc  8:32   [active][ready]
\_ round-robin 0 [prio=1][enabled]
 \_ 4:0:0:0   sdb  8:16   [active][ghost]

In this example, the multipath device node for this device is /dev/mapper/mpathp and /dev/dm-0. The following table lists some basic options and parameters for the multipath command.
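Because a multipath device behaves like any other block device, you can, for example, create a file system on it and mount it. The following sketch assumes the device node from the example above and a hypothetical mount point:

mkfs.ext3 /dev/mapper/mpathp
mkdir -p /mnt/mpathp
mount /dev/mapper/mpathp /mnt/mpathp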


TABLE 11 Options and Parameters for the multipath Command

Command             Description
multipath -h        Prints usage information.
multipath -ll       Shows the current multipath topology from all available information (sysfs, the device mapper, path checkers, and so on).
multipath -f map    Flushes the multipath device map specified by the map option, if the map is unused.
multipath -F        Flushes all unused multipath device maps.


Troubleshooting the Device Mapper


TABLE 12 Troubleshooting the Device Mapper

Is the multipath daemon, multipathd, running?
  At the command prompt, enter: /etc/init.d/multipathd status

Why are no devices listed when you run the multipath -ll command?
  At the command prompt, enter: cat /proc/scsi/scsi. The system output displays all of the devices that are already discovered. Verify that the multipath.conf file has been updated with the proper settings.



Restrictions and Known Issues

The following sections provide information about restrictions, known issues, and bugs filed against this product release:

If a recommended workaround is available for a bug, it follows the bug description.

Installation and Hardware Related Issues

This section describes known issues and bugs related to installing and initially configuring Sun Storage 6580 and 6780 arrays, as well as general issues related to the array hardware and firmware.

Single Path Data Connections

In a single path data connection, a group of heterogeneous servers is connected to an array through a single connection. Although this connection is technically possible, there is no redundancy, and a connection failure will result in loss of access to the array.



Caution - Because of the single point of failure, single path data connections are not recommended.


Setting the Tray Link Rate

When you set the tray link rate for an expansion tray, all expansion trays connected to the same drive channel must be set to operate at the same data transfer rate (speed).

For details about how to set the tray link rate, see “Setting the Tray Link Rate” in the Hardware Installation Guide for Sun Storage 6580 and 6780 Arrays.

Upgrading the StorageTek 6540 Array

CR 6783749--When upgrading a StorageTek 6540 array to a Sun Storage 6580 or 6780 Array, you cannot change the tray ID 85 to tray ID 99 using CAM.

Workaround: You can use controller tray ID 85 for array configurations up to a maximum of 256 drives.

Replacing CRUs/FRUs in Less Than 15 Minutes


Caution - Without adequate ventilation and air circulation, the controller tray will overheat, resulting in potential damage to all customer-replaceable units (CRUs) or field-replaceable units (FRUs). Do not allow any CRU/FRU slot to remain empty for an extended time. Replace the failed CRU/FRU within 15 minutes.


System Cabinet Doors Must Be Closed


Caution - The front and back doors of the system cabinet must be closed for compliance with domestic and international EMI regulations, as well as for proper equipment cooling. Do not block or cover the openings of the system cabinet. Cabinet airflow is from front to back. Allow at least 30 inches (76.2 cm) in front of the cabinet, and at least 24 inches (60.96 cm) behind the cabinet, for service clearance, proper ventilation, and heat dissipation.


The cfgadm -c unconfigure Command Unconfigures UTM LUNs Only and Not Other Data LUNs (Solaris 10)

CR 6362850--The cfgadm -c unconfigure command unconfigures Universal Transport Mechanism (UTM) LUNs only and not other data LUNs. When this happens, you will not be able to unconfigure LUNs.

Workaround: Obtain Solaris 10 patch 118833-20 (SPARC) or patch 118855-16 (x86) to fix this issue.

Intermittent Power Supply Failure Notification

CR 6760395--CAM logEvent messages intermittently report power supply failures that change back to optimal 12 seconds later. This is caused by devices not responding to polling.

Workaround: No workaround required. You can ignore the failure messages.

Tray ID Diagnostic Codes

See Appendix C, Troubleshooting and Operational Procedures, in the Hardware Installation Guide for Sun Storage 6580 and 6780 Arrays for a description of the controller tray and expansion tray diagnostic codes.

Controller Issues

I/O Errors Occur During Controller Firmware Download

Configuration:



Note - This problem does not occur in RHEL version 6.0 with kernel 2.6.33.


Problem or Restriction: An I/O error occurs during an online controller firmware upgrade.

Workaround: To avoid this problem, quiesce the host I/O before performing controller firmware upgrades. To recover from this problem, make sure that the host reports that it has optimal paths available to the storage array controllers, and then resume I/O.

Both RAID Controllers Reboot After 828.5 Days--2500/6000 Arrays

CR 6872995, 6949589--Both RAID controllers reboot after 828.5 days of continuous operation. A timer in the controller firmware (VxWorks) called “vxAbsTicks” is a 32-bit (double-word) integer counter. When this timer rolls over from 0xffffffff to 0x00000000 (after approximately 828.5 days), any host I/O to volumes causes the associated drives to fail with a write failure.
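Assuming the default VxWorks system clock rate of 60 ticks per second, the rollover interval works out as expected: 2^32 ticks / 60 ticks per second / 86,400 seconds per day is approximately 828.5 days.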

Original Resolution: Every 24 hours, firmware spawns a task--cfgMonitorTask--that checks the value of the vxworks kernel timing counter. For controllers with 03.xx-06.60 firmware (6000 series) and 03.xx-6.70 firmware (2500 series): Both controllers reboot if counter is greater than 825 days.

Final Resolution: Every 24 hours, firmware spawns a task--cfgMonitorTask--that checks the value of the vxworks kernel timing counter.

This fix staggers the controller reboots by approximately five days, so the only impact is a small performance degradation while each reboot occurs.

For controllers with firmware 07.15.11.12 or later (6000 series) and firmware 07.35.10.10 or later (2500 series): Controller A reboots if counter is greater than 820 days. Controller B reboots if counter is greater than 825 days.



Note - There is no redundancy for failover in a simplex 2500 configuration or any duplex configuration where a controller is already offline for any reason.


Linux Issues

Linux RDAC 09.03.0C02.0453 - Make Install Dependencies

Configuration:

Problem or Restriction: CR 7042297--Before running a "make" on the RDAC driver, the following kernel packages are required:

Log Events Using SLES 11.1 With smartd Monitoring Enabled

CR 7014293--When a SLES 11.1 host with smartd monitoring enabled is mapped to volumes on either a Sun Storage 2500-M2 or Sun Storage 6780 array, it is possible to receive “IO FAILURE” and “Illegal Request ASC/ASCQ” log events.

Workaround: Either disable smartd monitoring or disregard the messages. This is an issue with the host OS.

Oracle Linux 6 Boots With Messages

CR 7038184, 7028670, 7028672: When booting an Oracle Linux 6.0 host mapped to volumes on Sun Storage 2500-M2 and Sun Storage 6780 arrays, it is possible to receive one of these messages:

FIXME driver has no support for subenclosures (1)
FIXME driver has no support for subenclosures (3)
Failed to bind enclosure -19

Workaround: This is a cosmetic issue with no impact to the I/O path. There is no workaround.

IO FAILURE Messages and Illegal Requests in Logs

Operating System: SUSE Linux Enterprise Server (SLES) 11.1

Problem or Restriction: CR 7014293--Several “IO FAILURE” and “Illegal Request” log events with ASC/ASCQ SCSI errors appear in /var/log/messages while running vdbench on 25 LUNs.

An application client may request any one or all of the supported mode pages from the device server. If an application client issues a MODE SENSE command with a page code or subpage code value not implemented by the logical unit, the command shall be terminated with CHECK CONDITION status, with the sense key set to ILLEGAL REQUEST, and the additional sense code set to INVALID FIELD IN CDB.

The controller responds correctly (05h/24h/00h--INVALID FIELD IN CDB). The smartctl tool may need to request all supported mode pages before sending an unsupported mode page request.

Workaround: Disable the SLES 11.1 smartd monitoring service to stop these messages. In YaST, select System Services (Runlevel) > smartd > Disable.
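Alternatively, the monitoring service can usually be stopped and disabled from the command line; this sketch assumes the init script installed by the SLES smartmontools package is named smartd:

/etc/init.d/smartd stop
chkconfig smartd off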

Cluster Startup Fails When Devices Are in a Unit Attention State

Configuration:

Problem or Restriction: This problem occurs when the DMMP failover driver is used with the RHEL version 6.0 OS. If you try to set up a Red Hat cluster with the DMMP failover driver, cluster startup might fail during the unfencing stage, where each host registers itself with the SCSI devices. The devices are in a Unit Attention state, which causes the SCSI registration command issued by the host during startup to fail. When the cluster manager (cman) service starts, the logs show that the nodes failed to unfence themselves, which causes the cluster startup to fail.

Workaround: To avoid this problem, do not use the DMMP failover driver with RHEL version 6.0. To recover from this problem, open a terminal window, and run:

sg_turs -n 5 <device>

where <device> is a SCSI device that is virtualized by the DMMP failover driver. Run this command on every /dev/sd device that the DMMP failover driver manages. It issues a Test Unit Ready command to clear the Unit Attention state and allow node registration on the device to succeed.
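If the failover driver manages many devices, a small shell loop saves repetition. This sketch assumes that every /dev/sd* device on the host is managed by the DMMP failover driver; adjust the device list if that is not the case:

for dev in /dev/sd*; do
    sg_turs -n 5 "$dev"
done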

Node Unfencing Fails when Automatically Generated Host Keys Are Used during a Red Hat Cluster Suite Services Startup

Operating System: Red Hat Enterprise Linux 6 with Native Cluster

Problem or Restriction: This problem occurs the first time a cluster is set up when the cluster.conf file does not have manually defined host keys. When the cluster.conf file was first defined to set up a cluster with SCSI reservation fencing, the cluster services were started on the nodes. With SCSI reservation fencing, the hosts try to generate and register a key on the clustered devices as part of the cluster manager's startup. The cluster manager service (cman) fails to start, and the “key cannot be zero” error message appears in the host log.

Workaround: To avoid this problem, use only power fencing. Do not use SCSI reservation fencing. To recover from this problem, change to manually defined host keys, and restart the cluster services.

Red Hat Cluster Suite Services with GFS2 Mounts Cannot Transfer Between Nodes when the Client Mounts with NFSv4

Operating System: Red Hat Enterprise Linux 6 Native Cluster

Problem or Restriction: This problem occurs during an attempt to transfer a cluster service manually when a client is connected using NFSv4. The Global File System (GFS) 2 mount points failed to unmount, which caused the Red Hat Cluster Suite Services to go to the Failed state. The mount point, and all other mount points exported from the same virtual IP address, becomes inaccessible.

Workaround: To avoid this problem, configure the cluster nodes to not allow mount requests from NFS version 4 (NFSv4) clients. To recover from this problem, restart the failed service on the node that previously owned it.

Host Aborts I/O Operations

Operating System: Red Hat Enterprise Linux version 6.0

Problem or Restriction: This problem occurs during an online controller firmware upgrade. The controller is not responding quickly enough to a host read or write to satisfy the host. After 30 seconds, the host sends a command to abort the I/O. The I/O aborts, and then starts again successfully.

Workaround: Quiesce the host I/O before performing the controller firmware upgrade. To recover from this problem, either reset the server, or wait until the host returns an I/O error.

Host Attempts to Abort I/O Indefinitely

Operating System: Red Hat Enterprise Linux version 6.0 with kernel 2.6.32

Red Hat Bugzilla Number: 620391



Note - This problem does not occur in Red Hat Enterprise Linux version 6.0 with kernel 2.6.33.


Problem or Restriction: This problem occurs under situations of heavy stress when storage arrays take longer than expected to return the status of a read or write. The storage array must be sufficiently stressed that the controller response is more than 30 seconds, at which time a command is issued to abort if no response is received. The abort will be retried indefinitely even when the abort is successful. The application either times out or hangs indefinitely on the read or write that is being aborted. The messages file reports the aborts, and resets might occur on the LUN, the host, or the bus.

Factors affecting controller response include Remote Volume Mirroring, the controller state, the number of attached hosts, and the total throughput.

Workaround: To recover from this problem, reset the power on the server.

Linux Host Hangs During Reboot After New Volumes Are Added

Problem or Restriction: When a Red Hat Enterprise Linux 5.1 host has more than two new volumes mapped to it, it hangs during reboot.

Workaround: Try one of the following options:

Linux I/O Timeout Error Occurs After Enabling a Switch Port

Problem or Restriction: An I/O timeout error occurs after you enable a switch port. This problem occurs when two or more Brocade switches are used, and both the active and the alternative paths from the host are located on one switch, and both the active path and the alternative path from the storage array are located on another switch. For the host to detect the storage array on the other switch, the switches are cascaded, and a shared zone is defined between the switches. This problem occurs on fabrics managing high I/O traffic.

Workaround: Reconfigure the switch zoning to avoid the need for cascading. Limit the zones within each switch, and do not create zones across the switches. Configure the active paths from the host and the storage array on one switch, and all of the alternative paths from the host and the storage array on the other switch.



Note - Configuring the active paths from all of the hosts on one switch will not provide optimal performance. To resolve this performance issue, alternate the hosts in terms of using active and alternative paths.
For switch 1, connect to storage array 1, and use the following arrangement: Host A_Active port, Host B_Alternative port, Host C_Active port, Host D_Alternative port.
For switch 2, connect to storage array 2, and use the following arrangement: Host A_Alternative port, Host B_Active port, Host C_Alternative port, Host D_Active port.


Linux Host Hangs During Reboot

Problem or Restriction: Red Hat Enterprise Linux 5.2 PowerPC (PPC) only. On rare occasions, the host hangs during reboot.

Workaround: Reset the host.

Cannot Find an Online Path After a Controller Failover

Problem or Restriction: Linux Red Hat 5 and Linux SLES 10 SP1 only. After a controller failover in an open SAN environment, a controller comes back online, but the path is not rediscovered by the multi-path proxy (MPP). After a controller comes online in a fabric connection (through a SAN switch), it is possible that a link will not be established by the Emulex HBA driver. This behavior is seen only if the SAN switch is “default” zoned (all ports see all other ports). This condition can result in an I/O error if the other path is taken offline.

Workaround: Set all of the SAN switches to be “default” zoned.

I/O Errors Occur During a Linux System Reboot

Problem or Restriction: Linux SLES 10 SP2 only. I/O errors occur during a system reboot, and the host resets.

Workaround: None.

MEL Events Occur During the Start-of-Day Sequence

Problem or Restriction: Red Hat Enterprise Linux 4.7 only. When the controller is going through the start-of-day sequence, the drive channel does not achieve link speed detection and logs a Major Event Log (MEL) event. This event recovers within a few seconds, and a second MEL event occurs. The second MEL event indicates that the link speed detection was achieved.

Workaround: None.

Documentation Issues

This section describes issues related to Sun Storage 6580 and 6780 array documentation.

Total Cache Size Specification for Sun Storage 6780 Array

In Table 1-1 of the Hardware Installation Guide for Sun Storage 6580 and 6780 Arrays (820-5773-11), the value for “Total cache size” is incorrectly reported as “16 Gbytes or 32 Gbytes.” As of the CAM 6.6 release, the revised value is “8, 16, 32, or 64 Gbytes.” The revised value is documented in TABLE 1 of this release note document.


Product Documentation


Site planning information: Site Planning Guide for Sun Storage 6580 and 6780 Arrays

Regulatory and safety information: Sun Storage Regulatory and Safety Compliance Manual

Installation overview for rack-mounted arrays: Getting Started Guide for Sun Storage 6580 and 6780 Rack Mounted Arrays; Getting Started Guide for Sun Storage 6580 and 6780 Rack Ready Arrays

Rack installation instructions: Sun Rack II User’s Guide

Rail kit installation instructions: Sun Modular Storage Rail Kit Installation Guide

PDU installation instructions: Power Distribution Unit Installation Guide for Sun Storage 6580 and 6780 Arrays and Sun StorageTek 2500 and 6000 Array Series

Array installation instructions: Hardware Installation Guide for Sun Storage 6580 and 6780 Arrays

Upgrading a Sun StorageTek 6540 array to a Sun Storage 6580 or 6780 array: Sun Storage 6000 Series Hardware Upgrade Guide

Release-specific information for the Sun StorageTek Common Array Manager: Sun Storage Common Array Manager Release Notes

Software installation and initial configuration instructions: Sun Storage Common Array Manager Software Installation and Setup Guide

Reference information for the Common Array Manager CLI: Sun Storage Common Array Manager CLI Guide

Multipath failover driver installation and configuration: Sun StorageTek MPIO Device Specific Module Installation Guide For Microsoft Windows OS; Sun StorageTek RDAC Multipath Failover Driver Installation Guide For Linux OS



Documentation, Support, and Training

These web sites provide additional resources:


Table footnotes:
[1] Input/output operations per second.
[2] Oracle recommends installing the latest Solaris update.
[3] Unbreakable Enterprise Kernel not supported for this release.
[4] Oracle recommends installing the latest Solaris update.
[5] Refer to the HBA manufacturer’s web site for support information.
[6] Refer to the HBA manufacturer’s web site for support information.
[7] Refer to the HBA manufacturer’s web site for support information.