Sun Storage 2500-M2 Arrays Hardware Release Notes

This document contains important release information about Oracle’s Sun Storage 2500-M2 Arrays managed by Sun Storage Common Array Manager (CAM), Version 6.8.1. Read this document so that you are aware of issues or requirements that can affect the installation and operation of the array.

The release notes consist of the following sections:


What’s New for This Release

Release 6.8.1 adds support for Windows platform data hosts for Sun Storage 2500-M2 arrays. This is achieved via updated NVSRAM, included with the CAM 6.8.1 download.


Product Overview

The Sun Storage 2500-M2 Arrays are a family of storage products that provide high-capacity, high-reliability storage in a compact configuration. The controller tray, with two controller modules, provides the interface between a data host and the disk drives. Three array models are offered:

Sun Storage 2540-M2 Array, with Fibre Channel host connectivity

Sun Storage 2530-M2 Array, with SAS host connectivity

Sun Storage 2501-M2 Array, an expansion tray that adds drive capacity to either controller model

The Sun Storage 2500-M2 Arrays are modular and rack-mountable in industry-standard cabinets. The arrays scale from a single controller tray to a maximum configuration of one controller tray and three expansion trays, for a total of 48 drives attached behind the controllers.

Use Sun Storage Common Array Manager version 6.8 (or higher) to manage the array. See About the Management Software for more information.


About the Management Software

Oracle’s Sun Storage Common Array Manager (CAM) software is a key component for the initial configuration and operation of Sun Storage 2500-M2 Arrays hardware. It is installed on a management host cabled to the array via out-of-band Ethernet. Note: In-band management is also supported.

To download CAM, follow the procedure in the section Downloading Patches and Updates. Then, review the latest Sun Storage Common Array Manager Quick Start Guide and Sun Storage Common Array Manager Installation and Setup Guide to begin installation. CAM documentation can be found here:

http://www.oracle.com/technetwork/documentation/disk-device-194280.html


Downloading Patches and Updates

To download patches and updates from My Oracle Support, including the CAM management software, follow this procedure.

1. Sign in to My Oracle Support:

https://support.oracle.com

2. At the top of the page, click the Patches & Updates tab.

3. Search for software and patches in one of two ways:

To search by patch number:

a. Under the Patch Search section, click the Search tab.

b. In the Patch Name or Number field, enter the patch number. For example, 10272123 or 141474-01.

c. Click Search.

To search by product name:

a. Under the Patch Search section, click the Search tab, and then click the Product or Family (Advanced Search) link.

b. Check Include all products in a family.

c. In the Product field, start typing the product name (for example, "Sun Storage Common Array Manager (CAM)"), and select the product name when it appears.

d. In the Release field, expand the product name, check the release and patches you want to download, and then click Close.

e. Click Search.

4. Select the patch you want to download.

5. Click ReadMe for a patch description and installation instructions.

6. Click Download for a single patch, or Add to Plan to download a group of patches.


System Requirements

The software and hardware products that have been tested and qualified to work with Sun Storage 2500-M2 Arrays are described in the following sections. Sun Storage 2500-M2 Arrays require Sun Storage Common Array Manager, Version 6.8.0 (or higher) software.

Firmware Requirements

The Sun Storage 2500-M2 Arrays require firmware version 07.77.xx.xx. This firmware version (or higher) is installed on the array controllers prior to shipment and is also delivered with Sun Storage Common Array Manager (CAM), Version 6.8.0.

Firmware is bundled with the CAM software download package. To download CAM, follow the procedure in Downloading Patches and Updates.

Supported Disk Drives and Tray Capacity

TABLE 1 lists the disk capacity, form factor, spindle speed, interface type, interface speed, and tray capacity for supported SAS disk drives for the Sun Storage 2500-M2 arrays.


TABLE 1 Supported Disk Drives

Drive | Description
SAS-2, 300G15K | 300-Gbyte 3.5" 15K-RPM SAS-2 drives (6 Gbits/sec); 3600 Gbytes per tray
SAS-2, 600G15K | 600-Gbyte 3.5" 15K-RPM SAS-2 drives (6 Gbits/sec); 7200 Gbytes per tray


Disk Drive Replacement

When inserting a replacement disk drive, be sure the replacement drive is not assigned to a virtual disk (its role is "unassigned"). All data on the replacement drive is erased before the controller reconstructs data onto it.



Caution - Potential for data loss: Use care when determining which disk drive to use as a replacement for a failed disk drive. All data on the replacement disk drive is erased before data reconstruction occurs.


Array Expansion Module Support

The Sun Storage 2530-M2 and 2540-M2 arrays can be expanded by adding Sun Storage 2501-M2 array expansion trays. To add capacity to an array, refer to the following Service Advisor procedures:



Caution - To add trays with existing stored data, contact Oracle Support for assistance to avoid data loss.


Data Host Requirements

Multipathing Software

TABLE 2 and TABLE 3 provide a summary of the data host requirements for the Sun Storage 2500-M2 Arrays. You must install multipathing software on each data host that communicates with the Sun Storage 2500-M2 Arrays. For additional information on multipathing software, see the following:



Note - Download RDAC multipathing drivers from My Oracle Support at https://support.oracle.com using keyword “RDAC” or “MPP”. See Downloading Patches and Updates for more information.




Note - Single path data connections are not recommended. For more information, see Single Path Data Connections.



TABLE 2 Supported Fibre Channel Multipathing Software

OS | Multipathing Software | Minimum Version | Host Type Setting | Notes
Solaris 10 | STMS/MPxIO | Update 5[1] | Solaris with MPxIO | Multipathing software included in Solaris OS 10
Oracle Linux[2] 5.5, 5.6, 6.0 | RDAC | 09.03.0C02.0453 | Linux | -
Oracle VM 2.2.2 | RDAC | 09.03.0C02.0331 (included with Oracle VM 2.2.2) | Linux | -
RHEL 5.5, 5.6, 6.0 | RDAC | 09.03.0C02.0453 | Linux | -
SLES 10.1, 10.4, 11, and 11.1 | RDAC/MPP | 09.03.0C02.0453 | Linux | -
SLES 11, 11.1 | DMMP | - | - | See Device Mapper Multipath (DMMP) for the Linux Operating System.
Windows 2003 SP2, R2 Non-clustered | MPIO | 01.03.0302.0504 | Windows 2003 Non-clustered | -
Windows 2003/2008 MSCS Cluster | MPIO | 01.03.0302.0504 | Windows Server 2003 Clustered | You must use MPIO for 7.10 and above
Windows 2003 Non-clustered with DMP | DMP | 5.1 | Windows Server 2003 Non-clustered (with Veritas DMP) | See the Symantec Hardware Compatibility List (HCL)
Windows 2003 Clustered with DMP | DMP | 5.1 | Windows Server 2003 Clustered (with Veritas DMP) | See the Symantec HCL
Windows 2008 R2 (64-bit only) | MPIO | 01.03.0302.0504 | Windows Server 2003 | -


TABLE 3 Supported SAS Multipathing Software

OS | Multipathing Software | Minimum Version | Host Type Setting | Notes
Solaris 10 | MPxIO | Update 9 | Solaris with MPxIO | Multipathing software included in Solaris OS 10
Oracle Linux[3] 5.5 | RDAC | 09.03.0C02.0453 | Linux | -
RHEL 5.5 | RDAC | 09.03.0C02.0453 | Linux | -
Windows 2008, Windows 2008 R2 | MPIO | 01.03.0302.0504 | Windows clustered or Windows non-clustered | -
Windows 2003 SP2 | MPIO | 01.03.0302.0504 | Windows clustered or Windows non-clustered | -


Supported Host Bus Adapters (HBAs)

The following tables list supported HBAs by interface type and operating system:


TABLE 4 Supported Fibre Channel HBAs for Solaris Data Host Platforms

Host OS: Solaris 10u5 (minimum), SPARC

Oracle 2-Gbit HBAs: SG-XPCI1FC-QL2 (6767A), SG-XPCI2FC-QF2-Z (6768A), SG-XPCI1FC-EM2, SG-XPCI2FC-EM2

Oracle 4-Gbit HBAs: SG-XPCIE1FC-QF4, SG-XPCIE2FC-QF4, SG-XPCIE1FC-EM4, SG-XPCIE2FC-EM4, SG-XPCI1FC-QF4, SG-XPCI2FC-QF4, SG-XPCI1FC-EM4, SG-XPCI2FC-EM4, SG-XPCIE2FCGBE-Q-Z, SG-XPCIE2FCGBE-E-Z

Oracle 8-Gbit HBAs: SG-XPCIE1FC-QF8-Z, SG-XPCIE2FC-QF8-Z, SG-XPCIE1FC-EM8-Z, SG-XPCIE2FC-EM8-Z, SG-XPCIEFCGBE-Q8, SG-XPCIEFCGBE-E8

Host OS: Solaris 10u5 (minimum), x64/x86

Oracle 2-Gbit HBAs: SG-XPCI1FC-QL2 (6767A), SG-XPCI2FC-QF2-Z (6768A), SG-XPCI1FC-EM2, SG-XPCI2FC-EM2

Oracle 4-Gbit HBAs: SG-XPCIE1FC-QF4, SG-XPCIE2FC-QF4, SG-XPCIE1FC-EM4, SG-XPCIE2FC-EM4, SG-XPCI1FC-QF4, SG-XPCI2FC-QF4, SG-XPCI1FC-EM4, SG-XPCI2FC-EM4, SG-XPCIE2FCGBE-Q-Z, SG-XPCIE2FCGBE-E-Z

Oracle 8-Gbit HBAs: SG-XPCIE1FC-QF8-Z, SG-XPCIE2FC-QF8-Z, SG-XPCIE1FC-EM8-Z, SG-XPCIE2FC-EM8-Z, SG-XPCIEFCGBE-E8, SG-XPCIEFCGBE-Q8



TABLE 5 Supported Fibre Channel HBAs for Linux Data Host Platforms

Host OS: Oracle Linux 5.5, 5.6, 6.0; RHEL 5.5, 5.6, 6.0; Oracle VM 2.2.2

Generic HBAs[4]: QLogic: QLE 256x, QLE 246x, QLA 246x, QLA 234x, QLA 2310F; Emulex: LP982/LP9802/9802DC, LP9002/LP9002DC/LP952, LP10000/10000DC/LP1050, LPe11000/LPe11002/LPe1150, LPe12000/LPe12002/LPe1250

Oracle 2-Gbit HBAs: SG-XPCI1FC-EM2, SG-XPCI2FC-EM2, SG-XPCI1FC-QL2, SG-XPCI2FC-QF2-Z

Oracle 4-Gbit HBAs: SG-XPCIE1FC-QF4, SG-XPCIE2FC-QF4, SG-XPCIE1FC-EM4, SG-XPCIE2FC-EM4, SG-XPCI1FC-QF4, SG-XPCI2FC-QF4, SG-XPCI1FC-EM4-Z, SG-XPCI2FC-EM4-Z, SG-XPCIE2FCGBE-Q-Z, SG-XPCIE2FCGBE-E-Z

Oracle 8-Gbit HBAs: SG-XPCIE1FC-QF8-Z, SG-XPCIE2FC-QF8-Z, SG-XPCIE1FC-EM8-Z, SG-XPCIE2FC-EM8-Z, SG-XPCIEFCGBE-Q8, SG-XPCIEFCGBE-E8

Host OS: SLES 10.1, 10.4, 11, 11.1

Generic HBAs[4]: QLogic: QLE 256x, QLE 246x, QLA 246x, QLA 234x, QLA 2310F; Emulex: LP982/LP9802/9802DC, LP9002/LP9002DC/LP952, LP10000/10000DC/LP1050, LP11000/LP11002/LP1150, LPe11000/LPe11002/LPe1150, LPe12000/LPe12002/LPe1250

Oracle 2-Gbit HBAs: SG-XPCI1FC-EM2, SG-XPCI2FC-EM2, SG-XPCI1FC-QL2, SG-XPCI2FC-QF2-Z

Oracle 4-Gbit HBAs: SG-XPCIE1FC-QF4, SG-XPCIE2FC-QF4, SG-XPCIE1FC-EM4, SG-XPCIE2FC-EM4, SG-XPCI1FC-QF4, SG-XPCI2FC-QF4, SG-XPCI1FC-EM4, SG-XPCI2FC-EM4, SG-XPCIE2FCGBE-Q-Z, SG-XPCIE2FCGBE-E-Z

Oracle 8-Gbit HBAs: SG-XPCIE1FC-QF8-Z, SG-XPCIE2FC-QF8-Z, SG-XPCIE1FC-EM8-Z, SG-XPCIE2FC-EM8-Z, SG-XPCIEFCGBE-E8, SG-XPCIEFCGBE-Q8



TABLE 6 Supported Fibre Channel HBAs for Windows Data Host Platforms

Host OS / Servers: Microsoft Windows 2008, R2 Server; 32-bit / x86 (IA32); 64-bit / x64 (AMD), EM64T, IA64

Generic HBAs[5]: QLogic: QLE 256x, QLE 246x, QLA 246x, QLA 234x, QLA 2310F; Emulex: LPe12000/LPe12002/LPe1250, LPe11000/LPe11002/LPe1150, LP11000/LP11002/LP1150, LP9802/9802DC/982, LP952/LP9002/LP9002DC, 10000/10000DC/LP1050

Sun 2-Gb HBAs: SG-XPCI1FC-EM2, SG-XPCI2FC-EM2, SG-XPCI1FC-QL2, SG-XPCI2FC-QF2-Z

Sun 4-Gb HBAs: SG-XPCIE1FC-QF4, SG-XPCIE2FC-QF4, SG-XPCIE1FC-EM4, SG-XPCIE2FC-EM4, SG-XPCI1FC-QF4, SG-XPCI2FC-QF4, SG-XPCI1FC-EM4, SG-XPCI2FC-EM4, SG-XPCIE2FCGBE-Q-Z, SG-XPCIE2FCGBE-E-Z

Sun 8-Gb HBAs: SG-XPCIE1FC-QF8-Z, SG-XPCIE2FC-QF8-Z, SG-XPCIE1FC-EM8-Z, SG-XPCIE2FC-EM8-Z, SG-XPCIEFCGBE-Q8, SG-XPCIEFCGBE-E8

Host OS / Servers: Microsoft Windows 2003, 32-bit with SP1 R2 / x86 (IA32)

Generic HBAs[5]: QLogic: QLE 256x, QLE 246x, QLA 246x, QLA 234x, QLA 2310F; Emulex: LPe12000/LPe12002/LPe1250, LPe11000/LPe11002/LPe1150, LP11000/LP11002/LP1150, LP9802/9802DC/982, LP952/LP9002/LP9002DC, 10000/10000DC/LP1050

Sun 2-Gb HBAs: SG-XPCI1FC-EM2, SG-XPCI2FC-EM2, SG-XPCI1FC-QL2, SG-XPCI2FC-QF2-Z

Sun 4-Gb HBAs: SG-XPCIE1FC-QF4, SG-XPCIE2FC-QF4, SG-XPCIE1FC-EM4, SG-XPCIE2FC-EM4, SG-XPCI1FC-QF4, SG-XPCI2FC-QF4, SG-XPCI1FC-EM4, SG-XPCI2FC-EM4, SG-XPCIE2FCGBE-Q-Z, SG-XPCIE2FCGBE-E-Z

Sun 8-Gb HBAs: SG-XPCIE1FC-QF8-Z, SG-XPCIE2FC-QF8-Z, SG-XPCIE1FC-EM8-Z, SG-XPCIE2FC-EM8-Z, SG-XPCIEFCGBE-E8, SG-XPCIEFCGBE-Q8

Host OS / Servers: Microsoft Windows 2003, 64-bit with SP1 R2 / x64 (AMD), EM64T, IA64

Generic HBAs[5]: QLogic: QLE 256x, QLE 246x, QLA 246x, QLA 234x, QLA 2310F; Emulex: LPe12000/LPe12002/LPe1250, LPe11000/LPe11002/LPe1150, LP11000/LP11002/LP1150, LP9802/9802DC/982, LP952/LP9002/LP9002DC, 10000/10000DC/LP1050

Sun 2-Gb HBAs: SG-XPCI1FC-EM2, SG-XPCI2FC-EM2, SG-XPCI1FC-QL2, SG-XPCI2FC-QF2-Z

Sun 4-Gb HBAs: SG-XPCIE1FC-QF4, SG-XPCIE2FC-QF4, SG-XPCIE1FC-EM4, SG-XPCIE2FC-EM4, SG-XPCI1FC-QF4, SG-XPCI2FC-QF4, SG-XPCI1FC-EM4, SG-XPCI2FC-EM4, SG-XPCIE2FCGBE-Q-Z, SG-XPCIE2FCGBE-E-Z

Sun 8-Gb HBAs: SG-XPCIE1FC-QF8-Z, SG-XPCIE2FC-QF8-Z, SG-XPCIE1FC-EM8-Z, SG-XPCIE2FC-EM8-Z, SG-XPCIEFCGBE-Q8, SG-XPCIEFCGBE-E8



TABLE 7 Supported SAS HBAs for Solaris and Linux Data Host Platforms

Host OS | Oracle 3-Gbit HBAs (SAS-1)[6] | Oracle 6-Gbit HBAs (SAS-2)[7]
Solaris 10u9 (minimum), Oracle Linux 5.5, RHEL 5.5 | SG-XPCIE8SAS-E-Z, SG-XPCIE8SAS-EB-Z | SG(X)-SAS6-EXT-Z, SG(X)-SAS6-EM-Z



TABLE 8 Supported SAS HBAs for Microsoft Windows Data Host Platforms

Host OS | Oracle 3-Gbit HBAs (SAS-1) | Oracle 6-Gbit HBAs (SAS-2)
Windows 2008, Windows 2008 R2, Windows 2003 SP2[8] | SG-XPCIE8SAS-E-Z[9] | SG(X)-SAS6-EXT-Z[10]


SAS-1 HBA Settings

TABLE 9 lists supported HBA settings for SAS-1 HBA compatibility.

Configuration: Firmware 01.29.06.00-IT with NVDATA 2DC5, BIOS 6.28.00.00, FCode 1.00.49.


TABLE 9 SAS-1 HBA Settings

Host OS | Settings
Solaris 10u9, SPARC | HBA defaults
Solaris 10u9, x86 | IODeviceMissingDelay 20; ReportDeviceMissingDelay 20
Oracle Linux 5.5, RHEL 5.5 | IODeviceMissingDelay 8; ReportDeviceMissingDelay 144


Supported FC and Multilayer Switches

The following FC fabric and multilayer switches are compatible for connecting data hosts and the Sun Storage 2540-M2 array. See the release notes for your switch hardware for firmware support information.


Device Mapper Multipath (DMMP) for the Linux Operating System

Device Mapper (DM) is a generic framework for block devices provided by the Linux operating system. It supports concatenation, striping, snapshots, mirroring, and multipathing. The multipath function is provided by a combination of kernel modules and user-space tools.

DMMP is supported on SUSE Linux Enterprise Server (SLES) versions 11 and 11.1. The SLES installation must have components at or above the version levels shown in the following table before you install DMMP.


TABLE 10 Minimum Supported Configurations for the SLES 11 Operating System

Component | Version
Kernel | kernel-default-2.6.27.29-0.1.1
scsi_dh_rdac KMP | lsi-scsi_dh_rdac-kmp-default-0.0_2.6.27.19_5-1
Device Mapper library | device-mapper-1.02.27-8.6
Multipath tools | multipath-tools-0.4.8-40.6.1


To update a component, download the appropriate package from the Novell website at http://download.novell.com/patch/finder. The Novell publication, SUSE Linux Enterprise Server 11 Installation and Administration Guide, describes how to install and upgrade the operating system.

Device Mapper Features

Known Limitations and Issues of the Device Mapper

Installing the Device Mapper Multi-Path

1. Use the media supplied by your operating system vendor to install SLES 11.

2. Install the errata kernel 2.6.27.29-0.1.

Refer to the SUSE Linux Enterprise Server 11 Installation and Administration Guide for the installation procedure.

3. Reboot your system to boot to the 2.6.27.29-0.1 kernel.

4. On the command line, enter rpm -qa |grep device-mapper, and check the system output to see if the correct level of the device mapper component is installed.

5. On the command line, enter rpm -qa |grep multipath-tools and check the system output to see if the correct level of the multipath tools is installed.

6. Update the configuration file /etc/multipath.conf.

See Setting Up the multipath.conf File for detailed information about the /etc/multipath.conf file.

7. On the command line, enter chkconfig multipathd on.

This command enables the multipathd daemon when the system boots.

8. Edit the /etc/sysconfig/kernel file to add the scsi_dh_rdac directive to the INITRD_MODULES section of the file (see the example after this procedure).

9. Download the KMP package for scsi_dh_rdac for the SLES 11 architecture from the website http://forgeftp.novell.com/driver-process/staging/pub/update/lsi/sle11/common/, and install the package on the host.

10. Update the boot loader to point to the new initrd image, and reboot the host with the new initrd image.
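For reference, the following is a minimal sketch of steps 7 through 10 on a SLES 11 host. The KMP package file name is taken from TABLE 10, and the INITRD_MODULES contents are examples only; adjust both for your system, and update the boot loader entry manually if it does not already point to the regenerated initrd image.

# Step 7: start the multipath daemon at boot
chkconfig multipathd on

# Step 8: in /etc/sysconfig/kernel, append scsi_dh_rdac to INITRD_MODULES,
# so that the line reads, for example:
#   INITRD_MODULES="ata_piix scsi_dh_rdac"

# Step 9: install the scsi_dh_rdac KMP package downloaded from the Novell site
# (the file name shown is an example)
rpm -ivh lsi-scsi_dh_rdac-kmp-default-0.0_2.6.27.19_5-1.x86_64.rpm

# Step 10: rebuild the initrd so that it includes scsi_dh_rdac, then reboot
mkinitrd
reboot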

Setting Up the multipath.conf File

The multipath.conf file is the configuration file for the multipath daemon, multipathd. It overrides the built-in configuration table for multipathd. Any line in the file whose first non-white-space character is # is considered a comment line. Empty lines are ignored.

Installing the Device Mapper Multi-Path for SLES 11.1

All of the components required for DMMP are included on the SUSE Linux Enterprise Server (SLES) version 11.1 installation media. However, you might need to select specific components based on the storage hardware type. By default, DMMP is disabled in SLES. Complete the following steps to enable the DMMP components on the host.

1. On the command line, type chkconfig multipath on.

The multipathd daemon is enabled when the system starts again.

2. Edit the /etc/sysconfig/kernel file to add the directive scsi_dh_rdac to the INITRD_MODULES section of the file.

3. Create a new initrd image with the following command so that scsi_dh_rdac is included in the RAM disk:

mkinitrd -i /boot/initrd-rdac -k /boot/vmlinuz

4. Update the boot loader to point to the new initrd image, and reboot the host with the new initrd image.

Copy and Rename the Sample File

Copy and rename the sample file located at /usr/share/doc/packages/multipath-tools/multipath.conf.synthetic to /etc/multipath.conf. Configuration changes are then made by editing the new /etc/multipath.conf file. All entries for multipath devices are commented out initially. The configuration file is divided into five sections: defaults, blacklist, blacklist_exceptions, devices, and multipaths.
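For the copy step, a command such as the following works on a default SLES installation, assuming the sample file is at the path shown above:

cp /usr/share/doc/packages/multipath-tools/multipath.conf.synthetic /etc/multipath.conf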

Determine the Attributes of a MultiPath Device

To determine the attributes of a multipath device, check the multipaths section of the /etc/multipath.conf file, then the devices section, then the defaults section. The model settings used for multipath devices are listed for each storage array and include matching vendor and product values. Add matching storage vendor and product values for each type of volume used in your storage array.

For each UTM LUN mapped to the host, include an entry in the blacklist section of the /etc/multipath.conf file. The entries should follow the pattern of the following example.

blacklist {
    device {
        vendor  "*"
        product "Universal Xport"
    }
}

The following example shows the devices section for LSI storage from the sample /etc/multipath.conf file. Update the vendor ID, which is LSI in the sample file, and the product ID, which is INF-01-00 in the sample file, to match the equipment in the storage array.

devices {
    device {
        vendor                "LSI"
        product               "INF-01-00"
        path_grouping_policy  group_by_prio
        prio                  rdac
        getuid_callout        "/lib/udev/scsi_id -g -u -d /dev/%n"
        polling_interval      5
        path_checker          rdac
        path_selector         "round-robin 0"
        hardware_handler      "1 rdac"
        failback              immediate
        features              "2 pg_init_retries 50"
        no_path_retry         30
        rr_min_io             100
    }
}

The following table explains the attributes and values in the devices section of the /etc/multipath.conf file.


TABLE 11 Attributes and Values in the multipath.conf File

Attribute | Parameter Value | Description
path_grouping_policy | group_by_prio | The path grouping policy to be applied to this specific vendor and product storage.
prio | rdac | The program and arguments to determine the path priority routine. The specified routine should return a numeric value specifying the relative priority of this path. Higher numbers have a higher priority.
getuid_callout | "/lib/udev/scsi_id -g -u -d /dev/%n" | The program and arguments to call out to obtain a unique path identifier.
polling_interval | 5 | The interval between two path checks, in seconds.
path_checker | rdac | The method used to determine the state of the path.
path_selector | "round-robin 0" | The path selector algorithm to use when there is more than one path in a path group.
hardware_handler | "1 rdac" | The hardware handler to use for handling device-specific knowledge.
failback | 10 | A parameter to tell the daemon how to manage path group failback. In this example, the parameter is set to 10 seconds, so failback occurs 10 seconds after a device comes online. To disable the failback, set this parameter to manual. Set it to immediate to force failback to occur immediately.
features | "2 pg_init_retries 50" | Features to be enabled. This parameter sets the kernel parameter pg_init_retries to 50. The pg_init_retries parameter is used to retry the mode select commands.
no_path_retry | 30 | Specifies the number of retries before queuing is disabled. Set this parameter to fail for immediate failure (no queuing). When this parameter is set to queue, queuing continues indefinitely.
rr_min_io | 100 | The number of I/Os to route to a path before switching to the next path in the same path group. This setting applies if there is more than one path in a path group.


Using the Device Mapper Devices

Multipath devices are created under the /dev/ directory with the prefix dm-. These devices are the same as any other block devices on the host. To list all of the multipath devices, run the multipath -ll command. The following example shows system output from the multipath -ll command for one of the multipath devices.

mpathp (3600a0b80005ab177000017544a8d6b92) dm-0 LSI,INF-01-00
[size=5.0G][features=3 queue_if_no_path pg_init_retries 50][hwhandler=1 rdac][rw]
\_ round-robin 0 [prio=6][active]
 \_ 5:0:0:0   sdc  8:32   [active][ready]
\_ round-robin 0 [prio=1][enabled]
 \_ 4:0:0:0   sdb  8:16   [active][ghost]

In this example, the multipath device node for this device is /dev/mapper/mpathp and /dev/dm-0. The following table lists some basic options and parameters for the multipath command.
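Because a multipath device behaves like any other block device, it can be partitioned, formatted, and mounted in the usual way. The following is a minimal sketch only, using the /dev/mapper/mpathp device from the example above and an ext3 file system; substitute the device name and file system type used in your environment.

# Create a file system on the multipath device and mount it
mkfs.ext3 /dev/mapper/mpathp
mkdir -p /mnt/mpathp
mount /dev/mapper/mpathp /mnt/mpathp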


TABLE 12 Options and Parameters for the multipath Command

Command | Description
multipath -h | Prints usage information
multipath -ll | Shows the current multipath topology from all available information (sysfs, the device mapper, path checkers, and so on)
multipath -f map | Flushes the multipath device map specified by the map option, if the map is unused
multipath -F | Flushes all unused multipath device maps


Troubleshooting the Device Mapper


TABLE 13 Troubleshooting the Device Mapper

Situation | Resolution
Is the multipath daemon, multipathd, running? | At the command prompt, enter the command: /etc/init.d/multipathd status.
Why are no devices listed when you run the multipath -ll command? | At the command prompt, enter the command: #cat /proc/scsi/scsi. The system output displays all of the devices that are already discovered. Verify that the multipath.conf file has been updated with proper settings.



Restrictions and Known Issues

The following are restrictions and known issues applicable to this product release.

Restrictions

Single Path Data Connections

In a single path data connection, a group of heterogeneous servers is connected to an array through a single connection. Although this connection is technically possible, there is no redundancy, and a connection failure will result in loss of access to the array.



Caution - Because of the single point of failure, single path data connections are not recommended.


SAS Host Ports on the Sun Storage 2540-M2

Although SAS host ports are physically present on the Sun Storage 2540-M2 array controller tray, they are not supported, are not for use, and are capped at the factory. FIGURE 1 shows the location of these ports. The Sun Storage 2540-M2 supports only Fibre Channel host connectivity.


FIGURE 1 SAS Host Ports on the 2540-M2

SAS-2 Phase 5++ HBA Unable to Boot from Attached 2530-M2 Volume

Bug 7042226 - SAS-2 HBA device booting is not supported using Phase 5++ HBA firmware. This restriction will be lifted with the Phase 10 HBA firmware.

Controller Issues

I/O Errors Occur During Controller Firmware Download

Configuration:



Note - This problem does not occur in RHEL version 6.0 with kernel 2.6.33.


Problem or Restriction: An I/O error occurs during an online controller firmware upgrade.

Workaround: To avoid this problem, quiesce the host I/O before performing controller firmware upgrades. To recover from this problem, make sure that the host reports that it has optimal paths available to the storage array controllers, and then resume I/O.

2500-M2 Controller Firmware Panics during Firmware Download

Configuration:

Problem or Restriction: This problem occurs when a firmware download to the controller causes the controller to panic and reboot.

Workaround: Stop all I/O to the array before initiating the firmware download. Once the download is initiated, the controller automatically reboots, recovering the system.



Caution - To avoid potential data loss, stop all I/O to the array before initiating the firmware download.


OS Issues

Linux RDAC 09.03.0C02.0453 - Make Install Dependencies

Configuration:

Bug 7042297 - Before running a “make” on the RDAC driver, the following kernel packages are required:

RHEL6 DMMP Excessive Log Messages

Bug 7034078 - When booting a Red Hat Enterprise Linux 6.0 host mapped to volumes on a Sun Storage 2500-M2 array in a multipath configuration using DMMP, it is possible to receive excessive messages similar to the following:

Workaround - This is normal behavior on RHEL 6.0.

Oracle Linux 6 Boots With Messages

Bugs 7038184, 7028670, 7028672 - When booting an Oracle Linux 6.0 host mapped to volumes on Sun Storage 2500-M2 and 6780 arrays, it is possible to receive one of these messages:

“FIXME driver has no support for subenclosures (1)”
“FIXME driver has no support for subenclosures (3)”
“Failed to bind enclosure -19”

Workaround - This is a cosmetic issue with no impact to the I/O path. There is no workaround.

Log Events Using SLES 11.1 With smartd Monitoring Enabled

Bug 7014293 - When a SLES 11.1 host with smartd monitoring enabled is mapped to volumes on either a Sun Storage 2500-M2 or 6780 array, it is possible to receive “IO FAILURE” and “Illegal Request ASC/ASCQ” log events.

Workaround - Either disable smartd monitoring or disregard the messages. This is an issue with the host OS.

Cluster Startup Fails When Devices Are in a Unit Attention State

Configuration:

Problem or Restriction: This problem occurs when the DMMP failover driver is used with the RHEL version 6.0 OS. If you try to set up a Red Hat cluster with the DMMP failover driver, cluster startup might fail during the unfencing stage, where each host registers itself with the SCSI devices. The devices are in a Unit Attention state, which causes the SCSI registration command issued by the host during startup to fail. When the cluster manager (cman) service starts, the logs show that the nodes failed to unfence themselves, which causes the cluster startup to fail.

Workaround: To avoid this problem, do not use the DMMP failover driver with RHEL version 6.0. To recover from this problem, open a terminal window, and run:

sg_turs -n 5 <device>

where <device> is a SCSI device that is virtualized by the DMMP failover driver. Run this command on every /dev/sd device that the DMMP failover driver manages. It issues a Test Unit Ready command to clear the Unit Attention state and allow node registration on the device to succeed.
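For example, a small loop such as the following issues the command to several path devices in one pass. The device names sdb and sdc are taken from the earlier multipath -ll example and are placeholders; substitute the /dev/sd devices that multipath -ll reports for your DMMP-managed maps.

# Clear the Unit Attention state on each DMMP-managed path device
for dev in /dev/sdb /dev/sdc; do
    sg_turs -n 5 "$dev"
done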

Node Unfencing Fails when Automatically Generated Host Keys Are Used during a Red Hat Cluster Suite Services Startup

Operating System: Red Hat Enterprise Linux 6 with Native Cluster

Problem or Restriction: This problem occurs the first time a cluster is set up when the cluster.conf file does not have manually defined host keys. When the cluster.conf file was first defined to set up a cluster with SCSI reservation fencing, the cluster services were started on the nodes. With SCSI reservation fencing, the hosts try to generate and register a key on the clustered devices as part of the cluster manager's startup. The cluster manager service (cman) fails to start, and the key cannot be zero error message appears in the host log.

Workaround: To avoid this problem, use only power fencing. Do not use SCSI reservation fencing. To recover from this problem, change to manually defined host keys, and restart the cluster services.

Red Hat Cluster Suite Services with GFS2 Mounts Cannot Transfer Between Nodes when the Client Mounts with NFSv4

Operating System: Red Hat Enterprise Linux 6 Native Cluster

Problem or Restriction: This problem occurs during an attempt to transfer a cluster service manually when a client is connected using NFSv4. The Global File System (GFS) 2 mount points failed to unmount, which caused the Red Hat Cluster Suite Services to go to the Failed state. The mount point, and all other mount points exported from the same virtual IP address, becomes inaccessible.

Workaround: To avoid this problem, configure the cluster nodes to not allow mount requests from NFS version 4 (NFSv4) clients. To recover from this problem, restart the failed service on the node that previously owned it.

Host Aborts I/O Operations

Operating System: Red Hat Enterprise Linux version 6.0

Problem or Restriction: This problem occurs during an online controller firmware upgrade. The controller is not responding quickly enough to a host read or write to satisfy the host. After 30 seconds, the host sends a command to abort the I/O. The I/O aborts, and then starts again successfully.

Workaround: Quiesce the host I/O before performing the controller firmware upgrade. To recover from this problem, either reset the server, or wait until the host returns an I/O error.

Host Attempts to Abort I/O Indefinitely

Operating System: Red Hat Enterprise Linux version 6.0 with kernel 2.6.32.

Red Hat Bugzilla Number: 620391



Note - This problem does not occur in Red Hat Enterprise Linux version 6.0 with kernel 2.6.33.


Problem or Restriction: This problem occurs under situations of heavy stress when storage arrays take longer than expected to return the status of a read or write. The storage array must be sufficiently stressed that the controller response takes more than 30 seconds, at which point the host issues a command to abort the I/O. The abort is retried indefinitely even when it succeeds. The application either times out or hangs indefinitely on the read or write that is being aborted. The messages file reports the aborts, and resets might occur on the LUN, the host, or the bus.

Factors affecting controller response include Remote Volume Mirroring, the controller state, the number of attached hosts, and the total throughput.

Workaround: To recover from this problem, reset the power on the server.


Related Documentation

Product documentation for Sun Storage 2500-M2 Arrays is available at:

http://www.oracle.com/technetwork/documentation/oracle-unified-ss-193371.html

Product documentation for Sun Storage Common Array Manager is available at:

http://www.oracle.com/technetwork/documentation/disk-device-194280.html


TABLE 14 Related Documentation

Application | Title
Review safety information | Sun Storage 2500-M2 Arrays Safety and Compliance Manual; Important Safety Information for Sun Hardware Systems
Review known issues and workarounds | Sun Storage Common Array Manager Release Notes
Prepare the site | Sun Storage 2500-M2 Arrays Site Preparation Guide
Install the support rails | Sun Storage 2500-M2 Arrays Support Rail Installation Guide
Install the array | Sun Storage 2500-M2 Arrays Hardware Installation Guide
Get started with the management software | Sun Storage Common Array Manager Quick Start Guide
Install the management software | Sun Storage Common Array Manager Installation and Setup Guide
Manage the array | Sun Storage Common Array Manager Array Administration Guide; Sun Storage Common Array Manager CLI Guide



Documentation, Support, and Training

These web sites provide additional resources: