Sun StorEdge 6920 System Release Notes, Release 3.0.1

This document contains important information about the Sun StorEdge™ 6920 system software release 3.0.1 that was not available at the time the product documentation was published. Read this document so that you are aware of issues or requirements that can impact the installation and operation of the Sun StorEdge 6920 system running system software release 3.0.1.



Note - These release notes have been updated for build 26 (release 3.0.1.26).



This document consists of the following sections:


New Features for Release 3.0.1

Release 3.0.1 of the Sun StorEdge 6920 system software adds the following new features:

This section provides a brief description of these features. For additional information, see the product documentation.

Remote Data Replication

Release 3.0.1 of the Sun StorEdge 6920 system software adds support for remote data replication. This feature enables you to periodically copy a volume's data to a secondary storage device located at a site remote from the original (primary) storage device. If the primary storage device fails, the secondary storage device can be immediately promoted to primary and brought online.

The replication process begins by creating a complete copy of the primary data on the secondary storage device at the disaster recovery site. Using that copy as a baseline, the replication process records any changes to the data and forwards those changes to the secondary site.
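
The two phases can be pictured with a short, purely illustrative Python sketch. The function names and block map below are hypothetical and do not represent the 6920 implementation; they only mirror the full-copy-then-forward-changes model described above.

def baseline_copy(primary_blocks):
    # Phase 1: copy every block of the primary volume to the secondary site.
    return dict(primary_blocks)

def forward_changes(secondary_blocks, changed_blocks):
    # Phase 2: forward only the blocks that changed on the primary since the baseline.
    secondary_blocks.update(changed_blocks)

primary = {0: "boot", 1: "data"}             # hypothetical block map of the primary volume
secondary = baseline_copy(primary)           # initial full synchronization
forward_changes(secondary, {1: "data-v2"})   # ongoing incremental replication
assert secondary == {0: "boot", 1: "data-v2"}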

For help setting up appropriate security, contact the Client Solutions Organization (CSO).

300-Gbyte Drive Support

Release 3.0.1 of the Sun StorEdge 6920 system software adds support for 300-Gbyte disk drives.

As with all other supported drive capacities, the maximum capacity of a virtual disk is 2 terabytes of storage. The actual usable capacity of a 300-Gbyte drive is 279.397 Gbyte.
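
A quick illustrative calculation (Python, using the decimal terabyte conversion these notes use elsewhere) shows how the usable drive capacity interacts with the 2-terabyte virtual disk limit; see also the related known issue for 300-Gbyte RAID-5 virtual disks later in this document.

drive_gb = 279.397           # usable capacity of one 300-Gbyte drive
print(7 * drive_gb / 1000)   # ~1.9558 terabytes: within the 2-terabyte virtual disk limit
print(8 * drive_gb / 1000)   # ~2.2352 terabytes: exceeds the 2-terabyte virtual disk limit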

Support for Sun StorageTek 6140 Array as External Storage

You can expand the capacity of your Sun StorEdge 6920 system by connecting Sun StorageTek 6140 Arrays as external storage. See the Best Practices for the Sun StorEdge 6920 System (819-7320-10), specifically the chapter "Working with External Storage," for more information about connecting external storage.

To set up the 6140 Array as external storage:

1. Configure the 6140 Array with the LUNs you want to present by using the Sun StorageTek Common Array Manager software.

The Common Array Manager runs on a separate management host, connected to the 6140 Array over Ethernet or a LAN subnet. See the Sun StorageTek Common Array Manager Software Installation Guide for information about configuring arrays. When configuring arrays for use with the 6920 system, use the Common Array Manager to add the 6920 DSP egress ports to the 6140 as initiators, with a host type of "Sun StorEdge".



Note - When the 6140 Array is used as external storage for the 6920, the 6920 must be the only device connected to any Host Channel ports.



2. Connect the 6140 Array to the 6920 system using Fibre Channel cables attached to the Host Channel 1 port on each of the array controllers.

See the Sun StorageTek 6140 Array Hardware Installation Guide for information about connecting data hosts.

3. Use the Configuration Management software on the 6920 to initialize the LUNs presented by the 6140 as raw storage and place them in a storage pool.

Daylight Saving Time Update

The U.S. Energy Policy Act of 2005 (EPACT) mandates that Daylight Saving Time (DST) in the United States (U.S.) start on the second Sunday in March and end on the first Sunday in November starting in 2007. In 2007, the start and stop dates will be March 11 and November 4, respectively. The start date for DST in the U.S. was previously the first Sunday of April, and the end date for DST in the U.S. was previously the last Sunday of October. In 2006, the dates were the first Sunday in April (April 2, 2006) and the last Sunday in October (October 29, 2006).

As a result, beginning in 2007, systems in time zones that use Daylight Saving Time (DST) will advance their clocks on March 11, 2007, rather than in April. Likewise, system clocks will switch back in November 2007, rather than in October. This change affects timed operations within the Sun StorEdge 6920 product.
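
The 2007 dates quoted above can be verified with a short calculation. The following Python sketch (an illustrative helper, not part of the product software) derives them:

import calendar
import datetime

def nth_weekday(year, month, weekday, n):
    # Date of the n-th occurrence of weekday (Monday=0 ... Sunday=6) in the given month.
    first = datetime.date(year, month, 1)
    offset = (weekday - first.weekday()) % 7
    return first + datetime.timedelta(days=offset + 7 * (n - 1))

# 2007 U.S. rule: DST starts the second Sunday in March and ends the first Sunday in November.
dst_start = nth_weekday(2007, 3, calendar.SUNDAY, 2)
dst_end = nth_weekday(2007, 11, calendar.SUNDAY, 1)
print(dst_start, dst_end)   # 2007-03-11 2007-11-04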

For more detailed information regarding DST, please see:

http://www.sun.com/dst

This release contains a Daylight Saving Time update patch to accommodate the 2007 DST changes and mitigate potential errors.


Supported Software and Hardware

The software and hardware components described in the following sections have been tested and qualified to work with the Sun StorEdge 6920 system:

Supported Web Browsers

The Sun StorEdge 6920 system software release 3.0.1 supports the web browsers listed in TABLE 1.


TABLE 1 Sun StorEdge 6920 Supported Browsers (Common Array Manager 5.0 and Java Console 2.2.5)

Client OS                                             Minimum Supported Browser Version
Microsoft Windows 98, Windows XP, Windows 2000,       Microsoft Internet Explorer 5.5; Mozilla 1.4;
Windows Server 2003                                   Netscape Navigator 6.2; Firefox 1.0
Solaris 8, 9, 10 for Sun SPARC and x86 platforms      Mozilla 1.4; Netscape Navigator 6.2; Firefox 1.0
Apple Mac OS X                                        Mozilla 1.4; Firefox 1.0
Red Hat Enterprise Linux Application Server 2.1       Mozilla 1.4
SuSE Linux Enterprise Server 8.0                      Mozilla 1.4
Hewlett-Packard HP-UX 11                              Mozilla 1.4
IBM AIX 5.2                                           Mozilla 1.4


Additional Supported Data Host Software

The software listed in TABLE 2 is compatible for use on data hosts with data paths or network connections to the Sun StorEdge 6920 system.


TABLE 2 Supported Sun Data Host Software

Software                                                                        Minimum Version
Sun StorEdge Enterprise Storage Manager                                         3.0.1, 2.1 with Patch 117367-01
Sun StorEdge Availability Suite                                                 3.2
Sun StorEdge Enterprise Backup Software                                         7.1[1]
Solstice DiskSuite                                                              4.2.1
Solaris Volume Manager software (embedded in the Solaris 9 Operating System)    N/A
Sun StorEdge QFS                                                                4.0
Sun StorEdge SAM-FS                                                             4.0
Sun™ Cluster software                                                           3.0.1, update 3


The third-party software listed in TABLE 3 is compatible for use on data hosts with data paths or network connections to the Sun StorEdge 6920 system.


TABLE 3 Supported Third-Party Software

Software                                                               Version
VERITAS NetBackup Server                                               5.0 and later
VERITAS NetBackup Enterprise Server                                    5.0 and later
VERITAS Volume Manager with Dynamic Multipathing (DMP) for Solaris     3.5, 4.0, and 4.1
VERITAS File System (VxFS) for Solaris                                 3.5, 4.0, and 4.1
VERITAS Volume Replicator for Solaris                                  3.5, 4.0, and 4.1
VERITAS Cluster Server (VCS)                                           3.5, 4.0, and 4.1
Legato NetWorker®                                                      7.1 and later


For the current hardware compatibility for the VERITAS products, see:

http://support.veritas.com/

NetWorker PowerSnap Module Software

The NetWorker PowerSnap Module for the Sun StorEdge 6920 system enhances the Sun StorEdge Enterprise Backup software by allowing continuous snapshot-based data protection and availability during backups. Refer to the NetWorker PowerSnap Module For Sun StorEdge SE6920 Installation and Administrator's Guide for the details of the features provided by this module.

Download the PowerSnap Module Software

You can download the software from the Sun Download Center (SDLC) at http://www.sun.com/download/. The software will remain posted until it is included in the Sun StorageTek Enterprise Backup Software 7.4 media kit, targeted for future release.

Extract the following tar files to use the NetWorker PowerSnap Module for the Sun StorEdge 6920 system:

Software Requirements

The NetWorker PowerSnap Module for the Sun StorEdge 6920 system must be used with Sun StorEdge Enterprise Backup Software 7.2 Service Update 2 software.

NetWorker PowerSnap Licenses

The NetWorker PowerSnap Module is supported by a Network or Power Edition base server license. In addition to the base server license, the following license is required to enable the module features:

EBSIS-999-6824 - NetWorker PowerSnap Module License for Sun 6000 Series

You also need one of the following capacity license numbers:

Evaluation Enabler Codes

Enabler codes allow you a 45-day evaluation of NetWorker PowerSnap Module for the Sun StorEdge 6920 system. These codes appear in the readme file downloaded with the NetWorker PowerSnap Module software.

To use the module permanently in a production environment, you must purchase the enabler codes for the module, enter the enabler codes, and register the authorization codes within the 45-day period after entering the purchased enabler codes.

Installation Instructions

1. Install Sun StorEdge Enterprise Backup Software 7.2 packages.

The software packages are available on the volume 1 CD in the Sun StorEdge Enterprise Backup Software 7.2 media kit. They can also be downloaded from the Sun Download Center at:

http://www.sun.com/download

2. Install one of the following applicable Sun StorEdge Enterprise Backup Software 7.2 SU2 patches:

The patches can be downloaded from SunSolve at:

http://sunsolve.sun.com

3. Install the NetWorker PowerSnap Module for Sun StorEdge 6920 system software packages.

For detailed installation instructions, refer to the Sun StorEdge Enterprise Backup Software 7.3 Installation Guide and NetWorker PowerSnap Module for Sun StorEdge 6920 Installation and Administrator's Guide.

NetWorker PowerSnap Documentation

Documentation for the NetWorker PowerSnap Module is available online from the following locations:

http://www.sun.com/products-n-solutions/hardware/docs/Software/Storage_Software/EBS/index.html

http://www.sun.com/download

http://www.sun.com/products-n-solutions/hardware/docs/Network_Storage_Solutions/Midrange/6920/6920_30/index.html

Supported Fibre Channel Switches, HBAs, Data Hosts, and Operating Systems

The Sun StorEdge 6920 system supports all of the Fibre Channel (FC) switches, host bus adapters (HBAs), data hosts, and operating systems supported by Sun StorEdge SAN Foundation software version 4.4 (and later). Please contact your local Sun customer service representative for more information.

Supported Languages

The Sun StorEdge 6920 system software release 3.0.1, including the Storage Automated Diagnostic Environment application, supports the languages and locales listed in TABLE 4.


TABLE 4 Supported Languages and Locales

Language                 Locale
English                  en
French                   fr
Japanese                 ja
Korean                   ko
Simplified Chinese       zh
Traditional Chinese      zh_TW




Note -
- Man pages are available only in English and Japanese.
- Localization of email notification is not supported in this release.




Upgrading to Release 3.0.1.26

This upgrade must be performed by a Sun Service technician. Please call Sun Service to arrange an installation or upgrade to release 3.0.1.26.


System Usage Limits

TABLE 5 lists maximum values for elements of the Sun StorEdge 6920 system.


TABLE 5 Sun StorEdge 6920 System Limits

System Attribute                   Maximum
Volumes per system                 1024 volumes
Virtual disks per tray             2 virtual disks
Volumes per virtual disk           32 volumes
Mirrored volumes                   128 (256 mirrored components)
Components in a mirror             4, including the primary volume
Legacy volumes                     128
Snapshots per volume               8 snapshots
Expand snapshot reserve space      Up to 31 times
Pre-defined profiles               15
Initiators[2] per system           256 initiators
Initiators per DSP port            128
Storage pools                      64 storage pools
Storage profiles                   15 system-defined storage profiles; no limit for user-defined profiles



Release Documentation

TABLE 6 and TABLE 7 list the documents that are related to the Sun StorEdge 6920 system. For any document number with nn as a version suffix, use the most current version available.

You can search for this documentation online at

System overview information, as well as information on system configuration, maintenance, and basic troubleshooting, is covered in the online help included with the software. In addition, the sscs (1M) man page provides information about the commands used to manage storage using the command-line interface (CLI).


TABLE 7 Sun StorEdge 6920 Related Documentation

Product
    Document title, part number

Best practices:
    Best Practices for Sun StorEdge 6920 System (Version 3.0), 819-0122-nn

Sun Storage Automated Diagnostic Environment, Enterprise Edition:
    Sun Storage Automated Diagnostic Environment Enterprise Edition Release Notes Version 2.4, 819-0432-nn

SAN Foundation software:
    Sun StorEdge SAN Foundation 4.4 Configuration Guide, 817-3672-nn

Oracle Storage Compatibility Program:
    Sun StorEdge Data Snapshot Software With Oracle Databases Usage Guide, 819-3326-nn
    Sun StorEdge Data Mirroring Software With Oracle Databases Usage Guide, 819-3327-nn
    Sun StorEdge Data Replication Software With Oracle Databases Usage Guide, 819-3328-nn

Sun Storage Traffic Manager software:
    Sun StorEdge Traffic Manager 4.4 Software Release Notes for HP-UX, IBM AIX, Microsoft Windows 2000 and 2003, and Red Hat Enterprise Linux, 817-6275-nn
    Sun StorEdge Traffic Manager 4.4 Software User's Guide for IBM AIX, HP-UX, Microsoft Windows 2000 and 2003, and Red Hat Enterprise Linux, 817-6270-nn
    Sun StorEdge Traffic Manager 4.4 Software Installation Guide for Red Hat Enterprise Linux, 817-6271-nn
    Sun StorEdge Traffic Manager 4.4 Software Installation Guide for Microsoft Windows 2000 and 2003, 817-6272-nn
    Sun StorEdge Traffic Manager 4.4 Software Installation Guide for IBM AIX, 817-6273-nn
    Sun StorEdge Traffic Manager 4.4 Software Installation Guide for HP-UX 11.0 and 11i, 817-6274-nn

Sun StorEdge Network Fibre Channel switch-8 and switch-16:
    Sun StorEdge Network 2 Gb FC Switch-8 and Switch-16 FRU Installation, 817-0064-nn
    Sun StorEdge 6920 System Administration Guide for the Browser Interface Management Software, 819-0123-nn
    Sun StorEdge 6920 System Hardware Quick Setup poster, 817-5226-nn
    Sun StorEdge Network 2 Gb FC Switch-8 and Switch-16 Release Notes, 817-0770-nn
    Sun StorEdge Network 2 Gb FC Switch-64 Release Notes, 817-0977-nn

Sun StorEdge Brocade switch documentation:
    Sun StorEdge Network 2 Gb Brocade SilkWorm 3200, 3800, and 12000 Switch 3.1/4.1 Firmware Guide to Documentation, 817-0062-nn

Sun StorEdge McData switch documentation:
    Sun StorEdge Network 2 Gb McDATA Intrepid 6064 Director Guide to Documentation, Including Firmware 5.01.00, 817-0063-nn

Expansion cabinet:
    Sun StorEdge Expansion Cabinet Installation and Service Manual, 805-3067-nn

Storage Service Processor:
    Sun Fire V210 and V240 Server Administration Guide, 816-4826-nn

Solaris Operating System:
    Solaris Handbook for Sun Peripherals, 816-4468-nn



Known Issues in Release 3.0.1, Build 26

This section provides information about known issues with this product release (3.0.1.26).

Use Only One Fibre Channel Host Port per 6140 Array Controller When Connecting to a 6920

The 6920 system is restricted to two paths for any given virtual disk. If multiple host ports on a 6140 Array controller are connected to the 6920, this restriction is violated. Connect only one port per controller.

Data Services Platform Fan Replacement

The fan in the Data Services Platform (DSP) is a field-replaceable unit (FRU). When removing the fan, observe the following caution.




Caution - The fan has unprotected blades that might still be spinning when the fan is removed. Be sure that the fan blades have stopped moving completely before removing the fan from the cabinet.



Setting Message Priority for Email Notification Recipients

If you set the Priority parameter to All when adding or editing an email notification recipient, the recipient receives a message for every event that occurs in the system, even for general messages that do not require intervention.

To generate notification messages only for events and alarms that require intervention, set the Priority parameter to Major and above or Critical and above.

Other Known Issues Not Applicable to Release 3.0.1

Array Upgrade Issue

An intermittent problem can occur with PatchPro timing out during an array firmware upgrade. This does not affect the data-path operation, but the upgrade log will indicate that the patch installation failed. Currently, this issue has only been observed on large-capacity systems with numerous arrays.


Bugs

The following sections provide information about bugs filed against this product:

If a recommended workaround is available for a bug, it follows the bug description.

Configuration and Element Management Software

This section describes known issues and bugs related to the configuration management software browser interface.

Unsupported Remote Replication Configuration

Bug 6493606 - A remote replication configuration consisting of a remote replication connection on one port and a host connection on the other shared port fails, and can cause an upgrade to fail. Host (ingress) connections cannot share the same processor as the remote replication connection. Additionally, storage (egress) connections cannot share the remote replication processor.

A Replication Set Reports 100% Synchronized When It Is Only Starting

Bug 6430940 - When the synchronization of a replication set starts, the set sometimes indicates Synchronized (100%) until the first update is sent. This could lead you to believe that the replication process is almost done when it is just starting.

Workaround - Always wait until the state changes to Replicating before assuming the set is fully synchronized.

An Asynchronous Replication Set Stays in the Suspended Mode When a Queue Becomes Full

Bug 6427254 - During an asynchronous replication, a replication set can transition to the suspended state when the asynchronous queue/log fills up. This happens when the replication set "queue full" action is set to "suspend on queue full". Autosync will not attempt to synchronize this replication set at this point due to the specified queue full action.

Workaround - Monitor the Storage Automated Diagnostic Environment to see if there are any alarms regarding the queue size, such as the following. If so, increase the size of the queue.


Jun  2 08:28:51 dsp00  06/02/2006 15:28:18 LOG_WARNING
(REMOTE_REPLICATION: 3-4)  The disk queue for group
600015d0-00226000-00010000-00015601 is physically 75 percent full

Replacing a Missing External Storage Virtual Disk on Which a Disconnected Component Resides Does Not Cause the Split to Recover

Bug 6429435 - Reconnecting a removed component on a required isolation local mirror results in the removed component remaining in a Missing state and Unknown condition.

Workaround - Contact Sun Service to reboot the DSP to correct the condition.

Do Not Use Reserved Keywords As Storage Domain Names

Bug 6414829 - If you use one of the following reserved keywords as a storage domain name, the system might be left in an unstable condition:

In addition, the patterns "desc" and "proc" should not be used because they match the reserved keywords "description" and "processor," respectively.

Workaround - Do not use any of the following reserved keywords as a storage domain name: description, ip, logical-port, processor, storage-port, or vlan.

Cx700 External Storage Virtual Disks Do Not Fail Over In a Timely Way, Host I/O Fails

Bug 6401685 - A Sun StorEdge 6920 system connected to an EMC CLARiiON Cx700 Array requires Sun StorEdge SAN Foundation 4.4.8 software. With SAN Foundation 4.4.1, I/O operations can time out prematurely and terminate during, for example, rolling upgrades or any card maintenance activity.

Workaround - Update the SAN Foundation software to version 4.4.8.

Replication States Are Not Correct After a Link Failover

Bug 6389703 - The replication state of a replication set is determined by several factors, including the link state. The link state contributes to the replication state but is not necessarily identical. For example, if a replication set is placed in the suspended state, a link in the up state does not automatically transition the replication set into synchronizing or replicating state. Likewise, a link changing to the down state does not automatically force the replication state into the suspended state.

In particular, when a link transitions to a down state, a replication set is kept in the replicating state to avoid the overhead of an update sync. If the replication mode is asynchronous, writes cause state change to queuing, and the data is logged in the asynchronous log. The replication set is placed in the suspended state only in the following circumstances:

Workaround - In scoreboard mode, change the replication set manually after the fault.

Using a Consistency Group Name of NONE Does Not Display Set or Group Name

Bug 6381642 - If you create a replication set with the group name None on the Sun StorEdge 6920 system, the management software is unable to display the Replication Sets Summary page or the replication information for that particular replication set, and it displays the following error:


Unexpected internal system error. Retry the operation and then contact your Sun service representative if the error persists.

The group name None also fails with the sscs CLI and displays the following error:


# sscs list -S RR repset ip-vol1/1
 
Unexpected internal system error.  Retry the operation and then contact your Sun service representative if the error persists.



Note - The word "NONE" is a reserved word and should not be used for creating consistency groups.



Workaround - Do not use the word "NONE" as a group name.

One Cannot Use the "GB" Notation for the Queue Size When Changing a Configuration From Synchronous to Asynchronous

Bug 6365512 - The error Illegal asynchronous queue size format is displayed if you try to use GB, gb, or G with the queue size option when changing the replication mode from synchronous to asynchronous. Following is an example:


server:/home/test 76 % sscs modify -m async -q Default -Q 2G constgroup demo-cg
2G: Illegal asynchronous queue size format.
server:/home/test 77 % sscs modify -m async -q Default -Q 2GB constgroup demo-cg
2GB: Illegal asynchronous queue size format.
server:/home/test 78 % sscs modify -m async -q Default -Q 2gb constgroup demo-cg
2GB: Illegal asynchronous queue size format.

Workaround - Use the equivalent value in megabytes for the configuration command. For example, use 2000 MB to configure 2 GB.

FC Port Reports Incorrect "Link Synchronization Lost Events" Value

Bug 6365148 - The Fibre Channel port displays an incorrect Link Synchronization Lost Events value because the 32-bit words of the link status counter are reversed. For example, the following output displays 4294967296, which equals 0x100000000 in hexadecimal. This means only a single loss of synchronization has occurred.

Link Synchronization Lost Events:      4294967296

Workaround - None at this time.
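
For reference, the true count can be recovered by swapping the two 32-bit halves of the reported 64-bit value. The following Python sketch (an illustrative helper, not part of the product software) applies that swap to the value shown above:

def unswap_counter(reported):
    # Swap the two 32-bit words of a 64-bit counter whose halves were stored in reverse order.
    high = reported >> 32
    low = reported & 0xFFFFFFFF
    return (low << 32) | high

print(unswap_counter(4294967296))   # prints 1: a single loss of synchronization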

Split Component Missing From the Management Interface Card and Element Manager After a Reboot

Bug 6359244 - This issue occurs if you have the following:

If you rejoin a few of the split components after writing a pattern to the parent local mirror volumes, and you then reboot the Data Services Platform (DSP) while the local mirror volumes are resilvering, or you lose facility power to the DSP, one of the remaining split components will be missing from the management interface card and element manager.

You can use the Rescan Devices button to redisplay this component, but it is displayed with a different name. If the running configuration was saved at the time of the reboot, it is now missing one of the split components. The last split component is usually the one that is missing.

Workaround - Click the Rescan Devices button on the External Storage Summary page to scan all storage and host ports for changes in device configuration and to redisplay the lost split component. The software might take 10 to 30 seconds to update the External Storage Summary page.

Virtual Disk Information Is Missing From the Volume Details Page After a Primary Component Mirror Break

Bug 6358103 - If you had, for example, a 6-LUN group of volumes assigned to a Sun StorEdge 5310 NAS Appliance gateway and you mirrored the heads to a different pool-profile residing on a different StorEdge 6120 array, the new pool would contain two virtual disks, one on each tray.

If, after synchronization, you executed a break primary, break final mirror component operation to shift the 6 LUNs as a single volume onto the new pool, the operation would complete successfully. However, the Volume Details page would show no virtual disks assigned when there should be two virtual disks listed for the pool.

Workaround - Click the Rescan Devices button on the External Storage tab to correct the virtual disk information on the Volume Details page for all volumes.

Queue Size Defaults to 512 MB When the CLI Is Used to Change Replication Mode From Asynchronous to Synchronous Back to Asynchronous

Bug 6357963 - If you change an Asynchronous replication set from Asynchronous mode to Synchronous mode and back to Asynchronous mode using the command-line interface (CLI), the following error appears:


You cannot decrease the size of the virtual disk queue without first deleting it

If you change the same Asynchronous replication set through the browser interface, no error occurs. This is because the browser interface uses the original queue size, while the CLI defaults to a queue size of 512 MB.

Workaround - Use the browser interface to change an Asynchronous replication set from Asynchronous mode to Synchronous mode and back to Asynchronous mode.

System Allows Initialization of a Virtual Disk That Is Too Small and Cannot Hold the Metadata Information

Bug 6354472 - The system allows you to initialize small logical unit numbers that were possibly used with virtualized legacy volumes and are then converted to an external pool, even though these virtual disks (for example, 100MB virtual disks) should not be allowed because they are too small to hold the metadata information.

Workaround - None at this time.

CLI Allows Stripe All Flag During Creation of a Volume From a Concatenated Pool

Bug 6354266 - If you try to create a volume from a concatenated storage pool using the command-line interface, you should not use the stripe all flag. If you try the same process using the browser interface, you will not see the option to stripe all.

The effect of the stripe all flag is to spread the volume across the virtual disks in the pool as if the volume were a stripe. No other adverse effect has been observed.

Workaround - None at this time.

Re-use of an Initiator Name Can Lead to Incorrect Devices Being Mapped or Unmapped

Bug 6341547 - If you change the name of a mapped initiator and the old name is then used by another initiator, the mapping information can become wrong.

Workaround - If possible, do not reuse the name. If you have to reuse it, perform the following steps:

1. Unmap initiator A.

2. Rename it to B.

3. Name the new initiator A.

4. Remap initiator B.

5. Map initiator A.

An Error Occurs If You Attempt to Map to an Initiator With the Maximum Volumes Already Mapped

Bug 6340957 - The Sun StorEdge 6920 system supports 256 logical unit number (LUN) mappings per initiator. If you attempt to map a volume to an initiator with no available LUNs and you do not specify a LUN number (requiring the system to discover the "next available" LUN), the following message is displayed instead of the correct message:


Unexpected internal system error. Retry the operation and then contact your Sun service representative if the error persists.

Workaround - Either map to a different initiator with free LUNs, or free up LUNs on the desired initiator so that you stay within the system limit of 256 LUN mappings per initiator.

Not Possible to Configure a Second Gateway on the Same Subnet As a Peer Port

Bug 6339002 - If you try to configure a peer port with a default gateway that is on the same subnet as a previously configured default gateway, the operation fails with the following message:


Unexpected internal system error. Retry the operation and then contact your Sun service representative if the error persists. The peer port operation failed due to an internal error (SP).

This error message is misleading. The operation failed because a subnet can only have one default gateway.

Workaround - Configure the peer port with a default gateway on a different subnet than other peer ports.

Running Repair Mirror on a Local Mirror With a Split Incorrectly Removes the Split From the Mirror

Bug 6332380 - If you create a mirror volume and then split a component, everything works as expected. However, if you run the repair mirror command on the local mirror, the split component is removed from the local mirror.

Workaround - Click the Rescan Devices button from the External Storage Summary page to redisplay the split component.

Storage I/O Card Shut Down Incorrectly Listed As Sub-event of the Link Down State Change of Ethernet Port

Bug 6325646 - When the Ethernet port state changes to LINK_DOWN because the card is shut down, the Storage I/O card shut down is listed as a sub-event of the Ethernet port failure.

Workaround - If you have replication enabled for a Gigabit Ethernet port and a Link Down event occurs, check the state of the Storage I/O card during diagnosis.

The list repset details Command Fails After Addition of a Replication Set to Consistency Group With Mismatched Modes

Bug 6323551 - After the failure to add a replication set to a consistency group because of the mismatched modes, the sscs list repset details command fails with the following error:

Unexpected internal system error.


Note - The command listing the consistency group details succeeds; the command listing the replication set details does not.



Workaround - Restart the CIMOM. The list repset details command should then work.

Fibre Channel Replication May Not Use All Configured Ports

Bug 6319103 - When multiple Fibre Channel (FC) peer ports have been enabled, full replication synchronization may not occur. This means that one peer port may be running many replication sets in one consistency group, while replication sets on the remaining peer port fail to start.

Workaround - If you see that one peer port is not being used for replication after configuring all the consistency groups and standalone replication sets, delete the peer port not in use and then reconfigure it. After about 2 minutes, this triggers a redistribution that balances the number of replication sets on each port.

Error Appears When You Rejoin a Component That Has Lost Communication

Bug 6312924 - When you try to rejoin a split mirror component that has lost communication, the system returns this generic message:


./sscs modify -j volume  1_7_1_0-2
The create operation failed.

Workaround - Restore communication with a mirror component before rejoining.

A Failed Bitmap Creation Error During Creation of a Consistency Group Results in the Consistency Group Being Marked As Missing

Bug 6312451 - If you try to create a consistency group but there is not enough available capacity in the pool, the following error is displayed:


A bitmap distribution error occurred, ensure available capacity in storage pool.

This also results in the consistency group being marked as missing in the sscs list constgroup command display.

Workaround - Remove the consistency group manually from the Data Services Platform (DSP).

Release 3.0.1 Backout Patch Fails to Back Out of One of Two Arrays

Bug 6310593 - When you try to back out of the 3.0.1.5 release on two Sun StorEdge 6020 arrays, you must run two backout reports to back out both arrays.

Workaround - Do not select all components available for upgrade from the Revision Maintenance - Upgrade page if any component contains the same patch ID.

Virtual Disk Details for a Volume May Not Show Status of Incomplete

Bug 6310434 - When you display details for virtual disks of a volume, the status of virtual disks may be shown as OK when it is not.

Workaround - Click the Rescan Devices button on the External Storage Summary page to update the virtual disk state when the browser interface has not updated the display.

Modifying an Existing Volume to Be a Mirror Volume Fails in the Browser Interface With All Component Types Used

Bug 6309175 - The following failure message can be displayed when the browser interface is used to create a local mirror with optional isolation and components from a storage pool, a volume, and a legacy volume:


Mirror creation failed. The following errors occurred:
lm-1 - The volume size specified is too large for the virtual disks in the storage pool specified

Workaround - Following are two possible workarounds:

1. Mirror the existing volume using a storage pool and a single volume as the second and third components.

2. Add the legacy volume as a fourth component.

Or:

1. Mirror the existing volume using a legacy volume as the second component.

2. Add a storage pool and volume as the third and fourth components.

Wrong Error Message Is Displayed When You Try to Create a Local Mirror From a Disconnected Volume

Bug 6308290 - If you try to create a local mirror from a disconnected volume, the following error is displayed:


You cannot add an existing volume as an mirrored volume component for a new mirrored

This is an incorrect error message.

Workaround - None at this time.

sscs list revision Command Failed on a New System

Bug 6307074 - When performing the initial startup of a new Sun StorEdge 6920 system out of the box and using the Host CD to install the software on a Solaris SPARC system, issuing the sscs list revision command results in the display of numerous errors.

Workaround - None at this time.

Sun StorEdge Network Data Replicator Software Times Out on a Single User Volume Deletion When Both Sides Are Deleting the Volume in Parallel

Bug 6305366 - If you try to delete a remote mirror (RM) on both sides at the same time when the replication set has just been created on this RM and was synchronizing, the Sun StorEdge Network Data Replicator software times out, and no event logs or errors are reported.

Workaround - None at this time.

Misleading Error Message Results During Creation of a Legacy Volume From External Storage

Bug 6304579 - The following message can appear during successful creation of a legacy volume from external storage:


sscs create -e disk/3/1/1/0 -p Default -S DEFAULT volume bubb Operation not supported; operation failed.

Workaround - Issue the following sscs list volume command to verify that the volume was created:


sscs list volume name

Interface Incorrectly Allows Addition of Replication Sets From Different Domains to the Same Consistency Group

Bug 6296378 - The interface allows the addition of replication sets from different storage domains to the same consistency group. The operation succeeds, but existing members of the consistency group could display unexpected results. Adding replication sets from different storage domains to the same consistency group is not a supported operation and should not be attempted.

Workaround - Do not attempt to add replication sets from different storage domains to the same consistency group.

Volumes Created In Sun StorEdge 6920 System, V2.0.5 Might Be Slightly Larger Than Those Created in Release 3.0.1

Bug 6296000 - Primary volumes created using Sun StorEdge 6920 v.2.0.5 software are slightly larger than secondary volumes created using Sun StorEdge 6920 v.3.0.1 software. If you try to replicate a primary volume created in v.2.0.5 software, an error might appear that indicates the secondary volume is not big enough if this volume was created with v.3.0.1 software. You must then create a bigger volume.

Workaround - Create the new Sun StorEdge 6920 v.3.0.1 secondary volumes slightly larger to accommodate the size of the v.2.0.5 volumes that you want to replicate.

Creation of a Second Peer Port on the Same Subnet as the First Fails if the Default Gateway Addresses Do Not Match

Bug 6295024 - Creation of a second peer port fails if the configured default gateways do not match, and a console error message appears.

Workaround - Create all peer ports with the same default gateway.

When You Log Out With a Job Running, Processing Terminates But Job Elapsed Time Does Not

Bug 6292502 - A job appears to be running forever if you log out of the Storage Service Processor or have your user session time out while the job is running.

When you log in again, the job is still on the Current Jobs page showing no progress, but the elapsed time is still incrementing.

The job should have been removed and placed on the Historical Jobs page with a status of User Logout or Timed-Out.

Workaround - From the Jobs page, cancel the job that is no longer running.

Volume State Is Intermittently Displayed Incorrectly

Bug 6291118 - When you create volumes with snapshot reserve, the pool and profile are intermittently displayed as Null, and further operations on the volume are limited.

Workaround - The Rescan Devices button on the External Storage Summary page enables you to modify volume-to-pool associations. That operation will recover and report the correct volume state.

More Than the Maximum Supported 128 Virtualized Legacy Volumes Can Be Configured

Bug 6285494 - If you try to configure more than the maximum number of 128 supported virtualized legacy volumes (VLVs), the following error is displayed from the command-line interface (CLI), but the VLVs are still configured:


The maximum number of legacy volumes for the system has been exceeded.



Note - From the browser interface, no error message is displayed.



Workaround - Do not configure more than the maximum number of 128 supported VLVs.

Peer Link Does Not Come Up Without Replication Sets Configured

Bug 6264635 - A replication link will not transition to the "link up" state until at least one replication set is configured for the remote system.



Note - You must create a replication set before a replication link will come up.



Workaround - None at this time.

Workaround - A CIM client application must make use of one of the firewall's existing open port numbers. Make sure that the port you select is displayed as open on the Administration Port Filtering page of the Sun StorEdge 6920 Configuration Service application. Both the Pegasus and wbemservices client libraries allow a specific port number to be used for setup of a CIM indication listener. The open port numbers include 22 (ssl), 25 (smtp), 427 (slp), 443 (patchpro) and 8443 (esm). More ports than this are listed on the Administration Port Filtering page, but not all are suitable for use as CIM indication destination ports.

After a Rolling Firmware Upgrade, the DSP Fails to Delete the Replication Set on the Upgrade Node and Displays a System Error

Bug 6260176 - After you perform a rolling upgrade on the Data Services Platform (DSP) firmware, the primary DSP is sometimes unable to resume data replication and displays a system error.

Workaround - Confirm that all remote replication has been suspended before initiating rolling upgrade.

The System Processing Time Can Be Long During Creation of a New Mirrored Volume

Bug 6256116 - Occasionally, the system may take a long time when you create a new mirrored volume and simultaneously map it to initiators using the New Volume Wizard.

Workaround - Limit to 32 the number of virtual disks in pools from which you create mirrored volumes.

Creating a Virtual Disk With an Invalid Array Name Produces the Wrong Error Message

Bug 6215190 - Creating a virtual disk with an invalid array name results in the following message:


Default, couldn't find space.

Workaround - Be aware of this condition. If you receive this error message, check to see that you have not supplied an invalid array name or tray ID.

Mirroring a Replicated Volume Fails

Bug 6205347 - When you attempt to mirror a volume that is already configured with a replication set, the system might return the error The create operation failed.

Workaround - None. Mirroring of replicated volumes is not permitted.

Virtual Disks Are Not Reinitialized When Reassigned to a New Storage Pool

Bug 5069434 - The system software does not prevent you from adding a virtual disk created for one storage pool to another storage pool that has a different storage profile. Because the original attributes of a virtual disk cannot be changed, the result is a virtual disk residing in a storage pool with attributes that do not match the attributes of the storage pool.

Workaround - Although you cannot reassign a virtual disk from one storage pool to another pool with a different storage profile, you can delete the virtual disk and create a new one. First delete the volumes, and then delete the virtual disk. Create a new virtual disk in the storage pool with the desired storage profile.

Changing Passwords Works Intermittently

Bug 5061119 - If you type a password into the New Password and Password Confirmation fields and then click Set Password, the change might not occur, in spite of the following message:


The password has been successfully changed.

If this happens, and you type the user name and "old" password, the login is accepted.

Workaround - If the password update was not accepted initially, change the password again.

Add Storage To Pool Wizard Displays Invalid Trays

Bug 5049258 - The Add Storage To Pool wizard can erroneously display invalid trays for selection when you attempt to add storage to a pool.

Workaround - After you add storage to a pool, wait at least one minute before attempting to add more storage to a pool (including the same storage pool).

If the Add Storage To Pool wizard shows a list of trays that contains two entries for each tray, cancel the operation and wait another minute. This should clear the invalid trays from the display.

Profile Details Page Allows You to Change the Profile Configuration to RAID5 With Two Drives

Bug 5010540 - The Profile Details page allows you to change the profile configuration to RAID5 with two drives.

For example, if you use the following process to configure an invalid number of disks for a RAID-5 profile, the save operation is successful:

1. On the Storage Profile Summary page, select User Profile.

2. From the RAID Level list, select RAID-5.

3. From the #Drives list, select 2.

4. Click Save.



Note - The Profile Creation wizard accurately checks the number of disks. However, the Profile Details page for an existing user-created profile, where the number of disks can also be changed, does not check the number of disks.



Workaround - Use the Profile Creation wizard.

The Browser Interface Might Not List the Correct Status of Storage Pools With the Same Name

Bug 4993083 - The browser interface might not show storage pools with the same name in two storage domains correctly.

Workaround - If two or more storage pools with the same name appear in different domains, only one will be listed on the Storage Pool Summary page. If you filter the storage pool summary by domain, you will be able to see the individual storage pools.

When creating storage pools, assign names that are unique across the whole system.

Data Services Platform Firmware

This section describes known issues and bugs related to the Data Services Platform (DSP) firmware.

After Addition of a Replication Set to a Consistency Group Fails, a Volume Might Remain As Part of Consistency Group

Bug 6342044 - If a failure occurs during the addition of a replication set to a consistency group, a volume remains that appears to be part of the consistency group. Removing the replication set from the consistency group does not clear the problem.

Workaround - Delete the replication set and re-create it on the underlying data volume.

Creation of Local Mirrors With More Than Two Components Causes Improper Distribution of Data Partitions

Bug 6330647 - If you have a configuration with only two virtual disks in a storage pool and you create a local mirror with three or four components from a storage pool (rather than from existing volumes), the system does not distribute the data partitions between the two available virtual disks. Instead, the mirror is improperly created with all data partitions coming from a single virtual disk. With a two-component local mirror, the two data partitions are distributed as they should be between the two virtual disks.

Workaround - To create mirrors with more components than virtual disks in the pool, create the components individually and then mirror those components together. For example, to create a three-component mirror on two devices with some degree of independence, create the two component volumes individually on one virtual disk, with the third on the other virtual disk, and then mirror the three components together.

Consistency Group Name is Not Usable After a Creation Failure

Bug 6318853 - If creation of a replication set in a new consistency group fails because of the inability to create the bitmap or asynchronous queue, the consistency group name may become unusable for further operations.

Workaround - Use a different consistency group name after the failure.

Removing a Virtual Disk From an Unmapped Local Mirror With Snapshots Causes VSM Errors When the Virtual Disk Is Reinserted

Bug 6306503 - If you remove a virtual disk from an unmapped local mirror with snapshots, Virtualization State Manager errors appear when the virtual disk is reinserted.

Workaround - Map the local mirror volume to an initiator, and then remove and reinsert the virtual disk.

Volumes May Be Missing After Role Reversal

Bug 6300069 - After a role reversal of a large number of volumes, some of the volumes on the primary site might disappear.

Workaround - Click the Rescan Devices button on the External Storage Summary page to recover the missing volumes.

LOG_CRIT Event (ICS Del Failed TIMEOUT) Occurs Despite Successful Rolling Upgrade

Bug 6282833 - When a card is shutting down, the following LOG_CRIT message can be returned:


06/08/2005 13:41:09 LOG_CRIT     (VCM: 5-0)  vcm_mic_remove_iscsi: ICS
Del failed TIMEOUT.  2-1, OSH fffffff0-00028700-00002870-0000d576  [0xff]

Workaround - None is required; this message is benign.

Unexplained excessive retries and device unreachable Events Appear During Gigabit Ethernet Configuration

Bug 6338240 - With Gigabit Ethernet enabled, you might receive Excessive retries and Device Unreachable events from the monitoring software on an external storage virtual disk that has transitioned to a non-redundant HA state (only a single path to the external storage virtual disk).

Workaround - These messages are due to transient network failures and are usually benign.

After a Processor Reset, a Premature Detection of Disks Results in False LOG_CRIT Messages

Bug 6225669 - When a processor on a storage resource card (SRC) reboots after an inadvertent crash (for instance, due to a software panic of some sort), it might report events similar to the following messages.


02/03/2005 16:35:25 LOG_CRIT     (VCM: 5-0)  FAILED Setup connection 
from 4/1 to 3/1, OSH 60003ba2-7ca6b000-4034919c-0006d196  [0xff], state: 
0 status: CANT_CREATE_
02/03/2005 16:35:25 LOG_CRIT     (VCM: 5-0)  VCM: Remote 3/1 Connection 
failed -2 to WWN = 60:00:3B:A2:7C:A6:B0:00:40:34:91:9C:00:06:D1:96
02/03/2005 16:35:25 LOG_INFO     (VCM: 5-0)  Scheduled to redistribute 4 
ALUs in 120 sec.
02/03/2005 16:35:25 LOG_CRIT     (VCM: 5-0)  vcm_iscsi_t1_to_alu_cb: 
iSCSI setup error state 0, status 19, ALU wwn 
60:00:3B:A2:7C:A6:B0:00:40:34:8F:D1:00:0A:8C:A2
02/03/2005 16:35:25 LOG_CRIT     (VCM: 5-0)  vcm_iscsi_t1_to_alu_cb: 
iSCSI setup error state 0, status 19, ALU wwn 
60:00:3B:A2:7C:A6:B0:00:40:34:90:4F:00:07:62:35
02/03/2005 16:35:25 LOG_CRIT     (VCM: 5-0)  vcm_iscsi_t1_to_alu_cb: 
iSCSI setup error state 0, status 19, ALU wwn 
60:00:3B:A2:7C:A6:B0:00:40:34:90:F8:00:05:F4:50

These events are usually benign as long as the Sun StorEdge 6920 system has fully recovered to its normal high-availability (fully redundant) state, and no further action is required.

Workaround - Ignore the messages that are displayed.

Storage Automated Diagnostic Environment

This section describes known issues and bugs related to the Storage Automated Diagnostic Environment application.



Note - When you replace a standby switch fabric card (SFC), an actionable event could occur, even though the card correctly returns to standby mode when the reload is complete.



Message Incorrectly Says That a Volume Was Deleted When a Virtual Disk Was Actually Deleted

Bug 6357771 - The command-line interface (CLI) displays the following:

The removal/reconfiguration of a volume has been detected.

Actually, the removal or reconfiguration of a virtual disk should have been detected.

The alarm/event is non-actionable and non-critical.

Workaround - None at this time.

When You Upgrade the LPC Firmware on Cards in an Array, the Update Change Is Not Shown in the Generate Inventory Display

Bug 6335700 - If you have an array running firmware release 7.21 on the loop card (LPC) and you use the Revision Maintenance command to update the firmware to release 7.23, the Generate Inventory command does not show that the LPC cards are running a newer version of firmware. No change is reported on the array.

Workaround - Use the Generate New Inventory command and click Save.

After a Power Reset, the Storage Automated Diagnostic Environment Software Is Inaccessible

Bug 6352972 - After a power reset, the system sometimes can no longer access the monitoring and diagnostic software. All pages respond with the following error:


An Internal Error occurred. The Storage A.D.E engine may not be responding.

Workaround - Contact Sun Service to reboot the Storage Service Processor.

Revision Maintenance-Upgrade View Affects Volumes But Does Not Display Volume Data

Bug 6330817 - During a revision upgrade to release 3.0.1, when you click the View Affected Volumes button on the Revision Maintenance - Upgrade page, the Affected Volumes - array00 page is displayed but the volume data is missing.

Workaround - None at this time.

DSP Firmware Inventory Changes Are Not Displayed After a DSP Patch Installation

Bug 6328928 - After installation of a Data Services Platform (DSP) patch and generation of a new inventory, no notification of the DSP firmware change from the previous version is provided.

Workaround - Click the Generate New Inventory button to update the display.

Monitoring Software Displays A LOG_CRIT Event When Replication Is Manually Suspended

Bug 6327537 - A LOG_CRIT event is displayed in the event log when replication is suspended, even if it was explicitly suspended by a user command.

Workaround - None at this time.

Generation of a New Inventory Report Is Not Always Successful After Reservation of a System for Maintenance

Bug 6311635 - When you use the Storage Automated Diagnostic Environment to reserve a system for maintenance (and to designate a reserve time), if you finish the work before the designated time, and then release the system, an inventory report cannot be generated.

Workaround - Wait for the designated maintenance time to expire before generating an inventory report.

There Is No Alarm Management Interaction Between Enterprise and System Editions of Storage Automated Diagnostic Environment

Bug 6264718 - There is no logical connection between the System Edition and Enterprise Edition of the Storage Automated Diagnostic Environment software. Each edition is a separate entity and requires independent management by the user. Alarm management is not propagated by either edition to the other.

Workaround - When the condition that generated an alarm is corrected, manually delete the alarm on the Edition page of both the System Edition of the Storage Automated Diagnostic Environment (SUNWstads) that resides in the rack and the Enterprise Edition (if it is monitoring the rack as a device from a separate monitoring station).

This ensures that the separate packages are displaying the correct information.

The DSP Slot Count Is Incorrect After a Component Is Removed and Replaced

Bug 6234925 - After removal and replacement of a Data Services Platform (DSP) board FRU, the View Rack Components page of the Sun Java Web Console shows an incorrect DSP slot count. The Device Details page shows a correct DSP slot count.

Workaround - Do not look at the View Rack Components page for the DSP FRUs. Look for the correct number of installed DSP FRUs on the Inventory Report screen.

The Performance Data Page Does Not Load

Bug 6214849 - If you try to open the Performance Data page at the same time as another user, it will not load.

Workaround - Try to load the page again after waiting a moment.

Local Notification Information Page: Do Not Select All or Informational

Bug 4995950 - When setting up remote email notification on the Local Notification Information page of the Storage Automated Diagnostic Environment application, do not select All or Informational. These selections cause notification to be sent for all events, including those that do not indicate a fault.

Workaround - For fault-specific information only, select Warning, Error, and Down when setting up fault notification.

Internationalization

This section describes known issues and bugs related to internationalization and language translation.

Configuration Management Software

Some Buttons, Box Options, and Job Descriptions Are Not Displayed Correctly by the Localized Interface

Bug 6239357 - Some buttons, box options, and job descriptions remain displayed in the language in which the browser interface was initially launched or the action was initially taken.

Workaround - None at this time.

Non-Internationalized Messages Might Be Displayed on the Job Details Page

Bug 6237308 - After deletion of a pool, virtual disk, or volume, some English messages might be displayed on the Job Details page in the localized interface.

Workaround - None at this time.

An Internal System Error Is Displayed When a Description Contains French Characters

Bug 6272992 - When you try to save French characters, such as "è" in "système" or "â" in "tâches", in the Description field on the General Setup page, an internal system error message is displayed.



Note - Multibyte characters work for the ja, ko, zh_CN, and zh_TW locales.



Workaround - Do not enter French characters into the text field.

Storage Automated Diagnostic Environment

The Text Field on the Notification Setup Page Does Not Support Non-ASCII Characters

Bug 6273563 - Multi-byte characters saved on the Notification Setup page are displayed as "??."

Workaround - Enter only ASCII characters in the text field.

Miscellaneous

This section describes other known issues and bugs found in the system.

Upgrade From the 3.0.0 to the 3.0.1 Release Causes Some Log Messages to Have Wrong Severity

Bug 6352921 - Excessive alert messages with the wrong severity could appear after an upgrade from release 3.0.0 to release 3.0.1.

Workaround - None at this time.

Point-to-Point Fibre Connection Mode for the Sun StorEdge 3510 Array Fails the ISP fclink Test

Bug 6330626 - When you change the Fibre Channel (FC) connection mode from loop to point-to-point, the Sun StorEdge 3510 FC array fails the ISP fclink test. The Data Services Platform continues to report loop-up and loop-down messages for the currently attached point.

Workaround - None at this time.

Cannot Take a Snapshot of a Newly Created Mirror

Bug 6328973 - When you create a new mirror and allocate snapshot reserve space from a storage pool that is different from that of the associated mirror component, you cannot take a snapshot of the newly created mirror.

Workaround - Delete and re-create the snapshot reserve space after the mirror has been created. At this point you can specify different pools for the snapshot reserve space and associated mirror component. This is only an issue with mirrors; volume creation and snapshot work as expected.

DSP Inventory Is Incorrectly Shown to Change After a Storage Service Processor Patch Installation

Bug 6327158 - Generating an inventory after applying a Storage Service Processor patch can result in the Data Services Platform (DSP-1000) falsely reporting changes in its inventory.

Workaround - None at this time.

Repair Mirror Command on a Virtualized Legacy Volume May Fail

Bug 6325108 - Issuing a Repair Mirror command on certain mirrors might fail and result in a general error message. Reasons for the failure might include insufficient available storage in any of the pools of the mirror's components. This is particularly likely if the mirror has legacy volumes, since legacy pools associated with legacy volumes often have no available space.

Workaround - Either add storage to the pools of the mirror's components, or add a component from a pool with available storage.

SSCS CLI Port Enable and Disable Commands Might Time Out in Configurations Using Many Replication Sets

Bug 6322093 - If you have 128 replicated volume sets evenly divided across 16 consistency groups (120 synchronous, 8 asynchronous), run a short shell script to time the setup and tear-down of a dual-link configuration, and then begin configuring a new address on a single port, you might see long completion times (approximately 5 minutes) and time-outs logged in the se6920ui log. Replication set redistribution is in progress at the time of the time-outs.



Note - This usually happens when a V210 Storage Service Processor is installed.



Workaround - The intended peer port and link configuration operation will have completed successfully and does not need to be retried. However, confirm the configuration outcome by using the following command:


sscs list etherport port-name

Eight Drives Cannot Be Used for a RAID-5 Virtual Disk When Disks Are 300-Gbyte

Bug 6319525 - The system supports 2-terabyte virtual disks. However, if you create a virtual disk with a RAID-5 profile and select storage by tray using 8 disks (RAID-5 7+1), an error appears stating that the selection is larger than 2 terabytes.

The 300-GB drives appear as 279.397-GB on the Disk Summary page of the browser interface.



Note - The selection of eight drives represents 8 x 279.397 GB = 2.235 terabytes of raw capacity, which exceeds the 2-terabyte limit. Because a single drive's worth of capacity is used for parity, the actual usable capacity is less than 2 terabytes:
7 x 279.397 GB = 1.9558 terabytes



Workaround - Be aware that you cannot create a virtual disk with the maximum 2 terabytes of storage when using 300-GB disk drives.
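
As a quick arithmetic check, the following minimal shell sketch (using the standard bc calculator; the figures are taken from the note above) compares the raw capacity of the eight selected drives with the usable capacity after parity:


# Raw capacity of eight 279.397-GB drives versus the usable capacity
# after one drive's worth of parity is subtracted (values in Gbytes).
echo "raw:    $(echo '8 * 279.397' | bc) GB"    # 2235.176 GB (2.235 TB)
echo "usable: $(echo '7 * 279.397' | bc) GB"    # 1955.779 GB (1.9558 TB)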

GetClass of SunStorEdge_DSPStorageConfigurationService Fails on Pegasus CIM Clients

Bug 6318084 - A Storage Management Initiative Specification (SMI-S) CIM client using the Pegasus client library will get a CIMXML parse error with a GetClass of the SunStorEdge_DSPStorageConfigurationService. This prevents the client from discovering the methods needed for configuration of the 6920 system.

Workaround - Add the Override qualifier to the SunStorEdge_DSP.mof for the failing method, or rename the method.

During a Rolling Upgrade, Shutting Down the Switch Fabric Card Results in a Failure Notification

Bug 6317192 - Shutting down a switch fabric card (SFC, for example, card 6) during a rolling upgrade results in the following message.


SFC REDUND card 6 has FAILED - card 5 is SFC primary

This message incorrectly indicates that the SFC failed, instead of its operation being only temporarily suspended.

Workaround - Ignore the message.

After Failing Due to a Faulty Disk Drive, Virtual Disk Creation Might Not Resume After Replacement of the Drive

Bug 6313151 - If the process of creating a virtual disk fails due to a faulty disk drive, it does not resume after the disk drive is replaced.



Note - You should expect to retry the Add storage to pool command after a disk drive failure and always click the Rescan Devices button after hardware replacement.



Workaround - After replacing the disk drive, try again to create the virtual disk. If the creation is not allowed because the disk status still shows a failed disk, use the Refresh Arrays button on the Array Details page (Physical Storage > Arrays) of the Sun StorEdge 6920 Configuration Service application to update the array status, and then create the virtual disk.

Event Log Messages Have the Ports Identified by the Physical Port ID Instead of the System Port ID

Bug 6312185 - Event log messages have the system ports labeled by the physical port ID, such as 0x1040001. For example:


Aug 16 12:08:10 dsp00  08/16/2005 12:13:29 LOG_WARNING  (ISP4XXX: 1-4)  Gig Ethernet received link down on port 0x1040001
Aug 16 12:08:14 dsp00  08/16/2005 12:13:33 LOG_WARNING  (ISP4XXX: 1-4)  Gig Ethernet received link   up on port 0x1040001

The ports should be labeled by the system port ID. For example:


Aug 16 12:08:10 dsp00  08/16/2005 12:13:29 LOG_WARNING  (ISP4XXX: 1-4)  Gig Ethernet received link down on port 1/7
Aug 16 12:08:14 dsp00  08/16/2005 12:13:33 LOG_WARNING  (ISP4XXX: 1-4)  Gig Ethernet received link   up on port 1/7

Workaround - Use the following algorithm to convert a physical port ID to a system port ID (a minimal scripted version of the conversion follows the examples below):

system port = S/N, where N = (P - 1) x 2 + p

where:

S is the slot (card) number, taken from the first byte of the physical port ID.

P is the physical port number on the card, taken from the second byte of the physical port ID.

p is the port index within the physical port, taken from the last field of the physical port ID.

Examples:

port 0x2010001 = port 2/1

port 0x2010002 = port 2/2

port 0x2020001 = port 2/3

port 0x3040002 = port 3/8

port 0x4030001 = port 4/5
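
The following minimal sketch (bash or ksh; the variable names and the sample port ID are illustrative) applies the same conversion to a single physical port ID:


# Convert a physical port ID to its system port ID, using the rule
# system port = S/N, where N = (P - 1) * 2 + p.
PHYS=0x3040002                  # sample physical port ID from the examples above
S=$(( (PHYS >> 24) & 0xff ))    # slot (card) number
P=$(( (PHYS >> 16) & 0xff ))    # physical port number on the card
p=$((  PHYS        & 0xffff ))  # port index within the physical port
echo "port $S/$(( (P - 1) * 2 + p ))"   # prints: port 3/8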

Using the config_solution Script to Run the setgid Command Fails

Bug 6283274 - The -I switch is not allowed with the setgid command when you run the t4_rnid_cfg script during a migration from release 2.0.x to 3.0.x.

Workaround - Edit the first line of the /usr/local/bin/t4_rnid_cfg file. The original line looks like:


#!/usr/bin/perl -I/usr/local/lib/perl5 --  # -*-Perl-*-
#
# t4_rnid_cfg.pl -- script to configure T4 RNID parameters

Replace that line with the following two lines:


#!/opt/SUNWstade/bin/perl -U
use lib "/usr/local/lib/perl5";

Then, re-run the config_solution script.

Legacy Volume Device Can Become Unreachable During Failover Due to a Large Variability in EVA LUN Move Time

Bug 6281926 - During an array failover, host I/O failures should not occur if the array controllers are operating in a normal mode and the logical unit number (LUN) moves take less than two minutes. A normal enterprise virtual array (EVA), running up-to-date firmware with an optimal configuration, should easily meet this requirement. If a LUN move takes significantly longer, however, the legacy volume device can become unreachable during the failover.

Workaround - None at this time.

The System Does Not Recover From I/O Errors Until a Processor Crash, Volume Move, or Configuration Request Occurs

Bug 6278220 - If an I/O error is encountered on a service or log component, that component is marked out of service and shut down. The only way you can restart that component and clear the state is to restart the volume or refresh the volume with a processor crash, volume move, or configuration request.

Workaround - None at this time.

Rolling Upgrade Fails With Sun StorEdge 6910 on the SAN

Bug 6272710 - A rolling upgrade fails if a Sun StorEdge 6910 system (target and initiator) is configured on the Fibre Channel (FC) storage area network (SAN).

Workaround - The 6910 system must be placed in a different FC switch zone than the 6920 system to ensure that it does not interfere with the 6920 system.

Barber Pole Animation Doesn't Work When Using a Wizard

Bug 6265292 - When using a wizard with Microsoft Internet Explorer 6, clicking the Finish button on the Wizard Summary page might not display the animation (rotation) of the barber pole progress indicator. The wizard/application appears to be frozen.

Workaround - Let the wizard run to completion (even though it looks as if nothing is happening), at which time it will automatically close the window.

Unexplainable Replication Set Creation Failures

Bug 6262621 - Some errors are not relayed through the system to reveal the cause of replication set error conditions.

Workaround - None at this time.

Adding New Components to a Logical Mirror Fails Intermittently With Small Volume Sizes

Bug 6258661 - An intermittent failure occurs when you try to add a component to a newly created logical mirror. Retrying the add operation succeeds. This failure is caused by the small size (50 Mbytes) of the mirror and of the volume and pool components being added.

Workaround - Retry the command to add the component to the mirror.

An I/O Error Message Results During an Array Controller Failover

Bug 6258029 - If one array controller of a partner pair goes offline due to a hardware failure or software fault, or during a rolling firmware upgrade, an I/O error message can result.

Workaround - If the volume cannot be quiesced, remove it from the attached logical unit waiting list.

Clear Upgrade Report Action Removes the Job From the Jobs Page; Archiving a Job Results in the Loss of the Upgrade Report

Bug 6255586 - If you clear an upgrade report, the job and the log file associated with that job are deleted.

For example, if you use Generate Patch Report and go to the Jobs tab, you see that the job is complete. If you then click Clear Patch Report and return to the Jobs tab, you see that the job has been deleted.

The actions on one tab affect the other tab, but the relationship is not clear.

Workaround - None at this time.

The DSP Provides No Notification When the Sun StorEdge 6130 Array Is Not Set to the AVT Mode

Bug 6254707 - Configuring the Sun StorEdge 6130 arrays with the Auto Volume Transfer (AVT) set to Off results in the following access error message on the host.


Illegal request due to current lun ownership

No event log entry is sent to the Storage Automated Diagnostic Environment that indicates the exact nature of the problem.

Workaround - Configure the Sun StorEdge 6130 arrays with the Auto Volume Transfer (AVT) set to On.

Data Is Not Available After a Snapshot Resnap With Microsoft Windows OS

Bug 6246981 - When using Windows as your operating system, you may not be able to view updated snapshot data after performing a resnap operation.

Workaround - If this occurs, remove and then re-add the drive letter.

A Rolling Upgrade Fails After Inadvertent Creation of a "Micro-Hairpin" Configuration

Bug 6246328 - If both storage arrays and initiators are connected on the same ingress port (not processor) with a Fibre Channel switch (which creates a micro-hairpin configuration), attempts to perform a rolling upgrade can fail.



Note - This is not a supported configuration.



Workaround - Do not connect storage arrays and initiators on the same ingress port (not processor) by using a Fibre Channel switch.

Benign LOG_CRIT iSCSI Messages Are Logged Inadvertently by the Storage Automated Diagnostic Environment

Bug 6245542 - This issue is similar to that of Bug 6225669. The following LOG_CRIT messages can be generated whenever there are failover events on the Sun StorEdge 6920 system, such as a cable being pulled, a card being shut down, a processor crashing due to a latent software bug, or even the system undergoing a PatchPro upgrade.


03/23/2005 13:19:23 LOG_CRIT (CONFIG: 0-0) iSCSI Target Lun 9999 on (tgt VSE
not created/1/4 to 3/4 - CANT CREATE TO VSE) not created
03/23/2005 13:19:23 LOG_CRIT (VCM: 5-0) FAILED Setup connection from 1/4 to
3/4, OSH 60003ba4-d345b000-42374ab6-000c7fb8 [0xff], state: 0 status:
CANT_CREATE_
03/23/2005 13:19:23 LOG_CRIT (VCM: 5-0) VCM: Remote 3/4 Connection failed -
2 to WWN = 60:00:3B:A4:D3:45:B0:00:42:37:4A:B6:00:0C:7F:B8
03/23/2005 13:19:23 LOG_CRIT (CONFIG: 0-0) iSCSI Target Lun 9999 on (tgt VSE
not created/2/3 to 3/4 - CANT CREATE TO VSE) not created
03/23/2005 13:19:23 LOG_INFO (VCM: 5-0) VCM Backup Resync Scheduled in 60
seconds, gen 11870

Workaround - Ignore the error messages.

The Host I/O Fails During DSP Firmware Upgrade

Bug 6244623 - The host I/O data flow can fail during a Data Services Platform (DSP) firmware upgrade using PatchPro if both storage arrays and initiators are connected on the same ingress port (not processor) with a Fibre Channel switch (which creates a micro-hairpin configuration).



Note - This is not a supported configuration.



Workaround - Do not connect storage arrays and initiators on the same ingress port (not processor) by using a Fibre Channel switch.

Profile Descriptions Are Not Included in Search Results

Bug 6233593 - The browser interface search function does not include profile descriptions. Searches will find terms in volume descriptions, but not in profile descriptions. Searches are case insensitive.

Workaround - None at this time.

Volume Namespace Is Global--Domains Imply Isolation of Namespaces

Bug 5095383 - The namespace for volumes is global within a rack. Separate domains do not provide separate volume names.

Workaround - Be aware that separate storage domains do not provide separate volume namespaces and that all volume names must be globally unique across the system.

Login Attempt Can Hang

Bug 5057792 - When an attempt is made to log in to the browser interface or command-line interface (CLI) using the storage account, the login will hang if the Data Services Platform (DSP) is not responding. Correcting this condition requires that the DSP be power-cycled.

Workaround - Use the admin account to log in to the browser interface or CLI. You will not encounter a hang-up and will be able to issue the request to power-cycle the DSP. Then you can log in using the storage account.

The samfsck Command Can Take a Long Time to Complete a File System Build on Sun StorEdge 6920 System LUNs

Bug 5026163 - Using the samfsck command to check a Sun StorEdge QFS file system built on Sun StorEdge 6920 system logical unit numbers (LUNs) can take a long time.

Workaround - Be aware that, depending on the configuration and I/O load on the system, a file system build can take up to 45 minutes to complete on a 200-Gbyte file system.
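
As a hedged illustration only (the family-set name samfs1 is a placeholder and is not part of this bug report), a check of a QFS file system defined in /etc/opt/SUNWsamfs/mcf might look like the following; see the samfsck(1M) man page for the options available on your system:


# Check the QFS family set named samfs1; without -F the command reports
# problems without repairing them. Plan for a long run time on 6920 LUNs.
samfsck samfs1
# Repair the file system if the check reports problems.
samfsck -F samfs1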

Booting/Rebooting: Errors Occur During Boot for Direct-Attached Storage Data Hosts

Bug 4969489 - When direct-attached storage data hosts are connected to the Sun StorEdge 6920 system and devices are connected in autotopology mode, a problem might occur during initial booting.

Workaround - Edit the jfca.conf file in /kernel/drv on the data host, using the following values:

For loop mode:

FcLoopEnabled = 1;
FcFabricEnabled = 0;

For fabric mode:

FcLoopEnabled = 0;
FcFabricEnabled = 1;


Known Documentation Issues

The following topics describe known issues in areas of the documentation:

sscs CLI Man Page Corrections

This section describes corrections to the sscs man page. Apply the following changes to the commands and examples listed below.

create profile

Under the description of the -v command option, "-v,--virt-strategy striped|concat" should read "-v,--virt-strategy stripe|concat".

list initiator

Under Response Format in the Examples section, "Description: <initiator-name>" should read "Description: <initiator-description>".

modify volume

Under the description of the -S, --sdomain option, "Specify the storage domain volume operands" should read "Specify the storage domain".

Expand Snapshot Reserve Space for a Volume

The example should be changed from:


sscs -C 8 -L high -S MyDomain volume MyVolume

to read:


sscs snapshot -C 8 -L high -S MyDomain volume MyVolume

Getting Started Guide Corrections

The corrections in this section apply to the Sun StorEdge 6920 System Getting Started Guide (part number 819-0117-10).

Sun StorEdge 9960 System Is Not Qualified As an External Storage Device for the 6920 System

Bug 6373801 - The section Supported Storage Devices of the Sun StorEdge 6920 System Getting Started Guide (part number 819-0117-10) lists the "Sun StorEdge 9960 system" as a supported external storage device. The Sun StorEdge 9960 system is not supported and should not be connected to the Sun StorEdge 6920 system as an external storage device.

Workaround - None at this time.

Incorrect Command in the Using the Remote Scripting CLI Client Section

Bug 6307091 - A paragraph in the "Logging In to the System" section incorrectly reads:

Use the /opt/se6920/cli/bin/sscs command to perform the remote management operations. For further information about remote management operations, see the sscs(1M) man page.

Workaround - The paragraph should read:

Use the /opt/se6x20/cli/bin/sscs command to perform the remote management operations. For further information about remote management operations, see the sscs(1M) man page.

Power-On Description Should Mention the Storage Service Processor's Green LED

Bugs 6306615, 6307088 - The Sun StorEdge 6920 System Getting Started Guide includes the step, "Wait approximately one minute after the AC power sequencer circuit breakers are pressed on." This text appears in two sections of the guide.

Workaround - This process should include the following next step:

"On Version 210 of the Storage Service Processor, confirm that the LED on the front left bezel is green, indicating the operating system is up and running, before powering on the remaining rack components."

Clarification for the Default Configuration Options Section

Bug 6242746 - The Default storage profile does not include a dedicated hot-spare. A dedicated hot-spare is a spare disk within an array that is used for failover when a particular virtual disk fails. To reconfigure an array to include a dedicated hot-spare, use the New Storage Profile wizard to create a new profile and enable the dedicated hot-spare attribute.

Workaround - You can also reconfigure the number of array spares within an array. Go to Sun StorEdge 6920 Configuration Service > Physical Storage > Arrays, and click the name of the array you want to modify. The Array Details page displays the array attributes and includes the fields you can modify. You can specify from 0 to 8 array hot-spares within an array. You can also modify an array by using the sscs modify array command.

System Administration Guide and Online Help Corrections

The corrections in this section apply to both the Sun StorEdge 6920 System Administration Guide (part number 819-0123-10), and the online help.

Restoring the System After a Full Shutdown

This process has been changed. Replace the existing process in the Sun StorEdge 6920 System Administration Guide with the following process:

If you want to restore the system after it has been powered off with the full shutdown procedure, you must go to the location of the system and perform the following procedure:

1. Open the front door and back door of the base cabinet and any expansion cabinets.

2. Remove the front trim panel from each cabinet.

3. Verify that the AC power cables are connected to the correct AC outlets.

4. At the bottom front and bottom back of each cabinet, lower the AC power sequencer circuit breakers to On.

The power status light-emitting diodes (LEDs) on the front and back panels illuminate in sequence, showing the status of the front power sequencer.



Note - You must wait until each component is fully booted before powering on the next component.



5. Power on the storage arrays.



caution icon

Caution - If you power on the DSP before the storage arrays are fully booted, the system does not see the storage volumes and incorrectly reports them as missing.



6. Power on the Data Services Platform (DSP).

7. At the back of the system, locate the power switch for the Storage Service Processor and press the power switch on.

8. Verify that all components have only green LEDs lit.

9. Replace the front trim panels and close all doors.

The system is now operating and supports the remote power-on procedure.

Combining Replication Sets in a Consistency Group

This process has been changed. Replace the existing process with the following process:

If you have already created a number of replication sets and then determined that you want to place them in a consistency group, do so as outlined in the following sample procedure. In this example, Replication Set A and Replication Set B are existing independent replication sets. Follow these steps on both the primary and secondary peers:

1. Create a temporary volume, or identify an unused volume in the same storage domain as Replication Sets A and B.

2. Determine the World Wide Name (WWN) of the remote peer.

This information is on the Details page for either replication set.

3. Select a temporary or unused volume from which to create Replication Set C, and launch the Create Replication Set wizard from the Details page for that volume.

Creating Replication Set C is just a means to create a consistency group. This replication set is deleted in subsequent steps.

4. Do the following in the Create Replication Set wizard:

a. Select a temporary or unused volume from which to create the replication set.

b. In the Replication Peer WWN field, type the WWN of the remote system.

c. In the Remote Volume WWN field, type all zeros. Then click Next.

d. Select the Create New Consistency Group option, and provide a name and description for Consistency Group G. Click Next.

e. Specify the replication properties and replication bitmap as prompted, confirm your selections, and click Finish.

5. On the Details page for Replication Set A, click Add to Group to add the replication set to Consistency Group G.

6. On the Details page for Replication Set B, click Add to Group to add the replication set to Consistency Group G.

7. On the Details page for Replication Set C, click Delete to remove the replication set from Consistency Group G.

Replication Set A and Replication Set B are no longer independent and are now part of a consistency group.

Fast-Start Feature Documentation Should Mention That the Application Should Be Quiesced During Operation

Bug 6225134 - The documented description of the fast-start feature should mention that a fast start is to be issued with a quiesced application. After the fast start is complete, the user can unquiesce the application.

Workaround -

The online help procedure "Synchronizing Data Using a Backup Tape" should include the following step:

1. Quiesce the application that is accessing the primary volume. Unmount the volume if necessary.

The same online help procedure should also include the following step:

9. Unquiesce the application.

Synchronizing Data Using a Backup Tape

Bug 6428911 - This section of the documentation needs the additional step "Suspend with Fast Start" after the tape is restored to the secondary peer. This step clears the bits that were set on the secondary peer by the tape restore process and allows the subsequent "Resume with Normal synchronization" step to transfer only the data written after the application resumed. The following modified documentation process describes these steps and also describes the steps for quiescing and unquiescing the application.

Workaround - The "Synchronizing Data Using a Backup Tape" section should read:

If you want to minimize data replication I/O traffic when you set up a copy of the data on a remote peer, you can use a backup tape copy of the primary volume to copy and synchronize data on the secondary volume.

To synchronize data using a backup tape, perform the following steps on the primary and secondary peers.

1. On the primary and secondary peers, create the replication set.

2. On the primary peer, quiesce the application that is accessing the primary volume.

3. On the primary peer, quiesce the file system.

Unmount the volume if necessary.

4. On the primary peer, click Suspend and select the Fast Start option.

This clears the primary bitmap.

5. On the primary peer, back up the data to tape.

This creates a block-based disk image.

6. On the secondary peer, click Suspend and select the Fast Start option.

This clears the secondary bitmap.

7. On the primary peer, click Resume and select Normal synchronization.

This clears the "sync needed" flag on the secondary peer, which would otherwise disallow writing to the secondary volume in Step 10.



Note - Since both bitmaps are cleared, no data is transferred from primary to secondary in this step. The replication set remains in replicating mode.



8. On the primary peer, click Suspend and select the Fast Start option.

This returns the set to the Suspend state in preparation for unquiescing the application. This also ensures that the bitmap is clear.

9. On the primary peer, unquiesce the application that is accessing the primary volume.

10. On the secondary peer, restore the data from the backup tape.

This also sets the bits in the bitmap.

11. On the secondary peer, click Suspend and select the Fast Start option.

This clears the secondary bitmap. At this point the only bits that are set in either bitmap are those bits that correspond to data written to the primary volume after the application was resumed in Step 9.

12. On the primary peer, click Resume and select Normal synchronization.

This moves the data that was written after the application resumed.

About Core Files

Bug 6206619 - The "About Core Files" page in the online help contains the following misleading sentence:

The system software retains up to five core files for each device.

Workaround - The sentence should read:

The system software retains the last core file for the Sun StorEdge 6020 array and up to five core files for the Data Services Platform (DSP).

In addition, the Note at the end of the topic applies only to the Sun StorEdge 6020 array. The DSP saves up to five core files before it begins to overwrite the oldest saved core files.

Best Practices Guide Corrections

This section describes corrections and additions to the Best Practices for the Sun StorEdge 6920 System (part number 819-3325-10).

Remote Replication

This information has been changed. Replace the existing section with the following information:

Release 3.0.1 of the Sun StorEdge 6920 system has added support for remote data replication. This feature enables you to continuously copy a volume's data onto a secondary storage device. This secondary storage device should be located far away from the original (primary) storage device. If the primary storage device fails, the secondary storage device can immediately be promoted to primary and brought online.

The replication process begins by creating a complete copy of the primary data on the secondary storage device at the disaster recovery site. Using that copy as a baseline, the replication process records any changes to the data and forwards those changes to the secondary site.

For help setting up appropriate security, contact the Client Solutions Organization (CSO).

More Than Two Connections to an External Storage Virtual Disk Cause Rolling Upgrade and Fault Injection Failures

Bug 6346360 - The Best Practices for the Sun StorEdge 6920 System should describe the following limitation:

An external storage virtual disk configured with more than two connections causes rolling upgrade and fault injection failures.

Workaround - None at this time.

Other Documentation Bugs

Remote Mirror Full Data Synchronization Does Not Check Secondary Volume Mount

Bug 6227819 - The remote mirror full data synchronization does not check for secondary volume mount. This could result in data not being available to the user immediately upon completion of full synchronization.

Workaround - None at this time.


Service Contact Information

Contact Sun Customer Service if you need additional information about the Sun StorEdge 6920 system or any other Sun products:

http://www.sun.com/service/contacting


1 (TableFootnote) NetWorker PowerSnap Module requires minimum version 7.2 of the Sun StorEdge Enterprise Backup Software with Service Update 2 patch
2 (TableFootnote) The term "initiator" means the "initiator instance" as seen by the Sun StorEdge 6920 system. If a data host-side HBA port sees N ports, the system sees N initiators. The 256-initiator limit translates to a maximum of 128 dual-path data hosts, where each data host HBA port can see one port of the system.