Sun StorEdge 6920 System Release Notes, Release 3.2

This document contains important information about the Sun StorEdge™ 6920 system software release 3.2 that was not available at the time the product documentation was published. Read this document so that you are aware of issues or requirements that can impact the installation and operation of the Sun StorEdge 6920 system running system software release 3.2.

This document consists of the following sections:


New Features for Release 3.2

Release 3.2 of the Sun StorEdge 6920 system software adds the new features described briefly in this section. For additional information, see the product documentation.

Tree-based Navigation Flow

The navigation tree is displayed in the left-hand pane of the interface. You use the navigation tree to move among folders and pages.

The top level of the navigation pane displays the following links:

Logical Storage: Displays links to Volumes, Snapshots, Replication Sets, Virtual Disks, Pools, Profiles, and Domains pages.

Physical Storage: Displays links to Initiators, Ports, Arrays, Trays, and Disks pages.

Mappings: Displays the Mappings Summary page.

External Storage: Displays the External Storage Summary page.

Jobs: Displays links to Current Jobs and Historical Jobs pages.

Administration: Displays links to General Settings, Licensing, Port Filtering, Notification, and Activity Log pages.

Mapping Summary Table

The Mappings Summary page enables you to view the current volume and initiator mappings, and also to create mappings of a snapshot volume to an initiator.

Graphical Alert Icons

Icons are displayed to draw your attention to an object's status, including critical errors, minor errors, and unknown conditions.

Activity Log Display

The activity log lists all user-initiated actions performed on the system, in chronological order. These actions might have been initiated through either the Sun StorageTek Common Array Manager interface or the command-line interface (CLI).

StorADE Alarm Interface

The alarm counts in the masthead are retrieved immediately when a new page is requested (or the existing page is refreshed).

4-Gigabit/Sec SAN Support

The 6920 system now operates in a 4-Gigabit/sec SAN environment.


Supported Software and Hardware

The software and hardware components described in the following sections have been tested and qualified to work with the Sun StorEdge 6920 system:

Supported Web Browsers

The Sun StorEdge 6920 system software release 3.2 supports the web browsers listed in TABLE 1.


TABLE 1 Sun StorEdge 6920 Supported Browsers

Client OS: Minimum Supported Browser Version

Microsoft Windows 98, Windows XP, Windows 2000, Windows Server 2003: Microsoft Internet Explorer 5.5; Mozilla 1.4; Netscape Navigator 6.2; Firefox 1.0

Solaris 8, 9, 10 for Sun SPARC and x86 platforms: Mozilla 1.4; Netscape Navigator 6.2; Firefox 1.0

Apple Mac OS X: Mozilla 1.4; Firefox 1.0

Red Hat Enterprise Linux Application Server 2.1: Mozilla 1.4

SuSE Linux Enterprise Server 8.0: Mozilla 1.4

Hewlett-Packard HP-UX 11: Mozilla 1.4

IBM AIX 5.2: Mozilla 1.4


Additional Supported Data Host Software

The software listed in TABLE 2 is compatible for use on data hosts with data paths or network connections to the Sun StorEdge 6920 system.


TABLE 2 Supported Sun Data Host Software

Software: Minimum Version

Sun StorEdge Enterprise Storage Manager: 3.0.1
Sun StorEdge Availability Suite: 3.0.1
Sun StorEdge Enterprise Backup Software: 7.1
Solstice DiskSuite: 4.2.1
Solaris Volume Manager software (embedded in the Solaris 9 Operating System): N/A
Sun StorEdge QFS: 4.0
Sun StorEdge SAM-FS: 4.0
Sun™ Cluster software: 3.2, update 3


The third-party software listed in TABLE 3 is compatible for use on data hosts with data paths or network connections to the Sun StorEdge 6920 system.


TABLE 3 Supported Third-Party Software

Software: Version

VERITAS NetBackup Server: 5.0 and later
VERITAS NetBackup Enterprise Server: 5.0 and later
VERITAS Volume Manager with Dynamic Multipathing (DMP) for Solaris: 3.5, 4.0, and 4.1
VERITAS File System (VxFS) for Solaris: 3.5, 4.0, and 4.1
VERITAS Volume Replicator for Solaris: 3.5, 4.0, and 4.1
VERITAS Cluster Server (VCS): 3.5, 4.0, and 4.1
Legato NetWorker®: 7.1 and later


For current hardware compatibility information for the VERITAS products, see:

http://support.veritas.com/

Supported Fibre Channel Switches, HBAs, Data Hosts, and Operating Systems

The Sun StorEdge 6920 system supports all of the Fibre Channel (FC) switches, host bus adapters (HBAs), data hosts, and operating systems supported by Sun StorEdge SAN Foundation software version 4.4 (and later). Please contact your local Sun customer service representative for more information.

Supported Languages

The Sun StorEdge 6920 system software release 3.2 supports the languages and locales listed in TABLE 4.


TABLE 4 Supported Languages and Locales

Language: Locale

English: en
French: fr
Japanese: ja
Korean: ko
Simplified Chinese: zh
Traditional Chinese: zh_TW




Note -
- Man pages are available only in English and Japanese.
- The online help is not translated in this release; the English version of the online help is displayed on localized GUIs. If you need a localized version in any of the above languages, contact Sun Customer Service.
- Localization of email notification is not supported in this release.




Upgrading to Release 3.2

This upgrade must be performed by a Sun Customer Service technician. Please call Sun Customer Service to arrange an installation or upgrade to Release 3.2.

Supported Upgrade Paths


TABLE 5 Supported Upgrade Paths

Previous Version: Supported

3.0.1.13: No
3.0.1.22: Yes
3.0.1.23: No
3.0.1.25: No
3.0.1.26: Yes



System Usage Limits

TABLE 6 lists maximum values for elements of the Sun StorEdge 6920 system.


TABLE 6 Sun StorEdge 6920 System Limits

System Attribute: Maximum

Volumes per system: 1024 volumes
Virtual disks per tray: 2 virtual disks
Volumes per virtual disk: 32 volumes
Mirrored volumes: 128 (256 mirrored components)
Components in a mirror: 4, including the primary volume
Legacy volumes: 128
Snapshots per volume: 8 snapshots
Expand snapshot reserve space: Up to 31 times
Pre-defined profiles: 15
Initiators[1] per system: 256 initiators
Initiators per DSP port: 128
Storage pools: 64 storage pools
Storage profiles: 15 system-defined storage profiles; no limit for user-defined profiles



Release Documentation

TABLE 7 and TABLE 8 list the documents that are related to the Sun StorEdge 6920 system. For any document number with nn as a version suffix, use the most current version available.

You can search for this documentation online at

System overview information, as well as information on system configuration, maintenance, and basic troubleshooting, is covered in the online help included with the software. In addition, the sscs (1M) man page provides information about the commands used to manage storage using the command-line interface (CLI).


TABLE 8 Sun StorEdge 6920 Related Documentation

Product: Title (Part Number)

Sun Storage Automated Diagnostic Environment, Enterprise Edition:
- Sun Storage Automated Diagnostic Environment Enterprise Edition Release Notes Version 2.4 (819-0432-nn)

SAN Foundation software:
- Sun StorEdge SAN Foundation 4.4 Configuration Guide (817-3672-nn)

Oracle Storage Compatibility Program:
- Sun StorEdge Data Snapshot Software With Oracle Databases Usage Guide (819-3326-nn)
- Sun StorEdge Data Mirroring Software With Oracle Databases Usage Guide (819-3327-nn)
- Sun StorEdge Data Replication Software With Oracle Databases Usage Guide (819-3328-nn)

Sun Storage Traffic Manager software:
- Sun StorEdge Traffic Manager 4.4 Software Release Notes for HP-UX, IBM AIX, Microsoft Windows 2000 and 2003, and Red Hat Enterprise Linux (817-6275-nn)
- Sun StorEdge Traffic Manager 4.4 Software User's Guide for IBM AIX, HP-UX, Microsoft Windows 2000 and 2003, and Red Hat Enterprise Linux (817-6270-nn)
- Sun StorEdge Traffic Manager 4.4 Software Installation Guide for Red Hat Enterprise Linux (817-6271-nn)
- Sun StorEdge Traffic Manager 4.4 Software Installation Guide for Microsoft Windows 2000 and 2003 (817-6272-nn)
- Sun StorEdge Traffic Manager 4.4 Software Installation Guide for IBM AIX (817-6273-nn)
- Sun StorEdge Traffic Manager 4.4 Software Installation Guide for HP-UX 11.0 and 11i (817-6274-nn)

Sun StorEdge Network Fibre Channel switch-8 and switch-16:
- Sun StorEdge Network 2 Gb FC Switch-8 and Switch-16 FRU Installation (817-0064-nn)
- Sun StorEdge 6920 System Administration Guide for the Browser Interface Management Software (819-0123-nn)
- Sun StorEdge 6920 System Hardware Quick Setup poster (817-5226-nn)
- Sun StorEdge Network 2 Gb FC Switch-8 and Switch-16 Release Notes (817-0770-nn)
- Sun StorEdge Network 2 Gb FC Switch-64 Release Notes (817-0977-nn)

Sun StorEdge Brocade switch documentation:
- Sun StorEdge Network 2 Gb Brocade SilkWorm 3200, 3800, and 12000 Switch 3.1/4.1 Firmware Guide to Documentation (817-0062-nn)

Sun StorEdge McData switch documentation:
- Sun StorEdge Network 2 Gb McDATA Intrepid 6064 Director Guide to Documentation, Including Firmware 5.01.00 (817-0063-nn)

Expansion cabinet:
- Sun StorEdge Expansion Cabinet Installation and Service Manual (805-3067-nn)

Storage Service Processor:
- Sun Fire V210 and V240 Server Administration Guide (816-4826-nn)

Solaris Operating System:
- Solaris Handbook for Sun Peripherals (816-4468-nn)



Known Issues in Release 3.2

This section provides information about known issues with Release 3.2.

Use Only One Fibre Channel Host Port per 6140 Array Controller When Connecting to a 6920

The 6920 system restricts any given virtual disk (vdisk) to two paths. If multiple host ports on a 6140 array controller are connected to the 6920 system, this restriction is violated. Connect only one host port per controller.

Data Services Platform Fan Replacement

The fan in the Data Services Platform (DSP) is a field-replaceable unit (FRU). When removing the fan, observe the following caution.




Caution - The fan has unprotected blades that might still be spinning when the fan is removed. Be sure that the fan blades have stopped moving completely before removing the fan from the cabinet.



Setting Message Priority for Email Notification Recipients

If you set the Priority parameter to All when adding or editing an email notification recipient, the recipient receives a message for every event that occurs in the system, even for general messages that do not require intervention.

To generate notification messages only for events and alarms that require intervention, set the Priority parameter to Major and above or Critical and above.

Remote Replication, Host Connections, and a Single Controller

Bug 6493606 - Connecting remote replication on one port of a controller and a host connection on the other port of the same controller causes problems.

Workaround - Do not connect a host to the same controller that is also configured for remote replication.

Other Known Issues Not Applicable to Release 3.2

Array Upgrade Issue

An intermittent problem can occur with PatchPro timing out during an array firmware upgrade. This does not affect the data-path operation, but the upgrade log will indicate that the patch installation failed. Currently, this issue has only been observed on large-capacity systems with numerous arrays.


Bugs

The following sections provide information about bugs filed against this product:

If a recommended workaround is available, it follows the bug description.

Configuration and Element Management Software

This section describes known issues and bugs related to the configuration management software browser interface.

Queue Size Defaults to 512 MB When the CLI Is Used to Change Replication Mode From Asynchronous to Synchronous and Back to Asynchronous

Bug 6357963 - If you change an Asynchronous replication set from Asynchronous mode to Synchronous mode and then back to Asynchronous mode using the command-line interface (CLI), the following error appears:


You cannot decrease the size of the virtual disk queue without first deleting it

If you change the same Asynchronous replication set through the browser interface, no error occurs. This occurs because the browser interface reuses the original queue size, while the CLI defaults to a queue size of 512 MB.

Workaround - Use the browser interface to change an Asynchronous replication set from Asynchronous mode to Synchronous mode and back to Asynchronous mode.

The System Processing Time Can Be Long During Creation of a New Mirrored Volume

Bug 6256116 - Occasionally, the system may take a long time when you create a new mirrored volume and simultaneously map it to initiators using the New Volume Wizard.

Workaround - Limit to 32 the number of virtual disks in pools from which you create mirrored volumes.

Data Services Platform Firmware

This section describes known issues and bugs related to the Data Services Platform (DSP) firmware.

Misrepresented Progress Status after Rolling Back a Broken Local Mirror Volume

Bug 6360303 - The system reports a misleading progress status after you roll back a broken-off local mirror volume. The progress status goes from 0 to 100% for the volume as a whole, not for individual partitions. When the rollback is complete, the volume condition is no longer "Rollback in Progress," so the reported rollback percentage returns to 0%.

Workaround - Disregard the percentage complete messages until the operation is complete.

Storage Automated Diagnostic Environment

This section describes known issues and bugs related to the Storage Automated Diagnostic Environment application.



Note - When you replace a standby switch fabric card (SFC), an actionable event could occur, even though the card correctly returns to standby mode when the reload is complete.



Fault Management Generates Major Alarms When Remote Replication Is Suspended Manually

Bug 6327537 - If you receive alarms with the event code 30.20.149, you should talk with the system administrators at both the local site and the remote site to verify whether this is an expected occurrence. If it is not, then you should contact Sun StorageTek Customer Service.

Fault Management Does Not Provide Adequate Information on the Queue Performance Page

Bug 6418306 - The Fault Management system does not report statistics to the Global Access Log when Consistency Groups are being used. All sets in the Consistency Group use the same Global Access Log, but the statistics are not being reported.

Workaround - Review the queue statistics on any volumes that are in the Consistency Group.

Solution Extract Causes Erroneous Event Code and Message

Bug 6408258 - If you run Solution Extract, the Fault Management system sends event code 30.20.149 "Potential missing or unmounted MIC slave PC CARD." If the system was not reporting errors before running Solution Extract, then this is an erroneous message.

Workaround - Disregard the message.

Some Event Log Messages Have the Ports Identified by the Physical Port ID Instead of the System Port ID

Bug 6312185 - Some Event log messages have the system ports labeled by the physical port ID, such as 0x1040001. For example:


Aug 16 12:08:10 dsp00  08/16/2005 12:13:29 LOG_WARNING  (ISP4XXX: 1-4)  Gig Ethernet received link down on port 0x1040001
Aug 16 12:08:14 dsp00  08/16/2005 12:13:33 LOG_WARNING  (ISP4XXX: 1-4)  Gig Ethernet received link   up on port 0x1040001

This occurs because only some event log messages, such as Port Up and Port Down, already have the system port ID associated with them.

The ports should be labeled by the system port ID. For example:


11/18/2005 09:31:30 LOG_INFO     (Proc: 3-2)  Port 3/4 is UP
11/18/2005 09:31:37 LOG_INFO     (Proc: 4-2)  Port 4/4 is DOWN

Workaround - Use the following algorithm to convert a physical port ID to a system port ID (slot/port):

physical port = 0xS0P000p

system port = S / ((P - 1) x 2 + p)

where:

S = Slot number of the board (1, 2, 3, or 4)

P = Processor number (1, 2, 3, or 4)

p = Port number on that processor (1 or 2)

Examples:


physical port 0x2010001 = system port 2/1

physical port 0x2010002 = system port 2/2
physical port 0x2020001 = system port 2/3
physical port 0x3040002 = system port 3/8
physical port 0x4030001 = system port 4/5
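
As an illustration only, here is a small bash or ksh function that applies the conversion above. This is a sketch written for these release notes; the function name is made up and is not a tool shipped with the system:

phys_to_system_port() {
    # $1 is a physical port ID of the form 0xS0P000p, for example 0x3040002
    hex=$1
    S=$(( (hex >> 24) & 0xF ))    # slot number of the board
    P=$(( (hex >> 16) & 0xF ))    # processor number on that board
    p=$(( hex & 0xF ))            # port number on that processor
    echo "${S}/$(( (P - 1) * 2 + p ))"
}

phys_to_system_port 0x3040002    # prints 3/8
phys_to_system_port 0x2020001    # prints 2/3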

Miscellaneous

This section describes other known issues and bugs found in the system.

Problem With Configuring IP Replication Without Auto Synchronization On

Bug 6509629 - If you configure IP replication without turning on auto synchronization, the resulting replication configuration is impractical: the replication sets are forced to remain suspended for some period of time.

Workaround - Do one of the following:

"sscs modify --resume <--full> --sdomain <domain_name> --sdomain <domain_name> constgroup <group_name>"

OPIE Security Challenge Prevents Solution Extract From Capturing SSCS

Bug 6500365 - SSCS data collection is missing from the solution extract on customer configurations when One-Time Password in Everything (OPIE) security is enabled.

Workaround - SSRR is already enabled, so Sun service personnel can dial in to your system and manually retrieve the SSCS information if needed.

6920 System Fails to Add Storage After a Bad Disk Drive Is Replaced

Bug 6427492 - After you replace a bad disk drive, adding storage to the storage pool can fail with the following error:

"Could not find Product class for this disk"

as shown in:

/var/log/webconsole/se6920ui.log 2006-05-18 10:55:40,560 [HttpProcessor[6789][3]] ERROR com.sun.netstorage.array.mgmt.cfg.mgmt.business.impl.mr3.Disk - loadDiskProperties:Could not find Product class for this disk.

Workaround - To correct this problem, you must rescan the devices in the system. You can do this in either of two ways:

CLI: Use the command sscs rescan system.

GUI: Click the Rescan Devices button on the External Storage page.

Switch Frames Display Page Loading Indicators Differently

Bug 6377042 - Different browsers display page loading indicators in different manners:

Firefox: The animated graphic and status bar indicate a done status before the page has completely loaded. However, the cursor displays a wait graphic until the page is fully loaded.

Internet Explorer: The status bar indicates a status of "opening page https://....." until all the frames are completely loaded. Also, the cursor does not display a wait graphic until the page is fully loaded.

Using the config_solution Script to Run the setgid Command Fails

Bug 6283274 - The -I switch is not allowed with the setgid command when you run the t4_rnid_cfg script during a migration from release 2.0.x to 3.0.x.

Workaround - Edit the first line of the /usr/local/bin/t4_rnid_cfg file. The original line looks like:


#!/usr/bin/perl -I/usr/local/lib/perl5 --  # -*-Perl-*-
#
# t4_rnid_cfg.pl -- script to configure T4 RNID parameters

Edit this line to the following (the use lib statement goes on a second line):


#!/opt/SUNWstade/bin/perl -U
use lib "/usr/local/lib/perl5";

Then, rerun the config_solution script.

Limit the Number of Virtual Disks in Pools When Creating Local Mirrors

Bug 6256116 - You cannot create volumes from a pool that has 64 virtual disks in it if you intend to create local mirrors.

Workaround - Do not use more than 32 virtual disks in pools that are used to create local mirrors.

Incorrect Error Message When Mapping to an Offline Initiator

Bug 6353863 - When you create a mapping to an offline initiator, the operation completes but is acknowledged by a message stating that the mapping has failed. When you check the volume mappings, however, the system indicates that the mapping was performed.

The internal logic tries to map to all instances of the server (i.e., to all ports where the server has been seen). If any one mapping fails, the Map state is displayed as failed, even if the others were successful. Thus, the attempt to map to the offline instance appears to be a failure.

Workaround - Since the operation succeeded, the mapping has not "failed." Disregard the incorrect error message.

Host Channel Port 1 and External Storage

Bug 6511687 - When the 6140 Array is used as external storage for the 6920, only the Host Channel 1 port can be used. The other Host Channel ports on the 6140 Array must not be connected to any devices.

Do Not Directly Attach a 1-Gigabit/Sec PCI Dual FC Host Adapter+ to a 6920 Running SAN 4.4.9

Bug 6565798 - There are problems directly attaching a 1-Gigabit/Sec PCI Dual FC Host Adapter+ to a 6920 running SAN 4.4.9 and above.

Workarounds - You can use the 1-Gigabit/Sec PCI Dual FC Host Adapter+ with any version of SAN 4.4.x if it is not directly attached; for example, you can use the adapter with SAN 4.4.9 when connecting to a 6920 through a switch. If the Crystal Plus HBA is directly attached, SAN 4.4.8 and below work correctly. This problem does not occur with other HBAs that are directly attached. You can also upgrade the 1-Gigabit/Sec PCI Dual FC Host Adapter+ to 2 Gigabit, which works correctly.

Ghost LUNs and Legacy External Dual-Path Storage

Bug 6389694 - In some instances when you have legacy external dual-path storage configured, you might see ghost LUNs. For example, your configuration services database might show 16 LUNs when only 8 LUNs are actually configured.

Workaround - Enter the following command to stop the element manager:

/etc/init.d/init.se6000 stop

Then restart the element manager:

/etc/init.d/init.se6000 start

If restarting the element manager does not clear the condition, enter the following command to reboot the Storage Service Processor, which also restarts the element manager:

reboot

Losing a Mirror Component Results in a Split Mirror

Bug 6472491 - If a component of a local mirror is removed, the GUI and the CLI report that the mirror component was removed. The component might report Lost Communication, and a volume might appear to be missing.

Workaround - Rebooting the DSP might clear the problem.

Attempt to repair the mirror. Depending on the circumstances of the mirror components, this might or might not repair the situation and report all volumes properly. To rejoin a split mirror component to the mirror:

1. Click Sun StorEdge Configuration Manager.

The Volume Summary page and the navigation pane are displayed. To display the Volume Summary page at any time, choose Logical Storage > Volumes.

2. Click a mirrored volume that has a split component that you want to bring back into the mirror.

The Mirrored Volume Details page is displayed.

3. In the Mirror section of the page, click the radio button to the left of the split component that you want to rejoin. Its condition status will be OK, Split Volume.

4. Click Rejoin.

A confirmation message and the Mirrored Volume Details page are displayed. During the rejoin process, the status of the component is listed as Resilvering. When the resilvering process is complete, the Mirror section is updated to show that the component is 100% resilvered and that it has a condition of OK.


Known Documentation Issues

The following topics describe known issues in areas of the documentation:

General Documentation Issues

Configuring MPxIO on iSCSI and FC Is Not Shown in User Documentation

Bug 6485986 - The following procedure describes how to configure multiple iSCSI sessions for a target (MPxIO) on iSCSI:

This procedure can be used to create multiple iSCSI sessions that connect to a single target. This scenario is useful with iSCSI target devices that support login redirection or have multiple target portals in the same target portal group. iSCSI multiple sessions per target support should be used in combination with Solaris SCSI Multipathing (MPxIO).

1. Become a superuser.

2. List the current parameters for the iSCSI initiator and target.

a. List the current parameters for the iSCSI initiator. For example:

# iscsiadm list initiator-node
Initiator node name: iqn.1986-03.com.sun:01:0003ba4d233b.425c293c
Initiator node alias: zzr1200
	 	Configured Sessions: 1

b. List the current parameters of the iSCSI target device. For example:

# iscsiadm list target-param -v iqn.1992-08.com.abcstorage:sn.84186266
Target: iqn.1992-08.com.abcstorage:sn.84186266
                    Alias: -
Configured Sessions: 1

The configured sessions value is the number of configured iSCSI sessions that will be created for each target name in a target portal group.

3. Modify the number of configured sessions either at the initiator node level, to apply to all targets, or at the target level, to apply to a specific target.

The number of sessions for a target must be between 1 and 4.

To apply the parameter at the initiator node level, for example:

# iscsiadm modify initiator-node -c 2

To apply the parameter to a specific iSCSI target, for example:

# iscsiadm modify target-param -c 2  iqn.1992-08.com.abcstorage:sn.84186266

Configured sessions can also be bound to a specific local IP address. Using this method, one or more local IP addresses are supplied in a comma-separated list. Each IP address represents an iSCSI session. This method can also be done at the initiator-node or target-parameter level. For example:

# iscsiadm modify initiator-node -c 10.0.0.1,10.0.0.2


Note - If the specified IP address is not routable, the address is ignored and the default Solaris route and IP address are used for this session.



4. Verify that the parameter was modified.

a. Display the updated information for the initiator node. For example:

# iscsiadm list initiator-node
Initiator node name: iqn.1986-03.com.sun:01:0003ba4d233b.425c293c
Initiator node alias: zzr1200
Configured Sessions: 2

b. Display the updated information for the target node. For example:

# iscsiadm list target-param -v iqn.1992-08.com.abcstorage:sn.84186266
Target: iqn.1992-08.com.abcstorage:sn.84186266
Alias: -
Configured Sessions: 2

The following procedure describes how to configure MPxIO on a Fibre Channel (FC) device:

1. Log in as superuser.

Determine the HBA controller ports that you want the multipathing software to control. For example, to select the desired device, perform an ls -l command on /dev/fc. The following is an example of the ls -l command output.

lrwxrwxrwx   1 root   root            49 Apr 17 18:14 fp0 ->
../../devices/pci@6,2000/SUNW,qlc@2/fp@0,0:devctl
lrwxrwxrwx   1 root   root            49 Apr 17 18:14 fp1 ->
../../devices/pci@7,2000/SUNW,qlc@2/fp@0,0:devctl
lrwxrwxrwx   1 root   root            49 Apr 17 18:14 fp2 ->
../../devices/pci@a,2000/SUNW,qlc@2/fp@0,0:devctl
lrwxrwxrwx   1 root   root            49 Apr 17 18:14 fp3 ->
../../devices/pci@b,2000/SUNW,qlc@2/fp@0,0:devctl
lrwxrwxrwx   1 root   root            50 Apr 17 18:14 fp4 ->
../../devices/pci@12,2000/SUNW,qlc@2/fp@0,0:devctl
lrwxrwxrwx   1 root   root            56 Apr 17 18:14 fp5 ->
../../devices/pci@13,2000/pci@2/SUNW,qlc@4/fp@0,0:devctl
lrwxrwxrwx   1 root   root            56 Apr 17 18:14 fp6 ->
../../devices/pci@13,2000/pci@2/SUNW,qlc@5/fp@0,0:devctl
lrwxrwxrwx   1 root   root            56 Apr 17 18:14 fp7 ->
../../devices/sbus@7,0/SUNW,qlc@0,30400/fp@0,0:devctl


Note - The fp7 entry is an SBus HBA. The fp5 and fp6 entries include two /pci elements, which indicates a dual PCI HBA. The remaining entries do not have additional PCI bridges and are single PCI HBAs.



2. Open the /kernel/drv/fp.conf file and explicitly enable or disable multipathing on an HBA controller port. In this file you can set both the global multipath setting and the multipath settings for specific ports.

3. Change the value of the global mpxio-disable property. If the entry does not exist, add a new entry. The global setting applies to all ports except the ports specified by the per-port entries.

a. To enable multipathing globally, change to

mpxio-disable="no";

b. To disable multipathing globally, change to

mpxio-disable="yes";

4. Add the per-port mpxio-disable entries - one entry for every HBA controller port you want to configure. Per-port settings override the global setting for the specified ports.

a. To enable multipathing on an HBA port, add:

name="fp" parent="parent name" port=port-number mpxio-disable="no";

b. To disable multipathing on an HBA port, add:

name="fp" parent="parent name" port=port-number mpxio-disable="yes";

5. The following example disables multipathing on all HBA controller ports except the two specified ports:

mpxio-disable="yes";
name="fp" parent="/pci@6,2000/SUNW,qlc@2" port=0 mpxio-disable="no";
name="fp" parent="/pci@13,2000/pci@2/SUNW,qlc@5" port=0 mpxio-disable="no";

6. If running on a SPARC-based system, perform the following:

Run the stmsboot -u command:
# stmsboot -u
WARNING: This operation will require a reboot.
Do you want to continue ? [y/n] (default: y) y
The changes will come into effect after rebooting the system.
Reboot the system now ? [y/n] (default: y) y

You are prompted to reboot. During the reboot, /etc/vfstab and the dump configuration will be updated to reflect the device name changes.

If running on an x86-based system, perform a reconfiguration reboot.

# touch /reconfigure
# shutdown -g0 -y -i6

7. If necessary, perform device name updates as described in Device Name Change Considerations.
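
If you want to confirm how device names changed after multipathing was enabled, the stmsboot -L option of the same Solaris utility lists the mapping between non-multipathed and multipathed device names. This is a general Solaris suggestion added here for convenience, not a step from the original procedure:

# stmsboot -L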

Internationalization

Broken Links Found in the Localized StorADE Online Help

Bug 6556476 - Broken links appear in the localized StorADE online help. To view the broken links, use the table of contents, index, or search to select a help page to view.

System Administration Guide and Online Help Corrections

The corrections in this section apply to both the Sun StorEdge 6920 System Administration Guide (part number 819-0123-10) and the online help.

Restoring the System After a Full Shutdown

This process has been changed. Replace the existing process in the Sun StorEdge 6920 System Administration Guide with the following process:

If you want to restore the system after it has been powered off with the full shutdown procedure, you must go to the location of the system and perform the following procedure:

1. Open the front door and back door of the base cabinet and any expansion cabinets.

2. Remove the front trim panel from each cabinet.

3. Verify that the AC power cables are connected to the correct AC outlets.

4. At the bottom front and bottom back of each cabinet, lower the AC power sequencer circuit breakers to On.

The power status light emitting diodes (LEDs) on both the front and back panel illuminate in the following order, showing the status of the front power sequencer:



Note - You must wait until each component is fully booted before powering on the next component.



5. Power on the storage arrays.




Caution - If you power on the DSP before the storage arrays are fully booted, the system does not see the storage volumes and incorrectly reports them as missing.



6. Power on the Data Services Platform (DSP).

7. At the back of the system, locate the power switch for the Storage Service Processor and press the power switch to the On position.

8. Verify that all components have only green LEDs lit.

9. Replace the front trim panels and close all doors.

The system is now operating and supports the remote power-on procedure.

Combining Replication Sets in a Consistency Group

This process has been changed. Replace the existing process with the following process:

If you have already created a number of replication sets and then determined that you want to place them in a consistency group, do so as outlined in the following sample procedure. In this example, Replication Set A and Replication Set B are existing independent replication sets. Follow these steps on both the primary and secondary peers:

1. Create a temporary volume, or identify an unused volume in the same storage domain as Replication Sets A and B.

2. Determine the World Wide Name (WWN) of the remote peer.

This information is on the Details page for either replication set.

3. Select a temporary or unused volume from which to create Replication Set C, and launch the Create Replication Set wizard from the Details page for that volume.

Creating Replication Set C is just a means to create a consistency group. This replication set is deleted in subsequent steps.

4. Do the following in the Create Replication Set wizard:

a. Select a temporary or unused volume from which to create the replication set.

b. In the Replication Peer WWN field, type the WWN of the remote system.

c. In the Remote Volume WWN field, type all zeros. Then click Next.

d. Select the Create New Consistency Group option, and provide a name and description for Consistency Group G. Click Next.

e. Specify the replication properties and replication bitmap as prompted, confirm your selections, and click Finish.

5. On the Details page for Replication Set A, click Add to Group to add the replication set to Consistency Group G.

6. On the Details page for Replication Set B, click Add to Group to add the replication set to Consistency Group G.

7. On the Details page for Replication Set C, click Delete to remove the replication set from Consistency Group G.

Replication Set A and Replication Set B are no longer independent and are now part of a consistency group.

Online Help Does Not Describe the Column Labeled Mapping State

Bug 6432516 - The online help should describe the mapping state as shown below:

Mapping State - Summary of the state of all the known paths between the 6920 and the host.

Best Practices Guide Corrections

This section describes corrections and additions to the Best Practices for the Sun StorEdge 6920 System (part number 819-3325-10).

Remote Replication

This information has been changed. Replace the existing section with the following information:

Release 3.0.1 of the Sun StorEdge 6920 system has added support for remote data replication. This feature enables you to continuously copy a volume's data onto a secondary storage device. This secondary storage device should be located far away from the original (primary) storage device. If the primary storage device fails, the secondary storage device can immediately be promoted to primary and brought online.

The replication process begins by creating a complete copy of the primary data on the secondary storage device at the disaster recovery site. Using that copy as a baseline, the replication process records any changes to the data and forwards those changes to the secondary site.

For help setting up appropriate security, contact the Client Solutions Organization (CSO).

More Than Two Connections to an External Storage Virtual Disk Cause Rolling Upgrade and Fault Injection Failures

Bug 6346360 - The Best Practices for the Sun StorEdge 6920 System should describe the following limitation:

Any disk configured with more than two connections to an external storage virtual disk causes rolling upgrade and fault injection failures.

sscs CLI Man Page Addition

This section describes an addition that will be made to the CLI man page.

Increasing the TCP Window Size For Remote Replication

Bug 6481346 - Currently the CLI man page does not include the sscs command that allows you to increase the TCP window size for remote replication.

Use the following sscs command to increase the TCP window size for remote replication:

modify etherport

modify -r <enable | disable> [ -g string ] [ -m string ] [ -l string ]
[ -w <1KB | 2KB | 4KB | 16KB | 32KB | 64KB | 128KB | 256KB | 512KB | 1MB> ]
etherport string

Options

-r, --replication enable | disable

Enables or disables the remote replication feature.

-g, --gateway string

The gateway address to be used for the remote replication feature.

-m, --network-mask string

The gateway address network mask.

-l, --local-address string

The local IP address to which you want to transmit remote replication data.

-w, --window-size 1KB | 2KB | 4KB | 16KB | 32KB | 64KB | 128KB | 256KB | 512KB | 1MB

The TCP window size that you want for remote replication.

etherport string

The Ethernet port to be used for remote replication.
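
For example, to enable replication with a 512-KB TCP window size, a command following the syntax above might look like the following. The port name, gateway, network mask, and local address shown are placeholders for illustration, not values taken from this document:

sscs modify -r enable -g 10.0.0.1 -m 255.255.255.0 -l 10.0.0.5 -w 512KB etherport <ethernet_port>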


Service Contact Information

Contact Sun Customer Service if you need additional information about the Sun StorEdge 6920 system or any other Sun products:

http://www.sun.com/service/contacting


[1] The term "initiator" means the "initiator instance" as seen by the Sun StorEdge 6920 system. If a data host-side HBA port sees N ports, the system sees N initiators. The 256-initiator limit translates to a maximum of 128 dual-path data hosts, where each data host HBA port can see one port of the system.