Sun StorEdge 6920 System Release Notes, Release 3.2
This document contains important information about the Sun StorEdge 6920 system software release 3.2 that was not available at the time the product documentation was published. Read this document so that you are aware of issues or requirements that can impact the installation and operation of the Sun StorEdge 6920 system running system software release 3.2.
This document consists of the following sections:
The Sun StorEdge 6920 system software release 3.2 adds the following new features:
This section provides a brief description of these features. For additional information, see the product documentation.
The navigation tree is displayed in the left-hand pane of the interface. You use the navigation tree to move among folders and pages.
The top level of the navigation pane displays the following links:
Logical Storage: Displays links to Volumes, Snapshots, Replication Sets, Virtual Disks, Pools, Profiles, and Domains pages.
Physical Storage: Displays links to Initiators, Ports, Arrays, Trays, and Disks pages.
Mappings: Displays the Mappings Summary page.
External Storage: Displays the External Storage Summary page.
Jobs: Displays links to Current Jobs and Historical Jobs pages.
Administration: Displays links to General Settings, Licensing, Port Filtering, Notification, and Activity Log pages.
The Mappings Summary page enables you to view the current volume and initiator mappings and to create mappings of a snapshot volume to an initiator.
Icons are displayed to draw your attention to an object's status, including critical errors, minor errors, and unknown conditions.
The activity log lists all user-initiated actions performed on the system, in chronological order. These actions might have been initiated through either the Sun StorageTek Common Array Manager interface or the command-line interface (CLI).
The alarm counts in the masthead are retrieved immediately when a new page is requested (or the existing page is refreshed).
The 6920 system now operates in a 4-Gbit/sec SAN environment.
The software and hardware components described in the following sections have been tested and qualified to work with the Sun StorEdge 6920 system:
The Sun StorEdge 6920 system software release 3.2 supports the web browsers listed in TABLE 1.
The software listed in TABLE 2 is compatible for use on data hosts with data paths or network connections to the Sun StorEdge 6920 system.
The third-party software listed in TABLE 3 is compatible for use on data hosts with data paths or network connections to the Sun StorEdge 6920 system.
For the current hardware compatibility for the VERITAS products, see:
The Sun StorEdge 6920 system supports all of the Fibre Channel (FC) switches, host bus adapters (HBAs), data hosts, and operating systems supported by Sun StorEdge SAN Foundation software version 4.4 (and later). Please contact your local Sun customer service representative for more information.
The Sun StorEdge 6920 system software release 3.2 supports the languages and locales listed in TABLE 4.
This upgrade must be performed by a Sun Customer Service technician. Please call Sun Customer Service to arrange an installation or upgrade to Release 3.2.
TABLE 6 lists maximum values for elements of the Sun StorEdge 6920 system.
Initiators per system
TABLE 7 and TABLE 8 list the documents that are related to the Sun StorEdge 6920 system. For any document number with nn as a version suffix, use the most current version available.
You can search for this documentation online at
System overview information, as well as information on system configuration, maintenance, and basic troubleshooting, is covered in the online help included with the software. In addition, the sscs (1M) man page provides information about the commands used to manage storage using the command-line interface (CLI).
This section provides information about known issues with Release 3.2.
The 6920 system restricts any given vdisk to two paths. Connecting multiple ports of a 6140 Array controller to the 6920 system violates this restriction. Connect only one port per controller.
The fan in the Data Services Platform (DSP) is a field-replaceable unit (FRU). When removing the fan, observe the following caution.
If you set the Priority parameter to All when adding or editing an email notification recipient, the recipient receives a message for every event that occurs in the system, even for general messages that do not require intervention.
To generate notification messages only for events and alarms that require intervention, set the Priority parameter to Major and above or Critical and above.
Bug 6493606 - Connecting remote replication on one port of a controller and a host on the other port of the same controller causes problems.
Workaround - Do not connect a host to the same controller that is also configured for remote replication.
An intermittent problem can occur with PatchPro timing out during an array firmware upgrade. This does not affect the data-path operation, but the upgrade log will indicate that the patch installation failed. Currently, this issue has only been observed on large-capacity systems with numerous arrays.
The following sections provide information about bugs filed against this product:
If a recommended workaround is available, it follows the bug description.
This section describes known issues and bugs related to the configuration management software browser interface.
Bug 6357963 - If you change an Asynchronous replication set from Asynchronous mode to Synchronous mode back to Asynchronous mode using the command-line interface (CLI), the following error appears:
If you change the same Asynchronous replication set through the browser interface, no error occurs. The error occurs because the browser interface uses the original queue size, while the CLI defaults to a queue size of 512 MB.
Workaround - Use the browser interface to change an Asynchronous replication set from Asynchronous mode to Synchronous mode and back to Asynchronous mode.
Bug 6256116 - Occasionally, the system may take a long time when you create a new mirrored volume and simultaneously map it to initiators using the New Volume Wizard.
Workaround - Limit to 32 the number of virtual disks in pools from which you create mirrored volumes.
This section describes known issues and bugs related to the Data Services Platform (DSP) firmware.
Bug 6360303 - The system reports misleading progress status after you roll back a broken-off local mirror volume. The status goes from 0 to 100% for the volume as a whole, not for individual partitions. When the rollback is complete, the volume condition is no longer in the "Rollback in Progress" state, and the reported rollback percentage complete returns to 0%.
Workaround - Disregard the percentage complete messages until the operation is complete.
This section describes known issues and bugs related to the Storage Automated Diagnostic Environment application.
Bug 6327537 - If you receive alarms with the event code 30.20.149, you should talk with the system administrators at both the local site and the remote site to verify whether this is an expected occurrence. If it is not, then you should contact Sun StorageTek Customer Service.
Bug 6418306 - The Fault Management system does not report statistics to the Global Access Log when Consistency Groups are being used. All sets in the Consistency Group use the same Global Access Log, but the statistics are not being reported.
Workaround - Review the queue statistics on any volumes that are in the Consistency Group.
Bug 6408258 - If you run Solution Extract, the Fault Management system sends event code 30.20.149 "Potential missing or unmounted MIC slave PC CARD." If the system was not reporting errors before running Solution Extract, then this is an erroneous message.
Workaround - Disregard the message.
Bug 6312185 - Some Event log messages have the system ports labeled by the physical port ID, such as 0x1040001. For example:
Aug 16 12:08:10 dsp00 08/16/2005 12:13:29 LOG_WARNING (ISP4XXX: 1-4) Gig Ethernet received link down on port 0x1040001
This occurs because only some event log messages, such as Port Up or Port Down, already have the system port ID associated with them. All ports should be labeled by the system port ID. For example:
Workaround - Use the following algorithm to convert a physical port ID to a system port ID:
physical port = 0xS0P000p
system port = S/((P - 1) x 2 + p)
S = Slot number of the board (1, 2, 3, or 4)
P = Processor number (1, 2, 3, or 4)
p = Port number on that processor (1 or 2)
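As an illustration of the conversion, here is a minimal shell sketch (not a Sun-provided tool) that applies the algorithm above, assuming the hex layout 0xS0P000p and slot/port notation for the result:

```shell
# Hypothetical helper, not a Sun tool: convert a physical port ID of the
# form 0xS0P000p into a system port ID in slot/port notation.
phys_to_system_port() {
    local hex=${1#0x}         # strip the 0x prefix, e.g. 1040001
    local S=${hex:0:1}        # S = slot number of the board
    local P=${hex:2:1}        # P = processor number
    local p=${hex:6:1}        # p = port number on that processor
    echo "${S}/$(( (P - 1) * 2 + p ))"
}

phys_to_system_port 0x1040001    # prints 1/7
```

For the log message above, physical port 0x1040001 (slot 1, processor 4, port 1) maps to system port 1/7.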
This section describes other known issues and bugs found in the system.
Bug 6509629 - If you configure IP replication without turning on auto synchronization, the replication sets remain suspended for some period of time, which is not a practical replication configuration.
Workaround - Do one of the following:
"sscs modify --resume <--full> --sdomain <domain_name> --sdomain <domain_name> constgroup <group_name>"
Bug 6500365 - SSCS data collection is missing from the solution extract on customer configurations when One-Time Password in Everything (OPIE) security is enabled.
Workaround - SSRR is already enabled, so Sun service can dial in to your system and manually retrieve the SSCS information if needed.
Bug 6427492 - After you replace a failed disk drive, adding storage to the storage pool can fail with the following error:
"Could not find Product class for this disk"
as shown in:
/var/log/webconsole/se6920ui.log 2006-05-18 10:55:40,560 [HttpProcessor] ERROR com.sun.netstorage.array.mgmt.cfg.mgmt.business.impl.mr3.Disk - loadDiskProperties:Could not find Product class for this disk.
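To check whether the system has logged this error, you can count occurrences of the message in the log file shown in the excerpt above; this is a local convenience sketch, not part of the product:

```shell
# Count occurrences of the disk-property error in a given log file.
# The real log is /var/log/webconsole/se6920ui.log (from the excerpt above).
count_disk_errors() {
    grep -c 'Could not find Product class for this disk' "$1"
}
```

For example, `count_disk_errors /var/log/webconsole/se6920ui.log` prints the number of matching lines in the log.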
Workaround - To correct this problem, you must rescan the devices in the system. You can do this in either of two ways.
CLI: Use the sscs CLI command sscs rescan system.
GUI: Use the Rescan Devices button on the External Storage page.
Bug 6377042 - Different browsers display page loading indicators in different manners:
Firefox: The animated graphic and status bar indicate a done status before the page has completely loaded. However, the cursor displays a wait graphic until the page is fully loaded.
Internet Explorer: The status bar indicates a status of "opening page https://....." until all the frames are completely loaded. Also, the cursor does not display a wait graphic until the page is fully loaded.
Bug 6283274 - The -I switch is not allowed with the setgid command when you run the t4_rnid_cfg script during a migration from release 2.0.x to 3.0.x.
Workaround - Edit the first line of the /usr/local/bin/t4_rnid_cfg file. The original line looks like:
Edit this line to the following:
Then, rerun the config_solution script.
Bug 6256116 - You cannot create volumes from a pool that has 64 virtual disks in it if you intend to create local mirrors.
Workaround - Do not use more than 32 virtual disks in pools that are used to create local mirrors.
Bug 6353863 - When you create a mapping to an offline initiator, the operation is acknowledged by a message that the mapping has failed. When checking the volume mappings, however, the system indicates that the mapping was performed.
The internal logic tries to map to all instances of the server (i.e., to all ports where the server has been seen). If any one mapping fails, the Map state is displayed as failed, even if the others were successful. Thus, the attempt to map to the offline instance appears to be a failure.
Workaround - Since the operation succeeded, the mapping has not "failed." Disregard the incorrect error message.
Bug 6511687 - When the 6140 Array is used as external storage for the 6920, only Host Channel 1 port can be used. The other Host Channel ports on the 6140 Array must not be connected to any devices.
Bug 6565798 - Directly attaching a 1-Gigabit/Sec PCI Dual FC Host Adapter+ to a 6920 system running SAN 4.4.9 or above causes problems.
Workarounds - You can use the 1-Gigabit/Sec PCI Dual FC Host Adapter+ with any version of SAN 4.4.x if it is not directly attached; for example, the adapter works with SAN 4.4.9 when connected to a 6920 system through a switch. If the adapter (the Crystal Plus HBA) is directly attached, SAN 4.4.8 and below work correctly; other directly attached HBAs are not affected. You can also upgrade the 1-Gigabit/Sec PCI Dual FC Host Adapter+ to 2 Gigabit, which works in either configuration.
Bug 6389694 - In some instances when you have legacy external dual-path storage configured, you might see ghost LUNs. For example, your configuration services database might show 16 LUNs when only 8 LUNs are actually configured.
Workaround - Enter the following command to stop the element manager:
Then restart the element manager:
If restarting the element manager does not clear the condition, enter the following command to reboot the service processor and restart the element manager:
Bug 6472491 - If a component of a local mirror is removed, the GUI and the CLI report that the mirror component was removed. The component might report Lost Communication, and a volume might appear to be missing.
Workaround - Rebooting the DSP might clear the problem.
Then attempt to repair the mirror. Depending on the circumstances of the mirror components, this might or might not repair the situation and report all volumes properly. To rejoin a split mirror component to the mirror:
1. Click Sun StorEdge Configuration Manager.
The Volume Summary page and the navigation pane are displayed. To display the Volume Summary page at any time, choose Logical Storage > Volumes.
2. Click a mirrored volume that has a split component that you want to bring back into the mirror.
The Mirrored Volume Details page is displayed.
3. In the Mirror section of the page, select the radio button to the left of the split component that you want to rejoin. Its condition status will be OK, Split Volume. Then click Rejoin.
A confirmation message and the Mirrored Volume Details page are displayed. During the rejoin process, the status of the component is listed as Resilvering. When the resilvering process is complete, the Mirror section is updated to show that the component is 100% resilvered and that it has a condition of OK.
The following topics describe known issues in areas of the documentation:
Bug 6485986 - The following describes how to configure multiple iSCSI sessions for a target with Solaris SCSI Multipathing (MPxIO):
This procedure can be used to create multiple iSCSI sessions that connect to a single target. This scenario is useful with iSCSI target devices that support login redirection or have multiple target portals in the same target portal group. iSCSI multiple sessions per target support should be used in combination with Solaris SCSI Multipathing (MPxIO).
1. Become a superuser.
2. List the current parameters for the iSCSI initiator and target.
a. List the current parameters for the iSCSI initiator. For example:
# iscsiadm list initiator-node
Initiator node name: iqn.1986-03.com.sun:01:0003ba4d233b.425c293c
Initiator node alias: zzr1200
Configured Sessions: 1
b. List the current parameters of the iSCSI target device. For example:
# iscsiadm list target-param -v iqn.1992-08.com.abcstorage:sn.84186266
Configured Sessions: 1
The configured sessions value is the number of configured iSCSI sessions that will be created for each target name in a target portal group.
3. Modify the number of configured sessions either at the initiator node, to apply to all targets, or at the target level, to apply to a specific target.
The number of sessions for a target must be between 1 and 4.
*Apply the parameter to the iSCSI initiator node.
# iscsiadm modify initiator-node -c 2
*Apply the parameter to the iSCSI target.
# iscsiadm modify target-param -c 2 iqn.1992-08.com.abcstorage:sn.84186266
Configured sessions can also be bound to a specific local IP address. Using this method, one or more local IP addresses are supplied in a comma-separated list. Each IP address represents an iSCSI session. This method can also be done at the initiator-node or target-parameter level. For example:
# iscsiadm modify initiator-node -c 10.0.0.1,10.0.0.2
4. Verify that the parameter was modified.
a. Display the updated information for the initiator node. For example:
# iscsiadm list initiator-node
Initiator node name: iqn.1986-03.com.sun:01:0003ba4d233b.425c293c
Initiator node alias: zzr1200
Configured Sessions: 2
b. Display the updated information for the target node. For example:
# iscsiadm list target-param -v iqn.1992-08.com.abcstorage:sn.84186266
Configured Sessions: 2
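As a quick sanity check for the verification step above, the session count can be extracted from the listing; the awk extraction below is a local convenience (not part of iscsiadm) and assumes the "Configured Sessions:" field format shown in the sample output:

```shell
# Extract the "Configured Sessions" value from iscsiadm output on stdin.
get_configured_sessions() {
    awk -F': ' '/^Configured Sessions/ { print $2 }'
}

# Example using the sample listing from step 4a:
sample='Initiator node name: iqn.1986-03.com.sun:01:0003ba4d233b.425c293c
Initiator node alias: zzr1200
Configured Sessions: 2'
printf '%s\n' "$sample" | get_configured_sessions    # prints 2
```

On a live system, the same filter can be applied directly: `iscsiadm list initiator-node | get_configured_sessions`.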
The following describes the process to configure MPxIO on an FC drive:
1. Log in as superuser.
Determine the HBA controller ports that you want the multipathing software to control. For example, to identify the candidate devices, run an ls -l command on /dev/fc. The following is an example of the ls -l command output:
lrwxrwxrwx 1 root root 49 Apr 17 18:14 fp0 ->
lrwxrwxrwx 1 root root 49 Apr 17 18:14 fp1 ->
lrwxrwxrwx 1 root root 49 Apr 17 18:14 fp2 ->
lrwxrwxrwx 1 root root 49 Apr 17 18:14 fp3 ->
lrwxrwxrwx 1 root root 50 Apr 17 18:14 fp4 ->
lrwxrwxrwx 1 root root 56 Apr 17 18:14 fp5 ->
lrwxrwxrwx 1 root root 56 Apr 17 18:14 fp6 ->
lrwxrwxrwx 1 root root 56 Apr 17 18:14 fp7 ->
2. Open the /kernel/drv/fp.conf file and explicitly enable or disable multipathing on an HBA controller port. This file allows you to enable or disable both the global multipath setting, as well as multipath settings for specific ports.
3. Change the value of the global mpxio-disable property. If the entry does not exist, add a new entry. The global setting applies to all ports except the ports specified by the per-port entries.
a. To enable multipathing globally, change the entry to mpxio-disable="no";
b. To disable multipathing globally, change the entry to mpxio-disable="yes";
4. Add the per-port mpxio-disable entries - one entry for every HBA controller port you want to configure. Per-port settings override the global setting for the specified ports.
a. To enable multipathing on an HBA port, add
name="fp" parent="parent name" port=port-number mpxio-disable="no";
b. To disable multipathing on an HBA port, add
name="fp" parent="parent name" port=port-number mpxio-disable="yes";
5. The following example disables multipathing on all HBA controller ports except the two specified ports:
name="fp" parent="/pci@6,2000/SUNW,qlc@2" port=0 mpxio-disable="no";
name="fp" parent="/pci@13,2000/pci@2/SUNW,qlc@5" port=0 mpxio-disable="no";
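The per-port entries in steps 4 and 5 follow a fixed pattern, so they can be generated consistently; the helper below is a hypothetical convenience, with the parent path and port number taken from your own ls -l /dev/fc output:

```shell
# Emit one per-port fp.conf entry in the format shown in step 4.
fp_conf_entry() {
    local parent=$1 port=$2 disable=$3    # disable is "yes" or "no"
    printf 'name="fp" parent="%s" port=%s mpxio-disable="%s";\n' \
        "$parent" "$port" "$disable"
}

fp_conf_entry "/pci@6,2000/SUNW,qlc@2" 0 no
# prints: name="fp" parent="/pci@6,2000/SUNW,qlc@2" port=0 mpxio-disable="no";
```

Generating the entries this way avoids quoting mistakes in /kernel/drv/fp.conf, where a missing quote or semicolon can prevent the driver from parsing the file.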
6. If running on a SPARC-based system, perform the following:
Run the stmsboot -u command:
# stmsboot -u
WARNING: This operation will require a reboot.
Do you want to continue ? [y/n] (default: y) y
The changes will come into effect after rebooting the system.
Reboot the system now ? [y/n] (default: y) y
You are prompted to reboot. During the reboot, /etc/vfstab and the dump configuration will be updated to reflect the device name changes.
If running on an x86-based system, perform a reconfiguration reboot.
# touch /reconfigure
# shutdown -g0 -y -i6
7. If necessary, perform device name updates as described in Device Name Change Considerations.
Bug 6556476 - Some links in the online help are broken. To view a page whose link is broken, use the ToC, Index, or Search to navigate to that help page.
The corrections in this section apply to both the Sun StorEdge 6920 System Administration Guide (part number 819-0123-10) and the online help.
This process has been changed. Replace the existing process in the Sun StorEdge 6920 System Administration Guide with the following process:
If you want to restore the system after it has been powered off with the full shutdown procedure, you must go to the location of the system and perform the following procedure:
1. Open the front door and back door of the base cabinet and any expansion cabinets.
2. Remove the front trim panel from each cabinet.
3. Verify that the AC power cables are connected to the correct AC outlets.
4. At the bottom front and bottom back of each cabinet, lower the AC power sequencer circuit breakers to On.
The power status light emitting diodes (LEDs) on both the front and back panel illuminate in the following order, showing the status of the front power sequencer:
5. Power on the storage arrays.
6. Power on the Data Services Platform (DSP).
7. At the back of the system, locate the power switch for the Storage Service Processor and press the power switch on.
8. Verify that all components have only green LEDs lit.
9. Replace the front trim panels and close all doors.
The system is now operating and supports the remote power-on procedure.
This process has been changed. Replace the existing process with the following process:
If you have already created a number of replication sets and then determined that you want to place them in a consistency group, do so as outlined in the following sample procedure. In this example, Replication Set A and Replication Set B are existing independent replication sets. Follow these steps on both the primary and secondary peers:
1. Create a temporary volume, or identify an unused volume in the same storage domain as Replication Sets A and B.
2. Determine the World Wide Name (WWN) of the remote peer.
This information is on the Details page for either replication set.
3. Select a temporary or unused volume from which to create Replication Set C, and launch the Create Replication Set wizard from the Details page for that volume.
Creating Replication Set C is just a means to create a consistency group. This replication set is deleted in subsequent steps.
4. Do the following in the Create Replication Set wizard:
a. Select a temporary or unused volume from which to create the replication set.
b. In the Replication Peer WWN field, type the WWN of the remote system.
c. In the Remote Volume WWN field, type all zeros. Then click Next.
d. Select the Create New Consistency Group option, and provide a name and description for Consistency Group G. Click Next.
e. Specify the replication properties and replication bitmap as prompted, confirm your selections, and click Finish.
5. On the Details page for Replication Set A, click Add to Group to add the replication set to Consistency Group G.
6. On the Details page for Replication Set B, click Add to Group to add the replication set to Consistency Group G.
7. On the Details page for Replication Set C, click Delete to remove the replication set from Consistency Group G.
Replication Set A and Replication Set B are no longer independent and are now part of a consistency group.
Bug 6432516 - The online help should describe the mapping state as shown below:
Mapping State - Summary of the state of all the known paths between the 6920 and the host.
This section describes corrections and additions to the Best Practices for the Sun StorEdge 6920 System (part number 819-3325-10).
This information has been changed. Replace the existing section with the following information:
Release 3.0.1 of the Sun StorEdge 6920 system has added support for remote data replication. This feature enables you to continuously copy a volume's data onto a secondary storage device. This secondary storage device should be located far away from the original (primary) storage device. If the primary storage device fails, the secondary storage device can immediately be promoted to primary and brought online.
The replication process begins by creating a complete copy of the primary data on the secondary storage device at the disaster recovery site. Using that copy as a baseline, the replication process records any changes to the data and forwards those changes to the secondary site.
For help setting up appropriate security, contact the Client Solutions Organization (CSO).
Bug 6346360 - The Best Practices for the Sun StorEdge 6920 System should describe the following limitation:
Any disk configured with more than two connections to an external storage virtual disk causes rolling upgrade and fault injection failures.
This section describes an addition that will be made to the CLI man page.
Bug 6481346 - The CLI man page does not currently document an sscs command for increasing the TCP window size for remote replication.
Use the following sscs command to increase the TCP window size for remote replication:
modify -r <enable | disable> [ -g string ] [ -m string ] [ -l string ]
[ -w <1KB | 2KB | 4KB | 16KB | 32KB | 64KB | 128KB | 256KB | 512KB | 1MB> ]
-r, --replication enable | disable
Enables or disables the remote replication feature.
-g, --gateway string
The gateway address to be used for the remote replication feature.
-m, --network-mask string
The gateway address network mask.
-l, --local-address string
The local IP address from which you want to transmit remote replication data.
-w, --window-size 1KB | 2KB | 4KB | 16KB | 32KB | 64KB | 128KB | 256KB | 512KB | 1MB
The TCP window size that you want for remote replication.
The Ethernet port to be used for remote replication.
Contact Sun Customer Service if you need additional information about the Sun StorEdge 6920 system or any other Sun products: