C H A P T E R  8

Viewing and Editing Configuration Parameters

This chapter describes viewing and editing configuration parameters. Topics covered include:


Communication Parameters

The "Communication parameters" menu option enables you to view communication settings. Most of these parameters are reserved and should not be changed. Use the "Internet Protocol (TCP/IP)" menu option to set or change your array's IP address.

Setting an IP Address

The controller Ethernet port offers interactive out-of-band management through two interfaces:

To access the array using the Ethernet port, you must set up an IP address for the controller. You can set the IP address by typing in values for the IP address itself, the subnet mask, and the IP address of the gateway manually. If your network is using a Reverse Address Resolution Protocol (RARP) server or a Dynamic Host Configuration Protocol (DHCP) server to automatically configure IP information for devices on the network, you can specify the appropriate protocol instead of typing in the information manually.



caution icon

Caution - If you assign an IP address to an array in order to manage it out-of-band, for security reasons make sure that the IP address is on a private network rather than a publicly routable network.




procedure icon  To Set an Array's IP Address

To set the IP address, subnet mask, and gateway addresses of the RAID controller, perform the following steps:

1. Access the array through the COM port on the controller module of the array.

2. Choose "view and edit Configuration parameters right arrow Communication Parameters right arrow Internet Protocol (TCP/IP)."

3. Select the chip hardware address.

4. Choose "Set IP Address."

5. Type in the desired IP address, subnet mask, and gateway address, choosing each menu option in turn.

If your network sets IP addresses using a RARP server, type RARP rather than an IP address and do not type in a subnet mask or gateway address. If your network sets IP addresses using a DHCP server, type DHCP rather than an IP address and do not type in a subnet mask or gateway address.

6. Press Esc to continue.

A confirmation prompt is displayed.

Change/Set IP Address ?

7. Choose Yes to continue.



Note - You need to reset the controller for the configuration to take effect.



You are prompted to reset the controller.

8. Choose Yes to reset the controller.


Caching Parameters

Caching parameters include the write-back cache and optimization modes described below.

Enabling and Disabling Write-Back Cache

The write-back cache function significantly enhances controller performance. When it is disabled, a write-through strategy replaces it. A write-through strategy is considered more secure in the event of a power failure. However, because a battery module is installed, power is supplied to the data cached in memory, and the cached writes can be completed when power is restored.


procedure icon  To Change the Caching Parameter Option

1. Choose "view and edit Configuration parameters right arrow Caching Parameters right arrow Write-Back Cache."

A confirmation message asks if you want to change the write-back cache setting.

 Screen capture showing a submenu with the "Enable Write-Back Cache?" prompt displayed.

2. Choose Yes to confirm.

Optimization Modes

Mass storage applications fall into two major categories: database applications and video/imaging applications. The controller supports two embedded optimization modes.

The random I/O optimization mode reads and writes small blocks of data, and sequential optimization mode reads and writes large blocks of data. TABLE 8-1 specifies the stripe size for each RAID level according to the optimization mode.

TABLE 8-1 Stripe Size Per Optimization Mode (Kbyte)

RAID Level    Sequential I/O    Random I/O
0, 1, 5       128               32
3             16                4


The types of applications appropriate for random and sequential optimization are described in Database and Transaction-Based Applications and Video Recording, Playback, and Imaging Applications.

Optimization Limitations

There are two limitations that apply to the optimization modes:

This limitation results from the redundant configuration of controllers. Data inconsistency can occur when a controller configured with one optimization mode is used to replace a failed controller with a different mode.



Note - The maximum allowable size of a logical drive optimized for sequential I/O is 2 Tbyte. The maximum allowable size of a logical drive optimized for random I/O is 512 Gbyte. If you attempt to create a logical drive that exceeds these limits, an error message is displayed.
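The stripe sizes in TABLE 8-1 and the size limits in the note above can be sketched as a small lookup. This is a hypothetical helper for illustration only, not firmware code; the constant names and the 2048-GB rendering of 2 Tbyte are assumptions.

```python
# Hypothetical helper modeling TABLE 8-1 and the logical-drive size limits
# from the note above; an illustration only, not firmware code.

SEQUENTIAL, RANDOM = "sequential", "random"

# Stripe size in Kbyte per RAID level (TABLE 8-1).
STRIPE_KB = {
    SEQUENTIAL: {0: 128, 1: 128, 5: 128, 3: 16},
    RANDOM:     {0: 32,  1: 32,  5: 32,  3: 4},
}

# Maximum logical drive size per optimization mode (2 Tbyte / 512 Gbyte).
MAX_LD_GB = {SEQUENTIAL: 2048, RANDOM: 512}

def check_logical_drive(mode, raid_level, size_gb):
    """Return the stripe size (Kbyte) for a proposed logical drive,
    or raise if it exceeds the limit for the optimization mode."""
    if size_gb > MAX_LD_GB[mode]:
        raise ValueError(
            f"{size_gb} GB exceeds the {MAX_LD_GB[mode]} GB limit for {mode} I/O")
    return STRIPE_KB[mode][raid_level]

print(check_logical_drive(RANDOM, 5, 400))   # → 32
```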



Database and Transaction-Based Applications

Typical I/O sizes for transactions and database updates range from 2 Kbyte to 4 Kbyte. These applications keep each transaction small so that I/O transfers are not clogged.

Transaction-based applications do not read or write data in a sequential order. Instead, access to data occurs randomly. Transaction-based performance is usually measured in I/O operations per second (IOPS).

Video Recording, Playback, and Imaging Applications

Video playback, multimedia post-production editing, and similar applications read and write large files to and from storage in sequential order. The size of each I/O operation typically ranges from 28 Kbytes up to 1 MByte or higher. Performance is measured in MBytes per second.

When an array works with applications such as video or image-oriented applications, the application reads and writes data to and from the drive as large-block, sequential files instead of small-block, randomly accessed files.

Optimization for Random I/O (32 Kbyte block size)

The logical drive, cache memory, and other controller parameters are adjusted for the use of database and transaction-processing applications.

Optimization for Sequential I/O (128 Kbyte block size)

Optimization for sequential I/O provides a larger stripe size (block size, also known as chunk size) than optimization for random I/O. Numerous controller parameters are also changed to optimize for sequential I/O. These changes take effect after the controller is reset.

The logical drive, cache memory, and other controller parameters are adjusted for the use of video and imaging applications.

Maximum Number of Disks and Maximum Usable Capacity for Random and Sequential Optimization

Your choice of random or sequential optimization affects the maximum number of disks you can include in an array and the maximum usable capacity of a logical drive. The following tables contain the maximum number of disks per logical drive and the maximum usable capacity of a logical drive.



Note - You can have a maximum of eight logical drives and 36 disks, using one array and two expansion units.



TABLE 8-2 Maximum Number of Disks per Logical Drive for a 2U Array

Disk Capacity   RAID 5   RAID 5       RAID 3   RAID 3       RAID 1   RAID 1       RAID 0   RAID 0
(GByte)         Random   Sequential   Random   Sequential   Random   Sequential   Random   Sequential
36.2            14       31           14       31           28       36           14       36
73.4            7        28           7        28           12       30           6        27
146.8           4        14           4        14           6        26           3        13


TABLE 8-3 Maximum Usable Capacity (Gbyte) per Logical Drive for a 2U Array

Disk Capacity   RAID 5   RAID 5       RAID 3   RAID 3       RAID 1   RAID 1       RAID 0   RAID 0
(Gbyte)         Random   Sequential   Random   Sequential   Random   Sequential   Random   Sequential
36.2            471      1086         471      1086         507      543          507      1122
73.4            440      1982         440      1982         440      1101        440      1982
146.8           440      1908         440      1908         440      1908        440      1908




Note - You might not be able to use all disks for data when using 36 146-Gbyte disks. Any remaining disks can be used as spares.
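The capacities in TABLE 8-3 follow from standard RAID arithmetic: one disk's worth of parity for RAID 5 and RAID 3, mirroring for RAID 1, and pure striping for RAID 0. The sketch below is a hypothetical illustration of that arithmetic, assuming usable capacity is simply data disks times disk capacity; the firmware applies further limits (such as the 2-Tbyte/512-Gbyte caps) not modeled here.

```python
# Sketch of the arithmetic behind TABLE 8-2 and TABLE 8-3 (hypothetical
# helper, not firmware code). Usable capacity = data disks x disk capacity.

def usable_gb(raid_level, disks, disk_gb):
    if raid_level in (5, 3):
        return round((disks - 1) * disk_gb)   # one disk's worth of parity
    if raid_level == 1:
        return round(disks * disk_gb / 2)     # mirrored pairs
    if raid_level == 0:
        return round(disks * disk_gb)         # pure striping, no redundancy
    raise ValueError(raid_level)

# Reproduces TABLE 8-3 entries for 36.2-GByte disks:
print(usable_gb(5, 14, 36.2))   # → 471 (RAID 5 random, 14 disks)
print(usable_gb(1, 28, 36.2))   # → 507 (RAID 1 random, 28 disks)
```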



The default optimization mode is "Sequential." Sequential optimization mode is automatically applied to any logical configuration of drives larger than 512 Gbyte.


procedure icon  To Choose the Optimization Mode for All Drives

1. Choose "view and edit Configuration parameters right arrow Caching Parameters right arrow Optimization for Random I/O" or "view and edit Configuration parameters right arrow Caching Parameters right arrow Optimization for Sequential I/O."

A confirmation message is displayed.

 Screen capture showing submenu with "Optimization for Sequential I/O" chosen and "Optimization for Random I/O?" prompt.

2. Choose Yes to confirm.


Host-side SCSI Parameters Menu Options

The Host-side SCSI Parameters menu options are discussed in the following sections:

Maximum Queued I/O Count

The "Maximum Queued I/O Count" menu option enables you to configure the maximum number of I/O operations per logical drive that can be accepted from servers. The predefined range is from 1 to 1024 I/O operations per logical drive, or you can choose the "Auto" (automatically configured) setting. The default value is 256 I/O operations per logical drive.

The maximum total number of queued I/O operations across all logical drives is 4096.

The appropriate "Maximum Queued I/O Count" setting depends on how many I/O operations attached servers are performing. This can vary according to the amount of host memory present as well as the number of drives and their size. If you increase the amount of host memory, add more drives, or replace drives with larger drives, you might want to increase the maximum I/O count. But usually optimum performance results from using the "Auto" or "256" settings.


procedure icon  To Set the Maximum Queued I/O Count

1. Choose "view and edit Configuration parameters right arrow Host-side SCSI Parameters right arrow Maximum Queued I/O Count."

A list of I/O count values is displayed.

 Screen capture showing submenus with "Host-side SCSI Parameters," "Maximum Queued I/O Count - 256," and "1024" chosen.

2. Choose a value.

A confirmation message is displayed.

3. Choose Yes to confirm.

LUNs Per Host SCSI ID

This function is used to change the number of LUNs per host SCSI ID. Each time a host channel ID is added, it uses the number of LUNs allocated in this setting, no matter how many LUNs are actually mapped to it. The default setting is 32 LUNs, with a predefined range of 1 to 32 LUNs per host SCSI ID.



Note - For the Sun StorEdge 3310 SCSI array, the maximum number of LUN assignments is 128. If you use the default setting of 32 LUNs per host ID, you can only add four host channel IDs (4 x 32 = 128). If you want to allocate more than four host channel IDs, you need to set the LUNs per Host SCSI ID parameter to a value less than 32.
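The note's arithmetic can be sketched in a few lines. This is a hypothetical helper for illustration; the 128-LUN limit applies to the Sun StorEdge 3310 SCSI array, as stated above.

```python
# Illustration of the note above: each host channel ID consumes the full
# per-ID LUN allocation, so on a 128-LUN SCSI array the number of host
# channel IDs you can add is 128 divided by the LUNs-per-ID setting.

MAX_LUN_ASSIGNMENTS = 128   # Sun StorEdge 3310 SCSI array limit

def max_host_ids(luns_per_id):
    return MAX_LUN_ASSIGNMENTS // luns_per_id

print(max_host_ids(32))  # → 4 (the default setting: 4 x 32 = 128)
print(max_host_ids(16))  # → 8 host channel IDs
```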




procedure icon  To Change the Number of LUNs Per Host SCSI ID

1. Choose "view and edit Configuration parameters right arrow Host-side SCSI Parameters right arrow LUNs per Host SCSI ID."

A list of values is displayed.

 Screen capture showing submenus with "Host-side SCSI Parameters," "LUNs per Host SCSI ID - 8," and "32 LUNs" chosen.

2. Choose a value.

A confirmation message is displayed.

3. Choose Yes to confirm.

Maximum Number of Concurrent Host-LUN Connections

The "Max Number of Concurrent Host-LUN Connection" menu option is used to set the maximum number of concurrent host-LUN connections. Change this setting only if you have more than four logical drives or partitions. Increasing this number might improve performance.

The maximum number of concurrent host-LUN connections (nexuses, in SCSI terms) determines how the controller's internal resources are allocated among the current host nexuses.

For example, you can have four hosts (A, B, C, and D) and four host IDs/LUNs (IDs 0, 1, 2 and 3) in a configuration where:

These connections are all queued in the cache and are called four nexuses.

If the cache holds I/O for four different nexuses and another host I/O arrives with a nexus different from those four (for example, host A accesses ID 3), the controller returns busy. This restriction applies to concurrent active nexuses; once the cache is cleared, the controller accepts four different nexuses again. Many I/O operations can be carried over the same nexus.
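The busy behavior described above can be modeled as a set of distinct (host, target-ID) pairs with a fixed limit. The class below is a toy sketch under that assumption, not controller code; the names are hypothetical.

```python
# Toy model of the host-LUN nexus limit: the controller tracks distinct
# (host, target-ID) pairs for queued I/O and returns busy when a new pair
# would exceed the configured maximum. Hypothetical illustration only.

class NexusCache:
    def __init__(self, max_nexuses=4):
        self.max_nexuses = max_nexuses
        self.active = set()           # distinct (host, id) pairs in the cache

    def submit(self, host, target_id):
        """Return True if the I/O is accepted, False for 'busy'."""
        nexus = (host, target_id)
        if nexus in self.active or len(self.active) < self.max_nexuses:
            self.active.add(nexus)
            return True
        return False                  # a new fifth nexus: controller busy

    def clear(self):
        self.active.clear()           # cache cleared; new nexuses accepted

c = NexusCache()
for host, tid in [("A", 0), ("B", 1), ("C", 2), ("D", 3)]:
    assert c.submit(host, tid)
print(c.submit("A", 3))   # → False: a fifth distinct nexus returns busy
print(c.submit("A", 0))   # → True: the same nexus can carry many I/Os
```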


procedure icon  To Change the Maximum Number of Concurrent Host-LUN Connections

1. Choose "view and edit Configuration parameters right arrow Host-side SCSI Parameters right arrow Max Number of Concurrent Host-LUN Connection."

A list of values is displayed. For FC arrays, this list ranges from 1 to 1024. For SCSI arrays, the list ranges from 1 to 128.

2. Choose a value.

 Screen capture showing submenus with "Host-side SCSI Parameters," "Max Number of Concurrent Host-LUN Connection - Def(4)," and "32" chosen.

A confirmation message is displayed.

3. Choose Yes to confirm.

Number of Tags Reserved for Each Host LUN Connection

This menu option is used to modify the tag command queuing on the host-LUN connection. The default setting is 32 tags, with a predefined range of 1 to 256. The default factory setting should not be changed unless necessary.

Each nexus has 32 (the default setting) tags reserved, which ensures that the controller accepts at least 32 tags per nexus. The controller can accept more tags as long as controller resources allow it; even when internal resources are scarce, at least 32 tags are still accepted per nexus.
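The reservation rule above amounts to a guaranteed floor of 32 tags per nexus, with anything beyond that drawn from shared resources. The following is a hedged sketch of that rule with hypothetical names, not actual controller logic.

```python
# Illustrative model of tag reservation: each nexus is guaranteed 32 tags
# (the default), and the controller grants more only when spare internal
# resources exist. Hypothetical helper, not firmware code.

RESERVED_PER_NEXUS = 32

def tags_granted(requested, free_pool):
    """Tags granted to one nexus: the 32 reserved tags are always
    available; anything beyond them comes from the shared pool."""
    if requested <= RESERVED_PER_NEXUS:
        return requested
    extra = min(requested - RESERVED_PER_NEXUS, free_pool)
    return RESERVED_PER_NEXUS + extra

print(tags_granted(64, 0))    # → 32: no spare resources, reserved tags only
print(tags_granted(64, 100))  # → 64: spare resources cover the extra tags
```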


procedure icon  To Modify the Tag Command Queuing on the Host-LUN Connection

1. From the Main Menu, choose "view and edit Configuration parameters right arrow Host-side SCSI Parameters right arrow Number of Tags Reserved for each Host-LUN Connection."

A list of values is displayed.

 Screen capture showing submenus with "Host-side SCSI Parameters," "Number of Tags Reserved for each Host-LUN Connection - Def (32)," and "256" chosen.

2. Choose a value.

A confirmation message is displayed.

3. Choose Yes to confirm.

Host Cylinder/Head/Sector Mapping Configuration

SCSI drive capacity is determined by the host computer according to the number of blocks. Some host operating systems read the capacity of the array based on the cylinder/head/sector count of the drives. The RAID controller firmware enables you to either specify the appropriate number of cylinders, heads, and sectors, or to use the Variable menu option for one or more of these settings. When you use the Variable menu option, the firmware calculates the settings appropriately.

Leaving the cylinder, head, and sector settings at "Variable" ensures that all three values are calculated automatically. If you choose a specific value for one of these settings and leave the other two set to "Variable," the firmware calculates the other two settings. If you set two, the firmware automatically calculates the third.

For the Solaris operating environment, the number of cylinders cannot exceed 65,535, so you can choose "< 65536 Cylinders" and "255 Heads" to cover all logical drives over 253 GB and under the maximum limit. The controller automatically adjusts the sector count, and then the operating environment can read the correct drive capacity.

After changing the size of a disk in the Solaris operating environment, run the format utility and choose the 0, autoconfigure option from the menu. This enables the host to reconfigure the size of the disk appropriately and relabel the disk with the current firmware revision level.
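The cylinder/head/sector arithmetic described above follows from capacity = cylinders x heads x sectors x block size. The sketch below, assuming 512-byte blocks and hypothetical helper names, shows how an adjusted sector count lets "< 65536 Cylinders" with "255 Heads" address a large logical drive; it is an illustration of the arithmetic, not the firmware's algorithm.

```python
# Sketch of the CHS arithmetic: capacity = cylinders x heads x sectors x 512.
# With cylinders capped below 65536 and 255 heads, the controller need only
# adjust the sector count to cover the drive. Hypothetical helper code.

import math

BLOCK = 512          # assumed block size in bytes
MAX_CYLINDERS = 65535
HEADS = 255

def sectors_per_track(capacity_bytes):
    """Smallest sector count letting 65535 cylinders x 255 heads
    address the given capacity."""
    blocks = math.ceil(capacity_bytes / BLOCK)
    return math.ceil(blocks / (MAX_CYLINDERS * HEADS))

s = sectors_per_track(10**12)                     # a 1-TB logical drive
print(s)                                          # → 117
assert MAX_CYLINDERS * HEADS * s * BLOCK >= 10**12
```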


procedure icon  To Configure Sector Ranges, Head Ranges, and Cylinder Ranges

1. Choose "view and edit Configuration parameters right arrow Host-Side SCSI Parameters right arrow Host Cylinder/Head/Sector Mapping Configuration right arrow Sector Ranges."

2. Choose a value.

3. Choose "Head Ranges" and choose a value.

 Screen capture showing submenus with "Host Cylinder/Head/Sector Mapping Configuration," "Head Ranges - Variable," and "Variable" chosen.

4. Choose "Cylinder Ranges" and choose a value.

 Screen capture showing submenus with "Host Cylinder/Head/Sector Mapping Configuration," "Cylinder Ranges - Variable," and "Variable" chosen.

Preparing for Logical Drives Larger Than 253 Gbytes on Solaris Systems

The Solaris operating environment requires drive geometry for various operations, including newfs. For the appropriate drive geometry to be presented to the Solaris operating environment for logical drives larger than 253 Gbyte, change the default settings to "< 65536 Cylinders" and "255 Heads." The controller automatically adjusts the sector count, and then the operating environment can read the correct drive capacity.

For Solaris operating environment configurations, use the values in the following table.

TABLE 8-4 Cylinder and Head Mapping for the Solaris Operating Environment

Logical Drive Capacity   Cylinder              Head                 Sector
< 253 GB                 variable (default)    variable (default)   variable (default)
253 GB - 1 TB            < 65536 Cylinders *   255 *                variable (default)


* These settings are also valid for all logical drives under 253 GBytes.



Note - Earlier versions of the Solaris operating environment do not support drive capacities larger than 1 terabyte.




procedure icon  To Prepare Logical Drives Larger than 253 Gbytes

1. Choose "view and edit Configuration parameters right arrow Host-Side SCSI Parameters right arrow Host Cylinder/Head/Sector Mapping Configuration right arrow Sector Ranges - Variable right arrow 255 Sectors right arrow Head Ranges - Variable."

2. Specify "255 Heads."

3. Choose "Cylinder Ranges - Variable right arrow < 65536 Cylinders."

Peripheral Device Type Parameters (Reserved)

Do not use this menu option to change the Peripheral Device Type setting from "Enclosure Services Device."

The "Peripheral Device Type Parameters" menu option is used only when attempting to configure an array through an in-band connection before a logical drive has been created and mapped to a host LUN. If you follow the instructions for creating a logical drive using a tip or telnet session, using the "Peripheral Device Type Parameters" menu option is unnecessary.



caution icon

Caution - Changing this setting might cause unexpected results.





Note - Do not change the Peripheral Device Qualifier setting from "Connected."



Fibre Connection Options (FC Only)

Your FC array contains internal circuits called port bypass circuits (PBCs), which are controlled by firmware configuration settings. Choose the "Loop only" menu option from the Fibre Connection Option menu to enable an FC loop configuration. Choose the "Point to point only" menu option to enable point-to-point connections.



Note - It is important that you choose the correct one of these two options for your configuration.





caution icon

Caution - An additional menu option defaults to a loop configuration but switches to a point-to-point configuration upon failure to connect at boot time. Do not use this option unless directed to use it by technical support personnel.



Refer to Sun StorEdge 3000 Family Best Practices Manual for the 3510 FC Array and Sun StorEdge 3000 Family Installation, Operation, and Service Manual for the 3510 FC Array for more information about point-to-point and loop configurations.

For point-to-point configurations, it is important to specify only a primary Fibre Channel host ID (PID) or a secondary ID (SID) for each host channel. For loop configurations with failover, it is important to specify both a PID and a SID. See Default Fibre Channel Host IDs (FC Only) for more information about creating host IDs.



Note - The following steps show you how to change a loop configuration to a point-to-point configuration.




procedure icon  To Confirm or Change the Fibre Connection for the Array

1. Choose "view and edit Configuration parameters right arrow Host-side SCSI Parameters right arrow Fibre Connection Option."

2. Choose a connection type.

 Screen capture with Fibre Connection Option and "Point to point only" chosen


caution icon

Caution - Do not use the "Loop preferred, otherwise point to point" menu option. This menu option is reserved for special use and should be used only if directed by technical support personnel.



A confirmation message is displayed.

3. Choose Yes to continue.

 Screen capture shows the "Set Fibre Channel Connection Option?" confirmation message with Yes chosen.


Note - You need to reset your controller for this configuration change to take effect.



To reset your controller, perform these steps.

4. Choose "system Functions right arrow Reset controller."


Drive-side SCSI Parameters Menu

The Drive-side SCSI Parameters menu options include:

These parameters are user-configurable. However, they should not be changed from their preset values without good reason and without an understanding of potential impacts on performance or reliability.


procedure icon  To Access the Drive-side Parameter Menu

1. Choose "view and edit Configuration parameters right arrow Drive-side SCSI Parameters."

The Drive-side SCSI parameters menu is displayed.

 Screen capture showing the first submenu with "Drive-side SCSI Parameters" chosen and the second submenu shows "SCSI Motor Spin-Up Disabled."

SCSI Motor Spin-Up (Reserved)



caution icon

Caution - Do not use the SCSI Motor Spin-Up menu option. It is reserved for specific troubleshooting methods and should be used only by qualified technicians.



The SCSI Motor Spin-Up setting determines how the SCSI drives in a disk array are started. When the power supply cannot provide sufficient current for all the hard drives and controllers that power on at the same time, spinning up the hard drives serially draws less current.

If the drives are configured as "Delay Motor Spin-up" or "Motor Spin-up in Random Sequence," some of these drives might not be ready for the controller to access when the array powers up. Increase the disk access delay time so that the controller waits longer for the drives to become ready.

By default, all hard drives spin up when powered on. These hard drives can be configured so that they do not all spin up at the same time.


procedure icon  To Spin-Up SCSI Hard Drives (Reserved)

1. Choose "view and edit Configuration parameters right arrow Drive-side SCSI Parameters right arrow SCSI Motor Spin-Up."

A confirmation message is displayed.

2. Choose Yes.

SCSI Reset at Power-Up (Reserved)



caution icon

Caution - Do not use the SCSI Reset at Power-Up menu option. It is reserved for specific troubleshooting methods and should be used only by qualified technicians.



By default, when the controller is powered on, it sends a SCSI bus reset command to the internal SCSI bus. When this option is disabled, the controller does not send a SCSI bus reset command at power-on.


procedure icon  To Reset SCSI at Power-Up (Reserved)

1. Choose "view and edit Configuration parameters right arrow Drive-side SCSI Parameters right arrow SCSI Reset at Power-Up."

A confirmation message is displayed.

2. Choose Yes.

3. Power off all hard drives and controller.

4. Power the hard drives and controller on again.

The controller spins up the hard drives in sequence, separated by a four-second interval.

Disk Access Delay Time

This function sets the delay time before the controller tries to access the hard drives after power-on. The default is 15 seconds. The range is from no delay to 75 seconds.


procedure icon  To Set Disk Access Delay Time

1. Choose "view and edit Configuration parametersright arrow Drive-side SCSI Parameters right arrow Disk Access Delay Time."

A list of selections is displayed.

2. Choose the desired delay time.

A confirmation message is displayed.

3. Choose Yes.

SCSI I/O Timeout

The SCSI I/O timeout is the time interval for the controller to wait for a drive to respond. If the controller attempts to read data from or write data to a drive and the drive does not respond within the SCSI I/O timeout value, the drive is considered a failed drive.



caution icon

Caution - The correct setting for "SCSI I/O Timeout" is 30 seconds for Fibre Channel arrays and 15 seconds for SCSI arrays. Do not change this setting. Setting the timeout to a lower value, or to Default, causes the controller to judge a drive as failed while the drive is still retrying or is unable to arbitrate the SCSI bus. Setting the timeout to a greater value causes the controller to keep waiting for a drive, which can sometimes cause a host timeout.



When the drive detects a media error while reading from the drive platter, it retries the previous reading or recalibrates the head. When the drive encounters a bad block on the media, it reassigns the bad block to another spare block. However, all of this takes time. The time to perform these operations can vary between brands and models of drives.

During SCSI bus arbitration, a device with higher priority can use the bus first. A device with lower priority sometimes receives a SCSI I/O timeout when higher-priority devices keep using the bus.


procedure icon  To Choose SCSI I/O Timeout

1. Choose "view and edit Configuration parameters right arrow Drive-side SCSI Parameters right arrow SCSI I/O Timeout -."

A list of selections is displayed.

2. Select a timeout.

A confirmation message is displayed.

3. Choose Yes.

Maximum Tag Count (Tag Command Queuing)

The maximum tag count is the maximum number of tags that can be sent to each drive at the same time. A drive has a built-in cache that is used to sort all of the I/O requests ("tags") that are sent to the drive, enabling the drive to finish the requests more quickly.

The cache size and maximum number of tags varies between brands and models of drive. Use the default setting of 32.



Note - Changing the maximum tag count to "Disable" prevents the hard drive's built-in write-back cache from being used.



The controller supports tag command queuing with an adjustable tag count from 1 to 128. The default setting is "Enabled," with a maximum tag count of 32.

It is possible to configure command tag queuing with a maximum tag count of 128 (SCSI) and 256 (FC).


procedure icon  To Change the Default Maximum Tag Count Setting

1. Choose "view and edit Configuration parameters right arrow Drive-side SCSI Parameters right arrow Maximum Tag Count."

A menu of available tag count values is displayed.

2. Select a tag count number.

A confirmation message is displayed.

 Screen capture showing "Maximum Tag count" chosen and "Set Maximum Tag Count?" message displayed.

3. Choose Yes to confirm.



caution icon

Caution - Disabling the maximum tag count disables the drive's internal cache.



4. Reset the controller to have the change take effect.

Periodic Drive Check Time

The periodic drive check time is the interval at which the controller checks the drives that were on the SCSI bus at controller startup. The default value is "Disabled," which means that if a drive is removed from the bus, the controller does not know the drive was removed until a host tries to access it.

Changing the check time to any other value allows the controller to check all of the drives that are listed under "view and edit scsi Drives" at the specified interval. If any drive is then removed, the controller recognizes the removal even if a host does not access that drive.


procedure icon  To Set the Periodic Drive Check Time

1. Choose "view and edit Configuration parameters right arrow Drive-side SCSI Parameters right arrow Periodic Drive Check Time -."

A list of intervals is displayed.

2. Choose the interval you want.

A confirmation message is displayed.

3. Choose Yes.

Periodic SAF-TE and SES Device Check Time

If there are remote devices within your RAID enclosure monitored by SAF-TE or SES, use this function to decide at what interval the controller checks the status of these devices.



caution icon

Caution - Do not set this interval to less than one second. Setting it to less than one second can adversely impact reliability.




procedure icon  To Set the Periodic SAF-TE and SES Device Check Time

1. Choose "view and edit Configuration parameters right arrow Drive-side SCSI Parameters right arrow Periodic SAF-TE and SES Device Check Time."

A list of intervals is displayed.

2. Choose the interval you want.

A confirmation message is displayed.

3. Choose Yes.

Periodic Auto-Detect Failure Drive Swap Check Time

This menu option causes the controller to poll the unit periodically to detect the replacement of a failed drive. If no spare drive is present in the array, the logical drive begins an automatic rebuild of a degraded RAID set when the firmware detects that a failed drive has been replaced.

The drive-swap check time is the interval at which the controller checks to see whether a failed drive has been swapped. When a logical drive's member drive fails, the controller detects the failed drive (at the specified time interval). Once the failed drive has been swapped with a drive that has adequate capacity to rebuild the logical drive, the rebuild begins automatically.

The default setting is "Disabled," meaning that the controller does not auto-detect the swap of a failed drive. When "Periodic Drive Check Time" is set to "Disabled," the controller is not able to detect any drive removal that occurs after the controller has been powered on. The controller detects drive removal only when a host attempts to access the data on the drive.



Note - This feature requires system resources and can impact performance.




procedure icon  To Set the Auto-Detect Failure Drive Swap Check Time

1. Choose "view and edit Configuration parameters right arrow Drive-side SCSI Parameters right arrow Periodic Auto-Detect Failure Drive Swap Check Time."

A list of intervals is displayed.

2. Choose the interval you want.

A confirmation message is displayed.

3. Choose Yes to confirm the setting.

When you choose a time value to enable the periodic check, the controller polls all connected drives in the controller's drive channels at the assigned interval. Drive removal is detected even if a host does not attempt to access data on the drive.

Drive Predictable Failure Mode (SMART)

Use this menu option to enable or disable SMART functionality. See Understanding SMART Technology for a detailed description of this functionality. See Enabling SMART From Firmware Menus for information about how to configure your Drive Predictable Failure Mode setting.

Auto-Assign Global Spare Drive

This feature is Disabled by default.

When you enable the "Auto-Assign Global Spare Drive" menu option, the system automatically assigns a global spare to the minimum drive ID in unused drives. This enables the Fibre Channel array to rebuild automatically without user intervention if a drive is replaced.


procedure icon  To Automatically Assign Replacements to Faulty Drives

1. Choose "view and edit Configuration parameters right arrow Drive-side SCSI Parameters right arrow Auto-Assign Global Spare Drive."

2. When the prompt Enable Auto-Assign Global Spare? appears, select Yes.

As soon as a faulty drive is replaced, the replacement drive is identified as a global spare drive.


Disk Array Parameters Menu

The menu options on the Disk Array Parameters menu are described in this section.


procedure icon  To Display the Disk Array Parameters Menu

1. Choose "view and edit Configuration parameters right arrow Disk Array Parameters."

 Screen capture showing submenus with "Disk Array Parameters" and "Rebuild Priority Low" chosen.

Setting Rebuild Priority

The RAID controller provides a background rebuilding ability. This means the controller is able to serve other I/O requests while rebuilding the logical drives. The time required to rebuild a drive set depends largely on the total capacity of the logical drive being rebuilt. Additionally, the rebuilding process is totally transparent to the host computer and its operating system.


procedure icon  To Set the Rebuild Priority

1. Choose "view and edit Configuration parameters right arrow Disk Array Parameters right arrow Rebuild Priority."

A list of the priority selections is displayed:

2. Choose the desired setting.

 Screen capture showing submenus with "Rebuild Priority Low" and "Low" chosen.

Verification on Writes

Errors can occur when a hard drive writes data. To avoid write errors, the controller can force the hard drives to verify the written data. There are three selectable methods:

Verification on LD Initialization Writes performs Verify-after-Write while initializing the logical drive.

Verification on LD Rebuild Writes performs Verify-after-Write during the rebuilding process.

Verification on Normal Drive Writes performs Verify-after-Write during normal I/O requests.

Each method can be enabled or disabled individually. Hard drives perform Verify-after-Write according to the chosen method.



Note - The "Verification on Normal Drive Writes" method affects write performance during normal use.




procedure icon  To Enable and Disable Verification Methods

1. Choose "view and edit Configuration parameters right arrow Disk Array Parameters right arrow Verification on Writes."

The verification methods available are displayed.

 Screen capture showing submenus with "Verification on Writes" and "Verification on LD Initialization Writes Disabled" chosen.

2. Choose the method you want to enable or disable.

A confirmation message is displayed.

 Screen capture showing "Verification on LD Initialization Writes Disabled" chosen and the prompt is "Enable Initialize RAID with Verify Data?"

3. Choose Yes.



Note - Follow the same procedure to enable or disable each method.




Redundant Controller Parameters Menu (Reserved)

Do not use the menu options on the Redundant Controller Parameters menu. They are reserved for specific troubleshooting procedures and should be used only by qualified technicians.


procedure icon  To Display the Redundant Controller Parameters Menu (Reserved)

1. Choose "view and edit Configuration parameters right arrow Redundant Controller Parameters."

The "Redundant Controller Parameters" menu options are displayed.


Controller Parameters

Procedures for viewing and editing controller parameters are described in this section.

Controller Name

The controller name is displayed only in the firmware program and is used to identify separate controllers.



Note - The controller name and password jointly share a 16-character alphanumeric field. If you set up a password, make sure that the controller name and the password together fit within the 16-character field.
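
As a simple illustration of the shared-field constraint described in this note, the following sketch pre-checks a proposed name and password before you type them in. The function name and the check itself are hypothetical; the firmware enforces the actual limit.

```python
# Hypothetical pre-check for the shared 16-character name/password field.
# The firmware itself enforces the limit; this only models the note above.
SHARED_FIELD_LENGTH = 16

def fits_shared_field(controller_name, password):
    """Return True if the name and password together fit in the shared field."""
    return len(controller_name) + len(password) <= SHARED_FIELD_LENGTH
```

For example, an 8-character name leaves at most 8 characters for the password.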




procedure icon  To View and Edit the Controller Name

1. Choose "view and edit Configuration parameters right arrow Controller Parameters right arrow Controller Name."

A text area is displayed where you can type in a controller name. Depending on the controller's current settings, you are prompted to either enter a new name or modify the existing name for the designated controller.

 Screen capture showing submenu with "Controller Name - Not Set" chosen and the prompt is asking for the new controller name.

2. Type a name for the controller and press Return.

LCD Title Display - Controller Logo (Reserved)

This function is not applicable to this product.

Password Validation Timeout

This function sets a validation timeout for entering the password when a password has been configured.

If a password is set, the operator must enter this case-sensitive, alphanumeric password each time the controller is reset and the initial Terminal Interface screen is displayed. In most cases, leave the "Always Check" default value unchanged.

Although this function allows you to set a timeout, it does not count retries. In other words, unless the default "Always Check" value is chosen, the user can continue to retry a password until the preset timeout expires. The other available options are "Disable" and timeout values of 1, 2, or 5 minutes.

Leaving this setting at "Always Check" means that there is no defined timeout and the operator has unlimited opportunities to enter the correct password, but each attempt is validated before access to the firmware's functions is permitted. If this function is disabled, any entry provides immediate access to firmware menu options, whether or not a password has been established.



Note - Only one password can be stored.




procedure icon  To Set a Password Validation Timeout

1. Choose "view and edit Configuration parameters right arrow Controller Parameters right arrow Password Validation Timeout."

2. Choose a validation timeout from the list that is displayed.

 Screen capture shows "Controller Parameters," "Password Validation Timeout - Always Check," and "5 minutes" chosen.


Note - The "Always Check" timeout disallows any configuration change unless the correct password is entered.



A confirmation message is displayed.

 Screen capture shows "Password Validation Timeout" and "Always Check" chosen. The prompt is "Change Password Validation Timeout?"

3. Choose Yes to confirm.

Controller Unique Identifier (Reserved)

The controller unique identifier is automatically set by the SAF-TE or SES device. The controller unique identifier is used to create Ethernet addresses and WWNs, and to identify the unit for some network configurations.



caution icon

Caution - Do not change the controller unique identifier unless instructed to do so by qualified service personnel.





caution icon

Caution - If the array is powered off during the controller replacement or if you replaced a controller in a single-controller configuration, you must set the controller unique identifier to the correct value or the array might become inaccessible.




procedure icon  To Set the Controller Unique Identifier

1. Choose "view and edit Configuration parameters right arrow Controller Parameters right arrow Controller Unique Identifier <hex>."

2. Type in the value 0 (to automatically read the chassis serial number from the midplane) or type the hex value for the original serial number of the chassis (used when the midplane has been replaced).

The value 0 is immediately replaced with the hex value of the chassis serial number.



caution icon

Caution - Specify a nonzero value only if the chassis has been replaced but the original chassis serial number must be retained. This feature is especially important in a Sun Cluster environment, where it maintains the same disk device names in a cluster.



3. To implement the revised parameter value, choose "system Functions right arrow Reset Controller."

SDRAM ECC Function (Reserved)

The default is Enabled. Do not change this setting. It is reserved for specific troubleshooting procedures and should be used only by qualified technicians.


Controller Failover

Some controller failure symptoms are as follows:

A "Bus Reset Issued" warning message is displayed for each of the channels. In addition, a "Redundant Controller Failure Detected" alert message is displayed.

If one controller in the redundant controller configuration fails, the surviving controller temporarily takes over for the failed controller until it is replaced.

The surviving controller disables and disconnects from its failed counterpart while gaining access to all the signal paths. It then manages the ensuing event notifications and takes over all processes. The surviving controller is always the primary controller regardless of its original status, and any replacement controller installed afterward assumes the role of the secondary controller.

The failover and failback processes are completely transparent to the host.

Controllers are hot-swappable if you are using a redundant configuration. Replacing a failed unit takes only a few minutes. Since the I/O connections are on the controllers, you might experience some unavailability between the time when the failed controller is removed and the time when a new one is installed in its place.

To maintain your redundant controller configuration, replace the failed controller as soon as possible. For details, refer to Sun StorEdge 3000 Family FRU Installation Guide.


Rebuilding Logical Drives

This section describes automatic and manual procedures for rebuilding logical drives.

Automatic Logical Drive Rebuild

Rebuild with Spare. When a member drive in a logical drive fails, the controller first checks whether a local spare drive is assigned to that logical drive. If so, the controller automatically starts rebuilding the data of the failed drive onto the local spare.

If no local spare is available, the controller searches for a global spare. If a global spare is found, the controller automatically uses it to rebuild the logical drive.

Failed Drive Swap Detect. If neither a local spare drive nor a global spare drive is available, and the "Periodic Auto-Detect Failure Drive Swap Check Time" is disabled, the controller does not attempt to rebuild unless you apply a forced-manual rebuild.

To enable this feature, from the Main Menu choose "view and edit Configuration parameters right arrow Drive-side SCSI Parameters right arrow Periodic Auto-Detect Failure Drive Swap Check Time."

When the "Periodic Auto-Detect Failure Drive Swap Check Time" is enabled (that is, a check time interval has been chosen), the controller detects whether the failed drive has been swapped by checking the failed drive's channel/ID. Once the failed drive has been swapped, the rebuild begins immediately.



Note - This feature requires system resources and can impact performance.



If the failed drive is not swapped but a local spare is added to the logical drive, the rebuild begins with the spare.

For a flowchart of automatic rebuild, see FIGURE 8-1.

 FIGURE 8-1 Automatic Rebuild

Flowchart shows automatic rebuild process.
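The decision flow shown in FIGURE 8-1 can be sketched as follows. This is a simplified illustration only; the function and variable names are hypothetical, and the real logic runs inside the controller firmware.

```python
# Illustrative model of the automatic rebuild decision flow (FIGURE 8-1).
# All names are hypothetical; this is not firmware code.

def choose_rebuild_target(local_spares, global_spares,
                          swap_check_enabled, failed_drive_replaced):
    """Return the drive the controller would rebuild onto, or None."""
    if local_spares:
        return local_spares[0]        # a local spare takes precedence
    if global_spares:
        return global_spares[0]       # otherwise fall back to a global spare
    if swap_check_enabled and failed_drive_replaced:
        # Swap detected at the failed drive's channel/ID: rebuild begins
        return "replacement drive"
    return None  # wait for a spare or a forced-manual rebuild
```

With no spares defined and the swap check disabled, the function returns None, matching the case where the controller does not attempt a rebuild until a forced-manual rebuild is applied.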

Manual Rebuild

When a user applies a forced-manual rebuild, the controller first checks whether a local spare is assigned to the logical drive. If a local spare is available, the controller automatically starts the rebuild.

If no local spare is available, the controller searches for a global spare. If there is a global spare, the logical drive rebuild begins. See FIGURE 8-2.

If neither local spare nor global spare is available, the controller examines the channel and ID of the failed drive. After the failed drive has been replaced by a healthy one, the logical drive rebuild begins on the new drive. If there is no drive available for rebuilding, the controller does not attempt to rebuild until the user applies another forced-manual rebuild.

 FIGURE 8-2 Manual Rebuild

Flowchart shows manual rebuild process.

Concurrent Rebuild in RAID 1+0

RAID 1+0 allows multiple-drive failure and concurrent multiple-drive rebuild. Drives newly swapped must be scanned and set as local spares. These drives are rebuilt at the same time; you do not need to repeat the rebuilding process for each drive.


Recovering From Fatal Drive Failure

In redundant RAID array configurations, your system is protected by RAID parity and by the default global spare (you might have more than one).



Note - A FATAL FAIL status occurs when one more drive fails than the number of spare drives available for that logical drive. If a logical drive has two global spares available, then three failed drives must occur for FATAL FAIL status.



It is rare for two or more drives to fail at the same time.
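
The threshold described in the note above can be expressed as a simple check. This is an illustrative model only; the function name is hypothetical, and the controller firmware determines the actual status.

```python
# Literal model of the note above: FATAL FAIL status occurs when one more
# drive fails than the number of spare drives available to the logical drive.

def is_fatal_fail(failed_drives, spare_drives):
    """Return True when the failure count exceeds the available spares."""
    return failed_drives > spare_drives
```

For example, with two global spares available, two failed drives can still be rebuilt, but a third failure triggers FATAL FAIL status.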


procedure icon  To Recover From Fatal Drive Failure

1. Discontinue all I/O activity immediately.

2. To cancel the beeping alarm, choose "system Functions right arrow Mute beeper."

3. Physically check whether all the drives are firmly seated in the array and make sure that none have been partially or completely removed.

4. Choose "view and edit Logical drives" from the Main Menu and look for:

Status: FAILED DRV (one failed drive) or
Status: FATAL FAIL (two or more failed drives)

5. Select the logical drive.

6. Choose "view scsi drives."

If two physical drives have a problem, one drive will have a BAD status and one drive will have a MISSING status. The MISSING status is a reminder that one of the drives might be a "false" failure. The status does not tell you which drive might be a false failure.

7. Perform one of the following steps:

8. Repeat Steps 4 and 5 to check the logical and drive status.

After resetting the controller, if there is a false bad drive, the array automatically starts rebuilding the failed RAID set.

If the array does not automatically start rebuilding the RAID set, check the status under "view and edit Logical drives."

a. Replace the failed drive. Refer to Sun StorEdge 3000 Family FRU Installation Guide for more information.

b. Delete the logical drive. See Deleting a Logical Drive for more information.

c. Create a new logical drive. See Assigning a Logical Drive Name for more information.

For additional troubleshooting tips, refer to the Sun StorEdge 3510 FC Array Release Notes located at:

http://www.sun.com/products-n-solutions/hardware/docs/Network_Storage_Solutions/Workgroup/3510