Chapter 5

First-Time Configuration for FC or SATA Arrays

The Sun StorEdge 3510 FC array and Sun StorEdge 3511 SATA array are each preconfigured with a single RAID 0 logical drive mapped to LUN 0, and no spare drives. This is not a working configuration. Unmap and delete this logical drive, using the procedure in To Unmap and Delete a Logical Drive, and replace it with logical drives that suit your requirements.

This chapter shows you how to configure your array for the first time, or how to reconfigure it. It describes the normal sequence of events you follow to configure an array.

Before configuring your array, carefully read chapters 1, 2, and 3.



Note - As you perform the operations described in this and other chapters, you might periodically see event messages pop up on the screen. To dismiss an event message after you have read it, press Escape. To prevent event messages from displaying so that you can read them only in the event message log, press Ctrl-C. You can press Ctrl-C again at any time to re-enable pop-up display of event messages. See Viewing Event Logs on the Screen for more information about event messages.




Existing Logical Drive Configuration

If you are configuring your array for the first time, there is no need to review the existing configuration before you delete it.

If you are reconfiguring logical drives, note the existing logical drive configuration to determine its status and any changes you want to make to the RAID level, logical drive size, number of physical drives that make up a selected logical drive, and spare drives. Also view the channel configuration to determine whether you want to make any changes to the channel mode and channel host IDs.


To View the Logical Drive Configuration

1. From the Main Menu, choose "view and edit Logical drives."

The Logical Drive Status table is displayed.

For a description of the parameters, see Logical Drive Status Table.

 Screen capture shows logical drive configuration.

2. Note the changes you want to make to the existing configuration.


To View the Channel Configuration

1. From the Main Menu, choose "view and edit channelS."

The Channel Status table is displayed.

 Screen capture shows channel configuration.

2. Note the changes you want to make to the existing configuration.


Deleting Logical Drives

To assign a different RAID level or set of drives to a logical drive, or to change local spare drives, you must first unmap and delete the logical drive and then create a new logical drive.




Caution - This operation erases all data on the logical drive. If any data exists on the logical drive, copy it to another location or back it up before it is deleted.





Note - You can delete a logical drive only if it has first been unmapped.




To Unmap and Delete a Logical Drive

1. From the Main Menu, choose "view and edit Host luns."

A list of channel and host IDs is displayed. You might need to scroll through the list to display some of the channels and host IDs.

2. Choose a channel and host ID combination from the list.

A list of host LUNs that are assigned to the selected channel/host combination is displayed.

3. Select a host LUN and choose Yes to unmap the host LUN from the channel/host ID.

 Screen capture shows Unmap Host Lun dialog with Yes selected.

4. Repeat Step 3 to unmap all remaining host LUNs that are mapped to the logical drive you want to delete.

5. Press Escape to return to the Main Menu.

6. From the Main Menu, choose "view and edit Logical drives."

7. Select the logical drive that you unmapped and want to delete.

8. Choose "Delete logical drive," and, if it is safe to delete the logical drive, choose Yes to confirm the deletion.


Cache Optimization Mode (FC and SATA)

Before creating any logical drives, determine the appropriate optimization mode for the array. The type of application accessing the array determines whether to use sequential or random optimization. See Cache Optimization Mode and Stripe Size Guidelines for a detailed description of sequential and random optimization.



Note - Due to firmware improvements beginning with version 4.11, sequential optimization yields better performance than random optimization for most applications and configurations. Use sequential optimization unless real-world tests in your production environment show better results for random optimization.



If you are modifying an existing configuration and do not want to delete your existing logical drives, verify your optimization mode but do not change it.


To Verify the Optimization Mode

1. From the Main Menu, choose "view and edit Configuration parameters right arrow Caching Parameters."

Sequential I/O is the default optimization mode.

 

2. To accept the optimization mode that is displayed, press Escape.

To change the optimization mode, see To Change the Optimization Mode.


To Change the Optimization Mode

Once logical drives are created, you cannot use the RAID firmware to change the optimization mode without deleting all logical drives. You can, however, use the Sun StorEdge CLI set cache-parameters command to change the optimization mode while logical drives exist. Refer to the Sun StorEdge 3000 Family CLI User's Guide for more information.
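For illustration only, a hedged sketch of that CLI alternative follows (the argument form shown here is an assumption, not confirmed syntax; verify it in the Sun StorEdge 3000 Family CLI User's Guide before use):

sccli> set cache-parameters optimization sequential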

You must delete all logical drives before you can use the RAID firmware to change the optimization mode. See Deleting Logical Drives for the procedure to delete logical drives.

1. From the Main Menu, choose "view and edit Configuration parameters right arrow Caching Parameters."

The Optimization mode that is currently assigned to the array is displayed.

2. Choose "Optimization for Sequential I/O" or "Optimization for Random I/O" as appropriate.

If you have not deleted all logical drives, a notice will inform you of this requirement and you will not be able to change the optimization mode.

3. Choose Yes to change the Optimization mode from Sequential I/O to Random I/O, or from Random I/O to Sequential I/O.

You are prompted to reset the controller:

NOTICE: Change made to this setting will NOT take effect until all Logical Drives are deleted and then the controller is RESET. Prior to resetting the controller, operation may not proceed normally.
 
Do you want to reset the controller now ?

4. Choose Yes to reset the controller.

If you do not reset the controller now, the optimization mode remains unchanged.


Physical Drive Status

Before configuring physical drives into a logical drive, you must determine the availability of the physical drives in your enclosure. Only drives with a status of FRMT DRV are available.



Note - A drive that does not show a status of FRMT DRV needs to have reserved space added. See Changing Disk Reserved Space for more information.




To Check Physical Drive Availability

1. From the Main Menu, choose "view and edit Drives."

A list of all the installed physical drives is displayed.

 Screen capture shows the physical drives status window accessed with the "view and edit Scsi drives" command.

2. Use the arrow keys to scroll through the table and check that all installed drives are listed.



Note - If a drive is installed but is not listed, it might be defective or installed incorrectly.



When the power is initially turned on, the controller scans all physical drives that are connected through the drive channels.

To view more information about a drive:

a. Select the drive.

b. Choose "View drive information."

Additional information is displayed about the drive you selected.

 Screen capture shows information available about a selected drive.


Enabling Support for SATA Expansion Units Attached to FC Arrays

It is possible to connect Sun StorEdge 3511 SATA expansion units to Sun StorEdge 3510 FC arrays, either alone or in combination with Sun StorEdge 3510 FC expansion units. Refer to the release notes and Sun StorEdge 3000 Family Installation, Operation, and Service Manual for your array for important information about limitations and appropriate uses of such a configuration.

If you do connect one or more Sun StorEdge 3511 SATA expansion units to a Sun StorEdge 3510 FC array, you must ensure that mixed drive support is enabled. When mixed drive support is enabled, safeguard menu options and messages help ensure that you do not improperly mix FC and SATA drive types when performing operations such as creating logical drives and logical volumes, or assigning local or global spares to logical drives.

If you have not connected any SATA expansion units to a Sun StorEdge 3510 FC array, verify that mixed drive support is not enabled so that you do not see inappropriate and potentially confusing menu options and messages.


To Enable or Disable Mixed Drive Support

1. From the Main Menu, choose "view and edit Configuration parameters right arrow Disk Array Parameters right arrow Mixed Drive Support -."

 

Depending on whether Mixed Drive Support is currently enabled or disabled, a message describes the change you can make:

Disable Mixed Drive Support ?

2. Choose Yes to change the Mixed Drive Support setting or choose No to keep the current setting.


Channel Settings

The Sun StorEdge 3510 FC array and Sun StorEdge 3511 SATA array are preconfigured with the channel settings shown in Default Channel Configurations. Follow the procedures for configuring a channel mode if you plan to add a host connection or expansion unit, or to reassign redundant channel communications.

To make changes to channel host IDs, follow the procedures for adding or deleting a host ID.

Configuring Channel Mode

When configuring the channel mode, the following rules apply:


To Modify a Channel Mode

1. From the Main Menu, choose "view and edit channelS."

The Channel Status Table is displayed.

 Screen capture shows a FC array Channel Status table.

The Chl column for channel 2 displays <3:C> to indicate that channel 3 is a redundant loop for channel 2. Similarly, the Chl column for channel 3 displays <2:C> to indicate that channel 2 is a redundant loop for channel 3.

2. Select the channel that you want to modify.

3. Choose "channel Mode" to display a menu of channel mode options.

4. Select the mode you want that channel to have, and then choose Yes to confirm the change.

This change does not take effect until the controller is reset.

NOTICE: Change made to this setting will NOT take effect until the controller is RESET. Prior to resetting the controller, operation may not proceed normally. Do you want to reset the controller now?

5. Choose Yes to reset the controller.

Redundant Communication Channels (RCCOM)

The RCCOM channel mode provides the communication channels by which two controllers in a redundant RAID array communicate with one another. This communication enables the controllers to monitor each other, and includes configuration updates and control of cache.

By default, channels 2 and 3 are configured as DRV + RCCOM (Drive and RCCOM), which combines drive and RCCOM functions on the same channel. In this configuration, RCCOM is distributed over all DRV + RCCOM channels, leaving the other channels free for host or drive functions.

If performance is particularly important, you can spread the combined DRV + RCCOM functions over four channels. Alternatively, you can configure two channels for exclusive use as RCCOM channels, ensuring maximum I/O performance on the remaining host and drive channels. These two configurations are described below.

Using Four DRV + RCCOM Channels

If only channels 0 and 1 are used for communication with servers, channels 4 and 5 can be configured as DRV + RCCOM, thus providing four DRV + RCCOM channels (channels 2, 3, 4, and 5). An advantage of this configuration is that channels 4 and 5 are still available for connection of expansion units. The performance impact of RCCOM is reduced because it is now distributed over four channels instead of two. If at a later time you choose to add an expansion unit, it will not be necessary to interrupt service by resetting the controller after reconfiguring a channel.


To Configure Channels 4 and 5 as Additional DRV + RCCOM Channels

1. From the Main Menu, choose "view and edit channelS."

2. Select channel 4.

3. Choose "channel Mode right arrow Drive + RCCOM," and then choose Yes to confirm the change.

4. Choose No to decline the controller reset, since you have another channel to reconfigure.

5. Press Enter to return to the menu.

6. Choose "Secondary controller scsi id."

7. Specify a secondary ID (SID) that is not already in use.

You will specify this same SID for Channel 5, as shown below.

8. Choose No to decline the controller reset, since you have another channel to reconfigure.

9. Select channel 5.

10. Choose "channel Mode right arrow Drive + RCCOM," and then choose Yes to confirm the change.

11. Choose No to decline the controller reset, since you have another channel to reconfigure.

12. Press Enter to return to the menu.

13. Choose "Secondary controller scsi id."

14. Specify the same secondary ID (SID) that you assigned to Channel 4.

This change does not take effect until the controller is reset, as described in the message that is displayed:

NOTICE: Change made to this setting will NOT take effect until the controller is RESET. Prior to resetting the controller, operation may not proceed normally. Do you want to reset the controller now?

15. Choose Yes to reset the controller.

Using Channels 4 and 5 as RCCOM-Only Channels

When only channels 0 and 1 are used for communication with servers, another option is to assign channels 4 and 5 as dedicated RCCOM channels, and then assign channels 2 and 3 as drive channels. This reduces the impact of RCCOM on the drive channels by removing RCCOM from drive channels 2 and 3. In this configuration, however, channels 4 and 5 cannot be used to communicate with hosts or to attach expansion units.




Caution - If later you reconfigure channels 4 and 5 as host or drive channels, you must restore channels 2 and 3 as DRV + RCCOM channels or the RAID array will no longer operate.




To Configure Channels 4 and 5 as RCCOM-Only Channels

1. On the Main Menu, choose "view and edit channelS."

2. Select channel 4.

3. Choose "channel Mode right arrow RCCOM," and then choose Yes to confirm the change.

4. Choose No to decline the controller reset, since you have three more channels to reconfigure.

5. Select channel 5.

6. Choose "channel Mode right arrow RCCOM," and then choose Yes to confirm the change.

7. Choose No to decline the controller reset, since you have two more channels to reconfigure.

8. Select channel 2.

9. Choose "channel Mode right arrow Drive."

10. Choose Yes to confirm the change.

11. Choose No to decline the controller reset, since you have another channel to reconfigure.

12. Select channel 3.

13. Choose "channel Mode right arrow Drive," and then choose Yes to confirm the change.

This change does not take effect until the controller is reset.

NOTICE: Change made to this setting will NOT take effect until the controller is RESET. Prior to resetting the controller, operation may not proceed normally. Do you want to reset the controller now?

14. Choose Yes to reset the controller.

Host Channel IDs

Host channel IDs identify the controller to the host. Some applications require that specific IDs be assigned to host channels before they can recognize the array. Sun StorEdge 3510 FC array and Sun StorEdge 3511 SATA array default host channel IDs are shown in TABLE 3-2 and TABLE 3-3 under Default Channel Configurations.

The number of host IDs depends on the configuration mode:

Each host ID can have up to 32 partitions mapped to LUNs, for a total that must not exceed 128 LUNs in point-to-point mode or 1024 LUNs in loop mode. To map 1024 partitions in loop mode, you must add host IDs so that a total of 32 IDs is mapped to the array's channels. Several configurations are possible, such as eight IDs mapped to each of the four host channels, or sixteen IDs mapped to each of two channels and none to the other two. For more information, see Planning for 1024 LUNs on an FC or SATA Array (Optional, Loop Mode Only).
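To make the arithmetic concrete: 32 LUNs per host ID x 32 host IDs = 1024 LUNs, so each of the configurations above supplies the required 32 IDs (8 IDs x 4 channels = 32, or 16 IDs x 2 channels = 32).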

Each host channel has a unique primary and secondary ID available. Typically host IDs are distributed between the primary and secondary controllers to load-balance the I/O in the most effective manner for the network. You can:



Note - Channel ID values of 0 to 125 are accessed in eight ranges of IDs. When you change a channel's mode, the channel ID might change to an ID that is not in the range you want to use. See Channel ID Ranges for a description of channel ID ranges and a procedure for changing the ID range.




To Add or Delete a Unique Host ID



Note - To change an ID, you must first delete it and then add the new ID.



1. From the Main Menu, choose "view and edit channelS."

2. Select the host channel on which you want to add or change an ID.

3. Choose "view and edit scsi Id."

If host IDs have already been configured on the host channel, they will be displayed.

4. If no host IDs have been configured, choose Yes when the following message is displayed.

No SCSI ID Assignment - Add Channel SCSI ID?

5. If a host ID has already been assigned to that channel, select an ID.

6. To delete an ID, choose "Delete Channel SCSI ID," and then choose Yes to confirm the deletion.

7. To add an ID, choose "Add Channel SCSI ID."

8. Select a controller from the list to display a list of IDs.

9. Select an ID from the list, and then choose Yes to confirm your choice.

This change does not take effect until the controller is reset.

NOTICE: Change made to this setting will NOT take effect until the controller is RESET. Prior to resetting the controller, operation may not proceed normally. Do you want to reset the controller now?

10. Choose Yes to reset the controller.

Channel ID Ranges

The ID values of 0 to 125 are available when you assign a channel ID. These IDs are accessed in the eight ranges shown in TABLE 5-1.

TABLE 5-1 ID Values Assigned to Each ID Range

Range   Available ID Numbers
0       0 to 15
1       16 to 31
2       32 to 47
3       48 to 63
4       64 to 79
5       80 to 95
6       96 to 111
7       112 to 125


Once an ID is assigned to a channel, if you decide to add an ID, the only IDs that are initially displayed are those in the range of the first ID you assigned. For example, if you initially assign an ID of 40 to host channel 0, when you add IDs to host channel 0, only IDs in Range 2 (32 to 47) are available.


To Assign an ID From a Different Range

1. Choose "view and edit channelS" to display the Channel Status table.

2. Select the channel whose ID range you want to change.

3. Choose "view and edit scsi Id."

4. Select a controller.



Note - To change an ID, you must first delete it and then add the new ID.



5. Choose "Delete Channel SCSI ID," and then choose Yes to confirm the deletion.

This change does not take effect until the controller is reset.

NOTICE: Change made to this setting will NOT take effect until the controller is RESET. Prior to resetting the controller, operation may not proceed normally. Do you want to reset the controller now?

6. If other IDs exist on the selected channel, choose No and repeat Step 5 to delete every ID configured on the channel.

7. After deleting the last ID, choose Yes to reset the controller.

When all IDs have been deleted, you can assign an ID from a different range.

No SCSI ID Assignment - Add Channel SCSI ID ?

8. Choose Yes to assign an ID.

9. Select the controller to which you want to assign an ID.

A list of IDs is displayed. Depending on the current range, adjoining ranges are displayed at the top and bottom of the ID list, except ranges 0 and 7, which only display one adjoining range. In the following example, range 7 is displayed.

 Screen capture showing Range 6 displayed in the ID list, with Range 5 selected.

10. To change to a different range, select an adjoining range.

IDs in the newly selected range are displayed.

11. Repeat Step 10 until the desired range is displayed.

12. Select an ID from the desired range, and then choose Yes to confirm the assignment.

This change does not take effect until the controller is reset.

NOTICE: Change made to this setting will NOT take effect until the controller is RESET. Prior to resetting the controller, operation may not proceed normally. Do you want to reset the controller now?

13. Choose Yes to reset the controller.


Fibre Connection Protocol

See Fibre Connection Protocol Guidelines for a detailed description of Loop and Point-to-Point operation.


To Change the Fibre Connection Protocol

1. From the Main Menu, choose "view and edit Configuration parameters right arrow Host-side Parameters right arrow Fibre Connection Option."

The fibre connection that is currently assigned to the array is displayed.

2. Choose "Loop only" or "Point to point only" as appropriate.



Note - Do not use the "Loop preferred, otherwise point to point" menu option. This option is reserved for special use and should be used only if you are directed to do so by technical support.



This change does not take effect until the controller is reset.

NOTICE: Change made to this setting will NOT take effect until the controller is RESET. Prior to resetting the controller, operation may not proceed normally. Do you want to reset the controller now?

3. Choose Yes to reset the controller.


Creating Logical Drives

The RAID array is preconfigured with one RAID 0 logical drive as described in Default Logical Drive Configuration. Each logical drive consists of a single partition by default.

This section describes how to modify the RAID level or add more logical drives. In these procedures, you configure a logical drive to contain one or more physical drives based on the desired RAID level, and divide the logical drive into additional partitions.



Note - Depending on the size and RAID level, it can take up to several hours to build a logical drive. Online initialization, however, enables you to begin configuring and using the logical drive before initialization is complete.



Creating a 2-Tbyte RAID 5 logical drive can take up to:

Preparing for Logical Drives Larger Than 253 Gbyte (Solaris Operating System Only)

The Solaris operating system requires drive geometry for various operations, including newfs. For the appropriate drive geometry to be presented to the Solaris operating system for logical drives larger than 253 Gbyte, change the default settings to cover all logical drives over 253 Gbyte. These settings work for smaller configurations as well. The controller automatically adjusts the sector count so the operating system can read the correct drive capacity.

For Solaris operating system configurations, use the values in the following table.

TABLE 5-2 Cylinder and Head Mapping for the Solaris Operating System

Logical Drive Capacity   Cylinder            Head           Sector
< 253 GB                 < 65536 (default)   variable       variable (default)
253 GB - 1 TB            < 65536 (default)   64 (default)   variable (default)


After settings are changed, they apply to all logical drives in the chassis.
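As an illustrative calculation (the 127-sector limit used here is an assumption, not a value from this manual): a geometry of 65535 cylinders x 64 heads x 127 sectors per track at 512 bytes per sector addresses 65535 x 64 x 127 x 512 = 272,726,261,760 bytes, roughly 254 Gbyte, which is consistent with these settings being needed for logical drives above the 253-Gbyte boundary.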



Note - Refer to your operating system documentation for limitations on device sizes.




To Change Cylinder and Head Settings

1. Choose "view and edit Configuration parameters right arrow Host-side Parameters right arrow Host Cylinder/Head/Sector Mapping Configuration right arrow Sector Ranges - Æ Variable," and then choose Yes to confirm your choice.

2. Choose "Head Ranges - right arrow 64 Heads," and then choose Yes to confirm your choice.

3. Choose "Cylinder Ranges - right arrow < 65536," and then choose Yes to confirm your choice.


To Create a Logical Drive



Note - To reassign drives and add local or global spare drives on your preconfigured array, you must first unmap and then delete the existing logical drives. For more information about deleting a logical drive, see Deleting Logical Drives.



1. From the Main Menu, choose "view and edit Logical drives."

Unassigned logical drives show a RAID level of NONE.

2. Select the first available unassigned logical drive (LG).

 Screen capture shows the logical drive status window with unassigned logical drive 1 selected.

You can create as many as 32 logical drives using physical drives on any loop.

If mixed drive support is enabled, a menu of drive types is displayed. If mixed drive support is disabled, proceed to the next step. See To Enable or Disable Mixed Drive Support for information about mixed drive support.

3. If mixed drive support is enabled, select the type of logical drive to create.

 Screen capture shows Fibre drives and SATA drives menu option.

4. When prompted to "Create Logical Drive?" choose Yes to confirm your choice and display a pull-down list of supported RAID levels.

5. Select a RAID level from the list to assign to the logical drive.



Note - RAID 5 is used as an example in the following steps.



 


Note - NRAID does not provide data redundancy. The NRAID option that appears in some firmware menus is no longer used and is not recommended.



Screen capture shows RAID levels menu with "RAID 5" selected.

For more information about RAID levels, see RAID Levels.

6. Select the drives you want to include in the logical drive from the list of available physical drives, using the steps below.

You must select at least the minimum number of drives required for the selected RAID level.

For redundancy, you can create a logical drive containing drives distributed over separate channels. You can then create several partitions on each logical drive. In a RAID 1 or RAID 0+1 configuration, the order in which you select the physical drives for a logical drive determines the channels to which the physical drives are assigned. If you want drives to be mirrored over two channels, select them in an order that alternates between those channels.



Note - Logical drives that include both Fibre Channel drives and SATA drives are not supported. If you have enabled mixed drive support, only the appropriate drive types are displayed.



a. Use the up and down arrow keys and press Return to select the drives you want to include in the logical drive.

An asterisk mark (*) is displayed in the Chl (Channel) column of each selected physical drive.

 Screen capture shows a list of available physical drives with three selected drives marked with an asterisk.

b. To deselect a drive, press Return again on the selected drive.

The asterisk marking that drive disappears.

c. After all physical drives have been selected for the logical drive, press Escape.

Several optional menu options are displayed that you can choose to define aspects of the logical drive you are creating. These menu options are described in the remainder of this section.

7. (Optional) Set the maximum logical drive capacity, using the following procedure:

a. Choose "Maximum Drive Capacity."



Note - Changing the maximum drive capacity reduces the size of the logical drive and leaves some disk space unused.



b. Specify the maximum capacity of each physical drive that makes up the logical drive you are creating.

 Screen capture shows a logical drive with Maximum Available Drive Capacity of 34476 MB and Maximum Drive Capacity configured to 20000 MB.

A logical drive should be composed of physical drives with the same capacity. A logical drive can only use the capacity of each drive up to the maximum capacity of the smallest drive.
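For example (the drive sizes here are illustrative): if a logical drive combines one 36-Gbyte drive with two 73-Gbyte drives, only 36 Gbyte of each drive is used, so usable capacity is calculated from 3 x 36 Gbyte before RAID-level overhead.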

8. (Optional) Add a local spare drive from the list of unused physical drives, by following these steps:

a. Choose "Assign Spare Drives" to display a list of available physical drives you can use as a local spare.



Note - A global spare cannot be created while creating a logical drive.





Note - An NRAID or RAID 0 logical drive has no data redundancy or parity and does not support spare drive rebuilding.



The spare chosen here is a local spare and will automatically replace any disk drive that fails in this logical drive. The local spare is not available for any other logical drive.

b. Select a physical drive from the list to use as a local spare.

 Screen capture shows a list of unused drives with the top drive, ID 8, selected.

c. Press Escape to return to the menu of logical drive options.



Note - The Disk Reserved Space option is not supported while you are creating a logical drive.



If you use two controllers for a redundant configuration, you can assign a logical drive to either of the controllers to balance the workload. By default, all logical drives are assigned to the primary controller.

Logical drive assignments can be changed later, but that operation requires that you unmap host LUNs and reset the controller.

9. (Optional) For dual-controller configurations, you can assign this logical drive to the secondary controller by following these steps:




Caution - In single-controller configurations, assign logical drives only to the primary controller.



a. Choose "Logical Drive Assignments."

 Screen capture showing the Assign Logical Drive to Redundant Controller prompt.

b. Choose Yes to assign the logical drive to the redundant controller.

10. (Optional) Configure the logical drive's write policy.

Write-back cache is the preconfigured global logical drive write policy, which is specified on the Caching Parameters submenu. (See Enabling and Disabling Write-Back Cache for the procedure on setting the global caching parameter.) This option enables you to assign a write policy per logical drive that is either the same as or different than the global setting. Write policy is discussed in more detail in Cache Write Policy Guidelines.

a. Choose "Write Policy -."



Note - The Default write policy displayed is the global write policy assigned to all logical drives.



The following write policy options are displayed:

As described in Cache Write Policy Guidelines, the array can be configured to dynamically switch write policy from write-back cache to write-through cache if specified events occur. Write policy is only automatically switched for logical drives with write policy configured to Default. See Event Trigger Operations for more information.

b. Choose a write policy option.

Screen capture showing write policy options with Write Back selected.

Note - You can change the logical drive's write policy at any time, as explained in Changing Write Policy for a Logical Drive.



11. (Optional) Set the logical drive initialization mode by choosing "Initialize Mode" from the list of logical drive options, and then choosing Yes to change the initialization mode.

The assigned initialization mode is displayed in the list of logical drive options.

You can choose between these two logical drive initialization options:

On-line initialization enables you to configure and use the logical drive before initialization is complete. Because the controller is building the logical drive while performing I/O operations, initializing a logical drive online requires more time than offline initialization.

Off-line initialization enables you to configure and use the drive only after initialization is complete. Because the controller is building the logical drive without also having to perform I/O operations, offline initialization requires less time than online initialization.

Because logical drive initialization can take a considerable amount of time, depending on the size of your physical disks and logical drives, you can choose online initialization so that you can use the logical drive before initialization is complete.

12. (Optional) Configure the logical drive stripe size.

Depending on the optimization mode selected, the array is configured with the default stripe sizes shown in Cache Optimization Mode and Stripe Size Guidelines. When you create a logical drive, however, you can assign a different stripe size to that logical drive.



Note - Default stripe sizes result in optimal performance for most applications. Selecting a stripe size that is inappropriate for your optimization mode and RAID level can decrease performance significantly. For example, smaller stripe sizes are ideal for transaction-based, randomly accessed I/O. But when a logical drive configured with a 4-Kbyte stripe size receives 128-Kbyte writes, each physical drive has to perform many more writes to store the data in 4-Kbyte fragments. Change stripe size only when you are sure it will result in performance improvements for your particular applications.
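To make that example concrete: with a 4-Kbyte stripe size, a 128-Kbyte write is divided into 128 / 4 = 32 stripe units serviced as 32 small writes, while a 128-Kbyte stripe size allows the same write to be serviced as a single stripe unit.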



See Cache Optimization Mode and Stripe Size Guidelines for more information.



Note - Once a logical drive is created, its stripe size cannot be changed. To change the stripe size, you must delete the logical drive, and then recreate it using the new stripe size.



a. Choose Stripe Size.

A menu of stripe size options is displayed.

b. Choose Default to assign the stripe size per optimization mode, or choose a different stripe size from the menu.

Default stripe size per optimization mode is shown in Cache Optimization Mode and Stripe Size Guidelines.

The selected stripe size is displayed in the list of logical drive options.

13. Once all logical drive options have been assigned, press Escape to display the settings you have chosen.

 Screen capture shows the Create Logical Drive confirmation window displayed with "Yes" selected.

14. Verify that all information is correct, and then choose Yes to create the logical drive.



Note - If the logical drive has not been configured correctly, select No to return to the logical drive status table so that you can configure the drive correctly.



Messages indicate that the logical drive initialization has begun, and then that it has completed.

15. Press Escape to close the drive initialization message.

A progress bar displays the progress of initialization as it occurs.

You can press Escape to remove the initialization progress bar and continue working with menu options to create additional logical drives. The percentage of completion for each initialization in progress is displayed in the upper left corner of the window.

 Screen capture shows the initialization in progress displayed in the upper left corner of the window.

The following message is displayed when the initialization is completed:

 Screen capture shows the notification that initialization of the logical drive is complete.

16. Press Escape to dismiss the notification.

The newly created logical drive is displayed in the status window.

 Screen capture shows the newly-created logical drive in the status window.

Controller Assignment

By default, logical drives are automatically assigned to the primary controller. If you assign half of the logical drives to the secondary controller in a dual-controller array, maximum speed and performance are somewhat improved because the traffic is redistributed.

To balance the workload between both controllers, you can distribute your logical drives between the primary controller (displayed as the Primary ID or PID) and the secondary controller (displayed as the Secondary ID or SID).




Caution - In single-controller configurations, do not set the controller as a secondary controller. The primary controller controls all firmware operations, so the single controller must keep the primary controller assignment. If you disable the Redundant Controller function and reconfigure the controller with the Autoconfigure option or as a secondary controller, the controller module becomes inoperable and must be replaced.



After a logical drive has been created, it can be assigned to the secondary controller. Then the host computer associated with the logical drive can be mapped to the secondary controller (see Mapping a Partition to a Host LUN).


To Change a Controller Assignment (Optional)




Caution - Assign logical drives only to primary controllers in single-controller configurations.



1. From the Main Menu, choose "view and edit Logical drives."

2. Select the logical drive you want to reassign.

3. Choose "logical drive Assignments," and then choose Yes to confirm the reassignment.

The reassignment is evident from the "view and edit Logical drives" screen. A "P" in front of the LG number, such as "P0," means that the logical drive is assigned to the primary controller. An "S" in front of the LG number means that the logical drive is assigned to the secondary controller.

Logical Drive Name

You can assign a name to each logical drive. These logical drive names are used only in RAID firmware administration and monitoring and do not appear anywhere on the host. You can also edit this drive name.


To Assign a Logical Drive Name (Optional)

1. From the Main Menu, choose "view and edit Logical drives."

2. Select a logical drive.

3. Choose "logical drive Name."

4. Type the name you want to give the logical drive in the New Logical Drive Name field and press Return to save the name.

 Screen capture shows "Logical Drive name:" prompt displayed and "New Name" entered in the New Logical Drive Name field.


Partitions

You can divide a logical drive into several partitions, or use the entire logical drive as a single partition. You can configure up to 32 partitions and 1024 LUN assignments (loop mode only). For guidelines on setting up 1024 LUNs, see Planning for 1024 LUNs on an FC or SATA Array (Optional, Loop Mode Only).




Caution - If you modify the size of a partition or logical drive, all data on the drive is lost.





Note - If you plan to map hundreds of LUNs, the process is easier if you use Sun StorEdge Configuration Service. Refer to the Sun StorEdge 3000 Family Configuration Service User's Guide for more information.



  FIGURE 5-1 Partitions in Logical Drives

Diagram shows logical drive 0 with three partitions and logical drive 1 with three partitions.

To Partition a Logical Drive (Optional)




Caution - Make sure any data that you want to save on this partition has been backed up before you partition the logical drive.



1. From the Main Menu, choose "view and edit Logical drives."

2. Select the logical drive you want to partition.

3. Choose "Partition logical drive."

If the logical drive has not already been partitioned, the following warning is displayed:

This operation may result in the LOSS OF ALL DATA on the Logical Disk.

Partition Logical Drive?


4. Choose Yes to continue.

A list of the partitions on this logical drive is displayed. If the logical drive has not yet been partitioned, all the logical drive capacity is listed as "partition 0."

5. Select a partition.

6. Type the desired size of the selected partition.

The following warning is displayed:

This operation will result in the LOSS OF ALL DATA on the partition.
Partition Logical Drive?


7. Choose Yes to partition the drive.

The remaining capacity of the logical drive is automatically allocated to the next partition. In the following example, a partition size of 20000 Mbyte was entered; the remaining storage of 20000 Mbyte is allocated to the partition below the newly created partition.

 Screen capture shows the partition allocation with A 20000 MB partition and the remaining 20000 MB storage allocated to the partition below.

8. Repeat Step 5 through Step 7 to partition the remaining capacity of your logical drive.

For information on deleting a partition, see Deleting a Logical Drive Partition.


Mapping a Partition to a Host LUN

A partition is a division of the logical drive that appears as a physical drive to any host that has access to that partition. You can create a maximum of 32 partitions per logical drive. So that host bus adapters (HBAs) recognize the partitions when the host bus is reinitialized, each partition must be mapped to a host LUN (logical unit number). Two methods can be used to map a partition to a host:



Note - When you modify a partition, you must first unmap the LUN.





Note - If you plan to map 128 or more LUNs, the process is easier if you use Sun StorEdge Configuration Service. Refer to the Sun StorEdge 3000 Family Configuration Service User's Guide for more information.



LUN Mapping

Map a partition to a LUN on a host channel to create a connection between that host channel and the partition. Note that with LUN mapping, all hosts on the mapped host channel have full access to all partitions mapped to LUNs on that channel. To provide redundant connections between a host and a partition, map the partition to a LUN on both of the host channels that connect with that host.
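For example (the IDs shown are illustrative): mapping logical drive 0 partition 0 to LUN 0 on channel 0 ID 40 and also to LUN 0 on channel 1 ID 41 gives a host connected to both channels two redundant paths to the same partition.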

With LUN mapping, only one partition can be mapped to each LUN. To assign multiple partitions to the same LUN, use LUN filtering rather than LUN mapping. LUN mapping is most effective when only one host is connected to a host channel.

Channel IDs represent the physical connection between the HBA and the array. The host ID is an identifier assigned to the channel so that the host can identify LUNs. The following figure shows the relationship between a host ID and a LUN.

  FIGURE 5-2 LUNs Resemble Drawers in a File Cabinet

Diagram shows the ID as a file cabinet and its LUNs as file drawers.

The ID is like a cabinet and the drawers are like the LUNs.

The following figure illustrates mapping partitions to host ID/LUNs.

  FIGURE 5-3 Mapping Partitions to Host ID/LUNs

Diagram shows LUN partitions mapped to ID 0 on Channel 1 and to ID 1 on Channel 3.

For detailed instructions for LUN mapping, see To Map a Logical Drive Partition.

LUN Filtering (FC and SATA Only)

For multiple servers connected to the same FC array, LUN filtering provides an exclusive path from a server to a logical drive and essentially hides or excludes the other connected servers from seeing or accessing the same logical drive. That is, the LUN filter organizes how the array devices are accessed and viewed from host devices, and typically maps an array device to only one host so that other hosts do not access and use the same array device.

LUN filtering also enables multiple hosts to be mapped to the same LUN, allowing different servers to have their own LUN 0 to boot from, if needed. Even though host filters are created on the same LUN, each host filter can provide individual hosts exclusive access to a different partition, and even access to partitions on different logical drives. Host filters can also grant different levels of access to different hosts. LUN filtering is also valuable for clarifying mapping in configurations where each HBA would otherwise see twice the number of logical drives because it views them through a hub.

Each Fibre Channel device is assigned a unique identifier called a worldwide name (WWN). A WWN is assigned by the IEEE and is similar to a MAC address in IP or a URL on the Internet. These WWNs stay with the device for its lifetime. LUN filtering uses this WWN to specify which server is to have exclusive use of a specific logical drive.

As shown in the following example, when you map LUN 01 to host channel 0 and select WWN1, server A has a proprietary path to that logical drive. All servers continue to see and access LUN 02 and LUN 03 unless filters are created on them.

  FIGURE 5-4 Example of LUN Filtering

Diagram shows multiple hosts with access to the same LUNs where LUN filtering creates exclusive paths from a server to a specific LUN.


Note - It is possible to see differing information when a fabric switch queries the WWN of an array. When the RAID controller does a Fibre Channel fabric login to a switch, during the fabric login process the switch obtains the WWN of the RAID controller. In this case, the switch displays the company name. When the switch issues an inquiry command to a mapped LUN on the array, the switch obtains the company name from the inquiry data of the LUN. In this case, the switch displays Sun StorEdge 3510 FC array or Sun StorEdge 3511 SATA array, which is the inquiry data returned by the RAID controller.



Prior to using the LUN filter feature, identify which array is connected to which HBA card, and the WWN assigned to each card. This procedure varies according to the HBA you are using. Refer to the Sun StorEdge 3000 Family Installation, Operation and Service Manual for your array for instructions on identifying the WWN for your host.
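As a hedged illustration (HBA utilities vary, and the device path shown here is hypothetical), two commands commonly used to list HBA WWNs on a Solaris host are:

# luxadm -e dump_map /dev/cfg/c1
# prtconf -vp | grep -i wwn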

For detailed instructions for creating host filters, see To Create Host Filters (FC and SATA Arrays Only).



Note - You can create a maximum of 128 host filters. You can create a maximum of 64 WWNs.





Note - The process of creating host filters is easier using Sun StorEdge Configuration Service.




To Map a Logical Drive Partition

1. From the Main Menu, choose "view and edit Host luns."

A list of available channels, IDs, and their associated controllers is displayed.

2. Select a channel and host ID on the primary controller.

3. If the Logical Drive and Logical Volume menu options are displayed, choose "Logical Drive" to display the LUN table.

4. Select the LUN you want to map the drive to.

A list of available logical drives is displayed.

5. Select the logical drive (LD) that you want to map to the selected LUN.

The partition table is displayed.

6. Select the partition you want to map to the selected LUN.

 Screen capture shows the partition table with Partition 0 selected.

7. Choose "Map Host LUN," and then choose Yes to confirm the host LUN mapping.

 Screen capture shows Confirm Mapping Scheme prompt.

The partition is now mapped to the selected LUN.

 Screen capture shows partition 0 mapped to LUN 0.

8. Repeat Step 4 through Step 7 to map additional partitions to host LUNs on this channel and logical drive.

9. Press Escape.

10. If you are mapping LUNs in a redundant configuration, repeat Step 2 through Step 7 to map partitions to host LUNs with other IDs on the logical drive assigned to the primary controller.

When you map a partition to two channels in a redundant configuration, the number in the Partition column of the partition table displays an asterisk (*) to indicate that the partition is mapped to two LUNs.



Note - If you are using host-based multipathing software, map each partition to two or more host IDs so multiple paths will be available from the partition to the host.



11. Repeat Step 2 through Step 10 to map hosts to the secondary controller.

12. To verify unique mapping of each LUN (unique LUN number, unique DRV number, or unique Partition number):

a. From the Main Menu, choose "view and edit Host luns."

b. Select the appropriate controller and ID and press Return to review the LUN information.

A mapped LUN displays a number and a filtered LUN displays an "M" for masked LUN in the host LUN partition window.

13. When all host LUNs have been mapped, save the updated configuration to nonvolatile memory. See Saving Configuration (NVRAM) to a Disk for more information.

14. (Solaris operating system only) For the Solaris operating system to recognize a LUN, you must first manually write the label using the Auto configure option of the format (1M) utility, as described in To Label a LUN.


To Create Host Filters (FC and SATA Arrays Only)

1. From the Main Menu, choose "view and edit Host luns."

A list of available channels and their associated controllers is displayed.

2. Select a channel and host ID.

3. If the Logical Drive and Logical Volume menu options are displayed, choose Logical Drive.

4. Select the LUN for which you want to create the host filter.

A list of available logical drives is displayed.

5. Select the logical drive (LD) for which you want to create a host filter.

6. Select the partition for which you want to create a host filter.

7. Choose "Create Host Filter Entry right arrow Add from current device list."

 Screen capture shows "Add from current device list" selected.

This step automatically performs a discovery of the attached HBAs and displays a list of WWNs. This list includes:

When you select a worldwide name from this list, ensure that the worldwide name you select is from an HBA on the channel where you are creating the filter.

Alternatively, you can add a worldwide name manually by choosing "Manually add host filter entry" rather than "Add from current device list." Then type the Host-ID/WWN in the text area provided and press Return. When you manually enter a worldwide name using the "Manually add host filter entry" menu option, that WWN only appears in the list of WWNs when you are creating a filter on a channel where the WWN was initially added.

8. From the device list, select the WWN number of the server for which you are creating a filter, and choose Yes to confirm your choice.

A filter configuration screen displays the filter you are creating.

 Screen capture shows the WWN number confirmation screen with "Yes" selected.

9. Review the filter configuration screen. Make any changes necessary by selecting the setting you want to change.

 Screen capture shows the host filter for "Logical Drive 0 Partition 0."

a. To edit the WWN, use the arrow keys to select "Host-ID/WWN." Type the desired changes, and press Return.

 Screen capture shows the Host ID edit dialog.

Be sure that you edit the WWN correctly. If the WWN is incorrect, the host will be unable to recognize the LUN.

b. To edit the WWN Mask, use the arrow keys to select "Host-ID/WWN Mask." Type the desired changes, and press Return.

 Screen capture shows Host Mask edit dialog.

c. To change the filter setting, select "Filter Type -," and choose Yes to exclude or include the Host-ID/WWN selection.

Choose "Filter Type to Include" to grant LUN access to the host identified by the WWN and WWN Mask. Choose "Filter Type to Exclude" to deny the identified host LUN access.

 Screen capture shows the Filter Type confirmation screen with "set Filter Type to Exclude? prompt displayed with "Yes" selected.


Note - If no host has been granted access to the selected LUN (by having its Filter Type set to Include), all hosts can access that LUN. In this configuration, you can deny specific hosts access to that LUN by configuring their Filter Type to Exclude. Once any host is granted access to a LUN, only hosts with explicit access (Filter Type set to Include) can access that LUN.



d. To change the access mode, which assigns Read-Only or Read/Write privileges, select "Access mode -," and choose Yes to confirm the assignment.

 Screen capture shows "set Access Mode to Read-Only? prompt displayed with "Yes" selected.

e. To set a name for the filter, select "Name -." Type the name you want to use and press Return.

10. Verify all settings and press Escape to continue.

 Screen capture shows the current host filter settings.

11. Verify all filter entries and press Escape.

12. Choose Yes to add the host filter entry.

 Screen capture shows a confirmation screen with "Add Host Filter Entry?" displayed and "Yes" selected.


Note - Unlike most firmware operations, where you must complete each entry individually and repeat the procedure if you want to perform a similar operation, you can add multiple WWNs to your list before you actually complete the host filter entry in Step 14.



13. At the server list, repeat the previous steps to create additional filters, or press Escape to continue.

 Screen capture shows the server WWN number selected.

14. Choose Yes to complete the host LUN filter entry.

 Screen capture shows settings listed in the confirmation screen with "Yes" selected.

A mapped LUN displays a number. A filtered LUN displays an "M" for "masked LUN" in the LUN column.

 Screen capture shows a filtered LUN indicated by an "M" for "masked" in the LUN column.


Labeling a LUN (Solaris Operating System Only)

For the Solaris operating system to recognize a LUN, you must first manually write the label using the Auto configure option of the format (1M) command.


To Label a LUN

1. On the data host, type format at the root prompt.

# format

2. Specify the disk number when prompted.

3. Type Y at the following prompt, if it is displayed, and press Return:

Disk not labeled. Label it now? Y

The Solaris operating system's Format menu is displayed.

4. Type type to select a drive type.

5. Type 0 to choose the Auto configure menu option.

Choose the Auto configure menu option regardless of which drive types are displayed by the type option.

6. Type label and press Y when prompted to continue.

format> label
Ready to label disk, continue? y

7. Type quit to finish using the Format menu.
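A condensed example of the complete session follows (the disk number is hypothetical, and menu output that varies by system is summarized in parentheses):

# format
Specify disk (enter its number): 1
format> type
        (choose option 0, the Auto configure menu option)
format> label
Ready to label disk, continue? y
format> quit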


Creating Solaris Operating System Device Files for Newly Mapped LUNs

Perform the following procedure to create device files for newly mapped LUNs on hosts running the Solaris 8 and Solaris 9 operating systems.

For additional operating system information, see the Installation, Operation, and Service manual for your Sun StorEdge 3000 family array.


To Create Device Files for Newly Mapped LUNs

1. To create device files, type:

# /usr/sbin/devfsadm -v 

2. To display the new LUNs, type:

# format

3. If the format command does not recognize the newly mapped LUNs, perform a reconfiguration reboot on the host:

# reboot -- -r


Saving Configuration (NVRAM) to a Disk

The controller configuration information is stored in non-volatile RAM (NVRAM). When you save it, the information is stored in the disk reserved space of all drives that have been configured into logical drives. Back up the controller configuration information whenever you change the array's configuration.

Saving NVRAM controller configuration to a file provides a backup of controller configuration information such as channel settings, host IDs, and cache configuration. It does not save LUN mapping information. The NVRAM configuration file can restore all configuration settings but does not rebuild logical drives.



Note - A logical drive must exist for the controller to write NVRAM content onto it.




To Save a Configuration to NVRAM

Choose "system Functions right arrow Controller maintenance right arrow Save nvram to disks," and choose Yes to save the contents of NVRAM to disk.

A prompt confirms that the NVRAM information has been successfully saved.

To restore the configuration, see Restoring Your Configuration (NVRAM) From Disk.

If you want to save and restore all configuration data, including LUN mapping information, use Sun StorEdge Configuration Service or the Sun StorEdge CLI in addition to saving your NVRAM controller configuration to disk. The information saved this way can be used to rebuild all logical drives and therefore can be used to completely duplicate an array configuration to another array.

Refer to the Sun StorEdge 3000 Family Configuration Service User's Guide for information about the "save configuration" and "load configuration" features. Refer to the sccli man page or to the Sun StorEdge 3000 Family CLI User's Guide for information about the reset nvram and download controller-configuration commands.
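For reference, a hedged sketch of how these two CLI commands might be invoked from the sccli prompt (the file name is hypothetical; verify the exact syntax against the sccli man page or the CLI User's Guide):

sccli> reset nvram
sccli> download controller-configuration /var/tmp/array-config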