C H A P T E R  4

First-Time Configuration for SCSI Arrays

The Sun StorEdge 3310 SCSI array and Sun StorEdge 3320 SCSI array are each preconfigured with a single RAID 0 logical drive mapped to LUN 0, and no spare drives. This is not a working configuration. Unmap and delete this logical drive, using the procedure in To Unmap and Delete a Logical Drive, and replace it with logical drives that suit your requirements.

This chapter shows you how to configure your array for the first time or to reconfigure it, and describes the normal sequence of events you follow to configure an array.

Before configuring your array, carefully read chapters 1, 2, and 3.



Note - As you perform the operations described in this and other chapters, you might periodically see event messages pop up on the screen. To dismiss an event message after you have read it, press Escape. To prevent event messages from displaying, so that you can read them only by displaying the event message log, press Ctrl-C. You can press Ctrl-C again at any time to re-enable pop-up display of event messages. See Viewing Event Logs on the Screen for more information about event messages.




Existing Logical Drive Configuration

If you are configuring your array for the first time, there is no need to review the existing configuration before you delete it.

If you are reconfiguring logical drives, it is a good idea to view the existing logical drive configuration to determine its status and any changes you want to make to the RAID level, size, number of physical drives that make up a selected logical drive, and spare drives. You also need to view the channel configuration to determine whether you want to make any changes to the channel mode and channel host IDs.


procedure icon  To View the Logical Drive Configuration

1. From the Main Menu, choose "view and edit Logical drives" to display the Logical Drive Status Table.

For a description of this table's categories, see Logical Drive Status Table.

 Screen capture shows logical drive configuration.

2. Note the changes you want to make to the existing configuration.


procedure icon  To View the Channel Configuration

1. From the Main Menu, choose "view and edit channelS" to display the Channel Status Table.

For a description of this table's categories, see Channel Status Table.

 Screen capture shows channel configuration.

2. Note changes you want to make to the existing configuration.


Deleting Logical Drives

To assign a different RAID level or a different set of drives to a logical drive, or to change local spare drives, you must first unmap and delete the logical drive, and then create a new logical drive.



caution icon

Caution - This operation erases all data on the logical drive. Therefore, if any data exists on the logical drive, copy it to another location or back it up before it is deleted.





Note - You can delete a logical drive only if it has first been unmapped.




procedure icon  To Unmap and Delete a Logical Drive

1. From the Main Menu, choose "view and edit Host luns" to display a list of channel and host IDs.

2. Choose a channel and host ID combination from the list.

The host LUNs mapped to that channel and host ID are displayed. You might need to scroll through the list to see all of them.

3. Select a host LUN and choose Yes to unmap the host LUN from the channel/host ID.

 Screen capture shows Unmap Host Lun dialog with Yes selected.

4. Repeat Step 3 to unmap all remaining host LUNs that are mapped to the logical drive you want to delete.

5. Press Escape to return to the Main Menu.

6. From the Main Menu, choose "view and edit Logical drives."

7. Select the logical drive that you unmapped and want to delete.

8. Choose "Delete logical drive" and, if it is safe to delete the logical drive, choose Yes to confirm the deletion.


Cache Optimization Mode (SCSI)

Before creating any logical drives, determine the appropriate optimization mode for the array. The type of application accessing the array determines whether to use sequential or random optimization. See Cache Optimization Mode and Stripe Size Guidelines for a detailed description of sequential and random optimization.



Note - Due to firmware improvements beginning with version 4.11, sequential optimization yields better performance than random optimization for most applications and configurations. Use sequential optimization unless real-world tests in your production environment show better results for random optimization.



If you are modifying an existing configuration and do not want to delete your existing logical drives, verify your optimization mode but do not change it.


procedure icon  To Verify the Optimization Mode

1. From the Main Menu, choose "view and edit Configuration parameters right arrow Caching Parameters."

Sequential I/O is the default optimization mode.

 Screen capture showing submenu with "Optimization for Sequential I/O" chosen.

2. To accept the optimization mode that is displayed, press Escape.


procedure icon  To Change the Optimization Mode

Once logical drives are created, you cannot use the RAID firmware to change the optimization mode without deleting all logical drives. You can, however, use the Sun StorEdge CLI set cache-parameters command to change the optimization mode while logical drives exist. Refer to the Sun StorEdge 3000 Family CLI 2.0 User's Guide for more information.
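
For example, from a Solaris host the CLI approach might look like the following sketch. The device file shown and the argument form of the set cache-parameters command are assumptions for illustration only; check the sccli man page or the Sun StorEdge 3000 Family CLI 2.0 User's Guide for the exact syntax.

# sccli /dev/rdsk/c1t0d0s2 set cache-parameters sequential

In this sketch, /dev/rdsk/c1t0d0s2 stands for the device file of any LUN on the array (an array IP address can typically be used instead), and sequential stands for the desired optimization mode.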

If you have not deleted all logical drives, a notice informs you of this requirement and you cannot change the optimization mode. See Deleting Logical Drives for the procedure to delete logical drives.

1. From the Main Menu, choose "view and edit Configuration parameters right arrow Caching Parameters" to display the current optimization mode.

2. Select "Optimization for Sequential I/O" or "Optimization for Random I/O" as applicable.


3. Choose Yes to change the Optimization mode from Sequential I/O to Random I/O, or from Random I/O to Sequential I/O.

You are prompted to reset the controller:

NOTICE: Change made to this setting will NOT take effect until all Logical Drives are deleted and then the controller is RESET. Prior to resetting the controller, operation may not proceed normally.
 
Do you want to reset the controller now ?

4. Choose Yes to reset the controller.

If you do not reset the controller now, the optimization mode remains unchanged.


Physical Drive Status

Before configuring physical drives into a logical drive, you must determine the availability of the physical drives in your enclosure. Only drives with a status of FRMT DRV are available.



Note - A drive that does not show a status of FRMT DRV needs to have reserved space added. See Changing Disk Reserved Space for more information.




procedure icon  To Check Physical Drive Availability

1. From the Main Menu, choose "view and edit Drives" to display a list of all installed physical drives.

 Screen capture shows the physical drives status window accessed with the "view and edit Scsi drives" command.

2. Use the arrow keys to scroll through the table and check that all installed drives are listed.

When the power is initially turned on, the controller scans all installed physical drives that are connected through the drive channels.



Note - If a drive is installed but is not listed, it might be defective or installed incorrectly. If a physical drive was connected after the controller completed initialization, use the "Scan scsi drive" menu option to enable the controller to recognize the newly added physical drive and to configure it. See To Scan a New SCSI Drive for information about scanning a new SCSI drive.



3. To view more information about a drive:

a. Select the drive.

b. Choose "View drive information."

 Screen capture shows "View drive information" selected.

Additional information is displayed about the drive you selected.

 Screen capture shows drive selected with additional information displayed.


Channel Settings

The Sun StorEdge 3310 SCSI array and Sun StorEdge 3320 SCSI array are preconfigured with the channel settings shown in Default Channel Configurations. Follow the procedures for configuring a channel mode if you plan on adding a host connection or expansion unit. To make changes to channel host IDs, follow the procedures for adding or deleting a host ID.

Configuring Channel Mode

When configuring the channel mode, the following rules apply:



Note - RCCOM provides the communication channels by which two controllers in a redundant RAID array communicate with one another. This communication enables the controllers to monitor each other, and includes configuration updates and control of cache. By default, channel 6 is configured as RCCOM.




procedure icon  To Configure the Channel Mode

1. From the Main Menu, choose "view and edit channelS" to display the Channel Status Table.

 Screen capture shows the Channel Status table.

2. Select the channel that you want to modify to display a menu of channel options.

 Screen capture shows the menu of channel options with "channel Mode" selected.

3. Choose "channel Mode" to change the channel from host to drive, or drive to host, and then choose Yes to confirm the mode change.

This change does not take effect until the controller is reset.

NOTICE: Change made to this setting will NOT take effect until the controller is RESET. Prior to resetting the controller, operation may not proceed normally. Do you want to reset the controller now?

4. Choose Yes to reset the controller.

Host Channel IDs

Host channel IDs identify the controller to the host. Some applications require that specific IDs be assigned to host channels in order to recognize the array. Sun StorEdge 3310 SCSI array and Sun StorEdge 3320 SCSI array default host channel IDs are shown in TABLE 3-1 under Default Channel Configurations.

Each host ID can have up to 32 partitions mapped to it as LUNs, up to a total of 128 LUNs per array. The default host channel ID settings enable you to map up to a total of 64 LUNs. To map up to 128 LUNs, you must add host IDs: at least four host IDs are required (4 IDs x 32 LUNs = 128), and no more than six host IDs are supported.

For details on mapping 128 LUNs, refer to Planning for 128 LUNs on a SCSI Array (Optional).

Each host channel has a unique primary and secondary ID available. You can add or delete host IDs on each channel, as described in the following procedure.



Note - Channel ID values range from 0 to 15.




procedure icon  To Add or Delete a Unique Host ID



Note - To change an ID, you must delete the old ID first and then add the new ID.



1. From the Main Menu, choose "view and edit channelS."

 Screen capture shows the "view and edit channelS" menu option selected and its status table displaying the channel information.

2. Select the host channel on which you want to add an ID.

3. Choose "view and edit scsi Id."

If host IDs have already been configured on the host channel, they will be displayed. If no host IDs have been configured, the following message is displayed.

No SCSI ID Assignment - Add Channel SCSI ID?

4. If a host ID has already been assigned to that channel, select an ID and press Return to view a menu for adding or deleting SCSI IDs.

5. To add an ID, select "Add Channel SCSI ID." To delete an ID, select "Delete Channel SCSI ID."

6. If adding an ID, select a controller from the list to display a list of SCSI IDs. If deleting an ID, select Yes to delete the ID.

7. If adding an ID, select an ID from the list, and then choose Yes to confirm the addition.

 

8. If you are changing only one channel ID, choose Yes in response to the following confirmation message to reset the controller.

NOTICE: Change made to this setting will NOT take effect until the controller is RESET. Prior to resetting the controller, operation may not proceed normally. Do you want to reset the controller now?

9. If you are changing more than one channel ID, do not reset the controller until all IDs are changed.

The configuration change takes effect only after the controller is reset.


Creating Logical Drives

The RAID array is preconfigured with one RAID 0 logical drive as described in Default Logical Drive Configuration. Each logical drive consists of a single partition by default.

This section describes how to modify the RAID level or add logical drives. In these procedures, you configure a logical drive to contain one or more physical drives based on the desired RAID level, and divide the logical drive into additional partitions.



Note - Depending on the size and RAID level, it can take several hours to build a logical drive. On-line initialization, however, enables you to begin configuring and using the logical drive before initialization is complete.



If you do not use on-line initialization, be sure to allow enough time when you create logical drives. Creating a 2-Tbyte RAID 5 logical drive, for example, can take several hours.

Preparing for Logical Drives Larger Than 253 Gbyte

The Solaris operating system requires drive geometry for various operations, including newfs. For the appropriate drive geometry to be presented to the Solaris operating system for logical drives larger than 253 Gbyte, use the settings shown in TABLE 4-1. These settings cover all logical drives over 253 Gbyte and also work for smaller configurations. The controller automatically adjusts the sector count so the operating system can read the correct drive capacity.

For Solaris operating system configurations, use the values in the following table.

TABLE 4-1 Cylinder and Head Mapping for the Solaris Operating System

Logical Drive Capacity    Cylinder             Head            Sector
< 253 GB                  < 65536 (default)    variable        variable (default)
253 GB - 1 TB             < 65536 (default)    64 (default)    variable (default)


See Host Cylinder/Head/Sector Mapping Configuration for more information. See To Change Cylinder and Head Settings for instructions on how to apply these settings.

After settings are changed, they apply to all logical drives in the chassis.



Note - Refer to your operating system documentation for limitations on device sizes.




procedure icon  To Change Cylinder and Head Settings

1. Choose "view and edit Configuration parameters right arrow Host-side Parameters right arrow Host Cylinder/Head/Sector Mapping Configuration right arrow Sector Ranges - right arrow Variable," and then choose Yes to confirm your choice.

2. Choose "Head Ranges - right arrow 64 Heads," and then choose Yes to confirm your choice.

3. Choose "Cylinder Ranges - right arrow < 65536," and then choose Yes to confirm your choice.


procedure icon  To Create a Logical Drive



Note - To reassign drives and add local or global spare drives to the preconfigured array, you must first unmap and then delete the existing logical drives. For more information about deleting a logical drive, see Deleting Logical Drives.



1. From the Main Menu, choose "view and edit Logical drives."

Unassigned logical drives show a RAID level of NONE.

2. Select the first available unassigned logical drive (LG).

 Screen capture shows the logical drive status window with unassigned logical drive 1 selected.

You can create as many as 16 logical drives using physical drives on any channel.

3. When prompted to Create Logical Drive? choose Yes to confirm your choice and display a pull-down list of supported RAID levels.

4. Select a RAID level from the list to assign to the logical drive.



Note - RAID 5 is used as an example in the following steps.



 


Note - NRAID does not provide data redundancy. The NRAID option that appears in some firmware menus does not provide the protection of other RAID levels and is rarely used.



Screen capture shows RAID levels menu with "RAID 5" selected.

For more information about RAID levels, see RAID Levels.

5. Select the drives you want to include in the logical drive from the list of available physical drives, using the steps below.

You must select at least the minimum number of drives required for the selected RAID level.

For redundancy, you can create a logical drive containing drives distributed over separate channels. You can then create several partitions on each logical drive. In a RAID 1 or RAID 0+1 configuration, the order in which you select the physical drives for a logical drive determines the channels to which the physical drives are assigned, so if you want drives to be mirrored over two channels, select them in the appropriate order.

a. Use the up and down arrow keys and press Return to select the drives you want to include in the logical drive.

An asterisk mark (*) is displayed in the Chl (Channel) column of each selected physical drive.

 Screen capture shows a list of available physical drives with three selected drives marked with an asterisk.

b. To deselect a drive, press Return again on the selected drive.

The asterisk marking that drive disappears.

c. After all physical drives have been selected for the logical drive, press Escape to display a menu of logical drive options.

Several optional menu options are displayed. You can choose these menu options to define aspects of the logical drive you are creating, such as the maximum drive capacity, spare drives, controller assignment, write policy, initialization mode, and stripe size.

These menu options are described in the remainder of this section.

6. (Optional) Set the maximum logical drive capacity, using the following procedure:

a. Choose "Maximum Drive Capacity."



Note - Changing the maximum drive capacity reduces the size of the logical drive and leaves some disk space unused.



b. Type in the maximum capacity of each physical drive that makes up the logical drive you are creating.

 Screen capture shows a logical drive with Maximum Available Drive Capacity of 34476 Mbyte and Maximum Drive Capacity configured to 20000 Mbyte.

A logical drive should be composed of physical drives with the same capacity. A logical drive can only use the capacity of each drive up to the maximum capacity of the smallest drive.

7. (Optional) Add a local spare drive from the list of unused physical drives by following these steps:

a. Choose "Assign Spare Drives" to display a list of available physical drives you can use as a local spare.



Note - A global spare cannot be created while creating a logical drive.





Note - A logical drive created in NRAID or RAID 0, which has no data redundancy or parity, does not support spare drive rebuilding.



The spare chosen here is a local spare and will automatically replace any disk drive that fails in this logical drive. The local spare is not available for any other logical drive.

b. Select a physical drive from the list to use as a local spare.

 Screen capture shows a list of unused drives with the top drive, ID 8, selected.

c. Press Escape to return to the menu of logical drive options.



Note - The Disk Reserved Space option is not supported while you are creating logical drives.



If you use two controllers for a redundant configuration, you can assign a logical drive to either of the controllers to balance the workload. By default, all logical drives are assigned to the primary controller.

Logical drive assignments can be changed later, but that operation requires a controller reset to take effect.

8. (Optional) For dual-controller configurations, you can assign this logical drive to the secondary controller by following these steps:



caution icon

Caution - In single-controller configurations, assign logical drives only to the primary controller.



a. Choose "Logical Drive Assignments."

A confirmation message is displayed.

 Screen capture showing the Assign Logical Drive to Redundant Controller prompt.

b. Choose Yes to assign the logical drive to the redundant controller.

9. (Optional) Configure the logical drive's write policy.

Write-back cache is the preconfigured global logical drive write policy, which is specified on the Caching Parameters submenu. (See Enabling and Disabling Write-Back Cache for the procedure on setting the global caching parameter.) This option enables you to assign a write policy per logical drive that is either the same as or different from the global setting. Write policy is discussed in more detail in Cache Write Policy Guidelines.

a. Choose "Write Policy -."



Note - The Default write policy displayed is the global write policy assigned to all logical drives.



The following write policy options are displayed: Default (the current global setting), Write-Back, and Write-Through.

As described in Cache Write Policy Guidelines, the array can be configured to dynamically switch write policy from write-back cache to write-through cache if specified events occur. Write policy is switched automatically only for logical drives whose write policy is configured as Default. See Event Trigger Operations for more information.

b. Choose a write policy option.

 Screen capture showing write policy options with Write Back selected.


Note - You can change the logical drive's write policy at any time, as explained in Changing Write Policy for a Logical Drive.



10. (Optional) Set the logical drive initialization mode by choosing "Initialize Mode" from the list of logical drive options, and then choosing Yes to change the initialization mode.

The assigned initialization mode is displayed in the list of logical drive options.

You can choose between two logical drive initialization options: on-line and off-line.

On-line initialization enables you to configure and use the logical drive before initialization is complete. Because the controller is building the logical drive while also performing I/O operations, initializing a logical drive on-line requires more time than off-line initialization.

Off-line initialization enables you to configure and use the drive only after initialization is complete. Because the controller builds the logical drive without having to also perform I/O operations, off-line initialization requires less time than on-line initialization.

Because logical drive initialization can take a considerable amount of time, depending on the size of your physical disks and logical drives, you can choose on-line initialization so that you can use the logical drive before initialization is complete.

11. (Optional) Configure the logical drive stripe size.

Depending on the optimization mode selected, the array is configured with the default stripe sizes shown in Cache Optimization Mode and Stripe Size Guidelines. When you create a logical drive, however, you can assign a different stripe size to that logical drive.



Note - Default stripe sizes result in optimal performance for most applications. Selecting a stripe size that is inappropriate for your optimization mode and RAID level can decrease performance significantly. For example, smaller stripe sizes are ideal for I/O that is transaction-based and randomly accessed. But when a logical drive configured with a 4-Kbyte stripe size receives 128-Kbyte files, each file must be broken into 32 fragments (128 Kbyte ÷ 4 Kbyte = 32), so each physical drive has to perform many more writes to store the data. Change stripe size only when you are sure it will result in performance improvements for your particular applications.



See Cache Optimization Mode and Stripe Size Guidelines for more information.



Note - Once a logical drive is created, its stripe size cannot be changed. To change the stripe size, you must delete the logical drive, and then recreate it using the new stripe size.



a. Choose Stripe Size.

A menu of stripe size options is displayed.

b. Choose Default to assign the stripe size per Optimization mode, or choose a different stripe size from the menu.

Default stripe size per optimization mode is shown in Cache Optimization Mode and Stripe Size Guidelines.

The selected stripe size is displayed in the list of logical drive options.

12. Once all logical drive options have been assigned, press Escape to display the settings you have chosen.
Screen capture shows the Create Logical Drive confirmation window displayed with "Yes" selected.

13. Verify that all the information is correct, and then choose Yes to create the logical drive.



Note - If the logical drive has not been configured correctly, select No to return to the logical drive status table so you can configure the drive correctly.



Messages indicate that the logical drive initialization has begun, and then that it has completed.

14. Press Escape to close the drive initialization message.

A progress bar displays the progress of initialization as it occurs.

You can press Escape to remove the initialization progress bar and continue working with menu options to create additional logical drives. The percentage of completion for each initialization in progress is displayed in the upper left corner of the window as shown in the following example screen.

 Screen capture shows the initialization in progress displayed in the upper left corner of the window.

The following message is displayed when the initialization is completed:

 Screen capture shows the notification that initialization of the logical drive is complete.

15. Press Escape to dismiss the notification.

The newly created logical drive is displayed in the status window.

 Screen capture shows the logical drive status window with second created logical drive (S1) selected.

Controller Assignment

By default, logical drives are automatically assigned to the primary controller. If you assign half of the logical drives to the secondary controller in a dual-controller array, performance improves somewhat because the traffic is redistributed between the controllers.

To balance the workload between both controllers, you can distribute your logical drives between the primary controller (displayed as the Primary ID or PID) and the secondary controller (displayed as the Secondary ID or SID).



caution icon

Caution - In single-controller configurations, do not set the controller as a secondary controller. The primary controller controls all firmware operations, and the single controller must retain its primary controller assignment. If you disable the Redundant Controller function in a single-controller configuration and reconfigure the controller with the Autoconfigure option or as a secondary controller, the controller module becomes inoperable and must be replaced.



After a logical drive has been created, it can be assigned to the secondary controller. Then the host computer associated with the logical drive can be mapped to the secondary controller (see Mapping a Partition to a Host LUN).


procedure icon  To Change a Controller Assignment (Optional)



caution icon

Caution - In single-controller configurations, assign logical drives only to the primary controller.



1. From the Main Menu, choose "view and edit Logical drives."

2. Select the drive you want to reassign.

3. Choose "logical drive Assignments," and then choose Yes to confirm the reassignment.

The reassignment is evident from the "view and edit Logical drives" screen. A "P" in front of the LG number, such as "P0," means that the logical drive is assigned to the primary controller. An "S" in front of the LG number means that the logical drive is assigned to the secondary controller.

Logical Drive Name

You can assign a name to each logical drive. These logical drive names are used only in RAID firmware administration and monitoring and do not appear anywhere on the host. After you assign a drive name, you can change it at any time.


procedure icon  To Assign a Logical Drive Name (Optional)

1. From the Main Menu, choose "view and edit Logical drives."

2. Select a logical drive.

3. Choose "logical drive Name."

4. Type the name you want to give the logical drive in the New Logical Drive Name field and press Return to save the name.

 Screen capture shows "Logical Drive name:" prompt displayed and "New Name" entered in the New Logical Drive Name field.


Partitions

You can divide a logical drive into several partitions, or use the entire logical drive as a single partition. You can configure up to 32 partitions per logical drive, and up to 128 LUN assignments per array. For guidelines on setting up 128 LUNs, see Planning for 128 LUNs on a SCSI Array (Optional).



caution icon

Caution - If you modify the size of a partition or logical drive, all data on the drive is lost.





Note - If you plan to map hundreds of LUNs, the process is easier if you use Sun StorEdge Configuration Service. Refer to the Sun StorEdge 3000 Family Configuration Service User's Guide for more information.



  FIGURE 4-1 Partitions in Logical Drives

Diagram shows logical drive 0 with three partitions and logical drive 1 with three partitions.

procedure icon  To Partition a Logical Drive (Optional)



caution icon

Caution - Make sure any data that you want to save on this partition has been backed up before you partition the logical drive.



1. From the Main Menu, choose "view and edit Logical drives."

2. Select the logical drive you want to partition.

3. Choose "Partition logical drive."

If the logical drive has not already been partitioned, the following warning is displayed:

This operation may result in the LOSS OF ALL DATA on the Logical Disk.

Partition Logical Drive?


4. Choose Yes to confirm.

A list of the partitions on this logical drive is displayed. If the logical drive has not yet been partitioned, all the logical drive capacity is listed as "partition 0."

5. Select a partition.

A Partition Size dialog is displayed.

6. Type the desired size of the selected partition.

The following warning is displayed:

This operation will result in the LOSS OF ALL DATA on the partition.
Partition Logical Drive?


7. Choose Yes to confirm.

The remaining capacity of the logical drive is automatically allocated to the next partition. In the following example, a partition size of 20000 Mbyte was entered; the remaining storage of 20000 Mbyte is allocated to the partition below the newly created partition.
Screen capture shows the partition allocation with A 20000 MB partition and the remaining 20000 MB storage allocated to the partition below.

8. Repeat Step 5 through Step 7 to partition the remaining capacity of your logical drive.

For information on deleting a partition, see Deleting a Logical Drive Partition.

Mapping a Partition to a Host LUN

A partition is a division of the logical drive that appears as a physical drive to any host that has access to that partition. For Sun StorEdge 3310 SCSI arrays and Sun StorEdge 3320 SCSI arrays, you can create a maximum of 32 partitions per logical drive. So that host bus adapters (HBAs) recognize the partitions when the host bus is reinitialized, each partition must be mapped to a host LUN (logical unit number).

Channel IDs represent the physical connection between the HBA and the array. The host ID is an identifier assigned to the channel so that the host can identify LUNs. The following figure shows the relationship between a host ID and a LUN.

  FIGURE 4-2 LUNs Resemble Drawers in a File Cabinet Identified by an ID

Diagram shows the ID as a file cabinet and its LUNs as file drawers.

The ID is like a cabinet, and the drawers are the LUNs.

The following figure illustrates mapping partitions to host ID/LUNs.

  FIGURE 4-3 Mapping Partitions to Host ID/LUNs

Diagram shows LUN partitions mapped to ID 0 on Channel 1 and to ID 1 on Channel 3.

All hosts on the mapped host channel have full access to all partitions mapped to LUNs on that channel. To provide redundant connections between a host and a partition, map the partition to a LUN on both of the host channels that connect to that host. Only one partition can be mapped to each LUN.
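
On a Solaris host, this mapping is reflected in the device name: the controller number corresponds to the HBA connected to the host channel, the target (t) number is the host ID, and the device (d) number is the LUN. For example, a partition mapped to LUN 2 on host ID 0 typically appears as a device file similar to the following, where the controller number c1 is only an example and depends on the host configuration:

/dev/dsk/c1t0d2s2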



Note - When you modify a partition, you must first unmap the LUN.





Note - If you plan to map 128 LUNs, the process is easier if you use Sun StorEdge Configuration Service. Refer to the Sun StorEdge 3000 Family Configuration Service User's Guide for more information.




procedure icon  To Map a Logical Drive Partition

1. From the Main Menu, choose "view and edit Host luns."

A list of available channels, IDs, and their associated controllers is displayed.

2. Select a channel and host ID on the primary controller.

3. If the Logical Drive and Logical Volume menu options are displayed, choose Logical Drive to display the LUN table.

 Screen capture shows the LUN table.

4. Select the LUN you want to map the drive to.

A list of available logical drives is displayed.

5. Select the logical drive (LD) that you want to map to the selected LUN.

The partition table is displayed.

 Screen capture shows the partition table with Partition 0 selected.

6. Select the partition you want to map to the selected LUN.

7. Choose "Map Host LUN," and then choose Yes to confirm the host LUN mapping.

 Screen capture shows two mapping options with "Map Host LUN" selected.

The partition is now mapped to the selected LUN.

 Screen capture shows partition 0 mapped to LUN 0.

8. Repeat Step 4 through Step 7 to map additional partitions to host LUNs on this channel and logical drive.

9. Press Escape.

10. If you are mapping LUNs in a redundant configuration, repeat Step 2 through Step 7 to map partitions to host LUNs with other IDs on the logical drive assigned to the primary controller.

When you map a partition to two channels in a redundant configuration, the number in the Partition column of the partition table displays an asterisk (*) to indicate that the partition is mapped to two LUNs.



Note - If you are using host-based multipathing software, map each partition to two or more host IDs so multiple paths will be available from the partition to the host.



11. Repeat Step 2 through Step 10 to map partitions to host LUNs on the secondary controller.

12. To verify unique mapping of each LUN (unique LUN number, unique DRV number, or unique Partition number):

a. From the Main Menu, choose "view and edit Host luns."

b. Select the appropriate controller and ID and press Return to review the LUN information.

A mapped LUN displays a number in the host LUN partition window.

13. When all host LUNs have been mapped, save the updated configuration to nonvolatile memory. See Saving Configuration (NVRAM) to a Disk for more information.

14. (Solaris operating system only) For the Solaris operating system to recognize a LUN, you must first manually write the label using the Auto configure option of the format(1M) utility, as described in To Label a LUN.


Labeling a LUN (Solaris Operating System Only)

For the Solaris operating system to recognize a LUN, you must first manually write the label using the Auto configure option of the format command.

For additional operating system information, refer to the Installation, Operation, and Service Manual for your Sun StorEdge 3000 family array.


procedure icon  To Label a LUN

1. On the data host, type format at the root prompt.

# format

2. Specify the disk number when prompted.

3. Type Y at the following prompt, if it is displayed, and press Return:

Disk not labeled. Label it now? Y

The Solaris operating system's Format menu is displayed.

4. Type type to select a drive type.

5. Type 0 to choose the Auto configure menu option.

Choose the Auto configure menu option regardless of which drive types are displayed by the type option.

6. Type label and press Y when prompted to continue.

format> label
Ready to label disk, continue? y

7. Type quit to finish using the Format menu.
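
For reference, a condensed example of the entire labeling session is shown below. The disk number, disk list, and drive type entries vary from system to system, and most of the output is omitted here:

# format
Searching for disks...done
...
Specify disk (enter its number): [number of the array LUN]
Disk not labeled. Label it now? Y
format> type
...
Specify disk type (enter its number): 0
format> label
Ready to label disk, continue? y
format> quit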


Solaris Operating System Device Files

Perform the following procedure to create device files for newly mapped LUNs on hosts running the Solaris 8 or Solaris 9 operating system.

For additional operating system information, see the Installation, Operation, and Service manual for your Sun StorEdge 3000 family array.


procedure icon  To Create Device Files for Newly Mapped LUNs

1. To create device files, type:

# /usr/sbin/devfsadm -v 

2. To display the new LUNs, type:

# format

3. If the format command does not recognize the newly mapped LUNs, perform a configuration reboot:

# reboot -- -r
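
As a quick non-interactive check, you can also pipe end-of-file into the format command so that it prints the available disk list, including any newly created device files, and then exits:

# echo | format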


Saving Configuration (NVRAM) to a Disk

The controller configuration information is stored in non-volatile RAM (NVRAM). When you save it, the information is stored in the disk reserved space of all drives that have been configured into logical drives. Back up the controller configuration information whenever you change the array's configuration.

Saving NVRAM controller configuration to a file provides a backup of controller configuration information such as channel settings, host IDs, and cache configuration. It does not save LUN mapping information. The NVRAM configuration file can restore all configuration settings but does not rebuild logical drives.



Note - A logical drive must exist for the controller to write NVRAM content onto it.




procedure icon  To Save a Configuration to NVRAM

single-step bullet  Choose "system Functions → Controller maintenance → Save nvram to disks," and choose Yes to save the contents of NVRAM to disk.

A prompt confirms that the NVRAM information has been successfully saved.

To restore the configuration, see Restoring Your Configuration (NVRAM) From Disk.

If you prefer to save and restore all configuration data, including LUN mapping information, use Sun StorEdge Configuration Service or the Sun StorEdge CLI in addition to saving your NVRAM controller configuration to disk. The information saved this way can be used to rebuild all logical drives and therefore can be used to completely duplicate an array configuration to another array.

Refer to the Sun StorEdge 3000 Family Configuration Service User's Guide for information about the "save configuration" and "load configuration" features. Refer to the sccli man page or to the Sun StorEdge 3000 Family CLI User's Guide for information about the reset nvram and download controller-configuration commands.