Chapter 4
First-Time Configuration for SCSI Arrays
The Sun StorEdge 3310 SCSI array and Sun StorEdge 3320 SCSI array are each preconfigured with a single RAID 0 logical drive mapped to LUN 0, and no spare drives. This is not a working configuration. Unmap and delete this logical drive, using the procedure in To Unmap and Delete a Logical Drive, and replace it with logical drives that suit your requirements.
This chapter shows you how to configure your array for the first time, or how to reconfigure it. It describes the typical sequence of steps you follow to configure an array.
Before configuring your array, carefully read chapters 1, 2, and 3.
Note - As you perform the operations described in this and other chapters, you might periodically see event messages pop up on the screen. To dismiss an event message after you have read it, press Escape. To prevent event messages from displaying, so that you can read them only by displaying the event message log, press Ctrl-C. You can press Ctrl-C again at any time to re-enable pop-up display of event messages. See Viewing Event Logs on the Screen for more information about event messages.
If you are configuring your array for the first time, there is no need to review the existing configuration before you delete it.
If you are reconfiguring logical drives, it is a good idea to view the existing logical drive configuration to determine its status and any changes you want to make to the RAID level, size, number of physical drives that make up a selected logical drive, and spare drives. You also need to view the channel configuration to determine whether you want to make any changes to the channel mode and channel host IDs.
To View the Logical Drive Configuration
1. From the Main Menu, choose "view and edit Logical drives" to display the Logical Drive Status Table.
For a description of this table's categories, see Logical Drive Status Table.
2. Note the changes you want to make to the existing configuration.
To View the Channel Configuration
1. From the Main Menu, choose "view and edit channelS" to display the Channel Status Table.
For a description of this table's categories, see Channel Status Table.
2. Note changes you want to make to the existing configuration.
To assign a different RAID level or a different set of drives to a logical drive, or to change local spare drives, you must first unmap and delete the logical drive, and then create a new logical drive.
Caution - This operation erases all data on the logical drive. If any data exists on the logical drive, copy it to another location or back it up before you delete the logical drive.
Note - You can delete a logical drive only if it has first been unmapped.
To Unmap and Delete a Logical Drive
1. From the Main Menu, choose "view and edit Host luns" to display a list of channel and host IDs.
2. Choose a channel and host ID combination from the list.
A list of channel and host IDs is displayed. You might need to scroll through the list to display some of the channels and host IDs.
3. Select a host LUN and choose Yes to unmap the host LUN from the channel/host ID.
4. Repeat Step 3 to unmap all remaining host LUNs that are mapped to the logical drive you want to delete.
5. Press Escape to return to the Main Menu.
6. From the Main Menu, choose "view and edit Logical drives."
7. Select the logical drive that you unmapped and want to delete.
8. Choose "Delete logical drive" and, if it is safe to delete the logical drive, choose Yes to confirm the deletion.
Before creating any logical drives, determine the appropriate optimization mode for the array. The type of application accessing the array determines whether to use sequential or random optimization. See Cache Optimization Mode and Stripe Size Guidelines for a detailed description of sequential and random optimization.
If you are modifying an existing configuration and do not want to delete your existing logical drives, verify your optimization mode but do not change it.
To Verify the Optimization Mode
1. From the Main Menu, choose "view and edit Configuration parameters Caching Parameters."
Sequential I/O is the default optimization mode.
2. To accept the optimization mode that is displayed, press Escape.
To Change the Optimization Mode
Once logical drives are created, you cannot use the RAID firmware to change the optimization mode without deleting all logical drives. You can, however, use the Sun StorEdge CLI set cache-parameters command to change the optimization mode while logical drives exist. Refer to the Sun StorEdge 3000 Family CLI 2.0 User's Guide for more information.
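As a sketch of the CLI alternative, the session below shows how this might look from a host with the Sun StorEdge CLI installed. The device path and the exact parameter spelling are assumptions, not taken from this manual; verify them against the Sun StorEdge 3000 Family CLI 2.0 User's Guide before use.

```shell
# Hypothetical sccli session; /dev/rdsk/c1t0d0s2 is a placeholder device path.
sccli /dev/rdsk/c1t0d0s2 show cache-parameters
# Switch the optimization mode without deleting logical drives
# (parameter names are assumptions -- confirm with the CLI documentation):
sccli /dev/rdsk/c1t0d0s2 set cache-parameters optimization random
```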
If you have not deleted all logical drives, a notice will inform you of this requirement and you will not be able to change the optimization mode. See Deleting Logical Drives for the procedure to delete logical drives.
1. From the Main Menu, choose "view and edit Configuration parameters Caching Parameters" to display the current optimization mode.
2. Select "Optimization for Sequential I/O" or "Optimization for Random I/O" as applicable.
If you have not deleted all logical drives, a notice will inform you of this requirement and you will not be able to change the optimization mode.
3. Choose Yes to change the Optimization mode from Sequential I/O to Random I/O, or from Random I/O to Sequential I/O.
You are prompted to reset the controller:
4. Choose Yes to reset the controller.
If you do not reset the controller now, the optimization mode remains unchanged.
Before configuring physical drives into a logical drive, you must determine the availability of the physical drives in your enclosure. Only drives with a status of FRMT DRV are available.
Note - A drive that does not show a status of FRMT DRV needs to have reserved space added. See Changing Disk Reserved Space for more information.
To Check Physical Drive Availability
1. From the Main Menu, choose "view and edit Drives" to display a list of all installed physical drives.
2. Use the arrow keys to scroll through the table and check that all installed drives are listed.
When the power is initially turned on, the controller scans all installed physical drives that are connected through the drive channels.
Note - If a drive is installed but is not listed, it might be defective or installed incorrectly. If a physical drive was connected after the controller completed initialization, use the "Scan scsi drive" menu option so that the controller recognizes the newly added physical drive and you can configure it. See To Scan a New SCSI Drive for information about scanning a new SCSI drive.
3. To view more information about a drive:
a. Select the drive about which you want more information.
b. Choose "View drive information."
Additional information is displayed about the drive you selected.
The Sun StorEdge 3310 SCSI array and Sun StorEdge 3320 SCSI array are preconfigured with the channel settings shown in Default Channel Configurations. Follow the procedures for configuring a channel mode if you plan on adding a host connection or expansion unit. To make changes to channel host IDs, follow the procedures for adding or deleting a host ID.
When configuring the channel mode, the following rules apply:
To Configure the Channel Mode
1. From the Main Menu, choose "view and edit channelS" to display the Channel Status Table.
2. Select the channel that you want to modify to display a menu of channel options.
3. Choose "channel Mode" to change the channel from host to drive, or drive to host, and then choose Yes to confirm the mode change.
This change does not take effect until the controller is reset.
NOTICE: Change made to this setting will NOT take effect until the controller is RESET. Prior to resetting the controller, operation may not proceed normally. Do you want to reset the controller now?
4. Choose Yes to reset the controller.
Host channel IDs identify the controller to the host. Some applications require that specific IDs be assigned to host channels in order to recognize the array. Sun StorEdge 3310 SCSI array and Sun StorEdge 3320 SCSI array default host channel IDs are shown in TABLE 3-1 under Default Channel Configurations.
Each host ID can have up to 32 partitions mapped to it as LUNs, for a combined total of no more than 128 LUNs. The default host channel ID settings enable you to map up to 64 LUNs in total. To map as many as 128 LUNs, you must add host IDs: at least four host IDs are required, and no more than six host IDs are supported.
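The arithmetic behind these limits can be checked directly. (That there are two preconfigured host IDs is inferred here from the 64-LUN default, not stated explicitly above.)

```shell
# 32 LUNs per host ID; the 64-LUN default implies two preconfigured host IDs.
echo $((32 * 2))     # maximum LUNs with the default host IDs: 64
echo $((128 / 32))   # host IDs needed to reach the 128-LUN maximum: 4
```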
For details on mapping 128 LUNs, refer to Planning for 128 LUNs on a SCSI Array (Optional).
Each host channel has a unique primary and secondary ID available. You can:
To Add or Delete a Unique Host ID
Note - To change an ID, you must first delete the old ID and then add the new ID.
1. From the Main Menu, choose "view and edit channelS."
2. Select the host channel on which you want to add an ID.
3. Choose "view and edit scsi Id."
If host IDs have already been configured on the host channel, they are displayed. If no host IDs have been configured, the following message is displayed.
4. If a host ID has already been assigned to that channel, select an ID and press Return to view a menu for adding or deleting SCSI IDs.
5. To add an ID, select "Add Channel SCSI ID." To delete an ID, select "Delete Channel SCSI ID."
6. If adding an ID, select a controller from the list to display a list of SCSI IDs. If deleting an ID, select Yes to delete the ID.
7. If adding an ID, select an ID from the list, and then choose Yes to confirm the addition.
8. If you are only changing one Channel ID, choose Yes to the following confirmation message to reset the controller.
NOTICE: Change made to this setting will NOT take effect until the controller is RESET. Prior to resetting the controller, operation may not proceed normally. Do you want to reset the controller now?
9. If you are changing more than one Channel ID, do not reset the controller until all IDs are changed.
The configuration change takes effect only after the controller is reset.
The RAID array is preconfigured with one RAID 0 logical drive as described in Default Logical Drive Configuration. Each logical drive consists of a single partition by default.
This section describes how to modify the RAID level or add logical drives. In these procedures, you configure a logical drive to contain one or more physical drives based on the desired RAID level, and divide the logical drive into additional partitions.
If you do not use on-line initialization, be sure to allow enough time when you create logical drives. Creating a 2-Tbyte RAID 5 logical drive can take up to:
The Solaris operating system requires drive geometry for various operations, including newfs. For the appropriate drive geometry to be presented to the Solaris operating system for logical drives larger than 253 Gbyte, use the default settings shown below to cover all logical drives over 253 Gbyte. These settings work for smaller configurations as well. The controller automatically adjusts the sector count so the operating system can read the correct drive capacity.
For Solaris operating system configurations, use the values in the following table.
See Host Cylinder/Head/Sector Mapping Configuration for more information. See To Change Cylinder and Head Settings for instructions on how to apply these settings to FC and SATA arrays.
After settings are changed, they apply to all logical drives in the chassis.
Note - Refer to your operating system documentation for limitations on device sizes.
To Change Cylinder and Head Settings
1. Choose "view and edit Configuration parameters Host-side Parameters Host Cylinder/Head/Sector Mapping Configuration Sector Ranges - Variable," and then choose Yes to confirm your choice.
2. Choose "Head Ranges - 64 Heads," and then choose Yes to confirm your choice.
3. Choose "Cylinder Ranges - < 65536," and then choose Yes to confirm your choice.
To Create a Logical Drive
Note - To reassign drives and add local or global spare drives to the preconfigured array, you must first unmap and then delete the existing logical drives. For more information about deleting a logical drive, see Deleting Logical Drives.
1. From the Main Menu, choose "view and edit Logical drives."
Unassigned logical drives show a RAID level of NONE.
2. Select the first available unassigned logical drive (LG).
You can create as many as 16 logical drives using physical drives on any loop.
3. When prompted "Create Logical Drive?", choose Yes to confirm your choice and display a pull-down list of supported RAID levels.
4. Select a RAID level from the list to assign to the logical drive.
Note - The NRAID option that appears in some firmware menus provides no data redundancy, lacks the protection of other RAID levels, and is rarely used.
For more information about RAID levels, see RAID Levels.
5. Select the drives you want to include in the logical drive from the list of available physical drives, using the steps below.
You must select at least the minimum number of drives required for the selected RAID level.
For redundancy, you can create a logical drive containing drives distributed over separate channels. You can then create several partitions on each logical drive. In a RAID 1 or RAID 0+1 configuration, the order in which you select the physical drives for a logical drive determines the channels to which the physical drives are assigned. If you want drives to be mirrored over two channels, select them in the appropriate order. For example:
a. Use the up and down arrow keys and press Return to select the drives you want to include in the logical drive.
An asterisk (*) is displayed in the Chl (Channel) column for each selected physical drive.
b. To deselect a drive, press Return again on the selected drive.
The asterisk marking that drive disappears.
c. After all physical drives have been selected for the logical drive, press Escape to display a menu of logical drive options.
Several menu options are displayed. You can use these menu options to define aspects of the logical drive you are creating:
These menu options are described in the remainder of this section.
6. (Optional) Set the maximum logical drive capacity, using the following procedure:
a. Choose "Maximum Drive Capacity."
Note - Changing the maximum drive capacity reduces the size of the logical drive and leaves some disk space unused.
b. Type in the maximum capacity of each physical drive that makes up the logical drive you are creating.
A logical drive should be composed of physical drives with the same capacity. A logical drive can only use the capacity of each drive up to the maximum capacity of the smallest drive.
7. (Optional) Add a local spare drive from the list of unused physical drives by following these steps:
a. Choose "Assign Spare Drives" to display a list of available physical drives you can use as a local spare.
Note - A global spare cannot be created while creating a logical drive.
Note - A logical drive created in NRAID or RAID 0, which has no data redundancy or parity, does not support spare drive rebuilding.
The spare chosen here is a local spare and will automatically replace any disk drive that fails in this logical drive. The local spare is not available for any other logical drive.
b. Select a physical drive from the list to use as a local spare.
c. Press Escape to return to the menu of logical drive options.
Note - The Disk Reserved Space option is not supported while you are creating logical drives.
If you use two controllers for a redundant configuration, you can assign a logical drive to either of the controllers to balance the workload. By default, all logical drives are assigned to the primary controller.
Logical drive assignments can be changed later, but that operation requires a controller reset to take effect.
8. (Optional) For dual-controller configurations, you can assign this logical drive to the secondary controller by following these steps:
Caution - In single-controller configurations, assign logical drives only to the primary controller.
a. Choose "Logical Drive Assignments."
A confirmation message is displayed.
b. Choose Yes to assign the logical drive to the redundant controller.
9. (Optional) Configure the logical drive's write policy.
Write-back cache is the preconfigured global logical drive write policy, which is specified on the Caching Parameters submenu. (See Enabling and Disabling Write-Back Cache for the procedure on setting the global caching parameter.) This option enables you to assign a write policy per logical drive that is either the same as or different than the global setting. Write policy is discussed in more detail in Cache Write Policy Guidelines.
Note - The Default write policy displayed is the global write policy assigned to all logical drives.
The following write policy options are displayed:
As described in Cache Write Policy Guidelines, the array can be configured to dynamically switch write policy from write-back cache to write-through cache if specified events occur. Write policy is only automatically switched for logical drives with write policy configured to Default. See Event Trigger Operations for more information.
b. Choose a write policy option.
Note - You can change a logical drive's write policy at any time, as explained in Changing Write Policy for a Logical Drive.
10. (Optional) Set the logical drive initialization mode by choosing "Initialize Mode" from the list of logical drive options, and then choosing Yes to change the initialization mode.
The assigned initialization mode is displayed in the list of logical drive options.
You can choose between these two logical drive initialization options:
This option enables you to configure and use the logical drive before initialization is complete. Because the controller is building the logical drive while performing I/O operations, initializing a logical drive on-line requires more time than off-line initialization.
This menu option enables you to configure and use the drive only after initialization is complete. Because the controller is building the logical drive without having to also perform I/O operations, off-line initialization requires less time than on-line initialization.
Because logical drive initialization can take a considerable amount of time, depending on the size of your physical disks and logical drives, you can choose on-line initialization so that you can use the logical drive before initialization is complete.
11. (Optional) Configure the logical drive stripe size.
Depending on the optimization mode selected, the array is configured with the default stripe sizes shown in Cache Optimization Mode and Stripe Size Guidelines. When you create a logical drive, however, you can assign a different stripe size to that logical drive.
See Cache Optimization Mode and Stripe Size Guidelines for more information.
Note - Once a logical drive is created, its stripe size cannot be changed. To change the stripe size, you must delete the logical drive, and then recreate it using the new stripe size.
A menu of stripe size options is displayed.
b. Choose Default to assign the stripe size per Optimization mode, or choose a different stripe size from the menu.
Default stripe size per optimization mode is shown in Cache Optimization Mode and Stripe Size Guidelines.
The selected stripe size is displayed in the list of logical drive options.
12. Once all logical drive options have been assigned, press Escape to display the settings you have chosen.
13. Verify that all the information is correct, and then choose Yes to create the logical drive.
Note - If the logical drive has not been configured correctly, select No to return to the logical drive status table so you can configure the drive correctly.
Messages indicate that the logical drive initialization has begun, and then that it has completed.
14. Press Escape to close the drive initialization message.
A progress bar displays the progress of initialization as it occurs.
You can press Escape to remove the initialization progress bar and continue working with menu options to create additional logical drives. The percentage of completion for each initialization in progress is displayed in the upper left corner of the window as shown in the following example screen.
The following message is displayed when the initialization is completed:
15. Press Escape to dismiss the notification.
The newly created logical drive is displayed in the status window.
By default, logical drives are automatically assigned to the primary controller. In a dual-controller array, assigning half of the logical drives to the secondary controller improves performance somewhat by redistributing the traffic.
To balance the workload between both controllers, you can distribute your logical drives between the primary controller (displayed as the Primary ID or PID) and the secondary controller (displayed as the Secondary ID or SID).
After a logical drive has been created, it can be assigned to the secondary controller. Then the host computer associated with the logical drive can be mapped to the secondary controller (see Mapping a Partition to a Host LUN).
To Change a Controller Assignment (Optional)
Caution - In single-controller configurations, assign logical drives only to the primary controller.
1. From the Main Menu, choose "view and edit Logical drives."
2. Select the drive you want to reassign.
3. Choose "logical drive Assignments," and then choose Yes to confirm the reassignment.
The reassignment is evident from the "view and edit Logical drives" screen. A "P" in front of the LG number, such as "P0," means that the logical drive is assigned to the primary controller. An "S" in front of the LG number means that the logical drive is assigned to the secondary controller.
You can assign a name to each logical drive. These logical drive names are used only in RAID firmware administration and monitoring and do not appear anywhere on the host. After you assign a drive name, you can change it at any time.
To Assign a Logical Drive Name (Optional)
1. From the Main Menu, choose "view and edit Logical drives."
2. Select the logical drive to which you want to assign a name.
3. Choose "logical drive Name."
4. Type the name you want to give the logical drive in the New Logical Drive Name field and press Return to save the name.
You can divide a logical drive into several partitions, or use the entire logical drive as a single partition. You can configure up to 32 partitions and up to 128 LUN assignments. For guidelines on setting up 128 LUNs, see Planning for 128 LUNs on a SCSI Array (Optional).
Caution - If you modify the size of a partition or logical drive, all data on the drive is lost.
To Partition a Logical Drive (Optional)
Caution - Make sure any data that you want to save on this partition has been backed up before you partition the logical drive.
1. From the Main Menu, choose "view and edit Logical drives."
2. Select the logical drive you want to partition.
3. Choose "Partition logical drive."
If the logical drive has not already been partitioned, the following warning is displayed:
A list of the partitions on this logical drive is displayed. If the logical drive has not yet been partitioned, all the logical drive capacity is listed as "partition 0."
A Partition Size dialog is displayed.
6. Type the desired size of the selected partition.
The following warning is displayed:
The remaining capacity of the logical drive is automatically allocated to the next partition. In the following example, a partition size of 20000 Mbyte was entered; the remaining storage of 20000 Mbyte is allocated to the partition below the newly created partition.
8. Repeat Step 5 through Step 7 to partition the remaining capacity of your logical drive.
For information on deleting a partition, see Deleting a Logical Drive Partition.
A partition is a division of the logical drive that appears as a physical drive to any host that has access to that partition. For Sun StorEdge 3310 SCSI arrays and Sun StorEdge 3320 SCSI arrays, you can create a maximum of 32 partitions per logical drive. So that host bus adapters (HBAs) recognize the partitions when the host bus is reinitialized, each partition must be mapped to a host LUN (logical unit number).
Channel IDs represent the physical connection between the HBA and the array. The host ID is an identifier assigned to the channel so that the host can identify LUNs. The following figure shows the relationship between a host ID and a LUN.
The ID is like a cabinet, and the drawers are the LUNs.
The following figure illustrates mapping partitions to host ID/LUNs.
All hosts on the mapped host channel have full access to all partitions mapped to LUNs on that channel. To provide redundant connections between a host and a partition, map the partition to a LUN on both of the host channels that connect to that host. Only one partition can be mapped to each LUN.
Note - When you modify a partition, you must first unmap the LUN.
Note - If you plan to map 128 LUNs, the process is easier if you use Sun StorEdge Configuration Service. Refer to the Sun StorEdge 3000 Family Configuration Service User's Guide for more information.
To Map a Logical Drive Partition
1. From the Main Menu, choose "view and edit Host luns."
A list of available channels, IDs, and their associated controllers is displayed.
2. Select a channel and host ID on the primary controller.
3. If the Logical Drive and Logical Volume menu options are displayed, choose Logical Drive to display the LUN table.
4. Select the LUN you want to map the drive to.
A list of available logical drives is displayed.
5. Select the logical drive (LD) that you want to map to the selected LUN.
The partition table is displayed.
6. Select the partition you want to map to the selected LUN.
7. Choose "Map Host LUN," and then choose Yes to confirm the host LUN mapping.
The partition is now mapped to the selected LUN.
8. Repeat Step 4 through Step 7 to map additional partitions to host LUNs on this channel and logical drive.
10. If you are mapping LUNs in a redundant configuration, repeat Step 2 through Step 7 to map partitions to host LUNs with other IDs on the logical drive assigned to the primary controller.
When you map a partition to two channels in a redundant configuration, the number in the Partition column of the partition table displays an asterisk (*) to indicate that the partition is mapped to two LUNs.
Note - If you are using host-based multipathing software, map each partition to two or more host IDs so multiple paths will be available from the partition to the host.
11. Repeat Step 2 through Step 10 to map hosts to the secondary controller.
12. To verify unique mapping of each LUN (unique LUN number, unique DRV number, or unique Partition number):
a. From the Main Menu, choose "view and edit Host luns."
b. Select the appropriate controller and ID and press Return to review the LUN information.
A mapped LUN displays a number in the host LUN partition window.
13. When all host LUNs have been mapped, save the updated configuration to nonvolatile memory. See Saving Configuration (NVRAM) to a Disk for more information.
14. (Solaris operating system only) For the Solaris operating system to recognize a LUN, you must first manually write the label using the Auto configure option of the format(1M) utility, as described in To Label a LUN.
For the Solaris operating system to recognize a LUN, you must first manually write the label using the Auto configure option of the format command.
For additional operating system information, refer to the Installation, Operation, and Service Manual for your Sun StorEdge 3000 family array.
To Label a LUN
1. On the data host, type format at the root prompt.
2. Specify the disk number when prompted.
3. Type Y at the following prompt, if it is displayed, and press Return:
The Solaris operating system's Format menu is displayed.
4. Type type to select a drive type.
5. Type 0 to choose the Auto configure menu option.
Choose the Auto configure menu option regardless of which drive types are displayed by the type option.
6. Type label and press Y when prompted to continue.
7. Type quit to finish using the Format menu.
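As a sketch, the whole labeling sequence from a root shell looks something like the following. Disk numbers and prompts vary by system, so treat this as illustrative rather than exact output.

```shell
format    # lists the attached disks; enter the disk number when prompted
# At the format> prompt, the sequence of commands is:
#   type     -> then enter 0 to select the "Auto configure" option
#   label    -> answer y when asked whether to continue
#   quit     -> exit the Format menu
```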
Perform the following procedure to create device files for newly mapped LUNs on hosts running the Solaris 8 or Solaris 9 operating system.
For additional operating system information, see the Installation, Operation, and Service manual for your Sun StorEdge 3000 family array.
To Create Device Files for Newly Mapped LUNs
1. To create device files, type:
2. To display the new LUNs, type:
3. If the format command does not recognize the newly mapped LUNs, perform a configuration reboot:
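The commands themselves do not appear in the steps above. On Solaris 8 and 9 the typical commands are the following; they are an assumption based on standard Solaris administration, not taken from this manual, so verify them against your Solaris documentation before use.

```shell
# Typical Solaris 8/9 commands for the three steps above:
/usr/sbin/devfsadm -v    # step 1: create device files for the new LUNs
format                   # step 2: the new LUNs should appear in the disk list
reboot -- -r             # step 3: reconfiguration reboot, only if format
                         #         still does not show the new LUNs
```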
The controller configuration information is stored in non-volatile RAM (NVRAM). When you save it, the information is stored in the disk reserved space of all drives that have been configured into logical drives. Back up the controller configuration information whenever you change the array's configuration.
Saving NVRAM controller configuration to a file provides a backup of controller configuration information such as channel settings, host IDs, and cache configuration. It does not save LUN mapping information. The NVRAM configuration file can restore all configuration settings but does not rebuild logical drives.
Note - A logical drive must exist for the controller to write NVRAM content onto it.
To Save a Configuration to NVRAM
Choose "system Functions Controller maintenance Save nvram to disks," and choose Yes to save the contents of NVRAM to disk.
A prompt confirms that the NVRAM information has been successfully saved.
To restore the configuration, see Restoring Your Configuration (NVRAM) From Disk.
If you prefer to save and restore all configuration data, including LUN mapping information, use Sun StorEdge Configuration Service or the Sun StorEdge CLI in addition to saving your NVRAM controller configuration to disk. The information saved this way can be used to rebuild all logical drives and therefore can be used to completely duplicate an array configuration to another array.
Refer to the Sun StorEdge 3000 Family Configuration Service User's Guide for information about the "save configuration" and "load configuration" features. Refer to the sccli man page or to the Sun StorEdge 3000 Family CLI User's Guide for information about the reset nvram and download controller-configuration commands.
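As a hypothetical sketch of the CLI route: the command names below come from the paragraph above, but the device path and file name are placeholders, and the exact syntax should be confirmed in the sccli man page.

```shell
# Placeholder device path and file name -- adjust for your system.
sccli /dev/rdsk/c1t0d0s2 reset nvram
sccli /dev/rdsk/c1t0d0s2 download controller-configuration my-array.cfg
```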
Copyright © 2009, Dot Hill Systems Corporation. All rights reserved.