CHAPTER 8

Updating the Configuration

Refer to this chapter when you want to change the current configuration or add to it. This chapter describes the following tasks:


The Configuration menu commands and tool icons might be temporarily disabled if an administration process, such as parity checking, is running. The menu command is also shown as deactivated while the console is refreshing its inventory on the server. A satellite dish symbol is attached to the server icon during the refresh process.


Note - To use the Configuration options, you must log into the ssconfig security level of the software with the ssconfig password. When you are finished with the configuration activities, log back into the monitoring level of the program.



procedure icon  To Add a Logical Drive or Logical Volume From New Logical Drives

Use this option to add one or more logical drives to an existing configuration of RAID sets, or to add a logical volume from new logical drives. To add a logical volume from existing logical drives, see To Add a Logical Volume From Existing Logical Drives.



Note - Logical volumes are unsuited to some modern configurations such as Sun Cluster environments, and do not work in those configurations. Use logical drives instead. For more information, see Logical Volumes.




Note - If the logical drive is going to be larger than 253 Gbyte, see To Prepare for Logical Drives Larger Than 253 Gbyte.


1. Select the array that you want to configure.

2. Choose Configuration right arrow Custom Configure.



Note - This selection is inactive unless you have selected an array with available physical drives.


3. Select Add LDs/LVs to the Current Configuration from the Custom Configuration Options window.

4. Verify that the server and controller displayed at the top of the window are correct.

5. Select a disk you want to be included in the new logical drive and click Add Disk.

If you make a mistake or change your mind, select the drive and click Remove Disk.

6. Select a RAID Level.

For definitions of RAID levels, see RAID Basics.

7. From the Channel and ID list boxes, select the host channel and ID to which you want the new logical drive mapped.

8. Set the Max Drive Size.

The Max Drive Size displays the total capacity of each disk. A smaller logical drive can be created by decreasing this value.



Note - If you do not change the Max Drive Size but you do change the Partition Size, a new partition is created at the specified partition size. The remaining logical drive size capacity moves to the last partition. Remaining capacity can be used later by expanding the drive (as explained in To Expand the Capacity of a Logical Drive or Logical Volume). The drive capacity is no longer editable after a partition is created.




Note - If you want to create another logical drive on the same controller, click New LD. The logical drive you just defined is created and you are returned to the top of the window, enabling you to create another logical drive. For the maximum number of logical drives supported, see TABLE 4-1.


9. (Solaris OS only). If you want the new logical drive to be automatically labeled, which enables the OS to use the drive, click Write a new label to the new LD.

10. To use the logical drive immediately, select On-line Initialization.

Because logical drive initialization can take up to several hours, you can choose to initialize a logical drive on-line.

On-line initialization enables you to begin configuring and using the logical drive before initialization is complete. However, because the controller is building the logical drive while performing I/O operations, initializing a logical drive on-line requires more time than off-line initialization.

If you do not select On-line initialization, you can configure and use the drive only after initialization is complete. Because the controller is building the logical drive without having to also perform I/O operations, off-line initialization requires less time than on-line initialization.



Note - On-line Initialization does not apply to logical volumes.


11. Select the stripe size.

Select Default to assign the stripe size per Optimization mode as specified in TABLE 8-1, or select a different stripe size.


TABLE 8-1 Default Stripe Size Per Optimization Mode (Kbyte)

RAID Level    Sequential I/O    Random I/O
0, 1, 5       128               32
3             16                4

Once the stripe size is selected and data is written to logical drives, the only way to change the stripe size of an individual logical drive is to back up all its data to another location, delete the logical drive, and create a logical drive with the stripe size that you want.
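For illustration, the defaults in TABLE 8-1 can be expressed as a simple lookup. The default_stripe_size helper below is hypothetical, not part of the software; it only encodes the table.

```python
# Default stripe sizes per optimization mode, from TABLE 8-1.
# Hypothetical helper for illustration only; values are in Kbyte.
DEFAULT_STRIPE_KB = {
    "sequential": {0: 128, 1: 128, 5: 128, 3: 16},
    "random": {0: 32, 1: 32, 5: 32, 3: 4},
}

def default_stripe_size(raid_level: int, mode: str) -> int:
    """Return the default stripe size (Kbyte) for a RAID level and I/O mode."""
    return DEFAULT_STRIPE_KB[mode][raid_level]

print(default_stripe_size(5, "sequential"))  # 128
print(default_stripe_size(3, "random"))      # 4
```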

12. Specify Default, Write-through, or Write-back as the Write Policy for the logical drive.

The write policy determines when cached data is written to the disk drives. The ability to hold data in cache while it is being written to disk can increase storage device speed during sequential reads. Write policy options include write-through and write-back.

Using write-through cache, the controller writes the data to the disk drive before signaling the host OS that the process is complete. Write-through cache delivers lower write and throughput performance than write-back cache, but it is the safer strategy, with minimal risk of data loss on power failure. Because a battery module is installed, power is supplied to the data cached in memory, and the data can be written to disk when power is restored.

Using write-back cache, the controller receives the data to write to disk, stores it in the memory buffer, and immediately sends the host OS a signal that the write operation is complete, before the data is actually written to the disk drive. Write-back caching improves the performance of write operations and the throughput of the controller card. Write-back cache is enabled by default.



Note - The setting you specify in the Write Back field on the Cache tab of the Change Controller Parameters window is the default global cache setting for all logical drives. (See Cache Tab.)


13. Click OK.

14. To add this logical drive to a logical volume, click New LD and see To Add a Logical Drive to a Logical Volume.

15. When you are satisfied with the selections on this window, and do not want to define another logical drive, click Commit.

A confirmation window is displayed showing the new configuration.

16. Click OK to accept the configuration or Cancel to return to the console.



Note - You cannot change a logical drive configuration after you click OK.




Note - During initialization, the LD/LV size is displayed as 0 Mbyte.


17. (HP-UX OS only). To ensure that the environment is stable and accurate after making configuration changes, you need to run the ioscan -fnC disk command.



Note - If you used System Administrator Manager (sam) to unmount the file system, make sure it is closed before running the ioscan command.


18. (IBM AIX OS only). To ensure that the environment is stable and accurate after making configuration changes, you need to update the Object Data Manager (ODM) as explained in Updating the Object Data Manager on an IBM AIX Host.


procedure icon  To Add a Logical Drive to a Logical Volume

A logical volume is composed of two or more logical drives and can be divided into a maximum of 32 partitions. During operation, the host sees a nonpartitioned logical volume or a partition of a logical volume as a single physical drive.



Note - Logical volumes are unsuited to some modern configurations such as Sun Cluster environments, and do not work in those configurations. Use logical drives instead. For more information, see Logical Volumes.


1. Create a logical drive as described in Step 1-Step 15 in To Add a Logical Drive or Logical Volume From New Logical Drives.



Note - Do not partition the logical drive that you are adding to the logical volume. A logical drive that has been partitioned cannot be added to a logical volume.


2. Before you click Commit, to add the logical drive to a logical volume, click Add to LV.

The logical drive is added to the LV Definition box. The total size of the logical volume is displayed in the Available Size (MB) field.



Note - Because the logical volume has not been partitioned yet, the Part Size (MB) and the Available Size (MB) are equal. A single logical volume is considered to be a single partition.




Note - Mixing SATA and FC logical drives to create a logical volume is not supported.


3. To create another logical drive to add to the logical volume, click New LD.

4. Create the logical drive and add it to the logical volume by clicking Add to LV.

Repeat this step for every logical drive you want to add to the logical volume.

5. To create a partition, see To Create a Partition.

6. When you have finished adding logical drives to the logical volume, to create another logical volume or an individual logical drive, click Commit LV.

When you are finished creating logical volumes and do not want to create an individual logical drive, click Commit.



Note - If you accidentally click Commit LV instead of Commit when you have finished creating logical volumes and want to exit the New Configuration window, you must either create another logical drive or click Cancel and configure the logical volume again.


7. (HP-UX OS only). To ensure that the environment is stable and accurate after making configuration changes, you need to run the ioscan -fnC disk command.



Note - If you used System Administrator Manager (sam) to unmount the file system, make sure it is closed before running the ioscan command.


8. (IBM AIX OS only). To ensure that the environment is stable and accurate after making configuration changes, you need to update the Object Data Manager (ODM) as explained in Updating the Object Data Manager on an IBM AIX Host.

Media Scan

A firmware menu option called Media Scan at Power-Up specifies whether media scan runs automatically following a controller power-cycle, reset, or after logical drive initialization. This setting is disabled by default. For more information, refer to the Sun StorEdge 3000 Family RAID Firmware User’s Guide.

To determine whether or not media scan is running, see the event log. For more information on the event log window, see Event Log Window. For more information about media scan, see To Scan Physical Disks for Bad Blocks (Media Scan).


procedure icon  To Add a Logical Volume From Existing Logical Drives



Note - Logical volumes are unsuited to some modern configurations such as Sun Cluster environments, and do not work in those configurations. Use logical drives instead. For more information, see Logical Volumes.




Note - Before you can add a logical volume from existing logical drives, you must unmap the logical drives.


1. Select the array that you want to configure.

2. Choose Configuration right arrow Custom Configure.



Note - This selection is inactive unless you have selected an array with available physical drives.


3. Select Add LDs/LVs to the Current Configuration from the Custom Configuration Options window.

4. Verify that the server and controller displayed at the top of the window are correct.

5. Select Use existing LDs to create LVs.

If you do not see any logical drives listed under Select disks for logical drive, the logical drives have not been unmapped and therefore are unavailable to select. You must unmap the logical drives first.

6. Select a logical drive and click Add to LV.

7. When you have finished adding logical drives to the logical volume, to create another logical volume or an individual logical drive, click Commit LV.

When you have finished creating logical volumes and do not want to create an individual logical drive, click Commit.



Note - If you accidentally click Commit LV instead of Commit when you have finished creating logical volumes and want to exit the New Configuration window, you must either create another logical drive or click Cancel and configure the logical volume again.


8. (HP-UX OS only). To ensure that the environment is stable and accurate after making configuration changes, you need to run the ioscan -fnC disk command.



Note - If you used System Administrator Manager (sam) to unmount the file system, make sure it is closed before running the ioscan command.


9. (IBM AIX OS only). To ensure that the environment is stable and accurate after making configuration changes, you need to update the Object Data Manager (ODM) as explained in Updating the Object Data Manager on an IBM AIX Host.


Screen capture of the Add LDs/LVs to the Current Configuration window showing the Use existing LDs to create LVs check box.


procedure icon  To Delete a Logical Drive or Logical Volume

Use this option to delete one or more logical drives, or to delete logical volumes from an existing configuration of RAID sets.



Note - Before you can delete a logical drive or logical volume, you need to unmap all assigned LUNs.


1. Select the array that contains the logical drives or logical volumes you want to delete.

2. To view the existing logical drives or logical volumes, select View right arrow Logical Drive.

3. If any of the logical drives or logical volumes have host LUN assignments, proceed to Step 4 to unmap them; if they do not, proceed to Step 8.

4. Choose Configuration right arrow Custom Configure.

5. Select Change Host LUN Assignments.

6. Select the host LUNs attached to the logical drive or logical volume you want to unmap, and click Unmap Host LUN.

7. Click Close.

The console refreshes and the logical drive is displayed as “UNMAPPED.”

8. Choose Configuration right arrow Custom Configure.

9. Select Manage Existing LDs/LVs and Partitions.

10. Select the LDs/LVs tab.

11. Select the logical drive or logical volume you want to delete, click Delete, and click OK.

When deleting a logical volume, after you click Delete, the logical volume is deleted, but the logical drives that make up the logical volume are still displayed.


Screen capture showing the Manage Existing LDs/LVs and Partitions window with the LDs/LVs tab and Delete button displayed.

12. Click OK in the Confirm Configuration Operation window to complete the operation, and click Close.

The console refreshes, and the array is redisplayed without the logical drive.

13. (HP-UX OS only). To ensure that the environment is stable and accurate after making configuration changes, you need to run the ioscan -fnC disk command.



Note - If you used System Administrator Manager (sam) to unmount the file system, make sure it is closed before running the ioscan command.


14. (IBM AIX OS only). To ensure that the environment is stable and accurate after making configuration changes, you need to update the Object Data Manager (ODM) as explained in Updating the Object Data Manager on an IBM AIX Host.

The Logical Drive/Logical Volume Number

The logical drive/logical volume number referenced with each logical drive is dynamic; it changes when logical drives are created or deleted. This number is displayed in the logical drive (LDs/LVs) field of several windows, including Dynamically Grow and/or Reconfigure LDs/LVs, Change Host LUN Assignments, Manage Existing LDs/LVs and Partitions, and the main window.

Used strictly as a placeholder that enables you to visually keep track of logical drives and logical volumes, this number is insignificant to the controller. That is, the controller does not report on the logical drives or logical volumes according to this number. For example, if four logical drives exist, and LD2 is deleted, the existing LD3 dynamically changes to LD2, and LD4 changes to LD3. Only the LD/LV number changes; all LUN mapping and data on the logical drives remains unchanged.

Because the controller reports on the total number of logical drives, which in this case is three, the actual LD/LV number as displayed in the LD/LV field is irrelevant. In this example, if a new logical drive is created, it takes the LD number of the logical drive that was deleted, and the controller reports that there are a total of four logical drives. All existing logical drives return to their original primary/secondary designation.
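The renumbering behavior described above can be modeled with an ordinary list: deleting an entry shifts the later numbers down, and a newly created drive reuses the freed number. The drive labels here are hypothetical; only the positions matter.

```python
# Model of dynamic LD/LV numbering (illustration only).
drives = ["A", "B", "C", "D"]   # four logical drives, numbered by position

del drives[2]                   # delete what was the third logical drive
# The remaining drives renumber dynamically: "D" moves up one position.
print(drives.index("D"))        # 2

drives.append("E")              # create a new logical drive
print(len(drives))              # the controller again reports four drives
```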



Note - As shown in the following example, the LG number on the firmware terminal menu option View and Edit Logical Drives is not visually dynamic. After a logical drive is deleted, you see an empty placeholder. When a logical drive is created from the console or from the terminal, this empty placeholder is filled with the new logical drive.



Screen capture showing the difference between what the program and the terminal menu option displays when a logical drive is deleted.


procedure icon  To Create a Partition



Note - Before you can create a partition, you need to unmap all assigned LUNs.


1. Select the array that contains the logical drive(s) you want to partition.

2. View the logical drive(s) you want to create partition(s) on.

3. If any of these logical drives have host LUN assignments, proceed to Step 4; if they do not, proceed to Step 8.

4. Choose Configuration right arrow Custom Configure.

5. Select Change Host LUN Assignments.

6. Select the Host LUN(s) that are attached to the logical drive(s) you want to partition, and click Unmap Host LUN.

7. Click OK, and then click Close.

8. Choose Configuration right arrow Custom Configure.

9. Select Manage Existing LDs/LVs and Partitions from the Custom Configuration Options window.

10. Select the Partitions tab.

11. Select a logical drive or logical volume you want to partition.

12. Specify the Partition Size in Mbyte and click Create.

To create multiple partitions of the same size, click Add Partition once for each partition you want to create. You can also type the partition size in the Part Size field and multiply (*) it by the number of partitions you want to create, for example, 100*128. Any remaining Mbyte is added to the last partition.

As you add partitions, the remaining capacity displayed in Available Size (MB) decreases by the amount of the partition size.
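The arithmetic in this step can be sketched as follows. The plan_partitions helper is hypothetical, not part of the program; it shows how equal-size partitions plus a remainder in the last partition account for the available capacity.

```python
# Sketch of the partition arithmetic described above (illustration only):
# equal-size partitions are created, and any remaining Mbyte is added
# to the last partition.
def plan_partitions(available_mb: int, part_mb: int, count: int) -> list[int]:
    parts = [part_mb] * count
    parts[-1] += available_mb - part_mb * count  # remainder goes to the last
    return parts

# Example: 1000 Mbyte available, three 300-Mbyte partitions requested.
print(plan_partitions(1000, 300, 3))  # [300, 300, 400]
```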

13. To change the size of a partition you have already created, select the logical drive or logical volume, and click Modify Size.

14. Specify the new size (in Mbyte) in the Partition Size field, and click OK.


Screen capture showing the Manage Existing LDs/LVs and Partitions window with the Partitions tab displayed.

15. Click OK in the Confirm Configuration Operation window to complete the operation, and click Close.


After a logical drive or logical volume has been partitioned, when you open the logical drive or logical volume in the main window, the partitions are displayed with a partition icon.

Screen capture showing a logical drive that has been partitioned.

16. (HP-UX OS only). To ensure that the environment is stable and accurate after making configuration changes, you need to run the ioscan -fnC disk command.



Note - If you used System Administrator Manager (sam) to unmount the file system, make sure it is closed before running the ioscan command.


17. (IBM AIX OS only). To ensure that the environment is stable and accurate after making configuration changes, you need to update the Object Data Manager (ODM) as explained in Updating the Object Data Manager on an IBM AIX Host.

The Logical Drive/Logical Volume Number

For important information regarding the logical drive/logical volume number displayed in the LDs/LVs field in the Manage Existing LDs/LVs and Partitions window, see The Logical Drive/Logical Volume Number.


procedure icon  To Delete a Partition



Note - To delete a partition on a logical drive or logical volume, you need to unmap all assigned LUNs.


1. Select the array that contains the logical drives or logical volumes for which you want to delete the partitions.

2. View the logical drives or logical volumes for which you want to delete the partitions.

If any of the partitions on the drive have host LUN mappings, proceed to Step 3; if they do not, proceed to Step 7.

3. Choose Configuration right arrow Custom Configure.

4. Select Change Host LUN Assignments.

5. Select the LUNs that are mapped to the logical drive’s or logical volume’s partitions that you want to delete, and click Unmap Host LUN.

6. Click OK, and then click Close.

7. Choose Configuration right arrow Custom Configure.

8. Select Manage Existing LDs/LVs and Partitions from the Custom Configuration Options window.

9. Select the Partitions tab.

10. Select the partition you want to modify or delete, starting from the last partition within the logical drive or logical volume.

11. Click Delete, and then click OK.


Screen capture showing the Manage Existing Logical Drives and Partitions window with Partitions tab displayed.

12. Click OK in the Confirm Configuration Operation window to complete the operation, and click Close.

13. (HP-UX OS only). To ensure that the environment is stable and accurate after making configuration changes, you need to run the ioscan -fnC disk command.



Note - If you used System Administrator Manager (sam) to unmount the file system, make sure it is closed before running the ioscan command.


14. (IBM AIX OS only). To ensure that the environment is stable and accurate after making configuration changes, you need to update the Object Data Manager (ODM) as explained in Updating the Object Data Manager on an IBM AIX Host.


procedure icon  To Expand the Capacity of a Logical Drive or Logical Volume

Use this option to expand the capacity of an existing logical drive, or to expand the capacity of a logical volume. For example, you might originally have had a 72-Gbyte drive of which only 36 Gbyte was selected to build a logical drive. To use the remaining 36 Gbyte, you need to expand the logical drive. RAID levels 0, 1, 3, and 5 support expansion.



Note - To expand a logical volume, you must first expand the logical drives that make up the logical volume.


1. Select the array that you want to configure.

2. Choose Configuration right arrow Custom Configure.

3. Select Dynamically Grow and/or Reconfigure LDs/LVs from the Custom Configuration Options window.

4. Select the logical drive or logical volume you want to expand.

5. Select the Expand LD/LV tab.

6. Specify the capacity in Mbyte by which you want to expand the logical drive or logical volume in the Maximum Drive Expand Capacity field, and click OK.

The capacity shown in the Maximum Available Drive Free Capacity field is the maximum available free disk space per physical drive, based on the smallest physical drive in the logical drive. The capacity you specify is added to each physical drive in the logical drive.

The total amount of capacity that is added to the logical drive is calculated automatically based on the RAID level.



Note - Spare drives are not included when expanding a logical drive. Do not include spare drives when calculating maximum drive expand capacity.


If you know the total maximum drive capacity by which you want to expand a logical drive, perform the following calculations based on the RAID level to determine the amount to enter in the Maximum Drive Expand Capacity field:



Note - The Maximum Drive Expand Capacity cannot exceed the Maximum Available Drive Free Capacity.
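
The per-RAID-level calculations referenced above are not spelled out in this extract. As a hedged sketch based on standard RAID capacity arithmetic (an assumption, not taken from this guide), the per-drive expansion amount is multiplied by the number of data drives for the RAID level: all drives for RAID 0, half the drives for RAID 1, and all but one drive for RAID 3 or 5.

```python
# Hedged sketch (assumption for illustration): total capacity added to a
# logical drive when each of its member drives grows by expand_mb.
# Spare drives are excluded, as the note above requires.
def total_added_mb(raid_level: int, num_drives: int, expand_mb: int) -> int:
    if raid_level == 0:
        data_drives = num_drives          # striping only, no redundancy
    elif raid_level == 1:
        data_drives = num_drives // 2     # mirrored pairs
    elif raid_level in (3, 5):
        data_drives = num_drives - 1      # one drive's worth of parity
    else:
        raise ValueError("unsupported RAID level")
    return data_drives * expand_mb

# Example: RAID 5 with four member drives, 1000 Mbyte added per drive.
print(total_added_mb(5, 4, 1000))  # 3000
```

Conversely, if you know the total capacity you want to add, divide it by the same data-drive count to get the value to enter in the Maximum Drive Expand Capacity field.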


7. To use the logical drive immediately, select OnLine Expansion.

Online expansion enables you to use the logical drive before expansion is complete. However, because the controller is building the logical drive while performing I/O operations, expanding a logical drive online requires more time than offline expansion.

If you do not select OnLine Expansion, you can use the drive only after expansion is complete. Because the controller is building the logical drive without having to also perform I/O operations, offline expansion requires less time than online expansion.



Note - The Online Expansion option is not available when expanding logical volumes.



Screen capture showing the Dynamically Grow and/or Reconfigure LDs/LVs window with the Expand LD/LV tab displayed.

8. Click OK in the Confirm Configuration Operation window to complete the operation, and click Close.

The Logical Drive/Logical Volume Number

For important information regarding the logical drive/logical volume number displayed in the LD/LV field in the Dynamically Grow and/or Reconfigure LDs/LVs window, see The Logical Drive/Logical Volume Number.


procedure icon  To Add Physical Drives to an Existing Logical Drive

1. Select the array that you want to configure.

2. Choose Configuration right arrow Custom Configure.

3. Select Dynamically Grow and/or Reconfigure LDs/LVs from the Custom Configuration Options window.

4. Select the logical drive to which you want to add a drive.

5. Select the Add SCSI Drives tab.

6. From the list of Available disks, select the drive you want to add to the logical drive.

7. Click Add Disk.

The drive is moved to the Add disk(s) to LD list.

If you make a mistake or change your mind, select the disk from the Add disk(s) to LD list and click Remove.

8. When you are finished adding the drives, click OK.


Screen capture showing the Dynamically Grow and/or Reconfigure LDs/LVs window with the Add SCSI Drives tab displayed.

9. Click OK in the Confirm Configuration Operation window to complete the operation, and click Close.

The Logical Drive/Logical Volume Number

For important information regarding the logical drive/logical volume number displayed in the LD/LV field in the Dynamically Grow and/or Reconfigure LDs/LVs window, see The Logical Drive/Logical Volume Number.


procedure icon  To Copy and Replace Physical Drives

You can copy and replace existing physical drives with drives of the same or higher capacity. Because the logical drive uses the capacity of its smallest member drive, all drives must be replaced with drives of the same or higher capacity. For example, as shown in FIGURE 8-1, a logical drive that originally contains three 36-Gbyte member drives can be replaced with new 73-Gbyte member drives.
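The smallest-drive rule above can be shown in one line. This sketch only illustrates why every member must be replaced before any added capacity becomes usable; the drive sizes are the example values used in this section.

```python
# The logical drive is limited by its smallest member drive (illustration only).
member_gb = [36, 36, 73]   # one 36-Gbyte drive already replaced by a 73-Gbyte drive
print(min(member_gb))      # usable per-drive capacity is still 36 Gbyte
```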



Note - To use the additional capacity provided by drives with higher capacity, you need to expand the capacity as explained in To Expand the Capacity of a Logical Drive or Logical Volume.



Diagram showing three original 36-Gbyte member drives being replaced with three new 73-Gbyte member drives using Copy and Replace.

FIGURE 8-1 Copying and Replacing Physical Drives

1. Select the array that you want to configure.

2. Choose Configuration right arrow Custom Configure.

3. Select Dynamically Grow and/or Reconfigure LDs/LVs from the Custom Configuration Options window.

4. Select the logical drive for which you are going to perform the copy and replace operation.

5. Select the Copy and Replace Drive tab on the Dynamically Grow and/or Reconfigure LDs/LVs window.

6. From the Drive to Copy Data From list, select the new hard drive.

7. From the Drive to Copy Data To list, select the hard drive that is going to be replaced, and click OK.


Screen capture showing the Dynamically Grow and/or Reconfigure Logical Drives window with the Copy and Replace Drive tab displayed.

8. Click OK in the Confirm Configuration Operation window to complete the operation, and click Close.

9. When the operation is complete, close the progress window.

10. To use the additional capacity provided by the new drives, follow the instructions in To Expand the Capacity of a Logical Drive or Logical Volume.

The Logical Drive/Logical Volume Number

For important information regarding the logical drive/logical volume number displayed in the LD/LV field in the Dynamically Grow and/or Reconfigure LDs/LVs window, see The Logical Drive/Logical Volume Number.


procedure icon  To Scan in New Hard Drives (SCSI only)

When a SCSI array is powered on, the controller scans all physical drives that are connected through drive channels. Unlike FC and SATA arrays, if a SCSI array has completed initialization and then a physical drive is connected, the controller does not automatically recognize the new drive until the next controller reset. This difference in behavior is due to differences between Fibre Channel and SCSI architectures and protocols.

A SCSI hard drive can be scanned in and made available without having to shut down the array by performing the following steps.

1. Double-click the array.

The View Controller Configuration window is displayed.

2. Select the Physical Drives tab, and click Scan SCSI Drive.

If a drive fails, the Scan SCSI Drive button is also displayed on the Physical Drive window. You can select a physical drive, select View, and click Scan SCSI Drive from the View Physical Drive window.


Screen capture showing the View Controller Configuration window with the Physical Drives tab displayed.

3. Select the correct Channel and ID on which the drive was inserted.


Screen capture showing the Input Channel/ID dialog box.

If the scan was successful, the drive is appropriately displayed in the main window and made available.


procedure icon  To Download RAID Controller Firmware

The following procedures are used to upgrade the controller firmware in both single and redundant controller configurations.

1. Select the controller.

2. Choose Array Administration right arrow Controller Maintenance.

3. If you are not already logged in as ssconfig, a password prompt is displayed; type the ssconfig password.

The Controller Maintenance Options window is displayed.


Screen capture showing the Controller Maintenance Options window.

4. If upgrading firmware only (not boot record), select the Download Firmware option.

The Select Firmware File window is displayed.


Screen capture showing the Select Firmware File window.

5. Select the firmware you want to download, and click Open.

The Confirmation Dialog prompt is displayed.


Screen capture showing the Confirmation Dialog message box.

6. Click Yes.

A progress bar is displayed as the firmware downloads to the RAID controller.


Screen capture showing the Downloading Firmware to the RAID Controller progress bar.

7. When the progress bar reaches 100%, click OK.

8. After the firmware has been downloaded, check the settings to make sure they are configured correctly.


procedure icon  To Upgrade Firmware and Boot Record

1. Choose Array Administration right arrow Controller Maintenance.

2. If you are not already logged in as ssconfig, a password prompt is displayed; type the ssconfig password.

The Controller Maintenance Options window is displayed.

3. Select Download Firmware with Boot Record.

The Select Boot Record File window is displayed.


Screen capture showing the Select Boot Record File window.

4. Select the boot record and click Open.

5. Select the appropriate firmware file.

The Select Firmware File window is displayed.

6. Click Open.

The Confirmation Dialog window is displayed.

7. Repeat Steps 6 through 8 in the previous subsection.


Downloading Firmware for Devices

This option enables you to upgrade the firmware on hard drives and SAF-TE/SES devices.


procedure icon  To Upgrade Firmware on Hard Drives

1. Select the array.

2. Choose Array Administration right arrow Download FW for Devices.

3. Click the Download FW for Disks tab.

4. Select either To All disks under Controller, and select an array from the menu, or select To All disks under LD, and select the logical drive from the menu.


Screen capture showing the Download Firmware for Disk or SAFTE Device window with Download FW for Disks tab displayed.

5. Click Browse and locate the firmware file you want to download.

6. Select the firmware file, click Open, and click OK.

The firmware starts to download.

7. When the progress reaches 100%, click OK.

8. To verify that the firmware has downloaded successfully, select View right arrow View Physical Drive, and make sure the firmware version has changed in the Product Revision field.

9. So that the console displays properly, you need to probe for new inventory.

Select the server icon and choose View right arrow View Server right arrow Probe to send a command to the selected server to probe for new inventory.


procedure icon  To Upgrade Firmware on SAF-TE/SES Devices



Note - SAF-TE devices are used by SCSI arrays and SES devices are used by Fibre Channel arrays.


1. Select the array.

2. Choose Array Administration right arrow Download FW for Devices.

3. Click the Download FW for SAF-TE/SES Devices tab.


Screen capture showing the Download Firmware for Disk or SAFTE Device window with Download FW for SAFTE Devices tab displayed.

4. Click Browse and locate the download firmware file.

5. Select the download firmware file, click Open, and click OK.

The firmware starts to download and two progress windows are displayed.

6. When the progress reaches 100%, click OK.

7. To verify that the firmware has downloaded successfully, select View right arrow View Enclosure, and make sure the firmware version has changed in the Firmware Rev field.

8. So that the console displays properly, you need to probe for new inventory.

Select the server icon and choose View right arrow View Server right arrow Probe to send a command to the selected server to probe for new inventory.


procedure icon  To Change Controller Parameters

1. Select the array.

2. Choose Configuration right arrow Custom Configure.

If necessary, log in to the configuration level of the program with the ssconfig password. The Custom Configuration Options window is displayed.

3. From the Custom Configuration Options window, select Change Controller Parameters.

The Change Controller Parameters window with the Channel tab selected is displayed.


Screen capture showing the Change Controller Parameters window with the Channel tab displayed.



Note - For the Sun StorEdge 3510 FC array and the Sun StorEdge 3511 SATA array, the CurClk is 2.0 GHz.




caution icon Caution - Do not specify a new nonzero value unless you have replaced the chassis and the original chassis serial number must be retained. It is especially important in a Sun Cluster environment to maintain the same disk device names in a cluster. Do not change the controller unique identifier unless instructed to do so by qualified service personnel. Changes made to the Controller Unique ID do not take effect until the controller is reset.



procedure icon  To Save Changed Values

The options on the Change Controller Parameters window specified in TABLE 8-2 require that the controller be reset so that the changes take effect.


TABLE 8-2 Change Controller Parameters That Require a Reset

Option                                                 Tab
Controller Unique ID                                   All
Channel Mode                                           Channel (Change Channel Settings)
Default Transfer Width                                 Channel (Change Channel Settings)
Termination                                            Channel (Change Channel Settings)
Default Sync Clock                                     Channel (Change Channel Settings)
Write Back Cache (only in firmware later than 3.31)    Cache
Optimization                                           Cache
SCSI I/O Timeout(s)                                    Drive I/F
Max Queued IO Count                                    Host I/F
Fibre Connection (FC and SATA only)                    Host I/F
LUNs Per Host                                          Host I/F
Controller Configuration                               Redundancy

If a change requires a controller reset, the following message is displayed in the lower left side of the window:

[Controller reset is required for changes to take effect.]

To reset the controller and save changed values, you can either select the Controller Reset check box at the time of making the change, or reset the controller later through the Controller Maintenance window (see To Reset the Controller). If you are making multiple changes, you might not want to stop and reset the controller after each change. If the change requires a reset and you do not select the Controller Reset check box, a warning message is displayed when you click OK:


Screen capture showing the warning message that is displayed for a change made to controller parameters that requires a controller reset.

1. Select the Controller Reset check box.

2. Make the changes and click OK.

or

1. Do not select the Controller Reset check box.

2. Make the changes and click OK.

3. Reset the controller later as explained in To Reset the Controller.

Channel Tab

1. From the Channel tab, select the channel to be edited.

2. Click Change Settings.

The Change Channel Settings window is displayed. For the server to recognize the array, a host channel must have an ID assigned to it, and a logical drive must be mapped to that host channel and ID. This window enables you to configure the host or drive channel.


Screen capture showing the Change Channel Settings dialog box.

3. From the Channel Mode list box, select either Host or Drive.

A drive channel is the channel to which the drives, internal or external, are connected. A host channel is the channel that connects to the server. The most common reason to change the Channel Mode from Host to Drive is to attach expansion units to a RAID array.



Note - The Sun StorEdge 3310 SCSI array and the Sun StorEdge 3320 SCSI array support a maximum of two host channels.




Note - Depending on the controller configuration, you might need to select both primary and secondary channel IDs as described in the following steps.




caution icon Caution - Sun StorEdge arrays are preconfigured with host, drive, and RCCOM channel settings. Sun StorEdge Configuration Service cannot configure or show RCCOM channels. Before configuring a host or drive channel, review the channel assignments using the firmware application. In a redundant-controller configuration, if the RCCOM channel settings are overwritten using Sun StorEdge Configuration Service, intercontroller communication stops and unexpected results might occur. For more information, refer to the Sun StorEdge 3000 Family RAID Firmware User’s Guide.


4. From the Available SCSI IDs list box, select the primary channel ID, which is designated as PID, and click Add PID.

5. If you have two controllers installed, select a secondary channel ID from the Available SCSI IDs list box, and click Add SID.



Note - For the Sun StorEdge 3310 SCSI array and the Sun StorEdge 3320 SCSI array, if you add more than four host channel IDs, the LUNs Per Host ID parameter (see Host I/F Tab) must be set to a value less than 32.


6. For changes to take effect, reset the controller.

Changing Host ID in a Fibre or SATA Configuration

1. If you want an ID higher than 15, select the desired range from the Select SCSI Range list box.



Note - Each channel’s ID must be within the same range.


2. Click Remove to remove the PID or SID.

3. Once your selections have been made, click OK to redisplay the previous window.

RS 232 Tab

RS 232 parameters enable you to set the baud rate of the RS 232 connection.

1. After all channel settings have been made, from the Change Controller Parameters window, select the RS 232 tab.


Screen capture showing the Change Controller Parameters window with the RS 232 tab displayed.

2. Select the port desired, and click Change Settings.

The Change RS232 Port Settings window is displayed.

3. Select the desired baud rate (the default setting is 38400), and then click OK to return to the previous window.


Screen capture showing the Change RS232 Port Settings dialog box.

4. Click OK.

Cache Tab

1. From the Change Controller Parameters window, select the Cache tab.


Screen capture showing the Change Controller Parameters window with the Cache tab displayed.

2. To specify write back as the default cache, click the Write Back Cache list box and select Enabled.

The write policy determines when cached data is written to the disk drives. The ability to hold data in cache while it is being written to disk can increase storage device speed during sequential reads. Write policy options include write-through and write-back.

Using write-back cache, the controller receives the data to write to disk, stores it in the memory buffer, and immediately sends the host OS a signal that the write operation is complete, before the data is actually written to the disk drive. Write-back caching improves the performance of write operations and the throughput of the controller card. Write-back cache is enabled by default.

Using write-through cache, the controller writes the data to the disk drive before signaling the host OS that the process is complete. Write-through cache has lower write operation and throughput performance than write-back cache, but it is the safer strategy, with minimum risk of data loss on power failure. Because a battery module is installed, power is supplied to the data cached in memory and the data can be written to disk when power is restored. When write-back cache is disabled, write-through cache becomes the default write policy.

The setting you specify is the default global cache setting for all logical drives. You can override this setting per logical drive when you create a logical drive.

3. Select an Optimization mode.

The Optimization mode indicates the amount of data that is written across each drive. The controller supports two optimization modes, sequential I/O and random I/O. Sequential I/O is the default mode.

The RAID array’s cache optimization mode determines the cache block size used by the controller for all logical drives. An appropriate cache block size improves performance when a particular application uses either large or small stripe sizes.

Since the cache block size works in conjunction with the default stripe size set by the cache optimization mode for each logical drive you create, these default stripe sizes are consistent with the cache block size setting. You can, however, specify a different stripe size for any logical drive at the time you create it. See Specifying Non-Default Stripe Sizes for more information.

Refer to the Sun StorEdge 3000 Family RAID Firmware User’s Guide for more information on cache optimization modes.



Note - Once logical drives are created, you cannot use the RAID firmware’s “Optimization for Random I/O” or “Optimization for Sequential I/O” menu option to change the optimization mode without deleting all logical drives. You can use Sun StorEdge Configuration Service, as described above, or the Sun StorEdge CLI set cache-parameters command to change the optimization mode while logical drives exist. Refer to the Sun StorEdge 3000 Family CLI User’s Guide for information on the set cache-parameters command.


Specifying Non-Default Stripe Sizes

Depending on the optimization mode and RAID level selected, newly created logical drives are configured with the default stripe sizes shown in TABLE 8-3.


TABLE 8-3 Default Stripe Size Per Optimization Mode (Kbyte)

RAID Level    Sequential I/O    Random I/O
0, 1, 5       128               32
3             16                4

When you create a logical drive, you can replace the default stripe size with one that better suits your application.
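The defaults in TABLE 8-3 can also be expressed as a small lookup, shown here as an illustrative sketch (the `stripe_size` function name and the `seq`/`rnd` mode labels are this example's own, not part of the product):

```shell
# Illustrative lookup of the TABLE 8-3 default stripe sizes (Kbyte).
# "seq" = sequential I/O optimization, "rnd" = random I/O optimization.
stripe_size() {
  case "$1:$2" in
    0:seq|1:seq|5:seq) echo 128 ;;
    0:rnd|1:rnd|5:rnd) echo 32 ;;
    3:seq)             echo 16 ;;
    3:rnd)             echo 4 ;;
    *)                 echo "unknown" ;;
  esac
}

stripe_size 5 seq   # prints 128
stripe_size 3 rnd   # prints 4
```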



Note - Default stripe sizes optimize performance for most applications.


Refer to the Sun StorEdge 3000 Family RAID Firmware User’s Guide for information about how to set the stripe size for a logical drive.

Once the stripe size is selected and data is written to logical drives, the only way to change the stripe size of an individual logical drive is to back up all its data to another location, delete the logical drive, and create a logical drive with the stripe size that you want.

Maximum Number of Disks and Maximum Usable Capacity for Random and Sequential Optimization

The maximum capacity per logical drive supported by the RAID firmware depends on the optimization mode.

Actual logical drive maximum capacities are usually determined by practical considerations or the amount of disk space available.



caution icon Caution - In FC and SATA configurations with large drive capacities, the size of the logical drive might exceed the device capacity limitation of your operating system. Be sure to check the device capacity limitation of your operating system before creating the logical drive. If the logical drive size exceeds the capacity limitation, you must partition the logical drive.


Refer to the Sun StorEdge 3000 Family RAID Firmware User’s Guide for details regarding maximum usable capacity of a logical drive, depending on RAID level and optimization mode.

4. Set Periodic Cache Flush Time.

Setting a Periodic Cache Flush Time enables the controller to flush cache to logical drive storage at specified intervals. This safety measure prevents the accumulation of data in cache that could be lost in the event of power loss.

Select one of the following values:



Note - Setting this value to an interval less than one minute (Continuous Sync or 30 sec) might affect performance.


5. For changes to take effect, reset the controller.

Disk Array Tab

1. From the Change Controller Parameters window, select the Disk Array tab.


Screen capture showing the Change Controller Parameters window with the Disk Array tab displayed.

2. Select either Disabled or Enabled from the three Write Verify list boxes.

Errors might occur when a hard drive writes data. To avoid write errors, the controller can force the hard drives to verify the written data.

3. Select from the four options available in the Rebuild Priority list box: Low, Normal, Improved, or High.

The RAID controller provides a background rebuilding ability. This means the controller is able to serve other I/O requests while rebuilding the logical drives. The time required to rebuild a drive set largely depends on the total capacity of the logical drive being rebuilt. Additionally, the rebuilding process is totally transparent to the host computer or the OS.

Drive I/F Tab

1. From the Change Controller Parameters window, select the Drive I/F tab.


Screen capture showing the Change Controller Parameters window with the Drive I/F tab displayed.

2. From the Drive Motor Spin Up field, select either Disabled or Enabled.

Drive Motor Spin Up determines how the physical drives in a disk array are started. When the power supply is unable to provide sufficient current for all physical drives and controllers that are powered up at the same time, spinning up the physical drives serially requires less current.

If Drive Motor Spin Up is enabled, the drives are powered up sequentially, and some of these drives might not be ready for the controller to access when the array powers up. Increase the disk access latency so that the controller waits longer for the drives to be ready.

3. Set the Disk Access Latency.

This function sets the delay time before the controller tries to access the hard drives after power on. The default is 15 seconds.

4. Set the Tag Count Per drive.

This is the maximum number of tags that can be sent to each drive at the same time. A drive has a built-in cache that is used to sort all of the I/O requests (tags) that are sent to the drive, enabling the drive to finish the requests faster.

The cache size and maximum number of tags varies between different brands and models of drive. Use the default setting of 32. Changing the maximum tag count to Disable causes the internal cache of the drive to be ignored (not used).

The controller supports tag command queuing with an adjustable tag count from 1 to 128.

5. From the SAF-TE/SES Polling Period(s) field, select one of the time options shown in the list box, or select Disabled to disable this function so that installed Event Recording Modules (ERMs) are never polled.

If there are remote devices in your RAID enclosure monitored by SAF-TE or SES, use this function to determine the interval after which the controller checks the status of those devices.

6. From the SCSI I/O Timeout(s) field, select from 0.5 through 30 seconds.

The SCSI I/O Timeout is the time interval for the controller to wait for a drive to respond. If the controller attempts to read data from or write data to a drive but the drive does not respond within the SCSI I/O timeout value, the drive is considered a failed drive. The default setting for SCSI I/O Timeout is 30 seconds.



caution icon Caution - Do not change this setting. Setting the timeout to a lower value causes the controller to judge a drive as failed while a drive is still retrying or while a drive is unable to arbitrate the SCSI bus. Setting the timeout to a greater value causes the controller to keep waiting for a drive, and it might sometimes cause a host timeout.


When the drive detects a media error while reading from the drive platter, it retries the previous reading or recalibrates the head. When the drive encounters a bad block on the media, it reassigns the bad block to another spare block on the same disk drive. However, all of this takes time. The time to perform these operations can vary between different brands and models of drives.

During SCSI bus arbitration, a device with higher priority can use the bus first. A device with lower priority sometimes receives a SCSI I/O Timeout when devices of higher priority keep using the bus.

7. From the Drive Check Period(s) field, select from 0.5 through 30 seconds.

The Periodic Drive Check Time is an interval for the controller to check the drives on the SCSI bus. The default value is Disabled, which means if there is no activity on the bus, the controller does not know if a drive has failed or has been removed. Setting an interval enables the program to detect a drive failure when there is no array activity; however, performance is degraded.

8. Enable or disable Auto Assign Global Spare Drive.

This feature is disabled by default. When you enable it, the controller automatically assigns a global spare to the unused drive with the lowest drive ID. This enables the array to rebuild automatically without user intervention when a drive is replaced.

Host I/F Tab

9. From the Change Controller Parameters window, select the Host I/F tab.


Screen capture showing the Change Controller Parameters window with the Host I/F tab displayed.

10. Set the Max Queued IO Count.

This function enables you to configure the maximum number of I/O operations per logical drive that can be accepted from servers. The predefined range is from 1 to 1024 I/O operations per logical drive, or you can choose the “Auto Compute” (automatically configured) setting. The default value is 1024 I/O operations per logical drive.

The appropriate setting depends on how many I/O operations the attached servers and the controller itself are performing. This can vary according to the amount of host memory present, the number of drives and their size, and buffer limitations. If you increase the amount of host memory, add more drives, or replace drives with larger drives, you might want to increase the maximum I/O count. But optimum performance usually results from using the “Auto Compute” or “256” settings.

11. (FC and SATA only). Select the type of Fibre Connection.

Sun StorEdge 3510 FC arrays and Sun StorEdge 3511 SATA arrays support the following Fibre connection protocols:

Refer to the Sun StorEdge 3000 Family RAID Firmware User’s Guide for more information about point-to-point and loop protocols.

12. Set the LUNs Per Host.

This function is used to change the maximum number of LUNs you can configure per host ID. Each time a host channel ID is added, it uses the number of LUNs allocated in this setting. The default setting is 32 LUNs, with a predefined range of 1 to 32 LUNs available.



Note - For the Sun StorEdge 3310 SCSI array and the Sun StorEdge 3320 SCSI array, the maximum number of LUN assignments is 128; therefore, if you use the default setting of 32, you can only add four host channel IDs (4 x 32 = 128). If you added more than four host channel IDs (see Channel Tab), the LUNs Per Host parameter must be set to a value less than 32.
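
The note's arithmetic generalizes: dividing the 128 maximum LUN assignments by the number of host channel IDs gives the largest LUNs Per Host value that still fits. A worked example, using a hypothetical configuration of five host channel IDs:

```shell
# Worked example of the LUN budget described in the note above.
# host_ids=5 is a hypothetical value for illustration.
max_lun_assignments=128
host_ids=5
echo $(( max_lun_assignments / host_ids ))   # prints 25
```

With five host channel IDs, 25 is the largest workable LUNs Per Host setting, since 5 x 25 = 125 fits within 128 while 5 x 26 = 130 does not.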


13. (Optional) To increase the security of the data stored on the array, you can prevent in-band management through a SCSI or FC interface by selecting Disable for In-Band External Interface Management.



caution icon Caution - If you are managing the array in-band, selecting Disable for In-Band External Interface Management disables communication with the array. If you want to continue monitoring this array, select this option only when you are managing the array out-of-band. For the steps to switch to out-of-band management, see To Use Out-of-Band Management.


After selecting Disable for In-Band External Interface Management, select the server icon and choose View right arrow View Server right arrow Probe. It takes several minutes for the console to update.

14. If you made changes to the Fibre Connection protocol, for changes to take effect, reset the controller.

Redundancy Tab

1. From the Change Controller Parameters window, select the Redundancy tab.


Screen capture showing the Change Controller Parameters window with the Redundancy tab displayed.

2. Select an option from the Set Controller Config field.



Note - Set both controllers in the Redundant Primary configuration. The controllers then determine which one is primary and which one is secondary. This prevents any possible conflicts between controllers.


3. When an array with redundant controllers is operating with write-back cache enabled, you can disable the synchronization of cache between the two controllers by selecting Not Synchronized from the Write-Through Cache Synchronization list box.



caution icon Caution - Disabling cache synchronization and eliminating the mirroring and transferring of data between controllers can improve array performance, but it also eliminates the safeguard provided by cache synchronization if one of the controllers fails.


4. For changes to take effect, reset the controller.

5. Click OK to return to the main menu.

Peripheral Tab

The Peripheral tab enables you to configure the array to dynamically switch write policy from write-back cache to write-through cache when a specified event occurs or threshold is exceeded. Once the problem is corrected, the original write policy is restored. You can also configure the controller to shut down if it exceeds the temperature threshold.

The Peripheral Device Status box enables you to view the status of all environmental sensors for the controller. (For environmental status of the chassis, see View Enclosure.)

1. From the Change Controller Parameters window, select the Peripheral tab.


Screen capture showing the Change Controller Parameters window with the Peripheral tab displayed.

2. Enable or disable event trigger operations.

If the array is configured with write-back cache enabled, specify whether you want the write policy to dynamically switch from write-back cache to write-through cache when the following events occur:



Note - Once the problem is corrected, the original write policy is restored.


If you do not want the write policy to be switched dynamically, set these options to Disable. They are enabled by default.

For more information about write-back and write-through, see To Add a Logical Drive or Logical Volume From New Logical Drives.

3. Enable or disable over-temperature controller shutdown.

If you want the controller to shut down immediately if the temperature exceeds the threshold limit, select Enable in the Temperature Exceeds Threshold field; otherwise, select Disable.


When the controller shuts down, the controller icon in the main window displays a yellow (degraded) device status symbol.

4. If you want the controller to shut down after the temperature exceeds the threshold limit but not before a specified interval, select a time from the Temperature Exceeds Threshold Period field:


procedure icon  To View Environmental Status for the Controller

1. From the Change Controller Parameters window, select the Peripheral tab.

2. Click the right scroll bar and scroll down until the Peripheral Device Status box is displayed.


Screen capture showing Peripheral Device Status.

3. In the Peripheral Device Status box, click the scroll bar and scroll down to view environmental status information.


The threshold ranges for peripheral devices are set using the firmware application. If a device exceeds the threshold range that was set, its status displays “Over upper threshold.” If a device falls below the threshold range, its status displays “Under lower threshold.” Both events cause the controller icon in the main window to display a red (critical) device status symbol.

For information on how to set the threshold ranges, refer to the Sun StorEdge 3000 Family RAID Firmware User’s Guide.

Network Tab

1. From the Change Controller Parameters window, select the Network tab.


Screen capture showing the Change Controller Parameters window with the Network tab displayed.

2. To manually configure an IP address, subnet mask, or gateway address, click Change Settings.

The Change Network Settings window is displayed.


Screen capture showing the Change Network Settings window.



Note - Sun StorEdge 3000 Family arrays are configured by default with the Dynamic Host Configuration Protocol (DHCP) TCP/IP network support protocol enabled. If your network uses a DHCP server, the server assigns an IP address, netmask, and gateway IP address to the RAID array when the array is initialized or subsequently reset.


3. If you have set up an array in an environment with a RARP server:

a. Remove DHCP from the Selected box in the Dynamic IP Assignment Mechanism List.

b. Add RARP to the Selected box in the Dynamic IP Assignment Mechanism List.



Note - The firmware does not support multiple IP assignment mechanisms. If a protocol is currently selected, you must remove it before adding another protocol.


4. If you prefer to have a static IP address:

a. Deselect the Enable Dynamic IP Assignment check box.

b. Type the static IP address, the subnet mask, and the gateway IP address into the appropriate boxes under Static IP Information.

5. Click OK.

6. When prompted to reset the controller, click Yes.

Protocol Tab

For security reasons, you can enable only the network protocols you want to support, which limits the ways in which security can be breached.

1. From the Change Controller Parameters window, select the Protocol tab.


Screen capture showing the Change Controller Parameters window with the Protocol tab displayed.

2. Select which protocols to enable or disable.

The protocols are enabled or disabled by default as follows:



Note - The PriAgentAll protocol must remain enabled for Sun StorEdge Configuration Service and the CLI to receive information from the controller firmware. Do not disable this protocol.



procedure icon  To Mute the Controller Beeper

When an event occurs that causes the controller to beep, for example, when a logical drive fails, during a rebuild, or when adding a physical drive, you can mute the beeper in one of two ways.

1. Select the desired controller icon in the main window.

2. Choose Array Administration right arrow Controller Maintenance.

3. If you are not already logged in as ssconfig, a password prompt is displayed; type the ssconfig password.

The Controller Maintenance Options window is displayed.

4. Click Mute Controller Beeper.

or

1. Select the desired controller icon in the main window.

2. Choose Configuration right arrow Custom Configure.

3. Select Change Controller Parameters.

4. Select Mute Beeper.



Note - If the alarm is caused by a failed component, muting the beeper has no effect. You need to push the Reset button on the right ear of the array. See View Enclosure for more information about component failure alarms.



procedure icon  To Assign or Change Standby Drives

A standby drive acts as a spare to support automatic data rebuilding after a physical drive in a fault-tolerant (non-RAID 0) logical drive fails. For a standby drive to take the place of another drive, it must be at least equal in size to the failed drive and all of the logical drives dependent on the failed disk must be redundant (RAID 1, 3, 5, or 1+0).

With this function you can assign a global or local standby drive, or change a ready drive’s state to standby or a standby drive’s state to ready. A drive that is assigned as a global spare rebuilds data if a member drive of any existing logical drive fails. You can have one or more standby drives associated with an array controller. Global spares are used in the order in which they are created. A local spare is assigned to a particular logical drive and rebuilds data only for a member of that logical drive.

1. In the main window, select the desired array controller.

2. Choose Configuration right arrow Custom Configure or click the Custom Configuration tool.

If necessary, log into the configuration level of the program with the ssconfig password. The Custom Configuration Options window is displayed.

3. Select Make or Change Standby Drives from the Custom Configuration Options window.

The Make or Change Standby Drives window is displayed.


Screen capture showing the Make or Change Standby Drives window.

4. Check the server and the controller IDs at the top of the window.

If you want to select a different server or controller, click Cancel to return to the main window, select the correct server or controller from the tree view, and repeat Steps 2 and 3.

5. Select a drive to be assigned or changed.

6. Change or assign the drive’s state by selecting Ready, Global StandBy, or StandBy for LD# (local).

7. Click Modify.

8. Click Apply, and then click Close.

9. Whenever you make changes to the configuration, save the new configuration to a file. For details, see Configuration File.


Available Servers

Occasionally, you might need to edit or delete an entry from the Available or Managed Servers lists in the Server List Setup window.


procedure icon  To Edit a Server Entry

1. Choose File right arrow Server List Setup. The Server List Setup window is displayed.

If necessary, move the server name from the Managed Servers list to the Available Servers list in the Server List Setup window. Note that only the server entries in the Available Servers list can be edited.


Screen capture showing the Server List Setup window.

2. Select the name of the server in the Available Servers list, and click Edit.

The Edit Server window is displayed.


Screen capture showing the Edit Server window.

3. Make the necessary changes. Click OK to register your changes.

For descriptions of the fields in this window, see To Add Servers. The Add Server and Edit Server windows contain the same fields.

IP Address Shortcut: If the network address has changed, click Get IP Addr by Name. The program searches for and displays the correct IP address if you typed the name of the server as it is recorded by the name service used by the network.

If the name used for the server is not the same as the server’s network name or if the naming service is not yet updated, delete the server and add it again.

4. Move the server name back to the Managed Servers list.

5. Click OK to exit the Edit Server window.


Updating the Object Data Manager on an IBM AIX Host

For an IBM AIX host, to ensure that the environment is stable and accurate after making configuration changes, you need to update the Object Data Manager (ODM).


procedure icon  To Update the ODM

1. Run the following command for each deleted disk:


# rmdev -l hdisk# -d

where # is the number of the disk that was removed.



caution icon Caution - Never remove hdisk0.


To remove multiple disks (hdisk1 up to hdisk19), run the following commands:


# /usr/bin/ksh93
# for ((i=1; i<20; i++))
> do
> rmdev -l hdisk$i -d
> done

If the rmdev command returns disk busy errors, use the command line, smit, or smitty to make sure that any previously created volume groups have been varied off and that no file systems are mounted on the devices. It might also be necessary to run exportvg on persistent volume groups. If exportvg does not work, try rebooting.
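For example, assuming a hypothetical volume group named datavg with a file system mounted at /data (substitute your own volume group and mount point), the device can be released before running rmdev as follows:


# umount /data
# varyoffvg datavg
# exportvg datavg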

2. If using a JBOD, run the same command for generic devices, which can be determined from the results returned from running the following command:


# lsdev -Cc generic

3. Run the following commands:


# /usr/bin/ksh93
# for ((i=1; i<20; i++))
> do
> rmdev -l gsc$i -d
> done

4. Delete references in the /dev directory by running the command:


# rm /dev/gsc*

5. Stop and start the agent and reread the system configuration into the ODM by running the following commands:


# ssagent stop
# ssagent start
# cfgmgr -v



caution icon Caution - Depending on the number of devices present in the OS, this command might take several minutes to complete. Do not make any configuration changes until cfgmgr has completed.