CHAPTER 11

System Maintenance

This chapter describes system maintenance functions. It includes the following sections:

- Setting Remote Access Options
- Configuring FTP Access
- Shutting Down the Server
- Locating a Drive or Controller/Expansion Unit
- Configuring the LAN Manager Compatibility Level
- Managing File-System Checkpoints
- Managing RAID Controllers
- Mounting File Systems
- Setting Up NDMP Backups
- Updating the Time Zone Database
- Enabling CATIA V4/V5 Character Translations
- Backing Up Configuration Information
- Upgrading NAS Software
- Configuring the Compliance Archiving Software
- Upgrading Array and Drive Firmware Revision Levels


Setting Remote Access Options

System security features include the ability to set remote access options. You can enable or disable network services used to remotely access the system. You can run the system in Secure Mode for maximum security or you can specifically enable certain remote access features such as Telnet, Remote Login, and Remote Shell.

The secure services are Secure Web Administrator, which runs the Hypertext Transfer Protocol (HTTP) over the Secure Sockets Layer (SSL), and Secure Shell (ssh).

To set remote access security:

1. From the Web Administrator navigation panel, choose System Operations > Set Remote Access.

2. Check the Secure Mode checkbox for maximum security. In Secure Mode, you can enable only Secure Web Administrator and Secure Shell by checking the associated checkboxes.

3. If you are not using Secure Mode, select the checkbox for each service you want to enable.

4. Click Apply.

5. If you have selected Secure Mode, restart the server for the settings to go into effect. For more information, see Shutting Down the Server.


Configuring FTP Access

This section provides information about configuring File Transfer Protocol (FTP) access. It includes the following subsections:

- About Configuring FTP Access
- Setting Up FTP Users


About Configuring FTP Access

File Transfer Protocol (FTP) is an Internet protocol used to copy files between a client and a server. FTP requires that each client requesting access to the server be identified with a user name and password.

You can set up three types of users:

The administrator has root access to all volumes, directories, and files on the system. The administrator's home directory is defined as "/" (the root directory).

The user has access to all existing directories and files within the user's home directory. The home directory is defined as part of the user's account information and is retrieved by the name service.

Note: Guest users cannot rename, overwrite, or delete files; cannot create or remove directories; and cannot change permissions of existing files or directories.


Setting Up FTP Users

To set up File Transfer Protocol (FTP) users:

1. From the Web Administrator navigation panel, choose Unix Configuration > Set Up FTP.

2. Check the Enable FTP checkbox.

3. Select the type of FTP access by checking the appropriate checkboxes:

Note: User names and passwords must be specified in the local password file or on a remote network information service (NIS), NIS+, or Lightweight Directory Access Protocol (LDAP) name server.

Note: A root user is a user with a UID equal to 0, including the user with the special Sun StorageTek user name admin.

4. To enable logging, select the Enable Logging checkbox and specify the log file pathname.

The log file is saved to the exported volume you specify on the NAS server. For example, /vol1/ftplog will save the log file named ftplog to the directory /vol1.

5. Click Apply to save settings.


Shutting Down the Server

To shut down, halt, or reboot the server:

1. From the Web Administrator navigation panel, choose System Operations > Shut Down the Server.

2. Select the type of shutdown that you want to perform. For detailed information about the available shutdown options, see Shut Down the Server Panel.


Caution: Check with Sun Services before selecting the Reboot Previous Version option.

3. Click Apply.


Locating a Drive or Controller/Expansion Unit

To locate a particular drive, controller unit, or expansion unit:

1. From the Web Administrator navigation panel, choose RAID > Manage RAID.

2. Click the Locate Drive or Locate Drive Tray button.

3. From the drive images shown, select (click on) the drive you want to locate, or any drive in the controller/expansion unit you want to locate.


4. Click the button to cause the drive indicator lights to flash for the selected drive or controller/expansion unit.

5. After you physically locate the flashing drives, click the button to stop the drive indicator lights from flashing.


Configuring the LAN Manager Compatibility Level

The NAS appliance or gateway system can be configured to run in either of two security modes under Windows: Workgroup mode or NT domain mode. The LAN Manager (LM) compatibility level controls the type of user authentication used in each mode, and is assigned as a numeric value in the range 2 through 5.

By default, the LM compatibility level is 3. The authentication types sent and accepted at each level are shown in the table below.

For more information, go to the following web site and search for LM authentication and NTLM 2:

http://support.microsoft.com

You can change the LM compatibility level using the lmcompatibility sub-command of the smbconfig CLI command, as follows:

1. To view the current LM compatibility level, issue the smbconfig lmcompatibility command without any arguments:

smbconfig lmcompatibility

2. To set the LM compatibility level, use the level keyword, as shown below:

smbconfig lmcompatibility level=4

where 4 is the desired LM compatibility level, in the range 2 through 5, as detailed below:


Level   Sent by SMB Redirector    Accepted by SMB Server
        (in NT Domain Mode)       (in Windows Workgroup Mode)

2       NTLM                      LM, NTLM, LMv2, NTLMv2

3       LMv2, NTLMv2              LM, NTLM, LMv2, NTLMv2

4       LMv2, NTLMv2              NTLM, LMv2, NTLMv2

5       LMv2, NTLMv2              LMv2, NTLMv2



Managing File-System Checkpoints

This section provides information about managing file-system checkpoints. It includes the following subsections:

- About File-System Checkpoints
- Enabling File-System Checkpoints
- Scheduling File-System Checkpoints
- Creating a Manual Checkpoint
- Renaming a Checkpoint
- Removing a Checkpoint
- Sharing File-System Checkpoints
- Accessing Checkpoints


About File-System Checkpoints

A checkpoint is a virtual read-only copy of a file volume. A checkpoint is not an online backup. Checkpoints give users access to the volume's data as it existed when the checkpoint was created, for example, to retrieve files that were later modified or deleted.

Checkpoints are stored in the same physical location as the file volume, so if the file volume is lost, so are all its checkpoints.

Starting with NAS software version 4.20, you can store up to 256 checkpoints for each file volume. For file volumes that existed before the upgrade to version 4.20, you are limited to 16 checkpoints unless you disable checkpoint processing (and then re-enable it, as necessary). Disabling checkpoints initiates the conversion from 16-checkpoint to 256-checkpoint support.

Checkpoints can be created manually, on a one-time basis, or they can be scheduled at regular intervals (for example, every evening at 11:00 p.m., or every Tuesday morning at midnight). File volumes remain operational during checkpoint processing.

A checkpoint is not created if the file volume is more than 90% full. If the file volume becomes 95% full, checkpoints are deleted in expiration order until the volume is below 95% full or until only one checkpoint remains.

At any time, you can view how many checkpoints are stored for a file volume and the total space used for checkpoints. Open File Volume Operations > Configure Checkpoints > Manage Checkpoints and look at the Status message at the top of the window to view this information.


Enabling File-System Checkpoints

Before you can create checkpoints for a file volume, you must enable checkpoint processing for that volume, as follows:

1. From the Web Administrator navigation panel, choose File Volume Operations > Edit Volume Properties.

2. From the Volumes list, select the volume for which you want to enable checkpoint processing.

3. Select the Enable Checkpoints box.

4. If you plan to create NDMP backups for the file volume, select Use for Backups under Checkpoint Configuration. NDMP performs backups from a copy of the file volume, thereby avoiding potential problems involved with backing up from the live file system.

5. If you plan to create scheduled checkpoints for the file volume, select Automatic under Checkpoint Configuration. After you select this box, the NAS software allows you to specify regularly scheduled checkpoints for that volume, as described under Scheduling File-System Checkpoints.

6. Click Apply.


Scheduling File-System Checkpoints

This section provides information about scheduling file-system checkpoints. It includes the following subsections:

- About Scheduling File-System Checkpoints
- Adding a Checkpoint to the Schedule
- Editing an Existing Checkpoint Schedule
- Removing a Schedule Line

About Scheduling File-System Checkpoints

The checkpoint schedule identifies the days and times, each week, at which the NAS software creates a checkpoint. The schedule can contain up to five checkpoint requests for each file volume.

For each scheduled checkpoint, the schedule (available through File Volume Operations > Configure Checkpoints > Schedule Checkpoints) displays the name of the file volume, a description of the checkpoint, the scheduled times and days at which a checkpoint is taken, and the length of time for which the checkpoint is to be retained (days plus hours). The schedule looks like this, as displayed using Telnet for a single volume:

                                Days      Hours AM       Hours PM       Keep
    Enabled  Description        SMTWTFS   M1234567890E   M1234567890E   Days + Hours
1.  Y        MTWTF5am5pm        -*****-   -----*------   -----*------   1 + 0
2.  Y        SunWed1pm          *--*---   ------------   -*----------   0 + 12
3.  Y        MWFmidnight        -*-*-*-   *-----------   ------------   0 + 3
4.  Y        Weekend            *-----*   *-----*-----   *-----*-----   0 + 6
5.  Y        FriEvery2hrs       -----*-   *-*-*-*-*-*-   *-*-*-*-*-*-   0 + 2


Adding a Checkpoint to the Schedule

To add a checkpoint to the schedule, first enable checkpoints for the file volume, as described under Enabling File-System Checkpoints. Then follow the steps below to add the new checkpoint:

1. From the Web Administrator navigation panel, choose File Volume Operations > Configure Checkpoints > Schedule Checkpoints.

2. Select the file volume to show the current schedule.

3. Click New to display the New Checkpoint Schedule window.

4. Click on the cell of the day/time grid to select that day and time. An unavailable cell indicates that an existing checkpoint is in effect for that time slot.

5. Type a description for the checkpoint, such as "weekly" or "daily." This is a mandatory field.

6. Type the number of days and select the number of hours for which you want to retain the checkpoint.

7. Click Apply to save your changes.

Editing an Existing Checkpoint Schedule

To edit an existing checkpoint schedule:

1. From the Web Administrator navigation panel, choose File Volume Operations > Configure Checkpoints > Schedule Checkpoints.

2. Select the file volume to show the current schedule.

3. Click Edit to display the Edit Checkpoint Schedule window.

4. Click on the cell that identifies the checkpoint you want to change.

The Description and Keep fields display information for the current checkpoint.

5. Edit the checkpoint schedule, referring to Adding a Checkpoint to the Schedule, as necessary.

6. Click Apply to save your changes.

Removing a Schedule Line

Follow these steps to remove a schedule line:

1. From the Web Administrator navigation panel, choose File Volume Operations > Configure Checkpoints > Schedule Checkpoints.

2. Select the schedule entry that you want to remove and click Remove.

Note: Disabling checkpoints from the Edit Volume Properties panel has no effect on the schedule. If checkpoints are re-enabled, the schedule remains the same.


Creating a Manual Checkpoint

In addition to taking regularly scheduled checkpoints, you can request a manual (unscheduled) checkpoint at any time. To do this, first enable checkpoints for the file volume, as described under Enabling File-System Checkpoints. Then use the Manage Checkpoints panel to create the manual checkpoint:

1. From the Web Administrator navigation panel, choose File Volume Operations > Configure Checkpoints > Manage Checkpoints.

2. Click Create.

3. Use the drop-down menu to select the file volume you want.

4. Specify the checkpoint options. For detailed information about these options, see Create Checkpoint Window.

5. Click Apply to create the checkpoint.


Renaming a Checkpoint

Follow these steps to rename a checkpoint:

Note: For automatic (scheduled) checkpoints, the NAS software depends on the system-assigned checkpoint name to identify the checkpoint, to retain it for the correct time period, and to delete it when it becomes stale. If you rename a scheduled checkpoint, it will be marked as a manual checkpoint, and it will not be deleted by the NAS software.

1. From the Web Administrator navigation panel, choose File Volume Operations > Configure Checkpoints > Manage Checkpoints.

2. Select the checkpoint you want to rename.

3. Click Rename.

The Volume Name and Old Name fields are read-only.

4. Type the new name for the checkpoint.

5. Click Apply to save your changes.


Removing a Checkpoint

You can delete any checkpoint, regardless of whether it was created using the schedule or manually.

Note: Backup checkpoints are only retained long enough to back up the file volume, and are deleted immediately thereafter by the backup software.

Note: If you disable checkpoint processing from the Edit Volume Properties panel, any checkpoints already taken are deleted immediately, regardless of their defined retention periods.

To delete a checkpoint:

1. From the Web Administrator navigation panel, choose File Volume Operations > Configure Checkpoints > Manage Checkpoints.

2. Select the checkpoint you want to remove.

3. Click Remove.


Sharing File-System Checkpoints

Checkpoints can be shared, allowing network users to access the data that was current when the checkpoint was created. Follow these steps to share checkpoints:

1. From the Web Administrator navigation panel, choose Windows Configurations > Configure Shares.

Note: Alternatively, navigate to the checkpoint file volume under the System Manager, then right-click and choose the appropriate option from the pop-up menu (typically Sharing > New Share). Checkpoint volumes have a .chkpnt extension.

2. Click Add, then fill in the fields as described below.

For detailed information about these and other fields in this window, see New Share Window.

3. Type the share name for the checkpoint in the Share Name box.

This is the name through which the checkpoint will be accessible from the network.

4. Click the Volume Name drop-down menu and select the checkpoint volume from the list. Checkpoint volumes have the .chkpnt extension.

5. Leave the Directory field blank.

6. If Active Directory Service (ADS) is enabled and configured, specify the Container field as the location in the ADS directory where the share will be published.

7. The following fields apply only if Windows Workgroup mode is enabled, as described under Configure Domains and Workgroups Panel. If they are available, complete them as appropriate for your site.

8. Click Apply.

The checkpoint share will be listed in the Configure Share panel.


Accessing Checkpoints

You can access file-system checkpoints as follows to obtain the data that was current when the checkpoints were created.

1. Using a Windows network station, click Start > Run.

2. Type the Internet Protocol (IP) address and checkpoint sharename for the NAS appliance or gateway-system server in the Run window. For example:

\\xxx.xxx.xxx.xxx\sharename

3. Click OK.

Alternatively, you can access checkpoints through the "virtual" .chkpnt directory that exists for each directory in a file volume. This directory does not show up in directory listings, and can only be accessed if you specifically name it. To do this:

1. Export the directory to your local server, then navigate to the .chkpnt directory:

my-server# mount 192.168.75.55:V2 /mnt/v2
my-server# cd /mnt/v2
my-server# cd .chkpnt

2. List the checkpoint directories, where each directory is named after an individual checkpoint:

my-server# ls
checkpoint1 checkpoint2

3. Navigate to the checkpoint you want and list its contents. This represents the files as they existed when the checkpoint was taken:

my-server# cd checkpoint1
my-server# ls
test1.txt xx2 xxf


Managing RAID Controllers

This section provides information about using the raidctl command to manage redundant array of independent disks (RAID) controllers from the CLI. The command applies to Sun StorageTek 5310 and Sun StorageTek 5320 NAS appliances and gateway systems.


Controlling LEDs

Use the command described below to control redundant array of independent disks (RAID) controller LEDs for Sun StorageTek 5310 and Sun StorageTek 5320 NAS appliances and gateway systems. Specify the variables as a particular controller, tray, or slot (also known as a column) number, respectively. Alternatively, specify 0..N to request all controllers, trays, or slots.

raidctl locate type=lsi target=tray ctlr=controller tray=tray

raidctl locate type=lsi target=drive ctlr=controller tray=tray slot=slot

raidctl locate type=lsi action=stop ctlr=controller
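For example, the following commands flash, and then stop flashing, the indicator lights for a drive in controller 0, tray 1, slot 3 (the numbers are illustrative; substitute the values for your configuration):

raidctl locate type=lsi target=drive ctlr=0 tray=1 slot=3

raidctl locate type=lsi action=stop ctlr=0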

To obtain Help on subcommands, enter raidctl help at the command line.


Getting Events and Configuration Information

Use the command described below to view redundant array of independent disks (RAID) controller events and configuration information for Sun StorageTek 5310 and Sun StorageTek 5320 NAS appliances and gateway systems. Specify the controller variable as a particular controller number, or 0..N to request all controllers.

raidctl get type=lsi target=events ctlr=controller etype=critical

The log of critical events is written to:

If the file already exists, you will be prompted to overwrite the file, specify a new file name, or cancel the operation.

raidctl get type=lsi target=profile ctlr=controller

Alternatively, you can write the information to a file on your host (profile.txt in the example below):

rsh <server> raidctl get type=lsi target=profile ctlr=controller > profile.txt

raidctl get type=lsi target=events ctlr=controller

The log of events is written to:

If the file already exists, you will be prompted to overwrite the file, specify a new file name, or cancel the operation.
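For example, the following command writes the critical-event log for controller 0 (an illustrative controller number):

raidctl get type=lsi target=events ctlr=0 etype=critical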

To obtain Help on subcommands, enter raidctl help at the command line.


Setting the Controller Time and Battery Age

Use the commands described below to set the redundant array of independent disks (RAID) controller time and battery age for Sun StorageTek 5310 and Sun StorageTek 5320 NAS appliances and gateway systems. Specify the controller variable as a particular controller number, or 0..N to request all controllers.

raidctl set type=lsi target=battery-age ctlr=controller

raidctl set type=lsi target=ctlr_time ctlr=controller
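For example, assuming the first controller is numbered 0 (an illustrative value), the following commands set the battery age and the controller time on that controller:

raidctl set type=lsi target=battery-age ctlr=0

raidctl set type=lsi target=ctlr_time ctlr=0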

To obtain Help on subcommands, enter raidctl help at the command line.


Downloading RAID Array and Drive Firmware

To download redundant array of independent disks (RAID) array and drive firmware for Sun StorageTek 5310 and Sun StorageTek 5320 NAS appliances and gateway systems, use the raidctl download command.

To obtain Help on subcommands, enter raidctl help at the command line.

Note: Refer to Upgrading Array and Drive Firmware Revision Levels for firmware upgrade procedures.


Mounting File Systems

After multiple continuous reboots, one or more file systems might become unmounted. To mount the file systems, issue the following CLI command:

mount -f volume-name
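For example, to mount a hypothetical file volume named vol1:

mount -f vol1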

Do not mount or share the /cvol file system manually. Do not modify /cvol using any method other than the Web Administrator or the console administration interface.



Note: Sun Services engineers are authorized to perform a manual mount.




Setting Up NDMP Backups

The Network Data Management Protocol (NDMP) is an open protocol for network-based backup. NDMP architecture lets you use any NDMP-compliant backup administration application to back up your network attached storage device.


Caution: In a cluster configuration, do not configure both heads to be in the same switch zone as the tape device. In the event of a head failover during a backup, the data on the media is lost. Configure only one of the heads to be in the same zone as the tape device.

By default, the current release uses V4 of NDMP, although V3 is supported and client systems can use V3. To verify the version, use the following command:

ndmp show version

To use V3, use the following command, but verify that no client systems use V4:

ndmp set version=3

To complete the configuration, you need to specify the complete paths to the devices. Use the following command to display the paths:

ndmp devices

To set up NDMP:

1. Configure the backup administration application to log in:

a. Enter the user name admin.

Note: In version 4.20, you specified the user name administrator.

b. Specify the same password used by the console administrator.

2. Configure the backup administration application to locate the devices on which the volumes reside. Specify the complete path to the device and the device's identifier, using the ndmp devices command.

Note: In version 4.20, you specified only the device's identifier.

3. For each file volume, verify that checkpoints are enabled and backup checkpoints are enabled. To view or set these settings, choose File Volume Operations > Edit Volume Properties.

4. From the navigation panel, choose System Backup > Set Up NDMP.

5. Select the network interface card (NIC) port adapter or bond port used to transfer data to the backup tape drive (typically an interface configured with the independent role).

6. Specify the full path, such as /vol_ndmp, for the directory used to store intermediate backup data and a permanent log of backup history. The directory must be independent from the volumes scheduled for backup, and at least 2 gigabytes in size.

7. Click Apply.


Updating the Time Zone Database

The NAS server supports the major world time zones and adjusts to local time. Different countries and regions set time in different ways. The NAS software uses the standard database format for the time zones.

Use the following procedure to update the time zone information:

1. From the ftp://elsie.nci.nih.gov/pub/ site, download the latest file, for example, tzdata2007c.tar.gz.

2. Use gunzip and tar to extract the database files. The extracted files refer to the various regions as shown in TABLE 11-1. If a file name has more than eight characters, it must be renamed to meet the eight-character limit of the /cvol directory.
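For example, assuming the tzdata2007c.tar.gz file named in Step 1, you can extract the database files on a Unix client with commands similar to the following:

gunzip tzdata2007c.tar.gz

tar xf tzdata2007c.tar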


TABLE 11-1 Time Zone Database Files

Continent/Region                                             File Name      Renamed File
Africa                                                       africa         africa
Antarctica                                                   antarctica     antarcti
Asia and Australia                                           australasia    australa
Pacific Islands                                              pacificnew     pacificn
Greenwich Mean Time (GMT) offset only (no Daylight Savings)  etcetera       etcetera
European countries                                           europe         europe
North America                                                northamerica   northame
Special time corrections made in 1987 for Saudi Arabia       solar87        solar87
South America                                                southamerica   southame


3. Determine the current boot directory. Check the /cvol/defstart file; a value of 1 indicates nf1 and a value of 2 indicates nf2 as the boot directory.

4. Create the tz directory in the current boot directory.

5. Copy the files to /cvol/nf1/tz or /cvol/nf2/tz, as appropriate.

6. Use the zic command to install the time zone database file for your region. For example, the following command installs the northamerica time zones in the nf2 boot directory:

zic /cvol/nf2/tz/northame

A reboot is not required for the new time zones to take effect.


Enabling CATIA V4/V5 Character Translations

NAS appliances and gateway systems interoperate with CATIA V4/V5 products (developed by Dassault Systèmes). The following sections provide information about the CATIA software:

- About CATIA V4/V5 Character Translations
- Enabling CATIA Manually
- Enabling CATIA Automatically


About CATIA V4/V5 Character Translations

NAS appliances and gateway systems interoperate with CATIA V4/V5 products (developed by Dassault Systèmes).

CATIA V4 is a Unix-only product, whereas CATIA V5 is available on both Unix and Windows platforms. CATIA V4 might use certain characters in file names that are invalid in Windows. When CATIA customers migrate from V4 to V5, V4 files might become inaccessible in Windows if their file names contain invalid Windows characters. Therefore, a character translation option is provided for CATIA V4/V5 Unix/Windows inter-operability.

The translation table is shown in TABLE 11-2.


TABLE 11-2 CATIA Character Translation Table

CATIA V4 Unix Character                     CATIA V5 Windows Character   CATIA V5 Character Description
Curved open double quotation (not shown)    ¨                            Dieresis
*                                           ¤                            Currency sign
/                                           ø                            Latin small letter O with stroke
:                                           ÷                            Division sign
<                                           «                            Left-pointing double angle quotation mark
>                                           »                            Right-pointing double angle quotation mark
?                                           ¿                            Inverted question mark
\                                           ÿ                            Latin small letter Y with dieresis
|                                           Broken bar (not shown)       Broken bar


CATIA V4/V5 inter-operability support is disabled by default. You can enable the feature either manually through the command-line interface (CLI) or automatically after a system boot.


Enabling CATIA Manually

You must re-enable CATIA support after each system reboot.

To enable CATIA, issue the command:

load catia


Enabling CATIA Automatically

To enable CATIA automatically on reboot:

1. Edit /dvol/etc/inetload.ncf to add the word catia on a separate line within the file.

2. Issue the following two CLI commands to restart the inetload service:

unload inetload

load inetload

If CATIA V4/V5 support was successfully enabled, an entry similar to the following is displayed in the system log:

07/25/05 01:42:16 I catia: $Revision: 1.1.4.1


Backing Up Configuration Information

After you configure the NAS OS or modify the NAS OS configuration, follow the steps below to back up the configuration settings. In a cluster configuration, it is only necessary to perform these steps on one server, because the configuration is synchronized between servers.

1. At the CLI command line, enter load unixtools.

2. Type cp -r -v /dvol/etc backup-path, where backup-path is the full path, including the volume name, of the desired directory location for the configuration file backup. The directory must already exist and be empty.

This copies all of the configuration information stored in the /dvol/etc directory to the designated location.
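For example, assuming a hypothetical volume named /vol1 that contains an empty directory named backup, the complete sequence would be:

load unixtools

cp -r -v /dvol/etc /vol1/backup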


Upgrading NAS Software

This section describes how to upgrade the NAS software.


Caution: Never update system software when the RAID subsystem is in a critical state (such as after a drive failure), is creating a new volume, or is rebuilding an existing volume. You can see this information in the system log or from the Web Administrator RAID display.


Upgrading Software With a Reboot

The following procedure requires you to reboot the system after the update process is complete. Rebooting the system requires all I/O to be stopped; therefore, plan to update the software during a planned maintenance period.

Note: In a cluster configuration, perform this procedure on both servers in the cluster before you reboot the server. The cluster must be in optimal mode prior to the update.

Follow these steps to update the Sun StorageTek NAS software on your appliance or gateway system:

1. Download the latest version of the NAS software, available at www.sunsolve.sun.com. If you are unsure of which version to download, contact Sun Services to get the appropriate files for your system configuration.

2. From the Web Administrator navigation panel, choose System Operations > Update Software.

3. In the Update Software panel, type the path where the update files are located.

If you need to look for the path, click Browse.

4. Click Update to start the process.

5. When the update process is complete, click Yes to reboot, or click No to continue without rebooting.

The update does not take effect until the system is rebooted.

When upgrading to a release that is 4.10 or higher, from a release earlier than 4.10, you will be asked to re-enter time zone information, even though it was previously entered. This is due to a changed implementation that offers additional time zone locations.


Upgrading Cluster Software Without Interrupting Service

Follow these steps to upgrade the Sun StorageTek NAS software in a cluster configuration so that service is never brought down. This is known as a rolling upgrade.

This procedure supports a single NAS OS software revision upgrade, for example, 4.12 to 4.21. Perform upgrades that span more than one release incrementally, checking the OS release notes for each release to determine any issues or potential downtime.

1. From a remote web browser window, log in to the Web Administrator GUI on the first server in the cluster (in this example, Server 1). As necessary, refer to Logging In for instructions.

2. From the Web Administrator navigation panel, choose System Operations > Update Software.

3. Browse to select a valid OS image file, then click Update. This copies the image file to Server 1 and upgrades the NAS OS software.

4. When the upgrade is finished, a pop-up dialog prompts you to reboot the server manually. Click OK to close this dialog.

5. From the Web Administrator navigation panel, select System Operations > Shut Down the Server.

6. Select Reboot This Head and click Apply.

7. Close the web browser window.

8. Looking at the LCD panel, verify that Server 1 (Head 1) has restarted and is in the QUIET state.

9. From a remote web browser window, log in to the Web GUI on the second server in the cluster (Server 2).

10. Looking at the LCD panel, verify that Server 2 (Head 2) is in the ALONE state. You can also verify this using Web Administrator.

11. From the Web Administrator navigation panel, choose High Availability > Recover, then click the Recover button. Wait for the recovery to complete.

Under a heavy processing load, some LUNs might not be fully restored. Repeat this step if any LUN remains in the failover state.

12. Verify that both servers are in the NORMAL state (using the LCD panel or Web Administrator).

13. From the Web Administrator navigation panel, choose System Operations > Update Software.

14. Browse to select the same OS image file used in Step 3, then click Update. This copies the image file to Server 2 and upgrades the NAS OS software.

15. When the upgrade is finished, a pop-up dialog prompts you to reboot the server manually. Select No.

16. From the Web Administrator navigation panel, choose System Operations > Shut Down the Server.

17. Select Reboot This Head and click Apply.

18. Close the web browser window.

19. Looking at the LCD panel, verify that Server 2 (Head 2) has restarted and is in the QUIET state.

20. From a remote web browser window, log in to the Web GUI on Server 1.

21. Verify that Server 1 (Head 1) is in the ALONE state.

22. From the Web Administrator navigation panel, choose High Availability > Recover, then click the Recover button. Wait for the recovery to complete.

23. Verify that both servers are in the NORMAL state, and running the new OS version. You can check the OS version on the Web Administrator startup System Status panel.

When upgrading to a release that is 4.10 or higher, from a release earlier than 4.10, you will be asked to re-enter time zone information, even though it was previously entered. This is due to a changed implementation that offers additional time zone locations.


Configuring the Compliance Archiving Software

If you have purchased, activated, and enabled the Compliance Archiving Software option (see Activating System Options), there are additional settings you can establish using the command-line interface.

Note: Gateway-system configurations support advisory compliance but not mandatory compliance.


Changing the Default Retention Period

Enter this CLI command to change the default retention period:

fsctl compliance volume drt time

where volume is the name of the volume for which you want to set the default retention time, and time is the duration of the default retention time in seconds.

To set the default retention to "permanent," use the maximum allowable value, 2147483647.
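For example, the following command sets the default retention time on a hypothetical volume named vol1 to one year (365 days x 24 hours x 3,600 seconds = 31,536,000 seconds):

fsctl compliance vol1 drt 31536000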


Enabling CIFS Compliance

In its initial configuration, the Compliance Archiving Software supports data retention requests only from NFS clients. Enter this CLI command to enable Windows Common Internet File System (CIFS) clients to access this functionality:

fsctl compliance wte on

Upgrading Array and Drive Firmware Revision Levels

This section explains how to determine the current array and drive firmware revision levels for Sun StorageTek 5310 and Sun StorageTek 5320 NAS appliances and gateway systems, and how to upgrade your firmware. For purposes of this discussion, the term "array and drive firmware" refers to the firmware loaded on the redundant array of independent disks (RAID) controller, controller NVSRAM, expansion unit, and drives for a storage array, as appropriate to your installation.

This section contains the following topics:

- Determining If You Need to Upgrade the Firmware
- Upgrading Array and Drive Firmware (Reboot Required)
- Upgrading Array Firmware (No Reboot Required)
- Upgrading Drive Firmware (Reboot Required)
- Capturing raidctl Command Output


Determining If You Need to Upgrade the Firmware

Before you begin a firmware upgrade, decide if an upgrade is required by determining the current firmware revision level for each array component.

You can use the raidctl profile command to capture and record the current firmware revision level of each redundant array of independent disks (RAID) controller, controller NVSRAM, expansion unit, and drive. See Capturing raidctl Command Output for more information.


Upgrading Array and Drive Firmware (Reboot Required)

Use this procedure to upgrade redundant array of independent disks (RAID) array and drive firmware. This procedure requires you to reboot the NAS server.

If you cannot reboot the NAS server and need to upgrade only array firmware, refer to Upgrading Array Firmware (No Reboot Required).

The amount of time required to complete a firmware upgrade varies, depending on your configuration. For example, it takes approximately 50 minutes to upgrade and reboot a single NAS server with one controller unit, one Fibre Channel (FC) expansion unit, and one Serial Advanced Technology Attachment (SATA) expansion unit. See the upgrade-time estimates later in this procedure to determine how much time to allow for your configuration.

Note: Upgrading drive firmware always requires a reboot of the NAS server.

Note: All drives of each drive type will be upgraded, including those that are already at the firmware level of the current firmware file.


Caution: Do not update drive firmware when the RAID subsystem is in a critical state (such as after a drive failure), is creating a new volume, or is rebuilding an existing volume. You can see this information in the system log or from the Web Administrator RAID display.

Before you begin this procedure, make sure that NAS server software version 4.10 Build 18 (minimum) is installed. Do not attempt to upgrade array and drive firmware on a NAS server that has an earlier software version. If the NAS server software is at an earlier version, go to www.sunsolve.sun.com to obtain the latest software version.

To upgrade array and drive firmware:

1. Download the latest patch from www.sunsolve.sun.com and unzip the file.

2. Review the patch readme file to determine which firmware revision levels are associated with the patch.

3. From a NAS client, enable FTP.

For information about how to enable FTP using the GUI, see About Configuring FTP Access. Refer to Configuring File Transfer Protocol (FTP) Access if you are using the CLI.

4. Change to the directory to which you downloaded the patch.

5. Use FTP to connect to the NAS server, and log in as the admin user.

6. Enter bin for binary mode.

7. At the ftp prompt, create the following directories on /cvol by issuing these commands:

mkdir /cvol/firmware

mkdir /cvol/firmware/2882

mkdir /cvol/firmware/2882/ctlr

mkdir /cvol/firmware/2882/nvsram

mkdir /cvol/firmware/2882/jbod

mkdir /cvol/firmware/2882/drive

8. Change to the directory you created for the firmware and copy the firmware file (see TABLE 11-3) using the put command.

For example, to load firmware for the RAID controller, issue the following commands:

cd /cvol/firmware/2882/ctlr

put SNAP_288X_06120910.dlp

Note: The firmware file names are truncated after they are copied to their associated directories.

9. Continue to load each firmware file to the appropriate directory.

TABLE 11-3 lists the directory and example firmware file for each component.


TABLE 11-3 Component Firmware Directories and Files

Component                          Directory                    Example File Name
RAID controller                    /cvol/firmware/2882/ctlr     SNAP_288X_06120910.dlp
RAID controller NVSRAM             /cvol/firmware/2882/nvsram   N2882-612843-503.dlp
FC expansion unit                  /cvol/firmware/2882/jbod     esm9631.s3r
SATA expansion unit                /cvol/firmware/2882/jbod     esm9722.dl
Drive types:
  Seagate ST314680                 /cvol/firmware/2882/drive    D_ST314680FSUN146G_0407.dlp
  Seagate 10K                      /cvol/firmware/2882/drive    D_ST314670FSUN146G_055A.dlp
  Hitachi 400GB HDS724040KLSA80    /cvol/firmware/2882/drive    D_HDS7240SBSUN400G_AC7A.dlp
  Fujitsu MAT3300F 300GB           /cvol/firmware/2882/drive    D_MAT3300FSUN300G_1203.dlp
  Seagate 10K 300GB                /cvol/firmware/2882/drive    D_ST330000FSUN300G_055A.dlp


10. Log out of the FTP session.

11. Use Telnet to connect to the NAS server, and log in to a user account with admin privileges.

12. Reboot the system. For a cluster configuration, reboot both servers.

The following table provides the approximate time needed to upgrade the firmware for each component.

Component                    Time to Complete Upgrade
RAID controller              Reboot plus 15 minutes
RAID controller NVSRAM       Reboot plus 5 minutes
FC or SATA expansion unit    Reboot plus 5 minutes
Drives                       Reboot plus 1.5 minutes per drive


13. Verify that the new firmware has been loaded by issuing this command:

raidctl get type=lsi target=profile ctlr=0

You can also check the system log for failures.


Upgrading Array Firmware (No Reboot Required)

This procedure upgrades redundant array of independent disks (RAID) array firmware without requiring a reboot of the NAS server.

Before you begin this procedure, keep the following caution in mind:


Caution: Do not update drive firmware when the RAID subsystem is in a critical state (such as after a drive failure), is creating a new volume, or is rebuilding an existing volume. You can see this information in the system log or from the Web Administrator RAID display.

To upgrade array firmware, with no reboot required:

1. Download the latest patch from www.sunsolve.sun.com and unzip the file.

2. Review the patch readme file to determine which firmware revision levels are associated with the patch.

3. Gather the tray ID for each expansion unit that requires a firmware upgrade.

a. From the Web Administrator, go to RAID > View Controller/Enclosure Information.

b. Select the appropriate RAID controller from the Controller Information box.

c. Review the Enclosures Information area, which displays the tray ID for each controller unit and expansion unit that is managed by the selected controller. Tray IDs are relative to, and unique within, the array managed by the controller unit that houses the selected controller.

For expansion units, the Firmware Release field displays the revision level. Note the tray ID; you will need it to upgrade the firmware.

Note: For controller units, the Firmware Release field displays <N/A>.

4. Change to the directory to which you downloaded the patch.

5. From a NAS client, enable FTP.

For information about how to enable FTP using the GUI, see About Configuring FTP Access. Refer to Configuring File Transfer Protocol (FTP) Access if you are using the CLI.

6. Use FTP to connect to the NAS server, and log in to a user account with admin privileges.

7. Enter bin for binary mode.

8. At the ftp prompt, create the following directories on /cvol by issuing these commands:

mkdir /cvol/firmware

mkdir /cvol/firmware/2882

mkdir /cvol/firmware/2882/ctlr

mkdir /cvol/firmware/2882/nvsram

mkdir /cvol/firmware/2882/jbod

9. Load each firmware file to the appropriate directory. The following table lists the directory and example firmware file for each component.


Component                 Directory                    Example File Name
RAID controller           /cvol/firmware/2882/ctlr     SNAP_288X_06120910.dlp
RAID controller NVSRAM    /cvol/firmware/2882/nvsram   N2882-612843-503.dlp
FC expansion unit         /cvol/firmware/2882/jbod     esm9631.s3r
SATA expansion unit       /cvol/firmware/2882/jbod     esm9722.dl


For each file, change to the directory you created for the firmware, then copy the firmware file using the put command. For example, to load firmware for the RAID controller, issue the following commands:

cd /cvol/firmware/2882/ctlr

put SNAP_288X_06120910.dlp

10. Log out of the FTP session.

11. Use Telnet to connect to the NAS server, and log in to a user account with admin privileges.

12. Use the raidctl download command to load each file to the target directory.

Note: For raidctl command usage, enter raidctl with no arguments at the command line.

To load the RAID controller firmware from the ctlr directory to controllers 0 and 1, issue the following command:

raidctl download type=lsi target=ctlr ctlr=0,1

This command downloads the firmware file to both RAID controllers and removes the file from the directory.

Note: The raidctl download command deletes the component-specific firmware file from /cvol/firmware/2882 after each successful invocation of the command. For example, the /cvol/firmware/2882/ctlr file is deleted after each successful invocation of the raidctl download type=lsi target=ctlr ctlr=0 command. Therefore, you must re-copy the firmware file after upgrading each component (controller unit, controller NVSRAM, expansion unit, and drives) if you have multiple controller units or expansion units. With two controller units, the second unit is specified as ctlr=2 in the command raidctl download type=lsi target=ctlr ctlr=2.

To download NVSRAM, issue this command:

raidctl download type=lsi target=nvsram ctlr=0

To download the firmware located in the jbod directory to expansion unit 0 in tray 1, issue this command:

raidctl download type=lsi target=jbod ctlr=0 tray=1

13. Monitor the progress of each download from the Telnet session.

The approximate time needed to complete each upgrade is as follows:


Component                    Minutes per Component
RAID controller              15 minutes
RAID controller NVSRAM       5 minutes
FC or SATA expansion unit    5 minutes


Note: After the upgrades are complete, the Telnet cursor can take up to 5 minutes to return. Wait during this time until the cursor is displayed.

14. Before continuing to the next component, verify in the system log that the download is complete.

The following example shows output from the system log:


Ctrl-
Firmware Download  90% complete
Firmware Download  95% complete
Firmware Download 100% complete
Waiting for controllers to become ACTIVE
Controller 0 - now ACTIVE
Controller 1 - now ACTIVE
Controllers are now active
nvsram-
raidctl download type=lsi target=nvsram ctlr=0
Flashing C0 NVSRAM: /cvol/nf2/../firmware/2882/nvsram/n2882-61.dlp (48068)
Firmware Download 100% complete
Waiting for controllers to become ACTIVE
Controller 0 - now ACTIVE
Controller 1 - now ACTIVE
Controllers are now active
ESM-
>> raidctl download type=lsi target=jbod ctlr=0 tray=1
Flashing C0 JBOD 1 with /cvol/nf1/../firmware/2882/jbod/esm9631.s3r (663604)
Firmware Download  20% complete
Firmware Download  30% complete
Firmware Download  50% complete
Firmware Download  60% complete
Firmware Download  90% complete
Firmware Download 100% complete
Waiting for controllers to become ACTIVE
Controller 0 - now ACTIVE
Controller 1 - now ACTIVE
Controllers are now active
Drive-
10/26/05 10:57:42 I Firmware Download  20% complete
10/26/05 10:57:46 I Firmware Download  30% complete
10/26/05 10:57:50 I Firmware Download  40% complete
10/26/05 10:57:54 I Firmware Download  50% complete
10/26/05 10:57:58 I Firmware Download  60% complete
10/26/05 10:58:03 I Firmware Download  70% complete
10/26/05 10:58:08 I Firmware Download  80% complete
10/26/05 10:58:13 I Firmware Download  90% complete
10/26/05 10:58:18 I Bytes Downloaded: 628224 (2454 256 chunks), imageSize=628048
10/26/05 10:59:01 I Flashed OK - drive in tray 2 slot 12
10/26/05 10:59:01 I Downloaded firmware version 0407 to 27 drives


Upgrading Drive Firmware (Reboot Required)

Use this procedure to upgrade only drive firmware. This procedure requires you to reboot the NAS server.

Note: Upgrading drive firmware always requires a reboot of the NAS server.

Note: All drives of each drive type will be upgraded, including those that are already at the firmware level of the current firmware file.

The amount of time required to complete a firmware upgrade varies, depending on the number of drives that are installed plus the time it takes to reboot the NAS server. See the time estimate in Step 11 of this procedure to determine how much time to allow for your configuration.


Caution: Do not update drive firmware when the RAID subsystem is in a critical state (such as after a drive failure), is creating a new volume, or is rebuilding an existing volume. You can see this information in the system log or from the Web Administrator RAID display.

Before you begin a drive firmware upgrade, make sure that NAS server software version 4.10 Build 18 (minimum) is installed. Do not attempt to upgrade firmware on a NAS server that has an earlier software version.

To upgrade drive firmware, with a reboot required:

1. Download the latest patch from www.sunsolve.sun.com and unzip the file.

2. Review the patch readme file to determine which firmware revision levels are associated with the patch.

3. Change to the directory to which you downloaded the patch.

4. From a NAS client, enable FTP.

For information about how to enable FTP, see About Configuring FTP Access or Configuring File Transfer Protocol (FTP) Access.

5. Use FTP to connect to the NAS server and log in as the admin user.

6. Enter bin for binary mode.

7. At the ftp prompt, create the following directory on /cvol by issuing this command (if the parent directories do not already exist, create them first, as shown in Step 7 of Upgrading Array and Drive Firmware (Reboot Required)):

mkdir /cvol/firmware/2882/drive

8. Change to the directory you created for the drive firmware and copy the drive firmware files (see TABLE 11-3) using the put command.

For example, to load firmware for the Seagate ST314680 drive, issue the following commands:

cd /cvol/firmware/2882/drive

put D_ST314680FSUN146G_0407.dlp

9. Log out of the FTP session.

10. Use Telnet to connect to the NAS server and log in as the admin user.

11. Reboot the system. For a cluster configuration, reboot both servers.

The approximate time to complete the upgrade is reboot time plus 1.5 minutes for each drive.

12. Verify that the new firmware has been loaded by issuing this command:

raidctl get type=lsi target=profile ctlr=0

You can also check the system log for failures.


Capturing raidctl Command Output

You can use the raidctl profile command to determine the current firmware revision level of each controller unit, controller NVSRAM, expansion unit, and drive. This section provides instructions in the following procedures:

- Capturing raidctl Command Output From a Solaris Client
- Capturing raidctl Output From a Windows Client

Capturing raidctl Command Output From a Solaris Client

To capture raidctl command output from a Solaris client:

1. From a Solaris client, type the script command and a file name. For example:

> script raidctl

2. Use Telnet to connect to the NAS server.

3. Type the following raidctl command to collect the output:

raidctl get type=lsi target=profile ctlr=0

With two controller units, the second unit is specified as ctlr=2, as shown in the following example:

raidctl get type=lsi target=profile ctlr=2

4. Type exit to close the Telnet session.

5. Type exit again to close the file named raidctl.

The following example shows the command and its output, including the current firmware levels for each component:


telnet 10.8.1xx.x2
Trying 10.8.1xx.x2...
Connected to 10.8.1xx.x2.
Escape character is '^]'.
connect to (? for list) ? [menu] admin
password for admin access ? *********
5310 > raidctl get type=lsi target=profile ctlr=0
 
SUMMARY---------------------------------
Number of controllers: 2
Number of volume groups: 4
Total number of volumes (includes an access volume): 5 of 1024 used
   Number of standard volumes: 4
   Number of access volumes: 1
Number of drives: 28
Supported drive types: Fibre (28)
Total hot spare drives: 2
   Standby: 2
   In use: 0
Access volume: LUN 31
Default host type: Sun_SE5xxx (Host type index 0)
Current configuration
   Firmware version: PkgInfo 06.12.09.10
   NVSRAM version: N2882-612843-503
Pending configuration

CONTROLLERS ---------------------------------
Number of controllers: 2
 
Controller in Tray 0, Slot B
   Status: Online
   Current Configuration
      Firmware version: 06.12.09.10
         Appware version: 06.12.09.10
         Bootware version: 06.12.09.10
      NVSRAM version: N2882-612843-503
   Pending Configuration
      Firmware version: None
         Appware version: None
         Bootware version: None
      NVSRAM version: None
      Transferred on: None
   Board ID: 2882
   Product ID: CSM100_R_FC
   Product revision: 0612
   Serial number: 1T44155753
   Date of manufacture: Sat Oct 16 00:00:00 2004
   Cache/processor size (MB): 896/128
   Date/Time: Thu Nov  2 19:15:49 2006
   Associated Volumes (* = Perferred Owner):
      lun4* (LUN 3)
Ethernet port: 1
      Mac address: 00.A0.B8.16.C7.A7
      Host name: gei
      Network configuration: Static
      IP address: 192.168.128.106
      Subnet mask: 255.255.255.0
      Gateway: 192.168.128.105
      Remote login: Enabled
   Drive interface: Fibre
      Channel: 2
      Current ID: 124/0x7C
      Maximum data rate: 200 MB/s
      Current data rate: 200 MB/s
      Data rate control: Fixed
      Link status: Up
      Topology: Arbitrated Loop - Private
      World-wide port name: 20:02:00:A0:B8:16:C7:A7
      World-wide node name: 20:00:00:A0:B8:16:C7:A7
      Part type: HPFC-5400      revision 6

 Drive interface: Fibre
      Channel: 2
      Current ID: 124/0x7C
      Maximum data rate: 200 MB/s
      Current data rate: 200 MB/s
Data rate control: Fixed
      Link status: Up
      Topology: Arbitrated Loop - Private
      World-wide port name: 20:02:00:A0:B8:16:C7:A7
      World-wide node name: 20:00:00:A0:B8:16:C7:A7
      Part type: HPFC-5400      revision 6
   Host interface: Fibre
      Channel: 2
      Current ID: 255/0x3
      Maximum data rate: 200 MB/s
      Current data rate: 200 MB/s
      Data rate control: Auto
      Link status: Down
      Topology: Unknown
      World-wide port name: 20:07:00:A0:B8:16:C6:FB
      World-wide node name: 20:06:00:A0:B8:16:C6:F9
      Part type: HPFC-5400      revision 6
   Host interface: Fibre
      Channel: 2
      Current ID: 255/0x3
      Maximum data rate: 200 MB/s
      Current data rate: 200 MB/s
      Data rate control: Auto
      Link status: Down
      Topology: Unknown
      World-wide port name: 20:07:00:A0:B8:16:C6:FB
      World-wide node name: 20:06:00:A0:B8:16:C6:F9
      Part type: HPFC-5400      revision 6
 
Controller in Tray 0, Slot A
   Status: Online
   Current Configuration
      Firmware version: 06.12.09.10
         Appware version: 06.12.09.10
         Bootware version: 06.12.09.10
      NVSRAM version: N2882-612843-503
   Pending Configuration
      Firmware version: None
         Appware version: None
         Bootware version: None
      NVSRAM version: None
      Transferred on: None
   Board ID: 2882
   Product ID: CSM100_R_FC
   Product revision: 0612
   Serial number: 1T44155741
   Date of manufacture: Sun Oct 10 00:00:00 2004
   Cache/processor size (MB): 896/128
   Date/Time: Thu Nov  2 19:15:45 2006
   Associated Volumes (* = Perferred Owner):
lun1* (LUN 0), lun2* (LUN 1), lun3* (LUN 2)
   Ethernet port: 1
      Mac address: 00.A0.B8.16.C6.F9
      Host name: gei
      Network configuration: Static
      IP address: 192.168.128.105
      Subnet mask: 255.255.255.0
      Gateway: 192.168.128.105
      Remote login: Enabled
   Drive interface: Fibre
      Channel: 1
      Current ID: 125/0x7D
      Maximum data rate: 200 MB/s
      Current data rate: 200 MB/s
      Data rate control: Fixed
      Link status: Up
      Topology: Arbitrated Loop - Private
      World-wide port name: 20:01:00:A0:B8:16:C6:F9
      World-wide node name: 20:00:00:A0:B8:16:C6:F9
      Part type: HPFC-5400      revision 6
Drive interface: Fibre
      Channel: 1
      Current ID: 125/0x7D
      Maximum data rate: 200 MB/s
      Current data rate: 200 MB/s
      Data rate control: Fixed
      Link status: Up
      Topology: Arbitrated Loop - Private
      World-wide port name: 20:01:00:A0:B8:16:C6:F9
      World-wide node name: 20:00:00:A0:B8:16:C6:F9
      Part type: HPFC-5400      revision 6 
Host interface: Fibre
      Channel: 1
      Current ID: 255/0x0
      Maximum data rate: 200 MB/s
      Current data rate: 200 MB/s
      Data rate control: Auto
      Link status: Down
      Topology: Unknown
      World-wide port name: 20:06:00:A0:B8:16:C6:FA
      World-wide node name: 20:06:00:A0:B8:16:C6:F9
      Part type: HPFC-5400      revision 6
   Host interface: Fibre
      Channel: 1
      Current ID: 255/0x0
      Maximum data rate: 200 MB/s
      Current data rate: 200 MB/s
      Data rate control: Auto
      Link status: Down
      Topology: Unknown
World-wide port name: 20:06:00:A0:B8:16:C6:FA
      World-wide node name: 20:06:00:A0:B8:16:C6:F9
      Part type: HPFC-5400      revision 6
 
VOLUME GROUPS--------------------------
   Number of volume groups: 4
   Volume group 1 (RAID 5)
      Status: Online
      Tray loss protection: No
      Associated volumes and free capacities:
         lun1 (681 GB)
      Associated drives (in piece order):
      Drive at Tray 0, Slot 7
      Drive at Tray 0, Slot 6
      Drive at Tray 0, Slot 5
      Drive at Tray 0, Slot 4
      Drive at Tray 0, Slot 3
      Drive at Tray 0, Slot 8
Volume group 2 (RAID 5)
      Status: Online
      Tray loss protection: No
      Associated volumes and free capacities:
         lun2 (681 GB)
      Associated drives (in piece order):
      Drive at Tray 0, Slot 14
      Drive at Tray 0, Slot 13
      Drive at Tray 0, Slot 12
      Drive at Tray 0, Slot 11
      Drive at Tray 0, Slot 10
      Drive at Tray 0, Slot 9
   Volume group 3 (RAID 5)
      Status: Online
      Tray loss protection: No
      Associated volumes and free capacities:
         lun3 (817 GB)
      Associated drives (in piece order):
      Drive at Tray 11, Slot 5
      Drive at Tray 11, Slot 4
      Drive at Tray 11, Slot 3
      Drive at Tray 11, Slot 2
      Drive at Tray 11, Slot 1
      Drive at Tray 11, Slot 7
      Drive at Tray 11, Slot 6
   Volume group 4 (RAID 5)
      Status: Online
      Tray loss protection: No
      Associated volumes and free capacities:
         lun4 (817 GB)
      Associated drives (in piece order):
      Drive at Tray 11, Slot 13 
Drive at Tray 11, Slot 12
      Drive at Tray 11, Slot 11
      Drive at Tray 11, Slot 10
      Drive at Tray 11, Slot 9
      Drive at Tray 11, Slot 8
      Drive at Tray 11, Slot 14
 
STANDARD VOLUMES---------------------------
 
SUMMARY
   Number of standard volumes: 4
   NAME    STATUS    CAPACITY  RAID LEVEL  VOLUME GROUP
   lun1    Optimal   681   GB    5            1
   lun2    Optimal   681   GB    5            2
   lun3    Optimal   817   GB    5            3
   lun4    Optimal   817   GB    5            4

DETAILS
 
   Volume name: lun1
      Volume ID: 60:0A:0B:80:00:16:C6:F9:00:00:23:B4:43:4B:53:3A
      Subsystem ID (SSID): 0
      Status: Optimal
      Action: 1
      Tray loss protection: No
      Preferred owner: Controller in slot A
      Current owner: Controller in slot B
      Capacity: 681 GB
      RAID level: 5
      Segment size: 64 KB
      Associated volume group: 1
      Read cache: Enabled
      Write cache: Enabled
      Flush write cache after (in seconds): 8
      Cache read ahead multiplier: 1
      Enable background media scan: Enabled
      Media scan with redundancy check: Disabled
DRIVES------------------------------
 
SUMMARY
 
   Number of drives: 28
      Supported drive types: Fiber (28)
   BASIC:
                                   CURRENT    PRODUCT           FIRMWARE
      TRAY,SLOT  STATUS  CAPACITY  DATA RATE  ID                REV
      0,1     Optimal  136 GB    2 Gbps   ST314680FSUN146G 0307
      0,7     Optimal  136 GB    2 Gbps   ST314680FSUN146G 0307
      0,6     Optimal  136 GB    2 Gbps   ST314680FSUN146G 0307
      0,5     Optimal  136 GB    2 Gbps   ST314680FSUN146G 0307
      0,4     Optimal  136 GB    2 Gbps   ST314680FSUN146G 0307
      0,3     Optimal  136 GB    2 Gbps   ST314680FSUN146G 0307
      0,2     Optimal  136 GB    2 Gbps   ST314680FSUN146G 0307
      0,14    Optimal  136 GB    2 Gbps   ST314680FSUN146G 0307
      0,13    Optimal  136 GB    2 Gbps   ST314680FSUN146G 0307
      0,12    Optimal  136 GB    2 Gbps   ST314680FSUN146G 0307
      0,11    Optimal  136 GB    2 Gbps   ST314680FSUN146G 0307
      0,10    Optimal  136 GB    2 Gbps   ST314680FSUN146G 0307
      0,9     Optimal  136 GB    2 Gbps   ST314680FSUN146G 0307
      0,8     Optimal  136 GB    2 Gbps   ST314680FSUN146G 0307
      11,5     Optimal  136 GB    2 Gbps   ST314680FSUN146G 0307
      11,4     Optimal  136 GB    2 Gbps   ST314680FSUN146G 0307
      11,3     Optimal  136 GB    2 Gbps   ST314680FSUN146G 0307
      11,2     Optimal  136 GB    2 Gbps   ST314680FSUN146G 0307
      11,1     Optimal  136 GB    2 Gbps   ST314680FSUN146G 0307
      11,13    Optimal  136 GB    2 Gbps   ST314680FSUN146G 0307
      11,12    Optimal  136 GB    2 Gbps   ST314680FSUN146G 0307
      11,11    Optimal  136 GB    2 Gbps   ST314680FSUN146G 0307
      11,10    Optimal  136 GB    2 Gbps   ST314680FSUN146G 0307
      11,9     Optimal  136 GB    2 Gbps   ST314680FSUN146G 0307
      11,8     Optimal  136 GB    2 Gbps   ST314680FSUN146G 0307
      11,7     Optimal  136 GB    2 Gbps   ST314680FSUN146G 0307
      11,6     Optimal  136 GB    2 Gbps   ST314680FSUN146G 0307
      11,14    Optimal  136 GB    2 Gbps   ST314680FSUN146G 0307
 
   HOT SPARE COVERAGE:
 
      The following volume groups are not protected:
 
      Total hot spare drives: 2
         Standby: 2
         In use: 0
 
   DETAILS:
 
      Drive at Tray 0, Slot 1 (HotSpare)
         Available: 0
         Drive path redundancy: OK
         Status: Optimal
         Raw capacity: 136 GB
         Usable capacity: 136 GB
         Product ID: ST314680FSUN146G
         Firmware version: 0307
         Serial number: 3HY90HWJ00007510RKKV
Vendor: SEAGATE
         Date of manufacture: Sat Sep 18 00:00:00 2004
         World-wide name: 20:00:00:11:C6:0D:BA:3E
         Drive type: Fiber
         Speed: 10033 RPM
         Associated volume group: None
         Available: No

         Vendor: SEAGATE
         Date of manufacture: Sat Sep 18 00:00:00 2004
         World-wide name: 20:00:00:11:C6:0D:CA:12
         Drive type: Fiber
         Speed: 10033 RPM
         Associated volume group: 3
         Available: No
 
      Drive at Tray 11, Slot 1
         Drive path redundancy: OK
         Status: Optimal
         Raw capacity: 136 GB
         Usable capacity: 136 GB
         Product ID: ST314680FSUN146G
         Firmware version: 0307
         Serial number: 3HY90JEW00007511BDPL
         Vendor: SEAGATE
         Date of manufacture: Sat Sep 18 00:00:00 2004
         World-wide name: 20:00:00:11:C6:0D:C8:8B
         Drive type: Fiber
         Speed: 10033 RPM
         Associated volume group: 3
         Available: No
Drive Tray 1 Overall Component Information
      Tray technology: Fibre Channel
      Minihub datarate mismatch: 0
      Part number: PN 54062390150
      Serial number: SN 0447AWF011
      Vendor: VN SUN
      Date of manufacture: Mon Nov  1 00:00:00 2004
      Tray path redundancy: OK
      Tray ID: 11 
Tray ID Conflict: 0
      Tray ID Mismatch: 0
      Tray ESM Version Mismatch: 0
      Fan canister: Optimal
      Fan canister: Optimal
      Power supply canister
         Status: Optimal
         Part number: PN 30017080150
         Serial number: SN A6847502330F
         Vendor: VN SUN
         Date of manufacture: Sun Aug  1 00:00:00 2004
      Power supply canister
         Status: Optimal
         Part number: PN 30017080150
         Serial number: SN A6847502330F
         Vendor: VN SUN
         Date of manufacture: Sun Aug  1 00:00:00 2004
      Power supply canister
         Status: Optimal
         Part number: PN 30017080150
         Serial number: SN A68475023N0F
         Vendor: VN SUN
         Date of manufacture: Sun Aug  1 00:00:00 2004
      Temperature: Optimal
Temperature: Optimal
      Esm card
         Status: Optimal
         Firmware version: 9631
         Maximum data rate: 2 Gbps
         Current data rate: 2 Gbps
         Location: A (left canister)
         Working channel: -1
         Product ID: CSM100_E_FC_S
         Part number: PN 37532180150
         Serial number: SN 1T44462572
         Vendor: SUN
         FRU type: FT SBOD_CEM
         Date of manufacture: Fri Oct  1 00:00:00 2004
      Esm card
         Status: Optimal
         Firmware version: 9631
         Maximum data rate: 2 Gbps
         Current data rate: 2 Gbps
         Location: B (right canister)
         Working channel: -1

Capturing raidctl Output From a Windows Client

To capture raidctl output from a Windows client:

1. Click Start > Run and type cmd. Click OK.

2. Right-click at the top of the window and choose Properties.

The Properties window is displayed.

3. Change the Screen Buffer size (height) to 3000.

4. Click the Options tab and deselect Insert Mode.

5. Use Telnet to connect to the NAS server, and type the following raidctl command to collect the output:

raidctl get type=lsi target=profile ctlr=0

6. Copy the text to a file using any text editor. For example:

a. Select the output text and press Ctrl-C to copy the data.

b. Open WordPad by clicking Start > Programs > Accessories > WordPad.

c. Click in the window and press Ctrl-V to paste the text.

d. Save the file.

7. Open the file and search for the current firmware version for each component.