C H A P T E R  3

File-System Setup and Management

This chapter covers file-system concepts, setup, and management for the NAS appliances and gateway systems. It includes the following sections:


File-System Concepts

The following sections provide definitions of some of the basic file-system concepts and attributes used in NAS storage:


About RAID Configurations

There are different redundant array of independent disks (RAID) system configurations that are supported by the system. The following sections describe these configurations:

About RAID Systems

Redundant array of independent disks (RAID) systems allow data to be distributed to multiple drives through a RAID controller, for greater performance, data security, and recoverability. The basic concept of a RAID system is to combine a group of smaller physical drives into what looks to the network as a single very large drive. From the perspective of the computer user, a RAID system looks exactly like a single drive. From the perspective of the system administrator, the physical component of the RAID system is a group of drives, but the RAID system itself can be administered as a single unit.

There are multiple types of RAID configurations. NAS appliances support RAID 5 only. NAS gateway systems support RAID 1, RAID 1+0, and RAID 5.

About the RAID-0 Configuration (Not Supported)

The RAID-0 configuration does not include the redundancy for which redundant array of independent disks (RAID) systems were developed. However, it provides a significant increase in drive performance. The RAID-0 configuration employs striping: data is divided into stripes, and one stripe is written to the first drive, the next to the second drive, and so on. The primary advantage of striping is that all drives in the array can process reads and writes simultaneously, which greatly speeds both operations.

However, because there is no redundancy in a RAID-0 configuration, if one drive fails, all of the data on the entire array might be lost. The RAID-0 configuration is best used in situations where performance is the overriding concern and lost data is of less significance.
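The round-robin distribution of stripes described above can be sketched as follows. This is an illustrative model only, not the appliance's controller code; the stripe size and drive count are arbitrary example values.

```python
# Illustrative sketch of RAID-0 striping: data blocks are written
# round-robin across the drives in the array. Stripe size and drive
# count are example values, not appliance settings.

STRIPE_SIZE = 4      # bytes per stripe (tiny, for illustration)
NUM_DRIVES = 3

def stripe(data: bytes, num_drives: int = NUM_DRIVES) -> list[list[bytes]]:
    """Split data into stripes and assign them to drives round-robin."""
    drives = [[] for _ in range(num_drives)]
    for i in range(0, len(data), STRIPE_SIZE):
        chunk = data[i:i + STRIPE_SIZE]
        drives[(i // STRIPE_SIZE) % num_drives].append(chunk)
    return drives

def unstripe(drives: list[list[bytes]]) -> bytes:
    """Reassemble the original data by reading stripes back in order."""
    out = []
    depth = max(len(d) for d in drives)
    for row in range(depth):
        for d in drives:
            if row < len(d):
                out.append(d[row])
    return b"".join(out)

data = b"ABCDEFGHIJKLMNOP"
layout = stripe(data)
# Every drive is needed to reassemble the data; losing one drive
# loses the whole array, which is why RAID 0 is not redundant.
assert unstripe(layout) == data
```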

About the RAID-1 Configuration (Gateway Systems Only)

Drive mirroring is the primary concept of the redundant array of independent disks (RAID) 1 array, which doubles the number of drives required to provide the same amount of storage, but provides an up-to-date backup of the drive. The mirrored drive is always online and can be accessed very quickly if the primary drive fails. Each primary drive is mirrored by a second drive of the same size. All writes are duplicated and written to both members of the RAID-1 array simultaneously. The RAID-1 array provides excellent high availability. A RAID-1 array is most useful where data security and integrity are essential, but performance is not as significant.

About the RAID-1+0 Configuration (Gateway Systems Only)

Redundant array of independent disks (RAID) 1+0 combines two RAID concepts to improve both performance and high availability: striping and mirroring. The mirrored drive pairs are built into a RAID-0 array. All writes are duplicated and written to both mirrored drives simultaneously. The striping of the RAID 0 improves performance for the array as a whole, while drive mirroring (RAID 1) provides excellent high availability for each individual drive. RAID 1+0 is a good choice for environments where security might outweigh performance, but performance is still important.

About the RAID-5 Configuration

The redundant array of independent disks (RAID) 5 array combines the performance improvements of striping with the redundancy of mirroring, without the expense of doubling the number of drives in the overall array.

RAID 5 uses striping and parity information. Parity information is data created by combining the bits in the information to be stored and creating a small amount of data from which the rest of the information can be extracted. In other words, the parity information repeats the original data in such a way that if part of the original is lost, combining the remainder of the original and the parity data reproduces the complete original. The parity information is not stored on a specific drive. Instead, a different drive in the stripe set is used for parity protection for different regions of the RAID-5 set.

The RAID-5 array includes the parity information as one of the stripes in the stripe arrangement. If one drive in the array fails, the parity information and the remaining portion of the original data from the surviving drives are used to rebuild the now missing information from the failed drive. Thus the RAID-5 array combines the high availability of the mirror with the performance of the stripes and produces the best overall RAID type. It also has the advantage of requiring very little "extra" space for the parity information, making it a less expensive solution as well.
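Parity of this kind is conventionally computed with a bitwise XOR across the data stripes. The following sketch (a general illustration, not the appliance's actual controller logic) shows how a stripe lost with a failed drive is rebuilt from the survivors plus the parity:

```python
from functools import reduce

def xor_parity(stripes: list[bytes]) -> bytes:
    """Compute a parity stripe as the bytewise XOR of the given stripes.
    XOR has the property that any one missing stripe can be recovered
    by XOR-ing the remaining stripes with the parity."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*stripes))

# Three data stripes plus one parity stripe (a 3+1 RAID-5 layout).
stripes = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_parity(stripes)

# Simulate losing one drive, then rebuild its stripe from the rest:
lost = stripes[1]
survivors = [stripes[0], stripes[2], parity]
rebuilt = xor_parity(survivors)
assert rebuilt == lost
```

Note that the parity consumes only one drive's worth of capacity per stripe set, which is why RAID 5 is cheaper than mirroring for the same usable space.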

NAS RAID-5 Systems - Sun StorageTek 5310 and Sun StorageTek 5320 Appliances

TABLE 3-1 summarizes the supported hardware configurations for Sun StorageTek 5310 and Sun StorageTek 5320 appliances.


TABLE 3-1 Supported Hardware Configurations - Sun StorageTek 5310 and Sun StorageTek 5320 Appliances

NAS Server        Supported Controller Units/Enclosures      Supported Expansion Units/Enclosures
5320 NAS Server   Sun StorageTek 5320 Controller Unit        Sun StorageTek 5320 Expansion Unit
                  Sun StorageTek 5300 Controller Enclosure   Sun StorageTek 5300 Expansion Enclosure
5310 NAS Server   Sun StorageTek 5300 Controller Enclosure   Sun StorageTek 5300 Expansion Enclosure

Each Sun StorageTek 5320 controller unit and expansion unit contains either 8 or 16 redundant array of independent disks (RAID) drives of a single drive type (either Fibre Channel (FC) or Serial Advanced Technology Attachment (SATA)). The Sun StorageTek 5320 devices are configured as shown in TABLE 3-2.


TABLE 3-2 Sun StorageTek 5320 RAID-5 Configuration

Per Expansion Unit
or Controller Unit   RAID-5 Set   Volumes                                  Hot-Spare
8 drives             6+1          1 if using FC 300 GB drives;             1
                                  2 of equal size for all other drives
16 drives            6+1          1 if using FC 300 GB drives;             1
                                  2 of equal size for all other drives
                     7+1          2 of equal size

Each 5300 expansion enclosure contains either 7 or 14 RAID drives of a single drive type (either FC or SATA), configured as shown in TABLE 3-3. Sun StorageTek 5300 controller enclosures can also contain drives, provided they are FC drives; in that case they are likewise configured as shown in TABLE 3-3. The 5300 controller enclosure cannot contain SATA drives.


TABLE 3-3 Sun StorageTek 5300 RAID-5 Configuration

Per Expansion Enclosure or
Controller Enclosure (FC only)   RAID-5 Set   Volumes                                      Hot-Spare
7 drives                         5+1          1                                            1
14 drives                        5+1          1                                            1
                                 6+1          1 if using FC drives;
                                              2 of equal size if using 400 GB SATA drives


NAS RAID-5 Systems - Sun StorageTek 5210 Appliances

For Sun StorageTek 5210 NAS appliances, the server contains either one or two RAID controllers, and slots for seven drives. As shipped by the manufacturer, six of the seven slots contain SCSI drives that are configured as a single 4+1 SCSI RAID-5 set (with two logical unit numbers (LUNs)), plus one hot-spare.

Optionally, you can connect up to three expansion enclosures (JBODs) with the server, each containing either 6 or 12 drives.


About LUNs

Management of the NAS storage resources is accomplished through the logical unit number (LUN), with little direct management of the redundant array of independent disks (RAID) sets themselves. See About Creating RAID Sets and LUNs for directions and more information on setting up both RAID sets and LUNs.

A logical unit number (LUN) is the logical representation of a storage area within a RAID set. NAS appliances and gateway systems support a maximum of 255 LUNs. For cluster configurations, the 255-LUN limit is shared across both servers (for example, 100 LUNs on one server and 155 on the partner server).

There is a maximum size limit per LUN of 2 terabytes (TB). This limit is imposed by the underlying storage protocol used to access the LUN.

In versions of NAS software earlier than 4.20, NAS in-band RAID management (IBRM) did not allow for the creation of multiple LUNs (also known as volumes) per RAID set, which resulted in wasted space on RAID sets that exceeded 2 terabytes. (For LUNs that were pre-built at the factory, there could be more than one LUN per RAID set, and the multiple LUNs were displayed and managed correctly by the NAS OS.)

Starting with NAS software version 4.20, you can create more than one LUN for each RAID set, thereby making use of space that would otherwise be wasted. This is sometimes referred to as LUN carving. To access more than 2 terabytes in a single RAID set, you can define as many LUNs as necessary to carve out the size you want.
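The arithmetic behind LUN carving can be sketched as follows. The 500 GB drive size and the 6+1 set are hypothetical example values; only the 2 TB per-LUN cap comes from the text above.

```python
LUN_LIMIT_TB = 2.0   # per-LUN cap imposed by the storage protocol

def carve(raid_set_tb: float, lun_limit_tb: float = LUN_LIMIT_TB) -> list[float]:
    """Return a list of LUN sizes that together consume the whole RAID set,
    each LUN no larger than the protocol's 2 TB limit."""
    luns = []
    remaining = raid_set_tb
    while remaining > 0:
        size = min(lun_limit_tb, remaining)
        luns.append(size)
        remaining -= size
    return luns

# A hypothetical 6+1 RAID-5 set of 500 GB drives has 6 data drives,
# or 3.0 TB of usable space. A single LUN would waste 1 TB; carving
# yields two LUNs that use it all:
print(carve(3.0))   # [2.0, 1.0]
```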


About Partitions

Partitions are sections on a logical unit number (LUN) and provide a way to subdivide the total space available within a LUN. The NAS software supports a maximum of 31 partitions per LUN. Partitions are defined automatically when you create a LUN.



Note - New components are now configured with LUNs during manufacturing, but you must initialize the partition table manually before they can be used. On the File Volume Operations page, a LUN with a partition table displays a white block, indicating free space, and the value 1 as the number of partitions. A LUN without a partition table displays a blank block and does not display the number of partitions.



When a LUN is first created, all of the available space is allocated to the first partition and any others are empty. To use the space in a partition, you must create a file volume. Each partition can contain only one file volume, though a single file volume can span several partitions. When you make a file volume, the size of the partition is adjusted to match the size of the file volume and any additional space on the LUN is assigned to the next partition. After you have made all of the file volumes the operating system supports, any extra space on that LUN is inaccessible.


About File Volumes

File volumes define the spaces that are available for storing information, and are created from partitions that have available space. If the volume does not use up all the available space in a partition, the remaining space is allocated into the next partition. New file volumes are limited to 256 gigabytes in size. To create a larger file volume, you can create and attach up to 63 segments (see About Segments) to the original file volume.

You can increase the size of a file volume by attaching a segment (see About Segments). The segment is essentially another file volume with special characteristics. When you add a segment to an existing volume, there is no distinction between the two and a user sees only more space in the volume. This flexibility enables you to create a file volume and then to expand it as needed without disturbing your users and without forcing them to spread their data over several volumes. As a system administrator adds drives and LUNs, users see more space within the volume.

From the user's point of view, the file volume and any directory structures within it are the focus. If the file volume begins to fill up, the administrator can attach another segment and increase the available space within that file volume. In physical terms, this might involve adding more drives and/or expansion units; however, the user sees only more storage space.


About Segments

Segments are "volumes" of storage space created much like file volumes. They can be attached to an existing file volume at any time. Attaching a segment increases the original file volume's total capacity. Each segment must be created independently and then attached to a file volume. After the segment is attached to a file volume, the volume and the segment are inseparable.

In general, segments are created as needed and attached to volumes as the volumes begin to fill with data. The main advantage of adding space by attaching segments is that you can create the segment on a new drive or even a new array. After the segment is attached to the original file volume, the different physical storage locations are invisible to the user. Therefore, space can be added as needed, without bringing down the network to restructure the data storage and create a bigger file volume.


Creating the File System

This section provides information about creating the NAS file system. The following subsections are included:


About Creating the File System

If you are configuring a gateway system, use the storage system configuration tools to create hot-spare drives and logical unit numbers (LUNs). Refer to the documentation supplied with the storage system that is connected to your gateway.

If you are configuring a (non-gateway) appliance, refer to About Creating RAID Sets and LUNs and Designating a Drive As a Hot-Spare.


About Creating RAID Sets and LUNs

NAS appliances and gateway systems support a maximum of 255 logical unit numbers (LUNs). For cluster configurations, the 255-LUN limit is shared across both servers, but can be split any way.

The NAS software uses two approaches to creating new redundant array of independent disks (RAID) sets and LUNs, depending on your hardware:

Before you add a new LUN, verify the following:


After you add a new LUN, check the following:

If the new LUN had been assigned to another host in the SAN and is now added to the NAS Gateway system, the LUN might be inaccessible because it has residual data, indicated by having an owner of "no DPMGR." To remove the data and make the LUN usable, use the following procedure:

hostname> disk disk-name,partition-number zap

Caution: The zap command reformats the LUN. The disk table will be deleted.


Adding a New LUN (Sun StorageTek 5310 and Sun StorageTek 5320 NAS Devices)

For Sun StorageTek 5310 and Sun StorageTek 5320 NAS appliances, a wizard steps you through the process of creating new logical unit numbers (LUNs). New LUNs can be defined either in an existing redundant array of independent disks set (a RAID set that already has one or more LUNs defined), or a new RAID set. When a LUN is created in a new RAID set, the wizard creates the RAID set as well as the LUN.

1. From the navigation panel, choose RAID > Manage RAID.

2. Click Add LUN to launch the wizard, then follow the prompts as it guides you through the process of creating the new LUN and, as applicable, the new RAID set (detailed in Step 3 through Step 5).

3. When prompted to select the controller unit, use the Controller Unit drop-down menu to select the controller unit that will manage the new LUN.

4. When prompted to select the physical drives for the LUN (same screen as for Step 3), you can use unassigned drives, or you can select an existing RAID set. If you use unassigned drives, select at least three drives from the graphical image on the right. Each drive image is keyed to indicate whether it is available for use, already selected for LUN membership, empty, and so forth. Refer to Select Controller Unit and Drives or RAID Set for details.

5. In the LUN Properties window, specify the LUN size (up to 2 terabytes) and, for cluster (dual-server) configurations, the server that manages the LUN. Then select the radio button that describes how to proceed:


Adding a New LUN (Sun StorageTek 5210 NAS Appliances)

For Sun StorageTek 5210 NAS appliances, follow these steps to create a new logical unit number (LUN) and redundant array of independent disks (RAID) set:

1. From the navigation panel, choose RAID > Manage RAID.

2. Click Add LUN.

3. From the Controller drop-down menu, select the number of the controller to which you want to add a LUN.

4. Select the drives that will belong to the LUN by clicking each drive image.

You must select at least three drives. The drive images show the status of each drive. For information about the drive images and their statuses, see Add LUN Window.

5. Select one of the following volume options:

Note: In a cluster configuration, volume names must be unique across cluster members.

6. Click Apply to add the new LUN.

Allow several hours for the system to add the LUN and build the RAID set.


Designating a Drive As a Hot-Spare

You can configure any drive as a hot-spare for NAS appliances.

To designate a drive as a hot-spare:

1. From the navigation panel, choose RAID > Manage RAID.

2. Click the Add HS button at the bottom of the screen.

3. Select the drive you want by clicking the drive image.

The drive images show the status of each drive, as detailed under Add Hot-Spare Window. Make sure the disk you select as a hot-spare is at least as large as the largest disk in any logical unit number (LUN) defined on the NAS appliance.

4. Click Apply to add the new hot-spare.
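The sizing rule in Step 3 amounts to a simple comparison, sketched below. The gigabyte figures are hypothetical example values, not drives shipped with any particular configuration.

```python
def valid_hot_spare(spare_gb: int, lun_drive_sizes_gb: list[int]) -> bool:
    """A hot-spare must be at least as large as the largest disk
    in any LUN defined on the appliance, so it can stand in for
    whichever drive fails."""
    return spare_gb >= max(lun_drive_sizes_gb)

# Hypothetical mix of 146 GB and 300 GB drives across the LUNs:
assert valid_hot_spare(300, [146, 300])       # large enough for any failure
assert not valid_hot_spare(146, [146, 300])   # cannot cover a 300 GB drive
```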


Creating File Volumes or Segments

This section provides information about creating file volumes or segments. The following subsections are included:


About Creating a File Volume or a Segment

New file volumes are limited to 256 gigabytes in size. To create a larger file volume, you can add segments to the primary volume. You create one primary volume and then attach up to 63 segments to increase its size.

A file volume or segment can be created using the Create File Volumes panel or the System Manager.


Creating a File Volume or Segment Using the Create File Volumes Panel

To create a file volume or segment using the Create File Volumes panel:

1. From the navigation panel, choose File Volume Operations > Create File Volumes.

If a LUN has not been initialized (indicated by a blank display), initialize its partition table:

a. Select the LUN file volume from the list.

b. Click Initialize Partition Table.

c. Repeat Steps a and b for all uninitialized LUNs.

2. If you have recently added new disks to the live system without performing a reboot, click the Scan For New Disks button.

The partition number for the file volume in the Partition drop-down menu will increment when the file volume is created.

3. Type in the name of the new volume or segment in the Name field.

The name must begin with a letter of the alphabet (a-z, A-Z), and can include up to 12 alphanumeric characters (a-z, A-Z, 0-9).

Note: In a cluster configuration, volume names must be unique across cluster members. Identical volume names cause problems in the event of failover. See About Enabling Failover for more information.

4. Select whether the size of the file volume is reported in MB (megabytes) or GB (gigabytes) by clicking on the drop-down menu.

5. Type in the file volume size in whole numbers.

The total space available is shown beneath this field.

6. Select the file volume type (Primary, Segment, or Raw).

7. If you have the Sun StorageTek Compliance Archiving Software installed, and you want to create a compliance-enabled volume, click Enable in the Compliance section. Then specify the type of compliance enforcement.


Caution: After you enable compliance archiving with mandatory enforcement on a volume, that volume cannot be deleted, be renamed, or have compliance archiving disabled or downgraded to advisory enforcement.

Note: Decreasing the retention time and removing retained files before the retention period has expired must be performed by the root user from a trusted host. See Managing Trusted Hosts for more information.

For more information, see About the Compliance Archiving Option.

8. Click Apply to create the new file volume or segment.

Note: After creating a volume, you must create a share for the volume. Users can then access the volume and create directories. After directories are created on the volume, you can create individual shares for them.


Creating a File Volume or Segment Using the System Manager

To create a file volume or segment by using the System Manager:

1. From the navigation panel, right-click System Manager.

2. Choose Create Volume or Create Segment from the pop-up menu to open the desired window.

3. In the LUN box, click the logical unit number (LUN) where you want to create the primary file volume. If the LUN has not been initialized, indicated by a blank display, use the following procedure to initialize the LUN's partition table:

a. Select the LUN file volume from the list.

b. Click Initialize Partition Table.

c. Repeat Steps a and b for all uninitialized LUNs.

The partition number for the file volume in the Partition drop-down menu will increment when the file volume is created.

4. Type in the name of the new volume or segment in the Name field.

The name must begin with a letter of the alphabet (a-z, A-Z), and can include up to 12 alphanumeric characters (a-z, A-Z, 0-9).

5. Select whether the size of the file volume is reported in MB (megabytes) or GB (gigabytes) by clicking on the drop-down menu.

6. Type in the file volume size in whole numbers.

The total space available is shown directly beneath this field.

7. Select the file volume type (Primary, Segment, or Raw).

8. If you have the Compliance Archiving software installed and you want to create a compliance-enabled volume, click Enable in the Compliance section. Then specify the type of compliance enforcement that is needed.


Caution: After you enable compliance archiving with mandatory enforcement on a volume, that volume cannot be deleted, be renamed, or have compliance archiving disabled or downgraded to advisory enforcement.

Note: Decreasing the retention time and removing retained files before the retention period has expired must be performed by the root user from a trusted host. See Managing Trusted Hosts for more information.

For more information, see About the Compliance Archiving Option.

9. Click Apply to create the new file volume or segment.

Note: After creating a volume, you must create a share for the volume. Users can then access the volume and create directories. After directories are created on the volume, you can create individual shares for them.


Attaching Segments to a Primary File Volume

This section provides information about attaching segments to a primary file volume. The following subsections are included:

About Attaching Segments to a Primary File Volume

Attaching segments to a primary file volume expands the size of the volume. The segment becomes permanently associated with the volume and cannot be removed. You must create a segment before you can attach it to a volume. Refer to About Creating a File Volume or a Segment for instructions.


Caution: Attaching a segment to a primary file volume cannot be reversed.

A file volume by itself is limited to 256 gigabytes; however, up to 63 segments from any logical unit number (LUN) can be attached to any file volume. Each segment can be as small as 8 megabytes and as large as 256 gigabytes.
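From the limits just stated, the theoretical maximum size of a fully extended file volume follows by simple arithmetic. This is a back-of-the-envelope figure derived from the documented limits, not a tested or recommended configuration:

```python
# Limits stated in the text:
PRIMARY_MAX_GB = 256   # maximum size of the primary file volume
SEGMENT_MAX_GB = 256   # maximum size of each attached segment
MAX_SEGMENTS = 63      # maximum number of segments per volume

# Largest possible volume: one primary plus 63 maximum-size segments.
max_volume_gb = PRIMARY_MAX_GB + MAX_SEGMENTS * SEGMENT_MAX_GB
print(max_volume_gb)   # 16384 GB, i.e. 16 TB
```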

You can attach a segment using the Attach Segments panel, or the System Manager software.


Caution: Compliance-enabled volumes with mandatory enforcement cannot be deleted. If you add a segment to a compliance-enabled volume with mandatory enforcement, you will not be able to delete or reclaim the space used by the segment.

Attaching a Segment Using the Attach Segments Panel

To attach a segment by using the Attach Segments panel:

1. From the navigation panel, choose File Volume Operations > Attach Segments.

2. Click to select the desired volume from the Existing Volumes box.

3. Click to select the desired segment from the Available Segments box.

4. Click Apply to attach.

Attaching a Segment Using the System Manager

To attach a segment by using the System Manager software:

1. From the navigation panel, click System Manager to view existing volumes.

2. Right-click the desired file volume to access the pop-up menu, then select Attach Segment.

3. For each segment that you want to attach, select the desired segment and click Apply to attach it.

Only one segment can be selected and attached at a time.


About Rebuilding a LUN

If one of the drives in a logical unit number (LUN) fails, the light-emitting diode (LED) on that drive turns steady amber, indicating it is waiting to be replaced with a new drive.

If a hot-spare drive is available, the redundant array of independent disks (RAID) set associated with the failed drive will be rebuilt using that hot-spare. All drives associated with the rebuild will have LEDs blinking green and must not be removed during the rebuilding process. A similar rebuild will take place when the failed drive is replaced, as the new drive is reinserted into the RAID set and the hot-spare is returned to standby mode. Rebuilding might take several hours to complete.

If your system does not include a hot-spare, you must remove the failed drive and replace it with another drive of the same or larger capacity. See Appendix D for information on replacing a failed drive.

After you replace the faulty drive, the RAID controller rebuilds the LUN. This can take several hours, depending on disk capacity. The LUN drive LEDs blink amber during LUN rebuilding.


Managing File Volumes and Segments

File-system management tasks include the following:


Editing File Volume Properties

You can change the properties of a file volume using the Edit Volume Properties panel.

Note: Compliance-enabled volumes with mandatory enforcement cannot be renamed or have compliance archiving disabled or downgraded to advisory enforcement.

To rename a volume, enable checkpoints, enable quotas, or edit compliance properties:

1. From the navigation panel, choose File Volume Operations > Edit Properties.

2. From the Volumes list, select the name of the volume you want to change.

3. If you wish to change the volume name, type the new name.

The name must begin with a letter of the alphabet (a-z, A-Z), and can include up to 12 alphanumeric characters (a-z, A-Z, 0-9).

4. To exclude the volume from virus scans, select Virus Scan Exempt.

5. If you plan to maintain file-volume checkpoints, or to run NDMP backups, select Enable Checkpoints. Checkpoints are enabled by default when you first create a file volume.

Note: If you clear this checkbox, any checkpoints taken already will be deleted immediately, regardless of their defined retention.

6. With checkpoints enabled, select one or both of the checkpoint options:


Use for Backups - Select this box if you plan to create NDMP backups for the file volume. NDMP performs backups from a copy of the file volume, thereby avoiding potential problems involved with backing up from the live file system.

Automatic - Select this box if you plan to create checkpoints for the file volume. After selecting this box, the NAS software allows you to schedule regular checkpoints, as described under Scheduling File-System Checkpoints.


7. Select Enable Quotas to enable quotas for the selected volume. Quotas are disabled by default when you create a file volume.

8. Select Enable Attic to temporarily save deleted files in the .attic$ directory located at the root of each volume. By default, this option is enabled.

In rare cases on very busy file systems, the .attic$ directory can be filled faster than it processes deletes, leading to a lack of free space and slow performance. In such a case, disable the .attic$ directory by clearing this checkbox.

9. If the volume is compliance-enabled, you have several options in the Compliance Archiving Software section, as described in the following table, depending on the level of compliance enabled.


Caution: For compliance-enabled volumes with mandatory enforcement, the default retention time is "Permanent." For compliance-enabled volumes with advisory enforcement, the default retention time is zero days. If you want to set a different default retention time, you must specify the new retention period before you begin using the volume.

Caution: After you enable compliance archiving with mandatory enforcement on a volume, that volume cannot be deleted, be renamed, or have compliance archiving disabled or downgraded to advisory enforcement.

For more information, see About the Compliance Archiving Option.


Mandatory Enforcement - If the volume is compliance-enabled with mandatory enforcement, you cannot change to advisory enforcement.

Advisory Enforcement - If the volume is compliance-enabled with advisory enforcement and you want to change the volume to be compliance-enabled with mandatory enforcement, you can change the setting by selecting Mandatory Enforcement.

Permanent Retention - Default. Select this option to permanently retain the data on this volume. If you do not want the data permanently retained, you must select the Retain for nn Days option before you use the volume.

Retain for nn Days - Select this option and use the drop-down menu to specify the number of days the data is retained. If the volume is compliance-enabled with advisory enforcement, you can increase or decrease the retention period. If the volume is compliance-enabled with mandatory enforcement, you can only increase the retention period.


10. Click Apply to save your changes.


Deleting File Volumes or Segments

In some instances, after deleting files, volume free space does not change, most likely due to the checkpoint feature or the attic enable feature. (For information about attic enabling, see Editing File Volume Properties.)

Checkpoints store deleted and changed data for a defined period of time to enable retrieval for data security. This means that the data is not removed from disk until the checkpoint has expired, a maximum of two weeks. An exception occurs with manual checkpoints, which are retained indefinitely.

If you need to delete files to free disk space on a full volume, you must remove or disable checkpoints. Otherwise, you will be unable to delete the files. Refer to Removing a Checkpoint for instructions on removing checkpoints.

Note: Compliance-enabled volumes with mandatory enforcement cannot be deleted, and volumes or LUNs that are off-line cannot be deleted.

To delete a file volume or segment:

1. From the navigation panel, choose File Volume Operations > Delete File Volumes.

2. Select the file volume or segment you want to delete.

3. Click Apply.


Viewing Volume Partitions

The View Volume Partitions panel is a read-only display of the logical unit numbers (LUNs) defined for the NAS appliance or gateway system. It applies for single- and dual-server (cluster) configurations.

To view volume partitions:

1. From the navigation panel, choose File Volume Operations > View Volume Partitions.

2. In the Volumes list, select the file volume for which you want to view partitions.


System Language Considerations

The NAS software stores file and directory names internally in the file system, using 8-bit Unicode Transformation Format (UTF-8) encoding. If you use names that are not UTF-8 encoded, the NAS software converts them to UTF-8 before passing the name to the file system. This allows your client applications to store the files on NAS storage, and to share the files between Unix and Windows applications.
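The conversion the NAS software performs can be illustrated with standard codec calls. This sketch uses Python only as an illustration of the encoding step; the Korean example name and the EUC-KR client locale are hypothetical.

```python
# A file name arriving from a non-UTF-8 client in EUC-KR encoding
# (a hypothetical Korean name) is converted to UTF-8 before being
# passed to the file system.
euckr_bytes = "한글".encode("euc-kr")   # what an EUC-KR NFS client might send
name = euckr_bytes.decode("euc-kr")     # interpret using the client's locale
utf8_bytes = name.encode("utf-8")       # store internally as UTF-8

# The same name round-trips losslessly, so Unix and Windows clients
# can share the file regardless of their local encodings.
assert utf8_bytes.decode("utf-8") == "한글"
```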

If you have NFS clients that fall into either of the categories below, follow the steps described to enable file/directory name translation:

a. Add the NFS client to the euc-kr host group, referring to Adding a Member to a Host Group for detailed instructions.

b. Make sure the system language is set to Korean, referring to Assigning the Language for detailed instructions.


Configuring the NAS for iSCSI

This section provides information about configuring a NAS appliance or gateway system to expose storage on NAS file volumes as Internet Small Computer Systems Interface (iSCSI) logical unit numbers (LUNs), thereby making them available to iSCSI initiator applications running on host clients. It contains the following subsections:


About iSCSI

Internet Small Computer Systems Interface (iSCSI) is a transport protocol that allows host system applications to access storage devices by encapsulating and sending SCSI commands, data, and status information over TCP/IP (Transmission Control Protocol/Internet Protocol) networks. iSCSI employs a client-initiator/server-target model, where an iSCSI initiator (host-system application) encapsulates SCSI packets and sends them to a target storage device (the server).

NAS appliances and gateway systems can be configured to process Internet Small Computer Systems Interface (iSCSI) commands, and to make NAS storage available to iSCSI applications running on host clients. The NAS appliance or gateway system acts as the iSCSI target in this case, for one or more iSCSI initiator clients (host applications).

The current implementation supports the following iSCSI initiators:

For Microsoft applications, NAS iSCSI supports:

Each iSCSI logical unit number (LUN) can be shared by any number of client initiators, provided that the client applications and operating systems recognize that the disk is being shared. In addition, the NAS iSCSI software supports up to four simultaneous connections per session (that is, between each client initiator and a single iSCSI LUN), for load balancing and high availability.

This means, for example, that if the client application is Microsoft Exchange, and several Exchange servers are clustered to manage the same Exchange database, each server (up to four) can have a connection to the same iSCSI storage on the NAS device.

After you enable iSCSI, iSCSI initiators can store and access data on the NAS file systems just like any other client application. To facilitate this, you define iSCSI logical unit numbers (LUNs) within standard NAS file systems. These iSCSI LUNs use an area of dedicated storage (a file) to emulate a SCSI disk device, providing physical storage for data processed by iSCSI client applications. This storage is treated:

The iSCSI target implemented on NAS appliances and gateway systems is based on iSCSI RFC 3720, developed by the Internet Engineering Task Force (IETF). The supported protocol features include:

About iSCSI Identifiers

Each iSCSI initiator and target has a unique, permanent identifier.

The iSCSI initiator identifier is generated by iSCSI software on the host initiator.

The iSCSI target identifier is generated when you create iSCSI logical unit numbers (LUNs), using this IQN format:

iqn.1986-03.com.sun:01:mac-address.timestamp.user-specified-name

where:
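As a rough illustration, the IQN format above could be assembled as follows. The MAC address, timestamp, and user-specified name here are hypothetical sample values, and the exact timestamp encoding used by the NAS software is an assumption:

```python
# Hypothetical component values; the real identifier is generated by the
# NAS software when the iSCSI LUN is created.
mac_address = "00144fa81234"   # MAC address of the NAS device (sample value)
timestamp = "1f2d3c4b"         # creation-time encoding is an assumption
user_specified_name = "mylun"  # the name entered when defining the LUN

iqn = f"iqn.1986-03.com.sun:01:{mac_address}.{timestamp}.{user_specified_name}"
print(iqn)
# iqn.1986-03.com.sun:01:00144fa81234.1f2d3c4b.mylun
```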


About Configuring an iSCSI Target

Follow these steps to configure the NAS appliance or gateway system as an iSCSI target. This allows iSCSI initiators (host applications) to connect to, and access, iSCSI logical unit numbers (LUNs) on the NAS device:

1. Configure the iSCSI initiator client, referring to the documentation provided with the iSCSI initiator software.

2. Create one or more access lists, each comprising a list of iSCSI initiators that can access a specific set of iSCSI LUNs on the NAS device. Refer to Creating an iSCSI Access List for further details. You will associate the appropriate access list with each LUN during LUN definition.

3. Configure one or more iSCSI LUNs, each corresponding to an area of storage on the NAS device that will be accessible to iSCSI clients. Refer to Creating an iSCSI LUN for further details. Assign the appropriate access list to each LUN, to identify those iSCSI initiators that can access it.

4. Configure the iSCSI target discovery method, referring to About iSCSI Target Discovery Methods for further details.


Creating an iSCSI Access List

An Internet Small Computer Systems Interface (iSCSI) access list defines a set of iSCSI initiators that can access one or more iSCSI logical unit numbers (LUNs) on the NAS device.

Follow these steps to create or edit an iSCSI access list:

1. From the navigation panel, choose iSCSI Configuration > Configure Access List.

2. Click Add to open the Add iSCSI Access window, or select an existing access list and click Edit to modify the list.

3. Fill in the fields to define the access list, specifying the name of the access list, the name of the Challenge Handshake Authentication Protocol (CHAP) initiator and password, and the client initiators that belong to the list. CHAP ensures that the incoming data is sent from an authentic iSCSI initiator. For detailed information about the fields, see Add/Edit iSCSI Access Window.

4. Click Apply to save the settings.
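The CHAP authentication configured in step 3 works by having one side issue a random challenge that the other must answer with a one-way hash, so the CHAP password itself never crosses the network. A minimal sketch of the standard CHAP response computation (MD5 over the identifier, secret, and challenge, per RFC 1994) follows; the secret and challenge values are illustrative:

```python
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """CHAP response: MD5(identifier || secret || challenge), per RFC 1994."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# Illustrative values; real ones come from the initiator/target exchange.
challenge = os.urandom(16)            # random challenge sent by the target
secret = b"initiator-chap-password"   # the CHAP password configured in step 3

response = chap_response(0x01, secret, challenge)

# The target computes the same hash from its copy of the secret and
# compares it with the response to authenticate the initiator.
assert chap_response(0x01, secret, challenge) == response
```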


Creating an iSCSI LUN

To configure the NAS appliance or gateway system as an Internet Small Computer Systems Interface (iSCSI) target, you must configure one or more iSCSI logical unit numbers (LUNs) that will be accessible to iSCSI clients. Each iSCSI LUN uses a dedicated storage area (on a standard NAS file volume) to provide physical storage for data processed by iSCSI client applications.

iSCSI LUNs provide optimal performance if the volumes they reside on are used exclusively for iSCSI LUNs. If these volumes also contain Common Internet File System (CIFS) shares or Network File System (NFS) mounts, the performance of the iSCSI LUNs might not be optimal (depending on the I/O traffic of each protocol).

Before adding or editing an iSCSI LUN, ensure that you have created the corresponding access list for the LUN. For more information, see Creating an iSCSI Access List.


Caution: You can configure more than one iSCSI initiator to access the same target LUN; however, the applications running on the iSCSI client server must ensure synchronized access to avoid data corruption.

Follow these steps to create an iSCSI LUN:

1. From the navigation panel, choose iSCSI Configuration > Configure iSCSI LUN.

2. Click Add to open the Add iSCSI LUN window, or select an existing iSCSI LUN and click Edit to modify a LUN definition.

3. Fill in the fields to define the iSCSI LUN, specifying the LUN name (and optional alias), the corresponding NAS file volume, LUN capacity (maximum of 2 terabytes), whether the LUN is thin-provisioned, and the access list. For detailed information about the fields, see Add/Edit iSCSI LUN Window.

4. Click Apply to save the settings.
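The fields entered in step 3 can be thought of as a small set of pre-checks, sketched below. The function and field names are hypothetical, not the actual Web Administrator interface; only the 2-terabyte capacity limit comes from this document:

```python
# Hypothetical pre-check of the fields entered in step 3; the function and
# parameter names are illustrative, not a NAS software API.
MAX_LUN_BYTES = 2 * 1024**4   # 2 terabytes, the documented capacity limit

def validate_lun(name: str, volume: str, capacity_bytes: int,
                 thin_provisioned: bool, access_list: str) -> None:
    """Raise ValueError if any required LUN field is missing or invalid."""
    if not name:
        raise ValueError("LUN name is required")
    if not volume:
        raise ValueError("a NAS file volume must be selected")
    if not 0 < capacity_bytes <= MAX_LUN_BYTES:
        raise ValueError("capacity must be between 1 byte and 2 TB")
    if not access_list:
        raise ValueError("an access list must be assigned")

# A 500 GB thin-provisioned LUN on volume "vol1" passes the checks.
validate_lun("lun0", "vol1", 500 * 1024**3, True, "exchange-hosts")
```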

About SCSI Thin-Provisioned LUNs

As a general rule when creating Small Computer Systems Interface (SCSI) logical unit numbers (LUNs), configure fully provisioned LUNs if sufficient storage is available.

If you create thin-provisioned (that is, sparse) iSCSI LUNs, disk space is not allocated before use. Thin-provisioned LUNs are useful when you expect to define several iSCSI LUNs that will not use their full capacity. For example, if you expect that five LUNs of 100 gigabytes (GB) each will use only 55% of their capacity, you can create them all on a file volume that can hold 5 x 100 x 0.55 = 275 GB, plus 50 GB for growth, for a total of 325 GB. Using this model, you can monitor actual volume usage and allocate additional space to the volume before it runs out of space.

If you expect to use the majority of the storage allocated for iSCSI LUNs, do not configure thin provisioning. Some operating environments do not handle out-of-space conditions gracefully on thin-provisioned LUNs, so full provisioning provides the most predictable system behavior.


About iSCSI Target Discovery Methods

An Internet Small Computer Systems Interface (iSCSI) initiator can locate its iSCSI NAS target using any of the following methods:


Caution: Advertise each iSCSI LUN only once on the network. Do not advertise the same iSCSI Qualified Name (IQN) from two different NAS devices. (This could happen with mirroring, after promoting a copy of the file on a mirror volume.)

Support for an iSNS server is an optional feature, and can be configured using the Web Administrator GUI, as described under Specifying an iSNS Server.


Specifying an iSNS Server

Follow these steps to enable use of an Internet Storage Name Service (iSNS) server for iSCSI target discovery. The NAS iSNS client interoperates with any standard iSNS server, such as Microsoft iSNS Server 3.0.

To specify the iSNS server:

1. From the navigation panel, choose iSCSI Configuration > Configure iSNS Server.

2. Identify the iSNS server to use, specifying either the server's Internet Protocol (IP) address or Domain Name Service (DNS) name.

3. Click Apply to save the setting.

Refer to your iSNS server documentation and iSCSI initiator documentation for more information.


Where to Go From Here

At this point, your file system and iSCSI targets are set up and ready to use. From here, you need to set up access privileges, quotas, and whatever directory structures you need. These management functions are described beginning in Chapter 4.

Monitoring functions, which are essential to managing resources, are covered in Chapter 10. Maintenance functions, such as backup and restore, are covered in Chapter 11.