Chapter 2

Underlying Concepts and Practices

This chapter provides a brief overview of important concepts and practices that underlie the configurations you can use. These concepts and practices are described in greater detail in other books in the Sun StorEdge 3000 family documentation set. Refer to Related Documentation for a list of those books.


Fibre Channel Protocols

The Sun StorEdge 3510 FC array and Sun StorEdge 3511 SATA array support the point-to-point and Fibre Channel-Arbitrated Loop (FC-AL) protocols. Using the point-to-point protocol with Sun StorEdge 3510 FC arrays and Sun StorEdge 3511 SATA arrays requires a switched fabric network (SAN), whereas selecting the FC-AL protocol enables the arrays to be used in either direct-attached storage (DAS) or SAN environments. The point-to-point protocol enables full-duplex use of the available channel bandwidth, whereas the FC-AL protocol limits host channels to half-duplex communication.

In a point-to-point configuration, only one ID can be assigned to each host channel. If more than one ID is assigned, the point-to-point protocol rules are violated. Any host channel with more than one ID will not be able to log into an FC switch in fabric mode. This "one-ID-per-channel" requirement is true in both single-controller and dual-controller configurations. Thus, in dual-controller configurations, either the primary or the secondary controller can have an ID assigned, but not both. This yields:

4 (host channels) x 1 (ID per channel) x 32 (LUNs per ID) = 128 maximum addressable LUNs in a fabric point-to-point environment. If dual paths are desired for each logical device, a maximum of 64 dual-pathed LUNs are available.

In an FC-AL configuration, multiple IDs can be assigned to any given host channel. The maximum number of storage partitions that can be mapped to a RAID array is 1024.

There are several ways that 1024 LUNs can be configured. For example:

4 (host channels) x 8 (IDs per channel) x 32 (LUNs per ID) = 1024 maximum addressable LUNs in an FC-AL environment.

However, configuring the maximum number of LUNs increases overhead and can have a negative impact on performance.

The FC-AL protocol should be selected for environments needing more than 128 LUNs, or where a switched fabric network is not available.
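The addressable-LUN arithmetic above can be sketched as a small calculation. This is purely illustrative; the function and variable names are our own and do not correspond to any Sun tool:

```python
def max_addressable_luns(host_channels, ids_per_channel, luns_per_id=32):
    """Maximum addressable LUNs = channels x IDs per channel x LUNs per ID."""
    return host_channels * ids_per_channel * luns_per_id

# Point-to-point (fabric) mode: only one ID per host channel
p2p = max_addressable_luns(4, 1)        # 128
p2p_dual_pathed = p2p // 2              # 64 dual-pathed LUNs

# FC-AL (loop) mode: up to 8 IDs per host channel
fcal = max_addressable_luns(4, 8)       # 1024
fcal_dual_pathed = fcal // 2            # 512 dual-pathed LUNs
```

As the halved figures show, providing redundant paths to every logical device costs half the addressable LUN count in either mode.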


Supported RAID Levels

Several RAID levels are available: RAID 0, 1, 3, 5, 1+0 (10), 3+0 (30), and 5+0 (50). RAID levels 1, 3, and 5 are the most commonly used. Sun StorEdge 3000 family arrays support the use of both global and local spare drives in the unlikely event of disk failure. It is good practice to use spare drives when configuring RAID devices. Refer to the Sun StorEdge 3000 Family RAID Firmware User's Guide for detailed information about how RAID levels and spare drives are implemented.


Logical Drives

A logical drive (LD) is a group of physical drives configured with a RAID level. Each logical drive can be configured for a different RAID level.

Sun StorEdge 3510 FC arrays and Sun StorEdge 3511 SATA arrays support a maximum of 32 logical drives. A logical drive can be managed by either the primary or secondary controller. The best practice for creating logical drives is to add them evenly across the primary and secondary controllers. With at least one logical drive assigned to each controller, both controllers are active. This configuration is known as an active-active controller configuration and allows maximum use of a dual-controller array's resources.

Sun StorEdge 3510 FC arrays and Sun StorEdge 3511 SATA arrays support logical drives larger than 2 Tbyte. This can increase the usable storage capacity of configurations by reducing the total number of parity disks required when using parity-protected RAID levels. However, this differs from using LUNs larger than 2 Tbyte, which requires specific operating system, host adapter driver, and application program support.

Supporting large storage capacities requires advanced planning since it requires using large logical drives with several partitions each or many logical drives. For maximum efficiency, create logical drives larger than 2 Tbyte and partition them into multiple LUNs with a capacity of 2 Tbyte or less.

The largest supported logical drive configuration depends largely upon the cache optimization setting. TABLE 2-1 shows the maximum number of disks that can be used in a single logical drive, based upon the drive size, and the optimization method chosen.

TABLE 2-1 Maximum Number of Disks per Logical Drive

Drive Size   FC (Random or Sequential   SATA (Random     SATA (Sequential
             Optimization)              Optimization)    Optimization)
----------   ------------------------   -------------    ----------------
36 Gbyte     108                        n/a              n/a
73 Gbyte     108                        n/a              n/a
146 Gbyte    108                        n/a              n/a
250 Gbyte    n/a                        66               72
400 Gbyte    n/a                        41               72


The RAID firmware also imposes a maximum capacity per logical drive, which depends on the optimization mode selected.

Since Sun StorEdge 3510 FC arrays and Sun StorEdge 3511 SATA arrays also support up to 32 logical drives each, it is unlikely these limits will restrict configurations.



Note - Create several logical drives when using configurations with many disks. Creating a logical drive with a very large number of disks is not advisable.




Maximum Drive Configurations per Array

TABLE 2-2 lists the maximum number of physical and logical drives, partitions per logical drive and logical volume, and maximum number of LUN assignments for each array.

TABLE 2-2 Maximum Number of Supported Physical and Logical Drives, Partitions, and LUN Assignments

Sun StorEdge 3510 FC array
  Physical drives:                108 (1 array and 8 expansion units)
  Logical drives:                 32
  Partitions per logical drive:   32
  Partitions per logical volume:  32
  LUN assignments:                128 (point-to-point mode)
                                  64 (point-to-point mode, redundant configuration)
                                  1024 (loop mode)
                                  512 (loop mode, redundant configuration)

Sun StorEdge 3511 SATA array
  Physical drives:                72 (1 array and 5 expansion units)
  Logical drives:                 32
  Partitions per logical drive:   32
  Partitions per logical volume:  32
  LUN assignments:                128 (point-to-point mode)
                                  64 (point-to-point mode, redundant configuration)
                                  1024 (loop mode)
                                  512 (loop mode, redundant configuration)

Sun StorEdge 3510 FC array with Sun StorEdge 3511 SATA expansion units[1]
  Physical drives:                72 (1 array and 5 expansion units)
  Logical drives:                 32
  Partitions per logical drive:   32
  Partitions per logical volume:  32
  LUN assignments:                128 (point-to-point mode)
                                  64 (point-to-point mode, redundant configuration)
                                  1024 (loop mode)
                                  512 (loop mode, redundant configuration)



Maximum Number of Disks and Maximum Usable Capacity per Logical Drive

The following tables show the maximum number of disks per logical drive, and the maximum usable capacity of a logical drive, depending on RAID level and optimization mode.

Actual logical drive maximum capacities are usually determined by practical considerations or the amount of disk space available.




Caution - In FC and SATA configurations with large drive capacities, the size of the logical drive might exceed the device capacity limitation of your operating system. Be sure to check the device capacity limitation of your operating system before creating the logical drive. If the logical drive size exceeds the capacity limitation, you must partition the logical drive.



TABLE 2-3 shows the usable capacity of the drives available in Sun StorEdge 3000 family arrays.



Note - The 250 Mbyte of reserved space on each drive used for storing controller metadata is not included in this table, since it is not available for storing data.



TABLE 2-3 Actual Capacities per Drive

Drive Size   Usable Capacity (Mbyte)
----------   -----------------------
36 Gbyte     34,482
73 Gbyte     69,757
146 Gbyte    139,759
250 Gbyte    238,216
400 Gbyte    381,291


TABLE 2-4 shows the maximum usable storage capacity for Sun StorEdge 3510 FC arrays and Sun StorEdge 3511 SATA arrays, using the maximum number of expansion units, fully populated with the largest currently available drives.

TABLE 2-4 Maximum Usable Storage Capacity Determined by RAID Level

Array                          Number     Drive       RAID 0    RAID 1    RAID 3 or
                               of Disks   Size        (Tbyte)   (Tbyte)   RAID 5 (Tbyte)
----------------------------   --------   ---------   -------   -------   --------------
Sun StorEdge 3510 FC array     108        146 Gbyte   14.39     7.20      14.26
Sun StorEdge 3511 SATA array   72         400 Gbyte   26.18     13.09     25.82
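The figures in TABLE 2-4 can be reproduced from the per-drive usable capacities in TABLE 2-3. The sketch below is our own; it treats Mbyte and Tbyte as binary units (1 Tbyte = 1024 x 1024 Mbyte) and, as the table's RAID 5 figures imply, assumes a single drive's worth of parity overhead:

```python
def usable_tbyte(disks, drive_mbyte, raid_level):
    """Usable capacity in Tbyte (binary), per the table's apparent assumptions."""
    if raid_level == 0:
        data_disks = disks            # striping: no redundancy overhead
    elif raid_level == 1:
        data_disks = disks // 2       # mirroring halves usable capacity
    elif raid_level in (3, 5):
        data_disks = disks - 1        # one drive's worth of parity
    else:
        raise ValueError(raid_level)
    return round(data_disks * drive_mbyte / 1024**2, 2)

# Sun StorEdge 3510 FC array: 108 x 146-Gbyte drives (139,759 Mbyte usable each)
print(usable_tbyte(108, 139_759, 0))  # 14.39
print(usable_tbyte(108, 139_759, 1))  # 7.2
print(usable_tbyte(108, 139_759, 5))  # 14.26

# Sun StorEdge 3511 SATA array: 72 x 400-Gbyte drives (381,291 Mbyte usable each)
print(usable_tbyte(72, 381_291, 0))   # 26.18
print(usable_tbyte(72, 381_291, 1))   # 13.09
print(usable_tbyte(72, 381_291, 5))   # 25.82
```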




Note - Be sure to check the latest release notes for your array to see additional guidelines or limitations for large configurations.



Each logical drive can be partitioned into up to 32 separate partitions or used as a single partition. Partitions are presented to hosts as LUNs.

Once the logical drives have been created, assigned to a controller, and partitioned, the partitions must be mapped to host channels as LUNs in order for them to be seen by a host. It is usually desirable to map each partition to two host channels for redundant pathing.

A partition can only be mapped to a host channel where its controller has an assigned ID. For example, if LD 0 is assigned to the primary controller, all partitions on LD 0 will need to be mapped to a host channel ID on the primary controller (PID). Any logical drives assigned to the secondary controller will need to have all partitions mapped to a host channel ID on the secondary controller (SID).

When attaching FC cables for LUNs configured with redundant paths, make sure one cable is connected to a channel on the upper controller and the other cable is connected to a different channel on the lower controller. Then, if multipathing software is configured on the host, a controller can be hot-swapped in the event of failure without losing access to the LUN.

For example, suppose partition 0 of LD 0 is mapped to Channel 0 PID 42 and Channel 5 PID 47. To ensure that there is no single point of failure (SPOF), connect a cable from the host HBA or a switch port to the upper board port FC 0, and connect a second cable from the lower board port FC 5 to a different host HBA or switch.
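The mapping rule described above, that a partition may only be mapped to host channel IDs owned by its logical drive's controller, can be modeled in a few lines. This is a hypothetical sketch using the PID/SID values from the example; none of these names correspond to an actual Sun tool:

```python
# Hypothetical model of the LUN-mapping rule: a partition can only be
# mapped to a host channel ID owned by its logical drive's controller.

# (channel, ID) pairs and their owning controller ("primary" = PID, "secondary" = SID)
channel_ids = {
    (0, 42): "primary",    # Channel 0, PID 42
    (5, 47): "primary",    # Channel 5, PID 47
    (1, 43): "secondary",  # Channel 1, SID 43 (illustrative)
}

def can_map(ld_controller, channel, target_id):
    """A partition may use only the IDs assigned to its own controller."""
    return channel_ids.get((channel, target_id)) == ld_controller

# LD 0 is assigned to the primary controller:
assert can_map("primary", 0, 42)       # allowed: PID on channel 0
assert can_map("primary", 5, 47)       # allowed: PID on channel 5
assert not can_map("primary", 1, 43)   # rejected: that ID belongs to the secondary
```

Mapping the same partition to two primary IDs on different channels, as in the cabling example, is what provides the redundant paths.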


Cache Optimization

Sun StorEdge 3000 family arrays provide settings for both sequential I/O and random I/O. Sequential I/O is the default setting.



Note - Due to firmware improvements beginning with version 4.11, sequential optimization yields better performance than random optimization for most applications and configurations. Use sequential optimization unless real-world tests in your production environment show better results for random optimization.



The RAID array's cache optimization mode determines the cache block size used by the controller for all logical drives. An appropriate cache block size improves performance when a particular application uses either large or small stripe sizes.

Because the cache block size works in conjunction with the stripe size, the default stripe size set by the cache optimization mode for each logical drive you create is consistent with the cache block size setting. You can, however, specify a different stripe size for any logical drive at the time you create it.

Once logical drives are created, you cannot use the RAID firmware's "Optimization for Random I/O" or "Optimization for Sequential I/O" menu option to change the optimization mode without deleting all logical drives. You can, however, use Sun StorEdge Configuration Service or the Sun StorEdge CLI set cache-parameters command to change the optimization mode while logical drives exist. Refer to the "Upgrading the Configuration" chapter of the Sun StorEdge 3000 Family Configuration Service User's Guide and the Sun StorEdge 3000 Family CLI 2.0 User's Guide for more information.

Depending on the optimization mode and RAID level selected, newly created logical drives are configured with the default stripe sizes shown in TABLE 2-5.

TABLE 2-5 Default Stripe Size Per Optimization Mode (Kbyte)

RAID Level   Sequential I/O   Random I/O
----------   --------------   ----------
0, 1, 5      128              32
3            16               4


When you create a logical drive, you can replace the default stripe size with one that better suits your application.

Once the stripe size is selected and data is written to logical drives, the only way to change the stripe size of an individual logical drive is to back up all its data to another location, delete the logical drive, and create a logical drive with the stripe size that you want.
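The defaults in TABLE 2-5 amount to a simple lookup; a stripe size chosen at creation time overrides the table entry. A minimal sketch, with names of our own choosing:

```python
# Default stripe size (Kbyte) by optimization mode and RAID level, per TABLE 2-5
DEFAULT_STRIPE_KBYTE = {
    "sequential": {0: 128, 1: 128, 5: 128, 3: 16},
    "random":     {0: 32,  1: 32,  5: 32,  3: 4},
}

def default_stripe_kbyte(raid_level, optimization):
    """Return the default stripe size for a new logical drive."""
    return DEFAULT_STRIPE_KBYTE[optimization][raid_level]

print(default_stripe_kbyte(5, "sequential"))  # 128
print(default_stripe_kbyte(3, "random"))      # 4
```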


Configuring an Array's RCCOM Channel

Redundant controller communication (RCCOM) provides the communication channels by which the two controllers in a redundant RAID array communicate with one another. This communication allows the controllers to monitor each other and includes configuration updates and control of cache. By default, channels 2 and 3 are configured as DRV + RCCOM (Drive and RCCOM). In this configuration, RCCOM is distributed over all DRV + RCCOM channels. However, when host channels remain unused, two alternative configurations are available. Refer to the Sun StorEdge 3000 Family RAID Firmware User's Guide for your array for step-by-step procedures for reconfiguring RCCOM channels.

Using Four DRV + RCCOM Channels

If only channels 0 and 1 are used for communication with servers, channels 4 and 5 can be configured as DRV + RCCOM, providing four DRV + RCCOM channels (channels 2, 3, 4, and 5). An advantage of this configuration is that channels 4 and 5 remain available for the connection of expansion units. The impact of RCCOM is reduced because it is distributed over four channels instead of two. If you later choose to add an expansion unit, it is not necessary to interrupt service by resetting the controller after reconfiguring a channel.

Using Channels 4 and 5 as RCCOM Channels

When only channels 0 and 1 are used for communication with servers, another option is to assign channels 4 and 5 as dedicated RCCOM channels. This reduces the impact of RCCOM on the drive channels by removing RCCOM from drive channels 2 and 3. In this configuration, however, channels 4 and 5 cannot be used to communicate with hosts or to attach expansion modules.


Array Management Tools

Sun StorEdge 3000 family arrays use the same management interfaces and techniques. They can be configured and monitored through any of the following methods:

- The controller firmware application
- Sun StorEdge Configuration Service
- The Sun StorEdge CLI



Note - To set up and use Sun StorEdge Configuration Service, refer to the Sun StorEdge 3000 Family Configuration Service User's Guide. The Sun StorEdge CLI is installed as part of the SUNWsccli package. Information about CLI functionality can be found in the Sun StorEdge 3000 Family CLI User's Guide, and in the sccli man page once the package is installed.



SATA drives respond more slowly than FC drives when being managed by either Sun StorEdge Configuration Service or the Sun StorEdge CLI. From a performance standpoint, it is preferable to use these applications out-of-band to monitor and manage a Sun StorEdge 3511 SATA array or a Sun StorEdge 3510 FC array with attached Sun StorEdge 3511 SATA expansion units. However, security considerations may take precedence over performance considerations.

If you assign an IP address to an array in order to manage it out-of-band, for security reasons consider keeping the IP address on a private network rather than a publicly routable network. Using the controller firmware to set a password for the controller limits unauthorized access to the array. Changing the firmware's Network Protocol Support settings can provide further security by disabling the ability to remotely connect to the array using individual protocols such as HTTP, HTTPS, telnet, FTP, and SSH. Refer to the "Communication Parameters" section of the Sun StorEdge 3000 Family RAID Firmware User's Guide for more information.



Note - Do not use both in-band and out-of-band connections at the same time to manage the array. Otherwise, conflicts between multiple operations might occur.




Saving and Restoring Configuration Information

An important feature of these management tools is the ability to save and restore configuration information in a number of ways. Using the array's firmware application, the configuration information (NVRAM) can be saved to disk. This provides a backup of the controller-dependent configuration information such as channel settings, host IDs, FC protocol, and cache configuration. It does not save LUN mapping information. The NVRAM configuration file can restore all configuration settings but does not rebuild logical drives.

Sun StorEdge Configuration Service and the Sun StorEdge CLI can be used to save (upload) and restore (load or download) all configuration data, including LUN mapping information. These applications can also be used to rebuild all logical drives and therefore can be used to completely duplicate an array's configuration to another array.


1 (TableFootnote) Sun StorEdge 3511 SATA expansion units can be connected to a Sun StorEdge 3510 FC array, either alone or in combination with Sun StorEdge 3510 FC expansion units.