CHAPTER 1
Product and Architecture Overview
This Installation, Operation, and Service Manual describes both the Sun StorEdge 3510 FC array and the Sun StorEdge 3511 SATA array.
The Sun StorEdge 3510 FC array and the Sun StorEdge 3511 SATA array are rack-mountable, Network Equipment Building System (NEBS) Level 3-compliant, Fibre Channel mass storage subsystems. NEBS Level 3 is the highest level of NEBS criteria used to assure maximum operability of networking equipment in mission-critical environments such as telecommunications central offices.
Sun StorEdge 3510 FC Array. The Sun StorEdge 3510 FC array is a Fibre Channel (FC) array designed for high availability, high performance, and high capacity.
Sun StorEdge 3511 SATA Array. The Sun StorEdge 3511 SATA array is designed for high availability, and employs Serial ATA (SATA) technology for high-density storage, with a Fibre Channel front end. This array is ideal for content management archiving applications.
This chapter provides a brief overview of Sun StorEdge 3510 FC arrays and Sun StorEdge 3511 SATA arrays. Topics covered in this chapter are:
The Sun StorEdge 3510 FC array is a next-generation Fibre Channel storage system designed to provide direct attached storage (DAS) to entry-level, mid-range, and enterprise servers, or to serve as the disk storage within a storage area network (SAN). This solution features powerful performance and reliability, availability, and serviceability (RAS) features using modern FC technology. As a result, the Sun StorEdge 3510 FC array is ideal for performance-sensitive applications and for environments with many entry-level, mid-range, and enterprise servers, such as:
The Sun StorEdge 3511 SATA array is best suited for inexpensive secondary storage applications that are not mission-critical, where higher-capacity drives are needed, and where lower performance and less than 7/24 availability are acceptable. These include near-line applications such as:
It is possible, though not always desirable, to combine both Sun StorEdge 3510 FC expansion units and Sun StorEdge 3511 SATA expansion units connected to a Sun StorEdge 3510 FC RAID array. For instance, you might want to use two Sun StorEdge 3511 SATA expansion units for near-line backup and archival storage while the Fibre Channel drives in your RAID array and other expansion units are used for real-time, mission-critical information processing and input/output (I/O) operations.
For an example of such a configuration, refer to the Sun StorEdge 3000 Family Best Practices Manual for your array.
The Sun StorEdge 3510 FC array and Sun StorEdge 3511 SATA array share many architectural elements. This section discusses those elements, making note of the few ways in which the architecture is implemented differently in the two arrays.
Sun StorEdge 3510 FC array and Sun StorEdge 3511 SATA array RAID controllers have six FC channels. RAID controller channels 0, 1, 4, and 5 are normally designated for connection to hosts or Fibre Channel switches. RAID controller channels 2 and 3 are dedicated drive channels that connect to disks. Each channel has a single port connection, except on the Sun StorEdge 3511 SATA array, where channels 0 and 1 each have two port connections.
In a dual RAID controller configuration, the architecture of the loops within the chassis provides both RAID controllers the same host channel designators. Each host channel of the top RAID controller shares a loop with the matching host channel on the bottom RAID controller. For example, channel 0 of the top RAID controller shares the same loop as channel 0 of the bottom RAID controller. This provides four distinct loops for connectivity. The individual loops provide logical unit number (LUN) failover without causing host bus adapter (HBA) path failover in the event of a controller failure.
In a single RAID controller configuration, the lower I/O board has drive channels but does not have host channels. Overall, the same number of loops are available, but with only half as many host channel ports. All six channels in a Sun StorEdge 3510 FC array's I/O controller module support 1-Gbit or 2-Gbit data transfer speeds.
On the Sun StorEdge 3510 FC array, RAID controller channels 0, 1, 4, and 5 are normally designated host channels. Any host channel can be configured as a drive channel. In a dual-controller configuration, each host loop includes two ports per loop, one port on the top controller and one port on the bottom controller.
Sun StorEdge 3510 FC RAID controller channels 2 and 3 are dedicated drive channels that connect to expansion units. Each I/O board has two ports designated as disk drive loops. These ports connect to the internal dual-ported FC disk drives and are used to add expansion units to the configuration.
The two drive loop ports on the upper I/O board form FC loop 2 (channel 2) while the two drive ports on the lower I/O board form FC loop 3 (channel 3). FC loop 2 provides a data path from both RAID controllers to the A loop of the internal disk drives, while FC loop 3 provides a data path from both RAID controllers to the B loop of the internal disk drives.
On the Sun StorEdge 3511 SATA array, RAID controller channels 0 and 1 are dedicated host channels. Channels 4 and 5 are host channels by default but can be configured as drive channels. RAID controller channels 2 and 3 are dedicated drive channels that connect to expansion units.
Unlike the Sun StorEdge 3510 FC array, the Sun StorEdge 3511 SATA RAID controller's host channels 0 and 1 include four ports per loop (two ports on the upper controller and two ports on the lower controller). Channels 0 and 1 support 1-Gbit or 2-Gbit data transfer rates.
Sun StorEdge 3511 SATA RAID controller channels 4 and 5 provide two ports per loop (one port on each controller). Channels 4 and 5 support only a 2-Gbit data transfer rate.
Each Sun StorEdge 3511 SATA RAID controller has two ports designated as disk drive loops. The drive ports support only a 2-Gbit data transfer rate. These ports connect to the internal SATA disk drives using internal FC-SATA routing technology. These drive ports are also used to add expansion units to the configuration.
Like the host channels, each drive channel of the top RAID controller shares a loop with the matching drive channel on the bottom RAID controller. For example, drive channel 2 of the top RAID controller shares the same loop as channel 2 of the bottom RAID controller.
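The default channel assignments described above can be collected into a short sketch. This summary is purely illustrative (the dictionary form and function name are not part of the product); the channel roles themselves are taken from this section, but you should verify them against your firmware configuration.

```python
# Default RAID controller channel roles, as described in this section.
# Illustrative only -- confirm against the firmware application on your array.
CHANNEL_DEFAULTS = {
    "3510 FC": {
        0: "host (configurable as drive)",
        1: "host (configurable as drive)",
        2: "drive (dedicated)",
        3: "drive (dedicated)",
        4: "host (configurable as drive)",
        5: "host (configurable as drive)",
    },
    "3511 SATA": {
        0: "host (dedicated, two ports per controller)",
        1: "host (dedicated, two ports per controller)",
        2: "drive (dedicated)",
        3: "drive (dedicated)",
        4: "host (configurable as drive)",
        5: "host (configurable as drive)",
    },
}

def default_host_channels(model):
    """Return the channels that act as host channels by default."""
    return [ch for ch, role in CHANNEL_DEFAULTS[model].items()
            if role.startswith("host")]

print(default_host_channels("3510 FC"))  # → [0, 1, 4, 5]
```

Note that both arrays expose the same default host channel numbers; the difference lies in how many ports each channel presents and which channels can be reconfigured.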
Sun StorEdge 3510 FC arrays use Fibre Channel (FC) disk drives and are supported by Sun Microsystems in primary online applications as well as secondary and near-line applications. Sun StorEdge 3511 SATA arrays use serial ATA (SATA) disk drives and are supported by Sun in either near-line applications such as backup and restore, or in secondary applications such as static storage. Sun StorEdge 3511 SATA arrays can be used in multipath and multi-host configurations. They are not designed to be used in primary online applications.
Sun StorEdge 3511 SATA expansion units can be connected to Sun StorEdge 3510 FC arrays, either alone or in combination with Sun StorEdge 3510 FC expansion units. Up to five expansion units can be used in this configuration.
Before installing and configuring your array, please review the key differences between the Sun StorEdge 3510 FC array and the Sun StorEdge 3511 SATA array in TABLE 1-1.
Note - Although the two products are very similar in appearance and setup, the configurations have very important differences. While the Sun StorEdge 3510 FC array can be used for all applications, the Sun StorEdge 3511 SATA array cannot. Inappropriate use of the Sun StorEdge 3511 SATA array in applications for which the Sun StorEdge 3510 FC array was designed might result in loss of data or loss of data access.
Best suited for inexpensive secondary storage applications that are not mission-critical, where higher-capacity drives are needed, and where lower performance and less than 7/24 availability are acceptable. This includes near-line applications such as:
Note - In FC and SATA configurations with large drive capacities, the size of the logical drive might exceed the device capacity limitation of your operating system. Be sure to check the device capacity limitation of your operating system before creating the logical drive. If the logical drive size exceeds the capacity limitation, you must partition the logical drive.
Note - All device capacity is displayed in powers of 1024.
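The two notes above can be illustrated with a little arithmetic: drive capacities are marketed in powers of 1000 but displayed in powers of 1024, and a logical drive that exceeds the operating system's device capacity limit must be split into partitions. The 2-Tbyte limit below is a hypothetical example value only, not a figure from this manual; check your own operating system's actual limit.

```python
# Capacity display and partitioning arithmetic (illustrative sketch).

def marketed_gb_to_displayed_gib(gb):
    """Convert a decimal-gigabyte marketed size to the binary size displayed
    by the array (powers of 1024)."""
    return gb * 1000**3 / 1024**3

def partitions_needed(logical_drive_bytes, os_limit_bytes):
    """Minimum number of partitions so that no partition exceeds the
    operating system's device capacity limit."""
    return -(-logical_drive_bytes // os_limit_bytes)  # ceiling division

# A drive marketed as 146 Gbyte displays as roughly 136 Gbyte in powers of 1024.
print(round(marketed_gb_to_displayed_gib(146), 1))  # → 136.0

# A 5-Tbyte logical drive against a hypothetical 2-Tbyte OS limit needs
# at least 3 partitions.
TB = 1024**4
print(partitions_needed(5 * TB, 2 * TB))  # → 3
```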
Sun StorEdge 3510 FC arrays and Sun StorEdge 3511 SATA arrays can be used in the following configurations:
See Appendix B for detailed information about using Sun StorEdge 3510 FC JBOD arrays.
TABLE 1-2 shows the configuration options for Sun StorEdge 3510 FC arrays and Sun StorEdge 3511 SATA arrays.
2-Gbit/sec Fibre Channel disks (Sun StorEdge 3510 FC array)
FC expansion units
FC JBOD arrays (Sun StorEdge 3510 FC array only)
Configuration management and enclosure event reporting options
A label on the bottom lip of an array chassis, underneath the front bezel, indicates whether the array is a JBOD array or a RAID array. For instance, "3510 AC JBOD" refers to an alternating-current version of a 3510 JBOD array, "3510 DC JBOD" refers to a direct-current version of a JBOD array, and "3510 AC RAID" refers to an alternating-current version of a RAID array. An OpenBoot PROM (OBP) command such as probe-scsi-all provides similar information, using an "A" designator for RAID arrays and a "D" designator for disks in a JBOD array. For example, "StorEdge 3510F D1000" identifies a JBOD array with SES firmware version 1000 and "StorEdge 3510F A1000" identifies a Sun StorEdge 3510 FC RAID array with firmware version 1000.
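The designator convention can be sketched as a small parser. The device-string format is inferred solely from the two examples above, and the function name is hypothetical; it is shown only to make the "A"/"D" convention concrete.

```python
# Illustrative parser for the probe-scsi-all designator convention:
# "A" marks a RAID array, "D" marks disks in a JBOD array, and the digits
# that follow give the SES/controller firmware version.
import re

def classify_storedge(device_string):
    """Return ('RAID' or 'JBOD', firmware version) for a 3510F device string."""
    match = re.search(r"StorEdge 3510F ([AD])(\d+)", device_string)
    if match is None:
        raise ValueError("not a recognized StorEdge 3510F device string")
    kind = "RAID" if match.group(1) == "A" else "JBOD"
    return kind, match.group(2)

print(classify_storedge("StorEdge 3510F D1000"))  # → ('JBOD', '1000')
print(classify_storedge("StorEdge 3510F A1000"))  # → ('RAID', '1000')
```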
For a list of supported racks and cabinets, refer to the release notes for the model of array that you are installing. You can find these release notes on the websites identified in this manual.
Reliability, availability, and serviceability (RAS) are supported by:
For information about specifications and agency approvals, see Appendix A.
This section describes the field replaceable units (FRUs) contained in Sun StorEdge 3510 FC arrays and Sun StorEdge 3511 SATA arrays.
A dual-controller configuration offers increased reliability and availability because it eliminates a single point of failure, the controller. In a dual-controller configuration, if the primary controller fails, the array automatically fails over to the second controller without an interruption of data flow.
Sun StorEdge 3510 FC array I/O controller modules and Sun StorEdge 3511 SATA array I/O controller modules are hot-swappable, assuming that RAID controller firmware version 4.11 or later is installed. Hot-swappable means that a live upgrade can be performed. In the event that it is impossible or impractical to halt I/O from hosts to the array, a controller can be replaced while the surviving controller is active and servicing I/O. Sun StorEdge 3510 FC array RAID controller modules provide six Fibre Channel ports. Sun StorEdge 3511 SATA array I/O controller modules provide eight Fibre Channel ports. Single- and dual-controller models are available, with the dual-controller version supporting active/passive and active/active configurations. Each RAID controller is configured with 1 Gbyte of cache.
In the unlikely event of an I/O Controller Module failure, the redundant RAID controller immediately begins servicing all I/O requests. The failure does not affect application programs.
Each RAID I/O controller module can support up to 1 Gbyte of Synchronous Dynamic Random Access Memory (SDRAM) with error correction code (ECC). In addition, each controller supports 64 Mbyte of on-board memory. Two Application-Specific Integrated Circuit (ASIC) controller chips handle the interconnection between the controller bus, DRAM memory, and Peripheral Component Interconnect (PCI) internal buses. They also handle the interface between the on-board 2-Mbyte flash memory, 32-Kbyte nonvolatile random access memory (NVRAM), RS-232 port chip, and 10/100BASE-T Ethernet chip.
The RAID I/O controller module is a multifunction board. I/O controller modules include Small Form-Factor Pluggable (SFP) ports, SCSI Enclosure Services (SES) logic, and the RAID controller. The SES logic monitors various temperature thresholds, fan speed from each fan, voltage status from each power supply, and the FRU ID.
Each RAID I/O controller module incorporates SES direct-attached Fibre Channel capability to monitor and maintain enclosure environmental information. The SES controller chip monitors all internal +12 and +5 voltages, various temperature sensors located throughout the chassis, and each fan. The SES also controls the front and back panel LEDs and the audible alarm. Both the RAID chassis and the expansion chassis support dual SES failover capabilities for fully redundant event monitoring.
The I/O expansion modules provide four (Sun StorEdge 3510 FC array) or eight (Sun StorEdge 3511 SATA array) SFP ports but do not have battery modules or controllers. I/O expansion modules are used with I/O Controller Modules in non-redundant Sun StorEdge 3510 FC arrays and Sun StorEdge 3511 SATA arrays, and in expansion units and JBODs.
You can connect Sun StorEdge 3511 SATA expansion units to Sun StorEdge 3510 FC arrays. However, certain restrictions and limitations apply to mixed Fibre Channel and SATA environments.
Each disk drive is mounted in its own sled assembly. Each sled assembly has electromagnetic interference (EMI) shielding, an insertion and locking mechanism, and a compression spring for maximum shock and vibration protection.
Each disk drive is slot independent, meaning that once a logical drive has been initialized, the system can be shut down and the drives can be removed and replaced in any order. In addition, disk drives are field upgradeable to larger drives without interruption of service to user applications. The drive firmware is also field upgradeable, but the firmware upgrade procedure requires interruption of service.
Caution - You can mix disk drive capacity in the same chassis, but not spindle speed (RPM). For instance, you can use 36-Gbyte and 73-Gbyte drives with no performance problems if both are 10K RPM drives. Violating this configuration guideline leads to poor performance.
In the event of a single disk drive failure, with the exception of RAID 0, the system continues to service all I/O requests. Either mirrored data or parity data is used to rebuild data from the failed drive to a spare drive, assuming one is assigned. If a spare is not assigned, you must manually rebuild the array.
In the unlikely event that multiple drive failures occur within the same logical drive, data that has not been replicated or backed up might be lost. This is an inherent limitation of all RAID subsystems and could affect application programs.
An air management sled FRU is available for use when you remove a disk drive and do not replace it. Insert an air management sled into the empty slot to maintain optimum airflow through the chassis.
The drives can be ordered in 36-Gbyte, 73-Gbyte, and 146-Gbyte sizes. 36-Gbyte drives have a rotation speed of 15,000 RPM, 146-Gbyte drives have a rotation speed of 10,000 RPM, and 73-Gbyte drives are available with rotation speeds of 10,000 RPM and 15,000 RPM.
The disk drives incorporate Serial ATA (SATA) technology. They are optimized for capacity, but have performance levels approaching Fibre Channel performance levels. The drives can be ordered in 250-Gbyte, 400-Gbyte, and 500-Gbyte sizes. The drives have a rotation speed of 7200 RPM.
The battery module is designed to provide power to system cache for 72 hours in the event of a power failure. When power is reapplied, the cache is purged to disk. The battery module is hot-swappable. The FRU can be removed and replaced while the RAID array is powered on and operational. The battery module is mounted on the I/O board with guide rails and a transition board. It also contains the EIA-232 and DB9 serial interface (COM) ports.
Note - The Sun StorEdge 3511 SATA array can only be ordered in an AC configuration. However, DC power supplies can be ordered in an x-option kit, and the Sun StorEdge 3511 SATA arrays can be reconfigured using the DC power supplies. Refer to the Sun StorEdge 3000 Family FRU Installation Guide.
Each array contains two redundant power and fan modules. Each module contains a 420-watt power supply and two radial 52-cubic-feet-per-minute (CFM) fans. Power module autoranging capabilities range from 90 volts alternating current (VAC) to 264 VAC for AC power supplies, and from -36 volts direct current (VDC) to -72 VDC for DC power supplies.
A single power and fan module can sustain an array.
The array is designed for heterogeneous operation and supports multiple host operating systems. Refer to the release notes for your array to see the current list of supported hosts, operating systems, and application software.
The array does not require any host-based software for configuration, management, and monitoring, which can be handled through the built-in firmware application. The console window can be accessed via the DB9 communications (COM) port using the Solaris tip command or equivalent means for other operating systems, or via the Ethernet port using the telnet command. Management and monitoring software is available and shipped with the array. See Section 1.6, Additional Software Tools for more information.
As a device protocol capable of high data transfer rates, Fibre Channel simplifies data bus sharing and supports not only greater speed than SCSI, but also more devices on the same bus. Fibre Channel can be used over both copper wire and optical cable. It can be used for concurrent communications among multiple workstations, servers, storage systems, and other peripherals using SCSI and IP protocols. When a Fibre Channel hub or fabric switch is employed, it provides flexible topologies for interconnections.
Two common protocols are used to connect Fibre Channel (FC) nodes together:
The point-to-point protocol is straightforward, doing little more than establishing a permanent communication link between two ports.
The arbitrated loop protocol creates a simple network featuring distributed (arbitrated) management between two or more ports, using a circular (loop) data path. Arbitrated loops can support more nodes than point-to-point connections can.
The Sun StorEdge 3510 FC array and Sun StorEdge 3511 SATA array support both point-to-point and arbitrated loop protocols. Select the protocol you prefer by setting the desired Fibre Channel Connection Option in the Configuration parameters of the firmware application (see Section 5.1, Summary of Array Configuration).
The presence or lack of switches establishes the topology of an FC environment. In a direct attached storage (DAS) topology, servers connect directly to arrays without switches. In a storage area network (SAN) topology, servers and arrays connect to an FC network created and managed by switches.
Refer to the Sun StorEdge 3000 Family Best Practices Manual for your array to see information about optimal configurations for site requirements.
A storage network built on a Fibre Channel architecture might employ several of the following components: Fibre Channel host adapters, hubs, fabric switches, and fibre-to-SCSI bridges.
An arbitrated loop hub is a wiring concentrator. "Arbitrated" means that all nodes communicating over this fibre loop share a 100-megabyte-per-second (MBps) segment. Whenever more devices are added to a single segment, the bandwidth available to each node is further reduced.
A loop configuration allows different devices in the loop to be configured in a token ring style. With a fibre hub, a fibre loop can be rearranged in a star-like configuration because the hub itself contains port bypass circuitry that forms an internal loop inside. Bypass circuits can automatically reconfigure the loop once a device is removed or added without disrupting the physical connection to other devices.
A fabric switch functions as a routing engine, which actively directs data transfers from source to destination and arbitrates every connection. Bandwidth per node via a fabric switch remains constant when more nodes are added, and a node on a switch port uses a data path with a speed of up to 100 MBps to send or receive data.
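The contrast between a shared loop and a switched fabric comes down to simple arithmetic, sketched below. The 100 figure is the approximate payload rate of a 1-Gbit Fibre Channel link (in MBps) cited in this section; the function names are illustrative only.

```python
# Per-node bandwidth: arbitrated loop hub versus fabric switch (sketch).
SEGMENT_MBPS = 100.0  # approximate 1-Gbit FC payload rate, in MBps

def per_node_loop_bandwidth(node_count):
    """On an arbitrated loop, every node shares the one segment."""
    return SEGMENT_MBPS / node_count

def per_node_switch_bandwidth(node_count):
    """On a fabric switch, each port keeps a dedicated data path,
    regardless of how many nodes are attached."""
    return SEGMENT_MBPS

print(per_node_loop_bandwidth(4))    # → 25.0
print(per_node_switch_bandwidth(4))  # → 100.0
```

This is why the section notes that adding devices to a loop segment reduces each node's share, while bandwidth per node through a switch remains constant.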
Data availability is one of the major requirements for today's mission-critical applications. Highest availability can be accomplished with the following functionality:
With proper hardware and software configuration in dual-controller mode, a failed controller can be replaced online while the existing controller is actively serving I/O.
Dual loop provides path redundancy and greater throughput.
This option is selectable either through dedicated loops or all drive loops. It allows a more flexible configuration of redundant controllers.
The Fibre Channel architecture brings scalability and easier upgrades to storage. Storage expansion can be as easy as cascading another expansion unit to a configured RAID array without powering down the running system, as long as the expansion unit has not been previously configured with logical drives or logical volumes.
The maximum number of expansion units supported by a single Sun StorEdge 3510 FC array or Sun StorEdge 3511 SATA array is:
Up to 125 devices can be configured in a single FC loop. By default, the array provides two drive loops and four host loops, and operates in Fibre Channel-Arbitrated Loop (FC-AL) and fabric topologies.
Each RAID array has six Fibre Channels with the following defaults:
The Sun StorEdge 3510 FC expansion unit has a total of four FC-AL ports. The Sun StorEdge 3511 SATA expansion unit has a total of eight FC-AL ports.
This section provides information about setting up redundant configurations for increased reliability. For more detailed information about configuration requirements, refer to the Sun StorEdge 3000 Family RAID Firmware User's Guide.
Fibre Channel is widely applied to storage configurations with topologies that aim to avoid loss of data due to component failure. As a rule, the connections between source and target should be configured in redundant pairs.
The recommended host-side connection consists of two or more host bus adapters (HBAs). Each HBA is used to configure a Fibre Channel loop between the host computer and the array.
In active-to-active redundant controller mode, the primary loop serves the I/O traffic directed to the primary controller, and its pair loop serves the I/O traffic to the secondary controller. The host-side management software directs I/O traffic to the pair loop if one of the redundant loops fails.
Since each fibre interface supports only a single loop ID, two HBAs are necessary for active-to-active redundant controller operation. Using two HBAs in each server ensures continued operation even when one data path fails.
In active-to-active mode, the connection to each host adapter should be considered a data path connecting the host to either the primary or the secondary controller. One adapter should be configured to serve the primary controller and the other adapter to serve the secondary controller. Each target ID on the host channels should be assigned either a primary ID or a secondary ID. If one controller fails, the remaining controller can inherit the ID from its counterpart and activate the standby channel to serve host I/O.
The controller passively supports redundant fibre loops on the host side, provided that the host has implemented software support for this feature.
In the unlikely event of controller failure, the standby channels on the remaining controller become an I/O route that serves the host I/O originally directed to the failed channel on its partner controller. Application failover software should be running on the host computer to control the transfer of I/O from one HBA to another in case either data path fails.
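The ID-inheritance behavior described above can be modeled in a few lines. This is an illustrative model only — the class, method names, and example target IDs (40 and 42) are hypothetical; the real behavior is implemented in the controller firmware.

```python
# Sketch of active-to-active ID failover: each host-channel target ID is
# assigned to the primary or secondary controller, and on a controller
# failure the survivor inherits its counterpart's IDs.
class ControllerPair:
    def __init__(self):
        # Hypothetical target-ID assignments for illustration.
        self.ids = {"primary": {40}, "secondary": {42}}

    def fail_over(self, failed):
        """Surviving controller inherits the failed controller's IDs."""
        survivor = "secondary" if failed == "primary" else "primary"
        self.ids[survivor] |= self.ids.pop(failed)
        return sorted(self.ids[survivor])

pair = ControllerPair()
print(pair.fail_over("primary"))  # → [40, 42]
```

After failover, hosts still address the same target IDs; only the controller servicing them has changed, which is why HBA paths need not fail over when a controller does.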
Note - The Sun StorEdge 3510 FC redundant controller configuration uses industry-standard Port Bypass Circuits (PBCs) to connect the disk channels of the primary and secondary controllers. There is no hardware-related isolation available in the event of an FC node failure on a disk channel. In rare cases, an FC node (that is, an FC port chip, bypass circuit, SES, or disk drive) can be faulty and directly impact the operation of the array, including loss of access. This is an architectural limitation of the Sun StorEdge 3510 FC array that requires troubleshooting of the array to determine the faulty component, usually a controller FRU or disk drive. Once the faulty component is identified and removed, the array returns to normal operation.
The following additional software tools are available on the Sun Download Center and on the Sun StorEdge 3000 Family Software and Documentation CD available for your array:
Refer to the Sun StorEdge 3000 Family Software Installation Guide for information about installing these tools.
User guides with configuration procedures for these tools are also provided on the CD.