C H A P T E R  3

Configuration Defaults and Guidelines

This chapter lists default configurations and provides guidelines you need to be aware of when configuring your array.

This chapter covers the following topics:

- Default Configurations
- Maximum Drive Configurations per Array
- Maximum Number of Disks and Maximum Usable Capacity per Logical Drive
- Controller Operation Guidelines
- Cache Optimization Mode and Stripe Size Guidelines
- Cache Write Policy Guidelines
- Fibre Connection Protocol Guidelines
- A Sample SAN Point-to-Point Configuration
- A Sample DAS Loop Configuration
- Array Configuration Summary

Default Configurations

This section provides default configuration information for drives and channel settings.

Default Logical Drive Configuration

Sun StorEdge 3000 family arrays are preconfigured with a single RAID 0 logical drive mapped to LUN 0, and no spare drives. This is not a usable configuration. You must delete this logical drive and create new logical drives, as shown in First-Time Configuration for SCSI Arrays and First-Time Configuration for FC or SATA Arrays.

Default Channel Configurations

Sun StorEdge 3000 family arrays are preconfigured with the channel settings shown in the following tables. The most common reason to change a host channel to a drive channel is to attach expansion units to a RAID array.

Sun StorEdge 3310 SCSI array and Sun StorEdge 3320 SCSI array default channel settings are shown in TABLE 3-1.

TABLE 3-1 Sun StorEdge 3310 SCSI Array and Sun StorEdge 3320 SCSI Array Default Channel Settings

Channel | Default Mode  | Primary Controller ID (PID) | Secondary Controller ID (SID)
0       | Drive Channel | 6  | 7
1       | Host Channel  | 0  | NA
2       | Drive Channel | 6  | 7
3       | Host Channel  | NA | 1
6       | RCCOM         | NA | NA


Sun StorEdge 3510 FC array default channel settings are shown in TABLE 3-2.

TABLE 3-2 Sun StorEdge 3510 FC Array Default Channel Settings

Channel | Default Mode          | Primary Controller ID (PID) | Secondary Controller ID (SID)
0       | Host Channel          | 40 | NA
1       | Host Channel          | NA | 42
2       | Drive Channel + RCCOM | 14 | 15
3       | Drive Channel + RCCOM | 14 | 15
4       | Host Channel          | 44 | NA
5       | Host Channel          | NA | 46


Sun StorEdge 3511 SATA array default channel settings are shown in TABLE 3-3.

TABLE 3-3 Sun StorEdge 3511 SATA Array Default Channel Settings

Channel | Default Mode          | Primary Controller ID (PID) | Secondary Controller ID (SID)
0       | Host Channel          | 40 | NA
1       | Host Channel          | NA | 42
2       | Drive Channel + RCCOM | 14 | 15
3       | Drive Channel + RCCOM | 14 | 15
4       | Host Channel          | 44 | NA
5       | Host Channel          | NA | 46



Maximum Drive Configurations per Array

TABLE 3-4 lists the maximum number of physical and logical drives, partitions per logical drive and logical volume, and maximum number of logical unit number (LUN) assignments for each array.

TABLE 3-4 Maximum Number of Supported Physical and Logical Drives, Partitions, and LUN Assignments

Array | Physical Drives | Logical Drives | Partitions per Logical Drive | Partitions per Logical Volume | LUN Assignments
Sun StorEdge 3310 SCSI array and Sun StorEdge 3320 SCSI array | 36 (1 array and 2 expansion units) | 16 | 32 | 32 | 128
Sun StorEdge 3510 FC array | 108 (1 array and 8 expansion units) | 32 | 32 | 32 | 128 (point-to-point mode); 64 (point-to-point mode, redundant configuration); 1024 (loop mode); 512 (loop mode, redundant configuration)
Sun StorEdge 3511 SATA array | 72 (1 array and 5 expansion units) | 32 | 32 | 32 | 128 (point-to-point mode); 64 (point-to-point mode, redundant configuration); 1024 (loop mode); 512 (loop mode, redundant configuration)
Sun StorEdge 3510 FC array with Sun StorEdge 3511 SATA expansion units[1] | 72 (1 array and 5 expansion units) | 32 | 32 | 32 | 128 (point-to-point mode); 64 (point-to-point mode, redundant configuration); 1024 (loop mode); 512 (loop mode, redundant configuration)



Maximum Number of Disks and Maximum Usable Capacity per Logical Drive

The following tables show the maximum number of disks per logical drive, and the maximum usable capacity of a logical drive, depending on RAID level and optimization mode.

The maximum capacity per logical drive supported by the RAID firmware depends on the cache optimization mode; the limit with random I/O optimization is lower than the limit with sequential I/O optimization, as reflected in TABLE 3-7 and TABLE 3-10.

Actual logical drive maximum capacities are usually determined by practical considerations or by the amount of disk space available.




Caution - In FC and SATA configurations with large drive capacities, the size of the logical drive might exceed the device capacity limitation of your operating system. Be sure to check the device capacity limitation of your operating system before creating the logical drive. If the logical drive size exceeds the capacity limitation, you must partition the logical drive.
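
As a quick way to gauge how many partitions a large logical drive needs in order to stay within an operating system's device capacity limit, the following sketch shows the arithmetic. The 2-Tbyte operating system limit in the example is a hypothetical placeholder; use your operating system's documented limit. TABLE 3-4 caps the number of partitions per logical drive at 32.

    import math

    MAX_PARTITIONS_PER_LD = 32          # partition limit per logical drive, from TABLE 3-4

    def partitions_needed(ld_size_gb: float, os_limit_gb: float) -> int:
        """Smallest number of equal partitions that keeps each partition within the OS limit."""
        count = math.ceil(ld_size_gb / os_limit_gb)
        if count > MAX_PARTITIONS_PER_LD:
            raise ValueError("logical drive cannot be partitioned within the 32-partition limit")
        return count

    # Example: a 26,809-Gbyte SATA logical drive with a hypothetical 2-Tbyte (2048-Gbyte) OS limit.
    print(partitions_needed(26809, 2048))       # 14 partitions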



TABLE 3-5 shows the usable capacity of the drives available in Sun StorEdge 3000 family arrays.



Note - The 250 Mbyte of reserved space on each drive, which is used for storing controller metadata, is not included in this table because it is not available for storing data.



TABLE 3-5 Actual Capacities per Drive

Drive Size | Usable Capacity (Mbyte)
36 Gbyte   | 34,482
73 Gbyte   | 69,757
146 Gbyte  | 139,759
250 Gbyte  | 238,216
300 Gbyte  | 285,852
400 Gbyte  | 381,291


TABLE 3-6 shows the maximum usable storage capacity for Sun StorEdge 3310 SCSI arrays, Sun StorEdge 3320 SCSI arrays, Sun StorEdge 3510 FC arrays, and Sun StorEdge 3511 SATA arrays, using the maximum number of expansion units, fully populated with the largest currently available drives.

TABLE 3-6 Maximum Usable Storage Capacity Determined by RAID Level

Array | Number of Disks | Drive Size | RAID 0 (Tbyte) | RAID 1 (Tbyte) | RAID 3 or RAID 5 (Tbyte)
Sun StorEdge 3310 SCSI array and Sun StorEdge 3320 SCSI array | 36  | 300 Gbyte | 9.81  | 4.90  | 9.54
Sun StorEdge 3510 FC array                                    | 108 | 146 Gbyte | 14.39 | 7.20  | 14.26
Sun StorEdge 3511 SATA array                                  | 72  | 400 Gbyte | 26.18 | 13.09 | 25.82

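The figures in TABLE 3-6, and the per-logical-drive capacities in TABLE 3-8 through TABLE 3-10, follow directly from the per-drive usable capacities in TABLE 3-5: RAID 0 sums all disks, RAID 1 keeps half of that total, and RAID 3 or RAID 5 gives up one disk's worth of capacity for parity. The following sketch reproduces a few of the published values; it is an illustration of the arithmetic only, not an interface to the array.

    # Usable capacity per drive in Mbyte, from TABLE 3-5.
    USABLE_MB = {36: 34_482, 73: 69_757, 146: 139_759, 250: 238_216, 300: 285_852, 400: 381_291}

    def usable_capacity_gb(drive_gb: int, disks: int, raid_level: int) -> int:
        """Approximate usable capacity, in Gbyte, of one logical drive at the given RAID level."""
        total_mb = USABLE_MB[drive_gb] * disks
        if raid_level == 1:
            total_mb //= 2                                  # mirroring keeps half of the raw capacity
        elif raid_level in (3, 5):
            total_mb = total_mb * (disks - 1) // disks      # one disk's worth of capacity holds parity
        return int(total_mb / 1024)

    print(usable_capacity_gb(300, 36, 0))    # 10049 Gbyte, as in TABLE 3-9 (about 9.81 Tbyte, TABLE 3-6)
    print(usable_capacity_gb(146, 108, 5))   # 14603 Gbyte, as in TABLE 3-8
    print(usable_capacity_gb(400, 72, 0))    # 26809 Gbyte, as in TABLE 3-10 (sequential optimization)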

TABLE 3-7 shows the maximum number of disks that can be used in a single logical drive, based on the drive size and the optimization method chosen.

TABLE 3-7 Maximum Number of Disks per Logical Drive

Drive Size | SCSI (Random and Sequential Optimization) | FC (Random or Sequential Optimization) | SATA (Random Optimization) | SATA (Sequential Optimization)
36 Gbyte   | 36  | 108 | N/A | N/A
73 Gbyte   | 36  | 108 | N/A | N/A
146 Gbyte  | 36  | 108 | N/A | N/A
250 Gbyte  | N/A | N/A | 66  | 72
300 Gbyte  | 36  | 55 (random), 108 (sequential) | N/A | N/A
400 Gbyte  | N/A | N/A | 41  | 72




Note - Except for SATA arrays using random optimization, it is possible (though impractical) to employ all available disks in a single logical drive.



TABLE 3-8 shows the maximum usable capacity of a single logical drive in a Sun StorEdge 3510 FC array, depending on drive size.

TABLE 3-8 Maximum Usable Capacity (in Gbyte) per Sun StorEdge 3510 FC Logical Drive

Drive Size | RAID 0 | RAID 1 | RAID 3 or RAID 5
36 Gbyte   | 3636   | 1818   | 3603
73 Gbyte   | 7357   | 3678   | 7289
146 Gbyte  | 14740  | 7370   | 14603
300 Gbyte  | 30148  | 15074  | 29869


TABLE 3-9 shows the maximum usable capacity of a single logical drive in a Sun StorEdge 3310 SCSI array or Sun StorEdge 3320 SCSI array, depending on drive size.

TABLE 3-9 Maximum Usable Capacity (in Gbyte) per Sun StorEdge 3310 SCSI and Sun StorEdge 3320 SCSI Logical Drive

Drive Size | RAID 0 | RAID 1 | RAID 3 or RAID 5
36 Gbyte   | 1212   | 606    | 1178
73 Gbyte   | 2452   | 1226   | 2384
146 Gbyte  | 4913   | 2456   | 4776
300 Gbyte  | 10049  | 5024   | 9770


TABLE 3-10 shows the maximum usable capacity of a single logical drive in a Sun StorEdge 3511 SATA array, depending on drive size.

TABLE 3-10 Maximum Usable Capacity (in Gbyte) per Sun StorEdge 3511 SATA Logical Drive

Drive Size | RAID 0 (Random) | RAID 0 (Sequential) | RAID 1 (Random) | RAID 1 (Sequential) | RAID 3 or RAID 5 (Random) | RAID 3 or RAID 5 (Sequential)
250 Gbyte  | 15353 | 16749 | 7676 | 8374  | 15121 | 16516
400 Gbyte  | 15266 | 26809 | 7633 | 13404 | 14894 | 26437



Controller Operation Guidelines

This section provides guidelines for dual-controller and single-controller operation.

Dual-Controller Guidelines

Keep the following operation details in mind when configuring a dual-controller array.




Caution - Major upgrades of controller firmware, or replacing a controller with one that has a significantly different version of firmware, might involve differences in non-volatile RAM (NVRAM) that require following special upgrade procedures. For more information, refer to the Sun StorEdge 3000 Family FRU Installation Guide and to the release notes for your array.



The two controllers continuously monitor each other. When either controller detects that the other controller is not responding, the working controller immediately takes over and disables the failed controller.

An active-to-standby configuration is also available but is not usually selected. In this configuration, assigning all the logical drives to one controller means that the other controller remains idle, becoming active only if the primary controller fails.

Single-Controller Guidelines

Keep the following operation details in mind when configuring a single-controller array.

A secondary controller is used only in dual-controller configurations, where it redistributes I/O and provides failover.

Using two single controllers in a clustering environment with host-based mirroring provides some of the advantages of using a dual controller. However, you still need to disable write-back cache to avoid the risk of data corruption if one of the single controllers fails. For this reason, a dual-controller configuration is preferable.




Caution - Major upgrades of controller firmware, or replacing a controller with one that has a significantly different version of firmware, might involve differences in non-volatile RAM (NVRAM) that require following special upgrade procedures. For more information, refer to the Sun StorEdge 3000 Family FRU Installation Guide and to the release notes for your array.




Cache Optimization Mode and Stripe Size Guidelines

Before creating or modifying logical drives, determine the appropriate optimization mode for the RAID array. The controller supports two optimization modes, sequential I/O and random I/O. Sequential I/O is the default mode.



Note - Due to firmware improvements beginning with version 4.11, sequential optimization yields better performance than random optimization for most applications and configurations. Use sequential optimization unless real-world tests in your production environment show better results for random optimization.



When you specify sequential or random cache optimization, the controller determines a default stripe size for newly created logical drives. However, you can specify a different stripe size for each logical drive when you create it, which lets you maximize performance by matching the stripe size to your application's I/O requirements. Because different applications can use different logical drives, this capability provides considerably greater flexibility.

See Cache Optimization Mode (SCSI) for information about how to set the cache optimization mode on a Sun StorEdge 3310 SCSI array or Sun StorEdge 3320 SCSI array. See Cache Optimization Mode (FC and SATA) for information about how to set the cache optimization mode for a Sun StorEdge 3510 FC array or Sun StorEdge 3511 SATA array.

The RAID array's cache optimization mode determines the cache block size used by the controller for all logical drives. An appropriate cache block size improves performance when a particular application uses either large or small stripe sizes.

Once logical drives are created, you cannot use the RAID firmware's Optimization for Random I/O or Optimization for Sequential I/O menu option to change the optimization mode without deleting all logical drives. You can, however, use the Sun StorEdge CLI set cache-parameters command to change the optimization mode while logical drives exist. Refer to the Sun StorEdge 3000 Family CLI 2.0 User's Guide for more information.



Note - Using the Sun StorEdge CLI set cache-parameters command to change optimization mode can result in a pre-existing logical drive having a stripe size that, because it is inappropriate for that optimization mode, could not have been selected at the time the logical drive was created. This combination will not yield the best performance possible, but there is no risk of data loss or other data-related problems. You can avoid this inefficiency by choosing stripe sizes and an optimization mode that are appropriate for your applications.



Since the cache block size works in conjunction with stripe size, the optimization mode you choose determines default logical drive stripe sizes that are consistent with the cache block size setting. But you can now fine-tune performance by specifying each logical drive's stripe size so that it matches your application needs, using a firmware menu option that is available at the time you create the logical drive. See Cache Optimization Mode and Stripe Size Guidelines for more information.
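
The following sketch models the rules described above: each logical drive's stripe size is fixed when the drive is created, either as the default implied by the array's optimization mode or as a value you choose for the application. The stripe-size values and class names are hypothetical placeholders, not firmware settings; they only illustrate the behavior.

    # Hypothetical default stripe sizes keyed by optimization mode; the firmware's actual
    # defaults depend on the cache block size and RAID level.
    HYPOTHETICAL_DEFAULT_STRIPE_KB = {"sequential": 128, "random": 32}

    class LogicalDrive:
        def __init__(self, name: str, optimization: str, stripe_kb: int = 0):
            # Stripe size is fixed at creation time: either the default implied by the
            # optimization mode or an explicit value chosen to match the application.
            self.name = name
            self.stripe_kb = stripe_kb or HYPOTHETICAL_DEFAULT_STRIPE_KB[optimization]

    array_optimization = "sequential"                            # array-wide cache optimization mode
    ld0 = LogicalDrive("LD0", array_optimization)                # inherits the mode's default stripe size
    ld1 = LogicalDrive("LD1", array_optimization, stripe_kb=32)  # explicitly tuned for small I/O

    # Changing the optimization mode later (for example, with the CLI set cache-parameters
    # command) does not change the stripe size of existing logical drives; changing an
    # existing drive's stripe size requires backing up its data, deleting the drive, and
    # re-creating it with the stripe size you want.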



Note - Once the stripe size is selected and data is written to logical drives, the only way to change the stripe size of an individual logical drive is to back up all its data to another location, delete the logical drive, and create a logical drive with the stripe size that you want.



See (Optional) Configure the logical drive stripe size for information about how to set the stripe size for a logical drive you are creating on a Sun StorEdge 3310 SCSI array or Sun StorEdge 3320 SCSI array. See (Optional) Configure the logical drive stripe size for information about how to set the stripe size for a logical drive you are creating on a Sun StorEdge 3510 FC array or Sun StorEdge 3511 SATA array.


Cache Write Policy Guidelines

The cache write policy determines when cached data is written to the disk drives. The ability to hold data in cache while it is being written to disk can increase storage device speed during sequential reads. Write policy options include write-through and write-back.

When write-through cache is specified, the controller writes the data to the disk drive before signaling the host operating system that the process is complete. Write-through cache has slower write operation and throughput performance than write-back cache, but it is safer, with minimal risk of data loss on power failure. Because a battery module is installed, power is maintained to the memory that holds cached data, and the data can be written to disk after power is restored.

When write-back cache is specified, the controller receives the data to write to disk, stores it in the memory buffer, and immediately sends the host operating system a signal that the write operation is complete, before the data is actually written to the disk drive. Write-back caching improves the performance of write operations and the throughput of the controller card.

Write-back cache is enabled by default. When you disable write-back cache, write-through cache is automatically enabled. The setting you specify becomes the default global cache setting for all logical drives. With RAID firmware version 4.11 and later, the cache setting can now be individually tailored for each logical drive. When you configure a logical drive, you can set its individual cache write policy to default, write-back, or write-through.

If you specify default for an individual logical drive, the global write policy is assigned to it. If the global cache write policy for the RAID array is later changed, any logical drive assigned the default write policy changes with it.

If you specify write-back or write-through for an individual logical drive, the cache write policy for that drive remains the same regardless of any changes to the global cache write policy.

If you have specified a global write-back policy, you can also configure the RAID array to automatically change from a write-back cache policy to a write-through cache policy when one or more configured trigger events occur.

Once the condition that led to the trigger event is rectified, the cache policy automatically returns to its previous setting. For more information on configuring the write policy to automatically switch from write-back cache to write-through cache, see Event Trigger Operations.
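
The interaction between the global write policy, per-logical-drive settings, and trigger events described above can be summarized as a small decision model. This is a restatement of the rules in this section for illustration only; the function and value names are not firmware or CLI identifiers.

    def effective_write_policy(ld_setting: str, global_policy: str, trigger_active: bool = False) -> str:
        """Resolve the cache write policy that applies to a logical drive.

        ld_setting is "default", "write-back", or "write-through" (RAID firmware 4.11 and later).
        """
        policy = global_policy if ld_setting == "default" else ld_setting
        # A configured trigger event temporarily switches a global write-back policy to
        # write-through until the condition is rectified (see Event Trigger Operations).
        if trigger_active and ld_setting == "default" and policy == "write-back":
            policy = "write-through"
        return policy

    print(effective_write_policy("default", "write-back"))                       # write-back
    print(effective_write_policy("write-through", "write-back"))                 # explicit setting wins
    print(effective_write_policy("default", "write-back", trigger_active=True))  # write-through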


Fibre Connection Protocol Guidelines

Sun StorEdge 3510 FC arrays and Sun StorEdge 3511 SATA arrays support the following connection protocols:

- Point-to-point. This protocol can be used only with a switched fabric network configuration, also called a storage area network (SAN). The point-to-point protocol supports full-duplex communication, but allows only one ID per channel.

- Loop. Loop mode can be used with direct-attached storage (DAS) or SAN configurations. Loop mode supports only half-duplex communication, but allows up to eight IDs per channel.

The following guidelines apply when implementing point-to-point configurations and connecting to fabric switches.



Note - If you connect to a fabric switch without changing the default loop mode, the array automatically shifts to public loop mode. As a result, communication between the array and the switched fabric runs in half-duplex (send or receive) mode instead of providing the full-duplex (send and receive) performance of point-to-point mode.



The controller displays a warning if you are in point-to-point mode and try to add an ID to the same channel on the other controller. The warning appears because you can disable the internal connection between the channels on the primary and secondary controllers with the set inter-controller link CLI command; with that link disabled, having one ID on the primary controller and another ID on the secondary controller is a legal operation.

However, if you ignore this warning and add an ID to the other controller, the RAID controller does not allow a login as an FC-AL port because this would be illegal in a point-to-point configuration.

To use more than 64 LUNs, you must change to Loop only mode, add host IDs to one or more channels, and add 32 LUNs for each host ID.
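
The LUN limits quoted in TABLE 3-4 and in this section follow from a simple product: LUNs per host ID (32), times host IDs per channel (one in point-to-point mode, up to eight in loop mode), times the number of host channels. The sketch below reproduces those figures; it is illustrative arithmetic only, and the 1024-LUN ceiling is taken from TABLE 3-4.

    LUNS_PER_HOST_ID = 32
    MAX_LUNS_LOOP_MODE = 1024            # firmware ceiling in loop mode, from TABLE 3-4

    def max_luns(mode: str, host_channels: int = 4, ids_per_channel: int = 1,
                 redundant: bool = False) -> int:
        """Maximum LUNs presentable to hosts for a given Fibre Connection mode."""
        if mode == "point-to-point":
            ids_per_channel = 1          # point-to-point allows only one ID per channel
        luns = min(host_channels * ids_per_channel * LUNS_PER_HOST_ID, MAX_LUNS_LOOP_MODE)
        if redundant:
            luns //= 2                   # dual-mapping each partition to two channels halves the working total
        return luns

    print(max_luns("point-to-point"))                            # 128
    print(max_luns("point-to-point", redundant=True))            # 64
    print(max_luns("loop", ids_per_channel=8))                   # 1024
    print(max_luns("loop", ids_per_channel=8, redundant=True))   # 512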



Note - When in loop mode and connected to a fabric switch, each host ID is displayed as a loop device on the switch so that, if all 16 IDs are active on a given channel, the array looks like a loop with 16 nodes attached to a single switch FL port.





Note - In public loop mode, the array can have a maximum of 1024 LUNs, where 512 LUNs are dual-mapped across two channels, primary and secondary controller, respectively.




A Sample SAN Point-to-Point Configuration

In a point-to-point configuration, the array connects to the servers through Fibre Channel switches, and each host channel carries a single host ID.

In a dual-controller array, one controller automatically takes over all operation of a second failed controller in all circumstances. However, when an I/O controller module needs to be replaced and a cable to an I/O port is removed, that I/O path is broken unless multipathing software has established a separate path from the host to the operational controller. Supporting hot-swap servicing of a failed controller requires the use of multipathing software, such as Sun StorEdge Traffic Manager software, on the connected servers.



Note - Multipathing for Sun StorEdge 3510 FC arrays and Sun StorEdge 3511 SATA arrays is provided by Sun StorEdge Traffic Manager software. Refer to the release notes for your array for information about which versions of Sun StorEdge Traffic Manager software are supported for your host.




The following figures show the channel numbers (0, 1, 4, and 5) of each host port and the host ID for each channel. N/A means that the port does not have a second ID assignment. The primary controller is the top I/O controller module, and the secondary controller is the bottom I/O controller module.

The dashed lines between two ports indicate a port bypass circuit that functions as a mini-hub.

In FIGURE 3-1 and FIGURE 3-2, with multipathing software to reroute the data paths, each logical drive remains fully operational if a failure occurs along one of its two data paths.

  FIGURE 3-1 A Point-to-Point Configuration with a Dual-Controller Sun StorEdge 3510 FC Array and Two Switches

Figure shows a point-to-point configuration with two servers connecting to the Sun StorEdge 3510 FC Array through two switches.

  FIGURE 3-2 A Point-to-Point Configuration With a Dual-Controller Sun StorEdge 3511 SATA Array and Two Switches

Figure shows a point-to-point configuration with two servers connecting to the Sun StorEdge 3511 SATA array through two switches.


Note - These illustrations show the default controller locations; however, the primary controller and secondary controller locations can occur in either slot and depend on controller resets and controller replacement operations.



TABLE 3-11 summarizes the primary and secondary host IDs assigned to logical drives 0 and 1, based on FIGURE 3-1 and FIGURE 3-2.

TABLE 3-11 Example Point-to-Point Configuration With Two Logical Drives in a Dual-Controller Array

Task | Logical Drive | LUN IDs | Channel Number | Primary ID Number | Secondary ID Number
Map 32 partitions of LG 0 to CH 0           | LG 0 | 0-31 | 0 | 40  | N/A
Duplicate-map 32 partitions of LG 0 to CH 1 | LG 0 | 0-31 | 1 | 41  | N/A
Map 32 partitions of LG 1 to CH 4           | LG 1 | 0-31 | 4 | N/A | 50
Duplicate-map 32 partitions of LG 1 to CH 5 | LG 1 | 0-31 | 5 | N/A | 51

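The mapping plan in TABLE 3-11 can also be written out as data, which makes it easy to confirm the working total of 64 LUNs (two logical drives with 32 partitions each, every partition dual-mapped for redundancy). The structure below simply restates the table for illustration; the field layout is hypothetical.

    # (logical drive, channel, controller, host ID) for each mapping row in TABLE 3-11.
    MAPPINGS = [
        ("LG 0", 0, "primary",   40),
        ("LG 0", 1, "primary",   41),
        ("LG 1", 4, "secondary", 50),
        ("LG 1", 5, "secondary", 51),
    ]
    PARTITIONS_PER_LD = 32

    # Each partition appears on two channels, so the working LUN total is
    # partitions x logical drives, while the number of mapped LUN entries is
    # partitions x mapping rows.
    logical_drives = {ld for ld, _, _, _ in MAPPINGS}
    print(len(logical_drives) * PARTITIONS_PER_LD)   # 64 working LUNs
    print(len(MAPPINGS) * PARTITIONS_PER_LD)         # 128 mapped LUN entries (dual-mapped)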


To Set Up a Typical Point-to-Point SAN Configuration

Perform the following steps, which are described in more detail later in this guide, to set up a typical point-to-point SAN configuration based on FIGURE 3-1 and FIGURE 3-2.

1. Check the position of installed small form-factor pluggable transceivers (SFPs). Move them as necessary to support the connections needed.

You need to add SFP connectors to support more than four connections between servers and a Sun StorEdge 3510 FC array or Sun StorEdge 3511 SATA array. For example, add two SFP connectors to support six connections and add four SFP connectors to support eight connections.

2. Connect expansion units, if needed.

3. Create at least two logical drives (logical drive 0 and logical drive 1) and configure spare drives.

Leave half of the logical drives assigned to the primary controller (default assignment). Assign the other half of the logical drives to the secondary controller to load-balance the I/O.

4. Create up to 32 partitions (LUNs) in each logical drive.

5. Change the Fibre Connection Option to "Point to point only" ("view and edit Configuration parameters → Host-side SCSI Parameters → Fibre Connection Option").

6. For ease of use in configuring LUNs, change the host IDs on the four channels to the following assignments:

Channel 0: PID 40 (assigned to the primary controller)

Channel 1: PID 41 (assigned to the primary controller)

Channel 4: SID 50 (assigned to the secondary controller)

Channel 5: SID 51 (assigned to the secondary controller)



Note - Do not use the "Loop preferred, otherwise point to point" menu option. This command is reserved for special use and should be used only if directed by technical support.



7. Map logical drive 0 to channels 0 and 1 of the primary controller.

Map LUN numbers 0 through 31 to the single ID on each host channel.

8. Map logical drive 1 to channels 4 and 5 of the secondary controller.

Map LUN numbers 0 through 31 to the single ID on each host channel. Since each set of LUNs is assigned to two channels for redundancy, the total working maximum number of LUNs is 64 LUNs.



Note - The LUN ID numbers and the number of LUNs available per logical drive can vary according to the number of logical drives and the ID assignments you want on each channel.



9. Connect the first switch to ports 0 and 4 of the upper controller.

10. Connect the second switch to ports 1 and 5 of the lower controller.

11. Connect each server to each switch.

12. Install and enable multipathing software on each connected server.

The multipathing software protects against path failure; it does not alter the controller redundancy through which one controller automatically takes over all functions of a second failed controller.


A Sample DAS Loop Configuration

The typical direct attached storage (DAS) configuration shown in FIGURE 3-3 and FIGURE 3-4 includes four servers, a dual-controller array, and two expansion units. Expansion units are optional.

Servers, as shown in FIGURE 3-3 and FIGURE 3-4, are connected to the channels shown in TABLE 3-12.

TABLE 3-12 Connection for Four Servers in a DAS Configuration

Server Number | Upper I/O Controller Module | Lower I/O Controller Module
1 | 0 | 5
2 | 4 | 1
3 | 5 | 0
4 | 1 | 4

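For complete redundancy, each server in TABLE 3-12 has one connection to the upper I/O controller module and one to the lower I/O controller module, so multipathing software always has an alternate path. The short check below restates the table's data for illustration only.

    # Server -> (upper I/O module channel, lower I/O module channel), from TABLE 3-12.
    CONNECTIONS = {1: (0, 5), 2: (4, 1), 3: (5, 0), 4: (1, 4)}

    upper_channels = sorted(upper for upper, _ in CONNECTIONS.values())
    lower_channels = sorted(lower for _, lower in CONNECTIONS.values())

    # Each of the four host channels (0, 1, 4, 5) is used exactly once per module, and every
    # server reaches both modules, so a single module, port, or cable failure still leaves
    # every server with a path for its multipathing software to use.
    assert upper_channels == lower_channels == [0, 1, 4, 5]

    for server, (upper, lower) in sorted(CONNECTIONS.items()):
        print(f"Server {server}: upper module channel {upper}, lower module channel {lower}")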

Establishing complete redundancy and maintaining high availability requires the use of multipathing software such as Sun StorEdge Traffic Manager software. To configure for multipathing:

1. Establish two connections between each server and the array.

2. Install and enable multipathing software on the server.

3. Map the logical drive each server is using to the controller channels that the server is connected to.

DAS configurations are typically implemented using a fabric loop (FL_port) mode. A loop configuration example is shown in FIGURE 3-3 and FIGURE 3-4.

Fabric loop (FL_port) connections between a Sun StorEdge 3510 FC array or Sun StorEdge 3511 SATA array and multiple servers allow up to 1024 LUNs to be presented to servers. For guidelines on how to create 1024 LUNs, see Planning for 1024 LUNs on an FC or SATA Array (Optional, Loop Mode Only).

  FIGURE 3-3 A DAS Configuration With Four Servers, a Dual-Controller Sun StorEdge 3510 FC Array, and Two Expansion Units

Figure shows a DAS configuration with four servers connected to a dual-controller Sun StorEdge 3510 Array and two expansion units.

  FIGURE 3-4 A DAS Configuration With Four Servers, a Dual-Controller Sun StorEdge 3511 SATA Array, and Two Expansion Units

Figure shows a DAS configuration with four servers connected to a dual-controller Sun StorEdge 3511 SATA array and two expansion units.

To Set Up a Typical DAS Loop Configuration

Perform the following steps, which are described in more detail later in this manual, to set up a DAS loop configuration based on FIGURE 3-3 and FIGURE 3-4.

1. Check the location of installed SFPs. Move them as necessary to support the connections needed.

You need to add SFP connectors to support more than four connections between servers and a Sun StorEdge 3510 FC array or Sun StorEdge 3511 SATA array. For example, add two SFP connectors to support six connections and add four SFP connectors to support eight connections.

2. Connect expansion units, if needed.

3. Create at least one logical drive per server, and configure spare drives as needed.

4. Create one or more logical drive partitions for each server.

5. Confirm that the Fibre Connection Option is set to Loop only.



Note - Do not use the "Loop preferred, otherwise point to point" menu option. This command is reserved for special use and should be used only if directed by technical support.



6. Set up to eight IDs on each channel, if needed (see TABLE 3-13).

TABLE 3-13 Example Primary and Secondary ID Numbers in a Loop Configuration With Two IDs per Channel

Channel Number | Primary ID Number | Secondary ID Number
0 | 40 | 41
1 | 43 | 42
4 | 44 | 45
5 | 47 | 46


7. Map logical drive 0 to channels 0 and 5 of the primary controller.

8. Map logical drive 1 to channels 1 and 4 of the secondary controller.

9. Map logical drive 2 to channels 0 and 5 of the primary controller.

10. Map logical drive 3 to channels 1 and 4 of the secondary controller.

11. Connect the first server to port FC0 of the upper controller and port FC5 of the lower controller.

12. Connect the second server to port FC4 of the upper controller and port FC1 of the lower controller.

13. Connect the third server to port FC5 of the upper controller and port FC0 of the lower controller.

14. Connect the fourth server to port FC1 of the upper controller and port FC4 of the lower controller.

15. Install and enable multipathing software on each connected server.


Array Configuration Summary

This section lists the typical sequence of steps for completing a first-time configuration of the array. For detailed steps and more information, see the referenced sections.

Typical steps for completing a first-time configuration of the array are as follows:

1. Set up the serial port connection.

2. Set an IP address for the controller.

See Setting an IP Address.

3. Determine whether sequential or random optimization is more appropriate for your applications and configure your array accordingly.

See Cache Optimization Mode and Stripe Size Guidelines for more information. Also see Cache Optimization Mode (SCSI) for information about how to configure a SCSI array's optimization mode, or Cache Optimization Mode (FC and SATA) for information about how to configure an FC or SATA array's optimization mode.

4. Check physical drive availability.

See To Check Physical Drive Availability for a SCSI array. See Physical Drive Status for FC or SATA arrays.

5. (Optional) Configure host channels as drive channels.

See Channel Settings for a SCSI array. See Channel Settings for FC or SATA arrays.

6. For a Fibre Channel or SATA array, confirm or change the Fibre Connection Option (point-to-point or loop).

See Fibre Connection Protocol Guidelines and Fibre Connection Protocol for the procedure to configure the Fibre Connection protocol.

7. Revise or add host IDs on host channels.

See To Add or Delete a Unique Host ID for SCSI arrays. See To Add or Delete a Unique Host ID for FC or SATA arrays.

The IDs assigned to controllers take effect only after the controller is reset.

8. Delete default logical drives and create new logical drives as required.

See Deleting Logical Drives and Creating Logical Drives for SCSI arrays. See Deleting Logical Drives and Creating Logical Drives for FC or SATA arrays.

9. (Optional) In dual-controller configurations only, assign logical drives to the secondary controller to load-balance the two controllers.

See Controller Assignment for a SCSI array. See Controller Assignment for FC or SATA arrays.

10. (Optional) Partition the logical drives.

See Partitions for SCSI arrays. See Partitions for Fibre Channel and SATA arrays.

11. Map each logical drive partition to an ID on a host channel.

For more information, see Mapping a Partition to a Host LUN for SCSI arrays.



Note - Each operating system has a method for recognizing storage devices and LUNs and might require the use of specific commands or the modification of specific files. Be sure to check the information for your operating system to ensure that you have performed the necessary procedures.



For information about different operating system procedures, refer to the Sun StorEdge 3000 Family Installation, Operation and Service Manual for your array.

12. (Optional) Create and apply host LUN filters to FC or SATA logical drives.

See Mapping a Partition to a Host LUN for Fibre Channel and SATA arrays.

13. Reset the controller.

The configuration is complete.

14. Save the configuration to a disk.

See Saving Configuration (NVRAM) to a Disk.

15. Ensure that the cabling from the RAID array to the hosts is complete.

Refer to the Sun StorEdge 3000 Family Installation, Operation and Service Manual for your array.


[1] Sun StorEdge 3511 SATA expansion units can be connected to a Sun StorEdge 3510 FC array, either alone or in combination with Sun StorEdge 3510 FC expansion units.