Storage is configured in pools that are characterized by their underlying data redundancy, and provide space that is shared across all filesystems and LUNs. More information about how storage pools relate to individual filesystems or LUNs can be found in the Shares section.
Each node can have any number of pools, and each pool can be assigned ownership independently in a cluster. While an arbitrary number of pools is supported, creating multiple pools with the same redundancy characteristics owned by the same cluster head is not advised. Doing so results in poor performance, suboptimal allocation of resources, artificial partitioning of storage, and additional administrative complexity. Configuring multiple pools on the same host is only recommended when drastically different redundancy or performance characteristics are desired, for example a mirrored pool and a RAID-Z pool. With the ability to control access to log and cache devices on a per-share basis, the recommended mode of operation is a single pool.
Pools can be created either by configuring new storage or by importing an existing pool. Importing is only used for pools previously configured on a Sun Storage 7000 appliance, and is useful after accidental reconfiguration, when moving pools between head nodes, or after a catastrophic head failure.
When allocating raw storage to pools, keep in mind that filling pools completely will result in significantly reduced performance, especially when writing to shares or LUNs. These effects typically become noticeable once the pool exceeds 80% full, and can be significant when the pool exceeds 90% full. Therefore, best results are obtained by overprovisioning by approximately 20%, that is, by keeping pools below roughly 80% full. The Shares UI can be used to determine how much space is currently being used.
This action configures the storage pool. In the BUI, this is done by clicking the button next to the list of pools, at which point you are prompted for the name of the new pool. In the CLI, this is done with the config command, which takes the name of the pool as an argument.
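The following CLI sketch shows how this might look; the prompts and the pool name 'mypool' are illustrative rather than verbatim output, though the config command itself is the one described above.

    hostname:> configuration storage
    hostname:configuration storage> config mypool

From here, the CLI walks through the same verification and configuration phases described below.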
After the task is started, storage configuration falls into two different phases: verification and configuration.
The verification phase allows you to verify that all storage is attached and functioning, and to allocate disks within chassis. In a standalone system, this presents a list of all available storage and drive types, with the ability to change the number of disks to allocate to the new pool. By default, the maximum number of disks is allocated, but this number can be reduced in anticipation of creating multiple pools.
In an expandable system, JBODs are displayed in a list along with the head node, and allocation can be controlled within each JBOD. For each JBOD, the system must import available disks, a process that can take a significant amount of time depending on the number and configuration of JBODs. Disks within the system chassis can be allocated individually (as with cache devices), but JBODs must be allocated as either 'whole' or 'half'. In general, whole JBODs are the preferred unit for managing storage, but half JBODs can be used where storage needs are small, or where NSPF is needed in a smaller configuration.
Attempting to commit this step using chassis with missing or failed devices will result in a warning. Once you configure a storage pool in this manner, you will never be able to add the missing or broken disk. Therefore, it is important that all devices be connected and functioning before continuing past the verification step.
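In the CLI, the verification step might look like the following sketch; the prompt text is illustrative, and the show and done builtins are assumed to behave as they do elsewhere in the appliance CLI (show lists the available devices and their allocation, done commits the step).

    hostname:configuration storage verify> show
    hostname:configuration storage verify> done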
Once verification is complete, the next step is choosing a storage profile that reflects the RAS and performance goals of your setup. The set of possible profiles presented depends on your available storage; it includes mirrored, single-parity (RAID-Z), double-parity, and triple-parity RAID layouts as well as a striped layout, with NSPF variants where the installed storage permits them.
For expandable systems, some profiles may be available with an 'NSPF' option. This stands for 'no single point of failure' and indicates that data is arranged in mirrors or RAID stripes such that a pathological JBOD failure will not result in data loss. Note that systems are already configured with redundancy across nearly all components: each JBOD has redundant paths, redundant controllers, and redundant power supplies and fans. The only failures that NSPF protects against are disk backplane failure (a mostly passive component) and gross administrative misconduct (detaching both paths to one JBOD). In general, adopting NSPF results in lower capacity, as it imposes more stringent requirements on stripe width.
Log devices can also have one of two different profiles: striped or mirrored. The data on log devices is only used in the event of node failure, so to lose data with an unmirrored log device, the device must fail and the node must reboot within a few seconds of that failure. This already constitutes a double failure; using mirrored log devices makes data loss effectively impossible, as it would require two simultaneous device failures and a node failure within a very small time window.
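In the CLI, the configuration phase is where these choices are made. The sketch below assumes the data profile is exposed as a 'profile' property and the log-device choice as a 'log_profile' property; the property and value names are assumptions and may differ on your software release.

    hostname:configuration storage config> set profile=mirror
    hostname:configuration storage config> set log_profile=log_mirror    (assumed property and value names)
    hostname:configuration storage config> done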
Hot spares are allocated as a percentage of total pool size and are independent of the profile chosen (with the exception of striped, which doesn't support hot spares). Because hot spares are allocated for each storage configuration step, it is much more efficient to configure storage as a whole than it is to add storage in small increments.
In a cluster, cache devices are available only to the node that has the storage pool imported. It is possible to configure cache devices on both nodes to be part of the same pool: take over the pool on the passive node, then add storage and select the cache devices. This has the effect of having half of the global cache devices configured at any one time. While the data on the cache devices is lost on failover, the new cache devices can be used on the new node.
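As a rough CLI sketch of that procedure, assuming takeover is issued from the node that should pick up the new cache devices and that adding storage is exposed as an 'add' action (both command names are assumptions; the add step then presents the device-allocation screen where only cache devices would be selected):

    peer:> configuration cluster takeover
    peer:> configuration storage add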
Note: earlier software versions supported a double parity RAID configuration with wide stripes. This has been supplanted by the triple parity RAID, wide stripe configuration, which provides significantly better reliability. Pools configured with double parity RAID and wide stripes under a previous software version continue to be supported, but newly configured or reconfigured pools cannot select that option.
This allows you to import an existing storage pool, as well as any inadvertently unconfigured pools. This can be used after a factory reset or service operation to recover user data. Importing a pool requires iterating over all attached storage devices and discovering any existing state. This can take a significant amount of time, during which no other storage configuration activities can take place. To import a pool in the BUI, click the 'IMPORT' button in the storage configuration screen. To import a pool in the CLI, use the 'import' command.
Once the discovery phase has completed, you will be presented with a list of available pools, including some identifying characteristics. If the storage has been destroyed or is incomplete, the pool will not be importable. Unlike storage configuration, the pool name is not specified at the beginning, but rather when selecting the pool. By default, the previous pool name is used, but you can change the pool name, either by clicking the name in the BUI or setting the 'name' property in the CLI.
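A CLI sketch of the import flow follows; the import command and the name property are those described above, while the exact sequence of prompts and the pool name shown are illustrative.

    hostname:configuration storage> import
    (discovery runs, then the available pools are listed for selection)
    hostname:configuration storage> set name=mypool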
Use this action to add additional storage to your existing pool. The verification step is identical to the verification step during initial configuration. The storage must be added using the same profile that was used to configure the pool initially. If there is insufficient storage to configure the system with the current profile, some attributes can be sacrificed. For example, adding a single JBOD to a double parity RAID-Z NSPF config makes it impossible to preserve NSPF characteristics. However, you can still add the JBOD and create RAID stripes within the JBOD, sacrificing NSPF in the process.
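A sketch, assuming the action is exposed in the CLI as an 'add' command in the storage context (the command name is an assumption); the flow then repeats the verification step described earlier before committing the additional devices.

    hostname:configuration storage> add
    hostname:configuration storage verify> done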
This will remove any active filesystems and LUNs and unconfigure the storage pool, making the raw storage available for future storage configuration. This process can be undone by importing the unconfigured storage pool, provided the raw storage has not since been used as part of an active storage pool.
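A sketch, assuming the action is exposed as an 'unconfig' command in the same context (the command name is an assumption); remember that this destroys all active filesystems and LUNs on the pool.

    hostname:configuration storage> unconfig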
This will initiate the storage pool scrub process, which will verify all content to check for errors. If any unrecoverable errors are found, either through a scrub or through normal operation, the BUI will display the affected files. The scrub can also be stopped if necessary.
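A sketch, assuming the scrub is exposed as a 'scrub' command with start and stop subcommands (the exact command and subcommand names are assumptions).

    hostname:configuration storage> scrub start
    hostname:configuration storage> scrub stop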
There are two ways to arrive at this task: during initial configuration of the appliance, or at the Configuration->Storage screen.