Selecting a storage profile for a pool
Storage is configured in pools that are characterized by their underlying data redundancy, and provide space that is shared across all filesystems and LUNs. More information about how storage pools relate to individual filesystems or LUNs can be found in the Shares section.
Each node can have any number of pools, and each pool can be assigned ownership independently in a cluster. While an arbitrary number of pools is supported, creating multiple pools with the same redundancy characteristics owned by the same cluster head is not advised: doing so results in poor performance, suboptimal allocation of resources, artificial partitioning of storage, and additional administrative complexity. Configuring multiple pools on the same host is recommended only when drastically different redundancy or performance characteristics are desired, for example a mirrored pool and a RAID-Z pool. Because access to log and cache devices can be controlled on a per-share basis, the recommended mode of operation is a single pool.
Pools can be created by configuring a new pool or by importing an existing pool. Importing is used only for pools previously configured on a Sun Storage 7000 appliance, and is useful in cases of accidental reconfiguration, when moving pools between head nodes, or after catastrophic head failure.
When allocating raw storage to pools, keep in mind that filling pools completely will significantly reduce performance, especially when writing to shares or LUNs. These effects typically become noticeable once the pool exceeds 80% full, and can be significant beyond 90% full. Therefore, best results are obtained by keeping approximately 20% of the pool free. The Shares UI can be used to determine how much space is currently being used.
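The free-space guideline above can be sketched as simple arithmetic. The function name and example capacity below are illustrative, not part of the appliance software:

```python
def usable_target(raw_capacity_tb, keep_free_fraction=0.20):
    """Recommended usable target for a pool: keep roughly 20% free
    so the pool stays below the 80%-full performance threshold."""
    return raw_capacity_tb * (1.0 - keep_free_fraction)

# Illustrative: plan to use no more than ~80 TB of a 100 TB pool.
usable_target(100.0)  # 80.0
```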
This action configures the storage pool. In the BUI, this is done by clicking the button next to the list of pools, at which point you are prompted for the name of the new pool. In the CLI, this is done by the config command, which takes the name of the pool as an argument.
After the task is started, storage configuration falls into two different phases: verification and configuration.
For optimal performance, keep in mind the following:
Rule 1 -- All "data" disks contained within a head node or JBOD must have the same rotational speed (media rotation rate). The ZFSSA software will detect misconfigurations and generate a fault for the condition.
Recommendation 1 -- Due to unpredictable performance issues, avoid mixing different disk rotational speeds within the same pool.
Recommendation 2 -- For optimal performance, do not combine JBODs with different disk rotational speeds on the same SAS fabric (HBA connection). Such a mixture operates correctly, but likely results in slower performance of the faster devices.
Recommendation 3 -- When configuring storage pools that contain data disks of different capacities, ZFS will in some cases use the size of the smallest capacity disk for some or all of the disks within the storage pool, thereby reducing the overall expected capacity. The sizes used will depend on the storage profile, layout, and combination of devices. Avoid mixing different disk capacities within the same pool.
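As a toy illustration of Recommendation 3 (the function and disk sizes are hypothetical, not the appliance's actual sizing logic), assume every disk in a group is sized at the smallest member's capacity:

```python
def effective_capacity_tb(disk_sizes_tb):
    """Toy model: if each disk is sized at the smallest member's
    capacity, the raw space seen is smallest-size * disk-count."""
    return min(disk_sizes_tb) * len(disk_sizes_tb)

effective_capacity_tb([4.0, 4.0, 8.0, 8.0])  # 16.0 -- 8 TB of raw media wasted
effective_capacity_tb([8.0, 8.0, 8.0, 8.0])  # 32.0 -- uniform disks, no waste
```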
The verification phase allows you to verify that all storage is attached and functioning, and to allocate disks within chassis. In a standalone system, this presents a list of all available storage and drive types, with the ability to change the number of disks to allocate to the new pool. By default, the maximum number of disks is allocated, but this number can be reduced in anticipation of creating multiple pools.
In an expandable system, JBODs are displayed in a list along with the head node, and allocation can be controlled within each JBOD. This operates slightly differently depending on the model of the head node or JBOD. Attempting to commit this step using chassis with missing or failed devices will result in a warning. Once you configure a storage pool in this manner, you will never be able to add the missing or broken disk. Therefore, it is important that all devices be connected and functioning before continuing past the verification step.
The default number of disks selected in the allocation step will be either the maximum number of disks available when the appliance only contains "data" disks of the same rotational speed; or zero disks when the appliance contains a mixture of rotational speeds.
This avoids the unintentional configuration of a pool with different rotational speed disks.
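The default-selection rule above can be sketched as follows (a hypothetical helper, assuming the rotational speed of each data disk is known):

```python
def default_disk_count(rotation_rates_rpm):
    """Default allocation: every available data disk when all spin at the
    same rate; zero when speeds are mixed, forcing an explicit choice."""
    return len(rotation_rates_rpm) if len(set(rotation_rates_rpm)) <= 1 else 0

default_disk_count([7200] * 24)                 # 24: uniform speeds
default_disk_count([7200] * 12 + [10000] * 12)  # 0: mixed speeds
```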
For each JBOD (specifically the J4400 and J4500), the system must import available disks, a process that can take a significant amount of time depending on the number and configuration of JBODs. Disks within the system chassis can be allocated individually (as with cache devices), but JBODs must be allocated as either 'whole' or 'half'. In general, whole JBODs are the preferred unit for managing storage, but half JBODs can be used where storage needs are small, or where NSPF is needed in a smaller configuration.
Drives within all of the chassis can be allocated individually; however, care should be taken when allocating disks from JBODs to ensure optimal pool configurations. In general, fewer pools with more disks per pool are preferred, as they simplify management and provide a higher percentage of overall usable capacity. While the system can allocate storage in any increment desired, it is recommended that each allocation include a minimum of 8 disks across all JBODs, and ideally many more.
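One reason fewer, larger pools yield a higher usable percentage is that per-pool overhead, such as hot spares, is paid once per pool. A toy model with a hypothetical fixed spare count per pool (not the appliance's actual spare-allocation policy):

```python
def data_disks(total_disks, num_pools, spares_per_pool=2):
    """Toy model: each pool reserves some hot spares of its own, so
    splitting the same disks across more pools leaves fewer for data."""
    per_pool = total_disks // num_pools
    return num_pools * (per_pool - spares_per_pool)

data_disks(48, num_pools=1)  # 46 data disks
data_disks(48, num_pools=4)  # 40 data disks
```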
Once verification is completed, the next step involves choosing a storage profile that reflects the RAS and performance goals of your setup. The set of possible profiles presented depends on your available storage. The following table lists all possible profiles and their description.
For expandable systems, some profiles may be available with an 'NSPF' option. This stands for 'no single point of failure' and indicates that data is arranged in mirrors or RAID stripes such that a pathological JBOD failure will not result in data loss. Note that systems are already configured with redundancy across nearly all components. Each JBOD has redundant paths, redundant controllers, and redundant power supplies and fans. The only failure that NSPF protects against is disk backplane failure (a mostly passive component), or gross administrative misconduct (detaching both paths to one JBOD). In general, adopting NSPF will result in lower capacity, as it has more stringent requirements on stripe width.
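The NSPF constraint can be illustrated with a small check: a RAID stripe survives the loss of an entire JBOD only if no single JBOD contributes more disks to the stripe than the stripe has parity. This sketch (names hypothetical) illustrates the rule, not the appliance's placement algorithm:

```python
def stripe_survives_jbod_loss(disks_per_jbod, parity):
    """disks_per_jbod maps JBOD name -> number of this stripe's disks
    placed in that JBOD. Losing a JBOD removes that many disks at once,
    which is survivable only if the count is at most the parity level."""
    return max(disks_per_jbod.values()) <= parity

# RAID-Z2 (parity=2) stripe spread across three JBODs: NSPF holds.
stripe_survives_jbod_loss({"jbod0": 2, "jbod1": 2, "jbod2": 2}, parity=2)  # True
# The same six disks packed into one JBOD: a JBOD failure loses the stripe.
stripe_survives_jbod_loss({"jbod0": 6}, parity=2)  # False
```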
Log devices can be configured using only one of two profiles: striped or mirrored. Log devices are only used in the event of node failure, so for data to be lost with unmirrored logs, the device would have to fail and the node would have to reboot immediately thereafter. This highly unlikely event would constitute a double failure; mirroring log devices makes such loss effectively impossible, as it would require two simultaneous device failures and a node failure within a very small time window.
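The double-failure argument can be made concrete with a toy independence model (the probabilities below are arbitrary placeholders, not measured failure rates):

```python
def log_loss_probability(p_device, p_node, mirrored):
    """Toy model: unflushed log data is lost only if the log device(s)
    fail AND the node also fails before the data reaches the pool disks."""
    p_devices = p_device ** 2 if mirrored else p_device
    return p_devices * p_node

log_loss_probability(1e-4, 1e-4, mirrored=False)  # ~1e-08
log_loss_probability(1e-4, 1e-4, mirrored=True)   # ~1e-12
```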
Hot spares are allocated as a percentage of total pool size and are independent of the profile chosen (with the exception of striped, which doesn't support hot spares). Because hot spares are allocated for each storage configuration step, it is much more efficient to configure storage as a whole than it is to add storage in small increments.
In a cluster, cache devices are available only to the node that has the storage pool imported. It is possible to configure cache devices on both nodes to be part of the same pool: take over the pool on the passive node, then add storage and select the cache devices. This has the effect of having half of the global cache devices configured at any one time. While the data on the cache devices is lost on failover, the new cache devices can be used on the new node.
Note: Earlier software versions supported double parity with wide stripes. This has been supplanted by triple parity with wide stripes, as it offers significantly better reliability. Pools configured as double parity with wide stripes under a previous software version continue to be supported, but newly-configured or reconfigured pools cannot select that option.
This allows you to import an existing storage pool, as well as any inadvertently unconfigured pools. This can be used after a factory reset or service operation to recover user data. Importing a pool requires iterating over all attached storage devices and discovering any existing state. This can take a significant amount of time, during which no other storage configuration activities can take place. To import a pool in the BUI, click the 'IMPORT' button in the storage configuration screen. To import a pool in the CLI, use the 'import' command.
Once the discovery phase has completed, you will be presented with a list of available pools, including some identifying characteristics. If the storage has been destroyed or is incomplete, the pool will not be importable. Unlike storage configuration, the pool name is not specified at the beginning, but rather when selecting the pool. By default, the previous pool name is used, but you can change the pool name, either by clicking the name in the BUI or setting the 'name' property in the CLI.
Use this action to add additional storage to your existing pool. The verification step is identical to the verification step during initial configuration. The storage must be added using the same profile that was used to configure the pool initially. If there is insufficient storage to configure the system with the current profile, some attributes can be sacrificed. For example, adding a single JBOD to a double parity RAID-Z NSPF config makes it impossible to preserve NSPF characteristics. However, you can still add the JBOD and create RAID stripes within the JBOD, sacrificing NSPF in the process.
This will remove any active filesystems and LUNs and unconfigure the storage pool, making the raw storage available for future storage configuration. This process can be undone by importing the unconfigured storage pool, provided the raw storage has not since been used as part of an active storage pool.
This will initiate the storage pool scrub process, which will verify all content to check for errors. If any unrecoverable errors are found, either through a scrub or through normal operation, the BUI will display the affected files. The scrub can also be stopped if necessary.
There are two ways to arrive at this task: either during initial configuration of the appliance, or at the Configuration->Storage screen.