After storage devices are physically verified and resources are allocated for a storage pool, the next step is to choose a storage profile that reflects your reliability, availability, and serviceability (RAS) goals as well as your performance goals. The set of profiles presented depends on your available storage. The following table lists all possible profiles and their descriptions.
For expandable systems, some profiles may be available with an 'NSPF' option. This stands for 'no single point of failure' and indicates that data is arranged in mirrors or RAID stripes such that even a pathological disk shelf failure will not result in data loss. Note that systems are already configured with redundancy across nearly all components: each disk shelf has redundant paths, redundant controllers, and redundant power supplies and fans. The only failures that NSPF additionally protects against are a disk backplane failure (a mostly passive component) and gross administrative misconduct (detaching both paths to one disk shelf). In general, adopting NSPF results in lower usable capacity, because it imposes more stringent requirements on stripe width.
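To see why NSPF costs capacity, consider the stripe-width constraint it implies: for a stripe to survive the loss of an entire shelf, it can place no more disks on any one shelf than its parity level covers. The sketch below is illustrative only (it is not the appliance's allocator); the shelf counts and parity levels are hypothetical inputs.

```python
# Illustrative sketch: under NSPF, a RAID stripe may place at most
# `parity` disks on any one shelf, so losing a whole shelf never
# exceeds the stripe's parity. This caps the stripe width.

def max_nspf_stripe_width(num_shelves: int, parity: int) -> int:
    """Widest stripe that can survive the loss of a full disk shelf."""
    return num_shelves * parity

def usable_fraction(stripe_width: int, parity: int) -> float:
    """Fraction of raw capacity in one stripe available for data."""
    return (stripe_width - parity) / stripe_width

# With 4 shelves and double parity, NSPF caps the stripe at 8 wide,
# giving 6/8 = 75% usable capacity per stripe. Without the NSPF
# constraint, a 12-wide double-parity stripe would yield ~83%.
nspf_width = max_nspf_stripe_width(4, 2)
print(nspf_width, usable_fraction(nspf_width, 2))
print(usable_fraction(12, 2))
```

The narrower stripe is the direct source of the capacity reduction mentioned above: the same parity overhead is spread across fewer data disks.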
Log devices can be configured using only striped or mirrored profiles. Log devices are only used in the event of node failure. For data to be lost with unmirrored logs, the device must fail and the node must reboot immediately afterward. This is a highly unlikely event; however, mirroring log devices makes it effectively impossible, as data loss would then require two simultaneous device failures and a node failure within a very small time window.
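The difference between "highly unlikely" and "effectively impossible" can be made concrete with a back-of-envelope model. All rates and the window length below are assumed, illustrative figures, not measured appliance data; the point is the relative magnitudes, not the absolute numbers.

```python
# Back-of-envelope sketch of annual data-loss probability for
# unmirrored vs. mirrored log devices. All rates are hypothetical.

SECONDS_PER_YEAR = 365 * 24 * 3600

def p_event_in_window(rate_per_year: float, window_s: float) -> float:
    """Probability of an event with the given annual rate occurring
    within a short window (rate * time approximation)."""
    return rate_per_year * window_s / SECONDS_PER_YEAR

ssd_fail_per_year = 0.02   # assumed 2% annual failure rate per log SSD
node_fail_per_year = 1.0   # assumed one unplanned node failure per year
window_s = 10.0            # assumed window in which log data is at risk

# Unmirrored: a log device fails at some point in the year AND the
# node fails within the window immediately after.
p_unmirrored = ssd_fail_per_year * p_event_in_window(node_fail_per_year, window_s)

# Mirrored: BOTH log devices fail within the same window AND the node
# also fails within that window.
p_both_logs = p_event_in_window(ssd_fail_per_year, window_s) ** 2
p_mirrored = p_both_logs * p_event_in_window(node_fail_per_year, window_s)

print(f"unmirrored ~{p_unmirrored:.1e}/yr, mirrored ~{p_mirrored:.1e}/yr")
```

Under these assumed rates the unmirrored case is already vanishingly rare, and mirroring drives the probability down by many further orders of magnitude, matching the qualitative claim above.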
In a cluster configuration, cache devices installed in controller slots are available only to the controller that has the storage pool imported. In a cluster, it is possible to configure cache devices on both controllers to be part of the same pool. To do this, take over the pool on the passive node, then add storage and select the cache devices. The result is that half of the pool's cache devices are configured at any one time. While the data on the cache devices is lost on failover, the new cache devices can be used on the new controller.
Cache devices installed in disk shelf slots, when added to a pool, are automatically imported during a cluster failback or takeover. No additional pool configuration is required.
A meta device is a cache device used to store deduplication metadata and other metadata for projects and shares. Meta devices can be allocated to a storage pool, but not to an all-flash storage pool, during and after storage pool creation. However, they cannot be reconfigured as normal cache devices for a pool, nor can they be removed from a pool. A meta device must be a 3.2 TB (minimum) SSD to support the enhanced data deduplication feature available in software version OS8.7.0 (2013.1.7.0) or later.
Before using meta devices and the deduplication feature for new and existing storage pools, accept the deferred software update for Data Deduplication v2, introduced with software version OS8.7.0 (2013.1.7.0). If replicating to other systems, both the replication source and targets must have this deferred update. For more information, see Data Deduplication and Data Deduplication v2 Deferred Update in the Oracle ZFS Storage Appliance Customer Service Manual.
Hot spares are allocated as a percentage of total pool size and are independent of the profile chosen (with the exception of striped, which does not support hot spares). Because hot spares are allocated for each storage configuration step, it is much more efficient to configure storage as a whole than to add storage in small increments.
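The per-step allocation penalty comes down to rounding: each configuration step reserves its own spares, so many small steps reserve more spares in total than one large step. The sketch below assumes a hypothetical policy (a fixed spare ratio, rounded up, with a minimum of one spare per step); the appliance's actual policy may differ, but the rounding effect is the same.

```python
import math

# Hypothetical spare-allocation policy for illustration: each
# configuration step reserves ceil(ratio * devices_added) spares,
# with a minimum of one spare per step.

SPARE_RATIO = 0.05  # assumed 5% of added devices reserved as spares

def spares_for_step(devices_added: int) -> int:
    """Spares reserved by a single storage configuration step."""
    return max(1, math.ceil(devices_added * SPARE_RATIO))

def spares_total(step_sizes: list[int]) -> int:
    """Total spares reserved across a sequence of configuration steps."""
    return sum(spares_for_step(n) for n in step_sizes)

# Configuring 48 disks at once vs. in six increments of 8 disks:
print(spares_total([48]))     # one step of 48 disks
print(spares_total([8] * 6))  # six steps of 8 disks each
```

With these assumed numbers, the single-step configuration reserves 3 spares while the incremental approach reserves 6, halving the disks lost to sparing, which is why configuring storage as a whole is recommended.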