When sizing an Oracle ZFS Storage Appliance for use in a cluster configuration, two considerations are very important:
Whether all pools are owned by the same controller, or split between the two controllers.
Whether you want pools with no single point of failure (NSPF).
Assigning storage pool ownership - Perhaps the most important decision is whether all storage pools will be assigned to the same controller, or split between the two controllers. There are several trade-offs to consider, as shown in Table 14, Clustering Considerations for Storage Pools.
Generally, pools should be configured on a single controller, except when optimizing for throughput during nominal operation or when failed-over performance is not a consideration. The exact change in performance characteristics in the failed-over state depends largely on the nature and size of the workload(s). Generally, the closer a controller is to providing maximum performance on any particular axis, the greater the performance degradation along that axis when the workload is taken over by that controller's peer. In the multiple-pool case, this degradation applies to both workloads.
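The degradation described above can be illustrated with a minimal sketch. This is not an Oracle sizing tool; it assumes a simple additive load model in which each controller has a nominal capacity of 1.0 on some performance axis (for example, IOPS), and the surviving controller's capacity is shared proportionally after takeover.

```python
def failed_over_fraction(own_load: float, peer_load: float) -> float:
    """Fraction of nominal demand each workload can achieve after takeover.

    own_load, peer_load: steady-state utilization (0..1) of each controller
    under nominal operation. Returns 1.0 when the combined demand still
    fits within a single controller's capacity.
    """
    combined = own_load + peer_load
    return 1.0 if combined <= 1.0 else 1.0 / combined

# Lightly loaded controllers fail over with no degradation in this model:
print(failed_over_fraction(0.3, 0.4))   # combined 0.7 -> 1.0

# Controllers near saturation degrade sharply when one absorbs both workloads:
print(failed_over_fraction(0.9, 0.9))   # combined 1.8 -> ~0.56 of demand met
```

The model captures the trade-off in the text: splitting pools between controllers buys nominal throughput at the cost of failed-over performance, and the penalty grows as each controller approaches saturation.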
Read cache devices are located in the controller or disk shelf, depending on your configuration.
Read cache devices located in a controller slot (internal L2ARC) do not follow data pools in takeover or failback situations. A read cache device is active in a particular cluster node only when the pool assigned to that device is imported on the node where the device resides. Absent additional configuration steps, read cache will not be available for a pool that has migrated due to a failover event. To enable a read cache device for a pool that is not owned by the cluster peer, take over the pool on the non-owning node, then add storage and select the cache devices for configuration. Read cache devices in a cluster node should be configured as described in Configuring Storage. Write-optimized log devices are located in the storage fabric and are always accessible to whichever controller has imported the pool.
If read cache devices are located in a disk shelf (external L2ARC), read cache is always available. During a failback or takeover operation, read cache remains sharable between controllers. In this case, read performance is sustained. For external read cache configuration details, see Disk Shelf Configurations in Oracle ZFS Storage Appliance Customer Service Manual.
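The availability rules in the two paragraphs above can be summarized in a short sketch. This is illustrative only, not appliance software; the node labels are hypothetical.

```python
def read_cache_available(cache_location: str,
                         device_node: str,
                         pool_import_node: str) -> bool:
    """Whether a pool's read cache device is usable by the importing node.

    cache_location: "shelf" (external L2ARC) or "controller" (internal L2ARC).
    device_node: node whose slot holds an internal cache device.
    pool_import_node: node where the pool is currently imported.
    """
    if cache_location == "shelf":
        # External L2ARC sits in the disk shelf and remains sharable
        # between controllers through takeover and failback.
        return True
    # Internal L2ARC is active only when the pool is imported on the
    # node where the cache device physically resides.
    return device_node == pool_import_node

print(read_cache_available("shelf", "node-A", "node-B"))       # True
print(read_cache_available("controller", "node-A", "node-A"))  # True
print(read_cache_available("controller", "node-A", "node-B"))  # False
```

The last case is the one requiring the extra configuration step described above: after a failover, cache devices on the new node must be configured for the migrated pool before read cache is available again.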
Configuring NSPF - A second important consideration for storage is the use of pool configurations with no single point of failure (NSPF). Because clustering implies that the application places a very high premium on availability, there is seldom a good reason to configure storage pools in a way that allows the failure of a single disk shelf to cause loss of availability. The downside of this approach is that NSPF configurations require more disk shelves than configurations with a single point of failure; when the required capacity is very small, installing enough disk shelves to provide NSPF at the desired RAID level may not be economical.
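The shelf-count requirement can be estimated with a minimal sketch. This is not an official sizing tool; it assumes each redundancy group's devices are spread evenly across shelves, so NSPF holds when no single shelf contains more devices from a group than that group's redundancy can absorb.

```python
import math

def min_shelves_for_nspf(width: int, parity: int) -> int:
    """Minimum disk shelves so that losing any one shelf removes at most
    `parity` devices from a redundancy group of `width` devices.

    width: devices per redundancy group (data + parity/mirror copies).
    parity: device failures the group tolerates (1 for mirror/single
    parity, 2 for double parity, and so on).
    """
    if parity < 1:
        raise ValueError("NSPF requires at least single-device redundancy")
    return math.ceil(width / parity)

print(min_shelves_for_nspf(2, 1))   # mirrored pair -> 2 shelves
print(min_shelves_for_nspf(11, 2))  # 11-wide double-parity group -> 6 shelves
```

This illustrates why NSPF becomes uneconomical at small capacities: even a modest pool may demand several shelves purely to satisfy the placement constraint.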
Table 14, Clustering Considerations for Storage Pools, describes storage pool ownership for cluster configurations.
Related Topics