Sun ZFS Storage 7000 System Administration Guide
Storage

Figure: Selecting a storage profile for a pool.

Introduction

Storage is configured in pools that are characterized by their underlying data redundancy, and provide space that is shared across all filesystems and LUNs. More information about how storage pools relate to individual filesystems or LUNs can be found in the Shares section.

Each node can have any number of pools, and each pool can be assigned ownership independently in a cluster. While an arbitrary number of pools is supported, creating multiple pools with the same redundancy characteristics owned by the same cluster head is not advised. Doing so will result in poor performance, suboptimal allocation of resources, artificial partitioning of storage, and additional administrative complexity. Configuring multiple pools on the same host is only recommended when drastically different redundancy or performance characteristics are desired, for example a mirrored pool and a RAID-Z pool. With the ability to control access to log and cache devices on a per-share basis, the recommended mode of operation is a single pool.

Pools can be created either by configuring a new pool or by importing an existing pool. Importing an existing pool is used only for pools previously configured on a Sun Storage 7000 appliance, and is useful in cases of accidental reconfiguration, when moving pools between head nodes, or after catastrophic head failure.

When allocating raw storage to pools, keep in mind that filling pools completely will result in significantly reduced performance, especially when writing to shares or LUNs. These effects typically become noticeable once the pool exceeds 80% full, and can be significant when the pool exceeds 90% full. Therefore, best results will be obtained by overprovisioning by approximately 20%: for example, on a pool with 10 TB of usable capacity, plan to store no more than about 8 TB of data. The Shares UI can be used to determine how much space is currently being used.

Configure

This action configures the storage pool. In the BUI, this is done by clicking the add item button next to the list of pools, at which point you are prompted for the name of the new pool. In the CLI, this is done by the config command, which takes the name of the pool as an argument.
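For illustration, a CLI session for this action might look like the following sketch; the prompt, hostname, and pool name 'mypool' are made-up examples, and the assumption here is that the config command is issued from the configuration storage context (the CLI analog of the Configuration->Storage screen):

  host:> configuration storage
  host:configuration storage> config mypool

After the command is issued, the session continues through the verification and profile configuration steps described below.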

After the task is started, storage configuration falls into two different phases: verification and configuration.

Configuration Rules and Guidelines

For optimal performance, keep in mind the following:

Rule 1 -- All "data" disks contained within a head node or JBOD must have the same rotational speed (media rotation rate). The ZFSSA software will detect misconfigurations and generate a fault for the condition.

Recommendation 1 -- Due to unpredictable performance issues, avoid mixing different disk rotational speeds within the same pool.

Recommendation 2 -- For optimal performance, do not combine JBODs with different disk rotational speeds on the same SAS fabric (HBA connection). Such a mixture operates correctly, but likely results in slower performance of the faster devices.

Recommendation 3 -- When configuring storage pools that contain data disks of different capacities, ZFS will in some cases use the size of the smallest capacity disk for some or all of the disks within the storage pool, thereby reducing the overall expected capacity. The sizes used will depend on the storage profile, layout, and combination of devices. Avoid mixing different disk capacities within the same pool.

Verification

The verification phase allows you to verify that all storage is attached and functioning, and allocate disks within chassis. In a standalone system, this presents a list of all available storage and drive types, with the ability to change the number of disks to allocate to the new pool. By default, the maximum number of disks are allocated, but this number can be reduced in anticipation of creating multiple pools.

In an expandable system, JBODs are displayed in a list along with the head node, and allocation can be controlled within each JBOD. This will operate slightly differently depending on the model of the head node or JBOD. Attempting to commit this step using chassis with missing or failed devices will result in a warning. Once you configure a storage pool in this manner, you will never be able to add the missing or broken disk. Therefore, it is important that all devices be connected and functioning before continuing past the verification step.

The default number of disks selected in the allocation step will be either the maximum number of disks available, when the appliance only contains "data" disks of the same rotational speed, or zero disks, when the appliance contains a mixture of rotational speeds. This avoids the unintentional configuration of a pool with disks of different rotational speeds.

Allocation on SAS-1 Systems

For each JBOD (specifically the J4400 and J4500), the system must import available disks, a process that can take a significant amount of time depending on the number and configuration of JBODs. Disks within the system chassis can be allocated individually (as with cache devices), but JBODs must be allocated as either 'whole' or 'half'. In general, whole JBODs are the preferred unit for managing storage, but half JBODs can be used where storage needs are small, or where NSPF is needed in a smaller configuration.

Allocation on SAS-2 Systems

Drives within all of the chassis can be allocated individually; however, care should be taken when allocating disks from JBODs to ensure optimal pool configurations. In general, fewer pools with more disks per pool are preferred, as they simplify management and provide a higher percentage of overall usable capacity. While the system can allocate storage in any increment desired, it is recommended that each allocation include a minimum of 8 disks across all JBODs, and ideally many more.

Profile Configuration

Once verification is complete, the next step involves choosing a storage profile that reflects the RAS and performance goals of your setup. The set of possible profiles presented depends on your available storage. The following list describes all possible profiles.

Dual Parity Options

Triple mirrored -- Data is triply mirrored, yielding a very highly reliable and high-performing system (for example, storage for a critical database). This configuration is intended for situations in which maximum performance and availability are required. Compared with a two-way mirror, a three-way mirror adds additional IOPS per stored block and a higher level of protection against failures.

Double parity RAID -- RAID in which each stripe contains two parity disks. As with triple mirroring, this yields high availability, as data remains available with the failure of any two disks. Double parity RAID is a higher capacity option than the mirroring options and is intended either for high-throughput sequential-access workloads (such as backup) or for storing large amounts of data with a low random-read component.

Single Parity Options

Mirrored -- Data is mirrored, reducing capacity by half, but yielding a highly reliable and high-performing system. Recommended when space is considered ample, but performance is at a premium (for example, database storage).

Single parity RAID, narrow stripes -- RAID in which each stripe is kept to three data disks and a single parity disk. For situations in which single parity protection is acceptable, single parity RAID offers a much higher capacity option than simple mirroring. This higher capacity needs to be balanced against a lower random read capability than the mirrored options. Single parity RAID can be considered for non-critical applications with a moderate random read component. For pure streaming workloads, give preference to the Double parity RAID option, which has higher capacity and more throughput.

Other

Striped -- Data is striped across disks, with no redundancy. While this maximizes both performance and capacity, a single disk failure will result in data loss. This configuration is not recommended. For pure streaming workloads, consider using Double parity RAID.

Triple parity RAID, wide stripes -- RAID in which each stripe has three disks for parity. This is the highest capacity option apart from Striped. Resilvering data after one or more drive failures can take significantly longer due to the wide stripes and low random I/O performance. As with other RAID configurations, the presence of cache can mitigate the effects on read performance. This configuration is not generally recommended.

For expandable systems, some profiles may be available with an 'NSPF' option. This stands for 'no single point of failure' and indicates that data is arranged in mirrors or RAID stripes such that a pathological JBOD failure will not result in data loss. Note that systems are already configured with redundancy across nearly all components. Each JBOD has redundant paths, redundant controllers, and redundant power supplies and fans. The only failure that NSPF protects against is disk backplane failure (a mostly passive component), or gross administrative misconduct (detaching both paths to one JBOD). In general, adopting NSPF will result in lower capacity, as it has more stringent requirements on stripe width.

Log devices can be configured using only one of two different profiles: striped or mirrored. Log devices are only used in the event of node failure, so for data to be lost with unmirrored logs, the device would have to fail and the node would have to reboot immediately thereafter. This highly unlikely event would constitute a double failure; mirroring log devices makes it effectively impossible, requiring two simultaneous device failures and a node failure within a very small time window.

Hot spares are allocated as a percentage of total pool size and are independent of the profile chosen (with the exception of striped, which doesn't support hot spares). Because hot spares are allocated for each storage configuration step, it is much more efficient to configure storage as a whole than it is to add storage in small increments.

In a cluster, cache devices are available only to the node that has the storage pool imported. It is possible to configure cache devices on both nodes to be part of the same pool: take over the pool on the passive node, then add storage and select the cache devices. This has the effect of having half the global cache devices configured at any one time. While the data on the cache devices will be lost on failover, the new cache devices can be used on the new node.

Note: Earlier software versions supported double parity with wide stripes. This has been supplanted by triple parity with wide stripes, as it adds significantly better reliability. Pools configured as double parity with wide stripes under a previous software version continue to be supported, but newly-configured or reconfigured pools cannot select that option.

Import

This allows you to import an existing storage pool, as well as any inadvertently unconfigured pools. This can be used after a factory reset or service operation to recover user data. Importing a pool requires iterating over all attached storage devices and discovering any existing state. This can take a significant amount of time, during which no other storage configuration activities can take place. To import a pool in the BUI, click the 'IMPORT' button in the storage configuration screen. To import a pool in the CLI, use the 'import' command.

Once the discovery phase has completed, you will be presented with a list of available pools, including some identifying characteristics. If the storage has been destroyed or is incomplete, the pool will not be importable. Unlike storage configuration, the pool name is not specified at the beginning, but rather when selecting the pool. By default, the previous pool name is used, but you can change the pool name, either by clicking the name in the BUI or setting the 'name' property in the CLI.
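A minimal CLI sketch of the import flow, again assuming the configuration storage context; the prompt is illustrative, and the exact interactive steps for selecting a discovered pool and changing its name may differ:

  host:> configuration storage
  host:configuration storage> import
  (discovery runs, then the available pools are listed for selection)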

Add

Use this action to add additional storage to your existing pool. The verification step is identical to the verification step during initial configuration. The storage must be added using the same profile that was used to configure the pool initially. If there is insufficient storage to configure the system with the current profile, some attributes can be sacrificed. For example, adding a single JBOD to a double parity RAID-Z NSPF config makes it impossible to preserve NSPF characteristics. However, you can still add the JBOD and create RAID stripes within the JBOD, sacrificing NSPF in the process.
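The text above does not name a CLI command for this action; by analogy with config and import, a hypothetical sketch might be (command name assumed, not confirmed):

  host:configuration storage> add

followed by the same verification step used during initial configuration.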

Unconfig

This will remove any active filesystems and LUNs and unconfigure the storage pool, making the raw storage available for future storage configuration. This process can be undone by importing the unconfigured storage pool, provided the raw storage has not since been used as part of an active storage pool.
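By analogy with the other storage actions, a hypothetical CLI sketch (the unconfig command name is an assumption, not stated above):

  host:configuration storage> unconfig

Because this removes all active filesystems and LUNs, treat it with the same care as the equivalent BUI action.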

Scrub

This will initiate the storage pool scrub process, which will verify all content to check for errors. If any unrecoverable errors are found, either through a scrub or through normal operation, the BUI will display the affected files. The scrub can also be stopped if necessary.
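A hypothetical CLI sketch for starting and stopping a scrub; the scrub start and scrub stop subcommands are assumed by analogy and are not confirmed by the text above:

  host:configuration storage> scrub start
  host:configuration storage> scrub stop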

Tasks

BUI

Configuring a Storage Pool

There are two ways to arrive at this task: either during initial configuration of the appliance, or at the Configuration->Storage screen.

  1. Click the add item button above the list of storage pools
  2. Enter a name for the storage pool
  3. At the "Allocate and verify storage" screen, configure the JBOD allocation for the storage pool. JBOD allocation may be none, half or all. If no JBODs are detected, check your JBOD cabling and power.
  4. Click "COMMIT".
  5. On the "Configure Added Storage" screen, select the desired data profile. Each is rated in terms of availability, performance and capacity, to help find the best configuration for your business needs.
  6. Click "COMMIT".