Oracle Solaris Tunable Parameters Reference Manual (Oracle Solaris 10 1/13 Information Library)

ZFS Device I/O Queue Depth

zfs_vdev_max_pending

Description

This parameter controls the maximum number of concurrent I/Os pending to each device.

Data Type

Integer

Default

10

Range

0 to MAXINT

Dynamic?

Yes

Validation

No
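
Because this parameter is dynamic, it can be adjusted on a running system with mdb, or set persistently in the /etc/system file. The following lines are a minimal sketch; the value 10 simply restates the default and should be replaced with a value appropriate to your configuration:

  set zfs:zfs_vdev_max_pending = 10

  # echo zfs_vdev_max_pending/W0t10 | mdb -kw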

When to Change

In a storage array where LUNs are built from a large number of disk drives, the ZFS I/O queue can become a limiting factor on read IOPS. This behavior is one of the underlying reasons for the best practice of presenting as many LUNs to the ZFS storage pool as there are backing spindles. That is, if you create LUNs from a 10-disk-wide array-level RAID group, then using 5 to 10 LUNs to build the storage pool allows ZFS to manage a deep enough I/O queue without the need to set this specific tunable.
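
As a sketch of that practice, a pool built from five LUNs exported by such a RAID group might be created as follows; the pool name tank and the device names are hypothetical:

  # zpool create tank c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0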

However, when no separate intent log is in use and the pool is made of JBOD disks, using a small zfs_vdev_max_pending value, such as 10, can improve synchronous write latency, because those synchronous writes compete with other queued I/O for the disk resource. Using separate intent log devices can alleviate the need to tune this parameter for loads that are synchronous-write intensive, because those synchronous writes no longer compete with a deep queue of non-synchronous writes.
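
For example, a separate intent log device can be added to an existing pool with a command of the following form; the pool name and device name are hypothetical:

  # zpool add tank log c4t0d0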

Tuning this parameter is not expected to be effective for NVRAM-based storage arrays when volumes are made of a small number of spindles. However, when ZFS is presented with a volume made of a large number of spindles (greater than 10), this parameter can limit the read throughput obtained on the volume, because a maximum of 10 or 35 queued I/Os per LUN can translate into less than 1 I/O per storage spindle, which is not enough for the individual disks to deliver their IOPS. This issue would appear in iostat output as the actv queue approaching the value of zfs_vdev_max_pending.
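
The per-device queue can be observed with extended iostat output, where the actv column reports the number of I/Os outstanding against each device. For example:

  # iostat -xnz 5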

Device drivers can also limit the number of outstanding I/Os per LUN. If you are using LUNs on storage arrays that can handle large numbers of concurrent IOPS, the device driver constraints themselves can limit concurrency. Consult the configuration files for the drivers your system uses. For example, the limit for the QLogic ISP2200, ISP2300, and SP212 family Fibre Channel HBA (qlc) driver is described by the execution-throttle parameter in /kernel/drv/qlc.conf.
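
An entry of the following form in /kernel/drv/qlc.conf raises that limit; the value shown is illustrative only and should be sized to what the array and HBA can sustain:

  execution-throttle=64;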

Commitment Level

Unstable