Solaris Volume Manager Administration Guide

Chapter 2 Storage Management Concepts

This chapter provides a brief introduction to some common storage management concepts. If you are already familiar with storage management concepts, you can proceed directly to Chapter 3, Solaris Volume Manager Overview.

This chapter contains the following information:

- Introduction to Storage Management
- Configuration Planning Guidelines
- Performance Issues
- Optimizing for Random I/O and Sequential I/O

Introduction to Storage Management

Storage management is the means by which you control the devices on which the active data on your system is kept. To be useful, active data must be available and must remain unchanged (persistent) even after unexpected events such as a hardware failure, a software failure, or another similar event.

Storage Hardware

There are many different devices on which data can be stored. The selection of devices that best meet your storage needs depends primarily on three factors:

- Performance
- Availability
- Cost

You can use Solaris Volume Manager to help manage the trade-offs among performance, availability, and cost. In many cases, Solaris Volume Manager can mitigate these trade-offs entirely.

Solaris Volume Manager works well with any supported storage on any system that runs the Solaris™ Operating Environment.

RAID Levels

RAID is an acronym for Redundant Array of Inexpensive (or Independent) Disks. Basically, this term refers to a set of disks (called an array, or, more commonly, a volume) that appears to the user as a single large disk drive. Depending on the configuration, an array provides improved reliability, response time, or storage capacity.

Technically, there are six RAID levels, 0-5. Each level refers to a method of distributing data while ensuring data redundancy. (RAID level 0 does not provide data redundancy, but is usually included as a RAID classification because it is the basis for the majority of RAID configurations in use.) Very few storage environments support RAID levels 2, 3, and 4, so those levels are not described here.

Solaris Volume Manager supports the following RAID levels:

- RAID level 0: Stripes and concatenations. Although they do not provide data redundancy, stripes and concatenations are commonly classified as RAID 0.
- RAID level 1: Mirrors, which store identical copies of the data on separate devices.
- RAID level 5: Striping with distributed parity, which allows data to be regenerated if a single component fails.
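
The following sketch shows how each supported RAID level might be created with the metainit command. The volume names and disk slices shown are placeholders; substitute devices that exist on your system.

Create a four-way RAID 0 stripe named d10 (no redundancy):

# metainit d10 1 4 c1t1d0s2 c1t2d0s2 c1t3d0s2 c1t4d0s2

Create a RAID 1 mirror named d20 from two one-way submirrors, then attach the second submirror:

# metainit d21 1 1 c0t0d0s2
# metainit d22 1 1 c1t0d0s2
# metainit d20 -m d21
# metattach d20 d22

Create a RAID 5 volume named d45 across three slices (the minimum needed to store parity):

# metainit d45 -r c1t1d0s2 c1t2d0s2 c1t3d0s2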

Configuration Planning Guidelines

When you are planning your storage management configuration, keep in mind that for any given application there are trade-offs in performance, availability, and hardware costs. You might need to experiment with the different variables to determine what works best for your configuration.

This section provides guidelines for working with Solaris Volume Manager RAID 0 (concatenation and stripe) volumes, RAID 1 (mirror) volumes, RAID 5 volumes, soft partitions, transactional (logging) volumes, and file systems that are constructed on volumes.

Choosing Storage Mechanisms

Before you implement your storage management approach, you need to decide what kinds of storage devices to use. This set of guidelines compares the various storage mechanisms to help you choose among them. Additional sets of guidelines apply to specific storage mechanisms as implemented in Solaris Volume Manager. See specific chapters about each volume type for details.


Note –

The storage mechanisms listed are not mutually exclusive. You can use them in combination to meet multiple goals. For example, you could create a RAID 1 volume for redundancy, then create soft partitions on it to increase the number of discrete file systems that are possible.
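
For instance, the following sketch (device names are placeholders) creates a two-way mirror and then carves two soft partitions out of it:

# metainit d1 1 1 c0t0d0s0
# metainit d2 1 1 c1t0d0s0
# metainit d0 -m d1
# metattach d0 d2
# metainit d100 -p d0 2g
# metainit d101 -p d0 2g

The mirror d0 provides the redundancy, while the soft partitions d100 and d101 each present a separate 2-Gbyte device on which a file system can be built.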


Table 2–1 Choosing Storage Mechanisms

Requirements                      RAID 0            RAID 0     RAID 1        RAID 5   Soft
                                  (Concatenation)   (Stripe)   (Mirror)               Partitions
Redundant data                    No                No         Yes           Yes      No
Improved read performance         No                Yes        Depends on    Yes      No
                                                               underlying
                                                               device
Improved write performance        No                Yes        No            No       No
More than 8 slices/device         No                No         No            No       Yes
Larger available storage space    Yes               Yes        No            Yes      No

Table 2–2 Optimizing Redundant Storage

                     RAID 1 (Mirror)    RAID 5
Write operations     Faster             Slower
Random read          Faster             Slower
Hardware cost        Higher             Lower


Note –

In addition to these generic storage options, see Hot Spare Pools for more information about using Solaris Volume Manager to support redundant devices.
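
As a brief sketch, you create a hot spare pool with the metainit command and then associate it with a submirror or RAID 5 volume with the metaparam command. The pool name, slice, and volume below are placeholders:

# metainit hsp001 c2t2d0s2
# metaparam -h hsp001 d45

If a component of the RAID 5 volume d45 fails, Solaris Volume Manager automatically substitutes the hot spare from hsp001.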


Performance Issues

General Performance Guidelines

When you design your storage configuration, consider the following performance guidelines:

- Striping generally has the best performance, but it offers no data redundancy.
- For write-intensive applications, RAID 1 volumes generally perform better than RAID 5 volumes, because a RAID 5 volume must compute and write parity with every write operation.
- RAID 1 and RAID 5 volumes both increase data availability, but both generally have lower write performance than a simple stripe. RAID 1 volumes do improve random read performance.
- A RAID 1 volume requires twice the disk capacity of the data it protects; a RAID 5 volume has a lower hardware cost for the same usable capacity.
- Identify the most frequently accessed data, and increase access bandwidth to that data with mirroring or striping.

Optimizing for Random I/O and Sequential I/O

This section explains Solaris Volume Manager strategies for optimizing your particular configuration.

In general, if you do not know whether sequential I/O or random I/O predominates on the file systems that you will implement on Solaris Volume Manager volumes, do not implement these performance tuning tips. These tips can degrade performance if they are improperly implemented.

The following optimization suggestions assume that you are optimizing a RAID 0 volume. In general, you would want to optimize a RAID 0 volume, then mirror that volume to provide both optimal performance and data redundancy.

Random I/O

If you have a random I/O environment, such as an environment used for databases and general-purpose file servers, you want all disk spindles to spend approximately equal amounts of time servicing I/O requests.

For example, assume that you have 40 Gbytes of storage for a database application. If you stripe across four 10 Gbyte disk spindles, and if the I/O load is truly random and evenly dispersed across the entire range of the table space, then each of the four spindles will tend to be equally busy, which will generally improve performance.

The target for maximum random I/O performance on a disk is 35 percent or lower usage, as reported by the iostat command. Disk usage in excess of 65 percent on a sustained basis is a problem, and usage in excess of 90 percent is a significant problem. The solution to excessive disk usage is to create a new RAID 0 volume with more disks (spindles).
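
For example, the following command reports extended disk statistics every 30 seconds; in the output, the %b column shows the percentage of time each disk was busy:

# iostat -xn 30

A disk whose %b value consistently exceeds 65 percent is a candidate for restriping across additional spindles.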


Note –

Simply attaching additional disks to an existing volume will not improve performance. You must create a new volume with the ideal parameters to optimize performance.


The interlace size of the stripe does not matter, because the goal is simply to spread the data across all the disks. Any interlace value greater than the size of the typical I/O request will do.
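
For instance, the following sketch builds the four-spindle stripe from the database example above with a deliberately large 512-Kbyte interlace (device names are placeholders):

# metainit d30 1 4 c1t0d0s2 c2t0d0s2 c3t0d0s2 c4t0d0s2 -i 512k

With a 512-Kbyte interlace, most random I/O requests are serviced entirely by a single spindle, while the requests themselves remain evenly spread across all four disks.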

Sequential Access I/O

You can optimize your configuration for a sequential I/O environment, such as DBMS servers dominated by full table scans or NFS servers in very data-intensive environments, by setting the interlace value low relative to the size of the typical I/O request.

For example, assume a typical I/O request size of 256 Kbytes and striping across 4 spindles. A good choice for the stripe unit size in this example would be 256 Kbytes / 4 = 64 Kbytes, or smaller.

This strategy ensures that the typical I/O request is spread across multiple disk spindles, thus increasing the sequential bandwidth.
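
A sketch of this calculation in practice, again with placeholder device names:

# metainit d25 1 4 c1t1d0s2 c1t2d0s2 c1t3d0s2 c1t4d0s2 -i 64k

Each 256-Kbyte request is then split into four 64-Kbyte chunks that the four spindles can transfer in parallel.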


Note –

Seek time and rotation time are practically zero in the sequential case. When you optimize sequential I/O, the internal transfer rate of a disk is most important.


In sequential applications, the typical I/O size is usually large (greater than 128 Kbytes, often greater than 1 Mbyte). As in the example above, a typical I/O request of 256 Kbytes striped across 4 disk spindles gives 256 Kbytes / 4 = 64 Kbytes, so a good choice for the interlace size would be 32 to 64 Kbytes.