Sun ZFS Storage 7000 System Administration Guide

Space Management

Introduction

Filesystems and LUNs on the 7000 series manage physical storage differently than on many other systems. As described in the Concepts page, the appliance leverages a pooled storage model where all filesystems and LUNs share common space. Filesystems never have an explicit size assigned to them, and take up only as much space as they need. LUNs reserve enough physical space to write the entire contents of the device, unless they are thinly provisioned, in which case they behave like filesystems and use only the amount of space physically consumed by data.

This model provides maximum flexibility and simplicity of management in an environment where users are generally trusted to do the right thing. A stricter environment, where users' data usage is monitored and/or restricted, requires more careful management. This section describes some of the tools available to the administrator to control and manage space usage.

Terms

Before getting into details, it is important to understand some basic terms used when talking about space usage on the appliance.

Space Management Terms
Physical Data

Size of data as stored physically on disk. Typically, this is equivalent to the logical size of the corresponding data, but can differ in the presence of compression or other factors. This includes the space used by the active share as well as all snapshots. Space accounting is generally enforced and managed based on physical space.

Logical Data

The amount of space logically consumed by a filesystem. This does not take compression into account, and can be viewed as the theoretical upper bound on the amount of space consumed by the filesystem. Copying the filesystem to another appliance using a different compression algorithm will not consume more than this amount. This statistic is not explicitly exported and can generally only be computed by taking the amount of physical space consumed and multiplying it by the current compression ratio.

Referenced Data

This represents the total amount of space referenced by the active share, independent of any snapshots. This is the amount of space that the share would consume should all snapshots be destroyed. This is also the amount of data that is directly manageable by the user over the data protocols.

Snapshot Data

This represents the total amount of data currently held by all snapshots of the share. This is the amount of space that would be freed should all snapshots be destroyed.

Quota

A quota represents a limit on the amount of space that can be consumed by any particular entity. This can be based on filesystem, project, user, or group, and is independent of any current space usage.

Reservation

A reservation represents a guarantee of space for a particular project or filesystem. This takes available space away from the rest of the pool without increasing the actual space consumed by the filesystem. This setting cannot be applied to users and groups. The traditional notion of a statically sized filesystem can be created by setting a quota and reservation to the same value.
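As a sketch of that last point, a statically sized filesystem could be configured from the CLI as follows. The project 'default' and share 'foo' are hypothetical, and the quota and reservation properties themselves are described later in this section.

clownfish:> shares select default select foo
clownfish:shares default/foo> set quota=200G
                         quota = 200G (uncommitted)
clownfish:shares default/foo> set reservation=200G
                   reservation = 200G (uncommitted)
clownfish:shares default/foo> commit

With both properties set to the same value, the share is guaranteed 200G of space and can never consume more than that.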

Understanding snapshots

Snapshots present an interesting dilemma for space management. They represent the set of physical blocks referenced by a share at a given point in time. Initially, a snapshot consumes no additional space. But as data in the active share is overwritten, the active share will contain only the new data, while the older blocks are "held" by the most recent (and possibly older) snapshots. Gradually, snapshots can consume additional space as the content of the active share diverges.

Some other systems try to hide the cost of snapshots, either by pretending that they are free or by "reserving" space dedicated to holding snapshot data. Such systems gloss over a fact inherent to snapshots: if you take a snapshot of a filesystem of any given size and rewrite 100% of the data within the filesystem, you must by definition maintain references to twice as much data as was originally in the filesystem. Snapshots are not free, and the only way other systems can present this abstraction is to silently destroy snapshots when space gets full. This can often be the absolute worst thing to do, as a process run amok rewriting data can cause all previous snapshots to be destroyed, preventing any restoration in the process.

In the Sun Storage 7000 series, the cost of snapshots is always explicit, and tools are provided to manage this space in a way that best matches the administrative model for a given environment. Each snapshot has two associated space statistics: unique space and referenced space. The referenced space is the total space consumed by the filesystem at the time the snapshot was taken. It represents the theoretical maximum size of the snapshot should it remain the sole reference to all data blocks. The unique space is the amount of physical space referenced only by the current snapshot. When a snapshot is destroyed, its unique space is made available to the rest of the pool. Note that the amount of space consumed by all snapshots is not equivalent to the sum of unique space across all snapshots. With a share and a single snapshot, every block must be referenced by the snapshot, the share, or both. With multiple snapshots, however, it is possible for a block to be referenced by some subset of snapshots without being unique to any one of them. For example, if a file is created, two snapshots X and Y are taken, the file is deleted, and another snapshot Z is taken, the blocks within the file are held by X and Y, but not by Z. In this case, destroying X will not free up the space, but destroying both X and Y will. Because of this, destroying any snapshot can affect the unique space of neighboring snapshots, though the total amount of space consumed by snapshots will always decrease.

The total size of a project or share always accounts for space consumed by all snapshots, though the usage breakdown is also available. Quotas and reservations can be set at the project level to enforce physical constraints across this total space. In addition, quotas and reservations can be set at the filesystem level, and these settings can apply either to referenced data only or to total data. Whether quotas and reservations should apply to referenced data or to total physical data depends on the administrative environment. If users are not in control of their snapshots (i.e. an automatic snapshot schedule is set for them), then quotas should typically not include snapshots in the calculation. Otherwise, the user may run out of space but be confused when files cannot be deleted. Without an understanding of snapshots or a means to manage them, such a situation can be unrecoverable without administrator intervention. In this scenario, the snapshots represent an overhead cost that is factored into the operation of the system in order to provide backup capabilities. On the other hand, there are environments where users are billed according to their physical space requirements, and snapshots represent a choice by the user to provide some level of backup that meets their requirements given the churn rate of their dataset. In these environments, it makes more sense to enforce quotas based on total physical data, including snapshots. The users understand the cost of snapshots, and can be provided a means to actively manage them (such as through dedicated roles on the appliance).

Filesystem and project settings

The simplest way of enforcing quotas and reservations is on a per-project or per-filesystem basis. Quotas and reservations do not apply to LUNs, though their usage is accounted for in the total project quota or reservations.

Data quotas

A data quota enforces a limit on the amount of space a filesystem or project can use. By default, it will include the data in the filesystem and all snapshots. Clients attempting to write new data will get an error when the filesystem is full, either because of a quota or because the storage pool is out of space. As described in the snapshot section, this behavior may not be intuitive in all situations, particularly when snapshots are present. Removing a file may cause the filesystem to write new data if the data blocks are referenced by a snapshot, so it may be the case that the only way to decrease space usage is to destroy existing snapshots.

If the 'include snapshots' property is unset, then the quota applies only to the immediate data referenced by the filesystem, not to any snapshots. The space used by snapshots is counted against any project-level quota but is otherwise not limited. In this situation, removing a file referenced by a snapshot causes the filesystem's referenced data to decrease, even though the system as a whole is using more space. If the storage pool is full (as opposed to the filesystem reaching a preset quota), then the only way to free up space may be to destroy snapshots.

Data quotas are strictly enforced, which means that as space usage nears the limit, the amount of data that can be written must be throttled as the precise amount of data to be written is not known until after writes have been acknowledged. This can affect performance when operating at or near the quota. Because of this, it is generally advisable to remain below the quota during normal operating procedures.

Quotas are managed through the BUI under Shares -> General -> Space Usage -> Data. They are managed in the CLI as the quota and quota_snap properties.
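For example, a quota that excludes snapshot data might be set from the CLI as follows. This is a sketch using a hypothetical share 'foo', and it assumes that setting quota_snap to false corresponds to unchecking 'include snapshots' in the BUI.

clownfish:> shares select default select foo
clownfish:shares default/foo> set quota=100G
                         quota = 100G (uncommitted)
clownfish:shares default/foo> set quota_snap=false
                    quota_snap = false (uncommitted)
clownfish:shares default/foo> commit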

Data reservations

A data reservation is used to make sure that a filesystem or project has at least a certain amount of available space, even if other shares in the system try to use more space. This unused reservation is considered part of the filesystem, so if the rest of the pool (or project) reaches capacity, the filesystem can still write new data even though other shares may be out of space.

By default, a reservation includes all snapshots of a filesystem. If the 'include snapshots' property is unset, then the reservation applies only to the immediate data of the filesystem. As described in the snapshot section, the behavior when taking snapshots may not always be intuitive. If a reservation on filesystem data (but not snapshots) is in effect, then whenever a snapshot is taken, the system must reserve enough space for that snapshot to diverge completely, even if that never occurs. For example, if a 50G filesystem has a 100G reservation without snapshots, then taking the first snapshot will reserve an additional 50G of space, and the filesystem will end up reserving 150G of space total. If there is insufficient space to guarantee complete divergence of data, then taking the snapshot will fail.

Reservations are managed through the BUI under Shares -> General -> Space Usage -> Data. They are managed in the CLI as the reservation and reservation_snap properties.
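A reservation is set in the same manner; for example (again a sketch with a hypothetical share 'foo'):

clownfish:> shares select default select foo
clownfish:shares default/foo> set reservation=100G
                   reservation = 100G (uncommitted)
clownfish:shares default/foo> commit

Because a reservation includes snapshots by default, this guarantees 100G for the filesystem and its snapshots together.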

User and group settings

Viewing current usage

Regardless of whether user and group quotas are in use, current usage on a per-user or per-group basis can be queried for filesystems and projects. Storage pools created on older versions of the software may need deferred updates applied before this feature can be used. After applying the deferred update, it may take some time for all filesystems to be upgraded to a version that supports per-user and per-group usage and quotas.

BUI

To view the current usage in the BUI, navigate to the "Shares -> General" page, under the "Space Usage -> Users and Groups" section. There you will find a text input with a dropdown type selector, which allows you to query the current usage of any given user or group, within a share or across a project. Both users and groups can be queried, and identities can be given as either UNIX or Windows names.

Lookups are done as text is typed in the input. When the lookup is completed, the current usage will be displayed. In addition, the "Show All" link will bring up a dialog with a list of current usage of all users or groups. This dialog can only query for a particular type - users or groups - and does not support querying both at the same time. This list displays the canonical UNIX and Windows name (if mappings are enabled), as well as the usage and (for filesystems) quota.

CLI

In the CLI, the users and groups commands can be used from the context of a particular project or share. From here, the show command can be used to display current usage in a tabular form. In addition, the usage for a particular user or group can be retrieved by selecting the particular user or group in question and issuing the get command.

clownfish:> shares select default 
clownfish:shares default> users 
clownfish:shares default users> list
USER       NAME                            USAGE
user-000   root                             325K
user-001   ahl                             9.94K
user-002   eschrock                        20.0G
clownfish:shares default users> select name=eschrock
clownfish:shares default user-002> get              
                          name = eschrock
                      unixname = eschrock
                        unixid = 132651
                       winname = (unset)
                         winid = (unset)
                         usage = 20.0G
User or group quotas

Quotas can be set on a user or group at the filesystem level. These enforce physical data usage based on the POSIX or Windows identity of the owner or group of the file or directory. Note that user and group quotas differ in some significant ways from filesystem and project data quotas.

BUI

In the browser, user quotas are managed from the general tab, under Space Usage -> Users & Groups. As when viewing usage, the current usage is shown as you type a user or group name. Once you have finished entering the name and the current usage is displayed, the quota can be set by checking the box next to "quota" and entering a value into the size field. To disable a quota, uncheck the box. Once any changes have been made, click the 'Apply' button to commit them.

While all the properties on the page are committed together, the user and group quotas are validated separately from the other properties. If an invalid user or group is entered along with another invalid property, only one of the validation errors may be displayed. Once that error has been corrected, applying the changes again will show the other error.

CLI

In the CLI, user quotas are managed using the 'users' or 'groups' command from the share context. Quotas can be set by selecting a particular user or group and using the 'set quota' command. Any user or group that is not consuming any space on the filesystem and does not have a quota set will not appear in the list of active users. To set a quota for such a user or group, use the 'quota' command, after which the name and quota can be set. To clear a quota, set it to the value '0'.

clownfish:> shares select default select eschrock 
clownfish:shares default/eschrock> users          
clownfish:shares default/eschrock users> list
USER       NAME                            USAGE  QUOTA
user-000   root                             321K      -
user-001   ahl                             9.94K      -
user-002   eschrock                        20.0G      -
clownfish:shares default/eschrock users> select name=eschrock 
clownfish:shares default/eschrock user-002> get               
                          name = eschrock      
                      unixname = eschrock
                        unixid = 132651
                       winname = (unset)
                         winid = (unset)
                         usage = 20.0G
                         quota = (unset)
clownfish:shares default/eschrock user-002> set quota=100G
                         quota = 100G (uncommitted)       
clownfish:shares default/eschrock user-002> commit
clownfish:shares default/eschrock user-002> done  
clownfish:shares default/eschrock users> quota  
clownfish:shares default/eschrock users quota (uncommitted)> set name=bmc
                          name = bmc (uncommitted)                      
clownfish:shares default/eschrock users quota (uncomitted)> set quota=200G
                         quota = 200G (uncommitted)                       
clownfish:shares default/eschrock users quota (uncomitted)> commit
clownfish:shares default/eschrock users> list                     
USER       NAME                            USAGE  QUOTA
user-000   root                             321K      -
user-001   ahl                             9.94K      -
user-002   eschrock                        20.0G   100G
user-003   bmc                                 -   200G
Identity management

User and group quotas leverage the identity mapping service on the appliance. This allows users and groups to be specified as either UNIX or Windows identities, depending on the environment. As with file ownership, these identities are mapped and tracked internally by the appliance.

This means that the canonical form of the identity is the UNIX ID. If the mapping is changed later, the new mapping will be enforced based on the new UNIX ID. If a file is created by a Windows user when no mapping exists, and a mapping is later created, newly created files will be treated as having a different owner for the purposes of access control and usage accounting. This also implies that if a user ID is reused (i.e. a new user name association is created), then any existing files or quotas will appear to be owned by the new user name.

It is recommended that any identity mapping rules be established before filesystems are actively used. Otherwise, changes in mapping can have surprising results.