
Managing ZFS File Systems in Oracle® Solaris 11.4


Updated: February 2021

Using ZFS on an Oracle Solaris System With Zones Installed

The Oracle Solaris Zones feature in the Oracle Solaris operating system provides an isolated environment in which to run applications on your system. The following sections describe how to use ZFS on a system with Oracle Solaris zones installed.

Keep the following points in mind when associating ZFS datasets with zones:

  • You can add a ZFS file system or a clone to a native zone with or without delegating administrative control.

  • You can add a ZFS volume as a device to native zones.

  • You cannot associate ZFS snapshots with zones at this time.

Note -  Oracle Solaris kernel zones use storage differently from native Oracle Solaris zones. For more information about storage use in kernel zones, see the Storage Access section of the solaris-kz(7) man page.

For information about storage use on shared storage, see Chapter 12, Oracle Solaris Zones on Shared Storage in Creating and Using Oracle Solaris Zones.

Adding a ZFS file system by using an fs resource enables the native zone to share disk space with the global or kernel zone. However, the zone administrator cannot control properties or create new file systems in the underlying file system hierarchy. This operation is identical to adding any other type of file system to a zone. You should add a file system to a native zone solely for the purpose of sharing common disk space.

You can also delegate ZFS datasets to a native zone, which gives the zone administrator complete control over the dataset and all its children. The zone administrator can create and destroy file systems or clones within that dataset, as well as modify properties of the datasets. The zone administrator cannot affect datasets that have not been added to the zone, nor exceed any top-level quotas set on the delegated dataset.

When both a source zonepath and a target zonepath reside on a ZFS file system in the same pool, use the zoneadm clone command, not zfs clone, to clone zones. The zoneadm clone command creates a ZFS snapshot of the source zonepath and sets up the target zonepath. For more information, see Creating and Using Oracle Solaris Zones.
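For example, an existing zion zone could be cloned to a new zone as follows. The zone name zion2 is hypothetical, and the new zone's configuration is first created from the existing zone's configuration:

global$ zonecfg -z zion2 create -t zion
global$ zoneadm -z zion2 clone zion

Because both zonepaths reside on ZFS in the same pool, the clone operation takes a ZFS snapshot of zion's zonepath and uses it to populate zion2's zonepath.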

Adding ZFS File Systems to a Non-Global Zone

A ZFS file system that is added to a native zone must have its mountpoint property set to legacy. For example, for the system1/zone/zion file system, you would type the following command on the global or kernel zone:

global$ zfs set mountpoint=legacy system1/zone/zion

Then you would add that file system to the native zone by using the add fs subcommand of the zonecfg command.

Note -  To add the file system, ensure that it is not already mounted at another location.
global$ zonecfg -z zion
zonecfg:zion> add fs
zonecfg:zion:fs> set type=zfs
zonecfg:zion:fs> set special=system1/zone/zion
zonecfg:zion:fs> set dir=/opt/data
zonecfg:zion:fs> end

This syntax adds the ZFS file system system1/zone/zion to the already configured zion zone, mounting it at /opt/data within the zone. The zone administrator can create and destroy files within the file system. The file system cannot be remounted to a different location. Likewise, the zone administrator cannot change properties on the file system such as atime, readonly, compression, and so on.
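From inside the zone, the added file system behaves like any other mounted file system. For example, after the zone boots, the zone administrator could confirm the mount as follows:

zion$ df -h /opt/data

Because the file system was added as an fs resource rather than delegated, it is not manageable with zfs commands from within the zone.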

The global zone administrator is responsible for setting and controlling properties of the file system.

For more information about the zonecfg command and about configuring resource types with zonecfg, see Creating and Using Oracle Solaris Zones.

Delegating Datasets to a Non-Global Zone

To meet the primary goal of delegating the administration of storage to a zone, ZFS supports adding datasets to a native zone through the use of the zonecfg add dataset command.

In the following example, a ZFS file system is delegated to a native zone by a global zone administrator from the global zone or kernel zone.

global$ zonecfg -z zion
zonecfg:zion> add dataset
zonecfg:zion:dataset> set name=system1/zone/zion
zonecfg:zion:dataset> set alias=system1
zonecfg:zion:dataset> end

Unlike adding a file system, this syntax causes the ZFS file system system1/zone/zion to be visible within the already configured zion zone. Within the zion zone, this file system is not accessible as system1/zone/zion, but as a virtual pool named system1. The delegated file system alias provides a view of the original pool to the zone as a virtual pool. The alias property specifies the name of the virtual pool. If no alias is specified, a default alias matching the last component of the file system name is used. In the example, the default alias would be zion.
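For example, from inside the zion zone, the delegated dataset appears under its alias. The following commands are illustrative:

zion$ zpool list
zion$ zfs list -r system1

The zpool list output includes a virtual pool named system1, and zfs list shows the delegated file system and any of its descendants under that alias rather than under the full system1/zone/zion name.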

Within delegated datasets, the zone administrator can set file system properties, as well as create descendant file systems. In addition, the zone administrator can create snapshots and clones, and otherwise control the entire file system hierarchy. If ZFS volumes are created within delegated file systems, these volumes might conflict with ZFS volumes that are added as device resources.
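For example, a zone administrator could manage the delegated dataset from within the zone as follows. The dataset and snapshot names are illustrative, using the system1 alias from the previous example:

zion$ zfs create system1/data
zion$ zfs set compression=on system1/data
zion$ zfs snapshot system1/data@monday
zion$ zfs clone system1/data@monday system1/data-clone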

Adding ZFS Volumes to a Non-Global Zone

You can create a ZFS volume in a native zone, or provide access to a volume's data in a native zone, in the following ways:

  • In a native zone, a privileged zone administrator can create a ZFS volume as a descendant of a previously delegated file system. For example, you can type the following command for the file system system1/zone/zion that was delegated in the previous example:

    $ zfs create -V 2g system1/zone/zion/vol1

    After the volume is created, the zone administrator can manage the volume's properties and data in the native zone as well as create snapshots.

  • In a global or kernel zone, use the zonecfg add device command and specify a ZFS volume whose data can be accessed in a native zone. For example:

    global$ zonecfg -z zion
    zonecfg:zion> add device
    zonecfg:zion:device> set match=/dev/zvol/dsk/system1/volumes/vol2
    zonecfg:zion:device> end

    In this example, only the volume data can be accessed in the native zone.
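With the device resource configured as above, the zone sees only the device nodes for the volume. For example, from within the zone:

zion$ ls /dev/zvol/dsk/system1/volumes
vol2

The zone administrator can read and write the volume's data through the device nodes, but cannot manage the volume's ZFS properties or create snapshots of it, because the volume itself was not delegated.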

Using ZFS Storage Pools Within a Zone

ZFS storage pools cannot be created or modified within a native zone. The delegated administration model centralizes control of physical storage devices in the global or kernel zone and delegates control of virtual storage to native zones. Although a pool-level dataset can be added to a native zone, any command that modifies the physical characteristics of the pool, such as creating, adding, or removing devices, is not allowed from within a native zone. Even if physical devices are added to a native zone by using the zonecfg add device command, or if files are used, the zpool command does not allow the creation of any new pools within the native zone.
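For example, an attempt to create a pool from inside a native zone fails, even if a suitable device has been added to the zone. The device name here is hypothetical:

zion$ zpool create localpool c4t0d0

This command returns an error because pool creation is not permitted within a native zone.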

Kernel zones are more powerful and more flexible in terms of data storage management. Devices and volumes can be delegated to a kernel zone, much like a global zone. Also, a ZFS storage pool can be created in a kernel zone.
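For example, if a disk has been assigned to a kernel zone, the kernel zone administrator can create a pool on it from within that zone. The device name here is hypothetical:

kzone$ zpool create datapool c2d1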

Managing ZFS Properties Within a Zone

After a dataset is delegated to a zone, the zone administrator can control specific dataset properties: all of the dataset's ancestors are visible as read-only datasets, while the dataset itself and all of its descendants are writable. For example, consider the following configuration:

global$ zfs list -Ho name
system1
system1/home
system1/data
system1/data/matrix
system1/data/zion
system1/data/zion/home

If system1/data/zion were added to a zone with the default zion alias, each dataset would have the following properties.

Dataset                   Visible   Writable   Immutable Properties
system1                   Yes       No         -
system1/home              No        -          -
system1/data              Yes       No         -
system1/data/matrix       No        -          -
system1/data/zion         Yes       Yes        zoned, quota, reservation
system1/data/zion/home    Yes       Yes        zoned

Note that every parent of system1/data/zion is visible as a read-only dataset, all of its descendants are writable, and datasets that are not part of the parent hierarchy are not visible at all. The zone administrator cannot change the zoned property because doing so would expose a security risk, as described in the next section.

Privileged users in the zone can change any other settable property, except for quota and reservation properties. This behavior allows the global zone administrator to control the disk space consumption of all datasets used by the native zone.
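For example, the global zone administrator could cap the disk space available to the delegated dataset. The 10 GB limit here is illustrative:

global$ zfs set quota=10g system1/zone/zion

Because quota is immutable from within the zone, this limit applies to everything the zone administrator creates under the delegated dataset.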

In addition, the share.nfs and mountpoint properties cannot be changed by the global zone administrator after a dataset has been delegated to a native zone.

Understanding the zoned Property

When a dataset is delegated to a native zone, the dataset must be specially marked so that certain properties are not interpreted within the context of the global or kernel zone. After a dataset has been delegated to a native zone and is under the control of a zone administrator, its contents can no longer be trusted. As with any file system, setuid binaries, symbolic links, or otherwise questionable contents might exist that might adversely affect the security of the global or kernel zone. In addition, the mountpoint property cannot be interpreted in the context of the global or kernel zone. Otherwise, the zone administrator could affect the global or kernel zone's namespace. To address the latter, ZFS uses the zoned property to indicate that a dataset has been delegated to a native zone at one point in time.

The zoned property is a boolean value that is automatically turned on when a zone containing a ZFS dataset is first booted. A zone administrator does not need to manually set this property. If the zoned property is set, the dataset cannot be mounted or shared in the global or kernel zone. In the following example, system1/zone/zion has been delegated to a zone, while system1/zone/global has not:

$ zfs list -o name,zoned,mountpoint,mounted -r system1/zone
NAME                  ZONED   MOUNTPOINT              MOUNTED
system1/zone/global     off   /system1/zone/global        yes
system1/zone/zion        on   /system1/zone/zion          yes
$ zfs mount
system1/zone/global           /system1/zone/global
system1/zone/zion             /export/zone/zion/root/system1/zone/zion

The following example shows a dataset, rpool/foo, that has been delegated to a zone named sol on a different system:

root@kzx-05:~# zonecfg -z sol info dataset
    name: rpool/foo
    alias: foo
root@kzx-05:~# zfs list -o name,zoned,mountpoint,mounted -r rpool/foo                     
NAME       ZONED  MOUNTPOINT                  MOUNTED
rpool/foo     on  /system/zones/sol/root/foo      yes
root@kzx-05:~# zfs mount | grep /foo                                                      
rpool/foo                       /system/zones/sol/root/foo

When a dataset is removed from a zone or a zone is destroyed, the zoned property is not automatically cleared. This behavior is due to the inherent security risks associated with these tasks. Because an untrusted user has had complete access to the dataset and its descendants, the mountpoint property might be set to bad values, or setuid binaries might exist on the file systems.

To prevent accidental security risks, the zoned property must be manually cleared by the global zone administrator if you want to reuse the dataset in any way. Before setting the zoned property to off, ensure that the mountpoint property for the dataset and all its descendants is set to reasonable values and that no setuid binaries exist, or turn off the setuid property.

After you have verified that no security vulnerabilities are left, the zoned property can be turned off by using the zfs set or zfs inherit command. If the zoned property is turned off while a dataset is in use within a zone, the system might behave in unpredictable ways. Only change the property if you are sure the dataset is no longer in use by a native zone.
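For example, after removing the dataset from the zone and verifying its contents, the global zone administrator could clear the property with either of the following commands. The second form also clears it on all descendants:

global$ zfs set zoned=off system1/zone/zion
global$ zfs inherit -r zoned system1/zone/zion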

Copying Zones to Other Systems

When you need to migrate one or more zones to another system, use Oracle Solaris Unified Archives, which manage all cloning and recovery operations in the operating system and which operate on global, native, and kernel zones. For more information about Unified Archives, see Using Unified Archives for System Recovery and Cloning in Oracle Solaris 11.4. For instructions about migrating zones, which include copying zones to other systems, see Chapter 9, Transforming Systems to Oracle Solaris Zones in Creating and Using Oracle Solaris Zones.

If all zones on one system need to move to another ZFS pool on a different system, consider using a replication stream because it preserves snapshots and clones. Snapshots and clones are used extensively by pkg update, beadm create, and the zoneadm clone commands.

In the following example, sysA's zones are installed in the rpool/zones file system and need to be copied to the newpool/zones file system on sysB. The following commands create a recursive snapshot and copy the data to sysB by using a replication stream:

sysA$ zfs snapshot -r rpool/zones@send-to-sysB
sysA$ zfs send -R rpool/zones@send-to-sysB | ssh sysB zfs receive -d newpool

Note -  The commands refer only to the ZFS aspect of the operation. You would need to perform other zones-related commands to complete the task. For specific information, refer to Chapter 9, Transforming Systems to Oracle Solaris Zones in Creating and Using Oracle Solaris Zones.