This chapter describes ZFS volumes, using ZFS on a Solaris system with zones installed, ZFS alternate root pools, and ZFS rights profiles.
A ZFS volume is a dataset that represents a block device. ZFS volumes are identified as devices in the /dev/zvol/{dsk,rdsk}/pool directory.
In the following example, a 5-Gbyte ZFS volume, tank/vol, is created:
# zfs create -V 5gb tank/vol
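The resulting block and character device nodes can be checked under /dev/zvol. The following is a brief sketch; output is omitted:

# ls -l /dev/zvol/dsk/tank/vol
# ls -l /dev/zvol/rdsk/tank/vol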
When you create a volume, a reservation is automatically set to the initial size of the volume, and the reservation continues to equal the size of the volume so that unexpected behavior does not occur. For example, if the size of the volume shrinks, data corruption might occur, so be careful when changing the size of a volume.
In addition, if you create a snapshot of a volume that changes in size, you might introduce inconsistencies if you attempt to roll back the snapshot or create a clone from the snapshot.
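For example, you might check the volume's size and reservation before and after resizing it. The following is a minimal sketch based on the tank/vol volume created earlier; depending on the release, the automatic reservation appears in either the reservation or the refreservation property, and the 6-Gbyte target size is only an illustration:

# zfs get volsize,reservation,refreservation tank/vol
# zfs set volsize=6G tank/vol

If the reservation does not track the new size automatically on your release, adjust it to match the new volume size.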
For information about file system properties that can be applied to volumes, see Table 6–1.
If you are using a Solaris system with zones installed, you cannot create or clone a ZFS volume in a non-global zone; any attempt to do so will fail. For information about using ZFS volumes in a zone, see Adding ZFS Volumes to a Non-Global Zone.
During an installation of a ZFS root file system or a migration from a UFS root file system, a swap device is created on a ZFS volume in the ZFS root pool. For example:
# swap -l
swapfile                  dev    swaplo   blocks     free
/dev/zvol/dsk/rpool/swap  253,3      16  8257520  8257520
During an installation of a ZFS root file system or a migration from a UFS root file system, a dump device is created on a ZFS volume in the ZFS root pool. The dump device requires no administration after it is set up. For example:
# dumpadm
      Dump content: kernel pages
       Dump device: /dev/zvol/dsk/rpool/dump (dedicated)
Savecore directory: /var/crash/t2000
  Savecore enabled: yes
If you need to change your swap area or dump device after the system is installed or upgraded, use the swap and dumpadm commands as in previous Solaris releases. If you need to set up an additional swap area, create a ZFS volume of a specific size and then enable swap on that device. For example:
# zfs create -V 2G rpool/swap2
# swap -a /dev/zvol/dsk/rpool/swap2
# swap -l
swapfile                   dev    swaplo   blocks     free
/dev/zvol/dsk/rpool/swap   256,1      16  2097136  2097136
/dev/zvol/dsk/rpool/swap2  256,5      16  4194288  4194288
Do not swap to a file on a ZFS file system. A ZFS swap file configuration is not supported.
For information about adjusting the size of the swap and dump volumes, see Adjusting the Sizes of Your ZFS Swap and Dump Devices.
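For example, a swap or dump volume's size can be changed with the volsize property. The following is a rough sketch, assuming the default rpool/swap and rpool/dump volumes, a 2-Gbyte target size, and that the swap device can be temporarily removed from use:

# swap -d /dev/zvol/dsk/rpool/swap
# zfs set volsize=2G rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
# zfs set volsize=2G rpool/dump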
Solaris iSCSI targets and initiators are supported in this Solaris release.
You can easily create a ZFS volume as an iSCSI target by setting the shareiscsi property on the volume. For example:
# zfs create -V 2g tank/volumes/v2
# zfs set shareiscsi=on tank/volumes/v2
# iscsitadm list target
Target: tank/volumes/v2
    iSCSI Name: iqn.1986-03.com.sun:02:984fe301-c412-ccc1-cc80-cf9a72aa062a
    Connections: 0
After the iSCSI target is created, set up the iSCSI initiator. For more information about Solaris iSCSI targets and initiators, see Chapter 14, Configuring Solaris iSCSI Targets and Initiators (Tasks), in System Administration Guide: Devices and File Systems.
Solaris iSCSI targets can also be created and managed with the iscsitadm command. If you set the shareiscsi property on a ZFS volume, do not use the iscsitadm command to also create the same target device. Otherwise, you will end up with duplicate target information for the same device.
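If you no longer want a volume exported as an iSCSI target, check and clear the property on the volume itself rather than using iscsitadm. The following is a brief sketch, using the volume from the previous example:

# zfs get shareiscsi tank/volumes/v2
# zfs set shareiscsi=off tank/volumes/v2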
A ZFS volume as an iSCSI target is managed just like any other ZFS dataset. However, the rename, export, and import operations work a little differently for iSCSI targets.
When you rename a ZFS volume, the iSCSI target name remains the same. For example:
# zfs rename tank/volumes/v2 tank/volumes/v1
# iscsitadm list target
Target: tank/volumes/v1
    iSCSI Name: iqn.1986-03.com.sun:02:984fe301-c412-ccc1-cc80-cf9a72aa062a
    Connections: 0
Exporting a pool that contains a shared ZFS volume causes the target to be removed. Importing a pool that contains a shared ZFS volume causes the target to be shared. For example:
# zpool export tank
# iscsitadm list target
# zpool import tank
# iscsitadm list target
Target: tank/volumes/v1
    iSCSI Name: iqn.1986-03.com.sun:02:984fe301-c412-ccc1-cc80-cf9a72aa062a
    Connections: 0
All iSCSI target configuration information is stored within the dataset. Like an NFS shared file system, an iSCSI target that is imported on a different system is shared appropriately.
The following sections describe how to use ZFS on a system with Solaris zones.
Keep the following points in mind when associating ZFS datasets with zones:
You can add a ZFS file system or a ZFS clone to a non-global zone with or without delegating administrative control.
You can add a ZFS volume as a device to non-global zones.
You cannot associate ZFS snapshots with zones at this time.
In the sections below, a ZFS dataset refers to a file system or clone.
Adding a dataset allows the non-global zone to share space with the global zone, though the zone administrator cannot control properties or create new file systems in the underlying file system hierarchy. This is identical to adding any other type of file system to a zone, and should be used when the primary purpose is solely to share common space.
ZFS also allows datasets to be delegated to a non-global zone, giving complete control over the dataset and all its children to the zone administrator. The zone administrator can create and destroy file systems or clones within that dataset, and modify properties of the datasets. The zone administrator cannot affect datasets that have not been added to the zone, and cannot exceed any top-level quotas set on the delegated dataset.
Consider the following interactions when working with ZFS on a system with Solaris zones installed:
A ZFS file system that is added to a non-global zone must have its mountpoint property set to legacy.
Due to CR 6449301, do not add a ZFS dataset to a non-global zone when the non-global zone is configured. Instead, add a ZFS dataset after the zone is installed.
When a source zonepath and the target zonepath both reside on ZFS and are in the same pool, the zoneadm clone command now automatically uses ZFS clone to clone a zone; zoneadm clone takes a ZFS snapshot of the source zonepath and sets up the target zonepath (a sketch follows this list). You cannot use the zfs clone command to clone a zone. For more information, see Part II, Zones, in System Administration Guide: Virtualization Using the Solaris Operating System.
If you delegate a ZFS file system to a non-global zone, you must remove that file system from the non-global zone before using Solaris Live Upgrade. Otherwise, the Live Upgrade operation will fail due to a read-only file system error.
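The following is a minimal sketch of the zone cloning behavior mentioned above; the target zone name zion2 and its zonepath are hypothetical, and the source zone zion is assumed to be installed and not running:

# zonecfg -z zion2
zion2: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:zion2> create -t zion
zonecfg:zion2> set zonepath=/zones/zion2
zonecfg:zion2> exit
# zoneadm -z zion2 clone zion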
You can add a ZFS file system as a generic file system when the goal is solely to share space with the global zone. A ZFS file system that is added to a non-global zone must have its mountpoint property set to legacy.
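For example, the global administrator might set the mount point before adding the file system to the zone. This is a brief sketch using the tank/zone/zion file system from the example that follows:

# zfs set mountpoint=legacy tank/zone/zion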
You can add a ZFS file system to a non-global zone by using the zonecfg command's add fs subcommand.
In the following example, a ZFS file system is added to a non-global zone by a global administrator in the global zone.
# zonecfg -z zion
zonecfg:zion> add fs
zonecfg:zion:fs> set type=zfs
zonecfg:zion:fs> set special=tank/zone/zion
zonecfg:zion:fs> set dir=/export/shared
zonecfg:zion:fs> end
This syntax adds the ZFS file system, tank/zone/zion, to the already configured zion zone, mounted at /export/shared. The mountpoint property of the file system must be set to legacy, and the file system cannot already be mounted in another location. The zone administrator can create and destroy files within the file system. The file system cannot be remounted in a different location, nor can the zone administrator change properties on the file system such as atime, readonly, compression, and so on. The global zone administrator is responsible for setting and controlling properties of the file system.
For more information about the zonecfg command and about configuring resource types with zonecfg, see Part II, Zones, in System Administration Guide: Virtualization Using the Solaris Operating System.
If the primary goal is to delegate the administration of storage to a zone, then ZFS supports adding datasets to a non-global zone through the use of the zonecfg command's add dataset subcommand.
In the following example, a ZFS file system is delegated to a non-global zone by a global administrator in the global zone.
# zonecfg -z zion
zonecfg:zion> add dataset
zonecfg:zion:dataset> set name=tank/zone/zion
zonecfg:zion:dataset> end
Unlike adding a file system, this syntax causes the ZFS file system tank/zone/zion to be visible within the already configured zion zone. The zone administrator can set file system properties, as well as create children. In addition, the zone administrator can take snapshots, create clones, and otherwise control the entire file system hierarchy.
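For example, after the zone boots, the zone administrator might manage the delegated dataset from within the zone. The following is a sketch; the zion# prompt indicates commands run inside the zone, and the child file system and snapshot names are hypothetical:

zion# zfs create tank/zone/zion/home
zion# zfs set compression=on tank/zone/zion/home
zion# zfs snapshot tank/zone/zion/home@backup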
If you are using Solaris Live Upgrade to upgrade your ZFS BE with non-global zones, remove any delegated datasets before the Live Upgrade operation, or the Live Upgrade operation will fail with a read-only file system error. For example:
zonecfg:zion> remove dataset name=tank/zone/zion
zonecfg:zion> exit
For more information about what actions are allowed within zones, see Managing ZFS Properties Within a Zone.
ZFS volumes cannot be added to a non-global zone by using the zonecfg command's add dataset subcommand. If an attempt to add a ZFS volume is detected, the zone cannot boot. However, volumes can be added to a zone by using the zonecfg command's add device subcommand.
In the following example, a ZFS volume is added to a non-global zone by a global administrator in the global zone:
# zonecfg -z zion
zion: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:zion> create
zonecfg:zion> add device
zonecfg:zion:device> set match=/dev/zvol/dsk/tank/vol
zonecfg:zion:device> end
This syntax adds the tank/vol volume to the zone. Note that adding a raw volume to a zone has implicit security risks, even if the volume doesn't correspond to a physical device. In particular, the zone administrator could create malformed file systems that would panic the system when a mount is attempted. For more information about adding devices to zones and the related security risks, see Understanding the zoned Property.
For more information about adding devices to zones, see Part II, Zones, in System Administration Guide: Virtualization Using the Solaris Operating System.
ZFS storage pools cannot be created or modified within a zone. The delegated administration model centralizes control of physical storage devices within the global zone and control of virtual storage to non-global zones. While a pool-level dataset can be added to a zone, any command that modifies the physical characteristics of the pool, such as creating, adding, or removing devices, is not allowed from within a zone. Even if physical devices are added to a zone by using the zonecfg command's add device subcommand, or if files are used, the zpool command does not allow the creation of any new pools within the zone.
After a dataset is delegated to a zone, the zone administrator can control specific dataset properties. When a dataset is delegated to a zone, all its ancestors are visible as read-only datasets, while the dataset itself is writable as are all of its children. For example, consider the following configuration:
global# zfs list -Ho name
tank
tank/home
tank/data
tank/data/matrix
tank/data/zion
tank/data/zion/home
If tank/data/zion is added to a zone, each dataset would have the following properties.
Dataset               Visible   Writable   Immutable Properties
tank                  Yes       No         -
tank/home             No        -          -
tank/data             Yes       No         -
tank/data/matrix      No        -          -
tank/data/zion        Yes       Yes        sharenfs, zoned, quota, reservation
tank/data/zion/home   Yes       Yes        sharenfs, zoned
Note that every parent of tank/data/zion is visible read-only, all children are writable, and datasets that are not part of the parent hierarchy are not visible at all. The zone administrator cannot change the sharenfs property, because non-global zones cannot act as NFS servers. Neither can the zone administrator change the zoned property, because doing so would expose a security risk as described in the next section.
Any other settable property can be changed, except for quota and reservation properties. This behavior allows the global zone administrator to control the space consumption of all datasets used by the non-global zone.
In addition, the sharenfs and mountpoint properties cannot be changed by the global zone administrator once a dataset has been delegated to a non-global zone.
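For example, the global administrator can still limit the space consumed by the delegated dataset by setting a quota from the global zone. The following is a brief sketch; the 10-Gbyte value is only an illustration:

global# zfs set quota=10G tank/data/zion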
When a dataset is delegated to a non-global zone, the dataset must be specially marked so that certain properties are not interpreted within the context of the global zone. After a dataset has been delegated to a non-global zone under the control of a zone administrator, its contents can no longer be trusted. As with any file system, there might be setuid binaries, symbolic links, or otherwise questionable contents that might adversely affect the security of the global zone. In addition, the mountpoint property cannot be interpreted in the context of the global zone. Otherwise, the zone administrator could affect the global zone's namespace. To address the latter, ZFS uses the zoned property to indicate that a dataset has been delegated to a non-global zone at one point in time.
The zoned property is a boolean value that is automatically turned on when a zone containing a ZFS dataset is first booted. A zone administrator will not need to manually turn on this property. If the zoned property is set, the dataset cannot be mounted or shared in the global zone, and is ignored when the zfs share -a command or the zfs mount -a command is executed. In the following example, tank/zone/zion has been delegated to a zone, while tank/zone/global has not:
# zfs list -o name,zoned,mountpoint -r tank/zone
NAME                ZONED   MOUNTPOINT
tank/zone/global      off   /tank/zone/global
tank/zone/zion         on   /tank/zone/zion
# zfs mount
tank/zone/global            /tank/zone/global
tank/zone/zion              /export/zone/zion/root/tank/zone/zion
Note the difference between the mountpoint property and the directory where the tank/zone/zion dataset is currently mounted. The mountpoint property reflects the property as stored on disk, not where the dataset is currently mounted on the system.
When a dataset is removed from a zone or a zone is destroyed, the zoned property is not automatically cleared. This behavior is due to the inherent security risks associated with these tasks. Because an untrusted user has had complete access to the dataset and its children, the mountpoint property might be set to bad values, or setuid binaries might exist on the file systems.
To prevent accidental security risks, the zoned property must be manually cleared by the global administrator if you want to reuse the dataset in any way. Before setting the zoned property to off, make sure that the mountpoint properties for the dataset and all its children are set to reasonable values and that no setuid binaries exist, or turn off the setuid property.
After you have verified that no security vulnerabilities are left, the zoned property can be turned off by using the zfs set or zfs inherit commands. If the zoned property is turned off while a dataset is in use within a zone, the system might behave in unpredictable ways. Only change the property if you are sure the dataset is no longer in use by a non-global zone.
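For example, the global administrator might inspect the delegated dataset and then clear the property. The following is a sketch using the tank/zone/zion dataset from the earlier example:

global# zfs get -r mountpoint,setuid tank/zone/zion
global# zfs set zoned=off tank/zone/zion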
When a pool is created, the pool is intrinsically tied to the host system. The host system maintains knowledge about the pool so that it can detect when the pool is otherwise unavailable. While useful for normal operation, this knowledge can prove a hindrance when booting from alternate media, or creating a pool on removable media. To solve this problem, ZFS provides an alternate root pool feature. An alternate root pool does not persist across system reboots, and all mount points are modified to be relative to the root of the pool.
The most common use for creating an alternate root pool is for use with removable media. In these circumstances, users typically want a single file system, and they want it to be mounted wherever they choose on the target system. When an alternate root pool is created by using the -R option, the mount point of the root file system is automatically set to /, which is the equivalent of the alternate root value.
In the following example, a pool called morpheus is created with /mnt as the alternate root path:
# zpool create -R /mnt morpheus c0t0d0
# zfs list morpheus
NAME       USED  AVAIL  REFER  MOUNTPOINT
morpheus  32.5K  33.5G     8K  /mnt
Note the single file system, morpheus, whose mount point is the alternate root of the pool, /mnt. The mount point that is stored on disk is /, and the full path /mnt is interpreted only in this initial context of the pool creation. This file system can then be exported and imported under an arbitrary alternate root on a different system by using the -R alternate-root syntax.
# zpool export morpheus
# zpool import morpheus
cannot mount '/': directory is not empty
# zpool export morpheus
# zpool import -R /mnt morpheus
# zfs list morpheus
NAME       USED  AVAIL  REFER  MOUNTPOINT
morpheus  32.5K  33.5G     8K  /mnt
Pools can also be imported using an alternate root. This feature allows for recovery situations, where the mount points should not be interpreted in context of the current root, but under some temporary directory where repairs can be performed. This feature also can be used when mounting removable media as described above.
In the following example, a pool named pool is imported with /a as the alternate root path. This example assumes that the pool was previously exported.
# zpool import -R /a pool
# zpool list pool
NAME    SIZE   USED  AVAIL   CAP  HEALTH  ALTROOT
pool   44.8G    78K  44.7G    0%  ONLINE  /a
# zfs list pool
NAME    USED  AVAIL  REFER  MOUNTPOINT
pool   73.5K  44.1G    21K  /a/pool
If you want to perform ZFS management tasks without using the superuser (root) account, you can assume a role with either of the following profiles:
ZFS Storage Management – Provides the ability to create, destroy, and manipulate devices within a ZFS storage pool
ZFS File System Management – Provides the ability to create, destroy, and modify ZFS file systems
For more information about creating or assigning roles, see System Administration Guide: Security Services.
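For example, a rights profile can also be assigned directly to a user, who can then run ZFS commands through a profile shell or pfexec. The following is a sketch; the user name marks and the dataset name are hypothetical:

# usermod -P "ZFS File System Management" marks
% pfexec zfs create tank/marks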
In addition to using RBAC roles for administering ZFS file systems, you might also consider using ZFS delegated administration for distributed ZFS administration tasks. For more information, see Chapter 9, ZFS Delegated Administration.