This section provides information about file system issues on an Oracle Solaris system with zones installed. Issues include using the zone as an NFS server or client, mounting and traversing mounted file systems, and file system restrictions that are specific to zones.
Each zone has its own section of the file system hierarchy, rooted at a directory known as the zone root. Processes in the zone can access files only in the file system hierarchy that is located under the zone root. The chroot utility can be used in a zone, but only to restrict the process to a root path within the zone. For more information about chroot, see the chroot(8) man page.
To create NFS shares in a zone, the NFS server service svc:/network/nfs/server:default must be enabled in that zone.
The sys_share privilege can be prohibited in the zone configuration to prevent NFS sharing within a zone. See Privileges in a Non-Global Zone.
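As a sketch, assuming a zone named myzone, the sys_share privilege could be removed from the zone's privilege limit as follows. The zone name is hypothetical; the exclamation point excludes a privilege from the set, as documented for privilege set specifications.

```shell
# Remove sys_share from the zone's privilege limit so that processes
# in the zone cannot create NFS shares (zone name is hypothetical).
global$ pfexec zonecfg -z myzone 'set limitpriv="default,!sys_share"'
```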
Restrictions and limitations include the following:
Cross-zone LOFS mounts cannot be shared from zones.
File systems mounted within zones cannot be shared from the global zone.
NFS over Remote Direct Memory Access (RDMA) is not supported in zones.
Oracle Solaris Cluster HA for NFS (HANFS) failover is not supported in zones.
For more information, see Introduction to Oracle Solaris 11.4 Network Services.
When file systems are mounted from within a zone, the nodevices option applies. For example, if a zone is granted access to a block device (/dev/dsk/c0t0d0s7) and a raw device (/dev/rdsk/c0t0d0s7) corresponding to a UFS file system, the file system is automatically mounted nodevices when mounted from within a zone. This rule does not apply to mounts specified through a zonecfg configuration.
Options for mounting file systems in non-global zones are described in the following table. Any file system type not listed in the table can be specified in the configuration if it has a mount binary in /usr/lib/fs/fstype/mount.
To mount file system types other than HSFS and NFS from inside the non-global zone, also add the file system type to the configuration by using the zonecfg fs-allowed property.
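For example, the fs-allowed property could be set as shown below. The zone name and file system types are illustrative only.

```shell
# Allow UFS and PCFS mounts from inside the zone (names are hypothetical).
global$ pfexec zonecfg -z myzone "set fs-allowed=ufs,pcfs"
```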
The ability to unmount a file system depends on who performed the initial mount. If a file system is specified as part of the zone's configuration by using the zonecfg command, the global zone owns that mount and the non-global zone administrator cannot unmount the file system. If the file system is mounted from within the non-global zone, for example, by specifying the mount in the zone's /etc/vfstab file, the non-global zone administrator can unmount the file system.
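An in-zone mount of the second kind could be declared in the zone's /etc/vfstab as in the following sketch. The device and mount point are hypothetical; a file system mounted this way is under the zone administrator's control.

```shell
# Hypothetical entry in the non-global zone's /etc/vfstab.
# The zone administrator can mount and unmount this file system.
#device             device to fsck      mount point  FS type  fsck pass  mount at boot  options
/dev/dsk/c0t0d0s7   /dev/rdsk/c0t0d0s7  /mnt/data    ufs      2          yes            -
```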
Zones can be NFS clients. NFS version 2, version 3, and version 4 protocols are supported. For information about these NFS versions, see Features of the NFS Service in Managing Network File Systems in Oracle Solaris 11.4.
The default version is NFS version 4. You can enable other NFS versions on a client by using one of the following methods:
Use the sharectl command to set properties – See How to Select Different Versions of NFS on a Client in Managing Network File Systems in Oracle Solaris 11.4 and the sharectl(8) man page.
Manually create a version mount – This method overrides the sharectl setting. See How to Select Different Versions of NFS on a Server in Managing Network File Systems in Oracle Solaris 11.4.
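The two methods above could look like the following, assuming a server named host1 and a share named /export/data (both hypothetical). The sharectl properties client_versmax and client_versmin control the protocol versions the client negotiates, while a vers= mount option pins a single mount to one version.

```shell
# Method 1: cap the client at NFS version 3 for all mounts in this zone.
zonename$ pfexec sharectl set -p client_versmax=3 nfs
zonename$ sharectl get -p client_versmax nfs

# Method 2: pin one mount to a specific version, overriding the sharectl
# setting for that mount only (server and path are hypothetical).
zonename$ pfexec mount -F nfs -o vers=3 host1:/export/data /mnt
```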
A zone's file system namespace is a subset of the namespace accessible from the global zone. Unprivileged processes in the global zone are prevented from traversing a non-global zone's file system hierarchy through the following means:
Specifying that the zone root's parent directory is owned, readable, writable, and executable by root only
Restricting access to directories exported by /proc
Note that attempting to access AutoFS nodes mounted for another zone will fail. The global administrator must not have auto maps that descend into other zones.
Mounting certain file systems from within a zone presents a security risk. The following file systems exhibit modified behavior when mounted in a zone.
AutoFS is a client-side service that automatically mounts the appropriate file system. AutoFS mounts established within a zone are local to that zone and cannot be accessed from other zones, including the global zone. The mounts are removed when the zone is halted or rebooted. For more information about AutoFS, see How Autofs Works in Managing Network File Systems in Oracle Solaris 11.4.
Each zone runs its own copy of automountd. The auto maps and timeouts are controlled by the zone administrator.
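Because each zone runs its own automountd, a zone administrator could maintain maps such as the following entirely within the zone. The entry below is a hypothetical /etc/auto_home map line; the resulting mount is local to the zone and is removed when the zone is halted or rebooted.

```shell
# Hypothetical entry in the zone's /etc/auto_home map.
# automountd in the zone mounts the home directory on demand.
jdoe    host1:/export/home/jdoe
```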
An AutoFS mount that is created in the kernel when another mount is triggered cannot be removed by using the regular umount interface. These mounts are unmounted as a group during zone shutdown.
MNTFS is a virtual file system that provides read-only access to the table of mounted file systems for the local system. The set of file systems visible by using mnttab from within a non-global zone is the set of file systems mounted in the zone, plus an entry for root (/). All mounts in the system are visible from the global zone's /etc/mnttab table. For more information about MNTFS, see Mounting File Systems in Managing Network File Systems in Oracle Solaris 11.4.
From within a zone, NFS mounts behave as though mounted with the nodevices option.
The nfsstat command output only pertains to the zone in which the command is run. For more information, see nfsstat(8).
The /proc file system, or PROCFS, provides process visibility and access restrictions as well as information about the zone association of processes. Within a non-global zone, only processes in that same zone are visible through /proc. Processes in the global zone, however, can observe processes and other objects in non-global zones.
From within a zone, procfs mounts behave as though mounted with the nodevices option. For more information about procfs, see the proc(5) man page.
When using the zonecfg command to configure storage-based file systems that have an fsck binary, such as UFS, the zone administrator must specify a raw parameter. The parameter indicates the raw (character) device, such as /dev/rdsk/c0t0d0s7. The zoneadmd daemon automatically runs the fsck command in preen mode (fsck -p), which checks and fixes the file system non-interactively, before it mounts the file system. If the fsck fails, zoneadmd cannot bring the zone to the ready state. The path specified by raw cannot be a relative path.
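A UFS file system with its raw device could be added to a zone configuration as in the following sketch. The zone name, mount point, and device paths are hypothetical.

```shell
# Configure a UFS file system for the zone, including the raw device
# that zoneadmd passes to fsck -p before mounting (names hypothetical).
global$ pfexec zonecfg -z myzone
zonecfg:myzone> add fs
zonecfg:myzone:fs> set dir=/mnt/data
zonecfg:myzone:fs> set special=/dev/dsk/c0t0d0s7
zonecfg:myzone:fs> set raw=/dev/rdsk/c0t0d0s7
zonecfg:myzone:fs> set type=ufs
zonecfg:myzone:fs> end
zonecfg:myzone> commit
zonecfg:myzone> exit
```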
It is an error to specify a raw device for a file system type that does not provide an fsck binary in /usr/lib/fs/fstype/fsck. It is also an error to omit the raw device when an fsck binary does exist for that file system type.
In addition to the default dataset described in File Systems Mounted in Zones in Oracle Solaris Zones Configuration Resources, you can add a ZFS dataset to a non-global zone by using the zonecfg command with the add dataset resource. The dataset is visible and mounted in the non-global zone, and also visible in the global zone. The zone administrator can create and destroy file systems within that dataset, and modify the properties of the dataset.
The zoned attribute of zfs indicates whether a dataset has been added to a non-global zone.
$ pfexec zfs get zoned dataset
NAME     PROPERTY  VALUE  SOURCE
dataset  zoned     on     local
Each dataset that is delegated to a non-global zone through a dataset resource is aliased. The dataset layout is not visible within the zone. Each aliased dataset appears in the zone as if it were a pool. The default alias for a dataset is the last component in the dataset name. For example, if the default alias is used for the delegated dataset tank/sales, the zone will see a virtual ZFS pool named sales. The alias can be customized to be a different value by setting the alias property within the dataset resource.
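The tank/sales example above could be delegated with an explicit alias as follows. The zone name is hypothetical; here the alias matches the default (the last component of the dataset name), so the zone sees a virtual pool named sales.

```shell
# Delegate tank/sales to the zone under the alias "sales"
# (zone name is hypothetical).
global$ pfexec zonecfg -z myzone
zonecfg:myzone> add dataset
zonecfg:myzone:dataset> set name=tank/sales
zonecfg:myzone:dataset> set alias=sales
zonecfg:myzone:dataset> end
zonecfg:myzone> commit
zonecfg:myzone> exit
```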
A dataset named rpool exists within each non-global zone's zonepath dataset. For all non-global zones, this zone rpool dataset is aliased as rpool.
zonename$ zfs list -o name,zoned,mounted,mountpoint
NAME                ZONED  MOUNTED  MOUNTPOINT
rpool               on     no       /rpool
rpool/ROOT          on     no       legacy
rpool/ROOT/solaris  on     yes      /
rpool/export        on     no       /export
rpool/export/home   on     no       /export/home
Dataset aliases are subject to the same name restrictions as ZFS pools. These restrictions are documented in the zpool(8) man page.
If you want to share a global zone dataset, you can add an LOFS-mounted ZFS file system by using the zonecfg command with the add fs subcommand. An administrator with the appropriate rights is responsible for setting and controlling the properties of the dataset.
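Such an LOFS mount could be configured as in the following sketch, assuming a global zone ZFS file system mounted at /tank/shared and a zone named myzone (both hypothetical).

```shell
# Loopback-mount a global zone ZFS file system into the zone
# (zone name and paths are hypothetical).
global$ pfexec zonecfg -z myzone
zonecfg:myzone> add fs
zonecfg:myzone:fs> set dir=/export/shared
zonecfg:myzone:fs> set special=/tank/shared
zonecfg:myzone:fs> set type=lofs
zonecfg:myzone:fs> end
zonecfg:myzone> commit
zonecfg:myzone> exit
```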
For more information about ZFS, see Chapter 10, Oracle Solaris ZFS Advanced Topics in Managing ZFS File Systems in Oracle Solaris 11.4.
You cannot use the mknod command to make a special file in a non-global zone.
After a non-global zone is installed, the zone must never be accessed directly from the global zone by any commands other than system backup utilities. Moreover, a non-global zone can no longer be considered secure after it has been exposed to an unknown environment. An example would be a zone placed on a publicly accessible network, where it would be possible for the zone to be compromised and the contents of its file systems altered. If any compromise could have occurred, the global administrator should treat the zone as untrusted.
Any command that accepts an alternative root by using the -R or -b options (or the equivalent) must not be used when the following are true:
The command is run in the global zone.
The alternative root refers to any path within a non-global zone, whether the path is relative to the current running system's global zone or the global zone in an alternative root.
An example is the pkgadd -R root-path command when run from the global zone with a non-global zone root path.
Commands that use -R with an alternative root path include auditreduce, metaroot, pkg, and syseventadm.
Commands that use -b with an alternative root path include add_drv and useradd.