
Oracle® Solaris Zones Configuration Resources


Updated: August 2021
 
 

About File Systems and Devices in Zones

File Systems Mounted in Zones

Each zone has a ZFS dataset delegated to it by default. This default delegated dataset mimics the default dataset layout of the global zone. A dataset called …/rpool/ROOT contains boot environments.


Note - Do not manipulate the …/rpool/ROOT dataset directly.

The rpool dataset, which must exist, is mounted by default at …/rpool. The …/rpool/export and …/rpool/export/home datasets are mounted at /export and /export/home. These non-global zone datasets have the same uses as the corresponding global zone datasets and can be managed in the same way. The zone administrator can create additional datasets within the …/rpool, …/rpool/export, and …/rpool/export/home datasets.


Note - Do not use the zfs command to create, delete, or rename file systems within the hierarchy that starts at the zone's rpool/ROOT file system. The zfs command can be used to set properties other than canmount, mountpoint, sharesmb, zoned, com.oracle.*:*, com.sun:*, and org.opensolaris.*:*.
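
Outside the rpool/ROOT hierarchy, the zone administrator can create additional datasets in the usual way. The following is a minimal sketch run from within the non-global zone; the dataset name user1 is hypothetical.

$ zfs create rpool/export/home/user1
$ zfs list -r rpool/export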

    Generally, the file systems mounted in a zone include the following:

  • The set of file systems mounted when the virtual platform is initialized

  • The set of file systems mounted from within the application environment itself

    These sets can include, for example, the following file systems:

  • ZFS file systems that have a mountpoint property other than none or legacy and a canmount property value of yes.

  • File systems specified in a zone's /etc/vfstab file.

  • AutoFS and AutoFS-triggered mounts. AutoFS properties are set by using the sharectl command, which is described in the sharectl(8) man page; see the example after this list.

  • Mounts explicitly performed by a zone administrator
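
    As mentioned in the AutoFS item above, autofs properties are viewed and set with the sharectl command. The following is a minimal sketch; the timeout values shown are illustrative only.

$ sharectl get -p timeout autofs
timeout=600
$ sharectl set -p timeout=1200 autofs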

    File system mounting permissions within a running non-global zone are also defined by the fs-allowed global property. This property does not apply to file systems mounted into the zone by using the zonecfg add fs or add dataset commands. By default, only mounts of file systems within a zone's default delegated dataset, hsfs file systems, and network file systems such as NFS are permitted within a zone.
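
    For illustration, the following minimal sketch sets the fs-allowed property from the global zone for a hypothetical zone named my-zone; the ufs and pcfs values are examples only.

$ pfbash zonecfg -z my-zone
zonecfg:my-zone> set fs-allowed=ufs,pcfs
zonecfg:my-zone> commit
zonecfg:my-zone> exit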


    Caution  - Certain restrictions are placed on mounts, other than the defaults, that are performed from within the application environment. These restrictions prevent the zone administrator from denying service to the rest of the system or from otherwise negatively affecting other zones.


There are security restrictions associated with mounting certain file systems from within a zone. Other file systems exhibit special behavior when mounted in a zone. See File Systems and Non-Global Zones in Creating and Using Oracle Solaris Zones for more information.

For more information about datasets, see the datasets(7) man page. For more information about BEs, see Creating and Administering Oracle Solaris 11.4 Boot Environments.

File System Mounts and Updating

It is not supported to mount a file system in a way that hides any file, symbolic link, or directory that is part of the zone's system image as described in the pkg(7) man page. For example, if there are no packages installed that deliver content into /usr/local, it is permissible to mount a file system at /usr/local.

However, if any package, including legacy SVR4 packages, delivers a file, directory, or symbolic link into a path that begins with /usr/local, it is not supported to mount a file system at /usr/local. It is supported to temporarily mount a file system at /mnt.
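
One way to check whether any installed package delivers content under /usr/local is a sketch such as the following; note that pkg contents reports paths without a leading slash, and the grep pattern is only an illustration.

$ pkg contents -o path | grep '^usr/local'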

Due to the order in which file systems are mounted in a zone, it is not possible to have an fs resource mount a file system at /export/filesys if /export comes from the zone's rpool/export dataset or another delegated dataset.

/dev File System in Non-Global Zones

The zonecfg command uses a rule-matching system to specify which devices should appear in a particular zone. Devices matching one of the rules are included in the /dev file system for the zone. For more information, see How to Create and Deploy a Non-Global Zone in Creating and Using Oracle Solaris Zones.
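
For illustration, the following minimal sketch adds a device rule to a hypothetical zone named my-zone; the match pattern is an example only.

$ pfbash zonecfg -z my-zone
zonecfg:my-zone> add device
zonecfg:my-zone:device> set match=/dev/dsk/c1t1d0*
zonecfg:my-zone:device> end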

Removable lofi Device in Non-Global Zones

A removable loopback file lofi device, which works like a CD-ROM device, can be configured in a non-global zone. You can change the file that the device maps to and create multiple lofi devices to use the same file in read-only mode. This type of lofi device is created by using the lofiadm command with the –r option. A file name is not required at creation time.

During the lifecycle of a removable lofi device, a file can be associated with an empty device, or dissociated from a device that is not empty. A file can be associated with multiple removable lofi devices safely at the same time. You cannot remap a file that has been mapped to either a normal read-write lofi device or to a removable lofi device.

The number of potential lofi devices is limited by the zone.max-lofi resource control, which can be set by using the zonecfg command in the global zone.
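
For illustration, the following minimal sketch sets the zone.max-lofi resource control from the global zone by adding an rctl resource; the limit of 5 is a hypothetical value.

$ pfbash zonecfg -z my-zone
zonecfg:my-zone> add rctl
zonecfg:my-zone:rctl> set name=zone.max-lofi
zonecfg:my-zone:rctl> add value (priv=privileged,limit=5,action=deny)
zonecfg:my-zone:rctl> end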

Once created, a removable lofi device is read-only. The lofi driver will return an error on any write operation to a removable lofi device.

Example 3  Using the lofiadm Command to Create and Manage Removable lofi Devices

The following command creates a removable lofi device with an associated file.

$ lofiadm -r /path/to/file
/dev/lofi/1

The following command creates an empty removable lofi device.

$ lofiadm -r
/dev/lofi/2

The following command inserts a file into a removable lofi device.

$ lofiadm -r /path/to/file /dev/lofi/1
/dev/lofi/1

For more information, see the lofiadm(8), zonecfg(8), and lofi(4D) man pages. Also see Setting Zone-Wide Resource Controls.

Disk Format Support in Non-Global Zones

Disk partitioning and use of the uscsi command are enabled through the zonecfg tool. See device in Zone Resource Types and Their Properties for an example. For more information about the uscsi command, see the uscsi(4I) man page. The following restrictions apply:

  • Delegation is supported only for solaris zones.

  • Disks must use the sd target as shown by using the prtconf -D command. See the prtconf(8) man page.
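
For illustration, the following minimal sketch enables partitioning and raw I/O on a delegated disk by using the allow-partition and allow-raw-io device properties; the disk name c9t0d0 and the zone name my-zone are hypothetical.

$ pfbash zonecfg -z my-zone
zonecfg:my-zone> add device
zonecfg:my-zone:device> set match=/dev/*dsk/c9t0d0*
zonecfg:my-zone:device> set allow-partition=true
zonecfg:my-zone:device> set allow-raw-io=true
zonecfg:my-zone:device> end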

Kernel Zones Device Resources With Storage URIs

    The following support is available:

  • Devices that are used as disks are supported. This support includes whole physical disks, whole physical or virtual disks on a SAN, devices in conjunction with Oracle Solaris Cluster, and ZFS volumes.

  • Kernel zones also support NFS-based storage objects through the nfs: URI.

    The NFS URI specifies an object based on a lofi device created on the given NFS file. The NFS file is accessed with credentials derived from user and group. The user and group can be given as names or as numeric IDs. The host can be given as an IPv4 address, an IPv6 address, or a host name. IPv6 addresses must be enclosed in square brackets ([ ]).

    Format:

    nfs://user:group@host[:port]/nfs-share-path/file

    Examples:

    nfs://admin:staff@host/export/test/nfs_file
    nfs://admin:staff@host:1000/export/test/nfs_file 
  • Kernel zones support the bootpri and id properties in device resources.

    • Only set the bootpri property on disks that will be part of the root pool for the zone. If you set bootpri on disks that will not be part of the root pool for the zone, you might damage the data on the disk.

    • Only set the bootpri property on devices that must be bootable.

    • The id property controls the instance of the disk in the kernel zone. For example, id=5 means that the disk will be c1d5 in the zone.

    For more information about the bootpri and id properties, see the solaris-kz(7) man page.

  • The root zpool that is created on bootable solaris-kz disks can be imported into the global zone during installation. The root zpool is visible with the zpool command. See the zpool(8) man page for more information.
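
For illustration, the following minimal sketch configures a bootable kernel zone disk with the bootpri and id properties; the zone name my-kzone and the ZFS volume path are hypothetical.

$ pfbash zonecfg -z my-kzone
zonecfg:my-kzone> add device
zonecfg:my-kzone:device> set storage=dev:/dev/zvol/dsk/rpool/kzvol0
zonecfg:my-kzone:device> set bootpri=0
zonecfg:my-kzone:device> set id=0
zonecfg:my-kzone:device> end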

Example 4  Configuring a Storage URI to Create a Portable Zone Configuration

This example uses a device resource type to configure a storage URI that makes the zone configuration portable to other host systems.

$ pfbash zonecfg -z my-zone
zonecfg:my-zone> add device
zonecfg:my-zone:device> set storage=nfs://user1:staff@host1/export/file1
zonecfg:my-zone:device> set create-size=4g
zonecfg:my-zone:device> end

For more information, see the suri(7) man page.

Example 5  Viewing the Current Device Resources Configuration

This example displays information about the current configuration for device resources.

$ pfbash zonecfg -z my-zone info device
device: 
    storage: dev:/dev/zvol/dsk/rpool/VARSHARE/zones/my-zone/disk0
    id: 0
    bootpri: 0
device:
    storage: nfs://user1:staff@host1/export/file1
    create-size: 4g

Example 6  Viewing the Current Device Resources Configuration for a Specified ID

This example displays the output for a specific zone by specifying the ID for the zone.

$ zonecfg -z my-zone info device id=1
device:
    storage: nfs://user1:staff@host1/export/file1
    create-size: 4g
    id: 1
    bootpri not specified