
solaris-kz(7)

Name

solaris-kz - Solaris kernel zone

Description

The solaris-kz brand uses the branded zones framework described in brands(7) to run zones with a separate kernel and OS installation from that used by the global zone.

Installation and Update

A solaris-kz installation is independent of the global zone; it is not a pkg linked image and can be modified regardless of the global zone content. A solaris-kz zone can be installed in the same manner as other brands directly from the global zone, or via a boot media as described below.
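
For example, a direct installation from the global zone, assuming a configured kernel zone named kzone1, is simply:

# zoneadm -z kzone1 install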

When specifying a manifest for installation, the manifest should be suitable for a global zone installation. As kernel zones always install into a known location for the root pool, an installation target disk should not be specified.

If an AI manifest is used to install a different version of Solaris than the one that is installed in the global zone, the installation must be performed using installation media that matches the version of Solaris being installed. A typical command line would resemble:

zoneadm -z kzone1 install -b <ai.iso> -m <manifest.xml>

Boot environment (BE) management is independent of the global zone. BE creation in the global zone does not create a new BE in the zone. For more information, see the beadm(8) man page.
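
For example, to list the boot environments of a kernel zone named kzone1, run beadm inside the zone:

# zlogin kzone1 beadm list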

Process Management and Visibility

Unlike other brands, a solaris-kz zone runs a separate kernel in its own address space and differences are apparent when examining the zone from the global zone.

As an example, processes running in a solaris-kz zone are not directly accessible by the global zone. To see the list of processes in a kernel zone named kz-zone, rather than using the ps command with the -z kz-zone option, use the following command:

# zlogin kz-zone ps -e

The global zone and each kernel zone manage their own process ID space. Thus, process ID 1234 may exist in the global zone and in one or more kernel zones, with each referring to a distinct process. If the global zone administrator wishes to kill process 1234 in kz-zone, it must be done with the following command or an equivalent:

# zlogin kz-zone kill 1234

ps(1) and similar tools run from the global zone will see processes associated with managing a solaris-kz zone instance, such as kzhost and zlogin-kz. These can be useful for debugging, but are otherwise private implementation details.
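
For example, one quick way to locate the host-side processes backing a kernel zone named kz-zone (output varies, and the process names are implementation details):

# pgrep -fl kz-zone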

Similarly, resource management functionality is different. For example, resource controls such as max-processes are not available when configuring a solaris-kz zone, as they are only meaningful when sharing a single kernel instance. That is, a process running inside a solaris-kz zone cannot take up a process table slot in the global zone, as the kernels are independent.

The zonestat utility displays the resource usage of the zone. The output is generally correct, but some values may reflect the host rather than the zone. For example, resource control values such as lwps show the lwps used on the host, not those used inside the zone.
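
For example, to sample the usage of a kernel zone named kz-zone every 5 seconds:

# zonestat -z kz-zone 5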

The solaris-kz brand uses certain hardware features that may not be available on older systems or in virtualized environments. To detect whether a system supports the solaris-kz brand, install the brand-solaris-kz package and then run the virtinfo command:

# virtinfo -c supported list kernel-zone

If kernel-zone is not shown in the supported list, check syslog for more information. Messages pertaining to kernel zones contain the string kernel-zone.
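
For example, assuming the default syslog destination:

# grep kernel-zone /var/adm/messages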

Stolen time as reported by the mpstat(8), iostat(8), vmstat(8) and other utilities directly reflects the time when the kernel zone could not run due to the host using CPU resources for other purposes.
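
For example, to observe these statistics from within a kernel zone named kz-zone:

# zlogin kz-zone vmstat 5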

Storage Access

A solaris-kz brand zone must reside on one or more devices. A default zfs(8) volume will be created in the global zone's root zpool if the configuration is not customized prior to installation.

The device onto which the zone is installed is specified with device resources that have the bootpri property set to a non-negative integer value. If a device will not be used as a boot device, it must not have the bootpri property set. To unset bootpri, use clear bootpri while in the device resource scope. If multiple bootable devices are present during installation, they will be used for a mirrored root ZFS pool in the zone.

The bootpri property specifies a relative boot order with respect to other bootpri entries. The default boot order is determined by sorting the entries first by bootpri, with lower values ordered before higher ones, then by id if multiple devices have the same bootpri value. For example:

# zonecfg -z vzl-129 info device
device:
    storage: dev:dsk/c0t0d0s0
    id: 0
    bootpri: 2
device:
    storage: dev:dsk/c0t1d0s0
    id: 1
    bootpri: 2
device:
    storage: dev:dsk/c0t2d0s0
    id: 2
    bootpri: 1

In the above example, the boot order is: id=2, id=0, id=1.
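
To make the device with id 2 non-bootable in such a configuration, unset its bootpri in the device resource scope as described above. A minimal sketch:

# zonecfg -z vzl-129
zonecfg:vzl-129> select device id=2
zonecfg:vzl-129:device> clear bootpri
zonecfg:vzl-129:device> end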

The zonepath property cannot be set for a kernel zone. As an implementation detail, it is set to a fixed location using tmpfs(4FS) and contains no persistent or otherwise user-serviceable data. As the zone root is contained within the root ZFS volume, it is not mounted in the global zone under the zone path, unlike traditional zones. Access to the zone root is possible only through the zone itself, for example by logging in with zlogin.

A solaris-kz zone cannot directly take advantage of shared-kernel features such as ZFS datasets and file system mounts. Instead, storage is made available to the zone via block devices such as raw disks, ZFS volumes, and lofi devices.

Storage can be added by using add device in zonecfg(8) to specify a storage URI via the storage property; the match property is not supported for kernel zones. For more information, see the suri(7) man page.

Local devices can be specified in the storage property using a dev URI which maps to a ZFS volume or a raw disk device, or a file URI. If a raw disk is specified it must be a whole disk or LUN. You can use the device path without any partition/slice suffix, for example:

# zonecfg -z myzone
zonecfg:myzone> add device
zonecfg:myzone:device> set storage=dev:/dev/rdsk/c4t9d0
zonecfg:myzone:device> set id=4
zonecfg:myzone:device> set bootpri=1

The id can be specified to fix the disk address inside the zone. If id is not specified, the system will automatically allocate and assign a value.

Shared storage URIs are required to enable kernel zones to be migratable to other host systems; this applies to both the root pool device(s) and any other storage devices configured as device or suspend resources for the zone. See the suri(7) man page for more information on shared storage URIs. For example:

# zonecfg -z myzone
zonecfg:myzone> add device
zonecfg:myzone:device> set storage=nfs://user1:staff@host1/export/file1
zonecfg:myzone:device> set create-size=4g

To see the configuration of a kernel zone's device resources, use the zonecfg(8) info subcommand:

# zonecfg -z myzone info device
device:
    storage: dev:/dev/zvol/dsk/rpool/VARSHARE/zones/myzone/disk0
    id: 0
    bootpri: 0
device:
    storage: nfs://user1:staff@host1/export/file1
    create-size: 4g
    id: 1
    bootpri not specified

You can also restrict the output to a single device resource by specifying its id:

# zonecfg -z myzone info device id=1
device:
    storage: nfs://user1:staff@host1/export/file1
    create-size: 4g
    id: 1
    bootpri not specified

To install a zone to a non-default location such as an iSCSI logical unit, the device resource for the root disk must be modified from the system-specified default. For example:

# zonecfg -z myzone
zonecfg:myzone> select device id=0
zonecfg:myzone:device> set storage=iscsi://host/luname.naa.0000abcd

At least one device must have bootpri set to a non-negative integer to indicate that it is bootable. Within a kernel zone, all devices that act as mirrors or spares for the root ZFS pool must be bootable. If the storage URI cannot be mapped at boot time, such as when the device the URI is mapped to is missing or the iSCSI LUN goes offline, the zone will fail to boot.

SCSI Reservations

Setting the allow-mhd property to "true" allows applications to use the mhd(4I) SCSI reservation ioctls on the given device. This is possible only if the backend SCSI device supports reservation. Setting this property has the following impact on the zone:

  • Live migration and suspend/resume of the zone is disabled.
  • Live Zone Reconfiguration is disallowed for such devices.
  • The device cannot be shared at the same time by any other running zone on the host.

Network Access

Kernel zones must use an exclusive IP stack. Network access is provided by adding net or anet resources for Ethernet datalinks and by adding anet resources for IPoIB datalinks. The datalink specified by these resources will be used as the backend of the datalinks visible in the zone. Both IPoIB and Ethernet network resources can be specified, and the datalinks visible in the zone will be of the corresponding media type. As with storage devices, an id may be specified to fix the virtual NIC's address inside the zone. Adding InfiniBand network links through net resources is not supported.

Kernel zones may themselves host zones (in which case they play the role of the global zone for those zones). Network access for the hosted zones is provided over Ethernet datalinks only, not over IPoIB datalinks. However, because the networking configuration of the kernel zone is partially defined by its zone configuration, hosted zones are restricted in which MAC addresses they may use.

Attempting to boot a zone with a mac-address setting of random is permitted in the following cases:

  1. If anet is configured with allowed-mac-address as any.
  2. If anet is configured with allowed-mac-address as 2:8:20, where 2:8:20 is the default OUI for VNICs.

Attempting to boot a zone with a mac-address setting of a specific MAC address is permitted if the user-specified MAC address matches an entry in the allowed-mac-address list. For more information about the allowed-mac-address anet property, see the zonecfg(8) man page.

To supply additional MAC addresses to a kernel zone, add them to the mac-address property for the relevant resource. For more information, see the zonecfg(8) man page. This will make that mac-address available as a factory address inside the kernel zone.

A hosted zone may then use that MAC address itself. To do this, set the mac-address property of the hosted zone either to the explicitly configured MAC address or to auto. For details of these settings, see the zonecfg(8) man page.
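
For example, a minimal sketch of adding a factory MAC address through the anet resource's mac sub-resource (shown in the defaults table below); the zone name and address are hypothetical:

# zonecfg -z kz-zone
zonecfg:kz-zone> select anet id=0
zonecfg:kz-zone:anet> add mac
zonecfg:kz-zone:anet:mac> set mac-address=2:8:20:aa:bb:cc
zonecfg:kz-zone:anet:mac> end
zonecfg:kz-zone:anet> end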

Memory Configuration

Host memory is allocated to a kernel zone when the zone is booted. The size of the allocation is configured by setting the physical property of the capped-memory resource in zonecfg(8). The value of the physical property is a number with an optional scale suffix (K, M, G, T). If a real number with a fractional part is specified, it is internally converted to an integer value. Because this conversion can introduce unexpected rounding or representation errors, it is recommended to use an integer with an appropriate scale suffix.
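
For example, a minimal sketch of setting an 8 GB allocation (hypothetical zone name):

# zonecfg -z kz-zone
zonecfg:kz-zone> select capped-memory
zonecfg:kz-zone:capped-memory> set physical=8g
zonecfg:kz-zone:capped-memory> end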

The minimum physical memory size for a kernel zone is 2GB, and the memory is allocated using a single page size. The allocated memory is locked and as such is not pageable to a swap device.

You can set the pagesize-policy property of the capped-memory resource in the zone's configuration to specify a policy for selecting the page size used to allocate the kernel zone's memory. The policy can be set to one of:

largest-only

Use the largest page size provided by the system for the kernel zone's memory allocation. The zone will fail to boot if the system is unable to allocate the required number of pages to satisfy the memory allocation, or if the allocation size is not aligned with the size of the largest page provided by the host.

largest-available

The system attempts to use the largest possible page size, scaling down the page size until the allocation can be satisfied. The priority is to boot the zone.

smallest-only

Use the smallest page size provided by the host for the memory allocation.

fixed

The kernel zone's memory is allocated using only pages of the size specified by the pagesize property. The physical memory size of the kernel zone must be aligned with the pagesize. The zone will fail to boot if there are insufficient pages of the specified size available or if the pagesize is not supported by the host.

Clearing the pagesize-policy property may be necessary to support resuming an older suspend image format or to permit live or warm migration between systems. If pagesize-policy is not specified, the smallest supported page size provided by the host platform is used for the allocation.

If the pagesize-policy property is set to fixed, the pagesize property must be set to a page size supported by the host. The size is specified as an integer value with an optional scale suffix (K, M, G). See pagesize(1) for how to determine the supported page sizes for a given kernel zone host. The smallest allowed page size is 256MB.
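
For example, a minimal sketch of requesting only 256MB pages (hypothetical zone name; the page size must be supported by the host):

# zonecfg -z kz-zone
zonecfg:kz-zone> select capped-memory
zonecfg:kz-zone:capped-memory> set pagesize-policy=fixed
zonecfg:kz-zone:capped-memory> set pagesize=256m
zonecfg:kz-zone:capped-memory> end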

Memory Reservation Pools

Guaranteeing memory allocation for booting and rebooting kernel zones may be difficult on systems experiencing memory contention. To mitigate this, memory reservation pools (MRPs) provide a mechanism to reserve memory during system boot for later use by kernel zones. The kernel zone MRP is managed by the svc:/system/memory-reserve:zones service and is disabled by default.

To configure the kernel zone MRP, modify the following properties of the svc:/system/memory-reserve:zones service instance:

config/size

The size of the memory reservation. A scale (K, M, G, T) can be applied to the value.

config/pagesize

The page size to use for the memory reservation if the config/pagesize-policy is set to fixed. Must be an integer with an optional scale suffix (K, M, G). If the specified page size is not supported on the host, the service transitions into the maintenance state and any kernel zones using the service will fail to boot. Use pagesize(1) to determine the page sizes supported by the host system.

config/pagesize-policy

Mimics pagesize-policy under capped-memory. Acceptable values are fixed, smallest-only, largest-available, and largest-only.

config/type

Must be set to solaris-kz.

config/lgrps

Locality group IDs to allocate memory from. Can be empty. See lgrpinfo(1).

Here is an example of how to configure and enable the kernel zone MRP service to reserve 80G of memory with pagesize-policy set to largest-available:

# svccfg -s svc:/system/memory-reserve:zones
svc:/system/memory-reserve:zones> setprop config/pagesize-policy=largest-available
svc:/system/memory-reserve:zones> setprop config/size=80G
svc:/system/memory-reserve:zones> setprop config/type=solaris-kz
svc:/system/memory-reserve:zones> setprop config/lgrps=""
svc:/system/memory-reserve:zones> refresh
svc:/system/memory-reserve:zones> exit
# svcadm enable svc:/system/memory-reserve:zones

To configure a kernel zone to have its memory allocated from an MRP, the memory-reserve property under the capped-memory resource must be set to the MRP service instance name (for example, zones). This property cannot be reconfigured live and is mutually exclusive with the pagesize-policy property of the capped-memory resource; that is, if an MRP is used, the kernel zone uses the pagesize-policy of the MRP service. The following example shows how to configure the zone kz-zone to use the above MRP service instance for its memory allocation:

# zonecfg -z kz-zone
zonecfg:kz-zone> select capped-memory
zonecfg:kz-zone:capped-memory> clear pagesize-policy
zonecfg:kz-zone:capped-memory> set memory-reserve=zones
zonecfg:kz-zone:capped-memory> end
zonecfg:kz-zone> exit
#

Memory Live Zone Reconfiguration

Memory Live Zone Reconfiguration (Memory LZR) is currently supported only on the SPARC platform.

If enabled, Memory LZR allows the system administrator to change the amount of RAM allocated to a kernel zone without requiring the kernel zone to be rebooted. Memory LZR is enabled for a kernel zone if both of the following requirements are met:

  1. The host compatibility level set by the host-compatible property must be either native or level2, or the memlzr modifier must be used.
  2. Either the pagesize-policy or the memory-reserve property of the capped-memory resource must be set.

The amount of RAM must be a multiple of the page size used by the kernel zone. You can find the page size of a running kernel zone using the kstat2 command:

# kstat2 kstat:/zones/mykz/memory/usage:pagesize
kstat:/zones/mykz/memory/usage
        pagesize                        2147483648 bytes
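
For example, given the 2G page size above, a sketch of growing the running zone mykz to 8G (a multiple of the page size) through live zone reconfiguration:

# zonecfg -z mykz
zonecfg:mykz> select capped-memory
zonecfg:mykz:capped-memory> set physical=8g
zonecfg:mykz:capped-memory> end
zonecfg:mykz> commit
zonecfg:mykz> exit
# zoneadm -z mykz apply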

CPU Configuration

As described in zonecfg(8), the virtual-cpu and dedicated-cpu resources and the resource pool property can be used to define the CPUs available to the kernel zone. Typically, the dedicated-cpu resource is used to isolate CPU resources for the sole use of the kernel zone, while the virtual-cpu resource is used when sharing CPUs to provide finer-grained control over the CPUs available in the kernel zone.

Note that the dedicated-cpu resource and the pool property are mutually exclusive.

CPU configuration behaves differently depending on how the virtual CPU and dedicated CPU resources are configured.

If none of virtual-cpu, dedicated-cpu, or pool is specified:

The kernel zone gets four virtual CPU threads that compete for compute time with all other application threads on the physical host system.

If virtual-cpu is specified but neither dedicated-cpu nor pool:

For a virtual-cpu:ncpus value of N, the kernel zone gets N virtual CPU threads that compete for or share the compute time with all the other application threads on the physical host system.

If dedicated-cpu is specified but not virtual-cpu:

The kernel zone sets the number of virtual CPU threads to the number of CPUs in the dedicated-cpu resource. The virtual CPU threads will have sole use of the dedicated CPUs, and will not share them with other application threads running on the same physical host.

If pool is specified but not virtual-cpu:

If the pool specified in the pool property is associated with a non-default pset, the kernel zone sets the number of virtual CPU threads to the number of CPUs in the pset. The virtual CPU threads will use the CPUs of the pset, but these CPUs can be shared with other threads bound to the same pset on the physical host.

However, if the pool is associated with the default pset, it is equivalent to not setting the pool property at all.

If both virtual-cpu and dedicated-cpu are specified:

This case allows you to override the default number of virtual CPU threads so that it does not match the number of CPUs defined by the dedicated-cpu resource.

However, such configurations usually lead to sub-optimal system performance and should generally be avoided: CPUs reserved by the dedicated-cpu resource are either not fully utilized because there are fewer virtual CPU threads, or the system suffers from overcommit caused by mapping more virtual CPU threads onto the smaller set of CPUs reserved via the dedicated-cpu resource.

Using a range for the dedicated-cpu resource is not recommended. The number of virtual CPUs created for a kernel zone is fixed at the time the kernel zone is booted. For a zone with the dedicated-cpu ncpus property set to a range, the number of CPUs may lie anywhere in the range. If more CPUs are automatically added to the zone's pset, the kernel zone will be unable to use them, causing them to sit idle. If CPUs are automatically removed from the zone's pset, the guest can become severely overcommitted, that is, left with more virtual CPUs than physical CPUs, resulting in poor performance.
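
For example, a minimal sketch of giving a kernel zone sole use of eight CPUs (hypothetical zone name; recall that dedicated-cpu and pool are mutually exclusive):

# zonecfg -z kz-zone
zonecfg:kz-zone> add dedicated-cpu
zonecfg:kz-zone:dedicated-cpu> set ncpus=8
zonecfg:kz-zone:dedicated-cpu> end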

Suspend, Resume, and Warm Migration

Kernel zones may be suspended by executing the zoneadm suspend command. The running state of the zone is written to the device or file specified in the suspend resource. Because this includes the entire RAM used by the zone, suspending can take a significant amount of time and space.

Suspend and resume are supported for a kernel zone only if it has a suspend resource in its configuration. Within a suspend resource, either the path or the storage property must be specified. The path property specifies the name of a file that will contain the suspend image. The directory containing the file must exist and be writable by the root user. Any file system that is mounted prior to the start of svc:/system/zones:default may be used. The storage property specifies the storage URI (see suri(7)) of a device that will contain the suspend image. The whole device will be used, and it may not be shared with anything else.
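
For example, a minimal sketch of adding a file-backed suspend resource (the zone name and path are hypothetical):

# zonecfg -z kz-zone
zonecfg:kz-zone> add suspend
zonecfg:kz-zone:suspend> set path=/export/suspend/kz-zone.image
zonecfg:kz-zone:suspend> end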

The suspend image is compressed prior to writing. As such, the size of the suspend image will typically be significantly smaller than the size of the zone's RAM. During suspend, a message is printed and logged to the console log indicating the size of the suspend image.

After compression, the suspend image is encrypted using AES-128-CCM. The encryption key is automatically generated from /dev/random (see the random(4D) man page) and is stored in the keysource resource's raw property.

If a zone is suspended, the zoneadm boot command resumes it. The boot -R option can be used to boot afresh if a resume is not desired.

If the suspend image and the zone's storage are accessible by multiple hosts (via the suspend:storage and device:storage properties), the suspend image can be used to support warm migration with the zoneadm migrate command described in zoneadm(8), by running zoneadm suspend before the migration. This avoids most zone startup cost on the destination host, excluding the time spent resuming.
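
For example, a sketch of a warm migration to a destination host host2 (hypothetical names; see zoneadm(8) for the accepted RAD URI forms):

# zoneadm -z kz-zone suspend
# zoneadm -z kz-zone migrate ssh://host2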

Warm migration does not check for compatibility between the source and destination hosts.

The supported storage URI types for warm migration match those supported for live and cold migration.

Note: on x86 platforms, live reconfiguration of the virtual-cpu resource is disabled after the kernel zone has been resumed or has been warm or live migrated. To re-enable live reconfiguration of the virtual-cpu resource, the kernel zone must be rebooted.

The source and destination hosts must be the same platform. On x86, the vendor (AMD/Intel) as well as the CPU model name must match. On SPARC, the hardware platform must be the same. For example, you cannot warm migrate from a T4 host to a T5 host. If you want to migrate between different hardware platforms, you must specify the appropriate migration class in the cpu-arch property.

The migration classes for SPARC platforms are:

generic

Kernel zones can be migrated between SPARC platforms T4 and newer.

migration-class1

Kernel zones can be migrated between SPARC T4, SPARC T5, SPARC M5, SPARC M6, SPARC M7, SPARC T7, SPARC S7, SPARC T8, and SPARC M8 series platforms.

migration-class2

Kernel zones can be migrated between SPARC T7, SPARC M7, SPARC S7, SPARC T8, and SPARC M8 series platforms.

sparc64-class1

Kernel zones can be migrated between Fujitsu M10 and Fujitsu SPARC M12 platforms.

If no value is set, the kernel zone's CPU migration class is the same as the host's, and the zone can migrate between platforms compatible with the host's CPU class.

Note that the kernel zone's CPU migration class cannot exceed the limit of the host's CPU class.

Also note that performance counters are not available when cpu-arch is set to a migration class.
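
For example, a minimal sketch of setting a SPARC migration class on a kernel zone (hypothetical zone name):

# zonecfg -z kz-zone set cpu-arch=migration-class1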

The possible migration classes on Intel platforms are:

migration-class1

A kernel zone can perform cross-CPU type migration between CPUs of Nehalem or later micro architectures. Features supported by this class are: sse, sse2, sse3, sse4.1, sse4.2, ssse3, cx8, cx16, pdcm, popcnt, fpu, pse, pse36, tsc, tscp, msr, pae, mce, sep, pge, cmov, clfsh, mmx, fxsr, htt, ss, ahf64, sysc, nx-bit, long-mode.

migration-class2

A kernel zone can perform cross-CPU type migration between CPUs of Westmere or later micro architectures. Features supported by this class are: all features supported by migration-class1 and pclmulqdq, aes, 1g-page.

migration-class3

A kernel zone can perform cross-CPU type migration between CPUs of Sandy Bridge or later micro architectures. Features supported by this class are: all features supported by migration-class2 and xsave, avx.

migration-class4

A kernel zone can perform cross-CPU type migration between CPUs of Ivy Bridge or later micro architectures. Features supported by this class are: all features supported by migration-class3 and f16c, rdrand, efs.

migration-class5

A kernel zone can perform cross-CPU type migration between CPUs of Haswell or later micro architectures. Features supported by this class are: all features supported by migration-class4 and fma, movbe, bmi1, bmi2, avx2, lzcnt.

migration-class6

A kernel zone can perform cross-CPU type migration between CPUs of Broadwell or later micro architectures. Features supported by this class are: all features supported by migration-class5 and rdseed, adx, prfchw.

migration-class7

A kernel zone can perform cross-CPU type migration between CPUs of Sky Lake or later micro architectures. Features supported by this class are: all features supported by migration-class6 along with avx512f, avx512cd, avx512bw, avx512dq, avx512vl, and clwb.

migration-class8

A kernel zone can perform cross-CPU type migration between CPUs of Ice Lake or later micro architectures. Features supported by this class are: all features supported by migration-class7, plus rdpid, rep_mov, vpclmulqdq, vaes, gfni, avx512vpopcntdq, avx512bitalg, and avx512vbmi2.

When cpu-arch is set to a migration class, only the strand- or hyperthread-specific CPU performance counters are available. This means that some commands, such as busstat and daxstat, which reference other kinds of counters, may not work in kernel zones.

There are no migration classes applicable to AMD CPUs.

If no value is set, the kernel zone can migrate between CPUs of the same micro architecture, or of the exact same type if the micro architecture cannot be determined.

Besides the migration class, you may also need to specify the host compatibility level in the host-compatible property to make sure that the hardware features supported by the versions of Oracle Solaris running on the source and target host systems match.
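
For example, a minimal sketch of setting the host compatibility level (hypothetical zone name; level2 is one of the accepted values mentioned above, see zonecfg(8) for the full list):

# zonecfg -z kz-zone set host-compatible=level2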

On resume, the current configuration of the zone is used to boot, which allows a new configuration to be specified. However, there are restrictions, as the resuming zone expects a particular setup. Incompatibilities such as the following may cause the kernel zone to fail to resume or boot:

  • The CPU supports different features (for example, see cpuid(4D)).

  • The configuration has incompatible capped-memory or pagesize-policy values.

  • The configuration defines a different number of virtual CPUs.

  • A storage device is missing (no device resource with a suitable id property).

  • A virtual NIC is missing (no net or anet resource with a suitable id property).

No specific check of storage identity is done; it is the administrator's responsibility to ensure that the device listed under a particular id is the one that the zone expects to see.

Live and Cold Migration

Kernel zones can be cold or live migrated to compatible hosts by using the zoneadm migrate command, as described in the zoneadm(8) man page.

For live and cold migration, the following services and packages must be configured:

  • The package pkg://system/management/rad/module/rad-zonemgr must be installed on both the target and the source system.

  • Either the svc:/system/rad:local or the svc:/system/rad:remote instance must be enabled, depending on the RAD URI used by the zoneadm migrate command.

  • The instance svc:/system/rad:local must be enabled on the source system.

For live migration specifically, the service svc:/network/kz-migr:stream must also be enabled on the destination system.

Live migration has the same compatibility restrictions as described in the Suspend, Resume, and Warm Migration section above.

Only zones on shared storage may be migrated. Supported storage URI types for migration are iscsi, lu, and nfs.
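
For example, a sketch of live migrating a running kernel zone to a destination host (hypothetical names; the zone's storage must use one of the shared storage URI types above):

# zoneadm -z kz-zone migrate ssh://host2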

Live Storage Migration

Kernel zones support live storage migration of their root ZFS pools. See the zoneadm(8) move subcommand for more information.

Auxiliary State

The following auxiliary states (as shown by zoneadm list -is) are defined for this brand:

suspended

The zone has been suspended and will resume on next boot. Note that the zone must be attached before this state is visible.

debugging

The zone is in the running state, but the kernel debugger is running within the zone, so the zone cannot service network requests or the like. Connect to the zone console to interact with the debugger (kmdb).

panicked

The zone is in the running state, but has panicked.

migrating-out

The zone is fully running, but is being live migrated to another host.

migrating-in

The zone is booted on the host and is receiving the live migration image; it is not fully running until the migration is complete.

no-config

The zone is known to the system, but its configuration is missing and the state of the zone is marked incomplete.

Host Data

Each of a kernel zone's bootable devices contains state information known as host data. This data keeps track of where a zone is in use, whether it is suspended, and other state information. Host data is encrypted and authenticated with AES-128-CCM, using the same encryption key as the suspend image.

As a kernel zone is readied or booted, the host data is read to determine whether the kernel zone's boot storage is in use on another system. If it is in use by another system, the kernel zone will enter the unavailable state and an error message will indicate which system is using it. If it is certain that the storage is not in use on the other system, the kernel zone can be repaired by using the -x force-takeover extended option to zoneadm attach. See the warning below before executing this command.

If the encryption key is inaccessible, the host data and any suspend image will not be readable. In such a circumstance, any attempt to ready or boot the zone will cause the zone to enter the unavailable state. If recovery of the encryption key is not possible, the -x initialize-hostdata extended option to the zoneadm attach subcommand can be used to generate a new encryption key and host data. See the warning below before executing this command.
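
A sketch of the two recovery attachments described above (hypothetical zone name; heed the warning that follows):

# zoneadm -z kz-zone attach -x force-takeover
# zoneadm -z kz-zone attach -x initialize-hostdata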


Note -  WARNING: Forcing a takeover or reinitialization of host data makes it impossible to detect whether the zone is in use on any other system. Running multiple instances of a zone that reference the same storage will lead to irreparable corruption of the zone's file systems.

To prevent loss of the encryption key during a manual warm or cold migration, use zonecfg export on the source system to generate a command file to be used on the destination system. For example:

root@host1# zonecfg -z myzone export -f /net/.../myzone.cfg
root@host2# zonecfg -z myzone -f /net/.../myzone.cfg

Because myzone.cfg in this example contains the encryption key, it is important to protect its contents from disclosure.

Configuration

A solaris-kz brand zone can be configured by using the SYSsolaris-kz template.
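
For example, a minimal sketch of creating a new kernel zone configuration from the template (hypothetical zone name):

# zonecfg -z kz-zone
zonecfg:kz-zone> create -t SYSsolaris-kz
zonecfg:kz-zone> commit
zonecfg:kz-zone> exit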

The following zonecfg(8) resources and properties are not supported for this brand:

anet:address
capped-memory:locked
capped-memory:swap
dataset
device:allow-partition
device:allow-raw-io
fs
file-mac-profile
fs-allowed
ip-type
limitpriv
global-time
max-lwps
max-msg-ids
max-processes
max-sem-ids
max-shm-memory
rctl:zone.max-lofi
rctl:zone.max-swap
rctl:zone.max-locked-memory
rctl:zone.max-shm-memory
rctl:zone.max-shm-ids
rctl:zone.max-sem-ids
rctl:zone.max-msg-ids
rctl:zone.max-processes
rctl:zone.max-lwps
rootzpool
zpool

The following zonecfg(8) resources and properties are supported by the live zone reconfiguration for this brand:

anet (with exceptions stated below)
capped-memory:physical
device
ib-vhca
ib-vhca:port
net (with exceptions stated below)
virtual-cpu

The following zonecfg(8) resources and properties are not supported by the live zone reconfiguration for this brand:

anet:allowed-address
anet:configure-allowed-address
anet:defrouter
anet:evs
anet:vport
capped-cpu (zone.cpu-cap)
capped-memory (with an exception stated above)
cpu-shares (zone.cpu-shares)
dedicated-cpu
hostid
keysource
net:allowed-address
net:configure-allowed-address
net:defrouter
pool
rctl
scheduling-class
cpu-arch
tenant
host-compatible

Any changes made to the listed unsupported resources and properties in the persistent configuration will be ignored by the live zone reconfiguration if they are applied to the running zone.

Any attempts to modify the listed unsupported resources and properties in the live configuration will be refused.

Changes made to anet and net properties supported for the solaris-kz brand must be for the same media type.

On x86 hosts, live reconfiguration of the virtual-cpu resource is enabled for a kernel zone until the zone is suspended or migrated (warm or live). After migration or resumption, live reconfiguration of the virtual-cpu resource is disabled until the kernel zone has been rebooted.

Defaults for specific properties of solaris-kz brand zones are defined in the SYSsolaris-kz template:

Resource        Property                    Default Value
global          zonepath                    /system/zones/%{zonename}
                autoboot                    false
                ip-type                     exclusive
                auto-shutdown               shutdown
capped-memory   physical                    4G
                pagesize-policy             largest-available
virtual-cpu     ncpus                       4
net             configure-allowed-address   true
anet            mac-address                 auto
                lower-link                  auto
                link-protection             mac-nospoof
                linkmode                    cm
anet:mac        mac-address                 auto
ib-vhca         smi-enabled                 off
ib-vhca:port    pkey                        auto

Sub Commands

For the list of solaris-kz brand-specific subcommand options, see zoneadm(8).

Examples

Example 1 Boot from a particular BE
# zoneadm -z myzone boot -- -Z rpool/ROOT/solaris
Example 2 Boot from an alternate boot device
# zoneadm -z myzone halt
# zoneadm -z myzone boot -- disk2

See Also

zlogin(1), ai_manifest(5), brands(7), resource-management(7), zones(7), archiveadm(8), poolcfg(8), psrset(8), zfs(8), zoneadm(8), zonecfg(8), memory-reserve(8s)

Notes

VirtualBox can be used on the same host as kernel zones, but must be configured appropriately. See the VirtualBox documentation for more details.

Because kernel zones run in a separate Oracle Solaris kernel environment, they may crash and dump core just as a kernel in a global zone running on bare metal would. In such a case, the dump is saved in the kernel zone's storage and is found in the same place as any Oracle Solaris crash dump would be, subject to the crash dump parameters configured with dumpadm(8). A core dump of a kernel zone can also be generated from the host environment using the zoneadm savecore subcommand. Additionally, if a kernel zone crashes and attempts to dump a core image but is unable to save it in the kernel zone's storage, it will request that the host attempt to save a core image as if a zoneadm savecore subcommand had been issued. The core is saved in the location specified by coreadm(8); this succeeds only if coreadm(8) has configured a location for kernel zone core dumps and enabled them.