Guidelines for Using Solaris Live Upgrade With Non-Global Zones (Planning)
Planning for the use of non-global zones includes understanding the limitations described below.
Table 8–1 Limitations When Upgrading With Non-Global Zones

Problem: Consider these issues when using Solaris Live Upgrade on a system with zones installed. It is critical to avoid zone state transitions during lucreate and lumount operations.

Description:
- When you use the lucreate command to create an inactive boot environment, if a given non-global zone is not running, the zone cannot be booted until the lucreate operation has completed.
- When you use the lucreate command to create an inactive boot environment, if a given non-global zone is running, the zone should not be halted or rebooted until the lucreate operation has completed.
- When an inactive boot environment is mounted with the lumount command, you cannot boot or reboot non-global zones, although zones that were running before the lumount operation can continue to run.
- Because a non-global zone can be controlled by a non-global zone administrator as well as by the global zone administrator, to prevent any interaction, halt all zones during lucreate or lumount operations.
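As a precaution, the global zone administrator can verify zone states and quiesce running zones before starting. The following sketch uses the standard zoneadm command; the zone name myzone is a placeholder, and whether halting is appropriate depends on the workloads running in the zone.

```shell
# List all configured zones and their current states (run in the global zone).
zoneadm list -cv

# Optionally halt a running zone so its state cannot change while
# lucreate or lumount is in progress ("myzone" is a placeholder name).
zoneadm -z myzone halt

# ... perform the lucreate or lumount operation here ...

# Boot the zone again once the operation has completed.
zoneadm -z myzone boot
```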
Problem: Problems can occur when the global zone administrator does not notify the non-global zone administrator of an upgrade with Solaris Live Upgrade.

Description:
When Solaris Live Upgrade operations are underway, non-global zone administrator involvement is critical. The upgrade affects these administrators' work, because they must address the changes that result from the upgrade. Zone administrators should ensure that any local packages are stable throughout the sequence, handle any post-upgrade tasks such as configuration file adjustments, and generally schedule around the system outage.

For example, if a non-global zone administrator adds a package while the global zone administrator is copying the file systems with the lucreate command, the new package is not copied with the file systems, and the non-global zone administrator is unaware of the problem.
Creating a Boot Environment When a Non-Global Zone Is on a Separate File System
Creating a new boot environment from the currently running boot environment remains the same as in previous releases, with one exception: you can specify a destination disk slice for a shared file system within a non-global zone. This exception occurs under the following conditions:
- On the current boot environment, the zonecfg add fs command was used to create a separate file system for a non-global zone.
- This separate file system resides on a shared file system, such as /zone/root/export.
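As an illustration, such a separate file system could have been created with a zonecfg session like the following. The zone name zone1, the mount point, and the device names are hypothetical examples only.

```shell
# Add a separate UFS file system to the non-global zone "zone1"
# (zone name, mount point, and devices are examples, not requirements).
zonecfg -z zone1 <<'EOF'
add fs
set dir=/export/data
set special=/dev/dsk/c0t0d0s7
set raw=/dev/rdsk/c0t0d0s7
set type=ufs
end
exit
EOF
```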
To prevent this separate file system from being shared in the new boot environment, the lucreate command enables you to specify a destination slice for a non-global zone's separate file system. The argument to the -m option has a new optional field, zonename. This new field places the non-global zone's separate file system on a separate slice in the new boot environment. For more information about setting up a non-global zone with a separate file system, see zonecfg(1M).
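A minimal sketch of such an invocation follows; the boot environment name newBE, the device names, and the zone name zone1 are placeholders, and the exact slices depend on your disk layout.

```shell
# Create a new boot environment, placing zone1's separate /export/data
# file system on its own slice by appending the zonename field to -m.
# (BE name, slices, and zone name are illustrative placeholders.)
lucreate -n newBE \
    -m /:/dev/dsk/c0t1d0s0:ufs \
    -m /export/data:/dev/dsk/c0t1d0s7:ufs:zone1
```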
Note – By default, any file system other than the critical file systems (the root (/), /usr, and /opt file systems) is shared between the current and new boot environments. Updating shared files in the active boot environment also updates data in the inactive boot environment. For example, the /export file system is a shared file system. If you use the -m option with the zonename field, the non-global zone's file system is copied to a separate slice and its data is not shared. This option prevents non-global zone file systems that were created with the zonecfg add fs command from being shared between the boot environments.