This section provides the following procedures:
How to Use ZFS Archives to Convert and to Deploy a Non-Global Zone on a Different Host
How to Migrate Zones from One ZFS Pool to Another ZFS Pool in a Different System
If a zone's storage is configured by using a rootzpool resource, has no dataset resources, and optionally contains one or more rpool resources, migration is quick and simple. For this procedure, both the source system and target system (in the examples shown in this procedure, host1 and host2) must have access to the storage referenced in the rootzpool and zpool resources.
For more information, see Assigning Limited Rights to Zone Administrators.
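For reference, a zone that is configured this way has a rootzpool resource that points to the shared storage. The following is a hypothetical sketch of what such a resource might look like; the iSCSI storage URI shown is an example only, so substitute a URI that both host1 and host2 can access.
host1$ zonecfg -z my-zone info rootzpool
rootzpool:
	storage: iscsi://storage-host/luname.naa.600144f0dbf8af190000533ea6c50001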
source$ zonecfg -z zonename export -f /net/hostname/zonename.cfg
target$ zonecfg -z zonename -f /net/hostname/zonename.cfg
For example:
host1$ zonecfg -z my-zone export -f /net/my-host/my-zone.cfg
host2$ zonecfg -z my-zone -f /net/my-host/my-zone.cfg
source$ zoneadm -z zonename shutdown
For example:
host1$ zoneadm -z my-zone shutdown
source$ zoneadm -z zonename detach
For example:
host1$ zoneadm -z my-zone detach
The –u and –U options might be needed.
target$ zoneadm -z zonename attach
For example:
host2$ zoneadm -z my-zone attach
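If the software on the target system is newer than the software in the zone, the attach might require updates. A minimal sketch, assuming the same zone name as above: the –u option applies the minimum updates needed for the attach to succeed, while –U updates all packages in the zone to match the target system.
host2$ zoneadm -z my-zone attach -u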
target$ zoneadm -z zonename boot
For example:
host2$ zoneadm -z my-zone boot
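As an optional verification that is not part of the documented procedure, you can check that the migrated zone booted and that none of its services are in a degraded or maintenance state:
host2$ zlogin my-zone svcs -x
No output from svcs -x means that no problems were reported.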
To create an archive, you must be assigned the Unified Archive Administration rights profile. The root role has all of these rights.
For more information, see Assigning Limited Rights to Zone Administrators.
Use this procedure on the source system to create a recovery archive of the zone to be migrated.
source$ archiveadm create -r -z zonename archive-name
For example:
host1$ archiveadm create -r -z my-zone /net/server/my-zone-archive.uar
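Optionally, you can inspect the archive before deploying it. A minimal sketch, assuming the archive path used above; the –v option to archiveadm info prints detailed information about the archived zone:
host1$ archiveadm info -v /net/server/my-zone-archive.uar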
source$ zonecfg -z zonename set autoboot=false
target$ zonecfg -z zonename create -a /net/server/zonename.uar
For example:
host2$ zonecfg -z my-zone create -a /net/server/my-zone-archive.uar
target$ zoneadm -z zonename install -a archive-name
target$ zoneadm -z zonename boot
See Also
For additional information about creating and deploying Unified Archives, refer to Chapter 2, Working With Unified Archives in Using Unified Archives for System Recovery and Cloning in Oracle Solaris 11.3.
This example describes how to create an archive of a non-global zone using the zfs command. The archive is then attached and deployed to another system.
This example assumes that the administrators on the source and target systems are able to access a shared NFS server for temporary file storage. If shared temporary space is not available, other means, such as the scp secure copy program, can be used to copy the files between the source and target systems, as shown in the sketch that follows. The scp program requests passwords or passphrases if they are needed for authentication.
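For example, a minimal scp sketch, assuming the files were staged in a hypothetical local directory /var/tmp/my-zone on the source system and that admin is a suitably privileged account on the target system:
source# scp /var/tmp/my-zone/my-zone.zonecfg admin@target:/var/tmp/my-zone/
source# scp /var/tmp/my-zone/my-zone.zfs.gz admin@target:/var/tmp/my-zone/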
For more information, see Using Your Assigned Administrative Rights in Securing Users and Processes in Oracle Solaris 11.3.
source# zoneadm -z my-zone shutdown
source# zoneadm -z my-zone detach
The detached zone is now in the configured state. The zone will not automatically boot when the global zone next boots.
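You can confirm the zone state with the zoneadm list command. A sketch of the kind of output to expect on the source system (columns abbreviated, and paths might differ on your system):
source# zoneadm list -cv
  ID NAME      STATUS       PATH              BRAND     IP
   0 global    running      /                 solaris   shared
   - my-zone   configured   /zones/my-zone    solaris   excl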
source# mkdir /net/server/zonearchives/my-zone
source# zonecfg -z my-zone export > /net/server/zonearchives/my-zone/my-zone.zonecfg
source# zfs list -H -o name /zones/my-zone
rpool/zones/my-zone
source# zfs snapshot -r rpool/zones/my-zone@v2v
source# zfs send -rc rpool/zones/my-zone@v2v | gzip > /net/server/zonearchives/my-zone/my-zone.zfs.gz
Use of compression is optional, but it is generally faster because less I/O is performed while writing and subsequently reading the archive. For more information, see Managing ZFS File Systems in Oracle Solaris 11.3.
target# zonecfg -z my-zone -f /net/server/zonearchives/my-zone/my-zone.zonecfg
You will see the following system message:
my-zone: No such zone configured
Use 'create' to begin configuring a new zone.
target# zonecfg -z my-zone info
zonename: my-zone
zonepath: /zones/my-zone
autoboot: false
pool:
net:
	address: 192.0.2.0
	physical: net0
For example, the physical network device might be different on the target system, or other devices that are part of the configuration might have different names on the target system.
target# zonecfg -z my-zone
zonecfg:my-zone> select net physical=net0
zonecfg:my-zone:net> set physical=net100
zonecfg:my-zone:net> end
zonecfg:my-zone> commit
zonecfg:my-zone> exit
Use of the install subcommand is recommended.
target# zoneadm -z my-zone install -p -a /net/server/zonearchives/my-zone/my-zone.zfs.gz
In this release, you can also attach the zone, performing the minimum updates required to allow the attach to succeed. If updates are allowed, catalogs from publishers are refreshed during a zoneadm attach.
target# zoneadm -z my-zone attach -u -a /net/server/zonearchives/my-zone/my-zone.zfs.gz
target# zoneadm -z my-zone install -U -p -a /net/server/zonearchives/my-zone/my-zone.zfs.gz
In this release, you can also attach the zone, updating all packages in the zone to the latest version that is compatible with the global zone.
target# zoneadm -z my-zone install -U -a /net/server/zonearchives/my-zone/my-zone.zfs.gz
target# zoneadm -z my-zone attach -a /net/server/zonearchives/my-zone/my-zone.zfs.gz
Troubleshooting
If a storage object contains any preexisting partitions, zpools, or UFS file systems, the install fails and an error message is displayed. To continue the installation and overwrite any preexisting data, use the –x force-zpool-create option to zoneadm install.
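A minimal sketch of such a forced installation, assuming the same archive path as in the preceding example; see the zoneadm(1M) man page for the exact –x argument forms that your release supports:
target# zoneadm -z my-zone install -x force-zpool-create -a /net/server/zonearchives/my-zone/my-zone.zfs.gz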
The following procedure shows how to migrate all the zones from one ZFS pool to another ZFS pool in a different system.
In this procedure, the zones in the source system SysA will be moved to a pool in the target system SysB. In SysA, the zones are in the rpool/zones dataset, which is mounted at /zones. In SysB, the target dataset is newpool/zones, so the new zone path for the migrated zones is /newpool/zones.
The procedure also assumes that the target pool already exists.
For more information about creating ZFS pools, see Chapter 3, Creating and Destroying Oracle Solaris ZFS Storage Pools in Managing ZFS File Systems in Oracle Solaris 11.3.
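For example, a minimal sketch of creating the target pool on SysB, where c1t1d0 is a hypothetical disk device. The newpool/zones dataset does not have to be created in advance, because the zfs receive -d command later in this procedure creates it:
sysB$ zpool create newpool c1t1d0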
Before You Begin
Before you begin, you must ensure the following:
All the compatibility requirements for performing a zones migration are met.
Secure Shell has been configured on both systems to enable key-based authentication and root login. For more information about Secure Shell authentication, see How to Generate a Public/Private Key Pair for Use With Secure Shell in Managing Secure Shell Access in Oracle Solaris 11.3. A minimal key-setup sketch is shown after this list.
You must also be assigned the ZFS File System Management and ZFS Storage Management rights profiles. The root role has all of these rights.
For more information, see Assigning Limited Rights to Zone Administrators.
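The following is a minimal Secure Shell key-setup sketch, as referenced in the Secure Shell requirement above. It assumes that the same administrative account exists on both systems and uses the default OpenSSH key file names; adjust the key type and policy to your site's requirements.
sysA$ ssh-keygen -t rsa
sysA$ cat ~/.ssh/id_rsa.pub | ssh sysB 'cat >> ~/.ssh/authorized_keys'
sysA$ ssh sysB hostname
The final command verifies that the login now succeeds without prompting for a password.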
Perform this step for all the zones that are using rpool/zones.
sysA$ zonename=zone1
sysA$ zoneadm -z $zonename shutdown
sysA$ zoneadm -z $zonename detach
sysA$ zonecfg -z $zonename export -f /zones/$zonename.cfg
sysA$ zfs snapshot -r rpool/zones@send-to-sysB
sysA$ zfs send -R rpool/zones@send-to-sysB | ssh sysB zfs receive -d newpool
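To confirm that the datasets were received, you can list them on the target system. A sketch, assuming the dataset names used in this procedure:
sysB$ zfs list -r newpool/zones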
sysB$ zonename=zone1
sysB$ recvmountpoint=/newpool/zones
sysB$ zonecfg -z $zonename -f /newpool/zones/$zonename.cfg
sysB$ zonecfg -z $zonename 'set zonepath=/newpool/zones/%{zonename}'
sysB$ zoneadm -z $zonename attach -u
sysB$ zoneadm -z $zonename boot
where %{zonename} is a token as described in the Tokens section of the zonecfg(1M) man page.