Certain requirements and restrictions are imposed on the domain to be migrated, the source machine, and the target machine when you attempt to migrate an active domain. For more information, see Domain Migration Restrictions.
A domain “loses time” during the migration process. To mitigate this time-loss issue, synchronize the domain to be migrated with an external time source, such as a Network Time Protocol (NTP) server. When you configure a domain as an NTP client, the domain's date and time are corrected shortly after the migration completes.
To configure a domain as an Oracle Solaris 10 NTP client, see Managing Network Time Protocol (Tasks) in System Administration Guide: Network Services. To configure a domain as an Oracle Solaris 11 NTP client, see Managing Network Time Protocol (Tasks) in Introduction to Oracle Solaris 11 Network Services.
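As a minimal sketch of the Oracle Solaris 11 case (the time server hostname is a placeholder, and the exact configuration file contents depend on your site), configuring a guest as an NTP client might look like this:

```shell
# Assumed example: ntp.example.com is a placeholder time source.
guest# echo "server ntp.example.com" >> /etc/inet/ntp.conf
guest# svcadm enable svc:/network/ntp:default
guest# svcs ntp
```

See the referenced guides for the full procedure, including the recommended configuration file settings.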
Following are the requirements and restrictions on CPUs when you perform a migration:
The target machine must have sufficient free virtual CPUs to accommodate the number of virtual CPUs in use by the domain to be migrated.
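One way to verify this requirement, sketched here with a placeholder domain name, is to compare the vCPU count of the domain on the source machine against the free virtual CPUs reported on the target machine:

```shell
source# ldm list ldg1           # NCPU column shows vCPUs in use by ldg1
target# ldm list-devices vcpu   # lists the free virtual CPUs on the target
```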
Setting the cpu-arch property on the guest domain enables you to migrate the domain between systems that have different processor types. Note that the guest domain must be in a bound or inactive state to change the cpu-arch value.
The supported cpu-arch property values are as follows:
native uses CPU-specific hardware features to enable a guest domain to migrate only between platforms that have the same CPU type. native is the default value.
migration-class1 is a cross-CPU migration family for SPARC platforms starting with the SPARC T4. Because these platforms support hardware cryptography during and after such migrations, the SPARC T4 is the lower bound of the supported CPUs.
This value is not compatible with UltraSPARC T2, UltraSPARC T2 Plus, or SPARC T3 platforms, or Fujitsu M10 platforms.
sparc64-class1 is a cross-CPU migration family for SPARC64 platforms. Because the sparc64-class1 value is based on SPARC64 instructions, it includes a greater number of instructions than the generic value. Therefore, unlike the generic value, it does not reduce guest domain performance.
This value is compatible only with Fujitsu M10 servers.
generic uses the lowest common CPU hardware features that are used by all platforms to enable a guest domain to perform a CPU-type-independent migration.
The following isainfo -v commands show the instructions that are available on a system when cpu-arch=generic and when cpu-arch=migration-class1.
# isainfo -v
64-bit sparcv9 applications
        asi_blk_init vis2 vis popc
32-bit sparc applications
        asi_blk_init vis2 vis popc v8plus div32 mul32
# isainfo -v
64-bit sparcv9 applications
        crc32c cbcond pause mont mpmul sha512 sha256 sha1 md5 camellia
        des aes ima hpc vis3 fmaf asi_blk_init vis2 vis popc
32-bit sparc applications
        crc32c cbcond pause mont mpmul sha512 sha256 sha1 md5 camellia
        des aes ima hpc vis3 fmaf asi_blk_init vis2 vis popc v8plus
        div32 mul32
Using the generic value might result in reduced performance of the guest domain compared to using the native value. The possible performance degradation occurs because the guest domain uses only generic CPU features that are available on all supported CPU types instead of using the native hardware features of a particular CPU. By not using these features, the generic value enables the flexibility of migrating the domain between systems that use CPUs that support different features.
When migrating a domain between systems that are at least SPARC T4 systems, you can set cpu-arch=migration-class1 to improve guest domain performance. While this value improves performance compared to the generic value, the native value still provides the best performance for the guest domain.
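Because the cpu-arch property can be changed only while the domain is bound or inactive, a typical sequence looks like the following sketch (the domain name ldg1 is a placeholder):

```shell
primary# ldm stop-domain ldg1
primary# ldm set-domain cpu-arch=migration-class1 ldg1
primary# ldm start-domain ldg1
```

Stopping the domain leaves it in the bound state, which is sufficient for changing the property.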
Use the psrinfo -pv command when the cpu-arch property is set to native to determine the processor type, as follows:
# psrinfo -pv
The physical processor has 2 virtual processors (0 1)
  SPARC-T5 (chipid 0, clock 3600 MHz)
Note that when the cpu-arch property is set to a value other than native, the psrinfo -pv output does not show the platform type. Instead, the command shows that the sun4v-cpu CPU module is loaded.
# psrinfo -pv
The physical processor has 2 cores and 13 virtual processors (0-12)
  The core has 8 virtual processors (0-7)
  The core has 5 virtual processors (8-12)
    sun4v-cpu (chipid 0, clock 3600 MHz)
The target machine memory requirements are as follows:
Sufficient free memory to accommodate the migration of a domain
The free memory must be available in a compatible layout
Compatibility requirements differ for each SPARC platform. However, at a minimum the real address and physical address alignment relative to the largest supported page size must be preserved for each memory block in the migrated domain.
Use the pagesize command to determine the largest page size that is supported on the target machine.
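For example, running pagesize -a lists all supported page sizes; the largest value reported governs the alignment requirement. The output below is illustrative of a SPARC T-series system:

```shell
# pagesize -a
8192
65536
4194304
268435456
2147483648
```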
For a guest domain that runs at least the Oracle Solaris 11.3 OS, the migrated domain's memory blocks might be automatically split up during the migration so that the migrated domain can fit into smaller available free memory blocks. Memory blocks can only be split up on boundaries aligned with the largest page size.
Other memory layout requirements for operating systems, firmware, or platforms might prevent memory blocks from being split during a given migration. This situation could cause the migration to fail even when the total amount of free memory available is sufficient for the domain.
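Before starting a migration, you can inspect the free memory blocks and their layout on the target machine. The output below is illustrative:

```shell
target# ldm list-devices memory
MEMORY
    PA                   SIZE
    0x50000000           224G
```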
Domains that have direct access to physical devices cannot be migrated. For example, you cannot migrate I/O domains. However, virtual devices that are associated with physical devices can be migrated.
For more information, see Migration Requirements for PCIe Endpoint Devices and Migration Requirements for PCIe SR-IOV Virtual Functions.
All virtual I/O services that are used by the domain to be migrated must be available on the target machine. In other words, the following conditions must exist:
Each virtual disk back end that is used in the domain to be migrated must be defined on the target machine. This shared storage can be a SAN disk, or storage that is available by means of the NFS or iSCSI protocols. The virtual disk back end you define must have the same volume and service names as on the source machine. Paths to the back end might be different on the source and target machines, but they must refer to the same back end.
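For example (the volume name vol1, service name primary-vds0, and device paths are placeholders), the same shared back end might be defined on each machine under different paths, as long as both paths refer to the same storage:

```shell
source# ldm add-vdsdev /dev/dsk/c3t4d0s2 vol1@primary-vds0
target# ldm add-vdsdev /dev/dsk/c5t1d0s2 vol1@primary-vds0
```

Note that the volume name (vol1) and service name (primary-vds0) match on both machines, which is the requirement described above.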
Caution - A migration will succeed even if the paths to a virtual disk back end on the source and target machines do not refer to the same storage. However, the behavior of the domain on the target machine will be unpredictable, and the domain is likely to be unusable. To remedy the situation, stop the domain, correct the configuration issue, and then restart the domain. If you do not perform these steps, the domain might be left in an inconsistent state.
Each virtual network device in the domain to be migrated must have a corresponding virtual network switch on the target machine. Each virtual network switch must have the same name as the virtual network switch to which the device is attached on the source machine.
For example, if vnet0 in the domain to be migrated is attached to a virtual switch service named switch-y, a domain on the target machine must provide a virtual switch service named switch-y.
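A sketch of verifying and, if necessary, creating the matching virtual switch on the target machine (switch-y and the network device net0 are placeholders):

```shell
target# ldm list-services | grep switch-y
target# ldm add-vsw net-dev=net0 switch-y primary
```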
Also ensure that the migrated domain's network configuration is appropriate for the target machine. For example, you might want to ensure that the domain can access the correct network subnet, and that gateways, routers, or firewalls are properly configured so that the domain can reach the required remote systems from the target machine.
MAC addresses used by the domain to be migrated that are in the automatically allocated range must be available for use on the target machine.
A virtual console concentrator (vcc) service must exist on the target machine and have at least one free port. Explicit console constraints are ignored during the migration. The console for the migrated domain is created by using the migrated domain name as the console group and by using any available port on any available vcc device in the control domain. If no ports are available in the control domain, the console is created by using an available port on an available vcc device in a service domain. The migration fails if there is a conflict with the default group name.
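A sketch of confirming that a vcc service with free ports exists on the target machine, and of creating one if it is missing (the service name primary-vcc0 and port range are placeholders; the output is illustrative):

```shell
target# ldm list-services
VCC
    NAME         LDOM     PORT-RANGE
    primary-vcc0 primary  5000-5100
target# ldm add-vcc port-range=5000-5100 primary-vcc0 primary
```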
Each virtual SAN that is used by the domain to be migrated must be defined on the target machine.
You cannot perform a domain migration on an I/O domain that is configured with PCIe endpoint devices.
For information about the direct I/O feature, see Creating an I/O Domain by Assigning PCIe Endpoint Devices.
You cannot perform a domain migration on an I/O domain that is configured with PCIe SR-IOV virtual functions.
For information about the SR-IOV feature, see Creating an I/O Domain by Using PCIe SR-IOV Virtual Functions.
On platforms that have cryptographic units, you can migrate a guest domain that has bound cryptographic units if it runs an operating system that supports cryptographic unit dynamic reconfiguration (DR).
At the start of the migration, the Logical Domains Manager determines whether the domain to be migrated supports cryptographic unit DR. If supported, the Logical Domains Manager attempts to remove any cryptographic units from the domain. After the migration completes, the cryptographic units are re-added to the migrated domain.
Any active delayed reconfiguration operations on the source or target machine prevent a migration from starting. You are not permitted to initiate a delayed reconfiguration operation while a migration is in progress.
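You can check for and cancel a pending delayed reconfiguration before attempting a migration. In the illustrative ldm list output below, the d in the FLAGS column marks a domain with a delayed reconfiguration pending:

```shell
primary# ldm list
NAME     STATE   FLAGS   CONS  VCPU  MEMORY  UTIL  UPTIME
primary  active  -ndcv-  UART  8     8G      0.5%  2d
primary# ldm cancel-operation reconf primary
```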
You can perform a live migration when the power management (PM) elastic policy is in effect on either the source machine or the target machine.
While a migration is in progress on a machine, any operation that might result in the modification of the state or configuration of the domain being migrated is blocked. All operations on the domain itself, as well as operations such as bind and stop on other domains on the machine, are blocked.
Performing a domain migration requires coordination between the Logical Domains Manager and the Oracle Solaris OS that is running in the domain to be migrated. When a domain to be migrated is running in OpenBoot or in the kernel debugger (kmdb), this coordination is not possible. As a result, the migration attempt fails.
When a domain to be migrated is running in OpenBoot, you will see the following message:
primary# ldm migrate ldg1 system2
Migration is not supported while the domain ldg1 is in the 'OpenBoot Running' state
Domain Migration of LDom ldg1 failed
When a domain to be migrated is running in the kernel debugger (kmdb), you will see the following message:
primary# ldm migrate ldg1 system2
Migration is not supported while the domain ldg1 is in the 'Solaris debugging' state
Domain Migration of LDom ldg1 failed