Certain requirements and restrictions are imposed on the domain to be migrated, the source machine, and the target machine when you attempt to migrate an active domain. For more information, see Domain Migration Restrictions.
A domain “loses time” during the migration process. To mitigate this time-loss issue, synchronize the domain to be migrated with an external time source, such as a Network Time Protocol (NTP) server. When you configure a domain as an NTP client, the domain's date and time are corrected shortly after the migration completes.
To configure a domain as an Oracle Solaris 10 NTP client, see Managing Network Time Protocol (Tasks) in System Administration Guide: Network Services. To configure a domain as an Oracle Solaris 11 NTP client, see Key Tasks for Managing Time-Related Services in Introduction to Oracle Solaris 11.4 Network Services.
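As a rough sketch of the NTP client configuration (the server name and the single-server setup are assumptions; see the guides above for the full procedure), the client entry looks like this. The sketch writes to a temporary file rather than the live /etc/inet/ntp.conf:

```shell
# Hedged sketch: a minimal NTP client entry for an Oracle Solaris domain.
# ntp.example.com is a placeholder; substitute your site's NTP server.
conf=$(mktemp)
cat > "$conf" <<'EOF'
server ntp.example.com iburst
EOF

# On the domain itself, you would edit /etc/inet/ntp.conf and then run:
#   svcadm enable svc:/network/ntp:default
grep -c '^server' "$conf"
```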
Following are the requirements and restrictions on CPUs when you perform a migration:
The target machine must have sufficient free virtual CPUs to accommodate the number of virtual CPUs in use by the domain to be migrated.
Setting the cpu-arch property on the guest domain enables you to migrate the domain between systems that have different processor types. Note that the guest domain must be in a bound or inactive state to change the cpu-arch value.
The supported cpu-arch property values are as follows:
native uses CPU-specific hardware features to enable a guest domain to migrate only between platforms that share the same CPU characteristics, such as CPUs that share the same processor core. native is the default value.
migration-class1 is a cross-CPU migration family for SPARC platforms starting with the SPARC T4, SPARC M5, and SPARC S7 series servers. Because these platforms support hardware cryptography during and after such migrations, they form the lower bound of the supported CPUs.
Starting with the Oracle VM Server for SPARC 3.6 software, the migration-class1 definition no longer includes support for a 2-Gbyte page size because this page size is not available on SPARC M8 and SPARC T8 series servers.
As a result, any migration that uses migration-class1 on a source machine that runs software prior to Oracle VM Server for SPARC 3.6 is blocked if the target machine is a SPARC M8 or SPARC T8 series server that runs at least the Oracle VM Server for SPARC 3.6 software. If the target machine is not a SPARC M8 or SPARC T8 series server, the migration succeeds and the domain continues to have access to 2-Gbyte pages until any subsequent reboot. As part of this post-migration reboot, the domain inherits the new migration-class1 definition and loses access to 2-Gbyte pages.
This value is not compatible with Fujitsu SPARC M12 platforms or Fujitsu M10 platforms.
sparc64-class1 is a cross-CPU migration family for SPARC64 platforms. Because the sparc64-class1 value is based on SPARC64 instructions, it includes a greater number of instructions than the generic value. Therefore, the sparc64-class1 value does not have the performance impact that the generic value has.
This value is compatible only with Fujitsu SPARC M12 servers or Fujitsu M10 servers.
generic uses the lowest common set of CPU hardware features that all platforms support, which enables a guest domain to perform a CPU-type-independent migration.
The following isainfo -v commands show the instructions that are available on a system when cpu-arch=generic and when cpu-arch=migration-class1.
# isainfo -v
64-bit sparcv9 applications
        asi_blk_init vis2 vis popc
32-bit sparc applications
        asi_blk_init vis2 vis popc v8plus div32 mul32
# isainfo -v
64-bit sparcv9 applications
        crc32c cbcond pause mont mpmul sha512 sha256 sha1 md5 camellia des aes ima hpc vis3 fmaf asi_blk_init vis2 vis popc
32-bit sparc applications
        crc32c cbcond pause mont mpmul sha512 sha256 sha1 md5 camellia des aes ima hpc vis3 fmaf asi_blk_init vis2 vis popc v8plus div32 mul32
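To see concretely which instructions migration-class1 makes available beyond generic, you can diff the two feature lists. This sketch embeds the 64-bit lists from the isainfo output above as strings rather than querying a live system:

```shell
# Feature lists copied from the isainfo -v output above (64-bit sets)
generic='asi_blk_init vis2 vis popc'
mclass1='crc32c cbcond pause mont mpmul sha512 sha256 sha1 md5 camellia des aes ima hpc vis3 fmaf asi_blk_init vis2 vis popc'

# Collect every feature present under migration-class1 but not under generic
extra=''
for f in $mclass1; do
  case " $generic " in
    *" $f "*) ;;                 # also available under generic; skip
    *) extra="$extra$f " ;;      # exclusive to migration-class1
  esac
done
echo "added by migration-class1: $extra"
```

Note that the additions include the hardware cryptography instructions (aes, des, sha256, and so on), which is why migration-class1 guarantees hardware cryptography during and after migration.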
Using the generic value might result in reduced performance of the guest domain compared to using the native value. The possible performance degradation occurs because the guest domain uses only generic CPU features that are available on all supported CPU types instead of using the native hardware features of a particular CPU. By not using these features, the generic value enables the flexibility of migrating the domain between systems that use CPUs that support different features.
If you migrate a domain only among SPARC T4 series servers and newer platforms, such as SPARC M5 and SPARC S7 series servers, you can set cpu-arch=migration-class1 to improve the guest domain performance. Although this value performs better than the generic value, the native value still provides the best performance for the guest domain.
When the cpu-arch property is set to native, use the psrinfo -pv command to determine the processor type, as follows:
# psrinfo -pv
The physical processor has 2 virtual processors (0 1)
  SPARC-T5 (chipid 0, clock 3600 MHz)
Note that when the cpu-arch property is set to a value other than native, the psrinfo -pv output does not show the platform type. Instead, the command shows that the sun4v-cpu CPU module is loaded.
# psrinfo -pv
The physical processor has 2 cores and 13 virtual processors (0-12)
  The core has 8 virtual processors (0-7)
  The core has 5 virtual processors (8-12)
    sun4v-cpu (chipid 0, clock 3600 MHz)
The target machine memory requirements are as follows:
Sufficient free memory to accommodate the migration of a domain
The free memory must be available in a compatible layout
Compatibility requirements differ for each SPARC platform, but in all cases Oracle VM Server for SPARC must take into account memory page sizes during a migration operation. In particular, two page sizes are used when laying out the memory of a migrating domain on the target machine:
Hardware largest page size – The largest page size that is supported by the hardware platform.
Effective largest page size – The largest page size that is available for the domain to use. This page size is always less than or equal to the hardware largest page size.
The real address and physical address alignments are relative to the hardware largest supported page size and must be preserved for each memory block in the migrated domain.
For a guest domain that runs at least the Oracle Solaris 11.3 OS, the migrated domain's memory blocks might be split automatically during the migration so that the migrated domain can fit into smaller available free memory blocks. Memory blocks can only be split on boundaries that are aligned with the effective largest page size.
Other memory layout requirements for operating systems, firmware, or platforms might prevent memory blocks from being split during a given migration. This situation could cause the migration to fail even when the total amount of free memory available is sufficient for the domain.
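The splitting constraint can be illustrated with toy numbers. The following sketch (the block address and sizes are assumed values, not from any real system) lists the addresses inside an 8-Gbyte memory block at which a split is permitted when the effective largest page size is 2 Gbytes:

```shell
# Toy values: a 2-Gbyte effective largest page size and an 8-Gbyte
# memory block starting at physical address 0x400000000
page=$((2 * 1024 * 1024 * 1024))
base=$((0x400000000))
size=$((8 * 1024 * 1024 * 1024))

# Legal split points are the page-size-aligned addresses inside the block
addr=$((base + page))
while [ "$addr" -lt $((base + size)) ]; do
  printf 'legal split boundary: 0x%x\n' "$addr"
  addr=$((addr + page))
done
```

Only three interior boundaries qualify; a request to carve the block at any other address would violate the alignment requirement.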
Use the ldm list-domain -o domain command to determine the hardware largest page size that is supported by the target machine and the effective largest page size that is supported by the domain.
The following example shows the read-only effective-max-pagesize and hardware-max-pagesize property values. The effective-max-pagesize value applies to the ldg1 domain, while the hardware-max-pagesize value applies to the platform.
primary# prtdiag | head -n 1
System Configuration:  Oracle Corporation  sun4v SPARC T7-2
primary# ldm list-domain -o domain ldg1 | grep pagesize
effective-max-pagesize=2G
hardware-max-pagesize=16G
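A pre-migration check script can pick these values out of the ldm output. This sketch parses the property lines shown above, embedded here as a string instead of live command output:

```shell
# Sample lines as produced by `ldm list-domain -o domain ldg1 | grep pagesize`
out='effective-max-pagesize=2G
hardware-max-pagesize=16G'

# Split each property line on '=' and keep the value field
effective=$(printf '%s\n' "$out" | awk -F= '/^effective-max-pagesize/ {print $2}')
hardware=$(printf '%s\n' "$out" | awk -F= '/^hardware-max-pagesize/ {print $2}')
echo "domain effective: $effective, platform hardware: $hardware"
```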
Except for SR-IOV Ethernet virtual functions, domains that have direct access to physical devices cannot be migrated. However, virtual devices that are associated with physical devices can be migrated.
For information, see Migrating a Domain That Has an SR-IOV Ethernet Virtual Function Assigned.
For information about the direct I/O feature, see Creating an I/O Domain by Assigning PCIe Endpoint Devices.
For information about the SR-IOV feature, see Creating an I/O Domain by Using PCIe SR-IOV Virtual Functions.
All virtual I/O services that are used by the domain to be migrated must be available on the target machine. In other words, the following conditions must exist:
Each virtual disk back end that is used in the domain to be migrated must be defined on the target machine. This shared storage can be a SAN disk, or storage that is available by means of the NFS or iSCSI protocols. The virtual disk back end you define must have the same volume and service names as on the source machine. Paths to the back end might be different on the source and target machines, but they must refer to the same back end.
Caution - A migration will succeed even if the paths to a virtual disk back end on the source and target machines do not refer to the same storage. However, the behavior of the domain on the target machine will be unpredictable, and the domain is likely to be unusable. To remedy the situation, stop the domain, correct the configuration issue, and then restart the domain. If you do not perform these steps, the domain might be left in an inconsistent state.
Each virtual network device in the domain to be migrated must have a corresponding virtual network switch on the target machine. Each virtual network switch must have the same name as the virtual network switch to which the device is attached on the source machine.
For example, if vnet0 in the domain to be migrated is attached to a virtual switch service named switch-y, a domain on the target machine must provide a virtual switch service named switch-y.
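You can verify this requirement by script before starting a migration: collect the switch names that the domain's virtual network devices reference on the source machine and confirm that each exists on the target. This sketch uses embedded sample name lists (assumed values) rather than live ldm output:

```shell
# Switch names required by the domain (source machine) and switch names
# configured on the target machine; both lists are sample values
required='switch-y switch-z'
on_target='switch-x switch-y switch-z'

missing=''
for sw in $required; do
  case " $on_target " in
    *" $sw "*) ;;                    # present on the target machine
    *) missing="$missing$sw " ;;     # absent; migration would fail
  esac
done
[ -z "$missing" ] && echo "all required virtual switches present" \
                  || echo "missing on target: $missing"
```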
For example, you might want to ensure that the domain can access the correct network subnet. Also, you might want to ensure that gateways, routers, or firewalls are properly configured so that the domain can reach the required remote systems from the target machine.
MAC addresses used by the domain to be migrated that are in the automatically allocated range must be available for use on the target machine.
A virtual console concentrator (vcc) service must exist on the target machine and have at least one free port. Starting with Oracle VM Server for SPARC 3.5, explicit console constraints are preserved during the migration. Otherwise, the console for the migrated domain is created by using the migrated domain name as the console group and by using any available port on any available vcc device in the control domain. If no ports are available in the control domain, the console is created by using an available port on an available vcc device in a service domain. The migration fails if there is a conflict with the default group name.
Each virtual SAN that is used by the domain to be migrated must be defined on the target machine.
Any active delayed reconfiguration operations on the source or target machine prevent a migration from starting. You are not permitted to initiate a delayed reconfiguration operation while a migration is in progress.
You can perform a live migration when the power management (PM) elastic policy is in effect on either the source machine or the target machine.
While a migration is in progress on a machine, any operation that might result in the modification of the state or configuration of the domain being migrated is blocked. All operations on the domain itself, as well as operations such as bind and stop on other domains on the machine, are blocked.
Performing a domain migration requires coordination between the Logical Domains Manager and the Oracle Solaris OS that is running in the domain to be migrated. When a domain to be migrated is running in OpenBoot or in the kernel debugger (kmdb), this coordination is not possible. As a result, the migration attempt fails.
When a domain to be migrated is running in OpenBoot, you will see the following message:
primary# ldm migrate-domain ldg1 system2
Migration is not supported while the domain ldg1 is in the 'OpenBoot Running' state
Domain Migration of LDom ldg1 failed
When a domain to be migrated is running in the kernel debugger (kmdb), you will see the following message:
primary# ldm migrate-domain ldg1 system2
Migration is not supported while the domain ldg1 is in the 'Solaris debugging' state
Domain Migration of LDom ldg1 failed
Caution - Do not assign named resources unless you are an expert administrator.
You can migrate a domain that is configured to use named resources by specifying the cores and memory ranges on the target machine to be used by the migrating domain. To migrate such a domain, ensure that the domain is in the native migration class and that it has the whole-core constraint applied.
The ldm migrate-domain command uses the cidmap and mblockmap properties to specify physical resource mappings between the source machine and the target machine.
ldm migrate-domain -c domain-name cidmap=core-ID:core-ID[,core-ID:core-ID,...] \
mblockmap=phys-addr:phys-addr[,phys-addr:phys-addr,...] target-machine
In the following example, the ldm migrate-domain command migrates the ldg1 domain from the system1 machine to the system2 machine. The ldg1 domain has named cores 8 and 9 and a named memory block at physical address 0x400000000. The domain is migrated to the system2 machine and will use cores 16 and 17 and a memory block at physical address 0xc00000000:
system1:primary# ldm migrate-domain -c ldg1 cidmap=8:16,9:17 \
mblockmap=0x400000000:0xc00000000 system2
Ensure that the cidmap property specifies free, non-duplicate cores on the target machine and that the mblockmap property specifies free, non-overlapping physical address ranges on the target machine. The physical address ranges must meet the migration requirements for target machine memory. See Migration Requirements for Memory.
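A quick sanity check of the cidmap value before running the migration can catch duplicate target cores. This sketch parses the property syntax shown above (the mapping is the sample value from the example):

```shell
# Example mapping: source cores 8 and 9 to target cores 16 and 17
cidmap='8:16,9:17'

# Extract the target core IDs (the right-hand side of each pair)
# and look for duplicates among them
targets=$(printf '%s\n' "$cidmap" | tr ',' '\n' | cut -d: -f2)
dups=$(printf '%s\n' "$targets" | sort | uniq -d)
[ -z "$dups" ] && echo "cidmap target cores are unique" \
               || echo "duplicate target cores: $dups"
```

The same pattern applies to the mblockmap value, although checking for overlapping physical address ranges requires comparing range bounds rather than exact duplicates.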
If you omit the cidmap and mblockmap properties from the ldm migrate-domain command, each core ID on the source machine is mapped to the same core ID on the target machine and each physical address range on the source machine is mapped to the same physical address range on the target machine. Thus, the following command migrates the ldg1 domain to the system2 machine and the migrated domain uses cores 8 and 9 and a memory block at physical address 0x400000000:
system1:primary# ldm migrate-domain -c ldg1 system2
On a SPARC server, a running kernel zone within a guest domain blocks live migration of the domain with the following error message:
Guest suspension failed because Kernel Zones are active. Stop Kernel Zones and retry.
Stop or suspend the running kernel zone before you migrate the guest domain:
Stop the kernel zone:
# zoneadm -z zonename shutdown
Or suspend the kernel zone:
# zoneadm -z zonename suspend
Alternatively, you can perform a live migration of the kernel zone to another system before migrating the guest domain.
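To find the kernel zones that must be stopped or suspended first, filter the zoneadm listing for running solaris-kz zones. This sketch parses a saved sample of `zoneadm list -cv` output (the zone names are assumed values):

```shell
# Sample `zoneadm list -cv` output captured as a string
zones='  ID NAME     STATUS     PATH  BRAND       IP
   0 global   running    /     solaris     shared
   3 kz1      running    -     solaris-kz  excl
   - kz2      installed  -     solaris-kz  excl'

# Only *running* kernel zones block live migration of the guest domain;
# match on the BRAND and STATUS columns
running_kz=$(printf '%s\n' "$zones" | awk '$5 == "solaris-kz" && $3 == "running" {print $2}')
echo "kernel zones to stop or suspend: $running_kz"
```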