Use this procedure to ensure that the target system is configured to provide CPU, memory, and storage resources for the incoming source environment.
CPU Resources – During the shift, you can assign whatever amount of CPU resources is appropriate for the guest domain's workload. However, before the lift and shift, you must ensure that those CPU resources are available, as described in this procedure.
If you are uncertain about the CPU utilization of the guest domain's workload on the target system, then the target system should provide at minimum the same available CPU and memory resources as on the source system. This conservative approach helps maintain the same or better performance level of the workload after the migration. On the other hand, if the CPU utilization is estimated to be significantly lower for the guest domain on the target system, for example, if the target system has faster CPUs, then the target system can provide fewer CPU resources to the guest domain. In some cases, using fewer CPU cores reduces software licensing costs.
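If you need to measure the workload's CPU utilization before deciding, you can sample CPU statistics on the source system ahead of the migration. A minimal sketch (the source system prompts, the guest domain name ldg1, and the sampling intervals are assumptions for illustration):

root@SourceControlDom# ldm ls ldg1
root@SourceGuestDom# vmstat 60 10

The UTIL and NORM columns of ldm ls report the domain's recent CPU utilization as seen from the control domain, and vmstat run inside the guest samples the CPU idle (id) column over a ten-minute window.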
Memory Resources – By default, the target guest domain is allocated the same amount of memory that the guest domain had on the source system. Ensure that at least that much memory is available on the target system, as described in this procedure.
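To confirm the physical memory that the target control domain currently sees, you can use prtconf (a sketch; once resources have been reassigned, this reports only the control domain's own memory, so also check for unallocated memory as described later in this procedure):

root@TargetControlDom# prtconf | grep -i "memory size"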
Storage Resources – The number and sizes of the virtual disks available on the target system must match the source system's physical disks, with one exception: the disks that store the SVM metadb data can differ. This procedure describes how to configure the target storage.
Software Version Requirements – The target control domain must be running Oracle Solaris 11.3 SRU 35 (or later) and Oracle VM Server for SPARC (Logical Domains Manager) 3.5.0.3.3 (or later). The first step in this procedure describes how to identify these software versions.
If your target system does not have the minimum required software versions, update the target system before continuing. Refer to the applicable Oracle Solaris documentation for updating instructions.
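As a sketch of what such an update might look like with the pkg command (the entire package FMRI matches the SRU 35 versions shown in the next example, and the boot environment name s11.3-sru35 is an arbitrary choice; confirm the exact FMRI for your repository):

root@TargetControlDom# pkg update --be-name s11.3-sru35 entire@0.5.11-0.175.3.35.0.6.0
root@TargetControlDom# shutdown -y -g0 -i 6

After the reboot, verify the versions as shown below.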
root@TargetControlDom# uname -a
SunOS TargetControlDom 5.11 11.3 sun4v sparc sun4v
root@TargetControlDom# pkg list entire ldomsmanager
NAME (PUBLISHER)                             VERSION                    IFO
entire                                       0.5.11-0.175.3.35.0.6.0    i--
system/ldoms/ldomsmanager                    3.5.0.3.3-0.175.3.35.0.4.0 i--
root@TargetControlDom# ldm -V

Logical Domains Manager (v 3.5.0.3.3)
        Hypervisor control protocol v 1.12
        Using Hypervisor MD v 1.4

System PROM:
        Hostconfig      v. 1.8.3.a      @(#)Hostconfig 1.8.3.a 2016/09/16 14:15
        Hypervisor      v. 1.17.3.a     @(#)Hypervisor 1.17.3.a 2016/09/16 13:38
        OpenBoot        v. 4.40.3       @(#)OpenBoot 4.40.3 2016/08/17 12:17
In this example, the LUNs on the target system that are intended for the guest domain are provisioned with the exact same capacity as in the source system (see Review the Source System Configuration).
This table lists the disk configuration for this example.

[Table: disk configuration for this example]
In this example, disk names are displayed as c0t5000CCA08041FCF4d0, c0t600144F09F2C0BFD00005BE4A9A90005d0, and so on.
root@TargetControlDom# echo | format
AVAILABLE DISK SELECTIONS:
       0. c0t5000CCA08041FCF4d0 <HGST-H101812SFSUN1.2T-A990-1.09TB>
          /scsi_vhci/disk@g5000cca08041fcf4
          /dev/chassis/SYS/HDD0/disk
       1. c0t5000CCA080409B54d0 <HGST-H101812SFSUN1.2T-A990-1.09TB>
          /scsi_vhci/disk@g5000cca080409b54
          /dev/chassis/SYS/HDD1/disk
       2. c0t5000CCA080409D24d0 <HGST-H101812SFSUN1.2T-A990-1.09TB>
          /scsi_vhci/disk@g5000cca080409d24
          /dev/chassis/SYS/HDD2/disk
       3. c0t5000CCA080409F14d0 <HGST-H101812SFSUN1.2T-A990-1.09TB>
          /scsi_vhci/disk@g5000cca080409f14
          /dev/chassis/SYS/HDD3/disk
       4. c1t0d0 <MICRON-eUSB DISK-1112 cyl 246 alt 0 hd 255 sec 63>
          /pci@300/pci@1/pci@0/pci@2/usb@0/storage@1/disk@0,0
          /dev/chassis/SYS/MB/EUSB_DISK/disk
       5. c0t600144F09F2C0BFD00005BE4A9A90005d0 <SUN-ZFS Storage 7355-1.0 cyl 9749 alt 2 hd 254 sec 254>
          /scsi_vhci/ssd@g600144f09f2c0bfd00005be4a9a90005
       6. c0t600144F09F2C0BFD00005BE4A90F0004d0 <SUN-ZFS Storage 7355-1.0 cyl 19501 alt 2 hd 254 sec 254>
          /scsi_vhci/ssd@g600144f09f2c0bfd00005be4a90f0004
       7. c0t600144F09F2C0BFD00005BE4A8500003d0 <SUN-ZFS Storage 7355-1.0 cyl 19501 alt 2 hd 254 sec 254>
          /scsi_vhci/ssd@g600144f09f2c0bfd00005be4a8500003
       8. c0t600144F09F2C0BFD00005BE4BF670006d0 <SUN-ZFS Storage 7355-1.0 cyl 9749 alt 2 hd 254 sec 254>
          /scsi_vhci/ssd@g600144f09f2c0bfd00005be4bf670006
       9. c0t600144F09F2C0BFD00005BE4BFC90007d0 <SUN-ZFS Storage 7355-1.0 cyl 6499 alt 2 hd 254 sec 254>
          /scsi_vhci/ssd@g600144f09f2c0bfd00005be4bfc90007
      10. c0t600144F09F2C0BFD00005BE4BFE60008d0 <SUN-ZFS Storage 7355-1.0 cyl 6499 alt 2 hd 254 sec 254>
          /scsi_vhci/ssd@g600144f09f2c0bfd00005be4bfe60008
      11. c0t600144F09F2C0BFD00005BE4C3EC000Bd0 <SUN-ZFS Storage 7355-1.0 cyl 8190 alt 2 hd 8 sec 32>
          /scsi_vhci/ssd@g600144f09f2c0bfd00005be4c3ec000b
      12. c0t600144F09F2C0BFD00005BE4C4BB000Dd0 <SUN-ZFS Storage 7355-1.0 cyl 4873 alt 2 hd 254 sec 254>
          /scsi_vhci/ssd@g600144f09f2c0bfd00005be4c4bb000d
      13. c0t600144F09F2C0BFD00005BE4C4E4000Ed0 <SUN-ZFS Storage 7355-1.0 cyl 4873 alt 2 hd 254 sec 254>
          /scsi_vhci/ssd@g600144f09f2c0bfd00005be4c4e4000e
      14. c0t600144F09F2C0BFD00005BE4C06B000Ad0 <SUN-ZFS Storage 7355-1.0 cyl 8124 alt 2 hd 254 sec 254>
          /scsi_vhci/ssd@g600144f09f2c0bfd00005be4c06b000a
      15. c0t600144F09F2C0BFD00005BE4C0410009d0 <SUN-ZFS Storage 7355-1.0 cyl 8124 alt 2 hd 254 sec 254>
          /scsi_vhci/ssd@g600144f09f2c0bfd00005be4c0410009
      16. c0t600144F09F2C0BFD00005BE4C424000Cd0 <SUN-ZFS Storage 7355-1.0 cyl 8190 alt 2 hd 8 sec 32>
          /scsi_vhci/ssd@g600144f09f2c0bfd00005be4c424000c
Specify disk (enter its number): Specify disk (enter its number):
The disk topology and capacities on the target system must match those of the source system.
You can use the iostat -En command with each disk name that was provided in the previous step. For example:
iostat -En disk_name
Repeat the iostat command for each disk.
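Rather than running the command once per disk by hand, you can loop over the disk names. A minimal sketch (the two disk names are taken from the format output above; extend the list to cover all of the guest domain's disks):

root@TargetControlDom# for d in c0t600144F09F2C0BFD00005BE4A9A90005d0 \
    c0t600144F09F2C0BFD00005BE4A90F0004d0; do
    echo "== $d =="
    iostat -En $d | grep -i size
done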
Note that the sizes shown represent the raw whole-disk capacity, including a reserved area; the actual usable capacity is less.
This example shows that the target system is configured to provide the exact same virtual disks and capacities that the guest domain had on the source system.
root@TargetControlDom# iostat -En c0t600144F09F2C0BFD00005BE4C424000Cd0 | grep -i size
Size: 1.07GB <1073741824 bytes>
Ensure that the ovmtutils utilities are in the command path on the target control domain. For example:
root@TargetControlDom# export PATH=$PATH:/opt/ovmtutils/bin
If your system has multiple domains, add the vCPU and memory resources for all domains to determine the total allocated resources.
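One way to total the allocations without adding them by hand is to parse the machine-readable output of ldm ls. A sketch, assuming the parseable (-p) format includes ncpu= and mem= fields and that mem= is reported in bytes:

root@TargetControlDom# ldm ls -p | nawk -F'|' '/^DOMAIN/ {
    for (i = 1; i <= NF; i++) {
        split($i, a, "=")
        if (a[1] == "ncpu") vcpu += a[2]
        if (a[1] == "mem")  mem  += a[2]
    }
} END { printf("allocated: %d vcpus, %.0f GB\n", vcpu, mem / (1024 ^ 3)) }'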
In this example, the target system is set to factory defaults, and all of the resources are currently assigned to the control domain. To make resources available for the guest domain, some resources must be removed from the control domain.
root@TargetControlDom# ldm ls
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  NORM  UPTIME
primary          active     -n-c--  UART    128   260352M  0.3%  0.3%  1d
Note that the ldm ls command only displays resources that are allocated to domains. If there are unallocated resources, they are not displayed. If you need to identify unallocated resources, run these commands.
List the number of unallocated cores:
# ldm list-devices -p core | grep cid | wc -l
List the unallocated memory (add the values displayed in the SIZE column):
# ldm list-devices memory
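To total the unallocated memory without adding the SIZE values by hand, you can parse the parseable output instead. A sketch, assuming ldm list-devices -p memory reports each free memory block with a size= field in bytes:

# ldm list-devices -p memory | nawk -F'|' '{
    for (i = 1; i <= NF; i++) {
        split($i, a, "=")
        if (a[1] == "size") sum += a[2]
    }
} END { printf("unallocated: %.1f GB\n", sum / (1024 ^ 3)) }'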
In this example, the target control domain's resources are reduced to free up resources for the incoming guest domain, while maintaining an optimal amount of resources to continue providing services.
root@TargetControlDom# ldm set-core 2 primary
root@TargetControlDom# ldm set-mem 32G primary
This step saves the configuration to the service processor so that it is used after a power cycle. If the configuration is not saved, it is lost at the next power cycle.
root@TargetControlDom# ldm add-spconfig initial
In this example, the output shows that the newly created initial configuration is now active.
root@TargetControlDom# ldm ls-spconfig
factory-default
initial [current]
root@TargetControlDom# shutdown -i 6
Ensure that the new configuration is the current configuration.
root@TargetControlDom# ldm ls-spconfig
factory-default
initial [current]
In this example, the output shows that the control domain has reduced resources. Unallocated resources are now available to be allocated to the incoming guest domain.
root@TargetControlDom# ldm ls
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  NORM  UPTIME
primary          active     -n-cv-  UART    16    32G      0.7%  0.7%  1m
On the source system, network redundancy was configured with IPMP. On the target system, link aggregation is used instead, which follows best practices for deploying logical domains that get their networking from a single service domain.
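If the aggregation does not already exist on the target control domain, it can be created with dladm. A sketch that would produce the configuration verified below (the aggregation name aggr0 and the port names net0 and net2 are from this example; the underlying switch ports must support LACP):

root@TargetControlDom# dladm create-aggr -m trunk -P L4 -L active -T short -l net0 -l net2 aggr0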
In this example, network redundancy is provided by aggr0 in the control domain.
root@TargetControlDom# dladm show-aggr
LINK              MODE   POLICY  ADDRPOLICY   LACPACTIVITY  LACPTIMER
aggr0             trunk  L4      auto         active        short
root@TargetControlDom# dladm show-aggr -L
LINK              PORT   AGGREGATABLE  SYNC  COLL  DIST  DEFAULTED  EXPIRED
aggr0             net0   yes           yes   yes   yes   no         no
--                net2   yes           yes   yes   yes   no         no
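With the aggregation in place, it can serve as the backing device for the virtual switch that will provide networking to the incoming guest domain. A sketch (the virtual switch name primary-vsw0 is an assumption; if a virtual switch already exists, set its net-dev to the aggregation instead):

root@TargetControlDom# ldm add-vsw net-dev=aggr0 primary-vsw0 primary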