The virt-install program can be run as a command-line utility, with parameters specified through options, or interactively, in response to a series of prompts.
The default values for the action to be taken on a domU shutdown, reboot, or crash are set by virt-install. You currently cannot change these defaults.
This example uses virt-install with options to install a Solaris domU from the command line using an ISO image. The command-line options direct virt-install to create an 18-Gbyte root disk image file, /xvm/domu-x16.img. The --nographics option is used because this is a Solaris paravirtualized configuration. If you invoke virt-install with command-line options but do not supply all required information, the tool prompts you for the missing information.
machine:root> virt-install --nographics -n domu-x16 --paravirt \
-f /xvm/domu-x16.img -r 1011 \
-l /net/inst-server/export/xVM/x_iso/63-0419-nd.iso
Starting install...
Creating domain...
SunOS Release 5.11 Version 64-bit
Copyright 1983-2007 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
Configuring /dev
Solaris Interactive Text (Console session)
Using install cd in /dev/dsk/c0d1p0
Using RPC Bootparams for network configuration information.
Attempting to configure interface xnf0...
Skipped interface xnf0
Setting up Java. Please wait...
Beginning system identification...
Searching for configuration file(s)...
Search complete.
Discovering additional network configuration...
When the domain creation completes, sysidcfg runs to complete the system identification.
Use this procedure to set up the OpenSolaris 2008.11 or later release as a paravirtual guest. You must be running a Solaris dom0 on your system.
To start the installation of the OpenSolaris 2008.11 or later release, run the following commands:
# zfs create rpool/zvol
# zfs create -V 10G rpool/zvol/domu-220-root
# virt-install --nographics --paravirt --ram 1024 --name domu-220 \
-f /dev/zvol/dsk/rpool/zvol/domu-220-root -l /isos/osol-2008.11.iso
This procedure assumes that your server is set up to assign dynamic addresses. If you want to assign static addresses, specify the address with the mac property of the -w/--network option. See limiting bandwidth and setting a VLAN ID.
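As a sketch of the static-address case, a fixed MAC address can be passed through the -w/--network option so that the guest always receives the same address. The bridge name, MAC address, guest name, and paths below are placeholders, not values from this procedure:

```shell
# Hypothetical example: create the guest with a fixed MAC address on the
# e1000g0 bridge. The MAC address shown is a placeholder in the Xen
# vendor range (00:16:3e); substitute your own values.
virt-install --nographics --paravirt --ram 1024 --name domu-221 \
    -f /dev/zvol/dsk/rpool/zvol/domu-221-root \
    -l /isos/osol-2008.11.iso \
    -w bridge=e1000g0,mac=00:16:3e:1a:2b:3c
```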
Choose the defaults on the console for the two questions regarding the server setup.
After the OpenSolaris 2008.11 Live CD or OpenSolaris 2009.06 release has finished booting, a VNC session is available from within the guest domain. You can connect to the guest domain's VNC session as follows:
# domid=`virsh domid domu-220`
# ip=`/usr/lib/xen/bin/xenstore-read /local/domain/$domid/ipaddr/0`
# port=`/usr/lib/xen/bin/xenstore-read /local/domain/$domid/guest/vnc/port`
# /usr/lib/xen/bin/xenstore-read /local/domain/$domid/guest/vnc/passwd
DJP9tYDZ
# vncviewer $ip:$port
Enter the given password at the VNC password prompt to bring up a VNC session.
VNC sessions are not secure by themselves. However, because elevated privileges are required to read the VNC password from XenStore, the session is reasonably protected as long as you run the VNC viewer locally on dom0, or connect through SSH tunneling or another secure method.
Enable the post-install VNC viewer.
By default, the VNC session is not enabled after the installation. You can change the default configuration as follows:
# svccfg -s x11-server setprop options/xvm_vnc = "true"
# svcadm restart xvm/vnc-config
# svcadm restart gdm
The following sequence of commands installs a Red Hat Enterprise Linux guest over NFS using the text installer:
# mount -F hsfs /rhel.iso /mnt
# share -o ro /mnt
# virt-install -n pv-rhel -r 1024 -l nfs:mydom0:/mnt \
--os-type=linux --os-variant=rhel5.3 \
-f /dev/zvol/dsk/pv-rhel.zvol -p --nographics
The following command installs a Red Hat Enterprise Linux guest using the media in the dom0 optical drive (CD-ROM/DVD), using the Red Hat Enterprise Linux 5 kickstart feature to automate the installation process.
# virt-install \
--name rhat \
--ram 500 \
--file /dev/zvol/dsk/rhat.zvol \
--paravirt \
--location /dev/dsk/c2t0d0s2 \
--os-type=linux --os-variant=rhel5 \
--extra-args "ks=/export/install/rhat/ks.cfg"
Because of a standard Linux restriction that PV guests cannot be installed directly from a CD, the CD must be mounted at a location (usually on dom0) that is exported over NFS, and an NFS installation performed. It is often much easier to do an HTTP Linux installation instead.
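As a hedged sketch of the HTTP alternative, the NFS location is simply replaced with an HTTP URL in the -l option. The mirror URL, guest name, and volume path below are placeholders:

```shell
# Hypothetical example: install a PV Linux guest over HTTP instead of NFS.
# The mirror URL must point at a distribution tree that supports
# network installation; all names here are placeholders.
virt-install -n pv-rhel -r 1024 -p --nographics \
    -l http://mirror.example.com/rhel/5/os/i386 \
    --os-type=linux --os-variant=rhel5 \
    -f /dev/zvol/dsk/pv-rhel.zvol
```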
virt-install -n solarisPV --paravirt -r 1024 \
--nographics -f /export/solarisPV/root.img -s 16 \
-l /ws/xvm-gate/public/isos/72-0910/solarisdvd.iso
virt-install -n solarisHVM --hvm -r 1024 --vnc \
-f /export/solarisHVM/root.img -s 16 \
-c /ws/xvm-gate/public/isos/72-0910/solarisdvd.iso
For this version of virt-install, ISO, physical media, and Preboot Execution Environment (PXE) network installations are supported for HVM. If a physical CD is used, remember to unmount it after use.
# virt-install -n winxp --hvm -r 1024 --vnc \
-f /export/winxp/root.img -s 16 -c /windows/media.iso
For this version of virt-install, ISO, physical CD, and PXE network installations are supported for HVM. If a physical CD is used, remember to unmount it after use.
A normal file is used to store the contents of the guest domain disk image, as opposed to using a ZFS volume, for example.
virt-install --name windows1 --ram 1024 \
--cdrom /en_winxp_pro_with_sp2.iso --file /guests/windows1-disk \
--file-size 10 --vnc
zfs create -V 8G pool/solaris1-disk
virt-install --name solaris1 --ram 1024 --nographics \
--file /dev/zvol/dsk/pool/solaris1-disk \
--location nfs:install.domain.com:/export/solaris/nv75 \
--autocf nfs:install.domain.com:/export/jumpstart/solaris1
After the domain is created, sysidcfg runs and you are prompted to answer a series of questions. Your screen will look similar to this:
SunOS Release 5.11 Version 64-bit
Copyright 1983-2007 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
Hostname: my-zone
Loading smf(5) service descriptions: 114/114

Select a Language

  1. English
  2. French
  3. German
  4. Italian
  5. Japanese
  6. Korean
  7. Simplified Chinese
  8. Spanish
  9. Swedish
  10. Traditional Chinese

Please make a choice (1 - 10), or press h or ? for help:

What type of terminal are you using?
 1) ANSI Standard CRT
 2) DEC VT52
 3) DEC VT100
 4) Heathkit 19
 5) Lear Siegler ADM31
 6) PC Console
 7) Sun Command Tool
 8) Sun Workstation
 9) Televideo 910
 10) Televideo 925
 11) Wyse Model 50
 12) X Terminal Emulator (xterms)
 13) CDE Terminal Emulator (dtterm)
 14) Other
Type the number of your choice and press Return:
.
.
.
For more information on the sysidcfg file, see the sysidcfg(4) man page. For an example sysidcfg file, see Understanding the Solaris xVM Server Architecture, Part No 820-3089-102.
The main interface for command and control of both xVM control domains and guest domains is the virsh(1M) utility. Users should use virsh wherever possible to control virtualized operating systems. Some xVM operations are not yet implemented by the virsh utility. In those cases, the legacy utility xm(1M) can be used for detailed control.
The following actions can be performed:
Start Selected Guest
# virsh start sdomu
Suspend Selected Guest
# virsh suspend sdomu
You can also use the destroy subcommand to bring the system to the shutoff state, but this can result in damage to data.
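For completeness, the forced stop uses the same placeholder guest name, sdomu, as the other examples:

```shell
# Forcibly stop the guest. Unlike shutdown, this does not give the guest
# OS a chance to flush data to disk, so use it only as a last resort.
virsh destroy sdomu
```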
Resume Selected Guest
# virsh resume sdomu
Shutdown Selected Guest
# virsh shutdown sdomu
Reboot Selected Guest
# virsh reboot sdomu
Undefine Selected Guest
# virsh undefine sdomu
Undefine the configuration for an inactive domain, which is specified by either its domain name or UUID.
To delete a guest, undefine the guest and then remove any associated disk resources.
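As a sketch, assuming a guest named sdomu whose disk is backed by a ZFS volume named pool/sdomu-disk (both names are placeholders), the deletion might look like:

```shell
# Hypothetical example: remove the guest definition, then reclaim its
# storage. The guest name and ZFS volume are placeholders; substitute
# whatever disk resources the guest was created with.
virsh undefine sdomu
zfs destroy pool/sdomu-disk
```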
Connect Selected Guest to Network
# virsh attach-interface sdomu bridge e1000g0
Take Snapshot of Active Domain
# virsh save sdomu /domains/sdomusnap
The domain will be in the shut down state when the save operation completes. The resources, such as memory, allocated for the domain will be freed for use by other running domains.
To restore:
# virsh restore sdomu /domains/sdomusnap
Note that network connections present before the save operation might be severed, since TCP timeouts might have expired.
Pin vCPU
# virsh vcpupin domain vcpu cpulist
Pin domain's virtual CPUs to the host's physical CPUs. The domain parameter is the domain name, ID, or UUID. The vcpu parameter is the VCPU number. The cpulist parameter is a list of host CPU numbers, separated by commas.
Restore Selected Guest
# virsh restore sdomu
Restore a domain that is in the saved state.
Delete Selected Guest
Deleting the guest will cause its image and snapshot to be deleted from the system.
To view the state of a domain, use the virsh list command.
For example:
# virsh list
 ID  Name       State
-------------------------
 0   Domain-0   running
 2   sxc18      running
By default, only running domains are displayed. Use virsh list --inactive to display only non-running domains.
The following prerequisites apply:
Both the source machine and the target host must be on the same subnet.
The host and the target must each have the same CPU type (AMD or Intel).
Both systems must be running the same release of the xVM software.
There must be sufficient CPU and memory resources on the target to host the domain.
The target dom0 should have the same network interface as the source dom0 network interface used by the domU. For example, if the domU to be migrated has a VNIC that is bridged over the e1000g0 interface on the source dom0, then the target dom0 must also have the e1000g0 interface.
By default, xend listens only on the loopback address for requests from the localhost. The target host must be configured to accept the migration of a guest domain. The following example configures the xend SMF service on the target machine to accept guest migration from a system named host1. The caret (^) and dollar sign ($) are pattern-matching characters to ensure that the entire host name matches. The host1 name must match the name the target thinks the machine is called, which could be a host name, but could also be a fully qualified domain name (FQDN).
# svccfg -s svc:system/xvm/xend
svc:/system/xvm/xend> setprop config/xend-relocation-address = ""
svc:/system/xvm/xend> setprop config/xend-relocation-hosts-allow = "^host1\.?.*$ ^localhost$"
svc:/system/xvm/xend> end
# svcadm refresh svc:system/xvm/xend:default && \
svcadm restart svc:system/xvm/xend:default
You can test the connection by using:
host1# telnet target-host 8002
If the connection fails, check the /var/log/xen/xend.log file on the target system.
In addition to configuring the target system to accept migrations, you must also configure the domain that will be migrated so that the domain's storage is accessible from both the source and the target systems. The domain's accessible disks must reside on some form of shared storage, such as NFS files or iSCSI volumes. This document uses the NFS method available in the OpenSolaris 2009.06 release.
On the NFS server, share the directory:
# sharectl set -p nfsmapid_domain=sun.com nfs
# svcadm restart svc:/network/nfs/mapid:default
# share -F nfs -o "sec=sys,root=host1:host2,rw" /vdisks
On both the host1 source system and the host2 target system, also execute the sharectl command to set the NFS mapid domain to sun.com, and use the svcadm command to restart the xend service.
# virt-install -p --nographics -n domain -r 1024 \
-l /isos/os0906/os0906.iso \
-f /net/hostname_of_nfs_server/vdisks/testpv
The virt-install command then creates the virtual disk on the NFS server and starts the guest installation process.
This method should be available in a build after the OpenSolaris 2009.06 release.
An iSCSI-backed guest would be created with this base command:
# virt-install -n <name> -r <ram> -p --nographics -l /path/to/iso \
-m <mac address> \
--disk path=/static/<iscsi target ip address>/<lun>/<iqnnumber>,driver=phy,subdriver=iscsi
An example would be:
# virt-install -n ubuntu -r 1024 -p --nographics \
-l /net/xvm-4200m2-03/isos/ubuntu-7.04-32.iso \
--disk path=/static/172.20.26.10/0/iqn.1986-03.com.sun:02:52ac879e-788e-e0ea-bf5c-f86b2b63258a,driver=phy,subdriver=iscsi
In addition to setting up the xend relocation-hosts-allow property as described above in "Enabling Live Migration on a Target Host," also issue the following command to enable static discovery on both dom0s:
# iscsiadm modify discovery -s enable
Use the following command to migrate the guest.
host1# virsh migrate --live domain xenmigr://target-host
While the use of the virsh command is preferred, you can try the following command if the virsh command appears to fail.
host1# xm migrate -l domain target-host
You can observe the migration while it occurs by monitoring the domain status on both machines using virsh list.
Note that the domain definition remains on the system on which it was created. You can start the domain on that system's dom0 with the virsh start domain command.
You cannot add temporary access to a device for an HVM domU because you cannot dynamically add IDE CD-ROM devices. To make a CD-ROM available for use in an HVM guest, you must add the device before you boot the guest. You can then change the media by using virsh attach-disk.
Note that in an HVM domU, you must use eject in the domU to unlock a removable-media device, such as a CD device, before running the attach-disk subcommand.
The attach-disk subcommand is also used to change the media in a CD drive.
For additional explanation, see the virsh(1M) man page, including “Example 4 Changing a CD in a Solaris HVM Guest Domain.”
In PV guest domains, you can use virsh attach-disk to add a CD device and virsh detach-disk to remove it. These subcommands can be used on halted or running domUs. If you want to change the CD after attaching the device, you must use virsh detach-disk and then virsh attach-disk with the new media.
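As a sketch of that sequence, assuming a PV guest named sdomu, placeholder ISO paths, and xvdc as the target device name inside the guest:

```shell
# Hypothetical example: present an ISO to a PV guest, then swap it for
# different media. The guest name, ISO paths, and target device (xvdc)
# are all placeholders.
virsh attach-disk sdomu /isos/first.iso xvdc --driver file --mode readonly
# ...later, detach the device and attach the new media:
virsh detach-disk sdomu xvdc
virsh attach-disk sdomu /isos/second.iso xvdc --driver file --mode readonly
```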