You can create, monitor, manage, and configure guests within a given OpenSolaris™ xVM hypervisor instance.
For the latest information on using the virt-install command-line utility to install guest domains, see Using virt-install to Install a Guest. This document provides xVM 3.3 examples.
To create a guest by using the virt-install utility, you must specify an installation source, disk storage, networking, and other parameters. After the guest OS is installed, it can be managed through the virsh utility, as described in Managing Guests.
The number of guests that can be created is determined primarily by the amount of memory and the disk space available.
Size your domain as you would size a physical machine running the same workload. The virtual disk requirement depends on the guest operating system and the software that you install.
Types of virt-install installations that can be performed include the following:
Interactive (not available in xVM 3.3)
Command line, with options supplied
Netinstall
ISO image
Solaris JumpStart
After you configure the installation server, you can run the virt-install command, described in the virt-install(1M) man page, from dom0. Use the -d option to add_install_client to specify that the client use DHCP. If the -d option is not specified, the client uses bootparams. For xVM paravirtualized guests, both approaches work. Use your site-specific tool to set up the appropriate DHCP parameters for the client.
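For example, a DHCP-booting client might be registered on the installation server as follows. This is only a sketch: the image path, MAC address, and client name are illustrative, and the exact add_install_client syntax is described in its man page.

```shell
# On the install server (paths and names are illustrative).
cd /export/xvm/xvmgate-70i72-nd/Solaris_11/Tools

# Register client "gath-01" to boot with DHCP (-d), identified
# by its MAC address (-e).
./add_install_client -d -e aa:04:03:35:a8:06 gath-01 i86pc
```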
To do a network installation, use the -l option and provide a path to the network installation image. When giving a machine name or IP address, the domU must be able to reach that install machine directly, without going through a router to another network. For example:
-l nfs:install:/export/xvm/xvmgate-70i72-nd
You can also use an IP address instead of a machine name. For example:
-l nfs:172.20.25.12:/export/xvm/xvmgate-70i72-nd
virt-install -n gath-01 -r 1000 --nographics -f /dev/dsk/c1t0d0s3 \
    -m "aa:04:03:35:a8:06" -p \
    -l nfs:install48:/export/xvm/xvmgate-70i72-nd
To use the ISO image, use the -l option with a full path to the ISO image. If a full path is given instead of the nfs:mach_name:path format of a network installation, then virt-install assumes that this is an ISO image:
-l /net/install/export/xvm/solarisdvd.iso
virt-install -n gath-01 -r 1000 --nographics -f /dev/dsk/c1t0d0s3 \
    -m aa:04:03:35:a8:06 -p \
    -l /net/install48/export/xvm/solarisdvd.iso
You can quote arguments to options. While arguments, such as the path to an ISO image, are generally not quoted on the command line, quotes might be used in scripts.
-l "/net/install48/export/xvm/solarisdvd.iso"
JumpStart configuration files are manually created and managed. You can initiate a custom JumpStart through network installation after setting up the server. When you create a profile server, you must ensure that systems can access the JumpStart directory on the profile server during a custom JumpStart installation. Each time that you add a system for network installation, use the add_install_client command to specify the profile server. You use the add_install_client command to create the /etc/bootparams entry for the domU.
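As an illustrative sketch (the server names, paths, and MAC address are hypothetical), registering a domU for a custom JumpStart might look like the following; the -c option points the client at the profile server.

```shell
# On the install server, point client "gath-01" at the JumpStart
# profile server and create its /etc/bootparams entry.
# (All names and paths here are illustrative.)
./add_install_client -c install:/export/jumpstart \
    -e aa:04:03:35:a8:06 gath-01 i86pc
```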
To do a JumpStart with virt-install, use the --autocf option. For example:
--autocf nfs:install:/export/jumpstart/jump-user/x86
You cannot use a full path such as:
--autocf /net/install/export/jumpstart/jump-user/x86
virt-install -n gath-01 -r 1000 --nographics -f /dev/dsk/c1t0d0s3 \
    -m aa:04:03:35:a8:06 -p \
    -l /net/install48/export/xvm/xvmgate-70i72/solarisdvd.iso \
    --autocf nfs:install:/export/jumpstart/jump-user/x86
You will need to supply the guest domain information listed below.
Name for the guest domain. Each guest domain must have a unique name. This name serves as the label of the guest operating system. The name must be a real hostname for network installations to work.
Location of the installation software. Installation must be over a network (which includes an NFS share from the local host operating system) or be an ISO install.
For example:
--location nfs:my.nfs.server.com:/home/install/test/mydomain
For HVM, an ISO or CD-ROM device should be given instead of an image location.
Installations using http or ftp, as shown in the following examples, are supported for Linux paravirtualized domain installations only:
http://my.http.server.com:/install/test/mydomain
ftp://my.ftp.server.com:/install/test/mydomain
The number of CPUs for the guest domain. The default is 1. You can assign specific CPUs. If undefined, the hypervisor makes the selection.
Amount of RAM to be allocated to the guest, in megabytes. A running domain should use a minimum of 512 megabytes. However, to install the guest domain, 1 Gbyte (1024 megabytes) is required.
Graphical console. Default is graphics. The nographics option applies to paravirtual guests only. If you intend to enable graphics support, you must decide whether the graphical installer should be used.
This is the MAC address of the dom0 network interface that you want the domU to use to send and receive network traffic. By default, the hypervisor tools use the first available network interface card (NIC) when creating guest domains.
The default values for the action to be taken on a domU shutdown, reboot, or crash are set by virt-install. You currently cannot change these defaults.
The complete list of supported virt-install options is provided in the virt-install(1M) man page.
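Putting several of these parameters together, a sketch of a paravirtualized installation request might look like the following. The domain name, disk path, and install source are illustrative placeholders; --vcpus requests two virtual CPUs, and -r supplies the 1024 Mbytes of RAM needed for installation.

```shell
# Illustrative only: the name, disk slice, and NFS path are placeholders.
virt-install -n domu-example --vcpus 2 -r 1024 --nographics -p \
    -f /dev/dsk/c1t0d0s3 \
    -l nfs:install:/export/xvm/xvmgate-70i72-nd
```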
The virt-install program can be run as a command-line utility, with parameters specified through options, or interactively, in response to a series of prompts.
This example uses virt-install with options to install a Solaris domU from the command line using an ISO image. The command-line options direct virt-install to create an 18-Gbyte root disk image file, /xvm/domu-x16.img. The --nographics option is used because this is a Solaris paravirtualized configuration. If you invoke virt-install with command-line options but do not supply all required information, the tool prompts you for the missing information.
machine:root> virt-install --nographics -n domu-x16 --paravirt \
    -f /xvm/domu-x16.img -r 1011 \
    -l /net/inst-server/export/xVM/x_iso/63-0419-nd.iso
Starting install...
Creating domain...
SunOS Release 5.11 Version 64-bit
Copyright 1983-2007 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
Configuring /dev
Solaris Interactive Text (Console session)
Using install cd in /dev/dsk/c0d1p0
Using RPC Bootparams for network configuration information.
Attempting to configure interface xnf0...
Skipped interface xnf0
Setting up Java. Please wait...
Beginning system identification...
Searching for configuration file(s)...
Search complete.
Discovering additional network configuration...
When the domain creation completes, sysidcfg runs to complete the system identification.
Use this procedure to set up the OpenSolaris 2008.11 or later release as a paravirtual guest. You must be running a Solaris dom0 on your system.
To start the installation of the OpenSolaris 2008.11 or later release, run the following commands:
# zfs create rpool/zvol
# zfs create -V 10G rpool/zvol/domu-220-root
# virt-install --nographics --paravirt --ram 1024 --name domu-220 \
    -f /dev/zvol/dsk/rpool/zvol/domu-220-root -l /isos/osol-2008.11.iso
This procedure assumes that your server is set up to assign dynamic addresses. If you want to assign static addresses, specify the address with the mac property of the -w/--network option. See Limiting Bandwidth and Setting a VLAN ID.
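As a sketch, a fixed MAC address might be supplied through the -w/--network option as follows. The MAC value shown is illustrative, and the exact property syntax is described in the virt-install(1M) man page.

```shell
# Give the guest a fixed MAC address so that DHCP or your naming
# service can hand it a static address (MAC value is illustrative).
virt-install --nographics --paravirt --ram 1024 --name domu-220 \
    -w mac=aa:04:03:35:a8:07 \
    -f /dev/zvol/dsk/rpool/zvol/domu-220-root -l /isos/osol-2008.11.iso
```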
Choose the defaults on the console for the two questions regarding the server setup.
After the OpenSolaris 2008.11 Live CD or OpenSolaris 2009.06 release has finished booting, a VNC session is available from within the guest domain. You can connect to the guest domain's VNC session as follows:
# domid=`virsh domid domu-220`
# ip=`/usr/lib/xen/bin/xenstore-read /local/domain/$domid/ipaddr/0`
# port=`/usr/lib/xen/bin/xenstore-read /local/domain/$domid/guest/vnc/port`
# /usr/lib/xen/bin/xenstore-read /local/domain/$domid/guest/vnc/passwd
DJP9tYDZ
# vncviewer $ip:$port
Enter the given password at the VNC password prompt to bring up a VNC session.
VNC sessions themselves are not encrypted. However, because elevated privileges are needed to read the VNC password from XenStore, the sessions remain reasonably protected as long as you run the VNC viewer locally on dom0, or connect through SSH tunneling or another secure method.
Enable the post-install VNC viewer.
By default, the VNC session is not enabled after the installation. You can change the default configuration as follows:
# svccfg -s x11-server setprop options/xvm_vnc = "true"
# svcadm restart xvm/vnc-config
# svcadm restart gdm
The following sequence of commands installs a Red Hat Enterprise Linux guest over NFS using the text installer:
# mount -F hsfs /rhel.iso /mnt
# share -o ro /mnt
# virt-install -n pv-rhel -r 1024 -l nfs:mydom0:/mnt \
    --os-type=linux --os-variant=rhel5.3 \
    -f /dev/zvol/dsk/pv-rhel.zvol -p --nographics
The following command installs a Red Hat Enterprise Linux guest using the media in the dom0 optical drive (CD-ROM/DVD), using the Red Hat Enterprise Linux 5 kickstart feature to automate the installation process.
# virt-install \
    --name rhat \
    --ram 500 \
    --file /dev/zvol/dsk/rhat.zvol \
    --paravirt \
    --location /dev/dsk/c2t0d0s2 \
    --os-type=linux --os-variant=rhel5 \
    --extra-args "ks=/export/install/rhat/ks.cfg"
Because of a standard Linux restriction, PV guests cannot be installed directly from a CD. Instead, the CD must be mounted in a location (usually on dom0) that is exported over NFS, and an NFS Linux installation performed. It is often easier to do an HTTP Linux installation instead.
virt-install -n solarisPV --paravirt -r 1024 \
    --nographics -f /export/solarisPV/root.img -s 16 \
    -l /ws/xvm-gate/public/isos/72-0910/solarisdvd.iso
virt-install -n solarisHVM --hvm -r 1024 --vnc \
    -f /export/solarisHVM/root.img -s 16 \
    -c /ws/xvm-gate/public/isos/72-0910/solarisdvd.iso
For this version of virt-install, ISO, physical media, and Preboot Execution Environment (PXE) network installations are supported for HVM. If a physical CD is used, remember to unmount it after use.
# virt-install -n winxp --hvm -r 1024 --vnc \
    -f /export/winxp/root.img -s 16 -c /windows/media.iso
A normal file is used to store the contents of the guest domain disk image, as opposed to using a ZFS volume, for example.
virt-install --name windows1 --ram 1024 \
    --cdrom /en_winxp_pro_with_sp2.iso \
    --file /guests/windows1-disk --file-size 10 --vnc
zfs create -V 8G pool/solaris1-disk
virt-install --name solaris1 --ram 1024 --nographics \
    --file /dev/zvol/dsk/pool/solaris1-disk \
    --location nfs:install.domain.com:/export/solaris/nv75 \
    --autocf nfs:install.domain.com:/export/jumpstart/solaris1
After the domain is created, sysidcfg is initiated and you are prompted to answer a series of questions. Your screen will look similar to the following:
SunOS Release 5.11 Version 64-bit
Copyright 1983-2007 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
Hostname: my-zone
Loading smf(5) service descriptions: 114/114

Select a Language

   1. English
   2. French
   3. German
   4. Italian
   5. Japanese
   6. Korean
   7. Simplified Chinese
   8. Spanish
   9. Swedish
  10. Traditional Chinese

Please make a choice (1 - 10), or press h or ? for help:

What type of terminal are you using?
 1) ANSI Standard CRT
 2) DEC VT52
 3) DEC VT100
 4) Heathkit 19
 5) Lear Siegler ADM31
 6) PC Console
 7) Sun Command Tool
 8) Sun Workstation
 9) Televideo 910
10) Televideo 925
11) Wyse Model 50
12) X Terminal Emulator (xterms)
13) CDE Terminal Emulator (dtterm)
14) Other
Type the number of your choice and press Return:
.
.
.
For more information on the sysidcfg file, see the sysidcfg(4) man page. For an example sysidcfg file, see Understanding the Solaris xVM Server Architecture, Part No 820-3089-102.
The main interface for command and control of both xVM control domains and guest domains is the virsh(1M) utility. Users should use virsh wherever possible to control virtualized operating systems. Some xVM operations are not yet implemented by the virsh utility. In those cases, the legacy utility xm(1M) can be used for detailed control.
The following actions can be performed:
Start Selected Guest
# virsh start sdomu
Suspend Selected Guest
# virsh suspend sdomu
You can also use the destroy subcommand to bring the system to the shutoff state, but this can result in damage to data.
Resume Selected Guest
# virsh resume sdomu
Shutdown Selected Guest
# virsh shutdown sdomu
Reboot Selected Guest
# virsh reboot sdomu
Undefine Selected Guest
# virsh undefine sdomu
Undefine the configuration for an inactive domain which is specified by either its domain name or UUID.
To delete a guest, undefine the guest and then remove any associated disk resources.
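For example, assuming the guest's disk is backed by a hypothetical ZFS volume named pool/sdomu-disk, the complete deletion might look like this:

```shell
# Remove the domain definition, then destroy its backing storage.
# (The volume name is illustrative.)
virsh undefine sdomu
zfs destroy pool/sdomu-disk
```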
Connect Selected Guest to Network
# virsh attach-interface sdomu bridge e1000g0
Take Snapshot of Active Domain
# virsh save sdomu /domains/sdomusnap
The domain will be in the shut down state when the save operation completes. The resources, such as memory, allocated for the domain will be freed for use by other running domains.
To restore:
# virsh restore /domains/sdomusnap
Note that network connections present before the save operation might be severed, since TCP timeouts might have expired.
Pin vCPU
# virsh vcpupin domain vcpu cpulist
Pin domain's virtual CPUs to the host's physical CPUs. The domain parameter is the domain name, ID, or UUID. The vcpu parameter is the VCPU number. The cpulist parameter is a list of host CPU numbers, separated by commas.
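For example, to pin virtual CPU 0 of the sdomu domain to host CPUs 2 and 3:

```shell
# Pin vCPU 0 of sdomu to physical CPUs 2 and 3.
virsh vcpupin sdomu 0 2,3
```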
Restore Selected Guest
# virsh restore /domains/sdomusnap
Restore a domain that is in the saved state.
Delete Selected Guest
Deleting the guest will cause its image and snapshot to be deleted from the system.
To view the state of a domain, use the virsh list command.
For example:
# virsh list
 Id Name                 State
----------------------------------
  0 Domain-0             running
  2 sxc18                running
By default, only running domains are displayed. Use virsh list --inactive to display only non-running domains.
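For example:

```shell
# Show running domains, then domains that are defined but not running.
virsh list
virsh list --inactive
```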
The following prerequisites apply:
Both the source machine and the target host must be on the same subnet.
The host and the target must each have the same CPU type (AMD or Intel).
Both systems must be running the same release of the xVM software.
There must be sufficient CPU and memory resources on the target to host the domain.
The target dom0 should have the same network interface as the source dom0 network interface used by the domU. For example, if the domU to be migrated has a VNIC that is bridged over the e1000g0 interface on the source dom0, then the target dom0 must also have the e1000g0 interface.
By default, xend listens only on the loopback address for requests from the localhost. The target host must be configured to accept the migration of a guest domain. The following example configures the xend SMF service on the target machine to accept guest migration from a system named host1. The caret (^) and dollar sign ($) are pattern-matching characters to ensure that the entire host name matches. The host1 name must match the name the target thinks the machine is called, which could be a host name, but could also be a fully qualified domain name (FQDN).
# svccfg -s svc:system/xvm/xend
svc:/system/xvm/xend> setprop config/xend-relocation-address = ""
svc:/system/xvm/xend> setprop config/xend-relocation-hosts-allow = "^host1\.?.*$ ^localhost$"
svc:/system/xvm/xend> end
# svcadm refresh svc:system/xvm/xend:default && \
    svcadm restart svc:system/xvm/xend:default
You can test the connection by using:
host1# telnet target-host 8002
If connection fails, check the /var/log/xen/xend.log file on the target system.
In addition to configuring the target system to accept migrations, you must also configure the domain that will be migrated so that the domain's storage is accessible from both the source and the target systems. The domain's accessible disks must reside on some form of shared storage, such as NFS files or iSCSI volumes. This document uses the NFS method available in the OpenSolaris 2009.06 release.
On the NFS server, share the directory:
# sharectl set -p nfsmapid_domain=sun.com nfs
# svcadm restart svc:/network/nfs/mapid:default
# share -F nfs -o "sec=sys,root=host1:host2,rw" /vdisks
On both the host1 source system and the host2 target system, also run the sharectl command to set the NFS mapid domain to sun.com, and run the svcadm command to restart the xend service.
# virt-install -p --nographics -n domain -r 1024 \
    -l /isos/os0906/os0906.iso -f /net/hostname_of_nfs_server/vdisks/testpv
The virt-install command then creates the virtual disk on the NFS server and starts the guest installation process.
This method should be available in a build after the OpenSolaris 2009.06 release.
An iSCSI-backed guest can be created with the following base command:
# virt-install -n <name> -r <ram> -p --nographics -l /path/to/iso \
    -m <mac address> \
    --disk path=/static/<iscsi target ip address>/<lun>/<iqnnumber>,driver=phy,subdriver=iscsi
An example would be:
# virt-install -n ubuntu -r 1024 -p --nographics \
    -l /net/xvm-4200m2-03/isos/ubuntu-7.04-32.iso \
    --disk path=/static/172.20.26.10/0/iqn.1986-03.com.sun:02:52ac879e-788e-e0ea-bf5c-f86b2b63258a,driver=phy,subdriver=iscsi
In addition to setting up the xend relocation-hosts-allow property as described in "Enabling Live Migration on a Target Host," also issue the following command to enable static discovery on both dom0s:
# iscsiadm modify discovery -s enable
Use the following command to migrate the guest.
host1# virsh migrate --live domain xenmigr://target-host
While the use of the virsh command is preferred, you can try the following command if the virsh command appears to fail.
host1# xm migrate -l domain target-host
You can observe the migration while it occurs by monitoring the domain status on both machines using virsh list.
Note that the domain definition remains on the system on which it was created. You can start the domain on that system's dom0 with the virsh start domain command.
You cannot add temporary access to a device for an HVM domU because you cannot dynamically add IDE CD-ROM devices. To make a CD-ROM available for use in an HVM guest, you must add the device before you boot the guest. You can then change the media by using virsh attach-disk.
Note that in an HVM domU, you must use eject in the domU to unlock a removable-media device, such as a CD device, before running the attach-disk subcommand.
The attach-disk subcommand is also used to change the media in a CD drive.
For additional explanation, see the virsh(1M) man page, including “Example 4 Changing a CD in a Solaris HVM Guest Domain.”
In PV guest domains, you can use virsh attach-disk to add a CD device and detach-disk to remove it. These commands can be used on halted or running domUs. If you want to change the CD after attaching the device, you must use virsh detach-disk and then virsh attach-disk with the new media.
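A sketch of changing the media in a PV guest, assuming the CD is currently attached as device xvdb (the device name and ISO path are illustrative; see the virsh(1M) man page for the exact attach-disk syntax):

```shell
# Detach the old media, then attach the new ISO read-only.
virsh detach-disk sdomu xvdb
virsh attach-disk sdomu /isos/new-media.iso xvdb --mode readonly
```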
Virtual network computing (VNC) is a remote control software product that allows you to view and fully interact with one computer desktop, the Xvnc server, by using the VNC viewer on another computer desktop. The two computers do not have to be running the same type of operating system. VNC provides a guest domain graphical login.
By default, consoles for HVM guests are graphics consoles. You can use VNC to view a Windows guest domain from a Solaris dom0. You only need to set the address and password for VNC to work with HVM guests. HVM installs may specify either VNC (--vnc) or Simple DirectMedia Layer (SDL) (--sdl) for graphics support. You can later configure the OS to use the serial console as the main console.
Use the vncpasswd command to set the password used to access VNC desktops. The password is stored on the server. For more information, see vncpasswd(1).
Xvnc displays to a VNC viewer over the network. The VNC server display number is the same as the X server display number. For example, snoopy:2 refers to display 2 on machine snoopy for both VNC and an X server.
Become superuser, or assume the appropriate role.
Enable XDMCP connections:
# svccfg -s cde-login
svc:/application/graphical-login/cde-login> setprop dtlogin/args=""
(Optional) If you are not running vncviewer locally on the control domain, set X11-server to listen to the tcp port:
# svccfg -s x11-server
svc:/application/x-11/x11-server> setprop options/tcp_listen=true
The VNC listen facility should be used with caution due to security considerations.
Enable the Xvnc inetd services.
# svcadm enable xvnc-inetd
Connect from another machine and verify that you see the login screen and can log in to a desktop session.
# vncviewer domU:0
Become superuser, or assume the appropriate role.
Enable XDMCP for GDM:
# printf '[xdmcp]\nEnable=true\n' >> /etc/X11/gdm/custom.conf
# svcadm restart gdm
Make sure that GDM is running:
# svcadm enable -s gdm
Set the X11-server to listen to the tcp port:
# svccfg -s x11-server
svc:/application/x-11/x11-server> setprop options/tcp_listen=true
Enable the Xvnc inetd services:
# svcadm enable xvnc-inetd
Connect from another machine and verify that you see the login screen and can log in to a desktop session.
# vncviewer domU:0
This procedure starts VNC at system boot from the dtlogin, displaying the dtlogin login screen.
Become superuser, or assume the appropriate role.
Add an instance of x11-server service called display1 for configuration, and configure it to run Xvnc.
svccfg -s application/x11/x11-server add display1
svccfg -s application/x11/x11-server:display1 addpg options application
svccfg -s application/x11/x11-server:display1 addpropvalue options/server astring: "/usr/X11/bin/Xvnc"
svccfg -s application/x11/x11-server:display1 addpropvalue options/server_args astring: '"SecurityTypes=None"'
Configure dtlogin to start it.
mkdir -p /etc/dt/config
cp /usr/dt/config/Xservers /etc/dt/config/Xservers
echo " :1 Local local_uid@none root /usr/X11/bin/Xserver :1" >> /etc/dt/config/Xservers
pkill -HUP dtlogin
Connect from another machine and verify that you see the login screen and can log in to a desktop session.
# vncviewer domU:0
Use the following to start the GNOME session.
# /bin/sh
# mkdir <your homedir>/.vnc
# echo "#!/bin/sh\n/usr/bin/dbus-launch /usr/bin/gnome-session" > <your homedir>/.vnc/xstartup
You can use the man command to view the man pages.
man Xvnc
man vncviewer
man vncpasswd
Live links to these man pages cannot be made from this book.
The MANPATH variable is normally set for you in desktop login sessions. If the entry is not found, check your MANPATH environment variable and add the path to the X11 man pages if necessary.
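For example, assuming the X11 man pages are installed under /usr/X11/man (a typical Solaris location; verify the path on your system), you could extend the search path as follows:

```shell
# Append the X11 man page directory to the man search path.
# (The directory shown is an assumption; adjust for your system.)
MANPATH=$MANPATH:/usr/X11/man
export MANPATH
```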