System Administration Guide: Virtualization Using the Solaris Operating System

Chapter 40 xVM System Administration

This chapter covers xVM system administration topics.

Printing Kernel and Machine Information

Use uname to determine the kernel you are running.


hostname% uname -a
SunOS hostname 5.11 snv_80 i86pc i386 i86xpv

Use the isainfo command to print the basic application environments supported by the currently running system.


hostname% isainfo -x
amd64: sse2 sse fxsr amd_3dnowx amd_3dnow amd_mmx mmx cmov cx8 tsc fpu
i386: ahf sse2 sse fxsr amd_3dnowx amd_3dnow amd_mmx mmx cmov cx8 tsc fpu

Use the psrinfo command to display information about processors.


hostname% psrinfo -vp
The physical processor has 1 virtual processor (0)
x86 (AuthenticAMD family 15 model 5 step 10 clock 2200 MHz)
Dual Core AMD Opteron(tm) Processor 275

Configuring the Serial Console to Be the Main Console

After an HVM guest domain has been installed, you can configure the OS to use the serial console as the main console.

How to Configure the Serial Console as the Main Console

  1. Type:


    # eeprom console=ttya
    
  2. Reboot the HVM guest domain.

virsh Command and Domain Management

The main command interface used to control both Solaris xVM and guest domains is the virsh command. virsh provides a generic and stable interface for controlling virtualized operating systems. Use virsh instead of xm wherever possible.

Many virsh commands act asynchronously. This means that the system prompt can return before the operation has completed.
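Because the prompt can return early, a script that depends on an operation completing should poll the domain state rather than assume the operation has finished. The following sketch is illustrative only; the domain name mydomain is a placeholder, and the exact state string printed by virsh domstate can vary between virsh versions:

```shell
# Request a graceful shutdown; virsh returns before the guest is down.
virsh shutdown mydomain

# Poll the domain state until the guest reports that it is shut off.
while [ "$(virsh domstate mydomain)" != "shut off" ]; do
    sleep 5
done
echo "mydomain has shut down"
```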

If you modify CPUs or memory by using the virsh command, these changes will be saved in the configuration file and persist across reboots.
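For example, the following commands change the virtual CPU count and the current memory allocation; both values are recorded in the domain's configuration and survive a reboot. The domain name mydomain is a placeholder:

```shell
# Give the guest domain 2 virtual CPUs.
virsh setvcpus mydomain 2

# Set the current memory allocation to 1 GB (the value is in kilobytes).
virsh setmem mydomain 1048576
```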

virsh Command Structure

Most virsh commands follow the format:


# virsh subcommand domain-id | name | uuid [options]

subcommand

One of the subcommands listed in the virsh(1M) man page

domain-id, name, or uuid

A domain identifier

options

An option to a subcommand


Example 40–1 Using a virsh Command

This line connects to a domU named sxc18.


# virsh console sxc18

virsh Commands

The virsh command is used to manage domains. You must run the commands as the root user or by assuming the appropriate role on the host operating system. The commands cannot be run from within a guest domain.

The following list describes the virsh subcommands and their functions.

virsh attach-device

Attach a device from an XML file.

virsh attach-disk

Attach a disk device.

virsh autostart

Configure a domain to automatically start at boot time. 

virsh capabilities

Return capabilities of the hypervisor and drivers. 

virsh connect

Connect to the hypervisor. 

virsh connect --readonly

Connect to the hypervisor in read-only mode.

virsh console domain

Connect to a guest console.

virsh create file

Create a domain based on the parameters contained in the XML file, where file is an absolute pathname. Such a file can be created by using the virsh dumpxml subcommand. The XML configuration file should not be edited directly.
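As a sketch, the following captures an existing domain's configuration with dumpxml and creates a domain from it. The names and paths are illustrative; in practice you would edit the copied file first (at least the name and UUID) so that it does not collide with the original domain:

```shell
# Save the configuration of an existing domain to an XML file.
virsh dumpxml mydomain > /var/tmp/newdomain.xml

# After editing the file, create (and start) a domain from it.
# Note that create requires an absolute pathname.
virsh create /var/tmp/newdomain.xml
```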

virsh define file

Define a domain from an XML file, but do not start the domain 

virsh destroy domain-id

Terminate a domain immediately 

virsh detach-device domain-id file

Detach a device defined by the given XML file from the specified domain.

virsh domid domain_name

Converts a domain name to a numeric domain ID. 

virsh dominfo domain_id

Return basic info about a domain 

virsh domname domain_id

Converts a numeric domain ID to a domain name. 

virsh domstate domain_id

Returns the state of a running domain. See the list subcommand.

virsh domuuid domain

Convert the specified domain name or ID to a domain UUID.  

virsh dump domain file

Dump the core of the domain specified by domain to the file specified by file for analysis.

virsh dumpxml domain-id

Obtain domain information in XML 

virsh help

Display descriptions of the subcommands. Include a subcommand at the end of the command line to display help about that subcommand. 

virsh list

List domains. By default, only running domains are displayed. Use --inactive to display only non-running domains.
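For example:

```shell
# Show running domains only (the default).
virsh list

# Show defined domains that are not running.
virsh list --inactive
```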

virsh nodeinfo

Print basic information about a node. 


# virsh nodeinfo
CPU model:           i86pc
CPU(s):              2
CPU frequency:       2391 MHz
CPU socket(s):       2
Core(s) per socket:  1
Thread(s) per core:  1
NUMA cell(s):        1
Memory size:         4127744 kB

virsh quit

Quit this interactive terminal 

virsh reboot domain-id

Reboot a domain. 

The effect of this command is identical to that of running init 6. The command returns immediately, but the entire reboot process might take a minute or more.

virsh restore state-file

Restore a domain from a saved state file. 

virsh resume domain-id

Moves a domain out of the paused state, making the domain eligible for scheduling by the hypervisor. 

virsh save domain state-file

Save a running domain to a state file so that it can be restored by using the restore subcommand at a later time. In this state, the domain is not running on the system, so the memory allocated for the domain is free for use by other domains.

Note that network connections present before the save operation might be severed because TCP timeouts might have expired. 
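A save and restore round trip might look like the following sketch, with an illustrative domain name and state-file path:

```shell
# Save the domain's state to a file; its memory is then freed.
virsh save mydomain /var/tmp/mydomain.state

# Later, bring the domain back from the state file.
virsh restore /var/tmp/mydomain.state
```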

virsh schedinfo domain

Show or set the scheduling parameters for the specified domain name, ID, or UUID. This subcommand takes the options --weight number and --cap number.
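For example, assuming the default credit scheduler, in which the default weight is 256 and a cap of 100 corresponds to one full physical CPU (domain name illustrative):

```shell
# Display the current scheduling parameters.
virsh schedinfo mydomain

# Give the domain twice the default share and cap it at one full CPU.
virsh schedinfo mydomain --weight 512 --cap 100
```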

virsh setmaxmem domain kilobytes

Change the maximum memory allocation limit in the specified guest domain. The kilobytes parameter is the maximum memory limit in kilobytes.

virsh setmem domain kilobytes

Change the current memory allocation in the specified guest domain. The kilobytes parameter is the number of kilobytes in memory.

virsh setvcpus domain count

Change the number of virtual CPUs active in the specified guest domain. The count parameter is the number of virtual CPUs.

virsh shutdown domain

Coordinates with the domain operating system to perform graceful shutdown. The effect of this command is identical to the effect of running init 5.

The shutdown might take an unexpected length of time, depending on the services that must be shut down in the domain. In addition, it is possible that the subcommand will not succeed. 

virsh start domain

Start a previously defined inactive domain. 

virsh suspend domain

Suspend a domain. A domain in this state still consumes allocated resources, such as memory, but is not eligible for scheduling by the hypervisor. 

virsh undefine domain

Undefine the configuration for the inactive domain by specifying either its domain name or UUID. 

virsh vcpuinfo domain

Return basic information about the domain's virtual CPUs. 

virsh vcpupin domain vcpu cpulist

Pin domain's virtual CPUs to the host's physical CPUs. The domain parameter is the domain name, ID, or UUID. The vcpu parameter is the VCPU number. The cpulist parameter is a list of host CPU numbers, separated by commas.
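For example, to pin virtual CPU 0 of a domain to physical CPUs 0 and 1 (domain name illustrative):

```shell
# Pin VCPU 0 to host CPUs 0 and 1.
virsh vcpupin mydomain 0 0,1

# Confirm the new CPU affinity.
virsh vcpuinfo mydomain
```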

virsh version

Display version information. 

virsh vncdisplay domain-id

VNC display 

Ethernet-Type Interface Support

The OpenSolaris OS supports all Ethernet-type interfaces, and their data links can be administered with the dladm command.

Suspend and Resume Functions and Commands

Some xVM operations are not yet implemented in the virsh command. In those cases, the equivalent xm command can be used. Subcommand terminology differs between the xm and virsh commands. In particular, the suspend and resume subcommands have different meanings.

Table 40–1 Equivalent virsh and xm Commands

virsh                xm

suspend              pause

resume               unpause

save                 suspend (without an output file argument)

restore              resume (without an output file argument)
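For example, the following pairs of commands are equivalent ways to stop and then resume scheduling of a domain without freeing its memory (domain name illustrative):

```shell
# Stop scheduling the domain; its memory remains allocated.
virsh suspend mydomain        # equivalent to: xm pause mydomain

# Make the domain eligible for scheduling again.
virsh resume mydomain         # equivalent to: xm unpause mydomain
```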

Cloning ZFS-Based Solaris Domains

If you are using a ZFS volume as the root disk for a domU, you can use the ZFS snapshot facilities to clone another domU with the same configuration. By taking a clone of the root disk, you can quickly provision similar domains.

For example, you might install Solaris as a guest domain, run sys-unconfig(1M), then clone that disk image for use in new Solaris domains. Installing a Solaris domain in this way requires only the configuration step, rather than a full install. The only extra storage used for the cloned domain is the amount needed for the differences between the domains.

You also have the capability to revert to a previous configuration if necessary.


Note –

Any clones created from a snapshot must be destroyed before the snapshot can be destroyed.


How to Use ZFS Snapshot to Clone a Solaris DomU

If you use a ZFS volume as the virtual disk for your guest domain, you can take a snapshot of the storage. The snapshot is used to create clones.

Note that you might want to run the sys-unconfig command, described in sys-unconfig(1M), in the domain before you take the snapshot. The resulting clones will not have host names or name services configured; this state is also known as "as-manufactured." When a new clone boots, it displays the system configuration screens.

  1. Become superuser, or assume the appropriate role.

  2. (Optional) To create a snapshot that produces domains requiring sysidcfg to complete system identification, run the sys-unconfig command in the domain, named domain1 in this example.

  3. Shut down domain1.


    # virsh shutdown domain1
    
  4. Take a snapshot of the root disk used by domain1.


    # zfs snapshot pool/domain1-root@clone
    
  5. Create a clone named domain2 from the snapshot domain1-root@clone.


    # zfs clone pool/domain1-root@clone pool/domain2-root
    
  6. (Optional) Display the snapshot and clone.


    # zfs list
    NAME                   USED  AVAIL  REFER  MOUNTPOINT
    pool                  92.0K  67.0G   9.5K  /pool
    pool/domain1-root       8K  67.0G     8K  -
    pool/domain2-root       8K  67.0G     8K  -
  7. Dump the configuration of domain1.


    # virsh dumpxml domain1 >domain1.xml
    
  8. Copy the configuration file domain1.xml to a file named domain2.xml.


    # cp domain1.xml domain2.xml
    
  9. Make the following changes in the domain2.xml file.

    1. Replace domain1 in this line:


      <name>domain1</name>

      With the new name, domain2:


      <name>domain2</name>
    2. So that virsh will generate a new domain configuration, remove the UUID line, which looks like this:


      <uuid>72bb96b6e6cf13594fb0cd1290610611</uuid>
    3. Point to the new disk by editing the following line:


      <source dev='/dev/zvol/dsk/pool/domain1-root'/>

      Change domain1-root to domain2-root so that the line appears as follows:


      <source dev='/dev/zvol/dsk/pool/domain2-root'/>
  10. Inform virsh about the new domain:


    # virsh define domain2.xml
    
  11. Boot the cloned domain.
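Steps 7 through 11 can be sketched as a short script. This is an illustration only: the pool and domain names are the ones used in this example, and the sed expressions assume the XML lines look exactly as shown above.

```shell
# Capture the source domain's configuration.
virsh dumpxml domain1 > /var/tmp/domain1.xml

# Rename the domain, remove the UUID so that virsh generates a new one,
# and point the disk source at the cloned volume.
sed -e 's|<name>domain1</name>|<name>domain2</name>|' \
    -e '/<uuid>/d' \
    -e 's|domain1-root|domain2-root|' \
    /var/tmp/domain1.xml > /var/tmp/domain2.xml

# Register the new domain and boot it.
virsh define /var/tmp/domain2.xml
virsh start domain2
```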

More Information on ZFS Snapshot Features

Also see Chapter 7, Working With ZFS Snapshots and Clones, in Solaris ZFS Administration Guide.

Recovery

You can keep snapshots of the guest domain OS installations that are known to be good images, and use ZFS rollback to revert to a snapshot if the domain has a problem. For more information, see Rolling Back a ZFS Snapshot in Solaris ZFS Administration Guide.
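For example, using the snapshot taken in the cloning procedure above, a damaged root disk can be reverted after the domain is shut down (names illustrative):

```shell
# Shut the domain down before rolling back its root volume.
virsh shutdown domain1

# Revert the volume to the known-good snapshot.
# (Rollback fails if later snapshots or clones depend on this one.)
zfs rollback pool/domain1-root@clone
```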

Communication From xVM Hypervisor to Dom0 Using xm

Although the hypervisor and dom0 work closely together to manage a running system, the dom0 operating system has little direct visibility into the hypervisor. The hypervisor's entire address space is inaccessible to the dom0.

The only source of information is provided by the xm command, a user-space tool that communicates with the hypervisor via hypercalls.

Some of the commonly used xm commands are:

xm info

Report static information about the machine, such as number of CPUs, total memory, and xVM version.


# xm info
host                   : test
release                : 5.11
version                : onnv-userj
machine                : i86pc
nr_cpus                : 2
nr_nodes               : 1
sockets_per_node       : 2
cores_per_socket       : 1
threads_per_core       : 1
cpu_mhz                : 2391
hw_caps                : 078bfbff:e1d3fbff:00000000:00000010
total_memory           : 4031
free_memory            : 1953
xen_major              : 3
xen_minor              : 1
xen_extra              : .2-xvm
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          : Thu Dec 20 20:11:49 2007 -0800 15623:41d827ccece7
cc_compiler            : gcc version 3.4.3 (csl-sol210-3_4-20050802)
cc_compile_by          : userj
cc_compile_domain      : lab.sun.com
cc_compile_date        : Thu Dec 20 20:24:36 PST 2007
xend_config_format     : 4

xm list

List all domains and some high-level information.

xm top

Analogous to the Linux top command, but it reports domain information instead of process information. Information about the xVM system and domains is displayed in a continuously updating manner through the xentop command. See xentop.

xm log

Display the contents of the xend log.

xm help

List all the available commands.

xentrace

Capture trace buffer data from xVM.

xentop

Display information about the xVM system and domains in a continuously updating manner. See xm top.

xm start domain

Start a managed domain that was created by virt-install.

If you modify guest domain CPUs or memory by using the xm command, these changes will be saved in the configuration file and persist across reboots.

See the xm(1M) man page for more information.

About Crash Dumps

Domain 0 and Hypervisor Crashes

On a running system, the hypervisor's memory is completely off-limits to dom0. If the hypervisor crashes, however, the resulting panic dump will generate a core file that provides a unified view of both xVM and dom0. In this core file, xVM appears as a Solaris kernel module named xpv. For example:


> $c
xpv`panic+0xbf()
xpv`do_crashdump_trigger+0x19()
xpv`keypress_softirq+0x35()
xpv`do_softirq+0x54()
xpv`idle_loop+0x55()

How to Force a Crash Dump of a Guest

You can use the following command to force a crash dump of an OpenSolaris guest in the event of problems within the guest:


# virsh dump domain file

The crash dump file that is created can be analyzed with /bin/mdb. The xvm user must be able to write to the specified location.
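For example, with an illustrative domain name and a dump location that the xvm user can write to:

```shell
# Force a core dump of the guest domain to a file.
virsh dump mydomain /var/tmp/mydomain.core

# Examine the dump with the modular debugger.
mdb /var/tmp/mydomain.core
```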