System Administration Guide: Virtualization Using the Solaris Operating System

Chapter 37 Sun xVM Hypervisor System

This chapter introduces the Sun xVM hypervisor. The hypervisor is derived from an open source project, Xen.

For more information about using the hypervisor and the xVM architecture, see the sections that follow.

Sun xVM Hypervisor Virtualization System Overview

The Sun xVM hypervisor is a type 1 hypervisor that partitions a single physical machine into multiple virtual machines, to provide server consolidation and utility computing. Existing applications and binaries run unmodified.

The hypervisor presents a virtual machine to guests. The hypervisor forms a layer between the software running in the virtual machine and the hardware. This separation enables the hypervisor to control how guest operating systems running inside a virtual machine use hardware resources.

The hypervisor securely executes multiple virtual machines, or guest domains, simultaneously on a single x64 or x86 compatible computer. Unlike virtualization using zones, each virtual machine runs a full instance of an operating system.

There are two kinds of domains, the control domain and the guest domain. The control domain is also known as domain 0, or dom0. A guest operating system, or unprivileged domain, is also called a domain U or domU.

When working with the xVM software, note that the virsh and virt-install commands are preferred over the legacy xm command whenever possible.
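
For example, common operations such as listing domains, connecting to a domain's console, and shutting a domain down can be performed with virsh. The following is a sketch only; the domain name sol1 is a placeholder.

# virsh list
# virsh console sol1
# virsh shutdown sol1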

Uniform View of Hardware

A hypervisor provides a uniform view of underlying hardware. Machines from different vendors with different I/O subsystems appear to be the same machine, which means that virtual machines can run on any available supported computer. Thus, administrators can view hardware as a pool of resources that can run arbitrary services on demand. Because the hypervisor also encapsulates a virtual machine's software state, the hypervisor layer can map and remap virtual machines to available hardware resources at any time and also use live migration to move virtual machines across computers. These capabilities can also be used for load balancing among a collection of machines, dealing with hardware failures, and scaling systems. When a computer fails and must go offline or when a new machine comes online, the hypervisor layer can remap virtual machines accordingly. Virtual machines are also easy to replicate, which allows administrators to bring new services online as needed.

The hypervisor virtualizes the system's hardware. A virtualization API and tools are provided by the libvirt and virt-install utilities. The hypervisor transparently shares and partitions the system's CPUs, memory, and NIC resources among the user domains. The hypervisor performs the low-level work required to provide a virtualized platform for operating systems.

The hypervisor assigns one or more virtual CPUs (VCPUs) to each domain, allocated from dom0. The virsh setvcpus and virsh vcpupin commands can be used to dynamically set and pin VCPUs to processors. Each VCPU contains all the state one would typically associate with a physical CPU, such as registers, flags, and timestamps. A VCPU in xVM is an entity that can be scheduled, like a thread in the Solaris system. When it is a domain's turn to run on a CPU, xVM loads the physical CPU with the state in the VCPU, and lets it run. The Solaris system treats each VCPU as it would treat a physical CPU. When the hypervisor selects a VCPU to run, it will be running the thread that the Solaris system loaded on the VCPU.
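
The following sketch shows these commands with a placeholder domain name, sol1. The VCPU count and CPU numbers are examples only: the first command assigns two VCPUs to the domain, and the second pins virtual CPU 0 to physical CPU 1.

# virsh setvcpus sol1 2
# virsh vcpupin sol1 0 1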

When to Use Domains

Containment

Containment gives administrators a general-purpose undo capability. Administrators can suspend a virtual machine and resume it at any time, or checkpoint a virtual machine and roll it back to a previous execution state. With this capability, systems can more easily recover from crashes or configuration errors. See Recovery.

Containment also supports a very flexible mobility model. Users can copy a suspended virtual machine over a network or store and transport it on removable media. The hypervisor provides total mediation of all interactions between the virtual machine and underlying hardware, thus allowing strong isolation between virtual machines and supporting the multiplexing of many virtual machines on a single hardware platform. The hypervisor can consolidate several physical machines with low rates of utilization as virtual systems on a single computer, thereby lowering hardware costs and space requirements.

Security

Strong isolation is also valuable for reliability and security. Applications that previously ran together on one machine can now be separated on different virtual machines. If one application experiences a fault, the other applications are isolated from this occurrence and will not be affected. Further, if a virtual machine is compromised, the incident is contained to only that compromised virtual machine.

Resource Virtualization to Enable Interoperability

The hypervisor provides a layer between software environments and physical hardware that has the following characteristics:

Virtualization provides a way to bypass interoperability constraints. Virtualizing a system or component such as a processor, memory, or an I/O device at a given abstraction level maps its interface and visible resources onto the interface and resources of an underlying, possibly different, real system. Consequently, the real system appears as a different virtual system or even as multiple virtual systems.

Hardware Platform for Running Sun xVM Hypervisor

Supported hardware includes x64 and x86 machines.

Hardware requirements are as follows:

Processors 

AMD or Intel x86/x64 (AMD-V or Intel-VT for HVM domains) 

Requires at least 2 cores, 1 for the control domain and at least 1 for the guest. 

Memory 

Default configuration requires 4 GB. 

I/O (Disk on which control domain 0 is installed) 

  • SCSI, iSCSI, Serial ATA (SATA) or Serial Attached SCSI (SAS).

  • Fibre channel to a JBOD (term for just a bunch of disks or drives).


Note –

Legacy ATA (IDE) drives are acceptable for devices such as CD-ROM. These drives are not recommended as the root or image drive on which control domain 0 is installed, for performance reasons.


I/O (Attached Storage) 

CIFS, NFS, TCP/IP, and Ethernet 

Determining HVM Support

To run Windows hardware-assisted virtual machine (HVM) domains, an HVM-capable machine that is running a Solaris xVM dom0 is required. A machine is HVM-capable if it has either an AMD Opteron Revision F or later CPU, or an Intel CPU with VT extensions.

To determine whether a machine is HVM-capable, run the virt-install program with the -v option.

If HVM is not supported, an error message displays:


# virt-install -v
   ERROR    Host does not support virtualization type 'hvm'

Sun xVM Hypervisor Memory Requirements

The current 4–GB default configuration includes memory for the OpenSolaris xVM hypervisor control functions and 10 or fewer guests.

When dom0 is booted, the OpenSolaris xVM hypervisor reserves a small amount of memory for itself and assigns the remainder to dom0. When a domU is created, the Sun xVM hypervisor takes memory from dom0 and assigns it to the new domU.

There is no fixed default amount of memory assigned to dom0 unless the user sets a dom0_mem value in the GRUB menu entry used to boot dom0. The values that can be set are initial memory, minimum memory, and maximum memory.
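
The following is a sketch of a dom0_mem setting appended to the xen.gz kernel$ line of a GRUB menu entry. The path and memory values shown are examples only and vary by release; the min: and max: forms are Xen boot options, so check the release documentation for the exact syntax supported.

kernel$ /boot/$ISADIR/xen.gz dom0_mem=2048M

With minimum and maximum values, the line might instead read:

kernel$ /boot/$ISADIR/xen.gz dom0_mem=min:1024M,max:2048M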

If you are running known memory-hungry applications in dom0, you need more memory. Examples include cacao processes and Firefox.

The Solaris System and x86 Platforms

With the introduction of the Sun xVM hypervisor, there are now two platform implementations on the x86 architecture, i86pc and i86xpv. The implementation names are returned by the uname command with the -i option; both refer to the same underlying machine.

Applications should use the uname command with the -p option. On x86 platforms, regardless of whether the Solaris system is running under xVM, this option always returns i386. The -p option is equivalent to using the -m option in some other operating systems.
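
For example, the following output is typical on an x86 system. The -p option always reports i386, while -i reports i86xpv when the Solaris instance is booted under xVM and i86pc when it is booted on bare metal.

# uname -p
i386
# uname -i
i86xpv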

Guests That Are Known to Work

Supported virtual machine configurations in the OpenSolaris 2009.06 release include the OpenSolaris control domain (domain 0), and guest, or domU, operating systems.

Because the control domain must work closely with the hypervisor layer, the control domain is always paravirtualized. A system can have both paravirtualized and fully virtualized domains running simultaneously.

Types of guests include the following:

Each entry lists the guest OS, the type of guest, whether the guest is 32-bit or 64-bit, whether it is uniprocessor (UP) or multiprocessor (MP), and any notes.

Windows 2003 SP2: HVM + PVIO, 32-bit, MP

Obtain the Sun xVM Guest Additions (Early Access 3) drivers and Sun xVM Guest Additions (Early Access 3) Installation guide here, and install in the guest. Reboot the guest from inside the guest.

Windows XP: HVM + PVIO, 32-bit, MP

Obtain the OpenSolaris xVM Guest Additions (Early Access 3) drivers and Sun xVM Guest Additions (Early Access 3) Installation guide here, and install in the guest. Reboot the guest from inside the guest.

Windows Server 2008: HVM + PVIO, 32-bit, MP

Obtain the Sun xVM Guest Additions (Early Access 3) drivers and Sun xVM Guest Additions (Early Access 3) Installation guide here, and install in the guest. Reboot the guest from inside the guest.

Solaris 10 5/09 (S10U7) + PVIO: HVM + PVIO, 64-bit, UP

The Solaris 10 5/09 (Solaris 10 Update 7) release is shipped with the Solaris PV drivers. A Solaris guest domain works like a normal Solaris Operating System. All of the expected tools are available.

Solaris 10 10/08 (S10U6) + PVIO: HVM + PVIO, 64-bit, UP

The Solaris 10 10/08 (Solaris 10 Update 6) release is shipped with the Solaris PV drivers.

Solaris 10 5/08 (S10U5) + PVIO: HVM + PVIO, 64-bit, UP

To run the Solaris 10 5/08 release as a guest, download Solaris 10 patch 137112-06 (or later) from SunSolve to obtain the Solaris PV drivers. The SunSolve site provides download instructions. After the patch is applied to the domain, perform the following steps:

  1. Run sys-unconfig(1M).

  2. Verify that the file /etc/driver_aliases contains the line xpv "pci5853,1.1".

  3. Verify that the file /etc/name_to_major contains the following lines:

    • xpv 249

    • xpvd 250

    • xnf 251

    • xdf 252

  4. Reboot the guest. Upon reboot, select the PV network interface (xnf).

Solaris Express Community Edition (SXCE) Build 111 or later: HVM + PVIO, 64-bit, UP

SXCE 110b and later builds are shipped with the Solaris PV drivers. A Solaris guest domain works like a normal Solaris Operating System. All of the expected tools are available.

OpenSolaris 2008.11 and 2009.06: HVM + PVIO or PV, 64-bit, UP

OpenSolaris is shipped with the Solaris PV drivers. Continue to update your system for the latest bug fixes and features. A Solaris guest domain works like a normal Solaris Operating System. All of the expected tools are available. For PV installation instructions, see “How to Install OpenSolaris 2008.11 or later in Paravirtualized Mode,” below.

RHEL 5.3: HVM, 64-bit, UP


Caution –

Note that Windows HVM domains can be susceptible to viruses, so make sure you comply with your site's network security policies.



The Sun xVM Hypervisor and Domain 0

The hypervisor is responsible for controlling and executing each of the domains and runs with full privileges. The control tools for managing the domains run in the specialized control domain, domain 0 (dom0).

The hypervisor virtualizes the system's hardware. The hypervisor transparently shares and partitions the system's CPUs, memory, and NIC resources among the user domains. The hypervisor performs the low-level work required to provide a virtualized platform for operating systems.

The hypervisor relies primarily on the control domain for physical device access and for the management of guest domains.

Thus, by default, only the control domain has access to physical devices. The guest domains running on the host are presented with virtualized devices. The domain interacts with the virtualized devices in the same way that the domain would interact with the physical devices. Also see Resource Virtualization to Enable Interoperability.

The following figure shows the Sun xVM hypervisor configuration.

Figure 37–1 Sun xVM Configuration

Figure shows domains and the hypervisor layer.

Sun xVM Hypervisor Scheduler

The hypervisor schedules running domains (including domain 0) onto the set of physical CPUs as configured. The scheduler is constrained by configuration specifications such as the number of VCPUs assigned to each domain, any VCPU-to-CPU pinnings, and the scheduling weight and cap parameters described below.

The default domain scheduler for the hypervisor is the credit scheduler. This is a fair-share domain scheduler that balances virtual CPUs of domains across the allowed set of physical CPUs according to workload. No CPU will be idle if a domain has work to do and wants to run.

The scheduler is configured through the xm sched-credit command described in the xm(1M) man page.

The following parameters are used to configure the scheduler:

-d domain, --domain=domain

Domain for which to set scheduling parameters.

-c cap, --cap=cap

The maximum amount of CPU the domain can consume. A value of zero, which is the default, means no maximum is set. The value is expressed in percentage points of a physical CPU. A value of 100 percent corresponds to one full CPU. Thus, a value of 50 specifies a cap of half a physical CPU.

-w weight, --weight=weight

The relative weight, or importance, of the domain. A domain with twice the weight of the other domains receives double the CPU time of those other domains when CPU use is in contention. Valid weights are in the range 1-65536. The default weight is 256.


Example 37–1 xm sched-credit Configuration

The following line configures scheduling parameters for the domain sol1. The domain has a weight of 500 and a cap of 1 CPU.


xm sched-credit -d sol1 -w 500 -c 100

If used without the -w and -c options, the current settings for the given domain are shown.
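
Because virsh is preferred over xm where possible, roughly equivalent settings can usually be applied with the virsh schedinfo subcommand. The following is a sketch only; the available options depend on the libvirt version, and sol1 is a placeholder domain name.

# virsh schedinfo sol1 --weight 500 --cap 100
# virsh schedinfo sol1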


Supported Virtualization Modes

There are two basic types of virtualization, full virtualization and paravirtualization. The hypervisor supports both modes.

Full virtualization allows any x86 operating system, including Solaris, Linux, or Windows systems, to run in a guest domain.

Paravirtualization requires changes to the operating system, with the minimum requirement being modifications to support the virtual device interfaces.

A system can have both paravirtualized and fully virtualized domains running simultaneously.

For paravirtualized mode and for all types of operating systems, the only requirement is that the operating system be modified to support the virtual device interfaces.

Overview of Paravirtualization

In the more lightweight paravirtualization, the operating system is both aware of the virtualization layer and modified to support it, which results in higher performance.

The paravirtualized guest domain operating system is ported to run on top of the hypervisor, and uses virtual network, disk, and console devices.

Since the control domain must work closely with the hypervisor layer, the control domain is always paravirtualized. Guest domains can be either paravirtualized or fully virtualized.

Devices and Drivers

Because full paravirtualization requires changes to the OS, only specific operating systems can be hosted in a fully paravirtualized guest domain. Currently, those are limited to Solaris, Linux, FreeBSD, and NetBSD, although others might be made available in the future.

Partial paravirtualization describes a mechanism in which an otherwise unmodified OS is augmented with paravirtualized drivers for I/O devices. This can significantly improve the performance of the otherwise unmodified guest domain.

Paravirtualized drivers for I/O devices are implemented as a pair of drivers, one in each of the guest and host domains. This mechanism is often termed split drivers.

A frontend driver runs in the guest domain and communicates with a backend driver running in domain 0. This enables the guest domain to access services available to domain 0.

xVM software in the OpenSolaris release currently supports two main split drivers, one for network I/O and one for disk I/O.

Within the guest domain, the frontend driver appears as a normal device. For network, this is an Ethernet device. For disk, this is a traditional block device.

Within domain 0, the behavior of the backend driver depends on the type of device and the configuration of the domain. Network backend drivers typically extend the physical network connectivity available to domain 0 into the guest domain by using a virtual NIC feature. Disk backend drivers can make disk space available to guest domains by using files, ZFS volumes, and physical devices. Various file formats are supported when files are used to provide storage, for example, VMDK. (For more information on VMDK, see the section Using vdiskadm to Create Virtual Disks.)

The Solaris frontend drivers share the same source code whether they are used in paravirtualized (PV) or partially paravirtualized (HVM+PV) domains. There are #ifdefs in the driver source code to accommodate differences between HVM+PV and PV environments, but otherwise they are the same.

The Windows frontend drivers have different source code than those for Solaris, but the protocol between them and the Solaris backend drivers is the same as that used by the Solaris frontend drivers. This protocol was developed by the Xen open source community and is defined by the source code for the Linux variants of the drivers.

The source code for the Solaris drivers is found in /usr/src/uts/common/xen/io. The network frontend driver is xnf, and the disk frontend driver is xdf. The backend drivers have various names, such as xnb, xnbo, xnbu, xdb, and xpvtap.

In addition to these drivers, the Solaris console is virtualized when the Solaris system is running as a guest domain. The console driver interacts with the xenconsoled(1M) daemon running in domain 0 to provide console access.

Overview of Full Virtualization

In a full virtualization, the operating system is not aware that it is running in a virtualized environment under xVM. A fully virtualized guest domain is referred to as a hardware-assisted virtual machine (HVM). An HVM guest domain runs an unmodified operating system.

Fully virtualized guest domains are supported under xVM with virtualization extensions available on Intel-VT or AMD Secure Virtual Machine (SVM) processors. These extensions must be present and enabled. Some BIOS versions disable the extensions by default. Note that this hardware is also needed in HVM+PVIO configurations such as Solaris 10 5/09 (Solaris 10 U7) guest domains.


Note –

Full virtualization requires that the hypervisor transparently intercept many operations that an operating system typically performs directly on the hardware. This interception allows the hypervisor to ensure that a domain cannot read or modify another domain's memory, cannot interfere with its device access, and cannot shut down the CPUs it is using.


Virtual Devices

Networks

The physical network consists of both an external physical LAN and the extension of the LAN within the host to a guest's network. Paravirtualized domains use the xnb backend driver in dom0 to communicate with a physical network interface.

The virtual network operates through the underlying physical network infrastructure.

You can create one physical network, also referred to as a switch, for each NIC on the system. If you have more than one physical NIC configured, you might want to configure the default-nic property of the xvm/xend SMF service, as described in the xend(1M) man page.
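
The following is a sketch of setting that property with svccfg, assuming the property resides in the service's config property group and that e1000g0 is the NIC to use. Adjust the property group, NIC name, and service FMRI to match your system and the xend(1M) man page.

# svccfg -s xvm/xend setprop config/default-nic = astring: "e1000g0"
# svcadm refresh xvm/xend
# svcadm restart xvm/xend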

To view the IP address assigned through DHCP, use the ifconfig command.

See the document New Network Options, Including Limiting Bandwidth and Setting a VLAN ID, for Virtual Network Interfaces Attached to a Guest to learn about conventions for network options used in the virt-install utility.

Virtual NICs

A single physical NIC can be carved into multiple VNICs, which can be assigned to different Solaris xVM instances running on the same system. VNICs are managed using the dladm command line utility described in the dladm(1M) man page. You can use virsh dumpxml output to correlate the domain's network interface with the assigned VNIC.
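
For example, the following sketch (with a hypothetical domain name) extracts the interface definition from the domain's XML; the mac address value in that output can be matched against the MACADDRESS column of the dladm show-vnic output shown below.

# virsh dumpxml sxc18 | grep -i "mac address"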

The dladm show-vnic command can be used to display VNIC information. In the following output, 1 is the domain ID, and 0 is the NIC number for that guest.


# dladm show-vnic
LINK         OVER         SPEED  MACADDRESS           MACADDRTYPE VID
xvm1_0       e1000g0      1000   0:16:3e:64:99:4d     fixed 0

For more information, see OpenSolaris Project: Crossbow: Network Virtualization and Resource Control. Crossbow provides network virtualization and resource control by virtualizing the stack and NIC around any service, protocol, or virtual machine.

Virtual FibreChannel HBAs

Certain FibreChannel HBAs that support N Port ID virtualization (NPIV) can be carved into multiple virtual HBAs, in a manner similar to VNICs. Each virtual HBA has a unique identity, or WWN, on the SAN, which can be used for LUN masking or zoning. Block devices present on these virtual HBAs can be automatically mounted to specific OpenSolaris xVM guest operating systems. When these guest operating systems migrate, the HBA's identity also migrates. NPIV is administered using the fcinfo(1M) and xm(1M) commands.


Virtual block devices can be stored on FibreChannel disk subject to the following limitations:

NPIV and FibreChannel

NPIV is fully supported for xVM. The xm command can associate all block devices from one NPIV port to a guest domain. NPIV identity, specifically the port WWN and node WWN, will migrate between devices on the SAN. NPIV allows zoning and LUN masking to be used with Xen. Zoning and LUN masking are useful tools for access control on a SAN. Soft zoning rather than hard zoning (grouping by switch port) should be used on the switch. Soft zoning groups devices by the HBA's Port WWN. If there is more than one physical SAN, or if the system uses hard zoning, the administrator must ensure that the physical HBA is connected to the correct SAN. Switch administrative tools can be used for this purpose.

How to Configure NPIV for Hypervisor Use

This procedure uses the fcinfo command described in fcinfo(1M).

  1. Identify the FibreChannel HBAs that will be used. If migration will be supported, the HBAs on each system involved must be identified. The fcinfo command can be used to list the Port WWNs.

  2. Create NPIV ports in the Dom0 control domain.

  3. View devices by using fcinfo in dom0, and verify that they are visible in the respective guest domains. A sketch of the fcinfo commands follows this procedure.
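
The following sketch covers steps 1 and 3. The fcinfo hba-port subcommand lists the HBA ports together with their Port WWNs, and fcinfo remote-port lists the devices visible through a given port; replace <port_WWN> with a Port WWN taken from the hba-port output.

# fcinfo hba-port
# fcinfo remote-port -p <port_WWN>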

Using vdiskadm to Create Virtual Disks

You can use the virt-install command to create disks.

The vdiskadm command described in the vdiskadm(1M) man page creates and manages virtual disks. vdiskadm is implemented as a set of subcommands, some of which have their own options and operands. All operations on the vdisk need to be performed using vdiskadm.

The types of virtual disks are:

  • vmdk, the native VMware format

  • vdi, the native Sun VirtualBox format

  • vhd, the native Hyper-V format

  • raw, a file that looks like a raw disk

A raw disk is always in fixed format, so that option can be explicitly set or is implicitly understood. If the type is not specified, the default value is vmdk. If the option is not specified, the default value is fixed for type raw and sparse for types vmdk, vdi, and vhd.

The create subcommand creates a new virtual disk of the specified size at the location specified by vdname. If vdname includes a path to the virtual disk, the directories from that path are created during creation of the virtual disk. The -t option specifies the type of virtual disk to be created. Type the command as one line.


# vdiskadm create -s size [-t type[:opt],[opt]] [-c comment] vdname

You can import a disk image from a block device or file to a vdisk, convert it to a different type of vdisk, and export from a vdisk to a block device or file. This includes the full vhd support (sparse and fixed) and the ability to import a vmdk 1.1 optimized stream file. An optimized stream file is read-only and must be imported to another type (vmdk:sparse by default) in order to be used as a vdisk.

Examples

Creating a Default vmdk:sparse File

A directory of the vdisk name is created and populated with two files. vdisk.vmdk is the file with the disk data. vdisk.xml is the file containing information about the disk, such as creation time, type:option of disk, and snapshot information. Note that vdisk.vmdk has a suffix of the vdisk type.


# vdiskadm create -s 8g root_disk
# ls -l root_disk
total 82
-rw-------   1 root     root     1114112 May  8 16:15 vdisk.vmdk
-rw-r--r--   1 root     root         584 May  8 16:15 vdisk.xml

Creating a vdisk File of Type vhd

The suffix specified is now vhd. Since the option isn't specified with the type, the option defaults to sparse. Note that the disk file, vdisk.vhd, isn't fully populated to 8G.


# vdiskadm create -s 8g -t vhd root_disk_vhd
# ls -l root_disk_vhd
total 44
-rw-------   1 root     root       21504 May  8 16:15 vdisk.vhd
-rw-r--r--   1 root     root         590 May  8 16:15 vdisk.xml

Creating a vmdk:fixed File

Creating a vmdk-type vdisk with the fixed option takes a minute or more, since 8G of data is created and initialized. The creation time depends upon the size of the vdisk.


# vdiskadm create -s 8g -t vmdk:fixed root_disk_fix
# ls -l root_disk_fix
total 16785428
-rw-------   1 root     root     8589934592 May  8 16:18 vdisk-flat.vmdk
-rw-------   1 root     root         638 May  8 16:18 vdisk.vmdk
-rw-r--r--   1 root     root         593 May  8 16:18 vdisk.xml

The contents of the xml file for root_disk_fix are:


# cat root_disk_fix/vdisk.xml
<?xml version="1.0"?>
<!DOCTYPE vdisk PUBLIC "-//Sun Microsystems Inc//DTD xVM Management All//EN" "file:///usr/share/lib/xml/dtd/vdisk.dtd">
<vdisk readonly="false" removable="false" cdrom="false" creation-time-epoch="1241821124" vtype="vmdk" sparse="false" rwcnt="0" rocnt="0">
  <name>root_disk_fix</name>
  <version>1.0</version>
  <parent>none</parent>
  <diskprop>
    <filename>root_disk_fix</filename>
    <vdfile>vdisk.vmdk</vdfile>
    <owner>root</owner>
    <max-size>8589934592</max-size>
    <sectors>16777216</sectors>
    <description>none</description>
  </diskprop>
</vdisk>

This same information can be retrieved from the vdiskadm command by using the subcommand prop-get:


# vdiskadm prop-get -p all root_disk_fix
readonly: false
removable: false
cdrom: false
creation-time-epoch: 1241821124
vtype: vmdk
sparse: false
rwcnt: 0
rocnt: 0
name: root_disk_fix
version: 1.0
parent: none
filename: root_disk_fix
vdfile: vdisk.vmdk
owner: root
max-size: 8589934592
sectors: 16777216
description: none
effective-size: 8589935230
creation-time: Fri May  8 16:18:44 2009
modification-time: Fri May  8 16:18:44 2009
modification-time-epoch: 1241821124

The modification times and effective size are all derived “on the fly,” and are not stored in the xml file. The creation and modification times are shown both in epoch format and in human readable format, for use by both software applications (such as Sun Ops Center) and system administrators.

The rwcnt and rocnt fields shown in the xml file are the reader/writer locks on the vdisk. There can be only one writer at a time, but multiple readers can be using the vdisk. These fields are used to set or reset the reader/writer lock associated with the virtual disk. These fields should not be set or reset by hand; they can only be modified by using vdiskadm ref-inc [-r] vdname or vdiskadm ref-dec vdname. These fields are used by blktap for shared or exclusive use of the virtual disk.

Snapshots

A snapshot is a read-only copy of a virtual disk. Snapshots can be created quickly and initially consume little space. As data within the active virtual disk changes, the snapshot consumes more disk space than would otherwise be shared with the active virtual disk.

vdisk supports snapshots in a manner similar to ZFS, except that the vdisk cannot be in use during a snapshot. The user can take a snapshot of the vdisk and later roll back to that snapshot, if needed. The user can also take a snapshot and then clone that snapshot into another vdisk.

To see the images associated with a vdisk, type:


# vdiskadm list vhd_sp
vhd_sp

Take a snapshot of the virtual disk immediately after installing it:


# vdiskadm snapshot /export/home/vdisks/vhd_sp@install

List all images associated with the virtual disk:


# vdiskadm list /export/home/vdisks/vhd_sp
vhd_sp@install
vhd_sp

The original file, vdisk.vhd, has been moved to vdisk@install.vhd. A new file that contains the differences has been created. It is named vdisk.vhd.


# ls -l vhd_sp
total 2717732
-rw-------   1 root     root       17408 May 11 16:41 vdisk.vhd
-rw-r--r--   1 xvm      root         717 May 11 16:41 vdisk.xml
-rw-------   1 root     root     1390768640 May 11 16:41 vdisk@install.vhd 

The vdisk.xml file shows the added snapshot element. When additional snapshots are created, new snapshot elements will be added to the xml description. The snapshot order in the list (and shown with vdiskadm list) shows the order in which the snapshots are loaded.


# cat vhd_sp/vdisk.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE vdisk PUBLIC "-//Sun Microsystems Inc//DTD xVM Management All//EN" "file:///usr/share/lib/xml/dtd/vdisk.dtd">
<vdisk readonly="false" removable="false" cdrom="false" creation-time-epoch="1241643718" vtype="vhd" sparse="true" rwcnt="0" rocnt="0">
  <name>vhd_sp</name>
  <version>1.0</version>
  <parent>none</parent>
  <diskprop>
    <filename>vhd_sp</filename>
    <vdfile>vdisk.vhd</vdfile>
    <owner>xvm</owner>
    <max-size>6442450944</max-size>
    <sectors>12582912</sectors>
    <description>none</description>
  </diskprop>
  <snapshot creation-time-epoch="1242081709">
    <name>@install</name>
    <vdfile>vdisk@install.vhd</vdfile>
  </snapshot>
</vdisk>

Now, take another snapshot after a bfu and list the contents:


# vdiskadm snapshot /export/home/vdisks/vhd_sp@bfu

# vdiskadm list /export/home/vdisks/vhd_sp
vhd_sp@install
vhd_sp@bfu
vhd_sp

To roll back the disk to a point right after the install:


# vdiskadm rollback -r /export/home/vdisks/vhd_sp@install
# vdiskadm list /export/home/vdisks/vhd_sp
vhd_sp@install
vhd_sp

The rollback operation removes vdisk.vhd and any intervening snapshot images after vdisk@install.vhd, and creates a new differences file named vdisk.vhd.


# ls -l vhd_sp
total 2717732
-rw-------   1 root     root       17408 May 11 16:47 vdisk.vhd
-rw-r--r--   1 xvm      root         717 May 11 16:47 vdisk.xml
-rw-------   1 root     root     1390768640 May 11 16:47 vdisk@install.vhd

Clones

A clone is a writable copy of a virtual disk. The default type of clone is a merged (that is, coalesced) copy of the original virtual disk. An example of a merged clone occurs when a virtual disk comprises several snapshots; a subsequent clone operation results in a new virtual disk containing no snapshots. A clone is of the same type as the original virtual disk (for example, vmdk:fixed). When a merged clone is created, there is no linkage back to the original virtual disk or to any of its snapshots. This lack of linkage allows the merged clone to be moved to another physical machine.

The clone subcommand creates a clone of the specified snapshot or virtual disk. The clone is created with the type, option, and size of the virtual disk being cloned. If clone_vdname includes a path, the subdirectories from that path will be created during creation of the cloned virtual disk. By default, a merged clone image is created:


# vdiskadm clone [-c comment] vdname|snapshot clone_vdname
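
For example, the following sketch clones the @install snapshot taken earlier into a new virtual disk; the name vhd_sp_clone is a placeholder.

# vdiskadm clone /export/home/vdisks/vhd_sp@install /export/home/vdisks/vhd_sp_clone
# vdiskadm list /export/home/vdisks/vhd_sp_clone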

About Domains

The control domain and the guest domain are separate entities.

Each domain has a name and a UUID. Domains can be renamed, but typically retain the same UUID.

A domain ID is an integer that is specific to a running instance. This ID changes whenever a guest domain is booted. A domain must be running to have a domain ID. Domain ID 0 is assigned to dom0.
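
For example, the following virsh subcommands report the ID, UUID, and name of a running domain; the domain name sxc18 and the ID 2 are placeholders that match the virsh list example shown later in this chapter.

# virsh domid sxc18
# virsh domuuid sxc18
# virsh domname 2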

Control Domain 0

For the latest information on Domain 0, see the document dom0 configuration for admins.

The control domain is a version of Solaris modified to run under the xVM hypervisor. When the control domain is running, the control tools are enabled. In most other respects, the control domain 0 instance runs and behaves like an unmodified instance of the Solaris Operating System.

The control domain provides console access to the guest domains it controls, but you cannot otherwise access a guest domain from the control domain unless you use the remote login commands rlogin, telnet, and ssh. A control domain should be reserved for system management work associated with running a hypervisor. This means, for example, that users should not have logins on the control domain. The control domain provides shared access to a physical network interface to the guest domains, which have no direct access to physical devices.

If a control domain crashes with a standard Solaris panic, the dump will include just the control domain. Also see About Crash Dumps.


Guest Domain Space Requirements

Size your domain as you would configure a machine to do the same workload.

The virtual disk requirement is dependent on the guest operating system and software that you install.

Domain States

A domain can be in one of six states. States are shown in virsh list displays.

For example:


#  virsh list
ID    Name        State  
-------------------------
0     Domain-0    running
2     sxc18       paused

The states are:

r, running

The domain is currently running on a CPU.

b, blocked

The domain is blocked, and not running or able to run. This can occur because the domain is waiting on I/O (a traditional wait state) or because it has gone to sleep since there was nothing running in it.

p, paused

The domain has been paused, usually through the administrator running virsh suspend. When in a paused state, the domain still consumes allocated resources like memory, but is not eligible for scheduling by the hypervisor. Run virsh resume domain to place the domain in the running state.

s, in shutdown

The domain is in the process of shutting down, but has not completely shut down or crashed.

s, shutoff

The domain is shut down.

c, crashed

The domain has crashed. Usually this state can only occur if the domain has been configured not to restart on crash. See xmdomain.cfg(5) for more information.