Chapter 1
This chapter contains a brief overview of the Logical Domains software. All of the Solaris OS functionality necessary to use Sun’s Logical Domains technology is in the Solaris 10 11/06 release (at a minimum) with the addition of necessary patches. However, system firmware and the Logical Domains Manager are also required to use logical domains. Refer to “Required and Recommended Software” in the Logical Domains (LDoms) 1.1 Release Notes for specific details.
The SPARC hypervisor is a small firmware layer that provides a stable virtualized machine architecture to which an operating system can be written. Sun servers using the hypervisor provide hardware features to support the hypervisor’s control over a logical operating system’s activities.
A logical domain is a discrete logical grouping with its own operating system, resources, and identity within a single computer system. Each logical domain can be created, destroyed, reconfigured, and rebooted independently, without requiring a power cycle of the server. You can run a variety of applications software in different logical domains and keep them independent for performance and security purposes.
Each logical domain is allowed to observe and interact with only those server resources made available to it by the hypervisor. Using the Logical Domains Manager, the system administrator specifies what the hypervisor should do through the control domain. Thus, the hypervisor enforces the partitioning of the resources of a server and provides limited subsets to multiple operating system environments. This is the fundamental mechanism for creating logical domains. The following diagram shows the hypervisor supporting two logical domains. It also shows the layers that make up the Logical Domains functionality:
The number and capabilities of each logical domain that a specific SPARC hypervisor supports are server-dependent features. The hypervisor can allocate subsets of the overall CPU, memory, and I/O resources of a server to a given logical domain. This enables support of multiple operating systems simultaneously, each within its own logical domain. Resources can be rearranged between separate logical domains with an arbitrary granularity. For example, memory is assignable to a logical domain with an 8-kilobyte granularity.
The hypervisor software is responsible for maintaining the separation between logical domains. It also provides logical domain channels (LDCs), so that logical domains can communicate with each other. Using logical domain channels, domains can provide services to each other, such as networking or disk services.
The service processor (SP), also known as the system controller (SC), monitors and runs the physical machine, but it does not manage the virtual machines. The Logical Domains Manager runs the virtual machines.
The Logical Domains Manager is used to create and manage logical domains. There can be only one Logical Domains Manager per server. The Logical Domains Manager maps logical domains to physical resources.
Control domain
    Domain in which the Logical Domains Manager runs, allowing you to create and manage other logical domains and to allocate virtual resources to other domains. There can be only one control domain per server. The initial domain created when installing Logical Domains software is a control domain and is named primary.

Service domain
    Domain that provides virtual device services to other domains, such as a virtual switch, a virtual console concentrator, and a virtual disk server.

I/O domain
    Domain that has direct ownership of and direct access to physical I/O devices, such as a network card in a PCI Express controller. An I/O domain shares these devices with other domains in the form of virtual devices when the I/O domain is also the control domain. The number of I/O domains you can have depends on your platform architecture. For example, if you are using a Sun UltraSPARC® T1 processor, you can have a maximum of two I/O domains, one of which must also be the control domain.

Guest domain
    Domain that is managed by the control domain and uses services from the I/O and service domains.
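These roles come together when you build a domain from the control domain. The following session is an illustrative sketch only; the guest domain name ldg1 and the resource sizes are hypothetical, not taken from this chapter:

```shell
# In the control domain: create a new guest domain and assign it
# virtual CPU and memory resources.
ldm add-domain ldg1
ldm add-vcpu 4 ldg1
ldm add-memory 2G ldg1

# Bind the domain's resources to physical resources, then start it.
ldm bind-domain ldg1
ldm start-domain ldg1
```

Memory sizes can in principle be specified down to the 8-kilobyte granularity described earlier, although whole megabytes or gigabytes are typical.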
If you have an existing system and already have an operating system and other software running on your server, that will be your control domain once you install the Logical Domains Manager. You might want to remove some of your applications from the control domain once it is set up, and balance the load of your applications throughout your domains to make the most efficient use of your system.
The Logical Domains Manager provides a command-line interface (CLI) for the system administrator to create and configure logical domains. The CLI is a single command, ldm(1M), with multiple subcommands.
To use the Logical Domains Manager CLI, you must have the Logical Domains Manager daemon, ldmd, running. The ldm(1M) command and its subcommands are described in detail in the ldm(1M) man page and the Logical Domains (LDoms) Manager Man Page Guide. The ldm(1M) man page is part of the SUNWldm package and is installed when the SUNWldm package is installed.
To execute the ldm command, you must have the /opt/SUNWldm/bin directory in your UNIX $PATH variable. To access the ldm(1M) man page, add the directory path /opt/SUNWldm/man to the variable $MANPATH. Both are shown as follows:
(for Bourne or Korn shell)
$ PATH=$PATH:/opt/SUNWldm/bin; export PATH
$ MANPATH=$MANPATH:/opt/SUNWldm/man; export MANPATH

(for C shell)
% set path=($path /opt/SUNWldm/bin)
% setenv MANPATH ${MANPATH}:/opt/SUNWldm/man
In a Logical Domains environment, an administrator can provision up to 32 domains on a Sun Fire™ or SPARC Enterprise T1000 or T2000 server. Although each domain can be assigned dedicated CPUs and memory, the limited number of I/O buses and physical I/O slots in these systems makes it impossible to give every domain exclusive access to disk and network devices. Although some physical devices can be shared by splitting the PCI Express® (PCIe) bus into two (see Configuring PCI Express Busses Across Multiple Logical Domains), that is not sufficient to provide all domains exclusive device access. This lack of direct physical I/O device access is addressed by implementing a virtualized I/O model.
All logical domains with no direct I/O access are configured with virtual I/O devices that communicate with a service domain, which runs a service providing access to a physical device or its functions. In this client-server model, virtual I/O devices communicate either with each other or with a service counterpart through interdomain communication channels called logical domain channels (LDCs). In Logical Domains 1.1 software, the virtualized I/O functionality comprises support for virtual networking, storage, and consoles.
The virtual network support is implemented using two components: the virtual network and virtual network switch device. The virtual network (vnet) device emulates an Ethernet device and communicates with other vnet devices in the system using a point-to-point channel. The virtual switch (vsw) device mainly functions as a multiplexor of all the virtual network’s incoming and outgoing packets. The vsw device interfaces directly with a physical network adapter on a service domain, and sends and receives packets on a virtual network’s behalf. The vsw device also functions as a simple layer-2 switch and switches packets between the vnet devices connected to it within the system.
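As a hedged sketch of how these two components are wired together with the ldm(1M) CLI, the commands below assume illustrative names: a physical network device e1000g0 in the service domain, a switch named primary-vsw0, and a guest domain named ldg1:

```shell
# In the control domain: create a virtual switch backed by a physical
# network adapter in the service domain (here, primary).
ldm add-vsw net-dev=e1000g0 primary-vsw0 primary

# Give the guest domain a virtual network device attached to that switch.
ldm add-vnet vnet1 primary-vsw0 ldg1
```

Inside the guest domain, the vnet device then appears as an ordinary network interface that can be plumbed and configured with the usual Solaris tools.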
The virtual storage infrastructure enables logical domains to access block-level storage that is not directly assigned to them through a client-server model. It consists of two components: a virtual disk client (vdc) that exports a block device interface, and a virtual disk service (vds) that processes disk requests on behalf of the virtual disk client and submits them to the physical storage residing on the service domain. Although the virtual disks appear as regular disks on the client domain, all disk operations are forwarded to the physical disk through the virtual disk service.
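Correspondingly, a virtual disk is typically set up in two steps: a vds instance with an exported backend on the service domain, and a vdisk on the client domain. The device path, volume name, and domain names below are illustrative assumptions:

```shell
# In the control domain: create a virtual disk service in the service
# domain (here, primary) and export a physical disk slice through it.
ldm add-vds primary-vds0 primary
ldm add-vdsdev /dev/dsk/c1t1d0s2 vol1@primary-vds0

# Attach the exported volume to the guest domain as a virtual disk.
ldm add-vdisk vdisk1 vol1@primary-vds0 ldg1
```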
In a Logical Domains environment, console I/O from all domains, except the primary domain, is redirected to a service domain running the virtual console concentrator (vcc) and virtual network terminal server, instead of the system controller. The virtual console concentrator service functions as a concentrator for all domains’ console traffic, interfaces with the virtual network terminal server daemon (vntsd), and provides access to each console through a UNIX socket.
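As a sketch, the console infrastructure can be set up and used as follows; the port range and domain names are illustrative:

```shell
# In the control domain: create a virtual console concentrator with a
# range of TCP ports, and enable the terminal server daemon.
ldm add-vcc port-range=5000-5100 primary-vcc0 primary
svcadm enable vntsd

# Connect to a guest domain's console on the port assigned to it.
telnet localhost 5000
```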
Dynamic reconfiguration (DR) is the ability to add or remove resources while the operating system is running. The ability to perform dynamic reconfiguration of a particular resource type depends on support in the OS running in the logical domain. Support for dynamic reconfiguration of virtual CPUs is available in all versions of the Solaris 10 OS. Dynamic reconfiguration of virtual I/O devices is supported in logical domains running at least the Solaris 10 10/08 OS. There is no support for dynamic reconfiguration of memory or physical I/O devices. To use the dynamic reconfiguration capability in the Logical Domains Manager CLI, you must have the Logical Domains dynamic reconfiguration daemon, drd(1M), running in the domain you want to change.
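For example, virtual CPUs can be added to or removed from a running domain, provided drd is running in that domain; the counts and the domain name ldg1 here are illustrative:

```shell
# Add two virtual CPUs to the running domain ldg1, then remove one.
ldm add-vcpu 2 ldg1
ldm remove-vcpu 1 ldg1
```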
In contrast to dynamic reconfiguration operations, which take place immediately, delayed reconfiguration operations take effect after the next reboot of the OS or, if no OS is running, after a stop and start of the logical domain. If the domain is running the Solaris 10 5/08 OS or earlier, any add or remove operation on an active logical domain, except for the add-vcpu, set-vcpu, and remove-vcpu subcommands, is considered a delayed reconfiguration operation. If the domain is running the Solaris 10 10/08 OS, the addition and removal of virtual input/output devices do not result in a delayed reconfiguration. The set-vswitch subcommand on an active logical domain is considered a delayed reconfiguration operation no matter which Solaris OS is running in the domain.
If you are using a Sun UltraSPARC T1 processor, when the Logical Domains Manager is first installed and enabled (or when the configuration is restored to factory-default), the LDoms Manager runs in configuration mode. In this mode, reconfiguration requests are accepted and queued up, but are not acted upon. This allows a new configuration to be generated and stored to the SC without affecting the state of the running machine, and therefore without being constrained by restrictions such as delayed reconfiguration and the rebooting of I/O domains.
Once a delayed reconfiguration is in progress for a particular logical domain, any other reconfiguration requests for that logical domain are also deferred until the domain is rebooted or stopped and started. Also, when there is a delayed reconfiguration outstanding for one logical domain, reconfiguration requests for other logical domains are severely restricted and will fail with an appropriate error message.
Even though attempts to remove virtual I/O devices on an active logical domain are handled as a delayed reconfiguration operation if you are running the Solaris 10 5/08 OS or earlier, some configuration change does occur immediately in the domain. This means the device will stop functioning as soon as the associated Logical Domains Manager CLI operation is invoked. This issue does not apply if you are running the Solaris 10 10/08 OS in the domain, since the entire removal happens immediately as part of a virtual I/O dynamic reconfiguration operation.
The Logical Domains Manager subcommand remove-reconf cancels delayed reconfiguration operations. You can list delayed reconfiguration operations by using the ldm list-domain command. Refer to the ldm(1M) man page or the Logical Domains (LDoms) Manager Man Page Guide for more information about how to use the delayed reconfiguration feature.
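A brief illustrative sequence (the domain name ldg1 is hypothetical):

```shell
# List domains; a pending delayed reconfiguration is indicated in the
# FLAGS column of the listing.
ldm list-domain

# Cancel the delayed reconfiguration pending on domain ldg1.
ldm remove-reconf ldg1
```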
The current configuration of a logical domain can be stored on the system controller (SC) using the Logical Domains Manager CLI commands. You can add a configuration, specify a configuration to be used, remove a configuration, and list the configurations on the system controller. (Refer to the ldm(1M) man page or the Logical Domains (LDoms) Manager Man Page Guide.) In addition, there is an ALOM CMT Version 1.3 command that enables you to select a configuration to boot (see Using LDoms With ALOM CMT).
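A sketch of these configuration subcommands; the configuration name initial is an illustrative assumption:

```shell
# Save the current machine configuration to the system controller.
ldm add-config initial

# List the configurations stored on the system controller.
ldm list-config

# Select a stored configuration to be used at the next power cycle.
ldm set-config initial
```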