4 Planning Your Environment

Before installing Oracle Linux Virtualization Manager, review this section to help plan your deployment. For more information about the virtualization management platform, see Architecture.

Important:

The engine server and KVM hosts can be configured on a single NIC, a bond, or a VLAN interface, but all hosts must be connected to the same network segment.

Data Centers

A data center is a high-level logical entity for all physical and logical resources in the environment. You can have multiple data centers and all the data centers are controlled from a single Administration Portal. For more information, see Data Centers in the Oracle Linux Virtualization Manager: Administration Guide.

When you install Oracle Linux Virtualization Manager, a default data center (Default) is created, which you can rename and configure. You can also create and configure additional data centers. To initialize any data center, you must add a cluster, a host, and a storage domain:

  • Cluster - A cluster is an association of physical hosts that share the same storage domains and have compatible processors. Every cluster belongs to a data center, and every host belongs to a cluster. A cluster must have at least one host, and at least one active host is required to connect the system to a storage pool.
  • KVM Host - Hosts, or hypervisors, are the physical servers that run virtual machines. You must have at least one host in a cluster. KVM hosts in a data center must have access to the same storage domains.
  • Storage Domain - Data centers must have at least one data storage domain. Set up the data storage domain of the type required for the data center: NFS, iSCSI, FCP or Local.
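The initialization rule above can be sketched as a small check over an in-memory model. This is a hypothetical illustration (the dictionary shapes and `is_initialized` function are not the Manager's API):

```python
# Hypothetical sketch (not the Manager's API): a data center can be
# initialized once it has a cluster, an active (Up) host in that cluster,
# and a data storage domain of a supported type.
def is_initialized(data_center):
    has_cluster = len(data_center["clusters"]) > 0
    has_active_host = any(
        host["status"] == "Up"
        for cluster in data_center["clusters"]
        for host in cluster["hosts"]
    )
    has_data_domain = any(
        domain["type"] in ("NFS", "iSCSI", "FCP", "Local")
        for domain in data_center["storage_domains"]
    )
    return has_cluster and has_active_host and has_data_domain

dc = {
    "clusters": [{"hosts": [{"name": "kvm01", "status": "Up"}]}],
    "storage_domains": [{"type": "NFS"}],
}
print(is_initialized(dc))  # True
```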

Logical networks are not required to initialize a data center, but are required for Oracle Linux Virtualization Manager to communicate with all components of a data center. Logical networks are also used for the virtual machines to communicate with hosts and storage, for connecting clients to virtual machine resources, and for migrating virtual machines between the hosts in a cluster.

Figure 4-1 Data Center


A single engine connects to 3 data centers, each with 3 clusters. The data centers connect to a single storage server.

Clusters

A cluster is a logical grouping of one or more Oracle Linux KVM hosts on which a collection of virtual machines can run. The KVM hosts in a cluster must have the same type of CPU (either Intel or AMD).

Each cluster in the environment must belong to a data center and each KVM host must belong to a cluster. During installation, a default cluster is created in the Default data center. For more information, see Clusters in the Oracle Linux Virtualization Manager: Administration Guide.

Virtual machines are dynamically allocated to any KVM host in the cluster and can be migrated between them, according to policies defined on the cluster and settings on the virtual machines. The cluster is the highest level at which power and load-sharing policies can be defined. Since virtual machines are not bound to any specific host in the cluster, virtual machines always start even if one or more of the hosts are unavailable.

Figure 4-2 Single Cluster


Single engine connected to a cluster with 3 VDSMs each connected to a Storage Domain.

Figure 4-3 Multiple Clusters


Single engine connected to a data center with 3 clusters.

Hosts

In Oracle Linux Virtualization Manager, you install Oracle Linux on a bare metal (physical) server and leverage the Unbreakable Enterprise Kernel, which allows the server to be used as a KVM hypervisor. A server running a hypervisor is referred to as a host because it is capable of hosting virtual machines.

The engine host is a separate physical host and provides the administration tools for managing the Oracle Linux Virtualization Manager environment. All hosts in your environment must be Oracle Linux KVM hosts, except for the host running the engine, which is an Oracle Linux host.

Oracle Linux Virtualization Manager can manage many Oracle Linux KVM hosts, each of which can run multiple virtual machines concurrently. Each virtual machine runs as an individual Linux process, with its own threads, on the KVM host.

Using the Administration Portal you can install, configure and manage your KVM hosts. You can also use the Cockpit web interface to monitor a KVM host's resources and perform administrative tasks. The Cockpit feature must be installed and enabled separately. You can access a host's Cockpit web interface from the Administration Portal or by connecting directly to the host.

The Virtual Desktop and Server Manager (VDSM) service is a host agent that runs as a daemon on the KVM hosts and communicates with the engine to:

  • Manage and monitor physical resources, including storage, memory, and networks.
  • Manage and monitor the virtual machines running on a host.
  • Gather statistics and collect logs.

For more information on engine host and virtual machine requirements, see Requirements and Scalability Limits.

For more information, see Host Architecture and Adding a KVM Host in the Oracle Linux Virtualization Manager: Getting Started Guide.

Virtual Machines

Virtual machines can be created to a certain specification or cloned from an existing template in the virtual machine pools. For more information, see Creating a New Virtual Machine and Creating a Template in the Oracle Linux Virtualization Manager: Administration Guide. You can also import an Open Virtual Appliance (OVA) file into your environment from any host in the data center. For more information, see oVirt Virtual Machine Management Guide in oVirt Documentation.

  • A virtual machine pool is a group of on-demand virtual machines that are all clones of the same template. They are available to any user in a given group.

    When accessed from the VM Portal, virtual machines in a pool are stateless, meaning that data is not persistent across reboots. Each virtual machine in a pool uses the same backing read-only image, and uses a temporary copy-on-write image to hold changed and newly generated data. Each time a virtual machine is assigned from a pool, it is allocated in its base state. Users who have been granted permission to access and use virtual machines from a pool receive an available virtual machine based on their position in a queue of requests.

    When accessed from the Administration Portal, virtual machines in a pool are not stateless so that administrators can make changes to the disk if needed.

  • Guest agents and drivers provide functionality for virtual machines, such as the ability to monitor resource usage and to shut down and reboot virtual machines from the Administration Portal.

    Important:

    See Windows Virtual Machines Lose Functionality Due To Deprecated Guest Agent in the Known Issues section of the Oracle Linux Virtualization Manager: Release Notes.
  • A snapshot captures a virtual machine's operating system and applications on all available disks at a given point in time. Use a snapshot to restore a virtual machine to its previous state.
  • A template is a copy of a virtual machine that you can use to simplify the subsequent, repeated creation of similar virtual machines. Templates capture the configuration of software, the configuration of hardware, and the software installed on the virtual machine on which the template is based, which is known as the source virtual machine.

    Virtual machines that are created based on a template use the same NIC type and driver as the original virtual machine but are assigned separate, unique MAC addresses.
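The stateless behavior of pooled virtual machines described above can be sketched as a shared read-only base image plus a temporary copy-on-write layer. This is a hypothetical illustration (the `PooledVM` class is not the Manager's implementation):

```python
# Hypothetical sketch (not the Manager's implementation): every VM in a pool
# shares the same read-only base image and writes to a temporary
# copy-on-write overlay that is discarded when the VM returns to the pool.
class PooledVM:
    def __init__(self, base_image):
        self._base = base_image   # shared, never modified
        self._overlay = {}        # temporary copy-on-write layer

    def read(self, path):
        return self._overlay.get(path, self._base.get(path))

    def write(self, path, data):
        self._overlay[path] = data

    def return_to_pool(self):
        self._overlay.clear()     # the VM reverts to its base state

base = {"/etc/hostname": "template"}
vm = PooledVM(base)
vm.write("/etc/hostname", "user-vm")
print(vm.read("/etc/hostname"))   # user-vm
vm.return_to_pool()
print(vm.read("/etc/hostname"))   # template
```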

Considerations When Using Snapshots

A snapshot is a picture of a virtual machine's state and should not be used as a primary backup process. You take snapshots so that you can revert the virtual machine to a specific point if and when required. Before creating snapshots, consider the following:

  • Do not shut down or start a virtual machine that displays an illegal status in the Administration Portal, as this might cause data corruption or the virtual machine might fail to start.
  • Delete a snapshot as soon as you have reverted to it and it is no longer required.
  • Taking several snapshots in a row without any cleanup can affect virtual machine and host performance.
  • Taking a snapshot creates a new copy of the virtual machine disk, so the more data you write after taking the snapshot, the longer disk operations take.
  • For an I/O-intensive virtual machine, delete snapshots while the virtual machine is shut down (cold merge) rather than while it is running (live merge).
  • Ensure you have installed the latest guest agent package on your virtual machine before taking snapshots.
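The take/revert/delete lifecycle above can be sketched as follows. This is a hypothetical illustration (the `VMDisks` class is not the Manager's implementation):

```python
import copy

# Hypothetical sketch (not the Manager's implementation): a snapshot records
# the state of all available disks at a point in time, reverting restores
# that recorded state, and deleting frees the space the snapshot holds.
class VMDisks:
    def __init__(self, disks):
        self.disks = disks
        self.snapshots = {}

    def take_snapshot(self, name):
        self.snapshots[name] = copy.deepcopy(self.disks)

    def revert(self, name):
        self.disks = copy.deepcopy(self.snapshots[name])

    def delete_snapshot(self, name):
        del self.snapshots[name]   # clean up as soon as it is unneeded

vm = VMDisks({"vda": b"v1"})
vm.take_snapshot("before-upgrade")
vm.disks["vda"] = b"v2"
vm.revert("before-upgrade")
print(vm.disks["vda"])   # b'v1'
vm.delete_snapshot("before-upgrade")
```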

Virtual Machine Consoles

You access virtual machine consoles using the Remote Viewer application (virt-viewer) on Enterprise Linux and Microsoft Windows clients. Remote Viewer allows you to interact with a virtual machine in a similar way to a physical machine. For more information, see Consoles.

To download Remote Viewer, see Installing a Remote Viewer on Client Machine in the Oracle Linux Virtualization Manager: Administration Guide. You must have Administrator privileges to install the Remote Viewer application.

High Availability and Optimization

You can configure Oracle Linux Virtualization Manager so that your cluster is optimized and your hosts and virtual machines are highly available. You can also enable or disable devices (hot plug) while a virtual machine is running.

For more information about high availability and optimization, see Deployment Optimization in the Oracle Linux Virtualization Manager: Administration Guide.

Clusters

Using the Optimization tab when creating or editing a cluster, you can select the memory page sharing threshold for the cluster, and optionally enable CPU thread handling and memory ballooning on the hosts in the cluster. Some of the benefits are:
  • Virtual machines run on hosts up to the specified overcommit threshold. Higher values conserve memory at the expense of greater CPU usage.
  • Hosts can run virtual machines with a total number of CPU cores greater than the number of cores in the host.
  • Memory can be overcommitted for virtual machines running on the hosts in the cluster.
  • Memory Overcommitment Manager (MoM) runs Kernel Same-page Merging (KSM) when it can yield a memory saving benefit. To use KSM, you have to explicitly enable it at the cluster level.

You can set cluster optimization so that the MoM starts ballooning where and when possible, limited by the guaranteed memory size of every virtual machine. For ballooning to run, a virtual machine needs a balloon device with the relevant drivers. Each virtual machine includes a balloon device unless it is specifically removed. Each host in the cluster receives a balloon policy update when its status changes to Up. If necessary, you can manually update the balloon policy on a KVM host without having to change the status.
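The guaranteed-memory limit on ballooning can be sketched as a simple clamp. This is a hypothetical illustration (the function name and megabyte units are assumptions, not MoM's actual interface):

```python
# Hypothetical sketch (names are illustrative): MoM may reclaim memory from
# a guest through its balloon device, but the balloon never shrinks the
# guest below its guaranteed memory size.
def balloon_target(current_mb, reclaim_mb, guaranteed_mb):
    return max(current_mb - reclaim_mb, guaranteed_mb)

print(balloon_target(4096, 1024, 2048))  # 3072
print(balloon_target(4096, 3072, 2048))  # 2048 (clamped at the guarantee)
```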

Hosts

If you want a cluster to be responsive when unexpected host failures happen, you should configure fencing. Fencing keeps hosts in a cluster highly available by enforcing any associated policies for power saving, load balancing, and virtual machine availability. If you want highly available virtual machines on a particular host:

  • You must also enable and configure power management for the host
  • The host must have access to the Power Management interface via the ovirtmgmt network

Important:

For power management operations, you need at least two KVM hosts in a cluster or data center that are in Up or Maintenance status.

The Manager does not communicate directly with fence agents. Instead, the engine uses a proxy to send power management commands to a host's power management device. The engine uses VDSM to execute power management device actions, so another host in the environment is used as a fencing proxy. You can select between:

  • Any host in the same cluster as the host requiring fencing.
  • Any host in the same data center as the host requiring fencing.
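The proxy choices above can be sketched as a preference order. This is a hypothetical illustration (the `select_fence_proxy` function and host dictionaries are assumptions, not the engine's algorithm verbatim):

```python
# Hypothetical sketch (not the engine's code): pick an Up host in the same
# cluster as the host requiring fencing, or fall back to an Up host
# elsewhere in the same data center.
def select_fence_proxy(target, hosts):
    candidates = [h for h in hosts
                  if h["name"] != target["name"] and h["status"] == "Up"]
    same_cluster = [h for h in candidates if h["cluster"] == target["cluster"]]
    if same_cluster:
        return same_cluster[0]
    same_dc = [h for h in candidates
               if h["data_center"] == target["data_center"]]
    return same_dc[0] if same_dc else None

hosts = [
    {"name": "kvm01", "cluster": "c1", "data_center": "dc1", "status": "Down"},
    {"name": "kvm02", "cluster": "c2", "data_center": "dc1", "status": "Up"},
]
proxy = select_fence_proxy(hosts[0], hosts)
print(proxy["name"])  # kvm02 (same data center, since no cluster peer is Up)
```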

Each KVM host in a cluster has limited resources. If a KVM host becomes overutilized, there is an adverse impact on the virtual machines that are running on the host. To avoid or mitigate overutilization, you can use scheduling, load balancing, and migration policies to ensure the performance of virtual machines. If a KVM host becomes overutilized, virtual machines are migrated to another KVM host in the cluster.

Virtual Machines

A highly available virtual machine automatically migrates to and restarts on another host in the cluster if the host crashes or becomes non-operational. If a virtual machine is not configured for high availability it will not restart on another available host. If a virtual machine's host is manually shut down, the virtual machine does not automatically migrate to another host.

Note:

Virtual machines do not live migrate unless you are using shared storage and have explicitly configured your environment for live migration in the event of host failures. Policies, such as power saving or distribution, as well as maintenance events trigger live migrations of virtual machines.

Using the Resource Allocation tab when creating or editing a virtual machine, you can:

  • Set the maximum amount of processing capability a virtual machine can access on its host.
  • Pin a virtual CPU to a specific physical CPU.
  • Guarantee an amount of memory for the virtual machine.
  • Enable the memory balloon device for the virtual machine. For this feature to work, memory balloon optimization must also be enabled for the cluster.
  • Improve the speed of disks that have a VirtIO interface by pinning them to a thread separate from the virtual machine's other functions.

When a KVM host goes into maintenance mode, all virtual machines are migrated to other servers in the cluster. This means there is no downtime for virtual machines during planned maintenance windows.

If a virtual machine is unexpectedly terminated, it is automatically restarted, either on the same KVM host or another host in the cluster. This is achieved through monitoring of the hosts and storage to detect any hardware failures. If you configure a virtual machine for high availability and its host fails, the virtual machine automatically restarts on another KVM host in the cluster.

Policies

Load balancing, scheduling, and resiliency policies enable critical virtual machines to be restarted on another KVM host in the event of hardware failure with three levels of priority.

Scheduling policies enable you to specify the usage and distribution of virtual machines between available hosts. You can define the scheduling policy to enable automatic load balancing across the hosts in a cluster. Regardless of the scheduling policy, a virtual machine does not start on a host with an overloaded CPU. By default, a host’s CPU is considered overloaded if it has a load of more than 80% for 5 minutes, but these values can be changed using scheduling policies.
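The default overload rule can be sketched as a check over recent load samples. This is a hypothetical illustration (the function, the one-sample-per-minute assumption, and the sample list are not the scheduler's actual interface):

```python
# Hypothetical sketch of the default rule: a host CPU is overloaded when its
# load stays above the threshold (80% by default) for the full window
# (5 minutes by default, sampled here once per minute).
def cpu_overloaded(samples, threshold=80, window=5):
    recent = samples[-window:]
    return len(recent) == window and all(s > threshold for s in recent)

print(cpu_overloaded([85, 90, 88, 92, 95]))  # True
print(cpu_overloaded([85, 90, 70, 92, 95]))  # False (load dipped below 80%)
```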

There are five default scheduling policies:

  • Evenly_Distributed - evenly distributes the memory and CPU processing load across all hosts in a cluster.

    Note:

    All virtual machines must have the latest qemu-guest-agent installed and its service running.
  • Cluster_Maintenance - limits activity in a cluster during maintenance tasks.
  • Power_Saving - reduces power consumption on underutilized hosts by distributing memory and CPU processing load across a subset of available hosts.
  • VM_Evenly_Distributed - evenly distributes virtual machines between hosts.
  • None

Migration policies enable you to define the conditions for live migrating virtual machines in the event of KVM host failure. These conditions include how long a virtual machine can be down during migration, how much network bandwidth is used, and how the virtual machines are prioritized.

Resilience policies enable you to define how the virtual machines are prioritized in migration. You can configure the policy so that all or no virtual machines migrate, or that only highly available virtual machines migrate which helps to prevent overloading hosts.

For more information on policies, refer to the Administration Guide in oVirt Documentation.

Networks

The following are general, high-level networking recommendations.

  • Use bond network interfaces, especially on production hosts
  • Use VLANs to separate different traffic types
  • Use 1 GbE networks for management traffic
  • Use 10 GbE, 25 GbE, 40 GbE, or 100 GbE for virtual machines and Ethernet-based storage
  • When adding physical interfaces to a host for storage use, uncheck VM network so that the VLAN is assigned directly to the physical interface

The Oracle Linux Virtualization Manager host and all Oracle Linux KVM hosts must have a fully-qualified domain name (FQDN) as well as forward and reverse name resolution. Oracle recommends using DNS. Alternatively, you can use the /etc/hosts file for name resolution; however, this requires more work and is error-prone.

All DNS services used for name resolution must be hosted outside of the environment.

Logical Networks

In Oracle Linux Virtualization Manager, you configure logical networks to represent the resources required to ensure the network connectivity of the Oracle Linux KVM hosts for a specific purpose, for example to indicate that a network interface controller (NIC) is on a management network.

You define a logical network for a data center, apply the network to one or more clusters, and then configure the hosts by assigning the logical networks to the hosts physical interfaces. Once you implement the network on all the hosts in a cluster, the network becomes operational. You perform all these operations from the Administration Portal.

At the cluster level, you can assign one or more network roles to a logical network to specify its purpose:

  • A management network is used for communication between Oracle Linux Virtualization Manager and the hosts.
  • A VM network is used for virtual machine communication; a virtual machine's virtual NIC is attached to a VM network. For more information, see Creating a Logical Network in the Oracle Linux Virtualization Manager: Administration Guide.
  • A display network is used to connect clients to virtual machine graphical consoles, using either the VNC or RDP protocols.
  • A migration network is used to migrate virtual machines between the hosts in a cluster.

By default a single logical network named ovirtmgmt is created and this is used for all network communication in a data center. You separate the network traffic according to your needs by defining and applying additional logical networks.

One logical network is configured as the default route for the hosts.

A logical network can be marked as a required network. If a required network ceases to function, any KVM hosts associated with the network become non-operational.

For logical networks that are not VM networks, you connect the host directly to the network using either a physical network interface, a VLAN interface, or a bond.

For VM networks, a bridge is created on the host for each logical network. Virtual machine VNICs are connected to the bridges as needed. The bridge is connected to the network using either a physical network interface, a VLAN interface, or a bond.

Figure 4-4 Bridge Networks


Shows bridges created on Oracle Linux KVM hosts for VM networks, as described in the preceding text.

You can perform most network configuration operations on hosts from the Administration Portal, including:

  • Assign a host NIC to logical networks.
  • Configure a NIC's boot protocol, IP settings, and DNS settings.
  • Create bonds and VLAN interfaces on KVM hosts.

When there are a large number of KVM hosts and logical networks, network labels simplify administration. Labels can be applied to logical networks and host interfaces. When you set a label on a network, the network is deployed on host NICs that have the same label. This requires that the host NICs are configured for DHCP.

VLANs

A virtual local area network (VLAN) enables hosts and virtual machines to communicate regardless of their actual physical location on a LAN.

VLANs enable you to improve security by segregating network traffic. Broadcasts between devices in the same VLAN are not visible to devices in a different VLAN, even if they are connected to the same switch.

VLANs can also help to compensate for the lack of physical NICs on hosts. A host or virtual machine can be connected to different VLANs using a single physical NIC or bond. This is implemented using VLAN interfaces.

A VLAN is identified by an ID. A VLAN interface attached to a host's NIC or bond is assigned a VLAN ID and handles the traffic for the VLAN. When traffic is routed through the VLAN interface, it is automatically tagged with the VLAN ID configured for that interface, and is then routed through the NIC or bond that the VLAN interface is attached to.

The switch uses the VLAN ID to segregate traffic among the different VLANs operating on the same physical link. In this way, a VLAN functions exactly like a separate physical connection.
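The tagging and segregation described above can be sketched as follows. This is a hypothetical illustration (frames as dictionaries and the two function names are simplifications, not real packet handling):

```python
# Hypothetical sketch (not real packet handling): a VLAN interface tags
# outgoing frames with its VLAN ID (1-4094 in 802.1Q), and a switch forwards
# a tagged frame only to ports that carry that VLAN.
def tag_frame(frame, vlan_id):
    assert 1 <= vlan_id <= 4094
    return {**frame, "vlan_id": vlan_id}

def switch_forward(frame, port_vlans):
    return [port for port, vlans in port_vlans.items()
            if frame["vlan_id"] in vlans]

frame = tag_frame({"dst": "52:54:00:aa:bb:cc"}, vlan_id=100)
print(switch_forward(frame, {"eth1": {100, 200}, "eth2": {200}}))  # ['eth1']
```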

You need to configure the VLANs needed to support your logical networks before you can use them. This is usually accomplished using switch trunking. Trunking involves configuring ports on the switch to enable multiple VLAN traffic on these ports, to ensure that packets are correctly transmitted to their final destination. The configuration required depends on the switches you use.

When you create a logical network, you can assign a VLAN ID to the network. When you assign a host NIC or bond to the network, the VLAN interface is automatically created on the host and attached to the selected device.

Figure 4-5 VLANs


Diagram illustrating the use of VLANs on logical networks, as described in the preceding text.

Figure 4-6 VLANs over Network Bonds


Diagram illustrating VLAN over network bonds, as described in the preceding text.

Virtual NICs

A virtual machine uses a virtual network interface controller (VNIC) to connect to a logical network.

VNICs are always attached to a bridge on a KVM host. A bridge is a software network device that enables the VNICs to share a physical network connection and to appear as separate physical devices on a logical network.

Oracle Linux Virtualization Manager automatically assigns a MAC address to a VNIC. Each MAC address corresponds to a single VNIC. Because MAC addresses must be unique on a network, the MAC addresses are allocated from a predefined range of addresses, known as a MAC address pool. MAC address pools are defined for a cluster.

Virtual machines are connected to a logical network by their VNICs. The IP address of each VNIC can be set independently, by DHCP or statically, using the tools available in the operating system of the virtual machine. To use DHCP, you need to configure a DHCP server on the logical network.

Virtual machines can communicate with any other machine on the virtual network, and, depending on the configuration of the logical network, with public networks such as the Internet.

For more information, see Customizing vNIC Profiles for Virtual Machines in the Oracle Linux Virtualization Manager: Administration Guide.

Bonds

Bonds bind multiple NICs into a single interface. A bonded network interface combines the transmission capability of all the NICs included in the bond and acts as a single network interface, which can provide greater transmission speed. Because all network interface cards in the bond must fail for the bond itself to fail, bonding provides increased fault tolerance.

There are four different bonding modes:

  • Mode 1 - Active-Backup
  • Mode 2 - Load balance XOR Policy
  • Mode 3 - Broadcast
  • Mode 4 (default) - Dynamic link aggregation IEEE 802.3ad

Bonding mode 2 requires static etherchannel (not LACP-negotiated) to be enabled on the physical switches, and mode 4 requires LACP-negotiated etherchannel to be enabled.
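The XOR policy of mode 2 can be sketched as a hash over MAC addresses. This is a simplified illustration (the Linux kernel's layer-2 hash actually operates on the final octets of the addresses; this sketch XORs the full addresses for readability):

```python
# Simplified sketch of the mode 2 (XOR) transmit policy: the layer-2 hash
# XORs source and destination MAC addresses and takes the result modulo the
# number of slave NICs, so a given MAC pair always uses the same slave.
def xor_select_slave(src_mac, dst_mac, n_slaves):
    src = int(src_mac.replace(":", ""), 16)
    dst = int(dst_mac.replace(":", ""), 16)
    return (src ^ dst) % n_slaves

print(xor_select_slave("52:54:00:00:00:01", "52:54:00:00:00:04", 2))  # 1
print(xor_select_slave("52:54:00:00:00:01", "52:54:00:00:00:03", 2))  # 0
```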

Figure 4-7 Network Bonds


Diagram illustrating bonds binding NICs into a single interface, as described in the preceding text.

MAC Address Pools

MAC address pools define the range (or ranges) of MAC addresses allocated for each cluster. A MAC address pool is specified for each cluster. By using MAC address pools, the Manager can automatically generate and assign MAC addresses to new virtual network devices, which helps to prevent MAC address duplication. MAC address pools are more memory efficient when all MAC addresses related to a cluster are within the range for the assigned MAC address pool.

The same MAC address pool can be shared by multiple clusters, but each cluster has a single MAC address pool assigned. A default MAC address pool is created by the Manager and is used if another MAC address pool is not assigned.

Note:

If more than one cluster shares a network, you should not rely solely on the default MAC address pool because the virtual machines in each cluster attempt to use the same range of MAC addresses, which can lead to conflicts. To avoid MAC address conflicts, check the MAC address pool ranges to ensure that each cluster is assigned a unique MAC address range.

The MAC address pool assigns the next available MAC address after the last address that is returned to the pool. If there are no further addresses left in the range, the search starts again from the beginning of the range. If a single MAC address pool defines multiple ranges with available addresses, the ranges take turns serving incoming requests in the same way that addresses are selected within a range.
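The single-range allocation behavior described above can be sketched as follows. This is a hypothetical illustration (addresses are plain integers and the `MacAddressPool` class is not the Manager's implementation):

```python
# Hypothetical sketch of the allocation behavior described above, using
# integers for MAC addresses: the pool assigns the next available address
# after the last one returned, wrapping to the start of the range.
class MacAddressPool:
    def __init__(self, start, end):        # inclusive range
        self.start, self.end = start, end
        self.in_use = set()
        self.cursor = start

    def _advance(self, addr):
        size = self.end - self.start + 1
        return self.start + (addr - self.start + 1) % size

    def allocate(self):
        for _ in range(self.end - self.start + 1):
            candidate = self.cursor
            self.cursor = self._advance(candidate)
            if candidate not in self.in_use:
                self.in_use.add(candidate)
                return candidate
        raise RuntimeError("MAC address pool exhausted")

    def release(self, addr):
        self.in_use.discard(addr)
        self.cursor = self._advance(addr)  # search resumes after this address

pool = MacAddressPool(0x525400000000, 0x525400000003)
a = pool.allocate()          # ...:00
pool.allocate()              # ...:01
pool.release(a)
print(hex(pool.allocate()))  # 0x525400000002, the next free after the release
```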

Storage

Oracle Linux Virtualization Manager uses a centralized storage system for virtual machine disk images, ISO files and snapshots. You can use Network File System (NFS), Internet Small Computer System Interface (iSCSI), Fibre Channel Protocol (FCP), or Gluster FS storage. You can also configure local storage attached directly to hosts. For more information, see Storage in the Oracle Linux Virtualization Manager: Administration Guide.

A data center cannot be initialized unless a storage domain is attached to it and activated.

The storage must be located on the same subnet as the Oracle Linux KVM hosts that will use the storage, in order to avoid issues with routing.

Since you need to create, configure, attach, and maintain storage, make sure you are familiar with the storage types and their use. Read your storage array manufacturer's guides for more information.

Storage Domains

A storage domain is a collection of images that have a common storage interface. A storage domain contains complete images of templates, virtual machines, virtual machine snapshots, or ISO files. Oracle Linux Virtualization Manager supports storage domains that are block devices (SAN - iSCSI or FCP) or a file system (NAS - NFS or Gluster).

On NFS or Gluster, all virtual disks, templates, and snapshots are files. On SAN (iSCSI/FCP), each virtual disk, template or snapshot is a logical volume.

Virtual machines that share the same storage domain can be migrated between hosts that belong to the same cluster.

Storage, also referred to as a data domain, is used to store the virtual hard disks, snapshots, ISO files, and Open Virtualization Format (OVF) files for virtual machines and templates. Every data center must have at least one data domain. Data domains cannot be shared between data centers.

Note:

The Administration Portal currently offers options for creating storage domains that are export domains or ISO domains. These options are deprecated.

Detaching a storage domain from a data center removes the association but does not remove the storage domain from the environment. A detached storage domain can be attached to another data center, and its data, such as virtual machines and templates, remains attached to the storage domain.

Storage Pool Manager

The Storage Pool Manager (SPM) is a management role assigned to one of the hosts in a data center enabling it to manage the storage domains of the data center. Any host in the data center can run the SPM entity, which is assigned by the engine. SPM controls access to storage by coordinating the metadata across the storage domains. This includes creating, deleting, and manipulating virtual disks (images), snapshots, and templates, and allocating storage for sparse block devices (on SAN).

The host running as SPM can still host virtual resources. The SPM priority setting for hosts enables you to prioritize which host is assigned the SPM role. Since the SPM role uses some of the host's available resources, it is important to prioritize hosts that can afford the resources.

Because the SPM must always be available, the engine assigns the SPM role to another host if the SPM host becomes unavailable. A host with higher SPM priority is assigned the SPM role before a host with lower SPM priority.

Virtual Machine Storage

The Storage Pool Manager (SPM) is responsible for creating and deleting virtual disks, as well as snapshots, and templates. In addition it allocates storage for sparse block devices.

  • If you are using NFS or local storage, the SPM creates a thin provisioned virtual disk by default.
  • If you are using iSCSI storage or other block-based devices, Logical Unit Numbers (LUNs) are provided to the SPM. The SPM creates a volume group on top of the LUNs and logical volumes for use as virtual machine disks, and the SPM preallocates the space by default.
  • If a virtual disk is thinly-provisioned, a 1 GB logical volume is created with a QCOW2 format. Use thin provisioning for virtual machines with low I/O requirements.
  • The virtual machine's host continuously monitors the logical volume used for its virtual disk. You can set a threshold so that when the disk usage nears the threshold the host notifies the SPM and extends the logical volume by 1 GB.
  • If the storage in a pool starts to become exhausted, a new LUN can be added to the volume group. The SPM automatically distributes the additional storage to logical volumes that need it.
  • If a virtual disk is preallocated, a logical volume of the specified size in GB is created and the virtual disk uses the RAW format. Use preallocated disks for virtual machines with high levels of I/O. Preallocated disks cannot be enlarged.
  • If an application requires storage to be shared between virtual machines, use Shareable virtual disks which can be attached to multiple virtual machines concurrently.

    QCOW2 format virtual disks cannot be shareable. You cannot take a snapshot of a shared disk, and virtual disks that have snapshots cannot be marked shareable. You cannot live migrate a shared disk.

    If the virtual machines are not cluster-aware, mark shareable disks as read-only to avoid data corruption.

  • Use direct LUN to enable virtual machines to directly access RAW block-based storage devices on the host bus adapter (HBA). The mapping of the direct LUN to the host causes the storage to be emulated as file-based storage to virtual machines. This removes a layer of abstraction between virtual machines and their data as the virtual machine is being granted direct access to block-based storage LUNs.
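The thin-provisioning monitoring loop described in the list above can be sketched as follows. This is a hypothetical illustration (the function, the 0.5 GiB free-space threshold, and the byte arithmetic are assumptions for the sketch; only the 1 GB starting size and 1 GB extension come from the text):

```python
GiB = 1024 ** 3

# Hypothetical sketch of the monitoring loop described above: the host
# watches the logical volume backing a thin virtual disk and has the SPM
# extend it by 1 GB when free space drops below a threshold (0.5 GiB here,
# chosen for illustration).
def monitor_thin_disk(allocated, used, threshold_free=GiB // 2, extend_by=GiB):
    if allocated - used < threshold_free:
        return allocated + extend_by   # SPM extends the logical volume
    return allocated

alloc = 1 * GiB                        # thin disks start as a 1 GB volume
alloc = monitor_thin_disk(alloc, used=7 * GiB // 10)
print(alloc // GiB)  # 2 -- extended, because less than 0.5 GiB was free
```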

Storage Leases

When you add a storage domain to Oracle Linux Virtualization Manager, a special volume called xleases is created. Virtual machines can acquire a lease on this special volume, which enables a virtual machine to start on another host even if the original host loses power.

A storage lease is configured automatically for the virtual machine when you select a storage domain to hold the VM lease. (See Configuring a Highly Available Virtual Machine in the Oracle Linux Virtualization Manager: Administration Guide.) This triggers a request to the engine to create a new lease, which the engine then sends to the SPM. The SPM creates a lease and a lease ID for the virtual machine on the xleases volume. VDSM then uses sanlock to acquire an exclusive lock on the virtual disk.

The lease id and other information is then sent from the SPM to the engine. The engine then updates the virtual machine's device list with the lease information.

Local Storage

Local storage is storage that is attached directly to an Oracle Linux KVM host, such as a local physical disk or a locally attached SAN. When a KVM host is configured to use local storage, it is automatically added to a cluster where it is the only host. This is because clusters with multiple hosts must have shared storage domains accessible to all hosts.

When you use local storage, features such as live migration, scheduling, and fencing are not available.

For more information, see Configuring a KVM Host to Use Local Storage in the Oracle Linux Virtualization Manager: Administration Guide.

System Backup and Recovery

You use the engine-backup tool to take regular backups of the Oracle Linux Virtualization Manager. The tool backs up the engine database and configuration files into a single file and can be run without interrupting the ovirt-engine service.

You also use the engine-backup tool to restore a backup, although the steps involved vary depending on the restoration destination. For example, you can restore a backup to a fresh installation of Oracle Linux Virtualization Manager, over an existing installation, or with local or remote databases.

If you restore a backup to a fresh installation of Oracle Linux Virtualization Manager, do not run the engine-setup command to configure the Manager until after the backup is restored.
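As a rough sketch, a backup and a restore to a fresh installation might look like the following. The file paths are illustrative; consult Backup and Restore in the Administration Guide for the exact options supported by your release.

```shell
# Back up the engine database and configuration files into a single archive.
# This can run without interrupting the ovirt-engine service.
engine-backup --mode=backup \
    --file=/var/tmp/olvm-backup.tar.gz \
    --log=/var/tmp/olvm-backup.log

# Restore on a fresh installation (packages installed, engine-setup NOT yet run).
# --provision-db creates the engine database before restoring into it.
engine-backup --mode=restore \
    --file=/var/tmp/olvm-backup.tar.gz \
    --log=/var/tmp/olvm-restore.log \
    --provision-db --restore-permissions

# After the restore completes, run engine-setup to finish configuring the Manager.
engine-setup
```

If the backup includes the data warehouse database, the restore also needs a provisioning option for that database (for example, --provision-dwh-db in upstream documentation).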

You can also use data center recovery if the data in your primary data domain gets corrupted. This enables you to replace the primary data domain of a data center with a new primary data domain.

Reinitializing a data center enables you to restore all other resources associated with the data center, including clusters, hosts, and storage domains. You can import any backed-up or exported virtual machines or templates into the new primary data domain.

For more information, see Backup and Restore in the Oracle Linux Virtualization Manager: Administration Guide.

Users, Roles, and Permissions

In Oracle Linux Virtualization Manager, there are two types of user domains: local domain and external domain. During the installation of the Manager, a default local domain called the internal domain is created with a default admin@internal user. This account is intended for use when initially configuring the environment and for troubleshooting.

You can create additional users on the internal domain using the ovirt-aaa-jdbc-tool command-line utility. For more information about creating users, see Administering User and Group Accounts from the Command Line in the Oracle Linux Virtualization Manager: Administration Guide.
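For example, a new internal-domain user might be created and given an initial password as follows; the user name, attribute values, and validity date are illustrative.

```shell
# Create a user in the internal domain.
ovirt-aaa-jdbc-tool user add jsmith \
    --attribute=firstName=Jane --attribute=lastName=Smith

# Set an initial password (the expiry date shown is only an example).
ovirt-aaa-jdbc-tool user password-reset jsmith \
    --password-valid-to="2030-01-01 00:00:00Z"

# Confirm the account details.
ovirt-aaa-jdbc-tool user show jsmith
```

Note that the new account cannot do anything useful until roles and permissions are assigned to it in the Administration Portal.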

User properties consist of the roles and permissions assigned to a user. The security roles for all actions and objects in the platform are granular, inheritable, and provide for multi-level administration.

Roles are sets of permissions defined in the Administration Portal and are used to specify permissions to resources in the environment. There are two types of roles:

  • Administrator Role - Conveys management permissions of physical and virtual resources through the Administration Portal. Examples of roles within this group are SuperUser, ClusterAdmin and DataCenterAdmin.

  • User Role - Conveys permissions for managing and accessing virtual machines and templates through the VM Portal by filtering what is visible to a user. Roles can be assigned to users for individual resources or for levels of objects. Examples of roles within this group are UserRole, PowerUserRole, and UserVmManager.

You can create new roles with permissions tailored to a user's function within the environment, and you can remove specific permissions for a resource from a role assigned to a particular user.

You can also use an external directory server to provide user account and authentication services. You can use Active Directory, OpenLDAP, and 389ds. Use the ovirt-engine-extension-aaa-ldap-setup command to configure the connection to these directories.

Note:

After you have attached an external directory server, added the directory users, and assigned them with appropriate roles and permissions, the admin@internal user can be disabled if it is not required. For more information, see Disabling User Accounts in the Oracle Linux Virtualization Manager: Administration Guide.

For more information on users, roles, and permissions, see Global Configuration in the Oracle Linux Virtualization Manager: Administration Guide.

System State and History

When you install and configure Oracle Linux Virtualization Manager, you are prompted to install and configure the engine and data warehouse PostgreSQL databases. See Engine Configuration Options in the Oracle Linux Virtualization Manager: Getting Started Guide.

  • The engine database (engine) stores information about the state of the Oracle Linux Virtualization Manager environment and its configuration and performance.
  • The data warehouse database is a management history database (ovirt_engine_history) that can be used by any application to retrieve historical configuration information and statistical metrics for data centers, clusters, and hosts.

The data warehouse service (ovirt-engine-dwhd) extracts data from the engine database, transforms it, and loads it into the ovirt_engine_history database. This process is commonly known as ETL (extract, transform, load).

Both the history and engine databases can run on a remote host to reduce the load on the Manager host. Running these databases on a remote host is a technology preview feature; see Technology Preview in the Oracle Linux Virtualization Manager: Release Notes.

For more information, see Data Warehouse and Databases.

Event Logging and Notifications

Oracle Linux Virtualization Manager captures events in the following log files:

  • /var/log/ovirt-engine/engine.log contains all Oracle Linux Virtualization Manager UI crashes, Active Directory lookups, database issues, and other events.
  • /var/log/vdsm/vdsm.log is the log file for VDSM, the engine's agent on the virtualization host(s), and contains host-related events.
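As a quick illustration, the following commands, run on the respective hosts, inspect these logs; the paths are as given above.

```shell
# On the Manager host: show the most recent errors logged by the engine.
grep ERROR /var/log/ovirt-engine/engine.log | tail -n 20

# On an Oracle Linux KVM host: follow VDSM activity in real time.
tail -f /var/log/vdsm/vdsm.log
```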

Within the Administration Portal, you can also view Alerts and Events in the Notification Drawer, which you can access by clicking the Bell icon in the upper-right corner.

The ovirt-log-collector tool enables you to collect relevant logs from across the environment. To use the tool, you must be logged in to the Oracle Linux Virtualization Manager host as the root user, and you must supply the credentials of an Administration Portal user with administration privileges.

The tool collects all logs from the Manager host, the Oracle Linux KVM hosts it manages, and the database.
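A typical invocation, run as root on the Manager host, might look like the following; the tool prompts for the Administration Portal password unless it is supplied on the command line.

```shell
# List the hosts that the collector can gather logs from.
ovirt-log-collector list

# Collect logs from the Manager, its database, and all managed KVM hosts
# into a single archive (the output location may vary by release).
ovirt-log-collector
```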

Oracle Linux Virtualization Manager provides event notification services to help you monitor your virtualization environment. You can configure the engine to notify designated users by email when certain events occur, or to send Simple Network Management Protocol (SNMP) traps containing system event information to one or more external SNMP managers.

For more information about configuring event notifications, see Using Event Notifications in the Oracle Linux Virtualization Manager: Administration Guide.

Data Visualization with Grafana

Grafana is a web-based tool used to display data collected from the data warehouse database (ovirt_engine_history). Data from the engine is collected every minute and aggregated into hourly and daily aggregations. Data retention is defined during engine setup in the scale setting of the data warehouse configuration. Sample scaling can be set to:

  • Basic (default) - sampled data saved for 24 hours and hourly data saved for 1 (one) month; no daily aggregations are saved.

  • Full (recommended) - sampled data saved for 24 hours, hourly data saved for 2 (two) months, and daily aggregations saved for 5 (five) years.

    Note:

    Full sample scaling may require migrating the data warehouse to a separate virtual machine. For more information, see the oVirt Data Warehouse Guide.

For information on configuring Oracle Linux Virtualization Manager for Grafana and the default dashboards, see Using Grafana in the Oracle Linux Virtualization Manager: Administration Guide.

For more information on using Grafana, see the Grafana website.

Default Grafana Dashboards

The following dashboards are available by default in the initial Grafana setup to report data center, cluster, host, and virtual machine data.

Executive

  • System

    Resource usage and up-time for hosts and storage domains in the system, according to the latest configurations.

  • Data Center

    Resource usage, peaks, and up-time for clusters, hosts, and storage domains in a selected data center, according to the latest configurations.

  • Cluster

    Resource usage, peaks, over-commit, and up-time for hosts and virtual machines in a selected cluster, according to the latest configurations.

  • Host

    Latest and historical configuration details and resource usage metrics of a selected host over a selected period.

  • Virtual Machine

    Latest and historical configuration details and resource usage metrics of a selected virtual machine over a selected period.

  • Executive

    User resource usage and number of operating systems for hosts and virtual machines in selected clusters over a selected period.

Inventory

  • Inventory

    Number of hosts, virtual machines, and running virtual machines, resource usage, and over-commit rates for selected data centers, according to the latest configurations.

  • Hosts Inventory

    FQDN, VDSM version, operating system, CPU model, CPU cores, memory size, create date, delete date, and hardware details for selected hosts, according to the latest configurations.

  • Storage Domains Inventory

    Domain type, storage type, available disk size, used disk size, total disk size, creation date, and delete date for selected storage domains over a selected period.

  • Virtual Machines Inventory

    Template name, operating system, CPU cores, memory size, create date, and delete date for selected virtual machines, according to the latest configurations.

Service Level

  • Uptime

    Planned downtime, unplanned downtime, and total time for the hosts, high availability virtual machines, and all virtual machines in selected clusters in a selected period.

  • Hosts Uptime

    Uptime, planned downtime, and unplanned downtime for selected hosts in a selected period.

  • Virtual Machines Uptime

    Uptime, planned downtime, and unplanned downtime for selected virtual machines in a selected period.

  • Cluster Quality of Service
    • Hosts

      Time selected hosts have performed above and below the CPU and memory threshold in a selected period.

    • Virtual Machines

      Time selected virtual machines have performed above and below the CPU and memory threshold in a selected period.

Trend

  • Trend

    Usage rates for the 5 most and least utilized virtual machines and hosts by memory and by CPU in selected clusters over a selected period.

  • Hosts Trend

    Resource usage (number of virtual machines, CPU, memory, and network Tx/Rx) for selected hosts over a selected period.

  • Virtual Machines Trend

    Resource usage (CPU, memory, network Tx/Rx, disk I/O) for selected virtual machines over a selected period.

  • Hosts Resource Usage

    Daily and hourly resource usage (number of virtual machines, CPU, memory, network Tx/Rx) for selected hosts in a selected period.

  • Virtual Machines Resource Usage

    Daily and hourly resource usage (CPU, memory, network Tx/Rx, disk I/O) for selected virtual machines in a selected period.