3 Administrative Tasks

The following are common Oracle Linux Virtualization Manager administration tasks. For conceptual information about these topics, refer to the Oracle Linux Virtualization Manager: Architecture and Planning Guide.

For additional administrative tasks, see the oVirt Documentation.

Databases

Oracle Linux Virtualization Manager creates a PostgreSQL database called engine during installation. If you installed the data warehouse, you also have the ovirt_engine_history database.

Occasionally, you should perform maintenance on these databases. Running the Engine Vacuum tool updates tables and removes dead rows, allowing disk space to be reused.

Reclaiming Database Storage

To reclaim database storage using the Engine Vacuum tool, you must log into the engine host as the root user and provide the administration credentials for the oVirt environment.

  1. Check the current database size:
    # /usr/share/ovirt-engine/dbscripts/engine-psql.sh -c "SELECT datname as db_name, pg_size_pretty(pg_database_size(datname)) as db_usage FROM pg_database"
  2. Vacuum the Engine database.
    1. Stop the ovirt-engine, ovirt-engine-dwhd, and grafana-server services:
      # systemctl stop ovirt-engine ovirt-engine-dwhd grafana-server
    2. Back up the engine database:
      # grep 'ENGINE_DB_PASSWORD=' /etc/ovirt-engine/engine.conf.d/10-setup-database.conf
      
      # PGPASSWORD=your-engine-db-pw /usr/bin/pg_dump \
       -E UTF8 \
       --disable-dollar-quoting \
       --disable-triggers \
       -U engine \
       -h localhost \
       -p 5432 \
       --format=custom \
       --file=/var/lib/ovirt-engine/backups/engine-$(date +%Y%m%d%H%M%S).$$.dump engine
    3. Vacuum the engine database:
      # /usr/share/ovirt-engine/bin/engine-vacuum.sh -f -v
    4. Start the ovirt-engine, ovirt-engine-dwhd, and grafana-server services:
      # systemctl start ovirt-engine ovirt-engine-dwhd grafana-server
  3. Vacuum the data warehouse (ovirt_engine_history) database.
    1. Stop the ovirt-engine, ovirt-engine-dwhd, and grafana-server services:
      # systemctl stop ovirt-engine ovirt-engine-dwhd grafana-server
    2. Back up the ovirt_engine_history database:
      # grep 'DWH_DB_PASSWORD=' /etc/ovirt-engine/engine.conf.d/10-setup-dwh-database.conf
      
      # PGPASSWORD=your-datawarehouse-db-pw /usr/bin/pg_dump \
       -E UTF8 \
       --disable-dollar-quoting \
       --disable-triggers \
       -U ovirt_engine_history \
       -h localhost \
       -p 5432 \
       --format=custom \
       --file=/var/lib/ovirt-engine-dwh/backups/dwh-$(date +%Y%m%d%H%M%S).$$.dump ovirt_engine_history
    3. Vacuum the ovirt_engine_history database:
      # /usr/share/ovirt-engine-dwh/bin/dwh-vacuum.sh -f -v
    4. Start the ovirt-engine, ovirt-engine-dwhd, and grafana-server services:
      # systemctl start ovirt-engine ovirt-engine-dwhd grafana-server
  4. Check the post-vacuum database size:
    # /usr/share/ovirt-engine/dbscripts/engine-psql.sh -c "SELECT datname as db_name, pg_size_pretty(pg_database_size(datname)) as db_usage FROM pg_database"
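
If you perform this maintenance regularly, the steps above can be collected into a single script. The following is a minimal sketch, not an official tool: it assumes the data warehouse is installed, that the backup directories used above exist, and that the password values in the configuration files are double-quoted; adapt it to your environment before use.

    #!/bin/bash
    # Sketch: back up and vacuum both Manager databases.
    # Run as root on the engine host.
    set -euo pipefail

    # Read the database passwords from the setup configuration files.
    ENGINE_PW=$(grep 'ENGINE_DB_PASSWORD=' /etc/ovirt-engine/engine.conf.d/10-setup-database.conf | cut -d'"' -f2)
    DWH_PW=$(grep 'DWH_DB_PASSWORD=' /etc/ovirt-engine/engine.conf.d/10-setup-dwh-database.conf | cut -d'"' -f2)
    STAMP=$(date +%Y%m%d%H%M%S)

    systemctl stop ovirt-engine ovirt-engine-dwhd grafana-server

    # Back up and vacuum the engine database (-f full vacuum, -v verbose).
    PGPASSWORD="$ENGINE_PW" /usr/bin/pg_dump -E UTF8 --disable-dollar-quoting --disable-triggers \
        -U engine -h localhost -p 5432 --format=custom \
        --file=/var/lib/ovirt-engine/backups/engine-$STAMP.$$.dump engine
    /usr/share/ovirt-engine/bin/engine-vacuum.sh -f -v

    # Back up and vacuum the data warehouse database.
    PGPASSWORD="$DWH_PW" /usr/bin/pg_dump -E UTF8 --disable-dollar-quoting --disable-triggers \
        -U ovirt_engine_history -h localhost -p 5432 --format=custom \
        --file=/var/lib/ovirt-engine-dwh/backups/dwh-$STAMP.$$.dump ovirt_engine_history
    /usr/share/ovirt-engine-dwh/bin/dwh-vacuum.sh -f -v

    systemctl start ovirt-engine ovirt-engine-dwhd grafana-server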

Data Centers

Oracle Linux Virtualization Manager creates a default data center during installation. You can configure the default data center, or set up new appropriately named data centers.

A data center requires a functioning cluster, host, and storage domain to operate in your virtualization environment.

Creating a New Data Center

  1. Go to Compute and then select Data Centers.

    The Data Centers pane opens.

  2. Click New.

  3. Enter a Name and optional Description.

  4. Select the storage Type, Compatibility Version, and Quota Mode of the data center from the respective drop-down menus.

  5. Click OK to create the data center.

    The new data center is added to the virtualization environment and the Data Center - Guide Me menu opens to guide you through the entities that are required to be configured for the data center to operate.

    The new data center remains in Uninitialized state until a cluster, host, and storage domain are configured for it.

    You can postpone the configuration of these entities by clicking the Configure Later button. You can resume the configuration of these entities by selecting the respective data center and clicking More Actions and then choosing Guide Me from the drop-down menu.
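
The same data center can be created programmatically through the REST API. The following curl sketch is illustrative rather than a documented recipe: it assumes an engine at engine.example.com, the admin@internal account, and a locally saved CA certificate (see Uploading Images to the Data Domain for one way to fetch it); mydc is a placeholder name and local=false requests shared storage.

    # curl --cacert ca.pem --user admin@internal:password \
        -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
        -d '<data_center><name>mydc</name><local>false</local></data_center>' \
        https://engine.example.com/ovirt-engine/api/datacenters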

Clusters

Oracle Linux Virtualization Manager creates a default cluster in the default data center during installation. You can configure the default cluster, or set up new appropriately named clusters.

Creating a New Cluster

  1. Go to Compute and then select Clusters.

    The Clusters pane opens.

  2. Click New.

    The New Cluster dialog box opens with the General tab selected on the sidebar.

  3. From the Data Center drop-down list, choose the Data Center to associate with the cluster.

  4. For the Name field, enter an appropriate name for the cluster.

  5. For the Description field, enter an appropriate description for the cluster.

  6. From the Management Network drop-down list, choose the network for which to assign the management network role.

  7. From the CPU Architecture and CPU Type drop-down lists, choose the CPU processor family and minimum CPU processor that match the hosts that are to be added to the cluster.

    For both Intel and AMD CPU types, the listed CPU models are in logical order from the oldest to the newest. If your cluster includes hosts with different CPU models, choose the oldest CPU model from the list to ensure that all hosts can operate in the cluster.

  8. From the Compatibility Version drop-down list, choose the compatibility version of the cluster.

    Note:

    For more information on compatibility versions, see Changing Data Center and Cluster Compatibility Versions After Upgrading.

  9. From the Switch Type drop-down list, choose the type of switch to be used for the cluster.

    By default, Linux Bridge is selected from the drop-down list.

  10. From the Firewall Type drop-down list, choose the firewall type for hosts in the cluster.

    The firewall types available are either iptables or firewalld. By default, the firewalld option is selected from the drop-down list.

  11. The Enable Virt Service check box is selected by default. This check box designates that the cluster is to be populated with virtual machine hosts.

  12. (Optional) Review the other tabs to further configure your cluster:

    1. Click the Optimization tab on the sidebar to select the memory page sharing threshold for the cluster, and optionally enable CPU thread handling and memory ballooning on the hosts in the cluster. See Deployment Optimization.

    2. Click the Migration Policy tab on the sidebar menu to define the virtual machine migration policy for the cluster.

    3. Click the Scheduling Policy tab on the sidebar to optionally configure a scheduling policy, configure scheduler optimization settings, enable trusted service for hosts in the cluster, enable HA Reservation, and add a custom serial number policy.

    4. Click the Fencing policy tab on the sidebar to enable or disable fencing in the cluster, and select fencing options.

    5. Click the MAC Address Pool tab on the sidebar to specify a MAC address pool other than the default pool for the cluster.

  13. Click OK to create the cluster.

    The cluster is added to the virtualization environment and the Cluster - Guide Me menu opens to guide you through the entities that are required to be configured for the cluster to operate.

    You can postpone the configuration of these entities by clicking the Configure Later button. You can resume the configuration of these entities by selecting the respective cluster and clicking More Actions and then choosing Guide Me from the drop-down menu.
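
As with data centers, clusters can be created through the REST API. A minimal curl sketch, reusing the engine host and credentials assumed in the earlier example; the cluster name, data center name, and CPU type are placeholders that must match values valid in your environment.

    # curl --cacert ca.pem --user admin@internal:password \
        -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
        -d '<cluster>
              <name>mycluster</name>
              <cpu><architecture>x86_64</architecture><type>Intel Skylake Server Family</type></cpu>
              <data_center><name>mydc</name></data_center>
            </cluster>' \
        https://engine.example.com/ovirt-engine/api/clusters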

Hosts

Hosts, also known as hypervisors, are the physical servers on which virtual machines run. Full virtualization is provided by using a loadable Linux kernel module called Kernel-based Virtual Machine (KVM). KVM can concurrently host multiple virtual machines. Virtual machines run as individual Linux processes and threads on the host machine and are managed remotely by the engine.

Moving a Host to Maintenance Mode

Place a host into maintenance mode when performing common maintenance tasks, including network configuration and deployment of software updates, or before any event that might cause VDSM to stop working properly, such as a reboot, or issues with networking or storage.

When you place a host into maintenance mode, the engine attempts to migrate all running virtual machines to alternative hosts. The standard prerequisites for live migration apply; in particular, there must be at least one active host in the cluster with capacity to run the migrated virtual machines.

Note:

Virtual machines that are pinned to the host and cannot be migrated are shut down. You can check which virtual machines are pinned to the host by clicking Pinned to Host in the Virtual Machines tab of the host’s details view.

  1. Click Compute and then select Hosts.

  2. Select the desired host.

  3. Click Management and then select Maintenance.

  4. Optionally, enter a Reason for moving the host into maintenance mode, which will appear in the logs and when the host is activated again. Then, click OK.

    The host maintenance Reason field will only appear if it has been enabled in the cluster settings.

  5. Optionally, select the required options for hosts that support Gluster.

    Select the Ignore Gluster Quorum and Self-Heal Validations option to avoid the default checks. By default, the Engine checks that the Gluster quorum is not lost when the host is moved to maintenance mode. The Engine also checks that there is no self-heal activity that will be affected by moving the host to maintenance mode. If the Gluster quorum will be lost or if there is self-heal activity that will be affected, the Engine prevents the host from being placed into maintenance mode. Only use this option if there is no other way to place the host in maintenance mode.

    Select the Stop Gluster Service option to stop all Gluster services while moving the host to maintenance mode.

    These fields will only appear in the host maintenance window when the selected host supports Gluster.

  6. Click OK to initiate maintenance mode.

  7. All running virtual machines are migrated to alternative hosts. If the host is the Storage Pool Manager (SPM), the SPM role is migrated to another host. The Status field of the host changes to Preparing for Maintenance, and finally Maintenance when the operation completes successfully. VDSM does not stop while the host is in maintenance mode.

    Note:

    If migration fails for any virtual machine, click Management and then select Activate on the host to stop the operation that is placing it into maintenance mode. Then click Cancel Migration on the virtual machine to stop the migration.
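
Moving a host into maintenance mode can also be scripted with the REST API's deactivate action. A minimal sketch, assuming the engine host and credentials used in the earlier curl examples and a host ID obtained from GET /ovirt-engine/api/hosts:

    # curl --cacert ca.pem --user admin@internal:password \
        -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
        -d '<action/>' \
        https://engine.example.com/ovirt-engine/api/hosts/<host-id>/deactivate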

Activating a Host from Maintenance Mode

You must activate a host from maintenance mode before using it.

  1. Click Compute and then select Hosts.

  2. Select the host.

  3. Click Management and then select Activate.

  4. When complete, the host status changes to Unassigned, and finally Up.

    Virtual machines can now run on the host. Virtual machines that were migrated off the host when it was placed into maintenance mode are not automatically migrated back to the host when it is activated, but can be migrated manually. If the host was the Storage Pool Manager (SPM) before being placed into maintenance mode, the SPM role does not return automatically when the host is activated.
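
The corresponding REST API call is the activate action; the same assumptions as the deactivate sketch above apply.

    # curl --cacert ca.pem --user admin@internal:password \
        -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
        -d '<action/>' \
        https://engine.example.com/ovirt-engine/api/hosts/<host-id>/activate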

Removing a Host

You may need to remove a host from the Oracle Linux Virtualization Manager environment when upgrading to a newer version.

  1. Click Compute and then select Hosts.

  2. Select the host.

  3. Click Management and then select Maintenance.

  4. Once the host is in maintenance mode, click Remove.

    Select the Force Remove check box if the host is part of a Gluster Storage cluster and has volume bricks on it, or if the host is non-responsive.

  5. Click OK.

Networks

With Oracle Linux Virtualization Manager, you can create custom vNICs for your virtual machines.

Note:

If you plan to use VLANs on top of bonded interfaces, refer to the My Oracle Support (MOS) article How to Configure 802.1q VLAN on NIC (Doc ID 1642456.1) for instructions.

Creating a Logical Network

To create a logical network:

  1. Go to Network and then click Networks.

  2. On the Networks pane, click New.

    The New Logical Network dialog box opens with the General tab selected on the sidebar.

  3. From the Data Center drop-down list, select the Data Center for the network.

    The Default data center is pre-selected in the drop-down list.

    For the procedures to create new data centers or clusters, refer to the Data Centers or Clusters tasks.

  4. For the Name field, enter a name for the new network.

  5. Under the Network Parameters section, the VM Network check box is selected by default. Leave the VM Network check box selected if you want to create a new virtual machine network.

  6. (Optional) Configure other settings for the new logical network from the other tabs on the New Logical Network sidebar.

  7. Click OK to create the network.
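
Logical networks can also be created through the REST API. A minimal curl sketch, reusing the assumptions from the earlier examples; vm_pub and Default are illustrative names.

    # curl --cacert ca.pem --user admin@internal:password \
        -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
        -d '<network><name>vm_pub</name><data_center><name>Default</name></data_center></network>' \
        https://engine.example.com/ovirt-engine/api/networks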

Assigning a Logical Network to a KVM Host

To assign a logical network to a KVM host:

  1. Go to Compute and then click Hosts.

    The Hosts pane opens.

  2. Under the Name column, click the name of the host for which to add the network.

    The following screenshot shows the Hosts pane with the name of the host highlighted in a red rectangular box to emphasize where you need to click to set up a network on a host.

    Figure 3-1 Hosts Pane


    The Hosts pane, as described in the preceding text.

    After clicking the name of the host, the General tab opens with details about the host.

  3. Click the Network Interfaces tab on the horizontal menu.

    The Network Interfaces tab opens with details about the network interfaces on the available host.

  4. Highlight the network interface that you want to use for the network being added by clicking the row for the respective interface.

  5. Click Setup Host Networks.

    The Setup Host Networks dialog box opens for the host. The physical interfaces on the host are listed under the Interfaces column and any logical networks assigned to the interface are displayed under the Assigned Logical Networks column. Unassigned logical networks are displayed under the Unassigned Logical Networks column.

    In the following example screenshot, a logical network named vm_pub is displayed under the Unassigned Logical Networks column.

    Figure 3-2 Setup Host Dialog Box: Unassigned Logical Networks


    The Setup Host Networks dialog box for an example host, as described in the preceding text.
  6. Select the network you want to add from the Unassigned Logical Networks column by left-clicking the network and, while holding down the mouse button, dragging it into the box to the right of the available network interface where you want to add the network.

    Alternatively, you can right-click the network and select the available interface from a drop-down list.

    For example, the logical network named vm_pub is assigned to the available network interface named eno2. In the following screenshot, after dragging the network from Unassigned Logical Networks over to this interface, the network named vm_pub appears under the Assigned Logical Networks column as assigned to the network interface named eno2.

    Figure 3-3 Setup Host Dialog Box: Assigned Logical Networks


    The Setup Host Networks dialog box after the example logical network has been assigned to a network interface, as described in the preceding text.
  7. After editing the network settings, click OK to save the settings.

  8. Click OK to add the network.

Customizing vNIC Profiles for Virtual Machines

To customize vNICs for virtual machines:

  1. Go to Compute and then click Virtual Machines.

    The Virtual Machines pane opens with the list of virtual machines that have been created.

  2. Under the Name column, select the virtual machine for which to add the virtual machine network.

    The General tab opens with details about the virtual machine.

  3. Click the Network Interfaces tab.

    The Network Interfaces tab opens with the available network interface to be used for the network.

  4. Highlight the network interface by clicking the row for the respective interface and then click Edit on the right side above the interface listing.

    The Edit Network Interface dialog box opens.

  5. In the Edit Network Interface dialog box, update the following fields:
    1. From the Profile drop-down list, select the network to be added to the virtual machine.

    2. Click the Custom MAC address check box, and then enter or update the MAC address that is allocated for this virtual machine in the text entry field.

  6. Click OK when you are finished editing the network interface settings for the virtual machine.

  7. Go to Compute and then click Virtual Machines.

    The Virtual Machines pane opens.

    Important:

    Since virtual machines can start on any host in a data center/cluster, all hosts must have the customized VM network assigned to one of their NICs. Ensure that you assign this customized VM network to each host before booting the virtual machine. For more information, see Assigning a Logical Network to a KVM Host.

  8. Highlight the virtual machine where you added the network and then click Run to boot the virtual machine.

    The red down arrow icon to the left of the virtual machine turns green and the Status column displays UP when the virtual machine is up and running on the network.

Attaching and Configuring a Logical Network to a Host Network Interface

You can change the settings of physical host network interfaces, move the management network from one physical host network interface to another, and assign logical networks to physical host network interfaces.

Before you begin the steps below, keep in mind the following:

  • To change the IP address of a host, you must remove the host and then re-add it.

  • To change the VLAN settings of a host, see Editing a Host’s VLAN Settings in oVirt Documentation.

  • You cannot assign logical networks offered by external providers to physical host network interfaces; such networks are dynamically assigned to hosts as they are required by virtual machines.

  • If a switch has been configured to provide Link Layer Discovery Protocol (LLDP) information, you can hover your cursor over a physical network interface to view the switch port’s current configuration.

    Note:

    Before assigning logical networks, check the configuration. To help detect to which ports and on which switch the host’s interfaces are patched, review Port Description (TLV type 4) and System Name (TLV type 5). The Port VLAN ID shows the native VLAN ID configured on the switch port for untagged ethernet frames. All VLANs configured on the switch port are shown as VLAN Name and VLAN ID combinations.

To edit host network interfaces and assign logical networks:

  1. Click Compute and then select Hosts.

  2. Click the host’s name. This opens the details view.

  3. Click the Network Interfaces tab.

  4. Click Setup Host Networks.

  5. Optionally, hover your cursor over a host network interface to view configuration information provided by the switch.

  6. Attach a logical network to a physical host network interface by selecting and dragging the logical network into the Assigned Logical Networks area next to the physical host network interface.

    If a NIC is connected to more than one logical network, only one of the networks can be non-VLAN. All the other logical networks must be unique VLANs.

  7. Configure the logical network.

    1. Hover your cursor over an assigned logical network and click the pencil icon. This opens the Edit Management Network window.

    2. Configure IPv4 or IPv6:

      • From the IPv4 tab, set the Boot Protocol. If you select Static, enter the IP, Netmask / Routing Prefix, and the Gateway.

      • From the IPv6 tab:
        • Set the Boot Protocol to Static.

        • For Routing Prefix, enter the length of the prefix using a forward slash and decimals. For example: /48

        • In the IP field, enter the complete IPv6 address of the host network interface. For example: 2001:db8::1:0:0:6

        • In the Gateway field, enter the source router’s IPv6 address. For example: 2001:db8::1:0:0:1

      Note:

      If you change the host’s management network IP address, you must reinstall the host for the new IP address to be configured.

      Each logical network can have a separate gateway defined from the management network gateway. This ensures traffic that arrives on the logical network is forwarded using the logical network’s gateway instead of the default gateway used by the management network.

      Set all hosts in a cluster to use the same IP stack for their management network; either IPv4 or IPv6 only.

    3. To configure a network bridge, click the Custom Properties tab, select bridge_opts from the list, and enter a valid key and value with the syntax of key=value.

      The following are valid keys with example values:

      forward_delay=1500
      group_addr=1:80:c2:0:0:0
      group_fwd_mask=0x0
      hash_max=512
      hello_time=200
      max_age=2000
      multicast_last_member_count=2
      multicast_last_member_interval=100
      multicast_membership_interval=26000
      multicast_querier=0
      multicast_querier_interval=25500
      multicast_query_interval=13000
      multicast_query_response_interval=1000
      multicast_query_use_ifaddr=0
      multicast_router=1
      multicast_snooping=1
      multicast_startup_query_count=2
      multicast_startup_query_interval=3125

      Separate multiple entries with a whitespace character.

    4. To configure ethernet properties, click the Custom Properties tab, select ethtool_opts from the list, and enter a valid value using the format of the command-line arguments of ethtool. For example:

      --coalesce em1 rx-usecs 14 sample-interval 3 --offload em2 rx on lro on tso off \
      --change em1 speed 1000 duplex half

      You can use wildcard to apply the same option to all of a network’s interfaces, for example:

      --coalesce * rx-usecs 14 sample-interval 3

      The ethtool_opts option is not available by default; you need to add it using the engine configuration tool (see the example after this procedure). To view ethtool properties, from a command line type man ethtool to open the man page. For more information, see How to Set Up oVirt Engine to Use Ethtool in oVirt Documentation.

    5. To configure Fibre Channel over Ethernet (FCoE), click the Custom Properties tab, select fcoe from the list, and enter enable=yes. Separate multiple entries with a whitespace character.

      The fcoe option is not available by default; you need to add it using the engine configuration tool. For more information, see How to Set Up oVirt Engine to Use FCoE in oVirt Documentation.

    6. To change the default network used by the host from the management network (ovirtmgmt) to a non-management network, configure the non-management network’s default route. For more information, see Configuring a Non-Management Logical Network as the Default Route in oVirt Documentation.

    7. If your logical network definition is not synchronized with the network configuration on the host, select the Sync network check box. For more information about unsynchronized hosts and how to synchronize them, see Synchronizing Host Networks in oVirt Documentation.

  8. To check network connectivity, select the Verify connectivity between Host and Engine check box.

    Note:

    The host must be in maintenance mode.
  9. Click OK.

    Note:

    If not all network interface cards for the host are displayed, click Management and then Refresh Capabilities to update the list of network interface cards available for that host.
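
The ethtool_opts and fcoe custom properties mentioned in this procedure are registered with the engine configuration tool on the Manager host. The following is a sketch only: the key list is cumulative per cluster compatibility level, so check the current value first and merge rather than overwrite, and confirm the exact syntax against the oVirt documentation for your release (4.5 is assumed here).

    # engine-config -g UserDefinedNetworkCustomProperties
    # engine-config -s UserDefinedNetworkCustomProperties='ethtool_opts=.*;fcoe=^((enable|dcb|auto_vlan)=(yes|no),?)*$' --cver=4.5
    # systemctl restart ovirt-engine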

Storage

Oracle Linux Virtualization Manager uses a centralized storage system for virtual machine disk images, ISO files, and snapshots. You can use Network File System (NFS), Internet Small Computer System Interface (iSCSI), or Fibre Channel Protocol (FCP) storage. You can also configure local storage attached directly to hosts.

The following administration tasks cover preparing and adding local, NFS, iSCSI, and FCP storage.

Using Local Storage on a KVM Host

Before you begin, ensure the following prerequisites have been met:

  • You have allocated disk space for local storage. You can allocate an entire physical disk on the host or you can use a portion of the disk.

  • You have created a filesystem on the block device path to be used for local storage. Local storage should always be defined on a file system that is separate from the root file system (/).

Preparing Local Storage for a KVM Host

To prepare local storage for a KVM host:

  1. Create the directory to be used for the local storage on the host.

    # mkdir -p /data/images
  2. Ensure that the directory has permissions that allow read-write access to the vdsm user (UID 36) and kvm group (GID 36).

    # chown 36:36 /data /data/images
    # chmod 0755 /data /data/images

    The local storage can now be added to your virtualization environment.
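
Before adding the directory in the Manager, you can confirm that the vdsm user can write to it; a quick check, assuming the paths used above:

    # sudo -u vdsm touch /data/images/write-test
    # sudo -u vdsm rm /data/images/write-test

If either command fails, recheck the ownership and permissions set in the previous step.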

Configuring a KVM Host to Use Local Storage

When you configure a KVM host to use local storage, it is automatically added to a new data center and cluster that can contain no other hosts. With local storage, features, such as live migration, fencing, and scheduling, are not available.

To configure a KVM host to use local storage:

  1. Go to Compute, and then click Hosts.

    The Hosts pane opens.

  2. Highlight the host on which to add the local storage domain.

  3. Click Management and then select Maintenance from the drop-down list.

    The Status column for the host displays Maintenance when the host has successfully entered into Maintenance mode.

  4. After the host is in Maintenance mode, click Management and then select Configure Local Storage from the drop-down list.

    The Configure Local Storage pane opens with the General tab selected.

  5. Click Edit next to the Data Center, Cluster, and Storage fields to configure and name the local storage domain.

  6. In the Set the path to your local storage text input field, specify the path to your local storage domain.

    For more information, refer to Preparing Local Storage for a KVM Host.

  7. Click OK to add the local storage domain.

    When the virtualization environment is finished adding the local storage, the new data center, cluster, and storage created for the local storage appears on the Data Center, Clusters, and Storage panes, respectively.

    You can click Tasks to monitor the various processing steps that are completed to add the local storage to the host.

    You can also verify the successful addition of the local storage domain by viewing the /var/log/ovirt-engine/engine.log file.

Using NFS Storage

Before preparing the NFS share, ensure your environment meets the following conditions:

  • Ensure that the Manager and KVM hosts are running Oracle Linux 8.8 or later in an environment with two or more servers, where one acts as the Manager host and the others act as KVM hosts.

    The installation creates a vdsm:kvm (36:36) user and group in the /etc/passwd and /etc/group files, respectively.

    # grep vdsm /etc/passwd
    vdsm:x:36:36:Node Virtualization Manager:/:/sbin/nologin
    # grep kvm /etc/group
    kvm:x:36:qemu,sanlock
  • An Oracle Linux NFS file server that is reachable by your virtualization environment.

Preparing NFS Storage

To prepare NFS storage:

  1. On a Linux file server that has access to the virtualization environment, create a directory that is to be used for the data domain.

    # mkdir -p /nfs/olv_ovirt/data
  2. Set the required permissions on the new directory to allow read-write access to the vdsm user (UID 36) and kvm group (GID 36).

    # chown -R 36:36 /nfs/olv_ovirt
    # chmod -R 0755 /nfs/olv_ovirt
  3. Add an entry for the newly created NFS share to the /etc/exports file on the NFS file server using the following format: full-path-of-share-created *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36).

    For example:

    # vi /etc/exports
    # added the following entry
    /nfs/olv_ovirt/data *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)

    Verify that the entry has been added.

    # grep "/nfs/olv_ovirt/data" /etc/exports
    /nfs/olv_ovirt/data *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)

    If you do not want to export the domain share to all servers on the network (denoted by the * before the left parenthesis), you can specify each individual host in your virtualization environment by using the following format: /nfs/olv_ovirt/data hostname-or-ip-address (rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36).

    For example:

    /nfs/olv_ovirt/data hostname (rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
  4. Export the NFS share.

    # exportfs -rv
  5. Confirm that the added export is available to Oracle Linux Virtualization Manager hosts by using the following showmount commands on the NFS File Server.

    # showmount -e | grep pathname-to-domain-share-added
    # showmount -e | grep ip-address-of-host
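
From one of the KVM hosts, you can also verify that the export is mountable and writable by the vdsm user before attaching it as a data domain. A quick check, assuming an NFS server named nfs-server.example.com:

    # mount -t nfs nfs-server.example.com:/nfs/olv_ovirt/data /mnt
    # sudo -u vdsm touch /mnt/write-test && sudo -u vdsm rm /mnt/write-test
    # umount /mnt
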
Attaching an NFS Data Domain

To attach an NFS data domain:

  1. Go to Storage and then click Domains.

    The Storage Domains pane opens.

  2. Click New Domain.

    The New Domain dialog box opens.

  3. From the Data Center drop-down list, select the Data Center for which to attach the data domain.

  4. From the Domain Function drop-down list, select Data. By default, the Data option is selected in the drop-down list.

  5. From the Storage Type drop-down list, select NFS. By default, the NFS option is selected in the drop-down list.

    When NFS is selected for the Storage Type, the options that are applicable to this storage type (such as the required Export Path option) are displayed in the New Domain dialog box.

  6. For the Host to Use drop-down list, select the host for which to attach the data domain.

  7. For the Export Path option, enter the remote path to the NFS export to be used as the storage data domain in the text input field.

    The Export Path option must be entered in one of the following formats: IP:/pathname or FQDN:/pathname (for example, server.example.com:/nfs/olv_ovirt/data).

    The /pathname that you enter must be the same as the path that you created on the NFS file server for the data domain in Preparing NFS Storage.

  8. Click OK to attach the NFS storage data domain.

    For information about uploading images to the data domain, see Uploading Images to the Data Domain.
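
Attaching an NFS data domain can also be scripted against the REST API. A minimal sketch, reusing the assumptions from the earlier curl examples; the domain, server, and host names are placeholders. This call adds the domain through the named host; if the domain is not created within a data center, a second call (a POST to the data center's storagedomains collection) attaches it.

    # curl --cacert ca.pem --user admin@internal:password \
        -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
        -d '<storage_domain>
              <name>nfs_data</name>
              <type>data</type>
              <storage>
                <type>nfs</type>
                <address>server.example.com</address>
                <path>/nfs/olv_ovirt/data</path>
              </storage>
              <host><name>myhost</name></host>
            </storage_domain>' \
        https://engine.example.com/ovirt-engine/api/storagedomains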

Using iSCSI Storage

For iSCSI storage, a storage domain is created from a volume group that is composed of pre-existing LUNs. Neither volume groups nor LUNs can be attached to more than one storage domain at a time.

Multiple network paths between hosts and iSCSI storage prevent host downtime caused by network path failure. iSCSI multipathing enables you to create and manage groups of logical networks and iSCSI storage connections. Once configured, the Manager connects each host in a data center to each storage target using the NICs or VLANs that are assigned to the logical networks in the iSCSI bond.

You can create an iSCSI bond with multiple targets and logical networks for redundancy.

Attaching an iSCSI Data Domain

For iSCSI storage, a storage domain is created from a volume group that is composed of pre-existing LUNs.

To attach an iSCSI data domain to your virtualization environment:

  1. Go to Storage and then click Domains.

    The Storage Domains pane opens.

  2. Click New Domain.

    The New Domain dialog box opens.

  3. From the Data Center drop-down list, select the data center for which to attach the data domain.

    The Default data center is pre-selected in the drop-down list.

    For the procedures to create new data centers or clusters, refer to the Data Centers or Clusters tasks.

  4. For the Name field, enter a name for the data domain.

  5. From the Domain Function drop-down list, select the domain function. By default, the Data option is selected in the drop-down list.

    For this step, leave Data as the domain function because you are creating a data domain in this procedure.

  6. From the Storage Type drop-down list, select iSCSI.

  7. From the Host drop-down list, select the host for which to attach the data domain.

  8. When iSCSI is selected for the Storage Type, the Discover Targets dialog box opens and the New Domain dialog box automatically displays the known targets with unused LUNs under the Target Name column.

    If the target from which you are adding storage is not listed, complete the following fields in the Discover Targets dialog box:

    1. For the Address field, enter the fully qualified domain name or IP address of the iSCSI host on the storage array.

    2. For the Port field, enter the port to connect to on the host when browsing for targets. By default, this field is automatically populated with the default iSCSI Port, 3260.

    After completing these fields, click Discover.

    The Target Name column updates to list all the available targets discovered on the storage array.

  9. Under the Target Name column, select the desired target and click the black right-directional arrow to log in to the target.

    The Storage Domains pane refreshes to list only the targets for which you logged in.

  10. Click + to expand the desired target.

    The target expands to display all the unused LUNs.

  11. Click Add for each LUN ID that is to connect to the target.

  12. (Optional) Configure the advanced parameters.

    If you are using ZFS storage, you must uncheck the Discard after Delete option.

  13. Click OK.

    You can click Tasks to monitor the various processing steps that are completed to attach the iSCSI data domain to the data center.

    After the iSCSI data domain has been added to your virtualization environment, you can then upload the ISO images that are used for creating virtual machines.
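
If a target does not appear in the Discover Targets dialog box, you can check from a KVM host that the portal is reachable and see which target names it advertises. A quick check with the standard iscsiadm tool; the portal address is a placeholder and the output line is illustrative:

    # iscsiadm -m discovery -t sendtargets -p 10.0.0.10:3260
    10.0.0.10:3260,1 iqn.2003-01.com.example:storage.target01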

Configuring iSCSI Multipathing

Before you can configure iSCSI multipathing, ensure you have the following:

  • One or more iSCSI storage domains attached to the data center.

  • One or more logical networks, created to carry the iSCSI traffic, that are not Required and are not VM networks.

To configure iSCSI multipathing:

  1. Click Compute and then select Data Centers.

  2. Click the data center name.

  3. In the iSCSI Multipathing tab, click Add.

  4. In the Add iSCSI Bond window, enter a Name and optionally add a Description.

  5. Select a logical network from Logical Networks and a storage domain from Storage Targets. You must select all paths to the same target.

  6. Click OK.

The hosts in the data center are connected to the iSCSI targets through the logical networks in the iSCSI bond.

Migrating a Logical Network to an iSCSI Bond

If you have a logical network that you created for iSCSI traffic and configured on top of an existing network bond, you can migrate the logical network to an iSCSI bond on the same subnet without disruption or downtime.

To migrate a logical network to an iSCSI bond:

  1. Modify the current logical network so that it is not required.

    1. Click Compute and then click Clusters.

    2. Click the cluster name.

    3. In the Logical Networks tab of the cluster detail page, select a current logical network and click Manage Networks.

      As an example, net-1 is the name of the current logical network.

    4. Clear the Require check box and click OK.

  2. Create a new logical network that is not Required and not VM network.

    1. Click Add Network. This opens the New Logical Network window.

    2. In the General tab, enter a Name (for example, net-2) and clear the VM network check box.

      As an example, net-2 is the name of the new logical network.

    3. In the Cluster tab, clear the Require check box and click OK.

  3. Remove the current network bond and reassign the logical networks.

    1. Click Compute and then click Hosts.

    2. Click the host name.

    3. In the Network Interfaces tab of the host detail page, click Setup Host Networks.

    4. Drag the old logical network (for example, net-1) to the right to unassign it.

    5. Drag the current bond to the right to remove it.

    6. Drag the old logical network (for example, net-1) and the new logical network (for example, net-2) to the left to assign them to physical interfaces.

    7. To edit the new logical network (for example, net-2), click its pencil icon.

    8. In the IPV4 tab of the Edit Network window, select Static.

    9. Enter the IP and Netmask/Routing Prefix of the subnet and click OK.

  4. Create the iSCSI bond.

    1. Click Compute and then click Data Centers.

    2. Click the data center name.

    3. In the iSCSI Multipathing tab of the data center details page, click Add.

    4. In the Add iSCSI Bond window, enter a Name, select the old and new networks (for example, net-1 and net-2), and click OK.

Your data center has an iSCSI bond containing the old and new logical networks.

Adding an FC Data Domain

To add an FC data domain:

  1. Go to Storage and then click Domains.

    The Storage Domains pane opens.

  2. On the Storage Domains pane, click the New Domain button.

    The New Domain dialog box opens.

  3. For the Name field, enter a name for the data domain.

  4. From the Data Center drop-down list, select the Data Center for which to attach the data domain. By default, the Default option is selected in the drop-down list.

  5. From the Domain Function drop-down list, select the domain function. By default, the Data option is selected in the drop-down list.

    For this step, leave Data as the domain function because you are creating a data domain in this example.

  6. From the Storage Type drop-down list, select Fibre Channel.

  7. For the Host to Use drop-down list, select the host for which to attach the data domain.

  8. When Fibre Channel is selected for the Storage Type, the New Domain dialog box automatically displays the known targets with unused LUNs.

  9. Click Add next to each LUN ID that is to connect to the target.

  10. (Optional) Configure the advanced parameters.

  11. Click OK.

    You can click Tasks to monitor the various processing steps that are completed to attach the FC data domain to the data center.
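
If the expected LUNs do not appear in the dialog box, you can verify from the selected host that the fabric presents them; assuming the device-mapper multipath tools are installed on the host:

    # multipath -ll
    # lsblk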

Uploading Images to the Data Domain

Before using the Manager to upload images to the data domain, you must perform the following steps to ensure that the prerequisites for uploading images have been met on the Manager and KVM hosts.

Before You Begin

To ensure that the prerequisites for uploading images to the data domain have been met:

  1. On the engine host, verify that the ovirt-imageio service has been configured and is running.

    # systemctl status ovirt-imageio.service     

    When the service is running, the output displays as follows.

    # systemctl status ovirt-imageio.service
      ovirt-imageio.service - oVirt ImageIO
       Loaded: loaded (/usr/lib/systemd/system/ovirt-imageio.service; enabled; 
       vendor preset: disabled)
       Active: active (running) since Mon 2019-03-25 13:12:29 PDT; 2 weeks 0 days ago
     Main PID: 28708 (ovirt-imageio-p)
       CGroup: /system.slice/ovirt-imageio.service
               └─28708 /usr/bin/python2 /usr/bin/ovirt-imageio
    ...

    This service is automatically configured and is started when you run the engine-setup command during the installation of the Manager.

  2. On the KVM host, verify that the ovirt-imageio service has been configured and is running. For example:

    # systemctl status ovirt-imageio-daemon
      ovirt-imageio-daemon.service - oVirt ImageIO Daemon
       Loaded: loaded (/usr/lib/systemd/system/ovirt-imageio-daemon.service; disabled; 
       vendor preset: disabled)
       Active: active (running) since Wed 2019-03-27 18:38:36 EDT; 3 weeks 4 days ago
     Main PID: 366 (ovirt-imageio-d)
        Tasks: 4
       CGroup: /system.slice/ovirt-imageio-daemon.service
               └─366 /usr/bin/python /usr/bin/ovirt-imageio-daemon
    
    Mar 27 18:38:36 myserver systemd[1]: Starting oVirt ImageIO Daemon...
    Mar 27 18:38:36 myserver systemd[1]: Started oVirt ImageIO Daemon.
  3. Verify that the certificate authority has been imported into the web browser used to access the Manager by browsing to the following URL and enabling the trust settings: https://engine_address/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA

  4. Verify that you are using a browser that meets the browser requirement to access the Administration Portal.

    For more information, refer to Logging into the Administration Portal in the Oracle Linux Virtualization Manager: Getting Started Guide.
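
The CA certificate from step 3 can also be fetched from the command line, which is useful for the scripted REST API calls shown elsewhere in this chapter; replace engine_address with the FQDN of your Manager (-k is needed only because the CA is not yet trusted):

    # curl -k -o ca.pem 'https://engine_address/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'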

Uploading an ISO Image to the Data Domain

To upload an ISO image to data domain using the Manager:

  1. Download or copy an ISO image file that you want to upload into your environment to a location on your desktop, laptop, or a system where the Manager is accessible from a Web browser.

  2. Go to Storage and then click Disks.

    The Disks pane opens.

  3. Click Upload and then select Start from the drop-down list.

    The Upload Image dialog box opens.

  4. Click Choose File and navigate to the location where you saved the ISO image.

  5. Complete the Disk Options section of the dialog box.

  6. Ensure that the prerequisites have been met by clicking Test Connection.

    If the test returns a warning or error message, refer to Before You Begin to review the prerequisites.

  7. Click OK to start uploading the ISO image.

    The status field on the Disks pane tracks the progress of the upload.

    After the ISO image upload is completed successfully, you can attach the image to virtual machines as CDROMs or use the image to boot virtual machines.

Note:

For information on uploading ISO images to data domains from the command line, see the My Oracle Support article Sample Script to Upload Disk/ISO To Storage Domain From Remote Linux Server (Doc ID 2830534.1).

Detaching a Storage Domain from a Data Center

A storage domain must be in maintenance mode before it can be detached and removed. This is required to redesignate another data domain as the master data domain.

You cannot move a storage domain into maintenance mode if a virtual machine has a lease on the storage domain. The virtual machine must be shut down first, or its lease must be removed or moved to a different storage domain.

To detach a storage domain from one data center to migrate it to another data center:

  1. Shut down all the virtual machines running on the storage domain.

  2. Go to Storage and then click Domains.

    The Storage Domains pane opens.

  3. Click the storage domain’s name.

    The details view of the storage domain opens.

  4. Click the Data Center tab.

  5. Click Maintenance.

    The Ignore OVF update failure check box allows the storage domain to go into maintenance mode even if the OVF update fails.

    Note:

    The OVF_STORE disks are images that contain the metadata of virtual machines and disks that reside on the storage data domain.

  6. Click OK.

    The storage domain is deactivated and has an Inactive status in the results list. You can now detach the inactive storage domain from the data center.

  7. Click Detach.

  8. Click OK to detach the storage domain.

Now that the storage domain is detached from the data center, it can be attached to another data center.

Virtual Machines

Oracle Linux Virtualization Manager lets you create virtual machines as well as perform basic administration tasks such as live editing, live migration, and creating and using templates and snapshots.

Creating a New Virtual Machine

This section shows you how to install a remote viewer, create Oracle Linux or Microsoft Windows virtual machines, and install the respective guest OS, agents, and drivers.

For detailed information on the supported guest operating systems, see the Oracle® Linux: KVM User's Guide.

Before creating Microsoft Windows virtual machines, ensure the following prerequisites are met.

Obtain the Oracle VirtIO Drivers for Microsoft Windows.

  1. Download Oracle VirtIO Drivers for Microsoft Windows to the Manager host from Oracle Software Delivery Cloud or My Oracle Support (MOS). Refer to Oracle VirtIO Drivers for Microsoft Windows for Use With KVM for more information.
  2. Upload the Oracle VirtIO Drivers for Microsoft Windows ISO image to an Oracle Linux Virtualization Manager storage domain. Refer to Uploading an ISO Image to the Data Domain for more information.

Download the QEMU guest agent to the Manager host.

  • For ULN registered hosts or using Oracle Linux Manager, download qemu-ga-win from the oVirt Release 4.5 on Oracle Linux 8 (x86_64) - Extra channel.
  • For Oracle Linux yum server configured KVM hosts, download qemu-ga-win from the Oracle Linux 8 (x86_64) oVirt 4.5 Extra repository.

Note:

In addition to creating virtual machines, you can import an Open Virtual Appliance (OVA) file into your environment from any host in the data center. For more information, see oVirt Virtual Machine Management in oVirt Documentation.

Installing Remote Viewer on Client Machine

A console is a UI that allows you to view and interact with a virtual machine similar to a physical machine. The default console is a Remote Viewer application that provides users with a UI for connecting to virtual machines.

Before you begin a Linux or Windows OS installation, download the appropriate installation package from the Virtual Machine Manager website.

Note:

See Windows Virtual Machines Lose Functionality Due To Deprecated Guest Agent in the Known Issues section of the Oracle Linux Virtualization Manager: Release Notes.

For more information, see Consoles in the Oracle Linux Virtualization Manager: Architecture and Planning Guide.

To install Remote Viewer on Linux:

  1. Ensure you have downloaded the virt-viewer installation package.

  2. Install the virt-viewer package using one of the following commands depending on your system.

    # yum install virt-viewer                      
    # dnf install virt-viewer            
  3. Restart your browser for the changes to take effect in the Oracle Linux Virtualization Manager.

You can now connect to your virtual machines using the VNC protocol.

To install Remote Viewer on Windows:

  1. Ensure you have downloaded either the 32-bit or 64-bit virt-viewer installer depending on the architecture of your system.

  2. Go to the folder where you saved the file and double-click the file.

  3. If prompted with a security warning, click Run.

  4. If prompted by User Account Control, click Yes.

Once installed, you can access Remote Viewer in the VirtViewer folder of All Programs from the Start menu.

Creating a New Linux or Microsoft Windows Virtual Machine

Follow this general procedure to create a new virtual machine:

  1. Go to Compute and then click Virtual Machines.

    The Virtual Machines pane opens with the list of virtual machines that have been created.

  2. Click New.

    The New Virtual Machine dialog box opens with the General tab selected on the sidebar.

  3. From the Cluster drop-down list, select the data center and host cluster for the new virtual machine.

    The Default data center is pre-selected in the drop-down list.

    For the procedures to create new data centers or clusters, refer to Data Centers or Clusters tasks.

  4. From the Operating System drop-down list, select the operating system for the virtual machine.

  5. Using the Optimized for drop-down list, select the type of system for which the virtual machine will be optimized.
    • Server (default) - have no sound card, use a cloned disk image, and are not stateless
    • Desktop - have a sound card, use an image (thin allocation), and are stateless
    • High Performance - have a number of configuration changes; see Optimizing Clusters, Hosts and Virtual Machines.
  6. For the Name field, enter a name for the new virtual machine.

  7. Under Instance Images, add storage to the virtual machine by either using an existing virtual disk or creating a new virtual disk.

    • To use an existing virtual disk, click Attach and select the virtual disk to use for the virtual machine storage. Then click OK.

    • To create a new virtual disk, click Create and update the fields for the virtual machine storage or accept the default settings. Then click OK.

    If you are creating a new virtual disk, the following fields are key:

    • Check Bootable.

    • Enter a disk Size (GiB).

    • From the Interface drop-down list, select VirtIO-SCSI.

    • From the Allocation Policy drop-down list, selecting Preallocated reserves the disk space, while selecting Thin Provision creates a sparse allocated virtual disk.

    • From the Disk Profile list, select the storage domain where the virtual disk is to be saved.

    Note:

    Repeat this step if you need to create additional virtual disks.
  8. Connect the virtual machine to a network by adding a network interface.

    See Creating a Logical Network, Assigning a Logical Network to a KVM Host, and Customizing vNIC Profiles for Virtual Machines.

  9. Click Show Advanced Options to display additional configuration options available for the new virtual machine.

  10. (Optional) Click the System tab on the sidebar to adjust the CPU and memory size for the virtual machine from the defaults.

    • For the Memory Size field, the default value is 1024 MB.

    • For the Maximum memory field, the default value is 4096 MB, which is four times the memory size but can be manually configured.

    • For the Total Virtual CPUs field, the default value is 1.

    Note:

    Depending on the operating system you are installing, there might be memory and vCPU requirements.
  11. Click OK to create the virtual machine.

  12. Proceed to Installing the Oracle Linux Guest OS or Installing the Microsoft Windows Guest OS.
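
Virtual machines can also be created through the REST API. A minimal curl sketch, reusing the assumptions from the earlier examples; the VM name, cluster, and template are placeholders, and memory is given in bytes (1073741824 bytes matches the 1024 MB default noted above):

    # curl --cacert ca.pem --user admin@internal:password \
        -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
        -d '<vm>
              <name>myvm</name>
              <cluster><name>Default</name></cluster>
              <template><name>Blank</name></template>
              <memory>1073741824</memory>
            </vm>' \
        https://engine.example.com/ovirt-engine/api/vms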

Installing the Oracle Linux Guest OS

To install the Oracle Linux guest OS:

  1. From the Virtual Machines pane, select the Oracle Linux virtual machine created in Creating a New Linux or Microsoft Windows Virtual Machine.

  2. Using the down arrow next to Run, select Run Once.

  3. Attach your ISO file, for example OracleLinux-R7-U6-Server-x86_64-dvd.iso, and click OK.
  4. Click Console to open a console to the virtual machine.

    If you have not installed the Remote Viewer application, refer to Installing Remote Viewer on Client Machine.

  5. Install the Oracle Linux guest OS.

    Refer to the Oracle® Linux documentation for more information on how to install Oracle Linux.

  6. When the installation completes, reboot the virtual machine.
  7. (Optional) If you use a proxy server for Internet access, configure Yum with the proxy server settings. For more information about configuring firewalld, see Configuring Packet-filtering Firewalls in the Oracle® Linux 7: Security Guide or Oracle Linux 8: Configuring the Firewall.

  8. (Optional) If you are using yum to update the host, make sure the host is using the modular yum repository configuration. For more information, see Getting Started with Oracle Linux Yum Server.

  9. Proceed to Installing the Oracle Linux Guest Agent.

Installing the Oracle Linux Guest Agent

To install the Oracle Linux guest agent, follow the appropriate steps for your version.

  1. Open a console session for the Oracle Linux guest and log in to the terminal.

  2. Install the latest guest agent package.

    For Oracle Linux 8 guests:

    # dnf module reset virt
    # dnf config-manager --enable ol8_kvm_appstream
    # dnf -y module enable virt:kvm_utils3
    # dnf -y install qemu-guest-agent       

    For Oracle Linux 7 guests:

    # yum install yum-utils -y
    # yum-config-manager --enable ol7_latest
    # yum install qemu-guest-agent          
    For Oracle Linux 6 guests:

    # yum install yum-utils -y
    # yum-config-manager --enable ol6_latest
    # yum install qemu-guest-agent

    For Oracle Linux 5 guests:

    # yum install yum-utils -y
    # yum install http://yum.oracle.com/repo/OracleLinux/OL7/ovirt42/x86_64/getPackage/ \
      ovirt-guest-agent-1.0.13-2.el5.noarch.rpm
  3. Start the guest agent service for the Oracle Linux guest.

    For Oracle Linux 8 and Oracle Linux 7 guests:

    # systemctl start qemu-guest-agent.service     
    For Oracle Linux 6 guests:

    # service qemu-ga start

    For Oracle Linux 5 guests:

    # service ovirt-guest-agent start
  4. (Optional) Enable an automatic restart of the guest agent service when the virtual machine is rebooted.

    For Oracle Linux 8 and Oracle Linux 7 guests:

    # systemctl enable qemu-guest-agent.service   
    For Oracle Linux 6 guests:

    # chkconfig qemu-ga on

    For Oracle Linux 5 guests:

    # chkconfig ovirt-guest-agent on
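
After starting the service, you can confirm from inside the guest that the agent is running. For Oracle Linux 7 and 8 guests:

    # systemctl status qemu-guest-agent.service

Once the agent is reporting, details such as the guest IP addresses and FQDN appear for the virtual machine in the Administration Portal.
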
Installing the Microsoft Windows Guest OS

To install the Microsoft Windows guest OS:

  1. From the Virtual Machines pane, select a virtual machine.

  2. Using the down arrow next to Run, select Run Once.

  3. Expand the Boot Options menu, check Attach CD, and select the ISO image.
  4. Click OK to boot the virtual machine.
  5. Click Console to open a console to the virtual machine.

    If you have not installed the Remote Viewer application, refer to Installing Remote Viewer on Client Machine.

  6. Install the Microsoft Windows guest OS.

    Refer to the applicable Microsoft Windows documentation for instructions on how to install the operating system.

  7. When the installation completes, reboot the virtual machine.
  8. Proceed to Installing the VirtIO Drivers and then to Installing the QEMU Guest Agent.

Installing the VirtIO Drivers

Before attempting to install the Oracle VirtIO Drivers for Microsoft Windows on a new Microsoft Windows virtual machine, ensure that you have downloaded the drivers onto the Manager host and uploaded the ISO image to an Oracle Linux Virtualization Manager storage domain. For more information, see the prerequisites.

To install the Oracle VirtIO Drivers for Microsoft Windows:

  1. After you finish installing the Microsoft Windows guest OS, return to the Virtual Machines pane, highlight the row for this virtual machine, and click Edit.

    The Edit Virtual Machines dialog box opens.

  2. Click the Boot Options tab on the sidebar of the dialog box to specify the boot sequence for the virtual device.

    1. From the First Device drop-down list, change CD-ROM to Hard Disk.

    2. From the Second Device drop-down list, select CD-ROM.

    3. Select the Attach CD checkbox and choose virtio from the drop-down list.

  3. Click OK to save the changes to the virtual machine configuration.

  4. Click OK when the Pending Virtual Machine changes dialog box appears.

  5. From the Virtual Machines pane, reboot the virtual machine.

  6. Click Console to open a console to the virtual machine and navigate to the CDROM.

  7. Double-click the virtio folder and then click Setup to start the Oracle VirtIO Drivers for Microsoft Windows installer.

    The installer window is displayed.

  8. Click Install to start the Oracle VirtIO Drivers for Microsoft Windows installer.

    The installer copies the Oracle VirtIO Drivers for Microsoft Windows installer files and then installs the drivers on the Microsoft Windows guest operating system.

  9. Click Yes, I want to restart my computer now and click Finish.

    The virtual machine is restarted.

  10. Stop the virtual machine.

  11. Go to Compute and then click Virtual Machines.

    The Virtual Machines pane opens with the list of virtual machines that have been created.

  12. Select the virtual machine where you installed the Microsoft Windows guest OS and click Edit.

  13. Edit the virtual disk. From the Interface drop-down list, change SATA to VirtIO-SCSI.

  14. Click the Boot Options tab on the sidebar.

    1. Do not make any changes to the First Device drop-down list. The Hard Disk option is selected from a previous step.

    2. From the Second Device drop-down list, select None.

    3. Deselect the Attach CD checkbox.

  15. Click OK to save the changes to the virtual machine configuration.

  16. Restart the virtual machine.
  17. Proceed to Installing the QEMU Guest Agent.

Installing the QEMU Guest Agent

Before attempting to install the QEMU guest agent on a new Microsoft Windows virtual machine, ensure that you have downloaded the drivers onto the Manager host. For more information, see the prerequisites.

  1. On the Manager host, install the QEMU guest agent.
    # dnf install qemu-ga-win
  2. Verify the installation.
    # ls -lat /usr/i686-w64-mingw32/sys-root/mingw/bin/
    total 9280
    drwxr-xr-x. 2 root root      30 Nov  3 13:56 .
    -rw-r--r--. 1 root root 9499648 Nov  2 09:45 qemu-ga-i386.msi
    drwxr-xr-x. 3 root root      17 Sep 23 19:02 ..
     
    # ls -lat /usr/x86_64-w64-mingw32/sys-root/mingw/bin/
    total 9472
    drwxr-xr-x. 2 root root      32 Nov  3 13:56 .
    -rw-r--r--. 1 root root 9697280 Nov  2 09:45 qemu-ga-x86_64.msi

Important:

  • If you have access to the virtual machine, you can copy the appropriate MSI (32-bit or 64-bit) to the virtual machine and then run the installer to install the QEMU guest agent.
  • If you do not have access to the virtual machine, use the following steps to build and upload an ISO and then install the QEMU guest agent.

To build the ISO and install the QEMU guest agent on the virtual machine:

  1. Build the QEMU guest agent ISO.
    # dnf install genisoimage -y
    
    # pwd
    /root
    
    # mkdir build-iso
    
    # cp /usr/i686-w64-mingw32/sys-root/mingw/bin/qemu-ga-i386.msi build-iso/
    
    # cp /usr/x86_64-w64-mingw32/sys-root/mingw/bin/qemu-ga-x86_64.msi build-iso/
    
    # mkisofs -R -J -o qemu-ga-windows.iso build-iso/*
    I: -input-charset not specified, using utf-8 (detected in locale settings)
    Using QEMU_000.MSI;1 for  /qemu-ga-x86_64.msi (qemu-ga-i386.msi)
     52.36% done, estimate finish Thu Nov  3 14:20:49 2022
    Total translation table size: 0
    Total rockridge attributes bytes: 347
    Total directory bytes: 0
    Path table size(bytes): 10
    Max brk space used 0
    9549 extents written (18 MB)
     
    # ll qemu-ga-windows.iso
    -rw-r--r--. 1 root root 19556352 Nov  3 14:20 qemu-ga-windows.iso
  2. Upload the QEMU guest agent ISO image to an Oracle Linux Virtualization Manager storage domain. Refer to Uploading an ISO Image to the Data Domain for more information.
  3. From the Virtual Machines pane, select a virtual machine.

  4. Highlight the row for this virtual machine, and click Edit.

    The Edit Virtual Machines dialog box opens.
  5. Click the Boot Options tab on the sidebar of the dialog box to specify the boot sequence for the virtual device.

    1. From the First Device drop-down list, change CD-ROM to Hard Disk.

    2. From the Second Device drop-down list, select CD-ROM.

    3. Select the Attach CD checkbox and choose the QEMU guest agent ISO image (for example, qemu-ga-windows.iso) from the drop-down list.

  6. Click OK to save the changes to the virtual machine configuration.

  7. Click OK when the Pending Virtual Machine changes dialog box appears.

  8. From the Virtual Machines pane, reboot the virtual machine.

  9. Click Console to open a console to the virtual machine and navigate to the CD-ROM drive.

  10. Double-click the appropriate QEMU guest agent MSI file (32-bit or 64-bit) to launch the installation program.

  11. When installation completes, click Yes, I want to restart my computer now and click Finish.

    The virtual machine is restarted.

  12. Stop the virtual machine.

  13. Go to Compute and then click Virtual Machines.

    The Virtual Machines pane opens with the list of virtual machines that have been created.

  14. Select the virtual machine where you installed the Microsoft Windows guest OS and click Edit.

  15. Click the Boot Options tab on the sidebar.

    1. Do not make any changes to the First Device drop-down list. The Hard Disk option is selected from a previous step.

    2. From the Second Device drop-down list, select None.

    3. Deselect the Attach CD checkbox.

  16. Click OK to save the changes to the virtual machine configuration.

  17. From the Virtual Machines pane, reboot the virtual machine.

  18. Run the Microsoft Windows virtual machine.
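
    (Optional) To confirm that the QEMU guest agent is responding, you can issue a guest-ping from the KVM host using virsh. This is a sketch; MyWindowsVM is a placeholder for the virtual machine's domain name:

    # virsh qemu-agent-command MyWindowsVM '{"execute":"guest-ping"}'
    {"return":{}}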

For more information, see the Oracle® Linux: KVM User's Guide.

Live Editing a Virtual Machine

You can optionally change many settings for a virtual machine while it is running.

  1. From the Administration Portal, click Compute and then select Virtual Machines.

  2. Under the Name column, select the virtual machine you want to make changes to and then click Edit.

  3. On the bottom left of the Edit Virtual Machine window, click Show Advanced Options.

  4. Change any of the following properties while the virtual machine is running without restarting the virtual machine.

    Select the General tab, to modify:

    • Optimized for

      You can select from three options:

      • Desktop - the virtual machine has a sound card, uses an image (thin allocation), and is stateless.

      • Server - the virtual machine does not have a sound card, uses a cloned disk image, and is not stateless.

      • High Performance - the virtual machine is pre-configured with a set of suggested and recommended configuration settings for reaching the best efficiency.

    • Name

      A virtual machine's name must be unique within the data center. It must not contain any spaces and must contain at least one character from A-Z or 0-9. The maximum length is 255 characters.

      The name can be re-used in different data centers within Oracle Linux Virtualization Manager.

    • Description and Comment

    • Delete Protection

      If you want to make it impossible to delete a virtual machine, check this box. If you later decide you want to delete the virtual machine, remove the check.

    • Network Interfaces

      Add or remove network interfaces or change the network of an existing NIC.

    Select the System tab, to modify:

    • Memory Size

      Use to hot plug virtual memory. For more information, see Hot Plugging Virtual Memory.

    • Virtual Sockets (under Advanced Parameters)

      Use to hot plug CPUs to the virtual machine. Do not assign more sockets to a virtual machine than are present on its KVM host. For more information, see Hot Plugging vCPUs.

    Select the Console tab, to modify:

    • Disable strict user checking

      By default, strict checking is enabled, allowing only one user to connect to the console of a virtual machine until it has been rebooted. The exception is that a SuperUser can connect at any time and replace an existing connection. When a SuperUser has connected, no normal user can connect again until the virtual machine is rebooted.

      Important:

      Check this box with caution because you can expose the previous user's session to the new user.

    Select the High Availability tab, to modify:

    • Highly Available

      Check this box if you want the virtual machine to automatically live migrate to another host if its host crashes or becomes non-operational. Only virtual machines with high availability are restarted on another host. If the virtual machine's host is manually shut down, the virtual machine does not automatically live migrate to another host. For more information, see Configuring a Highly Available Virtual Machine.

      Note:

      You cannot check this box if, on the Host tab, you have selected either Allow manual migration only or Do not allow migration for the Migration mode. For a virtual machine to be highly available, the engine must be able to migrate it to another host when needed.

    • Priority for Run/Migration Queue

      Select the priority level (Low, Medium or High) for the virtual machine to live migrate or restart on another host.

    Select the Icon tab, to upload a new icon.

  5. Click OK when you are finished with all tabs to save your changes.

Changes to any other settings are applied when you shut down and restart your virtual machine. Until then, an orange icon displays to indicate pending changes.

Migrating Virtual Machines between Hosts

Virtual machines that share the same storage domain can live migrate between hosts that belong to the same cluster. Live migration allows you to move a running virtual machine between physical hosts with no interruption to service. The virtual machine stays powered on and user applications continue running while the virtual machine is relocated to a new physical host. In the background, the virtual machine's RAM is copied from the source host to the destination host. Storage and network connectivity are not changed.

You use live migration to seamlessly move virtual machines to support a number of common maintenance tasks. Ensure that your environment is correctly configured to support live migration well in advance of using it.

Configuring Your Environment for Live Migration

To enable successful live migrations, you must configure your environment correctly. At a minimum, to successfully migrate running virtual machines:

  • Source and destination hosts must be in the same cluster.

  • Source and destination hosts must have a status of Up.

  • Source and destination hosts must have access to the same virtual networks and VLANs.

  • Source and destination hosts must have access to the data storage domain where the virtual machines reside.

  • There must be enough CPU capacity on the destination host to support the virtual machine's requirements.

  • There must be enough RAM on the destination host that is not in use to support the virtual machine's requirements.

Note:

Live migrations are performed using the management network. The number of concurrent migrations supported is limited by default. Even with these limits, concurrent migrations can potentially saturate the management network. To minimize the risk of network saturation, we recommend that you create separate logical networks for storage, display, and virtual machine data.

To configure virtual machines so they reduce network outage during migration:

  • Ensure that the destination host has an available virtual function (VF)

  • Set the Passthrough and Migratable options in the passthrough vNIC’s profile

  • Enable hotplugging for the virtual machine's network interface

  • Ensure that the virtual machine has a backup VirtIO vNIC to maintain the virtual machine's network connection during migration

  • Set the VirtIO vNIC’s No Network Filter option before configuring the bond

  • Add both vNICs as subordinates under an active-backup bond on the virtual machine, with the passthrough vNIC as the primary interface

Automatic Virtual Machine Migration

The Engine automatically initiates live migration of virtual machines in two situations:

  • When a host is moved into maintenance mode, live migration is initiated for all virtual machines running on the host. The destination host for each virtual machine is assessed as the virtual machine is migrated, in order to spread the load across the cluster.

  • Live migrations are initiated to maintain load-balancing or power-saving levels in line with the scheduling policy.

You can disable automatic, or even manual, live migration of specific virtual machines if required.

Setting Virtual Machine Migration Mode

Using the Migration mode setting for a virtual machine, you can allow automatic and manual migration, disable automatic migration, or disable automatic and manual migration. If a virtual machine is configured to run only on a specific host, you cannot migrate it manually.

To set the migration mode of a virtual machine:

  1. Click Compute and select Virtual Machines.

  2. Select a virtual machine and click Edit.

  3. Click the Host tab.

  4. Use the Start Running On radio buttons to specify whether the virtual machine should run on any host in the cluster, a specific host, or a group of hosts.

    If the virtual machine has host devices attached to it, and you choose a different host, the host devices from the previous host are removed from the virtual machine.

    Attention:

    Assigning a virtual machine to one specific host and disabling migration is mutually exclusive in Oracle Linux Virtualization Manager high availability (HA). Virtual machines that are assigned to one specific host can only be made highly available using third-party HA products. This restriction does not apply to virtual machines that are assigned to a group of hosts.

  5. From the Migration mode drop-down list, select Allow manual and automatic migration, Allow manual migration only or Do not allow migration.

  6. (Optional) Check Use custom migration downtime and specify a value in milliseconds.

  7. Click OK.

Manually Migrate a Virtual Machine

To manually migrate a virtual machine:

  1. Click Compute and then select Virtual Machines.

  2. Select a running virtual machine and click Migrate.

  3. Choose either Select Host Automatically or Select Destination Host and select the destination host from the drop-down list.

    When you choose Select Host Automatically, the system determines the destination host according to the load balancing and power management rules set up in the scheduling policy.

  4. Click OK.

During migration, progress is shown in the Status field. When the virtual machine has been migrated, the Host field updates to show the virtual machine's new host.

Working with Templates

For this example scenario, you seal the Oracle Linux virtual machine created in Creating a New Virtual Machine and then you create an Oracle Linux template based on that virtual machine. You then use that template as the basis for a Cloud-Init enabled template to automate the initial setup of a virtual machine.

A template is a copy of a virtual machine that you can use to simplify the subsequent, repeated creation of similar virtual machines. Templates capture the hardware configuration, software configuration, and installed software of the virtual machine on which the template is based, which is known as the source virtual machine.

Virtual machines that are created based on a template use the same NIC type and driver as the original virtual machine but are assigned separate, unique MAC addresses.

Important:

Oracle provides pre-installed and pre-configured templates that allow you to deploy a fully configured software stack. Use of Oracle Linux templates eliminates the installation and configuration costs and reduces the ongoing maintenance costs. For more information, see Importing an Oracle Linux Template.

Sealing an Oracle Linux Virtual Machine for Use as a Template

Sealing is the process of removing all system-specific details from a virtual machine before creating a template based on that virtual machine. Sealing is necessary to prevent the same details from appearing on multiple virtual machines that are created based on the same template. It is also necessary to ensure the functionality of other features, such as predictable vNIC order.

To seal an Oracle Linux virtual machine for use as a template:

  1. Log in to the Oracle Linux virtual machine as the root user.

  2. Flag the system for reconfiguration.

    # touch /.unconfigured
  3. Remove the SSH host keys.

    # rm -rf /etc/ssh/ssh_host_*
  4. Do one of the following:
    1. For Oracle Linux 6 (or earlier), set HOSTNAME=localhost.localdomain in the /etc/sysconfig/network file.
    2. For Oracle Linux 7 (or later), remove the /etc/hostname file.
  5. Remove /etc/udev/rules.d/70-*.

    # rm -rf /etc/udev/rules.d/70-*
  6. Remove the HWADDR and UUID lines in the /etc/sysconfig/network-scripts/ifcfg-eth* file.
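
    For example, assuming the default ifcfg-eth* file naming, the following sed command removes both lines:

    # sed -i '/^HWADDR/d;/^UUID/d' /etc/sysconfig/network-scripts/ifcfg-eth*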

  7. (Optional) Delete all the logs from /var/log and build logs from /root.
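
    For example, one possible cleanup (a sketch; adjust to what you need to keep):

    # rm -rf /var/log/*
    # rm -f /root/*.log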

  8. Clean up the command history.

    # history -c
  9. Shut down the virtual machine.

    # poweroff

    The Oracle Linux virtual machine is now sealed and ready to be made into a template.

Creating an Oracle Linux Template

When you create a template based on a virtual machine, a read-only copy of the virtual machine's disk is created. This read-only disk becomes the base disk image of the new template, and of any virtual machines that are created based on the template.

To create an Oracle Linux template:

  1. Go to Compute, and then click Virtual Machines.

    The Virtual Machines pane opens with the list of virtual machines that have been created.

  2. Click More Actions to expand the drop-down list and select Make Template from the drop-down list.

    The following screenshot shows the More Actions drop-down list expanded to display the Make Template option. The Make Template option is highlighted with a red rectangular box for emphasis.

    Figure 3-4 Make Template Option


  3. For the Name field, enter a name for the new virtual machine template.

  4. In the Disk Allocation section, under the Alias column, rename the disk alias to match the template name entered in the Name field.

  5. Click the Seal Template (Linux only) checkbox.

    The following screenshot shows the New Template dialog box completed for a new template named ol7-vm-template. In the dialog box, the disk alias has been renamed to ol7-vm-template and the Seal Template (Linux only) checkbox is selected.

    Figure 3-5 New Template Dialog Box


  6. Click the OK button to create the template.

    The virtual machine displays a status of Image Locked while the template is being created. The time it takes for the template to be created depends on the size of the virtual disk and the capabilities of your storage hardware. When the template creation process completes, the template is added to the list of templates displayed on the Templates pane.

    You can now create new Oracle Linux virtual machines that are based on this template.

Creating a Cloud-Init Enabled Template

For Oracle Linux 7 (and later) virtual machines, you can use the Cloud-Init tool to automate the initial setup of virtual machines. Common tasks, such as configuring host names, network interfaces, and authorized keys, can be automated by using this tool. When provisioning virtual machines that have been deployed based on a template, the Cloud-Init tool can be used to prevent conflicts on the network.

Before you create Cloud-Init enabled templates, ensure the following prerequisites are met:

  • You must have sealed an Oracle Linux virtual machine for use as a template. For more information, refer to Sealing an Oracle Linux Virtual Machine for Use as a Template.

  • You must create a template. For more information, refer to Creating an Oracle Linux Template.

  • The cloud-init package must first be installed on the virtual machine. Once installed, the Cloud-Init service starts during the boot process and searches for instructions on what to configure. Use the Run Once window to provide these instructions on a one-time only basis.

Installing the Cloud-Init Package

Note:

The following procedure assumes your operating system is Oracle Linux 8 or later.

To install Cloud-Init on a virtual machine:

  1. Log in to an Oracle Linux virtual machine.

  2. List the cloud-init package.

    # dnf list cloud-init
  3. Install the cloud-init package.

    # dnf install cloud-init
  4. Run the following command to enable the cloud-init service.

    # systemctl enable cloud-init
  5. Run the following command to start the cloud-init service.

    # systemctl start cloud-init
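
    (Optional) Verify that the service is active:

    # systemctl status cloud-init
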
Using Cloud-Init to Automate the Initial Setup of a Virtual Machine

To use Cloud-Init to automate the initial setup of a virtual machine:

  1. Go to Compute and then click Templates.

    The Templates pane opens with the list of templates that have been created.

  2. Select a template and click the Edit button.

  3. Click Show Advanced Options.

  4. Click the Initial Run tab and select the Use Cloud-Init/Sysprep check box.

  5. Expand the Authentication section.

    • Select the Use already configured password check box to use the existing credentials, or clear that check box and enter a root password in the Password and Verify Password text fields to specify a new root password.

    • Enter any SSH keys to be added to the authorized hosts file on the virtual machine in the SSH Authorized Keys text area.

    • Select the Regenerate SSH Keys check box to regenerate SSH keys for the virtual machine.

  6. Expand the Networks section.

    • Enter any DNS servers in the DNS Servers text field.

    • Enter any DNS search domains in the DNS Search Domains text field.

    • Select the In-guest Network Interface check box and use the + Add new and - Remove selected buttons to add or remove network interfaces to or from the virtual machine.

      Important:

      You must specify the correct network interface name and number (for example, eth0, eno3, enp0s); otherwise, the virtual machine’s interface connection will be up but will not have the Cloud-Init network configuration.

  7. Expand the Custom Script section and enter any custom scripts in the Custom Script text area.
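
    The Custom Script text area accepts cloud-config directives. For example, a minimal hypothetical script that creates a user and installs a package (all names are placeholders):

    #cloud-config
    users:
      - name: clouduser
        groups: wheel
    packages:
      - git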

Importing an Oracle Linux Template

Oracle provides pre-installed and pre-configured templates that allow you to deploy a fully configured software stack. Use of Oracle Linux templates eliminates the installation and configuration costs and reduces the ongoing maintenance costs.

To import an Oracle Linux template:

  1. Download the template OVA file from http://yum.oracle.com/oracle-linux-templates.html and copy it to your KVM host.

  2. Assign permissions to the file.

    # chown 36:36 /tmp/<myfile>.ova
  3. Ensure that the kvm user has access to the OVA file's path, for example:

    -rw-r--r-- 1 vdsm kvm 872344576 Jan 15 17:43 OLKVM_OL7U7_X86_64.ova
  4. In the Administration Portal, click Compute and then select Templates.

  5. Click Import.

  6. From the Import Template(s) window, select the following options:

    • Data Center: <datacenter>

    • Source: Virtual Appliance (OVA)

    • Host: <kvm_host_containing_ova>

    • File Path: <full_path_to_ova_file>

  7. Click Load.

  8. From the Virtual Machines on Source list, select the virtual appliance's check box.

    Note:

    You can select more than one virtual appliance to import.

  9. Click the right arrow to move the appliance(s) to the Virtual Machines to Import list and then click Next.

  10. Click the Clone field for the template you want to import and review its General, Network Interfaces, and Disks configuration.

  11. Click OK.

The import process can take several minutes. Once it completes, you can view the template(s) by clicking Compute and then Templates.

You can now create a virtual machine from your imported template.

Creating an Oracle Linux Virtual Machine from a Template

To create an Oracle Linux virtual machine from a template:

  1. Go to Compute and then click Virtual Machines.

  2. Click New VM.

  3. From the Template drop-down list, select the desired template. For example, select the template created in Creating an Oracle Linux Template.

  4. From the Cluster drop-down list, select the data center and host cluster for the new virtual machine.

    The Default data center is pre-selected in the drop-down list.

    For the procedures to create new data centers or clusters, refer to the Data Centers or Clusters tasks.

  5. At a minimum, complete the following key fields.

    For example, if the new Oracle Linux virtual machine that is being created is based on the template that was created in Creating an Oracle Linux Template:

    • Name - enter a name for the virtual machine, for example ol7-vm2.

    • Cluster - select a cluster or leave Default option selected.

    • Template - select a template, for example, ol7-vm-template.

    • Operating System - select an operating system, for example, Oracle Linux 7.x x64.

    • nic1 - select a logical network, for example, vm_pub.

  6. Click OK to create the virtual machine from the template.

    Once created, the new virtual machine appears in the Virtual Machines pane and shows a status of Down.

  7. Highlight the virtual machine that you created from the template. From the drop-down arrow next to Run, select Run Once to customize the template on the fly: create users, set passwords, and configure the network.

    The red down arrow icon to the left of the virtual machine turns green and the Status column displays Up when the virtual machine is up and running on the network.

    Depending on your template, you might need to configure the Cloud-Init option when you run the virtual machine for the first time:

    1. From the drop-down arrow next to Run, select Run Once.
    2. Expand Initial Run and check Use Cloud-Init.
    3. The hostname is pre-filled. Fill in other options, such as a new user and password, network configuration, and timezone.
    4. Add a cloud-init script.

Working with Virtual Machine Snapshots

A snapshot is a view of a virtual machine’s operating system and applications on any or all available disks at a given point in time. You can take a snapshot of a virtual machine before you make a change to it that may have unintended consequences. If needed, you can use the snapshot to return the virtual machine to its previous state.

Note:

For best practices when using snapshots, see Considerations When Using Snapshots in the Oracle Linux Virtualization Manager: Architecture and Planning Guide.

Creating a Snapshot of a Virtual Machine

Note:

This procedure is for taking a live snapshot. The QEMU guest agent must be installed and the qemu-guest-agent service must be up and running.

To create a snapshot of a virtual machine:

  1. Click Compute and then select Virtual Machines.

    The Virtual Machines pane opens with the list of virtual machines that have been created.

  2. Under the Name column, select the virtual machine for which to take a snapshot.

    The General tab opens with details about the virtual machine.

  3. Click the Snapshots tab.

  4. Click Create.

  5. (Optional) For the Description field, enter a description for the snapshot.

  6. (Optional) Select the Disks to include checkboxes. By default, all disks are selected.

    Important:

    Not selecting a disk results in the creation of a partial snapshot of the virtual machine without a disk. Although a saved partial snapshot does not have a disk, you can still preview a partial snapshot to view the configuration of the virtual machine.

  7. (Optional) Select the Save Memory check box to include the virtual machine's memory in the snapshot. By default, this checkbox is selected.

  8. Click OK to save the snapshot.

    The virtual machine’s operating system and applications on the selected disks are stored in a snapshot that can be previewed or restored.

    On the Snapshots pane, the Lock icon appears next to the snapshot as it is being created. Once complete, the icon changes to the Snapshot (camera) icon. You can then display details about the snapshot by selecting the General, Disks, Network Interfaces, and Installed Applications drop-down views.

Restoring a Virtual Machine from a Snapshot

A snapshot can be used to restore a virtual machine to a previous state.

Note:

The virtual machine must be in a Down state before performing this task.

To restore a virtual machine from a snapshot:

  1. Click Compute and then select Virtual Machines.

    The Virtual Machines pane opens with the list of virtual machines that have been created.

  2. Under the Name column, select the virtual machine that you want to restore from a snapshot.

    The General tab opens with details about the virtual machine.

  3. Click the Snapshots tab.

  4. On the Snapshots pane, select the snapshot to be used to restore the virtual machine.

  5. From the Preview drop-down list, select Custom.

    On the Virtual Machines pane, the status of the virtual machine briefly changes to Image Locked before returning to Down.

    On the Snapshots pane, the Preview (eye) icon appears next to the snapshot when the preview of the snapshot is completed.

  6. Click Run to start the virtual machine.

    The virtual machine runs using the disk image of the snapshot. You can preview the snapshot and verify the state of the virtual machine.

  7. Click Shutdown to stop the virtual machine.

  8. From the Snapshots pane, perform one of the following steps:
    1. Click Commit to permanently restore the virtual machine to the condition of the snapshot. Any subsequent snapshots are erased.

    2. Alternatively, click Undo to deactivate the snapshot and return the virtual machine to its previous state.

Creating a Virtual Machine from a Snapshot

Before performing this task, you must create a snapshot of a virtual machine. For more information, refer to Creating a Snapshot of a Virtual Machine.

To create a virtual machine from a snapshot:

  1. Click Compute and then select Virtual Machines.

    The Virtual Machines pane opens with the list of virtual machines that have been created.

  2. Under the Name column, select the virtual machine with the snapshot that you want to use as the basis from which to create another virtual machine.

    The General tab opens with details about the virtual machine.

  3. Click the Snapshots tab.

  4. On the Snapshots pane, select the snapshot from which to create the virtual machine.

  5. Click Clone.

    The Clone VM from Snapshot dialog box opens.

  6. For the Name field, enter a name for the virtual machine.

    Note:

    The Name field is the only required field on this dialog box.

  7. Click OK.

    After a short time, the cloned virtual machine appears on the Virtual Machines pane with a status of Image Locked. The virtual machine remains in this state until the Manager completes the creation of the virtual machine. When the virtual machine is ready to use, its status changes from Image Locked to Down on the Virtual Machines pane.

Deleting a Snapshot

You can delete a virtual machine snapshot and permanently remove it from your virtualization environment. This operation is supported on a running virtual machine and does not require the virtual machine to be in a Down state.

Important:

  • When you delete a snapshot from an image chain, there must be enough free space in the storage domain to temporarily accommodate both the original volume and the newly merged volume; otherwise, the snapshot deletion fails. This is due to the data from the two volumes being merged in the resized volume and the resized volume growing to accommodate the total size of the two merged images. In this scenario, you must export and reimport the volume to remove the snapshot.

  • If the snapshot being deleted is contained in a base image, the volume subsequent to the volume containing the snapshot being deleted is extended to include the base volume.

  • If the snapshot being deleted is contained in a QCOW2 (thin-provisioned), non-base image hosted on internal storage, the successor volume is extended to include the volume containing the snapshot being deleted.

To delete a snapshot:

  1. Click Compute and then select Virtual Machines.

    The Virtual Machines pane opens with the list of virtual machines that have been created.

  2. Under the Name column, select the virtual machine with the snapshot that you want to delete.

    The General tab opens with details about the virtual machine.

  3. Click the Snapshots tab.

  4. On the Snapshots pane, select the snapshot to delete.

  5. Click Delete.

  6. Click OK.

    On the Snapshots pane, a Lock icon appears next to the snapshot until the snapshot is deleted.

Security

You can encrypt communications by configuring your organization’s third-party CA certificate to identify the Oracle Linux Virtualization Manager to users connecting over HTTPS.

Using a third-party CA certificate for HTTPS connections does not affect the certificate that is used for authentication between the engine host and KVM hosts. They continue to use the self-signed certificate generated by the Manager.

You can also enable HTTP Strict Transport Security (HSTS) to help protect websites against man-in-the-middle attacks such as protocol downgrade attacks and cookie hijacking.

Replacing the Oracle Linux Virtualization Manager Apache SSL Certificate

Before you begin you must obtain a third-party CA certificate, which is a digital certificate issued by a certificate authority (CA). The certificate is provided as a PEM file. The certificate chain must be complete up to the root certificate. The chain’s order is critical and must be from the last intermediate certificate to the root certificate.

Caution:

Do not change the permissions and ownerships for the /etc/pki directory or any subdirectories. The permission for the /etc/pki and /etc/pki/ovirt-engine directories must remain as the default value of 755.

To replace the Oracle Linux Virtualization Manager Apache SSL Certificate:

  1. Copy the new third-party CA certificate to the host-wide trust store and update the trust store.

    # cp third-party-ca-cert.pem /etc/pki/ca-trust/source/anchors/
    # update-ca-trust export
  2. Remove the symbolic link to /etc/pki/ovirt-engine/apache-ca.pem.

    The Engine has been configured to use /etc/pki/ovirt-engine/apache-ca.pem, which is symbolically linked to /etc/pki/ovirt-engine/ca.pem.

    # rm /etc/pki/ovirt-engine/apache-ca.pem 
  3. Copy the CA certificate into the PKI directory for the Manager.

    # cp third-party-ca-cert.pem /etc/pki/ovirt-engine/apache-ca.pem 
  4. Back up the existing private key and certificate.

    # cp /etc/pki/ovirt-engine/certs/apache.cer /etc/pki/ovirt-engine/certs/apache.cer.bck
    # cp /etc/pki/ovirt-engine/keys/apache.key.nopass /etc/pki/ovirt-engine/keys/apache.key.nopass.bck
  5. Copy the new Apache private key into the PKI directory for the Manager by entering the following command and responding to the prompt.

    # cp apache.key /etc/pki/ovirt-engine/keys/apache.key.nopass
    cp: overwrite /etc/pki/ovirt-engine/keys/apache.key.nopass? y
  6. Copy the new Apache certificate into the PKI directory for the Manager by entering the following command and responding to the prompt.

    # cp apache.cer /etc/pki/ovirt-engine/certs/apache.cer 
    cp: overwrite /etc/pki/ovirt-engine/certs/apache.cer? y
  7. Restart the Apache HTTP server (httpd) and the Manager.

    # systemctl restart httpd
    # systemctl restart ovirt-engine
  8. Create a new trust store configuration file (or edit the existing one) at /etc/ovirt-engine/engine.conf.d/99-custom-truststore.conf by adding the following parameters.

    ENGINE_HTTPS_PKI_TRUST_STORE="/etc/pki/java/cacerts" 
    ENGINE_HTTPS_PKI_TRUST_STORE_PASSWORD=""  
  9. Back up the existing Websocket configuration file.

    # cp /etc/ovirt-engine/ovirt-websocket-proxy.conf.d/10-setup.conf \
      /etc/ovirt-engine/ovirt-websocket-proxy.conf.d/10-setup.conf.bck
  10. Edit the Websocket configuration file at /etc/ovirt-engine/ovirt-websocket-proxy.conf.d/10-setup.conf by adding the following parameters.

    SSL_CERTIFICATE=/etc/pki/ovirt-engine/certs/apache.cer 
    SSL_KEY=/etc/pki/ovirt-engine/keys/apache.key.nopass
  11. Restart the ovirt-provider-ovn service.

    # systemctl restart ovirt-provider-ovn
  12. Restart the ovirt-engine service.

    # systemctl restart ovirt-engine
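
To confirm that Apache is serving the new certificate, you can inspect it with an openssl client check. This is a sketch; manager.example.com is a placeholder for your Manager FQDN:

# echo | openssl s_client -connect manager.example.com:443 -servername manager.example.com 2>/dev/null | openssl x509 -noout -issuer -dates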

Enabling HTTP Strict Transport Security

To enable HTTP Strict Transport Security, complete the following steps.

  1. For the ovirt-engine service port 443, create a configuration file for httpd, for example:
    # cat ovirt-enable-strict-transport-security.conf
    LoadModule headers_module modules/mod_headers.so
    Header always set Strict-Transport-Security "max-age=63072000; includeSubDomains"
    <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
    </IfModule>
     
    # systemctl restart httpd
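
    You can then confirm that the header is returned on port 443. A quick check with curl (run on the Manager host; localhost assumed):

    # curl -s -I -k https://localhost/ | grep -i strict-transport-security
    Strict-Transport-Security: max-age=63072000; includeSubDomains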
  2. For the ovirt-imageio service port, modify the _internal/http.py file:

    # vi /usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/http.py

    class Response:

        def __init__(self, con):
            self._con = con
            self.status_code = OK
            self.headers = Headers({"content-length": 0, "Strict-Transport-Security": "max-age=31536000"})
            self._started = False

    # systemctl restart ovirt-imageio

    # curl -s -I -k https://localhost:54323
    HTTP/1.1 404 Not Found
    server: imageio/2.4.7
    date: Wed, 13 Sep 2023 16:56:45 GMT
    content-length: 19
    Strict-Transport-Security: max-age=31536000
    content-type: text/plain; charset=UTF-8
  3. For the ovirt-provider-ovn service port, modify the server.py file:

    # vi /usr/lib64/python3.6/http/server.py

        def send_response(self, code, message=None):
            """Add the response header to the headers buffer and log the
            response code.

            Also send two standard headers with the server software
            version and the current date.

            """
            self.log_request(code)
            self.send_response_only(code, message)
            self.send_header('Server', self.version_string())
            self.send_header('Date', self.date_time_string())
            # Oracle Bug-33308887: added below header for security scans
            self.send_header("Strict-Transport-Security", "max-age=31536000")

    # systemctl restart ovirt-provider-ovn

    # curl -s -I -k https://localhost:35357
    HTTP/1.0 501 Unsupported method ('HEAD')
    Server: BaseHTTP/0.6 Python/3.6.8
    Date: Wed, 13 Sep 2023 17:34:32 GMT
    Strict-Transport-Security: max-age=31536000
    Connection: close
    Content-Type: application/json
    Content-Length: 137
  4. For the ovirt-websocket-proxy service port, modify the response.py file:

    # vi /usr/lib/python3.6/site-packages/webob/response.py

            # Initialize headers
            self._headers = None
            if headerlist is None:
                self._headerlist = []
            else:
                self._headerlist = headerlist
            self._headerlist.append(('Strict-Transport-Security', 'max-age=31536000'))

    # systemctl restart ovirt-websocket-proxy

    # curl -s -I -k https://localhost:6100
    HTTP/1.1 405 Method Not Allowed
    Server: WebSockify Python/3.6.8
    Date: Wed, 13 Sep 2023 18:31:12 GMT
    Strict-Transport-Security: max-age=31536000
    Connection: close
    Content-Type: text/html;charset=utf-8
    Content-Length: 472

Monitoring

The following section explains how to set up and use Grafana dashboards and event notifications for your virtualization environment.

Using Event Notifications

The following section explains how to set up event notifications to monitor events in your virtualization environment. You can configure the Manager to send event notifications in email to alert designated users when certain events occur or enable Simple Network Management Protocol (SNMP) traps to monitor your virtualization environment.

For more information, see Event Logging and Notifications in the Oracle Linux Virtualization Manager: Architecture and Planning Guide.

Configuring Event Notification Services on the Engine

For event notifications to be sent properly to email recipients, you must configure the mail server on the Engine and enable the ovirt-engine-notifier service. For more information about creating event notifications in the Administration portal, see Creating Event Notifications in the Administration Portal.

To configure notification services on the Engine:

  1. Log in to the host that is running the Manager.

  2. Copy the ovirt-engine-notifier.conf to a new file named 90-email-notify.conf.

    # cp /usr/share/ovirt-engine/services/ovirt-engine-notifier/ovirt-engine-notifier.conf \
      /etc/ovirt-engine/notifier/notifier.conf.d/90-email-notify.conf
  3. Edit the 90-email-notify.conf file by deleting everything except the EMAIL Notifications section.

    Note:

    If you plan to also configure SNMP traps in your virtualization environment, you can also copy the values from the SNMP_TRAP Notifications section of the ovirt-engine-notifier.conf file to a file named 20-snmp.conf. For more information, see Configuring the Engine to Send SNMP Traps.

  4. Enter the correct email variables. This file overrides the values in the original ovirt-engine-notifier.conf file.
    #---------------------#
    # EMAIL Notifications #
    #---------------------#

    # The SMTP mail server address. Required.
    MAIL_SERVER=myemailserver.mycompany.com

    # The SMTP port (usually 25 for plain SMTP, 465 for SMTP with SSL, 587 for SMTP with TLS)
    MAIL_PORT=25

    # Required if SSL or TLS enabled to authenticate the user. Used also to specify 'from'
    # user address if mail server supports, when MAIL_FROM is not set. Address is in RFC822 format
    MAIL_USER=email.example.com

    # Required to authenticate the user if mail server requires authentication or if SSL or
    # TLS is enabled
    SENSITIVE_KEYS="${SENSITIVE_KEYS},MAIL_PASSWORD"
    MAIL_PASSWORD=

    # Indicates type of encryption (none, ssl or tls) should be used to communicate with
    # mail server.
    MAIL_SMTP_ENCRYPTION=none

    # If set to true, sends a message in HTML format.
    HTML_MESSAGE_FORMAT=false

    # Specifies 'from' address on sent mail in RFC822 format, if supported by mail server.
    MAIL_FROM=myovirtengine@mycompany.com

    # Specifies 'reply-to' address on sent mail in RFC822 format.
    MAIL_REPLY_TO=myusername@mycompany.com

    # Interval to send smtp messages per # of IDLE_INTERVAL
    MAIL_SEND_INTERVAL=1

    # Amount of times to attempt sending an email before failing.
    MAIL_RETRIES=4

    Note:

    For information about the other parameters available for event notification in the ovirt-engine-notifier.conf file, refer to oVirt Documentation.

  5. Enable and restart the ovirt-engine-notifier service to activate your changes.

    # systemctl daemon-reload
    # systemctl enable ovirt-engine-notifier.service
    # systemctl restart ovirt-engine-notifier.service
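
    (Optional) Confirm that the notifier started cleanly by checking its log:

    # journalctl -u ovirt-engine-notifier.service --since "5 minutes ago"
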
Creating Event Notifications in the Administration Portal

Before creating event notifications, you must have access to an email server that can handle incoming automated messages and deliver these messages to a distribution list. You should also configure event notification services on the Engine. For more information, see Configuring Event Notification Services on the Engine.

To create event notifications in the Administration Portal:

  1. Go to Administration and then click Users.

    The Users pane opens.

  2. Under the User Name column, click the name of the user to display the detailed view for the user.

    Note:

    A user does not appear in the Administration Portal until the user is created and assigned appropriate permissions. For more information, refer to Creating a New User Account.

  3. Click the Event Notifier tab.

  4. Click Manage Events.

    The Add Event Notification dialog box opens.

  5. Select the events for which you want to create notifications by selecting the check box next to individual events or event topic areas for notification.

    The events available for notification are grouped under topic areas. By default, selecting the check box for a top-level topic area, such as General Host Events, selects all events under that topic area. You can optionally expand or collapse all the event topic areas by clicking Expand All or Collapse All. Additionally, you can click the arrow icon next to a specific top-level topic area to expand or collapse the events associated with a specific topic area.

  6. For the Mail Recipient field, enter an email address.

  7. Click OK to save the changes.

Canceling Event Notifications in the Administration Portal

To cancel event notifications in the Administration Portal:

  1. Go to Administration and then click Users.

    The Users pane opens.

  2. Under the User Name column, click the name of the user to display the detailed view for the user.

  3. Click the Event Notifier tab.

  4. Click Manage Events.

    The Add Event Notification dialog box opens.

  5. Click Expand All, or the topic-specific expansion options, to display the events.

  6. Clear the appropriate check boxes to cancel the notification for that event.

  7. Click OK to save your changes.

Configuring the Engine to Send SNMP Traps

You can configure the Manager to send SNMP traps to one or more external SNMP managers. SNMP traps contain system event information that is used to monitor your virtualization environment. The number and type of traps sent to the SNMP manager can be defined within the Engine.

Before performing this task, you must have configured one or more external SNMP managers to receive traps, and know the following details:

  • The IP addresses or fully-qualified domain names of machines that act as SNMP managers. Optionally, determine the port through which the SNMP manager receives trap notifications; the default UDP port is 162.

  • The SNMP community. Multiple SNMP managers can belong to a single community. Management systems and agents can communicate only if they are within the same community. The default community is public.

  • The trap object identifier for alerts. The Engine provides a default OID of 1.3.6.1.4.1.2312.13.1.1. All trap types are sent, appended with event information, to the SNMP manager when this OID is defined.

    Note:

    • Changing the default trap prevents generated traps from complying with the Engine’s management information base.

    • The Engine provides management information bases at /usr/share/doc/ovirt-engine/mibs/OVIRT-MIB.txt and /usr/share/doc/ovirt-engine/mibs/REDHAT-MIB.txt. Load the MIBs in your SNMP manager before proceeding.

To configure SNMP traps on the Engine:

  1. Log in to the host that is running the Manager.

  2. On the Engine, create the SNMP configuration file:
    # vi /etc/ovirt-engine/notifier/notifier.conf.d/20-snmp.conf

    Default SNMP configuration values exist on the Engine in the events notifications configuration file (ovirt-engine-notifier.conf), which is available at /usr/share/ovirt-engine/services/ovirt-engine-notifier/ovirt-engine-notifier.conf. The values provided in this step are based on the default or example values provided in that file. To ensure that your configuration settings persist across reboots, define an override file for your SNMP configuration (20-snmp.conf) rather than editing the ovirt-engine-notifier.conf file. For more information, see Configuring Event Notification Services on the Engine.

  3. Specify the SNMP manager, the SNMP community, and the OID in the following format:
    SNMP_MANAGERS="manager1.example.com manager2.example.com:162"
    SNMP_COMMUNITY=public
    SNMP_OID=1.3.6.1.4.1.2312.13.1.1
    The following values can be configured in the 20-snmp.conf file.
    #-------------------------#
    # SNMP_TRAP Notifications #
    #-------------------------#
    # Send v2c snmp notifications
    
    # Minimum SNMP configuration
    #
    # Create /etc/ovirt-engine/notifier/notifier.conf.d/20-snmp.conf with:
    # SNMP_MANAGERS="host"
    # FILTER="include:*(snmp:) ${FILTER}"
    
    # Default whitespace separated IPv4/[IPv6]/DNS list with optional port, default is 162.
    # SNMP_MANAGERS="manager1.example.com manager2.example.com:164"
    SNMP_MANAGERS=
    
    # Default SNMP Community String.
    SNMP_COMMUNITY=public
    
    # SNMP Trap Object Identifier for outgoing notifications.
    # { iso(1) org(3) dod(6) internet(1) private(4) enterprises(1) redhat(2312) ovirt(13)
    #   engine(1) notifier(1) }
    #
    # Note: changing the default will prevent generated traps from complying with
    # OVIRT-MIB.txt.
    SNMP_OID=1.3.6.1.4.1.2312.13.1.1
    
    # Default SNMP Version. SNMP version 2 and version 3 traps are supported
    # 2 = SNMPv2
    # 3 = SNMPv3
    SNMP_VERSION=2
    
    # The engine id used for SNMPv3 traps
    SNMP_ENGINE_ID=
    
    # The user name used for SNMPv3 traps
    SNMP_USERNAME=
    
    # The SNMPv3 auth protocol. Supported values are MD5 and SHA.
    SNMP_AUTH_PROTOCOL=
    
    # The SNMPv3 auth passphrase, used when SNMP_SECURITY_LEVEL is set to AUTH_NOPRIV
    # and AUTH_PRIV
    SNMP_AUTH_PASSPHRASE=
    
    # The SNMPv3 privacy protocol. Supported values are AES128, AES192 and AES256.
    # Be aware that AES192 and AES256 are not defined in RFC3826, so please verify
    # that your SNMP server supports those protocols before enabling them.
    SNMP_PRIVACY_PROTOCOL=
    
    # The SNMPv3 privacy passphrase, used when SNMP_SECURITY_LEVEL is set to AUTH_PRIV
    SNMP_PRIVACY_PASSPHRASE=
    
    # The SNMPv3 security level.
    # 1 = NOAUTH_NOPRIV
    # 2 = AUTH_NOPRIV
    # 3 = AUTH_PRIV
    SNMP_SECURITY_LEVEL=1
    
    # SNMP profile support
    #
    # Multiple SNMP profiles are supported.
    # Specify profile settings by using _profile suffix,
    # for example, to define a profile to send a specific
    # message to host3, specify:
    # SNMP_MANAGERS_profile1=host3
    # FILTER="include:VDC_START(snmp:profile1) ${FILTER}"
  4. Define which events to send to the SNMP Manager.

    By default, the following filter is defined in the ovirt-engine-notifier.conf file; if you do not override this filter or apply overriding filters, no notifications are sent.
    FILTER="exclude:\*"
    The following are other common examples of event filters.
    • Send all events to the default SNMP profile.
      FILTER="include:\*(snmp:) ${FILTER}"
    • Send all events with the severity ERROR or ALERT to the default SNMP profile:
      FILTER="include:\*:ERROR(snmp:) ${FILTER}"
       FILTER="include:\*:ALERT(snmp:) ${FILTER}"
  5. Save the file.

  6. Start the ovirt-engine-notifier service, and ensure that this service starts on boot.
     # systemctl start ovirt-engine-notifier.service
     # systemctl enable ovirt-engine-notifier.service
  7. (Optional) Validate that traps are being sent to the SNMP Manager.
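
    For example, assuming tcpdump is available on the SNMP manager host, you can watch for incoming traps on the default trap port:

    # tcpdump -i any udp port 162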

Using Grafana

The following section explains how to set up and use Grafana dashboards in your virtualization environment.

Important:

You must install the Data Warehouse database, the Data Warehouse service, and Grafana all on the same machine, even though these components can otherwise be installed on separate machines.

For more information, see Monitoring with Grafana in the Oracle Linux Virtualization Manager: Architecture and Planning Guide.

For more information on using Grafana, see the Grafana website.

Installing Grafana

Grafana integration is enabled and installed by default when you run engine-setup in a standard engine or a self-hosted engine installation. In some scenarios, you might need to install Grafana manually, such as after performing an upgrade, restoring a backup, or migrating the data warehouse to a separate machine.

To install Grafana manually:

  1. (Self-hosted engine only) Put the environment in global maintenance mode:

    # hosted-engine --set-maintenance --mode=global
  2. Log in to the machine where you want to install Grafana. This should be the same machine where the data warehouse is configured; usually the engine machine.

  3. Run the engine-setup command as follows to initiate the reconfiguration for Grafana:

    # engine-setup --reconfigure-optional-components
  4. Press Enter to answer Yes to install Grafana on this machine.

    Configure Grafana on this host (Yes, No) [Yes]:
  5. (Self-hosted engine only) Disable global maintenance mode.

    # hosted-engine --set-maintenance --mode=none

Once installed, you can access the Grafana dashboards in one of the following ways:

  • Go to https://<engine FQDN or IP address>/ovirt-engine-grafana.

  • Click Monitoring Portal in the web administration welcome page.

For more information, see Default Grafana Dashboards in the Oracle Linux Virtualization Manager: Architecture and Planning Guide.

Configuring Users for Single Sign-On with Grafana

Even though engine-setup automatically configures Grafana to allow existing users to log in from the Administration Portal, it does not automatically create these users within Grafana.

To configure a user for single sign-on with Grafana:

  1. Log in to the host that is running the Manager.

  2. Edit the user account to add an email address if not already defined, for example:

    # ovirt-aaa-jdbc-tool user edit test1 --attribute=email=jdoe@example.com
    updating user test1...
    user updated successfully
  3. Log in to Grafana with an existing admin user (the initially configured admin).
  4. Navigate to Configuration and then Users and select Invite.
  5. Enter the email address and name of the user account and select a Role.
  6. Send the invitation using one of these options:

    • Select Send invite mail and click Submit. For this option, you need an operational local mail server configured on the Grafana machine.

    • Select Pending Invites

      • Locate the entry you want
      • Select Copy invite
      • Use this link to create the account by pasting it directly into a browser address bar, or by sending it to another user.

      Note:

      If you use the Pending Invites option, no email is sent, and the email address does not really need to exist; any valid-looking address will work, as long as it’s configured as the email address of a Manager user.

To log in with this account:

  1. Log in to the Administration Portal using the account that has the email address configured in Grafana.
  2. Select Monitoring Portal to open the Grafana dashboard.
  3. Select Sign in with oVirt Engine Auth.

Backup and Restore

You can use the engine-backup command utility to take regular backups of Oracle Linux Virtualization Manager. The tool backs up the engine database and configuration files into a single file and can be run without interrupting the ovirt-engine service.

The engine-backup command has two modes:

# engine-backup --mode=backup
# engine-backup --mode=restore

Run engine-backup --help for a full list of options and their function.

The basic options are:

--mode

Specifies the operation that the command performs. The available options are: backup (default), restore, and verify.

--file

Specifies the path and name of the backup file, for example, file_name.backup. For backup mode, the file is where backups are saved. For restore mode, the file is read as backup data. The default path is /var/lib/ovirt-engine-backup/.

--log

Specifies the path and name of the log file, for example, log_file_name. This file logs the backup or restore operations. The default path is /var/log/ovirt-engine-backup/.

--scope

Specifies the scope of the backup or restore operation and can be specified multiple times in the same engine-backup command. There are four options:
  • all (default) - back up or restore all databases and configuration data
  • files - back up or restore only files on the system
  • db - back up or restore only the Engine database
  • dwhdb - back up or restore only the Data Warehouse database
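
For example, a verify run checks an existing backup file without restoring it (the paths here are placeholders):

# engine-backup --mode=verify --file=/var/lib/ovirt-engine-backup/file_name.backup --log=/tmp/engine-backup-verify.log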

For more information on backup and restore, see the oVirt documentation Administration Guide.

Backing Up the Manager

To back up the Manager:

  1. Log into the host that is running the Manager.

    Note:

    When running the Manager within a virtual machine (standalone or self-hosted engine), log in to the virtual machine that is running the engine.

  2. Create a full backup of the Manager. You do not need to stop the ovirt-engine service before creating your backup.

    # engine-backup --mode=backup --scope=all --file=path --log=path

    The following example shows how to use the engine-backup command to create a full backup of the Manager. A backup file and a log file for the Manager backup are created in the paths specified.

    # engine-backup --mode=backup --scope=all --file=backup/file/ovirt-engine-backup --log=backup/log/ovirt-engine-backup.log
    Backing up:
    Notifying engine
    - Files
    - Engine database 'engine'
    - DWH database 'ovirt_engine_history'
    Packing into file 'backup/file/ovirt-engine-backup'
    Notifying engine
    Done.
  3. (Optional) Set up a cron job to take regular backups.

    By default, the Manager does not take automatic backups. Oracle recommends that you take regular backups of the Manager.

    The following example shows a sample cron job defined in a crontab-format file.

    today=`date +'%Y%m%d-%H%M'`
    engine-backup --mode=backup --scope=all --file=/backup/file/ovirt-engine-backup-${today} \
    --log=/backup/log/ovirt-engine-backup-${today}.log
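
    If you save those commands as an executable script, a crontab entry such as the following (the script path is hypothetical) runs it nightly at 02:00:

    0 2 * * * /usr/local/sbin/engine-backup-nightly.sh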

Restoring a Full Backup of the Manager

To restore a full backup of the Manager:

  1. Log in to the host that is running the Manager.

    Note:

    When running the Manager within a virtual machine (standalone or self-hosted engine), log in to the virtual machine that is running the engine.

  2. Clean up the objects associated with the Manager.

    # engine-cleanup

    The engine-cleanup command removes the configuration files and cleans the database associated with the Manager.

    The following example shows output from the engine-cleanup command.

    # engine-cleanup
    [ INFO  ] Stage: Initializing
    [ INFO  ] Stage: Environment setup
              Configuration files: ...
              Log file: ...
              Version: otopi-1.7.8 (otopi-1.7.8-1.el7)
    [ INFO  ] Stage: Environment packages setup
    [ INFO  ] Stage: Programs detection
    [ INFO  ] Stage: Environment customization
              Do you want to remove all components? (Yes, No) [Yes]: Yes
              The following files were changed since setup:
              /etc/ovirt-engine/engine.conf.d/11-setup-sso.conf
              Remove them anyway? (Yes, No) [Yes]: Yes
    
              --== PRODUCT OPTIONS ==--
    
    [ INFO  ] Stage: Setup validation
              During execution engine service will be stopped (OK, Cancel) [OK]: OK
              All the installed ovirt components are about to be removed ...(OK, Cancel) 
              [Cancel]: OK
    [ INFO  ] Stage: Transaction setup
    [ INFO  ] Stopping engine service
    [ INFO  ] Stopping ovirt-fence-kdump-listener service
    [ INFO  ] Stopping dwh service
    [ INFO  ] Stopping Image I/O Proxy service
    [ INFO  ] Stopping vmconsole-proxy service
    [ INFO  ] Stopping websocket-proxy service
    [ INFO  ] Stage: Misc configuration
    [ INFO  ] Stage: Package installation
    [ INFO  ] Stage: Misc configuration
    [ INFO  ] Backing up PKI configuration and keys
    ...
    [ INFO  ] Clearing Engine database engine
    ...
    [ INFO  ] Clearing DWH database ovirt_engine_history
    [ INFO  ] Removing files
    [ INFO  ] Reverting changes to files
    ...
    [ INFO  ] Stage: Transaction commit
    [ INFO  ] Stage: Closing up
    
              --== SUMMARY ==--
    
              Engine setup successfully cleaned up
              A backup of PKI configuration and keys is available at ...
              ovirt-engine has been removed
              A backup of the Engine database is available at ...
              A backup of the DWH database is available at ...
    
              --== END OF SUMMARY ==--
    
    [ INFO  ] Stage: Clean up
              Log file is located at ...
    [ INFO  ] Generating answer file ...
    [ INFO  ] Stage: Pre-termination
    [ INFO  ] Stage: Termination
    [ INFO  ] Execution of cleanup completed successfully
  3. Restore a full backup of the Manager.

    The following form of the engine-backup command is used to restore a full backup of the Manager.

    # engine-backup --mode=restore --scope=all --file=path --log=path --restore-permissions

    The following example shows how to use the engine-backup command to restore a full backup of the Manager.

    # engine-backup --mode=restore --scope=all --file=backup/file/ovirt-engine-backup \
      --log=backup/log/ovirt-engine-backup.log --restore-permissions
    Preparing to restore:
    - Unpacking file 'backup/file/ovirt-engine-backup'
    Restoring:
    - Files
    - Engine database 'engine'
      - Cleaning up temporary tables in engine database 'engine'
      - Updating DbJustRestored VdcOption in engine database
      - Resetting DwhCurrentlyRunning in dwh_history_timekeeping in engine database
      - Resetting HA VM status
    ------------------------------------------------------------------------------
    Please note:
    
    The engine database was backed up at 2019-03-25 12:48:02.000000000 -0700 .
    
    Objects that were added, removed or changed after this date, such as virtual
    machines, disks, etc., are missing in the engine, and will probably require
    recovery or recreation.
    ------------------------------------------------------------------------------
    - DWH database 'ovirt_engine_history'
    You should now run engine-setup.
    Done.
  4. Run the engine-setup command to complete the setup of the restored Manager.

    # engine-setup    

    This command reconfigures the firewall and ensures that the Manager service is correctly configured.

  5. Log in to the Manager and verify that the environment has been restored from the backup.
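
    As a quick sanity check before logging in, you can confirm that the Manager service is running again. This is a minimal check only; it does not validate the restored data itself:

    # systemctl status ovirt-engine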