4 Self-Hosted Engine Deployment

In Oracle Linux Virtualization Manager, a self-hosted engine is a virtualized environment where the engine runs inside a virtual machine on the hosts in the environment. The virtual machine for the engine is created as part of the host configuration process, and the engine is installed and configured in parallel with the host configuration.

Because the engine runs as a virtual machine rather than on physical hardware, a self-hosted engine requires fewer physical resources. Additionally, because the engine is configured to be highly available, if the host running the Engine virtual machine goes into maintenance mode or fails unexpectedly, the virtual machine is migrated automatically to another host in the environment. A minimum of two KVM hosts is required.

To review conceptual information, troubleshooting, and administration tasks, see the oVirt Self-Hosted Engine Guide in oVirt Documentation.

To deploy a self-hosted engine, you perform a fresh installation of Oracle Linux 8.8 (or later Oracle Linux 8 release) on the host, install the Oracle Linux Virtualization Manager Release 4.5 package, and then run the hosted engine deployment tool to complete configuration.

Note:

If you are deploying a self-hosted engine as a hyperconverged infrastructure with GlusterFS storage, you must deploy GlusterFS BEFORE you deploy the self-hosted engine. See Deploying GlusterFS Storage.

You can deploy a self-hosted engine using either the command line or the Cockpit portal. If you want to use the command line, proceed to Using the Command Line to Deploy. If you want to use the Cockpit portal, proceed to Using the Cockpit Portal to Deploy.

Note:

If you are behind a proxy, you must use the command line option to deploy.

Self-Hosted Engine Prerequisites

In addition to the Requirements and Scalability Limits, you must satisfy the following prerequisites before deploying a self-hosted engine.

  • A minimum of two KVM hosts.
  • A fully-qualified domain name for your engine and host with forward and reverse lookup records set in the DNS.

  • A directory of at least 5 GB on the host for the oVirt Engine Appliance. During the deployment process the /var/tmp directory is checked to see if it has enough space to extract the appliance files. If the /var/tmp directory does not have enough space, you can specify a different directory or mount external storage.

    Note:

    The VDSM user and KVM group must have read, write, and execute permissions on the directory.

  • Prepared storage of at least 74 GB to be used as a data storage domain dedicated to the engine virtual machine. The data storage domain is created during the self-hosted engine deployment.

    If you are using iSCSI storage, do not use the same iSCSI target for the self-hosted engine storage domain and any additional storage domains.

    Attention:

    When you have a data center with only one active data storage domain and that domain gets corrupted, you are unable to add new data storage domains or remove the corrupted data storage domain. If you have deployed your self-hosted engine in such a data center and its data storage domain gets corrupted, you must redeploy your self-hosted engine.

  • The host you are using to deploy a self-hosted engine must be able to access yum.oracle.com. (A minimal verification sketch for these prerequisites follows this list.)
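
You can spot-check several of these prerequisites from a shell on the host before starting the deployment. The following is a minimal sketch; manager.example.com and 192.0.2.10 are placeholders for your own engine FQDN and IP address, and the dig command is provided by the bind-utils package.

Check forward and reverse DNS resolution for the engine FQDN (repeat for the host FQDN):

# dig +short manager.example.com
# dig -x 192.0.2.10 +short

Check the free space, ownership, and permissions of /var/tmp:

# df -h /var/tmp
# ls -ld /var/tmp

Check that yum.oracle.com is reachable:

# curl -sI https://yum.oracle.com | head -1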

Deploying the Self-Hosted Engine

You must perform a fresh installation of Oracle Linux 8.8 (or later Oracle Linux 8 release) on an Oracle Linux Virtualization Manager host before deploying a self-hosted engine. You can download the installation ISO from the Oracle Software Delivery Cloud at https://edelivery.oracle.com.

  1. Install Oracle Linux 8.8 (or later Oracle Linux 8 release) on the host using the Minimal Install base environment.

    Caution:

    Do NOT select any other base environment than Minimal Install for the installation or your hosts will have incorrect qemu and libvirt versions, incorrect repositories configured, and no access to virtual machine consoles.

    Do not install any additional packages until after you have installed the Manager packages, because they may cause dependency issues.

    Follow the instructions in Oracle® Linux 8: Installing Oracle Linux.

  2. Ensure that the firewalld service is enabled and started. (A brief verification sketch follows this procedure.)

    For more information about configuring firewalld, see Configuring a Packet Filtering Firewall in Oracle® Linux 8: Configuring the Firewall.

  3. Complete one of the following sets of steps:

    • For ULN registered hosts or using Oracle Linux Manager

      Subscribe the system to the required channels.

      1. For ULN registered hosts, log in to https://linux.oracle.com with your ULN user name and password. For Oracle Linux Manager registered hosts, access your internal server URL.

      2. On the Systems tab, click the link named for the host in the list of registered machines.

      3. On the System Details page, click Manage Subscriptions.

      4. On the System Summary page, select each required channel from the list of available channels and click the right arrow to move the channel to the list of subscribed channels. Subscribe the system to the following channels:

        • ol8_x86_64_baseos_latest

        • ol8_x86_64_appstream

        • ol8_x86_64_kvm_appstream

        • ol8_x86_64_ovirt45

        • ol8_x86_64_ovirt45_extras

        • ol8_x86_64_gluster_appstream

        • (For VDSM) ol8_x86_64_UEKR7

      5. Click Save Subscriptions.

      6. Install the Oracle Linux Virtualization Manager Release 4.5 package, which automatically enables/disables the required repositories.

        # dnf install oracle-ovirt-release-45-el8                    
    • For Oracle Linux yum server hosts

      Install the Oracle Linux Virtualization Manager Release 4.5 package and enable the required repositories.

      1. Enable the ol8_baseos_latest yum repository.

        # dnf config-manager --enable ol8_baseos_latest   
      2. Install the Oracle Linux Virtualization Manager Release 4.5 package, which automatically enables/disables the required repositories.

        # dnf install oracle-ovirt-release-45-el8   
      3. Use the dnf command to verify that the required repositories are enabled.

        1. Clear the yum cache.

          # dnf clean all                
        2. List the configured repositories and verify that the required repositories are enabled.

          # dnf repolist               

          The following repositories must be enabled:

          • ol8_baseos_latest

          • ol8_appstream

          • ol8_kvm_appstream

          • ovirt-4.5

          • ovirt-4.5-extra

          • ol8_gluster_appstream

          • (For VDSM) ol8_UEKR7

        3. If a required repository is not enabled, use the dnf config-manager command to enable it.

          # dnf config-manager --enable repository                             
  4. If your host is running UEK R7:
    1. Install the Extra kernel modules package.
      # dnf install kernel-uek-modules-extra
    2. Reboot the host.
  5. Install the hosted engine deployment tool and engine appliance.

    # dnf install ovirt-hosted-engine-setup -y
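
Before proceeding to the deployment itself, you can confirm that the firewalld service from step 2 is running and that the repositories from step 3 are enabled. A minimal sketch; the grep pattern is only illustrative:

# firewall-cmd --state
# dnf repolist --enabled | grep -E 'ovirt|kvm'
# dnf info ovirt-engine-appliance

If the repositories are set up correctly, the last command should report the oVirt Engine Appliance package that the deployment tool installs on demand.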

Using the Command Line to Deploy

You can deploy the self-hosted engine from the command line. A script collects the details of your environment and uses them to configure the host and the engine.

  1. Start the deployment. IPv6 is used by default. To use IPv4, specify the --4 option:

    # hosted-engine --deploy --4

    Optionally, use the --ansible-extra-vars option to define variables for the deployment. For example:

    # hosted-engine --deploy --4 --ansible-extra-vars="@/root/extra-vars.yml"
    
    # cat /root/extra-vars.yml
    ---
    he_pause_host: true
    he_proxy: "http://<host>:<port>"
    he_enable_keycloak: false

    See the oVirt Documentation for more information.

  2. Enter Yes to begin deployment.

    Continuing will configure this host for serving as hypervisor and will create a local VM 
    with a running engine. The locally running engine will be used to configure a new storage 
    domain and create a VM there. At the end the disk of the local VM will be moved to the 
    shared storage.
    Are you sure you want to continue? (Yes, No)[Yes]:

    Note:

    The hosted-engine script creates a virtual machine and uses cloud-init to configure it. The script also runs engine-setup and reboots the system so that the virtual machine can be managed by the high availability agent.

  3. Enter the name of the data center or accept the default.

    Please enter the name of the data center where you want to deploy this hosted-engine
    host. Data center [Default]: 
  4. Enter a name for the cluster or accept the default.

    Please enter the name of the cluster where you want to deploy this hosted-engine host. 
    Cluster [Default]: 
  5. Keycloak integration is a technology preview feature that provides an internal Single Sign-On (SSO) provider for the Engine and deprecates AAA. The default response is Yes; however, because this is a preview feature, enter No.

    Configure Keycloak integration on the engine(Yes, No) [Yes]:No
  6. Configure the network.

    1. If the gateway that is displayed is correct, press Enter to configure the network.

    2. Enter a pingable address on the same subnet so the script can check the host’s connectivity.

      Please indicate a pingable gateway IP address [X.X.X.X]:
    3. The script detects possible NICs to use as a management bridge for the environment. Select the default.

      Please indicate a nic to set ovirtmgmt bridge on: (eth1, eth0) [eth1]:
  7. Enter the path to an OVA archive if you want to use a custom appliance for the virtual machine installation. Otherwise, leave this field empty to use the oVirt Engine Appliance.

    If you want to deploy with a custom engine appliance image, please specify the path to 
    the OVA archive you would like to use.
    Entering no value will use the image from the ovirt-engine-appliance rpm, 
    installing it if needed.
    Appliance image path []:
  8. Specify the fully-qualified domain name for the engine virtual machine.

    Please provide the FQDN you would like to use for the engine appliance.
     Note: This will be the FQDN of the engine VM you are now going to launch,
     it should not point to the base host or to any other existing machine.
     Engine VM FQDN:  manager.example.com
     Please provide the domain name you would like to use for the engine appliance.
     Engine VM domain: [example.com]
  9. Enter and confirm a root password for the engine.

    Enter root password that will be used for the engine appliance:
    Confirm appliance root password:
  10. Optionally, enter an SSH public key to enable you to log in to the engine as the root user and specify whether to enable SSH access for the root user.

    Enter ssh public key for the root user that will be used for the engine 
    appliance (leave it empty to skip):
    Do you want to enable ssh access for the root user (yes, no, without-password) 
    [yes]:
    You may provide an SSH public key, that will be added by the deployment script to the 
    authorized_keys file of the root user in the engine appliance.
    This should allow you passwordless login to the engine machine after deployment.
    If you provide no key, authorized_keys will not be touched.
    SSH public key []:
    [WARNING] Skipping appliance root ssh public key
    Do you want to enable ssh access for the root user? (yes, no, without-password) [yes]:
  11. Enter the virtual machine’s CPU and memory configuration.

    Please specify the number of virtual CPUs for the VM (Defaults to appliance 
    OVF value): [4]:
    Please specify the memory size of the VM in MB. The default is the appliance 
    OVF value [16384]:
  12. Enter a MAC address for the engine virtual machine or accept a randomly generated MAC address.

    You may specify a unicast MAC address for the VM or accept a randomly 
    generated default [00:16:3e:3d:34:47]:

    Note:

    If you want to provide the engine virtual machine with an IP address using DHCP, ensure that you have a valid DHCP reservation for this MAC address. The deployment script does not configure the DHCP server for you.

  13. Enter the virtual machine’s networking details.

    How should the engine VM network be configured (DHCP, Static)[DHCP]?

    Note:

    If you specified Static, enter the IP address of the Engine. The static IP address must belong to the same subnet as the host. For example, if the host is in 10.1.1.0/24, the Engine virtual machine’s IP must be in the same subnet range (10.1.1.1-254/24).

    Please enter the IP address to be used for the engine VM [x.x.x.x]:
    Please provide a comma-separated list (max 3) of IP addresses of domain 
    name servers for the engine VM
    Engine VM DNS (leave it empty to skip):
  14. Specify whether to add entries in the virtual machine’s /etc/hosts file for the engine virtual machine and the base host. Ensure that the host names are resolvable.

    Add lines for the appliance itself and for this host to /etc/hosts on the engine VM?
    Note: ensuring that this host could resolve the engine VM hostname is still up to you.
    Add lines to /etc/hosts? (Yes, No)[Yes]:
  15. Provide the name and TCP port number of the SMTP server, the email address used to send email notifications, and a comma-separated list of email addresses to receive these notifications. Or, press Enter to accept the defaults.

    Please provide the name of the SMTP server through which we will send 
    notifications [localhost]:
    Please provide the TCP port number of the SMTP server [25]:
    Please provide the email address from which notifications will be sent 
    [root@localhost]:
    Please provide a comma-separated list of email addresses which will get 
    notifications [root@localhost]:
  16. Enter and confirm a password for the admin@internal user to access the Administration Portal.

    Enter engine admin password:
    Confirm engine admin password:

    The script creates the virtual machine, which can take some time if it needs to install the oVirt Engine Appliance. After creating the virtual machine, the script continues gathering information.

  17. Select the type of storage to use.

    Please specify the storage you would like to use (glusterfs, iscsi, fc, 
    nfs)[nfs]:
    • If you selected NFS, enter the version, full address and path to the storage, and any mount options.

      Please specify the nfs version you would like to use (auto, v3, v4, 
      v4_1)[auto]:
      Please specify the full shared storage connection path to use (example: 
      host:/path): 
      storage.example.com:/hosted_engine/nfs
      If needed, specify additional mount options for the connection to the 
      hosted-engine storage domain []:
    • If you selected iSCSI, enter the portal details and select a target and LUN from the auto-detected lists. You can only select one iSCSI target during the deployment, but multipathing is supported to connect all portals of the same portal group.

      Note:

      To specify more than one iSCSI target, you must enable multipathing before deploying the self-hosted engine. There is also a Multipath Helper tool that generates a script to install and configure multipath with different options.

      Please specify the iSCSI portal IP address:
        Please specify the iSCSI portal port [3260]:
        Please specify the iSCSI discover user:
        Please specify the iSCSI discover password:
        Please specify the iSCSI portal login user:
        Please specify the iSCSI portal login password:
      
        The following targets have been found:
        	[1]	iqn.2017-10.com.redhat.example:he
        		TPGT: 1, portals:
        			192.168.1.xxx:3260
        			192.168.2.xxx:3260
        			192.168.3.xxx:3260
      
        Please select a target (1) [1]: 1
      
        The following luns have been found on the requested target:
          [1] 360003ff44dc75adcb5046390a16b4beb   199GiB  MSFT   Virtual HD
              status: free, paths: 1 active
      
        Please select the destination LUN (1) [1]:
    • If you selected GlusterFS, enter the full address and path to the storage, and any mount options. Only replica 3 Gluster storage is supported. Configure the volume as described in Options set on Gluster Storage Volumes to Store Virtual Machine Images in the Working with Gluster Storage chapter of the oVirt Administration Guide.

      Please specify the full shared storage connection path to use 
      (example: host:/path): 
      storage.example.com:/hosted_engine/gluster_volume
      If needed, specify additional mount options for the connection to the 
      hosted-engine storage domain []:
      
    • If you selected Fibre Channel, select a LUN from the auto-detected list. The host bus adapters must be configured and connected. The deployment script auto-detects the available LUNs, and the LUN must not contain any existing data.

      The following luns have been found on the requested target:
        [1] 3514f0c5447600351   30GiB   XtremIO XtremApp
        		status: used, paths: 2 active
      
        [2] 3514f0c5447600352   30GiB   XtremIO XtremApp
        		status: used, paths: 2 active
      
        Please select the destination LUN (1, 2) [1]:
  18. Enter the engine disk size:

    Please specify the size of the VM disk in GB: [50]:

    When the deployment completes successfully, a data center, cluster, host, and storage domain are configured, and the engine virtual machine is running.

  19. Optionally, log into the Oracle Linux Virtualization Manager Administration Portal to add any other resources.

    In the Administration Portal, the engine virtual machine, the host running it, and the self-hosted engine storage domain are flagged with a gold crown.

  20. Enable the required repositories on the Engine virtual machine.

  21. Optionally, add a directory server using the ovirt-engine-extension-aaa-ldap-setup interactive setup script so you can add additional users to the environment.
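
The ovirt-engine-extension-aaa-ldap-setup script runs on the engine virtual machine rather than on the host. A minimal sketch, assuming the engine FQDN used earlier (manager.example.com), that root SSH access was enabled during deployment, and that the script is packaged under the same name:

# ssh root@manager.example.com
# dnf install ovirt-engine-extension-aaa-ldap-setup
# ovirt-engine-extension-aaa-ldap-setup

The setup script then prompts interactively for details such as the LDAP server type, connection settings, and search user.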

Using the Cockpit Portal to Deploy

Note:

If you are behind a proxy, you must use the command line option to deploy your self-hosted engine.

To deploy the self-hosted engine using the Cockpit portal, complete the following steps.

  1. Install the Cockpit dashboard.

    # dnf install cockpit-ovirt-dashboard -y
  2. Open the Cockpit port 9090 on firewalld.

    # firewall-cmd --permanent --zone=public --add-port=9090/tcp
    # firewall-cmd --reload      
  3. Enable and start the Cockpit service.

    # systemctl enable --now cockpit.socket
  4. Log in to the Cockpit portal at the following URL:

    https://host_IP_or_FQDN:9090

  5. To start the self-hosted engine deployment, click Virtualization and select Hosted Manager.

  6. Click Start under Hosted Manager.

  7. Provide the following details for the Engine virtual machine.

    1. In the Engine VM FQDN field, enter the Engine virtual machine FQDN. Do not use the FQDN of the host.

    2. In the MAC Address field, enter a MAC address for the Engine virtual machine, or leave the field blank to have the system provide a randomly generated address.

    3. From the Network Configuration drop-down list, select DHCP or Static.

      • To use DHCP, you must have a DHCP reservation (a pre-set IP address on the DHCP server) for the Engine virtual machine. In the MAC Address field, enter the MAC address.

      • To use Static, enter the virtual machine IP, the gateway address, and the DNS servers. The IP address must belong to the same subnet as the host.

    4. Select the Bridge Interface from the drop-down list.

    5. Enter and confirm the virtual machine’s Root Password.

    6. Specify whether to allow Root SSH Access.

    7. Enter the Number of Virtual CPUs for the virtual machine.

    8. Enter the Memory Size (MiB). The available memory is displayed next to the field.

  8. Optionally, click Advanced to provide any of the following information.

    • Enter a Root SSH Public Key to use for root access to the Engine virtual machine.

    • Select the Edit Hosts File check box if you want to add entries for the Engine virtual machine and the base host to the virtual machine’s /etc/hosts file. You must ensure that the host names are resolvable.

    • Change the management Bridge Name, or accept the default of ovirtmgmt.

    • Enter the Gateway Address for the management bridge.

    • Enter the Host FQDN of the first host to add to the Engine. This is the FQDN of the host you are using for the deployment.

  9. Click Next.

  10. Enter and confirm the Admin Portal Password for the admin@internal user.

  11. Optionally, configure event notifications.

    • Enter the Server Name and Server Port Number of the SMTP server.

    • Enter a Sender E-Mail Address.

    • Enter Recipient E-Mail Addresses.

  12. Click Next.

  13. Review the configuration of the Engine and its virtual machine. If the details are correct, click Prepare VM.

  14. When the virtual machine installation is complete, click Next.

  15. Select the Storage Type from the drop-down list and enter the details for the self-hosted engine storage domain.

    • For NFS:

      1. In the Storage Connection field, enter the full address and path to the storage.

      2. If required, enter any Mount Options.

      3. Enter the Disk Size (GiB).

      4. Select the NFS Version from the drop-down list.

      5. Enter the Storage Domain Name.

    • For iSCSI:

      1. Enter the Portal IP Address, Portal Port, Portal Username, and Portal Password.

      2. Click Retrieve Target List and select a target. You can only select one iSCSI target during the deployment, but multipathing is supported to connect all portals of the same portal group.

        Note:

        To specify more than one iSCSI target, you must enable multipathing before deploying the self-hosted engine. There is also a Multipath Helper tool that generates a script to install and configure multipath with different options.

      3. Enter the Disk Size (GiB).

      4. Enter the Discovery Username and Discovery Password.

    • For FibreChannel:

      1. Enter the LUN ID. The host bus adapters must be configured and connected and the LUN must not contain any existing data.

      2. Enter the Disk Size (GiB).

    • For Gluster Storage:

      1. In the Storage Connection field, enter the full address and path to the storage.

      2. If required, enter any Mount Options.

      3. Enter the Disk Size (GiB).

  16. Click Next.

  17. Review the storage configuration. If the details are correct, click Finish Deployment.

  18. When the deployment is complete, click Close.

    When the deployment completes successfully, a data center, cluster, host, and storage domain are configured, and the engine virtual machine is running.

  19. Optionally, log into the Oracle Linux Virtualization Manager Administration Portal to add any other resources.

    In the Administration Portal, the engine virtual machine, the host running it, and the self-hosted engine storage domain are flagged with a gold crown.

  20. Enable the required repositories on the Engine virtual machine.

  21. Optionally, add a directory server using the ovirt-engine-extension-aaa-ldap-setup interactive setup script so you can add additional users to the environment.

  22. To view the self-hosted engine’s status in Cockpit, under Virtualization click Hosted Engine.
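
You can also check the self-hosted engine's health from a shell on the host by using the hosted-engine tool that was installed earlier:

# hosted-engine --vm-status

The output reports the state of the engine virtual machine and the score of each self-hosted engine host.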

Enabling High-Availability

The host that houses the self-hosted engine is not highly available by default. Because the self-hosted engine runs inside a virtual machine on a host, if you do not configure high availability for the host, virtual machine recovery after a host crash is not possible.

Configuring a Highly Available Host

If you want the hosts in a cluster to be responsive and available when unexpected failures happen, you should use fencing. Fencing allows a cluster to react to unexpected host failures and enforce power saving, load balancing, and virtual machine availability policies. You should configure the fencing parameters for your host’s power management device and test their correctness from time to time.

A Non Operational host is different from a Non Responsive host. A Non Operational host can communicate with the Manager, but has incorrect configuration, for example a missing logical network. A Non Responsive host cannot communicate with the Manager.

In a fencing operation, a non-responsive host is rebooted, and if the host does not return to an active status within a prescribed time, it remains non-responsive pending manual intervention and troubleshooting.

Power management operations can be performed by the Manager after it reboots, by a proxy host, or manually in the Administration Portal. All the virtual machines running on the non-responsive host are stopped, and highly available virtual machines are restarted on a different host. At least two hosts are required for power management operations.

Important:

If a host runs virtual machines that are highly available, power management must be enabled and configured.

Configuring Power Management and Fencing on a Host

The Manager uses a proxy to send power management commands to a host power management device because the engine does not communicate directly with fence agents. The host agent (VDSM) executes power management device actions and another host in the environment is used as a fencing proxy. This means that you must have at least two hosts for power management operations.

When you configure a fencing proxy host, make sure the host is in:

  • the same cluster as the host requiring fencing.

  • the same data center as the host requiring fencing.

  • UP or Maintenance status to remain viable.

Power management operations can be performed in three ways:

  • by the Manager after it reboots

  • by a proxy host

  • manually in the Administration Portal

To configure power management and fencing on a host:

  1. Click Compute and select Hosts.

  2. Select a host and click Edit.

  3. Click the Power Management tab.

  4. Check Enable Power Management to enable the rest of the fields.

  5. Check Kdump integration to prevent the host from fencing while performing a kernel crash dump. Kdump integration is enabled by default.

    Important:

    If you enable or disable Kdump integration on an existing host, you must reinstall the host.

  6. (Optional) Check Disable policy control of power management if you do not want your host’s power management to be controlled by the scheduling policy of the host's cluster.

  7. To configure a fence agent, click the plus sign (+) next to Add Fence Agent.

    The Edit fence agent pane opens.

  8. Enter the Address (IP Address or FQDN) to access the host's power management device.

  9. Enter the User Name and Password of the account used to access the power management device.

  10. Select the power management device Type from the drop-down list.

  11. Enter the Port (SSH) number used by the power management device to communicate with the host.

  12. Enter the Slot number used to identify the blade of the power management device.

  13. Enter the Options for the power management device. Use a comma-separated list of key-value pairs.
    • If you leave the Options field blank, you can use both IPv4 and IPv6 addresses.

    • To use only IPv4 addresses, enter inet4_only=1

    • To use only IPv6 addresses, enter inet6_only=1

  14. Check Secure to enable the power management device to connect securely to the host.

    You can use ssh, ssl, or any other authentication protocol your power management device supports.

  15. Click Test to ensure the settings are correct and then click OK.

    If the settings are correct, the message Test Succeeded, Host Status is: on is displayed.

    Attention:

    Power management parameters (user ID, password, options, and so on) are tested by the Manager only during setup, and manually after that. If you ignore alerts about incorrect parameters, or if the parameters are changed on the power management hardware without also being updated in the Manager, fencing is likely to fail when it is most needed.

  16. Fence agents are sequential by default. To change the sequence in which the fence agents are used:
    1. Review your fence agent order in the Agents by Sequential Order field.

    2. To make two fence agents concurrent, next to one fence agent click the Concurrent with drop-down list and select the other fence agent.

      You can add additional fence agents to this concurrent fence agent group.

  17. Expand the Advanced Parameters and use the up and down buttons to specify the order in which the Manager searches the host’s cluster and dc (data center) for a power management proxy.

  18. To add an additional power management proxy:
    1. Click the plus sign (+) next to Add Power Management Proxy.

      The Select fence proxy preference type to add pane opens.

    2. Select a power management proxy from the drop-down list and then click OK.

      Your new proxy displays in the Power Management Proxy Preference list.

    Note:

    By default, the Manager searches for a fencing proxy within the same cluster as the host. If the Manager cannot find a fencing proxy within the cluster, it searches the data center.

  19. Click OK.

In the list of hosts, the exclamation mark next to the host's name disappears, indicating that power management and fencing have been configured successfully.

Preventing Host Fencing During Boot

After you configure power management and fencing, when the Manager starts it automatically attempts to fence any non-responsive hosts that have power management enabled, once the quiet time (5 minutes by default) has elapsed. You can extend the quiet time to prevent, for example, a scenario where the Manager attempts to fence hosts while they are still booting. This can happen after a data center outage, because a host's boot process is normally longer than the Manager's boot process.

You can configure quiet time using the engine-config command option DisableFenceAtStartupInSec:

# engine-config -s DisableFenceAtStartupInSec=number
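
For example, to extend the quiet time to 15 minutes and then confirm the change (the value shown is only illustrative):

# engine-config -s DisableFenceAtStartupInSec=900
# engine-config -g DisableFenceAtStartupInSec
# systemctl restart ovirt-engine

As with other engine-config options, the new value typically takes effect after the ovirt-engine service restarts.
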
Checking Fencing Parameters

To automatically check the fencing parameters, you can configure the PMHealthCheckEnabled (false by default) and PMHealthCheckIntervalInSec (3600 sec by default) engine-config options.

# engine-config -s PMHealthCheckEnabled=True
# engine-config -s PMHealthCheckIntervalInSec=number

When set to true, PMHealthCheckEnabled checks all host agents at the interval specified by PMHealthCheckIntervalInSec and raises warnings if it detects issues.
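
You can confirm the currently configured values with the -g option:

# engine-config -g PMHealthCheckEnabled
# engine-config -g PMHealthCheckIntervalInSec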

Installing Additional Self-Hosted Engine Hosts

You add self-hosted engine hosts in the same way as regular hosts, with an additional step to deploy the host as a self-hosted engine host. The shared storage domain is automatically detected, and the host can be used as a failover host to host the Engine virtual machine when required. You can also add regular hosts to a self-hosted engine environment, but they cannot be used to host the Engine virtual machine.

Important:

Before you begin, refer to Preparing a KVM Host.

To install an additional self-hosted engine host, complete the following steps.

  1. In the Administration Portal, go to Compute and click Hosts.

  2. Click New.

    For information on additional host settings, see the Admin Guide in the latest upstream oVirt Documentation.

  3. Use the drop-down list to select the Data Center and Host Cluster for the new host.

  4. Enter the Name and the Address of the new host. The standard SSH port, port 22, is auto-filled in the SSH Port field.

  5. Select an authentication method to use for the engine to access the host.

    • Enter the root user’s password to use password authentication.

    • Alternatively, to use public key authentication, copy the key displayed in the SSH PublicKey field to /root/.ssh/authorized_keys on the host (a minimal sketch follows this procedure).

  6. Optionally, configure power management, where the host has a supported power management card. For information, see Configuring Power Management and Fencing on a Host.

  7. Click the Hosted Engine sub-tab.

  8. Select the Deploy radio button.

  9. Click OK.
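
If you chose public key authentication in step 5, the key displayed in the SSH PublicKey field must be appended to the root user's authorized_keys file on the new host. A minimal sketch, run on the new host, where the quoted string is a placeholder for the key copied from the Administration Portal:

# mkdir -p /root/.ssh && chmod 700 /root/.ssh
# echo "<SSH_PublicKey_from_Administration_Portal>" >> /root/.ssh/authorized_keys
# chmod 600 /root/.ssh/authorized_keys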

Cleaning up the Deployment

If your self-hosted engine deployment fails, you must perform a few cleanup tasks before retrying.

  1. Run the hosted engine cleanup command:

    # /usr/sbin/ovirt-hosted-engine-cleanup
  2. Remove the storage:

    # rm -rf <storage_repo>/*
  3. If the deployment failed after the local, temporary hosted engine virtual machine was created, you might need to clean up the local virtual machine repository:

    # rm -rf /var/tmp/localvm*

Upgrading Or Updating the Self-Hosted Engine

See Upgrading Your Environment to 4.5 or Updating the Self-Hosted Engine in the Oracle Linux Virtualization Manager: Administration Guide.