Chapter 5 Self-Hosted Engine Deployment

In Oracle Linux Virtualization Manager, a self-hosted engine is a virtualized environment where the engine runs inside a virtual machine on the hosts in the environment. The virtual machine for the engine is created as part of the host configuration process, and the engine is installed and configured in parallel with the host configuration.

Since the engine runs as a virtual machine and not on physical hardware, a self-hosted engine requires fewer physical resources. Additionally, since the engine is configured to be highly available, if the host running the Engine virtual machine goes into maintenance mode or fails unexpectedly, the virtual machine is automatically migrated to another host in the environment. A minimum of two KVM hosts is required to support high availability for the virtual machine running the self-hosted engine.

Note

To review conceptual information, troubleshooting, and administration tasks, see the oVirt Self-Hosted Engine Guide in oVirt Documentation.

To deploy a self-hosted engine, you perform a fresh installation of Oracle Linux 7 Update 7 on the host, install the Oracle Linux Virtualization Manager Release 4.3.6 package, and then run the hosted engine deployment tool to complete configuration.

5.1 Self-Hosted Engine Prerequisites

In addition to meeting the requirements in Chapter 1, Requirements and Scalability Limits, you must satisfy the following prerequisites before deploying a self-hosted engine.

  • A fully qualified domain name for your engine and host with forward and reverse lookup records set in the DNS.
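
    The forward and reverse lookups described above can be sanity-checked from the host before deployment. This sketch uses getent and defaults to localhost purely for demonstration; on a real host you would substitute your engine or host FQDN (for example, manager.example.com):

```shell
# Sketch: check forward and reverse DNS resolution for an FQDN.
# FQDN defaults to localhost for demonstration only; substitute
# your engine or host name on a real system.
FQDN="${FQDN:-localhost}"

# Forward lookup: name -> IP address.
IP=$(getent hosts "$FQDN" | awk '{print $1; exit}')
echo "forward: $FQDN -> $IP"

# Reverse lookup: the IP should map back to the same name.
getent hosts "$IP" | awk -v ip="$IP" '{print "reverse:", ip, "->", $2; exit}'
```

    Both lookups should succeed and agree before you start the deployment; getent consults the host's configured resolver order (files, then DNS).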

  • A directory of at least 5 GB on the host for the oVirt Engine Appliance. During the deployment process the /var/tmp directory is checked to see if it has enough space to extract the appliance files. If the /var/tmp directory does not have enough space, you can specify a different directory or mount external storage.

    Note

    The VDSM user and KVM group must have read, write, and execute permissions on the directory.
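
    For instance, the space check and the directory permissions can be verified together. The 5 GB figure comes from this section; the alternate directory path below is a hypothetical example, and the final chown to vdsm:kvm is shown as a comment because those accounts exist only on a configured host:

```shell
# Sketch: verify free space for the appliance and prepare an
# alternate extraction directory (path is a hypothetical example).
REQUIRED_KB=$((5 * 1024 * 1024))   # 5 GB, expressed in KB
AVAIL_KB=$(df --output=avail /var/tmp | tail -n 1 | tr -d ' ')
echo "available in /var/tmp: ${AVAIL_KB} KB (need ${REQUIRED_KB} KB)"

ALT_DIR="${ALT_DIR:-/tmp/appliance-extract-demo}"
mkdir -p "$ALT_DIR"
chmod 0775 "$ALT_DIR"              # read/write/execute for owner and group
# On a real host: chown vdsm:kvm "$ALT_DIR"
stat -c '%a' "$ALT_DIR"
```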

  • Prepared storage of at least 74 GB to be used as a data storage domain dedicated to the engine virtual machine. The data storage domain is created during the self-hosted engine deployment.

    If you are using iSCSI storage, do not use the same iSCSI target for the self-hosted engine storage domain and any additional storage domains.

    Warning

    When you have a data center with only one active data storage domain and that domain gets corrupted, you are unable to add new data storage domains or remove the corrupted data storage domain. If you have deployed your self-hosted engine in such a data center and its data storage domain gets corrupted, you must redeploy your self-hosted engine.

5.2 Deploying the Self-Hosted Engine

You must perform a fresh installation of Oracle Linux 7 Update 7 on an Oracle Linux Virtualization Manager host before deploying a self-hosted engine. You can download the installation ISO for Oracle Linux 7 Update 7 from the Oracle Software Delivery Cloud at https://edelivery.oracle.com.

  1. Install Oracle Linux 7 Update 7 on the host using the Minimal Install base environment.

    Follow the instructions in the Oracle® Linux 7: Installation Guide.

    Important

    Do not install any additional packages until after you have installed the Manager packages, because they may cause dependency issues.

  2. Ensure that the firewalld service is enabled and started.

    For more information about firewalld, see Configuring Packet-filtering Firewalls in the Oracle® Linux 7: Security Guide.

  3. (Optional) If you use a proxy server for Internet access, configure Yum with the proxy server settings. For more information, see Configuring Use of a Proxy Server in Oracle® Linux 7: Managing Software.
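
    For reference, the proxy settings end up in the [main] section of /etc/yum.conf; the host name, port, and credentials below are placeholders for your own proxy:

```
# /etc/yum.conf (excerpt)
[main]
proxy=http://proxy.example.com:3128
# Only if the proxy requires authentication:
proxy_username=yumuser
proxy_password=yumpassword
```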

  4. Subscribe the system to the required channels, or install the Release 4.3.6 package and enable the required repositories.

    • For ULN registered hosts only: If the host is registered on ULN, subscribe the system to the required channels.

      1. Log in to https://linux.oracle.com with your ULN user name and password.

      2. On the Systems tab, click the link named for the host in the list of registered machines.

      3. On the System Details page, click Manage Subscriptions.

      4. On the System Summary page, select each required channel from the list of available channels and click the right arrow to move the channel to the list of subscribed channels. Subscribe the system to the following channels:

        • ol7_x86_64_latest

        • ol7_x86_64_optional_latest

        • ol7_x86_64_kvm_utils

        • ol7_x86_64_ovirt43

        • ol7_x86_64_ovirt43_extras

        • ol7_x86_64_gluster6

        • (For VDSM) ol7_x86_64_UEKR5

      5. Click Save Subscriptions.

    • For Oracle Linux yum server hosts only: Install the Oracle Linux Virtualization Manager Release 4.3.6 package and enable the required repositories.

      1. (Optional) Make sure the host is using the modular yum repository configuration. For more information, see Getting Started with Oracle Linux Yum Server.

      2. Enable the ol7_latest yum repository.

        # yum-config-manager --enable ol7_latest
        Important

        Before you execute yum-config-manager, ensure that the yum-utils package is installed on your system. For more information, see Using Yum Utilities to Manage Configuration in Oracle® Linux 7: Managing Software.

      3. Install the Oracle Linux Virtualization Manager Release 4.3.6 package.

        # yum install oracle-ovirt-release-el7
      4. Use the yum command to verify that the required repositories are enabled.

        1. Clear the yum cache.

          # yum clean all
        2. List the configured repositories and verify that the required repositories are enabled.

          # yum repolist

          The following repositories must be enabled:

          • ol7_latest

          • ol7_optional_latest

          • ol7_kvm-utils

          • ol7_gluster6

          • ol7_UEKR5

          • ovirt-4.3

          • ovirt-4.3-extra

        3. If a required repository is not enabled, use the yum-config-manager command to enable it.

          # yum-config-manager --enable repository
  5. Unsubscribe from the 4.2 channels or disable the 4.2 repositories.

    • For ULN registered hosts only: If the host is registered on ULN, unsubscribe from the following channels.

      • ol7_x86_64_ovirt42

      • ol7_x86_64_ovirt42_extras

    • For Oracle Linux yum server hosts only: Run the following commands.

      # yum-config-manager --disable ovirt-4.2
      # yum-config-manager --disable ovirt-4.2-extra
  6. Install the hosted engine deployment tool and engine appliance.

    # yum install ovirt-hosted-engine-setup -y
    # yum install ovirt-engine-appliance -y

You can deploy a self-hosted engine using the command line or Cockpit portal. If you want to use the command line, proceed to Section 5.2.1, “Using the Command Line to Deploy”. If you want to use the Cockpit portal, proceed to Section 5.2.2, “Using the Cockpit Portal to Deploy”.

5.2.1 Using the Command Line to Deploy

To deploy the self-hosted engine using the command line, complete the following steps.

  1. Start the deployment.

    # hosted-engine --deploy
    Note

    You can deploy the hosted engine using all the default settings. Make sure the auto-detected fully qualified DNS name of the host is correct. The fully qualified DNS name should resolve to the IP address that is accessible through the host's main interface. For more information on the default settings, see Section 2.1.2, “Engine Configuration Options”.

  2. Enter Yes to begin deployment.

    Continuing will configure this host for serving as hypervisor and will create a local VM 
    with a running engine. The locally running engine will be used to configure a new storage 
    domain and create a VM there. At the end the disk of the local VM will be moved to the 
    shared storage.
    Are you sure you want to continue? (Yes, No)[Yes]:
    Note

    The hosted-engine script creates a virtual machine and uses cloud-init to configure it. The script also runs engine-setup and reboots the system so that the virtual machine can be managed by the high availability agent.

  3. Configure the network.

    1. If the gateway that displays is correct, press Enter to configure the network.

    2. Enter a pingable address on the same subnet so the script can check the host’s connectivity.

      Please indicate a pingable gateway IP address [X.X.X.X]:
    3. The script detects possible NICs to use as a management bridge for the environment. Select the default.

      Please indicate a nic to set ovirtmgmt bridge on: (eth1, eth0) [eth1]:
  4. Enter the path to an OVA archive if you want to use a custom appliance for the virtual machine installation. Otherwise, leave this field empty to use the oVirt Engine Appliance.

    If you want to deploy with a custom engine appliance image,
    please specify the path to the OVA archive you would like to use
    (leave it empty to skip, the setup will use ovirt-engine-appliance rpm installing it if missing):
  5. Specify the fully qualified domain name for the engine virtual machine.

    Please provide the FQDN you would like to use for the engine appliance.
     Note: This will be the FQDN of the engine VM you are now going to launch,
     it should not point to the base host or to any other existing machine.
     Engine VM FQDN:  manager.example.com
     Please provide the domain name you would like to use for the engine appliance.
     Engine VM domain: [example.com]
  6. Enter and confirm a root password for the engine.

    Enter root password that will be used for the engine appliance:
    Confirm appliance root password:
  7. Optionally, enter an SSH public key to enable you to log in to the engine as the root user and specify whether to enable SSH access for the root user.

    Enter ssh public key for the root user that will be used for the engine appliance (leave it empty to skip):
    Do you want to enable ssh access for the root user (yes, no, without-password) [yes]:
  8. Enter the virtual machine’s CPU and memory configuration.

    Please specify the number of virtual CPUs for the VM (Defaults to appliance OVF value): [4]:
    Please specify the memory size of the VM in MB (Defaults to maximum available): [7267]:
  9. Enter a MAC address for the engine virtual machine or accept a randomly generated MAC address.

    You may specify a unicast MAC address for the VM or accept a randomly generated default [00:16:3e:3d:34:47]:
    Note

    If you want to provide the engine virtual machine with an IP address using DHCP, ensure that you have a valid DHCP reservation for this MAC address. The deployment script does not configure the DHCP server for you.
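
    As an illustration of such a reservation: if your site happens to run ISC dhcpd, an entry tying the example MAC address above to a fixed address might look like the following (the host label and IP address are placeholders):

```
# /etc/dhcp/dhcpd.conf (excerpt) -- hypothetical reservation
host engine-vm {
    hardware ethernet 00:16:3e:3d:34:47;
    fixed-address 10.1.1.10;
}
```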

  10. Enter the virtual machine’s networking details.

    How should the engine VM network be configured (DHCP, Static)[DHCP]?
    Note

    If you specified Static, enter the IP address of the Engine. The static IP address must belong to the same subnet as the host. For example, if the host is in 10.1.1.0/24, the Engine virtual machine’s IP must be in the same subnet range (10.1.1.1-254/24).

    Please enter the IP address to be used for the engine VM [x.x.x.x]:
    Please provide a comma-separated list (max 3) of IP addresses of domain name servers for the engine VM
    Engine VM DNS (leave it empty to skip):
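
    The same-subnet rule in the note above can be checked before committing to an address. This sketch shells out to python3's ipaddress module; the subnet and IP values are the hypothetical examples from this section:

```shell
# Sketch: confirm a planned static engine IP is inside the host's
# subnet (values are illustrative examples only).
HOST_SUBNET="10.1.1.0/24"
ENGINE_IP="10.1.1.10"
python3 -c "
import ipaddress
ip = ipaddress.ip_address('$ENGINE_IP')
net = ipaddress.ip_network('$HOST_SUBNET')
print('in subnet' if ip in net else 'NOT in subnet')
"
```

    For the values shown this prints in subnet; an address such as 10.1.2.10 would print NOT in subnet.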
  11. Specify whether to add entries in the virtual machine’s /etc/hosts file for the engine virtual machine and the base host. Ensure that the host names are resolvable.

    Add lines for the appliance itself and for this host to /etc/hosts on the engine VM?
    Note: ensuring that this host could resolve the engine VM hostname is still up to you (Yes, No)[No]
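
    If you answer Yes, the script adds entries equivalent to the following to the engine virtual machine's /etc/hosts (the addresses and the host name host1.example.com are placeholders reusing this chapter's example domain):

```
# /etc/hosts on the engine VM (illustrative entries)
10.1.1.10   manager.example.com   manager
10.1.1.2    host1.example.com     host1
```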
  12. Provide the name and TCP port number of the SMTP server, the email address used to send email notifications, and a comma-separated list of email addresses to receive these notifications. Or, press Enter to accept the defaults.

    Please provide the name of the SMTP server through which we will send notifications [localhost]:
    Please provide the TCP port number of the SMTP server [25]:
    Please provide the email address from which notifications will be sent [root@localhost]:
    Please provide a comma-separated list of email addresses which will get notifications [root@localhost]:
  13. Enter and confirm a password for the admin@internal user to access the Administration Portal.

    Enter engine admin password:
    Confirm engine admin password:

    The script creates the virtual machine, which can take some time if the oVirt Engine Appliance needs to be installed. After creating the virtual machine, the script continues gathering information.

  14. Select the type of storage to use.

    Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs)[nfs]:
    • If you selected NFS, enter the version, full address and path to the storage, and any mount options.

      Please specify the nfs version you would like to use (auto, v3, v4, v4_1)[auto]:
        Please specify the full shared storage connection path to use (example: host:/path): 
        storage.example.com:/hosted_engine/nfs
        If needed, specify additional mount options for the connection to the hosted-engine 
        storage domain []:
    • If you selected iSCSI, enter the portal details and select a target and LUN from the auto-detected lists. You can only select one iSCSI target during the deployment, but multipathing is supported to connect all portals of the same portal group.

      Note

      To specify more than one iSCSI target, you must enable multipathing before deploying the self-hosted engine. There is also a Multipath Helper tool that generates a script to install and configure multipath with different options.

      Please specify the iSCSI portal IP address:
        Please specify the iSCSI portal port [3260]:
        Please specify the iSCSI discover user:
        Please specify the iSCSI discover password:
        Please specify the iSCSI portal login user:
        Please specify the iSCSI portal login password:
      
        The following targets have been found:
        	[1]	iqn.2017-10.com.redhat.example:he
        		TPGT: 1, portals:
        			192.168.1.xxx:3260
        			192.168.2.xxx:3260
        			192.168.3.xxx:3260
      
        Please select a target (1) [1]: 1
      
        The following luns have been found on the requested target:
          [1] 360003ff44dc75adcb5046390a16b4beb   199GiB  MSFT   Virtual HD
              status: free, paths: 1 active
      
        Please select the destination LUN (1) [1]:
    • If you selected GlusterFS, enter the full address and path to the storage, and any mount options. Only replica 3 Gluster storage is supported.

      * Configure the volume as described in Gluster Volume Options for Virtual Machine Image Store, in the Working with Gluster Storage chapter of the oVirt Administration Guide.
      
        Please specify the full shared storage connection path to use (example: host:/path): 
        storage.example.com:/hosted_engine/gluster_volume
        If needed, specify additional mount options for the connection to the hosted-engine storage domain []:
      
    • If you selected Fibre Channel, select a LUN from the auto-detected list. The host bus adapters must be configured and connected. The deployment script auto-detects the available LUNs, and the LUN must not contain any existing data.

      The following luns have been found on the requested target:
        [1] 3514f0c5447600351   30GiB   XtremIO XtremApp
        		status: used, paths: 2 active
      
        [2] 3514f0c5447600352   30GiB   XtremIO XtremApp
        		status: used, paths: 2 active
      
        Please select the destination LUN (1, 2) [1]:
  15. Enter the engine disk size:

    Please specify the size of the VM disk in GB: [50]:

    If successful, one data center, cluster, host, storage domain, and the engine virtual machine are already running.

  16. Optionally, log into the Oracle Linux Virtualization Manager Administration Portal to add any other resources.

    In the Administration Portal, the engine virtual machine, the host running it, and the self-hosted engine storage domain are flagged with a gold crown.

  17. Enable the required repositories on the Engine virtual machine.

  18. Optionally, add a directory server using the ovirt-engine-extension-aaa-ldap-setup interactive setup script so you can add additional users to the environment.

5.2.2 Using the Cockpit Portal to Deploy

To deploy the self-hosted engine using the Cockpit portal, complete the following steps.

  1. Install the Cockpit dashboard.

    # yum install cockpit-ovirt-dashboard -y
  2. Open the Cockpit port (9090) in firewalld.

    # firewall-cmd --permanent --zone=public --add-port=9090/tcp
    # systemctl restart firewalld
  3. Start and enable the Cockpit service.

    # systemctl start cockpit
    # systemctl enable cockpit
  4. Log into the Cockpit portal from the following URL:

    https://host_IP_or_FQDN:9090

  5. To start the self-hosted engine deployment, click Virtualization and select Hosted Manager.

  6. Click Start under Hosted Manager.

  7. Provide the following details for the Engine virtual machine.

    1. In the Engine VM FQDN field, enter the Engine virtual machine FQDN. Do not use the FQDN of the host.

    2. In the MAC Address field, enter a MAC address for the Engine virtual machine, or leave the field blank to have the system provide a randomly generated address.

    3. From the Network Configuration drop-down list, select DHCP or Static.

      • To use DHCP, you must have a DHCP reservation (a pre-set IP address on the DHCP server) for the Engine virtual machine. In the MAC Address field, enter the MAC address.

      • To use Static, enter the virtual machine IP, the gateway address, and the DNS servers. The IP address must belong to the same subnet as the host.

    4. Select the Bridge Interface from the drop-down list.

    5. Enter and confirm the virtual machine’s Root Password.

    6. Specify whether to allow Root SSH Access.

    7. Enter the Number of Virtual CPUs for the virtual machine.

    8. Enter the Memory Size (MiB). The available memory is displayed next to the field.

  8. Optionally, click Advanced to provide any of the following information.

    • Enter a Root SSH Public Key to use for root access to the Engine virtual machine.

    • Select the Edit Hosts File check box if you want to add entries for the Engine virtual machine and the base host to the virtual machine’s /etc/hosts file. You must ensure that the host names are resolvable.

    • Change the management Bridge Name, or accept the default of ovirtmgmt.

    • Enter the Gateway Address for the management bridge.

    • Enter the Host FQDN of the first host to add to the Engine. This is the FQDN of the host you are using for the deployment.

  9. Click Next.

  10. Enter and confirm the Admin Portal Password for the admin@internal user.

  11. Optionally, configure event notifications.

    • Enter the Server Name and Server Port Number of the SMTP server.

    • Enter a Sender E-Mail Address.

    • Enter Recipient E-Mail Addresses.

  12. Click Next.

  13. Review the configuration of the Engine and its virtual machine. If the details are correct, click Prepare VM.

  14. When the virtual machine installation is complete, click Next.

  15. Select the Storage Type from the drop-down list and enter the details for the self-hosted engine storage domain.

    • For NFS:

      1. In the Storage Connection field, enter the full address and path to the storage.

      2. If required, enter any Mount Options.

      3. Enter the Disk Size (GiB).

      4. Select the NFS Version from the drop-down list.

      5. Enter the Storage Domain Name.

    • For iSCSI:

      1. Enter the Portal IP Address, Portal Port, Portal Username, and Portal Password.

      2. Click Retrieve Target List and select a target. You can only select one iSCSI target during the deployment, but multipathing is supported to connect all portals of the same portal group.

        Note

        To specify more than one iSCSI target, you must enable multipathing before deploying the self-hosted engine. There is also a Multipath Helper tool that generates a script to install and configure multipath with different options.

      3. Enter the Disk Size (GiB).

      4. Enter the Discovery Username and Discovery Password.

    • For FibreChannel:

      1. Enter the LUN ID. The host bus adapters must be configured and connected and the LUN must not contain any existing data.

      2. Enter the Disk Size (GiB).

    • For Gluster Storage:

      1. In the Storage Connection field, enter the full address and path to the storage.

      2. If required, enter any Mount Options.

      3. Enter the Disk Size (GiB).

  16. Click Next.

  17. Review the storage configuration. If the details are correct, click Finish Deployment.

  18. When the deployment is complete, click Close.

    If successful, one data center, cluster, host, storage domain, and the engine virtual machine are already running.

  19. Optionally, log into the Oracle Linux Virtualization Manager Administration Portal to add any other resources.

    In the Administration Portal, the engine virtual machine, the host running it, and the self-hosted engine storage domain are flagged with a gold crown.

  20. Enable the required repositories on the Engine virtual machine.

  21. Optionally, add a directory server using the ovirt-engine-extension-aaa-ldap-setup interactive setup script so you can add additional users to the environment.

  22. To view the self-hosted engine’s status in Cockpit, under Virtualization click Hosted Engine.

5.3 Enabling High-Availability

The host that houses the self-hosted engine is not highly available by default. Because the self-hosted engine runs inside a virtual machine on a host, if you do not configure high availability for the host, live migration of the Engine virtual machine is not possible. For more information, see Section 5.3.1, “Configuring a Highly Available Host”.

Further, you must have an additional self-hosted engine host that is capable of hosting the engine virtual machine in case of a failure or maintenance event. This ensures that the Engine virtual machine can fail over to another host, making it highly available.

5.3.1 Configuring a Highly Available Host

Fencing keeps hosts in a cluster responsive. Fencing allows a cluster to react to unexpected host failures and enforce power saving, load balancing, and virtual machine availability policies. You should configure the fencing parameters for your host’s power management device and test their correctness from time to time.

A Non Operational host is different from a Non Responsive host. A Non Operational host can communicate with the Manager, but has incorrect configuration, for example a missing logical network. A Non Responsive host cannot communicate with the Manager.

In a fencing operation, a non-responsive host is rebooted, and if the host does not return to an active status within a prescribed time, it remains non-responsive pending manual intervention and troubleshooting.

Power management operations can be performed by the Manager after it reboots, by a proxy host, or manually in the Administration Portal. All the virtual machines running on the non-responsive host are stopped, and highly available virtual machines are restarted on a different host. At least two hosts are required for power management operations.

Important

If a host runs virtual machines that are highly available, power management must be enabled and configured.

5.3.1.1 Configuring Power Management and Fencing on a Host

The Manager uses a proxy to send power management commands to a host power management device because the engine does not communicate directly with fence agents. The host agent (VDSM) executes power management device actions and another host in the environment is used as a fencing proxy. This means that you must have at least two hosts for power management operations.

When you configure a fencing proxy host, make sure the host is in:

  • the same cluster as the host requiring fencing.

  • the same data center as the host requiring fencing.

  • UP or Maintenance status to remain viable.

Power management operations can be performed in three ways:

  • by the Manager after it reboots

  • by a proxy host

  • manually in the Administration Portal

To configure power management and fencing on a host:

  1. Click Compute and select Host.

  2. Select a host and click Edit.

  3. Click the Power Management tab.

  4. Check Enable Power Management to enable the rest of the fields.

  5. Check Kdump integration to prevent the host from fencing while performing a kernel crash dump. Kdump integration is enabled by default.

    Important

    If you enable or disable Kdump integration on an existing host, you must reinstall the host.

  6. (Optional) Check Disable policy control of power management if you do not want your host’s power management to be controlled by the scheduling policy of the host's cluster.

  7. To configure a fence agent, click the plus sign (+) next to Add Fence Agent.

    The Edit fence agent pane opens.

  8. Enter the Address (IP Address or FQDN) to access the host's power management device.

  9. Enter the User Name and Password of the account used to access the power management device.

  10. Select the power management device Type from the drop-down list.

  11. Enter the Port (SSH) number used by the power management device to communicate with the host.

  12. Enter the Slot number used to identify the blade of the power management device.

  13. Enter the Options for the power management device. Use a comma-separated list of key-value pairs.

    • If you leave the Options field blank, you are able to use both IPv4 and IPv6 addresses.

    • To use only IPv4 addresses, enter inet4_only=1

    • To use only IPv6 addresses, enter inet6_only=1

  14. Check Secure to enable the power management device to connect securely to the host.

    You can use ssh, ssl, or any other authentication protocol your power management device supports.

  15. Click Test to ensure the settings are correct and then click OK.

    Test Succeeded, Host Status is: on displays if successful.

    Warning

    Power management parameters (userid, password, options, and so on) are tested by the Manager only during setup, and manually after that. If you choose to ignore alerts about incorrect parameters, or if the parameters are changed on the power management hardware without also being changed in the Manager, fencing is likely to fail when it is most needed.

  16. Fence agents are sequential by default. To change the sequence in which the fence agents are used:

    1. Review your fence agent order in the Agents by Sequential Order field.

    2. To make two fence agents concurrent, next to one fence agent click the Concurrent with drop-down list and select the other fence agent.

      You can add additional fence agents to this concurrent fence agent group.

  17. Expand the Advanced Parameters and use the up and down buttons to specify the order in which the Manager searches the host’s cluster and data center for a power management proxy.

  18. To add an additional power management proxy:

    1. Click the plus sign (+) next to Add Power Management Proxy.

      The Select fence proxy preference type to add pane opens.

    2. Select a power management proxy from the drop-down list and then click OK.

      Your new proxy displays in the Power Management Proxy Preference list.

    Note

    By default, the Manager searches for a fencing proxy within the same cluster as the host. If the Manager cannot find a fencing proxy within the cluster, it searches the data center.

  19. Click OK.

In the list of hosts, the exclamation mark next to the host’s name disappears, signifying that you have successfully configured power management and fencing.

5.3.1.2 Preventing Host Fencing During Boot

After you configure power management and fencing, when you start the Manager it automatically attempts to fence non-responsive hosts that have power management enabled after the quiet time (5 minutes by default) has elapsed. You can opt to extend the quiet time to prevent, for example, a scenario where the Manager attempts to fence hosts while they boot up. This can happen after a data center outage because a host’s boot process is normally longer than the Manager boot process.

You can configure quiet time using the engine-config command option DisableFenceAtStartupInSec:

# engine-config -s DisableFenceAtStartupInSec=<number>

5.3.1.3 Checking Fencing Parameters

To automatically check the fencing parameters, you can configure the PMHealthCheckEnabled (false by default) and PMHealthCheckIntervalInSec (3600 sec by default) engine-config options.

# engine-config -s PMHealthCheckEnabled=True
# engine-config -s PMHealthCheckIntervalInSec=<number>

When set to true, PMHealthCheckEnabled checks all host agents at the interval specified by PMHealthCheckIntervalInSec and raises warnings if it detects issues.

5.4 Installing Additional Self-Hosted Engine Hosts

You add a self-hosted engine host in the same way as a regular host, with an additional step to deploy the host as a self-hosted engine host. The shared storage domain is automatically detected, and the host can be used as a failover host for the Engine virtual machine when required. You can also add regular hosts to a self-hosted engine environment, but they cannot be used to host the Engine virtual machine.

To install an additional self-hosted engine host, complete the following steps.

  1. In the Administration Portal, go to Compute and click Hosts.

  2. Click New.

    For information on additional host settings, see the Admin Guide in the latest upstream oVirt Documentation.

  3. Use the drop-down list to select the Data Center and Host Cluster for the new host.

  4. Enter the Name and the Address of the new host. The standard SSH port, port 22, is auto-filled in the SSH Port field.

  5. Select an authentication method to use for the engine to access the host.

    • Enter the root user’s password to use password authentication.

    • Alternatively, copy the key displayed in the SSH PublicKey field to /root/.ssh/authorized_keys on the host to use public key authentication.

  6. Optionally, configure power management, where the host has a supported power management card. For information, see Section 5.3.1.1, “Configuring Power Management and Fencing on a Host”.

  7. Click the Hosted Engine sub-tab.

  8. Select the Deploy radio button.

  9. Click OK.

5.5 Cleaning up the Deployment

If your self-hosted engine deployment fails, you must perform a few cleanup tasks before retrying.

  1. Run the hosted engine cleanup command:

    # /usr/sbin/ovirt-hosted-engine-cleanup
  2. Remove the storage:

    # rm -rf <storage_repo>/*
  3. If the deployment failed after the local, temporary hosted engine virtual machine is created, you might need to clean up the local virtual machine repository:

    # rm -rf /var/tmp/localvm*

5.6 Deploying GlusterFS Storage

Oracle Linux Virtualization Manager has been integrated with GlusterFS, an open source scale-out distributed filesystem, to provide a hyperconverged solution where both compute and storage are provided from the same hosts. Gluster volumes residing on the hosts are used as storage domains in the Manager to store the virtual machine images. In this scenario, the Manager is run as a self-hosted engine within a virtual machine on these hosts.

Note

For more information about using GlusterFS, including prerequisites, see the latest upstream oVirt Documentation.

5.6.1 Deploying GlusterFS Storage Using Cockpit

To deploy GlusterFS storage using the Cockpit web interface, complete the following steps.

Note

Ensure that on all three hosts you have installed the following packages:

  • cockpit-ovirt-dashboard to provide a UI for installation

  • vdsm-gluster to manage gluster services

  • ansible-host-roles on the KVM host used for the Cockpit deployment
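Before starting, you can confirm the packages are present on each host. The helper below is a hypothetical pre-flight check (PKG_QUERY defaults to rpm -q, the standard query command on Oracle Linux); install anything it reports with yum install:

```shell
# Hypothetical pre-flight check: print the names of any required packages
# that are not yet installed on this host.
PKG_QUERY="${PKG_QUERY:-rpm -q}"

missing_pkgs() {
    for pkg in "$@"; do
        $PKG_QUERY "$pkg" >/dev/null 2>&1 || echo "$pkg"
    done
}

# Run on all three hosts; ansible-host-roles is only needed on the
# Cockpit deployment host.
missing_pkgs cockpit-ovirt-dashboard vdsm-gluster ansible-host-roles
```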

  1. Go to Compute, and then click Hosts.

    The Hosts pane opens.

  2. Under the Name column, click the host to be used as the designated server.

  3. Click Host Console.

    The login page for the Cockpit web interface opens.

  4. Enter your login credentials (the user name and password of the root account).

  5. Go to Virtualization and then click Hosted Engine.

  6. Click Redeploy under Hosted Engine Setup.

  7. Click Start under Hyperconverged.

  8. On the Hosts screen, enter three (or more) KVM hosts in the data center to be used for GlusterFS, entering the main designated KVM host first, and click Next when finished.

  9. On the FQDNs screen, enter the FQDN (or IP address) for the hosts to be managed by the Hosted Engine and click Next when finished.

    Note

    The FQDN of the designated server is entered during the Hosted Engine deployment process and is not requested here.

  10. Click Next on the Packages screen.

  11. On the Volumes screen, create the minimum required storage domains: engine, data, export, and iso. Click Next when finished.

    For example:

    engine

    • Name: engine

    • Volume Type: Replicate (default)

    • Arbiter: Ensure the check box is selected.

    • Brick Dirs: /gluster_bricks/engine/engine (default)

    data

    • Name: data

    • Volume Type: Replicate (default)

    • Arbiter: Ensure the check box is selected.

    • Brick Dirs: /gluster_bricks/data/data (default)

    export

    • Name: export

    • Volume Type: Replicate (default)

    • Arbiter: Ensure the check box is selected.

    • Brick Dirs: /gluster_bricks/export/export (default)

    iso

    • Name: iso

    • Volume Type: Replicate (default)

    • Arbiter: Ensure the check box is selected.

    • Brick Dirs: /gluster_bricks/iso/iso (default)

  12. On the Brick Locations screen, specify the brick locations for your volumes (engine, data, export, and iso) and click Next when finished.

  13. Review the screen and click Deploy.

    • If you are using an internal disk as the Gluster disk, no edits are required and you can simply click Deploy to continue with the deployment.

    • If you are using an external iSCSI ZFS drive as the Gluster disk, click Edit to edit the gdeployConfig.conf file and specify the block device on each server that is being used for storage. Click Save, and then click Deploy to continue with the deployment.

    This process takes some time to complete, as the gdeploy tool installs required packages and configures Gluster volumes and their underlying storage.

    A message displays on the screen when the deployment completes successfully.
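The gdeployConfig.conf file mentioned in step 13 describes the hosts, bricks, and volumes selected in the earlier steps. As a rough illustration only, a section for the engine volume might resemble the following; the section names, keys, and host names here are assumptions based on the general gdeploy configuration format, not output copied from an actual deployment:

```
[hosts]
host1.example.com
host2.example.com
host3.example.com

# Replica volume with an arbiter brick, matching the "engine" example above
[volume]
action=create
volname=engine
replica=yes
replica_count=3
arbiter_count=1
brick_dirs=/gluster_bricks/engine/engine
```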

5.6.2 Creating a GlusterFS Storage Domain Using the Manager

To add a GlusterFS storage volume as a storage domain:

  1. Go to Storage and then click Domains.

    The Storage Domains pane opens.

  2. On the Storage Domains pane, click the New Domain button.

    The New Domain dialog box opens.

  3. For the Name field, enter a name for the data domain.

  4. From the Data Center drop-down list, select the data center where the GlusterFS volume is deployed. By default, the Default option is selected in the drop-down list.

  5. From the Domain Function drop-down list, select the domain function. By default, the Data option is selected in the drop-down list.

    For this step, leave Data as the domain function because a data domain is being created in this example.

  6. From the Storage Type drop-down list, select GlusterFS.

  7. From the Host to Use drop-down list, select the host to use to attach the data domain.

  8. When GlusterFS is selected as the Storage Type, the New Domain dialog box updates to display additional configuration fields for GlusterFS storage domains.

  9. Ensure the Use managed gluster volume check box is not selected.

  10. From the Gluster drop-down list, select the path that corresponds to the domain function you are creating.

  11. In the Mount Options field, specify additional mount options in a comma-separated list, as you would using the mount -o command.

  12. (Optional) Configure the advanced parameters.

  13. Click OK to mount the volume as a storage domain.

    You can click Tasks to monitor the various processing steps that are completed to add the GlusterFS storage domain to the data center.
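One common use of the Mount Options field in step 11 with GlusterFS is backup-volfile-servers, a standard GlusterFS FUSE mount option that lists fallback hosts for fetching the volume file. The helper below is a hypothetical sketch that builds such an option value; the host names are placeholders:

```shell
# Hypothetical helper: build a backup-volfile-servers option value from a
# list of fallback Gluster hosts (hosts are joined with ':' as the
# GlusterFS FUSE client expects).
gluster_backup_opt() {
    opt="backup-volfile-servers=$1"
    shift
    for h in "$@"; do
        opt="$opt:$h"
    done
    echo "$opt"
}

gluster_backup_opt host2.example.com host3.example.com
# -> backup-volfile-servers=host2.example.com:host3.example.com
```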