Deploy the Self-Hosted Engine

You must perform a fresh installation of Oracle Linux 8.8 (or later Oracle Linux 8 release) on an Oracle Linux Virtualization Manager host before deploying a self-hosted engine. You can download the installation ISO from the Oracle Software Delivery Cloud at https://edelivery.oracle.com.

  1. Install Oracle Linux 8.8 (or later Oracle Linux 8 release) on the host using the Minimal Install base environment.

    Caution:

    Do NOT select a base environment other than Minimal Install for the installation; otherwise your hosts will have incorrect qemu and libvirt versions, incorrect repositories configured, and no access to virtual machine consoles.

    Do not install any additional packages until after you have installed the Manager packages, because they may cause dependency issues.

    Follow the instructions in Oracle® Linux 8: Installing Oracle Linux.

  2. Ensure that the firewalld service is enabled and started.

    For more information about configuring firewalld, see Configuring a Packet Filtering Firewall in Oracle® Linux 8: Configuring the Firewall.
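    As a quick pre-check before proceeding, the service state can be inspected without changing it. The sketch below assumes a systemd-based host; the `check_service` helper name is illustrative, not part of any Oracle tooling.

```shell
# Read-only sketch: report firewalld's enablement and activity state
# without modifying it. check_service is an illustrative helper name.
check_service() {
  svc="$1"
  printf '%s enabled=%s active=%s\n' "$svc" \
    "$(systemctl is-enabled "$svc" 2>/dev/null)" \
    "$(systemctl is-active "$svc" 2>/dev/null)"
}

check_service firewalld
```

    If the output shows the service disabled or inactive, `systemctl enable --now firewalld` (run as root) corrects both at once.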

  3. Complete one of the following sets of steps:

    • For hosts registered with ULN or Oracle Linux Manager

      Subscribe the system to the required channels.

      1. For ULN registered hosts, log in to https://linux.oracle.com with your ULN user name and password. For Oracle Linux Manager registered hosts, access your internal server URL.

      2. On the Systems tab, click the link named for the host in the list of registered machines.

      3. On the System Details page, click Manage Subscriptions.

      4. On the System Summary page, select each required channel from the list of available channels and click the right arrow to move the channel to the list of subscribed channels. Subscribe the system to the following channels:

        • ol8_x86_64_baseos_latest

        • ol8_x86_64_appstream

        • ol8_x86_64_kvm_appstream

        • ol8_x86_64_ovirt45

        • ol8_x86_64_ovirt45_extras

        • ol8_x86_64_gluster_appstream

        • (For VDSM) ol8_x86_64_UEKR7

      5. Click Save Subscriptions.

      6. Install the Oracle Linux Virtualization Manager Release 4.5 package, which automatically enables/disables the required repositories.

        # dnf install oracle-ovirt-release-45-el8                    
    • For Oracle Linux yum server hosts

      Install the Oracle Linux Virtualization Manager Release 4.5 package and enable the required repositories.

      1. Enable the ol8_baseos_latest yum repository.

        # dnf config-manager --enable ol8_baseos_latest   
      2. Install the Oracle Linux Virtualization Manager Release 4.5 package, which automatically enables/disables the required repositories.

        # dnf install oracle-ovirt-release-45-el8   
      3. Use the dnf command to verify that the required repositories are enabled.

        1. Clear the yum cache.

          # dnf clean all                
        2. List the configured repositories and verify that the required repositories are enabled.

          # dnf repolist               

          The following repositories must be enabled:

          • ol8_x86_64_baseos_latest
          • ol8_x86_64_appstream
          • ol8_x86_64_kvm_appstream
          • ol8_x86_64_ovirt45
          • ol8_x86_64_ovirt45_extras
          • ol8_x86_64_gluster_appstream
          • ol8_x86_64_addons
          • (For VDSM) ol8_x86_64_UEKR7
        3. If a required repository is not enabled, use dnf config-manager to enable it.

          # dnf config-manager --enable repository                             
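    The repolist check above can also be scripted. The following is a sketch only: the repository IDs are copied from the list in this step, dnf is assumed to be available, and the loop simply reports any ID it cannot find.

```shell
# Sketch: confirm each required repository ID appears in `dnf repolist`.
# The ID list is copied from the step above; adjust it to your channel set.
required="ol8_x86_64_baseos_latest ol8_x86_64_appstream ol8_x86_64_kvm_appstream \
ol8_x86_64_ovirt45 ol8_x86_64_ovirt45_extras ol8_x86_64_gluster_appstream ol8_x86_64_addons"

repolist="$(dnf repolist 2>/dev/null || echo '')"
missing=0
for repo in $required; do
  if ! printf '%s\n' "$repolist" | grep -q "$repo"; then
    echo "NOT enabled: $repo"
    missing=$((missing + 1))
  fi
done
echo "$missing repositories missing"
```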
  4. If your host is running UEK R7:
    1. Install the Extra kernel modules package.
      # dnf install kernel-uek-modules-extra
    2. Reboot the host.
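    Whether this step applies can be determined from the kernel release string; UEK kernel packages carry a `uek` substring in `uname -r` output. A minimal sketch:

```shell
# Sketch: decide whether the UEK extras step applies to this host by
# inspecting the running kernel's release string.
kernel="$(uname -r)"
case "$kernel" in
  *uek*) echo "UEK kernel detected ($kernel); install kernel-uek-modules-extra" ;;
  *)     echo "Not a UEK kernel ($kernel); this step does not apply" ;;
esac
```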
  5. Install the hosted engine deployment tool and engine appliance.

    # dnf install ovirt-hosted-engine-setup -y

Use Command Line to Deploy Self-Hosted Engine

You can deploy the self-hosted engine from the command line. A script collects the details of your environment and uses them to configure the host and the engine.

  1. Start the deployment. IPv6 is used by default. To use IPv4, specify the --4 option:

    # hosted-engine --deploy --4

    Optionally, use the --ansible-extra-vars option to define variables for the deployment. For example:

    # hosted-engine --deploy --4 --ansible-extra-vars="@/root/extra-vars.yml"
    
    # cat /root/extra-vars.yml
    ---
    he_pause_host: true
    he_proxy: "http://<host>:<port>"
    he_enable_keycloak: false

    See the oVirt Documentation for more information.

  2. Enter Yes to begin deployment.

    Continuing will configure this host for serving as hypervisor and will create a local VM 
    with a running engine. The locally running engine will be used to configure a new storage 
    domain and create a VM there. At the end the disk of the local VM will be moved to the 
    shared storage.
    Are you sure you want to continue? (Yes, No)[Yes]:

    Note:

    The hosted-engine script creates a virtual machine and uses cloud-init to configure it. The script also runs engine-setup and reboots the system so that the virtual machine can be managed by the high availability agent.

  3. Enter the name of the data center or accept the default.

    Please enter the name of the data center where you want to deploy this hosted-engine
    host. Data center [Default]: 
  4. Enter a name for the cluster or accept the default.

    Please enter the name of the cluster where you want to deploy this hosted-engine host. 
    Cluster [Default]: 
  5. Keycloak integration is a technology preview feature that provides an internal Single Sign-On (SSO) provider for the Engine and deprecates AAA. The default response is Yes; however, because this is a preview feature, enter No.

    Configure Keycloak integration on the engine(Yes, No) [Yes]:No
  6. Configure the network.

    1. If the gateway that is displayed is correct, press Enter to configure the network.

    2. Enter a pingable address on the same subnet so the script can check the host’s connectivity.

      Please indicate a pingable gateway IP address [X.X.X.X]:
    3. The script detects possible NICs to use as a management bridge for the environment. Select the default.

      Please indicate a nic to set ovirtmgmt bridge on: (eth1, eth0) [eth1]:
  7. Enter the path to an OVA archive if you want to use a custom appliance for the virtual machine installation. Otherwise, leave this field empty to use the oVirt Engine Appliance.

    If you want to deploy with a custom engine appliance image, please specify the path to 
    the OVA archive you would like to use.
    Entering no value will use the image from the ovirt-engine-appliance rpm, 
    installing it if needed.
    Appliance image path []:
  8. Specify the fully-qualified domain name for the engine virtual machine.

    Please provide the FQDN you would like to use for the engine appliance.
     Note: This will be the FQDN of the engine VM you are now going to launch,
     it should not point to the base host or to any other existing machine.
     Engine VM FQDN:  manager.example.com
     Please provide the domain name you would like to use for the engine appliance.
     Engine VM domain: [example.com]
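    Before answering the prompt, it is worth confirming that the chosen name is fully qualified and distinct from the base host. A sketch, using the example value from the transcript above:

```shell
# Sketch: sanity-check the proposed engine FQDN before deployment.
engine_fqdn="manager.example.com"           # example value from the transcript
host_fqdn="$(hostname -f 2>/dev/null || hostname)"

# The name must contain a domain part and must not be the base host itself.
case "$engine_fqdn" in
  *.*) echo "format OK: $engine_fqdn" ;;
  *)   echo "not fully qualified: $engine_fqdn" ;;
esac
if [ "$engine_fqdn" != "$host_fqdn" ]; then
  echo "distinct from base host"
fi
# Resolution (via DNS or /etc/hosts) can be checked with:
#   getent hosts "$engine_fqdn"
```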
  9. Enter and confirm a root password for the engine.

    Enter root password that will be used for the engine appliance:
    Confirm appliance root password:
  10. Optionally, enter an SSH public key to enable you to log in to the engine as the root user and specify whether to enable SSH access for the root user.

    You may provide an SSH public key, that will be added by the deployment script to the 
    authorized_keys file of the root user in the engine appliance.
    This should allow you passwordless login to the engine machine after deployment.
    If you provide no key, authorized_keys will not be touched.
    SSH public key []:
    [WARNING] Skipping appliance root ssh public key
    Do you want to enable ssh access for the root user? (yes, no, without-password) [yes]:
  11. Enter the virtual machine’s CPU and memory configuration.

    Please specify the number of virtual CPUs for the VM (Defaults to appliance 
    OVF value): [4]:
    Please specify the memory size of the VM in MB. The default is the appliance 
    OVF value [16384]:
  12. Enter a MAC address for the engine virtual machine or accept a randomly generated MAC address.

    You may specify a unicast MAC address for the VM or accept a randomly 
    generated default [00:16:3e:3d:34:47]:

    Note:

    If you want to provide the engine virtual machine with an IP address using DHCP, ensure that you have a valid DHCP reservation for this MAC address. The deployment script does not configure the DHCP server for you.
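    The generated default shown above uses the 00:16:3e prefix. Producing a candidate address of the same shape can be sketched as follows; `gen_mac` is an illustrative helper, and any address you pair with DHCP still needs a reservation on the DHCP server.

```shell
# Sketch: produce a random unicast MAC in the 00:16:3e prefix, matching
# the shape of the script's generated default (e.g. 00:16:3e:3d:34:47).
gen_mac() {
  printf '00:16:3e:%02x:%02x:%02x\n' \
    "$((RANDOM % 256))" "$((RANDOM % 256))" "$((RANDOM % 256))"
}

gen_mac
```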

  13. Enter the virtual machine’s networking details.

    How should the engine VM network be configured (DHCP, Static)[DHCP]?

    Note:

    If you specified Static, enter the IP address of the Engine. The static IP address must belong to the same subnet as the host. For example, if the host is in 10.1.1.0/24, the Engine virtual machine’s IP must be in the same subnet range (10.1.1.1-254/24).

    Please enter the IP address to be used for the engine VM [x.x.x.x]:
    Please provide a comma-separated list (max 3) of IP addresses of domain 
    name servers for the engine VM
    Engine VM DNS (leave it empty to skip):
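    For the common /24 case described in the note, the subnet check amounts to comparing the first three octets. A minimal sketch; `same_subnet_24` is an illustrative name, and other prefix lengths require real mask arithmetic:

```shell
# Sketch: true when two dotted-quad addresses share a /24 subnet.
# Only handles /24; other prefix lengths require masking each octet.
same_subnet_24() {
  # Compare everything before the last dot of each address.
  [ "${1%.*}" = "${2%.*}" ]
}

same_subnet_24 10.1.1.20 10.1.1.254 && echo "10.1.1.20 and 10.1.1.254: same /24"
same_subnet_24 10.1.1.20 10.1.2.5   || echo "10.1.1.20 and 10.1.2.5: different /24"
```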
  14. Specify whether to add entries to the virtual machine’s /etc/hosts file for the engine virtual machine and the base host. Ensure that the host names are resolvable.

    Add lines for the appliance itself and for this host to /etc/hosts on the engine VM?
    Note: ensuring that this host could resolve the engine VM hostname is still up to you.
    Add lines to /etc/hosts? (Yes, No)[Yes]:
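    Resolvability from the base host can be checked with `getent`, which consults /etc/hosts as well as DNS. The names below are examples; substitute your own engine and host names.

```shell
# Sketch: verify that the engine VM name and the base host name resolve
# (via DNS or /etc/hosts). Replace the example names with your own.
for name in manager.example.com "$(hostname)"; do
  if getent hosts "$name" >/dev/null 2>&1; then
    echo "$name resolves"
  else
    echo "$name does NOT resolve"
  fi
done
```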
  15. Provide the name and TCP port number of the SMTP server, the email address used to send email notifications, and a comma-separated list of email addresses to receive these notifications. Or, press Enter to accept the defaults.

    Please provide the name of the SMTP server through which we will send 
    notifications [localhost]:
    Please provide the TCP port number of the SMTP server [25]:
    Please provide the email address from which notifications will be sent 
    [root@localhost]:
    Please provide a comma-separated list of email addresses which will get 
    notifications [root@localhost]:
  16. Enter and confirm a password for the admin@internal user to access the Administration Portal.

    Enter engine admin password:
    Confirm engine admin password:

    The script creates the virtual machine, which can take some time if it needs to install the oVirt Engine Appliance. After creating the virtual machine, the script continues to gather information.

  17. Select the type of storage to use.

    Please specify the storage you would like to use (glusterfs, iscsi, fc, 
    nfs)[nfs]:
    • If you selected NFS, enter the version, full address and path to the storage, and any mount options.

      Please specify the nfs version you would like to use (auto, v3, v4, 
      v4_1)[auto]:
      Please specify the full shared storage connection path to use (example: 
      host:/path): 
      storage.example.com:/hosted_engine/nfs
      If needed, specify additional mount options for the connection to the 
      hosted-engine storage domain []:
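      The connection path follows the host:/path shape shown in the prompt. Splitting it for pre-deployment checks is a simple parameter expansion, sketched here with the transcript's example value:

```shell
# Sketch: split the "host:/path" storage connection string into its parts.
conn="storage.example.com:/hosted_engine/nfs"   # example from the prompt
server="${conn%%:*}"     # everything before the first colon
share="${conn#*:}"       # everything after it
echo "server=$server share=$share"
# The export can then be inspected before deployment, e.g.:
#   showmount -e "$server"
```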
    • If you selected iSCSI, enter the portal details and select a target and LUN from the auto-detected lists. You can only select one iSCSI target during the deployment, but multipathing is supported to connect all portals of the same portal group.

      Note:

      To specify more than one iSCSI target, you must enable multipathing before deploying the self-hosted engine. There is also a Multipath Helper tool that generates a script to install and configure multipath with different options.

      Please specify the iSCSI portal IP address:
        Please specify the iSCSI portal port [3260]:
        Please specify the iSCSI discover user:
        Please specify the iSCSI discover password:
        Please specify the iSCSI portal login user:
        Please specify the iSCSI portal login password:
      
        The following targets have been found:
        	[1]	iqn.2017-10.com.redhat.example:he
        		TPGT: 1, portals:
        			192.168.1.xxx:3260
        			192.168.2.xxx:3260
        			192.168.3.xxx:3260
      
        Please select a target (1) [1]: 1
      
        The following luns have been found on the requested target:
          [1] 360003ff44dc75adcb5046390a16b4beb   199GiB  MSFT   Virtual HD
              status: free, paths: 1 active
      
        Please select the destination LUN (1) [1]:
    • If you selected GlusterFS, enter the full address and path to the storage, and any mount options. Only replica 3 Gluster storage is supported. Configure the volume as described in Gluster Volume Options for Virtual Machine Image Store in the Working with Gluster Storage chapter of the administration guide.

      Please specify the full shared storage connection path to use 
      (example: host:/path): 
      storage.example.com:/hosted_engine/gluster_volume
      If needed, specify additional mount options for the connection to the 
      hosted-engine storage domain []:
    • If you selected Fibre Channel, select a LUN from the auto-detected list. The host bus adapters must be configured and connected. The deployment script auto-detects the available LUNs, and the LUN must not contain any existing data.

      The following luns have been found on the requested target:
        [1] 3514f0c5447600351   30GiB   XtremIO XtremApp
        		status: used, paths: 2 active
      
        [2] 3514f0c5447600352   30GiB   XtremIO XtremApp
        		status: used, paths: 2 active
      
        Please select the destination LUN (1, 2) [1]:
  18. Enter the engine disk size:

    Please specify the size of the VM disk in GB: [50]:

    When the deployment completes successfully, a data center, cluster, host, and storage domain have been created, and the engine virtual machine is running.

  19. Optionally, log into the Oracle Linux Virtualization Manager Administration Portal to add any other resources.

    In the Administration Portal, the engine virtual machine, the host running it, and the self-hosted engine storage domain are flagged with a gold crown.

  20. Enable the required repositories on the Engine virtual machine.

  21. Optionally, add a directory server using the ovirt-engine-extension-aaa-ldap-setup interactive setup script so you can add additional users to the environment.

Use Cockpit to Deploy Self-Hosted Engine

Note:

If you are behind a proxy, you must use the command line option to deploy your self-hosted engine.

To deploy the self-hosted engine using the Cockpit portal, complete the following steps.

  1. Install the Cockpit dashboard.

    # dnf install cockpit-ovirt-dashboard -y
  2. Open the Cockpit port 9090 on firewalld.

    # firewall-cmd --permanent --zone=public --add-port=9090/tcp
    # firewall-cmd --reload      
  3. Enable and start the Cockpit service.

    # systemctl enable --now cockpit.socket
  4. Log into the Cockpit portal from the following URL:

    https://host_IP_or_FQDN:9090

  5. To start the self-hosted engine deployment, click Virtualization and select Hosted Manager.

  6. Click Start under Hosted Manager.

  7. Provide the following details for the Engine virtual machine.

    1. In the Engine VM FQDN field, enter the Engine virtual machine FQDN. Do not use the FQDN of the host.

    2. In the MAC Address field, enter a MAC address for the Engine virtual machine, or leave it blank and the system provides a randomly generated address.

    3. From the Network Configuration drop-down list, select DHCP or Static.

      • To use DHCP, you must have a DHCP reservation (a pre-set IP address on the DHCP server) for the Engine virtual machine. In the MAC Address field, enter the MAC address.

      • To use Static, enter the virtual machine IP, the gateway address, and the DNS servers. The IP address must belong to the same subnet as the host.

    4. Select the Bridge Interface from the drop-down list.

    5. Enter and confirm the virtual machine’s Root Password.

    6. Specify whether to allow Root SSH Access.

    7. Enter the Number of Virtual CPUs for the virtual machine.

    8. Enter the Memory Size (MiB). The available memory is displayed next to the field.

  8. Optionally, click Advanced to provide any of the following information.

    • Enter a Root SSH Public Key to use for root access to the Engine virtual machine.

    • Select the Edit Hosts File check box if you want to add entries for the Engine virtual machine and the base host to the virtual machine’s /etc/hosts file. You must ensure that the host names are resolvable.

    • Change the management Bridge Name, or accept the default of ovirtmgmt.

    • Enter the Gateway Address for the management bridge.

    • Enter the Host FQDN of the first host to add to the Engine. This is the FQDN of the host you are using for the deployment.

  9. Click Next.

  10. Enter and confirm the Admin Portal Password for the admin@internal user.

  11. Optionally, configure event notifications.

    • Enter the Server Name and Server Port Number of the SMTP server.

    • Enter a Sender E-Mail Address.

    • Enter Recipient E-Mail Addresses.

  12. Click Next.

  13. Review the configuration of the Engine and its virtual machine. If the details are correct, click Prepare VM.

  14. When the virtual machine installation is complete, click Next.

  15. Select the Storage Type from the drop-down list and enter the details for the self-hosted engine storage domain.

    • For NFS:

      1. In the Storage Connection field, enter the full address and path to the storage.

      2. If required, enter any Mount Options.

      3. Enter the Disk Size (GiB).

      4. Select the NFS Version from the drop-down list.

      5. Enter the Storage Domain Name.

    • For iSCSI:

      1. Enter the Portal IP Address, Portal Port, Portal Username, and Portal Password.

      2. Click Retrieve Target List and select a target. You can only select one iSCSI target during the deployment, but multipathing is supported to connect all portals of the same portal group.

        Note:

        To specify more than one iSCSI target, you must enable multipathing before deploying the self-hosted engine. There is also a Multipath Helper tool that generates a script to install and configure multipath with different options.

      3. Enter the Disk Size (GiB).

      4. Enter the Discovery Username and Discovery Password.

    • For FibreChannel:

      1. Enter the LUN ID. The host bus adapters must be configured and connected and the LUN must not contain any existing data.

      2. Enter the Disk Size (GiB).

    • For Gluster Storage:

      1. In the Storage Connection field, enter the full address and path to the storage.

      2. If required, enter any Mount Options.

      3. Enter the Disk Size (GiB).

  16. Click Next.

  17. Review the storage configuration. If the details are correct, click Finish Deployment.

  18. When the deployment is complete, click Close.

    When the deployment completes successfully, a data center, cluster, host, and storage domain have been created, and the engine virtual machine is running.

  19. Optionally, log into the Oracle Linux Virtualization Manager Administration Portal to add any other resources.

    In the Administration Portal, the engine virtual machine, the host running it, and the self-hosted engine storage domain are flagged with a gold crown.

  20. Enable the required repositories on the Engine virtual machine.

  21. Optionally, add a directory server using the ovirt-engine-extension-aaa-ldap-setup interactive setup script so you can add additional users to the environment.

  22. To view the self-hosted engine’s status in Cockpit, under Virtualization click Hosted Engine.