4.6 Setting up Nova

The Nova compute service is responsible for managing the hypervisors and virtual machine instances. You might need to perform additional configuration before you deploy this service.

4.6.1 Automatic Hypervisor Configuration

For the Nova compute service, the virt_type option in the [libvirt] section of the nova.conf configuration file sets the hypervisor that runs the instances in your deployment. KVM is the default hypervisor.

In an OpenStack deployment, it is possible to use a mixture of hypervisors as compute nodes. To simplify configuration, when you deploy the Nova compute service to a node, the hypervisor is detected and the virt_type option is configured automatically. The following table shows the virt_type option settings and the conditions when they are used.

Setting  Conditions
-------  ------------------------------------------------------
xen      Platform: Linux
         Distribution: Oracle VM Server

kvm      Platform: Linux
         Distribution: Oracle Linux Server
         Virtualization support: enabled

qemu     Platform: Linux
         Distribution: Oracle Linux Server
         Virtualization support: disabled

hyperv   Platform: Microsoft Windows
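For example, on an Oracle Linux compute node with hardware virtualization enabled, the automatically configured setting is equivalent to the following nova.conf fragment (a sketch for illustration; the deployment tooling writes this for you):

```ini
[libvirt]
# Set automatically when Nova compute is deployed; kvm is used on
# Oracle Linux hosts with hardware virtualization enabled.
virt_type = kvm
```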

To check whether virtualization support is enabled on the compute node, run the following command:

$ egrep '(vmx|svm)' /proc/cpuinfo

If virtualization support is enabled, the output should contain vmx (for an Intel CPU) or svm (for an AMD CPU).
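The check above can also be scripted. The following is a minimal sketch that counts the logical CPUs reporting the vmx or svm flag and prints whether virtualization support is enabled:

```shell
# Count CPU entries that report hardware virtualization support.
# grep -c prints the number of matching lines (0 if none).
count=$(grep -cE '(vmx|svm)' /proc/cpuinfo 2>/dev/null)
[ -n "$count" ] || count=0

if [ "$count" -gt 0 ]; then
    echo "virtualization support: enabled ($count logical CPUs report vmx/svm)"
else
    echo "virtualization support: disabled"
fi
```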

4.6.2 Preparing a Compute Node

Before you deploy Nova compute services to a compute node, perform the following:

  • Ensure that you have sufficient disk space if you use ephemeral storage for instances (virtual machines).

    The Nova data container uses the /var/lib/kolla/var/lib/nova/instances directory on the compute node for ephemeral storage for instances. You must ensure there is sufficient disk space in this location for the amount of virtual machine data you intend to store.

  • (Oracle Linux compute nodes only) Stop and disable the libvirtd service.

    # systemctl stop libvirtd.service
    # systemctl disable libvirtd.service

    A running libvirtd service prevents the nova_libvirt container from starting when you deploy Nova services.

    Do not perform this step on Oracle VM Server compute nodes.

  • Stop iSCSI initiator processes.

    You only need to perform this step if you are using Cinder block storage with a volume driver that uses the iSCSI protocol.

    By default, the Cinder block storage service uses local volumes managed by the Linux Logical Volume Manager (LVM). The Cinder LVM volume driver uses the iSCSI protocol to connect an instance to a volume, and the nova_iscsid container handles the iSCSI session. Any iSCSI initiator processes still running on the compute node prevent the nova_iscsid container from starting when you deploy Nova services.

    First, unmount the file systems on any attached iSCSI disks and disconnect from all iSCSI targets. Then do either of the following:

    • Uninstall the iscsi-initiator-utils package.

      # yum remove iscsi-initiator-utils 
    • Disable iSCSI services.

      On Oracle Linux compute nodes:

      # systemctl stop iscsid.socket iscsiuio.socket iscsid.service 
      # systemctl disable iscsid.socket iscsiuio.socket iscsid.service  

      On Oracle VM Server compute nodes:

      # service iscsid stop
      # chkconfig iscsid off
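The preparation steps above can be verified with a short script before you deploy. This is a sketch that assumes a systemd-based node (Oracle Linux); the fallback to the root filesystem when the instances directory does not yet exist is illustrative:

```shell
# Report free space at the ephemeral storage location used by the
# Nova data container (falls back to / if the directory does not exist yet).
dir=/var/lib/kolla/var/lib/nova/instances
[ -d "$dir" ] || dir=/
avail_kb=$(df -Pk "$dir" | awk 'NR==2 {print $4}')
echo "ephemeral storage: ${avail_kb} KB available at ${dir}"

# Warn about services that would block the nova_libvirt or
# nova_iscsid containers from starting.
for svc in libvirtd iscsid; do
    if systemctl is-active --quiet "$svc" 2>/dev/null; then
        echo "WARNING: $svc is active; stop and disable it before deploying Nova"
    fi
done
```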

4.6.3 Setting the iSCSI Initiator Name

By default, the Cinder block storage service uses the iSCSI protocol to connect instances to volumes. The nova_iscsid container runs on compute nodes and handles the iSCSI session using an iSCSI initiator name that is generated when you deploy Nova compute services.

If you prefer, you can configure your own iSCSI initiator name by setting it in the /etc/kolla/nova-iscsid/initiatorname.iscsi file on each compute node. If the initiatorname.iscsi file does not exist, create it. The file contains a single line that specifies the initiator name, in the following format:

InitiatorName=iqn.yyyy-mm.naming-authority:unique_name

For example:

InitiatorName=iqn.1988-12.com.oracle:myiscsihost
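If you generate the file from a script, you may want to validate the name against the expected format first. The following sketch checks the example line from the text against a pattern for the iqn.yyyy-mm.naming-authority:unique_name form (the pattern is an assumption for illustration, not an exhaustive IQN validator):

```shell
# Check that an initiatorname.iscsi line matches the documented format.
line='InitiatorName=iqn.1988-12.com.oracle:myiscsihost'

if printf '%s\n' "$line" |
    grep -Eq '^InitiatorName=iqn\.[0-9]{4}-[0-9]{2}\.[^:]+:.+$'; then
    result=valid
else
    result=invalid
fi
echo "$result"
```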

4.6.4 Enabling iSCSI Multipath

The Nova compute service supports iSCSI multipath for failover and increased performance. When multipath is enabled, the iSCSI initiator (the compute node) can obtain a list of addresses from the storage node to use as multiple paths to the iSCSI LUN (the Cinder volume).

To enable iSCSI multipath:

$ kollacli property set nova_iscsi_use_multipath true