Global Configuration Issues

Virtual Machine Started on KVM Host Whose Virtual Machine Count Exceeds the HighVMCount Property Set for an Evenly_Distributed Scheduling Policy

A virtual machine started on a KVM host whose virtual machine count exceeded the number of virtual machines set in the HighVMCount property for an Evenly_Distributed scheduling policy. Based on the scheduling policy configured in this scenario, load balancing should have been triggered and this virtual machine should have started on another KVM host in the cluster.

Solution: There is no workaround for this behavior.

Bug: 29168788
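The intended placement behavior can be illustrated with a small sketch (illustrative logic only, not Manager code; the helper name and host data are hypothetical): under an evenly_distributed policy, a host whose running virtual machine count already exceeds HighVMCount should be passed over in favor of another host in the cluster.

```python
def pick_start_host(hosts, high_vm_count):
    """Pick a host for a new VM under an evenly_distributed policy.

    Hosts whose running VM count already exceeds high_vm_count are
    excluded; among the rest, the least-loaded host is preferred.
    Returns None if every host is over the threshold.
    """
    eligible = [h for h in hosts if h["vm_count"] <= high_vm_count]
    if not eligible:
        return None
    return min(eligible, key=lambda h: h["vm_count"])["name"]

# kvm1 is already over HighVMCount=10, so the VM should land on kvm2.
hosts = [
    {"name": "kvm1", "vm_count": 12},
    {"name": "kvm2", "vm_count": 3},
]
print(pick_start_host(hosts, 10))  # kvm2
```

In the reported scenario, the Manager behaves as though the over-threshold host were still eligible.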

Virtual Machine Can Be Started on a KVM Host Whose CPU Utilization Exceeds the HighUtilization Property Set for an Evenly_Distributed Scheduling Policy

In a 3-host cluster where only one host is active (the other two hosts are in Maintenance mode), 5 virtual machines are created by importing OVA files. An Evenly_Distributed scheduling policy is configured with the HighUtilization property set to 50. When the CPU utilization exceeds 50% on the KVM host and a virtual machine is started, the virtual machine should fail to start; however, the virtual machine starts on the KVM host in this scenario.

Solution: There is no workaround for this behavior.

Bug: 29171712
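The expected admission check can be sketched as follows (an illustrative helper, not Manager code): with HighUtilization set to 50, a host already above 50% CPU should refuse to start a new virtual machine.

```python
def can_start_vm(host_cpu_percent, high_utilization):
    """Whether a host may accept a new VM under evenly_distributed.

    A host whose CPU utilization exceeds the HighUtilization
    threshold (a percentage) should not accept new VMs.
    """
    return host_cpu_percent <= high_utilization

# With HighUtilization=50, a host at 65% CPU should refuse the VM.
print(can_start_vm(65, 50))  # False
print(can_start_vm(40, 50))  # True
```

The bug is that the start succeeds even when this check should fail.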

CPU Load Not Evenly Load Balanced for an Evenly_Distributed Scheduling Policy

In a 3-host cluster where an Evenly_Distributed scheduling policy is configured with the HighUtilization property set to 50 and the CPUOverCommitDuration property set to 1, CPU load is not evenly distributed across the KVM hosts in the cluster. In this scenario, virtual machines that should have been migrated by load balancing, based on the configured scheduling policy, did not migrate.

Solution: There is no workaround for this behavior.

Bug: 29172270
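The role of CPUOverCommitDuration in this scenario can be sketched as follows (an illustrative model, not Manager code; sample data is hypothetical): a host should only be treated as over-utilized, and its virtual machines considered for migration, when CPU utilization stays above HighUtilization for the configured number of minutes.

```python
def is_over_utilized(cpu_samples, high_utilization, duration_minutes):
    """Whether a host is over-utilized for load-balancing purposes.

    cpu_samples holds one CPU-utilization reading per minute, newest
    last. The host counts as over-utilized (making its VMs migration
    candidates) only when every reading in the last duration_minutes
    exceeds high_utilization.
    """
    window = cpu_samples[-duration_minutes:]
    return len(window) >= duration_minutes and all(
        s > high_utilization for s in window
    )

# HighUtilization=50, CPUOverCommitDuration=1:
print(is_over_utilized([40, 80], 50, 1))  # True: last minute above 50%
print(is_over_utilized([80, 40], 50, 1))  # False: load dropped again
```

In the reported scenario, hosts that satisfy this condition do not trigger any migrations.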

Power_Savings Scheduling Policy Not Shutting Down Any of the KVM Hosts in a Cluster with CPU Utilization Less Than 20%

In a cluster with 3 running KVM hosts and 4 running virtual machines, a Power_Savings scheduling policy is configured with the EnableAutomaticHostPowerManagement property set to true, so that KVM hosts are powered down when their CPU and memory utilization is low. After this policy is set, the KVM hosts are not shut down and the virtual machines are not migrated, even though the CPU utilization is less than 20%. Given the configured Power_Savings scheduling policy for this scenario, some of the hosts should have been shut down.

Solution: There is no workaround for this behavior.

Bug: 29418541
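The expected power-management outcome can be sketched as follows (an illustrative model only, not Manager code; the 20% threshold comes from the scenario above, and the helper name and host data are hypothetical): under-utilized hosts should be evacuated and powered down, while at least one host stays running.

```python
def hosts_to_power_down(hosts, low_utilization, min_running=1):
    """Pick under-utilized hosts a power-saving policy could
    evacuate and shut down, keeping at least min_running hosts up.
    """
    idle = [h for h in hosts if h["cpu_percent"] < low_utilization]
    busy = [h for h in hosts if h["cpu_percent"] >= low_utilization]
    # If too few busy hosts remain, keep some idle hosts running.
    keep = max(0, min_running - len(busy))
    # Shut down the most idle hosts first.
    idle.sort(key=lambda h: h["cpu_percent"])
    return [h["name"] for h in idle[: len(idle) - keep]]

# All three hosts are below 20% CPU, so two should be shut down.
hosts = [
    {"name": "kvm1", "cpu_percent": 5},
    {"name": "kvm2", "cpu_percent": 12},
    {"name": "kvm3", "cpu_percent": 18},
]
print(hosts_to_power_down(hosts, 20))  # ['kvm1', 'kvm2']
```

In the reported scenario, no hosts are selected for shutdown at all.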

Virtual Machine Not Migrating After Exceeding the MaxFreeMemoryForOverUtilized Property Value for a Power_Savings Scheduling Policy

A virtual machine does not migrate to another KVM host in the cluster that has enough free memory when the value set for the MaxFreeMemoryForOverUtilized property of a Power_Savings scheduling policy is exceeded.

Solution: There is no workaround for this behavior.

Bug: 29419399
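The expected target selection can be sketched as follows (an illustrative reading of the property based on upstream oVirt semantics, not Manager code; the helper name and host data are hypothetical): once a host crosses the memory threshold, one of its virtual machines should move to a host that can fit it without itself crossing the threshold.

```python
def migration_target(hosts, vm_memory_mb, max_free_for_over_utilized):
    """Pick a migration target for a VM leaving an over-utilized host.

    Memory-based balancing sketch: a host counts as over-utilized
    when its free memory falls below the threshold (in MB), so a
    target must both fit the VM and stay above that threshold
    after the move. Returns None when no host qualifies.
    """
    for host in hosts:
        if host["free_mb"] - vm_memory_mb >= max_free_for_over_utilized:
            return host["name"]
    return None

# A 2048 MB VM fits on kvm3 without pushing it below the threshold.
hosts = [
    {"name": "kvm2", "free_mb": 2500},
    {"name": "kvm3", "free_mb": 8192},
]
print(migration_target(hosts, 2048, 1024))  # kvm3
```

In the reported scenario, no migration happens even though such a target exists.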

MinFreeMemoryForUnderUtilized Property Not Working for Evenly_Distributed and Power_Savings Scheduling Policies

In a cluster with 3 active hosts where there are 4 running virtual machines (3 virtual machines running on one of the hosts and 1 virtual machine running on another one of the hosts), an Evenly_Distributed policy is configured with a value set for the MinFreeMemoryForUnderUtilized property. The virtual machines in this environment then exceed the MinFreeMemoryForUnderUtilized property value set for the policy, but the KVM hosts are not shut down, nor are the virtual machines migrated, in this scenario.

The policy is then changed to a Power_Savings scheduling policy and the MinFreeMemoryForUnderUtilized property is set to the same value as previously used for the Evenly_Distributed scheduling policy. Again, the KVM hosts are not shut down, nor are the virtual machines migrated, when this property value is exceeded.

Solution: There is no workaround for this behavior.

Bug: 29425062
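The documented intent of the property can be sketched as follows (an illustrative reading based on upstream oVirt semantics, not Manager code; the helper name and host data are hypothetical): a host whose free memory is above MinFreeMemoryForUnderUtilized counts as under-utilized, and under a Power_Savings policy its virtual machines should be migrated away so the host can be shut down.

```python
def under_utilized_hosts(hosts, min_free_for_under_utilized):
    """List hosts a memory-aware policy would treat as under-utilized.

    A host whose free memory (MB) is above the threshold is
    under-utilized; under power saving, its VMs should be migrated
    away so the host can be shut down.
    """
    return [
        h["name"]
        for h in hosts
        if h["free_mb"] > min_free_for_under_utilized
    ]

hosts = [
    {"name": "kvm1", "free_mb": 16384},  # mostly idle
    {"name": "kvm2", "free_mb": 2048},   # busy
]
print(under_utilized_hosts(hosts, 8192))  # ['kvm1']
```

In the reported scenario, hosts matching this condition are neither evacuated nor shut down under either policy.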

MacPoolAdmin Role Is Available Only for System-Level Users

Although the MacPoolAdmin role can be assigned to users of different levels (for example, System, Data Center, Cluster, and so on), only users who are given this role at the System level are actually able to perform MacPoolAdmin tasks on the Oracle Linux Virtualization Manager, such as creating, editing, or deleting MAC address pools.

Solution: If a user requires MacPoolAdmin privileges, ensure that the user is assigned the MacPoolAdmin role at the System level on the Manager.

Bug: 29534106