7. Known Limitations and Workarounds

7.1. Oracle VM Manager User Interface
7.2. Oracle VM Servers and Server Pools
7.3. Virtual Machines
7.4. Networks
7.5. Storage

This section contains information on the known limitations and workarounds for Oracle VM.

7.1. Oracle VM Manager User Interface

This section contains the known issues and workarounds related to the Oracle VM Manager user interface.

7.1.1. Only One Instance of the UI Per Browser Supported

If you connect to the Oracle VM Manager UI in a web browser twice, in two different tabs or windows, unexpected display issues may occur.

Workaround: You should only connect to one instance of the Oracle VM Manager UI per web browser session.

Bug 13034728

7.1.2. Drag and Drop Requires UI Refresh

At times the user interface may enter an inconsistent state and require a refresh. This may occur when using the drag and drop feature.

Workaround: Refresh the user interface using the F5 key.

Bug 13580163

7.1.3. Oracle SPARC Server Pools Not Supported

The user interface contains options for creating and editing a server pool with Oracle SPARC-based Oracle VM Servers. Although this functionality is listed in the user interface, it is neither available nor supported in this release.

7.1.4. 32-bit Server Pools Not Supported

The user interface contains an option for creating a server pool containing 32-bit Oracle VM Servers. This option is listed as x86-32b in the Architecture Type drop-down list in the Create a Server Pool wizard. Although this functionality is listed in the user interface, it is not supported in this release. In this release, all Oracle VM Servers in a server pool must be 64-bit machines.

7.1.5. Open Port 7002 on Oracle VM Servers When Using HTTPS

If you use the HTTPS protocol to connect to the Oracle VM Manager UI, make sure you open port 7002 on all Oracle VM Servers managed by Oracle VM Manager as this port is also used for communication between Oracle VM Manager and the Oracle VM Servers when using HTTPS.

Bug 14198734

7.2. Oracle VM Servers and Server Pools

This section contains the known issues and workarounds related to Oracle VM Servers and server pools.

7.2.1. Server BIOS Settings

The following server BIOS settings may be required to use Oracle VM Server:

  • AHCI mode may be necessary to recognize the CDROM device to perform an installation from CDROM.

  • Disable I/O MMU virtualization settings; for Intel-based servers this is VT-d, and for AMD-based servers this is AMD-Vi or IOMMU. I/O MMU is not supported in this release.

7.2.2. Redundant Grub Entries after Oracle VM Server Upgrade

When upgrading your Oracle VM Servers from CD or ISO, as opposed to a YUM upgrade, new entries are created in the Oracle Linux grub menu. The entries from the previous installation, however, are not removed from the grub configuration, although they no longer serve any purpose.

Workaround: If necessary, remove the redundant entries from /etc/grub.conf.

Bug 13935067

7.2.3. Network Interface in PXE Kickstart File

When using a PXE boot kickstart file to perform an Oracle VM Server installation, make sure you specify the network interface to be used for the management interface first. If more than one network interface is specified in the kickstart file, the first interface is used as the management interface. The management interface is the only live interface after a PXE install of Oracle VM Server. You should manually configure any other network interfaces on the Oracle VM Server to start on boot by editing the /etc/sysconfig/network-scripts/ifcfg-* files and changing the ONBOOT=NO parameter to ONBOOT=YES.
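
The ONBOOT change can be scripted as a minimal sketch. The example below operates on a temporary copy so it is safe to run anywhere; on a real Oracle VM Server the target files live in /etc/sysconfig/network-scripts/ifcfg-*, and the exact ONBOOT=NO spelling is assumed from the text above:

```shell
# Sketch: flip ONBOOT=NO to ONBOOT=YES in an ifcfg-style file.
# Uses a temporary copy here; substitute the real
# /etc/sysconfig/network-scripts/ifcfg-* path on a server.
cfg=$(mktemp)
printf 'DEVICE=eth1\nBOOTPROTO=static\nONBOOT=NO\n' > "$cfg"
sed -i 's/^ONBOOT=NO$/ONBOOT=YES/' "$cfg"
grep ONBOOT "$cfg"    # prints: ONBOOT=YES
```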

Bug 12557470

7.2.4. Oracle VM Server Kickstart Installation Not Supported with Multipath SAN Boot

When using a PXE boot kickstart file to perform an Oracle VM Server installation, there is no option available to enable the installer setting to make the server boot from a multipath device on a SAN. The installer option to boot from a multipath SAN device can only be set manually during the installation procedure.

Bug 13967964

7.2.5. Dom0 Memory Calculations for Servers with More than 128GB RAM

When installing Oracle VM Server on a machine with more than 128GB RAM, the memory allocated to Dom0 may turn out to be too restrictive for some configurations. For example, you may need to increase Dom0 memory in case the Oracle VM Server runs a large number of virtual machines and connects to a large number of LUNs.

Dom0 memory is calculated as follows: 2% of server RAM + 512MB. However, calculations are cut off at a maximum of 2% of 128GB RAM, meaning that Dom0 has a maximum of around 3GB RAM available. This should be sufficient for most scenarios, but an increase in Dom0 memory may be required to accommodate the demand of a particular environment.
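
The calculation above can be sketched as a small shell function; the function name and megabyte units are illustrative, and the cap corresponds to 2% of 128GB (131072MB):

```shell
# Sketch of the Dom0 memory formula: 2% of server RAM + 512MB,
# with the 2% term capped at 2% of 128GB (131072MB).
dom0_mem_mb() {
  ram_mb=$1
  if [ "$ram_mb" -gt 131072 ]; then
    ram_mb=131072
  fi
  echo $(( ram_mb * 2 / 100 + 512 ))
}
dom0_mem_mb 65536     # 64GB server: prints 1822
dom0_mem_mb 262144    # 256GB server: capped, prints 3133 (around 3GB)
```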

Bug 13922885

7.2.6. Increasing Dom0 Memory

There are cases where it may become necessary to increase the Dom0 memory size to meet the demand of running applications. For example, presenting one iSCSI LUN takes approximately 3.5MB of system memory. Consequently, a system that uses many LUNs quickly requires a larger amount of memory in accordance with the storage configuration.

Workaround: Change the amount of memory allocated to Dom0. See Changing the Dom0 Memory Size for information on how to change the Dom0 memory allocation.
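
For orientation, the Dom0 memory allocation is typically set with the dom0_mem parameter on the hypervisor line in /boot/grub/grub.conf. The fragment below is purely illustrative (the title, paths, and 4096M value are examples, not recommendations); follow Changing the Dom0 Memory Size for the supported procedure:

```
# Illustrative /boot/grub/grub.conf entry; the dom0_mem value
# shown is an example only.
title Oracle VM Server
    root (hd0,0)
    kernel /xen.gz dom0_mem=4096M
    module /vmlinuz
    module /initrd.img
```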

7.2.7. Oracle VM Server Installation Fails with Broadcom Gigabit Ethernet Controller

Installing Oracle VM Server on a system with a Broadcom Gigabit Ethernet Controller, such as the Dell 380, fails with an error similar to the following example:

Traceback (most recent call first):
File "/usr/lib/anaconda/network.py", line 685, in write
if dev.get('BOOTPROTO').lower() in ['dhcp', 'ibft']:
File "/usr/lib/anaconda/yuminstall.py", line 1394, in doPreInstall
anaconda.id.network.write(anaconda.rootPath)
File "/usr/lib/anaconda/backend.py", line 184, in doPreInstall
anaconda.backend.doPreInstall(anaconda)
File "/usr/lib/anaconda/dispatch.py", line 207, in moveStep 

The installer cannot detect the network adapter, so it fails to complete the installation.

Bug 13387076

7.2.8. Oracle VM Server Installation on Sun Fire X4800

If you are installing Oracle VM Server on a Sun Fire X4800, you must provide extra parameters when booting from the installation media (CDROM or ISO file), or when using a kickstart installation. These parameters allow the megaraid_sas driver to load correctly.

If booting from the installation media, press F2 when the initial boot screen is displayed and provide the following additional parameters as part of the boot command:

mboot.c32 xen.gz extra_guest_irqs=64,2048 nr_irqs=2048 --- vmlinuz --- initrd.img

If using a kickstart installation, add the additional kernel parameters to the PXE configuration file.

If you want to make these changes permanent, edit the /boot/grub/grub.conf file in your Oracle VM Server after the installation has completed.
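
A hedged sketch of what the permanent /boot/grub/grub.conf entry might look like, carrying over the parameters from the boot command above (the title and surrounding lines are illustrative):

```
title Oracle VM Server
    root (hd0,0)
    kernel /xen.gz extra_guest_irqs=64,2048 nr_irqs=2048
    module /vmlinuz
    module /initrd.img
```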

Bug 12657272

7.2.9. Wake On Lan (WOL) Fails if Oracle VM Servers on Different Subnets

Starting or restarting Oracle VM Servers fails if the Oracle VM Servers in a server pool are on different subnets.

Workaround: Use IPMI (Intelligent Platform Management Interface) to start or restart Oracle VM Servers in a server pool that are on different subnets.

Bug 12410458

7.2.10. Remove Oracle VM Server from Server Pool Fails on Redeployed Oracle VM Manager

If Oracle VM Manager is redeployed to a new computer, you should rediscover any file servers. If you do not rediscover the file servers, and the server pool file system is on the file server, you cannot remove Oracle VM Servers from the server pool.

Bug 12333132

7.2.11. Unable to Rediscover Server Pool after Database Clean

If you have server pools in your environment and you clear the Oracle VM Manager database, you cannot rediscover and rebuild your previous Oracle VM environment. The following error appears in the job:

OVMRU_000021E Cannot perform operation on pool: Unknown pool found for discovered Pool FS. 
The pool does not have a Virtual IP Address.

Workaround: Follow these steps to rediscover the server pools:

  1. Discover one Oracle VM Server from the server pool.

  2. Register, and refresh your storage server.

  3. Refresh the file system that contains the server pool file system.

  4. Refresh the file systems that contain the repositories.

  5. Refresh the repositories.

  6. Refresh all Oracle VM Servers in the server pool to discover the virtual machines.

Bug 12724969

7.2.12. Inconsistent Master Role Assignment after Cluster Failure

If the master Oracle VM Server in a clustered server pool becomes unavailable or loses its connection to the storage containing the server pool file system, another Oracle VM Server takes over the master role and the server pool virtual IP. When the unavailable server comes back online, it rejoins the cluster, provided its access to the server pool file system has been restored. This is where the inconsistency occurs. The original master server contains information indicating that it holds the master role, while another server in the cluster may have assumed the master role in the meantime. If the entire cluster has been down, the original master server can continue to fulfill that role. If the cluster remained operational, however, changes may have occurred that the original master server has no information about. As a result, two Oracle VM Servers in the same clustered server pool may claim the master role and the virtual IP, and Oracle VM Manager may not be able to resolve that conflict.

Workaround: Manually assign the master role either to the original master or to another server that is still active within the cluster. Follow these steps:

  1. In Oracle VM Manager, open the Servers and VMs tab.

  2. In the navigation pane, select the server pool and click Edit Server Pool.

  3. Select the appropriate server as master server and click OK to save your changes.

Bugs 13875603 and 13969945

7.2.13. I/O-intensive Storage Operations Disrupt Cluster Heartbeat

The OCFS2 heartbeating function can be disturbed by I/O-intensive operations on the same physical storage. For example: importing a template or cloning a virtual machine in a storage repository on the same NFS server where the server pool file system resides may cause a time-out in the heartbeat communication, which in turn leads to server fencing and reboot.

Workaround: To avoid unwanted rebooting, it is recommended that you choose a server pool file system location with sufficient and stable I/O bandwidth. Place server pool file systems on a separate NFS server or use a small LUN, if possible.

Bug 12813694

7.2.14. Unable to Remove Oracle VM Server from Cluster: Heartbeat Region Still Active

If OCFS2 file systems are still mounted on an Oracle VM Server you want to remove from a cluster, the remove operation may fail. This occurs because the OCFS2 mount is an active pool file system or storage repository.

Workaround: If a storage repository is still presented, unpresent it from the Oracle VM Server before attempting to remove the Oracle VM Server from the cluster. If a pool file system is causing the remove operation to fail, other processes might be working on the pool file system during the unmount. Try removing the Oracle VM Server at a later time.

7.2.15. Unable to Remove Oracle VM Server from Cluster: Heartbeat in Configured Mode

An Oracle VM Server is in heartbeat configured mode if the server pool file system is not mounted: either the file system failed to mount, or the mount was lost because of hardware issues such as unavailability of the LUN or NFS server containing the file system to be mounted.

Workaround: Mount the file system at the Oracle VM Server command line, or, as a last resort, reboot the Oracle VM Server and allow it to join the cluster automatically at boot time.

7.2.16. Netconsole Error During Oracle VM Server Startup, Unknown Error 524

To use netconsole you must specify a non-bridged ethx device, in the /etc/sysconfig/netconsole file on an Oracle VM Server, for example:

# The ethernet device to send console messages out of (only set this if it
# can't be automatically determined)
# DEV=
DEV=eth2

Bug 12861134

7.2.17. ACPI Buffer Error After Installing Oracle VM Server

On some Intel-based systems, the following error may occur after installing Oracle VM Server when the computer is started:

[    0.674742] ACPI Error: Field [CPB3] at 96 exceeds Buffer [NULL] size 64 (bits) 
  (20090903/dsopcode-596)
[    0.675167] ACPI Error (psparse-0537): Method parse/execution failed
[\_SB_._OSC] (Node ffff88002e4fba50), AE_AML_BUFFER_LIMIT

This has been observed on systems with the following BIOS information, but may also occur in other BIOS versions:

Vendor: Intel Corp.
Version: S5500.86B.01.00.0036-191.061320091126
Release Date: 06/13/2009
BIOS Revision: 17.18
Firmware Revision: 0.0

HP ProLiant BL685c G6
HP BIOS A17 12/09/2009
Backup Version 12/01/2008
Bootblock 10/02/2008

This error can safely be ignored.

Bugs 12865298 and 12990146

7.2.18. Cisco Blade Servers Must Have Fixed UUID

Cisco Blade servers cannot be configured to boot with random UUIDs. Each Oracle VM Server must have a fixed UUID to allow the Oracle VM Server to move between Blades. To configure a fixed UUID, see:

http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/gui/config/guide/1.3.1/UCSM_GUI_Configuration_Guide_1_3_1_chapter26.html#task_6026472137893749620

Bug 13000392

7.3. Virtual Machines

7.3.1. Editing Virtual Machine Configuration File Not Supported
7.3.2. Virtual Machine Console Prompts for VNC Password
7.3.3. Concurrent Template Operations May Fail Due to Lock Issue
7.3.4. Cloning Virtual Machine from Oracle VM 2.x Template Stuck in Pending
7.3.5. Cloning Hangs for Virtual Machines and Templates with Unsupported Network Configurations
7.3.6. Cloning a HA-enabled Virtual Machine does not Maintain the HA Property
7.3.7. Live Migration Fails but Oracle VM Manager Reports Running Virtual Machine on Target Oracle VM Server
7.3.8. Virtual Machine Created with Network (PXE) Installation Does Not Proceed Beyond Pre Boot
7.3.9. Hardware Requirements for Hardware Virtualized Guests
7.3.10. Creating a PVM Guest Slow Using Local Storage
7.3.11. Virtual CD-ROM in PVHVM Guests not Initialized as IDE Device
7.3.12. PVM Guest Fails to Boot with Large Number of Disks
7.3.13. Suspending a Virtual Machine with More than 64 Virtual CPUs Fails
7.3.14. Virtual Machines with 32 or More Virtual CPUs Fail
7.3.15. Guests with More than 32 Virtual CPUs Hang at Boot
7.3.16. Limitations for Hot-Changing Number of Virtual CPUs
7.3.17. Hot-Plugging Virtual CPUs Not Supported for HVM Guests with PXE Boot Option
7.3.18. Virtual Machine Console Certificate Error
7.3.19. Improving Mouse Pointer Control in Virtual Machine Console
7.3.20. Solaris 10 Kernel Panic on Some AMD Systems
7.3.21. Solaris 10 Release 8/11 Guest Hangs at Boot
7.3.22. Solaris 11 Release 2011.11 Guest Hangs at Boot
7.3.23. Windows Server 2008 R2 x64 HVM Guests Do Not Cleanly Shut Down
7.3.24. New Disks Displayed with Yellow Icon in Windows Guests
7.3.25. New Disks Are Not Automatically Detected in Windows 2008 Guests
7.3.26. (x86 Only) Netconsole Does Not Work on a SUSE Linux Enterprise Server Guest

This section contains the known issues and workarounds related to virtual machines.

7.3.1. Editing Virtual Machine Configuration File Not Supported

Manually modifying virtual machine configuration files is not supported. Any changes to the virtual machine configuration file should be performed using Oracle VM Manager. The only exceptions are when Oracle Support Services advises you to edit the vm.cfg file manually, or when these Release Notes explicitly instruct you to do so.

Manual changes made to the virtual machine configuration file (vm.cfg) are not reflected in Oracle VM Manager, and may result in unexpected and undesirable behavior. For example, if you edit the HA setting in the vm.cfg file to disable HA, and the virtual machine is stopped by any method other than using Oracle VM Manager, the virtual machine is restarted because Oracle VM Manager is not aware of the HA change in the virtual machine's configuration file.

Bugs 12654125 and 13391811

7.3.2. Virtual Machine Console Prompts for VNC Password

When launching the virtual machine console you are prompted to enter a VNC password. If you enter a password, the VNC window disappears. This is caused by an entry in the virtual machine configuration file which specifies a VNC password. VNC console passwords are not supported in this release.

Workaround: Remove the VNC password from the virtual machine configuration file using the Oracle VM Utilities:

  1. Download and install the Oracle VM Utilities Release 0.4.3 or above from:

    http://www.oracle.com/technetwork/server-storage/vm/downloads/index.html

  2. Use the following Oracle VM Utilities command to remove the VNC password from the virtual machine configuration file:

    # ./ovm_vmcontrol -u admin -p password -h localhost -c fixcfg -v Solaris11 
    Oracle VM VM Control utility 0.4.3. 
    Connected. 
    Command : fixcfg 
    Removing vncpassword option 
    Fixing vm config

See the Oracle VM Utilities Guide for more information on using the Oracle VM Utilities.

Bug 14014952

7.3.3. Concurrent Template Operations May Fail Due to Lock Issue

When importing and deleting several templates concurrently, or when an Oracle VM Server is removed during the import of a template, a lock exception error may appear. The template upload often completes successfully despite the error message, but the template does not appear in the list of available templates in the storage repository.

To resolve this problem, refreshing the storage repository is often sufficient; the uploaded template then appears in the list. In some cases the imported template turns out to be incomplete. If so, delete the template and import it again.

7.3.4. Cloning Virtual Machine from Oracle VM 2.x Template Stuck in Pending

When creating a virtual machine from an Oracle VM 2.x template, the clone job fails with the error:

OVMAPI_9039E Cannot place clone VM: template_name.tgz, in Server Pool: server-pool-uuid.
That server pool has no servers that can run the VM.

This is caused by a network configuration inconsistency with the vif = ['bridge=xenbr0'] entry in the virtual machine's configuration file.

Workaround: Remove any existing networks in the virtual machine template, and replace them with valid networks which have the Virtual Machine role. Start the clone job again and the virtual machine clone is created. Alternatively, remove any existing networks in the template, restart the clone job, and add in any networks after the clone job is complete.

Bug 13911597

7.3.5. Cloning Hangs for Virtual Machines and Templates with Unsupported Network Configurations

When cloning a virtual machine or template containing network configurations that cannot be accommodated by any of the Oracle VM Servers in the environment, the clone operation fails. However, the failure does not return an error message to the user, and the clone job hangs silently.

Bug 13946567

7.3.6. Cloning a HA-enabled Virtual Machine does not Maintain the HA Property

When cloning a HA-enabled virtual machine, the resulting clone (either a virtual machine or template) is not HA-enabled.

Workaround: Edit the cloned virtual machine or template and enable HA.

Bug 13911082

7.3.7. Live Migration Fails but Oracle VM Manager Reports Running Virtual Machine on Target Oracle VM Server

When live migration of a virtual machine fails, Oracle VM Manager reports the migration job as failed. However, the migrated virtual machine may appear under the correct target Oracle VM Server in running status, as if the migration had completed successfully. At the same time, the virtual machine remains in running status on the Oracle VM Server it was supposed to be migrated away from, but Oracle VM Manager reports it as stopped.

Workaround: Kill the virtual machine affected by the failed migration operation on the target Oracle VM Server; then restart the server. Rediscover the Oracle VM Server on which the virtual machine was originally running, and it should appear again in running status.

Bugs 13939895 and 13939802

7.3.8. Virtual Machine Created with Network (PXE) Installation Does Not Proceed Beyond Pre Boot

Creating a virtual machine using the Network method (PXE) does not proceed beyond pre boot, so the virtual machine is not created. This occurs for Oracle Linux 5.x virtual machines.

Bug 12905120

7.3.9. Hardware Requirements for Hardware Virtualized Guests

Creating hardware virtualized guests requires that the Oracle VM Server has an Intel VT (code-named Vanderpool) or AMD-V (code-named Pacifica) CPU. See the Oracle VM Installation and Upgrade Guide for a list of supported hardware.

7.3.10. Creating a PVM Guest Slow Using Local Storage

Creating a PVM guest using local storage may take a very long time. This may be caused by write caching being turned off for the local disk.

Workaround: Enable write caching for the disk using the hdparm utility.

Bug 12922626

7.3.11. Virtual CD-ROM in PVHVM Guests not Initialized as IDE Device

If the configuration file of a PVHVM virtual machine or template lists a virtual CD-ROM drive as an IDE device, that is as /dev/hda, /dev/hdb, etc., the virtual CD-ROM is not available inside the guest. To enable the CD-ROM drive inside the guest, it must be defined in the vm.cfg file as a paravirtual device, that is /dev/xvda, /dev/xvdb, etc.
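
As an illustration of the change described above, the disk entry in vm.cfg would switch from an IDE-style device name to a paravirtual one. The repository path below is hypothetical, and the exact entry format is a sketch of common Xen disk syntax rather than a verbatim Oracle VM example:

```
# Not visible inside a PVHVM guest (IDE-style device name):
#   disk = ['file:/OVS/Repositories/example/ISOs/install.iso,hdb:cdrom,r']
# Available inside the guest (paravirtual device name):
disk = ['file:/OVS/Repositories/example/ISOs/install.iso,xvdb:cdrom,r']
```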

Bug 14000249

7.3.12. PVM Guest Fails to Boot with Large Number of Disks

A PVM guest with many disks attached may be unable to boot. PVM guests sometimes fail to start with as few as 12 to 16 virtual disks attached, although Oracle VM has a much higher general limit of over 100 virtual disks.

Bug 13979094

7.3.13. Suspending a Virtual Machine with More than 64 Virtual CPUs Fails

If a virtual machine has more than 64 virtual CPUs, a suspend operation will fail. This occurs with both HVM and PVM guests.

Bug 13587669

7.3.14. Virtual Machines with 32 or More Virtual CPUs Fail

When creating or editing a virtual machine, you can set both the number of CPUs it uses and the maximum number of CPUs it is allowed to use. However, if the guest is configured with a large number of virtual CPUs, the respective vcpus and maxvcpus values are not passed correctly to the hypervisor. As a result, the creation of the virtual machine fails, or if the virtual machine already exists, it fails to start.

Workaround: When creating or editing a virtual machine with 32 or more virtual CPUs, do not enter a value for the maximum number of CPUs.

Bug 13823522

7.3.15. Guests with More than 32 Virtual CPUs Hang at Boot

If a guest is configured with a large number of virtual CPUs, the virtual machine may hang at some point in the boot sequence. This is likely caused by the kernel or the hypervisor, and there is no workaround if you need more than 32 virtual CPUs.

Bug 12913287

7.3.16. Limitations for Hot-Changing Number of Virtual CPUs

For both hardware virtualized (HVM) guests and hardware virtualized guests with paravirtualized drivers (PVHVM), the ability to change the number of virtual CPUs is limited by the kernel of the virtual machine. The table below provides an overview of guest kernel support; it applies to both x86 and x86_64 guest architectures.

Table 10. Guest kernel support for hot-changing virtual CPUs

Oracle Linux Version         Type    Kernel Version              Hot-Add  Hot-Remove
Oracle Linux R5U5            PVM     2.6.18-194.0.0.0.3.el5xen   Yes      Yes
Oracle Linux R5U5            PVHVM   2.6.18-194.0.0.0.3.el5      No       No
Oracle Linux R5U6            PVM     2.6.18-238.0.0.0.1.el5xen   Yes      Yes
Oracle Linux R5U6            PVHVM   2.6.18-238.0.0.0.1.el5      No       No
Oracle Linux R5U7            PVM     2.6.32-200.13.1.el5uek      No       Yes
Oracle Linux R5U7            PVHVM   2.6.32-200.13.1.el5uek      No       No
Oracle Linux R5U8            PVM     2.6.32-300.10.1.el5uek      No       Yes
Oracle Linux R5U8            PVHVM   2.6.32-300.10.1.el5uek      No       No
Oracle Linux R6U1            PVM     2.6.32-100.34.1.el6uek      Yes      Yes
Oracle Linux R6U1            PVHVM   2.6.32-100.34.1.el6uek      No       No
Oracle Linux R6U2            PVM     2.6.32-300.3.1.el6uek       Yes      Yes
Oracle Linux R6U2            PVHVM   2.6.32-300.3.1.el6uek       No       No
Oracle Linux R5U8 with UEK2  PVM     2.6.39-100.5.1.el5uek       No       Yes
Oracle Linux R5U8 with UEK2  PVHVM   2.6.39-100.5.1.el5uek       No       No
Oracle Linux R6U2 with UEK2  PVM     2.6.39-100.5.1.el6uek       Yes      Yes
Oracle Linux R6U2 with UEK2  PVHVM   2.6.39-100.5.1.el6uek       Yes      No

Bugs 12913287, 13905845, 13823853, and 13898210

7.3.17. Hot-Plugging Virtual CPUs Not Supported for HVM Guests with PXE Boot Option

For hardware virtualized guests, hot-plugging of virtual CPUs is not supported if the PXE option is selected in the boot order of the virtual machine. If you attempt to change the number of virtual CPUs, Oracle VM Manager does not always display an error message.

Bug 13963301

7.3.18. Virtual Machine Console Certificate Error

When using the virtual machine console, the following error may be displayed:

JAR sources in JNLP file are not signed by the same certificate

This occurs if the client JRE version is 1.6.0_14.

Workaround: Upgrade the client JRE version to 1.6.0_30 or above.

Bug 13621606

7.3.19. Improving Mouse Pointer Control in Virtual Machine Console

When you launch the virtual machine console from Oracle VM Manager, you may find that the mouse pointer on your local machine and the mouse pointer in the virtual machine travel across the screen at different speeds.

If your guest virtual machine's operating system is Linux-based, the following workaround may reduce the mouse control issue. Enter the following on the guest's command line:

# xset m 1 1

7.3.20. Solaris 10 Kernel Panic on Some AMD Systems

Virtual machines with the guest operating system Solaris 10 may experience kernel panic on systems with AMD processors. Kernel panic has been seen in Solaris 10 9/10 (Update 9) and Solaris 10 8/11 (Update 10).

Workaround: To work around this issue:

  1. During the installation or first time boot up, edit the grub menu and append the -kd kernel boot parameter.

  2. Continue with the boot to run the Solaris kmdb. When the following screen prompt is displayed:

    Welcome to kmdb 
    [0]> 

    Enter the command:

    cmi_no_init/W 1

    Enter the following to continue the installation or system boot:

    :c
  3. After Solaris is installed and booted up, append the following line to the /etc/system file to make this change persistent across system reboot.

    set cmi_no_init = 1

Bug 13332538

7.3.21. Solaris 10 Release 8/11 Guest Hangs at Boot

When booting a virtual machine with Oracle Solaris 10 Release 8/11, the guest OS hangs when the copyright information screen appears. This is caused by a change in CPUID handling in dom0, which triggers a Solaris bug on platforms with CPUs of the Westmere-EP family.

Workaround: To make Solaris 10 run on Oracle VM 3.1.1, apply the following manual fix:

  1. At boot, edit the grub menu: append the -kd kernel boot parameter. This runs the Solaris kernel debugger.

  2. Continue the boot sequence up to Solaris kmdb.

  3. At the kmdb prompt, enter the following command:

    Welcome to kmdb
    [0]> apix_enable/W 0
  4. Enter :c to continue the system boot sequence.

  5. When Solaris has been installed and has successfully booted, make this fix persistent by adding the following line to /etc/system:

    set apix_enable = 0

Bug 13876544

7.3.22. Solaris 11 Release 2011.11 Guest Hangs at Boot

When booting a virtual machine with Oracle Solaris 11 Release 2011.11, the guest OS hangs. This is caused by a Solaris bug where interrupt storms occur on Intel systems based on Sandy Bridge and Westmere CPUs. The issue has been fixed in Solaris 11 2011.11 SRU 2a.

Workaround: To make Solaris 11 2011.11 run on Oracle VM 3.1.1, use the SRU 2a, or apply the following manual fix to the GA release:

  1. At boot, edit the grub menu: append the -kd kernel boot parameter. This runs the Solaris kernel debugger.

  2. In the kernel debugger, enter the following commands:

    [0]> ::bp pcplusmp`apic_clkinit
    [0]> :c
    kmdb: stop at pcplusmp`apic_clkinit
    kmdb: target stopped at: pcplusmp`apic_clkinit: pushq %rbp
    [0]> apic_timer_preferred_mode/W 0
    pcplusmp`apic_timer_preferred_mode:    0x2    =    0x0
    [0]> :c
  3. Continue the system boot sequence.

  4. When Solaris has been installed and has successfully booted, make this fix persistent by adding the following line to /etc/system:

    set pcplusmp:apic_timer_preferred_mode = 0x0

For more information, please consult the Support Note with ID 1372094.1. You can also find this document by logging on to My Oracle Support and searching the knowledge base for 1372094.1.

Bug 13885097

7.3.23. Windows Server 2008 R2 x64 HVM Guests Do Not Cleanly Shut Down

Windows Server 2008 Release 2 64-bit hardware virtualized guests fail to shut down cleanly. After the guest has been shut down, and started again, Windows reports that it was not shut down cleanly.

This is not an issue for Windows Server 2008 Release 2 32-bit hardware virtualized guests with paravirtualized drivers (PVHVM).

Bug 12658534

7.3.24. New Disks Displayed with Yellow Icon in Windows Guests

When you add a new disk to a virtual machine and refresh the device manager, the new disk is displayed with a yellow mark. This occurs in Microsoft Windows guests that have the Oracle VM Windows Paravirtual Drivers for Microsoft Windows Release 2.0.7 installed.

Bug 12837744

7.3.25. New Disks Are Not Automatically Detected in Windows 2008 Guests

When you add a new disk to a virtual machine, the new disk is not automatically detected. This occurs in Microsoft Windows 2008 Release 2, 64-bit guests that have the Oracle VM Windows Paravirtual Drivers for Microsoft Windows Release 2.0.7 installed.

Workaround: After you add a new disk, scan for new hardware changes using Server Manager > Disk Drives > Scan for hardware changes.

Bug 12837004

7.3.26. (x86 Only) Netconsole Does Not Work on a SUSE Linux Enterprise Server Guest

Netconsole does not work on a virtual machine running SUSE Linux Enterprise Server 11 (SLES) guest, as polling is not supported by xennet.

Novell Bug ID 763858

7.4. Networks

This section contains the known issues and workarounds related to networks and networking.

7.4.1. Network Configuration May Be Lost After Oracle VM Server Upgrade

The operating system service Kudzu is removed from Oracle VM Release 3.1.1. However, upgrading an Oracle VM 3.0.x environment (which included Kudzu) to Oracle VM Release 3.1.1 using a YUM repository can result in Kudzu still being present in Oracle VM Release 3.1.1. When started, Kudzu can cause the network configuration files ifcfg-ethx to be renamed to ifcfg-ethx.bak. As a result, the network configuration files are not found when the system is rebooted after the upgrade, and the network is not configured correctly.

Workaround: Disable Kudzu before upgrading Oracle VM 3.0.x to Oracle VM Release 3.1.1. Enter the following command on each Oracle VM Server 3.0.x (dom0) to disable Kudzu at all run levels:

# chkconfig kudzu off

Enter the following command to verify that Kudzu is disabled:

# chkconfig --list | grep kudzu

If the system has been upgraded with Kudzu in place, rename the ifcfg-ethx.bak files to ifcfg-ethx and restart the network, or reboot the Oracle VM Server, to work around the issue.
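
The rename step can be sketched as a small loop. The example below runs against a temporary directory so it is safe to execute anywhere; on a real server the files are in /etc/sysconfig/network-scripts, and you would restart the network (or reboot) afterwards:

```shell
# Sketch: restore ifcfg files that Kudzu renamed to .bak.
# Operates on a temporary directory; substitute
# /etc/sysconfig/network-scripts on a real Oracle VM Server.
dir=$(mktemp -d)
touch "$dir/ifcfg-eth0.bak" "$dir/ifcfg-eth1.bak"
for f in "$dir"/ifcfg-eth*.bak; do
  mv "$f" "${f%.bak}"
done
ls "$dir"    # lists ifcfg-eth0 and ifcfg-eth1
```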

A fix has been added to the ovs-release package to obsolete Kudzu during YUM update. Make sure you download the latest patches from Oracle Unbreakable Linux Network (ULN): http://linux.oracle.com and add the Oracle VM 3.1.1 Server Patches channel to the YUM repository before you perform the server upgrade from 3.0.x to 3.1.1. The workaround described above is then not needed.

For more information, see the My Oracle Support website at: https://support.oracle.com/oip/faces/secure/km/DocumentDisplay.jspx?id=1464126.1.

Bug 14082470

7.4.2. Network Card Limit in Virtual Machines

Oracle VM Manager supports eight network cards for each HVM virtual machine as outlined in Table 5 “Virtual machine maximums”. However, the system library does not allow users to add more than three network cards when creating a virtual machine from installation media.

Workaround: After the virtual machine is created, add up to five new network cards by editing the virtual machine in Oracle VM Manager.

7.4.3. Broadcom BCM5754 Gigabit Ethernet Does Not Support Jumbo Frames

The Broadcom BCM5754 Gigabit Ethernet network controller does not support Jumbo Frames.

7.4.4. ARP Packet Checksum Errors

VLANs over a bond mode 6 (balance-alb) bridge interface are not supported, as this bonding mode is incompatible with VLAN bridge interfaces.

Workaround: There are two workarounds for this problem:

  • Use bond mode 6 as a bridge interface; do not use VLANs over bond mode 6.

  • Use VLANs over bond modes (1=active-backup or 4=802.3ad) as a bridge interface.
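In Oracle VM, bonds and VLANs are normally configured through Oracle VM Manager; the fragment below only illustrates the equivalent interface-level configuration of the second workaround, a VLAN running on top of an active-backup (mode 1) bond. The interface names, VLAN ID, and option values are examples only:

```
# /etc/sysconfig/network-scripts/ifcfg-bond0   (bond in mode 1, active-backup)
DEVICE=bond0
BONDING_OPTS="mode=1 miimon=100"
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-bond0.10   (VLAN 10 over the bond)
DEVICE=bond0.10
VLAN=yes
ONBOOT=yes
BOOTPROTO=none
```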

7.4.5. Changing Cluster Heartbeat Network Does Not Reflect New IP Address

If you move the Cluster Heartbeat network role to another network, with a different IP address, the change is not reflected in the Oracle VM Servers.

Workaround: Edit the /etc/ocfs2/cluster.conf file on each Oracle VM Server in the network to reflect the new IP address, and restart each Oracle VM Server.
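The file contains one node: stanza per Oracle VM Server; the ip_address field in each stanza is the value that must be updated to match the new heartbeat network. A hypothetical two-node fragment (node names, port, and addresses are examples only):

```
cluster:
        node_count = 2
        name = ocfs2

node:
        ip_port = 7777
        ip_address = 10.10.10.1
        number = 0
        name = ovmserver1
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 10.10.10.2
        number = 1
        name = ovmserver2
        cluster = ocfs2
```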

Bug 12870099

7.4.6. Changing MTU of Network Port in Bond Not Allowed

If the network port of an Oracle VM Server is part of a bond configuration, you cannot change the MTU setting of the individual network port. The Oracle VM Manager user interface may allow you to change the MTU value, but the job will fail.

Workaround: If your network setup supports an MTU other than the default 1500, and the network interfaces are grouped in a bond, change the MTU setting for the bond port instead of the individual network ports.

Bug 13928090

7.5. Storage

This section contains the known issues and workarounds related to storage.

7.5.1. Unclean File System Causes Errors When Used as a Server Pool File System

If a file system that is not clean (one that contains existing files and server pool cluster information) is used to create a server pool, a number of errors may occur:

  • Cannot create a server pool using the file system. The following error is displayed:

    OVMAPI_4010E Attempt to send command: create_pool_filesystem to server: server_name failed. 
    OVMAPI_4004E Server Failed Command: create_pool_filesystem  ... No such file or directory

    Bug 12839313

  • An OCFS2-based storage repository becomes orphaned (the clusterId that was used when the OCFS2 file system was created no longer exists), you cannot mount or refresh the repository, and the following error is displayed:

    "OVMRU_002037E Cannot present the Repository to server: server_name. Both server and repository 
    need to be in the same cluster."

Workaround: Clean the file system of all files before it is used as a server pool file system.

Bug 12838839

7.5.2. Rescanning a LUN Does Not Show the New Size of a Resized LUN

When you resize a LUN and rescan the physical disks on the storage array, the new size is not reflected in Oracle VM Manager.

Bug 12772588

7.5.3. Storage Repositories Must Be 10GB or Larger

When you attempt to create a storage repository with a size of less than 10GB, the job fails. There is currently no mechanism in place to verify the minimum size before launching the job.

Workaround: Make sure that the storage repository you create is at least 10GB.

Bugs 13925381 and 13893328

7.5.4. LUNs Must Be Cleaned Prior to Storage Repository Creation

When you attempt to create a storage repository on a LUN that was previously used by another clustered server pool, the operation will fail. This is due to a built-in mechanism that prevents the creation of a new OCFS2 file system if the disk or partition already contains cluster data.

Workaround: Clear all files and file system information from the LUN before placing a storage repository on it.
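One common way to clear leftover file system and cluster metadata is to zero the start of the device. The sketch below is illustrative, not an official Oracle procedure; the helper name is hypothetical, and the operation is destructive, so double-check the device path before running it:

```shell
# Hypothetical helper: zero the first 100 MB of a block device to destroy
# leftover file system and OCFS2 cluster metadata. DESTRUCTIVE.
wipe_lun_head() {
    # $1 is the device path (or, for a dry run, a regular file);
    # conv=notrunc,fsync writes in place and flushes to the device.
    dd if=/dev/zero of="$1" bs=1M count=100 conv=notrunc,fsync 2>/dev/null
}

# On the Oracle VM Server, run as root against the multipath device, e.g.:
#   wipe_lun_head /dev/mapper/<LUN WWID>
```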

Bug 13806344

7.5.5. Black Listing of System Disks for Legacy LSI MegaRAID Controllers Not Supported

Oracle VM Server cannot add the system disks for Legacy LSI MegaRAID (Dell PERC4) bus controllers to the /etc/blacklisted.wwids file, so the disks are not blacklisted in the multipath configuration. This occurs because the bus controllers are not capable of returning a unique hardware ID for each disk. Using system disks on Legacy LSI MegaRAID (Dell PERC4) bus controllers is therefore not supported.

Bug 12944281

7.5.6. Blacklisting of System Disks for Multipathing Fails on HP Smart Array (CCISS) Disk Devices

Installing Oracle VM Server on an HP Smart Array (CCISS) fails to blacklist system disks (they are not included in the /etc/blacklisted.wwids file). Messages similar to the following are logged in the /var/log/messages file:

multipathd: /sbin/scsi_id exited with 1
last message repeated 3 times

CCISS disks are only supported for installing Oracle VM Server. If you want to install Oracle VM Server on a CCISS disk, use the workaround below. CCISS disks are not supported when used for storage repositories, raw disks for virtual machines, or server pool file systems.

Workaround: Configure multipathing to blacklist the CCISS system devices by adding a devnode entry for CCISS devices to the blacklist section of the multipath.conf file (the cciss line below is the addition):

# List of device names to discard as not multipath candidates
#
## IMPORTANT for OVS do not remove the black listed devices.
blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st|nbd)[0-9]*"
        devnode "^hd[a-z][0-9]*"
        devnode "^etherd"
        devnode "^cciss!c[0-9]d[0-9]*"
        %include "/etc/blacklisted.wwids"
}

Bug 12722044

7.5.7. Multi-homed NFS Shares Are Not Supported

When an NFS file server has two IP addresses, it cannot expose the same file system over both interfaces. This situation would occur if you configure both IP addresses as separate access hosts; for example to provide access to different Oracle VM Servers via different paths. As a result, the same file system would correspond with two different storage object entries, each with a different path related to each of the IP addresses. As a storage server can only be represented by one object, this configuration is not supported in Oracle VM Release 3.1.1.

Workaround: Configure only one access host per storage server.

7.5.8. Refreshing a NAS-based File System Produces Invalid/Overlapping Exports

When a NAS-based file system is refreshed, it may produce invalid or overlapping exports. During a file system refresh job, all mount points defined in the NAS-based file server's exports file are refreshed, even file systems that are not intended to be used in Oracle VM environments.

Top-level directories that also contain exported subdirectories may cause problems; for example, an exports file that contains /xyz as an export location and also contains /xyz/abc. In this case, the following error may be displayed during a refresh file system job:

OVMRU_002024E Cannot perform operation. File Server: server_name, has invalid exports.

Workaround: To work around the second issue, do not export top-level file systems in the NAS-based file server's exports file.

Bug 12800760

7.5.9. SAS Disks Only Supported in Shared Configuration (SAS SAN)

Local SAS storage is not supported: the devices do not show up in Oracle VM Manager because they are not associated with any storage array. Oracle VM Release 3.1.1 does support shared SAS storage (SAS SAN), meaning SAS disks that use expanders to enable SAN-like behavior. SAS SAN disks show up as physical disks in a storage array, just like LUNs on a typical SAN. To find out whether your storage is SAS SAN, examine the device paths under /sys: a direct-attached (local) SAS disk appears as end_device-<SCSI Host Nr>:<Port ID>, while a disk behind a SAS expander (SAS SAN) appears as end_device-<SCSI Host Nr>:<SCSI Id>:<Port Id>.

Local SAS:

/sys/devices/pci0000:00/0000:00:03.0/0000:01:00.0/host0/port-0:0/end_device-0:0
/sys/class/sas_end_device/end_device-0:0

SAS SAN:

/sys/devices/pci0000:00/0000:00:02.0/0000:01:00.0/0000:02:01.0/0000:04:00.0/host0/port-0:0/expander-0:0/port-0:0:1/end_device-0:0:1
/sys/class/sas_end_device/end_device-0:0:1
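The distinction above can be checked mechanically by counting the colon-separated fields in the end_device names under sysfs. This is an illustrative sketch, not an Oracle-provided tool; the helper name is hypothetical:

```shell
# Hypothetical helper: classify SAS end devices by their sysfs names.
# Two fields (end_device-0:0) means direct-attached local SAS; three
# fields (end_device-0:0:1) means the disk sits behind an expander (SAS SAN).
classify_sas() {
    for dev in "$1"/end_device-*; do
        [ -e "$dev" ] || continue
        name=${dev##*/}                              # e.g. end_device-0:0:1
        fields=$(echo "${name#end_device-}" | awk -F: '{print NF}')
        if [ "$fields" -ge 3 ]; then
            echo "$name: SAS SAN (supported)"
        else
            echo "$name: local SAS (not supported)"
        fi
    done
}

# Usage on an Oracle VM Server:
#   classify_sas /sys/class/sas_end_device
```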

Bug 13409094

7.5.10. Dom0 Requires 350 MB of Memory per 100 LUNs

For every 100 LUNs on an iSCSI target, you should allocate at least 350MB of Dom0 memory. For example, to support 1,000 LUNs, you should allocate at least 4GB of memory to Dom0.
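The sizing rule above works out as 350 MB per started block of 100 LUNs. The helper and its round-up behavior are illustrative only, not an official Oracle formula:

```shell
# Hypothetical sizing helper: 350 MB of dom0 memory per 100 LUNs, rounded up.
lun_memory_mb() {
    luns=$1
    echo $(( (luns + 99) / 100 * 350 ))
}

# lun_memory_mb 1000 -> 3500 (MB), consistent with "at least 4GB" above
```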

Bug 12428154

7.5.11. SAN Server Access Remains after Access Group Reconfiguration or SAN Server Removal

An Oracle VM Server is allowed to access a SAN server when its storage initiator is added to an access group for that SAN server. However, when the Oracle VM Server's storage initiator is removed from the access group, the connections to the SAN server physical disks remain.

Similarly, when a SAN server is removed, the Oracle VM Servers continue to automatically log into the SAN server.

The only way to resolve both issues is to remove the storage from Dom0 on the affected Oracle VM Servers.

Bug 13968454

7.5.12. Errors Occur when Storage Plug-in Versions Do Not Match Oracle VM Server Version

Oracle Storage Connect plug-ins for generic as well as vendor-specific storage hardware exist in different versions that have been adapted for use with a particular release of Oracle VM Server. If storage operations in Oracle VM Manager fail consistently with your storage plug-in, verify that the correct plug-in version is installed on your Oracle VM Servers. The lists below show compatibility of Oracle Storage Connect plug-ins for Oracle VM Server version 3.0.3 and version 3.1.1.

Oracle VM Server Release 3.1.1 (GA release) compatible plug-ins:

  • osc-plugin-manager-devel-1.2.8-19.el5

  • osc-oracle-netapp-1.2.8-6.el5

  • osc-plugin-manager-1.2.8-19.el5

  • osc-oracle-generic-1.1.0-55.el5

  • osc-oracle-s7k-0.1.2-45.el5

  • osc-oracle-ocfs2-0.1.0-36.el5

Oracle VM Server Release 3.0.3 (GA release) compatible plug-ins:

  • osc-plugin-manager-devel-1.2.8-9.el5

  • osc-oracle-netapp-1.2.8-1.el5

  • osc-plugin-manager-1.2.8-9.el5

  • osc-oracle-generic-1.1.0-44.el5

  • osc-oracle-s7k-0.1.2-31.el5

  • osc-oracle-ocfs2-0.1.0-31.el5

Bug 13938125

7.5.13. Shared OCFS2 Cluster File System on Virtual Disk Not Supported

When you create a configuration with virtual machines sharing an OCFS2 cluster file system on a virtual disk, severe I/O interruptions may occur. These may affect the heartbeating function of a clustered server pool and even cause Oracle VM Servers to reboot. Therefore, a shared OCFS2 cluster file system on a virtual disk is not a supported configuration.

Workaround: Use a physical disk or LUN. Make sure that the virtual machines in your configuration have shared access to this physical disk or LUN, and create the shared OCFS2 cluster file system there.

Bug 13935496

7.5.14. Multipath SAN Boot with Single Path Causes Kernel Panic or File System to be Read-Only

On a multipath SAN boot server with only one active path from server to storage, there is a potential risk of kernel panic, or the file system becoming read-only when doing storage rescans. This occurs because multipath SAN boot is only supported with full path redundancy. A minimal full path redundancy configuration can be illustrated as:

Server HBA1 --- FC switch 1 ---- Storage controller 1 port1
                             |-- Storage controller 2 port1

Server HBA2 --- FC switch 2 ---- Storage controller 1 port2
                             |-- Storage controller 2 port2

Bug 13774291