3.2 Oracle Virtual Compute Appliance Software

3.2.1 Do Not Reconfigure Network During Compute Node Provisioning or Upgrade
3.2.2 Nodes Attempt to Synchronize Time with the Wrong NTP Server
3.2.3 Unknown Symbol Warning during InfiniBand Driver Installation
3.2.4 Do Not Add Compute Node When Management Node Services Are Restarted
3.2.5 Do Not Execute ovca-node-db from the Command Line
3.2.6 Node Manager Does Not Show Node Offline Status
3.2.7 Update Functionality Not Available in Dashboard
3.2.8 Interrupting Download of Software Update Leads to Inconsistent Image Version and Leaves Image Mounted and Stored in Temporary Location
3.2.9 Management Node Cluster Configuration Fails During Software Update
3.2.10 Oracle VM Manager Tuning Settings Are Lost During Software Update
3.2.11 Compute Nodes Lose Oracle VM iSCSI LUNs During Software Update
3.2.12 External Storage Cannot Be Discovered Over Data Center Network

This section describes software-related limitations and workarounds.

3.2.1 Do Not Reconfigure Network During Compute Node Provisioning or Upgrade

In the Oracle Virtual Compute Appliance Dashboard, the Network Setup tab becomes available when the first compute node has been provisioned successfully. However, when installing and provisioning a new system, you must wait until all nodes have completed the provisioning process before changing the network configuration. Also, when provisioning new nodes at a later time, or when upgrading the environment, do not apply a new network configuration before all operations have completed. Failure to follow these guidelines is likely to leave your environment in an indeterminate state.

Workaround: Before reconfiguring the system network settings, make sure that no provisioning or upgrade processes are running.

Bug 17475738

3.2.2 Nodes Attempt to Synchronize Time with the Wrong NTP Server

External time synchronization, based on ntpd, is left in its default configuration at the factory. As a result, NTP does not work when you first power on the Oracle Virtual Compute Appliance, and you may find messages similar to these in the system logs:

Oct  1 11:20:33 ovcamn06r1 kernel: o2dlm: Joining domain ovca ( 0 1 ) 2 nodes
Oct  1 11:20:53 ovcamn06r1 ntpd_initres[3478]: host name not found:0.rhel.pool.ntp.org
Oct  1 11:20:58 ovcamn06r1 ntpd_initres[3478]: host name not found:1.rhel.pool.ntp.org
Oct  1 11:21:03 ovcamn06r1 ntpd_initres[3478]: host name not found:2.rhel.pool.ntp.org

Workaround: Apply the appropriate network configuration for your data center environment, as described in the section Network Setup in the Oracle Virtual Compute Appliance Administrator's Guide. When the data center network configuration is applied successfully, the default values for NTP configuration are overwritten and components will synchronize their clocks with the source you entered.
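
For example, assuming the standard ntpd tooling on Oracle Linux, you can verify on a management node that the configured time source has replaced the default pool servers and that the daemon is synchronizing with it. This check is a suggestion and not part of the official procedure:

    # grep '^server' /etc/ntp.conf
    # ntpq -p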

Bug 17548941

3.2.3 Unknown Symbol Warning during InfiniBand Driver Installation

Towards the end of the management node install.log file, the following warnings appear:

> WARNING:
> /lib/modules/2.6.39-300.32.1.el6uek.x86_64/kernel/drivers/infiniband/ \
> hw/ipath/ib_ipath.ko needs unknown symbol ib_wq
> WARNING:
> /lib/modules/2.6.39-300.32.1.el6uek.x86_64/kernel/drivers/infiniband/ \
> hw/qib/ib_qib.ko needs unknown symbol ib_wq
> WARNING:
> /lib/modules/2.6.39-300.32.1.el6uek.x86_64/kernel/drivers/infiniband/ \
> ulp/srp/ib_srp.ko needs unknown symbol ib_wq
> *** FINISHED INSTALLING PACKAGES ***

These warnings have no adverse effects and may be disregarded.

Bug 16946511

3.2.4 Do Not Add Compute Node When Management Node Services Are Restarted

Compute node provisioning fails if services on the management node are shut down or restarted during the process. For example, upgrading the management nodes involves restarting services; avoid adding compute nodes to the system at such times.

Workaround: When adding a compute node to the environment, make sure that there are no active processes that may interrupt services on the management node.
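
As an illustration only, you could first confirm on the master management node that the ovca service is up and that no software update is in progress. This assumes the ovca init script supports a standard status query, which may differ between releases:

    # service ovca status
    # ps -ef | grep -i ovca-updater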

Bug 17431002

3.2.5 Do Not Execute ovca-node-db from the Command Line

As a rule, you should not run any of the ovca-* commands at the Oracle Linux prompt unless Oracle explicitly instructs you to do so.

Specifically, if you accidentally run ovca-node-db delete without any additional arguments, all entries in the node database are deleted.

Bug 18435883

3.2.6 Node Manager Does Not Show Node Offline Status

The role of the Node Manager database is to track the various states a compute node goes through during provisioning. After successful provisioning the database continues to list a node as running, even if it is shut down. For nodes that are fully operational, the server status is tracked by Oracle VM Manager. However, the Oracle Virtual Compute Appliance Dashboard displays status information from the Node Manager. This may lead to inconsistent information between the Dashboard and Oracle VM Manager.

Workaround: To verify the status of operational compute nodes, use the Oracle VM Manager user interface.

Bug 17456373

3.2.7 Update Functionality Not Available in Dashboard

The Oracle Virtual Compute Appliance Dashboard cannot be used to perform an update of the software stack.

Workaround: Use the command line tool ovca-updater to update the software stack of your Oracle Virtual Compute Appliance. For details, refer to the section Oracle Virtual Compute Appliance Software Update in the Oracle Virtual Compute Appliance Administrator's Guide. For step-by-step instructions, refer to the section Update. You can use SSH to log in to each management node and check /etc/ovca-info for log entries indicating restarted services and new software revisions.
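
For example, the check mentioned above could look as follows. The IP addresses shown are the internal management node addresses also used in Section 3.2.9, and logging in as root is an assumption:

    # ssh root@192.168.4.3 cat /etc/ovca-info
    # ssh root@192.168.4.4 cat /etc/ovca-info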

Bugs 17476010, 17475976 and 17475845

3.2.8 Interrupting Download of Software Update Leads to Inconsistent Image Version and Leaves Image Mounted and Stored in Temporary Location

The first step of the software update process is to download an image file, which is unpacked in a particular location on the ZFS storage appliance. When the download is interrupted, the file system is not cleaned up or rolled back to a previous state. As a result, contents from different versions of the software image may end up in the source location from where the installation files are loaded. In addition, the downloaded *.iso file remains stored in /tmp and is not unmounted. If downloads are frequently started and stopped, this could cause the system to run out of free loop devices to mount the *.iso files, or even to run out of free space.

Workaround: The files left behind by previous downloads do not prevent you from running the update procedure again and restarting the download. Download a new software update image. When the download completes successfully, you can install the new version of the software, as described in the section Update in the Oracle Virtual Compute Appliance Administrator's Guide.
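
If you suspect that repeated interrupted downloads have used up loop devices or space in /tmp, you can inspect the leftovers with standard Oracle Linux commands such as the following. This is a suggested check only, not an official cleanup procedure:

    # ls -lh /tmp/*.iso
    # losetup -a
    # df -h /tmp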

Bug 18352512

3.2.9 Management Node Cluster Configuration Fails During Software Update

After the Oracle Virtual Compute Appliance software update is initiated from the master management node, the secondary management node may fail to complete the configuration of the cluster service. As a consequence, the master management node goes down, but the ovca service is not started on the secondary management node, which prevents the upgrade of the master management node.

To verify this, run the command service o2cb status from the Oracle Linux prompt on the secondary management node, and confirm that the service is down. The software update process has stopped at this point and will not resume or recover automatically.
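
For reference, you can check both the cluster service and the ovca service from the Oracle Linux prompt on the secondary management node. This assumes the corresponding init scripts support a standard status query:

    # service o2cb status
    # service ovca status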

Workaround: When this condition occurs, the master management node is down, so you must power it on first. When it is back online, proceed as follows:

  1. Revert the management node entries in the node database to their pre-update state. Run the following commands from the Oracle Linux prompt on the master management node:

    ovca-node-db update ip=192.168.4.3 to state=RUNNING
    ovca-node-db update ip=192.168.4.103 to state=RUNNING
    ovca-node-db update ip=192.168.4.4 to state=RUNNING
    ovca-node-db update ip=192.168.4.104 to state=RUNNING
  2. Restart the software update process, as described in the section Update in the Oracle Virtual Compute Appliance Administrator's Guide.

Bug 18967069

3.2.10 Oracle VM Manager Tuning Settings Are Lost During Software Update

During the Oracle Virtual Compute Appliance software update from Release 1.0.2 to Release 1.1.x, the specific tuning settings for Oracle VM Manager may not be applied correctly, and default settings may be used instead.

Workaround: Verify the Oracle VM Manager tuning settings and re-apply them if necessary. Follow the instructions in the section Verifying and Re-applying Oracle VM Manager Tuning after Software Update in the Oracle Virtual Compute Appliance Administrator's Guide.

Bug 18477228

3.2.11 Compute Nodes Lose Oracle VM iSCSI LUNs During Software Update

Several iSCSI LUNs, including the essential server pool file system, are mapped on each compute node. When you update the appliance software, one or more of these LUNs may be missing on certain compute nodes. In addition, there may be problems with the configuration of the clustered server pool, preventing the existing compute nodes from joining the pool and resuming correct operation after the software update.

Workaround: To avoid these software update issues, upgrade all previously provisioned compute nodes by following the procedure described in the section Upgrading Existing Compute Node Configuration to Release 1.1.1 in the Oracle Virtual Compute Appliance Administrator's Guide.

Bugs 17922555, 18459090, 18433922 and 18397780

3.2.12 External Storage Cannot Be Discovered Over Data Center Network

The default compute node configuration does not allow connectivity to additional storage resources in the data center network. Compute nodes are connected to the data center subnet to enable public connectivity for the virtual machines they host, but the compute nodes' physical network interfaces have no IP address in that subnet. Consequently, SAN or file server discovery will fail.
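
As an illustration, you can confirm from the Oracle Linux prompt on a compute node that its physical interfaces carry no address in the data center subnet. The interface names and addressing in your environment will differ:

    # ip addr show | grep 'inet '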

Bug 17508885