This section describes software-related limitations and workarounds.
In the Oracle Virtual Compute Appliance Dashboard, the Network Setup tab becomes available when the first compute node has been provisioned successfully. However, when installing and provisioning a new system, you must wait until all nodes have completed the provisioning process before changing the network configuration. Also, when provisioning new nodes at a later time, or when upgrading the environment, do not apply a new network configuration before all operations have completed. Failure to follow these guidelines is likely to leave your environment in an indeterminate state.
Workaround: Before reconfiguring the system network settings, make sure that no provisioning or upgrade processes are running.
Bug 17475738
External time synchronization, based on ntpd, is left in its default configuration at the factory. As a result, NTP does not work when you first power on the Oracle Virtual Compute Appliance, and you may find messages in the system logs similar to these:
Oct 1 11:20:33 ovcamn06r1 kernel: o2dlm: Joining domain ovca ( 0 1 ) 2 nodes
Oct 1 11:20:53 ovcamn06r1 ntpd_initres[3478]: host name not found:0.rhel.pool.ntp.org
Oct 1 11:20:58 ovcamn06r1 ntpd_initres[3478]: host name not found:1.rhel.pool.ntp.org
Oct 1 11:21:03 ovcamn06r1 ntpd_initres[3478]: host name not found:2.rhel.pool.ntp.org
Workaround: Apply the appropriate network configuration for your data center environment, as described in the section Network Setup in the Oracle Virtual Compute Appliance X3-2 Administrator's Guide. When the data center network configuration is applied successfully, the default values for NTP configuration are overwritten and components will synchronize their clocks with the source you entered.
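Once the data center network configuration is in place, you can confirm that the clocks are actually synchronizing. The sketch below checks `ntpq -pn` output for a selected peer (ntpq marks the peer it is currently synchronized to with `*` in the first column). It is shown here against captured sample output; on a live management node, replace the here-doc with `ntpq -pn > ntpq.out`.

```shell
# Sample ntpq -pn output; on a real node, capture it instead:
#   ntpq -pn > ntpq.out
cat > ntpq.out <<'EOF'
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*10.0.0.10       .GPS.            1 u   34   64  377    0.421    0.012   0.033
EOF

# A line starting with '*' means ntpd has selected that peer as its
# synchronization source.
awk '/^\*/ { found = 1 }
     END { print (found ? "synchronized" : "not synchronized") }' ntpq.out
```

If the output reports "not synchronized", verify that the NTP servers you entered are reachable from the appliance management network.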
Bug 17548941
Towards the end of the management node install.log file, the following warnings appear:
> WARNING:
> /lib/modules/2.6.39-300.32.1.el6uek.x86_64/kernel/drivers/infiniband/ \
> hw/ipath/ib_ipath.ko needs unknown symbol ib_wq
> WARNING:
> /lib/modules/2.6.39-300.32.1.el6uek.x86_64/kernel/drivers/infiniband/ \
> hw/qib/ib_qib.ko needs unknown symbol ib_wq
> WARNING:
> /lib/modules/2.6.39-300.32.1.el6uek.x86_64/kernel/drivers/infiniband/ \
> ulp/srp/ib_srp.ko needs unknown symbol ib_wq
> *** FINISHED INSTALLING PACKAGES ***
These warnings have no adverse effects and may be disregarded.
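When reviewing install.log, it can help to filter out these known benign ib_wq warnings so that any other unresolved-symbol warnings stand out. A minimal sketch, assuming the log is reachable at the path in INSTALL_LOG (the default path here is an assumption; adjust it to your management node's actual install.log location):

```shell
# List unresolved-symbol warnings other than the benign ib_wq ones.
# INSTALL_LOG is a hypothetical variable; point it at the real log file.
grep 'needs unknown symbol' "${INSTALL_LOG:-install.log}" | grep -v 'ib_wq' \
  || echo 'no unexpected unresolved-symbol warnings'
```

Any line this prints (other than the final message) names a module warning that is not covered by this known issue and may merit investigation.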
Bug 16946511
Compute node provisioning fails if services on the management node are shut down or restarted during the process. For example, upgrading management nodes involves restarting services. Adding compute nodes to the system must be avoided at such times.
Workaround: When adding a compute node to the environment, make sure that there are no active processes that may interrupt services on the management node.
Bug 17431002
The role of the Node Manager database is to track the various states a compute node goes through during provisioning. After successful provisioning the database continues to list a node as running, even if it is shut down. For nodes that are fully operational, the server status is tracked by Oracle VM Manager. However, the Oracle Virtual Compute Appliance Dashboard displays status information from the Node Manager. This may lead to inconsistent information between the Dashboard and Oracle VM Manager.
Workaround: To verify the status of operational compute nodes, use the Oracle VM Manager user interface.
Bug 17456373
When you use Internet Explorer 8 or 9 to access the Oracle Virtual Compute Appliance Dashboard, the Network View tab is not displayed correctly: only part of the image in the tab is shown.
Workaround: Use another web browser instead. This issue does not occur in Firefox or Chrome.
Bug 17607389
The Oracle Virtual Compute Appliance Dashboard cannot be used to perform an update of the software stack.
Workaround: Use the command-line tool ovca-updater to update the software stack of your Oracle Virtual Compute Appliance. For details, refer to the section Oracle Virtual Compute Appliance X3-2 Software Update in the Oracle Virtual Compute Appliance X3-2 Administrator's Guide. For step-by-step instructions, refer to the section Update.
You can use SSH to log in to each management node and check /etc/ovca-info for log entries indicating restarted services and new software revisions.
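As a rough sketch of that check, the command below lists update-related entries in the node's ovca-info file. The grep patterns are assumptions about what the entries contain, and the hypothetical OVCA_INFO variable lets you point the check at a copied file when you are not logged in to the node itself:

```shell
# List entries mentioning software versions or restarted services.
# OVCA_INFO is a hypothetical override; the default is the path named above.
grep -iE 'version|restart' "${OVCA_INFO:-/etc/ovca-info}" \
  || echo 'no matching entries'
```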
Bug 17476010, 17475976 and 17475845
If during provisioning a compute node becomes stuck in an intermittent state or goes into error status, the solution is to reprovision the compute node. The Hardware View tab of the Oracle Virtual Compute Appliance Dashboard has a Reprovision button specifically for this purpose. However, this functionality may become unavailable depending on the provisioning stage that failed. If the Reprovision button does not work, the compute node has likely become stuck after joining the Oracle VM server pool.
Workaround: Remove the failing compute node from the Oracle VM configuration first. Then use the Reprovision button to restart the provisioning process for the compute node in question. In some cases it may be necessary to power on the compute node manually. For detailed instructions, refer to the section A Compute Node Fails to Complete Provisioning in the Oracle Virtual Compute Appliance X3-2 Administrator's Guide.
Bug 17430135, 17192103 and 17389234
When too many old backups are stored on the Sun ZFS Storage Appliance 7320, the cron-based backup system fails. Typically, the sosreport output contains entries like this: KeyError: 'pop from an empty set'. Currently, there is no mechanism in place to clean up stale backup data.
Workaround: Clean up the old backups manually.
Manually Removing Backups from the Sun ZFS Storage Appliance 7320
Using SSH and an account with superuser privileges, log in to the Sun ZFS Storage Appliance 7320 on the appliance management network.
# ssh root@192.168.4.1
root@192.168.4.1's password:
ovcasn01:>
Change directory to the location of the backups.
ovcasn01:> maintenance system configs
ovcasn01:maintenance system configs>
Display a list of existing backups, which are called saved configurations.
The first column contains the UUID of each saved configuration.
ovcasn01:maintenance system configs> ls
Saved Configurations:

CONFIG                                 DATE                 SYSTEM    VERSION
0263e115-10cc-e7a9-f6cd-b24bb7265260   2013-12-1 17:03:59   ovcasn01  2011.04.24.5.0,1-1.33
07bc03c0-3da8-4039-fc8b-a68733a40232   2013-12-16 05:04:06  ovcasn01  2011.04.24.5.0,1-1.33
144881c9-5868-e790-a4e3-da9cc8d0e670   2013-12-9 17:04      ovcasn01  2011.04.24.5.0,1-1.33
2054ae5f-e316-eeb4-aa23-c518a0588af1   2013-12-27 17:04:21  ovcasn01  2011.04.24.5.0,1-1.33
[...]
212cfb03-dd25-cc44-d4de-820b80438212   2014-1-7 17:04:20    ovcasn01  2011.04.24.5.0,1-1.33
ovcasn01:maintenance system configs>
Remove any obsolete saved configuration by its UUID.
Confirm deletion by entering "Y" when prompted.
ovcasn01:maintenance system configs> destroy 0263e115-10cc-e7a9-f6cd-b24bb7265260
Are you sure you want to delete the configuration "Backup on 2013_12_01-09.00.01"?
Are you sure? (Y/N)
ovcasn01:maintenance system configs>
Repeat the destroy command for each saved configuration to be removed.
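When many saved configurations have to be removed, generating the destroy commands from the listing can save repetitive typing. The sketch below works on a captured copy of the `ls` output (shown here as a here-doc with sample rows; in practice, capture the real listing to backups.txt). The `confirm` prefix is an assumption based on ZFS Storage Appliance CLI scripting, where it suppresses the interactive Y/N prompt; verify its behavior on your firmware before piping the generated commands to the appliance, for example with `sh this_script.sh | ssh root@192.168.4.1`.

```shell
# Sample rows from the saved-configurations listing; in practice,
# capture the real output of `ls` to backups.txt instead.
cat > backups.txt <<'EOF'
0263e115-10cc-e7a9-f6cd-b24bb7265260 2013-12-1 17:03:59 ovcasn01 2011.04.24.5.0,1-1.33
07bc03c0-3da8-4039-fc8b-a68733a40232 2013-12-16 05:04:06 ovcasn01 2011.04.24.5.0,1-1.33
EOF

# Emit one destroy command per saved configuration (UUID is column 1).
# 'confirm' is assumed to bypass the Y/N prompt in CLI scripting.
awk '{ printf "confirm maintenance system configs destroy %s\n", $1 }' backups.txt
```

Review the generated commands before sending them to the appliance; the destroy operation cannot be undone.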
Bug 17895011
The default compute node configuration does not allow connectivity to additional storage resources in the data center network. Compute nodes are connected to the data center subnet to enable public connectivity for the virtual machines they host, but the compute nodes' physical network interfaces have no IP address in that subnet. Consequently, SAN or file server discovery will fail.
Bug 17508885