- 6.2.1 Do Not Install Additional Software on Appliance Components
- 6.2.2 Do Not Reconfigure Network During Compute Node Provisioning or Upgrade
- 6.2.3 Nodes Attempt to Synchronize Time with the Wrong NTP Server
- 6.2.4 Unknown Symbol Warning during InfiniBand Driver Installation
- 6.2.5 Node Manager Does Not Show Node Offline Status
- 6.2.6 Compute Node State Changes Despite Active Provisioning Lock
- 6.2.7 Compute Nodes Are Available in Oracle VM Server Pool Before Provisioning Completes
- 6.2.8 Compute Node Provisioning Fails When InfiniBand Ports Are Not Configured
- 6.2.9 Provisioning Fails When Another Compute Node Is In An Unknown State
- 6.2.10 Reprovisioning or Upgrading a Compute Node Hosting Virtual Machines Leads to Errors
- 6.2.11 When Compute Node Upgrade to Oracle VM Server 3.4 Fails, Backup and Reprovisioning Are Required
- 6.2.12 Virtual Machines Remain in Running Status when Host Compute Node Is Reprovisioned
- 6.2.13 Provisioning Is Slow in Systems with Many VMs and VLANs
- 6.2.14 Static Routes for Custom Host Networks Are Not Configured on Compute Nodes
- 6.2.15 Altering Custom Network VLAN Tag Is Not Supported
- 6.2.16 Host Network Parameter Validation Is Too Permissive
- 6.2.17 Virtual Appliances Cannot Be Imported Over a Host Network
- 6.2.18 Compute Node Networking Limitations Differ from Specified Configuration Maximums
- 6.2.19 Customizations for ZFS Storage Appliance in multipath.conf Are Not Supported
- 6.2.20 Custom Network Configuration in Release 2.3.1 Not Shown in modprobe.conf
- 6.2.21 Empty Tenant Groups Keep Virtual IP Address After Upgrade
- 6.2.22 Bond Port MTU Cannot Be Changed in Oracle VM Manager
- 6.2.23 Update Functionality Not Available in Dashboard
- 6.2.24 Interrupting Download of Software Update Leads to Inconsistent Image Version and Leaves Image Mounted and Stored in Temporary Location
- 6.2.25 Do Not Update Controller Software to Release 2.3.1 from Release 2.0.5 or Earlier
- 6.2.26 Software Update Is Halted By Terminal Disconnect
- 6.2.27 Software Update Fails Due to Error in AdminServer.log
- 6.2.28 Compute Nodes Must Be Upgraded to Oracle VM Server Release 3.4.2 Using the Oracle PCA CLI
- 6.2.29 Compute Node Upgrade Attempt During Management Node Upgrade Results in Misleading Error
- 6.2.30 Software Update with Mixed Oracle VM Server Versions Does Not Fail Gracefully
- 6.2.31 Missing Physical Disk on Oracle VM Server 3.2 Compute Node After Management Node Upgrade to Oracle VM Manager 3.4
- 6.2.32 OSWatcher Must Be Disabled Before Software Update to Release 2.3.1
- 6.2.33 Unmanaged Storage Arrays Have No Name After Controller Software Update to Release 2.3.1
- 6.2.34 Software Update Hangs Because Storage Firmware Upgrade Fails
- 6.2.35 Compute Nodes Lose Oracle VM iSCSI LUNs During Software Update
- 6.2.36 Customer Created LUNs Are Mapped to the Wrong Initiator Group
- 6.2.37 Storage Head Failover Disrupts Running Virtual Machines
- 6.2.38 Oracle VM Manager Tuning Settings Are Lost During Software Update
- 6.2.39 Oracle VM Manager Fails to Restart after Restoring a Backup Due to Password Mismatch
- 6.2.40 Changing Multiple Component Passwords Causes Authentication Failure in Oracle VM Manager
- 6.2.41 Password Changes Are Not Synchronized Correctly Across All Components
- 6.2.42 Software Update Fails with Authentication Error During MySQL Upgrade
- 6.2.43 ILOM Password of Expansion Compute Nodes Is Not Synchronized During Provisioning
- 6.2.44 SSH Host Key Mismatch After Management Node Failover
- 6.2.45 Oracle VM Java Processes Consume Large Amounts of Resources
- 6.2.46 External Storage Cannot Be Discovered Over Data Center Network
- 6.2.47 LUNs Are Not Reconnected After External Storage Connection Failure or Failback
- 6.2.48 Third-Party Oracle Storage Connect Plugins Must Be Removed Before Appliance Software Update to Release 2.3.1
- 6.2.49 I/O Errors Occur During Failover on External ZFS Storage Appliance with Certain Firmwares
- 6.2.50 Fibre Channel LUNs Presented to Management Nodes Cause Kernel Panic
- 6.2.51 High Network Load with High MTU May Cause Time-Out and Kernel Panic in Compute Nodes
- 6.2.52 Oracle PCA Dashboard URL Is Not Redirected
- 6.2.53 Network View in Oracle PCA Dashboard Contains Misaligned Labels with Screen Reader Enabled
- 6.2.54 User Interface Does Not Support Internet Explorer 10 and 11
- 6.2.55 Mozilla Firefox Cannot Establish Secure Connection with User Interface
- 6.2.56 Authentication Error Prevents Oracle VM Manager Login
- 6.2.57 Error Getting VM Stats in Oracle VM Agent Logs
- 6.2.58 Virtual Machine with High Availability Takes Five Minutes to Restart when Failover Occurs
- 6.2.59 Secure Migration of PVM Guest to Compute Node with PVM Support Disabled Fails with Wrong Error Message
- 6.2.60 Live Migration of Oracle Solaris Guest Results in Reboot
- 6.2.61 Compute Node CPU Load at 100 Percent Due to Hardware Management Daemon
- 6.2.62 CLI Output Misaligned When Listing Tasks With Different UUID Length
- 6.2.63 Stopping an Update Task Causes its Records to Be Removed
- 6.2.64 The CLI Command update appliance Is Still Available When Updating From Release 2.3.x to Release 2.3.4
- 6.2.65 The CLI Command diagnose software Displays Test Failures When Compute Nodes Are Running Different Software Versions
- 6.2.66 The CLI Command diagnose software Reports Package Acceptance Test Failure
- 6.2.67 The CLI Command diagnose Reports ILOM Related Failures
- 6.2.68 The CLI Command list opus-ports Shows Information About Non-existent Switches
- 6.2.69 Setting a Proxy with Access Credentials Is Not Accepted by the CLI
- 6.2.70 Additionally Created WebLogic Users Are Removed During Controller Software Update
This section describes software-related limitations and workarounds.
Oracle PCA is delivered as an appliance: a complete and controlled system composed of selected hardware and software components. If you install additional software packages on the pre-configured appliance components, be it a compute node, management node or storage component, you introduce new variables that potentially disrupt the operation of the appliance as a whole. Unless otherwise instructed, Oracle advises against the installation of additional packages, either from a third party or from Oracle's own software channels like the Oracle Linux YUM repositories.
Workaround: Do not install additional software on any internal Oracle PCA system components. If your internal processes require certain additional tools, contact your Oracle representative to discuss these requirements.
In the Oracle PCA Dashboard, the Network Setup tab becomes available when the first compute node has been provisioned successfully. However, when installing and provisioning a new system, you must wait until all nodes have completed the provisioning process before changing the network configuration. Also, when provisioning new nodes at a later time, or when upgrading the environment, do not apply a new network configuration before all operations have completed. Failure to follow these guidelines is likely to leave your environment in an indeterminate state.
Workaround: Before reconfiguring the system network settings, make sure that no provisioning or upgrade processes are running.
Bug 17475738
External time synchronization, based on ntpd, is left in its default configuration at the factory. As a result, NTP does not work when you first power on the Oracle PCA, and you may find messages in system logs similar to these:
Oct  1 11:20:33 ovcamn06r1 kernel: o2dlm: Joining domain ovca ( 0 1 ) 2 nodes
Oct  1 11:20:53 ovcamn06r1 ntpd_initres[3478]: host name not found:0.rhel.pool.ntp.org
Oct  1 11:20:58 ovcamn06r1 ntpd_initres[3478]: host name not found:1.rhel.pool.ntp.org
Oct  1 11:21:03 ovcamn06r1 ntpd_initres[3478]: host name not found:2.rhel.pool.ntp.org
Workaround: Apply the appropriate network configuration for your data center environment, as described in the section Network Setup in the Oracle Private Cloud Appliance Administrator's Guide. When the data center network configuration is applied successfully, the default values for NTP configuration are overwritten and components will synchronize their clocks with the source you entered.
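As a quick sanity check after applying the network configuration, you can verify on a management node that ntpd is reaching its configured time source. The command below is standard ntpd tooling, not specific to Oracle PCA:
# ntpq -p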
Bug 17548941
Towards the end of the management node install.log file, the following warnings appear:
> WARNING:
> /lib/modules/2.6.39-300.32.1.el6uek.x86_64/kernel/drivers/infiniband/ \
> hw/ipath/ib_ipath.ko needs unknown symbol ib_wq
> WARNING:
> /lib/modules/2.6.39-300.32.1.el6uek.x86_64/kernel/drivers/infiniband/ \
> hw/qib/ib_qib.ko needs unknown symbol ib_wq
> WARNING:
> /lib/modules/2.6.39-300.32.1.el6uek.x86_64/kernel/drivers/infiniband/ \
> ulp/srp/ib_srp.ko needs unknown symbol ib_wq
> *** FINISHED INSTALLING PACKAGES ***
These warnings have no adverse effects and may be disregarded.
Bug 16946511
The role of the Node Manager database is to track the various states a compute node goes through during provisioning. After successful provisioning the database continues to list a node as running, even if it is shut down. For nodes that are fully operational, the server status is tracked by Oracle VM Manager. However, the Oracle PCA Dashboard displays status information from the Node Manager. This may lead to inconsistent information between the Dashboard and Oracle VM Manager, but it is not considered a bug.
Workaround: To verify the status of operational compute nodes, use the Oracle VM Manager user interface.
Bug 17456373
The purpose of a lock of the type provisioning or all_provisioning is to prevent all compute nodes from starting or continuing a provisioning process. However, when you attempt to reprovision a running compute node from the Oracle PCA CLI while such a lock is active, the compute node state changes to "reprovision_only" and it is marked as "DEAD". Provisioning of the compute node continues as normal once the provisioning lock is deactivated.
Bug 22151616
Compute node provisioning can take up to several hours to complete. The nodes are added to the Oracle VM server pool early in the process, but they are not placed in maintenance mode. In theory the discovered servers are available for use in Oracle VM Manager, but you must not attempt to alter their configuration in any way before the Oracle PCA Dashboard indicates that provisioning has completed.
Workaround: Wait for compute node provisioning to finish. Do not modify the compute nodes or server pool in any way in Oracle VM Manager.
Bug 22159111
During the provisioning of a compute node, the Fabric Interconnect management software sets up the required InfiniBand connectivity. The provisioning process waits for this configuration task to be completed, but in certain high load circumstances the Fabric Interconnect cannot populate all the compute node details quickly enough, and returns partial information when queried. As a result of this timing issue, the provisioning process is halted and the compute node is marked 'dead'.
Workaround: If compute node provisioning fails, showing state=dead, the first course of action is to reprovision the node. When the InfiniBand configuration on the Fabric Interconnects is complete, the compute node should be provisioned correctly.
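For example, assuming the failed node is ovcacn07r1, reprovisioning could be started from the Oracle PCA CLI roughly as follows (verify the exact command syntax for your release in the Administrator's Guide):
PCA> reprovision ovcacn07r1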
Bug 28679751
On rare occasions a provisioning error or another failure can cause a compute node to enter an unknown state. Its provisioning state is then set to "dirty" and the node is marked "dead", as shown in this CLI example:
PCA> list compute-node

Compute_Node  IP_Address    Provisioning_Status  ILOM_MAC           Provisioning_State
------------  ----------    -------------------  --------           ------------------
ovcacn07r1    192.168.4.7   RUNNING              00:10:e0:8e:8e:bf  running
ovcacn08r1    192.168.4.8   RUNNING              00:10:e0:40:ce:d7  running
ovcacn09r1    192.168.4.9   DEAD                 00:10:e0:3f:96:df  dirty
ovcacn10r1    192.168.4.10  RUNNING              00:10:e0:62:33:25  running
ovcacn11r1    192.168.4.11  RUNNING              00:10:e0:2e:87:db  running
ovcacn12r1    192.168.4.12  INITIALIZING         00:10:e0:da:a6:d9  initializing_stage_wait_for_lease_renewal
-----------------
6 rows displayed

Status: Success
When a compute node is in this "dirty" provisioning state, all further provisioning operations fail with a timeout. This timeout is caused by the system attempting to refresh the storage connection status for the problem node, instead of ignoring it. In the example, compute node ovcacn12r1 is being provisioned. However, it will fail and be marked "dead" when the timeout occurs. Compute node ovcacn09r1 must be deprovisioned or reprovisioned before any new provisioning operations can succeed.
Workaround: Deprovision or reprovision the compute node marked "dirty". Make sure the process completes successfully. Then initiate provisioning again for the compute node you originally intended to add or reprovision.
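As an illustration with the node names from the example above, the recovery sequence might look like this (command syntax may differ between controller software releases; check the CLI reference in the Administrator's Guide):
PCA> deprovision compute-node ovcacn09r1
PCA> reprovision ovcacn12r1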
Bug 27444018
Reprovisioning or upgrading a compute node that hosts virtual machines (VMs) is considered bad practice. Good practice is to migrate all VMs away from the compute node before starting a reprovisioning operation or software update. At the start of the reprovisioning, the removal of the compute node from its server pool could fail partially, due to the presence of configured VMs that are either running or powered off. When the compute node returns to normal operation after reprovisioning, it could report failures related to server pool configuration and storage layer operations. As a result, both the compute node and its remaining VMs could end up in an error state. There is no straightforward recovery procedure.
Workaround: Avoid upgrade and reprovisioning issues due to existing VM configurations by migrating all VMs away from their host first.
Bug 23563071
As part of the pre-processing phase of the compute node upgrade, certain packages used by previous versions of the software are removed. These include InfiniBand modules that are no longer required after the upgrade. However, if the Oracle VM Server upgrade to version 3.4 fails, any subsequent upgrade attempt also fails, because InfiniBand networking is disabled. The compute node can only be returned to normal operation through reprovisioning.
In addition, the command line option to save the compute node's local repository during reprovisioning is not functional in Oracle PCA Release 2.3.1. Backing up the local repository must be done separately.
Workaround: Manually create a backup copy of the virtual machines and other data in the local repository. Then reprovision the compute node for a clean installation of Oracle VM Server 3.4. When provisioning is complete, the compute node is a member of the default Rack1_ServerPool. Next, you can restore the local repository content.
Bugs 26199657 and 26222844
Using the Oracle PCA CLI it is possible to force the reprovisioning of a compute node even if it is hosting running virtual machines. The compute node is not placed in maintenance mode when running Oracle VM Server 3.4.4. Consequently, the active virtual machines are not shut down or migrated to another compute node. Instead these VMs remain in running status and Oracle VM Manager reports their host compute node as "N/A".
Reprovisioning a compute node that hosts virtual machines is considered bad practice. Good practice is to migrate all virtual machines away from the compute node before starting a reprovisioning operation or software update.
Workaround: In this particular condition the VMs can no longer be migrated. They must be killed and restarted. After a successful restart they return to normal operation on a different host compute node in accordance with the start policy defined for the server pool.
Bug 22018046
As the Oracle VM environment grows and contains more and more virtual machines and many different VLANs connecting them, the number of management operations and registered events increases rapidly. In a system with this much activity the provisioning of a compute node takes significantly longer, because the provisioning tasks run through the same management node where Oracle VM Manager is active. There is no impact on functionality, but the provisioning tasks can take several hours to complete.
There is no workaround to speed up the provisioning of a compute node when the entire system is under heavy load. It is recommended to perform compute node provisioning at a time when system activity is at its lowest.
Bugs 22159038 and 22085580
The host network is a custom network type that enables connectivity between the physical Oracle PCA hosts and external network resources. As part of the host network configuration, a static route is configured on each server participating in the network. However, the required static route can only be configured if the server in question has been upgraded to the version of Oracle VM Server included in Release 2.3.1 of the Oracle Private Cloud Appliance Controller Software. If a host is running a previous version its routing table is not updated.
Workaround: If you intend to use a host network in your environment, make sure that the compute nodes are running the correct version of Oracle VM Server, as included in the ISO image of the Oracle PCA Controller Software.
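To confirm on a compute node that the static route for the host network is present, you can inspect the routing table with standard tooling; look for an entry matching the route destination you configured for the host network:
# ip route show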
Bugs 23182978 and 23233700
When you create a custom network, it is technically possible – though not supported – to alter the VLAN tag in Oracle VM Manager. However, when you attempt to add a compute node, the system creates the network bond on the server but fails to enable the modified VLAN configuration. At this point the custom network is stuck in a failed state: neither the network nor the vNIC bond can be deleted, and the VLAN configuration can no longer be changed back to the original tag.
Workaround: Do not modify appliance-level networking in Oracle VM Manager. There are no documented workarounds and any recovery operation is likely to require significant downtime of the Oracle PCA environment.
Bug 23250544
When you define a host network, it is possible to enter invalid or contradictory values for the Prefix, Netmask and Route_Destination parameters. For example, when you enter a prefix with "0" as the first octet, the system attempts to configure IP addresses on compute node Ethernet interfaces starting with 0. Also, when the netmask part of the route destination you enter is invalid, the network is still created, even though an exception occurs. When such a poorly configured network is in an invalid state, it cannot be reconfigured or deleted with standard commands.
Workaround: Double-check your CLI command parameters before pressing Enter. If an invalid network configuration is applied, use the --force option to delete the network.
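For example, if the invalid network was created with the hypothetical name badnet0, it could be removed like this:
PCA> delete network badnet0 --force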
Bug 25729227
A host network provides connectivity between compute nodes and hosts external to the appliance. It is implemented to connect external storage to the environment. If you attempt to import a virtual appliance, also known as assemblies in previous releases of Oracle VM and Oracle PCA, from a location on the host network, it is likely to fail, because Oracle VM Manager instructs the compute nodes to use the active management node as a proxy for the import operation.
Workaround: Make sure that the virtual appliance resides in a location accessible from the active management node.
Bug 25801215
Compute nodes currently support a maximum of 36 vNICs, of which 6 are used by the default network configuration. In theory, this allows for 15 more custom network bonds of 2 vNICs each to be created. However, the maximum allowed is 3 internal custom networks and 7 external custom networks, which is equivalent to 10 network bonds. You should not configure any vNICs beyond these maximums, even if the system allows you to.
Workaround: When configuring custom networking, always adhere to the limitations set forth in Chapter 4, Configuration Maximums.
Bug 24407432
The ZFS stanza in multipath.conf is controlled by the Oracle PCA software. The internal ZFS Storage Appliance is a critical component of the appliance, and the multipath configuration is tailored to the internal requirements. You should never modify the ZFS parameters in multipath.conf, because doing so could adversely affect appliance performance and functionality.
Even if customizations were applied for (external) ZFS storage, they are overwritten when the Oracle PCA Controller Software is updated. A backup of the file is saved prior to the update. Customizations in other stanzas of multipath.conf, for storage devices from other vendors, are preserved during upgrades.
Bug 25821423
In previous releases with custom network support, the file /etc/modprobe.conf on compute nodes contained information about the custom network connections configured for that compute node. After the software update to Release 2.3.1 that file no longer exists, and none of the files in /etc/modprobe.d/ contain information about Ethernet ports or bond ports. This is the result of the compute node operating system and kernel change in Release 2.3.1.
Workaround: Information about a compute node's Ethernet connectivity can be found in the files /etc/sysconfig/network-scripts/ifcfg-eth*.
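For instance, to list the configured interfaces and their bond membership, you can query the standard DEVICE and MASTER keys in those files:
# grep -H -e DEVICE -e MASTER /etc/sysconfig/network-scripts/ifcfg-eth*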
Bug 25508659
After the Oracle PCA Controller Software has been updated to Release 2.3.1, and when Oracle VM is upgraded to version 3.4.2, configured but empty tenant groups keep the virtual IP address assigned to them. Because the concepts of server pool virtual IP and master server are deprecated in the new version of Oracle VM, the virtual IP address should be stripped from the tenant group configuration during upgrade.
Workaround: Delete the empty tenant group and create a new one using the Oracle PCA Release 2.3.1 CLI. It is created without a virtual IP address.
Bug 25919445
If you change the MTU setting of a server bond port, it appears to be applied successfully. However, in reality these are bond ports configured on the Fabric Directors, and the Fabric Director configuration cannot be modified this way. In other words, the effective MTU for the bond port remains the same, even if Oracle VM Manager suggests it was changed.
There is no workaround. Fabric Director configuration changes are not supported.
Bug 25526544
The Oracle PCA Dashboard cannot be used to perform an update of the software stack.
Workaround: Use the command line tool pca-updater to update the software stack of your Oracle PCA. For details, refer to the section Oracle Private Cloud Appliance Software Update in the Oracle Private Cloud Appliance Administrator's Guide. For step-by-step instructions, refer to the section Update. You can use SSH to log in to each management node and check /etc/pca-info for log entries indicating restarted services and new software revisions.
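A minimal sketch of that flow, using the pca-updater invocation that also appears later in this section (run from the master management node; see the Administrator's Guide for all options):
# pca-updater -m update -s
# cat /etc/pca-info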
Bugs 17476010, 17475976 and 17475845
The first step of the software update process is to download an image file, which is unpacked in a particular location on the ZFS storage appliance. When the download is interrupted, the file system is not cleaned up or rolled back to a previous state. As a result, contents from different versions of the software image may end up in the source location from where the installation files are loaded. In addition, the downloaded *.iso file remains stored in /tmp and is not unmounted. If downloads are frequently started and stopped, this could cause the system to run out of free loop devices to mount the *.iso files, or even to run out of free space.
Workaround: The files left behind by previous downloads do not prevent you from running the update procedure again and restarting the download. Download a new software update image. When it completes successfully you can install the new version of the software, as described in the section Update in the Oracle Private Cloud Appliance Administrator's Guide.
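To check whether interrupted downloads have left images or loop devices behind, standard Oracle Linux commands can be used, for example:
# losetup -a
# ls -lh /tmp/*.iso
# df -h /tmp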
Bug 18352512
You must never attempt to update to Release 2.3.1 if the currently installed controller software is Release 2.0.5 or earlier.
These earlier releases lack the mechanisms to verify whether the update path is valid. Consequently, the update process will start and make both management nodes inaccessible, potentially causing significant downtime and data loss or corruption.
Workaround: Follow the controller software update path as outlined in the section Update in the Oracle Private Cloud Appliance Administrator's Guide. If you did run the update on a non-qualified release of the software, contact Oracle Support.
Bug 25558718
To execute the controller software update to Release 2.3.1, you log on to the active management node by setting up a terminal connection and using SSH to gain access to the Oracle Linux shell and Oracle PCA CLI. After you enter the command update appliance install_image, a series of pre-upgrade tasks are launched, including the Oracle VM database export. These tasks take relatively long to complete but generate no terminal output. If the terminal session from which the command was entered is inadvertently disconnected while the pre-upgrade tasks are in progress, those tasks will complete, but the actual controller software update is not started.
Workaround: If the update does not start as a result of the terminal disconnecting, monitor the preinstall.log file. The final two lines in the example below indicate that the pre-upgrade tasks have completed.
# tail -f /nfs/shared_storage/ovmm_upgrade/<timestamp>/preinstall.log
[...]
[06/16/2017 22:40:19 33679] INFO (<stdin>:6) [PCA Upgrade] Database export complete
[06/16/2017 22:40:19 33687] INFO (<stdin>:6) [PCA Upgrade] Shutting down bond0...
Once you have confirmed that the pre-upgrade tasks have completed, restart the ovca service. Then open the Oracle PCA CLI and start the update again with the update appliance install_image command.
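A minimal sketch of that recovery sequence, assuming the ovca service is managed with the standard service command and the Oracle PCA CLI is opened with pca-admin (verify both for your release):
# service ovca restart
# pca-admin
PCA> update appliance install_image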
To avoid terminal connection issues, you can run the command from a console session on a virtual machine hosted on the Oracle PCA, or on another Oracle VM environment in the same network. A command issued from such a VM is not affected by connection issues on the administrator's machine.
Bug 26259717
The pre-upgrade validation mechanism for Release 2.3.1 is made intentionally strict in order to avoid as many failure scenarios as possible. It includes checks for certain types of existing errors in log files, and prevents the controller software update from starting if potential problems are revealed.
For example, if an AdminServer.log file contains any "ObjectNotFoundException", the update will fail, even if the error no longer impacts the running environment.
Workaround: First, resolve the ObjectNotFoundException issues. For details, refer to Eliminating ObjectNotFound Exceptions and Restoring the Oracle VM Manager Database in the Oracle Private Cloud Appliance Administrator's Guide. Then, either clear the logs or, if you need them for future reference, move the log files to a location outside the /u01/app/oracle/ovm-manager-3/ directory.
Bug 25448087
If you are installing Oracle Private Cloud Appliance Controller Software Release 2.3.1, then the management nodes are set up to run Oracle VM Manager 3.4.2. Compute nodes cannot be upgraded to Oracle VM Server Release 3.4.2 with the Oracle VM Manager web UI. You must upgrade them using the update server command within the Oracle PCA CLI.
However, if you do attempt to upgrade a compute node from within Oracle VM Manager, the resulting error message provides false information, as shown in the following example:
OVMRU_000024E Cannot upgrade server: ovcacn07r1, at version: 3.2.10-762a, using the UI/CLI. You must use the UpgradeServers.py script. Please see the Installation Manual for details.
Workaround: Do not use the suggested UpgradeServers.py script. Instead, use the Oracle PCA CLI as described in the section Upgrading Oracle VM to Release 3.4 in the Oracle Private Cloud Appliance Administrator's Guide.
Bug 25659536
When you initiate a management node upgrade with the Oracle PCA Upgrader, to install Controller Software Release 2.3.4 or newer, locks are imposed to prevent you from executing commands that interfere with the Upgrader tasks. For example, when you issue a compute node upgrade command, the CLI displays an error:
PCA> update compute-node ovcacn07r1
************************************************************
 WARNING !!! THIS IS A DESTRUCTIVE OPERATION.
************************************************************
Are you sure [y/N]:y
Status: Failure
Error Message: Error (UPDATE_004): Another compute node is updating right now.
Please wait for that process to complete before starting a new compute node update.
This error message is misleading, because it suggests another compute node is being upgraded, while the running upgrade is actually on the standby management node. The message is the result of standard CLI behavior when compute node upgrade is locked.
Workaround: Compute node upgrade is locked for the duration of the management node upgrade. The CLI error appears as expected. Please ignore the misleading message.
Bug 28715245
If you attempt to update the Oracle Private Cloud Appliance Controller Software to Release 2.3.1 on an appliance that contains compute nodes running Oracle VM Server 3.2.9, while other compute nodes run an eligible version (3.2.10 or 3.2.11), then the update fails with errors similar to those in the following example:
ERROR (pca_pre_upgrade_check:438) [OS Check] FAILED: The check failed on the following nodes:
['ovcacn07r1', 'ovcacn08r1', 'ovcacn10r1', 'ovcacn09r1'].
Exception '3.2.9' received while processing compute node ovcacn07r1
Exception '3.2.9' received while processing compute node ovcacn08r1
Exception '3.2.9' received while processing compute node ovcacn10r1
Exception '3.2.9' received while processing compute node ovcacn09r1
When the controller software update fails in this manner, the environment is left in an incorrect state, and the secondary management node is marked "dead".
Workaround: Please contact Oracle Support and submit a service request. For details, refer to the support note with Doc ID 2241716.1.
Bug 25585372
When you perform the Oracle PCA Controller Software update to Release 2.3.1, Oracle VM Manager 3.4.2 is installed on the management nodes. Typically, compute nodes are then upgraded one by one to the matching version of Oracle VM Server. However, it is possible to continue to manage a number of compute nodes running Oracle VM Server 3.2.10 or 3.2.11 in case your existing configuration requires it. For details, refer to the section Managing a Mixed Compute Node Environment in the Oracle Private Cloud Appliance Administrator's Guide.
If Oracle PCA is running controller software release 2.3.1 with compute nodes that have not yet been upgraded to the newer version of Oracle VM Server, warnings about missing physical disks in the internal ZFS storage appliance may occur in the Oracle VM Manager Storage tab and the Physical Disks perspective of the Servers and VMs tab. They look similar to this example:
OVMEVT_007005D_000 Discover storage elements on server [ovcacn10r1] did not return physical disk [SUN (2)] for storage array [OVCA_ZFSSA_Rack1].
Workaround: Upgrade all compute nodes to Oracle VM Server 3.4.2.
Bug 25870801
If you have manually configured OSWatcher to run on your compute nodes, you must disable it before updating the environment to Release 2.3.1. If left configured, the existing OSWatcher version causes conflicts with the version of OSWatcher that is installed and enabled with Oracle VM Server version 3.4.x on the compute nodes.
Workaround: Make sure that OSWatcher is no longer configured on the compute nodes before you upgrade them to Oracle VM Server version 3.4.x. For details, refer to the support note with Doc ID 2258195.1.
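To verify that no manually configured OSWatcher instance remains active on a compute node, you can check for running processes and scheduled jobs with standard tooling, for example:
# ps -ef | grep -i [o]sw
# crontab -l | grep -i osw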
Bugs 25821384 and 24964998
Certain fields in the Oracle VM Manager UI are populated differently between version 3.2.x and 3.4.x. This causes the name of the Unmanaged iSCSI and Fibre Channel Storage Arrays to appear blank after the controller software update to Release 2.3.1.
Workaround: Make the storage array names reappear by executing the steps below in Oracle VM Manager 3.4.x. For details, refer to the support note with Doc ID 2244130.1.
In Oracle VM Manager select the Storage tab.
From the SAN Servers list, select one of the unmanaged storage arrays that appear unnamed.
Click Edit to modify the storage array properties.
The correct name is already displayed. Click OK to confirm.
Repeat these steps for the other unnamed storage array.
Bug 25660037
When the Oracle PCA Controller Software update includes a firmware upgrade of the internal ZFS storage appliance, the automated upgrade process requires full exclusive control of the storage appliance ILOM. If another user is logged onto the ILOM, the firmware upgrade fails. As a result, the Controller Software update cannot continue.
Workaround: Make sure that no other user is logged onto the storage appliance ILOM and restart the software update with the pca-updater command. Proceed as follows:
Stop the current software update process on the master management node.
# pca-updater -m update -x -i <master management node IP address>
Verify that no users are logged onto the storage appliance ILOM, so that the controller software update process can take full control.
From the command line of the master management node, restart the Oracle PCA Controller Software update.
# pca-updater -m update -s
Bug 23149946
Several iSCSI LUNs, including the essential server pool file system, are mapped on each compute node. When you update the appliance software, it may occur that one or more LUNs are missing on certain compute nodes. In addition, there may be problems with the configuration of the clustered server pool, preventing the existing compute nodes from joining the pool and resuming correct operation after the software update.
Workaround: To avoid these software update issues, upgrade all previously provisioned compute nodes by following the procedure described in the section Upgrading Existing Compute Node Configuration from Release 1.0.2 in the Oracle Private Cloud Appliance Administrator's Guide.
Bugs 17922555, 18459090, 18433922 and 18397780
When adding LUNs on the Oracle PCA internal ZFS Storage Appliance you must add them under the "OVM" target group. Only this default target group is supported; there can be no additional target groups. However, on the initiator side you should not use the default configuration, otherwise all LUNs are mapped to the "All Initiators" group, and accessible for all nodes in the system. Such a configuration may cause several problems within the appliance.
Additional, custom LUNs on the internal storage must instead be mapped to one or more custom initiator groups. This ensures that the LUNs are mapped to the intended initiators, and are not remapped by the appliance software to the default "All Initiators" group.
Workaround: When creating additional, custom LUNs on the internal ZFS Storage Appliance, always use the default target group, but make sure the LUNs are mapped to one or more custom initiator groups.
Bugs 27591420, 22309236 and 18155778
When a failover occurs between the storage heads of a ZFS Storage Appliance, virtual machine operation could be disrupted by temporary loss of disk access. Depending on the guest operating system, and on the configuration of the guest and Oracle VM, a VM could hang, power off or reboot. This behavior is caused by an iSCSI configuration parameter that does not allow sufficient recovery time for the storage failover to complete.
Workaround: The replacement_timeout value controls how long the iSCSI layer should wait for a timed-out session to become available before failing any commands to it. The method to change the replacement timeout differs for undiscovered portals and already discovered portals.
To set a longer replacement timeout for new and undiscovered portals, modify node.session.timeo.replacement_timeout in the file /etc/iscsi/iscsid.conf; then restart the iscsid service, or reboot the compute node and restart the VMs it hosts.
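As an illustration, assuming a chosen timeout of 7200 seconds (an example value; confirm the appropriate setting for your environment with Oracle Support), the change for undiscovered portals could look like this:
# sed -i 's/^node.session.timeo.replacement_timeout = .*/node.session.timeo.replacement_timeout = 7200/' /etc/iscsi/iscsid.conf
# service iscsid restart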
Use iscsiadm on the compute node to increase the replacement timeout of an already discovered target portal; then log out of the target portal and log back in for the change to take effect.
The first command below applies to Oracle PCA Release 2.3.1 and later. If your system is running an earlier release of the controller software, use the second syntax instead:
# iscsiadm -m node -T <target_iqn> -p <target_IP:port> \
  --op=update --name=node.session.timeo.replacement_timeout --value=<timeout>

# iscsiadm -m node -T <target_iqn> -p <target_IP:port> \
  -o update -n node.session.timeo.replacement_timeout -v <timeout>

# iscsiadm -m node -p <target_IP:port> -T <target_iqn> --logout
# iscsiadm -m node -p <target_IP:port> -T <target_iqn> --login
Alternatively, instead of logging out and back in, you can restart the iscsi service, or reboot the compute node and restart the VMs it hosts.
Bug 24439070
During the Oracle PCA software update from Release 1.0.2 to Release 1.1.x, it may occur that the specific tuning settings for Oracle VM Manager are not applied correctly, and that default settings are used instead.
Workaround: Verify the Oracle VM Manager tuning settings and re-apply them if necessary. Follow the instructions in the section Verifying and Re-applying Oracle VM Manager Tuning after Software Update in the Oracle Private Cloud Appliance Administrator's Guide.
Bug 18477228
If you have changed the password for Oracle VM Manager or its related components Oracle WebLogic Server and Oracle MySQL database, and you need to restore the Oracle VM Manager from a backup that was made prior to the password change, the passwords will be out of sync. As a result of this password mismatch, Oracle VM Manager cannot connect to its database and cannot be started.
Workaround: Follow the instructions in the section Restoring a Backup After a Password Change in the Oracle Private Cloud Appliance Administrator's Guide.
Bug 19333583
When several different passwords are set for different appliance components using the Oracle PCA Dashboard, you could be locked out of Oracle VM Manager, or communication between Oracle VM Manager and other components could fail, as a result of authentication failures. The problem is caused by a partially failed password update, whereby a component has accepted the new password while another component continues to use the old password to connect.
The risk of authentication issues is considerably higher when Oracle VM Manager and its directly related components Oracle WebLogic Server and Oracle MySQL database are involved. A password change for these components requires the ovmm service to restart. If another password change occurs within a matter of a few minutes, the operation to update Oracle VM Manager accordingly could fail because the ovmm service was not active. An authentication failure will prevent the ovmm service from restarting.
Workaround: If you set different passwords for appliance components using the Oracle PCA Dashboard, change them one by one with an interval of at least 10 minutes. If the ovmm service is stopped as a result of a password change, wait for it to restart before making further changes. If the ovmm service fails to restart due to authentication issues, it may be necessary to replace the file /nfs/shared_storage/wls1/servers/AdminServer/security/boot.properties with the previous version of the file (boot.properties.old).
Bug 26007398
Passwords for Oracle PCA components are centrally managed through the Dashboard or the CLI. All password changes require synchronization, for example across all compute nodes or across HA pairs of infrastructure components. These synchronization tasks are sensitive to timing and, if interrupted, could cause a component to become inaccessible with the new credentials.
Workaround: When you update passwords for appliance components, using the Oracle PCA Dashboard or CLI, change them one by one with an interval of at least 10 minutes. If the ovmm service is stopped as a result of a password change, wait for it to restart before making further changes. If the ovmm service fails to restart due to authentication issues, it may be necessary to replace the file /nfs/shared_storage/wls1/servers/AdminServer/security/boot.properties with the previous version of the file (boot.properties.old).
Bug 27666884
Although not recommended, it is technically possible to make changes to the MySQL database directly or through Oracle VM. The database might have been accessed with user accounts that are not under the control of the Oracle PCA software, resulting in synchronization issues. In this situation, the software update fails when running the MySQL database upgrade.
Workaround: The Oracle PCA 2.3.2 software update ISO image contains a Bash script that allows you to detect if this problem exists on your system. Please run the script, named pca_precheck_mysql.sh, on the master management node before starting the software update process. If the script detects a problem, you are advised to execute the corrective actions documented in the support note with Doc ID 2334970.1.
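For example, if the contents of the update ISO image have been made available under a hypothetical mount point /mnt/pca-2.3.2, the check could be run like this (the script's location within the image may differ):
# /mnt/pca-2.3.2/pca_precheck_mysql.sh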
Bug 27190661
After the rack components have been configured with a custom password, the ILOM of a newly installed expansion compute node does not automatically take over the password set by the user in the Wallet. The compute node provisions correctly, and the Wallet maintains access to its ILOM even though it uses the factory-default password. However, it is good practice to make sure that custom passwords are correctly synchronized across all components.
Workaround: Set or update the compute node ILOM password using the Oracle PCA Dashboard or CLI. This sets the new password both in the Wallet and the compute node ILOM.
Bug 26143197
When logging in to the active management node using SSH, you typically use the virtual IP address shared between both management nodes. However, since they are separate physical hosts, they have a different host key. If the host key is stored in the SSH client, and a failover to the secondary management node occurs, the next attempt to create an SSH connection through the virtual IP address results in a host key verification failure.
Workaround: Do not store the host key in the SSH client. If the key has been stored, remove it from the client's file system; it is typically located in .ssh/known_hosts inside the user's home directory.
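Alternatively, you can remove only the offending entry for the virtual IP address with standard OpenSSH tooling:
# ssh-keygen -R <manager-vip>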
Bug 22915408
Particularly in environments with a large number of virtual machines, and when many virtual machine operations – such as start, stop, save, restore or migrate – occur in a short time, the Java processes of Oracle VM may consume a lot of CPU and memory capacity on the master management node. Users will notice the browser and command line interfaces becoming very slow or unresponsive. This behavior is likely caused by a memory leak in the Oracle VM CLI.
Workaround: A possible remedy is to restart the Oracle VM CLI from the Oracle Linux shell on the master management node.
# /u01/app/oracle/ovm-manager-3/ovm_cli/bin/stopCLIMain.sh
# nohup /u01/app/oracle/ovm-manager-3/ovm_cli/bin/startCLIMain.sh &
Bug 18965916
The default compute node configuration does not allow connectivity to additional storage resources in the data center network. Compute nodes are connected to the data center subnet to enable public connectivity for the virtual machines they host, but the compute nodes' physical network interfaces have no IP address in that subnet. Consequently, SAN or file server discovery will fail.
Bug 17508885
When the connection between Oracle PCA and its external storage is interrupted, for example due to a network issue or a controller failover, it may occur in rare cases that, when the connection is restored, the affected LUNs are not automatically reconnected to the compute nodes that use them. The issue is caused by the timing of the RSCN protocol, which is implemented differently depending on the manufacturer of the external storage.
There is no workaround available. Please contact Oracle for assistance. The recovery procedure is documented in the support note with Doc ID 2330092.1.
Bug 27025655
The Oracle Private Cloud Appliance Software Release 2.3.1 includes the upgrade to Oracle VM Release 3.4.2. However, it is not possible to upgrade Oracle VM Server from Release 3.2.x to Release 3.4.x if third-party Oracle Storage Connect plugins are installed on the compute nodes.
Prior to the software update, unconfigure and remove third-party plugins as follows:
Remove any dependencies on the implicated storage array in Oracle VM Manager: remove virtual machine disk mappings and storage repository configurations using the third-party storage in question.
Remove the storage array from your Oracle VM Manager configuration to disconnect any configured compute nodes from that storage array.
Remove the third-party Oracle Storage Connect plugins from the compute nodes where they are installed.
After the compute nodes have been upgraded to Oracle VM Server 3.4.2, you may not be able to continue to use the same plugin. This depends on the storage vendor's policy regarding Oracle VM upgrade. For example, NetApp no longer provides support for its vendor-specific plugin, which has been removed from the Hardware Certification List for Oracle Linux and Oracle VM. For supported NetApp storage systems in combination with Oracle VM 3.4, use the generic storage plugin.
Bug 25203207
When a ZFS Storage Appliance is used externally with an Oracle PCA, failover operations between the storage heads have been known to result in I/O errors on the virtual machines using the affected LUNs. It appears that these I/O errors occur with ZFS Storage Appliances running administration software Release OS8.7.x.
Workaround: Avoid OS8.7 firmware when using a ZFS Storage Appliance with Oracle PCA. It is recommended that you run OS8.6.15.
Bugs 26850962 and 26964098
Fibre Channel LUNs should only be presented to compute nodes. Presenting the LUNs to the management nodes can cause their kernel to panic. Use proper (soft) zoning on the FC switch to prevent the management nodes from accessing the LUNs. For details, refer to the support note with Doc ID 2148589.1.
Bug 22983265
When network throughput is very high, certain conditions, like a large number of MTU 9000 streams, have been known to cause a kernel panic in a compute node. In that case, /var/log/messages on the affected compute node contains entries like "Task Python:xxxxx blocked for more than 120 seconds". As a result, HA virtual machines may not have been migrated in time to another compute node. Usually compute nodes return to their normal operation automatically.
Workaround: If HA virtual machines have not been live-migrated off the affected compute node, log into Oracle VM Manager and restart the virtual machines manually. If an affected compute node does not return to normal operation, restart it from Oracle VM Manager.
Bugs 20981004 and 21841578
Before the product name change from Oracle Virtual Compute Appliance to Oracle Private Cloud Appliance, the Oracle PCA Dashboard could be accessed at https://<manager-vip>:7002/ovca. As of Release 2.0.5, the URL ends in /dashboard instead. However, there is no redirect from /ovca to /dashboard.
Workaround: Enter the correct URL: https://<manager-vip>:7002/dashboard.
Bug 21199163
When you activate the Screen Reader, through the Accessibility Options at login or in the Settings toolbar, the labels on the network ports of the I/O modules in the Network View tab of the Oracle PCA Dashboard are no longer correctly aligned with the background image.
There is no workaround available.
Bug 23099040
Oracle PCA Release 2.3.1 uses the Oracle Application Development Framework (ADF) version 11.1.1.2.0 for both the Dashboard and the Oracle VM Manager user interface. This version of ADF does not support Microsoft Internet Explorer 10 or 11.
Workaround: Use Internet Explorer 9 or a different web browser; for example Mozilla Firefox.
Bug 18791952
Both the Oracle PCA Dashboard and the Oracle VM Manager user interface run on an architecture based on Oracle WebLogic Server, Oracle Application Development Framework (ADF) and Oracle JDK 6. The cryptographic protocols supported on this architecture are SSLv3 and TLSv1.0. Mozilla Firefox version 38.2.0 or later no longer supports SSLv3 connections with a self-signed certificate. As a result, an error message might appear when you try to open the user interface login page.
In Oracle PCA Release 2.1.1 – with Oracle VM Release 3.2.10 – a server-side fix eliminates these secure connection failures. If secure connection failures occur with future versions of Mozilla Firefox, the workaround below might resolve them.
Workaround: Override the default Mozilla Firefox security protocol as follows:
In the Mozilla Firefox address bar, type about:config to access the browser configuration.
Acknowledge the warning about changing advanced settings by clicking I'll be careful, I promise!.
In the list of advanced settings, use the Search bar to filter the entries and look for the settings to be modified.
Double-click the following entries and then enter the new value to change the configuration preferences:
security.tls.version.fallback-limit: 1
security.ssl3.dhe_rsa_aes_128_sha: false
security.ssl3.dhe_rsa_aes_256_sha: false
If necessary, also modify the configuration preference security.tls.insecure_fallback_hosts and enter the affected hosts as a comma-separated list, either as domain names or as IP addresses.
Close the Mozilla Firefox advanced configuration tab. The pages affected by the secure connection failure should now load normally.
Bugs 21622475 and 21803485
In environments with a large number of virtual machines and frequent connections through the VM console of Oracle VM Manager, the browser UI login to Oracle VM Manager may fail with an "unexpected error during login". A restart of the ovmm service is required.
Workaround: From the Oracle Linux shell of the master management node, restart the ovmm service by entering the command service ovmm restart. You should then be able to log into Oracle VM Manager again.
Bug 19562053
During the upgrade to Oracle PCA Software Release 2.0.4, a new version of the Xen hypervisor is installed on the compute nodes. While the upgrade is in progress, entries may appear in the ovs-agent.log files on the compute nodes indicating that xen commands are not executed properly ("Error getting VM stats"). This is a benign and temporary condition resolved by the compute node reboot at the end of the upgrade process. No workaround is required.
Bug 20901778
The compute nodes in an Oracle PCA are all placed in a single clustered server pool during provisioning. A clustered server pool is created as part of the provisioning process. One of the configuration parameters is the cluster time-out: the time a server is allowed to be unavailable before failover events are triggered. To avoid false positives, and thus unwanted failovers, the Oracle PCA server pool time-out is set to 300 seconds. As a consequence, a virtual machine configured with high availability (HA VM) can be unavailable for 5 minutes when its host fails. After the cluster time-out has passed, the HA VM is automatically restarted on another compute node in the server pool.
This behavior is as designed; it is not a bug. The server pool cluster configuration causes the delay in restarting VMs after a failover has occurred.
PVM support can be disabled on compute nodes, as described in the section Disabling Paravirtualized Virtual Machines to Avoid Security Issues in the Oracle Private Cloud Appliance Administrator's Guide. When you attempt to migrate a PVM guest with secure migration enabled to a compute node that does not accept PVM guests, an incorrect error message is displayed: stderr: Error: [Errno 9] Bad file descriptor.
Workaround: The error message is incorrect and should be ignored. It is a known Oracle VM issue. The migration of the PVM guest is prevented, as expected.
The server pools or tenant groups in Oracle PCA have secure VM migration enabled by default. Without secure migration, the correct error message would be displayed: Error: PV guests disabled by xend.
Bug 27679662
In a mixed Oracle VM environment, when you attempt to live-migrate a virtual machine running Oracle Solaris as its guest operating system, there could be problems due to the Oracle VM Server version. If the VM is live-migrated to a compute node running Oracle VM Server 3.4.4 or newer, from a version prior to 3.4.4, Oracle Solaris will reboot.
Workaround: Consult the Oracle VM Manager Release Notes for 3.4.4 for additional information regarding the issue, as well as possible solutions to manage your Oracle Solaris VMs when you update to Oracle PCA Release 2.3.2. Refer to the section Live Migrating Oracle Solaris Guests to Oracle VM Server Release 3.4.4 or Later Results in Guest Reboot.
Bug 26637606
The Hardware Management daemon, which runs as the process named hwmgmtd, can sometimes consume a large amount of CPU capacity. This tends to become worse over time and eventually reach 100 percent. As a direct result, the system becomes less responsive over time.
Workaround: If you find that CPU load on a compute node is high, log in to its Oracle Linux shell and use the top command to check if hwmgmtd is consuming a lot of CPU capacity. If so, restart the daemon by entering the command /sbin/service hwmgmtd restart.
Bug 23174421
To simplify task management in the CLI, the task identifiers (UUIDs) have been shortened. After an upgrade from Release 2.0.x, the task list may still contain entries from before the upgrade, resulting in misaligned entries due to the longer UUIDs. The command output then looks similar to this example:
PCA> list task
Task_ID Status Progress Start_Time Task_Name
------- ------ -------- ---------- ---------
3327cc9b1414e2 RUNNING None 08-18-2015 11:45:54 update_download_image
9df321d37eed4bfea74221d22c26bfce SUCCESS 100 08-18-2015 09:59:08 update_run_ovmm_upgrader
8bcdcdf785ac4dfe96406284f1689802 SUCCESS 100 08-18-2015 08:46:11 update_download_image
f1e6e60351174870b853b24f8eb7b429 SUCCESS 100 08-18-2015 04:00:01 backup
e2e00c638b9e43808623c25ffd4dd42b SUCCESS 100 08-17-2015 16:00:01 backup
d34325e2ff544598bd6dcf786af8bf30 SUCCESS 100 08-17-2015 10:47:20 update_download_image
dd9d1f3b5c6f4bf187298ed9dcffe8f6 SUCCESS 100 08-17-2015 04:00:01 backup
a48f438fe02d4b9baa91912b34532601 SUCCESS 100 08-16-2015 16:00:01 backup
e03c442d27bb47d896ab6a8545482bdc SUCCESS 100 08-16-2015 04:00:01 backup
f1d2f637ad514dce9a3c389e5e7bbed5 SUCCESS 100 08-15-2015 16:00:02 backup
c4bf0d86c7a24a4fb656926954ee6cf2 SUCCESS 100 08-15-2015 04:00:01 backup
016acaf01d154095af4faa259297d942 SUCCESS 100 08-14-2015 16:00:01 backup
-----------------
12 rows displayed

Status: Success
Workaround: It is generally good practice to purge old jobs from time to time. You can remove the old tasks with the command delete task <uuid>. When all old tasks have been removed, the task list is output with correct column alignment.
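For example, using one of the long task IDs from the listing above:
PCA> delete task 9df321d37eed4bfea74221d22c26bfce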
Bug 21650772
From the CLI perspective there are two phases to a controller software update; each with its own update command:
downloading the software image (update appliance get_image)
installing the software image (update appliance install_image)
The get_image operation appears in the standard task list. When the task is stopped, it is marked as aborted, but its detailed record remains available. The install_image operation belongs to the special update-task category. Its progress can be tracked with the list update-task command. However, if this task is stopped, it disappears from the update task list and its details can no longer be retrieved.
This behavior is inherent to the current design of task management and the software update mechanism. Update task information is handled and stored differently from all other tasks. This is not strictly considered a bug.
Bug 26088336
With Controller Software Release 2.3.4, the upgrade of the management nodes must be executed with the Oracle PCA Upgrader. However, the update appliance CLI command is still available in Releases 2.3.1, 2.3.2 and 2.3.3, and cannot be disabled. While you can use the command to download the Controller Software ISO image, the code in the Release 2.3.4 image prevents you from installing the software image from the CLI.
PCA> update appliance get_image http://<path-to-iso>/ovca-2.3.4-b000.iso.zip
************************************************************
 WARNING !!! THIS IS A DESTRUCTIVE OPERATION.
************************************************************
Are you sure [y/N]:y
The update job has been submitted. Use "show task <task id>" to monitor the progress.
[...]
PCA> show task 368f46bcb993c6
----------------------------------------
Task_Name   update_download_image
Status      SUCCESS
Progress    100
Start_Time  08-31-2018 12:24:31
End_Time    08-31-2018 12:44:39
Pid         1179702
Result      None
----------------------------------------
Status: Success

PCA> update appliance install_image
************************************************************
 WARNING !!! THIS IS A DESTRUCTIVE OPERATION.
************************************************************
Are you sure [y/N]:y
[PCA Upgrade] FATAL ERROR: Upgrading PCA through the pca-admin command is no longer supported.
Please use the pca_upgrade utility.
Workaround: This behavior is not a bug. Update the Controller Software to Release 2.3.4 using the Oracle PCA Upgrader.
Bug 28583284
The CLI command diagnose software can be used to run the basic software acceptance tests. However, these tests were designed for default configurations, so server pools with compute nodes running different versions of Oracle VM Server are beyond the scope of the tool. The example displays the typical test failures, which can be safely ignored in an environment with mixed compute node versions.
PCA> diagnose software
PCA Software Acceptance Test runner utility
[...]
Test - 966 - Check Oracle VM 3.4 xen security update Acceptance [FAILED]
Test - 909 - IPoIB is configuration on compute nodes Acceptance [FAILED]
Test - 927 - OVM server model Acceptance [FAILED]
[...]
Workaround: This is not strictly a bug, but the test failures will disappear after all compute nodes have been upgraded to the latest version of Oracle VM Server supported on an Oracle PCA compute node.
Bug 27158343
The CLI command diagnose software can be used to run the basic software acceptance tests, and one of those tests verifies the presence of certain packages on the management nodes. After the controller software has been updated to Release 2.3.3, the Package Acceptance test fails because it searches for the wrong versions of two packages: iscsi-initiator-utils and ocfs2-tools.
PCA> diagnose software
PCA Software Acceptance Test runner utility
[...]
Test - 964 - Bash Code Injection Vulnerability bug Acceptance [PASSED]
Test - 785 - PCA package Acceptance [FAILED]
[...]
Workaround: This is a bug in the acceptance test code. No workaround is required. This bug will be fixed in a future release.
Bug 27690915
The CLI command diagnose can be used to run the basic ILOM health and software acceptance tests. Security patches for ILOMs are sometimes released in between Oracle Private Cloud Appliance Controller Software releases, and when these are installed, the tests executed through the diagnose ilom and diagnose software commands may return failures because these ILOM versions are not recognized. However, these ILOM-related failures can be ignored.
Workaround: This is a bug in the acceptance test code. No workaround is required. This bug will be fixed in a future release.
No associated bug ID
The CLI command list opus-ports lists ports for additional switches that are not present within your environment. These switches are labelled OPUS-3, OPUS-4, OPUS-5 and OPUS-6 and are listed as belonging to rack numbers that are not available in a current deployment. This is due to the design, which caters to the future expansion of an environment. These entries are currently displayed in the listing of port relationships between compute nodes and each Oracle Switch ES1-24, and can be safely ignored.
Bug 18904287
Different proxies can be configured with the CLI command set system-property. It was previously possible to provide access credentials as part of the proxy URL, in this format: username:password@IP:port. However, this method implies that you send sensitive information over a connection that is not secure.
Workaround: To avoid exposing user credentials in clear text, enter them at the time the proxy service is used. The fact that the CLI refuses a proxy URL format that embeds user credentials is technically correct behavior; this is not a bug.
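A sketch of configuring a proxy without embedded credentials (the property name http_proxy is an assumption; check the CLI reference for the exact property names in your release):
PCA> set system-property http_proxy http://<IP>:<port>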
Bugs 27185941 and 28207923
If additional WebLogic users were created besides those configured by default, the additional users are removed by the Oracle PCA Controller Software update. This is known to occur when Release 2.2.1 is updated to Release 2.2.2, and when Release 2.2.2 is updated to Release 2.3.1.
Workaround: At this time, the only option is to create the WebLogic user again after the software update has completed.
Bug 25919226