6.2 Oracle Private Cloud Appliance Software

6.2.1 Do Not Install Additional Software on Appliance Components
6.2.2 Node Manager Does Not Show Node Offline Status
6.2.3 Compute Node State Changes Despite Active Provisioning Lock
6.2.4 Compute Nodes Are Available in Oracle VM Server Pool Before Provisioning Completes
6.2.5 Virtual Machines Remain in Running Status when Host Compute Node Is Reprovisioned
6.2.6 Ethernet-Based System Management Nodes Have Non-Functional bond0 Network Interface
6.2.7 Network Performance Is Impacted by VxLAN Encapsulation
6.2.8 Altering Custom Network VLAN Tag Is Not Supported
6.2.9 Configuring Uplinks with Breakout Ports Results in Port Group Named 'None'
6.2.10 DPM Server Pool Policy Interrupts Synchronization of Tenant Group Settings
6.2.11 Host Network Parameter Validation Is Too Permissive
6.2.12 Virtual Appliances Cannot Be Imported Over a Host Network
6.2.13 Customizations for ZFS Storage Appliance in multipath.conf Are Not Supported
6.2.14 Customer Created LUNs Are Mapped to the Wrong Initiator Group
6.2.15 Storage Head Failover Disrupts Running Virtual Machines
6.2.16 Changing Multiple Component Passwords Causes Authentication Failure in Oracle VM Manager
6.2.17 ILOM Password of Expansion Compute Nodes Is Not Synchronized During Provisioning
6.2.18 SSH Host Key Mismatch After Management Node Failover
6.2.19 External Storage Cannot Be Discovered Over Data Center Network
6.2.20 Mozilla Firefox Cannot Establish Secure Connection with User Interface
6.2.21 Virtual Machine with High Availability Takes Five Minutes to Restart when Failover Occurs
6.2.22 CLI Command update appliance Is Deprecated
6.2.23 Certain CLI Commands Fail in Single-command Mode
6.2.24 Upgrader Checks Logged in Different Order
6.2.25 Virtual Machine Loses IP Address Due to DHCP Timeout During High Network Load
6.2.26 Adding the Virtual Machine Role to the Storage Network Causes Cluster to Lose Heartbeat Networking
6.2.27 Adding Virtual Machine Role to the Management Network Causes Oracle VM Manager to Lose Contact with the Compute Nodes

This section describes software-related limitations and workarounds.

6.2.1 Do Not Install Additional Software on Appliance Components

Oracle Private Cloud Appliance is delivered as an appliance: a complete and controlled system composed of selected hardware and software components. If you install additional software packages on the pre-configured appliance components, be it a compute node, management node or storage component, you introduce new variables that potentially disrupt the operation of the appliance as a whole. Unless otherwise instructed, Oracle advises against the installation or upgrade of additional packages, either from a third party or from Oracle's own software channels like the Oracle Linux YUM repositories.

Workaround: Do not install additional software on any internal Oracle Private Cloud Appliance system components. If your internal processes require certain additional tools, contact your Oracle representative to discuss these requirements.

6.2.2 Node Manager Does Not Show Node Offline Status

The role of the Node Manager database is to track the various states a compute node goes through during provisioning. After successful provisioning the database continues to list a node as running, even if it is shut down. For nodes that are fully operational, the server status is tracked by Oracle VM Manager. However, the Oracle Private Cloud Appliance Dashboard displays status information from the Node Manager. This may lead to inconsistent information between the Dashboard and Oracle VM Manager, but it is not considered a bug.

Workaround: To verify the status of operational compute nodes, use the Oracle VM Manager user interface.

Bug 17456373

6.2.3 Compute Node State Changes Despite Active Provisioning Lock

The purpose of a lock of the type provisioning or all_provisioning is to prevent all compute nodes from starting or continuing a provisioning process. However, when you attempt to reprovision a running compute node from the Oracle Private Cloud Appliance CLI while an active lock is in place, the compute node state changes to "reprovision_only" and it is marked as "DEAD". Provisioning of the compute node continues as normal when the provisioning lock is deactivated.
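
For reference, the sketch below assumes the lock was created earlier with the CLI command create lock provisioning; removing the lock allows provisioning of the compute node to continue:

PCA> delete lock provisioning
Status: Success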

Bug 22151616

6.2.4 Compute Nodes Are Available in Oracle VM Server Pool Before Provisioning Completes

Compute node provisioning can take several hours to complete. The compute nodes are added to the Oracle VM server pool early in the process, but they are not placed in maintenance mode. In theory the discovered servers are available for use in Oracle VM Manager, but you must not attempt to alter their configuration in any way before the Oracle Private Cloud Appliance Dashboard indicates that provisioning has completed.

Workaround: Wait for compute node provisioning to finish. Do not modify the compute nodes or server pool in any way in Oracle VM Manager.

Bug 22159111

6.2.5 Virtual Machines Remain in Running Status when Host Compute Node Is Reprovisioned

Using the Oracle Private Cloud Appliance CLI it is possible to force the reprovisioning of a compute node even if it is hosting running virtual machines. The compute node is not placed in maintenance mode. Consequently, the active virtual machines are not shut down or migrated to another compute node. Instead these VMs remain in running status and Oracle VM Manager reports their host compute node as "N/A".

Caution

Reprovisioning a compute node that hosts virtual machines is considered bad practice. Good practice is to migrate all virtual machines away from the compute node before starting a reprovisioning operation or software update.

Workaround: In this particular condition the VMs can no longer be migrated. They must be killed and restarted. After a successful restart they return to normal operation on a different host compute node, in accordance with the start policy defined for the server pool.

Bug 22018046

6.2.6 Ethernet-Based System Management Nodes Have Non-Functional bond0 Network Interface

When the driver for network interface bonding is loaded, the system automatically generates a default bond0 interface. However, this interface is not activated or used in the management nodes of an Oracle Private Cloud Appliance with the Ethernet-based network architecture.

Workaround: The bond0 interface is not configured in any usable way and can be ignored on Ethernet-based systems. On InfiniBand-based systems, the bond0 interface is functional and configured.

Bug 29559810

6.2.7 Network Performance Is Impacted by VxLAN Encapsulation

The design of the all-Ethernet network fabric in Oracle Private Cloud Appliance relies heavily on VxLAN encapsulation and decapsulation. This extra protocol layer requires additional CPU cycles and consequently reduces network performance compared to regular tagged or untagged traffic. In particular the connectivity to and from VMs can be affected. To compensate for the CPU load of VxLAN processing, the MTU (Maximum Transmission Unit) on VM networks can be increased to 9000 bytes, which is the setting across the standard appliance networks. However, the network paths should be analyzed carefully to make sure that the larger MTU setting is supported between the end points: if an intermediate network device only supports an MTU of 1500 bytes, then the fragmentation of the 9000 byte packets will result in a bigger performance penalty.

Workaround: If the required network performance cannot be obtained with the default MTU of 1500 bytes for regular VM traffic, consider increasing the MTU to 9000 bytes, both on the VM network and inside the VM itself.
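
As an illustration only, the following sketch shows one way to apply a 9000 byte MTU inside a Linux guest; the interface name eth0 and the use of the ip utility are assumptions that depend on the guest operating system, and the corresponding MTU change must also be applied to the VM network in Oracle VM Manager.

# ip link set dev eth0 mtu 9000
# ip link show eth0 | grep mtu

In an Oracle Linux guest, the setting can typically be made persistent by adding MTU=9000 to the interface configuration file, for example /etc/sysconfig/network-scripts/ifcfg-eth0.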

Bug 29664090

6.2.8 Altering Custom Network VLAN Tag Is Not Supported

When you create a custom network, it is technically possible – though not supported – to alter the VLAN tag in Oracle VM Manager. However, when you attempt to add a compute node, the system creates the network interface on the server but fails to enable the modified VLAN configuration. At this point the custom network is stuck in a failed state: neither the network nor the interfaces can be deleted, and the VLAN configuration can no longer be changed back to the original tag.

Workaround: Do not modify appliance-level networking in Oracle VM Manager. There are no documented workarounds and any recovery operation is likely to require significant downtime of the Oracle Private Cloud Appliance environment.

Bug 23250544

6.2.9 Configuring Uplinks with Breakout Ports Results in Port Group Named 'None'

When you split uplink ports for custom network configuration by means of a breakout cable, and subsequently start configuring the port pairs through the Oracle Private Cloud Appliance CLI, all four breakout ports are stored in the configuration database at the same time. This means that when you add the first two of four breakout ports to a port group, the remaining two breakout ports on the same cable are automatically added to another port group named "None", which remains disabled. When you add the second pair of breakout ports to a port group, "None" is replaced with the port group name of your choice, and the port group is enabled. The sequence of commands in the example shows how the configuration changes step by step:

PCA> create uplink-port-group custom_ext_1 '1:1 1:2' 10g-4x
Status: Success

PCA> list uplink-port-group
Port_Group_Name    Ports      Mode    Speed    Breakout_Mode    Enabled   State
---------------    -----      ----    -----    -------------    -------   -----
default_5_1        5:1 5:2    LAG     10g      10g-4x           True      (up)* Not all ports are up
default_5_2        5:3 5:4    LAG     10g      10g-4x           False     down
custom_ext_1       1:1 1:2    LAG     10g      10g-4x           True      up
None               1:3 1:4    LAG     10g      10g-4x           False     up
----------------
4 rows displayed
Status: Success

PCA> create uplink-port-group custom_ext_2 '1:3 1:4' 10g-4x
Status: Success

PCA> list uplink-port-group
Port_Group_Name    Ports      Mode    Speed    Breakout_Mode    Enabled   State
---------------    -----      ----    -----    -------------    -------   -----
default_5_1        5:1 5:2    LAG     10g      10g-4x           True      (up)* Not all ports are up
default_5_2        5:3 5:4    LAG     10g      10g-4x           False     down
custom_ext_1       1:1 1:2    LAG     10g      10g-4x           True      up
custom_ext_2       1:3 1:4    LAG     10g      10g-4x           True      up
----------------
4 rows displayed
Status: Success

Workaround: This behavior is by design, because all four breakout ports of a cable must be added to the network configuration at the same time. A port group named "None", consisting of the two temporarily unconfigured ports of a 4-way breakout cable, can safely be ignored.

Bug 30426198

6.2.10 DPM Server Pool Policy Interrupts Synchronization of Tenant Group Settings

Tenant groups in Oracle Private Cloud Appliance are based on Oracle VM server pools, with additional configuration for network and storage across the servers included in the tenant group. When a compute node is added to a tenant group, its network and storage configuration is synchronized with the other servers already in the tenant group. This process takes several minutes, and could therefore be interrupted if a distributed power management (DPM) policy is active for the Oracle VM server pool. The DPM policy may force the new compute node to shut down because it contains no running virtual machines, while the tenant group configuration process on the compute node is still in progress. The incomplete configuration causes operational issues at the level of the compute node or even the tenant group.

Workaround: If server pool policies are a requirement, it is suggested to turn them off temporarily when modifying tenant groups or during the installation and configuration of expansion compute nodes.

Bug 30478940

6.2.11 Host Network Parameter Validation Is Too Permissive

When you define a host network, it is possible to enter invalid or contradictory values for the Prefix, Netmask and Route_Destination parameters. For example, when you enter a prefix with "0" as the first octet, the system attempts to configure IP addresses on compute node Ethernet interfaces starting with 0. Also, when the netmask part of the route destination you enter is invalid, the network is still created, even though an exception occurs. When such a poorly configured network is in an invalid state, it cannot be reconfigured or deleted with standard commands.

Workaround: Double-check your CLI command parameters before pressing Enter. If an invalid network configuration is applied, use the --force option to delete the network.
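
For illustration, the command below shows the general form of such a forced deletion; the network name invalid_net1 is hypothetical and any confirmation prompts are omitted.

PCA> delete network invalid_net1 --force
Status: Success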

Bug 25729227

6.2.12 Virtual Appliances Cannot Be Imported Over a Host Network

A host network provides connectivity between compute nodes and hosts external to the appliance. It is implemented to connect external storage to the environment. If you attempt to import a virtual appliance, also known as assemblies in previous releases of Oracle VM and Oracle Private Cloud Appliance, from a location on the host network, it is likely to fail, because Oracle VM Manager instructs the compute nodes to use the active management node as a proxy for the import operation.

Workaround: Make sure that the virtual appliance resides in a location accessible from the active management node.

Bug 25801215

6.2.13 Customizations for ZFS Storage Appliance in multipath.conf Are Not Supported

The ZFS stanza in multipath.conf is controlled by the Oracle Private Cloud Appliance software. The internal ZFS Storage Appliance is a critical component of the appliance and the multipath configuration is tailored to the internal requirements. You should never modify the ZFS parameters in multipath.conf, because it could adversely affect the appliance performance and functionality.

Even if customizations were applied for (external) ZFS storage, they are overwritten when the Oracle Private Cloud Appliance Controller Software is updated. A backup of the file is saved prior to the update. Customizations in other stanzas of multipath.conf, for storage devices from other vendors, are preserved during upgrades.

Bug 25821423

6.2.14 Customer Created LUNs Are Mapped to the Wrong Initiator Group

When adding LUNs on the Oracle Private Cloud Appliance internal ZFS Storage Appliance, you must add them under the "OVM" target group. Only this default target group is supported; there can be no additional target groups. However, on the initiator side you should not use the default configuration, otherwise all LUNs are mapped to the "All Initiators" group and accessible to all nodes in the system. Such a configuration may cause several problems within the appliance.

Additional, custom LUNs on the internal storage must instead be mapped to one or more custom initiator groups. This ensures that the LUNs are mapped to the intended initiators, and are not remapped by the appliance software to the default "All Initiators" group.

Workaround: When creating additional, custom LUNs on the internal ZFS Storage Appliance, always use the default target group, but make sure the LUNs are mapped to one or more custom initiator groups.

Bugs 22309236 and 18155778

6.2.15 Storage Head Failover Disrupts Running Virtual Machines

When a failover occurs between the storage heads of a ZFS Storage Appliance, virtual machine operation could be disrupted by temporary loss of disk access. Depending on the guest operating system, and on the configuration of the guest and Oracle VM, a VM could hang, power off or reboot. This behavior is caused by an iSCSI configuration parameter that does not allow sufficient recovery time for the storage failover to complete.

Workaround: Increase the value of node.session.timeo.replacement_timeout in the file /etc/iscsi/iscsid.conf. For details, refer to the support note with Doc ID 2189806.1.
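
For illustration, the parameter is set in /etc/iscsi/iscsid.conf as shown below; the value 200 is only an example, and the value recommended in the support note should be used instead.

# vi /etc/iscsi/iscsid.conf
node.session.timeo.replacement_timeout = 200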

Bug 24439070

6.2.16 Changing Multiple Component Passwords Causes Authentication Failure in Oracle VM Manager

When several different passwords are set for different appliance components using the Oracle Private Cloud Appliance Dashboard, you could be locked out of Oracle VM Manager, or communication between Oracle VM Manager and other components could fail, as a result of authentication failures. The problem is caused by a partially failed password update, whereby a component has accepted the new password while another component continues to use the old password to connect.

The risk of authentication issues is considerably higher when Oracle VM Manager and its directly related components Oracle WebLogic Server and Oracle MySQL database are involved. A password change for these components requires the ovmm service to restart. If another password change occurs within a matter of a few minutes, the operation to update Oracle VM Manager accordingly could fail because the ovmm service was not active. An authentication failure will prevent the ovmm service from restarting.

Workaround: If you set different passwords for appliance components using the Oracle Private Cloud Appliance Dashboard, change them one by one with a 10 minute interval. If the ovmm service is stopped as a result of a password change, wait for it to restart before making further changes. If the ovmm service fails to restart due to authentication issues, it may be necessary to replace the file /nfs/shared_storage/wls1/servers/AdminServer/security/boot.properties with the previous version of the file (boot.properties.old).
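
A minimal sketch of that file restore, assuming root access on the active management node; the backup file name and the command used to restart the ovmm service are assumptions that may differ on your software release.

# cd /nfs/shared_storage/wls1/servers/AdminServer/security
# cp boot.properties boot.properties.failed
# cp boot.properties.old boot.properties
# service ovmm restart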

Bug 26007398

6.2.17 ILOM Password of Expansion Compute Nodes Is Not Synchronized During Provisioning

After the rack components have been configured with a custom password, the ILOM of a newly installed expansion compute node does not automatically take over the password set by the user in the Wallet. The compute node provisions correctly, and the Wallet maintains access to its ILOM even though it still uses the factory-default password. However, it is good practice to make sure that custom passwords are correctly synchronized across all components.

Workaround: Set or update the compute node ILOM password using the Oracle Private Cloud Appliance Dashboard or CLI. This sets the new password both in the Wallet and the compute node ILOM.

Bug 26143197

6.2.18 SSH Host Key Mismatch After Management Node Failover

When logging in to the active management node using SSH, you typically use the virtual IP address shared between both management nodes. However, since they are separate physical hosts, they have a different host key. If the host key is stored in the SSH client, and a failover to the secondary management node occurs, the next attempt to create an SSH connection through the virtual IP address results in a host key verification failure.

Workaround: Do not store the host key in the SSH client. If the key has been stored, remove it from the client's file system; typically inside the user directory in .ssh/known_hosts.
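
If the key has already been stored, it can typically be removed on the SSH client machine with the standard ssh-keygen utility; the address below is a placeholder for the shared virtual IP of the management nodes.

$ ssh-keygen -R <virtual_ip>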

Bug 22915408

6.2.19 External Storage Cannot Be Discovered Over Data Center Network

The default compute node configuration does not allow connectivity to additional storage resources in the data center network. Compute nodes are connected to the data center subnet to enable public connectivity for the virtual machines they host, but the compute nodes' network interfaces have no IP address in that subnet. Consequently, SAN or file server discovery will fail.

Bug 17508885

6.2.20 Mozilla Firefox Cannot Establish Secure Connection with User Interface

Both the Oracle Private Cloud Appliance Dashboard and the Oracle VM Manager user interface run on an architecture based on Oracle WebLogic Server, Oracle Application Development Framework (ADF) and Oracle JDK 6. The cryptographic protocols supported on this architecture are SSLv3 and TLSv1.0. Mozilla Firefox version 38.2.0 or later no longer supports SSLv3 connections with a self-signed certificate. As a result, an error message might appear when you try to open the user interface login page.

Workaround: Override the default Mozilla Firefox security protocol as follows:

  1. In the Mozilla Firefox address bar, type about:config to access the browser configuration.

  2. Acknowledge the warning about changing advanced settings by clicking I'll be careful, I promise!.

  3. In the list of advanced settings, use the Search bar to filter the entries and look for the settings to be modified.

  4. Double-click the following entries and then enter the new value to change the configuration preferences:

    • security.tls.version.fallback-limit: 1

    • security.ssl3.dhe_rsa_aes_128_sha: false

    • security.ssl3.dhe_rsa_aes_256_sha: false

  5. If necessary, also modify the configuration preference security.tls.insecure_fallback_hosts and enter the affected hosts as a comma-separated list, either as domain names or as IP addresses.

  6. Close the Mozilla Firefox advanced configuration tab. The pages affected by the secure connection failure should now load normally.

Bugs 21622475 and 21803485

6.2.21 Virtual Machine with High Availability Takes Five Minutes to Restart when Failover Occurs

The compute nodes in an Oracle Private Cloud Appliance are all placed in a single clustered server pool, which is created as part of the provisioning process. One of the configuration parameters of the server pool is the cluster time-out: the time a server is allowed to be unavailable before failover events are triggered. To avoid false positives, and thus unwanted failovers, the Oracle Private Cloud Appliance server pool time-out is set to 300 seconds. As a consequence, a virtual machine configured with high availability (HA VM) can be unavailable for up to 5 minutes when its host compute node fails. After the cluster time-out has passed, the HA VM is automatically restarted on another compute node in the server pool.

This behavior is as designed; it is not a bug. The server pool cluster configuration causes the delay in restarting VMs after a failover has occurred.

6.2.22 CLI Command update appliance Is Deprecated

The Oracle Private Cloud Appliance command line interface contains the update appliance command, which is used in releases prior to 2.3.4 to unpack a Controller Software image and update the appliance with a new software stack. This functionality is now part of the Upgrader tool, so the CLI command is deprecated and will be removed in the next release.

Workaround: Use the Oracle Private Cloud Appliance Upgrader for all future updates and upgrades.

Bug 29913246

6.2.23 Certain CLI Commands Fail in Single-command Mode

The Oracle Private Cloud Appliance command line interface can be used in an interactive mode, using a closed shell environment, or in a single-command mode. When using the single-command mode, commands and arguments are entered at the Oracle Linux command prompt as a single line. If such a single command contains special characters, such as quotation marks, they may be stripped out and interpreted incorrectly.

Workaround: Use the CLI in interactive mode to avoid special characters being stripped out of command arguments. If you must use single-command mode, use single and double quotation marks around the arguments where required, so that only the outer quotation marks are stripped out. For example, change this command from:

# pca-admin create uplink-port-group myPortGroup '2:1 2:2' 10g-4x

to

# pca-admin create uplink-port-group myPortGroup "'2:1 2:2'" 10g-4x

Do not use two quotation marks of the same type next to each other.

Bug 30421250

6.2.24 Upgrader Checks Logged in Different Order

Due to a change in how the Oracle Private Cloud Appliance Upgrader tests are run, the output of the checks could be presented in a different order each time the tests are run.

This behavior is not a bug. There is no workaround required.

Bug 30078487

6.2.25 Virtual Machine Loses IP Address Due to DHCP Timeout During High Network Load

When an Oracle Private Cloud Appliance is configured to the maximum limits and a high load is running, a situation may occur where general DHCP/IP bandwidth limits are exceeded. In this case the DHCP client eventually reaches a timeout, and as a result the virtual machine IP address is lost, then reset to 0.0.0.0. This is normal behavior when the system is operating at full bandwidth capacity.

Workaround: When adequate bandwidth is available, recover from the situation by issuing the dhclient command from the virtual machine to request a new IP address.
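
For example, from inside the affected virtual machine; the interface name eth0 is an assumption and depends on the guest configuration:

# dhclient eth0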

Bug 30143723

6.2.26 Adding the Virtual Machine Role to the Storage Network Causes Cluster to Lose Heartbeat Networking

Attempting to add the Virtual Machine role to the storage network in Oracle VM Manager on an Oracle Private Cloud Appliance can cause the cluster to lose heartbeat networking, which impacts running virtual machines and their workloads. This operation is not supported on Oracle Private Cloud Appliance.

Workaround: Do not add the VM role to the storage-int network.

Bug 30936974

6.2.27 Adding Virtual Machine Role to the Management Network Causes Oracle VM Manager to Lose Contact with the Compute Nodes

Attempting to add the Virtual Machine role to the management network in Oracle VM Manager on an Oracle Private Cloud Appliance causes Oracle VM Manager to lose connectivity with the compute nodes. The compute nodes remain up, but the manager cannot communicate with them, which leaves the rack in a degraded state. This operation is not supported on Oracle Private Cloud Appliance.

Workaround: Do not add the VM role to the mgmt-int network.

Bug 30937049