Oracle Private Cloud Appliance Software
This section describes software-related limitations and workarounds.
Upgrade to UEK6 Kernel Fails to Boot with 1000 FC LUNs Attached
Compute node upgrade to the UEK6 kernel fails during boot with 1000 FC LUNs attached.
The following workaround is applied to /etc/systemd/system.conf during upgrade of the CNs to Private Cloud Appliance 2.4.4.3 and to newly provisioned CNs. You do not need to perform this workaround yourself.
Workaround: Increase systemd timeouts. In /etc/systemd/system.conf, update both DefaultTimeoutStartSec and DefaultTimeoutStopSec as follows:
DefaultTimeoutStartSec=900s
DefaultTimeoutStopSec=900s
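If you want to confirm that the timeouts are in place on a compute node, a quick check such as the following is sufficient; the host name is illustrative and no further action is required:
# verify the systemd default timeouts applied during the 2.4.4.3 upgrade
[root@ovcacnXXr1 ~]# grep -E 'DefaultTimeout(Start|Stop)Sec' /etc/systemd/system.conf
DefaultTimeoutStartSec=900s
DefaultTimeoutStopSec=900s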
Bug 35780853
Fault Reports Are Not Removed Until Another Fault Is Reported
Fault Monitor reports that are older than report_dir_cleanup_days are not removed until a new fault is generated. Fault report cleanup is performed only when a fault is reported.
Bug 35665992
Do Not Install Additional Software on Appliance Components
Oracle Private Cloud Appliance is delivered as an appliance: a complete and controlled system composed of selected hardware and software components. If you install additional software packages on the pre-configured appliance components, be it a compute node, management node or storage component, you introduce new variables that potentially disrupt the operation of the appliance as a whole. Unless otherwise instructed, Oracle advises against the installation or upgrade of additional packages, either from a third party or from Oracle's own software channels like the Oracle Linux YUM repositories.
Workaround: Do not install additional software on any internal Oracle Private Cloud Appliance system components. If your internal processes require certain additional tools, contact your Oracle representative to discuss these requirements.
Upgrader UI Installation Step Is Skipped
This issue causes the pca_upgrader to skip the ui_install step. The following message appears in the log file: "The Oracle VM upgrade has already been completed on the other manager. Skipping OVCA UI installation on management-node-name."
If you previously performed an upgrade and did not perform the preventive step described in the following workaround, you might need to manually install the UI.
Workaround: Before you begin the upgrade, delete the following file to prevent skipping UI installation:
/nfs/shared_storage/pca_upgrader/pxe_upgrade/.standby_upgrade_complete
To manually install the UI if you did not delete the .standby_upgrade_complete file prior to performing the upgrade, see [PCA 2.x] Accessing Dashboard Gets HTTP 404 Error After Upgrading Management Nodes (Doc ID 2491230.1).
Bug 34906041
Deleting a Storage Network Is Allowed When the Network Is Assigned to NFS or iSCSI Storage
You are able to delete a storage_network network while that network is assigned to nfs-storage or iscsi-storage.
Workaround: Before you delete a storage_network network, use show nfs-storage and show iscsi-storage to verify that the storage_network is not assigned.
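For example, assuming a share named myNFSshare and a LUN named myLUN (both names are illustrative), inspect each storage profile and confirm that the network you intend to delete is not listed as its storage network before issuing the delete:
PCA> show nfs-storage myNFSshare
PCA> show iscsi-storage myLUN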
Bug 34767928
Node Manager Does Not Show Node Offline Status
The role of the Node Manager database is to track the various states a compute node goes through during provisioning. After successful provisioning the database continues to list a node as running, even if it is shut down. For nodes that are fully operational, the server status is tracked by Oracle VM Manager. However, the Oracle Private Cloud Appliance Dashboard displays status information from the Node Manager. This may lead to inconsistent information between the Dashboard and Oracle VM Manager, but it is not considered a bug.
Workaround: To verify the status of operational compute nodes, use the Oracle VM Manager user interface.
Bug 17456373
Compute Node State Changes Despite Active Provisioning Lock
The purpose of a lock of the type provisioning or all_provisioning is to prevent all compute nodes from starting or continuing a provisioning process. However, when you attempt to reprovision a running compute node from the Oracle Private Cloud Appliance CLI while an active lock is in place, the compute node state changes to "reprovision_only" and it is marked as "DEAD". Provisioning of the compute node continues as normal when the provisioning lock is deactivated.
Bug 22151616
Compute Nodes Are Available in Oracle VM Server Pool Before Provisioning Completes
Compute node provisioning can take up to several hours to complete. Those nodes are added to the Oracle VM server pool early in the process, but they are not placed in maintenance mode. In theory the discovered servers are available for use in Oracle VM Manager, but you must not attempt to alter their configuration in any way before the Oracle Private Cloud Appliance Dashboard indicates that provisioning has completed.
Workaround: Wait for compute node provisioning to finish. Do not modify the compute nodes or server pool in any way in Oracle VM Manager.
Bug 22159111
Virtual Machines Remain in Running Status when Host Compute Node Is Reprovisioned
Using the Oracle Private Cloud Appliance CLI it is possible to force the reprovisioning of a compute node even if it is hosting running virtual machines. The compute node is not placed in maintenance mode. Consequently, the active virtual machines are not shut down or migrated to another compute node. Instead these VMs remain in running status and Oracle VM Manager reports their host compute node as "N/A".
Caution:
Reprovisioning a compute node that hosts virtual machines is considered bad practice. Good practice is to migrate all virtual machines away from the compute node before starting a reprovisioning operation or software update.
Workaround: In this particular condition the VMs can no longer be migrated. They must be killed and restarted. After a successful restart they return to normal operation on a different host compute node in accordance with the start policy defined for the server pool.
Bug 22018046
Ethernet-Based System Management Nodes Have Non-Functional bond0 Network Interface
When the driver for network interface bonding is loaded, the system automatically generates a default bond0 interface. However, this interface is not activated or used in the management nodes of an Oracle Private Cloud Appliance with the Ethernet-based network architecture.
Workaround: The bond0 interface is not configured in any usable way and can be ignored on Ethernet-based systems. On InfiniBand-based systems, the bond0 interface is functional and configured.
Bug 29559810
Network Performance Is Impacted by VxLAN Encapsulation
The design of the all-Ethernet network fabric in Oracle Private Cloud Appliance relies heavily on VxLAN encapsulation and decapsulation. This extra protocol layer requires additional CPU cycles and consequently reduces network performance compared to regular tagged or untagged traffic. In particular the connectivity to and from VMs can be affected. To compensate for the CPU load of VxLAN processing, the MTU (Maximum Transmission Unit) on VM networks can be increased to 9000 bytes, which is the setting across the standard appliance networks. However, the network paths should be analyzed carefully to make sure that the larger MTU setting is supported between the end points: if an intermediate network device only supports an MTU of 1500 bytes, then the fragmentation of the 9000 byte packets will result in a bigger performance penalty.
Workaround: If the required network performance cannot be obtained with a default MTU of 1500 bytes for regular VM traffic, consider increasing the MTU to 9000 bytes, both on the VM network and inside the VM itself.
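As an illustration, assuming the VM network has already been set to an MTU of 9000 bytes in Oracle VM Manager, the guest interface can be raised to match and the path verified with a non-fragmenting ping. The interface name and remote host are placeholders:
# inside the guest: raise the interface MTU to match the VM network
ip link set dev eth0 mtu 9000
# verify that 9000-byte frames traverse the path without fragmentation
# (8972 = 9000 bytes minus the 20-byte IP header and 8-byte ICMP header)
ping -M do -s 8972 -c 3 <remote-host>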
Bug 29664090
Altering Custom Network VLAN Tag Is Not Supported
When you create a custom network, it is technically possible – though not supported – to alter the VLAN tag in Oracle VM Manager. However, when you attempt to add a compute node, the system creates the network interface on the server but fails to enable the modified VLAN configuration. At this point the custom network is stuck in a failed state: neither the network nor the interfaces can be deleted, and the VLAN configuration can no longer be changed back to the original tag.
Workaround: Do not modify appliance-level networking in Oracle VM Manager. There are no documented workarounds and any recovery operation is likely to require significant downtime of the Oracle Private Cloud Appliance environment.
Bug 23250544
Configuring Uplinks with Breakout Ports Results in Port Group Named 'None'
When you split uplink ports for custom network configuration by means of a breakout cable, and subsequently start configuring the port pairs through the Oracle Private Cloud Appliance CLI, all four breakout ports are stored in the configuration database at the same time. This means that when you add the first two of four breakout ports to a port group, the remaining two breakout ports on the same cable are automatically added to another port group named "None", which remains disabled. When you add the second pair of breakout ports to a port group, "None" is replaced with the port group name of your choice, and the port group is enabled. The sequence of commands in the example shows how the configuration changes step by step:
PCA> create uplink-port-group custom_ext_1 '1:1 1:2' 10g-4x
Status: Success

PCA> list uplink-port-group

Port_Group_Name     Ports     Mode    Speed    Breakout_Mode    Enabled    State
---------------     -----     ----    -----    -------------    -------    -----
default_5_1         5:1 5:2   LAG     10g      10g-4x           True       (up)* Not all ports are up
default_5_2         5:3 5:4   LAG     10g      10g-4x           False      down
custom_ext_1        1:1 1:2   LAG     10g      10g-4x           True       up
None                1:3 1:4   LAG     10g      10g-4x           False      up
----------------
4 rows displayed

Status: Success

PCA> create uplink-port-group custom_ext_2 '1:3 1:4' 10g-4x
Status: Success

PCA> list uplink-port-group

Port_Group_Name     Ports     Mode    Speed    Breakout_Mode    Enabled    State
---------------     -----     ----    -----    -------------    -------    -----
default_5_1         5:1 5:2   LAG     10g      10g-4x           True       (up)* Not all ports are up
default_5_2         5:3 5:4   LAG     10g      10g-4x           False      down
custom_ext_1        1:1 1:2   LAG     10g      10g-4x           True       up
custom_ext_2        1:3 1:4   LAG     10g      10g-4x           True       up
----------------
4 rows displayed

Status: Success
Workaround: This behavior is by design, because it is a requirement that all four breakout ports must be added to the network configuration at the same time. A port group named "None", consisting of the two otherwise (temporarily) unconfigured ports of a 4-way breakout cable, can be ignored.
Bug 30426198
DPM Server Pool Policy Interrupts Synchronization of Tenant Group Settings
Tenant groups in Oracle Private Cloud Appliance are based on Oracle VM server pools, with additional configuration for network and storage across the servers included in the tenant group. When a compute node is added to a tenant group, its network and storage configuration is synchronized with the other servers already in the tenant group. This process takes several minutes, and could therefore be interrupted if a distributed power management (DPM) policy is active for the Oracle VM server pool. The DPM policy may force the new compute node to shut down because it contains no running virtual machines, while the tenant group configuration process on the compute node is still in progress. The incomplete configuration causes operational issues at the level of the compute node or even the tenant group.
Workaround: If server pool policies are a requirement, turn them off temporarily while modifying tenant groups or during the installation and configuration of expansion compute nodes.
Bug 30478940
Host Network Parameter Validation Is Too Permissive
When you define a host network, it is possible to enter invalid or contradictory values for the Prefix, Netmask and Route_Destination parameters. For example, when you enter a prefix with "0" as the first octet, the system attempts to configure IP addresses on compute node Ethernet interfaces starting with 0. Also, when the netmask part of the route destination you enter is invalid, the network is still created, even though an exception occurs. When such a poorly configured network is in an invalid state, it cannot be reconfigured or deleted with standard commands.
Workaround: Double-check your CLI command parameters before pressing Enter. If an invalid network configuration is applied, use the --force option to delete the network.
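For example, to remove a host network that is stuck in an invalid state (the network name below is illustrative):
PCA> delete network badhostnet --force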
Bug 25729227
Virtual Appliances Cannot Be Imported Over a Host Network
A host network provides connectivity between compute nodes and hosts external to the appliance. It is implemented to connect external storage to the environment. If you attempt to import a virtual appliance, also known as assemblies in previous releases of Oracle VM and Oracle Private Cloud Appliance, from a location on the host network, it is likely to fail, because Oracle VM Manager instructs the compute nodes to use the active management node as a proxy for the import operation.
Workaround: Make sure that the virtual appliance resides in a location accessible from the active management node.
Bug 25801215
Customizations for ZFS Storage Appliance in multipath.conf Are Not Supported
The ZFS stanza in multipath.conf is controlled by the Oracle Private Cloud Appliance software. The internal ZFS Storage Appliance is a critical component of the appliance and the multipath configuration is tailored to the internal requirements. You should never modify the ZFS parameters in multipath.conf, because it could adversely affect the appliance performance and functionality.
Even if customizations were applied for (external) ZFS storage, they are overwritten when the Oracle Private Cloud Appliance Controller Software is updated. A backup of the file is saved prior to the update. Customizations in other stanzas of multipath.conf, for storage devices from other vendors, are preserved during upgrades.
Bug 25821423
Customer Created LUNs Are Mapped to the Wrong Initiator Group
When adding LUNs on the Oracle Private Cloud Appliance internal ZFS Storage Appliance you must add them under the "OVM" target group. Only this default target group is supported; there can be no additional target groups. However, on the initiator side you should not use the default configuration, otherwise all LUNs are mapped to the "All Initiators" group, and accessible for all nodes in the system. Such a configuration may cause several problems within the appliance.
Additional, custom LUNs on the internal storage must instead be mapped to one or more custom initiator groups. This ensures that the LUNs are mapped to the intended initiators, and are not remapped by the appliance software to the default "All Initiators" group.
Workaround: When creating additional, custom LUNs on the internal ZFS Storage Appliance, always use the default target group, but make sure the LUNs are mapped to one or more custom initiator groups.
Bugs 22309236 and 18155778
Storage Head Failover Disrupts Running Virtual Machines
When a failover occurs between the storage heads of a ZFS Storage Appliance, virtual machine operation could be disrupted by temporary loss of disk access. Depending on the guest operating system, and on the configuration of the guest and Oracle VM, a VM could hang, power off or reboot. This behavior is caused by an iSCSI configuration parameter that does not allow sufficient recovery time for the storage failover to complete.
Workaround: Increase the value of node.session.timeo.replacement_timeout in the file /etc/iscsi/iscsid.conf. For details, refer to the support note with Doc ID 2189806.1.
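The sketch below shows where the parameter lives and one way to raise it. The default shown is the standard open-iscsi value; the target IQN and new value are illustrative only, so take the actual recommended value from Doc ID 2189806.1:
# default shipped in /etc/iscsi/iscsid.conf (new sessions pick up edits to this file)
[root@ovcacnXXr1 ~]# grep replacement_timeout /etc/iscsi/iscsid.conf
node.session.timeo.replacement_timeout = 120
# apply a higher value to an already-discovered target node record (IQN and value are placeholders)
[root@ovcacnXXr1 ~]# iscsiadm -m node -T iqn.1986-03.com.sun:02:example -o update \
    -n node.session.timeo.replacement_timeout -v 7200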
Bug 24439070
Changing Multiple Component Passwords Causes Authentication Failure in Oracle VM Manager
When several different passwords are set for different appliance components using the Oracle Private Cloud Appliance Dashboard, you could be locked out of Oracle VM Manager, or communication between Oracle VM Manager and other components could fail, as a result of authentication failures. The problem is caused by a partially failed password update, whereby a component has accepted the new password while another component continues to use the old password to connect.
The risk of authentication issues is considerably higher when Oracle VM Manager and its directly related components Oracle WebLogic Server and Oracle MySQL database are involved. A password change for these components requires the ovmm service to restart. If another password change occurs within a few minutes, the operation to update Oracle VM Manager accordingly could fail because the ovmm service was not active. An authentication failure will prevent the ovmm service from restarting.
Workaround: If you set different passwords for appliance components using the Oracle Private Cloud Appliance Dashboard, change them one by one with a 10-minute interval. If the ovmm service is stopped as a result of a password change, wait for it to restart before making further changes. If the ovmm service fails to restart due to authentication issues, it may be necessary to replace the file /nfs/shared_storage/wls1/servers/AdminServer/security/boot.properties with the previous version of the file (boot.properties.old).
Bug 26007398
ILOM Password of Expansion Compute Nodes Is Not Synchronized During Provisioning
After the rack components have been configured with a custom password, the ILOM of a newly installed expansion compute node does not automatically take over the password set by the user in the Wallet. The compute node provisions correctly, and the Wallet maintains access to its ILOM even though it uses the factory-default password. However, it is good practice to make sure that custom passwords are correctly synchronized across all components.
Workaround: Set or update the compute node ILOM password using the Oracle Private Cloud Appliance Dashboard or CLI. This sets the new password both in the Wallet and the compute node ILOM.
Bug 26143197
SSH Host Key Mismatch After Management Node Failover
When logging in to the active management node using SSH, you typically use the virtual IP address shared between both management nodes. However, since they are separate physical hosts, they have a different host key. If the host key is stored in the SSH client, and a failover to the secondary management node occurs, the next attempt to create an SSH connection through the virtual IP address results in a host key verification failure.
Workaround: Do not store the host key in the SSH client. If the key has been stored, remove it from the client's file system; it is typically stored inside the user directory in .ssh/known_hosts.
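If the key for the virtual IP address has already been cached, the standard ssh-keygen utility can remove just that entry; the address shown is an example:
# remove the cached host key for the management virtual IP from ~/.ssh/known_hosts
ssh-keygen -R 192.0.2.10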
Bug 22915408
External Storage Cannot Be Discovered Over Data Center Network
The default compute node configuration does not allow connectivity to additional storage resources in the data center network. Compute nodes are connected to the data center subnet to enable public connectivity for the virtual machines they host, but the compute nodes' network interfaces have no IP address in that subnet. Consequently, SAN or file server discovery will fail.
Bug 17508885
Mozilla Firefox Cannot Establish Secure Connection with User Interface
Both the Oracle Private Cloud Appliance Dashboard and the Oracle VM Manager user interface run on an architecture based on Oracle WebLogic Server, Oracle Application Development Framework (ADF) and Oracle JDK 6. The cryptographic protocols supported on this architecture are SSLv3 and TLSv1.0. Mozilla Firefox version 38.2.0 or later no longer supports SSLv3 connections with a self-signed certificate. As a result, an error message might appear when you try to open the user interface login page.
Workaround: Override the default Mozilla Firefox security protocol as follows:
- In the Mozilla Firefox address bar, type about:config to access the browser configuration.
- Acknowledge the warning about changing advanced settings by clicking I'll be careful, I promise!.
- In the list of advanced settings, use the Search bar to filter the entries and look for the settings to be modified.
- Double-click the following entries and then enter the new value to change the configuration preferences:
  - security.tls.version.fallback-limit: 1
  - security.ssl3.dhe_rsa_aes_128_sha: false
  - security.ssl3.dhe_rsa_aes_256_sha: false
- If necessary, also modify the configuration preference security.tls.insecure_fallback_hosts and enter the affected hosts as a comma-separated list, either as domain names or as IP addresses.
- Close the Mozilla Firefox advanced configuration tab. The pages affected by the secure connection failure should now load normally.
Bugs 21622475 and 21803485
Virtual Machine with High Availability Takes Five Minutes to Restart when Failover Occurs
The compute nodes in an Oracle Private Cloud Appliance are all placed in a single clustered server pool during provisioning. A clustered server pool is created as part of the provisioning process. One of the configuration parameters is the cluster time-out: the time a server is allowed to be unavailable before failover events are triggered. To avoid false positives, and thus unwanted failovers, the Oracle Private Cloud Appliance server pool time-out is set to 300 seconds. As a consequence, a virtual machine configured with high availability (HA VM) can be unavailable for 5 minutes when its host fails. After the cluster time-out has passed, the HA VM is automatically restarted on another compute node in the server pool.
This behavior is as designed; it is not a bug. The server pool cluster configuration causes the delay in restarting VMs after a failover has occurred.
CLI Command update appliance Is Deprecated
The Oracle Private Cloud Appliance command line interface contains the update appliance command, which is used in releases prior to 2.3.4 to unpack a Controller Software image and update the appliance with a new software stack. This functionality is now part of the Upgrader tool, so the CLI command is deprecated and will be removed in the next release.
Workaround: Future updates and upgrades will be executed through the Oracle Private Cloud Appliance Upgrader.
Bug 29913246
Certain CLI Commands Fail in Single-command Mode
The Oracle Private Cloud Appliance command line interface can be used in an interactive mode, using a closed shell environment, or in a single-command mode. When using the single-command mode, commands and arguments are entered at the Oracle Linux command prompt as a single line. If such a single command contains special characters, such as quotation marks, they may be stripped out and interpreted incorrectly.
Workaround: Use the CLI in interactive mode to avoid special characters being stripped out of command arguments. If you must use single-command mode, use single and double quotation marks around the arguments where required, so that only the outer quotation marks are stripped out. For example, change this command from:
# pca-admin create uplink-port-group myPortGroup '2:1 2:2' 10g-4x
to
# pca-admin create uplink-port-group myPortGroup "'2:1 2:2'" 10g-4x
Do not nest quotation marks of the same type; the outer and inner quotation marks must be different.
Bug 30421250
Upgrader Checks Logged in Different Order
Due to a change in how the Oracle Private Cloud Appliance Upgrader tests are run, the checks might be reported in a different order each time the tests are run.
This behavior is not a bug. There is no workaround required.
Bug 30078487
Virtual Machine Loses IP Address Due to DHCP Timeout During High Network Load
When an Oracle Private Cloud Appliance is configured to the maximum limits and a high load is running, a situation may occur where general DHCP/IP bandwidth limits are exceeded. In this case the DHCP client eventually reaches a timeout, and as a result the virtual machine IP address is lost, then reset to 0.0.0.0. This is normal behavior when the system is operating at full bandwidth capacity.
Workaround: When adequate bandwidth is available, recover from the situation by issuing the dhclient command from the virtual machine to request a new IP address.
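A minimal example, run inside the affected virtual machine (the interface name is a placeholder):
# release the stale lease, then request a fresh one for the VM's network interface
dhclient -r eth0
dhclient eth0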
Bug 30143723
Adding the Virtual Machine Role to the Storage Network Causes Cluster to Lose Heartbeat Networking
Attempting to add the Virtual Machine role to the storage network in Oracle VM Manager on an Oracle Private Cloud Appliance can cause your cluster to lose heartbeat networking, which will impact running virtual machines and their workloads. This operation is not supported on Oracle Private Cloud Appliance.
Workaround: Do not add the VM role to the storage-int network.
Bug 30936974
Adding Virtual Machine Role to the Management Network Causes Oracle VM Manager to Lose Contact with the Compute Nodes
Attempting to add the Virtual Machine role to the management network in Oracle VM Manager on an Oracle Private Cloud Appliance causes you to lose connectivity with your compute nodes. The compute nodes are still up, but your manager cannot communicate with the compute nodes, which leaves your rack in a degraded state. This operation is not supported on Oracle Private Cloud Appliance.
Workaround: Do not add the VM role to the mgmt-int network.
Bug 30937049
Inadvertent Reboot of Stand-by Management Node During Upgrade Suspends Upgrade
When upgrading to Oracle Private Cloud Appliance Controller Software release 2.4.3, from either release 2.3.4 or a 2.4.x release, you are required to upgrade the original stand-by management node first. Part of that upgrade is a reboot of this node, which happens automatically during the upgrade process. After this reboot the original stand-by management node becomes the new active node. The next step is to upgrade the original active management node. However, if you inadvertently reboot the original stand-by node again (the node that is now active) instead, you will be unable to proceed with the upgrade, because this causes the Oracle Private Cloud Appliance services on the new active node to fail.
Workaround: Reboot the original active node. This restarts the Oracle Private Cloud Appliance services on the new active node and you can proceed with upgrading the original active node.
Bug 30968544
Loading Incompatible Spine Switch Configuration Causes Storage Network Outage
When upgrading to Oracle Private Cloud Appliance Controller Software release 2.4.3 on an Ethernet-based system do not attempt to make any manual changes to the spine switch configurations prior to the completion of the storage network upgrade. Doing so could cause the management nodes to lose access to the storage network. The management nodes might also be rebooted.
Additionally, once an upgrade to Controller Software release 2.4.3 is complete on an Ethernet-based system, do not attempt to reload a spine switch backup from a prior software release. This could cause the management nodes to lose access to the storage network. The management nodes might also be rebooted. For example, you might see these error messages:
192.0.2.1 is unreachable
[root@ovcamn05r1 data]# ping 192.0.2.1
PING 192.0.2.1 (192.0.2.1) 56(84) bytes of data.

Mount points under shared storage are gone.
[root@ovcamn05r1 ~]# ls /nfs/shared_storage/
logs  NO_STORAGE_MOUNTED

No master management node any more. o2cb service is offline. Both management nodes are slave now.
[root@ovcamn05r1 ~]# pca-check-master
o2cb service is offline.
NODE: 192.0.2.2  MASTER: False
Workaround: Manually roll back the changes made on the spine switch configurations, then reboot both management nodes.
Bug 31407007
Cloud Backup Task Hangs When a ZFSSA Takeover is Performed During Backup
When the connection to the ZFS storage appliance is interrupted, the Oracle Cloud Infrastructure process will terminate the operation and mark it failed in the task database. In some cases, such as a management node reboot, there is no mechanism to update the state.
Workaround: When the task is unable to change state, delete the task from the task database, delete the oci_backup lock file, and initiate a new backup operation. See "Cloud Backup" in Monitoring and Managing Oracle Private Cloud Appliance in the Oracle Private Cloud Appliance Administration Guide for Release 2.4.4.
Bug 31028898
Export VM to Oracle Cloud Infrastructure Job Shows as Aborted During MN Failover but it is Running in the Background
If there is an Export VM to Oracle Cloud Infrastructure job running when an active management node reboots or crashes, that job status changes to Aborted on Oracle VM Manager. In some cases, the export job will continue on the Exporter Appliance, despite the Abort message.
Workaround: Restart the Export VM to Oracle Cloud Infrastructure job. If the job is still running in the background, a pop-up message shows "An export operation is already in progress for VM". If the export job was aborted gracefully with the management node failover, then the export job is restarted.
Bug 31687516
Remove Deprecated pca-admin diagnose software Command
As of the Oracle Private Cloud Appliance Controller Software release 2.4.3, the pca-admin diagnose software command is no longer functional.
Workaround: Use the diagnostic functions now available through a separate health check tool. See "Health Monitoring" in Monitoring and Managing Oracle Private Cloud Appliance in the Oracle Private Cloud Appliance Administration Guide for Release 2.4.4 for more information.
Bug 31705580
Virtual Machine get Message Failed After 200 Seconds - Observed When Kube Clusters Are Created Concurrently
When using the Oracle Private Cloud Appliance Cloud Native Environment release 1.2 OVA to create kube clusters, if you attempt to start multiple clusters at the same time, some clusters may fail with the following message:
Error_Code      VM_ERROR_004
Error_Message   Error (VM_ERROR_004): Virtual machine autonas-cc3-master-3 get message
                failed after 200 seconds: com.oracle.linux.keepalived.master-addr,
                com.oracle.linux.k8s.error,com.oracle.linux.k8s.script-result,
                com.oracle.linux.keepalived.error.
Workaround: Stop the kube cluster that has failed, then restart that kube cluster.
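A sketch of the recovery, assuming the stop and start kube-cluster verbs of the Oracle Private Cloud Appliance CLI and an illustrative cluster name:
PCA> stop kube-cluster cluster-1
PCA> start kube-cluster cluster-1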
Bug 32799556
Kube Cluster Creation/Deletion Should Not Be in Progress When Management Node Upgrade is Initiated
When upgrading management nodes from Software Controller release 2.4.3 to Software Controller release 2.4.4, do not initiate any kube cluster start or stop operations. As part of the upgrade procedure, a management node failover occurs. This failover can cause a kube cluster to go into a degraded state, if the kube cluster was attempting to start or stop at the time of the upgrade.
Workaround: Stop the kube cluster that has failed, then restart that kube cluster. These operations will clean up and recreate any VMs that were corrupted.
Bug 32880993
o2cb Service Status Reports "Registering O2CB cluster "ocfs2": Failed" State After Compute Node Provisioned
After compute nodes are provisioned, during the upgrade from Oracle Private Cloud Appliance release 2.4.3 to 2.4.4, you may encounter error messages with the o2cb service. When queried, the service is in the active state, but some clusters may show a failed state, as seen in the example below.
[root@ovcacn08r1 ~]# service o2cb status
Redirecting to /bin/systemctl status o2cb.service
o2cb.service - Load o2cb Modules
Loaded: loaded (/usr/lib/systemd/system/o2cb.service; enabled; vendor
preset: disabled)
Active: active (exited) since Thu 2021-04-22 09:00:51 UTC; 21h ago
Main PID: 2407 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/o2cb.service
Apr 22 09:00:51 ovcacn08r1 o2cb.init[2407]: Loading stack plugin "o2cb": OK
Apr 22 09:00:51 ovcacn08r1 o2cb.init[2407]: Loading filesystem "ocfs2_dlmfs":
OK
Apr 22 09:00:51 ovcacn08r1 o2cb.init[2407]: Creating directory '/dlm': OK
Apr 22 09:00:51 ovcacn08r1 o2cb.init[2407]: Mounting ocfs2_dlmfs filesystem
at /dlm: OK
Apr 22 09:00:51 ovcacn08r1 o2cb.init[2407]: Setting cluster stack "o2cb": OK
Apr 22 09:00:51 ovcacn08r1 o2cb.init[2407]: Registering O2CB cluster "ocfs2":
Failed
Apr 22 09:00:51 ovcacn08r1 o2cb.init[2407]: o2cb: Unknown cluster 'ocfs2'
Apr 22 09:00:51 ovcacn08r1 o2cb.init[2407]: Unregistering O2CB cluster
"ocfs2": Failed
Apr 22 09:00:51 ovcacn08r1 o2cb.init[2407]: o2cb: Cluster 'ocfs2' is not
active
Apr 22 09:00:51 ovcacn08r1 systemd[1]: Started Load o2cb Modules.
[root@ovcacn08r1 ~]#
Workaround: The failed messages incorrectly report the status of the clusters; the clusters are functioning properly. It is safe to ignore these error messages. To clear the false messages, restart the o2cb service and check the status.
[root@ovcacn10r1 ~]# service o2cb restart
Redirecting to /bin/systemctl restart o2cb.service
[root@ovcacn10r1 ~]# service o2cb status
Bug 32667300
Compute Node Upgrade Restores Default Repository When Compute Node Was Previously Not Part of Any Tenant Group or Repository
When upgrading compute nodes from Software Controller release 2.4.3 to Software Controller release 2.4.4, if you have a compute node that is a part of the default tenant group but has no assigned repositories, the upgrade process restores the default repository to that compute node.
Workaround: If you wish to keep a compute node with no assigned repositories, and the upgrade process assigns the default repository to that compute node, simply unpresent the repository from that compute node after upgrade.
Bug 32847571
Check of Local Repository to Ensure Target Compute Node is Empty
When upgrading compute nodes from Software Controller release 2.4.3 to Software Controller release 2.4.4, if the local repository for a compute node being upgraded has any ISO, VM Files, VM Templates, Virtual Appliances or Virtual Disks present, the upgrade precheck fails. This is expected behavior to ensure the data inside the local repository is retained before the upgrade occurs, which erases that data. If your compute node upgrade pre-checks fail, move all objects located in the compute node local repository to another repository, then retry the upgrade. If there is no need to retain any ISOs, VM Files, VM Templates, Virtual Appliances or Virtual Disks, delete them in order to make the local repository empty.
Workaround: Move items to another repository and retry the upgrade.
- Log in to the Oracle VM Manager Web UI for the compute node you are upgrading.
- Move each file type as described below:
Table 6-1 Moving Items Out of the Local Repository

ISO
- Clone the ISOs to other repositories.
- Delete the ISO files from the local repository.

VM Template
- Move the template to another repository using the clone customiser.

Virtual Appliances
- Create VMs using each of the virtual appliances.
- Create virtual appliances from the VMs you just created, using "Export to Virtual Appliance", and point them to other repositories.
- Delete the virtual appliances (used in step 1) from the local repository.

Virtual Disks
- If the VMs using these virtual disks are in the local repository, migrate the corresponding VMs along with the virtual disks (residing in the local repository) to some other repository, using the clone customiser.
- If the VMs using these virtual disks are not in the local repository (for example, only some of a VM's virtual disks reside in the local repository), follow these steps:
  - Stop the VMs using those virtual disks.
  - Clone the virtual disks with the clone target set to some other repository. This clone target repository must be presented to the compute node on which the VMs are hosted.
  - Delete the original virtual disks from the VMs.
  - Attach the cloned virtual disks to their corresponding VMs.
  - Start the VMs.

VM Files
- Migrate the corresponding VMs to some other repository.
- Run the pca_upgrader in verify mode to confirm the pre-checks pass. If the pre-checks pass, run the upgrade.
[root@ovcamn05r1 ~]# pca_upgrader -V -t compute -c ovcacnXXr1

PCA Rack Type: PCA X8_BASE.
Please refer to log file
/nfs/shared_storage/pca_upgrader/log/pca_upgrader_<timestamp>.log
for more details.

Beginning PCA Compute Node Pre-Upgrade Checks...
Check target Compute Node exists                                      1/8
Check the provisioning lock is not set                                2/8
Check OVCA release on Management Nodes                                3/8
Check Compute Node's Tenant matches Server Pool                       4/8
Check target Compute Node has no local networks VNICs                 5/8
Check target Compute Node has no VMs                                  6/8
Check local repository of target Compute Node is empty                7/8
Check no physical disks on target Compute Node have repositories      8/8

PCA Compute Node Pre-Upgrade Checks completed after 0 minutes

---------------------------------------------------------------------------
PCA Compute Node Pre-Upgrade Checks                                  Passed
---------------------------------------------------------------------------
Check target Compute Node exists                                     Passed
Check the provisioning lock is not set                               Passed
Check OVCA release on Management Nodes                               Passed
Check Compute Node's Tenant matches Server Pool                      Passed
Check target Compute Node has no local networks VNICs                Passed
Check target Compute Node has no VMs                                 Passed
Check local repository of target Compute Node is empty               Passed
Check no physical disks on target Compute Node have repositories     Passed
---------------------------------------------------------------------------
Overall Status                                                       Passed
---------------------------------------------------------------------------
PCA Compute Node Pre-Upgrade Checks Passed

Please refer to log file
/nfs/shared_storage/pca_upgrader/log/pca_upgrader_<timestamp>.log
for more details.
[root@ovcamn05r1 ~]#
- After you successfully perform the upgrade, restore the files you just backed up to the local repository on the newly-upgraded compute node (ovcacnXXr1-localfsrepo). You can use the table above to restore the items, or find detailed instructions in Repositories Tab in the Oracle VM Manager User's Guide.
Bug 33093080
Check No Physical Disks on Target Compute Node Have Repositories
When upgrading compute nodes from Software Controller release 2.4.3 to Software Controller release 2.4.4, if there are repositories present on any Physical Disks (iSCSI/FC) and those Physical Disks (iSCSI/FC) are only presented to the compute node which is being upgraded, the precheck will fail.
Workaround: Release the ownership of the repository from the physical disk.
Note:
Check all physical disks that are only presented to the compute node being upgraded for repositories. You must perform this procedure for each repository that is present on each of these physical disks.
Pre-Upgrade Steps
- Log in to the Oracle VM Manager Web UI.
- In the Servers and VMs tab, select the appropriate server pool and validate that the compute node is part of that server pool.
- From the Repositories tab, select the repository and note the physical disk over which the repository lies.
- From the Repositories tab, select the repository, then edit the concerned repository and check Release Ownership.
- From the Repository tab, click Show All Repositories, then select the repository and delete it.
This only deletes the repository from Oracle VM Manager and not the actual filesystem on the physical disk.
Retry the Compute Node Upgrade
- Run the pca_upgrader in verify mode to confirm the pre-checks pass.
[root@ovcamn05r1 ~]# pca_upgrader -V -t compute -c ovcacnXXr1

PCA Rack Type: PCA X8_BASE.
Please refer to log file
/nfs/shared_storage/pca_upgrader/log/pca_upgrader_<timestamp>.log
for more details.

Beginning PCA Compute Node Pre-Upgrade Checks...
Check target Compute Node exists                                      1/8
Check the provisioning lock is not set                                2/8
Check OVCA release on Management Nodes                                3/8
Check Compute Node's Tenant matches Server Pool                       4/8
Check target Compute Node has no local networks VNICs                 5/8
Check target Compute Node has no VMs                                  6/8
Check local repository of target Compute Node is empty                7/8
Check no physical disks on target Compute Node have repositories      8/8

PCA Compute Node Pre-Upgrade Checks completed after 0 minutes

---------------------------------------------------------------------------
PCA Compute Node Pre-Upgrade Checks                                  Passed
---------------------------------------------------------------------------
Check target Compute Node exists                                     Passed
Check the provisioning lock is not set                               Passed
Check OVCA release on Management Nodes                               Passed
Check Compute Node's Tenant matches Server Pool                      Passed
Check target Compute Node has no local networks VNICs                Passed
Check target Compute Node has no VMs                                 Passed
Check local repository of target Compute Node is empty               Passed
Check no physical disks on target Compute Node have repositories     Passed
---------------------------------------------------------------------------
Overall Status                                                       Passed
---------------------------------------------------------------------------
PCA Compute Node Pre-Upgrade Checks Passed

Please refer to log file
/nfs/shared_storage/pca_upgrader/log/pca_upgrader_<timestamp>.log
for more details.
[root@ovcamn05r1 ~]#
- If the pre-checks pass, run the upgrade.
[root@ovcamn05r1 ~]# pca_upgrader -U -t compute -c ovcacnXXr1
Post Upgrade Steps to Restore the Repository
- In the Storage tab, click on the SAN Server that holds the physical disks and refresh the physical disk (which held the repository before the upgrade).
- In the Storage tab, select Shared File System/Local File System for the corresponding file system for the physical disk on which you had the repository, then click the refresh button.
- In the Repository tab, click Show All Repositories, then confirm the repository (which was deleted earlier in this procedure) is restored.
- From the Repository tab, click Show All Repositories, then edit the repository that was deleted pre-upgrade. Click Take Ownership and select the same server pool it was associated with prior to the upgrade.
- Select the repository and click Refresh Selected Repository.
Bug 33093068
Backup of Config Can Fill Filesystem and Cause Numerous Problems
Over time, backups of the Private Cloud Appliance configuration information can accumulate at /nfs/shared_storage/backups and fill the filesystem. Periodically you must remove old backups to ensure the filesystem does not run out of room.
Workaround: Remove backups using the following procedure.
Removing Backups
- Log in to the active management node as root user.
- Move any custom scripts or data located in /nfs/shared_storage/backups to a different location. During this cleanup procedure, everything in that location is deleted except the backup files for the selected retention period; /nfs/shared_storage/backups must contain only backup tarballs and uncompressed backup directories.
- Remove old backups using this command:
[root@ovcamn05r1 ~]# find /nfs/shared_storage/backups -maxdepth 1 -mtime +<retention-period-in-days> -exec rm -rf {} \;
For example, to delete all backups older than 30 days, including any uncompressed backup directories, type:
[root@ovcamn05r1 ~]# find /nfs/shared_storage/backups -maxdepth 1 -mtime +30 -exec rm -rf {} \;
Bug 33947155
Two Different Release Packages of Same Tool Present in the ISO
There is a chance that an ISO file could contain multiple rpm files for ipmitool. No action is required; the upgrader tool installs the correct version.
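If you want to see which ipmitool packages a given ISO carries, a loop mount is enough; the ISO path and mount point below are illustrative:
# mount the Controller Software ISO read-only and list the bundled ipmitool packages
mount -o loop,ro /path/to/pca-<version>.iso /mnt/pcaiso
find /mnt/pcaiso -name 'ipmitool-*.rpm'
umount /mnt/pcaiso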
Bug 34375901
Intermittent Error When Deleting the Uplink Port Group
When you create an uplink-port-group and then restore the spine switch configuration from a backup, the uplink-port-group is removed from the switch configuration. However, the uplink-port-group still appears in the Oracle Private Cloud Appliance interface, and an attempt to delete it may fail.
PCA> delete uplink-port-group group-name
************************************************************
 WARNING !!! THIS IS A DESTRUCTIVE OPERATION.
************************************************************
Are you sure [y/N]:y

Status: Failure
Error Message: Error (UPLINK_PORT_GROUP_012): Failed to delete the uplink port group: group-name.

If the delete fails in this way, retry the command; a subsequent attempt typically succeeds:
PCA> delete uplink-port-group group-name
************************************************************
WARNING !!! THIS IS A DESTRUCTIVE OPERATION.
************************************************************
Are you sure [y/N]:y
Status: Success
PCA>
Bug 34379557