147 PRVE-00004 to PRVE-10378

PRVE-00004: The given operator is not supported.

Cause: This is an internal error.

Action: Please contact Oracle Support.

PRVE-00005: The string does not represent a numeric value.

Cause: This is an internal error.

Action: Please contact Oracle Support.

PRVE-00008: Could not find command name in description of executable

Cause: This is an internal error.

Action: Please contact Oracle Support.

PRVE-00009: Failed to resolve variable "{0}"

Cause: This is an internal error.

Action: Please contact Oracle Support.

PRVE-00010: Properly formatted RESULT tags are not found in the command output: "{0}".

Cause: This is an internal error.

Action: Please contact Oracle Support.

PRVE-00011: Properly formatted COLLECTED tags are not found in the command output.

Cause: This is an internal error.

Action: Please contact Oracle Support.

PRVE-00012: Execution of verification executable failed.

Cause: An error was encountered while executing verification executable.

Action: Look at the accompanying messages for details on the cause of failure.

PRVE-00013: Execution of analyzer failed.

Cause: An error was encountered while executing analyzer.

Action: Look at the accompanying messages for details on the cause of failure.

PRVE-00016: Unable to write data to file "{0}".

Cause: The path specified is not writeable.

Action: Ensure that write access exists for the specified path.

PRVE-00018: Encountered a NULL executable argument.

Cause: This is an internal error.

Action: Please contact Oracle Support.

PRVE-00021: HugePages feature is not enabled on nodes "{0}"

Cause: Available memory is greater than 4GB, but OS HugePages feature is not enabled.

Action: If available memory is greater than 4GB, Oracle recommends configuring HugePages. Refer to OS documentation to configure HugePages.
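
For example, a minimal sketch of enabling HugePages on a typical Linux node (the page count shown is illustrative and must be sized to the SGAs of the instances on the node; run as root):

    # Reserve 2048 huge pages (2 MB each on most x86_64 kernels).
    echo "vm.nr_hugepages = 2048" >> /etc/sysctl.conf
    sysctl -p
    grep Huge /proc/meminfo    # confirm HugePages_Total and Hugepagesize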

PRVE-00022: Could not get available memory on node "{0}"

Cause: An error occurred accessing /proc/meminfo to determine available system memory.

Action: Ensure that the OS /proc virtual file system is functioning correctly and that /proc/meminfo is accessible.

PRVE-00023: HugePages feature is not supported on node "{0}"

Cause: The HugePages feature of the Linux operating system was not supported on the indicated node.

Action: Oracle recommends enabling the database automatic memory management feature on database instances that run on a Linux operating system that does not support HugePages.

PRVE-00024: Transparent HugePages feature is enabled on node "{0}"

Cause: Transparent HugePages feature was found always enabled on the indicated node.

Action: Oracle recommends disabling Transparent HugePages on all servers running Oracle databases. To disable the Transparent HugePages feature, add "transparent_hugepage=never" to the kernel boot line in the "/etc/default/grub" file of the indicated node and reboot the node to make the change effective. For a container node, perform the recommended action on the container host.
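
For example, a minimal sketch for a grub2-based distribution (file locations vary by operating system; run as root):

    # Append transparent_hugepage=never to GRUB_CMDLINE_LINUX in /etc/default/grub:
    #   GRUB_CMDLINE_LINUX="... transparent_hugepage=never"
    grub2-mkconfig -o /boot/grub2/grub.cfg                 # regenerate the boot configuration
    reboot                                                 # required for the change to take effect
    cat /sys/kernel/mm/transparent_hugepage/enabled        # should report [never] after reboot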

PRVE-00027: Hardware clock synchronization could not be determined on node "{0}"

Cause: The hwclock command is used in the shutdown script, but it is not possible to establish that the --systohc option is enabled.

Action: Check the shutdown/halt script and verify manually whether this script executes the command 'hwclock --systohc'.

PRVE-00028: Hardware clock synchronization could not be established as halt script does not exist on node "{0}"

Cause: The shutdown or halt script /etc/rc.d/init.d/halt was not found.

Action: Check for the existence of the shutdown or halt script and ensure this script executes the command 'hwclock --systohc'.

PRVE-00029: Hardware clock synchronization check could not run on node "{0}"

Cause: The shutdown or halt script may not be accessible or a command may have failed.

Action: Contact Oracle Support.

PRVE-00033: Core files are not enabled on node "{0}"

Cause: The core file setting currently prevents the creation of a core file for process aborts and exceptions.

Action: Oracle recommends enabling core file creation.
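
For example, a minimal sketch that enables core file creation for a hypothetical software owner 'oracle' (the user name and values are illustrative; run as root):

    echo "oracle soft core unlimited" >> /etc/security/limits.conf
    echo "oracle hard core unlimited" >> /etc/security/limits.conf
    ulimit -c unlimited    # enable core files in the current shell session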

PRVE-00034: Error encountered while trying to obtain the core file setting on node "{0}"

Cause: An error occurred while attempting to determine the core file setting.

Action: Oracle recommends enabling core file creation.

PRVE-00038: The SSH LoginGraceTime setting on node "{0}" may result in users being disconnected before login is completed

Cause: The LoginGraceTime timeout value is too low, which causes users to be disconnected before their logins complete.

Action: Oracle recommends setting LoginGraceTime to a value of zero (unlimited). Restart the SSH daemon on the identified node to make the change effective.
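
For example, a minimal sketch (the SSH service name varies by distribution; run as root):

    grep -i LoginGraceTime /etc/ssh/sshd_config
    # edit the entry so that it reads:  LoginGraceTime 0
    systemctl restart sshd    # restart the SSH daemon to apply the change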

PRVE-00039: Error encountered while trying to obtain the LoginGraceTime setting on node "{0}"

Cause: An error occurred while attempting to obtain the LoginGraceTime setting.

Action: Oracle recommends setting the LoginGraceTime to a value of zero (unlimited).

PRVE-00042: Maximum locked memory "{0}" limit when automatic memory management is enabled is less than the recommended value in the file "{1}" [Expected = "{2}", Retrieved="{3}"] on node "{4}"

Cause: The value of maximum locked memory is less than the recommended value for automatic memory management.

Action: Increase the value of locked memory in the indicated file. Refer to your operating system documentation for details.

PRVE-00043: Error encountered when checking maximum locked memory limit on node "{0}"

Cause: An error was encountered when retrieving the value of maximum locked memory.

Action: Modify the value of maximum locked memory. Refer to your operating system documentation for details.

PRVE-00044: No entry in configuration file when checking locked memory limit on node "{0}"

Cause: No entry for maximum locked memory limit was found in /etc/security/limits.conf.

Action: Specify or correct the value of locked memory by modifying /etc/security/limits.conf. Refer to your operating system documentation for details.

PRVE-00047: Error when checking IP port availability

Cause: Command failed when checking port availability.

Action: Contact Oracle Support.

PRVE-00048: Check for IP port "{0}" availability failed

Cause: IP Port 8888 not available.

Action: Stop any applications listening on port 8888, as it is needed for OC4J.

PRVE-00052: The syslog.conf log file sync setting on node "{0}" may result in users being disconnected before logins are completed

Cause: Not all log file specifications in /etc/syslog.conf are prefixed by the '-' character, which causes log messages to be written to disk before control is released. This may cause users to be disconnected before their logins complete.

Action: Oracle recommends prefixing all log file specifications in /etc/syslog.conf with the '-' character, which results in log messages being written to the system cache and flushed to disk later.
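
For example, a quick check and an illustrative /etc/syslog.conf entry with the '-' prefix that buffers writes to the log file:

    grep '/var/log/messages' /etc/syslog.conf
    # the file specification should be prefixed with '-', for example:
    # *.info;mail.none;authpriv.none;cron.none    -/var/log/messages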

PRVE-00053: Error encountered while trying to obtain the syslog.conf log file sync settings on node "{0}"

Cause: An error occurred while attempting to read the log file sync settings specified in file: /etc/syslog.conf.

Action: Verify read access to file '/etc/syslog.conf'. Oracle recommends prefixing all log file specifications in /etc/syslog.conf with the '-' character which will result in the log messages being written to system cache and flushed to disk later.

PRVE-00054: File '/etc/syslog.conf' not found on node "{0}"

Cause: An error occurred while attempting to read the log file sync settings specified in file: '/etc/syslog.conf'. The file '/etc/syslog.conf' was not found on the system.

Action: Please verify that the file '/etc/syslog.conf' exists on the system.

PRVE-00055: Cannot read the file '/etc/syslog.conf' on node "{0}"

Cause: The user did not have permissions to read the '/etc/syslog.conf' file on the system.

Action: Please verify that the current user has read access to the file.

PRVE-00059: no default entry or entry specific to user "{0}" was found in the configuration file "{1}" when checking the maximum locked memory "{2}" limit on node "{3}"

Cause: There was no default or user-specific entry for maximum locked memory limit found in the indicated configuration file.

Action: Specify or correct the value of the maximum locked memory by modifying the indicated configuration file. Refer to your operating system documentation for details.

PRVE-00060: Cannot read the shutdown script file "{0}"

Cause: The current user did not have read access to the indicated file.

Action: Ensure that the current user has read access to the indicated file.

PRVE-00067: Maximum locked memory "{0}" limit setting is less than the recommended value in the file "{1}" [Expected = "{2}", Actual="{3}"] on node "{4}".

Cause: A check of maximum locked memory settings determined that the value of maximum locked memory was less than the recommended value of 3 GB in the indicated file for the current user.

Action: Oracle recommends that the maximum locked memory limit should be at least 3GB. Increase the value of maximum locked memory specified in the indicated file of the identified node. Refer to the operating system documentation or issue the command 'man limits.conf' for details.
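
For example, a minimal sketch raising the locked-memory limit to 3 GB (3145728 KB) for a hypothetical software owner 'oracle' (run as root):

    echo "oracle soft memlock 3145728" >> /etc/security/limits.conf
    echo "oracle hard memlock 3145728" >> /etc/security/limits.conf
    # log in again as the user and verify:
    ulimit -l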

PRVE-00068: Maximum locked memory "{0}" limit setting is less than the recommended value in the file "{1}" [Expected = "{2}", Actual="{3}"] when huge pages are enabled on node "{4}".

Cause: A check of maximum locked memory settings determined that the value of maximum locked memory specified for the current user in the indicated file was less than the recommended value of maximum locked memory when huge pages are enabled on the indicated node.

Action: Oracle recommends that the maximum locked memory be at least 90 percent of the installed physical memory when huge pages are enabled. Increase the value of maximum locked memory specified in the indicated file of the identified node. Refer to the operating system documentation or issue the command 'man limits.conf' for details.

PRVE-00072: Kernel parameter "{0}" required for the NUMA nodes from which huge pages are allocated or freed by the NUMA memory policy does not have expected current value on node "{1}" [Expected = "{2}" ; Found = "{3}"].

Cause: The Configuration Verification Utility (CVU) determined that the indicated HugePages memory policy kernel parameter was not set to the expected current value on the indicated node.

Action: Ensure that the indicated HugePages memory policy kernel parameter is set to the expected current value on the indicated node. For a container node, ensure the setting on the container host.

PRVE-00075: The verification for device special file '/dev/ofsctl' failed, the file is not present on node "{0}".

Cause: The device special file '/dev/ofsctl' was expected to be present on the indicated node because the ACFS drivers are installed, but the file was missing.

Action: Verify that the ACFS installation completed successfully and ensure that the device special file '/dev/ofsctl' is created.

PRVE-00079: The UDEV rule for "ofsctl" was not found in the rule file '55-usm.rules' on node "{0}".

Cause: The ACFS verification found that the UDEV rule specification 'KERNEL=="ofsctl"' was not found in the rule file '55-usm.rules'.

Action: Verify that the ACFS installation completed successfully and ensure that the UDEV rule is created in the '55-usm.rules' file.

PRVE-00080: failed to execute the command 'osdbagrp -a' successfully on node "{0}"

Cause: The ACFS verification encountered a problem in attempting to execute the command '$CRS_HOME/bin/osdbagrp -a' to obtain the ASM admin group name.

Action: Verify that the 'osdbagrp' executable is available in the CRS home location and that the command can be executed successfully.

PRVE-00082: The device special file attributes did not meet the expected requirements on node "{0}".\n[permissions: Expected="{1}"; Found="{2}"] [owner: Expected="{3}"; Found="{4}"] [group: Expected="{5}"; Found="{6}"]

Cause: The file attributes for the device special file '/dev/ofsctl' did not match the expected values.

Action: Verify that the ACFS installation completed successfully and check the file attributes defined for '/dev/ofsctl'.

PRVE-00083: The UDEV rule specified in the '55-usm.rules' file does not meet the expected requirements on node "{0}".\n[group: Expected="{1}"; Found="{2}"] [mode: Expected="{3}"; Found="{4}"]

Cause: The ACFS verification found that the UDEV rule defined in the rule file for "ofsctl" did not match the expected values.

Action: Verify that the ACFS installation completed successfully and check the UDEV rule defined for "ofsctl".

PRVE-00084: Current '/dev/shm/' mount options do not contain one or more required options. [ Found: "{0}"; Missing: "{1}" ].

Cause: Required '/dev/shm' mount options were missing.

Action: Ensure that current mount options on the node for '/dev/shm/' satisfy the requirements expressed in the error.

PRVE-00085: Configured '/dev/shm/' mount options do not contain one or more required options. [ Found: "{0}"; Missing: "{1}" ].

Cause: Required '/dev/shm' mount options were missing.

Action: Ensure that configured mount options in fstab on the node for '/dev/shm/' satisfy the requirements expressed in the error.

PRVE-00086: Current '/dev/shm/' mount options include one or more invalid options. [ Found: "{0}"; Invalid: "{1}" ].

Cause: One or more invalid '/dev/shm' mount options were found.

Action: Ensure that current mount options on the node for '/dev/shm/' satisfy the requirements expressed in the error.

PRVE-00087: Configured '/dev/shm/' mount options include one or more invalid options. [ Found: "{0}"; Invalid: "{1}" ].

Cause: One or more invalid '/dev/shm' mount options were found.

Action: Ensure that configured mount options in fstab on the node for '/dev/shm/' satisfy the requirements expressed in the error.

PRVE-00088: '/dev/shm/' mount options did not satisfy the requirements on node "{0}".

Cause: A '/dev/shm/' mount options mismatch was found. The mismatch can occur for the following reasons: 1. One or more required mount options were missing from the current and configured mount options. 2. One or more invalid mount options were found in the current and configured mount options.

Action: Check the accompanying error messages for the expected mount options. Configure '/dev/shm' mount options accordingly.

PRVE-00093: The 'DefaultTasksMax' parameter is set to an incorrect value in file '/etc/systemd/system.conf' on node "{0}". [ Found: "{1}"; Expected: "{2}" ].

Cause: The DefaultTasksMax parameter was set to an incorrect value in /etc/systemd/system.conf file on the specified node.

Action: Ensure that the DefaultTasksMax parameter is correctly set in file /etc/systemd/system.conf.
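
For example, a minimal sketch, where <value> stands for the expected value reported in the message (run as root):

    grep -n DefaultTasksMax /etc/systemd/system.conf
    # edit the entry so that it reads (uncommented):  DefaultTasksMax=<value>
    systemctl daemon-reexec    # make systemd re-read /etc/systemd/system.conf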

PRVE-00234: Error encountered while trying to obtain the hangcheck_timer setting on node "{0}"

Cause: An error occurred while attempting to determine the hangcheck_timer setting.

Action: n/a

PRVE-00243: CSS diagwait is not set to recommended value of 13 on node "{0}"

Cause: CSS diagwait does not meet the recommendation.

Action: Set the diagwait to the recommended setting using the '$CRS_HOME/bin/crsctl set css diagwait' command.
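
For example, an illustrative use of the command named in the Action, run as the root user; 13 is the recommended value from the message text. On some releases the Clusterware stack must be stopped first and the '-force' option added.

    $CRS_HOME/bin/crsctl set css diagwait 13
    $CRS_HOME/bin/crsctl get css diagwait    # verify the new value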

PRVE-00244: Error encountered while trying to obtain the CSS diagwait setting on node "{0}"

Cause: An error occurred while attempting to determine the CSS diagwait setting.

Action: n/a

PRVE-00253: CSS misscount is not set to recommended on node "{0}"

Cause: CSS misscount does not meet the requirement.

Action: Set the misscount to the recommended setting using the '$CRS_HOME/bin/crsctl set css misscount' command.

PRVE-00254: Error encountered while trying to obtain the CSS misscount setting on node "{0}"

Cause: An error occurred while attempting to determine the CSS misscount setting.

Action: n/a

PRVE-00263: CSS reboottime is not set to recommended value of 3 seconds on node "{0}"

Cause: CSS reboottime does not meet the requirement.

Action: Set the reboottime to the recommended setting using the '$CRS_HOME/bin/crsctl set css reboottime' command.

PRVE-00264: Error encountered while trying to obtain the CSS reboottime setting on node "{0}"

Cause: An error occurred while attempting to determine the CSS reboottime setting.

Action: n/a

PRVE-00273: The value of network parameter "{0}" for interface "{4}" is not configured to the expected value on node "{1}".[Expected="{2}"; Found="{3}"]

Cause: The indicated parameter of the indicated interface on the indicated node was not configured to the expected value.

Action: Correct the configuration of the indicated parameter to the indicated expected value.

PRVE-00274: Error encountered while trying to obtain the network parameter setting on node "{0}"

Cause: An error occurred while attempting to retrieve network parameter setting.

Action: n/a

PRVE-00284: Error encountered while trying to obtain the virtual memory parameter setting on node "{0}"

Cause: An error occurred while attempting to retrieve virtual memory parameter setting.

Action: n/a

PRVE-00294: Error trying to obtain the MTU setting on node "{0}"

Cause: An error occurred while attempting to retrieve MTU setting.

Action: n/a

PRVE-00296: Error retrieving cluster interfaces on node "{0}"

Cause: Cluster interfaces could not be retrieved on the specified node using 'oifcfg getif'.

Action: Ensure that the Oracle Clusterware is configured and that the Oracle Clusterware stack is running.

PRVE-00304: Error while checking flow control settings in the E1000 on node "{0}"

Cause: An error occurred while attempting to retrieve E1000 flow control settings.

Action: n/a

PRVE-00314: Error while checking default gateway subnet on node "{0}"

Cause: An error occurred while attempting to retrieve subnet of default gateway.

Action: n/a

PRVE-00315: Error while checking VIP subnet on node "{0}"

Cause: An error occurred while attempting to retrieve subnet of VIP.

Action: n/a

PRVE-00324: Error while checking VIP restart configuration on node "{0}"

Cause: An error occurred while attempting to retrieve VIP restart configuration.

Action: n/a

PRVE-00334: Error while checking TCP packet re-transmit rate on node "{0}"

Cause: An error occurred while attempting to retrieve TCP packet re-transmit rate.

Action: n/a

PRVE-00343: Network packet reassembly occurring on node "{1}".

Cause: A possible cause is a difference in the MTU size across the network.

Action: Ensure that the MTU size is the same across the network.

PRVE-00344: Error while checking network packet reassembly on node "{0}"

Cause: An error occurred while attempting to check network packet reassembly.

Action: n/a

PRVE-00354: Error encountered while trying to obtain the CSS disktimeout setting

Cause: An error occurred while attempting to determine the CSS disktimeout setting.

Action: n/a

PRVE-00384: Error encountered while trying to obtain the hangcheck reboot setting on node "{0}"

Cause: An error occurred while attempting to determine the hangcheck reboot setting.

Action: n/a

PRVE-00394: Error encountered while trying to obtain the hangcheck tick setting on node "{0}"

Cause: An error occurred while attempting to determine the hangcheck tick setting.

Action: n/a

PRVE-00404: Error encountered while trying to obtain the hangcheck margin setting on node "{0}"

Cause: An error occurred while attempting to determine the hangcheck margin setting.

Action: n/a

PRVE-00414: Error encountered while trying to obtain the listener name on node "{0}"

Cause: An error occurred while attempting to determine the listener name.

Action: n/a

PRVE-00420: /dev/shm is not found mounted on node "{0}"

Cause: /dev/shm was not found mounted as a RAM file system, which is recommended for database installation.

Action: Mount /dev/shm as RAM file system with appropriate size.

PRVE-00421: No entry exists in /etc/fstab for mounting /dev/shm

Cause: The file /etc/fstab did not have an entry specifying /dev/shm and its size to be mounted.

Action: Modify /etc/fstab to mount /dev/shm with appropriate size.

PRVE-00422: The size of in-memory file system mounted at /dev/shm is "{0}" megabytes which does not match the size in /etc/fstab as "{1}" megabytes

Cause: The size of the mounted RAM file system did not equal the value configured for system startup.

Action: Modify /etc/fstab to mount /dev/shm with appropriate size.
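
For example, an illustrative /etc/fstab entry; the size of 4g is only an example and must accommodate the MEMORY_TARGET or SGA of the instances on the node.

    # /etc/fstab entry:
    # tmpfs  /dev/shm  tmpfs  defaults,size=4g  0 0
    mount -o remount /dev/shm    # apply the new size without a reboot
    df -h /dev/shm               # verify the mounted size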

PRVE-00423: The file /etc/fstab does not exist on node "{0}"

Cause: The file /etc/fstab, which should exist on every node, was not found.

Action: Recreate or retrieve the /etc/fstab file.

PRVE-00426: The size of in-memory file system mounted as /dev/shm is "{0}" megabytes which is less than the required size of "{1}" megabytes on node "{2}"

Cause: The in-memory file system was found mounted with a smaller size than the required size on the identified node.

Action: Ensure that /dev/shm is correctly mounted with a size greater than or equal to the required size.

PRVE-00427: Failed to retrieve the size of in-memory file system mounted as /dev/shm on node "{0}"

Cause: An attempt to retrieve the in-memory file system size failed on the identified node.

Action: Ensure that /dev/shm is correctly mounted and that the current user has the permissions required to access the /dev/shm mount information.

PRVE-00428: No entry exists in /proc/mounts for temporary file system /dev/shm.

Cause: A CVU pre-install check for Linux containers failed because it determined that the file /proc/mounts did not have an entry for the temporary file system /dev/shm.

Action: Ensure that /dev/shm is correctly mounted. Verify that rc.sysinit is correctly configured to mount /dev/shm.

PRVE-00438: Oracle Solaris Support Repository Updates version "{0}" is older than minimum supported version "{1}" on node "{2}".

Cause: A CVU pre-install check found that the indicated version of Oracle Solaris Support Repository Updates was older than the minimum supported version as indicated on the identified node.

Action: Ensure that the Oracle Solaris Support Repository Updates version on the identified node is updated to the minimum supported version as indicated.

PRVE-00439: Command "{0}" issued on node "{1}" to retrieve the operating system version failed with error "{2}".

Cause: A CVU check for minimum SRU version could not complete because an error occurred while attempting to determine the current operating system version on the indicated node using the indicated command.

Action: Examine the accompanying Operating System error message and respond accordingly.

PRVE-00453: Reverse path filter parameter "rp_filter" for private interconnect network interfaces "{0}" is not set to 0 or 2 on node "{1}".

Cause: Reverse path filter parameter 'rp_filter' was not set to 0 or 2 for the identified private interconnect network interfaces on the specified node.

Action: Ensure that the 'rp_filter' parameter is correctly set to the value 0 or 2 for each of the interfaces used in the private interconnect classification. This will disable or relax the filtering and allow Oracle Clusterware to function correctly. Use the 'sysctl' command to modify the value of this parameter.

PRVE-00454: Error encountered while trying to retrieve the value of "rp_filter" parameter of "{0}" network interfaces on node "{1}"

Cause: An error occurred while attempting to retrieve the reverse path filter parameter 'rp_filter' value on the specified node.

Action: n/a

PRVE-00456: Reverse path filter parameter "rp_filter" for private interconnect network interfaces "{0}" is not configured to the value of 0 or 2 in file /etc/sysctl.conf on node "{1}".

Cause: Reverse path filter parameter 'rp_filter' was not configured to the value 0 or 2 for the identified private interconnect network interfaces on the specified node.

Action: Ensure that the 'rp_filter' parameter is correctly configured to the value 0 or 2 in the /etc/sysctl.conf file for all the interfaces used in the private interconnect classification. This will disable or relax the filtering and allow Oracle Clusterware to function correctly. Configuring this value in the /etc/sysctl.conf file ensures that the setting persists across reboots.
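
For example, a minimal sketch assuming a hypothetical private interconnect interface eth1 (run as root):

    sysctl -w net.ipv4.conf.eth1.rp_filter=2                       # set the running value
    echo "net.ipv4.conf.eth1.rp_filter = 2" >> /etc/sysctl.conf    # persist across reboots
    sysctl -p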

PRVE-00463: Network bonding feature is enabled on node "{0}" with bonding mode "{1}" which is not a recommended value of 0 "balance-rr" or 1 "active-backup" for private interconnect network interfaces "{2}"

Cause: The bonding mode specified for the network interfaces used as private cluster interconnect was not a recommended value.

Action: Configure network bonding mode 0 or 1 on interfaces used for private cluster interconnect.

PRVE-00464: Network bonding feature is enabled on nodes "{0}" with a bonding mode discouraged for cluster private interconnect usage.

Cause: An incorrect bonding mode was used for the network bonds in which private interconnect network interfaces participate on the indicated nodes.

Action: Ensure that the network bonding mode is set to the recommended value of 0 or 1 for all the network bonds in which private interconnect network interfaces participate.

PRVE-00465: Network bonding mode used on interfaces classified for private cluster interconnect on nodes "{0}" is inconsistent.\nNetwork bonding details are as follows:\n{1}

Cause: An inconsistent network bonding mode was used for the network bonds in which cluster interconnect interfaces participate on the indicated nodes.

Action: Ensure that the network bonding mode used for all the network bonds in which cluster interconnect interfaces participate is consistent across all the nodes.

PRVE-00466: Private interconnect network interface list for current network configuration was not available

Cause: An attempt to retrieve cluster private network classifications failed.

Action: Ensure that the configuration of public and private network classifications was done correctly during the installation process. If Oracle Clusterware is already configured, then issue the 'oifcfg getif' command to list the current network configuration.

PRVE-00468: Different MTU values "{0}" are configured for the network interfaces "{1}" that participate in the network bonding with mode "{2}" on node "{3}".

Cause: The Configuration Verification Utility (CVU) determined that the indicated network interfaces were not configured with the same maximum transmission units (MTU) value on the indicated node.

Action: Ensure that all of the indicated network interfaces participate in the network bonding configured with the same MTU value on the indicated node.

PRVE-00473: Berkeley Packet Filter device "{0}" is created with major number "{1}" which is already in use by devices "{2}" on node "{3}".

Cause: The indicated Berkeley Packet Filter device was found to be using a major number which is also in use by the indicated devices.

Action: Ensure that the major number of the indicated Berkeley Packet Filter device is not in use by any other device on the indicated node.

PRVE-00474: Berkeley Packet Filter devices do not exist under directory /dev on nodes "{0}".

Cause: Berkeley Packet Filter devices /dev/bpf* were not found under the /dev directory on the indicated nodes.

Action: Create the Berkeley Packet Filter devices using the 'mknod' command on the indicated nodes and ensure that the devices are created using unique major and minor numbers.

PRVE-00475: Berkeley Packet Filter devices "{0}" are using same major number "{1}" and minor number "{2}" on node "{3}"

Cause: The indicated devices were found using the same major and minor numbers on the identified node.

Action: Ensure that the minor number of each Berkeley Packet Filter device is unique on the identified node.

PRVE-00476: Failed to list the devices under directory /dev on node "{0}"

Cause: An attempt to read the attributes of all the devices under /dev directory failed on the identified node.

Action: Ensure that the current user has permission to list and read the attributes of devices listed under directory /dev on the identified node.

PRVE-02503: FAST_START_MTTR_TARGET should be 0 when _FAST_START_INSTANCE_RECOVERY_TARGET > 0 on RAC.

Cause: FAST_START_MTTR_TARGET was greater than 0 while _FAST_START_INSTANCE_RECOVERY_TARGET was greater than 0.

Action: Set FAST_START_MTTR_TARGET to 0 when _FAST_START_INSTANCE_RECOVERY_TARGET > 0.

PRVE-02504: Error while checking FAST_START_MTTR_TARGET

Cause: An error occurred while attempting to retrieve FAST_START_MTTR_TARGET.

Action: n/a

PRVE-02734: Error while checking _FAST_START_INSTANCE_RECOVERY_TARGET

Cause: An error occurred while attempting to retrieve _FAST_START_INSTANCE_RECOVERY_TARGET.

Action: n/a

PRVE-02873: Core files older than "{2}" days found in the core files destination "{5}" on node "{0}". [Expected = "{4}" ; Found = "{3}"]

Cause: Too many old core files were found in the database core files destination.

Action: Move the old core files out of the database core files destination.

PRVE-02874: An error occurred while checking core files destination on node "{0}".

Cause: The check to verify the existence of old core files failed.

Action: This is an internal error. Contact Oracle Support.

PRVE-02883: ORA-00600 errors found in the alert log in alert log destination "{4}" on node "{0}".

Cause: ORA-00600 errors were found in the alert log.

Action: See the alert log for more information. Contact Oracle Support if the errors persist.

PRVE-02884: An error occurred while checking for ORA-00600 errors in the alert log.

Cause: The check to verify the existence of ORA-00600 errors in the alert log failed.

Action: This is an internal error. Contact Oracle Support.

PRVE-02893: Alert log files greater than "{2}" found in the alert log destination "{5}" on node "{0}". [Expected = "{4}" ; Found = "{3}"]

Cause: Alert log files greater than the indicated size were found in the alert log destination.

Action: Roll over the alert log to a new file and back up the old file.

PRVE-02894: Error while checking the size of alert log file

Cause: The check to verify presence of large alert logs in the alert log destination failed.

Action: This is an internal error. Contact Oracle Support.

PRVE-02913: Trace files older than "{2}" days found in the background dump destination "{5}" on node "{0}". [Expected = "{4}" ; Found = "{3}"]

Cause: Too many old trace files were found in the background dump destination.

Action: Move the old trace files out of the background dump destination.

PRVE-02914: Error while checking trace files in background dump destination

Cause: The check to verify the existence of old trace files failed.

Action: This is an internal error. Contact Oracle Support.

PRVE-02923: ORA-07445 errors found in the alert log in alert log destination "{4}" on node "{0}".

Cause: ORA-07445 errors were found in the alert log.

Action: See the alert log for more information. Contact Oracle Support if the errors persist.

PRVE-02924: An error occurred while checking for ORA-07445 errors in the alert log.

Cause: The check to verify the existence of ORA-07445 errors in the alert log failed.

Action: This is an internal error. Contact Oracle Support.

PRVE-03073: Disks "{1}" are not part of any disk group.

Cause: The indicated disks were found not to be part of any disk group.

Action: Create one or more disk groups from the indicated disks or add the indicated disks to any existing disk groups.

PRVE-03142: One or more ASM disk rebalance operations found in WAIT status

Cause: A query on V$ASM_OPERATION showed one or more ASM disk rebalance operations in WAIT status.

Action: Identify the ASM rebalance operations in WAIT status by running the query "SELECT * FROM V$ASM_OPERATION WHERE OPERATION LIKE 'REBAL' AND STATE LIKE 'WAIT'" and resume the operations by altering rebalance power to a non-zero value for the related ASM disk groups.
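
For example, a minimal sketch run as the Grid user on an ASM instance node; 'DATA' is a hypothetical disk group name and the power value is illustrative.

    echo "SELECT group_number, operation, state FROM v\$asm_operation WHERE operation LIKE 'REBAL' AND state LIKE 'WAIT';" | sqlplus -s / as sysasm
    echo "ALTER DISKGROUP DATA REBALANCE POWER 4;" | sqlplus -s / as sysasm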

PRVE-03143: Error occurred while checking ASM disk rebalance operations in WAIT status

Cause: An ASM query to obtain details of the ASM disk rebalance operations failed. Accompanying error messages provide detailed failure information.

Action: Examine the accompanying error message for details, resolve problems identified and retry.

PRVE-03149: ASM disk group files "{2}" are incorrectly owned by users "{3}" respectively.

Cause: A query showed that the indicated ASM disk group files were not owned by the Grid user.

Action: Refer to MOS Note 1959446.1 to take corrective action.

PRVE-03150: error occurred while checking for the correctness of ASM disk group files ownership

Cause: An ASM query failed unexpectedly.

Action: Examine the accompanying error messages for details.

PRVE-03155: ASM discovery string is set to the value "{1}" that matches TTY devices.

Cause: The ASM discovery string parameter ASM_DISKSTRING was set to a value that matches TTY devices.

Action: Ensure that the parameter ASM_DISKSTRING is altered to a restrictive value that does not match TTY devices using the command 'asmcmd dsset --normal discovery string' in ASM 11.2 or later. If SPFILE is in use for 11.1 or earlier ASM, then use the command 'ALTER SYSTEM SET ASM_DISKSTRING=discovery string SCOPE=SPFILE;'. Otherwise, update the value of parameter ASM_DISKSTRING in the PFILE of each ASM instance.

PRVE-03156: error occurred while checking for the selectivity of ASM discovery string

Cause: An ASM query failed unexpectedly.

Action: Examine the accompanying error messages for details.

PRVE-03163: Exadata cell nodes "{2}" contain more than one ASM failure group.

Cause: A query showed that the indicated Exadata cell nodes contain more than one ASM failure group.

Action: It is advisable to have a single ASM failure group assigned to an Exadata cell node. Ensure that the indicated Exadata cell nodes contain only one ASM failure group.

PRVE-03164: error occurred while checking the Exadata cell nodes for multiple ASM failure groups

Cause: An ASM query failed unexpectedly.

Action: Examine the accompanying error messages for details.

PRVE-03170: ASM spare parameters "{2}" are set to values different from their default values.

Cause: A query showed that values of the indicated ASM spare parameters had been changed.

Action: Review and reset the values of the indicated ASM spare parameters before upgrade. If unable to resolve the issue, contact Oracle Support.

PRVE-03171: An error occurred while checking ASM spare parameters.

Cause: An ASM query to obtain details of the spare parameters before upgrade failed unexpectedly. Accompanying error messages provide detailed failure information.

Action: Examine the accompanying error messages for details, resolve problems identified and retry.

PRVE-03175: ASM compatibility for ASM disk group "{1}" is set to "{2}", which is less than the minimum supported value "{3}".

Cause: A query showed that the ASM disk group attribute "compatible.asm" for the indicated disk group was set to a value less than the minimum supported value.

Action: Ensure that the ASM compatibility of the indicated disk group is set to a value greater than or equal to the indicated minimum supported value by running the command 'asmcmd setattr -G diskgroup compatible.asm value'.
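
For example, an illustrative use of the command named in the Action, run as the Grid user; 'DATA' and the version shown are examples only.

    asmcmd setattr -G DATA compatible.asm 12.1.0.0.0
    asmcmd lsattr -l -G DATA compatible.asm    # verify the new value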

PRVE-03176: An error occurred while checking ASM disk group compatibility attribute.

Cause: An ASM query to obtain details of the ASM compatibility disk group attribute failed. Accompanying error messages provide detailed failure information.

Action: Examine the accompanying error messages for details, resolve problems identified and retry.

PRVE-03180: RDBMS compatibility for ASM disk group "{1}" is set to "{2}", which is less than the minimum supported value "{3}".

Cause: A query showed that the ASM disk group attribute "compatible.rdbms" for the indicated disk group was set to a value less than the minimum supported value.

Action: Ensure that the RDBMS compatibility of the indicated disk group is set to a value greater than or equal to the indicated minimum supported value by running the command 'asmcmd setattr -G diskgroup compatible.rdbms value'.

PRVE-03181: An error occurred while checking ASM disk group RDBMS compatibility attribute.

Cause: An ASM query to obtain details of the RDBMS compatibility disk group attribute failed. Accompanying error messages provide detailed failure information.

Action: Examine the accompanying error messages for details, resolve problems identified and retry.

PRVE-03185: One or more ASM disk rebalance operations found in WAIT status

Cause: A query on V$ASM_OPERATION showed one or more ASM disk rebalance operations in WAIT status.

Action: Identify the ASM rebalance operations in WAIT status by running the query "SELECT * FROM V$ASM_OPERATION WHERE PASS LIKE 'REBALANCE' AND STATE LIKE 'WAIT'" and resume the operations by altering rebalance power to a non-zero value for the related ASM disk groups.

PRVE-03186: Error occurred while checking ASM disk rebalance operations in WAIT status

Cause: An ASM query to obtain details of the ASM disk rebalance operations failed. Accompanying error messages provide detailed failure information.

Action: Examine the accompanying error message for details, resolve problems identified and retry.

PRVE-03191: Free space on one or more ASM disk groups is below the recommended value of {3}.

Cause: A query on V$ASM_DISKGROUP showed that the free space on one or more ASM disk groups was below the indicated value.

Action: Refer to MOS Note 473271.1 to take corrective action.

PRVE-03192: Error occurred while checking ASM disk group free space.

Cause: An ASM query to obtain details of the ASM disk group failed. Accompanying error messages provide detailed failure information.

Action: Examine the accompanying error message for details, resolve problems identified and retry.

PRVE-03202: User "{0}" does not have the operating system privilege "{1}" on node "{2}"

Cause: A Direct Access (DAX) device '/dev/dax' was mounted on the indicated node, but the Oracle user did not have the required operating system privilege to access this device.

Action: Ensure that the Oracle user has the indicated privilege. The privilege to access DAX device can be granted by running the command 'usermod -S files -K defaultpriv+=basic,dax_access Oracle user' as root.

PRVE-03206: The disks in the ASM disk group "{1}" have different sizes.

Cause: The Configuration Verification Utility (CVU) found that disk size was not consistent across the disks in the indicated ASM disk group.

Action: Use the query "SELECT D.PATH,D.OS_MB,G.NAME FROM V$ASM_DISK D, V$ASM_DISKGROUP G WHERE G.GROUP_NUMBER=D.GROUP_NUMBER" to retrieve the details of ASM disk sizes in the indicated ASM disk group. Ensure that disk size is consistent across the indicated ASM disk group and retry the operation.

PRVE-03207: Error occurred while checking ASM disk size consistency.

Cause: An ASM query to obtain details of the ASM disk group failed. Accompanying error messages provide detailed failure information.

Action: Examine the accompanying error message for details, resolve problems identified and retry the operation.

PRVE-03211: Software version of ASM client "{1}" is "{2}", which is less than the minimum supported value "{3}".

Cause: A query showed that the software version of the indicated ASM client was less than the minimum supported value.

Action: Upgrade the client to a later release that is compatible with Oracle ASM and retry the operation.

PRVE-03212: An error occurred while checking compatibility of ASM client.

Cause: An ASM query to obtain details of the active ASM clients failed. Accompanying error messages provide detailed failure information.

Action: Examine the accompanying error messages for details, resolve problems identified and retry.

PRVE-10073: Required /boot data is not available on node "{0}"

Cause: The file '/boot/symvers-<kernel_release>.gz', required for proper driver installation, was not found on the indicated node.

Action: Ensure that /boot is mounted and /boot/symvers-<kernel_release>.gz is available on the node.

PRVE-10077: NOZEROCONF parameter was not specified or was not set to 'yes' in file "/etc/sysconfig/network" on node "{0}"

Cause: During the NOZEROCONF check, it was determined that the NOZEROCONF parameter was not specified or was not set to 'yes' in the /etc/sysconfig/network file.

Action: Ensure that NOZEROCONF is set to 'yes' in /etc/sysconfig/network to prevent 169.254/16 addresses from being added to the routing table.
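
For example, a quick check and the expected entry in the file:

    grep NOZEROCONF /etc/sysconfig/network
    # the file should contain the line:
    # NOZEROCONF=yes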

PRVE-10078: LINKLOCAL_INTERFACES network parameter was defined in the file "/etc/sysconfig/network/config" on node "{0}".

Cause: During LINKLOCAL_INTERFACES parameter check, it was determined that the LINKLOCAL_INTERFACES network parameter was defined in the /etc/sysconfig/network/config file.

Action: Ensure that the LINKLOCAL_INTERFACES network parameter is not defined in the /etc/sysconfig/network/config file to avoid having the link local addresses 169.254/16 added to the routing table.

PRVE-10079: Parameter "{0}" value in the file "{1}" cannot be verified on node "{2}".

Cause: An error occurred while verifying the indicated parameter value in the indicated file on the indicated node. The accompanying messages provide detailed failure information.

Action: Examine the accompanying messages, resolve the indicated problems, and then retry the operation.

PRVE-10083: Java Virtual Machine is not installed properly

Cause: There were not enough JAVA objects in the DBA_OBJECTS table.

Action: Refer to MOS Note 397770.1 to take corrective action.

PRVE-10084: Error while checking JAVAVM installation in the database

Cause: An error occurred while performing the check.

Action: Look at the accompanying messages for details on the cause of failure.

PRVE-10094: Error while checking Time Zone file

Cause: An error occurred while performing the check.

Action: Look at the accompanying messages for details on the cause of failure.

PRVE-10104: Error while checking standby databases

Cause: An error occurred while checking standby databases.

Action: Look at the accompanying messages for details on the cause of failure.

PRVE-10113: "multi-user-server" service is "{0}" on node "{1}"

Cause: The 'svcs svc:/milestone/multi-user-server' command reported that the multi-user-server was not online on the specified node.

Action: Consult the OS documentation or System Administrator to bring up the multi-user-server service.

PRVE-10114: "multi-user" service is "{0}" on node "{1}"

Cause: The 'svcs svc:/milestone/multi-user' command reported that the multi-user was not online on the specified node.

Action: Consult the OS documentation or System Administrator to bring up the multi-user service.

PRVE-10115: Error while checking multiuser service

Cause: An error occurred while checking the multiuser service.

Action: Look at the accompanying messages for details on the cause of failure.

PRVE-10123: Selected "{0}" group "{1}" is not same as the currently configured group "{2}" for existing Oracle Clusterware home "{3}"

Cause: An attempt to upgrade the database was rejected because the selected group was not the group configured for the existing Oracle Clusterware installation.

Action: Select the same groups as the currently configured groups of the existing Oracle Clusterware installation.

PRVE-10124: Current selection of the "{0}" group could not be retrieved

Cause: The indicated group was not selected or set to a valid operating system group name.

Action: Ensure that the indicated group is selected and set to a valid operating system group name.

PRVE-10125: Error while checking privileged groups consistency. \nError: {0}

Cause: An error occurred while checking privileged groups consistency.

Action: Look at the accompanying messages for details on the cause of failure.

PRVE-10126: Configured "{0}" group for Oracle Clusterware home "{1}" could not be retrieved

Cause: The indicated group could not be retrieved using the 'osdbagrp' utility from the identified Oracle Clusterware home.

Action: Ensure that the indicated group is configured correctly and that the 'osdbagrp' utility reports it correctly from the identified Oracle Clusterware home.

PRVE-10128: Selected "{0}" group "{1}" is not same as the currently configured group "{2}" for existing database home "{3}"

Cause: An attempt to upgrade the database was rejected because the selected group was not the group configured for the existing database installation.

Action: Select the same groups as the currently configured groups of the existing database installation.

PRVE-10129: Configured "{0}" group for database home "{1}" could not be retrieved

Cause: The indicated group could not be retrieved using the 'osdbagrp' utility from the identified database home.

Action: Ensure that the indicated group is configured correctly and that the 'osdbagrp' utility reports it correctly from the identified database home.

PRVE-10138: FILESYSTEMIO_OPTIONS is not set to the recommended value of setall

Cause: An attempt to match the value of parameter FILESYSTEMIO_OPTIONS with the recommended value failed.

Action: Set FILESYSTEMIO_OPTIONS to the recommended value using the SQL statement 'alter system set'.
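
For example, a minimal sketch run as a SYSDBA user; the parameter is not dynamic, so it is set in the SPFILE and the instances must be restarted for the change to take effect.

    echo "ALTER SYSTEM SET filesystemio_options=SETALL SCOPE=SPFILE SID='*';" | sqlplus -s / as sysdba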

PRVE-10139: Error while checking FILESYSTEMIO_OPTIONS

Cause: An attempt to check the value of the parameter FILESYSTEMIO_OPTIONS failed because the database was not configured correctly.

Action: Ensure that the database is configured correctly and retry the operation.

PRVE-10150: The current IP hostmodel configuration for both IPV4 and IPV6 protocols does not match the required configuration on node "{0}". [Expected = "{1}" ; Found = "{2}"]

Cause: The IP hostmodel configuration on the indicated node for the specified protocols was 'strong' and should have been 'weak'.

Action: Modify the IP hostmodel configuration to meet the required configuration. Use command 'ipadm set-prop -p hostmodel=weak [ipv4|ipv6]' to modify the IP hostmodel configuration.
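
For example, a minimal sketch run as root on the Oracle Solaris node:

    ipadm set-prop -p hostmodel=weak ipv4
    ipadm set-prop -p hostmodel=weak ipv6
    ipadm show-prop -p hostmodel ipv4    # verify
    ipadm show-prop -p hostmodel ipv6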

PRVE-10151: The current IP hostmodel configuration for "{0}" protocol does not match the required configuration on node "{1}". [Expected = "{2}" ; Found = "{3}"]

Cause: The IP hostmodel configuration on the indicated node for the specified protocol was 'strong' and should have been 'weak'.

Action: Modify the IP hostmodel configuration to meet the required configuration. Use command 'ipadm set-prop -p hostmodel=weak [ipv4|ipv6]' to modify the IP hostmodel configuration.

PRVE-10155: GSD resource is running and enabled on node "{0}".

Cause: GSD was found to be running and enabled on the indicated node.

Action: Stop and disable GSD using the commands 'srvctl stop nodeapps -g' and 'srvctl disable nodeapps -g' respectively.

PRVE-10156: GSD resource is enabled on node "{0}".

Cause: GSD was found to be enabled on the indicated node.

Action: Disable GSD using the command 'srvctl disable nodeapps -g'.

PRVE-10167: I/O Completion Ports (IOCP) device status did not match the required value on node "{0}". [Expected = "Available"; Found = "{1}"]

Cause: IOCP device status was not 'Available' on the indicated node. The IOCP device status must be 'Available' in order to list the candidate disks when creating an ASM disk group.

Action: Log in as the root user and change the IOCP device status to 'Available' using the command '/bin/smitty iocp' and reboot the node for the changes to take effect.

PRVE-10168: Failed to obtain the I/O Completion Ports (IOCP) device status using command "{0}" on node "{1}"

Cause: An attempt to obtain the status of the IOCP device failed on the indicated node.

Action: Ensure that the identified command exists and the current user has read/execute permissions for it.

PRVE-10169: North America region (nam) is not installed on node "{0}".

Cause: The command 'localeadm -q nam' reported that the North America region (nam) was not installed on the specified node.

Action: North America region (nam) must be installed. To install it, use 'localeadm -a nam -d <packages_path>', where <packages_path> is the full path to a directory where the Solaris packages are available, such as '/cdrom/cdrom0/s0/Solaris_10/Product' if using a Solaris Compact Disc.

PRVE-10170: An error occurred when trying to determine if North America region (nam) is installed on node "{0}".

Cause: An error occurred while executing the command 'localeadm -q nam', and it could not be determined whether North America region (nam) was installed on the specified node.

Action: North America region (nam) must be installed. Consult the OS documentation or System Administrator to diagnose and fix the issue with the command 'localeadm -q nam', and install the region using the command 'localeadm -a nam -d <packages_path>', where <packages_path> is the full path to a directory where the Solaris packages are available, such as '/cdrom/cdrom0/s0/Solaris_10/Product' if using a Solaris Compact Disc.

PRVE-10171: English locale is not installed on node "{0}".

Cause: The command 'pkg facet -H *locale.en*' reported that the English locale was not installed on the specified node.

Action: The English locale must be installed. Issue 'pkg change-facet locale.en_US=true' to install it.

PRVE-10172: An error occurred when trying to determine if English locale is installed on node "{0}".

Cause: An error occurred while executing the command 'pkg facet -H *locale.en*'. Installation of the English locale on the node could not be verified.

Action: English locale must be installed. Consult the OS documentation or System Administrator to diagnose and fix the issue with the command 'pkg facet -H *locale.en*' and install the locale using the command 'pkg change-facet locale.en_US=true'.

PRVE-10183: File system path "{0}" is mounted with 'nosuid' option on node "{1}".

Cause: The identified file system path was mounted with the 'nosuid' option on the indicated node. This mount option causes permission problems in the cluster.

Action: Ensure that the identified file system path is not mounted with the 'nosuid' option.

PRVE-10184: Could not find file system for the path "{0}" using command "{1}" on node "{2}"

Cause: An error occurred while determining the file system for the identified path on the indicated node.

Action: Make sure that the identified path is a valid absolute path on the indicated node.

PRVE-10210: error writing to the output file "{0}" for verification type "{1}": "{2}"

Cause: An error was encountered while writing the indicated output file.

Action: Correct the problem indicated in the accompanying messages, then reissue the original command.

PRVE-10211: An error occurred while writing the report.

Cause: An error was encountered while writing one or more output files.

Action: Correct the problem indicated in the accompanying messages, then reissue the original command.

PRVE-10232: Systemd login manager parameter 'RemoveIPC' is enabled in the configuration file "{0}" on node "{1}". [Expected="no"; Found="{2}"]

Cause: The 'RemoveIPC' systemd login manager parameter was found to be enabled on the indicated node. Enabling this parameter causes termination of Automatic Storage Management (ASM) instances when the last oracle/grid user session logs out.

Action: Set the 'RemoveIPC' systemd login manager parameter to 'no' in the identified configuration file on the indicated node.

PRVE-10233: Systemd login manager parameter 'RemoveIPC' entry does not exist or is commented out in the configuration file "{0}" on node "{1}". [Expected="no"]

Cause: The 'RemoveIPC' systemd login manager parameter entry was not found or was commented out in the identified configuration file on the indicated node. By default this parameter is enabled and it causes termination of Automatic Storage Management (ASM) instances when the last oracle/grid user session logs out.

Action: Add the entry 'RemoveIPC=no' to the identified configuration file on the indicated node.
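
For example, a minimal sketch assuming the configuration file is /etc/systemd/logind.conf (the message reports the exact file checked); run as root:

    echo "RemoveIPC=no" >> /etc/systemd/logind.conf
    systemctl restart systemd-logind    # apply the new login manager setting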

PRVE-10237: Existence of files "{1}" is not expected on node "{0}" before Clusterware installation or upgrade.

Cause: The indicated files were found on the specified node.

Action: Remove the indicated files and retry.

PRVE-10238: Error occurred while running commands "{1}" on node "{0}" to check for ASM Filter Driver configuration

Cause: An attempt to check for ASM Filter Driver configuration by running the indicated commands failed.

Action: Ensure that the identified commands exist on the indicated node and that the current user has read/execute permissions and retry.

PRVE-10239: ASM Filter Driver "{1}" is not expected to be loaded on node "{0}" before Clusterware installation or upgrade.

Cause: An attempt to install or upgrade Clusterware on the indicated node was rejected because the indicated ASM Filter Driver was already loaded.

Action: Refer to OS documentation to remove the indicated driver and retry the operation.

PRVE-10243: Failed to mount "{0}" at location "{1}" with NFS mount options "{2}".

Cause: An attempt to mount the indicated file system at the indicated location with the indicated mount options failed because the 'insecure' NFS export option was not used. The 'insecure' option was required for Oracle Direct NFS to make connections using a non-privileged source port.

Action: Ensure that the indicated network file system is exported with the "insecure" option and retry the operation. If the retry fails, examine the accompanying error message, resolve the problems indicated there, and then retry the operation again.

PRVE-10248: The file "{0}" either does not exist or is not accessible on node "{1}".

Cause: A Configuration Verification Utility (CVU) operation could not complete, because the indicated file was not accessible on the node shown.

Action: Ensure that the indicated file exists and can be accessed on the indicated node.

PRVE-10253: The path "{0}" either does not exist or is not accessible on node "{1}".

Cause: The Configuration Verification Utility (CVU) determined that the indicated path was not accessible.

Action: Ensure that the indicated path exists and can be accessed by the current user on the indicated node.

PRVE-10254: The path "{0}" does not have read permission for the current user on node "{1}".

Cause: A check for access control attributes found that the indicated path did not have read permission for the current user on the indicated node.

Action: Ensure that the current user has read permission for the indicated path on the indicated node, and then retry the operation.

PRVE-10255: The path "{0}" does not have write permission for the current user on node "{1}".

Cause: A check for access control attributes found that the indicated path did not have write permission for the current user on the indicated node.

Action: Ensure that the current user has write permission for the indicated path on the indicated node, and then retry the operation.

PRVE-10256: The path "{0}" does not have execute permission for the current user on node "{1}".

Cause: A check for access control attributes found that the indicated path did not have execute permission for the current user on the indicated node.

Action: Ensure that the current user has execute permission for the indicated path on the indicated node, and then retry the operation.

PRVE-10257: The path "{0}" permissions did not match the expected octal value on node "{1}". [Expected = "{2}" ; Found = "{3}"]

Cause: A check for access control attributes found that the permissions of the indicated path on the indicated node were different from the required permissions.

Action: Change the permissions of the indicated path to match the required permissions.

PRVE-10258: The path "{0}" permissions for owner did not match the expected octal value on node "{1}". [Expected = "{2}" ; Found = "{3}"]

Cause: A check for access control attributes found that the owner permissions of the indicated path on the indicated node were different from the required permissions.

Action: Change the owner permissions of the indicated path to match the required permissions.

PRVE-10259: The path "{0}" permissions for group did not match the expected octal value on node "{1}". [Expected = "{2}" ; Found = "{3}"]

Cause: A check for access control attributes found that the group permissions of the indicated path on the indicated node were different from the required permissions.

Action: Change the group permissions of the indicated path to match the required permissions.

PRVE-10260: The path "{0}" permissions for others did not match the expected octal value on node "{1}". [Expected = "{2}" ; Found = "{3}"]

Cause: A check for access control attributes found that the others permissions of the indicated path on the indicated node were different from the required permissions.

Action: Change the others permissions of the indicated path to match the required permissions.

PRVE-10261: The path "{0}" owner did not match the expected value on node "{1}". [Expected = "{2}" ; Found = "{3}"]

Cause: A check for access control attributes found that the owner of the indicated path on the indicated node was different from the required owner.

Action: Change the owner of the indicated path to match the required owner.

PRVE-10262: The path "{0}" group did not match the expected value on node "{1}". [Expected = "{2}" ; Found = "{3}"]

Cause: A check for access control attributes found that the group of the indicated path on the indicated node was different from the required group.

Action: Change the group of the indicated path to match the required group.
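
As a hypothetical illustration only (the path, user name, and group name shown are placeholders; use the values reported in the message), the owner and group of a path can be corrected by a suitably privileged user with:

    chown grid /u01/app/example_path
    chgrp oinstall /u01/app/example_path

Both settings can also be changed in one step with 'chown grid:oinstall /u01/app/example_path'.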

PRVE-10266: Error occurred while running command "{1}" on node "{0}" to check for logical partition capacity entitlement.

Cause: An attempt to check for logical partition entitled capacity by running the indicated command failed. The accompanying messages provide detailed failure information.

Action: Examine the accompanying messages, resolve the identified issues, and then retry the operation after ensuring that the identified command exists on the indicated node and that the current user has permission to execute the command.

PRVE-10269: Logical partition entitled processing capacity is configured with a value less than expected on node "{0}". [Expected = "{2}" ; Found = "{1}"]

Cause: A check for capacity entitlement on the indicated node found that the entitled processing capacity is less than the expected value.

Action: Refer to OS documentation to configure capacity entitlement of the logical partition to more than the indicated expected value, and then retry the operation.

PRVE-10303: No such entry as "{0}" exists in the configuration file "{1}" on node "{2}"

Cause: The indicated configuration file does not contain the indicated entry. This will result in the 'systemd-tmpfiles' service removing required communication socket files from the directory named in the indicated entry.

Action: Edit the indicated configuration file to create the indicated entry in it.
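
As an illustration only (the required entry and configuration file are the ones named in the message; the path shown here is merely a common example for Oracle Clusterware socket files), a 'systemd-tmpfiles' exclusion entry uses the 'x' type to prevent cleanup of a path:

    x /var/tmp/.oracle*

After the configuration file is edited, the entry takes effect the next time the 'systemd-tmpfiles' cleanup runs.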

PRVE-10314: Kernel does not have retpoline enabled on node "{0}".

Cause: The Configuration Verification Utility (CVU) determined that the kernel did not have the Spectre V2 retpoline mitigation enabled, which impacted the performance of Oracle ACFS.

Action: Ensure that the kernel has the retpoline mitigation enabled. Consult the operating system vendor documentation on the Spectre V2 retpoline mitigation to ensure that the retpoline capability is correctly installed and enabled.
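
One way to inspect the retpoline status on recent Linux kernels (file availability and output format vary by kernel version and vendor, so treat this only as an example check) is to read the Spectre V2 mitigation report:

    cat /sys/devices/system/cpu/vulnerabilities/spectre_v2

Output that mentions "retpoline" generally indicates that the mitigation is active; consult the vendor documentation for the authoritative interpretation.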

PRVE-10318: rds_tcp module not loaded in the kernel on node "{0}"

Cause: The Configuration Verification Utility (CVU) determined that the rds_tcp module was not loaded in the kernel on the indicated node.

Action: Ensure that the rds_tcp module is loaded in the kernel on the indicated node by running the command '/usr/sbin/modprobe rds_tcp'.
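
For illustration, the module state can be checked and the module loaded as the root user with:

    lsmod | grep rds_tcp
    /usr/sbin/modprobe rds_tcp

On systemd-based distributions, the module can additionally be loaded automatically at boot by listing it in a file under '/etc/modules-load.d/', for example a file named 'rds_tcp.conf' (the file name is a hypothetical example) containing the single line 'rds_tcp'.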

PRVE-10323: Systemd is not in running state on node "{0}". [Found state = "{1}"]

Cause: The Configuration Verification Utility (CVU) determined that systemd was not in the running state on the indicated node.

Action: Ensure that systemd is in the running state and that the system is fully operational on the indicated node.
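
As an example check using standard systemd tools, the overall state and any failed units can be inspected with:

    systemctl is-system-running
    systemctl list-units --failed

A result of "running" indicates a fully operational system; states such as "degraded" or "starting" point to failed units or to a boot that has not yet completed.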

PRVE-10327: Operating system package "{0}" required for the kernel crash dumping mechanism was not installed on node "{1}". Command "{2}" returned "{3}"

Cause: The Configuration Verification Utility (CVU) determined that the indicated Operating System package required for the kernel crash dumping mechanism was not installed on the indicated node.

Action: Ensure that the indicated package is installed and that the kernel crash dumping mechanism is enabled and in active state on the indicated node.

PRVE-10328: Kernel crash dumping mechanism was not enabled on node "{0}". Command "{1}" returned "{2}"

Cause: The Configuration Verification Utility (CVU) determined that the kernel crash dumping mechanism was not enabled on the indicated node.

Action: Ensure that the kernel crash dumping mechanism is enabled and in an active state on the indicated node. The operating system command 'systemctl enable kdump.service' may be issued as the root user to enable the kernel crash dumping mechanism, and 'systemctl start kdump.service' may be issued as the root user to start it.

PRVE-10329: Kernel crash dumping mechanism was not active on node "{0}". Command "{1}" returned "{2}"

Cause: The Configuration Verification Utility (CVU) determined that the kernel crash dumping mechanism was not active on the indicated node.

Action: Ensure that the kernel crash dumping mechanism is in an active state on the indicated node. The operating system command 'systemctl start kdump.service' may be issued as the root user to activate the kernel crash dumping mechanism.
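
An example sequence on a systemd-based system (the package name is the one reported in the message; 'kexec-tools' is shown only as a common example on RPM-based distributions):

    rpm -q kexec-tools
    systemctl enable kdump.service
    systemctl start kdump.service
    systemctl is-active kdump.service

The enable and start commands are typically run as the root user.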

PRVE-10333: Oracle ACFS member count "{0}" is not equal to the node count "{1}" on the cluster

Cause: The Configuration Verification Utility (CVU) determined that the Oracle Automatic Storage Management Cluster File System (Oracle ACFS) member count was not equal to the node count.

Action: Restart Cluster Ready Services (CRS), or just the Oracle Automatic Storage Management (ASM) proxy, on each node, restarting the Oracle ACFS primary node last, as follows: 1. Run 'acfsutil cluster info | grep -i master' to identify the Oracle ACFS primary node. 2. Restart CRS (or just the ASM proxy) on each non-primary node in any order. 3. Restart CRS (or just the ASM proxy) on the Oracle ACFS primary node last. Alternatively, upgrade Oracle Grid Infrastructure to a release that contains the fix for bug 29391849 (18.8/19.5). Refer to MOS note 2584567.1 for additional details.
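
As an example of the restart sequence described above (the 'crsctl' commands are run as the root user from the Oracle Grid Infrastructure home on each node):

    # 1. Identify the Oracle ACFS primary node.
    acfsutil cluster info | grep -i master
    # 2. On each non-primary node, then on the primary node last:
    crsctl stop crs
    crsctl start crs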

PRVE-10334: Local node "{0}" is not part of the Oracle ACFS cluster.

Cause: The Configuration Verification Utility (CVU) determined that the indicated node was not part of the Oracle Automatic Storage Management Cluster File System (Oracle ACFS) cluster.

Action: Contact Oracle Support.

PRVE-10335: Oracle ACFS proxy process is not running on node "{0}"

Cause: The Configuration Verification Utility (CVU) determined that the Oracle Automatic Storage Management Cluster File System (Oracle ACFS) proxy process was not running on the indicated node.

Action: Ensure that the Cluster Ready Services (CRS) stack is up on the indicated node and retry the verification.
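
For illustration, the state of the CRS stack can be checked, and the stack started if necessary, as the root user from the Oracle Grid Infrastructure home on the indicated node:

    crsctl check crs
    crsctl start crs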

PRVE-10336: Trace file for Oracle ACFS proxy process with PID "{0}" not found on node "{1}"

Cause: The Configuration Verification Utility (CVU) determined that the trace file for Oracle Automatic Storage Management Cluster File System (Oracle ACFS) proxy was not found on the indicated node.

Action: Contact Oracle Support.

PRVE-10337: failed to get the cluster information for current incarnation "{0}" in the trace file "{1}" on node "{2}"

Cause: The Configuration Verification Utility (CVU) determined that the cluster information for the current incarnation was not found in the indicated trace file on the indicated node.

Action: Contact Oracle Support.

PRVE-10345: Kernel parameter "{0}" for private interconnect network interfaces "{1}" does not have expected current value on node "{2}" [Expected = "{3}" ; Found = "{4}"].

Cause: The Configuration Verification Utility (CVU) determined that the indicated Address Resolution Protocol (ARP) kernel parameter was not set to the expected current value for the identified private interconnect interfaces on the indicated node.

Action: Ensure that the indicated ARP kernel parameter is set to the expected current value for all the identified private interconnect interfaces on the indicated node.

PRVE-10346: Kernel parameter "{0}" for private interconnect network interfaces "{1}" does not have expected configured value on node "{2}" [Expected = "{3}" ; Found = "{4}"].

Cause: The Configuration Verification Utility (CVU) determined that the indicated Address Resolution Protocol (ARP) kernel parameter was not set to the expected configured value for the identified private interconnect interfaces on the indicated node.

Action: Ensure that the indicated ARP kernel parameter is set to the expected configured value for all the identified private interconnect interfaces on the indicated node.
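
As a hypothetical illustration (the parameter name, interface name, value, and file name shown are placeholders; use the parameter, interfaces, and expected values reported in the message), the current value can be inspected and set with 'sysctl', and the configured value persisted under '/etc/sysctl.d/':

    sysctl net.ipv4.conf.eth1.arp_ignore
    sysctl -w net.ipv4.conf.eth1.arp_ignore=1
    echo "net.ipv4.conf.eth1.arp_ignore = 1" >> /etc/sysctl.d/98-example-arp.conf
    sysctl --system

The runtime setting corresponds to the "current value" check, and the '/etc/sysctl.d/' entry corresponds to the "configured value" check.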

PRVE-10350: Current clock source is not set to the expected value on node "{0}" [Expected = "{1}" ; Found = "{2}"].

Cause: The Configuration Verification Utility (CVU) determined that the current clock source was not set to the expected value on the indicated node, which would impact the database performance.

Action: Ensure that the current clock source is set to the expected value on the indicated node.
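
One way to inspect and adjust the clock source on Linux, shown only as an example ("tsc" is a hypothetical expected value; use the expected value reported in the message, and run the echo command as the root user):

    cat /sys/devices/system/clocksource/clocksource0/available_clocksource
    cat /sys/devices/system/clocksource/clocksource0/current_clocksource
    echo tsc > /sys/devices/system/clocksource/clocksource0/current_clocksource

To make the setting persistent across reboots, the kernel boot parameter 'clocksource=tsc' can be added through the boot loader configuration.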

PRVE-10354: Network parameter "{0}" is not set to 0 or 2 on node "{1}".

Cause: The Configuration Verification Utility (CVU) determined that the specified network parameter value was not set to either 0 or 2 on the indicated node.

Action: Ensure that the specified network parameter is set to either 0 or 2 on the indicated node by using the command '/usr/bin/ndd -set /dev/ip ip_strong_es_model 0'. This setting allows Oracle Clusterware to function correctly. Update the specified network parameter value in the '/etc/rc.config.d/nddconf' file to retain this setting across reboots.
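
As an illustration, the current value can be checked with:

    /usr/bin/ndd -get /dev/ip ip_strong_es_model

and an entry of the following general form can be added to '/etc/rc.config.d/nddconf' to persist it (the index "[0]" must be the next unused index in that file; consult the HP-UX documentation for the exact 'nddconf' format):

    TRANSPORT_NAME[0]=ip
    NDD_NAME[0]=ip_strong_es_model
    NDD_VALUE[0]=0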

PRVE-10355: failed to retrieve the network parameter "{0}" value on node "{1}"

Cause: The specified network parameter value could not be retrieved using the '/usr/bin/ndd' command on the indicated node.

Action: Ensure that the user running the command has the required privileges, and then retry the operation.

PRVE-10362: Process "{0}" is running with cgroup CPU parameter "{1}" configured with value "{2}" in "{3}" which is less than the recommended value of "{4}" for the real-time scheduling of this process on node "{5}".

Cause: The Configuration Verification Utility (CVU) determined that the identified real-time CPU scheduling parameter value was not set to the indicated required value inside the indicated cgroup configuration file on the indicated node.

Action: Ensure that the indicated real-time CPU scheduling parameter is correctly configured in the indicated cgroup configuration file on the identified node. Refer to MOS note 2718971.1 to take corrective action.
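
One way to inspect the setting under cgroup v1, shown only as an example (the PID, mount point, and cgroup path are hypothetical; the actual parameter, file, and recommended value are given in the message, and the corrective steps are in MOS note 2718971.1):

    cat /proc/12345/cgroup
    cat /sys/fs/cgroup/cpu,cpuacct/system.slice/cpu.rt_runtime_us

The first command shows which cgroup the process belongs to; the second reads the real-time runtime allocation for that cgroup.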

PRVE-10363: failed to check the real-time CPU priority of the processes which must run in real-time on node "{0}"

Cause: The real-time CPU priority configuration setting of the critical processes could not be retrieved from the cgroup on the indicated node.

Action: Refer to MOS note 2718971.1 to take corrective action.

PRVE-10372: Failed to check the status of "{0}" service on node "{1}". command "{2}" failed with error: "{3}"

Cause: The Configuration Verification Utility (CVU) failed to obtain the current status of the indicated service on the indicated node.

Action: Ensure that the indicated service is configured and running on the indicated node. Examine the accompanying error messages, resolve the problems indicated there, and then retry the operation.
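
For illustration only ('docker.service' is a hypothetical example; use the service named in the message), the current status and the loaded unit file can be inspected with:

    systemctl status docker.service
    systemctl cat docker.service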

PRVE-10373: failed to verify the service file "{0}" used by the "{1}" service on node "{2}"

Cause: The Configuration Verification Utility (CVU) failed to verify the contents of the indicated service file on the indicated node.

Action: Ensure that the indicated service file exists on the indicated node. Examine the accompanying error messages, resolve the problems indicated there, and then retry the operation.

PRVE-10374: The service file "{0}" loaded by the "{1}" service on node "{2}" does not have the real time mode set.

Cause: The Configuration Verification Utility (CVU) determined that the indicated service on the indicated node was running with the indicated service file, which was missing the real-time mode setting.

Action: Ensure that the indicated service is configured to run with a service file that specifies '--cpu-rt-runtime=950000' as part of the ExecStart= value under the [Service] section.

PRVE-10375: The "{0}" service on node "{1}" has not loaded the latest content of the service file "{2}" which has the real time mode set.

Cause: The Configuration Verification Utility (CVU) determined that the indicated service on the indicated node was running with the stale content of the indicated service file.

Action: Ensure that the indicated service is restarted so that it runs with a service file that specifies '--cpu-rt-runtime=950000' as part of the ExecStart= value under the [Service] section.
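
As an example ('docker.service' is a hypothetical service name; use the service and service file named in the message), once the service file contains the real-time mode setting, reload systemd, restart the service, and confirm the loaded ExecStart= value:

    systemctl daemon-reload
    systemctl restart docker.service
    systemctl show docker.service --property=ExecStart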

PRVE-10376: real-time CPU scheduling parameter "{0}" is set to a value "{1}" on the container host "{2}" in file "{3}" which is less than the recommended value of "{4}"

Cause: The Configuration Verification Utility (CVU) determined that the identified real-time CPU scheduling parameter value was not set to the indicated required value inside the indicated cgroup configuration file on the indicated node.

Action: Ensure that the indicated real-time CPU scheduling parameter is correctly configured in the indicated cgroup configuration file on the identified node. Refer to MOS note 2718971.1 to take corrective action.

PRVE-10377: real-time CPU scheduling parameter "{0}" is not set to the recommended value "{1}" on the container host "{2}"

Cause: The Configuration Verification Utility (CVU) determined that the indicated real-time CPU scheduling parameter value was not set to the recommended value on the indicated node.

Action: Ensure that the indicated real-time CPU scheduling parameter is set to the recommended value of -1 on the indicated node.

PRVE-10378: real-time CPU scheduling parameter "{0}" is set to "{1}" in file "{2}" which is not the recommended value "{3}" on the container host "{4}"

Cause: The Configuration Verification Utility (CVU) determined that the indicated real-time CPU scheduling parameter value was not set to the recommended value in the specified file on the indicated node.

Action: Ensure that the indicated real-time CPU scheduling parameter is set to the recommended value of -1 in the specified file on the indicated node.
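
As a hypothetical illustration (the parameter name and the configuration file name shown are placeholders; use the parameter and file reported in the message, and run the commands on the container host rather than inside the container):

    sysctl -w kernel.sched_rt_runtime_us=-1
    echo "kernel.sched_rt_runtime_us = -1" >> /etc/sysctl.d/99-example-rt.conf
    sysctl --system

The 'sysctl -w' command applies the runtime value, and the '/etc/sysctl.d/' entry persists it across reboots.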