4 Known Issues with Oracle Database Appliance in This Release
The following are known issues deploying, updating, and managing Oracle Database Appliance in this release.
- Known Issues When Patching Oracle Database Appliance
  Understand the known issues when patching Oracle Database Appliance to this release.
- Known Issues When Deploying Oracle Database Appliance
  Understand the known issues when provisioning or deploying Oracle Database Appliance.
- Known Issues When Managing Oracle Database Appliance
  Understand the known issues when managing or administering Oracle Database Appliance.
Known Issues When Patching Oracle Database Appliance
Understand the known issues when patching Oracle Database Appliance to this release.
- Error in server patching
  When patching Oracle Database Appliance which already has STIG V1R2 deployed, an error may be encountered.
- Error in patching on a newly-provisioned appliance
  When applying the patch for Oracle Database Appliance release 19.13 to a newly-provisioned system, an error may be encountered.
- AHF error in prepatch report for the update-dbhome command
  When you patch the server to Oracle Database Appliance release 19.13, the odacli update-dbhome command may fail.
- Error in running the update-dbhome command
  When you patch database homes to Oracle Database Appliance release 19.13, the odacli update-dbhome command may fail.
- Error in patching database
  When you patch Oracle Database 19c Standard Edition single-instance or Oracle RAC One Node database, an error is encountered.
- Error in patching prechecks report
  The patching prechecks report may display an error.
- Error when patching DB systems
  When patching DB systems on Oracle Database Appliance, an error may be encountered.
- Error in patching database
  When you patch Oracle Database 19c Standard Edition single-instance or Oracle RAC One Node database, an error is encountered.
- Error in listing patches
  When running the command odacli list-availablepatches, an error is encountered.
- Error in updating dbhome
  When you patch database homes to Oracle Database Appliance release 19.13, the odacli update-dbhome command may fail.
- Error when patching DB systems
  When patching DB systems on Oracle Database Appliance, an error may be encountered.
- Error in server patching
  When patching Oracle Database Appliance, errors may be encountered.
- Error in storage patching
  When patching Oracle Database Appliance, errors are encountered.
- Error in running the update-dbhome command
  When you patch database homes to Oracle Database Appliance release 19.11, the odacli update-dbhome command fails.
- Error message displayed even when patching Oracle Database Appliance is successful
  Although patching of Oracle Database Appliance was successful, an error message may be displayed.
- Error when running ORAChk or updating the server or database home
  When running Oracle ORAchk or the commands odacli create-prepatchreport, odacli update-server, or odacli update-dbhome, an error is encountered.
- Error in patching database homes
  An error is encountered when patching database homes on databases that have Standard Edition High Availability enabled.
- Server status not set to Normal when patching
  When patching Oracle Database Appliance, an error is encountered.
- Error when patching to 12.1.0.2.190716 Bundle Patch
  When patching Oracle Database release 12.1.0.2 to Oracle Database 12.1.0.2.190716 Bundle Patch, an error is encountered.
- Patching of M.2 drives not supported
  Patching of M.2 drives (local disks SSDSCKJB48 and SSDSCKJB480G7) is not supported.
Error in server patching
When patching Oracle Database Appliance which already has STIG V1R2 deployed, an error may be encountered.
When you run the odacli update-server -f version command, an error may be displayed.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
The STIG V1R2 rule OL7-00-040420 tries to change the permission of the file /etc/ssh/ssh_host_rsa_key from '640' to '600', which causes the error. During patching, run the following command on both nodes:
chmod 600 /etc/ssh/ssh_host_rsa_key
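The permission fix can be sketched as follows. This is a minimal sketch using a temporary stand-in file rather than the real host key; on the appliance, the chmod must be run as root against /etc/ssh/ssh_host_rsa_key on both nodes.

```shell
# Stand-in for /etc/ssh/ssh_host_rsa_key (a placeholder, not the real key)
key=$(mktemp)
chmod 640 "$key"           # the permission set by STIG rule OL7-00-040420
chmod 600 "$key"           # the workaround from the step above
perm=$(stat -c '%a' "$key")
echo "$perm"
```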
This issue is tracked with Oracle bug 33168598.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in patching on a newly-provisioned appliance
When applying the patch for Oracle Database Appliance release 19.13 to a newly-provisioned system, an error may be encountered. The following error message is displayed:
DCS-10001:Internal error encountered: Cluster ware is not running to get the resource details.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Restart the node. For high-availability systems, restart both nodes.
- Run the odacli update-server command again.
This issue is tracked with Oracle bug 33641038.
Parent topic: Known Issues When Patching Oracle Database Appliance
AHF error in prepatch report for the update-dbhome command
When you patch the server to Oracle Database Appliance release 19.13, the odacli update-dbhome command may fail.
Verify the Alternate Archive Destination is Configured to Prevent Database Hangs    Failed    AHF-4940: One or more log archive destination and alternate log archive destination settings are not as recommended
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Run the odacli update-dbhome command with the -f option:
/opt/oracle/dcs/bin/odacli update-dbhome --dbhomeid 7c67c5b4-f585-4ba9-865f-c719c63c0a6e -v 19.13.0.0.0 -f
This issue is tracked with Oracle bug 33144170.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in running the update-dbhome command
When you patch database homes to Oracle Database Appliance release 19.13, the odacli update-dbhome command may fail.
When you run the odacli update-dbhome command, due to the inclusion of the non-rolling DST patch, the job waits for 12,000 seconds (around 3.5 hours). The following error message is displayed:
PRCC-1021 : One or more of the submitted commands did not execute successfully.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Shut down and restart the failed database, and run the datapatch script manually to complete the database update:
db_home_path_the_database_is_running_on/OPatch/datapatch
- If the database is an Oracle ACFS database that was patched from a release earlier than 19.11, then run the odacli list-dbstorages command, and locate the corresponding entries by db_unique_name. Check whether the DATA and RECO destination locations exist in the result.
- For the DATA destination location, the value should be similar to the following:
/u02/app/oracle/oradata/db_unique_name
- For RECO, pre-process the value from the beginning to the last forward slash (/). For example:
/u03/app/oracle
addlFS = /u01/app/odaorahome,/u01/app/odaorabase0 (for single-node systems)
addlFS = /u01/app/odaorahome,/u01/app/odaorabase0,/u01/app/odaorabase1 (for high-availability systems)
- Run the srvctl command db_home_path_the_database_is_running_on/bin/srvctl modify database -d db_unique_name -acfspath $data,$reco,$addlFS -diskgroup DATA. For example:
srvctl modify database -d provDb0 -acfspath /u02/app/oracle/oradata/provDb0,/u03/app/oracle/,/u01/app/odaorahome,/u01/app/odaorabase0 -diskgroup DATA
This issue is tracked with Oracle bug 32740491.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in patching database
When you patch Oracle Database 19c Standard Edition single-instance or Oracle RAC One Node database, an error is encountered. The following error message is displayed:
PRGH-1106 : NULL
Hardware Models
All Oracle Database Appliance high-availability hardware models
Workaround
- Stop and restart the database:
srvctl stop database -database db_unique_name
srvctl start database -database db_unique_name
- Manually apply the datapatch:
db_home_path_the_database_is_running_on/OPatch/datapatch
- If you are patching an Oracle ACFS database from an Oracle Database Appliance release earlier than 19.11 to the latest release, then run the command odacli list-dbstorages and locate the corresponding entries by db_unique_name. From the results, identify the DATA and RECO destination locations if they exist.
For the DATA destination location, the value must be as follows:
/u02/app/oracle/oradata/db_unique_name
For RECO, pre-process the value from the beginning to the last forward slash (/). For the RECO destination location after pre-processing:
/u03/app/oracle
addlFS = /u01/app/odaorahome,/u01/app/odaorabase0 (for Oracle Database Appliance single-node systems)
addlFS = /u01/app/odaorahome,/u01/app/odaorabase0,/u01/app/odaorabase1 (for Oracle Database Appliance high-availability systems)
Run the following command:
db_home_path_the_database_is_running_on/bin/srvctl modify database -d db_unique_name -acfspath $data,$reco,$addlFS -diskgroup DATA
For example, on an Oracle Database Appliance high-availability system:
srvctl modify database -d provDb0 -acfspath /u02/app/oracle/oradata/provDb0,/u03/app/oracle/,/u01/app/odaorahome,/u01/app/odaorabase0 -diskgroup DATA
This issue is tracked with Oracle bug 33367771.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in patching prechecks report
The patching prechecks report may display an error.
Failure in the pre-patch report caused by “AHF-5190: operating system boot device order is not configured as recommended”
Hardware Models
Oracle Database Appliance X-7 hardware models
Workaround
Run the odacli update-server or odacli update-dbhome command with the -f option.
This issue is tracked with Oracle bug 33631256.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error when patching DB systems
When patching DB systems on Oracle Database Appliance, an error may be encountered.
Task failure:
KVM infra update_KvmLockContainer_25915 December 2, 2021 1:10:08 PM CET
December 2, 2021 1:10:08 PM CET InternalError
The dcs-agent.log file may contain the following entry:
com.oracle.dcs.commons.exception.DcsException: DCS-10001:Internal error encountered: Zookeeper is down.
This is because of an error in Zookeeper reconnection.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Run the odacli update-server command within the DB system again.
This issue is tracked with Oracle bug 33631056.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in patching database
When you patch Oracle Database 19c Standard Edition single-instance or Oracle RAC One Node database, an error is encountered.
Hardware Models
All Oracle Database Appliance high-availability hardware models
Workaround
Stop and restart the database:
srvctl stop database -database db_unique_name
srvctl start database -database db_unique_name
This issue is tracked with Oracle bug 33178198.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in listing patches
When running the command odacli list-availablepatches, an error is encountered.
# odacli list-availablepatches
DCS-10001:Internal error encountered: For input string: ""
Hardware Models
All Oracle Database Appliance high-availability hardware models
Workaround
In the /opt/oracle/oak/pkgrepos/System/19.6.0.0.0/patchmetadata.xml file, modify the database patch information and add targetVersion as follows:
<component baseversion="19.0.0.0" name="DB" repotag="19.6.0.0.200114" targetVersion="19.6.0.0.200114"></component>
<component baseversion="18.0.0.0" name="DB" repotag="18.9.0.0.200114" targetVersion="18.9.0.0.200114"></component>
<component baseversion="12.2.0.1" name="DB" repotag="12.2.0.1.200114" targetVersion="12.2.0.1.200114"></component>
<component baseversion="12.1.0.2" name="DB" repotag="12.1.0.2.200114" targetVersion="12.1.0.2.200114"></component>
<component baseversion="11.2.0.4" name="DB" repotag="11.2.0.4.200114" targetVersion="11.2.0.4.200114"></component>
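After editing the file, a quick sanity check is to confirm every DB component entry now carries a targetVersion attribute. This is a hedged sketch using a stand-in snippet; on the appliance, point the same greps at the patchmetadata.xml path from the workaround above.

```shell
# Stand-in for the edited patchmetadata.xml (placeholder file, two entries)
snippet=$(mktemp)
cat > "$snippet" <<'EOF'
<component baseversion="19.0.0.0" name="DB" repotag="19.6.0.0.200114" targetVersion="19.6.0.0.200114"></component>
<component baseversion="18.0.0.0" name="DB" repotag="18.9.0.0.200114" targetVersion="18.9.0.0.200114"></component>
EOF
db_components=$(grep -c 'name="DB"' "$snippet")      # count DB component entries
with_target=$(grep -c 'targetVersion=' "$snippet")   # count entries with the new attribute
echo "$db_components DB components, $with_target with targetVersion"
```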
This issue is tracked with Oracle bug 33600951.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in updating dbhome
When you patch database homes to Oracle Database Appliance release 19.13, the odacli update-dbhome command may fail.
PRGH-1153 : RHPHelper call to get runing nodes failed for DB: "GIS_IN"
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Ensure that the database instances are running before you run the
odacli update-dbhome
command. Do not manually stop the database
before updating it.
This issue is tracked with Oracle bug 33114855.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error when patching DB systems
When patching DB systems on Oracle Database Appliance, an error may be encountered.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- On the first node, mount the pkgrepos directory on the VM:
cp /opt/oracle/oak/pkgrepos/System/VERSION /opt/oracle/oak/conf/VERSION
mount 192.168.17.2:/opt/oracle/oak/pkgrepos /opt/oracle/oak/pkgrepos
For InfiniBand environments:
mount 192.168.16.24:/opt/oracle/oak/pkgrepos /opt/oracle/oak/pkgrepos
- On the second node, mount the pkgrepos directory on the VM:
cp /opt/oracle/oak/pkgrepos/System/VERSION /opt/oracle/oak/conf/VERSION
mount 192.168.17.3:/opt/oracle/oak/pkgrepos /opt/oracle/oak/pkgrepos
For InfiniBand environments:
mount 192.168.16.25:/opt/oracle/oak/pkgrepos /opt/oracle/oak/pkgrepos
- Patch the DB system with the same steps as when patching the bare metal system:
odacli update-dcsadmin -v 19.13.0.0.0
odacli update-dcscomponents -v 19.13.0.0.0
odacli update-dcsagent -v 19.13.0.0.0
odacli create-prepatchreport -v 19.13.0.0.0 -s
odacli update-server -v 19.13.0.0.0
odacli create-prepatchreport -v 19.13.0.0.0 -d -i id
odacli update-dbhome -v 19.13.0.0.0 -i id -f -imp
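Before running the patching commands, it can help to confirm the pkgrepos mount is actually in place. The sketch below parses a simulated mount-table line (a stand-in file); on the DB system you would parse /proc/mounts instead, and the mount source matches the address used in the mount step above.

```shell
# Stand-in for the system mount table (placeholder file)
mounts=$(mktemp)
cat > "$mounts" <<'EOF'
192.168.17.2:/opt/oracle/oak/pkgrepos /opt/oracle/oak/pkgrepos nfs ro,relatime 0 0
EOF
# Print the source of the pkgrepos mount point, if mounted
src=$(awk '$2 == "/opt/oracle/oak/pkgrepos" {print $1}' "$mounts")
echo "$src"
```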
This issue is tracked with Oracle bug 33217680.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in server patching
When patching Oracle Database Appliance, errors may be encountered.
The odacli update-server command may fail with the following message:
Fail to patch GI with RHP : DCS-10001:Internal error encountered: PRGH-1057
: failure during move of an Oracle Grid Infrastructure home
…
…
PRCZ-4001 : failed to execute command
"/u01/app/19.13.0.0/grid/crs/install/rootcrs.sh" using the
privileged execution plugin "odaexec" on nodes "xxxxxxxx"
within 36,000 seconds
PRCZ-2103 : Failed to execute command
"/u01/app/19.13.0.0/grid/crs/install/rootcrs.sh" on node "xxxxxxxx" as user
"root". Detailed error: Using configuration parameter file:
/u01/app/19.13.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
/u01/app/grid/crsdata/<node_name>/crsconfig/crs_postpatch_apply_oop_node_name_timestamp.log
“Error unmounting '/opt/oracle/oak/pkgrepos/orapkgs/clones'. Possible busy file system. Verify
the logs.Retrying unmount
CRS-2675: Stop of 'ora.data.acfsclone.acfs' on
'node1' failed
CRS-2679: Attempting to clean 'ora.data.acfsclone.acfs' on 'node1'
Clean action is about to exhaust maximum waiting time
CRS-2678: 'ora.data.acfsclone.acfs' on 'node1' has experienced an unrecoverable failure
CRS-0267: Human intervention required to resume its availability.
CRS-2679: Attempting to clean 'ora.data.acfsclone.acfs' on 'node1'
Clean action is about to exhaust maximum waiting time
CRS-2680: Clean of 'ora.data.acfsclone.acfs' on 'node1' failed
…
…"
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Restart the Clusterware manually from the old grid home, that is, the 19.10 or 19.11 home.
- Locate all export points of /opt/oracle/oak/pkgrepos:
# cat /var/lib/nfs/etab
/opt/oracle/oak/pkgrepos 192.168.17.4(ro,sync,wdelay,hide,crossmnt,secure,root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,ro,secure,root_squash,no_all_squash)
- Clear references to the export of clones:
# exportfs -u host:/opt/oracle/oak/pkgrepos
# exportfs -u 192.168.17.4:/opt/oracle/oak/pkgrepos
- After running steps 1-3 on both nodes, run the
odacli update-server
command and patch your appliance.
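Step 2's etab output can be turned into the client address that exportfs -u needs. This is a sketch that parses a stand-in etab line (placeholder file and address); on the appliance, read /var/lib/nfs/etab as root.

```shell
# Stand-in for /var/lib/nfs/etab (placeholder file, abbreviated options)
etab=$(mktemp)
cat > "$etab" <<'EOF'
/opt/oracle/oak/pkgrepos 192.168.17.4(ro,sync,wdelay,hide,crossmnt,secure,root_squash)
EOF
# Extract the client address by stripping the "(options)" suffix
client=$(awk '$1 == "/opt/oracle/oak/pkgrepos" {sub(/\(.*/, "", $2); print $2}' "$etab")
echo "$client"   # would then be used as: exportfs -u $client:/opt/oracle/oak/pkgrepos
```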
This issue is tracked with Oracle bug 33284607.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in storage patching
When patching Oracle Database Appliance, errors are encountered.
The odacli update-storage command may fail with the following message:
DCS-10001:Internal error encountered: Failed to stop cluster
“Error unmounting '/opt/oracle/oak/pkgrepos/orapkgs/clones'. Possible busy file system. Verify
the logs.Retrying unmount
CRS-2675: Stop of 'ora.data.acfsclone.acfs' on
'node1' failed
CRS-2679: Attempting to clean 'ora.data.acfsclone.acfs' on 'node1'
Clean action is about to exhaust maximum waiting time
CRS-2678: 'ora.data.acfsclone.acfs' on 'node1' has experienced an unrecoverable failure
CRS-0267: Human intervention required to resume its availability.
CRS-2679: Attempting to clean 'ora.data.acfsclone.acfs' on 'node1'
Clean action is about to exhaust maximum waiting time
CRS-2680: Clean of 'ora.data.acfsclone.acfs' on 'node1' failed
…
…"
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Restart the Clusterware manually from the old grid home, that is, the 19.10 or 19.11 home.
- Locate all export points of /opt/oracle/oak/pkgrepos:
# cat /var/lib/nfs/etab
/opt/oracle/oak/pkgrepos 192.168.17.4(ro,sync,wdelay,hide,crossmnt,secure,root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,ro,secure,root_squash,no_all_squash)
- Clear references to the export of clones:
# exportfs -u host:/opt/oracle/oak/pkgrepos
# exportfs -u 192.168.17.4:/opt/oracle/oak/pkgrepos
- After running steps 1-3 on both nodes, run the
odacli update-storage
command and patch the storage.
This issue is tracked with Oracle bug 33284607.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in running the update-dbhome command
When you patch database homes to Oracle Database Appliance release 19.11, the
odacli update-dbhome
command fails.
When you run the odacli update-dbhome command, due to the inclusion of the non-rolling DST patch, the job waits for 12,000 seconds (around 3.5 hours). The following error message is displayed:
DCS-10001:Internal error encountered: PRCC-1021 :
One or more of the submitted commands did not execute successfully.
PRCC-1025 : Command submitted on node cdb1 timed out after 12,000 seconds..
"PRGO-1693 : The database patching cannot be completed in a rolling manner because the target patched home at "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_4" contains non-rolling bug fixes "32327201" compared to the source home at "/u01/app/oracle/product/19.0.0.0/dbhome_1"
Hardware Models
All Oracle Database Appliance hardware models with Oracle Database Appliance release 19.11
Workaround
Run the datapatch script manually:
/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_4/OPatch/datapatch
This issue is tracked with Oracle bug 32801095.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error message displayed even when patching Oracle Database Appliance is successful
Although patching of Oracle Database Appliance was successful, an error message may be displayed.
The following error message may be displayed when running the odacli update-dcscomponents command:
# time odacli update-dcscomponents -v 19.13.0.0.0
DCS-10008:Failed to update DCScomponents: 19.13.0.0.0
Internal error while patching the DCS components :
DCS-10231:Cannot proceed. Pre-checks for update-dcscomponents failed. Refer
to /opt/oracle/dcs/log/-dcscomponentsPreCheckReport.log on node 1 for
details.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
This is a timing issue with setting up the SSH equivalence.
Run the odacli update-dcscomponents command again and the operation completes successfully.
This issue is tracked with Oracle bug 32553519.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error when running ORAChk or updating the server or database home
When running Oracle ORAchk or the commands odacli create-prepatchreport, odacli update-server, or odacli update-dbhome, an error is encountered.
- Table AUD$[FGA_LOG$] should use Automatic Segment Space Management
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- To verify the segment space management policy currently in use by the AUD$ and FGA_LOG$ tables, use the following SQL*Plus command:
select t.table_name, ts.segment_space_management from dba_tables t, dba_tablespaces ts where ts.tablespace_name = t.tablespace_name and t.table_name in ('AUD$','FGA_LOG$');
- The output should be similar to the following:
TABLE_NAME                     SEGMEN
------------------------------ ------
FGA_LOG$                       AUTO
AUD$                           AUTO
- If one or both of the AUD$ or FGA_LOG$ tables return "MANUAL", use the DBMS_AUDIT_MGMT package to move them to the SYSAUX tablespace:
BEGIN
DBMS_AUDIT_MGMT.set_audit_trail_location(
audit_trail_type => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD, --this moves table AUD$
audit_trail_location_value => 'SYSAUX');
END;
BEGIN
DBMS_AUDIT_MGMT.set_audit_trail_location(
audit_trail_type => DBMS_AUDIT_MGMT.AUDIT_TRAIL_FGA_STD, --this moves table FGA_LOG$
audit_trail_location_value => 'SYSAUX');
END;
This issue is tracked with Oracle bug 27856448.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in patching database homes
An error is encountered when patching database homes on databases that have Standard Edition High Availability enabled.
When running odacli update-dbhome -v release_number on database homes that have Standard Edition High Availability enabled, an error is encountered:
WARNING::Failed to run the datapatch as db <db_name> is not in running state
Hardware Models
All Oracle Database Appliance hardware models with High-Availability deployments
Workaround
- Locate the running node of the target database instance:
srvctl status database -database dbUniqueName
Or, relocate the single-instance database instance to the required node:
odacli modify-database -g node_number (-th node_name)
- On the running node, manually run the datapatch for non-CDB databases:
dbhomeLocation/OPatch/datapatch
- For CDB databases, locate the PDB list using SQL*Plus:
select name from v$containers where open_mode='READ WRITE';
dbhomeLocation/OPatch/datapatch -pdbs pdb_names_found_in_previous_step_divided_by_comma
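The PDB names returned by the query must be passed to datapatch as one comma-separated string. A minimal sketch of that join, with illustrative PDB names:

```shell
# Join one-per-line PDB names into the comma-separated form -pdbs expects
pdb_list=$(printf '%s\n' PDB1 PDB2 PDB3 | paste -sd, -)
echo "$pdb_list"   # -> PDB1,PDB2,PDB3
```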
This issue is tracked with Oracle bug 31654816.
Parent topic: Known Issues When Patching Oracle Database Appliance
Server status not set to Normal when patching
When patching Oracle Database Appliance, an error is encountered.
When patching the appliance, the odacli
update-server
command fails with the
following error:
DCS-10001:Internal error encountered: Server upgrade state is not NORMAL node_name
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Run the command:
Grid_home/bin/cluvfy stage -post crsinst -collect cluster -gi_upgrade -n all
- Ignore the following two warnings:
Verifying OCR Integrity ...WARNING
PRVG-6017 : OCR backup is located in the same disk group "+DATA" as OCR.
Verifying Single Client Access Name (SCAN) ...WARNING
PRVG-11368 : A SCAN is recommended to resolve to "3" or more IP
- Run the command again until the output displays only the two warnings above. The status of Oracle Clusterware should then be Normal again.
- You can verify the status with the command:
Grid_home/bin/crsctl query crs activeversion -f
This issue is tracked with Oracle bug 30099090.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error when patching to 12.1.0.2.190716 Bundle Patch
When patching Oracle Database release 12.1.0.2 to Oracle Database 12.1.0.2.190716 Bundle Patch, an error is encountered.
The ODACLI job displays the following error:
DCS-10001:Internal error encountered: Failed to run SQL script: datapatch script.
The datapatch log contains the following entry:
Prereq check failed, exiting without installing any patches.
Hardware Models
All Oracle Database Appliance hardware models (bare metal deployments)
Workaround
Install the same patch again.
This issue is tracked with Oracle bugs 30026438 and 30155710.
Parent topic: Known Issues When Patching Oracle Database Appliance
Patching of M.2 drives not supported
Patching of M.2 drives (local disks SSDSCKJB48 and SSDSCKJB480G7) is not supported.
These drives are displayed when you run the odacli describe-component command. Patching is not supported for either of the two known versions, 0112 and 0121, of the M.2 disk. Patching the LSI controller from version 13.00.00.00 to version 16.00.01.00 is also not supported. However, on some Oracle Database Appliance X8-2 models, the installed LSI controller version may be 16.00.01.00.
Hardware Models
Oracle Database Appliance bare metal deployments
Workaround
None
This issue is tracked with Oracle bug 30249232.
Parent topic: Known Issues When Patching Oracle Database Appliance
Known Issues When Deploying Oracle Database Appliance
Understand the known issues when provisioning or deploying Oracle Database Appliance.
- Error in creating two DB systems using disk groups
  When creating two or more DB systems in parallel using different disk groups on Oracle Database Appliance X5-2 high-availability, high-capacity systems, an error is encountered.
- Error in creating two DB systems
  When creating two DB systems concurrently in two different Oracle ASM disk groups, an error is encountered.
- Error in creating db system
  The odacli create-dbsystem operation fails due to errors.
- Error in registering a TDE-enabled database
  When registering a TDE-enabled database that was created using RMAN, an error is encountered.
- Error when upgrading database from 12.1 to 19c
  When upgrading databases from 12.1 to 19c, an error is encountered.
- Error in recovering a database
  When recovering a database on Oracle Database Appliance, an error is encountered.
- Error in adding JBOD
  When you add a second JBOD to your Oracle Database Appliance deployment on which a DB system is running, an error is encountered.
- Error in provisioning appliance after running cleanup.pl
  Errors encountered in provisioning the appliance after running cleanup.pl.
- Error in updating a database
  When updating a database on Oracle Database Appliance, an error is encountered.
- Error in running tfactl diagcollect command on remote node
  When running the tfactl diagcollect command on Oracle Database Appliance, an error is encountered.
- Error when upgrading database from 11.2.0.4 to 12.1 or 12.2
  When upgrading databases from 11.2.0.4 to 12.1 or 12.2, an error is encountered.
- Error when upgrading 12.1 single-instance database
  When upgrading a 12.1 single-instance database, a job failure error is encountered.
- Failure in creating RECO disk group during provisioning
  When provisioning Oracle Database Appliance X8-2-HA with High Performance configuration containing default storage and expansion shelf, creation of the RECO disk group fails.
- Simultaneous creation of two Oracle ACFS Databases fails
  If you try to create two Oracle ACFS databases on a system where there is no database or database storage already created, then database creation fails for one of the databases with an error.
- Error encountered after running cleanup.pl
  Errors encountered in running odacli commands after running cleanup.pl.
- Errors in clone database operation
  Clone database operation fails due to errors.
- Clone database operation fails
  For Oracle Database release 12.1 databases, the database clone creation may fail because the default compatible version from Oracle binaries was set to 12.0.0.0.0.
Error in creating two DB systems using disk groups
When creating two or more DB systems in parallel using different disk groups on Oracle Database Appliance X5-2 high-availability, high-capacity systems, an error is encountered.
MySQL timeouts may occur during DDL queries on the first DCS agent bootstrap within the DB system.
Hardware Models
Oracle Database Appliance X5-2 high-availability, high-capacity systems
Workaround
Do not create two DB systems concurrently. Instead, complete the creation of one DB system and then create the other.
This issue is tracked with Oracle bug 33546843.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in creating two DB systems
When creating two DB systems concurrently in two different Oracle ASM disk groups, an error is encountered.
CRS-2672: Attempting to start 'vm_name.kvm' on 'oda_server'
CRS-5017: The resource action "vm_name.kvm start" encountered the following
error:
CRS-29200: The libvirt virtualization library encountered the following
error:
Timed out during operation: cannot acquire state change lock (held by
monitor=remoteDispatchDomainCreate)
. For details refer to "(:CLSN00107:)" in
"/u01/app/grid/diag/crs/<oda_server>/crs/trace/crsd_orarootagent_root.trc".
CRS-2674: Start of 'vm_name.kvm' on 'oda_server' failed
CRS-2679: Attempting to clean 'vm_name.kvm' on 'oda_server'
CRS-2681: Clean of 'vm_name.kvm' on 'oda_server' succeeded
CRS-4000: Command Start failed, or completed with errors.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Do not create two DB systems concurrently. Instead, complete the creation of one DB system and then create the other.
This issue is tracked with Oracle bug 33275630.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in creating db system
The odacli create-dbsystem operation fails due to errors.
DCS-10032:Resource of type 'Virtual Network' with name 'pubnet' is not found.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Restart the DCS agent. For high-availability systems, restart the DCS agent on both nodes:
systemctl restart initdcsagent
This issue is tracked with Oracle bug 32740754.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in registering a TDE-enabled database
When registering a TDE-enabled database that was created using RMAN, an error is encountered.
The following error message is displayed:
DCS-10107:Tde wallet does not exist at location :
/opt/oracle/dcs/commonstore/wallets/tde/<DB_UNIQUE_NAME>. Please copy the wallet to proceed.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Log in as the oracle user:
su - oracle
- Navigate to the /opt/oracle/dcs/commonstore/wallets/tde/ directory:
cd /opt/oracle/dcs/commonstore/wallets/tde/
- Create the DB_UNIQUE_NAME directory using uppercase letters:
mkdir DB_UNIQUE_NAME
- Copy the TDE wallets to the DB_UNIQUE_NAME directory.
- Retry database registration.
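The directory-and-copy steps above can be sketched as follows. The base directory stands in for /opt/oracle/dcs/commonstore/wallets/tde, and the database name and wallet file name are placeholders; as noted above, the directory name must be the uppercase DB_UNIQUE_NAME.

```shell
# Stand-in paths (placeholders for the real wallet store and names)
base=$(mktemp -d)            # stands in for /opt/oracle/dcs/commonstore/wallets/tde
db_unique_name=MYDB          # must be uppercase
mkdir -p "$base/$db_unique_name"
touch "$base/ewallet.p12"    # stand-in wallet file
cp "$base/ewallet.p12" "$base/$db_unique_name/"
copied=$(ls "$base/$db_unique_name")
echo "$copied"
```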
This issue is tracked with Oracle bug 28080413.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error when upgrading database from 12.1 to 19c
When upgrading databases from 12.1 to 19c, an error is encountered.
ERROR : Ran '/bin/su oracle -c
"/u01/app/oracle/product/19.0.0.0/dbhome_1/bin/dbua -silent -performFixUp true -dbName db121"'
The log file /u01/app/oracle/cfgtoollogs/dbua/upgradetimestamp shows the following entry:
SEVERE: Oct 25, 2021 7:20:36 AM
oracle.assistants.dbua.prereq.PreUpgradeDriverJob runFixUps
SEVERE: ORA-29284: file read error
ORA-06512: at "SYS.UTL_FILE", line 106
ORA-06512: at "SYS.UTL_FILE", line 746
ORA-06512: at "SYS.DBMS_PREUP", line 3437
ORA-06512: at "SYS.DBMS_PREUP", line 11227
ORA-06512: at line 4
SEVERE: Oct 25, 2021 7:20:37 AM oracle.assistants.dbua.prereq.PrereqChecker
logPrereqResults
SEVERE: An error occurred while executing the preupgrade auto fixups.
FIXABLE: MANUAL
Hardware Models
All Oracle Database Appliance virtualized platform deployments
Workaround
Run the fixup scripts manually. Check the file preupgrade_fixups.sql in the directory generated under $ORACLE_BASE/cfgtoollogs/dbua/upgradeDATA_TIME_STAMP. After running the fixup script, run the odacli upgrade-database command.
This issue is tracked with Oracle bug 33473842.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in recovering a database
When recovering a database on Oracle Database Appliance, an error is encountered.
When running odacli recover-database on a Standard Edition High Availability database, the following error message is displayed:
DCS-10001:Internal error encountered: Unable to get valid database node number to post recovery.
Hardware Models
All Oracle Database Appliance high-availability hardware models
Workaround
- Get the nodes configured for the database:
  srvctl config database -db db_name | grep "Configured nodes" | awk '{print $3}'
  The output is nodeX,nodeY.
- Modify the database to run on a single node:
  srvctl modify database -db db_name -node nodeX
- Recover the database:
  odacli recover-database
- After the recovery completes, restore the original node list:
  srvctl stop database -db db_name
  srvctl modify database -db db_name -node nodeX,nodeY
  srvctl start database -db db_name
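The node-list extraction in the first step can be checked against canned output; this is a sketch only, assuming srvctl prints a "Configured nodes: nodeX,nodeY" line (the sample text and node names below are illustrative, not captured from a real system).

```shell
# Demonstrates the grep/awk extraction used in the workaround above
# against canned srvctl-style output.
sample="Database unique name: mydb
Configured nodes: node1,node2"
nodes=$(printf '%s\n' "$sample" | grep "Configured nodes" | awk '{print $3}')
first=${nodes%%,*}     # the node passed to: srvctl modify database -node
echo "$nodes"
echo "$first"
```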
This issue is tracked with Oracle bug 32928688.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in adding JBOD
When you add a second JBOD to your Oracle Database Appliance deployment on which a DB system is running, an error is encountered.
ORA-15333: disk is not visible on client instance
Hardware Models
All Oracle Database Appliance hardware models bare metal and dbsystem
Workaround
Shut down the DB system before adding the second JBOD.
systemctl restart initdcsagent
This issue is tracked with Oracle bug 32586762.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in provisioning appliance after running cleanup.pl
Errors encountered in provisioning the appliance after running
cleanup.pl
.
After running cleanup.pl
, provisioning the appliance fails because
of missing Oracle Grid Infrastructure image (IMGGI191100). The following error
message is displayed:
DCS-10042:User oda-cliadmin cannot be authorized.
Hardware Models
All Oracle Database Appliance hardware models for bare metal deployments
Workaround
After running cleanup.pl, and before provisioning the appliance, update the repository as follows:
# odacli update-repository -f gi_clone_file_path
This issue is tracked with Oracle bug 32707387.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in updating a database
When updating a database on Oracle Database Appliance, an error is encountered.
When running the command odacli update-dbhome, the following error message is displayed:
PRGO-1069 :Internal error [# rhpmovedb.pl-isPatchUpg-1 #].
To confirm that the MMON process occupies the lock, connect to the target database which failed to patch, and run the command:
SELECT s.sid, p.spid, s.machine, s.program FROM v$session s, v$process p
WHERE s.paddr = p.addr and s.sid = (
SELECT sid from v$lock WHERE id1= (
SELECT lockid FROM dbms_lock_allocated WHERE name = 'ORA$QP_CONTROL_LOCK'
));
If, in the displayed result, s.program is similar to the format oracle_user@host_box_name (MMON), then the error is caused by the MMON process. Apply the workaround to address this issue.
Hardware Models
All Oracle Database Appliance high-availability hardware models
Workaround
- Stop the MMON process:
  # ps -ef | grep MMON
  root 71220 70691 0 21:25 pts/0 00:00:00 grep --color=auto MMON
  Locate the process ID from the output and stop it:
  # kill -9 71220
- Manually run datapatch on target database:
- Locate the database home where the target database is
running:
odacli describe-database -in db_name
- Locate the database home
location:
odacli describe-dbhome -i DbHomeID_found_in_step_a
- On the running node of the target
database:
[root@node1 ~]# sudo su - oracle
Last login: Thu Jun 3 21:24:45 UTC 2021
[oracle@node1 ~]$ . oraenv
ORACLE_SID = [oracle] ? db_instance_name
ORACLE_HOME = [/home/oracle] ? dbHome_location
- If the target database is a non-CDB database, then run
the
following:
$ORACLE_HOME/OPatch/datapatch
- If the target database is a CDB database, then run the
following to find the PDB
list:
select name from v$containers where open_mode='READ WRITE';
- Exit SQL*Plus and run the
following:
$ORACLE_HOME/OPatch/datapatch -pdbs pdb_names_gathered_by_the_SQL_statement_in_step_e_separated_by_comma
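The comma-separated list that datapatch -pdbs expects can be built from the query output; a sketch, assuming the PDB names come out one per line (the names below are illustrative):

```shell
# Joins PDB names (one per line, as returned by the v$containers query
# above) into the comma-separated form expected by datapatch -pdbs.
pdbs="PDB1
PDB2
PDB3"
pdb_list=$(printf '%s\n' "$pdbs" | paste -sd, -)
echo "$pdb_list"
# e.g. $ORACLE_HOME/OPatch/datapatch -pdbs "$pdb_list"
```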
This issue is tracked with Oracle bug 32827353.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in running tfactl diagcollect command on remote node
When running the tfactl diagcollect
command on Oracle
Database Appliance, an error is encountered.
Hardware Models
All Oracle Database Appliance hardware models KVM and bare metal systems
Workaround
- Run the following command on each node so that Oracle Trace File
Analyzer generates new certificates and distributes to the other
node:
tfactl syncnodes -remove -local
- Connect using SSH with root credentials on one node and run the following:
  tfactl syncnodes
This issue is tracked with Oracle bug 32921859.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error when upgrading database from 11.2.0.4 to 12.1 or 12.2
When upgrading databases from 11.2.0.4 to 12.1 or 12.2, an error is encountered.
The following error is observed in the UpgradeResults.html file when upgrading a database from 11.2.0.4 to 12.1 or 12.2:
Database is using a newer time zone file version than the Oracle home
Hardware Models
All Oracle Database Appliance bare metal deployments
Workaround
- Refer to the Database Upgrade Guide for manual steps for fixing the time zone.
- After manually completing the database upgrade, run the following command to update
DCS
metadata:
/opt/oracle/dcs/bin/odacli update-registry -n db -f
This issue is tracked with Oracle bug 31125985.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error when upgrading 12.1 single-instance database
When upgrading 12.1 single-instance database, a job failure error is encountered.
Hardware Models
All Oracle Database Appliance hardware models bare metal deployments
Workaround
- Before upgrading the 12.1 single-instance database, run the following PL/SQL command to change the local_listener to an empty string:
  ALTER SYSTEM SET LOCAL_LISTENER='';
- After upgrading the 12.1 single-instance database successfully, run the following PL/SQL command to change the local_listener to the desired value:
  ALTER SYSTEM SET LOCAL_LISTENER='-oracle-none-';
This issue is tracked with Oracle bugs 31202775 and 31214657.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Failure in creating RECO disk group during provisioning
When provisioning Oracle Database Appliance X8-2-HA with High Performance configuration containing default storage and expansion shelf, creation of RECO disk group fails.
Hardware Models
All Oracle Database Appliance X8-2-HA with High Performance configuration
Workaround
- Power off storage expansion shelf.
- Reboot both nodes.
- Proceed with provisioning the default storage shelf (first JBOD).
- After the system is successfully provisioned with the default storage shelf (first JBOD), check that oakd is running on both nodes in foreground mode.
  # ps -aef | grep oakd
- Check that all first JBOD disks have the status online, good in oakd, and CACHED in Oracle ASM.
- Power on the storage expansion shelf (second JBOD) and wait a few minutes for the operating system and other subsystems to recognize it.
- Run the following command from the master node to add the storage expansion shelf disks (two JBOD setup) to oakd and Oracle ASM.
  # odaadmcli show ismaster
  OAKD is in Master Mode
  # odaadmcli expand storage -ndisk 24 -enclosure 1
  Skipping precheck for enclosure '1'...
  Check the progress of expansion of storage by executing 'odaadmcli show disk'
  Waiting for expansion to finish ...
  #
- Check that the storage expansion shelf disks (two JBOD setup) are added to oakd and Oracle ASM.
Replace odaadmcli with oakcli commands on Oracle Database Appliance Virtualized Platform in the procedure.
For more information, see the chapter Managing Storage in the Oracle Database Appliance X8-2 Deployment Guide.
This issue is tracked with Oracle bug 30839054.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Simultaneous creation of two Oracle ACFS Databases fails
If you try to create two Oracle ACFS databases on a system where there is no database or database storage already created, then database creation fails for one of the databases with an error.
DCS-10001:Internal error encountered: Fail to run command Failed to create
volume.
Hardware Models
All Oracle Database Appliance bare metal deployments
Workaround
Manually delete the DATA volume (and REDO volume, in case of Oracle Database Appliance X8-2) from the system.
su - GRID_USER
export ORACLE_SID=+ASM1 (on the first node) or +ASM2 (on the second node)
export ORACLE_HOME=GRID_HOME
GRID_HOME/bin/asmcmd --nocp voldelete -G Data datdbname
GRID_HOME/bin/asmcmd --nocp voldelete -G Reco rdodbname
GRID_HOME/bin/asmcmd --nocp voldelete -G Flash datdbname (if the volume exists in the FLASH disk group)
GRID_HOME/bin/asmcmd --nocp voldelete -G data datdbname (if the volume exists in the DATA disk group)
GRID_HOME/bin/asmcmd --nocp voldelete -G Flash rdodbname
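The volume names in these commands follow the prefix convention shown by the datdbname and rdodbname placeholders (the database name prefixed with dat for DATA or rdo for REDO); a sketch of deriving them, with an illustrative database name:

```shell
# Derives the ACFS volume names used by the voldelete commands above.
# "sales" is an illustrative database name, not taken from the issue text.
dbname=sales
data_vol="dat${dbname}"
redo_vol="rdo${dbname}"
echo "$data_vol $redo_vol"
# e.g. GRID_HOME/bin/asmcmd --nocp voldelete -G Data "$data_vol"
```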
This issue is tracked with Oracle bug 30750497.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error encountered after running cleanup.pl
Errors encountered in running odacli
commands after running cleanup.pl
.
After running cleanup.pl
, when you try to use odacli
commands, the following error is encountered:
DCS-10042:User oda-cliadmin cannot be authorized.
Hardware Models
All Oracle Database Appliance hardware models for bare metal deployments
Workaround
Run the following commands to set up the credentials for the user oda-cliadmin
on the agent wallet:
# rm -rf /opt/oracle/dcs/conf/.authconfig
# /opt/oracle/dcs/bin/setupAgentAuth.sh
This issue is tracked with Oracle bug 29038717.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Errors in clone database operation
Clone database operation fails due to errors.
If the source database is single-instance or Oracle RAC One Node, or running on the remote node, the clone database operation fails, because the paths are not created correctly in the control file.
Clone database operation may also fail with errors if the source database creation time stamp is too close to the clone operation (at least within 60 minutes).
Hardware Models
All Oracle Database Appliance high-availability hardware models for bare metal deployments
Workaround
Create the clone database from the source database instance that is running on the same node from which the clone database creation is triggered. On the source database, run the following command:
SQL> alter system checkpoint;
This issue is tracked with Oracle bugs 29002563, 29002004, 29001906, 29001855, 29001631, 28995153, 28986643, 30309971, and 30228362.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Clone database operation fails
For Oracle Database release 12.1 databases, the database clone creation may fail because the default compatible version from Oracle binaries was set to 12.0.0.0.0.
Hardware Models
All Oracle Database Appliance high-availability hardware models for bare metal deployments
Workaround
- Change the parameter
value.
SQL> ALTER SYSTEM SET COMPATIBLE = '12.1.0.2.0' SCOPE=SPFILE;
- Shut down the database.
SQL> SHUTDOWN IMMEDIATE
- Start the database.
SQL> STARTUP
- Verify the parameter for the new
value.
SQL> SELECT name, value, description FROM v$parameter WHERE name ='compatible';
This issue is tracked with Oracle bug 30309914.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Known Issues When Managing Oracle Database Appliance
Understand the known issues when managing or administering Oracle Database Appliance.
- Error in back up of database
When backing up a database on Oracle Database Appliance, an error is encountered. - OpenSSH command vulnerability
OpenSSH command vulnerability issue detected in Qualys and Nessus scans. - Error in bare metal CPU pool association
After patching to Oracle Database Appliance release 19.13, a bare metal CPU pool that is not NUMA allocated can be associated with a database. - Cannot configure system settings using BUI for multi-user access enabled systems
On an Oracle Database Appliance with multi-user access enabled, some system settings cannot be configured using the Browser User Interface (BUI). - Error in display of component status
The odacli describe-component command may display Not Available even when the local controller is available. - AHF permissions error
When running the OERR tool in the AHF_HOME on Oracle Database Appliance, an error is encountered. - Error in cleaning up a deployment
When cleaning up an Oracle Database Appliance, an error is encountered. - Error in odacli modify-dbfileattributes job status
When rerunning the odacli modify-dbfileattributes command, the job status is set to Created. - Error in TDE wallet management
When upgrading a database to Oracle Database 18c or later, and the database has TDE Wallet Management set to the value EXTERNAL, the TDE Wallet Management is not set to the value ODA. - Error in TDE wallet management
When changing the TDE wallet password or rekeying the TDE wallet of a database which has TDE Wallet Management set to the value EXTERNAL, an error is encountered. - Error in display of file log path
File log paths are not displayed correctly on the console but all the logs that were generated for a job have actually logged the correct paths. - Error in configuring Oracle Data Guard
When running the command odacli configure-dataguard on Oracle Database Appliance, an error is encountered. - Error in configuring Oracle Data Guard on DB System
When running the command odacli configure-dataguard on Oracle Database Appliance, an error is encountered. - Error in reinstating on Oracle Data Guard
When running the command odacli reinstate-dataguard on Oracle Database Appliance, an error is encountered. - Error in attaching when running reinstate operation on Oracle Data Guard
When running the command odacli reinstate-dataguard on Oracle Data Guard, an error is encountered. - Error in switchover of Oracle Data Guard
When running the command odacli switchover-dataguard on Oracle Database Appliance, an error is encountered. - Error in viewing Oracle Data Guard status
When viewing Oracle Data Guard status on Oracle Database Appliance, an error is encountered. - Error in reinstate operation on Oracle Data Guard
When running the command odacli reinstate-dataguard on Oracle Data Guard, an error is encountered. - Error in running concurrent database or database home creation jobs
When running concurrent database or database home creation jobs, an error is encountered. - Error in deleting a standby database
When deleting a standby database, an error is encountered. - Error in Oracle Data Guard failover operation for 18.14 database
When running the odacli failover-dataguard command on a database of version 18.14, an error is encountered. - Error in Oracle Active Data Guard operations
When performing switchover, failover, and reinstate operations on Oracle Active Data Guard on Oracle Database Appliance, an error is encountered. - Error in the enable apply process after upgrading databases
When running the enable apply process after upgrading databases in an Oracle Data Guard deployment, an error is encountered. - Error in creating Oracle Data Guard status
When configuring Oracle Active Data Guard on Oracle Database Appliance, an error is encountered. - Error in registering a database
When registering a single-instance database on Oracle Database Appliance, if the RAC option is specified in the odacli register-database command, an error is encountered. - Error in registering a database
When restoring a database on Oracle Database Appliance, if the NLS setting on the standby database is not America/American, then an error may be encountered. - Error in recovering a TDE-enabled Oracle RAC or Oracle RAC One Node database
When recovering a TDE-enabled Oracle RAC or Oracle RAC One Node database, an error may be encountered. - Error in Reinstating Oracle Data Guard
When reinstating Oracle Data Guard on Oracle Database Appliance, an error is encountered. - Failure in Reinstating Oracle Data Guard
When reinstating Oracle Data Guard on Oracle Database Appliance, an error is encountered. - Error in updating Role after Oracle Data Guard operations
When performing operations with Oracle Data Guard on Oracle Database Appliance, an error is encountered in updating the Role. - Error when recovering a single-instance database
When recovering a single-instance database, an error is encountered. - Job history not erased after running cleanup.pl
After running cleanup.pl, job history is not erased. - Inconsistency in ORAchk summary and details report page
ORAChk report summary on the Browser User Interface may show different counts of Critical, Failed, and Warning issues than the report detail page. - Missing DATA, RECO, and REDO entries when dbstorage is rediscovered
Running the odacli update-registry command with the -n all --force or -n dbstorage --force option can result in metadata corruption. - The odaeraser tool does not work if oakd is running in non-cluster mode
After cleaning up the deployment, the Secure Eraser tool does not work if oakd is running in non-cluster mode. - Issues with the Web Console on Microsoft web browsers
Oracle Database Appliance Web Console has issues on Microsoft Edge and Microsoft Internet Explorer web browsers. - Unrecognized Token Messages Appear in /var/log/messages
After updating Oracle Database Appliance, unrecognized token messages appear in /var/log/messages.
Error in back up of database
When backing up a database on Oracle Database Appliance, an error is encountered.
Running odacli create-backup on the new primary database fails with the following message:
DCS-10001:Internal error encountered: Unable to get the rman command status commandid:xxx
output:STATUS
-------------------------
[COMPLETED WITH WARNINGS] error:.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- On the new primary database, connect to RMAN as oracle and edit the archivelog deletion policy:
  rman target /
  RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO 'SBT_TAPE';
- On the new primary database, as the root user, take a backup:
  odacli create-backup -in db_name -bt backup_type
This issue is tracked with Oracle bug 33181168.
Parent topic: Known Issues When Managing Oracle Database Appliance
OpenSSH command vulnerability
OpenSSH command vulnerability issue detected in Qualys and Nessus scans.
Qualys and Nessus scans report OPENSSH COMMAND INJECTION VULNERABILITY. Refer to CVE-2020-15778 for details.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
None.
This issue is tracked with Oracle bug 33217970.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in bare metal CPU pool association
After patching to Oracle Database Appliance release 19.13, a bare metal CPU pool that is not NUMA allocated can be associated with a database.
Hardware Models
All Oracle Database Appliance hardware models that support release 19.13 and were patched from an earlier 19.x release. Systems newly provisioned with 19.13 are not affected, since new bare metal CPU pools are NUMA allocated.
Workaround
Run the odacli remap-cpupools command and restart the bare metal database instances.
This issue is tracked with Oracle bug 31907677.
Parent topic: Known Issues When Managing Oracle Database Appliance
Cannot configure system settings using BUI for multi-user access enabled systems
On an Oracle Database Appliance with multi-user access enabled, some system settings cannot be configured using the Browser User Interface (BUI).
You cannot specify the token expiration duration, password expiration duration, maximum failed login attempts, and other details when you provision multi-user access enabled Oracle Database Appliance with BUI. You can specify these values when you use JSON file to provision your multi-user access enabled Oracle Database Appliance.
Hardware Models
Oracle Database Appliance hardware models with multi-user access enabled
Workaround
None.
This issue is tracked with Oracle bug 33571755.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in display of component status
The odacli describe-component command may display Not Available even when the local controller is available.
Hardware Models
Oracle Database Appliance hardware models with Mellanox InfiniBand network card ConnectX-3 or ConnectX-5
Workaround
None
This issue is tracked with Oracle bug 33601175.
Parent topic: Known Issues When Managing Oracle Database Appliance
AHF permissions error
When running the OERR tool in the AHF_HOME on Oracle Database Appliance, an error is encountered.
Running the OERR tool fails with the following error:
cd /opt/oracle/dcs/oracle.ahf/bin
./oerr
-bash: ./oerr: Permission denied
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Run the tool with sh, as follows:
cd /opt/oracle/dcs/oracle.ahf/bin
sh oerr
Use AHF XXXX format... Exiting
This issue is tracked with Oracle bug 33293560.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in cleaning up a deployment
When cleaning up an Oracle Database Appliance, an error is encountered.
Hardware Models
All Oracle Database Appliance hardware models with DB systems
Workaround
- Stop the NFS service on both
nodes:
service nfs stop
- Clean up the bare metal system. See the Oracle Database Appliance Deployment and User's Guide for your hardware model for the steps.
This issue is tracked with Oracle bug 33289742.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in odacli modify-dbfileattributes job status
When rerunning the odacli modify-dbfileattributes command, the job status is set to Created.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Restart the DCS agent:
systemctl stop initdcsagent
systemctl start initdcsagent
All jobs with state Created are set to the state Failure. You can now submit other jobs.
This issue is tracked with Oracle bug 32945075.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in TDE wallet management
When upgrading a database to Oracle Database 18c or later, and the database has TDE Wallet Management set to the value EXTERNAL, the TDE Wallet Management is not set to the value ODA.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Change the TDE Wallet Management from the value EXTERNAL to the value ODA:
odacli modify-database -in DB_NAME -ctm
Provide the TDE wallet password when prompted.
This issue is tracked with Oracle bug 33593582.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in TDE wallet management
When changing the TDE wallet password or rekeying the TDE wallet of a database which has TDE Wallet Management set to the value EXTERNAL, an error is encountered.
DCS-10089:Database DB_NAME is in an invalid state 'NOT_RUNNING'.Database DB_NAME must be running
Hardware Models
All Oracle Database Appliance hardware models
Workaround
None. Operations such as changing the TDE wallet password or rekeying the TDE wallet are not supported on a database which has TDE Wallet Management set to the value EXTERNAL.
This issue is tracked with Oracle bug 33278653.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in display of file log path
File log paths are not displayed correctly on the console but all the logs that were generated for a job have actually logged the correct paths.
Hardware Models
All Oracle Database Appliance hardware models with virtualized platform
Workaround
None.
This issue is tracked with Oracle bug 33580574.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in configuring Oracle Data Guard
When running the command odacli configure-dataguard
on
Oracle Database Appliance, an error is encountered.
When running the command odacli configure-dataguard, an error occurs at Step 9: Re-enable Data Guard (Primary site).
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Run the command odacli restore-archivelog -in db_name on the primary site and retry the command odacli configure-dataguard.
This issue is tracked with Oracle bug 33387213.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in configuring Oracle Data Guard on DB System
When running the command odacli configure-dataguard
on
Oracle Database Appliance, an error is encountered.
When running the command odacli configure-dataguard on an Oracle Database 21c DB system, the following error message is displayed at the step Configure and enable Data Guard (Primary site):
ORA-12154: TNS:could not resolve the connect identifier specified
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Rename the tnsnames.ora file in the directory /u01/app/<dbUser>/homes/OraDB21000_home#/network/admin:
mv /u01/app/<dbUser>/homes/OraDB21000_home#/network/admin/tnsnames.ora /u01/app/<dbUser>/homes/OraDB21000_home#/network/admin/tnsnames.ora.backup
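The rename can be sketched in a scratch directory; a minimal illustration, using a temporary path in place of the real database home's network/admin directory:

```shell
# Sketch of the tnsnames.ora rename above, in a temporary directory
# standing in for the real .../network/admin path.
admin=$(mktemp -d)
touch "$admin/tnsnames.ora"
mv "$admin/tnsnames.ora" "$admin/tnsnames.ora.backup"
ls "$admin"
```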
This issue is tracked with Oracle bug 33579891.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in reinstating on Oracle Data Guard
When running the command odacli reinstate-dataguard
on
Oracle Database Appliance, an error is encountered.
DCS-10001:Internal error encountered: Unable to reinstate Dg.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Manually flash back the old primary database.
- On the new primary machine, get the standby_became_primary_scn:
  SQL> select standby_became_primary_scn from v$database;
  STANDBY_BECAME_PRIMARY_SCN
  --------------------------
  4370820
- On the old primary database, as the oracle user, run the following:
  rman target /
  RMAN> set decryption identified by 'password'
  RMAN> FLASHBACK DATABASE TO SCN STANDBY_BECAME_PRIMARY_SCN;
- On the new primary database, run the odacli reinstate-dataguard command.
This issue is tracked with Oracle bug 33190261.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in attaching when running reinstate operation on Oracle Data Guard
When running the command odacli reinstate-dataguard
on
Oracle Data Guard, an error is encountered.
DCS-10001:Internal error encountered: Unable to reinstate Dg.
The following error is displayed in dcs-agent.log:
Failed to attach to dbUniqueName
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Check the Oracle Data Guard configuration:
DGMGRL> SHOW CONFIGURATION;
If successful, run odacli list-dataguardstatus on both sites to update the role. Otherwise, manually flash back the old primary database.
- On the new primary machine, get the standby_became_primary_scn:
  SQL> select standby_became_primary_scn from v$database;
  STANDBY_BECAME_PRIMARY_SCN
  --------------------------
  4370820
- On the old primary database, as the oracle user, run the following:
  rman target /
  RMAN> set decryption identified by 'password'
  RMAN> FLASHBACK DATABASE TO SCN STANDBY_BECAME_PRIMARY_SCN;
- On the new primary database, run the odacli reinstate-dataguard command.
- Run the odacli list-dataguardstatus command on both sites to check the result.
This issue is tracked with Oracle bug 33559917.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in switchover of Oracle Data Guard
When running the command odacli switchover-dataguard
on
Oracle Database Appliance, an error is encountered.
When running the command odacli switchover-dataguard, an error occurs at Step 9: Re-enable Data Guard (Primary site), and the following message is displayed:
DCS-10001:Internal error encountered: Unable to switchover Dg.
The following error is displayed in dcs-agent.log:
Failed to attach to dbUniqueName
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Ignore the error. Verify that the switchover is successful as follows:
- Verify that the roles of the databases are correctly switched over and that the Configuration Status is SUCCESS.
  DGMGRL> SHOW CONFIGURATION;
- Run the command odacli list-dataguardstatus on both primary and standby sites until the role is updated correctly.
This issue is tracked with Oracle bug 33465826.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in viewing Oracle Data Guard status
When viewing Oracle Data Guard status on Oracle Database Appliance, an error is encountered.
The following error is displayed:
Check if DataGuard config is updated
Oracle Data Guard operations, though, are successful.
Hardware Models
All Oracle Database Appliance high-availability systems
Workaround
Use DGMGRL
to verify Oracle Data Guard status.
This issue is tracked with Oracle bug 33411769.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in reinstate operation on Oracle Data Guard
When running the command odacli reinstate-dataguard
on
Oracle Data Guard an error is encountered.
The following error is displayed in dcs-agent.log:
DCS-10001:Internal error encountered: Unable to reinstate Dg.
Further in the log, the following error is found:
ORA-12514: TNS:listener does not currently know of service requested
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Ensure that the database you are reinstating is started in MOUNT mode.
srvctl start database -d db-unique-name -o mount
After the command completes successfully, run the odacli reinstate-dataguard command again. If the database is already in MOUNT mode, this can be a temporary error. Check the Data Guard status again a few minutes later with odacli describe-dataguardstatus or odacli list-dataguardstatus, or check with DGMGRL> SHOW CONFIGURATION; to see if the reinstatement is successful.
to see if the reinstatement is successful.
This issue is tracked with Oracle bug 32367676.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in running concurrent database or database home creation jobs
When running concurrent database or database home creation jobs, an error is encountered.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Do not run concurrent database or database home creation jobs.
This issue is tracked with Oracle bug 32376885.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in deleting a standby database
When deleting a standby database, an error is encountered.
DCS-10001:Internal error encountered: Failed to run the asm command:
[/u01/app/19.0.0.0/grid/bin/asmcmd, --nocp, rm, -rf, RECO/ABCDEU]
Error:ORA-29261: bad argument
ORA-06512: at line 4
ORA-15178: directory 'ABCDEU' is not empty; cannot drop this directory
ORA-15260: permission denied on ASM disk group
ORA-06512: at "SYS.X$DBMS_DISKGROUP", line 666
ORA-06512: at line 2 (DBD ERROR: OCIStmtExecute).
Verify the status of the job with the odacli list-jobs command.
Hardware Models
All Oracle Database Appliance hardware models with Oracle Active Data Guard configuration
Workaround
- After deleting the standby database and before recreating the same standby database, perform the following steps:
  - Log in as the oracle user:
    su - oracle
  - Set the environment:
    . oraenv
    ORACLE_SID = null
    ORACLE_HOME = dbhome_path (such as /u01/app/oracle/product/19.0.0.0/dbhome_1)
  - Change to the bin directory:
    cd dbhome_path/bin
  - Remove the leftover database directories:
    asmcmd --privilege sysdba rm -rf +RECO/DBUNIQUENAME/
    asmcmd --privilege sysdba rm -rf +DATA/DBUNIQUENAME/arc10/
    asmcmd --privilege sysdba rm -rf +DATA/DBUNIQUENAME/PASSWORD/
- Recreate the standby database with a different database unique name.
This issue is tracked with Oracle bug 32871772.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in Oracle Data Guard failover operation for 18.14 database
When running the odacli failover-dataguard
command on a
database of version 18.14, an error is encountered.
DCS-10001:Internal error encountered: Unable to precheckFailoverDg11g Dg.
select DATABASE_ROLE, FORCE_LOGGING, FLASHBACK_ON from v$database
ERROR at line 1:
ORA-00600: internal error code, arguments: [kcbgtcr_17], [], [], [], [], [],
[], [], [], [], [], []
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Run the following DGMGRL statements on the system with the database to fail over to:
  DGMGRL> SHOW CONFIGURATION;
  DGMGRL> VALIDATE DATABASE 'DB_UNIQUE_NAME_to_failover_to';
  DGMGRL> FAILOVER TO 'DB_UNIQUE_NAME_to_failover_to';
  DGMGRL> SHOW CONFIGURATION;
- After failover is successful, run the
odacli describe-dataguardstatus -i id
command several times to update the DCS metadata.
This issue is tracked with Oracle bug 32727379.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in Oracle Active Data Guard operations
When performing switchover, failover, and reinstate operations on Oracle Active Data Guard on Oracle Database Appliance, an error is encountered.
PRCZ-2103 : Failed to execute command
"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/bin/dbua" on node
"node1" as user "oracle". Detailed error:
Logs directory:
/u01/app/odaorabase/oracle/cfgtoollogs/dbua/upgrade2021-05-06_01-31-16PM
SEVERE: May 08, 2021 6:50:24 PM oracle.assistants.dbua.prereq.PrereqChecker
logPrereqResults
SEVERE: Starting with Oracle Database 11.2, setting JOB_QUEUE_PROCESSES=0
will disable job execution via DBMS_JOBS and DBMS_SCHEDULER. FIXABLE: MANUAL
Database: ptdkjqt
Cause: The database has JOB_QUEUE_PROCESSES=0.
Action: Set the value of JOB_QUEUE_PROCESSES to a non-zero value, or remove
the setting entirely and accept the Oracle default.
Hardware Models
All Oracle Database Appliance hardware models with Oracle Active Data Guard configuration
Workaround
Follow these steps:
- Use SQL*Plus to access the database and run the following
command:
alter system set JOB_QUEUE_PROCESSES=1000;
- Retry the upgrade command.
This issue is tracked with Oracle bug 32856214.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in the enable apply process after upgrading databases
When running the enable apply process after upgrading databases in an Oracle Data Guard deployment, an error is encountered.
Error: ORA-16664: unable to receive the result from a member
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Restart the standby database in upgrade mode:
srvctl stop database -d <db_unique_name>
Then run the following SQL*Plus command:
STARTUP UPGRADE;
- Continue the enable apply process and wait for log apply process to refresh.
- After some time, check the Data Guard status with the DGMGRL
command:
SHOW CONFIGURATION;
This issue is tracked with Oracle bug 32864100.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in creating Oracle Data Guard status
When configuring Oracle Active Data Guard on Oracle Database Appliance, an error is encountered.
odacli
configure-dataguard
command fails at step
NewDgconfig
with the following error on the standby
system:
ORA-16665: TIME OUT WAITING FOR THE RESULT FROM A MEMBER
Verify the status of the job with the odacli
list-jobs
command.
Hardware Models
All Oracle Database Appliance hardware models with Oracle Data Guard configuration
Workaround
Follow these steps:
- On the standby system, run the following:
export DEMODE=true;
odacli create-dataguardstatus -i dbid -n dataguardstatus_id_on_primary -r configdg.json
export DEMODE=false;
Example
configdg.json
file for a single-node
system:
{
"name": "test1_test7",
"protectionMode": "MAX_PERFORMANCE",
"replicationGroups": [
{
"sourceEndPoints": [
{
"endpointType": "PRIMARY",
"hostName": test_domain1",
"listenerPort": 1521,
"databaseUniqueName": "test1",
"serviceName": "test",
"sysPassword": "***",
"ipAddress": "test_IPaddress"
}
],
"targetEndPoints": [
{
"endpointType": "STANDBY",
"hostName": "test_domain2",
"listenerPort": 1521,
"databaseUniqueName": "test7",
"serviceName": "test",
"sysPassword": "***",
"ipAddress": "test_IPaddress3"
}
],
"transportType": "ASYNC"
}
]
}
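Because the odacli create-dataguardstatus step reads configdg.json, a malformed file (for example, a missing quote or a trailing comma) causes the step to fail. A quick way to check the file before running the command is to pass it through a JSON parser; the helper name and the use of python3 here are assumptions for illustration, not part of the documented procedure:

```shell
# Validate a configdg.json file before passing it to odacli.
# Prints a confirmation and returns 0 when the file parses as JSON;
# python3's stdlib json.tool module is assumed to be available.
validate_config() {
    python3 -m json.tool "$1" > /dev/null && echo "$1: valid JSON"
}

# Illustrative usage:
#   validate_config configdg.json
```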
This issue is tracked with Oracle bug 32719173.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in registering a database
When registering a single instance database on Oracle Database Appliance, if
the RAC option is specified in the odacli register-database
command, an
error is encountered.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Create a single-instance database using Oracle Database
Configuration Assistant (DBCA) and then register the database using the
odacli register-database
command with the RAC
option.
This issue is tracked with Oracle bug 32853078.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in restoring a database
When restoring a database on Oracle Database Appliance, if the NLS setting on the standby database is not America/American, then an error may be encountered.
An error occurs when running the RMAN duplicate task. The RMAN log described in the error message may show RMAN-06136 and ORA-00907 errors.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
None.
This issue is tracked with Oracle bug 32349703.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in recovering a TDE-enabled Oracle RAC or Oracle RAC One Node database
When recovering a TDE-enabled Oracle RAC or Oracle RAC One Node database, an error may be encountered.
When the TDE wallet is located under the
+DATA/DB_UNIQUE_NAME
folder, the following error is observed:
DCS-10001:Internal error encountered: Failed to create the empty keystore. Please check if a keystore 'ewallet.p12' is already present at: '+RECO/<DB_UNIQUE_NAME>/tdewallet'. Please remove or rename it if it does.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Switch to the
oracle
user and create the DB_HOME/dbs/initoracle_sid.ora
file with the following parameters:
db_name=db_name
db_unique_name=db_unique_name
wallet_root=+DATA/DBUNIQUENAME
tde_configuration='KEYSTORE_CONFIGURATION=FILE'
- Set the
ORACLE_SID
as per the database type, for example, DB_NAME_1
for Oracle RAC One Node database and DB_NAME1
for Oracle RAC database. Run the command startup nomount
on the database.
- Restore the TDE wallet using
odacli restore-tdewallet -in db_name
.
- Restore the
SPFILE:
run {
set dbid DBID;
startup nomount;
RESTORE SPFILE TO 'DB_HOME/dbs/spfileoracle_sid.ora' FROM AUTOBACKUP RECOVERY AREA='+RECO' db_unique_name='db_unique_name';
shutdown abort;
}
- Restore the control
file:
run {
set dbid DBID;
startup nomount;
restore controlfile from AUTOBACKUP validate;
shutdown abort;
}
- Recover the
database:
odacli recover-database
- Delete the
DB_HOME/dbs/initoracle_sid.ora
file.
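The pfile created in step 1 above can be generated with a small helper; the function name and the sample values below are illustrative placeholders, and the four parameters written are exactly the ones listed in the step:

```shell
# Sketch of step 1: write the minimal init<oracle_sid>.ora pfile used to
# start the database nomount before restoring the TDE wallet.
# Arguments: pfile path, db_name, db_unique_name (all placeholders here).
make_tde_pfile() {
    pfile=$1
    db_name=$2
    db_unique_name=$3
    cat > "$pfile" <<EOF
db_name=${db_name}
db_unique_name=${db_unique_name}
wallet_root=+DATA/${db_unique_name}
tde_configuration='KEYSTORE_CONFIGURATION=FILE'
EOF
}

# Illustrative invocation with placeholder values.
make_tde_pfile /tmp/initMYDB1.ora mydb mydb_u
```

Remember to delete the file after recovery, as the final step above instructs.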
This issue is tracked with Oracle bug 33640512.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in Reinstating Oracle Data Guard
When reinstating Oracle Data Guard on Oracle Database Appliance, an error is encountered.
odacli reinstate-dataguard
command fails with
the following
error:Unable to reinstate Dg. Reinstate job was executed within 24hrs after failover job.
The dcs-agent.log file has the following error entry:
DGMGRL> Reinstating database "xxxx",
please wait...
Oracle Clusterware is restarting database "xxxx" ...
Connected to "xxxx"
Continuing to reinstate database "xxxx" ...
Error: ORA-16653: failed to reinstate database
Hardware Models
All Oracle Database Appliance hardware models with Oracle Data Guard configuration
Workaround
Follow these steps:
- On the primary machine, get the
standby_became_primary_scn:
SQL> select standby_became_primary_scn from v$database;

STANDBY_BECAME_PRIMARY_SCN
--------------------------
3522449
- On the old primary database, flashback to this SCN with
RMAN with the backup encryption
password:
RMAN> set decryption identified by 'rman_backup_password';
executing command: SET decryption
RMAN> FLASHBACK DATABASE TO SCN 3522449;
...
Finished flashback at 24-SEP-20
RMAN> exit
- On the new primary machine, run the
odacli reinstate-dataguard
command.
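When scripting this workaround, the SCN needed by the RMAN FLASHBACK command in step 2 can be extracted from the SQL*Plus output of step 1. The following sketch assumes the output format shown above (header line, separator line, then the numeric value); the helper name is illustrative:

```shell
# Extract STANDBY_BECAME_PRIMARY_SCN from SQL*Plus query output.
# Keeps the last line that is purely numeric (ignoring surrounding
# whitespace), which is the SCN value in the output format shown above.
extract_scn() {
    awk '/^[[:space:]]*[0-9]+[[:space:]]*$/ { gsub(/[[:space:]]/, ""); scn = $0 }
         END { print scn }'
}

# Illustrative usage (sqlplus invocation details are an assumption):
#   scn=$(echo "select standby_became_primary_scn from v\$database;" \
#         | sqlplus -s / as sysdba | extract_scn)
```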
This issue is tracked with Oracle bug 31884506.
Parent topic: Known Issues When Managing Oracle Database Appliance
Failure in Reinstating Oracle Data Guard
When reinstating Oracle Data Guard on Oracle Database Appliance, an error is encountered.
odacli reinstate-dataguard
command fails with
the following
error:Message:
DCS-10001:Internal error encountered: Unable to reinstate Dg.
The dcs-agent.log file has the following error entry:
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
Hardware Models
All Oracle Database Appliance hardware models with Oracle Data Guard configuration
Workaround
Follow these steps:
- Make sure the database you are reinstating is started in
MOUNT mode. To start the database in MOUNT mode, run this
command:
srvctl start database -d db-unique-name -o mount
- After the above command runs successfully, run the
odacli reinstate-dataguard
command.
This issue is tracked with Oracle bug 32047967.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in updating Role after Oracle Data Guard operations
When performing operations with Oracle Data Guard on Oracle Database Appliance, an error is encountered in updating the Role.
The role shown by the odacli
describe-database
command is not updated after Oracle Data Guard
switchover, failover, and reinstate operations on Oracle Database
Appliance.
Hardware Models
All Oracle Database Appliance hardware models with Oracle Data Guard configuration
Workaround
Run odacli update-registry -n db --force/-f
to update the
database metadata. After the job completes, run the odacli
describe-database
command and verify that dbRole is updated.
This issue is tracked with Oracle bug 31378202.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error when recovering a single-instance database
When recovering a single-instance database, an error is encountered.
DCS-10001:Internal error encountered: DCS-10001:Internal error encountered:
Missing arguments : required sqlplus connection information is not
provided
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Perform recovery of the single-instance database on the node where the database is running.
This issue is tracked with Oracle bug 31399400.
Parent topic: Known Issues When Managing Oracle Database Appliance
Job history not erased after running cleanup.pl
After running cleanup.pl
, job history is not
erased.
After running cleanup.pl
, when you run
/opt/oracle/dcs/bin/odacli list-jobs
commands, the list is not
empty.
Hardware Models
All Oracle Database Appliance hardware models for bare metal deployments
Workaround
- Stop the DCS Agent by running the following commands on both nodes.
For Oracle Linux 6, run:
initctl stop initdcsagent
For Oracle Linux 7, run:
systemctl stop initdcsagent
- Run the cleanup script sequentially on both nodes.
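Because the stop command differs between Oracle Linux 6 (Upstart) and Oracle Linux 7 (systemd), a small wrapper can pick the right one from step 1; the function name is illustrative and the two commands are the ones given above:

```shell
# Return the DCS Agent stop command for a given Oracle Linux major
# version, as documented in the workaround steps above.
dcsagent_stop_cmd() {
    case "$1" in
        6) echo "initctl stop initdcsagent" ;;
        7) echo "systemctl stop initdcsagent" ;;
        *) echo "unsupported Oracle Linux version: $1" >&2; return 1 ;;
    esac
}

# Illustrative usage (reading the version from /etc/oracle-release on the
# appliance is an assumption):
#   eval "$(dcsagent_stop_cmd 7)"
```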
This issue is tracked with Oracle bug 30529709.
Parent topic: Known Issues When Managing Oracle Database Appliance
Inconsistency in ORAchk summary and details report page
The ORAchk report summary on the Browser User Interface may show different counts of Critical, Failed, and Warning issues than the report detail page.
Hardware Models
Oracle Database Appliance hardware models for bare metal deployments
Workaround
Ignore counts of Critical, Failed, and Warning issues in the ORAchk report summary on the Browser User Interface. Check the report detail page.
This issue is tracked with Oracle bug 30676674.
Parent topic: Known Issues When Managing Oracle Database Appliance
Missing DATA, RECO, and REDO entries when dbstorage is rediscovered
Running the odacli update-registry
command with -n
all --force
or -n dbstorage --force
option can result in metadata corruption.
Hardware Models
All Oracle Database Appliance hardware models for bare metal deployments
Workaround
Use the -all
option only on migrated systems where all the databases in the system
were created using OAKCLI. On other systems
that run on the DCS stack, update all components other
than dbstorage individually, using the
odacli update-registry -n
component_name_to_be_updated_excluding_dbstorage
command.
This issue is tracked with Oracle bug 30274477.
Parent topic: Known Issues When Managing Oracle Database Appliance
The odaeraser tool does not work if oakd is running in non-cluster mode
After cleaning up the deployment, the Secure Eraser tool does not work if oakd is running in non-cluster mode.
Hardware Models
All Oracle Database Appliance Hardware bare metal systems
Workaround
After cleanup of the deployment, oakd
is started in non-cluster mode and cannot be stopped using the odaadmcli stop oak
command. In this case, if the Secure Eraser tool is run, the odaeraser command fails.
Use the command odaadmcli shutdown oak
to stop oakd
.
This issue is tracked with Oracle bug 28547433.
Parent topic: Known Issues When Managing Oracle Database Appliance
Issues with the Web Console on Microsoft web browsers
Oracle Database Appliance Web Console has issues on Microsoft Edge and Microsoft Internet Explorer web browsers.
- Oracle Database Appliance Web Console does not display correctly on Microsoft Edge and Microsoft Internet Explorer web browsers.
- Advanced Information for the appliance does not display on Microsoft Internet Explorer web browser.
- Job activity status does not refresh in the Web Console on Microsoft Internet Explorer web browser.
- After configuring the oda-admin password, the following error is
displayed:
Failed to change the default user (oda-admin) account password. Status Code: 500 DCS-10001: DCS-10001:Internal error encountered: User not authorized
Workaround: Close the Microsoft Internet Explorer browser session and open another browser session.
Hardware Models
All Oracle Database Appliance hardware models for bare metal deployments
Workaround
To access the Web Console, use either Google Chrome or Firefox.
This issue is tracked with Oracle bugs 30077007, 30099089, and 29887027.
Parent topic: Known Issues When Managing Oracle Database Appliance
Unrecognized Token Messages Appear in /var/log/messages
After updating Oracle Database Appliance, unrecognized token messages appear in /var/log/messages
.
Updating to Oracle Database Appliance 12.1.2.11.0 updates the Oracle VM Server version to 3.4.3. After updating, the following messages appear in /var/log/messages
:
Unrecognized token: "max_seq_redisc"
Unrecognized token: "rereg_on_guid_migr"
Unrecognized token: "aguid_inout_notice"
Unrecognized token: "sm_assign_guid_func"
Unrecognized token: "reports"
Unrecognized token: "per_module_logging"
Unrecognized token: "consolidate_ipv4_mask"
You can ignore the messages for these parameters; they do not impact the InfiniBand compliant Subnet Manager and Administration (opensm) functionality. However, Oracle recommends removing the parameters to avoid flooding /var/log/messages
.
Hardware Models
Oracle Database Appliance X6-2-HA and X5-2 with InfiniBand
Workaround
Perform the following to remove the parameters:
- After patching, update the /etc/opensm/opensm.conf
file in bare metal deployments and in Dom0 in virtualized platform environments to remove the parameters.
cat /etc/opensm/opensm.conf | egrep -w 'max_seq_redisc|rereg_on_guid_migr|aguid_inout_notice|sm_assign_guid_func|reports|per_module_logging|consolidate_ipv4_mask' | grep -v ^#
max_seq_redisc 0
rereg_on_guid_migr FALSE
aguid_inout_notice FALSE
sm_assign_guid_func uniq_count
reports 2
per_module_logging FALSE
consolidate_ipv4_mask 0xFFFFFFFF
- Reboot. The messages will not appear after rebooting the node.
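The parameter removal in step 1 can be scripted. The following sketch deletes the listed tokens from a copy of the file, so the original opensm.conf can be compared before being replaced; the helper name is an illustration, not an Oracle-provided tool:

```shell
# Remove the unsupported opensm tokens listed in the step above.
# Writes the cleaned result to <conf>.cleaned; the original file is left
# untouched so the change can be reviewed before it replaces the original.
strip_opensm_tokens() {
    conf=$1
    sed -E '/^(max_seq_redisc|rereg_on_guid_migr|aguid_inout_notice|sm_assign_guid_func|reports|per_module_logging|consolidate_ipv4_mask)([[:space:]]|$)/d' \
        "$conf" > "${conf}.cleaned"
}

# Illustrative usage:
#   strip_opensm_tokens /etc/opensm/opensm.conf
#   diff /etc/opensm/opensm.conf /etc/opensm/opensm.conf.cleaned
```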
This issue is tracked with Oracle bug 25985258.
Parent topic: Known Issues When Managing Oracle Database Appliance