4 Known Issues with Oracle Database Appliance in This Release
The following are known issues when deploying, updating, and managing Oracle Database Appliance in this release.
- Known Issues When Patching Oracle Database Appliance
Understand the known issues when patching Oracle Database Appliance to this release.
- Known Issues When Deploying Oracle Database Appliance
Understand the known issues when provisioning or deploying Oracle Database Appliance.
- Known Issues When Managing Oracle Database Appliance
Understand the known issues when managing or administering Oracle Database Appliance.
Known Issues When Patching Oracle Database Appliance
Understand the known issues when patching Oracle Database Appliance to this release.
- Retrying update-server command after odacli update-server command fails
When you patch Oracle Database Appliance to release 19.11, the odacli update-server command fails.
- Retrying odacli update-dbhome command with -imp option after update fails
When you patch database homes to Oracle Database Appliance release 19.11, the odacli update-dbhome command fails.
- Error in stopping Oracle Grid Infrastructure when patching Oracle Database Appliance
If you create an Oracle Data Guard or Oracle Database with type network using odacli create-network, then there is an error in stopping Oracle Grid Infrastructure during patching.
- Error in running the update-dbhome command
When you patch database homes to Oracle Database Appliance release 19.11, the odacli update-dbhome command fails.
- Error in upgrading from Oracle Linux 6 to Oracle Linux 7 during Oracle Database Appliance patching
During the upgrade from Oracle Linux 6 to Oracle Linux 7, as part of the Oracle Database Appliance upgrade from release 18.8 to 19.x, an error is encountered.
- Error in updating DCS components when patching Oracle Database Appliance
When updating DCS components during patching of Oracle Database Appliance, an error is encountered.
- Error when patching 11.2.0.4 Database homes to Oracle Database Appliance release 19.10
Patching of database homes of versions 11.2.0.4.180717, 11.2.0.4.170814, or 11.2.0.4.180417 to version 11.2.0.4.210119 may fail.
- Error message displayed even when patching Oracle Database Appliance is successful
Although patching of Oracle Database Appliance was successful, an error message is displayed.
- Error in updating storage when patching Oracle Database Appliance
When updating storage during patching of Oracle Database Appliance, an error is encountered.
- Error in Oracle Grid Infrastructure upgrade
Oracle Grid Infrastructure upgrade fails, though the rootupgrade.sh script ran successfully.
- Error when running ORAChk or updating the server or database home
When running Oracle ORAchk or the commands odacli create-prepatchreport, odacli update-server, or odacli update-dbhome, an error is encountered.
- Error in patching database homes
An error is encountered when patching database homes on databases that have Standard Edition High Availability enabled.
- Error in server patching
An error is encountered when patching the server.
- Server status not set to Normal when patching
When patching Oracle Database Appliance, an error is encountered.
- Error when patching to 12.1.0.2.190716 Bundle Patch
When patching Oracle Database release 12.1.0.2 to Oracle Database 12.1.0.2.190716 Bundle Patch, an error is encountered.
- Patching of M.2 drives not supported
Patching of M.2 drives (local disks SSDSCKJB48 and SSDSCKJB480G7) is not supported.
- 11.2.0.4 databases fail to start after patching
After patching Oracle Database Appliance to release 18.3, databases of version 11.2.0.4 fail to start.
- Error in patching Oracle Database Appliance
When applying the server patch for Oracle Database Appliance, an error is encountered.
Retrying update-server command after odacli update-server command fails
When you patch Oracle Database Appliance to release 19.11, the odacli update-server command fails.
Although the odacli update-server job is successful, the odacli describe-job output may show a message about missing patches on the source home. For example:
Message: Contact Oracle Support Services to request patch(es) "bug #". The patched "OraGrid191100" is missing the patches for bug "bug#" which is present in the source "OraGrid19000"
For release 19.11, a missing patch error for bug number 29511771 is
expected. This patch contains Perl version 5.28 for the source grid home. Oracle
Database Appliance release 19.11 includes the later Perl version 5.32 in the Oracle
Grid Infrastructure clone files, and hence, you can ignore the error. For any other
missing patches reported in the odacli describe-job
command output,
contact Oracle Support to request the patches for Oracle Clusterware release
19.11.
Hardware Models
All Oracle Database Appliance hardware models with Oracle Database Appliance release 19.11
Workaround
Review the error messages reported in the odacli
describe-job
command output for any missing patches other than the
patch with bug number 29511771, and contact Oracle Support to request the patches
for Oracle Clusterware release 19.11.
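The check above can be sketched in shell. This is a minimal helper, not an Oracle-provided tool; the quoted `bug "NNNNNNNN"` format is an assumption based on the sample message shown earlier.

```shell
# Extract the bug numbers that odacli describe-job reports as missing,
# dropping the expected 29511771 (Perl 5.28, safe to ignore in 19.11).
# Assumption: missing patches appear in the output as 'bug "NNNNNNNN"'.
filter_missing_patches() {
  grep -o 'bug "[0-9][0-9]*"' | grep -o '[0-9][0-9]*' | grep -v '^29511771$' | sort -u
}
```

Pipe the job output through it, for example `odacli describe-job -i <job_id> | filter_missing_patches`; any bug numbers printed are the ones to raise with Oracle Support.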
This issue is tracked with Oracle bug 32973488.
Parent topic: Known Issues When Patching Oracle Database Appliance
Retrying odacli update-dbhome command with -imp option after update fails
When you patch database homes to Oracle Database Appliance release 19.11, the
odacli update-dbhome
command fails.
When you run the odacli update-dbhome command, the following error message is displayed:
DCS-10001:Internal error encountered: Contact Oracle Support Services to request patch(es) "bug#". Then supply the --ignore-missing-patch|-imp to retry the command.
The following patches are reported as missing, depending on the database release:
- 27138071 and 30508171, applicable to Oracle Database release 12.1
- 28581244 and 30508161, applicable to Oracle Database release 12.2
- 28628507 and 31225444, applicable to Oracle Database release 18c
- 29511771, applicable to Oracle Database release 19c
These patches contain the earlier versions of Perl 5.26 and Perl 5.28
for the source database home. Oracle Database Appliance release 19.11 includes the
later Perl version 5.32 in the database clone files, and hence, you can ignore the
error. You must rerun the odacli update-dbhome command with the -imp option.
Hardware Models
All Oracle Database Appliance hardware models with Oracle Database Appliance release 19.11
Workaround
Run the odacli update-dbhome command again with the -imp option:
# /opt/oracle/dcs/bin/odacli update-dbhome --dbhomeid 7c67c5b4-f585-4ba9-865f-c719c63c0a6e -v 19.11.0.0.0 -imp
This issue is tracked with Oracle bug 32915897.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in stopping Oracle Grid Infrastructure when patching Oracle Database Appliance
If you create an Oracle Data Guard or Oracle Database with type network using
odacli create-network
, then there is an error in stopping Grid
Infrastructure during patching.
During patching, messages similar to the following are displayed:
CRS-2673: Attempting to stop 'test_vip.vip' on 'test'
CRS-2677: Stop of 'test_vip.vip' on 'test' succeeded
CRS-2675: Stop of 'test_vip.vip' on 'test' failed
CRS-2677: Stop of 'vm2.kvm' on 'test' succeeded
CRS-2673: Attempting to stop 'ora.data.vs1.acfs' on 'test'
Hardware Models
All Oracle Database Appliance hardware models with Oracle Database Appliance release 19.6 or later
Workaround
Stop the Virtual IP and listener manually before patching to Oracle Database Appliance release 19.11, and then ignore this error during patching.
This issue is tracked with Oracle bug 32224312.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in running the update-dbhome command
When you patch database homes to Oracle Database Appliance release 19.11, the
odacli update-dbhome
command fails.
When you run the odacli update-dbhome command, due to the inclusion of the non-rolling DST patch, the job waits for 12,000 seconds (about 3 hours and 20 minutes). The following error message is displayed:
DCS-10001:Internal error encountered: PRCC-1021 :
One or more of the submitted commands did not execute successfully.
PRCC-1025 : Command submitted on node cdb1 timed out after 12,000 seconds..
"PRGO-1693 : The database patching cannot be completed in a rolling manner because the target patched home at "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_4" contains non-rolling bug fixes "32327201" compared to the source home at "/u01/app/oracle/product/19.0.0.0/dbhome_1"
Hardware Models
All Oracle Database Appliance hardware models with Oracle Database Appliance release 19.11
Workaround
Run the datapatch utility manually from the patched database home:
/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_4/OPatch/datapatch
This issue is tracked with Oracle bug 32801095.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in upgrading from Oracle Linux 6 to Oracle Linux 7 during Oracle Database Appliance patching
During the upgrade from Oracle Linux 6 to Oracle Linux 7, as part of the Oracle Database Appliance upgrade from release 18.8 to 19.x, an error is encountered.
The following error is displayed when you run the odacli update-server command:
DCS-10059:Clusterware is not running on all nodes
The trace file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_ora_25383.trc contains the following error:
KSIPC: ksipc_open: Failed to complete ksipc_open at process startup!!
KSIPC: ksipc_open: ORA-27504: IPC error creating OSD context
This occurs on systems where the STIG Oracle Linux 6 rules are deployed, because the RDS/RDS_TCP modules are not loaded (due to rule OL6-00-000126).
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Edit the /etc/modprobe.d/modprobe.conf file.
- Comment out the following lines:
# The RDS protocol is disabled
# install rds /bin/true
- Restart the nodes.
- Run the odacli update-server command again.
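The edit in step 2 can be scripted. This is a sketch; it assumes the STIG rule writes the literal line `install rds /bin/true` into /etc/modprobe.d/modprobe.conf.

```shell
# Comment out the STIG entry that blocks the rds module, keeping a .bak copy.
# Assumption: the file contains the exact line 'install rds /bin/true'.
comment_rds_rule() {
  sed -i.bak 's|^install rds /bin/true|# install rds /bin/true|' "$1"
}
```

After running it against /etc/modprobe.d/modprobe.conf, restart the nodes so the rds module can load, then rerun odacli update-server.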
This issue is tracked with Oracle bug 31881957.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in updating DCS components when patching Oracle Database Appliance
When updating DCS components when patching Oracle Database Appliance, an error is encountered.
/opt
directory is full, then the following error
is seen when running the odacli update-dcscomponents
command:
java.io.IOException: No space left on device
Hardware Models
All Oracle Database Appliance hardware models
Workaround
All patches and clone files are stored in the /opt
directory. Use the command odacli cleanup-patchrepo
and remove
unnecessary patches. Retry the operation after cleaning up the directory.
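A quick way to check headroom on /opt before retrying is sketched below. The helper is not part of odacli, and any threshold you compare against is your own choice, not an Oracle-documented value.

```shell
# Report available space (in KB) on the file system holding a given path.
# Column 4 of POSIX 'df -P' output is the available space.
opt_free_kb() {
  df -Pk "$1" | awk 'NR==2 {print $4}'
}
```

For example, run `opt_free_kb /opt` before odacli update-dcscomponents; if the value looks low, run odacli cleanup-patchrepo first and retry.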
This issue is tracked with Oracle bug 32534150.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error when patching 11.2.0.4 Database homes to Oracle Database Appliance release 19.10
Patching of database homes of versions 11.2.0.4.180717, 11.2.0.4.170814, or 11.2.0.4.180417 to version 11.2.0.4.210119 may fail.
- When the DCS Agent version is 19.9, and you patch database homes from 11.2.0.4.180717, 11.2.0.4.170814, or 11.2.0.4.180417 to 11.2.0.4.201020 (the database home version released with Oracle Database Appliance release 19.9)
- When the DCS Agent version is 19.10, and you patch database homes from 11.2.0.4.180717, 11.2.0.4.170814, or 11.2.0.4.180417 to 11.2.0.4.210119 (the database home version released with Oracle Database Appliance release 19.10)
- When the DCS Agent version is 19.10, and you patch database homes from 11.2.0.4.180717, 11.2.0.4.170814, or 11.2.0.4.180417 to 11.2.0.4.200114 (the database home version released with Oracle Database Appliance release 19.6)
This error occurs only when patching Oracle Database homes of versions 11.2.0.4.180717, 11.2.0.4.170814, or 11.2.0.4.180417 using the 19.10.0.0.0 version of the DCS Agent.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Patch your 11.2.0.4 Oracle Database home to any version earlier than 11.2.0.4.210119 (the version released with Oracle Database Appliance release 19.10) while the DCS Agent is still of a version earlier than 19.10.0.0.0, and then update the DCS Agent to 19.10.
Note that once you patch the DCS Agent to 19.10.0.0.0, patching of these older 11.2.0.4 homes fails.
This issue is tracked with Oracle bug 32498178.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error message displayed even when patching Oracle Database Appliance is successful
Although patching of Oracle Database Appliance was successful, an error message is displayed.
The following error message is displayed when you run the odacli update-dcscomponents command:
# time odacli update-dcscomponents -v 19.10.0.0.0
DCS-10008:Failed to update DCScomponents: 19.10.0.0.0
Internal error while patching the DCS components :
DCS-10231:Cannot proceed. Pre-checks for update-dcscomponents failed. Refer
to /opt/oracle/dcs/log/-dcscomponentsPreCheckReport.log on node 1 for
details.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
This is a timing issue with setting up the SSH equivalence.
Run the odacli update-dcscomponents
command again and
the operation completes successfully.
This issue is tracked with Oracle bug 32553519.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in updating storage when patching Oracle Database Appliance
When updating storage during patching of Oracle Database Appliance, an error is encountered.
# odacli describe-job -i 765c5601-f4ad-44f0-a989-45a0b7432a0d
Job details
----------------------------------------------------------------
ID: 765c5601-f4ad-44f0-a989-45a0b7432a0d
Description: Storage Firmware Patching
Status: Failure
Created: February 24, 2021 8:15:21 AM PST
Message: ZK Wait Timed out. ZK is Offline
Task Name Start Time End Time Status
---------------------------------------- ------------------------------------------------------------------
Storage Firmware Patching February 24, 2021 8:18:06 AM PST February 24, 2021 8:18:48 AM PST Failure
task:TaskSequential_140 February 24, 2021 8:18:06 AM PST February 24, 2021 8:18:48 AM PST Failure
Applying Firmware Disk Patches February 24, 2021 8:18:28 AM PST February 24, 2021 8:18:48 AM PST Failure
Hardware Models
Oracle Database Appliance X5-2 hardware models with InfiniBand
Workaround
- Check the private network (ibbond0) and ping the private IPs from each node.
- If the private IPs are not reachable, then restart the private network interfaces on both nodes and retry.
- Check the ZooKeeper status.
- On Oracle Database Appliance high-availability deployments, if the ZooKeeper status is not in the leader or follower mode, then continue to the next job.
This issue is tracked with Oracle bug 32550378.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in Oracle Grid Infrastructure upgrade
Oracle Grid Infrastructure upgrade fails, though the
rootupgrade.sh
script ran successfully.
The log file in /opt/oracle/oak/log/<NODENAME>/patch/19.8.0.0.0/ contains the following errors:
ERROR: The clusterware active state is UPGRADE_AV_UPDATED
INFO: ** Refer to the release notes for more information **
INFO: ** and suggested corrective action **
This occurs because, when the root upgrade scripts run on the last node, the active version is not set to the correct state.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- As the root user, run the following command on the second node:
/u01/app/19.0.0.0/grid/rootupgrade.sh -f
- After the command completes, verify that the active version of the cluster is updated to UPGRADE FINAL:
/u01/app/19.0.0.0/grid/bin/crsctl query crs activeversion -f
The cluster upgrade state is [UPGRADE FINAL]
- Run the Oracle Database Appliance server patching process again to upgrade Oracle Grid Infrastructure.
This issue is tracked with Oracle bug 31546654.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error when running ORAChk or updating the server or database home
When running Oracle ORAchk or the commands odacli
create-prepatchreport
, odacli update-server
, odacli
update-dbhome
, an error is encountered.
The following error message is displayed:
- Table AUD$[FGA_LOG$] should use Automatic Segment Space Management
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- To verify the segment space management policy currently in use by the AUD$ and FGA_LOG$ tables, use the following SQL*Plus command:
select t.table_name, ts.segment_space_management from dba_tables t, dba_tablespaces ts where ts.tablespace_name = t.tablespace_name and t.table_name in ('AUD$','FGA_LOG$');
- The output should be similar to the following:
TABLE_NAME                     SEGMEN
------------------------------ ------
FGA_LOG$                       AUTO
AUD$                           AUTO
- If one or both of the AUD$ or FGA_LOG$ tables return "MANUAL", use the DBMS_AUDIT_MGMT package to move them to the SYSAUX tablespace:
BEGIN
DBMS_AUDIT_MGMT.set_audit_trail_location(audit_trail_type => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD, --this moves table AUD$
audit_trail_location_value => 'SYSAUX');
END;
BEGIN
DBMS_AUDIT_MGMT.set_audit_trail_location(audit_trail_type => DBMS_AUDIT_MGMT.AUDIT_TRAIL_FGA_STD, --this moves table FGA_LOG$
audit_trail_location_value => 'SYSAUX');
END;
This issue is tracked with Oracle bug 27856448.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in patching database homes
An error is encountered when patching database homes on databases that have Standard Edition High Availability enabled.
When you run odacli update-dbhome -v release_number on database homes that have Standard Edition High Availability enabled, an error is encountered:
WARNING::Failed to run the datapatch as db <db_name> is not in running state
Hardware Models
All Oracle Database Appliance hardware models with High-Availability deployments
Workaround
- Locate the running node of the target database instance:
srvctl status database -database dbUniqueName
Or, relocate the single-instance database instance to the required node:
odacli modify-database -g node_number (-th node_name)
- On the running node, manually run the datapatch for non-CDB databases:
dbhomeLocation/OPatch/datapatch
- For CDB databases, locate the PDB list using SQL*Plus, then run datapatch for those PDBs:
select name from v$containers where open_mode='READ WRITE';
dbhomeLocation/OPatch/datapatch -pdbs pdb_names_found_in_previous_step_divided_by_comma
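Building the comma-separated -pdbs value from the query output can be sketched in shell. The helper below is an illustration, not part of OPatch; it assumes the PDB names arrive one per line on stdin.

```shell
# Join PDB names (one per line, as returned by the v$containers query)
# into the comma-separated list that datapatch -pdbs expects.
join_pdbs() {
  paste -s -d, -
}
```

For example, `dbhomeLocation/OPatch/datapatch -pdbs "$(... | join_pdbs)"`, where the elided part is your SQL*Plus invocation of the query above.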
This issue is tracked with Oracle bug 31654816.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in server patching
An error is encountered when patching the server.
When you run odacli update-server -v release_number, the following error is encountered:
DCS-10001:Internal error encountered: patchmetadata for 19.6.0.0.0 missing
target version for GI.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Change the file ownership temporarily to the appropriate grid user for the osdbagrp binary in the grid_home/bin location. For example:
$ chown -R grid:oinstall /u01/app/18.0.0.0/grid/bin/osdbagrp
- Run either the update-registry -n gihome or the update-registry -n system command.
This issue is tracked with Oracle bug 31125258.
Parent topic: Known Issues When Patching Oracle Database Appliance
Server status not set to Normal when patching
When patching Oracle Database Appliance, an error is encountered.
When patching the appliance, the odacli
update-server
command fails with the
following error:
DCS-10001:Internal error encountered: Server upgrade state is not NORMAL node_name
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Run the command:
Grid_home/bin/cluvfy stage -post crsinst -collect cluster -gi_upgrade -n all
- Ignore the following two warnings:
Verifying OCR Integrity ...WARNING
PRVG-6017 : OCR backup is located in the same disk group "+DATA" as OCR.
Verifying Single Client Access Name (SCAN) ...WARNING
PRVG-11368 : A SCAN is recommended to resolve to "3" or more IP
- Run the command again until the output displays only the two warnings above. The status of Oracle Clusterware should then be Normal again.
- You can verify the status with the command:
Grid_home/bin/crsctl query crs activeversion -f
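The "only the two expected warnings remain" check can be sketched in shell. This helper is an illustration only; the two warning codes (PRVG-6017 and PRVG-11368) are taken from the output above, and the exact cluvfy output format may vary.

```shell
# Count WARNING lines in cluvfy output (stdin) other than the two
# expected codes; rerun cluvfy until this prints 0.
unexpected_warnings() {
  grep 'WARNING' | grep -Ecv 'PRVG-6017|PRVG-11368'
}
```

For example: `Grid_home/bin/cluvfy stage -post crsinst -collect cluster -gi_upgrade -n all | unexpected_warnings`.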
This issue is tracked with Oracle bug 30099090.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error when patching to 12.1.0.2.190716 Bundle Patch
When patching Oracle Database release 12.1.0.2 to Oracle Database 12.1.0.2.190716 Bundle Patch, an error is encountered.
The ODACLI job displays the following error:
DCS-10001:Internal error encountered: Failed to run SQL script: datapatch script.
The datapatch log contains the following entry:
Prereq check failed, exiting without installing any patches.
Hardware Models
All Oracle Database Appliance hardware models with bare metal deployments
Workaround
Install the same patch again.
This issue is tracked with Oracle bugs 30026438 and 30155710.
Parent topic: Known Issues When Patching Oracle Database Appliance
Patching of M.2 drives not supported
Patching of M.2 drives (local disks SSDSCKJB48 and SSDSCKJB480G7) is not supported.
These drives are displayed when you run the odacli describe-component command. Patching is not supported for either of the two known versions, 0112 and 0121, of the M.2 disk. Patching the LSI controller version 13.00.00.00 to version 16.00.01.00 is also not supported. However, on some Oracle Database Appliance X8-2 models, the installed LSI controller version may be 16.00.01.00.
Hardware Models
Oracle Database Appliance bare metal deployments
Workaround
None
This issue is tracked with Oracle bug 30249232.
Parent topic: Known Issues When Patching Oracle Database Appliance
11.2.0.4 databases fail to start after patching
After patching Oracle Database Appliance to release 18.3, databases of version 11.2.0.4 fail to start.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Databases of versions 11.2.0.4.170814 and 11.2.0.4.171017 must be manually started after patching to Oracle Database Appliance release 18.3. Start each database with the following command:
srvctl start database -db db_unique_name
This issue is tracked with Oracle bug 28815716.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in patching Oracle Database Appliance
When applying the server patch for Oracle Database Appliance, an error is encountered.
Error Encountered When Patching Bare Metal Systems:
When patching the appliance on bare metal systems, the odacli
update-server
command fails with the following error:
Please stop TFA before server patching.
To resolve this issue, follow the steps described in the Workaround.
Error Encountered When Patching Virtualized Platform:
When patching the appliance on Virtualized Platform, patching fails with an error similar to the following:
INFO: Running prepatching on local node
WARNING: errors seen during prepatch on local node
ERROR: Unable to apply the patch 1
Check the prepatch log file generated in the directory
/opt/oracle/oak/log/hostname/patch/18.8.0.0.0
. You can also view
the prepatch log for the last run with the command ls -lrt prepatch_*.log
.
Check the last log file in the command output.
In the log file, search for entries similar to the following:
ERROR: date_time_stamp: TFA is running on one or more nodes.
WARNING: date_time_stamp: Shutdown TFA and then restart patching
INFO: date_time_stamp: Read the Release Notes for additional information.
To resolve this issue, follow the steps described in the Workaround.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
For bare metal systems:
- Run tfactl stop on all the nodes in the cluster.
- Restart patching once Oracle TFA Collector has stopped on all nodes.
For Virtualized Platform:
- Run /etc/init.d/init.tfa stop on all the nodes in the cluster.
- Restart patching once Oracle TFA Collector has stopped on all nodes.
This issue is tracked with Oracle bug 30260318.
Parent topic: Known Issues When Patching Oracle Database Appliance
Known Issues When Deploying Oracle Database Appliance
Understand the known issues when provisioning or deploying Oracle Database Appliance.
- Error in creating db system
The odacli create-dbsystem operation fails due to errors.
- Error in modifying database shape for dbsystem
When modifying the database shape in a dbsystem on Oracle Database Appliance, an error is encountered.
- Error in restoring a TDE-enabled database
When restoring a TDE-enabled database on Oracle Database Appliance, an error is encountered.
- Error in creating a database
When creating a database on Oracle Database Appliance, an error is encountered.
- Error in recovering a database
When recovering a database on Oracle Database Appliance, an error is encountered.
- Error in creating a database
When creating a database on Oracle Database Appliance, an error is encountered.
- Db System options not available in Browser User Interface
Some operations supported with ODACLI commands on dbsystems are not available in the Browser User Interface.
- Error in adding JBOD
When you add a second JBOD to your Oracle Database Appliance deployment on which a DB system is running, an error is encountered.
- Error in provisioning appliance after running cleanup.pl
Errors are encountered in provisioning the appliance after running cleanup.pl.
- Error in registering a database
When registering a database on Oracle Database Appliance, an error is encountered.
- Error in updating a database
When updating a database on Oracle Database Appliance, an error is encountered.
- Error in running tfactl diagcollect command on remote node
When running the tfactl diagcollect command on Oracle Database Appliance, an error is encountered.
- Error in running tfactl diagcollect command
When running the tfactl diagcollect command on Oracle Database Appliance, an error is encountered.
- TFA disabled after patching Oracle Database Appliance
After patching Oracle Database Appliance, the TFA status shows as disabled.
- Error when upgrading database from 11.2.0.4 to 12.1 or 12.2
When upgrading databases from 11.2.0.4 to 12.1 or 12.2, an error is encountered.
- Error when upgrading 12.1 single-instance database
When upgrading a 12.1 single-instance database, a job failure error is encountered.
- Failure in creating RECO disk group during provisioning
When provisioning Oracle Database Appliance X8-2-HA with High Performance configuration containing default storage and expansion shelf, creation of the RECO disk group fails.
- Simultaneous creation of two Oracle ACFS Databases fails
If you try to create two Oracle ACFS databases on a system where there is no database or database storage already created, then database creation fails for one of the databases with an error.
- Error encountered after running cleanup.pl
Errors are encountered in running odacli commands after running cleanup.pl.
- Accelerator volume for data is not created on flash storage
The accelerator volume for data is not created on flash storage, for databases created during provisioning of the appliance.
- Errors in clone database operation
Clone database operation fails due to errors.
- Clone database operation fails
For Oracle Database release 12.1 databases, the database clone creation may fail because the default compatible version from Oracle binaries was set to 12.0.0.0.0.
Error in creating db system
The odacli create-dbsystem operation fails due to errors.
The following error message is displayed:
DCS-10032:Resource of type 'Virtual Network' with name 'pubnet' is not found.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Restart the DCS agent. For high-availability systems, restart the DCS agent on both nodes.
systemctl restart initdcsagent
This issue is tracked with Oracle bug 32740754.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in modifying database shape for dbsystem
When modifying the database shape in a dbsystem on Oracle Database Appliance, an error is encountered.
The command odacli modify-dbsystem does not modify the database with the new dbsystem shape.
Hardware Models
All Oracle Database Appliance hardware models with dbsystem deployments
Workaround
For shape scale down, modify the dbsystem database before running the command odacli modify-dbsystem with the target dbshape. For shape scale up, modify the dbsystem database after running the command odacli modify-dbsystem with the target dbshape. Use the following command:
odacli modify-database -in db_name -s dbshape
This issue is tracked with Oracle bug 32705745.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in restoring a TDE-enabled database
When restoring a TDE-enabled database on Oracle Database Appliance, an error is encountered.
The following error message is displayed:
DCS-10001:Internal error encountered: Failed to set the ownership of the TDE Wallet.
This error does not occur if a database was already created on the newly provisioned system.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Log in as the grid user and delete the directory corresponding to the failed database under the DATA disk group:
# su - grid
$ asmcmd
ASMCMD> rm -rf +DATA/<DBUNIQUENAME>
ASMCMD> exit
- As the grid user, start a SQL*Plus session with sysasm credentials:
$ sqlplus / as sysasm
SQL> alter diskgroup data add user 'oracle';
Diskgroup altered.
- Delete the failed database and retry the iRestore operation.
- Delete the failed database and retry the iRestore operation.
This issue is tracked with Oracle bug 32861139.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in creating a database
When creating a database on Oracle Database Appliance, an error is encountered.
The command odacli create-database fails with the following error:
ORA-49802: missing read, write, or execute permission on specified ADR home directory
The trace file may contain the following entries:
ORA-00600: internal error code, arguments: [dbkc_init_bs_ctx-10], [48189],[], [], [], [], [], [], [], [], [], []
Hardware Models
All Oracle Database Appliance hardware models
Workaround
If the database home is, for example, /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1, then run the following commands as the database user oracle, on the node that reported the error:
mv /u01/app/odaorabase/oracle/diag /u01/app/odaorabase/oracle/diag.ori
$ORACLE_HOME/bin/diagsetup clustercheck=false basedir=/u01/app/odaorabase/oracle oraclehome=$ORACLE_HOME
This issue is tracked with Oracle bug 32903268.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in recovering a database
When recovering a database on Oracle Database Appliance, an error is encountered.
When you run odacli recover-database on a Standard Edition High Availability database, the following error message is displayed:
DCS-10001:Internal error encountered: Unable to get valid database node number to post recovery.
Hardware Models
All Oracle Database Appliance high-availability hardware models
Workaround
- Get the list of configured nodes:
srvctl config database -db db_name | grep "Configured nodes" | awk '{print $3}'
The output is of the form nodeX,nodeY.
- Modify the database to run on a single node:
srvctl modify database -db db_name -node nodeX
- Recover the database:
odacli recover-database
- After the recovery completes, restore the original node configuration:
srvctl stop database -db db_name
srvctl modify database -db db_name -node nodeX,nodeY
srvctl start database -db db_name
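Extracting the first node from the srvctl output can be sketched in shell. This is an illustration only; it assumes the output contains a line of the form `Configured nodes: nodeX,nodeY`.

```shell
# Pull the first configured node out of 'srvctl config database' output.
# Assumption: the output has a 'Configured nodes: nodeX,nodeY' line.
first_node() {
  awk -F': *' '/Configured nodes/ {split($2, a, ","); print a[1]}'
}
```

For example, `srvctl config database -db db_name | first_node` yields the node to pass to `srvctl modify database -db db_name -node`.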
This issue is tracked with Oracle bug 32928688.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in creating a database
When creating a database on Oracle Database Appliance, an error is encountered.
The command odacli create-database fails with the following error:
DCS-10001:Internal error encountered: Failed to set File systems dependency
This error occurs when you try to create an Oracle ASM database on an existing home located on the local file system.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Do not create a database on the local home
/u01/app/<dbuser>/product/...
. You can create a
new database home on the Oracle ACFS file system. Then create a database in
the new Oracle home.
This issue is tracked with Oracle bug 32928462.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Db System options not available in Browser User Interface
Some operations supported with ODACLI commands on dbsystems are not available in the Browser User Interface.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Use odacli commands for these operations on dbsystems.
This issue is tracked with Oracle bugs 32786024 and 32561609.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in adding JBOD
When you add a second JBOD to your Oracle Database Appliance deployment on which a DB system is running, an error is encountered.
ORA-15333: disk is not visible on client instance
Hardware Models
All Oracle Database Appliance hardware models (bare metal and DB systems)
Workaround
Shut down the DB system before adding the second JBOD. Then restart the DCS agent:
# systemctl restart initdcsagent
This issue is tracked with Oracle bug 32586762.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in provisioning appliance after running cleanup.pl
Errors are encountered in provisioning the appliance after running cleanup.pl.
After running cleanup.pl, provisioning the appliance fails because of a missing Oracle Grid Infrastructure image (IMGGI191100). The following error message is displayed:
DCS-10042:User oda-cliadmin cannot be authorized.
Hardware Models
All Oracle Database Appliance hardware models for bare metal deployments
Workaround
After running cleanup.pl, and before provisioning the appliance, update the repository as follows:
# odacli update-repository -f /**gi**
This issue is tracked with Oracle bug 32707387.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in registering a database
When registering a database on Oracle Database Appliance, an error is encountered.
This error occurs when the database initialization parameter use_large_pages=true is set to use HugePages for the SGA. The command odacli register-database fails with the following error:
DCS-10045:Validation error encountered: Available Memory is less than SGA Size { Available : size_in_MB and SGA Size : size_in_MB }.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Start the database manually, then disable the HugePages setting manually with the command SET use_large_pages=false, and then register the database using the odacli register-database command.
This issue is tracked with Oracle bug 32847601.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in updating a database
When updating a database on Oracle Database Appliance, an error is encountered.
When you run the command odacli update-dbhome, the following error message is displayed:
PRGO-1069 :Internal error [# rhpmovedb.pl-isPatchUpg-1 #]..
To confirm that the MMON process occupies the lock, connect to the target database which failed to patch, and run the command:
SELECT s.sid, p.spid, s.machine, s.program FROM v$session s, v$process p
WHERE s.paddr = p.addr and s.sid = (
SELECT sid from v$lock WHERE id1= (
SELECT lockid FROM dbms_lock_allocated WHERE name = 'ORA$QP_CONTROL_LOCK'
));
If s.program in the displayed result is similar to the format oracle_user@host_box_name (MMON), then the error is caused by the MMON process. Run the workaround to address this issue.
Hardware Models
All Oracle Database Appliance high-availability hardware models
Workaround
- Stop the MMON process:
# ps -ef | grep MMON
root 71220 70691 0 21:25 pts/0 00:00:00 grep --color=auto MMON
Locate the process ID from step (1) and stop it:
# kill -9 71220
- Manually run datapatch on the target database:
- Locate the database home where the target database is running:
odacli describe-database -in db_name
- Locate the database home location:
odacli describe-dbhome -i DbHomeID_found_in_step_a
- On the running node of the target database:
[root@node1 ~]# sudo su - oracle
Last login: Thu Jun 3 21:24:45 UTC 2021
[oracle@node1 ~]$ . oraenv
ORACLE_SID = [oracle] ? db_instance_name
ORACLE_HOME = [/home/oracle] ? dbHome_location
- If the target database is a non-CDB database, then run the following:
$ORACLE_HOME/OPatch/datapatch
- If the target database is a CDB database, then run the following to find the PDB list:
select name from v$containers where open_mode='READ WRITE';
- Exit SQL*Plus and run the following:
$ORACLE_HOME/OPatch/datapatch -pdbs pdb_names_gathered_by_the_SQL_statement_in_step_e_separated_by_comma
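Step 1 of the workaround greps for MMON and can match the grep command itself (as the sample output shows). A small sketch of a safer lookup, assuming the standard ora_mmon_<SID> background-process naming; the actual kill is left commented out so nothing is stopped by accident.

```shell
#!/bin/sh
# Sketch of steps 1-2 of the workaround above: locate the MMON background
# process (named ora_mmon_<SID>) and stop it. The [o] in the pattern keeps
# grep from matching its own command line in a `ps -ef | grep ...` pipeline.
find_mmon_pid() {
  # Reads `ps -ef` output on stdin and prints the PID column of MMON entries.
  grep '[o]ra_mmon' | awk '{print $2}'
}

pid=$(ps -ef | find_mmon_pid)
if [ -z "$pid" ]; then
  echo "no MMON process found"
else
  echo "stopping MMON pid $pid"
  # kill -9 "$pid"   # uncomment to actually stop the process
fi
```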
This issue is tracked with Oracle bug 32827353.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in running tfactl diagcollect command on remote node
When running the tfactl diagcollect command on Oracle Database Appliance, an error is encountered.
Hardware Models
All Oracle Database Appliance hardware models (KVM and bare metal systems)
Workaround
- Run the following command on each node so that Oracle Trace File Analyzer generates new certificates and distributes them to the other node:
tfactl syncnodes -remove -local
- Connect using SSH with root credentials on one node and run the following:
tfactl syncnodes
This issue is tracked with Oracle bug 32921859.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in running tfactl diagcollect command
When running the tfactl diagcollect command on Oracle Database Appliance, an error is encountered.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Run the tfactl diagcollect command with the -node local option:
tfactl diagcollect -node local
This issue is tracked with Oracle bug 32940358.
Parent topic: Known Issues When Deploying Oracle Database Appliance
TFA disabled after patching Oracle Database Appliance
After patching Oracle Database Appliance, TFA status shows as disabled.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Run the odacli update-dbhome command with the -sko option:
odacli update-dbhome -j -v 19.9.0.0.0 -i dbhome_id -sko
This issue is tracked with Oracle bug 32058933.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error when upgrading database from 11.2.0.4 to 12.1 or 12.2
When upgrading databases from 11.2.0.4 to 12.1 or 12.2, an error is encountered.
The following error message is displayed in the UpgradeResults.html file when upgrading a database from 11.2.0.4 to 12.1 or 12.2:
Database is using a newer time zone file version than the Oracle home
Hardware Models
All Oracle Database Appliance bare metal deployments
Workaround
- Refer to the Database Upgrade Guide for manual steps for fixing the time zone.
- After manually completing the database upgrade, run the following command to update DCS metadata:
/opt/oracle/dcs/bin/odacli update-registry -n db -f
This issue is tracked with Oracle bug 31125985.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error when upgrading 12.1 single-instance database
When upgrading 12.1 single-instance database, a job failure error is encountered.
Hardware Models
All Oracle Database Appliance hardware models (bare metal deployments)
Workaround
- Before upgrading the 12.1 single-instance database, run the following PL/SQL command to change the local_listener to an empty string:
ALTER SYSTEM SET LOCAL_LISTENER='';
- After upgrading the 12.1 single-instance database successfully, run the following PL/SQL command to change the local_listener to the desired value:
ALTER SYSTEM SET LOCAL_LISTENER='-oracle-none-';
This issue is tracked with Oracle bugs 31202775 and 31214657.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Failure in creating RECO disk group during provisioning
When provisioning Oracle Database Appliance X8-2-HA with High Performance configuration containing default storage and expansion shelf, creation of RECO disk group fails.
Hardware Models
All Oracle Database Appliance X8-2-HA with High Performance configuration
Workaround
- Power off storage expansion shelf.
- Reboot both nodes.
- Proceed with provisioning the default storage shelf (first JBOD).
- After the system is successfully provisioned with the default storage shelf (first JBOD), check that oakd is running on both nodes in foreground mode:
# ps -aef | grep oakd
- Check that all first JBOD disks have the status online, good in oakd, and CACHED in Oracle ASM.
- Power on the storage expansion shelf (second JBOD), and wait for a few minutes for the operating system and other subsystems to recognize it.
- Run the following command from the master node to add the storage expansion shelf disks (two JBOD setup) to oakd and Oracle ASM:
# odaadmcli show ismaster
OAKD is in Master Mode
# odaadmcli expand storage -ndisk 24 -enclosure 1
Skipping precheck for enclosure '1'...
Check the progress of expansion of storage by executing 'odaadmcli show disk'
Waiting for expansion to finish ...
#
- Check that the storage expansion shelf disks (two JBOD setup) are added to oakd and Oracle ASM.
Replace odaadmcli with oakcli commands in this procedure on Oracle Database Appliance Virtualized Platform.
For more information, see the chapter Managing Storage in the Oracle Database Appliance X8-2 Deployment Guide.
This issue is tracked with Oracle bug 30839054.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Simultaneous creation of two Oracle ACFS Databases fails
If you try to create two Oracle ACFS databases on a system where there is no database or database storage already created, then database creation fails for one of the databases with an error.
DCS-10001:Internal error encountered: Fail to run command Failed to create
volume.
Hardware Models
All Oracle Database Appliance bare metal deployments
Workaround
Manually delete the DATA volume (and REDO volume, in case of Oracle Database Appliance X8-2) from the system.
su - GRID_USER
export ORACLE_SID=+ASM1 (on the first node) or +ASM2 (on the second node)
export ORACLE_HOME=GRID_HOME
GRID_HOME/bin/asmcmd --nocp voldelete -G Data datdbname

su - GRID_USER
export ORACLE_SID=+ASM1 (on the first node) or +ASM2 (on the second node)
export ORACLE_HOME=GRID_HOME
GRID_HOME/bin/asmcmd --nocp voldelete -G Reco rdodbname

su - GRID_USER
export ORACLE_SID=+ASM1 (on the first node) or +ASM2 (on the second node)
export ORACLE_HOME=GRID_HOME
GRID_HOME/bin/asmcmd --nocp voldelete -G Flash datdbname (if the volume exists in the FLASH disk group)
GRID_HOME/bin/asmcmd --nocp voldelete -G data datdbname (if the volume exists in the DATA disk group)

su - GRID_USER
export ORACLE_SID=+ASM1 (on the first node) or +ASM2 (on the second node)
export ORACLE_HOME=GRID_HOME
GRID_HOME/bin/asmcmd --nocp voldelete -G Flash rdodbname
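Since the repeated blocks above differ only in the disk group and volume name, they can be sketched as one helper. GRID_HOME, the volume names, and the DRY_RUN guard are placeholders and additions for illustration, not part of the documented workaround.

```shell
#!/bin/sh
# Sketch of the volume cleanup above as a single loop. DRY_RUN=1 (the
# default) only prints each asmcmd command instead of running it.
DRY_RUN=${DRY_RUN:-1}
GRID_HOME=${GRID_HOME:-/u01/app/19.0.0.0/grid}   # assumed grid home path
export ORACLE_HOME="$GRID_HOME"
export ORACLE_SID=${ORACLE_SID:-+ASM1}           # +ASM1 on node 1, +ASM2 on node 2

voldelete() {
  # $1 = disk group, $2 = volume name
  if [ "$DRY_RUN" = "1" ]; then
    echo "+ $GRID_HOME/bin/asmcmd --nocp voldelete -G $1 $2"
  else
    "$GRID_HOME/bin/asmcmd" --nocp voldelete -G "$1" "$2"
  fi
}

# DATA and REDO volumes; on systems with a FLASH disk group, the volumes
# may live there instead (see the conditional commands above).
voldelete Data  datdbname
voldelete Reco  rdodbname
voldelete Flash datdbname
voldelete Flash rdodbname
```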
This issue is tracked with Oracle bug 30750497.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error encountered after running cleanup.pl
Errors are encountered in running odacli commands after running cleanup.pl.
After running cleanup.pl, when you try to use odacli commands, the following error is encountered:
DCS-10042:User oda-cliadmin cannot be authorized.
Hardware Models
All Oracle Database Appliance hardware models for bare metal deployments
Workaround
Run the following commands to set up the credentials for the user oda-cliadmin
on the agent wallet:
# rm -rf /opt/oracle/dcs/conf/.authconfig
# /opt/oracle/dcs/bin/setupAgentAuth.sh
This issue is tracked with Oracle bug 29038717.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Accelerator volume for data is not created on flash storage
The accelerator volume for data is not created on flash storage, for databases created during provisioning of appliance.
Hardware Models
Oracle Database Appliance high capacity environments with HDD disks
Workaround
Do not create the database when provisioning the appliance. Provisioning still creates all required disk groups, including flash. After provisioning the appliance, create the database; the accelerator volume is then created.
This issue is tracked with Oracle bug 28836461.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Errors in clone database operation
Clone database operation fails due to errors.
If the source database is single-instance or Oracle RAC One Node, or running on the remote node, the clone database operation fails, because the paths are not created correctly in the control file.
Clone database operation may also fail with errors if the source database creation time stamp is too close to the clone operation (within 60 minutes).
Hardware Models
All Oracle Database Appliance high-availability hardware models for bare metal deployments
Workaround
Create the clone database from the source database instance that is running on the same node from which the clone database creation is triggered.
If the source database was created shortly before the clone operation, then run the following statement on the source database before cloning:
SQL> alter system checkpoint;
This issue is tracked with Oracle bugs 29002563, 29002004, 29001906, 29001855, 29001631, 28995153, 28986643, 30309971, and 30228362.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Clone database operation fails
For Oracle Database release 12.1 databases, the database clone creation may fail because the default compatible version from Oracle binaries was set to 12.0.0.0.0.
Hardware Models
All Oracle Database Appliance high-availability hardware models for bare metal deployments
Workaround
- Change the parameter value:
SQL> ALTER SYSTEM SET COMPATIBLE = '12.1.0.2.0' SCOPE=SPFILE;
- Shut down the database:
SQL> SHUTDOWN IMMEDIATE
- Start the database:
SQL> STARTUP
- Verify the parameter for the new value:
SQL> SELECT name, value, description FROM v$parameter WHERE name ='compatible';
This issue is tracked with Oracle bug 30309914.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Known Issues When Managing Oracle Database Appliance
Understand the known issues when managing or administering Oracle Database Appliance.
- Error in reinstate operation on Oracle Data Guard
When running the command odacli reinstate-dataguard
on Oracle Data Guard, an error is encountered. - Error in starting a database from a bare metal CPU pool
When starting a database after patching to Oracle Database Appliance release 19.10, an error is encountered. - Error in restoring a database
When restoring a database on Oracle Database Appliance, an error is encountered. - Error in running concurrent database or database home creation jobs
When running concurrent database or database home creation jobs, an error is encountered. - Error in restoring a database in dbsystem
When restoring a database in dbsystem on Oracle Database Appliance, an error is encountered. - Errors due to lack of space
When running commands to patch or update database homes, an error is encountered. - Directories not deleted on dbsystem
After running the command odacli delete-dbsystem --force -n, certain empty non-Oracle Managed Files (OMF) directories under +diskgroup/dbuniquename are not deleted. - Error in iRestore operation
When restoring a database from NFS backup location on Oracle Database Appliance, an error is encountered. - Error in iRestore operation on Standard Edition Database
When restoring a Standard Edition Database on Oracle Database Appliance, an error is encountered. - Error in restoring a standby database for 11.2.0.4 database
When performing an iRestore operation on a standby database of version 11.2.0.4, an error is encountered. - Error in deleting a standby database
When deleting a standby database, an error is encountered. - Error in configuring Oracle Active Data Guard
When configuring Oracle Active Data Guard on Oracle Database Appliance, an error is encountered. - Error in Oracle Data Guard failover operation for 18.14 database
When running the odacli failover-dataguard command on a database of version 18.14, an error is encountered. - Error in Oracle Active Data Guard operations
When performing switchover, failover, and reinstate operations on Oracle Active Data Guard on Oracle Database Appliance, an error is encountered. - Error in configuring Oracle Data Guard
When configuring Oracle Data Guard on Oracle Database Appliance, an error is encountered. - Error in the enable apply process after upgrading databases
When running the enable apply process after upgrading databases in an Oracle Data Guard deployment, an error is encountered. - Error in configuring Oracle Data Guard with cloned primary database
When configuring Oracle Data Guard on Oracle Database Appliance, an error is encountered. - Error in configuring Oracle Data Guard on db system
When configuring Oracle Data Guard on db system, an error is encountered. - Error in creating Oracle Data Guard status
When configuring Oracle Active Data Guard on Oracle Database Appliance, an error is encountered. - Error in registering a database
When registering a single-instance database on Oracle Database Appliance, if the RAC option is specified in the odacli register-database command, an error is encountered. - Nessus scan does not recognize the January 2021 CPU patch
The Nessus scan report on Oracle Database Appliance does not recognize the January 2021 CPU patch. - Error in Reinstating Oracle Data Guard
When reinstating Oracle Data Guard on Oracle Database Appliance, an error is encountered. - Error in Configuring Oracle Data Guard
When configuring Oracle Data Guard on Oracle Database Appliance, an error is encountered. - Failure in Reinstating Oracle Data Guard
When reinstating Oracle Data Guard on Oracle Database Appliance, an error is encountered. - Failure in Reinstating Oracle Data Guard
When reinstating Oracle Data Guard on Oracle Database Appliance, an error is encountered. - Error in updating Role after Oracle Data Guard operations
When performing operations with Oracle Data Guard on Oracle Database Appliance, an error is encountered in updating the Role. - Error in running other operations when modifying database with CPU pool
When modifying a database with CPU pool, an error is encountered with other operations. - Error in restoring a TDE-enabled database
When restoring a TDE-enabled database on Oracle Database Appliance, an error is encountered. - Error when recovering a single-instance database
When recovering a single-instance database, an error is encountered. - Job history not erased after running cleanup.pl
After running cleanup.pl, job history is not erased. - Inconsistency in ORAchk summary and details report page
ORAChk report summary on the Browser User Interface may show different counts of Critical, Failed, and Warning issues than the report detail page. - Missing DATA, RECO, and REDO entries when dbstorage is rediscovered
Running the odacli update-registry command with the -n all --force or -n dbstorage --force option can result in metadata corruption. - The odaeraser tool does not work if oakd is running in non-cluster mode
After cleaning up the deployment, the Secure Eraser tool does not work if oakd is running in non-cluster mode. - Issues with the Web Console on Microsoft web browsers
Oracle Database Appliance Web Console has issues on Microsoft Edge and Microsoft Internet Explorer web browsers. - Unrecognized Token Messages Appear in /var/log/messages
After updating Oracle Database Appliance, unrecognized token messages appear in /var/log/messages.
Error in reinstate operation on Oracle Data Guard
When running the command odacli reinstate-dataguard on Oracle Data Guard, an error is encountered.
The following error is reported in dcs-agent.log:
DCS-10001:Internal error encountered: Unable to reinstate Dg.
The log may also contain the following error:
ORA-12514: TNS:listener does not currently know of service requested
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Ensure that the database you are reinstating is started in MOUNT mode. To start the database in MOUNT mode, run this command:
srvctl start database -d db-unique-name -o mount
After the command completes successfully, run the odacli reinstate-dataguard job again. If the database is already in MOUNT mode, this can be a temporary error. Check the Data Guard status again a few minutes later with odacli describe-dataguardstatus or odacli list-dataguardstatus, or check with DGMGRL> SHOW CONFIGURATION; to see whether the reinstatement is successful.
This issue is tracked with Oracle bug 32367676.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in starting a database from a bare metal CPU pool
When starting a database after patching to Oracle Database Appliance release 19.10, an error is encountered.
This error occurs because the service cgconfig.service is down:
# systemctl status cgconfig.service
cgconfig.service - Control Group configuration service
Loaded: loaded (/usr/lib/systemd/system/cgconfig.service; disabled; vendor preset: disabled)
Active: inactive (dead)
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Check the cgconfig.service status. If the status is disabled or inactive, then continue.
# systemctl status cgconfig.service
cgconfig.service - Control Group configuration service
Loaded: loaded (/usr/lib/systemd/system/cgconfig.service; disabled; vendor preset: disabled)
Active: inactive (dead)
- Start cgconfig.service:
# systemctl start cgconfig.service
- Enable cgconfig.service:
# systemctl enable cgconfig.service
Created symlink from /etc/systemd/system/sysinit.target.wants/cgconfig.service to /usr/lib/systemd/system/cgconfig.service.
- Check the cgconfig.service status again:
# systemctl status cgconfig.service
cgconfig.service - Control Group configuration service
Loaded: loaded (/usr/lib/systemd/system/cgconfig.service; enabled; vendor preset: disabled)
Active: active (exited) since Mon 2021-02-22 23:03:34 CST; 3min 40s ago
Main PID: 16594 (code=exited, status=0/SUCCESS)
- Restart the failed database.
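The checks above can be sketched as one conditional script. The systemctl is-active probe, the DRY_RUN guard, and db_name are additions of this sketch for illustration, not part of the documented workaround.

```shell
#!/bin/sh
# Sketch: if cgconfig.service is not active, start and enable it, then
# restart the failed database. DRY_RUN=1 (the default) prints the commands
# instead of running them; db_name is a placeholder.
DRY_RUN=${DRY_RUN:-1}
run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "+ $*"
  else
    "$@"
  fi
}

# Treat any probe failure (service inactive, or no systemctl) as "inactive".
state=$(systemctl is-active cgconfig.service 2>/dev/null) || state=inactive
if [ "$state" != "active" ]; then
  run systemctl start cgconfig.service
  run systemctl enable cgconfig.service
fi
run srvctl start database -db db_name
```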
This issue is tracked with Oracle bug 31907677.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in restoring a database
When restoring a database on Oracle Database Appliance, an error is encountered.
This is because there are multiple database IDs in the wrong location, leading to failure in RMAN.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Do not specify the backup location, or provide the correct backup location pointing to the parent directory of the source database backup.
This issue is tracked with Oracle bug 31907677.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in running concurrent database or database home creation jobs
When running concurrent database or database home creation jobs, an error is encountered.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Do not run concurrent database or database home creation jobs.
This issue is tracked with Oracle bug 32376885.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in restoring a database in dbsystem
When restoring a database in dbsystem on Oracle Database Appliance, an error is encountered.
/u01/app/oracle/product/19.0.0.0/dbhome_1/bin/orapwd
file='+DATA/brtest/orapwdbrtest' password=xxxxxx entries=5
dbuniquename="BRTEST" force=y
OPW-00014: Could not delete password file +DATA/brtest/orapwdbrtest.
ORA-15056: additional error message
ORA-06512: at line 4
ORA-15260: permission denied on ASM disk group
ORA-06512: at “SYS.X$DBMS_DISKGROUP”, line 533
ORA-06512: at line 2
The odacli delete-dbsystem command did not completely delete some of the Oracle ASM files that belonged to the deleted database (the password file, in the example above). This can cause an error when trying to restore a database using the same name.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Run the asmcmd command from the Oracle Database Appliance host to manually delete the files that belong to the deleted database. See the Oracle Automatic Storage Management Administrator's Guide for the asmcmd commands. Make sure you verify the database name before deleting the files.
This issue is tracked with Oracle bug 32931078.
Parent topic: Known Issues When Managing Oracle Database Appliance
Errors due to lack of space
When running commands to patch or update database homes, an error is encountered.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- If you tried to delete the database homes immediately after the command odacli update-dbhome failed, then run the command odacli list-dbhomes. The database homes that were provisioned but not used are displayed as the last entries in the list, because the output of odacli list-dbhomes is ordered by creation time.
- Compare the results of the odacli list-databases and odacli list-dbhomes commands. The database homes whose IDs are not displayed in the output of odacli list-databases do not contain databases, and can be deleted to free up space.
This issue is tracked with Oracle bug 32915967.
Parent topic: Known Issues When Managing Oracle Database Appliance
Directories not deleted on dbsystem
After running the command odacli delete-dbsystem --force -n, certain empty non-Oracle Managed Files (OMF) directories under +diskgroup/dbuniquename are not deleted.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Run the command asmcmd rm on the directory to manually delete it.
This issue is tracked with Oracle bug 32806915.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in iRestore operation
When restoring a database from NFS backup location on Oracle Database Appliance, an error is encountered.
DCS-10001:Internal error encountered: Failed to run Rman Script :
/tmp/dcsfiles/duplicateRman2021-05-25_06-03-50.0840547.script. Please refer
log at location :
/u01/app/oracle/diag/rdbms/mydb/mydb/scaoda8s002/rman/bkup/rman_duplicate/2021
-05-25/rman_duplicate_2021-05-25_06-03-50.0864.log.Duplicate command
execution failed.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
This issue occurs if NFS is configured so that the user ID of the user oracle and the group ID of the group asmadmin do not match between the primary and backup systems (mac1 and mac2, respectively, in this example). However, if an iRestore from the NFS backup must be performed even with the mismatch, then make sure that the user or group of the oracle binary on mac2 can at least read the backup files in the NFS backup location NFS_backup_location/orabackups/cluster_name/database/DBID/DbUniqueName/db of mac1.
You can find the user and group of the oracle binary by running the ls -ltr command on the oracle binary present at <DBHOME>/bin. In the following example, the user and group of the oracle binary are oracle and asmadmin, respectively.
[root@****** bin]# ls -ltr /u01/app/oracle/product/19.0.0.0/dbhome_3/bin/oracle
-rwsr-s--x 1 oracle asmadmin 448749536 *** 25 06:03 /u01/app/oracle/product/19.0.0.0/dbhome_3/bin/oracle
If this user or group cannot read the backup files with the oracle binary on mac2, then at least 'read' permission must be provided for 'others' (that is, **4) on mac1 for all the NFS backup files. For example:
[root@mac1 bin]# ls -ltr /scratch2/orabackups/scaoda8s002-c/database/2987837625/mydb/db
-rwxr--r-- 1 oracle asmadmin 1097728 Jun 3 10:55 auto_cf_DBSE3_2116871228_0100fgpa_1_1_1_20210603_1074250538
-rwxr--r-- 1 oracle asmadmin 1097728 Jun 3 10:55 c-2116871228-20210603-00
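To spot backup files that would fail the read-for-others requirement described above, a quick check like the following can help. The helper name and the backup path are illustrative placeholders, not part of the documented workaround.

```shell
#!/bin/sh
# Lists files under a backup directory that lack the o+r (others-readable)
# permission bit, which is what the restore host needs when the oracle
# user/group IDs do not match across systems.
list_unreadable_by_others() {
  # $1 = directory to check; -perm -004 matches files with the o+r bit set
  find "$1" -type f ! -perm -004
}

# Placeholder path taken from the example above; adjust to your backup location.
BACKUP_DIR=${BACKUP_DIR:-/scratch2/orabackups/scaoda8s002-c/database/2987837625/mydb/db}
if [ -d "$BACKUP_DIR" ]; then
  list_unreadable_by_others "$BACKUP_DIR"
fi
```

Any file the command prints must be given read permission for others (for example, with chmod o+r) before attempting the iRestore.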
This issue is tracked with Oracle bug 32422681.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in iRestore operation on Standard Edition Database
When restoring a Standard Edition Database on Oracle Database Appliance, an error is encountered.
DCS-10001:Internal error encountered: Failed to run sql in method : runRmanDuplicateDbFromDiskBackup.Unable to startup instance in nomount mode as output contains ora-
The log file /opt/oracle/dcs/log/dcs-agent.log contains the following entries:
ORACLE instance shut down.
ORA-00371: not enough shared pool memory, should be at least 1141769669 bytes
This issue occurs only if more than 8 CPUs are online on the appliance.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Apply the BLR corresponding to bug 32961939 and retry the operation.
This issue is tracked with Oracle bug 32957033.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in restoring a standby database for 11.2.0.4 database
When performing an iRestore operation on a standby database of version 11.2.0.4, an error is encountered.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- After taking the backup and before performing the iRestore operation, delete the control file autobackups in the directory shown as the attribute backupLocation in the backup report:
c-3737675288-20210211-04
c-3737675288-20210211-05
c-3737675288-20210211-06
c-3737675288-20210212-00
c-3737675288-20210212-01
- Perform the database iRestore operation.
- After successfully performing the iRestore operation, create a backup of the source database.
This issue is tracked with Oracle bug 32473071.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in deleting a standby database
When deleting a standby database, an error is encountered.
DCS-10001:Internal error encountered: Failed to run the asm command:
[/u01/app/19.0.0.0/grid/bin/asmcmd, --nocp, rm, -rf, RECO/ABCDEU]
Error:ORA-29261: bad argument
ORA-06512: at line 4
ORA-15178: directory 'ABCDEU' is not empty; cannot drop this directory
ORA-15260: permission denied on ASM disk group
ORA-06512: at "SYS.X$DBMS_DISKGROUP", line 666
ORA-06512: at line 2 (DBD ERROR: OCIStmtExecute).
Verify the status of the job with the odacli list-jobs command.
Hardware Models
All Oracle Database Appliance hardware models with Oracle Active Data Guard configuration
Workaround
- After deleting the standby database and before recreating the same standby database, perform the following steps:
- Log in as the oracle user:
su - oracle
- Set the environment:
. oraenv
ORACLE_SID = null
ORACLE_HOME = dbhome_path (such as /u01/app/oracle/product/19.0.0.0/dbhome_1)
- Change to the bin directory of the database home:
cd dbhome_path/bin
- Remove the leftover directories of the deleted standby database:
asmcmd --privilege sysdba rm -rf +RECO/DBUNIQUENAME/
asmcmd --privilege sysdba rm -rf +DATA/DBUNIQUENAME/arc10/
asmcmd --privilege sysdba rm -rf +DATA/DBUNIQUENAME/PASSWORD/
- Recreate the standby database with a different database unique name.
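The asmcmd cleanup above can be sketched as one script with a dry-run guard; DBUNIQUENAME and the DRY_RUN mechanism are placeholders and additions for illustration, not part of the documented workaround.

```shell
#!/bin/sh
# Sketch of the leftover-directory cleanup above. DRY_RUN=1 (the default)
# only prints each asmcmd command; DBUNIQUENAME is a placeholder.
DRY_RUN=${DRY_RUN:-1}
DBUNIQUENAME=${DBUNIQUENAME:-DBUNIQUENAME}

asmrm() {
  # $1 = ASM path to remove recursively
  if [ "$DRY_RUN" = "1" ]; then
    echo "+ asmcmd --privilege sysdba rm -rf $1"
  else
    asmcmd --privilege sysdba rm -rf "$1"
  fi
}

# Leftover directories that block recreating a standby with the same name.
asmrm "+RECO/$DBUNIQUENAME/"
asmrm "+DATA/$DBUNIQUENAME/arc10/"
asmrm "+DATA/$DBUNIQUENAME/PASSWORD/"
```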
This issue is tracked with Oracle bug 32871772.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in configuring Oracle Active Data Guard
When configuring Oracle Active Data Guard on Oracle Database Appliance, an error is encountered.
The odacli configure-dataguard command fails at step EnableActivedg with the following error:
DCS-10001:Internal error encountered: Unable to restart standby db
Verify the status of the job with the odacli list-jobs command.
Hardware Models
All Oracle Database Appliance hardware models with Oracle Active Data Guard configuration
Workaround
Follow these steps:
- Run odacli configure-dataguard with MAX_PERFORMANCE protection mode, SYNC transport type, and the enable active dataguard setting.
- Run the following DGMGRL command after successfully configuring Oracle Data Guard:
DGMGRL> edit configuration set protection mode as maxprotection;
This issue is tracked with Oracle bug 32852846.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in Oracle Data Guard failover operation for 18.14 database
When running the odacli failover-dataguard command on a database of version 18.14, an error is encountered.
DCS-10001:Internal error encountered: Unable to precheckFailoverDg11g Dg.
select DATABASE_ROLE, FORCE_LOGGING, FLASHBACK_ON from v$database
ERROR at line 1:
ORA-00600: internal error code, arguments: [kcbgtcr_17], [], [], [], [], [],
[], [], [], [], [], []
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Run the following DGMGRL statements on the system with the database to fail over to:
DGMGRL> SHOW CONFIGURATION;
DGMGRL> VALIDATE DATABASE 'DB_UNIQUE_NAME_to_failover_to';
DGMGRL> FAILOVER TO 'DB_UNIQUE_NAME_to_failover_to';
DGMGRL> SHOW CONFIGURATION;
- After failover is successful, run the odacli describe-dataguardstatus -i id command several times to update the DCS metadata.
This issue is tracked with Oracle bug 32727379.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in Oracle Active Data Guard operations
When performing switchover, failover, and reinstate operations on Oracle Active Data Guard on Oracle Database Appliance, an error is encountered.
PRCZ-2103 : Failed to execute command
"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/bin/dbua" on node
"node1" as user "oracle". Detailed error:
Logs directory:
/u01/app/odaorabase/oracle/cfgtoollogs/dbua/upgrade2021-05-06_01-31-16PM
SEVERE: May 08, 2021 6:50:24 PM oracle.assistants.dbua.prereq.PrereqChecker
logPrereqResults
SEVERE: Starting with Oracle Database 11.2, setting JOB_QUEUE_PROCESSES=0
will disable job execution via DBMS_JOBS and DBMS_SCHEDULER. FIXABLE: MANUAL
Database: ptdkjqt
Cause: The database has JOB_QUEUE_PROCESSES=0.
Action: Set the value of JOB_QUEUE_PROCESSES to a non-zero value, or remove
the setting entirely and accept the Oracle default.
Hardware Models
All Oracle Database Appliance hardware models with Oracle Active Data Guard configuration
Workaround
Follow these steps:
- Use SQL*Plus to access the database and run the following
command:
alter system set JOB_QUEUE_PROCESSES=1000;
- Retry the upgrade command.
This issue is tracked with Oracle bug 32856214.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in configuring Oracle Data Guard
When configuring Oracle Data Guard on Oracle Database Appliance, an error is encountered.
odacli configure-dataguard command fails at step create-dataguardstatus with the following error:
Failed to persist newly created dataguard configuration -- null
DCS-10001:Internal error encountered: Unable to add new dg config.
Verify the status of the job with the odacli list-jobs command.
Hardware Models
All Oracle Database Appliance hardware models with Oracle Active Data Guard configuration
Workaround
Check the dbUniqueName letter case for consistency in the following:
- dbUniqueName in the odacli describe-database command output.
- dbUniqueName in the dataguard.json file used in the odacli configure-dataguard -r command.
- In the command output of show parameter db_unique_name.
If the dbUniqueName letter case is not consistent, then update them for consistency, and then use the devmode command to create dataguardstatus on both primary and standby systems:
DEVMODE=true odacli create-dataguardstatus -i dbid -r config_dg.json
DEVMODE=true odacli create-dataguardstatus -i dbid -r config_dg.json -n dataguardstatus_id_of_primary
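The case check above can be sketched as a small shell helper; the function name and the sample values are illustrative and are not part of odacli:

```shell
# Hypothetical helper (not an odacli feature): reports whether three
# dbUniqueName values -- from describe-database, dataguard.json, and
# "show parameter db_unique_name" -- agree exactly, including letter case.
check_dbunique_case() {
  if [ "$1" = "$2" ] && [ "$2" = "$3" ]; then
    echo "consistent"
  else
    echo "mismatch: $1 / $2 / $3"
  fi
}

# Example with placeholder values: the metadata uses mixed case while the
# database parameter is lowercase, so the check reports a mismatch.
check_dbunique_case "ProdDB" "proddb" "proddb"
```

Substitute the actual values collected from the three sources before deciding whether the devmode step is needed.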
This issue is tracked with Oracle bug 32861273.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in the enable apply process after upgrading databases
When running the enable apply process after upgrading databases in an Oracle Data Guard deployment, an error is encountered.
Error: ORA-16664: unable to receive the result from a member
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Restart the standby database in upgrade mode:
srvctl stop database -d <db_unique_name>
Run the SQL*Plus command:
STARTUP UPGRADE;
- Continue the enable apply process and wait for log apply process to refresh.
- After some time, check the Data Guard status with the DGMGRL
command:
SHOW CONFIGURATION;
This issue is tracked with Oracle bug 32864100.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in configuring Oracle Data Guard with cloned primary database
When configuring Oracle Data Guard on Oracle Database Appliance, an error is encountered.
odacli configure-dataguard command fails at step Configure Primary database (Primary site) with the following error:
DCS-10001: FAILED TO CREATE BROKER CONFIG FILE DIRECTORY
Verify the status of the job with the odacli list-jobs command.
Hardware Models
All Oracle Database Appliance hardware models with Oracle Active Data Guard configuration
Workaround
Follow these steps:
- On the system with the cloned primary database, run the
following commands:
mkdir /u02/app/oracle/oradata/dbUniqueName
chown oracle:oinstall /u02/app/oracle/oradata/dbUniqueName
- Run the
odacli configure-dataguard
command.
This issue is tracked with Oracle bug 32906493.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in configuring Oracle Data Guard on db system
When configuring Oracle Data Guard on db system, an error is encountered.
odacli configure-dataguard command fails at step Configure and enable Data Guard (Primary site) with the following error:
DGMGRL> Error: ORA-16627: operation disallowed since no member would remain to support protection mode
Verify the status of the job with the odacli list-jobs command.
Hardware Models
All Oracle Database Appliance hardware models with Oracle Active Data Guard configuration
Workaround
Follow these steps:
- Run odacli configure-dataguard with MAX_PERFORMANCE protection mode and ASYNC transport type.
- Manually change the protection mode and transport type after successfully configuring Oracle Data Guard:
su - oracle
DGMGRL> edit database primary_db_unique_name set property 'LogXptMode'='SYNC';
Property "LogXptMode" updated
DGMGRL> edit database standby_db_unique_name set property 'LogXptMode'='SYNC';
Property "LogXptMode" updated
DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MAXAVAILABILITY;
DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MAXPROTECTION;
This issue is tracked with Oracle bug 32891817.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in creating Oracle Data Guard status
When configuring Oracle Active Data Guard on Oracle Database Appliance, an error is encountered.
odacli configure-dataguard command fails at step NewDgconfig with the following error on the standby system:
ORA-16665: TIME OUT WAITING FOR THE RESULT FROM A MEMBER
Verify the status of the job with the odacli list-jobs command.
Hardware Models
All Oracle Database Appliance hardware models with Oracle Data Guard configuration
Workaround
Follow these steps:
- On the standby system, run the following:
export DEMODE=true; odacli create-dataguardstatus -i dbid -n dataguardstatus_id_on_primary -r configdg.json
export DEMODE=false;
Example configdg.json file for a single-node system:
{
"name": "test1_test7",
"protectionMode": "MAX_PERFORMANCE",
"replicationGroups": [
{
"sourceEndPoints": [
{
"endpointType": "PRIMARY",
"hostName": "test_domain1",
"listenerPort": 1521,
"databaseUniqueName": "test1",
"serviceName": "test",
"sysPassword": "***",
"ipAddress": "test_IPaddress"
}
],
"targetEndPoints": [
{
"endpointType": "STANDBY",
"hostName": "test_domain2",
"listenerPort": 1521,
"databaseUniqueName": "test7",
"serviceName": "test",
"sysPassword": "***",
"ipAddress": "test_IPaddress3"
}
],
"transportType": "ASYNC"
}
]
}
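Before running create-dataguardstatus, a quick sanity check on the configdg.json file can catch a missing endpoint stanza; this grep-based sketch assumes the key layout shown in the example above:

```shell
# Sketch: verify that a configdg.json file declares both a PRIMARY and a
# STANDBY endpoint before passing it to odacli create-dataguardstatus.
# The key names match the example above; adjust them if your file differs.
check_configdg() {
  if grep -q '"endpointType": *"PRIMARY"' "$1" &&
     grep -q '"endpointType": *"STANDBY"' "$1"; then
    echo "configdg: endpoints present"
  else
    echo "configdg: missing PRIMARY or STANDBY endpoint"
  fi
}
```

For example, check_configdg configdg.json reports a warning if either endpoint stanza was accidentally dropped while editing the file.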
This issue is tracked with Oracle bug 32719173.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in registering a database
When registering a single instance database on Oracle Database Appliance, if
the RAC option is specified in the odacli register-database
command, an
error is encountered.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Create a single-instance database using Oracle Database
Configuration Assistant (DBCA) and then register the database using the
odacli register-database
command with the RAC
option.
This issue is tracked with Oracle bug 32853078.
Parent topic: Known Issues When Managing Oracle Database Appliance
Nessus scan does not recognize the January 2021 CPU patch
The Nessus scan report on Oracle Database Appliance does not recognize the January 2021 CPU patch.
Severity : HIGH
CVSS 2.0 Score : 7.5
Plugin : 145266
Issue Description : Oracle Database Server Multiple Vulnerabilities (Jan 2021 CPU)
Hardware Models
All Oracle Database Appliance hardware models
Workaround
The Tenable Support team has analyzed this issue and determined it to be a false positive; the fix is provided in the 'Plugin Set' feed version 202105051730. Perform the Nessus scan with Nessus Linux version 8.14.0 and 'Plugin Set' feed version 202105051730 or later.
This issue is tracked with Oracle bug 32844858.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in Reinstating Oracle Data Guard
When reinstating Oracle Data Guard on Oracle Database Appliance, an error is encountered.
odacli reinstate-dataguard command fails with the following error:
Unable to reinstate Dg. Reinstate job was executed within 24hrs after failover job.
The dcs-agent.log file has the following error entry:
DGMGRL> Reinstating database "xxxx",
please wait...
Oracle Clusterware is restarting database "xxxx" ...
Connected to "xxxx"
Continuing to reinstate database "xxxx" ...
Error: ORA-16653: failed to reinstate database
Hardware Models
All Oracle Database Appliance hardware models with Oracle Data Guard configuration
Workaround
Follow these steps:
- On the primary machine, get the standby_became_primary_scn:
SQL> select standby_became_primary_scn from v$database;
STANDBY_BECAME_PRIMARY_SCN
--------------------------
3522449
- On the old primary database, flashback to this SCN with
RMAN with the backup encryption
password:
RMAN> set decryption identified by 'rman_backup_password' ;
executing command: SET decryption
RMAN> FLASHBACK DATABASE TO SCN 3522449 ;
...
Finished flashback at 24-SEP-20
RMAN> exit
- On the new primary machine, run the
odacli reinstate-dataguard
command.
This issue is tracked with Oracle bug 31884506.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in Configuring Oracle Data Guard
When configuring Oracle Data Guard on Oracle Database Appliance, an error is encountered.
odacli configure-dataguard command fails with the following error:
DCS-10001:Internal error encountered: Unable to pass postcheckDgStatus. Primary database has taken a non-Archivelog type backup between irestore standby database and configure-dataguard.
Verify the status of the job with the odacli list-jobs command.
Hardware Models
All Oracle Database Appliance hardware models with Oracle Data Guard configuration
Workaround
Follow these steps:
- On the primary machine, remove the Oracle Data Guard
configuration:
DGMGRL> remove configuration;
- On the standby machine, delete the standby database.
- On the primary machine, disable the database backup
schedule:
odacli update-schedule -i ID -d
- Start the Oracle Data Guard configuration steps.
- Enable primary database backup schedule after Oracle Data Guard configuration is successful.
This issue is tracked with Oracle bug 31880191.
Parent topic: Known Issues When Managing Oracle Database Appliance
Failure in Reinstating Oracle Data Guard
When reinstating Oracle Data Guard on Oracle Database Appliance, an error is encountered.
odacli reinstate-dataguard command fails with the following error:
DCS-10001:Internal error encountered:
Unable enqueue Id and update DgConfig.
Running the DGMGRL show database command for the standby database displays this error:
DGMGRL> show database xxxx
Database - xxxx
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: (unknown)
Apply Lag: 4 days 22 hours 1 minute 23 seconds (computed 1 second ago)
Average Apply Rate: 0 Byte/s
Real Time Query: OFF
Instance(s):
xxxx1 (apply instance)
xxxx2
Database Warning(s):
ORA-16853: apply lag has exceeded specified threshold
ORA-16856: transport lag could not be determined
Database Status:
WARNING
The dcs-agent.log file has the following error entry:
DGMGRL> Reinstating database "xxxx",
please wait...
Oracle Clusterware is restarting database "xxxx" ...
Connected to "xxxx"
Continuing to reinstate database "xxxx" ...
Error: ORA-16653: failed to reinstate database
Hardware Models
All Oracle Database Appliance hardware models with Oracle Data Guard configuration
Workaround
Follow these steps:
- On the new primary machine, get the standby_became_primary_scn:
SQL> select standby_became_primary_scn from v$database;
STANDBY_BECAME_PRIMARY_SCN
--------------------------
4370820
- On the new primary database, check for missing sequences after standby_became_primary_scn:
SQL> select name, sequence#, first_change#, next_change# from v$archived_log where first_change#>4370820 and name is NULL;
...
NAME
-------------------------------------------------------------------------------
SEQUENCE# FIRST_CHANGE# NEXT_CHANGE#
---------- ------------- ------------
53 4601014 4601154
- On the new primary machine, restore the missing sequences with RMAN:
$ rman target /
RMAN> restore archivelog from logseq=1 until logseq=53;
- On the new standby machine, check that the current_scn is increasing, and run the following DGMGRL command to see if the apply lag is being resolved:
DGMGRL> SHOW CONFIGURATION;
This issue is tracked with Oracle bug 32041012.
Parent topic: Known Issues When Managing Oracle Database Appliance
Failure in Reinstating Oracle Data Guard
When reinstating Oracle Data Guard on Oracle Database Appliance, an error is encountered.
odacli reinstate-dataguard command fails with the following error:
Message:
DCS-10001:Internal error encountered: Unable to reinstate Dg.
The dcs-agent.log file has the following error entry:
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
Hardware Models
All Oracle Database Appliance hardware models with Oracle Data Guard configuration
Workaround
Follow these steps:
- Make sure the database you are reinstating is started in
MOUNT mode. To start the database in MOUNT mode, run this
command:
srvctl start database -d db-unique-name -o mount
- After the above command runs successfully, run the
odacli reinstate-dataguard
command.
This issue is tracked with Oracle bug 32047967.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in updating Role after Oracle Data Guard operations
When performing operations with Oracle Data Guard on Oracle Database Appliance, an error is encountered in updating the Role.
The role shown in the odacli describe-database command output is not updated after Oracle Data Guard switchover, failover, and reinstate operations on Oracle Database Appliance.
Hardware Models
All Oracle Database Appliance hardware models with Oracle Data Guard configuration
Workaround
Run odacli update-registry -n db --force/-f
to update the
database metadata. After the job completes, run the odacli
describe-database
command and verify that dbRole is updated.
This issue is tracked with Oracle bug 31378202.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in running other operations when modifying database with CPU pool
While a database is being modified with a CPU pool, running other operations on the same database causes an error.
# odacli create-backup -in dbName -bt Regular-L0
DCS-10089:Database dbName is in an invalid state '{Node Name:closed}'
Hardware Models
All Oracle Database Appliance hardware models with bare metal configuration
Workaround
Wait until the odacli modify-database job completes before you perform any other operation on the same database.
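In a script, that wait can be enforced by polling the job status until it reports success; the command string and the "Success" marker below are assumptions about the environment, not documented odacli behavior:

```shell
# Sketch: run a status command repeatedly until its output contains
# "Success", then return. In practice the polled command would be
# something like "odacli describe-job -i <job_id>" (assumed; check the
# exact output format on your release before relying on the marker).
wait_for_job() {
  until eval "$1" | grep -q 'Success'; do
    sleep 10
  done
}
```

For example, wait_for_job 'odacli describe-job -i <job_id>' would block until that job reports success, after which the next operation can safely run.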
This issue is tracked with Oracle bug 32045674.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in restoring a TDE-enabled database
When restoring a TDE-enabled database on Oracle Database Appliance, an error is encountered.
Failed to copy file from : source_location to: destination_location
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Do not change the database storage type when restoring a TDE-enabled database.
This issue is tracked with Oracle bug 31848183.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error when recovering a single-instance database
When recovering a single-instance database, an error is encountered.
DCS-10001:Internal error encountered: DCS-10001:Internal error encountered:
Missing arguments : required sqlplus connection information is not
provided
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Perform recovery of the single-instance database on the node where the database is running.
This issue is tracked with Oracle bug 31399400.
Parent topic: Known Issues When Managing Oracle Database Appliance
Job history not erased after running cleanup.pl
After running cleanup.pl
, job history is not
erased.
After running cleanup.pl, when you run the /opt/oracle/dcs/bin/odacli list-jobs command, the job list is not empty.
Hardware Models
All Oracle Database Appliance hardware models for bare metal deployments
Workaround
- Stop the DCS Agent by running the following commands on both nodes.
For Oracle Linux 6, run:
initctl stop initdcsagent
For Oracle Linux 7, run:
systemctl stop initdcsagent
- Run the cleanup script sequentially on both the nodes.
This issue is tracked with Oracle bug 30529709.
Parent topic: Known Issues When Managing Oracle Database Appliance
Inconsistency in ORAchk summary and details report page
The ORAchk report summary on the Browser User Interface may show different counts of Critical, Failed, and Warning issues than the report detail page.
Hardware Models
Oracle Database Appliance hardware models bare metal deployments
Workaround
Ignore counts of Critical, Failed, and Warning issues in the ORAchk report summary on the Browser User Interface. Check the report detail page.
This issue is tracked with Oracle bug 30676674.
Parent topic: Known Issues When Managing Oracle Database Appliance
Missing DATA, RECO, and REDO entries when dbstorage is rediscovered
Running the odacli update-registry
command with -n
all --force
or -n dbstorage --force
option can result in metadata corruption.
Hardware Models
All Oracle Database Appliance hardware models bare metal deployments
Workaround
Use the -n all option only on migrated systems where all the databases were created using OAKCLI. On other systems that run on the DCS stack, update each component other than dbstorage individually, using the odacli update-registry -n component_name command.
This issue is tracked with Oracle bug 30274477.
Parent topic: Known Issues When Managing Oracle Database Appliance
The odaeraser tool does not work if oakd is running in non-cluster mode
After cleaning up the deployment, the Secure Eraser tool does not work if oakd is running in non-cluster mode.
Hardware Models
All Oracle Database Appliance Hardware bare metal systems
Workaround
After cleanup of the deployment, oakd is started in non-cluster mode and cannot be stopped using the odaadmcli stop oak command. In this state, running the Secure Eraser tool causes the odaeraser command to fail.
Use the odaadmcli shutdown oak command to stop oakd.
This issue is tracked with Oracle bug 28547433.
Parent topic: Known Issues When Managing Oracle Database Appliance
Issues with the Web Console on Microsoft web browsers
Oracle Database Appliance Web Console has issues on Microsoft Edge and Microsoft Internet Explorer web browsers.
- Oracle Database Appliance Web Console does not display correctly on Microsoft Edge and Microsoft Internet Explorer web browsers.
- Advanced Information for the appliance does not display on Microsoft Internet Explorer web browser.
- Job activity status does not refresh in the Web Console on Microsoft Internet Explorer web browser.
- After configuring the oda-admin password, the following error is
displayed:
Failed to change the default user (oda-admin) account password. Status Code: 500 DCS-10001: DCS-10001:Internal error encountered: User not authorized
Workaround: Close the Microsoft Internet Explorer browser session and open another browser session.
Hardware Models
All Oracle Database Appliance Hardware Models bare metal deployments
Workaround
To access the Web Console, use either Google Chrome or Firefox.
This issue is tracked with Oracle bugs 30077007, 30099089, and 29887027.
Parent topic: Known Issues When Managing Oracle Database Appliance
Unrecognized Token Messages Appear in /var/log/messages
After updating Oracle Database Appliance, unrecognized token messages appear in /var/log/messages
.
Updating to Oracle Database Appliance 12.1.2.11.0 updates the Oracle VM Server version to 3.4.3. After updating, the following messages appear in /var/log/messages
:
Unrecognized token: "max_seq_redisc"
Unrecognized token: "rereg_on_guid_migr"
Unrecognized token: "aguid_inout_notice"
Unrecognized token: "sm_assign_guid_func"
Unrecognized token: "reports"
Unrecognized token: "per_module_logging"
Unrecognized token: "consolidate_ipv4_mask"
You can ignore the messages for these parameters; they do not impact the InfiniBand compliant Subnet Manager and Administration (opensm) functionality. However, Oracle recommends removing the parameters to avoid flooding /var/log/messages
.
Hardware Models
Oracle Database Appliance X6-2-HA and X5-2 with InfiniBand
Workaround
Perform the following to remove the parameters:
- After patching, update the /etc/opensm/opensm.conf file in bare metal deployments and in Dom0 in virtualized platform environments to remove the parameters.
cat /etc/opensm/opensm.conf | egrep -w 'max_seq_redisc|rereg_on_guid_migr|aguid_inout_notice|sm_assign_guid_func|reports|per_module_logging|consolidate_ipv4_mask' | grep -v ^#
max_seq_redisc 0
rereg_on_guid_migr FALSE
aguid_inout_notice FALSE
sm_assign_guid_func uniq_count
reports 2
per_module_logging FALSE
consolidate_ipv4_mask 0xFFFFFFFF
- Reboot. The messages will not appear after rebooting the node.
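The edit to opensm.conf can be sketched as a sed command; commenting the tokens out rather than deleting them is an assumption of this sketch that keeps the original values recoverable, and you should back up opensm.conf before editing it:

```shell
# Sketch: comment out the unrecognized opensm tokens in the given file.
# The -i.bak flag leaves a backup copy next to the edited file.
remove_opensm_tokens() {
  sed -i.bak -E \
    's/^(max_seq_redisc|rereg_on_guid_migr|aguid_inout_notice|sm_assign_guid_func|reports|per_module_logging|consolidate_ipv4_mask)([[:space:]])/# \1\2/' \
    "$1"
}
```

For example, remove_opensm_tokens /etc/opensm/opensm.conf (run as root, and in Dom0 on virtualized platforms) comments out only those seven parameters and leaves the rest of the file untouched.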
This issue is tracked with Oracle bug 25985258.
Parent topic: Known Issues When Managing Oracle Database Appliance