4 Known Issues with Oracle Database Appliance in This Release
The following are known issues when deploying, updating, and managing Oracle Database Appliance in this release.
- Known Issues When Patching Oracle Database Appliance
Understand the known issues when patching Oracle Database Appliance to this release.
- Known Issues When Deploying Oracle Database Appliance
Understand the known issues when provisioning or deploying Oracle Database Appliance.
- Known Issues When Managing Oracle Database Appliance
Understand the known issues when managing or administering Oracle Database Appliance.
Known Issues When Patching Oracle Database Appliance
Understand the known issues when patching Oracle Database Appliance to this release.
- DCS Agent unavailable due to time zone errors when provisioning or patching Oracle Database Appliance to release 19.10 or when creating a DB System on KVM
Understand if bug 32629684 affects your deployment, and the workaround to apply.
- Error in stopping Oracle Grid Infrastructure when patching Oracle Database Appliance
If you created an Oracle Data Guard configuration or an Oracle Database network using odacli create-network, then there is an error in stopping Oracle Grid Infrastructure during patching.
- Error in upgrading from Oracle Linux 6 to Oracle Linux 7 during Oracle Database Appliance patching
During the upgrade from Oracle Linux 6 to Oracle Linux 7, as part of the Oracle Database Appliance upgrade from release 18.8 to 19.x, an error is encountered.
- Error in updating DCS components when patching Oracle Database Appliance
When updating DCS components while patching Oracle Database Appliance, an error is encountered.
- Error in updating DCS components after updating DCS admin when patching Oracle Database Appliance
When patching Oracle Database Appliance, if you run the odacli update-dcscomponents command before DCS admin is completely updated, then an error is encountered.
- Error when patching Database homes to Oracle Database Appliance release 19.10
Patching of Oracle Database homes of version 19.9.0.0.201018 to Oracle Database home version 19.10.0.0.210119 may fail.
- Error when patching 11.2.0.4 Database homes to Oracle Database Appliance release 19.10
Patching of database homes of versions 11.2.0.4.180717, 11.2.0.4.170814, or 11.2.0.4.180417 to version 11.2.0.4.210119 may fail.
- Error message displayed even when patching Oracle Database Appliance is successful
Although patching of Oracle Database Appliance was successful, an error message is displayed.
- Error in updating storage when patching Oracle Database Appliance
When updating storage during patching of Oracle Database Appliance, an error is encountered.
- Error in running ORAchk
When running the command odacli create-prepatchreport during patching, an error is encountered.
- Error in prepatch report
When running the command odacli create-prepatchreport during patching, an error is encountered.
- Error in Oracle Grid Infrastructure upgrade
Oracle Grid Infrastructure upgrade fails, though the rootupgrade.sh script ran successfully.
- Error when running ORAChk or updating the server or database home
When running Oracle ORAchk or the commands odacli create-prepatchreport, odacli update-server, or odacli update-dbhome, an error is encountered.
- Errors when running ORAchk or the odacli create-prepatchreport command
When you run ORAchk or the odacli create-prepatchreport command, an error is encountered.
- Error in patching database homes
An error is encountered when patching database homes on databases that have Standard Edition High Availability enabled.
- Error in server patching
An error is encountered when patching the server.
- Server status not set to Normal when patching
When patching Oracle Database Appliance, an error is encountered.
- Error when patching to 12.1.0.2.190716 Bundle Patch
When patching Oracle Database release 12.1.0.2 to Oracle Database 12.1.0.2.190716 Bundle Patch, an error is encountered.
- Patching of M.2 drives not supported
Patching of M.2 drives (local disks SSDSCKJB48 and SSDSCKJB480G7) is not supported.
- 11.2.0.4 databases fail to start after patching
After patching Oracle Database Appliance to release 18.3, databases of version 11.2.0.4 fail to start.
- Error in patching Oracle Database Appliance
When applying the server patch for Oracle Database Appliance, an error is encountered.
DCS Agent unavailable due to time zone errors when provisioning or patching Oracle Database Appliance to release 19.10 or when creating a DB System on KVM
Understand if bug 32629684 affects your deployment, and the workaround to apply.
Oracle Database Appliance Patches affected by Bug 32629684
- Patch 32351355 - Oracle Database Appliance Server Patch for ODACLI/DCS Stack
- Patch 30403643 - Oracle Database Appliance ISO Image
- Patch 32451228 - Oracle Database Appliance KVM Database System Template
Bug 32629684 is fixed. If you downloaded the above-mentioned patches after March 22, 2021, then the workaround described in the following sections does not apply to your deployment.
To check the DCS software version on your deployment, run the following command:
# rpm -qa | grep dcs
dcs-agent-19.10.0.0.0_LINUX.X64_210222.4-1.x86_64
dcs-admin-19.10.0.0.0_LINUX.X64_210222.4-1.x86_64
dcs-controller-19.10.0.0.0_LINUX.X64_210222.4-1.x86_64
dcs-cli-19.10.0.0.0_LINUX.X64_210222.4-1.x86_64
If the DCS software version is 210222.4-1, then bug 32629684 does not apply to your deployment.
If the DCS software version is 210222-1, then you must apply the workaround described in the following section.
If your Oracle Database Appliance release 19.10 deployment has DCS software version 210222-1 and you have already applied the workaround, then no additional steps are required.
Error Description for Bug 32629684
The following errors occur if your DCS software version is 210222-1:
When provisioning or patching Oracle Database Appliance to release 19.10 or creating a DB System on KVM, the default time zone is set to PDT, and an error is encountered.
The dcs-agent.log file contains the following error entry:
2021-03-14 11:09:49,844 ERROR [main] [] o.h.e.j.s.SqlExceptionHelper: Unable to start the Universal Connection Pool: oracle.ucp.UniversalConnectionPoolException: Cannot get Connection from Datasource: java.sql.SQLException: The server time zone value 'PDT' is unrecognized or represents more than one time zone. You must configure either the server or JDBC driver (via the 'serverTimezone' configuration property) to use a more specific time zone value if you want to utilize time zone support.
The odacli ping-agent command also fails:
# /opt/oracle/dcs/bin/odacli ping-agent
DCS-10001:Internal error encountered: Fail to start hand shake to localhost:7070.
The odacli update-dcscomponents command may fail because the migration utility encounters the above connection failure. The following error message is displayed:
DCS-10008:Failed to update DCScomponents: 19.10.0.0.0
Internal error while patching the DCS components :
DCS-10001:Internal error encountered: DCS-10001:Internal error encountered: Metadata migration failed on hostname: Please refer /opt/oracle/dcs/log/jobfiles/job-id/migration.log on hostname
This error occurs because MySQL does not recognize abbreviated time zones such as PDT or CST, with the exception of UTC. Hence, named time zones must be used instead of abbreviated time zones in the MySQL configuration file /opt/oracle/dcs/mysql/etc/mysqldb.cnf. For example, use America/Los_Angeles instead of PDT or PST to specify the time zone.
This error occurs immediately after provisioning or patching Oracle Database Appliance to release 19.10 or creating a DB System on KVM. Use the workaround described in the following section to resolve this time zone issue.
Hardware Models
All Oracle Database Appliance hardware models with DCS software version 210222-1
Workaround for DB Systems on KVM
Download and use the latest Patch 32451228 - Oracle Database Appliance KVM Database System Template released on March 22, 2021.
Workaround for Provisioning and Patching Bare Metal Systems
If the DCS software version is 210222-1, then you must apply the workaround described in this section. Follow these steps:
- Stop the DCS agent:
# systemctl stop initdcsagent
- Stop the MySQL server:
# systemctl stop oda-mysql
- Set the time zone by updating the default-time-zone field in the Oracle Database Appliance MySQL configuration file /opt/oracle/dcs/mysql/etc/mysqldb.cnf and save the changes. For example, to set the time zone to America/Los_Angeles, update the MySQL configuration file as follows:
-------------------------------------------
[mysqld]
port=3306
default-time-zone=America/Los_Angeles
…
-------------------------------------------
- Start the MySQL server:
# systemctl start oda-mysql
- Start the DCS agent:
# systemctl start initdcsagent
- Verify that the DCS agent runs correctly:
# odacli ping-agent
Agent is ready to serve the requests.
This issue is tracked with Oracle bug 32629684.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in stopping Oracle Grid Infrastructure when patching Oracle Database Appliance
If you created an Oracle Data Guard configuration or an Oracle Database network using odacli create-network, then an error occurs when stopping Oracle Grid Infrastructure during patching. The patching log contains entries similar to the following:
CRS-2673: Attempting to stop 'test_vip.vip' on 'test'
CRS-2677: Stop of 'test_vip.vip' on 'test' succeeded
CRS-2675: Stop of 'test_vip.vip' on 'test' failed
CRS-2677: Stop of 'vm2.kvm' on 'test' succeeded
CRS-2673: Attempting to stop 'ora.data.vs1.acfs' on 'test'
Hardware Models
All Oracle Database Appliance hardware models with Oracle Database Appliance release 19.6 or later
Workaround
Stop the Virtual IP and listener manually before patching to Oracle Database Appliance release 19.10, and then ignore this error during patching.
This issue is tracked with Oracle bug 32224312.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in upgrading from Oracle Linux 6 to Oracle Linux 7 during Oracle Database Appliance patching
During the upgrade from Oracle Linux 6 to Oracle Linux 7, as part of the Oracle Database Appliance upgrade from release 18.8 to 19.x, an error is encountered.
The following error is encountered when running the odacli update-server command:
DCS-10059:Clusterware is not running on all nodes
The Oracle ASM trace file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_ora_25383.trc has the following error:
KSIPC: ksipc_open: Failed to complete ksipc_open at process startup!!
KSIPC: ksipc_open: ORA-27504: IPC error creating OSD context
This occurs because the STIG Oracle Linux 6 rules deployed on an Oracle Database Appliance system prevent RDS/RDS_TCP from being loaded (due to the OL6-00-000126 rule).
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Edit the /etc/modprobe.d/modprobe.conf file.
- Comment out the following lines (see the sketch after this list):
# The RDS protocol is disabled
# install rds /bin/true
- Restart the nodes.
- Run the odacli update-server command again.
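A hedged one-line sketch of the comment-out step, assuming the file contains the line install rds /bin/true exactly as shown (review the file before and after editing):
# sed -i 's|^install rds /bin/true|# install rds /bin/true|' /etc/modprobe.d/modprobe.conf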
This issue is tracked with Oracle bug 31881957.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in updating DCS components when patching Oracle Database Appliance
When updating DCS components when patching Oracle Database Appliance, an error is encountered.
If the /opt directory is full, then the following error is seen when running the odacli update-dcscomponents command:
java.io.IOException: No space left on device
Hardware Models
All Oracle Database Appliance hardware models
Workaround
All patches and clone files are stored in the /opt directory. Use the command odacli cleanup-patchrepo to remove unnecessary patches, as sketched below. Retry the operation after cleaning up the directory.
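For example, a hedged sketch of repository cleanup (the -cl, -v, and -comp flags are assumptions; verify them with odacli cleanup-patchrepo -h, and the release version shown is illustrative):
# odacli cleanup-patchrepo -cl
# odacli cleanup-patchrepo -v 19.9.0.0.0 -comp GI,DB
The first command removes unused clone files from the repository; the second removes the patch files of an older release for the Grid Infrastructure and database components.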
This issue is tracked with Oracle bug 32534150.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in updating DCS components after updating DCS admin when patching Oracle Database Appliance
When patching Oracle Database Appliance, if you run the odacli update-dcscomponents command before DCS admin is completely updated, then an error is encountered.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Check the Zookeeper status before running the odacli update-dcscomponents command.
On a single-node deployment, the expected Zookeeper mode is standalone:
# /opt/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: standalone
On a high-availability deployment, the expected Zookeeper mode is leader or follower:
# /opt/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: follower
This issue is tracked with Oracle bug 32531539.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error when patching Database homes to Oracle Database Appliance release 19.10
Patching of Oracle Database homes of version 19.9.0.0.201018 to Oracle Database home version 19.10.0.0.210119 may fail.
DCS-10001:Internal error encountered: Failed to run SQL script: datapatch script.
This error occurs only when patching Oracle Database homes of version 19.9.0.0.201018 to Oracle Database home version 19.10.0.0.210119.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
If the command odacli update-dbhome -v 19.10.0.0.0 -i DB_Home_ID fails, then run datapatch manually on all databases that run in the database home with the -pdbs option, and provide the list of all PDBs, including CDB$ROOT.
- Get all the PDB names from the following query:
SQL> select name from v$containers where OPEN_MODE='READ WRITE';
NAME
------------------------------------------------------------------------------
CDB$ROOT
PDB1
SQL> exit;
- Run datapatch manually on all databases that run in the database home with the -pdbs option:
$ORACLE_HOME/OPatch/datapatch -pdbs CDB$ROOT,PDB1
- Run the odacli update-dbhome -v 19.10.0.0.0 -i DB_Home_ID command again.
This issue is tracked with Oracle bug 32438382.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error when patching 11.2.0.4 Database homes to Oracle Database Appliance release 19.10
Patching of database homes of versions 11.2.0.4.180717, 11.2.0.4.170814, or 11.2.0.4.180417 to version 11.2.0.4.210119 may fail in the following cases:
- When the DCS Agent version is 19.9, and you patch database homes from 11.2.0.4.180717, 11.2.0.4.170814, or 11.2.0.4.180417 to 11.2.0.4.201020 (the Database home version released with Oracle Database Appliance release 19.9)
- When the DCS Agent version is 19.10, and you patch database homes from 11.2.0.4.180717, 11.2.0.4.170814, or 11.2.0.4.180417 to 11.2.0.4.210119 (the Database home version released with Oracle Database Appliance release 19.10)
- When the DCS Agent version is 19.10, and you patch database homes from 11.2.0.4.180717, 11.2.0.4.170814, or 11.2.0.4.180417 to 11.2.0.4.200114 (the Database home version released with Oracle Database Appliance release 19.6)
This error occurs only when patching Oracle Database homes of versions 11.2.0.4.180717, 11.2.0.4.170814, or 11.2.0.4.180417 using a 19.10.0.0.0 version DCS Agent.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Patch your 11.2.0.4 Oracle Database home to any version earlier than 11.2.0.4.210119 (the version released with Oracle Database Appliance release 19.10) while the DCS Agent is still at a version earlier than 19.10.0.0.0, and then update the DCS Agent to 19.10.
Note that once you patch the DCS Agent to 19.10.0.0.0, patching of these old 11.2.0.4 homes will fail.
This issue is tracked with Oracle bug 32498178.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error message displayed even when patching Oracle Database Appliance is successful
Although patching of Oracle Database Appliance was successful, an error message is displayed.
The following error message is displayed when running the odacli update-dcscomponents command:
# time odacli update-dcscomponents -v 19.10.0.0.0
DCS-10008:Failed to update DCScomponents: 19.10.0.0.0
Internal error while patching the DCS components :
DCS-10231:Cannot proceed. Pre-checks for update-dcscomponents failed. Refer
to /opt/oracle/dcs/log/-dcscomponentsPreCheckReport.log on node 1 for
details.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
This is a timing issue with setting up the SSH equivalence. Run the odacli update-dcscomponents command again and the operation completes successfully.
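For example, rerun the same command shown in the failing output above:
# odacli update-dcscomponents -v 19.10.0.0.0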
This issue is tracked with Oracle bug 32553519.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in updating storage when patching Oracle Database Appliance
When updating storage during patching of Oracle Database Appliance, an error is encountered.
# odacli describe-job -i 765c5601-f4ad-44f0-a989-45a0b7432a0d
Job details
----------------------------------------------------------------
ID: 765c5601-f4ad-44f0-a989-45a0b7432a0d
Description: Storage Firmware Patching
Status: Failure
Created: February 24, 2021 8:15:21 AM PST
Message: ZK Wait Timed out. ZK is Offline
Task Name Start Time End Time Status
---------------------------------------- ------------------------------------------------------------------
Storage Firmware Patching February 24, 2021 8:18:06 AM PST February 24, 2021 8:18:48 AM PST Failure
task:TaskSequential_140 February 24, 2021 8:18:06 AM PST February 24, 2021 8:18:48 AM PST Failure
Applying Firmware Disk Patches February 24, 2021 8:18:28 AM PST February 24, 2021 8:18:48 AM PST Failure
Hardware Models
Oracle Database Appliance X5-2 hardware models with InfiniBand
Workaround
- Check the private network (ibbond0) and ping the private IPs from each node (see the sketch after this list).
- If the private IPs are not pingable, then restart the private network interfaces on both nodes and retry.
- Check the Zookeeper status.
- On Oracle Database Appliance high-availability deployments, if the Zookeeper status is not in the leader or follower mode, then continue to the next job.
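A minimal sketch of these checks, assuming the ibbond0 interface and a placeholder private IP for the peer node (substitute your own addresses; the zkServer.sh path is the one used elsewhere in these notes):
# ip addr show ibbond0
# ping -c 3 192.168.16.25
# ifdown ibbond0 && ifup ibbond0
# /opt/zookeeper/bin/zkServer.sh status
Run the ifdown/ifup pair only if the pings fail.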
This issue is tracked with Oracle bug 32550378.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in running ORAchk
When running the command odacli create-prepatchreport during patching, an error is encountered.
AHF-4819: The vm.min_free_kbytes configuration is not set as recommended
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Ignore the error.
This issue is tracked with Oracle bug 32418503.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in prepatch report
When running the command odacli create-prepatchreport during patching, an error is encountered.
Command execution test failed: Value returned -root, expected -null.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Ignore the error if the instance is not provisioned. This validation error may occur when attempting to patch an Oracle Database Appliance system where the oracle or grid user is already created.
This issue is tracked with Oracle bug 32491470.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in Oracle Grid Infrastructure upgrade
Oracle Grid Infrastructure upgrade fails, though the rootupgrade.sh script ran successfully.
The log file in /opt/oracle/oak/log/<NODENAME>/patch/19.8.0.0.0/ contains the following entries:
ERROR: The clusterware active state is UPGRADE_AV_UPDATED
INFO: ** Refer to the release notes for more information **
INFO: ** and suggested corrective action **
This is because when the root upgrade scripts run on the last node, the active version is not set to the correct state.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- As the root user, run the following command on the second node:
/u01/app/19.0.0.0/grid/rootupgrade.sh -f
- After the command completes, verify that the active version of the cluster is updated to UPGRADE FINAL:
/u01/app/19.0.0.0/grid/bin/crsctl query crs activeversion -f
The cluster upgrade state is [UPGRADE FINAL]
- Run the Oracle Database Appliance server patching process again to upgrade Oracle Grid Infrastructure.
This issue is tracked with Oracle bug 31546654.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error when running ORAChk or updating the server or database home
When running Oracle ORAchk or the commands odacli create-prepatchreport, odacli update-server, or odacli update-dbhome, an error is encountered.
- Table AUD$[FGA_LOG$] should use Automatic Segment Space Management
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- To verify the segment space management policy currently in use by the AUD$ and FGA_LOG$ tables, use the following SQL*Plus command:
select t.table_name, ts.segment_space_management from dba_tables t, dba_tablespaces ts where ts.tablespace_name = t.tablespace_name and t.table_name in ('AUD$','FGA_LOG$');
- The output should be similar to the following:
TABLE_NAME                     SEGMEN
------------------------------ ------
FGA_LOG$                       AUTO
AUD$                           AUTO
- If one or both of the AUD$ or FGA_LOG$ tables return "MANUAL", use the DBMS_AUDIT_MGMT package to move them to the SYSAUX tablespace:
BEGIN
DBMS_AUDIT_MGMT.set_audit_trail_location(audit_trail_type => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD, --this moves table AUD$
audit_trail_location_value => 'SYSAUX');
END;
/
BEGIN
DBMS_AUDIT_MGMT.set_audit_trail_location(audit_trail_type => DBMS_AUDIT_MGMT.AUDIT_TRAIL_FGA_STD, --this moves table FGA_LOG$
audit_trail_location_value => 'SYSAUX');
END;
/
This issue is tracked with Oracle bug 27856448.
Parent topic: Known Issues When Patching Oracle Database Appliance
Errors when running ORAchk or the odacli create-prepatchreport command
When you run ORAchk or the odacli create-prepatchreport command, an error is encountered.
One or more log archive destination and alternate log archive destination settings are not as recommended
Software home check failed
Hardware Models
Oracle Database Appliance hardware models bare metal deployments
Workaround
Run the odacli update-dbhome, odacli create-prepatchreport, or odacli update-server command with the -sko option. For example:
odacli update-dbhome -j -v 19.10.0.0.0 -i dbhome_id -sko
This issue is tracked with Oracle bugs 30931017, 31631618, and 31921112.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in patching database homes
An error is encountered when patching database homes on databases that have Standard Edition High Availability enabled.
When you run the command odacli update-dbhome -v release_number on database homes that have Standard Edition High Availability enabled, the following error is encountered:
WARNING::Failed to run the datapatch as db <db_name> is not in running state
Hardware Models
All Oracle Database Appliance hardware models with High-Availability deployments
Workaround
- Locate the running node of the target database instance:
srvctl status database -database dbUniqueName
Or, relocate the single-instance database instance to the required node:
odacli modify-database -g node_number (-th node_name)
- On the running node, manually run datapatch for non-CDB databases:
dbhomeLocation/OPatch/datapatch
- For CDB databases, locate the PDB list using SQL*Plus, and then run datapatch with the -pdbs option:
select name from v$containers where open_mode='READ WRITE';
dbhomeLocation/OPatch/datapatch -pdbs pdb_names_found_in_previous_step_divided_by_comma
This issue is tracked with Oracle bug 31654816.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in server patching
An error is encountered when patching the server.
When running the command odacli update-server -v release_number, the following error is encountered:
DCS-10001:Internal error encountered: patchmetadata for 19.6.0.0.0 missing target version for GI.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Change the file ownership temporarily to the appropriate grid user for the osdbagrp binary in the grid_home/bin location. For example:
$ chown -R grid:oinstall /u01/app/18.0.0.0/grid/bin/osdbagrp
- Run either the odacli update-registry -n gihome or the odacli update-registry -n system command.
This issue is tracked with Oracle bug 31125258.
Parent topic: Known Issues When Patching Oracle Database Appliance
Server status not set to Normal when patching
When patching Oracle Database Appliance, an error is encountered.
When patching the appliance, the odacli update-server command fails with the following error:
DCS-10001:Internal error encountered: Server upgrade state is not NORMAL node_name
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Run the command:
Grid_home/bin/cluvfy stage -post crsinst -collect cluster -gi_upgrade -n all
- Ignore the following two warnings:
Verifying OCR Integrity ...WARNING
PRVG-6017 : OCR backup is located in the same disk group "+DATA" as OCR.
Verifying Single Client Access Name (SCAN) ...WARNING
PRVG-11368 : A SCAN is recommended to resolve to "3" or more IP
- Run the command again until the output displays only the two warnings above. The status of Oracle Clusterware should be Normal again.
- You can verify the status with the command:
Grid_home/bin/crsctl query crs activeversion -f
This issue is tracked with Oracle bug 30099090.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error when patching to 12.1.0.2.190716 Bundle Patch
When patching Oracle Database release 12.1.0.2 to Oracle Database 12.1.0.2.190716 Bundle Patch, an error is encountered.
The ODACLI job displays the following error:
DCS-10001:Internal error encountered: Failed to run SQL script: datapatch script.
The datapatch log contains the entry "Prereq check failed, exiting without installing any patches.".
Hardware Models
All Oracle Database Appliance hardware models bare metal deployments
Workaround
Install the same patch again.
This issue is tracked with Oracle bugs 30026438 and 30155710.
Parent topic: Known Issues When Patching Oracle Database Appliance
Patching of M.2 drives not supported
Patching of M.2 drives (local disks SSDSCKJB48 and SSDSCKJB480G7) is not supported.
These drives are displayed when you run the odacli describe-component command. Patching of neither of the two known versions 0112 and 0121 of the M.2 disk is supported. Patching the LSI controller version 13.00.00.00 to version 16.00.01.00 is also not supported. However, on some Oracle Database Appliance X8-2 models, the installed LSI controller version may be 16.00.01.00.
Hardware Models
Oracle Database Appliance bare metal deployments
Workaround
None
This issue is tracked with Oracle bug 30249232.
Parent topic: Known Issues When Patching Oracle Database Appliance
11.2.0.4 databases fail to start after patching
After patching Oracle Database Appliance to release 18.3, databases of version 11.2.0.4 fail to start.
Hardware Models
All Oracle Database Appliance Hardware models
Workaround
Databases of versions 11.2.0.4.170814 and 11.2.0.4.171017 must be started manually after patching to Oracle Database Appliance release 18.3:
srvctl start database -db db_unique_name
This issue is tracked with Oracle bug 28815716.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in patching Oracle Database Appliance
When applying the server patch for Oracle Database Appliance, an error is encountered.
Error Encountered When Patching Bare Metal Systems:
When patching the appliance on bare metal systems, the odacli update-server command fails with the following error:
Please stop TFA before server patching.
To resolve this issue, follow the steps described in the Workaround.
Error Encountered When Patching Virtualized Platform:
When patching the appliance on Virtualized Platform, patching fails with an error similar to the following:
INFO: Running prepatching on local node
WARNING: errors seen during prepatch on local node
ERROR: Unable to apply the patch 1
Check the prepatch log file generated in the directory /opt/oracle/oak/log/hostname/patch/18.8.0.0.0. You can also view the prepatch log for the last run with the command ls -lrt prepatch_*.log.
Check the last log file in the command output.
In the log file, search for entries similar to the following:
ERROR: date_time_stamp: TFA is running on one or more nodes.
WARNING: date_time_stamp: Shutdown TFA and then restart patching
INFO: date_time_stamp: Read the Release Notes for additional information.
To resolve this issue, follow the steps described in the Workaround.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
On bare metal systems:
- Run tfactl stop on all the nodes in the cluster.
- Restart patching once Oracle TFA Collector has stopped on all nodes.
On Virtualized Platform:
- Run /etc/init.d/init.tfa stop on all the nodes in the cluster.
- Restart patching once Oracle TFA Collector has stopped on all nodes.
This issue is tracked with Oracle bug 30260318.
Parent topic: Known Issues When Patching Oracle Database Appliance
Known Issues When Deploying Oracle Database Appliance
Understand the known issues when provisioning or deploying Oracle Database Appliance.
- Creating dbsystem with database shape above 16 not supported
When creating a dbsystem with a database shape above 16, an error is encountered.
- Error in registering database
When registering a database, an error is encountered.
- Error when creating the appliance for dbsystem
When creating a dbsystem, the copy cacerts step in the create-appliance job fails with a permission denied error.
- Error when creating the appliance
When creating a dbsystem, the copy truststore.jks step in the create-appliance job fails with a permission denied error.
- Error in creating Database System on KVM
When creating a dbsystem on KVM, an error is encountered.
- Error in creating a virtual machine on KVM
When creating a VM on Oracle Database Appliance on KVM, an error is encountered.
- Error when creating or restoring 11.2.0.4 database
An error is encountered when creating or restoring 11.2.0.4 databases.
- TFA disabled after patching Oracle Database Appliance
After patching Oracle Database Appliance, TFA status shows as disabled.
- Error when upgrading database from 11.2.0.4 to 12.1 or 12.2
When upgrading databases from 11.2.0.4 to 12.1 or 12.2, an error is encountered.
- Error when upgrading 12.1 single-instance database
When upgrading a 12.1 single-instance database, a job failure error is encountered.
- Failure in creating RECO disk group during provisioning
When provisioning Oracle Database Appliance X8-2-HA with High Performance configuration containing default storage and expansion shelf, creation of RECO disk group fails.
- Simultaneous creation of two Oracle ACFS Databases fails
If you try to create two Oracle ACFS databases on a system where there is no database or database storage already created, then database creation fails for one of the databases with an error.
- Database creation hangs when using a deleted database name for database creation
If you create a new database with the same name as a deleted database, database creation hangs.
- Error encountered after running cleanup.pl
Errors are encountered in running odacli commands after running cleanup.pl.
- Accelerator volume for data is not created on flash storage
The accelerator volume for data is not created on flash storage, for databases created during provisioning of the appliance.
- Errors in clone database operation
Clone database operation fails due to errors.
- Clone database operation fails
For Oracle Database release 12.1 databases, the database clone creation may fail because the default compatible version from Oracle binaries was set to 12.0.0.0.0.
Creating dbsystem with database shape above 16 not supported
When creating a dbsystem with database shape above 16, an error is encountered.
DCS-10045:Validation error encountered: DB System shape 'odb20' is not available on this ODA platform.
Hardware Models
All Oracle Database Appliance hardware models non-high-availability platforms
Workaround
None.
This issue is tracked with Oracle bug 32517584.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in registering database
When registering a database, an error is encountered.
If you configured multiple ports for the database listener before registering the database, then the command odacli register-database fails to register that database.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Configure only one port for the database listener before registering the database with the command odacli register-database. After the database is registered, configure the other listener ports.
This issue is tracked with Oracle bug 30095060.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error when creating the appliance for dbsystem
When creating a dbsystem, the copy cacerts step in the create-appliance job fails with a permission denied error.
odacli describe-job -i 74b4a402-dc74-4ec3-89f9-18603129cbc3
Job details
----------------------------------------------------------------
ID: 74b4a402-dc74-4ec3-89f9-18603129cbc3
Description: Provisioning service creation
Status: Failure
Created: February 19, 2021 8:24:49 PM GMT
Message: DCS-10001:Internal error encountered: failed to
copy DCS certificate : DCS-10001:Internal error encountered: Failed to scp
file /opt/oracle/dcs/dcscli/cacerts to /tmp/@<Private IP of Node 0>.
Permission
denied, please try again.
Permission denied, please try again
Task Name Start Time
End Time Status
---------------------------------------- -----------------------------------
----------------------------------- ----------
Provisioning service creation February 19, 2021 8:24:50 PM GMT
February 19, 2021 9:34:36 PM GMT Failure
..
..
Restart DCS Agent February 19, 2021 9:34:35 PM GMT
February 19, 2021 9:34:36 PM GMT Failure
Provisioning service creation February 19, 2021 9:34:35 PM GMT
February 19, 2021 9:34:36 PM GMT Failure
remote copy file February 19, 2021 9:34:35 PM GMT
February 19, 2021 9:34:36 PM GMT Failure
Hardware Models
All Oracle Database Appliance hardware models bare metal and KVM-based DB System deployments
Workaround
- Run the command touch /opt/oracle/dcs/conf/.agent_upgraded on both nodes (see the sketch after this list).
- Stop the DCS agent and Zookeeper on both nodes.
- Start Zookeeper on both nodes and then start the DCS agent on both nodes.
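A minimal sketch of these steps, run on each node (the service name and zkServer.sh path are the ones shown elsewhere in these notes):
# touch /opt/oracle/dcs/conf/.agent_upgraded
# systemctl stop initdcsagent
# /opt/zookeeper/bin/zkServer.sh stop
# /opt/zookeeper/bin/zkServer.sh start
# systemctl start initdcsagent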
This issue is tracked with Oracle bug 32423290.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error when creating the appliance
When creating a dbsystem, the copy truststore.jks step in the create-appliance job fails with a permission denied error.
# odacli describe-job -i 2a3952bc-a264-449b-a844-cac7862308bb
Job details
----------------------------------------------------------------
ID: 2a3952bc-a264-449b-a844-cac7862308bb
Description: Provisioning service creation
Status: Failure
Created: March 1, 2021 3:56:21 PM GMT
Message: DCS-10001:Internal error encountered: failed to
copy Trust store : DCS-10001:Internal error encountered: Failed to scp file
/opt/zookeeper/conf/truststore.jks to /tmp/@Private IP of Node 0. Warning:
Permanently added 'ip' (ECDSA) to the list of known hosts.
! Permission denied, please try again.
Task Name Start Time
End Time Status
---------------------------------------- -----------------------------------
----------------------------------- ----------
Provisioning service creation March 1, 2021 3:56:22 PM GMT
March 1, 2021 5:10:47 PM GMT Failure
..
..
Provisioning service creation March 1, 2021 5:10:39 PM GMT
March 1, 2021 5:10:47 PM GMT Failure
Create Trust Store March 1, 2021 5:10:39 PM GMT
March 1, 2021 5:10:45 PM GMT Success
Delete Trust Store March 1, 2021 5:10:45 PM GMT
March 1, 2021 5:10:45 PM GMT Success
Create Trust Store March 1, 2021 5:10:45 PM GMT
March 1, 2021 5:10:47 PM GMT Failure
Hardware Models
All Oracle Database Appliance hardware models bare metal and KVM-based DB System deployments
Workaround
- Copy the /opt/zookeeper/conf/truststore.jks file from Node 1 to Node 0 (see the sketch after this list).
- Stop the DCS agent and Zookeeper on both nodes.
- Start Zookeeper on both nodes and then start the DCS agent on both nodes.
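A hedged sketch of the copy step, run on Node 0 and assuming root SSH access between the nodes (node1 is a placeholder hostname):
# scp root@node1:/opt/zookeeper/conf/truststore.jks /opt/zookeeper/conf/truststore.jks
Then stop and start Zookeeper and the DCS agent as in the previous workaround.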
This issue is tracked with Oracle bug 32543488.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in creating Database System on KVM
When creating a dbsystem on KVM, an error is encountered.
This error occurs when creating Database System on KVM with the same virtual function (for virtual networks on Infiniband-based systems) being used for Oracle ASM and interconnect networks.
Hardware Models
All Oracle Database Appliance hardware models non-high-availability platforms
Workaround
- Stop the dbsystem and delete it (see the sketch after this list).
- Restart the DCS agent.
- Recreate the dbsystem.
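A hedged sketch of the sequence, assuming the DB System commands introduced with release 19.10 and placeholder names and file paths (verify the exact syntax with odacli help on your deployment):
# odacli stop-dbsystem -n dbs1
# odacli delete-dbsystem -n dbs1
# systemctl restart initdcsagent
# odacli create-dbsystem -p /tmp/dbs1.json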
This issue is tracked with Oracle bug 32509478.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in creating a virtual machine on KVM
When creating a VM on Oracle Database Appliance on KVM, an error is encountered.
The odacli create-vm command fails if the preferred node is specified and this node is different from the node where the odacli create-vm command is run.
Hardware Models
All Oracle Database Appliance hardware models high-availability deployments
Workaround
Run the odacli create-vm command from the preferred node.
This issue is tracked with Oracle bug 32537904.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error when creating or restoring 11.2.0.4 database
An error is encountered when creating or restoring 11.2.0.4 databases.
When you run the command odacli create-database or odacli irestore-database for 11.2.0.4 databases, the command fails to run at the Configuring DB Console step. This error may also occur when creating 11.2.0.4 databases using the Browser User Interface.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Run the commands without enabling DB Console (see the sketch below).
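A hedged sketch with a placeholder database name and the 11.2.0.4 home version named in this section (omit any DB Console option on the command line, and leave the DB Console checkbox unselected in the Browser User Interface):
# odacli create-database -n test11g -v 11.2.0.4.210119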
This issue is tracked with Oracle bug 31017360.
Parent topic: Known Issues When Deploying Oracle Database Appliance
TFA disabled after patching Oracle Database Appliance
After patching Oracle Database Appliance, TFA status shows as disabled.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Run the odacli update-dbhome command with the -sko option:
odacli update-dbhome -j -v 19.9.0.0.0 -i dbhome_id -sko
This issue is tracked with Oracle bug 32058933.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error when upgrading database from 11.2.0.4 to 12.1 or 12.2
When upgrading databases from 11.2.0.4 to 12.1 or 12.2, an error is encountered.
The following error is seen in the UpgradeResults.html file when upgrading the database from 11.2.0.4 to 12.1 or 12.2:
Database is using a newer time zone file version than the Oracle home
Hardware Models
All Oracle Database Appliance bare metal deployments
Workaround
- Refer to the Database Upgrade Guide for manual steps for fixing the time zone.
- After manually completing the database upgrade, run the following command to update the DCS metadata:
/opt/oracle/dcs/bin/odacli update-registry -n db -f
This issue is tracked with Oracle bug 31125985.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error when upgrading 12.1 single-instance database
When upgrading 12.1 single-instance database, a job failure error is encountered.
Hardware Models
All Oracle Database Appliance hardware models bare metal deployments
Workaround
- Before upgrading the 12.1 single-instance database, run the following PL/SQL command to change the local_listener to an empty string:
ALTER SYSTEM SET LOCAL_LISTENER='';
- After upgrading the 12.1 single-instance database successfully, run the following PL/SQL command to change the local_listener to the desired value:
ALTER SYSTEM SET LOCAL_LISTENER='-oracle-none-';
This issue is tracked with Oracle bugs 31202775 and 31214657.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Failure in creating RECO disk group during provisioning
When provisioning Oracle Database Appliance X8-2-HA with High Performance configuration containing default storage and expansion shelf, creation of RECO disk group fails.
Hardware Models
All Oracle Database Appliance X8-2-HA with High Performance configuration
Workaround
- Power off the storage expansion shelf.
- Reboot both nodes.
- Proceed with provisioning the default storage shelf (first JBOD).
- After the system is successfully provisioned with the default storage shelf (first JBOD), check that oakd is running on both nodes in foreground mode:
# ps -aef | grep oakd
- Check that all first JBOD disks have the status online, good in oakd, and CACHED in Oracle ASM.
- Power on the storage expansion shelf (second JBOD), and wait for a few minutes for the operating system and other subsystems to recognize it.
- Run the following commands from the master node to add the storage expansion shelf disks (two JBOD setup) to oakd and Oracle ASM:
# odaadmcli show ismaster
OAKD is in Master Mode
# odaadmcli expand storage -ndisk 24 -enclosure 1
Skipping precheck for enclosure '1'...
Check the progress of expansion of storage by executing 'odaadmcli show disk'
Waiting for expansion to finish ...
#
- Check that the storage expansion shelf disks (two JBOD setup) are added to oakd and Oracle ASM.
Replace odaadmcli with oakcli commands on Oracle Database Appliance Virtualized Platform in the procedure.
For more information, see the chapter Managing Storage in the Oracle Database Appliance X8-2 Deployment Guide.
This issue is tracked with Oracle bug 30839054.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Simultaneous creation of two Oracle ACFS Databases fails
If you try to create two Oracle ACFS databases on a system where there is no database or database storage already created, then database creation fails for one of the databases with an error.
DCS-10001:Internal error encountered: Fail to run command Failed to create volume.
Hardware Models
All Oracle Database Appliance bare metal deployments
Workaround
Manually delete the DATA volume (and REDO volume, in the case of Oracle Database Appliance X8-2) from the system.
To delete the DATA volume:
su - GRID_USER
export ORACLE_SID=+ASM1 (in case of first node) / +ASM2 (in case of second node)
export ORACLE_HOME=GRID_HOME
GRID_HOME/bin/asmcmd --nocp voldelete -G Data datdbname
To delete the REDO volume:
su - GRID_USER
export ORACLE_SID=+ASM1 (in case of first node) / +ASM2 (in case of second node)
export ORACLE_HOME=GRID_HOME
GRID_HOME/bin/asmcmd --nocp voldelete -G Reco rdodbname
On systems with a FLASH disk group:
su - GRID_USER
export ORACLE_SID=+ASM1 (in case of first node) / +ASM2 (in case of second node)
export ORACLE_HOME=GRID_HOME
GRID_HOME/bin/asmcmd --nocp voldelete -G Flash datdbname (if the volume exists in the FLASH disk group)
GRID_HOME/bin/asmcmd --nocp voldelete -G data datdbname (if the volume exists in the DATA disk group)
GRID_HOME/bin/asmcmd --nocp voldelete -G Flash rdodbname
This issue is tracked with Oracle bug 30750497.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Database creation hangs when using a deleted database name for database creation
If you create a new database with the same name as a deleted database, database creation hangs.
If you delete a 11.2.0.4 database, and then create a new database with same name as the deleted database, database creation hangs while unlocking the DBSNMP user for the database.
Hardware Models
All Oracle Database Appliance high-availability environments
Workaround
Before creating the 11.2.0.4 database with the same name as the deleted database, delete the DBSNMP user, if the user exists.
For example, the following command deletes the wallet for the user DBSNMP for the database testdb:
/u01/app/18.0.0.0/grid/bin/crsctl delete wallet -type CVUDB -name testdb -user DBSNMP
This issue is tracked with Oracle bug 28916487.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error encountered after running cleanup.pl
Errors are encountered in running odacli commands after running cleanup.pl.
After running cleanup.pl, when you try to use odacli commands, the following error is encountered:
DCS-10042:User oda-cliadmin cannot be authorized.
Hardware Models
All Oracle Database Appliance hardware models for bare metal deployments
Workaround
Run the following commands to set up the credentials for the user oda-cliadmin on the agent wallet:
# rm -rf /opt/oracle/dcs/conf/.authconfig
# /opt/oracle/dcs/bin/setupAgentAuth.sh
This issue is tracked with Oracle bug 29038717.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Accelerator volume for data is not created on flash storage
The accelerator volume for data is not created on flash storage, for databases created during provisioning of appliance.
Hardware Models
Oracle Database Appliance high capacity environments with HDD disks
Workaround
Do not create the database when provisioning the appliance. This creates all required disk groups, including flash. After provisioning the appliance, create the database. The accelerator volume is then created.
This issue is tracked with Oracle bug 28836461.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Errors in clone database operation
Clone database operation fails due to errors.
If the source database is single-instance or Oracle RAC One Node, or running on the remote node, the clone database operation fails, because the paths are not created correctly in the control file.
Clone database operation may also fail with errors if the source database creation time stamp is too close to the clone operation (at least within 60 minutes).
Hardware Models
All Oracle Database Appliance high-availability hardware models for bare metal deployments
Workaround
Create the clone database from the source database instance that is running on the same node from which the clone database creation is triggered.
If the source database was created less than 60 minutes before the clone operation, force a checkpoint on the source database first:
SQL> alter system checkpoint;
This issue is tracked with Oracle bugs 29002563, 29002004, 29001906, 29001855, 29001631, 28995153, 28986643, 30309971, and 30228362.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Clone database operation fails
For Oracle Database release 12.1 databases, the database clone creation may fail because the default compatible version from Oracle binaries was set to 12.0.0.0.0.
Hardware Models
All Oracle Database Appliance high-availability hardware models for bare metal deployments
Workaround
- Change the parameter value:
SQL> ALTER SYSTEM SET COMPATIBLE = '12.1.0.2.0' SCOPE=SPFILE;
- Shut down the database:
SQL> SHUTDOWN IMMEDIATE
- Start the database:
SQL> STARTUP
- Verify the parameter for the new value:
SQL> SELECT name, value, description FROM v$parameter WHERE name ='compatible';
This issue is tracked with Oracle bug 30309914.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Known Issues When Managing Oracle Database Appliance
Understand the known issues when managing or administering Oracle Database Appliance.
- Error in reinstate operation on Oracle Data Guard
When running the command odacli reinstate-dataguard on Oracle Data Guard, an error is encountered.
- Error in starting a database from a bare metal CPU pool
When starting a database after patching to Oracle Database Appliance release 19.10, an error is encountered.
- Error in restoring a database
When restoring a database on Oracle Database Appliance, an error is encountered.
- Error in running concurrent database or database home creation jobs
When running concurrent database or database home creation jobs, an error is encountered.
- SEHA disabled by default when creating Database System with single-instance Standard Edition Oracle Database
Standard Edition High-Availability (SEHA) is disabled by default when creating Database System with single-instance Standard Edition Oracle Database.
- Inconsistency in command output
When running the odacli recover-database command, there is inconsistency in the odacli describe-database and odacli list-cpupool output.
- Only local CPU Pool supported for single-instance and Enterprise Edition Oracle Database
Non-local CPU pool is not supported for single-instance and Enterprise Edition Oracle Database.
- Error in restoring a standby database for 11.2.0.4 database
When performing an iRestore operation on a standby database of version 11.2.0.4, an error is encountered.
- Error in cloning a database on KVM
When cloning a database on Oracle Database Appliance on KVM, an error is encountered.
- Error in cloning a virtual machine on KVM
When cloning a VM on Oracle Database Appliance on KVM, an error is encountered.
- Error in creating a KVM guest VM on Windows systems
When creating a KVM guest VM for Microsoft Windows on Oracle Database Appliance, an error is encountered.
- Error in Configuring Oracle Data Guard on Oracle ASM Database
When configuring Oracle Data Guard on Oracle Database Appliance, an error is encountered.
- Error in restoring a database
When restoring a database on Oracle Database Appliance, an error is encountered.
- Error in restoring of TDE-enabled database
When restoring a TDE-enabled database on Oracle Database Appliance, an error is encountered.
- Error in backup of TDE-enabled database
When performing backup of a TDE-enabled database on Oracle Database Appliance, an error is encountered.
- Error in Reinstating Oracle Data Guard
When reinstating Oracle Data Guard on Oracle Database Appliance, an error is encountered.
- Error in Configuring Oracle Data Guard
When configuring Oracle Data Guard on Oracle Database Appliance, an error is encountered.
- Failure in Reinstating Oracle Data Guard
When reinstating Oracle Data Guard on Oracle Database Appliance, an error is encountered.
- Failure in Reinstating Oracle Data Guard
When reinstating Oracle Data Guard on Oracle Database Appliance, an error is encountered.
- Error in running other operations when modifying database with CPU pool
When modifying a database with CPU pool, an error is encountered with other operations.
- Error in restoring a TDE-enabled database
When restoring a TDE-enabled database on Oracle Database Appliance, an error is encountered.
- Error in updating Role after Oracle Data Guard operations
When performing operations with Oracle Data Guard on Oracle Database Appliance, an error is encountered in updating the Role.
- Error when recovering a single-instance database
When recovering a single-instance database, an error is encountered.
- Error when rebooting the appliance
When rebooting Oracle Database Appliance, the user interactive screen is displayed.
- Job history not erased after running cleanup.pl
After running cleanup.pl, job history is not erased.
- Inconsistency in ORAchk summary and details report page
ORAchk report summary on the Browser User Interface may show different counts of Critical, Failed, and Warning issues than the report detail page.
- Missing DATA, RECO, and REDO entries when dbstorage is rediscovered
Running the odacli update-registry command with the -n all --force or -n dbstorage --force option can result in metadata corruption.
- The odaeraser tool does not work if oakd is running in non-cluster mode
After cleaning up the deployment, the Secure Eraser tool does not work if oakd is running in non-cluster mode.
- Issues with the Web Console on Microsoft web browsers
Oracle Database Appliance Web Console has issues on Microsoft Edge and Microsoft Internet Explorer web browsers.
- Unrecognized Token Messages Appear in /var/log/messages
After updating Oracle Database Appliance, unrecognized token messages appear in /var/log/messages.
Error in reinstate operation on Oracle Data Guard
When running the command odacli reinstate-dataguard on Oracle Data Guard, an error is encountered.
The dcs-agent.log file contains the error "DCS-10001:Internal error encountered: Unable to reinstate Dg." and, further, the error "ORA-12514: TNS:listener does not currently know of service requested".
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Ensure that the database you are reinstating is started in MOUNT mode.
srvctl start database -d db-unique-name -o mount
After the command completes successfully, run the odacli reinstate-dataguard command again. If the database is already in MOUNT mode, this can be a temporary error. Check the Oracle Data Guard status again a few minutes later with odacli describe-dataguardstatus or odacli list-dataguardstatus, or check with DGMGRL> SHOW CONFIGURATION; to see if the reinstatement is successful.
This issue is tracked with Oracle bug 32367676.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in starting a database from a bare metal CPU pool
When starting a database after patching to Oracle Database Appliance release 19.10, an error is encountered.
This error occurs because the service cgconfig.service is down:
# systemctl status cgconfig.service
cgconfig.service - Control Group configuration service
Loaded: loaded (/usr/lib/systemd/system/cgconfig.service; disabled; vendor preset: disabled)
Active: inactive (dead)
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Check the cgconfig.service status. If the status is disabled or inactive, then continue:
# systemctl status cgconfig.service
cgconfig.service - Control Group configuration service
Loaded: loaded (/usr/lib/systemd/system/cgconfig.service; disabled; vendor preset: disabled)
Active: inactive (dead)
- Start cgconfig.service:
# systemctl start cgconfig.service
- Enable cgconfig.service:
# systemctl enable cgconfig.service
Created symlink from /etc/systemd/system/sysinit.target.wants/cgconfig.service to /usr/lib/systemd/system/cgconfig.service.
- Check the cgconfig.service status:
# systemctl status cgconfig.service
cgconfig.service - Control Group configuration service
Loaded: loaded (/usr/lib/systemd/system/cgconfig.service; enabled; vendor preset: disabled)
Active: active (exited) since Mon 2021-02-22 23:03:34 CST; 3min 40s ago
Main PID: 16594 (code=exited, status=0/SUCCESS)
- Restart the failed database.
This issue is tracked with Oracle bug 31907677.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in restoring a database
When restoring a database on Oracle Database Appliance, an error is encountered.
This is because there are multiple database IDs in the wrong location, leading to failure in RMAN.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Do not specify backup location, or provide the correct backup location pointing to the parent directory of the source database backup.
This issue is tracked with Oracle bug 31907677.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in running concurrent database or database home creation jobs
When running concurrent database or database home creation jobs, an error is encountered.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
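A hedged pre-check: before submitting a new database or database home creation job, confirm that no other creation job is still running (odacli list-jobs is a standard DCS command; the grep filter shown is illustrative):
# odacli list-jobs | grep -i running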
Do not run concurrent database or database home creation jobs.
This issue is tracked with Oracle bug 32376885.
Parent topic: Known Issues When Managing Oracle Database Appliance
SEHA disabled by default when creating Database System with single-instance Standard Edition Oracle Database
Standard Edition High-Availability (SEHA) is disabled by default when creating Database System with single-instance Standard Edition Oracle Database.
Hardware Models
All Oracle Database Appliance hardware models high-availability deployments
Workaround
Use the command odacli modify-database -sh to enable SEHA on the database (see the sketch below).
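A hedged example, assuming -in is the database-name flag (as used by other odacli database commands in these notes) and mydb is a placeholder:
# odacli modify-database -in mydb -sh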
This issue is tracked with Oracle bugs 32444191, 32444195, and 32444190.
Parent topic: Known Issues When Managing Oracle Database Appliance
Inconsistency in command output
When running the odacli recover-database command, there is inconsistency in the odacli describe-database and odacli list-cpupool output.
When you run the odacli recover-database command from the other node, there is inconsistency in the odacli describe-database and odacli list-cpupool output:
# odacli describe-database -in test
Execution on node 0:
Database details
----------------------------------------------------------------
ID: 59c357dc-9088-41bb-b152-31f1ab751485
Description: test
DB Name: test
DB Version: 19.10.0.0.210119
DB Type: SI
DB Target Node Name: node2 ----> Node 1
...
CPU Pool Name: Test3 -----> describe-db option shows cpupool as Test3
# odacli list-cpupools
Name Type Configured on Cores Associated resources Created Updated
Test1 BM node1, 2 db1, test ---> In the list-cpupool output, the test db is listed under Test1
Hardware Models
All Oracle Database Appliance hardware models bare metal deployments
Workaround
Run the odacli recover-database command with the -cpupool option for a single-instance database from the same node where the database is running (see the sketch below). Avoid running the command from the other node.
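A hedged example using the names from the output above (run on the node where the database runs; -in and any recovery-type options follow your odacli release's syntax):
# odacli recover-database -in test -cpupool Test1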
This issue is tracked with Oracle bug 32559396.
Parent topic: Known Issues When Managing Oracle Database Appliance
Only local CPU Pool supported for single-instance and Enterprise Edition Oracle Database
Non-local CPU pool is not supported for single-instance and Enterprise Edition Oracle Database.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Use local CPU pool for single-instance and Enterprise Edition Oracle Database.
This issue is tracked with Oracle bug 32086625.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in restoring a standby database for 11.2.0.4 database
When performing an iRestore operation on a standby database of version 11.2.0.4, an error is encountered.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- After taking the backup and before performing the iRestore operation, delete the control file autobackups in the directory shown in the backupLocation attribute of the backup report (see the sketch after these steps):
c-3737675288-20210211-04
c-3737675288-20210211-05
c-3737675288-20210211-06
c-3737675288-20210212-00
c-3737675288-20210212-01
- Perform the database iRestore operation.
- After successfully performing the iRestore operation, create a backup of the source database.
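As referenced in the first step, the following is a hedged sketch for removing the control file autobackups, assuming the backup report shows a backupLocation of /u01/backup/mydb (a hypothetical disk path):
# List, then remove, the control file autobackups in the backupLocation directory
ls /u01/backup/mydb/c-*
rm /u01/backup/mydb/c-*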
This issue is tracked with Oracle bug 32473071.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in cloning a virtual machine on KVM
When cloning a virtual machine on Oracle Database Appliance on KVM, an error is encountered.
The odacli clone-vm command fails if the command is run from the remote node. For example, if the VM to clone is on Node0 before being stopped and the odacli clone-vm command is run from Node1, then the clone operation fails.
Hardware Models
All Oracle Database Appliance hardware models high-availability deployments
Workaround
Run the odacli clone-vm command from the same node where the VM was running.
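A minimal sketch, run on the node where the VM was last running; vm1 and vm1_clone are hypothetical names:
odacli clone-vm -n vm1 -cn vm1_clone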
This issue is tracked with Oracle bug 32141864.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in cloning a virtual machine on KVM
When cloning a VM on Oracle Database Appliance on KVM, an error is encountered.
The odacli clone-vm command fails if the VM to be cloned has virtual disks attached.
Hardware Models
All Oracle Database Appliance hardware models high-availability deployments
Workaround
Detach any virtual disks before cloning and attach them after the clone operation has completed.
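A hedged sketch of the sequence, assuming hypothetical names vm1 and vdisk1, and assuming -dvd and -avd are the detach and attach virtual disk options of odacli modify-vm:
odacli modify-vm -n vm1 -dvd vdisk1    # detach the virtual disk before cloning
odacli clone-vm -n vm1 -cn vm1_clone   # clone the VM
odacli modify-vm -n vm1 -avd vdisk1    # reattach the disk after the clone completes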
This issue is tracked with Oracle bug 32489225.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in creating a KVM guest VM on Windows systems
When creating a Microsoft Windows guest VM on Oracle Database Appliance on KVM, an error is encountered.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
For steps to configure Microsoft Windows guest machines on KVM, refer to My Oracle Support Note 2748946.1 at:
https://support.oracle.com/rs?type=doc&id=2748946.1
This issue is tracked with Oracle bug 32433940.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in Configuring Oracle Data Guard on Oracle ASM Database
When configuring Oracle Data Guard on Oracle Database Appliance, an error is encountered.
DCS-10001:Internal error encountered: Unable to create Redo Logs.
Hardware Models
All Oracle Database Appliance hardware models with Oracle Data Guard configuration
Workaround
Follow these steps:
- On the standby site, run the following command:
[grid] sqlplus / as sysasm
SQL> alter diskgroup DATA set attribute 'access_control.enabled'='false';
- Run the Oracle Data Guard configuration as normal.
- On the standby site, run the following command:
[grid] sqlplus / as sysasm
SQL> alter diskgroup DATA set attribute 'access_control.enabled'='true';
- On the standby site, run the following command:
ASMCMD> chown dbUser +DATA/dbUniqName/PASSWORD/orapwdbName
This issue is tracked with Oracle bug 32569611.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in restoring a database
When restoring a database on Oracle Database Appliance, an error is encountered.
DCS-10001:Internal error encountered: Failed to run command Failed to create FileGroup DBUNIQUENAME_9999 on DiskGroup RECO
Hardware Models
All Oracle Database Appliance hardware models
Workaround
When restoring a database, provide a Database Name and Database Unique Name that have not been used by any database in the system.
This issue is tracked with Oracle bug 32586509.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in restoring of TDE-enabled database
When restoring a TDE-enabled database on Oracle Database Appliance, an error is encountered.
Error 1: Job to restore TDE-enabled Oracle ASM database may fail with the following error message:
DCS-10001:Internal error encountered: Failed to copy TDE Wallet to backup location.
Error 2: Jobs to restore TDE wallet of Oracle ASM database and recover TDE-enabled Oracle ASM database may fail with the following error messages:
DCS-10001:Internal error encountered: Failed to create autologin software keystore.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Disable the Access Control.
# su - grid
Last login: *** *** 4 :01:35 ** 20**
[grid@node1 ~]$ sqlplus / as sysasm

SQL*Plus: Release 19.0.0.0.0 - Production on Thu Mar 4 **:**:** 20**
Version 19.10.0.0.0

Copyright (c) 1982, 2020, Oracle. All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.10.0.0.0

SQL> alter diskgroup DATA set attribute 'access_control.enabled' ='false';

Diskgroup altered.
- Start the iRestore job.
- Enable the Access Control immediately after the successful completion of the "Auto login TDE Wallet creation" task in the above iRestore job. The following example shows the successful completion of the "Auto login TDE Wallet creation" task.
Job details
----------------------------------------------------------------
ID: 9617c289-3698-4d3e-84e5-6b15408b4143
Description: Database service recovery with db name: dharmtd6
Status: Success
Created: March 4, 2021 4:18:02 PM IST
Message:

Task Name                                Start Time                      End Time                        Status
---------------------------------------- ------------------------------- ------------------------------- ----------
Check if cluster ware is running         March 4, 2021 4:18:12 PM IST    March 4, 2021 4:18:13 PM IST    Success
Check if cluster ware is running         March 4, 2021 4:18:13 PM IST    March 4, 2021 4:18:13 PM IST    Success
....
Auxiliary Instance Creation              March 4, 2021 4:18:36 PM IST    March 4, 2021 4:18:48 PM IST    Success
TDE Wallet directory creation            March 4, 2021 4:18:48 PM IST    March 4, 2021 4:18:50 PM IST    Success
TDE Wallet Restore                       March 4, 2021 4:18:50 PM IST    March 4, 2021 4:18:53 PM IST    Success
Auto login TDE Wallet creation           March 4, 2021 4:18:53 PM IST

Access Control can be enabled as follows:
[root@node1 bin]# su - grid
Last login: *** *** 4 :01:35 ** 20**
[grid@node2 ~]$ sqlplus / as sysasm

SQL*Plus: Release 19.0.0.0.0 - Production on *** *** 4 18:25:08 20**
Version 19.10.0.0.0

Copyright (c) 1982, 2020, Oracle. All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.10.0.0.0

SQL> alter diskgroup DATA set attribute 'access_control.enabled' ='true';

Diskgroup altered.
- Delete the ewallet.p12 and cwallet.sso files (if present) in the +DATA/DBUNIQUENAME/tde path by connecting as the grid user:
# su - grid
Last login: *** *** 4 :01:35 ** 20**
[grid@scaoda***c1n2 ~]$ asmcmd
ASMCMD> cd +DATA/MYDB/tde
ASMCMD> rm ewallet.p12 cwallet.sso
- Disable the Access Control.
# su - grid
Last login: *** *** 4 :01:35 ** 20**
[grid@node1 ~]$ sqlplus / as sysasm

SQL*Plus: Release 19.0.0.0.0 - Production on Thu Mar 4 **:**:** 20**
Version 19.10.0.0.0

Copyright (c) 1982, 2020, Oracle. All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.10.0.0.0

SQL> alter diskgroup DATA set attribute 'access_control.enabled' ='false';

Diskgroup altered.
- Start the iRestore TDE wallet job or Recover TDE database job.
- Enable the Access Control.
[root@node1 bin]# su - grid
Last login: *** *** 4 :01:35 ** 20**
[grid@node2 ~]$ sqlplus / as sysasm

SQL*Plus: Release 19.0.0.0.0 - Production on *** *** 4 18:25:08 20**
Version 19.10.0.0.0

Copyright (c) 1982, 2020, Oracle. All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.10.0.0.0

SQL> alter diskgroup DATA set attribute 'access_control.enabled' ='true';

Diskgroup altered.
This issue is tracked with Oracle bug 32573493.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in backup of TDE-enabled database
When performing a backup of a TDE-enabled database on Oracle Database Appliance, an error is encountered.
DCS-10001:Internal error encountered: Failed to copy TDE Wallet to backup location.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Change the ownership of the /u01/app/dbuser/diag/kfod/node_1_name and /u01/app/dbuser/diag/kfod/node_2_name directories to the database user using the chown command. For single-node deployments, run the command on one node.
chown -R oracle /u01/app/oracle/diag/kfod/test1/
chown -R oracle /u01/app/oracle/diag/kfod/test2/
This issue is tracked with Oracle bug 32577203.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in Reinstating Oracle Data Guard
When reinstating Oracle Data Guard on Oracle Database Appliance, an error is encountered.
The odacli reinstate-dataguard command fails with the following error:
Unable to reinstate Dg. Reinstate job was executed within 24hrs after failover job.
The dcs-agent.log file has the following error entry:
DGMGRL> Reinstating database "xxxx",
please wait...
Oracle Clusterware is restarting database "xxxx" ...
Connected to "xxxx"
Continuing to reinstate database "xxxx" ...
Error: ORA-16653: failed to reinstate database
Hardware Models
All Oracle Database Appliance hardware models with Oracle Data Guard configuration
Workaround
Follow these steps:
- On the primary machine, get the standby_became_primary_scn:
SQL> select standby_became_primary_scn from v$database;

STANDBY_BECAME_PRIMARY_SCN
--------------------------
                   3522449
- On the old primary database, flash back to this SCN with RMAN, using the backup encryption password:
RMAN> set decryption identified by 'rman_backup_password';

executing command: SET decryption

RMAN> FLASHBACK DATABASE TO SCN 3522449;
...
Finished flashback at 24-SEP-20
RMAN> exit
- On the new primary machine, run the odacli reinstate-dataguard command.
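A minimal sketch of the last step, assuming dg_config_id is the Oracle Data Guard configuration ID (shown, for example, by odacli list-dataguardstatus):
odacli reinstate-dataguard -i dg_config_id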
This issue is tracked with Oracle bug 31884506.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in Configuring Oracle Data Guard
When configuring Oracle Data Guard on Oracle Database Appliance, an error is encountered.
The odacli configure-dataguard command fails with the following error:
DCS-10001:Internal error encountered: Unable to pass postcheckDgStatus. Primary database has taken a non-Archivelog type backup between irestore standby database and configure-dataguard.
Verify the status of the job with the odacli list-jobs command.
Hardware Models
All Oracle Database Appliance hardware models with Oracle Data Guard configuration
Workaround
Follow these steps:
- On the primary machine, remove the Oracle Data Guard configuration:
DGMGRL> remove configuration;
- On the standby machine, delete the standby database.
- On the primary machine, disable the database backup schedule:
odacli update-schedule -i ID -d
- Start the Oracle Data Guard configuration steps.
- Enable the primary database backup schedule after the Oracle Data Guard configuration is successful.
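A minimal sketch for the last step, assuming ID is the backup schedule ID from odacli list-schedules and that -e is the enable option of odacli update-schedule:
odacli update-schedule -i ID -e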
This issue is tracked with Oracle bug 31880191.
Parent topic: Known Issues When Managing Oracle Database Appliance
Failure in Reinstating Oracle Data Guard
When reinstating Oracle Data Guard on Oracle Database Appliance, an error is encountered.
The odacli reinstate-dataguard command fails with the following error:
DCS-10001:Internal error encountered: Unable enqueue Id and update DgConfig.
Use DGMGRL to display the error on the standby database:
DGMGRL> show database xxxx
Database - xxxx
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: (unknown)
Apply Lag: 4 days 22 hours 1 minute 23 seconds (computed 1 second ago)
Average Apply Rate: 0 Byte/s
Real Time Query: OFF
Instance(s):
xxxx1 (apply instance)
xxxx2
Database Warning(s):
ORA-16853: apply lag has exceeded specified threshold
ORA-16856: transport lag could not be determined
Database Status:
WARNING
The dcs-agent.log file has the following error entry:
DGMGRL> Reinstating database "xxxx",
please wait...
Oracle Clusterware is restarting database "xxxx" ...
Connected to "xxxx"
Continuing to reinstate database "xxxx" ...
Error: ORA-16653: failed to reinstate database
Hardware Models
All Oracle Database Appliance hardware models with Oracle Data Guard configuration
Workaround
Follow these steps:
- On the new primary machine, get the standby_became_primary_scn:
SQL> select standby_became_primary_scn from v$database;

STANDBY_BECAME_PRIMARY_SCN
--------------------------
                   4370820
- On the new primary database, check for missing sequences after standby_became_primary_scn:
SQL> select name, sequence#, first_change#, next_change# from v$archived_log where first_change#>4370820 and name is NULL;
...
NAME
-------------------------------------------------------------------------------
 SEQUENCE# FIRST_CHANGE# NEXT_CHANGE#
---------- ------------- ------------
        53       4601014      4601154
- On the new primary machine, restore the missing sequences with RMAN:
$ rman target /
RMAN> restore archivelog from logseq=1 until logseq=53;
- On the new standby machine, check that current_scn is increasing, and run DGMGRL> SHOW CONFIGURATION; to verify that the apply lag is being resolved.
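To confirm that redo is being applied in the last step, the following query can be run repeatedly on the new standby; the SCN should increase between runs:
SQL> select current_scn from v$database;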
This issue is tracked with Oracle bug 32041012.
Parent topic: Known Issues When Managing Oracle Database Appliance
Failure in Reinstating Oracle Data Guard
When reinstating Oracle Data Guard on Oracle Database Appliance, an error is encountered.
The odacli reinstate-dataguard command fails with the following error:
Message:
DCS-10001:Internal error encountered: Unable to reinstate Dg.
The dcs-agent.log file has the following error entry:
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
Hardware Models
All Oracle Database Appliance hardware models with Oracle Data Guard configuration
Workaround
Follow these steps:
- Make sure the database you are reinstating is started in MOUNT mode. To start the database in MOUNT mode, run this command:
srvctl start database -d db-unique-name -o mount
- After the above command runs successfully, run the odacli reinstate-dataguard command.
This issue is tracked with Oracle bug 32047967.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in running other operations when modifying database with CPU pool
When modifying a database with CPU pool, an error is encountered with other operations.
# odacli create-backup -in dbName -bt Regular-L0
DCS-10089:Database dbName is in an invalid state `{Node Name:closed}'
Hardware Models
All Oracle Database Appliance hardware models with bare metal configuration
Workaround
Wait until the odacli modify-database job completes before you perform any other operation on the same database.
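A minimal sketch for confirming that the modify-database job has finished before issuing the next command; the job ID is hypothetical:
odacli describe-job -i job_id   # repeat until the job Status shows Success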
This issue is tracked with Oracle bug 32045674.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in restoring a TDE-enabled database
When restoring a TDE-enabled database on Oracle Database Appliance, an error is encountered.
Failed to copy file from: source_location to: destination_location
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Do not change the database storage type when restoring a TDE-enabled database.
This issue is tracked with Oracle bug 31848183.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in updating Role after Oracle Data Guard operations
When performing operations with Oracle Data Guard on Oracle Database Appliance, an error is encountered in updating the Role.
The Role shown in the odacli describe-database command output is not updated after Oracle Data Guard switchover, failover, and reinstate operations on Oracle Database Appliance.
Hardware Models
All Oracle Database Appliance hardware models with Oracle Data Guard configuration
Workaround
Run odacli update-registry -n db --force/-f to update the database metadata. After the job completes, run the odacli describe-database command and verify that dbRole is updated.
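A minimal sketch of the sequence, assuming a hypothetical database named mydb:
odacli update-registry -n db -f
odacli describe-database -in mydb   # verify that dbRole reflects the new role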
This issue is tracked with Oracle bug 31378202.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error when recovering a single-instance database
When recovering a single-instance database, an error is encountered.
DCS-10001:Internal error encountered: DCS-10001:Internal error encountered: Missing arguments : required sqlplus connection information is not provided
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Perform recovery of the single-instance database on the node where the database is running.
This issue is tracked with Oracle bug 31399400.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error when rebooting the appliance
When rebooting Oracle Database Appliance, an interactive kernel selection screen is displayed and the reboot pauses for user input.
Hardware Models
Oracle Database Appliance X7-2-HA hardware models
Workaround
From the system console, select or highlight the kernel using the Up or Down arrow keys and then press Enter to continue with the reboot of the appliance.
This issue is tracked with Oracle bug 31196452.
Parent topic: Known Issues When Managing Oracle Database Appliance
Job history not erased after running cleanup.pl
After running cleanup.pl, the job history is not erased.
After running cleanup.pl, when you run the /opt/oracle/dcs/bin/odacli list-jobs command, the list is not empty.
Hardware Models
All Oracle Database Appliance hardware models for bare metal deployments
Workaround
- Stop the DCS Agent by running the following commands on both nodes.
For Oracle Linux 6, run:
initctl stop initdcsagent
For Oracle Linux 7, run:
systemctl stop initdcsagent
- Run the cleanup script sequentially on both nodes.
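A minimal sketch for the last step; the path below is the usual bare metal location of the cleanup script, and the script must finish on one node before it is started on the other:
/opt/oracle/oak/onecmd/cleanup.pl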
This issue is tracked with Oracle bug 30529709.
Parent topic: Known Issues When Managing Oracle Database Appliance
Inconsistency in ORAchk summary and details report page
The ORAchk report summary on the Browser User Interface may show different counts of Critical, Failed, and Warning issues than the report detail page.
Hardware Models
All Oracle Database Appliance hardware models bare metal deployments
Workaround
Ignore counts of Critical, Failed, and Warning issues in the ORAchk report summary on the Browser User Interface. Check the report detail page.
This issue is tracked with Oracle bug 30676674.
Parent topic: Known Issues When Managing Oracle Database Appliance
Missing DATA, RECO, and REDO entries when dbstorage is rediscovered
Running the odacli update-registry command with the -n all --force or -n dbstorage --force option can result in metadata corruption.
Hardware Models
All Oracle Database Appliance hardware models bare metal deployments
Workaround
Use the -n all option only on migrated systems where all the databases in the system were created with OAKCLI. On other systems that run on the DCS stack, update each component other than dbstorage individually, using odacli update-registry -n component_name (excluding dbstorage), as in the sketch below.
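A hedged sketch of per-component updates that skip dbstorage; the component names shown are examples and may differ by release:
odacli update-registry -n system -f
odacli update-registry -n db -f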
This issue is tracked with Oracle bug 30274477.
Parent topic: Known Issues When Managing Oracle Database Appliance
The odaeraser tool does not work if oakd is running in non-cluster mode
After cleaning up the deployment, the Secure Eraser tool does not work if oakd is running in non-cluster mode.
Hardware Models
All Oracle Database Appliance Hardware bare metal systems
Workaround
After cleanup of the deployment, oakd is started in non-cluster mode and cannot be stopped using the odaadmcli stop oak command. In such a case, if the Secure Eraser tool is run, then the odaeraser command fails.
Use the command odaadmcli shutdown oak to stop oakd.
This issue is tracked with Oracle bug 28547433.
Parent topic: Known Issues When Managing Oracle Database Appliance
Issues with the Web Console on Microsoft web browsers
Oracle Database Appliance Web Console has issues on Microsoft Edge and Microsoft Internet Explorer web browsers.
- Oracle Database Appliance Web Console does not display correctly on Microsoft Edge and Microsoft Internet Explorer web browsers.
- Advanced Information for the appliance does not display on Microsoft Internet Explorer web browser.
- Job activity status does not refresh in the Web Console on Microsoft Internet Explorer web browser.
- After configuring the oda-admin password, the following error is displayed:
Failed to change the default user (oda-admin) account password.
Status Code: 500 DCS-10001: DCS-10001:Internal error encountered: User not authorized
Workaround: Close the Microsoft Internet Explorer browser session and open another browser session.
Hardware Models
All Oracle Database Appliance Hardware Models bare metal deployments
Workaround
To access the Web Console, use either Google Chrome or Firefox.
This issue is tracked with Oracle bugs 30077007, 30099089, and 29887027.
Parent topic: Known Issues When Managing Oracle Database Appliance
Unrecognized Token Messages Appear in /var/log/messages
After updating Oracle Database Appliance, unrecognized token messages appear in /var/log/messages.
Updating to Oracle Database Appliance 12.1.2.11.0 updates the Oracle VM Server version to 3.4.3. After updating, the following messages appear in /var/log/messages:
Unrecognized token: "max_seq_redisc"
Unrecognized token: "rereg_on_guid_migr"
Unrecognized token: "aguid_inout_notice"
Unrecognized token: "sm_assign_guid_func"
Unrecognized token: "reports"
Unrecognized token: "per_module_logging"
Unrecognized token: "consolidate_ipv4_mask"
You can ignore the messages for these parameters; they do not impact the InfiniBand-compliant Subnet Manager and Administration (opensm) functionality. However, Oracle recommends removing the parameters to avoid flooding /var/log/messages.
Hardware Models
Oracle Database Appliance X6-2-HA and X5-2 with InfiniBand
Workaround
Perform the following to remove the parameters:
- After patching, update the /etc/opensm/opensm.conf file in bare metal deployments, and in Dom0 in virtualized platform environments, to remove the parameters (a sed sketch follows these steps).
cat /etc/opensm/opensm.conf | egrep -w 'max_seq_redisc|rereg_on_guid_migr|aguid_inout_notice|sm_assign_guid_func|reports|per_module_logging|consolidate_ipv4_mask' | grep -v ^#
max_seq_redisc 0
rereg_on_guid_migr FALSE
aguid_inout_notice FALSE
sm_assign_guid_func uniq_count
reports 2
per_module_logging FALSE
consolidate_ipv4_mask 0xFFFFFFFF
- Reboot. The messages will not appear after rebooting the node.
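As referenced in the first step, the following is a hedged sketch for commenting out the listed parameters in the stock configuration file location; back up the file first:
cp /etc/opensm/opensm.conf /etc/opensm/opensm.conf.bak
sed -i -E 's/^(max_seq_redisc|rereg_on_guid_migr|aguid_inout_notice|sm_assign_guid_func|reports|per_module_logging|consolidate_ipv4_mask)[[:space:]]/# &/' /etc/opensm/opensm.conf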
This issue is tracked with Oracle bug 25985258.
Parent topic: Known Issues When Managing Oracle Database Appliance