4 Known Issues with Oracle Database Appliance in This Release
The following are known issues deploying, updating, and managing Oracle Database Appliance in this release.
- Known Issues When Patching Oracle Database Appliance
Understand the known issues when patching Oracle Database Appliance to this release.
- Known Issues When Deploying Oracle Database Appliance
Understand the known issues when provisioning or deploying Oracle Database Appliance.
- Known Issues When Managing Oracle Database Appliance
Understand the known issues when managing or administering Oracle Database Appliance.
Known Issues When Patching Oracle Database Appliance
Understand the known issues when patching Oracle Database Appliance to this release.
- Error in updating the operating system when patching the server
When patching the server, the operating system may not be updated.
- Error in precheck during Data Preserving Reprovisioning
When upgrading your deployment, an error may be encountered when running the prechecks.
- Incorrect job status during Data Preserving Reprovisioning
When upgrading your deployment, an error may be encountered.
- Error updating the registry
When running the odacli update-registry command, an error may be encountered.
- Error in upgrading a database
When upgrading a database, an error may be encountered.
- Error in database patching
When patching a database on Oracle Database Appliance, an error may be encountered.
- Component version not updated after patching
After patching the server to Oracle Database Appliance release 19.16, the odacli describe-component command does not display the correct Intel Model 0x1528 Ethernet Controller version, if the current version is 8000047B or 8000047C.
- Detaching of databases with additionally configured services not supported by odacli detach-node
When running odacli detach-node in the Data Preserving Reprovisioning process, if there are additionally configured services, then databases cannot be detached.
- Error in restore node process in Data Preserving Reprovisioning
In the Data Preserving Reprovisioning process, during node restore, an error may be encountered.
- Error in restore node process in Data Preserving Reprovisioning
In the Data Preserving Reprovisioning process, during node restore, an error may be encountered.
- Error messages in log entries in Data Preserving Reprovisioning
In the Data Preserving Reprovisioning process, during node restore, the log entries may display error messages though the overall status of the job is displayed as SUCCESS.
- Error in running prechecks for patching
When patching the Oracle Database Appliance server, an error may be encountered.
- Error in server patching
When patching Oracle Database Appliance which already has STIG V1R2 deployed, an error may be encountered.
- AHF error in prepatch report for the update-dbhome command
When you patch the server to Oracle Database Appliance release 19.21, the odacli update-dbhome command may fail.
- Errors when running ORAchk or the odacli create-prepatchreport command
When you run ORAchk or the odacli create-prepatchreport command, an error is encountered.
- Error in patching prechecks report
The patching prechecks report may display an error.
- Error message displayed even when patching Oracle Database Appliance is successful
Although patching of Oracle Database Appliance was successful, an error message may be displayed.
- Server status not set to Normal when patching
When patching Oracle Database Appliance, an error is encountered.
- Error when patching to 12.1.0.2.190716 Bundle Patch
When patching Oracle Database release 12.1.0.2 to Oracle Database 12.1.0.2.190716 Bundle Patch, an error is encountered.
- Patching of M.2 drives not supported
Patching of M.2 drives (local disks SSDSCKJB48 and SSDSCKJB480G7) is not supported.
Error in updating the operating system when patching the server
When patching the server, the operating system may not be updated.
The following error message may be displayed:
DCS-10001:Internal error encountered: Failed to patch OS.
To check whether this issue applies, run the following command:
rpm -q kernel-uek
If the output of this command displays multiple RPM names, then perform the workaround.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Remove the older duplicate kernel-uek RPMs, for example:
# yum remove kernel-uek-4.14.35-1902.11.3.1.el7uek.x86_64
# yum remove kernel-uek-4.14.35-1902.301.1.el7uek.x86_64
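The duplicate-kernel check above can be sketched as a small shell snippet. This is a minimal illustration, not an official odacli tool: the sample `rpm -q kernel-uek` output is hard-coded, and on a real node you would capture the actual command output instead.

```shell
# Illustrative check: decide whether the duplicate-kernel workaround applies.
# Sample 'rpm -q kernel-uek' output is hard-coded for this sketch; on a real
# system use: rpm_output="$(rpm -q kernel-uek)"
rpm_output='kernel-uek-4.14.35-1902.11.3.1.el7uek.x86_64
kernel-uek-4.14.35-1902.301.1.el7uek.x86_64
kernel-uek-4.14.35-2047.518.4.2.el7uek.x86_64'

count=$(printf '%s\n' "$rpm_output" | wc -l)
# Keep the newest kernel; any older duplicates are candidates for 'yum remove'.
candidates=$(printf '%s\n' "$rpm_output" | sort -V | head -n -1)
if [ "$count" -gt 1 ]; then
  printf 'remove: %s\n' $candidates
fi
```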
This issue is tracked with Oracle bug 34154435.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in precheck during Data Preserving Reprovisioning
When upgrading your deployment, an error may be encountered when running the prechecks.
Problem Description
Oracle RAC One databases that were registered using the odacli register-database command have the dbTargetNodeNumber as null, and the Data Preserving Reprovisioning precheck Validate Database Status may fail with a null error.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Run the odacli update-registry -n DB -u db_unique_name command to discover the dbTargetNodeNumber field.
Bug Number
This issue is tracked with Oracle bug 36081324.
Parent topic: Known Issues When Patching Oracle Database Appliance
Incorrect job status during Data Preserving Reprovisioning
When upgrading your deployment, an error may be encountered.
Problem Description
When a job is marked as Success, it means that all of its tasks have completed successfully and none of them are still running. However, there may be cases where the odacli describe-job command result incorrectly displays a task in a running state, even though the job itself has successfully completed.
Command Details
# odacli describe-job
Hardware Models
All Oracle Database Appliance hardware models
Workaround
None. Ignore the error.
Bug Number
This issue is tracked with Oracle bug 35970784.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error updating the registry
When running the odacli update-registry command, an error may be encountered.
Failure Message
The following error message may be displayed:
DCS-10140:Oracle home '[home_directories]' cannot be discovered because the metadata does not exist.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Comment out all database homes with the Removed="T" attribute, and comment out all non-database home and non-Grid Infrastructure entries, such as the Oracle Enterprise Manager agent home, before you run the odacli update-registry command. After the registry update completes successfully, restore the commented-out entries to their original state.
Bug Number
This issue is tracked with Oracle bug 36008985.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in upgrading a database
When upgrading a database, an error may be encountered.
Problem Description
When you create Oracle ASM databases, the RECO directory may not have been created on systems provisioned with the OAK stack. This directory is created when the first RECO record is written. After successfully upgrading these systems using Data Preserving Reprovisioning to Oracle Database Appliance release 19.15 or later, if you attempt to upgrade the database, an error message may be displayed.
Failure Message
When the odacli upgrade-database command is run, the following error message is displayed:
# odacli upgrade-database -i 16288932-61c6-4a9b-beb0-4eb19d95b2bd -to b969dd9b-f9cb-4e49-8e0d-575a0940d288
DCS-10001:Internal error encountered: dbStorage metadata not in place:
DCS-12013:Metadata validation error encountered: dbStorage metadata missing
Location info for database database_unique_name..
Command Details
# odacli upgrade-database
Hardware Models
All Oracle Database Appliance X6-2HA and X5-2 hardware models
Workaround
- Verify that the odacli list-dbstorages command displays null for the redo location for the database that reported the error. For example, the following output displays a null or empty value for the database with unique name F.
# odacli list-dbstorages
ID                                       Type   DBUnique Name  Status      Destination  Location   Total      Used      Available
---------------------------------------- ------ -------------- ----------- ------------ ---------- ---------- --------- ----------
...
198678d9-c7c7-4e74-9bd6-004485b07c14     ASM    F              CONFIGURED  DATA         +DATA/F    4.89 TB    1.67 GB   4.89 TB
                                                                           REDO         +REDO/F    183.09 GB  3.05 GB   180.04 GB
                                                                           RECO                    8.51 TB
...
In the above output, the RECO record has a null location value.
- Manually create the RECO directory for this database. If the database unique name is dbuniq, then run the asmcmd command as the grid user:
asmcmd
- Run the mkdir command:
asmcmd> mkdir +RECO/dbuniq
- Verify that the odacli list-dbstorages command output does not display a null or empty value for the database.
- Rerun the odacli upgrade-database command.
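The null-RECO check in the first step can be sketched as follows. This is a simplified illustration: the hard-coded storage snippet and its column positions are placeholders, not the exact odacli output format.

```shell
# Minimal sketch: detect a RECO row whose location is not a +RECO path in a
# simplified, hard-coded 'odacli list-dbstorages' snippet for database F.
storages='DATA  +DATA/F  4.89TB
REDO  +REDO/F  183.09GB
RECO           8.51TB'

# In this simplified layout, a missing location means field 2 is not +RECO/...
reco_field=$(printf '%s\n' "$storages" | awk '$1=="RECO"{print $2}')
case "$reco_field" in
  +RECO*) status="ok" ;;
  *) status="missing"
     echo "RECO location missing; as the grid user run: asmcmd mkdir +RECO/<db_unique_name>" ;;
esac
```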
Bug Number
This issue is tracked with Oracle bug 34923078.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in database patching
When patching a database on Oracle Database Appliance, an error may be encountered.
Problem Description
When applying the datapatch during patching of database on Oracle Database Appliance, an error message may be displayed.
Failure Message
When the odacli update-database command is run, the following error message is displayed:
Failed to execute sqlpatch for database …
Command Details
# odacli update-database
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Run the following SQL*Plus command:
alter system set nls_sort='BINARY' SCOPE=SPFILE;
- Restart the database using the srvctl command.
- Retry applying the datapatch with dbhome/OPatch/datapatch -verbose -db dbUniqueName.
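As an illustration of the final step above, the datapatch invocation can be assembled like this; the database home path and database name below are placeholder values, not taken from the source.

```shell
# Placeholder values for illustration only.
dbhome=/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1
db_unique_name=mydb

# Build the retry command: dbhome/OPatch/datapatch -verbose -db <db_unique_name>
datapatch_cmd="$dbhome/OPatch/datapatch -verbose -db $db_unique_name"
echo "$datapatch_cmd"
```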
Bug Number
This issue is tracked with Oracle bug 35060742.
Parent topic: Known Issues When Patching Oracle Database Appliance
Component version not updated after patching
After patching the server to Oracle Database Appliance release 19.16, the odacli describe-component command does not display the correct Intel Model 0x1528 Ethernet Controller version, if the current version is 8000047B or 8000047C.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Manually update the Ethernet controllers to version 800005DD or 800005DE using the fwupdate command.
This issue is tracked with Oracle bug 34402352.
Parent topic: Known Issues When Patching Oracle Database Appliance
Detaching of databases with additionally configured services not supported by odacli detach-node
When running odacli detach-node in the Data Preserving Reprovisioning process, if there are additionally configured services, then databases cannot be detached.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Additional services must be deleted to complete the detach operation, by running the srvctl remove service command. If these services are required, then before removing them, capture the service metadata manually, and then recreate the services on the system running Oracle Database Appliance release 19.17 or later using the srvctl command from the appropriate database home.
This issue is tracked with Oracle bug 33593287.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in restore node process in Data Preserving Reprovisioning
In the Data Preserving Reprovisioning process, during node restore, an error may be encountered.
If incorrect VIP names or VIP IP addresses are configured, then the detach completes successfully but the odacli restore-node -g command displays a validation error. This is because earlier releases did not validate VIP names or VIP IP addresses before provisioning.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Follow these steps: manually edit the file /opt/oracle/oak/restore/metadata/provisionInstance.json with the correct VIP names or VIP IP addresses, then retry the odacli restore-node -g command. To fix the VIP names or VIP IP addresses, nslookup can be used to query hostnames and IP addresses.
This issue is tracked with Oracle bug 34140344.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in restore node process in Data Preserving Reprovisioning
In the Data Preserving Reprovisioning process, during node restore, an error may be encountered.
The following error message may be displayed:
DCS-10045: groupNames are not unique.
This error occurs if the source Oracle Database Appliance runs the OAK stack. On the DCS stack, the same operating system group cannot be assigned two or more roles.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Follow these steps:
Manually edit the file /opt/oracle/oak/restore/metadata/provisionInstance.json with unique group names for each role, then retry the odacli restore-node -g command.
This issue is tracked with Oracle bug 34042493.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error messages in log entries in Data Preserving Reprovisioning
In the Data Preserving Reprovisioning process, during node restore, the log entries may display error messages though the overall status of the job is displayed as SUCCESS.
The odacli restore-node -d command performs a set of ignorable tasks. Failure of these tasks does not affect the status of the overall job. The output of the odacli describe-job command may report such failures. These tasks are:
- Restore of user created networks
- Restore of object stores
- Restore of NFS backup locations
- Restore of backupconfigs
- Relinking of backupconfigs to databases
- Restore of backup reports
Even if these tasks fail, the overall status of the job is marked as SUCCESS.
Hardware Models
All Oracle Database Appliance hardware models being upgraded using the Data Preserving Reprovisioning process
Workaround
Investigate the failure using the dcs-agent.log file, fix the errors, and then retry the odacli restore-node -d command.
This issue is tracked with Oracle bug 34512193.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in running prechecks for patching
When patching the Oracle Database Appliance server, an error may be encountered.
Problem Description
When patching Oracle Database Appliance, an error message may be displayed.
If the number of open PDBs in a container database is not between 0.1 times the value of the target_pdbs parameter + 1 and 10 times the value of the target_pdbs parameter - 1, and if the prepatch report is generated before updating the CDB or updating the DB home containing the CDB, then an error is encountered.
Failure Message
When the odacli update-server command is run, the following error message is displayed in the prechecks report:
AHF-6563: Database parameter target_pdbs is not set within best practice thresholds
Command Details
# odacli create-prepatchreport
Hardware Models
Oracle Database Appliance hardware models X10-HA, X10-S, X9-2-HA, X9-2S, X8-2-HA, X8-2M, X8-2S, X7-2-HA, X7-2M, and X7-2S
Workaround
- Connect to the SQL*Plus prompt:
  - Switch to the DB user. For example, if the DB user is oracle, then run the following command:
    su - oracle
  - Run .oraenv and set ORACLE_SID:
    . oraenv
    ORACLE_SID = [lowcdb2] ? lowcdb1
    The Oracle base remains unchanged with value /u01/app/odaorabase/odaadmin
  - Connect to SQL*Plus by executing the sqlplus / as sysdba command:
    sqlplus / as sysdba
    SQL*Plus: Release 19.0.0.0.0 - Production on Mon XXX XX XX:XX:XX XXXX
    Version 19.19.0.0.0
    Copyright (c) 1982, 2022, Oracle. All rights reserved.
    Connected to:
    Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
    Version 19.19.0.0.0
    SQL>
- Switch to the DB user. For example, if the DB user is
- Identify the number of open PDBs in the CDB:
SQL> select count(*) from v$pdbs where name not like 'PDB$SEED' and open_mode like 'READ WRITE';
  COUNT(*)
----------
         1
- Set the target_pdbs parameter to a value which is 10 times the number of open PDBs - 1. For example, if the number of open PDBs is 1, then the value of target_pdbs must be set to 9 (1x10 - 1). Note that for Oracle RAC databases, the target_pdbs parameter must be set on both nodes.
SQL> alter system set target_pdbs=<10 x number_of_open_pdbs-1> scope=spfile sid='*';
- Exit from the SQL prompt:
SQL> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.19.0.0.0
- Stop the database by running the srvctl stop database -db db_unique_name command.
- Start the database by running the srvctl start database -db db_unique_name command.
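The arithmetic in the target_pdbs step can be sketched as a quick shell calculation; the open-PDB count here is the sample value 1 from the v$pdbs query above.

```shell
# Recommended target_pdbs = 10 x (number of open PDBs) - 1, per the steps above.
open_pdbs=1   # sample value returned by the v$pdbs query
target_pdbs=$(( 10 * open_pdbs - 1 ))
echo "alter system set target_pdbs=${target_pdbs} scope=spfile sid='*';"
```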
Bug Number
This issue is tracked with Oracle bug 35587396.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in server patching
When patching Oracle Database Appliance which already has STIG V1R2 deployed, an error may be encountered.
When you run the odacli update-server -f version command, an error may be displayed.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
The STIG V1R2 rule OL7-00-040420 tries to change the permission of the file /etc/ssh/ssh_host_rsa_key from '640' to '600', which causes the error. During patching, run the chmod 600 /etc/ssh/ssh_host_rsa_key command on both nodes.
This issue is tracked with Oracle bug 33168598.
Parent topic: Known Issues When Patching Oracle Database Appliance
AHF error in prepatch report for the update-dbhome command
When you patch the server to Oracle Database Appliance release 19.21, the odacli update-dbhome command may fail.
The prepatch report displays the check "Verify the Alternate Archive Destination is Configured to Prevent Database Hangs" as Failed, with the message AHF-4940: One or more log archive destination and alternate log archive destination settings are not as recommended.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Run the odacli update-dbhome command with the -f option:
/opt/oracle/dcs/bin/odacli update-dbhome --dbhomeid 7c67c5b4-f585-4ba9-865f-c719c63c0a6e -v 19.21.0.0.0 -f
This issue is tracked with Oracle bug 33144170.
Parent topic: Known Issues When Patching Oracle Database Appliance
Errors when running ORAchk or the odacli create-prepatchreport command
When you run ORAchk or the odacli create-prepatchreport
command, an error is encountered.
The following error messages may be displayed:
One or more log archive destination and alternate log archive destination settings are not as recommended
Software home check failed
Hardware Models
Oracle Database Appliance bare metal deployments
Workaround
Run the odacli update-dbhome, odacli create-prepatchreport, or odacli update-server command with the -sko option. For example:
odacli update-dbhome -j -v 19.21.0.0.0 -i dbhome_id -sko
This issue is tracked with Oracle bugs 30931017, 31631618, and 31921112.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in patching prechecks report
The patching prechecks report may display an error.
Failure in the pre-patch report caused by “AHF-5190: operating system boot device order is not configured as recommended”
Hardware Models
Oracle Database Appliance X7 hardware models
Workaround
Run the odacli update-server or odacli update-dbhome command with the -f option.
This issue is tracked with Oracle bug 33631256.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error message displayed even when patching Oracle Database Appliance is successful
Although patching of Oracle Database Appliance was successful, an error message may be displayed.
The following error message may be displayed when running the odacli update-dcscomponents command:
# time odacli update-dcscomponents -v 19.21.0.0.0
DCS-10008:Failed to update DCScomponents: 19.21.0.0.0
Internal error while patching the DCS components :
DCS-10231:Cannot proceed. Pre-checks for update-dcscomponents failed. Refer
to /opt/oracle/dcs/log/-dcscomponentsPreCheckReport.log on node 1 for
details.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
This is a timing issue with setting up the SSH equivalence. Run the odacli update-dcscomponents command again, and the operation completes successfully.
This issue is tracked with Oracle bug 32553519.
Parent topic: Known Issues When Patching Oracle Database Appliance
Server status not set to Normal when patching
When patching Oracle Database Appliance, an error is encountered.
When patching the appliance, the odacli update-server command fails with the following error:
DCS-10001:Internal error encountered: Server upgrade state is not NORMAL node_name
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Run the command:
Grid_home/bin/cluvfy stage -post crsinst -collect cluster -gi_upgrade -n all
- Ignore the following two warnings:
Verifying OCR Integrity ...WARNING
PRVG-6017 : OCR backup is located in the same disk group "+DATA" as OCR.
Verifying Single Client Access Name (SCAN) ...WARNING
PRVG-11368 : A SCAN is recommended to resolve to "3" or more IP
- Run the command again till the output displays only the two warnings above. The status of Oracle Clusterware should be Normal again.
- You can verify the status with the command:
Grid_home/bin/crsctl query crs activeversion -f
This issue is tracked with Oracle bug 30099090.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error when patching to 12.1.0.2.190716 Bundle Patch
When patching Oracle Database release 12.1.0.2 to Oracle Database 12.1.0.2.190716 Bundle Patch, an error is encountered.
The ODACLI job displays the following error:
DCS-10001:Internal error encountered: Failed to run SQL script: datapatch script.
The datapatch log contains the entry "Prereq check failed, exiting without installing any patches.".
Hardware Models
All Oracle Database Appliance bare metal deployments
Workaround
Install the same patch again.
This issue is tracked with Oracle bugs 30026438 and 30155710.
Parent topic: Known Issues When Patching Oracle Database Appliance
Patching of M.2 drives not supported
Patching of M.2 drives (local disks SSDSCKJB48 and SSDSCKJB480G7) is not supported.
These drives are displayed when you run the odacli describe-component command. Patching is not supported for either of the two known versions, 0112 and 0121, of the M.2 disk.
Hardware Models
Oracle Database Appliance bare metal deployments
Workaround
None
This issue is tracked with Oracle bug 30249232.
Parent topic: Known Issues When Patching Oracle Database Appliance
Known Issues When Deploying Oracle Database Appliance
Understand the known issues when provisioning or deploying Oracle Database Appliance.
- Error in starting the DB System
When starting a DB system on an Oracle Database Appliance, an error may be encountered.
- Error in upgrading the DB System
When upgrading a DB system on an Oracle Database Appliance, an error may be encountered.
- Error in creating a DB system
When creating a DB system, an error may be encountered.
- Error when upgrading DB systems with Data Preserving Reprovisioning
When upgrading your DB systems during Data Preserving Reprovisioning, an error may be encountered.
- Error in starting the DB System
When starting a DB system on an Oracle Database Appliance, an error may be encountered.
- Error in creating database
When creating a database on Oracle Database Appliance, an error may be encountered.
- Error in creating two DB systems
When creating two DB systems concurrently in two different Oracle ASM disk groups, an error is encountered.
- Error in creating DB system
When creating a DB system on Oracle Database Appliance, an error may be encountered.
- Error in adding JBOD
When you add a second JBOD to your Oracle Database Appliance deployment on which a DB system is running, an error is encountered.
- Error in provisioning appliance after running cleanup.pl
Errors encountered in provisioning appliance after running cleanup.pl.
- Error encountered after running cleanup.pl
Errors encountered in running odacli commands after running cleanup.pl.
- Errors in clone database operation
Clone database operation fails due to errors.
Error in starting the DB System
When starting a DB system on an Oracle Database Appliance, an error may be encountered.
Problem Description
If the DB system VM is undefined using the virsh undefine dbvm_name command, then the odacli start-dbsystem command may fail to run.
Failure Message
DCS-10001:Internal error encountered: error: failed to get domain 'dbvm_name'
Hardware Models
All Oracle Database Appliance hardware models running Oracle Database Appliance release 19.21
Workaround
Run the virsh define /u05/app/sharedrepo/dbsystem/.ACFS/snaps/vm_dbvm_name/dbvm_name.xml command to define the VM. Then start the DB system.
Bug Number
This issue is tracked with Oracle bug 36051738.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in upgrading the DB System
When upgrading a DB system on an Oracle Database Appliance, an error may be encountered.
Problem Description
The odacli update-dbsystem command may fail to run.
Failure Message
DCS-10502:Resource 'dbvm_name' encountered errors while trying to start: CRS-29208: Unable to communicate with libvirt daemon: .
For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/<hostname>/crs/trace/crsd_orarootagent_root.trc".
The following "privnetX: No such device" error in the CRS-29208 message may be displayed in the dcs-agent.log file:
CRS-5017: The resource action "<dbvm_name>.kvm start" encountered the following error:
CRS-29208: Unable to communicate with libvirt daemon:
Cannot get interface MTU on 'privnet3': No such device
Command Details
# odacli upgrade-dbsystem
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Retry upgrading the DB system. In some cases, you may need to retry the operation several times.
Bug Number
This issue is tracked with Oracle bug 36153991.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in creating a DB system
When creating a DB system, an error may be encountered.
Problem Description
- The odacli create-dbsystem job may be stuck in the running status for a long time.
- Other DB system or application VM lifecycle operations, such as create, start, or stop VM jobs, may be stuck in the running status for a long time.
- Any virsh command, such as the virsh list command, may not respond.
- The ps -ef | grep libvirtd command displays two libvirtd processes. For example:
# ps -ef |grep libvirtd
root  5369     1  0 05:27 ?  00:00:03 /usr/sbin/libvirtd
root 27496  5369  0 05:29 ?  00:00:00 /usr/sbin/libvirtd   <<<
The second libvirtd process (pid 27496) is stuck and causes the job to hang.
Command Details
# odacli create-dbsystem
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Delete the second libvirtd process, that is, the one spawned by the first libvirtd (pid 27496 in the above example).
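Identifying the stuck child process can be sketched by parsing ps output. The sample below hard-codes the `ps -ef | grep libvirtd` lines from the example above; on a real system you would capture the live output.

```shell
# Find a libvirtd whose parent is itself a libvirtd: that child is the stuck
# process to delete. Sample 'ps -ef | grep libvirtd' output is hard-coded.
ps_output='root  5369     1  0 05:27 ?  00:00:03 /usr/sbin/libvirtd
root 27496  5369  0 05:29 ?  00:00:00 /usr/sbin/libvirtd'

parent_pid=$(printf '%s\n' "$ps_output" | awk '$3==1 {print $2}')
stuck_pid=$(printf '%s\n' "$ps_output" | awk -v p="$parent_pid" '$3==p {print $2}')
echo "stuck child libvirtd pid: $stuck_pid"   # this is the pid to kill
```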
Bug Number
This issue is tracked with Oracle bug 34715675.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error when upgrading DB systems with Data Preserving Reprovisioning
When upgrading your DB systems during Data Preserving Reprovisioning, an error may be encountered.
Problem Description
If you created DB systems on Oracle Database Appliance release 19.16 or earlier, and patched your DB systems to Oracle Database Appliance release 19.19 or 19.20 without patching to 19.17 or 19.18, and upgraded your bare metal system to Oracle Database Appliance release 19.21, you may encounter an error when updating the DCS admin on the DB system during the DB system upgrade using Data Preserving Reprovisioning.
Failure Message
When upgrading DB systems using Data Preserving Reprovisioning, the following error message is displayed:
DCS-10001:Internal error encountered: Failed to update dcs-admin-19.21.0.0.0_LINUX.X64_DATE.x86_64.rpm on node NODENAME
Found RPM release version: 19.21.0.0.0
Validating dcs-admin version
/bin/sh: /opt/oracle/oak/pkgrepos/dcsadmin/19.21.0.0.0/dcsadminversioncheck.sh: Permission denied
Current verison 19.20.0.0.0 cannot be patched to 19.21.0.0.0
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Edit the /etc/exports file on the bare metal system as follows:
- Check the IP address in the /etc/exports file with the incorrect export options. The IP address with the issue does not contain the no_root_squash export option, for example, ASM_IP1:/opt/oracle/oak/pkgrepos.
- Unexport ASM_IP1:
  - Locate the string to unexport:
    grep "/opt/oracle/oak/pkgrepos" /var/lib/nfs/etab | awk -F "(" '{print $1}' | awk '{print $2":"$1}' | grep ASM_IP1
    The line is in the format 192.168.17.X:/opt/oracle/oak/pkgrepos.
  - Run an unexport with the IP address:
    exportfs -u ASM_IP1:/opt/oracle/oak/pkgrepos
- Modify the /etc/exports file and add the no_root_squash option. Edit the /etc/exports file and find the row which has ASM_IP1. Modify the export options for that line from (ro,sync,no_subtree_check,crossmnt) to (ro,sync,no_subtree_check,crossmnt,no_root_squash).
- Export ASM_IP1 again:
  exportfs ASM_IP1:/opt/oracle/oak/pkgrepos
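The no_root_squash edit can be sketched with sed on a scratch copy of /etc/exports. This is an illustration only: the IP address below is a placeholder standing in for ASM_IP1, and the real file is not touched.

```shell
# Work on a temporary copy so /etc/exports itself is untouched in this sketch.
exports_file=$(mktemp)
cat > "$exports_file" <<'EOF'
/opt/oracle/oak/pkgrepos 192.168.17.2(ro,sync,no_subtree_check,crossmnt)
EOF

# Append no_root_squash to the export options on the matching line.
updated=$(sed 's/(ro,sync,no_subtree_check,crossmnt)/(ro,sync,no_subtree_check,crossmnt,no_root_squash)/' "$exports_file")
printf '%s\n' "$updated" > "$exports_file"
echo "$updated"
```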
Bug Number
This issue is tracked with Oracle bug 36124601.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in starting the DB System
When starting a DB system on an Oracle Database Appliance, an error may be encountered.
Problem Description
For a DB system with custom memory size, if you modified the CPU pool size or ran the odacli remap-cpupool command, then the DB system may fail to start.
Failure Message
The virsh console displays a kernel panic with an out-of-memory error. The following error message may be displayed:
[Wait DB System VM DCS Agent bootstrap : JobId=300b6dea-aaab-411b-897f-46c93a336c0f] []
c.o.d.a.k.c.KvmCommandExecutor: Got result from execution of '/usr/bin/nc -zv IP_address 7071 -w 1':
KvmCommandExecutor.KvmCommandResult(executedCmd=/usr/bin/nc -zv IP_address 7071 -w 1, returnCode=1, output=, error=Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connection timed out
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Retrieve the VM name associated with the DB system using the odacli describe-dbsystem command.
- Retrieve the memory size of the DB system with the odacli describe-dbsystem command and convert it to KiB. For example, if the memory size is 64G, when converted to KiB, the size is 67108864 KiB.
- Stop the DB system with the odacli stop-dbsystem command. For high-availability systems, the process may take up to 20 minutes.
- Back up and update the XML file on the VM at the following path. For high-availability systems, perform this step for both VMs.
/u05/app/sharedrepo/dbsystem_name/.ACFS/snaps/vm_vm_name.xml
For Oracle Database Appliance hardware models with one socket, for example, Small, modify the XML as follows. Replace xxxxxx in the example with the memory size from step 2 in KiB. For example, 67108864 for a memory size of 64G.
<description>DB System VM</description>
<memory unit='KiB'>xxxxxx</memory>                 <<<
<currentMemory unit='KiB'>xxxxxx</currentMemory>   <<<
...
<cpu mode='host-passthrough' check='none'>
  <topology sockets='1' cores='4' threads='1'/>
  <feature policy='force' name='invtsc'/>
  <feature policy='require' name='arch-capabilities'/>
  <numa>
    <cell id='0' cpus='0-3' memory='xxxxxx' unit='KiB'/>   <<<
  </numa>
</cpu>
For Oracle Database Appliance hardware models that have two sockets, for example, Medium, Large, and HA, modify the XML as follows. Replace xxxxxx in the example with the memory size from step 2 in KiB, for example, 67108864 for 64G memory. Divide the memory size in KiB by 2 and use that value to replace yyyyyy below. For example, if the memory is 64G or 67108864 KiB, replace yyyyyy with 33554432.
<description>DB System VM</description>
<memory unit='KiB'>xxxxxx</memory>                 <<<
<currentMemory unit='KiB'>xxxxxx</currentMemory>   <<<
...
<numa>
  <cell id='0' cpus='0-1' memory='yyyyyy' unit='KiB'>   <<<
    <distances>
      <sibling id='0' value='10'/>
      <sibling id='1' value='21'/>
    </distances>
  </cell>
  <cell id='1' cpus='2-3' memory='yyyyyy' unit='KiB'>   <<<
    <distances>
      <sibling id='0' value='21'/>
      <sibling id='1' value='10'/>
    </distances>
  </cell>
</numa>
- Use the virsh list command to confirm that the VM is stopped, then use the virsh undefine command to undefine the VM. Run the commands on both bare metal system hosts for high-availability deployments.
virsh list
virsh undefine vm_name
- Start the DB system:
odacli start-dbsystem -n dbsystem_name
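The unit conversion used in the steps above can be checked with shell arithmetic; 64G is the sample memory size from step 2.

```shell
mem_gb=64                                # sample DB system memory size in GB
mem_kib=$(( mem_gb * 1024 * 1024 ))      # xxxxxx value: 64G expressed in KiB
cell_kib=$(( mem_kib / 2 ))              # yyyyyy value: per-NUMA-cell size on two-socket models
echo "xxxxxx=${mem_kib} yyyyyy=${cell_kib}"
```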
Bug Number
This issue is tracked with Oracle bug 35360741.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in creating database
When creating a database on Oracle Database Appliance, an error may be encountered.
Problem Description
When creating a database on Oracle Database Appliance, the
operation may fail after the createDatabaseByRHP
task.
However, the odacli list-databases
command displays the
status as CONFIGURED for the failed database in the job results.
Failure Message
When you run the odacli create-database
command,
the following error message is displayed:
DCS-10001:Internal error encountered: Failed to clear all listeners from database
Command Details
# odacli create-database
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Check the job details of the odacli create-database
command using the odacli describe-job
command. Fix the issue that caused the task failure in
the odacli create-database
command. Delete the database
with the command odacli delete-database -n db_name
and retry the odacli create-database
command.
Bug Number
This issue is tracked with Oracle bug 34709091.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in creating two DB systems
When creating two DB systems concurrently in two different Oracle ASM disk groups, an error is encountered.
CRS-2672: Attempting to start 'vm_name.kvm' on 'oda_server'
CRS-5017: The resource action "vm_name.kvm start" encountered the following
error:
CRS-29200: The libvirt virtualization library encountered the following
error:
Timed out during operation: cannot acquire state change lock (held by
monitor=remoteDispatchDomainCreate)
. For details refer to "(:CLSN00107:)" in
"/u01/app/grid/diag/crs/<oda_server>/crs/trace/crsd_orarootagent_root.trc".
CRS-2674: Start of 'vm_name.kvm' on 'oda_server' failed
CRS-2679: Attempting to clean 'vm_name.kvm' on 'oda_server'
CRS-2681: Clean of 'vm_name.kvm' on 'oda_server' succeeded
CRS-4000: Command Start failed, or completed with errors.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Do not create two DB systems concurrently. Instead, complete the creation of one DB system and then create the other.
This issue is tracked with Oracle bug 33275630.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in creating DB system
When creating a DB system on Oracle Database Appliance, an error may be encountered.
When you run the odacli create-dbsystem
command, the following error message may be
displayed:
DCS-10001:Internal error encountered: ASM network is not online in all nodes
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Manually bring the offline resources
online:
crsctl start res -all
- Run the
odacli create-dbsystem
command.
This issue is tracked with Oracle bug 33784937.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in adding JBOD
When you add a second JBOD to your Oracle Database Appliance deployment on which a DB system is running, an error is encountered.
ORA-15333: disk is not visible on client instance
Hardware Models
All Oracle Database Appliance hardware models, bare metal and DB systems
Workaround
Shut down the DB system before adding the second JBOD.
systemctl restart initdcsagent
This issue is tracked with Oracle bug 32586762.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in provisioning appliance after running cleanup.pl
Errors encountered in provisioning the appliance after running cleanup.pl.
After running cleanup.pl
, provisioning the appliance fails because
of missing Oracle Grid Infrastructure image (IMGGI191100). The following error
message is displayed:
DCS-10042:User oda-cliadmin cannot be authorized.
Hardware Models
All Oracle Database Appliance hardware models for bare metal deployments
Workaround
After running cleanup.pl, and before provisioning the appliance, update the repository as follows:
# odacli update-repository -f /path_to_gi_clone_file
This issue is tracked with Oracle bug 32707387.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error encountered after running cleanup.pl
Errors encountered in running odacli
commands after running cleanup.pl
.
After running cleanup.pl
, when you try to use odacli
commands, the following error is encountered:
DCS-10042:User oda-cliadmin cannot be authorized.
Hardware Models
All Oracle Database Appliance hardware models for bare metal deployments
Workaround
Run the following commands to set up the credentials for the user oda-cliadmin
on the agent wallet:
# rm -rf /opt/oracle/dcs/conf/.authconfig
# /opt/oracle/dcs/bin/setupAgentAuth.sh
This issue is tracked with Oracle bug 29038717.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Errors in clone database operation
Clone database operation fails due to errors.
If the source database is single-instance or Oracle RAC One Node, or running on the remote node, the clone database operation fails, because the paths are not created correctly in the control file.
Clone database operation may also fail with errors if the source database creation time stamp is within 60 minutes of the clone operation.
Hardware Models
All Oracle Database Appliance high-availability hardware models for bare metal deployments
Workaround
Create the clone database from the source database instance that is running on the same node from which the clone database creation is triggered. If the source database was created within 60 minutes of the clone operation, force a checkpoint on the source database first:
SQL> alter system checkpoint;
This issue is tracked with Oracle bugs 29002563, 29002004, 29001906, 29001855, 29001631, 28995153, 28986643, 30309971, and 30228362.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Known Issues When Managing Oracle Database Appliance
Understand the known issues when managing or administering Oracle Database Appliance.
- Error in configuring Oracle Data Guard in a multi-user access enabled deployment
When configuring Oracle Data Guard in a multi-user access enabled deployment, an error may be encountered. - Error in configuring Oracle Data Guard
When configuring Oracle Data Guard on Oracle Database Appliance, an error may be encountered. - Error in recovery of database
When recovering an Oracle Database Enterprise Edition High Availability database from node 0, with target node as 1, an error may be encountered. - Error in configuring Oracle Data Guard
When running the command odacli configure-dataguard
on Oracle Database Appliance, an error may be encountered at the upload password file to standby database
step. - Error in backup of database
When backing up a database on Oracle Database Appliance, an error is encountered. - Error in cleaning up a deployment
When cleaning up an Oracle Database Appliance deployment, an error is encountered. - Error in display of file log path
File log paths are not displayed correctly on the console, but the logs generated for a job record the correct paths. - Error in reinstate operation on Oracle Data Guard
When running the command odacli reinstate-dataguard
on Oracle Data Guard, an error is encountered. - Error in the enable apply process after upgrading databases
When running the enable apply process after upgrading databases in an Oracle Data Guard deployment, an error is encountered. - Failure in Reinstating Oracle Data Guard
When reinstating Oracle Data Guard on Oracle Database Appliance, an error is encountered. - Error in updating Role after Oracle Data Guard operations
When performing operations with Oracle Data Guard on Oracle Database Appliance, an error is encountered in updating the Role. - Inconsistency in ORAchk summary and details report page
ORAchk report summary on the Browser User Interface may show different counts of Critical, Failed, and Warning issues than the report detail page. - The odaeraser tool does not work if oakd is running in non-cluster mode
After cleaning up the deployment, the Secure Eraser tool does not work if oakd is running in non-cluster mode.
Error in configuring Oracle Data Guard in a multi-user access enabled deployment
When configuring Oracle Data Guard in a multi-user access enabled deployment, an error may be encountered.
Problem Description
When you configure Oracle Data Guard in a multi-user access enabled
deployment as the ODA-ADMINISTRATOR
user, the operation may
fail at step Configure Standby database (Standby site)
.
Failure Message
DCS-10001:Internal error encountered: Unable to populate standby database metadata.
Command Details
odacli configure-dataguard
Hardware Models
All Oracle Database Appliance hardware models in a multi-user access enabled deployment
Workaround
Create a new user with role ODA-DB
and user type
System
, for example, yoracle, as in the following
procedure. If the primary system is multi-user access enabled, make sure the
primary database is created with this user. If the standby system is
multi-user access enabled, make sure the standby database is restored with
this user.
- Obtain the ODA-DB user name on the multi-user access enabled
system:
[odaadmin@scaoda9l006 ~]$ odacli list-users
ID DCS User Name OS User Name Role(s) Account Status User Type
---------------------------------------- --------------- --------------------------------------------------
...
8564aba2-94b9-4607-8c4f-2cda3bdc6cb5 odaadmin odaadmin ODA-ADMINISTRATOR Active System
d9ae7f70-b294-42c1-881a-5f619ec2a851 yoracle yoracle ODA-DB Active System
- Switch to the ODA-DB user and configure Oracle Data Guard on the
primary and standby systems:
[yoracle@oda1 ~] su - yoracle
[yoracle@oda1 ~]$ odacli create-database -n test -u ptest -bn f1 -bp
[yoracle@oda1 ~]$ odacli create-backup -bt Regular-L0 -n test
[yoracle@oda1 ~]$ odacli irestore-database -r backup_report.json -ro STANDBY -bp -on f1 -u stest
[yoracle@oda1 ~]$ odacli configure-dataguard
Standby site address: oda2
BUI username for Standby site. If Multi-user Access is disabled on Standby site, enter 'oda-admin'; otherwise, enter the name of the user who has irestored the Standby database (default: oda-admin): yoracle
BUI password for Standby site:
Database name for Data Guard configuration: test
Primary database SYS password:
*******************************************************************************************
Data Guard default settings
Primary site network for Data Guard configuration: Public-network
Standby site network for Data Guard configuration: Public-network
Primary database listener port (TCP): 1521
Standby database listener port (TCP): 1521
Transport type: ASYNC
Protection mode: MAX_PERFORMANCE
Data Guard configuration name: ptest_stest
Active Data Guard: disabled
Do you want to edit this Data Guard configuration? (Y/N, default:N):
Standby database's SYS password will be set to Primary database's after Data Guard configuration. Ignore warning and proceed with Data Guard configuration? (Y/N, default:N): y
*******************************************************************************************
Configure Data Guard ptest_stest started
*******************************************************************************************
Step 1: Validate Data Guard configuration request (Primary site)
...
*******************************************************************************************
Step 11: Create Data Guard status (Standby site)
Description: DG Status operation for db test - NewDgconfig
Job ID: e6b13275-9450-4650-8187-b33f2dd6480f
Started May 16, 2023 00:52:33 AM IST
Create Data Guard status
Finished May 16, 2023 00:52:35 AM IST
*******************************************************************************************
Configure Data Guard ptest_stest completed
*******************************************************************************************
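The ODA-DB user lookup in step 1 can also be scripted. The sketch below is illustrative: it filters a hypothetical, shortened sample of the odacli list-users output with awk; on a real system you would pipe the command output in directly.

```shell
# Hypothetical, shortened sample of `odacli list-users` output; in practice,
# pipe the command output in directly instead of using a sample string.
sample_output='8564aba2 odaadmin odaadmin ODA-ADMINISTRATOR Active System
d9ae7f70 yoracle yoracle ODA-DB Active System'

# Column 4 holds the role; print the DCS user name (column 2) for ODA-DB users.
oda_db_user=$(printf '%s\n' "$sample_output" | awk '$4 == "ODA-DB" {print $2}')
echo "$oda_db_user"
```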
Bug Number
This issue is tracked with Oracle bug 35389339.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in configuring Oracle Data Guard
When configuring Oracle Data Guard on Oracle Database Appliance, an error may be encountered.
Problem Description
When you configure Oracle Data Guard on the second node of the
standby system on an Oracle Database Appliance high-availability deployment,
the operation may fail at step Configure Standby database (Standby site)
in the task Reset Db sizing and hidden parameters for ODA best practice.
Command Details
odacli configure-dataguard
Hardware Models
All Oracle Database Appliance high-availability hardware models
Workaround
Run odacli configure-dataguard on the first node of the standby system in the high-availability deployment.
Bug Number
This issue is tracked with Oracle bug 33401667.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in recovery of database
When recovering an Oracle Database Enterprise Edition High Availability database from node 0, with target node as 1, an error may be encountered.
Failure Message
The following error message is displayed:
DCS-10001:Internal error encountered: null
Command Details
# odacli recover-database
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Retry the operation from the target node number of the database.
Bug Number
This issue is tracked with Oracle bug 34785410.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in configuring Oracle Data Guard
When running the command odacli configure-dataguard
on Oracle Database Appliance, an error may be encountered at the upload password file to standby database
step.
When running the command odacli configure-dataguard
on Oracle Database Appliance, the following error message may be displayed:
CONFIGUREDG - DCS-10001: UNABLE TO CONFIGURE BROKER DGMGRL> SHOW CONFIGURATION;
ORA-16783: cannot resolve gap for database tgtpodpgtb
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Run the command odacli configure-dataguard
with the --skip-password-copy
option, then copy the password file manually as follows:
- On the primary system, locate the password
file:
srvctl config database -d dbUniqueName | grep -i password
If the output is the Oracle ASM directory, then copy the password file from the Oracle ASM directory to the local directory.
su - grid
asmcmd
ASMCMD> pwcopy +DATA/tiger2/PASSWORD/orapwtiger /tmp/orapwtiger
If the output is empty, then check the directory at
/dbHome/dbs/orapwdbName
. For example, the orapwd
file can be at /u01/app/oracle/product/19.0.0.0/dbhome_1/dbs/orapwtiger
- Copy the password file to the standby system. Back up the original password file.
cp /u01/app/oracle/product/19.0.0.0/dbhome_1/dbs/orapwtiger /u01/app/oracle/product/19.0.0.0/dbhome_1/dbs/orapwtiger.ori
scp root@primaryHost:/u01/app/oracle/product/19.0.0.0/dbhome_1/dbs/orapwtiger /u01/app/oracle/product/19.0.0.0/dbhome_1/dbs/orapwtiger
- Change the standby orapwd
file permission.
chown -R oracle /u01/app/oracle/product/19.0.0.0/dbhome_1/dbs/orapwtiger
chgrp oinstall /u01/app/oracle/product/19.0.0.0/dbhome_1/dbs/orapwtiger
- Check the password file location on the standby system and copy it to the Oracle ASM directory, if necessary.
srvctl config database -d tiger2 | grep -i password
Password file: +DATA/tiger2/PASSWORD/orapwtiger
In this example, copy the password file from the local directory to the Oracle ASM directory.
su - grid
asmcmd
ASMCMD> pwcopy /u01/app/oracle/product/19.0.0.0/dbhome_1/dbs/orapwtiger +DATA/tiger2/PASSWORD/orapwtiger
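The backup-before-overwrite pattern in step 2 can be sketched as a small shell helper. This is illustrative only; the function name and the temporary-file demonstration are not part of the documented procedure.

```shell
# Illustrative helper: back up a password file in place before it is
# overwritten by the copy from the primary system.
backup_pwfile() {
  src="$1"
  cp "$src" "${src}.ori"   # e.g. orapwtiger -> orapwtiger.ori
}

# Demonstration with a temporary file standing in for the real orapwd file:
tmpdir=$(mktemp -d)
echo "dummy password file" > "$tmpdir/orapwtiger"
backup_pwfile "$tmpdir/orapwtiger"
ls "$tmpdir"
```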
This issue is tracked with Oracle bug 34484209.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in backup of database
When backing up a database on Oracle Database Appliance, an error is encountered.
The command odacli create-backup
on the new primary database fails with the following
message:
DCS-10001:Internal error encountered: Unable to get the rman command status commandid:xxx
output:STATUS
-------------------------
[COMPLETED WITH WARNINGS] error:.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- On the new primary database, connect to RMAN as the oracle
user and edit the archivelog deletion policy.
rman target /
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO 'SBT_TAPE';
- On the new primary database, as the root
user, take a backup:
odacli create-backup -in db_name -bt backup_type
This issue is tracked with Oracle bug 33181168.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in cleaning up a deployment
When cleaning up an Oracle Database Appliance deployment, an error is encountered.
Hardware Models
All Oracle Database Appliance hardware models with DB systems
Workaround
- Stop the NFS service on both
nodes:
service nfs stop
- Clean up the bare metal system. See the Oracle Database Appliance Deployment and User's Guide for your hardware model for the steps.
This issue is tracked with Oracle bug 33289742.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in display of file log path
File log paths are not displayed correctly on the console, but the logs generated for a job record the correct paths.
Hardware Models
All Oracle Database Appliance hardware models with virtualized platform
Workaround
None.
This issue is tracked with Oracle bug 33580574.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in reinstate operation on Oracle Data Guard
When running the command odacli reinstate-dataguard
on Oracle Data Guard, an error is encountered.
The following error is logged in dcs-agent.log:
DCS-10001:Internal error encountered: Unable to reinstate Dg.
The log further records this error:
ORA-12514: TNS:listener does not currently know of service requested
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Ensure that the database you are reinstating is started in MOUNT mode.
srvctl start database -d db-unique-name -o mount
After the command completes successfully, run the odacli reinstate-dataguard
command again. If the database is already in MOUNT mode, this
can be a temporary error. Check the Data Guard status again a few minutes later
with odacli describe-dataguardstatus
or odacli list-dataguardstatus
, or check with DGMGRL> SHOW CONFIGURATION;
to see if the reinstatement is successful.
This issue is tracked with Oracle bug 32367676.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in the enable apply process after upgrading databases
When running the enable apply process after upgrading databases in an Oracle Data Guard deployment, an error is encountered.
Error: ORA-16664: unable to receive the result from a member
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Restart the standby database in upgrade mode:
srvctl stop database -d <db_unique_name>
Run PL/SQL command: STARTUP UPGRADE;
- Continue the enable apply process and wait for log apply process to refresh.
- After some time, check the Data Guard status with the DGMGRL
command:
SHOW CONFIGURATION;
This issue is tracked with Oracle bug 32864100.
Parent topic: Known Issues When Managing Oracle Database Appliance
Failure in Reinstating Oracle Data Guard
When reinstating Oracle Data Guard on Oracle Database Appliance, an error is encountered.
The odacli reinstate-dataguard
command fails with the following error:
Message: DCS-10001:Internal error encountered: Unable to reinstate Dg.
The dcs-agent.log file has the following error entry:
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
Hardware Models
All Oracle Database Appliance hardware models with Oracle Data Guard configuration
Workaround
Follow these steps:
- Make sure the database you are reinstating is started in
MOUNT mode. To start the database in MOUNT mode, run this
command:
srvctl start database -d db-unique-name -o mount
- After the above command runs successfully, run the
odacli reinstate-dataguard
command.
This issue is tracked with Oracle bug 32047967.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in updating Role after Oracle Data Guard operations
When performing operations with Oracle Data Guard on Oracle Database Appliance, an error is encountered in updating the Role.
The database role shown by the odacli describe-database
command is not updated after Oracle Data Guard
switchover, failover, and reinstate operations on Oracle Database
Appliance.
Hardware Models
All Oracle Database Appliance hardware models with Oracle Data Guard configuration
Workaround
Run odacli update-registry -n db --force/-f
to update the
database metadata. After the job completes, run the odacli
describe-database
command and verify that dbRole is updated.
This issue is tracked with Oracle bug 31378202.
Parent topic: Known Issues When Managing Oracle Database Appliance
Inconsistency in ORAchk summary and details report page
ORAchk report summary on the Browser User Interface may show different counts of Critical, Failed, and Warning issues than the report detail page.
Hardware Models
Oracle Database Appliance hardware models, bare metal deployments
Workaround
Ignore counts of Critical, Failed, and Warning issues in the ORAchk report summary on the Browser User Interface. Check the report detail page.
This issue is tracked with Oracle bug 30676674.
Parent topic: Known Issues When Managing Oracle Database Appliance
The odaeraser tool does not work if oakd is running in non-cluster mode
After cleaning up the deployment, the Secure Eraser tool does not work if oakd is running in non-cluster mode.
Hardware Models
All Oracle Database Appliance Hardware bare metal systems
Workaround
After cleanup of the deployment, oakd
is started in non-cluster mode, and it cannot be stopped using the odaadmcli stop oak
command. In such a case, if the Secure Eraser tool is run, then the odaeraser command fails.
Use the command odaadmcli shutdown oak
to stop oakd
.
This issue is tracked with Oracle bug 28547433.
Parent topic: Known Issues When Managing Oracle Database Appliance