4 Known Issues with Oracle Database Appliance in This Release
The following are known issues deploying, updating, and managing Oracle Database Appliance in this release.
- Known Issues When Patching Oracle Database Appliance
Understand the known issues when patching Oracle Database Appliance to this release.
- Known Issues When Deploying Oracle Database Appliance
Understand the known issues when provisioning or deploying Oracle Database Appliance.
- Known Issues When Managing Oracle Database Appliance
Understand the known issues when managing or administering Oracle Database Appliance.
Known Issues When Patching Oracle Database Appliance
Understand the known issues when patching Oracle Database Appliance to this release.
- Error in updating the operating system when patching the server
When patching the server to Oracle Database Appliance release 19.15, the operating system may not be updated.
- Error in running jobs
When running jobs, an error may be encountered.
- Error in upgrading a database
When upgrading a database, an error may be encountered.
- Error in database patching
When patching a database on Oracle Database Appliance, an error may be encountered.
- Error in server patching
When patching the Oracle Database Appliance server, an error may be encountered.
- Error in server patching during DB system patching
When patching the server during DB system patching to Oracle Database Appliance release 19.15, an error may be encountered.
- Component version not updated after patching
After patching the server to Oracle Database Appliance release 19.16, the odacli describe-component command does not display the correct Intel Model 0x1528 Ethernet Controller version, if the current version is 8000047B or 8000047C.
- Detaching of databases with additionally configured services not supported by odaupgradeutil
When running odaupgradeutil in the Data Preserving Reprovisioning process, if there are additionally configured services, then databases cannot be detached.
- Error in restore node process in Data Preserving Reprovisioning
In the Data Preserving Reprovisioning process, during node restore, an error may be encountered.
- Error in restore node process in Data Preserving Reprovisioning
In the Data Preserving Reprovisioning process, during node restore, an error may be encountered.
- Error messages in log entries in Data Preserving Reprovisioning
In the Data Preserving Reprovisioning process, during node restore, the log entries may display error messages though the overall status of the job is displayed as SUCCESS.
- Error in server patching
When patching Oracle Database Appliance which already has STIG V1R2 deployed, an error may be encountered.
- AHF error in prepatch report for the update-dbhome command
When you patch the server to Oracle Database Appliance release 19.19, the odacli update-dbhome command may fail.
- Error in patching prechecks report
The patching prechecks report may display an error.
- Error message displayed even when patching Oracle Database Appliance is successful
Although patching of Oracle Database Appliance was successful, an error message may be displayed.
- Server status not set to Normal when patching
When patching Oracle Database Appliance, an error is encountered.
- Error when patching to 12.1.0.2.190716 Bundle Patch
When patching Oracle Database release 12.1.0.2 to Oracle Database 12.1.0.2.190716 Bundle Patch, an error is encountered.
- Patching of M.2 drives not supported
Patching of M.2 drives (local disks SSDSCKJB48 and SSDSCKJB480G7) is not supported.
Error in updating the operating system when patching the server
When patching the server to Oracle Database Appliance release 19.15, the operating system may not be updated. The following error message is displayed:
DCS-10001:Internal error encountered: Failed to patch OS.
Check the installed kernel RPMs:
rpm -q kernel-uek
If the output of this command displays multiple RPM names, then perform the workaround.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Remove the older kernel-uek RPMs listed in the command output. For example:
# yum remove kernel-uek-4.14.35-1902.11.3.1.el7uek.x86_64
# yum remove kernel-uek-4.14.35-1902.301.1.el7uek.x86_64
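The duplicate-kernel check above can be sketched in shell. The RPM names below are illustrative sample data, not output from a live appliance; on a real node the list would come from rpm -q kernel-uek.

```shell
# Sketch: decide whether the duplicate-kernel workaround applies.
# Illustrative sample data; on a real node use: rpm_list=$(rpm -q kernel-uek)
rpm_list="kernel-uek-4.14.35-1902.11.3.1.el7uek.x86_64
kernel-uek-4.14.35-1902.301.1.el7uek.x86_64
kernel-uek-5.4.17-2136.307.3.1.el7uek.x86_64"

count=$(printf '%s\n' "$rpm_list" | wc -l)
if [ "$count" -gt 1 ]; then
  # Keep the newest kernel; older RPMs are the 'yum remove' candidates.
  to_remove=$(printf '%s\n' "$rpm_list" | sort -V | head -n -1)
  printf 'yum remove %s\n' $to_remove
fi
```

Version sort (sort -V) keeps the newest kernel at the end of the list, so everything before it is a removal candidate.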
This issue is tracked with Oracle bug 34154435.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in running jobs
When running jobs, an error may be encountered.
Problem Description
When running jobs, the DCS agent may not be registered correctly during bootstrap and the job may fail with error DCS-10058.
Failure Message
The following error message is displayed:
DCS-10058:DCS Agent is not running on all nodes.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Restart the DCS agent service with the following
command on both nodes in sequential order, starting from the
first
node:
# systemctl restart initdcsagent
- Retry the command that failed earlier.
Bug Number
This issue is tracked with Oracle bug 35056432.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in upgrading a database
When upgrading a database, an error may be encountered.
Problem Description
When you create Oracle ASM databases, the RECO directory may not have been created on systems provisioned with the OAK stack. This directory is created when the first RECO record is written. After successfully upgrading these systems using Data Preserving Reprovisioning to Oracle Database Appliance release 19.15 or later, if you attempt to upgrade the database, an error message may be displayed.
Failure Message
When the odacli upgrade-database
command is run,
the following error message is displayed:
# odacli upgrade-database -i 16288932-61c6-4a9b-beb0-4eb19d95b2bd -to b969dd9b-f9cb-4e49-8e0d-575a0940d288
DCS-10001:Internal error encountered: dbStorage metadata not in place:
DCS-12013:Metadata validation error encountered: dbStorage metadata missing
Location info for database database_unique_name.
Command Details
# odacli upgrade-database
Hardware Models
All Oracle Database Appliance X6-2HA and X5-2 hardware models
Workaround
- Verify that the odacli list-dbstorages command displays null for the RECO location of the database that reported the error. For example, the following output displays a null or empty value for the database with unique name F:
# odacli list-dbstorages

ID                                       Type   DBUnique Name   Status       Destination  Location   Total      Used      Available
---------------------------------------- ------ --------------- ------------ ------------ ---------- ---------- --------- ----------
...
198678d9-c7c7-4e74-9bd6-004485b07c14     ASM    F               CONFIGURED   DATA         +DATA/F    4.89 TB    1.67 GB   4.89 TB
                                                                             REDO         +REDO/F    183.09 GB  3.05 GB   180.04 GB
                                                                             RECO                    8.51 TB
...
In the above output, the RECO record has a null value for Location.
- Manually create the RECO directory for this database. If the database unique name is dbuniq, then run the asmcmd command as the grid user:
asmcmd
- Run the mkdir command:
asmcmd> mkdir +RECO/dbuniq
- Verify that the odacli list-dbstorages command output does not display a null or empty value for the database.
- Rerun the odacli upgrade-database command.
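The null-location check in step 1 can be sketched as below. The storage rows are illustrative sample text standing in for odacli list-dbstorages output, and dbuniq is the hypothetical database unique name used above.

```shell
# Sketch: detect a RECO row with no +DISKGROUP location in sample
# 'odacli list-dbstorages'-style rows (illustrative data only).
storages="DATA  +DATA/F  4.89 TB    1.67 GB  4.89 TB
REDO  +REDO/F  183.09 GB  3.05 GB  180.04 GB
RECO  8.51 TB"

# A location always starts with '+'; if the RECO row's second field is
# not a +DISKGROUP path, the location is null and the directory must be
# created manually (asmcmd mkdir +RECO/dbuniq).
reco_field=$(printf '%s\n' "$storages" | awk '$1 == "RECO" {print $2}')
case "$reco_field" in
  +*) reco_missing=no ;;
  *)  reco_missing=yes ;;
esac
echo "RECO location missing: $reco_missing"
```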
Bug Number
This issue is tracked with Oracle bug 34923078.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in database patching
When patching a database on Oracle Database Appliance, an error may be encountered.
Problem Description
When applying the datapatch during patching of a database on Oracle Database Appliance, an error message may be displayed.
Failure Message
When the odacli update-database
command is run,
the following error message is displayed:
Failed to execute sqlpatch for database …
Command Details
# odacli update-database
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Run the following SQL*Plus command:
alter system set nls_sort='BINARY' SCOPE=SPFILE;
- Restart the database using the srvctl command.
- Retry applying the datapatch:
dbhome/OPatch/datapatch -verbose -db dbUniqueName
Bug Number
This issue is tracked with Oracle bug 35060742.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in server patching
When patching the Oracle Database Appliance server, an error may be encountered.
Problem Description
When converting Oracle Clusterware resource type on KVM virtual machines, an error message may be displayed.
Failure Message
When the odacli update-server
command is run, the
following error message is displayed:
DCS-10001:Internal Error encountered: (...), caused by:
CRS-2510: Resource 'ora.data.acfs_resource.acfs' used in dependency 'hard'
does not exist or is not registered.
CRS-2514: Dependency attribute specification 'hard' is invalid in resource
'vm_resource.kvm'
CRS-4000: Command Add failed, or completed with errors.
Command Details
# odacli update-server
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- List and identify DB systems with FAILED status:
# odacli list-dbsystems
- Delete the DB systems with FAILED status:
# odacli delete-dbsystem -n dbsystem_name -f
- Retry the command that failed earlier.
Bug Number
This issue is tracked with Oracle bug 35060579.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in server patching during DB system patching
When patching the server during DB system patching to Oracle Database Appliance release 19.15, an error may be encountered. The following error message is displayed:
ORA-12559: Message 12559 not found; product=RDBMS; facility=ORA
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Retry server patching on the DB system.
This issue is tracked with Oracle bug 34153158.
Parent topic: Known Issues When Patching Oracle Database Appliance
Component version not updated after patching
After patching the server to Oracle Database Appliance release 19.16, the
odacli describe-component
command does not display the correct
Intel Model 0x1528 Ethernet Controller version, if the current version is 8000047B or
8000047C.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Manually update the Ethernet controllers to 800005DD or 800005DE
using the fwupdate
command.
This issue is tracked with Oracle bug 34402352.
Parent topic: Known Issues When Patching Oracle Database Appliance
Detaching of databases with additionally configured services not supported by odaupgradeutil
When running odaupgradeutil in the Data Preserving Reprovisioning process, if there are additionally configured services, then databases cannot be detached.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Additional services must be deleted to complete the detach operation, by running the srvctl remove service command. If these services are required, then before removing a service, capture its metadata manually, and then recreate the service on the system running Oracle Database Appliance release 19.15 using the srvctl command from the appropriate database home.
This issue is tracked with Oracle bug 33593287.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in restore node process in Data Preserving Reprovisioning
In the Data Preserving Reprovisioning process, during node restore, an error may be encountered.
If incorrect VIP names or VIP IP addresses are configured, then the detach completes successfully but the odacli restore-node -g command displays a validation error. This is because the earlier releases did not validate VIP names or VIP IP addresses before provisioning.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Follow these steps:
- Manually edit the file /opt/oracle/oak/restore/metadata/provisionInstance.json with the correct VIP names or VIP IP addresses. To fix VIP names or VIP IP addresses, you can use nslookup to query hostnames and IP addresses.
- Retry the odacli restore-node -g command.
This issue is tracked with Oracle bug 34140344.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in restore node process in Data Preserving Reprovisioning
In the Data Preserving Reprovisioning process, during node restore, an error may be encountered.
The following error message is displayed:
DCS-10045: groupNames are not unique.
This error occurs if the source Oracle Database Appliance is an OAK version. This is because on the DCS stack, the same operating system group is not allowed to be assigned two or more roles.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Follow these steps:
- Manually edit the file /opt/oracle/oak/restore/metadata/provisionInstance.json with unique group names for each role.
- Retry the odacli restore-node -g command.
This issue is tracked with Oracle bug 34042493.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error messages in log entries in Data Preserving Reprovisioning
In the Data Preserving Reprovisioning process, during node restore, the log
entries may display error messages though the overall status of the job is displayed as
SUCCESS
.
The odacli restore-node -d command performs a set of ignorable tasks. Failure of these tasks does not affect the status of the overall job. The output of the odacli describe-job command may report such failures. These tasks are:
- Restore of user created networks
- Restore of object stores
- Restore of NFS backup locations
- Restore of backupconfigs
- Relinking of backupconfigs to databases
- Restore of backup reports
Even if these tasks fail, the overall status of the job is marked as SUCCESS.
Hardware Models
All Oracle Database Appliance hardware models being upgraded using the Data Preserving Reprovisioning process
Workaround
Investigate the failure using the dcs-agent.log file, fix the errors, and then retry the odacli restore-node -d command.
This issue is tracked with Oracle bug 34512193.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in server patching
When patching Oracle Database Appliance which already has STIG V1R2 deployed, an error may be encountered. When you run the command odacli update-server -f version, an error may be displayed.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
The STIG V1R2 rule OL7-00-040420 tries to change the permission of the file /etc/ssh/ssh_host_rsa_key from 640 to 600, which causes the error. During patching, run the command chmod 600 /etc/ssh/ssh_host_rsa_key on both nodes.
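The permission fix can be sketched safely with a temporary file standing in for /etc/ssh/ssh_host_rsa_key:

```shell
# Sketch: restore mode 600 on the host key, as the workaround requires.
# A temp file stands in for /etc/ssh/ssh_host_rsa_key so this is safe
# to run anywhere; on the appliance, target the real file on both nodes.
key=$(mktemp)
chmod 640 "$key"          # mode left by STIG rule OL7-00-040420
chmod 600 "$key"          # the workaround
mode=$(stat -c '%a' "$key")
echo "host key mode: $mode"
rm -f "$key"
```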
This issue is tracked with Oracle bug 33168598.
Parent topic: Known Issues When Patching Oracle Database Appliance
AHF error in prepatch report for the update-dbhome command
When you patch the server to Oracle Database Appliance release 19.19, the odacli update-dbhome command may fail. The prepatch report displays the following AHF check failure:
Verify the Alternate Archive Destination is Configured to Prevent Database Hangs: Failed
AHF-4940: One or more log archive destination and alternate log archive destination settings are not as recommended
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Run the odacli update-dbhome command with the -f option:
/opt/oracle/dcs/bin/odacli update-dbhome --dbhomeid 7c67c5b4-f585-4ba9-865f-c719c63c0a6e -v 19.19.0.0.0 -f
This issue is tracked with Oracle bug 33144170.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in patching prechecks report
The patching prechecks report may display an error.
Failure in the pre-patch report caused by “AHF-5190: operating system boot device order is not configured as recommended”
Hardware Models
Oracle Database Appliance X-7 hardware models
Workaround
Run the odacli update-server or odacli update-dbhome command with the -f option.
This issue is tracked with Oracle bug 33631256.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error message displayed even when patching Oracle Database Appliance is successful
Although patching of Oracle Database Appliance was successful, an error message may be displayed.
The following error message may be displayed when you run the odacli update-dcscomponents command:
# time odacli update-dcscomponents -v 19.19.0.0.0
DCS-10008:Failed to update DCScomponents: 19.19.0.0.0
Internal error while patching the DCS components :
DCS-10231:Cannot proceed. Pre-checks for update-dcscomponents failed. Refer
to /opt/oracle/dcs/log/-dcscomponentsPreCheckReport.log on node 1 for
details.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
This is a timing issue with setting up the SSH equivalence.
Run the odacli update-dcscomponents command again; the operation completes successfully.
This issue is tracked with Oracle bug 32553519.
Parent topic: Known Issues When Patching Oracle Database Appliance
Server status not set to Normal when patching
When patching Oracle Database Appliance, an error is encountered.
When patching the appliance, the odacli update-server command fails with the following error:
DCS-10001:Internal error encountered: Server upgrade state is not NORMAL node_name
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Run the command:
Grid_home/bin/cluvfy stage -post crsinst -collect cluster -gi_upgrade -n all
- Ignore the following two warnings:
Verifying OCR Integrity ...WARNING
PRVG-6017 : OCR backup is located in the same disk group "+DATA" as OCR.

Verifying Single Client Access Name (SCAN) ...WARNING
PRVG-11368 : A SCAN is recommended to resolve to "3" or more IP
- Run the command again till the output displays only the two warnings above. The status of Oracle Clusterware should be Normal again.
- You can verify the status with the command:
Grid_home/bin/crsctl query crs activeversion -f
This issue is tracked with Oracle bug 30099090.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error when patching to 12.1.0.2.190716 Bundle Patch
When patching Oracle Database release 12.1.0.2 to Oracle Database 12.1.0.2.190716 Bundle Patch, an error is encountered.
The ODACLI job displays the following error:
DCS-10001:Internal error encountered: Failed to run SQL script: datapatch script.
The data patch log contains the entry "Prereq check failed, exiting without installing any patches.".
Hardware Models
All Oracle Database Appliance bare metal deployments
Workaround
Install the same patch again.
This issue is tracked with Oracle bugs 30026438 and 30155710.
Parent topic: Known Issues When Patching Oracle Database Appliance
Patching of M.2 drives not supported
Patching of M.2 drives (local disks SSDSCKJB48 and SSDSCKJB480G7) is not supported.
These drives are displayed when you run the odacli describe-component command. Patching is not supported for either of the two known versions, 0112 and 0121, of the M.2 disks.
Hardware Models
Oracle Database Appliance bare metal deployments
Workaround
None
This issue is tracked with Oracle bug 30249232.
Parent topic: Known Issues When Patching Oracle Database Appliance
Known Issues When Deploying Oracle Database Appliance
Understand the known issues when provisioning or deploying Oracle Database Appliance.
- Error in creating a DB system
When creating a DB system, an error may be encountered.
- Error in starting the DB System
When starting a DB system on an Oracle Database Appliance, an error may be encountered.
- Error in creating database
When creating a database on Oracle Database Appliance, an error may be encountered.
- Error in creating two DB systems
When creating two DB systems concurrently in two different Oracle ASM disk groups, an error is encountered.
- Error in creating DB system
When creating a DB system on Oracle Database Appliance, an error may be encountered.
- Error in adding JBOD
When you add a second JBOD to your Oracle Database Appliance deployment on which a DB system is running, an error is encountered.
- Error in provisioning appliance after running cleanup.pl
Errors encountered in provisioning the appliance after running cleanup.pl.
- Error when upgrading database from 11.2.0.4 to 12.1 or 12.2
When upgrading databases from 11.2.0.4 to 12.1 or 12.2, an error is encountered.
- Error when upgrading 12.1 single-instance database
When upgrading a 12.1 single-instance database, a job failure error is encountered.
- Error encountered after running cleanup.pl
Errors encountered in running odacli commands after running cleanup.pl.
- Errors in clone database operation
Clone database operation fails due to errors.
- Clone database operation fails
For Oracle Database release 12.1 databases, the database clone creation may fail because the default compatible version from Oracle binaries was set to 12.0.0.0.0.
Error in creating a DB system
When creating a DB system, an error may be encountered.
Problem Description
- The odacli create-dbsystem job may be stuck in the running status for a long time.
- Other DB system or application VM lifecycle operations, such as create, start, or stop VM jobs, may be stuck in the running status for a long time.
- Any virsh command, such as the virsh list command, may not respond.
- The command ps -ef | grep libvirtd displays two libvirtd processes. For example:
# ps -ef | grep libvirtd
root   5369      1  0 05:27 ?  00:00:03 /usr/sbin/libvirtd
root  27496   5369  0 05:29 ?  00:00:00 /usr/sbin/libvirtd  <<<
The second libvirtd process (pid 27496) is stuck and causes the job to hang.
Command Details
# odacli create-dbsystem
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Delete the second libvirtd process, that is, the one spawned by the first libvirtd (pid 27496 in the example above).
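The stuck-process check can be sketched as below; the two sample rows mirror the ps output shown above and are illustrative only.

```shell
# Sketch: find a libvirtd process whose parent is also libvirtd; that
# child is the stuck one to delete. Illustrative sample rows; on a real
# node use: ps_out=$(ps -eo pid=,ppid=,comm= | grep libvirtd)
ps_out="5369 1 libvirtd
27496 5369 libvirtd"

stuck=$(printf '%s\n' "$ps_out" | awk '
  $3 == "libvirtd" { seen[$1] = 1; parent[$1] = $2 }
  END { for (p in seen) if (parent[p] in seen) print p }')
echo "stuck libvirtd pid: $stuck"
```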
Bug Number
This issue is tracked with Oracle bug 34715675.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in starting the DB System
When starting a DB system on an Oracle Database Appliance, an error may be encountered.
Problem Description
For a DB system with custom memory size, if you modified the CPU
pool size or ran the odacli remap-cpupool
command, then the
DB system may fail to start.
Failure Message
The virsh console displays a kernel panic with an out-of-memory error. The following error message may be displayed:
[Wait DB System VM DCS Agent bootstrap :
JobId=300b6dea-aaab-411b-897f-46c93a336c0f] []
c.o.d.a.k.c.KvmCommandExecutor: Got result from execution of '/usr/bin/nc -zv
IP_address 7071 -w 1':
KvmCommandExecutor.KvmCommandResult(executedCmd=/usr/bin/nc -zv IP_address
7071 -w 1, returnCode=1, output=, error=Ncat:
Version 7.50 ( https://nmap.org/ncat )
Ncat: Connection timed out
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Retrieve the VM name associated with the DB system using the odacli describe-dbsystem command.
- Retrieve the memory size of the DB system with the odacli describe-dbsystem command and convert it to KiB. For example, if the memory size is 64G, when converted to KiB, the size is 67108864 KiB.
- Stop the DB system with the odacli stop-dbsystem command. For high-availability systems, the process may take up to 20 minutes.
- Back up and update the XML file on the VM at the following path. For high-availability systems, perform this step for both VMs.
/u05/app/sharedrepo/dbsystem_name/.ACFS/snaps/vm_vm_name.xml
For Oracle Database Appliance hardware models with one socket, for example, Small, modify the XML as follows. Replace xxxxxx in the example with the memory size from step 2 in KiB units, for example, 67108864 for a memory size of 64G.
<description>DB System VM</description>
<memory unit='KiB'>xxxxxx</memory>                  <<<
<currentMemory unit='KiB'>xxxxxx</currentMemory>    <<<
...
<cpu mode='host-passthrough' check='none'>
  <topology sockets='1' cores='4' threads='1'/>
  <feature policy='force' name='invtsc'/>
  <feature policy='require' name='arch-capabilities'/>
  <numa>
    <cell id='0' cpus='0-3' memory='xxxxxx' unit='KiB'/>    <<<
  </numa>
</cpu>
For Oracle Database Appliance hardware models that have two sockets, for example, Medium, Large, and HA, modify the XML as follows. Replace xxxxxx in the example with the memory size from step 2 in KiB units, for example, 67108864 for 64G. Divide the memory size in KiB by 2 and use it to replace the yyyyyy value below. For example, if the memory is 64G, or 67108864 KiB, replace yyyyyy with 33554432.
<description>DB System VM</description>
<memory unit='KiB'>xxxxxx</memory>                  <<<
<currentMemory unit='KiB'>xxxxxx</currentMemory>    <<<
...
<numa>
  <cell id='0' cpus='0-1' memory='yyyyyy' unit='KiB'>    <<<
    <distances>
      <sibling id='0' value='10'/>
      <sibling id='1' value='21'/>
    </distances>
  </cell>
  <cell id='1' cpus='2-3' memory='yyyyyy' unit='KiB'>    <<<
    <distances>
      <sibling id='0' value='21'/>
      <sibling id='1' value='10'/>
    </distances>
  </cell>
</numa>
- Use the virsh list command to confirm that the VM is stopped, then use the virsh undefine command to undefine the VM. Run the commands on both bare metal system hosts for high-availability deployments.
virsh list
virsh undefine vm_name
- Start the DB system:
odacli start-dbsystem -n dbsystem_name
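The unit conversion in steps 2 and 4 can be sketched as shell arithmetic; 64 GB is the illustrative memory size used in the steps above.

```shell
# Sketch: compute the KiB values substituted for xxxxxx and yyyyyy in
# the VM XML. 64 (GB) is the illustrative memory size from the example.
mem_gb=64
mem_kib=$((mem_gb * 1024 * 1024))   # xxxxxx: <memory unit='KiB'>
cell_kib=$((mem_kib / 2))           # yyyyyy: per NUMA cell, two sockets
echo "memory: ${mem_kib} KiB, per-cell: ${cell_kib} KiB"
```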
Bug Number
This issue is tracked with Oracle bug 35360741.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in creating database
When creating a database on Oracle Database Appliance, an error may be encountered.
Problem Description
When creating a database on Oracle Database Appliance, the
operation may fail after the createDatabaseByRHP
task.
However, the odacli list-databases
command displays the
status as CONFIGURED for the failed database in the job results.
Failure Message
When you run the odacli create-database
command,
the following error message is displayed:
DCS-10001:Internal error encountered: Failed to clear all listeners from database
Command Details
# odacli create-database
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Check the job description of the odacli create-database command using the odacli describe-job command. Fix the issue for the task failure in the odacli create-database command. Delete the database with the odacli delete-database -n db_name command, and then retry the odacli create-database command.
Bug Number
This issue is tracked with Oracle bug 34709091.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in creating two DB systems
When creating two DB systems concurrently in two different Oracle ASM disk groups, an error is encountered. The following error message is displayed:
CRS-2672: Attempting to start 'vm_name.kvm' on 'oda_server'
CRS-5017: The resource action "vm_name.kvm start" encountered the following
error:
CRS-29200: The libvirt virtualization library encountered the following
error:
Timed out during operation: cannot acquire state change lock (held by
monitor=remoteDispatchDomainCreate)
. For details refer to "(:CLSN00107:)" in
"/u01/app/grid/diag/crs/<oda_server>/crs/trace/crsd_orarootagent_root.trc".
CRS-2674: Start of 'vm_name.kvm' on 'oda_server' failed
CRS-2679: Attempting to clean 'vm_name.kvm' on 'oda_server'
CRS-2681: Clean of 'vm_name.kvm' on 'oda_server' succeeded
CRS-4000: Command Start failed, or completed with errors.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Do not create two DB systems concurrently. Instead, complete the creation of one DB system and then create the other.
This issue is tracked with Oracle bug 33275630.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in creating DB system
When creating a DB system on Oracle Database Appliance, an error may be encountered.
When you run the odacli create-dbsystem command, the following error message may be displayed:
DCS-10001:Internal error encountered: ASM network is not online in all nodes
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Manually bring the offline resources online:
crsctl start res -all
- Run the odacli create-dbsystem command.
This issue is tracked with Oracle bug 33784937.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in adding JBOD
When you add a second JBOD to your Oracle Database Appliance deployment on which a DB system is running, an error is encountered.
The following error message is displayed:
ORA-15333: disk is not visible on client instance
Hardware Models
All Oracle Database Appliance hardware models, bare metal and DB system deployments
Workaround
Shut down the DB system before adding the second JBOD, and then restart the DCS agent:
systemctl restart initdcsagent
This issue is tracked with Oracle bug 32586762.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in provisioning appliance after running cleanup.pl
Errors encountered in provisioning the appliance after running cleanup.pl.
After running cleanup.pl
, provisioning the appliance fails because
of missing Oracle Grid Infrastructure image (IMGGI191100). The following error
message is displayed:
DCS-10042:User oda-cliadmin cannot be authorized.
Hardware Models
All Oracle Database Appliance hardware models for bare metal deployments
Workaround
After running cleanup.pl, and before provisioning the appliance, update the repository as follows:
# odacli update-repository -f /gi
This issue is tracked with Oracle bug 32707387.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error when upgrading database from 11.2.0.4 to 12.1 or 12.2
When upgrading databases from 11.2.0.4 to 12.1 or 12.2, an error is encountered.
The following error message is displayed in the UpgradeResults.html file, when upgrading the database from 11.2.0.4 to 12.1 or 12.2:
Database is using a newer time zone file version than the Oracle home
Hardware Models
All Oracle Database Appliance bare metal deployments
Workaround
- Refer to the Database Upgrade Guide for manual steps for fixing the time zone.
- After manually completing the database upgrade, run the following command to update the DCS metadata:
/opt/oracle/dcs/bin/odacli update-registry -n db -f
This issue is tracked with Oracle bug 31125985.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error when upgrading 12.1 single-instance database
When upgrading a 12.1 single-instance database, a job failure error is encountered.
Hardware Models
All Oracle Database Appliance hardware models bare metal deployments
Workaround
- Before upgrading the 12.1 single-instance database, run the following PL/SQL command to change the local_listener to an empty string:
ALTER SYSTEM SET LOCAL_LISTENER='';
- After upgrading the 12.1 single-instance database successfully, run the following PL/SQL command to change the local_listener to the desired value:
ALTER SYSTEM SET LOCAL_LISTENER='-oracle-none-';
This issue is tracked with Oracle bugs 31202775 and 31214657.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error encountered after running cleanup.pl
Errors encountered in running odacli commands after running cleanup.pl.
After running cleanup.pl
, when you try to use odacli
commands, the following error is encountered:
DCS-10042:User oda-cliadmin cannot be authorized.
Hardware Models
All Oracle Database Appliance hardware models for bare metal deployments
Workaround
Run the following commands to set up the credentials for the user oda-cliadmin
on the agent wallet:
# rm -rf /opt/oracle/dcs/conf/.authconfig
# /opt/oracle/dcs/bin/setupAgentAuth.sh
This issue is tracked with Oracle bug 29038717.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Errors in clone database operation
Clone database operation fails due to errors.
If the source database is single-instance or Oracle RAC One Node, or running on the remote node, the clone database operation fails, because the paths are not created correctly in the control file.
Clone database operation may also fail with errors if the source database creation time stamp is too close to the clone operation (within 60 minutes).
Hardware Models
All Oracle Database Appliance high-availability hardware models for bare metal deployments
Workaround
Create the clone database from the source database instance that is running on the same node from which the clone database creation is triggered. Before retrying the clone operation, force a checkpoint on the source database:
SQL> alter system checkpoint;
This issue is tracked with Oracle bugs 29002563, 29002004, 29001906, 29001855, 29001631, 28995153, 28986643, 30309971, and 30228362.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Clone database operation fails
For Oracle Database release 12.1 databases, the database clone creation may fail because the default compatible version from Oracle binaries was set to 12.0.0.0.0.
Hardware Models
All Oracle Database Appliance high-availability hardware models for bare metal deployments
Workaround
- Change the compatible parameter value.
SQL> ALTER SYSTEM SET COMPATIBLE = '12.1.0.2.0' SCOPE=SPFILE;
- Shut down the database.
SQL> SHUTDOWN IMMEDIATE
- Start the database.
SQL> STARTUP
- Verify the new value of the parameter.
SQL> SELECT name, value, description FROM v$parameter WHERE name ='compatible';
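The steps above can be collected into a single SQL*Plus script. This is a sketch only: the script path /tmp/fix_compatible.sql is an illustrative assumption, and the script must be run as SYSDBA against the source database; here it is only written out, not executed.

```shell
# Write the workaround steps to a SQL*Plus script (illustrative path).
cat > /tmp/fix_compatible.sql <<'EOF'
ALTER SYSTEM SET COMPATIBLE = '12.1.0.2.0' SCOPE=SPFILE;
SHUTDOWN IMMEDIATE
STARTUP
SELECT name, value, description FROM v$parameter WHERE name = 'compatible';
EXIT
EOF
# To apply, run as SYSDBA: sqlplus / as sysdba @/tmp/fix_compatible.sql
cat /tmp/fix_compatible.sql
```

Collecting the steps in one script keeps the SCOPE=SPFILE change and the restart in the right order, since the new compatible value only takes effect after the instance restarts.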
This issue is tracked with Oracle bug 30309914.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Known Issues When Managing Oracle Database Appliance
Understand the known issues when managing or administering Oracle Database Appliance.
- Error in starting the kdump service
When starting the kdump service, an error may be encountered.
- Error in configuring Oracle Data Guard in a multi-user access enabled deployment
When configuring Oracle Data Guard in a multi-user access enabled deployment, an error may be encountered.
- Error in recovery of database
When recovering an Oracle Database Enterprise Edition High Availability database from node 0, with target node as 1, an error may be encountered.
- Error in configuring Oracle Data Guard
When running the command odacli configure-dataguard on Oracle Database Appliance, an error may be encountered at the upload password file to standby database step.
- Error in cleaning up a multi-user access enabled deployment
When running /opt/oracle/oak/onecmd/cleanup.pl on a multi-user access enabled deployment, an error may be encountered.
- Error in backup of database
When backing up a database on Oracle Database Appliance, an error is encountered.
- Error in cleaning up a deployment
When cleaning up an Oracle Database Appliance deployment, an error is encountered.
- Error in display of file log path
File log paths are not displayed correctly on the console, but all the logs generated for a job record the correct paths.
- Error in configuring Oracle Data Guard
After upgrading the standby database from release 12.1 to 19.14, an error message may be displayed at step Enable redo transport and apply.
- Error in viewing Oracle Data Guard status
When viewing Oracle Data Guard status on Oracle Database Appliance, an error is encountered.
- Error in reinstate operation on Oracle Data Guard
When running the command odacli reinstate-dataguard on Oracle Data Guard, an error is encountered.
- Error in running concurrent database or database home creation jobs
When running concurrent database or database home creation jobs, an error is encountered.
- Error in the enable apply process after upgrading databases
When running the enable apply process after upgrading databases in an Oracle Data Guard deployment, an error is encountered.
- Error in creating Oracle Data Guard status
When configuring Oracle Active Data Guard on Oracle Database Appliance, an error is encountered.
- Error in Reinstating Oracle Data Guard
When reinstating Oracle Data Guard on Oracle Database Appliance, an error is encountered.
- Failure in Reinstating Oracle Data Guard
When reinstating Oracle Data Guard on Oracle Database Appliance, an error is encountered.
- Error in updating Role after Oracle Data Guard operations
When performing operations with Oracle Data Guard on Oracle Database Appliance, an error is encountered in updating the Role.
- Inconsistency in ORAchk summary and details report page
The ORAchk report summary on the Browser User Interface may show different counts of Critical, Failed, and Warning issues than the report detail page.
- The odaeraser tool does not work if oakd is running in non-cluster mode
After cleaning up the deployment, the Secure Eraser tool does not work if oakd is running in non-cluster mode.
Error in starting the kdump service
When starting the kdump service, an error may be encountered.
Failure Message
The following error message is displayed:
crashkernel reservation failed - memory is in use.
Command Details
# systemctl status kdump
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Modify /etc/default/grub and change "crashkernel=512M@64M" to "crashkernel=512M".
- Run grub2-mkconfig.
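The two steps above can be sketched as a script. The sketch edits a demonstration copy of the grub defaults file; on a live system you would edit /etc/default/grub itself and then regenerate the configuration with grub2-mkconfig (the output path shown in the comment is the usual one, but may differ on your system).

```shell
# Demonstration copy of the grub defaults file (assumption: demo only;
# on a live system this would be /etc/default/grub).
GRUB_FILE=/tmp/grub.default
cat > "$GRUB_FILE" <<'EOF'
GRUB_TIMEOUT=5
GRUB_CMDLINE_LINUX="crashkernel=512M@64M rhgb quiet"
EOF

# Step 1: drop the @64M offset so the crash kernel memory can be
# reserved anywhere instead of at an address that may already be in use.
sed -i 's/crashkernel=512M@64M/crashkernel=512M/' "$GRUB_FILE"
grep crashkernel "$GRUB_FILE"

# Step 2 (on a live system): regenerate the grub configuration, e.g.
# grub2-mkconfig -o /boot/grub2/grub.cfg
```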
Bug Number
This issue is tracked with Oracle bug 34714285.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in configuring Oracle Data Guard in a multi-user access enabled deployment
When configuring Oracle Data Guard in a multi-user access enabled deployment, an error may be encountered.
Problem Description
When you configure Oracle Data Guard in a multi-user access enabled deployment as the ODA-ADMINISTRATOR user, the operation may fail at step Configure Standby database (Standby site).
Failure Message
DCS-10001:Internal error encountered: Unable to populate standby database metadata.
Command Details
odacli configure-dataguard
Hardware Models
All Oracle Database Appliance hardware models in a multi-user access enabled deployment
Workaround
Configure Oracle Data Guard as a user with role ODA-DB and user type System, for example, yoracle, as in the following procedure. If the primary system is multi-user access enabled, make sure the primary database is created with this user. If the standby system is multi-user access enabled, make sure the standby database is restored with this user.
- Obtain the ODA-DB user name on the multi-user access enabled system:
[odaadmin@scaoda9l006 ~]$ odacli list-users
ID                                       DCS User Name   OS User Name   Role(s)             Account Status   User Type
---------------------------------------- --------------- -------------- ------------------- ---------------- ----------
...
8564aba2-94b9-4607-8c4f-2cda3bdc6cb5     odaadmin        odaadmin       ODA-ADMINISTRATOR   Active           System
d9ae7f70-b294-42c1-881a-5f619ec2a851     yoracle         yoracle        ODA-DB              Active           System
- Switch to the ODA-DB user and configure Oracle Data Guard on the primary and standby systems:
[yoracle@oda1 ~] su - yoracle
[yoracle@oda1 ~]$ odacli create-database -n test -u ptest -bn f1 -bp
[yoracle@oda1 ~]$ odacli create-backup -bt Regular-L0 -n test
[yoracle@oda1 ~]$ odacli irestore-database -r backup_report.json -ro STANDBY -bp -on f1 -u stest
[yoracle@oda1 ~]$ odacli configure-dataguard
Standby site address: oda2
BUI username for Standby site. If Multi-user Access is disabled on Standby site, enter 'oda-admin'; otherwise, enter the name of the user who has irestored the Standby database (default: oda-admin): yoracle
BUI password for Standby site:
Database name for Data Guard configuration: test
Primary database SYS password:
*******************************************************************************************
Data Guard default settings
Primary site network for Data Guard configuration: Public-network
Standby site network for Data Guard configuration: Public-network
Primary database listener port (TCP): 1521
Standby database listener port (TCP): 1521
Transport type: ASYNC
Protection mode: MAX_PERFORMANCE
Data Guard configuration name: ptest_stest
Active Data Guard: disabled
Do you want to edit this Data Guard configuration? (Y/N, default:N):
Standby database's SYS password will be set to Primary database's after Data Guard configuration. Ignore warning and proceed with Data Guard configuration? (Y/N, default:N): y
*******************************************************************************************
Configure Data Guard ptest_stest started
*******************************************************************************************
Step 1: Validate Data Guard configuration request (Primary site)
...
*******************************************************************************************
Step 11: Create Data Guard status (Standby site)
Description: DG Status operation for db test - NewDgconfig
Job ID: e6b13275-9450-4650-8187-b33f2dd6480f
Started May 16, 2023 00:52:33 AM IST
Create Data Guard status
Finished May 16, 2023 00:52:35 AM IST
*******************************************************************************************
Configure Data Guard ptest_stest completed
*******************************************************************************************
Bug Number
This issue is tracked with Oracle bug 35389339.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in recovery of database
When recovering an Oracle Database Enterprise Edition High Availability database from node 0, with target node as 1, an error may be encountered.
Failure Message
The following error message is displayed:
DCS-10001:Internal error encountered: null
Command Details
# odacli recover-database
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Retry the operation from the target node number of the database.
Bug Number
This issue is tracked with Oracle bug 34785410.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in configuring Oracle Data Guard
When running the command odacli configure-dataguard on Oracle Database Appliance, an error may be encountered at the upload password file to standby database step.
When running odacli configure-dataguard on Oracle Database Appliance, the following error message may be displayed:
CONFIGUREDG - DCS-10001: UNABLE TO CONFIGURE BROKER
DGMGRL> SHOW CONFIGURATION;
ORA-16783: cannot resolve gap for database tgtpodpgtb
Hardware Models
Oracle Database Appliance hardware models with DB system and database version earlier than Oracle Database Appliance release 19.15
Workaround
Manually copy the password file from the primary system to the standby system as described in the following steps, and then run odacli configure-dataguard with the --skip-password-copy option.
- On the primary system, locate the password file:
srvctl config database -d dbUniqueName | grep -i password
If the output is an Oracle ASM directory, then copy the password file from the Oracle ASM directory to a local directory:
su - grid
asmcmd
ASMCMD> pwcopy +DATA/tiger2/PASSWORD/orapwtiger /tmp/orapwtiger
If the output is empty, then check the directory at dbHome/dbs/orapwdbName. For example, the orapwd file can be at /u01/app/oracle/product/19.0.0.0/dbhome_1/dbs/orapwtiger.
- Copy the password file to the standby system, first backing up the original password file:
cp /u01/app/oracle/product/19.0.0.0/dbhome_1/dbs/orapwtiger /u01/app/oracle/product/19.0.0.0/dbhome_1/dbs/orapwtiger.ori
scp root@primaryHost:/u01/app/oracle/product/19.0.0.0/dbhome_1/dbs/orapwtiger /u01/app/oracle/product/19.0.0.0/dbhome_1/dbs/orapwtiger
- Change the standby orapwd file permissions:
chown -R oracle /u01/app/oracle/product/19.0.0.0/dbhome_1/dbs/orapwtiger
chgrp oinstall /u01/app/oracle/product/19.0.0.0/dbhome_1/dbs/orapwtiger
- Check the password file location on the standby system and copy it to the Oracle ASM directory, if necessary:
srvctl config database -d tiger2 | grep -i password
Password file: +DATA/tiger2/PASSWORD/orapwtiger
In this example, copy the password file from the local directory to the Oracle ASM directory:
su - grid
asmcmd
ASMCMD> pwcopy /u01/app/oracle/product/19.0.0.0/dbhome_1/dbs/orapwtiger +DATA/tiger2/PASSWORD/orapwtiger
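The copy-and-fix-permissions steps above can be sketched as a dry-run script. The paths, host name, and file names are the illustrative values from the example; the script records each command it would run instead of executing anything on a live system.

```shell
# Dry-run sketch: record each command instead of executing it.
LOG=/tmp/pwcopy_steps.txt
: > "$LOG"
run() { echo "$@" >> "$LOG"; }

# Illustrative values from the example above.
PWFILE=/u01/app/oracle/product/19.0.0.0/dbhome_1/dbs/orapwtiger
PRIMARY=primaryHost

run cp "$PWFILE" "$PWFILE.ori"               # back up the standby copy
run scp "root@$PRIMARY:$PWFILE" "$PWFILE"    # pull the file from the primary
run chown oracle "$PWFILE"                   # fix ownership
run chgrp oinstall "$PWFILE"                 # fix group
cat "$LOG"
```

Keeping the backup copy (.ori) first means the original standby password file can be restored if the Data Guard configuration still fails afterwards.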
This issue is tracked with Oracle bug 34484209.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in cleaning up a multi-user access enabled deployment
When running /opt/oracle/oak/onecmd/cleanup.pl on a multi-user access enabled deployment, an error may be encountered.
Problem Description
The /opt/oracle/oak/onecmd/cleanup.pl operation may not respond and may need to be closed manually.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Run /opt/oracle/oak/onecmd/cleanup.pl with the -nodpr option on a multi-user access enabled deployment.
Bug Number
This issue is tracked with Oracle bug 35326073.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in backup of database
When backing up a database on Oracle Database Appliance, an error is encountered.
Running odacli create-backup on the new primary database fails with the following message:
DCS-10001:Internal error encountered: Unable to get the rman command status
commandid:xxx
output:STATUS
-------------------------
[COMPLETED WITH WARNINGS] error:.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- On the new primary database, connect to RMAN as the oracle user and edit the archived log deletion policy:
rman target /
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO 'SBT_TAPE';
- On the new primary database, as the root user, take a backup:
odacli create-backup -in db_name -bt backup_type
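The RMAN half of this workaround can also be scripted. A sketch, assuming an illustrative command-file path; the policy change runs as the oracle user and the subsequent backup as root, and here the command file is only written out, not executed.

```shell
# Write the RMAN policy fix to a command file (illustrative path).
cat > /tmp/fix_archivelog_policy.rman <<'EOF'
CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO 'SBT_TAPE';
EXIT
EOF
# To apply as the oracle user:
#   rman target / cmdfile=/tmp/fix_archivelog_policy.rman
# Then, as root, retry the backup:
#   odacli create-backup -in db_name -bt backup_type
cat /tmp/fix_archivelog_policy.rman
```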
This issue is tracked with Oracle bug 33181168.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in cleaning up a deployment
When cleaning up an Oracle Database Appliance deployment, an error is encountered.
Hardware Models
All Oracle Database Appliance hardware models with DB systems
Workaround
- Stop the NFS service on both nodes:
service nfs stop
- Clean up the bare metal system. See the Oracle Database Appliance Deployment and User's Guide for your hardware model for the steps.
This issue is tracked with Oracle bug 33289742.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in display of file log path
File log paths are not displayed correctly on the console, but all the logs generated for a job record the correct paths.
Hardware Models
All Oracle Database Appliance hardware models with virtualized platform
Workaround
None.
This issue is tracked with Oracle bug 33580574.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in configuring Oracle Data Guard
After upgrading the standby database from release 12.1 to 19.14, the following error message may be displayed at step Enable redo transport and apply.
Warning: ORA-16629: database reports a different protection level from the protection mode standbydb - Physical standby database (disabled)
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Enable the database again using DGMGRL:
DGMGRL> Enable database tgtptdcnvo
Enabled.
This issue is tracked with Oracle bug 33749492.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in viewing Oracle Data Guard status
When viewing Oracle Data Guard status on Oracle Database Appliance, an error is encountered.
An error may be reported at the step Check if DataGuard config is updated. Oracle Data Guard operations, though, are successful.
Hardware Models
All Oracle Database Appliance high-availability systems
Workaround
Use DGMGRL to verify Oracle Data Guard status.
This issue is tracked with Oracle bug 33411769.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in reinstate operation on Oracle Data Guard
When running the command odacli reinstate-dataguard on Oracle Data Guard, an error is encountered.
The dcs-agent.log file has the following error entry:
DCS-10001:Internal error encountered: Unable to reinstate Dg.
Further in the log, the following error is found:
ORA-12514: TNS:listener does not currently know of service requested
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Ensure that the database you are reinstating is started in MOUNT mode.
srvctl start database -d db-unique-name -o mount
After the command completes successfully, run the odacli reinstate-dataguard command again. If the database is already in MOUNT mode, this can be a temporary error. Check the Data Guard status again a few minutes later with odacli describe-dataguardstatus or odacli list-dataguardstatus, or check with DGMGRL> SHOW CONFIGURATION; to see if the reinstatement is successful.
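The recovery sequence can be sketched as a dry-run script; db-unique-name is the same placeholder used above, and the commands are only recorded, not executed.

```shell
# Dry-run sketch: record the recovery sequence instead of executing it.
LOG=/tmp/reinstate_steps.txt
: > "$LOG"
run() { echo "$@" >> "$LOG"; }

DB=db-unique-name   # placeholder: the database's DB_UNIQUE_NAME

run srvctl start database -d "$DB" -o mount   # start in MOUNT mode first
run odacli reinstate-dataguard                # then retry the reinstate
run odacli describe-dataguardstatus           # recheck status a few minutes later
cat "$LOG"
```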
This issue is tracked with Oracle bug 32367676.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in running concurrent database or database home creation jobs
When running concurrent database or database home creation jobs, an error is encountered.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Do not run concurrent database or database home creation jobs.
This issue is tracked with Oracle bug 32376885.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in the enable apply process after upgrading databases
When running the enable apply process after upgrading databases in an Oracle Data Guard deployment, an error is encountered.
Error: ORA-16664: unable to receive the result from a member
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Restart the standby database in upgrade mode:
srvctl stop database -d <db_unique_name>
Then, from SQL*Plus, run: STARTUP UPGRADE;
- Continue the enable apply process and wait for log apply process to refresh.
- After some time, check the Data Guard status with the DGMGRL command:
SHOW CONFIGURATION;
This issue is tracked with Oracle bug 32864100.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in creating Oracle Data Guard status
When configuring Oracle Active Data Guard on Oracle Database Appliance, an error is encountered.
The odacli configure-dataguard command fails at step NewDgconfig with the following error on the standby system:
ORA-16665: TIME OUT WAITING FOR THE RESULT FROM A MEMBER
Verify the status of the job with the odacli list-jobs command.
Hardware Models
All Oracle Database Appliance hardware models with Oracle Data Guard configuration
Workaround
Follow these steps:
- On the standby system, run the following:
export DEMODE=true
odacli create-dataguardstatus -i dbid -n dataguardstatus_id_on_primary -r configdg.json
export DEMODE=false
An example configdg.json file for a single-node system:
{
"name": "test1_test7",
"protectionMode": "MAX_PERFORMANCE",
"replicationGroups": [
{
"sourceEndPoints": [
{
"endpointType": "PRIMARY",
"hostName": test_domain1",
"listenerPort": 1521,
"databaseUniqueName": "test1",
"serviceName": "test",
"sysPassword": "***",
"ipAddress": "test_IPaddress"
}
],
"targetEndPoints": [
{
"endpointType": "STANDBY",
"hostName": "test_domain2",
"listenerPort": 1521,
"databaseUniqueName": "test7",
"serviceName": "test",
"sysPassword": "***",
"ipAddress": "test_IPaddress3"
}
],
"transportType": "ASYNC"
}
]
}
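Before passing a configdg.json file to odacli create-dataguardstatus, it is worth checking that it is well-formed JSON (every string quoted, no trailing commas after the last element of an array). A quick sketch, assuming python3 is available and using the placeholder values from the example above:

```shell
# Write the example configuration (placeholder values) and confirm
# that it parses as valid JSON before handing it to odacli.
cat > /tmp/configdg.json <<'EOF'
{
  "name": "test1_test7",
  "protectionMode": "MAX_PERFORMANCE",
  "replicationGroups": [
    {
      "sourceEndPoints": [
        {
          "endpointType": "PRIMARY",
          "hostName": "test_domain1",
          "listenerPort": 1521,
          "databaseUniqueName": "test1",
          "serviceName": "test",
          "sysPassword": "***",
          "ipAddress": "test_IPaddress"
        }
      ],
      "targetEndPoints": [
        {
          "endpointType": "STANDBY",
          "hostName": "test_domain2",
          "listenerPort": 1521,
          "databaseUniqueName": "test7",
          "serviceName": "test",
          "sysPassword": "***",
          "ipAddress": "test_IPaddress3"
        }
      ],
      "transportType": "ASYNC"
    }
  ]
}
EOF
python3 -m json.tool /tmp/configdg.json > /dev/null && echo "configdg.json is valid JSON"
```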
This issue is tracked with Oracle bug 32719173.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in Reinstating Oracle Data Guard
When reinstating Oracle Data Guard on Oracle Database Appliance, an error is encountered.
The odacli reinstate-dataguard command fails with the following error:
Unable to reinstate Dg. Reinstate job was executed within 24hrs after failover job.
The dcs-agent.log file has the following error entry:
DGMGRL> Reinstating database "xxxx",
please wait...
Oracle Clusterware is restarting database "xxxx" ...
Connected to "xxxx"
Continuing to reinstate database "xxxx" ...
Error: ORA-16653: failed to reinstate database
Hardware Models
All Oracle Database Appliance hardware models with Oracle Data Guard configuration
Workaround
Follow these steps:
- On the primary machine, get the standby_became_primary_scn:
SQL> select standby_became_primary_scn from v$database;
STANDBY_BECAME_PRIMARY_SCN
--------------------------
3522449
- On the old primary database, flash back to this SCN with RMAN, using the backup encryption password:
RMAN> set decryption identified by 'rman_backup_password';
executing command: SET decryption
RMAN> FLASHBACK DATABASE TO SCN 3522449;
...
Finished flashback at 24-SEP-20
RMAN> exit
- On the new primary machine, run the odacli reinstate-dataguard command.
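The flashback step can be captured in an RMAN command file. A sketch using the SCN and the rman_backup_password placeholder from the example above; substitute the values from step 1 on your system. The file is only written out here, not run.

```shell
# Placeholder: use the standby_became_primary_scn obtained in step 1.
SCN=3522449
cat > /tmp/flashback_old_primary.rman <<EOF
SET DECRYPTION IDENTIFIED BY 'rman_backup_password';
FLASHBACK DATABASE TO SCN $SCN;
EXIT
EOF
# To apply on the old primary:
#   rman target / cmdfile=/tmp/flashback_old_primary.rman
cat /tmp/flashback_old_primary.rman
```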
This issue is tracked with Oracle bug 31884506.
Parent topic: Known Issues When Managing Oracle Database Appliance
Failure in Reinstating Oracle Data Guard
When reinstating Oracle Data Guard on Oracle Database Appliance, an error is encountered.
The odacli reinstate-dataguard command fails with the following error:
DCS-10001:Internal error encountered: Unable to reinstate Dg.
The dcs-agent.log file has the following error entry:
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
Hardware Models
All Oracle Database Appliance hardware models with Oracle Data Guard configuration
Workaround
Follow these steps:
- Make sure the database you are reinstating is started in MOUNT mode. To start the database in MOUNT mode, run this command:
srvctl start database -d db-unique-name -o mount
- After the above command runs successfully, run the odacli reinstate-dataguard command.
This issue is tracked with Oracle bug 32047967.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in updating Role after Oracle Data Guard operations
When performing operations with Oracle Data Guard on Oracle Database Appliance, an error is encountered in updating the Role.
The Role shown by the odacli describe-database command is not updated after Oracle Data Guard switchover, failover, and reinstate operations on Oracle Database Appliance.
Hardware Models
All Oracle Database Appliance hardware models with Oracle Data Guard configuration
Workaround
Run odacli update-registry -n db --force (or -f) to update the database metadata. After the job completes, run the odacli describe-database command and verify that dbRole is updated.
This issue is tracked with Oracle bug 31378202.
Parent topic: Known Issues When Managing Oracle Database Appliance
Inconsistency in ORAchk summary and details report page
The ORAchk report summary on the Browser User Interface may show different counts of Critical, Failed, and Warning issues than the report detail page.
Hardware Models
Oracle Database Appliance hardware models with bare metal deployments
Workaround
Ignore counts of Critical, Failed, and Warning issues in the ORAchk report summary on the Browser User Interface. Check the report detail page.
This issue is tracked with Oracle bug 30676674.
Parent topic: Known Issues When Managing Oracle Database Appliance
The odaeraser tool does not work if oakd is running in non-cluster mode
After cleaning up the deployment, the Secure Eraser tool does not work if oakd is running in non-cluster mode.
Hardware Models
All Oracle Database Appliance bare metal systems
Workaround
After cleanup of the deployment, oakd is started in non-cluster mode, and it cannot be stopped using the odaadmcli stop oak command. In this case, if the Secure Eraser tool is run, the odaeraser command fails.
Use the command odaadmcli shutdown oak to stop oakd.
This issue is tracked with Oracle bug 28547433.
Parent topic: Known Issues When Managing Oracle Database Appliance