4 Known Issues with Oracle Database Appliance in This Release

The following are known issues when deploying, updating, and managing Oracle Database Appliance in this release.

Known Issues When Patching Oracle Database Appliance

Understand the known issues when patching Oracle Database Appliance to this release.

Error in updating the operating system when patching the server

When patching the server, the operating system may not be updated.

The following error message is displayed:
DCS-10001:Internal error encountered: Failed to patch OS.
Run the following command:
rpm -q kernel-uek

If the output of this command displays multiple RPM names, then perform the workaround.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Remove the following RPMs:
# yum remove kernel-uek-4.14.35-1902.11.3.1.el7uek.x86_64
# yum remove kernel-uek-4.14.35-1902.301.1.el7uek.x86_64
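
A minimal sketch of the check follows; keep the currently running kernel reported by uname -r and remove only the other kernel-uek RPMs listed by rpm -q kernel-uek (the RPM names above are examples from this issue):
# uname -r
# rpm -q kernel-uek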

This issue is tracked with Oracle bug 34154435.

Error in server patching

When patching the Oracle Database Appliance server, an error may be encountered.

Problem Description

When patching the server on Oracle Database Appliance, an error message may be displayed.

Failure Message

When you run the odacli update-server command, the following error message is displayed at the Patch KVM infrastructure task:

DCS-10033:Service Clusterware on current node is down.

This issue may occur on systems that require an update of Oracle ILOM, the operating system, or local storage firmware.

Command Details

# odacli update-server -v version

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Restart Oracle Clusterware as follows:
$GI_HOME/bin/crsctl start crs

Bug Number

This issue is tracked with Oracle bug 36923776.

Error in attaching a vdisk after DB system patching

After upgrading a DB system on Oracle Database Appliance, the vdisks that were attached to the DB system may no longer be attached.

Problem Description

After DB system upgrade, the existing vdisks are not attached. Only the vdisk metadata associated with the DB system is preserved. The virtual device name may be different from the name before you run the odacli upgrade-dbsystem command.

Command Details

# odacli upgrade-dbsystem

Hardware Models

All Oracle Database Appliance hardware models X9-2, X8-2, and X7-2

Workaround

Detach the vdisk manually from the VM with the --force option to reconcile the metadata. Then attach the vdisk to the respective VM again, and manually mount the file system on the device in the DB system.
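
For example, inside the DB system, a minimal sketch for locating and mounting the file system after the vdisk is reattached; the device name /dev/vdb and the mount point /u01/mydata are hypothetical, so identify the actual device from the lsblk and blkid output:
# lsblk
# blkid /dev/vdb
# mount /dev/vdb /u01/mydata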

Bug Number

This issue is tracked with Oracle bug 36885595.

Error in upgrading Oracle AFD-enabled DB system

When upgrading a DB system with Oracle ASM Filter Driver (Oracle AFD) during Data Preserving Reprovisioning, an error may be encountered.

Problem Description

When you upgrade a DB system with Oracle AFD using Data Preserving Reprovisioning to Oracle Database Appliance release 19.22, with Oracle Grid Infrastructure or Oracle Database release 19.21 or earlier, then an error may be encountered at the "Restore node - DPR" step.

Failure Message

The following error message is displayed in the database alert.log:

ORA-00600: internal error code, arguments: [kfnRConnect!ascname], [DATA], [], [], [], [], [], [], [], [], [], []

Hardware Models

All Oracle Database Appliance hardware models X9-2 and earlier running Oracle Grid Infrastructure 19.21

Workaround

Do not upgrade the existing Oracle AFD-enabled DB system with Oracle Grid Infrastructure or Oracle Database release 19.21 until the fix for bug 36114443 is available in the Oracle Grid Infrastructure and Oracle Database clone files.

Bug Number

This issue is tracked with Oracle bug 36296849.

Error in server patching

When patching the server on Oracle Database Appliance, an error may be encountered.

Problem Description

When patching the server on Oracle Database Appliance, the scheduler service may fail to start when the DCS agent loads, and an error message may be displayed.

Failure Message

The dcs-agent.log file displays the following error message:

-----------------------
2024-07-29 14:24:30,351 WARN [backgroundjob-zookeeper-pool-7-thread-2] [] o.j.s.JobZooKeeper: JobRunr encountered a problematic exception. Please create a bug report (if possible, provide the code to reproduce this and the stacktrace) - Processing will continue.
java.lang.NullPointerException: null
        at org.jobrunr.server.zookeeper.tasks.ZooKeeperTask.pollIntervalInSecondsTimeBoxIsAboutToPass(ZooKeeperTask.java:93)
        at org.jobrunr.server.zookeeper.tasks.ZooKeeperTask.getJobsToProcess(ZooKeeperTask.java:84)
        at org.jobrunr.server.zookeeper.tasks.ZooKeeperTask.processJobList(ZooKeeperTask.java:57)
        at org.jobrunr.server.zookeeper.tasks.ProcessOrphanedJobsTask.runTask(ProcessOrphanedJobsTask.java:29)
        at org.jobrunr.server.zookeeper.tasks.ZooKeeperTask.run(ZooKeeperTask.java:47)
        at org.jobrunr.server.JobZooKeeper.lambda$runMasterTasksIfCurrentServerIsMaster$0(JobZooKeeper.java:76)
        at java.util.Arrays$ArrayList.forEach(Arrays.java:3880)
        at org.jobrunr.server.JobZooKeeper.runMasterTasksIfCurrentServerIsMaster(JobZooKeeper.java:76)
        at org.jobrunr.server.JobZooKeeper.run(JobZooKeeper.java:56)
-----------------------

Command Details

# odacli update-server

Hardware Models

All Oracle Database Appliance hardware models

Workaround

  1. Restart the DCS agent:
    systemctl restart initdcsagent
  2. Verify that the DCS agent is running:
    odacli ping-agent
    odacli list-jobs
    odacli describe-component

Bug Number

This issue is tracked with Oracle bug 36896020.

Incorrect job status during Data Preserving Reprovisioning

When upgrading your deployment, the job status may be reported incorrectly.

Problem Description

When a job is marked as Success, it means that all of its tasks have completed successfully and none of them are still running. However, there may be cases where the odacli describe-job command result incorrectly displays a task in a running state, even though the job itself has successfully completed.

Command Details

# odacli describe-job

Hardware Models

All Oracle Database Appliance hardware models

Workaround

None. Ignore the error.

Bug Number

This issue is tracked with Oracle bug 35970784.

Error in upgrading a database

When upgrading a database, an error may be encountered.

Problem Description

When you create Oracle ASM databases, the RECO directory may not have been created on systems provisioned with the OAK stack. This directory is created when the first RECO record is written. After successfully upgrading these systems using Data Preserving Reprovisioning to Oracle Database Appliance release 19.15 or later, if you attempt to upgrade the database, an error message may be displayed.

Failure Message

When the odacli upgrade-database command is run, the following error message is displayed:

# odacli upgrade-database -i 16288932-61c6-4a9b-beb0-4eb19d95b2bd -to b969dd9b-f9cb-4e49-8e0d-575a0940d288
DCS-10001:Internal error encountered: dbStorage metadata not in place:
DCS-12013:Metadata validation error encountered: dbStorage metadata missing
Location info for database database_unique_name..

Command Details

# odacli upgrade-database

Hardware Models

All Oracle Database Appliance X6-2HA and X5-2 hardware models

Workaround

  1. Verify that the odacli list-dbstorages command displays null for the RECO location for the database that reported the error. For example, the following output displays a null or empty RECO value for the database with unique name F.
    # odacli list-dbstorages
    
    ID                                     Type   DBUnique Name  Status     
    Destination Location  Total      Used       Available      
    ---------------------------------------- ------ --------------------
    ...
    ...
    ...
    198678d9-c7c7-4e74-9bd6-004485b07c14     ASM    F            CONFIGURED   
    DATA    +DATA/F  4.89 TB    1.67 GB    4.89 TB                                                                   
    REDO    +REDO/F  183.09 GB  3.05 GB    180.04 GB                                                                                
    RECO             8.51 TB              
    ...
    ...
    ...

    In the above output, the RECO record has a null value.

  2. Manually create the RECO directory for this database. If the database unique name is dbuniq, then run the asmcmd command as the grid user.
    asmcmd
  3. Run the mkdir command.
    asmcmd> mkdir +RECO/dbuniq
  4. Verify that the odacli list-dbstorages command output does not display a null or empty value for the database.
  5. Rerun the odacli upgrade-database command.
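
A consolidated sketch of steps 2 through 4 above, assuming the database unique name is dbuniq:
As the grid user:
    asmcmd mkdir +RECO/dbuniq
    asmcmd ls +RECO/
As the root user:
    odacli list-dbstorages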

Bug Number

This issue is tracked with Oracle bug 34923078.

Error in database patching

When patching a database on Oracle Database Appliance, an error may be encountered.

Problem Description

When applying the datapatch during database patching on Oracle Database Appliance, an error message may be displayed.

Failure Message

When the odacli update-database command is run, the following error message is displayed:

Failed to execute sqlpatch for database …

Command Details

# odacli update-database

Hardware Models

All Oracle Database Appliance hardware models

Workaround

  1. Run the following SQL*Plus command:
    alter system set nls_sort='BINARY' SCOPE=SPFILE;
  2. Restart the database using the srvctl command, as shown in the sketch after this list.
  3. Retry applying the datapatch with dbhome/OPatch/datapatch -verbose -db dbUniqueName.
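
A consolidated sketch of the steps above; the database unique name mydb is a hypothetical placeholder, and dbhome stands for the Oracle home path of the database:
sqlplus / as sysdba
SQL> alter system set nls_sort='BINARY' SCOPE=SPFILE;
SQL> exit
srvctl stop database -d mydb
srvctl start database -d mydb
dbhome/OPatch/datapatch -verbose -db mydb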

Bug Number

This issue is tracked with Oracle bug 35060742.

Component version not updated after patching

After patching the server to Oracle Database Appliance release 19.16, the odacli describe-component command does not display the correct Intel Model 0x1528 Ethernet Controller version, if the current version is 8000047B or 8000047C.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Manually update the Ethernet controllers to 800005DD or 800005DE using the fwupdate command.
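
To confirm the controller firmware version before and after the update, you can list the firmware inventory with the Oracle Hardware Management Pack fwupdate tool. This is a sketch only; the exact update subcommand and component naming for your system are described in the fwupdate documentation:
# fwupdate list all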

This issue is tracked with Oracle bug 34402352.

Error in restore node process in Data Preserving Reprovisioning

In the Data Preserving Reprovisioning process, during node restore, an error may be encountered.

If incorrect VIP names or VIP IP addresses are configured, then the detach completes successfully but the command odacli restore-node -g displays a validation error. This is because the earlier releases did not validate VIP names or VIP IP addresses before provisioning.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:

  1. Manually edit the file /opt/oracle/oak/restore/metadata/provisionInstance.json with the correct VIP names or VIP IP addresses. To determine the correct values, you can use nslookup to query host names and IP addresses.
  2. Retry the command odacli restore-node -g.

This issue is tracked with Oracle bug 34140344.

Error in restore node process in Data Preserving Reprovisioning

In the Data Preserving Reprovisioning process, during node restore, an error may be encountered.

The following error message may be displayed:
DCS-10045: groupNames are not unique.

This error occurs if the source Oracle Database Appliance is an OAK version. This is because on the DCS stack, the same operating system group is not allowed to be assigned two or more roles.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:

  1. Manually edit the file /opt/oracle/oak/restore/metadata/provisionInstance.json with unique group names for each role.
  2. Retry the command odacli restore-node -g.

This issue is tracked with Oracle bug 34042493.

Error messages in log entries in Data Preserving Reprovisioning

In the Data Preserving Reprovisioning process, during node restore, the log entries may display error messages though the overall status of the job is displayed as SUCCESS.

For Oracle Database Appliance running the DCS stack starting with Oracle Database Appliance release 12.2.1.4.0, the command odacli restore-node -d performs a set of ignorable tasks. Failure of these tasks does not affect the status of the overall job. The output of the command odacli describe-job may report such failures. These tasks are:
Restore of user created networks
Restore of object stores
Restore of NFS backup locations
Restore of backupconfigs
Relinking of backupconfigs to databases
Restore of backup reports

Even if these tasks fail, the overall status of the job is marked as SUCCESS.

Hardware Models

All Oracle Database Appliance hardware models being upgraded using the Data Preserving Reprovisioning process

Workaround

Investigate the failure using the dcs-agent.log, fix the errors, and then retry the command odacli restore-node -d.

This issue is tracked with Oracle bug 34512193.

Error in server patching

When patching an Oracle Database Appliance deployment that already has STIG V1R2 deployed, an error may be encountered.

On an Oracle Database Appliance deployment with a release earlier than 19.24, if the Security Technical Implementation Guidelines (STIG) V1R2 is already deployed, then when you patch to 19.24 or earlier and run the command odacli update-server -f version, an error may be displayed.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

The STIG V1R2 rule OL7-00-040420 tries to change the permission of the file /etc/ssh/ssh_host_rsa_key from '640' to '600', which causes the error. During patching, run the chmod 600 /etc/ssh/ssh_host_rsa_key command on both nodes.
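
For example, a minimal sketch to apply and then confirm the permission change; run it on both nodes (stat simply prints the resulting mode):
# chmod 600 /etc/ssh/ssh_host_rsa_key
# stat -c '%a %n' /etc/ssh/ssh_host_rsa_key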

This issue is tracked with Oracle bug 33168598.

AHF error in prepatch report for the update-dbhome command

When you patch the server to Oracle Database Appliance release 19.24, the odacli update-dbhome command may fail.

The following error message is displayed in the pre-patch report:
Verify the Alternate Archive    Failed    AHF-4940: One or more log archive 
Destination is Configured to              destination and alternate log archive
Prevent Database Hangs                    destination settings are not as recommended           

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:
  1. Run the odacli update-dbhome command with the -f option.
    /opt/oracle/dcs/bin/odacli update-dbhome --dbhomeid 7c67c5b4-f585-4ba9-865f-c719c63c0a6e -v 19.24.0.0.0 -f

This issue is tracked with Oracle bug 33144170.

Errors when running ORAchk or the odacli create-prepatchreport command

When you run ORAchk or the odacli create-prepatchreport command, an error is encountered.

The following error messages may be seen:
One or more log archive destination and alternate log archive destination settings are not as recommended 
Software home check failed 

Hardware Models

Oracle Database Appliance hardware models bare metal deployments

Workaround

Run the odacli update-dbhome, odacli create-prepatchreport, and odacli update-server commands with the -sko option. For example:
odacli update-dbhome -j -v 19.24.0.0.0 -i dbhome_id -sko
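
The -sko option can be appended to the other two commands in the same way. The following invocations are illustrative sketches only; the remaining arguments, such as -s for a server prepatch report, should match your normal usage:
odacli create-prepatchreport -s -v 19.24.0.0.0 -sko
odacli update-server -v 19.24.0.0.0 -sko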

This issue is tracked with Oracle bugs 30931017, 31631618, and 31921112.

Error in patching prechecks report

The patching prechecks report may display an error.

The following error message may be displayed:
Failure in the pre-patch report caused by “AHF-5190: operating system boot device order is not configured as recommended”

Hardware Models

Oracle Database Appliance X-7 hardware models

Workaround

Run the odacli update-server or odacli update-dbhome command with the -f option.

This issue is tracked with Oracle bug 33631256.

Error message displayed even when patching Oracle Database Appliance is successful

Although patching of Oracle Database Appliance was successful, an error message may be displayed.

The following error is seen when running the odacli update-dcscomponents command:
# time odacli update-dcscomponents -v 19.24.0.0.0 
DCS-10008:Failed to update DCScomponents: 19.24.0.0.0
Internal error while patching the DCS components : 
DCS-10231:Cannot proceed. Pre-checks for update-dcscomponents failed. Refer  
to /opt/oracle/dcs/log/-dcscomponentsPreCheckReport.log on node 1 for  
details.   

Hardware Models

All Oracle Database Appliance hardware models

Workaround

This is a timing issue with setting up the SSH equivalence.

Run the odacli update-dcscomponents command again and the operation completes successfully.

This issue is tracked with Oracle bug 32553519.

Server status not set to Normal when patching

When patching Oracle Database Appliance, an error is encountered.

When patching the appliance, the odacli update-server command fails with the following error:

DCS-10001:Internal error encountered: Server upgrade state is not NORMAL node_name 

Hardware Models

All Oracle Database Appliance hardware models

Workaround

  1. Run the command:
    Grid_home/bin/cluvfy stage -post crsinst -collect cluster -gi_upgrade -n all
  2. Ignore the following two warnings:
    Verifying OCR Integrity ...WARNING
    PRVG-6017 : OCR backup is located in the same disk group "+DATA" as OCR.
    
    Verifying Single Client Access Name (SCAN) ...WARNING
    PRVG-11368 : A SCAN is recommended to resolve to "3" or more IP
  3. Run the command again until the output displays only the two warnings above. The status of Oracle Clusterware should be Normal again.

  4. You can verify the status with the command:
    Grid_home/bin/crsctl query crs activeversion -f

This issue is tracked with Oracle bug 30099090.

Error when patching to 12.1.0.2.190716 Bundle Patch

When patching Oracle Database release 12.1.0.2 to Oracle Database 12.1.0.2.190716 Bundle Patch, an error is encountered.

The ODACLI job displays the following error:

DCS-10001:Internal error encountered: Failed to run SQL script: datapatch script. 

The datapatch log contains the entry "Prereq check failed, exiting without installing any patches.".

Hardware Models

All Oracle Database Appliance hardware models bare metal deployments

Workaround

Install the same patch again.

This issue is tracked with Oracle bugs 30026438 and 30155710.

Patching of M.2 drives not supported

Patching of M.2 drives (local disks SSDSCKJB48 and SSDSCKJB480G7) is not supported.

These drives are displayed when you run the odacli describe-component command. Patching is not supported for either of the two known versions, 0112 and 0121, of the M.2 disks.

Hardware Models

Oracle Database Appliance bare metal deployments

Workaround

None

This issue is tracked with Oracle bug 30249232.

Known Issues When Deploying Oracle Database Appliance

Understand the known issues when provisioning or deploying Oracle Database Appliance.

Error in creating Oracle AFD-enabled DB system

When creating a DB system with Oracle ASM Filter Driver (Oracle AFD), an error may be encountered.

Problem Description

When you create a DB system with Oracle AFD on Oracle Database Appliance release 19.22, with Oracle Grid Infrastructure or Oracle Database release 19.21 or earlier, then an error may be encountered at the "Install DB System" step.

Failure Message

The following error message is displayed in the database alert.log:

WARNING: group 2 (RECO) has missing disks
ORA-15040: diskgroup is incomplete
WARNING: group 2 is being dismounted

Command Details

# odacli create-dbsystem

Hardware Models

All Oracle Database Appliance hardware models running Oracle Grid Infrastructure 19.21

Workaround

This issue is fixed in Oracle Grid Infrastructure 19.22 Release Update (RU). Create the DB system using Oracle Grid Infrastructure and Oracle Database release 19.22.

You can also create the DB system without enabling Oracle AFD by specifying enableAFD=false in the DB system JSON file during DB system creation.

Do not patch or upgrade an existing Oracle AFD-enabled DB system with Oracle Grid Infrastructure or Oracle Database release 19.21 until the fix for bug 36114443 is available in the Oracle Grid Infrastructure and Oracle Database clone files.

Bug Number

This issue is tracked with Oracle bug 36300713.

Error in modifying the Oracle ASM port

When running the odacli modify-asmport command on Oracle Database Appliance, an error may be encountered.

Problem Description

After patching from Oracle Database Appliance release 19.23 to 19.24, if any DB system is running Oracle Database 23ai database released with Oracle Database Appliance release 19.23, an error message may be displayed when you run the odacli modify-asmport command.

Failure Message

When the odacli modify-asmport command is run, either of the following error messages may be displayed:

/crs endpoint not found
/asm endpoint not found

Task Level Failure Message

The job may fail at the Stop CRS on DB System(s) step. The complete details of the error are displayed in the Message section of the command output.

Command Details

# odacli modify-asmport

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Patch your DB system to Oracle Database Appliance release 19.24 and use the DB 23ai clones available with Oracle Database Appliance release 19.24.

Bug Number

This issue is tracked with Oracle bug 36879784.

Error in Oracle Data Guard operation after modifying the Oracle ASM port

When running the odacli modify-asmport command on Oracle Database Appliance configured with Oracle Data Guard, an error may be encountered.

Problem Description

If you run the odacli modify-asmport command on an appliance configured with Oracle Data Guard that uses MAX PROTECTION mode, then this could cause a disruption at the primary site, because the standby Oracle Clusterware is restarted as part of the Oracle ASM port change.

Failure Message

The following error message may be displayed in the alert logs for the database on the primary host:

ORA-16072: a minimum of one standby database destination is required
Followed by the message:
terminating the instance due to ORA error 16072

Task Level Failure Message

The job may fail at the Stop CRS on DB System(s) step. The complete details of the error are displayed in the Message section of the command output.

Command Details

# odacli modify-asmport

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Start the database instance on the primary host.

Bug Number

This issue is tracked with Oracle bug 36931905.

Error in database creation on multi-user access enabled system

When creating a database on multi-user access enabled system on Oracle Database Appliance, an error may be encountered.

Problem Description

When you create a database on a multi-user access enabled system, an error message may be displayed.

Failure Message

When the user name of the database owner contains both lowercase and uppercase letters, the error message may be as follows:

[jobid-74f31148-ebe0-4507-9296-b9ad4ca7e03b] - [FATAL] Error in Process: /u01/app/KvEl6/product/19.0.0.0/dbhome_2/bin/orapwd
    [jobid-74f31148-ebe0-4507-9296-b9ad4ca7e03b] - Enter password for SYS:
    [jobid-74f31148-ebe0-4507-9296-b9ad4ca7e03b] - OPW-00010: Could not create the password file.
    [jobid-74f31148-ebe0-4507-9296-b9ad4ca7e03b] - ORA-00600: internal error code, arguments: [kfzpCreate02], [0], [], [], [], [], [], [], [], [], [], []
    [jobid-74f31148-ebe0-4507-9296-b9ad4ca7e03b] - ORA-15260: permission denied on ASM disk group
    [jobid-74f31148-ebe0-4507-9296-b9ad4ca7e03b] - ORA-06512: at "SYS.X$DBMS_DISKGROUP", line 679
    [jobid-74f31148-ebe0-4507-9296-b9ad4ca7e03b] - ORA-06512: at line 2
When the user name of the database owner begins with a digit, the error message may be as follows:
PRCZ-4001 : failed to execute command "/u01/app/6RXNI/product/19.0.0.0/dbhome_15//bin/dbca" using the privileged execution plugin "odaexec" on nodes "scaoda901c7n1" within 5,000 seconds
PRCZ-2103 : Failed to execute command "/u01/app/6RXNI/product/19.0.0.0/dbhome_15//bin/dbca" on node "scaoda901c7n1" as user "6RXNI". Detailed error:
[FATAL] [DBT-05801] There are no ASM disk groups detected.
  CAUSE: ASM may not be configured, or ASM disk groups are not created yet.
  ACTION: Create ASM disk groups, or change the storage location to File System.
[FATAL] [DBT-05801] There are no ASM disk groups detected.
  CAUSE: ASM may not be configured, or ASM disk groups are not created yet.
  ACTION: Create ASM disk groups, or change the storage location to File System.

Command Details

# odacli create-database

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Do not start the custom user name with a digit, and do not use mixed-case letters in the custom user name.

Bug Number

This issue is tracked with Oracle bug 36878796.

Error in configuring Oracle ASR

When configuring Oracle ASR, an error may be encountered when registering Oracle ASR Manager due to an issue while contacting the transport server.

Failure Message

The following error message is displayed:

DCS-10045:Validation error encountered: Registration failed : Please check the agent logs for details.

Command Details

# odacli configure-asr

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Retry configuring Oracle ASR using the odacli configure-asr command.

Bug Number

This issue is tracked with Oracle bug 36363437.

Error in starting the DB System

When starting a DB system on an Oracle Database Appliance, an error may be encountered.

Problem Description

If the DB system VM has been undefined by running virsh undefine dbvm_name, then the odacli start-dbsystem command may fail.

Failure Message

The following error message may be displayed:
DCS-10001:Internal error encountered: error: failed to get domain 'dbvm_name'

Hardware Models

All Oracle Database Appliance hardware models running Oracle Database Appliance release 19.21

Workaround

Run virsh define /u05/app/sharedrepo/dbsystem/.ACFS/snaps/vm_dbvm_name/dbvm_name.xml to define the VM. Then start the DB system.
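
For example, a minimal sketch using the placeholders from the path above, assuming the DB system name is dbvm_name:
# virsh define /u05/app/sharedrepo/dbsystem/.ACFS/snaps/vm_dbvm_name/dbvm_name.xml
# odacli start-dbsystem -n dbvm_name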

Bug Number

This issue is tracked with Oracle bug 36051738.

Error in creating database

When creating a database on Oracle Database Appliance, an error may be encountered.

Problem Description

When creating a database on Oracle Database Appliance, the operation may fail after the createDatabaseByRHP task. However, the odacli list-databases command displays the status of the failed database as CONFIGURED.

Failure Message

When you run the odacli create-database command, the following error message is displayed:

DCS-10001:Internal error encountered: Failed to clear all listeners from database

Command Details

# odacli create-database

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Check the job description of the odacli create-database command using the odacli describe-job command. Fix the issue for the task failure in the odacli create-database command. Delete the database with the command odacli delete-database -n db_name and retry the odacli create-database command.
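
For example, a sketch of the retry sequence; the job ID, database name, and create-database arguments are hypothetical placeholders:
# odacli describe-job -i job_id
# odacli delete-database -n db_name
# odacli create-database -n db_name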

Bug Number

This issue is tracked with Oracle bug 34709091.

Error in creating two DB systems

When creating two DB systems concurrently in two different Oracle ASM disk groups, an error is encountered.

When attempting to start the DB systems, the following error message is displayed:
CRS-2672: Attempting to start 'vm_name.kvm' on 'oda_server'
CRS-5017: The resource action "vm_name.kvm start" encountered the following
error:
CRS-29200: The libvirt virtualization library encountered the following
error:
Timed out during operation: cannot acquire state change lock (held by
monitor=remoteDispatchDomainCreate)
. For details refer to "(:CLSN00107:)" in
"/u01/app/grid/diag/crs/<oda_server>/crs/trace/crsd_orarootagent_root.trc".
CRS-2674: Start of 'vm_name.kvm' on 'oda_server' failed
CRS-2679: Attempting to clean 'vm_name.kvm' on 'oda_server'
CRS-2681: Clean of 'vm_name.kvm' on 'oda_server' succeeded
CRS-4000: Command Start failed, or completed with errors.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Do not create two DB systems concurrently. Instead, complete the creation of one DB system and then create the other.

This issue is tracked with Oracle bug 33275630.

Error in adding JBOD

When you add a second JBOD to your Oracle Database Appliance deployment on which a DB system is running, an error is encountered.

The following error message is displayed:
ORA-15333: disk is not visible on client instance

Hardware Models

All Oracle Database Appliance hardware models bare metal and dbsystem

Workaround

Shut down the DB system before adding the second JBOD.
systemctl restart initdcsagent 

This issue is tracked with Oracle bug 32586762.

Error in provisioning appliance after running cleanup.pl

Errors encountered in provisioning the appliance after running cleanup.pl.

After running cleanup.pl, provisioning the appliance fails because of missing Oracle Grid Infrastructure image (IMGGI191100). The following error message is displayed:

DCS-10042:User oda-cliadmin cannot be authorized.

Hardware Models

All Oracle Database Appliance hardware models for bare metal deployments

Workaround

After running cleanup.pl, and before provisioning the appliance, update the repository as follows:

# odacli update-repository -f /gi

This issue is tracked with Oracle bug 32707387.

Error encountered after running cleanup.pl

Errors encountered in running odacli commands after running cleanup.pl.

After running cleanup.pl, when you try to use odacli commands, the following error is encountered:

DCS-10042:User oda-cliadmin cannot be authorized.

Hardware Models

All Oracle Database Appliance hardware models for bare metal deployments

Workaround

Run the following commands to set up the credentials for the user oda-cliadmin on the agent wallet:

# rm -rf /opt/oracle/dcs/conf/.authconfig  
# /opt/oracle/dcs/bin/setupAgentAuth.sh 

This issue is tracked with Oracle bug 29038717.

Errors in clone database operation

Clone database operation fails due to errors.

If the source database is single-instance or Oracle RAC One Node, or running on the remote node, the clone database operation fails, because the paths are not created correctly in the control file.

Clone database operation may also fail with errors if the source database creation time stamp is within 60 minutes of the clone operation.

Hardware Models

All Oracle Database Appliance high-availability hardware models for bare metal deployments

Workaround

Create the clone database from the source database instance that is running on the same node from which the clone database creation is triggered.

For Oracle Database 12c and later, synchronize the source database before the clone operation, by running the command:
SQL> alter system checkpoint;

This issue is tracked with Oracle bugs 29002563, 29002004, 29001906, 29001855, 29001631, 28995153, 28986643, 30309971, and 30228362.

Known Issues When Managing Oracle Database Appliance

Understand the known issues when managing or administering Oracle Database Appliance.

Error in upgrading Oracle Data Guard

When upgrading Oracle Data Guard, an error may be encountered.

Problem Description

If you configured Oracle Data Guard as the odaadmin user on a multi-user access enabled Oracle Database Appliance release 19.19 system, then this Oracle Data Guard configuration may not be displayed when you run the odacli list-dataguardstatus command. If you upgrade this system to Oracle Database Appliance release 19.23 using Data Preserving Reprovisioning, then the Validate Database Service presence step in the create-preupgradereport precheck may fail for the Oracle Data Guard database.

The following error message is displayed:
One or more pre-checks failed for [DB]

Command Details

# odacli create-preupgradereport 
# odacli describe-preupgradereport 

Task Level Failure Message

"The following services [TDG1yn_ro, TDG1yn_rw, Y6Z_ro, Y6Z_rw] created on database 
'TDG1yn' can result in a failure in 'detach-node'

Hardware Models

All Oracle Database Appliance hardware models X9-2, X8-2, and X7-2

Workaround

For each service listed, do the following:
  1. Stop the service reported:
    srvctl stop service -d db_unique_name -service service_name
  2. Remove the service:
    srvctl remove service -d db_unique_name -service service_name
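
For example, using the database and one of the service names from the failure message above (sketch):
srvctl stop service -d TDG1yn -service TDG1yn_ro
srvctl remove service -d TDG1yn -service TDG1yn_ro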

Bug Number

This issue is tracked with Oracle bug 36610040.

Error in deleting database home

When deleting a database home on Oracle Database Appliance, an error may be encountered.

Problem Description

When you delete a database home, the database home is not deleted completely. The subfolders and files exist in the corresponding database home location and the database home entry exists in the /u01/app/oraInventory/ContentsXML/inventory.xml file.

Failure Message

When the odacli update-database command is run, the following error message is displayed:

Failed to execute sqlpatch for database …

Command Details

# odacli delete-dbhome

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Before you run the odacli delete-dbhome command, confirm that the wOraDBversion_homeidx file exists in the /opt/oracle/rhp/RHPCheckpoints/ location on the same node where you run the command.

Bug Number

This issue is tracked with Oracle bug 36864228.

Error in running a job

When running a job, an error may be encountered.

Problem Description

Due to a distributed lock conflict during the DCS infrastructure connection callback, an error may be encountered when running a job.

Failure Message

The following error message is displayed:

DCS-10058:DCS agent is not running on all nodes.

Command Details

Any ODACLI command

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Restart the DCS agent service on each node sequentially, one node at a time:
# systemctl restart initdcsagent
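
For example, on each node in turn (sketch); wait for the agent to respond before moving to the next node:
# systemctl restart initdcsagent
# odacli ping-agent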

Bug Number

This issue is tracked with Oracle bug 36380550.

Error in configuring Oracle Data Guard

When configuring Oracle Data Guard on Oracle Database Appliance, an error may be encountered.

Problem Description

When you configure Oracle Data Guard on the second node of the standby system on an Oracle Database Appliance high-availability deployment, the operation may fail at step Configure Standby database (Standby site) in the task Reset Db sizing and hidden parameters for ODA best practice.

Command Details

odacli configure-dataguard

Hardware Models

All Oracle Database Appliance hardware models high-availability deployments

Workaround

Run odacli configure-dataguard on the first node of the standby system in the high-availability deployment.

Bug Number

This issue is tracked with Oracle bug 33401667.

Error in cleaning up a deployment

When cleaning up an Oracle Database Appliance deployment, an error is encountered.

During cleanup, shutdown of Clusterware fails because the NFS export service uses the Oracle ACFS-based clones repository.

Hardware Models

All Oracle Database Appliance hardware models with DB systems

Workaround

Follow these steps:
  1. Stop the NFS service on both nodes:
    service nfs stop
  2. Clean up the bare metal system. See the Oracle Database Appliance Deployment and User's Guide for your hardware model for the steps.

This issue is tracked with Oracle bug 33289742.

Error in display of file log path

File log paths are not displayed correctly on the console; however, all logs generated for a job are written to the correct paths.

Hardware Models

All Oracle Database Appliance hardware models with virtualized platform

Workaround

None.

This issue is tracked with Oracle bug 33580574.

Error in the enable apply process after upgrading databases

When running the enable apply process after upgrading databases in an Oracle Data Guard deployment, an error is encountered.

The following error message is displayed:
Error: ORA-16664: unable to receive the result from a member

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:
  1. Restart the standby database in upgrade mode:
    srvctl stop database -d <db_unique_name>
    Run the SQL*Plus command: STARTUP UPGRADE;
  2. Continue the enable apply process and wait for the log apply process to refresh.
  3. After some time, check the Data Guard status with the DGMGRL command:
    SHOW CONFIGURATION; 
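
A consolidated sketch of the steps above; db_unique_name is a placeholder, and the DGMGRL connection shown uses operating system authentication:
srvctl stop database -d db_unique_name
sqlplus / as sysdba
SQL> STARTUP UPGRADE;
SQL> exit
dgmgrl /
DGMGRL> SHOW CONFIGURATION;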

This issue is tracked with Oracle bug 32864100.

Error in updating Role after Oracle Data Guard operations

When performing operations with Oracle Data Guard on Oracle Database Appliance, an error is encountered in updating the Role.

The dbRole component described in the output of the odacli describe-database command is not updated after Oracle Data Guard switchover, failover, and reinstate operations on Oracle Database Appliance.

Hardware Models

All Oracle Database Appliance hardware models with Oracle Data Guard configuration

Workaround

Run the odacli update-registry -n db -f (or --force) command to update the database metadata. After the job completes, run the odacli describe-database command and verify that dbRole is updated.
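
For example (sketch; db_id is a hypothetical database ID for the describe-database command):
# odacli update-registry -n db -f
# odacli describe-database -i db_id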

This issue is tracked with Oracle bug 31378202.

Inconsistency in ORAchk summary and details report page

The ORAchk report summary on the Browser User Interface may show different counts of Critical, Failed, and Warning issues than the report detail page.

Hardware Models

Oracle Database Appliance hardware models bare metal deployments

Workaround

Ignore counts of Critical, Failed, and Warning issues in the ORAchk report summary on the Browser User Interface. Check the report detail page.

This issue is tracked with Oracle bug 30676674.

The odaeraser tool does not work if oakd is running in non-cluster mode

After cleaning up the deployment, the Secure Eraser tool does not work if oakd is running in non-cluster mode.

Hardware Models

All Oracle Database Appliance Hardware bare metal systems

Workaround

After cleanup of the deployment, oakd is started in non-cluster mode, and it cannot be stopped using the odaadmcli stop oak command. In such a case, if the Secure Eraser tool is run, then the odaeraser command fails.

Use the command odaadmcli shutdown oak to stop oakd.

This issue is tracked with Oracle bug 28547433.