4 Known Issues with Oracle Database Appliance in This Release

The following are known issues when deploying, updating, and managing Oracle Database Appliance in this release.

Known Issues When Patching Oracle Database Appliance

Understand the known issues when patching Oracle Database Appliance to this release.

Error in updating the operating system when patching the server

When patching the server, the operating system may not be updated.

The following error message is displayed:
DCS-10001:Internal error encountered: Failed to patch OS.
Run the following command:
rpm -q kernel-uek

If the output of this command displays multiple RPM names, then perform the workaround.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Remove the following RPMs:
# yum remove kernel-uek-4.14.35-1902.11.3.1.el7uek.x86_64
# yum remove kernel-uek-4.14.35-1902.301.1.el7uek.x86_64
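As a quick check after removing the RPMs, you can query the installed kernel RPMs again; a single kernel-uek RPM should remain:
# rpm -q kernel-uek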

This issue is tracked with Oracle bug 34154435.

Error in upgrading Oracle AFD-enabled DB system

When upgrading a DB system with Oracle ASM Filter Driver (Oracle AFD) during Data Preserving Reprovisioning, an error may be encountered.

Problem Description

When you upgrade a DB system with Oracle AFD using Data Preserving Reprovisioning to Oracle Database Appliance release 19.22, with Oracle Grid Infrastructure or Oracle Database release 19.21 or earlier, then an error may be encountered at the "Restore node - DPR" step.

Failure Message

The following error message is displayed in the database alert.log:

ORA-00600: internal error code, arguments: [kfnRConnect!ascname], [DATA], [], [], [], [], [], [], [], [], [], []

Hardware Models

All Oracle Database Appliance hardware models X9-2 and earlier running Oracle Grid Infrastructure 19.21

Workaround

Do not upgrade the existing Oracle AFD-enabled DB system with Oracle Grid Infrastructure or Oracle Database release 19.21 until the fix for bug 36114443 is available in the Oracle Grid Infrastructure and Oracle Database clone files.

Bug Number

This issue is tracked with Oracle bug 36296849.

Incorrect job status during Data Preserving Reprovisioning

When upgrading your deployment, an error may be encountered.

Problem Description

When a job is marked as Success, it means that all of its tasks have completed successfully and none of them are still running. However, there may be cases where the odacli describe-job command result incorrectly displays a task in a running state, even though the job itself has successfully completed.

Command Details

# odacli describe-job

Hardware Models

All Oracle Database Appliance hardware models

Workaround

None. Ignore the error.

Bug Number

This issue is tracked with Oracle bug 35970784.

Error in upgrading a database

When upgrading a database, an error may be encountered.

Problem Description

When you create Oracle ASM databases, the RECO directory may not have been created on systems provisioned with the OAK stack. This directory is created when the first RECO record is written. After successfully upgrading these systems using Data Preserving Reprovisioning to Oracle Database Appliance release 19.15 or later, if you attempt to upgrade the database, an error message may be displayed.

Failure Message

When the odacli upgrade-database command is run, the following error message is displayed:

# odacli upgrade-database -i 16288932-61c6-4a9b-beb0-4eb19d95b2bd -to b969dd9b-f9cb-4e49-8e0d-575a0940d288
DCS-10001:Internal error encountered: dbStorage metadata not in place:
DCS-12013:Metadata validation error encountered: dbStorage metadata missing
Location info for database database_unique_name..

Command Details

# odacli upgrade-database

Hardware Models

All Oracle Database Appliance X6-2HA and X5-2 hardware models

Workaround

  1. Verify that the odacli list-dbstorages command displays null for the redo location for the database that reported the error. For example, the following output displays a null or empty value for the database unique name F.
    # odacli list-dbstorages
    
    ID                                     Type   DBUnique Name  Status     
    Destination Location  Total      Used       Available      
    ---------------------------------------- ------ --------------------
    ...
    ...
    ...
    198678d9-c7c7-4e74-9bd6-004485b07c14     ASM    F            CONFIGURED   
    DATA    +DATA/F  4.89 TB    1.67 GB    4.89 TB                                                                   
    REDO    +REDO/F  183.09 GB  3.05 GB    180.04 GB                                                                                
    RECO             8.51 TB              
    ...
    ...
    ...

    In the above output, the RECO record has a null value.

  2. Manually create the RECO directory for this database. If the database unique name is dbuniq, then run the asmcmd command as the grid user.
    asmcmd
  3. Run the mkdir command.
    asmcmd> mkdir +RECO/dbuniq
  4. Verify that the odacli list-dbstorages command output does not display a null or empty value for the database.
  5. Rerun the odacli upgrade-database command.

Bug Number

This issue is tracked with Oracle bug 34923078.

Error in database patching

When patching a database on Oracle Database Appliance, an error may be encountered.

Problem Description

When applying the datapatch during database patching on Oracle Database Appliance, an error message may be displayed.

Failure Message

When the odacli update-database command is run, the following error message is displayed:

Failed to execute sqlpatch for database …

Command Details

# odacli update-database

Hardware Models

All Oracle Database Appliance hardware models

Workaround

  1. Run the following SQL*Plus command:
    alter system set nls_sort='BINARY' SCOPE=SPFILE;
  2. Restart the database using the srvctl command.
  3. Retry applying the datapatch with dbhome/OPatch/datapatch -verbose -db dbUniqueName (see the sketch after these steps).
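The following is a minimal sketch of steps 2 and 3, assuming db_unique_name is a placeholder for your database unique name and dbhome_path is a placeholder for the database home location:
# srvctl stop database -d db_unique_name
# srvctl start database -d db_unique_name
# dbhome_path/OPatch/datapatch -verbose -db db_unique_name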

Bug Number

This issue is tracked with Oracle bug 35060742.

Component version not updated after patching

After patching the server to Oracle Database Appliance release 19.16, the odacli describe-component command does not display the correct Intel Model 0x1528 Ethernet Controller version, if the current version is 8000047B or 8000047C.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Manually update the Ethernet controllers to 800005DD or 800005DE using the fwupdate command.

This issue is tracked with Oracle bug 34402352.

Error in restore node process in Data Preserving Reprovisioning

In the Data Preserving Reprovisioning process, during node restore, an error may be encountered.

If incorrect VIP names or VIP IP addresses are configured, then the detach completes successfully but the command odacli restore-node -g displays a validation error. This is because the earlier releases did not validate VIP names or VIP IP addresses before provisioning.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:

  1. Manually edit the file /opt/oracle/oak/restore/metadata/provisionInstance.json with the correct VIP names or VIP IP addresses. To verify the correct hostnames and IP addresses, you can query them with nslookup.
  2. Retry the command odacli restore-node -g (see the sketch after these steps).
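The following is a minimal sketch of the verification and retry, assuming vip_name is a placeholder for one of the configured VIP names:
# nslookup vip_name
# odacli restore-node -g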

This issue is tracked with Oracle bug 34140344.

Error in restore node process in Data Preserving Reprovisioning

In the Data Preserving Reprovisioning process, during node restore, an error may be encountered.

The following error message may be displayed:
DCS-10045: groupNames are not unique.

This error occurs if the source Oracle Database Appliance is an OAK version. This is because on the DCS stack, the same operating system group is not allowed to be assigned two or more roles.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:

  1. Manually edit the file /opt/oracle/oak/restore/metadata/provisionInstance.json with unique group names for each role.
  2. Retry the command odacli restore-node -g.

This issue is tracked with Oracle bug 34042493.

Error messages in log entries in Data Preserving Reprovisioning

In the Data Preserving Reprovisioning process, during node restore, the log entries may display error messages though the overall status of the job is displayed as SUCCESS.

For Oracle Database Appliance running the DCS stack starting with Oracle Database Appliance release 12.2.1.4.0, the command odacli restore-node -d performs a set of ignorable tasks. Failure of these tasks does not affect the status of the overall job. The output of the command odacli describe-job may report such failures. These tasks are:
  • Restore of user created networks
  • Restore of object stores
  • Restore of NFS backup locations
  • Restore of backupconfigs
  • Relinking of backupconfigs to databases
  • Restore of backup reports

Even if these tasks fail, the overall status of the job is marked as SUCCESS.

Hardware Models

All Oracle Database Appliance hardware models being upgraded using the Data Preserving Reprovisioning process

Workaround

Investigate the failure using the dcs-agent.log, fix the errors, and then retry the command odacli restore-node -d.

This issue is tracked with Oracle bug 34512193.

Error in server patching

When patching Oracle Database Appliance which already has STIG V1R2 deployed, an error may be encountered.

On an Oracle Database Appliance deployment with release earlier than 19.23, if the Security Technical Implementation Guidelines (STIG) V1R2 is already deployed, then when you patch to 19.23 or earlier, and run the command odacli update-server -f version, an error may be displayed.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

The STIG V1R2 rule OL7-00-040420 tries to change the permission of the file /etc/ssh/ssh_host_rsa_key from '640' to '600', which causes the error. During patching, run the command chmod 600 /etc/ssh/ssh_host_rsa_key on both nodes.

This issue is tracked with Oracle bug 33168598.

AHF error in prepatch report for the update-dbhome command

When you patch the server to Oracle Database Appliance release 19.23, the odacli update-dbhome command may fail.

The following error message is displayed in the pre-patch report:
Verify the Alternate Archive    Failed    AHF-4940: One or more log archive 
Destination is Configured to              destination and alternate log archive
Prevent Database Hangs                    destination settings are not as recommended           

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:
  1. Run the odacli update-dbhome command with the -f option.
    /opt/oracle/dcs/bin/odacli update-dbhome --dbhomeid 7c67c5b4-f585-4ba9-865f-c719c63c0a6e -v 19.23.0.0.0 -f

This issue is tracked with Oracle bug 33144170.

Errors when running ORAchk or the odacli create-prepatchreport command

When you run ORAchk or the odacli create-prepatchreport command, an error is encountered.

The following error messages may be seen:
One or more log archive destination and alternate log archive destination settings are not as recommended 
Software home check failed 

Hardware Models

Oracle Database Appliance bare metal deployments

Workaround

Run the odacli update-dbhome, odacli create-prepatchreport, odacli update-server commands with the -sko option. For example:
odacli update-dbhome -j -v 19.23.0.0.0 -i dbhome_id -sko

This issue is tracked with Oracle bugs 30931017, 31631618, and 31921112.

Error in patching prechecks report

The patching prechecks report may display an error.

The following error message may be displayed:
Failure in the pre-patch report caused by “AHF-5190: operating system boot device order is not configured as recommended”

Hardware Models

Oracle Database Appliance X7-2 hardware models

Workaround

Run the odacli update-server or odacli update-dbhome command with the -f option.
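For example, a minimal sketch of the forced server update; the release version shown is a placeholder for the version you are patching to:
# odacli update-server -v 19.23.0.0.0 -f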

This issue is tracked with Oracle bug 33631256.

Error message displayed even when patching Oracle Database Appliance is successful

Although patching of Oracle Database Appliance was successful, an error message may be displayed.

The following error is seen when running the odacli update-dcscomponents command:
# time odacli update-dcscomponents -v 19.23.0.0.0 
DCS-10008:Failed to update DCScomponents: 19.23.0.0.0
Internal error while patching the DCS components : 
DCS-10231:Cannot proceed. Pre-checks for update-dcscomponents failed. Refer  
to /opt/oracle/dcs/log/-dcscomponentsPreCheckReport.log on node 1 for  
details.   

Hardware Models

All Oracle Database Appliance hardware models

Workaround

This is a timing issue with setting up the SSH equivalence.

Run the odacli update-dcscomponents command again and the operation completes successfully.

This issue is tracked with Oracle bug 32553519.

Server status not set to Normal when patching

When patching Oracle Database Appliance, an error is encountered.

When patching the appliance, the odacli update-server command fails with the following error:

DCS-10001:Internal error encountered: Server upgrade state is not NORMAL node_name 

Hardware Models

All Oracle Database Appliance hardware models

Workaround

  1. Run the command:
    Grid_home/bin/cluvfy stage -post crsinst -collect cluster -gi_upgrade -n all
  2. Ignore the following two warnings:
    Verifying OCR Integrity ...WARNING
    PRVG-6017 : OCR backup is located in the same disk group "+DATA" as OCR.
    
    Verifying Single Client Access Name (SCAN) ...WARNING
    PRVG-11368 : A SCAN is recommended to resolve to "3" or more IP
  3. Run the command again until the output displays only the two warnings above. The status of Oracle Clusterware should then be Normal again.

  4. You can verify the status with the command:
    Grid_home/bin/crsctl query crs activeversion -f

This issue is tracked with Oracle bug 30099090.

Error when patching to 12.1.0.2.190716 Bundle Patch

When patching Oracle Database release 12.1.0.2 to Oracle Database 12.1.0.2.190716 Bundle Patch, an error is encountered.

The ODACLI job displays the following error:

DCS-10001:Internal error encountered: Failed to run SQL script: datapatch script. 

The datapatch log contains the entry "Prereq check failed, exiting without installing any patches.".

Hardware Models

All Oracle Database Appliance hardware models (bare metal deployments)

Workaround

Install the same patch again.

This issue is tracked with Oracle bugs 30026438 and 30155710.

Patching of M.2 drives not supported

Patching of M.2 drives (local disks SSDSCKJB48 and SSDSCKJB480G7) is not supported.

These drives are displayed when you run the odacli describe-component command. Patching is not supported for either of the two known M.2 disk versions, 0112 and 0121.

Hardware Models

Oracle Database Appliance bare metal deployments

Workaround

None

This issue is tracked with Oracle bug 30249232.

Known Issues When Deploying Oracle Database Appliance

Understand the known issues when provisioning or deploying Oracle Database Appliance.

Error in creating Oracle AFD-enabled DB system

When creating a DB system with Oracle ASM Filter Driver (Oracle AFD), an error may be encountered.

Problem Description

When you create a DB system with Oracle AFD on Oracle Database Appliance release 19.22, with Oracle Grid Infrastructure or Oracle Database release 19.21 or earlier, then an error may be encountered at the "Install DB System" step.

Failure Message

The following error message is displayed in the database alert.log:

WARNING: group 2 (RECO) has missing disks
ORA-15040: diskgroup is incomplete
WARNING: group 2 is being dismounted

Command Details

# odacli create-dbsystem

Hardware Models

All Oracle Database Appliance hardware models running Oracle Grid Infrastructure 19.21

Workaround

This issue is fixed in Oracle Grid Infrastructure 19.22 Release Update (RU). Create the DB system using Oracle Grid Infrastructure and Oracle Database release 19.22.

You can also create the DB system without enabling Oracle AFD by specifying enableAFD=false in the DB system JSON file during DB system creation.
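The following is a minimal illustrative fragment; the exact placement of the key in the DB system JSON file depends on the template version you are using:
"enableAFD": false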

Do not patch or upgrade the existing Oracle AFD-enabled DB system with Oracle Grid Infrastructure or Oracle Database release 19.21 until the fix for bug 36114443 is available in the Oracle Grid Infrastructure and Oracle Database clone files.

Bug Number

This issue is tracked with Oracle bug 36300713.

Error in creating a DB system

When creating a DB system, an error may be encountered.

Problem Description

If a database with the same DB name but a different DB unique name is present in another DB system, then the create-dbsystem process may fail with the following error:
DCS-12200:The resource of type Database with name \"TDG1Qs\" already exists in Database System: n1

You can reuse the DB name across bare metal and DB systems when you create a database on an already provisioned DB system, but not when you create the DB system itself.

Command Details

# odacli create-dbsystem

Hardware Models

All Oracle Database Appliance hardware models

Workaround

To reuse a DB name across bare metal and DB systems, create the database, or restore it with the odacli irestore-database command, as needed, on an already provisioned DB system.

Bug Number

This issue is tracked with Oracle bug 36613023.

Error in creating a VM

When creating an application VM, an error may be encountered.

Problem Description

When creating an application VM with an ISO image as the source, and --extra-args option in the odacli create-vm command, the operation may fail with the following error:
DCS-10001:Internal error encountered: ERROR Kernel arguments are only supported with location or kernel installs.

Command Details

# odacli create-vm

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Use an installation tree as the source instead of an ISO image when specifying the --extra-args option in the odacli create-vm command.

Bug Number

This issue is tracked with Oracle bug 36626987.

Error in configuring Oracle ASR

When configuring Oracle ASR, an error may be encountered when registering Oracle ASR Manager due to an issue while contacting the transport server.

Failure Message

The following error message is displayed:

DCS-10045:Validation error encountered: Registration failed : Please check the agent logs for details.

Command Details

# odacli configure-asr

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Retry configuring Oracle ASR using the odacli configure-asr command.

Bug Number

This issue is tracked with Oracle bug 36363437.

Error in creating a DB system

If the customRoleSeparation field is not present in the DB system creation template, then an error may be encountered when creating the DB system.

Problem Description

When creating a DB system, the following error message may be displayed:
DCS-10001:Password ******** 'grid' is not specified

Command Details

# odacli create-dbsystem

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Add the customRoleSeparation field in the DB system creation template.

Bug Number

This issue is tracked with Oracle bug 36305068.

Error in attaching or detaching a vnetwork

When running an odacli modify-dbsystem job to attach or detach a vnetwork, an error may be encountered.

Failure Message

The following error message is displayed:

BM error: DCS-10001:Internal error encountered: Error creating job 'Create network in DB System 'name'.
DB System error: DCS-10001:Internal error encountered: DCS agent is not running on all nodes.

Command Details

# odacli modify-dbsystem

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Retry the odacli modify-dbsystem command without specifying other options that restart the DB system VMs, such as -m,--memory.

Bug Number

This issue is tracked with Oracle bug 36370497.

Error in starting the DB System

When starting a DB system on an Oracle Database Appliance, an error may be encountered.

Problem Description

If the DB system VM is undefined using virsh undefine dbvm_name, then the odacli start-dbsystem command may fail to run.

Failure Message

The following error message may be displayed:
DCS-10001:Internal error encountered: error: failed to get domain 'dbvm_name'

Hardware Models

All Oracle Database Appliance hardware models running Oracle Database Appliance release 19.21

Workaround

Run virsh define /u05/app/sharedrepo/dbsystem/.ACFS/snaps/vm_dbvm_name/dbvm_name.xml to define the VM. Then start the DB system.
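For example, a minimal sketch, assuming dbvm_name and dbsystem_name are placeholders, and assuming that the -n option of the odacli start-dbsystem command selects the DB system by name:
# virsh define /u05/app/sharedrepo/dbsystem/.ACFS/snaps/vm_dbvm_name/dbvm_name.xml
# odacli start-dbsystem -n dbsystem_name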

Bug Number

This issue is tracked with Oracle bug 36051738.

Error in creating a DB system

When creating a DB system, an error may be encountered.

Problem Description

When creating a DB system, the following errors may be encountered:
  • The odacli create-dbsystem job may be stuck in the running status for a long time.
  • Other DB system or application VM lifecycle operations such as create, start, or stop VM jobs may be stuck in the running status for a long time.
  • Any virsh command, such as virsh list, may not respond.
  • The command ps -ef | grep libvirtd displays that there are two libvirtd processes. For example:
    # ps -ef |grep libvirtd
    root      5369     1  0 05:27 ?        00:00:03 /usr/sbin/libvirtd
    root     27496  5369  0 05:29 ?        00:00:00 /usr/sbin/libvirtd  <<<

    The second libvirtd process (pid 27496) is stuck and causes the job to hang.

Command Details

# odacli create-dbsystem

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Kill the second libvirtd process, that is, the one spawned by the first libvirtd process (pid 27496 in the example above).
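For example, a hedged sketch; the PID shown is the one from the example output above, so substitute the PID of the child libvirtd process reported on your system:
# ps -ef | grep libvirtd
# kill 27496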

Bug Number

This issue is tracked with Oracle bug 34715675.

Error when upgrading DB systems with Data Preserving Reprovisioning

When upgrading your DB systems during Data Preserving Reprovisioning, an error may be encountered.

Problem Description

If you created DB systems on Oracle Database Appliance release 19.16 or earlier, and patched your DB systems to Oracle Database Appliance release 19.19 or 19.20 without patching to 19.17 or 19.18, and upgraded your bare metal system to Oracle Database Appliance release 19.21, you may encounter an error when updating the DCS admin on the DB system during the DB system upgrade using Data Preserving Reprovisioning.

Failure Message

When upgrading DB systems using Data Preserving Reprovisioning, the following error message is displayed:

DCS-10001:Internal error encountered: Failed to update dcs-admin-19.21.0.0.0_LINUX.X64_DATE.x86_64.rpm on node NODENAME
Found RPM release version: 19.21.0.0.0
Validating dcs-admin version
/bin/sh: /opt/oracle/oak/pkgrepos/dcsadmin/19.21.0.0.0/dcsadminversioncheck.sh: Permission denied
Current verison 19.20.0.0.0 cannot be patched to 19.21.0.0.0

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Update the /etc/exports file on the bare metal system as follows:
  1. Check the /etc/exports file for the IP address with the incorrect export options. The IP address with the issue does not contain the no_root_squash export option, for example, ASM_IP1:/opt/oracle/oak/pkgrepos.
  2. Unexport ASM_IP1.
    1. Locate the string to unexport:
      grep "/opt/oracle/oak/pkgrepos" /var/lib/nfs/etab | awk -F "(" '{print $1}' | awk '{print $2":"$1}' | grep ASM_IP1

      The line is in the format 192.168.17.X:/opt/oracle/oak/pkgrepos.

    2. Run an unexport with the IP address:
      exportfs -u ASM_IP1:/opt/oracle/oak/pkgrepos
  3. Modify the /etc/exports file to add the no_root_squash option. Edit the file, find the row that contains ASM_IP1, and change the export options for that line from (ro,sync,no_subtree_check,crossmnt) to (ro,sync,no_subtree_check,crossmnt,no_root_squash).
  4. Export ASM_IP1 again (a verification sketch follows these steps):
     exportfs ASM_IP1:/opt/oracle/oak/pkgrepos 
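As a quick verification after re-exporting, you can re-read the effective export options; the no_root_squash option should now appear for the ASM_IP1 entry:
# exportfs -v | grep /opt/oracle/oak/pkgrepos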

Bug Number

This issue is tracked with Oracle bug 36124601.

Error in creating database

When creating a database on Oracle Database Appliance, an error may be encountered.

Problem Description

When creating a database on Oracle Database Appliance, the operation may fail after the createDatabaseByRHP task. However, the odacli list-databases command displays the status as CONFIGURED for the failed database in the job results.

Failure Message

When you run the odacli create-database command, the following error message is displayed:

DCS-10001:Internal error encountered: Failed to clear all listeners from database

Command Details

# odacli create-database

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Check the details of the failed odacli create-database job using the odacli describe-job command and fix the cause of the task failure. Then delete the database with the command odacli delete-database -n db_name and retry the odacli create-database command, as in the sketch below.
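The following is a minimal sketch of that sequence, assuming job_id and db_name are placeholders for the failed job identifier and the database name, and that the database is recreated with its original options:
# odacli describe-job -i job_id
# odacli delete-database -n db_name
# odacli create-database -n db_name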

Bug Number

This issue is tracked with Oracle bug 34709091.

Error in creating two DB systems

When creating two DB systems concurrently in two different Oracle ASM disk groups, an error is encountered.

When attempting to start the DB systems, the following error message is displayed:
CRS-2672: Attempting to start 'vm_name.kvm' on 'oda_server'
CRS-5017: The resource action "vm_name.kvm start" encountered the following
error:
CRS-29200: The libvirt virtualization library encountered the following
error:
Timed out during operation: cannot acquire state change lock (held by
monitor=remoteDispatchDomainCreate)
. For details refer to "(:CLSN00107:)" in
"/u01/app/grid/diag/crs/<oda_server>/crs/trace/crsd_orarootagent_root.trc".
CRS-2674: Start of 'vm_name.kvm' on 'oda_server' failed
CRS-2679: Attempting to clean 'vm_name.kvm' on 'oda_server'
CRS-2681: Clean of 'vm_name.kvm' on 'oda_server' succeeded
CRS-4000: Command Start failed, or completed with errors.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Do not create two DB systems concurrently. Instead, complete the creation of one DB system and then create the other.

This issue is tracked with Oracle bug 33275630.

Error in creating DB system

When creating a DB system on Oracle Database Appliance, an error may be encountered.

When running the odacli create-dbsystem command, the following error message may be displayed:
DCS-10001:Internal error encountered: ASM network is not online in all nodes

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:
  1. Manually bring the offline resources online:
    crsctl start res -all
  2. Run the odacli create-dbsystem command.

This issue is tracked with Oracle bug 33784937.

Error in adding JBOD

When you add a second JBOD to your Oracle Database Appliance deployment on which a DB system is running, an error is encountered.

The following error message is displayed:
ORA-15333: disk is not visible on client instance

Hardware Models

All Oracle Database Appliance hardware models, bare metal and DB system deployments

Workaround

Shut down the DB system before adding the second JBOD, then restart the DCS agent:
# systemctl restart initdcsagent 

This issue is tracked with Oracle bug 32586762.

Error in provisioning appliance after running cleanup.pl

Errors are encountered in provisioning the appliance after running cleanup.pl.

After running cleanup.pl, provisioning the appliance fails because of missing Oracle Grid Infrastructure image (IMGGI191100). The following error message is displayed:

DCS-10042:User oda-cliadmin cannot be authorized.

Hardware Models

All Oracle Database Appliance hardware models for bare metal deployments

Workaround

After running cleanup.pl, and before provisioning the appliance, update the repository as follows:

# odacli update-repository -f /path_to_gi_clone_file 

This issue is tracked with Oracle bug 32707387.

Error encountered after running cleanup.pl

Errors are encountered when running odacli commands after running cleanup.pl.

After running cleanup.pl, when you try to use odacli commands, the following error is encountered:

DCS-10042:User oda-cliadmin cannot be authorized.

Hardware Models

All Oracle Database Appliance hardware models for bare metal deployments

Workaround

Run the following commands to set up the credentials for the user oda-cliadmin on the agent wallet:

# rm -rf /opt/oracle/dcs/conf/.authconfig  
# /opt/oracle/dcs/bin/setupAgentAuth.sh 

This issue is tracked with Oracle bug 29038717.

Errors in clone database operation

Clone database operation fails due to errors.

If the source database is single-instance or Oracle RAC One Node, or running on the remote node, the clone database operation fails, because the paths are not created correctly in the control file.

The clone database operation may also fail with errors if the source database creation time stamp is too close to the clone operation (within 60 minutes of database creation).

Hardware Models

All Oracle Database Appliance high-availability hardware models for bare metal deployments

Workaround

Create the clone database from the source database instance that is running on the same node from which the clone database creation is triggered.

For Oracle Database 12c and later, synchronize the source database before the clone operation, by running the command:
SQL> alter system checkpoint;

This issue is tracked with Oracle bugs 29002563, 29002004, 29001906, 29001855, 29001631, 28995153, 28986643, 30309971, and 30228362.

Known Issues When Managing Oracle Database Appliance

Understand the known issues when managing or administering Oracle Database Appliance.

Error in configuring Oracle Data Guard in a multi-user access enabled deployment

When configuring Oracle Data Guard in a multi-user access enabled deployment, an error may be encountered.

Problem Description

When you configure Oracle Data Guard in a multi-user access enabled deployment as the ODA-ADMINISTRATOR user, the operation may fail at step Configure Standby database (Standby site).

Failure Message

The following error message may be displayed:
DCS-10001:Internal error encountered: Unable to populate standby database metadata.

Command Details

odacli configure-dataguard

Hardware Models

All Oracle Database Appliance hardware models in a multi-user access enabled deployment

Workaround

On a multi-user access enabled deployment, configure Oracle Data Guard as a user with the ODA-DB role and the System user type, for example, the yoracle user in the following procedure. If the primary system is multi-user access enabled, make sure the primary database is created with this user. If the standby system is multi-user access enabled, make sure the standby database is restored with this user.
  1. Obtain the ODA-DB user name on the multi-user access enabled system:
    [odaadmin@scaoda9l006 ~]$ odacli list-users
    
    ID                                       DCS User Name   OS User Name   Role(s)    Account Status User Type      
    ---------------------------------------- --------------- --------------------------------------------------
    ...
    8564aba2-94b9-4607-8c4f-2cda3bdc6cb5     odaadmin        odaadmin   ODA-ADMINISTRATOR   Active   System          
    d9ae7f70-b294-42c1-881a-5f619ec2a851     yoracle         yoracle    ODA-DB              Active   System  
    
  2. Switch to the ODA-DB user and configure Oracle Data Guard on the primary and standby systems:
    [yoracle@oda1 ~] su - yoracle
    [yoracle@oda1 ~]$ odacli create-database -n test -u ptest -bn f1 -bp
    [yoracle@oda1 ~]$ odacli create-backup -bt Regular-L0 -n test
    [yoracle@oda1 ~]$ odacli irestore-database -r backup_report.json -ro STANDBY -bp -on f1 -u stest
    [yoracle@oda1 ~]$ odacli configure-dataguard
    Standby site address: oda2
    BUI username for Standby site. If Multi-user Access is disabled on Standby 
    site, enter 'oda-admin'; otherwise, enter the name of the user who has
    irestored the Standby database (default: oda-admin): yoracle
    BUI password for Standby site:
    Database name for Data Guard configuration: test
    Primary database SYS password:
    ******************************************************************************
    *************
    Data Guard default settings
    Primary site network for Data Guard configuration: Public-network
    Standby site network for Data Guard configuration: Public-network
    Primary database listener port (TCP): 1521
    Standby database listener port (TCP): 1521
    Transport type: ASYNC
    Protection mode: MAX_PERFORMANCE
    Data Guard configuration name: ptest_stest
    Active Data Guard: disabled
    Do you want to edit this Data Guard configuration? (Y/N, default:N):
    Standby database's SYS password will be set to Primary database's after Data
    Guard configuration. Ignore warning and proceed with Data Guard
    configuration? (Y/N, default:N): y
    ******************************************************************************
    *************
    Configure Data Guard ptest_stest started
    ******************************************************************************
    *************
    Step 1: Validate Data Guard configuration request (Primary site)
    ...
    ******************************************************************************
    *************
    Step 11: Create Data Guard status (Standby site)
    Description: DG Status operation for db test - NewDgconfig
    Job ID: e6b13275-9450-4650-8187-b33f2dd6480f
    Started May 16, 2023 00:52:33 AM IST
    Create Data Guard status
    Finished May 16, 2023 00:52:35 AM IST
    ******************************************************************************
    *************
    Configure Data Guard ptest_stest completed
    ******************************************************************************
    *************

Bug Number

This issue is tracked with Oracle bug 35389339.

Error in upgrading Oracle Data Guard

When upgrading Oracle Data Guard, an error may be encountered.

Problem Description

If you configured Oracle Data Guard on a multi-user access enabled Oracle Database Appliance release 19.19 system as the odaadmin user, then this Oracle Data Guard configuration may not be displayed when you run the odacli list-dataguardstatus command. If you upgrade this system to Oracle Database Appliance release 19.23 using Data Preserving Reprovisioning, then the Validate Database Service presence step in the create-preupgradereport precheck may fail for the Oracle Data Guard database.

The following error message is displayed:
One or more pre-checks failed for [DB]

Command Details

# odacli create-preupgradereport 
# odacli describe-preupgradereport 

Task Level Failure Message

"The following services [TDG1yn_ro, TDG1yn_rw, Y6Z_ro, Y6Z_rw] created on database 
'TDG1yn' can result in a failure in 'detach-node'

Hardware Models

Oracle Database Appliance X9-2, X8-2, and X7-2 hardware models

Workaround

For each service listed, do the following:
  1. Stop the service reported:
    srvctl stop service -d db_unique_name -service service_name
  2. Remove the service:
    srvctl remove service -d db_unique_name -service service_name

Bug Number

This issue is tracked with Oracle bug 36610040.

Error in running a job

When running a job, an error may be encountered.

Problem Description

Due to distributed lock conflict during DCS infrastructure connection callback, an error may be encountered when running a job.

Failure Message

The following error message is displayed:

DCS-10058:DCS agent is not running on all nodes.

Command Details

Any ODACLI command

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Restart the DCS agent service on each node in a sequential manner, one after another:
# systemctl restart initdcsagent

Bug Number

This issue is tracked with Oracle bug 36380550.

Error in configuring Oracle Data Guard

When configuring Oracle Data Guard on Oracle Database Appliance, an error may be encountered.

Problem Description

When you configure Oracle Data Guard on the second node of the standby system on an Oracle Database Appliance high-availability deployment, the operation may fail at step Configure Standby database (Standby site) in the task Reset Db sizing and hidden parameters for ODA best practice.

Command Details

odacli configure-dataguard

Hardware Models

All Oracle Database Appliance high-availability hardware models

Workaround

Run odacli configure-dataguard on the first node of the standby system in the high-availability deployment.

Bug Number

This issue is tracked with Oracle bug 33401667.

Error in backup of database

When backing up a database on Oracle Database Appliance, an error is encountered.

After successful failover, running the command odacli create-backup on new primary database fails with the following message:
DCS-10001:Internal error encountered: Unable to get the
rman command status commandid:xxx
output:STATUS
-------------------------
[COMPLETED WITH WARNINGS] error:.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:
  1. On the new primary database, connect to RMAN as oracle and edit the archivelog deletion policy.
    rman target /
    RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO 'SBT_TAPE';
  2. On the new primary database, as the root user, take a backup:
    odacli create-backup -in db_name -bt backup_type

This issue is tracked with Oracle bug 33181168.

Error in cleaning up a deployment

When cleaning up an Oracle Database Appliance deployment, an error is encountered.

During cleanup, shutdown of Oracle Clusterware fails because the NFS export service uses the Oracle ACFS-based clone repository.

Hardware Models

All Oracle Database Appliance hardware models with DB systems

Workaround

Follow these steps:
  1. Stop the NFS service on both nodes:
    service nfs stop
  2. Clean up the bare metal system. See the Oracle Database Appliance Deployment and User's Guide for your hardware model for the steps.

This issue is tracked with Oracle bug 33289742.

Error in display of file log path

File log paths are not displayed correctly on the console; however, all logs generated for a job are written to the correct paths.

Hardware Models

All Oracle Database Appliance hardware models with virtualized platform

Workaround

None.

This issue is tracked with Oracle bug 33580574.

Error in reinstate operation on Oracle Data Guard

When running the command odacli reinstate-dataguard on Oracle Data Guard, an error is encountered.

The following errors are reported in the dcs-agent.log file:
DCS-10001:Internal error encountered: Unable to reinstate Dg.
The log also contains the following error:
ORA-12514: TNS:listener does not currently know of service requested

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Ensure that the database you are reinstating is started in MOUNT mode.

To start the database in MOUNT mode, run this command:
srvctl start database -d db-unique-name -o mount

After the command completes successfully, rerun the odacli reinstate-dataguard command. If the database is already in MOUNT mode, this can be a temporary error. Check the Data Guard status again a few minutes later with odacli describe-dataguardstatus or odacli list-dataguardstatus, or with DGMGRL> SHOW CONFIGURATION; to see whether the reinstatement was successful.

This issue is tracked with Oracle bug 32367676.

Error in the enable apply process after upgrading databases

When running the enable apply process after upgrading databases in an Oracle Data Guard deployment, an error is encountered.

The following error message is displayed:
Error: ORA-16664: unable to receive the result from a member

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:
  1. Restart the standby database in upgrade mode:
    srvctl stop database -d <db_unique_name> 
    Then run the SQL*Plus command: STARTUP UPGRADE; 
  2. Continue the enable apply process and wait for the log apply process to refresh.
  3. After some time, check the Data Guard status with the DGMGRL command:
    SHOW CONFIGURATION; 

This issue is tracked with Oracle bug 32864100.

Failure in Reinstating Oracle Data Guard

When reinstating Oracle Data Guard on Oracle Database Appliance, an error is encountered.

The odacli reinstate-dataguard command fails with the following error:
Message:   
DCS-10001:Internal error encountered: Unable to reinstate Dg.   

The dcs-agent.log file has the following error entry:

ORA-12514: TNS:listener does not currently know of service requested in connect descriptor

Hardware Models

All Oracle Database Appliance hardware models with Oracle Data Guard configuration

Workaround

Follow these steps:

  1. Make sure the database you are reinstating is started in MOUNT mode. To start the database in MOUNT mode, run this command:
    srvctl start database -d db-unique-name -o mount 
  2. After the above command runs successfully, run the odacli reinstate-dataguard command.

This issue is tracked with Oracle bug 32047967.

Error in updating Role after Oracle Data Guard operations

When performing operations with Oracle Data Guard on Oracle Database Appliance, an error is encountered in updating the Role.

The dbRole component described in the output of the odacli describe-database command is not updated after Oracle Data Guard switchover, failover, and reinstate operations on Oracle Database Appliance.

Hardware Models

All Oracle Database Appliance hardware models with Oracle Data Guard configuration

Workaround

Run the command odacli update-registry -n db -f (or --force) to update the database metadata. After the job completes, run the odacli describe-database command and verify that dbRole is updated, as in the sketch below.
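For example, a minimal sketch, assuming db_name is a placeholder and that the -in option of the odacli describe-database command selects the database by name:
# odacli update-registry -n db -f
# odacli describe-database -in db_name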

This issue is tracked with Oracle bug 31378202.

Inconsistency in ORAchk summary and details report page

The ORAchk report summary on the Browser User Interface may show different counts of Critical, Failed, and Warning issues than the report detail page.

Hardware Models

Oracle Database Appliance bare metal deployments

Workaround

Ignore counts of Critical, Failed, and Warning issues in the ORAchk report summary on the Browser User Interface. Check the report detail page.

This issue is tracked with Oracle bug 30676674.

The odaeraser tool does not work if oakd is running in non-cluster mode

After cleaning up the deployment, the Secure Eraser tool does not work if oakd is running in non-cluster mode.

Hardware Models

All Oracle Database Appliance hardware models (bare metal deployments)

Workaround

After cleanup of the deployment, oakd is started in non-cluster mode and cannot be stopped using the odaadmcli stop oak command. If the Secure Eraser tool is run in this state, the odaeraser command fails.

Use the command odaadmcli shutdown oak to stop oakd.

This issue is tracked with Oracle bug 28547433.