4 Known Issues with Oracle Database Appliance in This Release

The following are known issues when deploying, updating, and managing Oracle Database Appliance in this release.

Known Issues When Patching Oracle Database Appliance

Understand the known issues when patching Oracle Database Appliance to this release.

Error in updating the operating system when patching the server

When patching the server to Oracle Database Appliance release 19.15, the operating system may not be updated.

The following error message is displayed:
DCS-10001:Internal error encountered: Failed to patch OS.
Run the following command:
rpm -q kernel-uek

If the output of this command displays multiple RPM names, then perform the workaround.
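The check above can be scripted. The sketch below counts the kernel-uek RPMs, reading the output of "rpm -q kernel-uek" on stdin so the logic can be verified anywhere; the function name is illustrative:

```shell
# Sketch only: decide whether the workaround applies. Reads the output
# of "rpm -q kernel-uek" on stdin and prints the number of kernel-uek
# RPMs; more than one installed RPM means the workaround is needed.
count_uek_rpms() {
  grep -c '^kernel-uek-' -
}

# On a live appliance you would run (illustrative):
#   rpm -q kernel-uek | count_uek_rpms
```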

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Remove the following RPMs:
# yum remove kernel-uek-4.14.35-1902.11.3.1.el7uek.x86_64
# yum remove kernel-uek-4.14.35-1902.301.1.el7uek.x86_64

This issue is tracked with Oracle bug 34154435.

Error in server patching during DB system patching

When patching the server during DB system patching to Oracle Database Appliance release 19.15, an error may be encountered.

The following error message is displayed:
ORA-12559: Message 12559 not found;  product=RDBMS; facility=ORA

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Retry server patching on the DB system.

This issue is tracked with Oracle bug 34153158.

Component version not updated after patching

After patching the server to Oracle Database Appliance release 19.16, the odacli describe-component command does not display the correct Intel Model 0x1528 Ethernet Controller version, if the current version is 8000047B or 8000047C.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Manually update the Ethernet controllers to version 800005DD or 800005DE using the fwupdate command.

This issue is tracked with Oracle bug 34402352.

Detaching of databases with additionally configured services not supported by odaupgradeutil

When running odaupgradeutil in the Data Preserving Reprovisioning process, if additional services are configured, then databases cannot be detached.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Delete the additional services by running the command srvctl remove service so that the detach operation can complete. If these services are required, then before removing them, capture their metadata manually, and then recreate the services on the system running Oracle Database Appliance release 19.15 using the srvctl command from the appropriate database home.

This issue is tracked with Oracle bug 33593287.

Error in restore node process in Data Preserving Reprovisioning

In the Data Preserving Reprovisioning process, during node restore, an error may be encountered.

If incorrect VIP names or VIP IP addresses are configured, then the detach completes successfully but the command odacli restore-node -g displays a validation error. This is because the earlier releases did not validate VIP names or VIP IP addresses before provisioning.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:

Manually edit the file /opt/oracle/oak/restore/metadata/provisionInstance.json with the correct VIP names or VIP IP addresses. Retry the command odacli restore-node -g. For fixing VIP names or VIP IP addresses, nslookup can be used to query hostnames and IP addresses.
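As a sketch of the name check, each VIP name from provisionInstance.json can be verified before retrying the restore. getent is used here instead of nslookup so that /etc/hosts entries are also honored; the function name is illustrative:

```shell
# Sketch: verify that a VIP hostname resolves before retrying
# "odacli restore-node -g". Prints the resolved address(es) and
# returns non-zero if the name does not resolve.
vip_resolves() {
  getent hosts "$1" | awk '{print $1; ok=1} END {exit !ok}'
}

# Illustrative use against the names found in provisionInstance.json:
#   vip_resolves node1-vip || echo "fix node1-vip in provisionInstance.json"
```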

This issue is tracked with Oracle bug 34140344.

Error in restore node process in Data Preserving Reprovisioning with NFS backup

In the Data Preserving Reprovisioning process, during node restore if an NFS backup configuration is present, then an error may be encountered.

Hardware Models

All Oracle Database Appliance hardware models that have been upgraded using the Data Preserving Reprovisioning process with NFS backup configuration

Workaround

Follow these steps. On high-availability systems, run the steps on both nodes.
  1. Identify the DB user from the /opt/oracle/oak/restore/metadata/provisionInstance.json file.
    [root@n1 ~]# grep -A2 -B2 'oracleUser' /opt/oracle/oak/restore/metadata/provisionInstance.json
                        "userName": "oracle",
                        "userId": 1001,
                        "userRole": "oracleUser"
                    },
                    {
  2. Find the backup location for all NFS based backup configuration:
    [root@n1 ~]# grep -i -A3 'backupDestination.*NFS' /opt/oracle/oak/restore/metadata/dbBkpConfs.json
      "backupDestination" : "NFS",
      "backupLocation" : "/repo",   <<=======
      "objectStoreId" : null,
      "createTime" : "August 11, 2022 20:39:23 PM CST",
  3. For each of the NFS backup locations, create the directory using root and change ownership to the DB user.
    mkdir /repo
    chown oracle:oinstall /repo

    Note: Replace oinstall in the command with the oinstall group name in the /opt/oracle/oak/restore/metadata/provisionInstance.json file.

  4. Retry the command odacli restore-node -d.
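Steps 1 through 3 above can be sketched as follows. The JSON field name matches the sample output above; the function names, and running as root so that chown succeeds, are illustrative assumptions:

```shell
# Sketch: pull every backupLocation value out of dbBkpConfs.json and
# create the directories with the DB user ownership found in step 1.
extract_backup_locations() {
  grep -o '"backupLocation" *: *"[^"]*"' "$1" | sed 's/.*: *"\(.*\)"/\1/'
}

create_nfs_backup_dirs() {
  local conf="$1" owner="$2" group="$3"
  extract_backup_locations "$conf" | while read -r dir; do
    mkdir -p "$dir"
    chown "$owner:$group" "$dir"   # requires root on a real system
  done
}

# Illustrative use:
#   create_nfs_backup_dirs /opt/oracle/oak/restore/metadata/dbBkpConfs.json oracle oinstall
```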

This issue is tracked with Oracle bug 34503968.

Error in restore node process in Data Preserving Reprovisioning

In the Data Preserving Reprovisioning process, during node restore, an error may be encountered.

The following error message may be displayed:
#  /opt/oracle/dcs/bin/odacli restore-node -d
DCS-10001:Internal error encountered: Failed to process dbStorage metadata.

This error occurs because flashCacheDestination is null in the /opt/oracle/oak/restore/metadata/dbStorages.json file.

Hardware Models

All Oracle Database Appliance hardware models being upgraded using the Data Preserving Reprovisioning process where additional database storage is configured. Additional database storage is storage created by running the command oakcli create dbstorage or odacli create-dbstorage that is not associated with any database.

Workaround

Follow these steps:

In the /opt/oracle/oak/restore/metadata/dbStorages.json file, change all occurrences of "flashCacheDestination": null to "flashCacheDestination": "", that is, an empty string.

Retry the command odacli restore-node -d.
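The edit can be sketched as a one-shot script. Back up the file first; the in-place sed -i option assumes GNU sed, as found on Oracle Linux, and the function name is illustrative:

```shell
# Sketch: replace every "flashCacheDestination": null with an empty
# string, keeping a backup copy of the metadata file.
fix_flash_cache_destination() {
  cp "$1" "$1.bak"
  sed -i 's/"flashCacheDestination" *: *null/"flashCacheDestination": ""/g' "$1"
}

# Illustrative use:
#   fix_flash_cache_destination /opt/oracle/oak/restore/metadata/dbStorages.json
```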

This issue is tracked with Oracle bug 34526874.

Error in restore node process in Data Preserving Reprovisioning

In the Data Preserving Reprovisioning process, during node restore, an error may be encountered.

The following error message may be displayed:
DCS-10045: groupNames are not unique.

This error occurs if the source Oracle Database Appliance runs the OAK stack. On the DCS stack, the same operating system group cannot be assigned two or more roles.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:

Manually edit the file /opt/oracle/oak/restore/metadata/provisionInstance.json with unique group names for each role. Retry the command odacli restore-node -g.
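A quick way to spot the clashes before editing is to list group names that occur more than once. In this sketch, the "groupName" key is an assumed field name for illustration; adjust it to match the actual metadata layout:

```shell
# Sketch: list any operating system group names that appear more than
# once in provisionInstance.json, so they can be made unique before
# retrying "odacli restore-node -g". The "groupName" key is assumed.
duplicate_group_names() {
  grep -o '"groupName" *: *"[^"]*"' "$1" \
    | sed 's/.*: *"\(.*\)"/\1/' | sort | uniq -d
}
```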

This issue is tracked with Oracle bug 34042493.

Error messages in log entries in Data Preserving Reprovisioning

In the Data Preserving Reprovisioning process, during node restore, the log entries may display error messages though the overall status of the job is displayed as SUCCESS.

For Oracle Database Appliance running the DCS stack starting with Oracle Database Appliance release 12.2.1.4.0, the command odacli restore-node -d performs a set of ignorable tasks. Failure of these tasks does not affect the status of the overall job. The output of the command odacli describe-job may report such failures. These tasks are:
Restore of user created networks
Restore of object stores
Restore of NFS backup locations
Restore of backupconfigs
Relinking of backupconfigs to databases
Restore of backup reports

Even if these tasks fail, the overall status of the job is marked as SUCCESS.

Hardware Models

All Oracle Database Appliance hardware models being upgraded using the Data Preserving Reprovisioning process

Workaround

Investigate the failure using the dcs-agent.log, fix the errors, and then retry the command odacli restore-node -d.

This issue is tracked with Oracle bug 34512193.

Error in server patching

When patching an Oracle Database Appliance that already has STIG V1R2 deployed, an error may be encountered.

On an Oracle Database Appliance deployment with release earlier than 19.16, if the Security Technical Implementation Guidelines (STIG) V1R2 is already deployed, then when you patch to release 19.16 and run the command odacli update-server -f version, an error may be displayed.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

The STIG V1R2 rule OL7-00-040420 tries to change the permission of the file /etc/ssh/ssh_host_rsa_key from '640' to '600', which causes the error. During patching, run the command chmod 600 /etc/ssh/ssh_host_rsa_key on both nodes.
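The fix can be wrapped in a small idempotent check to run on each node. This is a sketch only; stat -c assumes GNU coreutils, as on Oracle Linux, and the function name is illustrative:

```shell
# Sketch: set the host key back to mode 600 only when needed.
# Run on both nodes during patching.
fix_hostkey_perms() {
  local f="${1:-/etc/ssh/ssh_host_rsa_key}"
  [ "$(stat -c '%a' "$f")" = "600" ] || chmod 600 "$f"
}
```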

This issue is tracked with Oracle bug 33168598.

AHF error in prepatch report for the update-dbhome command

When you patch the server to Oracle Database Appliance release 19.16, the odacli update-dbhome command may fail.

The following error message is displayed in the pre-patch report:
Verify the Alternate Archive    Failed    AHF-4940: One or more log archive 
Destination is Configured to              destination and alternate log archive
Prevent Database Hangs                    destination settings are not as recommended           

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:
  1. Run the odacli update-dbhome command with the -f option.
    /opt/oracle/dcs/bin/odacli update-dbhome --dbhomeid 7c67c5b4-f585-4ba9-865f-c719c63c0a6e -v 19.16.0.0.0 -f

This issue is tracked with Oracle bug 33144170.

Error in patching prechecks report

The patching prechecks report may display an error.

The following error message may be displayed:
Failure in the pre-patch report caused by "AHF-5190: operating system boot device order is not configured as recommended"

Hardware Models

Oracle Database Appliance X-7 hardware models

Workaround

Run the odacli update-server or odacli update-dbhome command with the -f option.

This issue is tracked with Oracle bug 33631256.

Error message displayed even when patching Oracle Database Appliance is successful

Although patching of Oracle Database Appliance was successful, an error message may be displayed.

The following error is seen when running the odacli update-dcscomponents command:
# time odacli update-dcscomponents -v 19.16.0.0.0 
DCS-10008:Failed to update DCScomponents: 19.16.0.0.0
Internal error while patching the DCS components : 
DCS-10231:Cannot proceed. Pre-checks for update-dcscomponents failed. Refer  
to /opt/oracle/dcs/log/-dcscomponentsPreCheckReport.log on node 1 for  
details.   

Hardware Models

All Oracle Database Appliance hardware models

Workaround

This is a timing issue with setting up the SSH equivalence.

Run the odacli update-dcscomponents command again and the operation completes successfully.

This issue is tracked with Oracle bug 32553519.

Server status not set to Normal when patching

When patching Oracle Database Appliance, an error is encountered.

When patching the appliance, the odacli update-server command fails with the following error:

DCS-10001:Internal error encountered: Server upgrade state is not NORMAL node_name 

Hardware Models

All Oracle Database Appliance hardware models

Workaround

  1. Run the command:
    Grid_home/bin/cluvfy stage -post crsinst -collect cluster -gi_upgrade -n all
  2. Ignore the following two warnings:
    Verifying OCR Integrity ...WARNING
    PRVG-6017 : OCR backup is located in the same disk group "+DATA" as OCR.
    
    Verifying Single Client Access Name (SCAN) ...WARNING
    PRVG-11368 : A SCAN is recommended to resolve to "3" or more IP
  3. Run the command again until the output displays only the two warnings above. The status of Oracle Clusterware should then be NORMAL again.

  4. You can verify the status with the command:
    Grid_home/bin/crsctl query crs activeversion -f

This issue is tracked with Oracle bug 30099090.

Error when patching to 12.1.0.2.190716 Bundle Patch

When patching Oracle Database release 12.1.0.2 to Oracle Database 12.1.0.2.190716 Bundle Patch, an error is encountered.

The ODACLI job displays the following error:

DCS-10001:Internal error encountered: Failed to run SQL script: datapatch script. 

The datapatch log contains the entry "Prereq check failed, exiting without installing any patches.".

Hardware Models

All Oracle Database Appliance hardware models (bare metal deployments)

Workaround

Install the same patch again.

This issue is tracked with Oracle bugs 30026438 and 30155710.

Patching of M.2 drives not supported

Patching of M.2 drives (local disks SSDSCKJB48 and SSDSCKJB480G7) is not supported.

These drives are displayed when you run the odacli describe-component command. Patching is not supported for either of the two known M.2 disk versions, 0112 and 0121. Patching the LSI controller from version 13.00.00.00 to version 16.00.01.00 is also not supported. However, on some Oracle Database Appliance X8-2 models, the installed LSI controller version may already be 16.00.01.00.

Hardware Models

Oracle Database Appliance bare metal deployments

Workaround

None

This issue is tracked with Oracle bug 30249232.

Known Issues When Deploying Oracle Database Appliance

Understand the known issues when provisioning or deploying Oracle Database Appliance.

ODACLI command output not included in system report

On Oracle Database Appliance which has multi-user access enabled, ODACLI command output is not included in the system report.

The system report generated by Oracle Trace File Analyzer Collector does not have the output for ODACLI commands as the ODACLI commands do not run in the absence of required authentication.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Generate the output of the ODACLI commands separately and then provide the output to Oracle Support, if needed.

This issue is tracked with Oracle bug 33786157.

Error in creating an Oracle ASM Database after patching

After patching a multi-user access enabled appliance to this Oracle Database Appliance release, an error may be encountered when creating an Oracle Database on Oracle ASM storage.

An attempt to create an Oracle Database on Oracle ASM storage as the user odaadmin fails after patching a multi-user access enabled appliance that was initially provisioned in Oracle Database Appliance release 19.13 with isRoleSeparated=false and two operating system groups. The following error message may be displayed:
[FATAL] [DBT-05801] THERE ARE NO ASM DISK GROUPS DETECTED.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:

The user odaadmin must be added to the operating system group corresponding to the groupRole=dba. This is a one-time activity that must be performed manually before re-attempting to create an Oracle ASM database.

This issue is tracked with Oracle bug 34126894.

Error in modifying the database

The modify operation is unable to reset a database parameter.

This is because the node on which the database is running, as reported by srvctl, does not match the node recorded in the metadata.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:
  1. Stop the database:
    srvctl stop database -d dbuniquename
  2. Start the database:
    srvctl start database -d dbuniquename -node targetNodeNumber

This issue is tracked with Oracle bug 34292498.

Error in increasing memory on DB system

When increasing the memory on a DB system using the odacli modify-dbsystem command, an error may be encountered.

When running the odacli modify-dbsystem command to increase the memory, if the memory specified is greater than the memory currently free, then an error message may be displayed.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

None.

This issue is tracked with Oracle bug 34430617.

Error in reducing DB system shape

When reducing the shape of a DB system on Oracle Database Appliance, an error may be encountered.

The odacli modify-dbsystem command to scale down the shape may fail at the Wait DB System VM DCS Agent bootstrap step. The virsh console VM_name output displays a kernel panic with an out-of-memory message in the stack.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:
  1. Stop and start the DB system. It may take up to 20 minutes to stop the DB system.
  2. Log in to the VM and do the following. For a high-availability system, run the steps on both nodes.
    1. Comment out vm.nr_hugepages in the /etc/sysctl.conf file.
    2. Back up /boot/initramfs-`uname -r`.img.
    3. Run dracut -f as root to recreate initramfs.
    4. Uncomment vm.nr_hugepages in the /etc/sysctl.conf file.
    5. Perform the scaledown.

This issue is tracked with Oracle bug 34362565.

Error in creating two DB systems

When creating two DB systems concurrently in two different Oracle ASM disk groups, an error is encountered.

When attempting to start the DB systems, the following error message is displayed:
CRS-2672: Attempting to start 'vm_name.kvm' on 'oda_server'
CRS-5017: The resource action "vm_name.kvm start" encountered the following
error:
CRS-29200: The libvirt virtualization library encountered the following
error:
Timed out during operation: cannot acquire state change lock (held by
monitor=remoteDispatchDomainCreate)
. For details refer to "(:CLSN00107:)" in
"/u01/app/grid/diag/crs/<oda_server>/crs/trace/crsd_orarootagent_root.trc".
CRS-2674: Start of 'vm_name.kvm' on 'oda_server' failed
CRS-2679: Attempting to clean 'vm_name.kvm' on 'oda_server'
CRS-2681: Clean of 'vm_name.kvm' on 'oda_server' succeeded
CRS-4000: Command Start failed, or completed with errors.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Do not create two DB systems concurrently. Instead, complete the creation of one DB system and then create the other.

This issue is tracked with Oracle bug 33275630.

Error in creating DB system

When creating a DB system on Oracle Database Appliance, an error may be encountered.

When running the odacli create-dbsystem command, the following error message may be displayed:
DCS-10001:Internal error encountered: ASM network is not online in all nodes

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:
  1. Manually bring the offline resources online:
    crsctl start res -all
  2. Run the odacli create-dbsystem command.

This issue is tracked with Oracle bug 33784937.

Error in recovering a TDE-enabled database

When recovering a TDE-enabled Oracle RAC One Node database from the remote node, after the database was shut down, an error may be encountered.

When attempting to start the TDE-enabled Oracle RAC One Node database from the remote node, that is, a node other than the node specified in dbTargetNodeNumber in the database object, the following error message may be displayed:

DCS-10001:Internal error encountered: DCS-10001:Internal error encountered:
Missing arguments : required sqlplus connection information is not provided..

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Run the odacli recover-database command from the node mentioned in the dbTargetNodeNumber in the database object.

This issue is tracked with Oracle bug 33851593.

Error in adding JBOD

When you add a second JBOD to your Oracle Database Appliance deployment on which a DB system is running, an error is encountered.

The following error message is displayed:
ORA-15333: disk is not visible on client instance

Hardware Models

All Oracle Database Appliance hardware models (bare metal and DB systems)

Workaround

Shut down the DB system before adding the second JBOD, then restart the DCS agent:
systemctl restart initdcsagent

This issue is tracked with Oracle bug 32586762.

Error in provisioning appliance after running cleanup.pl

Errors are encountered in provisioning the appliance after running cleanup.pl.

After running cleanup.pl, provisioning the appliance fails because of missing Oracle Grid Infrastructure image (IMGGI191100). The following error message is displayed:

DCS-10042:User oda-cliadmin cannot be authorized.

Hardware Models

All Oracle Database Appliance hardware models for bare metal deployments

Workaround

After running cleanup.pl, and before provisioning the appliance, update the repository as follows:

# odacli update-repository -f gi_clone_file_path

This issue is tracked with Oracle bug 32707387.

Error in updating a database

When updating a database on Oracle Database Appliance, an error is encountered.

When you run the command odacli update-dbhome, the following error message is displayed:
PRGO-1069 :Internal error [# rhpmovedb.pl-isPatchUpg-1 #].. 

To confirm that the MMON process occupies the lock, connect to the target database which failed to patch, and run the command:

SELECT s.sid, p.spid, s.machine, s.program FROM v$session s, v$process p  
WHERE s.paddr = p.addr and s.sid = ( 
SELECT sid from v$lock WHERE id1= ( 
SELECT lockid FROM dbms_lock_allocated WHERE name = 'ORA$QP_CONTROL_LOCK' 
)); 

If s.program in the displayed result is similar to the format oracle_user@host_box_name (MMON), then the error is caused by the MMON process. Apply the workaround to address this issue.

Hardware Models

All Oracle Database Appliance high-availability hardware models

Workaround

Run the following commands:
  1. Stop the MMON process:
    # ps -ef | grep MMON 
    root     71220 70691  0 21:25 pts/0    00:00:00 grep --color=auto MMON 
    Locate the MMON process ID from the output and stop it:
    # kill -9 71220
  2. Manually run datapatch on target database:
    1. Locate the database home where the target database is running:
      odacli describe-database -in db_name
    2. Locate the database home location:
      odacli describe-dbhome -i DbHomeID_found_in_step_1
    3. On the running node of the target database:
      [root@node1 ~]# sudo su - oracle 
      Last login: Thu Jun  3 21:24:45 UTC 2021 
      [oracle@node1 ~]$ . oraenv 
      ORACLE_SID = [oracle] ? db_instance_name
      ORACLE_HOME = [/home/oracle] ? dbHome_location
    4. If the target database is a non-CDB database, then run the following:
      $ORACLE_HOME/OPatch/datapatch
    5. If the target database is a CDB database, then run the following to find the PDB list:
      select name from v$containers where open_mode='READ WRITE';
    6. Exit SQL*Plus and run the following:
      $ORACLE_HOME/OPatch/datapatch -pdbs pdb_names_gathered_by_the_SQL_statement_in_step_5_separated_by_comma
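For the last step, the comma-separated -pdbs argument can be built from the query output. A sketch, with an illustrative helper name:

```shell
# Sketch: join PDB names (one per line on stdin) into the
# comma-separated list that datapatch expects for -pdbs.
pdb_csv() {
  paste -sd, -
}

# Illustrative use, with the PDB names saved to a file:
#   $ORACLE_HOME/OPatch/datapatch -pdbs "$(pdb_csv < pdb_list.txt)"
```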

This issue is tracked with Oracle bug 32827353.

Error in running tfactl diagcollect command on remote node

When running the tfactl diagcollect command on Oracle Database Appliance, an error is encountered.

Hardware Models

All Oracle Database Appliance hardware models (KVM and bare metal systems)

Workaround

Prior to Oracle Autonomous Health Framework 21.2, if the certificates are generated on each node separately, then you must perform either of the following manual steps to fix this.
  • Run the following command on each node so that Oracle Trace File Analyzer generates new certificates and distributes them to the other node:
    tfactl syncnodes -remove -local
  • Connect using SSH with root credentials on one node and run the following.
    tfactl syncnodes

This issue is tracked with Oracle bug 32921859.

Error when upgrading database from 11.2.0.4 to 12.1 or 12.2

When upgrading databases from 11.2.0.4 to 12.1 or 12.2, an error is encountered.

Database upgrade can cause the following warning in the UpgradeResults.html file, when upgrading database from 11.2.0.4 to 12.1 or 12.2:
Database is using a newer time zone file version than the Oracle home 

Hardware Models

All Oracle Database Appliance bare metal deployments

Workaround

  1. Refer to the Database Upgrade Guide for manual steps for fixing the time zone.
  2. After manually completing the database upgrade, run the following command to update DCS metadata:
    /opt/oracle/dcs/bin/odacli update-registry -n db -f

This issue is tracked with Oracle bug 31125985.

Error when upgrading 12.1 single-instance database

When upgrading a 12.1 single-instance database, a job failure error is encountered.

Hardware Models

All Oracle Database Appliance hardware models (bare metal deployments)

Workaround

Use the following workaround:
  1. Before upgrading the 12.1 single-instance database, run the following SQL command to change the local_listener to an empty string:
    ALTER SYSTEM SET LOCAL_LISTENER='';
  2. After upgrading the 12.1 single-instance database successfully, run the following SQL command to change the local_listener to the desired value:
    ALTER SYSTEM SET LOCAL_LISTENER='-oracle-none-'; 

This issue is tracked with Oracle bugs 31202775 and 31214657.

Error encountered after running cleanup.pl

Errors are encountered in running odacli commands after running cleanup.pl.

After running cleanup.pl, when you try to use odacli commands, the following error is encountered:

DCS-10042:User oda-cliadmin cannot be authorized.

Hardware Models

All Oracle Database Appliance hardware models for bare metal deployments

Workaround

Run the following commands to set up the credentials for the user oda-cliadmin on the agent wallet:

# rm -rf /opt/oracle/dcs/conf/.authconfig  
# /opt/oracle/dcs/bin/setupAgentAuth.sh 

This issue is tracked with Oracle bug 29038717.

Errors in clone database operation

Clone database operation fails due to errors.

If the source database is single-instance or Oracle RAC One Node, or running on the remote node, the clone database operation fails, because the paths are not created correctly in the control file.

Clone database operation may also fail with errors if the source database was created too recently, that is, within 60 minutes of the clone operation.

Hardware Models

All Oracle Database Appliance high-availability hardware models for bare metal deployments

Workaround

Create the clone database from the source database instance that is running on the same node from which the clone database creation is triggered.

For Oracle Database 12c and later, synchronize the source database before the clone operation, by running the command:
SQL> alter system checkpoint;

This issue is tracked with Oracle bugs 29002563, 29002004, 29001906, 29001855, 29001631, 28995153, 28986643, 30309971, and 30228362.

Clone database operation fails

For Oracle Database release 12.1 databases, the database clone creation may fail because the default compatible version from Oracle binaries was set to 12.0.0.0.0.

Hardware Models

All Oracle Database Appliance high-availability hardware models for bare metal deployments

Workaround

Set the compatible value to that of the source database. Follow these steps:
  1. Change the parameter value.
    SQL> ALTER SYSTEM SET COMPATIBLE = '12.1.0.2.0' SCOPE=SPFILE;
  2. Shut down the database.
    SQL> SHUTDOWN IMMEDIATE
  3. Start the database.
    SQL> STARTUP
  4. Verify the parameter for the new value.
    SQL> SELECT name, value, description FROM v$parameter WHERE name ='compatible';

This issue is tracked with Oracle bug 30309914.

Known Issues When Managing Oracle Database Appliance

Understand the known issues when managing or administering Oracle Database Appliance.

Error in configuring Oracle Data Guard

When running the command odacli configure-dataguard on Oracle Database Appliance, an error may be encountered at the upload password file to standby database step.

When running the command odacli configure-dataguard on Oracle Database Appliance, the following error message may be displayed at CONFIGUREDG - DCS-10001: UNABLE TO CONFIGURE BROKER DGMGRL> SHOW CONFIGURATION;:
ORA-16783: cannot resolve gap for database tgtpodpgtb

Hardware Models

Oracle Database Appliance hardware models with DB system and database version earlier than Oracle Database Appliance release 19.15

Workaround

Manually copy the password file from primary to standby system and retry the command odacli configure-dataguard with the --skip-password-copy option.
  1. On the primary system, locate the password file:
    srvctl config database -d dbUniqueName | grep -i password
    If the output is the Oracle ASM directory, then copy the password file from the Oracle ASM directory to the local directory.
    su - grid
    asmcmd
    ASMCMD> pwcopy +DATA/tiger2/PASSWORD/orapwtiger /tmp/orapwtiger

    If the output is empty, then check the directory at /dbHome/dbs/orapwdbName. For example, the orapwd file can be at /u01/app/oracle/product/19.0.0.0/dbhome_1/dbs/orapwtiger

  2. On the standby system, back up the original password file and then copy the password file from the primary system:
    cp /u01/app/oracle/product/19.0.0.0/dbhome_1/dbs/orapwtiger /u01/app/oracle/product/19.0.0.0/dbhome_1/dbs/orapwtiger.ori
    scp root@primaryHost:/u01/app/oracle/product/19.0.0.0/dbhome_1/dbs/orapwtiger /u01/app/oracle/product/19.0.0.0/dbhome_1/dbs/orapwtiger
  3. Change the standby orapwd file permission.
    chown -R oracle /u01/app/oracle/product/19.0.0.0/dbhome_1/dbs/orapwtiger
    chgrp oinstall /u01/app/oracle/product/19.0.0.0/dbhome_1/dbs/orapwtiger
  4. Check the password file location on the standby system and copy to the Oracle ASM directory, if necessary.
    srvctl config database -d tiger2 | grep -i password
    Password file: +DATA/tiger2/PASSWORD/orapwtiger
    In this example, copy the password file from the local directory to the Oracle ASM directory.
    su - grid
    asmcmd
    ASMCMD> pwcopy /u01/app/oracle/product/19.0.0.0/dbhome_1/dbs/orapwtiger
    +DATA/tiger2/PASSWORD/orapwtiger

This issue is tracked with Oracle bug 34484209.

Error in backup of database

When backing up a database on Oracle Database Appliance, an error is encountered.

After a successful failover, running the command odacli create-backup on the new primary database fails with the following message:
DCS-10001:Internal error encountered: Unable to get the
rman command status commandid:xxx
output:STATUS
-------------------------
[COMPLETED WITH WARNINGS] error:.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:
  1. On the new primary database, connect to RMAN as oracle and edit the archivelog deletion policy.
    rman target /
    RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO 'SBT_TAPE';
  2. On the new primary database, as the root user, take a backup:
    odacli create-backup -in db_name -bt backup_type

This issue is tracked with Oracle bug 33181168.

Error when running two irestore jobs

When running two irestore jobs that use the same ObjectStoreSwift object on Oracle Database Appliance, an error may be encountered.

The first irestore job may fail with the following error message:
DCS-10001:Internal error encountered: Failed to
run RMAN command. Please refer log at location : hostname1:
/u01/app/odaorabase/odaadmin/diag/rdbms/idb0001/idb0001_1/nodename1/rman/b
kup/rman/2022-06-20/rman_2022-06-20_18-23-58.0337.log.
The RMAN log displays the following:
ORA-19511: non RMAN, but media manager or vendor specific failure, error
text: KBHS-01013: specified OPC_WALLET alias alias_opc not found in wallet

Hardware Models

All Oracle Database Appliance hardware models

Workaround

If you want to use the same ObjectStoreSwift object for both irestore database jobs, then start the second irestore job after the completion of the first irestore job.

This issue is tracked with Oracle bug 34300624.

Error in database backup

When backing up a database on Oracle Database Appliance, an error may be encountered.

Consider a Regular-L0 backup of a database to Disk, followed by a Regular-L1 backup to either an Objectstore or NFS location. If the backup report corresponding to that Regular-L1 backup is used to irestore the database, then the irestore job may fail with the following error:
DCS-10001:Internal error encountered: Failed to run RMAN command. Please refer log at location : log_location. RMAN failed to restore DB for Migration.
The log_location may display the following error:
RMAN LOG file
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 08/03/2022 09:16:10
RMAN-06053: unable to perform media recovery because of missing log
RMAN-06102: no channel to restore a backup or copy of archived log for thread
1 with sequence 2 and starting SCN of 3765572
RMAN-06102: no channel to restore a backup or copy of archived log for thread
1 with sequence 1 and starting SCN of 3763021

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Take a Regular-L0 backup to either an Objectstore or NFS location again. Then use the corresponding backup report to irestore the database.

This issue is tracked with Oracle bug 34463380.

Error in restoring a database on a multi-user access enabled system

On an Oracle Database Appliance with multi-user access enabled, if a non-default user with the ODA-DB role attempts to irestore a database from an NFS backup location without specifying the DB home ID, then an error may be encountered.

The following error message is displayed:
DCS-10001:Internal error encountered: Unable to fetch database status info
from v$instance.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Before running the irestore job, create a DB home as the same user who will perform the irestore operation. Then specify the ID of that DB home when you run the irestore operation.

This issue is tracked with Oracle bug 34477115.

Error in configuring Oracle Data Guard

When running the command odacli configure-dataguard on Oracle Database Appliance, an error may be encountered.

When running the command odacli configure-dataguard on Oracle Database Appliance, the command may fail at step CONFIGUREDG with DCS-10001: UNABLE TO CONFIGURE BROKER, and DGMGRL> SHOW CONFIGURATION; displays:
ORA-16783: cannot resolve gap for database tgtpodpgtb

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps on the machine with the new primary database:
  1. Disable the scheduled automatic backup of the primary database:
    odacli update-schedule -i schedule_id -d
  2. Restore the archive logs of the primary database:
    odacli restore-archivelog
  3. Configure Oracle Data Guard with the command odacli configure-dataguard.
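The steps above can be sketched as a short script. The schedule ID, database name, and the -in flag on restore-archivelog are assumptions to verify against your appliance; the run wrapper only prints each command (a dry run) so the sequence can be reviewed before it is executed for real.

```shell
#!/bin/sh
# Placeholder values -- substitute the schedule ID reported by
# "odacli list-schedules" and your database name (both hypothetical here).
SCHEDULE_ID="11111111-2222-3333-4444-555555555555"
DB_NAME="mydb"

# Dry-run wrapper: prints each command instead of executing it.
# Replace the echo with "$@" to run the commands for real.
run() { echo "+ $*"; }

# 1. Disable the scheduled auto backup of the primary database.
run odacli update-schedule -i "$SCHEDULE_ID" -d

# 2. Restore the archive logs of the primary database
#    (the -in flag is an assumption; check odacli restore-archivelog -h).
run odacli restore-archivelog -in "$DB_NAME"

# 3. Configure Oracle Data Guard.
run odacli configure-dataguard
```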

This issue is tracked with Oracle bug 34008520.

Error in automatic backup of database

When patching Oracle Database Appliance, there may be an error in the automatic backup of a database.

The following message is displayed:
DCS-10001:Internal error encountered: 1.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

After the appliance is successfully patched, the automatic database backup processes complete successfully.

This issue is tracked with Oracle bug 33699091.

OpenSSH command vulnerability

An OpenSSH command vulnerability issue is detected in Qualys and Nessus scans.

Qualys and Nessus both report a medium severity issue OPENSSH COMMAND INJECTION VULNERABILITY. Refer to CVE-2020-15778 for details.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

None.

This issue is tracked with Oracle bug 33217970.

Error in cleaning up a deployment

When cleaning up an Oracle Database Appliance, an error is encountered.

During cleanup, shutdown of Clusterware fails because the NFS export service uses the Oracle ACFS-based clones repository.

Hardware Models

All Oracle Database Appliance hardware models with DB systems

Workaround

Follow these steps:
  1. Stop the NFS service on both nodes:
    service nfs stop
  2. Clean up the bare metal system. See the Oracle Database Appliance Deployment and User's Guide for your hardware model for the steps.

This issue is tracked with Oracle bug 33289742.

Error in TDE wallet management

When changing the TDE wallet password or rekeying the TDE wallet of a database which has TDE Wallet Management set to the value EXTERNAL, an error is encountered.

The following message is displayed:
DCS-10089:Database DB_NAME is in an invalid state 'NOT_RUNNING'.Database DB_NAME must be running

Hardware Models

All Oracle Database Appliance hardware models

Workaround

None. Operations such as changing the TDE wallet password or rekeying the TDE wallet are not supported on a database that has TDE Wallet Management set to the value EXTERNAL.

This issue is tracked with Oracle bug 33278653.

Error in display of file log path

File log paths are not displayed correctly on the console, but the logs generated for a job record the correct paths.

Hardware Models

All Oracle Database Appliance hardware models with virtualized platform

Workaround

None.

This issue is tracked with Oracle bug 33580574.

Error in configuring Oracle Data Guard

When running the command odacli configure-dataguard on Oracle Database Appliance, an error may be encountered.

When running the command odacli configure-dataguard on Oracle Database Appliance, the following error message may be displayed at step Restore missing archivelog (Primary site):
DCS-10114:Failed to acquire exclusive access

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:
  1. Before running the command odacli configure-dataguard, disable auto-backup schedules for the primary database and verify that the existing backup jobs are completed.
    1. Check the database backup schedule for the primary database:
      odacli list-schedules
    2. Disable the backup schedules for database and archive logs of the primary database:
      odacli update-schedule -i schedule_id -d
  2. Run the command odacli configure-dataguard.
  3. After the command odacli configure-dataguard completes successfully, reenable auto backup for the primary database, if desired.
    odacli update-schedule -i schedule_id -e

This issue is tracked with Oracle bug 33724368.

Error in reinstating on Oracle Data Guard

When running the command odacli reinstate-dataguard on Oracle Database Appliance, an error is encountered.

The following error message is displayed:
DCS-10001:Internal error encountered: Unable to reinstate Dg.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Manually flashback old primary database.
Run the following commands:
  1. On the new primary machine, get the standby_became_primary_scn:
    SQL> select standby_became_primary_scn from v$database;
    STANDBY_BECAME_PRIMARY_SCN
    --------------------------
      4370820
  2. On the old primary database, as oracle user, run the following.
    rman target /
    RMAN> set decryption identified by 'password'
    RMAN> FLASHBACK DATABASE TO SCN STANDBY_BECAME_PRIMARY_SCN;
  3. On the new primary database, run the odacli reinstate-dataguard command.

This issue is tracked with Oracle bug 33190261.

Error in configuring Oracle Data Guard

After upgrading the standby database from release 12.1 to 19.14, the following error message may be displayed at step Enable redo transport and apply:
Warning: ORA-16629: database reports a different protection level from the protection mode standbydb - Physical standby database (disabled)

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Enable the standby database again by running the following DGMGRL command:
DGMGRL> Enable database tgtptdcnvo
Enabled.

This issue is tracked with Oracle bug 33749492.

Error in viewing Oracle Data Guard status

When viewing Oracle Data Guard status on Oracle Database Appliance, an error is encountered.

Oracle Data Guard status is not shown on the remote node of Oracle Database Appliance high-availability systems, causing Oracle Data Guard switchover, failover, and reinstate jobs to fail at the task Check if DataGuard config is updated. The Oracle Data Guard operations themselves, however, are successful.

Hardware Models

All Oracle Database Appliance high-availability systems

Workaround

Use DGMGRL to verify Oracle Data Guard status.

This issue is tracked with Oracle bug 33411769.

Error in reinstate operation on Oracle Data Guard

When running the command odacli reinstate-dataguard on Oracle Data Guard, an error is encountered.

The following errors are reported in dcs-agent.log:
DCS-10001:Internal error encountered: Unable to reinstate Dg.
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Ensure that the database you are reinstating is started in MOUNT mode.

To start the database in MOUNT mode, run this command:
srvctl start database -d db-unique-name -o mount

After the command completes successfully, rerun the odacli reinstate-dataguard command. If the database is already in MOUNT mode, this can be a temporary error. Check the Oracle Data Guard status again a few minutes later with odacli describe-dataguardstatus or odacli list-dataguardstatus, or with DGMGRL> SHOW CONFIGURATION;, to see whether the reinstatement is successful.

This issue is tracked with Oracle bug 32367676.

Error in running concurrent database or database home creation jobs

When running concurrent database or database home creation jobs, an error is encountered.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Do not run concurrent database or database home creation jobs.

This issue is tracked with Oracle bug 32376885.

Error in the enable apply process after upgrading databases

When running the enable apply process after upgrading databases in an Oracle Data Guard deployment, an error is encountered.

The following error message is displayed:
Error: ORA-16664: unable to receive the result from a member

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:
  1. Restart the standby database in upgrade mode:
    srvctl stop database -d db_unique_name
    Then, from SQL*Plus as SYSDBA, run:
    SQL> STARTUP UPGRADE;
  2. Continue the enable apply process and wait for the log apply process to refresh.
  3. After some time, check the Oracle Data Guard status with the DGMGRL command:
    SHOW CONFIGURATION;

This issue is tracked with Oracle bug 32864100.

Error in creating Oracle Data Guard status

When configuring Oracle Active Data Guard on Oracle Database Appliance, an error is encountered.

When configuring Oracle Data Guard, the odacli configure-dataguard command fails at step NewDgconfig with the following error on the standby system:
ORA-16665: TIME OUT WAITING FOR THE RESULT FROM A MEMBER

Verify the status of the job with the odacli list-jobs command.

Hardware Models

All Oracle Database Appliance hardware models with Oracle Data Guard configuration

Workaround

Follow these steps:

  1. On the standby system, run the following:
    export DEMODE=true; 
    odacli create-dataguardstatus -i dbid -n dataguardstatus_id_on_primary -r configdg.json 
    export DEMODE=false; 
Example configdg.json file for a single-node system:
{
  "name": "test1_test7",
  "protectionMode": "MAX_PERFORMANCE",
  "replicationGroups": [
    {
      "sourceEndPoints": [
        {
          "endpointType": "PRIMARY",
          "hostName": "test_domain1",
          "listenerPort": 1521,
          "databaseUniqueName": "test1",
          "serviceName": "test", 
          "sysPassword": "***", 
          "ipAddress": "test_IPaddress"
        }
      ],
      "targetEndPoints": [
        {
          "endpointType": "STANDBY",
          "hostName": "test_domain2",
          "listenerPort": 1521,
          "databaseUniqueName": "test7",
          "serviceName": "test", 
          "sysPassword": "***", 
          "ipAddress": "test_IPaddress3"
        }
      ],
      "transportType": "ASYNC"
    }
  ]
}

This issue is tracked with Oracle bug 32719173.

Error in restoring a database

When restoring a database on Oracle Database Appliance, if the DB Home ID is provided in the command odacli irestore-database, then an error may be encountered.

The following error message is displayed:
odacli irestore-database -r dbs2_check.json -n name2 -dh
3462a80c-0c6a-419b-82e1-c3944dedd892
Enter SYS user password:
Retype SYS user password:
DCS-10001:Internal error encountered: java.lang.NullPointerException.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:
  1. Unmount the NFS client location.
    umount NFS_client_location
  2. Add the no_root_squash option in the /etc/exports file against the NFS server location.
    nfs_server_location IP_address_of_NFS_Client(rw,sync,no_root_squash)
  3. Restart the NFS server at the NFS server machine.
    /bin/systemctl restart nfs.service
  4. Remount the NFS client.
    mount -t nfs IP_address_of_NFS_server:NFS_server_location NFS_client_location
  5. Perform irestore of database from NFS backup.
  6. Unmount the NFS client location.
    umount NFS_client_location
  7. Remove the no_root_squash option in the /etc/exports file against the NFS server location.
    nfs_server_location IP_address_of_NFS_Client(rw,sync)
  8. Repeat steps 3 and 4.

This issue is tracked with Oracle bug 34149711.

Error in registering a database

When restoring a database on Oracle Database Appliance, if the NLS setting on the standby database is not America/American, then an error may be encountered.

An error occurs when running the RMAN duplicate task. The RMAN log described in the error message may show RMAN-06136 and ORA-00907 errors.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

None.

This issue is tracked with Oracle bug 32349703.

Error in Reinstating Oracle Data Guard

When reinstating Oracle Data Guard on Oracle Database Appliance, an error is encountered.

The odacli reinstate-dataguard command fails with the following error:
Unable to reinstate Dg. Reinstate job was executed within 24hrs after failover job.  

The dcs-agent.log file has the following error entry:

DGMGRL> Reinstating database "xxxx", 
 please wait... 
Oracle Clusterware is restarting database "xxxx" ... 
Connected to "xxxx" 
Continuing to reinstate database "xxxx" ... 
Error: ORA-16653: failed to reinstate database 

Hardware Models

All Oracle Database Appliance hardware models with Oracle Data Guard configuration

Workaround

Follow these steps:

  1. On the primary machine, get the standby_became_primary_scn:
    SQL> select standby_became_primary_scn from v$database; 
    STANDBY_BECAME_PRIMARY_SCN 
    -------------------------- 
              3522449 
  2. On the old primary database, flashback to this SCN with RMAN with the backup encryption password:
    RMAN> set decryption identified by 'rman_backup_password' ; 
    executing command: SET decryption 
    RMAN> FLASHBACK DATABASE TO SCN 3522449 ; 
    ... 
    Finished flashback at 24-SEP-20 
    RMAN> exit 
  3. On the new primary machine, run the odacli reinstate-dataguard command.

This issue is tracked with Oracle bug 31884506.

Failure in Reinstating Oracle Data Guard

When reinstating Oracle Data Guard on Oracle Database Appliance, an error is encountered.

The odacli reinstate-dataguard command fails with the following error:
Message:   
DCS-10001:Internal error encountered: Unable to reinstate Dg.   

The dcs-agent.log file has the following error entry:

ORA-12514: TNS:listener does not currently know of service requested in connect descriptor

Hardware Models

All Oracle Database Appliance hardware models with Oracle Data Guard configuration

Workaround

Follow these steps:

  1. Make sure the database you are reinstating is started in MOUNT mode. To start the database in MOUNT mode, run this command:
    srvctl start database -d db-unique-name -o mount 
  2. After the above command runs successfully, run the odacli reinstate-dataguard command.

This issue is tracked with Oracle bug 32047967.

Error in updating Role after Oracle Data Guard operations

When performing operations with Oracle Data Guard on Oracle Database Appliance, an error is encountered in updating the Role.

The dbRole component described in the output of the odacli describe-database command is not updated after Oracle Data Guard switchover, failover, and reinstate operations on Oracle Database Appliance.

Hardware Models

All Oracle Database Appliance hardware models with Oracle Data Guard configuration

Workaround

Run the command odacli update-registry -n db -f (or --force) to update the database metadata. After the job completes, run the odacli describe-database command and verify that dbRole is updated.
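As a sketch, the update-and-verify sequence could look like the following. The database name and the -in flag on describe-database are assumptions to check against your release, and the run wrapper only prints the commands rather than executing them.

```shell
#!/bin/sh
# Dry-run wrapper: prints each command; replace the echo with "$@" to execute.
run() { echo "+ $*"; }

DB_NAME="mydb"   # hypothetical database name

# Refresh the database metadata, then check dbRole in the describe output.
run odacli update-registry -n db -f
run odacli describe-database -in "$DB_NAME"
```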

This issue is tracked with Oracle bug 31378202.

Error when recovering a single-instance database

When recovering a single-instance database, an error is encountered.

When a single-instance database is running on the remote node, and you run the operation for database recovery on the local node, the following error is observed:
DCS-10001:Internal error encountered: DCS-10001:Internal error encountered: 
Missing arguments : required sqlplus connection  information is not 
provided

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Perform recovery of the single-instance database on the node where the database is running.

This issue is tracked with Oracle bug 31399400.

Inconsistency in ORAchk summary and details report page

The ORAchk report summary on the Browser User Interface may show different counts of Critical, Failed, and Warning issues than the report detail page.

Hardware Models

Oracle Database Appliance hardware models bare metal deployments

Workaround

Ignore counts of Critical, Failed, and Warning issues in the ORAchk report summary on the Browser User Interface. Check the report detail page.

This issue is tracked with Oracle bug 30676674.

Missing DATA, RECO, and REDO entries when dbstorage is rediscovered

Running the odacli update-registry command with -n all --force or -n dbstorage --force option can result in metadata corruption.

Hardware Models

All Oracle Database Appliance hardware models bare metal deployments

Workaround

Run the command with the -n all option only on migrated systems where all the databases were created with OAKCLI. On other systems that run on the DCS stack, update each component other than dbstorage individually, using odacli update-registry -n component_name for each component except dbstorage.
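On DCS-stack systems, the per-component updates could be scripted as below. The component list is an assumption -- check odacli update-registry -h for the names valid in your release; the point is simply that dbstorage is left out. The run wrapper prints each command instead of executing it.

```shell
#!/bin/sh
# Dry-run wrapper: prints each command; replace the echo with "$@" to execute.
run() { echo "+ $*"; }

# Hypothetical component names -- verify against "odacli update-registry -h".
# dbstorage is deliberately excluded.
for comp in system gihome dbhome db; do
  run odacli update-registry -n "$comp"
done
```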

This issue is tracked with Oracle bug 30274477.

The odaeraser tool does not work if oakd is running in non-cluster mode

After cleaning up the deployment, the Secure Eraser tool does not work if oakd is running in non-cluster mode.

Hardware Models

All Oracle Database Appliance Hardware bare metal systems

Workaround

After cleanup of the deployment, oakd is started in non-cluster mode and cannot be stopped using the odaadmcli stop oak command. In this state, running the Secure Eraser tool causes the odaeraser command to fail.

Use the command odaadmcli shutdown oak to stop oakd.

This issue is tracked with Oracle bug 28547433.

Unrecognized Token Messages Appear in /var/log/messages

After updating Oracle Database Appliance, unrecognized token messages appear in /var/log/messages.

Updating to Oracle Database Appliance 12.1.2.11.0 updates the Oracle VM Server version to 3.4.3. After updating, the following messages appear in /var/log/messages:

Unrecognized token: "max_seq_redisc"
Unrecognized token: "rereg_on_guid_migr"
Unrecognized token: "aguid_inout_notice"
Unrecognized token: "sm_assign_guid_func"
Unrecognized token: "reports"
Unrecognized token: "per_module_logging"
Unrecognized token: "consolidate_ipv4_mask"

You can ignore the messages for these parameters; they do not impact the InfiniBand compliant Subnet Manager and Administration (opensm) functionality. However, Oracle recommends removing the parameters to avoid flooding /var/log/messages.

Hardware Models

Oracle Database Appliance X6-2-HA and X5-2 with InfiniBand

Workaround

Perform the following steps to remove the parameters:

  1. After patching, remove the parameters from the /etc/opensm/opensm.conf file, in bare metal deployments and in Dom0 in virtualized platform environments. To list the parameters that are present:

    cat /etc/opensm/opensm.conf | egrep -w 'max_seq_redisc|rereg_on_guid_migr|aguid_inout_notice|sm_assign_guid_func|reports|per_module_logging|consolidate_ipv4_mask' | grep -v ^#
    max_seq_redisc 0
    rereg_on_guid_migr FALSE
    aguid_inout_notice FALSE
    sm_assign_guid_func uniq_count
    reports 2
    per_module_logging FALSE
    consolidate_ipv4_mask 0xFFFFFFFF
  2. Reboot. The messages will not appear after rebooting the node.
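Step 1 can be illustrated with sed. This sketch works on a scratch copy of the file with sample content; point it at /etc/opensm/opensm.conf (and at the copy in Dom0 on virtualized platform) only after reviewing it, since the sed expression is an illustration and not Oracle-supplied tooling.

```shell
#!/bin/sh
# Work on a scratch copy; use /etc/opensm/opensm.conf when doing this for real.
CONF=/tmp/opensm.conf.sample

# Sample file containing the parameters reported by the egrep check.
cat > "$CONF" <<'EOF'
# opensm configuration (sample)
max_seq_redisc 0
rereg_on_guid_migr FALSE
aguid_inout_notice FALSE
sm_assign_guid_func uniq_count
reports 2
per_module_logging FALSE
consolidate_ipv4_mask 0xFFFFFFFF
EOF

# Delete the lines that set the unrecognized-token parameters.
sed -i -E '/^(max_seq_redisc|rereg_on_guid_migr|aguid_inout_notice|sm_assign_guid_func|reports|per_module_logging|consolidate_ipv4_mask)[[:space:]]/d' "$CONF"

# Re-running the egrep check from step 1 against this file now returns nothing.
```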

This issue is tracked with Oracle bug 25985258.