4 Known Issues with Oracle Database Appliance in This Release

The following are known issues deploying, updating, and managing Oracle Database Appliance in this release.

Known Issues When Patching Oracle Database Appliance

Understand the known issues when patching Oracle Database Appliance to this release.

Error in Oracle Grid Infrastructure upgrade

Oracle Grid Infrastructure upgrade fails, though the rootupgrade.sh script ran successfully.

The following messages are logged in the grid upgrade log file located under /opt/oracle/oak/log/<NODENAME>/patch/19.8.0.0.0/ .
ERROR: The clusterware active state is UPGRADE_AV_UPDATED 
INFO: ** Refer to the release notes for more information ** 
INFO: ** and suggested corrective action                 ** 

This error occurs because, when the root upgrade scripts run on the last node, the active version is not set to the correct state.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:
  1. As root user, run the following command on the second node:
     /u01/app/19.0.0.0/grid/rootupgrade.sh -f 
  2. After the command completes, verify that the cluster upgrade state is updated to [UPGRADE FINAL]:
    /u01/app/19.0.0.0/grid/bin/crsctl query crs activeversion -f 
    The cluster upgrade state is [UPGRADE FINAL] 
  3. Run the Oracle Database Appliance server patching process again to upgrade Oracle Grid Infrastructure, as shown in the example below.
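
For example, on a bare metal deployment the server patching process can be re-run as follows; the release number is an assumption for this release, and on Virtualized Platform you use the corresponding oakcli patching command instead:
# odacli update-server -v 19.8.0.0.0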

This issue is tracked with Oracle bug 31546654.

Error when patching 11.2.0.4 database homes

When patching 11.2.0.4 database home, an error is encountered.

The following error message is observed:
WARNING: 2020-07-09 04:34:56:  Errors found while running catbundle.sql on the Database db_name
INFO: 2020-07-09 04:34:56:  Please run catbundle.sql manually on Database db_name

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Use the following workaround:
  1. Navigate to the catbundle log files in the location /u01/app/db_user/product/11.2.0.4/home_name/cfgtoollogs/catbundle.
  2. Locate the log file for the failed operation using the date and timestamp in the warning message.
  3. Check for ORA errors in the file. Ignore the following ORA errors and fix the other errors.
    ORA-29809 ORA-29931 ORA-29830 ORA-00942 ORA-00955 ORA-01430 ORA-01432 
    ORA-01434 ORA-01435 ORA-01917 ORA-01920 ORA-01921 ORA-01952 ORA-02303 
    ORA-02443 ORA-04043 ORA-29832 ORA-29844 ORA-14452 ORA-06512 ORA-01927 
  4. Run catbundle.sql again, as shown in the example below.
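
A minimal sketch of re-running the script for an 11.2.0.4 Bundle Patch follows; the home path, SID, and the 'psu apply' arguments are assumptions, so use the values and arguments appropriate for your environment and bundle patch:
# su oracle
# export ORACLE_HOME=/u01/app/db_user/product/11.2.0.4/home_name
# export ORACLE_SID=db_sid
# cd $ORACLE_HOME/rdbms/admin
# sqlplus "/ as sysdba"
SQL> @catbundle.sql psu apply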

This issue is tracked with Oracle bug 31579565.

Error in SSD_LOCAL value after patching Oracle Database Appliance Virtualized Platform

After patching Oracle Database Appliance Virtualized Platform from release 18.8 to 19.8, there may be an error in SSD_LOCAL value.

After upgrading from Oracle Database Appliance release 18.8 to 19.8, on Virtualized Platform with SSD local disks for the operating system, the following output is displayed when you run the oakcli show version -detail command:
SSD_LOCAL                 0121                     Up-to-date, 0121 
SSD_LOCAL                 0212                     Up-to-date, 0212 

Hardware Models

All Oracle Database Appliance hardware models with Virtualized Platform deployments

Workaround

This is a display issue only. Ignore the inconsistency.

This issue is tracked with Oracle bug 31607196.

Error when running the prepatch report

When running Oracle ORAchk or the odacli create-prepatchreport command, an error is encountered.

The following messages may be displayed:
- CSS disktimeout is not set to the default value of 200 
- Cluster Synchronization Services (CSS) misscount not set to recommended value

Hardware Models

Oracle Database Appliance hardware models X6-2L, X6-2M, X6-2S, X7-2M, X7-2S, X8-2M, X8-2S

Workaround

Ignore the error messages and continue patching.
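
If you want to confirm the current CSS settings before ignoring the messages, a quick check, assuming the Oracle Grid Infrastructure home path used elsewhere in this chapter, is:
/u01/app/19.0.0.0/grid/bin/crsctl get css disktimeout
/u01/app/19.0.0.0/grid/bin/crsctl get css misscount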

This issue is tracked with Oracle bug 31631618.

Error when upgrading operating system on Virtualized Platform

When upgrading operating system on Virtualized Platform, an error is encountered.

The error message displays an unhandled python exception in the /opt/oracle/oak/bin/dcliagent.py file.

Hardware Models

All Oracle Database Appliance hardware models on Virtualized Platform

Workaround

The dcliagent.py agent starts after a few retries. Ignore the error message.

This issue is tracked with Oracle bug 31633374.

Error in patching database homes

An error is encountered when patching database homes on databases that have Standard Edition High Availability enabled.

When running the command odacli update-dbhome -v release_number on database homes that have Standard Edition High Availability enabled, an error is encountered.
WARNING::Failed to run the datapatch as db <db_name> is not in running state 

Hardware Models

All Oracle Database Appliance hardware models with High-Availability deployments

Workaround

Follow these steps:
  1. Locate the running node of the target database instance:
    srvctl status database -database dbUniqueName
    Or, relocate the single-instance database instance to the required node:
    odacli modify-database -g node_number (-th node_name) 
  2. On the running node, manually run the datapatch for non-CDB databases:
    dbhomeLocation/OPatch/datapatch
  3. For CDB databases, locate the PDB list using SQL*Plus, and then run datapatch for those PDBs:
    select name from v$containers where open_mode='READ WRITE';
    dbhomeLocation/OPatch/datapatch -pdbs pdb_names_found_in_previous_step_divided_by_comma

This issue is tracked with Oracle bug 31654816.

TFA not running after server or database patching

Oracle TFA does not run after server or database patching.

TFA is shut down during patching of Oracle Database and Oracle Database Appliance.

Hardware Models

All Oracle Database Appliance bare metal deployments

Workaround

Run tfactl start to start TFA manually.
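
For example, as the root user (assuming tfactl is in the PATH):
# tfactl start
# tfactl status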

This issue is tracked with Oracle bug 31091006.

Disk firmware not updated after patching

After patching Oracle Database Appliance, disk firmware is not updated on some Oracle Database Appliance hardware models.

The odacli describe-component command shows available version for disks as 0112 but the odacli update-storage and odacli update-server commands do not update the disk firmware.

Hardware Models

All Oracle Database Appliance X7-2-HA hardware models

Workaround

None

This issue is tracked with Oracle bug 30841243.

Error in server patching

An error is encountered when patching the server.

When running the command odacli update-server -v release_number, the following error is encountered:
DCS-10001:Internal error encountered: patchmetadata for 19.6.0.0.0 missing target version for GI.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:
  1. Change the file ownership temporarily to the appropriate grid user for the osdbagrp binary in the grid_home/bin location. For example:
    $ chown -R grid:oinstall /u01/app/18.0.0.0/grid/bin/osdbagrp
  2. Run either the update-registry -n gihome or the update-registry -n system command, as shown in the sketch below.
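
A sketch of the full sequence on a bare metal system, reusing the example path from step 1; the grid home release and the odacli path are assumptions for your environment:
# chown -R grid:oinstall /u01/app/18.0.0.0/grid/bin/osdbagrp
# /opt/oracle/dcs/bin/odacli update-registry -n gihome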

This issue is tracked with Oracle bug 31125258.

Error in database patching

An error is encountered when patching the database.

When running the command odacli update-dbhome -v release_number on database homes that have Standard Edition High Availability enabled, an error is encountered.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Run datapatch with the list of all valid PDBs, connecting to each database that runs from the database home. Follow these steps:
  1. Find all databases running from the database home. For example:
    odacli list-databases|grep DB home resource ID
  2. Find the PDBs for each database:
    # su oracle
    # export ORACLE_HOME=Database home location
    # export ORACLE_SID=Database SID
    # sqlplus "/ as sysdba"
    SQL> SELECT NAME, OPEN_MODE FROM V$CONTAINERS WHERE OPEN_MODE='READ WRITE';

    This query returns all PDB names including CDB$ROOT if the PDB is in "READ WRITE" mode.

  3. Exit from SQL*Plus.
  4. Run datapatch:
    # su oracle
    # export ORACLE_HOME=Database home location
    # export ORACLE_SID=Database SID
    # $ORACLE_HOME/OPatch/datapatch -pdbs "PDB name","PDB name"
  5. As root user, run the odacli update-dbhome command again to correct the metadata entries.
    # odacli update-dbhome -i dbhome id -v 19.7.0.0.0 

This issue is tracked with Oracle bug 31399885.

Server status not set to Normal when patching

When patching Oracle Database Appliance, an error is encountered.

When patching the appliance, the odacli update-server command fails with the following error:

DCS-10001:Internal error encountered: Server upgrade state is not NORMAL node_name 

Hardware Models

All Oracle Database Appliance hardware models

Workaround

  1. Run the command:
    Grid_home/bin/cluvfy stage -post crsinst -collect cluster -gi_upgrade -n all
  2. Ignore the following two warnings:
    Verifying OCR Integrity ...WARNING
    PRVG-6017 : OCR backup is located in the same disk group "+DATA" as OCR.
    
    Verifying Single Client Access Name (SCAN) ...WARNING
    PRVG-11368 : A SCAN is recommended to resolve to "3" or more IP
  3. Run the command again until the output displays only the two warnings above. The status of Oracle Clusterware should then be Normal again.

  4. You can verify the status with the command:
    Grid_home/bin/crsctl query crs activeversion -f

This issue is tracked with Oracle bug 30099090.

Error when patching to 12.1.0.2.190716 Bundle Patch

When patching Oracle Database release 12.1.0.2 to Oracle Database 12.1.0.2.190716 Bundle Patch, an error is encountered.

The ODACLI job displays the following error:

DCS-10001:Internal error encountered: Failed to run SQL script: datapatch script. 

The data patch log contains the entry "Prereq check failed, exiting without installing any patches.".

Hardware Models

All Oracle Database Appliance hardware models bare metal deployments

Workaround

Install the same patch again.
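
For example, re-running the same database home patch might look like the following; the database home ID is a placeholder, and the release number depends on the Oracle Database Appliance release that delivers the 12.1.0.2.190716 Bundle Patch:
# odacli update-dbhome -i dbhome_id -v release_number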

This issue is tracked with Oracle bugs 30026438 and 30155710.

Patching of M.2 drives not supported

Patching of M.2 drives (local disks SSDSCKJB48 and SSDSCKJB480G7) is not supported.

These drives are displayed when you run the odacli describe-component command. Patching is not supported for either of the two known M.2 disk versions, 0112 and 0121. Patching the LSI controller from version 13.00.00.00 to version 16.00.01.00 is also not supported. However, on some Oracle Database Appliance X8-2 models, the installed LSI controller version may already be 16.00.01.00.

Hardware Models

Oracle Database Appliance bare metal deployments

Workaround

None

This issue is tracked with Oracle bug 30249232.

11.2.0.4 databases fail to start after patching

After patching Oracle Database Appliance to release 18.3, databases of version 11.2.0.4 fail to start.

Hardware Models

All Oracle Database Appliance Hardware models

Workaround

Databases of versions 11.2.0.4.170814 and 11.2.0.4.171017 must be manually started after patching to Oracle Database Appliance release 18.3.

Start the databases with the command:
srvctl start database -db db_unique_name

This issue is tracked with Oracle bug 28815716.

Patching errors on Oracle Database Appliance Virtualized Platform

When applying the server patch for Oracle Database Appliance Virtualized Platform, an error is encountered.

Error Encountered When Patching Virtualized Platform:

When patching the appliance on Virtualized Platform, patching fails with an error similar to the following:

ERROR: Unable to apply the GRID patch 
ERROR: Failed to patch server (grid) component 

This error can occur even if you stopped Oracle TFA Collector before patching. During server patching on the node, Oracle TFA Collector is updated and this can restart the TFA processes, thus causing an error. To resolve this issue, follow the steps described in the Workaround.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

On Oracle Database Appliance Virtualized Platform, do the following:
  1. Run /etc/init.d/init.tfa stop on all the nodes in the cluster.
  2. Run the command:
    /u01/app/18.0.0.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_PREPATCH -status 

    Verify that the command output is SUCCESS.

  3. If the command output was SUCCESS, then run the following commands on all the nodes:
    /u01/app/18.0.0.0/grid/crs/install/rootcrs.sh -prepatch -rollback 
    /u01/app/18.0.0.0/grid/crs/install/rootcrs.sh -postpatch 
  4. Restart patching.

This issue is tracked with Oracle bug 30886701.

Patching Oracle Database home fails with errors

When applying the patch for Oracle Database homes, an error is encountered.

Error Encountered When Patching Oracle Database Homes on Bare Metal Systems:

When patching Oracle Database homes on bare metal systems, the odacli update-dbhome command fails with an error similar to the following:

Please stop TFA before dbhome patching.  

To resolve this issue, follow the steps described in the Workaround.

Error Encountered When Patching Oracle Database Homes on Virtualized Platform:

When patching Oracle Database homes on Virtualized Platform, patching fails with an error similar to the following:

INFO: Running prepatching on local node
WARNING: errors seen during prepatch on local node
ERROR: Unable to apply the patch 1  

Check the prepatch log file generated in the directory /opt/oracle/oak/log/hostname/patch/18.8.0.0.0. You can also view the prepatch log for the last run with the command ls -lrt prepatch_*.log. Check the last log file in the command output.

In the log file, search for entries similar to the following:

ERROR: date_time_stamp: TFA is running on one or more nodes.
WARNING: date_time_stamp: Shutdown TFA and then restart patching
INFO: date_time_stamp: Read the Release Notes for additional information. 

To resolve this issue, follow the steps described in the Workaround.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

On Oracle Database Appliance bare metal systems, do the following:
  1. Run tfactl stop on all the nodes in the cluster.
  2. Restart patching once Oracle TFA Collector has stopped on all nodes.
On Oracle Database Appliance Virtualized Platform, do the following:
  1. Run /etc/init.d/init.tfa stop on all the nodes in the cluster.
  2. Restart patching once Oracle TFA Collector has stopped on all nodes.

This issue is tracked with Oracle bugs 30799713 and 30892062.

Error in patching Oracle Database Appliance

When applying the server patch for Oracle Database Appliance, an error is encountered.

Error Encountered When Patching Bare Metal Systems:

When patching the appliance on bare metal systems, the odacli update-server command fails with the following error:

Please stop TFA before server patching.

To resolve this issue, follow the steps described in the Workaround.

Error Encountered When Patching Virtualized Platform:

When patching the appliance on Virtualized Platform, patching fails with an error similar to the following:

INFO: Running prepatching on local node
WARNING: errors seen during prepatch on local node
ERROR: Unable to apply the patch 1  

Check the prepatch log file generated in the directory /opt/oracle/oak/log/hostname/patch/18.8.0.0.0. You can also view the prepatch log for the last run with the command ls -lrt prepatch_*.log. Check the last log file in the command output.

In the log file, search for entries similar to the following:

ERROR: date_time_stamp: TFA is running on one or more nodes.
WARNING: date_time_stamp: Shutdown TFA and then restart patching
INFO: date_time_stamp: Read the Release Notes for additional information. 

To resolve this issue, follow the steps described in the Workaround.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

On Oracle Database Appliance bare metal systems, do the following:
  1. Run tfactl stop on all the nodes in the cluster.
  2. Restart patching once Oracle TFA Collector has stopped on all nodes.
On Oracle Database Appliance Virtualized Platform, do the following:
  1. Run /etc/init.d/init.tfa stop on all the nodes in the cluster.
  2. Restart patching once Oracle TFA Collector has stopped on all nodes.

This issue is tracked with Oracle bugs 30260318 and 30892062.

Versions of some components not updated after cleaning up and reprovisioning Oracle Database Appliance

Oracle Auto Service Request (ASR), Oracle TFA Collector, and Oracle ORAchk versions are not updated after cleaning up and reprovisioning Oracle Database Appliance.

When cleaning up and reprovisioning Oracle Database Appliance with release 19.8, the Oracle Auto Service Request (ASR), Oracle TFA Collector, and Oracle ORAchk RPMs may not be updated to release 19.8. These components are updated when you apply the patches for Oracle Database Appliance release 19.8.

Hardware Models

All Oracle Database Appliance deployments

Workaround

Update to the latest server patch for the release.
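
For example, after updating the repository with the server patch, check the component versions and apply the server patch; the release number shown is an assumption based on this release, and on Virtualized Platform you use the corresponding oakcli command instead:
# odacli describe-component
# odacli update-server -v 19.8.0.0.0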

This issue is tracked with Oracle bugs 28933900 and 30187516.

Known Issues When Deploying Oracle Database Appliance

Understand the known issues when provisioning or deploying Oracle Database Appliance.

Error when performing backup and recovery of Standard Edition High Availability Database

When performing backup and recovery of Standard Edition High Availability Database, an error is encountered.

Associating a backup configuration to Standard Edition High Availability Database, and backup and recovery operations of Standard Edition High Availability Database fail with the following error:

DCS-10089:Database  is in an invalid state 'NOT_RUNNING'. Database dbname must be running. 

Hardware Models

All Oracle Database Appliance hardware models

Workaround

None.

This issue is tracked with Oracle bug 31173818.

NTP service not running after rebooting node

The Network Time Protocol daemon (ntpd) fails to start after rebooting the node.

On Oracle Linux 7 environment, even though Network Time Protocol (NTP) was configured during provisioning, the Network Time Protocol daemon (ntpd) fails to start.

Hardware Models

All Oracle Database Appliance hardware models bare metal deployments

Workaround

Follow the instructions described in My Oracle Support Note 2422378.1.
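
While you apply the steps in the My Oracle Support note, you can check and start the ntpd service manually on Oracle Linux 7 as an interim measure:
# systemctl status ntpd
# systemctl start ntpd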

This issue is tracked with Oracle bug 31399685.

Cannot create 11.2.0.4 and 12.1 Oracle ACFS databases with Oracle Flex redundancy

Creation of 11.2.0.4 and 12.1 Oracle ACFS databases with Oracle Flex redundancy fails.

Hardware Models

All Oracle Database Appliance hardware deployments

Workaround

If an 11.2.0.4 or 12.1 database home does not already exist, create one.

Then create the 11.2.0.4 or 12.1 database based on that existing 11.2.0.4 or 12.1 database home.

This issue is tracked with Oracle bug 31016061.

Error when creating 11.2.0.4 database

An error is encountered when creating 11.2.0.4 databases.

When you run the command odacli create-database for 11.2.0.4 databases specifying the Oracle Database version as 18.10.0.0, the command fails.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Run the command specifying a five-digit Oracle Database version, for example, 18.10.0.0.200414.
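
For example, a sketch of the command with a five-digit version; the database name and the option short forms are placeholders and assumptions, so verify the options with odacli create-database -h:
# odacli create-database -n dbname -v 18.10.0.0.200414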

This issue is tracked with Oracle bug 31328317.

Error when creating or restoring 11.2.0.4 database

An error is encountered when creating or restoring 11.2.0.4 databases.

When you run the command odacli create-database or odacli irestore-database for 11.2.0.4 databases, the command fails to run at the Configuring DB Console step. This error may also occur when creating 11.2.0.4 databases using the Browser User Interface.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Run the commands without enabling DB Console.

This issue is tracked with Oracle bug 31017360.

Error when upgrading database from 11.2.0.4 to 12.1 or 12.2

When upgrading databases from 11.2.0.4 to 12.1 or 12.2, an error is encountered.

Database upgrade can cause the following warning in the UpgradeResults.html file, when upgrading database from 11.2.0.4 to 12.1 or 12.2:
Database is using a newer time zone file version than the Oracle home 

Hardware Models

All Oracle Database Appliance bare metal deployments

Workaround

  1. Refer to the Database Upgrade Guide for manual steps for fixing the time zone.
  2. After manually completing the database upgrade, run the following command to update DCS metadata:
    /opt/oracle/dcs/bin/odacli update-registry -n db -f

This issue is tracked with Oracle bug 31125985.

Error when creating 19c single-instance database

When creating 19c single-instance database, an error is encountered.

When creating a 19c single-instance database with different dbName and dbUniqueName, the password file is stored in the local storage instead of shared storage.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Use the same dbName and dbUniqueName when creating a 19c single-instance database.

This issue is tracked with Oracle bug 31194087.

Error when upgrading 12.1 single-instance database

When upgrading 12.1 single-instance database, a job failure error is encountered.

Hardware Models

All Oracle Database Appliance hardware models bare metal deployments

Workaround

Use the following workaround:
  1. Before upgrading the 12.1 single-instance database, run the following PL/SQL command to change the local_listener to an empty string:
    ALTER SYSTEM SET LOCAL_LISTENER='';
  2. After upgrading the 12.1 single-instance database successfully, run the following PL/SQL command to change the local_listener to the desired value:
    ALTER SYSTEM SET LOCAL_LISTENER='-oracle-none-'; 

This issue is tracked with Oracle bugs 31202775, 31214657, 31210407, and 31178058.

Failure in creating RECO disk group during provisioning

When provisioning Oracle Database Appliance X8-2-HA with High Performance configuration containing default storage and expansion shelf, creation of RECO disk group fails.

Hardware Models

All Oracle Database Appliance X8-2-HA with High Performance configuration

Workaround

  1. Power off storage expansion shelf.
  2. Reboot both nodes.
  3. Proceed with provisioning the default storage shelf (first JBOD).
  4. After the system is successfully provisioned with default storage shelf (first JBOD), check that oakd is running on both nodes in foreground mode.
     # ps -aef | grep oakd
  5. Check that all first JBOD disks have the status online, good in oakd, and CACHED in Oracle ASM.
  6. Power on the storage expansion shelf (second JBOD), wait for a few minutes for the operating system and other subsystems to recognize it.
  7. Run the following command from the master node to add the storage expansion shelf disks (two JBOD setup) to oakd and Oracle ASM.
    #odaadmcli show ismaster 
          OAKD is in Master Mode 
    
          # odaadmcli expand storage -ndisk 24 -enclosure 1 
           Skipping precheck for enclosure '1'... 
           Check the progress of expansion of storage by executing 'odaadmcli show disk'
           Waiting for expansion to finish ... 
          #  
  8. Check that the storage expansion shelf disks (two JBOD setup) are added to oakd and Oracle ASM.

Replace odaadmcli with oakcli commands on Oracle Database Appliance Virtualized Platform in the procedure.

For more information, see the chapter Managing Storage in the Oracle Database Appliance X8-2 Deployment Guide.

This issue is tracked with Oracle bug 30839054.

Simultaneous creation of two Oracle ACFS Databases fails

If you try to create two Oracle ACFS databases on a system where there is no database or database storage already created, then database creation fails for one of the databases with the following error:

DCS-10001:Internal error encountered: Fail to run command Failed to create volume.

Hardware Models

All Oracle Database Appliance bare metal deployments

Workaround

Manually delete the DATA volume (and REDO volume, in case of Oracle Database Appliance X8-2) from the system.

For High Performance configuration, run the following commands:
su - GRID_USER
export ORACLE_SID=+ASM1 (in case of first node) / +ASM2 (in case of second node)
export ORACLE_HOME=GRID_HOME
GRID_HOME/bin/asmcmd --nocp voldelete -G Data datdbname
For Oracle Database Appliance X8-2 High Performance configuration, remove the REDO volume as follows:
su - GRID_USER
export ORACLE_SID=+ASM1 (in case of first node) / +ASM2 (in case of second node)
export ORACLE_HOME=GRID_HOME
GRID_HOME/bin/asmcmd --nocp voldelete -G Reco rdodbname
For High Capacity configuration, run the following commands:
su - GRID_USER
export ORACLE_SID=+ASM1 (in case of first node) / +ASM2 (in case of second node)
export ORACLE_HOME=GRID_HOME
GRID_HOME/bin/asmcmd --nocp voldelete -G Flash datdbname (if the volume exists in the FLASH disk group)
GRID_HOME/bin/asmcmd --nocp voldelete -G Data datdbname (if the volume exists in the DATA disk group)
For Oracle Database Appliance X8-2 High Capacity configuration, remove the REDO volume as follows:
su - GRID_USER
export ORACLE_SID=+ASM1 (in case of first node) / +ASM2 (in case of second node)
export ORACLE_HOME=GRID_HOME
GRID_HOME/bin/asmcmd --nocp voldelete -G Flash rdodbname

This issue is tracked with Oracle bug 30750497.

Database creation hangs when using a deleted database name for database creation

Database creation hangs when you create a database with the same name as a previously deleted database.

If you delete an 11.2.0.4 database, and then create a new database with the same name as the deleted database, database creation hangs while unlocking the DBSNMP user for the database.

Hardware Models

All Oracle Database Appliance high-availability environments

Workaround

Before creating the 11.2.0.4 database with the same name as the deleted database, delete the DBSNMP user, if the user exists.

For example, the following command deletes the DBSNMP user for the database testdb:

/u01/app/18.0.0.0/grid/bin/crsctl delete wallet -type CVUDB -name testdb -user DBSNMP 

This issue is tracked with Oracle bug 28916487.

Error encountered after running cleanup.pl

Errors encountered in running odacli commands after running cleanup.pl.

After running cleanup.pl, when you try to use odacli commands, the following error is encountered:

DCS-10042:User oda-cliadmin cannot be authorized.

Hardware Models

All Oracle Database Appliance hardware models for bare metal deployments

Workaround

Run the following commands to set up the credentials for the user oda-cliadmin on the agent wallet:

# rm -rf /opt/oracle/dcs/conf/.authconfig  
# /opt/oracle/dcs/bin/setupAgentAuth.sh 

This issue is tracked with Oracle bug 29038717.

Accelerator volume for data is not created on flash storage

The accelerator volume for data is not created on flash storage, for databases created during provisioning of appliance.

Hardware Models

Oracle Database Appliance high capacity environments with HDD disks

Workaround

Do not create the database when provisioning the appliance. Provisioning without a database still creates all required disk groups, including flash. After provisioning the appliance, create the database; the accelerator volume is then created on flash storage.

This issue is tracked with Oracle bug 28836461.

Errors in clone database operation

Clone database operation fails due to errors.

If the source database is single-instance or Oracle RAC One Node, or running on the remote node, the clone database operation fails, because the paths are not created correctly in the control file.

Clone database operation may also fail with errors if the source database creation time stamp is too close to the clone operation (within 60 minutes).

Hardware Models

All Oracle Database Appliance high-availability hardware models for bare metal deployments

Workaround

Create the clone database from the source database instance that is running on the same node from which the clone database creation is triggered.

For Oracle Database 12c and later, synchronize the source database before the clone operation, by running the command:
SQL> alter system checkpoint;

This issue is tracked with Oracle bugs 29002563, 29002004, 29001906, 29001855, 29001631, 28995153, 28986643, 30309971, and 30228362.

Clone database operation fails

For Oracle Database release 12.1 databases, the database clone creation may fail because the default compatible version from Oracle binaries was set to 12.0.0.0.0.

Hardware Models

All Oracle Database Appliance high-availability hardware models for bare metal deployments

Workaround

Set the compatible value to that of the source database. Follow these steps:
  1. Change the parameter value.
    SQL> ALTER SYSTEM SET COMPATIBLE = '12.1.0.2.0' SCOPE=SPFILE;
  2. Shut down the database.
    SQL> SHUTDOWN IMMEDIATE
  3. Start the database.
    SQL> Startup
  4. Verify the parameter for the new value.
    SQL> SELECT name, value, description FROM v$parameter WHERE name ='compatible';

This issue is tracked with Oracle bug 30309914.

Known Issues When Managing Oracle Database Appliance

Understand the known issues when managing or administering Oracle Database Appliance.

Error in switchover operation with Oracle Data Guard

When performing switchover operation with Oracle Data Guard on Oracle Database Appliance, an error is encountered.

The Role component described in the output of the odacli describe-dataguardstatus command is inconsistent with the DGMGRL> show configuration; output. The command odacli switchover-dataguard fails because the Role component in odacli describe-dataguardstatus is not correct.

Hardware Models

All Oracle Database Appliance hardware models with Oracle Data Guard configuration

Workaround

Run odacli describe-dataguardstatus -i dgconfigId a few times to check if Role is updated. Perform the switchover operation after the Role component in the output of the odacli describe-dataguardstatus command is updated.
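
For example, to compare the two views of the configuration before attempting the switchover (the configuration ID is a placeholder):
# odacli describe-dataguardstatus -i dgconfigId
su - oracle
DGMGRL> show configuration;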

This issue is tracked with Oracle bugs 31428670 and 31584695.

Error in updating Role after Oracle Data Guard operations

When performing operations with Oracle Data Guard on Oracle Database Appliance, an error is encountered in updating the Role.

The dbRole component described in the output of the odacli describe-database command is not updated after Oracle Data Guard switchover, failover, and reinstate operations on Oracle Database Appliance.

Hardware Models

All Oracle Database Appliance hardware models with Oracle Data Guard configuration

Workaround

Run odacli update-registry -n db --force/-f to update the database metadata. After the job completes, run the odacli describe-database command and verify that dbRole is updated.
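
For example (the database object ID is a placeholder):
# odacli update-registry -n db -f
# odacli describe-database -i db_object_id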

This issue is tracked with Oracle bug 31378202.

Error in switchover operation on Oracle Data Guard with 11.2.0.4 database

When performing switchover operation with Oracle Data Guard with 11.2.0.4 database on Oracle Database Appliance, an error is encountered.

The following error message is displayed:
DCS-10001:Internal error encountered: Unable enqueue Id and update DgConfig. 

Hardware Models

All Oracle Database Appliance hardware models with Oracle Data Guard configuration

Workaround

  1. Check if all instances are running:
    srvctl status database -d new_standby_db_unique_name
  2. If any instance is not running, then start the instance manually (see the illustration after these steps):
    srvctl start instance
  3. Run odacli describe-dataguardstatus -i dgconfigId a few times to ensure the correct status on all nodes.
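
For step 2, a fuller illustration of the command for an 11.2.0.4 database follows; the database unique name and instance name are placeholders, and the short option forms are the ones used by 11.2 srvctl:
srvctl start instance -d new_standby_db_unique_name -i instance_name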

This issue is tracked with Oracle bug 31639494.

Error in configuring Oracle Data Guard on system with customer user group settings

When configuring Oracle Data Guard on a system with customer user group settings, an error is encountered.

The following error is seen in the step Upload password file to Standby database (Standby site).
DCS-10001:Internal error encountered: Unable to set file roles and groups to dbUser:oinstall.

Hardware Models

All Oracle Database Appliance hardware models with Oracle Data Guard configuration

Workaround

Manually copy the password file from the primary to the standby system and run the odacli configure-dataguard command with the --skip-password-copy/-s option.

Follow these steps:
  1. Locate the password file location on the primary system:
    srvctl config database -d dbUniqueName | grep -i password 
  2. If the output is in the Oracle ASM directory, then copy the password from the Oracle ASM directory to the local directory:
    su - grid 
    asmcmd 
    ASMCMD> pwcopy +DATA/system2/PASSWORD/orapwsystem2 /tmp/orapwsystem2
  3. If the output is empty, check the directory at /dbHome/dbs/orapwdbName. For example, the orapwd can be in /u01/app/oracle/product/19.0.0.0/dbhome_1/dbs/orapwsystem2.
  4. On the standby system, back up the existing password file (for example, by renaming it with an .ori suffix) and then copy the password file from the primary system:
    mv /u01/app/oracle/product/19.0.0.0/dbhome_1/dbs/orapwsystem2 /u01/app/oracle/product/19.0.0.0/dbhome_1/dbs/orapwsystem2.ori
    scp root@primaryHost:/u01/app/oracle/product/19.0.0.0/dbhome_1/dbs/orapwsystem2 /u01/app/oracle/product/19.0.0.0/dbhome_1/dbs/orapwsystem2
  5. Change the standby orapwd file permissions:
    chown -R oracle /u01/app/oracle/product/19.0.0.0/dbhome_1/dbs/orapwsystem2
    chgrp oinstall /u01/app/oracle/product/19.0.0.0/dbhome_1/dbs/orapwsystem2 
  6. Check the password file location and copy to Oracle ASM directory, if necessary:
    srvctl config database -d system2 | grep -i password 
    Password file: +DATA/system2/PASSWORD/orapwsystem2 
  7. Copy the password from the local folder to the Oracle ASM directory:
    su - grid 
    asmcmd 
    ASMCMD> pwcopy /u01/app/oracle/product/19.0.0.0/dbhome_1/dbs/orapwsystem2 
    +DATA/system2/PASSWORD/orapwsystem2

This issue is tracked with Oracle bug 31616641.

Error in configuring Oracle Data Guard with protection mode and transport type

When configuring Oracle Data Guard with protection mode as Max Protection and transport type as SYNC, an error is encountered.

Hardware Models

All Oracle Database Appliance hardware models with Oracle Data Guard configuration

Workaround

Configure Oracle Data Guard with the default protection mode Max Performance and the default transport type ASYNC. After Oracle Data Guard is successfully configured, manually change the protection mode and transport type:
su - oracle 
DGMGRL> edit database primary_db_unique_name set property  
'LogXptMode'='SYNC'; 
Property "LogXptMode" updated 
DGMGRL> edit database standby_db_unique_name set property  
'LogXptMode'='SYNC'; 
Property "LogXptMode" updated 
DGMGRL>  EDIT CONFIGURATION SET PROTECTION MODE AS MAXAVAILABILITY; 
DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MAXPROTECTION; 

This issue is tracked with Oracle bugs 31600966 and 31601031.

Error in Oracle Data Guard failover operation

When performing an Oracle Data Guard failover operation, an error is encountered.

Running the odacli failover-dataguard command fails with the following error:
DCS-10001 - UNABLE TO PRECHECKFAILOVERDG11G DG
The dcs-agent.log file contains this line:
PHYSICAL STANDBY|YES  |NO

Hardware Models

All Oracle Database Appliance hardware models with Oracle Data Guard configuration

Workaround

Check if flashback is enabled on the standby database. If not, enable flashback and retry the odacli failover-dataguard command.

Follow these steps:
  1. Check if flashback is enabled on the standby database:
    select flashback_on from v$database;  
  2. If the output is No, then run the following:
    alter database recover managed standby database cancel; 
    alter database flashback on; 
    alter database recover managed standby database using current logfile 
    disconnect; 
  3. Retry the odacli failover-dataguard command:
    odacli failover-dataguard 

This issue is tracked with Oracle bug 31626430.

Error in Oracle Data Guard reinstate operation

When performing an Oracle Data Guard reinstate operation, an error is encountered.

Running the odacli reinstate-dataguard command fails with the following error:
DCS-10001:Internal error encountered: Unable to reinstate Dg
The dcs-agent.log file contains this line:
Error: ORA-12514: TNS:listener does not currently know of service requested in connect descriptor

Hardware Models

All Oracle Database Appliance hardware models with Oracle Data Guard configuration

Workaround

Check the Oracle Data Guard configuration status with DGMGRL. If the configuration reports success, update the Oracle Data Guard status metadata.

Follow these steps:
  1. Check Oracle Data Guard status:
    DGMGRL> show configuration;  
  2. If the status is SUCCESS, then the reinstate job was actually successful. Run odacli describe-dataguardstatus on both primary and standby systems to update Oracle Data Guard status.
    odacli describe-dataguardstatus

This issue is tracked with Oracle bug 31571682.

Inconsistency in database version on Oracle Data Guard primary and standby systems after restore operation

There is an inconsistency in database version on Oracle Data Guard primary and standby systems after performing a restore operation.

Hardware Models

All Oracle Database Appliance hardware models with Oracle Data Guard configuration

Workaround

Update the repository with the correct Oracle Database clone files before running the restore operation.
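
For example, to register the Oracle Database clone files in the repository before the restore (the file name is a placeholder for the clone bundle you downloaded):
# odacli update-repository -f /tmp/odacli-dcs-db-clone-file.zip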

This issue is tracked with Oracle bug 31616944.

Delay in completing delete filegroup operation

There may be a delay in completing the delete filegroup operation if there are a large number of files to be deleted.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

None. Wait until the operation completes.

This issue is tracked with Oracle bug 31534406.

Error when relocating database

When relocating a database whose host name contains uppercase letters, an error is encountered.

If the database host name has uppercase letters, then the operation to relocate the database fails.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Specify the host name with lower case letters, or use the -g option to specify target node number.

This issue is tracked with Oracle bug 31386630.

Error when connecting to the database after relocation

When connecting to the database after relocation, an error is encountered.

After relocating a cloned database, there is an error when connecting to the database using SQL*Plus.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

  1. Copy the password file from the current location to /u02/app/oracle/oradata/sourcedbUniqueName/dbName/dbs (see the sketch after these steps).
  2. Change the password file owner and group to oracle:oinstall.
  3. Modify the password file location using the command:
    srvctl modify database -db dbUniqueName -pwfile newPwFileLoc
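
A minimal sketch of these steps, using placeholder paths and file names, might be:
cp current_pwfile_location /u02/app/oracle/oradata/sourcedbUniqueName/dbName/dbs/
chown oracle:oinstall /u02/app/oracle/oradata/sourcedbUniqueName/dbName/dbs/orapwdbName
srvctl modify database -db dbUniqueName -pwfile /u02/app/oracle/oradata/sourcedbUniqueName/dbName/dbs/orapwdbName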

This issue is tracked with Oracle bug 31317837.

Error when recovering a single-instance database

When recovering a single-instance database, an error is encountered.

When a single-instance database is running on the remote node, and you run the operation for database recovery on the local node, the following error is observed:
DCS-10001:Internal error encountered: DCS-10001:Internal error encountered: Missing arguments : required sqlplus connection information is not provided

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Perform recovery of the single-instance database on the node where the database is running.

This issue is tracked with Oracle bug 31399400.

Errors when running ORAchk or the odacli create-prepatchreport command

When you run ORAchk or the odacli create-prepatchreport command, an error is encountered.

The following error messages may be seen:
Table AUD$[FGA_LOG$] should use Automatic Segment Space Management 
diagsnap or pstack are configured to collect first failure diagnostic
Initialization parameter RESOURCE_MANAGER_PLAN should be set. 
One or more log archive destination and alternate log archive destination settings are not as recommended 
Software home check failed 

Hardware Models

Oracle Database Appliance hardware models bare metal deployments

Workaround

Ignore the error messages and continue the deployment.

This issue is tracked with Oracle bug 30931017.

Database ID incorrectly displayed in odacli describe-database output

Database ID is incorrectly displayed in the output of the command odacli describe-database.

The ID field in the output of the command odacli describe-database incorrectly displays the databaseId instead of the database object ID.

Hardware Models

All Oracle Database Appliance bare metal deployments

Workaround

Run the odacli list-databases command to view the correct ID. You can also view the correct ID details using the Browser User Interface.
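
For example, to view the correct database ID before using it in other commands:
# odacli list-databases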

This issue is tracked with Oracle bug 31121016.

Error when rebooting the appliance

When rebooting Oracle Database Appliance, the user interactive screen is displayed.

Hardware Models

Oracle Database Appliance X7-2-HA hardware models

Workaround

From the system console, select or highlight the kernel using the Up or Down arrow keys and then press Enter to continue with the reboot of the appliance.

This issue is tracked with Oracle bug 31196452.

Job history not erased after running cleanup.pl

After running cleanup.pl, job history is not erased.

After running cleanup.pl, when you run the /opt/oracle/dcs/bin/odacli list-jobs command, the list is not empty.

Hardware Models

All Oracle Database Appliance hardware models for bare metal deployments

Workaround

  1. Stop the DCS Agent by running the following commands on both nodes.

    For Oracle Linux 6, run:

    initctl stop initdcsagent 

    For Oracle Linux 7, run:

    systemctl stop initdcsagent 
  2. Run the cleanup script sequentially on both the nodes.

This issue is tracked with Oracle bug 30529709.

Inconsistency in ORAchk summary and details report page

The ORAchk report summary on the Browser User Interface may show different counts of Critical, Failed, and Warning issues than the report detail page.

Hardware Models

Oracle Database Appliance hardware models bare metal deployments

Workaround

Ignore counts of Critical, Failed, and Warning issues in the ORAchk report summary on the Browser User Interface. Check the report detail page.

This issue is tracked with Oracle bug 30676674.

Missing DATA, RECO, and REDO entries when dbstorage is rediscovered

Running the odacli update-registry command with -n all --force or -n dbstorage --force option can result in metadata corruption.

Hardware Models

All Oracle Database Appliance hardware models bare metal deployments

Workaround

Use the -n all option only on migrated systems where all the databases were created with OAKCLI. On other systems that run on the DCS stack, update each component other than dbstorage individually, using the command odacli update-registry -n component_name_to_be_updated_excluding_dbstorage.

This issue is tracked with Oracle bug 30274477.

The odaeraser tool does not work if oakd is running in non-cluster mode

After cleaning up the deployment, the Secure Eraser tool does not work if oakd is running in non-cluster mode.

Hardware Models

All Oracle Database Appliance Hardware bare metal systems

Workaround

After cleanup of the deployment, oakd is started in non-cluster mode, and it cannot be stopped using the "odaadmcli stop oak" command. In such a case, if the Secure Eraser tool is run, then the odaeraser command fails.

Use the command odaadmcli shutdown oak to stop oakd.
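
For example, stop oakd and then confirm that the process is no longer running (the ps check mirrors the one used earlier in this chapter):
# odaadmcli shutdown oak
# ps -aef | grep oakd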

This issue is tracked with Oracle bug 28547433.

Issues with the Web Console on Microsoft web browsers

Oracle Database Appliance Web Console has issues on Microsoft Edge and Microsoft Internet Explorer web browsers.

Following are issues with Microsoft web browsers:
  • Oracle Database Appliance Web Console does not display correctly on Microsoft Edge and Microsoft Internet Explorer web browsers.
  • Advanced Information for the appliance does not display on Microsoft Internet Explorer web browser.
  • Job activity status does not refresh in the Web Console on Microsoft Internet Explorer web browser.
  • After configuring the oda-admin password, the following error is displayed:
    Failed to change the default user (oda-admin) account password. 
    Status Code: 500 DCS-10001: DCS-10001:Internal error encountered: User not authorized

    Workaround: Close the Microsoft Internet Explorer browser session and open another browser session.

Hardware Models

All Oracle Database Appliance Hardware Models bare metal deployments

Workaround

To access the Web Console, use either Google Chrome or Firefox.

This issue is tracked with Oracle bugs 30077007, 30099089, and 29887027.

Disk space issues due to Zookeeper logs size

The Zookeeper log files, zookeeper.out and /opt/zookeeper/log/zkMonitor.log, are not rotated, when new logs are added. This can cause disk space issues.

Hardware Models

All Oracle Database Appliance hardware models for bare metal deployments

Workaround

Rotate the zookeeper log file manually, if the log file size increases, as follows:

  1. Stop the DCS-agent service for zookeeper on both nodes.

    initctl stop initdcsagent
  2. Stop the zookeeper service on both nodes.

    /opt/zookeeper/bin/zkServer.sh stop
  3. Clean the zookeeper logs after taking the backup, by manually deleting the existing file or by following steps 4 to 10.

  4. Set the ZOO_LOG_DIR as an environment variable to a different log directory, before starting the zookeeper server.

    export ZOO_LOG_DIR=/opt/zookeeper/log
  5. Switch to ROLLINGFILE, to set the capability to roll.

    export ZOO_LOG4J_PROP="INFO, ROLLINGFILE"
    Restart the zookeeper server, for the changes to take effect.
  6. Set the following parameters in the /opt/zookeeper/conf/log4j.properties file, to limit the number of backup files, and the file sizes.

    zookeeper.log.dir=/opt/zookeeper/log
    zookeeper.log.file=zookeeper.out
    log4j.appender.ROLLINGFILE.MaxFileSize=10MB
    log4j.appender.ROLLINGFILE.MaxBackupIndex=10
  7. Start zookeeper on both nodes.

    /opt/zookeeper/bin/zkServer.sh start
  8. Check the zookeeper status, and verify that zookeeper runs in leader/follower/standalone mode.

    /opt/zookeeper/bin/zkServer.sh status
    ZooKeeper JMX enabled by default
    Using config: /opt/zookeeper/bin/../conf/zoo.cfg
    Mode: follower
  9. Start the dcs agent on both nodes.

    initctl start initdcsagent
  10. Purge the zookeeper monitor log, zkMonitor.log, in the location /opt/zookeeper/log. You do not have to stop the zookeeper service.

This issue is tracked with Oracle bug 29033812.

Error after running the cleanup script

After running the cleanup.pl script, the following error message appears: DCS-10001:Internal error encountered: Fail to start hand shake.

The error is caused when you run the following steps:

  1. Run cleanup.pl on the first node (Node0). Wait until the cleanup script finishes, then reboot the node.

  2. Run cleanup.pl on the second node (Node1). Wait until the cleanup script finishes, then reboot the node.

  3. After both nodes are started, use the command-line interface to list the jobs on Node0. An internal error appears.

    # odacli list-jobs
    DCS-10001:Internal error encountered: Fail to start hand shake to localhost:7070

Hardware Models

Oracle Database Appliance X7-2-HA

Workaround

  1. Verify the zookeeper status on the both nodes before starting dcsagent:

    /opt/zookeeper/bin/zkServer.sh status

    For a single-node environment, the status should be: leader, or follower, or standalone.

  2. Restart the dcsagent on Node0 after running the cleanup.pl script.

    # systemctl stop initdcsagent 
    # systemctl start initdcsagent

This issue is tracked with Oracle bug 26996134.

Error in attaching vdisk to guest VM

When attaching virtual disks (vdisks) to a guest virtual machine on Oracle Database Appliance Virtualized Platform, an error may be encountered.

When multiple vdisks from the oda_base driver_domain are attached to the guest VM, their entries are not written to the xenstore, the vdisks are not attached to the VM, and the VM may not start.

The following errors are logged on xen-hotplug.log in ODA_BASE:
xenstore-write: could not write path backend/vbd/6/51728/node 
xenstore-write: could not write path backend/vbd/6/51728/hotplug-error 

Hardware Models

Oracle Database Appliance Virtualized Platform

Workaround

  1. Add the following entry to the /etc/sysconfig/xencommons file in dom0:
    XENSTORED_ARGS="--entry-nb=4096 --transaction=512"
  2. Reboot dom0.

This issue is tracked with Oracle bug 30886365.

Unrecognized Token Messages Appear in /var/log/messages

After updating Oracle Database Appliance, unrecognized token messages appear in /var/log/messages.

Updating to Oracle Database Appliance 12.1.2.11.0 updates the Oracle VM Server version to 3.4.3. After updating, the following messages appear in /var/log/messages:

Unrecognized token: "max_seq_redisc"
Unrecognized token: "rereg_on_guid_migr"
Unrecognized token: "aguid_inout_notice"
Unrecognized token: "sm_assign_guid_func"
Unrecognized token: "reports"
Unrecognized token: "per_module_logging"
Unrecognized token: "consolidate_ipv4_mask"

You can ignore the messages for these parameters; they do not impact the InfiniBand compliant Subnet Manager and Administration (opensm) functionality. However, Oracle recommends removing the parameters to avoid flooding /var/log/messages.

Hardware Models

Oracle Database Appliance X6-2-HA and X5-2 with InfiniBand

Workaround

Perform the following to remove the parameters:

  1. After patching, update the /etc/opensm/opensm.conf file in bare metal deployments and in Dom0 in virtualized platform environment to remove the parameters.

    cat /etc/opensm/opensm.conf | egrep -w 'max_seq_redisc|rereg_on_guid_migr|aguid_inout_notice|sm_assign_guid_func|reports|per_module_logging|consolidate_ipv4_mask' | grep -v ^#
    max_seq_redisc 0
    rereg_on_guid_migr FALSE
    aguid_inout_notice FALSE
    sm_assign_guid_func uniq_count
    reports 2
    per_module_logging FALSE
    consolidate_ipv4_mask 0xFFFFFFFF
  2. Reboot. The messages will not appear after rebooting the node.

This issue is tracked with Oracle bug 25985258.