4 Known Issues with Oracle Database Appliance in This Release

The following are known issues when deploying, updating, and managing Oracle Database Appliance in this release.

Known Issues When Patching Oracle Database Appliance

Understand the known issues when patching Oracle Database Appliance to this release.

TFA not running after server or database patching

Oracle TFA does not run after server or database patching.

TFA is shut down during patching of Oracle Database and Oracle Database Appliance.

Hardware Models

All Oracle Database Appliance bare metal deployments

Workaround

Run tfactl start to start TFA manually.
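
For example, a minimal sketch, assuming the default TFA location on Oracle Database Appliance:
# Start TFA manually
/opt/oracle/tfa/tfa_home/bin/tfactl start
# Verify that TFA is running
/opt/oracle/tfa/tfa_home/bin/tfactl print status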

This issue is tracked with Oracle bug 31091006.

Error when updating DCS components during patching of Oracle Database Appliance

When updating DCS components during patching of Oracle Database Appliance, an error is encountered.

After updating DCS components from 19.5 or 18.8 to 19.6, using the command odacli update-dcscomponents -v 19.6.0.0.0, the job to delete SSH keys may display as failed when running the command odacli list-jobs.

For example:
# odacli list-jobs
......
8666377b-ef22-4e32-949a-d57ecef95a2b    SSH keys update    April 18, 2020 9:45:58 AM CST    Success
e8f317c2-8958-4939-a825-2cd69a505eb1    SSH key delete     April 18, 2020 9:46:03 AM CST    Success
2ebfc631-235e-4dee-8f3e-3a4d09eedd4e    SSH keys update    April 18, 2020 9:46:30 AM CST    Success
c336aab7-793a-4f1d-b256-84034345dbdf    SSH key delete     April 18, 2020 9:46:39 AM CST    Failure

The following error may display when running the command odacli describe-job -i c336aab7-793a-4f1d-b256-84034345dbdf:
DCS-10110:Failed to complete the operation.

Hardware Models

All Oracle Database Appliance hardware models bare metal deployments

Workaround

If SSH equivalence for the root user was not removed after the odacli update-dcscomponents job, then remove it manually. Subsequent jobs do not fail even if this SSH equivalence is not deleted.
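
For example, a sketch, assuming the standard OpenSSH layout (the peer node name node2 is illustrative):
# Passwordless SSH as root succeeds only if equivalence still exists
ssh root@node2 hostname
# If it succeeds, edit /root/.ssh/authorized_keys on each node and delete the peer node's root key entry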

This issue is tracked with Oracle bug 31194672.

Error in patching Oracle Database Appliance server

When patching Oracle Database Appliance, an error is encountered.

When patching the appliance, the Oracle Grid Infrastructure clone.pl script fails to run.

Hardware Models

All Oracle Database Appliance hardware models with a custom grid user name that was migrated from the OAKCLI to the ODACLI stack in Oracle Database Appliance release 18.3

Workaround

  1. Remove the directory /u01/app/19.0.0.0 on both nodes.
  2. Create the directory /u01/app/grid_user on both nodes.
  3. Set permissions to 755 for the directory created in step 2.
  4. Set owner as grid_user for the directory created in step 2.
  5. Retry patching by running the odacli update-server command.

    For example, for grid user tmpgrid, the steps are:

    rm -rf /u01/app/19.0.0.0 
    mkdir /u01/app/tmpgrid 
    chmod 755 /u01/app/tmpgrid 
    chown tmpgrid /u01/app/tmpgrid 
    odacli update-server -v 19.6.0.0

This issue is tracked with Oracle bug 31111872.

Error when running the odacli create-prepatchreport command after operating system upgrade

An error is encountered when running the odacli create-prepatchreport command after operating system upgrade.

The following error is displayed:
# /opt/oracle/dcs/bin/odacli create-prepatchreport -v 19.6.0.0.0  -j -os 
DCS-10001:Internal error encountered: Patch tag is null for component os.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

After successfully patching the operating system, run the odacli create-prepatchreport command without the -os option.
# /opt/oracle/dcs/bin/odacli create-prepatchreport -v 19.6.0.0.0 -j 

This issue is tracked with Oracle bug 31024383.

Disk firmware not updated after patching

After patching Oracle Database Appliance, disk firmware is not updated on some Oracle Database Appliance hardware models.

The odacli describe-component command shows available version for disks as 0112 but the odacli update-storage and odacli update-server commands do not update the disk firmware.

Hardware Models

All Oracle Database Appliance X7-2-HA hardware models

Workaround

None

This issue is tracked with Oracle bug 30841243.

Error after upgrading to Oracle Linux 7

After upgrading to Oracle Linux 7, an error is encountered.

If your Oracle Database Appliance X7-2-HA system was provisioned with a version prior to Oracle Database Appliance release 18.3, then after upgrading the operating system to Oracle Linux 7, the system host name may be reset to oak0/oak1 and the DCS stack may be reinitialized.

Hardware Models

Oracle Database Appliance X7-2-HA systems provisioned with a version prior to Oracle Database Appliance release 18.3

Workaround

Before upgrading the operating system to Oracle Linux 7, manually run the following command:
# touch /root/.setupdcsfile

This issue is tracked with Oracle bug 31228390.

Error when upgrading to Oracle Linux 7

When upgrading to Oracle Linux 7, an error is encountered.

The following error message is displayed:
[main] ERROR com.oracle.dcs.commons.utils.patching.CommonsPatchingUtils -  
getLVMFreeSpace: 
exception seen when calculating the actual value. 
DCS-10001:Internal error encountered: Failed to get the LVM free space. 
The error occurs due to any of the following reasons:
  • The Oracle Database Appliance Backup and Recovery (ODABR) tool is installed on the system, but no ODABR snapshots are present on the node.
  • You may have modified the NLS setting, so that a numeric value is returned in a format other than the format the code expects (nnnn.nn).

Hardware Models

Oracle Database Appliance bare metal deployments

Workaround

Use any of the following workarounds:

  • Create the ODABR snapshots manually by running the command /opt/odabr/odabr backup -snap.
  • At the command prompt, set the locale to C and then run the server update:
    $ LC_ALL=C 
    $ export LC_ALL 
    $ odacli update-server ...

This issue is tracked with Oracle bug 31214103.

Error in network interface connection after operating system upgrade

After operating system upgrade, an error in network interface connection is encountered.

After the upgrade, interfaces em2 and em3 show no link. For example:
ethtool em3
Settings for em3:
        Supported ports: [ FIBRE ]
        ...
        Link detected: no 

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Add the customized options manually from the backup file at /etc/sysconfig/network-scripts/bkupIfcfgUpg/ifcfg-*.
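
For example, a minimal sketch (the interface name em3 is illustrative):
# Compare the upgraded ifcfg file with the pre-upgrade backup
diff /etc/sysconfig/network-scripts/bkupIfcfgUpg/ifcfg-em3 /etc/sysconfig/network-scripts/ifcfg-em3
# Merge any customized options (for example, ETHTOOL_OPTS) into the live file, then restart the interface
ifdown em3 && ifup em3

This issue is tracked with Oracle bug 31358688.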

Error encountered when running upgrade script

When upgrading Oracle Database Appliance, an error is encountered.

When upgrading Oracle Grid Infrastructure from Oracle Database Appliance release 18.8 to 19.6, during the execution of the rootupgrade script, the process hangs.

Check the hung process; the process list shows an entry similar to the following:

grid  PID PPID /bin/bash /sbin/weak-modules --verbose --dry-run --no-initramfs --add-modules 

Check if the CPU time for the PID process increases steadily.
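
For example, a sketch for monitoring the process (the PID is from the process list above):
# Find the weak-modules process
ps -ef | grep weak-modules | grep -v grep
# Check repeatedly whether the cumulative CPU time (TIME column) keeps increasing
ps -o pid,etime,time,cmd -p pid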

Hardware Models

Oracle Database Appliance bare metal deployments

Workaround

Manually complete the Oracle Grid Infrastructure upgrade:
  1. As the root user, run the root upgrade script. On a high-availability system, run the command on Node0 and then on Node1.
    /u01/app/19.0.0.0/grid/rootupgrade.sh 
  2. Ensure that the rootupgrade script has completed successfully on all the nodes in the cluster.
    /u01/app/19.0.0.0/grid/bin/crsctl query crs activeversion -f
     
    # /u01/app/19.0.0.0/grid/bin/crsctl query crs activeversion -f 
    Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster  
    upgrade state is [UPGRADE FINAL]. The cluster active patch level is  
    [3225354603].
  3. If the upgrade state is UPGRADE FINAL, then create the /tmp/config_assists.rsp file manually:
    cp /opt/oracle/dcs/rdbaas/config/grid_config_resp_122 /tmp/config_assists.rsp
  4. Edit /tmp/config_assists.rsp and make the following changes:
    Ensure that oracle.install.option=CRS_CONFIG is set. 
    
    Change oracle.install.asm.SYSASMPassword=welcome1 to oracle.install.asm.SYSASMPassword=WelCome#_123 
    
    Change oracle.install.asm.monitorPassword=welcome1 to oracle.install.asm.monitorPassword=WelCome#_123 
    
    Change oracle.install.config.emAdminPassword=welcome1 to oracle.install.config.emAdminPassword=WelCome#_123 
  5. Add the following lines to the end of the file:
    oracle.assistants.asm|S_ASMPASSWORD=WelCome#_123 
    oracle.assistants.asm|S_ASMMONITORPASSWORD=WelCome#_123 
    oracle.crs|oracle_install_crs_MgmtDB_CDB=false 
    oracle.crs|oracle_install_crs_ConfigureMgmtDB=false 
    oracle.crs|oracle_install_crs_LaunchCluvfy=false
  6. Run the configuration tools using the edited response file:
    /u01/app/19.0.0.0/grid/gridSetup.sh -responseFile /tmp/config_assists.rsp 
    -executeConfigTools -silent -ignorePrereqFailure
  7. After gridSetup.sh runs successfully, run the command:
    /u01/app/19.0.0.0/grid/bin/crsctl query crs activeversion -f 
  8. Check the output and confirm that the upgrade state is [NORMAL].
    #  /u01/app/19.0.0.0/grid/bin/crsctl query crs activeversion -f 
    Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster 
    upgrade state is [NORMAL]. The cluster active patch level is [3225354603]. 

This issue is tracked with Oracle bug 31233647.

Error when upgrading database from 11.2.0.4 to 12.1 or 12.2

When upgrading databases from 11.2.0.4 to 12.1 or 12.2, an error is encountered.

Database upgrade can cause the following warning in the UpgradeResults.html file, when upgrading database from 11.2.0.4 to 12.1 or 12.2:
Database is using a newer time zone file version than the Oracle home 

Hardware Models

All Oracle Database Appliance bare metal deployments

Workaround

  1. Refer to the Database Upgrade Guide for the manual steps to fix the time zone. (A sketch for checking the current time zone file version follows this list.)
  2. After manually completing the database upgrade, run the following command to update the DCS metadata:
    /opt/oracle/dcs/bin/odacli update-registry -n db -f
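
As a sketch for step 1, check the time zone file version that the database currently uses:
SQL> SELECT version FROM v$timezone_file;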

This issue is tracked with Oracle bug 31121016.

Server status not set to Normal when patching

When patching Oracle Database Appliance, an error is encountered.

When patching the appliance, the odacli update-server command fails with the following error:

DCS-10001:Internal error encountered: Server upgrade state is not NORMAL node_name 

Hardware Models

All Oracle Database Appliance hardware models

Workaround

  1. Run the command:
    Grid_home/bin/cluvfy stage -post crsinst -collect cluster -gi_upgrade -n all
  2. Ignore the following two warnings:
    Verifying OCR Integrity ...WARNING
    PRVG-6017 : OCR backup is located in the same disk group "+DATA" as OCR.
    
    Verifying Single Client Access Name (SCAN) ...WARNING
    PRVG-11368 : A SCAN is recommended to resolve to "3" or more IP addresses
  3. Run the command again until the output displays only the two warnings above. The status of Oracle Clusterware should then be Normal.

  4. You can verify the status with the command:
    Grid_home/bin/crsctl query crs activeversion -f

This issue is tracked with Oracle bug 30099090.

Error when patching to 12.1.0.2.190716 Bundle Patch

When patching Oracle Database release 12.1.0.2 to Oracle Database 12.1.0.2.190716 Bundle Patch, an error is encountered.

The ODACLI job displays the following error:

DCS-10001:Internal error encountered: Failed to run SQL script: datapatch script. 

The data patch log contains the entry "Prereq check failed, exiting without installing any patches.".

Hardware Models

All Oracle Database Appliance hardware models bare metal deployments

Workaround

Install the same patch again.
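
For example, a sketch (the home ID and version are illustrative; find the ID with odacli list-dbhomes):
# Re-run the patch for the database home
odacli update-dbhome -i dbhome_id -v version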

This issue is tracked with Oracle bugs 30026438 and 30155710.

Error in patching NVMe disks to the latest version

Patching of NVMe disks to the latest version may not be supported on some Oracle Database Appliance hardware models.

On Oracle Database Appliance X8-2 hardware models, the NVMe controller 7361456_ICRPC2DD2ORA6.4T is installed with the higher version VDV1RL01/VDV1RL02. Patching of this controller is not supported on Oracle Database Appliance X8-2 hardware models. For other platforms, if the installed version is QDV1RE0F, QDV1RE13, QDV1RD09, or QDV1RE14, then when you patch the storage, the NVMe controller version is updated to qdv1rf30.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

None

This issue is tracked with Oracle bug 30287439.

Patching of M.2 drives not supported

Patching of M.2 drives (local disks SSDSCKJB48 and SSDSCKJB480G7) is not supported.

These drives are displayed when you run the odacli describe-component command. Patching is not supported for either of the two known M.2 disk versions, 0112 and 0121. Patching the LSI controller version 13.00.00.00 to version 16.00.01.00 is also not supported. However, on some Oracle Database Appliance X8-2 models, the installed LSI controller version may be 16.00.01.00.

Hardware Models

Oracle Database Appliance bare metal deployments

Workaround

None

This issue is tracked with Oracle bug 30249232.

Versions of some components not updated after cleaning up and reprovisioning Oracle Database Appliance

Oracle Auto Service Request (ASR), Oracle TFA Collector, and Oracle ORAchk versions are not updated after cleaning up and reprovisioning Oracle Database Appliance.

When cleaning up and reprovisioning Oracle Database Appliance with release 19.6, the Oracle Auto Service Request (ASR), Oracle TFA Collector, and Oracle ORAchk RPMs may not be updated to release 19.6. The components are updated when you apply the patches for Oracle Database Appliance release 19.6.

Hardware Models

All Oracle Database Appliance deployments

Workaround

Update to the latest server patch for the release.
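
For example, a sketch for this release:
# Apply the server patch to update the Oracle ASR, Oracle TFA Collector, and Oracle ORAchk RPMs
odacli update-server -v 19.6.0.0.0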

This issue is tracked with Oracle bugs 28933900 and 30187516.

11.2.0.4 databases fail to start after patching

After patching Oracle Database Appliance to release 18.3, databases of version 11.2.0.4 fail to start.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Databases of versions 11.2.0.4.170814 and 11.2.0.4.171017 must be manually started after patching to Oracle Database Appliance release 18.3.

Start the databases with the command:
srvctl start database -db db_unique_name

This issue is tracked with Oracle bug 28815716.

Known Issues When Deploying Oracle Database Appliance

Understand the known issues when provisioning or deploying Oracle Database Appliance.

Error when creating database with ObjectStore backup option

When creating a database with the ObjectStore backup option, if the RMAN backup password is not provided, then an error is encountered.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Use one of the following options:
  • Do not specify the ObjectStore backup configuration in the create database request. After the database is created, associate the required ObjectStore backup configuration with the database.
  • Associate the ObjectStore backup configuration again, and specify the RMAN backup password when prompted. After successful association, take the backup, as shown in the sketch below.
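
For example, a minimal sketch of the second option (the database name mydb and backup configuration name objstorebkupcfg are illustrative):
# Associate the ObjectStore backup configuration; specify the RMAN backup password when prompted
odacli modify-database -in mydb -bin objstorebkupcfg
# After successful association, take the backup
odacli create-backup -in mydb -bt Regular-L0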

This issue is tracked with Oracle bug 31010490.

Error when performing backup and recovery of Standard Edition High Availability Database

When performing backup and recovery of Standard Edition High Availability Database, an error is encountered.

Associating a backup configuration with a Standard Edition High Availability database, and performing backup and recovery operations on such a database, fail with the following error:

DCS-10089:Database  is in an invalid state 'NOT_RUNNING'. Database dbname must be running. 

Hardware Models

All Oracle Database Appliance hardware models

Workaround

None.

This issue is tracked with Oracle bug 31173818.

Cannot create 11.2.0.4 and 12.1 Oracle ACFS databases with Oracle Flex redundancy

Creation of 11.2.0.4 and 12.1 Oracle ACFS databases with Oracle Flex redundancy fails.

Hardware Models

All Oracle Database Appliance hardware deployments

Workaround

If required, create an 11.2.0.4 or 12.1 database home first.

Create the 11.2.0.4 or 12.1 database based on an existing 11.2.0.4 or 12.1 database home.

This issue is tracked with Oracle bug 31016061.

Error when creating or restoring 11.2.0.4 database

An error is encountered when creating or restoring 11.2.0.4 databases.

When you run the command odacli create-database or odacli irestore-database for 11.2.0.4 databases, the command fails to run at the Configuring DB Console step. This error may also occur when creating 11.2.0.4 databases using the Browser User Interface.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Run the commands without enabling DB Console.
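
For example, a sketch of creating the database without DB Console (the DB Console option is simply omitted; names and home ID are illustrative):
odacli create-database -n mydb -dh dbhome_id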

This issue is tracked with Oracle bug 31017360.

Error in upgrading 12.1 Oracle Database

When upgrading a 12.1 Oracle Database, whether CDB, Oracle ACFS, or single-instance, an error is encountered.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

None.

This issue is tracked with Oracle bug 31214657.

Error when creating 19c single-instance database

When creating a 19c single-instance database, an error is encountered.

When creating a 19c single-instance database with different dbName and dbUniqueName, the password file is stored in the local storage instead of shared storage.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Use the same dbName and dbUniqueName when creating a 19c single-instance database.
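
For example, a minimal sketch (names and home ID are illustrative):
# Keep the database name and the database unique name identical
odacli create-database -n mydb -u mydb -dh dbhome_id -y SI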

This issue is tracked with Oracle bug 31194087.

Error when upgrading 12.1 single-instance database

When upgrading a 12.1 single-instance database, a job failure error is encountered.

Hardware Models

All Oracle Database Appliance hardware models bare metal deployments

Workaround

Use the following workaround:
  1. Before upgrading the 12.1 single-instance database, run the following PL/SQL command to change the local_listener to an empty string:
    ALTER SYSTEM SET LOCAL_LISTENER='';
  2. After upgrading the 12.1 single-instance database successfully, run the following PL/SQL command to change the local_listener to the desired value:
    ALTER SYSTEM SET LOCAL_LISTENER='-oracle-none-'; 

This issue is tracked with Oracle bugs 31202775, 31214657, 31210407, and 31178058.

Failure in creating RECO disk group during provisioning

When provisioning Oracle Database Appliance X8-2-HA with High Performance configuration containing default storage and expansion shelf, creation of RECO disk group fails.

Hardware Models

All Oracle Database Appliance X8-2-HA with High Performance configuration

Workaround

  1. Power off storage expansion shelf.
  2. Reboot both nodes.
  3. Proceed with provisioning the default storage shelf (first JBOD).
  4. After the system is successfully provisioned with default storage shelf (first JBOD), check that oakd is running on both nodes in foreground mode.
     # ps -aef | grep oakd
  5. Check that all first JBOD disks have the status online, good in oakd, and CACHED in Oracle ASM.
  6. Power on the storage expansion shelf (second JBOD), wait for a few minutes for the operating system and other subsystems to recognize it.
  7. Run the following command from the master node to add the storage expansion shelf disks (two JBOD setup) to oakd and Oracle ASM.
    # odaadmcli show ismaster 
    OAKD is in Master Mode 

    # odaadmcli expand storage -ndisk 24 -enclosure 1 
    Skipping precheck for enclosure '1'... 
    Check the progress of expansion of storage by executing 'odaadmcli show disk' 
    Waiting for expansion to finish ... 
    #
  8. Check that the storage expansion shelf disks (two JBOD setup) are added to oakd and Oracle ASM.

On Oracle Database Appliance Virtualized Platform, replace the odaadmcli commands in this procedure with the equivalent oakcli commands.

For more information, see the chapter Managing Storage in the Oracle Database Appliance X8-2 Deployment Guide.

This issue is tracked with Oracle bug 30839054.

Simultaneous creation of two Oracle ACFS Databases fails

If you try to create two Oracle ACFS databases on a system where there is no database or database storage already created, then database creation fails for one of the databases with an error.

DCS-10001:Internal error encountered: Fail to run command Failed to create  
volume. 

Hardware Models

All Oracle Database Appliance bare metal deployments

Workaround

Manually delete the DATA volume (and REDO volume, in the case of Oracle Database Appliance X8-2) from the system.

For High Performance configuration, run the following commands:
su - GRID_USER 
export ORACLE_SID=+ASM1   # +ASM2 on the second node
export ORACLE_HOME=GRID_HOME 
GRID_HOME/bin/asmcmd --nocp voldelete -G Data datdbname 

For Oracle Database Appliance X8-2 High Performance configuration, remove the REDO volume as follows:
su - GRID_USER 
export ORACLE_SID=+ASM1   # +ASM2 on the second node
export ORACLE_HOME=GRID_HOME 
GRID_HOME/bin/asmcmd --nocp voldelete -G Reco rdodbname 

For High Capacity configuration, run the following commands:
su - GRID_USER 
export ORACLE_SID=+ASM1   # +ASM2 on the second node
export ORACLE_HOME=GRID_HOME 
GRID_HOME/bin/asmcmd --nocp voldelete -G Flash datdbname   # if the volume exists in the FLASH disk group
GRID_HOME/bin/asmcmd --nocp voldelete -G Data datdbname    # if the volume exists in the DATA disk group

For Oracle Database Appliance X8-2 High Capacity configuration, remove the REDO volume as follows:
su - GRID_USER 
export ORACLE_SID=+ASM1   # +ASM2 on the second node
export ORACLE_HOME=GRID_HOME 
GRID_HOME/bin/asmcmd --nocp voldelete -G Flash rdodbname 

This issue is tracked with Oracle bug 30750497.

Database creation hangs when using a deleted database name for database creation

Database creation hangs when you create a database using the name of a deleted database.

If you delete an 11.2.0.4 database, and then create a new database with the same name as the deleted database, database creation hangs while unlocking the DBSNMP user for the database.

Hardware Models

All Oracle Database Appliance high-availability environments

Workaround

Before creating the 11.2.0.4 database with the same name as the deleted database, delete the DBSNMP user, if the user exists.

For example, the following command deletes the DBSNMP user for the database testdb:

/u01/app/18.0.0.0/grid/bin/crsctl delete wallet -type CVUDB -name testdb -user DBSNMP 

This issue is tracked with Oracle bug 28916487.

Error encountered after running cleanup.pl

Errors are encountered when running odacli commands after running cleanup.pl.

After running cleanup.pl, when you try to use odacli commands, the following error is encountered:

DCS-10042:User oda-cliadmin cannot be authorized.

Hardware Models

All Oracle Database Appliance hardware models for bare metal deployments

Workaround

Run the following commands to set up the credentials for the user oda-cliadmin on the agent wallet:

# rm -rf /opt/oracle/dcs/conf/.authconfig  
# /opt/oracle/dcs/bin/setupAgentAuth.sh 

This issue is tracked with Oracle bug 29038717.

Accelerator volume for data is not created on flash storage

The accelerator volume for data is not created on flash storage, for databases created during provisioning of the appliance.

Hardware Models

Oracle Database Appliance high capacity environments with HDD disks

Workaround

Do not create the database when provisioning the appliance. Provisioning then creates all the required disk groups, including flash. After provisioning the appliance, create the database; the accelerator volume is then created.

This issue is tracked with Oracle bug 28836461.

Errors in clone database operation

Clone database operation fails due to errors.

If the source database is single-instance or Oracle RAC One Node, or is running on the remote node, the clone database operation fails because the paths are not created correctly in the control file.

Clone database operation may also fail with errors if the source database was created within 60 minutes of the clone operation.

Hardware Models

All Oracle Database Appliance high-availability hardware models for bare metal deployments

Workaround

Create the clone database from the source database instance that is running on the same node from which the clone database creation is triggered.

For Oracle Database 12c and later, synchronize the source database before the clone operation, by running the command:
SQL> alter system checkpoint;

This issue is tracked with Oracle bugs 29002563, 29002004, 29001906, 29001855, 29001631, 28995153, 28986643, 30309971, and 30228362.

Clone database operation fails

For Oracle Database release 12.1 databases, the database clone creation may fail because the default compatible version from the Oracle binaries was set to 12.0.0.0.0.

Hardware Models

All Oracle Database Appliance high-availability hardware models for bare metal deployments

Workaround

Set the compatible value to that of the source database. Follow these steps:
  1. Change the parameter value.
    SQL> ALTER SYSTEM SET COMPATIBLE = '12.1.0.2.0' SCOPE=SPFILE;
  2. Shut down the database.
    SQL> SHUTDOWN IMMEDIATE
  3. Start the database.
    SQL> STARTUP
  4. Verify the parameter for the new value.
    SQL> SELECT name, value, description FROM v$parameter WHERE name ='compatible';

This issue is tracked with Oracle bug 30309914.

Database creation fails for odb-01s DSS databases

When attempting to create a DSS database with shape odb-01s, the job may fail with errors.

CRS-2674: Start of 'ora.test.db' on 'example_node' failed
CRS-5017: The resource action "ora.test.db start" encountered the following
error:
ORA-03113: end-of-file on communication channel
Process ID: 0
Session ID: 0 Serial number: 0
. For details refer to "(:CLSN00107:)" in
"/u01/app/grid/diag/crs/example_node/crs/trace/crsd_oraagent_oracle.trc".

Hardware Models

Oracle Database Appliance X6-2-HA and X5-2

Workaround

There is no direct workaround. Select an alternate shape to create the database.

This issue is tracked with Oracle bug 27768012.

Known Issues When Managing Oracle Database Appliance

Understand the known issues when managing or administering Oracle Database Appliance.

Errors when running ORAchk or the odacli create-prepatchreport command

When you run ORAchk or the odacli create-prepatchreport command, an error is encountered.

The following error messages may be seen:
Table AUD$[FGA_LOG$] should use Automatic Segment Space Management 
diagsnap or pstack are configured to collect first failure diagnostic
Initialization parameter RESOURCE_MANAGER_PLAN should be set. 
One or more log archive destination and alternate log archive destination settings are not as recommended 
Software home check failed 

Hardware Models

Oracle Database Appliance hardware models bare metal deployments

Workaround

Ignore the error messages and continue the deployment.

This issue is tracked with Oracle bug 30931017.

Database ID incorrectly displayed in odacli describe-database output

Database ID is incorrectly displayed in the output of the command odacli describe-database.

The ID field in the output of the command odacli describe-database wrongly displays the databaseId instead of the db object ID.

Hardware Models

All Oracle Database Appliance bare metal deployments

Workaround

Run the odacli list-databases command to view the correct ID. You can also view the correct ID details using the Browser User Interface.

This issue is tracked with Oracle bug 31121016.

Error when rebooting the appliance

When rebooting Oracle Database Appliance, a user-interactive kernel selection screen is displayed.

Hardware Models

Oracle Database Appliance X7-2-HA hardware models

Workaround

From the system console, select or highlight the kernel using the Up or Down arrow keys and then press Enter to continue with the reboot of the appliance.

This issue is tracked with Oracle bug 30931017.

Error encountered when relocating database

When relocating a database, an error is encountered.

The following error messages may be seen:
java.lang.NullPointerException 

Hardware Models

Oracle Database Appliance hardware models bare metal deployments

Workaround

Download and install the Oracle Database (RDBMS) patch from bug 31114977.

This issue is tracked with Oracle bug 31225790.

Error encountered when disabling High Availability

When disabling High Availability for a Standard Edition High Availability database, an error is encountered.

Hardware Models

Oracle Database Appliance hardware models bare metal deployments

Workaround

Download and install the Oracle Database (RDBMS) patch from bug 31114977.

This issue is tracked with Oracle bug 31231043.

Inconsistency in available and current system firmware

The current system firmware may be different from the available firmware after applying the latest patch.

Oracle Database Appliance X8-2 expanders of model ORACLE/DE3-24C are at version 0309, but patching of expander firmware from earlier versions to 0309 is not supported in this release. Oracle Database Appliance release 18.8 contains the patch for expander version 0306, so when you run the odacli describe-component command, the available expander version is displayed as 0306.

Oracle Database Appliance X8-2 controllers of model LSI Logic/0x0097 are at version 16.00.00.00, but patching of controller firmware from earlier versions to 16.00.00.00 is not supported in this release. Oracle Database Appliance release 18.8 contains the patch for controller version 13.00.00.00, so when you run the odacli describe-component command, the available controller version is displayed as 13.00.00.00.

Hardware Models

Oracle Database Appliance X8-2 hardware models

Workaround

Ignore this inconsistency, since this is a display issue and does not affect the installed firmware version.

This issue is tracked with Oracle bug 30787910.

Inconsistency in ORAchk summary and details report page

The ORAchk report summary on the Browser User Interface may show different counts of Critical, Failed, and Warning issues than the report detail page.

Hardware Models

Oracle Database Appliance hardware models bare metal deployments

Workaround

Ignore counts of Critical, Failed, and Warning issues in the ORAchk report summary on the Browser User Interface. Check the report detail page.

This issue is tracked with Oracle bug 30676674.

Missing DATA, RECO, and REDO entries when dbstorage is rediscovered

Running the odacli update-registry command with -n all --force or -n dbstorage --force option can result in metadata corruption.

Hardware Models

All Oracle Database Appliance hardware models bare metal deployments

Workaround

On migrated systems where all the databases were created using OAKCLI, run the command with the -n all option. On other systems that run on the DCS stack, update each component other than dbstorage individually, using the command odacli update-registry -n component_name_to_be_updated_excluding_dbstorage, as shown in the sketch below.
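
For example, a sketch of updating a single component individually (repeat for each component except dbstorage):
odacli update-registry -n db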

This issue is tracked with Oracle bug 30274477.

The odaeraser tool does not work if oakd is running in non-cluster mode

After cleaning up the deployment, the Secure Eraser tool does not work if oakd is running in non-cluster mode.

Hardware Models

All Oracle Database Appliance hardware bare metal systems

Workaround

After cleanup of the deployment, oakd is started in non-cluster mode and cannot be stopped using the odaadmcli stop oak command. In this case, if the Secure Eraser tool is run, then the odaeraser command fails.

Use the command odaadmcli shutdown oak to stop oakd.

This issue is tracked with Oracle bug 28547433.

Issues with the Web Console on Microsoft web browsers

Oracle Database Appliance Web Console has issues on Microsoft Edge and Microsoft Internet Explorer web browsers.

Following are issues with Microsoft web browsers:
  • Oracle Database Appliance Web Console does not display correctly on Microsoft Edge and Microsoft Internet Explorer web browsers.
  • Advanced Information for the appliance does not display on Microsoft Internet Explorer web browser.
  • Job activity status does not refresh in the Web Console on Microsoft Internet Explorer web browser.
  • After configuring the oda-admin password, the following error is displayed:
    Failed to change the default user (oda-admin) account password. 
    Status Code: 500 DCS-10001: DCS-10001:Internal error encountered: User not authorized

    Workaround: Close the Microsoft Internet Explorer browser session and open another browser session.

Models

All Oracle Database Appliance hardware models bare metal deployments

Workaround

To access the Web Console, use either Google Chrome or Firefox.

This issue is tracked with Oracle bugs 30077007, 30099089, and 29887027.

Disk space issues due to Zookeeper logs size

The Zookeeper log files, zookeeper.out and /opt/zookeeper/log/zkMonitor.log, are not rotated when new logs are added. This can cause disk space issues.

Hardware Models

All Oracle Database Appliance hardware models for bare metal deployments

Workaround

Rotate the zookeeper log file manually if the log file size increases, as follows:

  1. Stop the DCS-agent service for zookeeper on both nodes.

    initctl stop initdcsagent
  2. Stop the zookeeper service on both nodes.

    /opt/zookeeper/bin/zkServer.sh stop
  3. Clean the zookeeper logs after taking the backup, by manually deleting the existing file or by following steps 4 to 10.

  4. Set the ZOO_LOG_DIR as an environment variable to a different log directory, before starting the zookeeper server.

    export ZOO_LOG_DIR=/opt/zookeeper/log
  5. Switch to ROLLINGFILE, to enable log rolling.

    export ZOO_LOG4J_PROP="INFO, ROLLINGFILE"
    Restart the zookeeper server for the changes to take effect.
  6. Set the following parameters in the /opt/zookeeper/conf/log4j.properties file, to limit the number of backup files, and the file sizes.

    zookeeper.log.dir=/opt/zookeeper/log
    zookeeper.log.file=zookeeper.out
    log4j.appender.ROLLINGFILE.MaxFileSize=10MB
    log4j.appender.ROLLINGFILE.MaxBackupIndex=10
  7. Start zookeeper on both nodes.

    /opt/zookeeper/bin/zkServer.sh start
  8. Check the zookeeper status, and verify that zookeeper runs in leader/follower/standalone mode.

    /opt/zookeeper/bin/zkServer.sh status
    ZooKeeper JMX enabled by default
    Using config: /opt/zookeeper/bin/../conf/zoo.cfg
    Mode: follower
  9. Start the dcs agent on both nodes.

    initctl start initdcsagent
  10. Purge the zookeeper monitor log, zkMonitor.log, in the location /opt/zookeeper/log. You do not have to stop the zookeeper service.

This issue is tracked with Oracle bug 29033812.

Error after running the cleanup script

After running the cleanup.pl script, the following error message appears: DCS-10001:Internal error encountered: Fail to start hand shake.

The error is caused when you run the following steps:

  1. Run cleanup.pl on the first node (Node0). Wait until the cleanup script finishes, then reboot the node.

  2. Run cleanup.pl on the second node (Node1). Wait until the cleanup script finishes, then reboot the node.

  3. After both nodes are started, use the command-line interface to list the jobs on Node0. An internal error appears.

    # odacli list-jobs
    DCS-10001:Internal error encountered: Fail to start hand shake to localhost:7070

Hardware Models

Oracle Database Appliance X7-2-HA

Workaround

  1. Verify the zookeeper status on both nodes before starting dcsagent:

    /opt/zookeeper/bin/zkServer.sh status

    For a single-node environment, the status should be leader, follower, or standalone.

  2. Restart the dcsagent on Node0 after running the cleanup.pl script.

    # systemctl stop initdcsagent 
    # systemctl start initdcsagent

Old configuration details persisting in custom environment

The configuration file /etc/security/limits.conf contains default entries even in the case of custom environments.

On custom environments, when a single user is configured for both grid and oracle, the default grid user entries for the image are not removed from the /etc/security/limits.conf file.

Models

Oracle Database Appliance X7-2-HA, X7-2S, and X7-2M

Workaround

This issue does not affect the functionality. Manually edit the /etc/security/limits.conf file and remove invalid entries.
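
For example, a sketch, assuming the default grid user name grid:
# List the leftover grid user entries before removing them
grep -w grid /etc/security/limits.conf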

This issue is tracked with Oracle bug 26978354.

Incorrect SGA and PGA values displayed

For online transaction processing (OLTP), In-Memory (IMDB), and decision support services (DSS) databases created with odb36 database shape, the PGA and SGA values are displayed incorrectly.

For OLTP databases created with odb36 shape, following are the issues:

  • sga_target is set as 128 GB instead of 144 GB

  • pga_aggregate_target is set as 64 GB instead of 72 GB

For DSS databases created with odb36 shape, following are the issues:

  • sga_target is set as 64 GB instead of 72 GB

  • pga_aggregate_target is set as 128 GB instead of 144 GB

For IMDB databases created with odb36 shape, following are the issues:

  • sga_target is set as 128 GB instead of 144 GB

  • pga_aggregate_target is set as 64 GB instead of 72 GB

  • inmemory_size is set as 64 GB instead of 72 GB

Models

Oracle Database Appliance X7-2-HA, X7-2S, and X7-2M

Workaround

Reset the PGA and SGA sizes manually.
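
For example, a sketch for an OLTP database created with the odb36 shape, using the expected values listed above:
SQL> ALTER SYSTEM SET sga_target=144G SCOPE=SPFILE;
SQL> ALTER SYSTEM SET pga_aggregate_target=72G SCOPE=SPFILE;

Restart the database for the SPFILE changes to take effect.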

This issue is tracked with Oracle bug 27036374.

Unrecognized Token Messages Appear in /var/log/messages

After updating Oracle Database Appliance, unrecognized token messages appear in /var/log/messages.

Updating to Oracle Database Appliance 12.1.2.11.0 updates the Oracle VM Server version to 3.4.3. After updating, the following messages appear in /var/log/messages:

Unrecognized token: "max_seq_redisc"
Unrecognized token: "rereg_on_guid_migr"
Unrecognized token: "aguid_inout_notice"
Unrecognized token: "sm_assign_guid_func"
Unrecognized token: "reports"
Unrecognized token: "per_module_logging"
Unrecognized token: "consolidate_ipv4_mask"

You can ignore the messages for these parameters; they do not impact the InfiniBand compliant Subnet Manager and Administration (opensm) functionality. However, Oracle recommends removing the parameters to avoid flooding /var/log/messages.

Hardware Models

Oracle Database Appliance X6-2-HA and X5-2 with InfiniBand

Workaround

Perform the following to remove the parameters:

  1. After patching, update the /etc/opensm/opensm.conf file in bare metal deployments, and in Dom0 in virtualized platform environments, to remove the parameters (see the sketch after this list). The following command shows the parameters present in the file:

    cat /etc/opensm/opensm.conf | egrep -w 'max_seq_redisc|rereg_on_guid_migr|aguid_inout_notice|sm_assign_guid_func|reports|per_module_logging|consolidate_ipv4_mask' | grep -v ^#
    max_seq_redisc 0
    rereg_on_guid_migr FALSE
    aguid_inout_notice FALSE
    sm_assign_guid_func uniq_count
    reports 2
    per_module_logging FALSE
    consolidate_ipv4_mask 0xFFFFFFFF
  2. Reboot. The messages will not appear after rebooting the node.
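
A sketch for scripting the removal in step 1 (back up the file first; assumes GNU sed):
cp /etc/opensm/opensm.conf /etc/opensm/opensm.conf.bak
sed -i -r '/^(max_seq_redisc|rereg_on_guid_migr|aguid_inout_notice|sm_assign_guid_func|reports|per_module_logging|consolidate_ipv4_mask)\b/d' /etc/opensm/opensm.conf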

This issue is tracked with Oracle bug 25985258.