4 Known Issues with Oracle Database Appliance in This Release

The following are known issues when deploying, updating, and managing Oracle Database Appliance in this release.

Known Issues When Patching Oracle Database Appliance

Understand the known issues when patching Oracle Database Appliance to this release.

TFA not running after server or database patching

Oracle TFA does not run after server or database patching.

TFA is shut down during patching of Oracle Database and Oracle Database Appliance.

Hardware Models

All Oracle Database Appliance bare metal deployments

Workaround

Run tfactl start to start TFA manually.
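
For example, as the root user (assuming tfactl is in the PATH; otherwise invoke it from its installed location):

    # tfactl start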

This issue is tracked with Oracle bug 31091006.

Error in patching Oracle Database Appliance server

When patching Oracle Database Appliance, an error is encountered.

When patching the appliance, the Oracle Grid Infrastructure clone.pl script fails to run.

Hardware Models

All Oracle Database Appliance hardware models with a custom grid user name that were migrated from the OAKCLI to the ODACLI stack in Oracle Database Appliance release 18.3

Workaround

  1. Remove the directory /u01/app/19.0.0.0 on both nodes.
  2. Create the directory /u01/app/grid_user on both nodes.
  3. Set permissions to 755 for the directory created in step 2.
  4. Set owner as grid_user for the directory created in step 2.
  5. Retry patching by running the odacli update-server command.

    For example, for grid user tmpgrid, the steps are:

    rm -rf /u01/app/19.0.0.0 
    mkdir /u01/app/tmpgrid 
    chmod 755 /u01/app/tmpgrid 
    chown tmpgrid /u01/app/tmpgrid 
    odacli update-server -v 19.6.0.0

This issue is tracked with Oracle bug 31111872.

Disk firmware not updated after patching

After patching Oracle Database Appliance, disk firmware is not updated on some Oracle Database Appliance hardware models.

The odacli describe-component command shows available version for disks as 0112 but the odacli update-storage and odacli update-server commands do not update the disk firmware.

Hardware Models

All Oracle Database Appliance X7-2-HA hardware models

Workaround

None

This issue is tracked with Oracle bug 30841243.

Error in server patching

An error is encountered when patching the server.

When running the command odacli update-server -v release_number, the following error is encountered:
DCS-10001:Internal error encountered: patchmetadata for 19.6.0.0.0 missing target version for GI.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:
  1. Temporarily change the ownership of the osdbagrp binary in the grid_home/bin directory to the appropriate grid user. For example:
    $ chown -R grid:oinstall /u01/app/18.0.0.0/grid/bin/osdbagrp
  2. Run either the odacli update-registry -n gihome or the odacli update-registry -n system command, as shown in the example below.
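
For example, to refresh the Oracle Grid Infrastructure home metadata:

    # /opt/oracle/dcs/bin/odacli update-registry -n gihome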

This issue is tracked with Oracle bug 31125258.

Error in database patching

An error is encountered when patching the database.

When running the command odacli update-dbhome -v release_number on database homes that have Standard Edition High Availability enabled, an error is encountered.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Run datapatch with the list of all valid PDBs, connecting to each database running from the database home. Follow these steps:
  1. Find all databases running from the database home. For example:
    odacli list-databases | grep DB_home_resource_ID
  2. Find the PDBs for each database:
    #su oracle
    #export ORACLE_HOME=Database home location
    #export ORACLE_SID=Database SID
    #sqlplus "/as sysdba"
    SQL> SELECT NAME, OPEN_MODE FROM V$CONTAINERS WHERE OPEN_MODE='READ WRITE';

    This query returns the names of all containers in "READ WRITE" mode, including CDB$ROOT.

  3. Exit from SQL*Plus.
  4. Run datapatch:
    #su oracle
    #export ORACLE_HOME=Database home location
    #export ORACLE_SID=Database SID
    #$ORACLE_HOME/OPatch/datapatch -pdbs "PDB name","PDB name"
  5. As root user, run the odacli update-dbhome command again to correct the metadata entries.
    # odacli update-dbhome -i dbhome id -v 19.7.0.0.0 

This issue is tracked with Oracle bug 31399885.

Server status not set to Normal when patching

When patching Oracle Database Appliance, an error is encountered.

When patching the appliance, the odacli update-server command fails with the following error:

DCS-10001:Internal error encountered: Server upgrade state is not NORMAL node_name 

Hardware Models

All Oracle Database Appliance hardware models

Workaround

  1. Run the command:
    Grid_home/bin/cluvfy stage -post crsinst -collect cluster -gi_upgrade -n all
  2. Ignore the following two warnings:
    Verifying OCR Integrity ...WARNING
    PRVG-6017 : OCR backup is located in the same disk group "+DATA" as OCR.
    
    Verifying Single Client Access Name (SCAN) ...WARNING
    PRVG-11368 : A SCAN is recommended to resolve to "3" or more IP addresses
  3. Run the command again until the output displays only the two warnings above. The status of Oracle Clusterware should then be Normal again.

  4. You can verify the status with the command:
    Grid_home/bin/crsctl query crs activeversion -f

This issue is tracked with Oracle bug 30099090.

Error when patching to 12.1.0.2.190716 Bundle Patch

When patching Oracle Database release 12.1.0.2 to Oracle Database 12.1.0.2.190716 Bundle Patch, an error is encountered.

The ODACLI job displays the following error:

DCS-10001:Internal error encountered: Failed to run SQL script: datapatch script. 

The datapatch log contains the entry "Prereq check failed, exiting without installing any patches."

Hardware Models

All Oracle Database Appliance hardware models bare metal deployments

Workaround

Install the same patch again.
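
For example, rerun the database home patch job, where dbhome_id and release_number are placeholders:

    # odacli update-dbhome -i dbhome_id -v release_number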

This issue is tracked with Oracle bugs 30026438 and 30155710.

Patching of M.2 drives not supported

Patching of M.2 drives (local disks SSDSCKJB48 and SSDSCKJB480G7) is not supported.

These drives are displayed when you run the odacli describe-component command. Patching is not supported for either of the two known M.2 disk versions, 0112 and 0121. Patching the LSI controller from version 13.00.00.00 to version 16.00.01.00 is also not supported. However, on some Oracle Database Appliance X8-2 models, the installed LSI controller version may be 16.00.01.00.

Hardware Models

Oracle Database Appliance bare metal deployments

Workaround

None

This issue is tracked with Oracle bug 30249232.

Versions of some components not updated after cleaning up and reprovisioning Oracle Database Appliance

The Oracle Auto Service Request (ASR), Oracle TFA Collector, and Oracle ORAchk versions are not updated after cleaning up and reprovisioning Oracle Database Appliance.

When cleaning up and reprovisioning Oracle Database Appliance with release 19.7, the Oracle Auto Service Request (ASR), Oracle TFA Collector, and Oracle ORAchk RPMs may not be updated to release 19.7. The components are updated when you apply the patches for Oracle Database Appliance release 19.7.

Hardware Models

All Oracle Database Appliance deployments

Workaround

Update to the latest server patch for the release.
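
For example, for Oracle Database Appliance release 19.7:

    # odacli update-server -v 19.7.0.0.0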

This issue is tracked with Oracle bugs 28933900 and 30187516.

Known Issues When Deploying Oracle Database Appliance

Understand the known issues when provisioning or deploying Oracle Database Appliance.

Error when performing backup and recovery of Standard Edition High Availability Database

When performing backup and recovery of Standard Edition High Availability Database, an error is encountered.

Associating a backup configuration with a Standard Edition High Availability database, and running backup and recovery operations on such a database, fail with the following error:

DCS-10089:Database  is in an invalid state 'NOT_RUNNING'. Database dbname must be running. 

Hardware Models

All Oracle Database Appliance hardware models

Workaround

None.

This issue is tracked with Oracle bug 31173818.

NTP service not running after rebooting node

The Network Time Protocol daemon (ntpd) fails to start after rebooting the node.

In an Oracle Linux 7 environment, even though Network Time Protocol (NTP) was configured during provisioning, the Network Time Protocol daemon (ntpd) fails to start.

Hardware Models

All Oracle Database Appliance hardware models bare metal deployments

Workaround

Follow the instructions described in My Oracle Support Note 2422378.1.

This issue is tracked with Oracle bug 31399685.

Cannot create 11.2.0.4 and 12.1 Oracle ACFS databases with Oracle Flex redundancy

Creation of 11.2.0.4 and 12.1 Oracle ACFS databases with Oracle Flex redundancy fails.

Hardware Models

All Oracle Database Appliance hardware deployments

Workaround

If required, first create an 11.2.0.4 or 12.1 database home.

Then create the 11.2.0.4 or 12.1 database based on the existing 11.2.0.4 or 12.1 database home, as in the sketch below.
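
A minimal sketch, assuming hypothetical values for the version string, database name, and database home ID:

    # odacli create-dbhome -v 11.2.0.4.200414
    # odacli list-dbhomes
    # odacli create-database -n testdb -dh dbhome_id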

This issue is tracked with Oracle bug 31016061.

Error in creating 11.2.0.4.200414 databases

Creation of an 11.2.0.4.200414 Oracle ACFS database fails.

Hardware Models

All Oracle Database Appliance hardware deployments

Workaround

Manually apply Oracle ACFS patch 31323577 on Oracle Grid Infrastructure home and then create the 11.2.0.4 Oracle ACFS Database.

This issue is tracked with Oracle bug 31321374.

Error when creating 11.2.0.4 database

An error is encountered when creating 11.2.0.4 databases.

When you run the command odacli create-database for 11.2.0.4 databases specifying the Oracle Database version as 18.10.0.0, the command fails.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Run the command specifying the five-digit Oracle Database version, for example, 18.10.0.0.200414, as in the sketch below.
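
A minimal sketch, assuming a hypothetical database name:

    # odacli create-database -n testdb -v 18.10.0.0.200414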

This issue is tracked with Oracle bug 31328317.

Error when creating or restoring 11.2.0.4 database

An error is encountered when creating or restoring 11.2.0.4 databases.

When you run the command odacli create-database or odacli irestore-database for 11.2.0.4 databases, the command fails to run at the Configuring DB Console step. This error may also occur when creating 11.2.0.4 databases using the Browser User Interface.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Run the commands without enabling DB Console.

This issue is tracked with Oracle bug 31017360.

Error when upgrading database from 11.2.0.4 to 12.1 or 12.2

When upgrading databases from 11.2.0.4 to 12.1 or 12.2, an error is encountered.

Database upgrade can cause the following warning in the UpgradeResults.html file, when upgrading database from 11.2.0.4 to 12.1 or 12.2:
Database is using a newer time zone file version than the Oracle home 

Hardware Models

All Oracle Database Appliance bare metal deployments

Workaround

  1. Refer to the Database Upgrade Guide for manual steps for fixing the time zone.
  2. After manually completing the database upgrade, run the following command to update DCS metadata:
    /opt/oracle/dcs/bin/odacli update-registry -n db -f

This issue is tracked with Oracle bug 31125985.

Error when creating 19c single-instance database

When creating a 19c single-instance database, an error is encountered.

When creating a 19c single-instance database with different dbName and dbUniqueName values, the password file is stored on local storage instead of shared storage.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Use the same dbName and dbUniqueName when creating a 19c single-instance database, as in the sketch below.
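
A minimal sketch, assuming a hypothetical name (the -u option sets the database unique name):

    # odacli create-database -n testdb -u testdb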

This issue is tracked with Oracle bug 31194087.

Error when upgrading 12.1 single-instance database

When upgrading a 12.1 single-instance database, a job failure error is encountered.

Hardware Models

All Oracle Database Appliance hardware models bare metal deployments

Workaround

Use the following workaround:
  1. Before upgrading the 12.1 single-instance database, run the following PL/SQL command to change the local_listener to an empty string:
    ALTER SYSTEM SET LOCAL_LISTENER='';
  2. After upgrading the 12.1 single-instance database successfully, run the following PL/SQL command to change the local_listener to the desired value:
    ALTER SYSTEM SET LOCAL_LISTENER='-oracle-none-'; 

This issue is tracked with Oracle bugs 31202775, 31214657, 31210407, and 31178058.

Failure in creating RECO disk group during provisioning

When provisioning Oracle Database Appliance X8-2-HA with High Performance configuration containing default storage and expansion shelf, creation of RECO disk group fails.

Hardware Models

All Oracle Database Appliance X8-2-HA with High Performance configuration

Workaround

  1. Power off storage expansion shelf.
  2. Reboot both nodes.
  3. Proceed with provisioning the default storage shelf (first JBOD).
  4. After the system is successfully provisioned with default storage shelf (first JBOD), check that oakd is running on both nodes in foreground mode.
     # ps -aef | grep oakd
  5. Check that all first JBOD disks have the status online, good in oakd, and CACHED in Oracle ASM.
  6. Power on the storage expansion shelf (second JBOD), and wait a few minutes for the operating system and other subsystems to recognize it.
  7. Run the following command from the master node to add the storage expansion shelf disks (two JBOD setup) to oakd and Oracle ASM.
    # odaadmcli show ismaster
      OAKD is in Master Mode

    # odaadmcli expand storage -ndisk 24 -enclosure 1
      Skipping precheck for enclosure '1'...
      Check the progress of expansion of storage by executing 'odaadmcli show disk'
      Waiting for expansion to finish ...
    #
  8. Check that the storage expansion shelf disks (two JBOD setup) are added to oakd and Oracle ASM.

In this procedure, replace odaadmcli with the equivalent oakcli commands on Oracle Database Appliance Virtualized Platform.

For more information, see the chapter Managing Storage in the Oracle Database Appliance X8-2 Deployment Guide.

This issue is tracked with Oracle bug 30839054.

Simultaneous creation of two Oracle ACFS Databases fails

If you try to create two Oracle ACFS databases on a system where there is no database or database storage already created, then database creation fails for one of the databases with the following error:

DCS-10001:Internal error encountered: Fail to run command Failed to create volume.

Hardware Models

All Oracle Database Appliance bare metal deployments

Workaround

Manually delete the DATA volume (and REDO volume, in case of Oracle Database Appliance X8-2) from the system.

For High Performance configuration, run the following commands:

    su - GRID_USER
    export ORACLE_SID=+ASM1   # use +ASM2 on the second node
    export ORACLE_HOME=GRID_HOME
    GRID_HOME/bin/asmcmd --nocp voldelete -G Data datdbname

For Oracle Database Appliance X8-2 High Performance configuration, remove the REDO volume as follows:

    su - GRID_USER
    export ORACLE_SID=+ASM1   # use +ASM2 on the second node
    export ORACLE_HOME=GRID_HOME
    GRID_HOME/bin/asmcmd --nocp voldelete -G Reco rdodbname

For High Capacity configuration, run the following commands:

    su - GRID_USER
    export ORACLE_SID=+ASM1   # use +ASM2 on the second node
    export ORACLE_HOME=GRID_HOME
    GRID_HOME/bin/asmcmd --nocp voldelete -G Flash datdbname   # if the volume exists in the FLASH disk group
    GRID_HOME/bin/asmcmd --nocp voldelete -G data datdbname    # if the volume exists in the DATA disk group

For Oracle Database Appliance X8-2 High Capacity configuration, remove the REDO volume as follows:

    su - GRID_USER
    export ORACLE_SID=+ASM1   # use +ASM2 on the second node
    export ORACLE_HOME=GRID_HOME
    GRID_HOME/bin/asmcmd --nocp voldelete -G Flash rdodbname

This issue is tracked with Oracle bug 30750497.

Database creation hangs when using a deleted database name for database creation

If you delete an 11.2.0.4 database, and then create a new database with the same name as the deleted database, database creation hangs while unlocking the DBSNMP user for the database.

Hardware Models

All Oracle Database Appliance high-availability environments

Workaround

Before creating the 11.2.0.4 database with the same name as the deleted database, delete the DBSNMP user, if the user exists.

For example, the following command deletes the DBSNMP user for the database testdb.

/u01/app/18.0.0.0/grid/bin/crsctl delete wallet -type CVUDB -name testdb -user DBSNMP 

This issue is tracked with Oracle bug 28916487.

Error encountered after running cleanup.pl

Errors are encountered when running odacli commands after running cleanup.pl.

After running cleanup.pl, when you try to use odacli commands, the following error is encountered:

DCS-10042:User oda-cliadmin cannot be authorized.

Hardware Models

All Oracle Database Appliance hardware models for bare metal deployments

Workaround

Run the following commands to set up the credentials for the user oda-cliadmin on the agent wallet:

# rm -rf /opt/oracle/dcs/conf/.authconfig  
# /opt/oracle/dcs/bin/setupAgentAuth.sh 

This issue is tracked with Oracle bug 29038717.

Accelerator volume for data is not created on flash storage

The accelerator volume for data is not created on flash storage for databases created during provisioning of the appliance.

Hardware Models

Oracle Database Appliance high capacity environments with HDD disks

Workaround

Do not create the database when provisioning the appliance. Provisioning then creates all required disk groups, including flash. After provisioning the appliance, create the database; the accelerator volume is then created.

This issue is tracked with Oracle bug 28836461.

Errors in clone database operation

Clone database operation fails due to errors.

If the source database is single-instance or Oracle RAC One Node, or running on the remote node, the clone database operation fails, because the paths are not created correctly in the control file.

The clone database operation may also fail with errors if the source database creation time stamp is too close to the clone operation (within 60 minutes).

Hardware Models

All Oracle Database Appliance high-availability hardware models for bare metal deployments

Workaround

Create the clone database from the source database instance that is running on the same node from which the clone database creation is triggered.

For Oracle Database 12c and later, synchronize the source database before the clone operation, by running the command:
SQL> alter system checkpoint;

This issue is tracked with Oracle bugs 29002563, 29002004, 29001906, 29001855, 29001631, 28995153, 28986643, 30309971, and 30228362.

Clone database operation fails

For Oracle Database release 12.1 databases, the database clone creation may fail because the default compatible version from the Oracle binaries is set to 12.0.0.0.0.

Hardware Models

All Oracle Database Appliance high-availability hardware models for bare metal deployments

Workaround

Set the compatible value to that of the source database. Follow these steps:
  1. Change the parameter value.
    SQL> ALTER SYSTEM SET COMPATIBLE = '12.1.0.2.0' SCOPE=SPFILE;
  2. Shut down the database.
    SQL> SHUTDOWN IMMEDIATE
  3. Start the database.
    SQL> STARTUP
  4. Verify the parameter for the new value.
    SQL> SELECT name, value, description FROM v$parameter WHERE name ='compatible';

This issue is tracked with Oracle bug 30309914.

Database creation fails for odb-01s DSS databases

When attempting to create a DSS database with shape odb-01s, the job may fail with the following errors:

CRS-2674: Start of 'ora.test.db' on 'example_node' failed
CRS-5017: The resource action "ora.test.db start" encountered the following
error:
ORA-03113: end-of-file on communication channel
Process ID: 0
Session ID: 0 Serial number: 0
. For details refer to "(:CLSN00107:)" in
"/u01/app/grid/diag/crs/example_node/crs/trace/crsd_oraagent_oracle.trc".

Hardware Models

Oracle Database Appliance X6-2-HA and X5-2

Workaround

There is no workaround. Select an alternate shape to create the database.

This issue is tracked with Oracle bug 27768012.

Known Issues When Managing Oracle Database Appliance

Understand the known issues when managing or administering Oracle Database Appliance.

Error when relocating database

When relocating a database whose host name contains uppercase letters, an error is encountered.

If the database host name has uppercase letters, then the operation to relocate the database fails.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Specify the host name with lowercase letters, or use the -g option to specify the target node number.

This issue is tracked with Oracle bug 31386630.

Error in relocating a running database

When relocating a database, an error message stating that the database is not running may be displayed, even though the database is running.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Download and install the patch from bug 31332777.

This issue is tracked with Oracle bug 31315150.

Error when connecting to the database after relocation

When connecting to the database after relocation, an error is encountered.

After relocating a cloned database, there is an error when connecting to the database using SQL*Plus.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

  1. Copy the password file from the current location to /u02/app/oracle/oradata/sourcedbUniqueName/dbName/dbs.
  2. Change the password file owner and group to oracle:oinstall.
  3. Modify the password file location using the command:
    srvctl modify database -db dbUniqueName -pwfile newPwFileLoc
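
For example, a sketch assuming hypothetical names and paths (mydb, mydb_u, and the password file locations are placeholders):

    cp current_pwfile_location /u02/app/oracle/oradata/srcdb_u/mydb/dbs/orapwmydb
    chown oracle:oinstall /u02/app/oracle/oradata/srcdb_u/mydb/dbs/orapwmydb
    srvctl modify database -db mydb_u -pwfile /u02/app/oracle/oradata/srcdb_u/mydb/dbs/orapwmydb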

This issue is tracked with Oracle bug 31317837.

Error when recovering a single-instance database

When recovering a single-instance database, an error is encountered.

When a single-instance database is running on the remote node, and you run the operation for database recovery on the local node, the following error is observed:
DCS-10001:Internal error encountered: DCS-10001:Internal error encountered: Missing arguments : required sqlplus connection information is not provided

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Perform recovery of the single-instance database on the node where the database is running.

This issue is tracked with Oracle bug 31399400.

Errors when running ORAchk or the odacli create-prepatchreport command

When you run ORAchk or the odacli create-prepatchreport command, an error is encountered.

The following error messages may be seen:
Table AUD$[FGA_LOG$] should use Automatic Segment Space Management 
diagsnap or pstack are configured to collect first failure diagnostic
Initialization parameter RESOURCE_MANAGER_PLAN should be set. 
One or more log archive destination and alternate log archive destination settings are not as recommended 
Software home check failed 

Hardware Models

Oracle Database Appliance hardware models bare metal deployments

Workaround

Ignore the error messages and continue the deployment.

This issue is tracked with Oracle bug 30931017.

Database ID incorrectly displayed in odacli describe-database output

Database ID is incorrectly displayed in the output of the command odacli describe-database.

The ID field in the output of the command odacli describe-database incorrectly displays the databaseId instead of the database object ID.

Hardware Models

All Oracle Database Appliance bare metal deployments

Workaround

Run the odacli list-databases command to view the correct ID. You can also view the correct ID details using the Browser User Interface.

This issue is tracked with Oracle bug 31121016.

Error when rebooting the appliance

When rebooting Oracle Database Appliance, the user interactive screen is displayed.

Hardware Models

Oracle Database Appliance X7-2-HA hardware models

Workaround

From the system console, select or highlight the kernel using the Up or Down arrow keys and then press Enter to continue with the reboot of the appliance.

This issue is tracked with Oracle bug 31196452.

Job history not erased after running cleanup.pl

After running cleanup.pl, job history is not erased.

After running cleanup.pl, when you run the /opt/oracle/dcs/bin/odacli list-jobs command, the list is not empty.

Hardware Models

All Oracle Database Appliance hardware models for bare metal deployments

Workaround

  1. Stop the DCS Agent by running the following commands on both nodes.

    For Oracle Linux 6, run:

    initctl stop initdcsagent 

    For Oracle Linux 7, run:

    systemctl stop initdcsagent 
  2. Run the cleanup script sequentially on both nodes.

This issue is tracked with Oracle bug 30529709.

Inconsistency in ORAchk summary and details report page

The ORAchk report summary on the Browser User Interface may show different counts of Critical, Failed, and Warning issues than the report detail page.

Hardware Models

Oracle Database Appliance hardware models bare metal deployments

Workaround

Ignore counts of Critical, Failed, and Warning issues in the ORAchk report summary on the Browser User Interface. Check the report detail page.

This issue is tracked with Oracle bug 30676674.

Missing DATA, RECO, and REDO entries when dbstorage is rediscovered

Running the odacli update-registry command with the -n all --force or -n dbstorage --force options can result in metadata corruption.

Hardware Models

All Oracle Database Appliance hardware models bare metal deployments

Workaround

Use the -n all option only on migrated systems where all the databases in the system were created with OAKCLI. On other systems that run on the DCS stack, update each component other than dbstorage individually, using the command odacli update-registry -n component_name, as in the sketch below.
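
A minimal sketch updating components individually (the component names shown are illustrative; update each component that applies to your system, excluding dbstorage):

    # odacli update-registry -n gihome
    # odacli update-registry -n dbhome
    # odacli update-registry -n db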

This issue is tracked with Oracle bug 30274477.

The odaeraser tool does not work if oakd is running in non-cluster mode

After cleaning up the deployment, the Secure Eraser tool does not work if oakd is running in non-cluster mode.

Hardware Models

All Oracle Database Appliance Hardware bare metal systems

Workaround

After cleanup of the deployment, oakd is started in non-cluster mode, and it cannot be stopped using the "odaadmcli stop oak" command. In that case, if the Secure Eraser tool is run, the odaeraser command fails.

Use the command odaadmcli shutdown oak to stop oakd.

This issue is tracked with Oracle bug 28547433.

Issues with the Web Console on Microsoft web browsers

Oracle Database Appliance Web Console has issues on Microsoft Edge and Microsoft Internet Explorer web browsers.

The following are issues with Microsoft web browsers:
  • Oracle Database Appliance Web Console does not display correctly on Microsoft Edge and Microsoft Internet Explorer web browsers.
  • Advanced Information for the appliance does not display on Microsoft Internet Explorer web browser.
  • Job activity status does not refresh in the Web Console on Microsoft Internet Explorer web browser.
  • After configuring the oda-admin password, the following error is displayed:
    Failed to change the default user (oda-admin) account password. 
    Status Code: 500 DCS-10001: DCS-10001:Internal error encountered: User not authorized

    Workaround: Close the Microsoft Internet Explorer browser session and open another browser session.

Models

All Oracle Database Appliance Hardware Models bare metal deployments

Workaround

To access the Web Console, use either Google Chrome or Firefox.

This issue is tracked with Oracle bugs 30077007, 30099089, and 29887027.

Disk space issues due to Zookeeper logs size

The Zookeeper log files, zookeeper.out and /opt/zookeeper/log/zkMonitor.log, are not rotated when new logs are added. This can cause disk space issues.

Hardware Models

All Oracle Database Appliance hardware models for bare metal deployments

Workaround

Rotate the zookeeper log files manually if the log file size increases, as follows:

  1. Stop the DCS-agent service for zookeeper on both nodes.

    initctl stop initdcsagent
  2. Stop the zookeeper service on both nodes.

    /opt/zookeeper/bin/zkServer.sh stop
  3. Clean the zookeeper logs after taking the backup, by manually deleting the existing file or by following steps 4 to 10.

  4. Set the ZOO_LOG_DIR as an environment variable to a different log directory, before starting the zookeeper server.

    export ZOO_LOG_DIR=/opt/zookeeper/log
  5. Switch to ROLLINGFILE to enable log rolling.

    export ZOO_LOG4J_PROP="INFO, ROLLINGFILE"
    Restart the zookeeper server for the changes to take effect.
  6. Set the following parameters in the /opt/zookeeper/conf/log4j.properties file, to limit the number of backup files, and the file sizes.

    zookeeper.log.dir=/opt/zookeeper/log
    zookeeper.log.file=zookeeper.out
    log4j.appender.ROLLINGFILE.MaxFileSize=10MB
    log4j.appender.ROLLINGFILE.MaxBackupIndex=10
  7. Start zookeeper on both nodes.

    /opt/zookeeper/bin/zkServer.sh start
  8. Check the zookeeper status, and verify that zookeeper runs in leader/follower/standalone mode.

    /opt/zookeeper/bin/zkServer.sh status
    ZooKeeper JMX enabled by default
    Using config: /opt/zookeeper/bin/../conf/zoo.cfg
    Mode: follower
  9. Start the dcs agent on both nodes.

    initctl start initdcsagent
  10. Purge the zookeeper monitor log, zkMonitor.log, in the location /opt/zookeeper/log. You do not have to stop the zookeeper service.

This issue is tracked with Oracle bug 29033812.

Error after running the cleanup script

After running the cleanup.pl script, the following error message appears: DCS-10001:Internal error encountered: Fail to start hand shake.

The error is caused when you run the following steps:

  1. Run cleanup.pl on the first node (Node0). Wait until the cleanup script finishes, then reboot the node.

  2. Run cleanup.pl on the second node (Node1). Wait until the cleanup script finishes, then reboot the node.

  3. After both nodes are started, use the command-line interface to list the jobs on Node0. An internal error appears.

    # odacli list-jobs
    DCS-10001:Internal error encountered: Fail to start hand shake to localhost:7070

Hardware Models

Oracle Database Appliance X7-2-HA

Workaround

  1. Verify the zookeeper status on the both nodes before starting dcsagent:

    /opt/zookeeper/bin/zkServer.sh status

    For a single-node environment, the status should be leader, follower, or standalone.

  2. Restart the dcsagent on Node0 after running the cleanup.pl script.

    # systemctl stop initdcsagent 
    # systemctl start initdcsagent

This issue is tracked with Oracle bug 26996134.

Incorrect SGA and PGA values displayed

For online transaction processing (OLTP), In-Memory (IMDB), and decision support services (DSS) databases created with the odb36 database shape, the PGA and SGA values are displayed incorrectly.

For OLTP databases created with the odb36 shape, the following are the issues:

  • sga_target is set as 128 GB instead of 144 GB

  • pga_aggregate_target is set as 64 GB instead of 72 GB

For DSS databases created with the odb36 shape, the following are the issues:

  • sga_target is set as 64 GB instead of 72 GB

  • pga_aggregate_target is set as 128 GB instead of 144 GB

For IMDB databases created with the odb36 shape, the following are the issues:

  • sga_target is set as 128 GB instead of 144 GB

  • pga_aggregate_target is set as 64 GB instead of 72 GB

  • inmemory_size is set as 64 GB instead of 72 GB

Models

Oracle Database Appliance X7-2-HA, X7-2S, and X7-2M

Workaround

Reset the PGA and SGA sizes manually, as in the sketch below.
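
A minimal sketch for an OLTP database created with the odb36 shape, using the intended values listed above (run as SYSDBA; the database must be restarted for the SPFILE changes to take effect):

    SQL> ALTER SYSTEM SET SGA_TARGET=144G SCOPE=SPFILE;
    SQL> ALTER SYSTEM SET PGA_AGGREGATE_TARGET=72G SCOPE=SPFILE;
    SQL> SHUTDOWN IMMEDIATE
    SQL> STARTUP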

This issue is tracked with Oracle bug 27036374.

Unrecognized Token Messages Appear in /var/log/messages

After updating Oracle Database Appliance, unrecognized token messages appear in /var/log/messages.

Updating to Oracle Database Appliance 12.1.2.11.0 updates the Oracle VM Server version to 3.4.3. After updating, the following messages appear in /var/log/messages:

Unrecognized token: "max_seq_redisc"
Unrecognized token: "rereg_on_guid_migr"
Unrecognized token: "aguid_inout_notice"
Unrecognized token: "sm_assign_guid_func"
Unrecognized token: "reports"
Unrecognized token: "per_module_logging"
Unrecognized token: "consolidate_ipv4_mask"

You can ignore the messages for these parameters; they do not impact the InfiniBand compliant Subnet Manager and Administration (opensm) functionality. However, Oracle recommends removing the parameters to avoid flooding /var/log/messages.

Hardware Models

Oracle Database Appliance X6-2-HA and X5-2 with InfiniBand

Workaround

Perform the following to remove the parameters:

  1. After patching, update the /etc/opensm/opensm.conf file in bare metal deployments and in Dom0 in virtualized platform environments to remove the parameters; a removal sketch follows this procedure.

    cat /etc/opensm/opensm.conf | egrep -w 'max_seq_redisc|rereg_on_guid_migr|aguid_inout_notice|sm_assign_guid_func|reports|per_module_logging|consolidate_ipv4_mask' | grep -v ^#
    max_seq_redisc 0
    rereg_on_guid_migr FALSE
    aguid_inout_notice FALSE
    sm_assign_guid_func uniq_count
    reports 2
    per_module_logging FALSE
    consolidate_ipv4_mask 0xFFFFFFFF
  2. Reboot. The messages will not appear after rebooting the node.
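
A hypothetical removal sketch that comments out the listed parameters (back up the file first; the sed pattern assumes each parameter name starts its line, as in the check above):

    cp /etc/opensm/opensm.conf /etc/opensm/opensm.conf.bak
    sed -i -E 's/^[[:space:]]*(max_seq_redisc|rereg_on_guid_migr|aguid_inout_notice|sm_assign_guid_func|reports|per_module_logging|consolidate_ipv4_mask)\b/#&/' /etc/opensm/opensm.conf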

This issue is tracked with Oracle bug 25985258.