4 Known Issues with Oracle Database Appliance in This Release

The following are known issues when deploying, updating, and managing Oracle Database Appliance in this release.

Known Issues When Deploying Oracle Database Appliance

Understand the known issues when provisioning or deploying Oracle Database Appliance.

Changing the CPU core count after provisioning appliance for Oracle Database Appliance release 18.7 fails

On a newly-provisioned release 18.7 Oracle Database Appliance system, changing the CPU core count fails.

An Oracle Database Appliance system is provisioned with all CPU cores enabled by default. You can change the CPU core count after provisioning the appliance with the command odacli update-cpucore -c count. Changing the CPU core count fails in the following provisioning cases:
  • If you provision an Oracle Database Appliance system with Oracle Database Appliance release 18.7, and then change the CPU core count as a postinstallation task, then the operation fails with the error message:
    Apply one-off fix for Bug 30269395 on both nodes in order to update CPU cores. Please refer to 18.7.0.0.0 Release Notes.
    Workaround:
    1. Download and apply the patch for bug 30269395 available on My Oracle Support:

      https://support.oracle.com/rs?type=patch&id=30269395.

    2. After the patch is successfully applied, change the CPU core count with the command odacli update-cpucore.
  • On a new Oracle Database Appliance system that is not yet provisioned, changing the CPU core count fails with the following error message:
    ODA system needs to be provisioned with full CPU cores before updating CPU cores. Please refer to 18.7.0.0.0 release notes.
    Workaround:
    1. Provision the system with Oracle Database Appliance release 18.7.
    2. Download and apply the patch for bug 30269395 available on My Oracle Support:

      https://support.oracle.com/rs?type=patch&id=30269395.

    3. After the patch is successfully applied, change the CPU core count with the command odacli update-cpucore, as shown in the example after this list.
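
A minimal example of the command, run after the patch is applied on both nodes (the core count of 8 is a hypothetical value; choose a count supported by your hardware model):

# odacli update-cpucore -c 8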

Hardware Models

All Oracle Database Appliance hardware models

This issue is tracked with Oracle bug 30313635.

Running /opt/oracle/dcs/bin/odacli configure-firstnet fails on Oracle Database Appliance X8-2 Systems

Running /opt/oracle/dcs/bin/odacli configure-firstnet for network setup on Oracle Database Appliance X8-2 systems fails.

The following error message is displayed:
Summary: TB /usr/sbin/system-config-network-cmd:343:main:ParseError:  
 ('Error parsing line:  
DeviceList.BOND.bond0.IP=172.16.20.70', 'an integer is required')  
Traceback (most recent call last):  
   File "/usr/sbin/system-config-network-cmd", line 343, in main raise pe 
ParseError: ('Error parsing line: DeviceList.BOND.bond0.IP=172.16.20.70', 
'an integer is required') 

Hardware Models

Oracle Database Appliance X8-2 Hardware Models

Workaround

Instead, run the command /opt/oracle/oak/bin/configure-firstnet on Oracle Database Appliance X8-2 systems.

This issue is tracked with Oracle bug 30499174.

Only one network interface displayed after rebooting node

After rebooting the node, only one network interface is displayed.

When both nodes reboot or power on simultaneously, only one of the HAIP interfaces is used and Oracle ASM may not be able to start. The netstat command returns only one of the two interfaces.
# netstat -nr | grep 169 
169.254.0.0     0.0.0.0         255.255.0.0     U         0 0          0 eth0
Ensure that ora.cluster_interconnect.haip is ONLINE on one node before rebooting (or powering on) the other node.
# /u01/app/18.0.0.0/grid/bin/crsctl stat res -t -init|grep -A1  
ora.cluster_interconnect.haip 
------------------------------------------------------------------------------ 
-- 
Name           Target  State        Server                   State details     
    
------------------------------------------------------------------------------ 
-- 
Cluster Resources 
------------------------------------------------------------------------------ 
-- 
ora.cluster_interconnect.haip 
      1        ONLINE  ONLINE       <hostname>            STABLE 

Hardware Models

Oracle Database Appliance bare metal deployments on X4-2 and X7-2 hardware models. X5-2 and X6-2 bare metal deployments with InfiniBand interconnect are not affected.

Workaround

If both nodes were already rebooted simultaneously and only one interface is configured for high availability, then stop crs on both nodes and start crs on each node, one by one.
  1. Log in as root on either node and stop the cluster with the -all option.
    # /u01/app/18.0.0.0/grid/bin/crsctl stop cluster -all
  2. Stop crs on both nodes.
    [Node 0] 
    # /u01/app/18.0.0.0/grid/bin/crsctl stop crs 
    [Node 1] 
    # /u01/app/18.0.0.0/grid/bin/crsctl stop crs
  3. Start crs on each node, one by one, and then verify the HAIP status as shown after this list.
    [Node 0] 
    # /u01/app/18.0.0.0/grid/bin/crsctl start crs 
    [Node 1] 
    # /u01/app/18.0.0.0/grid/bin/crsctl start crs 
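
After crs starts on both nodes, the HAIP resource can be confirmed ONLINE again by reusing the status command shown earlier:

# /u01/app/18.0.0.0/grid/bin/crsctl stat res -t -init | grep -A1 ora.cluster_interconnect.haip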

This issue is tracked with Oracle bug 29613692.

Snapshot databases can only be created on the primary database

For the oakcli stack, a snapshot database can be created only from the primary database, not from the standby database.

If the database name (db_name) and database unique name (db_unique_name) are different when creating the snapshot database, then the following error is encountered:

WARNING: 2018-09-13 12:47:18: Following data files are not on SNAP location

Hardware Models

All Oracle Database Appliance hardware models for Virtualized Platform

Workaround

None. For the oakcli stack, create the snapshot database from the primary database, not from the standby database.

This issue is tracked with Oracle bug 28649665.

Creation of CDB for 12.1.0.2 databases may fail

Creation of multitenant container database (CDB) for 12.1.0.2 databases on Virtualized Platform may fail.

If the database name (db_name) and database unique name (db_unique_name) are different when creating the snapshot database, then the following error is encountered:

WARNING: 2018-09-13 12:47:18: Following data files are not on SNAP location

Hardware Models

All Oracle Database Appliance hardware models for Virtualized Platform

Workaround

None.

This issue is tracked with Oracle bug 29231958.

DCS-10045:Validation error encountered: Error retrieving the cpucores

When deploying the appliance, DCS-10045 error appears. There is an error retrieving the CPU cores of the second node.

Hardware Models

Oracle Database Appliance X7-2-HA

Workaround

  1. Remove the following directory in Node0: /opt/oracle/dcs/repo/node_0

  2. Remove the following directory in Node1: /opt/oracle/dcs/repo/node_1

  3. Restart the dcs-agent on both nodes.

    cd /opt/oracle/dcs/bin
    initctl stop initdcsagent
    initctl start initdcsagent

This issue is tracked with Oracle bug 27527676.

Database creation hangs when using a deleted database name for database creation

Database creation hangs when you reuse the name of a deleted database.

If you delete a 11.2.0.4 database, and then create a new database with same name as the deleted database, database creation hangs while unlocking the DBSNMP user for the database.

Hardware Models

All Oracle Database Appliance high-availability environments

Workaround

Before creating the 11.2.0.4 database with the same name as the deleted database, delete the DBSNMP user, if the user exists.

For example, the following command deletes the DBSNMP user for the database testdb.

/u01/app/18.0.0.0/grid/bin/crsctl delete wallet -type CVUDB -name testdb -user DBSNMP 
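
Before running the delete command, you can check whether a wallet entry exists for the database (a sketch; the crsctl query wallet syntax shown is an assumption, mirroring the delete command above):

# /u01/app/18.0.0.0/grid/bin/crsctl query wallet -type CVUDB -name testdb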

This issue is tracked with Oracle bug 28916487.

Error when updating 12.1.0.2 database homes

When updating Oracle Database homes from 12.1.0.2 to 18.3, using the command odacli update-dbhome -i dbhomeId -v 18.3.0.0.0, an error is encountered.

DCS-10001:Internal error encountered: Failed to run SQL script: datapatch script

Hardware Models

All Oracle Database Appliance hardware models bare metal deployments

Workaround

Apply the patch for bug 24385625 and run odacli update-dbhome -i dbhomeId -v 18.3.0.0.0 again to fix the issue.

This issue is tracked with Oracle bug 28975529.

Error encountered after running cleanup.pl

Errors encountered in running odacli commands after running cleanup.pl.

After running cleanup.pl, when you try to use odacli commands, the following error is encountered:

DCS-10042:User oda-cliadmin cannot be authorized.

Hardware Models

All Oracle Database Appliance hardware models for bare metal deployments

Workaround

Run the following commands to set up the credentials for the user oda-cliadmin on the agent wallet:

# rm -rf /opt/oracle/dcs/conf/.authconfig  
# /opt/oracle/dcs/bin/setupAgentAuth.sh 

This issue is tracked with Oracle bug 29038717.

Accelerator volume for data is not created on flash storage

The accelerator volume for data is not created on flash storage, for databases created during provisioning of appliance.

Hardware Models

Oracle Database Appliance high capacity environments with HDD disks

Workaround

Do not create the database when provisioning the appliance. Provisioning without a database creates all the required disk groups, including flash. After provisioning the appliance, create the database, as shown in the example after this paragraph; the accelerator volume is then created.
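
For example, after provisioning completes, the database can be created with a command of the following form (the database name mydb is hypothetical; add the options appropriate for your deployment):

# odacli create-database -n mydb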

This issue is tracked with Oracle bug 28836461.

Database connection fails after database upgrade

After upgrading the database from 11.2 to 12.1.0.2, database connection fails due to job_queue_processes value.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:

  1. Before upgrading the database, check the value of the job_queue_processes parameter, for example, x. If the value of job_queue_processes is less than 4, then set the value to 4 (see the sketch after these steps).

  2. Upgrade the database to 12.1.0.2.

  3. After upgrading the database, set the value of job_queue_processes to the earlier value, for example, x.
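
A minimal SQL*Plus sketch of steps 1 and 3 (the value x stands for the original setting noted in step 1):

SQL> SHOW PARAMETER job_queue_processes
SQL> ALTER SYSTEM SET job_queue_processes = 4;
-- after the upgrade, restore the earlier value x
SQL> ALTER SYSTEM SET job_queue_processes = x;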

This issue is tracked with Oracle bug 28987900.

Failure in creating 18.3 database with DSS database shape odb1s

When creating 18.3 databases with the DSS database shape odb1s, the creation fails with the following error message:

ORA-04031: unable to allocate 6029352 bytes of shared memory ("shared
pool","unknown object","sga heap(1,0)","ksipc pct")

Hardware Models

All Oracle Database Appliance Hardware Models

Workaround

None.

This issue is tracked with Oracle bug 28444642.

Error in provisioning Oracle ASM Database on FLASH storage

On Oracle Database Appliance High-Availability systems with High Capacity storage, Oracle ASM Database creation on FLASH storage fails.

This issue occurs because the FLASH disk group is not mounted.

Hardware Models

All Oracle Database Appliance high-availability hardware models with High Capacity storage configuration

Workaround

Provision the appliance without creating the database, and then create the database.

This issue is tracked with Oracle bug 30309798.

Clone database operation fails

For Oracle Database release 12.1 databases, the database clone creation may fail because the default compatible version from the Oracle binaries is set to 12.0.0.0.0.

Hardware Models

All Oracle Database Appliance high-availability hardware models for bare metal deployments

Workaround

Set the compatible value to that of the source database. Follow these steps:
  1. Change the parameter value.
    SQL> ALTER SYSTEM SET COMPATIBLE = '12.1.0.2.0' SCOPE=SPFILE;
  2. Shut down the database.
    SQL> SHUTDOWN IMMEDIATE
  3. Start the database.
    SQL> STARTUP
  4. Verify the parameter for the new value.
    SQL> SELECT name, value, description FROM v$parameter WHERE name ='compatible';

This issue is tracked with Oracle bug 30309914.

Errors in clone database operation

Clone database operation fails due to errors.

If the db_name and db_unique_name are not the same for the source database, or they are in mixed case (a mix of uppercase and lowercase letters), or the source database is a single-instance or Oracle RAC One Node database running on the remote node, then the clone database operation fails, because the paths are not created correctly in the control file.

The clone database operation may also fail with errors if the source database was created too close to the clone operation (within the previous 60 minutes).

Hardware Models

All Oracle Database Appliance high-availability hardware models for bare metal deployments

Workaround

Create the clone database from a source database that has the same db_name and db_unique_name, in lowercase letters, and whose instance is running on the same node from which the clone database creation is triggered.

For Oracle Database 12c and later, synchronize the source database before the clone operation, by running the command:
SQL> alter system checkpoint;

This issue is tracked with Oracle bugs 29002231, 29002563, 29002004, 29001906, 29001855, 29001631, 28995153, 28986643, 28986950, 30309971, and 30228362.

Errors after restarting CRS

If Cluster Ready Services (CRS) is stopped or restarted before stopping the repository and virtual machines, then errors may occur.

Repository status is unknown and High Availability Virtual IP is offline if the Cluster Ready Services (CRS) are stopped or restarted before stopping the repository and virtual machines.

Hardware Models

Oracle Database Appliance HA models X7-2-HA, X6-2-HA, X5-2, X4-2, X3-2, V1

Workaround

Follow these steps:

  1. Start the High Availability Virtual IP on node1.
    # /u01/app/GI_version/grid/bin/srvctl start havip -id havip_0  
  2. Stop the oakVmAgent.py process on dom0.

  3. Run the lazy unmount option on the dom0 repository mounts:
    umount -l mount_points

This issue is tracked with Oracle bug 20461930.

Unable to create an Oracle ASM Database for Release 12.1

Known issues with Oracle Automatic Storage Management (Oracle ASM) are preventing the REDO diskgroup from mounting for Oracle Database Release 12.1.

You cannot create an Oracle ASM database lower than the 12.1.0.2.170814 PSU (12.1.2.12).

Hardware Models

Oracle Database Appliance X6-2-HA, X5-2, X4-2, X3-2, and V1.

Workaround

There is no workaround. If you have Oracle Database 11.2 or 12.1 that is using Oracle Automatic Storage Management (Oracle ASM) and you want to upgrade to a higher release of Oracle Database, then you must be on at least Oracle Database Appliance 12.1.2.12.0 and Database Home 12.1.0.2.170814.

The upgrade path for Oracle Database 11.2 or 12.1 Oracle ASM is as follows:

  • If you are on Oracle Database Appliance version 12.1.2.6.0 or later, then upgrade to 12.1.2.12 or higher before upgrading your database.

  • If you are on Oracle Database Appliance version 12.1.2.5 or earlier, then upgrade to 12.1.2.6.0, and then upgrade again to 12.1.2.12 or higher before upgrading your database.

This issue is tracked with Oracle bugs 21626377, 27682997, and 21780146. The issues are fixed in Oracle Database 12.1.0.2.170814.

Database creation fails for odb-01s DSS databases

When attempting to create a DSS database with shape odb-01s, the job may fail with the following errors:

CRS-2674: Start of 'ora.test.db' on 'example_node' failed
CRS-5017: The resource action "ora.test.db start" encountered the following
error:
ORA-03113: end-of-file on communication channel
Process ID: 0
Session ID: 0 Serial number: 0
. For details refer to "(:CLSN00107:)" in
"/u01/app/grid/diag/crs/example_node/crs/trace/crsd_oraagent_oracle.trc".

Hardware Models

Oracle Database Appliance X6-2-HA, X5-2, X4-2, X3-2, and V1

Workaround

There is no workaround for this shape; select an alternate shape to create the database.

This issue is tracked with Oracle bug 27768012.

Known Issues When Patching Oracle Database Appliance

Understand the known issues when patching Oracle Database Appliance to this release.

Patching appliance with reduced CPU core count to release 18.7 fails

On an Oracle Database Appliance system with a reduced CPU core count, patching to Oracle Database Appliance release 18.7 fails.

If the Oracle Database Appliance system you are attempting to patch from an earlier supported release has a reduced CPU core count, then patching to Oracle Database Appliance release 18.7 fails in the following patching cases:
  • When you patch an Oracle Database Appliance system that has a reduced CPU core count to Oracle Database Appliance release 18.7, the odacli update-server command fails with the error message:
    Configured core count is not equal maximum allowed core count. Refer to 18.7.0.0.0 Release Notes for steps to run before patching the system to 18.7.0.0.0.
    Workaround:
    1. Determine the CPU core count on your Oracle Database Appliance system.
      odacli describe-cpucore
      Node  Cores  Modified                       Job Status
      ----- ------ ------------------------------ ---------------
      0     36     October 18, 2019 4:29:01 PM CEST Configured
      0     12     November 8, 2019 9:24:27 AM CET Configured
    2. Reboot the nodes, update the BIOS, and set the enabled cores per socket value.

      Update the BIOS for both nodes with half of the count displayed in the odacli describe-cpucore command output. For example, if the value returned was 36, then update the BIOS with the value 18.

    3. Patch your appliance to Oracle Database Appliance release 18.7.
  • After patching to Oracle Database Appliance release 18.7, if you try to change the CPU core count, and if the patch for Bug 30269395 is not applied, then the following error message is encountered:
    Configured core count is not equal maximum allowed core count, and patch for bug 30269395 is not present in GI home. Download and apply the patch for 30269395 and rerun this command. Refer to 18.7.0.0.0 Release Notes for information on this known issue.
    Workaround:
    1. Download and apply the patch for bug 30269395, available on My Oracle Support, on both nodes:

      https://support.oracle.com/rs?type=patch&id=30269395

    2. After the patch is successfully applied on both nodes, change the CPU core count with the command odacli update-cpucore.
    3. If the value for the enabled cores per socket value is not set to the maximum in the BIOS, then reboot the nodes, update the BIOS, and set the enabled cores per socket value.

      For example, if the maximum core count for your Oracle Database Appliance hardware model is 36, then update the BIOS with the enabled cores per socket value of 18.

Hardware Models

All Oracle Database Appliance hardware models

This issue is tracked with Oracle bug 30313635.

Resilvering of Oracle ADVM processes impacting performance after upgrading to 18.3

Upgrading to Oracle Database Appliance 18.3 or later can impact performance on some Oracle Database Appliance systems due to Oracle ASM Dynamic Volume Manager (Oracle ADVM) processes consuming excessive CPU.

When you upgrade to Oracle Database Appliance release 18.3, the storage disks may be resilvered, or synchronized again, for mirrored volumes on an Oracle ASM disk group with an Allocation Unit (AU) size greater than 1 MB. The larger the Oracle Automatic Storage Management Cluster File System (Oracle ACFS) volume size, the higher the impact.

Hardware Models

All Oracle Database Appliance hardware models, particularly X5-2 and X7-2 High Capacity models that use 8TB HDDs.

Workaround

For information about resolving this issue, see Oracle Support Note 2525427.1 at:

https://support.oracle.com/rs?type=doc&id=2525427.1

This issue is tracked with Oracle bug 29520544.

Error when applying patch to Oracle Database Appliance on Virtualized Platform

Server patching for Oracle Database Appliance on Virtualized Platform may fail with errors.

Running the command oakcli update -patch 18.7.0.0 fails with the following error:

VM server patch failed to apply gi patch,Copy failed from
'/tmp/oakpatch/gi/18.0.0.0.0/29757256/30097923/30223179/files/bin/crsctl.bin'
to '/u01/app/18.0.0.0/grid/bin/crsctl.bin', OakVmAgent using crsctl command
to check repo or tfa to check db resources status which cause gi patch failure. 

Hardware Models

All Oracle Database Appliance hardware models for Virtualized Platform

Workaround

Follow these steps:

  1. On ODA_BASE, run the following commands:
    # oakcli disable startrepo -node 0
    # oakcli disable startrepo -node 1
  2. Run the command to apply the patch.
    oakcli update -patch 18.7.0.0
  3. Stop Oracle TFA Collector.
    # /u01/app/18.0.0.0/grid/bin/tfactl stop
  4. After patching completes, run the following commands on ODA_BASE:
    # oakcli enable startrepo -node 0
    # oakcli enable startrepo -node 1  

This issue is tracked with Oracle bug 30318993.

Error in validating patch for Oracle Database Appliance Virtualized Platform

When validating the server patch for Oracle Database Appliance Virtualized Platform, an error is encountered.

When patching the appliance, the oakcli validate -c ospatch -ver 18.7.0.0.0 command fails with the following error:

Bareword "X7_HP" not allowed while "strict subs" in use at /opt/oracle/oak/lib/oakvalidatelib/oaksharedstorageinfo.pm line 152.
Bareword "X7_HC" not allowed while "strict subs" in use at /opt/oracle/oak/lib/oakvalidatelib/oaksharedstorageinfo.pm line 161.
Bareword "X7_HP" not allowed while "strict subs" in use at /opt/oracle/oak/lib/oakvalidatelib/oaksharedstorageinfo.pm line 171.
Bareword "X7_HC" not allowed while "strict subs" in use at /opt/oracle/oak/lib/oakvalidatelib/oaksharedstorageinfo.pm line 176. 

Hardware Models

All Oracle Database Appliance hardware models for Virtualized Platform

Workaround

None. Ignore the error message and continue.

This issue is tracked with Oracle bug 30539619.

Error when patching DCS agent

When patching the DCS agent, an error is encountered.

When patching the DCS agent during the provisioning process, the following error is displayed:

DCS-10011:Input parameter 'defaultUserAlias' cannot be NULL. 

This issue occurs when you upgrade a system which does not have Oracle Grid Infrastructure.

Hardware Models

All Oracle Database Appliance hardware models bare metal deployments

Workaround

Install Oracle Grid Infrastructure and then upgrade the system.

This issue is tracked with Oracle bug 30114925.

Error when patching to 12.1.0.2.190716 Release Update

When patching Oracle Database release 12.1.0.2 to Oracle Database 12.1.0.2.190716 Release Update, an error is encountered.

The ODACLI job displays the following error:

DCS-10001:Internal error encountered: Failed to run SQL script: datapatch script. 

The data patch log contains the entry "Prereq check failed, exiting without installing any patches.".

Hardware Models

All Oracle Database Appliance hardware models bare metal deployments

Workaround

Install the same patch again by rerunning the odacli update-dbhome command, as shown in the example below.
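
A sketch of the rerun (the database home ID dbhomeId is a placeholder; the version shown assumes the Oracle Database Appliance release that delivers this Release Update):

# odacli update-dbhome -i dbhomeId -v 18.7.0.0.0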

This issue is tracked with Oracle bugs 30026438 and 30155710.

Relocation of Oracle RAC One Database fails during patching

When relocating Oracle RAC One Database during patching, an error is encountered.

When patching a database home in which one or more Oracle RAC One Databases are running, the relocation of the Oracle RAC One Database may fail. This causes the Oracle Database home patching to fail.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Shut down the Oracle RAC One Node database manually and then patch the database home. After patching completes successfully, start Oracle Database (see the sketch below).
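
A minimal sketch of the manual sequence using srvctl (the database unique name mydb is hypothetical):

# srvctl stop database -db mydb
... patch the database home ...
# srvctl start database -db mydb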

This issue is tracked with Oracle bug 30114925.

Error in patching Oracle Database Appliance

When applying the server patch for Oracle Database Appliance, an error is encountered.

When patching the appliance, the odacli update-server command fails with the following error:

INFO: date_time_stamp: Checking for running CRS processes on the node.
ERROR: date_time_stamp: Failed in shutting down all crs processes.
Exiting...
ERROR: date_time_stamp: Failed to patch server (grid) component 

Hardware Models

All Oracle Database Appliance hardware models

Workaround

  1. Shut down Oracle TFA Collector.
    /u01/app/18.0.0.0/grid/bin/tfactl stop
    
  2. Restart Oracle Database Appliance server patching.

This issue is tracked with Oracle bug 30260318.

Error when patching database homes to 18.3 with Oracle Grid Infrastructure release 18.3.0.0.180717

When DCS agent release 18.7 tries to patch 18.3 database homes, an error is encountered.

This error occurs because Oracle Grid Infrastructure is on release 18.3.0.0.180717.

Hardware Models

All Oracle Database Appliance hardware models bare metal deployments

Workaround

Update the server to Oracle Database Appliance release 18.7 or update the DCS agent to Oracle Database Appliance release 18.3.

This issue is tracked with Oracle bug 30320375.

Patching pre-checks do not complete with --local option during server patching

Server patching fails while running patching pre-checks with the --local option.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Do not run patching pre-checks on the server with the --local option; run them without it, as in the example below.
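
For example, run the pre-checks without the --local option (a sketch; the create-prepatchreport options shown are assumptions and may vary by release):

# odacli create-prepatchreport -s -v 18.7.0.0.0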

This issue is tracked with Oracle bug 30255817.

Server status not set to Normal when patching

When patching Oracle Database Appliance, an error is encountered.

When patching the appliance, the odacli update-server command fails with the following error:

DCS-10001:Internal error encountered: Server upgrade state is not NORMAL node_name 

Hardware Models

All Oracle Database Appliance hardware models

Workaround

  1. Run the command:
    /u01/app/18.0.0.0/grid/bin/cluvfy stage -post crsinst -collect cluster -gi_upgrade -n all
  2. Ignore the following two warnings:
    Verifying OCR Integrity ...WARNING
    PRVG-6017 : OCR backup is located in the same disk group "+DATA" as OCR.
    
    Verifying Single Client Access Name (SCAN) ...WARNING
    PRVG-11368 : A SCAN is recommended to resolve to "3" or more IP addresses
  3. Run the command again until the output displays only the two warnings above. The status of Oracle Clusterware should be Normal again.

  4. You can verify the status with the command:
    /u01/app/18.0.0.0/grid/bin/crsctl query crs activeversion -f

This issue is tracked with Oracle bug 30099090.

Error when patching 11.2.0.4 database homes

Patching 11.2.0.4 Oracle Database homes to Release 18.5 may fail if bug#2015 exists in the inventory.

Hardware Models

All Oracle Database Appliance Bare Metal and Virtualized Platform Deployments.

Workaround

Delete bug#2015 if it exists in the inventory.
  1. Check if bug#2015 exists in the inventory:
    su - oracle 
    export ORACLE_HOME=path_to_the_11.2.0.4_ORACLE_HOME 
    $ORACLE_HOME/OPatch/opatch lspatches | grep -i "OCW" | cut -d ';' -f1
  2. The command returns a bug number, for example, 28729234. Navigate to the inventory:
    cd $ORACLE_HOME/inventory/oneoffs/<bug number from above command>/etc/config
  3. Check if inventory.xml contains a string such as 'bug number="2015"'. If no match is found, then no action is required, and you can skip the remaining steps.
    grep 'bug number="2015"' inventory.xml 
    echo $?   # the command returns 0 if a match is found
  4. Take a backup of inventory.xml.
    cp inventory.xml inventory.xml.$(date +%Y%m%d-%H%M)
  5. Delete the entry <bug number="2015" ...> from inventory.xml, editing the file in place:
    sed -i '/bug number="2015"/d' inventory.xml

This issue is tracked with Oracle bugs 29834563 and 29446248.

Error when patching from Oracle Database Appliance release earlier than 18.3 to release 18.7

When patching from Oracle Database Appliance release earlier than 18.3 to release 18.7, an error is encountered.

The minimum system version required to patch to Oracle Database Appliance release 18.7 is release 18.3. If you try to patch your deployment to release 18.7 from an Oracle Database Appliance release earlier than 18.3, and the DCS agent patching to release 18.7 succeeds, then when you try to patch the server to release 18.7, the DCS agent reports the following error:

DCS-10001:Internal error encountered: Current version is not compatible to update to 18.7.0.0.0. Please update the server components in the following order [18.3.0.0.0]

Hardware Models

All Oracle Database Appliance hardware models bare metal deployments

Workaround

Install the DCS command line interface utility (CLI) for Oracle Database Appliance release 18.7 on both nodes manually. The DCS CLI is located in the following location:
/opt/oracle/oak/pkgrepos/dcscli/latest/
Run the following command to install the DCS CLI:
# rpm -Uvh /opt/oracle/oak/pkgrepos/dcscli/latest/dcs-cli-18.7.0.0.0_LINUX.X64_*.rpm 

This issue is tracked with Oracle bug 30258005.

Error in patching NVMe disks to the latest version

Patching of NVMe disks to the latest version may not be supported on some Oracle Database Appliance hardware models.

On Oracle Database Appliance X8-2 hardware models, the NVMe controller 7361456_ICRPC2DD2ORA6.4T is installed with a later version, VDV1RL01 or VDV1RL02. Patching of this controller is not supported on Oracle Database Appliance X8-2 hardware models. For other platforms, if the installed version is QDV1RE0F, QDV1RE13, QDV1RD09, or QDV1RE14, then when you patch the storage, the NVMe controller version is updated to QDV1RF30.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

None

This issue is tracked with Oracle bug 30287439.

Failure in patching Oracle Database Appliance Virtualized Platform

Server patching for Oracle Database Appliance may fail with errors.

Patching the appliance server fails with the following error:

Worker 0: IOError: [Errno 28] No space left on device 

This can occur during server patching. The space issue may occur either on ODA_BASE or dom0. The issue occurs when the log files opensm.log on dom0 and ibacm.log on ODA_BASE increase in size and consume all free space on the volume.

Hardware Models

Oracle Database Appliance hardware models X6-2 and X5-2 Virtualized Platform with InfiniBand

Workaround

Follow these steps:

  1. On dom0, truncate /var/log/opensm.log.
  2. On ODA_BASE, truncate /var/log/ibacm.log (see the sketch after these steps).
  3. Stop Oracle Clusterware:
    /u01/app/18.0.0.0/grid/bin/crsctl stop crs -f
  4. After the cluster and the cluster resources are stopped, start Oracle Clusterware:
    /u01/app/18.0.0.0/grid/bin/crsctl start crs 
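
The log files in steps 1 and 2 can be truncated in place, for example (run the first command on dom0 and the second on ODA_BASE):

# : > /var/log/opensm.log
# : > /var/log/ibacm.log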

Restart Oracle Database Appliance server patching.

This issue is tracked with Oracle bug 30327847.

Error in patching Oracle Database Appliance Virtualized Platform

When applying the server patch for Oracle Database Appliance Virtualized Platform, an error is encountered.

Patching the appliance server fails with the following error:

ERROR: Host 192.168.16.28 listed in file /opt/oracle/oak/temp_privips.txt is not pingable at /opt/oracle/oak/pkgrepos/System/18.7.0.0.0/bin/pkg_install.pl 
line 1806 
ERROR: Unable to apply the patch 2 

This can occur during a non-local (rolling) server patch. The error is seen on the first node after patching of ODA_BASE and dom0 is complete. This issue is caused by the remote node, Node1, rebooting during patching.

Hardware Models

Oracle Database Appliance hardware models X6-2 and X5-2 Virtualized Platform with InfiniBand

Workaround

  1. Shut down Oracle TFA Collector.
    /u01/app/18.0.0.0/grid/bin/tfactl stop
    
  2. Restart Oracle Database Appliance server patching.

This issue is tracked with Oracle bug 30318927.

Patching of M.2 drives not supported

Patching of M.2 drives (local disks SSDSCKJB48 and SSDSCKJB480G7) is not supported.

These drives are displayed when you run the odacli describe-component command. Patching of the M.2 disk is not supported for either of the two known versions, 0112 and 0121. Patching the LSI controller version 13.00.00.00 to version 16.00.01.00 is also not supported. However, on some Oracle Database Appliance X8-2 models, the installed LSI controller version may be 16.00.01.00.

Hardware Models

Oracle Database Appliance bare metal deployments

Workaround

None

This issue is tracked with Oracle bug 30249232.

Onboard public network interfaces do not come up after patching or imaging

When you apply patches or re-image Oracle Database Appliance, the onboard public network interfaces may not come up because of a faulty status presented in the ILOM.

Hardware Models

Oracle Database Appliance X7-2-HA, X7-2S, X7-2M

Workaround

  1. Clear all faults on the ILOM.
  2. Reset or power cycle the host.
  3. Check that the ILOM has the most current version of firmware patches.
  4. Check that the X7-2 On Board Dual Port 10Gb/25Gb SFP28 Ethernet Controller firmware is up-to-date.
  5. Collect a new snapshot and monitor your appliance to confirm that the faults did not recur.
  6. Contact Oracle Support if this issue recurs.

This issue is tracked with Oracle bugs 29206350 and 28308268.

Server patching fails to start Oracle Clusterware

When applying the server patch, Oracle Clusterware does not start due to an issue with the Oracle Clusterware Time Synchronization Services Daemon (OCTSSD).

Hardware Models

Oracle Database Appliance high-availability hardware models bare metal deployments.

Workaround

  1. Log in as root on either node and stop the cluster with the -force option.
    # export ORACLE_HOME=/u01/app/18.0.0.0/grid
    # $ORACLE_HOME/bin/crsctl stop crs -force
  2. Restart ctssd on the master node and failed node.
    On the master node:
    # $ORACLE_HOME/bin/crsctl stop res ora.ctssd -init 
    # $ORACLE_HOME/bin/crsctl start res ora.ctssd -init
  3. Update the server.
    # odacli update-server -v 18.5.0.0.0 

This issue is tracked with Oracle bug 29549267.

Stack migration fails during patching

After patching the OAK stack, the following error is encountered when running odacli commands:

DCS-10001:Internal error encountered: java.lang.String cannot be cast to 
com.oracle.dcs.agent.model.DbSystemNodeComponents.

Hardware Models

All Oracle Database Appliance Hardware models

Workaround

  1. Rename the /etc/ntp.conf file temporarily and retry patching the appliance.
    # mv /etc/ntp.conf /etc/ntp.conf.orig
  2. After patching is successful, restore the /etc/ntp.conf file.
    # mv /etc/ntp.conf.orig /etc/ntp.conf

This issue is tracked with Oracle bug 29216717.

DATA disk group fails to start after upgrading Oracle Grid Infrastructure to 18.5

After upgrading Oracle Grid Infrastructure to 18.5, the DATA disk group fails to start.

The following error is reported in the log file:

ORA-15038: disk '/dev/mapper/HDD_E1_S13_1931008292p1' mismatch on 'Sector
Size' with target disk group [512] [4096]

Hardware Models

Oracle Database Appliance hardware models X5-2 or later, with mixed storage disks installed

Workaround

To start Oracle Clusterware successfully, connect to Oracle ASM as the grid user, and run the following SQL commands:

SQL> show parameter _disk_sector_size_override; 

NAME                                 TYPE        VALUE 
-------------------------------------------------------- 
_disk_sector_size_override           boolean     TRUE 

SQL> alter system set "_disk_sector_size_override" = FALSE scope=both; 
alter system set "_disk_sector_size_override" = FALSE scope=both 
* 
ERROR at line 1: 
ORA-32000: write to SPFILE requested but SPFILE is not modifiable 

SQL> alter diskgroup DATA mount; 

Diskgroup altered. 

SQL> alter system set "_disk_sector_size_override" = FALSE scope=both; 

System altered.

This issue is tracked with Oracle bug 29220984.

Some files missing after patching the appliance

Some files are missing after patching the appliance.

Hardware Models

Oracle Database Appliance X7-2 hardware models

Workaround

Before patching the appliance, take a backup of the /etc/sysconfig/network-scripts/ifcfg-em* files, and compare the file contents after patching. If any files or parameters of the ifcfg-em* files are missing, then recover them from the backup (see the sketch below).
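
A minimal backup sketch (the backup directory /root/ifcfg-backup is hypothetical):

# mkdir -p /root/ifcfg-backup
# cp -p /etc/sysconfig/network-scripts/ifcfg-em* /root/ifcfg-backup/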

This issue is tracked with Oracle bug 28308268.

Space issues with /u01 directory after patching

After patching to 18.7, the directory /u01/app/18.0.0.0/grid/log/hostname/client fills quickly with gpnp logs.

Hardware Models

All Oracle Database Appliance hardware models for virtualized platforms deployments (X3-2 HA, X4-2 HA, X5-2 HA, X6-2 HA, X7-2 HA)

Workaround

  1. Run the following commands on both ODA_BASE nodes:

    On Node0:

    rm -rf /u01/app/18.0.0.0/grid/log/hostname/client/
    oakcli enable startrepo -node 0
    oakcli stop oak
    pkill odaBaseAgent
    oakcli start oak

    On Node1:

    rm -rf /u01/app/18.0.0.0/grid/log/hostname/client/
    oakcli enable startrepo -node 1
    oakcli stop oak
    pkill odaBaseAgent
    oakcli start oak

This issue is tracked with Oracle bug 28865162.

Errors when deleting database storage after migration to DCS stack

After migrating to the DCS stack, some volumes in the database storage cannot be deleted.

If you create Oracle ACFS database storage with the oakcli create dbstorage command for a multitenant environment (CDB) without a database in the OAK stack, and then migrate to the DCS stack, then when you delete the database storage, only the DATA volume is deleted, and not the REDO and RECO volumes.

Hardware Models

All Oracle Database Appliance high-availability hardware models for bare metal deployments

Workaround

Create a database on Oracle ACFS database storage with the same name as the database for which you want to delete the storage volumes, and then delete the database. This cleans up all the volumes and file systems.

This issue is tracked with Oracle bug 28987135.

Repository in offline or unknown status after patching

After rolling or local patching of both nodes to 18.7, repositories are in offline or unknown state on node 0 or 1.

The command oakcli start repo <reponame> fails with the error:

OAKERR8038 The filesystem could not be exported as a crs resource  
OAKERR:5015 Start repo operation has been disabled by flag

Models

Oracle Database Appliance X7-2-HA, X6-2-HA, X5-2, X4-2, X3-2, and V1.

Workaround

Log in to ODA_BASE on either node and run the following two commands:

oakcli enable startrepo -node 0  
oakcli enable startrepo -node 1

The commands start the repositories and enable them to be available online.

This issue is tracked with Oracle bug 27539157.

Versions of some components not updated after cleaning up and reprovisioning Oracle Database Appliance

Oracle Auto Service Request (ASR), Oracle TFA Collector, and Oracle ORAchk versions are not updated after cleaning up and reprovisioning Oracle Database Appliance.

When cleaning up and reprovisioning Oracle Database Appliance with release 18.7, the Oracle Auto Service Request (ASR), Oracle TFA Collector, and Oracle ORAchk RPMs may not be updated to release 18.7. The components are updated when you apply the patches for Oracle Database Appliance release 18.7.

Hardware Models

All Oracle Database Appliance deployments

Workaround

Update to the latest server patch for the release.

This issue is tracked with Oracle bugs 28933900 and 30187516.

11.2.0.4 databases fail to start after patching

After patching Oracle Database Appliance to release 18.3, databases of version 11.2.0.4 fail to start.

Hardware Models

All Oracle Database Appliance Hardware models

Workaround

Databases of versions 11.2.0.4.170814 and 11.2.0.4.171017 must be manually started after patching to Oracle Database Appliance release 18.3.

Start the databases with the command:
srvctl start database -db db_unique_name

This issue is tracked with Oracle bug 28815716.

FLASH disk group is not mounted when patching or provisioning the server

The FLASH disk group is not mounted after a reboot, including after provisioning, reimaging, or patching the server with Oracle Database Appliance 12.2.1.2.

This issue occurs when the node reboots and then you attempt to create an Oracle Automatic Storage Management Cluster File System (Oracle ACFS) database. When patching or provisioning a server with Oracle Database Appliance 12.2.1.2, you will encounter an SSH disconnect issue and an error.
# oakcli update -patch 12.2.1.2 --server

**************************************************************************** 
*****   For all X5-2 customers with 8TB disks, please make sure to     *****
*****   run storage patch ASAP to update the disk firmware to "PAG1".  *****
**************************************************************************** 
INFO: DB, ASM, Clusterware may be stopped during the patch if required 
INFO: Both Nodes may get rebooted automatically during the patch if required 
Do you want to continue: [Y/N]?: y 
INFO: User has confirmed for the reboot 
INFO: Patch bundle must be unpacked on the second Node also before applying the patch 
Did you unpack the patch bundle on the second Node? : [Y/N]? : y  
Please enter the 'root'  password :  
Please re-enter the 'root' password:  
INFO: Setting up the SSH 
..........Completed .....  
... ...
INFO: 2017-12-26 00:31:22: -----------------Patching ILOM & BIOS----------------- 
INFO: 2017-12-26 00:31:22: ILOM is already running with version 3.2.9.23r116695 
INFO: 2017-12-26 00:31:22: BIOS is already running with version 30110000 
INFO: 2017-12-26 00:31:22: ILOM and BIOS will not be updated  
INFO: 2017-12-26 00:31:22: Getting the SP Interconnect state... 
INFO: 2017-12-26 00:31:44: Clusterware is running on local node 
INFO: 2017-12-26 00:31:44: Attempting to stop clusterware and its resources locally 
Killed 
# Connection to server.example.com closed. 

The Oracle High Availability Services, Cluster Ready Services, Cluster Synchronization Services, and Event Manager are online. However, when you attempt to create an Oracle Automatic Storage Management Cluster File System (Oracle ACFS) database, you receive an error: flash space is 0.

Hardware Models

Oracle Database Appliance X5-2, X6-2-HA, and X7-2-HA SSD systems.

Workaround

Manually mount the FLASH disk group before creating an Oracle ACFS database.

Perform the following steps as the GRID owner:

  1. Set the environment variables as the grid OS user:

    On node0:
    export ORACLE_SID=+ASM1
    export ORACLE_HOME=/u01/app/12.2.0.1/grid
    
  2. Log on to the Oracle ASM instance as sysasm.

    $ORACLE_HOME/bin/sqlplus / as sysasm
  3. Execute the following SQL command:

    SQL> ALTER DISKGROUP FLASH MOUNT;

This issue is tracked with Oracle bug 27322213.

Error in patching database home locally using the Web Console

Applying a database home patch locally through the Web Console creates a pre-patch submission request.

Models

All Oracle Database Appliance Hardware Models

Workaround

Use the odacli update-dbhome --local command to patch database homes locally, as in the example below.
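
For example (the database home ID dbhomeId is a placeholder, obtained from odacli list-dbhomes; the version is an assumption for this release):

# odacli update-dbhome -i dbhomeId -v 18.7.0.0.0 --local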

This issue is tracked with Oracle bug 28909972.

Error when patching Oracle Database 11.2.0.4

When patching Oracle Database 11.2.0.4, the log file may show some errors.

When patching Oracle Database 11.2.0.4 homes, the following error may be logged in alert.log.

ORA-00600: internal error code, arguments: [kgfmGetCtx0], [kgfm.c],
[2840], [ctx], [], [], [], [], [], [], [], []

After the patching completes, the error is no longer raised.

Hardware Models

Oracle Database Appliance X7-2-HA Virtualized Platform, X6-2-HA Bare Metal and Virtualized Platform, X5-2, X4-2, X3-2, and V1.

Workaround

There is no workaround for this issue.

This issue is tracked with Oracle bug 28032876.

Known Issues When Managing Oracle Database Appliance

Understand the known issues when managing or administering Oracle Database Appliance.

Extensive tracing generated for server processes

Extensive trace files for the server processes are generated with DRM messages.

2019-08-07 03:35:33.498*:example1():   
[0x3fc1001c][0xf02],[TX][ext0x0,0x0][domid 0x0] 
  maxnodes 16, key 2663540594, node 2 (inst 3), member_node 0 
2019-08-07 03:35:33.498*:example1():   delta 15 
2019-08-07 03:35:33.498*:example2():   
[0x3fc1001c][0xf11],[TX][ext0x0,0x0][domid 0x0] 
  maxnodes 16, key 2663540609, node 1 (inst 2), member_node 1 

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Disable tracing:
alter system set event='trace [rac_enq] disk disable' scope=spfile; 

This issue is tracked with Oracle bug 30166512.

Missing DATA, RECO, and REDO entries when dbstorage is rediscovered

Running the odacli update-registry command with the -n all --force or -n dbstorage --force options can result in metadata corruption.

Hardware Models

All Oracle Database Appliance hardware models bare metal deployments

Workaround

Use the -n all option only on migrated systems where all the databases were created with OAKCLI. On other systems that run on the DCS stack, update each component other than dbstorage individually, using the command odacli update-registry -n component_name for each component, excluding dbstorage (see the example below).
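
For example, to refresh a single component other than dbstorage (the component name system is an assumption; use the component names valid for your release):

# odacli update-registry -n system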

This issue is tracked with Oracle bug 30274477.

Incorrect Aura8 firmware value displayed

The Aura8 firmware version displayed in the components list is incorrect.

Models

Oracle Database Appliance X8-2S and X8-2M

Workaround

None.

This issue is tracked with Oracle bug 30340410.

Error encountered for database operations for odb28 database shape

When performing database operations with the odb28 database shape, an error is encountered.

The database shape odb28 is listed as an unsupported database shape in the /opt/oracle/dcs/rdbaas/config/opc_sizing_metadata.xml file for some Oracle Database Appliance hardware models.

Hardware Models

Oracle Database Appliance Hardware Models X7-2 and X8-2

Workaround

Update the /opt/oracle/dcs/rdbaas/config/opc_sizing_metadata.xml file with the information for odb28. For example:
<shape name="Odb28">
    <ocpus>28</ocpus>
    <memory>224GB</memory>
    <log_buffer>128M</log_buffer>
    <redo_size>4GB</redo_size>
    <db_block_size>8k</db_block_size>
    <db_size>100</db_size>
</shape>

This issue is tracked with Oracle bug 30313914.

Error when restoring a database from backup

When running the odacli irestore-database command, an error is encountered.

When the source database that contains the backups used for the database restore has a name identical to a substring in one of the tablespace names, the database restore operation fails.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Rename the tablespace so that it does not contain the same substring as the source database name, and then perform the restore operation.

This issue is tracked with Oracle bug 30290161.

ODA_BASE is in read-only mode or cannot start

The /OVS directory is full and ODA_BASE is in read-only mode.

The vmcore file in the /OVS/var directory can cause the /OVS directory (Dom 0) to become 100% used. When Dom 0 is full, ODA_BASE is in read-only mode or cannot start.

Hardware Models

Oracle Database Appliance X6-2-HA, X5-2, X4-2, X3-2, and V1.

Oracle Database Appliance X7-2-HA Virtualized Platform.

Workaround

Perform the following to correct or prevent this issue:

  • Periodically check the file usage on Dom 0 and clean up the vmcore file, as needed.

  • Edit the oda_base vm.cfg file and change the on_crash = 'coredump-restart' parameter to on_crash = 'restart', especially when ODA_BASE is using more than 200 GB of memory.

This issue is tracked with Oracle bug 26121450.

Restriction in moving database home for database shape greater than odb8

When creating databases, there is a policy restriction for creating databases with database shapes odb8 or higher for Oracle Database Standard Edition.

To maintain consistency with this policy restriction, do not migrate any database to an Oracle Database Standard Edition database home, where the database shape is greater than odb8. The database migration may not fail, but it may not adhere to policy rules.

Hardware Models

All Oracle Database Appliance Hardware Models bare metal deployments

Workaround

None.

This issue is tracked with Oracle bug 29003323.

The odaeraser tool does not work if oakd is running in non-cluster mode

After cleaning up the deployment, the Secure Eraser tool does not work if oakd is running in non-cluster mode.

Hardware Models

All Oracle Database Appliance Hardware bare metal systems

Workaround

After cleanup of the deployment, oakd is started in non-cluster mode, and it cannot be stopped using the odaadmcli stop oak command. In such a case, if the Secure Eraser tool is run, then the odaeraser command fails.

Use the command odaadmcli shutdown oak to stop oakd.

This issue is tracked with Oracle bug 28547433.

Issues with the Web Console on Microsoft web browsers

Oracle Database Appliance Web Console has issues on Microsoft Edge and Microsoft Internet Explorer web browsers.

Following are issues with Microsoft web browsers:
  • Oracle Database Appliance Web Console does not display correctly on Microsoft Edge and Microsoft Internet Explorer web browsers.
  • Advanced Information for the appliance does not display on Microsoft Internet Explorer web browser.
  • Job activity status does not refresh in the Web Console on Microsoft Internet Explorer web browser.
  • After configuring the oda-admin password, the following error is displayed:
    Failed to change the default user (oda-admin) account password. 
    Status Code: 500 DCS-10001: DCS-10001:Internal error encountered: User not authorized

    Workaround: Close the Microsoft Internet Explorer browser session and open another browser session.

Models

All Oracle Database Appliance Hardware Models bare metal deployments

Workaround

To access the Web Console, use either Google Chrome or Firefox.

This issue is tracked with Oracle bugs 27798498, 27028446, 30077007, 30099089, 29887027, and 27799452.

Disk space issues due to Zookeeper logs size

The Zookeeper log files, zookeeper.out and /opt/zookeeper/log/zkMonitor.log, are not rotated when new logs are added. This can cause disk space issues.

Hardware Models

All Oracle Database Appliance hardware models for bare metal deployments

Workaround

Rotate the zookeeper log files manually if the log file size increases, as follows:

  1. Stop the DCS-agent service for zookeeper on both nodes.

    initctl stop initdcsagent
  2. Stop the zookeeper service on both nodes.

    /opt/zookeeper/bin/zkServer.sh stop
  3. Clean the zookeeper logs after taking a backup, either by manually deleting the existing file or by following steps 4 to 10.

  4. Set the ZOO_LOG_DIR as an environment variable to a different log directory, before starting the zookeeper server.

    export ZOO_LOG_DIR=/opt/zookeeper/log
  5. Switch to ROLLINGFILE, to set the capability to roll.

    export ZOO_LOG4J_PROP="INFO, ROLLINGFILE"
    Restart the zookeeper server for the changes to take effect.
  6. Set the following parameters in the /opt/zookeeper/conf/log4j.properties file, to limit the number of backup files, and the file sizes.

    zookeeper.log.dir=/opt/zookeeper/log
    zookeeper.log.file=zookeeper.out
    log4j.appender.ROLLINGFILE.MaxFileSize=10MB
    log4j.appender.ROLLINGFILE.MaxBackupIndex=10
  7. Start zookeeper on both nodes.

    /opt/zookeeper/bin/zkServer.sh start
  8. Check the zookeeper status, and verify that zookeeper runs in leader/follower/standalone mode.

    /opt/zookeeper/bin/zkServer.sh status
    ZooKeeper JMX enabled by default
    Using config: /opt/zookeeper/bin/../conf/zoo.cfg
    Mode: follower
  9. Start the dcs agent on both nodes.

    initctl start initdcsagent
  10. Purge the zookeeper monitor log, zkMonitor.log, in the location /opt/zookeeper/log. You do not have to stop the zookeeper service.

This issue is tracked with Oracle bug 29033812.

Error after running the cleanup script

After running the cleanup.pl script, the following error message appears: DCS-10001:Internal error encountered: Fail to start hand shake.

The error is caused when you run the following steps:

  1. Run cleanup.pl on the first node (Node0). Wait until the cleanup script finishes, then reboot the node.

  2. Run cleanup.pl on the second node (Node1). Wait until the cleanup script finishes, then reboot the node.

  3. After both nodes are started, use the command-line interface to list the jobs on Node0. An internal error appears.

    # odacli list-jobs
    DCS-10001:Internal error encountered: Fail to start hand shake to localhost:7070

Hardware Models

Oracle Database Appliance X7-2-HA

Workaround

  1. Verify the zookeeper status on both nodes before starting dcsagent:

    /opt/zookeeper/bin/zkServer.sh status

    For a single-node environment, the status should be leader, follower, or standalone.

  2. Restart the dcsagent on Node0 after running the cleanup.pl script.

    # initctl stop initdcsagent 
    # initctl start initdcsagent

Incorrect results returned for the describe-component command in certain cases

The describe-component command may return incorrect results in some cases.

For the following disk, the describe-component command shows the available version as QDV1RE14, which is lower than the actual version QDV1RF30:

Disk type: NVMe
    Manufacturer : Intel
    Model:  0x0a54
    Product name: 7335940:ICDPC2DD2ORA6.4T
    Version: QDV1RF30

The following disk is not visible when you run the describe-component command. This does not impact the system components, except the display.

Disk type: NVMe
    Manufacturer : Intel
    Model:  0x0a54
    Product name: 7361456_ICRPC2DD2ORA6.4T
    Version: VDV1RY03

Hardware Models

All Oracle Database Appliance hardware models.

Workaround

Use the fwupdate list all command to check the correct versions.

This issue is tracked with Oracle bug 29680034.

Old configuration details persisting in custom environment

The configuration file /etc/security/limits.conf contains default entries even in the case of custom environments.

On custom environments, when a single user is configured for both grid and oracle, the default grid user entries for the image are not removed from the /etc/security/limits.conf file.

Models

Oracle Database Appliance X7-2-HA, X7-2S, and X7-2M

Workaround

This issue does not affect the functionality. Manually edit the /etc/security/limits.conf file and remove invalid entries.

This issue is tracked with Oracle bug 27036374.

Incorrect SGA and PGA values displayed

For online transaction processing (OLTP), In-Memory (IMDB), and decision support services (DSS) databases created with odb36 database shape, the PGA and SGA values are displayed incorrectly.

For OLTP databases created with odb36 shape, following are the issues:

  • sga_target is set as 128 GB instead of 144 GB

  • pga_aggregate_target is set as 64 GB instead of 72 GB

For DSS databases created with odb36 shape, following are the issues:

  • sga_target is set as 64 GB instead of 72 GB

  • pga_aggregate_target is set as 128 GB instead of 144 GB

For IMDB databases created with odb36 shape, following are the issues:

  • sga_target is set as 128 GB instead of 144 GB

  • pga_aggregate_target is set as 64 GB instead of 72 GB

  • inmemory_size is set as 64 GB instead of 72 GB

Models

Oracle Database Appliance X7-2-HA, X7-2S, and X7-2M

Workaround

Reset the PGA and SGA sizes manually (see the sketch below).
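
For example, for an OLTP database created with the odb36 shape, a minimal SQL*Plus sketch of the manual reset (target values are taken from the list above; using SCOPE=SPFILE, which requires a database restart to take effect, is an assumption):

SQL> ALTER SYSTEM SET sga_target = 144G SCOPE=SPFILE;
SQL> ALTER SYSTEM SET pga_aggregate_target = 72G SCOPE=SPFILE;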

This issue is tracked with Oracle bug 27036374.

OAKERR:7007 Error encountered while starting VM

When starting a virtual machine (VM), an error message appears that the domain does not exist.

If a VM was cloned in Oracle Database Appliance 12.1.2.10 or earlier, you cannot start the HVM domain VMs in Oracle Database Appliance 12.1.2.11.

This issue does not impact newly cloned VMs in Oracle Database Appliance 12.1.2.11 or any other type of VM cloned on older versions. The VM templates were fixed in release 12.1.2.11.0.

When trying to start the VM (vm4 in this example), the output is similar to the following:

# oakcli start vm vm4 -d 
.
Start VM : test on Node Number : 0 failed.
DETAILS:
        Attempting to start vm on node:0=>FAILED.  
<OAKERR:7007 Error  encountered while starting VM -  Error: Domain 'vm4' does not exist.>                        

The following is an example of the vm.cfg file for vm4:

vif = ['']
name = 'vm4'
extra = 'NODENAME=vm4'
builder = 'hvm'
cpus = '0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23'
vcpus = 2
memory = 2048
cpu_cap = 0
vnc = 1
serial = 'pty'
disk = [u'file:/OVS/Repositories/odarepo1/VirtualMachines/vm4/68c32afe2ba8493e89f018a970c644ea.img,xvda,w']
maxvcpus = 2
maxmem = 2048

Hardware Models

Oracle Database Appliance X6-2-HA, X5-2, X4-2, X3-2, and V1

Oracle Database Appliance X7-2-HA Virtualized Platform.

Workaround

Delete the extra = 'NODENAME=vm_name'  line from the vm.cfg file for the VM that failed to start.

  1. Open the vm.cfg file for the virtual machine (vm) that failed to start.

    • Dom0: /Repositories/vm_repo_name/.ACFS/snaps/vm_name/VirtualMachines/vm_name

    • ODA_BASE: /app/sharedrepo/vm_repo_name/.ACFS/snaps/vm_name/VirtualMachines/vm_name

  2. Delete the following line: extra = 'NODENAME=vm_name'. For example, if virtual machine vm4 failed to start, delete the line extra = 'NODENAME=vm4'.

    vif = ['']
    name = 'vm4'
    extra = 'NODENAME=vm4' 
    builder = 'hvm'
    cpus = '0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23'
    vcpus = 2
    memory = 2048
    cpu_cap = 0
    vnc = 1
    serial = 'pty'
    disk = [u'file:/OVS/Repositories/odarepo1/VirtualMachines/vm4/68c32afe2ba8493e89f018a970c644ea.img,xvda,w']
    maxvcpus = 2
    maxmem = 2048
  3. Start the virtual machine on Oracle Database Appliance 12.1.2.11.0.

    # oakcli start vm vm4

This issue is tracked with Oracle bug 25943318.

Error in node number information when running network CLI commands

Network information for node0 is always displayed for some odacli commands, when the -u option is not specified.

If the -u option is not provided, then the describe-networkinterface, list-networks, and describe-network odacli commands always display the results for node0 (the default node), irrespective of whether the command is run from node0 or node1.

Hardware Models

Oracle Database Appliance X7-2-HA, X6-2-HA, X5-2, X4-2, X3-2, and V1

Workaround

Specify the -u option in the odacli command to display details for a specific node, as in the example below.
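
For example, to display the networks for node1 (the node number 1 is given here for illustration):

# odacli list-networks -u 1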

This issue is tracked with Oracle bug 27251239.

Unrecognized Token Messages Appear in /var/log/messages

After updating Oracle Database Appliance, unrecognized token messages appear in /var/log/messages.

Updating to Oracle Database Appliance 12.1.2.11.0 updates the Oracle VM Server version to 3.4.3. After updating, the following messages appear in /var/log/messages:

Unrecognized token: "max_seq_redisc"
Unrecognized token: "rereg_on_guid_migr"
Unrecognized token: "aguid_inout_notice"
Unrecognized token: "sm_assign_guid_func"
Unrecognized token: "reports"
Unrecognized token: "per_module_logging"
Unrecognized token: "consolidate_ipv4_mask"

You can ignore the messages for these parameters; they do not impact the InfiniBand compliant Subnet Manager and Administration (opensm) functionality. However, Oracle recommends removing the parameters to avoid flooding /var/log/messages.

Hardware Models

Oracle Database Appliance X6-2-HA and X5-2 with InfiniBand

Workaround

Perform the following to remove the parameters:

  1. After patching, update the /etc/opensm/opensm.conf file in bare metal deployments and in Dom0 in virtualized platform environments to remove the parameters.

    cat /etc/opensm/opensm.conf | egrep -w 'max_seq_redisc|rereg_on_guid_migr|aguid_inout_notice|sm_assign_guid_func|reports|per_module_logging|consolidate_ipv4_mask' | grep -v ^#
    max_seq_redisc 0
    rereg_on_guid_migr FALSE
    aguid_inout_notice FALSE
    sm_assign_guid_func uniq_count
    reports 2
    per_module_logging FALSE
    consolidate_ipv4_mask 0xFFFFFFFF
  2. Reboot. The messages will not appear after rebooting the node.

This issue is tracked with Oracle bug 25985258.