4 Known Issues with Oracle Database Appliance in This Release

The following are known issues when deploying, updating, and managing Oracle Database Appliance in this release.

Known Issues When Patching Oracle Database Appliance

Understand the known issues when patching Oracle Database Appliance to this release.

Error in server patching

When patching Oracle Database Appliance that already has STIG V1R2 deployed, an error is encountered.

On an Oracle Database Appliance deployment with a release earlier than 19.12, if the Security Technical Implementation Guidelines (STIG) V1R2 is already deployed, then when you patch to release 19.12, the odacli update-server -f version command fails.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

The STIG V1R2 rule OL7-00-040420 attempts to change the permissions of the file /etc/ssh/ssh_host_rsa_key from '640' to '600', which causes the error. During patching, run the chmod 600 /etc/ssh/ssh_host_rsa_key command on both nodes.
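For reference, the permission fix can be sketched as follows. This is a minimal illustration that uses a temporary file in place of /etc/ssh/ssh_host_rsa_key; on the appliance, run the chmod on the real file on both nodes.

```shell
# Sketch: restore the host key permissions expected during patching.
# A temporary file stands in for /etc/ssh/ssh_host_rsa_key here.
key=$(mktemp)
chmod 640 "$key"              # state set by STIG V1R2 rule OL7-00-040420
chmod 600 "$key"              # permission required during patching
perms=$(stat -c '%a' "$key")  # verify the result
echo "$perms"
rm -f "$key"
```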

This issue is tracked with Oracle bug 33168598.

Error in prepatch report for the update-server command

When you patch the server to Oracle Database Appliance release 19.12, the odacli update-server command fails.

The following error message is displayed in the pre-patch report:
Evaluate GI patching Failed Internal error encountered:        
/u01/app/19.12.0.0/ygridDCS-10001:  
 PRGO-1022 : Working copy "OraGrid191200" already exists..... 

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:
  1. Run the odacli update-server command with the -f option.
    /opt/oracle/dcs/bin/odacli update-server -v 19.12.0.0.0 -f

This issue is tracked with Oracle bug 33261965.

Error in prepatch report for the update-dbhome command

When you patch the server to Oracle Database Appliance release 19.12, the odacli update-dbhome command fails.

The following error message is displayed in the pre-patch report:
Evaluate DBHome patching with Failed Internal error encountered: Internal RHP error encountered: PRGO-1693 : The database patching cannot be completed
in a rolling manner because the target patched home at "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_3" contains non-rolling bug fixes "32327201"
compared to the source home at "/u01/app/oracle/product/19.0.0.0/dbhome_1"....

Evaluate DBHome patching with Failed Internal error encountered: Internal RHP error encountered: PRCT-1003 : failed to run "rhphelper" on node "node1"
PRCT-1014 : Internal error: RHPHELP12102_main-02... 

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:
  1. Run the odacli update-dbhome command with the -f option.
    /opt/oracle/dcs/bin/odacli update-dbhome --dbhomeid 7c67c5b4-f585-4ba9-865f-c719c63c0a6e -v 19.12.0.0.0 -f

This issue is tracked with Oracle bug 33251523.

AHF error in prepatch report for the update-dbhome command

When you patch the server to Oracle Database Appliance release 19.12, the odacli update-dbhome command fails.

The following error message is displayed in the pre-patch report:
Verify the Alternate Archive    Failed    AHF-4940: One or more log archive 
Destination is Configured to              destination and alternate log archive
Prevent Database Hangs                    destination settings are not as recommended           

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:
  1. Run the odacli update-dbhome command with the -f option.
    /opt/oracle/dcs/bin/odacli update-dbhome --dbhomeid 7c67c5b4-f585-4ba9-865f-c719c63c0a6e -v 19.12.0.0.0 -f

This issue is tracked with Oracle bug 33144170.

Database clone error in prepatch report for the update-dbhome command

When you patch the server to Oracle Database Appliance release 19.12, the odacli update-dbhome command fails.

The following error message is displayed in the pre-patch report:
Is DB clone available           Failed    The DB clone for version            
                                          19.12.0.0.210720 cannot be found.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:
  1. Stop and restart the DCS agent and run the pre-patch report again.
    systemctl stop initdcsagent
    systemctl start initdcsagent
  2. Create the pre-patch report again and check that the same error is not displayed in the report:
    /opt/oracle/dcs/bin/odacli create-prepatchreport -s -v 19.12.0.0.0
  3. Run the odacli update-dbhome command.
    /opt/oracle/dcs/bin/odacli update-dbhome --dbhomeid 7c67c5b4-f585-4ba9-865f-c719c63c0a6e -v 19.12.0.0.0 -f

This issue is tracked with Oracle bug 33293991.

Error in running the update-dbhome command

When you patch database homes to Oracle Database Appliance release 19.11, the odacli update-dbhome command fails.

For Oracle Database Appliance release 19.11, when you run the odacli update-dbhome command, due to the inclusion of the non-rolling DST patch, the job waits for 12,000 seconds (approximately 3 hours 20 minutes). The following error message is displayed:
DCS-10001:Internal error encountered: PRCC-1021 :
One or more of the submitted commands did not execute successfully.
PRCC-1025 : Command submitted on node cdb1 timed out after 12,000 seconds..
The rhp.log file contains the following entries:
"PRGO-1693 : The database patching cannot be completed in a rolling manner because the target patched home at "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_4" contains non-rolling bug fixes "32327201" compared to the source home at "/u01/app/oracle/product/19.0.0.0/dbhome_1"

Hardware Models

All Oracle Database Appliance hardware models with Oracle Database Appliance release 19.11

Workaround

Follow these steps:
  1. Shut down and restart the failed database, and then run the datapatch script manually to complete the database update.
    db_home_path_the_database_is_running_on/OPatch/datapatch
  2. If the database is an Oracle ACFS database that was patched to 19.12, then run the odacli list-dbstorages command, and locate the corresponding entries by db_unique_name. Check from the result whether the DATA and RECO destination locations exist.
  3. For DATA destination location, the value should be similar to the following:
    /u02/app/oracle/oradata/db_unique_name
  4. For RECO, keep only the portion of the value up to the last forward slash (/). For example:
    /u03/app/oracle
    addlFS = /u01/app/odaorahome,/u01/app/odaorabase0 (for single-node systems)
    addlFS = /u01/app/odaorahome,/u01/app/odaorabase0,/u01/app/odaorabase1 (for high-availability systems)
  5. Run the srvctl command db_home_path_the_database_is_running_on/bin/srvctl modify database -d db_unique_name -acfspath $data,$reco,$addlFS -diskgroup DATA. For example:
    srvctl modify database -d provDb0 -acfspath
    /u02/app/oracle/oradata/provDb0,/u03/app/oracle/,/u01/app/odaorahome,/u01/app/
    odaorabase0 -diskgroup DATA
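The path handling in steps 2 to 5 can be sketched in shell as follows. The RECO destination value below is a hypothetical example; the other paths are taken from the example in the steps above.

```shell
# Sketch: assemble the -acfspath argument for srvctl modify database.
data="/u02/app/oracle/oradata/provDb0"            # DATA destination (example above)
reco_dest="/u03/app/oracle/fast_recovery_area"    # hypothetical RECO destination
reco="${reco_dest%/*}"                            # keep the value up to the last /
addlFS="/u01/app/odaorahome,/u01/app/odaorabase0" # single-node systems
acfspath="${data},${reco},${addlFS}"
echo "$acfspath"
```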

This issue is tracked with Oracle bug 32740491.

Error in updating dbhome

When you patch database homes to Oracle Database Appliance release 19.12, the odacli update-dbhome command fails.

The following error message is displayed:
PRGH-1153 : RHPHelper call to get runing nodes failed for DB: "GIS_IN"

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Ensure that the database instances are running before you run the odacli update-dbhome command. Do not manually stop the database before updating it.

This issue is tracked with Oracle bug 33114855.

Error when patching DB systems

When patching DB systems on Oracle Database Appliance, an error is encountered.

When a DB system node, which has Oracle Database Appliance 19.10, reboots, the ora packages repository is not mounted automatically. The 19.10 DCS Agent does not mount the repositories, causing a failure in operations that need repository access, such as patching.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

When you restart the bare metal system, the DCS Agent on the bare metal system restarts NFS on both nodes. Follow these steps to remount the repository on the DB system:
  1. On the first node, mount the pkgrepos directory on the VM:
    cp /opt/oracle/oak/pkgrepos/System/VERSION /opt/oracle/oak/conf/VERSION
    mount 192.168.17.2:/opt/oracle/oak/pkgrepos /opt/oracle/oak/pkgrepos

    For InfiniBand environments:

    mount 192.168.16.24:/opt/oracle/oak/pkgrepos /opt/oracle/oak/pkgrepos
  2. On the second node, mount the pkgrepos directory on the VM:
    cp /opt/oracle/oak/pkgrepos/System/VERSION /opt/oracle/oak/conf/VERSION
    mount 192.168.17.3:/opt/oracle/oak/pkgrepos /opt/oracle/oak/pkgrepos

    For InfiniBand environments:

    mount 192.168.16.25:/opt/oracle/oak/pkgrepos /opt/oracle/oak/pkgrepos
  3. Patch the DB system with the same steps as when patching the bare metal system:
        odacli update-dcsadmin -v 19.12.0.0.0
        odacli update-dcscomponents -v 19.12.0.0.0
        odacli update-dcsagent -v 19.12.0.0.0
        odacli create-prepatchreport -v 19.12.0.0.0 -s
        odacli update-server -v 19.12.0.0.0
        odacli create-prepatchreport -v 19.12.0.0.0 -d -i id
        odacli update-dbhome -v 19.12.0.0.0 -i id -f -imp
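The choice of mount source in steps 1 and 2 can be sketched as a small helper. The addresses are the ones listed in the steps above; the interconnect and node arguments are assumptions for illustration.

```shell
# Sketch: pick the bare metal repository export address for a DB system node.
repo_source() {  # $1 = interconnect (ethernet|infiniband), $2 = node (1|2)
  case "$1:$2" in
    ethernet:1)   echo 192.168.17.2 ;;
    ethernet:2)   echo 192.168.17.3 ;;
    infiniband:1) echo 192.168.16.24 ;;
    infiniband:2) echo 192.168.16.25 ;;
  esac
}
echo "mount $(repo_source ethernet 1):/opt/oracle/oak/pkgrepos /opt/oracle/oak/pkgrepos"
```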

This issue is tracked with Oracle bug 33217680.

Error in server patching

When patching Oracle Database Appliance, errors are encountered.

The odacli update-server command may fail with the following message:
Fail to patch GI with RHP : DCS-10001:Internal error encountered: PRGH-1057
        : failure during move of an Oracle Grid Infrastructure home
        …
        …
        PRCZ-4001 : failed to execute command
        "/u01/app/19.12.0.0/grid/crs/install/rootcrs.sh" using the
        privileged execution plugin "odaexec" on nodes "xxxxxxxx"
        within 36,000 seconds
        PRCZ-2103 : Failed to execute command
        "/u01/app/19.12.0.0/grid/crs/install/rootcrs.sh" on node "xxxxxxxx" as user
        "root". Detailed error: Using configuration parameter file:
        /u01/app/19.12.0.0/grid/crs/install/crsconfig_params
        The log of current session can be found at:
   /u01/app/grid/crsdata/<node_name>/crsconfig/crs_postpatch_apply_oop_node_name_timestamp.log
This error shows that during the move of Oracle Grid Infrastructure stack to the new home location, stopping the Clusterware on the earlier Oracle home fails. Confirm that it is the same error by checking the error log for the following entry:
“Error unmounting '/opt/oracle/oak/pkgrepos/orapkgs/clones'. Possible busy file system. Verify
the logs.Retrying unmount
CRS-2675: Stop of 'ora.data.acfsclone.acfs' on
'node1' failed
CRS-2679: Attempting to clean 'ora.data.acfsclone.acfs' on 'node1'
Clean action is about to exhaust maximum waiting time
CRS-2678: 'ora.data.acfsclone.acfs' on 'node1' has experienced an unrecoverable failure
CRS-0267: Human intervention required to resume its availability.
CRS-2679: Attempting to clean 'ora.data.acfsclone.acfs' on 'node1'
Clean action is about to exhaust maximum waiting time
CRS-2680: Clean of 'ora.data.acfsclone.acfs' on 'node1' failed                      
                     …
                     …"

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps on both nodes:
  1. Restart the Clusterware manually from the old grid home, that is, the 19.10 or 19.11 home.
  2. Locate all export points of /opt/oracle/oak/pkgrepos:
    # cat /var/lib/nfs/etab
    /opt/oracle/oak/pkgrepos 192.168.17.4(ro,sync,wdelay,hide,crossmnt,secure,root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,ro,secure,root_squash,no_all_squash)
  3. Clear references to export of clones:
    # exportfs -u host:/opt/oracle/oak/pkgrepos
    # exportfs -u 192.168.17.4:/opt/oracle/oak/pkgrepos
  4. After running steps 1-3 on both nodes, run the odacli update-server command and patch your appliance.
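Steps 2 and 3 can be sketched as follows. The etab line below is a sample based on the entry shown in step 2; on a live node you would read /var/lib/nfs/etab directly.

```shell
# Sketch: extract the client address from an etab entry for the pkgrepos export.
etab_line='/opt/oracle/oak/pkgrepos 192.168.17.4(ro,sync,wdelay,hide,crossmnt,secure)'
client="${etab_line#* }"      # drop the export path
client="${client%%(*}"        # drop the export options
echo "exportfs -u ${client}:/opt/oracle/oak/pkgrepos"
```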

This issue is tracked with Oracle bug 33284607.

Error in storage patching

When patching Oracle Database Appliance, errors are encountered.

The odacli update-storage command may fail with the following message:
DCS-10001:Internal error encountered: Failed to stop cluster
This error shows that stopping the Clusterware may fail. Confirm that it is the same error by checking the error log for the following entry:
“Error unmounting '/opt/oracle/oak/pkgrepos/orapkgs/clones'. Possible busy file system. Verify
the logs.Retrying unmount
CRS-2675: Stop of 'ora.data.acfsclone.acfs' on
'node1' failed
CRS-2679: Attempting to clean 'ora.data.acfsclone.acfs' on 'node1'
Clean action is about to exhaust maximum waiting time
CRS-2678: 'ora.data.acfsclone.acfs' on 'node1' has experienced an unrecoverable failure
CRS-0267: Human intervention required to resume its availability.
CRS-2679: Attempting to clean 'ora.data.acfsclone.acfs' on 'node1'
Clean action is about to exhaust maximum waiting time
CRS-2680: Clean of 'ora.data.acfsclone.acfs' on 'node1' failed                      
                     …
                     …"

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps on both nodes:
  1. Restart the Clusterware manually from the old grid home, that is, the 19.10 or 19.11 home.
  2. Locate all export points of /opt/oracle/oak/pkgrepos:
    # cat /var/lib/nfs/etab
    /opt/oracle/oak/pkgrepos 192.168.17.4(ro,sync,wdelay,hide,crossmnt,secure,root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,ro,secure,root_squash,no_all_squash)
  3. Clear references to export of clones:
    # exportfs -u host:/opt/oracle/oak/pkgrepos
    # exportfs -u 192.168.17.4:/opt/oracle/oak/pkgrepos
  4. After running steps 1-3 on both nodes, run the odacli update-storage command and patch the storage.

This issue is tracked with Oracle bug 33284607.

Retrying update-server command after odacli update-server command fails

When you patch to Oracle Database Appliance release 19.11, the odacli update-server command fails.

Even when the odacli update-server job is successful, odacli describe-job output may show a message about missing patches on the source home. For example:
Message: Contact Oracle Support Services to request patch(es) "bug #". The patched "OraGrid191100" is missing the patches for bug "bug#" which is present in the source "OraGrid19000"

For release 19.11, a missing patch error for bug number 29511771 is expected. This patch contains Perl version 5.28 for the source grid home. Oracle Database Appliance release 19.11 includes the later Perl version 5.32 in the Oracle Grid Infrastructure clone files, and hence, you can ignore the error. For any other missing patches reported in the odacli describe-job command output, contact Oracle Support to request the patches for Oracle Clusterware release 19.11.

Hardware Models

All Oracle Database Appliance hardware models with Oracle Database Appliance release 19.11

Workaround

Review the error messages reported in the odacli describe-job command output for any missing patches other than the patch with bug number 29511771, and contact Oracle Support to request the patches for Oracle Clusterware release 19.11.

This issue is tracked with Oracle bug 32973488.

Retrying odacli update-dbhome command with -imp option after update fails

When you patch database homes to Oracle Database Appliance release 19.11, the odacli update-dbhome command fails.

For Oracle Database Appliance release 19.11, when you run the odacli update-dbhome command, the following error message is displayed:
DCS-10001:Internal error encountered: Contact Oracle Support Services to request patch(es) "bug#". Then supply the --ignore-missing-patch|-imp to retry the command.
You need not contact Oracle Support for the following bug numbers in the error message:
  • 27138071 and 30508171, applicable to Oracle Database release 12.1
  • 28581244 and 30508161, applicable to Oracle Database release 12.2
  • 28628507 and 31225444, applicable to Oracle Database release 18c
  • 29511771, applicable to Oracle Database release 19c

These patches contain the earlier Perl versions 5.26 and 5.28 for the source database home. Oracle Database Appliance release 19.11 includes the later Perl version 5.32 in the database clone files, and hence, you can ignore the error. Rerun the odacli update-dbhome command with the -imp option.
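The check for whether the reported bug numbers are all in the ignorable Perl-related set above can be sketched as follows; the reported value is a hypothetical example extracted from the error message.

```shell
# Sketch: decide whether all reported "missing patch" bugs are ignorable.
ignorable="27138071 30508171 28581244 30508161 28628507 31225444 29511771"
reported="29511771"            # hypothetical: bug numbers from the error message
ok=yes
for b in $reported; do
  case " $ignorable " in
    *" $b "*) ;;               # Perl-related, safe to ignore
    *) ok=no ;;                # unknown bug: contact Oracle Support
  esac
done
echo "$ok"                     # yes: rerun update-dbhome with -imp
```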

Hardware Models

All Oracle Database Appliance hardware models with Oracle Database Appliance release 19.11

Workaround

Rerun the odacli update-dbhome command with the -imp option:
# /opt/oracle/dcs/bin/odacli update-dbhome --dbhomeid 7c67c5b4-f585-4ba9-865f-c719c63c0a6e -v 19.12.0.0.0 -imp

This issue is tracked with Oracle bug 32915897.

Error in running the update-dbhome command

When you patch database homes to Oracle Database Appliance release 19.11, the odacli update-dbhome command fails.

For Oracle Database Appliance release 19.11, when you run the odacli update-dbhome command, due to the inclusion of the non-rolling DST patch, the job waits for 12,000 seconds (approximately 3 hours 20 minutes). The following error message is displayed:
DCS-10001:Internal error encountered: PRCC-1021 :
One or more of the submitted commands did not execute successfully.
PRCC-1025 : Command submitted on node cdb1 timed out after 12,000 seconds..
The rhp.log file contains the following entries:
"PRGO-1693 : The database patching cannot be completed in a rolling manner because the target patched home at "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_4" contains non-rolling bug fixes "32327201" compared to the source home at "/u01/app/oracle/product/19.0.0.0/dbhome_1"

Hardware Models

All Oracle Database Appliance hardware models with Oracle Database Appliance release 19.11

Workaround

Shut down and restart the failed database, and then run the datapatch script manually to complete the database update.
 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_4/OPatch/datapatch

This issue is tracked with Oracle bug 32801095.

Error in upgrading from Oracle Linux 6 to Oracle Linux 7 during Oracle Database Appliance patching

When upgrading from Oracle Linux 6 to Oracle Linux 7 as part of the Oracle Database Appliance upgrade from release 18.8 to 19.x, an error is encountered.

The following errors are reported when running the odacli update-server command:
DCS-10059:Clusterware is not running on all nodes  
The log file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_ora_25383.trc has the following error:
KSIPC: ksipc_open: Failed to complete ksipc_open at process startup!! 
KSIPC: ksipc_open: ORA-27504: IPC error creating OSD context   

This occurs because, with the STIG Oracle Linux 6 rules deployed on the Oracle Database Appliance system, the RDS/RDS_TCP modules are not loaded (due to rule OL6-00-000126).

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:
  1. Edit the /etc/modprobe.d/modprobe.conf file.
  2. Comment out the following lines:
    # The RDS protocol is disabled  
    # install rds /bin/true
  3. Restart the nodes.
  4. Run the odacli update-server command again.
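The edit in steps 1 and 2 can be sketched with sed. This example operates on a temporary copy; on the appliance you would edit /etc/modprobe.d/modprobe.conf itself.

```shell
# Sketch: comment out the line that disables the RDS protocol.
tmp=$(mktemp)
printf 'install rds /bin/true\n' > "$tmp"       # line added per STIG rule OL6-00-000126
sed -i 's|^install rds|# install rds|' "$tmp"
result=$(cat "$tmp")
echo "$result"
rm -f "$tmp"
```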

This issue is tracked with Oracle bug 31881957.

Error when patching 11.2.0.4 Database homes to Oracle Database Appliance release 19.10

Patching of database homes of version 11.2.0.4.180717, 11.2.0.4.170814, or 11.2.0.4.180417 to version 11.2.0.4.210119 may fail.

Following are the scenarios when this error may occur:
  • When the DCS Agent version is 19.9, and you patch database homes from 11.2.0.4.180717, 11.2.0.4.170814, or 11.2.0.4.180417 to 11.2.0.4.201020 (the database home version released with Oracle Database Appliance release 19.9)
  • When the DCS Agent version is 19.10, and you patch database homes from 11.2.0.4.180717, 11.2.0.4.170814, or 11.2.0.4.180417 to 11.2.0.4.210119 (the database home version released with Oracle Database Appliance release 19.10)
  • When the DCS Agent version is 19.10, and you patch database homes from 11.2.0.4.180717, 11.2.0.4.170814, or 11.2.0.4.180417 to 11.2.0.4.200114 (the database home version released with Oracle Database Appliance release 19.6)

This error occurs only when patching Oracle Database homes of version 11.2.0.4.180717, 11.2.0.4.170814, or 11.2.0.4.180417 using the 19.10.0.0.0 version of the DCS Agent.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Patch your 11.2.0.4 Oracle Database home to any version earlier than 11.2.0.4.210119 (the version released with Oracle Database Appliance release 19.10) while the DCS Agent is of a version earlier than 19.10.0.0.0, and then update the DCS Agent to 19.10.

Note that once you patch the DCS Agent to 19.10.0.0.0, patching of these older 11.2.0.4 homes fails.

This issue is tracked with Oracle bug 32498178.

Error message displayed even when patching Oracle Database Appliance is successful

Although patching of Oracle Database Appliance was successful, an error message is displayed.

The following error is seen when running the odacli update-dcscomponents command:
# time odacli update-dcscomponents -v 19.10.0.0.0 
DCS-10008:Failed to update DCScomponents: 19.10.0.0.0 
Internal error while patching the DCS components : 
DCS-10231:Cannot proceed. Pre-checks for update-dcscomponents failed. Refer  
to /opt/oracle/dcs/log/-dcscomponentsPreCheckReport.log on node 1 for  
details.   

Hardware Models

All Oracle Database Appliance hardware models

Workaround

This is a timing issue with setting up the SSH equivalence.

Run the odacli update-dcscomponents command again and the operation completes successfully.

This issue is tracked with Oracle bug 32553519.

Error in updating storage when patching Oracle Database Appliance

When updating storage during patching of Oracle Database Appliance, an error is encountered.

The following error is displayed:
# odacli describe-job -i  765c5601-f4ad-44f0-a989-45a0b7432a0d 

Job details 
---------------------------------------------------------------- 
                     ID:  765c5601-f4ad-44f0-a989-45a0b7432a0d 
            Description:  Storage Firmware Patching 
                 Status:  Failure 
                Created:  February 24, 2021 8:15:21 AM PST 
                Message:  ZK Wait Timed out. ZK is Offline 

Task Name                     Start Time                          End Time                            Status 
---------------------------------------- ------------------------------------------------------------------
 
Storage Firmware Patching      February 24, 2021 8:18:06 AM PST     February 24, 2021 8:18:48 AM PST    Failure 
task:TaskSequential_140        February 24, 2021 8:18:06 AM PST     February 24, 2021 8:18:48 AM PST    Failure 
Applying Firmware Disk Patches February 24, 2021 8:18:28 AM PST     February 24, 2021 8:18:48 AM PST    Failure    

Hardware Models

Oracle Database Appliance X5-2 hardware models with InfiniBand

Workaround

Follow these steps:
  1. Check the private network (ibbond0) and ping private IPs from each node.
  2. If the private IPs are not ping-able, then restart the private network interfaces on both nodes and retry.
  3. Check the zookeeper status.
  4. On Oracle Database Appliance high-availability deployments, if the zookeeper status is not in the leader or follower mode, then continue to the next job.

This issue is tracked with Oracle bug 32550378.

Error in Oracle Grid Infrastructure upgrade

Oracle Grid Infrastructure upgrade fails, though the rootupgrade.sh script ran successfully.

The following messages are logged in the grid upgrade log file located under /opt/oracle/oak/log/<NODENAME>/patch/19.8.0.0.0/.
ERROR: The clusterware active state is UPGRADE_AV_UPDATED 
INFO: ** Refer to the release notes for more information ** 
INFO: ** and suggested corrective action                 ** 

This is because when the root upgrade scripts run on the last node, the active version is not set to the correct state.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:
  1. As root user, run the following command on the second node:
     /u01/app/19.0.0.0/grid/rootupgrade.sh -f 
  2. After the command completes, verify that the active version of the cluster is updated to UPGRADE FINAL.
    /u01/app/19.0.0.0/grid/bin/crsctl query crs activeversion -f 
    The cluster upgrade state is [UPGRADE FINAL] 
  3. Run the Oracle Database Appliance server patching process again to upgrade Oracle Grid Infrastructure.

This issue is tracked with Oracle bug 31546654.

Error when running ORAChk or updating the server or database home

When running Oracle ORAchk or the odacli create-prepatchreport, odacli update-server, or odacli update-dbhome commands, an error is encountered.

The following messages may be displayed:
- Table AUD$[FGA_LOG$] should use Automatic Segment Space Management 

Hardware Models

All Oracle Database Appliance hardware models

Workaround

  1. To verify the segment space management policy currently in use by the AUD$ and FGA_LOG$ tables, use the following SQL*Plus command:
    select t.table_name,ts.segment_space_management from dba_tables t,  
    dba_tablespaces ts where ts.tablespace_name = t.tablespace_name and  
    t.table_name in ('AUD$','FGA_LOG$'); 
  2. The output should be similar to the following:
    TABLE_NAME                     SEGMEN 
    ------------------------------ ------ 
    FGA_LOG$                       AUTO 
    AUD$                           AUTO  
    If one or both of the AUD$ or FGA_LOG$ tables return "MANUAL", use the  
    DBMS_AUDIT_MGMT package to move them to the SYSAUX tablespace: 
    
    BEGIN 
    DBMS_AUDIT_MGMT.set_audit_trail_location(audit_trail_type =>  
    DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,--this moves table AUD$  
    audit_trail_location_value => 'SYSAUX');   
    END;   
    
    BEGIN 
    DBMS_AUDIT_MGMT.set_audit_trail_location(audit_trail_type =>  
    DBMS_AUDIT_MGMT.AUDIT_TRAIL_FGA_STD,--this moves table FGA_LOG$  
    audit_trail_location_value => 'SYSAUX'); 
    END; 

This issue is tracked with Oracle bug 27856448.

Error in patching database homes

An error is encountered when patching database homes on databases that have Standard Edition High Availability enabled.

When running the command odacli update-dbhome -v release_number on database homes that have Standard Edition High Availability enabled, the following error is encountered:
WARNING::Failed to run the datapatch as db <db_name> is not in running state 

Hardware Models

All Oracle Database Appliance hardware models with High-Availability deployments

Workaround

Follow these steps:
  1. Locate the running node of the target database instance:
    srvctl status database -database dbUniqueName
    Or, relocate the single-instance database instance to the required node:
    odacli modify-database -g node_number (-th node_name) 
  2. On the running node, manually run the datapatch for non-CDB databases:
    dbhomeLocation/OPatch/datapatch
  3. For CDB databases, locate the PDB list using SQL*Plus.
    select name from v$containers where open_mode='READ WRITE';
    dbhomeLocation/OPatch/datapatch -pdbs pdb_names_found_in_previous_step_divided_by_comma
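Building the -pdbs argument from the query output in step 3 can be sketched as follows; the PDB names below are hypothetical examples.

```shell
# Sketch: join PDB names (one per line, as returned by the query) with commas.
pdb_list='PDB1
PDB2'
pdbs_arg=$(printf '%s\n' "$pdb_list" | paste -sd, -)
echo "$pdbs_arg"    # pass as: dbhomeLocation/OPatch/datapatch -pdbs $pdbs_arg
```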

This issue is tracked with Oracle bug 31654816.

Error in server patching

An error is encountered when patching the server.

When running the command odacli update-server -v release_number, the following error is encountered:
DCS-10001:Internal error encountered: patchmetadata for 19.6.0.0.0 missing  
target version for GI.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:
  1. Change the file ownership temporarily to the appropriate grid user for the osdbagrp binary in the grid_home/bin location. For example:
    $ chown -R grid:oinstall /u01/app/18.0.0.0/grid/bin/osdbagrp
  2. Run either the update-registry -n gihome or the update-registry -n system command.

This issue is tracked with Oracle bug 31125258.

Server status not set to Normal when patching

When patching Oracle Database Appliance, an error is encountered.

When patching the appliance, the odacli update-server command fails with the following error:

DCS-10001:Internal error encountered: Server upgrade state is not NORMAL node_name 

Hardware Models

All Oracle Database Appliance hardware models

Workaround

  1. Run the command:
    Grid_home/bin/cluvfy stage -post crsinst -collect cluster -gi_upgrade -n all
  2. Ignore the following two warnings:
    Verifying OCR Integrity ...WARNING
    PRVG-6017 : OCR backup is located in the same disk group "+DATA" as OCR.
    
    Verifying Single Client Access Name (SCAN) ...WARNING
    PRVG-11368 : A SCAN is recommended to resolve to "3" or more IP
  3. Run the command again until the output displays only the two warnings above. The Oracle Clusterware status should then be Normal again.

  4. You can verify the status with the command:
    Grid_home/bin/crsctl query crs activeversion -f

This issue is tracked with Oracle bug 30099090.

Error when patching to 12.1.0.2.190716 Bundle Patch

When patching Oracle Database release 12.1.0.2 to Oracle Database 12.1.0.2.190716 Bundle Patch, an error is encountered.

The ODACLI job displays the following error:

DCS-10001:Internal error encountered: Failed to run SQL script: datapatch script. 

The data patch log contains the entry "Prereq check failed, exiting without installing any patches.".

Hardware Models

All Oracle Database Appliance hardware models (bare metal deployments)

Workaround

Install the same patch again.

This issue is tracked with Oracle bugs 30026438 and 30155710.

Patching of M.2 drives not supported

Patching of M.2 drives (local disks SSDSCKJB48 and SSDSCKJB480G7) is not supported.

These drives are displayed when you run the odacli describe-component command. Patching is not supported for either of the two known versions, 0112 and 0121, of the M.2 disks. Patching of the LSI controller from version 13.00.00.00 to version 16.00.01.00 is also not supported. However, on some Oracle Database Appliance X8-2 models, the installed LSI controller version may be 16.00.01.00.

Hardware Models

Oracle Database Appliance bare metal deployments

Workaround

None

This issue is tracked with Oracle bug 30249232.

Error in patching Oracle Database Appliance

When applying the server patch for Oracle Database Appliance, an error is encountered.

Error Encountered When Patching Bare Metal Systems:

When patching the appliance on bare metal systems, the odacli update-server command fails with the following error:

Please stop TFA before server patching.

To resolve this issue, follow the steps described in the Workaround.

Error Encountered When Patching Virtualized Platform:

When patching the appliance on Virtualized Platform, patching fails with an error similar to the following:

INFO: Running prepatching on local node
WARNING: errors seen during prepatch on local node
ERROR: Unable to apply the patch 1  

Check the prepatch log file generated in the directory /opt/oracle/oak/log/hostname/patch/18.8.0.0.0. You can also view the prepatch log for the last run with the command ls -lrt prepatch_*.log. Check the last log file in the command output.

In the log file, search for entries similar to the following:

ERROR: date_time_stamp: TFA is running on one or more nodes.
WARNING: date_time_stamp: Shutdown TFA and then restart patching
INFO: date_time_stamp: Read the Release Notes for additional information. 

To resolve this issue, follow the steps described in the Workaround.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

On Oracle Database Appliance bare metal systems, do the following:
  1. Run tfactl stop on all the nodes in the cluster.
  2. Restart patching once Oracle TFA Collector has stopped on all nodes.
On Oracle Database Appliance Virtualized Platform, do the following:
  1. Run /etc/init.d/init.tfa stop on all the nodes in the cluster.
  2. Restart patching once Oracle TFA Collector has stopped on all nodes.
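The two procedures above differ only in the stop command. A small helper, sketched below, selects the right command for the deployment type; the function name and structure are illustrative, not part of Oracle tooling:

```shell
# Hypothetical helper: print the TFA stop command to run on each node,
# based on the deployment type (commands taken from the workaround above).
tfa_stop_cmd() {
  case "$1" in
    baremetal)   echo "tfactl stop" ;;
    virtualized) echo "/etc/init.d/init.tfa stop" ;;
    *)           return 1 ;;
  esac
}
```

Run the printed command on every node in the cluster, then restart patching once Oracle TFA Collector has stopped on all nodes.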

This issue is tracked with Oracle bug 30260318.

Known Issues When Deploying Oracle Database Appliance

Understand the known issues when provisioning or deploying Oracle Database Appliance.

Error in creating two DB systems

When creating two DB systems concurrently in two different Oracle ASM disk groups, an error is encountered.

When attempting to start the DB systems, the following error message is displayed:
CRS-2672: Attempting to start 'vm_name.kvm' on 'oda_server'
CRS-5017: The resource action "vm_name.kvm start" encountered the following
error:
CRS-29200: The libvirt virtualization library encountered the following
error:
Timed out during operation: cannot acquire state change lock (held by
monitor=remoteDispatchDomainCreate)
. For details refer to "(:CLSN00107:)" in
"/u01/app/grid/diag/crs/<oda_server>/crs/trace/crsd_orarootagent_root.trc".
CRS-2674: Start of 'vm_name.kvm' on 'oda_server' failed
CRS-2679: Attempting to clean 'vm_name.kvm' on 'oda_server'
CRS-2681: Clean of 'vm_name.kvm' on 'oda_server' succeeded
CRS-4000: Command Start failed, or completed with errors.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Do not create two DB systems concurrently. Instead, complete the creation of one DB system and then create the other.
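This serialization can be scripted by polling the first creation job until it completes. The sketch below assumes a wrapper command that prints the job status (for example, parsed from odacli describe-job output); the function name and polling interval are illustrative:

```shell
# Hypothetical sketch: run DB system creations one at a time by waiting for a
# status command to print "Success" before starting the next creation.
wait_for_success() {
  local tries=$1; shift
  local status
  while [ "$tries" -gt 0 ]; do
    status=$("$@")          # e.g. a wrapper around: odacli describe-job -i job_id
    [ "$status" = "Success" ] && return 0
    tries=$((tries - 1))
    sleep 1
  done
  return 1
}
```

With such a helper, the second odacli create-dbsystem command is issued only after the first creation job reports Success.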

This issue is tracked with Oracle bug 33275630.

Error in provisioning the appliance

When provisioning Oracle Database Appliance, an error is encountered if the /u01 directory does not have sufficient space.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Verify that the /u01 directory has sufficient space before you start provisioning the appliance.
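The space check can be scripted before provisioning. The sketch below is illustrative; the /u01 default and the 50 GB threshold are assumptions, not documented Oracle requirements:

```shell
# Hedged sketch: fail if a mount point has less free space than a threshold.
# Defaults (/u01, 50 GB) are assumptions; adjust them to your deployment.
check_free_space() {
  local dir=${1:-/u01} required_gb=${2:-50}
  local avail_kb
  avail_kb=$(df -Pk "$dir" | awk 'NR==2 {print $4}')
  [ "$avail_kb" -ge $((required_gb * 1024 * 1024)) ]
}
```

For example, check_free_space /u01 50 succeeds only when /u01 has at least 50 GB free.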

This issue is tracked with Oracle bug 33255007.

Error in registering a non-TDE database in DB Systems

When registering a non-TDE database in a DB system on Oracle Database Appliance, an error is encountered.

The following error is displayed:
DCS-10107:TDE wallet ewallet.p12 not found at location +DATA/DB_UNIQUENAME.
Please copy the wallet to proceed.   

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps to register a non-TDE database in a DB system:
  1. Log in as the oracle user and create a directory tde at +DATA/DB_UNIQUENAME/ using the following command:
    DB_HOME/bin/asmcmd --privilege sysdba mkdir +DATA/DB_UNIQUENAME/tde
  2. Connect to SQL*Plus and run the following:
    ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '+DATA/DB_UNIQUENAME/tde'
    IDENTIFIED BY "WelCome#_123";
    
    ADMINISTER KEY MANAGEMENT CREATE AUTO_LOGIN KEYSTORE FROM KEYSTORE
    '+DATA/DB_UNIQUENAME/tde' IDENTIFIED BY "WelCome#_123";
  3. Retry registering the database.
  4. As the oracle user, delete the empty TDE wallets and directory as follows:
    DB_HOME/bin/asmcmd --privilege sysdba rm 
    +DATA/DB_UNIQUENAME/tde/ewallet.p12
    
    DB_HOME/bin/asmcmd --privilege sysdba rm
    +DATA/DB_UNIQUENAME/tde/cwallet.sso
    
    DB_HOME/bin/asmcmd --privilege sysdba rm +DATA/DB_UNIQUENAME/tde/

This issue is tracked with Oracle bug 33305399.

Error in creating a DB system

The odacli create-dbsystem command fails due to errors.

The following error message is displayed:
DCS-10032:Resource of type 'Virtual Network' with name 'pubnet' is not found. 

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Restart the DCS agent. For high-availability systems, restart the DCS agent on both nodes.
systemctl restart initdcsagent 

This issue is tracked with Oracle bug 32740754.

Error in recovering a database

When recovering a database on Oracle Database Appliance, an error is encountered.

When you run the command odacli recover-database on a Standard Edition High Availability database, the following error message is displayed:
DCS-10001:Internal error encountered: Unable to get valid database node number to post recovery. 

Hardware Models

All Oracle Database Appliance high-availability hardware models

Workaround

Run the following commands:
srvctl config database -db db_name | grep "Configured nodes" | awk '{print $3}'
The output is nodeX,nodeY.
srvctl modify database -db db_name -node nodeX 
odacli recover-database 
srvctl stop database -db db_name 
srvctl modify database -db db_name -node nodeX,nodeY 
srvctl start database -db db_name 

This issue is tracked with Oracle bug 32928688.

Error in adding JBOD

When you add a second JBOD to your Oracle Database Appliance deployment on which a DB system is running, an error is encountered.

The following error message is displayed:
ORA-15333: disk is not visible on client instance

Hardware Models

All Oracle Database Appliance hardware models, bare metal and DB system deployments

Workaround

Shut down the DB system before adding the second JBOD.

This issue is tracked with Oracle bug 32586762.

Error in provisioning appliance after running cleanup.pl

Errors are encountered in provisioning the appliance after running cleanup.pl.

After running cleanup.pl, provisioning the appliance fails because of missing Oracle Grid Infrastructure image (IMGGI191100). The following error message is displayed:

DCS-10042:User oda-cliadmin cannot be authorized.

Hardware Models

All Oracle Database Appliance hardware models for bare metal deployments

Workaround

After running cleanup.pl, and before provisioning the appliance, update the repository as follows:

# odacli update-repository -f GI_clone_file_location

This issue is tracked with Oracle bug 32707387.

Error in registering a database

When registering a database on Oracle Database Appliance, an error is encountered.

In a DB system, if you create a database manually with the parameter use_large_pages=true set so that the SGA uses HugePages, then the database consumes HugePages. The command odacli register-database fails with the following error:
DCS-10045:Validation error encountered: Available Memory is less than SGA Size { Available : size_in_MB and SGA Size : size_in_MB }.   

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Start the database manually, disable the HugePages setting by setting the initialization parameter use_large_pages=false and restarting the database, and then register the database using the odacli register-database command.

This issue is tracked with Oracle bug 32847601.

Error in updating a database

When updating a database on Oracle Database Appliance, an error is encountered.

When you run the command odacli update-dbhome, the following error message is displayed:
PRGO-1069 :Internal error [# rhpmovedb.pl-isPatchUpg-1 #].. 

To confirm that the MMON process occupies the lock, connect to the target database which failed to patch, and run the command:

SELECT s.sid, p.spid, s.machine, s.program FROM v$session s, v$process p  
WHERE s.paddr = p.addr and s.sid = ( 
SELECT sid from v$lock WHERE id1= ( 
SELECT lockid FROM dbms_lock_allocated WHERE name = 'ORA$QP_CONTROL_LOCK' 
)); 

If s.program in the output is similar to the format oracle_user@host_box_name (MMON), then the error is caused by the MMON process. Apply the workaround to address this issue.

Hardware Models

All Oracle Database Appliance high-availability hardware models

Workaround

Run the following commands:
  1. Stop the MMON process:
    # ps -ef | grep MMON
    Locate the process ID of the ora_mmon process (not of the grep command itself) in the output, and stop it:
    # kill -9 mmon_process_id
  2. Manually run datapatch on target database:
    1. Locate the database home where the target database is running:
      odacli describe-database -in db_name
    2. Locate the database home location:
      odacli describe-dbhome -i DbHomeID_found_in_step_a
    3. On the running node of the target database:
      [root@node1 ~]# sudo su - oracle 
      Last login: Thu Jun  3 21:24:45 UTC 2021 
      [oracle@node1 ~]$ . oraenv 
      ORACLE_SID = [oracle] ? db_instance_name
      ORACLE_HOME = [/home/oracle] ? dbHome_location
    4. If the target database is a non-CDB database, then run the following:
      $ORACLE_HOME/OPatch/datapatch
    5. If the target database is a CDB database, then connect to SQL*Plus and run the following to find the PDB list:
      select name from v$containers where open_mode = 'READ WRITE';
    6. Exit SQL*Plus and run the following:
      $ORACLE_HOME/OPatch/datapatch -pdbs pdb_names_gathered_in_step_5_separated_by_commas
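The MMON lookup in step 1 can be made safer by filtering out the grep process itself. This sketch reads ps output on stdin and prints only the PID of an ora_mmon background process; the function name is illustrative:

```shell
# Hypothetical helper: print the PID of the first ora_mmon process found in
# 'ps -ef' output read from stdin; the [o] bracket keeps the pattern from
# matching a grep or awk command line that merely mentions mmon.
find_mmon_pid() {
  awk '/[o]ra_mmon/ {print $2; exit}'
}
```

For example: ps -ef | find_mmon_pid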

This issue is tracked with Oracle bug 32827353.

Error in running tfactl diagcollect command on remote node

When running the tfactl diagcollect command on Oracle Database Appliance, an error is encountered.

Hardware Models

All Oracle Database Appliance hardware models KVM and bare metal systems

Workaround

Prior to Oracle Autonomous Health Framework 21.2, if the certificates were generated on each node separately, then you must perform one of the following manual steps to fix this.
  • Run the following command on each node so that Oracle Trace File Analyzer generates new certificates and distributes to the other node:
    tfactl syncnodes -remove -local
  • Connect using SSH with root credentials on one node and run the following.
    tfactl syncnodes

This issue is tracked with Oracle bug 32921859.

Error in running tfactl diagcollect command

When running the tfactl diagcollect command on Oracle Database Appliance, an error is encountered.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Run the command on both nodes separately with the -node local option:
tfactl diagcollect -node local

This issue is tracked with Oracle bug 32940358.

Error when upgrading database from 11.2.0.4 to 12.1 or 12.2

When upgrading databases from 11.2.0.4 to 12.1 or 12.2, an error is encountered.

Database upgrade can cause the following warning in the UpgradeResults.html file, when upgrading database from 11.2.0.4 to 12.1 or 12.2:
Database is using a newer time zone file version than the Oracle home 

Hardware Models

All Oracle Database Appliance bare metal deployments

Workaround

  1. Refer to the Database Upgrade Guide for manual steps for fixing the time zone.
  2. After manually completing the database upgrade, run the following command to update DCS metadata:
    /opt/oracle/dcs/bin/odacli update-registry -n db -f

This issue is tracked with Oracle bug 31125985.

Error when upgrading 12.1 single-instance database

When upgrading 12.1 single-instance database, a job failure error is encountered.

Hardware Models

All Oracle Database Appliance hardware models bare metal deployments

Workaround

Use the following workaround:
  1. Before upgrading the 12.1 single-instance database, run the following PL/SQL command to change the local_listener to an empty string:
    ALTER SYSTEM SET LOCAL_LISTENER='';
  2. After upgrading the 12.1 single-instance database successfully, run the following PL/SQL command to change the local_listener to the desired value:
    ALTER SYSTEM SET LOCAL_LISTENER='-oracle-none-'; 

This issue is tracked with Oracle bugs 31202775 and 31214657.

Failure in creating RECO disk group during provisioning

When provisioning Oracle Database Appliance X8-2-HA with High Performance configuration containing default storage and expansion shelf, creation of RECO disk group fails.

Hardware Models

All Oracle Database Appliance X8-2-HA with High Performance configuration

Workaround

  1. Power off storage expansion shelf.
  2. Reboot both nodes.
  3. Proceed with provisioning the default storage shelf (first JBOD).
  4. After the system is successfully provisioned with default storage shelf (first JBOD), check that oakd is running on both nodes in foreground mode.
     # ps -aef | grep oakd
  5. Check that all first JBOD disks have the status online, good in oakd, and CACHED in Oracle ASM.
  6. Power on the storage expansion shelf (second JBOD), wait for a few minutes for the operating system and other subsystems to recognize it.
  7. Run the following command from the master node to add the storage expansion shelf disks (two JBOD setup) to oakd and Oracle ASM.
    #odaadmcli show ismaster 
          OAKD is in Master Mode 
    
          # odaadmcli expand storage -ndisk 24 -enclosure 1 
           Skipping precheck for enclosure '1'... 
           Check the progress of expansion of storage by executing 'odaadmcli  
    show disk' 
           Waiting for expansion to finish ... 
          #  
  8. Check that the storage expansion shelf disks (two JBOD setup) are added to oakd and Oracle ASM.

Replace odaadmcli with oakcli commands on Oracle Database Appliance Virtualized Platform in the procedure.

For more information, see the chapter Managing Storage in the Oracle Database Appliance X8-2 Deployment Guide.

This issue is tracked with Oracle bug 30839054.

Simultaneous creation of two Oracle ACFS Databases fails

If you try to create two Oracle ACFS databases on a system where there is no database or database storage already created, then database creation fails for one of the databases with an error.

DCS-10001:Internal error encountered: Fail to run command Failed to create  
volume. 

Hardware Models

All Oracle Database Appliance bare metal deployments

Workaround

Manually delete the DATA volume (and REDO volume, in case of Oracle Database Appliance X8-2) from the system.

For High Performance configuration, run the following commands:
su - GRID_USER
export ORACLE_SID=+ASM1    # use +ASM2 on the second node
export ORACLE_HOME=GRID_HOME
GRID_HOME/bin/asmcmd --nocp voldelete -G Data datdbname
For Oracle Database Appliance X8-2 High Performance configuration, remove the REDO volume as follows:
su - GRID_USER
export ORACLE_SID=+ASM1    # use +ASM2 on the second node
export ORACLE_HOME=GRID_HOME
GRID_HOME/bin/asmcmd --nocp voldelete -G Reco rdodbname
For High Capacity configuration, run the following commands:
su - GRID_USER
export ORACLE_SID=+ASM1    # use +ASM2 on the second node
export ORACLE_HOME=GRID_HOME
GRID_HOME/bin/asmcmd --nocp voldelete -G Flash datdbname (if the volume exists in the FLASH disk group)
GRID_HOME/bin/asmcmd --nocp voldelete -G Data datdbname (if the volume exists in the DATA disk group)
For Oracle Database Appliance X8-2 High Capacity configuration, remove the REDO volume as follows:
su - GRID_USER
export ORACLE_SID=+ASM1    # use +ASM2 on the second node
export ORACLE_HOME=GRID_HOME
GRID_HOME/bin/asmcmd --nocp voldelete -G Flash rdodbname

This issue is tracked with Oracle bug 30750497.

Error encountered after running cleanup.pl

Errors encountered in running odacli commands after running cleanup.pl.

After running cleanup.pl, when you try to use odacli commands, the following error is encountered:

DCS-10042:User oda-cliadmin cannot be authorized.

Hardware Models

All Oracle Database Appliance hardware models for bare metal deployments

Workaround

Run the following commands to set up the credentials for the user oda-cliadmin on the agent wallet:

# rm -rf /opt/oracle/dcs/conf/.authconfig  
# /opt/oracle/dcs/bin/setupAgentAuth.sh 

This issue is tracked with Oracle bug 29038717.

Accelerator volume for data is not created on flash storage

The accelerator volume for data is not created on flash storage, for databases created during provisioning of appliance.

Hardware Models

Oracle Database Appliance high capacity environments with HDD disks

Workaround

Do not create the database when provisioning the appliance. Provisioning still creates all required disk groups, including flash. After provisioning the appliance, create the database; the accelerator volume is then created.

This issue is tracked with Oracle bug 28836461.

Errors in clone database operation

Clone database operation fails due to errors.

If the source database is single-instance or Oracle RAC One Node, or running on the remote node, the clone database operation fails, because the paths are not created correctly in the control file.

The clone database operation may also fail with errors if the clone operation is started within 60 minutes of the source database creation.

Hardware Models

All Oracle Database Appliance high-availability hardware models for bare metal deployments

Workaround

Create the clone database from the source database instance that is running on the same node from which the clone database creation is triggered.

For Oracle Database 12c and later, synchronize the source database before the clone operation, by running the command:
SQL> alter system checkpoint;

This issue is tracked with Oracle bugs 29002563, 29002004, 29001906, 29001855, 29001631, 28995153, 28986643, 30309971, and 30228362.

Clone database operation fails

For Oracle Database release 12.1 databases, the database clone creation may fail because the default compatible version from Oracle binaries was set to 12.0.0.0.0.

Hardware Models

All Oracle Database Appliance high-availability hardware models for bare metal deployments

Workaround

Set the compatible value to that of the source database. Follow these steps:
  1. Change the parameter value.
    SQL> ALTER SYSTEM SET COMPATIBLE = '12.1.0.2.0' SCOPE=SPFILE;
  2. Shut down the database.
    SQL> SHUTDOWN IMMEDIATE
  3. Start the database.
    SQL> STARTUP
  4. Verify the parameter for the new value.
    SQL> SELECT name, value, description FROM v$parameter WHERE name ='compatible';

This issue is tracked with Oracle bug 30309914.

Known Issues When Managing Oracle Database Appliance

Understand the known issues when managing or administering Oracle Database Appliance.

Error in number of cores enabled after running odacli update-cpucore

When running the odacli update-cpucore -c value command on an Oracle Database Appliance Small hardware model, only half the specified CPU cores are enabled.

No error message is displayed when you run the command. However, after the appliance restarts, only half the number of cores specified in the command are enabled at a BIOS level.

Hardware Models

All Oracle Database Appliance X8-2S, X7-2S, X6-2S hardware models

Workaround

Perform the following steps only after you run the odacli update-cpucore -c value command and observe that only half the specified CPU cores are enabled. You can then manually set the desired number of cores to be enabled in the BIOS:
  1. As the root user, run the ubiosconfig command to retrieve the configuration. Save the unmodified XML file for reference later.
    # /usr/sbin/ubiosconfig export all -f --expert -x /tmp/bios.xml
  2. Edit the file /tmp/bios.xml and change the value between the tags <Active_Processor_Cores>enabled_core</Active_Processor_Cores> to the value specified in the odacli update-cpucore command. For example, if you specified 8 cores, you will see a value of 4; change it to 8.
    Before:
    <!-- Active Processor Cores -->
    <!-- Description: Number of cores to enable in each processor package. -->
    <!-- Possible Values: "All", "1", "2", "3", "4", "5", "6", "7", "8", "9",
    "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20", "21", "22",
    "23", "24", "25", "26", "27" -->
    <Active_Processor_Cores>4</Active_Processor_Cores>
    
    After:
    <!-- Active Processor Cores -->
    <!-- Description: Number of cores to enable in each processor package. -->
    <!-- Possible Values: "All", "1", "2", "3", "4", "5", "6", "7", "8", "9",
    "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20", "21", "22",
    "23", "24", "25", "26", "27" -->
    <Active_Processor_Cores>8</Active_Processor_Cores>
  3. As the root user, run the ubiosconfig command to save the configuration.
    # /usr/sbin/ubiosconfig import config -f --expert -y -x /tmp/bios.xml
  4. Reboot the node.
  5. Confirm the desired number of CPU cores is enabled:
    # /usr/bin/lscpu | grep Core
    Core(s) per socket:    8
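Step 2 can also be scripted. The sed command below is an illustrative sketch, not part of the documented procedure; it rewrites the Active_Processor_Cores value in the exported XML file:

```shell
# Hypothetical sketch: set the Active_Processor_Cores value in the exported
# BIOS XML file (file path and function name are illustrative).
set_active_cores() {
  local file=$1 cores=$2
  sed -i.bak "s|<Active_Processor_Cores>[^<]*</Active_Processor_Cores>|<Active_Processor_Cores>${cores}</Active_Processor_Cores>|" "$file"
}
```

For example, set_active_cores /tmp/bios.xml 8, then import the file with ubiosconfig as in step 3.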

This issue is tracked with Oracle bug 33400434.

Error in backing up a database

When backing up a database on Oracle Database Appliance, an error is encountered.

After successful failover, running the command odacli create-backup on new primary database fails with the following message:
DCS-10001:Internal error encountered: Unable to get the
rman command status commandid:xxx
output:STATUS
-------------------------
[COMPLETED WITH WARNINGS] error:.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:
  1. On the new primary database, connect to RMAN as oracle and edit the archivelog deletion policy.
    rman target /
    RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO 'SBT_TAPE';
  2. On the new primary database, as the root user, take a backup:
    odacli create-backup -in db_name -bt backup_type

This issue is tracked with Oracle bug 33181168.

Error in backing up a 21.3 database

When attaching a backup configuration to a release 21.3 database on Oracle Database Appliance, an error is encountered.

The following message is displayed:
DCS-10089:Database DB_NAME is in an invalid state 'NOT_RUNNING'.Database DB_NAME must be running

Hardware Models

All Oracle Database Appliance hardware models

Workaround

None.

This issue is tracked with Oracle bug 33214654.

OpenSSH command vulnerability

OpenSSH command vulnerability issue detected in Qualys and Nessus scans.

Qualys and Nessus both report a medium severity issue OPENSSH COMMAND INJECTION VULNERABILITY. Refer to CVE-2020-15778 for details.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

None.

This issue is tracked with Oracle bug 33217970.

Error in bare metal CPU pool association

After patching to Oracle Database Appliance release 19.12, a bare metal CPU pool that is not NUMA-allocated can be associated with a database.

This is an error because the bare metal CPU pool does not follow NUMA allocation, so the bare metal database does not run on the appropriate physical CPUs.

Hardware Models

All Oracle Database Appliance hardware models that support release 19.12 and were patched from an earlier 19.x release. Appliances newly provisioned with release 19.12 are not affected, because new bare metal CPU pools are NUMA-allocated.

Workaround

Run the odacli remap-cpupools command and restart the bare metal database instances.

This issue is tracked with Oracle bug 31907677.

Error in detaching vnetwork from DB system

When detaching a vnetwork from DB system on Oracle Database Appliance, an error is encountered.

When detaching a vnetwork from DB system, the job does not verify whether the network is currently associated to the database.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:
  1. Before you detach the vnetwork from the DB system, dissociate the network from the database within the DB system:
    odacli modify-database -in db_name -dn network_name
  2. On the bare metal system, detach the vnetwork from the DB system:
    odacli modify-dbsystem -n dbsystem_name -dvn network_name

This issue is tracked with Oracle bug 33284771.

AHF permissions error

When running the OERR tool in the AHF_HOME on Oracle Database Appliance, an error is encountered.

Running the oerr tool from the AHF_HOME bin directory fails with the following message:
cd /opt/oracle/dcs/oracle.ahf/bin
../oerr
-bash: ./oerr: Permission denied

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Run the tool with sh, as follows:
cd /opt/oracle/dcs/oracle.ahf/bin
sh oerr
Use AHF XXXX format... Exiting

This issue is tracked with Oracle bug 33293560.

Error in cleaning up a deployment

When cleaning up an Oracle Database Appliance deployment, an error is encountered.

During cleanup, shutdown of Oracle Clusterware fails because the NFS export service uses the Oracle ACFS-based clones repository.

Hardware Models

All Oracle Database Appliance hardware models with DB systems

Workaround

Follow these steps:
  1. Stop the NFS service on both nodes:
    service nfs stop
  2. Clean up the bare metal system. See the Oracle Database Appliance Deployment and User's Guide for your hardware model for the steps.

This issue is tracked with Oracle bug 33289742.

Error in restoring a database on DB system

When restoring a database on Oracle Database Appliance, an error is encountered.

The iRestore operation of a database in a DB system fails when creating the directory /opt/oracle/dcs/localstore/objectstore/opc_pfile/DBID.
The following error message is displayed:
/opt/oracle/dcs/bin/odacli irestore-database -r 
/u01/backup_report_ohMp_irestore.json -u tgtohMp -on ohMp_aaa -bp -c 4 -t -ro
standby -dh f43e12cf-3353-4227-9af9-7cda00a97fc8
Enter SYS user password:
Retype SYS user password:
Enter TDE wallet password:
Enter RMAN backup encryption password:
Do you want to provide another RMAN backup encryption password? [y/n]
(default 'n'): n
DCS-10001:Internal error encountered: Failed to create directory during
irestore: /opt/oracle/dcs/localstore/objectstore/opc_pfile/1559793085.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Before running the odacli irestore-database command, run the following commands as the root user:
mkdir -p /opt/oracle/dcs/localstore/objectstore
chown oracle_user:oinstall_group /opt/oracle/dcs/localstore/objectstore/
Here, oracle_user is the user with the oracleUser role and oinstall_group is the group with the oinstall role in the prov.json file for the DB VM. For example:
chown zoracle:zoinstall /opt/oracle/dcs/localstore/objectstore/

This issue is tracked with Oracle bug 33232650.

Error in TDE wallet management

When changing the TDE wallet password or rekeying the TDE wallet of a database which has TDE Wallet Management set to the value EXTERNAL, an error is encountered.

The following message is displayed:
DCS-10089:Database DB_NAME is in an invalid state 'NOT_RUNNING'.Database DB_NAME must be running

Hardware Models

All Oracle Database Appliance hardware models

Workaround

None. Operations such as changing the TDE wallet password or rekeying the TDE wallet are not supported on a database that has TDE Wallet Management set to the value EXTERNAL.

This issue is tracked with Oracle bug 33278653.

Error in reinstate operation on Oracle Data Guard

When running the command odacli reinstate-dataguard on Oracle Data Guard an error is encountered.

The following errors are reported in dcs-agent.log:
DCS-10001:Internal error encountered: Unable to reinstate Dg.
Further in the log:
ORA-12514: TNS:listener does not currently know of service requested

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Ensure that the database you are reinstating is started in MOUNT mode.

To start the database in MOUNT mode, run this command:
srvctl start database -d db-unique-name -o mount

After the command completes successfully, rerun the odacli reinstate-dataguard command. If the database is already in MOUNT mode, this can be a temporary error. Check the Oracle Data Guard status again a few minutes later with odacli describe-dataguardstatus or odacli list-dataguardstatus, or check with DGMGRL> SHOW CONFIGURATION; to see whether the reinstatement was successful.

This issue is tracked with Oracle bug 32367676.

Error in starting a database from a bare metal CPU pool

When starting a database after patching to Oracle Database Appliance release 19.10, an error is encountered.

After patching to Oracle Database Appliance release 19.10, the database using bare metal CPU pool fails to start after the system restarts. The service cgconfig.service is down.
# systemctl status cgconfig.service 
cgconfig.service - Control Group configuration service 
   Loaded: loaded (/usr/lib/systemd/system/cgconfig.service; disabled; vendor  
preset: disabled) 
   Active: inactive (dead) 

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:
  1. Check the cgconfig.service status. If the status is disabled or inactive, then continue.
    # systemctl status cgconfig.service 
    cgconfig.service - Control Group configuration service 
       Loaded: loaded (/usr/lib/systemd/system/cgconfig.service; disabled; vendor  
    preset: disabled) 
       Active: inactive (dead) 
  2. Start cgconfig.service:
    # systemctl start cgconfig.service 
  3. Enable cgconfig.service:
    # systemctl enable cgconfig.service 
    Created symlink from  
    /etc/systemd/system/sysinit.target.wants/cgconfig.service to  
    /usr/lib/systemd/system/cgconfig.service. 
  4. Check cgconfig.service status:
    # systemctl status cgconfig.service 
    cgconfig.service - Control Group configuration service 
       Loaded: loaded (/usr/lib/systemd/system/cgconfig.service; enabled; vendor  
    preset: disabled) 
       Active: active (exited) since Mon 2021-02-22 23:03:34 CST; 3min 40s ago 
     Main PID: 16594 (code=exited, status=0/SUCCESS) 
  5. Restart the failed database.

This issue is tracked with Oracle bug 31907677.

Error in restoring a database

When restoring a database on Oracle Database Appliance, an error is encountered.

iRestore operation fails when specifying a wrong backup location which does not point to the parent directory of the source database backup.

This is because there are multiple database IDs in the wrong location, leading to failure in RMAN.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Do not specify the backup location, or provide the correct backup location pointing to the parent directory of the source database backup.

This issue is tracked with Oracle bug 31907677.

Error in running concurrent database or database home creation jobs

When running concurrent database or database home creation jobs, an error is encountered.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Do not run concurrent database or database home creation jobs.

This issue is tracked with Oracle bug 32376885.

Error in restoring a database in a DB system

When restoring a database in a DB system on Oracle Database Appliance, an error is encountered.

Manual restore of the database to the DB system fails with the following error:
/u01/app/oracle/product/19.0.0.0/dbhome_1/bin/orapwd
file='+DATA/brtest/orapwdbrtest' password=xxxxxx entries=5
dbuniquename="BRTEST" force=y
OPW-00014: Could not delete password file +DATA/brtest/orapwdbrtest.
ORA-15056: additional error message
ORA-06512: at line 4
ORA-15260: permission denied on ASM disk group
ORA-06512: at "SYS.X$DBMS_DISKGROUP", line 533
ORA-06512: at line 2
The odacli delete-dbsystem command did not completely delete some of the Oracle ASM files that belonged to the deleted database (in this example, the password file). This can cause an error when trying to restore a database using the same name.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Run the asmcmd command from the Oracle Database Appliance host to manually delete the files that belong to the deleted database. See the Oracle Automatic Storage Management Administrator's Guide for the asmcmd commands. Make sure you verify the database name before deleting the files.

This issue is tracked with Oracle bug 32931078.

Directories not deleted on dbsystem

After running the command odacli delete-dbsystem --force -n dbsystem_name, certain empty non-Oracle Managed Files (OMF) directories under +diskgroup/dbuniquename are not deleted.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Run the asmcmd rm command on the directory to delete it manually.

This issue is tracked with Oracle bug 32806915.

Error in iRestore operation

When restoring a database from NFS backup location on Oracle Database Appliance, an error is encountered.

The following error message is displayed:
DCS-10001:Internal error encountered: Failed to run Rman Script :
/tmp/dcsfiles/duplicateRman2021-05-25_06-03-50.0840547.script. Please refer
log at location :
/u01/app/oracle/diag/rdbms/mydb/mydb/scaoda8s002/rman/bkup/rman_duplicate/2021
-05-25/rman_duplicate_2021-05-25_06-03-50.0864.log.Duplicate command
execution failed.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

This issue occurs if NFS is configured so that the user ID of the oracle user and the group ID of the asmadmin group do not match between the primary and backup systems (mac1 and mac2, respectively, in this example). If you must perform an iRestore from the NFS backup despite the mismatch, then ensure that the user or group of the oracle binary on mac2 can at least read the backup files on mac1 in the NFS backup location NFS_backup_location/orabackups/cluster_name/database/DBID/DbUniqueName/db.

You can find the user and group of the oracle binary by running the ls -ltr command on the oracle binary present at <DBHOME>/bin.

In the following example, the user and group of the oracle binary are oracle and asmadmin respectively.
[root@****** bin]# ls -ltr /u01/app/oracle/product/19.0.0.0/dbhome_3/bin/oracle 
-rwsr-s--x 1 oracle asmadmin 448749536 *** 25 06:03 /u01/app/oracle/product/19.0.0.0/dbhome_3/bin/oracle 
If you cannot match the user or group of the oracle binary on mac2, then at least 'read' permission must be granted to 'others' (that is, a mode of **4, such as 644) on all the NFS backup files on mac1. For example:
[root@mac1 bin]# ls -ltr /scratch2/orabackups/scaoda8s002-c/database/2987837625/mydb/db 
-rwxr--r-- 1 oracle asmadmin 1097728 Jun 3 10:55 auto_cf_DBSE3_2116871228_0100fgpa_1_1_1_20210603_1074250538 
-rwxr--r-- 1 oracle asmadmin 1097728 Jun 3 10:55 c-2116871228-20210603-00
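The permission change above can be scripted. This is a sketch only: it demonstrates adding the world-read bit (640 to 644) on stand-in files created in a temporary directory; on a real system, point BACKUP_DIR at the actual NFS backup location on mac1 instead.

```shell
#!/bin/sh
# Sketch: demonstrate the permission change the workaround requires,
# using a stand-in file in a temporary demo directory. On a real system,
# set BACKUP_DIR to the NFS backup location on mac1.
BACKUP_DIR=$(mktemp -d)
touch "$BACKUP_DIR/c-2116871228-20210603-00"
chmod 640 "$BACKUP_DIR/c-2116871228-20210603-00"   # not readable by "others"

# Grant read permission to "others" (640 -> 644) on every backup file.
chmod o+r "$BACKUP_DIR"/*
ls -l "$BACKUP_DIR"
```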

This issue is tracked with Oracle bug 32422681.

Error in iRestore operation on Standard Edition Database

When restoring a Standard Edition Database on Oracle Database Appliance, an error is encountered.

The following error message is displayed:
DCS-10001:Internal error encountered: Failed to run sql in method : runRmanDuplicateDbFromDiskBackup.Unable to startup instance in nomount mode as output contains ora-
The /opt/oracle/dcs/log/dcs-agent.log contains the following entries:
ORACLE instance shut down.
ORA-00371: not enough shared pool memory, should be at least 1141769669 bytes

This issue occurs only if more than 8 CPUs are online on the appliance.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Apply the BLR corresponding to bug 32961939 and retry the operation.

This issue is tracked with Oracle bug 32957033.

Error in restoring a standby database for 11.2.0.4 database

When performing an iRestore operation on a standby database of version 11.2.0.4, an error is encountered.

iRestore to standby may fail for database of version 11.2.0.4 if the standby database control file checkpoint is more recent than duplication point-in-time.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:
  1. After taking backup and before performing the iRestore operation, delete control file autobackups in the directory shown as attribute backupLocation in the backup report:
    c-3737675288-20210211-04 
    c-3737675288-20210211-05 
    c-3737675288-20210211-06 
    c-3737675288-20210212-00 
    c-3737675288-20210212-01
  2. Perform the database iRestore operation.
  3. After successfully performing the iRestore operation, create a backup of the source database.

This issue is tracked with Oracle bug 32473071.

Error in deleting a standby database

When deleting a standby database, an error is encountered.

When you iRestore a database as a standby database, delete it, and then iRestore the same standby database with the same database unique name, the following error is displayed:
DCS-10001:Internal error encountered: Failed to run the asm command: 
[/u01/app/19.0.0.0/grid/bin/asmcmd, --nocp, rm, -rf, RECO/ABCDEU] 
Error:ORA-29261: bad argument 
ORA-06512: at line 4 
ORA-15178: directory 'ABCDEU' is not empty; cannot drop this directory 
ORA-15260: permission denied on ASM disk group 
ORA-06512: at "SYS.X$DBMS_DISKGROUP", line 666 
ORA-06512: at line 2 (DBD ERROR: OCIStmtExecute). 

Verify the status of the job with the odacli list-jobs command.

Hardware Models

All Oracle Database Appliance hardware models with Oracle Active Data Guard configuration

Workaround

Run any of the following workarounds:
  • After deleting the standby database and before recreating the same standby database, perform the following steps:
    1. Log in as the oracle user:
      su - oracle 
    2. Set the environment:
      . oraenv 
      ORACLE_SID = null 
      ORACLE_HOME = dbhome_path (such as /u01/app/oracle/product/19.0.0.0/dbhome_1) 
    3. Change to the bin directory of the Oracle home:
      cd dbhome_path/bin 
    4. Delete the leftover Oracle ASM directories:
      asmcmd --privilege sysdba rm -rf +RECO/DBUNIQUENAME/ 
      asmcmd --privilege sysdba rm -rf +DATA/DBUNIQUENAME/arc10/ 
      asmcmd --privilege sysdba rm -rf +DATA/DBUNIQUENAME/PASSWORD/ 
  • Recreate the standby database with a different database unique name.

This issue is tracked with Oracle bug 32871772.

Error in Oracle Data Guard failover operation for 18.14 database

When running the odacli failover-dataguard command on a database of version 18.14, an error is encountered.

The following error message is displayed:
DCS-10001:Internal error encountered: Unable to precheckFailoverDg11g Dg.
The error message can be viewed in the DCS agent log:
select DATABASE_ROLE, FORCE_LOGGING, FLASHBACK_ON from v$database 
ERROR at line 1: 
ORA-00600: internal error code, arguments: [kcbgtcr_17], [], [], [], [], [], 
[], [], [], [], [], [] 

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:
  1. Run the following DGMGRL statements on the system with the database to fail over to:
    DGMGRL> SHOW CONFIGURATION; 
    DGMGRL> VALIDATE DATABASE '<DB_UNIQUE_NAME_to_failover_to>'; 
    DGMGRL> FAILOVER TO '<DB_UNIQUE_NAME_to_failover_to>'; 
    DGMGRL> SHOW CONFIGURATION; 
  2. After failover is successful, run the odacli describe-dataguardstatus -i id command several times to update the DCS metadata.

This issue is tracked with Oracle bug 32727379.

Error in Oracle Active Data Guard operations

When performing switchover, failover, and reinstate operations on Oracle Active Data Guard on Oracle Database Appliance, an error is encountered.

When performing switchover, failover, and reinstate operations on Oracle Active Data Guard, upgrading the primary database may fail at the Database Upgrade step with the following error:
PRCZ-2103 : Failed to execute command  
"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/bin/dbua" on node  
"node1" as user "oracle". Detailed error:  
Logs directory:  
/u01/app/odaorabase/oracle/cfgtoollogs/dbua/upgrade2021-05-06_01-31-16PM 
The log contains the following message:
SEVERE: May 08, 2021 6:50:24 PM oracle.assistants.dbua.prereq.PrereqChecker  
logPrereqResults  
SEVERE: Starting with Oracle Database 11.2, setting JOB_QUEUE_PROCESSES=0  
will disable job execution via DBMS_JOBS and DBMS_SCHEDULER. FIXABLE: MANUAL  
Database: ptdkjqt  
Cause: The database has JOB_QUEUE_PROCESSES=0.  
Action: Set the value of JOB_QUEUE_PROCESSES to a non-zero value, or remove  
the setting entirely and accept the Oracle default.  

Hardware Models

All Oracle Database Appliance hardware models with Oracle Active Data Guard configuration

Workaround

Follow these steps:

  1. Use SQL*Plus to access the database and run the following command:
    alter system set JOB_QUEUE_PROCESSES=1000;
  2. Retry the upgrade command.

This issue is tracked with Oracle bug 32856214.

Error in the enable apply process after upgrading databases

When running the enable apply process after upgrading databases in an Oracle Data Guard deployment, an error is encountered.

The following error message is displayed:
Error: ORA-16664: unable to receive the result from a member

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:
  1. Restart the standby database in upgrade mode:
    srvctl stop database -d <db_unique_name> 
    Run the SQL*Plus command: STARTUP UPGRADE; 
  2. Continue the enable apply process and wait for the log apply process to refresh.
  3. After some time, check the Data Guard status with the DGMGRL command:
    SHOW CONFIGURATION; 

This issue is tracked with Oracle bug 32864100.

Error in configuring Oracle Data Guard with cloned primary database

When configuring Oracle Data Guard on Oracle Database Appliance, an error is encountered.

When configuring Oracle Data Guard with cloned primary database, the odacli configure-dataguard command fails at step Configure Primary database (Primary site) with the following error:
DCS-10001: FAILED TO CREATE BROKER CONFIG FILE DIRECTORY

Verify the status of the job with the odacli list-jobs command.

Hardware Models

All Oracle Database Appliance hardware models with Oracle Active Data Guard configuration

Workaround

Follow these steps:

  1. On the system with the cloned primary database, run the following commands:
    mkdir /u02/app/oracle/oradata/dbUniqueName
    chown oracle:oinstall /u02/app/oracle/oradata/dbUniqueName 
  2. Run the odacli configure-dataguard command.

This issue is tracked with Oracle bug 32906493.

Error in creating Oracle Data Guard status

When configuring Oracle Active Data Guard on Oracle Database Appliance, an error is encountered.

When configuring Oracle Data Guard, the odacli configure-dataguard command fails at step NewDgconfig with the following error on the standby system:
ORA-16665: TIME OUT WAITING FOR THE RESULT FROM A MEMBER

Verify the status of the job with the odacli list-jobs command.

Hardware Models

All Oracle Database Appliance hardware models with Oracle Data Guard configuration

Workaround

Follow these steps:

  1. On the standby system, run the following:
    export DEMODE=true; 
    odacli create-dataguardstatus -i dbid -n dataguardstatus_id_on_primary -r configdg.json 
    export DEMODE=false; 
Example configdg.json file for a single-node system:
{
  "name": "test1_test7",
  "protectionMode": "MAX_PERFORMANCE",
  "replicationGroups": [
    {
      "sourceEndPoints": [
        {
          "endpointType": "PRIMARY",
          "hostName": "test_domain1",
          "listenerPort": 1521,
          "databaseUniqueName": "test1",
          "serviceName": "test",
          "sysPassword": "***",
          "ipAddress": "test_IPaddress"
        }
      ],
      "targetEndPoints": [
        {
          "endpointType": "STANDBY",
          "hostName": "test_domain2",
          "listenerPort": 1521,
          "databaseUniqueName": "test7",
          "serviceName": "test",
          "sysPassword": "***",
          "ipAddress": "test_IPaddress3"
        }
      ],
      "transportType": "ASYNC"
    }
  ]
}

This issue is tracked with Oracle bug 32719173.

Error in registering a database

When registering a single instance database on Oracle Database Appliance, if the RAC option is specified in the odacli register-database command, an error is encountered.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Create a single-instance database using Oracle Database Configuration Assistant (DBCA) and then register the database using the odacli register-database command with the RAC option.

This issue is tracked with Oracle bug 32853078.

Error in Reinstating Oracle Data Guard

When reinstating Oracle Data Guard on Oracle Database Appliance, an error is encountered.

The odacli reinstate-dataguard command fails with the following error:
Unable to reinstate Dg. Reinstate job was executed within 24hrs after failover job.  

The dcs-agent.log file has the following error entry:

DGMGRL> Reinstating database "xxxx", 
 please wait... 
Oracle Clusterware is restarting database "xxxx" ... 
Connected to "xxxx" 
Continuing to reinstate database "xxxx" ... 
Error: ORA-16653: failed to reinstate database 

Hardware Models

All Oracle Database Appliance hardware models with Oracle Data Guard configuration

Workaround

Follow these steps:

  1. On the primary machine, get the standby_became_primary_scn:
    SQL> select standby_became_primary_scn from v$database; 
    STANDBY_BECAME_PRIMARY_SCN 
    -------------------------- 
              3522449 
  2. On the old primary database, flashback to this SCN with RMAN with the backup encryption password:
    RMAN> set decryption identified by 'rman_backup_password' ; 
    executing command: SET decryption 
    RMAN> FLASHBACK DATABASE TO SCN 3522449 ; 
    ... 
    Finished flashback at 24-SEP-20 
    RMAN> exit 
  3. On the new primary machine, run the odacli reinstate-dataguard command.

This issue is tracked with Oracle bug 31884506.

Failure in Reinstating Oracle Data Guard

When reinstating Oracle Data Guard on Oracle Database Appliance, an error is encountered.

The odacli reinstate-dataguard command fails with the following error:
Message:   
DCS-10001:Internal error encountered: Unable to reinstate Dg.   

The dcs-agent.log file has the following error entry:

ORA-12514: TNS:listener does not currently know of service requested in connect descriptor

Hardware Models

All Oracle Database Appliance hardware models with Oracle Data Guard configuration

Workaround

Follow these steps:

  1. Make sure the database you are reinstating is started in MOUNT mode. To start the database in MOUNT mode, run this command:
    srvctl start database -d db-unique-name -o mount 
  2. After the above command runs successfully, run the odacli reinstate-dataguard command.

This issue is tracked with Oracle bug 32047967.

Error in updating Role after Oracle Data Guard operations

When performing operations with Oracle Data Guard on Oracle Database Appliance, an error is encountered in updating the Role.

The dbRole component described in the output of the odacli describe-database command is not updated after Oracle Data Guard switchover, failover, and reinstate operations on Oracle Database Appliance.

Hardware Models

All Oracle Database Appliance hardware models with Oracle Data Guard configuration

Workaround

Run the odacli update-registry -n db --force (or -f) command to update the database metadata. After the job completes, run the odacli describe-database command and verify that dbRole is updated.

This issue is tracked with Oracle bug 31378202.

Error in running other operations when modifying database with CPU pool

When modifying a database with CPU pool, an error is encountered with other operations.

Since modifying a database to attach or detach a CPU pool requires a database restart, it may affect any other concurrent operation on the same database. For instance, a database backup job fails when you concurrently modify the same database with the CPU pool option. The ODACLI job displays the following error:
# odacli create-backup -in dbName -bt Regular-L0 
  DCS-10089:Database dbName is in an invalid state `{Node Name:closed}'

Hardware Models

All Oracle Database Appliance hardware models with bare metal configuration

Workaround

Wait until the odacli modify-database job completes before you perform any other operation on the same database.

This issue is tracked with Oracle bug 32045674.

Error in restoring a TDE-enabled database

When restoring a TDE-enabled database on Oracle Database Appliance, an error is encountered.

When a TDE-enabled database with Oracle ASM database storage is restored on an Oracle ACFS database storage, the following error message is displayed:
Failed to copy file from : source_location to: destination_location 

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Do not change the database storage type when restoring a TDE-enabled database.

This issue is tracked with Oracle bug 31848183.

Error when recovering a single-instance database

When recovering a single-instance database, an error is encountered.

When a single-instance database is running on the remote node, and you run the operation for database recovery on the local node, the following error is observed:
DCS-10001:Internal error encountered: DCS-10001:Internal error encountered: 
Missing arguments : required sqlplus connection  information is not 
provided

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Perform recovery of the single-instance database on the node where the database is running.

This issue is tracked with Oracle bug 31399400.

Job history not erased after running cleanup.pl

After running cleanup.pl, job history is not erased.

After running cleanup.pl, when you run the /opt/oracle/dcs/bin/odacli list-jobs command, the list is not empty.

Hardware Models

All Oracle Database Appliance hardware models for bare metal deployments

Workaround

  1. Stop the DCS Agent by running the following commands on both nodes.

    For Oracle Linux 6, run:

    initctl stop initdcsagent 

    For Oracle Linux 7, run:

    systemctl stop initdcsagent 
  2. Run the cleanup script sequentially on both the nodes.

This issue is tracked with Oracle bug 30529709.

Inconsistency in ORAchk summary and details report page

The ORAchk report summary on the Browser User Interface may show different counts of Critical, Failed, and Warning issues than the report detail page.

Hardware Models

Oracle Database Appliance hardware models bare metal deployments

Workaround

Ignore counts of Critical, Failed, and Warning issues in the ORAchk report summary on the Browser User Interface. Check the report detail page.

This issue is tracked with Oracle bug 30676674.

Missing DATA, RECO, and REDO entries when dbstorage is rediscovered

Running the odacli update-registry command with -n all --force or -n dbstorage --force option can result in metadata corruption.

Hardware Models

All Oracle Database Appliance hardware models bare metal deployments

Workaround

On migrated systems where all the databases were created with OAKCLI, run the command with the -n all option. On other systems that run on the DCS stack, update each component other than dbstorage individually, using the command odacli update-registry -n component_name_to_be_updated_excluding_dbstorage.

This issue is tracked with Oracle bug 30274477.

The odaeraser tool does not work if oakd is running in non-cluster mode

After cleaning up the deployment, the Secure Eraser tool does not work if oakd is running in non-cluster mode.

Hardware Models

All Oracle Database Appliance Hardware bare metal systems

Workaround

After cleanup of the deployment, oakd is started in non-cluster mode, and it cannot be stopped using the odaadmcli stop oak command. In that case, if the Secure Eraser tool is run, then the odaeraser command fails.

Use the command odaadmcli shutdown oak to stop oakd.

This issue is tracked with Oracle bug 28547433.

Issues with the Web Console on Microsoft web browsers

Oracle Database Appliance Web Console has issues on Microsoft Edge and Microsoft Internet Explorer web browsers.

Following are issues with Microsoft web browsers:
  • Oracle Database Appliance Web Console does not display correctly on Microsoft Edge and Microsoft Internet Explorer web browsers.
  • Advanced Information for the appliance does not display on Microsoft Internet Explorer web browser.
  • Job activity status does not refresh in the Web Console on Microsoft Internet Explorer web browser.
  • After configuring the oda-admin password, the following error is displayed:
    Failed to change the default user (oda-admin) account password. 
    Status Code: 500 DCS-10001: DCS-10001:Internal error encountered: User not authorized

    Workaround: Close the Microsoft Internet Explorer browser session and open another browser session.

Hardware Models

All Oracle Database Appliance Hardware Models bare metal deployments

Workaround

To access the Web Console, use either Google Chrome or Firefox.

This issue is tracked with Oracle bugs 30077007, 30099089, and 29887027.

Unrecognized Token Messages Appear in /var/log/messages

After updating Oracle Database Appliance, unrecognized token messages appear in /var/log/messages.

Updating to Oracle Database Appliance 12.1.2.11.0 updates the Oracle VM Server version to 3.4.3. After updating, the following messages appear in /var/log/messages:

Unrecognized token: "max_seq_redisc"
Unrecognized token: "rereg_on_guid_migr"
Unrecognized token: "aguid_inout_notice"
Unrecognized token: "sm_assign_guid_func"
Unrecognized token: "reports"
Unrecognized token: "per_module_logging"
Unrecognized token: "consolidate_ipv4_mask"

You can ignore the messages for these parameters; they do not impact the InfiniBand-compliant Subnet Manager and Administration (opensm) functionality. However, Oracle recommends removing the parameters to avoid flooding /var/log/messages.

Hardware Models

Oracle Database Appliance X6-2-HA and X5-2 with InfiniBand

Workaround

Perform the following to remove the parameters:

  1. After patching, update the /etc/opensm/opensm.conf file in bare metal deployments, and in Dom0 in virtualized platform environments, to remove the parameters.

    cat /etc/opensm/opensm.conf | egrep -w 'max_seq_redisc|rereg_on_guid_migr|aguid_inout_notice|sm_assign_guid_func|reports|per_module_logging|consolidate_ipv4_mask' | grep -v ^#
    max_seq_redisc 0
    rereg_on_guid_migr FALSE
    aguid_inout_notice FALSE
    sm_assign_guid_func uniq_count
    reports 2
    per_module_logging FALSE
    consolidate_ipv4_mask 0xFFFFFFFF
  2. Reboot. The messages will not appear after rebooting the node.
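The edit in step 1 can also be scripted. The sketch below operates on a demo copy of the file created inline; it deletes the seven parameter lines named in the workaround with sed, and should be reviewed (and the original backed up) before applying the same edit to the real /etc/opensm/opensm.conf:

```shell
#!/bin/sh
# Sketch: remove the unrecognized opensm parameters from a demo copy of
# opensm.conf. The file contents below are illustrative; on the appliance,
# set CONF to /etc/opensm/opensm.conf after backing it up.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
# opensm.conf excerpt (demo)
max_seq_redisc 0
rereg_on_guid_migr FALSE
aguid_inout_notice FALSE
sm_assign_guid_func uniq_count
reports 2
per_module_logging FALSE
consolidate_ipv4_mask 0xFFFFFFFF
subnet_prefix 0xfe80000000000000
EOF

cp "$CONF" "$CONF.bak"   # keep a backup before editing
sed -i.tmp -E '/^(max_seq_redisc|rereg_on_guid_migr|aguid_inout_notice|sm_assign_guid_func|reports|per_module_logging|consolidate_ipv4_mask)[[:space:]]/d' "$CONF"
cat "$CONF"
```

After running it on the real file on both nodes, reboot as described in step 2; other settings (such as the subnet_prefix line in the demo) are left untouched.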

This issue is tracked with Oracle bug 25985258.