19 Troubleshooting Oracle Database Appliance

Understand tools you can use to validate changes and troubleshoot Oracle Database Appliance problems.

Troubleshooting Data Preserving Reprovisioning Upgrades

Understand the errors you may encounter during Data Preserving Reprovisioning upgrade and their solutions.

Checks related to TDE-enabled databases

Scenario: The database precheck Validate TDE wallet presence may fail with the following error for a TDE-enabled database whose TDE Wallet Management attribute is set to EXTERNAL and that uses a software keystore for the TDE configuration.

 Pre-Check                      Status   Error                              Action
 ------------------------------ -------- -------------------------------------------------
 Validate TDE wallet presence    Failed  Both Password Protected Wallet        Make sure that both, Password        
                                        (ewallet.p12) and Autologin Wallet     Protected Wallet (ewallet.p12) and   
                                        (cwallet.sso) are not found at         Autologin Wallet (cwallet.sso) are   
                                        '/u01/app/odaorahome/oracle/product/   present at mentioned location        
                                        19.0.0.0/dbhome_1/admin/extdb/                                              
                                        wallets' location for database 'extdb'     

Cause: Both TDE wallets (ewallet.p12 and cwallet.sso) of the database are not present at the location dbhome/admin/db_uniquename/wallets. Note that db_uniquename must be in lowercase.

Action Required: Create the path dbhome/admin/db_uniquename/wallets, if it does not exist, and copy both TDE wallets (ewallet.p12 and cwallet.sso) of the database to that location. Then create the preupgrade report again. After Data Preserving Reprovisioning completes, that is, after the node is restored with Oracle Grid Infrastructure and the database, both TDE wallets at dbhome/admin/db_uniquename/wallets can be deleted.
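
For example, assuming the database home and database unique name shown in the error above (dbhome_1 and extdb), and using /path/to/current/keystore as a placeholder for the current software keystore location, the copy might look like this:
# mkdir -p /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/admin/extdb/wallets
# cp /path/to/current/keystore/ewallet.p12 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/admin/extdb/wallets/
# cp /path/to/current/keystore/cwallet.sso /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/admin/extdb/wallets/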

Checks related to Oracle Grid Infrastructure

Scenario: The Oracle Grid Infrastructure precheck Check custom filesystems may fail with the following error:
Check custom filesystems       Failed   File systems /acfsmounts/acfs1 are     Remove the file systems determined in
                                        owned by OS users not provisioned by   the check                           
                                        ODA

Cause: The file system /acfsmounts/acfs1 is owned by an operating system user that was not provisioned by Oracle Database Appliance, and the file system was not created by Oracle Database Appliance.

Action Required: Remove the file system from Oracle Clusterware manually. After completing the Data Preserving Reprovisioning, remount this file system manually.
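
For example, assuming the underlying Oracle ADVM volume device is /dev/asm/acfs1-123 (a placeholder), the removal might look like this; srvctl option names can vary across Oracle Grid Infrastructure releases, so verify them with srvctl remove filesystem -help:
# srvctl stop filesystem -device /dev/asm/acfs1-123
# srvctl remove filesystem -device /dev/asm/acfs1-123
After Data Preserving Reprovisioning completes, remount the file system manually, for example:
# mount -t acfs /dev/asm/acfs1-123 /acfsmounts/acfs1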

Checks related to Oracle ILOM

Scenario: Configuration of Oracle ILOM host name may fail with the following error:
Job details
----------------------------------------------------------------
                     ID:  da5079f6-875b-435f-918f-7cb2974121e3
            Description:  Restore node service - GI
                 Status:  Failure (To view Error Correlation report, run "odacli describe-job -i ... --ecr" command)
                Created:  January 16, 2024 7:46:23 AM GMT
                Message:  DCS-10001:Internal error encountered: Failed to configure hostname the ilom (none).

Cause: Incorrect Oracle ILOM metadata may persist during the detach-node operation.

Action Required: Run cleanup.pl on all the nodes sequentially without the -f, -erasedata, or -nodpr options. Create a backup copy of the file /opt/oracle/oak/restore/metadata/provisionInstance.json and edit the original file. Delete the Oracle ILOM section from the file. For high-availability systems, there are two entries. A sample section is as follows:
"ilom" : {
       "ilomName" : "...",
       "ipAddress" : "...",
       "subNetMask" : "...",
       "gateway" : "..."
     },

Save the file and retry the operation.
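
For reference, the cleanup and the metadata file backup might look like the following; the cleanup.pl location shown is an assumption, so verify the script path on your system:
# /opt/oracle/oak/onecmd/cleanup.pl
# cp /opt/oracle/oak/restore/metadata/provisionInstance.json /opt/oracle/oak/restore/metadata/provisionInstance.json.bak
# vi /opt/oracle/oak/restore/metadata/provisionInstance.json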

Checks related to operating system

Scenario: The operating system precheck Check Required OS files may fail with the following error:
Check Required OS files        Failed   Required file                          Identify the cause why file is       
                                        '/opt/oracle/dcs/dcscli/dcscli_wallet/ missing, remediate that and then     
                                        cwallet.sso' not found                 retry the operation

Cause: The operating system file required for the system upgrade is missing.

Action Required: Contact My Oracle Support to create the file.

Scenario: The operating system precheck Check Additional OS RPMs may display the following alert:
Check Additional OS RPMs       Alert    Additional OS RPMs, compared to the    None; the list of these RPMs can be  
                                        base ODA image, are installed on the   found at                             
                                        system                                 '/opt/oracle/dcs/log/                
                                                                               reprovision-custom-rpms.list'. The   
                                                                               upgraded versions of these rpms will 
                                                                               have to be reinstalled manually after
                                                                               reimage

Cause: The system may have extra RPMs installed which are not managed by Oracle Database Appliance.

Action Required: After completing the Data Preserving Reprovisioning flow, manually install the additional RPMs listed in the custom-rpms.list file, located at /opt/oracle/oak/restore/.
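
For example, assuming the listed packages are available from a configured yum repository or have been copied to the node, the reinstallation might be:
# yum install $(cat /opt/oracle/oak/restore/custom-rpms.list)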

Scenario: The Storage precheck Check Required Storage files may fail with the following error:
Check Required Storage files   Failed   Required file '/etc/multipath.conf'    Identify the cause why file is       
                                        not found                              missing, remediate that and then     
                                                                               retry the operation

Cause: The required storage file needed for the operating system upgrade is missing.

Action Required: Contact My Oracle Support to create the file.

Scenario: The database precheck Validate Database Version may fail with the following error:
Validate Database Version      Failed   Version '19.10.0.0.210119' for         Please update the database to the    
                                        database 'odacn' is lower than         minimum supported version or higher  
                                        minimum supported version                                                   
                                        '19.17.0.0.221018'                                                          
                                         
Validate Database Version      Failed   Version '12.1.0.2.210119' for          Please update the database to the    
                                        database 'dbj3' is lower than minimum  minimum supported version or higher  
                                        supported version '12.1.0.2.220719'                                         
 
Validate Database Version      Failed   Version '12.2.0.1.210119' for          Please update the database to the    
                                        database 'dbj4' is lower than minimum  minimum supported version or higher  
                                        supported version '12.2.0.1.220118'                                         
                     
Cause: The database version is lower than the minimum version supported for Data Preserving Reprovisioning. Note that for bare metal systems, only Oracle Database release 19c is currently supported; databases of earlier releases must be at the minimum versions listed below before they can be upgraded. The supported Oracle Database releases and minimum versions are as follows:
Oracle Database Release   Minimum Version       Last Oracle Database Appliance Release Providing the Oracle Database Clone File
11.2.0.4                  11.2.0.4.210119       19.10
12.1                      12.1.0.2.220719       19.16
12.2                      12.2.0.1.220118       19.14
18c                       18.14.0.0.210420      19.11

Action Required: For Oracle Database release 11g databases, there is no ODACLI support to update the database. You must manually update the database to 11.2.0.4.210119 using OPatch. Then use the odacli update-registry command to update the metadata.

For Oracle Database releases 12.1.x, 12.2.x, and 18c, do the following (a sample command sequence follows this list):
  1. Update the database to the last supported Oracle Database Appliance release.
  2. Run the odacli update-repository -f serverzip_for_ODA_release command.
  3. Run the odacli update-repository -f ODA_DB_CLONE_for_minimum_version command.
  4. Generate the prepatch report, specifying the version as ODA_RELEASE.
  5. Update the database, which creates the new database home.
  6. Use the odacli delete-dbhome command to delete the database home. Note that if there are additional databases that run from the database home, you must patch all these databases before you can delete the database home.
  7. Use the odacli upgrade-database command to upgrade the database from an earlier release to Oracle Database release 19c.
For an Oracle Database 19c database, update the database to Oracle Database 19.17 or later.
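
The following sample command sequence illustrates steps 2, 3, 6, and 7 of the list above. The file names and resource IDs are placeholders, and the options shown for odacli delete-dbhome and odacli upgrade-database reflect typical usage; verify them in the odacli command reference in this guide:
# odacli update-repository -f serverzip_for_ODA_release
# odacli update-repository -f ODA_DB_CLONE_for_minimum_version
# odacli delete-dbhome -i dbhome_id
# odacli upgrade-database -i database_id -to destination_dbhome_id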

Failure to restore Oracle ASR when running the odacli restore-node -g command

Scenario: If Oracle ASR configuration fails during the restore-node operation, then the restore-node job reports the status Success, but the Oracle ASR configuration tasks report the status Failure. To verify whether the Oracle ASR configuration was restored successfully, check the describe-job output of the restore-node job. The following is sample output from a restore-node job:

Registering ASR Manager                  December 12, 2023 6:51:09 AM UTC         December 12, 2023 6:51:17 AM UTC         Failure                                                                                                                          
ASR service creation                     December 12, 2023 6:51:55 AM UTC         December 12, 2023 6:51:56 AM UTC         Failure  
Registering Asset: ODA Host              December 12, 2023 6:51:55 AM UTC         December 12, 2023 6:51:56 AM UTC         Failure  
ASR service creation                     December 12, 2023 6:51:56 AM UTC         December 12, 2023 6:51:56 AM UTC         Failure  
ASR assets activation                    December 12, 2023 6:51:56 AM UTC         December 12, 2023 6:51:56 AM UTC         Failure 

Check the /opt/oracle/dcs/log/dcs-agent.log file and the Oracle ASR logs in the /var/opt/asrmanager/log location to determine the cause of the Oracle ASR configuration failure. Once the cause and resolution are known, configure Oracle ASR manually using the odacli configure-asr command after the restore-node job completes.
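
For example, the verification and reconfiguration might look like the following; restore_node_job_id is a placeholder, and odacli configure-asr typically requires additional options, such as the Oracle ASR user name, that are described in the odacli command reference in this guide:
# odacli describe-job -i restore_node_job_id
# grep -i asr /opt/oracle/dcs/log/dcs-agent.log
# ls /var/opt/asrmanager/log
# odacli configure-asr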

Cause: If Oracle ASR restoration fails due to an incorrect SSO password, the following error message may be displayed in the dcs-agent.log file.
An Oracle Single Sign On (OSSO) account is required for data submission.
If you do not have an account or have forgotten your username or
password,
 ******** http://support.oracle.com
 
Username []: asr-qa_ca@oracle.com
Password: ********
Password ******** (to verify):
 
Contacting transport servers. Please wait...
 
Checking connection to https://transport.oracle.com/v1/
Connection is ok. Trying to register client.
Error: Invalid Oracle SSO Username and/or Password. ********
 
 
Registration failed.
2023-12-11 10:11:09,259 DEBUG [Registering ASR Manager : JobId=cfc269c4-211a-4297-b363-a2ac65aa65b0] [] c.o.d.c.n.MessageUtil: load locale as en_US
2023-12-11 10:11:09,260 ERROR [Registering ASR Manager : JobId=cfc269c4-211a-4297-b363-a2ac65aa65b0] [] c.o.d.a.r.s.a.AsrOperations: Exception:
com.oracle.dcs.commons.exception.DcsException: DCS-10045:Validation error encountered: Registration failed : Error: Invalid Oracle SSO Username and/or Password.

Action Required: Use the correct SSO password and retry configuring Oracle ASR using the odacli configure-asr command.

Scenario: If Oracle ASR restoration fails due to connectivity issues with the transport server of the Oracle ASR Manager, then the following error message may be displayed in the dcs-agent.log file:
2023-12-06 15:08:46,839 DEBUG [Registering ASR Manager : JobId=b4e25721-3a4f-4650-9ab9-ceefac678627] [] c.o.d.c.u.CommonsUtils: Output :
spawn /opt/asrmanager/bin/asr register
 
1) transport.oracle.com
Select destination transport server or enter full URL for alternate server [1]:
1
 
If a proxy server is required for HTTPS communication to the internet,
 
enter the information below. If no proxy is needed,
 enter -
Proxy server name []:
 
An Oracle Single Sign On (OSSO) account is required for data submission.
If you do not have an account or have forgotten your username or
password,
 ******** http://support.oracle.com
 
Username []: asr-qa_ca@oracle.com
Password: ********
Password ******** (to verify):
 
Contacting transport servers. Please wait...
  
Registration failed.

Action Required: Retry Oracle ASR registration. After the restore-node job completes, configure Oracle ASR manually using the odacli configure-asr command.

Failure during restore of KVM and DB systems

Scenario: Restore of metadata of VMs may fail due to missing CPU pool. The dcs-agent.log file may display the following error:
ERROR [Restore VMs metadata : JobId=bae05eea-27f1-4ccc-b962-6f071d5d90d3] 
[] c.o.d.a.k.e.KvmExceptionFactory: Not found by name com.oracle.dcs.commons.exception.DcsException: 
DCS-10032:Resource of type 'CPU Pool' with name 'pool_59c70ac2-' is not found.

Action Required: Run the odacli restore-node -kvm command to restore the missing CPU pool.

Scenario: When you run the odacli restore-node -d command, during restoration of databases, there may be an error in restoring the backup configuration.

Cause: Restore of the backup configuration may have failed because of reasons such as an inaccessible NFS location or a changed Objectstore password.

Action Required: Create the backup configuration using the odacli create-backupconfig command and, if required, attach it to the database using the odacli modify-database command. If you encounter errors when you run the odacli restore-node -kvm command, then run the command again to restore the missing resources.

Scenario: Restore of metadata of VMs may fail due to a missing vnetwork. The dcs-agent.log file may display the following error:
ERROR [Restore VMs metadata : JobId=7776e6ad-b8c5-4e23-a72c-fb2d0b82fda3] 
[] c.o.d.a.k.e.KvmExceptionFactory: Not found by name com.oracle.dcs.commons.exception.DcsException:
 DCS-10032:Resource of type 'Virtual Network' with name 'vnet48777' is not found.

Action Required: Run the odacli restore-node -kvm command again to restore the missing vnetwork.

Errors related to Oracle Data Guard

Cause of the failure: If Oracle Data Guard is configured on the system and the Oracle Data Guard configuration has errors or warnings, then the precheck displays these errors or warnings.
Dataguard                      FAILED   Warning: ORA-16853: apply lag has      Make sure that dataguard is in
                                        exceeded specified threshold           'CONFIGURED' state

Resolution: Oracle Data Guard must be in the CONFIGURED state. Fix all warnings and errors displayed in the precheck to move the Oracle Data Guard configuration to the CONFIGURED state.

Checks related to Multi-User Access enabled environments

Scenario: Token expiration duration is out of range (> 600 mins or < 10 mins)

Cause: You may have manually edited the token expiration duration in the file at /opt/oracle/dcs/idm/idm.conf.

Action Required: The pre-upgrade report generated before the detach operation checks for this anomaly and displays an error message and its resolution. Edit the idm.conf file as the root user and correct the token expiration value so that it is within range. The same applies to other configuration settings.
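
For example, the manual correction might look like the following; the name of the property that controls token expiration inside idm.conf is not reproduced here:
# cp /opt/oracle/dcs/idm/idm.conf /opt/oracle/dcs/idm/idm.conf.bak
# vi /opt/oracle/dcs/idm/idm.conf
Set the token expiration to a value between 10 and 600 minutes, save the file, and then create the pre-upgrade report again.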

Scenario: The odacli restore-node -g command may fail with an error message about inconsistent state of the system.

Cause: The state of the system prior to the detach operation has a different multi-user access setting from when you ran the odacli restore-node -g command. This can happen if you accessed the BUI before running the odacli restore-node -g command and chose to enable or disable multi-user access.

Action Required: Perform a Data Preserving Reprovisioning cleanup of the system and then run the odacli restore-node -g command using ODACLI or BUI.

Scenario: The odacli restore-node -g command may display an error about a UID or GID conflict.

Cause: When running the odacli restore-node -g command, all users in the system are restored with their original UID or GID. If there is a conflict with an existing user or group, then the odacli restore-node -g command operation fails.

Action Required: Change the UID or GID of the conflicting user or group.

Sample Pre-Upgrade Checks Report

Sample output from a system when running the pre-upgrade checks.

# odacli create-preupgradereport
------------------------------------------------------------------------
                 Job ID:  e73f3d0f-8e77-40a1-92cc-2dc825c3fd28
            Description:  Run pre-upgrade checks for Bare Metal
                 Status:  SUCCESS
                Created:  December 12, 2023 12:43:13 PM GMT
                 Result:  All pre-checks succeeded

Node Name       
---------------
scaoda703c1n1 

Check                          Status   Message                                Action                                
------------------------------ -------- -------------------------------------- --------------------------------------
__GI__ 
Check presence of databases    Success  No additional database found           None                                  
not managed by ODA                      registered in CRS                                                            
Check custom filesystems       Success  All file systems are owned and used    None                                  
                                        by OS users provisioned by ODA                                               
Check presence of HAVIP        Success  No HAVIP resources found registered    None                                  
resources not managed by ODA            in CRS                                                                       
Check presence of export       Success  No EXPORT resources found registered   None                                  
resources not managed by ODA            in CRS                                                                       

__OS__ 
Check Required OS files        Success  All the required files are present     None                                  
Check Additional OS RPMs       Success  No RPMs outside of base ISO were       None                                  
                                        found on the system                                                          

__STORAGE__ 
Check Required Storage files   Success  All the required files are present     None                                  
Validate OAK Disks             Success  All OAK disks are in valid state       None                                  
Validate ASM Disk Groups       Success  All ASM disk groups are in valid state None                                  
Validate ASM Disks             Success  All ASM disks are in valid state       None                                  
Check Database Home Storage    Success  The volume(s)                          None                                  
volumes                                 orahome_sh,odabase_n0,odabase_n1                                             
                                        state is CONFIGURED.                                                         
Check space under /opt         Success  Free space on /opt: 189495.58 MB is    None                                  
                                        more than required space: 1024 MB                                            
Check space in ASM disk        Success  Space required for creating local      None                                  
group(s)                                homes is present in ACFS database                                            
                                        home storage. Required: 0 GB                                                 
                                        Available: 774 GB                                                            

__SYS__ 
Validate Hardware Type         Success  Current hardware is supported          None                                  
Validate ILOM interconnect     Success  ILOM interconnect is not enabled       None                                  
Validate System Version        Success  System version 19.21.0.0.0 is          None                                  
                                        supported                                                                    
Verify System Timezone         Success  Succesfully verified the time zone     None                                  
                                        file                                                                         
Verify Grid User               Success  Grid user is verified                  None                                  
Verify Grid Version            Success  Oracle Grid Infrastructure is running  None                                  
                                        on the '19.18.0.0.230117' version on                                         
                                        all nodes                                                                    
Check Audit Files              Success  Local Audit files not found            None                                  

__DB__ 
Validate Database Status       Success  Database 'mydb' is running and is in   None                                  
                                        'CONFIGURED' state                                                           
Validate Database Version      Success  Version '19.18.0.0.230117' for         None                                  
                                        database 'mydb' is supported                                                 
Validate Database Datapatch    Success  Database 'mydb' is completely applied  None                                  
Application Status                      with datapatch                                                               
Validate TDE wallet presence   Success  Database 'mydb' is not TDE enabled.    None                                  
                                        Skipping TDE wallet presence check.                                          
Validate Database Home         Success  Database home location check passed    None                                  
location                                for database mydbu                                                           
Validate Database Status       Success  Database 'uxljY' is running on         None                                  
                                        'scaoda703c1n2'. This check is                                               
                                        skipped.                                                                     
Validate Database Version      Success  Version '19.18.0.0.230117' for         None                                  
                                        database 'uxljY' is supported                                                
Validate Database Datapatch    Success  The database is RACOne and is running  None                                  
Application Status                      on scaoda703c1n2. This check is                                              
                                        skipped.                                                                     
Validate TDE wallet presence   Success  Database 'uxljY' is not TDE enabled.   None                                  
                                        Skipping TDE wallet presence check.                                          
Validate Database Home         Success  Database home location check passed    None                                  
location                                for database uxljY                                                           

__CERTIFICATES__ 
Check using custom             Success  Using Default key pair                 None                                  
certificates                                                                                                         
Check the agent of the DB      Success  All the agents of the DB systems are   None                                  
System accessible                       accessible                                                                   

__DBSYSTEMS__ 
Validate DB System DCS         Success  scaoda703c4n1: SUCCESS                 None                                  
component versions                                                                                                   


Node Name       
---------------
scaoda703c1n2 

Check                          Status   Message                                Action                                
------------------------------ -------- -------------------------------------- --------------------------------------
__GI__ 
Check presence of databases    Success  No additional database found           None                                  
not managed by ODA                      registered in CRS                                                            
Check custom filesystems       Success  All file systems are owned and used    None                                  
                                        by OS users provisioned by ODA                                               
Check presence of HAVIP        Success  No HAVIP resources found registered    None                                  
resources not managed by ODA            in CRS                                                                       
Check presence of export       Success  No EXPORT resources found registered   None                                  
resources not managed by ODA            in CRS                                                                       

__OS__ 
Check Required OS files        Success  All the required files are present     None                                  
Check Additional OS RPMs       Success  No RPMs outside of base ISO were       None                                  
                                        found on the system                                                          

__STORAGE__ 
Check Required Storage files   Success  All the required files are present     None                                  
Validate OAK Disks             Success  All OAK disks are in valid state       None                                  
Validate ASM Disk Groups       Success  All ASM disk groups are in valid state None                                  
Validate ASM Disks             Success  All ASM disks are in valid state       None                                  
Check Database Home Storage    Success  The volume(s)                          None                                  
volumes                                 orahome_sh,odabase_n0,odabase_n1                                             
                                        state is CONFIGURED.                                                         
Check space under /opt         Success  Free space on /opt: 131591.74 MB is    None                                  
                                        more than required space: 1024 MB                                            
Check space in ASM disk        Success  Space required for creating local      None                                  
group(s)                                homes is present in ACFS database                                            
                                        home storage. Required: 0 GB                                                 
                                        Available: 774 GB                                                            

__SYS__ 
Validate Hardware Type         Success  Current hardware is supported          None                                  
Validate ILOM interconnect     Success  ILOM interconnect is not enabled       None                                  
Validate System Version        Success  System version 19.21.0.0.0 is          None                                  
                                        supported                                                                    
Verify System Timezone         Success  Succesfully verified the time zone     None                                  
                                        file                                                                         
Verify Grid User               Success  Grid user is verified                  None                                  
Verify Grid Version            Success  Oracle Grid Infrastructure is running  None                                  
                                        on the '19.18.0.0.230117' version on                                         
                                        all nodes                                                                    
Check Audit Files              Success  Local Audit files not found            None                                  

__DB__ 
Validate Database Status       Success  Database 'mydb' is running and is in   None                                  
                                        'CONFIGURED' state                                                           
Validate Database Version      Success  Version '19.18.0.0.230117' for         None                                  
                                        database 'mydb' is supported                                                 
Validate Database Datapatch    Success  Database 'mydb' is completely applied  None                                  
Application Status                      with datapatch                                                               
Validate TDE wallet presence   Success  Database 'mydb' is not TDE enabled.    None                                  
                                        Skipping TDE wallet presence check.                                          
Validate Database Home         Success  Database home location check passed    None                                  
location                                for database mydbu                                                           
Validate Database Status       Success  Database 'uxljY' is running and is in  None                                  
                                        'CONFIGURED' state                                                           
Validate Database Version      Success  Version '19.18.0.0.230117' for         None                                  
                                        database 'uxljY' is supported                                                
Validate Database Datapatch    Success  Database 'uxljY' is completely         None                                  
Application Status                      applied with datapatch                                                       
Validate TDE wallet presence   Success  Database 'uxljY' is not TDE enabled.   None                                  
                                        Skipping TDE wallet presence check.                                          
Validate Database Home         Success  Database home location check passed    None                                  
location                                for database uxljY                                                           

__CERTIFICATES__ 
Check using custom             Success  Using Default key pair                 None                                  
certificates                                                                                                         
Check the agent of the DB      Success  All the agents of the DB systems are   None                                  
System accessible                       accessible                                                                   

__DBSYSTEMS__ 
Validate DB System DCS         Success  scaoda703c4n1: SUCCESS                 None                                  
component versions

Viewing Oracle Database Appliance Error Correlation Reports

Understand how to view Error Correlation Report and how to interpret the report to troubleshoot your appliance.

About Error Correlation Reports

If a DCS job fails, an Error Correlation job is created automatically to generate an Error Correlation report. You can access and review the generated Error Correlation report from the BUI to explore possible ways of error resolution.

The Error Correlation Report contains the following:
  • Log Messages: Errors, exceptions and warnings from various log files.
  • Failed Task Messages: Error message displayed when the DCS job failed.
  • Release Notes: Relevant Known Issues from Oracle Database Appliance Release Notes to help resolve the issue.
  • Documentation: Relevant topics from the Oracle Database Appliance Documentation Library to help resolve the error.
The Error Correlation Report is generated for every failed DCS job and can be accessed from the BUI. On Oracle Database Appliance high-availability deployments, the Error Correlation report contains the error information derived from log files of both the nodes.

Viewing Error Correlation Reports using ODACLI Commands

You can view the Error Correlation report of a failed DCS job by running the odacli describe-job -i failed_dcs_job_id --ecr command. For an example output, see the topic odacli describe-job in this guide.

Viewing Error Correlation Reports from the BUI

To view the Error Correlation Report from the Activities page in the BUI:
  1. Log into the Browser User Interface:
    https://host-ip-address:7093/mgmt/index.html
  2. Click the Activity tab.
  3. In the Activities page, click the Failure or InternalError link in the failed DCS job for which you want to view the Error Correlation report. Note that only failed DCS jobs have associated Error Correlation Reports.
  4. You can also view the Error Correlation Report for the failed DCS job when you click the Actions menu, and select View Error Correlation Report.
  5. The Error Correlation Report contains the following tabs:
    • Log Messages: Displays the logs for DCS agent, DCS admin, Oracle HAMI, MySQL, and Oracle FPP. You can expand each section to view the details. Only components that have logs are displayed. If no errors are found, then the message No errors or exceptions found in logs is displayed in the Log Messages section.
    • Failed Task Messages: Displays the specific error message displayed when the task failed.
    • Release Notes: Displays relevant Known Issues from Oracle Database Appliance Release Notes to help resolve the issue. You can click each of these links to view the Release Notes entry. If no relevant Known issues are found, then the message No matching results were found. is displayed.
    • Documentation: Displays relevant topics from the Oracle Database Appliance Documentation Library to help resolve the error. You can click each of these links to view the documentation topic from the Oracle Database Appliance documentation.
To view the Error Correlation Report from the Diagnostics page in the BUI:
  1. In the BUI, click the Diagnostics tab.
  2. In the Diagnostics page, click Collect Diagnostic Data for a failed job.
  3. The Collect Diagnostics page displays the Error Correlation Report and Job Details in separate tabs for the failed DCS job. Click the Report File Name link to download the Error Correlation Report to your local system.
  4. The Job details tab displays the steps in the job and the Error Correlation Report contains the Log Messages, Failed Task Messages, Release Notes, and Documentation tabs.

About Enabling Linux Kernel Core Extractor for Troubleshooting

Understand how to manage Linux Kernel Core Extractor to troubleshoot your appliance.

About Linux Kernel Core Extractor

A Linux kernel panic can occur due to various reasons such as faulty hardware, driver crashes, or software bugs. To identify the cause of kernel panic, it is essential to collect and analyze the vmcore of the crashed kernel. The kdump service is used to collect the vmcore after the first kernel crash. This process is slow for systems with high memory and often fails to generate vmcore when the available space is not sufficient. When Linux Kernel Core Extractor is enabled on Oracle Database Appliance bare metal systems, the crash utility in the kdump kernel collects useful information for troubleshooting without generating vmcore.
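
To check whether Linux Kernel Core Extractor is active on a node, and to enable it if it is not, use the oled lkce status and enable commands (also shown in the help output later in this topic):
# /usr/sbin/oled lkce status
# /usr/sbin/oled lkce enable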

Linux Kernel Core Extractor Commands

List generated crash reports:
# /usr/sbin/oled lkce list
Followings are the crash*out found in /var/oled/lkce dir:
/var/oled/lkce/crash_20220307-154542.out
Purge all but the last three crash reports:
# /usr/sbin/oled lkce clean
lkce deletes all but last three /var/oled/lkce/crash*out files. do you want to proceed(yes/no)? [no]:
Purge all crash reports:
# /usr/sbin/oled lkce clean --all
lkce removes all the files in /var/oled/lkce dir. do you want to proceed(yes/no)? [no]:
By default, the crash report contains output for the following crash commands. You can add other crash commands to the file /etc/oled/lkce/crash_cmds_file.
#
# This is the input file for crash utility. You can edit this manually
# Add your own list of crash commands one per line.
#
bt
bt -a
bt -FF
dev
kmem -s
foreach bt
log
mod
mount
net
ps -m
ps -S
runq
quit
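For example, to include the crash utility files command in future crash reports, add it on its own line before the final quit command; one way to do that with GNU sed is:
# sed -i '/^quit$/i files' /etc/oled/lkce/crash_cmds_file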
By default, vmcore generation is disabled. You can enable vmcore generation as follows:
# oled lkce configure --vmcore=yes
Restarting kdump service... done!
lkce: set vmcore to yes
For additional Linux Kernel Core Extractor commands, refer to the Linux Kernel Core Extractor help.
# oled lkce help
Usage: lkce options
options:
    report report-options -- Generate a report from vmcore
    report-options:
        --vmcore=/path/to/vmcore        - path to vmcore
        [--vmlinux=/path/to/vmlinux]        - path to vmlinux
        [--crash_cmds=cmd1,cmd2,cmd3,..]    - crash commands to include
        [--outfile=/path/to/outfile]        - write output to a file
 
    configure [--default]   -- configure lkce with default values
    configure [--show]  -- show lkce configuration -- default
    configure [config-options]
    config-options:
        [--vmlinux_path=/path/to/vmlinux]   - set vmlinux_path
        [--crash_cmds_file=/path/to/file]   - set crash_cmds_file
        [--kdump_report=yes/no]         - set crash report in kdump kernel
        [--vmcore=yes/no]           - set vmcore generation in kdump kernel
        [--max_out_files=<number>]        - set max_out_files
 
    enable  -- enable lkce in kdump kernel
    disable -- disable lkce in kdump kernel
    status  -- status of lkce
 
    clean [--all]   -- clear crash report files
    list        -- list crash report files

Viewing Details About DCS Error Messages

Understand how to view details about DCS errors for troubleshooting them.

About Viewing Information About DCS Errors

To view more details about any errors during DCS operations, use the command dcserr error_code.

# /opt/oracle/dcs/bin/dcserr
dcserr error_code
 
# dcserr 10001
10001, Internal_Error, "Internal error encountered: {0}."
// *Cause: An internal error occurred.
// *Action: Contact Oracle Support Services for assistance.
/
# dcserr 1001
Unknown error code

To view more details about DCS errors in the Browser User Interface (BUI), you can provide the DCS error code in the Search box in the BUI. The Search results display the Cause and Action of DCS error codes.

Collecting Diagnostics Data Using the BUI

Understand how to collect diagnostics data to troubleshoot errors.

About Collecting Diagnostics Data

Use the Diagnostics tab in the Browser User Interface to view diagnostic information about your deployment and the installed components.

In the Diagnostic Collection page, you can view the available diagnostics collections. Click Collect Diagnostic Data to start diagnostics collection. Once the data is collected, click on the collection file path to download the file.

In the Collect Diagnostics page, specify the Job ID for the diagnostics data collection. Optionally, specify a tag and a description for the collection. The details of the Job ID are displayed. Click Collect to start the diagnostics data collection.

You can also collect diagnostics from the Activity page, by selecting Collect Diagnostics from the Actions drop down for a specific job. Click Collect to start the diagnostics data collection.

To delete a diagnostic collection, from the Diagnostic Collection page, select the specific collection, and click Delete.

This diagnostics collection feature does not replace the odaadmcli manage diagcollect command. You can still use the odaadmcli manage diagcollect command to collect diagnostics, independently of the BUI feature. Both the odaadmcli manage diagcollect command and the BUI diagnostics collection use the tfactl command internally. The BUI diagnostics collection also gathers DCS metadata that is not collected through tfactl, providing greater context for root cause analysis of related DCS job failures.
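
For reference, a command-line diagnostics collection can be started as follows; see the odaadmcli command reference in this guide for component-specific options:
# odaadmcli manage diagcollect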

Resolving Errors When Updating DCS Components During Patching

Understand how to troubleshoot errors when updating DCS components during patching.

About DCS Components

When you run the odacli update-dcscomponents command during patching, pre-checks for MySQL installation are automatically verified before update of Oracle HAMI, MySQL, and DCS components. If any of the pre-checks fail, then the command errors out with a reference to the pre-check report log file location /opt/oracle/dcs/log/jobfiles/jobId/dcscomponentsPreCheckReport.log. Review the pre-check report and take corrective actions and then rerun the odacli update-dcscomponents command. If there are no pre-check errors, then the patching process proceeds with updating Oracle HAMI, MySQL, and DCS components such as the DCS Agent, DCS CLI, and DCS Controller.

Note:

Run the odacli update-dcsadmin command prior to running the odacli update-dcscomponents command.
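
For example, assuming the target release 19.23.0.0.0 used in the output below, the order is as follows; the -v option for odacli update-dcsadmin mirrors odacli update-dcscomponents and should be verified in the odacli command reference in this guide:
# odacli update-dcsadmin -v 19.23.0.0.0
# odacli update-dcscomponents -v 19.23.0.0.0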

When the odacli update-dcscomponents command completes successfully:

The command output is as follows:

# ./odacli update-dcscomponents -v 19.23.0.0.0            
{
  "jobId" : "3ac3667a-fa22-40b6-a832-504a56aa3fdc",
  "status" : "Success",
  "message" : "Update-dcscomponents is successful on all the node(s):DCS-Agent
shutdown is successful. MySQL upgrade is done before. Metadata migration is
successful. Agent rpm upgrade is successful. DCS-CLI rpm upgrade is successful.
DCS-Controller rpm upgrade is succ",
  "reports" : null,
  "createTimestamp" : "April 8, 2024 02:37:37 AM CST",
  "description" : "Update-dcscomponents job completed and is not part of Agent
job list",
  "updatedTime" : "April 8, 2024 02:39:10 AM CST"
}

The pre-check report log file at the location /opt/oracle/dcs/log/jobfiles/jobId/dcscomponentsPreCheckReport.log contains the following:

Pre-check Name: Space check
Status: Success
Comments: Required space 3 GB is available in /opt

Pre-check Name: Port check
Status: Success
Comments: Port 3306 is available for running ODA MySQL

Pre-check Name: ODA MySQL rpm installation dry-run check
Status: Success
Comments: ODA MySQL rpm dry-run passed

Pre-check Name: Check for the existence of MySQL connector/J library
Status: Success
Comments: ODA MySQL connector/J library found

Pre-check Name: Check for the existence of Metadata migration utility
Status: Success
Comments: Metadata migration utility found

dcs-admin version: 
Pre-check Name: dcs-admin version validation
Status: Success
Comments: dcs-admin is already updated :19.23.0.0.0

Config File Exist dcscontroller: 
Pre-check Name: Check DCS config files exists for dcscontroller
Status: Success
Comments: Files /opt/oracle/dcs/conf/dcs-controller.yml and /opt/oracle/dcs/conf/dcs-controller-logback.xml exist
 
Config File Exist dcsagent: 
Pre-check Name: Check DCS config files exists for dcsagent
Status: Success
Comments: Files /opt/oracle/dcs/conf/dcs-agent.yml and /opt/oracle/dcs/conf/dcs-agent-logback.xml exist

Validate scheduler cron expressions:
Pre-check Name: Validate scheduler cron expressions
Status: Success
Comments: Scheduler cron expressions for existing job schedules are up to date 

When the odacli update-dcscomponents command fails:

On Oracle Database Appliance single-node systems, the command output is as follows:

# ./odacli update-dcscomponents -v 19.23.0.0.0            

DCS-10008:Failed to update DCScomponents: 19.22.0.0.0
Internal error while patching the DCS components :
DCS-10231:Cannot proceed. Pre-checks for update-dcscomponents failed. Refer to
/opt/oracle/dcs/log/jobfiles/jobId/dcscomponentsPreCheckReport.log
on node 0 for details.

On Oracle Database Appliance high-availability systems, the command output is as follows:

# ./odacli update-dcscomponents -v 19.23.0.0.0            

Internal error while patching the DCS components :
DCS-10231:Cannot proceed. Pre-checks for update-dcscomponents failed. Refer to
/opt/oracle/dcs/log/jobfiles/jobId/dcscomponentsPreCheckReport.log
on node 0 and /opt/oracle/dcs/log/jobfiles/jobId/dcscomponentsPreCheckReport.log
on node 1 for details.

The command runs all pre-checks one by one and errors out at the end if any pre-check is marked as Failed. When a pre-check fails, the error message is displayed on the console along with a reference to the pre-check report log location. The pre-check report log file is at the location /opt/oracle/dcs/log/jobfiles/jobId/dcscomponentsPreCheckReport.log.

Pre-check Name: Space check
Status: Failed
Comments: Available space in /opt is 2 GB but minimum required space in /opt is 3 GB 

Pre-check Name: Port check
Status: Success
Comments: Port 3306 is available for running ODA MySQL

Pre-check Name: ODA MySQL rpm installation dry-run check
Status: Success
Comments: ODA MySQL rpm dry-run passed

Pre-check Name: Check for the existence of MySQL connector/J library
Status: Success
Comments: ODA MySQL connector/J library found

Pre-check Name: Check for the existence of Metadata migration utility
Status: Success
Comments: Metadata migration utility found

Pre-check Name: dcs-admin version validation
Status: Success
Comments: dcs-admin is already updated :19.20.0.0.0

Config File Exist dcscontroller: 
Pre-check Name: Check DCS config files exists for dcscontroller
Status: Success
Comments: Files /opt/oracle/dcs/conf/dcs-controller.yml and /opt/oracle/dcs/conf/dcs-controller-logback.xml exist
 
Config File Exist dcsagent: 
Pre-check Name: Check DCS config files exists for dcsagent
Status: Success
Comments: Files /opt/oracle/dcs/conf/dcs-agent.yml and /opt/oracle/dcs/conf/dcs-agent-logback.xml exist

Validate scheduler cron expressions:
Pre-check Name: Validate scheduler cron expressions
Status: Success
Comments: Scheduler cron expressions for existing job schedules are up to date

When the odacli update-dcscomponents command fails due to space check error:

The pre-check report log contains the following:

Pre-check Name: Space check
Status: Failed
Comments: Available space in /opt is 2 GB but minimum required space in /opt is 3 GB 

Pre-check Name: Port check
Status: Success
Comments: Port 3306 is available for running ODA MySQL

Pre-check Name: ODA MySQL rpm installation dry-run check
Status: Success
Comments: ODA MySQL rpm dry-run passed

Pre-check Name: Check for the existence of MySQL connector/J library
Status: Success
Comments: ODA MySQL connector/J library found

Pre-check Name: Check for the existence of Metadata migration utility
Status: Success
Comments: Metadata migration utility found

Pre-check Name: dcs-admin version validation
Status: Success
Comments: dcs-admin is already updated :19.20.0.0.0

Config File Exist dcscontroller: 
Pre-check Name: Check DCS config files exists for dcscontroller
Status: Success
Comments: Files /opt/oracle/dcs/conf/dcs-controller.yml and /opt/oracle/dcs/conf/dcs-controller-logback.xml exist
 
Config File Exist dcsagent: 
Pre-check Name: Check DCS config files exists for dcsagent
Status: Success
Comments: Files /opt/oracle/dcs/conf/dcs-agent.yml and /opt/oracle/dcs/conf/dcs-agent-logback.xml exist

Validate scheduler cron expressions:
Pre-check Name: Validate scheduler cron expressions
Status: Success
Comments: Scheduler cron expressions for existing job schedules are up to date

When the odacli update-dcscomponents command fails due to port check error:

The pre-check report log contains the following:

Pre-check Name: Space check
Status: Success
Comments: Required space 3 GB is available in /opt

Pre-check Name: Port check
Status: Failed
Comments: No port found in the range ( 3306 to 65535 )

Pre-check Name: ODA MySQL rpm installation dry-run check
Status: Success
Comments: ODA MySQL rpm dry-run passed

Pre-check Name: Check for the existence of MySQL connector/J library
Status: Success
Comments: ODA MySQL connector/J library found

Pre-check Name: Check for the existence of Metadata migration utility
Status: Success
Comments: Metadata migration utility found

Pre-check Name: dcs-admin version validation
Status: Success
Comments: dcs-admin is already updated :19.20.0.0.0

Config File Exist dcscontroller: 
Pre-check Name: Check DCS config files exists for dcscontroller
Status: Success
Comments: Files /opt/oracle/dcs/conf/dcs-controller.yml and /opt/oracle/dcs/conf/dcs-controller-logback.xml exist
 
Config File Exist dcsagent: 
Pre-check Name: Check DCS config files exists for dcsagent
Status: Success
Comments: Files /opt/oracle/dcs/conf/dcs-agent.yml and /opt/oracle/dcs/conf/dcs-agent-logback.xml exist

Validate scheduler cron expressions:
Pre-check Name: Validate scheduler cron expressions
Status: Success
Comments: Scheduler cron expressions for existing job schedules are up to date

When the odacli update-dcscomponents command fails due to MySQL RPM installation dry-run check error:

The pre-check report log contains the following:

Pre-check Name: Space check
Status: Success
Comments: Required space 3 GB is available in /opt

Pre-check Name: Port check
Status: Success
Comments: Port 3306 is available for running ODA MySQL

Pre-check Name: ODA MySQL rpm installation dry-run check
Status: Failed
Comments: ODA MySQL rpm dry-run failed. Failed due to the following error :
Exception details are displayed below

Pre-check Name: Check for the existence of MySQL connector/J library
Status: Success
Comments: ODA MySQL connector/J library found

Pre-check Name: Check for the existence of Metadata migration utility
Status: Success
Comments: Metadata migration utility found

Pre-check Name: dcs-admin version validation
Status: Success
Comments: dcs-admin is already updated :19.20.0.0.0

Config File Exist dcscontroller: 
Pre-check Name: Check DCS config files exists for dcscontroller
Status: Success
Comments: Files /opt/oracle/dcs/conf/dcs-controller.yml and /opt/oracle/dcs/conf/dcs-controller-logback.xml exist
 
Config File Exist dcsagent: 
Pre-check Name: Check DCS config files exists for dcsagent
Status: Success
Comments: Files /opt/oracle/dcs/conf/dcs-agent.yml and /opt/oracle/dcs/conf/dcs-agent-logback.xml exist

Validate scheduler cron expressions:
Pre-check Name: Validate scheduler cron expressions
Status: Success
Comments: Scheduler cron expressions for existing job schedules are up to date

When the odacli update-dcscomponents command fails due to MySQL connector/J library check error:

The pre-check report log contains the following:

Pre-check Name: Space check
Status: Success
Comments: Required space 3 GB is available in /opt

Pre-check Name: Port check
Status: Success
Comments: Port 3306 is available for running ODA MySQL

Pre-check Name: ODA MySQL rpm installation dry-run check
Status: Success
Comments: ODA MySQL rpm dry-run passed

Pre-check Name: Check for the existence of MySQL connector/J library
Status: Failed
Comments: MySQL connector/J library does not exist. Ensure update-repository with latest serverzip bundles ran first without any issues prior to running update-dcscomponents

Pre-check Name: Check for the existence of Metadata migration utility
Status: Success
Comments: Metadata migration utility found

dcs-admin version: 
Pre-check Name: dcs-admin version validation
Status: Success
Comments: dcs-admin is already updated :19.20.0.0.0

Config File Exist dcscontroller: 
Pre-check Name: Check DCS config files exists for dcscontroller
Status: Success
Comments: Files /opt/oracle/dcs/conf/dcs-controller.yml and /opt/oracle/dcs/conf/dcs-controller-logback.xml exist
 
Config File Exist dcsagent: 
Pre-check Name: Check DCS config files exists for dcsagent
Status: Success
Comments: Files /opt/oracle/dcs/conf/dcs-agent.yml and /opt/oracle/dcs/conf/dcs-agent-logback.xml exist

Validate scheduler cron expressions:
Pre-check Name: Validate scheduler cron expressions
Status: Success
Comments: Scheduler cron expressions for existing job schedules are up to date

When the odacli update-dcscomponents command fails due to Metadata migration utility check error:

The pre-check report log contains the following:

Pre-check Name: Space check
Status: Success
Comments: Required space 3 GB is available in /opt

Pre-check Name: Port check
Status: Success
Comments: Port 3306 is available for running ODA MySQL

Pre-check Name: ODA MySQL rpm installation dry-run check
Status: Success
Comments: ODA MySQL rpm dry-run passed

Pre-check Name: Check for the existence of MySQL connector/J library
Status: Success
Comments: ODA MySQL connector/J library found

Pre-check Name: Check for the existence of Metadata migration utility
Status: Failed
Comments: Metadata migration utility does not exist. Ensure update-repository with latest serverzip bundles ran first without any issues prior to running update-dcscomponents.

dcs-admin version: 
Pre-check Name: dcs-admin version validation
Status: Success
Comments: dcs-admin is already updated :19.20.0.0.0

Config File Exist dcscontroller: 
Pre-check Name: Check DCS config files exists for dcscontroller
Status: Success
Comments: Files /opt/oracle/dcs/conf/dcs-controller.yml and /opt/oracle/dcs/conf/dcs-controller-logback.xml exist
 
Config File Exist dcsagent: 
Pre-check Name: Check DCS config files exists for dcsagent
Status: Success
Comments: Files /opt/oracle/dcs/conf/dcs-agent.yml and /opt/oracle/dcs/conf/dcs-agent-logback.xml exist

Validate scheduler cron expressions:
Pre-check Name: Validate scheduler cron expressions
Status: Success
Comments: Scheduler cron expressions for existing job schedules are up to date

When the odacli update-dcscomponents command displays a warning due to scheduler cron expression:

When patching from Oracle Database Appliance release 19.19 or earlier to the latest release, and you run the odacli update-dcscomponents command, there may be a warning in the pre-check report log file if the default cron expressions in the existing list of job schedules have been modified. The pre-check report log contains the following:

Pre-check Name: Space check
Status: Success
Comments: Required space 3 GB is available in /opt

Pre-check Name: Port check
Status: Success
Comments: Port 3306 is available for running ODA MySQL

Pre-check Name: ODA MySQL rpm installation dry-run check
Status: Success
Comments: ODA MySQL rpm dry-run passed

Pre-check Name: Check for the existence of MySQL connector/J library
Status: Success
Comments: ODA MySQL connector/J library found

Pre-check Name: Check for the existence of Metadata migration utility
Status: Success
Comments: Metadata migration utility found

dcs-admin version: 
Pre-check Name: dcs-admin version validation
Status: Success
Comments: dcs-admin is already updated :19.20.0.0.0

Config File Exist dcscontroller: 
Pre-check Name: Check DCS config files exists for dcscontroller
Status: Success
Comments: Files /opt/oracle/dcs/conf/dcs-controller.yml and /opt/oracle/dcs/conf/dcs-controller-logback.xml exist
 
Config File Exist dcsagent: 
Pre-check Name: Check DCS config files exists for dcsagent
Status: Success
Comments: Files /opt/oracle/dcs/conf/dcs-agent.yml and /opt/oracle/dcs/conf/dcs-agent-logback.xml exist

Validate scheduler cron expressions:
Pre-check Name: Validate scheduler cron expressions
Status: Warning
Comments: Following cron expressions in the scheduler were modified from their default values. Starting 19.20, DCS Agent converts 7 fields cron expression into 6 fields cron expression. No further action needed.
Schedule ID : 3f671ee7-1a03-43fd-b98b-ce33eb09de08 , Custom cron expression : 10 25 * 1/1 * ? 2023

Note that the Status: Warning means that the update-dcscomponents pre-check has detected custom cron expressions in the existing list of job schedules. After the DCS components are patched, the DCS agent automatically converts each 7-field custom cron expression to the equivalent 6-field cron expression.
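
The following sketch is an illustration only, assuming the conversion simply drops the optional trailing year field of a 7-field (Quartz-style) expression; the actual conversion is performed internally by the DCS agent and requires no manual action.

# 7-field expression:  sec min hour day-of-month month day-of-week year
#     10 25 * 1/1 * ? 2023
# equivalent 6-field expression after the year field is dropped:
#     10 25 * 1/1 * ?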

Viewing Component Information on the Appliance

View details of all the components installed on the appliance, and the RPM drift information.

Collecting and Viewing the Bill of Materials in the Browser User Interface

Use the Appliance tab in the Browser User Interface to collect and view information about your deployment and the installed components. The Advanced Information tab displays information about the following components:

  • Grid Infrastructure Version, and the home directory

  • Database Version, Home location, and Edition

  • Location and details about the databases configured

  • All patches applied to the appliance

  • Firmware Controller and Disks

  • ILOM information

  • BIOS version

  • List of RPMs

In the List of RPMs section, click Show and then click RPM Drift to view the differences between the RPMs installed on the appliance, and the RPMs shipped in the latest Oracle Database Appliance Patch Bundle Update release.

Click Collect Bill of Materials to initiate a collection and submit the job. The job ID is displayed. After the collection is complete, click Refresh to refresh the information.

Click Download to save the components report. You can use this report to help diagnose any deployment issues.

Viewing the Bill of Materials from the Command Line

The bill of materials is also available through the command line for bare metal and virtualized platform deployments. The information about the installed components is collected according to a set schedule, and stored in the location /opt/oracle/dcs/Inventory/ for bare metal deployments and in the /opt/oracle/oak/Inventory/ directory for virtualized platforms. The file is stored in the format oda_bom_TimeStamp.json. Use the command describe-system to view the bill of materials on the command line. See the Oracle Database Appliance Command-Line Interface chapter for command options and usage notes.

Example 19-1 Example Command to View the Bill of Materials from the Command Line for Bare Metal Deployments

# odacli describe-system -b
ODA Components Information 
------------------------------
Component Name                Component Details                                            
---------------               ----------------------------------------------------------------------------------------------- 
NODE                          Name : oda1 
                              Domain Name : testdomain.com 
                              Time Stamp : April 21, 2020 6:21:15 AM UTC 

  
RPMS                          Installed RPMS : abrt-2.1.11-55.0.1.el7.x86_64,
                                               abrt-addon-ccpp-2.1.11-55.0.1.el7.x86_64,
                                               abrt-addon-kerneloops-2.1.11-55.0.1.el7.x86_64,
                                               abrt-addon-pstoreoops-2.1.11-55.0.1.el7.x86_64,
                                               abrt-addon-python-2.1.11-55.0.1.el7.x86_64,
                                               abrt-addon-vmcore-2.1.11-55.0.1.el7.x86_64,
                                               abrt-addon-xorg-2.1.11-55.0.1.el7.x86_64,
                                               abrt-cli-2.1.11-55.0.1.el7.x86_64,
                                               abrt-console-notification-2.1.11-55.0.1.el7.x86_64,
                                               abrt-dbus-2.1.11-55.0.1.el7.x86_64,
                                               abrt-libs-2.1.11-55.0.1.el7.x86_64,
                                               abrt-python-2.1.11-55.0.1.el7.x86_64,
                                               abrt-tui-2.1.11-55.0.1.el7.x86_64,
                                               acl-2.2.51-14.el7.x86_64,
                                               adwaita-cursor-theme-3.28.0-1.el7.noarch,
                                               adwaita-icon-theme-3.28.0-1.el7.noarch,
                                               aic94xx-firmware-30-6.el7.noarch,
                                               aide-0.15.1-13.0.1.el7.x86_64,
                                               alsa-firmware-1.0.28-2.el7.noarch,
                                               alsa-lib-1.1.8-1.el7.x86_64,
                                               alsa-tools-firmware-1.1.0-1.el7.x86_64,
                                               at-3.1.13-24.el7.x86_64,
                                               at-spi2-atk-2.26.2-1.el7.x86_64,
                                               at-spi2-core-2.28.0-1.el7.x86_64,
                                               atk-2.28.1-1.el7.x86_64,
                                               attr-2.4.46-13.el7.x86_64,
                                               audit-2.8.5-4.el7.x86_64,
                                               audit-libs-2.8.5-4.el7.x86_64,
                                               audit-libs-python-2.8.5-4.el7.x86_64,
                                               augeas-libs-1.4.0-9.el7.x86_64,
                                               authconfig-6.2.8-30.el7.x86_64,
                                               autogen-libopts-5.18-5.el7.x86_64,
                                               avahi-libs-0.6.31-19.el7.x86_64,
                                               basesystem-10.0-7.0.1.el7.noarch,
                                               bash-4.2.46-33.el7.x86_64,
                                               bash-completion-2.1-6.el7.noarch,
                                               bc-1.06.95-13.el7.x86_64,
                                               bind-export-libs-9.11.4-9.P2.el7.x86_64,
                                               bind-libs-9.11.4-9.P2.el7.x86_64,
                                               bind-libs-lite-9.11.4-9.P2.el7.x86_64,
                                               bind-license-9.11.4-9.P2.el7.noarch,
                                               bind-utils-9.11.4-9.P2.el7.x86_64,
                                               binutils-2.27-41.base.0.7.el7_7.2.x86_64,
                                               biosdevname-0.7.3-2.el7.x86_64,
                                               blktrace-1.0.5-9.el7.x86_64,
                                               bnxtnvm-1.40.10-1.x86_64,
                                               boost-date-time-1.53.0-27.el7.x86_64,
                                               boost-filesystem-1.53.0-27.el7.x86_64,
                                               boost-iostreams-1.53.0-27.el7.x86_64,
....
....
....

Example 19-2 Example Command to View the Bill of Materials from the Command Line for Virtualized Platforms

# oakcli describe-system -b

Example 19-3 Example Command to View the Bill of Materials Report from the Stored Location

# ls -la /opt/oracle/dcs/Inventory/
total 264
-rw-r--r-- 1 root root 83550 Apr 26 05:41 oda_bom_2018-04-26_05-41-36.json
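
Because the stored report is plain JSON, you can pretty-print it with standard operating system tools; the file name below is taken from the listing above and will differ on your system.

# python -m json.tool /opt/oracle/dcs/Inventory/oda_bom_2018-04-26_05-41-36.json | head -40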

Errors When Logging into the Browser User Interface

If you have problems logging into the Browser User Interface, then it may be due to your browser or credentials.

Note:

Oracle Database Appliance uses self-signed certificates. Your browser determines how you log into the Browser User Interface. Depending on the browser and browser version, you may receive a warning or error that the certificate is invalid or not trusted because it is self-signed, or that the connection is not private. Ensure that you accept the self-signed certificate for the agent and Browser User Interface.

Follow these steps to log into the Browser User Interface:

  1. Open a browser window.
  2. Go to the following URL: https://ODA-host-ip-address:7093/mgmt/index.html
  3. Get the security certificate (or certificate), confirm the security exception, and add an exception. (A command-line check of the self-signed certificates is sketched after these steps.)
  4. Log in with your Oracle Database Appliance credentials.
    If you have not already set the oda-admin password, then a message is displayed, advising you to change the default password to comply with your system security requirements.
  5. If you have not added an exception for the agent security certificate, then a message about accepting agent certificate is displayed.
  6. Using a different tab in your browser, go to the following URL: https://ODA-host-ip-address:7070/login
  7. Get the security certificate (or certificate), confirm the security exception, and add an exception.
  8. Refresh the Browser User Interface URL : https://ODA-host-ip-address:7093/mgmt/index.html
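
If the browser rejects the connection outright, you can confirm that the self-signed certificates are being served on ports 7093 and 7070 with a standard openssl client; ODA-host-ip-address is a placeholder for your appliance address.

# echo | openssl s_client -connect ODA-host-ip-address:7093 2>/dev/null | openssl x509 -noout -subject -dates
# echo | openssl s_client -connect ODA-host-ip-address:7070 2>/dev/null | openssl x509 -noout -subject -dates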

Note:

If you have issues logging into the Oracle Database Appliance Browser User Interface, for example, with Google Chrome or with browsers on macOS Catalina, then use the workaround described on the official site for that product.

Errors when re-imaging Oracle Database Appliance

Understand how to troubleshoot errors that occur when re-imaging Oracle Database Appliance.

If re-imaging Oracle Database Appliance fails with old header issues, such as errors in storage discovery, in running Oracle Grid Infrastructure root scripts, or in creating the RECO disk group, then use the force mode with cleanup.pl.

# cleanup.pl -f

To ensure that re-imaging is successful, remove the old headers from the storage disks by running the secure erase tool. Verify that the OAK/ASM headers are removed.

# cleanup.pl -erasedata
# cleanup.pl -checkHeader

Retry the re-imaging operation.

Using Oracle Autonomous Health Framework for Running Diagnostics

Oracle Autonomous Health Framework collects and analyzes diagnostic data, and proactively identifies issues before they affect the health of your system.

About Installing Oracle Autonomous Health Framework

Oracle Autonomous Health Framework is installed automatically when you provision or patch to Oracle Database Appliance release 19.23.

When you provision or patch your appliance to Oracle Database Appliance release 19.23, Oracle Autonomous Health Framework is installed in the path /opt/oracle/dcs/oracle.ahf.

You can verify that Oracle Autonomous Health Framework is installed by running the following command:
[root@oak ~]# rpm -q oracle-ahf
oracle-ahf-193000-########.x86_64

Note:

When you provision or patch to Oracle Database Appliance release 19.23, Oracle Autonomous Health Framework automatically provides Oracle ORAchk Health Check Tool and Oracle Trace File Analyzer Collector.
Oracle ORAchk Health Check Tool performs proactive health checks for the Oracle software stack and scans for known problems. Oracle ORAchk Health Check Tool audits important configuration settings for Oracle RAC deployments in the following categories:
  • Operating system kernel parameters and packages
  • Oracle Database parameters, and other database configuration settings
  • Oracle Grid Infrastructure, which includes Oracle Clusterware and Oracle Automatic Storage Management
Oracle ORAchk is aware of the entire system. It checks the configuration to indicate if best practices are being followed.
Oracle Trace File Analyzer Collector provides the following key benefits and options:
  • Encapsulation of diagnostic data collection for all Oracle Grid Infrastructure and Oracle RAC components on all cluster nodes into a single command, which you run from a single node
  • Option to "trim" diagnostic files during data collection to reduce data upload size
  • Options to isolate diagnostic data collection to a given time period, and to a particular product component, such as Oracle ASM, Oracle Database, or Oracle Clusterware
  • Centralization of collected diagnostic output to a single node in Oracle Database Appliance, if desired
  • On-Demand Scans of all log and trace files for conditions indicating a problem
  • Real-Time Scan Alert Logs for conditions indicating a problem (for example, Database Alert Logs, Oracle ASM Alert Logs, and Oracle Clusterware Alert Logs)

Using the Oracle ORAchk Health Check Tool

Run Oracle ORAchk to audit configuration settings and check system health.

Note:

Before running ORAchk, check for the latest version of Oracle Autonomous Health Framework, and download and install it. See My Oracle Support Note 2550798.1 for more information about downloading and installing the latest version of Oracle Autonomous Health Framework.

Running ORAchk on Oracle Database Appliance 19.23 Bare Metal Systems for New Installation

When you provision or upgrade to Oracle Database Appliance 19.23, ORAchk is installed using Oracle Autonomous Health Framework in the directory /opt/oracle/dcs/oracle.ahf.

To run orachk, use the following command:
[root@oak bin]# orachk

When all checks are finished, a detailed report is available. The output displays the location of the report in HTML format and the location of a zip file if you want to upload the report.

Review the Oracle Database Appliance Assessment Report and system health, and troubleshoot any issues that are identified. The report includes a summary and filters that enable you to focus on specific areas; for example, you can choose the filter to show failed checks only, show checks with a Fail, Warning, Info, or Pass status, or any combination.

Running ORAchk on Oracle Database Appliance 19.23 Virtualized Platform

When you provision or upgrade to Oracle Database Appliance 19.23, ORAchk is installed using Oracle Autonomous Health Framework in the directory /opt/oracle.ahf.

To run orachk, use the following command:
[root@oak bin]# oakcli orachk

Generating and Viewing Oracle ORAchk Health Check Tool Reports in the Browser User Interface

Generate Oracle ORAchk Health Check Tool reports using the Browser User Interface.

  1. Log into the Browser User Interface with the oda-admin username and password.
    https://Node0–host-ip-address:7093/mgmt/index.html
  2. Click the Monitoring tab.
  3. In the Monitoring page, on the left navigation pane, click ORAchk Report.
    On the ORAchk Reports page, a list of all the generated ORAchk reports is displayed.
  4. In the Actions menu for the ORAchk report you want to view, click View.
    The Oracle Database Appliance Assessment Report is displayed. It contains details of the health of your deployment, and lists current risks, recommendations for action, and links for additional information.
  5. To create an on-demand ORAchk report: On the ORAchk Reports page, click Create and then click Yes in the confirmation box.
    The job to create an ORAchk report is submitted.
  6. Click the link to view the status of the job. Once the job completes successfully, you can view the Oracle Database Appliance Assessment Report on the ORAchk Reports page.
  7. To delete an ORAchk report: In the Actions menu for the ORAchk report you want to delete, click Delete.

Generating and Viewing Database Security Assessment Reports in the Browser User Interface

Generate and view Database Security Assessment Reports using the Browser User Interface.

  1. Log into the Browser User Interface with the oda-admin username and password.
    https://Node0–host-ip-address:7093/mgmt/index.html
  2. Click the Security tab.
  3. In the Security page, on the left navigation pane, click DBSAT Reports.
    On the Database Security Assessment Reports page, a list of all the generated DBSAT reports is displayed.
  4. In the Actions menu for the ORAchk report you want to view, click View.
    The Oracle Database Security Assessment Report is displayed. It contains details of the health of your deployment, and lists current risks, recommendations for action, and links for additional information.
  5. To create a DBSAT report: On the DBSAT Reports page, click Create and then click Yes in the confirmation box.
    The job to create a DBSAT report is submitted.
  6. Click the link to view the status of the job. Once the job completes successfully, you can view the Oracle Database Appliance Assessment Report on the DBSAT Reports page.
  7. To delete a DBSAT report: In the Actions menu for the DBSAT report you want to delete, click Delete.

Running Oracle Trace File Analyzer (TFA) Collector Commands

Understand the installed location of tfactl and the options for the command.

About Using tfactl to Collect Diagnostic Information

When you provision or upgrade to Oracle Database Appliance 19.23, Oracle Trace File Analyzer (TFA) Collector is installed at /opt/oracle.ahf/bin/tfactl. You can invoke the TFA command-line utility, tfactl, by specifying its full path, /opt/oracle.ahf/bin/tfactl, or simply type tfactl.

You can use the following command options to run tfactl:

 /opt/oracle.ahf/bin/tfactl diagcollect -ips|-oda|-odalite|-dcs|-odabackup|
-odapatching|-odadataguard|-odaprovisioning|-odaconfig|-odasystem|-odastorage|-database|
-asm|-crsclient|-dbclient|-dbwlm|-tns|-rhp|-procinfo|-afd|-crs|-cha|-wls|
-emagent|-oms|-ocm|-emplugins|-em|-acfs|-install|-cfgtools|-os|-ashhtml|-ashtext|
-awrhtml|-awrtext -mask -sanitize
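
For example, a collection limited to DCS and operating system logs, with sensitive data masked, could be run as follows; the component flags are taken from the synopsis above, so adjust them to the data you need to collect.

# /opt/oracle.ahf/bin/tfactl diagcollect -dcs -os -mask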

Table 19-1 Command Options for tfactl Tool

Option            Description
----------------  --------------------------------------------------------------------------------------------------
-h                (Optional) Describes all the options for this command.
-ips              (Optional) Use this option to view the diagnostic logs for the specified component.
-oda              (Optional) Use this option to view the logs for the entire Appliance.
-odalite          (Optional) Use this option to view the diagnostic logs for the odalite component.
-dcs              (Optional) Use this option to view the DCS log files.
-odabackup        (Optional) Use this option to view the diagnostic logs for the backup components for the Appliance.
-odapatching      (Optional) Use this option to view the diagnostic logs for patching components of the Appliance.
-odadataguard     (Optional) Use this option to view the diagnostic logs for Oracle Data Guard component of the Appliance.
-odaprovisioning  (Optional) Use this option to view provisioning logs for the Appliance.
-odaconfig        (Optional) Use this option to view configuration-related diagnostic logs.
-odasystem        (Optional) Use this option to view system information.
-odastorage       (Optional) Use this option to view the diagnostic logs for the Appliance storage.
-database         (Optional) Use this option to view database-related log files.
-asm              (Optional) Use this option to view the diagnostic logs for the Appliance.
-crsclient        (Optional) Use this option to view the diagnostic logs for the Appliance.
-dbclient         (Optional) Use this option to view the diagnostic logs for the Appliance.
-dbwlm            (Optional) Use this option to view the diagnostic logs for the specified component.
-tns              (Optional) Use this option to view the diagnostic logs for TNS.
-rhp              (Optional) Use this option to view the diagnostic logs for Rapid Home Provisioning.
-afd              (Optional) Use this option to view the diagnostic logs for Oracle ASM Filter Driver.
-crs              (Optional) Use this option to view the diagnostic logs for Oracle Clusterware.
-cha              (Optional) Use this option to view the diagnostic logs for the Cluster Health Monitor.
-wls              (Optional) Use this option to view the diagnostic logs for Oracle WebLogic Server.
-emagent          (Optional) Use this option to view the diagnostic logs for the Oracle Enterprise Manager agent.
-oms              (Optional) Use this option to view the diagnostic logs for the Oracle Enterprise Manager Management Service.
-ocm              (Optional) Use this option to view the diagnostic logs for the specified component.
-emplugins        (Optional) Use this option to view the diagnostic logs for Oracle Enterprise Manager plug-ins.
-em               (Optional) Use this option to view the diagnostic logs for Oracle Enterprise Manager deployment.
-acfs             (Optional) Use this option to view the diagnostic logs for Oracle ACFS storage.
-install          (Optional) Use this option to view the diagnostic logs for installation.
-cfgtools         (Optional) Use this option to view the diagnostic logs for the configuration tools.
-os               (Optional) Use this option to view the diagnostic logs for the operating system.
-ashhtml          (Optional) Use this option to view the diagnostic logs for the specified component.
-ashtext          (Optional) Use this option to view the diagnostic logs for the Appliance.
-awrhtml          (Optional) Use this option to view the diagnostic logs for the Appliance.
-awrtext          (Optional) Use this option to view the diagnostic logs for the specified component.
-mask             (Optional) Use this option to choose to mask sensitive data in the log collection.
-sanitize         (Optional) Use this option to choose to sanitize (redact) sensitive data in the log collection.

Usage Notes

You can use Trace File Collector (the tfactl command) to collect all log files for the Oracle Database Appliance components.

You can also use the command odaadmcli manage diagcollect, with similar command options, to collect the same diagnostic information.

For more information about using the -mask and -sanitize options, see the topic Sanitizing Sensitive Information in Diagnostic Collections.

Sanitizing Sensitive Information in Diagnostic Collections

Oracle Autonomous Health Framework uses Adaptive Classification and Redaction (ACR) to sanitize sensitive data.

After collecting copies of diagnostic data, Oracle Trace File Analyzer and Oracle ORAchk use Adaptive Classification and Redaction (ACR) to sanitize sensitive data in the collections. ACR uses a Machine Learning based engine to redact a pre-defined set of entity types in a given set of files. ACR also sanitizes or masks entities that occur in files and directory names. Sanitization replaces a sensitive value with random characters. Masking replaces a sensitive value with a series of asterisks ("*").

ACR currently sanitizes the following entity types:
  • Host names
  • IP addresses
  • MAC addresses
  • Oracle Database names
  • Tablespace names
  • Service names
  • Ports
  • Operating system user names

ACR also masks user data from the database appearing in block and redo dumps.

Example 19-4 Block dumps before redaction

14A533F40 00000000 00000000 00000000 002C0000 [..............,.] 
14A533F50 35360C02 30352E30 31322E37 380C3938 [..650.507.2189.8] 
14A533F60 31203433 37203332 2C303133 360C0200 [34 123 7310,...6] 

Example 19-5 Block dumps after redaction

14A533F40 ******** ******** ******** ******** [****************]
14A533F50 ******** ******** ******** ******** [****************]
14A533F60 ******** ******** ******** ******** [****************] 

Example 19-6 Redo dumps before redaction

col 74: [ 1] 80
col 75: [ 5] c4 0b 19 01 1f
col 76: [ 7] 78 77 06 16 0c 2f 26 

Example 19-7 Redo dumps after redaction

col 74: [ 1] **
col 75: [ 5] ** ** ** ** **
col 76: [ 7] ** ** ** ** ** ** **

Redaction of Literal Values in SQL Statements in AWR, ASH and ADDM Reports

Automatic Workload Repository (AWR), Active Session History (ASH), and Automatic Database Diagnostic Monitor (ADDM) reports are HTML files that contain sensitive entities such as host names, database names, and service names in the form of HTML tables. In addition to these sensitive entities, the reports also contain SQL statements that can include bind variables or literal values from tables. These literal values can be sensitive personal information (PI) stored in databases. ACR processes such reports to identify and redact both the usual sensitive entities and the literal values present in the SQL statements.
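
As an illustration only (the exact output depends on the ACR version and the redaction mode), a captured SQL statement containing a hypothetical sensitive literal such as

SELECT salary FROM employees WHERE last_name = 'Smith'

might appear in a sanitized report with the literal replaced by random characters, for example:

SELECT salary FROM employees WHERE last_name = 'kqzvw'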

Sanitizing Sensitive Information Using odaadmcli Command

Use the odaadmcli manage diagcollect command to collect diagnostic logs for Oracle Database Appliance components. During collection, ACR can be used to redact (sanitize or mask) the diagnostic logs.
odaadmcli manage diagcollect [--dataMask|--dataSanitize]

In the command, the --dataMask option blocks out the sensitive data in all collections, for example, replaces myhost1 with *******. The default is None. The --dataSanitize option replaces the sensitive data in all collections with random characters, for example, replaces myhost1 with orzhmv1. The default is None.
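
For example, to collect diagnostic logs with sensitive values replaced by random characters, you could run the command with the sanitize option shown in the syntax above:

# odaadmcli manage diagcollect --dataSanitize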

Enabling Adaptive Classification and Redaction (ACR)

Oracle Database Appliance supports Adaptive Classification and Redaction (ACR) to sanitize sensitive data.

After collecting copies of diagnostic data, Oracle Database Appliance uses Adaptive Classification and Redaction (ACR) to sanitize sensitive data in the collections. You can use the commands odacli enable-acr and odacli disable-acr to enable or disable ACR across both nodes, not just on the local node.

See Also:

For more information about setting up the staging server for Adaptive Classification and Redaction (ACR), see My Oracle Support note 2882798.1.

Example 19-8 Describing current status of ACR

bash-4.2# odacli describe-acr
Trace File Redaction: Enabled

Example 19-9 Enabling ACR

bash-4.2# odacli enable-acr

Job details
----------------------------------------------------------------
                     ID:  12bbf784-610a-40a8-b409-e74c58bc35aa
            Description:  Enable ACR job
                 Status:  Created
                Created:  April 8, 2021 3:04:13 AM PDT

Example 19-10 Disabling ACR

bash-4.2# odacli disable-acr

Job details
----------------------------------------------------------------
                     ID:  1d69f8b3-3989-4192-bbb9-6518e425061a
            Description:  Disable ACR job
                 Status:  Created
                Created:  April 8, 2021 3:04:13 AM PDT

Example 19-11 Enabling ACR during provisioning of the appliance

You can enable ACR during provisioning of the appliance by adding the acr option to the JSON file used for provisioning. Specify true or false for the field acrEnable in the JSON file. If the acr option is not specified, then ACR is disabled.

"acr": {
    "acrEnable": true
}
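
A minimal sketch of how this snippet is used, assuming a hypothetical provisioning request file named provision_request.json that includes the acr section above:

# odacli create-appliance -r provision_request.json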

Sanitizing Sensitive Information in Oracle Trace File Analyzer Collections

You can redact (sanitize or mask) Oracle Trace File Analyzer diagnostic collections.

Enabling Automatic Redaction

To enable automatic redaction, use the command:

tfactl set redact=[mask|sanitize|none] 

In the command, the mask option blocks out the sensitive data in all collections; for example, it replaces myhost1 with *******. The sanitize option replaces the sensitive data in all collections with random characters; for example, it replaces myhost1 with orzhmv1. The none option does not mask or sanitize sensitive data in collections. The default is none.

Enabling On-Demand Redaction

You can redact collections on-demand, for example, tfactl diagcollect -srdc ORA-00600 -mask or tfactl diagcollect -srdc ORA-00600 -sanitize.

  1. To mask sensitive data in all collections:
    tfactl set redact=mask
  2. To sanitize sensitive data in all collections:
    tfactl set redact=sanitize

Example 19-12 Masking or Sanitizing Sensitive Data in a Specific Collection

tfactl diagcollect -srdc ORA-00600 -mask
tfactl diagcollect -srdc ORA-00600 -sanitize

Redacting and Sanitizing Entities in the BUI

Enable and disable trace file redaction, redact files, and show or hide sanitized entities using the Browser User Interface.

  1. Log into the Browser User Interface with the oda-admin username and password.
    https://Node0–host-ip-address:7093/mgmt/index.html
  2. Click the Security tab.
  3. In the Security page, on the left navigation pane, click Trace File Redaction.
  4. Click the Trace File Redaction Status tab.
    The current ACR status is displayed.
  5. You can enable or disable ACR based on the current ACR status. For example, if the ACR status is Disabled, then click Enable to enable ACR. The job to change the ACR status is submitted.
  6. Click Refresh Status to refresh the ACR status display.
  7. Click the Redact Files tab.
  8. Specify the Input File Path of the file to be redacted. The file must be in .tar, .gz, or .zip format.
  9. Select either Sanitize or Mask for the Redaction Mode.
  10. Click Redact. The job to redact files is submitted.
  11. Click the Show Sanitized Entities tab.
  12. Specify the List of sanitized entities and click Show. The list of sanitized entities is displayed.

Sanitizing Sensitive Information in Oracle ORAchk Output

You can sanitize Oracle ORAchk output.

To sanitize Oracle ORAchk output, include the -sanitize option, for example, orachk -profile asm -sanitize. You can also sanitize existing output after collection by passing in an existing log, HTML report, or a zip file, for example, orachk -sanitize file_name.

Example 19-13 Sanitizing Sensitive Information in Specific Collection IDs

orachk -sanitize comma_delimited_list_of_collection_IDs

Example 19-14 Sanitizing a File with Relative Path

orachk -sanitize new/orachk_node061919_053119_001343.zip 
orachk is sanitizing
/scratch/testuser/may31/new/orachk_node061919_053119_001343.zip. Please wait...

Sanitized collection is:
/scratch/testuser/may31/orachk_aydv061919_053119_001343.zip

orachk -sanitize ../orachk_node061919_053119_001343.zip
orachk is sanitizing
/scratch/testuser/may31/../orachk_node061919_053119_001343.zip. Please wait...

Sanitized collection is:
/scratch/testuser/may31/orachk_aydv061919_053119_001343.zip

Example 19-15 Sanitizing Oracle Autonomous Health Framework Debug Log

orachk -sanitize new/orachk_debug_053119_023653.log
orachk is sanitizing /scratch/testuser/may31/new/orachk_debug_053119_023653.log.
Please wait...

Sanitized collection is: /scratch/testuser/may31/orachk_debug_053119_023653.log

Example 19-16 Running Full Sanity Check

orachk -localonly -profile asm -sanitize -silentforce

Detailed report (html) - 
/scratch/testuser/may31/orachk_node061919_053119_04448/orachk_node061919_053119_04448.html

orachk is sanitizing /scratch/testuser/may31/orachk_node061919_053119_04448.
Please wait...

Sanitized collection is: /scratch/testuser/may31/orachk_aydv061919_053119_04448

UPLOAD [if required] - /scratch/testuser/may31/orachk_node061919_053119_04448.zip

To reverse look up a sanitized value, use the command:
orachk -rmap all|comma_delimited_list_of_element_IDs

You can also use orachk -rmap to look up a value sanitized by Oracle Trace File Analyzer.

Example 19-17 Printing the Reverse Map of Sanitized Elements


orachk -rmap MF_NK1,fcb63u2

________________________________________________________________________________
| Entity Type | Substituted Entity Name | Original Entity Name |
________________________________________________________________________________
| dbname      | MF_NK1                  | HR_DB1               |
| dbname      | fcb63u2                 | rac12c2              |
________________________________________________________________________________

orachk -rmap all

Running the Disk Diagnostic Tool

Use the Disk Diagnostic Tool to help identify the cause of disk problems.

The tool produces a list of 14 disk checks for each node. To display details, where n represents the disk resource name, enter the following command:

# odaadmcli stordiag n

For example, to display detailed information for NVMe pd_00:

# odaadmcli stordiag pd_00

Running the Oracle Database Appliance Hardware Monitoring Tool

The Oracle Database Appliance Hardware Monitoring Tool displays the status of different hardware components in Oracle Database Appliance server.

The tool is implemented with the Trace File Analyzer collector. You can use the tool on both bare metal and virtualized systems. The Oracle Database Appliance Hardware Monitoring Tool reports information only for the node on which you run the command. The information displayed in the output depends on the component that you select to review.

Bare Metal Platform

You can see the list of monitored components by running the command odaadmcli show -h

To see information about specific components, use the command syntax odaadmcli show component, where component is the hardware component that you want to query. For example, the command odaadmcli show power shows information specifically about the Oracle Database Appliance power supply:

# odaadmcli show power

NAME            HEALTH  HEALTH_DETAILS   PART_NO.  	SERIAL_NO.
Power_Supply_0  OK            -          7079395     476856Z+1514CE056G

(Continued)
LOCATION    INPUT_POWER   OUTPUT_POWER   INLET_TEMP         EXHAUST_TEMP
PS0         Present       112 watts      28.000 degree C    34.938 degree C
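
Other hardware components can be queried in the same way. For example, assuming the components are supported on your model, the following commands display cooling (fan) and memory status:

# odaadmcli show cooling
# odaadmcli show memory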

Virtualized Platform

You can see the list of monitored components by running the command oakcli show -h

To see information about specific components, use the command syntax oakcli show component, where component is the hardware component that you want to query. For example, the command oakcli show power shows information specifically about the Oracle Database Appliance power supply:

# oakcli show power

NAME            HEALTH HEALTH_DETAILS PART_NO. SERIAL_NO.          
Power Supply_0  OK      -             7047410   476856F+1242CE0020
Power Supply_1  OK     -              7047410   476856F+1242CE004J

(Continued)

LOCATION  INPUT_POWER OUTPUT_POWER INLET_TEMP         EXHAUST_TEMP
PS0       Present     88 watts     31.250 degree C    34.188 degree C
PS1       Present     66 watts     31.250 degree C    34.188 degree C

Note:

Oracle Database Appliance Server Hardware Monitoring Tool is enabled during initial startup of ODA_BASE on Oracle Database Appliance Virtualized Platform. When it starts, the tool collects base statistics for about 5 minutes. During this time, the tool displays the message "Gathering Statistics…".

Disabling the Browser User Interface

You can disable the Browser User Interface. When the Browser User Interface is disabled, you can manage your appliance only through the command-line interface.

  1. Log in to the appliance:
    ssh -l root oda-host-name
  2. Stop the DCS controller. For High-Availability systems, run the command on both nodes.
    systemctl stop initdcscontroller
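
To re-enable the Browser User Interface later, start the same service again; this assumes the service name shown in the step above, and for high-availability systems you run it on both nodes.

# systemctl start initdcscontroller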

Preparing Log Files for Oracle Support Services

If you have a system fault that requires help from Oracle Support Services, then you may need to provide log records to help Oracle support diagnose your issue.

You can collect diagnostic information for your appliance in the following ways:
  • Use the Bill of Materials report saved in the /opt/oracle/dcs/Inventory/ directory to enable Oracle Support to help troubleshoot errors, if necessary.
  • You can use Trace File Collector (the tfactl command) to collect all log files for the Oracle Database Appliance components.
  • Use the command odaadmcli manage diagcollect to collect diagnostic files to send to Oracle Support Services.
  • Use the Error Correlation report available in the /opt/oracle/dcs/da/da_repo directory.

The odaadmcli manage diagcollect command consolidates information from log files stored on Oracle Database Appliance into a single log file for use by Oracle Support Services. The location of the file is specified in the command output.

Example 19-18 Collecting log file information for a time period, masking sensitive data

# odaadmcli manage diagcollect --dataMask --fromTime 2019-08-12 --toTime 2019-08-25
DataMask is set as true
FromTime is set as: 2019-08-12
ToTime is set as: 2019-08-25
TFACTL command is: /opt/oracle/tfa/tfa_home/bin/tfactl
Data mask is set.
Collect data from 2019-08-12
Collect data to 2019-08-25