Previous AHF 23.x Releases

AHF Release 23.11

Upgraded Java Version

AHF 23.11.0 is shipped with Java version 11.0.21.

Option to View Operating System and Database Parameter Values

AHF 23.11 includes a new command, tfactl param, to view the values of specified operating system and database parameters.

Note:

tfactl run param will be deprecated in a future release. It will be replaced by tfactl param.
For more information, see:
  • tfactl run
  • tfactl param

Database Anomalies Advisor

AHF Insights adds the Database Anomalies Advisor, which shows database anomalies, their cause, and recommended actions.

AHF detects database anomalies and identifies the cause and corrective action. This is now made available via AHF Insights in the new Database Anomalies Advisor.

The Database Anomalies Advisor shows a summary timeline of anomalies for hosts and database instances. Findings can be drilled into to understand the cause and recommended action.

To view the Database Anomalies Advisor and its recommendations, run ahf analysis create --type insights, open the resulting report, and click Database Anomalies Advisor.

AHF Support for Oracle Linux 9

AHF adds support for OL9 for both Intel-64/AMD-64 (x86_64) and Arm (aarch64).

Oracle Linux is an optimized and secure operating environment for application development and deployment. Oracle Linux 9 provides kernel, performance, and security enhancements.

AHF is now supported on Oracle Linux 9 on both Intel-64/AMD-64 (x86_64) and Arm (aarch64).

For more information, see Announcing Oracle Linux 9 general availability.

AHF Insights Space Usage Analytics for Diagnostic Destinations

A new Space Analysis section, added in release 23.11, renders Disk Utilization and Diagnostic Space Usage data in visual and tabular formats.

You can view the directory structure and space consumed by directories and files in a visual and tree format across all diagnostic directories and nodes.

Get Insights from Exawatcher Data

AHF Insights now includes Exawatcher data.

Exawatcher is an Exadata-specific tool that collects performance data from Exadata storage cells. Previously, Exawatcher data was not available within AHF Insights.

AHF Insights now includes Exawatcher data, in the same easy to explore interface as all other diagnostic data.

Insights Timeline Includes Patch Information

AHF Insights timeline now includes details about when patches were applied.

AHF Insights provides a bird's-eye view of your entire system, with the ability to spot problems, drill into the root cause, and understand how to resolve them.

When triaging issues, it can be useful to understand when patches were applied.

The AHF Insights Timeline now shows datapoints highlighting when new patches were applied. In addition, several other usability improvements have been added:

  • The timeline can be viewed in a Database Faceted format.
  • Operating System Issues data has been rounded to 2 decimal places in the Report section tables.
  • Node names in the drop-down selections are sorted alphabetically.

AHF Release 23.10

Using the exadcli Utility to Collect Cell Metric Data for Guest VMs (domUs)

exadcli enables you to issue an ExaCLI command to be run on multiple remote nodes. Remote nodes are referenced by their host name or IP address.

For more information, see Using the exadcli Utility to Collect Cell Metric Data for Guest VMs (domUs).

Option to Set a Custom Port to Upload Diagnostics

Starting with AHF 23.10, you can configure a custom port while setting ahfctl setupload parameters.

If you do not specify a port, then 443 is used by default. You can set a port number in the range of 0 - 65535.
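
The port range above can be sanity-checked before configuring the upload endpoint. The following helper is a hypothetical sketch (the valid_port name is illustrative only; ahfctl performs its own validation):

```shell
# Hypothetical pre-flight helper; valid_port is illustrative only and
# ahfctl performs its own validation of the configured port.
valid_port() {
  case $1 in
    ''|*[!0-9]*) return 1 ;;   # reject empty or non-numeric input
  esac
  [ "$1" -le 65535 ]           # valid TCP/UDP ports range from 0 to 65535
}
valid_port 8443 && echo "port ok"    # prints "port ok"
```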

Option to Include Profiles While Running AHF Compliance Checks

Starting with AHF 23.10, you can use -includeprofile to specify a comma-delimited list of profiles to add profile-specific checks to the existing checks list.

ahfctl compliance -includeprofile profile1,profile2...

orachk -includeprofile profile1,profile2...

exachk -includeprofile profile1,profile2...

Note:

You cannot:

  • use -includeprofile and -profile options together
  • use -includeprofile and -excludeprofile options together

Use the -profile option to specify a comma-delimited list of profiles to run only the checks in the specified profiles.

Use the -excludeprofile option to specify a comma-delimited list of profiles to exclude from the compliance check run.

AHF Insights Support for Larger Collection Intervals

Starting with AHF 23.10, you can generate an Insights report for a time period of up to 12 hours.

In addition, improvements have been made to the Operating System Issues section. You will now be able to view the data in problematic time ranges in plots with more data points.

The problematic time ranges will have the following reading intervals:
  • 5 seconds for ranges less than 1 minute
  • 30 seconds for ranges more than 1 minute

The number of data points in plots under the Operating System Issues section is adjusted dynamically to keep report generation time optimal.

Data points are reduced for longer time ranges, with the following reading intervals:
  • 1 minute for intervals up to 4 hours
  • 3 minutes for intervals greater than 4 hours and less than 12 hours
  • 5 minutes for intervals greater than 12 hours
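
The documented thresholds can be summarized as a simple mapping from report duration to reading interval. This is an illustrative sketch only (AHF applies the logic internally, and the treatment of a range of exactly 12 hours is an assumption here):

```shell
# Illustrative mapping of report duration (in hours) to the plot reading
# interval, per the documented thresholds. Not an AHF command.
reading_interval() {
  hours=$1
  if [ "$hours" -le 4 ]; then
    echo "1 minute"
  elif [ "$hours" -lt 12 ]; then
    echo "3 minutes"
  else
    echo "5 minutes"
  fi
}
reading_interval 2    # prints "1 minute"
reading_interval 8    # prints "3 minutes"
```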

AHF Insights User Experience Improvement

The Report tab in the Operating System Issues section has been revamped to provide a seamless experience.

With Report view, explore the findings in a drop-down fashion with a full widescreen view.

You can:
  • view the Event information in a subplot within the Summary Timeline Gantt Chart
  • explore the top ranked metrics in tables under a problem finding in a visual format
  • view the metrics associated with the problem finding in a visual format
  • drill down into the detailed state of the system at a specific problematic point in time under the 'Problematic Snapshots' section. Problem-specific system snapshots are organized into drop-downs ordered by problem timestamp

Terminal Releases of AHF for Old Platforms

Several older operating systems are approaching their end of life; as a result, AHF is announcing terminal releases for them.

For more information, see Unsupported platforms.

New GoldenGate Diagnostic Collection Component

AHF diagcollect now includes a new component for GoldenGate.

AHF has a long-standing ability to collect GoldenGate diagnostics via an SRDC (Service Request Data Collection). However, the GoldenGate SRDC collected logs irrespective of the timeframe and copied all files matching the file pattern. This resulted in collecting extra logs that were not required for diagnostics.

GoldenGate has now been added as a new diagcollect component, allowing AHF to discover the GoldenGate directories, inventory the files, and store them in BDB. This enables collections based on timeframe, which gather only the necessary logs and produce faster, smaller diagnostic collections.

To use the goldengate component, run:
tfactl diagcollect -goldengate -last 1h -noclassify

New Oracle ORAchk and Oracle EXAchk Best Practice Checks

Release 23.10 includes the following new Oracle ORAchk and Oracle EXAchk best practice checks.

Best Practice Checks Common to Both Oracle ORAchk and Oracle EXAchk

Oracle ORAchk Specific Best Practice Checks

Oracle EXAchk Specific Best Practice Checks

  • Oracle High Availability Services Automatic Startup Configuration
  • CHECK FOR EXADATA CRITICAL ISSUE EX80
  • CHECK FOR EXADATA CRITICAL ISSUE EX81
  • CHECK FOR EXADATA CRITICAL ISSUE EX82
  • CHECK FOR EXADATA CRITICAL ISSUE DB52
  • Verify number of inactive patches for database home
  • Verify number of inactive patches for Grid Infrastructure home

All checks can be explored in more detail via the Health Check Catalogs:

AHF Release 23.9

Enhancement to Controlling the Behavior of Oracle ORAchk or Oracle EXAchk Daemon

AHF 23.9 includes a new command option reset to change the behavior of Oracle ORAchk or Oracle EXAchk daemon during autostart, autostop, and upgrade.

  • exachk -autostart reset, orachk -autostart reset, or ahfctl compliance -autostart reset: Starts and loads the default schedulers.
  • exachk -autostop unset, orachk -autostop unset, or ahfctl compliance -autostop unset: Removes all default unmodified schedulers.

Easier to Manage Audit Dump Logs

The AHF Managelogs feature adds the ability to manage audit dump logs.

The AHF Managelogs feature purges logs from default locations like the Grid Infrastructure and all Database Automatic Diagnostic Repository (ADR) destinations.

To do this purging, Managelogs uses the Database Automatic Diagnostic Repository (ADR); however, ADR does not manage audit dump files. As a result, audit dump files can grow and consume excessive space.

The Managelogs feature has been expanded to optionally also include management of audit dump files for Grid Infrastructure and Database.

Configure automatic log purging

  • Configure auto purge:
    tfactl set manageLogsAutoPurge=ON
  • Include audit dumps:
    tfactl set managelogs.adump=ON
  • Set the frequency of purging (defaults to 60 minutes):
    tfactl set manageLogsAutoPurgeInterval=<n>
  • Configure how old logs must be for them to be purged (default 30 days):
    tfactl set manageLogsAutoPurgePolicyAge=<d|h>

Purge logs on-demand

  • Enable audit dumps:
    tfactl set managelogs.adump=ON
  • Check the usage for the audit dump destination:
    tfactl managelogs -show usage
  • Check the variation for the audit dump destination:
    tfactl managelogs -show variation
  • Purge audit dump files along with other destinations managed by managelogs:
    tfactl managelogs -purge

Enhancement to ahfctl setupgrade and ahfctl unsetupgrade to Store or Remove autoupdate Configurations

A new option -autoupdate has been added to ahfctl setupgrade and ahfctl unsetupgrade.

  • To store autoupdate configurations, run, for example:
    ahfctl setupgrade -autoupgrade on -swstage /opt/oracle.ahf -frequency 1 -autoupdate on
  • To turn on autoupdate configurations, run:
    ahfctl setupgrade -autoupdate on
  • To turn off autoupdate configurations, run:
    ahfctl setupgrade -autoupdate off
  • To unset autoupdate configurations, run:
    ahfctl unsetupgrade -autoupdate

Faster Creation of Diagnostic Collections with Insights Reports

AHF TFA collections that include Insights reports are now created faster.

AHF Insights reports can be generated stand-alone using the command ahf analysis create --type insights. Alternatively, a TFA diagnostic collection can be created with an AHF Insights report included by adding the -insight option to the existing -diagcollect command.

Creating the AHF Insights report often requires analysis of existing zipped diagnostics. Unzipping and processing the collections is CPU intensive and can be slow.

This process of analyzing the diagnostic collection to generate the AHF Insights report has been streamlined and performance improved. Timings will vary based on the type of collection being performed and the systems involved.

An example baseline from testing shows the following improvement in the time taken to generate the included AHF Insights report:

  • 23.8: tfactl diagcollect -asm -crs -os -tns -insight -last 1h >> 6.8 seconds
  • 23.9: tfactl diagcollect -asm -crs -os -tns -insight -last 1h >> 1 second

Quicker Grid Infrastructure Problem Resolution with CVU Diagnostics

Cluster Verification Utility (CVU) diagnostic files are included in AHF diagnostic collections.

Because CVU (Cluster Verification Utility) diagnostic files contain periodic Grid Infrastructure configuration information and critical diagnostic reports, they are often required for diagnosing Grid Infrastructure problems.

AHF now collects all files under the following CVU locations:

  • <GI_BASE>/crsdata/<node>/cvu/diagnostics/cvu_diag_report.txt
  • <GI_BASE>/crsdata/@global/cvu/baseline/cvures/cvusnapshot*.zip

To include the CVU diagnostic files, add the -cvu component to the diagcollect command.

For example:
tfactl diagcollect -cvu -last 1h -noclassify

By default, AHF includes CVU in CRS or Database collections. For example, both of these automatically include CVU diagnostics:

  • tfactl diagcollect -crs -last 1h -noclassify
  • tfactl diagcollect -database orcl -last 1h -noclassify

New Oracle ORAchk and Oracle EXAchk Best Practice Checks

Release 23.9 includes the following new Oracle ORAchk and Oracle EXAchk best practice checks.

Oracle ORAchk Specific Best Practice Checks

  • Check asmappl.config consistency across nodes for ODA
  • Verify clusterware ADVM volume resources configuration
  • Verify printk logging configuration

Oracle EXAchk Specific Best Practice Checks

  • Verify RoCE cabling and switch ports assignment
  • Check file S_CRSCONFIG_<NODE>_ENV.TXT for consistent limit values across all nodes in the cluster
  • Verify DSA authentication is not supported for SSH equivalency

All checks can be explored in more detail via the Health Check Catalogs:

AHF Release 23.8

Easier to Manage Best Practice Compliance

AHF compliance checks from ORAchk and EXAchk are now fully integrated into AHF Insights Best Practice section.

AHF has thousands of Best Practice Compliance Checks, which are run automatically by AHF ORAchk and EXAchk. The results of these checks are viewable in HTML reports and output in JSON and XML for consumption into other tools. In addition, all Best Practice Compliance Checks are fully integrated into AHF Insights for running on-demand.

AHF Insights makes it easy to quickly see the Health Score, understand where systems are out of compliance and then take the necessary corrective action.

With this enhancement, you can:
  • Explore the best practice data in a visual format.
  • Filter best practices across different statuses through the visualization and the Status drop-down.
  • Search checks from all sections of best practice report.
  • View the best practice report in a vertical fashion.
  • See the health score with a visual distribution of checks that have failed.
Continue to use the ORAchk / EXAchk commands for automated scheduled runs, but for on-demand compliance investigation, generate an AHF Insights report:
ahf analysis create --type insights

For more information, see Compliance Checking with Oracle ORAchk and Oracle EXAchk and Best Practice Issues.

Enhancements to the AHF Insights Interface Design and Usability

AHF 23.8 includes the following enhancements to the user interface to make it more intuitive and easier to use.

Note:

The plotly.js CDN dependency has been removed, to support customers using AHF Insights in restrictive environments.

You can now:

  • Copy data in text format to the clipboard to paste into the SR body when raising a service request.
    A Copy button is included in the following sections of the report:
    • Cluster
    • Databases
    • Database Servers
    • Storage Servers
    • Fabric Switches
    • Recommended Software
  • Spot the disks that have anomalies. In the Operating System Issues tab, under Local IO, click Disk to view Disk Metrics. Disks with anomalies are marked with an X.
  • Explore process aggregate from operating system details in a more intuitive way.
    • Process aggregates are demarcated per instance group, such as Databases, ASM, APX (Apex), IOS, Clusterware, and so on.
    • Legends are specific to each category rather than a single legend for all categories.

Upload AHF Insights Report Automatically to Object Store or Pre-Authenticated URL (PAR)

Upload AHF Insights reports automatically for centralized monitoring if an Object Store or a Pre-Authenticated URL (PAR) is configured as part of AHF.

Uploading AHF Insights reports helps Oracle Cloud Operations to identify, investigate, track, and resolve system health issues and divergences in best practice configurations quickly and effectively.

Oracle Autonomous Database on Dedicated Exadata Infrastructure and Oracle SaaS

To set the REST endpoint (Object Store), run:
ahfctl setupload -name oss -type https -user <user> -url <object_store> -password
To upload the AHF Insights report to the Object Store, run:
ahf analysis create --type insights

Oracle Exadata Database Service on Dedicated Infrastructure (ExaDB-D) and Oracle Base Database Service

To upload AHF Insights report to PAR location, run:
tfactl diagcollect -insight -last 1h -par <par_url>
tfactl insight -last 1h -par <par_url>

Automate the Generation of AHF Insights Reports Using AHF Cron

Schedule cron jobs to generate AHF Insights report.

Note:

The AHF Insights report will be generated every Monday at 3 a.m.
  • To get cron details:
    tfactl get cron
    # tfactl get cron
      .----------------------------------------------.
      |                <hostname>                    |
      +--------------------------------------+-------+
      | Configuration Parameter              | Value |
      +--------------------------------------+-------+
      | Enable/disable the TFA cron ( cron ) | OFF   |
      '--------------------------------------+-------'
  • To enable cron:
    tfactl set cron=on
    # tfactl set cron=on
      Successfully set cron=ON
      .----------------------------------------------.
      |                <hostname>                    |
      +--------------------------------------+-------+
      | Configuration Parameter              | Value |
      +--------------------------------------+-------+
      | Enable/disable the TFA cron ( cron ) | ON    |
      '--------------------------------------+-------'
  • To reload cron with modifications:
    tfactl refreshconfig modifycron -enable true -id <ID> -validFor all
    # tfactl refreshconfig modifycron -enable true -id id001 -validFor all
    modifycron() completed successfully.
  • To list existing cron details:
    tfactl refreshconfig listcrons
    # tfactl refreshconfig listcrons
      TFA CRON item:
      Name:     id001
      Command:  ahf analysis create --type insights --last 5m
      Schedule: 0 3 * * 1
  • To turn off cron:
    tfactl set cron=off
    # tfactl set cron=off
      Successfully set cron=OFF
      .----------------------------------------------.
      |                <hostname>                    |
      +--------------------------------------+-------+
      | Configuration Parameter              | Value |
      +--------------------------------------+-------+
      | Enable/disable the TFA cron ( cron ) | OFF   |
      '--------------------------------------+-------'
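
The listcrons output above shows the schedule 0 3 * * 1, which uses standard cron field order. A quick decode (plain POSIX shell, not an AHF command) shows why the report is generated every Monday at 3 a.m.:

```shell
# Decode of the default schedule "0 3 * * 1" using standard cron field
# order: minute, hour, day-of-month, month, day-of-week (1 = Monday).
set -f                      # disable globbing so "*" survives word splitting
set -- 0 3 '*' '*' 1
echo "minute=$1 hour=$2 day-of-month=$3 month=$4 day-of-week=$5"
# prints "minute=0 hour=3 day-of-month=* month=* day-of-week=1"
```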

Guided Resolution of Database Performance Problems Caused by Noisy Neighbors

AHF Balance no longer requires a GI Home and now works with any Oracle Home.

Database CPU use is limited by the database CPU_COUNT parameter. When these limits add up to more than the number of CPUs on a machine, noisy-neighbor problems are possible.

AHF Balance analyzes database CPU configuration and historical CPU usage data from Enterprise Manager. The high-level results of this analysis are shown in the ORAchk/EXAchk MAA Score Card.

Further reports can be run to:

  • Get an overview of possible noisy neighbors across the fleet.
  • See detailed information about a specific database.
  • Generate a corrective action plan.
To use AHF Balance:
  • Configure AHF Balance to analyze historical CPU usage from Enterprise Manager’s repository database:
    ahf configuration set --type impact --connect-string <EM-DATABASE-CONNECT-STRING> --user-name <USER-NAME>

    Note:

    Ensure that the connect string does not contain any spaces.
  • Run a fleet-wide analysis to create a detailed AHF Balance report to understand noisy neighbors and the improvements possible by changing CPU_COUNT settings:
    ahf analysis create --type impact --scope fleet --name <FLEET_NAME>
  • Run a cluster-level analysis to get a detailed corrective action plan:
    ahf analysis create --type impact --scope cluster

For more information, see Data Source.

New Oracle ORAchk and Oracle EXAchk Best Practice Checks

Release 23.8 includes the following new Oracle ORAchk and Oracle EXAchk best practice checks.

Best Practice Checks Common to Both Oracle ORAchk and Oracle EXAchk

  • Verify health of data dictionary for multitenant database
  • Verify health of data dictionary for non-multitenant database

Oracle ORAchk Specific Best Practice Checks

  • Oracle Database recommendation for audit settings
  • Oracle Database unified auditing recommendation

Oracle EXAchk Specific Best Practice Checks

  • Check for CachedBy and CachingPolicy GridDisks attributes
  • Check for tainted kernel by non-Oracle modules and third-party security software installed from package

All checks can be explored in more detail via the Health Check Catalogs:

AHF Release 23.7

Easier Patch Management with AHF Insights

AHF Insights now includes a new patching section showing Database and GI patches.

Managing patches can be difficult. It requires the ability to:

  • Keep track of which individual patches are applied, to which hosts, and when.
  • Spot where you’ve got gaps in patches.
  • Understand which bugs the various patches fix.

AHF Insights now makes this a whole lot easier with the new Patching Information section. The Patching Information section shows Database and GI patches per host and Oracle Home, providing easy understanding of which patches are applied and where. There’s also a new patch timeline, which visualizes patch information showing when patches were applied. Gaps or inconsistencies in patching are highlighted across nodes for the same home. Bugs and relevant patch information can be quickly searched and viewed via interactive reports.

AHF Insights Go Mobile

AHF Insights is now mobile responsive and optimized for ease of reading.

People rely on AHF Insights to get a top-down system view, see when problems occur, understand the causes, and how to fix them. Now, AHF Insights can be viewed on a mobile phone. Navigate system topology, drill into problems, and get recommendations from anywhere when on the go. To view graphs just tilt to landscape to get full screen metric immersion.

In addition, AHF Insights has several improvements to make it easier to use and faster to find important information. Various AHF Insights sections have now been optimized to provide default viewing options, which make it even easier and faster to explore data.

  • The Cluster Section now shows Database homes ordered by Database Version and Database Homes are expanded by default.
  • The Database Section shows CDB names sorted alphabetically by default.
  • The Operating System Issues Section has rearranged and added new data labels and the IO and Network details can now be configured.

Easier Operation on Exadata Dom0

On Exadata Dom0, AHF installations can be converted from standalone (extract) to typical, and /EXAVMIMAGES is now used as the default data directory.

AHF provides multiple installation methods:

  • Standalone: Extracts only the AHF bits.
  • Typical: Performs full install including configuring scheduling for important features like compliance checking.

Previously, changing an AHF installation from standalone to typical required an uninstall followed by a fresh install. Now, any upgrade on Exadata Dom0 of a Standalone installation will prompt to convert to Typical, and any installation will prompt to start the scheduler if it is not already running. Existing AHF installations can be converted from Standalone to Typical during scripted upgrades by using the -upgradetotypical option.

On Exadata Dom0 the default installation location of /opt can get quickly filled by collections.

Now, fresh AHF installations on Exadata Dom0 use /EXAVMIMAGES as the default data directory. Additionally, auto upgrades as either root or a user within the Platinum role will automatically move the data directory to be under /EXAVMIMAGES.

For more information, see Convert AHF Standalone (default) Installation to Typical Installation.

Faster Redaction of Diagnostic Collections

Diagnostic collections can now be redacted faster by increasing the CPU allocation to ACR.

AHF ships with ACR (Adaptive Classification and Redaction) for the purpose of sanitizing sensitive data. Redaction involves scanning the full contents of every file within a collection, so it is very CPU intensive. For this reason, certain limits are in place within AHF to ensure excessive CPU is not used.

All AHF processes run under a CGroups setting, which caps the maximum CPU usage at the lower of either 4 CPUs or 75% of available CPUs. Additionally, there is a specific cap on ACR to only use a maximum of 20% of available CPU.
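
As a worked example of the default cap described above (the lower of 4 CPUs or 75% of available CPUs), the following sketch computes the limit for a given CPU count. The function name is illustrative only, not part of AHF, and integer arithmetic is used for simplicity:

```shell
# Sketch of the documented default CGroups cap: min(4, 75% of available
# CPUs). Illustrative only; AHF computes this internally.
cgroup_cpu_cap() {
  ncpu=$1
  threequarters=$(( ncpu * 75 / 100 ))
  if [ "$threequarters" -lt 4 ]; then
    echo "$threequarters"
  else
    echo 4
  fi
}
cgroup_cpu_cap 2     # prints "1" (75% of 2 CPUs, rounded down)
cgroup_cpu_cap 16    # prints "4" (capped at 4 CPUs)
```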

In some environments, however, customers have large CPU resources and want to use more CPU so that redaction completes quickly. This can now be accomplished with a two-phase process:

Firstly, increase the AHF CGroup limit above the normal 75% limit by using the -force option:
ahfctl setresourcelimit -resource cpu -value <cpu_count> -force

For more information about setting resource limit, see ahfctl setresourcelimit.

Secondly, use the -acrprocesscount option to set the number of ACR processes that will be used within the diagnostic collection command:
tfactl diagcollect <option> <-sanitize | -mask> -acrprocesscount <cpu_count>

For example, tfactl diagcollect -last 5m -acrprocesscount 3 -sanitize

For more information on redaction of AHF collections, see Sanitizing Sensitive Information in Oracle Trace File Analyzer Collections and tfactl diagcollect.

Caution:

Most customers should not perform redaction in a production environment. Instead, set up a staging server for ACR.

New Oracle ORAchk and Oracle EXAchk Best Practice Checks

Release 23.7 includes the following new Oracle ORAchk and Oracle EXAchk best practice checks.

Oracle ORAchk Specific Best Practice Checks

  • Verify number of inactive patches for Grid Infrastructure home
  • Verify number of inactive patches for database home

All checks can be explored in more detail via the Health Check Catalogs: