About Data Refresh Performance

Data refresh in Oracle Fusion Data Intelligence is a complex process. Some day-to-day variation in refresh duration is expected.

A typical data refresh includes the following processes:
  • Extracting data from Oracle Fusion Cloud Applications or other third-party sources.
  • Transforming the data into the prebuilt schema ready for analytics.
  • Loading the data into Oracle Autonomous AI Lakehouse.
  • Sharing data with external targets using the Data Share functionality.

Factors Affecting Data Refresh Performance

The time required to refresh the data and all the resulting key metrics and dashboards depends on several factors.

These include:
  • Source system availability – If the source system is unavailable, the refresh pauses and waits, and the system sends a notification, for example, when it can determine that the access issue was caused by invalid credentials. The Request History and Warehouse Refresh Statistics display whether a refresh was queued due to source availability.
  • Resource availability on the source system – If the source system resources are consumed by processes other than the Oracle Fusion Data Intelligence extraction process, it will cause delays in the Oracle Fusion Data Intelligence data refreshes. This is especially true for Oracle Fusion Cloud Applications.
  • Size and complexity of the source data – In general, the volume of data processed during a refresh is a good indicator of how long the refresh will take, though this isn’t always the case. The Warehouse Refresh Statistics include the number of published records, which can indicate the overall volume. See View the Warehouse Refresh Statistics.
  • Activated functional areas – In general, the more functional areas you activate, the more data there is to process, and hence the longer the refresh.
  • Custom SQL queries – If custom queries are running on Oracle Autonomous AI Lakehouse, they can consume resources needed by the pipeline, which may impact overall refresh performance.
  • Table locks by custom downstream processing – If any downstream processes acquire locks on the Oracle Fusion Data Intelligence tables and hold them for too long, they can delay the data refresh performed by Oracle Fusion Data Intelligence.
  • Customizations made on the source system objects – If the customizations require a full load for those objects, then refresh will take longer.
  • Source and target system maintenance – System maintenance activities such as patching can pause or delay pipeline processes, which may extend the overall refresh time.
  • Custom data pipelines – If complex transformation logic is used in the Data Augmentations Scripts for custom data pipelines, it may affect the overall refresh time.
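Several of these factors reduce to contention for a shared resource: a refresh step cannot proceed until a busy source system, a competing query, or a downstream lock frees up. As a minimal, self-contained sketch of the table-lock case (the names `downstream_etl` and `refresh_step` are illustrative stand-ins, not part of any Oracle API), the following Python models a load step that must wait for a downstream job to release its lock:

```python
import threading
import time

table_lock = threading.Lock()  # stands in for a lock on a warehouse table

def downstream_etl(hold_seconds: float) -> None:
    # A custom downstream job that acquires the lock and holds it too long.
    with table_lock:
        time.sleep(hold_seconds)

def refresh_step() -> float:
    # The pipeline's load step blocks until the lock is released;
    # returns how long it had to wait.
    start = time.monotonic()
    with table_lock:
        pass  # the actual load would happen here
    return time.monotonic() - start

job = threading.Thread(target=downstream_etl, args=(0.5,))
job.start()
time.sleep(0.1)          # let the downstream job grab the lock first
waited = refresh_step()  # blocked for roughly the remaining hold time
job.join()
print(f"refresh waited {waited:.1f}s for the lock")
```

The longer a downstream job holds its locks, the longer the refresh step waits, which is why the considerations below recommend scheduling such jobs outside the refresh window.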

Considerations for Data Refresh Performance

Given the variety of factors that influence data refresh, completion times will differ from day to day.

See Factors Affecting Data Refresh Performance.

Pipeline performance is a shared responsibility: Oracle provides a scalable platform, configuration controls, and usage guidance, while you manage data volumes, source-system readiness, transformation design (for any custom data pipelines), scheduling, and environment-specific configuration. Optimal results require ongoing involvement in configuration, proactive monitoring of the factors affecting data refreshes, and adherence to Oracle-recommended practices and tools.

To help keep refresh performance optimal, consider the following actions:
  • Activated Functional Areas - Activate only the functional areas required for your analytics business needs. Start with what’s necessary and add more as new requirements arise, rather than enabling everything upfront. See Activate a Data Pipeline for a Functional Area. Remove any functional areas not critical for your analytics. See Deactivate a Data Pipeline for a Functional Area.
  • Modules and Data Tables in Frequent Data Refresh – Add only the modules that are required for your intraday operational analytics business needs using the Frequent Data Refresh functionality. Remove any modules not critical for your analytics. See Configure Frequent Data Refresh V2 (Preview) and Performance Considerations for Frequent Data Refresh.
  • Initial Extract Date – This setting controls what data is extracted, transformed, and stored in the warehouse. Configure it carefully based on actual business requirements. Instead of an absolute Initial Extract Date, consider using a Relative Initial Extract Date. See About Pipeline Parameters. This is especially impactful if you're using the Configurable Account Analysis functionality. See Configurable Account Analysis.
  • Custom Extracts – If custom data extractions are running on the source system, they can compete for resources and delay the data refresh. For example, custom Business Intelligence Cloud Connector (BICC) extractions running in parallel with the Oracle Fusion Data Intelligence extraction process may slow down Oracle Fusion Data Intelligence refreshes. To avoid delays, ensure no other custom refresh jobs are running during the Oracle Fusion Data Intelligence data refresh windows. When a refresh takes longer than expected, use the Warehouse Refresh Statistics to determine whether custom extracts caused delays during the data refresh. See View the Warehouse Refresh Statistics.
  • High Service Sessions on Oracle Autonomous AI Lakehouse – The data refresh process depends on available warehouse resources. If High service sessions are running, they can consume capacity and delay publishing data to the warehouse. See Usage Guidelines for Autonomous AI Lakehouse Associated with Oracle Fusion Data Intelligence. When a refresh takes longer than expected, use the Warehouse Refresh Statistics to determine whether any High sessions were active during the data refresh. See View the Warehouse Refresh Statistics.
  • Downstream Custom ETL Processes – The data refresh process requires uninterrupted access to the tables in the data warehouse. Any custom ETL processes accessing these tables and acquiring long-held locks should be scheduled outside the Oracle Fusion Data Intelligence refresh window.
  • Prioritized Refresh – If you want certain data to be refreshed first within the incremental refresh, you can select those warehouse tables for prioritized refresh. However, use this only for a limited set of tables that are truly critical to refresh before other datasets. See Prioritize Datasets for Incremental Refresh (Preview).
  • Functional Area Schedule Override – Review the functional areas you’ve activated and consider staggering refresh times based on business need. This can reduce processing load from daily incremental pipeline refreshes. See Override Data Pipeline Schedules for Functional Areas (Preview).
  • Fusion Augmentations Source – If your data augmentations are mainly used for downstream integrations, consider using Fusion Augmentations Source for Datasets. With this option, augmentation refreshes can run in parallel with the daily incremental refreshes for functional areas in Oracle Fusion Cloud Applications. This improves incremental refresh performance for the Oracle Fusion Cloud Applications source. It also enables a different refresh frequency for data augmentations using the Fusion Augmentations Source. Schedule the refreshes based on Oracle Fusion Cloud Applications source and Fusion Augmentations Source in a staggered manner to avoid resource contention on Oracle Fusion Cloud Applications. See Perform Data Augmentations with Fusion Augmentations Source (Preview).
  • Refresh Stage Priority Change on Request – For scheduled incremental refresh, you can file a service request to move certain modules from the Primary stage to the Secondary stage. This enables data for more critical business needs to be made available sooner. See Schedule Incremental Data Refresh.
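To illustrate why a Relative Initial Extract Date keeps data volume bounded while an absolute date does not, here is a small Python sketch of the underlying date arithmetic. The helper name and the first-of-month convention are assumptions made for the example, not Oracle Fusion Data Intelligence behavior; in the product, you configure this through pipeline parameters.

```python
from datetime import date

def relative_initial_extract_date(today: date, months_back: int) -> date:
    # Roll back a number of whole months, clamping to the 1st of the
    # month (an illustrative convention for extract windows).
    month_index = today.year * 12 + (today.month - 1) - months_back
    return date(month_index // 12, month_index % 12 + 1, 1)

# An absolute date stays fixed, so the extracted history keeps growing:
absolute = date(2020, 1, 1)

# A relative date moves forward with the calendar, keeping volume bounded:
print(relative_initial_extract_date(date(2025, 6, 15), 24))  # → 2023-06-01
```

With an absolute date such as 2020-01-01, the extraction window widens every day; a relative setting such as "24 months back" keeps the window, and hence the refresh workload, roughly constant over time.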
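Several of the recommendations above (custom extracts, downstream ETL, High service sessions) amount to keeping custom jobs out of the refresh window. As a hedged sketch of that scheduling check, the Python below tests whether the current time falls inside a daily window, including windows that cross midnight; the window times and function name are invented for illustration, and Oracle Fusion Data Intelligence does not expose such an API.

```python
from datetime import datetime, time

def in_refresh_window(now: datetime, start: time, end: time) -> bool:
    # True when `now` falls inside a daily refresh window.
    # Handles windows that cross midnight (e.g., 22:00-04:00).
    t = now.time()
    if start <= end:
        return start <= t < end
    return t >= start or t < end

# Example: hold a custom ETL job while a nightly refresh window
# (illustrative times) is open.
window_start, window_end = time(22, 0), time(4, 0)
if in_refresh_window(datetime(2025, 6, 15, 23, 30), window_start, window_end):
    print("refresh window open; defer downstream ETL")
```

A custom scheduler could call such a check before launching BICC extracts or downstream ETL, retrying after the window closes.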