3 Configure Oracle Fusion Analytics Warehouse Data

As the cloud account administrator with the Functional Administrator or System Administrator application role, you specify the data load and reporting configuration details, and you create data pipelines for functional areas. These pipelines determine how the source data from Oracle Applications Cloud is loaded into and displayed in the data warehouse.

For example, you might specify the Initial Extract Date as January 1, 2019, specify the global currency as US Dollars (USD), and select the Exchange Rate to be used for converting transactions to the global currency.

Note:

The times visible in the Console are in the local time zone. However, Oracle Fusion Analytics Warehouse stores the values for the Warehouse Refresh Date Time and the Scheduled Date in Coordinated Universal Time (UTC) in the autonomous data warehouse.
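
For example, here's a minimal Python sketch of the relationship between a stored UTC value and the local time the Console displays; the time zone name is only an illustrative choice:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

# A Warehouse Refresh Date Time as stored in the warehouse (UTC).
stored_utc = datetime(2023, 5, 1, 2, 30, tzinfo=timezone.utc)

# The Console renders the same instant in the viewer's local time zone;
# "America/New_York" is only an example zone.
console_local = stored_utc.astimezone(ZoneInfo("America/New_York"))

print(stored_utc.isoformat())     # 2023-05-01T02:30:00+00:00
print(console_local.isoformat())  # 2023-04-30T22:30:00-04:00
```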

Typical Workflow to Configure Data

Follow these tasks as a guide to configure Oracle Fusion Analytics Warehouse data.

  • Specify the pipeline parameters: Set up the pipeline parameters for your data model file before running your data pipelines for the functional areas. See Set Up the Pipeline Parameters.
  • Specify the report parameters: Set up the report parameters for your data model file before running your data pipelines for the functional areas. See Set Up the Report Parameters for Enterprise Resource Planning and Set Up the Report Parameters for Human Capital Management.
  • Create a data pipeline for a functional area: To start analyzing your Oracle Applications Cloud data, create data pipelines for the functional areas to copy data to the data warehouse. See Create a Data Pipeline for a Functional Area.
  • Edit a data pipeline for a functional area: You can change any details of your data pipelines for the functional areas prior to activating them. See Edit a Data Pipeline for a Functional Area.
  • Activate a data pipeline for a functional area: You must activate the data pipeline for a functional area to run it and load data into the data warehouse. See Activate a Data Pipeline for a Functional Area.
  • Deactivate a data pipeline for a functional area: You can deactivate all the incremental setup and ETL jobs of data pipelines for the functional areas. See Deactivate a Data Pipeline for a Functional Area.
  • Delete a data pipeline for a functional area: You can delete data pipelines for the functional areas prior to activating them. See Delete a Data Pipeline for a Functional Area.
  • Reload a data pipeline for a functional area: Refresh the data for the selected functional area. See Reload a Data Pipeline for a Functional Area.
  • Reset a data pipeline for a functional area: You can reset an activated data pipeline for a functional area. Resetting deletes all data from your data warehouse. See Reset a Data Pipeline for a Functional Area.
  • View request history: View the load request history by functional area, load type, and status. See View Load Request History.
  • Augment your data: Augment your reports by using data sets that you create with specific columns from various data stores. See Augment Your Data.

About Data Pipelines for Functional Areas

A data pipeline for a functional area loads data specific to that functional area into the data warehouse.

These pipelines hold configuration parameters specific to a functional area. For example, General Ledger is a functional area under Finance.

You can't submit a data pipeline for a functional area in the following situations (a sketch of these checks follows the list):
  • Load in progress: An incremental load is currently running.
  • An impending load: An incremental load is scheduled to run within the next hour.
  • Exceeded the number of daily refresh requests: You've reached the maximum number of ad hoc data refresh requests for the day. In this case, you can submit a request the following day.
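
These checks can be expressed as a short gate, as in the following Python sketch; the function, its parameters, and the existence of a fixed daily limit are illustrative assumptions, not the product's implementation:

```python
from datetime import datetime, timedelta

def can_submit(load_in_progress: bool,
               next_scheduled_load: datetime | None,
               refreshes_today: int,
               daily_limit: int,
               now: datetime) -> tuple[bool, str]:
    """Mirror the three documented rejection conditions."""
    if load_in_progress:
        return False, "An incremental load is in progress."
    if (next_scheduled_load is not None
            and next_scheduled_load - now <= timedelta(hours=1)):
        return False, "An incremental load is scheduled within the next hour."
    if refreshes_today >= daily_limit:
        return False, "Daily ad hoc refresh limit reached; try again tomorrow."
    return True, "OK to submit."
```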

About Best Practices

Here's a list of the best practices to follow before running a data pipeline for a functional area.

  • Define the pipeline and report parameters for the functional area accurately.
  • Schedule the data load to run during off-peak hours.
  • Run initial full warehouse loads during weekends and incremental loads during weeknights to ensure that users aren't impacted.

About Pipeline Parameters

The pipeline parameters apply to all functional areas. They consist of the analytics currency, the exchange rate type, the initial extraction date, and the time at which to schedule the daily incremental job.
  • Analytics Currency: Currency conversions are required because your business might have transactions captured in multiple currencies. To create meaningful enterprise-wide analyses, you must use a common currency. Oracle Fusion Analytics Warehouse allows you to specify an analytics currency to store amounts in the data warehouse using this common currency. The load process checks the specified currency value and determines the corresponding exchange rate to convert the amounts to the analytics currency. For example, if your organization is a multinational enterprise that has its headquarters in the United States, you probably want to choose US dollars (USD) as the analytics currency.
  • Exchange Rate Type: Specify an exchange rate type that you want to use when converting the transaction amounts to the analytics currency. For example, you may set up Corporate as a standard exchange rate across your organization for a stable currency.
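
Conceptually, the conversion is a rate lookup keyed by currency, rate type, and date, as in this Python sketch; the rate table and its values are made up for illustration:

```python
# Hypothetical rate table: (from_currency, rate_type, date) -> rate to USD.
rates = {
    ("EUR", "Corporate", "2023-05-01"): 1.25,
    ("JPY", "Corporate", "2023-05-01"): 0.0074,
}

def to_analytics_currency(amount, currency, rate_type, date,
                          analytics_currency="USD"):
    """Convert a transaction amount to the analytics currency."""
    if currency == analytics_currency:
        return amount
    return amount * rates[(currency, rate_type, date)]

print(to_analytics_currency(100.0, "EUR", "Corporate", "2023-05-01"))  # 125.0
```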

Set Up the Pipeline Parameters

Set up the pipeline parameters for your data model file before running your data pipelines for the functional areas.

  1. Sign in to your service using the following steps:
    1. Sign in to the Oracle Cloud Infrastructure console.
    2. Open the Navigation menu. Under Solutions and Platform, click Analytics, and then click Analytics Apps for Fusion.

      You see the service instances page.

    3. On the service instances page, select the compartment and the instance where you need to do the configuration.
    4. Click the instance URL to access Oracle Fusion Analytics Warehouse.
  2. In Oracle Fusion Analytics Warehouse, open the Application Navigation menu, click Console, and then click Data Pipeline.
  3. On the Data Pipeline page, under Global Configuration, click Pipeline Parameters.
    You see the Pipeline Parameters page.
  4. Under Global Parameters:
    1. Specify the currency of the transaction data.
    2. Select an exchange rate type that you want to use when converting the transaction amounts to the global currency.
    3. Indicate the initial extract date from which to load the transaction data.
  5. Under Additional settings, specify the time when you want the incremental data load to happen daily.
  6. Click Save.

About Report Parameters

Set up the report parameters to specify how data is presented on the KPI decks, visualizations, dashboards, and reports in Oracle Fusion Analytics Warehouse.

About the Reporting Configuration for Enterprise Resource Planning

You specify values for these parameters as you configure reporting for Enterprise Resource Planning:

  • Calendar: This parameter specifies the reporting calendar used in Oracle Analytics Cloud. All time options seen in the reporting layer are based on the calendar chosen in this configuration.
  • Chart of Accounts: This parameter controls the list of ledgers used for reporting and is used in tandem with the Calendar parameter.
  • Ledgers: This parameter restricts the ledgers in the reporting layer and in the security setup. The list of values for this parameter consists of the ledgers that share the calendar and chart of accounts selected in the previous two parameters. The ALL option selects all ledgers that share the selected calendar and chart of accounts.
  • Ledger Sets: This optional parameter restricts the ledger sets in the reporting layer and in the security setup. The list of values for this parameter consists of the ledger sets that share the calendar and chart of accounts selected in the previous two parameters.
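
The interplay of these parameters can be pictured as a filter over ledger metadata, as in this Python sketch; the ledger records, field names, and values are invented for illustration:

```python
# Invented ledger metadata; real ledgers come from the source system.
ledgers = [
    {"name": "US Primary", "calendar": "Fiscal Cal", "coa": "Corporate COA"},
    {"name": "EU Primary", "calendar": "Fiscal Cal", "coa": "Corporate COA"},
    {"name": "APAC Local", "calendar": "Gregorian",  "coa": "Local COA"},
]

def eligible_ledgers(calendar, charts_of_accounts, selection="ALL"):
    """A ledger qualifies only if it shares the chosen calendar and one of
    the chosen charts of accounts; 'ALL' keeps every qualifying ledger."""
    pool = [l["name"] for l in ledgers
            if l["calendar"] == calendar and l["coa"] in charts_of_accounts]
    return pool if selection == "ALL" else [n for n in pool if n in selection]

print(eligible_ledgers("Fiscal Cal", {"Corporate COA"}))
# ['US Primary', 'EU Primary']
```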

About the Reporting Configuration for Human Capital Management

You specify values for these parameters as you configure reporting for Human Capital Management:

  • Worker Age Band: This parameter specifies the grouping of the ages of workers by certain ranges. For example, less than 30, 30 to 50, and greater than 50 years. KPIs and analyses related to the headcount of the workers use these age bands. For example, the percentage of workers who are aged 50 and older.
  • Tenure Band: This parameter specifies the grouping of the service years of workers by certain ranges. For example, less than 5, 5 to 10, and greater than 10 years. KPIs and analyses related to the headcount or diversity factors of the workers use these bands. For example, you can use this parameter to answer a business question such as "Show me the percentage of workers in my organization across various years of service."
  • Performance Band: This parameter specifies the grouping of the performance ratings of workers by certain ranges. For example, high, medium, and low. KPIs and analyses related to the performance rating of the workers use these performance bands. For example, you can use this parameter to answer a business question such as "Who are the top performers for the current review period in my organization?"
  • Potential Band: This parameter specifies the grouping of potential ratings under various headers. For example, High, Medium, and Low. You can configure the rating ranges under these bands.
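
All four band parameters follow the same pattern: map a numeric value to a labeled range. Here's a generic Python sketch of that idea, with example thresholds and whole-number values assumed:

```python
import bisect

def assign_band(value, upper_bounds, labels):
    """Map a value to a band label. upper_bounds holds the inclusive upper
    bound of every band except the last, in ascending order; whole-number
    values are assumed."""
    return labels[bisect.bisect_left(upper_bounds, value)]

# Worker age bands: less than 30, 30 to 50, greater than 50.
age_labels = ["Less than 30", "30 to 50", "Greater than 50"]
print(assign_band(27, [29, 50], age_labels))    # Less than 30
print(assign_band(42, [29, 50], age_labels))    # 30 to 50

# Tenure bands: less than 5, 5 to 10, greater than 10 years of service.
tenure_labels = ["Less than 5", "5 to 10", "Greater than 10"]
print(assign_band(12, [4, 10], tenure_labels))  # Greater than 10
```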

Set Up the Report Parameters for Enterprise Resource Planning

Specify the report parameters for your data model file.

  1. Sign in to your service.
  2. In Oracle Fusion Analytics Warehouse, open the Application Navigation menu, click Console, and then click Data Pipeline.
    You see the Data Pipeline page.
  3. On the Data Pipeline page, under Global Configuration, click Report Parameters.
    You see the Report Parameters page.
  4. On the Report Parameters page, click Enterprise Resource Planning and then, under it, click Oracle Financial Analytics.
  5. Select a fiscal calendar to configure the reporting calendar to be used in Oracle Analytics Cloud.
  6. Select one or more charts of accounts to control the list of ledgers used for reporting.
  7. In Ledgers, select ALL to use all the available ledgers in the reporting layer and in the security setup.
  8. Optionally, in Ledger Sets, select the ledger sets that you want to use in decks and data visualizations.
  9. Click Save.

Set Up the Report Parameters for Human Capital Management

Specify the report parameters for your data model file.

  1. Sign in to your service.
  2. In Oracle Fusion Analytics Warehouse, open the Application Navigation menu, click Console, and then click Data Pipeline.
    You see the Data Pipeline page.
  3. On the Data Pipeline page, under Global Configuration, click Report Parameters.
    You see the Report Parameters page.
  4. On the Report Parameters page, click Human Capital Management and then, under it, click Oracle Human Capital Management Analytics.
  5. Select the worker age band to specify the age range of the workers.
  6. Select the tenure band to specify the range of service years of the workers.
  7. Select the performance band to specify the range of worker performance ratings.
  8. Select the potential band to specify the range of potential ratings for the workers.
  9. Select one or more reasons for reorganization to compute the headcount gains or losses based on the reorganization factor.
  10. Click Save.

Create a Data Pipeline for a Functional Area

To start analyzing your Oracle Applications Cloud data, create a data pipeline for a functional area to copy data to the data warehouse.

Perform this task only after you have set up the pipeline and report parameters.
  1. Sign in to your service.
  2. In Oracle Fusion Analytics Warehouse, open the Application Navigation menu, click Console, and then click Data Pipeline.
    You see the Data Configuration page.
  3. On the Data Configuration page, click your service. For example, under Applications, click Enterprise Resource Planning.
  4. On the service page, for example, the Data Configuration: Oracle Financial Analytics page, click Create.
  5. Select your application areas to transfer data to the warehouse, and then click Next.
  6. Review the parameters and click one of the options:
    • Cancel: To cancel the data pipeline for the functional area.
    • Save: To save the data pipeline for the functional area but not activate it.
    • Activate: To display the page for scheduling when to run the data pipeline for the functional area. Select Scheduled Execution Date to specify the date and time on which to run the data pipeline for the functional area. Select Run Immediately to create and run the data pipeline for the functional area immediately.

    Note:

    You can schedule an activation of a functional area even if activation of another functional area is in progress. However, you can't update the activation of a functional area that's in progress.
Oracle Fusion Analytics Warehouse extracts data only from the initial extract date that you specify. Any data added prior to the initial extract date won’t be part of this data pipeline activation plan. You see your data pipeline activation plan listed on the Data Configuration page.

Edit a Data Pipeline for a Functional Area

You can change any details of your data pipeline for a functional area prior to activating it.

  1. Sign in to your service.
  2. In Oracle Fusion Analytics Warehouse, open the Application Navigation menu, click Console, and then click Data Pipeline.
    You see the Data Configuration page.
  3. On the Data Configuration page, click your service. For example, under Applications, click Enterprise Resource Planning.
  4. On the service page, for example, the Data Configuration: Oracle Financial Analytics page, click the Action menu for the data pipeline for a functional area that you want to edit, and click Edit.
  5. Make the changes, review the changes, and then click Save or Activate.

Activate a Data Pipeline for a Functional Area

You must activate the data pipeline for a functional area to run it and load the data into the data warehouse.

  1. Sign in to your service.
  2. In Oracle Fusion Analytics Warehouse, open the Application Navigation menu, click Console, and then click Data Pipeline.
    You see the Data Configuration page.
  3. On the Data Configuration page, click your service. For example, under Applications, click Enterprise Resource Planning.
  4. On the service page, for example, the Data Configuration: Oracle Financial Analytics page, click the Action menu for the saved data pipeline for the functional area that you want to activate, and click Edit.
  5. Review the details of the data pipeline for the functional area and click Activate.
    You see step 4 of the Data Configuration wizard where you can schedule when to run the data pipeline for the functional area.
  6. Select Scheduled Execution Date to specify the date and time on which to run the data pipeline for the functional area. Select Run Immediately to create and run the data pipeline for the functional area immediately. Click Finish.
Oracle Fusion Analytics Warehouse runs the data pipeline for the functional area, loads data into your data warehouse, and displays your data pipeline for the functional area on the Data Configuration page. Once data is successfully loaded, the system updates the status of the data pipeline for the functional area to Activation Completed.

Deactivate a Data Pipeline for a Functional Area

You can deactivate all the incremental setup and ETL jobs of a data pipeline for a functional area. Deactivating a data pipeline for a functional area ensures that future incremental jobs don't select the specific functional area when the data pipeline runs.

  1. Sign in to your service.
  2. In Oracle Fusion Analytics Warehouse, open the Application Navigation menu, click Console, and then click Data Pipeline.
    You see the Data Configuration page.
  3. On the Data Configuration page, click your service. For example, under Applications, click Enterprise Resource Planning.
  4. On the service page, for example, the Data Configuration: Oracle Financial Analytics page, click the Action menu for the active data pipeline for the functional area that you want to deactivate, and click Deactivate.
You see your data pipeline for the functional area on the Data Configuration page with Deactivated status.

Delete a Data Pipeline for a Functional Area

You can delete a data pipeline for a functional area prior to activating it.

Deleting a data pipeline for a functional area permanently removes it, so data refreshes of the warehouse won't occur.
  1. Sign in to your service.
  2. In Oracle Fusion Analytics Warehouse, open the Application Navigation menu, click Console, and then click Data Pipeline.
    You see the Data Configuration page.
  3. On the Data Configuration page, click your service. For example, under Applications, click Enterprise Resource Planning.
  4. On the service page, for example, the Data Configuration: Oracle Financial Analytics page, click the Action menu for the data pipeline for the functional area that you want to delete, and click Delete.

Reload a Data Pipeline for a Functional Area

Reload an activated data pipeline for a functional area to refresh the data for the selected functional area.

Performing this action refreshes the data for the selected functional area. The next scheduled run may take additional time to complete.

Note:

Executing an on-demand data refresh while working on the data may cause temporary inconsistencies.
  1. Sign in to your service.
  2. In Oracle Fusion Analytics Warehouse, open the Application Navigation menu, click Console, and then click Data Pipeline.
    You see the Data Configuration page.
  3. On the Data Configuration page, click your service. For example, under Applications, click Enterprise Resource Planning.
  4. On the service page, for example, the Data Configuration: Oracle Financial Analytics page, click the Action menu for the data pipeline for the functional area whose data you want to refresh, and then click Refresh Data.
    The system executes an incremental load immediately. If you had reset a data pipeline for the functional area before refreshing data, then the system runs a full load immediately for that functional area.
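
That refresh behavior amounts to a small decision, sketched below in Python with hypothetical names:

```python
def run_refresh(functional_area: str, was_reset: bool) -> str:
    """An ad hoc refresh runs incrementally unless the functional area was
    previously reset, which forces a full load."""
    load_type = "FULL" if was_reset else "INCREMENTAL"
    print(f"Running {load_type} load for {functional_area}")
    return load_type

run_refresh("General Ledger", was_reset=False)  # INCREMENTAL
run_refresh("General Ledger", was_reset=True)   # FULL
```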

Reset a Data Pipeline for a Functional Area

When you reset a data pipeline for a functional area, you delete all data from your data warehouse and the deletion is irreversible.

  1. Sign in to your service.
  2. In Oracle Fusion Analytics Warehouse, open the Application Navigation menu, click Console, and then click Data Pipeline.
    You see the Data Configuration page.
  3. On the Data Configuration page, click your service. For example, under Applications, click Enterprise Resource Planning.
  4. On the service page, for example, the Data Configuration: Oracle Financial Analytics page, click the Action menu for the data pipeline for the functional area that you want to reset, and click Reset.
You see the data pipeline for the functional area on the Data Configuration page with Reset Completed status.

Reset the Data Warehouse

Reset your data warehouse when your source Oracle Applications Cloud instance has been refreshed from another environment, for example, due to a Production to Test refresh. You may also want to reset the data warehouse when the data has become corrupted.

  1. Sign in to your service.
  2. In Oracle Fusion Analytics Warehouse, open the Application Navigation menu, click Console, and then click Data Pipeline.
  3. On the Data Configuration page, under Global Configuration, click Pipeline Parameters.
    You see the Pipeline Parameters page.
  4. On the Pipeline Parameters page, click the Reset Data Warehouse menu option.
  5. Click OK in the message dialog.
After the reset completes, you must reactivate each of your data pipelines for the functional areas. See Activate a Data Pipeline for a Functional Area.

Reset the Cache

The data augmentation source column information is cached in the browser cache and is maintained for a week. If your source has new columns and they don't display in the Data Augmentation wizard, then you can reset the browser cache and retry data augmentation.

  1. Sign in to your service.
  2. In Oracle Fusion Analytics Warehouse, open the Application Navigation menu, click Console, and then click Data Pipeline.
  3. On the Data Configuration page, under Global Configuration, click Pipeline Parameters.
    You see the Pipeline Parameters page.
  4. On the Pipeline Parameters page, click the Reset Cache menu option.
    You see a message that the cache has been cleared successfully.
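
Conceptually, the cached column information behaves like a time-to-live cache, as in this generic Python sketch; it illustrates the idea only and isn't the Console's implementation:

```python
import time

class TTLCache:
    """Entries expire after ttl_seconds; reset() clears everything so the
    next lookup refetches fresh column metadata."""

    def __init__(self, ttl_seconds=7 * 24 * 3600):  # one week
        self.ttl = ttl_seconds
        self.store = {}

    def get(self, key, fetch):
        value, stamp = self.store.get(key, (None, 0.0))
        if time.time() - stamp > self.ttl:
            value = fetch(key)                 # stale or missing: refetch
            self.store[key] = (value, time.time())
        return value

    def reset(self):
        self.store.clear()
```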

View Load Request History

You can view the load request history by functional area, load type, and status along with other details.

  1. Sign in to your service.
  2. In Oracle Fusion Analytics Warehouse, open the Application Navigation menu, click Console, and then click Data Pipeline.
    You see the Data Configuration page.
  3. Click Application Navigation on the top left and then click Request History.
  4. On the Request History page, view the history of loads that have been performed.

    Note:

    The request history doesn't display the incremental loads.

View Audit Log

You can view the modifications made to entities in the data pipelines, such as Ledgers, Analytics Currency, and functional areas.

  1. Sign in to your service.
  2. In Oracle Fusion Analytics Warehouse, open the Application Navigation menu, click Console, and then click Data Pipeline.
    You see the Data Configuration page.
  3. Click Application Navigation on the top left and then click Audit Log.
  4. On the Audit Log page, view the list of all the modifications.

About Augmenting Your Data

Augment your reports by choosing specific columns from various data stores (Business Intelligence view objects) of the Oracle Applications Cloud data sources.

You can augment your reports with data sets that you create by extending an existing entity or group of facts, by adding a new dimension in the target instance, or by adding a new fact in the target instance.

You can select the columns from various data stores, create an augmentation data set, and use that data set to create data pipelines for functional areas. This enables you to seamlessly extract and load data from additional Oracle Applications Cloud data stores and make it readily available in tables populated in the autonomous data warehouse. You can then use the data for visualization and analysis. To find the data stores that are available for extraction using augmentation, see Data Stores.
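
Conceptually, an augmentation data set is a named selection of columns from a source data store, as in this illustrative Python model; the class, the view object name, and the field values are placeholders, not a product API:

```python
from dataclasses import dataclass

@dataclass
class AugmentationDataSet:
    """Illustrative model of an augmentation definition (not a product API)."""
    name: str                # augmentation pipeline job name
    source_data_store: str   # a Business Intelligence view object
    columns: list[str]       # columns selected for extraction
    target_suffix: str       # suffix used to build the warehouse table name
    augmentation_type: str = "Create Dimension"

# A hypothetical dimension built from a customer data store.
customer_class = AugmentationDataSet(
    name="Customer Class Code",
    source_data_store="CustomerClassificationVO",  # placeholder name
    columns=["CustomerClassCode", "CustomerId"],
    target_suffix="Customer_Class_D",
)
```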

You can request additional data stores to be made available for data augmentation. Create a service request for additional data stores for data augmentation, and include the name of your tenancy along with the list of all the required data stores in the Description field.

Note:

You mustn't currently perform semantic model customizations on objects created by data augmentation, such as descriptive flex fields (DFFs), new facts, or new dimensions, due to feature incompatibility issues that will be resolved in a future release. For example, if you deploy the Descriptive Flex Fields (DFF), Add Facts, or Add Dimensions functionality of data augmentation, you may encounter issues if you then use the Semantic Model Extensions functionality to create or modify subject areas, add a hierarchy, add a column, or make other extensions that reference the objects created by data augmentation. You can freely implement data augmentations when the augmentation objects aren't used in conjunction with semantic layer extensions, and you can freely use semantic layer extensions when they don't reference objects created by data augmentation. See About Semantic Model Customization.

Augment Your Data

You can supplement the data in your reports by using data sets that you create with specific columns from various data stores (Business Intelligence view objects) of the Oracle Applications Cloud data sources.

  1. Sign in to your service.
  2. In Oracle Fusion Analytics Warehouse, open the Application Navigation menu, click Console, and then click Data Pipeline.
    You see the Data Configuration page.
  3. On the Data Configuration page, under Applications, click Data Augmentation.
  4. On the Data Augmentation page, click Create.
    You see the data augmentation wizard with multiple steps. You can create the following augmentation types:
    • Create a dimension.
    • Create a fact.
    • Extend an entity.
  5. In step 1 of the wizard, select an augmentation type. Each augmentation type requires you to complete certain tasks.
  6. If you select Create Dimension as the augmentation type to add a new dimension in the target instance, follow these instructions:
    1. Select Supplemental Data (Regular) as the source data set type.
    2. Select a product pillar; for example, Enterprise Resource Planning.
    3. Select a source table to which you want to add the new dimension.
    4. Click Next.
    5. In step 3 of the wizard, in the Available Attributes pane, select the check box for the attributes that you want in your new dimension, and then click Add Selected.

      You see the attributes that you had selected in the Target attributes pane and the recommended (defined) primary key. You can either accept this key or override it with your own primary key definition.

    6. Optionally, select Advanced to reorganize the order of columns that are marked as primary keys and specify one of your incremental keys to use when determining the initial extract date.
    7. Click Next.
    8. In step 6 of the wizard, provide the following details and click Finish to save and schedule your data augmentation pipeline job:
      1. Name your augmentation pipeline job; for example, Customer Class Code.
      2. Enter a suffix for the target table name, using underscores in place of spaces between words and no special characters; for example, Customer_Class_D. The augmentation process automatically creates the target table name.
      3. Provide a description.
      4. Select the functional area and one or multiple subject areas in which you want to include this augmentation pipeline job.
      5. Specify the options to save the data augmentation pipeline job without executing it, or schedule the execution date and time, or execute it immediately.
  7. If you select Create Fact as the augmentation type to add a new fact table in the target instance, follow these instructions:
    1. Select Supplemental Data (Regular) as the source data set type.
    2. Select a product pillar; for example, Enterprise Resource Planning.
    3. Select a source table to which you want to add the new fact table.
    4. Click Next.
    5. In step 3 of the wizard, in the Available Attributes pane, select the check box for the attributes that you want in your new fact table, and then click Add Selected.

      You see the attributes that you had selected in the Target attributes pane and the recommended (defined) primary key. You can either accept this key or override it with your own primary key definition.

      For any numeric columns, you must specify “Measure” as the entity type if you want any aggregation done on the attribute. If you want to join any column, such as "ID", with another dimension, then you must specify “Dimension” as the entity type. (The sketch at the end of this topic illustrates this distinction.)

    6. Optionally, select Advanced to reorganize the order of columns that are marked as primary keys and specify one of your incremental keys to use when determining the initial extract date.
    7. Click Next.
    8. In step 4 of the wizard, specify the dimension in the data warehouse that you want to map to the column that you identified as “Dimension” entity type and then click Next.
    9. In step 5 of the wizard, for the columns that you specified as “Measure” entity type, select an aggregation, and then click Next.
    10. In step 6 of the wizard, provide the following details and click Finish to save and schedule your data augmentation pipeline job:
      1. Name your augmentation pipeline job; for example, AP Distribution.
      2. Enter a suffix for the target table name, using underscores in place of spaces between words and no special characters; for example, AP_DISTRIBUTION_F. The augmentation process automatically creates the target table name.
      3. Provide a description.
      4. Select the functional area and one or multiple subject areas in which you want to include this augmentation pipeline job.
      5. Specify the options to save the data augmentation pipeline job without executing it, or schedule the execution date and time, or execute it immediately.
  8. If you select Extend Entity as the augmentation type to extend a group of existing facts, follow these instructions:
    1. Select Descriptive Flex Field (DFF) as the source data set type.
    2. Select a product pillar; for example, Enterprise Resource Planning.
    3. Select a source table from the list of view objects that support descriptive flex fields.
    4. In step 2 of the wizard, select the entity or group of fact tables to extend. For example, if you select ARTransaction as the entity to extend, then this process joins the ready-to-use InvoiceID descriptive flex field using the “s_k_5000” primary join key with all the fact tables in the ARTransaction entity.
    5. Click Next.
    6. In step 6 of the wizard, provide the following details and click Finish to save and schedule your data augmentation pipeline job:
      1. Name your augmentation pipeline job; for example, AP Invoice Header.
      2. Enter a suffix for the target table name, using underscores in place of spaces between words and no special characters; for example, AP_Invoice_Header_DFF. The augmentation process automatically creates the target table name.
      3. Provide a description.
      4. Select the functional area and one or multiple subject areas in which you want to include this augmentation pipeline job.
      5. Specify the options to save the data augmentation pipeline job without executing it, or schedule the execution date and time, or execute it immediately.
You see the data augmentation pipeline jobs on the Data Augmentation page with one of these statuses:
  • Activation in Progress - You can’t edit, delete, or schedule a data augmentation pipeline job while activation is in progress.
  • Activation Completed - You can edit the data augmentation to add or delete VO attributes and save the changes. You can’t modify the schedule in this status.
  • Activation Scheduled - You can edit the data augmentation to add VO attributes, save the changes while retaining the existing schedule, reschedule the execution date and time, or execute the plan immediately.
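
As noted in the Create Fact steps above, columns flagged as “Measure” are aggregated, while columns flagged as “Dimension” join to warehouse dimensions. Here's a minimal Python sketch of that distinction, using invented rows and a SUM aggregation:

```python
# Invented fact rows: InvoiceId joins to a dimension, Amount is a measure.
rows = [
    {"InvoiceId": 101, "Amount": 250.0},
    {"InvoiceId": 101, "Amount": 125.0},
    {"InvoiceId": 102, "Amount": 300.0},
]
entity_types = {"InvoiceId": "Dimension", "Amount": "Measure"}

dims = [c for c, t in entity_types.items() if t == "Dimension"]
measures = [c for c, t in entity_types.items() if t == "Measure"]

# Measures are aggregated (SUM here), grouped by the dimension columns.
totals: dict = {}
for row in rows:
    key = tuple(row[d] for d in dims)
    for m in measures:
        totals[key] = totals.get(key, 0.0) + row[m]

print(totals)  # {(101,): 375.0, (102,): 300.0}
```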