
Important Notes About Customizing the Oracle Business Analytics Warehouse


All custom work, unless stated otherwise, must be done in the Custom folder so that it can be preserved during an Informatica repository upgrade. Avoid working in the standard folder whenever possible: an upgrade of the Informatica repository overwrites any changes made to the standard folder.

Additional Customization Instructions
  • Table definitions in Informatica. Make sure that the SQL style is set to Oracle when importing table definitions from external data sources. Even if the actual data source is another database type, such as DB2 or MSSQL, this setting does not affect the logic of how the data is loaded.
  • Update strategy. For loading new fact and dimension tables, design a custom process on the source side to detect new and modified records. The SDE process should be designed to load only the changed data (new and modified records). If the data is loaded without this incremental process, previously loaded data is updated again, which is a costly operation. For example, the logic in the preconfigured SIL mappings looks up the destination table on the combination of INTEGRATION_ID and DATASRC_NUM_ID and returns the ROW_WID if the combination exists, in which case the record is updated. If the lookup returns null, the record is inserted instead. In some cases, the last update date stored in the target table is also compared, in addition to the columns mentioned above, to determine whether to insert or update. Look at similar mappings in the preconfigured folder for more details, and see the SQL sketch of this insert-or-update logic after this list.
  • ETL process. When using multiple sources for the data warehouse, you can decide to load from all of them at the same time or at different frequencies, using different execution plans.
  • Truncating target tables. Truncating should be done through the DAC. A single task has placeholders for a full load command and an incremental load command.
    • For the SDE workflows, the commands for full load and incremental load are the same, and the tasks should have the Truncate Always flag checked in the DAC. For these kinds of tasks, both commands are based on the same mapping.
    • For SIL workflows, the command can be different for full and incremental loads, and the tasks should have the Truncate For Full Load option checked in the DAC. When a table is truncated, its indices are automatically dropped and then re-created after the data is loaded. The workflow associated with the full load command can have the Bulk Load option turned on for an optimized version of the mapping that inserts data quickly. Note that a bulk load may fail if there are indices on the table, so it is very important that the indices are registered in the DAC and that all indices on the table are dropped during a full load if you use the Bulk Load option. (A sketch of the full-load sequence appears after this list.)
    • If a source requires an auxiliary task, it needs to run only in incremental mode, so for these tasks the full load command is empty and no truncate options should be set.
  • ETL_PROC_WID. Use the same ETL_PROC_WID stored in the W_PARAM_G table in your custom mappings. ETL_PROC_WID is a reference key to the Run History in the DAC. To use the same ETL_PROC_WID, copy the reusable lookup (called LKP_ETL_PROC_WID) defined in the SILOS folder. The input to the lookup is a constant (hard-coded to 1). See the sketch after this list.
  • DATASRC_NUM_ID. Use a parameter to define this value in the mapping. The DAC automatically creates a parameter file with the correct DATASOURCE_NUM_ID, which is picked up by the parameter in the mapping. This allows you to make multiple copies of the same mapping when you have multiple instances of the same transactional database type; you do not have to do any hard-coding beyond registering the sources in the DAC. (An example parameter file entry appears after this list.)
  • Creating indices and naming conventions. Staging tables typically do not require any indices, but use care to determine whether any are needed. For dimension and fact tables, create indices on all the columns that the ETL uses (for example, the ROW_WIDs of dimensions and facts, INTEGRATION_ID and DATASRC_NUM_ID, and flags). Carefully consider which columns, or combinations of columns, appear in filter conditions, and define indices on them to improve query performance; inspect the out-of-the-box (OTB) objects for guidance. Prefix the names of all newly created tables with WC_; this helps visually isolate the new tables from the OTB tables. Keep good documentation of your customizations, which helps when upgrading the data warehouse. Once the indices are decided upon, register them in the DAC, either manually or by right-clicking the table and invoking the Import Indices command. (An illustrative index definition appears after this list.)
  • Currency. For currency-related data, populate the table with the base currency and an exchange date field so that the data can be converted appropriately. The data for currency conversion should be maintained in the main data source. (The currency data is maintained by converting all currency information into the single base currency code specified in the DAC.) A conversion sketch appears after this list.
  • Day Dimension. For data related to W_DAY_D, use the reusable transformation EXP_DAY_DIMENSION_FK_RESOLUTION. This transformation takes a date as input and returns the foreign key to the Day dimension, in the appropriate format (YYYYMMDD), as output, avoiding a costly join or lookup to the W_DAY_D table for each resolution. Copy the reusable transformation and use it; the equivalent expression is shown after this list.
  • List of Values. This applies in particular to Type I and Type II customizations. The preconfigured columns that depend on a list of values have a language-dependent column and a language-independent column. Use the mapplet MPLT_LOV_TRANSLATION to populate the language-dependent and language-independent columns in dimension tables. For fact tables, use MPLT_LOV_D_ROW_WID to create a new foreign key to the LOV dimension. You can also handle translations directly in SQL overrides to improve performance.
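
The following is a minimal SQL sketch of the insert-or-update logic described in the update strategy note above. The preconfigured SIL mappings implement this logic as an Informatica lookup and update strategy transformation rather than as a single SQL statement, and the names WC_EXAMPLE_F, W_EXAMPLE_FS, and AMOUNT are hypothetical placeholders; only INTEGRATION_ID, DATASRC_NUM_ID, ROW_WID, and W_UPDATE_DT come from the discussion above.

    -- Sketch only: set-based equivalent of the SIL insert-or-update
    -- decision. WC_EXAMPLE_F (target) and W_EXAMPLE_FS (staging) are
    -- hypothetical names; the real mappings do this row by row via a
    -- lookup that returns ROW_WID when the key combination exists.
    MERGE INTO WC_EXAMPLE_F tgt
    USING W_EXAMPLE_FS stg
       ON (    tgt.INTEGRATION_ID = stg.INTEGRATION_ID
           AND tgt.DATASRC_NUM_ID = stg.DATASRC_NUM_ID)
    WHEN MATCHED THEN
      UPDATE SET tgt.AMOUNT      = stg.AMOUNT,
                 tgt.W_UPDATE_DT = stg.W_UPDATE_DT
       -- optional last-update-date comparison, as noted above
       WHERE tgt.W_UPDATE_DT < stg.W_UPDATE_DT
    WHEN NOT MATCHED THEN
      INSERT (INTEGRATION_ID, DATASRC_NUM_ID, AMOUNT, W_UPDATE_DT)
      VALUES (stg.INTEGRATION_ID, stg.DATASRC_NUM_ID, stg.AMOUNT,
              stg.W_UPDATE_DT);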
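As a rough illustration of the full-load sequence described under Truncating target tables, the DAC performs steps equivalent to the following SQL around the workflow run. The table and index names are hypothetical; the DAC issues these operations itself based on the indices registered for the table, so you would not normally script them by hand.

    -- Sketch of a full load with Truncate For Full Load checked
    -- (hypothetical table and index names):
    TRUNCATE TABLE WC_EXAMPLE_F;
    DROP INDEX WC_EXAMPLE_F_U1;   -- drop indices so a bulk load succeeds
    -- ... the full-load workflow (optionally in bulk mode) loads the data ...
    CREATE UNIQUE INDEX WC_EXAMPLE_F_U1
        ON WC_EXAMPLE_F (INTEGRATION_ID, DATASRC_NUM_ID);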
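The reusable lookup LKP_ETL_PROC_WID is roughly equivalent to the following query against W_PARAM_G, with the constant input 1 mentioned above as the lookup condition. The column used in the filter is an assumption here, which is one reason to copy the reusable lookup from the SILOS folder rather than re-create it.

    -- Approximate SQL equivalent of the LKP_ETL_PROC_WID lookup.
    -- The filter column is an assumption; the lookup input is the
    -- constant 1, as noted above.
    SELECT ETL_PROC_WID
      FROM W_PARAM_G
     WHERE ROW_WID = 1;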
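The DATASOURCE_NUM_ID mechanism works through the parameter file that the DAC generates for each session. A generated file contains entries along the following lines; the folder, workflow, and session names are hypothetical, and the exact layout depends on your Informatica version.

    [CUSTOM_SDE.WF:SDE_Custom_ExampleFact.ST:SDE_Custom_ExampleFact]
    $$DATASOURCE_NUM_ID=2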
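For the index guidance above, indices registered for a custom table might look like the following. The table name and column choices are illustrative only; inspect the OTB objects for the exact patterns used in your warehouse.

    -- Illustrative indices on a hypothetical custom dimension WC_EXAMPLE_D.
    -- The unique index supports the SIL lookup on INTEGRATION_ID and
    -- DATASRC_NUM_ID; register both indices in the DAC after creating them.
    CREATE UNIQUE INDEX WC_EXAMPLE_D_U1
        ON WC_EXAMPLE_D (INTEGRATION_ID, DATASRC_NUM_ID);
    CREATE INDEX WC_EXAMPLE_D_F1
        ON WC_EXAMPLE_D (DELETE_FLG);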
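As a sketch of the currency conversion described above, amounts are converted to the base currency using the rate in effect on the exchange date. The column names used below, including those on the preconfigured exchange rate table W_EXCH_RATE_G, are assumptions based on typical schemas, so verify them against your installation.

    -- Hedged sketch: convert a document-currency amount to the base
    -- currency as of the exchange date. Column names are assumptions.
    SELECT f.AMOUNT * r.EXCH_RATE AS BASE_AMOUNT
      FROM WC_EXAMPLE_F f
      JOIN W_EXCH_RATE_G r
        ON r.FROM_CURCY_CD = f.DOC_CURR_CODE
       AND r.TO_CURCY_CD   = f.LOC_CURR_CODE   -- base currency per the DAC
       AND f.EXCH_DT BETWEEN r.START_DT AND r.END_DT;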
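The EXP_DAY_DIMENSION_FK_RESOLUTION transformation relies on the W_DAY_D key being derived directly from the date, so the foreign key can be computed without a join. The equivalent Oracle SQL expression is simply the following (the table and date column are illustrative):

    -- Derive the W_DAY_D foreign key from a date column without joining
    -- to W_DAY_D; the YYYYMMDD format matches the Day dimension key.
    SELECT TO_NUMBER(TO_CHAR(order_dt, 'YYYYMMDD')) AS ORDER_DT_WID
      FROM WC_EXAMPLE_F;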