Oracle® Business Intelligence Applications Fusion Edition Installation and Configuration Guide Release 7.9.5 Part Number E12083-01
This chapter describes how to configure Oracle Financial Analytics for particular sources to meet your business needs.
This chapter contains the following topics:
Oracle Financial Analytics consists of the following:
Oracle General Ledger and Profitability Analytics. The General Ledger Analytics application provides information to support your enterprise's balance sheet and provides a detailed analysis of the overall health of your company. The default configuration for the General Ledger Analytics application is based on what is identified as the most common level of detail, or granularity. However, you can configure and modify the extracts to best meet your business requirements.
The Profitability Analytics application provides cost analysis, revenue trends, and sales performance to provide an accurate picture of profit and loss. The information found in the Profitability Analytics application pertains to data found in the revenue and expense account groupings of your financial statements and chart of accounts. The Profitability Analytics application is designed to provide insight into your enterprise's revenue and profitability information, which ties into your accounts receivable. The default configuration for the Profitability Analytics application is based on what is identified as the most common level of detail, or granularity. However, you can configure and modify the extracts to meet your business requirements. The Profitability Analytics application provides cost analysis, revenue trends, and profitability analysis at the product and customer level, and the income statement at the company and business division level.
Oracle Payables Analytics. The Oracle Payables Analytics application provides information about your enterprise's accounts payable information and identifies the cash requirements to meet your obligations.
The information found in the Oracle Payables Analytics application pertains to data found exclusively under Accounts Payable (AP) in your financial statements and chart of accounts. Analysis of your payables allows you to evaluate the efficiency of your cash outflows. The need for analysis is increasingly important because suppliers are becoming strategic business partners, with a focus on increased efficiency, just-in-time delivery, and quality purchasing relationships.
The default configuration for the Oracle Payables Analytics application is based on what is identified as the most common level of detail, or granularity. However, you can configure or modify the extracts to best meet your business requirements.
Oracle Receivables Analytics. The Oracle Receivables Analytics application provides information to support your credit and collection activities, and to monitor and identify potential receivables problems.
The information found in the Oracle Receivables Analytics application pertains to data found exclusively in the Accounts Receivable (AR) account grouping of your financial statements and chart of accounts. Each day that your receivables are past the due date represents a significant opportunity cost to your company. Keeping a close eye on the trends and clearing of AR is one way to assess the efficiency of your sales operations, the quality of your receivables, and the value of key customers.
The default configuration for the Oracle Receivables Analytics application is based on what is identified as the most common level of detail, or granularity. However, you can configure and modify the extracts to best meet your business requirements.
This section contains configuration steps that you need to perform on Oracle Financial Analytics before you do a full data load, and contains the following topics:
Section 9.2.1, "Configuration Steps for Financial Analytics For All Source Systems"
Section 9.2.2, "Configuration Steps for Financial Analytics For Oracle EBS"
Section 9.2.3, "Configuration Steps for Financial Analytics For PeopleSoft"
Section 9.2.4, "Configuration Steps for Financial Analytics For Universal"
Section 9.2.5, "Configuration Steps for Controlling Your Data Set"
This section contains configuration steps before you do a full data load that apply to all source systems.
Note:
For configuration steps that apply to all analytics modules (for example, Oracle Financial Analytics, Oracle HR Analytics, Oracle Sales Analytics), see Chapter 7, "Configuring Common Areas and Dimensions".

This section contains configuration steps before you do a full data load that apply to Oracle EBS, and contains the following topics:
If you modify or extend a seeded list of values, you must configure the CSV files for Oracle Financial Analytics by mapping values from your source systems to the domain values.
The following sections explain how to extract the lists of values from your source system, which you then compare with the seeded values. If the lists of values are different from the seeded values, you need to follow the instructions to configure the Domain Values and CSV Worksheet Files.
Typically, the number of rows returned by the source system will not match the number of rows provided in the domainValuesXXX.csv files. If so, you need to delete extra rows or insert additional rows in the CSV file to accommodate all the values returned by the source system in context. You also need to determine which rows should have 'RETURN' and which rows should have 'DELIVERY' in the W_XACT_TYPE_CODE column.
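As a rough illustration of this reconciliation, the following Python sketch compares the codes returned by a source-system query with those present in a domain-values CSV file. The function name and the simplified two-column CSV layout are assumptions for illustration only, not part of the product:

```python
import csv
import io

def compare_values(source_values, csv_file, code_column="XACT_TYPE_CODE"):
    """Report source-system codes missing from the CSV, and CSV codes
    with no matching source value (extra rows to review)."""
    csv_codes = {row[code_column] for row in csv.DictReader(csv_file)}
    source = set(source_values)
    missing_from_csv = sorted(source - csv_codes)
    unused_in_csv = sorted(csv_codes - source)
    return missing_from_csv, unused_in_csv

# Simplified stand-in for a domainValuesXXX.csv file.
sample = io.StringIO(
    "XACT_TYPE_CODE,W_XACT_TYPE_CODE\n"
    "1,DELIVERY\n"
    "2,RETURN\n")
```

Rows reported as missing would need to be added to the CSV (and mapped to a domain value); unused rows are candidates for deletion.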
Table 9-1 lists the CSV worksheet files and the domain values for Financial Analytics and Oracle EBS in the $pmserver\LkpFiles folder.
Table 9-1 Domain Values and CSV Worksheet Files for Oracle Financial Analytics and Oracle EBS
Worksheet File Name | Description | Session
---|---|---
domainValues_Xact_Types_DocTypes_ora12.csv | Lists the transaction types and the corresponding domain values for the Oracle EBS R12 application. For more information about updating the values in this file, see Section 9.2.2.2, "How to configure domainValues_Xact_Types_DocTypes_ora12.csv for Oracle General Ledger and Profitability Analytics". | SDE_ORA_TransactionTypeDimension_GLCOGSDerive
domainValues_Xact_Types_DocTypes_ora11i.csv | Lists the transaction types and the corresponding domain values for the Oracle EBS 11i application. For more information about updating the values in this file, see Section 9.2.2.3, "How to configure domainValues_Xact_Types_DocTypes_ora11i.csv for Oracle General Ledger and Profitability Analytics". | SDE_ORA_TransactionTypeDimension_GLCOGSDerive
This section explains how to configure domainValues_Xact_Types_DocTypes_ora12.csv for Oracle General Ledger and Profitability Analytics, Release 12.
To configure domainValues_Xact_Types_DocTypes_ora12.csv for Oracle General Ledger and Profitability Analytics:
Identify the Entry Types in your Oracle Inventory Application by using the following SQL:
select mtt.transaction_type_id, mtt.description from mtl_transaction_types mtt;
This query gives the transaction type codes in your Oracle Inventory Application and their corresponding descriptions.
Open the domainValues_Xact_Types_DocTypes_ora12.csv file using a text editor in the $pmserver\lkpfiles folder.
Look for only the rows in the file that have xact_cat_code = 'COGS' or xact_cat_code = 'DEFERRED_COGS'. From among these records, look for those with W_XACT_TYPE_CODE = 'DELIVERY' or 'RETURN'. The XACT_TYPE_CODE column will contain the entry type codes from the Oracle Inventory application, and the W_XACT_TYPE_CODE column contains the corresponding domain value to which each entry type code will be mapped.
Map the entry type codes returned by the query above from your Oracle Inventory Application to the domain values.
Make sure the entry type codes go into the XACT_TYPE_CODE column in the file. Note that the mapping between domain values and entry types is not necessarily one-to-one; for example, you can have multiple entry types mapped to the DELIVERY or RETURN domain value. Be careful not to map the same entry type to two different domain values.
Save and close the file.
In Informatica PowerCenter Designer, open the mapping SDE_ORA_GLCOGSFact in the folder SDE_ORAxxx_Adaptor.
Open the mapplet mplt_BC_ORA_GLCOGSFact contained inside this mapping.
Open the source qualifier SQ_MTL_TRANSACTION_ACCOUNTS.
Modify the SQL so that it also extracts the additional transaction type IDs that you identified in step 1.
Make the same change in Informatica PowerCenter Workflow Manager to the SQL Override in the full load session SDE_GL_COGSFACT_Full.
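The warning above about not mapping the same entry type to two different domain values can be checked mechanically before you save the CSV. Below is a minimal Python sketch; the helper name and the pair-based input format are assumptions for illustration:

```python
from collections import defaultdict

def find_conflicts(rows):
    """rows: iterable of (entry_type_code, domain_value) pairs taken from
    the XACT_TYPE_CODE and W_XACT_TYPE_CODE columns of the CSV file.
    Return the entry types mapped to more than one domain value."""
    seen = defaultdict(set)
    for code, domain in rows:
        seen[code].add(domain)
    return {code: sorted(domains)
            for code, domains in seen.items() if len(domains) > 1}
```

An empty result means the mapping is safe; any entry in the result identifies a code that appears under two different domain values and must be corrected.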
This section explains how to configure domainValues_Xact_Types_DocTypes_ora11i.csv for Oracle General Ledger and Profitability Analytics, Release 11i.
To configure domainValues_Xact_Types_DocTypes_ora11i.csv for Oracle General Ledger and Profitability Analytics:
Identify the Entry Types in your Oracle Inventory Application by using the following SQL:
select mtt.transaction_type_id, mtt.description from mtl_transaction_types mtt;
This query gives the transaction type codes in your Oracle Inventory Application and their corresponding descriptions. Oracle Inventory Application is a data source from which Oracle General Ledger and Profitability Analytics extract data.
Open the domainValues_Xact_Types_DocTypes_ora11i.csv file using a text editor in the $pmserver\lkpfiles folder.
Look for only the rows in the file that have xact_cat_code = 'COGS'. From among these records, look for those with W_XACT_TYPE_CODE = 'DELIVERY' or 'RETURN'. The XACT_TYPE_CODE column will contain the entry type codes from the Oracle Inventory application, and the W_XACT_TYPE_CODE column contains the corresponding domain value to which each entry type code will be mapped.
Map the entry type codes returned by the query above from your Oracle Inventory Application to the domain values.
Make sure the entry type codes go into the XACT_TYPE_CODE column in the file. Note that the mapping between domain values and entry types is not necessarily one-to-one; for example, you can have multiple entry types mapped to the DELIVERY or RETURN domain value. Be careful not to map the same entry type to two different domain values.
Save and close the file.
In Informatica PowerCenter Designer, open the mapping SDE_ORA_GLCOGSFact in the folder SDE_ORAxxx_Adaptor.
Open the mapplet mplt_BC_ORA_GLCOGSFact contained inside this mapping.
Open the source qualifier SQ_MTL_TRANSACTION_ACCOUNTS.
Modify the SQL so that it also extracts the additional transaction type IDs that you identified in step 1.
Make the same change in Informatica PowerCenter Workflow Manager to the full load sessions that correspond to the mapping SDE_ORA_GLCOGSFact_Full.
If you have an Oracle EBS source system, you can specify from which set of books or ledgers you extract the GL data.
Oracle Business Intelligence Applications enables you to configure:
The list of ledgers or sets of books from which to extract the GL data.

The types of ledgers or sets of books for which to extract the GL data.
You can use these configuration points separately or combine them. Out of the box, Oracle Business Intelligence Applications extracts all GL data for all ledgers or sets of books.
For Oracle 11i customers, to configure the list of set of books to extract the GL data for, do the following:
In the DAC Client, display the Design view, and open the source container that you want to configure by selecting a container from the drop down list to the right of the Execute button.
For more information about logging into the DAC, see Section A.1, "How to Log Into the DAC Client".
Display the Source System Parameters tab.
Use the Search facility to locate the parameter $$FILTER_BY_SET_OF_BOOKS_TYPE.
Set the value to 'Y'.
Make sure that you put single quotes around the Y.
Save the changes.
Use the Search facility to locate the parameter $$SET_OF_BOOKS_ID_LIST.
Enter the IDs of the sets of books for which you want to extract GL data.
Specify the list of set of book IDs separated by commas (do not specify single or double quotes).
For example, if you want to extract GL data for set of books with IDs: 101, 207, and 303, then set the value of this parameter to the following:
101, 207, 303
Save the changes.
You can also specify the set of books types for which you want to extract GL data. In Oracle 11i, there are three set of books types (based on the column GL_SETS_OF_BOOKS.mrc_sob_type_code):
P (Parent)
R (Reporting)
N (None)
For Oracle 11i customers, to configure the types of set of books to extract the GL data, do the following:
In the DAC Client, display the Design view, and open the source container that you want to configure by selecting a container from the drop down list to the right of the Execute button.
For more information about logging into the DAC, see Section A.1, "How to Log Into the DAC Client".
Display the Source System Parameters tab.
Use the Search facility to locate the parameter $$FILTER_BY_SET_OF_BOOKS_TYPE.
Set the value to 'Y'.
Make sure that you put single quotes around the Y.
Save the changes.
Use the Search facility to locate the parameter $$SET_OF_BOOKS_TYPE_LIST.
Enter the types of sets of books for which you want to extract GL data.

Specify the list of set of books types in single quotes and separated by commas.
For example, if you want to extract GL data for all Parent set of books and all Reporting set of books, then set the value of this parameter as follows:
'P', 'R'
Save the changes.
For Oracle EBS R12 customers, to configure the list of ledgers to extract the GL data for, do the following:
In the DAC Client, in the Design view, open the source container that you want to configure by selecting a container from the drop down list to the right of the Execute button.
For more information about logging into the DAC, see Section A.1, "How to Log Into the DAC Client".
Display the Source System Parameters tab.
Use the Search facility to locate the parameter $$FILTER_BY_LEDGER_ID.
Set the value to 'Y'.
Make sure that you put single quotes around the Y.
Save the changes.
Use the Search facility to locate the parameter $$LEDGER_ID_LIST.
Enter the IDs of the ledgers for which you want to extract GL data.
Specify the list of ledger IDs separated by commas (do not use single quotes).
For example, if you want to extract GL data for ledgers with IDs 101, 207, and 303, then set the value of this parameter as follows:
101, 207, 303
Save the changes.
You can also specify the types of ledgers for which you want to extract GL data. In Oracle EBS R12, there are four types of ledgers (based on the column GL_LEDGERS.ledger_category_code):
PRIMARY
SECONDARY
ALC
NONE
For R12 customers, to configure the types of ledgers to extract the GL data for, do the following:
In the DAC Client, in the Design view, open the source container that you want to configure by selecting a container from the drop down list to the right of the Execute button.
For more information about logging into the DAC, see Section A.1, "How to Log Into the DAC Client".
Display the Source System Parameters tab.
Use the Search facility to locate the parameter $$FILTER_BY_LEDGER_TYPE.
Set the value to 'Y'.
Make sure that you put single quotes around the Y.
Save the changes.
Use the Search facility to locate the parameter $$LEDGER_TYPE_LIST.
Enter the types of ledgers for which you want to extract GL data.

Specify the list of ledger types in single quotes and separated by commas.
For example, if you want to extract GL data for all Primary ledgers and all Secondary ledgers, then set the value of this parameter as follows:
'PRIMARY', 'SECONDARY'
Save the changes.
This section contains configuration steps before you do a full data load that apply to PeopleSoft, and contains the following topics:
Section 9.2.3.2, "How to configure Domain Values for Oracle Receivables Analytics"
Section 9.2.3.4, "GL Account Dimension, Chartfields and Hierarchy for PeopleSoft"
If you modify or extend a seeded list of values, you must configure the CSV files for Oracle Financial Analytics by mapping values from your source systems to the domain values.
Section 9.2.3.2, "How to configure Domain Values for Oracle Receivables Analytics" explains how to extract the lists of values from your source system, which you then compare with the seeded values. If the lists of values are different from the seeded values, you need to follow the instructions to configure the Domain Values and CSV Worksheet Files.
Table 9-2 lists the CSV worksheet files and the domain values for Financial Analytics and PeopleSoft in the $pmserver\LkpFiles folder.
Table 9-2 Domain Values and CSV Worksheet Files for Oracle Financial Analytics
Worksheet File Name | Description | Session
---|---|---
domainValues_Xact_Types_DocTypes_psft.csv | Lists the transaction types and the corresponding domain values for the PeopleSoft application. For more information about updating the values in this file, see Section 9.2.3.2, "How to configure Domain Values for Oracle Receivables Analytics". | SDE_PSFT_TransactionTypeDimension_ARDerive
This section explains how to configure domain values for Oracle Receivables Analytics.
To configure domain values for Oracle Receivables Analytics:
Identify the Entry Types in your Receivables Application by using the following SQL:
SELECT DISTINCT ENTRY_TYPE, DESCR FROM PS_ENTRY_TYPE_TBL
This query gives the entry type codes in your PeopleSoft Receivables application and their corresponding descriptions. PeopleSoft Receivables is a data source from which Oracle Receivables Analytics extracts data.
Open the domainValues_Xact_Types_DocTypes_psft.csv file using a text editor in the $pmserver\lkpfiles folder.
Look for only the rows in the file that have xact_cat_code = 'RECEIVABLE' and W_XACT_TYPE_CODE = 'ORIGINAL' or 'APPLICATION'. The column XACT_SUBTYPE_CODE will contain the entry type codes from PeopleSoft Receivables and the columns W_XACT_TYPE_CODE and W_XACT_SUBTYPE_CODE are the corresponding domain values these entry type codes will be mapped to.
Copy the ENTRY_TYPE column to the XACT_SUBTYPE_CODE column in the file.
The data must be copied starting from the 2nd line. The first line is the column header.
Map the entry type codes returned by the query above from your PeopleSoft Receivable Application to the domain values.
Make sure the entry type codes go into the XACT_TYPE_CODE column in the file. Note that the mapping between domain values and entry types is not necessarily one-to-one; for example, you can have multiple entry types mapped to the ORIGINAL/INVOICE domain value. Be careful not to map the same entry type to two different domain values.
Save and close the file.
In Informatica PowerCenter Designer, open mapplets mplt_BC_PSFT_ARTransactionFact_Item and mplt_BC_PSFT_ARTransactionFact_ItemActivity in the SDE_PSFTxxx_Adaptor folder.
Open the source qualifier transformation.
At the end of the SQL, there is an IN clause that should include all the entry type codes that map to W_XACT_TYPE_CODE = 'ORIGINAL' in the CSV file prepared above.
Make the same change in Informatica PowerCenter Workflow Manager to the full load sessions that correspond to the SDE_PSFT_ARTransactionFact_Item_Full and SDE_PSFT_ARTransactionFact_ItemActivity_Full mappings.
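Rather than typing the IN clause by hand, you can generate it from the prepared CSV rows. The following Python sketch is illustrative only; the helper and its pair-based input format are hypothetical, and it emits a fragment targeting the ENTRY_TYPE column of PS_ENTRY_TYPE_TBL:

```python
def build_in_clause(rows, target="ORIGINAL"):
    """rows: iterable of (entry_type_code, w_xact_type_code) pairs from
    the CSV file. Return a SQL IN fragment listing the entry types that
    map to the given domain value."""
    codes = sorted({code for code, domain in rows if domain == target})
    quoted = ", ".join("'%s'" % code for code in codes)
    return "ENTRY_TYPE IN (%s)" % quoted
```

Paste the resulting fragment into the source qualifier SQL in place of the existing IN clause, keeping the rest of the statement unchanged.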
Before you run the initial ETL, make sure that you set the Source System Parameters in order to compute the GL Balances correctly, as follows:
In the DAC Client, display the Design View, then display the Source System Parameters tab.
For information about how to log into the DAC, see Section A.1, "How to Log Into the DAC Client".
Set the following parameters:
Set the parameter $$INITIAL_EXTRACT_DATE to the date from which you want to extract data from your PeopleSoft application to the data warehouse. Ensure that this date is the beginning of an accounting period, not a date in the middle of a period. For example, if you decide to extract data from June 2005, and the June 2005 accounting period starts on June 5, set the date to June 5, 2005.
Set the parameter $$YEAR to the fiscal year of the $$INITIAL_EXTRACT_DATE parameter above. For example, if June 2005 is in fiscal year 2005, set this parameter to 2005.
Set the parameter $$ACCOUNTING_PERIOD to the accounting period number of the $$INITIAL_EXTRACT_DATE parameter above. For example, if June 2005 is the 6th period of fiscal year 2005, set this parameter to 6.
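To sanity-check that the three parameters agree with each other before running the initial ETL, you could use a small script like the following sketch. The fiscal calendar dictionary here is a made-up example; your real accounting calendar is defined in your PeopleSoft application:

```python
import datetime

# Hypothetical fiscal calendar: accounting period number -> first day
# of that period for fiscal year 2005 (illustration only).
FISCAL_2005 = {6: datetime.date(2005, 6, 5), 7: datetime.date(2005, 7, 4)}

def check_parameters(extract_date, year, period, calendar, fiscal_year):
    """Return True when $$INITIAL_EXTRACT_DATE falls on the start of the
    given accounting period and $$YEAR matches the fiscal year."""
    return year == fiscal_year and calendar.get(period) == extract_date
```

If the check fails, adjust $$INITIAL_EXTRACT_DATE to the first day of the period, or correct $$YEAR and $$ACCOUNTING_PERIOD to match it.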
The GL Account Dimension in the Oracle Business Analytics Warehouse is at the granularity of a combination of chartfields. PeopleSoft Financials provides several chartfields for GL Accounts, such as Account, Alternate Account, Operating Unit, Department, and so on. The ETL program extracts all possible combinations of these chartfields that you have used and stores each of these chartfields individually in the GL Account Dimension. It extracts the combinations of chartfields used from the following PeopleSoft account entry tables:
PS_VCHR_ACCTG_LINES (Accounts Payable)
PS_ITEM_DST (Accounts Receivable)
PS_BI_ACCT_ENTRY (Billings)
PS_CM_ACCTG_LINE (Costing)
PS_JRNL_LN (General Ledger)
The GL Account Dimension (W_GL_ACCOUNT_D) in the Oracle Business Analytics Warehouse provides a flexible and generic data model to accommodate up to 30 chartfields. These are stored in the generic columns named ACCOUNT_SEG1_CODE, ACCOUNT_SEG2_CODE and so on up to ACCOUNT_SEG30_CODE, henceforth referred to as segments. These columns store the actual chartfield value that is used in your PeopleSoft application.
Mapping PeopleSoft Chartfields
A CSV file has been provided to map the PeopleSoft Chartfields to the generic segments. Use this file to specify which PeopleSoft application chartfield should be populated in which segment. The file is called file_glacct_segment_config_psft.csv and is located in the <OracleBI>\dwrep\Informatica\SrcFiles folder.
The first row in the file is a header row; do not modify this line. The second row in the file is where you specify how to do the mapping. The value for the column ROW_ID is hard coded to '1'; there is no need to change this.
Note that the file contains 30 columns, SEG1 through SEG30. You must specify which chartfield is populated in each of these columns by specifying one of the supported values for the chartfields. The chartfields currently supported for the PeopleSoft application are listed below.
Note:
Values are case sensitive. You must specify the values exactly as shown below.

Account
Alternate Account
Operating Unit
Fund Code
Department
Program Code
Class Field
Budget Reference
Product
Project
Affiliate
Fund Affiliate
Operating Unit Affiliate
ChartField 1
ChartField 2
ChartField 3
Note:
You only need to include the chartfields in the CSV file that you want to map.

Hierarchy
Oracle Business Intelligence Applications supports hierarchies for all these segments. If you have created a 'tree' in PeopleSoft for any of these chartfields, you can extract these trees into the Oracle Business Analytics Warehouse's hierarchy to analyze your facts at any level of the hierarchy. The sections that follow explain how to extract these trees and how to set up the repository file (RPD) to use the Oracle Business Analytics Warehouse's hierarchy.
GL Balance Aggregate
The Oracle Business Analytics Warehouse data model has a fact table (W_GL_BALANCE_F) that stores GL balances for all your GL accounts. To compress this fact table for better performance, the model also provides an aggregate table built on top of this fact table that stores GL balances for up to six segments of your choice. The number of segments in the aggregate table, and which segments are included, is configurable. This configuration is done on the third line of the file_glacct_segment_config_psft.csv file. Specify a value of 'Y' under each segment column that you want included in the aggregate table.
Note:
You can have up to six 'Y's in the file. You do not have to use all six; for example, if you want only three segments in your aggregate, specify only three 'Y's.

Example CSV File Configuration
As an example for the configuration of the file_glacct_segment_config_psft.csv file, consider the following scenario:
Your system uses four chartfields: Account, Alternate Account, Operating Unit, and Department. Assume that you are interested in analyzing your data by only three of the four chartfields (Account, Operating Unit, and Department), and that you typically view your GL balances at the combination of Account and Department. Only occasionally do you want to view your GL balances as the combination of all three chartfields. In this scenario, the CSV file should look like:
Table 9-3 Example CSV Chartfield Mapping Values
ROW_ID | SEG1 | SEG2 | SEG3
---|---|---|---
1 | Account | Operating Unit | Department
AGGREGATION | Y | | Y
With this configuration, W_GL_ACCOUNT_D would store the 'Account' chartfield value in the ACCOUNT_SEG1_CODE column, the 'Operating Unit' chartfield value in the ACCOUNT_SEG2_CODE column, and so forth. The GL Balance aggregate table, W_GL_BALANCE_A, would store GL balances for each unique combination of the 'Account' and 'Department' chartfield values.
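The segment mapping and aggregation flags in file_glacct_segment_config_psft.csv can be parsed as ordinary CSV. The following Python sketch is illustrative only and assumes the simplified three-row layout shown in the example above:

```python
import csv
import io

def read_segment_config(f):
    """Parse a (simplified) file_glacct_segment_config_psft.csv:
    row 1 is the header, row 2 maps chartfields to segments, and
    row 3 flags the aggregate segments with 'Y'."""
    rows = list(csv.reader(f))
    header, mapping, agg = rows[0], rows[1], rows[2]
    chartfields = dict(zip(header[1:], mapping[1:]))
    agg_segments = [seg for seg, flag in zip(header[1:], agg[1:])
                    if flag == "Y"]
    return chartfields, agg_segments

# Stand-in for the example configuration in Table 9-3.
sample = io.StringIO(
    "ROW_ID,SEG1,SEG2,SEG3\n"
    "1,Account,Operating Unit,Department\n"
    "AGGREGATION,Y,,Y\n")
```

Reading the file this way makes it easy to verify that no more than six segments carry a 'Y' before the ETL consumes the configuration.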
You can categorize your PeopleSoft General Ledger accounts into specific group account numbers. The GROUP_ACCT_NUM field denotes the nature of the General Ledger accounts.
Note:
It is critical that the General Ledger account numbers are mapped to the group account numbers (or domain values), because the metrics in the General Ledger reporting layer use these values.

For example, Cash account, Payroll account, and so on. For a list of the Group Account Number domain values, see Oracle Business Analytics Warehouse Fusion Edition Data Model Reference. The group account number configuration is used during data extraction as well as front-end reporting. For example, the group account number configuration is used heavily in both Profitability Analysis (Income Statement) and General Ledger analysis. The logic for assigning the accounts is located in the file_group_acct_codes_psft.csv file. This file is located in the <OracleBI>\dwrep\Informatica\SrcFiles folder.
Table 9-4 Layout of file_group_acct_codes_psft.csv File
BUSINESS_UNIT | FROM_ACCT | TO_ACCT | GROUP_ACCT_NUM
---|---|---|---
AUS01 | 101010 | 101099 | AP
AUS01 | 131010 | 131939 | AR
AUS01 | 152121 | 152401 | COGS
AUS01 | 171101 | 173001 | OTHER
AUS01 | 240100 | 240120 | REVENUE
AUS01 | 251100 | 251120 | TAX
In the table above, in the first row, all accounts within the account number range from 101010 to 101099 that have a Business Unit equal to AUS01 are assigned to AP. Each row maps all accounts within the specified account number range and with the given Business Unit. If you need a new group of account numbers, add rows to the file_group_acct_codes_psft.csv file assigning the appropriate GL account ranges to the new group account number.
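The range-based assignment performed with file_group_acct_codes_psft.csv can be pictured as a simple lookup. This Python fragment is illustrative only; the rule format mirrors the four columns of the file, with account numbers treated as integers:

```python
def assign_group(rules, business_unit, account):
    """rules: (business_unit, from_acct, to_acct, group_acct_num) tuples
    mirroring the rows of file_group_acct_codes_psft.csv. Return the
    group account number for the given account, or None if unmapped."""
    for bu, lo, hi, group in rules:
        if bu == business_unit and lo <= account <= hi:
            return group
    return None

# Two rows from the example table above.
RULES = [("AUS01", 101010, 101099, "AP"),
         ("AUS01", 131010, 131939, "AR")]
```

Accounts that fall outside every configured range return None here; in practice such accounts would be left without a group account number, which the note above warns against.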
This section contains configuration steps before you do a full data load that apply to Universal.
Not applicable to Oracle Business Intelligence Applications Release 7.9.5.
This section contains additional configuration steps for Oracle Financial Analytics, and contains the following topics:
Section 9.2.5.1, "Configuration Steps for Financial Analytics For All Source Systems"
Section 9.2.5.2, "Configuration Steps for Financial Analytics For Oracle EBS"
Section 9.2.5.3, "Configuration Steps for Financial Analytics For PeopleSoft"
Section 9.2.5.4, "Configuration Steps for Financial Analytics Chain For Universal"
This section contains additional configuration steps that apply to all source systems, and contains the following topics:
Section 9.2.5.1.1, "How to include UnApplied Payments in AR Aging Calculations"
Section 9.2.5.1.2, "How to Configure the AP/AR Aging Tables"
To include UnApplied Payments in Aging Calculations for AR Aging tables
In Informatica Designer, open the mapping PLP_ARSnapshotInvoiceAging in the PLP Folder.
Edit the Source Qualifier transformation SQ_IA_AR_XACTS, and open the SQL Query in the Properties tab.
In the WHERE clause of the SQL statement, change the SQL text from:
W_XACT_TYPE_D.W_XACT_SUBTYPE_CODE IN ('INVOICE','CR MEMO','DR MEMO') AND
To:
W_XACT_TYPE_D.W_XACT_SUBTYPE_CODE IN ('INVOICE','CR MEMO','DR MEMO', 'PAYMENT') AND
Save the changes.
The next ETL will populate the Aging tables using UnApplied payments in the calculations.
This section explains how to control the lengths of the aging buckets in the AP and AR aging snapshot tables. These tables are:
W_AP_AGING_INVOICE_A
W_AR_AGING_INVOICE_A
W_AP_AGING_SUPPLIER_A
W_AR_AGING_CUSTOMER_A
In these four tables, outstanding AP/AR balance information is broken out into rows. Each row represents the outstanding balance information for a particular aging bucket. Four aging buckets are provided out-of-the-box, with the following durations:
Bucket 1: 0 – 30 days
Bucket 2: 31 – 60 days
Bucket 3: 61 – 90 days
Bucket 4: 90+ days
Use the DAC to configure the lengths of these aging buckets, as described below.
To configure the length of aging buckets
Click on the 'Tasks' tab.
Query for the following tasks:
PLP_APSnapshotInvoiceAging
PLP_ARSnapshotInvoiceAging
For each of these tasks, click on the 'Parameters' tab and update the parameters with names starting with '$$BUCKET'.
Save your work.
These aging tables are snapshot tables. You can also control how many historic month end snapshots you want to keep in these tables (the default value is 24 months). You can increase or decrease the number by adjusting the $$HISTORY_MONTHS parameter.
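The bucketing performed by these aging tasks can be pictured with a small helper. This sketch is illustrative only; the real bucket boundaries come from the '$$BUCKET' parameters in the DAC:

```python
def aging_bucket(days_outstanding, boundaries=(30, 60, 90)):
    """Map days outstanding to one of the four default aging buckets
    (0-30, 31-60, 61-90, 90+). The boundaries tuple stands in for the
    configurable $$BUCKET parameter values."""
    for bucket, upper in enumerate(boundaries, start=1):
        if days_outstanding <= upper:
            return bucket
    return len(boundaries) + 1
```

Changing the '$$BUCKET' parameters in the DAC corresponds to passing different boundaries here; each outstanding invoice lands in exactly one bucket row of the aging tables.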
This section contains additional configuration steps that apply to Oracle EBS, and contains the following topics:
Section 9.2.5.2.1, "How to Configure How GL Balances Is Populated in Oracle EBS"
Section 9.2.5.2.2, "How to Configure Oracle Profitability Analytics Transaction Extracts"
Section 9.2.5.2.3, "How to Configure Cost Of Goods Extract (Oracle EBS 11i-specific)"
Section 9.2.5.2.4, "How to Configure AP Balance ID for Oracle Payables Analytics"
Section 9.2.5.2.6, "How to Configure the AR Adjustments Extract for Oracle Receivables Analytics"
Section 9.2.5.2.7, "How to Configure the AR Schedules Extract"
Section 9.2.5.2.10, "How to Setup Drill Down in Oracle BI Answers from General Ledger to Subledger"
Oracle Business Intelligence Applications provides two ways to populate the GL balances (stored in the W_GL_BALANCE_F table), as follows:
By extracting the GL balances directly from Oracle General Ledger, as follows:
In the DAC Client, for the Subject Area 'Financials – General Ledger', in the 'Configuration Tags' tab, make sure that the tag 'Oracle – Extract GL Balance' is unchecked for the 'Inactive' checkbox.
Make sure the tag 'Financials – Calculate GL Balance' is checked in the 'Inactive' checkbox.
Click Assemble to redesign the subject area.
After the subject area has been redesigned, redesign the execution plans that contain this subject area.
By calculating the GL balances based on the records in the W_GL_OTHER_F table, which stores all journal lines, as follows:
In the DAC Client, for the Subject Area 'Financials – General Ledger', in the 'Configuration Tags' tab, make sure that the tag 'Financials – Calculate GL Balance' is unchecked for the 'Inactive' checkbox.
Make sure the tag 'Oracle – Extract GL Balance' is checked in the 'Inactive' checkbox.
Click Assemble to redesign the subject area.
After the subject area has been redesigned, redesign the execution plans that contain this subject area.
There are two separate transaction extracts for Oracle General Ledger and Profitability Analytics: General Ledger Revenue and General Ledger COGS. By default, the Oracle General Ledger Analytics application extracts only completed revenue and COGS that have been posted to the general ledger. Completed revenue transactions are those where RA_CUSTOMER_TRX_ALL.COMPLETE_FLAG = Y. If you want to extract incomplete revenue transactions, you can remove the filter in the Business Component.
To modify the extract filter for Oracle General Ledger and Profitability Analytics Revenue
In Informatica PowerCenter Designer, open the appropriate Oracle Applications folder (for example, SDE_ORA11510_Adaptor).
In Mapplet Designer, open the mplt_BC_ORA_GLRevenueFact mapplet.
Double-click the Source Qualifier to open the Edit Transformations dialog, and click the Properties tab.
In the User Defined Join field and in the SQL Query field, remove the statement:
AND RA_CUSTOMER_TRX_ALL.COMPLETE_FLAG = Y
Validate and save your changes to the repository.
By default, for customers of Oracle EBS 11i (releases 11.5.8, 11.5.9, and 11.5.10), the Oracle General Ledger Analytics application extracts only COGS transactions that have been posted to the general ledger. All COGS transactions that have been transferred satisfy the condition MTL_TRANSACTION_ACCOUNTS.GL_BATCH_ID <> -1. If you want to extract all transactions, you can remove the filter in the Business Component mapplet. You need to modify both the regular extract mapplet and the primary extract mapplet. This does not apply to EBS R12, where all transactions are extracted by default. The following section applies only to EBS 11i customers.
To modify the extract filter for General Ledger COGS
In Informatica PowerCenter Designer, open the appropriate Oracle Applications folder (for example, SDE_ORA11510_Adaptor).
In Mapplet Designer, open the mplt_BC_ORA_GLCOGSFact mapplet.
Double-click the Source Qualifier to open the Edit Transformations dialog, and click the Properties tab.
In the User Defined Join field and in the SQL Query field, remove the statement:
AND MTL_TRANSACTION_ACCOUNTS.GL_BATCH_ID <> -1
In Mapplet Designer, open the mplt_BC_ORA_GLCOGSFact_Primary mapplet.
Double-click the Source Qualifier to open the Edit Transformations dialog, and click the Properties tab.
In the User Defined Join field and in the SQL Query field, remove the statement:
AND MTL_TRANSACTION_ACCOUNTS.GL_BATCH_ID <> -1
Validate and save your changes to the repository.
Make the corresponding changes to the SQL in the Informatica session 'SDE_ORA_GLCOGSFact_Full', which is in the corresponding folder in Informatica PowerCenter Workflow Manager.
If you customized the mapping SDE_ORA_GLCOGSFact to extract from Oracle EBS versions 11.5.8, 11.5.9, or 11.5.10, modify the domain values file domainValues_Xact_Types_DocTypes_ora11i.csv.
To extract from Oracle EBS version R12, modify the file domainValues_Xact_Types_DocTypes_ora12.csv.
The AP Balance ID controls the level at which the balance in W_AP_BALANCE_F is maintained.
This section contains configuration information for Oracle Payables Analytics that is specific to Oracle.
By default, the Accounts Payable (AP) Balance ID is maintained at the following granularity:
SET_OF_BOOKS_ID||'~'||CODE_COMBINATION_ID||'~'||VENDOR_SITE_ID||'~'||INP_ORG_ID
However, if you want to maintain your AP balance at a different grain, you can redefine the Balance ID value in the applicable mapplets.
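The Balance ID is simply a '~'-delimited concatenation of the columns that define the grain. The hypothetical Python sketch below mimics the default expression and shows how dropping a component coarsens the grain; the actual change is made in the Expression transformation of the mapplets, as described in the procedure that follows:

```python
def ap_balance_id(set_of_books_id, code_combination_id, vendor_site_id, org_id):
    """Default AP Balance ID: a '~'-delimited concatenation of the
    columns that fix the grain at which W_AP_BALANCE_F is maintained."""
    parts = (set_of_books_id, code_combination_id, vendor_site_id, org_id)
    return "~".join(str(p) for p in parts)

def ap_balance_id_no_site(set_of_books_id, code_combination_id, org_id):
    """Hypothetical coarser grain: omitting VENDOR_SITE_ID rolls all
    sites of a vendor's transactions into one balance row."""
    parts = (set_of_books_id, code_combination_id, org_id)
    return "~".join(str(p) for p in parts)

print(ap_balance_id(101, 5001, 42, 204))   # one balance per site
print(ap_balance_id_no_site(101, 5001, 204))
```

The AR Balance ID described later in this chapter works the same way, with its own set of grain columns.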
To modify the Accounts Payable Balance ID
Note:
To modify the Accounts Payable Balance ID, you must modify the following mapplets:
mplt_SA_ORA_APTransactionFact_LiabilityDistribution
mplt_SA_ORA_APTransactionFact_ExpenseDistribution
mplt_SA_ORA_APTransactionFact_Payment
mplt_SA_ORA_APTransactionFact_PaymentSchedule
mplt_SA_ORA_APTransactionFact_DiffMan
In Informatica PowerCenter Designer, open the appropriate Oracle Applications folder (for example, SDE_ORA11510_Adaptor).
In Mapplet Designer, open the mapplet (for example, mplt_SA_ORA_APTransactionFact_LiabilityDistribution).
Double-click the Expression transformation to open the Edit Transformations dialog.
Click the Ports tab to edit the Balance ID definition in the EXT_NU_BALANCE_ID column.
Validate and save your changes to the repository.
Repeat steps 1 to 5 for each mapplet that is listed above.
The AR Balance ID controls the level at which the balance in W_AR_BALANCE_F is maintained.
By default, the AR Balance ID is maintained at the following granularity:
set_of_books_id || '~' || code_combination_id || '~' || customer_id || '~' || customer_site_use_id || '~' || transaction_currency_code || '~' || org_id
However, if you want to maintain your AR balance at a different grain, you can redefine the Balance ID value in the applicable mapplets.
To modify the AR Balance ID
Note:
To modify the AR Balance ID, you must modify the following mapplets:
mplt_SA_ORA_ARTransactionFact_Adjust
mplt_SA_ORA_ARTransactionFact_ARScheduleDerive
mplt_SA_ORA_ARTransactionFact_CreditMemoApplication
mplt_SA_ORA_ARTransactionFact_ReceivableApplication
In Informatica PowerCenter Designer, open the appropriate Oracle Applications folder (for example, SDE_ORA11510_Adaptor).
In Mapplet Designer, open the mapplet (for example, mplt_SA_ORA_ARTransactionFact_Adjust).
Double-click the Expression transformation to open the Edit Transformations dialog.
Click the Ports tab to edit the Balance ID definition in the EXT_NU_AR_BALANCE_ID column.
Validate and save your changes to the repository.
Repeat steps 1 to 5 for each mapplet that is listed above.
By default, Oracle Receivables Analytics extracts only approved adjustment entries against accounts receivable transactions. Approved adjustments are entries where AR_ADJUSTMENTS_ALL.STATUS = A.
If you want to extract additional types of AR adjustment entries, you can remove the filter in the Business Component mapplet. By modifying or removing the filter, you can extract other entries, such as those that require more research, those that are rejected, and those that are not accrued charges.
To modify the extract filter for Accounts Receivable adjustments
In Informatica PowerCenter Designer, open the appropriate Oracle Applications folder (for example, SDE_ORA11510_Adaptor).
In Mapplet Designer, open the mplt_BC_ORA_ARTransactionFact_Adjust mapplet.
Double-click the Source Qualifier to open the Edit Transformations dialog, and click the Properties tab.
In the SQL Query field and in the User Defined Join field, modify the statement:
AND AR_ADJUSTMENTS_ALL.STATUS = A
Validate and save your changes to the repository.
Repeat Step 2 to Step 4 for the mplt_BC_ORA_ARTransactionFact_AdjustPrimary mapplet.
By default, Oracle Receivables Analytics extracts only completed schedules; that is, transactions where RA_CUSTOMER_TRX_ALL.COMPLETE_FLAG(+) = Y. If you want to extract additional types of AR schedule entries, you must remove the filter in the Business Component mapplet. By modifying or removing the filter, you can extract other entries, such as those that are marked as incomplete.
To modify the extract filter for Accounts Receivable schedules
In Informatica PowerCenter Designer, open the appropriate Oracle Applications folder (for example, SDE_ORA11510_Adaptor).
In Mapplet Designer, open the mplt_BC_ORA_ARTransactionFact_ARSchedules mapplet.
Double-click the Source Qualifier to open the Edit Transformations dialog, and click the Properties tab.
In the User Defined Join field and in the SQL Query field, modify the statement:
AND RA_CUSTOMER_TRX_ALL.COMPLETE_FLAG(+) = Y
Validate and save your changes to the repository.
By default, Oracle Receivables Analytics extracts only confirmed cash-receipt application entries against accounts receivable transactions. Confirmed receipts are entries where AR_RECEIVABLE_APPLICATIONS_ALL.CONFIRMED_FLAG is either Y or NULL. If you want to extract additional types of cash-receipt application entries, you can remove the filter in the Business Component mapplet. By modifying or removing the filter, you can extract other entries, such as unconfirmed applications.
You must modify both the regular mapplet (mplt_BC_ORA_ARTransactionFact_ReceivableApplication) and the primary extract mapplet (mplt_BC_ORA_ARTransactionFact_ReceivableApplicationPrimary).
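The NVL in the filter is what makes a NULL flag count as confirmed. The illustrative Python equivalent below shows the semantics of NVL(CONFIRMED_FLAG,'Y') = Y; it is a sketch of the condition, not part of the product:

```python
def is_confirmed(confirmed_flag):
    """Python mirror of the SQL filter NVL(CONFIRMED_FLAG,'Y') = 'Y':
    NVL substitutes 'Y' when the flag is NULL (None here), so a NULL
    flag is treated as confirmed and the row passes the filter."""
    return (confirmed_flag if confirmed_flag is not None else "Y") == "Y"

print(is_confirmed("Y"))   # confirmed
print(is_confirmed(None))  # NULL counts as confirmed
print(is_confirmed("N"))   # filtered out by default
```

Removing the filter corresponds to accepting every row regardless of the flag value.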
To modify the extract filter for AR cash receipt application
In Informatica PowerCenter Designer, open the appropriate Oracle Applications folder (for example, SDE_ORA11510_Adaptor).
In Mapplet Designer, open the mplt_BC_ORA_ARTransactionFact_ReceivableApplication mapplet.
Double-click the Source Qualifier to open the Edit Transformations dialog, and click the Properties tab.
In the User Defined Join field and in the SQL Query field, modify the statement:
AND NVL(AR_RECEIVABLE_APPLICATIONS_ALL.CONFIRMED_FLAG,'Y') = Y
Validate and save your changes to the repository.
Repeat Step 2 to Step 4 for the mplt_BC_ORA_ARTransactionFact_ReceivableApplicationPrimary mapplet.
By default, Oracle Receivables Analytics extracts only confirmed credit-memo application entries against accounts receivable transactions. Confirmed credit memos are entries where AR_RECEIVABLE_APPLICATIONS_ALL.CONFIRMED_FLAG is either Y or NULL. If you want to extract additional types of AR credit-memo application entries, you can remove the filter. By modifying or removing the filter, you can extract other entries, such as unconfirmed credit memos.
You must modify both the regular mapplet (mplt_BC_ORA_ARTransactionFact_CreditmemoApplication) and the primary extract mapplet (mplt_BC_ORA_ARTransactionFact_CreditmemoApplicationPrimary). Repeat the following procedure for each mapplet.
To modify the extract filter for Accounts Receivable Credit-Memo Application receipts
In Informatica PowerCenter Designer, open the appropriate Oracle Applications folder (for example, SDE_ORA11510_Adaptor).
In Mapplet Designer, open the mplt_BC_ORA_ARTransactionFact_CreditmemoApplication mapplet.
Double-click the Source Qualifier to open the Edit Transformations dialog, and click the Properties tab.
In the User Defined Join field and in the SQL Query field, modify the statement:
AND NVL(AR_RECEIVABLE_APPLICATIONS_ALL.CONFIRMED_FLAG,'Y') = Y
Validate and save your changes to the repository.
Repeat Step 2 to Step 4 for the mplt_BC_ORA_ARTransactionFact_CreditmemoApplicationPrimary mapplet.
Oracle Business Intelligence Applications enables you to trace a GL Journal to the Subledger transaction that created that journal. This ability (or drill down) is achieved through the Navigation feature in Oracle BI Answers. This feature is available for AP if the source is Oracle EBS 11i and for all subledgers if the source is Oracle EBS R12.
To set up drill down in Oracle BI Answers from General Ledger to Subledger:
Create your subledger request from 'Financials - AP Transactions' or 'Financials - AR Transactions' catalog as applicable.
In your request, add a filter on the column 'GL Journal ID' under the 'Document Details' folder, and set the operator of the filter to 'Is Prompted'.
Build your GL Journal request from the 'Financials - GL Detail Transactions' catalog.
To your request, add the column 'GL Journal ID' under the 'Document Details' folder.
Navigate to the Column Properties of this column, and set the Value Interaction property in the Column Format tab to 'Navigate'.
Add a navigation target and set the target location to the subledger request you created earlier.
Note:
For COGS and Revenue, the 'GL Journal ID' column is not exposed in any presentation catalog. It is available in the business model layer of the RPD metadata, under the logical tables 'Dim - GL COGS Details' and 'Dim - GL Revenue Details' respectively. As a workaround, you can create presentation catalogs that report on detail-level COGS and Revenue transactions, expose this column in those catalogs, and then follow steps similar to those above to set up a drill down from GL to COGS/Revenue.
This section contains additional configuration steps that apply to PeopleSoft, and contains the following topics:
Section 9.2.5.3.1, "About PeopleSoft Trees in Oracle Financial Analytics"
Section 9.2.5.3.3, "General Ledger Chartfield Hierarchy Extraction Customization"
Financial Analytics consists of three products: General Ledger and Profitability Analytics, Payables Analytics, and Receivables Analytics. In this section, references to Financial Analytics apply to all three products.
For PeopleSoft, the Oracle Financial Analytics application sources data from a data structure, called PeopleSoft Trees, to get information about the organization's General Ledger account hierarchies, Profit Center hierarchies, Cost Center hierarchies, and so on.
PeopleSoft Trees are a flexible, generic way of constructing hierarchical summarizations of particular database fields in PeopleSoft for reporting purposes. Typically, entities such as Chartfields (Account, Department, Project, and so on), items, and locations are organized into user-defined trees.
Table 9-5 lists the default PeopleSoft Trees that are sourced by Oracle Financial Analytics.
Table 9-5 PeopleSoft Trees for Oracle Financial Analytics
PeopleSoft Data Structure | Repository Source Name | Description |
---|---|---|
PeopleSoft Tree | ACCTROLLUP | Account Rollup |
PeopleSoft Tree | AR_BUSINESS_UNITS | AR Business Unit Roll-Up |
PeopleSoft Tree | DEPARTMENTS | Manufacturing Departments |
PeopleSoft Tree | OPERUNIT | Operating Unit |
Table 9-6 lists the mappings and sessions using PeopleSoft Trees.
Table 9-6 List of Mappings And Sessions Using PeopleSoft Trees
Repository Source Name | Mapping | Session |
---|---|---|
ACCTROLLUP | M_P_GL_ACCTS_HIERARCHY_EXTRACT | S_M_P_GL_ACCTS_HIERARCHY_EXTRACT |
AR_BUSINESS_UNITS | M_P_BUSN_ORGS_COMPANY_HIERARCHY_EXTRACT | S_M_P_BUSN_ORGS_COMPANY_HIERARCHY_EXTRACT |
DEPARTMENTS | M_P_PROFIT_CNTRS_HIERARCHY_EXTRACT | S_M_P_PROFIT_CNTRS_HIERARCHY_EXTRACT |
OPERUNIT | M_P_COST_CENTERS_HIERARCHY_EXTRACT | S_M_P_COST_CENTERS_HIERARCHY_EXTRACT |
Oracle Business Intelligence Applications provides a set of mechanisms to support various tree structures in PeopleSoft applications. The following tables illustrate the supported structure types and level types.
To support this functionality efficiently, a flattened structure is used to store the hierarchy-related data. Each record in W_HIERARCHY_D and the other hierarchy-related dimension tables (for example, W_INT_ORG_DH) represents a single node in the tree, and stores the path from the top-most node down to that node. For the bottom nodes of a detail tree, the extraction process generates one record for each record in the relevant source table that falls within the node's range, and puts those records into the hierarchy table. The following examples explain this functionality in detail.
For every node in the tree except the bottom nodes, W_HIERARCHY_D has one corresponding record. For example, for the node 'E' in the example tree, the table contains the following record:
Table 9-9 Example of how the node E is stored in the W_HIERARCHY_D table.
HIER1_CODE | HIER2_CODE | HIER3_CODE | HIER4_CODE | HIER[5-20]_CODE |
---|---|---|---|---|
US | E | E | E | E |
If the actual hierarchy in the tree ends at a level earlier than the predefined number of levels in W_HIERARCHY_D, the lower levels in the table repeat the value of the deepest populated level. The predefined number of levels is 20, so if one path of the tree has only 5 levels, then levels 6 through 20 repeat the value of level 5.
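The level-padding rule can be sketched as follows (illustrative Python; the real flattening is done by the hierarchy ETL mappings):

```python
def flatten_path(path, levels=20):
    """Pad a tree path to the fixed number of hierarchy levels by
    repeating the deepest node's code, as W_HIERARCHY_D does."""
    if not path:
        raise ValueError("path must contain at least one node")
    padded = list(path) + [path[-1]] * (levels - len(path))
    return padded[:levels]

# Node 'E' under 'US': HIER1_CODE is 'US', HIER2_CODE through
# HIER20_CODE all repeat 'E'.
row = flatten_path(["US", "E"])
```

Here `path` is the node codes from the root down to the node being stored, matching the HIER1_CODE..HIER20_CODE columns.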
For the bottom nodes of the detail tree, W_HIERARCHY_D has one record for every existing source record within the node's detail ranges (the extraction process finds the matching records in the relevant table according to the tree's structure). For example, for the CA node, the ranges are 2334-2434 and 3001-3001, and the tree structure table (for example, State_TBL) contains the records 2334, 2340, and 3001. When generating the CA node, the extraction process extracts these records and puts them into W_HIERARCHY_D with the following structure:
Table 9-10 Example of how the CA detail records are stored in the W_HIERARCHY_D table.
HIER1_CODE | HIER2_CODE | HIER3_CODE | HIER4_CODE | HIER5_CODE |
---|---|---|---|---|
US | W | CA | 2340 | 2340 |
US | W | CA | 3001 | 3001 |
US | W | CA | 2334 | 2334 |
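The expansion of a bottom node's ranges into individual flattened rows can be sketched like this (illustrative Python; the range bounds and detail values are taken from the CA example above):

```python
def expand_detail_node(path, ranges, detail_values, levels=20):
    """For a bottom (detail) node, emit one flattened row per
    detail-table value that falls inside the node's ranges, repeating
    the value to fill the remaining levels. Sketch of the extract
    logic, not the actual mapping."""
    rows = []
    for value in detail_values:
        if any(lo <= value <= hi for lo, hi in ranges):
            row = list(path) + [value] * (levels - len(path))
            rows.append(row[:levels])
    return rows

# CA node: ranges 2334-2434 and 3001-3001; 9999 falls outside and is skipped.
rows = expand_detail_node(["US", "W", "CA"], [(2334, 2434), (3001, 3001)],
                          [2340, 3001, 9999], levels=5)
# → [['US', 'W', 'CA', 2340, 2340], ['US', 'W', 'CA', 3001, 3001]]
```

Each returned row corresponds to one W_HIERARCHY_D record for that detail node.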
In both summary trees and detail trees, the lowest-level node is always a node with a specific value (as in the example above; the name and the code may differ). In the case of a winter tree, the lowest-level code repeats the value of the higher level.
For example, the record 'Kuo' is stored in the database as follows:
Table 9-11 Example of how a winter-tree node is stored in the W_HIERARCHY_D table.
HIER1_CODE | HIER2_CODE | HIER3_CODE | HIER4_CODE | HIER[5-20]_CODE |
---|---|---|---|---|
Smith | Cheng | Kuo | Kuo | Kuo |
Oracle Business Intelligence Applications uses the following temporary tables and staging tables to handle the extraction and load process of the tree. They are standard, and are not specific to a particular tree or tree structure.
The table below lists the tables used during the General Ledger Chartfield hierarchy extraction process. The sequence indicates the order in which these tables are populated.
Table 9-12 General Ledger Chartfield Hierarchy
Sequence | Table Name |
---|---|
1 | W_PSFT_TREE_TMP |
1 | W_PSFT_GLHIER_CF_DH_TMP |
2 | W_PSFT_GLHIER_DTLRGE_DH_TMP |
2 | W_PSFT_GLHIER_VERT_DH_TMP |
3 | W_PSFT_GLHIER_FLAT_DH_TMP |
4 | W_HIERARCHY_DS |
The tables used during the Internal Organization Hierarchy extraction process are shown in the table below.
Table 9-13 Internal Organization Hierarchy
Sequence | Table Name |
---|---|
1 | W_PSFT_INT_ORG_TREE_TMP |
1 | W_PSFT_INT_ORG_GLBU_DH_TMP |
2 | W_PSFT_INT_ORG_DTLRGE_DH_TMP |
2 | W_PSFT_INT_ORG_VERT_DH_TMP |
3 | W_PSFT_INT_ORG_FLAT_DH_TMP |
4 | W_INT_ORG_DHS |
5 | W_INT_ORG_DH |
Note:
The hierarchy-related extraction and load tasks are deactivated in the standard application. To activate them, you need to activate certain Configuration Tags in the Oracle Data Warehouse Administration Console (DAC). Configuration Tags are a DAC feature that allows you to activate or deactivate a set of tasks in your ETL process. When a tag is deactivated, its tasks are not executed when the execution plan runs; when it is activated, they are.
Oracle Business Intelligence Applications includes two Configuration Tags for hierarchy extractions and load, as follows:
'PeopleSoft - Populate Non Business Unit Related Hierarchies', which contains the extraction and load tasks for all hierarchies except those related to business units.
'PeopleSoft - Populate Business Unit Related Hierarchies', which contains the extraction and load tasks for business unit related hierarchies.
By activating and deactivating these tags in a Subject Area, you control whether the hierarchy extraction and load tasks are executed in the execution plan.
To activate or de-activate a set of tasks in your ETL process:
In the DAC Client, in the Design view, open the corresponding PeopleSoft container by selecting a container from the drop down list to the right of the Execute button.
For more information about logging into the DAC, see Section A.1, "How to Log Into the DAC Client".
Display the Configuration Tags tab.
Clear the Inactive checkbox of the hierarchy you want to enable, as follows:
For GL Chartfield, clear the Inactive checkbox for PeopleSoft - Populate Non Business Unit Related Hierarchies.
For Business Unit, clear the Inactive checkbox for PeopleSoft – Populate Business Unit Related Hierarchies.
Click Save.
Display the Subject Areas tab.
For each Financials subject area, do the following:
Select the Financials subject area in the Name list.
For example, Cost of Goods Sold, General Ledger, Payables and so on.
In the bottom pane, display the Configuration Tags tab.
Clear the Inactive checkbox of the hierarchy you want to enable, as follows:
- For GL Chartfield, clear the Inactive checkbox for PeopleSoft - Populate Non Business Unit Related Hierarchies.
- For Business Unit, clear the Inactive checkbox for PeopleSoft – Populate Business Unit Related Hierarchies.
Save the changes.
Click Assemble to redesign the subject areas that you have modified.
Display the Execute tab at the left hand side.
For every execution plan which contains the subject areas that you modified, rebuild those execution plans.
Save the execution plans that you modified.
Because every tree in PeopleSoft is an individual item, each tree is extracted into the data warehouse separately; that is, you must create a new mapping for each new tree that you want to extract.
Oracle Business Intelligence Applications provides a set of example mappings that extract the trees related to the Internal Organization Hierarchy and the General Ledger Chartfield Hierarchy. You can create your own extractions by copying these mappings and modifying the naming parameters.
This section includes an example set of mappings for General Ledger Chartfield extraction. This example extracts a GL Account tree named 'ACCTROLLUP' with SETID 'SHARE' from the PeopleSoft application. To get other GL Chartfield related trees to be extracted, you need to customize SDE_PSFT_Stage_GLHierarchy_AccountHierarchy and SDE_PSFT_Stage_GLHierarchy_GetAccountChartField.
You do not need to change or copy the rest of the mappings. An example customization is provided below.
Table 9-14 General Ledger Chartfield Hierarchy Mappings
Execution Sequence | Mapping Name | Needs Customization? |
---|---|---|
1 | SDE_PSFT_Stage_GLHierarchy_AccountHierarchy_Extract | Y |
1 | SDE_PSFT_Stage_GLHierarchy_GetAccountChartField | Y |
2 | SDE_PSFT_Stage_GLHierarchy_GetHierarchyLevel | N |
2 | SDE_PSFT_Stage_GLHierarchy_DeriveRange | N |
3 | SDE_PSFT_Stage_GLHierarchy_Flatten | N |
4 | SDE_PSFT_GLHierarchy_Derive | N |
For example, if a user wants to extract a tree named 'DEPARTMENT_HIERARCHY' with SETID 'XYZ', which uses DEPT_TBL as its structure's Record Name and DEPTID as the Field Name (that is, it builds up the hierarchy for the Department chartfield from General Ledger), the following customization steps are required:
In Informatica PowerCenter Designer, create new mappings by copying the existing SDE_PSFT_Stage_GLHierarchy_AccountHierarchy_Extract and SDE_PSFT_Stage_GLHierarchy_GetAccountChartField mappings.
Name the new mappings SDE_PSFT_Stage_GLHierarchy_DepartmentHierarchy_Extract and SDE_PSFT_Stage_GLHierarchy_GetDepartmentChartField.
Open the mapping SDE_PSFT_Stage_GLHierarchy_DepartmentHierarchy_Extract and edit the settings as follows:
In SDE_PSFT_Stage_GLHierarchy_DepartmentHierarchy_Extract, open Source Definition PSFT_TREE_VERTICAL_FLATTENING.
Under the 'Properties' tab, change the Tree Name to 'DEPARTMENT_HIERARCHY', and Set ID to 'XYZ'.
Open the mapping SDE_PSFT_Stage_GLHierarchy_GetDepartmentChartField and edit the settings as follows:
In the SDE_PSFT_Stage_GLHierarchy_GetDepartmentChartField mapping, change the source table to 'DEPT_TBL' (also change the Source Qualifier). Connect SETID, DEPTID, EFFDT, EFF_STATUS, and DESCR from the Source Qualifier to the corresponding fields in the next Filter, 'Filter_GLACCT' (connect DEPTID to CHARTFIELD).
In the last Expression transformation of this mapping ('Exp_TMPGLACCT'), change the SOURCE_TABLE_NAME expression field to 'DEPT_TBL' (the same name as the source table).
Save all your changes in the Informatica folder.
Add Corresponding Sessions and Workflows in Informatica PowerCenter Workflow Manager, as follows:
In Informatica PowerCenter Workflow Manager, locate the appropriate PeopleSoft folder (for example, SDE_PSFT_xx_Adaptor).
Open Task Developer and create new tasks for the mappings you just created.
Referring to the parameters in the original sample sessions (see the table below; leave the other parameters set to their default values), modify the newly created sessions.
Parameter Configurations for SDE_PSFT_Stage_GLHierarchy_DepartmentHierarchy_Extract Session are shown in the table below:
Table 9-15 Parameter Configurations for the SDE_PSFT_Stage_GLHierarchy_DepartmentHierarchy_Extract Session
Tab Name | Parameter Name | Value |
---|---|---|
Properties | Parameter Filename | $PMSourceFileDir\parameterfileOLTP.txt |
Properties | $Source connection value | $DBConnection_OLTP |
Properties | $Target connection value | $DBConnection_OLAP |
Properties | DTM buffer size | 32000000 |
Config Object | Default buffer block size | 128000 |
Config Object | Stop on errors | 1 |
Config Object | Override tracing | Normal |
Mapping | Connection W_PSFT_TREE_TMP - Relational | $DBConnection_OLAP |
Mapping | Connection SQ_PSFT_TREE_VERTICAL_FLATTENING - Application | $DBConnection_OLTP |
Mapping | Connection W_PSFT_TREE_TMP1 - Relational | $DBConnection_OLAP |
Mapping | Connection Lkp_Tree_ChartField - Relational | $DBConnection_OLTP |
Mapping | SDE_PSFT_Stage_GLHierarchy_DepartmentHierarchy_Extract – Session $Source Connection Value | $DBConnection_OLTP |
Mapping | SDE_PSFT_Stage_GLHierarchy_DepartmentHierarchy_Extract – Session $Target Connection Value | $DBConnection_OLTP |
Note:
The SDE_PSFT_Stage_GLHierarchy_GetDepartmentChartField session parameters are similar to these.
Save your changes.
In Informatica PowerCenter Workflow Manager, create a Workflow with the same name as Session.
Drag the newly created session to the panel.
Connect the start with the session.
Save and Check-in all your changes in Informatica PowerCenter Designer and Informatica PowerCenter Workflow Manager.
Re-configure the DAC settings, as follows:
Login into the DAC, open your container, and display the Tasks tab.
Add the two workflows that you created as new tasks.
For information about what values to set task property values to, refer to the tasks SDE_PSFT_Stage_GLHierarchy_AccountHierarchy_Extract and SDE_PSFT_Stage_GLHierarchy_GetAccountChartField.
Right click on each task and select 'Synchronize tasks'.
The Synchronize tasks process reads in the source table and target table information for these tasks from the Informatica repository. To see this information, display the 'Source Tables' tab and 'Target Tables' tab for these tasks.
Add your tasks to the Configuration Tags: navigate to the Configuration Tags panel (click 'Design', then choose 'Configuration Tags'), select the 'PeopleSoft - Populate Non Business Unit Related Hierarchies' item, and select the 'Tasks' tab in the panel below. Click the 'Add/Remove' button, query the new tasks you just entered into the DAC, select them, and add them to the right panel. Click OK, then click Save.
Reassemble all the financial-related subject areas: click the 'Design' button in the DAC toolbar, select the 'Subject Areas' tab, and for each of the financial-related subject areas (Cost of Goods Sold, General Ledger, Payables, Receivables, Revenue), right-click it and choose 'Assemble'.
Rebuild the related execution plans: click the 'Execute' button in the DAC toolbar, select the 'Execution Plans' tab, find the execution plan for your PeopleSoft Financials application (for example, Financials_PeopleSoft 8.8), right-click it, and choose 'Build'.
This section contains additional configuration steps that apply to Universal, and contains the following topics:
This configuration is required only if you are implementing Oracle Profitability Analytics and you want to allocate your expenses by the product or customer dimensions. The out-of-the-box adapters of Oracle Business Intelligence Applications do not capture the miscellaneous costs and expenses associated with generating revenue from a customer or a product (for example, marketing campaign expenses). Because the way these miscellaneous expenses are allocated across products or customers varies widely between companies, this type of allocation is not performed out of the box; you must supply the results of the allocations through the Universal adapter.
The Customer Costs Lines (W_CUSTOMER_COST_LINE_F) table stores cost details by customer. The total cost by customer includes both the direct cost that is captured in the financial system and the allocated costs that are captured in the costing system. The table also stores the source of the allocations. For example, Sales and Marketing costs are not captured as direct costs by customer. However, at a later point in time, the costs could be allocated from a combination of General Ledger accounts or Cost Centers to various customers. This table stores the source Cost Centers and General Ledger accounts as well. Customer costs can be categorized as Sales, Marketing, Service, and Operating costs, and these can be further broken down into Salaries, Events, and Promotions. Oracle Business Analytics Warehouse provides a set of categories, and these can be changed depending on the user needs and the cost components by which customers are tracked. The actual cost lines are also tracked in this table. Apart from Customers, there are a number of other dimensions by which the costs can be tracked, such as Sales Region, Sales Geography, Company, Business Area, and the associated hierarchies. The different cost lines, such as Sales, Marketing, and Operating costs, have different details, and many dimensions are included in this table. Some of the dimension keys are not applicable to certain cost components. It is important that an organization identifies the dimension keys that are used for Customer Cost Analysis for the various components.
The Product Costs Lines (W_PRODUCT_COST_LINE_F) table stores cost details by product. The total cost by product includes both the direct cost that is captured in the financial system by product and the allocated costs that are captured in the costing system. The table also stores the source of the allocations. For example, Sales and Marketing costs are not captured as direct costs by product. However, at a later point in time, the costs are allocated from a combination of General Ledger accounts and Cost Centers to various products. This table also stores the source Cost Centers and General Ledger accounts. Product costs can be categorized as Sales, Marketing, Service, and Operating costs, and these can be further broken down into Salaries, Events, and Promotions. Oracle Business Analytics Warehouse provides a set of common categories, and these can be changed depending on the user needs and the cost components by which products are tracked. The actual cost lines are also tracked in this table. Apart from products, there are a number of other dimensions by which the costs are tracked, such as Sales Region, Sales Geography, Company, Business Area, and the associated hierarchies. The different cost lines, such as Sales, Marketing, and Operating costs, have different details, and many dimensions are included in this table. Some of the dimension keys are not applicable to certain cost components. It is important that an organization identifies the dimension keys that are used for Product Cost Analysis for the various components.
In Oracle's Profitability Analytics, the Customer Costs Lines and Product Costs Lines fact tables store the costing and expenses for the Profitability functional area. You need to use these tables with General Ledger Revenue and General Ledger COGS fact tables.
The General Ledger Revenue and General Ledger COGS fact tables are populated by the Oracle EBS 11i source system, but the Customer Costs Lines and Product Costs Lines fact tables are populated by the universal source system.
To load the Customer Costs Lines and Product Costs Lines tables
Open the customer_cost_line.csv file in the $pmserver\Sample Universal Source Files folder.
Insert a record into the file for each customer costing transaction you want to load into the Customer Cost fact table.
For the DATASOURCE_NUM_ID column, provide the same source identifier value as in the file_parameters_ora11i.csv file.
Save the file in the $pmserver\srcfiles directory, then close the file.
Repeat steps 1 to 3 for the file_product_cost.csv file.
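As an illustration, the snippet below writes a minimal customer cost record with Python's csv module. The column subset shown is hypothetical; use the header row of the sample file shipped with your release as the authoritative layout, and make sure DATASOURCE_NUM_ID matches the value in file_parameters_ora11i.csv:

```python
import csv

# Hypothetical column subset for illustration only; the real file has
# the full set of columns defined in the sample Universal source file.
fieldnames = ["DATASOURCE_NUM_ID", "INTEGRATION_ID", "CUSTOMER_ID", "COST_AMT"]

with open("customer_cost_line.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerow({
        "DATASOURCE_NUM_ID": 4,        # must match file_parameters_ora11i.csv
        "INTEGRATION_ID": "CC~1001",   # example identifier
        "CUSTOMER_ID": "1001",
        "COST_AMT": 250.00,
    })
```

One row is written per customer costing transaction; the same approach applies to file_product_cost.csv.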
Oracle Business Intelligence Applications enables you to trace a GL Journal to the Subledger transaction that created that journal. This ability (or drill down) is achieved through the 'Navigation' feature in Oracle BI Answers.
To set up drill down in Oracle BI Answers from General Ledger to Subledger:
1. Create your subledger request from the 'Financials - AP Transactions' or 'Financials - AR Transactions' catalog, as applicable.
2. In your request, add a filter on the column 'GL Journal ID' under the 'Document Details' folder and set the operator of the filter to 'Is Prompted'.
3. Build your GL Journal request from the 'Financials - GL Detail Transactions' catalog.
4. To your request, add the column 'GL Journal ID' under the 'Document Details' folder.
5. Navigate to the Column Properties of this column, and set the Value Interaction property in the Column Format tab to 'Navigate'.
6. Add a navigation target and set the target location to the subledger request you created earlier.
If you are implementing Oracle Profitability Analytics and would like to view the budget data in your dashboards, you need to import budget data into your data warehouse, as described in this section.
About Configuring Universal Source Files
The following tables describe the columns in the universal source CSV files file_budget.csv and file_acct_budget.csv, their data types, and how to populate them where applicable.
The table below shows the structure of the file_budget.csv file. The records in file_budget.csv will be loaded into the W_BUDGET_D table.
Table 9-16 Universal Source for Budget Fact (file_budget.csv)
| Column Name | Datatype | Size | Description |
|---|---|---|---|
| BUDGET_NAME | string | 80 | Budget name. |
| BUDGET_VERSION | string | 30 | Budget version. |
| BUDGET_STATUS | string | 30 | Budget status. |
| BUDGET_TYPE | string | 30 | Budget type. |
| CREATED_BY_ID | string | 80 | ID of the user who created the record. Populate with INTEGRATION_ID from W_USER_D. |
| CHANGED_BY_ID | string | 80 | ID of the user who last changed the record. Populate with INTEGRATION_ID from W_USER_D. |
| CREATED_ON_DT | string | 14 | Created date. |
| CHANGED_ON_DT | string | 14 | Changed date. Used for updating an existing record in the warehouse. Increase the date if you want to update the record. If a record with the same INTEGRATION_ID already exists in the target table W_BUDGET_D, the load process compares the CHANGED_ON_DT values of this record and the record in W_BUDGET_D. If this record's CHANGED_ON_DT is later than the record in W_BUDGET_D, the load process updates the record in W_BUDGET_D; otherwise it ignores this record, and no update or insertion occurs. If there is no matching record in W_BUDGET_D with the same INTEGRATION_ID, the load process inserts this record into W_BUDGET_D. |
| AUX1_CHANGED_ON_DT | string | 14 | |
| AUX2_CHANGED_ON_DT | string | 14 | |
| AUX3_CHANGED_ON_DT | string | 14 | |
| AUX4_CHANGED_ON_DT | string | 14 | |
| DELETE_FLG | string | 1 | |
| DATASOURCE_NUM_ID | number | 10 | A number for your data source. Populate the same DATASOURCE_NUM_ID as your main source application. |
| INTEGRATION_ID | string | 80 | A unique identifier for the record. |
| TENANT_ID | string | 80 | |
| X_CUSTOM | string | 10 | |
The table below shows the structure of the file_acct_budget.csv file. The records in file_acct_budget.csv will be loaded into the W_ACCT_BUDGET_F table.
Table 9-17 Universal Source for Budget Fact (file_acct_budget.csv)
Column Name | Datatype | Size | Description |
---|---|---|---|
GL_ACCOUNT_ID |
string |
80 |
GL Account identifier. Populate with integration_id from w_gl_account_d. |
PROJECT_ID |
string |
80 |
|
PRODUCT_ID |
string |
80 |
Product identifier. Populate with integration_id from w_product_d. |
COMPANY_ORG_ID |
string |
80 |
Company Org identifier. Populate with integration_id from w_int_org_d where company_flg = Y. |
BUDG_BUSN_AREA_ORG_ID |
string |
80 |
Company Org identifier. Populate with integration_id from w_int_org_d where business_area_flg = Y. |
BUDG_FIN_AREA_ORG_ID |
string |
80 |
Company Org identifier. Populate with integration_id from w_int_org_d where fin_area_flg = Y. |
BUDG_CTRL_AREA_ORG_ID |
string |
80 |
Company Org identifier. Populate with integration_id from w_int_org_d where ctrl_area_flg = Y. |
COST_CENTER_ID |
string |
80 |
Cost Center identifier. Populate with integration_id from w_cost_center_d. |
PROFIT_CENTER_ID |
string |
80 |
Profit Center identifier. Populate with integration_id from w_profit_center_d. |
BUDGET_ID |
string |
80 |
Populate with the value from integration_id in file_budget.csv |
POSTED_ON_DT |
string |
14 |
A date on which this transaction can be reported. |
PERIOD_BEGIN_DT |
string |
14 |
|
PERIOD_END_DT |
string |
14 |
Populate with your budget period's end date. If your budget is monthly, populate with the month end date. |
BUDGET_DOC_AMT |
number |
22 |
Budget amount in document currency. |
BUDGET_LOC_AMT |
number |
22 |
Budget amount in local currency. |
BUDGET_GRP_AMT |
number |
22 |
|
DOC_CURR_CODE |
string |
30 |
Document currency code. |
LOC_CURR_CODE |
string |
30 |
Local currency code. |
GRP_CURR_CODE |
string |
30 |
|
CREATED_BY_ID |
string |
80 |
ID of created by user. Populate with Integration_ID from w_user_d. |
CHANGED_BY_ID |
string |
80 |
ID of changed by user. Populate with Integration_ID from w_user_d. |
CREATED_ON_DT |
string |
14 |
Created date. |
CHANGED_ON_DT |
string |
14 |
Changed date. Used for updating an existing record in the warehouse. Increase the date if you want to update the record. If a record with the same integration_ID exists in the target table W_ACCT_BUDGET_F already, then the load process will compare the CHANGED_ON_DT values between this record and the record in W_ACCT_BUDGET_F. If this record's CHANGED_ON_DT is later than the record in W_ACCT_BUDGET_F, then the load process will perform an update against the record in W_ACCT_BUDGET_F; otherwise it'll ignore this record, no update or insertion will occur. If there's no matching record in W_ACCT_BUDGET_F with the same integration_ID, then the load process will insert this record into W_ACCT_BUDGET_F. |
AUX1_CHANGED_ON_DT |
string |
14 |
|
AUX2_CHANGED_ON_DT |
string |
14 |
|
AUX3_CHANGED_ON_DT |
string |
14 |
|
AUX4_CHANGED_ON_DT |
string |
14 |
|
DELETE_FLG |
string |
1 |
|
DATASOURCE_NUM_ID |
number |
10 |
A number for your data source. Populate the same datasource_num_id as your main source application. |
INTEGRATION_ID |
string |
80 |
A unique identifier for the record. |
TENANT_ID |
string |
80 |
|
X_CUSTOM |
string |
10 |
Note: Date columns must be populated in the CSV file as a number in YYYYMMDDHH24MISS format.
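For example, a date value in the required 14-character format can be produced like this (a small illustrative helper in Python, not part of the product):

```python
from datetime import datetime

def to_warehouse_date(dt):
    """Format a datetime as the 14-character YYYYMMDDHH24MISS numeric
    string expected in the universal source CSV date columns."""
    return dt.strftime("%Y%m%d%H%M%S")

# Example: 31 January 2010, 23:59:59
print(to_warehouse_date(datetime(2010, 1, 31, 23, 59, 59)))  # 20100131235959
```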
Use the following table to understand how the integration_id (key) of some of the key dimensions is constructed for the Oracle EBS source system. You can use this information to populate the dimension foreign key identifiers in the above universal source CSV files for the budget fact, if you have to use the budget fact in conjunction with dimensions populated from Oracle EBS.
Table 9-18 Populating the integration_id fields in Oracle EBS Source Systems
| Field | How to populate |
|---|---|
| GL_ACCOUNT_ID (w_gl_account_d) | ccid |
| COMPANY_ORG_ID (w_int_org_d) | No need to populate; will be calculated based on the GL Account ID. |
| COST_CENTER_ID (w_cost_center_d) | No need to populate; will be calculated based on the GL Account ID. |
| PROFIT_CENTER_ID (w_profit_center_d) | No need to populate; will be calculated based on the GL Account ID. |
| LEDGER_ID (w_ledger_d) | For Oracle 11i, populate with the set of books ID. For Oracle R12, populate with the ledger ID. |
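The rules in Table 9-18 can be expressed as a small mapping. The function below is an illustrative sketch (the name and dictionary layout are invented for this example); ccid refers to the EBS code combination ID:

```python
def ebs_budget_keys(ccid, ledger_or_sob_id):
    """Sketch: derive the universal-source foreign key fields for a budget
    fact row when the dimensions are populated from Oracle EBS, following
    the rules in Table 9-18."""
    return {
        "GL_ACCOUNT_ID": str(ccid),          # the EBS code combination ID (ccid)
        "COMPANY_ORG_ID": "",                # calculated from the GL Account ID by the load
        "COST_CENTER_ID": "",                # calculated from the GL Account ID by the load
        "PROFIT_CENTER_ID": "",              # calculated from the GL Account ID by the load
        "LEDGER_ID": str(ledger_or_sob_id),  # set of books ID (11i) or ledger ID (R12)
    }

print(ebs_budget_keys(10234, 101))
```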
How to import budget data into your data warehouse
1. Populate the file_budget.csv and file_acct_budget.csv files with your budget data. The CSV files are located in the OracleBI\dwrep\Informatica\Sample Universal Source Files\ directory on the Oracle Business Intelligence Applications installation machine. Refer to the tables above for details of how to populate these files.
2. In the DAC Client, create a new execution plan using the subject area 'Financials – Budget' in the 'Universal' container.
3. Run the new execution plan that you created in the previous step. Note: This execution plan must be run after the regular execution plan that populates the data warehouse for the other subject areas has completed.
4. To load new budget data or changes to existing budget data, repeat steps 1 and 3 as needed, either to load the budget for the next fiscal period or to correct budget data that is already loaded.
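Step 1 above can be sketched as follows. This is a minimal illustration: only a handful of the documented columns are shown, the values are invented, and the files are written to the current directory rather than the actual source file location. Note how BUDGET_ID in file_acct_budget.csv carries the INTEGRATION_ID of the matching record in file_budget.csv, and dates use the YYYYMMDDHH24MISS format.

```python
import csv
from datetime import datetime

now = datetime(2010, 1, 15, 12, 0, 0).strftime("%Y%m%d%H%M%S")

# One budget header record for W_BUDGET_D (subset of columns, example values).
budget = {
    "BUDGET_NAME": "FY2010 Operating Budget",
    "BUDGET_VERSION": "1",
    "CHANGED_ON_DT": now,                     # bump this to update an existing record
    "DATASOURCE_NUM_ID": "4",                 # same value as the main source application
    "INTEGRATION_ID": "BUDGET_FY2010_V1",
}

# One budget fact record for W_ACCT_BUDGET_F, linked to the header above.
fact = {
    "GL_ACCOUNT_ID": "10234",                 # integration_id from w_gl_account_d
    "BUDGET_ID": budget["INTEGRATION_ID"],    # link back to file_budget.csv
    "PERIOD_END_DT": "20100131000000",        # month end date for a monthly budget
    "BUDGET_LOC_AMT": "250000",
    "CHANGED_ON_DT": now,
    "DATASOURCE_NUM_ID": "4",
    "INTEGRATION_ID": "BUDGET_FY2010_V1~10234~201001",
}

for name, row in (("file_budget.csv", budget), ("file_acct_budget.csv", fact)):
    with open(name, "w", newline="") as f:
        w = csv.DictWriter(f, fieldnames=list(row))
        w.writeheader()
        w.writerow(row)
```

In practice you would populate every column listed in Tables 9-16 and 9-17 (leaving optional ones empty), not just the subset shown here.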