Oracle® Business Intelligence Applications Configuration Guide for Informatica PowerCenter Users
Version 7.9.6.1

Part Number E14844-01

6 Configuring Oracle Supply Chain and Order Management Analytics

This section describes how to configure Oracle Supply Chain and Order Management Analytics.

6.1 Overview of Oracle Supply Chain and Order Management Analytics

The Oracle Supply Chain and Order Management Analytics application enables you to analyze orders, invoices, backlogs, and inventory. Sales orders are the entry point for the sales process. Invoices are the exit point from the fulfillment process. Backlogs are points of congestion in your fulfillment process. This coverage includes insight into which items are booked, backlogged, and invoiced, so that you can evaluate the sales performance of individual sales representatives or departments. The Oracle Supply Chain and Order Management Analytics application also provides information on inventory transactions, inventory balances, bills of materials, and customer and supplier returns. This enables companies to correlate inventory-level trends with sales performance, improve cost exposure, increase turnover through inventory-level reduction and increased velocity, deploy inventory at the right place at the right time, and better understand customer and supplier returns to maintain quality.

The Oracle Supply Chain and Order Management Analytics application also requires post-load processing mappings to populate its aggregate and derived tables.

6.2 Configuration Required Before A Full Load for Oracle Supply Chain and Order Management Analytics

This section contains configuration steps that you need to perform on Oracle Supply Chain and Order Management Analytics before you do a full data load.

6.2.1 Configuration Steps for Oracle Supply Chain and Order Management Analytics for All Source Systems

This section contains configuration steps that apply to all source systems.

Note:

For configuration steps that apply to all analytics modules (for example, Oracle Financial Analytics, Oracle HR Analytics, or Oracle Sales Analytics), see Chapter 3, "Configuring Common Areas and Dimensions".

6.2.1.1 How to Disable Project Dimensions

If you have not licensed or do not want to implement Oracle Projects, you can disable Project dimensions.

To disable project dimensions

  1. In DAC, display the Design view and select the appropriate custom container from the drop-down list.

  2. Display the Configuration Tags tab.

  3. Query for the tag Enable Project Dimensions.

  4. Display the Subject Areas subtab.

  5. Select the Inactive check box for the Subject Area named 'Supply Chain - Inventory Transactions'.

  6. In the upper pane, display the Subject Areas tab, and select the Subject Area named 'Projects'.

  7. In the lower pane, display the Configuration Tags tab.

  8. Select the Inactive check box for the Configuration Tag named 'Enable Project Dimensions'.

6.2.2 Configuration Steps for Oracle Supply Chain and Order Management Analytics for Oracle EBS

This section contains configuration steps required before you do a full data load that apply to Oracle EBS.

Note:

If you plan to consume analytics on Bill of Materials (BOM), you need to include the DAC Subject Area "Supply Chain - BOM Items" in your ETL Execution Plan. However, the ETL process for extracting BOM data from Oracle EBS into the OBAW data warehouse is not designed for very high volumes of BOM data. There are known performance issues with initial ETL load for very large BOM data size (for example, more than 100,000 rows of data in EBS BOM). Oracle recommends that you restrict the BOM data size in the initial ETL load by applying appropriate filters (for example, On time or Inventory Organization).

6.2.2.1 About Configuring Domain Values and CSV Worksheet Files for Oracle Supply Chain and Order Management Analytics

Table 6-1 below lists the CSV worksheet files and the domain values for Oracle Supply Chain and Order Management Analytics that are located in the $PMServer\LkpFiles folder (for example, \PowerCenter8.6.x\server\infa_shared\LkpFiles).

Table 6-1 Domain Values and CSV Worksheet Files for Oracle Supply Chain and Order Management Analytics

Worksheet File Name: domainValues_InvoiceTypes_ora11i.csv
Description: Lists the Invoice Document Type column and the corresponding domain values for the Oracle 11i or Oracle R12 application. For more information about updating the values in this file, see Section 6.2.2.2, "How to Configure Invoice Type Domain Values".
Session: SDE_ORA_TransactionTypeDimension_SalesInvoiceLines

Worksheet File Name: domainValues_PickTypes_ora11i.csv
Description: Lists the Picking Document Type column and the corresponding domain values for the Oracle 11i or Oracle R12 application. For more information about updating the values in this file, see Section 6.2.2.3, "How to Configure Pick Types Domain Values".
Session: SDE_ORA_TransactionTypeDimension_SalesPickLines

Worksheet File Name: domainValues_OrderTypes_ora11i.csv
Description: Lists the Order Document Type column and the corresponding domain values for the Oracle 11i or Oracle R12 application. For more information about updating the values in this file, see Section 6.2.2.4, "How to Configure Order Types Domain Values".
Session: SDE_ORA_TransactionTypeDimension_SalesOrderLines

Worksheet File Name: domainValues_PickStatus_ora11i.csv
Description: Lists the Picking Status Code and the Status Desc columns, and the corresponding domain values for the Oracle 11i or Oracle R12 application. For more information about updating the values in this file, see Section 6.2.2.5, "How to Configure Pick Status Domain Values".
Session: SDE_ORA_StatusDimension_SalesPickLines

Worksheet File Name: domainValues_PayMethodCode_ora.csv
Description: Lists the method code column and the corresponding domain value for the application.
Session: SDE_ORA_PaymentMethodDimension

Worksheet File Name: domainValues_InvoiceStatus_ora11i.csv
Description: Lists the Invoice Status Code and the Status Desc columns, and the corresponding domain values for the Oracle 11i or Oracle R12 application. For more information about updating the values in this file, see Section 6.2.2.6, "How to Configure Invoice Status Domain Values".
Session: SDE_ORA_StatusDimension_SalesInvoiceLine

Worksheet File Name: domainValues_OrderOverallStatus_ora11i.csv
Description: Lists the Order Status Code column and the corresponding domain values for the Oracle 11i or Oracle R12 application. For more information about updating the values in this file, see Section 6.2.2.7, "How to Configure Order Overall Status Domain Values".
Session: SDE_ORA_StatusDimension_SalesOrderLineCycle


For general information about domain values with CSV worksheet files, see the section about domain values in the section entitled "Customizing the Oracle Business Analytics Warehouse" in Oracle Business Intelligence Applications Installation Guide for Informatica PowerCenter Users.

Note: Where SQL code is provided in the following sections, you might need to change the language specified in the FND_LOOKUP_VALUES.LANGUAGE = 'US' condition to match your environment.

6.2.2.2 How to Configure Invoice Type Domain Values

This section explains how to configure Invoice Type domain values using the domainValues_InvoiceTypes_ora11i.csv file.

To configure Invoice Type domain values

  1. Identify the Invoice Types in your Oracle 11i source system by using the following SQL:

    SELECT DISTINCT RA_CUST_TRX_TYPES_ALL.TYPE

    FROM RA_CUST_TRX_TYPES_ALL

    ORDER BY 1;

  2. Using a text editor, open the domainValues_InvoiceTypes_ora11i.csv file, located in the $PMServer\LkpFiles directory (for example, \PowerCenter8.6.x\server\infa_shared\LkpFiles).

  3. Copy the TYPE column to the XACT_TYPE_CODE column in the file.

    The data must be copied starting from the second line. The first line is the column header.

  4. Map each Transaction Type Code to one domain value.

    For more information on Transaction Type Code domain values, see Oracle Business Analytics Warehouse Data Model Reference.

  5. Save and close the file.
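The copy-and-map steps above can be sketched in a few lines of Python. This is a hypothetical illustration only: the domain-value column name (W_XACT_TYPE_CODE) and the sample mappings are assumptions, not the packaged content of the file. Consult Oracle Business Analytics Warehouse Data Model Reference for the valid domain values.

```python
import csv

# Distinct TYPE values returned by the SQL in step 1 (sample values only)
source_types = ["CM", "DM", "INV"]

# Your chosen mapping of each Transaction Type Code to ONE domain value
# (illustrative values; see the Data Model Reference for the valid set)
domain_map = {"CM": "CREDIT MEMO", "DM": "DEBIT MEMO", "INV": "INVOICE"}

with open("domainValues_InvoiceTypes_ora11i.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["XACT_TYPE_CODE", "W_XACT_TYPE_CODE"])  # header on line 1
    for t in source_types:                                   # data from line 2 on
        writer.writerow([t, domain_map[t]])
```

The same pattern applies to the other domain-value CSV files described in the sections that follow: line 1 is the column header, and each source code is mapped to exactly one domain value starting on line 2.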

6.2.2.3 How to Configure Pick Types Domain Values

This section explains how to configure Pick Types domain values using the domainValues_PickTypes_ora11i.csv file.

To configure Pick Types domain values

  1. Identify the Pick Types in your Oracle 11i source system.

  2. Using a text editor open the domainValues_PickTypes_ora11i.csv file, located in the $PMServer\LkpFiles directory (for example, \PowerCenter8.6.x\server\infa_shared\LkpFiles).

  3. Put 'STANDARD' in the XACT_TYPE_CODE column in the file.

    The data must be copied starting from the second line. The first line is the column header.

  4. Map each Transaction Type Code to one domain value.

    For more information on Transaction Type Code domain values, see Oracle Business Analytics Warehouse Data Model Reference.

  5. Save and close the file.

6.2.2.4 How to Configure Order Types Domain Values

This section explains how to configure Order Types domain values using the domainValues_OrderTypes_ora11i.csv file.

To configure Order Types domain values

  1. Identify the Order Types in your Oracle 11i source system by using the following SQL:

    SELECT DISTINCT FND_LOOKUP_VALUES.LOOKUP_CODE

    FROM FND_LOOKUP_VALUES

    WHERE FND_LOOKUP_VALUES.VIEW_APPLICATION_ID = 660

    AND FND_LOOKUP_VALUES.LANGUAGE = 'US'

    AND FND_LOOKUP_VALUES.LOOKUP_TYPE = 'LINE_CATEGORY'

    ORDER BY 1;

  2. Using a text editor, open the domainValues_OrderTypes_ora11i.csv file, located in $PMServer\LkpFiles directory (for example, \PowerCenter8.6.x\server\infa_shared\LkpFiles).

  3. Copy the LOOKUP_CODE column to the XACT_TYPE_CODE column in the file.

    The data must be copied starting from the second line. The first line is the column header.

  4. Map each Transaction Type Code to one domain value.

    For more information on Transaction Type Code domain values, see Oracle Business Analytics Warehouse Data Model Reference.

  5. Save and close the file.

6.2.2.5 How to Configure Pick Status Domain Values

This section explains how to configure Pick Status domain values using the domainValues_PickStatus_ora11i.csv file.

To configure Pick Status domain values

  1. Identify the Pick Statuses in your Oracle 11i source system by using the following SQL:

    SELECT DISTINCT FND_LOOKUP_VALUES.LOOKUP_CODE

    FROM FND_LOOKUP_VALUES

    WHERE FND_LOOKUP_VALUES.LOOKUP_TYPE = 'PICK_STATUS'

    AND FND_LOOKUP_VALUES.LANGUAGE = 'US'

    AND FND_LOOKUP_VALUES.VIEW_APPLICATION_ID = 665

    AND FND_LOOKUP_VALUES.SECURITY_GROUP_ID = 0

    ORDER BY 1;

  2. Using a text editor, open the domainValues_PickStatus_ora11i.csv file, located in the $PMServer\LkpFiles directory (for example, \PowerCenter8.6.x\server\infa_shared\LkpFiles).

  3. Copy the LOOKUP_CODE column to the STATUS_CODE column in the file.

    The data must be copied starting from the second line. The first line is the column header.

  4. Map each Status Code to one domain value.

    For more information on Status Code domain values, see Oracle Business Analytics Warehouse Data Model Reference.

  5. Save and close the file.

6.2.2.6 How to Configure Invoice Status Domain Values

This section explains how to configure Invoice Status domain values using the domainValues_InvoiceStatus_ora11i.csv file.

To configure Invoice Status domain values

  1. Identify the Invoice Statuses in your Oracle 11i source system by using the following SQL:

    SELECT DISTINCT FND_LOOKUP_VALUES.LOOKUP_CODE

    FROM FND_LOOKUP_VALUES

    WHERE FND_LOOKUP_VALUES.LOOKUP_TYPE = 'INVOICE_TRX_STATUS'

    AND FND_LOOKUP_VALUES.LANGUAGE = 'US'

    AND FND_LOOKUP_VALUES.VIEW_APPLICATION_ID = 222

    AND FND_LOOKUP_VALUES.SECURITY_GROUP_ID = 0

    ORDER BY 1;

  2. Using a text editor, open the domainValues_InvoiceStatus_ora11i.csv file, located in the $PMServer\LkpFiles directory (for example, \PowerCenter8.6.x\server\infa_shared\LkpFiles).

  3. Copy the LOOKUP_CODE column to the STATUS_CODE column in the file.

    The data must be copied starting from the second line. The first line is the column header.

  4. Map each Status Code to one domain value.

    For more information on Status Code domain values, see Oracle Business Analytics Warehouse Data Model Reference.

  5. Save and close the file.

6.2.2.7 How to Configure Order Overall Status Domain Values

This section explains how to configure Order Overall Status domain values using the domainValues_OrderOverallStatus_ora11i.csv file.

To configure Order Overall Status domain values

  1. Identify the Order Overall Statuses in your Oracle 11i source system by using the following SQL:

    SELECT DISTINCT FND_LOOKUP_VALUES.LOOKUP_CODE

    FROM FND_LOOKUP_VALUES

    WHERE FND_LOOKUP_VALUES.LOOKUP_TYPE = 'LINE_FLOW_STATUS'

    AND FND_LOOKUP_VALUES.LANGUAGE = 'US'

    AND FND_LOOKUP_VALUES.VIEW_APPLICATION_ID = 660

    AND FND_LOOKUP_VALUES.SECURITY_GROUP_ID = 0

    ORDER BY 1;

  2. Using a text editor, open the domainValues_OrderOverallStatus_ora11i.csv file, located in the $PMServer\LkpFiles directory (for example, \PowerCenter8.6.x\server\infa_shared\LkpFiles).

  3. Copy the LOOKUP_CODE column to the STATUS_CODE column in the file.

    The data must be copied starting from the second line. The first line is the column header.

  4. Map each Status Code to one domain value.

    For more information on Status Code domain values, see Oracle Business Analytics Warehouse Data Model Reference.

  5. Save and close the file.

6.2.2.8 How to Configure Pay Method Domain Values

This section explains how to configure Pay Method domain values using the domainValues_PayMethodCode_ora.csv file.

To configure Pay Method domain values

  1. Identify the Pay Methods in your Oracle 11i source system by using the following SQL:

    SELECT DISTINCT FND_LOOKUP_VALUES.LOOKUP_CODE

    FROM FND_LOOKUP_VALUES

    WHERE LOOKUP_TYPE = 'PAYMENT TYPE'

    AND VIEW_APPLICATION_ID = 660

    AND LANGUAGE = 'US'

    AND FND_LOOKUP_VALUES.SECURITY_GROUP_ID = 0

    ORDER BY 1;

  2. Using a text editor, open the domainValues_PayMethodCode_ora.csv file, located in the $PMServer\LkpFiles directory (for example, \PowerCenter8.6.x\server\infa_shared\LkpFiles).

  3. Copy the LOOKUP_CODE column to the METHOD_CODE column in the file.

    The data must be copied starting from the second line. The first line is the column header.

  4. Map each Method Code to one domain value.

    For more information on Method Code domain values, see Oracle Business Analytics Warehouse Data Model Reference.

  5. Save and close the file.

6.2.2.9 How to Configure Movement Types Domain Values

This section explains how to configure Movement Types domain values.

  1. Identify the Inventory Movement Type in your Oracle EBS source system by using the following SQL:

    SELECT DISTINCT MTL_TRANSACTION_TYPES.TRANSACTION_TYPE_NAME FROM MTL_TRANSACTION_TYPES;

  2. From the $PMServer\LkpFiles directory (for example, \PowerCenter8.6.x\server\infa_shared\LkpFiles), open the domainValues_Movement_Types_ora11i.csv file in a text editor.

  3. Copy the TRANSACTION_TYPE_NAME to the TRANSACTION_TYPE_NAME column in the file.

    The data must be copied starting from the second line.

  4. Map each TRANSACTION_TYPE_NAME to one Inventory Movement Type domain value.

    Use commas to separate the entries.

  5. Save and close the file.

6.2.2.10 How to Configure Quantity Types for Product Transactions

Oracle 11i categorizes quantities into three different types:

  • Goods Received quantities. Goods Received quantity refers to the number of goods received.

  • Delivery quantities. Delivery quantity refers to the number of goods delivered.

  • Base quantities. Base quantity refers to any transaction quantity.

The Oracle Business Analytics Warehouse extracts the transaction type and loads this value into the XACT_SRC_TYPE column. In this column, the value 1 denotes a Goods Received quantity, and 2 denotes a Delivery quantity.

To find out more about the XACT_SRC_TYPE column, execute the following SQL against your EBS instance:

SELECT TRANSACTION_SOURCE_TYPE_ID, TRANSACTION_SOURCE_TYPE_NAME, DESCRIPTION
FROM MTL_TXN_SOURCE_TYPES
ORDER BY 1;

If you have rows equivalent to Purchase Order (1), you should include the TRANSACTION_SOURCE_TYPE_ID in the Goods Received quantity column (EXT_GR_QTY). If you have rows equivalent to Sales Order (2), you should include the TRANSACTION_SOURCE_TYPE_ID in the Delivery quantity column (EXT_DELIVERY_QTY).

All quantities extracted from the source system are always loaded into the Base quantity column (EXT_BASE_QTY). However, only the receipt quantity is loaded into the Goods Received quantity column (EXT_GR_QTY), and only delivered quantities are loaded into the Delivery quantity column (EXT_DELIVERY_QTY).

If your definition of goods received or delivery quantity is different from the prepackaged condition, then you can edit the condition to suit your business needs.
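The prepackaged quantity logic described above can be sketched as follows. This is an illustrative Python sketch, not the Informatica expression itself; the function name and the exact sets of source-type IDs are assumptions you would adjust to match your own MTL_TXN_SOURCE_TYPES rows.

```python
# Source-type IDs follow the text above: 1 = Purchase order, 2 = Sales order.
# Extend these sets with any other IDs your EBS instance treats the same way.
GOODS_RECEIVED_TYPES = {1}
DELIVERY_TYPES = {2}

def classify_quantity(source_type_id, qty):
    """Return the three quantity columns populated for one transaction.

    Every transaction quantity lands in EXT_BASE_QTY; only receipts also
    populate EXT_GR_QTY, and only deliveries also populate EXT_DELIVERY_QTY.
    """
    row = {"EXT_BASE_QTY": qty, "EXT_GR_QTY": 0, "EXT_DELIVERY_QTY": 0}
    if source_type_id in GOODS_RECEIVED_TYPES:
        row["EXT_GR_QTY"] = qty
    elif source_type_id in DELIVERY_TYPES:
        row["EXT_DELIVERY_QTY"] = qty
    return row
```

Editing the prepackaged condition in the EXP_PROD_XACTS transformation (see the procedure below) corresponds to changing which source-type IDs fall into each set.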

To configure the Quantity type

  1. In Informatica PowerCenter Designer, open the SDE_ORA<Version>_Adaptor.

  2. Open the mplt_SA_ORA_ProductTransactionFact mapplet.

  3. Double-click the EXP_PROD_XACTS Expression transformation to open the Edit Transformations dialog, and display the Ports tab to show the EXT_GR_QTY and EXT_DELIVERY_QTY ports.


  4. Edit the quantity types by substituting your desired condition for the prepackaged expression.

  5. Click Apply.

  6. Validate the mapplet, and save your changes to the repository.

6.2.2.11 How to Deploy Objects in Oracle EBS for Exploding the BOM

The Bill of Materials (BOM) functional area allows you to determine the profit margin of the components that comprise the finished goods. BOM allows you to keep up with the most viable vendors in terms of cost and profit, and to keep your sales organization aware of product delivery status, including shortages.

In order to explode BOM structures, you need to follow the steps in this section to deploy objects in your Oracle EBS system.

Note: For additional information about configuring BOMs for Oracle EBS source systems, see Section 6.3.2.15, "How to Configure the Bill of Materials Explosion Options".

To deploy objects in Oracle EBS for exploding the BOM:

  1. Connect to OLTP using the APPS user (or equivalent database user with appropriate privileges).

  2. Navigate to the <OracleBI>\dwrep\Informatica\Stored_Procedure_Scripts\Oracle directory, and locate the file OLTP_BOM_Prepare.sql.

  3. Run the OLTP_BOM_Prepare.sql script to create two tables and one sequence.

  4. In the same directory, locate the wrapper script OLTP_bompexpl_wrapper_obia_<Ver>.sql.

  5. Create and compile the OLTP_bompexpl_wrapper_obia_<Ver>.sql script in OLTP.

  6. You can also configure the options to explode the BOM based on your business requirements in OLTP_bompexpl_wrapper_obia_<Ver>.sql.

    For more details, see Section 6.3.2.15, "How to Configure the Bill of Materials Explosion Options".

Note: If you are using a user other than APPS to extract your data, execute steps 2 to 5 as that user. Also make sure that the user has sufficient privileges to access and execute the objects.

6.2.3 Configuration Steps for Oracle Supply Chain and Order Management Analytics for PeopleSoft

Not applicable to Oracle BI Applications Version 7.9.6.1.

6.2.4 Configuration Steps for Oracle Supply Chain and Order Management Analytics for Universal

Not applicable to Oracle BI Applications Version 7.9.6.1.

6.3 Configuration Steps for Controlling Your Data Set

This section contains additional configuration steps for Oracle Supply Chain and Order Management Analytics.

6.3.1 Configuration Steps for Oracle Supply Chain and Order Management Analytics for All Source Systems

This section contains configuration steps that apply to all source systems.

6.3.1.1 Tracking Multiple Attribute Changes in Bookings

By default, the VAR_BOOKING_ID column is configured as follows for Oracle 11i and Oracle R12:

TO_CHAR(INP_LINE_ID)||'~'||TO_CHAR(INP_INV_ITEM_ID)||'~'||to_char(INP_WAREHOUSE_ID)

However, if you want to track changes based on more than one attribute, in the SQL statement you must concatenate the attribute column IDs in the VAR_BOOKING_ID column. For example, if you want to track changes in Salespersons and Customer, then concatenate the technical name IDs in the VAR_BOOKING_ID column as follows:

TO_CHAR(INP_LINE_ID)||'~'||TO_CHAR(INP_INV_ITEM_ID)||'~'||TO_CHAR(INP_WAREHOUSE_ID)||'~'||TO_CHAR(INP_SALESREP_ID)||'~'||TO_CHAR(INP_CUSTOMER_ID)
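The effect of the concatenated key can be illustrated with a short Python sketch. The helper function is hypothetical and simply mirrors the TO_CHAR(...)||'~'||... expression above: because every tracked attribute ID is part of the booking ID, a change to any one of them produces a different ID, which is how the change is detected.

```python
def booking_id(line_id, inv_item_id, warehouse_id, salesrep_id, customer_id):
    # Mirrors TO_CHAR(INP_LINE_ID)||'~'||TO_CHAR(INP_INV_ITEM_ID)||'~'||...
    return "~".join(str(v) for v in
                    (line_id, inv_item_id, warehouse_id, salesrep_id, customer_id))

before = booking_id(101, 55, 7, 9001, 42)
after = booking_id(101, 55, 7, 9002, 42)   # the salesperson was reassigned
assert before != after                      # the attribute change is detected
```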

To track dimensional attribute changes in bookings

  1. In Informatica PowerCenter Designer, open the SDE_ORA115<Version>_Adaptor or SDE_ORAR12_Adaptor folder.

  2. Open one of the following mappings:

    • mplt_SA_ORA_SalesOrderLinesFact

    • mplt_SA_ORA_SalesScheduleLinesFact

  3. Double-click the appropriate Expression transformation to open the Edit Transformation box:

    • EXP_SALES_ORDLNS

    • EXP_SALES_SCHLNS

  4. In the Ports tab, edit the expression for the EXT_BOOKING_ID port, and enter the ID of the attribute for which you want to track changes.

    If you want to track changes in multiple attributes, concatenate the IDs of all attributes and put the concatenated value in the VAR_BOOKING_ID column.

  5. Validate and save your changes to the repository.

6.3.1.2 Process of Aggregating Oracle Supply Chain and Order Management Analytics Tables

This section contains Oracle Supply Chain and Order Management Analytics configuration points for aggregating the Sales Invoice Lines and Sales Order Lines tables.

The aggregation processes use the following Teradata parameters:

  • Hint_Tera_Pre_Cast

  • Hit_Tera_Post_Cast

To aggregate the Sales Invoice Lines and Sales Order Lines tables, perform the following tasks:

  • Configure the Sales Invoice Lines Aggregate Table

  • Configure the Sales Order Lines Aggregate Table

About Configuring the Sales Invoice Lines Aggregate Table

The Sales Invoice Lines aggregate table (W_SALES_INVOICE_LINE_F_A) is used to capture information about the invoices issued for your sales orders. You need to configure the Sales Invoice Lines aggregate table in order to run initial ETL and incremental ETL.

For your initial ETL run, you need to configure the TIME_GRAIN parameter for the time aggregation level in the Sales Invoice Lines aggregate fact table.

For the incremental ETL run, you need to configure the time aggregation level.

To aggregate the Sales Invoice Lines table for your incremental run, you need to configure the TIME_GRAIN parameter.

The TIME_GRAIN parameter has a preconfigured value of Month. The possible values for the TIME_GRAIN parameter are:

  • 'DAY'

  • 'WEEK'

  • 'MONTH'

  • 'QUARTER'

  • 'YEAR'

The aggregation processes use the following Teradata parameters:

  • Hint_Tera_Pre_Cast

  • Hit_Tera_Post_Cast

The Sales Invoice Lines aggregate table is fully loaded from the base table in the initial ETL run. The table can grow to millions of records. Thus, the Sales Invoice aggregate table is not fully reloaded from the base table after each incremental ETL run. Oracle Business Analytics Warehouse minimizes the incremental aggregation effort, by modifying the aggregate table incrementally as the base table is updated. This process is described below.

  • The Oracle Business Analytics Warehouse finds the records to be deleted in the base table since the last ETL run, and loads them into the W_SALES_INVOICE_LINE_TMP table. The measures in these records are multiplied by (-1). The mapping responsible for this task is SIL_SalesInvoiceLinesAggregate_Derive_PreSoftDeleteImage, which is run before SIL_SalesInvoiceLinesFact_SoftDelete deletes the records from the base table.

  • The Oracle Business Analytics Warehouse finds the records to be updated in the base table since the last ETL run, and loads them into the W_SALES_INVOICE_LINE_TMP table. The measures in these records are multiplied by (-1). The mapping responsible for this task is SIL_SalesInvoiceLinesFact_Derive_PreLoadImage, which is run before SIL_SalesInvoiceFact updates the records in the base table.

  • The Oracle Business Analytics Warehouse finds the inserted or updated records in the base table since the last ETL run, and loads them into the W_SALES_INVOICE_LINE_TMP table, without changing their sign. The mapping responsible for this task is SIL_SalesInvoiceLinesFact_Derive_PreLoadImage, which is run before PLP_SalesInvoiceLinesFact_Derive_PostLoadImage updates or inserts records in the base table.

  • The Oracle Business Analytics Warehouse aggregates the W_SALES_INVOICE_LINE_TMP table and loads the result into W_SALES_INVOICE_LINE_A_TMP, which has the same granularity as the W_SALES_INVOICE_LINE_A table.

  • The PLP_SalesInvoiceLinesAggregate_Derive mapping looks up the W_SALES_INVOICE_LINE_A aggregate table to update existing buckets or insert new buckets in the aggregate table (the mapping is PLP_SalesInvoiceLinesAggregate_Load).
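The net effect of the steps above can be sketched as follows. This is an illustrative Python sketch of the negate-and-reapply pattern, with made-up bucket keys and measure values; it is not the actual mapping logic. Pre-change images enter the temporary table with measures multiplied by (-1), new images enter unchanged, and summing per bucket yields the correction applied to the aggregate table.

```python
from collections import defaultdict

# Rows staged in the temporary table (bucket, measure), as described above
tmp_rows = [
    ("2009-01", -100),   # pre-update image of a changed invoice line (negated)
    ("2009-01", 120),    # post-update image of the same line
    ("2009-02", -50),    # image of a deleted line (negated)
    ("2009-02", 80),     # newly inserted line
]

# Existing buckets in the aggregate table (illustrative values)
aggregate = {"2009-01": 500, "2009-02": 300}

# Sum the staged rows per bucket to get the net change
delta = defaultdict(int)
for bucket, measure in tmp_rows:
    delta[bucket] += measure

# Apply the correction: update existing buckets or insert new ones
for bucket, change in delta.items():
    aggregate[bucket] = aggregate.get(bucket, 0) + change

# 2009-01: 500 - 100 + 120 = 520;  2009-02: 300 - 50 + 80 = 330
```

Because only changed rows pass through the temporary table, the aggregate is corrected in place instead of being rebuilt from millions of base rows on every incremental run.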

How to Configure the Sales Invoice Lines Aggregate Table

To load the Sales Invoice Lines aggregate table (W_SALES_INVOICE_LINE_A), you need to configure the parameterfileDW.txt file, and run the initial and then the incremental workflows. Using a text editor, open the file parameterfileDW.txt, located in the Dac\Informatica\parameters\input folder on the DAC Server machine. For example: C:\orahome\10g<version>\bifoundation\dac\Informatica\parameters\input.

To configure the Sales Invoice Lines Aggregate Table

  1. In DAC, go to the Design view, and select the appropriate custom container.

  2. Display the Tasks tab.

  3. For each of the following tasks, display the Parameters tab and specify an appropriate value in the Value field for the TIME_GRAIN parameter:

    • SIL_SalesInvoiceLinesAggregate_Derive_PreLoadImage

    • SIL_SalesInvoiceLinesAggregate_Derive_PreSoftDeleteImage

    • PLP_SalesInvoiceLinesAggregate_Derive_PostLoadImage

    • PLP_SalesInvoiceLinesAggregate_Load

About Configuring the Sales Order Lines Aggregate Table

The Sales Order Lines aggregate table (W_SALES_ORDER_LINE_A) is used to capture information about the order lines issued for your sales orders. You need to configure the Sales Order Lines aggregate table in order to run initial ETL and incremental ETL.

For your initial ETL run, you need to configure the TIME_GRAIN parameter for the time aggregation level in the Sales Order Lines aggregate fact table.

For the incremental ETL run, you need to configure the time aggregation level.

To aggregate the Sales Order Lines table for your incremental run, you need to configure the TIME_GRAIN parameter.

The TIME_GRAIN parameter has a preconfigured value of Month. The possible values for the TIME_GRAIN parameter are:

  • 'DAY'

  • 'WEEK'

  • 'MONTH'

  • 'QUARTER'

  • 'YEAR'

The aggregation processes use the following Teradata parameters:

  • Hint_Tera_Pre_Cast

  • Hit_Tera_Post_Cast

The Sales Order Lines aggregate table is fully loaded from the base table in the initial ETL run. The table can grow to millions of records. Thus, the Sales Order aggregate table is not fully reloaded from the base table after each incremental ETL run. Oracle Business Analytics Warehouse minimizes the incremental aggregation effort, by modifying the aggregate table incrementally as the base table is updated. This process is described below.

  • Oracle Business Analytics Warehouse finds the records to be deleted in the base table since the last ETL run, and loads them into the W_SALES_ORDER_LINE_TMP table. The measures in these records are multiplied by (-1). The mapping responsible for this task is SIL_SalesOrderLinesAggregate_Derive_PreSoftDeleteImage, which is run before SIL_SalesOrderLinesFact_SoftDelete deletes the records from the base table.

  • Oracle Business Analytics Warehouse finds the records to be updated in the base table since the last ETL run, and loads them into the W_SALES_ORDER_LINE_TMP table. The measures in these records are multiplied by (-1). The mapping responsible for this task is SIL_SalesOrderLinesFact_Derive_PreLoadImage, which is run before SIL_SalesOrderFact updates the records from the base table.

  • Oracle Business Analytics Warehouse finds the inserted or updated records in the base table since the last ETL run, and loads them into the W_SALES_ORDER_LINE_TMP table, without changing their sign. The mapping responsible for this task is SIL_SalesOrderLinesFact_Derive_PreLoadImage, which is run before PLP_SalesOrderLinesFact_Derive_PostLoadImage updates or inserts records in the base table.

  • Oracle Business Analytics Warehouse uses the PLP_SalesOrderLinesAggregate_Derive mapping to aggregate the W_SALES_ORDER_LINE_TMP table and load to W_SALES_ORDER_LINE_A_TMP, which has the same granularity as the W_SALES_ORDER_LINE_A table.

  • The W_SALES_ORDER_LINE_A_TMP table is used to look up the W_SALES_ORDER_LINE_A aggregate table, to update existing buckets or insert new buckets in the aggregate table (the mapping is PLP_SalesOrderLinesAggregate_Load).

How to Configure the Sales Order Lines Aggregate Table

To load the Sales Order Lines aggregate table (W_SALES_ORDER_LINE_A), you need to configure the post-load-processing parameter file and the source system parameter files, and run the initial and then the incremental workflows.

To configure the Sales Order Lines Aggregate Table

  1. In DAC, go to the Design view, and select your custom container from the drop-down list.

  2. Display the Tasks tab.

  3. For each of the following tasks, display the Parameters tab and specify an appropriate value in the Value field for the TIME_GRAIN parameter:

    • SIL_SalesOrderLinesAggregate_Derive_PreLoadImage

    • SIL_SalesOrderLinesAggregate_Derive_PreSoftDeleteImage

6.3.1.3 About Tracking Multiple Products for Oracle Supply Chain and Order Management Analytics

The Sales Order Lines table contains two columns, ORDHD_KEY_ID and ORDLN_KEY_ID, that track individual products when they are grouped and sold as a single package. These two columns allow you to analyze the relationship of all products sold as a single unit. The ORDHD_KEY_ID column stores the Order ID of the entire sales order. The ORDLN_KEY_ID column stores the Line Item ID of the parent product.

For example, assume a customer purchases a package that includes a computer, scanner, and printer. In addition, the customer purchases a monitor separately. In this case, there are two parent items: the package and the monitor. The computer, scanner, and printer are all child orders of the parent order package, while the parent order monitor is a single-item purchase.

Your data warehouse may store this sales information in the Sales Order Lines table as seen in Table 6-2. The ORDLN_KEY_ID field contains the Line Item ID of the parent product in order to maintain the relationship between the parent and child products in a package. In this example, the ORDLN_KEY_ID field is Line_1 for each of the three child products (A1, A2, A3) that were sold as a part of the parent package, Parent A.

Table 6-2 Sales Order Table Columns With Parent/Child Relationships

Key_ID   SALES_ORDER_NUM   PRODUCT_ID   ORDHD_KEY_ID   ORDLN_KEY_ID   Relationship (Not a column in the table.)
Line_1   1000              Package      1000           Line_1         Parent A
Line_2   1000              Computer     1000           Line_1         Child A1
Line_3   1000              Scanner      1000           Line_1         Child A2
Line_4   1000              Printer      1000           Line_1         Child A3
Line_5   1000              Monitor      1000           Line_5         Parent B (no children)


In contrast, if each of the four items described in Table 6-2 were bought individually, the ORDLN_KEY_ID would have a different Line Item ID for every row. In this case, the Sales Order Lines table would look like Table 6-3.

Table 6-3 Sales Order Table Columns Without Parent/Child Relationships

Key_ID   SALES_ORDER_NUM   PRODUCT_ID   ORDHD_KEY_ID   ORDLN_KEY_ID   Relationship (not a column in the table)

Line_1   1000              Computer     1000           Line_1         None
Line_2   1000              Scanner      1000           Line_2         None
Line_3   1000              Printer      1000           Line_3         None
Line_4   1000              Monitor      1000           Line_4         None
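
The parent/child relationship shown in Tables 6-2 and 6-3 can be sketched in Python. This is an illustration of the logic only (the dictionary keys are illustrative names mirroring the table columns, not actual warehouse APIs): a line whose Key_ID equals its own ORDLN_KEY_ID is a parent or standalone item; otherwise it is a child of the line named in ORDLN_KEY_ID.

```python
# Classify sales order lines as parent or child rows based on ORDLN_KEY_ID,
# mirroring Tables 6-2 and 6-3. A line whose Key_ID equals its ORDLN_KEY_ID
# is a parent (or a standalone item); otherwise it is a child of that line.
def classify(lines):
    result = {}
    for line in lines:
        if line["key_id"] == line["ordln_key_id"]:
            result[line["key_id"]] = "parent"
        else:
            result[line["key_id"]] = "child of " + line["ordln_key_id"]
    return result

# Rows from Table 6-2: a package (computer, scanner, printer) plus a monitor.
rows = [
    {"key_id": "Line_1", "product": "Package",  "ordln_key_id": "Line_1"},
    {"key_id": "Line_2", "product": "Computer", "ordln_key_id": "Line_1"},
    {"key_id": "Line_3", "product": "Scanner",  "ordln_key_id": "Line_1"},
    {"key_id": "Line_4", "product": "Printer",  "ordln_key_id": "Line_1"},
    {"key_id": "Line_5", "product": "Monitor",  "ordln_key_id": "Line_5"},
]
```

Applied to the rows of Table 6-3, where every ORDLN_KEY_ID equals its own Key_ID, the same function classifies every line as a parent.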


6.3.1.4 Adding Dates to the Order Cycle Time Table for Post-Load Processing

To add dates, you need to understand how the Order Cycle Time table is populated. If you want to change the dates loaded into the Order Cycle Time table (W_SALES_CYCLE_LINE_F), you must modify the PLP_SalesCycleLinesFact_Load and PLP_SalesCycleLinesFact_Load_Full mappings, which take the dates from the W_* tables and load them into the Cycle Time table.

To add dates to the Cycle Time table load

  1. In Informatica PowerCenter Designer, open the Configuration for Post Load Processing folder (that is, the PLP folder).

  2. In Warehouse Designer, modify the table definition for the target table to make sure that it has a field to store this date.

    For example, if you are loading the Validated on Date in the W_SALES_CYCLE_LINE_F table, then you need to create a new column, VALIDATED_ON_DT, and modify the target definition of the W_SALES_CYCLE_LINE_F table.

  3. In Source Analyzer, modify the table definition of the source table to include this new column.

    Continuing with the example, you would include the VALIDATED_ON_DT column in the W_SALES_CYCLE_LINE_F source table.

  4. In Mapping Designer, modify the PLP_SalesCycleLinesFact_Load and PLP_SalesCycleLinesFact_Load_Full mappings to select the new column from any of the following source tables, and load it to the W_SALES_CYCLE_LINE_F target table:

    • W_SALES_ORDER_LINE_F

    • W_SALES_INVOICE_LINE_F

    • W_SALES_PICK_LINE_F

    • W_SALES_SCHEDULE_LINE_F

  5. Modify the Source Qualifier SQL override for the mapping, and map the new column through the transformation to the target table.

6.3.1.5 About Configuring the Backlog Period Date for Oracle Supply Chain and Order Management Analytics

The Backlog table (W_SALES_BACKLOG_LINE_F) stores backlog data for the current month. In contrast, the Backlog History table (W_SALES_BACKLOG_HISTORY_F) stores snapshots of all previous months' historical backlog data. The periods for which the Backlog History table tracks backlog data are defined by the Backlog Period Date. By default, the date is set to the last calendar day of the month; however, you can configure this date. You may want to view backlog history at a more detailed level, such as by day or by week, instead of by month. The following example describes how historical backlog data is stored and what the implications are for changing the backlog time period.

6.3.1.6 Example of How Backlog Data Is Stored in the Backlog History Table

Assume you represent a manufacturing company where financial backlog is defined as any item that is ordered, but not invoiced. On February 1, 2001, you received an order (Sales Order #1) for 30 products. 20 were shipped and invoiced and 10 were shipped, but not invoiced. At the end of the day, there is an entry in the Backlog table and in the Backlog History table. The entry in the Backlog History table looks like that shown in Table 6-4.

Table 6-4 Oracle 11i and Oracle R12: Backlog History Table Entry as of February 1, 2001

SALES_ORDER_NUM        BACKLOG_DK       BACKLOG_PERIOD_DK       OPEN_QTY
(Sales Order Number)   (Backlog Date)   (Backlog Period Date)   (Backlog Quantity)

1                      02/01/2001       02/28/2001              10


On February 2, 5 of the 10 financial backlog items are invoiced and are therefore removed from the backlog. As a result, the existing row in the Backlog History table is updated, as shown in Table 6-5.

Table 6-5 Oracle 11i and Oracle R12: Backlog History Table Entry as of February 2, 2001

SALES_ORDER_NUM        BACKLOG_DK       BACKLOG_PERIOD_DK       OPEN_QTY
(Sales Order Number)   (Backlog Date)   (Backlog Period Date)   (Backlog Quantity)

1                      02/01/2001       02/28/2001              Old value: 10
                                                                New value: 5


No further activity happens until March 1st. On March 1st, the remaining 5 items on financial backlog are invoiced and removed from financial backlog. In addition, a new sales order (Sales Order #2) comes in for 50 new items. All of the items are put on financial backlog.

Even though all items from Sales Order #1 are cleared from financial backlog, the last backlog row remains in the Backlog History table. The purpose in retaining the last row is to indicate that there was backlog for this particular order. The quantity, in this case 5 items, does not tell you how many items were initially on backlog, which was 10.

For the 50 new financial backlog items, there is a new entry in the Backlog History table. So, as of February 28, 2001, the Backlog History table looks like Table 6-6.

Table 6-6 Oracle 11i: Backlog History Table Entry as of February 28, 2001

SALES_ORDER_NUM        BACKLOG_DK              BACKLOG_PERIOD_DK       OPEN_QTY
(Sales Order Number)   (Backlog Date)          (Backlog Period Date)   (Backlog Quantity)

1                      Old value: 02/01/2001   02/28/2001              Old value: 10
                       New value: 02/02/2001                           New value: 5


On March 1, 30 more items are ordered (Sales Order #3), all of which are on financial backlog. The resulting Backlog History table looks like Table 6-7.

Table 6-7 Oracle 11i and Oracle R12: Backlog History Table Entry as of March 1, 2001

SALES_ORDER_NUM        BACKLOG_DK              BACKLOG_PERIOD_DK       OPEN_QTY
(Sales Order Number)   (Backlog Date)          (Backlog Period Date)   (Backlog Quantity)

1                      Old value: 02/01/2001   02/28/2001              5
                       New value: 02/02/2001
2                      03/01/2001              03/31/2001              50
3                      03/01/2001              03/31/2001              30


Because backlog history is maintained at the monthly level, you have a partial history of your backlogs. Based on the latest state of the Backlog History table shown in Table 6-7, you can see that sales order number 1 ended up with 5 financial backlogged items. You do not have visibility into what the initial financial backlogged item quantities were for the sales orders; you only have their ending quantities.

If you decide that you want to track in more detail how items move out of backlog, you must maintain the history at a more granular level. For instance, if you want to know how many items were on backlog when it was first opened, you must track the backlog history by day instead of by month.

For example, if you maintained backlog history at the daily level, you would be able to capture that sales order 1 had an initial backlog of 10 as of February 1 and that the backlog quantity shrank to 5 as of February 2. By capturing history at the daily level, you could then compute cycle times for how long it took to move items out of backlog. However, if you capture backlog history at a more detailed level, you may compromise performance, because tracking backlog history at the daily level can increase the size of the Backlog History table substantially.

If you choose to change the time period for which historical backlog data is kept, you must verify that all types of backlog are stored at the same grain, which requires modifying multiple mappings. Table 6-8 provides a list of all applicable mappings and their corresponding Expression transformations that you must modify.

Table 6-8 Oracle 11i and Oracle R12: Backlog History Applicable Mappings and Expression Transformations

Mapping                                       Expression Transformation

PLP_SalesBacklogLinesfact_LoadOrderLines      EXP_SALES_ORNLNS_BACKLOG
PLP_SalesBacklogLinesfact_LoadScheduleLines   EXP_SALES_SCHLNS_BACKLOG


The backlog history period is monthly by default. The default SQL statement in the Expression transformation for the port BACKLOG_PERIOD_DK is:

TO_DECIMAL(TO_CHAR(LAST_DAY(CALENDAR_DATE),'YYYYMMDD'))

You can edit the expression for the BACKLOG_PERIOD_DK port to capture a more detailed backlog history. Possible periods include daily (CAL_DAY_DT), weekly (CAL_WEEK_DT), monthly (CAL_MONTH_DT), and quarterly (CAL_QTR_DT).
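
The default expression above maps every calendar date to the last day of its month, rendered as a YYYYMMDD number. The equivalent logic, with a daily variant, can be sketched in Python. This is a sketch of the computation only, under the assumption stated; the actual change is made in the Informatica Expression transformation, not in code like this.

```python
import calendar
from datetime import date

def backlog_period_dk(calendar_date, grain="MONTH"):
    """Compute BACKLOG_PERIOD_DK as a YYYYMMDD integer.

    MONTH mirrors the default expression
    TO_DECIMAL(TO_CHAR(LAST_DAY(CALENDAR_DATE), 'YYYYMMDD'));
    DAY uses the calendar date itself for a daily backlog history.
    """
    if grain == "MONTH":
        last = calendar.monthrange(calendar_date.year, calendar_date.month)[1]
        d = date(calendar_date.year, calendar_date.month, last)
    elif grain == "DAY":
        d = calendar_date
    else:
        raise ValueError("unsupported grain: " + grain)
    return int(d.strftime("%Y%m%d"))
```

For example, February 1, 2001 maps to 20010228 at the monthly grain, matching the BACKLOG_PERIOD_DK values shown in Tables 6-4 through 6-7.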

6.3.1.7 About Configuring the Customer Status History Fact for Post-Load Processing In Oracle Supply Chain and Order Management Analytics

In Oracle Supply Chain and Order Management Analytics, W_CUSTOMER_STATUS_HIST_F is a fact table that tracks the status of customers based on the frequency of orders they place with the organization. Possible statuses are NEW, RECENT, DORMANT, and LOST. The time duration for each status bucket is configurable; the default is one calendar year. The grain of this table is Customer, Customer Status, and Status Start Date. This section explains the possible configurations available for this table, what they mean, and how to implement them.

6.3.1.8 Configuring the Customer Status History Fact table

This section describes the following configurations that are available for the Customer Status History Fact table:

  • Configure the Data Warehouse Identifier

  • Configure the Period for each status bucket

Configuring the Data Warehouse Identifier

This table uses statuses defined by Oracle BI Applications: NEW, RECENT, DORMANT, and LOST. This status data is loaded into the Data Warehouse directly through a prepackaged CSV file. The data in the file is independent of any specific OLTP source system where your Customer or Sales data resides. To differentiate source-based statuses from the prepackaged Data Warehouse statuses, a distinct identifier is required. The Informatica mapping parameter $$WH_DATASOURCE_NUM_ID serves that purpose.

The prepackaged value is 999. You do not need to configure this value unless you have already used this number (999) for one of your own data sources, such as Oracle EBS 11.5.10.

For information about how to configure the $$WH_DATASOURCE_NUM_ID value, see Section 6.3.1.9, "How to Configure the Customer Status History Fact table".

Configuring the Period for each status bucket

When a customer orders products or services from your organization for the first time, Oracle BI Applications sets the status for the customer to NEW. The customer keeps the same status as long as he or she shows a constant order pattern, that is, as long as the duration between any two orders is less than a configurable, business-defined period. The value of this Informatica parameter, $$PERIOD, is configurable; out of the box it is 365 days. For example, a fast-moving retail goods company might define 30 days as its period, whereas a slow-moving goods company might be satisfied with a period of 730 days.

If the customer does not order anything for more than one period, the status moves to RECENT. Similarly, no orders for one more period after becoming RECENT moves the customer to DORMANT. Finally, the customer is set to LOST if no orders are seen for more than one additional period after the customer becomes DORMANT.

However, if a customer orders while in DORMANT status, Oracle BI Applications upgrades the status back to RECENT. Likewise, if the customer is in LOST status and places an order, the status is upgraded back to RECENT.

These examples illustrate how important it is for the business to set the right value for the period. Organizations often launch different campaigns targeting customers based on their current status, or, put differently, on their order patterns.
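
The status transitions described above can be sketched as follows. This is a simplified illustration of the rules, not the actual logic of the PLP_CustomerStatusHistoryFact_* mappings: each full elapsed period without an order degrades the status one level, and any order from a DORMANT or LOST customer restores RECENT.

```python
# Sketch of the status rules described above. PERIOD stands in for the
# configurable $$PERIOD parameter (365 days out of the box).
PERIOD = 365

STATUS_ORDER = ["NEW", "RECENT", "DORMANT", "LOST"]

def next_status(current_status, days_since_last_order, ordered_today):
    if ordered_today:
        # An order from a DORMANT or LOST customer upgrades them to RECENT;
        # otherwise the customer keeps the current status.
        if current_status in ("DORMANT", "LOST"):
            return "RECENT"
        return current_status
    # Each full period without an order degrades the status one level,
    # bottoming out at LOST.
    steps = days_since_last_order // PERIOD
    idx = min(STATUS_ORDER.index(current_status) + steps, len(STATUS_ORDER) - 1)
    return STATUS_ORDER[idx]
```

For instance, a NEW customer silent for 400 days becomes RECENT, and a DORMANT customer who places an order is upgraded back to RECENT, matching the narrative above.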

For information about how to configure the $$PERIOD value, see Section 6.3.1.9, "How to Configure the Customer Status History Fact table".

6.3.1.9 How to Configure the Customer Status History Fact table

This section explains the steps involved in configuring the Customer Status History Fact table using the $$WH_DATASOURCE_NUM_ID and $$PERIOD variables (for more information about these variables, see Section 6.3.1.8, "Configuring the Customer Status History Fact table").

To modify the value of $$WH_DATASOURCE_NUM_ID

  1. In DAC, go to the Design view, and select the appropriate custom container.

  2. Display the Source System Parameters tab and locate the parameter $$WH_DATASOURCE_NUM_ID.

  3. In the Edit subtab, enter an appropriate value in the Value field.

  4. Save the changes.

To modify the value of $$PERIOD

  1. In DAC, go to the Design view, and select the appropriate custom container.

  2. Display the Tasks tab and query for the following two tasks:

    • PLP_CustomerStatusHistoryFact_New_Customers_Load

    • PLP_CustomerStatusHistoryFact_Status_Revalidate

  3. For each task, display the Parameters subtab, and enter an appropriate value in the Value field.

    Make sure you set the same value for both tasks.

  4. Save the changes.

6.3.1.10 About Configuring the Inventory Monthly Balance Table

To configure the Inventory Monthly Balance (W_INVENTORY_DAILY_BALANCE_F_A1) aggregate table, you need to consider the aggregation level, the time period to update the aggregation, and the time period to keep records in the Inventory Balance table.

You need to configure three parameters to configure the Inventory Monthly Balance table:

  • GRAIN

  • KEEP_PERIOD

  • NUM_OF_PERIOD

The GRAIN parameter has a preconfigured value of Month. The possible values for the GRAIN parameter are:

  • DAY

  • WEEK

  • MONTH

  • QUARTER

  • YEAR

The KEEP_PERIOD parameter has a preconfigured value of Month. Values for the KEEP_PERIOD parameter include:

  • DAY

  • WEEK

  • MONTH

  • QUARTER

  • YEAR

The NUM_OF_PERIOD parameter has a preconfigured value of 3. The value for the NUM_OF_PERIOD parameter is a positive integer, for example, 1, 2, 3, and so on.

6.3.1.11 How to Configure the Inventory Monthly Balance

Before you run the initial ETL session or incremental ETL sessions to load the Inventory Monthly Balance table, configure the Inventory Monthly Balance as follows.

To configure the Inventory Monthly Balance

  1. In DAC, go to the Design view, and select the appropriate custom container from the drop-down list.

  2. Display the Tasks tab.

  3. For each of the following tasks, display the Parameters subtab and create the specified parameter name and parameter value:

    • PLP_InventoryMonthlyBalance $$GRAIN 'MONTH'

    • PLP_InventoryDailyBalance_Trim $$KEEP_PERIOD 'MONTH'

    • PLP_InventoryDailyBalance_Trim $$NUM_OF_PERIOD 3

To incrementally refresh the Inventory Monthly Balance table

  1. Delete the records from the Monthly Balance (W_INVENTORY_MONTHLY_BAL_F) aggregate table for a certain time.

    The GRAIN parameter determines the time period for the deletion. For example, if GRAIN=MONTH, and the date is May 15, 2005, then all records for April and the current month (May) are deleted in the Monthly Balance (W_INVENTORY_MONTHLY_BAL_F) table.

    Running the PLP_InventoryMonthlyBalance workflow mapping implements this step.

  2. Retrieve the records in the Inventory Balance (W_INVENTORY_DAILY_BALANCE_F) fact table and load the records to the Monthly Balance (W_INVENTORY_MONTHLY_BAL_F) table at a certain grain level.

    For example, if GRAIN=MONTH, then the month end balance records in the W_INVENTORY_DAILY_BALANCE_F fact table are stored in and aggregated to the Monthly Balance (W_INVENTORY_MONTHLY_BAL_F).

    Running the S_M_PLP_INV_BALANCE_A1_AGG session and the M_PLP_INV_BALANCE_A1_AGG mapping (part of the PLP_InventoryMonthlyBalance workflow) implements this step. For the current month balance, balance records of the previous day (if it is in the same month) are deleted from W_INVENTORY_MONTHLY_BAL_F, and balance records of the current day are loaded from W_INVENTORY_DAILY_BALANCE_F to W_INVENTORY_MONTHLY_BAL_F.

  3. Remove the old records from the W_INVENTORY_DAILY_BALANCE_F fact table.

    To remove old records you need to use the KEEP_PERIOD and the NUM_OF_PERIOD parameters. For example, if KEEP_PERIOD=MONTH, NUM_OF_PERIOD=1, and the date is May 15, 2005, then the records for April and the current month (May) are kept and the older records are deleted.

    Running the PLP_InventoryDailyBalance_Trim workflow implements this step.

    Note:

    The trimming process reduces the data size of the table. Note that after trimming, you can no longer see the old daily balance records, although you can still see the month-end balances. Therefore, make sure that you adjust the NUM_OF_PERIOD value to reflect your data volume and data recency requirements.
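
The month-end selection (step 2) and the keep window (step 3) can be sketched together in Python. This is an illustration of the logic only, not the actual mapping code; the real work is done by the PLP_InventoryMonthlyBalance and PLP_InventoryDailyBalance_Trim workflows, and only KEEP_PERIOD=MONTH is covered here.

```python
from datetime import date

def month_end_balances(daily_rows):
    # Step 2: for each (product, year, month), keep the balance of the
    # latest day, mirroring the load into W_INVENTORY_MONTHLY_BAL_F.
    latest = {}
    for product, day, qty in daily_rows:
        key = (product, day.year, day.month)
        if key not in latest or day > latest[key][0]:
            latest[key] = (day, qty)
    return {key: qty for key, (day, qty) in latest.items()}

def trim_cutoff(today, num_of_period=1):
    # Step 3: earliest date retained in W_INVENTORY_DAILY_BALANCE_F when
    # KEEP_PERIOD=MONTH; rows dated before this cutoff would be deleted.
    year, month = today.year, today.month
    for _ in range(num_of_period):
        month -= 1
        if month == 0:
            year, month = year - 1, 12
    return date(year, month, 1)

daily = [
    ("widget", date(2005, 4, 1), 100),
    ("widget", date(2005, 4, 30), 80),  # April month-end balance
    ("widget", date(2005, 5, 14), 75),  # latest daily row in May so far
]
```

With KEEP_PERIOD=MONTH and NUM_OF_PERIOD=1 on May 15, 2005, the cutoff is April 1, 2005, so April and the current month are kept and older daily rows are deleted, matching the example in step 3.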

6.3.1.12 About Configuring the Product Transaction Aggregate Table

There are two aggregation scenarios to configure for the Product Transaction aggregate (W_PRODUCT_XACT_A) table: the initial ETL run and the incremental ETL run.

For the initial ETL run, you need to configure the aggregation grain, using the GRAIN parameter, and the length of history kept in the Product Transaction fact table.

For the incremental ETL run, you need to configure the aggregation level, the update period in aggregation, and the length of history kept in the Product Transaction fact table, using the following parameters:

  • GRAIN

    The GRAIN parameter specifies the aggregation level. Valid values are DAY, WEEK, MONTH (preconfigured value), QUARTER, YEAR.

  • REFRESH_PERIOD

    The REFRESH_PERIOD parameter, together with NUM_OF_PERIOD, determines how many periods of records are refreshed from the transaction table to the aggregate table. Valid values are DAY, WEEK, MONTH (preconfigured value), QUARTER, YEAR.

  • NUM_OF_PERIOD

    The NUM_OF_PERIOD parameter, together with REFRESH_PERIOD, determines how many periods of records are refreshed from the transaction table to the aggregate table. Valid values are positive integers, for example, 1, 2, 3 (preconfigured value).

6.3.1.13 How to Configure the Product Transaction Aggregate Table

Before you run the initial ETL and then the incremental ETL to load the Product Transaction aggregate table, you need to configure the Product Transaction Aggregate Table, as follows.

To configure the Product Transaction Aggregate Table

  1. In DAC, go to the Design view, and select the appropriate custom container from the drop-down list.

  2. Display the Tasks tab.

  3. Locate the Task named PLP_ProductTransactionAggregate, display the Parameters subtab, and make sure that the following three parameters are set as specified:

    • $$REFRESH_PERIOD = 'MONTH'

    • $$GRAIN = 'MONTH'

    • $$NUM_OF_PERIOD = 3

    Note: If any of these parameters do not exist, create them as Data Type = Text with the specified Values.

To configure the Product Transaction aggregate table for the initial ETL run

  1. Retrieve the records in the Product Transaction fact (W_PRODUCT_XACT_F) table, and aggregate the records to the Product Transaction aggregate (W_PRODUCT_XACT_A) table at a certain grain level.

    For example, if GRAIN=MONTH then the records in the W_PRODUCT_XACT_F fact table are retrieved and aggregated to the W_PRODUCT_XACT_A table at a monthly level.

    Running the PLP_ProductTransactionAggregate workflow implements this step.

To configure the Product Transaction aggregate table for the incremental ETL run

  1. Delete the refreshed records from the Product Transaction aggregate (W_PRODUCT_XACT_A) table for a certain time.

    The REFRESH_PERIOD and the NUM_OF_PERIOD parameters determine the time period for the deletion.

    For example, if REFRESH_PERIOD=MONTH, NUM_OF_PERIOD=1, and the date is May 15, 2005, then all records for April and the current month (May) are deleted in the W_PRODUCT_XACT_A table.

    Running the PLP_ProductTransactionAggregate workflow implements this step.

  2. Retrieve the records in the Product Transaction fact (W_PRODUCT_XACT_F) table, and aggregate the records to the W_PRODUCT_XACT_A table at a certain grain level.

    For example, if GRAIN=MONTH then the records in the W_PRODUCT_XACT_F fact table are retrieved and aggregated to the W_PRODUCT_XACT_A table at a monthly level.

    Running the PLP_ProductTransactionAggregate workflow implements this step.
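
The delete-then-reaggregate pattern of steps 1 and 2 can be sketched as follows. This is an illustration only, assuming GRAIN and REFRESH_PERIOD are both MONTH; the actual work is done by the PLP_ProductTransactionAggregate workflow, and the row shapes here are hypothetical.

```python
from datetime import date

def refresh_monthly_aggregate(fact_rows, aggregate, today, num_of_period=1):
    # Build the (year, month) periods inside the refresh window: the
    # current month plus num_of_period prior months.
    year, month = today.year, today.month
    window = [(year, month)]
    for _ in range(num_of_period):
        month -= 1
        if month == 0:
            year, month = year - 1, 12
        window.append((year, month))
    # Step 1: delete refreshed periods from the aggregate table.
    for period in window:
        aggregate.pop(period, None)
    # Step 2: re-aggregate fact rows that fall inside the window.
    for day, amount in fact_rows:
        period = (day.year, day.month)
        if period in window:
            aggregate[period] = aggregate.get(period, 0) + amount
    return aggregate
```

For example, with REFRESH_PERIOD=MONTH, NUM_OF_PERIOD=1, and a run date of May 15, 2005, April and May are deleted and rebuilt from the fact rows, while March is left untouched, matching the example in step 1.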

6.3.2 Configuration Steps for Oracle Supply Chain and Order Management Analytics for Oracle EBS

This section contains configuration steps that apply to Oracle EBS. It contains the following topics:

6.3.2.1 About Configuring Sales Order Lines Data Storage

Sales order lines are the itemized lines that make up a sales order. This information is stored in the W_SALES_ORDER_LINE_F table.

6.3.2.2 How to Include Nonbooked Orders in the Sales Order Lines Tables

By default, only booked orders are extracted from the Oracle source system as shown in Figure 6-1. Therefore, all orders loaded into the Sales Order Lines and Bookings tables are booked.

However, if you want to load nonbooked orders into the Sales Order Lines table, you have to configure the extract so that it does not filter out nonbooked orders. In Oracle 11i and Oracle R12, the OE_ORDER_LINES_ALL.BOOKED_FLAG = Y condition indicates that an order is booked; therefore, this statement is used to filter out nonbooked orders. To load all orders, including nonbooked orders, remove the filter condition from the WHERE clause in the SDE_ORA_SalesOrderLinesFact and SDE_ORA_SalesOrderLinesFact_Primary mappings.

Figure 6-1 Handling Booked and Nonbooked Orders


By default, only booked orders are loaded into the Sales Order Lines (W_SALES_ORDER_LINE_F) and Sales Booking Lines (W_SALES_BOOKING_LINE_F) tables. However, you can also load nonbooked orders into the Sales Order Lines table (W_SALES_ORDER_LINE_F).

To include nonbooked orders in the Sales Order Lines tables

  1. In Informatica PowerCenter Designer, open the SDE_ORA115<Version>_Adaptor or SDE_ORAR12_Adaptor folder.

  2. Open the mplt_BC_ORA_SalesOrderLinesFact mapplet in the Mapplet Designer.

  3. Double-click the SQ_BCI_SALES_ORDLNS Source Qualifier to open the Edit Transformations box.

  4. Display the Properties tab.

  5. For both the SQL Query Transformation Attribute and the User Defined Join Transformation Attribute, do the following:

    1. Select the down arrow in the Value field to display the SQL Editor box.

    2. In the SQL box, remove the line 'AND OE_ORDER_LINES_ALL.BOOKED_FLAG='Y''.

    3. Select OK to save the changes.

  6. Validate and save your changes to the repository.

  7. Repeat steps 3 - 5 for the SDE_ORA_SalesOrderLinesFact_Primary mapping.

  8. Add W_SALES_ORDER_LINE_F.BOOKING_FLG = 'Y' (preceded by AND if there are existing filters) to the Source Filter field of the Source Qualifier transformation, for the following mappings:

    • SIL_SalesBookingLinesFact_Load_OrderLine_Credit

    • SIL_SalesBookingLinesFact_Load_OrderLine_Debit

  9. Add W_SALES_SCHEDULE_LINE_F.BOOKING_FLG = 'Y' (preceded by AND if there are existing filters) to the Source Filter field of the Source Qualifier transformation, for the following mappings:

    • SIL_SalesBookingLinesFact_Load_ScheduleLine_Credit

    • SIL_SalesBookingLinesFact_Load_ScheduleLine_Debit

6.3.2.3 How to Configure Sales Schedule Lines Data Storage

Sales schedule lines detail when each order's items are slated for shipment. Each sales order is broken into sales order lines, and each sales order line can have multiple schedule lines.

For example, you might not have enough stock to fulfill a particular sales order line, therefore you create two schedules to fulfill it. One schedule ships what you currently have in stock, and the other schedule includes enough time for you to manufacture and ship the remaining items of the sales order line. This information is stored in the W_SALES_SCHEDULE_LINE_F table. This topic describes how to modify the type of information stored in this table.

6.3.2.4 How to Include Nonbooked Orders in the Sales Schedule Lines Tables

By default, all orders loaded into the Sales Schedule Lines are booked.

However, if you want to load nonbooked orders into the Sales Schedule Lines table, you have to configure the extract so that it does not filter out nonbooked orders. In Oracle 11i and Oracle R12, the OE_ORDER_LINES_ALL.BOOKED_FLAG = Y condition indicates that an order is booked; therefore, this statement is used to filter out nonbooked orders. To load all orders, including nonbooked orders, remove the filter condition from the WHERE clause in the SDE_ORA_SalesScheduleLinesFact and SDE_ORA_SalesScheduleLinesFact_Primary mappings.

To include nonbooked orders in the Sales Schedule Lines tables

  1. In Informatica PowerCenter Designer, open the SDE_ORA115<Version>_Adaptor or SDE_ORAR12_Adaptor folder.

  2. Open the mplt_BC_ORA_SalesScheduleLinesFact mapplet in the Mapplet Designer.

  3. Double-click the SQ_BCI_SALES_ORDLNS Source Qualifier to open the Edit Transformations box.

  4. Display the Properties tab.

  5. For both the SQL Query Transformation Attribute and the User Defined Join Transformation Attribute, do the following:

    1. Select the down arrow in the Value field to display the SQL Editor box.

    2. In the SQL box, remove the line 'AND OE_ORDER_LINES_ALL.BOOKED_FLAG='Y''.

    3. Select OK to save the changes.

  6. Validate and save your changes to the repository.

  7. Repeat steps 3 - 5 for the SDE_ORA_SalesScheduleLinesFact_Primary mapping.

6.3.2.5 About Loading Bookings at the Schedule Line Level

As initially configured for Oracle 11i and Oracle R12, bookings are recorded at the Sales Order Line level. For each booked order, there is at least one row in the Bookings table, as shown in the figure below.

Figure 6-2 Sales Order Lines and Bookings Table


There are two Subject Areas in the SDE_ORA115<Version>_Adaptor or SDE_ORAR12_Adaptor containers:

  • Enterprise Sales - Booking Lines & Order Lines

  • Enterprise Sales - Booking Lines & Schedule Lines

The Execution Plans that are installed with Oracle BI Applications use the Enterprise Sales - Booking Lines & Order Lines Subject Area by default. If you want to load booking lines at the schedule line level, create a new Execution Plan and include the Enterprise Sales - Booking Lines & Schedule Lines Subject Area instead of Enterprise Sales - Booking Lines & Order Lines.

Bookings may be recorded at the Sales Schedule Line level instead of the Sales Order Line level. At the Sales Schedule Line level, bookings provide a more granular view, as the orders are segmented by schedule line. Bookings recorded at the Schedule Line level provide one row in the Bookings table for each schedule line, as shown in the figure below. Oracle Applications schedule lines have the same granularity as order lines. Therefore, if you pull booking lines from schedule lines, the booking lines are limited to scheduled order lines.

Figure 6-3 Bookings recorded at the Schedule Line level


6.3.2.6 How to Configure Early and Late Tolerances for Shipments

You configure the definition of early and late shipments by editing the EXP_SALES_PCKLNS expression in the mplt_SA_ORA_SalesPickLinesFact mapplet. The mplt_SA_ORA_SalesPickLinesFact mapplet is used by the SDE_ORA_SalesPickLinesFact mapping.

This mapplet compares the pick date and ship date with the scheduled ship date to determine whether or not orders are late.

To configure early and late tolerances for shipments

  1. Open the file parameterfileOLTP.txt and locate the section [SDE_ORA_SalesPickLinesFact].

  2. Edit the parameter for the tolerance you want to modify.

    For example:

    • If you want to allow two days after the scheduled pick date before the pick is flagged as late, set $$PICK_LATE_TIME_TOL=2.

    • To set the number of days before a pick is flagged as early, set the value for $$PICK_EARLY_TIME_TOL.

    • To set the number of days before a pick is flagged as late, set the value for $$PICK_LATE_TIME_TOL.

    • If you want to change the shipping tolerances, set the values for the shipping parameters ($$SHIP_LATE_TIME_TOL and $$SHIP_EARLY_TIME_TOL).

  3. Validate and save your changes to the parameter file.

6.3.2.7 How to Configure Sales Invoice Lines Data Storage

Sales invoice lines are payments for items ordered by a customer. This information is stored in the W_SALES_INVOICE_LINE_F table. This topic describes how to modify the type of information stored in this table.

6.3.2.8 How to Configure the Sales Invoice Extract

By default, the Oracle Supply Chain and Order Management Analytics application is configured to extract completed sales invoices when performing the Sales Invoice data extract. Oracle 11i and Oracle R12 use a flag to indicate whether a sales invoice is complete. In particular, completed sales invoices are those where the RA_CUSTOMER_TRX_ALL.COMPLETE_FLAG = Y in Oracle 11i and Oracle R12.

To extract incomplete sales invoices, as well as complete invoices, remove the extract filter statement.

To remove the extract filter for sales invoices

  1. In Informatica PowerCenter Designer, open the SDE_ORA115<Version>_Adaptor or SDE_ORAR12_Adaptor folder.

  2. Open the mplt_BC_ORA_SalesInvoiceLinesFact mapplet in the Mapplet Designer.

  3. Double-click the SQ_BCI_SALES_IVCLNS Source Qualifier to open the Edit Transformations box.

  4. Display the Properties tab.

  5. For the SQL Query Transformation Attribute, select the down arrow in the Value field to display the SQL Editor box.

  6. In the SQL box, remove the line 'AND RA_CUSTOMER_TRX_ALL.COMPLETE_FLAG='Y''.

  7. Validate and save your changes to the repository.

  8. Repeat steps 2 - 7 for the mplt_BC_ORA_SalesInvoiceLinesFact_Primary mapplet.

6.3.2.9 How to Configure the Order Lines

The out-of-the-box ETL components for Backlog and Cycle Lines assume that the Backlog, Picking, and Cycle Lines tables have been updated with shipping and invoicing information (for example, using Oracle EBS Interface programs). If the Oracle Order Line tables have not been updated with shipping and invoicing information, you need to update the out-of-the-box ETL components as follows:

To configure the Order Lines

  1. In Informatica PowerCenter Designer, open the PLP\Mappings folder.

  2. Using the Mapping Designer, edit the PLP_SalesCycleLinesFact_Load mapping and the PLP_SalesCycleLinesFact_Load_Full mappings as follows:

    1. Open the mapping in Mapping Designer.

    2. Double-click the SQ_W_SALES_ORDER_LINE_F Source Qualifier to open the Edit Transformations box.

    3. Display the Properties tab.

    4. For the SQL Query Transformation Attribute, select the down arrow in the Value field to display the SQL Editor box.

    5. In the SQL box, replace the SQL text 'X.TOTAL_SHIPPED_QTY' with 'PICKLINE.TOTAL_SHIPPED_QTY'.

    6. In the SQL box, replace the SQL text 'X.TOTAL_INVOICED_QTY' with 'IVCLINE.TOTAL_INVOICE_QTY'.

    7. Validate and save your changes to the repository.

  3. In DAC, do the following:

    1. Go to the Design view, and select the appropriate custom container.

    2. Display the Configuration Tags tab.

    3. Query for the tag 'Sales PLP Optional Tasks'.

    4. Display the Subject Areas subtab.

    5. Activate the appropriate Subject Areas by clearing the Inactive check box.

  4. In PowerCenter Designer, open the Source Qualifier in SDE_ORA_SalesPickLinesFact.

  5. Modify the SQL Query as follows:

    1. Add the following join condition:

      AND WSH_DELIVERY_DETAILS.DELIVERY_DETAIL_ID=WSH_DELIVERY_ASSIGNMENTS.DELIVERY_DETAIL_ID (+) AND WSH_DELIVERY_ASSIGNMENTS.DELIVERY_ID=WSH_NEW_DELIVERIES.DELIVERY_ID (+)

    2. Nest the following filter condition within the parenthesis:

      OR WSH_NEW_DELIVERIES.LAST_UPDATE_DATE > TO_DATE('$$LAST_EXTRACT_DATE', 'MM/DD/YYYY HH24:MI:SS')

    3. Change select OE_ORDER_LINES_ALL.ACTUAL_SHIPMENT_DATE to select WSH_NEW_DELIVERIES.INITIAL_PICKUP_DATE.

    4. Select WSH_NEW_DELIVERIES.LAST_UPDATE_DATE and link it to EXP_SALES_PCKLNS.LAST_UPDATE_DATE1.

6.3.2.10 How to Configure Oracle Financial Analytics for Oracle Supply Chain and Order Management Analytics

The Oracle Supply Chain and Order Management Analytics application uses tables that are also used in the Oracle Financial Analytics application.

For Oracle 11i and Oracle R12, you need to use the following configuration steps for Oracle Financial Analytics to configure Oracle Supply Chain and Order Management Analytics:

6.3.2.11 About Tracking Attribute Changes in Bookings

Changes in booked orders are tracked in the Booking Lines table (W_SALES_BOOKING_LINE_F), not in the Sales Order Lines table (W_SALES_ORDER_LINE_F). By default, the only changes tracked in the W_SALES_BOOKING_LINE_F table are changes in the ordered amount, ordered quantity, or Booking ID. By default, the Booking ID is defined as:

TO_CHAR(INP_LINE_ID)||'~'||TO_CHAR(INP_INV_ITEM_ID)||'~'||TO_CHAR(INP_WAREHOUSE_ID)

Any change in these fields results in a new row in the W_SALES_BOOKING_LINE_F table. Changes in any other fields do not result in a new row; instead, the existing information is overwritten with the changed information, and no history is kept for those other field values. If you want to track other changes, you can do so. For example, you may want to track changes to the sales representative who is handling the order. The ETL processes are prepackaged to overwrite sales representative changes; however, if you want to retain them, you must add the attribute to the Booking ID definition in the Booking ID expression in the source adapter mapplet (mplt_SA_ORA_SalesOrderLinesFact). The following section describes what happens if you modify the Booking ID to include the sales representative.
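The role of the Booking ID can be sketched as follows. This is an illustrative Python model of the change-detection behavior, not part of the shipped ETL; the function simply mirrors the default Booking ID expression shown above.

```python
def booking_id(line_id, inv_item_id, warehouse_id):
    """Mimics the default Booking ID expression:
    TO_CHAR(INP_LINE_ID)||'~'||TO_CHAR(INP_INV_ITEM_ID)||'~'||TO_CHAR(INP_WAREHOUSE_ID)
    """
    return f"{line_id}~{inv_item_id}~{warehouse_id}"

# A change that alters the Booking ID produces a new booking row:
old = booking_id(101, 5001, 10)
new = booking_id(101, 5001, 12)   # warehouse changed
assert old != new                  # tracked as a new row in W_SALES_BOOKING_LINE_F

# A salesperson change does NOT alter the default Booking ID, so the
# existing row is overwritten and no history is kept:
assert booking_id(101, 5001, 10) == booking_id(101, 5001, 10)
```

Adding an attribute (such as the salesperson) to the expression makes changes to that attribute produce a distinct Booking ID, and therefore a new tracked row.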

6.3.2.12 About Viewing the Data Warehouse Changes by Salesperson ID

Assume you want to track changes to the sales representative for bookings and de-bookings, so that you can better evaluate each representative's sales performance. To track changes by Salesperson ID, you have to modify the VAR_BOOKING_ID to use the value:

TO_CHAR(INP_LINE_ID)||'~'||TO_CHAR(INP_INV_ITEM_ID)||'~'||TO_CHAR(INP_WAREHOUSE_ID)||'~'||TO_CHAR(INP_SALESREP_ID)

For example, to edit the VAR_BOOKING_ID value, do the following:

  1. In Informatica PowerCenter Designer, open the mplt_SA_ORA_SalesOrderLinesFact mapplet in the Mapplet Designer.

  2. Double-click the MAPI_SALES_ORDLNS transformation to open the Edit Transformation box.

  3. Display the Ports tab.

  4. Select the EXP_SALES_ORDLNS transformation.

  5. Edit the expression for the VAR_BOOKING_ID port.

The following paragraphs and tables describe what happens in the source system and the W_SALES_BOOKING_LINE_F table when you change sales representatives under this scenario.

Day 1: One order is placed with Salesperson 1001. The source system displays the information as shown in Table 6-9.

Table 6-9 Oracle 11i and Oracle R12: Source System Table Row After Day One Activity

Sales Order Number  Sales Order Line Number  Salesperson ID  Quantity  Selling Price  Date
1                   1                        1001            100       25             1-June-2000


The row in Table 6-9 is entered into the IA Bookings table (W_SALES_BOOKING_LINE_F) as shown in Table 6-10.

Table 6-10 Oracle 11i and Oracle R12: W_SALES_BOOKING_LINE_F Table Row After Day One Activity

SALES_ORDER_NUM  SALES_ORDER_ITEM  SALESREP_ID  SALES_QTY  NET_DOC_AMT  BOOKED_ON_DT
1                1                 1001         100        2500         1-June-2000


Day 2: Salesperson 1002 takes over this order, replacing Salesperson 1001. Thus, the salesperson associated with the order is changed from 1001 to 1002 in the source system. The row in the source system looks like the row shown in Table 6-11.

Table 6-11 Oracle 11i and Oracle R12: Source System Table Row After Day Two Activity

Sales Order Number  Sales Order Line Number  Salesperson ID  Quantity  Selling Price  Date
1                   1                        1002            100       25             2-June-2000


The SIL_SalesBookingLinesFact_Load_OrderLine_Credit mapping, which also writes to the booking table, now de-books the old line, and SIL_SalesBookingLinesFact_Load_OrderLine_Debit inserts a new row into the W_SALES_BOOKING_LINE_F booking table. On day two, the rows in the W_SALES_BOOKING_LINE_F table look like the rows shown in Table 6-12.

Table 6-12 Oracle 11i and Oracle R12: W_SALES_BOOKING_LINE_F Table Row After Day Two Activity

SALES_ORDER_NUM  SALES_ORDER_ITEM  SALESREP_ID  SALES_QTY  NET_DOC_AMT  BOOKED_ON_DT
1                1                 1001         100        2500         1-June-2000
1                1                 1001         -100       -2500        2-June-2000
1                1                 1002         100        2500         2-June-2000
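The day-two processing above can be sketched as follows: the credit mapping writes a negating row for the old booking, and the debit mapping writes a new row with the changed attribute. This is an illustrative Python model only; the column names are abbreviated from the table above and the logic is a simplification of the actual mappings.

```python
def rebook(old_row, new_salesrep, on_date):
    """Return the two rows written on day two: a de-booking (credit)
    row that negates the old booking, and a new (debit) booking row
    carrying the changed salesperson."""
    credit = dict(old_row,
                  SALES_QTY=-old_row["SALES_QTY"],
                  NET_DOC_AMT=-old_row["NET_DOC_AMT"],
                  BOOKED_ON_DT=on_date)
    debit = dict(old_row,
                 SALESREP_ID=new_salesrep,
                 BOOKED_ON_DT=on_date)
    return credit, debit

day1 = {"SALES_ORDER_NUM": 1, "SALES_ORDER_ITEM": 1, "SALESREP_ID": 1001,
        "SALES_QTY": 100, "NET_DOC_AMT": 2500, "BOOKED_ON_DT": "1-June-2000"}
credit, debit = rebook(day1, 1002, "2-June-2000")
assert credit["SALES_QTY"] == -100 and credit["SALESREP_ID"] == 1001
assert debit["SALES_QTY"] == 100 and debit["SALESREP_ID"] == 1002
```

Note that the net quantity and amount across all three rows remain 100 and 2500, so aggregate booking totals are preserved while the salesperson history is retained.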


6.3.2.13 How to Configure Different Types of Backlog Calculations

Backlog information is stored in the W_SALES_BACKLOG_LINE_F and W_SALES_BACKLOG_HISTORY_F tables. This topic describes how to modify the type of information stored in these tables. Many types of backlog exist in the Oracle Supply Chain and Order Management Analytics application—financial backlog, operational backlog, delinquent backlog, scheduled backlog, unscheduled backlog, and blocked backlog. Each type of backlog is defined by two particular dates in the sales process; therefore, backlog calculations draw on multiple fact tables.

For example, financial backlog records which items have been ordered but payment has not been received. Thus, to calculate the number of financial backlog items, you use the Sales Order Lines table (to determine which items have been ordered) and the Sales Invoice Lines table (to see which orders have been paid for). Using these two tables, you can determine the number of items and the value of those items that are on financial backlog.
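The financial backlog calculation described above can be sketched as a join between ordered and invoiced quantities. This is an illustrative Python model, not the shipped ETL logic; the order keys and quantities are made-up sample data.

```python
# (order, line) -> quantity, standing in for the Sales Order Lines
# and Sales Invoice Lines fact tables respectively.
ordered  = {("SO1", 1): 10, ("SO1", 2): 5}
invoiced = {("SO1", 1): 6}

# An order line is on financial backlog for whatever quantity has been
# ordered but not yet invoiced.
backlog = {key: qty - invoiced.get(key, 0)
           for key, qty in ordered.items()
           if qty - invoiced.get(key, 0) > 0}

assert backlog == {("SO1", 1): 4, ("SO1", 2): 5}
```

Multiplying each backlog quantity by the line's selling price would give the backlog value in the same way.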

6.3.2.14 Adding Closed Orders to Backlog Calculations

By default, the Oracle Supply Chain and Order Management Analytics application extracts only open sales orders from the Sales Order Lines (W_SALES_ORDER_LINE_F) and Sales Schedule Lines (W_SALES_SCHEDULE_LINE_F) tables for the backlog calculations that populate the Backlog tables. Open sales orders are defined as orders that are neither canceled nor complete. Only open orders are extracted because, in most organizations, closed orders are no longer part of backlog. However, if you want to extract sales orders that are marked as closed, you can remove the default filter condition from the extract mapping.

For example, assume your customer orders ten items. Six items are invoiced and shipped, but four items are placed on operational and financial backlog. This backlog status continues until one of two things happens:

  • The items are eventually shipped and invoiced.

  • The remainder of the order is canceled.

If you choose to extract sales orders that are flagged as closed, you must remove the open-order condition from the Backlog flag expressions. To do so, use the following procedure.

The BACKLOG_FLAG in the W_SALES_ORDER_LINE_F table is also used to identify which sales orders are eligible for backlog calculations. By default, all sales order types have their Backlog flag set to Y. As a result, all sales orders are included in backlog calculations.

To remove open order extract filters

  1. In Informatica PowerCenter Designer, open the SDE_ORA115<Version>_Adaptor or SDE_ORAR12_Adaptor folder.

  2. Open the mplt_BC_ORA_SalesOrderLinesFact mapplet in the Mapplet Designer.

  3. Double-click on the EXP_SALES_ORDLNS, and display the Ports tab.

  4. Edit the VAR_OPR_BACKLOG_FLG and remove the OPEN_FLAG='Y'.

  5. Edit the VAR_FIN_BACKLOG_FLG and remove the OPEN_FLAG='Y'.

  6. Open the mplt_BC_ORA_SalesScheduleLinesFact mapplet in the Mapplet Designer.

  7. Double-click on the EXP_SALES_SCHLNS, and display the Ports tab.

  8. Edit the VAR_OPR_BACKLOG_FLG and remove the OPEN_FLAG='Y'.

  9. Edit the VAR_FIN_BACKLOG_FLG and remove the OPEN_FLAG='Y'.

  10. Validate and save your changes to the repository.

  11. Open the PLP folder.

  12. Open the mappings PLP_SalesBacklogLinesFact_LoadOrderLines and PLP_SalesBacklogLinesFact_LoadScheduleLines.

  13. Remove the condition W_STATUS_CODE <> 'Closed' from the SQL Query in the Source Qualifier.

6.3.2.15 How to Configure the Bill of Materials Explosion Options

The Bill of Materials (BOM) functional area allows you to analyze the components that comprise the finished goods. BOM allows you to determine how many products use a certain component. It also allows you to get visibility into the complete BOM hierarchy for a finished product. In order to explode BOM structures, certain objects need to be deployed in your EBS system.

Note:

If you plan to consume analytics on Bill of Materials (BOM), you need to include the DAC Subject Area "Supply Chain - BOM Items" in your ETL Execution Plan. However, the ETL process for extracting BOM data from Oracle EBS into the OBAW data warehouse is not designed for very high volumes of BOM data. There are known performance issues with the initial ETL load for very large BOM data sizes (for example, more than 100,000 rows of data in EBS BOM). Oracle recommends that you restrict the BOM data size in the initial ETL load by applying appropriate filters (for example, on Time or Inventory Organization).

You can explode the BOM structure with three different options:

  • All. All the BOM components are exploded regardless of their effective date or disable date. To explode a BOM component is to expand the BOM tree structure.

  • Current. The incremental extract logic considers any changed components that are currently effective, any components that are effective after the last extraction date, or any components that are disabled after the last extraction date.

  • Current and Future. All the BOM components that are effective now or in the future are exploded. The disabled components are left out.

These options are controlled by the EXPLODE_OPTION variable, which is preconfigured with a value of 2 (explode the Current BOM structure).
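The three explode options reduce to date predicates on each component's effectivity and disable dates. The following Python sketch is illustrative only; it models the selection logic described above, not the actual stored procedure code.

```python
from datetime import date

def include_component(option, effective, disabled, today):
    """Decide whether a BOM component is exploded under each
    EXPLODE_OPTION value: 1=All, 2=Current, 3=Current and Future.
    'disabled' is None when the component has no disable date."""
    if option == 1:                # All: ignore dates entirely
        return True
    currently_active = (effective <= today
                        and (disabled is None or disabled > today))
    if option == 2:                # Current: effective now, not disabled
        return currently_active
    if option == 3:                # Current and Future: exclude only
        return disabled is None or disabled > today   # disabled components
    raise ValueError(f"unknown EXPLODE_OPTION {option}")

today = date(2009, 6, 1)
future = (date(2009, 7, 1), None)  # effective next month, never disabled
assert include_component(1, *future, today)       # All: included
assert not include_component(2, *future, today)   # Current: excluded
assert include_component(3, *future, today)       # Current and Future: included
```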

In the SDE_ORA_BOM_Explosion mapping, the BOMPEXPL_WRAPPER_OBIA stored procedure calls the bompexpl.exploder_userexit stored procedure to explode the BOM structure. The table below lists the input variables for the bompexpl.exploder_userexit stored procedure.

Table 6-13 Variables for the bompexpl.exploder_userexit Stored Procedure

Input Variable     Preconfigured Value                         Description
VERIFY_FLAG        0                                           A verify flag with a value of 1 only applies to standard BOM.
ORG_ID             ORGANIZATION_ID                             Organization ID.
ORDER_BY           1                                           Controls the order of the records: 1—Operation Sequence Number, Item Number; 2—Item Number, Operation Sequence Number.
GRP_ID             Negative Sequence ID (-1, -2, and so on)    Unique value to identify the current explosion.
SESSION_ID         Negative Sequence ID (-1, -2, and so on)    Unique value to identify the current session.
LEVELS_TO_EXPLODE  10                                          Levels to explode.
BOM_OR_ENG         1                                           1—BOM; 2—ENG.
IMPL_FLAG          1                                           1—Implemented Only; 2—Implemented and Non-implemented.
PLAN_FACTOR        2                                           1—Yes; 2—No.
EXPLODE_OPTION     2                                           1—All; 2—Current; 3—Current and Future.
MODULE             2                                           1—Costing; 2—BOM; 3—Order Entry; 4—ATO; 5—WSM.
CST_TYPE_ID        0                                           Cost type ID for cost explosion.
STD_COMP_FLAG      0                                           1—Explode only standard components; 2—All components.
EXPL_QTY           1                                           Explosion quantity.
ITEM_ID            ROUND(TO_DECIMAL(PRODUCT_ID))               Item ID of assembly to explode.
ALT_DESG           ALTERNATE_BOM_DESIGNATOR                    Alternate routing designator.
COMP_CODE          NULL                                        Concatenated component code.
REV_DATE           TO_CHAR(CREATION_DT, 'YYYY/MM/DD HH24:MI')  Explosion date in YYYY/MM/DD HH24:MI format.


There are five different BOM types in a source system: 1 - Model, 2 - Option Class, 3 - Planning, 4 - Standard, and 5 - Product Family. By default, only the Standard BOM type is extracted and exploded.

To configure the BOM explosion to the All option

  1. In Informatica PowerCenter Designer, open the SDE_ORA<Version>_Adaptor.

  2. Go to Mapplet Designer, and open mplt_BC_ORA_BOMHeaderDimension.

  3. Double-click the SQL qualifier SQ_BOM_INVENTORY_COMPONENTS to open the Edit Transformations dialog, display the Properties tab, and open the value for SQL Query.

  4. Modify the following default Where condition from:

    ((
    
       /* CURRENT valid component changed */ 
    
       INV.LAST_UPDATE_DATE > TO_DATE('$$LAST_EXTRACT_DATE','MM/DD/YYYY HH24:MI:SS') AND (INV.EFFECTIVITY_DATE <= TO_DATE('$$CURRENT_DATE','MM/DD/YYYY HH24:MI:SS') and (INV.DISABLE_DATE > TO_DATE('$$CURRENT_DATE','MM/DD/YYYY HH24:MI:SS') OR INV.DISABLE_DATE IS NULL))
    
       OR
    
       /* Component that became effective after last extract date and before today's extract, for CURRENT Option*/
    
       INV.EFFECTIVITY_DATE between TO_DATE('$$LAST_EXTRACT_DATE','MM/DD/YYYY HH24:MI:SS') and TO_DATE('$$CURRENT_DATE','MM/DD/YYYY HH24:MI:SS')
    
       OR
    
       /* Component that become disabled after last extract date and before today's extract, for CURRENT and CURRENT-FUTURE Option*/
    
       INV.DISABLE_DATE between TO_DATE('$$LAST_EXTRACT_DATE','MM/DD/YYYY HH24:MI:SS') and TO_DATE('$$CURRENT_DATE','MM/DD/YYYY HH24:MI:SS')
    
       )
    
       OR BOM.LAST_UPDATE_DATE > TO_DATE('$$LAST_EXTRACT_DATE','MM/DD/YYYY HH24:MI:SS')) 
    
    GROUP BY
    

    To:

    (INV.LAST_UPDATE_DATE > TO_DATE('$$LAST_EXTRACT_DATE','MM/DD/YYYY HH24:MI:SS') 
    
       OR BOM.LAST_UPDATE_DATE > TO_DATE('$$LAST_EXTRACT_DATE','MM/DD/YYYY HH24:MI:SS'))
    
    GROUP BY
    
  5. Click Apply, and Validate the mapping and save your changes to the repository.

  6. Navigate to the <OracleBI>\dwrep\Informatica\Stored_Procedure_Scripts\oracle directory, and edit OLTP_bompexpl_wrapper_obia_<Ver>.sql.

  7. Change the EXPLODE_OPTION value to 1, and save the file.

  8. Run the script to create the stored procedure.

To configure the BOM explosion to the Current and Future option

  1. In Informatica PowerCenter Designer, open the SDE_ORA<Version>_Adaptor.

  2. Go to Mapplet Designer, and open mplt_BC_ORA_BOMHeaderDimension.

  3. Double-click the SQL qualifier SQ_BOM_INVENTORY_COMPONENTS to open the Edit Transformations dialog, display the Properties tab, and open the value for SQL Query.

  4. Modify the following default Where condition from:

    ((
    
    /* CURRENT valid component changed */
    
    INV.LAST_UPDATE_DATE > TO_DATE('$$LAST_EXTRACT_DATE','MM/DD/YYYY HH24:MI:SS') AND (INV.EFFECTIVITY_DATE <= TO_DATE('$$CURRENT_DATE','MM/DD/YYYY HH24:MI:SS') and (INV.DISABLE_DATE > TO_DATE('$$CURRENT_DATE','MM/DD/YYYY HH24:MI:SS') OR INV.DISABLE_DATE IS NULL))
    
    OR
    
    /* Component that became effective after last extract date and before today's extract, for CURRENT Option*/
    
    INV.EFFECTIVITY_DATE between TO_DATE('$$LAST_EXTRACT_DATE','MM/DD/YYYY HH24:MI:SS') and TO_DATE('$$CURRENT_DATE','MM/DD/YYYY HH24:MI:SS')
    
    OR
    
    /* Component that become disabled after last extract date and before today's extract, for CURRENT and CURRENT-FUTURE Option*/
    
    INV.DISABLE_DATE between TO_DATE('$$LAST_EXTRACT_DATE','MM/DD/YYYY HH24:MI:SS') and TO_DATE('$$CURRENT_DATE','MM/DD/YYYY HH24:MI:SS')
    
    )
    
    OR BOM.LAST_UPDATE_DATE > TO_DATE('$$LAST_EXTRACT_DATE','MM/DD/YYYY HH24:MI:SS'))
    
    GROUP BY
    

    To:

    ((
    
    INV.LAST_UPDATE_DATE > TO_DATE('$$LAST_EXTRACT_DATE','MM/DD/YYYY HH24:MI:SS')
    
    AND ((INV.DISABLE_DATE > TO_DATE('$$CURRENT_DATE','MM/DD/YYYY HH24:MI:SS') OR INV.DISABLE_DATE IS NULL))
    
    OR
    
    INV.DISABLE_DATE between TO_DATE('$$LAST_EXTRACT_DATE','MM/DD/YYYY HH24:MI:SS') and TO_DATE('$$CURRENT_DATE','MM/DD/YYYY HH24:MI:SS')
    
    )
    
    OR BOM.LAST_UPDATE_DATE > TO_DATE('$$LAST_EXTRACT_DATE','MM/DD/YYYY HH24:MI:SS'))
    
     GROUP BY
    
  5. Click Apply, and Validate the mapping and save your changes to the repository.

  6. Navigate to the <OracleBI>\dwrep\Informatica\Stored_Procedure_Scripts\oracle directory, and edit OLTP_bompexpl_wrapper_obia_<Ver>.sql.

  7. Change the EXPLODE_OPTION value to 3, and save the file.

  8. Run the script to create the stored procedure.

To configure the BOM type

  1. In Informatica PowerCenter Designer, open the SDE_ORA<Version>_Adaptor.

  2. Open the mplt_BC_ORA_BOMHeaderDimension mapplet.

  3. Double-click the SQL qualifier SQ_BOM_INVENTORY_COMPONENTS to open the Edit Transformations dialog, display the Properties tab, and open the value for SQL Query.

  4. Modify the BOM_ITEM_TYPE section in the Where statement.

    Change the number to your BOM type. For example, change the number to 3 for a Planning BOM type.

    Where INV.BOM_ITEM_TYPE = 3 AND
    M.BOM_ITEM_TYPE = 3 AND
    

    Note:

    You can also remove these two filters to extract all types of BOM.
  5. Click Apply, and Validate the mapping and save your changes to the repository.

6.3.2.16 How to Configure the Left Bound and Right Bound Calculation Option

You can use the left bound and right bound calculation to expedite some reporting requirements. For example, you can find the components in a subassembly within a finished product. Left bound and right bound values are stored in the W_BOM_ITEM_F table; each exploded BOM node has one row in this table. The COMPUTE_BOUNDS stored procedure traverses the exploded BOM tree structure and calculates the left bound and right bound for each node. By default, the COMPUTE_BOUNDS stored procedure is off and the W_BOM_ITEM_F.LEFT_BOUNDS and W_BOM_ITEM_F.RIGHT_BOUNDS columns are empty.

The figure below illustrates a sample BOM structure with the left bound and right bound values listed for each node. To find all the components under node B, you select the components whose top product key value is A, whose left bound value is greater than 2, and whose right bound value is less than 17.

Figure 6-4 Sample BOM Structure

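The left bound/right bound technique is the nested-set model: a depth-first traversal numbers each node on entry (left bound) and exit (right bound), so a node's descendants are exactly those whose bounds fall inside its own. The following Python sketch is illustrative only (the actual COMPUTE_BOUNDS procedure operates in SQL over the exploded BOM); the small sample tree is made up for the example.

```python
def compute_bounds(tree, root):
    """Assign (left, right) bounds to each node by depth-first
    traversal, as in the nested-set model used by COMPUTE_BOUNDS."""
    bounds, counter = {}, [0]

    def visit(node):
        counter[0] += 1
        left = counter[0]              # number assigned on entry
        for child in tree.get(node, []):
            visit(child)
        counter[0] += 1
        bounds[node] = (left, counter[0])  # number assigned on exit

    visit(root)
    return bounds

# Sample BOM: A is the finished product; B is a subassembly with D and E.
tree = {"A": ["B", "C"], "B": ["D", "E"]}
b = compute_bounds(tree, "A")

# All components under node B: bounds strictly inside B's bounds.
lb, rb = b["B"]
descendants = {n for n, (l, r) in b.items() if l > lb and r < rb}
assert descendants == {"D", "E"}
```

This is why a single range comparison on LEFT_BOUNDS and RIGHT_BOUNDS retrieves an entire subassembly without recursive joins.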

You can use the following procedure to turn on the left bound and the right bound calculation and populate the W_BOM_ITEM_F.LEFT_BOUNDS and W_BOM_ITEM_F.RIGHT_BOUNDS columns.

Note: Before you run an ETL using a BOM, you must compile and deploy the SQL code in Compute_Bounds_Ora11i.sql (for more information, see Section 3.1.5, "How to Deploy Stored Procedures").

To configure the left bound and right bound calculation option

  1. In Informatica PowerCenter Designer, navigate to the SILOS folder and edit the SIL_BOMItemFact mapping.

  2. Double-click the COMPUTE_BOUNDS stored procedure transformation to open the Edit Transformations dialog, and display the Properties tab.

  3. Change the value of the Call Text attribute to 'compute_bounds_ora11i(1)'.

  4. Click Apply.

  5. Validate the mapping, and save your changes to the repository.

6.3.3 Configuration Steps for Oracle Supply Chain and Order Management Analytics for PeopleSoft

Not applicable to Oracle BI Applications Version 7.9.6.1.

6.3.4 Configuration Steps for Oracle Supply Chain and Order Management Analytics for Universal

Not applicable to Oracle BI Applications Version 7.9.6.1.