Oracle® Business Intelligence Applications Configuration Guide for Informatica PowerCenter Users Release 7.9.6.3 Part Number E19039-01
This section describes how to configure Oracle Supply Chain and Order Management Analytics. It contains the following topics:
Section 6.1, "Overview of Oracle Supply Chain and Order Management Analytics"
Section 6.2, "Configuration Required Before a Full Load for Oracle Supply Chain and Order Management Analytics"
Section 6.3, "Configuration Steps for Controlling Your Data Set"
The Oracle Supply Chain and Order Management Analytics application enables you to analyze:
Bill of materials.
Bookings.
Financial and Operational Backlogs.
Inventory held by an organization.
Inventory movements in, out, and through manufacturing plants, distribution centers, or storage locations.
Invoices.
The movement of sales orders through different stages of the sales cycle.
The Oracle Supply Chain and Order Management Analytics application consists of orders, invoices, backlogs, and inventory. Sales orders are the entry point for the sales process, invoices are the exit point from the fulfillment process, and backlogs are points of congestion in your fulfillment process. This coverage includes insight into which items are booked, backlogged, and invoiced, enabling you to evaluate the sales performance of individual sales representatives or departments. The Oracle Supply Chain and Order Management Analytics application also provides information on inventory transactions, inventory balances, bill of materials, and customer and supplier returns. This enables companies to monitor inventory level trends against sales performance, improve cost exposure, increase turnover through inventory level reduction and increased velocity, deploy inventory at the right place at the right time, and better understand customer and supplier returns to maintain quality.
The Oracle Supply Chain and Order Management Analytics application also requires post-load processing mappings to populate its aggregate and derived tables.
This section contains configuration steps that you need to perform on Oracle Supply Chain and Order Management Analytics before you do a full data load. It contains the following topics:
This section contains configuration steps that apply to all source systems. It contains the following topics:
Note:
For configuration steps that apply to all analytics modules (for example, Oracle Financial Analytics, Oracle HR Analytics, or Oracle Sales Analytics), see Chapter 3, "Configuring Common Areas and Dimensions."

If you have not licensed or do not want to implement Oracle Projects, you can disable the Project dimensions.
To disable project dimensions:
In DAC, go to the Design view, and select the appropriate custom container from the drop-down list.
Display the Configuration Tags tab.
Query for the tag Enable Project Dimensions.
Display the Subject Areas subtab.
Select the Inactive check box for the Subject Area named 'Supply Chain - Inventory Transactions'.
In the upper pane, display the Subject Areas tab, and select the Subject Area named 'Projects'.
In lower pane, display the Configuration Tags tab.
Select the Inactive check box for the Configuration Tag named 'Enable Project Dimensions'.
You need to enable the delete and primary extract sessions for Oracle Supply Chain and Order Management Analytics. For more information, see Section 17.8.3.2, "Enabling Delete and Primary Extract Sessions." For general information about primary extract and delete mappings, see Section 17.8, "Configuring Loads."
This section contains configuration steps required before you do a full data load that apply to Oracle EBS.
Section 6.2.2.2, "How to Configure Invoice Type Domain Values"
Section 6.2.2.3, "How to Configure Pick Types Domain Values"
Section 6.2.2.4, "How to Configure Order Types Domain Values"
Section 6.2.2.5, "How to Configure Pick Status Domain Values"
Section 6.2.2.6, "How to Configure Pay Method Domain Values"
Section 6.2.2.7, "How to Configure Invoice Status Domain Values"
Section 6.2.2.8, "How to Configure Order Overall Status Domain Values"
Section 6.2.2.9, "How to Configure Movement Types Domain Values"
Section 6.2.2.10, "How to Configure Invoice Source Type Domain Values"
Section 6.2.2.11, "How to Configure Quantity Types for Product Transactions"
Section 6.2.2.12, "How to Deploy Objects in Oracle EBS for Exploding the BOM"
Note:
If you plan to consume analytics on Bill of Materials (BOM), you need to include the DAC Subject Area "Supply Chain - BOM Items" in your ETL Execution Plan. However, the ETL process for extracting BOM data from Oracle EBS into the OBAW data warehouse is not designed for very high volumes of BOM data. There are known performance issues with the initial ETL load for very large BOM data sizes (for example, more than 100,000 rows of BOM data in EBS). Oracle recommends that you restrict the BOM data size in the initial ETL load by applying appropriate filters (for example, On time or Inventory Organization).

Table 6-1 lists the CSV worksheet files and the domain values for Oracle Supply Chain and Order Management Analytics that are located in the $PMServer\LkpFiles folder (for example, INFA_HOME\server\infa_shared\LkpFiles).
Table 6-1 Domain Values and CSV Worksheet Files for Oracle Supply Chain and Order Management Analytics
| Worksheet File Name | Description | Session |
| --- | --- | --- |
| domainValues_InvoiceTypes_ora11i.csv | Lists the Invoice Document Type column and the corresponding domain values for the Oracle 11i or Oracle R12 application. For more information about updating the values in this file, see Section 6.2.2.2, "How to Configure Invoice Type Domain Values." | SDE_ORA_TransactionTypeDimension_SalesInvoiceLines |
| domainValues_PickTypes_ora11i.csv | Lists the Picking Document Type column and the corresponding domain values for the Oracle 11i or Oracle R12 application. For more information about updating the values in this file, see Section 6.2.2.3, "How to Configure Pick Types Domain Values." | SDE_ORA_TransactionTypeDimension_SalesPickLines |
| domainValues_OrderTypes_ora11i.csv | Lists the Order Document Type column and the corresponding domain values for the Oracle 11i or Oracle R12 application. For more information about updating the values in this file, see Section 6.2.2.4, "How to Configure Order Types Domain Values." | SDE_ORA_TransactionTypeDimension_SalesOrderLines |
| domainValues_PickStatus_ora11i.csv | Lists the Picking Status Code and the Status Desc columns, and the corresponding domain values for the Oracle 11i or Oracle R12 application. For more information about updating the values in this file, see Section 6.2.2.5, "How to Configure Pick Status Domain Values." | SDE_ORA_StatusDimension_SalesPickLines |
| domainValues_PayMethodCode_ora.csv | Lists the method code column and the corresponding domain value for the application. For more information about updating the values in this file, see Section 6.2.2.6, "How to Configure Pay Method Domain Values." | SDE_ORA_PaymentMethodDimension |
| domainValues_InvoiceStatus_ora11i.csv | Lists the Invoice Status Code and the Status Desc columns, and the corresponding domain values for the Oracle 11i or Oracle R12 application. For more information about updating the values in this file, see Section 6.2.2.7, "How to Configure Invoice Status Domain Values." | SDE_ORA_StatusDimension_SalesInvoiceLine |
| domainValues_OrderOverallStatus_ora11i.csv | Lists the Order Status Code column and the corresponding domain values for the Oracle 11i or Oracle R12 application. For more information about updating the values in this file, see Section 6.2.2.8, "How to Configure Order Overall Status Domain Values." | SDE_ORA_StatusDimension_SalesOrderLineCycle |
| domainValues_InvoiceSrcTypes_ora11i.csv | Lists the Invoice Source Code and the Source Desc columns, and the corresponding domain values for the Oracle 11i or Oracle R12 application. For more information about updating the values in this file, see Section 6.2.2.10, "How to Configure Invoice Source Type Domain Values." | SDE_ORA_TransactionSourceDimension |
For general information about domain values with CSV worksheet files, see the section about domain values in the section entitled "Customizing the Oracle Business Analytics Warehouse" in Oracle Business Intelligence Applications Installation Guide for Informatica PowerCenter Users.
Note: Where SQL code is provided in the following sections, you might need to change the language specified in the FND_LOOKUP_VALUES.LANGUAGE = '<language>' condition to match the base language of your source system (the examples below use 'US').
This section explains how to configure Invoice Type domain values using the domainValues_InvoiceTypes_ora11i.csv file.
To configure Invoice Type domain values:
Identify the Invoice Types in your Oracle 11i source system by using the following SQL:
SELECT DISTINCT RA_CUST_TRX_TYPES_ALL.TYPE FROM RA_CUST_TRX_TYPES_ALL ORDER BY 1;
Using a text editor, open the domainValues_InvoiceTypes_ora11i.csv file, located in the $PMServer\LkpFiles directory (for example, INFA_HOME\server\infa_shared\LkpFiles).
Copy the TYPE column to the XACT_TYPE_CODE column in the file.
The data must be copied starting from the second line. The first line is the column header.
Map each Transaction Type Code to one domain value.
For more information on Transaction Type Code domain values, see Oracle Business Analytics Warehouse Data Model Reference.
Save and close the file.
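For illustration only, a completed domainValues_InvoiceTypes_ora11i.csv might look like the following sketch. The TYPE values (CM, INV) and the mapped domain values are placeholders; take the actual header row from your copy of the file and the valid Transaction Type Code domain values from Oracle Business Analytics Warehouse Data Model Reference:

XACT_TYPE_CODE,W_XACT_TYPE_CODE,W_XACT_TYPE_DESC
CM,Credit Memo,Credit Memo
INV,Standard Invoice,Standard Invoice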
This section explains how to configure Pick Types domain values using the domainValues_PickTypes_ora11i.csv file.
To configure Pick Types domain values:
Using a text editor, open the domainValues_PickTypes_ora11i.csv file, located in the $PMServer\LkpFiles directory (for example, INFA_HOME\server\infa_shared\LkpFiles).
Put 'STANDARD' in the XACT_TYPE_CODE column in the file.
The data must be copied starting from the second line. The first line is the column header.
Map each Transaction Type Code to one domain value.
For more information on Transaction Type Code domain values, see Oracle Business Analytics Warehouse Data Model Reference.
Save and close the file.
This section explains how to configure Order Types domain values using the domainValues_OrderTypes_ora11i.csv file.
To configure Order Types domain values:
Identify the Order Types in your Oracle 11i source system by using the following SQL:
SELECT DISTINCT FND_LOOKUP_VALUES.LOOKUP_CODE FROM FND_LOOKUP_VALUES WHERE FND_LOOKUP_VALUES.VIEW_APPLICATION_ID = 660 AND FND_LOOKUP_VALUES.LANGUAGE = 'US' AND FND_LOOKUP_VALUES.LOOKUP_TYPE = 'LINE_CATEGORY' ORDER BY 1;
Using a text editor, open the domainValues_OrderTypes_ora11i.csv file, located in the $PMServer\LkpFiles directory (for example, INFA_HOME\server\infa_shared\LkpFiles).
Copy the LOOKUP_CODE column to the XACT_TYPE_CODE column in the file.
The data must be copied starting from the second line. The first line is the column header.
Map each Transaction Type Code to one domain value.
For more information on Transaction Type Code domain values, see Oracle Business Analytics Warehouse Data Model Reference.
Save and close the file.
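For illustration, if the SQL above returned the LINE_CATEGORY lookup codes ORDER and RETURN, the completed file might contain rows such as the following sketch. The domain values shown are placeholders; use the actual header row in your copy of the file and the valid domain values from Oracle Business Analytics Warehouse Data Model Reference:

XACT_TYPE_CODE,W_XACT_TYPE_CODE,W_XACT_TYPE_DESC
ORDER,SALES_ORDER,Sales Order
RETURN,SALES_RETURN,Sales Return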
This section explains how to configure Pick Status domain values using the domainValues_PickStatus_ora11i.csv file.
To configure Pick Status domain values:
Identify the Pick Statuses in your Oracle 11i source system by using the following SQL:
SELECT DISTINCT FND_LOOKUP_VALUES.LOOKUP_CODE FROM FND_LOOKUP_VALUES WHERE FND_LOOKUP_VALUES.LOOKUP_TYPE= 'PICK_STATUS' AND FND_LOOKUP_VALUES.LANGUAGE = 'US' AND FND_LOOKUP_VALUES.VIEW_APPLICATION_ID = 665 AND FND_LOOKUP_VALUES.SECURITY_GROUP_ID = 0 ORDER BY 1;
Using a text editor, open the domainValues_PickStatus_ora11i.csv file, located in the $PMServer\LkpFiles directory (for example, INFA_HOME\server\infa_shared\LkpFiles).
Copy the LOOKUP_CODE column to the STATUS_CODE column in the file.
The data must be copied starting from the second line. The first line is the column header.
Map each Status Code to one domain value.
For more information on Status Code domain values, see Oracle Business Analytics Warehouse Data Model Reference.
Save and close the file.
This section explains how to configure Pay Method domain values using the domainValues_PayMethodCode_ora.csv file.
To configure Pay Method domain values:
Identify the Pay Methods in your Oracle 11i source system by using the following SQL:
SELECT DISTINCT FND_LOOKUP_VALUES.LOOKUP_CODE FROM FND_LOOKUP_VALUES WHERE LOOKUP_TYPE = 'PAYMENT TYPE' AND VIEW_APPLICATION_ID = 660 AND LANGUAGE = 'US' AND FND_LOOKUP_VALUES.SECURITY_GROUP_ID = 0 ORDER BY 1;
Using a text editor, open the domainValues_PayMethodCode_ora.csv file, located in the $PMServer\LkpFiles directory (for example, INFA_HOME\server\infa_shared\LkpFiles).
Copy the LOOKUP_CODE column to the METHOD_CODE column in the file.
The data must be copied starting from the second line. The first line is the column header.
Map each Method Code to one domain value.
For more information on Method Code domain values, see Oracle Business Analytics Warehouse Data Model Reference.
Save and close the file.
This section explains how to configure Invoice Status domain values using the domainValues_InvoiceStatus_ora11i.csv file.
To configure Invoice Status domain values:
Identify the Invoice Statuses in your Oracle 11i source system by using the following SQL:
SELECT DISTINCT FND_LOOKUP_VALUES.LOOKUP_CODE FROM FND_LOOKUP_VALUES WHERE FND_LOOKUP_VALUES.LOOKUP_TYPE= 'INVOICE_TRX_STATUS' AND FND_LOOKUP_VALUES.LANGUAGE = 'US' AND FND_LOOKUP_VALUES.VIEW_APPLICATION_ID = 222 AND FND_LOOKUP_VALUES.SECURITY_GROUP_ID = 0 ORDER BY 1;
Using a text editor, open the domainValues_InvoiceStatus_ora11i.csv file, located in the $PMServer\LkpFiles directory (for example, INFA_HOME\server\infa_shared\LkpFiles).
Copy the LOOKUP_CODE column to the STATUS_CODE column in the file.
The data must be copied starting from the second line. The first line is the column header.
Map each Status Code to one domain value.
For more information on Status Code domain values, see Oracle Business Analytics Warehouse Data Model Reference.
Save and close the file.
This section explains how to configure Order Overall Status domain values using the domainValues_OrderOverallStatus_ora11i.csv file.
To configure Order Overall Status domain values:
Identify the Order Overall Statuses in your Oracle 11i source system by using the following SQL:
SELECT DISTINCT FND_LOOKUP_VALUES.LOOKUP_CODE FROM FND_LOOKUP_VALUES WHERE FND_LOOKUP_VALUES.LOOKUP_TYPE = 'LINE_FLOW_STATUS' AND FND_LOOKUP_VALUES.LANGUAGE = 'US' AND FND_LOOKUP_VALUES.VIEW_APPLICATION_ID = 660 AND FND_LOOKUP_VALUES.SECURITY_GROUP_ID = 0 ORDER BY 1;
Using a text editor, open the domainValues_OrderOverallStatus_ora11i.csv file, located in the $PMServer\LkpFiles directory (for example, INFA_HOME\server\infa_shared\LkpFiles).
Copy the LOOKUP_CODE column to the STATUS_CODE column in the file.
The data must be copied starting from the second line. The first line is the column header.
Map each Status Code to one domain value.
For more information on Status Code domain values, see Oracle Business Analytics Warehouse Data Model Reference.
Save and close the file.
This section explains how to configure Movement Types domain values using the domainValues_Movement_Types_ora11i.csv file.
To configure Movement Types domain values:
Identify the Inventory Movement Type in your Oracle EBS source system by using the following SQL:
SELECT DISTINCT MTL_TRANSACTION_TYPES.TRANSACTION_TYPE_NAME FROM MTL_TRANSACTION_TYPES
From the $PMServer\LkpFiles directory (for example, INFA_HOME\server\infa_shared\LkpFiles), open the domainValues_Movement_Types_ora11i.csv file in a text editor.
Copy the TRANSACTION_TYPE_NAME to the TRANSACTION_TYPE_NAME column in the file.
The data must be copied starting from the second line.
Map each TRANSACTION_TYPE_NAME to one Inventory Movement Type domain value.
Use commas to separate the entries.
Save and close the file.
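As an illustration only, a mapped file might contain rows such as the following. The column layout and the transaction type names shown here are hypothetical; take the actual header row from your copy of domainValues_Movement_Types_ora11i.csv and the valid Inventory Movement Type domain values from Oracle Business Analytics Warehouse Data Model Reference:

TRANSACTION_TYPE_NAME,W_MOVEMENT_TYPE
Sales order issue,ISSUE
RMA Receipt,CUSTOMER RETURN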
This section explains how to configure Invoice Source Type domain values using the domainValues_InvoiceSrcTypes_ora11i.csv file.
To configure Invoice Source Type domain values:
Identify the Invoice Source Types in your Oracle 11i source system by using the following SQL:
SELECT DISTINCT DESCRIPTIVE_FLEX_CONTEXT_CODE FROM FND_DESCR_FLEX_CONTEXTS WHERE DESCRIPTIVE_FLEXFIELD_NAME = 'RA_INTERFACE_LINES' AND APPLICATION_ID=222 ORDER BY 1;
Using a text editor, open the domainValues_InvoiceSrcTypes_ora11i.csv file, located in the $PMServer\LkpFiles directory (for example, INFA_HOME\server\infa_shared\LkpFiles).
Copy the DESCRIPTIVE_FLEX_CONTEXT_CODE column to the XACT_TYPE_CODE1 column in the file.
The data must be copied starting from the second line. The first line is the column header.
Map each Transaction Source Type Code to one domain value.
For more information on Transaction Type Code domain values, see Oracle Business Analytics Warehouse Data Model Reference.
Save and close the file.
Oracle 11i categorizes quantities into three different types:
Goods Received quantities. Goods Received quantity refers to the number of goods received.
Delivery quantities. Delivery quantity refers to the number of goods delivered.
Base quantities. Base quantity refers to any transaction quantity.
The Oracle Business Analytics Warehouse extracts the transaction type and loads this value into the XACT_SRC_TYPE column, where the value 1 denotes a Goods Received quantity and the value 2 denotes a Delivery quantity.
To find out more about the XACT_SRC_TYPE column, execute the following SQL against your EBS instance:
SELECT TRANSACTION_SOURCE_TYPE_ID, TRANSACTION_SOURCE_TYPE_NAME, DESCRIPTION FROM MTL_TXN_SOURCE_TYPES ORDER BY 1;
If you have rows equivalent to Purchase order (1), you should include the TRANSACTION_SOURCE_TYPE_ID in the Goods Received quantity column (EXT_GR_QTY). If you have rows equivalent to Sales Order (2), you should include the TRANSACTION_SOURCE_TYPE_ID in the Delivery quantity column (EXT_DELIVERY_QTY).
All quantities extracted from the source system are always loaded into the Base quantity column (EXT_BASE_QTY). However, only the receipt quantity is loaded into the Goods Received quantity column (EXT_GR_QTY), and only delivered quantities are loaded into the Delivery quantity column (EXT_DELIVERY_QTY).
If your definition of goods received or delivery quantity is different from the prepackaged condition, then you can edit the condition to suit your business needs.
To configure the Quantity type:
In Informatica PowerCenter Designer, open the SDE_ORA<Version>_Adaptor (for example, SDE_ORA11510_Adaptor or SDE_ORAR12_Adaptor).
Open the mplt_SA_ORA_ProductTransactionFact mapplet.
Double-click the EXP_PROD_XACTS Expression transformation to open the Edit Transformations dialog, and display the Ports tab to show the EXT_GR_QTY and EXT_DELIVERY_QTY ports.
Edit the quantity types by substituting your desired condition for the prepackaged expression.
Click Apply.
Validate the mapplet, and save your changes to the repository.
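As a sketch of the kind of substitution described above, suppose (hypothetically; check the actual input port names in your mapplet) the mapplet exposes ports INP_TXN_SOURCE_TYPE_ID and INP_TXN_QTY. Conditions that count only purchase-order receipts as Goods Received quantity and only sales-order deliveries as Delivery quantity could then look like this:

EXT_GR_QTY (receipts against purchase orders, source type 1):
IIF(INP_TXN_SOURCE_TYPE_ID = 1, INP_TXN_QTY, 0)

EXT_DELIVERY_QTY (deliveries against sales orders, source type 2):
IIF(INP_TXN_SOURCE_TYPE_ID = 2, INP_TXN_QTY, 0)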
The Bill of Materials (BOM) functional area enables you to determine the profit margin of the components that comprise the finished goods. BOM enables you to keep up with the most viable vendors in terms of cost and profit, and to keep your sales organization aware of product delivery status, including shortages.
To deploy objects in Oracle EBS for exploding the BOM, ensure that the Oracle EBS source environment meets the minimum patch level for your version, as follows:
Customers with Oracle EBS version R12 must be at or above patch level 10422612.
Customers with Oracle EBS version R12.0.x or OPI patch set A must be at or above patch level 10422612:R12.OPI.A.
Customers with Oracle EBS version R12.1.x or OPI patch set B must be at or above patch level 10422612:R12.OPI.B.
Customers with Oracle EBS version 11i must be at or above patch level 10361119.
Refer to the System Requirements and Supported Platforms for Oracle Business Intelligence Applications for full information about supported patch levels for your source system.
For more details, see Section 6.3.2.14, "How to Configure the Bill of Materials Explosion Options."
Note:
Systems at or above these minimum patch levels include the package OPI_OBIA_BOMPEXPL_WRAPPER_PKG in the APPS schema, and include the following tables in the OPI schema with alias tables in the APPS schema:
OPI_OBIA_W_BOM_HEADER_DS
OPI_OBIA_BOM_EXPLOSION
OPI_OBIA_BOM_EXPLOSION_S
This section contains configuration steps required before you do a full data load that apply to Oracle's JD Edwards EnterpriseOne. It contains the following topics:
Section 6.2.3.2, "How to Configure User Defined Code Categories"
Section 6.2.3.3, "How to Configure Movement Type Domain Values"
Section 6.2.3.4, "How to Configure Transaction Types Domain Values for Sales Invoice Lines"
Section 6.2.3.5, "How to Configure Transaction Types Domain Values for Sales Order Lines"
Section 6.2.3.9, "How to Configure Inventory Return Locations"
Section 6.2.3.10, "How to Process Bill of Material Explosion"
Table 6-2 lists the CSV worksheet files and the domain values for Oracle Supply Chain and Order Management Analytics that are located in the $PMServer\LkpFiles directory (for example, INFA_HOME\server\infa_shared\LkpFiles).
Table 6-2 Domain Values and CSV Worksheet Files for Oracle Supply Chain and Order Management Analytics for JD Edwards EnterpriseOne
| Worksheet File | Description | Session |
| --- | --- | --- |
| file_udc_category_mapping_jde.csv | This file is used to provide JD Edwards EnterpriseOne user defined codes that are to be populated into the W_CODE_D table. | SDE_JDE_Code_Category_Map_Load |
| domainvalues_Movement_Types_jde.csv | This file provides override information of the Movement Type Description for a specific Movement Type. | SDE_JDE_MvmntTypeDimension |
| domainvalues_xact_type_codes_scom_jde_sales_ivclns.csv | This file provides the Sales Order Transaction Type Codes that are applicable to sales invoice lines. | SDE_JDE_XactTypeDimension_SalesInvoiceLine |
| domainvalues_xact_type_codes_scom_jde_sales_ordlns.csv | This file provides the Sales Order Transaction Type Codes that are applicable to Sales Order lines. | SDE_JDE_XactTypeDimension_SalesOrderLine |
| file_sales_order_status_jde.csv | This file provides Sales Order status information to populate the W_CODE_D (by category) and W_STATUS_D tables. | SDE_JDE_StatusDimension_SalesOrder |
| file_lkp_chnl_typ_jde.csv | This file serves to provide additional information by Channel Type Code to populate the Channel Type Dimension table. | SDE_JDE_ChannelTypeDimension |
| file_lkp_consign_inv_org_jde.csv | This file is used to identify the Branch/Plants (Organization ID) that are used for consignment within the inventory process. | SDE_JDE_Inventory_Daily_Bal_Fact |
| file_lkp_return_loc_jde.csv | This file is used to identify the return locations by Branch/Plant, Location, and Lot Number within the inventory process. | SDE_JDE_Inventory_Daily_Bal_Fact |
This section explains how to configure the file_udc_category_mapping_jde.csv file.
You populate this file with data before running the SDE_JDE_Code_Category_Map_Load workflow. This workflow runs a mapping of the same name that uses the CSV file as input to create a temporary table called W_JDE_UDC_CATEGORY_MAP_TMP. This temporary table is then used as source input to the SDE_JDE_CODE_DIMENSION workflow to load the W_CODE_D table.
To configure file_udc_category_mapping_jde.csv:
Using a text editor, open the file_udc_category_mapping_jde.csv file, located in the $PMServer\SrcFiles folder.
Compare the values in this file to the values in Table 3-22 in Section 3.5.1.2, "About Configuring the Code Dimension for Oracle's JD Edwards EnterpriseOne or JD Edwards World UDCs."
Add any new values to the CSV file from the Table 3-22 entries by System Code, User Defined Code, and Category. Ensure that each Category value is spelled correctly, as this Category matches what is defined in the mappings for Oracle Supply Chain and Order Management Analytics for JD Edwards EnterpriseOne.
The values that are added to this CSV file will be added as UDC entries to the W_CODE_D table by Category.
For more information about the W_CODE_D table and values, see Oracle Business Analytics Warehouse Data Model Reference.
Use commas to separate the System Code, UDC, and Category entries. Ensure that you do not include any spaces.
Save and close the file.
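For illustration, a row that maps JD Edwards EnterpriseOne system code 00 and user defined code DT to a hypothetical Document Type category would look like the following; confirm the exact category spelling against Table 3-22 and the delivered mappings:

00,DT,Document Type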
This section explains how to configure the domainvalues_Movement_Types_jde.csv. You populate this flat file with data before running the SDE_JDE_MvmntTypeDimension workflow.
To configure domainvalues_Movement_Types_jde.csv:
Identify the Movement Types in your JD Edwards EnterpriseOne source system by using the following SQL:
SELECT RTRIM(DRSY) || '~' || DRRT || '~' || LTRIM(DRKY) AS MVMNT_TYPE_CODE FROM F0005 WHERE DRSY = '00' AND DRRT = 'DT' AND DRKY <> ' '
Using a text editor, open the domainvalues_Movement_Types_jde.csv file, located in the $PMServer\LkpFiles directory (for example, INFA_HOME\server\infa_shared\LkpFiles).
Copy the values from the MVMNT_TYPE_CODE column of the SQL to the MVMNT_TYPE_CODE column in the file.
The data must be copied starting from the second line. The first line is the column header.
In the file, for each MVMNT_TYPE_CODE, add the correct domain values for W_MVMNT_TYPE_CODE and W_MVMNT_TYPE_DESC. Valid domain values are OTHERS, CUSTOMER RETURN, or VENDOR RETURN.
All values for both W_MVMNT_TYPE_CODE and W_MVMNT_TYPE_DESC should be OTHERS except for those Document Types (Movement Types) that are actually used for Customer Returns or Vendor Returns. For those specific Document Types, use CUSTOMER RETURN or VENDOR RETURN as appropriate.
For more information on Movement Type domain values, see Oracle Business Analytics Warehouse Data Model Reference.
Save and close the file.
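For illustration, given Document Type codes returned by the SQL above, a mapped file might contain rows such as the following. The document types SO and CO are hypothetical examples; map only the Document Types genuinely used for returns to CUSTOMER RETURN or VENDOR RETURN:

MVMNT_TYPE_CODE,W_MVMNT_TYPE_CODE,W_MVMNT_TYPE_DESC
00~DT~SO,OTHERS,OTHERS
00~DT~CO,CUSTOMER RETURN,CUSTOMER RETURN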
This section explains how to configure the domainvalues_xact_type_codes_scom_jde_sales_ivclns.csv. You populate this flat file with data before running the SDE_JDE_XactTypeDimension_SalesInvoiceLine workflow.
To configure domainvalues_xact_type_codes_scom_jde_sales_ivclns.csv:
Using a text editor, open the domainvalues_xact_type_codes_scom_jde_sales_ivclns.csv file, located in the $PMServer\LkpFiles folder.
Identify the Invoice Line transaction types in JD Edwards EnterpriseOne by using the following SQL:
SELECT F0005.DRKY FROM F0005 WHERE DRSY = '00' AND DRRT = 'DT'
Map the values returned with the flat file to correspond with the XACT_TYPE_CODE source column. Associate the values in this file to the Transaction Type domain values for Invoice Lines as specified in the Oracle Business Analytics Warehouse Data Model Reference.
An example configuration is shown here:
XACT_TYPE_CODE,XACT_CODE,W_XACT_TYPE_CODE,W_XACT_TYPE_DESC,W_XACT_TYPE_CODE1,W_XACT_TYPE_DESC1
<DRKY>,SALES_IVCLNS,Standard Invoice,Standard Invoice,Order Management,Order Management source code
Note:
The values shown in the example are also the default values for the source code's domain values. If a Document Type <DRKY> is not configured in the flat file, the values presented in this example are assumed by default for those documents.

For more information about the Transaction Type domain values in W_XACT_TYPE_D, see Oracle Business Analytics Warehouse Data Model Reference.
Use commas to separate the entries, and ensure that you do not include any spaces.
Save and close the file.
This section explains how to configure the domainvalues_xact_type_codes_scom_jde_sales_ordlns.csv. You populate this flat file with data before running the SDE_JDE_XactTypeDimension_SalesOrderLine workflow.
To configure domainvalues_xact_type_codes_scom_jde_sales_ordlns.csv:
Using a text editor, open the domainvalues_xact_type_codes_scom_jde_sales_ordlns.csv file, located in the $PMServer\LkpFiles folder.
Identify the Order Line transaction types in JD Edwards EnterpriseOne by using the following SQL:
SELECT F0005.DRKY FROM F0005 WHERE DRSY = '00' AND DRRT = 'DT'
Map the values returned with the flat file to correspond with the XACT_TYPE_CODE source column. Associate the values in this column to the Transaction Type domain values for Order lines as specified in the Oracle Business Analytics Warehouse Data Model Reference.
An example configuration is shown here:
XACT_TYPE_CODE,XACT_CODE,W_XACT_TYPE_CODE,W_XACT_TYPE_DESC,W_XACT_TYPE_CODE1,W_XACT_TYPE_DESC1,W_XACT_TYPE_CODE2,W_XACT_TYPE_DESC2
<DRKY>,SALES_ORDLNS,Regular,Regular,EXTERNAL,External order,Self Ship,Internal inventory order
Note:
The values shown in the example are also the default values for the source code's domain values. If a Document Type <DRKY> is not configured in the flat file, the values presented in this example are assumed by default for those documents.

For more information about the W_XACT_TYPE_D domain values, see Oracle Business Analytics Warehouse Data Model Reference.
Use commas to separate the entries, and ensure that you do not include any spaces.
Save and close the file.
This section explains how to configure the file_sales_order_status_jde.csv. You populate this flat file with data before running the SDE_JDE_StatusDimension_SalesOrder workflow. This flat file is the source for the table W_STATUS_DS and is available in the $PMServer\SrcFiles folder.
To configure file_sales_order_status_jde.csv:
Using a text editor, open the file_sales_order_status_jde.csv file, located in the $PMServer\SrcFiles folder.
Identify the Sales Order status combinations by analyzing your order activity rules setup, which will have data similar to this:
| DocType | LineType | Last Status | Description | Next Status | Description |
| --- | --- | --- | --- | --- | --- |
| SO | S | 520 | Enter Sales Order | 540 | Print Pickslips |
| SO | S | 520 | Enter Sales Order | 560 | Shipment Confirmation |
| SO | S | 520 | Enter Sales Order | 535 | In Warehouse Management |
| SO | S | 900 | Backordered in SO Entry | 540 | Print Pickslips |
Identify the distinct combination of DocType, LineType, Last Status, and Next Status from the F4211 table for all of your sales orders.
Render each of these combinations with a prefix of:
SALES_ORDLNS~<DocType>~<LineType>~<LastStatus>~<NextStatus>
SALES_PCKLNS~<DocType>~<LineType>~<LastStatus>~<NextStatus>
SALES_SCHLNS~<DocType>~<LineType>~<LastStatus>~<NextStatus>
Copy the rendered values into the flat file to correspond with the STATUS_CODE and STATUS_NAME flat file columns for each of the status classes (SALES_ORDLNS, SALES_PCKLNS, and SALES_SCHLNS, respectively).
An example configuration is shown here:
STATUS_CLASS,STATUS_CODE,STATUS_NAME,STATUS_CAT,STATUS_CAT_DESC
SALES_ORDER_PROCESS,SALES_ORDLNS~SO~S~620~998,SALES_ORDLNS~SO~S~620~998,Being Processed,Being Processed
SALES_ORDER_PROCESS,SALES_ORDLNS~SO~S~620~999,SALES_ORDLNS~SO~S~620~999,Closed,Closed
SALES_ORDER_PROCESS,SALES_ORDLNS~SO~S~985~999,SALES_ORDLNS~SO~S~985~999,Closed,Closed
SALES_PICKING_PROCESS,SALES_PCKLNS~SO~S~520~540,SALES_PCKLNS~SO~S~520~540,Not Yet Picked,Not Yet Picked
SALES_PICKING_PROCESS,SALES_PCKLNS~SO~S~520~560,SALES_PCKLNS~SO~S~520~560,Not Yet Picked,Not Yet Picked
Note:
A set of hard-coded statuses is used to identify whether a Sales Order line is cancelled. These statuses are 980, 982, 983, 984, 985, 986, 987, 988, 990, 995, 996, and 997. No other Sales Order status is used to identify a cancelled Sales Order line. If you want a certain status combination to be treated as closed, define it as closed in the Status Dimension flat file; combinations defined as closed are ignored by the financial and operational backlog flags.
You do not have to configure statuses that are already hard-coded in the system, such as Closed and Cancelled, because the code identifies them automatically. However, a hard-coded status line must be present in the flat file to track these statuses, as shown in the following step.
Append your configuration with these mandatory hard-coded statuses:
FULFILLMENT_STATUS,ORDER BOOKED,ORDER BOOKED,Order Booked,Order Booked
FULFILLMENT_STATUS,ORDER ENTERED,ORDER ENTERED,Order Entered,Order Entered
FULFILLMENT_STATUS,ORDER FULLY CANCELLED,ORDER FULLY CANCELLED,Order Fully Cancelled,Order Fully Cancelled
FULFILLMENT_STATUS,ORDER FULLY PICKED,ORDER FULLY PICKED,Order Fully Picked,Order Fully Picked
FULFILLMENT_STATUS,ORDER FULLY SCHEDULED,ORDER FULLY SCHEDULED,Order Fully Scheduled,Order Fully Scheduled
FULFILLMENT_STATUS,ORDER FULLY SHIPPED,ORDER FULLY SHIPPED,Order Fully Shipped,Order Fully Shipped
FULFILLMENT_STATUS,ORDER PARTIALLY CANCELLED,ORDER PARTIALLY CANCELLED,Order Partially Cancelled,Order Partially Cancelled
FULFILLMENT_STATUS,ORDER PARTIALLY SCHEDULED,ORDER PARTIALLY SCHEDULED,Order Partially Scheduled,Order Partially Scheduled
FULFILLMENT_STATUS,ORDER PARTIALLY SHIPPED,ORDER PARTIALLY SHIPPED,Order Partially Shipped,Order Partially Shipped
FULFILLMENT_STATUS,ORDER NOT ELIGIBLE,ORDER NOT ELIGIBLE,Order Not Eligible,Order Not Eligible
SALES_INVOICE_PROCESS,SALES_INVCLNS~CANCELLED,SALES_INVCLNS~CANCELLED,Cancelled,Cancelled
SALES_INVOICE_PROCESS,SALES_INVCLNS~COMPLETED,SALES_INVCLNS~COMPLETED,Completed,Completed
SALES_INVOICE_PROCESS,SALES_INVCLNS~OPEN,SALES_INVCLNS~OPEN,Open,Open
SALES_INVOICE_PROCESS,SALES_INVCLNS~PENDING,SALES_INVCLNS~PENDING,Pending,Pending
SALES_ORDER_PROCESS,SALES_ORDLNS~BEING PROCESSED,SALES_ORDLNS~BEING PROCESSED,Being Processed,Being Processed
SALES_ORDER_PROCESS,SALES_ORDLNS~BLOCKED,SALES_ORDLNS~BLOCKED,Blocked,Blocked
SALES_ORDER_PROCESS,SALES_ORDLNS~BOOKED,SALES_ORDLNS~BOOKED,Booked,Booked
SALES_ORDER_PROCESS,SALES_ORDLNS~CANCELLED,SALES_ORDLNS~CANCELLED,Cancelled,Cancelled
SALES_ORDER_PROCESS,SALES_ORDLNS~CLOSED,SALES_ORDLNS~CLOSED,Closed,Closed
SALES_ORDER_PROCESS,SALES_ORDLNS~ENTERED,SALES_ORDLNS~ENTERED,Entered,Entered
SALES_PICKING_PROCESS,SALES_PCKLNS~CANCELLED,SALES_PCKLNS~CANCELLED,Cancelled,Cancelled
SALES_PICKING_PROCESS,SALES_PCKLNS~FULLY PICKED,SALES_PCKLNS~FULLY PICKED,Fully Picked,Fully Picked
SALES_PICKING_PROCESS,SALES_PCKLNS~FULLY SHIPPED,SALES_PCKLNS~FULLY SHIPPED,Fully Shipped,Fully Shipped
SALES_PICKING_PROCESS,SALES_PCKLNS~NOT RELEVANT,SALES_PCKLNS~NOT RELEVANT,Not Relevant,Not Relevant
SALES_PICKING_PROCESS,SALES_PCKLNS~NOT YET PICKED,SALES_PCKLNS~NOT YET PICKED,Not Yet Picked,Not Yet Picked
SALES_PICKING_PROCESS,SALES_PCKLNS~BACKORDERED,SALES_PCKLNS~BACKORDERED,Backordered,Backordered
SALES_PICKING_PROCESS,SALES_PCKLNS~PURGED,SALES_PCKLNS~PURGED,Purged,Purged
SALES_SCHEDULE_PROCESS,SALES_SCHLNS~BLOCKED,SALES_SCHLNS~BLOCKED,Blocked,Blocked
SALES_SCHEDULE_PROCESS,SALES_SCHLNS~CANCELLED,SALES_SCHLNS~CANCELLED,Cancelled,Cancelled
SALES_SCHEDULE_PROCESS,SALES_SCHLNS~CLOSED,SALES_SCHLNS~CLOSED,Closed,Closed
SALES_SCHEDULE_PROCESS,SALES_SCHLNS~ENTERED,SALES_SCHLNS~ENTERED,Entered,Entered
SALES_SCHEDULE_PROCESS,SALES_SCHLNS~NOT VALID,SALES_SCHLNS~NOT VALID,Not Valid,Not Valid
Save and close the file.
This section explains how to configure the file_lkp_chnl_typ_jde.csv. You populate this flat file with data before running the SDE_JDE_ChannelTypeDimension workflow.
To configure the file_lkp_chnl_typ_jde.csv file:
Identify the Channel Types in your JD Edwards EnterpriseOne source system by using the following SQL:
SELECT DRKY FROM F0005 WHERE DRSY = '90CB' AND DRRT = 'TC'
Using a text editor, open the file_lkp_chnl_typ_jde.csv file, located in the $PMServer\LkpFiles directory (for example, INFA_HOME\server\infa_shared\LkpFiles).
Copy the values from the DRKY column of the SQL to the CHNL_TYPE_CODE column in the file.
The data must be copied starting from the second line. The first line is the column header.
In the file, add the correct domain values for Channel Type Group Code (W_CHTY_GRP_CODE), Channel Type Subgroup Code (W_CHTY_SUBG_CODE), and W_INBOUND_TYPE_FLG for each Channel Type Code (CHNL_TYPE_CODE).
For more information on Channel Type domain values, see Oracle Business Analytics Warehouse Data Model Reference.
Save and close the file.
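For illustration, a completed row might look like the following. The Channel Type Code WEB and the group, subgroup, and flag values are placeholders; take the valid domain values from Oracle Business Analytics Warehouse Data Model Reference:

CHNL_TYPE_CODE,W_CHTY_GRP_CODE,W_CHTY_SUBG_CODE,W_INBOUND_TYPE_FLG
WEB,DIRECT,E-COMMERCE,Y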
This section explains how to configure the file_lkp_consign_inv_org_jde.csv. You populate this flat file with data before running the SDE_JDE_Inventory_Daily_Bal_Fact workflow.
To configure file_lkp_consign_inv_org_jde.csv:
Identify the Inventory Consignment Branch/Plants (Warehouse and Inventory Organization IDs) in your JD Edwards EnterpriseOne source system.
Using a text editor, open the file_lkp_consign_inv_org_jde.csv file, located in the $PMServer\LkpFiles directory (for example, INFA_HOME\server\infa_shared\LkpFiles).
Manually enter the first value for a valid Inventory Consignment Branch/Plant on the second line in the file. The first line is the column header.
Note:
The data for each Branch/Plant must be entered as 12 characters, right-justified with leading blanks. For example, a Branch/Plant (INVENTORY_ORG_ID) of ABC should be entered with nine leading blanks (shown here with hyphens standing in for the blank characters): ---------ABC

Enter each Inventory Consignment Branch/Plant on a separate line thereafter.
For more information on consignments, see Oracle Business Analytics Warehouse Data Model Reference.
Save and close the file.
This section explains how to configure the file_lkp_return_loc_jde.csv. You populate this flat file with data before running the SDE_JDE_Inventory_Daily_Bal_Fact workflow.
To configure file_lkp_return_loc_jde.csv:
Identify the Inventory Return Locations in each Branch/Plant (Warehouse/Inventory Organization ID) in your JD Edwards EnterpriseOne source system.
Using a text editor, open the file_lkp_return_loc_jde.csv file, located in the $PMServer\LkpFiles directory (for example INFA_HOME\server\infa_shared\LkpFiles).
Manually enter the first value for a valid Inventory Return Location on the second line in the file. The first line is the column header.
Note:
The data for each Return Location must be entered in the following format on each line:

STORAGE_LOC~Branch/Plant~Location~Lot Number
where:
STORAGE_LOC~ is a constant
the Branch/Plant value is 12 characters, right-justified and blank-filled on the left
the Location value is 20 characters, left-justified and blank-filled on the right
the Lot Number value is entered with no trailing blanks
For example, an Inventory Return Location with a Branch/Plant of ABC, a Location of 123 and a Lot Number of 789, would be entered in the following format:
STORAGE_LOC~         ABC~123                 ~789
Enter each Inventory Return Location on a separate line thereafter.
For more information on return locations, see Oracle Business Analytics Warehouse Data Model Reference.
Save and close the file.
This section explains how to process the Bill of Materials (BOM) for exploding to a multi-level structure to ultimately populate both the W_BOM_HEADER_D and W_BOM_ITEM_F tables.
JD Edwards EnterpriseOne maintains BOM information in a single level format, but Oracle BI Applications requires it in multi-level format. Therefore, before loading data into Oracle BI Application tables, the single level structure must be exploded into a multi-level structure.
Because all of the BOM information is stored in a single table in the JD Edwards EnterpriseOne source, and there are no defined levels for the BOM, the system must loop through the data iteratively to explode the BOM. Also, Oracle BI Applications maintains all revisions to the components as a new version of the BOM, along with their effective dates. Given these facts, it is not feasible to use ETL to convert the single-level BOM to a multi-level BOM. Therefore, the logic from an existing JD Edwards EnterpriseOne UBE (R30460) was used to create a new UBE for the explosion.
This new UBE (R30461) extracts the manufactured end products and converts the single-level BOM format into a multi-level BOM format. In addition, it also extracts some required information like Left bounds/Right bounds and level parents (1 - 10).
The UBE loads the multi-level BOM structure for manufactured end products with each revision into two work files respectively for BOM header and item (component). The ETL then extracts the data from the two work files and loads it into the Oracle BI Applications tables.
Note:
If you plan to consume analytics on Bill of Materials, you must run this UBE before starting the ETL. This UBE and the related JD Edwards EnterpriseOne objects are created solely for the benefit of analytics, and therefore are not available in the existing source system.

The SIL_BOMItemFact mapping contains a stored procedure called COMPUTE_BOUNDS, which traverses the exploded BOM tree structure and calculates the left bound and right bound. By default, the COMPUTE_BOUNDS stored procedure is off. If you want to turn on the procedure, see Section 6.3.2.15, "How to Configure the Left Bound and Right Bound Calculation Option."
Before you run an ETL using a BOM, you must compile and deploy the SQL code in Compute_Bounds_Ora11i.sql. You deploy stored procedures by copying the stored procedure files from your Oracle Business Intelligence installation and deploying them to the target data warehouse.
In JD Edwards EnterpriseOne, the left and right bounds are calculated by the UBE (R30461).
Note:
Some sessions may fail if this procedure is not compiled in the database before running the workflows.

To deploy the COMPUTE_BOUNDS stored procedure:
Navigate to the MW_HOME\biapps\dwrep\Informatica\Stored_Procedure_Scripts folder.
Select the appropriate folder for your target database type (for example, Oracle or DB2).
Copy the source code from the Compute_Bounds_<DB type>.sql file for your database type into the target data warehouse schema.
For example, for an Oracle database, copy the source SQL code from Oracle\Compute_Bounds_Ora11i.sql.
Compile the stored procedures in the target data warehouse database.
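As a minimal sketch on an Oracle target, you could compile the script and confirm that the procedure compiled cleanly from SQL*Plus; the connection details here are placeholders for your environment:

sqlplus <dw_user>/<password>@<dw_connect_string> @Compute_Bounds_Ora11i.sql
SELECT object_name, status FROM user_objects WHERE object_name = 'COMPUTE_BOUNDS';

A status of VALID indicates that the procedure compiled without errors.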
Note:
If you have problems deploying stored procedures, see your database reference guide, or contact your database administrator.

This section contains additional configuration steps for Oracle Supply Chain and Order Management Analytics. It contains the following topics:
This section contains configuration steps that apply to all source systems. It contains the following topics:
Section 6.3.1.1, "Tracking Multiple Attribute Changes in Bookings"
Section 6.3.1.2, "Process of Aggregating Oracle Supply Chain and Order Management Analytics Tables"
Section 6.3.1.4, "Adding Dates to the Order Cycle Time Table for Post-Load Processing"
Section 6.3.1.6, "Example of How Backlog Data Is Stored in the Backlog History Table"
Section 6.3.1.8, "Configuring the Customer Status History Fact table"
Section 6.3.1.9, "How to Configure the Customer Status History Fact Table"
Section 6.3.1.10, "About Configuring the Inventory Monthly Balance Table"
Section 6.3.1.11, "How to Configure the Inventory Monthly Balance"
Section 6.3.1.12, "About Configuring the Product Transaction Aggregate Table"
Section 6.3.1.13, "How to Configure the Product Transaction Aggregate Table"
By default, the VAR_BOOKING_ID column is configured as follows for Oracle 11i and Oracle R12:
TO_CHAR(INP_LINE_ID)||'~'||TO_CHAR(INP_INV_ITEM_ID)||'~'||TO_CHAR(INP_WAREHOUSE_ID)
However, if you want to track changes based on more than one attribute, in the SQL statement you must concatenate the attribute column IDs in the VAR_BOOKING_ID column. For example, if you want to track changes in Salespersons and Customer Account, then concatenate the technical name IDs in the VAR_BOOKING_ID column as follows:
TO_CHAR(INP_LINE_ID)||'~'||TO_CHAR(INP_INV_ITEM_ID)||'~'||TO_CHAR(INP_WAREHOUSE_ID)||'~'||TO_CHAR(INP_SALESREP_ID)||'~'||TO_CHAR(INP_CUSTOMER_ACCOUNT_ID)
To track dimensional attribute changes in bookings:
In Informatica PowerCenter Designer, open the SDE_ORA115<Version>_Adaptor (for example, SDE_ORA11510_Adaptor) or SDE_ORAR12_Adaptor folder.
Open one of the following mappings:
mplt_SA_ORA_SalesOrderLinesFact
mplt_SA_ORA_SalesScheduleLinesFact
Double-click the appropriate Expression transformation to open the Edit Transformation box:
EXP_SALES_ORDLNS
EXP_SALES_SCHLNS
In the Ports tab, edit the expression for the EXT_BOOKING_ID port, and enter the ID of the attribute for which you want to track changes.
If you want to track changes in multiple attributes, concatenate the IDs of all attributes and put the concatenated value in the VAR_BOOKING_ID column.
Validate and save your changes to the repository.
This section contains Oracle Supply Chain and Order Management Analytics configuration points for aggregating the Sales Invoice Lines and Sales Order Lines tables.
The aggregation processes use the following Teradata parameters:
Hint_Tera_Pre_Cast
Hint_Tera_Post_Cast
To aggregate the Sales Invoice Lines and Sales Order Lines tables, perform the following tasks:
Configure the Sales Invoice Lines Aggregate Table
Configure the Sales Order Lines Aggregate Table
About Configuring the Sales Invoice Lines Aggregate Table
The Sales Invoice Lines aggregate table (W_SALES_INVOICE_LINE_F_A) is used to capture information about the invoices issued for your sales orders. You need to configure the Sales Invoice Lines aggregate table in order to run initial ETL and incremental ETL.
For your initial ETL run, you need to configure the TIME_GRAIN parameter for the time aggregation level in the Sales Invoice Lines aggregate fact table.
For the incremental ETL run, you need to configure the time aggregation level.
To aggregate the Sales Invoice Lines table for your incremental run, you need to configure the TIME_GRAIN parameter.
The TIME_GRAIN parameter has a preconfigured value of Month. The possible values for the TIME_GRAIN parameter are:
'WEEK'
'MONTH'
'QUARTER'
'YEAR'
The aggregation processes use the following Teradata parameters:
Hint_Tera_Pre_Cast
Hint_Tera_Post_Cast
The Sales Invoice Lines aggregate table is fully loaded from the base table in the initial ETL run. The table can grow to millions of records. Thus, the Sales Invoice aggregate table is not fully reloaded from the base table after each incremental ETL run. Oracle Business Analytics Warehouse minimizes the incremental aggregation effort, by modifying the aggregate table incrementally as the base table is updated. This process is described below.
The Oracle Business Analytics Warehouse finds the records to be deleted in the base table since the last ETL run, and loads them into the W_SALES_INVOICE_LINE_TMP table. The measures in these records are multiplied by (-1). The mapping responsible for this task is SIL_SalesInvoiceLinesAggregate_Derive_PreSoftDeleteImage, which is run before SIL_SalesInvoiceLinesFact_SoftDelete deletes the records from the base table.
The Oracle Business Analytics Warehouse finds the records to be updated in the base table since the last ETL run, and loads them into the W_SALES_INVOICE_LINE_TMP table. The measures in these records are multiplied by (-1). The mapping responsible for this task is SIL_SalesInvoiceLinesFact_Derive_PreLoadImage, which is run before SIL_SalesInvoiceFact updates the records in the base table.
The Oracle Business Analytics Warehouse finds the inserted or updated records in the base table since the last ETL run, and loads them into the W_SALES_INVOICE_LINE_TMP table, without changing their sign. The mapping responsible for this task is SIL_SalesInvoiceLinesFact_Derive_PreLoadImage, which is run before PLP_SalesInvoiceLinesFact_Derive_PostLoadImage updates or inserts records in the base table.
The Oracle Business Analytics Warehouse aggregates the W_SALES_INVOICE_LINE_TMP table and loads the result into W_SALES_INVOICE_LINE_A_TMP, which has the same granularity as the W_SALES_INVOICE_LINE_A table.
The PLP_SalesInvoiceLinesAggregate_Derive mapping looks up the W_SALES_INVOICE_LINE_A aggregate table to update existing buckets or insert new buckets in the aggregate table (the mapping is PLP_SalesInvoiceLinesAggregate_Load).
How to Configure the Sales Invoice Lines Aggregate Table
To load the Sales Invoice Lines aggregate table (W_SALES_INVOICE_LINE_A), you need to configure the parameterfileDW.txt file, and run the initial and then the incremental workflows. Using a text editor, open the file parameterfileDW.txt, located in the Dac\Informatica\parameters\input folder on the DAC Server computer. For example:
C:\orahome\10gversion\bifoundation\dac\Informatica\parameters\input
To configure the Sales Invoice Lines Aggregate Table:
In DAC, go to the Design view, and select the appropriate custom container from the drop-down list.
Display the Tasks tab.
For each of the following tasks, display the Parameters tab and specify an appropriate value in the Value field for the TIME_GRAIN parameter:
SIL_SalesInvoiceLinesAggregate_Derive_PreLoadImage
SIL_SalesInvoiceLinesAggregate_Derive_PreSoftDeleteImage
PLP_SalesInvoiceLinesAggregate_Derive_PostLoadImage
PLP_SalesInvoiceLinesAggregate_Load
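For illustration, setting the Value field of TIME_GRAIN to 'MONTH' for all four tasks aggregates the invoice lines by calendar month. If you maintain the parameter in parameterfileDW.txt instead (see the path given above), the entry is a simple name-value pair; a typical line might look like the following, although you should match the syntax of the existing entries in your copy of the file:

$$TIME_GRAIN='MONTH'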
About Configuring the Sales Order Lines Aggregate Table
The Sales Order Lines aggregate table (W_SALES_ORDER_LINE_A) is used to capture information about the order lines issued for your sales orders. You need to configure the Sales Order Lines aggregate table in order to run initial ETL and incremental ETL.
For your initial ETL run, you need to configure the TIME_GRAIN parameter for the time aggregation level in the Sales Order Lines aggregate fact table.
For the incremental ETL run, you need to configure the time aggregation level.
To aggregate the Sales Order Lines table for your incremental run, you need to configure the TIME_GRAIN parameter.
The TIME_GRAIN parameter has a preconfigured value of Month. The possible values for the TIME_GRAIN parameter are:
'WEEK'
'MONTH'
'QUARTER'
'YEAR'
The aggregation processes use the following Teradata parameters:
Hint_Tera_Pre_Cast
Hint_Tera_Post_Cast
The Sales Order Lines aggregate table is fully loaded from the base table in the initial ETL run. The table can grow to millions of records. Thus, the Sales Order aggregate table is not fully reloaded from the base table after each incremental ETL run. Oracle Business Analytics Warehouse minimizes the incremental aggregation effort, by modifying the aggregate table incrementally as the base table is updated. This process is described below.
Oracle Business Analytics Warehouse finds the records to be deleted in the base table since the last ETL run, and loads them into the W_SALES_ORDER_LINE_TMP table. The measures in these records are multiplied by (-1). The mapping responsible for this task is SIL_SalesOrderLinesAggregate_Derive_PreSoftDeleteImage, which is run before SIL_SalesOrderLinesFact_SoftDelete deletes the records from the base table.
Oracle Business Analytics Warehouse finds the records to be updated in the base table since the last ETL run, and loads them into the W_SALES_ORDER_LINE_TMP table. The measures in these records are multiplied by (-1). The mapping responsible for this task is SIL_SalesOrderLinesFact_Derive_PreLoadImage, which is run before SIL_SalesOrderFact updates the records in the base table.
Oracle Business Analytics Warehouse finds the inserted or updated records in the base table since the last ETL run, and loads them into the W_SALES_ORDER_LINE_TMP table, without changing their sign. The mapping responsible for this task is SIL_SalesOrderLinesFact_Derive_PreLoadImage, which is run before PLP_SalesOrderLinesFact_Derive_PostLoadImage updates or inserts records in the base table.
Oracle Business Analytics Warehouse uses the PLP_SalesOrderLinesAggregate_Derive mapping to aggregate the W_SALES_ORDER_LINE_TMP table and load the result into W_SALES_ORDER_LINE_A_TMP, which has the same granularity as the W_SALES_ORDER_LINE_A table.
The load process then looks up the W_SALES_ORDER_LINE_A aggregate table to update existing buckets or insert new buckets in the aggregate table (the mapping is PLP_SalesOrderLinesAggregate_Load).
How to Configure the Sales Order Lines Aggregate Table
To load the Sales Order Lines aggregate table (W_SALES_ORDER_LINE_A), you need to configure the post-load-processing parameter file and the source system parameter files, and run the initial and then the incremental workflows.
To configure the Sales Order Lines Aggregate Table:
In DAC, go to the Design view, and select the appropriate custom container from the drop-down list.
Display the Tasks tab.
For each of the following tasks, display the Parameters tab and specify an appropriate value in the Value field for the TIME_GRAIN parameter:
SIL_SalesOrderLinesAggregate_Derive_PreLoadImage
SIL_SalesOrderLinesAggregate_Derive_PreSoftDeleteImage
PLP_SalesOrderLinesAggregate_Derive_PostLoadImage
PLP_SalesOrderLinesAggregate_Load
The Sales Order Lines table contains two columns, ORDHD_KEY_ID and ORDLN_KEY_ID, that track individual products when they are grouped and sold as a single package. These two columns allow you to analyze the relationship of all products sold as a single unit. The ORDHD_KEY_ID column stores the Order ID of the entire sales order. The ORDLN_KEY_ID column stores the Line Item ID of the parent product.
For example, assume a customer purchases a package that includes a computer, scanner, and printer. In addition, the customer purchases a monitor separately. In this case, there are two parent items: the package and the monitor. The computer, scanner, and printer are all child orders of the parent order package, while the parent order monitor is a single-item purchase.
Your data warehouse may store this sales information in the Sales Order Lines table as seen in Table 6-3. The ORDLN_KEY_ID field contains the Line Item ID of the parent product in order to maintain the relationship between the parent and child products in a package. In this example, the ORDLN_KEY_ID field is Line_1 for each of the three child products (A1, A2, A3) that were sold as a part of the parent package, Parent A.
Table 6-3 Sales Order Table Columns With Parent/Child Relationships
| Key_ID | SALES_ORDER_NUM | PRODUCT_ID | ORDHD_KEY_ID | ORDLN_KEY_ID | Relationship (not a column in the table) |
| --- | --- | --- | --- | --- | --- |
| Line_1 | 1000 | Package | 1000 | Line_1 | Parent A |
| Line_2 | 1000 | Computer | 1000 | Line_1 | Child A1 |
| Line_3 | 1000 | Scanner | 1000 | Line_1 | Child A2 |
| Line_4 | 1000 | Printer | 1000 | Line_1 | Child A3 |
| Line_5 | 1000 | Monitor | 1000 | Line_5 | Parent B (no children) |
In contrast, if each of the four items described in Table 6-3 were bought individually, the ORDLN_KEY_ID would have a different Line Item ID for every row. In this case, the Sales Order Lines table would look like Table 6-4.
Table 6-4 Sales Order Table Columns Without Parent/Child Relationships
| Key_ID | SALES_ORDER_NUM | PRODUCT_ID | ORDHD_KEY_ID | ORDLN_KEY_ID | Relationship (not a column in the table) |
| --- | --- | --- | --- | --- | --- |
| Line_1 | 1000 | Computer | 1000 | Line_1 | None |
| Line_2 | 1000 | Scanner | 1000 | Line_2 | None |
| Line_3 | 1000 | Printer | 1000 | Line_3 | None |
| Line_4 | 1000 | Monitor | 1000 | Line_4 | None |
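The parent/child columns make it straightforward to reassemble packages at query time. The following SQL sketch, written against an illustrative SALES_ORDER_LINES table with the columns shown in Table 6-3 and Table 6-4 (the table and column names are examples, not warehouse object names), returns each child line together with its parent product:

SELECT child.KEY_ID,
       child.PRODUCT_ID,
       parent.PRODUCT_ID AS PARENT_PRODUCT_ID
FROM   SALES_ORDER_LINES child
JOIN   SALES_ORDER_LINES parent
  ON   parent.KEY_ID = child.ORDLN_KEY_ID
 AND   parent.ORDHD_KEY_ID = child.ORDHD_KEY_ID
WHERE  child.KEY_ID <> child.ORDLN_KEY_ID;

Applied to the data in Table 6-3, this returns the Computer, Scanner, and Printer lines with Package as the parent product; rows such as the Monitor, whose ORDLN_KEY_ID points back to itself, are excluded.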
To add more dates, you need to understand how the Order Cycle Time table is populated. If you want to change the dates loaded into the Order Cycle Time table (W_SALES_CYCLE_LINE_F), you must modify the PLP_SalesCycleLinesFact_Load and PLP_SalesCycleLinesFact_Load_Full mappings, which take the dates from the W_* tables and load them into the Cycle Time table.
To add dates to the Cycle Time table load:
In Informatica PowerCenter Designer, open the Configuration for Post Load Processing folder (that is, the PLP folder).
In Warehouse Designer, modify the table definition for the target table to verify that it has a field to store this date.
For example, if you are loading the Validated on Date in the W_SALES_CYCLE_LINE_F table, then you need to create a new column, VALIDATED_ON_DT, and modify the target definition of the W_SALES_CYCLE_LINE_F table.
In Source Analyzer, modify the table definition of the source table to include this new column.
Continuing with the example, you would include the VALIDATED_ON_DT column in the W_SALES_CYCLE_LINE_F source table.
In Mapping Designer, modify the PLP_SalesCycleLinesFact_Load and PLP_SalesCycleLinesFact_Load_Full mappings to select the new column from any of the following source tables, and load it to the W_SALES_CYCLE_LINE_F target table:
W_SALES_ORDER_LINE_F
W_SALES_INVOICE_LINE_F
W_SALES_PICK_LINE_F
W_SALES_SCHEDULE_LINE_F
Modify the Source Qualifier SQL Override for the mapping, and map the new column through the transformation to the target table.
The Backlog table (W_SALES_BACKLOG_LINE_F) stores backlog data for the current month. In contrast, the Backlog History table (W_SALES_BACKLOG_HISTORY_F) stores snapshots of all previous months' historical backlog data. The periods for which the Backlog History table tracks backlog data are defined by the Backlog Period Date. By default, the date is set to the last calendar day of the month; however, you can configure this date. You might want to view backlog history at a more detailed level, such as by day or by week, instead of by month. The following example describes how historical backlog data is stored and what the implications are for changing the backlog time period.
Assume you represent a manufacturing company where financial backlog is defined as any item that is ordered, but not invoiced. On February 1, 2001, you received an order (Sales Order #1) for 30 products. 20 were shipped and invoiced and 10 were shipped, but not invoiced. At the end of the day, there is an entry in the Backlog table and in the Backlog History table. The entry in the Backlog History table looks like that shown in Table 6-5.
Table 6-5 Oracle 11i and Oracle R12: Backlog History Table Entry as of February 1, 2001
SALES_ORDER_NUM (Sales Order Number) | BACKLOG_DK (Backlog Date) | BACKLOG_PERIOD_DK (Backlog Period Date) | OPEN_QTY (Backlog Quantity) |
---|---|---|---|
1 | 02/01/2001 | 02/28/2001 | 10 |
On February 2, 5 of the 10 financial backlog items are invoiced and removed from the backlog. As a result, the existing row in the Backlog History table is updated, as shown in Table 6-6.
Table 6-6 Oracle 11i and Oracle R12: Backlog History Table Entry as of February 2, 2001
SALES_ORDER_NUM (Sales Order Number) | BACKLOG_DK (Backlog Date) | BACKLOG_PERIOD_DK (Backlog Period Date) | OPEN_QTY (Backlog Quantity) |
---|---|---|---|
1 | 02/01/2001 | 02/28/2001 | Old value: 10 New value: 5 |
No further activity happens until March 1st. On March 1st, the remaining 5 items on financial backlog are invoiced and removed from financial backlog. In addition, a new sales order (Sales Order #2) comes in for 50 new items. All of the items are put on financial backlog.
Even though all items from Sales Order #1 are cleared from financial backlog, the last backlog row remains in the Backlog History table. The purpose of retaining the last row is to indicate that there was backlog for this particular order. The quantity (in this case, 5 items) does not tell you how many items were initially on backlog, which was 10.
For the 50 new financial backlog items, there is a new entry in the Backlog History table. So, as of March 1, 2001, the Backlog History table looks like Table 6-7.
Table 6-7 Oracle 11i: Backlog History Table Entry as of March 1, 2001
SALES_ORDER_NUM (Sales Order Number) | BACKLOG_DK (Backlog Date) | BACKLOG_PERIOD_DK (Backlog Period Date) | OPEN_QTY (Backlog Quantity) |
---|---|---|---|
1 | Old value: 02/01/2001 New value: 02/02/2001 | 02/28/2001 | Old value: 10 New value: 5 |
2 | 03/01/2001 | 03/31/2001 | 50 |
On March 1, 30 more items are ordered (Sales Order #3), all of which are on financial backlog. The resulting Backlog History table looks like Table 6-8.
Table 6-8 Oracle 11i and Oracle R12: Backlog History Table Entry as of March 1, 2001
SALES_ORDER_NUM (Sales Order Number) | BACKLOG_DK (Backlog Date) | BACKLOG_PERIOD_DK (Backlog Period Date) | OPEN_QTY (Backlog Quantity) |
---|---|---|---|
1 | Old value: 02/01/2001 New value: 02/02/2001 | 02/28/2001 | 5 |
2 | 03/01/2001 | 03/31/2001 | 50 |
3 | 03/01/2001 | 03/31/2001 | 30 |
Because backlog history is maintained at the monthly level, you have a partial history of your backlogs. Based on the latest state of the Backlog History table shown in Table 6-8, you can see that sales order number 1 ended up with 5 financial backlogged items. You do not have visibility into what the initial financial backlogged item quantities were for the sales orders; you only have their ending quantities.
If you decide that you want to track more details on how the items moved out of backlog, then you have to maintain the history at a more granular level. For instance, if you want to know the number of items that were on backlog when the order was first opened, you would have to track the backlog history by day instead of by month.
For example, if you maintained backlog history at the daily level, you would be able to capture that Sales Order #1 had an initial backlog of 10 as of February 1, and that the backlog quantity shrank to 5 as of February 2. By capturing history at the daily level, you could then compute cycle times for how long it took to move items out of backlog. However, if you decide to capture backlog history at a more detailed level, you might compromise performance, because tracking backlog history at the daily level can dramatically increase the size of the Backlog History table.
If you choose to change the time period for which historical backlog data is kept, you must verify that all types of backlog are stored at the same grain, which requires modifying multiple mappings. Table 6-9 lists the applicable mappings and their corresponding Expression transformations that you must modify.
Table 6-9 Oracle 11i and Oracle R12: Backlog History Applicable Mappings and Expression Transformations
Mapping | Expression Transformation |
---|---|
PLP_SalesBacklogLinesFact_LoadOrderLines | EXP_SALES_ORNLNS_BACKLOG |
PLP_SalesBacklogLinesFact_LoadScheduleLines | EXP_SALES_SCHLNS_BACKLOG |
The backlog history period is monthly by default. The default SQL statement in the Expression transformation for the port BACKLOG_PERIOD_DK is:
TO_DECIMAL(TO_CHAR(LAST_DAY(CALENDAR_DATE),'YYYYMMDD'))
To capture a more detailed backlog history, edit the backlog period date by using the following procedure. Possible periods include daily (CAL_DAY_DT), weekly (CAL_WEEK_DT), monthly (CAL_MONTH_DT), and quarterly (CAL_QTR_DT).
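For example, a sketch of a daily-grain version of the BACKLOG_PERIOD_DK expression, assuming you want one backlog snapshot per calendar day (verify the port and function names against your repository before applying it):
TO_DECIMAL(TO_CHAR(CALENDAR_DATE,'YYYYMMDD'))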
In Oracle Supply Chain and Order Management Analytics, W_CUSTOMER_STATUS_HIST_F is a fact table that tracks the status of customers based on the frequency of orders they place with the organization. Possible statuses are NEW, RECENT, DORMANT, and LOST. The time duration for each status bucket is configurable; the default is one calendar year. The grain of this table is Customer, Customer Status, and Status Start Date. This section explains the possible configurations available for this table, what they mean, and how to implement them.
This section describes the following configurations that are available for the Customer Status History Fact table:
Configure the Data Warehouse Identifier
Configure the Period for each status bucket
Configuring the Data Warehouse Identifier
This table uses statuses defined by Oracle BI Applications, such as NEW, RECENT, DORMANT, and LOST. This status data is loaded into the data warehouse directly through a prepackaged CSV file. The data in the file is independent of any specific OLTP source system where your Customer or Sales data resides. To differentiate source-based statuses from the prepackaged data warehouse statuses, a distinct identifier is required. The Informatica mapping parameter $$WH_DATASOURCE_NUM_ID serves that purpose.
A prepackaged value of 999 is set out of the box. You do not need to configure this value unless you have already used this number (999) for one of your data sources, such as Oracle EBS 11.5.10.
For information about how to configure the $$WH_DATASOURCE_NUM_ID value, see Section 6.3.1.9, "How to Configure the Customer Status History Fact Table."
Configuring the Period for each status bucket
When a customer orders products or services from your organization for the first time, Oracle BI Applications sets the status for the customer to NEW. The customer keeps the same status as long as the duration between any two consecutive orders is less than a configurable, business-defined period. The value of the Informatica parameter $$PERIOD (365 days out of the box) is configurable. For example, a fast-moving retail goods company might define 30 days as its period, whereas a slow-moving goods company might be satisfied with 730 days.
If the customer does not order anything for more than one period, the status moves to RECENT. Similarly, no orders for one more period after the customer becomes RECENT moves the status to DORMANT. Lastly, the status is set to LOST if no orders are seen for more than one additional period after the customer becomes DORMANT.
However, if a customer in DORMANT status places an order, Oracle BI Applications upgrades the status back to RECENT. Likewise, if a customer in LOST status places an order, the status is upgraded back to RECENT.
These examples illustrate how important it is for the business to set the right value for the period. Organizations tend to launch different campaigns targeting customers based on their current status or, put differently, their order patterns. A sketch of the bucketing logic follows.
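As a rough illustration only (not the shipped ETL logic; the orders table and column names here are hypothetical, and $$PERIOD is assumed to be the default 365 days), the bucket a customer falls into can be derived from the gap since the last order:
SELECT customer_id,
       CASE
         WHEN days_since_last_order <= 365     THEN 'Status unchanged'
         WHEN days_since_last_order <= 2 * 365 THEN 'RECENT'
         WHEN days_since_last_order <= 3 * 365 THEN 'DORMANT'
         ELSE 'LOST'
       END AS derived_status
FROM (
    SELECT customer_id,
           TRUNC(SYSDATE) - TRUNC(MAX(order_date)) AS days_since_last_order
    FROM   orders                -- hypothetical order table
    GROUP  BY customer_id
);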
For information about how to configure the $$PERIOD value, see Section 6.3.1.9, "How to Configure the Customer Status History Fact Table."
This section explains the steps involved in configuring the Customer Status History Fact table using the $$WH_DATASOURCE_NUM_ID and $$PERIOD variables (for more information about these variables, see Section 6.3.1.8, "Configuring the Customer Status History Fact table").
To modify the value of $$WH_DATASOURCE_NUM_ID:
In DAC, go to the Design view, and select the appropriate custom container from the drop-down list.
Display the Source System Parameters tab and locate the parameter $$WH_DATASOURCE_NUM_ID.
In the Edit subtab, enter an appropriate value in the Value field.
Save the changes.
To modify the value of $$PERIOD:
In DAC, go to the Design view, and select the appropriate custom container from the drop-down list.
Display the Tasks tab and query for the following two tasks:
PLP_CustomerStatusHistoryFact_New_Customers_Load
PLP_CustomerStatusHistoryFact_Status_Revalidate
For each task, display the Parameters subtab, and enter an appropriate value in the Value field.
Make sure you set the same value for both tasks.
Save the changes.
To configure the Inventory Monthly Balance (W_INVENTORY_DAILY_BALANCE_F_A1) aggregate table, you need to consider the aggregation level, the time period to update the aggregation, and the time period to keep records in the Inventory Balance table.
You need to set the following three parameters to configure the Inventory Monthly Balance table:
GRAIN
The GRAIN parameter has a preconfigured value of Month. The possible values for the GRAIN parameter are:
DAY
WEEK
MONTH
QUARTER
YEAR
KEEP_PERIOD
The KEEP_PERIOD parameter has a preconfigured value of Month. Values for the KEEP_PERIOD parameter include:
DAY
WEEK
MONTH
QUARTER
YEAR
NUM_OF_PERIOD
The NUM_OF_PERIOD parameter has a preconfigured value of 3. The value for the NUM_OF_PERIOD parameter is a positive integer, for example, 1, 2, 3, and so on.
Note:
If you need "Inventory Turns" data for a period older than 3 months, you must change the parameter values for KEEP_PERIOD and NUM_OF_PERIOD. For example, if you want data for the last 3 years, then set KEEP_PERIOD to YEAR and NUM_OF_PERIOD to 3.Before you run the initial ETL session or incremental ETL sessions to load the Inventory Monthly Balance table, configure the Inventory Monthly Balance as follows.
To configure the Inventory Monthly Balance:
In DAC, go to the Design view, and select the appropriate custom container from the drop-down list.
Display the Tasks tab.
For each of the following tasks, display the Parameters subtab and create the specified parameter name and parameter value:
PLP_InventoryMonthlyBalance $$GRAIN 'MONTH'
PLP_InventoryDailyBalance_Trim $$KEEP_PERIOD 'MONTH'
PLP_InventoryDailyBalance_Trim $$NUM_OF_PERIOD 3
To incrementally refresh the Inventory Monthly Balance table:
Delete the records from the Monthly Balance (W_INVENTORY_MONTHLY_BAL_F) aggregate table for a certain time period.
The GRAIN parameter determines the time period for the deletion. For example, if GRAIN=MONTH, and the date is May 15, 2005, then all records for April and the current month (May) are deleted in the Monthly Balance (W_INVENTORY_MONTHLY_BAL_F) table.
Running the PLP_InventoryMonthlyBalance workflow implements this step.
Retrieve the records in the Inventory Balance (W_INVENTORY_DAILY_BALANCE_F) fact table and load the records to the Monthly Balance (W_INVENTORY_MONTHLY_BAL_F) table at a certain grain level.
For example, if GRAIN=MONTH, then the month end balance records in the W_INVENTORY_DAILY_BALANCE_F fact table are stored in and aggregated to the Monthly Balance (W_INVENTORY_MONTHLY_BAL_F).
For the current month's balance, balance records of the previous day (if it is in the same month) are deleted from W_INVENTORY_MONTHLY_BAL_F, and balance records of the current day are loaded from W_INVENTORY_DAILY_BALANCE_F to W_INVENTORY_MONTHLY_BAL_F.
Running the PLP_InventoryMonthlyBalance workflow (which runs the S_M_PLP_INV_BALANCE_A1_AGG session and the M_PLP_INV_BALANCE_A1_AGG mapping) implements this step.
Remove the old records from the W_INVENTORY_DAILY_BALANCE_F fact table.
To remove old records, you use the KEEP_PERIOD and NUM_OF_PERIOD parameters. For example, if KEEP_PERIOD=MONTH, NUM_OF_PERIOD=1, and the date is May 15, 2005, then the records for April and the current month (May) are kept and all older records are deleted.
Running the PLP_InventoryDailyBalance_Trim workflow implements this step.
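For illustration only, the retention rule in the example above (KEEP_PERIOD=MONTH, NUM_OF_PERIOD=1, run on May 15, 2005) is equivalent to the following sketch, in which the BALANCE_DT column name is an assumption; the actual trimming is performed by the workflow named above:
DELETE FROM W_INVENTORY_DAILY_BALANCE_F
WHERE  BALANCE_DT < TRUNC(ADD_MONTHS(SYSDATE, -1), 'MM');   -- deletes rows older than April 1, keeping April and May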
Note:
The trimming process reduces the amount of data in the table. Note that you will not be able to see the old daily balance records after trimming; however, you will still be able to see the month-end balances. Therefore, ensure that you adjust the NUM_OF_PERIOD value to reflect your data volume and data recency requirements.
There are two aggregation scenarios to configure for the Product Transaction aggregate (W_PRODUCT_XACT_A) table: the initial ETL run and the incremental ETL run.
For your initial ETL run, you need to configure the aggregation level, and the length of history kept in the Product Transaction fact table.
For your initial ETL run, you need to configure the aggregation grain, using the GRAIN parameter.
For the incremental ETL run, you need to configure the aggregation level, the update period in aggregation, and the length of history kept in the Product Transaction fact table, using the following parameters:
GRAIN
The GRAIN parameter specifies the aggregation level. Valid values are DAY, WEEK, MONTH (preconfigured value), QUARTER, YEAR.
REFRESH_PERIOD
The REFRESH_PERIOD parameter, together with NUM_OF_PERIOD, indicates the number of periods of records that are refreshed from the transaction table to the aggregate table. Valid values are DAY, WEEK, MONTH (preconfigured value), QUARTER, and YEAR.
NUM_OF_PERIOD
The NUM_OF_PERIOD parameter, together with REFRESH_PERIOD, indicates the number of periods of records that are refreshed from the transaction table to the aggregate table. Valid values are positive integers, for example, 1, 2, 3 (preconfigured value).
Before you run the initial ETL and then the incremental ETL to load the Product Transaction aggregate table, you need to configure the Product Transaction Aggregate Table, as follows.
To configure the Product Transaction Aggregate Table:
In DAC, go to the Design view, and select the appropriate custom container from the drop-down list.
Display the Tasks tab.
Locate the Task named PLP_ProductTransactionAggregate, display the Parameters subtab, and ensure that the following three parameters are set as specified:
$$REFRESH_PERIOD = 'MONTH'
$$GRAIN = 'MONTH'
$$NUM_OF_PERIOD = 3
Note: If any of these parameters do not exist, create them as Data Type = Text with the specified Values.
To configure the Product Transaction aggregate table for the initial ETL run:
Retrieve the records in the Product Transaction fact (W_PRODUCT_XACT_F) table, and aggregate the records to the Product Transaction aggregate (W_PRODUCT_XACT_A) table at a certain grain level.
For example, if GRAIN=MONTH then the records in the W_PRODUCT_XACT_F fact table are retrieved and aggregated to the W_PRODUCT_XACT_A table at a monthly level.
Running the PLP_ProductTransactionAggregate workflow implements this step.
To configure the Product Transaction aggregate table for the incremental ETL run:
Delete the refreshed records from the Product Transaction aggregate (W_PRODUCT_XACT_A) table for a certain time.
The REFRESH_PERIOD and the NUM_OF_PERIOD parameters determine the time period for the deletion.
For example, if REFRESH_PERIOD=MONTH, NUM_OF_PERIOD=1, and the date is May 15, 2005, then all records for April and the current month (May) are deleted in the W_PRODUCT_XACT_A table.
Running the PLP_ProductTransactionAggregate workflow implements this step.
Retrieve the records in the Product Transaction fact (W_PRODUCT_XACT_F) table, and aggregate the records to the W_PRODUCT_XACT_A table at a certain grain level.
For example, if GRAIN=MONTH then the records in the W_PRODUCT_XACT_F fact table are retrieved and aggregated to the W_PRODUCT_XACT_A table at a monthly level.
Running the PLP_ProductTransactionAggregate workflow implements this step.
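For illustration only, the refresh deletion described in the first step above (REFRESH_PERIOD=MONTH, NUM_OF_PERIOD=1, run on May 15, 2005) corresponds to the following sketch; the XACT_DT column name is an assumption, and the actual deletion is performed by the PLP_ProductTransactionAggregate workflow:
DELETE FROM W_PRODUCT_XACT_A
WHERE  XACT_DT >= TRUNC(ADD_MONTHS(SYSDATE, -1), 'MM');   -- removes April and May rows so they can be re-aggregated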
This section contains configuration steps that apply to Oracle EBS. It contains the following topics:
Section 6.3.2.1, "About Configuring Sales Order Lines Data Storage"
Section 6.3.2.3, "How to Configure Sales Schedule Lines Data Storage"
Section 6.3.2.4, "About Loading Bookings at the Schedule Line Level"
Section 6.3.2.5, "How to Configure Early and Late Tolerances for Shipments"
Section 6.3.2.6, "How to Configure Sales Invoice Lines Data Storage"
Section 6.3.2.7, "How to Configure the Sales Invoice Extract"
Section 6.3.2.10, "About Tracking Attribute Changes in Bookings"
Section 6.3.2.11, "About Viewing the Data Warehouse Changes by Salesperson ID"
Section 6.3.2.12, "How to Configure Different Types of Backlog Calculations"
Section 6.3.2.13, "Adding Closed Orders to Backlog Calculations"
Section 6.3.2.14, "How to Configure the Bill of Materials Explosion Options"
Section 6.3.2.15, "How to Configure the Left Bound and Right Bound Calculation Option"
Sales order lines are the itemized lines that make up a sales order. This information is stored in the W_SALES_ORDER_LINE_F table.
By default, only booked orders are extracted from the Oracle source system as shown in Figure 6-1. Therefore, all orders loaded into the Sales Order Lines, Sales Schedule Lines, and Sales Booking Lines tables are booked.
However, if you want to load non-booked orders into the Sales Order Lines and Sales Schedule Lines tables, you have to configure the extract so that it does not filter out non-booked orders. In Oracle 11i and Oracle R12, the OE_ORDER_LINES_ALL.BOOKED_FLAG = 'Y' condition indicates that an order is booked; therefore, this statement is used to filter out non-booked orders. To load all orders, including non-booked orders, remove the filter condition from the Source SQL query in the following objects:
SDE_ORA_SalesOrderLinesFact (mapping)
SDE_ORA_SalesOrderLinesFact_Primary (mapping)
SDE_ORA_SalesOrderLinesFact_Full (session)
If you include non-booked orders in the Sales Order Lines and Sales Schedule Lines tables, you have to exclude non-booked orders when you populate the Sales Booking Lines table from the Sales Order Lines or from the Sales Schedule Lines. You can do this by adding the W_SALES_ORDER_LINE_F.BOOKING_FLG = 'Y' condition to the Source SQL query in the following mappings (a sketch follows the list):
SIL_SalesBookingLinesFact_Load_OrderLine_Credit, SIL_SalesBookingLinesFact_Load_OrderLine_Debit
SIL_SalesBookingLinesFact_Load_ScheduleLine_Credit, SIL_SalesBookingLinesFact_Load_ScheduleLine_Debit
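For example, a sketch of how the Source Filter might change; the existing DELETE_FLG filter shown here is hypothetical, and your actual filter text will differ:
Before: W_SALES_ORDER_LINE_F.DELETE_FLG = 'N'
After:  W_SALES_ORDER_LINE_F.DELETE_FLG = 'N' AND W_SALES_ORDER_LINE_F.BOOKING_FLG = 'Y'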
Figure 6-1 Handling Booked and Non-Booked Orders
By default, only booked orders are loaded into the Sales Order Lines (W_SALES_ORDER_LINE_F), Sales Schedule Lines (W_SALES_SCHEDULE_LINE_F), and Sales Booking Lines (W_SALES_BOOKING_LINE_F) tables. However, you can also load non-booked orders into Sales Order Lines (W_SALES_ORDER_LINE_F) and Sales Schedule Lines (W_SALES_SCHEDULE_LINE_F), while loading only booked orders into Sales Booking Lines (W_SALES_BOOKING_LINE_F).
To include non-booked orders in the Sales Order Lines and Sales Schedule Lines tables (for incremental load):
In Informatica PowerCenter Designer, open the SDE_ORA115Version_Adaptor or SDE_ORAR12_Adaptor folder.
Open and check out the mplt_BC_ORA_SalesOrderLinesFact mapplet in the Mapplet Designer.
Double-click the SQ_BCI_SALES_ORDLNS Source Qualifier to open the Edit Transformations box.
Display the Properties tab.
For the Sql Query Transformation Attribute, the User Defined Join Transformation Attribute, and the Source Filter Transformation Attribute, do the following:
Select the down arrow in the Value field to display the SQL Editor box.
In the SQL box, remove the line AND OE_ORDER_LINES_ALL.BOOKED_FLAG='Y' (or OE_ORDER_LINES_ALL.BOOKED_FLAG='Y' AND) from the Sql Query Transformation Attribute, the User Defined Join Transformation Attribute, and the Source Filter Transformation Attribute. The User Defined Join Transformation Attribute and the Source Filter Transformation Attribute might not contain this line.
Click OK to save the changes.
Validate and save your changes to the repository, and check in the mapplet.
Open the mplt_BC_ORA_SalesOrderLinesFact_Primary mapplet in the Mapplet Designer, and repeat steps 3 - 6 for the Sales Order Lines Primary table.
To include non-booked orders in the Sales Order Lines and Sales Schedule Lines tables (for Full load):
In Informatica PowerCenter Workflow Manager, open the SDE_ORA115Version_Adaptor or SDE_ORAR12_Adaptor folder.
Drag the SDE_ORA_SalesOrderLinesFact_Full session to the Task Developer.
Check out and double-click the session to open the Edit Tasks window.
Display the Mapping tab and click mplt_BC_ORA_SalesOrderLinesFact.SQ_BCI_SALES_ORDLNS in the left pane.
For the Sql Query Attribute, the Source Filter Attribute, and the User Defined Join Attribute under Properties in the right pane, do the following:
Select the down arrow in the Value field to display the SQL Editor box.
In the SQL box, remove the line AND OE_ORDER_LINES_ALL.BOOKED_FLAG='Y' (or OE_ORDER_LINES_ALL.BOOKED_FLAG='Y' AND) from the Sql Query Attribute, the Source Filter Attribute, and the User Defined Join Attribute. The User Defined Join Attribute or the Source Filter Attribute might not contain this line.
Click OK to save the changes.
Validate and save your changes to the repository, and check in the session.
To include only booked orders in the Sales Booking Lines table:
In Informatica PowerCenter Designer, open the SILOS folder.
Open and check out the SIL_SalesBookingLinesFact_Load_OrderLine_Credit mapping in the Mapping Designer.
Double-click the SQ_W_SALES_ORDER_LINE_F Source Qualifier to open the Edit Transformations box.
Display the Properties tab.
Perform the following steps for the Source Filter Transformation Attribute, and for the Sql Query Transformation Attribute if it is not empty.
Select the down arrow in the Value field to display the SQL Editor box.
In the SQL box, add the condition W_SALES_ORDER_LINE_F.BOOKING_FLG = 'Y' (preceded by AND if there are existing filters) to the Sql Query Transformation Attribute and the Source Filter Transformation Attribute.
Note: Do not add this line to the Sql Query Transformation Attribute if it is empty. In that case, only add the line to the Source Filter Transformation Attribute.
Click OK to save the changes.
Validate and save your changes to the repository, and check in the mapping.
Repeat steps 2 - 6 for the SIL_SalesBookingLinesFact_Load_OrderLine_Debit, SIL_SalesBookingLinesFact_Load_ScheduleLine_Credit, and SIL_SalesBookingLinesFact_Load_ScheduleLine_Debit mappings.
Sales schedule lines detail when each order's items are slated for shipment. Each sales order is broken into sales order lines, and each sales order line can have multiple schedule lines.
For example, you might not have enough stock to fulfill a particular sales order line, therefore you create two schedules to fulfill it. One schedule ships what you currently have in stock, and the other schedule includes enough time for you to manufacture and ship the remaining items of the sales order line. This information is stored in the W_SALES_SCHEDULE_LINE_F table. This topic describes how to modify the type of information stored in this table.
As initially configured for Oracle 11i and Oracle R12, bookings are recorded at the Sales Order Line level. For each booked order, there is at least one row in the Bookings table, as shown in Figure 6-2.
Figure 6-2 Sales Order Lines and Bookings Table
There are two Subject Areas in the SDE_ORA115Version_Adaptor or SDE_ORAR12_Adaptor containers:
Enterprise Sales - Booking Lines & Order Lines
Enterprise Sales - Booking Lines & Schedule Lines
The Execution Plans that are installed with Oracle BI Applications use the Enterprise Sales - Booking Lines & Order Lines Subject Area by default. If you want to load booking lines at the schedule line level, create a new Execution Plan and include the Enterprise Sales - Booking Lines & Schedule Lines Subject Area instead of Enterprise Sales - Booking Lines & Order Lines.
Bookings may be recorded at the Sales Schedule Line level instead of the Sales Order Line level. At the Sales Schedule Line level, bookings provide a more granular view, as the orders are segmented by schedule line. Bookings recorded at the Schedule Line level provide one row in the Bookings table for each schedule line, as shown in Figure 6-3. Oracle Applications schedule lines have the same granularity as order lines. Therefore, if you pull booking lines from schedule lines, the booking lines are limited to scheduled order lines.
Figure 6-3 Bookings Recorded at the Schedule Line Level
You configure the definition of early and late shipments by editing the EXP_SALES_PCKLNS expression in the mplt_SA_ORA_SalesPickLinesFact mapplet. The mplt_SA_ORA_SalesPickLinesFact mapplet is used by the SDE_ORA_SalesPickLinesFact mapping.
This mapplet compares the pick date and ship date with the scheduled ship date to determine whether or not orders are late.
To configure early and late tolerances for shipments:
Open the file parameterfileOLTP.txt and locate the section [SDE_ORA_SalesPickLinesFact].
Edit the parameter for the tolerance you want to modify.
For example:
If you want to allow two days after the scheduled pick date before the pick is flagged as late, set $$PICK_LATE_TIME_TOL=2.
To set the number of days before a pick is flagged as early, set the value for $$PICK_EARLY_TIME_TOL.
To set the number of days before a pick is flagged as late, set the value for $$PICK_LATE_TIME_TOL.
If you want to change the shipping tolerances, set the values for the shipping parameters ($$SHIP_LATE_TIME_TOL and $$SHIP_EARLY_TIME_TOL).
Validate and save your changes to the parameter file.
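For example, the relevant section of parameterfileOLTP.txt might look like the following sketch; the tolerance values shown are illustrative, not shipped defaults:
[SDE_ORA_SalesPickLinesFact]
$$PICK_EARLY_TIME_TOL=1
$$PICK_LATE_TIME_TOL=2
$$SHIP_EARLY_TIME_TOL=1
$$SHIP_LATE_TIME_TOL=2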
Sales invoice lines are payments for items ordered by a customer. This information is stored in the W_SALES_INVOICE_LINE_F table. This topic describes how to modify the type of information stored in this table.
By default, the Oracle Supply Chain and Order Management Analytics application is configured to extract completed sales invoices when performing the Sales Invoice data extract. Oracle 11i and Oracle R12 use a flag to indicate whether a sales invoice is complete. In particular, completed sales invoices are those where RA_CUSTOMER_TRX_ALL.COMPLETE_FLAG = 'Y' in Oracle 11i and Oracle R12.
To extract incomplete sales invoices, as well as complete invoices, remove the extract filter statement.
To remove the extract filter for sales invoices:
In Informatica PowerCenter Designer, open the SDE_ORA115Version_Adaptor or SDE_ORAR12_Adaptor folder.
Open the mplt_BC_ORA_SalesInvoiceLinesFact mapplet in the Mapplet Designer.
Double-click the SQ_BCI_SALES_IVCLNS Source Qualifier to open the Edit Transformations box.
Display the Properties tab.
For the SQL Query Transformation Attribute, select the down arrow in the Value field to display the SQL Editor box.
In the SQL box, remove the line AND RA_CUSTOMER_TRX_ALL.COMPLETE_FLAG='Y'.
Validate and save your changes to the repository.
Repeat steps 2 - 7 for the mplt_BC_ORA_SalesInvoiceLinesFact_Primary mapplet.
The out-of-the-box ETL components for Backlogs and Cycle Lines assume that the Backlog, Picking, and Cycle Lines tables have been updated with shipping and invoicing information (for example, using Oracle EBS Interface programs). If the Oracle Order Line tables have not been updated with shipping and invoicing information, you need to update the out-of-the-box ETL components as follows:
To configure the Order Lines:
In Informatica PowerCenter Designer, open the PLP\Mappings folder.
Using the Mapping Designer, edit the PLP_SalesCycleLinesFact_Load mapping and the PLP_SalesCycleLinesFact_Load_Full mappings as follows:
Open the mapping in Mapping Designer.
Double-click the SQ_W_SALES_ORDER_LINE_F Source Qualifier to open the Edit Transformations box.
Display the Properties tab.
For the SQL Query Transformation Attribute, select the down arrow in the Value field to display the SQL Editor box.
In the SQL box, replace the SQL text 'X.TOTAL_SHIPPED_QTY' with 'PICKLINE.TOTAL_SHIPPED_QTY'.
In the SQL box, replace the SQL text 'X.TOTAL_INVOICED_QTY' with 'IVCLINE.TOTAL_INVOICE_QTY'.
Validate and save your changes to the repository.
In DAC, do the following:
Go to the Design view, and select the appropriate custom container from the drop-down list.
Display the Configuration Tags tab.
Query for the tag 'Sales PLP Optional Tasks'.
Display the Subject Areas subtab.
Activate the appropriate Subject Areas by clearing the Inactive check box.
In PowerCenter Designer, open the Source Qualifier in SDE_ORA_SalesPickLinesFact.
Modify the SQL Query as follows:
Add the following join condition:
AND WSH_DELIVERY_DETAILS.DELIVERY_DETAIL_ID = WSH_DELIVERY_ASSIGNMENTS.DELIVERY_DETAIL_ID (+)
AND WSH_DELIVERY_ASSIGNMENTS.DELIVERY_ID = WSH_NEW_DELIVERIES.DELIVERY_ID (+)
Nest the following filter condition within the parentheses:
OR WSH_NEW_DELIVERIES.LAST_UPDATE_DATE > TO_DATE('$$LAST_EXTRACT_DATE', 'MM/DD/YYYY HH24:MI:SS')
Change select OE_ORDER_LINES_ALL.ACTUAL_SHIPMENT_DATE to select WSH_NEW_DELIVERIES.INITIAL_PICKUP_DATE.
Select WSH_NEW_DELIVERIES.LAST_UPDATE_DATE and link it to EXP_SALES_PCKLNS.LAST_UPDATE_DATE1.
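After these changes, the nested incremental filter might read as follows (a sketch; it assumes the existing filter tests OE_ORDER_LINES_ALL.LAST_UPDATE_DATE, so verify it against the actual SQL Query in your repository):
( OE_ORDER_LINES_ALL.LAST_UPDATE_DATE > TO_DATE('$$LAST_EXTRACT_DATE', 'MM/DD/YYYY HH24:MI:SS')
  OR WSH_NEW_DELIVERIES.LAST_UPDATE_DATE > TO_DATE('$$LAST_EXTRACT_DATE', 'MM/DD/YYYY HH24:MI:SS') )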
The Oracle Supply Chain and Order Management Analytics application uses tables that are also used in the Oracle Financial Analytics application.
For Oracle 11i and Oracle R12, you need to use the following configuration steps for Oracle Financial Analytics to configure Oracle Supply Chain and Order Management Analytics:
Changes in booked orders are tracked in the Booking Lines table (W_SALES_BOOKING_LINE_F), not in the Sales Order Lines table (W_SALES_ORDER_LINE_F). By default, the only changes tracked in the W_SALES_BOOKING_LINE_F table are changes in the ordered amount, ordered quantity, or Booking ID. By default, the Booking ID is defined as:
TO_CHAR(INP_LINE_ID)||'~'||TO_CHAR(INP_INV_ITEM_ID)||'~'||TO_CHAR(INP_WAREHOUSE_ID)
A change in any of these fields results in another row in the W_SALES_BOOKING_LINE_F table. However, changes in any other fields do not result in a new row; instead, the existing information is overwritten with the changed information, and no history is kept for those other field values. If you want to track other changes, you can do so. For example, you might want to track changes to the sales representative who is handling the order. The ETL processes are prepackaged to overwrite sales representative changes; however, if you want to retain them, you must add the attribute to the Booking ID definition in the Booking ID expression in the source adapter mapplet (mplt_SA_ORA_SalesOrderLinesFact). The following section describes what happens if you modify the Booking ID to include the sales representative.
Assume you want to track changes to the sales representative for bookings and de-bookings so that you can better evaluate each representative's sales performance. To track changes by Salesperson ID, you have to modify the VAR_BOOKING_ID to use the value:
TO_CHAR(INP_LINE_ID)||'~'||TO_CHAR(INP_INV_ITEM_ID)||'~'||TO_CHAR(INP_WAREHOUSE_ID)||'~'||TO_CHAR(INP_SALESREP_ID)
For example, to edit the VAR_BOOKING_ID value, do the following:
In Informatica PowerCenter Designer, open the mplt_SA_ORA_SalesOrderLinesFact mapplet in the Mapplet Designer.
Double-click the MAPI_SALES_ORDLNS transformation to open the Edit Transformation box.
Display the Ports tab.
Select the EXP_SALES_ORDLNS transformation.
Edit the expression for the VAR_BOOKING_ID port.
The following paragraphs and tables describe what happens in the source system and the W_SALES_BOOKING_LINE_F table when you change sales representatives under this scenario.
Day 1: One order is placed with Salesperson 1001. The source system displays the information as shown in Table 6-10.
Table 6-10 Oracle 11i and Oracle R12: Source System Table Row After Day One Activity
Sales Order Number | Sales Order Line Number | Salesperson ID | Quantity | Selling Price | Date |
---|---|---|---|---|---|
1 | 1 | 1001 | 100 | 25 | 1-June-2000 |
The row in Table 6-10 is entered into the Bookings table (W_SALES_BOOKING_LINE_F) as shown in Table 6-11.
Table 6-11 Oracle 11i and Oracle R12: W_SALES_BOOKING_LINE_F Table Row After Day One Activity
SALES_ORDER_NUM | SALES_ORDER_ITEM | SALESREP_ID | SALES_QTY | NET_DOC_AMT | BOOKED_ON_DT |
---|---|---|---|---|---|
1 | 1 | 1001 | 100 | 2500 | 1-June-2000 |
Day 2: Salesperson 1002 takes over this order, replacing Salesperson 1001. Thus, the salesperson associated with the order is changed from 1001 to 1002 in the source system. The row in the source system looks like the row shown in Table 6-12.
Table 6-12 Oracle 11i and Oracle R12: Source System Table Row After Day Two Activity
Sales Order Number | Sales Order Line Number | Salesperson ID | Quantity | Selling Price | Date |
---|---|---|---|---|---|
1 | 1 | 1002 | 100 | 25 | 2-June-2000 |
The SIL_SalesBookingLinesFact_Load_OrderLine_Credit mapping, which also writes to the Bookings table, now performs a de-booking for the old line, and SIL_SalesBookingLinesFact_Load_OrderLine_Debit inserts a new row into the W_SALES_BOOKING_LINE_F Bookings table. On day two, the rows in the W_SALES_BOOKING_LINE_F table look like those shown in Table 6-13.
Table 6-13 Oracle 11i and Oracle R12: W_SALES_BOOKING_LINE_F Table Rows After Day Two Activity
SALES_ORDER_NUM | SALES_ORDER_ITEM | SALESREP_ID | SALES_QTY | NET_DOC_AMT | BOOKED_ON_DT |
---|---|---|---|---|---|
1 | 1 | 1001 | 100 | 2500 | 1-June-2000 |
1 | 1 | 1001 | -100 | -2500 | 2-June-2000 |
1 | 1 | 1002 | 100 | 2500 | 2-June-2000 |
Backlog information is stored in the W_SALES_BACKLOG_LINE_F and W_SALES_BACKLOG_HISTORY_F tables. This topic describes how to modify the type of information stored in these tables. Many types of backlog exist in the Oracle Supply Chain and Order Management Analytics application: financial backlog, operational backlog, delinquent backlog, scheduled backlog, unscheduled backlog, and blocked backlog. Each type of backlog is defined by two particular dates in the sales process; therefore, backlog calculations touch multiple fact tables.
For example, financial backlog records items that have been ordered but for which payment has not been received. Thus, to calculate the number of financial backlog items, you use the Sales Order Lines table (to determine which items have been ordered) and the Sales Invoice Lines table (to see which orders have been paid for). Using these two tables, you can determine the number of items on financial backlog and the value of those items.
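A rough sketch of that calculation (for illustration only; the quantity column names and join keys are assumptions, and the shipped PLP mappings implement this differently):
SELECT ord.SALES_ORDER_NUM,
       SUM(ord.ORDERED_QTY - NVL(inv.INVOICED_QTY, 0)) AS FIN_BACKLOG_QTY   -- ordered but not yet invoiced
FROM   W_SALES_ORDER_LINE_F ord
LEFT OUTER JOIN (
         SELECT SALES_ORDER_NUM, SALES_ORDER_ITEM, SUM(INVOICED_QTY) AS INVOICED_QTY
         FROM   W_SALES_INVOICE_LINE_F
         GROUP  BY SALES_ORDER_NUM, SALES_ORDER_ITEM
       ) inv
       ON  inv.SALES_ORDER_NUM  = ord.SALES_ORDER_NUM
       AND inv.SALES_ORDER_ITEM = ord.SALES_ORDER_ITEM
GROUP  BY ord.SALES_ORDER_NUM;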
By default, the Oracle Supply Chain and Order Management Analytics application extracts only open sales orders from the Sales Order Lines (W_SALES_ORDER_LINE_F) and Sales Schedule Lines (W_SALES_SCHEDULE_LINE_F) tables for backlog calculations to populate the Backlog tables. Open sales orders are defined as orders that are neither canceled nor complete. The reason for extracting only open orders is that in most organizations, closed orders are no longer part of backlog. However, if you want to extract sales orders that are marked as closed, you can remove the default filter condition from the extract mapping.
For example, assume your customer orders ten items. Six items are invoiced and shipped, but four items are placed on operational and financial backlog. This backlog status continues until one of two things happens:
The items are eventually shipped and invoiced.
The remainder of the order is canceled.
If you choose to extract sales orders that are flagged as closed, you must remove the condition in the Backlog flag. To do so, use the following procedure.
The BACKLOG_FLAG in the W_SALES_ORDER_LINE_F table is also used to identify which sales orders are eligible for backlog calculations. By default, all sales order types have their Backlog flag set to 'Y'. As a result, all sales orders are included in backlog calculations.
To remove open order extract filters:
In Informatica PowerCenter Designer, open the SDE_ORA115Version_Adaptor or SDE_ORAR12_Adaptor folder.
Open the mplt_BC_ORA_SalesOrderLinesFact mapplet in the Mapplet Designer.
Double-click the EXP_SALES_ORDLNS, and display the Ports tab.
Edit the VAR_OPR_BACKLOG_FLG and remove OPEN_FLAG='Y'.
Edit the VAR_FIN_BACKLOG_FLG and remove OPEN_FLAG='Y'.
Open the mplt_BC_ORA_SalesScheduleLinesFact mapplet in the Mapplet Designer.
Double-click the EXP_SALES_SCHLNS, and display the Ports tab.
Edit the VAR_OPR_BACKLOG_FLG and remove OPEN_FLAG='Y'.
Edit the VAR_FIN_BACKLOG_FLG and remove OPEN_FLAG='Y'.
Validate and save your changes to the repository.
Open the PLP folder.
Open the mappings PLP_SalesBacklogLinesFact_LoadOrderLines and PLP_SalesBacklogLinesFact_LoadScheduleLines.
Remove the condition W_STATUS_CODE <> 'Closed' from the SQL Query in the Source Qualifier.
The Bill of Materials (BOM) functional area enables you to analyze the components that comprise the finished goods. BOM enables you to determine how many products use a certain component. It also enables you to get visibility into the complete BOM hierarchy for a finished product. In order to explode BOM structures, certain objects need to be deployed in your EBS system.
Note that to run the ETL as the apps_read_only user, you must first run the following DCL commands from the APPS schema:
Grant insert on opi.opi_obia_w_bom_header_ds to &read_only_user;
Grant analyze any to &read_only_user;
Note:
If you plan to consume analytics on Bill of Materials (BOM), you need to include the DAC Subject Area "Supply Chain - BOM Items" in your ETL Execution Plan. However, the ETL process for extracting BOM data from Oracle EBS into the OBAW data warehouse is not designed for very high volumes of BOM data. There are known performance issues with the initial ETL load for very large BOM data sizes (for example, more than 100,000 rows of data in EBS BOM). Oracle recommends that you restrict the BOM data size in the initial ETL load by applying appropriate filters (for example, On time or Inventory Organization).
You can explode the BOM structure with three different options:
All. All the BOM components are exploded regardless of their effective date or disable date. To explode a BOM component is to expand the BOM tree structure.
Current. The incremental extract logic considers any changed components that are currently effective, any components that are effective after the last extraction date, or any components that are disabled after the last extraction date.
Current and Future. All the BOM components that are effective now or in the future are exploded. The disabled components are left out.
These options are controlled by the EXPLODE_OPTION variable. The EXPLODE_OPTION variable is preconfigured with a value of 2, explode Current BOM structure.
In the SDE_ORA_BOM_Explosion mapping, the OPI_OBIA_BOMPEXPL_KPG stored procedure calls the bompexpl.exploder_userexit stored procedure to explode the BOM structure. Table 6-14 lists the variables for the bompexpl.exploder_userexit stored procedure.
Table 6-14 Variables for the bompexpl.exploder_userexit Stored Procedure
Input Variable | Preconfigured Value | Description |
---|---|---|
VERIFY_FLAG | 0 | A verify flag with a value of 1 only applies to standard BOM. |
ORG_ID | ORGANIZATION_ID | Organization ID. |
ORDER_BY | 1 | Controls the order of the records. 1—Operation Sequence Number, Item Number. 2—Item Number, Operation Sequence Number. |
GRP_ID | Negative Sequence ID (-1, -2, and so on) | Unique value to identify the current explosion. |
SESSION_ID | Negative Sequence ID (-1, -2, and so on) | Unique value to identify the current session. |
LEVELS_TO_EXPLODE | 10 | Levels to explode. |
BOM_OR_ENG | 1 | 1—BOM. 2—ENG. |
IMPL_FLAG | 1 | 1—Implemented Only. 2—Implemented and Non-implemented. |
PLAN_FACTOR | 2 | 1—Yes. 2—No. |
EXPLODE_OPTION | 2 | 1—All. 2—Current. 3—Current and Future. |
MODULE | 2 | 1—Costing. 2—BOM. 3—Order Entry. 4—ATO. 5—WSM. |
CST_TYPE_ID | 0 | Cost type ID for cost explosion. |
STD_COMP_FLAG | 0 | 1—Explode only standard components. 2—All components. |
EXPL_QTY | 1 | Explosion quantity. |
ITEM_ID | ROUND(TO_DECIMAL(PRODUCT_ID)) | Item ID of assembly to explode. |
ALT_DESG | ALTERNATE_BOM_DESIGNATOR | Alternate routing designator. |
COMP_CODE | NULL | Concatenated component code. |
REV_DATE | TO_CHAR(CREATION_DT, 'YYYY/MM/DD HH24:MI') | Explosion date in YYYY/MM/DD HH24:MI format. |
There are five different BOM types in a source system: 1 - Model, 2 - Option Class, 3 - Planning, 4 - Standard, and 5 - Product Family. By default, only the Standard BOM type is extracted and exploded.
To configure the BOM explosion to the All option:
In Informatica PowerCenter Designer, open the SDE_ORAVersion_Adaptor.
Go to Mapplet Designer, and open mplt_BC_ORA_BOMHeaderDimension.
Double-click the SQL qualifier SQ_BOM_INVENTORY_COMPONENTS to open the Edit Transformations dialog, display the Properties tab, and open the value for SQL Query.
Modify the following default WHERE condition from:
((  /* CURRENT valid component changed */
    INV.LAST_UPDATE_DATE > TO_DATE('$$LAST_EXTRACT_DATE','MM/DD/YYYY HH24:MI:SS')
    AND (INV.EFFECTIVITY_DATE <= TO_DATE('$$CURRENT_DATE','MM/DD/YYYY HH24:MI:SS')
         AND (INV.DISABLE_DATE > TO_DATE('$$CURRENT_DATE','MM/DD/YYYY HH24:MI:SS')
              OR INV.DISABLE_DATE IS NULL))
    OR  /* Component that became effective after last extract date and before today's extract, for CURRENT Option */
    INV.EFFECTIVITY_DATE between TO_DATE('$$LAST_EXTRACT_DATE','MM/DD/YYYY HH24:MI:SS') and TO_DATE('$$CURRENT_DATE','MM/DD/YYYY HH24:MI:SS')
    OR  /* Component that became disabled after last extract date and before today's extract, for CURRENT and CURRENT-FUTURE Option */
    INV.DISABLE_DATE between TO_DATE('$$LAST_EXTRACT_DATE','MM/DD/YYYY HH24:MI:SS') and TO_DATE('$$CURRENT_DATE','MM/DD/YYYY HH24:MI:SS')
 )
 OR BOM.LAST_UPDATE_DATE > TO_DATE('$$LAST_EXTRACT_DATE','MM/DD/YYYY HH24:MI:SS'))
GROUP BY
To:
(INV.LAST_UPDATE_DATE > TO_DATE('$$LAST_EXTRACT_DATE','MM/DD/YYYY HH24:MI:SS')
 OR BOM.LAST_UPDATE_DATE > TO_DATE('$$LAST_EXTRACT_DATE','MM/DD/YYYY HH24:MI:SS'))
GROUP BY
Click Apply, and Validate the mapping and save your changes to the repository.
In DAC, go to the Design view, and select the appropriate custom container from the drop-down list.
Display the Tasks tab.
Query for task SDE_ORA_BOMItemFact, and then display the parameters subtab.
Select the parameter $$EXPLODE_OPTION, edit the Value field and change it to 1, and save.
To configure the BOM explosion to the Current and Future option:
In Informatica PowerCenter Designer, open the SDE_ORAVersion_Adaptor.
Go to Mapplet Designer, and open mplt_BC_ORA_BOMHeaderDimension.
Double-click the SQL qualifier SQ_BOM_INVENTORY_COMPONENTS to open the Edit Transformations dialog, display the Properties tab, and open the value for SQL Query.
Modify the following default WHERE condition from:
((  /* CURRENT valid component changed */
    INV.LAST_UPDATE_DATE > TO_DATE('$$LAST_EXTRACT_DATE','MM/DD/YYYY HH24:MI:SS')
    AND (INV.EFFECTIVITY_DATE <= TO_DATE('$$CURRENT_DATE','MM/DD/YYYY HH24:MI:SS')
         AND (INV.DISABLE_DATE > TO_DATE('$$CURRENT_DATE','MM/DD/YYYY HH24:MI:SS')
              OR INV.DISABLE_DATE IS NULL))
    OR  /* Component that became effective after last extract date and before today's extract, for CURRENT Option */
    INV.EFFECTIVITY_DATE between TO_DATE('$$LAST_EXTRACT_DATE','MM/DD/YYYY HH24:MI:SS') and TO_DATE('$$CURRENT_DATE','MM/DD/YYYY HH24:MI:SS')
    OR  /* Component that became disabled after last extract date and before today's extract, for CURRENT and CURRENT-FUTURE Option */
    INV.DISABLE_DATE between TO_DATE('$$LAST_EXTRACT_DATE','MM/DD/YYYY HH24:MI:SS') and TO_DATE('$$CURRENT_DATE','MM/DD/YYYY HH24:MI:SS')
 )
 OR BOM.LAST_UPDATE_DATE > TO_DATE('$$LAST_EXTRACT_DATE','MM/DD/YYYY HH24:MI:SS'))
GROUP BY
To:
((  INV.LAST_UPDATE_DATE > TO_DATE('$$LAST_EXTRACT_DATE','MM/DD/YYYY HH24:MI:SS')
    AND ((INV.DISABLE_DATE > TO_DATE('$$CURRENT_DATE','MM/DD/YYYY HH24:MI:SS')
          OR INV.DISABLE_DATE IS NULL))
    OR INV.DISABLE_DATE between TO_DATE('$$LAST_EXTRACT_DATE','MM/DD/YYYY HH24:MI:SS') and TO_DATE('$$CURRENT_DATE','MM/DD/YYYY HH24:MI:SS')
 )
 OR BOM.LAST_UPDATE_DATE > TO_DATE('$$LAST_EXTRACT_DATE','MM/DD/YYYY HH24:MI:SS'))
GROUP BY
Click Apply, and Validate the mapping and save your changes to the repository.
In DAC, go to the Design view, and select the appropriate custom container from the drop-down list.
Display the Tasks tab.
Query for task SDE_ORA_BOMItemFact, and then display the parameters subtab.
Select the parameter $$EXPLODE_OPTION, edit the Value field and change it to 3, and save.
To configure the BOM type:
In Informatica PowerCenter Designer, open the SDE_ORAVersion_Adaptor.
Open the mplt_BC_ORA_BOMHeaderDimension mapplet.
Double-click the SQL qualifier SQ_BOM_INVENTORY_COMPONENTS to open the Edit Transformations dialog, display the Properties tab, and open the value for SQL Query.
Modify the BOM_ITEM_TYPE section in the WHERE statement to change the number to your BOM type. For example, change the number to 3 for a Planning BOM type, as follows:
Where INV.BOM_ITEM_TYPE = 3 AND M.BOM_ITEM_TYPE = 3 AND
Note:
You can also remove these two filters to extract all types of BOM.
Click Apply, and Validate the mapping and save your changes to the repository.
You can use the left bound and right bound calculation to expedite some reporting requirements. For example, you can find the components in a subassembly within a finished product. Each BOM node has one row of data in the W_BOM_ITEM_F table, which stores the node's left bound and right bound values. The COMPUTE_BOUNDS stored procedure traverses the exploded BOM tree structure and calculates the left bound and right bound for each node. By default, the COMPUTE_BOUNDS stored procedure is turned off, and the W_BOM_ITEM_F.LEFT_BOUNDS and W_BOM_ITEM_F.RIGHT_BOUNDS columns are empty.
Figure 6-4 illustrates a sample BOM structure with the left bound and right bound values listed for each node. To find all the components under node B, you select the components whose top product key value is A, whose left bound value is greater than 2, and whose right bound value is less than 17.
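For example, a sketch of such a query; the top product key column name (TOP_LVL_PRODUCT_WID here) and the key value for product A are assumptions, while LEFT_BOUNDS and RIGHT_BOUNDS are the columns populated by the procedure:
SELECT *
FROM   W_BOM_ITEM_F
WHERE  TOP_LVL_PRODUCT_WID = 1001   -- hypothetical key for finished product A
AND    LEFT_BOUNDS  > 2             -- node B's left bound
AND    RIGHT_BOUNDS < 17;           -- node B's right bound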
You can use the following procedure to turn on the left bound and the right bound calculation and populate the W_BOM_ITEM_F.LEFT_BOUNDS and W_BOM_ITEM_F.RIGHT_BOUNDS columns.
Note: Before you run an ETL using a BOM, you must compile and deploy the SQL code in Compute_Bounds_Ora11i.sql. See Section 6.2.3.11, "How to Deploy the Stored Procedure for the Left Bound and Right Bound Calculation Option" for more information.
To configure the left bound and right bound calculation option:
In Informatica PowerCenter Designer, navigate to the SILOS folder and edit the SIL_BOMItemFact mapping.
Double-click the COMPUTE_BOUNDS stored procedure transformation to open the Edit Transformations dialog, and display the Properties tab.
Change the value of the Call Text attribute to 'compute_bounds_ora11i(1)'.
Click Apply.
Validate the mapping, and save your changes to the repository.