Oracle® Business Intelligence Applications Release Notes Version 7.9.4 E10921-03
These release notes describe known issues and workarounds for Oracle Business Intelligence Applications Version 7.9.4, and contain the following sections:
These release notes are updated periodically as new information becomes available. To ensure that you are reading the latest version of the release notes, check the Oracle Business Intelligence Applications Documentation Web site:
This section describes general issues and workarounds for Oracle Business Intelligence Applications products. It contains the following topics:
For certification information, refer to the System Requirements and Supported Platforms for Oracle Business Intelligence Applications document. This document is part of the Oracle Business Intelligence Applications documentation set. It is also available from the Certify tab on Oracle Metalink at the following location:
This section provides release notes on installing and upgrading Oracle Business Intelligence Applications. It contains the following topics:
If you are upgrading from a previous release, then your upgrade preparations should include plans for side-by-side instances of the entire Oracle BI Applications environment, not just the Oracle Business Intelligence Presentation Catalog.
Enabling side-by-side instances of the entire Oracle BI Applications environment is a critical success factor for upgrade.
This section provides corrections for various pieces of the documentation set for Oracle Business Intelligence Applications. It contains the following topics:
Note the following corrections to the Oracle Business Intelligence Applications Installation and Configuration Guide:
In the Preface, the URL link to the OTN index page contains an extraneous space character before the 'www' part, and should be:
http://www.oracle.com/technology/about/index.html
instead of:
http:// www.oracle.com/technology/about/index.html
Selecting this URL link in the PDF version of this guide will correctly display the OTN page in a browser. Selecting this URL link in the HTML version of this guide will not display the OTN page in a browser.
Not applicable to Oracle Business Intelligence Applications Version 7.9.4.
Not applicable to Oracle Business Intelligence Applications Version 7.9.4.
This section provides release notes for Oracle Business Intelligence Applications in general. It contains the following topics:
Section 1.3.1, "Issue Including Subject Areas from Two Different Sources into One Execution Plan"
Section 1.3.2, "Incremental Aggregation Tasks Can Run Slowly in Multisource ETL Situation"
Section 1.3.5, "Lack of Time Zone Setting Prevents Delivery of iBots to Applications Users"
Section 1.3.6, "Large Dimensional Tables Causing Slow Fact Lookups"
Section 1.3.8, "Issue with Exchange Rates and Transaction Currencies"
Section 1.3.9, "Incorrect Default Physical SQL for Saved Result Sets on Teradata"
Section 1.3.10, "Issue with Mapping in Oracle Business Intelligence Applications 7.7.1.x Versions"
Section 1.3.11, "Issues Inserting Tables on Infa Server in a Non-English Language Deployment"
Section 1.3.12, "Restoring the Informatica Repository in a Non-English Language Deployment"
Section 1.3.14, "Issue with Reports in Oracle Business Intelligence Presentation Catalog"
Section 1.3.16, "Invalid Expressions In MPLT_SA_ORA_SALESINVOICELINESFACT"
Section 1.3.17, "Forecasting Interoperability With Siebel OLTP 8.0"
Section 1.3.20, "Using An ODBC Connection During An Upgrade"
Section 1.3.23, "Extraneous Dimension in Campaign Contacts_Segmentation Subject Area"
Data Warehouse Administration Console allows customers to build execution plans that are comprised of subject areas from different source system containers. However, such behavior is not supported in this release. For example, you cannot have a subject area from Oracle Applications 11.5.8 and Siebel Applications 7.8 in the same execution plan.
To work around this issue, create separate execution plans for subject areas in different containers. For example, suppose an execution plan has subject areas such as Sales, Inventory, and Receivables that come from an Oracle Applications 11.5.8 container. If the Sales subject area in Siebel Applications 7.8 must be populated, then it requires another execution plan.
Data Warehouse Administration Console runs full load or incremental load commands based on the dates when data was last extracted from a source table and when data was loaded into a target table.
Dependence on last extract or last load dates can become an issue when loading data from two discrete source systems for the first time. ETL from the first source runs as a full extract and full load. ETL from the second source runs as a full extract and as an incremental load. This can cause the second load to be slower than the first, because full load mappings are simple while incremental mappings (especially for aggregate tables) must handle delta aggregation, manipulation, and other complicated situations. Thus incremental loads are costlier.
Slow performance when running incremental ETL is not noticeable in daily ETL runs because of lower data volumes. But, in situations where data is extracted and loaded for the first time from two source systems one after the other, there will be a noticeable slowing down of the second load due to the high data volumes loaded in an incremental mode.
For example, suppose that three years of data is extracted and loaded from Oracle Applications 11.5.8 followed by a three-year load of data from Oracle Applications 11.5.10. The load from Oracle Applications 11.5.10 source will be noticeably slower.
This issue has no workaround.
You cannot use the Data Warehouse Administration Console repository in batch mode when building execution plans on an Oracle 9i Release 2 Database. If batch mode is enabled in this situation, then you might see an error message such as the following one:
MESSAGE:::Error while persisting dependency. EXCEPTION CLASS::: com.siebel.analytics.etl.execution.ExecutionPlanInitializationException
This error occurs because the Oracle 9i JDBC drivers do not handle array inserts or bulk inserts. The execution plan build works well without the bulk inserts.
To work around this issue, perform these steps:
Deselect the "Enable Batch Mode" box when building execution plans. The default is to enable the batch mode.
Use the JDBC driver that is available in Oracle Database 10g and connect to the DAC repository with a thin connection.
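For reference, a thin JDBC connection to the DAC repository database uses a URL of the following general form; the host, port, and SID shown are placeholders for your own environment, not values supplied with this release.

jdbc:oracle:thin:@<database_host>:<database_port>:<database_SID>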
The Oracle Business Intelligence Delivers iBots use a predefined query against the SA System subject area to retrieve a list of applications users who are associated with the iBot's Recipient Group. When an iBot is run, this predefined query will fail with an error message that is similar to the following one:
State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 27005] Unresolved column: "Time Zone".
The query error prevents the Oracle BI Delivers iBot from generating and delivering any content for the applications users that are associated with the iBot's Recipient Group. This impacts all iBot destination types, including Interactive Dashboards, Emails, and Disconnected Application cache generation for Oracle Business Intelligence Disconnected Analytics. The issue occurs because the "Time Zone" presentation column name is incorrectly spelled as "Timezone" in the SA System subject area.
To work around this issue, use the Oracle Business Intelligence Administration Tool to change the name of the existing "Timezone" presentation column in the SA System presentation catalog to "Time Zone".
For information on the SA System subject area, see the Oracle Business Intelligence Server Administration Guide.
The Oracle BI Delivers iBots use a predefined query against the SA System subject area to retrieve a list of applications users who are associated with the iBot's Recipient Group. When an iBot is run, users who do not have a time zone specified in user preferences are considered invalid users and iBots are not delivered to them.
This issue occurs because the join type for the S_TIMEZONE join in the S_USER Logical Table Source in the SA System Business Model is defined as INNER when it should be defined as RIGHT OUTER.
To work around this issue, perform these steps:
Display the Oracle Business Intelligence Administration Tool.
In the Business Model and Mapping layer, expand the SA System Business Model and the USER Logical Table.
Double-click the S_USER Logical Table Source under Sources in the USER Logical Table.
In the Logical Table Source - S_USER Dialog Box, change the type to RIGHT OUTER from INNER for the S_TIMEZONE join in the Joins section of the General tab.
For information on the SA System subject area, see the Oracle Business Intelligence Server Administration Guide.
Several dimensional tables that have more than 255 columns cause the fact lookups on these dimensions to perform slowly and the lookup database server to run out of memory. The following tables have been identified as experiencing this issue:
W_CUSTOMER_FIN_PROFL_D
W_CUSTOMER_LOC_D
W_CUSTOMER_LOC_USE_D
W_ORG_D
W_PRODUCT_D
W_SALES_PRODUCT_D
W_SUPPLIER_PRODUCT_D
To work around this issue and improve performance, complete the following steps:
Edit the slow running session.
For all transformations that look up the large dimensions that are listed in this section, change the following cache settings:
Index cache size = 10 MB
Data cache size = 20 MB
This is a session-side change. No modifications are required in mapping.
If the primary target table that is defined for a task is truncated during a full or incremental load, Data Warehouse Administration Console automatically drops and recreates the registered indexes on that table for better ETL performance. Data Warehouse Administration Console also gathers statistics on the ETL indexes automatically after recreating them on this table. Statistics help the database optimizer to select the best access path to query the data.
The underlying workflow for this task's full and incremental load command is SDE_ListOfValues, which consists of the SIL_ListOfValuesGeneral and SIL_CopyListOfValuesToOLTP sessions. The first session populates the W_LST_OF_VAL_G table, and the second session populates the S_ETL_LOV table from the W_LST_OF_VAL_G table. Because the S_ETL_LOV table is defined as the primary table for this task, Data Warehouse Administration Console does not analyze the W_LST_OF_VAL_G table.
Even if this task is modified to include the W_LST_OF_VAL_G table as another primary table, Data Warehouse Administration Console will not analyze this table until after executing the entire workflow that comprises these two sessions. This causes the task to hang while it extracts rows from the unanalyzed W_LST_OF_VAL_G table to populate the S_ETL_LOV table.
To work around this issue:
If necessary, make a copy of the deployed container where you want to apply this workaround. You cannot make changes to the out-of-the-box container.
Create two new workflows in the Informatica Workflow Manager: one for the SIL_ListOfValuesGeneral session and the other for the SIL_CopyListOfValuesToOLTP session. The workflow name should match the session name that is used inside the workflow.
Make two copies of the Fix List Of Values and Copy to OLTP task in Data Warehouse Administration Console. Rename the first copy as Fix List Of Values and the second copy as Copy List Of Values to OLTP. Verify that the properties for both of these tasks are set as follows:
Task 1: Fix List of Values
Command for Incremental Load: SIL_ListOfValuesGeneral
Command for Full Load: SIL_ListOfValuesGeneral
Folder Name: Extract
Primary Source: DBConnection_OLAP
Primary Target: DBConnection_OLTP
Task Phase: General
Execution Type: Informatica
Priority: 5
Source Tables: W_LST_OF_VAL_G (Primary), W_PARAM_G (Lookup)
Target: W_LST_OF_VAL_G (Primary).
Ensure that Truncate Always and Truncate For Full Load options are not selected.
Task 2: Copy List of Values to OLTP
Command for Incremental Load: SIL_CopyListOfValuesToOLTP
Command for Full Load: SIL_CopyListOfValuesToOLTP
Folder Name: Extract
Primary Source: DBConnection_OLAP
Primary Target: DBConnection_OLTP
Task Phase: General
Execution Type: Informatica
Priority: 5
Source Tables: W_LST_OF_VAL_G (Primary)
Target: S_LST_OF_VAL (Primary)
Ensure that Truncate Always and Truncate For Full Load options are selected.
Edit the TASK_GROUP_Extract_lst_of_val task group by removing the Fix List Of Values and Copy to OLTP task and adding the newly created tasks Fix List Of Values and Copy List Of Values to OLTP to this task group. Set the execution order for these tasks as follows:
Fix List Of Values: 1
Copy List Of Values to OLTP: 2
Reassemble the appropriate subject area to ensure that these new tasks are included in the subject area.
Rebuild the execution plan in which these tasks must be executed.
Current design and support of multiple currencies within the Oracle BI Applications and the data warehouse assumes that the transactional system (or OLTP system) provides exchange rates and table structures that store exchange rates.
If the OLTP system does not provide exchange rates from the "transaction currency" to the chosen "one or more data warehouse currencies", then the Fact table will have a null exchange rate value for "transaction" currency to "Global1" currency, and hence analysis based on Global currencies will not be possible for these transactions. This also affects the correctness of the data that resides in the various aggregate tables. The same issue applies to the other two supported currencies (Global2 and Global3).
To work around this issue, ensure that the OLTP system has all currency exchange rates from all possible transaction currencies added to all the three chosen data warehouse currencies, up front. If this is not taken care of beforehand and you encounter a missing exchange rate issue, then you can rerun transactions in "full" mode after you have fixed the missing exchange rate issue in the OLTP system.
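If you want to confirm whether any loaded transactions are missing a resolvable exchange rate before rerunning in full mode, a query along the following lines can help. This is only a sketch: the fact table and column names used here (W_GL_OTHER_F, DOC_CURR_CODE, GLOBAL1_EXCHANGE_RATE) are illustrative assumptions, so substitute the fact table and exchange rate columns that your warehouse actually uses.

-- Hypothetical table and column names; replace with those in your warehouse schema
SELECT DOC_CURR_CODE, COUNT(*) AS NUM_TRANSACTIONS
FROM W_GL_OTHER_F
WHERE GLOBAL1_EXCHANGE_RATE IS NULL
GROUP BY DOC_CURR_CODE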
When you set up segment Saved Result Sets in the Marketing Metadata on a Teradata data source, you find that the default SQL for inserting the saved result set header has incorrect syntax. The user will receive the following error message when trying to save a saved result set for a segment:
Odbc driver returned an error (SQLExecDirect). State: 37000. Code: 10034. [NQODBC] [SQL_STATE: 37000] [nQSError: 10034] The SQL statement has a syntax error. [nQSError: 43093] An error occurred while processing the EXECUTE PHYSICAL statement. [nQSError: 16001] ODBC error state: 37000 code: -3706 message: [NCR][ODBC Teradata Driver][Teradata RDBMS]Syntax error: expected something between the 'COUNT' keyword and ','.. [nQSError: 16014]
This issue is caused by the fact that in the default SQL, the COUNT keyword requires double quotes on Teradata databases. To work around this issue, perform the following steps:
In the Oracle BI Administration Tool, select Marketing from the Manage menu.
Double-click the Target Level and select the Saved Result Sets tab.
Modify the physical SQL that inserts the saved result set header by adding double quotes around the COUNT keyword.
The following example shows SQL code in which double quotes were included around the COUNT keyword.
INSERT INTO M_SR_HEADER (GUID, SEGMENT_PATH, DATE_TIME, TARGET_LEVEL, "COUNT", CREATED_BY) VALUES ('@{guid}', '@{segmentPath}', '@{createdTime}', '@{targetLevel}', @{count}, '@{createdBy}')
The "SIL_ListofValuesDimension_MissingActivityCodes" mapping, which was used to move unbounded LOV types from the S_LST_OF_VAL OLTP table to the data warehouse table, is not available. This issue does not affect customers who have bounded LOV type. However, if customers use unbounded LOV types for sra_resolution_cd, this mapping is still required in the 7.7.1.x versions.
To work around this issue, re-import the old mapping in the new environment and execute it as a one-time job, if your data source is Oracle's Siebel CRM Applications Version 7.7 or higher. If your data source is Oracle's Siebel CRM Applications Version 7.5.3 or lower, then run this job as part of the ETL process.
If you use the Informatica Infa server on a non-English language machine and your warehouse is on an Oracle database, then you might encounter an Oracle database problem when the Informatica server is trying to enter records with a decimal data type column into the warehouse. You might see an Oracle error message such as the following one:
Error: ORA-01722: invalid number
To work around this issue, disable the "Enable high precision" option in the session property. This option indicates whether to keep the precision of a decimal port up to 28 digits. If this option is disabled, then Informatica keeps a precision of up to only 15 digits for decimal ports. Because Oracle Business Intelligence Applications uses decimal ports for ID columns, a precision of 15 is sufficient.
Due to a limitation in Informatica PowerCenter, you must use the 'PMREPAGENT restore' command to restore the Informatica Repository in a non-English language deployment of Oracle BI Applications (for example, Japanese or Simplified Chinese). If you try to use the Informatica PowerCenter user interface to restore the Informatica Repository in a non-English language deployment of Oracle BI Applications, the process will fail.
To restore the Informatica Repository in a non-English language deployment of Oracle BI Applications, use the 'PMREPAGENT restore' command, as specified in the following sample command:
\OracleBI\dwrep\Informatica\PMREP\PMREPAGENT restore -r <INFA_REP_NAME> -t <INFA_REP_CONNECT_TYPE> -u <INFA_REP_DBOWNER> -p <INFA_REP_PASSWORD> -c <INFA_REP_CONNECT_STR> -h <INFA_REPSERVNAME> -o <INFA_REPSERVPORT> -i <INFA_REP_FILE> -n
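As an illustration only, a fully specified command might look like the following. Every value shown is a hypothetical placeholder; substitute your own repository name, connection details, repository server host and port, and backup file.

\OracleBI\dwrep\Informatica\PMREP\PMREPAGENT restore -r Oracle_BI_DW_Base -t Oracle -u rep_owner -p rep_password -c INFA_REP_DSN -h localhost -o 5001 -i Oracle_BI_DW_Base.rep -n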
For more information about the PMREPAGENT command, see Informatica PowerCenter documentation on installation and configuration.
The Contact Geography dimension attributes on the Campaign Contacts_Segmentation catalog, which map to the Person Geography dimension table, are not joined appropriately to the Campaign History Fact table. Therefore, Oracle Business Intelligence Server cannot find an appropriate navigation path when this dimension is used.
To work around this issue, remove the Geography dimension from the Campaign Contacts_Segmentation catalog. If users want to use this dimension, then they can switch to another subject area that has this dimension and reference it there, for example, Customer Profile_Segmentation.
The following reports within the shipped Oracle BI Presentation Catalog have poor query performance:
/shared/Interactive Selling/Pricing/Promotions/Recommended Promotions
/shared/Interactive Selling/Pricing/Promotions/Market Basket Analysis - Promotion Recommendations
/shared/Marketing/Customer Insight (B2C)/Products/Next Product Purchased
/shared/Interactive Selling/Products/Marketing/Next Product Purchased
Do not use these reports in the current release. This issue has no workaround.
The DAC Deployment procedure causes sequence generator transformations in the ETL to be reset to start from the number 1 again. As a result, Source Independent Load (SIL) mappings for the different applications and adapters are unable to run together.
To illustrate this limitation, two examples of ETLs being unable to run together are shown below:
When the Siebel Vertical adapter and the Oracle EBS adapter are used together, the Siebel adapter leverages the SIL mappings in the SIL_Vert folder, while the Oracle EBS adapter leverages the SIL mappings in the SILOS folder. The sequence generators in the SIL_Vert folder get updated to new values, while the same ones in the SILOS folder do not. This causes all SILOS mappings for common dimensions (such as Employee and Exchange Rate) to fail.
The same dimensional tables are loaded by mappings within both the SILOS and PLP folders. This results in the same issue as above; when the mappings are run, they fail.
The workaround is to set the Sequence Generator value to run between 1 and a sufficiently large finite value (for example, 1,000,000,000) for one of the folders and set the SILOS folder Sequence Generator value to run between 1,000,000,001 and its maximum limit of 2,000,000,000.
The expressions for EXT_SALES_ORDER_NUM, EXT_SALES_ORDER_ITEM, and EXT_ITEM_DETAIL_NUM in MPLT_SA_ORA_SalesInvoiceLinesFact are inconsistent with SALES_ORDLN_ID.
For customers with Versions 7.9, 7.9.1, and 7.9.2, the following steps will fix this issue.
Change the expressions of SALES_ORDER_NUM, SALES_ORDER_LINE and ITEM_DETAIL_NUM in MPLT_SA_ORA_SalesInvoiceLinesFact as follows:
IIF(INP_INTERFACE_LINE_CONTEXT='ORDER ENTRY', INP_SALES_ORDER, NULL)
IIF(INP_INTERFACE_LINE_CONTEXT='ORDER ENTRY', INP_SALES_ORDER_LINE, NULL)
IIF(INP_INTERFACE_LINE_CONTEXT='ORDER ENTRY', INP_ITEM_DETAIL_NUMBER, NULL)
If you are using Siebel OLTP 8.0, you must add a filter in the RPD physical layer, as follows:
Display the Oracle BI Administration Tool.
In the physical layer under Forecasting Siebel OLTP connection pool, open the table S_FCST_ITEM_DTL.
Change the Type to Select.
Paste the following SQL into the window (a sketch of a complete SELECT statement appears after these steps).
S_FCST_ITEM_DTL.AGGR_DIM_NAME_1 IS NULL AND
S_FCST_ITEM_DTL.AGGR_DIM_NAME_2 IS NULL AND
S_FCST_ITEM_DTL.AGGR_DIM_NAME_3 IS NULL AND
S_FCST_ITEM_DTL.AGGR_DIM_NAME_4 IS NULL AND
S_FCST_ITEM_DTL.AGGR_DIM_NAME_5 IS NULL AND
S_FCST_ITEM_DTL.AGGR_DIM_NAME_6 IS NULL
Save the changes.
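If the Select table type in your repository requires a complete SELECT statement rather than only the filter conditions, the pasted SQL might take the following form. This is a sketch only; the use of SELECT * is an assumption made for illustration, and you should list the columns that your repository actually maps.

SELECT * FROM S_FCST_ITEM_DTL WHERE
S_FCST_ITEM_DTL.AGGR_DIM_NAME_1 IS NULL AND
S_FCST_ITEM_DTL.AGGR_DIM_NAME_2 IS NULL AND
S_FCST_ITEM_DTL.AGGR_DIM_NAME_3 IS NULL AND
S_FCST_ITEM_DTL.AGGR_DIM_NAME_4 IS NULL AND
S_FCST_ITEM_DTL.AGGR_DIM_NAME_5 IS NULL AND
S_FCST_ITEM_DTL.AGGR_DIM_NAME_6 IS NULL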
The email personalization formats that are installed out-of-the-box in Siebel 8.0 are not constraining the generated lists by treatment ID. As a result, when a campaign launch issues the SOAP call to generate list files for a given treatment, everyone that qualifies for the campaign is being returned in the list. For example, if the campaign has two email treatments that have been allocated to two different sets of campaign members, this issue causes all campaign members to receive both treatments.
This section explains how to work around this issue.
This section explains how to update the repository in Oracle BI Administration Tool.
Start the Oracle BI Administration Tool.
In the Physical Layer, add a physical column DCP_ID to the Campaign Promotion physical table in Marketing OLTP database and specify the following values in the Physical Column dialog:
Name: DCP_ID
Type: VARCHAR
Length: 15
Nullable: yes
In the Business Model and Mapping Layer, add a logical column Treatment Id to the OLTP Campaign Promotion logical table in Marketing Contact List business model and specify the following values in the Logical Column dialog:
Name: Treatment Id
Logical Table Source: S_CAMP_CON
Mapped as: "Marketing OLTP".dbo."Campaign Promotion".DCP_ID
In the Business Model and Mapping Layer, add a logical column Treatment Id to the OLTP Campaign Promotion logical table in Marketing Account List business model and specify the following values in the Logical Column dialog:
Name: Treatment Id
Logical Table Source: S_CAMP_CON
Mapped as: "Marketing OLTP".dbo."Campaign Promotion".DCP_ID
In the Presentation Layer, add the Treatment Id presentation column to the Campaign History (Transaction Database) presentation table in the Marketing Contact List presentation catalog and specify the following values in the Presentation Column dialog:
Name: Treatment Id
Logical Column: "Marketing Contact List"."OLTP Campaign Promotion"."Treatment Id".
In the Presentation Layer, add the Treatment Id presentation column to the Campaign History (Transaction Database) presentation table in the Marketing Account List presentation catalog and specify the following values in the Presentation Column dialog:
Name: Treatment Id
Logical Column: "Marketing Account List"."OLTP Campaign Promotion"."Treatment Id".
This section explains how to update the Campaign Load Format and Email Server Format in Siebel Marketing.
Log in to Siebel Marketing.
Add Treatment Id to the Campaign Contact integration component and specify the following values in the Edit Column Formula dialog:
Table heading: Campaign Contact
Column Heading: Treatment Id
Column Formula: '@{treatmentID}{0}'
Add a filter to constrain the output based on the Treatment Id column and specify the following values in the Create/Edit Filter dialog:
Operator: is equal to / is in
Expression: '@{treatmentID}{0}'
The username that appears in the Greetings message in the Oracle Business Intelligence Interactive Dashboards does not display correctly if the source system is either Oracle E-Business Suite (EBS) or Oracle's PeopleSoft. The variable DISPLAY_NAME used in the Greetings message in the dashboard header gets populated through an Initialization Block called LOGIN Properties. Out of the box, the connection pool and SQL statements used in this init block point to the Siebel OLTP. If you are running Oracle Business Intelligence Applications and your source system is either Oracle EBS or PeopleSoft, you need to change the Connection pool and data source SQL for the Initialization Block: LOGIN Properties, as described below.
The solution to this issue is:
Open the EnterpriseBusinessAnalytics.rpd file using Oracle BI Administration Tool.
Navigate to Manage > Variables to open the Variables Manager.
Under Session > Initialization Block, select the "LOGIN Properties" initialization block.
Double-click to open the properties dialog box.
Click on the Edit Data Source button.
Click on the Browse button.
In the Select Connection Pool window, select either Oracle EBS OLTP Connection Pool or PeopleSoft OLTP Connection Pool, depending on your OLTP system.
In the Default Initialization String box on the "Session Variable Initialization Block Data Source - LOGIN Properties" window, enter the SQL as appropriate for your source system application and the database platform it is running on.
Table 1-1 Required SQL Strings For Each Source System Application And Database Combination
For each source system application (and database platform), use the following SQL for the Default Initialization String.

Oracle EBS (Oracle RDBMS):

Select PER.FULL_NAME, 0 from PER_ALL_PEOPLE_F PER, FND_USER USR WHERE USR.USER_NAME= ':USER' AND USR.EMPLOYEE_ID=PER.PERSON_ID AND (SYSDATE <= USR.END_DATE OR USR.END_DATE IS NULL) AND PER.EFFECTIVE_START_DATE <= USR.START_DATE AND (USR.START_DATE < PER.EFFECTIVE_END_DATE OR PER.EFFECTIVE_END_DATE IS NULL)

PeopleSoft (Oracle RDBMS):

SELECT CASE WHEN EMPLOYEE_NAME_TODAY.EMPLID IS NULL THEN USR.OPRDEFNDESC ELSE EMPLOYEE_NAME_TODAY.NAME END DISPLAY_NAME, 0 FROM PSOPRDEFN USR LEFT OUTER JOIN (SELECT B.EMPLID, B.NAME FROM PS_NAMES B, (SELECT EMPLID, MAX(EFFDT) EFFDT FROM PS_NAMES WHERE NAME_TYPE = 'PRI' AND EFFDT <= SYSDATE GROUP BY EMPLID) C WHERE B.EMPLID = C.EMPLID AND B.EFFDT = C.EFFDT AND B.NAME_TYPE = 'PRI') EMPLOYEE_NAME_TODAY ON USR.EMPLID = EMPLOYEE_NAME_TODAY.EMPLID WHERE USR.OPRID=':USER'

PeopleSoft (MSSQL RDBMS):

SELECT CASE WHEN EMPLOYEE_NAME_TODAY.EMPLID IS NULL THEN USR.OPRDEFNDESC ELSE EMPLOYEE_NAME_TODAY.NAME END DISPLAY_NAME, 0 FROM PSOPRDEFN USR LEFT OUTER JOIN (SELECT B.EMPLID, B.NAME FROM PS_NAMES B, (SELECT EMPLID, MAX(EFFDT) EFFDT FROM PS_NAMES WHERE NAME_TYPE = 'PRI' AND EFFDT <= GETDATE() GROUP BY EMPLID) C WHERE B.EMPLID = C.EMPLID AND B.EFFDT = C.EFFDT AND B.NAME_TYPE = 'PRI' ) EMPLOYEE_NAME_TODAY ON USR.EMPLID = EMPLOYEE_NAME_TODAY.EMPLID WHERE USR.OPRID=':USER'

PeopleSoft (DB2 RDBMS):

SELECT CASE WHEN EMPLOYEE_NAME_TODAY.EMPLID IS NULL THEN USR.OPRDEFNDESC ELSE EMPLOYEE_NAME_TODAY.NAME END DISPLAY_NAME, 0 FROM PSOPRDEFN USR LEFT OUTER JOIN (SELECT B.EMPLID, B.NAME FROM PS_NAMES B, (SELECT EMPLID, MAX(EFFDT) EFFDT FROM PS_NAMES WHERE NAME_TYPE = 'PRI' AND EFFDT <= CURRENT TIMESTAMP GROUP BY EMPLID) C WHERE B.EMPLID = C.EMPLID AND B.EFFDT = C.EFFDT AND B.NAME_TYPE = 'PRI' ) EMPLOYEE_NAME_TODAY ON USR.EMPLID = EMPLOYEE_NAME_TODAY.EMPLID WHERE USR.OPRID=':USER'
This issue applies when upgrading to Oracle Business Intelligence Applications version 7.9.4 from the previous versions of Siebel Analytics beginning with version 7.5.x. While migrating data into the upgraded data warehouse, if you are connected to an Oracle database and use an ODBC connection instead of Oracle Native Driver, the upgrade workflows will fail (see section "Migrating Data into the Upgraded Data Warehouse" in Oracle Business Intelligence Applications Upgrade Guide) with the following error:
CMN_1836 Error: Data for Lookup [LKP_W_POSITION_DH_WID] fetched from the database is not sorted on the condition ports. Please check your database sort order for the lookup condition ports.
The workaround for this issue is to use the Oracle Native driver instead of ODBC.
The following facts in Sales Forecasting show incorrect values leading to errors in reporting:
Prior Forecast Summary Best Case
Prior Forecast Summary Worst Case
Prior Forecast Summary Revenue
Prior Forecast Summary Expected Revenue
Prior Forecast Summary Cost
Prior Forecast Summary Margin
This issue is caused by the absence of necessary filters on the above metrics.
Note: This issue does not affect any out-of-the-box reports or dashboards.
The workaround is to add the column "S_FCST_ITEM_DTL"."AGGR_DIM_NAME_1" in the physical layer and make the necessary changes to the metric expressions in the logical layer. The detailed steps below filter out the detail rows that are causing the issue.
To add the column "S_FCST_ITEM_DTL"."AGGR_DIM_NAME_1" in the physical layer and make the necessary changes:
Open the repository (.rpd file) using the Oracle BI Administration Tool.
In the Physical layer of the repository, locate the table S_FCST_ITEM_DTL (path: Forecasting Siebel OLTP.Catalog.dbo).
Add the column AGGR_DIM_NAME_1 (Data type: varchar; Length: 50) to the table.
In the Business Model and Mapping layer of the repository, look for the metrics listed above under the folder Forecasting.FACTS.
For each metric, do the following:
Right click a metric and choose Properties.
Highlight the OLTP Prior Forecast Summary under Data Type and choose View….
Wrap the following around the metric expression:
CASE WHEN "Forecasting Siebel OLTP"."Catalog"."dbo"."S_FCST_ITEM_DTL"."AGGR_DIM_NAME_1" IS NULL THEN IFNULL ( ) END
The table below shows the amended expression for each metric.
Save the repository.
Table 1-2 Metric Expressions And Their Values After Being Updated
Metric | Amended Expression
---|---
Prior Forecast Summary Best Case |
Prior Forecast Summary Worst Case |
Prior Forecast Summary Revenue |
Prior Forecast Summary Expected Revenue |
Prior Forecast Summary Cost |
Prior Forecast Summary Margin |
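As a purely hypothetical illustration of the wrapping pattern described in the steps above, suppose that one of these metrics were mapped to the physical column "Forecasting Siebel OLTP"."Catalog"."dbo"."S_FCST_ITEM_DTL"."BEST_CASE_AMT"; the column name and the 0 used as the IFNULL default are assumptions made for this example only. The amended expression would then take the following form:

CASE WHEN "Forecasting Siebel OLTP"."Catalog"."dbo"."S_FCST_ITEM_DTL"."AGGR_DIM_NAME_1" IS NULL THEN IFNULL("Forecasting Siebel OLTP"."Catalog"."dbo"."S_FCST_ITEM_DTL"."BEST_CASE_AMT", 0) END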
The following language folders are missing from the location %:\oraclebidata\disconnected\pharma\messages.
l_ar - Arabic
l_el - Greek
l_hu - Hungarian
l_iw - Hebrew
l_no - Norwegian
l_pl - Polish
l_ro - Romanian
l_ru - Russian
l_sk - Slovakian
l_th - Thai
l_tr - Turkish
To work around this issue, perform the following steps:
Go to %:\oraclebidata\disconnected\pharma\messages\ and add the following folders:
l_ar
l_el
l_hu
l_iw
l_no
l_pl
l_ro
l_ru
l_sk
l_th
l_tr
Copy the corresponding _iBotsCaptions.xml and PharmaCaptions.xml files from %:\oraclebidata\web\res\l_XX\Captions\ to %:\oraclebidata\disconnected\pharma\messages\l_XX\.
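For example, on Windows the two captions files for the Arabic folder could be copied with commands along the following lines; the %: drive placeholder follows the convention used above, and you should verify the exact file names in your installation.

copy %:\oraclebidata\web\res\l_ar\Captions\_iBotsCaptions.xml %:\oraclebidata\disconnected\pharma\messages\l_ar\
copy %:\oraclebidata\web\res\l_ar\Captions\PharmaCaptions.xml %:\oraclebidata\disconnected\pharma\messages\l_ar\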
The Campaign Contacts_Segmentation Subject Area contains an 'Industry Name' dimension that was not intended to be included in this Subject Area. Using this column in your analysis will result in a Metadata Inconsistency Error. Oracle recommends that you remove the column from the Presentation Subject Area or advise users not to use it.
This issue is specific to Oracle EBS adaptors used in conjunction with General Ledger Analytics.
During the ETL, the ETL process tries to resolve exchange rates from each transaction's transaction currency to the warehouse's global currencies. If you are extracting 'Statistics' journals from the General Ledger, the ETL process tries to resolve the exchange rates for these transactions as well. However, because the 'Statistics' journal's transaction currency is 'STAT', which is not a currency code, the ETL process cannot resolve the exchange rates, and the exchange rates are defaulted to 0. When customers view the global amounts for these transactions, the global amounts are all 0, because Oracle Business Intelligence Applications calculates the global amounts by multiplying the transaction amount by the global exchange rate.

However, you typically want the global amounts for these statistical transactions to appear as their original transaction amounts (that is, these transactions' transaction amounts should be equal to their global amounts). To achieve this, perform the task specified below to define a special case that handles the 'STAT' currency: when the transaction currency is 'STAT', the returned exchange rate should always be '1'.
To specify a special case to handle the 'STAT' currency:
Log into Informatica Designer.
Open the SILOS folder.
Locate the reusable mapplet MPLT_CURCY_CONVERSION_RATES and open it.
Open the EXPT_CALC_EXCH_RATES expression transformation.
Use the Expression Editor dialog to change the Expression value of the variables listed in the table below from the value in the Old Expression Value column to the value in the New Expression Value column.
Table 1-3 EXPT_CALC_EXCH_RATES Variables and Their Old and New Expression Values
Variable Name: DOC_TO_LOC_EXCH_RATE_VAR

Old Expression Value:

IIF(ISNULL(DOC_TO_LOC_EXCH_RATE), IIF(LOC_CURR_CODE = DOC_CURR_CODE, 1.0, IIF(ISNULL(LOC_CURR_CODE), NULL, :LKP.LKP_W_EXCH_RATE(DOC_CURR_CODE,LOC_CURR_CODE,EXCH_DT,LOC_RATE_TYPE_VAR,DATASOURCE_NUM_ID))), DOC_TO_LOC_EXCH_RATE)

New Expression Value:

IIF(ISNULL(DOC_TO_LOC_EXCH_RATE), IIF(LOC_CURR_CODE = DOC_CURR_CODE, 1.0, IIF(DOC_CURR_CODE = 'STAT', 1.0, IIF(ISNULL(LOC_CURR_CODE), NULL, :LKP.LKP_W_EXCH_RATE(DOC_CURR_CODE,LOC_CURR_CODE,EXCH_DT,LOC_RATE_TYPE_VAR,DATASOURCE_NUM_ID)))), DOC_TO_LOC_EXCH_RATE)

Variable Name: DOC_TO_GLOBAL1_EXCH_RATE_VAR

Old Expression Value:

IIF(ISNULL(GLOBAL1_CURR_CODE), NULL, IIF(GLOBAL1_CURR_CODE = DOC_CURR_CODE, 1.0, IIF(GLOBAL1_CURR_CODE = LOC_CURR_CODE, DOC_TO_LOC_EXCH_RATE_VAR, :LKP.LKP_W_EXCH_RATE(DOC_CURR_CODE, GLOBAL1_CURR_CODE, EXCH_DT,GLOBAL1_RATE_TYPE, DATASOURCE_NUM_ID))))

New Expression Value:

IIF(ISNULL(GLOBAL1_CURR_CODE), NULL, IIF(GLOBAL1_CURR_CODE = DOC_CURR_CODE, 1.0, IIF(DOC_CURR_CODE = 'STAT', 1.0, IIF(GLOBAL1_CURR_CODE = LOC_CURR_CODE, DOC_TO_LOC_EXCH_RATE_VAR, :LKP.LKP_W_EXCH_RATE(DOC_CURR_CODE, GLOBAL1_CURR_CODE, EXCH_DT,GLOBAL1_RATE_TYPE, DATASOURCE_NUM_ID)))))

Variable Name: DOC_TO_GLOBAL2_EXCH_RATE_VAR

Old Expression Value:

IIF(ISNULL(GLOBAL2_CURR_CODE), NULL, IIF(GLOBAL2_CURR_CODE = DOC_CURR_CODE, 1.0, IIF(GLOBAL2_CURR_CODE = LOC_CURR_CODE, DOC_TO_LOC_EXCH_RATE_VAR, :LKP.LKP_W_EXCH_RATE(DOC_CURR_CODE, GLOBAL2_CURR_CODE, EXCH_DT,GLOBAL2_RATE_TYPE, DATASOURCE_NUM_ID))))

New Expression Value:

IIF(ISNULL(GLOBAL2_CURR_CODE), NULL, IIF(GLOBAL2_CURR_CODE = DOC_CURR_CODE, 1.0, IIF(DOC_CURR_CODE = 'STAT', 1.0, IIF(GLOBAL2_CURR_CODE = LOC_CURR_CODE, DOC_TO_LOC_EXCH_RATE_VAR, :LKP.LKP_W_EXCH_RATE(DOC_CURR_CODE, GLOBAL2_CURR_CODE, EXCH_DT,GLOBAL2_RATE_TYPE, DATASOURCE_NUM_ID)))))

Variable Name: DOC_TO_GLOBAL3_EXCH_RATE_VAR

Old Expression Value:

IIF(ISNULL(GLOBAL3_CURR_CODE), NULL, IIF(GLOBAL3_CURR_CODE = DOC_CURR_CODE, 1.0, IIF(GLOBAL3_CURR_CODE = LOC_CURR_CODE, DOC_TO_LOC_EXCH_RATE_VAR, :LKP.LKP_W_EXCH_RATE(DOC_CURR_CODE, GLOBAL3_CURR_CODE, EXCH_DT,GLOBAL1_RATE_TYPE, DATASOURCE_NUM_ID))))

New Expression Value:

IIF(ISNULL(GLOBAL3_CURR_CODE), NULL, IIF(GLOBAL3_CURR_CODE = DOC_CURR_CODE, 1.0, IIF(DOC_CURR_CODE = 'STAT', 1.0, IIF(GLOBAL3_CURR_CODE = LOC_CURR_CODE, DOC_TO_LOC_EXCH_RATE_VAR, :LKP.LKP_W_EXCH_RATE(DOC_CURR_CODE, GLOBAL3_CURR_CODE, EXCH_DT,GLOBAL1_RATE_TYPE, DATASOURCE_NUM_ID)))))
Save the changes.