Oracle® Business Intelligence Applications Installation Guide for Informatica PowerCenter Users
Version 7.9.6
Part Number E14217-01

3 Preinstallation and Predeployment Requirements for Oracle BI Applications

Note:

Some of the information about database platforms and source systems might not apply to this version of Oracle Business Intelligence Applications. For up-to-date information about supported databases and source systems in this version of Oracle Business Intelligence Applications, make sure you read System Requirements and Supported Platforms for Oracle Business Intelligence Applications. Make sure that you also read the Oracle Business Intelligence Applications Release Notes. The most up-to-date versions of these documents are located on the Oracle Technology Network at http://www.oracle.com/technology/documentation/bi_apps.html. To register for a free account on the Oracle Technology Network, go to http://www.oracle.com/technology/about/index.html.

This section provides information about preparing to install and deploy Oracle BI Applications. You should review this information before you begin the installation and deployment process. You should also read the general guidelines for setting up the Oracle Business Analytics Warehouse and read the appropriate database-specific guidelines for the source OLTP databases that you are using.

You also need to satisfy the database and Informatica PowerCenter requirements that are specified in Section 4.3, "Mandatory Requirements."

This section contains the following topics:

  • Section 3.1, "General Guidelines for Setting Up Oracle Business Analytics Warehouse"

  • Section 3.2, "IBM DB2 UDB-Specific Database Guidelines for Oracle Business Analytics Warehouse"

  • Section 3.3, "IBM DB2 UDB zOS and OS/390-Specific Database Guidelines for Oracle Business Analytics Warehouse"

  • Section 3.4, "SQL Server-Specific Database Guidelines for Oracle Business Analytics Warehouse"

  • Section 3.5, "Teradata-Specific Database Guidelines for Oracle Business Analytics Warehouse"

  • Section 3.6, "Oracle-Specific Database Guidelines for Oracle Business Analytics Warehouse"

  • Section 3.7, "Additional Suggestions for Optimizing Oracle Performance in Oracle Business Analytics Warehouse"

  • Section 3.8, "Miscellaneous Information About Oracle BI Applications Deployments"

3.1 General Guidelines for Setting Up Oracle Business Analytics Warehouse

The Oracle Business Analytics Warehouse is a database that contains dimensional schemas. Although it is technically possible to put the Oracle Business Analytics Warehouse in the same database as the transactional database, this is not recommended for performance reasons. The transactional database is structured as an online transaction processing (OLTP) database, whereas the Oracle Business Analytics Warehouse is structured as an online analytical processing (OLAP) database, and each is optimized for its own purpose.

The Informatica Repository stores all of the Informatica object definitions for the ETL mappings that populate the Oracle Business Analytics Warehouse. It is a series of repository tables that are stored in a database, which can be the transactional database, the analytical database, or a separate database.

The Oracle Business Analytics Warehouse works with relational database management systems. In addition to the general requirements, there are additional database management systems (DBMS)-specific requirements depending on the DBMS you are using.

The following general guidelines will help you set up the data warehouse physical database for performance and growth:

During the Oracle Business Analytics Warehouse configuration process, when you create the data warehouse tables using the procedure in Section 4.12.1, "Creating Data Warehouse Tables," you can create tables in one tablespace and indexes in another tablespace. However, for performance reasons, it is recommended that you create tablespaces as described in Table 3-1.

Table 3-1 Recommended Tablespace Configuration

Tablespace Name   List of Tables
DIM_STG           W_*DS
FACT_STG          W_*FS
DIM               W_*D and W_*MD
FACT              W_*F
AGG               W_*A
OTHER             Remaining W* tables (for example, W*G and W*GS tables)
DIM_INDX          Indexes of W_*D tables
FACT_INDX         Indexes of W_*F tables
OTHER_INDX        Remaining indexes of W* tables
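
As an illustration only, the following DDL sketches how these tablespaces might be created on an Oracle database. The datafile paths and sizes are assumptions, not recommendations; size each tablespace for your own data volumes:

CREATE TABLESPACE DIM_STG DATAFILE '/u01/obaw/dim_stg01.dbf' SIZE 2G AUTOEXTEND ON;
CREATE TABLESPACE FACT_STG DATAFILE '/u01/obaw/fact_stg01.dbf' SIZE 4G AUTOEXTEND ON;
CREATE TABLESPACE DIM DATAFILE '/u01/obaw/dim01.dbf' SIZE 4G AUTOEXTEND ON;
CREATE TABLESPACE FACT DATAFILE '/u01/obaw/fact01.dbf' SIZE 8G AUTOEXTEND ON;
CREATE TABLESPACE DIM_INDX DATAFILE '/u01/obaw/dim_indx01.dbf' SIZE 2G AUTOEXTEND ON;
-- Create AGG, OTHER, FACT_INDX, and OTHER_INDX along the same lines.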


Note:

To avoid fatal deadlocks during the ETL, make sure that you select the 'Session Level Retry on Deadlock' option in Informatica.

3.2 IBM DB2 UDB-Specific Database Guidelines for Oracle Business Analytics Warehouse

Table 3-2 provides guidelines for parameter settings for DB2 relational database management system (RDBMS) usage. Use these guidelines as a starting point. You will need to make changes based on your specific database sizes, data shape, server size (CPU and memory), and type of storage. The database administrator should make changes to the settings based on performance monitoring and tuning considerations.

Table 3-2 Recommended DB2 Parameter Settings

Parameter         DB2 UDB V7   DB2 UDB V8 and V9   Notes
SHEAPTHRES        400000       400000
ASLHEAPSZ         15           15
RQRIOBLK          65535        65535
QUERY_HEAP_SZ     16384        16384
JAVA_HEAP_SZ      2048         2048
MAXAGENTS         400          400
NUM_INITAGENTS    10           10
NUM_POOLAGENTS    200          200
INTRA_PARALLEL    YES          YES
FCM_NUM_BUFFERS   12288        12288
SHEAPTHRES_SHR    N/A          =SHEAPTHRES
DBHEAP            16384        16384
CATALOGCACHE_SZ   5558         5558
LOGBUFSZ          2048         2048
UTIL_HEAP_SZ      10000        10000
NUM_ESTORE_SEGS   16           NIL                 Extended storage (ESTORE) is not needed in DB2 V8 64-bit because the V7 limit of 1.75 GB addressable memory has been lifted.
ESTORE_SEG_SZ     65536        NIL
LOCKLIST          25000        25000
APP_CTL_HEAP_SZ   5000         5000
SORTHEAP          4000         4000
STMTHEAP          40960        40960
APPLHEAPSZ        2560         2560
PCKCACHESZ        2560         2560
STAT_HEAP_SZ      20000        20000
DLCHKTIME         10000        10000
MAXLOCKS          50           50
LOCKTIMEOUT       1200         1200
MAXAPPLS          500          500
AVG_APPLS         10           10
MAXFILOP          500          500
GROUPHEAP_RATIO   N/A          70                  New in V8
APPGROUP_MEM_SZ   N/A          30000               New in V8
DATABASE_MEMORY   N/A          AUTOMATIC           New in V8
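
These settings can be applied from the DB2 command line processor, as in the following sketch. OBAW is a placeholder database alias, only a few of the parameters from Table 3-2 are shown, and instance-level (database manager) changes take effect only after an instance restart:

db2 "UPDATE DATABASE MANAGER CONFIGURATION USING SHEAPTHRES 400000 INTRA_PARALLEL YES MAXAGENTS 400"
db2 "UPDATE DATABASE CONFIGURATION FOR OBAW USING SORTHEAP 4000 LOCKLIST 25000 DBHEAP 16384 LOGBUFSZ 2048"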


Note:

To avoid fatal deadlocks during the ETL, make sure that you select the 'Session Level Retry on Deadlock' option in Informatica.

3.3 IBM DB2 UDB zOS and OS/390-Specific Database Guidelines for Oracle Business Analytics Warehouse

The following requirements apply to IBM DB2 RDBMS usage for zOS and OS/390:

Table 3-3 Variable Settings for IBM DB2 UDB zOS and OS/390 Databases

Parameter   Recommended Setting   Notes
IDTHTOIN    1800
CDSSRDEF    Any
STARJOIN    1                     This setting indicates that star join is enabled. The one table with the largest cardinality is the fact table. However, if there is more than one table with this cardinality, star join is not enabled.


3.4 SQL Server-Specific Database Guidelines for Oracle Business Analytics Warehouse

This section provides guidelines for SQL Server database usage.

Note:

The SQL Server database must be created with a collation sequence that supports binary sort order or case-sensitive dictionary sort order. Case-insensitive dictionary sort order is not supported. For example, for binary sort order with the U.S. English character set, use the collation 'Latin1_General_BIN'. If you use the default collation setting of 'SQL_Latin1_General_CP1_CI_AS', the database is set to case-insensitive, which is not supported and causes index creation failures.
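
For example, a warehouse database with a supported binary collation could be created and verified as follows (OBAW is a placeholder database name):

CREATE DATABASE OBAW COLLATE Latin1_General_BIN;
SELECT DATABASEPROPERTYEX('OBAW', 'Collation');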

This section includes the following topics:

3.4.1 Setting the ANSI NULL Option

Oracle BI Applications requires that SQL Server databases be created with the ANSI NULL option selected.

To set the ANSI NULL option

  1. In the SQL Server Enterprise Manager, right-click the appropriate database, and choose Properties.

  2. Click the Options tab and select the box for ANSI NULL default.
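
If you prefer T-SQL to the Enterprise Manager user interface, the same option can be set with the following statement (OBAW is a placeholder database name):

ALTER DATABASE OBAW SET ANSI_NULL_DEFAULT ON;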

3.4.2 Modifying the DB Library Options Setting

In a SQL Server 2000 environment, when loading Analytics tables with international data, or loading more than one language, you need to modify the DB Library Options setting.

To modify the DB Library Options setting

  1. From the Microsoft SQL Server program menu, select Client Network Utility.

  2. Select the DB Library Options tab.

  3. Clear the option Automatic ANSI to OEM.

    Note:

    SQL Server 2000 automatically tunes many of the server configuration options; therefore, an administrator is required to do little, if any, tuning. Although these configuration options can be modified, the general recommendation is that these options be left at their default values, allowing SQL Server to automatically tune itself based on run-time conditions.

3.4.3 Recommended SQL Server Database Parameters

If necessary, SQL Server components can be configured to optimize performance, as shown in Table 3-4.

Table 3-4 Recommended Variable Settings for SQL Server Databases

Parameter                        Recommended Setting   Notes
Affinity mask                    0
Allow updates                    0
Awe enabled                      0
C2 audit mode                    0
Cost threshold for parallelism   5
Cursor threshold                 -1
Default full-text language       1033
Default language                 0
Fill factor                      95%                   For insert-intensive transactions, set Fill factor between 90 and 95%. For better query performance, set Fill factor to 95 or even 100%.
Index create memory              1024 KB               Default is 0.
Lightweight pooling              0
Locks                            0
Max degree of parallelism        0                     Default is 0, which means use parallel plan generation. Set it to 1 (use only one process) if you run multi-threaded components (for example, several EIM threads).
Max server memory                2000 MB               Default is 2147483647.
Max text repl size               65536 B
Max worker threads               100                   Default is 255.
Media retention                  0
Min memory per query             1024 KB
Min server memory                500 MB                Default is 0.
Nested triggers                  1
Network packet size              8192 B                Default is 4096.
Open objects                     0
Priority boost                   0
Query governor cost limit        0                     Modify to 60 only if CPU utilization is high.
Query wait                       -1 sec
Recovery interval                0 min
Remote access                    1
Remote login timeout             20 sec
Remote proc trans                0
Remote query timeout             600 sec
Scan for startup procs           0
Set working set size             0
Two-digit year cutoff            2049
User connections                 0
User options                     0

  • SQL Server memory: Make sure adequate memory is available.

  • Transaction logs and TempDB: Place these on disks separate from those used for database data.

  • Full load: Use the Full recovery model for the database.

  • Incremental (refresh) load: Change from the Full to the Bulk-Logged recovery model (see the sketch below).
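
The memory and recovery-model settings can be applied with T-SQL, as in the following sketch (OBAW is a placeholder database name; the values mirror Table 3-4):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory (MB)', 500;
EXEC sp_configure 'max server memory (MB)', 2000;
RECONFIGURE;

-- Before a full load:
ALTER DATABASE OBAW SET RECOVERY FULL;
-- Before an incremental (refresh) load:
ALTER DATABASE OBAW SET RECOVERY BULK_LOGGED;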

3.5 Teradata-Specific Database Guidelines for Oracle Business Analytics Warehouse

This section contains the recommended best practices and guidelines for maximizing performance in Teradata deployments. It contains the following topics:

3.5.1 Installation of Required JDBC Driver for Teradata Databases

If you have a Unicode environment on a Teradata database, you must install the Teradata JDBC Driver for Teradata 12.0. This driver is required for all supported versions of Teradata databases, including versions earlier than version 12.

The Teradata JDBC Driver for Teradata 12.0 is available at http://www.teradata.com/DownloadCenter, under the title Teradata JDBC Driver.

3.5.2 General Guidelines for Teradata Deployments

The following requirements apply to Teradata database usage:

  • Install the FastLoad and TPump Teradata utilities on the machine on which the Informatica Server is installed.

  • When installing Informatica PowerCenter, make sure there are no spaces in the Informatica Server directory name or the directory path. The default directory contains spaces that you must remove manually.

  • When you install Oracle BI Applications in a Teradata environment, you must load the prebuilt Informatica Repository file \dwrep\Oracle_BI_DW_Teradata.rep.

  • In the ODBC configuration for Teradata, set the session Mode to ANSI and DateFormat to AAA before you create the schema. Tables must be created as case specific. If you do the ODBC configuration after the schema is created, tables might not be created as case specific.

  • To avoid overflows if an arithmetic calculation involves any decimal data type of more than 18,3 (precision, scale), add the following static Source System Parameters (see the example after this list):

    • "Cast (" for $$Hint_Tera_Pre_Cast

    • "as decimal(18,3))" for $$Hint_Tera_Post_Cast

    For more information about setting Source System Parameters in DAC, see Section 4.18.1, "Setting DAC Source System Parameters."
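
As an illustration of how these two parameters are used, an expression wrapped with the hints in a mapping, such as the following (the column names are hypothetical):

$$Hint_Tera_Pre_Cast SUM(F.AMT_1 * F.EXCH_RATE) $$Hint_Tera_Post_Cast

resolves at run time to:

Cast (SUM(F.AMT_1 * F.EXCH_RATE) as decimal(18,3))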

Note:

To avoid fatal deadlocks during the ETL, make sure that you select the 'Session Level Retry on Deadlock' option in Informatica.

3.5.3 Best Practices For Teradata Deployments

This section contains the recommended best practices for maximizing performance in Teradata deployments, and contains the following topics:

3.5.3.1 LEFT OUTER JOIN Issue

Teradata redistributes the OUTER table according to the join key unless the INNER table is tiny, in which case the INNER table is simply copied to all AMPs and the OUTER table is not redistributed.

Many fact SIL mappings need to get the dimensional ROW_WID from a ROW_ID/INTEGRATION_ID. For example, W_PER_RANK_FS.ACCNT_ID needs to be converted to ACCNT_WID before loading into the W_PER_RANK_F table. Because ACCNT_ID is nullable, the join between W_PER_RANK_FS and W_ORG_D is defined as a LEFT OUTER JOIN.

However, the percentage of NULLs in the ACCNT_ID column can be 50% or higher, depending on the data set. When W_PER_RANK_FS is redistributed according to ACCNT_ID, all rows with ACCNT_ID = NULL are put onto a single AMP.

Although a Teradata database usually has hundreds of gigabytes of spool space, the spool space is allocated across hundreds of AMPs. The spool space for each AMP is limited (for example, to two gigabytes).

When a large percentage of W_PER_RANK_FS is distributed to a single AMP, this can result in insufficient spool space being available. This happens when too much data is spooled onto one AMP, not because the spool space is too small.

To work with Teradata's parallel processing mechanism and still resolve the LEFT OUTER JOIN, the SQL must be rewritten.

As an example, refer to the following original SQL:

SELECT ...  FROM 
 W_PER_RANK_FS FS LEFT OUTER JOIN  W_ORG_D ORG ON
   FS.ACCNT_ID = ORG.INTEGRATION_ID  AND
   FS.DATASOURCE_NUM_ID = ORG.DATASOURCE_NUM_ID

 The above SQL should be re-coded as follows: 

SELECT ... FROM
 W_PER_RANK_FS FS LEFT OUTER JOIN  
   (SELECT
   FS.INTEGRATION_ID, FS.DATASOURCE_NUM_ID, ORG.ROW_WID, ORG.GEO_WID
    FROM
   W_PER_RANK_FS FS, W_ORG_D ORG
   WHERE
   FS.ACCNT_ID = ORG.INTEGRATION_ID AND FS.DATASOURCE_NUM_ID = 
   ORG.DATASOURCE_NUM_ID AND FS.ACCNT_ID IS NOT NULL)
ORG  ON
   FS.DATASOURCE_NUM_ID = ORG.DATASOURCE_NUM_ID AND
   FS.INTEGRATION_ID = ORG.INTEGRATION_ID 

The same SQL re-coding method can be used for other Source Qualifiers that run into spool space problems.

Another solution to the spool space issue is to convert the NULLs to evenly distributed but non-matching values, as shown in the following SQL example:

SELECT ...  FROM
W_PER_RANK_FS FS LEFT OUTER JOIN  W_ORG_D ORG ON
   CASE WHEN FS.ACCNT_ID IS NOT NULL
   THEN FS.ACCNT_ID
   ELSE  '#' || FS.INTEGRATION_ID END
   = ORG.INTEGRATION_ID
AND FS.DATASOURCE_NUM_ID = ORG.DATASOURCE_NUM_ID


3.5.3.2 Additional Statistics Required

Oracle Business Intelligence provides a list of column and index statistics collection statements. These are generally applicable for all situations, but should be evaluated for each site and situation. Additional statistics might be required, depending on your requirements. For example, some workflows need more statistics to avoid an out-of-spool space error on Teradata server.

3.5.3.2.1 Collect Statistics As A Prerequisite

Once the tables have been created in the staging and target databases, you must run the supplied statistics collection statements. Failure to do so can affect ETL performance and possibly result in a spool space error (error number 2646).

DAC re-collects statistics as part of the ETL process. However, DAC issues the COLLECT STATISTICS statement at the table level only (for example, COLLECT STATISTICS ON W_ORG_D), and only for existing statistics (see the example below).
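
If you need to collect statistics manually for a particular table, Teradata syntax along the following lines applies (the column choices here are illustrative; use the supplied statements for the actual list):

COLLECT STATISTICS ON W_ORG_D COLUMN (ROW_WID);
COLLECT STATISTICS ON W_ORG_D COLUMN (INTEGRATION_ID, DATASOURCE_NUM_ID);
COLLECT STATISTICS ON W_ORG_D;  -- re-collects all statistics already defined on the table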

3.5.3.2.2 Group By Vs Distinct

When the number of distinct values is low, it is more efficient to use the GROUP BY clause. Do not use the DISTINCT clause unless the number of distinct values is high (see the example below).
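
For example, both of the following queries return the distinct data source identifiers, but the first form is generally cheaper on Teradata when there are few distinct values:

SELECT DATASOURCE_NUM_ID FROM W_PER_RANK_FS GROUP BY DATASOURCE_NUM_ID;

SELECT DISTINCT DATASOURCE_NUM_ID FROM W_PER_RANK_FS;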

3.5.3.2.3 Pruning Of Mapping

If you do not use all of the out-of-the-box (OTB) fields supplied, you can improve performance by stripping the extraneous fields from the mapping and the tables.

3.5.3.2.4 Loader Configurations

This section explains the loaders that are available in Teradata, and how they are used in Oracle BI Applications.

Teradata provides three types of loader processes, as follows:

  • Fastload

  • Mload

  • Tpump

Each loader process can be used in two different modes, as follows:

  • Staged Mode: The Informatica process does the following in this order:

    • Reads from the source data.

    • Creates a data file.

    • Invokes the loader process to load the table using the data file created.

    Advantages: In the event of failures, you can recover using the Teradata recovery process.

    Disadvantages: Staged mode is slower than Piped mode, and you need more disk space, as it can create large data files.

  • Piped Mode: The Informatica process reads from the source and simultaneously pipes that data to the loader to start loading the target table.

    Advantages: Quicker than Staged mode, and you do not require large amounts of disk space because no data files are created.

    Disadvantages: In the event of failures, you cannot recover using the Teradata recovery process (because tpump does row commits unlike fastload and mload).

3.5.3.2.5 Fastload

Use the Fastload process on empty tables, for example to load staging tables and to perform initial loads where the target tables are empty.

When the Fastload process starts loading, it locks the target table, which means that other processes (for example, lookups) cannot access that table. One solution to this problem is to specify dummy SQL for the lookup overrides at the session level.

Tip:

If a session fails during a Fastload process, use SQL Assistant to run a simple SQL command (for example, count(*)) to determine whether the table is locked by a Fastload process.

If a table is locked (for example, W_ORG_DS), use the following script to release the lock:

LOGON DATABASEALIAS/USER,PASSWORD
BEGIN LOADING USER.W_ORG_DS
ERRORFILES USER.ET_W_ORG_DS,USER.UV_W_ORG_DS;
END LOADING;

If you save the above text in a file called test.ctl, you can run the load by entering the following command at a command prompt (the FastLoad utility reads the script from standard input):

fastload < test.ctl

Tip:

To create a load script for a table, edit the test.ctl script above to change the login information, and replace all occurrences of W_ORG_DS with the required target table name.

After a load process script runs successfully, you should be able to run the command select count(*) on the target table. If you are not able to release the lock, you might need to drop and re-create the table to remove the lock. If you do so, you must re-create the statistics.

Tip:

Fastload is typically used in piped mode to load staging tables and to perform initial loads. In the event of errors, reload the entire data set.

3.5.3.2.6 Mload

The Mload process is slower than Fastload but quicker than Tpump. The Mload process can work on both empty tables as well as on tables with data. In the event of errors when running in piped mode, you cannot recover the data.

3.5.3.2.7 Tpump

The Tpump process is slower than Mload but faster than ODBC. The Tpump process does row commits, which enables you to recover processed operations even in piped mode. In other words, if you restart the process, Tpump resumes loading from the last commit point.

Tpump can be used in the following modes:

  • Tpump_Insert: Use to do inserts.

  • Tpump_Update: Use to do updates (this mode requires you to define the primary key in the Informatica target table definition).

  • Tpump_Upsert: Use to do update otherwise insert (this mode requires you to define the primary key in the Informatica target table definition).

  • Tpump_Delete: Use to do deletes (this mode requires you to define the primary key in the Informatica target table definition).

Informatica uses the actual target table name to generate the error table and log table names that it uses in its control file generation. If you have two instances of Tpump loading into the same target table at the same time, you need to modify the session to use a different error table and log table name.

The Tpump load process in piped mode is useful for incremental loads and for tables that are not empty. In the event of errors, restart the process and it resumes loading from the last commit point.

Refer to Informatica documentation for information about configuring a session to use Teradata loaders.

3.6 Oracle-Specific Database Guidelines for Oracle Business Analytics Warehouse

To configure the Business Analytics Data Warehouse on Oracle databases more easily, refer to the parameter template files init10gR2.ora and init11g.ora. These files are located in <DRIVE>:\<BI Applications install directory>\dwrep\Documentation\.

The parameter template files provide parameter guidelines based on the cost-based optimizer for Oracle 10g and 11g. Use these guidelines as a starting point. You will need to make changes based on your specific database sizes, data shape, server size (CPU and memory), and type of storage. The database administrator should make changes to the settings based on performance monitoring and tuning.

Copy the appropriate template file into your $ORACLE_HOME/dbs directory. Then, review the recommendations in the template file and make changes based on your specific database configuration.
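
For example, assuming the instance is named OBAW and you have copied and edited the 11g template (both names are illustrative), you could rebuild the server parameter file from SQL*Plus as SYSDBA and then restart the instance so the new settings take effect (the ? symbol is Oracle shorthand for $ORACLE_HOME):

CREATE SPFILE FROM PFILE='?/dbs/initOBAW.ora';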

3.7 Additional Suggestions for Optimizing Oracle Performance in Oracle Business Analytics Warehouse

This section contains additional suggestions for optimizing performance for Oracle databases.

3.8 Miscellaneous Information About Oracle BI Applications Deployments

This section contains the following topics:

3.8.1 Preconfigured Mapping for the SA System Subject Area

For Oracle's Siebel Applications customers, Table 3-5 describes the preconfigured mappings for the SA system subject area. Fields that are not available in Oracle's Siebel transactional database will default to values shown in the table.

  • Overriding Defaults. You can add user-specific values for these fields by creating an extension table to the S_USER table to store the user-specific defaults. Additionally, you can change any of the default values. The metadata for the following logical table can be modified to include any physical extension table.

    SA User.(User)
    

    For instructions, refer to the documentation about configuring tables and columns for Oracle's Siebel Business Applications.

  • Setting Provider Information. Typically, the cell phone and the fax numbers in the Oracle Business Analytics Warehouse do not contain a provider name. Therefore, the Pager field will typically hold a numeric value such as 555-483-3843. To append a provider to this address, use the following guidelines:

    • If the entire company has the same provider, then you can append the provider in the column mappings.

    • If users can have different providers, you need to create an extension table. For instructions, refer to the documentation about configuring tables and columns for Oracle's Siebel business applications.

Table 3-5 Preconfigured Mappings for the User Table in the SA System Subject Area

Logical Column        Physical Table   Expression                                Comments
Cell Phone                             ''                                        It might be mapped to S_CONTACT.CELL_PH_NUM if this field contains an SMTP address.
Cell Phone Priority                    ''                                        Defaults to N
Display Name          S_CONTACT        "Real Time OLTP"."".SIEBEL.S_CONTACT_User.FST_NAME || ' ' || "Real Time OLTP"."".SIEBEL.S_CONTACT_User.LAST_NAME   First Name concatenated with Last Name
Email                 S_CONTACT        EMAIL_ADDR
Email Priority                         'HNL'                                     Defaults to N
Email Type                             'html'                                    Defaults to HTML
Group Name            S_RESP           NAME
Handheld                               ''                                        Defaults to an empty string
Handheld Priority                      ''                                        Defaults to an empty string
Language                               'en'                                      Defaults to 'en'
Locale                                 'en'                                      Defaults to 'en'
Logon                 S_USER           LOGIN
Pager                                  ''                                        It could be mapped to S_CONTACT.PAGER_PH_NUM if this field contains an SMTP address.
Pager Priority                         ''                                        Defaults to N
Time Zone             S_TIMEZONE       NAME

3.8.2 Using Initialization Blocks

Table 3-6 lists some of the initialization blocks common to all Oracle BI Applications and their purposes. Initialization blocks that are specific to each Oracle BI Applications area are not listed here.

Table 3-6 Initialization Blocks and Their Purposes

Initialization Block                   Purpose
Authorization                          Calculates the user's responsibilities from the database.
Authentication                         Authenticates against the database and verifies that the user exists as an operational application user.
External Metadata Strings              Gets the translated value of metadata strings for the user's locale. This initialization block is critical to Intelligence Dashboards in international deployments.
LOCALE                                 Sets the user's locale specification in the Oracle BI Server.
Login Properties                       Gets the user's login properties, such as full name, from the database. This initialization block also sets the logging level of the users. By default, the log level is 0 for all users. If you wish to generate Oracle BI query logs for all users, update this initialization block by changing the default value as well as the value in the initialization SQL.
Default System Language ID             Sets the variable OLTP_LANG_ID by querying the operational application database.
Organizations for Org-based Security   Queries the operational application database to get the organization membership for each user. It sets the variable ORGANIZATION.
Primary Owner ID                       Sets the primary owner ID based on the user login ID.
Primary Position ID                    Queries the operational application database to set the variable PRIMARY_POSTN_ID.
Warehouse Refresh Date                 Sets several time-based variables such as CURRENT_YEAR.
ETL Run Date                           Retrieves the ETL run date.
ETL Default Currency                   Retrieves the default currency.


3.8.3 Creating Custom Indexes in Siebel Source Databases for Incremental Load Performance

To maximize performance for Siebel CRM, you can implement indexes using SQL files that are available in the \OracleBI\dwrep directory. Table 3-7 describes the SQL files that are appropriate for the specified applications.

Table 3-7 SQL files for Siebel transactional databases

Application Name         SQL File Name
Horizontal Application   PerfIndex_Horizontal.sql
Industry Application     PerfIndex_Industry.sql


The SQL files generate indexes on all S_.* tables that are being used by the preconfigured applications.

Note:

If you move from a test to a production environment, you need to drop the indexes and re-create them in the production environment.

3.8.3.1 An Example of Change Capture SQL and Required Indexes

The change capture process generates SQL similar to the following (XX and S_XXX are placeholders for the source table name):

INSERT INTO S_ETL_I_IMG_XX
  (ROW_ID, LAST_UPD, MODIFICATION_NUM)
SELECT ROW_ID, LAST_UPD, MODIFICATION_NUM
FROM S_XXX
WHERE LAST_UPD > 'LAST REFRESH_DATE – PRUNE DAYS'
AND NOT EXISTS
(
  SELECT 'X' FROM S_ETL_R_IMG_XX
  WHERE S_ETL_R_IMG_XX.ROW_ID = S_XXX.ROW_ID AND
        S_ETL_R_IMG_XX.MODIFICATION_NUM = S_XXX.MODIFICATION_NUM AND
        S_ETL_R_IMG_XX.LAST_UPD = S_XXX.LAST_UPD
)

Table 3-8 shows the indexes that, based on the SQL above, are created on the S_CONTACT table by the SQL generation scripts.

Table 3-8 Indexes Created on the S_CONTACT Table in Siebel transactional database

Index           Index Columns
S_CONTACT_W1    LAST_UPD, ROW_ID, MODIFICATION_NUM
S_CONTACT_W11   LAST_UPD


3.8.4 Creating Custom Indexes in Oracle EBS Source Database for Incremental Load Performance

Oracle EBS source database tables contain mandatory LAST_UPDATE_DATE columns, which are used by Oracle BI Applications for capturing incremental data changes. Some Oracle EBS source tables used by Oracle BI Applications do not have an index on the LAST_UPDATE_DATE column because the presence of the index can impede performance of the source application.

There are three categories of Oracle EBS tables as they relate to indexes on the LAST_UPDATE_DATE column:

  • Category 1: Tables that do not have indexes on the LAST_UPDATE_DATE column but on which indexes can be created without impeding performance.

  • Category 2: Tables that have indexes on LAST_UPDATE_DATE columns. These indexes were introduced in Oracle EBS release 12.

  • Category 3: Tables that cannot have indexes on the LAST_UPDATE_DATE column because performance will be impeded in the Oracle EBS environment.

3.8.4.1 Creating Indexes for Category 1 Tables

The following DDL script creates custom indexes on the LAST_UPDATE_DATE column for Category 1 tables, that is, tables in all Oracle EBS releases that do not already have this index created and for which there are no known performance implications for creating such indexes.

You should run this DDL script if your source system is Oracle EBS release 11i or release 12 and you have experienced slow incremental extraction mapping performance while implementing specific subject areas.

Note: If your source system is Oracle EBS release 12 or release 11.5.10, or is release 11.5.9 or lower and has been migrated to the Oracle Applications Tablespace Model (OATM), then replace <IDX_TABLESPACE> with APPS_TS_TX_IDX.

The DDL script is as follows:

CREATE index AP.OBIEE_AP_INVOICE_PAYMENTS_ALL ON AP.AP_INVOICE_PAYMENTS_ALL(LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;

CREATE index AP.OBIEE_AP_PAYMENT_SCHEDULES_ALL ON AP.AP_PAYMENT_SCHEDULES_ALL(LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;

CREATE index AP.OBIEE_AP_INVOICES_ALL ON AP.AP_INVOICES_ALL(LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;

CREATE index GL.OBIEE_GL_JE_HEADERS ON GL.GL_JE_HEADERS (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;

CREATE index ONT.OBIEE_OE_ORDER_HEADERS_ALL ON ONT.OE_ORDER_HEADERS_ALL(LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;

CREATE index PER.OBIEE_PAY_INPUT_VALUES_F ON PER.PAY_INPUT_VALUES_F (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;

CREATE index PER.OBIEE_PAY_ELEMENT_TYPES_F ON PER.PAY_ELEMENT_TYPES_F (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;

CREATE index PO.OBIEE_RCV_SHIPMENT_LINES ON PO.RCV_SHIPMENT_LINES (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;

CREATE index PO.OBIEE_RCV_SHIPMENT_HEADERS ON PO.RCV_SHIPMENT_HEADERS (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;

CREATE index AR.OBIEE_AR_CASH_RECEIPTS_ALL ON AR.AR_CASH_RECEIPTS_ALL (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;

CREATE index WSH.OBIEE_WSH_DELIVERY_DETAILS ON WSH.WSH_DELIVERY_DETAILS (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;

CREATE index WSH.OBIEE_WSH_NEW_DELIVERIES ON WSH.WSH_NEW_DELIVERIES (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;

Note:

  • Make sure you use FND_STATS to compute statistics on the newly created indexes and update statistics on newly indexed table columns in the Oracle EBS database.

  • All indexes created with the DDL in this section have the prefix OBIEE_. This prefix does not follow standard Oracle EBS index naming conventions. Therefore, Autopatch may fail during future upgrades. In such cases, the indexes with the OBIEE_ prefix should be dropped and Autopatch restarted.

3.8.4.2 Creating Indexes for Category 2 Tables

The following DDL creates custom indexes on the LAST_UPDATE_DATE column for Category 2 tables, that is, tables for which indexes were introduced for the LAST_UPDATE_DATE column in Oracle release 12.

You should run this DDL script if your source system is Oracle EBS release 11i.

Note: If your source system is Oracle EBS release 11.5.10, or is release 11.5.9 or lower and has been migrated to the Oracle Applications Tablespace Model (OATM), then replace <IDX_TABLESPACE> with APPS_TS_TX_IDX.

The DDL script is as follows:

CREATE index PO.RCV_TRANSACTIONS_N23 ON PO.RCV_TRANSACTIONS (LAST_UPDATE_DATE) STORAGE (INITIAL 4K NEXT 2M MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0) INITRANS 2 MAXTRANS 255 PCTFREE 10 tablespace <IDX_TABLESPACE>;

CREATE index PO.PO_DISTRIBUTIONS_N13 ON PO.PO_DISTRIBUTIONS_ALL (LAST_UPDATE_DATE) STORAGE (INITIAL 4K NEXT 2M MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0) INITRANS 2 MAXTRANS 255 PCTFREE 10 tablespace <IDX_TABLESPACE>;

CREATE index PO.PO_LINE_LOCATIONS_N11 ON PO.PO_LINE_LOCATIONS_ALL (LAST_UPDATE_DATE) STORAGE (INITIAL 4K NEXT 2M MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0) INITRANS 2 MAXTRANS 255 PCTFREE 10 tablespace <IDX_TABLESPACE>;

CREATE index PO.PO_LINES_N10 ON PO.PO_LINES_ALL (LAST_UPDATE_DATE) STORAGE (INITIAL 4K NEXT 4K MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0) INITRANS 2 MAXTRANS 255 PCTFREE 10 tablespace <IDX_TABLESPACE>;

CREATE index PO.PO_REQ_DISTRIBUTIONS_N6 ON PO.PO_REQ_DISTRIBUTIONS_ALL (LAST_UPDATE_DATE) STORAGE (INITIAL 4K NEXT 250K MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0) INITRANS 4 MAXTRANS 255 PCTFREE 10 tablespace <IDX_TABLESPACE>;

CREATE index PO.PO_REQUISITION_LINES_N17 ON PO.PO_REQUISITION_LINES_ALL (LAST_UPDATE_DATE) STORAGE (INITIAL 4K NEXT 250K MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0) INITRANS 4 MAXTRANS 255 PCTFREE 10 tablespace <IDX_TABLESPACE>;

CREATE index PO.PO_HEADERS_N9 ON PO.PO_HEADERS_ALL (LAST_UPDATE_DATE) STORAGE (INITIAL 4K NEXT 1M MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0) INITRANS 2 MAXTRANS 255 PCTFREE 10 tablespace <IDX_TABLESPACE>;

CREATE index PO.PO_REQUISITION_HEADERS_N6 ON PO.PO_REQUISITION_HEADERS_ALL (LAST_UPDATE_DATE) STORAGE (INITIAL 4K NEXT 250K MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0) INITRANS 4 MAXTRANS 255 PCTFREE 10 tablespace <IDX_TABLESPACE>;

CREATE index AR.RA_CUSTOMER_TRX_N14 ON AR.RA_CUSTOMER_TRX_ALL (LAST_UPDATE_DATE) STORAGE (INITIAL 4K NEXT 4M MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0) INITRANS 4 MAXTRANS 255 PCTFREE 10 tablespace <IDX_TABLESPACE>;

Note: Make sure you use FND_STATS to compute statistics on the newly created indexes and update statistics on newly indexed table columns in the Oracle EBS database.

3.8.4.3 Creating Indexes for Implementing Human Resources Performance Change Capture

This section provides information on source index creation for Human Resources performance change capture. Creating source indexes enables incremental processing.

To enable incremental processing, a user with database administrator rights must manually create the following indexes (an example CREATE INDEX statement follows the list):

  • For Oracle EBS Source Systems:

    • PER_ALL_ASSIGNMENTS_F.LAST_UPDATE_DATE

    • PER_PAY_PROPOSALS.LAST_UPDATE_DATE

    • PER_ASSIGNMENT_BUDGET_VALUES_F.LAST_UPDATE_DATE

    • PER_PERSON_TYPE_USAGES_F.LAST_UPDATE_DATE

    • PER_PERFORMANCE_REVIEWS.LAST_UPDATE_DATE

    • PER_APPRAISALS.LAST_UPDATE_DATE

    • PER_ALL_PEOPLE_F.LAST_UPDATE_DATE

    • PER_PERIODS_OF_SERVICE.LAST_UPDATE_DATE

    • PER_PERIODS_OF_PLACEMENT.LAST_UPDATE_DATE
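
For example, the first of these indexes might be created with DDL similar to the Category 1 statements above (the index name and tablespace are illustrative assumptions, not a prescribed convention):

CREATE index PER.OBIEE_PER_ALL_ASSIGNMENTS_F ON PER.PER_ALL_ASSIGNMENTS_F (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;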

3.8.4.3.1 Incremental Processing for Oracle EBS

By default, all new or changed data is refreshed, except where data has been purged. This includes new records, corrections, date-tracked updates, and date-tracked deletes.

There are separate mappings to detect purges and process them.

3.8.5 Creating Custom Indexes in PeopleSoft Source Database for Incremental Load Performance

This section provides information on source index creation for Human Resources performance change capture. Creating source indexes enables incremental processing.

To enable incremental processing, a user with database administrator rights must manually create the following index (an example CREATE INDEX statement follows the list):

  • For PeopleSoft Source Systems:

    • PS_JOB.LASTUPDDTM
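
For example (the index name is an illustrative assumption; qualify the table with its owning schema, such as SYSADM, as appropriate for your installation):

CREATE INDEX OBIEE_PS_JOB ON PS_JOB (LASTUPDDTM);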

3.8.5.1 Incremental Processing for PeopleSoft

By default, most data is refreshed even if last update timestamps are not present on some base tables; however, excluding Job data, there are some restrictions:

  • Appraisals: Only new records or changes to the most recent record are processed.

  • International Assignments: Only changes to the most recent record are processed.

There are separate mappings to detect purges and process them.

When set as Extending, you can force changes to any appraisal or international assignment to be picked up through a simple customization to the incremental mappings: remove the filter that restricts records to the most recent ones. Note that removing the filter can increase the processing time for an incremental load.

3.8.6 Running an ETL with a Non-English OLTP Data Source

To run an ETL with a non-English OLTP data source, you must make a copy of an appropriate source system container, and configure the language, country, and continent parameters.

To run an ETL with a non-English OLTP data source

  1. In DAC, select File, then New Source System to display the New Source System Container dialog box.

  2. Select the Create as a Copy of Existing Container radio button.

  3. From the Existing Containers drop-down list, select the container that you want to copy, then click OK.

  4. Go to the Design view.

  5. Ensure that you have selected the correct container from the Containers drop-down list.

  6. Select the Source System Parameters tab.

  7. Use the Edit tab below the list of Source System Parameters to change the value of the following parameters in the list:

    • $$DFLT_LANG (Default Language) – For example, for a Japanese data source, change this value to JPN.

    • (optional) $$DFLT_COUNTRY (Default Country)

    • (optional) $$DFLT_CONTINENT (Default Continent)

    Tip: To find out the value to specify for the $$DFLT_LANG parameter, issue the query 'select VAL from S_SYS_PREF where SYS_PREF_CD=<ETL value>' against the OLTP database. For example, to find out the default ETL language, issue the following command:

    select VAL from S_SYS_PREF where SYS_PREF_CD='ETL Default Language';

  8. Create a new ETL plan for the new source system container, and edit its parameters, as follows:

    1. Click the Execute tab.

    2. Click the Execution Plans subtab.

    3. Click New to create a new blank execution plan, and use the subtabs below (for example, Subject Areas, Parameters, Ordered Tasks) to specify the execution plan details.

    4. Click Save.

  9. Click Run Now to run the new ETL plan (or use the Schedule tab to specify when you want to run the new execution plan).