Oracle® Business Intelligence Applications Installation Guide for Informatica PowerCenter Users
Release 7.9.6.3

Part Number E19038-01

3 Preinstallation and Deployment Requirements for Oracle BI Applications

Note:

Some of the information about database platforms and source systems might not apply to this version of Oracle Business Intelligence Applications. For up-to-date information about supported databases and source systems in this version of Oracle Business Intelligence Applications, make sure you read System Requirements and Supported Platforms for Oracle Business Intelligence Applications. Make sure that you also read the Oracle Business Intelligence Applications Release Notes. The most up-to-date versions of these documents are located on the Oracle Technology Network at http://www.oracle.com/technology/documentation/bi_apps.html. To register for a free account on the Oracle Technology Network, go to http://www.oracle.com/technetwork/index.html.

This section provides information about preparing to install and deploy Oracle BI Applications. You should review this information before you begin the installation and deployment process. You should also read the general guidelines for setting up the Oracle Business Analytics Warehouse and read the appropriate database-specific guidelines for the source OLTP databases that you are using.

You also need to satisfy the database and Informatica PowerCenter requirements that are specified in Section 4.3, "Mandatory Requirements."

This section contains the following topics:

  • Section 3.1, "General Guidelines for Setting Up Oracle Business Analytics Warehouse"

  • Section 3.2, "IBM DB2 UDB-Specific Database Guidelines for Oracle Business Analytics Warehouse"

  • Section 3.3, "IBM DB2 UDB zOS and OS/390-Specific Database Guidelines for Oracle Business Analytics Warehouse"

  • Section 3.4, "SQL Server-Specific Database Guidelines for Oracle Business Analytics Warehouse"

  • Section 3.5, "Teradata-Specific Database Guidelines for Oracle Business Analytics Warehouse"

  • Section 3.6, "Oracle-Specific Database Guidelines for Oracle Business Analytics Warehouse"

  • Section 3.7, "Additional Suggestions for Optimizing Oracle Database Performance in Oracle Business Analytics Warehouse"

  • Section 3.8, "Partitioning Guidelines For Large Fact Tables"

3.1 General Guidelines for Setting Up Oracle Business Analytics Warehouse

The Oracle Business Analytics Warehouse is a database that contains dimensional schemas. Although it is technically possible to put the Oracle Business Analytics Warehouse in the same database as the transactional database, this is not recommended for performance reasons: the transactional database is structured as an online transaction processing (OLTP) database, whereas the Oracle Business Analytics Warehouse is structured as an online analytical processing (OLAP) database, and each is optimized for its own purpose.

The Informatica Repository stores all of the Informatica object definitions for the ETL mappings that populate the Oracle Business Analytics Warehouse. It is a series of repository tables that are stored in a database, which can be a transactional, analytical, or separate database.

The Oracle Business Analytics Warehouse works with relational database management systems. In addition to the general requirements, there are additional database management systems (DBMS)-specific requirements depending on the DBMS you are using.

The following general guidelines will help you set up the data warehouse physical database for performance and growth:

During the Oracle Business Analytics Warehouse configuration process, when you create the data warehouse tables using the procedure Section 4.9.1, "Creating Data Warehouse Tables," you can create tables in one tablespace and indexes in another tablespace. However, for performance reasons, it is recommended that you create tablespaces as described in Table 3-1.

Table 3-1 Recommended Tablespace Configuration

Tablespace Name   List of Tables
----------------  -------------------------------------------------------
DIM_STG           W_*DS
FACT_STG          W_*FS
DIM               W_*D and W_*MD
FACT              W_*F
AGG               W_*A
OTHER             Remaining W* tables (for example, W*G and W*GS tables)
DIM_INDX          Indexes of W_*D tables
FACT_INDX         Indexes of W_*F tables
OTHER_INDX        Remaining indexes of W* tables
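
If you are creating these tablespaces on an Oracle database, the following is a minimal sketch for one of them; the datafile path, sizes, and storage options are illustrative assumptions, not prescribed values:

CREATE TABLESPACE DIM_STG
  DATAFILE '/u01/oradata/olap/dim_stg01.dbf' SIZE 10G
  AUTOEXTEND ON NEXT 1G
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE
  SEGMENT SPACE MANAGEMENT AUTO;

Repeat the statement for each tablespace in Table 3-1, adjusting names and sizes to your data volumes.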


Note:

To avoid fatal deadlocks during the ETL, make sure that you select the 'Session Level Retry on Deadlock' option in Informatica.

3.2 IBM DB2 UDB-Specific Database Guidelines for Oracle Business Analytics Warehouse

Table 3-2 provides guidelines for parameter settings for DB2 relational database management system (RDBMS) usage. Use these guidelines as a starting point. You will need to make changes based on your specific database sizes, data shape, server size (CPU and memory), and type of storage. The database administrator should make changes to the settings based on performance monitoring and tuning considerations.

Table 3-2 Recommended DB2 Parameter Settings

Parameter        DB2 UDB V7  DB2 UDB V8 and V9  Notes
---------------  ----------  -----------------  -----
SHEAPTHRES       400000      400000
ASLHEAPSZ        15          15
RQRIOBLK         65535       65535
QUERY_HEAP_SZ    16384       16384
JAVA_HEAP_SZ     2048        2048
MAXAGENTS        400         400
NUM_INITAGENTS   10          10
NUM_POOLAGENTS   200         200
INTRA_PARALLEL   YES         YES
FCM_NUM_BUFFERS  12288       12288
SHEAPTHRES_SHR   N/A         =SHEAPTHRES
DBHEAP           16384       16384
CATALOGCACHE_SZ  5558        5558
LOGBUFSZ         2048        2048
UTIL_HEAP_SZ     10000       10000
NUM_ESTORE_SEGS  16          NIL                Estore is not needed in DB2 V8 64-bit because the V7 limit of 1.75 GB addressable memory has been lifted.
ESTORE_SEG_SZ    65536       NIL
LOCKLIST         25000       25000
APP_CTL_HEAP_SZ  5000        5000
SORTHEAP         4000        4000
STMTHEAP         40960       40960
APPLHEAPSZ       2560        2560
PCKCACHESZ       2560        2560
STAT_HEAP_SZ     20000       20000
DLCHKTIME        10000       10000
MAXLOCKS         50          50
LOCKTIMEOUT      1200        1200
MAXAPPLS         500         500
AVG_APPLS        10          10
MAXFILOP         500         500
GROUPHEAP_RATIO  N/A         70                 New in V8
APPGROUP_MEM_SZ  N/A         30000              New in V8
DATABASE_MEMORY  N/A         AUTOMATIC          New in V8
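
These values can be applied with the DB2 command line processor. A minimal sketch follows; the database alias OLAPDW is an illustrative assumption, and note that SHEAPTHRES is an instance-level (database manager) parameter while SORTHEAP and LOCKLIST are database-level parameters:

db2 UPDATE DBM CFG USING SHEAPTHRES 400000
db2 UPDATE DB CFG FOR OLAPDW USING SORTHEAP 4000
db2 UPDATE DB CFG FOR OLAPDW USING LOCKLIST 25000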


Note:

To avoid fatal deadlocks during the ETL, make sure that you select the 'Session Level Retry on Deadlock' option in Informatica.

3.3 IBM DB2 UDB zOS and OS/390-Specific Database Guidelines for Oracle Business Analytics Warehouse

The following requirements apply to IBM DB2 RDBMS usage for zOS and OS/390:

Table 3-3 Variable Settings for IBM DB2 UDB zOS and OS/390 Databases

Parameter  Recommended Setting  Notes
---------  -------------------  -----
IDTHTOIN   1800
CDSSRDEF   Any
STARJOIN   1                    This setting indicates that star join is enabled. The one table with the largest cardinality is the fact table. However, if there is more than one table with this cardinality, star join is not enabled.


3.4 SQL Server-Specific Database Guidelines for Oracle Business Analytics Warehouse

This section provides guidelines for SQL Server database usage.

Note:

The SQL Server database must be created with a collation sequence that supports binary sort order or case-sensitive dictionary sort order. Case-insensitive dictionary sort order is not supported. For example, for binary sort order with the U.S English character set, use the collation 'Latin1_General_BIN'. If you use the default collation setting of 'SQL_Latin1_General_CP1_CI_AS', the database is set to case-insensitive, which is not supported, and causes index creation failures.
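
For example, a minimal sketch of creating the warehouse database with the binary collation; the database name OLAP_DW is an illustrative assumption:

CREATE DATABASE OLAP_DW COLLATE Latin1_General_BIN;

-- Verify the collation after creation.
SELECT DATABASEPROPERTYEX('OLAP_DW', 'Collation');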

This section includes the following topics:

  • Section 3.4.1, "Setting the ANSI NULL Option"

  • Section 3.4.2, "Modifying the DB Library Options Setting"

  • Section 3.4.3, "Recommended SQL Server Database Parameters"

3.4.1 Setting the ANSI NULL Option

Oracle BI Applications requires that SQL Server databases be created with the ANSI NULL option selected.

To set the ANSI NULL option

  1. In the SQL Server Enterprise Manager, right-click the appropriate database, and choose Properties.

  2. Click the Options tab and select the box for ANSI NULL default.
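
Alternatively, a sketch of setting the option in T-SQL, again assuming a database named OLAP_DW:

ALTER DATABASE OLAP_DW SET ANSI_NULL_DEFAULT ON;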

3.4.2 Modifying the DB Library Options Setting

In a SQL Server 2000 environment, when loading Oracle BI Applications tables with international data, or loading more than one language, you need to modify the DB Library Options setting.

To modify the DB Library Options setting

  1. From the Microsoft SQL Server program menu, select Client Network Utility.

  2. Select the DB Library Options tab.

  3. Clear the option Automatic ANSI to OEM.

    Note:

    SQL Server 2000 automatically tunes many of the server configuration options; therefore, an administrator is required to do little, if any, tuning. Although these configuration options can be modified, the general recommendation is that these options be left at their default values, allowing SQL Server to automatically tune itself based on run-time conditions.

3.4.3 Recommended SQL Server Database Parameters

If necessary, SQL Server components can be configured to optimize performance, as shown in Table 3-4.

Table 3-4 Recommended Variable Settings for SQL Server Databases

Parameter                       Recommended Setting  Notes
------------------------------  -------------------  -----
Affinity mask                   0
Allow updates                   0
Awe enabled                     0
C2 audit mode                   0
Cost threshold for parallelism  5
Cursor threshold                -1
Default full-text language      1033
Default language                0
Fill factor                     95%                  For insert-intensive transactions, set Fill factor between 90 and 95%. For better query performance, set Fill factor to 95 or even 100%.
Index create memory             1024 KB              Default is 0.
Lightweight pooling             0
Locks                           0
Max degree of parallelism       0                    Default is 0, which means use parallel plan generation. Leave this setting at 0. Set it to 1 (use only one process) if you run multi-threaded components (for example, several EIM threads).
Max server memory               2000 MB              Default is 2147483647.
Max text repl size              65536 B
Max worker threads              100                  Default is 255.
Media retention                 0
Min memory per query            1024 KB
Min server memory               500 MB               Default is 0.
Nested triggers                 1
Network packet size             8192 B               Default is 4096.
Open objects                    0
Priority boost                  0
Query governor cost limit       0                    Modify to 60 only if CPU usage is high.
Query wait                      -1 sec
Recovery interval               0 min
Remote access                   1
Remote login timeout            20 sec
Remote proc trans               0
Remote query timeout            600 sec
Scan for startup procs          0
Set working set size            0
Two-digit year cutoff           2049
User connections                0
User options                    0


  • SQL Server memory: Make sure adequate memory is available.

  • Transaction logs and TempDB: These should reside on a disk separate from those used by database data.

  • Full load: Use the Full Recovery model for the database.

  • Incremental (Refresh) load: Change from the Full to the Bulk-Logged Recovery model.
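
A minimal sketch of switching recovery models between load types follows; the database name OLAP_DW is an illustrative assumption:

-- Before a full load:
ALTER DATABASE OLAP_DW SET RECOVERY FULL;

-- Before an incremental (refresh) load:
ALTER DATABASE OLAP_DW SET RECOVERY BULK_LOGGED;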

3.5 Teradata-Specific Database Guidelines for Oracle Business Analytics Warehouse

This section contains the recommended best practices and guidelines for maximizing performance in Teradata deployments. It contains the following topics:

  • Section 3.5.1, "Installation of Required JDBC Driver for Teradata Databases"

  • Section 3.5.2, "General Guidelines for Teradata Deployments"

  • Section 3.5.3, "Best Practices For Teradata Deployments"

3.5.1 Installation of Required JDBC Driver for Teradata Databases

The Data Warehouse Administration Console (DAC) requires JDBC drivers for database connectivity. Use only JDBC drivers that are compatible with the supported databases. For information about supported databases, see System Requirements and Supported Platforms for Oracle Business Intelligence Applications. Because JDBC drivers vary with different database versions, use only drivers that are shipped with the database or downloaded from the database vendor's site and that are certified for the given database version. Third-party JDBC drivers for the databases are currently not supported.

If you have a Unicode environment on a Teradata database, you must install the Teradata JDBC Driver for Teradata 12.0 or Teradata 13.0. This driver is required for all supported versions of Teradata databases, including versions earlier than version 12. The Teradata JDBC Driver for Teradata 12.0 and Teradata 13.0 is available at http://www.teradata.com/DownloadCenter, under the title Teradata JDBC Driver.

3.5.2 General Guidelines for Teradata Deployments

The following requirements apply to Teradata database usage:

  • If a deadlock issue is encountered, Oracle recommends that you use the 'Execute Serially' option for the specific groups within DAC. For Teradata, this is mandatory. When this option is selected for the task group, you must rebuild and run the Execution Plan that includes that task.

  • Install the Teradata Parallel Data Pump (TPump) Teradata Load utility on the machine on which the Informatica Server is installed.

  • When installing Informatica PowerCenter, make sure there are no spaces in the Informatica Server directory name or the directory path. The default directory path contains spaces, which you must remove manually.

  • When you install Oracle BI Applications in a Teradata environment, you must load the pre-built Informatica Repository file Oracle_BI_DW_Teradata.rep, which is located in ORACLE_HOME\biapps\dwrep\Informatica\Repository.

  • In the ODBC configuration for Teradata, set the session Mode to ANSI and DateFormat to AAA, before you create the schema. Tables must be created as case specific (see the sketch following this list). If you do the ODBC configuration after the schema is created, tables might not be created as case specific.

  • To avoid overflows if the arithmetic calculation involves any decimal data type more than 18,3 (precision, scale), add the following static source system parameters:

    • $$Hint_Tera_Post_Cast = "as Decimal(18,3))"

    • $$Hint_Tera_Pre_Cast = "Cast("

    For more information about setting source system parameters in DAC, see Section 4.18.2, "Setting DAC Source System Parameters."

  • Install the reswords.txt file in the server/bin directory for Informatica. When you are configuring the reswords.txt file, note the following points:

    If any table name or column name contains a database reserved word, such as MONTH or YEAR, the session fails with database errors when the Informatica Integration Service executes SQL against the database. You can create and maintain a reserved words file, reswords.txt, in the server/bin directory. When the Integration Service initializes a session, it searches for reswords.txt. If the file exists, the Integration Service places quotes around matching reserved words when it executes SQL against the database.

    Use the following rules and guidelines when working with reserved words:

    • The Integration Service searches the reserved words file when it generates SQL to connect to source, target, and lookup databases.

    • If you override the SQL for a source, target, or lookup, you must enclose any reserved word in quotes.

    • You may need to enable some databases, such as SQL Server, to use SQL-92 standards regarding quoted identifiers. Use connection environment SQL to issue the command.

      For example, use the following command with SQL Server: SET QUOTED_IDENTIFIER ON

    To use a reserved words file, create a file named reswords.txt and place it in the server/bin directory. Create a section for each database that you need to store reserved words for. Add reserved words used in any table or column name. You do not need to store all reserved words for a database in this file. Database names and reserved words in reswords.txt are not case sensitive.

    A sample reswords.txt file is listed below:

    [Teradata]
    MONTH
    DATE
    INTERVAL
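
As noted in the ODBC configuration guideline above, warehouse tables must be created as case specific. The following is a minimal sketch of what a case-specific column definition looks like in ANSI-mode Teradata DDL; the table and column definitions are illustrative assumptions, not the shipped schema:

CREATE TABLE W_SAMPLE_DS
(
  INTEGRATION_ID    VARCHAR(80) CHARACTER SET LATIN CASESPECIFIC,
  DATASOURCE_NUM_ID DECIMAL(10,0)
);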
    

Note:

To avoid fatal deadlocks during ETL processes, make sure that you select the 'Session Level Retry on Deadlock' option in Informatica.

3.5.3 Best Practices For Teradata Deployments

This section contains the recommended best practices for maximizing performance in Teradata deployments, and contains the following topics:

  • Section 3.5.3.1, "Collect Statistics as a Prerequisite"

  • Section 3.5.3.2, "LEFT OUTER JOIN Issue"

  • Section 3.5.3.3, "Group By Versus Distinct"

  • Section 3.5.3.4, "Pruning of Mappings and Tables"

  • Section 3.5.3.5, "Loader Configurations"

3.5.3.1 Collect Statistics as a Prerequisite

Once the tables have been created in the staging and target databases, you must run the supplied statistics collection. Failure to do so can affect ETL performance and possibly result in a spool space error (error number 2646).

DAC recollects statistics as part of the ETL process. However, DAC issues the collect statistics statement at the table level only (for example, collect statistics on w_org_d), and only for existing statistics.
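
For example, a sketch of defining a statistic once and then recollecting it; the column choice is an illustrative assumption, and DAC itself issues only the table-level form:

COLLECT STATISTICS ON W_ORG_D COLUMN (ROW_WID);  -- defines and collects the statistic

COLLECT STATISTICS ON W_ORG_D;                   -- table-level recollection, as DAC issues it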

3.5.3.2 LEFT OUTER JOIN Issue

Teradata code performance is highly dependent on the specific environment of each installation. A high number of occurrences of a single (or few) values in columns that are involved in joins, whether null or not, may cause skewing of data across Teradata AMPs. The impact of this is increased likelihood of exceeding the "per AMP" spool limit as well as increased CPU usage on one AMP while the other AMPs are much less utilized for the query experiencing the skew. This extends the processing time for this query, and negatively impacts other queries in the system that are competing for CPU resources on the AMP experiencing the skew.

Depending on the environment, the current code may redistribute the outer table on the join key, except when the inner table is very small; the Teradata Optimizer may choose to copy the inner table to all AMPs and not redistribute the outer table. Excessive nulls or other values in foreign keys will skew data in Teradata during join operations. If this happens, verify that statistics were defined and collected on the tables involved. If all the necessary statistics are defined and recently collected, it may be necessary to rewrite the SQL.

Many fact SIL mappings need to get the dimensional ROW_WID from ROW_ID/INTEGRATION_ID. For example, W_PER_RANK_FS.ACCNT_ID needs to be converted to ACCNT_WID before loading into the W_PER_RANK_F table. Because ACCNT_ID is nullable, the join between W_PER_RANK_FS and W_ORG_D is defined as a LEFT OUTER JOIN.

However, the percentage of NULLs in the ACCNT_ID column can be as high as 50 percent or more, depending on the data set. When redistributing W_PER_RANK_FS according to ACCNT_ID, all rows with ACCNT_ID = NULL are put onto a single AMP.

Although a Teradata database usually has hundreds of gigabytes of spool space, the spool space is allocated across hundreds of AMPs. The spool space for each AMP is limited (for example, to two gigabytes).

When a large percentage of W_PER_RANK_FS is distributed to a single AMP, this can result in insufficient spool space being available. This happens when too much data is spooled onto one AMP, not because the spool space is too small.

To work with Teradata's mechanism of parallel processing and resolving the LEFT OUTER JOIN, SQL must be re-written.

As an example, refer to the following original SQL:

SELECT ...  FROM 
 W_PER_RANK_FS FS LEFT OUTER JOIN  W_ORG_D ORG ON
   FS.ACCNT_ID = ORG.INTEGRATION_ID  AND
   FS.DATASOURCE_NUM_ID = ORG.DATASOURCE_NUM_ID

The above SQL should be re-coded to convert the NULLs to some evenly distributed but non-matched values, as shown in the following SQL example:

SELECT ... FROM
 W_PER_RANK_FS FS LEFT OUTER JOIN  
   (SELECT
   FS.INTEGRATION_ID, FS.DATASOURCE_NUM_ID, ORG.ROW_WID, ORG.GEO_WID
    FROM
   W_PER_RANK_FS FS, W_ORG_D ORG
   WHERE
   FS.ACCNT_ID = ORG.INTEGRATION_ID AND FS.DATASOURCE_NUM_ID = 
   ORG.DATASOURCE_NUM_ID AND FS.ACCNT_ID IS NOT NULL)
ORG  ON
   FS.DATASOURCE_NUM_ID = ORG.DATASOURCE_NUM_ID AND
   FS.INTEGRATION_ID = ORG.INTEGRATION_ID 

The same SQL re-coding method can be used for other Source Qualifiers that run into spool space problems.

3.5.3.3 Group By Versus Distinct

When the number of distinct values is low, it is more efficient to use the GROUP BY phrase rather than the DISTINCT phrase. Use DISTINCT only when the number of distinct values is high.
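
An illustrative pair of equivalent queries, reusing the fact staging table from the earlier example; in Teradata, GROUP BY reduces rows on each AMP before redistribution, whereas DISTINCT redistributes all rows first:

-- Preferred when ACCNT_ID has a low number of distinct values:
SELECT ACCNT_ID FROM W_PER_RANK_FS GROUP BY ACCNT_ID;

-- Use only when the number of distinct values is high:
SELECT DISTINCT ACCNT_ID FROM W_PER_RANK_FS;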

3.5.3.4 Pruning of Mappings and Tables

If you do not use all of the preconfigured fields supplied, you can improve performance by stripping the extraneous fields from the mappings and tables.

3.5.3.5 Loader Configurations

This section explains the loaders that are available in Teradata, and how they are used in Oracle Business Intelligence Applications.

Teradata has three different types of Teradata loader processes, as follows:

  • Fastload

  • Mload

  • Tpump

Each loader process can be used in two different modes, as follows:

  • Staged Mode: The Informatica process does the following in this order:

    • Reads from the source data.

    • Creates a data file.

    • Invokes the loader process to load the table using the data file created.

    Advantages: In the event of failures, you can recover using the Teradata recovery process.

    Disadvantages: Staged mode is slower than Piped mode, and you need more disk space, as it can create large data files.

  • Piped Mode: The Informatica process reads from the source and simultaneously pipes that data to the loader to start loading the target table.

    Advantages: Quicker than Staged mode, and you do not require large amounts of disk space because no data files are created.

    Disadvantages: In the event of failures, you cannot recover using the Teradata recovery process (because tpump does row commits unlike fastload and mload).

3.5.3.5.1 Tpump

TPump is a data loading utility that helps you maintain (update, delete, insert, and atomic upsert) the data in your Teradata database. TPump allows you to achieve near real-time data in your data warehouse. TPump uses standard Teradata SQL to achieve moderate to high data loading rates to the Teradata database. Multiple sessions and multistatement requests are typically used to increase throughput. Unlike most load utilities, TPump uses row hash locks rather than table-level locks. This allows you to run queries while TPump is running, and it also means that TPump can be stopped instantaneously.

Tpump can be used in the following modes:

  • Tpump_Insert: Use to do inserts.

  • Tpump_Update: Use to do updates (this mode requires you to define the primary key in the Informatica target table definition).

  • Tpump_Upsert: Use to do update otherwise insert (this mode requires you to define the primary key in the Informatica target table definition).

  • Tpump_Delete: Use to do deletes (this mode requires you to define the primary key in the Informatica target table definition).

Informatica uses the actual target table name to generate the error table and log tables to be used as part of its control file generation. If you have two instances of Tpump loading into the same target table at the same time, then you need to modify the session to use a different error table and log table name.

The Tpump load process in piped mode is useful for incremental loads, and where the table is not empty. In the event of errors, restart the process and it starts re-loading from the last committed data.

Refer to Informatica documentation for information about configuring a session to use Teradata loaders.

3.5.3.5.2 Fastload

The Fastload External Loader process is used on empty tables, such as loading staging tables and in initial loads where the tables are empty. When the Fastload process starts loading, it locks the target table, which means that other processes (for example, lookups) cannot access that table. One solution to this problem is to specify dummy SQL for the lookup overrides at the session level.

Tip:

If a session fails during a Fastload process, use SQL Assistant to run a simple SQL command (for example, count(*)) to determine whether the table is locked by a Fastload process.

If a table is locked (for example, for W_ORG_DS), use the following script to release the lock:

LOGON DATABASEALIAS/USER,PASSWORD;
BEGIN LOADING USER.W_ORG_DS
ERRORFILES USER.ET_W_ORG_DS,USER.UV_W_ORG_DS;
END LOADING;
LOGOFF;

If you save the above text in a file called test.ctl, you would run this process by piping the file into the Fastload utility at a command prompt:

C:\> fastload < test.ctl

Tip:

To create a load script for a table, edit the test.ctl script above to change the login information, and replace all occurrences of W_ORG_DS with the required target table name.

After a load process script runs successfully, you should be able to run the command 'select count(*)' on the target table. If you are not able to release the lock, you might need to drop and re-create the table to remove the lock. If you do so, you must re-create the statistics.
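
For example, using the table from the script above:

SELECT COUNT(*) FROM W_ORG_DS;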

Tip:

Fastload is typically used in piped mode to load staging tables and for initial loads. In the event of errors, reload the entire data set.

3.6 Oracle-Specific Database Guidelines for Oracle Business Analytics Warehouse

To configure the Business Analytics Data Warehouse on Oracle databases more easily, refer to the parameter template files init10gR2.ora and init11g.ora. These files are located in <DRIVE>:\<Oracle BI Applications install directory>\dwrep\Documentation\.

The parameter template files provide parameter guidelines based on the cost-based optimizer for Oracle 10g and 11g. Use these guidelines as a starting point. You will need to make changes based on your specific database sizes, data shape, server size (CPU and memory), and type of storage. The database administrator should make changes to the settings based on performance monitoring and tuning.

Copy the appropriate template file into your $ORACLE_HOME/dbs directory. Then, review the recommendations in the template file, and make changes based on your specific database configuration.

Note:

The NLS_LENGTH_SEMANTICS parameter enables you to define byte- or character-length semantics. Oracle BI Applications supports BYTE and CHAR values for this parameter. You can add this parameter to the init10gR2.ora and init11g.ora files if you are using MLS characters.
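
For example, a one-line sketch of the entry you might add to the template file:

NLS_LENGTH_SEMANTICS = CHAR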

3.7 Additional Suggestions for Optimizing Oracle Database Performance in Oracle Business Analytics Warehouse

This section contains additional suggestions for optimizing performance for Oracle databases.

3.8 Partitioning Guidelines For Large Fact Tables

This section explains how to use partitioning to maximize performance in your Oracle BI Applications deployment. It contains the following topics:

3.8.1 Introduction to Partitioning Large Fact Tables

Taking advantage of range and composite range-range partitioning for fact tables reduces index and statistics maintenance time during ETL processes as well as improves Web query performance. Because the majority of inserts and updates impact the last partition(s), you only need to disable local indexes on a few impacted partitions, and then rebuild disabled indexes after the load and compute statistics on updated partitions only. Online reports and dashboards should also render results faster, since the optimizer builds more efficient execution plans using partitioning elimination logic.
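
As an illustration of partition elimination, with the W_WRKFC_EVT_MONTH_F table range-partitioned by EVENT_YEAR (as in the example in Section 3.8.2), a query such as the following sketch touches only the single matching partition rather than the whole fact table:

SELECT COUNT(*)
FROM   W_WRKFC_EVT_MONTH_F
WHERE  EVENT_YEAR = 2010;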

Large fact tables, with more than 20 million rows, can be suitable for partitioning. To build an optimal partitioned table with reasonable data distribution, you can consider partitioning by month, quarter, year, and so on. You can either identify and partition target fact tables before the initial load or convert the populated tables into partitioned objects after the full load.

To implement support for partitioned tables in Oracle Business Analytics Data Warehouse, you need to update the DAC metadata and manually convert the candidates into partitioned tables in the target database.

To deploy partitioned fact tables

  1. Partition the large fact tables. For more information, see Section 3.8.2, "Partitioning Large Fact Tables."

  2. Configure DAC to support ETL for partitioned tables. For more information, see Section 3.8.3, "Configuring DAC to Support Partitioned Tables."

3.8.2 Partitioning Large Fact Tables

If you have large fact tables that are affecting performance, you can maximize performance by partitioning the fact tables as described in this section.

The procedures in this section use an example that converts the fact table W_WRKFC_EVT_MONTH_F into a partitioned table and uses range partitioning by year.

To partition large fact tables

  1. Identify a partitioning key and decide on a partitioning interval.

    Choosing the correct partitioning key is the most important factor for effective partitioning, since it defines how many partitions will be involved in Web queries or ETL updates. Review the following guidelines for selecting a column for a partitioning key:

    • Identify eligible columns of type DATE for implementing range partitioning.

    • Connect to the Oracle BI Server repository and check the usage or dependencies on each column in the logical and presentation layers.

    • Analyze the summarized data distribution in the target table by each potential partitioning key candidate and data volumes per time range, month, quarter or year.

    • Based on the compiled data, decide on the appropriate partitioning key and partitioning range for your future partitioned table.

    • The recommended partitioning range for most implementations is a month; however, you may want to consider implementing quarterly or yearly partitioning ranges.

    • These guidelines assume that the majority of incremental ETL data volume is made up of new records, which are stored in one of the two latest partitions. Depending on the range granularity you choose, it is recommended that you rebuild local indexes for the most impacted of the latest partitions, as described below:

      • Monthly range. It is recommended that you maintain the two latest partitions, that is, the current and previous partition.

      • Quarterly range. It is only necessary to maintain the current partition.

      • Yearly range. It is recommended that you maintain the current partition.

  2. Create a partitioned table.

    You can pre-create a partitioned table prior to the initial load, or load data into the regular table and then create its partitioned copy and migrate the summarized data. If you have already completed the initial load into a regular table and then decided to partition it, you do not need to re-run the initial load.

    You can consider two options to convert a table into a partitioned one:

    • Create the table using the "as select" SQL statement.

      This method is simpler and faster.

    • Create the table using exchange partition syntax and then split the partitions.

      This method is suitable in high-availability data warehouses when you have to carry out partitioning with end users accessing the data.

      The syntax in the following SQL commands assumes a data tablespace named BIAPPS_DATA and an index tablespace named USERS_IDX:

      1. Rename the original table.

        SQL> rename W_WRKFC_EVT_MONTH_F to W_WRKFC_EVT_MONTH_F_ORIG;
        
      2. Create the partitioned table using the range partitioning by year:

        SQL> create table W_WRKFC_EVT_MONTH_F partition by range (EVENT_YEAR)(
        partition PART_MIN values less than (2006),
        partition PART_2006 values less than (2007),
        partition PART_2007 values less than (2008),
        partition PART_2008 values less than (2009),
        partition PART_2009 values less than (2010),
        partition PART_2010 values less than (2011),
        partition PART_MAX  values less than (maxvalue)
        )
        tablespace BIAPPS_DATA
        nologging parallel enable row movement
        as select * from W_WRKFC_EVT_MONTH_F_ORIG;
        

        Note:

        You must use the format YYYY for partitioning by year, YYYYQQ for partitioning by quarter, and YYYYMM for partitioning by month. Make sure you check the partitioning column data type before partitioning the table.

        The EVENT_YEAR column in the example above uses number (4) precision, so the table partition values are defined using the format YYYY. If you choose the WID column for a partitioning key, then you have to define your partition ranges using the format YYYYMMDD.

        You can implement composite range-range partitioning using syntax similar to the following. This example uses a quarterly range for partitioning and a yearly range for sub-partitioning. Because the EXPENDITURE_DT_WID column has number(8) precision, the sub-partition values are defined using the format YYYYMMDD. As noted above, use the format YYYY for partitioning by year, YYYYQQ for quarter, and YYYYMM for month, and make sure you check the partitioning column data type before you partition a table.

        SQL> create table W_PROJ_EXP_LINE_F
        partition by range    (CHANGED_ON_DT)
        subpartition by range (EXPENDITURE_DT_WID)
        (partition PART_MIN values less than (TO_DATE('01-JAN-2008','DD-MON-YYYY'))
        ( subpartition PART_MIN_MIN  values less than (19980000)
        , subpartition PART_MIN_1998 values less than (19990000)
        , subpartition PART_MIN_1999 values less than (20010000)
        , subpartition PART_MIN_2001 values less than (20020000)
        , subpartition PART_MIN_2002 values less than (20030000)
        , subpartition PART_MIN_2003 values less than (20040000)
        , subpartition PART_MIN_2004 values less than (20050000)
        , subpartition PART_MIN_2005 values less than (20060000)
        , subpartition PART_MIN_2006 values less than (20070000)
        , subpartition PART_MIN_2007 values less than (20080000)
        , subpartition PART_MIN_2008 values less than (20090000)
        , subpartition PART_MIN_2009 values less than (20100000)
        , subpartition PART_MIN_MAX values less than (MAXVALUE)
        )
        , partition PART_200801 values less than (TO_DATE('01-APR-2008','DD-MON-YYYY'))
        ( subpartition PART_200801_MIN  values less than (19980000)
        , subpartition PART_200801_1998 values less than (19990000)
        , subpartition PART_200801_1999 values less than (20010000)
        , subpartition PART_200801_2001 values less than (20020000)
        , subpartition PART_200801_2002 values less than (20030000)
        , subpartition PART_200801_2003 values less than (20040000)
        , subpartition PART_200801_2004 values less than (20050000)
        , subpartition PART_200801_2005 values less than (20060000)
        , subpartition PART_200801_2006 values less than (20070000)
        , subpartition PART_200801_2007 values less than (20080000)
        , subpartition PART_200801_2008 values less than (20090000)
        , subpartition PART_200801_2009 values less than (20100000)
        , subpartition PART_200801_MAX  values less than (MAXVALUE)
        )
        ...
        ...
        , partition PART_MAX values less than (maxvalue)
        ( subpartition PART_MAX_MIN values less than (19980000)
        , subpartition PART_MAX_1998 values less than (19990000)
        , subpartition PART_MAX_1999 values less than (20010000)
        , subpartition PART_MAX_2001 values less than (20020000)
        , subpartition PART_MAX_2002 values less than (20030000)
        , subpartition PART_MAX_2003 values less than (20040000)
        , subpartition PART_MAX_2004 values less than (20050000)
        , subpartition PART_MAX_2005 values less than (20060000)
        , subpartition PART_MAX_2006 values less than (20070000)
        , subpartition PART_MAX_2007 values less than (20080000)
        , subpartition PART_MAX_2008 values less than (20090000)
        , subpartition PART_MAX_2009 values less than (20100000)
        , subpartition PART_MAX_MAX values less than (maxvalue)
        )
        ) nologging parallel
        enable row movement
        as (select * from W_PROJ_EXP_LINE_F_ORIG);
        
        
  3. Drop or rename indexes on the renamed table.

    SQL> spool drop_ind.sql
    SQL> SELECT 'DROP INDEX '|| INDEX_NAME||';' FROM USER_INDEXES
         WHERE TABLE_NAME = 'W_WRKFC_EVT_MONTH_F_ORIG';
    SQL> spool off
    SQL> @drop_ind.sql
    

    If you want to keep indexes on the original renamed table until you have successfully completed the partitioning conversion, use the following commands:

    SQL> spool rename_ind.sql
    SQL> SELECT 'ALTER INDEX '|| INDEX_NAME ||' rename to '|| INDEX_NAME || '_ORIG;' FROM USER_INDEXES
    WHERE TABLE_NAME = 'W_WRKFC_EVT_MONTH_F_ORIG';
    SQL> spool off
    SQL> @rename_ind.sql
    
  4. Create global and local indexes.

    1. Execute the following queries as the DAC Repository owner:

      SQL> spool indexes.sql
      SQL> SELECT 'CREATE '
           ||DECODE(ISUNIQUE,'Y','UNIQUE ')
           ||DECODE(ISBITMAP,'Y','BITMAP ')
           ||'INDEX '
           ||I.NAME
           ||' ON '
           ||T.NAME
           ||' ('
           ||MAX(DECODE(POSTN,1,C.NAME||' ASC'))
           ||MAX(DECODE(POSTN,2,' ,'||C.NAME||' ASC'))
           ||MAX(DECODE(POSTN,3,' ,'||C.NAME||' ASC'))
           ||MAX(DECODE(POSTN,4,' ,'||C.NAME||' ASC'))
           ||MAX(DECODE(POSTN,5,' ,'||C.NAME||' ASC'))
           ||MAX(DECODE(POSTN,6,' ,'||C.NAME||' ASC'))
           ||MAX(DECODE(POSTN,7,' ,'||C.NAME||' ASC'))
           ||') tablespace USERS_IDX '
           ||DECODE(ISUNIQUE, 'Y','GLOBAL','LOCAL')
           ||' NOLOGGING;'
      FROM  W_ETL_TABLE T, W_ETL_INDEX I, W_ETL_INDEX_COL C
      WHERE T.ROW_WID = I.TABLE_WID
            AND T.NAME = 'W_WRKFC_EVT_MONTH_F'
            AND I.ROW_WID = C.INDEX_WID
            AND I.INACTIVE_FLG = 'N'
      GROUP BY T.NAME,I.NAME,ISBITMAP,ISUNIQUE;
      SQL> spool off;
      
      

      This script generates CREATE INDEX statements for indexes with up to seven column positions. If you have indexes with more than seven column positions, extend the "MAX(DECODE(POSTN...))" pattern accordingly.

    2. Run the spooled file indexes.sql in the data warehouse schema. For example:

      SQL> @indexes.sql
      
  5. Compute statistics on the partitioned table. For example:

    SQL> BEGIN
         dbms_stats.gather_table_stats(
              ownname => NULL,
              tabname => 'W_WRKFC_EVT_MONTH_F',
              cascade => true,
              estimate_percent => dbms_stats.auto_sample_size,
              method_opt => 'FOR ALL INDEXED COLUMNS SIZE AUTO');
         END;
         /
    
  6. Configure Informatica to support partitioned tables by enabling row movement.

    1. In Informatica Workflow Manager, from the menu bar select Connections, and then Relational.

    2. In the Relational Connection Browser dialog box, select the DataWarehouse connection.

    3. Update the Connection Environment SQL with the following:

      ALTER SESSION SET SKIP_UNUSABLE_INDEXES=TRUE;
      

You now need to configure DAC to support the partitioned tables. For instructions, see Section 3.8.3, "Configuring DAC to Support Partitioned Tables."

3.8.3 Configuring DAC to Support Partitioned Tables

Once you have partitioned your fact tables as described in Section 3.8.2, "Partitioning Large Fact Tables," you need to configure DAC to support the partitioned tables. In this process, you first create new source system parameters. Then, you use the DAC Actions feature to create index actions for overriding the default behavior for dropping and creating indexes. You also create a table action for overriding the default action for analyzing tables. After you create the index and table actions, you then need to associate these actions with the appropriate indexes and tables in the DAC Repository.

Note:

The example in this process shows how to set up rebuilding indexes and maintaining statistics for the last two partitions, that is, the previous and current partitions, for range partitioning by year. You should consider implementing previous and current partitions only for monthly or more granular ranges. If you implement partitions for quarterly or yearly ranges, you only need to maintain the current partition. Maintaining the previous partition for partitioning by a quarter or a year may introduce unnecessary overhead and extend your incremental ETL execution time.

To configure DAC to support partitioned tables, you must complete the following procedures:

  • Section 3.8.3.1, "Creating Source System Parameters in DAC to Support Partitioned Tables"

  • Section 3.8.3.2, "Creating Index Actions in DAC"

  • Section 3.8.3.3, "Creating Table Actions in DAC"

  • Section 3.8.3.4, "Assigning Index Actions to Indexes in DAC"

  • Section 3.8.3.5, "Assigning Table Actions to Tables in DAC"

For additional information about the DAC Actions feature, see the Oracle Business Intelligence Data Warehouse Administration Console User's Guide, which is available on the Oracle Technology Network at http://www.oracle.com/technology/documentation/bi_dac.html.

3.8.3.1 Creating Source System Parameters in DAC to Support Partitioned Tables

Follow this procedure to create source system parameters in DAC to support partitioned tables.

To create source system parameters in DAC to support partitioned tables

  1. Log into DAC.

    For instructions on logging into DAC, see Section A.1, "How to Log into DAC."

  2. Go to the Design view, and select the appropriate custom container from the drop-down list.

  3. Click the Source System Parameters tab.

  4. In the toolbar, click New to open a new record.

  5. For yearly partitions:

    1. Create a parameter with the following values:

      -Name: $$CURRENT_YEAR_WID

      -Data Type: SQL

      -Value: SELECT TO_CHAR(ROW_WID) FROM W_YEAR_D WHERE W_CURRENT_CAL_YEAR_CODE = 'Current'

      -Logical Data Source: DBConnection_OLAP

    2. Create a second parameter with the following values:

      -Name: $$PREVIOUS_YEAR_WID

      -Data Type: SQL

      -Value: SELECT TO_CHAR(ROW_WID) FROM W_YEAR_D WHERE W_CURRENT_CAL_YEAR_CODE = 'Previous'

      -Logical Data Source: DBConnection_OLAP

  6. For monthly partitions:

    1. Create a parameter with the following values:

      -Name: $$CURRENT_MONTH_WID

      -Data Type: SQL

      -Value: SELECT TO_CHAR(ROW_WID) FROM W_MONTH_D WHERE W_CURRENT_CAL_MONTH_CODE = 'Current'

      -Logical Data Source: DBConnection_OLAP

    2. Create a second parameter with the following values:

      -Name: $$PREVIOUS_MONTH_WID

      -Data Type: SQL

      -Value: SELECT TO_CHAR(ROW_WID) FROM W_MONTH_D WHERE W_CURRENT_CAL_MONTH_CODE = 'Previous'

      -Logical Data Source: DBConnection_OLAP

  7. For quarterly partitions:

    1. Create a parameter with the following values:

      -Name: $$CURRENT_QTR_WID

      -Data Type: SQL

      -Value: SELECT TO_CHAR(ROW_WID) FROM W_QTR_D WHERE W_CURRENT_CAL_QTR_CODE = 'Current'

      -Logical Data Source: DBConnection_OLAP

    2. Create a second parameter with the following values:

      -Name: $$PREVIOUS_QTR_WID

      -Data Type: SQL

      -Value: SELECT TO_CHAR(ROW_WID) FROM W_QTR_D WHERE W_CURRENT_CAL_QTR_CODE = 'Previous'

      -Logical Data Source: DBConnection_OLAP

3.8.3.2 Creating Index Actions in DAC

This section provides instructions for creating index actions to disable and create indexes on partitioned tables. It includes the following topics:

  • Section 3.8.3.2.1, "How to Create the Index Action to Disable the Local Index Parameter"

  • Section 3.8.3.2.2, "How to Create the Index Action to Enable the Local Index Parameter"

  • Section 3.8.3.2.3, "How to Create the Index Action to Enable the Local Sub-Partitioned Index Parameter"

  • Section 3.8.3.2.4, "How to Create the Index Action to Create the Local Bitmap Index Parameter"

  • Section 3.8.3.2.5, "How to Create the Index Action to Create the Local B-Tree Index Parameter"

  • Section 3.8.3.2.6, "How to Create the Index Action to Create the Global Unique Index Parameter"

3.8.3.2.1 How to Create the Index Action to Disable the Local Index Parameter

This index action disables local indexes. This example uses the year partition range. If you use quarterly or monthly partition ranges, substitute the correct name for the action and in the SQL, for example, PREVIOUS_MONTH_WID/CURRENT_MONTH_WID or PREVIOUS_QTR_WID/CURRENT_QTR_WID. You must define separate actions for each range you use.

  1. Log into DAC.

    For instructions on logging into DAC, see Section A.1, "How to Log into DAC."

  2. Go to the Design view, and select the appropriate custom container from the drop-down list.

  3. From the Menu bar, select Tools, then Seed Data, then Actions, then Index Actions.

  4. In the Index Actions dialog box, click New.

    A new record field appears at the top of the list of actions.

  5. In the name field, enter "Year Partitioning: Disable Local Index."

  6. Click Save.

  7. Double-click in the Value field to open the Value dialog box.

    The Value dialog box appears.

  8. Define the SQL script:

    1. Click Add.

      A new record field appears at the top of the list of SQL blocks.

    2. Enter the following information in the new record field:

      Field                     Description
      Name                      Enter: Disable PREVIOUS_YEAR_WID Local Indexes
      Type                      Select: SQL
      Database Connection       Select: Target
      Valid Database Platforms  Double-click in the field to open the Supported Database Types dialog box, and select the appropriate database type.

    3. In the text box on the lower-right side of the window, enter the following SQL:

      alter index getIndexName() modify partition PART_@DAC_$$PREVIOUS_YEAR_WID unusable

      Note: Do not use a semicolon (;) at the end of the SQL in the text area.

    4. Click Save.

  9. Define the SQL script:

    1. Click Add.

      A new record field appears at the top of the list of SQL blocks.

    2. Enter the following information in the new record field:

      Field                     Description
      Name                      Enter: Disable CURRENT_YEAR_WID Local Indexes
      Type                      Select: SQL
      Database Connection       Select: Target
      Valid Database Platforms  Double-click in the field to open the Supported Database Types dialog box, and select the appropriate database type.

    3. In the text box on the lower-right side of the window, enter the following SQL:

      alter index getIndexName() modify partition PART_@DAC_$$CURRENT_YEAR_WID unusable

    4. Click Save.

3.8.3.2.2 How to Create the Index Action to Enable the Local Index Parameter

This index action enables local indexes. This example uses the year partition range. If you use quarterly or monthly partition ranges, substitute the correct name for the action and in the SQL, for example, PREVIOUS_MONTH_WID/CURRENT_MONTH_WID or PREVIOUS_QTR_WID/CURRENT_QTR_WID. You must define separate actions for each range you use.

  1. Log into DAC.

    For instructions on logging into DAC, see Section A.1, "How to Log into DAC."

  2. Go to the Design view, and select the appropriate custom container from the drop-down list.

  3. From the Menu bar, select Tools, then Seed Data, then Actions, then Index Actions.

  4. In the Index Actions dialog box, click New.

    A new record field appears at the top of the list of actions.

  5. In the name field, enter "Year Partitioning: Enable Local Index."

  6. Click Save.

  7. Double-click in the Value field to open the Value dialog box.

    The Value dialog box appears.

  8. Define the SQL script:

    1. Click Add.

      A new record field appears at the top of the list of SQL blocks.

    2. Enter the following information in the new record field:

      Field                     Description
      Name                      Enter: Enable PREVIOUS_YEAR_WID Local Indexes
      Type                      Select: SQL
      Database Connection       Select: Target
      Valid Database Platforms  Double-click in the field to open the Supported Database Types dialog box, and select the appropriate database type.

    3. In the text box on the lower-right side of the window, enter the following SQL:

      alter index getIndexName() rebuild partition PART_@DAC_$$PREVIOUS_YEAR_WID nologging

      Note: Do not use a semicolon (;) at the end of the SQL in the text area.

    4. Click Save.

  9. Define the SQL script:

    1. Click Add.

      A new record field appears at the top of the list of SQL blocks.

    2. Enter the following information in the new record field:

      Field                     Description
      Name                      Enter: Enable CURRENT_YEAR_WID Local Indexes
      Type                      Select: SQL
      Database Connection       Select: Target
      Valid Database Platforms  Double-click in the field to open the Supported Database Types dialog box, and select the appropriate database type.

    3. In the text box on the lower-right side of the window, enter the following SQL:

      alter index getIndexName() rebuild partition PART_@DAC_$$CURRENT_YEAR_WID nologging

      Note: Do not use a semicolon (;) at the end of the SQL in the text area.

    4. Click Save.

3.8.3.2.3 How to Create the Index Action to Enable the Local Sub-Partitioned Index Parameter

This index action is for composite partitioning only. This example uses the year partition range. If you use quarterly or monthly partition ranges, substitute the correct name for the action and in the SQL.

  1. Log into DAC.

    For instructions on logging into DAC, see Section A.1, "How to Log into DAC."

  2. Go to the Design view, and select the appropriate custom container from the drop-down list.

  3. From the Menu bar, select Tools, then Seed Data, then Actions, then Index Actions.

  4. In the Index Actions dialog box, click New.

    A new record field appears at the top of the list of actions.

  5. In the name field, enter "Year Partitioning: Enable Local Index."

  6. Click Save.

  7. Double-click in the Value field to open the Value dialog box.

    The Value dialog box appears.

  8. Define the SQL script:

    1. Click Add.

      A new record field appears at the top of the list of SQL blocks.

    2. Enter the following information in the new record field:

      Field                     Description
      Name                      Enter: Enable Local Sub-partitioned Index
      Type                      Select: Stored Procedure
      Database Connection       Select: Target
      Valid Database Platforms  Double-click in the field to open the Supported Database Types dialog box, and select the appropriate database type.

    3. In the text box on the lower-right side of the window, enter the following SQL:

      DECLARE
           CURSOR C1 IS
           SELECT DISTINCT SUBPARTITION_NAME
           FROM USER_IND_SUBPARTITIONS
           WHERE INDEX_NAME='getIndexName()' AND STATUS = 'UNUSABLE';
      BEGIN
         FOR REC IN C1 LOOP
           EXECUTE IMMEDIATE 'alter index getIndexName() rebuild subpartition
      '||REC.SUBPARTITION_NAME||'';
         END LOOP;
      END
      

      Note: Do not use a semicolon (;) at the end of the SQL in the text area.

    4. Click Save.

3.8.3.2.4 How to Create the Index Action to Create the Local Bitmap Index Parameter

This example uses the year partition range. If you use quarterly or monthly partition ranges, substitute the correct name for the action and in the SQL.

  1. Log into DAC.

    For instructions on logging into DAC, see Section A.1, "How to Log into DAC."

  2. Go to the Design view, and select the appropriate custom container from the drop-down list.

  3. From the Menu bar, select Tools, then Seed Data, then Actions, then Index Actions.

  4. In the Index Actions dialog box, click New.

    A new record field appears at the top of the list of actions.

  5. In the name field, enter "Year Partitioning: Create Local Bitmap Index."

  6. Click Save.

  7. Double-click in the Value field to open the Value dialog box.

    The Value dialog box appears.

  8. Define the SQL script:

    1. Click Add.

      A new record field appears at the top of the list of SQL blocks.

    2. Enter the following information in the new record field:

      Field                     Description
      Name                      Enter: Create Local Bitmap Indexes
      Type                      Select: SQL
      Database Connection       Select: Target
      Valid Database Platforms  Double-click in the field to open the Supported Database Types dialog box, and select the appropriate database type.

    3. In the text box on the lower-right side of the window, enter the following SQL:

      Create bitmap index getIndexName() on getTableName() (getUniqueColumns()) tablespace getTableSpace() local parallel nologging

      Note: Do not use a semicolon (;) at the end of the SQL in the text area.

    4. Click Save.

3.8.3.2.5 How to Create the Index Action to Create the Local B-Tree Index Parameter

This example uses the year partition range. If you use quarterly or monthly partition ranges, substitute the correct name for the action and in the SQL.

  1. Log into DAC.

    For instructions on logging into DAC, see Section A.1, "How to Log into DAC."

  2. Go to the Design view, and select the appropriate custom container from the drop-down list.

  3. From the Menu bar, select Tools, then Seed Data, then Actions, then Index Actions.

  4. In the Index Actions dialog box, click New.

    A new record field appears at the top of the list of actions.

  5. In the name field, enter "Year Partitioning: Create Local B-Tree Index."

  6. Click Save.

  7. Double-click in the Value field to open the Value dialog box.

    The Value dialog box appears.

  8. Define the SQL script:

    1. Click Add.

      A new record field appears at the top of the list of SQL blocks.

    2. Enter the following information in the new record field:

      Field                     Description
      Name                      Enter: Create Local B-Tree Index
      Type                      Select: SQL
      Database Connection       Select: Target
      Valid Database Platforms  Double-click in the field to open the Supported Database Types dialog box, and select the appropriate database type.

    3. In the text box on the lower-right side of the window, enter the following SQL:

      Create index getIndexName() on getTableName() (getUniqueColumns()) tablespace getTableSpace() local parallel nologging

      Note: Do not use a semicolon (;) at the end of the SQL in the text area.

    4. Click Save.

3.8.3.2.6 How to Create the Index Action to Create the Global Unique Index Parameter

This example uses the year partition range. If you use quarterly or monthly partition ranges, substitute the correct name for the action and in the SQL.

  1. Log into DAC.

    For instructions on logging into DAC, see Section A.1, "How to Log into DAC."

  2. Go to the Design view, and select the appropriate custom container from the drop-down list.

  3. From the Menu bar, select Tools, then Seed Data, then Actions, then Index Actions.

  4. In the Index Actions dialog box, click New.

    A new record field appears at the top of the list of actions.

  5. In the name field, enter "Year Partitioning: Create Global Unique Index."

  6. Click Save.

  7. Double-click in the Value field to open the Value dialog box.

    The Value dialog box appears.

  8. Define the SQL script:

    1. Click Add.

      A new record field appears at the top of the list of SQL blocks.

    2. Enter the following information in the new record field:

      Field                     Description
      Name                      Enter: Create Global Unique Index
      Type                      Select: SQL
      Database Connection       Select: Target
      Valid Database Platforms  Double-click in the field to open the Supported Database Types dialog box, and select the appropriate database type.

    3. In the text box on the lower-right side of the window, enter the following SQL:

      Create index getIndexName() on getTableName() (getUniqueColumns()) tablespace getTableSpace() global parallel nologging

      Note: Do not use a semicolon (;) at the end of the SQL in the text area.

    4. Click Save.

3.8.3.3 Creating Table Actions in DAC

This section provides instructions for creating table actions to gather statistics on partitioned tables. It includes the following topic:

3.8.3.3.1 How to Create a Table Action to Gather Statistics on a Partitioned Table

This example uses the year partition range. If you use quarterly or monthly partition ranges, substitute the correct name for the action and in the SQL.

  1. Log into DAC.

    For instructions on logging into DAC, see Section A.1, "How to Log into DAC."

  2. Go to the Design view, and select the appropriate custom container from the drop-down list.

  3. From the Menu bar, select Tools, then Seed Data, then Actions, then Table Actions.

  4. In the Table Actions dialog box, click New.

    A new record field appears at the top of the list of actions.

  5. In the name field, enter "Year Partitioning: Gather Partition Stats."

  6. Click Save.

  7. Double-click in the Value field to open the Value dialog box.

    The Value dialog box appears.

  8. Define the SQL script:

    1. Click Add.

      A new record field appears at the top of the list of SQL blocks.

    2. Enter the following information in the new record field:

      Field                     Description
      Name                      Enter: Gather Partition Stats
      Type                      Select: Stored Procedure
      Database Connection       Select: Target
      Valid Database Platforms  Double-click in the field to open the Supported Database Types dialog box, and select the appropriate database type.

    3. In the text box on the lower-right side of the window, enter the following SQL:

      DECLARE
           CURSOR C1 IS
           SELECT DISTINCT UIP.PARTITION_NAME
           FROM USER_IND_PARTITIONS UIP, USER_PART_INDEXES UPI
           WHERE UPI.TABLE_NAME = 'getTableName()'
                AND UIP.INDEX_NAME=UPI.INDEX_NAME
                AND UIP.STATUS = 'USABLE'
                AND UIP.PARTITION_NAME IN
      ('PART_@DAC_$$CURRENT_YEAR_WID','PART_@DAC_$$PREVIOUS_YEAR_WID');
      BEGIN
         FOR REC IN C1 LOOP
      DBMS_STATS.GATHER_TABLE_STATS(
                     NULL,
                     TABNAME => 'getTableName()',
                     CASCADE => TRUE,
                     PARTNAME => REC.PARTITION_NAME,
                  ESTIMATE_PERCENT => DBMS_STATS.AUTO_SAMPLE_SIZE,
                  GRANULARITY => 'PARTITION',
                  METHOD_OPT => 'FOR ALL INDEXED COLUMNS SIZE AUTO',
                  DEGREE => DBMS_STATS.DEFAULT_DEGREE);
         END LOOP;
      END
      
      

      Note: Do not use a semicolon (;) at the end of the SQL in the text area.

    4. Click Save.

3.8.3.3.2 How to Create a Table Action to Gather Statistics on a Partitioned Table for Composite Partitioning

If you use quarterly or monthly partition ranges, substitute the correct name for the action and in the SQL.

Note: Do not change the Drop and Create Always or Drop and Create Always Bitmap properties for the modified indexes. If you deselect these check boxes, DAC will skip the index actions that are defined.

  1. Log into DAC.

    For instructions on logging into DAC, see Section A.1, "How to Log into DAC."

  2. Go to the Design view, and select the appropriate custom container from the drop-down list.

  3. From the Menu bar, select Tools, then Seed Data, then Actions, then Table Actions.

  4. In the Table Actions dialog box, click New.

    A new record field appears at the top of the list of actions.

  5. In the name field, enter "Quarter Composite Partitioning: Gather Partition Stats."

  6. Click Save.

  7. Double-click in the Value field to open the Value dialog box.

    The Value dialog box appears.

  8. Define the SQL script:

    1. Click Add.

      A new record field appears at the top of the list of SQL blocks.

    2. Enter the following information in the new record field:

      Name: Enter Gather Partition Stats.
      Type: Select Stored Procedure.
      Database Connection: Select Target.
      Valid Database Platforms: Double-click in the field to open the Supported Database Types dialog box, and select the appropriate database type.

    3. In the text box on the lower-right side of the window, enter the following SQL:

      DECLARE
         CURSOR C1 IS
         SELECT DISTINCT UIP.PARTITION_NAME
         FROM USER_IND_PARTITIONS UIP, USER_PART_INDEXES UPI
         WHERE UPI.TABLE_NAME = 'getTableName()'
            AND UIP.INDEX_NAME = UPI.INDEX_NAME
            AND UIP.STATUS = 'USABLE'
            AND UIP.PARTITION_NAME IN
            ('PART_@DAC_$$CURRENT_QTR_WID','PART_@DAC_$$PREVIOUS_QTR_WID');
      BEGIN
         FOR REC IN C1 LOOP
            DBMS_STATS.GATHER_TABLE_STATS(
               NULL,
               TABNAME => 'getTableName()',
               CASCADE => TRUE,
               PARTNAME => REC.PARTITION_NAME,
               ESTIMATE_PERCENT => DBMS_STATS.AUTO_SAMPLE_SIZE,
               GRANULARITY => 'PARTITION',
               METHOD_OPT => 'FOR ALL INDEXED COLUMNS SIZE AUTO',
               DEGREE => DBMS_STATS.DEFAULT_DEGREE);
         END LOOP;
      END
      

      Note: Do not use a semicolon (;) at the end of the SQL in the text area.

    4. Click Save.

3.8.3.4 Assigning Index Actions to Indexes in DAC

After you create index actions in DAC, you need to assign them to specific indexes. Do this for each of the following index actions that you created in the previous procedure:

  • Disable Local Indexes

  • Enable Local Indexes

  • Create Local Bitmap Index

  • Create Local B-Tree Index

  • Create Global Unique Index

To assign index actions to indexes

  1. In the DAC Design view, click the Index tab.

  2. Query for the appropriate indexes on the partitioned tables, based on the index action you want to assign.

    Note: Do not include global indexes in the query. Global indexes must not have any assigned index action tasks.

  3. Right-click on the list of query results, and select Add Actions.

    The Add Actions dialog box opens.

  4. In the Action Type field, select the appropriate Action Type.

  5. In the Load Type field:

    Select Incremental for the Disable Local Indexes and Enable Local Indexes actions.

    Select Initial for the Create Local Bitmap Index, Create Local B-Tree Index, and Create Global Unique Index actions.

  6. In the Action field, double-click to open the Choose Action dialog.

  7. Select the appropriate action name.

  8. Click OK to close the Add Actions dialog box.

3.8.3.5 Assigning Table Actions to Tables in DAC

After you create table actions in DAC, you need to assign them to specific tables. Do this for each of the table actions that you created in the previous procedure.

To assign table actions to tables

  1. In the DAC Design view, click the Tables tab.

  2. Query for the appropriate table, based on the table action you want to assign.

  3. Right-click on the list of query results, and select Add Actions.

    The Add Actions dialog box opens.

  4. In the Action Type field, select Analyze Table.

  5. In the Load Type field, select Incremental.

  6. In the Action field, double-click to open the Choose Action dialog.

  7. Select the appropriate action name, for example, <Range> Partitioning: Gather Partition Stats.

  8. Click OK to close the Add Actions dialog box.

Note: Make sure you use the appropriate composite partitioning table action for composite range-range tables.

3.9 Miscellaneous Information About Oracle BI Applications Deployments

This section contains the following topics:

3.9.1 Preconfigured Mapping for the SA System Subject Area

For Oracle's Siebel Applications customers, Table 3-5 describes the preconfigured mappings for the SA System subject area. Fields that are not available in Oracle's Siebel transactional database default to the values shown in the table.

  • Overriding Defaults. You can add user-specific values for these fields by creating an extension table to the S_USER table to store the user-specific defaults. Additionally, you can change any of the default values. The metadata for the following logical table can be modified to include any physical extension table.

    SA User.(User)
    

    For instructions, refer to the documentation about configuring tables and columns for Oracle's Siebel Business Applications.

  • Setting Provider Information. Typically, the cell phone and pager numbers in the Oracle Business Analytics Warehouse do not contain a provider name. Therefore, the Pager value will typically be numeric, such as 555-483-3843. To append a provider to this address, use the following guidelines:

    • If the entire company has the same provider, then you can append the provider in the column mappings, as shown in the sketch after this list.

    • If users can have different providers, you need to create an extension table. For instructions, refer to the documentation about configuring tables and columns for Oracle's Siebel business applications.
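
    For illustration only, assuming every user shared a single hypothetical provider gateway domain of paging.example.com, the Pager column expression could be changed from '' to something like:

    S_CONTACT.PAGER_PH_NUM || '@paging.example.com'

    Substitute the actual SMTP gateway domain that your provider uses.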

Table 3-5 Preconfigured Mappings for the User Table in the SA System Subject Area

Logical Column: Cell Phone
Physical Table: (none)
Expression: ''
Comments: It might be mapped to S_CONTACT.CELL_PH_NUM if this field contains an SMTP address.

Logical Column: Cell Phone Priority
Physical Table: (none)
Expression: ''
Comments: Defaults to N.

Logical Column: Display Name
Physical Table: S_CONTACT
Expression: "Real Time OLTP"."".SIEBEL.S_CONTACT_User.FST_NAME || ' ' || "Real Time OLTP"."".SIEBEL.S_CONTACT_User.LAST_NAME
Comments: First Name concatenated with Last Name.

Logical Column: Email
Physical Table: S_CONTACT
Expression: EMAIL_ADDR

Logical Column: Email Priority
Physical Table: (none)
Expression: 'HNL'
Comments: Defaults to N.

Logical Column: Email Type
Physical Table: (none)
Expression: 'html'
Comments: Defaults to HTML.

Logical Column: Group Name
Physical Table: S_RESP
Expression: NAME

Logical Column: Handheld
Physical Table: (none)
Expression: ''
Comments: Defaults to an empty string.

Logical Column: Handheld Priority
Physical Table: (none)
Expression: ''
Comments: Defaults to an empty string.

Logical Column: Language
Physical Table: (none)
Expression: 'en'
Comments: Defaults to 'en'.

Logical Column: Locale
Physical Table: (none)
Expression: 'en'
Comments: Defaults to 'en'.

Logical Column: Logon
Physical Table: S_USER
Expression: LOGIN

Logical Column: Pager
Physical Table: (none)
Expression: ''
Comments: It could be mapped to S_CONTACT.PAGER_PH_NUM if this field contains an SMTP address.

Logical Column: Pager Priority
Physical Table: (none)
Expression: ''
Comments: Defaults to N.

Logical Column: Time Zone
Physical Table: S_TIMEZONE
Expression: NAME

3.9.2 Using Initialization Blocks

Table 3-6 lists some of the initialization blocks common to all Oracle BI Applications and their purposes. Initialization blocks that are specific to each Oracle BI Applications area are not listed here.

To view the initialization blocks provided with Oracle BI Applications, open the Variable Manager in the Oracle Business Intelligence Enterprise Edition Administration Tool. For instructions, see Oracle Business Intelligence Server Administration Guide.

Table 3-6 Initialization Blocks and Their Purposes

Authorization: Calculates the user's responsibilities from the database.

Authentication: Authenticates against the database and verifies that the user exists as an operational application user.

External Metadata Strings: Gets the translated values of metadata strings for the user's locale. This initialization block is critical to Intelligence Dashboards in international deployments.

LOCALE: Sets the user's locale specification in the Oracle BI Server.

Login Properties: Gets the user's login properties, such as full name, from the database. This initialization block also sets the logging level of the users. By default, the log level is 0 for all users. If you want to generate Oracle BI query logs for all users, update this initialization block by changing the default value as well as the value in the initialization SQL.

Default System Language ID: Sets the variable OLTP_LANG_ID by querying the operational application database.

Organizations for Org-based Security: Queries the operational application database to get the organization membership for each user, and sets the variable ORGANIZATION.

Primary Owner ID: Sets the primary owner ID based on the user login ID.

Primary Position ID: Queries the operational application database to set the variable PRIMARY_POSTN_ID.

Warehouse Refresh Date: Sets several time-based variables, such as CURRENT_YEAR.

ETL Run Date: Retrieves the ETL run date.

ETL Default Currency: Retrieves the default currency.


3.9.3 Creating Custom Indexes in Siebel Source Databases for Incremental Load Performance

To maximize performance for Siebel CRM, you can implement indexes using SQL files that are available in the ORACLE_HOME\biapps\dwrep directory. Table 3-7 describes the SQL files that are appropriate for the specified applications.

Table 3-7 SQL Files for Siebel Transactional Databases

Horizontal Application: PerfIndex_Horizontal.sql
Industry Application: PerfIndex_Industry.sql


The SQL files generate indexes on all S_.* tables that are being used by the preconfigured applications.
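
For example (the connect string and schema owner are placeholders for your own environment), you could run the horizontal application script from SQL*Plus while connected to the Siebel transactional database:

sqlplus <siebel_schema_owner>@<oltp_connect_string> @ORACLE_HOME\biapps\dwrep\PerfIndex_Horizontal.sql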

Note:

If you move from a test to a production environment, you need to drop the indexes and re-create them in the production environment.

3.9.3.1 An Example of Change Capture SQL and Required Indexes

The change capture process generates SQL of the following form:

INSERT INTO S_ETL_I_IMG_XX
(ROW_ID, LAST_UPD, MODIFICATION_NUM)
SELECT ROW_ID, LAST_UPD, MODIFICATION_NUM
FROM S_XXX
WHERE LAST_UPD > 'LAST_REFRESH_DATE - PRUNE DAYS'
AND NOT EXISTS
(
SELECT 'X' FROM S_ETL_R_IMG_XX
WHERE S_ETL_R_IMG_XX.ROW_ID = S_XXX.ROW_ID AND
S_ETL_R_IMG_XX.MODIFICATION_NUM = S_XXX.MODIFICATION_NUM AND
S_ETL_R_IMG_XX.LAST_UPD = S_XXX.LAST_UPD
)

Table 3-8 shows the indexes that, based on the SQL above, are created on the S_CONTACT table by the SQL generation scripts.

Table 3-8 Indexes Created on the S_CONTACT Table in the Siebel Transactional Database

S_CONTACT_W1: LAST_UPD, ROW_ID, MODIFICATION_NUM
S_CONTACT_W11: LAST_UPD
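
Expressed as DDL, these indexes correspond to statements such as the following (a sketch only; storage and tablespace clauses are omitted):

CREATE INDEX S_CONTACT_W1 ON S_CONTACT (LAST_UPD, ROW_ID, MODIFICATION_NUM);
CREATE INDEX S_CONTACT_W11 ON S_CONTACT (LAST_UPD);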


3.9.4 Creating Custom Indexes in Oracle EBS Source Databases for Incremental Load Performance

Oracle EBS source database tables contain mandatory LAST_UPDATE_DATE columns, which are used by Oracle BI Applications for capturing incremental data changes. Some Oracle EBS source tables used by Oracle BI Applications do not have an index on the LAST_UPDATE_DATE column because the presence of the index can impede performance of the source application.

There are three categories of Oracle EBS tables as they relate to indexes on the LAST_UPDATE_DATE column:

  • Category 1: Tables that do not have indexes on the LAST_UPDATE_DATE column but on which indexes can be created without impeding performance.

  • Category 2: Tables that have indexes on LAST_UPDATE_DATE columns. These indexes were introduced in Oracle EBS release 12.

  • Category 3: Tables that cannot have indexes on the LAST_UPDATE_DATE column because performance will be impeded in the Oracle EBS environment.

3.9.4.1 Creating Indexes for Category 1 Tables

The following DDL script creates custom indexes on the LAST_UPDATE_DATE column for Category 1 tables, that is, tables in all Oracle EBS releases that do not already have this index created and for which there are no known performance implications for creating such indexes.

You should run this DDL script if your source system is Oracle EBS release 11i or release 12 and you have experienced slow incremental extraction mapping performance while implementing specific subject areas.

Note: If your source system is Oracle EBS release 12 or Oracle EBS release 11.5.10, or if it is Oracle EBS release 11.5.9 or lower and has been migrated to the Oracle Applications Tablespace Model (OATM), then replace <IDX_TABLESPACE> with APPS_TS_TX_IDX.

The DDL script is as follows:

CREATE index AP.OBIEE_AP_INVOICE_PAYMENTS_ALL ON AP.AP_INVOICE_PAYMENTS_ALL(LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;

CREATE index AP.OBIEE_AP_PAYMENT_SCHEDULES_ALL ON AP.AP_PAYMENT_SCHEDULES_ALL(LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;

CREATE index AP.OBIEE_AP_INVOICES_ALL ON AP.AP_INVOICES_ALL(LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;

CREATE index GL.OBIEE_GL_JE_HEADERS ON GL.GL_JE_HEADERS (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;

CREATE index ONT.OBIEE_OE_ORDER_HEADERS_ALL ON ONT.OE_ORDER_HEADERS_ALL(LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;

CREATE index PER.OBIEE_PAY_INPUT_VALUES_F ON PER.PAY_INPUT_VALUES_F (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;

CREATE index PER.OBIEE_PAY_ELEMENT_TYPES_F ON PER.PAY_ELEMENT_TYPES_F (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;

CREATE index PO.OBIEE_RCV_SHIPMENT_LINES ON PO.RCV_SHIPMENT_LINES (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;

CREATE index PO.OBIEE_RCV_SHIPMENT_HEADERS ON PO.RCV_SHIPMENT_HEADERS (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;

CREATE index AR.OBIEE_AR_CASH_RECEIPTS_ALL ON AR.AR_CASH_RECEIPTS_ALL (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;

CREATE index WSH.OBIEE_WSH_DELIVERY_DETAILS ON WSH.WSH_DELIVERY_DETAILS (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;

CREATE index WSH.OBIEE_WSH_NEW_DELIVERIES ON WSH.WSH_NEW_DELIVERIES (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;

Note:

  • Make sure you use FND_STATS to compute statistics on the newly created indexes and to update statistics on the newly indexed table columns in the Oracle EBS database, as shown in the sketch after this list.

  • All indexes created with the DDL in this section have the prefix OBIEE_. This prefix does not follow standard Oracle EBS index naming conventions. Therefore, Autopatch may fail during future upgrades. In such cases, the indexes with the OBIEE_ prefix should be dropped and Autopatch restarted.
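
For example, statistics for one of the new indexes and its underlying table could be gathered from SQL*Plus as follows. This is a sketch only: it assumes you are connected as a user with access to the FND_STATS package (such as APPS) and leaves the remaining FND_STATS parameters at their defaults:

-- Refresh statistics on the newly indexed table and on the new custom index
EXEC FND_STATS.GATHER_TABLE_STATS(OWNNAME => 'AP', TABNAME => 'AP_INVOICES_ALL');
EXEC FND_STATS.GATHER_INDEX_STATS(OWNNAME => 'AP', INDNAME => 'OBIEE_AP_INVOICES_ALL');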

3.9.4.2 Creating Indexes for Category 2 Tables

The following DDL script creates custom indexes on the LAST_UPDATE_DATE column for Category 2 tables, that is, tables for which indexes on the LAST_UPDATE_DATE column were introduced in Oracle EBS release 12.

You should run this DDL script if your source system is Oracle EBS release 11i.

Note: If your source system is Oracle EBS release 11.5.10, or if it is Oracle EBS release 11.5.9 or lower and has been migrated to the Oracle Applications Tablespace Model (OATM), then replace <IDX_TABLESPACE> with APPS_TS_TX_IDX.

The DDL script is as follows:

CREATE index PO.RCV_TRANSACTIONS_N23 ON PO.RCV_TRANSACTIONS (LAST_UPDATE_DATE) STORAGE (INITIAL 4K NEXT 2M MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0) INITRANS 2 MAXTRANS 255 PCTFREE 10 tablespace <IDX_TABLESPACE>;

CREATE index PO.PO_DISTRIBUTIONS_N13 ON PO.PO_DISTRIBUTIONS_ALL (LAST_UPDATE_DATE) STORAGE (INITIAL 4K NEXT 2M MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0) INITRANS 2 MAXTRANS 255 PCTFREE 10 tablespace <IDX_TABLESPACE>;

CREATE index PO.PO_LINE_LOCATIONS_N11 ON PO.PO_LINE_LOCATIONS_ALL (LAST_UPDATE_DATE) STORAGE (INITIAL 4K NEXT 2M MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0) INITRANS 2 MAXTRANS 255 PCTFREE 10 tablespace <IDX_TABLESPACE>;

CREATE index PO.PO_LINES_N10 ON PO.PO_LINES_ALL (LAST_UPDATE_DATE) STORAGE (INITIAL 4K NEXT 4K MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0) INITRANS 2 MAXTRANS 255 PCTFREE 10 tablespace <IDX_TABLESPACE>;

CREATE index PO.PO_REQ_DISTRIBUTIONS_N6 ON PO.PO_REQ_DISTRIBUTIONS_ALL (LAST_UPDATE_DATE) STORAGE (INITIAL 4K NEXT 250K MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0) INITRANS 4 MAXTRANS 255 PCTFREE 10 tablespace <IDX_TABLESPACE>;

CREATE index PO.PO_REQUISITION_LINES_N17 ON PO.PO_REQUISITION_LINES_ALL (LAST_UPDATE_DATE) STORAGE (INITIAL 4K NEXT 250K MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0) INITRANS 4 MAXTRANS 255 PCTFREE 10 tablespace <IDX_TABLESPACE>;

CREATE index PO.PO_HEADERS_N9 ON PO.PO_HEADERS_ALL (LAST_UPDATE_DATE) STORAGE (INITIAL 4K NEXT 1M MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0) INITRANS 2 MAXTRANS 255 PCTFREE 10 tablespace <IDX_TABLESPACE>;

CREATE index PO.PO_REQUISITION_HEADERS_N6 ON PO.PO_REQUISITION_HEADERS_ALL (LAST_UPDATE_DATE) STORAGE (INITIAL 4K NEXT 250K MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0) INITRANS 4 MAXTRANS 255 PCTFREE 10 tablespace <IDX_TABLESPACE>;

CREATE index AR.RA_CUSTOMER_TRX_N14 ON AR.RA_CUSTOMER_TRX_ALL (LAST_UPDATE_DATE) STORAGE (INITIAL 4K NEXT 4M MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0) INITRANS 4 MAXTRANS 255 PCTFREE 10 tablespace <IDX_TABLESPACE>;

Note: Make sure you use FND_STATS to compute statistics on the newly created indexes and update statistics on newly indexed table columns in the Oracle EBS database.

3.9.5 Running an ETL with a Non-English OLTP Data Source

To run an ETL with a non-English OLTP data source, you must make a copy of an appropriate source system container, and configure the language, country, and continent parameters.

To run an ETL with a non-English OLTP data source

  1. In DAC, select File, then New Source System to display the New Source System Container dialog box.

  2. Select the Create as a Copy of Existing Container radio button.

  3. From the Existing Containers drop-down list, select the container that you want to copy, then click OK.

  4. Go to the Design view.

  5. Ensure that you have selected the correct container from the Containers drop-down list.

  6. Select the Source System Parameters tab.

  7. Use the Edit tab below the list of Source System Parameters to change the value of the following parameters in the list:

    • $$DFLT_LANG (Default Language) – For example, for a Japanese data source, change this value to JPN.

    • (optional) $$DFLT_COUNTRY (Default Country)

    • (optional) $$DFLT_CONTINENT (Default Continent)

    Tip: To find out the value to specify for the $$DFLT_LANG parameter, issue the query 'select VAL from S_SYS_PREF where SYS_PREF_CD=<ETL value>' against the OLTP database. For example, to find out the default ETL language, issue the following command:

    select VAL from S_SYS_PREF where SYS_PREF_CD='ETL Default Language';

  8. Create a new ETL plan for the new source system container, and edit its parameters, as follows:

    1. Click the Execute tab.

    2. Click the Execution Plans subtab.

    3. Click New to create a new blank execution plan, and use the subtabs below (for example, Subject Areas, Parameters, Ordered Tasks) to specify the execution plan details.

    4. Click Save.

  9. Click Run Now to run the new ETL plan (or use the Schedule tab to specify when you want to run the new execution plan).