Oracle® Fusion Middleware Configuration Guide for Oracle Business Intelligence Applications
11g Release (11.1.1)
Part Number E16814-02
This section provides information about preparing to install and deploy Oracle BI Applications. Review this information before you begin the installation and deployment process. You should also read the guidelines for setting up the Oracle Business Analytics Warehouse and the database guidelines for the source OLTP databases that you are using.
This section contains the following topics:
For information about database-specific settings, see Section 1.2, "System Requirements and Certification".
The Oracle Business Analytics Warehouse is a database that contains dimensional schemas. Although it is technically possible to put the Oracle Business Analytics Warehouse in the same database as the transactional database, for performance reasons it is not recommended. The transactional database is structured as an online transaction processing (OLTP) database, whereas the Oracle Business Analytics Warehouse is structured as an online analytical processing (OLAP) database, each optimized for its own purpose. Reasons for not combining the two databases include:
The analytical queries interfere with normal use of the transactional database, which is entering and managing individual transactions.
The data in a transactional database is normalized for update efficiency. Analytical queries against it must join several normalized tables and will be slow (as opposed to queries against pre-joined, de-normalized analytical tables).
Historical data cannot be purged from a transactional database, even when it is not required for current transaction processing, because you need it for analysis. (By contrast, the analytical database is the warehouse for historical as well as current data.) Retaining this data slows the transactional database further.
Transactional databases are tuned for one specific application, and it is not productive to use these separate transactional databases for analytical queries that usually span more than one functional application.
The analytical database can be specifically tuned for the analytical queries and Extract-Transform-Load (ETL) processing. Analytical database requirements are different from transactional database requirements.
The Informatica Repository stores all of the Informatica object definitions for the ETL mappings that populate the Oracle Business Analytics Warehouse. It is a series of repository tables that are stored in a database, which can be a transactional, analytical, or separate database.
To configure the Oracle Business Analytics Warehouse on Oracle databases more easily, refer to the parameter template file init11gR2_FusionApplications_template.ora. This file is located in \biapps\etl under the Oracle Home for BI.
The parameter template file provides parameter guidelines based on the cost-based optimizer for Oracle 11g, and the guidelines will help you set up the data warehouse physical database for performance and growth. Use these guidelines as a starting point. You will need to make changes based on your specific database sizes, data shape, server size (CPU and memory), and type of storage.
Copy the appropriate template file into your $ORACLE_HOME/dbs directory. Then, review the recommendations in the template file, and make the changes based on your specific database configuration. The database administrator should make changes to the settings based on performance monitoring and tuning considerations.
Note: The NLS_LENGTH_SEMANTICS parameter enables you to define byte- or character-length semantics. Oracle BI Applications supports BYTE and CHAR values for this parameter. If you are using MLS characters, then you can add this parameter to the init11gR2_FusionApplications_template.ora file.
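For example, to switch the warehouse to character-length semantics, the parameter could be added to the template file as follows (a sketch; confirm the value against your character-set and multilingual requirements):

```
# Use character-length semantics so VARCHAR2(n) allocates n characters
# rather than n bytes (relevant for multi-byte character sets).
NLS_LENGTH_SEMANTICS = CHAR
```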
Oracle recommends that you use the following database parameters for provisioning:
Common
======
audit_trail                  NONE
plsql_code_type              NATIVE
nls_sort                     BINARY
open_cursors                 500
session_cached_cursors       500
_b_tree_bitmap_plans         FALSE
query_rewrite_integrity      TRUSTED
job_queue_processes          10
star_transformation_enabled  TRUE
parallel_max_servers         16

Sizeable Parameters
===================
processes                    2500
sga_target                   8GB
pga_aggregate_target         4GB

Optimizer Stats Gathering
=========================
Auto Stats Gathering Job     None
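If the database runs from an spfile rather than a pfile, individual parameters can also be applied with ALTER SYSTEM. This is a sketch showing two of the common parameters above; some parameters require a restart to take effect:

```sql
-- Apply two of the common provisioning parameters to the spfile.
-- SCOPE = SPFILE defers the change until the next instance restart.
ALTER SYSTEM SET open_cursors = 500 SCOPE = SPFILE;
ALTER SYSTEM SET star_transformation_enabled = TRUE SCOPE = SPFILE;
```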
At a minimum, separate the data and index tablespaces. Create more tablespaces to separate heavily used tables and their indexes.
Use the maximum block and page size available for tablespaces (for example, 32K), because it provides good overall performance and does not impose low limits on the maximum size to which the tablespace can grow, as compared to 4K, 8K, and 16K sizes.
If you are using multiple disk storage systems, stripe the tablespace containers and files across as many disks as possible.
Unbuffered (or 'raw') devices for tablespaces provide better performance in comparison to 'cooked' file systems (or files that are buffered through the operating system).
RAID-5 is known to give a good balance of performance and availability.
For Oracle databases, size the buffer pools based on content and size (number of tables and their sizes) of tablespaces.
Allocate about two-thirds of the total available server memory to the database, assuming that no other application is running on the same server.
Use the Oracle Automatic Storage Management (ASM) option for tablespaces, as it provides better performance and manageability than OS-based general-purpose file systems.
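The first two guidelines above (separate data and index tablespaces, 32K block size) can be sketched as follows. The tablespace names, file paths, and sizes are illustrative only, not values prescribed by this guide:

```sql
-- Sketch: separate data and index tablespaces with a 32K block size.
-- A non-default block size requires the matching buffer pool to be set,
-- e.g. DB_32K_CACHE_SIZE, before these statements will succeed.
CREATE TABLESPACE BIAPPS_DATA
  DATAFILE '/u01/oradata/olap/biapps_data01.dbf' SIZE 10G AUTOEXTEND ON
  BLOCKSIZE 32K;

CREATE TABLESPACE BIAPPS_INDEX
  DATAFILE '/u02/oradata/olap/biapps_idx01.dbf' SIZE 5G AUTOEXTEND ON
  BLOCKSIZE 32K;
```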
Oracle BI Applications on Oracle databases supports only binary sorting. If you are running an Oracle client, do one of the following:
Set the NLS_SORT parameter to BINARY.
Choose an NLS_LANG setting that includes binary.
These settings are required for adequate performance from the dedicated Web client.
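A quick way to confirm that binary sorting is in effect for a session is sketched below; the session-level statement is for verification and does not replace setting NLS_SORT or NLS_LANG at the client:

```sql
-- Force binary sorting for the current session and confirm the setting.
ALTER SESSION SET NLS_SORT = BINARY;

SELECT value
FROM   NLS_SESSION_PARAMETERS
WHERE  parameter = 'NLS_SORT';
```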
Make sure that cost-based optimization is enabled in the Oracle development, test, and production databases and that statistics are kept up to date. Otherwise, the rule-based optimizer may be used.
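Statistics can be kept current with the DBMS_STATS package. This is a sketch; the schema name is illustrative, and your DBA may prefer different sampling and parallelism options:

```sql
-- Gather up-to-date optimizer statistics for the warehouse schema so the
-- cost-based optimizer is used. CASCADE => TRUE also gathers index stats.
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname => 'OLAP',
    cascade => TRUE);
END;
/
```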
Create foreign keys in the Oracle database, but configure Oracle not to enforce the foreign key relationship. The existence of foreign keys allows Oracle to better optimize certain queries. Because enforcement is turned off, database loads are not negatively affected.
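A non-enforced foreign key can be declared with the RELY DISABLE NOVALIDATE clauses, as sketched below. The table and column names are illustrative, not actual warehouse objects from this guide:

```sql
-- Sketch: declare a foreign key without enforcing it. The optimizer can
-- still use the declared relationship (RELY), while DISABLE NOVALIDATE
-- means existing and newly loaded rows are never checked.
ALTER TABLE W_SALES_F
  ADD CONSTRAINT FK_SALES_CUSTOMER
  FOREIGN KEY (CUSTOMER_WID) REFERENCES W_CUSTOMER_D (ROW_WID)
  RELY DISABLE NOVALIDATE;
```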
Analyze the application for occurrences of highly skewed data that is indexed. Create histogram statistics for these indexed columns to enable the optimizer to better execute queries.
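Histograms on a skewed, indexed column can be gathered as sketched below; the schema, table, and column names are illustrative:

```sql
-- Sketch: build a histogram on a skewed indexed column so the optimizer
-- can estimate selectivity accurately. SIZE 254 requests the maximum
-- number of histogram buckets.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => 'OLAP',
    tabname    => 'W_SALES_F',
    method_opt => 'FOR COLUMNS STATUS_CODE SIZE 254');
END;
/
```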
To increase data throughput between Oracle BI Server and Oracle, change SDU and TDU settings in listener.ora. The default is 2 KB and can be increased to 8 KB.
On the server side, edit the listener.ora file. Under the particular SID_LIST entry, modify SID_DESC as follows:
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SDU = 16384)
      (TDU = 16384)
      (ORACLE_HOME = /.....)
      (SID_NAME = SOLAP)
    )
  )
Make sure the temporary tablespace has adequate space.
Set the number of log file groups to 4.
Set the size of each log file to 10 MB.
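The temporary-tablespace and redo-log guidelines above can be sketched as follows. The file paths, sizes, and group numbers are illustrative; check the existing groups in V$LOG before adding new ones:

```sql
-- Sketch: a temporary tablespace with adequate space, and additional
-- 10 MB redo log groups toward a total of four groups.
CREATE TEMPORARY TABLESPACE BIAPPS_TEMP
  TEMPFILE '/u01/oradata/olap/biapps_temp01.dbf' SIZE 4G AUTOEXTEND ON;

ALTER DATABASE ADD LOGFILE GROUP 3 ('/u01/oradata/olap/redo03.log') SIZE 10M;
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/olap/redo04.log') SIZE 10M;
```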
On the client side, edit the tnsnames.ora file. Modify the TNS alias by adding SDU= and TDU= as follows:
myhost_orcl.world =
  (DESCRIPTION =
    (SDU = 16384)
    (TDU = 16384)
    (ADDRESS = (PROTOCOL = TCP)(HOST = myhost)(PORT = 1521))
    (CONNECT_DATA = (SID = ORCL))
  )
Oracle Fusion Applications database tables contain mandatory LAST_UPDATE_DATE columns, which are used by Oracle BI Applications for capturing incremental data changes. Some Oracle Fusion Applications source tables used by Oracle BI Applications do not have an index on the LAST_UPDATE_DATE column because the presence of the index may impede performance of some application flows or processes for a small set of Fusion Applications implementations. Since LAST_UPDATE_DATE indexes on such tables are used by Oracle BI Applications only, customers should manually create them in their source Fusion Applications environments. Such customizations have been approved by Oracle Fusion Application Development.
To create the index, use the following syntax:
CREATE index AP.OBIEE_<TABLE_NAME> ON <TABLE_NAME>(LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
Note: Indexes created with the DDL in this section have the prefix OBIEE_. This prefix does not follow standard Fusion Applications index naming conventions. Therefore, Autopatch may fail during future upgrades. In such cases, the indexes with the OBIEE_ prefix should be dropped and Autopatch restarted. Use FND_STATS to compute statistics on the newly created indexes and update statistics on newly indexed table columns in the Oracle Fusion Applications database.
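Applying the syntax above to a concrete table might look like the following sketch. The table and tablespace names are illustrative, and the exact FND_STATS procedure and its signature should be confirmed for your release:

```sql
-- Sketch: create the OBIEE_-prefixed index on LAST_UPDATE_DATE for an
-- illustrative source table, then gather statistics on it with FND_STATS.
CREATE INDEX AP.OBIEE_AP_INVOICES_ALL
  ON AP_INVOICES_ALL (LAST_UPDATE_DATE)
  TABLESPACE APPS_TS_TX_IDX;

BEGIN
  FND_STATS.GATHER_INDEX_STATS(
    ownname => 'AP',
    indname => 'OBIEE_AP_INVOICES_ALL');
END;
/
```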
The Oracle Business Analytics Warehouse can be deployed in various code page environments and supports global deployments. Data movement in the following source database and data warehouse configuration modes is supported:
Unicode to Unicode - Note: This is the only supported configuration mode for Oracle Business Intelligence Applications 11g Release (11.1.1).
Code page (multi- or single-byte) to Unicode
Code page to code page (where the code pages are the same)
Oracle BI Applications uses Informatica PowerCenter to perform extract, transform, and load (ETL) routines that move data from one or more source databases to the Oracle Business Analytics Warehouse.
During the installation and configuration procedures described in this chapter, you will make various settings to enable accurate data movement. Use the guidelines and references noted below to determine values for these settings that are appropriate for your environment:
Determining the source to target configuration mode. Consult your database administrator to determine the code page your source OLTP database uses. Based on the type of data that will be moved from one or more source databases to the Oracle Business Analytics Warehouse, determine what code page you will need to use for the Oracle Business Analytics Warehouse database. When determining what code page to use for the Oracle Business Analytics Warehouse, consider future requirements for storing data.
For accurate data movement from source database to target, the code page of the Oracle Business Analytics Warehouse (target) must be a superset of the code page of the source database. Informatica considers a code page to be a superset of another code page when the code page contains all the characters encoded in the other code page and additional characters not encoded in the other code page.
Note: To enable data movement from source(s) to the Oracle Business Analytics Warehouse, you will set relaxed Code Page Validation for the Integration Services. You must ensure that the target code page is a superset of the source code page for accurate data movement.
Setting the SiebelUnicodeDB property. If your source to target configuration mode for data movement is Unicode to Unicode, then you will set a custom property called SiebelUnicodeDB on the Integration Services. Configuration modes of code page to Unicode or code page to code page do not require this property to be set.
Determining the Data Movement Mode. Before you deploy Oracle BI Applications, you must determine what data movement mode to use (ASCII or Unicode) for the PowerCenter Integration Service.
Note: Oracle BI Applications 11g Release (11.1.1) with Oracle Fusion Applications only supports 'Unicode to Unicode' as a Data Movement mode.
The Character Data Movement Mode is an Informatica PowerCenter Integration Service option that you choose based on whether you want to move single-byte or multi-byte data. Choose Unicode if non-ASCII characters have to be moved. Otherwise, choose ASCII. The Data Movement Mode option is configurable and can be reset after installation.
To set the Data Movement Mode, log into Informatica PowerCenter Administration Console, select the Integration Service, then click the Properties tab, then display the General Properties tab, and set the DataMovementMode value.
Determining code pages for Informatica PowerCenter components. In addition to source and target code pages, Informatica PowerCenter uses code pages for PowerCenter Client, the Integration Service, the Informatica Repository, and PowerCenter command line programs (pmcmd and pmrep, which are used by DAC to communicate with PowerCenter Services). Carefully review 'Chapter 22: Understanding Globalization,' in the Informatica PowerCenter Administrator Guide, particularly if your environment requires the Data Movement Mode to be set to UNICODE. The section discusses code page compatibility and code page requirements for the Informatica components.
Setting environment variables. You must manually set the appropriate environment variables for UNIX environments. In addition, the Informatica installer requires the appropriate locale to be set on UNIX machines. Use LANG, LC_CTYPE or LC_ALL to set the UNIX code page. For more information, see the topic titled 'Configuring Environment Variables,' in 'Chapter 2: Before You Install,' in the Informatica PowerCenter Installation Guide.
You also need to set the environment variable NLS_LANG.
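On UNIX, the locale and NLS_LANG settings described above might be set as in the following sketch. The values shown are illustrative for a UTF-8 environment; match NLS_LANG to your database character set as described in the Informatica PowerCenter Installation Guide:

```shell
# Sketch: UNIX locale and Oracle client settings for a UTF-8 environment.
# Values are illustrative; align them with your database character set.
export LANG=en_US.UTF-8
export LC_ALL=en_US.UTF-8
export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
```

These exports are typically placed in the profile of the user that runs the Informatica services so they apply to every session.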
Configuring Relational Connections. When you configure relational connections in the Workflow Manager, choose a code page that is compatible with the code page of the database client. If you set a database environment variable to specify the language for the database, then you must ensure the code page for the connection is compatible with the language set for the variable. For example, if you set the NLS_LANG environment variable for an Oracle database, ensure that the code page of the Oracle connection is identical to the value set in the NLS_LANG variable.