Oracle® Fusion Middleware Metadata Repository Builder's Guide for Oracle Business Intelligence Enterprise Edition
11g Release 1 (11.1.1)

Part Number E10540-06

F Exchanging Metadata with Databases to Enhance Query Performance

This appendix explains how to use Oracle Database or IBM DB2 to enhance the data warehouse performance and functionality of queries that run on the Oracle BI Server.

This appendix contains the following topics:

  • About Exchanging Metadata with Databases

  • Generating the Import File

  • Metadata Conversion Rules and Error Messages

  • Using Materialized Views in the Oracle Database with Oracle Business Intelligence

About Exchanging Metadata with Databases

By exchanging Oracle Business Intelligence metadata from the Oracle BI Server with your Oracle Database or IBM DB2 database, you enable the database to accelerate the performance of data warehouse queries.

You use the Oracle BI Server utility sametaexport to exchange the metadata. When you run this utility to generate cube views for DB2, the utility is called the DB2 Cube Views Generator. When you run this utility to generate metadata for Oracle Database, the utility is called the Oracle Database Metadata Generator.

The Oracle BI Server export utility works with the following tools:

  • SQL Access Advisor, for Oracle Database

  • IBM DB2 Cube Views, for IBM DB2

The sametaexport utility generates the information necessary for the SQL Access Advisor or IBM DB2 Cube Views tool to preaggregate the relational data and improve query performance.

Generating the Import File

Both the Oracle Database Metadata Generator and the DB2 Cube Views Generator create the files that are needed to import metadata from the Oracle BI Server into the SQL Access Advisor or an IBM DB2 database.

This section contains the following topics that are common to the two generators:

  • Running the Generator

  • About the Metadata Input File

  • About the Output Files

  • Troubleshooting Errors from the Generator

Running the Generator

The Oracle Database Metadata Generator and the DB2 Cube Views Generator are invoked from the command line or embedded in a batch file. The command-line executable is named sametaexport.

The sametaexport utility is available on both Windows and UNIX systems. However, you can only use sametaexport with binary repositories in RPD format.

Before running the utility, you must first run bi-init to launch a command prompt or shell window that is properly initialized. See "Running bi-init to Launch a Shell Window Initialized to Your Oracle Instance" for more information.

After you run bi-init, run sametaexport from the resulting command prompt with the desired options, as follows:

sametaexport -r "PathAndRepositoryFileName" [-p repository_password]
-f "InputFileNameAndPath" [options]
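For example, a hypothetical invocation that exports metadata with debug logging enabled might look as follows (the repository and input file paths are illustrative only):

```shell
sametaexport -r "C:\OBI\repository\SampleApp.rpd" -f "C:\OracleBI\inputfile.txt" -logging DEBUG
```

Because no -p argument is given, the utility prompts for the repository password.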

Table F-1 contains descriptions of the parameters in the command-line executable file.

Table F-1 Parameters for sametaexport

Parameter Definition Additional Information

-r

Repository file name and full path

Quotation marks are required for the file name and path only if the file path is in long format or has spaces. Use the full path if the file is not in the current directory.

-p

Repository password

The password for the given repository.

The password argument is optional. If you do not provide a password argument, you are prompted to enter a password when you run the command. To minimize the risk of security breaches, Oracle recommends that you do not provide a password argument either on the command line or in scripts. Note that the password argument is supported for backward compatibility only, and will be removed in a future release. For scripting purposes, you can pass the password through standard input.

-f

Input file name and full path

Quotation marks are required for the file name and path only if the file path is in long format or has spaces. Use the full path if the file is not in the current directory. You specify input files so that you do not have to type all the required information at the command line, and so that you can type international characters. See "About the Metadata Input File" for more information.


You can include some additional parameters in the input file or at the command line to change various defaults for the Oracle Database Metadata Generator and the DB2 Cube Views Generator. Parameters specified in the input file take precedence over parameters specified at the command line. You must include these parameters only if you want to change the default values.

Table F-2 and Table F-3 describe these optional parameters.

Table F-2 Optional Parameters and Defaults for the Oracle Database Metadata Generator

Parameter Definition Additional Information Input File Usage Example Command Line Usage Example

Use schema name from RPD

When set to YES, the table schema names are used as they are used in the repository. The default value is YES.

USE_SCHEMA_NAME_FROM_RPD = NO

-schemafromrpd NO

Default schema name

The default schema name is used as the table schema name if the value of -schemafromrpd is set to NO, or if the repository schema name cannot be determined. The default value is SIEBEL.

DEFAULT_SCHEMA_NAME = ORACLE

-defaultschema ORACLE

Oracle schema name

The metadata from Oracle Database Metadata Generator is created under this schema. The default value is SIEBEL.

ORA_DIM_SCHEMA_NAME = ORACLE

-orclschema ORACLE

Logging enabled

Indicates whether to keep a log of the metadata export process. Valid values are ON, OFF, and DEBUG. The default value is ON.

LOGGING = DEBUG

-logging DEBUG

Log file name

The path to the log file. If you provide an invalid path, an error occurs.

If you do not provide this parameter, the default log file path is used. The default path is:

ORACLE_INSTANCE\diagnostics\logs\
OracleBIServerComponent\
coreapplication_obisn\
OraDimExp.log

LOG_FILE_NAME = C:\bea_default\instances
\instance1\diagnostics\
logs\generator\logfile.log

-logfile C:\bea_default\instances\
instance1\diagnostics\
logs\generator\logfile.log


Table F-3 Optional Parameters and Defaults for the DB2 Cube Views Generator

Parameter Definition Additional Information Input File Usage Example Command Line Usage Example

Distinct count supported

When set to YES, allows measures containing the DISTINCT_COUNT aggregation to be exported. The recommended setting and default value is NO.

DISTINCT_COUNT_SUPPORTED = YES

-distinct YES

Statistical functions supported

When set to YES, allows measures containing the aggregation STDDEV to be exported. The recommended setting and default value is NO.

STATISTICAL_FUNCTIONS_SUPPORTED = YES

-stat YES

Use schema name

When set to YES, the Cube Views metadata attributes have columns from tables under a schema name, which are then specified in the parameters. When set to NO, the schema names for these tables are empty. The default value is YES.

USE_SCHEMA_NAME = NO

-useschema NO

Use schema name from RPD

When set to YES, the table schema names are used as they are used in the repository. The default value is YES.

USE_SCHEMA_NAME_FROM_RPD = NO

-schemafromrpd NO

Default schema name

The default schema name is used as the table schema name if the value of -schemafromrpd is set to NO, or if the repository schema name cannot be determined. The default value is SIEBEL.

DEFAULT_SCHEMA_NAME = ORACLE

-defaultschema ORACLE

Cube views schema name

The name of the schema under which the Cube Views metadata is created. The default value is SIEBEL.

CUBE_VIEWS_SCHEMA_NAME = ORACLE

-cubeschema ORACLE

Log file name

The path to the log file. If you provide an invalid path, an error occurs.

If you do not provide this parameter, the default log file path is used. The default path is:

ORACLE_INSTANCE\diagnostics\logs\
OracleBIServerComponent\
coreapplication_obisn\
CubeViews.log

LOG_FILE_NAME = C:\bea_default\instances\instance1\diagnostics\logs\generator\logfile.log

-logfile C:\bea_default\instances\instance1\diagnostics\logs\generator\logfile.log

Log failures

When set to YES, the log file lists the metadata that was invalidated under a certain rule. The default value is YES.

LOG_FAILURES = NO

-logfail NO

Log success

When set to YES, the log file lists the metadata that has been checked under each rule and has passed the check. The default value is NO.

LOG_SUCCESS = YES

-logsuccess YES


About the Metadata Input File

The input file is a text file that contains the parameters that are described in Table F-4.

Table F-4 Cube Metadata Input File Parameters

Input File Name Description

BUSINESS_MODEL

The name of the business model in the logical layer of the Oracle Business Intelligence repository that contains the metadata to export. If the business model is not found in the repository, then an error message is displayed.

You can only specify one business model name in the input file. To generate metadata for multiple business models, create another input file and run the Oracle Database Metadata Generator or DB2 Cube Views Generator again.

PHYSICAL_DATABASE

The name of the database in the physical layer of the Oracle Business Intelligence repository that contains the metadata to export. When the business model derives from multiple databases, the generator eliminates metadata from all databases other than the one specified here. When the physical database is not found in the repository, an error message is displayed.

RUN_AS_USER

The user name of the database user whose visibility must be duplicated for the metadata export. This parameter cannot be empty. This user must exist as a user reference in the repository.

OUTPUT_FOLDER

The full path of the folder to which the SQL file will be written. If the folder does not exist when you run the Oracle Database Metadata Generator, then it will be created. See "About the Output Files" for more information.


The following text shows a sample metadata input file:

BUSINESS_MODEL = "1 - Sample App"
PHYSICAL_DATABASE = "1 - Sample App Data"
RUN_AS_USER = "Administrator"
OUTPUT_FOLDER = "C:\OracleBI"

About the Output Files

Each Generator creates different types of output files, as described in the following list:

  • Oracle Database Metadata Generator: Generates a SQL file that is encoded in UTF-8 and stored in the specified output folder. The file name is based on the name of the business model you specified in the input file, such as my_business_model.sql.

  • DB2 Cube Views Generator: Generates the following files in the specified output folder:

    • XML file (encoded in UTF-8). One XML file is created for the specified business model. It contains all objects that were converted to cubes, with objects in the repository mapped to similar objects in the IBM Cube Views metadata. See "Conversion Rules for IBM DB2 Databases" for a list of objects that will not be converted.

      The name of the XML file matches the business model name, without spaces, followed by the XML extension (for example, SalesResults.xml).

    • A SQL file that contains the alias generation DDL. A SQL file is created for the specified business model only if aliases exist in the physical layer databases that are referenced in the business model. The alias file contains SQL commands that will create the aliases in the DB2 database. The name of the SQL file matches the business model name, without spaces, followed by the SQL extension (for example, SalesResults-alias.sql).
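As an illustration, the alias file contains statements of roughly the following form (the schema, alias, and table names here are hypothetical; the actual names come from the physical layer of the repository):

```sql
-- Hypothetical example of the alias-generation statements for DB2.
-- Each alias maps a physical-layer alias object to its base table.
CREATE ALIAS SIEBEL.W_DAY_D_ORDER FOR SIEBEL.W_DAY_D;
CREATE ALIAS SIEBEL.W_DAY_D_SHIP FOR SIEBEL.W_DAY_D;
```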

Troubleshooting Errors from the Generator

Error messages indicate that the Generator was unable to complete some or all of its tasks. After starting the Generator, you might observe the following error messages:

  • Unable to write to Log file: log_file_name.

    The log file specified in the input file or at the command line might contain the wrong path, the user might not have write permissions to that folder, or the disk might be out of space.

  • Run_as_user, user_name, is invalid.

    The user name is incorrect.

  • Repository, repository_name.rpd, is invalid or corrupt.

    The repository name might be incorrect, it might not exist in the given path, or the user might not have permission to read it.

  • Physical Database, database_name, is invalid.

    The physical database name does not match a valid physical database object in the repository.

  • Business Model, model_name, is invalid.

    The business model name does not match a valid business model object in the repository.

  • Authentication information provided is invalid.

    The repository password provided at the command line is incorrect.

  • Path: "path_name" is invalid.

    The path or file name is incorrect, or the current user does not have read access.

Metadata Conversion Rules and Error Messages

When the Generator creates the output files, it also maps the metadata objects in the Oracle Business Intelligence repository to similar objects in the metadata of the Oracle Database or the IBM DB2 database.

This section explains the rules used to identify Oracle Business Intelligence metadata that cannot be translated (converted) into either SQL or XML format. These rules are necessary because Oracle Database and IBM Cube Views do not support some of the metadata constructs that are allowed by Oracle Business Intelligence.

Dimensional metadata in the SQL or XML file will be generated at the logical fact table source level. If a logical fact table source has an invalid logical dimension table source, then the logical dimension table source will be invalidated. If the logical fact table source is invalid, then all the logical dimension table sources that are mapped to it will also be invalidated. Invalid Oracle Business Intelligence repository metadata elements will not be converted to cubes in the SQL or XML file.

When a rule is violated, the Generator writes the error messages and the metadata that violated the rule to the log file.

Conversion Rules for Oracle Databases

The following list provides the rules for converting Oracle Business Intelligence metadata into objects in the Oracle Database:

  • Attributes that contain expressions in the logical table cannot be exported.

  • Tables joined using complex joins are not considered.

  • Tables that are opaque views are not considered.

  • Columns used as part of a key in one level cannot be used as part of another level key.

    Oracle Database prohibits the use of columns as keys in multiple levels. This prohibition requires the Oracle Database Metadata Generator to eliminate one of the joins, usually the join that is encountered first. The eliminated join is lost, which prevents it from being exported.

Conversion Rules for IBM DB2 Databases

Table F-5 lists the rules used to validate Oracle Business Intelligence repository metadata elements, error messages that are written to the log file if the rule is violated, and an explanation of what caused the rule violation. The error messages help you determine why a particular Oracle Business Intelligence metadata object was not exported to the XML file.

Table F-5 Validation Rules for Metadata Elements

Rule Message Explanation

ComplexJoinFactsRule

[Fact Logical Table Source] Complex Physical Joins not supported

%qn has a complex Join %qn between Physical Tables %qn and %qn

If the physical fact tables are connected through complex joins, then the join is not supported. A complex join is defined as any join between two tables that do not have a foreign key relationship.

ComplexJoinDimsRule

[Dimension Logical Table Source] Complex Physical Joins not supported

%qn has a complex Join %qn between Physical Tables %qn and %qn

If the dimension physical tables are connected through a complex join, then that join is not supported.

ComplexJoinFactDimRule

[Fact Logical Table Source -> Dimension Logical Table Source] Complex Physical Joins not supported.

%qn has a complex Join %qn between Physical Tables %qn and %qn.

If a dimension physical table and a fact physical table are connected through a complex join, then that join is not supported and the dimension table source is invalidated.

OpaqueViewFactRule

[Fact Logical table Source] Physical SQL Select Statements not supported.

%qn uses the SQL Select Statement %qn.

When the physical fact table is generated by a SQL select statement, the logical fact table source that contains the table is invalidated. All logical dimension table sources connected to this logical fact table source are also invalidated. This construct allows subquery processing.

OpaqueViewDimRule

[Dimension Logical table Source] Physical SQL Select Statements not supported.

%qn uses the SQL Select Statement %qn.

When a physical dimension table is generated by a SQL select statement, the logical dimension table source containing that table is invalidated.

OuterJoinFactRule

[Fact Logical Table Source] Physical Outer Joins not supported.

%qn has an outer join %qn between physical tables %qn and %qn.

If the logical fact table source has an outer join mapping, then that logical fact table source is invalidated and all logical dimension table sources mapped to this source will also be invalidated.

OuterJoinDimRule

[Dimension Logical Table Source] Physical Outer Joins not supported.

%qn has an outer join %qn between physical tables %qn and %qn.

If the logical dimension table source has an outer join mapping, then that logical dimension table source is invalidated.

WhereClauseFactRule

[Fact Logical Table Source] WHERE clauses are not supported.

%qn has a where condition %s.

If the fact table source uses a WHERE clause to filter the data that is loaded, then this table source is invalidated.

WhereClauseDimRule

[Dimension Logical Table Source] WHERE clauses are not supported.

%qn has a where condition %s.

If the dimension table source uses a WHERE clause to filter the data that is loaded, then this table source is invalidated.

TwoJoinFactDimRule

[Fact Logical Table Source -> Dimension Logical Table Source] Multiple Joins between sources not supported.

%qn and %qn have at least the following joins : %qn, %qn.

If a physical fact table is mapped to two dimension tables from the same dimension source (if the fact table is not exclusively mapped to the most detailed table in the table source), then the dimension table source is invalidated.

HiddenManyManyRule

[Fact Logical Table Source -> Dimension Logical Table Source] Join between (physical or logical?) fact and dimension is not on the most detailed table.

%qn between %qn and %qn is not on the most detailed table %qn (Join name, facttable, dimtable).

This is related to the TwoJoinFactDimRule. If the fact table is joined to a dimension table that is not the most detailed table in the table source, then the dimension table source is invalidated.

ComplexMeasureRule

[Column] Complex Aggregation Rules not supported.

%qn uses an aggregation rule of %s which is not supported.

The supported aggregation rules are typically SUM, COUNT, AVG, MIN, MAX, STDDEV, and COUNTDISTINCT.

CountDistMeasureRule

[Column] COUNT-DISTINCT Aggregation Rule not supported.

%qn uses an aggregation rule of %s which is not supported.

COUNTDISTINCT aggregation is not supported for this particular column.

InvalidColumnLevelRule

[Level] Some columns that are part of the Primary Level Key are invalid.

%qn has %qn as part of its primary key, when %qn has already been marked invalid.

The level key for this level has one or more columns that are invalid.

VariableBasedColumnRule

[Logical Table Source -> Column] Column uses a Variable in the Expression

Column %qn uses a variable in its mapping.

The logical column uses repository and session variables in the expression.

OneFactToManyDimRule

[Fact Logical Table Source -> Dimension Logical Table Source] There must be a unique join path between the most detailed tables in the (logical or physical?) fact and the dimension.

No join paths found between %qn and %qn (both physical table names).

Found at least the following join paths: (%qn->%qn....), (%qn->%qn....)

Same as in TwoJoinFactDimRule or HiddenManyManyRule.

ManyMDTinFactRule

[Fact Logical Table Source] Fact Logical Table Source must have a unique most detailed table.

%qn has at least the following most detailed tables : %qn,%qn.

A fact logical table source has more than one most detailed table.

NoMeasureFactRule

[Fact Logical Table Source] Fact Logical Table Source does not have any Measures.

%qn does not have any deployable measures.

A fact table does not have any measures because all the measures have been invalidated.

NoInActiveFactRule

[Fact Logical Table Source] Fact Logical Table Source is not marked Active.

A fact source is not active.

NoInActiveDimRule

[Dimension Logical Table Source] Dimension Logical Table Source is not marked Active.

A dimension source is not active.

NoAttributeInFactRule

[Fact Logical Table Source -> Column] Attribute found in Fact.

%qn in a fact source %qn does not have an aggregation rule.

Attributes (columns without an aggregation rule) are not allowed in the fact source.

NoMeasureInDimRule

[Dimension Logical Table Source -> Column] Measure found in Dimension.

%qn in a dimension source %qn has an aggregation rule.

Measures (columns with an aggregation rule) are not allowed in the dimension source.

VisibleColumnsAttrRule

[Column] -> The run_as_user does not have visibility to this Logical Column.

%qn is not accessible to the run_as_user %qn due to visibility rules.

A column does not have visibility for this user.

VisibleColumnsMeasRule

[Column] -> The run_as_user does not have visibility to this Logical Column.

%qn is not accessible to the run_as_user %qn due to visibility rules.

A column does not have visibility for this user.

MultiplePrimaryKeysDimRule

[Dimension Logical Table Source] A Join uses an alternate key in the Dimension Logical Table Source.

%qn between %qn and %qn in %qn uses the alternate key %qn.

A dimension physical table can contain only one primary key. It is joined to another dimension physical table using a different unique key and that join is invalid.

IBM Cube Views does not accept any unique keys to be used for foreign joins and always requires the primary key.

MultiplePrimaryKeysFactRule

[Dimension Logical Table Source] A Join uses an alternate key in the Dimension Logical Table Source.

%qn between %qn and %qn in %qn uses the alternate key %qn.

A fact physical table can contain only one primary key. It is joined to another fact physical table using a different unique key and that join is invalid.

IBM Cube Views does not accept any unique keys to be used for foreign joins and always requires the primary key.

MultiplePrimaryKeysFactDimRule

[Fact Logical Table Source -> Dim Logical Table Source] A Join uses an alternate key between the Logical Table sources.

%qn between %qn and %qn for sources %qn and %qn uses the alternate key %qn.

A fact physical table can contain only one primary key. It is joined to a dimension physical table using a different unique key and is invalid.

IBM Cube Views does not accept any unique keys to be used for foreign joins and always requires the primary key.

NotDB2ExpressionAttrRule

[Dimension Logical Table Source -> Column] The Column contains an Expression not supported.

%qn has expression %s which is not supported.

The attribute contains an expression not supported by IBM Cube Views.

This includes metadata expressions that use DateTime functions (for example, CURRENT_DATE).

NotDB2ExpressionMeasRule

[Fact Logical Table Source -> Column] The Column contains an Expression not supported.

%qn has expression %s which is not supported.

A measure contains an expression not supported by IBM Cube Views.

This includes metadata expressions that use DateTime functions (for example, CURRENT_DATE).

NoAttributeDimRule

[Dimension Logical Table Source] Dimension Logical Table Source does not have any attributes visible to the run_as_user.

%qn can not be queried by user %qn since none of its attributes are visible.

A dimension does not have any attributes.


Using Materialized Views in the Oracle Database with Oracle Business Intelligence

This section explains how to export metadata from Oracle Business Intelligence into the SQL Access Advisor and create materialized views using the Oracle Database Metadata Generator.

This section contains the following topics:

  • About Using the SQL Access Advisor with Materialized Views

  • Deploying Metadata for Oracle Database

About Using the SQL Access Advisor with Materialized Views

This feature enhances the data warehouse performance and functionality of a database. It enables the SQL Access Advisor to store metadata about the logical relationships of the data that resides in the database. Additionally, it accelerates data warehouse queries by using more efficient Oracle materialized views. These materialized views preaggregate the relational data and improve query performance. Once the metadata is stored in the SQL Access Advisor, the database administrator can optimize the database objects and improve query performance.

When processing queries, Oracle Database routes them to the tables that hold materialized views when possible. Because these tables are smaller than the underlying base tables and the data has been preaggregated, the rerouted queries might run faster.

The Oracle Database Metadata Generator works as a metadata bridge that converts the Oracle Business Intelligence proprietary metadata into a SQL file containing PL/SQL commands to generate dimensions in the SQL Access Advisor. After converting the metadata into a SQL file, you use a tool such as SQL*Plus to import the translated metadata into the SQL Access Advisor and store it in metadata catalog tables. After importing the metadata, you create materialized views, which are used to optimize incoming application queries.
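For illustration, the generated SQL file contains dimension-creation statements of roughly the following form (the table, column, and hierarchy names here are hypothetical; the actual statements are derived from the logical dimensions in the repository):

```sql
-- Hypothetical sketch of generated dimension metadata.
-- The levels and hierarchy mirror a dimension defined in the repository.
CREATE DIMENSION times_dim
  LEVEL day   IS times.day_id
  LEVEL month IS times.month_id
  LEVEL year  IS times.year_id
  HIERARCHY calendar_rollup (day CHILD OF month CHILD OF year);
```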

You can use this feature with Oracle Database 9i and higher. See "System Requirements and Certification" for information about platform compatibility.

Note that in database releases prior to 10g, the SQL Access Advisor was called the Oracle Database Summary Advisor and was documented in Oracle9i Data Warehousing Guide.

Deploying Metadata for Oracle Database

Become familiar with the Oracle Database and its tools before attempting to deploy metadata in the Oracle Database. For more information, see "SQL Access Advisor" in Oracle Database Performance Tuning Guide.

Ensure that you complete the steps in "Running the Generator" before deploying metadata. To deploy cube metadata, perform the tasks described in the following sections:

  • Executing the SQL File for Oracle Database

  • Defining Constraints for the Existence of Joins

  • Creating the Query Workload

  • Creating Materialized Views

Executing the SQL File for Oracle Database

Before executing the SQL file for importing into the SQL Access Advisor, ensure that you are familiar with Oracle Database import tools. See the Oracle Database documentation set for information.

Use a tool such as SQL*Plus to execute the SQL file that the Oracle Database Metadata Generator generated. You might see error messages if the dimensions already exist or if the database schema differs from that in the RPD file. When the script executes successfully, you can see the dimensions that were created by using the database web console or the Oracle Enterprise Manager Database Control. In the Oracle Enterprise Manager Database Control, expand the following nodes: Network, Databases, database-name, Warehouse, Summary Management, Dimensions, System.
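A hypothetical SQL*Plus session for this step might look as follows (the connect string and script name are examples only; the query against USER_DIMENSIONS is one way to confirm that the dimensions were created):

```sql
-- Run the generated script as the schema owner, for example:
--   sqlplus dwh_admin@orcl @my_business_model.sql
-- Then list the dimensions that the script created and check
-- whether any are marked invalid.
SELECT dimension_name, invalid
FROM   user_dimensions;
```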

After you execute the SQL file, be aware of the following:

  • No incremental metadata changes are allowed. Schema changes require that you manually delete cube model metadata in the Oracle Database and convert the Oracle Business Intelligence metadata again. For example, if you must make a change to a dimension in a cube in the Oracle BI repository, you must delete the cube model in the Oracle Database, regenerate the SQL file from the Oracle BI repository, and import it into the SQL Access Advisor.

  • You cannot delete metadata using the Oracle Database Metadata Generator. Instead, you must manually delete the cube model using the Oracle Enterprise Manager Database Control.

Defining Constraints for the Existence of Joins

For more information on this topic, see the Oracle Database documentation set.

You must ensure that Oracle Database knows about the joins between the dimension tables and the fact tables. To do so, you create constraints in SQL*Plus or the Oracle Enterprise Manager Database Control. In the Oracle Enterprise Manager Database Control, you select the table on which you must create a constraint, then select the Constraint tab.

You create a different type of constraint for each kind of table, as follows:

  • For dimension tables, create a UNIQUE key constraint.

  • For fact tables, create a FOREIGN key constraint and specify the referenced schema and referenced table. In the Constraint Definition area, include the foreign key columns in the fact table and the corresponding unique keys in the dimension table. An attempt to create a foreign key on a fact table can fail if the foreign key column data does not match the unique key column data on the dimension table.
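In SQL*Plus, the two kinds of constraints can be sketched as follows (the table and column names are hypothetical):

```sql
-- Unique key on the dimension table's most detailed level key.
ALTER TABLE times ADD CONSTRAINT times_day_uk UNIQUE (day_id);

-- Foreign key on the fact table referencing the dimension key.
-- This statement fails if fact rows contain day_id values that
-- do not exist in the times table.
ALTER TABLE sales ADD CONSTRAINT sales_day_fk
  FOREIGN KEY (day_id) REFERENCES times (day_id);
```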

Creating the Query Workload

See the Oracle Database documentation set for detailed information about creating the query workload.

A query workload is a sample set of physical queries to optimize. Before you create the workload, you generate a Trace file with information on the slowest-running queries.

To generate the Trace file:

You can generate the Trace file of the slowest-running queries using a tool that is appropriate to your database version, as described in the following list:

  • Usage Tracking: Use this capability in Oracle Business Intelligence to log queries and how long they take to run. Long-running Oracle Business Intelligence queries can then be executed as a script and used with the Trace feature in the Oracle Database to capture the Oracle Database SQL code for these queries.

  • Oracle Database Trace: Use this tool to identify the slowest physical query. You can enable the Trace feature either within Oracle Enterprise Manager Database Control or by entering SQL commands with the DBMS_MONITOR package. Once you enable the Trace feature, you use a script to create a Trace file to capture the SQL code for queries in a query workload table.

  • Oracle Enterprise Manager: Use this tool to track slow-running queries.

    Note:

    The capabilities that are described in the following sections are available in Oracle Database, rather than as part of Oracle Business Intelligence.
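For the DBMS_MONITOR approach, tracing a session can be sketched as follows (the session identifier values are placeholders that you obtain from V$SESSION):

```sql
-- Enable SQL trace for a given session (sid and serial# from V$SESSION),
-- run the slow queries, then disable tracing. The trace file is written
-- to the database server's trace directory.
EXECUTE DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 456);
-- ... execute the slow-running queries here ...
EXECUTE DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 456);
```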

To analyze the information in the Trace file:

  1. Use the following guidelines when reviewing the Trace file:

    • When you have traced many statements at once, such as in batch processes, quickly discard any statements that have acceptable query execution times. Focus on those statements that take the longest times to execute.

    • Check the Query column for block visits for read consistency, including all query and subquery processing. Inefficient statements are often associated with a large number of block visits. The Current column indicates visits not related to read consistency, including segment headers and blocks that will be updated.

    • Check the Disk column for the number of blocks that were read from disk. Because disk reads are slower than memory reads, the value will likely be significantly lower than the sum of the Query and Current columns. If it is not, check for issues with the buffer cache.

    • Locking problems and inefficient PL/SQL loops can lead to high CPU time values even when the number of block visits is low.

    • Watch for multiple parse calls for a single statement, because this indicates a library cache issue.

  2. After identifying the problem statements in the file, check the execution plan to learn why each problem statement occurred.

To load queries into the workload:

  • After you use the Trace utility to learn the names of the slowest physical queries, insert them into the USER_WORKLOAD table.

    Table F-6 describes the columns of the USER_WORKLOAD table.

  • Use INSERT statements to populate the QUERY column with the SQL statements for the slowest physical queries and the OWNER column with the appropriate owner names.
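For example, one captured query could be loaded as follows (the query text and owner name are placeholders):

```sql
-- Hypothetical example: load one slow query into the workload table.
INSERT INTO user_workload (query, owner, priority)
VALUES ('SELECT t.year_id, SUM(s.amount)
         FROM sales s JOIN times t ON s.day_id = t.day_id
         GROUP BY t.year_id',
        'DWH_ADMIN', 1);
COMMIT;
```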

Table F-6 Columns in USER_WORKLOAD Table

Column        Data Type                                       Required  Description
QUERY         Any LONG or VARCHAR type (all character types)  YES       SQL statement for the query.
OWNER         VARCHAR2 (30)                                   YES       User who last executed the query.
APPLICATION   VARCHAR2 (30)                                   NO        Application name for the query.
FREQUENCY     NUMBER                                          NO        Number of times that the query was executed.
LASTUSE       DATE                                            NO        Last date on which the query was executed.
PRIORITY      NUMBER                                          NO        User-supplied ranking of the query.
RESPONSETIME  NUMBER                                          NO        Execution time of the query, in seconds.
RESULTSIZE    NUMBER                                          NO        Total number of bytes that the query selected.
SQL_ADDR      NUMBER                                          NO        Cache address of the query.
SQL_HASH      NUMBER                                          NO        Cache hash value of the query.
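A minimal sketch of populating USER_WORKLOAD with one traced statement follows; the query text, owner, and column values are placeholders:

```sql
-- Hypothetical slow query captured from the trace file.
INSERT INTO USER_WORKLOAD (query, owner, priority, responsetime)
VALUES (
  'SELECT p.brand, SUM(s.units) FROM sales_fact s, product_dim p
    WHERE s.prod_key = p.prod_key GROUP BY p.brand',
  'BI_DWH',   -- user who last executed the query
  1,          -- user-supplied ranking
  47);        -- observed execution time, in seconds

COMMIT;
```

Repeat the INSERT for each of the slowest queries that you identified, then pass the workload table to the SQL Access Advisor.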


Creating Materialized Views

After you populate the query workload table, use the appropriate tool for the Oracle Database version to create materialized views. In Oracle Database 10g, use the SQL Access Advisor in the Oracle Enterprise Manager Database Control and specify the query workload table that you created.

The SQL Access Advisor generates recommendations on improving the performance of the fact tables that you specify. The SQL Access Advisor displays the SQL code with which it will create the appropriate materialized views. Before indicating that the SQL Access Advisor should create the materialized views, review the following tips:

  • The creation of a materialized view can fail if the SQL code includes a CAST statement.

  • Ensure that the CREATE MATERIALIZED VIEW statement does not specify the same query that you provided in the workload table. If the statement does specify the same query, then the materialized view will likely not reflect the true performance gain. However, if the query is executed frequently, then creating a materialized view might still be worthwhile.

  • Add a forward slash (/) to the end of the CREATE MATERIALIZED VIEW statement after the SQL statement. Otherwise, the SQL*Plus worksheet will not recognize it as a valid statement.

    Note:

    The SQL Access Advisor can also help determine appropriate indexing schemes.
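For illustration, a generated statement with the trailing slash added might look like the following; the view name, refresh options, and query are placeholders, not output from the SQL Access Advisor:

```sql
CREATE MATERIALIZED VIEW mv_sales_by_brand
  BUILD IMMEDIATE
  REFRESH COMPLETE
  ENABLE QUERY REWRITE
AS
  SELECT p.brand, SUM(s.units) AS total_units
    FROM sales_fact s, product_dim p
   WHERE s.prod_key = p.prod_key
   GROUP BY p.brand
/
```

With ENABLE QUERY REWRITE, the optimizer can transparently route matching aggregate queries to the materialized view instead of the base tables.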

Using IBM DB2 Cube Views with Oracle Business Intelligence

This section explains how to export metadata from Oracle Business Intelligence into IBM DB2 using the DB2 Cube Views Generator.

This section contains the following topics:

About Using IBM DB2 Cube Views with Oracle Business Intelligence

IBM DB2 Cube Views is a registered trademark of IBM. See "System Requirements and Certification" for information about platform compatibility.

This feature enhances the data warehouse performance and functionality of a database. It enables the DB2 database to store metadata about the logical relationships of the data residing in the database. Additionally, it accelerates data warehouse queries by using more efficient DB2 materialized query tables (MQTs). These MQTs preaggregate the relational data and improve query performance.

When processing queries, the DB2 Query Rewrite functionality routes queries to the MQTs when possible. Because these tables are smaller than the underlying base tables and the data has been preaggregated, the queries that are rerouted to them might run faster.

DB2 Cube Views Generator works as a metadata bridge to convert the Oracle Business Intelligence proprietary metadata into an IBM Cube Views XML file. After converting metadata into an XML file, you use IBM Cube Views to import the translated metadata into the DB2 database and store it in IBM Cube Views metadata catalog tables. After importing the metadata, you use the IBM Optimization Advisor to generate scripts to create materialized query tables (MQTs) and their indexes. The deployed MQTs are used by the DB2 Query Rewrite functionality to optimize incoming application queries.

DB2 provides an API (implemented as a stored procedure) that passes XML documents as arguments to create, modify, delete, or read the metadata objects. For more information about IBM Cube Views, see the IBM DB2 documentation.

Deploying Cube Metadata

The alias-SQL file generated by the DB2 Cube Views Generator should be executed before importing the XML file. The XML file generated by the DB2 Cube Views Generator contains the cube metadata in XML format. After importing the XML file into your DB2 database, you must create materialized query tables.

Note:

It is strongly recommended that you become familiar with IBM Cube Views and its tools before attempting to import the XML file. For more information, see the IBM documentation.

Ensure that you complete the steps in "Running the Generator" before deploying metadata. To deploy cube metadata, perform the tasks described in the following sections:

Executing the Alias-SQL File for IBM Cube Views

You must execute the alias-SQL file before you import the XML file into your DB2 database. For more information, see the IBM documentation.

The alias-SQL file that is generated by the DB2 Cube Views Generator must be executed by a SQL client on the database where the data warehouse is located. When executed, it creates aliases (synonyms) for tables in the database.
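For example, assuming the generated file is named obi_aliases.sql and the warehouse database is named dwhdb (both placeholders), the file can be run from the DB2 command-line processor:

```shell
# Connect to the warehouse database, then run the alias-SQL file.
# -t: statements end with a semicolon; -v: echo each statement; -f: read from file.
db2 CONNECT TO dwhdb USER db2admin
db2 -tvf obi_aliases.sql
```

Run the file as a user with privileges to create aliases in the warehouse schema.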

Importing the XML File

After you execute the alias-SQL file, you can import the XML file into the database. For more information, see the IBM documentation.

Note:

It is strongly recommended that you become familiar with IBM Cube Views and its tools before attempting to import the XML file. For more information, see the IBM documentation.

You can import this file using the following IBM tools:

  • IBM OLAP Center (recommended). For more information, see "Guidelines for Importing the XML File Using the IBM OLAP Center" and the IBM documentation.

  • IBM command-line client utility (db2mdapiclient.exe). IBM ships this utility with DB2. For more information about using the command-line client utility, see the IBM documentation.

  • IBM DB2 Stored Procedure. IBM Cube Views provides a SQL-based and XML-based application programming interface (API) that you can use to run a single stored procedure to create, modify, and retrieve metadata objects. For more information, see the IBM documentation.
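As an illustrative invocation of the command-line client, the flag meanings below follow the IBM documentation for db2mdapiclient, but verify the exact options for your DB2 release; the database name, credentials, and file names are placeholders:

```shell
# Send the generated cube-metadata XML file to the DB2 database.
# -d database  -u user  -p password  -i input XML  -o response XML
db2mdapiclient -d dwhdb -u db2admin -p secret -i obi_cubeviews.xml -o response.xml
```

Inspect the response XML file for errors reported by the stored procedure.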

Guidelines for Importing the XML File Using the IBM OLAP Center

Using the IBM OLAP Center, you can import cube metadata into DB2. The IBM OLAP Center provides wizards to help you import the file. For more information, see the IBM documentation.

To import the XML file, use the following guidelines:

  • Using the IBM OLAP Center tool, connect to the DB2 database.

  • In the Import Wizard, choose the XML file that you want to import.

  • If metadata exists that refers to database constructs that are not in the database, then an error message is displayed.

  • When the wizard asks for an import option, choose to replace existing objects.

  • When you are returned to the IBM OLAP Center, a diagram of the cube model is shown.

Guidelines for Changing Cube Metadata After Importing the XML File

After you import the XML file, you might need to perform the following actions:

  • Because Oracle OLAP does not store foreign keys as metadata, they will not exist in the converted metadata in the DB2 database. You must use the IBM Referential Integrity Utility for IBM Cube Views to generate foreign key informational constraints. You can obtain this utility on the IBM Web site.

  • You might encounter other issues such as foreign key join columns being nullable. You can use the following methods to solve this problem:

    • If the data in these columns is not null, then you should convert these columns to not-null columns.

    • If the data in these columns is null, or if you prefer not to convert the column data type even though the column data is not null, then you should modify the cube model using the following guidelines:

      • In a fact-to-dimension join, you must manually eliminate this dimension object from the converted cube model and create a degenerate dimension object consisting of the foreign key of this join.

      • In a dimension-to-dimension join, you must manually eliminate the dimension object that represents the primary-key side of the join from the converted cube model and create a degenerate dimension object consisting of the foreign key of this join.

      • In a fact-to-fact join, you must manually eliminate the fact object that represents the primary-key side of the join from the converted cube model and create a degenerate dimension object consisting of the foreign key of this join.

  • The DB2 Cube Views Generator does not support incremental metadata changes. Schema changes require that you manually delete the cube model metadata in the DB2 database and convert the Oracle Business Intelligence metadata again. For example, if you must change a dimension in a cube in the Oracle Business Intelligence metadata repository, then you must delete the cube model in the DB2 database, regenerate the XML file from the Oracle Business Intelligence repository, and import it into the DB2 database.

  • You cannot delete metadata using the DB2 Cube Views Generator. Instead, you must manually delete the cube model using the IBM OLAP Center.

  • The IBM Statistics tool and IBM Optimization Advisor must be run periodically.

For more information, see the IBM documentation.

Guidelines for Creating Materialized Query Tables (MQTs)

For more information, see the IBM documentation.

After you import the cube metadata into the database, you must run the IBM Optimization Advisor to generate SQL scripts and then execute those scripts to create the MQTs. You must provide certain parameters to the IBM Optimization Advisor to get optimal results from the implementation. The IBM Optimization Advisor wizard analyzes your metadata and recommends how to build summary tables that store and index aggregated data for SQL queries. Running the IBM Optimization Advisor can help you keep the MQTs current. Additionally, you must refresh your database after each ETL.

To create MQTs, use the following guidelines:

  • In the IBM OLAP Center, choose the cube model that you want to optimize and open the IBM Optimization Advisor wizard.

  • Follow the instructions in the wizard, using the following table as a guide.

    When asked for:  Choose:

    Summary Tables   Deferred (or Immediate), and provide a tablespace for the tables.

    Limitations      An appropriate value for each optimization parameter. You should turn on the Data-sampling option.

    SQL Scripts      The file names and locations for the generated scripts that you run to create the summary tables.


  • When the IBM Optimization Advisor closes, you must execute the SQL scripts to create the MQTs.