|Oracle® Warehouse Builder Data Modeling, ETL, and Data Quality Guide
11g Release 2 (11.2)
Part Number E10935-03
Earlier in the design phase, you defined a logical model for your target system using Oracle Warehouse Builder design objects. This chapter includes reference information for assigning physical properties to mappings and process flows. This chapter presents configuration parameters in the order in which they appear in the user interface.
When you configure mappings properly, you can improve the Extract, Transform, and Load (ETL) performance. Use this section as a reference for setting configuration parameters that govern how data is loaded and to optimize code for better performance.
Configuration parameters for mappings are classified into the following categories:
In addition to these parameters, you can configure the source and target operators in the mapping as described in "Sources and Targets Reference". There are some additional parameters that you can configure for CT mappings as described in "Configuring Code Template (CT) Mappings".
When you configure Runtime parameters for a mapping, you set the default behaviors for the mapping. You can override these parameters when you execute the mapping either in the Control Center, the Process Flow Editor, or Oracle Enterprise Manager.
The Runtime parameters include the following parameters:
Use the Analyze Table Sample Percentage parameter to control how statistics are gathered on the target tables. After data is loaded into the target tables, statistics used for cost-based optimization are gathered on each target table. Set this parameter to the percentage of rows in each target table to be sampled for this analysis.
Use Bulk Size to specify the number of rows in each bulk for PL/SQL bulk processing. Warehouse Builder uses the Bulk Size parameter only when the Bulk Processing Code option is selected and the operating mode is set to row-based. For more information, see Oracle Database PL/SQL Language Reference.
This parameter specifies whether the parallel chunking task must be resumed. Select False to run a new parallel chunking task. Select True to resume the parallel chunking task. At runtime, you can override the value set for this parameter by using the Input Parameters dialog box.
This parameter specifies whether the parallel chunking task must be resumed if it has been stopped due to errors. Set this parameter to True to force Warehouse Builder to resume the parallel chunking task. At runtime, you can override the value set for this parameter by using the Input Parameters dialog box.
This parameter specifies the number of times that Warehouse Builder attempts to resume a failed parallel chunking task. Enter a numeric value that specifies the number of attempts. At runtime, you can override the value set for this parameter by using the Input Parameters dialog box.
This parameter specifies the number of threads that must be used to process the chunks in parallel. Enter a numeric value for this parameter. Setting this parameter to NULL or zero causes the mapping to be executed serially.
At runtime, you can override the value set for this parameter by using the Input Parameters dialog box.
Commit frequency applies only to non-bulk mode mappings. Bulk mode mappings commit according to the bulk size.
If you set the Default Operating Mode to row-based and deselect Bulk Processing Code, then use the Commit Frequency parameter to determine the number of rows to be processed before a commit operation. Warehouse Builder commits data to the database after processing the number of rows specified in this parameter.
If you select the Bulk Processing Code option, set the Commit Frequency equal to the Bulk Size. If the two values differ, then Bulk Size overrides Commit Frequency and a commit operation is implicitly performed after every bulk.
Use Default Audit Level to indicate the audit level used when executing the package. Audit levels dictate the amount of audit information captured in the runtime schema when the package is run. The audit level settings are:
None: No auditing information is recorded at run time.
Statistics: Statistical auditing information is recorded at run time.
Error Details: Error information and statistical auditing information are recorded at run time.
Complete: All auditing information is recorded at run time. Running a mapping with the audit level set to Complete generates a large amount of diagnostic data, which may quickly fill the allocated tablespace.
For mappings with a PL/SQL implementation, select a default operating mode. The operating mode you select can greatly affect mapping performance. For details on how operating modes affect performance, see "Set-Based Versus Row-Based Operating Modes". You can select one of the following operating modes:
Set based: A single SQL statement that inserts all data and performs all operations on the data is generated. This increases the speed of Data Manipulation Language (DML) operations. Set based mode offers optimal performance but minimal auditing details.
Row based: Statements that process data row by row are generated. The select statement is a SQL cursor. All subsequent statements are PL/SQL. Because data is processed row by row, the row-based operating mode has the slowest performance but offers exhaustive auditing details.
Row based (Target Only): A cursor select statement is generated and attempts are made to include as many operations as possible in the cursor. For each target, Warehouse Builder generates a PL/SQL insert statement and inserts each row into the target separately.
Set based fail over row based: The mapping is executed in set based mode. If an error occurs, then the execution fails and the mapping is started over again in the row-based mode. This mode is recommended for use only in test environments and is not recommended for use in production environments.
Set based fail over row based (Target Only): The mapping is first executed in set based mode. If an error occurs, then the execution fails over to Row based (Target Only) mode. This mode is recommended for use only in test environments and is not recommended for use in production environments.
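To illustrate the difference between the two basic modes, the following sketch contrasts the general shape of set-based and row-based code. The table and column names are hypothetical and the sketch is a simplification of what Warehouse Builder actually generates:

```sql
-- Set-based: the entire load is a single DML statement.
INSERT INTO tgt_orders (order_id, amount)
SELECT order_id, amount FROM src_orders;

-- Row-based: a SQL cursor feeds PL/SQL that processes each row
-- individually, which enables per-row error handling and auditing.
BEGIN
  FOR r IN (SELECT order_id, amount FROM src_orders) LOOP
    INSERT INTO tgt_orders (order_id, amount)
    VALUES (r.order_id, r.amount);
    -- per-row auditing and error logging would occur here
  END LOOP;
END;
/
```

The single-statement form lets the database optimize the whole load at once, which is why set-based mode is fastest but can report only statement-level errors.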
Use Default Purge Group to specify the purge group used when executing the package. Each audit record in the runtime schema is assigned to the specified purge group.
Use Maximum Number of Errors to indicate the maximum number of errors allowed while executing the package. Execution of the package terminates when the number of errors exceeds the maximum number of errors value.
The Maximum Number of Errors parameter applies to the count of errors for the entire mapping run, whether run in set-based, row-based, or failover modes. Consider the following cases:
Maximum number of errors is set to 50 and the mapping is run in set-based mode. The data did not load successfully. One error resulted from failure of the set-based load DML statement. The mapping return status is WARNING.
Maximum number of errors is set to 50, the mapping is run in set-based mode, and the Enable Constraints parameter is set to False. The data is loaded successfully, but 60 constraint violation errors occurred during reenabling of the constraints. The mapping return status is ERROR.
Maximum number of errors is set to 50 and the mapping is run in row-based mode. Some of the data loaded successfully, but with many errors. The mapping terminates after reaching the 50th error. The mapping return status is ERROR.
Maximum number of errors is set to 50 and the mapping is run in set-based fail over to row-based mode. The data did not load successfully in set-based mode. One error resulted from the failure of the set-based load DML statement. Some of the data loaded successfully in row-based mode, but with many errors. The mapping terminates after reaching the 49th error in row-based mode, because one error was already counted in set-based mode. The mapping return status is ERROR.
The Number of Threads to Process Chunks parameter represents the number of threads used to process the chunks of source data. Setting this parameter to zero means that the mapping will not be run in parallel.
If you select this option, ANSI SQL syntax is generated. Otherwise, Oracle SQL syntax is generated.
Automatic: This is the default setting. Warehouse Builder loads and then automatically commits data based on the mapping design. This setting is valid for all mapping types. For multiple targets in a single mapping, data is committed based on target by target processing (insert, update, delete).
Automatic Correlated: Automatic correlated commit is a specialized type of automatic commit that applies only to PL/SQL mappings with multiple targets. Warehouse Builder considers all targets collectively and commits or rolls back data uniformly across all targets.
The mapping behavior varies according to the operating mode that you select. For more information about automatic correlated commit, see "Committing Data from a Single Source to Multiple Targets".
Manual: Select manual commit control for PL/SQL mappings when you want to interject complex business logic or perform validations before committing data.
You have the following options for specifying manual commits:
You can define the commit logic within the mapping as described in "Embedding Commit Logic into the Mapping".
You can commit data in a process flow or from a SQL*Plus session as described in "Committing Data Independently of Mapping Design".
No Commit: If you set this option, then the mapping does not issue a commit while it executes.
If you select this option, code is generated to analyze the target table after it is loaded, if the resultant target table is double or half its original size.
If the target table is not in the same schema as the mapping and you want to analyze the table, then you must grant ANALYZE ANY to the schema owning the mapping.
If you select this option, parallel DML is enabled at run time. Executing DML statements in parallel improves the response time of data-intensive operations in large databases that are present in a data warehouse.
Select this option to improve performance for mappings that include the Splitter operator and inserts into multiple target tables. When this option is selected and the mapping is executed by Oracle9i or later, a single SQL statement (a multitable INSERT) is generated that inserts data into multiple tables based on the same set of source data.
Note that the multiple table insert is performed only if this option is selected and the Oracle target module database is Oracle9i or later. The multiple table insert is performed only for mappings in set-based mode that include a Splitter operator and do not include active operators, such as an Aggregator or Joiner operator, between the Splitter and the target. In addition, the multiple table insert is available only for tables. It is not available for views, materialized views, dimensions, or cubes. Each target table must have fewer than 999 columns. For detailed instructions on how to create a mapping with multiple targets, see "Example: Creating Mappings with Multiple Targets".
Do not select this option for mappings run in row-based mode or for mappings executed by Oracle8i server. Also, do not select this option when auditing information for individual targets is required.
When this option is selected, one total SELECT and INSERT count is returned for all targets.
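The generated statement uses Oracle's multitable INSERT syntax. A hand-written sketch of such a statement, with hypothetical table names and a hypothetical split condition, looks like this:

```sql
-- One pass over the source feeds several targets, mirroring a
-- Splitter operator with two output groups (names are illustrative).
INSERT ALL
  WHEN region = 'US'   THEN
    INTO tgt_us_sales   (sale_id, amount) VALUES (sale_id, amount)
  WHEN region = 'EMEA' THEN
    INTO tgt_emea_sales (sale_id, amount) VALUES (sale_id, amount)
SELECT sale_id, region, amount FROM src_sales;
```

Because the source is read only once for all targets, this form avoids the repeated scans that separate per-target INSERT statements would require.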
Specifies the AUTHID option to be used while generating the code. The options that you can select are Current_User, Definer, or None.
For PL/SQL mappings with multiple targets, you can generate code that defines an order for loading the targets. This is important when a parent-child relationship exists between two or more targets in a mapping. The option is selected by default.
Specify the name of the error trigger procedure in this field.
If this configuration parameter is selected and the operating mode is set to row-based, Warehouse Builder generates PL/SQL bulk processing code. PL/SQL bulk processing improves row-based ETL performance by collecting, processing, and writing rows in bulk, instead of row by row. The size of each bulk is determined by the Bulk Size configuration parameter. Set-based mode offers optimal performance, followed by bulk processing, and finally by row-based mode. For more information, see Oracle Database PL/SQL Language Reference.
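Conceptually, bulk processing code follows the BULK COLLECT / FORALL pattern shown in the sketch below, where the LIMIT clause plays the role of the Bulk Size parameter. The table and cursor names are hypothetical, and the actual generated code is considerably more elaborate:

```sql
DECLARE
  CURSOR c_src IS SELECT order_id, amount FROM src_orders;
  TYPE t_rows IS TABLE OF c_src%ROWTYPE;
  l_rows t_rows;
BEGIN
  OPEN c_src;
  LOOP
    -- Fetch one bulk of rows; LIMIT corresponds to Bulk Size.
    FETCH c_src BULK COLLECT INTO l_rows LIMIT 50;
    EXIT WHEN l_rows.COUNT = 0;
    -- Write the whole bulk with a single context switch to SQL.
    FORALL i IN 1 .. l_rows.COUNT
      INSERT INTO tgt_orders VALUES l_rows(i);
    COMMIT;  -- Commit Frequency aligned with Bulk Size
  END LOOP;
  CLOSE c_src;
END;
/
```

Each FETCH/FORALL pair replaces up to 50 individual row operations, which is the source of the performance gain over plain row-by-row code.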
By default, when code is generated for a mapping, the code for all possible operating modes is generated. That is, if you set the Default Operating Mode to Set based, Warehouse Builder still generates code for all possible operating modes when Generation Mode is set to All Operating Modes. This enables you to switch the operating modes for testing purposes at run time.
The parameters in the Chunking Options section enable you to configure parallel chunking for a PL/SQL mapping.
Expand the Details node under Chunking Options to specify the following parallel chunking options:
Chunk by Number Column
Chunk by ROWID
Chunk by SQL Statement
Use this parameter to select the method to be used for performing parallel chunking. Following are the options you can select for this parameter:
None: Does not perform parallel chunking.
Chunk By Number Column: Uses a numeric column in the source data as a basis for dividing the source data into chunks. When you select this option, you must set the parameters listed under the Chunk By Number Column node.
Chunk by ROWID: Divides the source rows into chunks based on the ROWID of each row in the source data. When you select this option, you must set the parameters listed under the Chunk By ROWID node.
Chunk by SQL Statement: Uses a SQL statement to divide the source data into chunks. When you select this option, you must set the parameters listed under the Chunk By SQL Statement node.
Use Chunk Table to select the source table on which parallel data chunking must be performed.
The Chunking Column parameter is applicable when you perform parallel chunking using a numeric column in the source table. Use this parameter to select the source column on which parallel chunking is based.
Use Chunk Size to specify the range of each chunk.
Use Chunk Type to specify how source data must be divided into chunks based on the ROWID of the source rows. The options you can select are By data blocks and By rows.
Specify the SQL statement that divides the source data into smaller chunks.
Use this parameter to specify how the SQL statement, specified using SQL Statement, chunks the source data. Select one of the following options:
NUMBER_COLUMN: Data is chunked using a numeric column in the source data.
ROWID: Data is chunked using the ROWID of the source rows.
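Oracle Database 11g Release 2 exposes the same chunk-by-ROWID pattern through the DBMS_PARALLEL_EXECUTE package. The following sketch shows the underlying mechanism; the task name, table names, and chunk size are hypothetical, and this is an illustration of the technique rather than the code Warehouse Builder generates:

```sql
BEGIN
  DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'load_orders');
  -- Divide SRC_ORDERS into ROWID ranges of roughly 10000 rows each.
  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID(
    task_name   => 'load_orders',
    table_owner => USER,
    table_name  => 'SRC_ORDERS',
    by_row      => TRUE,       -- TRUE: "By rows"; FALSE: "By data blocks"
    chunk_size  => 10000);
  -- Run the chunks with 4 worker jobs; :start_id and :end_id are
  -- bound to each chunk's ROWID range by the package itself.
  DBMS_PARALLEL_EXECUTE.RUN_TASK(
    task_name      => 'load_orders',
    sql_stmt       => 'INSERT INTO tgt_orders
                       SELECT * FROM src_orders
                       WHERE rowid BETWEEN :start_id AND :end_id',
    language_flag  => DBMS_SQL.NATIVE,
    parallel_level => 4);      -- cf. Number of Threads to Process Chunks
  DBMS_PARALLEL_EXECUTE.DROP_TASK('load_orders');
END;
/
```

The parallel_level argument corresponds to the Number of Threads to Process Chunks runtime parameter described earlier in this chapter.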
The parameters in the SCD Updates section enable you to configure serial chunking for a PL/SQL mapping. The SCD Updates section contains one parameter, Chunking Strategy.
The Chunking Strategy parameter enables you to perform serial chunking. Select Serial to perform serial chunking on the source data. Select None to disable serial chunking for the PL/SQL mapping.
For relational and dimensional sources and targets such as tables, views, and cubes, Warehouse Builder displays the following set of properties for each operator:
By default, this setting is enabled and DML is performed using LCR APIs if available. If no LCR APIs are available, then the standard DML is used.
This parameter is maintained for backward compatibility only.
In previous releases, you could select a database link by name from the list. Source operators can be configured for schemas and database links, but targets can be configured for schemas only. Sources and targets can reside in different schemas, but they must reside in the same database instance.
This setting specifies the location that is used to access the source or target operator.
Enable this setting to detect and resolve any conflicts that may arise during DML operations using the LCR APIs.
This parameter is maintained for backward compatibility only.
In previous releases, you could link the mapping to a particular schema by clicking on the Schema field and typing a name.
Use the settings in this section to enable Partition Exchange Loading (PEL) into a target table. For specific information about each of these settings and additional information about how to design mappings for PEL, see "Improved Performance through Partition Exchange Loading".
Define loading or extraction hints. Application developers often develop insights into their data. For example, they know that a query runs much faster if a set of tables is joined in one order rather than another. Warehouse Builder can incorporate these insights into the generated SQL code packages as SQL Optimizer Hints.
When you select a hint from the Hints dialog box, the hint appears in the Existing Hints field. Enter additional text as appropriate in the Extra Text column. The editor includes the hint in the mapping definition as is. There is no validation or checking on this text.
You can define loading hints for mappings that load data in INSERT or UPDATE mode. By default, commonly used hints such as APPEND and PARALLEL are added. For all loading modes other than INSERT, the APPEND hint has no effect and you can choose to remove it.
Hints are available during mapping configuration. To configure a hint:
In the Projects Navigator, expand the Databases folder, and then the required module.
In the module, expand the Mappings node.
Right-click the required mapping and select Configure.
The Configuration tab displays the configuration parameters of the mapping.
In the Configuration tab, expand the required operator type and then expand the required operator.
Expand the Hints node and click the Ellipsis button to the right of a hint type to enter a hint.
For information about optimizer hints and how to use them, see Oracle Database Performance Tuning Guide.
Configure the following Constraint Management parameters:
Exceptions Table Name: All rows that violate their foreign key constraints during reenabling are logged into the specified exceptions table. No automatic truncation of this table is performed either before or after the load. Constraint violations are also loaded into the runtime audit error tables.
For SQL and PL/SQL loading, if you do not specify an exceptions table, invalid rows are loaded into a temporary table located in the default tablespace and then loaded into the Runtime Audit error table. The table is dropped at the end of the load.
If you are using SQL*Loader direct path loading, you must specify an exception table. Consult the SQL*Loader documentation for more information.
Enable Constraints: If you set this option to False, Warehouse Builder disables constraints on the target tables, loads data, and then reenables the constraints. Constraint violations found during reenable are identified in the runtime audit error table and, if specified, loaded into an exceptions table. If you set this option to True, then Warehouse Builder does not manage constraints and the data from the source is loaded into the target table.
When you disable constraints, loading is quicker because a constraint check is not performed. However, if exceptions occur for any rows during reenabling, the constraints for those rows will remain in a nonvalidated state. These rows are logged in the runtime audit error table by their ROWID. You must manually inspect the error rows to take any necessary corrective action.
The disabling and enabling of constraints happens on the target table. When the Enable Constraints parameter is set to False, the constraints on the target table are disabled before the data is loaded, and reenabled after the data is loaded. When the constraints are reenabled, the entire table is scanned and rows that violate the constraints are logged in the exceptions table. These rows are reported as constraint violation errors in the audit browser.
Consider a scenario where the target table is empty and the Enable Constraints parameter is set to False. Initially suppose that the source table has 10 rows, of which 2 rows violate the constraint on the target table. When the mapping is executed, the constraints on the target table are first disabled, then data is loaded (all 10 rows), and then constraints on the target table are reenabled. When the constraints are reenabled, the 2 rows that violate the constraints are logged into the exceptions table. The audit browser reports that there are 2 constraint violation errors.
Later, the mapping is again executed with a new source table containing 20 rows, of which 5 rows violate the constraint on the target table. After the data is loaded into the target table (all 20 rows), the target table has 30 rows. When the constraints on the target table are reenabled, 7 rows will be logged in to the exceptions table and reported as constraint violation errors in the audit browser. These include the 5 rows reported newly as well as the 2 rows reported initially. This is because Warehouse Builder scans the entire target table, which means that all 30 rows will be checked and therefore the 2 rows with violations from the first data load will still be included. Warehouse Builder cannot identify only the new rows added when the mapping was executed the second time. Therefore, unless you truncate the target table before each data load, you will always see the constraint violations from the previous data loads reported each time.
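The reenable step uses the database's standard EXCEPTIONS INTO mechanism, which always validates the whole table. A hand-run equivalent, assuming an exceptions table created with the utlexcpt.sql script and hypothetical table and constraint names:

```sql
-- Create the default EXCEPTIONS table once per schema:
-- @?/rdbms/admin/utlexcpt.sql

ALTER TABLE tgt_orders DISABLE CONSTRAINT fk_orders_cust;
-- ... load the data while the constraint is disabled ...
ALTER TABLE tgt_orders ENABLE VALIDATE CONSTRAINT fk_orders_cust
  EXCEPTIONS INTO exceptions;

-- ROWIDs of every violating row currently in the table,
-- from this load and from all earlier loads alike:
SELECT row_id FROM exceptions WHERE table_name = 'TGT_ORDERS';
```

Because ENABLE VALIDATE scans every row in the table, the query returns violations from previous loads as well, which is exactly the cumulative-error behavior described in the scenario above.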
Setting the Enable Constraints option to False is subject to the following restrictions:
For set-based operating mode, the foreign key constraints on the targets are disabled before loading, and then reenabled after loading. This parameter has no effect on foreign key constraints on other tables referencing the target table. If the load is done using SQL*Loader instead of a SQL or PL/SQL package, then a reenable clause is added to the .ctl file.
For set-based fail over to row-based and set-based fail over to row-based (target only) operating modes, setting this parameter to False disables the foreign key constraints on the targets before loading, and then reenables them if the load succeeds in set-based mode. This setting has no effect on foreign keys referencing other tables. If the load fails over to row-based mode, then loading repeats in row-based mode and all constraints remain enabled.
Note: Constraint violations created during reenabling do not cause the load to fail over from set-based mode to row-based mode.
For row-based or row-based (target only) operating modes, all foreign key constraints remain enabled even if this parameter is set to False.
For the TRUNCATE/INSERT DML type, setting this parameter to False disables foreign key constraints on other tables referencing the target table before loading, and then reenables the constraints after loading, regardless of the default operating mode.
When you have a Table operator that contains inputs from a flat file, you must configure the following SQL*Loader Parameters properties:
Partition Name: Indicates that the load is a partition-level load. Partition-level loading enables you to load one or more specified partitions or subpartitions in a table. Full database, user, and transportable tablespace mode loading does not support partition-level loading. Because incremental loading (incremental, cumulative, and complete) can be done only in full database mode, partition-level loading cannot be specified for incremental loads. In all modes, partitioned data is loaded in a format such that partitions or subpartitions can be selectively loaded.
Sorted Indexes Clause: Identifies the indexes on which the data is presorted. This clause is allowed only for direct path loads. Because the data sorted for one index is not usually in the right order for another index, you specify only one index in the SORTED INDEXES clause. When the data is in the same order for multiple indexes, all indexes can be specified at once. All indexes listed in the SORTED INDEXES clause must be created before you start the direct path load.
Singlerow: Intended for use during a direct path load with APPEND on systems with limited memory, or when loading a small number of records into a large table. This option inserts each index entry directly into the index, one record at a time. By default, SQL*Loader does not use SINGLEROW to append records to a table. Index entries are stored in a temporary area and merged with the original index at the end of the load. Although this method achieves better performance and produces an optimal index, it requires extra storage space. During the merge, the original index, the new index, and the space for new entries all simultaneously occupy storage space. With the SINGLEROW option, storage space is not required for new index entries or for a new index. Although the resulting index may not be as optimal as a freshly sorted one, it takes less space to produce. It also takes more time, because additional UNDO information is generated for each index insert. This option is recommended when the available storage is limited. It is also recommended when the number of records to be loaded is small compared to the size of the table. A ratio of 1:20 or less is considered small.
Trailing Nullcols: Sets SQL*Loader to treat any relatively positioned columns that are not present in the record as null columns.
Records To Skip: Invokes the SKIP command in SQL*Loader. SKIP specifies the number of logical records from the beginning of the file that should not be loaded. By default, no records are skipped. This parameter continues loads that have been interrupted for some reason. It is used for all conventional loads, for single-table direct loads, and for multiple-table direct loads when the same number of records is loaded into each table. It is not used for multiple-table direct loads when a different number of records is loaded into each table.
Database File Name: Specifies the names of the export files to import. The default extension is .dmp. Because you can export multiple export files, you may need to specify multiple file names to be imported. You must have read access to the imported files. You must also have the IMP_FULL_DATABASE role.
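Several of these parameters surface directly as clauses in the generated SQL*Loader control file. The following sketch shows where they appear; the file, table, partition, and index names are hypothetical:

```
LOAD DATA
INFILE 'orders.dat'
INTO TABLE tgt_orders PARTITION (p_2011_q1)  -- Partition Name
APPEND
SORTED INDEXES (idx_orders_pk)               -- Sorted Indexes Clause
SINGLEROW                                    -- Singlerow
FIELDS TERMINATED BY ','
TRAILING NULLCOLS                            -- Trailing Nullcols
(order_id, amount)
```

Note that SORTED INDEXES and SINGLEROW apply only to direct path loads, as described above.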
The configuration parameters for Code Template (CT) mappings are similar to the parameters for PL/SQL mappings. However, for CT mappings, the nodes SCD Updates and Chunking Options are not displayed because these options only apply to PL/SQL mappings.
Select this parameter to enclose SQL identifiers in the generated code of CT mappings. Warehouse Builder supports delimited identifiers in CT mappings across heterogeneous database systems.
For example, if the Use Enclosure Character parameter is selected, and the enclosure character is set to square brackets, the SQL statement generated is as follows:
SELECT [My Empno], [My Emp Name] FROM [My Employees]
By default, the enclosure character used for Oracle Database, IBM DB2 UDB, and SQL Server is the double quotation mark. You can set the enclosure character to be used for each platform by using OMB*Plus. For example, to change the enclosure character for the SQL Server platform to square brackets, use the following OMB*Plus command.
OMBALTER PLATFORM 'SQLSERVER' SET PROPERTIES (ENCLOSURE_CHAR) VALUES ('/[/]');
For a CT mapping that has the Use Enclosure Character parameter selected to work correctly, you must set the correct enclosure character for any new platform that you create. For example, suppose you create a new platform called MYSQL that references a MySQL database. Your CT mapping uses a table from the MySQL database, and you want to generate code using the double quotation mark as the enclosure character. Before you generate code for the CT mapping, you must set the enclosure character for the MYSQL platform using the following statement:
OMBALTER PLATFORM 'MYSQL' SET PROPERTIES (ENCLOSURE_CHAR) VALUES ('"')
Note: The Use Enclosure Character parameter is not supported for CT mappings that contain Flat File operators.
Use the SQL Loader Data Files section to set the Source File Operator parameter. For CT mappings that contain one or more Flat File operators, you can set parameters for the associated data files.
Use this property to specify the Flat File operator that must be associated with the current data file.
If your CT mapping contains only one Flat File operator, setting the Source File Operator parameter is optional. However, if your CT mapping contains more than one Flat File operator, you must set this parameter. This parameter enables Warehouse Builder to understand the relationship between the data file and the Flat File operator.
This property is available starting with Oracle Warehouse Builder 11g Release 2 (11.2).
Flat File Operators as a Target: A PL/SQL deployment code package is generated. For information about configuring the parameters associated with a Flat File operator used as a target, see "Flat File Operators as a Target".
To configure properties unique to mappings with flat file targets:
Select a mapping from the Projects Navigator, select Design from the menu bar, and select Configure.
Or, right-click the mapping you want to configure and select Configure.
Warehouse Builder displays the Configuration tab for the mapping.
Choose the parameters that you want to configure and click the space to the right of the parameter name to edit its value.
For each parameter, you can either select an option from a list, enter a value, or click the Ellipsis button to display another properties dialog box.
Select the Deployable option to generate a set of scripts for mapping objects marked as deployable. If this option is not selected for a mapping, then scripts are not generated for that mapping.
Set Language to the type of code that you want to generate for the selected mapping. The options you can choose from depend upon the design and use of the operators in the mapping. Depending on the mapping, you can select from PL/SQL, ABAP (for an SAP source mapping), or SQL*Loader.
Specify the location to deploy the mapping.
Under Runtime Parameters, set the Default Operating Mode to Row based (target only). This type of mapping will not generate code in any other default operating mode. For a description of each runtime parameter, see "Runtime Parameters".
For Access Specification, specify the name of the flat file target in Target Data File Name. For the Target Data File Location, specify a target file located on the computer where you installed the Runtime Platform. Select Output as XML file if you want the output to be in an XML file.
To configure a mapping with a Flat File operator as a source:
Select a mapping from the Projects Navigator, select Design from the menu bar, and select Configure. Or, right-click the mapping that you want to configure and select Configure.
Select the parameters that you want to configure and click the space to the right of the parameter name to edit its value.
For each parameter, you can specify whether you want the parameter to be selected, select an option from a list, enter a value, or click the Ellipsis button to display another properties dialog box.
Select the Deployable option to generate SQL*Loader script.
Specify the Log File Location and Log File Name.
Select Continue Load.
If SQL*Loader runs out of space for data rows or index entries, the load is discontinued. If the Continue Load option is selected, then an attempt is made to continue discontinued loads.
In Nls Characterset, specify the character set to place in the CHARACTERSET clause.
Select Direct Mode to indicate that a direct path load will be done. If this option is not selected, then a conventional load will be done. In general, direct mode is faster.
Select Operation Recoverable to indicate that the load is recoverable. If this option is not selected, then the load is not recoverable and records are not recorded in the redo log.
Configure the following parameters that affect the OPTIONS clause in the SQL*Loader script that is generated for mappings with flat file sources.
Perform Parallel Load: If this option is selected, direct loads can operate in multiple concurrent sessions.
Errors Allowed: If the value specified is greater than 0, then the ERRORS = n option is generated. SQL*Loader terminates the load at the first consistent point after it reaches this error limit.
Records To Skip: If the value specified is greater than 0, then the SKIP = n option is generated. This value indicates the number of records from the beginning of the file that should not be loaded. If the value is not specified, then no records are skipped.
Records To Load: If the value specified is greater than 0, then the LOAD = n option is generated. This value specifies the maximum number of records to load. If a value is not specified, then all of the records are loaded.
Rows Per Commit: If the value specified is greater than 0, then the ROWS = n option is generated. For direct path loads, the value identifies the number of rows to read from the source before a data save. For conventional path loads, the value specifies the number of rows in the bind array.
Read Size: If the value specified is greater than 0, then the READSIZE = n option is generated. The value is used to specify the size of the read buffer.
Bind Size: If the value specified is greater than 0, then the BINDSIZE = n option is generated. The value indicates the maximum size, in bytes, of the bind array.
Read Buffers: If the value specified is greater than 0, then the READBUFFERS = n clause is generated. READBUFFERS specifies the number of buffers to use during a direct path load. Do not specify a value for READBUFFERS unless it is necessary.
Preserve Blanks: If this option is selected, then the PRESERVE BLANKS clause is generated. PRESERVE BLANKS retains leading white space when optional enclosure delimiters are not present. It also leaves the trailing white space intact when fields are specified with a predetermined size.
Database File Name: This parameter enables you to specify the characteristics of the physical files to be loaded. The initial value of this parameter is set from the properties of the flat file used in the mapping.
If this parameter is set to a nonblank value, then the FILE= option is generated. The value specified is enclosed in single quotation marks in the generated code.
Control File Location and Control File Name: The location and name of the control file, which is required for audit details.
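As a rough sketch, a mapping configured with these parameters might produce an OPTIONS clause like the following. All values here are illustrative, not defaults; each option corresponds to one of the parameters described above:

```
-- Illustrative OPTIONS clause for a generated SQL*Loader script.
OPTIONS (
  ERRORS=50,          -- Errors Allowed
  SKIP=1,             -- Records To Skip
  LOAD=100000,        -- Records To Load
  ROWS=5000,          -- Rows Per Commit
  READSIZE=1048576,   -- Read Size
  BINDSIZE=1048576,   -- Bind Size
  PARALLEL=TRUE,      -- Perform Parallel Load
  FILE='users01.dbf'  -- Database File Name
)
```

When Preserve Blanks is selected, the PRESERVE BLANKS clause is generated separately in the LOAD DATA statement rather than in the OPTIONS clause.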
For more information about each SQL*Loader option and clause, see Oracle Database Utilities.
Expand the Runtime Parameters to configure your mapping for deployment.
Audit: Select this option to perform an audit when the package is executed.
Default Purge Group: The Default Purge Group is used when executing the package. Each audit record in the runtime schema is assigned to the purge group specified.
To configure a process flow module:
Right-click the process flow module and select Configure.
Warehouse Builder displays the Configuration tab for the process flow module.
Set the parameters for Evaluation Location and Identification Location.
Evaluation Location is the location from which this process flow is evaluated.
Identification Location provides the location to which the generated code is deployed.
To configure a process flow package:
Right-click the process flow package and select Configure.
Warehouse Builder displays the Configuration tab for the process flow package.
Set the parameters for Referred Calendar and Generation Comments.
Referred Calendar provides the schedule to associate with this package.
Generation Comments provides additional comments for the generated code.
Click an activity in the package to view its properties.
Under Path Settings, set the following properties for each activity in the process flow:
Execution Location: The location from which this activity is executed. If you configured Oracle Enterprise Manager, then you can select an OEM agent to execute the process flow.
Remote Location: The remote location for FTP activities only.
Working Location: The working location for FTP, FILE EXISTS, and External Process activities only.
Deployed Location: The deployment location. This setting applies to transformation activities only. For activities referring to predefined transformations, you must change the setting from Use Default Location and specify a valid location.
Under General Properties, you can view the bound name, which is the name of the object that the activity represents in the process flow. Only mapping, transformation, and subprocess activities have bound names.
Under Execution Settings, select the option Use Return as Status.
This setting governs the behavior of activities that return a NUMBER in their output. These activities include the FTP, User Defined, and Transform activities. When you select Use Return as Status, the Process Flow Editor assigns the outgoing transition conditions based on the following numeric return values for the activity:
1 = Success Transition
2 = Warning Transition
3 = Error Transition
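For example, a Transform activity bound to a PL/SQL function such as the following would route the process flow through the success, warning, or error transition according to its return value. This is a hypothetical sketch: the function name, the load_errors table, and the thresholds are illustrative, not part of Warehouse Builder.

```sql
-- Hypothetical status function for a Transform activity.
-- With Use Return as Status selected, the return value selects
-- the outgoing transition: 1 = success, 2 = warning, 3 = error.
CREATE OR REPLACE FUNCTION check_load_status RETURN NUMBER IS
  v_error_count NUMBER;
BEGIN
  SELECT COUNT(*) INTO v_error_count
  FROM load_errors;          -- illustrative error-logging table
  IF v_error_count = 0 THEN
    RETURN 1;                -- Success Transition
  ELSIF v_error_count < 10 THEN
    RETURN 2;                -- Warning Transition
  ELSE
    RETURN 3;                -- Error Transition
  END IF;
END check_load_status;
/
```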