Oracle® Warehouse Builder User's Guide
10g Release 2 (10.2.0.2)

Part Number B28223-05
25 Source and Target Operators

This chapter provides details on how to use operators as sources and targets in a mapping and includes the following topics:

Using Source and Target Operators

This section describes how to set properties for source and target operators, and also provides details on how to use each operator.

Defining a mapping involves the following general steps:

  1. Creating a Mapping

  2. Adding Operators

  3. Editing Operators

  4. Connecting Operators

  5. Setting Mapping Properties

  6. Setting Properties for Oracle Source and Target Operators

  7. Configuring Mappings Reference

  8. Debugging a Mapping

List of Source and Target Operators

The source and target operators are:

Using Oracle Source and Target Operators

Oracle source and target operators refer to operators that are bound to Oracle data objects in the Warehouse Builder repository. Use these operators in a mapping to load data into or source data from Oracle data objects.

Setting Properties for Oracle Source and Target Operators

The Properties panel of the Mapping Editor displays the properties of the selected operator. When you select an object on the canvas, its properties appear in the Properties panel along the left side of the editor. The panel contains the following categories of parameters for source and target operators:

Primary Source

For Oracle Application Embedded Data Warehouse (EDW) users, refer to EDW documentation. For all other users, disregard this parameter.

Loading Types for Oracle Target Operators

Select a loading type for each target operator using the Loading Type property.

For all Oracle target operators, except for dimensions and cubes, select one of the following options.

  • CHECK/INSERT: Warehouse Builder checks the target for existing rows. If there are no existing rows, Warehouse Builder inserts the incoming rows into the target.

  • DELETE: Warehouse Builder uses the incoming row sets to determine which of the rows on the target to delete.

  • DELETE/INSERT: Warehouse Builder deletes all rows in the target and then inserts the new rows.

  • INSERT: Warehouse Builder inserts the incoming row sets into the target. Insert fails if a row already exists with the same primary or unique key.

  • INSERT/UPDATE: For each incoming row, Warehouse Builder performs the insert operation first. If the insert fails, an update operation occurs. If there are no matching records for update, the insert is performed. If you select INSERT/UPDATE and the Default Operating Mode is row based, you must set unique constraints on the target. If the operating mode is set based, Warehouse Builder generates a MERGE statement.

  • None: Warehouse Builder performs no operation on the target. This setting is useful for testing. Extraction and transformations run but have no effect on the target.

  • TRUNCATE/INSERT: Warehouse Builder truncates the target and then inserts the incoming row set. If you choose this option, Warehouse Builder cannot roll back the operation even if the execution of the mapping fails. Truncate permanently removes the data from the target.

  • UPDATE: Warehouse Builder uses the incoming row sets to update existing rows in the target. If no rows exist for the specified match conditions, no changes are made.

    If you set the configuration property PL/SQL Generation Mode of the target module to Oracle 10g or Oracle 10gR2, Warehouse Builder updates the target in set based mode. The generated code includes a MERGE statement without an insert clause. For modules configured to generate 9i and earlier versions of PL/SQL code, Warehouse Builder updates the target in row based mode.

  • UPDATE/INSERT: For each incoming row, Warehouse Builder performs the update first. If you select UPDATE/INSERT and the Default Operating Mode for the target is set-based, Warehouse Builder generates a MERGE statement (a sketch of such a statement appears at the end of this section).

For dimensions and cubes, the Loading Type property has the following options: Load and Remove. Use Load to load data into the dimension or cube. Use Remove to remove data from the dimension or cube.
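
For the UPDATE, UPDATE/INSERT, and INSERT/UPDATE loading types in set based mode, the generated code centers on a MERGE statement. The following sketch shows the general shape of such a statement; the table and column names are hypothetical and the code Warehouse Builder actually generates differs in detail. As noted above, for the UPDATE loading type on targets generated for Oracle 10g, the insert clause is omitted.

    MERGE INTO customers_tgt tgt
    USING (SELECT cust_id, cust_name, credit_limit FROM customers_src) src
    ON (tgt.cust_id = src.cust_id)                      -- match columns (unique or primary key)
    WHEN MATCHED THEN
      UPDATE SET tgt.cust_name    = src.cust_name,      -- load columns
                 tgt.credit_limit = src.credit_limit
    WHEN NOT MATCHED THEN
      INSERT (cust_id, cust_name, credit_limit)
      VALUES (src.cust_id, src.cust_name, src.credit_limit);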

Loading Types for Flat File Targets

Configure SQL*Loader parameters to define SQL*Loader options for your mapping. The values chosen during configuration directly affect the content of the generated SQL*Loader and the runtime control files. SQL*Loader provides two methods for loading data:

  • Conventional Path Load: Executes an SQL INSERT statement to populate tables in an Oracle Database.

  • Direct Path Load: Eliminates much of the Oracle Database overhead by formatting Oracle data blocks and writing the data blocks directly to the database files. Because a direct load does not compete with other users for database resources, it can usually load data at or near disk speed.

    Certain considerations such as restrictions, security, and backup implications are inherent to each method of access to database files. See Oracle9i Database Utilities for more information.

    When designing and implementing a mapping that extracts data from a flat file using SQL*Loader, you can configure different properties affecting the generated SQL*Loader script. Each load operator in a mapping has an operator property called Loading Types. The value contained by this property affects how the SQL*Loader INTO TABLE clause for that load operator is generated. Although SQL*Loader can append, insert, replace, or truncate data, it cannot update any data during its processing. Table 25-1 lists the INTO TABLE clauses associated with each load type and their effect on data in the existing targets.

Table 25-1 Loading Types and INTO TABLE Relationship

Loading Type    | INTO TABLE Clause | Effect on Target with Existing Data
INSERT/UPDATE   | APPEND            | Adds additional data to the target.
DELETE/INSERT   | REPLACE           | Removes existing data and replaces it with new data (DELETE trigger fires).
TRUNCATE/INSERT | TRUNCATE          | Removes existing data and replaces it with new data (DELETE trigger does not fire).
CHECK/INSERT    | INSERT            | Assumes the target table is empty.
NONE            | INSERT            | Assumes the target table is empty.


Target Load Order

This property enables you to specify the order in which multiple targets within the same mapping are loaded. Warehouse Builder determines a default load order based on the foreign key relationships. You can overrule the default order.

Target Filter for Update

If the condition evaluates to true, the row is included in the update loading operation.

Target Filter for Delete

If the condition evaluates to true, the row is included in the delete loading operation.

Match By Constraint

When loading target operators with the UPDATE or the DELETE conditions, you can specify matching criteria. You can set matching and loading criteria manually or choose from several built-in options. Use Match By Constraint to indicate whether unique or primary key information on a target overrides the manual matching and loading criteria set on its attributes. When you click the property Match By Constraint, Warehouse Builder displays a drop-down list containing the constraints defined on that operator and the built-in loading options, as shown in Figure 25-1.

Figure 25-1 Match By Constraint Options for Operators


If you select All Constraints, all manual attribute load settings are overruled and the data is loaded as if the load and match properties of the target attributes were set as displayed in Table 25-2.

Table 25-2 All Constraints Target Load Settings

Load Setting                   | Key Attribute | All Other Attributes
Load Column when Updating Row  | NO            | YES
Match Column when Updating Row | YES           | NO
Match Column when Deleting Row | YES           | NO
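
For example, if the target's only key is a primary key on CUST_ID, selecting All Constraints makes a row based update behave like the following sketch: the key attribute is used only to match the row, and every other mapped attribute is loaded. The table and column names are hypothetical.

    UPDATE customers_tgt
       SET cust_name    = :src_cust_name,      -- non-key attributes are loaded
           credit_limit = :src_credit_limit
     WHERE cust_id      = :src_cust_id;        -- key attribute is used only for matching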


If you select No Constraints, all manual load settings are honored and the data is loaded accordingly.

If you select a constraint previously defined for the operator, all manual attribute load settings are overruled and the data is loaded as if the load and match properties of the target were set as displayed in Table 25-3.

Table 25-3 Target Load Settings for a Selected Constraint

Load Setting                   | Selected Key Attributes | All Other Attributes
Load Column when Updating Row  | NO                      | YES
Match Column when Updating Row | YES                     | NO
Match Column when Deleting Row | YES                     | NO


Reverting Constraints to Default Values

If you made changes at the attribute level and you want to default all settings, click Advanced. Warehouse Builder displays a drop-down list containing the loading options as shown in Figure 25-2. Warehouse Builder defaults the settings based on the constraint type you select.

Figure 25-2 Advanced Settings for Match By Constraint


For example, if you want to reset the match properties for all key attributes, click Advanced, select No Constraints, and click OK. Warehouse Builder overwrites the manual load settings and loads the data based on the settings displayed in Table 25-4.

Table 25-4 Default Load Settings for Advanced No Constraints

Load Setting                   | All Key Attributes | All Other Attributes
Load Column when Updating Row  | YES                | YES
Match Column when Updating Row | NO                 | NO
Match Column when Deleting Row | NO                 | NO


Alternatively, if you click Advanced and select All Constraints, Warehouse Builder overwrites the manual load settings and loads the data based on the settings displayed in Table 25-5.

Table 25-5 Default Load Settings for Advanced All Constraints

Load Setting                   | All Key Attributes | All Other Attributes
Load Column when Updating Row  | NO                 | YES
Match Column when Updating Row | YES                | NO
Match Column when Deleting Row | YES                | NO


Bound Name

The name used by the code generator. If an operator is currently bound and synchronized, then this property is read-only. If an operator is not yet bound, you can edit the bound name within the Mapping Editor before you synchronize it to a repository object.

Key Name

Name of the primary, foreign, or unique key.

Key Columns

Local columns that define this key. If the key contains more than one column, the column names are comma-delimited.

Key Type

Type of key, either primary, foreign, or unique.

Referenced Keys

If the operator contains a foreign key, Referenced Keys displays the primary key or unique key for the referenced object.

Error Table Name

The name of the error table that stores the invalid records during a load operation.

Roll up Errors

Select Yes to roll up records selected from the error table by the error name. Thus, all errors generated by a particular input record are rolled up into a single record, with the error names concatenated in the error name attribute.

Select Only Errors from this Operator

Rows selected from the error table contain only errors created by this operator in this map execution.

Setting Attribute Properties

For each attribute in a source and target operator, parameters are categorized into the following types:

Bound Name

Name used by the code generator to identify this item. By default, it is the same name as the item. This is a read-only setting when the operator is bound.

Data Type

Data type of the attribute.

Precision

The maximum number of digits for this attribute, if its data type is a number or a float. This is a read-only setting.

Scale

The number of digits to the right of the decimal point. This only applies to number attributes.

Length

The maximum length for a CHAR, VARCHAR, or VARCHAR2 attribute.

Fractional Seconds Precision

The number of digits in the fractional part of the datetime field. It can be a number between 0 and 9. This property is used only for TIMESTAMP data types.

Load Column When Inserting Row

This setting controls whether data moves to the mapped target attribute when a row is inserted. If you select Yes (default), the data reaches the mapped target; if you select No, the data is not loaded even though the attribute is mapped.

Load Column When Updating Row

This setting controls whether the selected attribute data moves to the mapped target attribute when a row is updated. If you select Yes (default), the data reaches the mapped target attribute; if you select No, the data is not loaded even though the attribute is mapped. If some columns of a unique key are not mapped, the unique key is not used to construct the match condition. If no columns of a unique key are mapped, Warehouse Builder displays an error. If a column that is not a key column is not mapped, it is not used in loading.

Match Column When Updating Row

This setting updates a data target row only if there is a match between the source attribute and mapped target attribute. If a match is found, then an update occurs on the row. If you set this property to Yes (default), the attribute is used as a matching attribute. If you use this setting, then all the key columns must be mapped. If there is only one unique key defined on the target entity, use constraints to override this setting.

Update: Operation

You can specify an update operation to be performed when Warehouse Builder locates a matching row and updates the target. An update operation is performed on the target attribute using the data of the source attribute. Table 25-6 lists the update operations you can specify and describes the update operation logic.

Table 25-6 Update Operations

Operation | Example                   | Result If Source Value = 5 and Target Value = 10
=         | TARGET = SOURCE           | TARGET = 5
+=        | TARGET = SOURCE + TARGET  | TARGET = 15 (5 + 10)
-=        | TARGET = TARGET - SOURCE  | TARGET = 5 (10 - 5)
=-        | TARGET = SOURCE - TARGET  | TARGET = -5 (5 - 10)
||=       | TARGET = TARGET || SOURCE | TARGET = 105 (10 concatenated with 5)
=||       | TARGET = SOURCE || TARGET | TARGET = 510 (5 concatenated with 10)


Match Column When Deleting Row

Deletes a data target row only if there is a match between the source attribute and mapped target attribute. If a match is found, then a delete occurs on the row. If you set this property to Yes (default), the attribute is used as a matching attribute. Constraints can override this setting.

Constant Operator

The Constant operator enables you to define constant values. You can place Constants anywhere in any PL/SQL or ABAP mapping.

Note: For SQL*Loader mappings, use a Data Generator Operator instead.


The Constant operator produces a single output group that contains one or more constant attributes. Warehouse Builder initializes the constant at the beginning of the execution of the mapping.

For example, use a constant operator to load the value of the current system date into a table operator. In the Expression Builder, select the public transformation SYSDATE from the list of pre-defined transformations. For more information on public transformations, see Chapter 9, "Using Oracle Warehouse Builder Transformations".

To define a constant operator in a PL/SQL or ABAP mapping:

  1. Drop a Constant operator onto the Mapping Editor canvas.

  2. Right-click the Constant operator and select Open Editor.

    The Constant Editor dialog is displayed.

  3. On the Output tab, click the Add button to add an output attribute.

    You can modify the name and the data type of the attribute.

  4. Click OK to close the Constant Editor dialog.

  5. In the Mapping Editor, select an output attribute on the Constant operator.

    The Properties panel of the Mapping Editor displays the properties of the output attribute.

  6. Click the Ellipsis button to the right of the Expression field.

    The Expression Builder dialog is displayed. Use this dialog to write an expression for the constant.

    The length, precision, and scale properties assigned to the output attribute must match the values returned by the expressions defined in the mapping. For VARCHAR, CHAR, or VARCHAR2 data types, enclose constant string literals within single quotes, such as, 'my_string'.

Construct Object Operator

The Construct Object operator enables you to create SQL object data types (object types and collection types), PL/SQL object types, and cursors in a mapping by using the individual attributes that they comprise.


For example, you can use a Construct Object operator to create a SQL object type that is used to load data into a table that contains a column whose data type is an object type. You can also use this operator to create the payload that loads data into an advanced queue. This operator also enables you to construct a SYS.REFCURSOR object.

The Construct Object operator has one input group and one output group. The input group represents the individual attributes that comprise the object type. The output of the Construct Object operator is an object type that is created using the individual attributes. In a mapping, the data type of the output attribute of the Construct Object operator should match the target attribute to which it is being mapped.

Figure 25-3 displays a mapping that uses a Construct Object operator. The source table CUST_SRC uses separate attributes to store each component of the customer address, but the target table CUSTOMERS uses an object type to store the customer address. To load data from the CUST_SRC table into the CUSTOMERS table, the customer address must first be converted to an object type whose signature matches that of the customer address in CUSTOMERS. The Construct Object operator takes the individual attributes from CUST_SRC that store the customer address as input and constructs an object type. The Construct Object operator is bound to the user-defined data type CUST_ADDR stored in the Warehouse Builder repository.

Figure 25-3 Construct Object Operator in a Mapping


To define a Construct Object operator in a mapping:

  1. Drag and drop a Construct Object operator onto the Mapping Editor canvas.

  2. Use the Add Construct Object dialog to create or select an object. For more information on these options, see Adding Operators that Bind to Repository Objects.

  3. Map the individual source attributes that are used to construct the object to the input group of the Construct Object operator.

  4. Map the output attribute of the Construct Object operator to the target attribute. The data type of the target attribute should be an object type.

    Note that the signatures of the output attribute of the Construct Object operator and the target attribute should be the same.
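
To illustrate what the operator produces, assume the CUST_ADDR type bound to the Construct Object operator is defined roughly as shown below. Loading the constructed output into the CUSTOMERS table is then equivalent to applying the type constructor to the individual source attributes. The type definition and the column names are hypothetical.

    -- Hypothetical object type bound to the Construct Object operator
    CREATE TYPE cust_addr AS OBJECT (
      street   VARCHAR2(100),
      city     VARCHAR2(50),
      state    VARCHAR2(2),
      zip_code VARCHAR2(10)
    );
    /

    -- Effect of mapping the constructed object to the CUSTOMERS target
    INSERT INTO customers (cust_id, address)
    SELECT s.cust_id,
           cust_addr(s.street, s.city, s.state, s.zip_code)  -- object type constructor
    FROM   cust_src s;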

Cube Operator

You use the Cube operator to source data from or load data into cubes.


The Cube operator contains a group with the same name as the cube. This group contains an attribute for each of the cube measures. It also contains the attributes for the surrogate identifier and business identifier of each dimension level that the cube references. Additionally, the Cube operator displays one group for each dimension that the cube references.

You can bind a Cube operator to a cube defined in any Oracle module in the current project. You can also synchronize the cube operator and update it with changes made to the cube to which it is bound. To synchronize a Cube operator, right-click the Cube operator on the Mapping Editor canvas and select Synchronize.

To create a mapping that contains a Cube operator:

  1. Drag and drop a Cube operator onto the Mapping Editor canvas.

    Warehouse Builder displays the Add Cube dialog.

  2. Use the Add Cube dialog to create or select a cube. For more information on these options, see "Adding Operators that Bind to Repository Objects".

    Alternatively, you can perform steps 1 and 2 as one step. In the Mapping Editor, navigate to the Selection Tree tab of the Explorer window. Select the cube and drag and drop it onto the Mapping Editor canvas.

  3. Map the attributes from the Cube operator to the target or map attributes from the source operator to the Cube operator.

When you load a cube, you map the data flow from the source to the attribute that represents the business identifier of the referencing level. Warehouse Builder performs a lookup and then stores the corresponding surrogate ID in the cube table. For example, when you map the attributes from the dimension operator to the cube operator, Warehouse Builder creates a Key Lookup operator in cases where it is needed to look up the surrogate identifier of the dimension.

Note that if there is a possibility of the lookup condition returning multiple rows, you must ensure that only one row is selected out of the returned rows. You can do this by using the Deduplicator operator or Filter operator.
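
Conceptually, the lookup that Warehouse Builder performs for each referenced dimension resembles the following sketch, in which the business identifier from the source is joined to the dimension to obtain the surrogate identifier stored in the cube table. The cube, dimension, and column names are hypothetical.

    INSERT INTO sales_cube (product_dim_key, order_date, amount)
    SELECT d.dimension_key,                     -- surrogate identifier from the dimension
           s.order_date,
           s.amount
    FROM   sales_src s
    JOIN   products_dim d
      ON   d.product_code = s.product_code;     -- match on the business identifier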

Figure 25-4 displays a mapping that uses the Cube operator as a target. Data from three source tables is joined using a Joiner operator. An Aggregator operator is used to aggregate the joined data with the data from another source table. The output of the Aggregator operator is mapped to the Cube operator.

Figure 25-4 Mapping that Loads a Cube


Cube Operator Properties

The cube operator has the following properties that you can use to load a cube.

  • Target Load Order: This property determines the order in which multiple targets within the same mapping are loaded. Warehouse Builder determines a default order based on the foreign key relationships. You can use this property to overrule the default order.

  • Solve the Cube: Select YES for this property to aggregate the cube data while loading the cube. This increases the load time, but decreases the query time. The data is first loaded and then aggregated.

  • Incremental Aggregation: Select this option to perform incremental loads. This means that if the cube has been solved earlier, subsequent loads will only aggregate the new data.

  • AW Staged Load: If true, the set-based AW load data is staged into a temporary table before loading into the AW.

  • AW Truncate Before Load: Indicates whether all existing cube values should be truncated before loading the cube. Setting this property to YES truncates existing cube data.

Data Generator Operator

Use a single Data Generator operator to introduce constants or sequences into a SQL*Loader mapping. Each SQL*Loader mapping can contain a maximum of one Data Generator operator.


Recommendation:

For PL/SQL mappings use a Constant Operator or Sequence Operator instead of a Data Generator.

For mappings with flat file sources and targets, the Data Generator operator connects the mapping to SQL*Loader to generate the data stored in the database record.

The following functions are available:

  • RECNUM

  • SYSDATE

  • SEQUENCE

Warehouse Builder can generate data by specifying only sequences, record numbers, system dates, and constants as field specifications. SQL*Loader inserts as many records as are specified by the LOAD keyword.

The Data Generator operator has one output group with predefined attributes corresponding to Record Number, System Date, and a typical Sequence. You can create new attributes but not alter the predefined attributes.

Figure 25-5 shows a mapping that uses the Data Generator operator to obtain the current system date. The data from a flat file CUSTOMERS_TXT is loaded into a staging table CUST_STAGE. The staging table contains an additional attribute for the date the data was loaded. The SYSDATE attribute of the Data Generator operator is mapped to the DATE_LOADED attribute of the staging table CUST_STAGE.

Figure 25-5 Data Generator in a Mapping


To define a Data Generator in a SQL*Loader mapping:

  1. Drop a Data Generator operator onto the Mapping Editor canvas.

  2. Right-click the operator and select Open Details.

    The DATA_GENERATOR Editor is displayed.

  3. Select the Output Attributes tab of the DATA_GENERATOR Editor.

    Warehouse Builder displays the pre-defined output attributes RECNUM, SYSDATE1, and SEQUENCE.

  4. On the Output Attributes tab, define the properties and type an optional description for the predefined output attributes.

  5. Click OK to close the DATA_GENERATOR Editor.

  6. On the operator in the mapping canvas, select the RECNUM attribute.

    Warehouse Builder displays the properties of this attribute in the Properties panel of the Mapping Editor.

  7. In the Expression field, click the Ellipsis button to open the Expression Builder and define an expression.

  8. Repeat steps 6 and 7 for the SEQUENCE attribute.

Setting a Column to the Data File Record Number

Use the RECNUM keyword to set an attribute to the number of the record from which it was loaded. Records are counted sequentially from the beginning of the first data file, starting with record 1. RECNUM increments as each logical record is assembled. It increments for records that are discarded, skipped, rejected, or loaded. For example, if you use the option SKIP=10, the first record loaded has a RECNUM of 11.

Setting a Column to the Current Date

A column specified with SYSDATE gets the current system date, as defined by the SQL language SYSDATE function.

The target column must be of type CHAR or DATE. If the column is of type CHAR, the date is loaded in the format dd-mon-yy. If the system date is loaded into a DATE column, then you can access it in the time format and the date format. A new system date/time is used for each array of records inserted in a conventional path load and for each block of records loaded during a direct path load.

Setting a Column to a Unique Sequence Number

The SEQUENCE keyword ensures a unique value for a column. SEQUENCE increments for each record that is loaded or rejected. It does not increment for records that are discarded or skipped.

The combination of column name and the SEQUENCE function is a complete column specification. Table 25-7 lists the options available for sequence values.

Table 25-7 Sequence Value Options

Value       | Description
column_name | The name of the column in the database to which the sequence is assigned.
SEQUENCE    | Specifies the value for a column.
integer     | Specifies the beginning sequence number.
COUNT       | The sequence starts with the number of records already in the table plus the increment.
MAX         | The sequence starts with the current maximum value for the column plus the increment.
incr        | The value by which the sequence number increments after a record is loaded or rejected.


If records are rejected during loading, Warehouse Builder preserves the sequence of inserts despite data errors. For example, if four rows are assigned sequence numbers 10, 12, 14, and 16 in a column, and the row with 12 is rejected, Warehouse Builder inserts the valid rows with assigned numbers 10, 14, and 16, not 10, 12, 14. When you correct the rejected data and reinsert it, you can manually set the columns to match the sequence.

Dimension Operator

You use the Dimension operator to source data from or load data into dimensions and slowly changing dimensions.


The Dimension operator contains one group for each level in the dimension. The groups use the same name as the dimension levels. The level attributes of each level are listed under the group that represents the level.

You cannot map a data flow to the surrogate identifier attribute or the parent surrogate identifier reference attribute of any dimension level. Warehouse Builder automatically populates these columns when it loads a dimension.

You can bind and synchronize a Dimension operator with a dimension stored in the repository. To avoid errors in the generated code, ensure that the repository dimension is deployed successfully before you deploy the mapping that contains the Dimension operator. To synchronize a Dimension operator with the repository dimension, right-click the dimension on the Mapping Editor canvas and select Synchronize.

To use a Dimension operator in a mapping:

  1. Drag and drop a Dimension operator onto the Mapping Editor canvas.

    Warehouse Builder displays the Add Dimension dialog.

  2. Use the Add Dimension dialog to select a dimension.

    Alternately, you can combine steps 1 and 2 into one single step. In the Mapping Editor, navigate to the Selection Tree tab of the Explorer window. Select the dimension and drag and drop it onto the Mapping Editor canvas.

  3. Map the attributes from the Dimension operator to the target or map attributes from the source to the Dimension operator.

Dimension Operator Properties

Use the Properties panel of the Mapping Editor to set options that define additional details about loading or removing data from a dimension or slowly changing dimension.

You can set properties at the following three levels: operator, group that represents each level in the dimension, and level attribute.

The Dimension operator has the following properties.

Loading Type Represents the type of operation to be performed on the dimension. The options you can select are as follows:

  • LOAD: Select this value to load data into the dimension or slowly changing dimension.

  • REMOVE: Select this value to delete data from the dimension or slowly changing dimension.

While loading or removing data, Warehouse Builder performs a lookup to determine if the source record exists in the dimension. The matching is performed by the natural key identifier. If the record exists, a REMOVE operation removes existing data. A LOAD operation updates existing data and then loads new data.

Note that when you remove a parent record, the child records will have references to a non-existent parent.

Target Load Order Specifies the order in which multiple targets within the same mapping are loaded. Warehouse Builder determines a default order based on the foreign key relationships. Use this property to overrule the default order.

Default Effective Time of Open Record This property is applicable for Type 2 SCDs only. It represents the default value set for the effective time of the current record. In cases where data is loaded into the Type 2 SCD on a daily basis, you can set this property to SYSDATE.

Default Expiration Time of Open Record This property is applicable for Type 2 SCDs only. It represents a date value that is used as the expiration time of a newly created open record for all the levels in the dimension. The default value is NULL.

Note that even if you set a value for this property, Warehouse Builder ignores it. You must set the Default Expiration Time of Open Record for each level in the Type 2 Slowly Changing Dimension. This property will be implemented in future releases.

Type 2 Extract/Remove Current Only This property is applicable only for Type 2 SCDs. Use this property to specify which records are to be extracted or removed. You can set the following values for this property:

  • YES: When you are extracting data from the Type 2 SCD, only the current record is extracted. When you are removing data from a Type 2 SCD, only the current record is closed (expiration date is set either to SYSDATE or to the date defined in the Default Expiration Time of Open Record property).

    Note that in the case of a Type 2 SCD that uses a snowflake implementation, you cannot remove a record if it has child records that have a Type 2 trigger.

  • NO: When you are extracting data from a Type 2 SCD, all the records, including historical records, that match the natural identifier are extracted from the dimension.

    When you are removing data from the Type 2 SCD, all records, including historical records, that match the natural identifier are closed. However, the records will not be removed from the dimension. Any child records of the closed record will remain linked to a closed parent.

AW Truncate Before Load This property is applicable for MOLAP dimensions only. It indicates whether all existing dimension data should be truncated before loading fresh data. Set this property to YES to truncate any existing dimension data before you load fresh data.

AW Staged Load This property is applicable for MOLAP dimensions only. Select this option to stage the set-based load data into a temporary table before loading into the analytic workspace.

Each group in the Dimension operator represents a dimension level. You can set the following properties for each dimension level:

  • Extracting Type: Represents the extraction operation to be performed when the dimension is used as a source. Select Extract Current Only (Type 2 Only) to extract current records only from a Type 2 SCD. This property is valid only for Type 2 SCDs. Select Extract All to extract all records from the dimension or SCD.

  • Default Expiration Time of Open Record: This property is applicable for Type 2 SCDs only. It represents a date value that is used as the expiration time of a newly created open record. The default value is NULL.

Dimension Operator as a Source

You can source data stored in a repository dimension or SCD by using a Dimension operator in a mapping.

Sourcing Data From Dimensions To source data stored in a dimension, create a mapping that contains a Dimension operator. Ensure that the Dimension operator is bound to the repository dimension that contains the source data. Then map the attributes from the dimension levels to the target operator. See Figure 25-4 for an example of a mapping that sources data from a dimension.

Sourcing Data From Type 2 Slowly Changing Dimensions The Dimension operator enables you to source data stored in a Type 2 SCD. Since a Type 2 SCD stores multiple versions of a single record, you must specify which version of the record you want to use.

Use the Extracting Type property of the group that represents each dimension level to specify the version of the level record from which you want to source data. To load all versions of a particular level record select Extract All. To retrieve only the latest version of a level record, set the Extracting Type property to Extract Current Only (Type 2 Only).
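
Assuming the level stores its expiration time in a column named EXPIRATION_DATE, and that open records carry a NULL expiration (the default for Default Expiration Time of Open Record), Extract Current Only (Type 2 Only) behaves roughly like the following filter. The table and column names are illustrative only.

    -- Extract Current Only: return just the open version of each level record
    SELECT product_code, description, package_type
    FROM   products_dim
    WHERE  expiration_date IS NULL;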

To set the properties of a dimension level, select the group that represents the level in the Dimension operator. The Group Properties panel displays the properties of the level.

Sourcing Data From Type 3 Slowly Changing Dimensions To source data from a Type 3 SCD, create a mapping that contains a Dimension operator. Bind the Dimension operator to the Type 3 SCD in the repository from which you want to source data. A Type 3 SCD uses separate attributes to store historic values of versioned attributes. Thus, to source historic data, map the attributes that represent the previous values of versioned attributes to the target. To source current data, map the attributes that store the level attributes to the target.

Figure 25-6 displays a mapping that sources the historic data records from the PRODUCTS Type 3 dimension. In this example, to source historic data, use the PREV_DESCRIPTION, PREV_PACKAGE_TYPE, or PREV_PACKAGE_SIZE attributes. To source current data, use DESCRIPTION, PACKAGE_TYPE, or PACKAGE_SIZE.

Figure 25-6 Mapping that Sources Data from a Type 3 SCD


Dimension Operator as a Target

You use a Dimension operator as a target in a mapping to load data into dimensions and SCDs. Define a data flow from the operators that represent the source objects to the dimension or SCD.

Warehouse Builder loads data into the dimension starting from the highest level.

Note:

You cannot map a data flow to the surrogate identifier or the parent surrogate identifier reference of a level.

Loading Data Into Dimensions Before loading records into a dimension, Warehouse Builder checks if a record with the same business identifier as the one being loaded exists in the dimension. If a similar record does not exist, the record from the source is added to the dimension. If a similar record exists, Warehouse Builder updates the current record to reflect the new attribute values.

Figure 25-7 displays a mapping that loads data into the PRODUCTS dimension, represented by the operator PRODUCTS_OUT. The source data is stored in two tables, CATEGORIES and PRODUCTS. The CATEGORIES table stores both the category and subcategory information. So the data from this table is filtered using two Filter operators CATS and SUBCATS. The filtered data is then mapped to the CATEGORIES level and the SUBCATEGORIES dimension levels. The TOTAL level of the dimension is loaded using a Constant operator. The data from the PRODUCTS table is mapped directly to the PRODUCTS level in the dimension.

Figure 25-7 Loading the Products Dimension


When you define a data flow to load a dimension, Warehouse Builder creates an in-line pluggable mapping that actually loads data into the dimension. To view this pluggable mapping, select the Dimension operator on the Mapping Editor canvas and click the Visit Child Graph icon on the graphical toolbar.

Loading Data Into Type 2 SCDs Before loading records into the Type 2 SCD, Warehouse Builder first checks if a record with the same business identifier exists in the Type 2 SCD. If the record does not exist, it is added to the Type 2 SCD. If the record already exists, Warehouse Builder performs the following steps:

  • Marks the existing record as closed by setting the value specified in the property Default Expiration Time of Open Record.

  • Creates a new record using the changed attribute values. If the effective date input for the level is not mapped, the effective time and expiration time are set using the Default Effective Time of Current Record and the Default Expiration Time of Current Record properties of the operator. If the effective date input for the level is mapped, then the effective time of the new record is set to the value that is obtained from the effective date input data flow. The effective date input, if connected, represents the actual effective date of each individual new record.

    The expiration date of the old record is set using the Default Expiration Time of Current Record property, regardless of the input connections.
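
In row based terms, and with hypothetical surrogate key, sequence, and column names, the two steps above amount to something like the following sketch.

    -- Step 1: close the existing open record for this business identifier
    UPDATE products_dim
       SET expiration_date = :default_expiration_time   -- the configured default expiration time
     WHERE product_code    = :src_product_code
       AND expiration_date IS NULL;

    -- Step 2: insert a new open record carrying the changed attribute values
    INSERT INTO products_dim
      (dimension_key, product_code, description, effective_date, expiration_date)
    VALUES
      (products_seq.NEXTVAL, :src_product_code, :src_description,
       :src_effective_date, NULL);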

To load data into a slowly changing dimension, you map the source attributes to the attributes in the Type 2 SCD. Figure 25-8 displays a mapping that loads data into the PRODUCTS level of a Type 2 SCD.

Figure 25-8 Loading Data Into a Type 2 SCD


In this mapping, the effective time of a record is loaded from the source. You can also choose to set this to a default value, such as SYSDATE, using the Default Effective Time of Current Record property.

In addition to this, you need to store the closure date of historical records. In the mapping shown in Figure 25-8, the expiration time attribute is not loaded from the source. It is set to a default value using the Default Expiration Time of Current Record property.

To set default values for the expiration time or effective time for a level record:

  1. In the Dimension operator, select the group that represents the level for which default values for the effective time and expiration time attributes should be set. For example, in the mapping shown in Figure 25-8, select the PRODUCTS group on the Dimension operator.

    The Group properties panel displays the properties of the selected level.

  2. Use Default Effective Time of Current Record to specify the value to be stored as the effective time of the record being loaded.

  3. Use Default Expiration Time of Current Record to specify the value to be stored as the expiration time of the record being loaded.

Loading Data Into Type 3 SCDs Before loading data into a Type 3 SCD, Warehouse Builder checks if a record with the same business identifier exists in the Type 3 SCD. If the record does not exist, it is added. If the record already exists, Warehouse Builder performs the following steps:

  • Moves the values of the versioned attributes to the attributes that store the previous values of versioned attributes.

  • Updates the record with the values from the source record.
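
With hypothetical column names similar to those in Figure 25-9, the two steps above behave roughly as follows. Because all assignments in a single UPDATE statement see the pre-update column values, the current values are preserved in the previous-value attributes before being overwritten.

    UPDATE products_dim
       SET prev_description  = description,        -- keep the old value
           prev_package_size = package_size,
           description       = :src_description,   -- overwrite with the source value
           package_size      = :src_package_size,
           effective_date    = :src_effective_date
     WHERE product_code = :src_product_code;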

Figure 25-9 displays a mapping that loads data into the PRODUCTS level of the Type 3 SCD. In this mapping, the effective time of the current record is loaded from the source. You can also use the Default Effective Time of Current Record property to set a default value for effective time of a level record.

Figure 25-9 Loading a Type 3 SCD


You cannot map a data flow to the attributes that represent the previous values of versioned attributes. For example, in the mapping shown in Figure 25-9, you cannot map an attribute to the PREV_PACK_SIZE and PREV_DESCRIPTION attributes of the PRODUCTS level.

External Table Operator

The External Table operator enables you to source data stored in external tables in the Warehouse Builder repository. You can then load the external table data into another repository object or perform transformations on the data. For example, you can source data stored in an external table, transform the data using mapping operators, and then load the data into a dimension or a cube.


Figure 25-10 displays a mapping that uses the External Table operator. The External Table operator EXPENSE_CATEGORY_EXT is bound to the external table of the same name in the repository. The data stored in this external table is used to load the dimension EXPENSE_CATEGORIES.

Figure 25-10 External Table Operator in a Mapping


To create a mapping that contains an External Table operator:

  1. Drag and drop an External Table operator onto the Mapping Editor canvas.

    Warehouse Builder displays the Add External Table dialog.

  2. Use the Add External Table dialog to create or select an external table. For more information on these options, see Adding Operators that Bind to Repository Objects.

  3. Map the attributes from the output group of the External Table operator to the target operator or the intermediate transformation operator.

Expand Object Operator

The Expand Object operator enables you to expand an object type and obtain the individual attributes that comprise the object type.


You can bind and synchronize an Expand Object operator with a repository object type. To avoid generation errors in the mapping, ensure that you deploy the repository object type before you deploy the mapping.

The Expand Object operator has one input group and one output group. The input group represents the object type that you want to expand in order to obtain its individual attributes. When you bind an Expand Object operator to a repository object, the output group of the operator contains the individual attributes that comprise the object type.

To successfully deploy a mapping that contains an Expand Object operator, ensure the following conditions are satisfied.

  • The schema that contains the source tables must be on the same instance as the warehouse schema.

  • The warehouse schema is granted the SELECT privilege on the source tables.

  • The warehouse schema is granted the EXECUTE privilege on all the object types and nested tables used in the Expand Object operator.

Figure 25-11 displays a mapping that uses an Expand Object operator. The source table CUSTOMERS contains a column CUSTOMER_ADDRESS of data type ADDR_TYPE, a SQL object type. But the target table CUST contains four different columns, of Oracle built-in data types, that store each component of the customer address. To obtain the individual attributes of the column CUSTOMER_ADDRESS, create an Expand Object operator that is bound to the object type ADDR_TYPE. You then map the CUSTOMER_ADDRESS column to the input group of an Expand Object operator. The output group of the Expand Object operator contains the individual attributes of the column CUSTOMER_ADDRESS. Map these output attributes to the target operator.
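
In SQL terms, expanding the CUSTOMER_ADDRESS column is comparable to selecting its attributes individually, as in the following sketch. The attribute names of ADDR_TYPE and the target columns are hypothetical.

    -- Break the ADDR_TYPE column into its component attributes
    INSERT INTO cust (cust_id, street, city, state, zip_code)
    SELECT c.cust_id,
           c.customer_address.street,
           c.customer_address.city,
           c.customer_address.state,
           c.customer_address.zip_code
    FROM   customers c;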

Figure 25-11 Expand Operator in a Mapping


To define an Expand Object operator in a mapping:

  1. Drag and drop an Expand Object operator onto the Mapping Editor canvas.

  2. Use the Add Expand Object dialog to create or select an object. For more information on these options, see Adding Operators that Bind to Repository Objects.

  3. Map the source attribute that needs to be expanded to the input group of the Expand Object operator.

    Note that the signature of the input object type should be the same as that of the Expand Object operator.

  4. Map the output attributes of the Expand Object operator to the target attributes.

Mapping Input Parameter Operator

You can introduce information external to Warehouse Builder as input into a mapping using a Mapping Input Parameter.


For example, you can use a Mapping Input Parameter operator to pass SYSDATE to a mapping that loads data to a staging area. Use the same Mapping Input Parameter to pass the timestamp to another mapping that loads the data to a target.

When you generate a mapping, Warehouse Builder creates a PL/SQL package. Mapping input parameters become part of the signature of the main procedure in the package.

The Mapping Input Parameter has a cardinality of one. It creates a single row set that can be combined with another row set as input to the next operator.

The names of the output attributes become the names of the mapping input parameters. The parameters can be used by connecting the attributes of the Mapping Input Parameter operator within the Mapping Editor. You can have only one Mapping Input Parameter operator in a mapping.

When you define the Mapping Input Parameter, you specify a data type and an optional default value.
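
For example, if a mapping named LOAD_CUST_STAGE defines a Mapping Input Parameter with an output attribute LOAD_DATE of type DATE, the signature of the main procedure in the generated package resembles the following sketch. The package and procedure names are illustrative only, and the actual generated signature may contain additional parameters.

    -- Hypothetical excerpt of a generated package specification
    CREATE OR REPLACE PACKAGE load_cust_stage AS
      PROCEDURE main (
        load_date IN DATE DEFAULT SYSDATE   -- Mapping Input Parameter attribute
      );
    END load_cust_stage;
    /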

To define a Mapping Input Parameter operator in a mapping:

  1. Drag and drop a Mapping Input Parameter operator onto the Mapping Editor canvas.

  2. Right-click the Mapping Input Parameter operator and select Open Details.

    The INPUT_PARAMETER Editor is displayed.

  3. Select the Output tab and click Add to add output attributes.

    You can rename the attributes and define the data type and other attribute properties.

  4. Click OK to close the INPUT_PARAMETER Editor.

  5. Connect the output attribute of the Mapping Input Parameter operator to an attribute in the target operator as shown in Figure 25-12.

    Figure 25-12 Mapping Editor Showing A Mapping Input Parameter


Mapping Output Parameter Operator

Use a single Mapping Output Parameter operator to send values out of a PL/SQL mapping to applications external to Warehouse Builder.


A Mapping Output Parameter operator is not valid for a SQL*Loader mapping. When you generate a mapping, Warehouse Builder creates a PL/SQL package. Mapping Output Parameters become part of the signature of the main procedure in the package.

The Mapping Output Parameter operator has only one input group and no output groups. You can have only one Mapping Output Parameter operator in a mapping. Only attributes that are not associated with a row set can be mapped into a Mapping Output Parameter operator. For example, a constant, an input parameter, output from a pre-mapping process, or output from a post-mapping process can all contain attributes that are not associated with a row set.

To define a Mapping Output Parameter operator in a mapping:

  1. Drag and drop a Mapping Output Parameter operator onto the Mapping Editor canvas.

  2. Right-click the Mapping Output Parameter operator and select Edit.

  3. Select the Input Attributes tab and click Add to add input attributes.

    You can rename the attributes and define the data type and other attribute properties.

    Figure 25-13 displays an example of a Mapping Output Parameter operator used in a mapping.

    Figure 25-13 Mapping Editor Showing An Output Parameter Operator


Materialized View Operator

The Materialized View operator enables you to source data from or load data into a materialized view stored in the Warehouse Builder repository.


For example, you can use the data stored in a materialized view to load a cube. The Materialized View operator has one Input/Output group called INOUTGRP1. You cannot add additional groups to this operator, but you can add attributes to the existing Input/Output group.

You can bind and synchronize a Materialized View operator to a repository materialized view. The repository materialized view must be deployed before the mapping that contains the Materialized View operator is generated to avoid errors in the generated code package.

Figure 25-14 displays a mapping that uses a Materialized View operator. The data from the two source tables PRODUCTS and ALL_SALES is joined using a Joiner operator. This data is then aggregated using an Aggregator operator. The aggregated data is used to load the materialized view SALES_MV.

Figure 25-14 Mapping that Contains a Materialized View Operator


To create a mapping that contains a Materialized View operator:

  1. Drag and drop a Materialized View operator onto the Mapping Editor canvas.

    Warehouse Builder displays the Add Materialized View dialog.

  2. Use the Add Materialized View dialog to create or select a materialized view. For more information on these options, see "Adding Operators that Bind to Repository Objects".

  3. Map the attributes of the Materialized View operator.

    If you are using the materialized view operator as a target, connect the source attributes to the Materialized View operator attributes. If you are using the materialized view as a source, connect the Materialized View operator attributes to the target.

Sequence Operator

A Sequence operator generates sequential numbers that increment for each row.


For example, you can use the Sequence operator to create surrogate keys while loading data into a dimension table. You can connect a Sequence to a target operator input or to the inputs of other types of operators. You can combine the sequence outputs with outputs from other operators.

Because sequence numbers are generated independently of tables, the same sequence can be used for multiple tables. Sequence numbers may not be consecutive because the same sequence can be used by multiple sessions.

This operator contains an output group containing the following output attributes:

  • CURRVAL: Generates from the current value.

  • NEXTVAL: Generates a row set of consecutively incremented numbers beginning with the next value.

You can bind and synchronize Sequences to a repository sequence in one of the modules. The repository sequence must be generated and deployed before the mapping containing the Sequence is deployed to avoid errors in the generated code package. See "Adding Operators that Bind to Repository Objects" for more information.

Generate mappings with sequences using row based mode. Sequences are incremented even if rows are not selected. If you want a sequence to start from the last number, then do not run your SQL package in set based or in set based with failover operating modes. See "Runtime Parameters" for more information on configuring mode settings.
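
Conceptually, mapping NEXTVAL to the target key produces an insert of the following form for each incoming row set. The sequence, table, and column names are hypothetical.

    -- NEXTVAL supplies the surrogate key for each incoming row
    INSERT INTO customers_tgt (cust_key, cust_name, city)
    SELECT cust_seq.NEXTVAL, s.cust_name, s.city
    FROM   cust_src s;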

Figure 25-15 shows a mapping that uses a Sequence operator to automatically generate the primary key of a table. The NEXTVAL attribute of the Sequence operator is mapped to an input attribute of the target. The other input attributes from the source table are mapped directly to the target.

Figure 25-15 Mapping Sequence Operator in a Mapping


To define a Mapping Sequence operator in a mapping:

  1. Drag and drop the Sequence operator onto the Mapping Editor canvas.

    Warehouse Builder displays the Add Sequence dialog.

  2. Use the Add Sequence dialog to create or select a sequence. For more information on these options, see "Adding Operators that Bind to Repository Objects".

  3. Connect the required output attribute from the Sequence operator to a target attribute.

Table Operator

The Table operator enables you to source data from and load data into tables stored in the Warehouse Builder repository.


You can bind and synchronize a Table operator to a repository table. The repository table must be deployed before the mapping that contains the Table operator is generated to avoid errors in the generated code package.

Figure 25-15 displays a mapping that uses Table operators as both source and target. Figure 25-5 displays a mapping that uses the Table operator as a target.

To define a Table operator in a mapping:

  1. Drag and drop a Table operator onto the Mapping Editor canvas.

    Warehouse Builder displays the Add Table dialog.

  2. Use the Add Table dialog to create or select a table. For more information on these options, see "Adding Operators that Bind to Repository Objects".

  3. Map the attributes of the Table operator.

    If you are using the table as a target, connect the source attributes to the Table operator attributes. If you are using the table as a source, connect the Table operator attributes to the target.

Varray Iterator Operator

When you have an object of type nested table or varray, you can use the Varray Iterator operator to iterate through the values in the table type.


This operator accepts a table type attribute as the source and generates a value that is of the base element type defined in the nested table or varray type. If the operator is bound, reconciliation operations are performed on the operator. Reconciliation operations are not supported for unbound Varray Iterator operators.

You can create the Varray Iterator operator as either a bound operator or an unbound operator. You cannot synchronize or validate an unbound Varray Iterator operator. It has only one input group and one output group. You can have an input group with attributes of other data types. However, there must be at least one table type attribute.

The attributes of the output group are a copy of the attributes of the input group. The only difference is that instead of the table type to which the operator is bound (which is one of the input group attributes), the output group will have an attribute that is the same as the base element type of the input group. The input group is editable. The output group is not editable.
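
The iteration corresponds to Oracle's TABLE collection expression: each element of the nested table or varray becomes a separate output row, as in the following sketch. The type and column names are hypothetical, and the varray is assumed to hold a scalar base type.

    -- One output row per element of the PHONE_NUMBERS varray
    SELECT c.cust_id,
           t.column_value AS phone_number     -- base element type of the collection
    FROM   customers c,
           TABLE(c.phone_numbers) t;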

Figure 25-16 Varray Iterator Operator in a Mapping


To define a Varray Iterator operator in a mapping:

  1. Drag and drop a Varray Iterator operator onto the Mapping Editor canvas.

    Warehouse Builder displays the Add Varray Iterator dialog.

  2. From the Add Varray Iterator dialog, choose either an unbound operator or a bound operator.

    • If you select the unbound operator, then a Varray Iterator operator with no attributes is created. You will have to create these attributes manually.

    • If you select the bound operator, then you must choose one of the available nested table or varray types shown in the tree. The output attribute is the same as the base element.

  3. Click OK.

  4. Map the attributes of the Varray Iterator operator.

For an unbound operator, right-click the Varray Iterator operator on the Mapping Editor canvas and select Open Details to open the Varray Iterator editor. Use the Add button to add attributes to the input group. In the output group, you can change only the data type of the attributes.

View Operator

The View operator enables you to source data from or load data into a view stored in the Warehouse Builder repository.

You can bind and synchronize a View operator to a repository view. To avoid errors in the generated code package, deploy the repository view before you generate the mapping that contains the View operator.

To define a View operator in a mapping:

  1. Drag and drop a View operator onto the Mapping Editor canvas.

    Warehouse Builder displays the Add View dialog.

  2. Use the Add View dialog to create or select a view. For more information on these options, see "Adding Operators that Bind to Repository Objects".

  3. Map the attributes of the View operator.

    If you are using the view as a target, connect the source attributes to the View operator attributes. If you are using the view as a source, connect the View operator attributes to the target.

Using Remote and non-Oracle Source and Target Operators

You can bind a target operator in a mapping to an object in a remote Oracle Database location or in a non-Oracle database location such as SQL Server or DB2. Operators bound to objects in remote Oracle locations are referred to as remote targets, and operators bound to objects in non-Oracle database locations are referred to as non-Oracle targets. You access these targets through database links, which are created from the locations. SAP targets are not supported; Warehouse Builder does not generate ABAP to populate SAP systems.
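
Conceptually, the DML that Warehouse Builder generates qualifies the target object with the database link derived from the target location. The following sketch is for illustration only; the link name SALES_LINK and the table names are hypothetical.

    -- Hypothetical sketch: DML against a remote or non-Oracle target is issued
    -- over a database link created from the target location.
    INSERT INTO sales_tgt@sales_link (sale_id, amount)
    SELECT s.sale_id, s.amount
    FROM   sales_src s;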

There are certain restrictions on using remote or non-Oracle targets in a mapping, as described in the following sections:

Limitations of Using non-Oracle or Remote Targets

The following limitations apply when you use a remote or non-Oracle target in a mapping:

  • You cannot set the Loading Type property of the target operator to TRUNCATE/INSERT.

    This results in a validation error when you validate the mapping.

  • For non-Oracle targets, setting the Loading Type property of the target operator to INSERT/UPDATE produces the same result as setting the loading type to INSERT.

  • The RETURNING clause is not supported in a DML statement.

    The RETURNING clause enables you to obtain the ROWIDs of the rows that are loaded into the target in row-based mode. These ROWIDs are recorded by the runtime auditing system. For a remote or non-Oracle target, however, the RETURNING clause is not generated, and nulls are passed to the runtime auditing system for the ROWID field, as illustrated in the sketch following this list.

  • In set-based mode, you cannot load data from an Oracle Database into a remote or non-Oracle target. All other modes, including set-based failover, are supported.

    When you set the Operating Mode property of the target operator to Set-based, a runtime error occurs.
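
The following sketch, using hypothetical table and variable names, illustrates the difference the RETURNING clause makes in row-based code: for a local Oracle target the ROWID of each loaded row can be captured for runtime auditing, whereas for a remote or non-Oracle target the clause is omitted and a null value is reported.

    -- Local Oracle target (hypothetical): the ROWID is captured for auditing.
    INSERT INTO sales_tgt (sale_id, amount)
    VALUES (v_sale_id, v_amount)
    RETURNING ROWID INTO v_rowid;

    -- Remote or non-Oracle target (hypothetical): no RETURNING clause is
    -- generated, so NULL is passed for the ROWID audit value.
    INSERT INTO sales_tgt@sales_link (sale_id, amount)
    VALUES (v_sale_id, v_amount);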

Warehouse Builder Workarounds for non-Oracle and Remote Targets

When you use a remote or non-Oracle target in a mapping, Warehouse Builder uses default workarounds for certain restricted activities. These workarounds are listed for your information only. You need not explicitly do anything to enable these workarounds.

The default workarounds that Warehouse Builder uses for a remote or a non-Oracle target are as follows:

  • When you set the loading type of a target to INSERT/UPDATE or UPDATE/INSERT in Oracle9i, or to UPDATE in Oracle10g, Warehouse Builder generates a MERGE statement to implement the mapping in set-based mode. However, a MERGE statement cannot be run against a remote or non-Oracle target, so when a mapping contains such a target, Warehouse Builder generates code without a MERGE statement. The generated code is the same as the code generated when the PL/SQL generation mode is set to Oracle8i (see the sketch following this list).

  • For set-based DML statements that reference a database sequence and load into a remote or non-Oracle target, the GLOBAL_NAMES parameter must be set to TRUE. When Warehouse Builder generates code for a mapping, it sets this parameter to TRUE if the mapping contains a remote or non-Oracle target.

  • For a multi-table insert to a remote or non-Oracle target, the Warehouse Builder code generator generates an insert statement for each table instead of a multi-table insert statement.

  • When performing bulk inserts into a remote or non-Oracle database, Warehouse Builder does not generate bulk processing code. Instead, it generates code that processes one row at a time, which means that the Generate Bulk property of the operator is ignored.
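
The following sketch, with hypothetical table and column names, contrasts the two forms of generated code for an INSERT/UPDATE loading type in set-based mode: a single MERGE statement for a local Oracle target, and a MERGE-free equivalent (separate UPDATE and INSERT statements) for a remote or non-Oracle target.

    -- Local Oracle target (hypothetical): set-based INSERT/UPDATE as a MERGE.
    MERGE INTO sales_tgt t
    USING (SELECT sale_id, amount FROM sales_src) s
    ON (t.sale_id = s.sale_id)
    WHEN MATCHED THEN UPDATE SET t.amount = s.amount
    WHEN NOT MATCHED THEN INSERT (t.sale_id, t.amount)
                          VALUES (s.sale_id, s.amount);

    -- Remote or non-Oracle target (hypothetical): no MERGE is generated; the
    -- same effect is achieved with separate UPDATE and INSERT statements.
    UPDATE sales_tgt@sales_link t
    SET    t.amount = (SELECT s.amount FROM sales_src s WHERE s.sale_id = t.sale_id)
    WHERE  EXISTS (SELECT 1 FROM sales_src s WHERE s.sale_id = t.sale_id);

    INSERT INTO sales_tgt@sales_link (sale_id, amount)
    SELECT s.sale_id, s.amount
    FROM   sales_src s
    WHERE  NOT EXISTS
           (SELECT 1 FROM sales_tgt@sales_link t WHERE t.sale_id = s.sale_id);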

Note:

The loading types used for remote or non-Oracle targets are the same as the ones used for other Oracle target operators. For more information about the loading type property, see Loading Types for Oracle Target Operators.

Using Flat File Source and Target Operators

The Flat File operator enables you to use a flat file as a source or target in a mapping.

Setting Properties for Flat File Source and Target Operators

You can set properties for a Flat File operator used as either a source or a target. You can set the Loading Types for Flat Files property and the Field Names in the First Row property. All other settings are read-only and depend on how you imported the flat file. The operator properties window is shown in Figure 25-17.

Figure 25-17 Properties Window for Flat File Operator

Loading Types for Flat Files

Select a loading type from the drop-down list:

  • Insert: Creates a new target file. If a target file already exists, then it is replaced with a new target file.

  • Update: Creates a new target file. If a target file already exists, then data is appended to it.

  • None: No operation is performed on the data target. This setting is useful for test runs where all transformations and extractions are run but have no effect on the target.

Field Names in the First Row

Set this property to True to write the field names in the first row of the target file, or to False if you do not want to write them.

Flat File Operator

You can use a flat file operator as either a source or target.

However, the two uses are mutually exclusive within the same mapping: a single mapping cannot contain both a flat file source and a flat file target, because the code generation languages differ for flat file sources and targets. Mappings can contain a mix of flat files, relational objects, and transformations, subject to the restrictions discussed later in this section.

You have the following options for flat file operators:

  • Using Previously Imported Flat Files

  • Importing and Binding New Flat Files into Your Mapping

  • Defining New Flat File Sources or Targets in Mappings

Flat File Source Operators

You can introduce data from a flat file into a mapping using either a flat file operator or an external table operator. If you are loading large volumes of data, loading from a flat file enables you to use the DIRECT PATH SQL*Loader option, which results in better performance.

If you are not loading large volumes of data, you can benefit from many of the relational transformations available in the external table feature. See "External Table Operators versus Flat File Operators" for more information.

As a source, the flat file operator acts as the row set generator that reads from a flat file using the SQL*Loader utility. Do not use a flat file source operator to map to a flat file target or to an external table. When you design a mapping with a flat file source operator, you can use the following operators:

When you use a flat file as a source in a mapping, remember to create a connector from the flat file source to the relational target for the mapping to deploy successfully.
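
Because a flat file source is read with the SQL*Loader utility, the DIRECT PATH option mentioned earlier corresponds to SQL*Loader's direct path load. As a sketch only, with a hypothetical connect string, control file, and log file, a direct path invocation of SQL*Loader looks like this:

    sqlldr userid=scott/tiger@orcl control=load_sales.ctl log=load_sales.log direct=true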

Flat File Target Operators

A mapping with a flat file target generates a PL/SQL package that loads data into a flat file instead of loading data into rows in a table.
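
As a rough, hypothetical sketch only (the package that Warehouse Builder actually generates is more elaborate), PL/SQL code that loads data into a flat file typically opens the file, writes one formatted line per source row, and closes the file, for example with the UTL_FILE package. Opening the file in append mode rather than write mode corresponds to the Update loading type described earlier. The directory, file, and table names below are made up for illustration.

    -- Hypothetical sketch of writing rows to a flat file target with UTL_FILE.
    DECLARE
      f UTL_FILE.FILE_TYPE;
    BEGIN
      f := UTL_FILE.FOPEN('TGT_DIR', 'sales.csv', 'W');   -- 'A' would append instead
      FOR r IN (SELECT sale_id, amount FROM sales_src) LOOP
        UTL_FILE.PUT_LINE(f, r.sale_id || ',' || r.amount);
      END LOOP;
      UTL_FILE.FCLOSE(f);
    END;
    /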

Note:

A mapping can contain a maximum of 50 flat file target operators at one time.

You can use an existing flat file with either a single or multiple record types. If you use a multiple-record-type flat file as a target, you can only map to one of the record types. If you want to load all of the record types in the flat file from the same source, you can drop the same flat file into the mapping as a target again and map to a different record type. For an example of this, see "Using Direct Path Loading to Ensure Referential Integrity in SQL*Loader Mappings". Alternatively, create a separate mapping for each record type you want to load.

To create a new flat file target, see "Creating a New Flat File Target".