17 Source and Target Operators

This chapter provides details on how to use operators as sources and targets in a mapping.

Using Source and Target Operators

Recall that defining a mapping involves the following general steps:

  1. Creating a Mapping

  2. Adding Operators

  3. Editing Operators

  4. Connecting Operators

  5. Setting Mapping Properties

  6. Setting Properties for Source and Target Operators

    When you select an object on the canvas, the editor displays its associated properties in the Properties panel along the left side.

  7. Configuring Mappings Reference

  8. Debugging a Mapping

This chapter describes step 6, Setting Properties for Source and Target Operators, and also provides details on how to use each operator.

Using Oracle Source and Target Operators

Oracle source and target operators refer to operators that are bound to Oracle data objects in the workspace. Use these operators in a mapping to load data into or source data from Oracle data objects.

Setting Properties for Oracle Source and Target Operators

The Properties panel in the Mapping Editor displays the properties of the selected operator. It contains the following categories of parameters for source and target operators:

Primary Source

For Oracle Application Embedded Data Warehouse (EDW) users, refer to EDW documentation. For all other users, disregard this parameter.

Loading Types for Oracle Target Operators

Select a loading type for each target operator using the Loading Type property.

For all Oracle target operators, except for dimensions and cubes, select one of the following options.

  • CHECK/INSERT: Checks the target for existing rows. If there are no existing rows, the incoming rows are inserted into the target.

  • DELETE: The incoming row sets are used to determine which rows to delete from the target.

  • DELETE/INSERT: Deletes all rows in the target and then inserts the new rows.

  • INSERT: Inserts the incoming row sets into the target. Insert fails if a row already exists with the same primary or unique key.

  • INSERT/UPDATE: For each incoming row, the insert operation is performed first. If the insert fails, an update operation occurs. If there are no matching records for update, the insert is performed. If you select INSERT/UPDATE and the Default Operating Mode is row based, you must set unique constraints on the target. If the operating mode is set based, Warehouse Builder generates a MERGE statement.

  • None: No operation is performed on the target. This setting is useful for testing. Extraction and transformations run but have no effect on the target.

  • TRUNCATE/INSERT: Truncates the target and then inserts the incoming row set. If you select this option, the operation cannot be rolled back even if the execution of the mapping fails. Truncate permanently removes the data from the target.

  • UPDATE: Uses the incoming row sets to update existing rows in the target. If no rows exist for the specified match conditions, no changes are made.

    If you set the configuration property PL/SQL Generation Mode of the target module to Oracle 10g or Oracle 10gR2, the target is updated in set based mode. The generated code includes a MERGE statement without an insert clause. For modules configured to generate 9i and earlier versions of PL/SQL code, the target is updated in row based mode.

  • UPDATE/INSERT: For each incoming row, the update operation is performed first. If the update matches no rows, the row is inserted. If you select UPDATE/INSERT and the Default Operating Mode for the target is set based, a MERGE statement is generated.

For dimensions and cubes, the Loading Type property has the following options: Load and Remove. Use Load to load data into the dimension or cube. Use Remove to remove data from the dimension or cube.
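In set based mode, the INSERT/UPDATE and UPDATE/INSERT loading types correspond conceptually to a single MERGE statement, and UPDATE to a MERGE without the insert clause. The following hand-written sketch illustrates the idea; the table and column names (CUSTOMERS, CUST_STAGE, CUST_ID, NAME) are hypothetical, not names that Warehouse Builder generates.

```sql
-- Conceptual sketch only: illustrative table and column names.
MERGE INTO customers tgt
USING (SELECT cust_id, name FROM cust_stage) src
ON (tgt.cust_id = src.cust_id)            -- match on the unique key
WHEN MATCHED THEN
  UPDATE SET tgt.name = src.name          -- update existing rows
WHEN NOT MATCHED THEN
  INSERT (cust_id, name)                  -- insert new rows
  VALUES (src.cust_id, src.name);
```

Dropping the WHEN NOT MATCHED clause gives the behavior described for the UPDATE loading type in set based mode.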

Loading Types for Flat File Targets

Configure SQL*Loader parameters to define SQL*Loader options for your mapping. The values chosen during configuration directly affect the content of the generated SQL*Loader and the runtime control files. SQL*Loader provides two methods for loading data:

  • Conventional Path Load: Executes an SQL INSERT statement to populate tables in an Oracle Database.

  • Direct Path Load: Eliminates much of the Oracle Database overhead by formatting Oracle data blocks and writing the data blocks directly to the database files. Because a direct load does not compete with other users for database resources, it can usually load data at or near disk speed.

    Certain considerations such as restrictions, security, and backup implications are inherent to each method of access to database files. See Oracle 9i Database Utilities for more information.

    When designing and implementing a mapping that extracts data from a flat file using SQL*Loader, you can configure different properties affecting the generated SQL*Loader script. Each load operator in a mapping has an operator property called Loading Types. The value of this property affects how the SQL*Loader INTO TABLE clause for that load operator is generated. Although SQL*Loader can append, insert, replace, or truncate data, it cannot update any data during its processing. Table 17-1 lists the INTO TABLE clauses associated with each load type and their effect on data in the existing targets.

Table 17-1 Loading Types and INTO TABLE Relationship

Loading Type      INTO TABLE Clause   Effect on Target with Existing Data
INSERT/UPDATE     APPEND              Adds additional data to the target.
DELETE/INSERT     REPLACE             Removes existing data and replaces it with new data (DELETE trigger fires).
TRUNCATE/INSERT   TRUNCATE            Removes existing data and replaces it with new data (DELETE trigger does not fire).
CHECK/INSERT      INSERT              Assumes the target table is empty.
NONE              INSERT              Assumes the target table is empty.
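The mapping from loading type to INTO TABLE clause can be seen in a hand-written control file fragment. This sketch is illustrative only; the file, table, and column names are hypothetical, and the generated control file contains additional clauses.

```
LOAD DATA
INFILE 'customers.dat'
APPEND                          -- generated for loading type INSERT/UPDATE
INTO TABLE cust_stage
FIELDS TERMINATED BY ','
(cust_id, name, city)
```

Replacing APPEND with REPLACE or TRUNCATE corresponds to the DELETE/INSERT and TRUNCATE/INSERT loading types, respectively.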


Target Load Order

This property enables you to specify the order in which multiple targets within the same mapping are loaded. Warehouse Builder determines a default load order based on the foreign key relationships. You can overrule the default order.

Target Filter for Update

If the condition evaluates to true, the row is included in the update loading operation.

Target Filter for Delete

If evaluated to true, the row is included in the delete loading operation.
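Conceptually, these filters narrow the WHERE clause of the generated DML. A sketch for a Target Filter for Update such as STATUS = 'ACTIVE' follows; the table, column, and bind names are illustrative, not generated names.

```sql
-- Conceptual sketch only: the filter condition is appended to the
-- match condition, so rows failing the filter are not updated.
UPDATE customers tgt
SET    tgt.name    = :src_name
WHERE  tgt.cust_id = :src_cust_id   -- match condition
AND    tgt.status  = 'ACTIVE';      -- Target Filter for Update
```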

Match By Constraint

When loading target operators with the UPDATE or the DELETE conditions, you can specify matching criteria. You can set matching and loading criteria manually or choose from several built-in options. Use Match By Constraint to indicate whether unique or primary key information on a target overrides the manual matching and loading criteria set on its attributes. When you click the property Match By Constraint, Warehouse Builder displays a list containing the constraints defined on that operator and the built-in loading options.

If you select All Constraints, all manual attribute load settings are overruled and the data is loaded as if the load and match properties of the target attributes were set as displayed in Table 17-2.

Table 17-2 All Constraints Target Load Settings

Load Setting                     Key Attribute   All Other Attributes
Load Column when Updating Row    NO              YES
Match Column when Updating Row   YES             NO
Match Column when Deleting Row   YES             NO


If you select No Constraints, all manual load settings are honored and the data is loaded accordingly.

If you select a constraint previously defined for the operator, all manual attribute load settings are overruled and the data is loaded as if the load and match properties of the target were set as displayed in Table 17-3.

Table 17-3 Target Load Settings for a Selected Constraint

Load Setting                     Selected Key Attributes   All Other Attributes
Load Column when Updating Row    NO                        YES
Match Column when Updating Row   YES                       NO
Match Column when Deleting Row   YES                       NO


Reverting Constraints to Default Values

If you made changes at the attribute level and want to revert all settings to their defaults, click Advanced. A list containing the loading options is displayed. Warehouse Builder resets the settings based on the constraint type you select.

For example, if you want to reset the match properties for all key attributes, click Advanced, select No Constraints, and click OK. The manual load settings are overwritten and the data is loaded based on the settings displayed in Table 17-4.

Table 17-4 Default Load Settings for Advanced No Constraints

Load Setting                     All Key Attributes   All Other Attributes
Load Column when Updating Row    YES                  YES
Match Column when Updating Row   NO                   NO
Match Column when Deleting Row   NO                   NO


Alternatively, if you click Advanced and select All Constraints, the manual load settings are overwritten and data is loaded based on the settings displayed in Table 17-5.

Table 17-5 Default Load Settings for Advanced All Constraints

Load Setting                     All Key Attributes   All Other Attributes
Load Column when Updating Row    NO                   YES
Match Column when Updating Row   YES                  NO
Match Column when Deleting Row   YES                  NO


Bound Name

The name used by the code generator. If an operator is currently bound and synchronized, then this property is read-only. If an operator is not yet bound, you can edit the bound name within the Mapping Editor before you synchronize it to a workspace object.

Key Name

Name of the primary, foreign, or unique key.

Key Columns

The local columns that define this key. If the operator contains more than one key column, the column names are comma-delimited.

Key Type

Type of key, either primary, foreign, or unique.

Referenced Keys

If the operator contains a foreign key, Referenced Keys displays the primary key or unique key for the referenced object.

Error Table Name

The name of the error table that stores the invalid records during a load operation.

Roll up Errors

Select Yes to roll up records selected from the error table by the error name. Thus all errors generated by a particular input record will be rolled up into a single record with the error names concatenated in the error name attribute.

Select Only Errors from this Operator

Rows selected from the error table contain only errors created by this operator in this map execution.

Setting Attribute Properties

For each attribute in a source and target operator, parameters are categorized into the following types:

Bound Name

Name used by the code generator to identify this item. By default, it is the same name as the item. This is a read-only setting when the operator is bound.

Data Type

Data type of the attribute.

Precision

The maximum number of digits this attribute will have if the data type of this attribute is a number or a float. This is a read-only setting.

Scale

The number of digits to the right of the decimal point. This only applies to number attributes.

Length

The maximum length for a CHAR, VARCHAR, or VARCHAR2 attribute.

Fractional Seconds Precision

The number of digits in the fractional part of the datetime field. It can be a number between 0 and 9. This property is used only for TIMESTAMP data types.
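For example, an attribute with a fractional seconds precision of 6 (the Oracle default) corresponds to the type declaration TIMESTAMP(6). The table name in this sketch is illustrative.

```sql
-- Stores up to six fractional-second digits in LOADED_AT.
CREATE TABLE load_audit (loaded_at TIMESTAMP(6));
```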

Load Column When Inserting Row

This setting controls whether the attribute's data moves to the target during an insert. If you select Yes (the default), the data reaches the mapped target; select No to prevent the data from being loaded even though the attribute is mapped.

Load Column When Updating Row

This setting prevents the selected attribute data from moving to a target even though it is mapped to do so. If you select Yes (default), the data reaches the mapped target attribute. If all columns of a unique key are not mapped, then the unique key is not used to construct the match condition. If no columns of a unique key are mapped, an error is displayed. If a column (not a key column) is not mapped, then it is not used in loading.

Match Column When Updating Row

This setting updates a data target row only if there is a match between the source attribute and mapped target attribute. If a match is found, then an update occurs on the row. If you set this property to Yes (default), the attribute is used as a matching attribute. If you use this setting, then all the key columns must be mapped. If there is only one unique key defined on the target entity, use constraints to override this setting.

Update: Operation

You can specify an update operation to be performed when a matching row is located. An update operation is performed on the target attribute using the data of the source attribute. Table 17-6 lists the update operations you can specify and describes the update operation logic.

Table 17-6 Update Operations

Operation   Example                     Result If Source Value = 5 and Target Value = 10
=           TARGET = SOURCE             TARGET = 5
+=          TARGET = SOURCE + TARGET    TARGET = 15 (5 + 10)
-=          TARGET = TARGET - SOURCE    TARGET = 5 (10 - 5)
=-          TARGET = SOURCE - TARGET    TARGET = -5 (5 - 10)
||=         TARGET = TARGET || SOURCE   TARGET = 105 (10 concatenated with 5)
=||         TARGET = SOURCE || TARGET   TARGET = 510 (5 concatenated with 10)
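As an illustrative sketch, an Update Operation of += on a SALES_AMOUNT attribute conceptually turns the generated update into an accumulation rather than an overwrite. The table, column, and bind names here are hypothetical.

```sql
-- Conceptual sketch only: '+=' means TARGET = SOURCE + TARGET.
UPDATE sales tgt
SET    tgt.sales_amount = :src_amount + tgt.sales_amount
WHERE  tgt.sale_id = :src_sale_id;   -- match condition
```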


Match Column When Deleting Row

Deletes a data target row only if there is a match between the source attribute and mapped target attribute. If a match is found, then a delete occurs on the row. If you set this property to Yes (default), the attribute is used as a matching attribute. Constraints can override this setting.

Constant Operator

The Constant operator enables you to define constant values. You can place Constant operators anywhere in any PL/SQL or ABAP mapping.

Note: For SQL*Loader mappings, use a Data Generator Operator instead.


The Constant operator produces a single output group that contains one or more constant attributes. Warehouse Builder initializes the constants at the beginning of the execution of the mapping.

For example, use a constant operator to load the value of the current system date into a table operator. In the Expression Builder, select the public transformation SYSDATE from the list of pre-defined transformations. For more information about public transformations, see Chapter 15, "Data Transformation".

To define a constant operator in a PL/SQL or ABAP mapping:

  1. Drop a Constant operator onto the Mapping Editor canvas.

  2. Right-click the Constant operator and select Open Editor.

    The Constant Editor dialog box is displayed.

  3. On the Output tab, click the Add button to add an output attribute.

    You can modify the name and the data type of the attribute.

  4. Click OK to close the Constant Editor dialog box.

  5. In the Mapping Editor, select an output attribute on the Constant operator.

    The Properties panel of the Mapping Editor displays the properties of the output attribute.

  6. Click the Ellipsis button to the right of the Expression field.

    The Expression Builder dialog box is displayed. Use this dialog box to write an expression for the constant.

    The length, precision, and scale properties assigned to the output attribute must match the values returned by the expressions defined in the mapping. For VARCHAR, CHAR, or VARCHAR2 data types, enclose constant string literals within single quotes, such as, 'my_string'.

Construct Object Operator

The Construct Object operator enables you to create SQL object data types (object types and collection types), PL/SQL object types, and cursors in a mapping by using the individual attributes that they comprise.


For example, you can use a Construct Object operator to create a SQL object type that is used to load data into a table that contains a column whose data type is an object type. You can also use this operator to create the payload that loads data into an advanced queue. This operator also enables you to construct a SYS.REFCURSOR object.

The Construct Object operator has one input group and one output group. The input group represents the individual attributes that comprise the object type. The output of the Construct Object operator is an object type that is created using the individual attributes. In a mapping, the data type of the output attribute of the Construct Object operator should match the target attribute to which it is being mapped.

Figure 17-1 displays a mapping that uses a Construct Object operator. The source table CUST_SRC uses separate attributes to store each component of the customer address, but the target table CUSTOMERS uses an object type to store the customer address. To load data from the CUST_SRC table into the CUSTOMERS table, the customer address must be converted to an object type whose signature matches that of the customer address in CUSTOMERS. The Construct Object operator takes as input the individual attributes from CUST_SRC that store the customer address and constructs an object type. The Construct Object operator is bound to the user-defined data type CUST_ADDR stored in the workspace.

Figure 17-1 Construct Object Operator in a Mapping


To define a Construct Object operator in a mapping:

  1. Drag and drop a Construct Object operator onto the Mapping Editor canvas.

  2. Use the Add Construct Object dialog box to create or select an object. For more information about these options, see Adding Operators that Bind to Workspace Objects.

  3. Map the individual source attributes that are used to construct the object to the input group of the Construct Object operator.

  4. Map the output attribute of the Construct Object operator to the target attribute. The data type of the target attribute should be an object type.

    Note that the signatures of the output attribute of the Construct Object operator and the target attribute should be the same.

Cube Operator

Use the Cube operator to source data from or load data into cubes.


The Cube operator contains a group with the same name as the cube. This group contains an attribute for each of the cube measures. It also contains the attributes for the surrogate identifier and business identifier of each dimension level that the cube references. Additionally, the Cube operator displays one group for each dimension that the cube references.

You can bind a Cube operator to a cube defined in any Oracle module in the current project. You can also synchronize the cube operator and update it with changes made to the cube to which it is bound. To synchronize a Cube operator, right-click the Cube operator on the Mapping Editor canvas and select Synchronize.

To create a mapping that contains a Cube operator:

  1. Drag and drop a Cube operator onto the Mapping Editor canvas.

    Warehouse Builder displays the Add Cube dialog box.

  2. Use the Add Cube dialog box to create or select a cube. For more information about these options, see "Adding Operators that Bind to Workspace Objects".

    Alternatively, you can perform steps 1 and 2 as one step. In the Mapping Editor, navigate to the Selection Tree tab of the Explorer window. Select the cube and drag and drop it onto the Mapping Editor canvas.

  3. Map the attributes from the Cube operator to the target or map attributes from the source operator to the Cube operator.

When you load a cube, you map the data flow from the source to the attribute that represents the business identifier of the referencing level. Warehouse Builder performs a lookup and then stores the corresponding surrogate ID in the cube table. For example, when you map the attributes from the dimension operator to the cube operator, a Key Lookup operator is created in cases where it is needed to lookup the surrogate identifier of the dimension.

Note that if there is a possibility of the lookup condition returning multiple rows, you must ensure that only one row is selected out of the returned rows. You can do this by using the Deduplicator operator or Filter operator.

The Cube operator contains an attribute called ACTIVE_DATE. This attribute represents the point in time that is used to determine which record in a Type 2 SCD is the active record. This property is applicable only when the cube you are loading has one or more Type 2 SCDs.

If you do not map an attribute from the source to the ACTIVE_DATE, SYSDATE is used as the default.

If you map a source attribute to ACTIVE_DATE, the value of the source attribute is used to determine which version of the Type 2 SCD record is referenced by the cube record. For any cube that references a dimension in which a level is a Type 2 SCD, the WHERE clause generated to determine the dimension member is as follows:

...
WHERE
(...
   (<dim_name>.DIMKEY = <lookup_for_dimension_dimkey> AND
           (<level>_EFFECTIVE_DATE <= ACTIVE_DATE AND
                    <level>_EXPIRATION_DATE >= ACTIVE_DATE) OR
           (<level>_EFFECTIVE_DATE <= ACTIVE_DATE AND
                    <level>_EXPIRATION_DATE IS NULL))
...)

If a mapping that loads a cube references at least one Type 2 SCD that has the Default Expiration Time of Open Record set to a non-NULL value, then the ACTIVE_DATE attribute of the Cube operator must be mapped from the source that contains the date value that defines the range for the dimension record.

If the ACTIVE_DATE attribute is not mapped from the source, then the SYSDATE value will define the date range for the dimension record.

When the ACTIVE_DATE attribute is mapped from the source, the source attribute value is used to perform a range comparison to determine which dimension record should be loaded.

The logic used to perform the lookup for the dimension member is described in the WHERE clause listed previously.

Figure 17-2 displays a mapping that uses the Cube operator as a target. Data from three source tables is joined using a Joiner operator. An Aggregator operator is used to aggregate the joined data with the data from another source table. The output of the Aggregator operator is mapped to the Cube operator.

Figure 17-2 Mapping that Loads a Cube


Cube Operator Properties

The cube operator has the following properties that you can use to load a cube.

  • Target Load Order: This property determines the order in which multiple targets within the same mapping are loaded. Warehouse Builder determines a default order based on the foreign key relationships. You can use this property to overrule the default order.

  • Solve the Cube: Select YES for this property to aggregate the cube data while loading the cube. This increases the load time, but decreases the query time. The data is first loaded and then aggregated.

  • Incremental Aggregation: Select this option to perform incremental loads. This means that if the cube has been solved earlier, subsequent loads will only aggregate the new data.

  • AW Staged Load: If set to YES, the set-based AW load data is staged into a temporary table before being loaded into the analytic workspace.

  • AW Truncate Before Load: Indicates whether all existing cube values should be truncated before loading the cube. Setting this property to YES truncates existing cube data.

Data Generator Operator

Use a single Data Generator operator to introduce constants or sequences into a SQL*Loader mapping. Each SQL*Loader mapping can contain a maximum of one Data Generator operator.


Recommendation:

For PL/SQL mappings use a Constant Operator or Sequence Operator instead of a Data Generator.

For mappings with flat file sources and targets, the Data Generator operator connects the mapping to SQL*Loader to generate the data stored in the database record.

The following functions are available:

  • RECNUM

  • SYSDATE

  • SEQUENCE

Warehouse Builder can generate data by specifying only sequences, record numbers, system dates, and constants as field specifications. SQL*Loader inserts as many records as are specified by the LOAD keyword.

The Data Generator operator has one output group with predefined attributes corresponding to Record Number, System Date, and a typical Sequence. You can create new attributes but not alter the predefined attributes.

Figure 17-3 shows a mapping that uses the Data Generator operator to obtain the current system date. The data from a flat file CUSTOMERS_TXT is loaded in to a staging table CUST_STAGE. The staging table contains an additional attribute for the date the data was loaded. The SYSDATE attribute of the Data Generator operator is mapped to the DATE_LOADED attribute of the staging table CUST_STAGE.

Figure 17-3 Data Generator in a Mapping


To define a Data Generator in a SQL*Loader mapping:

  1. Drop a Data Generator operator onto the Mapping Editor canvas.

  2. Right-click the operator and select Open Details.

    The DATA_GENERATOR Editor is displayed.

  3. Select the Output Attributes tab of the DATA_GENERATOR Editor.

    Warehouse Builder displays the pre-defined output attributes RECNUM, SYSDATE1, and SEQUENCE.

  4. On the Output Attributes tab, define the properties and type an optional description for the predefined output attributes.

  5. Click OK to close the DATA_GENERATOR Editor.

  6. On the operator in the mapping canvas, select the RECNUM attribute.

    Warehouse Builder displays the properties of this attribute in the Properties panel of the Mapping Editor.

  7. In the Expression field, click the Ellipsis button to open the Expression Builder and define an expression.

  8. Repeat steps 6 and 7 for the SEQUENCE attribute.

Setting a Column to the Data File Record Number

Use the RECNUM keyword to set an attribute to the number of the record from which it was loaded. Records are counted sequentially from the beginning of the first data file, starting with record 1. RECNUM increments as each logical record is assembled. It increments for records that are discarded, skipped, rejected, or loaded. For example, if you use the option SKIP=10, the first record loaded has a RECNUM of 11.

Setting a Column to the Current Date

A column specified with SYSDATE gets the current system date, as defined by the SQL language SYSDATE function.

The target column must be of type CHAR or DATE. If the column is of type CHAR, the date is loaded in the format dd-mon-yy. If the system date is loaded into a DATE column, then you can access it in the time format and the date format. A new system date/time is used for each array of records inserted in a conventional path load and for each block of records loaded during a direct path load.

Setting a Column to a Unique Sequence Number

The SEQUENCE keyword ensures a unique value for a column. SEQUENCE increments for each record that is loaded or rejected. It does not increment for records that are discarded or skipped.

The combination of column name and the SEQUENCE function is a complete column specification. Table 17-7 lists the options available for sequence values.

Table 17-7 Sequence Value Options

Value         Description
column_name   The name of the column in the database to which the sequence is assigned.
SEQUENCE      Specifies the value for a column.
integer       Specifies the beginning sequence number.
COUNT         The sequence starts with the number of records already in the table plus the increment.
MAX           The sequence starts with the current maximum value for the column plus the increment.
incr          The value by which the sequence number increments after a record is loaded or rejected.


If records are rejected during loading, the sequence of inserts is preserved despite the data errors. For example, if four rows are assigned sequence numbers 10, 12, 14, and 16, and the row with 12 is rejected, the remaining rows are inserted with their assigned numbers 10, 14, and 16 rather than being renumbered 10, 12, and 14. When you correct the rejected data and reinsert it, you can manually set the columns to match the sequence.
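The three field specifications correspond directly to SQL*Loader control file syntax. A hand-written sketch follows, with hypothetical file, table, and column names; see Oracle Database Utilities for the full SEQUENCE syntax.

```
LOAD DATA
INFILE 'customers.dat'
APPEND
INTO TABLE cust_stage
(rec_no      RECNUM,               -- number of the record in the data file
 date_loaded SYSDATE,              -- current system date
 seq_no      SEQUENCE(MAX, 1),     -- current max column value plus 1
 cust_id     POSITION(1:6)  INTEGER EXTERNAL,
 name        POSITION(8:27) CHAR)
```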

Dimension Operator

Use the Dimension operator to source data from or load data into dimensions and slowly changing dimensions.


The Dimension operator contains one group for each level in the dimension. The groups use the same name as the dimension levels. The level attributes of each level are listed under the group that represents the level.

You cannot map a data flow to the surrogate identifier attribute or the parent surrogate identifier reference attribute of any dimension level. Warehouse Builder automatically populates these columns when it loads a dimension.

You can bind and synchronize a Dimension operator with a dimension stored in the workspace. To avoid errors in the generated code, ensure that the workspace dimension is deployed successfully before you deploy the mapping that contains the Dimension operator. To synchronize a Dimension operator with the workspace dimension, right-click the dimension on the Mapping Editor canvas and select Synchronize.

To use a Dimension operator in a mapping:

  1. Drag and drop a Dimension operator onto the Mapping Editor canvas.

    Warehouse Builder displays the Add Dimension dialog box.

  2. Use the Add Dimension dialog box to select a dimension.

    Alternatively, you can combine steps 1 and 2 into a single step. In the Mapping Editor, navigate to the Selection Tree tab of the Explorer window. Select the dimension and drag and drop it onto the Mapping Editor canvas.

  3. Map the attributes from the Dimension operator to the target or map attributes from the source to the Dimension operator.

Dimension Operator Properties

Use the Properties panel of the Mapping Editor to set options that define additional details about loading or removing data from a dimension or slowly changing dimension.

You can set properties at the following three levels: operator, group that represents each level in the dimension, and level attribute.

The Dimension operator has the following properties.

Loading Type Represents the type of operation to be performed on the dimension. The options you can select are as follows:

  • LOAD: Select this value to load data into the dimension or slowly changing dimension.

  • REMOVE: Select this value to delete data from the dimension or slowly changing dimension.

While loading or removing data, a lookup is performed to determine if the source record exists in the dimension. The matching is performed by the natural key identifier. If the record exists, a REMOVE operation removes existing data. A LOAD operation updates existing data and then loads new data.

Note that when you remove a parent record, the child records will have references to a non-existent parent.

Target Load Order Specifies the order in which multiple targets within the same mapping are loaded. Warehouse Builder determines a default order based on the foreign key relationships. Use this property to overrule the default order.

Default Effective Time of Initial Record This property is applicable for Type 2 SCDs only. It represents the default value assigned as the effective time for the initial load of a particular dimension record. The default value set for this property is SYSDATE.

Default Effective Time of Open Record This property is applicable for Type 2 SCDs only. It represents the default value set for the effective time of the open records, after the initial record. The default value of this property is SYSDATE. This value should not be modified.

Default Expiration Time of Open Record This property is applicable for Type 2 SCDs only. It represents a date value that is used as the expiration time of a newly created open record for all the levels in the dimension. The default value is NULL.

Type 2 Gap This property is applicable for Type 2 SCDs only. It represents the time interval between the expiration time of an old record and the effective time of the current record when a record is versioned.

When the value of a triggering attribute is updated, the current record is closed and a new record is created with the updated values. Since the closing of the old and opening of the current record occur simultaneously, it is useful to have a time interval between the expiration time of the old record and the effective time of the open record, instead of using the same value for both.

Type 2 Gap Units This property is applicable for Type 2 SCDs only. It represents the unit of time used to measure the gap interval specified in the Type 2 Gap property. The options are: Seconds, Minutes, Hours, Days, and Weeks.

Type 2 Extract/Remove Current Only This property is applicable only for Type 2 SCDs. Use this property to specify which records are to be extracted or removed. You can set the following values for this property:

  • YES: When you are extracting data from the Type 2 SCD, only the current record is extracted. When you are removing data from a Type 2 SCD, only the current record is closed (expiration date is set either to SYSDATE or to the date defined in the Default Expiration Time of Open Record property).

    Note that in the case of a Type 2 SCD that uses a snowflake implementation, you cannot remove a record if it has child records that have a Type 2 trigger.

  • NO: When you are extracting data from a Type 2 SCD, all the records, including historical records, that match the natural identifier are extracted from the dimension.

    When you are removing data from the Type 2 SCD, all records, including historical records, that match the natural identifier are closed. However, the records will not be removed from the dimension. Any child records of the closed record will remain linked to a closed parent.

AW Truncate Before Load This property is applicable for MOLAP dimensions only. It indicates whether all existing dimension data should be truncated before loading fresh data. Set this property to YES to truncate any existing dimension data before you load fresh data.

AW Staged Load This property is applicable for MOLAP dimensions only. Select this option to stage the set-based load data into a temporary table before loading into the analytic workspace.

Each group in the Dimension operator represents a dimension level. You can set the following properties for each dimension level:

  • Extracting Type: Represents the extraction operation to be performed when the dimension is used as a source. Select Extract Current Only (Type 2 Only) to extract current records only from a Type 2 SCD. This property is valid only for Type 2 SCDs. Select Extract All to extract all records from the dimension or SCD.

  • Default Expiration Time of Open Record: This property is applicable for Type 2 SCDs only. It represents a date value that is used as the expiration time of a newly created open record. The default value is NULL.

Note:

If you set the Commit Control property to Manual, ensure that you set the Automatic Hints Enable property to false. Otherwise, your mapping may not execute correctly.

Example of Type 2 SCD Operator Property Values 

For example, the mapping that loads the Products Type 2 SCD has the following configuration properties:

Default Effective Time of Initial Record: 01-jan-2000

Default Effective Time of Open Record: SYSDATE

Default Expiration Time of Open Record: 01-jan-2099

Type 2 Gap: 1

Type 2 Gap Units: Minutes

Product is the leaf level in this Type 2 SCD. The data type of the effective date and expiration date attributes is TIMESTAMP. The first product record is loaded on 20-apr-2007. The product record uses the following values:

Effective Time: 01-jan-2000

Expiration Time: 01-jan-2099

When the triggering attribute of this product record is updated on 22-apr-2007 at 10:45 AM, the current product record is closed and a new record is opened with the updated values.

The closed product record has the following values:

Effective Time: 01-jan-2000

Expiration Time: 22-apr-2007 10:44:00

The currently open Product record has the following values:

Effective Time: 22-apr-2007 10:45:00

Expiration Time: 01-jan-2099
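The gap arithmetic in this example can be sketched as follows. This is an illustrative Python fragment, not Warehouse Builder output (the tool generates PL/SQL); the function name and unit mapping are assumptions.

```python
from datetime import datetime, timedelta

# Map the documented Type 2 Gap Units options to timedelta keyword arguments.
GAP_UNITS = {"Seconds": "seconds", "Minutes": "minutes", "Hours": "hours",
             "Days": "days", "Weeks": "weeks"}

def closed_record_expiration(new_effective_time, gap_value, gap_units):
    """Expiration time of the closed record = effective time of the
    new record minus the Type 2 Gap."""
    gap = timedelta(**{GAP_UNITS[gap_units]: gap_value})
    return new_effective_time - gap

# The triggering update arrives on 22-apr-2007 at 10:45 AM; the gap is 1 minute.
expiration = closed_record_expiration(datetime(2007, 4, 22, 10, 45), 1, "Minutes")
```

With these values, `expiration` is 22-apr-2007 10:44:00, matching the closed record in the example above.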

Dimension Operator as a Source

You can source data stored in a workspace dimension or SCD by using a Dimension operator in a mapping.

Sourcing Data From Dimensions To source data stored in a dimension, create a mapping that contains a Dimension operator. Ensure that the Dimension operator is bound to the workspace dimension that contains the source data. Then map the attributes from the dimension levels to the target operator. See Figure 17-2 for an example of a mapping that sources data from a dimension.

Sourcing Data From Type 2 Slowly Changing Dimensions The Dimension operator enables you to source data stored in a Type 2 SCD. Since a Type 2 SCD stores multiple versions of a single record, you must specify which version of the record you want to use.

Use the Extracting Type property of the group that represents each dimension level to specify the version of the level record from which you want to source data. To load all versions of a particular level record, select Extract All. To retrieve only the latest version of a level record, set the Extracting Type property to Extract Current Only (Type 2 Only).

To set the properties of a dimension level, select the group that represents the level in the Dimension operator. The Group Properties panel displays the properties of the level.

Sourcing Data From Type 3 Slowly Changing Dimensions To source data from a Type 3 SCD, create a mapping that contains a Dimension operator. Bind the Dimension operator to the Type 3 SCD in the workspace from which you want to source data. A Type 3 SCD uses separate attributes to store historic values of versioned attributes. Thus, to source historic data, map the attributes that represent the previous values of versioned attributes to the target. To source current data, map the attributes that store the level attributes to the target.

Figure 17-4 displays a mapping that sources the historic data records from the PRODUCTS Type 3 dimension. In this example, to source historic data, use the PREV_DESCRIPTION, PREV_PACKAGE_TYPE, or PREV_PACKAGE_SIZE attributes. To source current data, use DESCRIPTION, PACKAGE_TYPE, or PACKAGE_SIZE.

Figure 17-4 Mapping that Sources Data from a Type 3 SCD

Description of Figure 17-4 follows
Description of "Figure 17-4 Mapping that Sources Data from a Type 3 SCD"

Dimension Operator as a Target

Use a Dimension operator as a target in a mapping to load data into dimensions and SCDs. Define a data flow from the operators that represent the source objects to the dimension or SCD.

Warehouse Builder loads data into the dimension starting from the highest level.

Note:

You cannot map a data flow to the surrogate identifier or the parent surrogate identifier reference of a level.

Loading Data Into Dimensions Before loading records into a dimension, a check is made to determine whether a record with the same business identifier as the one being loaded exists in the dimension. If no matching record exists, the record from the source is added to the dimension. If a matching record exists, the current record is updated to reflect the new attribute values.
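The check described above amounts to an insert-or-update keyed on the business identifier. A minimal Python sketch (a dictionary stands in for the dimension table; all names are illustrative, not Warehouse Builder API):

```python
def load_dimension_record(dimension, incoming):
    """Insert-or-update keyed on the business identifier.

    `dimension` is a dict standing in for the dimension table and
    `incoming` is one source row; both names are illustrative."""
    key = incoming["business_id"]
    if key in dimension:
        dimension[key].update(incoming)   # matching record found: update it
    else:
        dimension[key] = dict(incoming)   # no match: insert the source record
    return dimension
```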

Figure 17-5 displays a mapping that loads data into the PRODUCTS dimension, represented by the operator PRODUCTS_OUT. The source data is stored in two tables, CATEGORIES and PRODUCTS. The CATEGORIES table stores both the category and subcategory information, so the data from this table is filtered using two Filter operators, CATS and SUBCATS. The filtered data is then mapped to the CATEGORIES and SUBCATEGORIES dimension levels. The TOTAL level of the dimension is loaded using a Constant operator. The data from the PRODUCTS table is mapped directly to the PRODUCTS level in the dimension.

Figure 17-5 Loading the Products Dimension

Description of Figure 17-5 follows
Description of "Figure 17-5 Loading the Products Dimension"

When you define a data flow to load a dimension, an in-line pluggable mapping that actually loads data into the dimension is created. To view this pluggable mapping, select the Dimension operator on the Mapping Editor canvas and click the Visit Child Graph icon on the graphical toolbar.

Loading Data Into Type 2 SCDs Before loading records into the Type 2 SCD, a check is first made to determine if a record with the same business identifier exists in the Type 2 SCD. If the record does not exist, it is added to the Type 2 SCD. If the record already exists, the following steps are performed:

  • Marks the existing record as closed by setting the value specified in the property Default Expiration Time of Open Record.

  • Creates a new record using the changed attribute values. If the effective date input for the level is not mapped, the effective time and expiration time are set using the Default Effective Time of Open Record and the Default Expiration Time of Open Record properties of the operator. If the effective date input for the level is mapped, then the effective time of the new record is set to the value obtained from the effective date input data flow. The effective date input, if connected, represents the actual effective date of each individual new record.

    The expiration date of the old record is set using the Default Expiration Time of Open Record property, regardless of the input connections.
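The two steps above can be sketched as follows. This Python fragment only illustrates the versioning behavior (Warehouse Builder generates PL/SQL); the record layout, the 1-minute gap default, and the 01-jan-2099 open-record expiration are assumptions taken from the examples in this chapter.

```python
from datetime import datetime, timedelta

OPEN_EXPIRATION = datetime(2099, 1, 1)  # assumed Default Expiration Time of Open Record

def load_type2(history, incoming, effective_time, gap=timedelta(minutes=1)):
    """Close the current version (step 1), then insert a new version
    carrying the changed attribute values (step 2). Illustrative only."""
    key = incoming["business_id"]
    versions = history.setdefault(key, [])
    if versions:
        # Step 1: close the open record, leaving a gap before the new version.
        versions[-1]["expiration_time"] = effective_time - gap
    # Step 2: open a new version with the incoming attribute values.
    new_version = dict(incoming)
    new_version["effective_time"] = effective_time
    new_version["expiration_time"] = OPEN_EXPIRATION
    versions.append(new_version)
    return history
```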

To load data into a slowly changing dimension, you map the source attributes to the attributes in the Type 2 SCD. Figure 17-6 displays a mapping that loads data into the PRODUCTS level of a Type 2 SCD.

Figure 17-6 Loading Data Into a Type 2 SCD

Description of Figure 17-6 follows
Description of "Figure 17-6 Loading Data Into a Type 2 SCD"

In this mapping, the effective time of a record is loaded from the source. You can also choose to set this to a default value, such as SYSDATE, using the Default Effective Time of Open Record property.

In addition, you need to store the closure date of historical records. In the mapping shown in Figure 17-6, the expiration time attribute is not loaded from the source. It is set to a default value using the Default Expiration Time of Open Record property.

To set default values for the expiration time or effective time for a level record:

  1. In the Dimension operator, select the group that represents the level for which default values for the effective time and expiration time attributes should be set. For example, in the mapping shown in Figure 17-6, select the PRODUCTS group on the Dimension operator.

    The Group properties panel displays the properties of the selected level.

  2. Use Default Effective Time of Open Record to specify the value to be stored as the effective time of the record being loaded.

  3. Use Default Expiration Time of Open Record to specify the value to be stored as the expiration time of the record being loaded.

In a mapping that loads a Type 2 SCD, if you map attributes from the source to the effective date attribute of a level, Warehouse Builder performs the following:

  • While loading the initial record, if the value of the source attribute is earlier than the value specified by the Default Effective Time of Initial Record property of the Dimension operator bound to the Type 2 SCD, the value from the source is used as the effective date of the record. Otherwise, the value specified in the Default Effective Time of Initial Record property is used as the effective date of the record.

  • During subsequent loads for the record, if the record is being versioned, the effective time of the new record is taken from the source. If no value is given for the effective time, SYSDATE is used. The expiration time of the closed record is set to the effective time of the new record minus the gap.

If you do not map attributes from the source to the effective date attribute of a level, Warehouse Builder performs the following:

  • While loading the initial record, the value specified in the Default Effective Time of Initial Record property is used as the effective date of the record.

  • During subsequent loads for the record, if a new version is being created, the effective time of the new record is set to SYSDATE, because no source value is available. The expiration time of the previous version is set to the effective time of the new version minus the gap.

    For more information about the gap, see "Type 2 Gap" and "Type 2 Gap Units".

Note:

Mapping to the Expiration Date attribute of a level is not allowed. While loading a record, the Default Expiration Time of Open Record property is used as the expiration date. The default value of this property is NULL.

For example, you create a mapping that loads the Products Type 2 SCD. The leaf level of this Type 2 SCD, Product, is loaded from a source table. The Dimension operator has the following properties:

Default Effective Time of Initial Record: 01-jan-2000

Default Effective Time of Open Record: SYSDATE

Default Expiration Time of Open Record: 01-jan-2099

The effective date attribute of the Product level is mapped from the source attribute EFF_DATE. Consider a source Product level record with the value of EFF_DATE as 21-mar-2007. When the initial Product level record is loaded, the effective date stored in this record is 01-jan-2000 and the expiration date is 01-jan-2099. When this Product level record is versioned during a subsequent load on 21-mar-2007, the value of the source attribute overrides the Default Effective Time of Open Record property. The effective date stored in the new Product level record is 21-mar-2007 and the expiration date is set to 01-jan-2099. The initial Product level record is closed with its expiration date set to 21-mar-2007 minus the Type 2 Gap value.

Loading Data Into Type 3 SCDs Before loading data into a Type 3 SCD, a check is made to determine if a record with the same business identifier exists in the Type 3 SCD. If the record does not exist, it is added. If the record already exists, the following steps are performed:

  • Moves the values of the versioned attributes to the attributes that store the previous values of versioned attributes.

  • Updates the record with the values from the source record.
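The two steps above can be sketched as follows (illustrative Python, not Warehouse Builder output; the `prev_` prefix mirrors the PREV_* attributes shown in Figure 17-7 but the names are assumptions):

```python
def load_type3(record, incoming, versioned_attrs):
    """Type 3 load for an existing record: move current values of the
    versioned attributes into their prev_* counterparts, then apply
    the source values. All names are illustrative."""
    for attr in versioned_attrs:
        record["prev_" + attr] = record.get(attr)   # step 1: preserve history
    record.update(incoming)                          # step 2: apply source values
    return record
```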

Figure 17-7 displays a mapping that loads data into the PRODUCTS level of the Type 3 SCD. In this mapping, the effective time of the current record is loaded from the source. You can also use the Default Effective Time of Current Record property to set a default value for effective time of a level record.

Figure 17-7 Loading a Type 3 SCD

Description of Figure 17-7 follows
Description of "Figure 17-7 Loading a Type 3 SCD"

You cannot map a data flow to the attributes that represent the previous values of versioned attributes. For example, in the mapping shown in Figure 17-7, you cannot map an attribute to the PREV_PACK_SIZE and PREV_DESCRIPTION attributes of the PRODUCTS level.

External Table Operator

The External Table operator enables you to source data stored in external tables in the workspace. You can then load the external table data into another workspace object or perform transformations on the data. For example, you can source data stored in an external table, transform the data using mapping operators, and then load the data into a dimension or a cube.

Description of external_table_icon.gif follows
Description of the illustration external_table_icon.gif

Figure 17-8 displays a mapping that uses the External Table operator. The External Table operator EXPENSE_CATEGORY_EXT is bound to the external table of the same name in the workspace. The data stored in this external table is used to load the dimension EXPENSE_CATEGORIES.

Figure 17-8 External Table Operator in a Mapping

Description of Figure 17-8 follows
Description of "Figure 17-8 External Table Operator in a Mapping"

To create a mapping that contains an External Table operator:

  1. Drag and drop an External Table operator onto the Mapping Editor canvas.

    Warehouse Builder displays the Add External Table dialog box.

  2. Use the Add External Table dialog box to create or select an external table. For more information about these options, see Adding Operators that Bind to Workspace Objects.

  3. Map the attributes from the output group of the External Table operator to the target operator or the intermediate transformation operator.

Expand Object Operator

The Expand Object operator enables you to expand an object type and obtain the individual attributes that comprise the object type.

Description of expand_object_icon.gif follows
Description of the illustration expand_object_icon.gif

You can bind and synchronize an Expand Object operator with a workspace object type. To avoid generation errors in the mapping, ensure that you deploy the workspace object type before you deploy the mapping.

The Expand Object operator has one input group and one output group. The input group represents the object type that you want to expand in order to obtain its individual attributes. When you bind an Expand Object operator to a workspace object, the output group of the operator contains the individual attributes that comprise the object type.

To successfully deploy a mapping that contains an Expand Object operator, ensure the following conditions are satisfied.

  • The schema that contains the source tables must be on the same instance as the warehouse schema.

  • The warehouse schema must be granted the SELECT privilege on the source tables.

  • The warehouse schema must be granted the EXECUTE privilege on all the object types and nested tables used in the Expand Object operator.

Figure 17-9 displays a mapping that uses an Expand Object operator. The source table CUSTOMERS contains a column CUSTOMER_ADDRESS of data type ADDR_TYPE, a SQL object type. But the target table CUST contains four different columns, of Oracle built-in data types, that store each component of the customer address. To obtain the individual attributes of the column CUSTOMER_ADDRESS, create an Expand Object operator that is bound to the object type ADDR_TYPE. You then map the CUSTOMER_ADDRESS column to the input group of an Expand Object operator. The output group of the Expand Object operator contains the individual attributes of the column CUSTOMER_ADDRESS. Map these output attributes to the target operator.
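The flattening performed by the Expand Object operator can be pictured with a small sketch. The component names below (street, city, state, zip_code) are assumptions for illustration; the actual attributes of ADDR_TYPE are not listed in this chapter.

```python
from dataclasses import dataclass, asdict

@dataclass
class AddrType:
    """Stand-in for the ADDR_TYPE SQL object type; the component
    names are assumptions, not the real type definition."""
    street: str
    city: str
    state: str
    zip_code: str

def expand_object(address):
    """One object-typed input attribute becomes one scalar output
    attribute per component, as the Expand Object operator does."""
    return asdict(address)
```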

Figure 17-9 Expand Operator in a Mapping

Description of Figure 17-9 follows
Description of "Figure 17-9 Expand Operator in a Mapping"

To define an Expand Object operator in a mapping:

  1. Drag and drop an Expand Object operator onto the Mapping Editor canvas.

  2. Use the Add Expand Object dialog box to create or select an object. For more information about these options, see Adding Operators that Bind to Workspace Objects.

  3. Map the source attribute that needs to be expanded to the input group of the Expand Object operator.

    Note that the signature of the input object type should be the same as that of the Expand Object operator.

  4. Map the output attributes of the Expand Object operator to the target attributes.

Mapping Input Parameter Operator

You can introduce information external to Warehouse Builder as input into a mapping using a Mapping Input Parameter.

Description of input_parm.gif follows
Description of the illustration input_parm.gif

For example, you can use a Mapping Input Parameter operator to pass SYSDATE to a mapping that loads data to a staging area. Use the same Mapping Input Parameter to pass the timestamp to another mapping that loads the data to a target.

When you generate a mapping, a PL/SQL package is created. Mapping input parameters become part of the signature of the main procedure in the package.

The Mapping Input Parameter operator has a cardinality of one. It creates a single row set that can be combined with another row set as input to the next operator.

The names of the output attributes of the operator become the names of the mapping input parameters. The parameters can be used within the mapping by connecting the attributes of the Mapping Input Parameter operator in the Mapping Editor. You can have only one Mapping Input Parameter operator in a mapping.

When you define the Mapping Input Parameter, you specify a data type and an optional default value.

To define a Mapping Input Parameter operator in a mapping:

  1. Drag and drop a Mapping Input Parameter operator onto the Mapping Editor canvas.

  2. Right-click the Mapping Input Parameter operator and select Open Details.

    The INPUT_PARAMETER Editor is displayed.

  3. Select the Output tab and click Add to add output attributes.

    You can rename the attributes and define the data type and other attribute properties.

  4. Click OK to close the INPUT_PARAMETER Editor.

  5. Connect the output attribute of the Mapping Input Parameter operator to an attribute in the target operator as shown in Figure 17-10.

    Figure 17-10 Mapping Editor Showing A Mapping Input Parameter

    Description of Figure 17-10 follows
    Description of "Figure 17-10 Mapping Editor Showing A Mapping Input Parameter"

Mapping Output Parameter Operator

Use a single Mapping Output Parameter operator to send values out of a PL/SQL mapping to applications external to Warehouse Builder.

Description of output_parm.gif follows
Description of the illustration output_parm.gif

A Mapping Output Parameter operator is not valid for a SQL*Loader mapping. When you generate a mapping, a PL/SQL package is created. Mapping output parameters become part of the signature of the main procedure in the package.

The Mapping Output Parameter operator has only one input group and no output groups. You can have only one Mapping Output Parameter operator in a mapping. Only attributes that are not associated with a row set can be mapped into a Mapping Output Parameter operator. For example, a constant, an input parameter, output from a pre-mapping process, or output from a post-mapping process can all contain attributes that are not associated with a row set.

To define a Mapping Output Parameter operator in a mapping:

  1. Drag and drop a Mapping Output Parameter operator onto the Mapping Editor canvas.

  2. Right-click the Mapping Output Parameter operator and select Edit.

  3. Select the Input Attributes tab and click Add to add input attributes.

    You can rename the attributes and define the data type and other attribute properties.

    Figure 17-11 displays an example of a Mapping Output Parameter operator used in a mapping.

    Figure 17-11 Mapping Editor Showing An Output Parameter Operator

    Description of Figure 17-11 follows
    Description of "Figure 17-11 Mapping Editor Showing An Output Parameter Operator"

Materialized View Operator

The Materialized View operator enables you to source data from or load data into a materialized view stored in the workspace.

Description of mview.gif follows
Description of the illustration mview.gif

For example, you can use the data stored in a materialized view to load a cube. The Materialized View operator has one Input/Output group called INOUTGRP1. You cannot add additional groups to this operator, but you can add attributes to the existing Input/Output group.

You can bind and synchronize a Materialized View operator to a workspace materialized view. The workspace materialized view must be deployed before the mapping that contains the Materialized View operator is generated, to avoid errors in the generated code package.

Figure 17-12 displays a mapping that uses a Materialized View operator. The data from the two source tables PRODUCTS and ALL_SALES is joined using a Joiner operator. This data is then aggregated using an Aggregator operator. The aggregated data is used to load the materialized view SALES_MV.

Figure 17-12 Mapping that Contains a Materialized View Operator

Description of Figure 17-12 follows
Description of "Figure 17-12 Mapping that Contains a Materialized View Operator"

To create a mapping that contains a Materialized View operator:

  1. Drag and drop a Materialized View operator onto the Mapping Editor canvas.

    Warehouse Builder displays the Add Materialized View dialog box.

  2. Use the Add Materialized View dialog box to create or select a materialized view. For more information about these options, see "Adding Operators that Bind to Workspace Objects".

  3. Map the attributes of the Materialized View operator.

    If you are using the materialized view operator as a target, connect the source attributes to the Materialized View operator attributes. If you are using the materialized view as a source, connect the Materialized View operator attributes to the target.

Sequence Operator

A Sequence operator generates sequential numbers that increment for each row.

Description of sequence.gif follows
Description of the illustration sequence.gif

For example, you can use the Sequence operator to create surrogate keys while loading data into a dimension table. You can connect a Sequence to a target operator input or to the inputs of other types of operators. You can combine the sequence outputs with outputs from other operators.

Because sequence numbers are generated independently of tables, the same sequence can be used for multiple tables. Sequence numbers may not be consecutive because the same sequence can be used by multiple sessions.

This operator contains an output group containing the following output attributes:

  • CURRVAL: Returns the current value of the sequence, that is, the value most recently generated by NEXTVAL.

  • NEXTVAL: Generates a row set of consecutively incremented numbers beginning with the next value.
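These two output attributes behave like the Oracle sequence pseudocolumns of the same names. The toy Python model below only illustrates that behavior; a real database sequence is shared across sessions and may leave gaps, as noted below.

```python
class Sequence:
    """Toy model of NEXTVAL/CURRVAL semantics; not Warehouse Builder
    code and not a substitute for a database sequence."""
    def __init__(self, start=1, increment=1):
        self._next = start
        self._increment = increment
        self._current = None

    def nextval(self):
        """Return the next value and advance the sequence."""
        self._current = self._next
        self._next += self._increment
        return self._current

    def currval(self):
        """Return the most recently generated value without advancing."""
        if self._current is None:
            raise ValueError("NEXTVAL must be called before CURRVAL")
        return self._current
```

For example, successive `nextval()` calls yield 1, 2, 3, ..., while `currval()` repeats the last value without advancing the sequence.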

You can bind and synchronize Sequences to a workspace sequence in one of the modules. The workspace sequence must be generated and deployed before the mapping containing the Sequence is deployed to avoid errors in the generated code package. See "Adding Operators that Bind to Workspace Objects" for more information.

Generate mappings with sequences using row-based mode. Sequences are incremented even if rows are not selected. If you want a sequence to start from the last number, then do not run your SQL package in set-based or set-based with failover operating modes. See "Runtime Parameters" for more information about configuring mode settings.

Figure 17-13 shows a mapping that uses a Sequence operator to automatically generate the primary key of a table. The NEXTVAL attribute of the Sequence operator is mapped to an input attribute of the target. The other input attributes from the source table are mapped directly to the target.

Figure 17-13 Mapping Sequence Operator in a Mapping

Description of Figure 17-13 follows
Description of "Figure 17-13 Mapping Sequence Operator in a Mapping"

To define a Mapping Sequence operator in a mapping:

  1. Drag and drop the Sequence operator onto the Mapping Editor canvas.

    Warehouse Builder displays the Add Sequence dialog box.

  2. Use the Add Sequence dialog box to create or select a sequence. For more information about these options, see "Adding Operators that Bind to Workspace Objects".

  3. Connect the required output attribute from the Sequence operator to a target attribute.

Table Operator

The Table operator enables you to source data from and load data into tables stored in the workspace.

Description of table.gif follows
Description of the illustration table.gif

You can bind and synchronize a Table operator to a workspace table. To avoid errors in the generated code package, the workspace table must be deployed before the mapping that contains the Table operator is generated.

Figure 17-13 displays a mapping that uses Table operators as both source and target. Figure 17-3 displays a mapping that uses the Table operator as a target.

To define a Table operator in a mapping:

  1. Drag and drop a Table operator onto the Mapping Editor canvas.

    Warehouse Builder displays the Add Table dialog box.

  2. Use the Add Table dialog box to create or select a table. For more information about these options, see "Adding Operators that Bind to Workspace Objects".

  3. Map the attributes of the Table operator.

    If you are using the table as a target, connect the source attributes to the Table operator attributes. If you are using the table as a source, connect the Table operator attributes to the target.

Merge Optimization for Table Operators

Beginning with Warehouse Builder 10.2.0.3, you can enable the Merge Optimization property for table operators. When set to True, this property optimizes the invocation or execution of expressions and transformations in the MERGE statement.

For example, consider a mapping in which the target table contains a column that is part of the update operation only and is mapped to a transformation. In previous releases, Warehouse Builder would execute the transformation for all rows, including rows that did not require transformation. Beginning in this release, if Merge Optimization is enabled, then Warehouse Builder calls the transformation only in the update part of the MERGE statement.
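The effect of the optimization can be sketched as follows: the transformation mapped to an update-only column is invoked only for rows that take the update branch of the merge. This is illustrative Python, not the generated MERGE statement, and all names in it are assumptions.

```python
def merge_rows(target, source, transform):
    """Apply `transform` only in the update branch, mirroring the
    Merge Optimization behavior described above."""
    for key, row in source.items():
        if key in target:
            # UPDATE branch: the row exists, so the update-only column
            # is recomputed and the transformation is invoked.
            target[key]["updated_col"] = transform(row["value"])
        else:
            # INSERT branch: the transformation is not invoked at all.
            target[key] = {"value": row["value"], "updated_col": None}
    return target
```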

Creating Temporary Tables While Performing ETL

Warehouse Builder enables you to use temporary tables while extracting and loading data. Temporary tables are useful in scenarios where you need to extract data from remote sources into multiple targets.

Temporary staging tables are used in the dimension loading logic that is automatically generated by the Dimension operator submapping expansion. This prevents problems that would be caused by doing lookups on the target table.

The properties described below enable you to create temporary tables while performing ETL.

Is Temp Stage Table

When you set the Is Temp Stage Table property to True, any existing bindings for the Table operator are ignored. A temporary staging table is deployed with the mapping and all loading and extracting operations from this Table operator are performed from the staging table.

The name of the deployed table in the database is based on the operator name, with a unique identifier appended to the name to prevent name clashes. The table is automatically dropped when the map is dropped or redeployed. Before each execution of the mapping, the table is automatically truncated.

When you set this property to its default value of False, it has no effect.

Extra DDL Clauses

Use this property to add additional clauses to the DDL statement that is used to create the table. For example, use the following TABLESPACE clause to allocate storage for the temporary table in the MY_TBLSPC tablespace, instead of in the default tablespace: TABLESPACE MY_TBLSPC.

If you do not provide a value for the Extra DDL Clauses property, this property has no effect on the table creation.

Temp Stage Table ID

Use the Temp Stage Table ID property to specify the internal identifier used for the temporary staging table by the code generator. If any other temporary staging table in the mapping has the same value for the Temp Stage Table ID property, then the same deployed temporary staging table will be bound to both operators. This enables multiple usages of the same temporary staging table in the same mapping.

Varray Iterator Operator

When you have an object of type nested table or varray, you can use the Varray Iterator operator to iterate through the values in the table type.

Description of varrayiterator.gif follows
Description of the illustration varrayiterator.gif

This operator accepts a table type attribute as the source and generates a value that is of the base element type defined in the nested table or varray type. If the operator is bound, reconciliation operations are performed on the operator. Reconciliation operations are not supported for unbound Varray Iterator operators.

You can create the Varray Iterator operator as either a bound operator or an unbound operator. You cannot synchronize or validate an unbound Varray Iterator operator. It has only one input group and one output group. You can have an input group with attributes of other data types. However, there must be at least one table type attribute.

The attributes of the output group are a copy of the attributes of the input group, with one difference: the table type attribute to which the operator is bound is replaced in the output group by an attribute of the base element type of that table type. The input group is editable; the output group is not.

Figure 17-14 Varray Iterator Operator in a Mapping


To define a Varray Iterator operator in a mapping:

  1. Drag and drop a Varray Iterator operator onto the Mapping Editor canvas.

    Warehouse Builder displays the Add Varray Iterator dialog box.

  2. From the Add Varray Iterator dialog box, select either an unbound operator or a bound operator.

    • If you select the unbound operator, then a Varray Iterator operator with no attributes is created. You will have to create these attributes manually.

    • If you select the bound operator, then you must select one of the available nested table or varray types shown in the tree. The output attribute has the same type as the base element type.

  3. Click OK.

  4. Map the attributes of the Varray Iterator operator.

For an unbound operator, right-click the unbound Varray Iterator operator on the Mapping Editor canvas and then select Open Details. This opens the Varray Iterator editor dialog box. You can add attributes to the input group by using the Add button. You can only change the data type of the attributes in the output group.
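Conceptually, the operator generates logic similar to the following PL/SQL sketch, which yields one value of the base element type per iteration. The type and variable names are illustrative assumptions, not the actual generated code.

```sql
-- Hypothetical varray type and names; this sketches what the
-- Varray Iterator operator does, not the exact generated code.
DECLARE
  TYPE phone_list_t IS VARRAY(5) OF VARCHAR2(20);
  v_phones phone_list_t := phone_list_t('555-0101', '555-0102');
BEGIN
  -- Each iteration produces one value of the base element type,
  -- which is what the operator's output group exposes.
  FOR i IN 1 .. v_phones.COUNT LOOP
    DBMS_OUTPUT.PUT_LINE(v_phones(i));
  END LOOP;
END;
/
```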

View Operator

The View operator enables you to source data from or load data into a view stored in the workspace.


You can bind and synchronize a View operator to a workspace view. The workspace view must be deployed before the mapping that contains the View operator is generated to avoid errors in the generated code package.

To define a View operator in a mapping:

  1. Drag and drop a View operator onto the Mapping Editor canvas.

    Warehouse Builder displays the Add View dialog box.

  2. Use the Add View dialog box to create or select a view. For more information about these options, see "Adding Operators that Bind to Workspace Objects".

  3. Map the attributes of the View operator.

    If you are using the view as a target, connect the source attributes to the View operator attributes. If you are using the view as a source, connect the View operator attributes to the target.

Using Remote and non-Oracle Source and Target Operators

You can bind a target operator in a mapping to an object in a remote Oracle Database location or in a non-Oracle database location such as SQL Server or DB2. Such operators are referred to as remote or non-Oracle targets. These targets are accessed through database links, which are created from the locations. SAP targets are not supported; it is not possible to generate ABAP to populate SAP.
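For reference, a database link of the kind created from a location, and a statement that loads through it, look like the following. The link, user, and table names are illustrative assumptions.

```sql
-- Hypothetical link and table names. Warehouse Builder creates the
-- link from the target location; generated DML then references
-- remote objects through it.
CREATE DATABASE LINK sales_target_link
  CONNECT TO tgt_user IDENTIFIED BY tgt_password
  USING 'sales_tns_alias';

INSERT INTO customers@sales_target_link (customer_id, name)
SELECT customer_id, name FROM stg_customers;
```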

There are certain restrictions to using remote or non-Oracle targets in a mapping as described in the following sections:

Limitations of Using non-Oracle or Remote Targets

The following limitations apply when you use a remote or non-Oracle target in a mapping:

  • You cannot set the Loading Type property of the target operator to TRUNCATE/INSERT.

    This results in a validation error when you validate the mapping.

  • For non-Oracle targets, setting the Loading Type property of the target operator to INSERT/UPDATE produces the same result as setting the loading type to INSERT.

  • The RETURNING clause is not supported in a DML statement.

    The RETURNING clause enables you to obtain the ROWIDs of the rows that are loaded into the target in row-based mode. These ROWIDs are recorded by the runtime auditing system. However, for a remote or non-Oracle target, the RETURNING clause is not generated, and NULL is passed to the runtime auditing system for the ROWID field.

  • In set-based mode, you cannot load data from an Oracle Database into a remote or non-Oracle target. All other modes, including set-based failover, are supported.

    When you set the Operating Mode property of the target operator to Set-based, a runtime error occurs.
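To illustrate what is omitted: against a local Oracle target, the generated row-based DML can include a RETURNING clause such as the one below. For remote and non-Oracle targets this clause is left out. The table, column, and bind-variable names are illustrative assumptions.

```sql
-- Hypothetical names. For local Oracle targets, row-based code can
-- capture the ROWID of each inserted row for runtime auditing.
INSERT INTO customers (customer_id, name)
VALUES (:id, :name)
RETURNING ROWID INTO :v_rowid;
```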

Warehouse Builder Workarounds for non-Oracle and Remote Targets

When you use a remote or non-Oracle target in a mapping, default workarounds are used for certain restricted activities. These workarounds are listed for your information only. You need not explicitly do anything to enable these workarounds.

The default workarounds used for a remote or a non-Oracle target are as follows:

  • When you set the loading type of a target to INSERT/UPDATE or UPDATE/INSERT in Oracle 9i and to UPDATE in Oracle 10g, a MERGE statement is generated to implement this mapping in set-based mode. But a MERGE statement cannot be run against remote or non-Oracle targets. Thus, when you use a remote or non-Oracle target in a mapping, code is generated without a MERGE statement. The generated code is the same as that generated when the PL/SQL generation mode is set to Oracle8i.

  • For set-based DML statements that reference a database sequence and load into a remote or non-Oracle target, the GLOBAL_NAMES parameter must be set to TRUE. When code is generated for a mapping, this parameter is set to TRUE if the mapping contains a remote or non-Oracle target.

  • For a multi-table insert to a remote or non-Oracle target, an INSERT statement is generated per table instead of a multi-table insert statement.

  • While performing bulk inserts on a remote or non-Oracle Database, bulk processing code is not generated. Instead, code that processes one row at a time is generated. This means that the Generate Bulk property of the operator is ignored.
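As a sketch of the MERGE workaround (table and column names are assumptions): for a local Oracle target with an INSERT/UPDATE-style loading type, a MERGE statement is generated, while for a remote or non-Oracle target, which cannot run MERGE, the generator falls back to Oracle8i-style separate UPDATE and INSERT statements.

```sql
-- MERGE form generated for local Oracle targets (illustrative names):
MERGE INTO customers t
USING stg_customers s ON (t.customer_id = s.customer_id)
WHEN MATCHED THEN UPDATE SET t.name = s.name
WHEN NOT MATCHED THEN INSERT (customer_id, name)
  VALUES (s.customer_id, s.name);

-- Oracle8i-style fallback generated for remote or non-Oracle targets:
UPDATE customers t
SET t.name = (SELECT s.name FROM stg_customers s
              WHERE s.customer_id = t.customer_id)
WHERE EXISTS (SELECT 1 FROM stg_customers s
              WHERE s.customer_id = t.customer_id);

INSERT INTO customers (customer_id, name)
SELECT s.customer_id, s.name
FROM stg_customers s
WHERE NOT EXISTS (SELECT 1 FROM customers t
                  WHERE t.customer_id = s.customer_id);
```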

Note:

The loading types used for remote or non-Oracle targets are the same as the ones used for other Oracle target operators. For more information about the loading type property, see Loading Types for Oracle Target Operators.

Using Flat File Source and Target Operators

The Flat File operator enables you to use a flat file as a source or target in a mapping.

Setting Properties for Flat File Source and Target Operators

You can set properties for a flat file operator as either a source or target. You can set Loading Types for Flat Files and the Field Names in the First Row setting. All other settings are read-only and depend upon how you imported the flat file. The operator properties window displays as shown in Figure 17-15.

Figure 17-15 Properties Window for Flat File Operator


Loading Types for Flat Files

Select a loading type from the list:

  • Insert: Creates a new target file. If a target file already exists, the newly created file replaces it.

  • Update: Creates a new target file if one does not already exist. If a target file already exists, the data is appended to that file.

  • None: No operation is performed on the data in the target file. This setting is useful for testing purposes. All transformations and extractions are run without affecting the target.

Field Names in the First Row

Set this property to True to write the field names in the first row of the target file, or False to omit them.

Flat File Operator

You can use a flat file operator as either a source or target.


However, a flat file operator cannot be used as both a source and a target within the same mapping. Flat file sources and targets also differ in the language of the code generated for them. Consequently, mappings can contain a mix of flat files, relational objects, and transformations, subject to the restrictions discussed later in this section.

You have the following options for flat file operators:

  • Using Previously Imported Flat Files

  • Importing and Binding New Flat Files into Your Mapping

  • Defining New Flat File Sources or Targets in Mappings

Flat File Source Operators

You can introduce data from a flat file into a mapping using either a flat file operator or an external table operator. If you are loading a large volume of data, loading from a flat file enables you to use the SQL*Loader DIRECT PATH option, which results in better performance.

If you are not loading large volumes of data, you can benefit from many of the relational transformations available in the external table feature. See "External Table Operators versus Flat File Operators" for more information.

As a source, the flat file operator acts as the row set generator that reads from a flat file using the SQL*Loader utility. Do not use a flat file source operator to map to a flat file target or to an external table. When you design a mapping with a flat file source operator, only certain operators can be used with it.

When you use a flat file as a source in a mapping, remember to create a connector from the flat file source to the relational target for the mapping to deploy successfully.

Flat File Target Operators

A mapping with a flat file target generates a PL/SQL package that loads data into a flat file instead of loading data into rows in a table.

Note:

A mapping can contain a maximum of 50 flat file target operators at one time.

You can use an existing flat file with either a single or multiple record types. If you use a multiple-record-type flat file as a target, you can only map to one of the record types. If you want to load all of the record types in the flat file from the same source, you can drop the same flat file into the mapping as a target again and map to a different record type. For an example of this, see "Using Direct Path Loading to Ensure Referential Integrity in SQL*Loader Mappings". Alternatively, create a separate mapping for each record type you want to load.

To create a new flat file target, see "Creating a New Flat File Target".