6.37 Mapping Editor Logical Tab

Use this to define the logical mapping rules from source to target datastores.

The Logical tab consists of a logical diagram which contains components that are linked through connectors with arrow heads pointing the direction of the flow from source to target. Each component has left and right connector ports to anchor incoming and outgoing connectors.

To create a Logical Diagram, perform the following steps:

  1. Drag and drop datastores or reusable mappings from the designer navigator into the source or target areas. Source datastores and reusable mappings can also be organized within a Dataset Component.

  2. Drag and drop components such as joins, filters, or others from the component palette into the source area to define rules for the mapping between source and target. If datasets are used, join, lookup, or filter rules can be defined by dragging attributes between datastores and onto the dataset background.

  3. Create connectors between components by dragging from the originating connector port to the destination connector port. Connectors can also be implicitly created by dragging attributes between components. When creating a connector between two ports, an Attribute Matching Dialog can be shown to automatically map attributes based on name or position.

  4. Set component properties, expressions, and conditions by selecting components and editing the properties in the Property Inspector.

Toolbar

Name Description
Perform Layout Orders the mapping components in a default layout.
View as Compact Changes all selected components to a compact view as icons.
View as Expanded Changes all selected components to an expanded view.
Validate the Mapping Checks the mapping for correctness; any issues that are found are listed in the Validation Window. See: Mapping Editor Validation Results Tab.

Properties

Properties Description
Name Name of the mapping.
Description Detailed description of the mapping.
Target Load Order List of targets that are loaded in a defined order. Uses the Target Load Order Dialog to select and order targets.
Staging Location Hint Specifies which logical schema is used as a staging location. If left as the default <unbound>, the logical schema of the target is used.

6.37.1 Component Palette

The component palette allows you to pick components and drag and drop them into a logical diagram (see: Mapping Editor Logical Tab) of a mapping or reusable mapping.

The following options are available on the component palette:

  • All Pages Option: Shows components from all Component Palette pages

  • Logical Option: Shows components from the Mapping Logical view

  • Reusable Option: Shows components that can be used in a Reusable Mapping Logical view. Only visible in the Reusable Mapping Editor.

  • My Components Option: Shows a subset of components based on prior use and selection as a favorite

    • Favorites: Shows components that were selected as favorites. Favorites can be added to this list by right-clicking a component and choosing the "Add to Favorites" menu item.

    • Recently Used: Shows components that were recently used in mappings

Note: Because of the custom component extensibility in ODI, there may be other components in the palette.

6.37.1.1 11g Compatible Dataset Component

An 11g-compatible dataset component is a special case of the Dataset Component.

A dataset can be converted into equivalent flow components by using the command Convert to Flow in the dataset component context menu.

11g-compatible datasets are only present in 11g-compatible mappings. 11g-compatible mappings maintain compatibility with the 11g SDK, so that existing programs using the ODI SDK continue working in 12c. 11g-compatible mappings are created in one of the following ways:

  • Upgrade from an 11g work repository, and in the Upgrade Assistant, deselect Don't Maintain ODI 11g SDK Compatibility.

  • Through commands of the 11g-compatible SDK.

11g-compatible mappings are presented under the mapping node in the Designer Navigator. 11g-compatible mappings cannot be created or modified in the Studio; a mapping opened in a mapping editor is read-only. These mappings are distinguished from 12c mappings by an 11g interface icon.

The 11g-compatible dataset uses the same properties as a standard dataset but it additionally has a list of Component Attributes with expressions and execute-on hints.

6.37.1.2 Aggregate Component

Use to group and aggregate attributes using aggregate functions, such as average, count, maximum, sum, and so on. ODI automatically selects attributes without aggregation functions to be used as group-by attributes. You can override this by using the Is Group By Column and Manual Group By Clause properties.
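In other words, the component behaves like a SQL GROUP BY with an optional HAVING clause: attributes without aggregate functions become the group-by keys. A minimal sketch of that behavior, using SQLite as a stand-in for the target technology and hypothetical table and column names:

```python
import sqlite3

# Hypothetical source rows: (region, amount)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("EAST", 100), ("EAST", 250), ("WEST", 40)])

# REGION has no aggregate function, so it is used as the group-by
# attribute; TOTAL uses SUM. The HAVING property filters the groups.
rows = conn.execute("""
    SELECT region, SUM(amount) AS total
    FROM sales
    GROUP BY region
    HAVING SUM(amount) > 50
    ORDER BY region
""").fetchall()
print(rows)  # [('EAST', 350.0)]
```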

Attributes

See: Component Attributes

General

Properties Description
Name Name of the Aggregate component.
Description Detailed description of the Aggregate component.
HAVING Having clause to be added to the generated SQL. This is optional.
Manual Group by Clause Group by clause to be used in the generated SQL, instead of the columns automatically selected by the Is Group By property.
Execute on Hint Preferred execution unit that performs the transformation. This is a hint that can be overridden on the physical view. This execution unit must have sufficient capacity (functions, SQL, and so forth) to effect these transformations.
  • No Hint (default): The system determines the optimal location for the execution.

  • Source: This is the data server that hosts the source datastore or datastores. For a transformation that could be executed on several sources, a drop-down list allows you to choose the source on which it is to be executed. For example, if two source tables reside on two different Oracle servers and you enter a SYSDATE expression executed on the source, this choice lets you specify which Oracle server's date is used.

  • Staging Area: This is the execution unit selected on the Definition tab on the mapping. By default, this is the Target unit.

  • Target: This is the data server that hosts the target datastore. Mappings on the target cannot reference source attributes, nor can they contain aggregate functions.

Component Type Type of the component, set to AGGREGATE.

Connection Points

See: Connector Points.

6.37.1.3 Dataset Component

A dataset component is a selector component (see: Selector Components) that allows the user to group multiple data sources and join them through relationship joins.

A dataset can contain datastores, reusable mappings, joins, lookups, and filters.

Joins and lookups are created by dragging an attribute from one datastore to another inside the dataset. A dialog box is shown to select if the relationship will be a join or lookup. A filter is created by dragging a datastore or reusable mapping attribute onto the dataset background. Joins, lookups, and filters cannot be dragged from the component palette into the dataset.

General

Properties Description
Name Name of the dataset.
Description Description of the dataset.
Component Type Type of the component, set to DATASET

Datastores

See the properties in Datastore Component.

Joins

See the properties in Join Component.

Filters

See the properties in Filter Component.

Lookups

See the properties in Lookup Component.

Reusable Mappings

See the properties in Reusable Mapping Component.

Connection Points

See: Connector Points.

6.37.1.4 Distinct Component

A distinct component is a projector component that projects a subset of attributes in the flow. The values of each row have to be unique; the behavior follows the rules of the SQL DISTINCT clause.
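The resulting code behaves like a SQL SELECT DISTINCT. A minimal sketch, using SQLite as a stand-in for the target technology and hypothetical names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE src (city TEXT, country TEXT)")
conn.executemany("INSERT INTO src VALUES (?, ?)",
                 [("Paris", "FR"), ("Paris", "FR"), ("Lyon", "FR")])

# Projecting the attributes with DISTINCT removes duplicate rows.
rows = conn.execute(
    "SELECT DISTINCT city, country FROM src ORDER BY city").fetchall()
print(rows)  # [('Lyon', 'FR'), ('Paris', 'FR')]
```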

Attributes

See: Component Attributes

General

Properties Description
Name Name of the Distinct component.
Description Detailed description of the Distinct component.
Execute on Hint Hint on which execution unit the distinct clause will be evaluated. Possible Values: No Hint (default), Source, Staging, Target.
Component Type Type of the component, set to DISTINCT

Connection Points

See: Connector Points.

6.37.1.5 Expression Component

An expression component is a selector component (see: Selector Components) that inherits attributes from a preceding component in the flow and adds additional reusable attributes. An expression can be used to define a number of reusable expressions within a single mapping. Attributes can be renamed and transformed from source attributes using SQL expressions. The behavior follows the rules of the SQL SELECT clause.
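For example, an expression component might combine and rename source attributes into derived output attributes. A sketch of the equivalent SELECT-clause behavior, using SQLite as a stand-in and hypothetical names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE customers (first_name TEXT, last_name TEXT, born INTEGER)")
conn.execute("INSERT INTO customers VALUES ('Ada', 'Lovelace', 1815)")

# Each output attribute of the expression component corresponds to one
# SELECT-list expression; attributes can be renamed and transformed.
rows = conn.execute("""
    SELECT UPPER(first_name || ' ' || last_name) AS full_name,
           2025 - born AS age_years
    FROM customers
""").fetchall()
print(rows)  # [('ADA LOVELACE', 210)]
```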

Attributes

See: Component Attributes

General

Properties Description
Name Name of the expression component.
Description Detailed description of the expression component.
Execute on Hint Hint on which execution unit the expression is evaluated. Possible Values: No Hint (default), Source, Staging, Target.
Component Type Type of the component, set to EXPRESSION.

Connection Points

See: Connector Points.

6.37.1.6 Filter Component

A filter is a selector component (see: Selector Components) that can select a subset of data based on a filter condition. The behavior follows the rules of the SQL WHERE clause.

Filters can be located in a dataset or directly in a mapping as a flow component. When used in a dataset, a filter is connected to one datastore or reusable mapping to filter all projections of this component out of the dataset.

Condition

Properties Description
Filter Condition SQL Condition to filter the data passed through the component.
Technical Description Provides a version of the filter with all attribute names replaced by their descriptions. This field cannot be edited.
Execute on Hint Hint on which execution unit the filter will be evaluated. Possible Values: No Hint (default), Source, Staging, Target.

General

Properties Description
Name Name of the filter component.
Description Detailed description of the filter component.
Active Indicator Indicates that the filter must be considered when executing the mapping.
Component Type Type of the component, set to FILTER.

Connection Points

See: Connector Points.

Note:

Connection Points are not shown for components within a dataset.

6.37.1.7 Flatten Component

The flatten component is a Projector component (see: Projector Components) that can process input data with complex structure and produce a flattened representation of the same data using standard datatypes.
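Conceptually, flattening turns each element of a complex attribute into its own flat row, repeating the scalar attributes alongside it. The following is a language-level illustration of that idea only, not the ODI implementation; the record and field names are hypothetical:

```python
# A nested (complex-type) record: "lines" is the complex attribute.
order = {
    "order_id": 7,
    "lines": [
        {"sku": "A1", "qty": 2},
        {"sku": "B2", "qty": 1},
    ],
}

def flatten(rec, complex_attr):
    """Emit one flat row per element of the complex attribute,
    carrying the scalar attributes along with each element."""
    scalars = {k: v for k, v in rec.items() if k != complex_attr}
    for item in rec[complex_attr]:
        yield {**scalars, **item}

rows = list(flatten(order, "lines"))
print(rows)
# [{'order_id': 7, 'sku': 'A1', 'qty': 2},
#  {'order_id': 7, 'sku': 'B2', 'qty': 1}]
```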

Attributes

See Component Attributes for information on attributes. In addition to the attributes described in the Component Attributes section, the following attribute is specific to the flatten component.

Properties Description
Tag Used if the component attribute has a different name from the one used to define the physical data.

General

Properties Description
Name Name of the component.
Description Detailed description of the flatten component.
Component Type Type of component, set to Flatten.
Complex Type Attribute Input attribute that contains the complex data, which must be flattened.
Include Nulls Select to generate rows with no data for the attributes that are defined as NULL.
Execute on Hint Hint on which execution unit the expression is evaluated. Possible values are: No Hint (default), Source, Staging, Target.

Connection Points

See: Connector Points.

6.37.1.8 Input and Output Signature Components

Input Signature and Output Signature are Projector components (see: Projector Components) that define the input and output of reusable mappings. A reusable mapping can have zero to many input and output signatures.

Attributes

Attributes are defined by the user to define the data structure of the reusable mapping. Input and output signatures have different sets of attribute properties. See: Component Attributes for details.

General

Properties Description
Name Name of the signature component.
Description Detailed description of the signature component.
Component Type Type of the component, set to INPUTSIGNATURE or OUTPUTSIGNATURE.

Connection Points

See: Connector Points.

6.37.1.9 Jagged Component

A jagged component processes unstructured data. Source data from sources such as social media or e-commerce businesses are represented in a key-value free format. Using the jagged component, this data can be transformed into structured entities that can be loaded into database tables.

Attributes and Attribute Groups

The jagged component has one input group and multiple output groups, based on the configuration of the component. Using the jagged component, you can group attributes based on dimensionality; these attribute groups are exposed as output groups. You can specify one or more attribute groups and assign named attributes to these groups. Attribute grouping allows you to specify data or dimensional sectioning. The key-value datasets can contain multiple logical datasets, such as dimensional data, detail data, or hierarchical data. Cube data requires grouping of dimensional attributes, and complex data requires attribute grouping to specify complex type definitions.
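Conceptually, the component pivots key-value pairs that share a record identifier into one structured row per record, with the Default property filling in missing keys. The following is an illustration of that idea only, not the ODI implementation; all names are hypothetical:

```python
from itertools import groupby
from operator import itemgetter

# Incoming key-value data with a running record id: (record_id, key, value)
incoming = [
    (1, "name", "Alice"), (1, "city", "Oslo"),
    (2, "name", "Bob"),   # the "city" key is missing for record 2
]

DEFAULTS = {"city": "UNKNOWN"}  # the Default property fills missing keys

rows = []
for rec_id, group in groupby(sorted(incoming), key=itemgetter(0)):
    row = dict(DEFAULTS)
    row.update({k: v for _, k, v in group})
    rows.append(row)

print(rows)
# [{'city': 'Oslo', 'name': 'Alice'}, {'city': 'UNKNOWN', 'name': 'Bob'}]
```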

See Component Attributes for information on attributes. In addition to the attributes described in the Component Attributes section, the following attributes are specific to the jagged component.

Properties Description
Default Use to define a default value for the missing keys in an incoming dataset.
Record ID Select to allow the use of running sequences to delineate rows in the datasets. An incoming dataset is usually a name and a value pair but in some cases, running row ID sequences are also generated to help in the pivoting of the dataset. The Record ID attribute stores the input data set sequence value.
Unique Index Select to identify the attribute as the primary key for the attribute grouping to which it belongs. This can be used to validate or consolidate the incoming datasets and is used in combination with the Consolidate Data property defined for the group.
Required Select to identify the attribute as required for generating an output record.

Note: If the required attribute is not part of the incoming dataset, then no output record will be generated.


General

Properties Description
Name Name of the component.
Description Detailed description of the component.
Component Type Type of the component, set to Jagged.
End of Data Indicator Use to indicate the attribute that will mark the beginning of a new record.

Connection Points

See Connector Points for information on connector points. In addition to the connector points described in the Connection Points section, the following attribute group properties are populated under output connector points.

Properties Description
Serialization file Name of the temporary file used by the jagged component to process incoming records. The jagged component partitions the incoming data to the relevant attribute groups and to the corresponding file. Jagged uses default names if the serialization file names are not specified.

Note: The default attribute group others has a predefined temporary file name.

Consolidate data Select to overwrite or update the existing row with the incoming row if there is key collision. This allows the data to be consolidated before it is sent to the downstream component for further processing.

Note: For attribute groups that have the Consolidate data option selected and at least one primary key attribute, the jagged component retains the last updated row.


6.37.1.10 Join Component

A join is a selector component (see Selector Components) that creates a join between multiple flows. The attributes of all flows are combined as the attributes of the join component.

A join can be located in a dataset or directly in a mapping as a flow component. A join combines data from two or more components, datastores, datasets, or reusable mappings.

When used in a dataset, a join combines the data of the datastores using the selected join type. Joins in a dataset can be created by dragging an attribute from one datastore to another.

A join used as a flow component can join two or more datastores. A join condition can be formed by dragging attributes from two components successively onto a join component in the mapping editor; the join condition is an equi-join between the two attributes.
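The equi-join and outer-join semantics follow standard SQL. A sketch using SQLite as a stand-in for the target technology, with a REGION_ID condition like the example in the table below; table contents are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE regions (region_id INTEGER, name TEXT);
    CREATE TABLE cities  (city TEXT, region_id INTEGER);
    INSERT INTO regions VALUES (1, 'North'), (2, 'South');
    INSERT INTO cities  VALUES ('Aberdeen', 1), ('Atlantis', 99);
""")

# Inner join: only rows satisfying reg.region_id = cit.region_id.
inner = conn.execute("""
    SELECT cit.city, reg.name FROM cities cit
    JOIN regions reg ON reg.region_id = cit.region_id
""").fetchall()
print(inner)  # [('Aberdeen', 'North')]

# Left outer join keeps unmatched rows from the left (cities) side.
left = conn.execute("""
    SELECT cit.city, reg.name FROM cities cit
    LEFT OUTER JOIN regions reg ON reg.region_id = cit.region_id
    ORDER BY cit.city
""").fetchall()
print(left)  # [('Aberdeen', 'North'), ('Atlantis', None)]
```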

Condition

Properties Description
Join Condition A join condition is usually expressed in SQL language. For sequenced joins (ISO), this is the text of the ON clause.

For example: REG.REGION_ID=CIT.REGION_ID

Join Type The following join types can be selected:
  • Inner Join (No check boxes selected)

  • Left Outer Join (Left datastore check box selected)

  • Right Outer Join (Right datastore check box selected)

  • Full Outer Join (Both datastore check boxes selected)

  • Cross Join (Cross check box selected)

  • Natural Join (Natural check box selected)

Technical Description Provides a version of the join with all attribute names replaced by their descriptions. This field cannot be edited.
Generate ANSI Syntax Only available for join components inside datasets.

If set, ODI generates SQL in ANSI syntax to implement the join logic. In ANSI syntax, changing the order of joins may change the result, especially when outer joins are involved.

Join Order Only available for join components inside datasets.

If true, the join order is determined by the numerical field User Defined. If false, ODI chooses an arbitrary join order when converting the dataset logic to flow-based logic. If Generate ANSI Syntax is set, it is strongly recommended that you carefully review the join orders. When Generate ANSI Syntax is checked, ODI automatically sets a unique user-defined value for Join Order if it has not yet been set.

User Defined Join order number to determine the order of multiple joins and lookups in a dataset. A join with a lower order number is resolved first. The number must be a positive integer and is only considered if the Join Order check box is checked.
Execute on Hint Hint on which execution unit the join will be evaluated. Possible Values: No Hint (default), Source, Staging, Target.

If Source is selected, all datastores in the join need to be in the same data server, and the server must have join capabilities.


General

Properties Description
Name Name of the join component.
Description Detailed description of the join component.
Active Indicator This check box indicates that the join must be considered when executing the mapping.
Component Type Type of component, set to JOIN.
Join Source Order Only available for join components inside datasets.

This field cannot be edited.

Displays a comma-separated list of the two input datastore components of the join.

Note: All joins in a dataset are binary joins.

The first component in this list is taken as the "left" side of the join, and the second as the "right" side. The order of the data sources is important for unambiguously defining outer joins.

Derived from If the join is a derived join, shows name of the Join component this join is derived from.

A derived join is created if a join condition of another join involves more than two datastores. In this case the join is deconstructed into multiple conditions with two datastores; all additional conditions create derived joins.

Only available for join components inside datasets.


Connection Points

See: Connector Points.

Note:

Connection Points are not shown for components within a dataset.

6.37.1.11 Lookup Component

A lookup is a selector component (see Selector Components) that returns data from a lookup flow, given a value from a driving flow. The attributes of both flows are combined, similarly to a join component. A lookup can be implemented in generated SQL either through a Left Outer Join or a nested Select statement.

Lookups can be located in a dataset or directly in a mapping as a flow component.

When used in a dataset, a Lookup is connected to two datastores or reusable mappings combining the data of the datastores using the selected lookup type. Lookups in a dataset can be created by dragging an attribute from the driving datastore to the lookup datastore. The lookup datastore can be identified in the dataset by an arrowhead pointing to it.

Lookups used as flow components can join two flows. A lookup condition can be created by dragging an attribute from the driving flow and then the lookup flow onto the lookup component; the lookup condition is an equi-join between the two attributes.
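The Left Outer Join implementation strategy, combined with the "Return a row with the following default values" no-match option described below, can be sketched as follows; SQLite stands in for the target technology and all names are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders   (order_id INTEGER, cust_id INTEGER);
    CREATE TABLE lkp_cust (cust_id INTEGER, cust_name TEXT);
    INSERT INTO orders   VALUES (1, 10), (2, 99);
    INSERT INTO lkp_cust VALUES (10, 'Alice');
""")

# Driving flow: orders. Lookup flow: lkp_cust.
# COALESCE sketches the default value for no-match rows.
rows = conn.execute("""
    SELECT o.order_id, COALESCE(c.cust_name, 'N/A') AS cust_name
    FROM orders o
    LEFT OUTER JOIN lkp_cust c ON c.cust_id = o.cust_id
    ORDER BY o.order_id
""").fetchall()
print(rows)  # [(1, 'Alice'), (2, 'N/A')]
```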

Condition

Properties Description
Lookup Condition A lookup expression is usually expressed in SQL language.

For example: REG.REGION_ID=CIT.REGION_ID

Technical Description Formal syntax of expression used for lookup. For example:

[SRC_CUSTOMERS (SRC_CUSTOMERS)].[ADDRESS_KEY] = [SAMP_ADDRESSES_D (SAMP_ADDRESSES_D)].[ADDRESS_KEY]

This field cannot be edited.

Execute on Hint Hint on which execution unit the lookup is evaluated. Possible Values: No Hint (default), Source, Staging, Target.

If Source is selected, all datastores in the lookup need to be in the same data server, and the server must have join capabilities.


Match Row Rules

Properties Description
Multiple Match Rows Select from the following to specify the action to perform when multiple rows are returned by the lookup operation:
  • All Rows (number of result rows may differ from the number of input rows): This option indicates that when the lookup operation returns multiple rows, all the rows should be returned as the lookup result.

  • Error: multiple rows cause mapping to fail: This option indicates that when the lookup operation returns multiple rows, the mapping execution fails.

  • Select any single row: This option indicates that when the lookup operation returns multiple rows, any one row from the returned rows must be selected as the lookup result.

  • Select first single row: This option indicates that when the lookup operation returns multiple rows, the first row from the returned rows must be selected as the lookup result.

  • Select last single row: This option indicates that when the lookup operation returns multiple rows, the last row from the returned rows must be selected as the lookup result.

  • Select nth single row: This option indicates that when the lookup operation returns multiple rows, the nth row from the result rows must be selected as the lookup result.

Note: For existing mappings in Oracle Data Integrator 12.1.2, the "Deprecated - All Rows (number of result rows may differ from the number of input rows)" and "Deprecated - Error: multiple rows cause mapping to fail" options indicate the same functionality as the "All Rows (number of result rows may differ from the number of input rows)" and "Error: multiple rows cause mapping to fail" options respectively.

No-Match Rows Select one of the following options to specify the action to perform when no rows are returned by the lookup operation:

Return no row: This option does not return any row when no row in the lookup results satisfies the lookup condition.

Return a row with the following default values: This option returns a row that contains default values when no row in the lookup results satisfies the lookup condition. Use the Lookup Attributes Default Value & Order By to specify the default values for each lookup attribute.

Nth Row Number Specify the value of the nth row. The value of the nth row should be equal to or greater than 1.

This field is displayed when the Select nth single row option is selected in the Multiple Match Rows drop-down list.

Lookup Attributes Default Value & Order By Specify how the result set that contains multiple rows should be ordered. Ensure that the attributes are listed in the same order (from top to bottom) in which you want the result set to be ordered. The Default Value column is displayed if Return a row with the following default values is selected in No-Match Rows.
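For instance, "Select first single row" can be sketched as a nested select that orders the matching rows (as the Order By list above would) and keeps only the first; SQLite stands in for the target technology and the names are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE drv (key INTEGER);
    CREATE TABLE lkp (key INTEGER, price REAL);
    INSERT INTO drv VALUES (1);
    INSERT INTO lkp VALUES (1, 9.5), (1, 7.0);  -- two matches for key 1
""")

# The nested select orders the multiple matches and returns the first,
# so exactly one result row is produced per driving row.
rows = conn.execute("""
    SELECT d.key,
           (SELECT l.price FROM lkp l
            WHERE l.key = d.key
            ORDER BY l.price
            LIMIT 1) AS price
    FROM drv d
""").fetchall()
print(rows)  # [(1, 7.0)]
```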

General

Properties Description
Name Name of the lookup component.
Description Detailed description of the lookup component.
Component Type Type of component, set to LOOKUP.
Driver Source Name of the driver source component.
Lookup Source Name of the lookup source component.

Connection Points

See: Connector Points.

Note:

Connection Points are not shown for components within a dataset.

6.37.1.12 Pivot Component

A pivot component is a projector component (see Projector Components) that transforms data that is contained in multiple input rows into a single output row. The pivot component extracts data from a source once and produces one row from a set of source rows that are grouped by attributes in the source data. The pivot component can be placed anywhere in the data flow of a mapping.

Attributes

See Component Attributes for information on attributes. In addition to the attributes described in the Component Attributes section, the following attribute is specific to the pivot component.

Properties Description
Matching Row Value of the row locator, which indicates the row within the pivot group from which the attribute obtains its data. You can modify the Row Locator values to define the available values for the Matching Row attribute.

General

Properties Description
Name Name of the pivot component.
Description Detailed description of the pivot component.
Execute on Hint Hint on which execution unit the pivot is evaluated. Possible Values: No Hint (default), Source, Staging, Target.
Aggregate Function Name of the aggregate function used in a pivot operation. For example, MIN (default), SUM, AVG.
Component Type Type of component, set to PIVOT.

Row Locator

When using the pivot component, multiple input rows are transformed into a single row based on the row locator. The row locator is an attribute that must be selected from the source to correspond with the set of defined output attributes. The row locator Values specify the possible values of the row locator. It is necessary to specify a row locator to perform the pivot operation.
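The mechanism above can be sketched in plain SQL: each output attribute selects the rows whose locator matches its Matching Row value, and the aggregate function collapses each group into a single row. SQLite stands in for the target technology; the names are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (year INTEGER, quarter TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                 [(2024, "Q1", 10), (2024, "Q2", 20), (2025, "Q1", 5)])

# QUARTER is the row locator; 'Q1' and 'Q2' are its defined values.
# Each output attribute picks rows matching its locator value, and the
# aggregate function (SUM here) collapses the group to one row per year.
rows = conn.execute("""
    SELECT year,
           SUM(CASE WHEN quarter = 'Q1' THEN amount END) AS q1,
           SUM(CASE WHEN quarter = 'Q2' THEN amount END) AS q2
    FROM sales
    GROUP BY year
    ORDER BY year
""").fetchall()
print(rows)  # [(2024, 10.0, 20.0), (2025, 5.0, None)]
```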

Connector Points

See: Connector Points.

6.37.1.13 Reusable Mapping Component

A reusable mapping component is a projector component (see Projector Components) that transforms zero or more incoming flows to zero or more outgoing flows, based on a nested reusable mapping (see Reusable Mapping Editor). A reusable mapping allows the definition of flow logic and expressions that can be reused in multiple mappings.

Select View As: Expanded in the context menu of the Reusable Mapping component to display the embedded reusable mapping.

Reusable Mapping Signatures

This table lists the attributes of input and output signatures (see: Input and Output Signature Components) defined in the reusable mapping. The attribute properties are read-only and can only be changed in the reusable mapping itself. Input attributes allow the setting of an expression to map from attributes of the incoming flow, as well as an Execute on Hint for executing the expression.

General

Properties Description
Name Name of the reusable mapping component.
Description Detailed description of the reusable mapping component.
Component Type Type of the component, set to REUSABLEMAPPING.
Reusable Mapping Name of the reusable mapping used in this component.
Subselect enabled If set to true, the SQL statement generated from the reusable mapping is placed into a nested select statement.

The Subselect Enabled check box is provided for backward compatibility. It is only enabled if the Reusable Mapping has no Input Signatures and exactly one Output Signature, which is the equivalent of the old Temporary Interface.

When this check box is unchecked, the code generator will decide if the reusable mapping expression is "flattened" into the generated SQL or if a subselect is still used.


6.37.1.14 Set Component

A set component is a projector component (see: Projector Components) that combines multiple input flows into one using set operations such as UNION, INTERSECT, EXCEPT, MINUS, and others. The behavior reflects the SQL operators.

Additional input flows can be added to the set component by connecting new flows to it. The number of input flows is shown in the list of Input Connection Points in the Operators tab. If an input flow is removed, the input connection point needs to be removed as well.

Attributes

Properties Description
Target Name of the attribute.
Expression for: INPUT<#> Expression originating from the connector point called <INPUT_NAME>.

General

Properties Description
Name Name of the set component.
Description Detailed description of the set component.
Execute on Hint Hint on which execution unit the set operation is evaluated. Possible Values: No Hint (default), Source, Staging, Target.

If Source is selected, all datastores feeding the set component need to be in the same data server, and the server must be capable of performing the set operation.

Component Type Type of component, set to SET.

Operators Tab

The operators tab contains the standard Connector Points properties and a set operator for the second and following input connection points. The operator can be one of the following SQL-defined set operations:

  • EXCEPT

  • EXCEPT ALL

  • INTERSECT

  • INTERSECT ALL

  • INTERSECT DISTINCT

  • MINUS

  • MINUS ALL

  • MINUS DISTINCT

  • UNION

  • UNION ALL

  • UNION DISTINCT
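The operators follow standard SQL set semantics. A sketch of the main ones, using SQLite as a stand-in (SQLite supports UNION, UNION ALL, EXCEPT, and INTERSECT; MINUS is the Oracle equivalent of EXCEPT):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (x INTEGER);
    CREATE TABLE b (x INTEGER);
    INSERT INTO a VALUES (1), (2), (2);
    INSERT INTO b VALUES (2), (3);
""")

q = lambda sql: conn.execute(sql).fetchall()

# UNION removes duplicates; UNION ALL keeps them.
print(q("SELECT x FROM a UNION SELECT x FROM b ORDER BY x"))
# [(1,), (2,), (3,)]
print(q("SELECT x FROM a UNION ALL SELECT x FROM b ORDER BY x"))
# [(1,), (2,), (2,), (2,), (3,)]
# EXCEPT returns rows of the first flow that are absent from the second.
print(q("SELECT x FROM a EXCEPT SELECT x FROM b"))    # [(1,)]
# INTERSECT returns rows common to both flows.
print(q("SELECT x FROM a INTERSECT SELECT x FROM b")) # [(2,)]
```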

6.37.1.15 Sort Component

A sort is a selector component (see Selector Components) that sorts the incoming flow based on a list of attributes. The functionality is equivalent to the SQL ORDER BY clause.

Sort condition fields can be added by dragging source attributes onto the sort component.

Condition

Properties Description
Sorter Condition Comma-separated list of attributes to be used for sorting.
Execute on Hint Hint on which execution unit the sort is evaluated. Possible Values: No Hint (default), Source, Staging, Target.

If Source is selected, all datastores feeding the sort must be in the same data server, and the server must be capable of performing the sort.


General

Properties Description
Name Name of the sort component.
Description Detailed description of the sort component.
Component Type Type of component, set to SORT.

Connection Points

See: Connector Points.

6.37.1.16 Split Component

A Split is a Selector component (see: Selector Components) that divides a flow into two or more flows based on if-then-else conditions. If a flow is divided unconditionally into multiple flows, no split component is necessary. It is sufficient to connect multiple flows to an outgoing connector port of any preceding component.

General

Properties Description
Name Name of the split component.
Description Detailed description of the split component.
Execute on Hint Hint on which execution unit the split conditions are evaluated. Possible Values: No Hint (default), Source, Staging, Target.

If Source is selected, all datastores feeding the split need to be in the same data server, and the server must be capable of evaluating the conditions.

Component Type Type of the component, set to SPLIT.

Split Conditions

The Split Conditions tab contains the standard Connector Points properties as well as expression and remainder properties.

Properties Description
Remainder If checked, only select rows that do not satisfy any of the other conditions.

Note: If the Remainder check box is checked and the split condition contains an attribute whose value is null, the corresponding rows are not moved to the target table.

Expression Condition to filter rows for the given output connector point. Conditions do not need to be mutually exclusive; the same row can be passed to multiple flows. The expression cannot be empty.
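The behavior above can be sketched in SQL. Note how the remainder output, expressed as the negation of all other conditions, silently drops rows whose condition evaluates on a NULL attribute, matching the note above. SQLite stands in for the target technology; the names are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE src (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO src VALUES (?, ?)",
                 [(1, 5), (2, 50), (3, None)])

q = lambda sql: conn.execute(sql).fetchall()

# Two split conditions; they need not be mutually exclusive.
print(q("SELECT id FROM src WHERE amount >= 10"))  # [(2,)]
print(q("SELECT id FROM src WHERE amount < 10"))   # [(1,)]

# Remainder: rows that satisfy none of the other conditions. The negated
# conditions evaluate to NULL for row 3 (NULL amount), so that row is
# not selected by any output.
print(q("SELECT id FROM src WHERE NOT (amount >= 10) AND NOT (amount < 10)"))
# []
```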

6.37.1.17 Subquery Filter Component

A subquery filter component is a projector component (see Projector Components) that filters rows based on the results of a subquery.

Attributes

See Component Attributes for information on attributes. In addition to the attributes described in the Component Attributes section, the following attributes are specific to the subquery filter component.

Properties Description
Expression for DRIVER_INPUT SQL expression for the DRIVER_INPUT connector point.
Expression for SUBQUERY_FILTER_INPUT SQL expression for the SUBQUERY_FILTER_INPUT connector point.

Conditions

The conditions that can be used to filter rows are EXISTS, NOT EXISTS, IN, NOT IN, =, >, <, >=, <=, !=, <>, and ^=.

Properties Description
Subquery Filter Condition A subquery filter condition is usually expressed in SQL for the EXISTS or NOT EXISTS conditions.
Subquery Filter Input Role Select from the EXISTS, NOT EXISTS, IN, NOT IN, =, >, <, >=, <=, !=, <>, or ^= conditions.
Group Comparison Condition A group comparison condition specifies a comparison with any or all members returned by a subquery. A group comparison condition can only be used with =, >, <, >=, <=, !=, <>, or ^=.

Select from the No condition, Any, Some, and All group conditions.

No condition is the default value and signifies that no group comparison condition is used.


General

Properties Description
Name Name of the subquery filter component.
Description Detailed description of the subquery filter component.
Component Type Type of component, set to SUBQUERYFILTER.
Execute on Hint Hint on which execution unit the expression is evaluated. Possible values are: No Hint (default), Source, Staging, Target.

Connector Points

A subquery filter component has two input connector points and one output connector point. The two input connector points are the Driver Input and the Subquery Filter Input connector points. The Driver Input connector point receives the main datastore, which drives the whole query. The Subquery Filter Input connector point receives the datastore that is used in the subquery.

See: Connector Points.
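To make the driver/subquery split concrete, the following sketch (illustrative only; the CUSTOMERS and ORDERS tables are hypothetical, and this is hand-written SQL run through sqlite3, not ODI-generated code) shows the kind of EXISTS query a subquery filter expresses, with the driver input as the outer table and the subquery filter input inside the subquery:

```python
# Illustrative sketch (not ODI-generated code): an EXISTS subquery filter.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE CUSTOMERS (CUST_ID INTEGER, NAME TEXT);
    CREATE TABLE ORDERS (ORDER_ID INTEGER, CUST_ID INTEGER);
    INSERT INTO CUSTOMERS VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO ORDERS VALUES (100, 1);
""")

# Driver input: CUSTOMERS (drives the whole query).
# Subquery filter input: ORDERS (used inside the subquery).
rows = conn.execute("""
    SELECT C.CUST_ID, C.NAME
    FROM CUSTOMERS C
    WHERE EXISTS (SELECT 1 FROM ORDERS O WHERE O.CUST_ID = C.CUST_ID)
""").fetchall()
```

Only customers with at least one matching order pass the filter; swapping EXISTS for NOT EXISTS would invert the result.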

6.37.1.18 Table Function Component

A table function component is a projector component (see Projector Components) that represents a table function in a mapping. Table function components can be used to manipulate a set of input rows and return another set of output rows of the same or different cardinality. The set of output rows can be queried like a physical table. A table function component can be placed anywhere in a mapping, as a source, a target, or a data flow component.

More

Attributes

See: Component Attributes

General

Properties Description
Name Name of the table function component.
Description Detailed description of the table function component.
Component Type Type of component, set to TABLEFUNCTION.
Function Name Name of the function.
Execute on Hint Hint on which execution unit the expression is evaluated. Possible values are: No Hint (default), Source, Staging, Target.

Connector Points

A table function component can have multiple input connector points and one output connector point. The input connector point attributes act as the input parameters for the table function, while the output connector point attributes are used to store the return values.

For each input connector, you can define the parameter type, REF_CURSOR or SCALAR, depending on the type of attributes the input connector point will hold.

See: Connector Points.
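The cardinality-changing behavior described above can be sketched as follows; this is an illustrative Python analogue, not an ODI table function, and the function name and CSV-splitting logic are hypothetical:

```python
# Illustrative sketch (not ODI code): a table function consumes a set of
# input rows and returns a set of output rows of different cardinality,
# which downstream components can query like a table.

def split_csv_rows(rows):
    """Hypothetical table function: one input row per CSV line,
    one output row per field."""
    for line in rows:
        for field in line.split(","):
            yield field.strip()

out = list(split_csv_rows(["a, b", "c"]))  # 2 input rows -> 3 output rows
```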

6.37.1.19 Unpivot Component

An unpivot component is a projector component (see Projector Components) that transforms data that is contained across attributes into multiple rows.

The unpivot component does the reverse of what the pivot component does. Similar to the pivot component, an unpivot component can be placed anywhere in the flow of a mapping.

The unpivot component is particularly useful when extracting data from non-relational data sources, such as flat files, which often store data across attributes rather than rows.

More

General

Properties Description
Name Name of the unpivot component.
Component Type Type of component, set to UNPIVOT.
Description Detailed description of the unpivot component.
Execute on Hint Hint on which execution unit the expression is evaluated. Possible values are: No Hint (default), Source, Staging, Target.
Row Locator Name of the Row Locator attribute. Attributes in the unpivot component that do not have an expression can be row locator candidates.
INCLUDE NULLS Select to generate rows with no data for the attributes that are defined as NULL.

Attributes

The row locator is an output attribute that corresponds to the repeated set of data from the source. Output attributes that are not assigned an expression are row locator candidates and are displayed in the Row Locator drop-down list. Once an attribute is selected as a row locator, the remaining output attributes are referred to as value locators. The row locator and value locators are displayed in the Unpivot Transforms table.

See: Component Attributes

Unpivot Transforms

In the Unpivot Transforms section, the transform rules for each output attribute can be added and the default values of the transform rules can be edited. The appropriate expressions to create the required logic can also be specified. Note that the value attribute expression can also be an arbitrary expression, for example, UNPIVOT_EMP_SALES.Q1_SALES + 100.
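The row locator and value locator roles can be sketched as follows; this is an illustrative Python analogue, not ODI code, and the EMP/Q1_SALES/Q2_SALES attribute names are hypothetical:

```python
# Illustrative sketch (not ODI code): unpivoting quarterly sales columns
# into rows. QUARTER plays the role of the row locator; SALES is the
# value locator.

source = {"EMP": "Smith", "Q1_SALES": 100, "Q2_SALES": 150}

# Unpivot transforms: row-locator value -> source attribute to read.
transforms = {"Q1": "Q1_SALES", "Q2": "Q2_SALES"}

rows = [
    {"EMP": source["EMP"], "QUARTER": quarter, "SALES": source[attr]}
    for quarter, attr in transforms.items()
]
```

One source row with two quarterly attributes becomes two output rows, one per row-locator value.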

6.37.2 Datastore Component

A datastore component is a projector component (see Projector Components) that represents a source or target datastore based on a model. A datastore is a table-like data structure based on a physical storage mechanism such as a database, file, API, or other mechanism.

More

Datastores are added to a mapping by dragging and dropping them from the Model Navigator onto the logical mapping diagram. Datastores can be inside a dataset or in the source or target areas. Datastores can also be used as both target and source by connecting input and output connector ports to other components.

A datastore can have multiple output connectors to use the same data in multiple flows. Like other components, a datastore can also be used multiple times in a flow. Only one input connector is permitted; if multiple flows should be combined into one target datastore, a set component should be used.

A datastore is considered a source datastore if it has at least one output connector but no input connector. A datastore with an input connector is considered a target datastore.

Attributes

Attributes are fixed and provided by the base datastore. Source and target datastores have different sets of attribute properties. See: Component Attributes for details.

General

Properties Description
Name Name of the datastore component.
Description Detailed description of the Datastore component.
Partition/Sub-Partition The partition or sub-partition of the datastore to use as a source of the mapping.
Datastore Name of the base datastore.
Shortcut Name of the shortcut if base model is a shortcut. Empty if no shortcut is used.
Logical Schema Name of the logical schema specified for the datastore's model.
Component Context (Forced) Context within which this mapping uses the datastore. Leave empty to use the execution context. For more details, see Context Editor.
Physical Schema Physical Schema containing the datastore in the context.
Data Server Data Server containing the datastore in the context.
Catalog Catalog containing the datastore in the context.
Component Type Type of component, set to DATASTORE.

Target

Properties Description
Integration Type Type of integration strategy used by the Integration Knowledge Module:
  • Control Append: Inserts rows into the target datastore.

  • Incremental Update: Inserts, changes, or deletes rows in the target datastore.

  • Slowly Changing Dimension: Adds data to the target as a slowly changing dimension.

Update Key Specifies the primary or alternate key that is used when matching the source rows with the target datastore.

This value is usually used when UPDATE DML is generated, for example, in IKM SQL Incremental Update.

Maximum Number of Errors Allowed Applies to errors detected while extracting data from source file datastores, and to errors detected by the flow control process. When this number or percentage of errors is exceeded, the mapping switches to an error state.

If the Integration Errors as Percentage check box is unselected, the mapping switches to the error state when one of the following conditions is true:

  • The number of errors detected while extracting data from one file exceeds the number of errors allowed.

  • The number of errors detected during the flow control exceeds the number of errors allowed.

If the Integration Errors as Percentage check box is selected, the mapping switches to the error state when one of the following conditions is true:

  • The number of errors detected while extracting data from one file exceeds the percentage of errors allowed. The percentage is computed from the number of lines extracted from the file.

  • The number of errors detected during the flow control exceeds the percentage of errors allowed. This percentage is computed from the number of records integrated to the target datastore (Insert + Update).

Note: When this field is left empty, then an unlimited number of errors is allowed.

Note: The total number of errors can exceed the number of rows processed by the mapping, because a single row can violate several constraints or contain several errors. In this case, the errors for that row add up.

Integration Errors as Percentage Causes the Maximum Number of Errors Allowed field to be interpreted as a percentage value.
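The threshold logic described above can be sketched as follows; the function name is hypothetical and, for simplicity, this sketch assumes a single error count compared against one total (ODI applies the check separately to file extraction and flow control):

```python
# Illustrative sketch (not ODI internals) of the error-threshold check.

def mapping_in_error(errors, total_rows, max_errors, as_percentage):
    """Return True when the mapping should switch to an error state.
    An empty threshold (None) allows an unlimited number of errors."""
    if max_errors is None:
        return False
    if as_percentage:
        # Threshold interpreted as a percentage of the rows processed.
        return errors > total_rows * max_errors / 100
    return errors > max_errors
```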

Journalizing

Properties Description
Journalized Data Filter Filter to use for Journalized Data Only. The default filter checks for the subscriber name that will process the data; substitute the default SUNOPSIS with the actual subscriber name. The filter is ignored when Journalized Data Only is deselected.

Constraints

Properties Description
Constraint Name Name of constraint as defined in base datastore.
Constraint Type Constraint type as defined in base datastore.
Value Shows if the given constraint should be checked (true/false).

Connector Points

See: Connector Points.

Note:

Connector points are not shown for components within a dataset.

6.37.3 Mapping Diagram

Use the mapping diagram to lay out a visual flow of data from source datastores, through intermediate components, and into target datastores. You can view the mapping diagram from the Logical or Physical tabs. The diagram shows either the logical or physical layout of your mapping. The logical tab of the mapping diagram may be referred to as the Logical Diagram, and the physical tab may be referred to as the Physical Diagram.

Right-click an object in the mapping diagram to access a context menu for that object.

Table 6-2 Mapping Diagram Toolbar

Name Description
Zoom In / Zoom Out / Zoom drop-down list Use the zoom buttons and drop-down list to zoom in and out of the mapping diagram.
Perform Layout Automatically rearranges the components in the mapping diagram in a neat and organized way, with data flowing from left to right.
View As Compact Shows all components in the mapping diagram as small icons.
View As Expanded Shows all projector components in the mapping diagram as boxes containing lists of their attributes.
Validate the Mapping Performs a validation operation on the mapping. The Validation Results pane lists any validation errors detected in the mapping.
Expand All Execution Units Shows execution units as boxes with their contained components displayed.
Collapse All Execution Units Shows execution units as compact icons.
Synchronize with Logical Updates a physical diagram to incorporate changes made to the parent logical diagram.
Reset Physical Mapping Design Resets the current physical mapping design, removing all changes made since it was generated by ODI from the logical diagram.


"Navigating the Mapping Editor" in the "Creating and Using Mappings" chapter in Oracle Fusion Middleware Developing Integration Projects with Oracle Data Integrator

"Using Mapping Components" in Oracle Fusion Middleware Developing Integration Projects with Oracle Data Integrator