
14 Working with Load Plans

This chapter gives an introduction to Load Plans. It describes how to create a Load Plan and provides information about how to work with Load Plans.

This chapter includes the following sections:

14.1 Introduction to Load Plans

Oracle Data Integrator is often used for populating very large data warehouses. In these use cases, it is common to have thousands of tables being populated using hundreds of scenarios. The execution of these scenarios has to be organized in such a way that the data throughput from the sources to the target is as efficient as possible within the batch window. Load Plans help the user organize the execution of scenarios in a hierarchy of sequential and parallel steps for these types of use cases.

A Load Plan is an executable object in Oracle Data Integrator that can contain a hierarchy of steps that can be executed conditionally, in parallel or in series. The leaves of this hierarchy are Scenarios. Packages, interfaces, variables, and procedures can be added to Load Plans for executions in the form of scenarios. For more information, see Section 14.2, "Creating a Load Plan".

Load Plans allow setting and using variables at multiple levels. See Section 14.2.3, "Working with Variables in Load Plans" for more information. Load Plans also support exception handling strategies in the event of a scenario ending in error. See Section 14.2.4, "Handling Load Plan Exceptions and Restartability" for more information.

Load Plans can be started, stopped, and restarted from a command line, from Oracle Data Integrator Studio, from Oracle Data Integrator Console, or through a Web Service interface. They can also be scheduled using the run-time agent's built-in scheduler or an external scheduler. When a Load Plan is executed, a Load Plan Instance is created. Each attempt to run this Load Plan Instance is a separate Load Plan Run. See Section 14.3, "Running Load Plans" for more information.
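For example, with a standalone agent, a Load Plan can be started from the operating system command line. The sketch below is illustrative only: it assumes the startloadplan script shipped in the agent's bin directory, and the Load Plan name, context, and agent URL are hypothetical; see Chapter 21, "Running Integration Processes" for the exact syntax.

    cd $ODI_HOME/oracledi/agent/bin
    ./startloadplan.sh LOAD_DWH GLOBAL "-AGENT_URL=http://odi-host:20910/oraclediagent"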

A Load Plan can be modified in production environments and steps can be enabled or disabled according to the production needs. Load Plan objects can be designed and viewed in the Designer and Operator Navigators. Various design operations (such as create, edit, delete, and so forth) can be performed on a Load Plan object if a user connects to a development work repository, but some design operations will not be available in an execution work repository. See Section 14.2.2.2, "Editing Load Plan Steps" for more information.

Once created, a Load Plan is stored in the work repository. The Load Plan can be exported and then imported into another repository and executed in different contexts. Load Plans can also be versioned. See Section 14.4.3, "Exporting, Importing and Versioning Load Plans" for more information.

Load Plans appear in Designer Navigator and in Operator Navigator in the Load Plans and Scenarios accordion. The Load Plan Runs are displayed in the Load Plan Executions accordion in Operator Navigator.

14.1.1 Load Plan Execution Lifecycle

When running or scheduling a Load Plan, you provide the variable values and the context and logical agent used for this Load Plan execution.

Executing a Load Plan creates a Load Plan instance and a first Load Plan run. This Load Plan instance is separate from the original Load Plan, and the Load Plan run corresponds to the first attempt to execute this instance. If a run is restarted, a new Load Plan run is created under this Load Plan instance. As a consequence, each execution attempt of the Load Plan instance is preserved as a different Load Plan run in the log. See Section 14.3, "Running Load Plans" for more information.
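The lifecycle can be illustrated from the command line, under the same assumptions as the sketch in Section 14.1 (the script names, instance ID, and other values are hypothetical):

    # First execution: creates a Load Plan instance (say, instance 42)
    # and its first Load Plan run.
    ./startloadplan.sh LOAD_DWH GLOBAL "-AGENT_URL=http://odi-host:20910/oraclediagent"

    # After a failure, restarting instance 42 creates a second run under
    # the same instance; the first run is preserved in the log.
    ./restartloadplan.sh 42 "-AGENT_URL=http://odi-host:20910/oraclediagent"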

14.1.2 Differences between Packages, Scenarios, and Load Plans

A Load Plan is the largest executable object in Oracle Data Integrator. It uses Scenarios in its steps. When an executable object is used in a Load Plan, it is automatically converted into a scenario. For example, a package is used in the form of a scenario in Load Plans. Note that a Load Plan cannot be added to another Load Plan. However, a Load Plan can start another Load Plan by using, in a Run Scenario step, a scenario that calls the OdiStartLoadPlan tool.
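For illustration, such a Run Scenario step would run a scenario containing a call along these lines (a minimal sketch; the parameter names follow common ODI tool conventions but should be verified against the OdiStartLoadPlan tool reference, and all values are hypothetical):

    OdiStartLoadPlan "-LOAD_PLAN_NAME=LOAD_DWH" "-CONTEXT=GLOBAL" "-AGENT_CODE=OracleDIAgent" "-LOG_LEVEL=5"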

Load Plans are not substitutes for packages or scenarios; they are used to organize the execution of packages and scenarios at a higher level.

Unlike packages, Load Plans provide native support for parallelism, restartability, and exception handling. Load Plans are moved to production as is, whereas packages are moved in the form of scenarios. Load Plans can be created in production environments.

The Load Plan instances and Load Plan runs are similar to Sessions. The difference is that when a session is restarted, the existing session is overwritten by the new execution. A new Load Plan Run does not overwrite the existing Load Plan Run; it is added after the previous Load Plan Runs for this Load Plan Instance. Note that the Load Plan Instance cannot be modified at run-time.

14.1.3 Load Plan Structure

A Load Plan is made up of a sequence of several types of steps. Each step can contain several child steps. Depending on the step type, the steps can be executed conditionally, in parallel or sequentially. By default, a Load Plan contains an empty root serial step. This root step is mandatory and the step type cannot be changed.

Table 14-1 lists the different types of Load Plan steps and the possible child steps.

Table 14-1 Load Plan Steps

Type | Description | Possible Child Steps

Serial Step

Defines a serial execution of its child steps. Child steps are ordered and a child step is executed only when the previous one has completed.

The root step is a Serial step.

  • Serial step

  • Parallel step

  • Run Scenario step

  • Case step

Parallel Step

Defines a parallel execution of its child steps. Child steps are started immediately in their order of Priority.

  • Serial step

  • Parallel step

  • Run Scenario step

  • Case step

Run Scenario Step

Launches the execution of a scenario.

This type of step cannot have child steps.

Case Step

When Step

Else Step

The combination of these steps allows conditional branching based on the value of a variable.

Note: If you have several When steps under a Case step, only the first enabled When step that satisfies the condition is executed. If no When step satisfies the condition or the Case step does not contain any When steps, the Else step is executed.

Of a Case Step:

  • When step

  • Else step

Of a When step:

  • Serial step

  • Parallel step

  • Run Scenario step

  • Case step

Of an Else step:

  • Serial step

  • Parallel step

  • Run Scenario step

  • Case step

Exception Step

Defines a group of steps that is executed when an exception is encountered in the associated step in the Steps Hierarchy. The same exception step can be attached to several steps in the Steps Hierarchy.

  • Serial step

  • Parallel step

  • Run Scenario step

  • Case step


Figure 14-1 shows a sample Load Plan created in Oracle Data Integrator. This sample Load Plan loads a data warehouse:

  • Dimensions are loaded in parallel. This includes the LOAD_TIME_DIM, LOAD_PRODUCT_DIM, and LOAD_CUSTOMER_DIM scenarios, the geographical dimension, and, depending on the value of the ODI_VAR_SESS1 variable, the CUST_NORTH or CUST_SOUTH scenario.

  • The geographical dimension consists of a sequence of three scenarios (LOAD_GEO_ZONE_DIM, LOAD_COUNTRIES_DIM, LOAD_CITIES_DIM).

  • After the dimensions are loaded, the two fact tables are loaded in parallel (LOAD_SALES_FACT and LOAD_MARKETING_FACT scenarios).

Figure 14-1 Sample Load Plan

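The figure itself is not reproduced here. Based on the description above, the step hierarchy of the sample Load Plan can be sketched as follows (the grouping step names are illustrative):

    Root step (Serial)
      Load Dimensions (Parallel)
        LOAD_TIME_DIM (Run Scenario)
        LOAD_PRODUCT_DIM (Run Scenario)
        LOAD_CUSTOMER_DIM (Run Scenario)
        Geographical Dimension (Serial)
          LOAD_GEO_ZONE_DIM (Run Scenario)
          LOAD_COUNTRIES_DIM (Run Scenario)
          LOAD_CITIES_DIM (Run Scenario)
        Case ODI_VAR_SESS1
          When <condition>: CUST_NORTH (Run Scenario)
          Else: CUST_SOUTH (Run Scenario)
      Load Facts (Parallel)
        LOAD_SALES_FACT (Run Scenario)
        LOAD_MARKETING_FACT (Run Scenario)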

14.1.4 Introduction to the Load Plan Editor

The Load Plan Editor provides a single environment for designing Load Plans. Figure 14-2 gives an overview of the Load Plan Editor.

Figure 14-2 Steps Tab of the Load Plan Editor


The Load Plan steps are added, edited and organized in the Steps tab of the Load Plan Editor. The Steps Hierarchy table defines the organization of the steps in the Load Plan. Each row in this table represents a step and displays its main properties.

You can drag components such as packages, integration interfaces, variables, procedures, or scenarios from the Designer Navigator into the Steps Hierarchy table for creating Run Scenario steps for these components.

You can also use the Add Step Wizard or the Quick Step tool to add Run Scenario steps and other types of steps into this Load Plan. See Section 14.2.2.1, "Adding Load Plan Steps" for more information.

The Load Plan Editor toolbar, located on top of the Steps Hierarchy table, provides tools for creating, organizing, and sequencing the steps in the Load Plan. Table 14-2 details the different toolbar components.

Table 14-2 Load Plan Editor Toolbar

Icon | Name | Description
Search icon

Search

Searches for a step in the Steps Hierarchy table.

Expand All icon

Expand All

Expands all tree nodes in the Steps Hierarchy table.

Collapse All icon

Collapse All

Collapses all tree nodes in the Steps Hierarchy table.

Add Step icon

Add Step

Opens the Add Step menu. You can select either the Add Step Wizard or a Quick Step tool to add a step. See Section 14.2.2.1, "Adding Load Plan Steps" for more information.

Remove Step icon

Remove Step

Removes the selected step and all its child steps.

Navigation arrows

Reorder arrows: Move Up, Move Down, Move Out, Move In

Use the reorder arrows to move the selected step to the required position.


The Properties Panel, located under the Steps Hierarchy table, displays the properties for the object that is selected in the Steps Hierarchy table.

14.2 Creating a Load Plan

This section describes how to create a new Load Plan in ODI Studio.

  1. Define a new Load Plan. See Section 14.2.1, "Creating a New Load Plan" for more information.

  2. Add Steps into the Load Plan and define the Load Plan Sequence. See Section 14.2.2, "Defining the Load Plan Step Sequence" for more information.

  3. Define how the exceptions should be handled. See Section 14.2.4, "Handling Load Plan Exceptions and Restartability" for more information.

14.2.1 Creating a New Load Plan

Load Plans can be created from the Designer or Operator Navigator.

To create a new Load Plan:

  1. In Designer Navigator or Operator Navigator, click New Load Plan in the toolbar of the Load Plans and Scenarios accordion. The Load Plan Editor is displayed.

  2. In the Load Plan Editor, type in the Name and a Description for this Load Plan.

  3. Optionally, set the following parameters:

    • Log Sessions: Select how the session logs should be preserved for the sessions started by the Load Plan. Possible values are:

      • Always: Always keep session logs (Default)

      • Never: Never keep session logs. Note that for Run Scenario steps that are configured as Restart from Failed Step or Restart from Failed Task, the agent will behave as if the parameter is set to Error as the whole session needs to be preserved for restartability.

      • Error: Only keep the session log if the session completed in an error state.

    • Log Session Step: Select how the logs should be maintained for the session steps of each of the sessions started by the Load Plan. Note that this applies only when the session log is preserved. Possible values are:

      • By Scenario Settings: Session step logs are preserved depending on the scenario settings. Note that for scenarios created from packages, you can specify whether or not to preserve the steps using the advanced step property called Log Steps in the Journal. Other scenarios preserve all the steps (Default).

      • Never: Never keep session step logs. Note that for Run Scenario steps that are configured as Restart from Failed Step or Restart from Failed Task, the agent will behave as if the parameter is set to Error as the whole session needs to be preserved for restartability.

      • Errors: Only keep session step log if the step is in an error state.

    • Session Task Log Level: Select the log level for the sessions. This value corresponds to the Log Level value when starting unitary scenarios. Default is 5. Note that when Run Scenario steps are configured as Restart from Failed Step or Restart From Failed Task, this parameter is ignored as the whole session needs to be preserved for restartability.

    • Keywords: Enter a comma-separated list of keywords that will be set on the sessions started from this Load Plan. These keywords improve the organization of ODI logs by session folders and automatic classification. Note that you can overwrite these keywords at the level of the child steps. See Section 22.3.3, "Managing the Log" for more information.

  4. Go to the Steps tab and add steps as described in Section 14.2.2, "Defining the Load Plan Step Sequence".

  5. If your Load Plan requires conditional branching, or if your scenarios use variables, go to the Variables tab and declare variables as described in Section 14.2.3.1, "Declaring Load Plan Variables".

  6. To add exception steps that are used in the event of a load plan step failing, go to the Exceptions tab and define exception steps as described in Section 14.2.4.1, "Defining Exceptions Flows".

  7. From the File menu, click Save.

The Load Plan appears in the Load Plans and Scenarios accordion. You can organize your Load Plans by grouping related Load Plans and Scenarios into a Load Plan and Scenarios folder.

14.2.2 Defining the Load Plan Step Sequence

Load Plans are an organized hierarchy of child steps. This hierarchy allows conditional processing of steps in parallel or in series.

The execution flow can be configured at two stages:

  • At Design-time, when defining the Steps Hierarchy:

    • When you add a step to a Load Plan, you select the step type. The step type defines the possible child steps and how these child steps are executed: in parallel, in series, or conditionally based on the value of a variable (Case step). See Table 14-1 for more information on step types.

    • When you add a step to a Load Plan, you also decide where to insert the step. You can add a child step, a sibling step after the selected step, or a sibling step before the selected step. See Section 14.2.2.1, "Adding Load Plan Steps" for more information.

    • You can also reorganize the order of the Load Plan steps by dragging the step to the desired position or by using the arrows in the Step table toolbar. See Table 14-2 for more information.

  • At design-time and run-time by enabling or disabling a step. In the Steps hierarchy table, you can enable or disable a step. Note that disabling a step also disables all its child steps. Disabled steps and all their child steps are not executed when you run the load plan.

This section contains the following topics:

14.2.2.1 Adding Load Plan Steps

A Load Plan step can be added either by using the Add Step Wizard or by selecting the Quick Step tool for a specific step type. See Table 14-1 for more information on the different types of Load Plan steps. To create Run Scenario steps, you can also drag components such as packages, integration interfaces, variables, procedures, or scenarios from the Designer Navigator into the Steps Hierarchy table. Oracle Data Integrator automatically creates a Run Scenario step for the inserted component.

When a Load Plan step is added, it is inserted into the Steps Hierarchy with the minimum required settings. See Section 14.2.2.2, "Editing Load Plan Steps" for more information on how to configure Load Plan steps.

Adding a Load Plan Step with the Add Step Wizard

To insert a Load Plan step with the Add Step Wizard:

  1. Open the Load Plan Editor and go to the Steps tab.

  2. Select a step in the Steps Hierarchy table.

  3. In the Load Plan Editor toolbar, select Add Step > Add Step Wizard.

  4. In the Add Step Wizard, select:

    • Step Type. Possible step types are: Serial, Parallel, Run Scenario, Case, When, and Else. See Table 14-1 for more information on the different step types.

    • Step Location. This parameter defines where the step is added.

      • Add a child step to selection: The step is added under the selected step.

      • Add a sibling step after selection: The step is added on the same level after the selected step.

      • Add a sibling step before selection: The step is added on the same level before the selected step.


    Note:

    Only values that are valid for the current selection are displayed for the Step Type and Step Location.

  5. Click Next.

  6. Follow the instructions in Table 14-3 for the step type you are adding.

    Table 14-3 Add Step Wizard Actions

    Step Type | Description and Action Required

    Serial or Parallel step

    Enter a Step Name for the new Load Plan step.

    Run Scenario step

    1. Click the Lookup Scenario button.

    2. In the Lookup Scenario dialog, select the scenario you want to add to your Load Plan and click OK.

      Alternately, to create a scenario for an executable object and use this scenario, select this object type in the Executable Object Type selection box, then select the executable object that you want to run with this Run Scenario step and click OK. Enter the new scenario name and version and click OK. A new scenario is created for this object and used in this Run Scenario Step.

      Tip: At design time, you may want to create a Run Scenario step using a scenario that does not exist yet. In this case, instead of selecting an existing scenario, directly enter a Scenario Name and a Version number and click Finish. Later on, you can select the scenario using the Modify Run Scenario Step wizard. See Change the Scenario of a Run Scenario Step for more information.

      Note that when you use the version number -1, the latest version of the scenario will be used.

    3. The Step Name is automatically populated with the name and version number of the scenario. Optionally, change the Step Name.

    4. Click Next.

    5. In the Add to Load Plan column, select the scenario variables that you want to add to the Load Plan variables. If the scenario uses certain variables as its startup parameters, they are automatically added to the Load Plan variables.

      See Section 14.2.3, "Working with Variables in Load Plans" for more information.

    Case

    1. Select the variable you want to use for the conditional branching. Note that you can either select one of the load plan variables from the list or click Lookup Variable to add a new variable to the load plan and use it for this case step.

      See Section 14.2.3, "Working with Variables in Load Plans" for more information.

    2. The Step Name is automatically populated with the step type and name of the variable. Optionally, change the Step Name.

      See Section 14.2.2.2, "Editing Load Plan Steps" for more information.

    When

    1. Select the Operator to use in the WHEN clause evaluation. Possible values are:

      • Less Than (<)

      • Less Than or Equal (<=)

      • Different (<>)

      • Equals (=)

      • Greater Than (>)

      • Greater Than or Equal (>=)

      • Is not Null

      • Is Null

    2. Enter the Value to use in the WHEN clause evaluation.

    3. The Step Name is automatically populated with the operator that is used. Optionally, change the Step Name.

      See Section 14.2.2.2, "Editing Load Plan Steps" for more information.

    Else

    The Step Name is automatically populated with the step type. Optionally, change the Step Name.

    See Section 14.2.2.2, "Editing Load Plan Steps" for more information.


  7. Click Finish.

  8. The step is added in the Steps Hierarchy.


Note:

You can reorganize the order of the Load Plan steps by dragging the step to the desired position or by using the reorder arrows in the Step table toolbar to move a step in the Steps Hierarchy.

Adding a Load Plan Step with the Quick Step Tool

To insert a Load Plan step with the Quick Step Tool:

  1. Open the Load Plan editor and go to the Steps tab.

  2. In the Steps Hierarchy, select the Load Plan step under which you want to create a child step.

  3. In the Steps toolbar, select Add Step and the Quick Step option corresponding to the Step type you want to add. Table 14-4 lists the options of the Quick Step tool.

    Table 14-4 Quick Step Tool

    Quick Step tool option | Description and Action Required

    serial step icon


    Adds a serial step after the selection. Default values are used. You can modify these values in the Steps Hierarchy table or in the Property Inspector. See Section 14.2.2.2, "Editing Load Plan Steps" for more information.

    parallel step icon


    Adds a parallel step after the selection. Default values are used. You can modify these values in the Steps Hierarchy table or in the Property Inspector. See Section 14.2.2.2, "Editing Load Plan Steps" for more information.

    run scenario step icon

    Adds a run scenario step after the selection. Follow the instructions for Run Scenario steps in Table 14-3.

    case step icon

    Adds a Case step after the selection. Follow the instructions for Case steps in Table 14-3.

    when step icon

    Adds a When step after the selection. Follow the instructions for When steps in Table 14-3.

    else step icon

    Adds an Else step after the selection. Follow the instructions for Else steps in Table 14-3.



    Note:

    Only step types that are valid for the current selection are enabled in the Quick Step tool.

14.2.2.2 Editing Load Plan Steps

To edit a Load Plan step:

  1. Open the Load Plan editor and go to the Steps tab.

  2. In the Steps Hierarchy table, select the Load Plan step you want to modify. The Property Inspector displays the step properties.

  3. Edit the Load Plan step properties according to your needs.

The following operations are common tasks when editing steps:

Change the Scenario of a Run Scenario Step

To change the scenario:

  1. In the Steps Hierarchy table of the Steps or Exceptions tab, select the Run Scenario step.

  2. In the Step Properties section of the Property Inspector, click Lookup Scenario. This opens the Modify Run Scenario Step wizard.

  3. In the Modify Run Scenario Step wizard, click Lookup Scenario and follow the instructions in Table 14-3 corresponding to the Run Scenario step.

Set Advanced Options for Run Scenario Steps

You can set the following properties for Run Scenario steps in the Property Inspector:

  • Priority: Priority for this step when the scenario needs to start in parallel. The integer value range is from 0 to 100 (100 being the highest priority). Default is 0. The priority of a Run Scenario step is evaluated among all runnable scenarios within a running Load Plan. The Run Scenario step with the highest priority is executed first.

  • Context: Context that is used for the step execution. Default context is the Load Plan context that is defined in the Start Load Plan Dialog when executing a Load Plan. Note that if you only specify the Context and no Logical Agent value, the step is started on the same physical agent that started the Load Plan, but in this specified context.

  • Logical Agent: Logical agent that is used for the step execution. By default, the logical agent, which is defined in the Start Load Plan Dialog when executing a Load Plan, is used. Note that if you set only the Logical Agent and no context, the step is started with the physical agent corresponding to the specified Logical Agent resolved in the context specified when starting the Load Plan. If no Logical Agent value is specified, the step is started on the same physical agent that started the Load Plan (whether a context is specified for the step or not).

Open the Linked Object of Run Scenario Steps

Run Scenario steps can be created for packages, integration interfaces, variables, procedures, or scenarios. Once this Run Scenario step is created, you can open the Object Editor of the original object to view and edit it.

To view and edit the linked object of Run Scenario steps:

  1. In the Steps Hierarchy table of the Steps or Exceptions tab, select the Run Scenario step.

  2. Right-click and select Open the Linked Object.

The Object Editor of the linked object is displayed.

Change the Test Variable in Case Steps

To change the variable that is used for evaluating the tests defined in the WHEN statements:

  1. In the Steps Hierarchy table of the Steps tab or Exceptions tab, select the Case step.

  2. In the Step Properties section of the Property Inspector, click Lookup Variable. This opens the Modify Case Step Dialog.

  3. In the Modify Case Step Dialog, click Lookup Variable and follow the instructions in Table 14-3 corresponding to the Case step.

Define the Exception and Restart Behavior

Exception and Restart behavior can be set on the steps in the Steps Hierarchy table. See Section 14.2.4, "Handling Load Plan Exceptions and Restartability" for more information.

Regenerate Scenarios

To regenerate all the scenarios of a given Load Plan step, including the scenarios of its child steps:

  1. From the Steps Hierarchy table of the Steps tab or Exceptions tab, select the Load Plan step.

  2. Right-click and select Regenerate. Note that this option is not available for scenarios with the version number -1.

  3. Click OK.


Caution:

Regenerating a scenario cannot be undone. For important scenarios, it is better to generate a scenario with a new version number.

Refresh Scenarios to Latest Version

To modify all the scenario steps of a given Load Plan step, including the scenarios of its child steps, and set the scenario version to the latest version available for each scenario:

  1. From the Steps Hierarchy table of the Steps tab or Exceptions tab, select the Load Plan step.

  2. Right-click and select Refresh Scenarios to Latest Version. Note that this option is not available for scenarios with the version number -1.

  3. Click OK.

14.2.2.3 Deleting a Step

To delete a step:

  1. Open the Load Plan Editor and go to the Steps tab.

  2. In the Steps Hierarchy table, select the step to delete.

  3. In the Load Plan Editor toolbar, select Remove Step.

The step and its child steps are removed from the Steps Hierarchy table.


Note:

It is not possible to undo a delete operation in the Steps Hierarchy table.

14.2.2.4 Duplicating a Step

To duplicate a step:

  1. Open the Load Plan Editor and go to the Steps tab.

  2. In the Steps Hierarchy table, right-click the step to duplicate and select Duplicate Selection.

  3. A copy of this step, including its child steps, is created and added as a sibling step after the original step in the Steps Hierarchy table.

You can now move and edit this step.

14.2.3 Working with Variables in Load Plans

Project and Global Variables used in a Load Plan are declared as Load Plan Variables in the Load Plan editor. These variables are automatically available in all steps, and their values are passed to the Load Plan steps.

The values of the variables are passed to the Load Plan on startup as startup parameters. At a step level, you can overwrite the variable value (by setting it or forcing a refresh) for this step and its child steps.


Note:

At startup, Load Plans do not take into account the default value of a variable, or the historized/latest value of a variable in the execution context. The value of the variable is either the one specified when starting the Load Plan, or the value set/refreshed within the Load Plan.

You can use variables in Run Scenario steps, where the values of the variables are passed as startup parameters to the scenario, or in Case/When/Else steps for conditional branching.
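For example, variable values can be supplied as startup parameters when starting the Load Plan from the command line. This sketch reuses the assumptions of the command-line example in Section 14.1; the -<PROJECT_CODE>.<VARIABLE>=<value> assignment syntax and all names are hypothetical and should be verified against the startloadplan reference:

    # Set ODI_VAR_SESS1 so that the Case step can branch accordingly
    # (the project code DWH_PROJECT and the value NORTH are illustrative).
    ./startloadplan.sh LOAD_DWH GLOBAL \
        "-AGENT_URL=http://odi-host:20910/oraclediagent" \
        "-DWH_PROJECT.ODI_VAR_SESS1=NORTH"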

This section contains the following topics:

14.2.3.1 Declaring Load Plan Variables

To declare a Load Plan variable:

  1. Open the Load Plan editor and go to the Variables tab.

  2. From the Load Plan Editor toolbar, select Add Variable. The Lookup Variable dialog is displayed.

  3. In the Lookup Variable dialog, select the variable to add to your Load Plan.

  4. The variable appears in the Variables tab of the Load Plan Editor and in the Property Inspector of each step.

14.2.3.2 Setting Variable Values in a Step

Variables in a step inherit their value from the parent step and ultimately from the value specified for the variables when starting the Load Plan.

For each step, except for Else and When steps, you can also overwrite the variable value, and change the value used for this step and its child steps.

To override variable values at step level:

  1. Open the Load Plan editor and go to the Steps tab.

  2. In the Steps Hierarchy table, select the step for which you want to overwrite the variable value.

  3. In the Property Inspector, go to the Variables section. The variables that are defined for this Load Plan are listed in this Variables table. You can modify the following variable parameters:

    Select Overwrite if you want to specify a variable value for this step and all its children. Once you have chosen to overwrite the variable value, you can either:

    • Set a new variable value in the Value field.

    • Select Refresh to refresh this variable prior to executing the step. The Refresh option can be selected only for variables with a Select Query defined for refreshing the variable value.

14.2.4 Handling Load Plan Exceptions and Restartability

Load Plans provide two features for handling error cases in the execution flows: Exceptions and Restartability.

Exceptions

An Exception Step contains a hierarchy of steps that is defined on the Exceptions tab of the Load Plan editor.

You can associate a given exception step to one or more steps in the Load Plan. When a step in the Load Plan errors out, the associated exception step is executed automatically.

Exceptions can be optionally raised to the parent step of the failing step. Raising an exception fails the parent step, which can consequently execute its exception step.

Restartability

When a Load Plan Run is restarted after a failure, the failed Load Plan steps are restarted depending on the Restart Type parameter. For example, you can define whether a parallel step should restart all its child steps or only those that have failed.

This section contains the following topics:

14.2.4.1 Defining Exceptions Flows

Exception steps are created and defined on the Exceptions tab of the Load Plan Editor.

This tab contains a list of Exception Steps. Each Exception Step consists of a hierarchy of Load Plan steps. The Exceptions tab is similar to the Steps tab in the Load Plan editor. The main differences are:

  • There is no root step for the Exception Step hierarchy. Each exception step is a separate root step.

  • The Serial, Parallel, Run Scenario, and Case steps have the same properties as on the Steps tab but do not have an Exception Handling properties group. An exception step that errors out cannot raise another exception step.

An Exception step can be created either with the Add Step Wizard or with the Quick Step tool by selecting Add Step > Exception Step in the Load Plan Editor toolbar. By default, the Exception step is created with the step name Exception. You can modify this name in the Steps Hierarchy table or in the Property Inspector.

To create an Exception step with the Add Step Wizard:

  1. Open the Load Plan Editor and go to the Exceptions tab.

  2. In the Load Plan Editor toolbar, select Add Step > Add Step Wizard.

  3. In the Add Step Wizard, select Exception from the Step Type list.


    Note:

    Only values that are valid for the current selection are displayed for the Step Type.

  4. Click Next.

  5. In the Step Name field, enter a name for the Exception step.

  6. Click Finish.

  7. The Exception step is added in the Steps Hierarchy.

You can now define the exception flow by adding new steps and organizing the hierarchy under this exception step.

14.2.4.2 Using Exception Handling

Defining exception handling for a Load Plan step consists of associating an Exception Step to this Load Plan step and defining the exception behavior. Exception steps can be set for each step except for When and Else steps.

To define exception handling for a Load Plan step:

  1. Open the Load Plan Editor and go to the Steps tab.

  2. In the Steps Hierarchy table, select the step for which you want to define an exception behavior. The Property Inspector displays the Step properties.

  3. In the Exception Handling section of the Property Inspector, set the parameters as follows:

    • Timeout (s): Enter the maximum time (in seconds) that this step can take before it is aborted by the Load Plan. When the timeout is reached, the step is marked in error and the Exception step (if defined) is executed. In this case, the exception step itself never times out. If needed, a timeout can be set on a parent step to safeguard against such a potentially long-running situation.

      If the step fails before the timeout and an exception step is executed, the execution time of the step plus the execution time of the exception step must not exceed the timeout; otherwise, the exception step fails when the timeout is reached. For example, with a timeout of 600 seconds, a step that fails after 450 seconds leaves at most 150 seconds for its exception step to complete.

      Note that the default value of zero (0) indicates an infinite timeout.

    • Exception Step: From the list, select the Exception step to execute if this step fails. Note that only Exception steps that have been created and defined on the Exceptions tab of the Load Plan Editor appear in this list. See Section 14.2.4.1, "Defining Exceptions Flows" for more information on how to create an Exception step.

    • Exception Behavior: Defines how this step behaves in case an exception is encountered. Select one of the following:

      • Run Exception and Raise: Runs the Exception Step (if any) and raises the exception to the parent step.

      • Run Exception and Ignore: Runs the Exception Step (if any) and ignores the exception. The parent step is notified of a successful run. Note that if an exception is caused by the exception step itself, the parent step is notified of the failure.

    For Parallel steps only, the following parameters may be set:

    Max Error Child Count: The maximum number of child steps in error that is accepted before this step is considered to be in error. When the number of failed child steps exceeds this value, the parallel step is considered failed. For example, with a Max Error Child Count of 2, the parallel step fails when a third child step fails. The currently running child steps are continued or stopped depending on the Restart Type parameter for this parallel step:

    • If the Restart type is Restart from failed children, the Load Plan waits for all child sessions (these are the currently running sessions and the ones waiting to be executed) to run and complete before it raises the error to the parent step.

    • If the Restart Type is Restart all children, the Load Plan kills all running child sessions and does not start any new ones before it raises the error to the parent.

14.2.4.3 Defining the Restart Behavior

The Restart Type option defines how a step in error restarts when the Load Plan is restarted. You can define the Restart Type parameter in the Exception Handling section of the Property Inspector.

Depending on the step type, the Restart Type parameter can take the values listed in Table 14-5.

Table 14-5 Restart Type Values

Step Type | Values and Description

Serial

  • Restart all children: When the Load Plan is restarted and if this step is in error, the sequence of steps restarts from the first one.

  • Restart from failure: When the Load Plan is restarted and if this step is in error, the sequence of child steps starts from the one that has failed.

Parallel

  • Restart all children: When the Load Plan is restarted and if this step is in error, all the child steps are restarted regardless of their status. This is the default value.

  • Restart from failed children: When the Load Plan is restarted and if this step is in error, only the failed child steps are restarted in parallel.

Run Scenario

  • Restart from new session: When restarting the Load Plan and this Run Scenario step is in error, start the scenario and create a new session. This is the default value.

  • Restart from failed step: When restarting the Load Plan and this Run Scenario step is in error, restart the session from the step in error. All the tasks under this step are restarted.

  • Restart from failed task: When restarting the Load Plan and this Run Scenario step is in error, restart the session from the task in error.

The same limitations as those described in Section 21.4, "Restarting a Session" apply to the sessions restarted from a failed step or failed task.


14.3 Running Load Plans

You can run a Load Plan from Designer Navigator or Operator Navigator in ODI Studio.

To run a Load Plan in Designer Navigator or Operator Navigator:

  1. In the Load Plans and Scenarios accordion, select the Load Plan you want to execute.

  2. Right-click and select Execute.

  3. In the Start Load Plan dialog, select the execution parameters:

    • Select the Context in which the Load Plan will be executed.

    • Select the Logical Agent that will run the Load Plan.

    • In the Variables table, enter the Startup values for the variables used in this Load Plan.

  4. Click OK.

  5. The Load Plan Started dialog is displayed.

  6. Click OK.

The Load Plan execution starts: a Load Plan instance is created along with the first Load Plan run. You can review the Load Plan execution in the Operator Navigator. See Chapter 22, "Monitoring Integration Processes" for more information. See also Chapter 21, "Running Integration Processes" for more information on the other run-time operations on Load Plans.

14.4 Using Load Plans in Production

Using Load Plans in production involves the following tasks:

14.4.1 Running Load Plans in Production

In Production, the following tasks can be performed to execute Load Plans interactively:

14.4.2 Scheduling Load Plans

You can schedule the executions of your scenarios and Load Plans using the Oracle Data Integrator built-in scheduler or an external scheduler. See Section 21.9, "Scheduling Scenarios and Load Plans" for more information.
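With an external scheduler, scheduling typically reduces to invoking the command-line start on a timer. The following is a cron sketch under the same assumptions as the command-line example in Section 14.1 (all paths and names are hypothetical):

    # Start the LOAD_DWH Load Plan every night at 01:30.
    30 1 * * * /opt/oracle/odi/oracledi/agent/bin/startloadplan.sh LOAD_DWH GLOBAL "-AGENT_URL=http://odi-host:20910/oraclediagent" >> /var/log/odi/load_dwh.log 2>&1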

14.4.3 Exporting, Importing and Versioning Load Plans

A Load Plan can be exported and then imported into a development or execution repository. This operation is used to deploy Load Plans in a different repository, possibly in a different environment or site.

The export (and import) procedure allows you to transfer Oracle Data Integrator objects from one repository to another.

14.4.3.1 Exporting Load Plans

It is possible to export a single Load Plan or several Load Plans at once.

Exporting a single Load Plan follows the standard procedure described in Section 20.2.4, "Exporting one ODI Object".

For more information on exporting several Load Plans at once, see Section 20.2.5, "Export Multiple ODI Objects".

Note that when you export a Load Plan and you select Export child objects, all its child steps, schedules, and variables are also exported.


Note:

The export of a Load Plan does not include the scenarios referenced by the Load Plan. Scenarios used in a Load Plan need to be exported separately. How to export scenarios is described in Section 13.5, "Exporting Scenarios".

14.4.3.2 Importing Load Plans

Importing a Load Plan into a development repository is performed via Designer or Operator Navigator. With an execution repository, only Operator Navigator is available for this purpose.

The Load Plan import uses the standard object import method. See Section 20.2.6, "Importing Objects" for more information.


Note:

The export of a Load Plan does not include the scenarios referenced by the Load Plan. Scenarios used in a Load Plan need to be imported separately.

14.4.3.3 Versioning Load Plans

Load Plans can also be deployed and promoted to production using versions and solutions. See Chapter 19, "Working with Version Management" for more information.


B User Parameters

This appendix lists the Oracle Data Integrator user parameters. User parameters configure the behavior of Oracle Data Integrator.

To set the user parameters:

  1. From the ODI main menu, select User Parameters.

  2. In the Editing User Parameters dialog, set the values of the user parameters.

  3. Click OK.

Table B-1 contains the complete list of ODI user parameters.

Table B-1 User Parameters

Parameter | Values | Description

Display lock icons in the tree view

Yes | No

Display lock icons in the Designer Navigator tree for locked objects. Disabling this option can provide a speed improvement when displaying the tree view. Refer to Section 18.5.2, "Object Locking" for more information.

Lock object when opening

0 | 1 | Ask

When opening an object for editing:

  • 1: It is automatically locked

  • 0: It is not locked

  • Ask: the user is prompted to lock the object.

Refer to Section 18.5.2, "Object Locking" for more information.

Default path for generation of Data Services

Directory

This is the default path to store the generated Data Service in. Oracle Data Integrator places the generated source code and the compiled Web Service here. This directory is a temporary location that can be deleted after generation.

Refer to Section 8.3, "Generating and Deploying Data Services" for more information.

Unlock object when closing

0 | 1 | Ask

When closing a modified object:

  • 1: It is automatically unlocked

  • 0: It is not unlocked

  • Ask: the user is prompted to unlock the object.

Refer to Section 18.5.2, "Object Locking" for more information.

Process Model Datastores Only

Yes | No | Ask

Whether to generate DDL code for datastores which do not exist in this model. If set to "Ask", a confirmation message is displayed.

Refer to Section 6.3, "Generating DDL scripts" for more information.

Show Web Service operation changed warning

Yes | No | Ask

If set to Yes, the Operation Changed warning is shown.

Never transform non ASCII characters to underscores

Yes | No

When set to Yes, no characters are transformed during an export or an alias generation. Beware: this should only be used on platforms that fully support your encoding.

Bypass 'Exit ODI' prompt on exit

Yes | No

If this user parameter is set to Yes, the exit confirmation dialog will NOT be shown.

Bypass displaying "Fix issues" Dialog for Interfaces

Yes | No

If set to Yes, the Fix issues dialog for interfaces will NOT be shown.

Delete linked sessions with scenarios

Yes | No | Ask

When deleting a scenario in the Scenarios accordion of Operator Navigator, linked sessions are automatically deleted if set to Yes.

Hide On Connect and Disconnect steps

Yes | No

If this user parameter is set to Yes, the On Connect and Disconnect steps will NOT be shown. Default is No.

Automatic Mapping

Yes | No | Ask

Automatically maps source columns to target columns when new datastores are added to an interface by detecting column name matches. Refer to Section 11.3.5, "Define the Mappings" for more information.

Use New Load Balancing

Yes | No

When using load balancing, agents that run out of sessions can be reallocated sessions from other agents that have not yet been started. Otherwise, sessions are only allocated once each. Refer to Section 4.3.3, "Load Balancing Agents" for more information.

Help for Interface Diagram

0 | 1

If 1, a help message is displayed whenever editing an interface diagram with no datastores attached.

Check for concurrent editing

0 | 1

When saving changes to any object, checks whether other changes have been made to the same object by another user. If another user has made changes, the object cannot be saved.

Refer to Section 18.5.1, "Concurrent Editing Check" for more information.

Keeps in the cache the list of models whose DBMS is not accessible.

Yes | No

If this user parameter is set to Yes, the list of models whose DBMS is not accessible is kept in the cache. This speeds up expanding and displaying the nodes under these models.

Operator display limit (0=no limit)

Numeric

When the number of sessions to display in Operator Navigator exceeds this number, a confirmation message is displayed. Default: 100

Delay between two refresh (seconds)

Numeric

The number of seconds to wait between two refreshes in Operator Navigator. Only applies when auto-refresh mode is enabled.

Default PDF generation directory

Directory

When generating a report, the default directory to save the generated .pdf file to. Refer to Section 18.6, "Creating PDF Reports" for more information.

Directory for Saving your Diagrams (PNG)

Directory

When printing a model diagram with Common Format Designer, specifies the default directory to save the generated .png file to. Refer to Chapter 6, "Working with Common Format Designer" for more information.

Default Context for Execution

Context name

When executing any object, this is the context selected by default in the Execution dialog. If an invalid context name is specified, the default context in Designer is used.

Refer to Chapter 21, "Running Integration Processes" for more information.

PDF Viewer

Path to file

Complete path, including the filename, of the program used to view generated .pdf files. Required to use the Open file after generation option.

Refer to Section 18.6, "Creating PDF Reports" for more information.

Query buffer size

Numeric

Size of the cache used for prepared statements (Queries) issued on the repositories. Only applies to repositories on Oracle instances. Changes in this value are only taken into account when the application is restarted.

Default Context for Designer

Context name

Default context used in Designer Navigator. This context will be displayed by default in the different lists, and selected when opening Designer Navigator.

Default Agent

Agent name

When executing any object, the agent selected by default in the Execution options window. If an invalid agent name is specified, the local agent is used.

Oracle Data Integrator Timeout

Numeric

Number of seconds to wait during database connections before giving up. Increase this value if you regularly encounter timeout problems. Default: 30. Changes in this value are only taken into account when the application is restarted.

Export default Java encoding

Java encoding

Export default Java encoding. Default is ISO8859_1. You will find a list of supported encodings at the following URL: http://java.sun.com/j2se/1.4.2/docs/guide/intl/encoding.doc.html

Refer to Chapter 20, "Exporting/Importing" for more information.

Export default charset

Charset encoding

Default export charset encoding. Default is ISO-8859-1. You will find a list of supported encodings at the following URL: http://java.sun.com/j2se/1.4.2/docs/guide/intl/encoding.doc.html

Refer to Chapter 20, "Exporting/Importing" for more information.

Export default version

0 | 1

If 1, the versioned objects will be exported during the master repository export. If 0, the versioned objects are not exported. Default is 1.

Refer to Chapter 20, "Exporting/Importing" for more information.

Show the CDC modifications in the tree

0 | 1

Shows the CDC modifications in the tree (1 = enabled, 0 = disabled). Default is 1.

Refer to Chapter 7, "Working with Changed Data Capture" for more information.

Display resource names in the tree view

Yes | No

Whether to display the resource name of a datastore in the Models accordion. This might be useful when the resource name differs from the datastore name.

Scenario Name Convention

Naming convention

Use this parameter to define the default naming pattern for new scenarios created for objects via the Studio or the tools. For example, the pattern %PROJECT_NAME%_%OBJECT_NAME% names each scenario after its project and source object. The following tags can be used in the pattern.

  • %PROJECT_NAME% : Name of the project containing the object.

  • %FOLDER_PATH%: Folder path to the object in the project tree, separated with underscores.

  • %FOLDER_NAME(n)%: Name of one folder in the path, starting from the bottom (n=1 corresponds to the object's parent) to the top folder in the project tree. If the folder does not exist for the given index, returns an empty string.

  • %FOLDER_NAME%: Shortcut to %FOLDER_NAME(1)%

  • %OBJECT_NAME%: Name of the source object of the scenario

Enable Operator display limit dialog

0 | 1

If 1, a warning dialog is shown each time the Operator display limit is exceeded.

If 0, no warning dialog is displayed.



Part VII

Managing the Security Settings

This part describes how to manage the security settings in Oracle Data Integrator.

This part contains the following chapters:


Oracle Legal Notices

Copyright Notice

Copyright © 1994-2014, Oracle and/or its affiliates. All rights reserved.

Trademark Notice

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group.

License Restrictions Warranty/Consequential Damages Disclaimer

This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited.

Warranty Disclaimer

The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing.

Restricted Rights Notice

If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, the following notice is applicable:

U.S. GOVERNMENT END USERS: Oracle programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation, delivered to U.S. Government end users are "commercial computer software" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, use, duplication, disclosure, modification, and adaptation of the programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation, shall be subject to license terms and license restrictions applicable to the programs. No other rights are granted to the U.S. Government.

Hazardous Applications Notice

This software or hardware is developed for general use in a variety of information management applications. It is not developed or intended for use in any inherently dangerous applications, including applications that may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this software or hardware in dangerous applications.

Third-Party Content, Products, and Services Disclaimer

This software or hardware and documentation may provide access to or information on content, products, and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to third-party content, products, and services. Oracle Corporation and its affiliates will not be responsible for any loss, costs, or damages incurred due to your access to or use of third-party content, products, or services.

Alpha and Beta Draft Documentation Notice

If this document is in preproduction status:

This documentation is in preproduction status and is intended for demonstration and preliminary use only. It may not be specific to the hardware on which you are using the software. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to this documentation and will not be responsible for any loss, costs, or damages incurred due to the use of this documentation.

Oracle Logo

PK0hPKp\EOEBPS/dcommon/blafdoc.cssc@charset "utf-8"; /* Copyright 2002, 2011, Oracle and/or its affiliates. All rights reserved. Author: Robert Crews Version: 2011.8.12 */ body { font-family: Tahoma, sans-serif; /* line-height: 125%; */ color: black; background-color: white; font-size: small; } * html body { /* http://www.info.com.ph/~etan/w3pantheon/style/modifiedsbmh.html */ font-size: x-small; /* for IE5.x/win */ f\ont-size: small; /* for other IE versions */ } h1 { font-size: 165%; font-weight: bold; border-bottom: 1px solid #ddd; width: 100%; text-align: left; } h2 { font-size: 152%; font-weight: bold; text-align: left; } h3 { font-size: 139%; font-weight: bold; text-align: left; } h4 { font-size: 126%; font-weight: bold; text-align: left; } h5 { font-size: 113%; font-weight: bold; display: inline; text-align: left; } h6 { font-size: 100%; font-weight: bold; font-style: italic; display: inline; text-align: left; } a:link { color: #039; background: inherit; } a:visited { color: #72007C; background: inherit; } a:hover { text-decoration: underline; } a img, img[usemap] { border-style: none; } code, pre, samp, tt { font-family: monospace; font-size: 110%; } caption { text-align: center; font-weight: bold; width: auto; } dt { font-weight: bold; } table { font-size: small; /* for ICEBrowser */ } td { vertical-align: top; } th { font-weight: bold; text-align: left; vertical-align: bottom; } li { text-align: left; } dd { text-align: left; } ol ol { list-style-type: lower-alpha; } ol ol ol { list-style-type: lower-roman; } td p:first-child, td pre:first-child { margin-top: 0px; margin-bottom: 0px; } table.table-border { border-collapse: collapse; border-top: 1px solid #ccc; border-left: 1px solid #ccc; } table.table-border th { padding: 0.5ex 0.25em; color: black; background-color: #f7f7ea; border-right: 1px solid #ccc; border-bottom: 1px solid #ccc; } table.table-border td { padding: 0.5ex 0.25em; border-right: 1px solid #ccc; border-bottom: 1px solid #ccc; } span.gui-object, span.gui-object-action { font-weight: bold; } span.gui-object-title { } p.horizontal-rule { width: 100%; border: solid #cc9; border-width: 0px 0px 1px 0px; margin-bottom: 4ex; } div.zz-skip-header { display: none; } td.zz-nav-header-cell { text-align: left; font-size: 95%; width: 99%; color: black; background: inherit; font-weight: normal; vertical-align: top; margin-top: 0ex; padding-top: 0ex; } a.zz-nav-header-link { font-size: 95%; } td.zz-nav-button-cell { white-space: nowrap; text-align: center; width: 1%; vertical-align: top; padding-left: 4px; padding-right: 4px; margin-top: 0ex; padding-top: 0ex; } a.zz-nav-button-link { font-size: 90%; } div.zz-nav-footer-menu { width: 100%; text-align: center; margin-top: 2ex; margin-bottom: 4ex; } p.zz-legal-notice, a.zz-legal-notice-link { font-size: 85%; /* display: none; */ /* Uncomment to hide legal notice */ } /*************************************/ /* Begin DARB Formats */ /*************************************/ .bold, .codeinlinebold, .syntaxinlinebold, .term, .glossterm, .seghead, .glossaryterm, .keyword, .msg, .msgexplankw, .msgactionkw, .notep1, .xreftitlebold { font-weight: bold; } .italic, .codeinlineitalic, .syntaxinlineitalic, .variable, .xreftitleitalic { font-style: italic; } .bolditalic, .codeinlineboldital, .syntaxinlineboldital, .titleinfigure, .titleinexample, .titleintable, .titleinequation, .xreftitleboldital { font-weight: bold; font-style: italic; } .itemizedlisttitle, .orderedlisttitle, .segmentedlisttitle, .variablelisttitle { font-weight: bold; } 
.bridgehead, .titleinrefsubsect3 { font-weight: bold; } .titleinrefsubsect { font-size: 126%; font-weight: bold; } .titleinrefsubsect2 { font-size: 113%; font-weight: bold; } .subhead1 { display: block; font-size: 139%; font-weight: bold; } .subhead2 { display: block; font-weight: bold; } .subhead3 { font-weight: bold; } .underline { text-decoration: underline; } .superscript { vertical-align: super; } .subscript { vertical-align: sub; } .listofeft { border: none; } .betadraft, .alphabetanotice, .revenuerecognitionnotice { color: #f00; background: inherit; } .betadraftsubtitle { text-align: center; font-weight: bold; color: #f00; background: inherit; } .comment { color: #080; background: inherit; font-weight: bold; } .copyrightlogo { text-align: center; font-size: 85%; } .tocsubheader { list-style-type: none; } table.icons td { padding-left: 6px; padding-right: 6px; } .l1ix dd, dd dl.l2ix, dd dl.l3ix { margin-top: 0ex; margin-bottom: 0ex; } div.infoboxnote, div.infoboxnotewarn, div.infoboxnotealso { margin-top: 4ex; margin-right: 10%; margin-left: 10%; margin-bottom: 4ex; padding: 0.25em; border-top: 1pt solid gray; border-bottom: 1pt solid gray; } p.notep1 { margin-top: 0px; margin-bottom: 0px; } .tahiti-highlight-example { background: #ff9; text-decoration: inherit; } .tahiti-highlight-search { background: #9cf; text-decoration: inherit; } .tahiti-sidebar-heading { font-size: 110%; margin-bottom: 0px; padding-bottom: 0px; } /*************************************/ /* End DARB Formats */ /*************************************/ @media all { /* * * { line-height: 120%; } */ dd { margin-bottom: 2ex; } dl:first-child { margin-top: 2ex; } } @media print { body { font-size: 11pt; padding: 0px !important; } a:link, a:visited { color: black; background: inherit; } code, pre, samp, tt { font-size: 10pt; } #nav, #search_this_book, #comment_form, #comment_announcement, #flipNav, .noprint { display: none !important; } body#left-nav-present { overflow: visible !important; } } PKr.hcPKp\EOEBPS/dcommon/doccd_epub.jsM /* Copyright 2006, 2012, Oracle and/or its affiliates. All rights reserved. 
Author: Robert Crews Version: 2012.3.17 */ function addLoadEvent(func) { var oldOnload = window.onload; if (typeof(window.onload) != "function") window.onload = func; else window.onload = function() { oldOnload(); func(); } } function compactLists() { var lists = []; var ul = document.getElementsByTagName("ul"); for (var i = 0; i < ul.length; i++) lists.push(ul[i]); var ol = document.getElementsByTagName("ol"); for (var i = 0; i < ol.length; i++) lists.push(ol[i]); for (var i = 0; i < lists.length; i++) { var collapsible = true, c = []; var li = lists[i].getElementsByTagName("li"); for (var j = 0; j < li.length; j++) { var p = li[j].getElementsByTagName("p"); if (p.length > 1) collapsible = false; for (var k = 0; k < p.length; k++) { if ( getTextContent(p[k]).split(" ").length > 12 ) collapsible = false; c.push(p[k]); } } if (collapsible) { for (var j = 0; j < c.length; j++) { c[j].style.margin = "0"; } } } function getTextContent(e) { if (e.textContent) return e.textContent; if (e.innerText) return e.innerText; } } addLoadEvent(compactLists); function processIndex() { try { if (!/\/index.htm(?:|#.*)$/.test(window.location.href)) return false; } catch(e) {} var shortcut = []; lastPrefix = ""; var dd = document.getElementsByTagName("dd"); for (var i = 0; i < dd.length; i++) { if (dd[i].className != 'l1ix') continue; var prefix = getTextContent(dd[i]).substring(0, 2).toUpperCase(); if (!prefix.match(/^([A-Z0-9]{2})/)) continue; if (prefix == lastPrefix) continue; dd[i].id = prefix; var s = document.createElement("a"); s.href = "#" + prefix; s.appendChild(document.createTextNode(prefix)); shortcut.push(s); lastPrefix = prefix; } var h2 = document.getElementsByTagName("h2"); for (var i = 0; i < h2.length; i++) { var nav = document.createElement("div"); nav.style.position = "relative"; nav.style.top = "-1.5ex"; nav.style.left = "1.5em"; nav.style.width = "90%"; while (shortcut[0] && shortcut[0].toString().charAt(shortcut[0].toString().length - 2) == getTextContent(h2[i])) { nav.appendChild(shortcut.shift()); nav.appendChild(document.createTextNode("\u00A0 ")); } h2[i].parentNode.insertBefore(nav, h2[i].nextSibling); } function getTextContent(e) { if (e.textContent) return e.textContent; if (e.innerText) return e.innerText; } } addLoadEvent(processIndex); PKo"nR M PKp\EOEBPS/monitoring_executions.htm Monitoring Integration Processes

22 Monitoring Integration Processes

This chapter describes how to manage your development executions in Operator Navigator. An overview of the Operator Navigator's user interface is provided.

This chapter includes the following sections:

22.1 Introduction to Monitoring

Monitoring your development executions consists of viewing the execution results and managing the development executions when the executions are successful or in error. This section provides an introduction to the monitoring features in Oracle Data Integrator. How to work with your execution results is covered in Section 22.2, "Monitoring Executions Results". How to manage your development executions is covered in Section 22.3, "Managing your Executions".

22.1.1 Introduction to Operator Navigator

Through Operator Navigator, you can view your execution results and manage your development executions in the sessions, as well as the scenarios and Load Plans in production.

Operator Navigator stores this information in a work repository, while using the topology defined in the master repository.

Operator Navigator displays the objects available to the current user in six accordions:

  • Session List displays all sessions organized per date, physical agent, status, keywords, and so forth

  • Hierarchical Sessions displays the execution sessions organized in a hierarchy with their child sessions

  • Load Plan Executions displays the Load Plan Runs of the Load Plan instances

  • Scheduling displays the list of physical agents and schedules

  • Load Plans and Scenarios displays the list of scenarios and Load Plans available

  • Solutions displays the list of solutions

The Operator Navigator Toolbar Menu

You can perform the main monitoring tasks via the Operator Navigator Toolbar menu. The Operator Navigator toolbar menu provides access to the features detailed in Table 22-1.

Table 22-1 Operator Navigator Toolbar Menu Items

Icon | Menu Item | Description

Refresh icon


Refresh

Click Refresh to refresh the trees in the Operator Navigator accordions.

Filter icon

Filter activated icon


Filter

Filter activated

Click Filter to define the filters for the sessions to display in Operator Navigator.

Auto Refresh icon


Auto Refresh

Click Auto Refresh to automatically refresh the trees in the Operator Navigator accordions.

Connect Navigator icon


Connect Navigator

Click Connect Navigator to access the Operator Navigator toolbar menu. Through the Operator Navigator toolbar menu you can:

  • Import a scenario

  • Import and export the log

  • Perform multiple exports

  • Purge the log

  • Display the scheduling information

  • Clean stale sessions


22.1.2 Scenarios

A scenario is designed to put a source component (interface, package, procedure, variable) into production. A scenario results from the generation of code (SQL, shell, etc.) for this component.

When a scenario is executed, it creates a Session.

Scenarios are imported into the production environment and can be organized into Load Plan and Scenario folders. See Section 22.3.4, "Managing Scenarios and Load Plans" for more details.

22.1.3 Sessions

In Oracle Data Integrator, an execution results in a Session. Sessions are viewed and managed in Operator Navigator.

A session is an execution (of a scenario, an interface, a package or a procedure, and so forth) undertaken by an execution agent. A session is made up of steps which are themselves made up of tasks.

A step is the unit of execution found between a task and a session. It corresponds to a step in a package or in a scenario. When executing an interface or a single variable, for example, the resulting session has only one step.

Two special steps called Command On Connect and Command On Disconnect are created if you have set up On Connect and Disconnect commands on data servers used in the session. See Setting Up On Connect/Disconnect Commands for more information.

The task is the smallest execution unit. It corresponds to a command in a KM, a procedure, and so forth.

Sessions can be grouped into Session folders. Session folders automatically group sessions that were launched with certain keywords. Refer to Section 22.3.3.3, "Organizing the Log with Session Folders" for more information.

22.1.4 Load Plans

A Load Plan is the largest executable object in Oracle Data Integrator. It uses Scenarios in its steps. A Load Plan is an organized hierarchy of child steps. This hierarchy allows conditional processing of steps in parallel or in series.

Load Plans are imported into the production environment and can be organized into Load Plan and Scenario folders. See Section 22.3.4, "Managing Scenarios and Load Plans" for more details.

22.1.5 Load Plan Executions

Executing a Load Plan creates a Load Plan instance and the first Load Plan run for the instance. This Load Plan instance is separated from the original Load Plan and can be modified independently. Every time a Load Plan instance is restarted, a Load Plan run is created for this Load Plan instance. A Load Plan run corresponds to an attempt to execute the instance. See Section 14.1.1, "Load Plan Execution Lifecycle" for more information.

When running, a Load Plan Run starts sessions corresponding to the scenarios sequenced in the Load Plan.

Note that in the list of Load Plan executions, only the Load Plan runs appear. Each run is identified by a Load Plan Instance ID and an Attempt (or Run) Number.

22.1.6 Schedules

You can schedule the executions of your scenarios and Load Plans using Oracle Data Integrator's built-in scheduler or an external scheduler. Both methods are detailed in Section 21.9, "Scheduling Scenarios and Load Plans".
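
For example, an external scheduler typically invokes the scenario or Load Plan startup scripts located in the agent's bin directory. The following crontab entry is an illustrative sketch only: the Load Plan name, context, agent URL, and installation path are placeholder values, and the exact startloadplan syntax is detailed in Chapter 21, "Running Integration Processes".

  # Illustrative sketch: start the (hypothetical) LOAD_DWH Load Plan every day
  # at 2:00 AM in the GLOBAL context, through a standalone agent.
  0 2 * * * /u01/odi/agent/bin/startloadplan.sh LOAD_DWH GLOBAL "-AGENT_URL=http://odi_host:20910/oraclediagent"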

The schedules appear in Designer and Operator Navigator under the Scheduling node of the scenario or Load Plan. Each schedule allows a start date and a repetition cycle to be specified.

22.1.7 Log

The Oracle Data Integrator log corresponds to all the Sessions and Load Plan instances/runs stored in a repository. This log can be exported, purged or filtered for monitoring. See Section 22.3.3, "Managing the Log" for more information.

22.1.8 Status

A session, step, task or Load Plan run always has a status. Table 22-2 lists the possible status values:

Table 22-2 Status Values

Status Name | Status Icon for Sessions | Status Icon for Load Plans | Status Description

Done

Done status icon
Done status icon LP

The Load Plan, session, step or task was executed successfully.

Done in previous run


Done previous status icon LP

The Load Plan step has been executed in a previous Load Plan run. This icon is displayed after a restart.

Error

Error status icon
Error status icon

The Load Plan, session, step or task has terminated due to an error.

Running

Running status icon
Running Status icon LP

The Load Plan, session, step or task is being executed.

Waiting

Waiting status icon
waiting status icon

The Load Plan, session, step or task is waiting to be executed.

Warning (Sessions and tasks only)

Warning status icon
  • For Sessions: The session has completed successfully but errors have been detected during the data quality check.

  • For Tasks: The task has terminated in error, but since errors are allowed on this task, this did not stop the session.

Queued (Sessions only)

Queued status icon

The session is waiting for an agent to be available for its execution.


When finished, a session takes the status of the last executed step (Done or Error). When finished, a step takes the status of the last executed task, except if the task returned a Warning; in this case, the step takes the status Done.

A Load Plan is successful (status Done) when all its child steps have been executed successfully. It is in Error status when at least one of its child steps is in error and has raised its exception to the root step.

22.2 Monitoring Executions Results

In Oracle Data Integrator, an execution results in a session or in a Load Plan run if a Load Plan is executed. A session is made up of steps which are made up of tasks. Sessions are viewed and managed in Operator Navigator.

Load Plan runs appear in the Operator Navigator. To review the steps of a Load Plan run, you open the editor for this run. The sessions attached to a Load Plan appear with the rest of the sessions in the Operator Navigator.

22.2.1 Monitoring Sessions

To monitor your sessions:

  1. In the Operator Navigator, expand the Session List accordion.

  2. Expand the All Executions node and click Refresh in the Navigator toolbar.

  3. Optionally, activate a Filter to reduce the number of visible sessions. For more information, see Section 22.3.3.1, "Filtering Sessions".

  4. Review in the list of sessions the status of your session(s).

22.2.2 Monitoring Load Plan Runs

To monitor your Load Plan runs:

  1. In the Operator Navigator, expand the Load Plan Executions accordion.

  2. Expand the All Executions node and click Refresh in the Navigator toolbar.

  3. Review in the list the status of your Load Plan run.

  4. Double-click this Load Plan run to open the Load Plan Run editor.

  5. In the Load Plan Run editor, select the Steps tab.

  6. Review the state of the Load Plan steps. On this tab, you can perform the following tasks:

    • Click Refresh in the Editor toolbar to update the content of the table.

    • For the Run Scenario steps, you can click in the Session ID column to open the session started by this Load Plan for this step.

22.2.3 Handling Failed Sessions

When your session ends in error or with a warning, you can analyze the error in Operator Navigator.

To analyze an error:

  1. In the Operator Navigator, identify the session, the step and the task in error.

  2. Double-click the task in error. The Task editor opens.

  3. On the Definition tab in the Execution Statistics section, the return code and message give the error that stopped the session.

  4. On the Code tab, the source and target code for the task is displayed and can be reviewed and edited.

    Optionally, click Show/Hide Values to display the code with resolved variable and sequence values. Note that:

    • If the variable values are shown, the code becomes read-only. You are now able to track variable values.

    • Variables used as passwords are never displayed.

    See Section 12.2.3.11, "Tracking Variables and Sequences" for more information.

  5. On the Connection tab, you can review the source and target connections against which the code is executed.

You can fix the code of the command in the Code tab and apply your changes. Restarting a Session is possible after performing this action. The session will restart from the task in error.
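
A failed session can also be restarted from a command line using the restartsession script in the agent's bin directory. The following call is a minimal sketch, assuming a standalone agent installation; the session number, installation path, and agent URL are placeholder values, and the exact syntax is detailed in Chapter 21, "Running Integration Processes".

  # Illustrative sketch: restart failed session 123456 (placeholder) on a remote agent.
  /u01/odi/agent/bin/restartsession.sh 123456 "-AGENT_URL=http://odi_host:20910/oraclediagent"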


Note:

Fixing the code in the session's task does not fix the source object that was executed (interface, procedure, package or scenario). This source object must be fixed in Designer Navigator and the scenario (if any) must be regenerated. Modifying the code within the session is useful for debugging issues.


WARNING:

When a session fails, all connections and transactions to the source and target systems are rolled back. As a consequence, uncommitted statements on transactions are not applied.


22.2.4 Reviewing Successful Sessions

When your session ends successfully, you can view the changes performed in Operator Navigator. These changes include record statistics such as the number of inserts, updates, deletes, errors, and the total number of rows as well as execution statistics indicating start and end time of the execution, the duration in seconds, the return code, and the message (if any).

Session level statistics aggregate the statistics of all the steps of this session, and each step's statistics aggregate the statistics of all the tasks within this step.

To review the execution statistics:

  1. In the Operator Navigator, identify the session, the step or the task to review.

  2. Double-click the session, the step or the task. The corresponding editor opens.

  3. The record and execution statistics are displayed on the Definition tab. Note that for session steps in which an interface has been executed or a datastore check has been performed, the target table details are also displayed.

Record Statistics

Properties | Description
No. of Inserts | Number of rows inserted during the session/step/task.
No. of Updates | Number of rows updated during the session/step/task.
No. of Deletes | Number of rows deleted during the session/step/task.
No. of Errors | Number of rows in error in the session/step/task.
No. of Rows | Total number of rows handled during this session/step/task.

Execution Statistics

Properties | Description
Start | Start date and time of execution of the session/step/task.
End | End date and time of execution of the session/step/task.
Duration (seconds) | The time taken for execution of the session/step/task.
Return code | Return code for the session/step/task.

Target Table Details

Properties | Description
Table Name | Name of the target datastore.
Model Code | Code of the Model in which the target datastore is stored.
Resource Name | Resource name of the target datastore.
Logical Schema | Logical schema of this datastore.
Forced Context Code | The context of the target datastore.

22.2.5 Handling Failed Load Plans

When a Load Plan ends in error, review the sessions that have failed and caused the Load Plan to fail. Fix the source of the session failure.

You can restart the Load Plan instance. See Section 21.7, "Restarting a Load Plan Run" for more information.

Note that it will restart depending on the Restart Type defined on its steps. See Section 14.2.4, "Handling Load Plan Exceptions and Restartability" for more information.

You can also change the execution status of a failed Load Plan step from Error to Done on the Steps tab of the Load Plan run Editor to ignore this particular Load Plan step the next time the Load Plan run is restarted. This might be useful, for example, when the error causing this Load Plan step to fail cannot be fixed at the moment and you want to execute the rest of the Load Plan regardless of this step.

22.2.6 Reviewing Successful Load Plans

When your Load Plan ends successfully, you can review the execution statistics from the Load Plan run editor.

You can also review the statistics for each session started for this Load Plan in the session editor.

To review the Load Plan run execution statistics:

  1. In the Operator Navigator, identify the Load Plan run to review.

  2. Double-click the Load Plan run. The corresponding editor opens.

  3. The record and execution statistics are displayed on the Steps tab.

22.3 Managing your Executions

Managing your development executions takes place in Operator Navigator. You can manage your executions during the execution process itself or once the execution has finished, depending on the action that you wish to perform. The actions that you can perform are:

  • Managing Sessions

  • Managing Load Plan Executions

  • Managing the Log

  • Managing Scenarios and Load Plans

  • Managing Schedules

22.3.1 Managing Sessions

Managing sessions involves the following tasks:

In addition to these tasks, it may be necessary in production to deal with stale sessions.

22.3.1.1 Cleaning Stale Sessions

Stale sessions are sessions that are incorrectly left in a running state after an agent or repository crash.

The Agent that started a session automatically detects when this session becomes stale and changes it to Error status. You can manually request specific Agents to clean stale sessions in Operator Navigator or Topology Navigator.

To clean stale sessions manually:

  1. Do one of the following:

    • From the Operator Navigator toolbar menu, select Clean Stale Sessions.

    • In Topology Navigator, from the Physical Architecture accordion, select an Agent, right-click and select Clean Stale Sessions.

    The Clean Stale Sessions Dialog opens.

  2. In the Clean Stale Sessions Dialog specify the criteria for cleaning stale sessions:

    • From the list, select the Agents that will clean their stale sessions.

      Select Clean all Agents if you want all Agents to clean their stale sessions.

    • From the list, select the Work Repositories you want to clean.

      Select Clean all Work Repositories if you want to clean stale sessions in all Work Repositories.

  3. Click OK to start the cleaning process. A progress bar indicates the progress of the cleaning process.

22.3.2 Managing Load Plan Executions

Managing Load Plan Executions involves the following tasks:

22.3.3 Managing the Log

Oracle Data Integrator provides several solutions for managing your log data:

  • Filtering Sessions

  • Purging the Log

  • Organizing the Log with Session Folders

  • Exporting and Importing Log Data

22.3.3.1 Filtering Sessions

Filtering log sessions allows you to display only certain sessions in Operator Navigator, by filtering on parameters such as the user, status or duration of sessions. Sessions that do not meet the current filter are hidden from view, but they are not removed from the log.

To filter out sessions:

  1. In the Operator Navigator toolbar menu, click Filter. The Define Filter editor opens.

  2. In the Define Filter Editor, set the filter criteria according to your needs. Note that the default settings select all sessions.

    • Session Number: Use blank to show all sessions.

    • Session Name: Use % as a wildcard. For example DWH% matches any session whose name begins with DWH.

    • Session's execution Context

    • Agent used to execute the session

    • User who launched the session

    • Status: Running, Waiting etc.

    • Date of execution: Specify either a date From or a date To, or both.

    • Duration greater than a specified number of seconds

  3. Click Apply for a preview of the current filter.

  4. Click OK.

Sessions that do not match these criteria are hidden in the Session List accordion. The Filter button on the toolbar is activated.

To deactivate the filter click Filter in the Operator toolbar menu. The current filter is deactivated, and all sessions appear in the list.

22.3.3.2 Purging the Log

Purging the log allows you to remove past sessions and Load Plan runs from the log. This procedure is used to keep a reasonable volume of sessions and Load Plans archived in the work repository. It is advised to perform a purge regularly. This purge can be automated using the OdiPurgeLog tool in a scenario.
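
For example, a scenario automating the purge could include a step calling the OdiPurgeLog tool with a command along the following lines. This is a sketch only: the parameter values and the date format shown are illustrative, and the complete OdiPurgeLog syntax is given in Appendix A, "Oracle Data Integrator Tools Reference".

  OdiPurgeLog "-PURGE_TYPE=SESSION" "-FROMDATE=2011/01/01 00:00:00" "-TODATE=2011/06/30 23:59:59" "-PURGE_REPORTS=1"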

To purge the log:

  1. From the Operator Navigator toolbar menu select Connect Navigator > Purge Log... The Purge Log editor opens.

  2. In the Purge Log editor, set the criteria listed in Table 22-3 for the sessions or Load Plan runs you want to delete.

    Table 22-3 Purge Log Parameters

    Parameter | Description

    Purge Type

    Select the objects to purge.

    From ... To

    Sessions and/or Load Plan runs in this time range will be deleted.


    Context

    Sessions and/or Load Plan runs executed in this context will be deleted.

    Agent

    Sessions and/or Load Plan runs executed by this agent will be deleted.

    Status

    Sessions and/or Load Plan runs in this status will be deleted.

    User

    Sessions and/or Load Plan runs executed by this user will be deleted.

    Name

    Sessions and/or Load Plan runs matching this session name will be deleted. Note that you can specify session name masks using % as a wildcard.

    Purge scenario reports

    If you select Purge scenario reports, the scenario reports (appearing under the execution node of each scenario) will also be purged.


    Only the sessions and/or Load Plan runs matching the specified filters will be removed:

    • When you choose to purge session logs only, then the sessions launched as part of the Load Plan runs are not purged even if they match the filter criteria.

    • When you purge Load Plan runs, the Load Plan run which matched the filter criteria and the sessions launched directly as part of Load Plan run and its child/grand sessions will be deleted.

    • When a Load Plan run matches the filter, all its attached sessions are also purged irrespective of whether they match the filter criteria or not.

  3. Click OK.

Oracle Data Integrator removes the sessions and/or Load Plan runs from the log.


Note:

It is also possible to delete sessions or Load Plan runs by selecting one or more sessions or Load Plan runs in Operator Navigator and pressing the Delete key. Deleting a Load Plan run in this way also deletes the corresponding sessions.

22.3.3.3 Organizing the Log with Session Folders

You can use session folders to organize the log. Session folders automatically group sessions and Load Plan Runs that were launched with certain keywords. Session folders are created under the Keywords node on the Session List or Load Plan Executions accordions.

Each session folder has one or more keywords associated with it. Any session launched with all the keywords of a session folder is automatically categorized beneath it.

To create a new session folder:

  1. In Operator Navigator, go to the Session List or Load Plan Executions accordion.

  2. Right-click the Keywords node and select New Session Folder.

  3. Specify a Folder Name.

  4. Click Add to add a keyword to the list. Repeat this step for every keyword you wish to add.


Note:

Only sessions or Load Plans with all the keywords of a given session folder will be shown below that session folder. Keyword matching is case-sensitive.

Table 22-4 lists examples of how session folder keywords are matched.

Table 22-4 Matching of Session Folder Keywords

Session folder keywords | Session keywords | Matches?

DWH, Test, Batch

Batch

No - all keywords must be matched.

Batch

DWH, Batch

Yes - extra keywords on the session are ignored.

DWH, Test

Test, dwh

No - matching is case-sensitive.


To launch a session with keywords, you can, for example, start a scenario from a command line with the -KEYWORDS parameter, as in the sketch below. Refer to Chapter 21, "Running Integration Processes" for more information.
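
The following command is an illustrative sketch of starting a scenario with two keywords from a standalone agent's bin directory; the scenario name, version, context, and keywords are placeholder values, and the full startscen syntax is detailed in Chapter 21, "Running Integration Processes".

  # Illustrative sketch: the resulting session carries the keywords DWH and Batch,
  # and is therefore grouped under any session folder whose keywords are all
  # contained in this list.
  /u01/odi/agent/bin/startscen.sh LOAD_SALES 001 GLOBAL "-KEYWORDS=DWH,Batch"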


Note:

Session folder keyword matching is dynamic. If the keywords for a session folder are changed or if a new folder is created, existing sessions are immediately re-categorized.

22.3.3.4 Exporting and Importing Log Data

Export and import log data for archiving purposes.

Exporting Log Data

Exporting log data allows you to export log files for archiving purposes.

To export the log:

  1. Select Export... from the Designer, Topology, Security or Operator Navigator toolbar menu.

  2. In the Export Selection dialog, select Export the Log.

  3. Click OK.

  4. In the Export the log dialog, set the log export parameters as described in Table 22-5.

    Table 22-5 Log Export Parameters

    Properties | Description

    Export to directory

    Directory in which the export file will be created.

    Export to zip file

    If this option is selected, a unique compressed file containing all log export files will be created. Otherwise, a set of log export files is created.

    Zip File Name

    Name given to the compressed export file.

    Filters

    This set of options allows you to filter the log files to export according to the specified parameters.

    Log Type

    From the list, select for which objects you want to retrieve the log. Possible values are: All|Load Plan runs and attached sessions|Sessions

    From / To

    Date of execution: specify either a date From or a date To, or both.

    Agent

    Agent used to execute the session. Leave the default All Agents value, if you do not want to filter based on a given agent.

    Context

    Session's execution Context. Leave the default All Contexts value, if you do not want to filter based on a context.

    Status

    The possible states are Done, Error, Queued, Running, Waiting, Warning and All States. Leave the default All States value, if you do not want to filter based on a given session state.

    User

    User who launched the session. Leave the default All Users value, if you do not want to filter based on a given user.

    Session Name

    Use % as a wildcard. For example DWH% matches any session whose name begins with DWH.

    Advanced options

    This set of options allows you to parameterize the output file format.

    Character Set

    Encoding specified in the export file, as the encoding parameter in the XML file header. For example:

    <?xml version="1.0" encoding="ISO-8859-1"?>

    Java Character Set

    Java character set used to generate the file.


  5. Click OK.

The log data is exported into the specified location.

Note that you can also automate the log data export using the OdiExportLog tool.
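
For example, a scenario automating the archive could call the OdiExportLog tool with a command along these lines. This is a sketch only: the directory, file name, and date values are illustrative, and the exact OdiExportLog parameters are listed in Appendix A, "Oracle Data Integrator Tools Reference".

  OdiExportLog "-TODIR=/u01/odi/log_archive" "-ZIPFILE_NAME=log_2011_06.zip" "-FROMDATE=2011/06/01 00:00:00" "-TODATE=2011/06/30 23:59:59"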

Importing Log Data

Importing log data allows you to import into your work repository log files that have been exported for archiving purposes.

To import the log:

  1. Select Import... from the Designer, Topology, Security or Operator Navigator toolbar menu.

  2. In the Import Selection dialog, select Import the Log.

  3. Click OK.

  4. In the Import of the log dialog:

    1. Select the Import Mode. Note that sessions can only be imported in Synonym Mode INSERT mode. Refer to Section 20.1.3, "Import Types" for more information.

    2. Select whether you want to import the files From a Folder or From a ZIP file.

    3. Enter the file import folder or zip file.

    4. Click OK.

The specified folder or ZIP file is imported into the work repository.

22.3.4 Managing Scenarios and Load Plans

You can also manage your executions in Operator Navigator by using scenarios or Load Plans.

Before running a scenario, you need to generate it in Designer Navigator or import it from a file. See Chapter 13, "Working with Scenarios". Load Plans are created using Designer Navigator, but can also be modified using Operator Navigator. See Chapter 14, "Working with Load Plans" for more information.

Launching a scenario from Operator Navigator is covered in Section 21.3.1, "Executing a Scenario from ODI Studio" and how to run a Load Plan is described in Section 21.6, "Executing a Load Plan".

22.3.4.1 Load Plan and Scenario Folders

In Operator Navigator, scenarios and Load Plans can be grouped into Load Plan and Scenario folders to facilitate organization. Load Plan and Scenario folders can contain other Load Plan and Scenario folders.

To create a Load Plan and Scenario folder:

  1. In Operator Navigator go to the Load Plans and Scenarios accordion.

  2. From the Load Plans and Scenarios toolbar menu, select New Load Plan and Scenario Folder.

  3. On the Definition tab of the Load Plan and Scenario Folder editor enter a name for your folder.

  4. From the File menu, select Save.

You can now reorganize your scenarios and Load Plans. Drag and drop them into the Load Plan and Scenario folder.

22.3.4.2 Importing Load Plans, Scenarios, and Solutions in Production

A Load Plan or a scenario generated from Designer can be exported and then imported into a development or execution repository. This operation is used to deploy Load Plans and scenarios in a different repository, possibly in a different environment or site.

Importing a Load Plan or scenario in a development repository is performed via Designer or Operator Navigator. With an execution repository, only Operator Navigator is available for this purpose.

See Section 13.6, "Importing Scenarios in Production" for more information on how to import a scenario in production and Section 14.4.3.2, "Importing Load Plans" for more information on the Load Plan import.

Similarly, a solution containing several scenarios can be imported to easily transfer and restore a group of scenarios at once. See Chapter 19, "Working with Version Management" for more information. Note that when connected to an execution repository, only scenarios may be restored from solutions.

22.3.5 Managing Schedules

A schedule is always attached to one scenario or one Load Plan. Schedules can be created in Operator Navigator. See Section 21.9, "Scheduling Scenarios and Load Plans" for more information.

You can also import an already existing schedule along with a scenario or Load Plan import. See Section 13.6, "Importing Scenarios in Production" and Section 14.4.3, "Exporting, Importing and Versioning Load Plans" for more information.

You can view the scheduled tasks of all your agents or you can view the scheduled tasks of one particular agent. See Section 21.9.1.3, "Displaying the Schedule" for more information.


23 Working with Oracle Data Integrator Console

This chapter describes how to work with Oracle Data Integrator Console. An overview of the Console user interface is provided.

This chapter includes the following sections:

23.1 Introduction to Oracle Data Integrator Console

Oracle Data Integrator Console is a web-based console for managing and monitoring an Oracle Data Integrator run-time architecture and for browsing design-time objects.

This section contains the following topics:

23.1.1 Introduction to Oracle Data Integrator Console

Oracle Data Integrator Console is a web-based console available for different types of users:

  • Administrators use Oracle Data Integrator Console to create and import repositories and to configure the Topology (data servers, schemas, and so forth).

  • Production operators use Oracle Data Integrator Console to manage scenarios and Load Plans, monitor sessions and Load Plan runs, and manage the content of the error tables generated by Oracle Data Integrator.

  • Business users and developers browse development artifacts in this interface, using, for example, the Data Lineage and Flow Map features.

This web interface integrates seamlessly with Oracle Fusion Middleware Control Console and allows Fusion Middleware administrators to drill down into the details of Oracle Data Integrator components and sessions.


Note:

Oracle Data Integrator Console is required for the Fusion Middleware Control Extension for Oracle Data Integrator. It must be installed and configured for this extension to discover and display the Oracle Data Integrator components in a domain.

23.1.2 Oracle Data Integrator Console Interface

Oracle Data Integrator Console is a web interface using the ADF-Faces framework.

Figure 23-1 shows the layout of Oracle Data Integrator Console.

Figure 23-1 Oracle Data Integrator Console

This image shows ODI Console.

Oracle Data Integrator Console displays the objects available to the current user in two Navigation tabs in the left panel:

  • Browse tab displays the repository objects that can be browsed and edited. In this tab you can also manage sessions and error tables.

  • Management tab is used to manage the repositories and the repository connections. This tab is available to connection users having Supervisor privileges, or to any user to set up the first repository connections.

The right panel displays the following tabs:

  • Search tab is always visible and allows you to search for objects in the connected repository.

  • One Master/Details tab is displayed for each object that is being browsed or edited. Note that it is possible to browse or edit several objects at the same time.

The search field above the Navigation tabs allows you to open the search tab when it is closed.

Working with the Navigation Tabs

In the Navigation tabs, you can browse for objects contained in the repository. When an object or node is selected, the Navigation Tab toolbar displays icons for the actions available for this object or node. If an action is not available for this object, the icon is grayed out. For example, you can edit and add data server objects under the Topology node in the Browse Tab, but you cannot edit Projects under the Designer node. Note that the number of tabs that you can open at the same time is limited to ten.

23.2 Using Oracle Data Integrator Console

This section explains the different types of operations available in Oracle Data Integrator Console. It does not focus on each type of object that can be managed with the console, but provides the generic methods for managing objects with the console.

This section includes the following topics:


Note:

Oracle Data Integrator Console uses the security defined in the master repository. Operations that are not allowed for a user will appear grayed out for this user.

In addition, the Management tab is available only for users with Supervisor privileges.


23.2.1 Connecting to Oracle Data Integrator Console

Oracle Data Integrator Console connects to a repository via a Repository Connection, defined by an administrator.

Note that you can only connect to Oracle Data Integrator Console if it has been previously installed. See the Oracle Fusion Middleware Installation Guide for Oracle Data Integrator for more information about installing Oracle Data Integrator Console.


Note:

The first time you connect to Oracle Data Integrator Console, if no repository connection is configured, you will have access to the Management tab to create a first repository connection. See "Creating a Repository Connection" for more information. After your first repository connection is created, the Management tab is no longer available from the Login page, and is available only for users with Supervisor privileges.

Connecting to Oracle Data Integrator Console

To connect to Oracle Data Integrator Console:

  1. Open a web browser, and connect to the URL where Oracle Data Integrator Console is installed. For example: http://odi_host:8001/odiconsole/.

  2. From the Repository list, select the Repository connection corresponding to the master or work repository you want to connect to.

  3. Provide a User ID and a Password.

  4. Click Sign In.

23.2.2 Generic User Operations

This section describes the generic operations available in Oracle Data Integrator Console for a typical user.

This section includes the following operations:


Note:

Creating, editing, and deleting operations are not allowed for Scenarios and Load Plans. For more information on the possible actions that can be performed with these objects in ODI Console, see Section 23.2.3, "Managing Scenarios and Sessions" and Section 23.2.4, "Managing Load Plans".

Viewing an Object

To view an object:

  1. Select the object in the Browse or Management Navigation tab.

  2. Click View in the Navigation tab toolbar. The simple page or the Master/Detail page for the object opens.

Editing an Object

To edit an object:

  1. Select the object in the Browse or Management Navigation tab.

  2. Click Update in the Navigation tab toolbar. The edit page for the object opens.

  3. Change the value for the object fields.

  4. Click Save in the edit page for this object.

Creating an Object

To create an object:

  1. Navigate to the parent node of the object you want to create in the Browse or Management Navigation tab. For example, to create a Context, navigate to the Topology > Contexts node in the Browse tab.

  2. Click Create in the Navigation tab toolbar. An Add dialog for this object appears.

  3. Provide the values for the object fields.

  4. Click Save in the Add dialog of this object. The new object appears in the Navigation tab.

Deleting an Object

To delete an object:

  1. Select the object in the Browse or Management Navigation tab.

  2. Click Delete in the Navigation tab toolbar.

  3. Click OK in the confirmation window.

Searching for an Object

To search for an object:

  1. In the Search tab, select the tab corresponding to the object you want to search:

    • Design Time tab allows you to search for design-time objects

    • Topology tab allows you to search for topology objects

    • Runtime tab allows you to search for run-time objects such as Load Plans, Scenarios, Scenario Folders, or Session Folders

    • Sessions tab allows you to search for sessions

    • Load Plan Execution tab allows you to search for Load Plan runs

  2. Set the search parameters to narrow your search.

    For example when searching design-time or topology objects:

    1. In the Search Text field, enter a part of the name of the object that you want to search.

    2. Select Case sensitive if you want the search to be case sensitive (this feature is not provided for the sessions or Load Plan execution search).

    3. In Models/Project (Designer tab) or Topology (Topology tab), select the type of object you want to search for. Select All to search for all objects.

  3. Click Search.

  4. The Search Results appear, grouped by object type. You can click an object in the search result to open its master/details page.

23.2.3 Managing Scenarios and Sessions

This section describes the operations related to scenarios and sessions available in Oracle Data Integrator Console.

This section includes the following operations:

Importing a Scenario

To import a scenario:

  1. Select the Browse Navigation tab.

  2. Navigate to Runtime > Scenarios/Load Plans > Scenarios.

  3. Click Import in the Navigation tab toolbar.

  4. Select an Import Mode and select an export file in Scenario XML File.

  5. Click Import Scenario.

Exporting a Scenario

To export a scenario:

  1. Select the Browse Navigation tab.

  2. Navigate to Runtime > Scenarios/Load Plans > Scenarios.

  3. Click Export in the Navigation tab toolbar.

  4. In the Export Scenario dialog, set the parameters as follows:

    • From the Scenario Name list, select the scenario to export.

    • In the Encoding Java Charset field, enter the Java character set for the export file.

    • In the Encoding XML Charset field, enter the encoding to specify in the export file.

    • In the XML Version field, enter the XML Version to specify in the export file.

    • Optionally, select Include Dependant objects to export linked child objects.

  5. Click Export Scenario.

Running a Scenario

To execute a scenario:

  1. Select the Browse Navigation tab.

  2. Navigate to Runtime > Scenarios/Load Plans > Scenarios.

  3. Select the scenario you want to execute.

  4. Click Execute in the Navigation tab toolbar.

  5. Select an Agent, a Context, and a Log Level for this execution.

  6. Click Execute Scenario.

Stopping a Session

Note that you can perform a normal or an immediate kill of a running session. Sessions with the status Done, Warning, or Error cannot be killed.

To kill a session:

  1. Select the Browse Navigation tab.

  2. Navigate to Runtime > Sessions/Load Plan Executions > Sessions.

  3. Select the session you want to stop.

  4. Click Kill in the Navigation tab toolbar.

Restarting a Session

To restart a session:

  1. Select the Browse Navigation tab.

  2. Navigate to Runtime > Sessions/Load Plan Executions > Sessions.

  3. Select the session you want to restart.

  4. Click Restart in the Navigation tab toolbar.

  5. In the Restart Session dialog, set the parameters as follows:

    • Agent: From the list, select the agent you want to use for running the new session.

    • Log Level: From the list, select the log level. Select Log Level 6 in the Execution or Restart Session dialog to enable variable tracking. Log level 6 has the same behavior as log level 5, but with the addition of variable tracking.

  6. Click Restart Session.

Cleaning Stale Sessions

To clean stale sessions:

  1. Select the Browse Navigation tab.

  2. Navigate to Runtime > Sessions/Load Plan Executions > Sessions.

  3. Click Clean in the Navigation tab toolbar.

  4. In the Clean Stale Sessions dialog, select the Agent for which you want to clean stale sessions.

  5. Click OK.

Managing Data Statistics and Erroneous Records

Oracle Data Integrator Console allows you to browse the details of a session, including the record statistics. When a session detects erroneous data during a flow or static check, these errors are isolated into error tables. You can also browse and manage the erroneous rows using Oracle Data Integrator Console.


Note:

Sessions with erroneous data detected finish in Warning status.

To view the erroneous data:

  1. Select the Browse Navigation tab.

  2. Navigate to a given session using Runtime > Sessions/Load Plan Executions > Sessions. Select the session and click View in the Navigation tab toolbar.

    The Session page is displayed.

  3. In the Session page, go to the Relationships section and select the Record Statistics tab.

    This tab shows each physical table used as a target in this session, as well as the record statistics.

  4. Click the number shown in the Errors column. The content of the error table appears.

    • You can filter the errors by Constraint Type, Name, Message Content, Detection date, and so forth. Click Filter Result to apply a filter.

    • Select a number of errors in the Query Results table and click Delete to delete these records.

    • Click Delete All to delete all the errors.


Note:

Delete operations cannot be undone.

23.2.4 Managing Load Plans

This section describes the operations related to Load Plans available in Oracle Data Integrator Console.

This section includes the following operations:

Importing a Load Plan

To import a Load Plan:

  1. Select the Browse Navigation tab.

  2. Navigate to Runtime > Scenarios/Load Plans > Load Plans.

  3. Click Import in the Navigation tab toolbar.

  4. In the Import Load Plan dialog, select an Import Mode and select an export file in the Select Load Plan XML File field.

  5. Click Import.


Note:

When you import a Load Plan that has been previously exported, the imported Load Plan does not include the scenarios referenced by the Load Plan. Scenarios used in a Load Plan need to be imported separately. See Importing a Scenario for more information.

Exporting a Load Plan

To export a Load Plan:

  1. Select the Browse Navigation tab.

  2. Navigate to Runtime > Scenarios/Load Plans > Load Plans.

  3. Select the Load Plan to export.

  4. Click Export in the Navigation tab toolbar.

  5. In the Export dialog, set the parameters as follows:

    • From the Load Plan Name list, select the Load Plan to export.

    • In the Encoding Java Charset field, enter the Java character set for the export file.

    • In the Encoding XML Charset field, enter the encoding to specify in the export file.

    • In the XML Version field, enter the XML Version to specify in the export file.

    • Optionally, select Include Dependant objects to export linked child objects.

  6. Click Export.


Note:

The export of a Load Plan does not include the scenarios referenced by the Load Plan. Scenarios used in a Load Plan need to be exported separately. See Exporting a Scenario for more information.

Running a Load Plan

To run a Load Plan:

  1. Select the Browse Navigation tab.

  2. Navigate to Runtime > Scenarios/Load Plans > Load Plans.

  3. Select the Load Plan you want to execute.

  4. Click Execute in the Navigation tab toolbar.

  5. Select a Logical Agent, a Context, a Log Level, and if your Load Plan uses variables, specify the Startup values for the Load Plan variables.

  6. Click Execute.

Stopping a Load Plan Run

Note that you can perform a normal or an immediate kill of a Load Plan run. Any running or waiting Load Plan Run can be stopped.

To stop a Load Plan Run:

  1. Select the Browse Navigation tab.

  2. Navigate to Runtime > Sessions/Load Plan Executions > Load Plan Executions.

  3. Select the Load Plan run you want to stop.

  4. Click Kill in the Navigation tab toolbar.

Restarting a Load Plan Run

A Load Plan can only be restarted if the selected run of the current Load Plan instance is in Error status and if there is no other instance of the same Load Plan currently running.

To restart a Load Plan Run:

  1. Select the Browse Navigation tab.

  2. Navigate to Runtime > Sessions/Load Plan Executions > Load Plan Executions.

  3. Select the Load Plan run you want to restart.

  4. Click Restart in the Navigation tab toolbar.

  5. In the Restart Load Plan dialog, select the Physical Agent that restarts the Load Plan. Optionally, select a different log level.

  6. Click Restart.

23.2.5 Purging the Log

This section describes how to purge the log in Oracle Data Integrator Console by removing past sessions and/or Load Plan runs from the log.

To purge the log:

  1. Select the Browse Navigation tab.

  2. Navigate to Runtime > Sessions/Load Plan Executions.

  3. Click Purge in the Navigation tab toolbar.

  4. In the Purge Sessions/Load Plan Executions dialog, set the purge parameters listed in Table 23-1.

    Table 23-1 Purge Log Parameters

    • Purge Type: Select the objects to purge.

    • From ... To: Sessions and/or Load Plan runs in this time range will be deleted.


    • Context: Sessions and/or Load Plan runs executed in this context will be deleted.

    • Agent: Sessions and/or Load Plan runs executed by this agent will be deleted.

    • Status: Sessions and/or Load Plan runs in this status will be deleted.

    • User: Sessions and/or Load Plan runs executed by this user will be deleted.

    • Name: Sessions and/or Load Plan runs matching this session name will be deleted. Note that you can specify session name masks using % as a wildcard.

    • Purge scenario reports: If you select this option, the scenario reports (appearing under the execution node of each scenario) will also be purged.


    Only the sessions and/or Load Plan runs matching the specified filters will be removed:

    • When you choose to purge session logs only, then the sessions launched as part of the Load Plan runs are not purged even if they match the filter criteria.

    • When you purge Load Plan runs, the Load Plan run which matched the filter criteria, the sessions launched directly as part of this Load Plan run, and its child and grandchild sessions will be deleted.

    • When a Load Plan run matches the filter, all its attached sessions are also purged irrespective of whether they match the filter criteria or not.

  5. Click OK.

Oracle Data Integrator Console removes the sessions and/or Load Plan runs from the log.

23.2.6 Using Data Lineage and Flow Map

This section describes how to use the Data Lineage and Flow Map features available in Oracle Data Integrator Console.

  • Data Lineage provides a graph displaying the flows of data from the point of view of a given datastore. In this graph, you can navigate back and forth and follow this data flow.

  • Flow Map provides a map of the relations that exist between the data structures (models, sub-models and datastores) and design-time objects (projects, folders, packages, interfaces). This graph allows you to draw a map made of several data structures and their data flows.

This section includes the following operations:

Working with the Data Lineage

To view the Data Lineage:

  1. Select the Browse Navigation tab.

  2. Navigate to Design Time > Models > Data Lineage.

  3. Click View in the Navigation tab toolbar.

  4. In the Data Lineage page, select a Model, then a Sub-Model and a datastore in this model.

  5. Select Show Interfaces if you want interfaces to be displayed between the datastore nodes.

  6. In the Naming Options section, select the prefix to add to your datastore and interface names.

  7. Click View to draw the Data Lineage graph. This graph is centered on the datastore selected in step 4.

    In this graph, you can use the following actions:

    • Click Go Back to return to the Data Lineage options and redraw the graph.

    • Use the Hand tool and then click a datastore to redraw the lineage centered on this datastore.

    • Use the Hand tool and then click an interface to view this interface's page.

    • Use the Arrow tool to expand/collapse groups.

    • Use the Move tool to move the graph.

    • Use the Zoom In/Zoom Out tools to resize the graph.

    • Select View Options to change the display options and have the graph refreshed with the new options.

Working with the Flow Map

To view the Flow Map:

  1. Select the Browse Navigation tab.

  2. Navigate to Design Time > Models > Flow Map.

  3. Click View in the Navigation tab toolbar.

  4. In the Flow Map page, select one or more Models. Select All to select all models.

  5. Select one or more Projects. Select All to select all projects.

  6. In the Select the level of details of the map section, select the granularity of the map. The objects that you select here will be the nodes of your graph.

    Check Do not show Projects, Folders... if you want the map to show only data structures.

  7. Optionally, indicate the grouping for the data structures and design-time objects in the map, using the options in the Indicate how to group Objects in the Map section.

  8. Click View to draw the Flow Map graph.

    In this graph, you can use the following actions:

    • Click Go Back to return to the Flow Map options and redraw the graph.

    • Use the Hand tool and then click a node (representing a datastore, an interface, and so forth) in the map to open this object's page.

    • Use the Arrow tool to expand/collapse groups.

    • Use the Move tool to move the graph.

    • Use the Zoom In/Zoom Out tools to resize the graph.

23.2.7 Performing Administrative Operations

This section describes the different administrative operations available in Oracle Data Integrator Console. These operations are available for a user with Supervisor privileges.

This section includes the following operations:

Creating a Repository Connection

A repository connection is a connection definition for Oracle Data Integrator Console. A connection does not include Oracle Data Integrator user and password information.

To create a repository connection:

  1. Navigate to the Repository Connections node in the Management Navigation tab.

  2. Click Create in the Navigation tab toolbar. A Create Repository Connection dialog for this object appears.

  3. Provide the values for the repository connection:

    • Connection Alias: Name of the connection that will appear on the Login page.

    • Master JNDI URL: JNDI URL of the datasource to connect the master repository database.

    • Supervisor User Name: Name of the Oracle Data Integrator user with Supervisor privileges that Oracle Data Integrator Console will use to connect to the repository. This user's password must be declared in the WLS Credential Store.

    • Work JNDI URL: JNDI URL of the datasource to connect the work repository database. If no value is given in this field, the repository connection will allow connection to the master repository only, and the Navigation will be limited to Topology information.

    • JNDI URL: Check this option if you want to use the environment naming context (ENC). When this option is checked, Oracle Data Integrator Console automatically prefixes the data source name with the string java:comp/env/ to identify it in the application server's JNDI directory. Note that the JNDI Standard is not supported by Oracle WebLogic Server or for global data sources. (See the example values after this procedure.)

    • Default: Check this option if you want this Repository Connection to be selected by default on the login page.

  4. Click Save. The new Repository Connection appears in the Management Navigation tab.
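
For illustration, a repository connection for a WebLogic deployment might use values similar to the following. The datasource JNDI names shown here are hypothetical examples, not required names:

Connection Alias: ODI_REPO
Master JNDI URL: jdbc/odiMasterRepository
Work JNDI URL: jdbc/odiWorkRepository
Supervisor User Name: SUPERVISOR

With the JNDI URL option checked, jdbc/odiMasterRepository would be looked up as java:comp/env/jdbc/odiMasterRepository in the application server's JNDI directory.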

Testing a Data Server or a Physical Agent Connection

This section describes how to test the data server connection or the connection of a physical agent in Oracle Data Integrator Console.

To test the data server connection:

  1. Select the Browse Navigation tab.

  2. Navigate to Topology > Data Servers.

  3. Select the data server whose connection you want to test.

  4. Click Test Connection in the Navigation tab toolbar.

  5. In the Test Connection dialog, select the:

    • Physical Agent that will carry out the test

    • Transaction on which you want to execute the command. This parameter is only displayed if there is any On Connect/Disconnect command defined for this data server. The transactions from 0 to 9 and the Autocommit transaction correspond to connections created by sessions (by procedures or knowledge modules). The Client Transaction corresponds to the client components (ODI Console and Studio).

  6. Click Test.

A dialog showing "Connection successful!" is displayed if the test has worked. If not, an error message is displayed.

To test the physical agent connection:

  1. Select the Browse Navigation tab.

  2. Navigate to Topology > Agents > Physical Agents.

  3. Select the physical agent whose connection you want to test.

  4. Click Test Connection in the Navigation tab toolbar.

A dialog showing "Connection successful!" is displayed if the test has worked. If not, an error message is displayed.

Administering Repositories

Oracle Data Integrator Console provides you with features to perform management operations (create, import, export) on repositories. These operations are available from the Management Navigation tab, under the Repositories node. These management operations reproduce in a web interface the administrative operations available via the Oracle Data Integrator Studio and allow setting up and maintaining your environment from the ODI Console.

See Chapter 3, "Administering the Oracle Data Integrator Repositories" and Chapter 20, "Exporting/Importing" for more information on these operations.

Administering Java EE Agents

Oracle Data Integrator Console allows you to add JDBC datasources and create templates to deploy physical agents into WebLogic Server.

See Chapter 4, "Setting-up the Topology" for more information on Java EE Agents, datasources and templates.

To add a datasource to a physical agent:

  1. Select the Browse Navigation tab.

  2. Navigate to Topology > Agents > Physical Agents.

  3. Select the agent you want to manage.

  4. Click Edit in the Navigation tab toolbar.

  5. Click Add Datasource.

  6. Provide a JNDI Name for this datasource and select the Data Server Name. This datasource will be used to connect to this data server from the machine where the Java EE Agent will be deployed.

  7. Click OK.

  8. Click Save to save the changes to the physical agent.

To create a template for a physical agent:

  1. Select the Browse Navigation tab.

  2. Navigate to Topology > Agents > Physical Agents.

  3. Select the agent you want to manage.

  4. Click Edit in the Navigation tab toolbar.

  5. Click Agent Deployment.

  6. Follow the steps of the Agent Deployment wizard. This wizard reproduces in a web interface the WLS Template Generation wizard. See Chapter 4, "Deploying an Agent in a Java EE Application Server (Oracle WebLogic Server)" for more details.


Part I

Understanding Oracle Data Integrator

This part provides an introduction to Oracle Data Integrator and the basic steps of creating an integration project with Oracle Data Integrator.

This part contains the following chapters:


7 Working with Changed Data Capture

This chapter describes how to use Oracle Data Integrator's Changed Data Capture feature to detect changes occurring on the data and only process these changes in the integration flows.

This chapter includes the following sections:

7.1 Introduction to Changed Data Capture

Changed Data Capture (CDC) allows Oracle Data Integrator to track changes in source data caused by other applications. When running integration interfaces, thanks to CDC, Oracle Data Integrator can avoid processing unchanged data in the flow.

Reducing the source data flow to only changed data is useful in many contexts, such as data synchronization and replication. It is essential when setting up an event-oriented architecture for integration. In such an architecture, applications make changes in the data ("Customer Deletion", "New Purchase Order") during a business process. These changes are captured by Oracle Data Integrator and transformed into events that are propagated throughout the information system.

Changed Data Capture is performed by journalizing models. Journalizing a model consists of setting up the infrastructure to capture the changes (inserts, updates and deletes) made to the records of this model's datastores.

Oracle Data Integrator supports two journalizing modes:

  • Simple Journalizing tracks changes in individual datastores in a model.

  • Consistent Set Journalizing tracks changes to a group of the model's datastores, taking into account the referential integrity between these datastores. The group of datastores journalized in this mode is called a Consistent Set.

7.1.1 The Journalizing Components

The journalizing components are:

  • Journals: Where changes are recorded. Journals only contain references to the changed records along with the type of changes (insert/update, delete).

  • Capture processes: Journalizing captures the changes in the source datastores either by creating triggers on the data tables, or by using database-specific programs to retrieve log data from data server log files. See the Oracle Fusion Middleware Connectivity and Knowledge Modules Guide for Oracle Data Integrator for more information on the capture processes available for the technology you are using.

  • Subscribers: CDC uses a publish/subscribe model. Subscribers are entities (applications, integration processes, etc.) that use the changes tracked on a datastore or on a consistent set. They subscribe to a model's CDC to have the changes tracked for them. Changes are captured only if there is at least one subscriber to the changes. When all subscribers have consumed the captured changes, these changes are discarded from the journals.

  • Journalizing views: Provide access to the changes and the changed data captured. They are used by the user to view the changes captured, and by integration processes to retrieve the changed data.

These components are implemented in the journalizing infrastructure.

7.1.2 Simple vs. Consistent Set Journalizing

Simple Journalizing enables you to journalize one or more datastores. Each journalized datastore is treated separately when capturing the changes.

This approach has a limitation, illustrated in the following example: You want to process changes in the ORDER and ORDER_LINE datastores (with a referential integrity constraint based on the fact that an ORDER_LINE record should have an associated ORDER record). If you have captured insertions into ORDER_LINE, you have no guarantee that the associated new records in ORDER have also been captured. Processing ORDER_LINE records with no associated ORDER records may cause referential constraint violations in the integration process.

Consistent Set Journalizing provides the guarantee that when you have an ORDER_LINE change captured, the associated ORDER change has been also captured, and vice versa. Note that consistent set journalizing guarantees the consistency of the captured changes. The set of available changes for which consistency is guaranteed is called the Consistency Window. Changes in this window should be processed in the correct sequence (ORDER followed by ORDER_LINE) by designing and sequencing integration interfaces into packages.

Although consistent set journalizing is more powerful, it is also more difficult to set up. It should be used when referential integrity constraints need to be ensured when capturing the data changes. For performance reasons, consistent set journalizing is also recommended when a large number of subscribers are required.

It is not possible to journalize a model (or datastores within a model) using both consistent set and simple journalizing.

7.2 Setting up Journalizing

This section explains how to set up and start the journalizing infrastructure, and check that this infrastructure is running correctly. It also details the components of this infrastructure.

7.2.1 Setting up and Starting Journalizing

The basic process for setting up CDC on an Oracle Data Integrator data model is as follows:

  • Set the CDC parameters in the data model

  • Add the datastores to the CDC

  • For consistent set journalizing, set the datastores order

  • Add subscribers

  • Start the journals

Set the CDC parameters

Setting up the CDC parameters is performed on a data model. This consists of selecting or changing the journalizing mode and journalizing Knowledge Module used for the model.

To set up the CDC parameters:

  1. In the Models tree in the Designer Navigator, select the model that you want to journalize.

  2. Double-click this model to edit it.

  3. In the Journalizing tab, select the journalizing mode you want to use: Consistent Set or Simple.

  4. Select the Journalizing Knowledge Module (JKM) you want to use for this model. Only Knowledge Modules suitable for the data model's technology and journalizing mode, and that have been previously imported into at least one of your projects will appear in the list.

  5. Set the Options for this KM. See the Oracle Fusion Middleware Connectivity and Knowledge Modules Guide for Oracle Data Integrator for more information about this KM and its options.

  6. From the File menu, select Save All.


Note:

If the model is already being journalized, it is recommended that you stop journalizing with the existing configuration before modifying the data model journalizing parameters.

Add or remove datastores for the CDC:

You must flag the datastores that you want to journalize within the journalized model. A change in the datastore flag is taken into account the next time the journals are (re)started. When flagging a model or a sub-model, all of the datastores contained in the model or sub-model are flagged.

To add or remove datastores for the CDC:

  1. Right-click the model, sub-model or datastore that you want to add to/remove from the CDC in the Model tree in the Designer Navigator.

  2. Select Changed Data Capture > Add to CDC or Changed Data Capture > Remove from CDC to add to or remove from the CDC the selected datastore, or all datastores in the selected model/sub-model.

The datastores added to CDC should now have a marker icon. The journal icon represents a small clock. It should be yellow, indicating that the journal infrastructure is not yet in place.


Note:

It is possible to add datastores to the CDC after the journal creation phase. In this case, the journals should be re-started.

If a datastore with journals running is removed from the CDC in simple mode, the journals should be stopped for this individual datastore. If a datastore is removed from CDC in Consistent Set mode, the journals should be restarted for the model (Journalizing information is preserved for the other datastores).


Set the datastores order (consistent set journalizing only):

You only need to arrange the datastores in order when using consistent set journalizing. You should arrange the datastores in the consistent set in an order which preserves referential integrity when using their changed data. For example, if an ORDER table has references imported from an ORDER_LINE datastore (i.e. ORDER_LINE has a foreign key constraint that references ORDER), and both are added to the CDC, the ORDER datastore should come before ORDER_LINE. If the PRODUCT datastore has references imported from both ORDER and ORDER_LINE (i.e. both ORDER and ORDER_LINE have foreign key constraints to the PRODUCT table), its order should be lower still.
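
The following DDL sketch makes this ordering rule concrete. The table definitions are hypothetical and simplified (ORDERS is used instead of ORDER, which is a reserved word on most databases); the point is that a referenced (parent) table must be given a lower order value than the tables referencing it:

-- PRODUCT references nothing: give it the lowest order (for example, 10).
CREATE TABLE PRODUCT (
  PRODUCT_ID NUMBER PRIMARY KEY
);

-- ORDERS references PRODUCT: give it the next order (for example, 20).
CREATE TABLE ORDERS (
  ORDER_ID   NUMBER PRIMARY KEY,
  PRODUCT_ID NUMBER REFERENCES PRODUCT (PRODUCT_ID)
);

-- ORDER_LINE references both ORDERS and PRODUCT: give it the highest order (30).
CREATE TABLE ORDER_LINE (
  LINE_ID    NUMBER PRIMARY KEY,
  ORDER_ID   NUMBER REFERENCES ORDERS (ORDER_ID),
  PRODUCT_ID NUMBER REFERENCES PRODUCT (PRODUCT_ID)
);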

To set the datastores order:

  1. In the Models tree in the Designer Navigator, select the model journalized in consistent set mode.

  2. Double-click this model to edit it.

  3. Go to the Journalized Tables tab.

  4. If the datastores are not currently in any particular order, click the Reorganize button. This feature suggests an order for the journalized datastores based on the foreign keys defined in the model. Review the order suggested and edit the datastores order if needed.

  5. Select a datastore from the list, then use the Up and Down buttons to move it within the list. You can also directly edit the Order value for this datastore.

  6. Repeat the previous step until the datastores are ordered correctly.

  7. From the File menu, select Save All.


Note:

Changes to the order of datastores are taken into account the next time the journals are (re)started.

If existing scenarios consume changes from this CDC set, you should regenerate them to take into account the new organization of the CDC set.


Add or remove subscribers:

Each subscriber consumes, in a separate thread, the changes that occur on individual datastores for Simple Journalizing or on a model for Consistent Set Journalizing. Adding or removing a subscriber registers it to the CDC infrastructure in order to trap changes for it.

To add subscribers:

  1. In the Models tree in the Designer Navigator, select the journalized data model if using Consistent Set Journalizing or select a data model or an individual datastore if using Simple Journalizing.

  2. Right-click, then select Changed Data Capture > Subscriber > Subscribe. A window appears which lets you select your subscribers.

  3. Type a Subscriber name, then click the Add Subscriber button. Repeat the operation for each subscriber you want to add.


    Note:

    Subscriber names cannot contain single quote characters.

  4. Click OK.

  5. In the Execution window, select the execution parameters:

    • Select the Context into which the subscribers must be registered.

    • Select the Logical Agent that will run the journalizing tasks.

  6. Click OK.

  7. The Session Started Window appears.

  8. Click OK.

You can review the journalizing tasks in the Operator Navigator.

Removing a subscriber is a similar process. Select the Changed Data Capture > Subscriber > Unsubscribe option instead.

You can also add subscribers after starting the journals. Subscribers added after journal startup will only retrieve changes captured since they were added to the subscribers list.

Start/Drop the journals:

Starting the journals creates the CDC infrastructure if it does not exist yet. It also validates the addition, removal and order changes for journalized datastores.

Dropping the journals deletes the entire journalizing infrastructure.


Note:

Dropping the journals deletes all captured changes as well as the infrastructure. For simple journalizing, starting the journals also deletes the journal contents. Consistent Set JKMs support restarting the journals without losing any data.

To start or drop the journals:

  1. In the Models tree in the Designer Navigator, select the journalized data model if using Consistent Set Journalizing or select a data model or an individual datastore if using Simple Journalizing.

  2. Right-click, then select Changed Data Capture > Start Journal if you want to start the journals, or Changed Data Capture > Drop Journal if you want to stop them.

  3. In the Execution window, select the execution parameters:

    • Select the Context into which the journals must be started or dropped.

    • Select the Logical Agent that will run the journalizing tasks.

  4. Click OK.

  5. The Session Started Window appears.

  6. Click OK.

A session starts or drops the journals. You can review the journalizing tasks in the Operator Navigator.

Automate journalizing setup:

The journalizing infrastructure is implemented by the journalizing KM at the physical level. Consequently, Add Subscribers and Start Journals operations should be performed in each context where journalizing is required for the data model. It is possible to automate these operations using Oracle Data Integrator packages. Automating these operations is recommended to deploy a journalized infrastructure across different contexts.

For example, a developer will manually configure CDC in the Development context. When the development phase is complete, he provides a package that automates the CDC infrastructure. CDC is automatically deployed in the Test context by using this package. The same package is also used to deploy CDC in the Production context.

An overview of designing such a package follows. See Chapter 10, "Working with Packages" for more information on package creation.

To automate CDC configuration:

  1. Create a new package.

  2. Drag and drop from the Models accordion the model or datastore you want to journalize into the package Diagram tab. A new package step appears.

  3. Double-click the step icon in the package diagram. The properties inspector for this step opens.

  4. In the Type list, select Journalizing Model/Datastore.

  5. Check the Start box to start the journals.

  6. Check the Add Subscribers box, then enter the list of subscribers into the Subscribers group.

  7. Enter the first subscriber in the subscriber field, and click the Add button to add it to the Subscribers list. Repeat this operation for all your subscribers.

  8. From the File menu, select Save.

When this package is executed in a context, it starts the journals according to the model configuration and creates the specified subscribers in this context.

It is possible to split subscriber and journal management into different steps and packages. Deleting subscribers and stopping journals can be automated in the same manner.

7.2.2 Journalizing Infrastructure Details

When the journals are started, the journalizing infrastructure (if not installed yet) is deployed or updated in the following locations:

  • When the journalizing Knowledge Module creates triggers, they are installed on the tables in the Work Schema for the Oracle Data Integrator physical schema containing the journalized tables. Journalizing trigger names are prefixed with the prefix defined in the Journalizing Elements Prefixes for the physical schema. The default value for this prefix is T$. For details about database-specific capture processes see the Oracle Fusion Middleware Connectivity and Knowledge Modules Guide for Oracle Data Integrator.

  • A CDC common infrastructure for the data server is installed in the Work Schema for the Oracle Data Integrator physical schema that is flagged as Default for this data server. This common infrastructure contains information about subscribers, consistent sets, etc. for all the journalized schemas of this data server. This common infrastructure consists of tables whose names are prefixed with SNP_CDC_.

  • Journal tables and journalizing views are installed in the Work Schema for the Oracle Data Integrator physical schema containing the journalized tables. The journal table and journalizing view names are prefixed with the prefixes defined in the Journalizing Elements Prefixes for the physical schema. The default value is J$ for journal tables and JV$ for journalizing views.

All components (except the triggers) of the journalizing infrastructure (like all Data Integrator temporary objects, such as integration, error and loading tables) are installed in the Work Schema for the Oracle Data Integrator physical schemas of the data server. These work schemas should be kept separate from the schema containing the application data (Data Schema).
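
For example, you can inspect the journal content directly in the work schema. The following is an illustrative sketch only, assuming the default J$ prefix, a hypothetical ODI_WORK schema and CUSTOMER datastore, and the standard journal columns; the exact structure depends on the JKM used:

-- Count the captured changes not yet consumed, per subscriber and change type.
-- ODI_WORK and J$CUSTOMER are hypothetical names.
SELECT JRN_SUBSCRIBER,
       JRN_FLAG,                 -- 'I' for insert/update, 'D' for delete
       COUNT(*) AS PENDING_ROWS
FROM   ODI_WORK.J$CUSTOMER
WHERE  JRN_CONSUMED = '0'        -- change not yet consumed by the subscriber
GROUP BY JRN_SUBSCRIBER, JRN_FLAG;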


Important:

The journalizing triggers are the only components for journalizing that must be installed, when needed, in the same schema as the journalized data. Before creating triggers on tables belonging to a third-party software package, please check that this operation is not a violation of the software agreement or maintenance contract. Also ensure that installing and running triggers is technically feasible without interfering with the general behavior of the software package.

7.2.3 Journalizing Status

Datastores in models or interfaces have an icon marker indicating their journalizing status in Designer's current context:

  • OK - Journalizing is active for this datastore in the current context, and the infrastructure is operational for this datastore.

  • No Infrastructure - Journalizing is marked as active in the model, but no appropriate journalizing infrastructure was detected in the current context. Journals should be started. This state may occur if the journalizing mode implemented in the infrastructure does not match the one declared for the model.

  • Remnants - Journalizing is marked as inactive in the model, but remnants of the journalizing infrastructure such as the journalizing table have been detected for this datastore in the context. This state may occur if the journals were not stopped and the table has been removed from CDC.

7.3 Using Changed Data

Once journalizing is started and changes are tracked for subscribers, it is possible to use the changes captured. These can be viewed or used when the journalized datastore is used as a source of an interface.

7.3.1 Viewing Changed Data

To view the changed data:

  1. In the Models tree in the Designer Navigator, select the journalized datastore.

  2. Right-click and then select Changed Data Capture > Journal Data....

The changes captured for this datastore in the current context appear in a grid with three additional columns describing the change details:

  • JRN_FLAG: Flag indicating the type of change. It takes the value I for an inserted/updated record and D for a deleted record.

  • JRN_SUBSCRIBER: Name of the Subscriber.

  • JRN_DATE: Timestamp of the change.
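
The same changes can be read with SQL through the journalizing view, which is how integration interfaces access them at run time. A minimal sketch, assuming the default JV$ prefix, a hypothetical ODI_WORK schema, a CUSTOMER datastore with a CUST_ID key column, and a subscriber named APP1:

-- List the changes captured for subscriber APP1, oldest first.
SELECT JRN_FLAG, JRN_DATE, CUST_ID
FROM   ODI_WORK.JV$CUSTOMER
WHERE  JRN_SUBSCRIBER = 'APP1'
ORDER BY JRN_DATE;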

Journalized data is mostly used within integration processes. Changed data can be used as the source of integration interfaces. The way it is used depends on the journalizing mode.

7.3.2 Using Changed Data: Simple Journalizing

Using changed data from simple journalizing consists of designing interfaces using journalized datastores as sources. See Chapter 11, "Working with Integration Interfaces" for detailed instructions for creating interfaces.

Designing Interfaces with Simple Journalizing

When a journalized datastore is inserted into an interface diagram, a Journalized Data Only check box appears in this datastore's property panel.

When this box is checked:

  • The journalizing columns (JRN_FLAG, JRN_DATE and JRN_SUBSCRIBER) become available for the datastore.

  • A journalizing filter is also automatically generated on this datastore. This filter will reduce the amount of source data retrieved to the journalized data only. It is always executed on the source. You can customize this filter (for instance, to process changes in a time range, or only a specific type of change). A typical filter for retrieving all changes for a given subscriber is: JRN_SUBSCRIBER = '<subscriber_name>'.
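
For example, a customized journalizing filter might restrict the changes by subscriber, change type and age. The subscriber name and the one-hour window below are hypothetical, and SYSDATE assumes an Oracle source:

JRN_SUBSCRIBER = 'APP1'
AND JRN_FLAG = 'I'                 -- process inserts/updates only
AND JRN_DATE <= SYSDATE - (1/24)   -- ignore changes made in the last hour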

In simple journalizing mode all the changes taken into account by the interface (after the journalizing filter is applied) are automatically considered consumed at the end of the interface and removed from the journal. They cannot be used by a subsequent interface.

When processing journalized data, the SYNC_JRN_DELETE option of the integration Knowledge Module should be set carefully. It invokes the deletion from the target datastore of the records marked as deleted (D) in the journals and that are not excluded by the journalizing filter. If this option is set to No, integration will only process inserts and updates.


Note:

Only one datastore per dataset can have the Journalized Data Only option checked.

7.3.3 Using Changed Data: Consistent Set Journalizing

Using Changed data in Consistent journalizing is similar to simple journalizing for interface design. It requires extra steps before and after processing the changed data in the interfaces in order to enforce changes consistently within the set.

These operations can be performed either manually from the context menu of the journalized model or automated with packages.

Operations Before Using the Changed Data

The following operations should be undertaken before using the changed data when using consistent set journalizing:

  • Extend Window: The Consistency Window is a range of available changes in all the tables of the consistency set for which insert, update and delete changes can be processed without violating referential integrity. The extend window operation (re)computes this window to take into account new changes captured since the latest Extend Window operation. This operation is implemented using a package step with the Journalizing Model Type. This operation can be scheduled separately from other journalizing operations.

  • Lock Subscribers: Although the extend window is applied to the entire consistency set, subscribers consume the changes separately. This operation performs a subscriber(s) specific "snapshot" of the changes in the consistency window. This snapshot includes all the changes within the consistency window that have not been consumed yet by the subscriber(s). This operation is implemented using a package step with the Journalizing Model Type. It should always be performed before the first interface using changes captured for the subscriber(s).

Designing Interfaces

The changed data in consistent set journalizing are also processed using interfaces sequenced into packages.

Designing interfaces when using consistent set journalizing is similar to simple journalizing, except for the following differences:

  • The changes taken into account by the interface (that is filtered with JRN_FLAG, JRN_DATE and JRN_SUBSCRIBER) are not automatically purged at the end of the interface. They can be reused by subsequent interfaces. The unlock subscriber and purge journal operations described below are required to commit consumption of these changes, and remove useless entries from the journal respectively.

  • In consistent mode, the JRN_DATE column should not be used in the journalizing filter. Using this timestamp to filter the changes consumed does not entirely ensure consistency in these changes.

Operations after Using the Changed Data

After using the changed data, the following operations should be performed:

  • Unlock Subscribers: This operation commits the use of the changes that were locked during the Lock Subscribers operations for the subscribers. It should be processed only after all the changes for the subscribers have been processed. This operation is implemented using a package step with the Journalizing Model Type. It should always be performed after the last interface using changes captured for the subscribers. If the changes need to be processed again (for example, in case of an error), this operation should not be performed.

  • Purge Journal: After all subscribers have consumed the changes they have subscribed to, entries still remain in the journalizing tables and should be deleted. This is performed by the Purge Journal operation. This operation is implemented using a package step with the Journalizing Model Type. This operation can be scheduled separately from the other journalizing operations.


Note:

It is possible to perform an Extend Window or Purge Journal on a datastore. These operations process changes for tables that are in the same consistency set at different frequencies. These options should be used carefully, as consistency for the changes may no longer be maintained at the consistency set level.

Automate Consistent Set CDC Operations

To automate the consistent set CDC usage, you can use a package performing these operations.

  1. Create a new package.

  2. Drag and drop from the Models tree the journalized model into the package Diagram tab. A new package step appears.

  3. Double-Click the step icon in the package diagram. The properties inspector for this step opens.

  4. In the Type list, select Journalizing Model/Datastore.

  5. Check the consistent set operations you want to perform.

  6. If you checked the Lock Subscriber or Unlock Subscriber operations, enter the first subscriber in the subscriber field, and click the Add button to add it to the Subscribers list. Repeat this operation for all the subscribers you want to lock or unlock.

  7. From the File menu, select Save All.



7.3.4 Journalizing Tools

Oracle Data Integrator provides a set of tools that can be used in journalizing to refresh information on the captured changes or trigger other processes:

  • OdiWaitForData waits for a number of rows in a table or a set of tables.

  • OdiWaitForLogData waits for a certain number of modifications to occur on a journalized table or a list of journalized tables. This tool calls OdiRefreshJournalCount to perform the count of new changes captured.

  • OdiWaitForTable waits for a table to be created and populated with a pre-determined number of rows.

  • OdiRetrieveJournalData retrieves the journalized events for a given table list or CDC set for a specified journalizing subscriber. Calling this tool is required if using Database-Specific Processes to load journalizing tables. This tool needs to be used with specific Knowledge Modules. See the Knowledge Module description for more information.

  • OdiRefreshJournalCount refreshes the number of rows to consume for a given table list or CDC set for a specified journalizing subscriber.

See Appendix A, "Oracle Data Integrator Tools Reference" for more information on these functions.

7.3.5 Package Templates for Using Journalizing

A number of templates may be used when designing packages to use journalized data. Below are some typical templates. See Chapter 10, "Working with Packages" for more information on package creation.

Template 1: One Simple Package (Consistent Set)

  • Step 1: Extend Window + Lock Subscribers

  • Step 2 to n-1: Interfaces using the journalized data

  • Step n: Unlock Subscribers + Purge Journal

This package is scheduled to process all changes every minute. This template is relevant if changes are made regularly to the journalized tables.

Template 2: One Simple Package (Simple Journalizing)

Step 1 to n: Interfaces using the journalized data

This package is scheduled to process all changes every minute. This template is relevant if changes are made regularly to the journalized tables.

Template 3: Using OdiWaitForLogData (Consistent Set or Simple)

  • Step 1: OdiWaitForLogData. If no new log data is detected after a specified interval, end the package.

  • Step 2: Execute a scenario equivalent to the template 1 or 2, using OdiStartScen

This package is scheduled regularly. Changed data will only be processed if new changes have been detected. This avoids useless processing if changes occur sporadically to the journalized tables (i.e. to avoid running interfaces that would process no data).

Template 4: Separate Processes (Consistent Set)

This template dissociates the consistency window, the purge, and the changes consumption (for two different subscribers) in different packages.

Package 1: Extend Window

  • Step 1: OdiWaitForLogData. If no new log data is detected after a specified interval, end the package.

  • Step 2: Extend Window.

This package is scheduled every minute. Extend Window may be resource consuming. It is better to have this operation triggered only when new data appears.

Package 2: Purge Journal (at the end of week)

Step 1: Purge Journal

This package is scheduled once every Friday. The journals then keep track of the changes for the entire week.

Package 3: Process the Changes for Subscriber A

  • Step 1: Lock Subscriber A

  • Step 2 to n-1: Interfaces using the journalized data for subscriber A

  • Step n: Unlock Subscriber A

This package is scheduled every minute. Such a package is used for instance to generate events in a MOM.

Package 4: Process the Changes for Subscriber B

  • Step 1: Lock Subscriber B

  • Step 2 to n-1: Interfaces using the journalized data for subscriber B

  • Step n: Unlock Subscriber B

This package is scheduled every day. Such a package is used for instance to load a data warehouse during the night with the changed data.


13 Working with Scenarios

This chapter describes how to work with scenarios. A scenario is designed to put a source component (interface, package, procedure, variable) into production. A scenario results from the generation of code (SQL, shell, and so forth) for this component.

This chapter includes the following sections:

13.1 Introduction to Scenarios

When a component is finished and tested, you can generate the scenario corresponding to its actual state. This operation takes place in Designer Navigator.

The scenario code (the language generated) is frozen, and all subsequent modifications of the components which contributed to creating it will not change it in any way.

It is possible to generate scenarios for packages, procedures, interfaces or variables. Scenarios generated for procedures, interfaces or variables are single-step scenarios that execute the procedure or interface, or refresh the variable.

Scenario variables are variables used in the scenario that should be set when starting the scenario to parameterize its behavior.

Once generated, the scenario is stored inside the work repository. The scenario can be exported then imported to another repository (remote or not) and used in different contexts. A scenario can only be created from a development work repository, but can be imported into both development and execution work repositories.

Scenarios appear in a development environment under the source component in the Projects tree of Designer Navigator, and appear - for development and production environments - in the Scenarios tree of Operator Navigator.

Scenarios can also be versioned. See Chapter 19, "Working with Version Management" for more information.

Scenarios can be launched from a command line, from the Oracle Data Integrator Studio and can be scheduled using the built-in scheduler of the run-time agent or an external scheduler. Executing and scheduling scenarios is covered in Chapter 21, "Running Integration Processes".

13.2 Generating a Scenario

Generating a scenario for an object compiles the code for this object for deployment and execution in a production environment.

To generate a scenario:

  1. In Designer Navigator double-click the Package, Interface, Procedure or Variable under the project for which you want to generate the scenario. The corresponding Object Editor opens.

  2. On the Scenarios tab, click Generate Scenario. The New Scenario dialog appears.

  3. Enter the Name and the Version of the scenario. As this name can be used in an operating system command, the name is automatically uppercased and special characters are replaced by underscores.

    Note that the Name and Version fields of the Scenario are preset with the following values:

    • Name: The same name as the latest scenario generated for the component

    • Version: The version number is automatically incremented (if the latest version is an integer) or set to the current date (if the latest version is not an integer)

    If no scenario has been created yet for the component, a first version of the scenario is automatically created.

    New scenarios are named after the component according to the Scenario Naming Convention user parameter. See Appendix B, "User Parameters" for more information.

  4. Click OK.

  5. If you use variables in the scenario, you can define in the Scenario Variables dialog the variables that will be considered as parameters for the scenario.

    • Select Use All if you want all variables to be parameters

    • Select Use Selected to use the selected variables to be parameters

    • Select None to unselect all variables

  6. Click OK.

The scenario appears on the Scenarios tab and under the Scenarios node of the source object under the project.

13.3 Regenerating a Scenario

An existing scenario can be regenerated with the same name and version number. This lets you replace the existing scenario by a scenario generated from the source object contents. Schedules attached to this scenario are preserved.

To regenerate a scenario:

  1. Select the scenario in the Projects accordion.

  2. Right-click and select Regenerate...

  3. Click OK.


Caution:

Regenerating a scenario cannot be undone. For important scenarios, it is better to generate a scenario with a new version number.

13.4 Generating a Group of Scenarios

When a set of packages, interfaces, procedures and variables grouped under a project or folder is finished and tested, you can generate the scenarios. This operation takes place in Designer Navigator.

To generate a group of scenarios:

  1. Select the Project or Folder containing the group of objects.

  2. Right-click and select Generate All Scenarios...

  3. In the Scenario Generation dialog, select the scenario Generation Mode:

    • Replace: Overwrites for each object the last scenario version with a new one with the same ID, name and version. Sessions, scenario reports and schedules are deleted. If no scenario exists for an object, a scenario with version number 001 is created.

    • Re-generate: Overwrites for each object the last scenario version with a new one with the same ID, name and version. It preserves the schedule, sessions and scenario reports. If no scenario exists for an object, no scenario is created using this mode.

    • Creation: Creates for each object a new scenario with the same name as the last scenario version and with an automatically incremented version number. If no scenario exists for an object, a scenario named after the object with version number 001 is created.


    Note:

    If no scenario has been created yet for the component, a first version of the scenario is automatically created.

    New scenarios are named after the component according to the Scenario Naming Convention user parameter. See Appendix B, "User Parameters" for more information

    If the version of the last scenario is an integer, it will be automatically incremented by 1 when selecting the Creation generation mode. If not, the version will be automatically set to the current date.


  4. In the Objects to Generate section, select the types of objects for which you want to generate scenarios.

  5. In the Marker Filter section, you can filter the components to generate according to a marker from a marker group.

  6. Click OK.

  7. If you use variables in the scenario, you can define in the Scenario Variables dialog the variables that will be considered as parameters for the scenario. Select Use All if you want all variables to be parameters, or Use Selected and check the parameter variables.

13.5 Exporting Scenarios

The export (and import) procedure allows you to transfer Oracle Data Integrator objects from one repository to another.

It is possible to export a single scenario or groups of scenarios.

Exporting one single scenario is covered in Section 20.2.4, "Exporting one ODI Object".

To export a group of scenarios:

  1. Select the Project or Folder containing the group of scenarios.

  2. Right-click and select Export All Scenarios... The Export all scenarios dialog opens.

  3. In the Export all scenarios dialog, specify the export parameters as follows:

    • Export Directory: Directory in which the export file will be created. Note that if the Export Directory is not specified, the export file is created in the Default Export Directory.

    • Child components export: If this option is checked, the objects linked to the object to be exported will also be exported. These objects are those visible under the exported object in the tree. It is recommended to leave this option checked. See Exporting an Object with its Child Components for more details.

    • Replace existing files without warning: If this option is checked, existing files will be replaced by the ones from the export.

  4. Select the type of objects whose scenarios you want to export.

  5. Set the advanced options. This set of options allows you to parameterize the XML output file format. It is recommended that you leave the default values.

    • XML Version: XML version specified in the export file (the version parameter in the XML file header: <?xml version="1.0" encoding="ISO-8859-1"?>).

    • Character Set: Encoding specified in the export file (the encoding parameter in the XML file header: <?xml version="1.0" encoding="ISO-8859-1"?>).

    • Java Character Set: Java character set used to generate the file.

  6. Click OK.

The XML-formatted export files are created at the specified location.

13.6 Importing Scenarios in Production

A scenario generated from Designer can be exported and then imported into a development or execution repository. This operation is used to deploy scenarios in a different repository, possibly in a different environment or site.

Importing a scenario in a development repository is performed via Designer or Operator Navigator. With an execution repository, only Operator Navigator is available for this purpose.

There are two ways to import a scenario:

  • Import uses the standard object import method. During this import process, it is possible to choose to import the schedules attached to the exported scenario.

  • Import Replace replaces an existing scenario with the content of an export file, preserving references from other objects to this scenario. Sessions and scenario reports from the original scenario are deleted, and its schedules are replaced with the schedules from the export file.

Scenarios can also be deployed and promoted to production using versions and solutions. See Chapter 19, "Working with Version Management" for more information.

13.6.1 Import Scenarios

To import one or more scenarios into Oracle Data Integrator:

  1. In Operator Navigator, select the Scenarios panel.

  2. Right-click and select Import > Import Scenario.

  3. Select the Import Type. Refer to Chapter 20, "Exporting/Importing" for more information on the import types.

  4. Specify the File Import Directory.

  5. Check the Import schedules option, if you want to import the schedules exported with the scenarios as well.

  6. Select one or more scenarios to import from the Select the file(s) to import list.

  7. Click OK.

The scenarios are imported into the work repository. They appear in the Scenarios tree of the Operator Navigator. If this work repository is a development repository, these scenarios are also attached to their source Package, Interface, Procedure or Variable.

13.6.2 Replace a Scenario

Use the import replace mode if you want to replace a scenario with an exported one.

To import a scenario in replace mode:

  1. In Designer or Operator Navigator, select the scenario you wish to replace.

  2. Right-click the scenario, and select Import Replace...

  3. In the Replace Object dialog, specify the scenario export file.

  4. Click OK.

13.6.3 Working with a Scenario from a Different Repository

A scenario may have to be operated from a different work repository than the one where it was generated.

Examples

Here are two examples of organizations that give rise to this type of process:

  • A company has a large number of agencies equipped with the same software applications. In its IT headquarters, it develops packages and scenarios to centralize data to a central data center. These scenarios are designed to be executed identically in each agency.

  • A company has three distinct IT environments for developing, qualifying and operating its software applications. The company's processes demand total separation of the environments, which cannot share the Repository.

Prerequisites

The prerequisite for this organization is to have a work repository installed on each environment (site, agency or environment). The topology of the master repository attached to this work repository must be compatible in terms of its logical architecture (the same logical schema names). The connection characteristics described in the physical architecture can differ.

Note that in cases where some procedures or interfaces explicitly specify a context code, the target topology must have the same context codes. The topology, that is, the physical and logical architectures, can also be exported from a development master repository, then imported into the target repositories. Use the Topology module to carry out this operation. In this case, the physical topology (the servers' addresses) should be personalized before operating the scenarios. Note also that a topology import simply references the new data servers without modifying those already present in the target repository.

To operate a scenario from a different work repository:

  1. Export the scenario from its original repository (right-click, export)

  2. Forward the scenario export file to the target environment

  3. Open Designer Navigator in the target environment (connection to the target repository)

  4. Import the scenario from the export file

13.7 Encrypting and Decrypting a Scenario

Encrypting a scenario allows you to protect valuable code. An encrypted scenario can be executed but cannot be read or modified if it is not decrypted. The commands generated in the log by an encrypted scenario are also unreadable.

Oracle Data Integrator uses a DES Encryption algorithm based on a personal encryption key. This key can be saved in a file and can be reused to perform encryption or decryption operations.


WARNING:

There is no way to decrypt an encrypted scenario or procedure without the encryption key. It is therefore strongly advised to keep this key in a safe location.


To encrypt a scenario:

  1. In Designer or Operator Navigator, select the scenario you want to encrypt.

  2. Right-click and select Encrypt.

  3. In the Encryption Options dialog, you can either:

    • Encrypt with a personal key that already exists by giving the location of the personal key file or by typing in the value of the personal key.

    • Get a new encryption key to have a new key generated.

  4. Click OK to encrypt the scenario. If you have chosen to generate a new key, a dialog will appear with the new key. Click Save to save the key in a file.


Note:

If you type in a personal key with too few characters, an invalid key size error appears. In this case, please type in a longer personal key. A personal key of 10 or more characters is required.

To decrypt a scenario:

  1. Right-click the scenario you want to decrypt.

  2. Select Decrypt.

  3. In the Scenario Decryption dialog, either

    • Select an existing encryption key file

    • or type in (or paste) the string corresponding to your personal key.

A message appears when decryption is finished.


16 Working with Oracle Data Quality Products

This chapter describes how to work with Data Quality Products in Oracle Data Integrator.

This chapter includes the following sections:

16.1 Introduction to Oracle Data Quality Products

Oracle Data Profiling and Oracle Data Quality for Data Integrator (also referred to as Oracle Data Quality Products) extend the inline Data Quality features of Oracle Data Integrator to provide more advanced data governance capabilities.

A complete Data Quality system includes data profiling, integrity and quality:

  • Profiling makes data investigation and quality assessment possible. It allows business users to get a clear picture of their data quality challenges, to monitor and track the quality of their data over time. Profiling is handled by Oracle Data Profiling. It allows business users to assess the quality of their data through metrics, to discover or infer rules based on this data, and finally to monitor over time the evolution of the data quality.

  • Integrity control is essential in ensuring the overall consistency of the data in your information system's applications. Application data is not always valid for the constraints and declarative rules imposed by the information system. You may, for instance, find orders with no customer, or order lines with no product, and so forth. Oracle Data Integrator provides a built-in working environment to detect these constraint violations and store them for recycling or reporting purposes. Static and Flow checks in Oracle Data Integrator are integrity checks.

  • Quality includes integrity and extends to more complex quality processing. A rule-based engine applies data quality standards as part of an integration process to cleanse, standardize, enrich, match and de-duplicate any type of data, including names and addresses. Oracle Data Quality for Data Integrator places data quality as well as name and address cleansing at the heart of the enterprise integration strategy.

16.2 The Data Quality Process

The data quality process described in this section uses Oracle Data Quality products to profile and cleanse data extracted from systems using Oracle Data Integrator. The cleansed data is also re-integrated into the original system using Oracle Data Integrator.

The Quality Process has the following steps:

  1. Create a Quality Input File from Oracle Data Integrator, containing the data to cleanse.

  2. Create an Entity in Oracle Data Quality, based on this file.

  3. Create a Profiling Project to determine quality issues.

  4. Create an Oracle Data Quality Project to cleanse this Entity.

  5. Export the Data Quality Project for run-time.

  6. Reverse-engineer the Entities using the RKM Oracle Data Quality.

  7. Use Oracle Data Quality Input and Output Files in Interfaces.

  8. Run this Quality Project from Oracle Data Integrator using the OdiDataQuality tool.

  9. Sequence the Process in a Package.

16.2.1 Create a Quality Input File

Oracle Data Quality uses a flat file containing the data to cleanse as the source for the Quality project. This Quality input file can be created from Oracle Data Integrator and loaded from any source datastore using interfaces. This file should be a FILE datastore with the following parameters defined on the Files tab:

Parameter                     Value
File Format                   Delimited
Heading (Number of Lines)     1
Record Separator              MS-DOS
Field Separator               Other
[Field Separator] Other       , (comma sign - Hexadecimal 2C)
Text Delimiter                " (double quotation marks)
Decimal Separator             empty, not specified

For more information on creating a FILE datastore, refer to Chapter 5, "Creating and Reverse-Engineering a Model". For more information on loading flat files, see "Files" in the Oracle Fusion Middleware Connectivity and Knowledge Modules Guide for Oracle Data Integrator.
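
For illustration, a quality input file matching these settings might look like the following. The column names and values here are hypothetical; what matters is the single header line, the comma field separator, the double quotation mark text delimiter, and the MS-DOS (CR/LF) record separator:

CUSTOMER_ID,FIRST_NAME,LAST_NAME,CITY
"1001","John","Smith","New York"
"1002","Jane","Doe","Boston"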

16.2.2 Create an Entity

Importing a data source into Oracle Data Quality for Data Integrator means creating an entity based on a delimited source file.

16.2.2.1 Step 1: Validate Loader Connections

Your administrator must set up at least one Loader Connection when he or she installs Oracle Data Quality for Data Integrator. This Loader Connection is used to access the Oracle Data Quality input file. As the input file is a delimited file, this Loader Connection should be a Delimited Loader Connection. In this step, you validate that this Delimited Loader Connection is correctly set up. Also verify that all the data and schema files you need are copied to the directory defined by the Loader Connection.

If you do not have access to the Metabase Manager, ask your Metabase administrator to verify the Loader Connection for you.

If you are a Metabase User and have access to the Metabase Manager, follow this procedure:

To validate a Loader Connection:

  1. Open the Metabase Manager (Start > All Programs > Oracle > Oracle Data Profiling and Quality > Metabase Manager).

  2. Verify you are in Admin Mode.

  3. Expand the Control Admin node.

  4. Double-click Loader Connections.

  5. On the right, the Loader Connections list view displays each Loader Connection, showing its name, type, data file, and parameters. Review the information to verify that the Loader Connection created by your administrator is a Delimited Loader Connection and that the data and schema directories are pointing to the correct location.


Note:

If you are a Metabase User with full Metabase privileges, you can create a new Loader Connection.

16.2.2.2 Step 2: Create Entity and Import Data

Use the Create Entity wizard to create an Entity. The Wizard takes you through each step, helps you to select data to load, and provides an interface for specifying connection and schema settings. It also gives you options for customizing how the data appears in an Entity.

To import a delimited source file into Oracle Data Quality for Data Integrator:

  1. Copy the flat file that you want to import into Oracle Data Quality for Data Integrator into the data directory that you specified when you defined the Loader Connection.

  2. Click on the Windows Start menu and select All Programs > Oracle > Oracle Data Profiling and Quality > Oracle Data Profiling and Quality.

  3. Log in to the user interface with your metabase user. The Oracle Data Profiling and Quality user interface opens.

  4. From the Main menu, select Analysis > Create Entity…

  5. The Create Entity wizard opens in the upper right pane.

  6. On the Connection Page of the Create Entity wizard, select the Loader Connection given to you by the administrator, which you validated in Step 1.

  7. Leave the default settings for the filter and the connection and click Next.

  8. Oracle Data Quality connects to the data source using the Loader Connection you selected in step 6. If the connection fails, contact your Metabase Administrator.

  9. In the Entity Selection dialog, select the data source file name you want to import in the list and click Next.

  10. Select the schema settings for the selected data file corresponding to the parameters of the file described in Section 16.2.1, "Create a Quality Input File":

    • Delimiter: , (comma)

    • Quote: " (double quotation marks)

    • Attribute information: Names on first line

    • Select Records are CR/LF terminated.

    • Character encoding: ascii

    For more information on configuring Entities for delimited files, see the Online Help for Oracle Data Profiling and Oracle Data Quality.


    Note:

    If the file is generated using Oracle Data Integrator, these file format parameters should correspond to the file format specified in the Files tab of the datastore definition.

  11. After you select the schema settings, click Preview. The Preview mode shows how the data will appear in the Entity, based on your selected schema settings. The data displays below in a list view. Use the Preview mode to customize how the data will appear in the new Entity.

  12. When you are ready to continue, click Close.

  13. Click Next. The Load Parameters dialog opens. Specify the parameters as follows:

    • Select All Rows.

    • Leave the default Job name.

  14. Click Next to continue.

  15. In Confirm Settings, review the list of settings and click Finish to schedule the Entity creation job. The Schedule Job window opens.

  16. Click Run Now.

16.2.2.3 Step 3: Verify Entity

During the data import process, Oracle Data Quality for Data Integrator translates your data files into three basic components (Metabase objects): Entities, Attributes, and Rows.

Perform the following list of verification tasks to ensure that the data you expected has been successfully imported to a Metabase and is correctly represented in the Metabase Explorer.

  1. Make sure that for every data file imported you have one corresponding Entity.

  2. Make sure that the column names do not contain any special characters with the exception of underscore (_) or minus sign (-) characters. Minus signs and underscores will be translated into spaces during the data load process.

  3. Make sure that for every field imported you have one corresponding Attribute.

  4. Make sure that you have one Entity Row for every data row imported.

16.2.3 Create a Profiling Project

You can now run a Data Profiling Project with Oracle Data Profiling to find quality problems. Profiling discovers and analyzes the quality of your enterprise data. It analyzes data at the most detailed levels to identify data anomalies, broken filters and data rules, misaligned data relationships, and other concerns that allow data to undermine your business objectives.

For more information on Data Profiling see "Working with Oracle Data Profiling" in the Online Help for Oracle Data Profiling and Oracle Data Quality.

16.2.4 Create an Oracle Data Quality Project

You can now create an Oracle Data Quality Project to validate and transform your data, and resolve data issues such as mismatching and redundancy.

Oracle Data Quality for Data Integrator is a powerful tool for repairing and correcting fields, values and records across multiple business contexts and applications, including data with country-specific origins. Oracle Data Quality for Data Integrator enables data processing for standardization, cleansing and enrichment, tuning capabilities for customization, and the ability to view your results in real-time.

A Quality Project cleanses input files and loads cleansed data into output files. At the end of your Oracle Data Quality project this input file may be split into several output files, depending on the data Quality project.

Important Note: A Data Quality project contains many temporary entities, some of which are not useful in the integration process. To limit the Entities reverse-engineered for use by Oracle Data Integrator, a filter based on entity names can be used. To use this filter efficiently, it is recommended that you rename the entities that you want to use in Oracle Data Integrator in a consistent way within your quality project. For example, rename the input entities ODI_IN_XXX and the output (and no-match) files ODI_OUT_XXX, where XXX is the name of the entity.

For more information on Data Quality projects see "Working with Oracle Data Quality" in the Online Help for Oracle Data Profiling and Oracle Data Quality.

16.2.5 Export the Data Quality Project

Oracle Data Integrator is able to run projects exported from Oracle Data Quality. Once the Data Quality project is complete, you need to export it for Oracle Data Integrator. The exported project contains the data files, Data Dictionary Language (DDL) files, settings files, output and statistics files, user-defined tables and scripts for each process module you included in the project. An exported project can be run on UNIX or Windows platforms without the user interface, and only requires the Oracle Data Quality Server.

To create a batch script:

  1. In the Explorer or Project Workflow, right-click the Oracle Data Quality project and select Export... > ODQ Batch Project > No data.

  2. In Browse for Folder, select or make a folder where you want the project to be exported.

  3. Click OK. A message window appears indicating that the files are being copied. This export process creates a folder named after the metabase (<metabase_name>) at the location that you specified. This folder contains a projectN sub-folder (where N is the project identifier in Oracle Data Quality). This project folder contains the following folders among others:

    • data: This folder is used to contain input and output data as well as temporary data files. These files have a .DAT extension. As you specified No data for the export, this folder is empty.

    • ddl: This folder contains the entities metadata files (.DDX and .XML). These files describe the data files' fields. They are prefixed with eNN_, where NN is the Entity number. Each entity is described in two metadata files: eNN_<name of the entity>.ddx describes the entity with possibly duplicated columns (suitable for fixed files), and eNN_<name of the entity>_csv.ddx describes the entity with non-duplicated columns (suitable for fixed and delimited files). It is recommended to use these files for the reverse-engineering process.

    • scripts: This folder contains the batch script runprojectN. This script runs the quality process and is the one that will be triggered by Oracle Data Integrator.

    • settings: This folder contains settings files (.ddt, .sto, .stt, .stx) and the configuration file config_batch.tbl.

  4. After the message window has disappeared, examine the folder you have specified and check that all folders and files are correctly created.

  5. Move the exported project to a folder on the run-time machine. This machine must have the Oracle Data Quality Server installed, as it will run the quality project.

  6. With a text editor, open the batch script (runprojectN) and the configuration file (config_batch.tbl) in the /batch/settings sub-folder of your projectN folder.

  7. Perform the following changes to configure the run-time directory in the project.

    • In config_batch.tbl, specify the location (absolute path) of the directory containing the projectN folder for the DATABASE parameter.

    • In runprojectN, specify the location (absolute path) of the projectN directory for the TS_PROJECT parameter.

    For example, if you have the config_batch.tbl and runproject2.* files located in C:\oracle\oracledq\metabase_data\metabase\oracledq\project2\batch\, you should specify

    • in \settings\config_batch.tbl: DATABASE = C:\oracle\oracledq\metabase_data\metabase\oracledq\project2\batch

    • in \scripts\runproject2.*: set TS_PROJECT=C:\oracle\oracledq\metabase_data\metabase\oracledq\project2\batch

  8. Save and close the config_batch.tbl file.

  9. In runprojectN, uncomment the very last line of the file (remove the :: characters at the beginning of the last line).

  10. Save and close the runprojectN file.

  11. Oracle Data Integrator uses CSV formatted files (typically, comma-delimited with one header line) to provide the data quality project with input data, and expects output data to be in the same format.

    In the /settings directory, open the settings file corresponding to the first process of your project with a text editor. This file is typically named eN_transfmr_p1.stx (where N is the internal ID of the entity corresponding to the quality input file) if the first process is a transformer.

  12. Change the following input parameters in the settings file:

    • In DATA_FILE_NAME, specify the name and location (absolute path) of your quality input file in run-time.

    • In FILE_DELIMITER, specify the delimiter used in the quality input file.

    • In START_RECORD, specify the line number where data starts. For example, if there is a one-line header, the value should be 2.

    For example, if you have the customer_master.csv quality input file (comma-separated with one header line) located in C:/oracle/oracledq/metabase_data/metabase/oracledq/Data/, you should edit the following section:

    <CATEGORY><INPUT><PARAMETER><INPUT_SETTINGS>
      <ARGUMENTS>
        <ENTRY>
          <ENTRY_ID>1</ENTRY_ID>
          <FILE_QUALIFIER>Customer_Master(1)</FILE_QUALIFIER>
          <DATA_FILE_NAME>$(INPUT)/e1_customer_master.dat</DATA_FILE_NAME>
          <DDL_FILE_NAME>$(DDL)/e1_customer_master.ddx</DDL_FILE_NAME>
          <FILE_DELIMITER/>
          <USE_QUOTES_AS_QUALIFIER/>
          <START_RECORD/>

    as shown below:

    <CATEGORY><INPUT><PARAMETER><INPUT_SETTINGS>
      <ARGUMENTS>
        <ENTRY>
          <ENTRY_ID>1</ENTRY_ID>
          <FILE_QUALIFIER>Customer_Master(1)</FILE_QUALIFIER>
          <DATA_FILE_NAME>C:\oracle\oracledq\metabase_data\metabase\oracledq\Data\customer_master.csv</DATA_FILE_NAME>
          <DDL_FILE_NAME>$(DDL)/e1_customer_master.ddx</DDL_FILE_NAME>
          <FILE_DELIMITER>,</FILE_DELIMITER>
          <USE_QUOTES_AS_QUALIFIER/>
          <START_RECORD>2</START_RECORD>
  13. Save and close the settings file.

  14. Also in the /settings directory, open the file that corresponds to the settings of the process generating the output (cleansed) data. Typically, for a cleansing project which finishes with a Data Reconstructor process, it is named eNN_datarec_pXX.stx. Change the following value in the settings file to give the full path of the generated output file.

    <CATEGORY><OUTPUT><PARAMETER>
      <OUTPUT_SETTINGS>
        <ARGUMENTS>
          <FILE_QUALIFIER>OUTPUT</FILE_QUALIFIER>
          <DATA_FILE_NAME>C:\oracle\oracledq\metabase_data\metabase\oracledq\Data\customer_master_cleansed.csv</DATA_FILE_NAME>
          <DDL_FILE_NAME>$(DDL)/e36_us_datarec_p11.ddx</DDL_FILE_NAME>
  15. Save and close the settings file.

  16. If you have several data quality processes that generate useful output files (for example, one Data Reconstructor per country), repeat the two previous steps for each of these processes.

16.2.6 Reverse-engineer the Entities

In order to provide the Quality process with input data and use its output data in Oracle Data Integrator's integration processes, it is necessary to reverse-engineer these Entities. This operation is performed using a customized reverse-engineering method based on the Oracle Data Quality RKM. The RKM reads metadata from the .ddx files located in the /ddl folder of your data quality project.

To reverse-engineer the Entities of a data Quality project:

  1. Import the RKM Oracle Data Quality into your Oracle Data Integrator project.

  2. Insert a physical schema for the File technology in Topology Manager, specifying the absolute path of your data folder for both the Directory (Schema) and the Directory (Work Schema). For example: C:\oracle\oracledq\metabase_data\metabase\oracledq\projectN\data

    This directory must be accessible to the agent that will be used to run the transformations. Oracle Data Integrator will look in the schema for the source and target data structures for the interfaces. The RKM will access the output data files and reverse-engineer them.

  3. Create a File model and reverse the /ddl folder.

    1. In Designer Navigator expand the Models panel.

    2. Right-click then select New Model.

    3. Enter the following fields in the Definition tab:

      Name: Name of the model used in the user interface.

      Technology: File

      Logical Schema: Select the Logical Schema on which your model will be based.

    4. In the Reverse tab, select:

      Parameter                                Value/Action
      Reverse                                  Customized
      Context                                  Reverse-engineering Context
      Type of objects to reverse-engineer      Table
      KM                                       Select the RKM Oracle Data Quality

    5. Set the RKM options as shown in Table 16-1:

    Table 16-1 KM Options for RKM Oracle Data Quality

    DDX_FILE_NAME (default: *.ddx)

    Mask for DDX Files to process. If you have used a naming convention in the Quality project for the Entities that you want to use, enter a mask that will return only these Entities. For example, specify the ODI*_csv.ddx mask if you have used the ODI_IN_XX and ODI_OUT_XX naming convention for your input and output entities.

    USE_FRIENDLY_NAMES (default: No)

    Set this option to Yes if you want the reverse-engineering process to generate user-friendly names for datastore columns based on the field name specified in the DDX file.

    USE_LOG (default: Yes)

    Set to Yes if you want the reverse-engineering process activity to be logged in a log file.

    LOG_FILE_NAME (default: /temp/reverse.log)

    Name of the log file.


  4. Click Apply. The model is created, but contains no datastores yet.

  5. Click Reverse. Now, the model contains datastores that you can see in the Models view.

16.2.7 Use Oracle Data Quality Input and Output Files in Interfaces

You can now create interfaces in Oracle Data Integrator that source or target the Data Quality input and output files.

For example, you can:

  • Create interfaces to load the input file using datastores from various sources.

  • Create interfaces to re-integrate the output data back into the sources after cleansing.

16.2.8 Run this Quality Project from Oracle Data Integrator

The OdiDataQuality tool executes the batch file to run the Oracle Data Quality project. This tool takes as a parameter the path to the runprojectN script file. It can run either in synchronous mode (the tool waits for the quality process to complete) or in asynchronous mode.
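
As an illustration, an OdiDataQuality step in a package might resemble the command below. The parameter names shown here are assumptions for the sketch, not the tool's confirmed signature; refer to Section A.6.3, "OdiDataQuality" for the exact parameters:

OdiDataQuality "-PROJECT=C:\oracle\oracledq\metabase_data\metabase\oracledq\project2\scripts\runproject2.cmd" "-SYNCHRONOUS=YES"

Here the first parameter points to the exported runprojectN script, and the second requests synchronous mode so that the package step completes only when the quality process has finished.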

For more information about the OdiDataQuality tool and its parameters, see Section A.6.3, "OdiDataQuality".

16.2.9 Sequence the Process in a Package

Create a package in Oracle Data Integrator sequencing the following steps:

  1. One or more Interfaces creating the Quality input file, containing the data to cleanse.

  2. OdiDataQuality tool step launching the Oracle Data Quality process.

  3. One or more Interfaces loading the data from the Oracle Data Quality output files into the target datastores.


1 Introduction to Oracle Data Integrator

This chapter contains the following sections:

1.1 Introduction to Data Integration with Oracle Data Integrator

Data Integration ensures that information is timely, accurate, and consistent across complex systems. This section provides an introduction to data integration and describes how Oracle Data Integrator provides support for Data Integration.

1.1.1 Data Integration

Integrating data and applications throughout the enterprise, and presenting them in a unified view is a complex proposition. Not only are there broad disparities in technologies, data structures, and application functionality, but there are also fundamental differences in integration architectures. Some integration needs are Data Oriented, especially those involving large data volumes. Other integration projects lend themselves to an Event Driven Architecture (EDA) or a Service Oriented Architecture (SOA), for asynchronous or synchronous integration.

Data Integration ensures that information is timely, accurate, and consistent across complex systems. Although it is still frequently referred to as Extract-Transform-Load (ETL) - data integration was initially considered the architecture used for loading Enterprise Data Warehouse systems - data integration now includes data movement, data synchronization, data quality, data management, and data services.

1.1.2 Oracle Data Integrator

Oracle Data Integrator provides a fully unified solution for building, deploying, and managing complex data warehouses or as part of data-centric architectures in a SOA or business intelligence environment. In addition, it combines all the elements of data integration—data movement, data synchronization, data quality, data management, and data services—to ensure that information is timely, accurate, and consistent across complex systems.

Oracle Data Integrator (ODI) features an active integration platform that includes all styles of data integration: data-based, event-based and service-based. ODI unifies silos of integration by transforming large volumes of data efficiently, processing events in real time through its advanced Changed Data Capture (CDC) capability, and providing data services to the Oracle SOA Suite. It also provides robust data integrity control features, assuring the consistency and correctness of data. With powerful core differentiators - heterogeneous E-LT, Declarative Design and Knowledge Modules - Oracle Data Integrator meets the performance, flexibility, productivity, modularity and hot-pluggability requirements of an integration platform.

1.1.3 E-LT

Traditional ETL tools operate by first Extracting the data from various sources, Transforming the data in a proprietary, middle-tier ETL engine that is used as the staging area, and then Loading the transformed data into the target data warehouse or integration server. Hence the term ETL represents both the names and the order of the operations performed, as shown in Figure 1-1.

Figure 1-1 Traditional ETL versus ODI E-LT


The data transformation step of the ETL process is by far the most compute-intensive, and is performed entirely by the proprietary ETL engine on a dedicated server. The ETL engine performs data transformations (and sometimes data quality checks) on a row-by-row basis, and hence, can easily become the bottleneck in the overall process. In addition, the data must be moved over the network twice – once between the sources and the ETL server, and again between the ETL server and the target data warehouse. Moreover, if one wants to ensure referential integrity by comparing data flow references against values from the target data warehouse, the referenced data must be downloaded from the target to the engine, thus further increasing network traffic and download time, and leading to additional performance issues.

In response to the issues raised by ETL architectures, a new architecture has emerged, which in many ways incorporates the best aspects of manual coding and automated code-generation approaches. Known as E-LT, this new approach changes where and how data transformation takes place, and leverages existing developer skills, RDBMS engines and server hardware to the greatest extent possible. In essence, E-LT moves the data transformation step to the target RDBMS, changing the order of operations to: Extract the data from the source tables, Load the tables into the destination server, and then Transform the data on the target RDBMS using native SQL operators. Note, with E-LT there is no need for a middle-tier engine or server as shown in Figure 1-1.

Oracle Data Integrator supports both ETL- and E-LT-Style data integration. See Section 11.5, "Designing Integration Interfaces: E-LT- and ETL-Style Interfaces" for more information.

1.2 Oracle Data Integrator Concepts

This section provides an introduction to the main concepts of Oracle Data Integrator.

1.2.1 Introduction to Declarative Design

To design an integration process with conventional ETL systems, a developer needs to design each step of the process. Consider, for example, a common case in which sales figures must be summed over time for different customer age groups. The sales data comes from a sales management database, and age groups are described in an age distribution file. In order to combine these sources and then insert and update the appropriate records in the customer statistics system, you must design each step, which includes:

  1. Load the customer sales data in the engine

  2. Load the age distribution file in the engine

  3. Perform a lookup between the customer sales data and the age distribution data

  4. Aggregate the customer sales grouped by age distribution

  5. Load the target sales statistics data into the engine

  6. Determine what needs to be inserted or updated by comparing aggregated information with the data from the statistics system

  7. Insert new records into the target

  8. Update existing records into the target

This method requires specialized skills, depending on the steps that need to be designed. It also requires significant development effort, because even repetitive successions of tasks, such as managing inserts/updates in a target, need to be developed into each task. Finally, with this method, maintenance requires significant effort. Changing the integration process requires a clear understanding of what the process does as well as the knowledge of how it is done. With the conventional ETL method of design, the logical and technical aspects of the integration are intertwined.

Declarative Design is a design method that focuses on “What” to do (the Declarative Rules) rather than “How” to do it (the Process). In our example, “What” the process does is:

  • Relate the customer age from the sales application to the age groups from the statistical file

  • Aggregate customer sales by age groups to load sales statistics

“How” this is done, that is the underlying technical aspects or technical strategies for performing this integration task – such as creating temporary data structures or calling loaders – is clearly separated from the declarative rules.
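
To make this separation concrete, the “How” that conventional ETL forces you to hand-code for steps 6 through 8 above can be pictured as set-based SQL such as the following sketch. The table and column names (SALES_STATS, AGG_SALES, AGE_GROUP, TOTAL_SALES) are hypothetical, and this is not code produced by any particular tool:

MERGE INTO SALES_STATS TGT
USING (
  -- result of the lookup and aggregation steps (3 and 4)
  SELECT AGE_GROUP, SUM(AMOUNT) AS TOTAL_SALES
  FROM   AGG_SALES
  GROUP BY AGE_GROUP
) SRC
ON (TGT.AGE_GROUP = SRC.AGE_GROUP)
WHEN MATCHED THEN
  UPDATE SET TGT.TOTAL_SALES = SRC.TOTAL_SALES  -- step 8: update existing records
WHEN NOT MATCHED THEN
  INSERT (AGE_GROUP, TOTAL_SALES)               -- step 7: insert new records
  VALUES (SRC.AGE_GROUP, SRC.TOTAL_SALES);

With Declarative Design, logic of this kind is generated from the declarative rules instead of being written and maintained by hand.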

Declarative Design in Oracle Data Integrator uses the well-known relational paradigm to declare in the form of an Interface the declarative rules for a data integration task, which includes designation of sources, targets, and transformations.

Declarative rules often apply to metadata to transform data and are usually described in natural language by business users. In a typical data integration project (such as a Data Warehouse project), these rules are defined during the specification phase in documents written by business analysts in conjunction with project managers. They can very often be implemented using SQL expressions, provided that the metadata they refer to is known and qualified in a metadata repository.

The four major types of Declarative Rules are mappings, joins, filters and constraints:

  • A mapping is a business rule implemented as an SQL expression. It is a transformation rule that maps source columns (or fields) onto one of the target columns. It can be executed by a relational database server at run-time. This server can be the source server (when possible), a middle tier server or the target server.

  • A join operation links records in several data sets, such as tables or files. Joins are used to link multiple sources. A join is implemented as an SQL expression linking the columns (fields) of two or more data sets. Joins can be defined regardless of the physical location of the source data sets involved. For example, a JMS queue can be joined to an Oracle table. Depending on the technology performing the join, it can be expressed as an inner join, right outer join, left outer join and full outer join.

  • A filter is an expression applied to source data set columns. Only the records matching this filter are processed by the data flow.

  • A constraint is an object that defines the rules enforced on data sets' data. A constraint ensures the validity of the data in a given data set and the integrity of the data of a model. Constraints on the target are used to check the validity of the data before integration in the target.

Table 1-1 gives examples of declarative rules.

Table 1-1 Examples of declarative rules

Declarative Rule / Type / SQL Expression

Sum of all amounts or items sold during October 2005 multiplied by the item price

Mapping

SUM(
 CASE WHEN SALES.YEARMONTH=200510 THEN
  SALES.AMOUNT*product.item_PRICE
 ELSE
  0
 END
)

Products that start with 'CPU' and that belong to the hardware category

Filter

Upper(PRODUCT.PRODUCT_NAME) like 'CPU%'
And PRODUCT.CATEGORY = 'HARDWARE'

Customers with their orders and order lines

Join

CUSTOMER.CUSTOMER_ID = ORDER.CUSTOMER_ID
And ORDER.ORDER_ID = ORDER_LINE.ORDER_ID

Reject duplicate customer names

Unique Key Constraint

Unique key (CUSTOMER_NAME)

Reject orders with a link to a non-existent customer

Reference Constraint

Foreign key on ORDERS(CUSTOMER_ID) references CUSTOMER(CUSTOMER_ID)

1.2.2 Introduction to Knowledge Modules

Knowledge Modules (KM) implement “how” the integration processes occur. Each Knowledge Module type refers to a specific integration task.

A Knowledge Module is a code template for a given integration task. This code is independent of the Declarative Rules that need to be processed. At design-time, a developer creates the Declarative Rules describing integration processes. These Declarative Rules are merged with the Knowledge Module to generate code ready for runtime. At runtime, Oracle Data Integrator sends this code for execution to the source and target systems it leverages in the E-LT architecture for running the process.

Knowledge Modules cover a wide range of technologies and techniques. Knowledge Modules provide additional flexibility by giving users access to the most-appropriate or finely tuned solution for a specific task in a given situation. For example, to transfer data from one DBMS to another, a developer can use any of several methods depending on the situation:

  • The DBMS loaders (Oracle's SQL*Loader, Microsoft SQL Server's BCP, Teradata TPump) can dump data from the source engine to a file then load this file to the target engine

  • The database link features (Oracle Database Links, Microsoft SQL Server's Linked Servers) can transfer data directly between servers

These technical strategies, among others, correspond to Knowledge Modules tuned to exploit the native capabilities of given platforms.

Knowledge Modules are also fully extensible. Their code is open and can be edited through a graphical user interface by technical experts willing to implement new integration methods or best practices (for example, for higher performance or to comply with regulations and corporate standards). Developers who do not have these experts' skills can still use the custom Knowledge Modules in their integration processes.

For more information on Knowledge Modules, refer to the Connectivity and Knowledge Modules Guide for Oracle Data Integrator and the Knowledge Module Developer's Guide for Oracle Data Integrator.

1.2.3 Introduction to Integration Interfaces

An integration interface is an Oracle Data Integrator object that enables the loading of one target datastore with data transformed from one or more source datastores, based on declarative rules implemented as mappings, joins, filters and constraints.

An integration interface also references the Knowledge Modules (code templates) that will be used to generate the integration process.

1.2.3.1 Datastores

A datastore is a data structure that can be used as a source or a target in an integration interface. It can be:

  • a table stored in a relational database

  • an ASCII or EBCDIC file (delimited, or fixed length)

  • a node from an XML file

  • a JMS topic or queue from a Message Oriented Middleware

  • a node from an enterprise directory

  • an API that returns data in the form of an array of records

Regardless of the underlying technology, all data sources appear in Oracle Data Integrator in the form of datastores that can be manipulated and integrated in the same way. The datastores are grouped into data models. These models contain all the declarative rules - metadata - attached to datastores, such as constraints.

1.2.3.2 Declarative Rules

The declarative rules that make up an interface can be expressed in human language, as shown in the following example: Data is coming from two Microsoft SQL Server tables (ORDERS joined to ORDER_LINES) and is combined with data from the CORRECTIONS file. The target SALES Oracle table must match some constraints such as the uniqueness of the ID column and valid reference to the SALES_REP table.

Data must be transformed and aggregated according to some mappings expressed in human language as shown in Figure 1-2.

Figure 1-2 Example of a business problem


Translating these business rules from natural language to SQL expressions is usually straightforward. In our example, the rules that appear in Figure 1-2 could be translated as shown in Table 1-2.

Table 1-2 Business rules translated

Type / Rule / SQL Expression or Constraint

Filter

Only ORDERS marked as closed

ORDERS.STATUS = 'CLOSED'

Join

A row from LINES has a matching ORDER_ID in ORDERS

ORDERS.ORDER_ID = LINES.ORDER_ID

Mapping

Target's SALES is the sum of the order lines' AMOUNT grouped by sales rep, with the corrections applied

SUM(LINES.AMOUNT + CORRECTIONS.VALUE)

Mapping

Sales Rep = Sales Rep ID from ORDERS

ORDERS.SALES_REP_ID

Constraint

ID must not be null

ID is set to "not null" in the data model

Constraint

ID must be unique

A unique key is added to the data model with (ID) as set of columns

Constraint

The Sales Rep ID should exist in the Target SalesRep table

A reference (foreign key) is added in the data model on SALES.SALES_REP = SALES_REP.SALES_REP_ID


Implementing this business problem using Oracle Data Integrator is a very easy and straightforward exercise. It is done by simply translating the business rules into an interface. Every business rule remains accessible from the interface's diagram, as shown in Figure 1-3.

Figure 1-3 Implementation using Oracle Data Integrator


1.2.3.3 Data Flow

Business rules defined in the interface are automatically converted into a data flow that will carry out the joins, filters, mappings, and constraints from source data to target tables.

By default, Oracle Data Integrator will use the target RDBMS as a staging area for loading source data into temporary tables and applying all the required mappings, staging filters, joins and constraints. The staging area is a separate area in the RDBMS (a user/database) where Oracle Data Integrator creates its temporary objects and executes some of the rules (mapping, joins, final filters, aggregations etc.). When performing the operations this way, Oracle Data Integrator behaves like an E-LT tool as it first extracts and loads the temporary tables and then finishes the transformations in the target RDBMS.

In some particular cases, when source volumes are small (less than 500,000 records), this staging area can be located in memory in Oracle Data Integrator's in-memory relational database – In-Memory Engine. Oracle Data Integrator would then behave like a traditional ETL tool.

Figure 1-4 shows the data flow automatically generated by Oracle Data Integrator to load the final SALES table. The business rules will be transformed into code by the Knowledge Modules (KM). The code produced will generate several steps. Some of these steps will extract and load the data from the sources to the staging area (Loading Knowledge Modules - LKM). Others will transform and integrate the data from the staging area to the target table (Integration Knowledge Module - IKM). To ensure data quality, the Check Knowledge Module (CKM) will apply the user defined constraints to the staging data to isolate erroneous records in the Errors table.
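
As an illustration of these steps, the generated code is typically plain set-based SQL executed in the staging area. The sketch below uses ODI's usual C$_ (loading) and I$_ (integration) table-name prefixes, but the database link, columns and rules are hypothetical and do not come from any specific Knowledge Module:

-- LKM step: extract from the source and load a C$_ work table in the staging area
CREATE TABLE C$_SALES AS
SELECT ORDER_ID, SALES_REP_ID, AMOUNT
FROM   ORDERS@SOURCE_DB          -- hypothetical database link to the source
WHERE  STATUS = 'CLOSED';        -- filter executed during extraction

-- IKM step: apply mappings and aggregations in an I$_ flow table,
-- then integrate the result into the target table
CREATE TABLE I$_SALES AS
SELECT SALES_REP_ID AS SALES_REP, SUM(AMOUNT) AS SALES
FROM   C$_SALES
GROUP BY SALES_REP_ID;

INSERT INTO SALES (SALES_REP, SALES)
SELECT SALES_REP, SALES FROM I$_SALES;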

Figure 1-4 Oracle Data Integrator Knowledge Modules in action


Oracle Data Integrator Knowledge Modules contain the actual code that will be executed by the various servers of the infrastructure. Some of the code contained in the Knowledge Modules is generic. It makes calls to the Oracle Data Integrator Substitution API that will be bound at run-time to the business-rules and generates the final code that will be executed.

At design time, declarative rules are defined in the interfaces and Knowledge Modules are only selected and configured.

At run-time, code is generated and every Oracle Data Integrator API call in the Knowledge Modules (enclosed by <% and %>) is replaced with its corresponding object name or expression, with respect to the metadata provided in the Repository. The generated code is orchestrated by the Oracle Data Integrator run-time component (the Agent) on the source and target systems to make them perform the processing, as defined in the E-LT approach.
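
For example, a task in a Knowledge Module might mix SQL with substitution API calls as in the sketch below. The odiRef.getTable and odiRef.getColList calls are substitution API methods; the generated output shown assumes a hypothetical ODI_STAGING.TARGET_SALES target table with columns SALES_REP and SALES:

Template code in the Knowledge Module:

insert into <%=odiRef.getTable("L", "TARG_NAME", "A")%>
(<%=odiRef.getColList("", "[COL_NAME]", ", ", "", "INS")%>)

Code generated at run-time:

insert into ODI_STAGING.TARGET_SALES
(SALES_REP, SALES)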

Refer to Chapter 11, "Working with Integration Interfaces" for more information on how to work with integration interfaces.

1.3 Typical ODI Integration Projects

Oracle Data Integrator provides a wide range of integration features. This section introduces the most typical ODI Integration Projects.

1.3.1 Batch Oriented Integration

ODI is a comprehensive data integration platform with a built-in connectivity to all major databases, data warehouse and analytic applications providing high-volume and high-performance batch integration.

The main goal of a data warehouse is to consolidate and deliver accurate indicators to business users to help them make decisions regarding their everyday business. A typical project is composed of several steps and milestones. Some of these are:

  • Defining business needs (Key Indicators)

  • Identifying source data that concerns key indicators; specifying business rules to transform source information into key indicators

  • Modeling the data structure of the target warehouse to store the key indicators

  • Populating the indicators by implementing business rules

  • Measuring the overall accuracy of the data by setting up data quality rules

  • Developing reports on key indicators

  • Making key indicators and metadata available to business users through ad-hoc query tools or predefined reports

  • Measuring business users' satisfaction and adding/modifying key indicators

Oracle Data Integrator will help you cover most of these steps, from source data investigation to metadata lineage, and through loading and data quality audit. With its repository, ODI will centralize the specification and development efforts and provide a unique architecture on which the project can rely to succeed.

Scheduling and Operating Scenarios

Scheduling and operating scenarios is usually done in the Test and Production environments in separate Work Repositories. Any scenario can be scheduled by an ODI Agent or by any external scheduler, as scenarios can be invoked by an operating system command.
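
For example, a scheduled job or a shell script can launch a scenario through the startscen command (startscen.bat on Windows, startscen.sh on UNIX) found in the agent's /bin directory. The scenario name, version and context below are hypothetical:

startscen LOAD_SALES 001 PRODUCTION

This command runs version 001 of the LOAD_SALES scenario in the PRODUCTION context.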

When scenarios are running in production, agents generate execution logs in an ODI Work Repository. These logs can be monitored either through the Operator Navigator or through any web browser when Oracle Data Integrator Console is set up. Failing jobs can be restarted and ad-hoc tasks submitted for execution.

E-LT

ODI uses a unique E-LT architecture that leverages the power of existing RDBMS engines by generating native SQL and bulk loader control scripts to execute all transformations.

1.3.2 Event Oriented Integration

Capturing events from a Message Oriented Middleware or an Enterprise Service Bus has become a common task in integrating applications in a real-time environment. Applications and business processes generate messages for several subscribers, or they consume messages from the messaging infrastructure.

Oracle Data Integrator includes technology to support message-based integration and that complies with the Java Message Services (JMS) standard. For example, a transformation job within Oracle Data Integrator can subscribe to and source messages from any message queue or topic. Messages are captured and transformed in real time and then written to the target systems.

Other use cases of this type of integration might require capturing changes at the database level. Oracle Data Integrator Changed Data Capture (CDC) capability identifies and captures inserted, updated, or deleted data from the source and makes it available for integration processes.

ODI provides two methods for tracking changes from source datastores to the CDC framework: triggers and RDBMS log mining. The first method can be deployed on most RDBMS that implement database triggers. This method is optimized to minimize overhead on the source systems. For example, changed data captured by the trigger is not duplicated, minimizing the number of input/output operations, which slow down source systems. The second method involves mining the RDBMS logs—the internal change history of the database engine. This has little impact on the system's transactional performance and is supported for Oracle (through the Log Miner feature) and IBM DB2/400.
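
The trigger-based method can be pictured with the simplified sketch below: only the primary key and a change flag are recorded in a journal table, which is why captured changes are not duplicated. The journal layout and names here are illustrative, not the exact objects an ODI Journalizing KM generates:

-- Journal table holding one row per change, keyed by the source primary key
CREATE TABLE J$ORDERS (
  JRN_SUBSCRIBER VARCHAR2(400) NOT NULL,  -- consumer of the changes
  JRN_FLAG       CHAR(1)       NOT NULL,  -- 'I' for insert/update, 'D' for delete
  JRN_DATE       DATE          NOT NULL,
  ORDER_ID       NUMBER        NOT NULL   -- primary key of the journalized table
);

-- Trigger recording the primary key of each changed row
CREATE OR REPLACE TRIGGER T$ORDERS
AFTER INSERT OR UPDATE OR DELETE ON ORDERS
FOR EACH ROW
BEGIN
  IF DELETING THEN
    INSERT INTO J$ORDERS VALUES ('SUBSCRIBER1', 'D', SYSDATE, :OLD.ORDER_ID);
  ELSE
    INSERT INTO J$ORDERS VALUES ('SUBSCRIBER1', 'I', SYSDATE, :NEW.ORDER_ID);
  END IF;
END;
/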

The CDC framework used to manage changes, based on Knowledge Modules, is generic and open, so the change-tracking method can be customized. Any third-party change provider can be used to load the framework with changes.

Changes frequently involve several data sources at the same time. For example, when an order is created, updated, or deleted, both the orders table and the order lines table are involved. When processing a new order line, it is important that the new order, to which the line is related, is taken into account too. ODI provides a mode of change tracking called Consistent Set CDC. This mode allows for processing sets of changes for which data consistency is guaranteed.

For example, incoming orders can be detected at the database level using CDC. These new orders are enriched and transformed by ODI before being posted to the appropriate message queue or topic. Other applications such as Oracle BPEL or Oracle Business Activity Monitoring can subscribe to these messages, and the incoming events will trigger the appropriate business processes.

For more information on how to use the CDC framework in ODI, refer to Chapter 7, "Working with Changed Data Capture".

1.3.3 Service-Oriented Architecture

Oracle Data Integrator can be integrated seamlessly in a Service Oriented Architecture (SOA) in several ways:

Data Services are specialized Web services that provide access to data stored in database tables. Coupled with the Changed Data Capture capability, data services can also provide access to the changed records for a given subscriber. Data services are automatically generated by Oracle Data Integrator and deployed as Web services to a Web container, usually a Java application server. For more information on how to set up, generate and deploy data services, refer to Chapter 8, "Working with Data Services".

Oracle Data Integrator can also expose its transformation processes as Web services to enable applications to use them as integration services. For example, a LOAD_SALES batch process used to update the CRM application can be triggered as a Web service from any service-compliant application, such as Oracle BPEL, Oracle Enterprise Service Bus, or Oracle Business Activity Monitoring. Transformations developed using ODI can therefore participate in the broader Service Oriented Architecture initiative.

Third-party Web services can be invoked as part of an ODI workflow and used as part of the data integration processes. Requests are generated on the fly and responses processed through regular transformations. Suppose, for example, that your company subscribed to a third-party service that exposes daily currency exchange rates as a Web service. If you want this data to update your multiple currency data warehouse, ODI automates this task with a minimum of effort. You would simply invoke the Web service from your data warehouse workflow and perform any appropriate transformation to the incoming data to make it fit a specific format. For more information on how to use web services in ODI, refer to Chapter 15, "Working with Web Services in Oracle Data Integrator".

1.3.4 Data Quality with ODI

With an approach based on declarative rules, Oracle Data Integrator is the most appropriate tool to help you build a data quality framework to track data inconsistencies.

Oracle Data Integrator uses declarative data integrity rules defined in its centralized metadata repository. These rules are applied to application data to guarantee the integrity and consistency of enterprise information. The Data Integrity benefits add to the overall Data Quality initiative and facilitate integration with existing and future business processes addressing this particular need.

Oracle Data Integrator automatically retrieves existing rules defined at the data level (such as database constraints) by a reverse-engineering process. ODI also allows developers to define additional, user-defined declarative rules that may be inferred from data discovery and profiling within ODI, and immediately checked.

Oracle Data Integrator provides a built-in framework to check the quality of your data in two ways:

  • Check data in your data servers, to validate that this data does not violate any of the rules declared on the datastores in Oracle Data Integrator. This data quality check is called a static check and is performed on data models and datastores. This type of check allows you to profile the quality of the data against rules that are not enforced by their storage technology.

  • Check data while it is moved and transformed by an interface, in a flow check that checks the data flow against the rules defined on the target datastore. With such a check, correct data can be integrated into the target datastore while incorrect data is automatically moved into error tables.

Both static and flow checks use the constraints that are defined in the datastores and data models, and both use the Check Knowledge Modules (CKMs). For more information refer to Section 11.3.7, "Set up Flow Control and Post-Integration Control".
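
For example, the check performed for a reference constraint can be sketched as the SQL below, which copies the rows violating the constraint into an E$_ error table (a real ODI naming convention). The table and column names are hypothetical, and this is not the exact code a CKM generates:

-- Static check: isolate SALES rows whose sales rep does not exist in SALES_REP
INSERT INTO E$_SALES (ERR_TYPE, ERR_MESS, SALES_REP, SALES)
SELECT 'F', 'Sales Rep ID does not exist in SALES_REP', S.SALES_REP, S.SALES
FROM   SALES S
WHERE  NOT EXISTS (SELECT 1
                   FROM   SALES_REP R
                   WHERE  R.SALES_REP_ID = S.SALES_REP);

A flow check runs the same kind of query against the flow data before it is written to the target, so only valid rows reach the target datastore.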

These checks verify the integrity of the data and validate constraints. For advanced data quality projects (for example for name and address cleansing projects) as well as advanced data profiling, the Oracle Data Profiling and Oracle Data Quality for Data Integrator products can be used along with Oracle Data Integrator.

Oracle Data Quality and Oracle Data Profiling Integration

Oracle Data Profiling and Oracle Data Quality for Data Integrator (also referred to as Oracle Data Quality Products) extend the inline Data Quality features of Oracle Data Integrator to provide more advanced data governance capabilities.

Oracle Data Profiling is a data investigation and quality monitoring tool. It allows business users to assess the quality of their data through metrics, to discover or deduce rules based on this data, and to monitor the evolution of data quality over time.

Oracle Data Quality for Data Integrator is a comprehensive award-winning data quality platform that covers even the most complex data quality needs. Its powerful rule-based engine and its robust and scalable architecture places data quality and name and address cleansing at the heart of an enterprise data integration strategy.

For more information on Oracle Data Quality and Oracle Data Profiling refer to Chapter 16, "Working with Oracle Data Quality Products".

1.3.5 Managing Environments

Integration projects exist in different environments during their lifecycle (development, test, production) and may even run in different environments in production (multiple site deployment). Oracle Data Integrator simplifies the definition and maintenance of these environments, as well as the lifecycle of the project across these environments, using the Topology.

The Topology describes the physical and logical architecture of your Information System. It gives you a very flexible way of managing different servers, environments and agents. All the information of the Topology is stored in the master repository and is therefore centralized for an optimized administration. All the objects manipulated within Work Repositories refer to the Topology. That's why it is the most important starting point when defining and planning your architecture.

The Topology is composed of data servers, physical and logical schemas, and contexts.

Data servers describe connections to your actual physical application servers and databases. They can represent for example:

  • An Oracle Instance

  • An IBM DB2 Database

  • A Microsoft SQL Server Instance

  • A Sybase ASE Server

  • A File System

  • An XML File

  • A Microsoft Excel Workbook

  • and so forth.

At runtime, Oracle Data Integrator uses the connection information you have described to connect to the servers.

Physical schemas indicate the physical location of the datastores (tables, files, topics, queues) inside a data server. All the physical schemas that need to be accessed have to be registered under their corresponding data server. Physical schemas are used to prefix object names and access them with their qualified names. When creating a physical schema, you need to specify a temporary, or work, schema that will store temporary or permanent objects needed at runtime.

A logical schema is an alias that allows a unique name to be given to all the physical schemas containing the same datastore structures. The aim of the logical schema is to ensure the portability of procedures and models on different design-time and run-time environments.

A Context represents one of these environments. Contexts are used to group physical resources belonging to the same environment.

Typical projects will have separate environments for Development, Test and Production. Some projects will even have several duplicated Test or Production environments. For example, you may have several production contexts for subsidiaries running their own production systems (Production New York, Production Boston, etc.). There is obviously a difference between the logical view of the information system and its physical implementation as described in Figure 1-5.

Figure 1-5 Logical and Physical View of the Infrastructure


The logical view describes logical schemas that represent the physical schemas of the existing applications independently of their physical implementation. These logical schemas are then linked to the physical resources through contexts.

Designers always refer to the logical view defined in the Topology. All development done therefore becomes independent of the physical location of the resources they address. At runtime, the logical information is mapped to the physical resources, given the appropriate contexts. The same scenario can be executed on different physical servers and applications simply by specifying different contexts. This brings a very flexible architecture where developers don't have to worry about the underlying physical implementation of the servers they rely on.

1.4 Oracle Data Integrator Architecture

The architecture of Oracle Data Integrator relies on different components that work together, as described in Figure 1-6.

Figure 1-6 Functional Architecture Overview


1.4.1 Repositories

The central component of the architecture is the Oracle Data Integrator Repository. It stores configuration information about the IT infrastructure, metadata of all applications, projects, scenarios, and the execution logs. Many instances of the repository can coexist in the IT infrastructure. The architecture of the repository is designed to allow several separated environments that exchange metadata and scenarios (for example: Development, Test, Maintenance and Production environments). In the figure above, two repositories are represented: one for the development environment, and another one for the production environment. The repository also acts as a version control system where objects are archived and assigned a version number. The Oracle Data Integrator Repository can be installed on an OLTP relational database.

The Oracle Data Integrator Repository is composed of a master repository and several Work Repositories. Objects developed or configured through the user interfaces are stored in one of these repository types.

There is usually only one master repository that stores the following information:

  • Security information including users, profiles and rights for the ODI platform

  • Topology information including technologies, server definitions, schemas, contexts, languages etc.

  • Versioned and archived objects.

The Work Repository is the one that contains actual developed objects. Several work repositories may coexist in the same ODI installation (for example, to have separate environments or to match a particular versioning life cycle). A Work Repository stores information for:

  • Models, including schema definition, datastore structures and metadata, fields and columns definitions, data quality constraints, cross references, data lineage etc.

  • Projects, including business rules, packages, procedures, folders, Knowledge Modules, variables etc.

  • Scenario execution, including scenarios, scheduling information and logs.

When the Work Repository contains only the execution information (typically for production purposes), it is then called an Execution Repository.

For more information on how to manage ODI repositories, refer to Chapter 3, "Administering the Oracle Data Integrator Repositories".

1.4.2 User Interfaces

Administrators, Developers and Operators use the Oracle Data Integrator Studio to access the repositories. This Fusion Client Platform (FCP) based UI is used for administering the infrastructure (security and topology), reverse-engineering the metadata, developing projects, scheduling, operating and monitoring executions.

Business users (as well as developers, administrators and operators) can have read access to the repository, and can perform topology configuration and production operations through a web based UI called Oracle Data Integrator Console. This Web application can be deployed in a Java EE application server such as Oracle WebLogic.

ODI Studio provides four Navigators for managing the different aspects and steps of an ODI integration project:

Topology Navigator

Topology Navigator is used to manage the data describing the information system's physical and logical architecture. Through Topology Navigator you can manage the topology of your information system, the technologies and their datatypes, the data servers linked to these technologies and the schemas they contain, the contexts, the language and the agents, as well as the repositories. The site, machine, and data server descriptions will enable Oracle Data Integrator to execute the same interfaces in different environments.

Designer Navigator

Designer Navigator is used to design data integrity checks and to build transformations. Its main capabilities include:

  • Automatic reverse-engineering of existing applications or databases

  • Graphical development and maintenance of transformation and integration interfaces

  • Visualization of data flows in the interfaces

  • Automatic documentation generation

  • Customization of the generated code

The main objects you handle through Designer Navigator are Models and Projects.

Operator Navigator

Operator Navigator is the production management and monitoring tool. It is designed for IT production operators. Through Operator Navigator, you can manage your interface executions in the sessions, as well as the scenarios in production.

Security Navigator

Security Navigator is the tool for managing the security information in Oracle Data Integrator. Through Security Navigator you can create users and profiles and assign user rights for methods (edit, delete, and so forth) on generic objects (data server, datatypes, and so forth), and fine-tune these rights on the object instances (Server 1, Server 2, and so forth).

1.4.3 Design-time Projects

A typical project is composed of several steps and milestones.

Some of these are:

  • Define the business needs

  • Identify and declare the sources and targets in the Topology

  • Design and Reverse-engineer source and target data structures in the form of data models

  • Implement data quality rules on these data models and perform static checks on these data models to validate the data quality rules

  • Develop integration interfaces using datastores from these data models as sources and target

  • Develop additional components for tasks that cannot be achieved using interfaces, such as receiving and sending e-mails, handling files (copy, compress, rename and so forth), or executing web services

  • Integrate interfaces and additional components for building Package workflows

  • Version your work and release it in the form of scenarios

  • Schedule and operate scenarios.

Oracle Data Integrator will help you cover most of these steps, from source data investigation to metadata lineage, and through loading and data quality audit. With its repository, Oracle Data Integrator will centralize the specification and development efforts and provide a unique architecture on which the project can rely to succeed.

Chapter 2, "Oracle Data Integrator QuickStart" introduces you to the basic steps of creating an integration project with Oracle Data Integrator. Chapter 9, "Creating an Integration Project" gives you more detailed information on the several steps of creating an integration project in ODI.

1.4.4 Run-Time Agent

At design time, developers generate scenarios from the business rules that they have designed. The code of these scenarios is then retrieved from the repository by the Run-Time Agent. This agent then connects to the data servers and orchestrates the code execution on these servers. It retrieves the return codes and messages for the execution, and stores them, along with additional logging information (such as the number of processed records, the execution time, and so forth), in the Repository.

The Agent comes in two different flavors:

  • The Java EE Agent can be deployed as a web application and benefit from the features of an application server.

  • The Standalone Agent runs in a simple Java Virtual Machine and can be deployed where needed to perform the integration flows.

Both these agents are multi-threaded Java programs that support load balancing and can be distributed across the information system. The agent holds its own execution schedule, which can be defined in Oracle Data Integrator, and can also be called from an external scheduler. It can also be invoked from a Java API or a web service interface. Refer to Chapter 4, "Setting-up the Topology" for more information on how to create and manage agents.


C Using Groovy Scripting in Oracle Data Integrator

This appendix provides an introduction to the Groovy language and explains how to use Groovy scripting in Oracle Data Integrator. This appendix contains the following sections:

C.1 Introduction to Groovy

Groovy is a scripting language with Java-like syntax for the Java platform. The Groovy scripting language simplifies the authoring of code by employing dot-separated notation, yet still supports syntax to manipulate collections, Strings, and JavaBeans. Unlike Java, which is strongly typed and compiled, Groovy expressions are dynamically compiled and executed at runtime.

For more information about the Groovy language, see the following web site:

http://groovy.codehaus.org/
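
As a quick illustration of Groovy's concise syntax, the following short script (a minimal sketch; the names used are arbitrary) filters a list with a closure and prints the matches:

// Native list literal, closure-based filtering, and GString interpolation.
def names = ['ODI', 'Studio', 'Agent']
names.findAll { it.startsWith('O') }
     .each { println "Matched: ${it}" }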

C.2 Introduction to the Groovy Editor

The Groovy editor provides a single environment for creating, editing, and executing Groovy scripts within the ODI Studio context. Figure C-1 gives an overview of the Groovy editor.

Figure C-1 Groovy Editor


The Groovy editor provides the standard features of a code editor, such as syntax highlighting and common code editor commands, with the exception of debugging. The following commands are supported and accessed through the context menu or through the Source main menu:

  • Show Whitespace

  • Text Edits

    • Join Line

    • Delete Current Line

    • Trim Trailing Whitespace

    • Convert Leading Tabs to Spaces

    • Convert Leading Spaces to Tabs

    • Macro Toggle Recording

    • Macro Playback

  • Indent Block

  • Unindent Block

C.3 Using the Groovy Editor

You can perform the following actions with the Groovy editor:

C.3.1 Create a Groovy Script

To create a Groovy script in ODI Studio:

  1. From the Tools Main menu select Groovy > New Script.

    This opens the Groovy editor.

  2. Enter the Groovy code.

You can now save or execute the script.

C.3.2 Open and Edit an Existing Groovy Script

To edit a Groovy Script that has been previously created:

  1. From the Tools Main menu select Groovy > Open Script or Recent Scripts.

  2. Select the Groovy file and click Open.

    This opens the selected file in the Groovy editor.

  3. Edit the Groovy script.

You can now save or execute the script.

C.3.3 Save a Groovy Script

To save a Groovy script that is currently open in the Groovy editor:

From the Tools Main menu select Groovy > Save Script or Save Script As.


Note:

The Save and Save All toolbar options are not associated with the Groovy Editor.

C.3.4 Execute a Groovy Script

You can execute one or several Groovy scripts at once and also execute one script several times in parallel.

You can only execute a script that is opened in the Groovy editor. ODI Studio does not execute a selection of the script; it executes the whole Groovy script.

To execute a Groovy script in ODI Studio:

  1. Select the script that you want to execute in the Groovy editor.

  2. Click Execute in the toolbar.

The script is executed. You can now follow the execution in the Log window.

Note that each script execution launches its own Log window. The Log window is named according to the following format: Running <script_name>.

C.3.5 Stop the Execution of a Groovy Script

You can only stop running scripts. If no script is running, the Stop buttons are deactivated.

The execution of Groovy scripts can be stopped using two methods:

  • Click Stop in the Log tab. This stops the execution of that particular script.

  • Click Stop on the toolbar. If several scripts are running, you can select the script execution to stop from the drop-down list.

C.3.6 Perform Advanced Actions

This section describes some advanced actions that you can perform with the Groovy editor.

Use Custom Libraries

The Groovy editor can access external libraries, for example when an external driver is needed.

To use external libraries, do one of the following:

  • Copy the custom libraries to the userlib folder. This folder is located:

    • On Windows operating systems:

      %APPDATA%/odi/oracledi/userlib

    • On UNIX operating systems:

      ~/.odi/oracledi/userlib

  • Add the custom libraries to the additional_path.txt file. This file is located in the userlib folder and has the following content:

    ; Additional paths file
    ; You can add here paths to additional libraries
    ; Examples:
    ;       C:\java\libs\myjar.jar
    ;       C:\java\libs\myzip.zip
    ;       C:\java\libs\*.jar will add all jars contained in the C:\java\libs\ directory
    ;       C:\java\libs\**\*.jar will add all jars contained in the C:\java\libs\ directory and subdirectories
    

Define Additional Groovy Execution Classpath

You can define a Groovy execution classpath in addition to all classpath entries available to ODI Studio.

To define an additional Groovy execution classpath:

  1. Before executing the Groovy script, select Preferences... from the Tools Main menu.

  2. In the Preferences dialog, navigate to the Groovy Preferences page.

  3. Enter the classpath and click OK.


    Note:

    You do not need to restart ODI Studio after adding or changing the classpath.

Read Input with odiInputStream Variable

Oracle Data Integrator provides the odiInputStream variable to read input streams. This variable is used as follows:

odiInputStream.withReader { println (it.readLine())}

When this feature is used, an Input text field is displayed at the bottom of the Log tab. Enter a text string and press ENTER to pass this value to the script. The script exits once the value is passed to it.

Example C-1 shows another example of how to use an input stream. In this example you can provide input until you click Stop <script_name>.

Example C-1 InputStream

// Keep reading lines from the Input field until the execution is stopped.
odiInputStream.withReader { reader ->
  while (true) {
    println reader.readLine()
  }
}
 

Using Several Scripts

If you are using several scripts at once, note the following:

  • A log tab is opened for each execution.

  • If a script is referring to another script, the output of the second will not be redirected to the log tab. This is a known Groovy limitation with no workaround.

Using the ODI Instance

Oracle Data Integrator provides the variable odiInstance. This variable is available for any Groovy script running within ODI Studio. It represents the ODI instance, more precisely the connection to the repository, in which the script is executed. Note that this instance will be null if ODI Studio is not connected to a repository.

The odiInstance variable is initialized and bound into the script's context by ODI Studio before the script is executed. Example C-2 shows how you can use the odiInstance variable.

C.4 Automating Development Tasks - Examples

Oracle Data Integrator provides support for the use of Groovy to automate development tasks, as illustrated by the following examples.

Example C-2 shows how to create an ODI Project with a Groovy script.

Example C-2 Creating a Project

import oracle.odi.core.persistence.transaction.ITransactionDefinition;
import oracle.odi.core.persistence.transaction.support.DefaultTransactionDefinition;
import oracle.odi.core.persistence.transaction.ITransactionManager;
import oracle.odi.core.persistence.transaction.ITransactionStatus;
import oracle.odi.domain.project.OdiProject;
import oracle.odi.domain.project.OdiFolder;
 
 
// Start a transaction on the repository connection held by odiInstance.
ITransactionDefinition txnDef = new DefaultTransactionDefinition()
ITransactionManager tm = odiInstance.getTransactionManager()
ITransactionStatus txnStatus = tm.getTransaction(txnDef)

// Create a new project and a folder inside it.
OdiProject myProject = new OdiProject("New Project 1", "NEW_PROJECT_1")
OdiFolder myFolder = new OdiFolder(myProject, "Test Folder 001")

// Persist the project (the folder is attached to it) and commit.
odiInstance.getTransactionalEntityManager().persist(myProject)
tm.commit(txnStatus)

Example C-3 shows how to import an external Groovy script.

Example C-3 External Groovy File

//Created by ODI Studio
import gd.Test1;
println "Hello World"
Test1 t1 = new Test1()
println t1.getName()
 

Example C-4 shows how to call a class from a different Groovy script.

Example C-4 Class from External File

import gd.GroovyTestClass
 
GroovyTestClass tc = new GroovyTestClass()
println tc.getClassLoaderName()
 

Example C-5 shows how to implement Studio UI automation.

Example C-5 For Studio UI Automation

import javax.swing.JMenuItem;
import javax.swing.JMenu;
import oracle.ide.Ide;
 
// Retrieve the fifth top-level component of the Studio menu bar and click it
// to open the menu.
((JMenuItem)Ide.getMenubar().getGUI(false).getComponent(4)).doClick();
JMenu mnu = ((JMenu)Ide.getMenubar().getGUI(false).getComponent(4));
// Click the first item of that menu.
((JMenuItem)mnu.getMenuComponents()[0]).doClick()

20 Exporting/Importing

This chapter describes how to manage export and import operations in Oracle Data Integrator. An introduction to the import and export concepts is provided.

This chapter includes the following sections:

20.1 Import and Export Concepts

This section introduces you to the fundamental concepts of export and import operations in Oracle Data Integrator. All export and import operations require a clear understanding of the concepts introduced in this section.

20.1.1 Internal Identifiers (IDs)

Before performing export and import operations, it is essential to understand in detail the concept of internal identifiers (ID) in Oracle Data Integrator.

To ensure object uniqueness across several work repositories, ODI uses a specific mechanism to generate unique IDs for objects (such as technologies, data servers, Models, Projects, Integration Interfaces, KMs, etc.). Every object in Oracle Data Integrator is identified by an internal ID. The internal ID appears on the Version tab of each object.

ODI Master and Work Repositories are identified by their unique internal IDs. This repository ID, a number of up to three digits, must be unique across all work repositories of an ODI installation and is used to compute the internal ID of an object.

The internal ID of an object is calculated by appending the value of the RepositoryID to an automatically incremented number: <UniqueNumber><RepositoryID>

If the Repository ID is shorter than 3 digits, the missing digits are completed with "0". For example, if a repository has the ID 5, possible internal IDs of the objects in this repository could be: 1005, 2005, 3005, ..., 1234567005. Note that all objects created within the same repository share the same last three digits, in this example 005.
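
As an illustration only (this is not an ODI API), the following Groovy sketch reproduces this numbering rule and shows how an object's provenance can be read back from the last three digits:

def internalId(long uniqueNumber, int repositoryId) {
    // Append the repository ID, left-padded to 3 digits, to the incremented number.
    ("${uniqueNumber}" + String.format('%03d', repositoryId)) as long
}

def repositoryIdOf(long id) {
    // The last three digits identify the repository that created the object.
    (id % 1000) as int
}

assert internalId(1, 5) == 1005
assert internalId(1234567, 5) == 1234567005
assert repositoryIdOf(1234567005) == 5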

This internal ID is unique for the object type within the repository, and also unique between repositories for the object type, because it contains the repository's unique ID. This mechanism makes it possible to:

  • Avoid any ID conflicts when exporting and importing from one repository to another

  • Understand the provenance of every object, simply by looking at its internal ID. The last three digits always refer to the repository where the object was created.

Important Export/Import Rules and Guidelines

Due to the structure of the object IDs, these guidelines should be followed:

  • Work repositories must always have different internal IDs. Work repositories with the same ID are considered to contain the same objects.

  • If an export/import operation is performed between two Master/Work repositories possessing identical internal IDs, there is a risk of overwriting objects when importing. Objects from both repositories that have the same IDs are considered the same.

20.1.2 Relationships between Objects

Oracle Data Integrator stores all objects in a relational database schema (the Repository) with dependencies between objects. Repository tables that store these objects maintain these dependencies as references using the IDs. When you drag and drop a target datastore into an integration interface, only the reference to the ID of this datastore is stored in the interface object. If you export this interface and import it in Synonym mode into another work repository, a datastore with the same ID must already exist in that work repository; otherwise the import will create a missing reference. Missing references can be resolved either by fixing the imported object directly or by importing the missing object.

You can use the Smart export and import feature or solutions to export and import sets of dependent objects.

  • Use solutions in combination with versioning to maintain the dependencies when doing export/import. See Chapter 19, "Working with Version Management".

  • It is recommended to use the Smart export and import feature because the dependencies are determined automatically.

In the example above, this means that the Model or Sub-model holding the Datastore needs to be exported and imported in Synonym mode prior to importing the integration interface.

There are also dependencies between work repository objects and master repository objects. Dependencies within a work repository are ID-based. Dependencies between objects of the work repository and objects of the master repository are based on Codes and not IDs. This means that only the Code of the master repository objects (for example, ORACLE is the code of the Oracle technology in the master) is referenced in the work repository.

It is important to import objects in the appropriate order. You can also use the Smart export and import feature to preserve these dependencies. Table 20-1 lists the dependencies of an integration interface to other objects when importing the integration interface in synonym mode. Note that a Smart export automatically includes these dependent objects when exporting an interface.

Table 20-1 Dependencies of an integration interface in the work and Master Repository

Dependencies on other objects of the Work Repository when importing in Synonym Mode:

  • (Parent/Child) Folder: The Folder holding this Interface needs to be imported first.

  • (Reference) Model/Sub-Model: All Models/Sub-Models holding Datastore definitions referenced by the Interface need to be imported first. Datastore definitions, including Columns, Data Types, Primary Keys, Foreign Keys (references), and Conditions, must be exactly the same as the ones used by the exported Interface.

  • (Reference) Global Variables, Sequences and Functions used within the Interface need to be imported first.

  • (Reference) Local Variables, Sequences and Functions used within the Interface need to be imported first.

  • (Reference) Knowledge Modules referenced within the Interface need to be imported first.

  • (Reference) Any Interface used as source in the current Interface needs to be imported first.

Dependencies on objects of the Master Repository:

  • Technology Codes

  • Context Codes

  • Logical Schema Names

  • Data Type Codes

  • Physical Server Names of the Optimization Contexts of Interfaces


20.1.3 Import Types

Oracle Data Integrator can import objects, the topology or repositories using several modes.

Read this section carefully to determine the import type you need.

Duplication

This mode creates a new object (with a new internal ID) in the target Repository, and inserts all the elements of the export file. The ID of this new object will be based on the ID of the Repository in which it is to be created (the target Repository).

Dependencies between objects which are included in the export, such as parent/child relationships, are recalculated to match the new parent IDs. References to objects which are not included in the export are not recalculated.

Note that this mode is designed to insert only 'new' elements.

The Duplication mode is used to duplicate an object into the target repository. To transfer objects from one repository to another, with the possibility to ship new versions of these objects, or to make updates, it is better to use the three Synonym modes.

This import type is not available for importing master repositories. Creating a new master repository using the export of an existing one is performed using the master repository Import wizard.

Synonym Mode INSERT

Tries to insert the same object (with the same internal ID) into the target repository. The original object ID is preserved.

If an object of the same type with the same internal ID already exists then nothing is inserted.

Dependencies between objects which are included in the export, such as parent/child relationships, are preserved. References to objects which are not included in the export are not recalculated.

If any of the incoming attributes violates any referential constraints, the import operation is aborted and an error message is thrown.

Note that sessions can only be imported in this mode.

Synonym Mode UPDATE

Tries to modify the same object (with the same internal ID) in the repository.

This import type updates the objects already existing in the target Repository with the content of the export file.

If the object does not exist, the object is not imported.

Note that this import type does NOT delete child objects that exist in the repository but are not in the export file. For example, if the target repository contains a project with some variables and you want to replace it with one that contains no variables, this mode will update for example the project name but will not delete the variables under this project. The Synonym Mode INSERT_UPDATE should be used for this purpose.

Synonym Mode INSERT_UPDATE

If no ODI object exists in the target Repository with an identical ID, this import type will create a new object with the content of the export file. Already existing objects (with an identical ID) will be updated; the new ones, inserted.

Existing child objects will be updated, non-existing child objects will be inserted, and child objects existing in the repository but not in the export file will be deleted.

Dependencies between objects which are included in the export, such as parent/child relationships, are preserved. References to objects which are not included in the export are not recalculated.

This import type is not recommended when the export was done without the child components. This will delete all sub-components of the existing object.

Import Replace

This import type replaces an already existing object in the target repository with the object of the same type specified in the import file.

This import type is only supported for scenarios, Knowledge Modules, actions, and action groups, and replaces all child objects with the child objects from the imported object.

Note the following when using the Import Replace mode:

If your object is currently used by another ODI component (for example, a KM used by an integration interface), this relationship will not be impacted by the import; the interfaces will automatically use the new KM in the project.

Warnings:

  • When replacing a Knowledge module by another one, Oracle Data Integrator sets the options in the new module using option name matching with the old module's options. New options are set to the default value. It is advised to check the values of these options in the interfaces.

  • Replacing a KM by another one may lead to issues if the KMs are radically different. It is advised to check the interface's design and execution with the new KM.
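
As a recap of the three Synonym modes described above, the following Groovy sketch (an illustrative model only, not ODI code; child-object and reference handling are omitted) shows the top-level decision each mode applies to an incoming object keyed by its internal ID:

def synonymImport(Map repo, Map incoming, String mode) {
    def id = incoming.id
    switch (mode) {
        case 'INSERT':
            // Insert only if no object with the same internal ID exists yet.
            if (!repo.containsKey(id)) { repo[id] = incoming }
            break
        case 'UPDATE':
            // Update only an already existing object; never create a new one.
            if (repo.containsKey(id)) { repo[id] << incoming }
            break
        case 'INSERT_UPDATE':
            // Update if present, insert otherwise.
            repo.containsKey(id) ? (repo[id] << incoming) : (repo[id] = incoming)
            break
    }
    repo
}

def repo = [101: [id: 101, name: 'Old name']]
synonymImport(repo, [id: 101, name: 'New name'], 'UPDATE')
assert repo[101].name == 'New name'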


20.1.4 Tips for Import/Export

This section provides tips for the import and export operations.

Repository IDs

As a general rule, always use different internal IDs for your repositories in order to avoid any ID conflicts. Note that the Smart Import feature checks for ID conflicts. For more information, see Section 20.2.7, "Smart Export and Import".

Export/Import Reports

A report is displayed after every export or import operation. Read it carefully to identify any errors that occurred during the import process.

Depending on the export or import operation performed, this report gives you details on, for example, the:

  • Import type

  • Imported Objects. For every imported object the object type, the original object name, the object name used for the import, the original ID, and the new, recalculated ID after the import are given.

  • Deleted Objects. For every deleted object the object type, the object name, and the original ID are given.

  • Created Missing References lists the missing references detected after the import.

  • Fixed Missing References lists the missing references fixed during the import.

The reports displayed after a smart export or smart import operation contain additional details to describe what happened to the objects during the export or import, for example which objects have been ignored, merged, overwritten and so forth.

You can save the import report as an .xml or .html file. Click Save... to save the import report.

Missing References

In order to avoid missing references, use either the Smart Export and Import feature or solutions to manage dependencies. For more information, see Section 20.2.7, "Smart Export and Import" and Section 19.4, "Working with Solutions".

Import Type

Choose the import type carefully. See Section 20.1.3, "Import Types" for more information.

20.2 Exporting and Importing Objects

Exporting and importing Oracle Data Integrator objects means transferring objects between different repositories.

When exporting an Oracle Data Integrator object, an XML export file is created. ODI objects have dependencies, as described in Section 20.1.2, "Relationships between Objects". These dependencies will be exported in the XML export file.

The content of this XML file depends on the export method you use.

The choice depends on your goal: if you need to do a partial export, Export Without Child Components is the one to use.

The Export Multiple ODI Objects feature is useful when you need to regularly export the same set of Objects.

Once the export has been performed, it is very important to choose the import strategy that suits your requirements.

The Smart Export and Import feature is a lightweight and consistent export and import mechanism. It supports the export and import of one or multiple ODI objects. It is recommended to use this feature to avoid most of the common issues that are encountered during an export or import.

This section contains the following topics:

20.2.1 Exporting an Object with its Child Components

This option is the most common when you want to export an object. It allows you to export all subcomponents of the current object along with the object itself.

When an Object is exported with its child components, all container-dependent Objects (those which possess a direct parent/child relationship) are also exported. Referenced Objects are not exported.

For example, when you choose to export a Project with its child components, the export will contain the Project definition as well as all objects included in the Project, such as Folders, Interfaces, Procedures, Packages, Knowledge Modules, Variables, Sequences, Functions, etc. However, this export will not contain referenced dependent objects which are outside of the Project itself, such as Datastores and Columns, as described in Section 20.1.2, "Relationships between Objects". Only the numeric ID references of these Objects will be exported.

20.2.2 Exporting an Object without its Child Components

This option can be useful in some particular situations where you would want to take control of the import process. It allows you to export only the top-level definition of an object without any of its subobjects.

For example, if you choose to export a Model without its children, the export will contain only the Model definition, not the underlying Sub-models and Datastores, when you import this Model into a new repository.

20.2.3 Partial Export/Import

If you have a very large project that contains thousands of interfaces and you only want to export a subset of these to another work repository, you can either export the entire Project and then import it, or choose to do a partial manual export/import as follows:

  1. Export all Models referenced by the sub-items of your project and import them in Synonym mode in the new repository to preserve their IDs

  2. Export the Project without its children and import it in Synonym mode. This will simply create the empty Project in the new repository (with the same IDs as in the source).

  3. Export every first level Folder you want, without its children, and import them in Synonym mode. The empty Folders will be created in the new repository.

  4. Export and Import all Markers, Knowledge Modules, Variables, Sequences, and so forth that are referenced by every object you plan to export, and import them in Synonym mode. See Section 20.1.3, "Import Types" for more information on the Synonym and Duplication modes and their impact on Object IDs, and for special caution regarding the import of Knowledge Modules in Synonym mode.

  5. Finally, export the Interfaces you are interested in and import them in Synonym mode in the new repository.

20.2.4 Exporting one ODI Object

Exporting one Oracle Data Integrator Object means exporting a single ODI object in order to transfer it from one repository to another.

To export an object from Oracle Data Integrator:

  1. Select the object to be exported in the appropriate Oracle Data Integrator Navigator.

  2. Right-click the object, and select Export...

    If this menu item does not appear, then this type of object does not have the export feature.

  3. In the Export dialog, set the Export parameters as indicated in Table 20-2.

    Table 20-2 Object Export Parameters


    Export Directory

    Directory in which the export file will be created.

    Export Name

    Name given to the export

    Child Components Export

    If this option is checked, the objects linked to the object to be exported will be also exported. These objects are those visible under the exported object in the tree. It is recommended to leave this option checked. Refer to Exporting an Object with its Child Components for more details.

    Note that when you are exporting a Load Plan, scenarios will not be exported even if you check this option.

    Replace existing files without warning

    If this option is checked and a file with the same name as the export file already exists, it will be overwritten by the export file.

    Advanced options

    This set of options allows you to parameterize the XML output file format. It is recommended that you leave the default values.

    XML Version

    XML Version specified in the export file. Parameter version in the XML file header.

    <?xml version="1.0" encoding="ISO-8859-1"?>

    Character Set

    Encoding specified in the export file. Parameter encoding in the XML file header.

    <?xml version="1.0" encoding="ISO-8859-1"?>

    Java Character Set

    Java character set used to generate the file


    You must at least specify the Export Name.

  4. Click OK.

The object is exported as an XML file in the specified location.

20.2.5 Export Multiple ODI Objects

You can export one or more objects at once, using the Export Multiple Objects action. This lets you export ODI objects to a zip file or a directory, and lets you re-use an existing list of objects to export.

More powerful mechanisms for doing this are Solutions and the Smart Export and Import. For more information, see Section 19.4, "Working with Solutions" or Section 20.2.7, "Smart Export and Import".

To export multiple objects at once:

  1. Select Export... from the Designer, Topology, Security or Operator Navigator toolbar menu.

  2. In the Export Selection dialog, select Export Multiple Objects.

  3. Click OK.

  4. In the Export Multiple Objects dialog, specify the export parameters as indicated in Table 20-2.

    The objects are either exported as .xml files directly into the directory, or as a zip file containing .xml files. If you want to generate a zip file, you need to select Export as zip file and enter the name of the zip file in the Zip file name field.

  5. Specify the list of objects to export:

    1. Drag and drop the objects from the Oracle Data Integrator Navigators into the Export list. Note that you can export objects from different Navigators at once.

    2. Click Load a list of objects... to load a previously saved list of objects. This is useful if you regularly export the same list of objects.

    3. To save the current list of objects, click Save Export List and specify a file name. If the file already exists, it will be overwritten without any warning.

  6. Click OK to start the export.

To import multiple objects at once, you must use a Solution or the Smart Import. See Section 19.4, "Working with Solutions" and Section 20.2.7, "Smart Export and Import" for more information.

20.2.6 Importing Objects

Importing and exporting allows you to transfer objects (Interfaces, Knowledge Modules, Models, ...) from one repository to another. When importing Knowledge Modules, choose your import strategy carefully; it may depend on the knowledge module's scope. See Section 9.3.1, "Project and Global Knowledge Modules" for more information.

This section includes the following topics:

Importing an ODI object



To import an object in Oracle Data Integrator:

  1. In the Navigator, select the object or object node under which you want to import the object.

  2. Right-click the object, and select Import, then the type of the object you wish to import.

  3. In the Import dialog:

    1. Select the Import Type. See Section 20.1.3, "Import Types" for more information.

    2. Enter the File Import Directory.

    3. Select the file(s) to import from the list.

  4. Click OK.

The XML-formatted files are imported into the work repository, and the imported objects appear in the Oracle Data Integrator Navigators.

Note that the parent or node under which objects are imported depends on the import type used. When using Duplication mode, the objects are imported under the object or node where the Import option was selected. For Synonym imports, the objects are imported under the parent specified by the object's parent ID in the import file.

Importing a Project KM

To import a Knowledge Module into an Oracle Data Integrator project:

  1. In Designer Navigator, select the project into which you want to import the KM.

  2. Right-click the project, and select Import > Import Knowledge Modules....

  3. In the Import dialog:

    1. The Import Type is set to Duplication. Refer to Section 20.1.3, "Import Types" for more information.

    2. Enter the File Import Directory.

    3. Select the Knowledge Module file(s) to import from the list.

  4. Click OK.

The Knowledge Modules are imported into the work repository and appear in your project under the Knowledge Modules node.

Importing a KM in Replace Mode

Knowledge modules are usually imported into new projects in Duplication mode.

When you want to replace a global KM or a KM in a project by another one and have all interfaces automatically use the new KM, you must use the Import Replace mode.

To import a Knowledge Module in Replace mode:

  1. Select the Knowledge Module you wish to replace.

  2. Right-click the Knowledge Module and select Import Replace...

  3. In the Replace Object dialog, specify the Knowledge Module export file.

  4. Click OK.

The Knowledge Module is now replaced by the new one.


WARNING:

Replacing a Knowledge module by another one in Oracle Data Integrator sets the options in the new module using the option name similarities with the old module's options. New options are set to the default value.

It is advised to check the values of these options in the interfaces as well as the interfaces' design and execution with the new KM.

Refer to the Import Replace mode description in the Section 20.1.3, "Import Types" for more information.


Importing a Global Knowledge Module

To import a global knowledge module in Oracle Data Integrator:

  1. In the Navigator, select the Global Knowledge Modules node in the Global Objects accordion.

  2. Right-click and select Import Knowledge Modules.

  3. In the Import dialog:

    1. Select the Import Type. See Section 20.1.3, "Import Types" for more information.

    2. Enter the File Import Directory.

    3. Select the file(s) to import from the list.

  4. Click OK.

The global KM is now available in all your projects.


20.2.7 Smart Export and Import

It is recommended to use the Smart Export and Import feature to avoid most of the common issues that are encountered during an export or import such as broken links or ID conflicts. The Smart Export and Import feature is a lightweight and consistent export and import mechanism providing several smart features.

The Smart Export automatically exports an object with all of its object dependencies. It is particularly useful when you want to move a consistent, lightweight set of objects from one repository to another, or when you want to include only a set of modified objects (for example, in a patching use case), because Oracle Data Integrator manages all object dependencies automatically while creating a consistent sub-set of the repository.

The Smart Import provides:

  • Automatic and customizable object matching rules between the objects to import and the objects already present in the repository

  • A set of actions that can be applied to the object to import when a matching object has been found in the repository

  • Proactive issue detection and resolution that suggests a default working solution for every broken link or conflict detected during the Smart Import

20.2.7.1 Performing a Smart Export

To perform a Smart Export:

  1. Select Export... from the Designer, Topology, Security or Operator Navigator toolbar menu.

  2. In the Export Selection dialog, select Smart Export.


    Note:

    This option is only available if you are connected to a Work repository.

  3. Click OK.

  4. In the Smart Export dialog, specify the export parameters as follows:

    1. In the Export Name field, enter the name given to the export (mandatory). Default is SmartExport.xml.

    2. The objects are either exported into a single .xml file directly in the directory, or as a zip file containing a single .xml file. If you want to generate a zip file, you need to select Export as zip file and enter the name of the zip file in the Zip file name field.

    3. Optionally, customize the XML output file format in the Encoding Options section. It is recommended that you leave the default values.

      PropertiesDescription
      XML Character SetEncoding specified in the export file. Parameter encoding in the XML file header.

      <?xml version="1.0" encoding="ISO-8859-1"?>

      Java Character SetJava character set used to generate the file

    4. In the Dependencies section, drag and drop the objects you want to add to the Smart Export from the Oracle Data Integrator Navigators into the Selected Objects list on the left. Note that you can export objects from different Navigators at once.

      The object to export appears in a tree with all its related parent and child objects that are required to support it.

      Repeat this step according to your needs.


      Note:

      • If your export contains shortcuts, you will be asked if you want to materialize the shortcuts. If you select No, both the shortcuts and the base objects will be exported.

      • A bold object name indicates that this object has been specifically added to the Smart Export. Only objects that appear in bold can be removed. Removing an object also removes its child objects and the dependent objects of the removed object. Note that child objects of a specifically added object also appear in bold and can be removed. To remove an object from the export tree, right-click the object and select Remove Object. If the removed object is a dependent object of another object, it will remain in the tree but will no longer appear in bold.

      • A grayed out object name indicates that this object is a dependent object that will not be exported, such as, for example, a technology.


    5. Optionally, modify the list of objects to export. You can perform the following actions: Remove one object, remove all objects, add objects by release tag, and add shortcuts. See Change the List of Objects to Export for more information.

    6. If there are any cross reference objects, including shortcuts, they are displayed in the Dependencies list on the right. Parent objects will not be shown under the Uses node and child objects will not be shown under the Used By node.

  5. Click Export to start the export process.

The Smart export generates a single file containing all the objects of the Selected Objects list. You can use this export file as the input file for the Smart Import. See Section 20.2.7.2, "Performing a Smart Import" for more information.

You can review the results of the Smart Export in the Smart Export report.

The Smart Export Toolbar

The Smart Export toolbar provides tools for managing the objects to export and for viewing dependencies. Table 20-3 details the different toolbar components.

Table 20-3 Smart Export Toolbar

Search

Searches for an object in the Selected Objects or Dependencies list.

Expand All

Expands all tree nodes in the Selected Objects or Dependencies list.

Collapse All

Collapses all tree nodes in the Selected Objects or Dependencies list.

Clear All

Deletes all objects from the list. Warning: This also deletes Release Tags and Materialization selections.

Add Objects by Release Tag

Adds all objects that have the same release tag as the object already in the Selected Objects list.


Change the List of Objects to Export

You can perform the following actions to change the list of objects to export:

  • Remove one object from the list

    Only objects that have been explicitly added to the Smart Export (objects in bold) can be removed from the Selected Objects list.

    To remove one object:

    1. In the Selected Objects list, select the object you wish to remove.

    2. Right-click and select Remove Object.

    The object and its dependencies are removed from the Selected Objects list and will not be included in the Smart Export.


    Note:

    If the object you wish to remove is a dependent object of another object to export, it remains in the list but no longer appears in bold.

  • Remove all objects from the list

    To delete all objects from the Selected Objects list, select Clear All in the Smart Export Toolbar.


    Caution:

    This also deletes Release Tags and Materialization selections.

  • Add objects by release tag

    To add a folder or model folder of a certain release:

    1. Select Add Objects by Release Tag in the Smart Export Toolbar.

      This opens the Release Tag Selection dialog.

    2. In the Release Tag Selection dialog, select a release tag from the Release Tag list. All objects of this release tag will be added to the Smart Export. You don't need to add them individually to the Smart Export.

      The Release Tag Selection dialog displays the list of release tags that have been already added to the Smart Export.

    3. Click OK to add the objects of the selected release tag to the Smart Export.

    The release tag name is displayed in the Selected Objects list after the object name.


    Note:

    When you add a folder or model folder to the Selected Objects list that has a release tag, you can choose to automatically add all objects of the given release to the Smart Export by clicking OK in the Confirmation dialog.

  • Add shortcuts

    If you add shortcuts to the Smart Export, you can choose to materialize the shortcut. If you choose not to materialize a shortcut added to the Smart Export, then the shortcut is exported with all its dependent objects, including the base object. If you choose to materialize the shortcut, the shortcut is materialized and the base object referenced through the shortcut is not included.

20.2.7.2 Performing a Smart Import

To perform a Smart Import:

  1. Select Import... from the Designer, Topology, Security or Operator Navigator toolbar menu.

  2. In the Import Selection dialog, select Smart Import.

  3. Click OK.

    The Smart Import wizard opens.

  4. On the first screen, Step 1 - File Selection, specify the import settings as follows:

    1. In the File Selection field, enter the location of the Smart Export file to import.

    2. Optionally, select a response file to replay a previous Smart Import wizard execution by presetting all fields from the Response File field.

    3. Click Next to move to the next step of the Smart Import Wizard.

      Oracle Data Integrator launches a matching process that verifies whether the repository contains matching objects for each of the potential objects to import.

  5. On the second screen, Step 2 - Import Actions, verify the result of the matching process and fix any issues. The number of detected issues is displayed in the first line of this screen.

    Note that the Smart Import wizard suggests default values for every field.

    1. In the Object Match Details section, expand the nodes in the Import Object column to navigate to the objects available to import during this Smart Import.

    2. In the Action column, select the action to perform on the object during the import operation. Possible values are listed in Table 20-4.

      Table 20-4 Actions during Import


      Merge

      For containers, this means overwrite the target container with the source container, and then loop over the children for merging. Each child may have a different action. Child FCOs (first-class objects) that are not in the import file will not be deleted. The Merge action may also be used for Datastores, which will be merged at the SCO (second-class object) level.

      Overwrite

      Overwrite target object with source object. Any child objects remaining after import come from the source object. Note that this applies to all the child objects (If a project overwrites another, all the folders in this project will be replaced and any extra folders will be removed).

      Create Copy

      Create source object including renaming or modifying any fields needed to avoid conflict with existing objects of same name/id/code. This action preserves the consistency and relations from and to the imported objects.

      Reuse

      Do not import the object, yet preserve the ability to import all objects related to it and link them to the target object. Basically, this corresponds to overwriting the source object with the matched target object.

      Ignore

      Do not process the source object.


    3. In the Repository Object column, select the required repository object. This is the repository object that best matches the import object.

    4. If an issue, such as a broken link or a code conflict, has been detected during the matching process, a warning icon is displayed in the Issues column. View the Issue Details section for more details on the issue.


      Note:

      The Next button is disabled until all critical issues are fixed.

    5. The table in the Issue Details section lists the issues that have been detected during the matching process. To fix an issue, select the action to perform in the Action column. Table 20-5 describes the possible actions.

      Table 20-5 Possible actions to fix an issue


      Ignore

      Not possible on critical issues

      Change

      If the collision is on an ID, the new value is always NEW_ID. If a name or code collision is detected, specify the new value in the Fix field.

      Do not change

      For value changed issues, the value in the matching target object will be kept.

      Fix Link

      For broken links, click Search in the Fix field.



      Note:

      Oracle Data Integrator provides a default working solution for every issue. However, missing references may still result during the actual import process depending on the choices you made for the import actions.

    6. In the Fix column, specify the fix. For example, for broken links, click Search and select the target object in the Broken Link Target Object Selection dialog.

    7. Click Next to move to the next step of the Smart Import Wizard.

  6. On the third screen, Step 3 - Summary, review the import file name and any remaining issues.

    1. In the File Selection field, verify the import file name.

    2. If the Smart Import still contains unresolved warnings, they are displayed on this screen. Note that critical issues are not displayed here. To fix them, click Back.

    3. Optionally, select Save Response File to create a response file that you can reuse in another import to replay this Smart Import wizard execution by presetting all fields.

    4. Click Finish to launch the Smart Import and to finalize the Smart Import Wizard.

You can review the results of the Smart Import in the Smart Import report.

20.3 Repository-Level Export/Import

At repository level you can export and import the master repository and the work repositories.

20.3.1 Exporting and Importing the Master Repository

The master repository export/import procedure allows you to transfer the whole repository (Topology and Security domains included) from one repository to another.

It can be performed in Topology Navigator, to import the exported objects in an existing repository, or while creating a new master repository.

Exporting the Master Repository in Topology Navigator

The objects that are exported when exporting the master repository are objects, methods, profiles, users, languages, versions (if option selected), solutions (if option selected), open tools, password policies, entities, links, fields, lookups, technologies, datatypes, datatypes conversions, logical agents, contexts and the child objects.

To export a master repository:

  1. Select Export... from the Designer, Topology, Security or Operator Navigator toolbar menu.

  2. In the Export Selection dialog, select Export the Master Repository.

  3. Click OK.

  4. In the Export Master Repository dialog, set the Export parameters as indicated in Table 20-2.

    The master repository and its topology and security settings are either exported as .xml files directly into the directory, or as a zip file containing .xml files. If you want to generate a zip file, you need to select Export to zip file and enter the name of the zip file in the Zip File Name field.

  5. Select Export versions, if you want to export all stored versions of objects that are stored in the repository. You may wish to unselect this option in order to reduce the size of the exported repository, and to avoid transferring irrelevant project work.

  6. Select Export solutions, if you want to export all stored solutions that are stored in the repository. You may wish to unselect this option for similar reasons.

  7. Click OK.

The export files are created in the specified export directory.

Importing the Master Repository

To import the exported master repository objects into an existing master repository:

  1. Select Import... from the Designer, Topology, Security or Operator Navigator toolbar menu.

  2. In the Import Selection dialog, select Import the Master Repository.

  3. Click OK.

  4. In the Import dialog:

    1. Select the Import Type. Refer to Section 20.1.3, "Import Types" for more information.

    2. Select whether you want to import the files From a Folder or From a ZIP file.

    3. Enter the file import folder or zip file.

  5. Click OK.

The master repository now contains the objects you have imported.


Note:

The import is not allowed if the source and target repositories have the same internal ID. If the target repository has the same ID, you can renumber the repository; this operation should be performed with caution. See Section 20.1.1, "Internal Identifiers (IDs)" for more information on the risks. How to renumber a repository is described in Section 3.8.3, "Renumbering Repositories".

Creating a new Master Repository using a previous Master export

To create a new master repository using an export of another master repository:

  1. Open the New Gallery by choosing File > New.

  2. In the New Gallery, in the Categories tree, select ODI.

  3. Select from the Items list the Master Repository Import Wizard.

  4. Click OK.

    The Master Repository Import Wizard appears.

  5. Specify the Database Connection parameters as follows:

    • Login: User ID/login of the owner of the tables you have created for the master repository

    • JDBC Driver: The driver used to access the technology, which will host the repository.

    • JDBC URL: The complete path for the data server to host the repository.

      Note that the JDBC Driver and URL parameters are synchronized and that the default values are technology-dependent (see the example after this procedure).

    • User: The user id/login of the owner of the tables.

    • Password: This user's password.

    • DBA User: The database administrator's user name.

    • DBA Password: This user's password.

  6. Specify the Repository Configuration parameters as follows:

    • ID: A specific ID for the new master repository, rather than the default 0. This will affect imports and exports between repositories.


      WARNING:

      All master repositories should have distinct identifiers. Check that the identifier you are choosing is not the identifier of an existing repository.


    • Use a Zip File: If using a compressed export file, check the Use a Zip File box and select in the Export Zip File field the file containing your master repository export.

    • Export Path: If using an uncompressed export, select the directory containing the export in the Export Path field.

    • Technology: From the list, select the technology your repository will be based on.

  7. Click Test Connection to test the connection to your master repository.

    The Information dialog opens and informs you whether the connection has been established.

  8. Click Next.

  9. Specify the password storage details:

    • Select Use Password Storage Configuration specified in Export if you want to use the configuration defined in the export.

    • Select Use New Password Storage Configuration if you do not want to use the configuration defined in the export, and then select one of the following:

      • Internal Password Storage if you want to store passwords in the Oracle Data Integrator repository.

      • External Password Storage if you want to use the JPS Credential Store Framework (CSF) to store the data server and context passwords. Indicate the MBean Server Parameters to access the credential store as described in Table 24-2.

    Refer to Section 24.3.1, "Setting Up External Password Storage" for more information on password storage details.

  10. In the Master Repository Import Wizard click Finish to validate your entries.

A new repository is created and the exported components are imported in this master repository.
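
As an illustration of the Database Connection parameters in step 5, hypothetical values for a master repository hosted on an Oracle database could look like the following (host, port, and SID are placeholders to adapt to your environment):

    JDBC Driver: oracle.jdbc.OracleDriver
    JDBC URL:    jdbc:oracle:thin:@localhost:1521:ORCL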

20.3.2 Export/Import the Topology and Security Settings

Exporting then importing the topology or security allows you to transfer a domain from one master repository to another.

Exporting the Topology and Security Settings

The domains that can be exported are given below:

  • Topology: the full topology (logical and physical architectures including the local repository, data servers, hosts, agents, generic actions, technologies, datatypes, logical schemas, and contexts).

  • Logical Topology: technologies (connection, datatype or language information), logical agents, logical schemas, actions and action groups.

  • Security: objects, methods, users, profiles, privileges, password policies and hosts.

  • Execution Environment: technologies, data servers, contexts, generic actions, load balanced agents, physical schemas and agents.

To export the topology/security:

  1. Select Export... from the Designer, Topology, Security or Operator Navigator toolbar menu.

  2. In the Export Selection dialog, select one of the following:

    • Export the Topology

    • Export the Logical Topology

    • Export the Security Settings

    • Export the Execution Environment

  3. Click OK.

  4. In the Export dialog, specify the export parameters as indicated in Table 20-2.

    The topology and security settings are either exported as .xml files directly into the directory, or as a zip file containing .xml files. If you want to generate a zip file, you need to select Export to zip file and enter the name of the zip file in the Zip File Name field.

  5. Click OK.

The export files are created in the specified export directory.

Importing the Topology and Security Settings

To import a topology export:

  1. Select Import... from the Designer, Topology, Security or Operator Navigator toolbar menu.

  2. In the Import Selection dialog, select one of the following:

    • Import the Topology

    • Import the Logical Topology

    • Import Security Settings

    • Import the Execution Environment

  3. Click OK.

  4. In the Import dialog:

    1. Select the Import Mode. Refer to Section 20.1.3, "Import Types" for more information.

    2. Select whether to import the topology export from a Folder or a Zip File.

    3. Enter the file import directory.

  5. Click OK.

The specified files are imported into the master repository.

20.3.3 Exporting and Importing a Work Repository

Importing or exporting a work repository allows you to transfer all work repository objects from one repository to another.

Exporting a Work Repository

To export a work repository:

  1. Select Export... from the Designer, Topology, Security or Operator Navigator toolbar menu.

  2. In the Export Selection dialog, select Export the Work Repository.

  3. Click OK.

  4. In the Export dialog, set the Export parameters as indicated in Table 20-2.

    The work repository, with its models and projects, is either exported as .xml files directly into the directory, or as a zip file containing .xml files. If you want to generate a zip file, you need to select Export to zip file and enter the name of the zip file in the Zip File Name field.

  5. Click OK.

The export files are created in the specified export directory.

Importing a Work Repository

To import a work repository:

  1. Select Import... from the Designer, Topology, Security or Operator Navigator toolbar menu.

  2. In the Import Selection dialog, select Import the Work Repository.

  3. Click OK.

  4. In the Import dialog:

    1. Select the Import Mode. Refer to Section 20.1.3, "Import Types" for more information.

    2. Select whether to import the work repository from a Folder or a Zip File.

    3. Enter the file import directory.

  5. Click OK.

The specified files are imported into the work repository.

20.4 Exporting the Technical Environment

This feature produces a comma-separated values (.csv) file in the directory of your choice, containing the details of the technical environment. This information is useful for support purposes.

You can customize the format of this file.

To produce the technical environment file:

  1. Select Export... from the Designer, Topology, Security or Operator Navigator toolbar menu.

  2. In the Export Selection dialog, select Export the Technical Environment.

  3. Click OK.

  4. In the Technical environment dialog, specify the export parameters as indicated in Table 20-6:

    Table 20-6 Technical Environment Export Parameters

    • Export Directory: Directory in which the export file will be created.

    • File Name: Name of the .csv export file.

    • Advanced options: This set of options allows you to parameterize the output file format. It is recommended that you leave the default values.

    • Character Set: Encoding specified in the export file. This is the encoding parameter in the XML file header.

      <?xml version="1.0" encoding="ISO-8859-1"?>

    • Field codes: The first field of each record produced contains a code identifying the kind of information present on the row. You can customize these codes as necessary.

      • Oracle Data Integrator Information Record Code: Code used to identify rows that describe the current version of Oracle Data Integrator and the current user. This code is used in the first field of the record.

      • Master, Work, Agent, and Technology Record Code: Codes for rows containing information about the master repository, the work repositories, the running agents, or the data servers, their version, and so forth.

    • Record Separator and Field Separator: These separators define the characters used to separate records (lines) in the file, and fields within one record.


  5. Click OK.
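
This export can also be scripted using the OdiExportEnvironmentInformation tool. The following is a minimal sketch only; the parameter names shown are assumptions to verify against the ODI Tools Reference:

    OdiExportEnvironmentInformation "-TODIR=/temp/support" "-FILE_NAME=environment.csv"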

20.5 Exporting and Importing the Log

You can export and import log data for archiving purposes. See Section 22.3.3.4, "Exporting and Importing Log Data" for more information.


11 Working with Integration Interfaces

This chapter describes how to work with integration interfaces. An overview of the interface components and the Interface Editor is provided.

This chapter includes the following sections:

11.1 Introduction to Integration Interfaces

An interface consists of a set of rules that define the loading of a datastore or a temporary target structure from one or more source datastores.

Before creating an integration interface as described in Section 11.3, "Creating an Interface", you must first understand the key components of an integration interface and the Interface Editor. An overview of the components that you use to design an integration interface is provided in Section 11.1.1, "Components of an Integration Interface". The Interface Editor is described in Section 11.2, "Introduction to the Interface Editor".

11.1.1 Components of an Integration Interface

An integration interface is made up of and defined by the following components:

  • Target Datastore

    The target datastore is the element that will be loaded by the interface. This datastore may be permanent (defined in a model) or temporary (created by the interface).

  • Datasets

    One target is loaded with data coming from several datasets. Set-based operators (Union, Intersect, etc.) are used to merge the different datasets into the target datastore.

    Each Dataset corresponds to one diagram of source datastores and the mappings used to load the target datastore from these source datastores.

  • Diagram of Source Datastores

    A diagram of sources is made of source datastores - possibly filtered - related using joins. The source diagram also includes lookups to fetch additional information for loading the target.

    Two types of objects can be used as a source of an interface: datastores from the models, and interfaces. If an interface is used, its target datastore (temporary or not) is taken as the source.

    The source datastores of an interface can be filtered during the loading process, and must be put in relation through joins. Joins and filters are either copied from the models or defined for the interface. Joins and filters are implemented in the form of SQL expressions.

  • Mapping

    A mapping defines the transformations performed on one or several source columns to load one target column. These transformations are implemented in the form of SQL expressions. Each target column has one mapping per dataset. If a mapping is executed on the target, the same mapping applies for all datasets.

  • Staging Area

    The staging area is a logical schema into which some of the transformations (joins, filters and mappings) take place. It is by default the same schema as the target's logical schema.

    It is possible to locate the staging area in a different location (including one of the sources). This is the case when the target's logical schema is not suitable for this role; for example, when the target is a file datastore, as the file technology has no transformation capability.

    Mappings can be executed either on the source, target or staging area. Filters and joins can be executed either on the source or staging area.

  • Flow

    The flow describes how the data flows between the sources, the staging area if it is different from the target, and the target as well as where joins and filters take place. The flow also includes the loading and integration methods used by this interface. These are selected by choosing Loading and Integration Knowledge Modules (LKM, IKM).

  • Control

    An interface implements two points of control. Flow control checks the flow of data before it is integrated into the target, Post-Integration control performs a static check on the target table at the end of the interface. The check strategy for Flow and Post-Integration Control is defined by a Check Knowledge Module (CKM).

Interfaces also use components, such as the datastores and Knowledge Modules mentioned above, that should be created before the interface.

11.2 Introduction to the Interface Editor

The Interface Editor provides a single environment for designing integration interfaces. It enables you to create and edit integration interfaces.

Figure 11-1 Interface Editor


The Interface Editor consists of the sections described in Table 11-1:

Table 11-1 Interface Editor Sections

  • Designer Navigator (left side): Displays the tree views for projects, models, solutions, and other (global) components.

  • Source Diagram (middle): You drag the source datastores from the Models tree and interfaces from the Projects tree into the Source Diagram. You can also define and edit joins and filters from this diagram.

  • Source Diagram Toolbar (middle, above the Source Diagram): Contains the tools that can be used for the source diagram, as well as display options for the diagram.

  • Dataset Tabs (middle, below the Source Diagram): Datasets are displayed as tabs in the Interface Editor.

  • Interface Editor tabs (middle, below the Dataset tabs): These tabs are ordered according to the interface creation process: Overview, Mapping, Quick-Edit, Flow, Controls, Scenarios, and Execution.

  • Target Datastore Panel (upper right): You drag the target datastore from the Models tree in the Designer Navigator into the Target Datastore panel. The target datastore, with the mapping for each column, is displayed in this panel. To edit the datastore in the Property Inspector, select the datastore's title or a specific column. You can also create a temporary target for this interface from this panel.

  • Property Inspector (bottom): Displays properties for the selected object. If the Property Inspector does not display, select Property Inspector from the View menu.


11.3 Creating an Interface

Creating an interface follows a standard process that can vary depending on the use case. The following sequence of steps is usually performed when creating an interface, and can be used as a guideline to design your first interfaces:

  1. Create a New Interface

  2. Define the Target Datastore

  3. Define the Datasets

  4. Define the Source Datastores and Lookups

  5. Define the Mappings

  6. Define the Interface Flow

  7. Set up Flow Control and Post-Integration Control

  8. Execute the Integration Interface

Note that you can also use the Quick-Edit Editor to perform steps 2 to 5. See Section 11.4, "Using the Quick-Edit Editor" for more information.

11.3.1 Create a New Interface

To create a new interface:

  1. In Designer Navigator select the Interfaces node in the folder under the project where you want to create the interface.

  2. Right-click and select New Interface. The Interface Editor is displayed.

  3. On the Definition tab fill in the interface Name.

  4. Select a Staging Area and an Optimization Context for your interface.


    Note:

    The staging area defaults to the target. It may be necessary to put it on a different logical schema if the target does not have the required transformation capabilities for the interface. This is the case for File, JMS, and similar logical schemas. After defining the target datastore for your interface, you will be able to set a specific location for the Staging Area from the Overview tab by clicking the Staging Area Different From Target option and selecting a logical schema that will be used as the staging area.

    If your interface has a temporary target datastore, then the Staging Area Different From Target option is grayed out. In this case, the staging area as well as the target are one single schema, into which the temporary target is created. You must select here this logical schema.

    Oracle Data Integrator includes a built-in lightweight database engine that can be used when no database engine is available as a staging area (for example, when performing file to file transformations). To use this engine, select In_MemoryEngine as the staging area schema. This engine is suitable for processing small volumes of data only.

    The optimization context defines the physical organization of the datastores used for designing and optimizing the interface. This physical organization is used to group datastores into source sets, define the possible locations of transformations, and ultimately compute the structure of the flow. For example, if in the optimization context two datastores on two different logical schemas are resolved as located in the same data server, the interface will allow a join between them to be set on the source.


  5. Go to the Mapping tab to proceed. The steps described in Section 11.3.2, "Define the Target Datastore" to Section 11.3.5, "Define the Mappings" take place in the Mapping tab of the Interface Editor.


    Tip:

    To display the editor of a source datastore, a lookup, a temporary interface, or the target datastore that is used in the Mapping tab, you can right-click the object and select Open.

11.3.2 Define the Target Datastore

The target datastore is the element that will be loaded by the interface. This datastore may be permanent (defined in a model) or temporary (created by the interface in the staging area).

11.3.2.1 Permanent Target Datastore

To insert the permanent target datastore in an interface:

  1. In the Designer Navigator, expand the Models tree and expand the model or sub-model containing the datastore to be inserted as the target.

  2. Select this datastore, then drag it into the Target Datastore panel. The target datastore appears.

  3. In the Property Inspector, select the Context for this datastore if you want to target this datastore in a fixed context. By default, the datastore is targeted on the context into which the interface is executed. This is an optional step.

  4. If you want to target a specific partition of this target datastore, select in the Property Inspector the partition or sub-partition defined for this datastore from the list. This is an optional step.

Once you have defined your target datastore you may wish to view its data.

To display the data of the permanent target datastore of an interface:

  1. Right-click the title of the target datastore in the Target Datastore panel.

  2. Select Data...

The Data Editor containing the data of the target datastore appears. Data in a temporary target datastore cannot be displayed since this datastore is created by the interface.

11.3.2.2 Temporary Target Datastore

To add a temporary target datastore:

  1. In the Target Datastore panel, select the title of the target datastore <Temporary Target Datastore> to display the Property Inspector for the target datastore.

  2. On the Diagram Property tab of the Property Inspector, type in a Name for this datastore.

  3. Select the Context for this datastore if you want to target this datastore in a predefined context. By default, the datastore is targeted on the context into which the interface is executed. This is an optional step.

  4. Specify the Temporary Datastore Location. Select Work Schema or Data Schema if you wish to create the temporary datastore in the work or data schema of the physical schema that will act as the staging area. See Chapter 4, "Setting-up the Topology" for more information on schemas.


    Note:

    The temporary target datastore will be created only if you activate the IKM option CREATE_TARG_TABLE when defining the flow.

  5. Go to the Overview tab and select the logical schema into which this temporary target datastore is created.

The temporary target datastore is created without columns. They must be added to define its structure.

To add a column to a temporary target datastore:

  1. In the Target Datastore panel, right-click the title bar that shows the name of the target datastore.

  2. Select Add Column.

  3. A new empty column appears in the Target Datastore panel. Select this new column.

  4. In the Diagram Property tab of the Target Mapping Property Inspector, give the new column definition in the Target Column field group. You must define the column Name, Datatype, Length, and Scale.

To delete a column from a temporary target datastore:

  1. In the Target Datastore panel, right-click the column to be deleted.

  2. Select Delete.

To add one or several columns from a source datastore to a temporary target datastore:

  1. Add the source datastore as described in Section 11.3.4, "Define the Source Datastores and Lookups".

  2. In the Source Diagram, select the source datastore columns you wish to add.

  3. Right-click and select Add Column to Target Table.

  4. The columns are added to the target datastore. Data types are set automatically.

To add all of the columns from a source datastore to a temporary target datastore:

  1. Add the source datastore.

  2. In the Source Diagram, select the title of the entity representing the source datastore.

  3. Right-click and select Add to Target.

  4. The columns are added to the Target Datastore. Data types are set automatically.

11.3.2.3 Define the Update Key

If you want to use update or flow control features in your interface, it is necessary to define an update key on the target datastore.

The update key identifies each record to update or check before insertion into the target. This key can be a unique key defined for the target datastore in its model, or a group of columns specified as a key for the interface.

To define the update key from a unique key:

  1. In the Target Datastore panel, select the title bar that shows the name of the target datastore to display the Property Inspector.

  2. In the Diagram Property tab, select the Update Key from the list.


Note:

Only unique keys defined in the model for this datastore appear in this list.

You can also define an update key from the columns if:

  • You don't have a unique key on your datastore. This is always the case on a temporary target datastore.

  • You want to specify the key regardless of already defined keys.

When you define an update key from the columns, you select manually individual columns to be part of the update key.

To define the update key from the columns:

  1. Unselect the update key, if it is selected. This step applies only for permanent datastores.

  2. In the Target Datastore panel, select one of the columns that is part of the update key to display the Property Inspector.

  3. In the Diagram Property tab, check the Key box. A key symbol appears in front of the column in the Target Datastore panel.

  4. Repeat the operation for each column that is part of the update key.

11.3.3 Define the Datasets

A dataset represents the data flow coming from a group of datastores. Several datasets can be merged into the interface target datastore using set-based operators such as Union and Intersect. The support for datasets as well as the set-based operators supported depend on the capabilities of the staging area's technology.

You can add, remove, and order the datasets of an interface and define the operators between them in the DataSets Configuration dialog. Note that the set-based operators are always executed on the staging area.

When designing the integration interface, the mappings for each dataset must be consistent; this means that each dataset must have the same number of target columns mapped.
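
Conceptually, with two hypothetical datasets merged by a Union operator (the actual SQL is generated by the Knowledge Modules), the resulting flow corresponds to a pattern such as:

    -- Dataset 1 (simplified)
    SELECT CUST_ID, CUST_NAME FROM CUSTOMER_US
    UNION
    -- Dataset 2 (simplified)
    SELECT CUST_ID, CUST_NAME FROM CUSTOMER_EU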

To create a new dataset:

  1. In the Source Diagram toolbar, click Add/Remove DataSet... to display the DataSet Configuration dialog.

  2. Click Add New DataSet... A new line is added for the new dataset at the bottom of the list.

  3. In the DataSet Name field, give the name of the new dataset. This name will be displayed in the dataset tab.

  4. In the Operator field, select the set-based operator for your dataset. Repeat steps 2 to 4 if you wish to add more datasets.

  5. Click Close.

To arrange the order of the datasets:

  1. Select a dataset in the DataSet Configuration dialog.

  2. Click the Up and Down arrows to move the dataset up or down in the list.

To delete a dataset:

  1. Select a dataset in the DataSet Configuration dialog.

  2. Click Delete.

11.3.4 Define the Source Datastores and Lookups

The source datastores contain data used to load the target datastore. Two types of datastores can be used as an interface source: datastores from the models and temporary datastores that are the target of an interface.

When using a temporary datastore that is the target of another interface as a source or as a lookup table, you can choose:

  • To use a persistent temporary datastore: You will run a first interface creating and loading the temporary datastore, and then a second interface sourcing from it. In this case, you would typically sequence the two interfaces in a package.

  • Not to use a persistent datastore: The second interface generates a sub-select corresponding to the loading of the temporary datastore (see the sketch after this list). This option is not always available, as it requires all datastores of the source interface to belong to the same data server (for example, the source interface must not have any source sets). You activate this option by selecting Use Temporary Interface as Derived Table on the source. Note the following when using a temporary interface as derived table:

    • The generated sub-select syntax can be either a standard sub-select syntax (default behavior) or the customized syntax from the IKM used in the first interface.

    • All IKM commands except the one that defines the derived-table statement (option Use current command for Derived Table sub-select statement) are ignored. Because of this limitation, temporary index management, for example, is not supported.
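
As a rough illustration (hypothetical ORDERS source and simplified syntax; the actual code is generated by ODI), the derived-table option conceptually inlines the loading of the temporary datastore as a sub-select:

    SELECT T.CUST_ID, T.TOTAL_AMOUNT
    FROM (
          -- loading of the temporary target datastore, inlined as a derived table
          SELECT CUST_ID, SUM(AMOUNT) AS TOTAL_AMOUNT
          FROM   ORDERS
          GROUP BY CUST_ID
         ) T
    WHERE T.TOTAL_AMOUNT > 0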

The source datastores of an interface can be filtered during the loading process and must be put in relation through joins. Joins and filters can be automatically copied from the model definitions and can also be defined for the interface.

A lookup is a datastore (from a model or the target datastore of an interface), called the lookup table, associated with a source datastore, the driving table, via a join expression, from which data can be fetched and used in mappings.

The lookup data is used in the mapping expressions. Lookup tables are added with the Lookup Wizard. Depending on the database, two syntaxes can be used for a lookup:

  • SQL Left-Outer Join in the FROM clause: The lookup is processed as a regular source and a left-outer join expression is generated to associate it with its driving table.

  • SQL expression in the SELECT clause: The lookup is performed within the select clause that fetches the data from the lookup table. This second syntax may sometimes be more efficient for small lookup tables.
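
For illustration only, with a hypothetical CUSTOMER driving table and COUNTRY lookup table (the actual code is generated by the Knowledge Modules), the two syntaxes correspond roughly to the following patterns:

    -- Lookup as a left-outer join in the FROM clause (simplified)
    SELECT C.CUST_ID, CTR.COUNTRY_NAME
    FROM   CUSTOMER C LEFT OUTER JOIN COUNTRY CTR
           ON (C.COUNTRY_ID = CTR.COUNTRY_ID)

    -- Lookup as an expression in the SELECT clause (simplified)
    SELECT C.CUST_ID,
           (SELECT CTR.COUNTRY_NAME
            FROM   COUNTRY CTR
            WHERE  CTR.COUNTRY_ID = C.COUNTRY_ID) AS COUNTRY_NAME
    FROM   CUSTOMER C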

11.3.4.1 Define the Source Datastores

To add a permanent-type source datastore to an interface:

  1. In the Designer Navigator, expand the Models tree and expand the model or sub-model containing the datastore to be inserted as a source.

  2. Select this datastore, then drag it into the Source Diagram. The source datastore appears in the diagram.

  3. In the Diagram Property tab of the Property Inspector, modify the Alias of the source datastore. The alias is used to prefix column names. This is an optional step that improves readability of the mapping, joins and filter expressions.

  4. Select the Context for this datastore if you want to source data from this datastore in a fixed context. By default, the datastore is accessed in the context into which the interface is executed. This is an optional step.

  5. If you want to source from a specific partition of this datastore, select the partition or sub-partition defined for this datastore from the list. This is an optional step.


Caution:

If filters are defined on the datastore in the model, or if references exist between this datastore and datastores already in the diagram, they appear along with the datastore. These references and filters are copied as joins and filters in the interface. They are not links to the references and filters from the model. Therefore, modifying a reference or a filter in a model does not affect the join or filter in the interface, and vice versa.


Note:

If the source datastore is journalized, it is possible to use only the journalized data in the interface flow. Check the Journalized Data Only box in the source datastore properties. A Journalizing filter is automatically created in the diagram. See Chapter 7, "Working with Changed Data Capture" for more information.

To add a temporary-type source datastore to an interface:

  1. In the Designer Navigator, expand the Projects tree and expand the project containing the interface to be inserted as a source.

  2. Select this interface, then drag it into the Source Diagram. The source datastore appears in the diagram.

  3. In the Diagram Property tab of the Property Inspector, modify the Alias of the source datastore. The alias is used to prefix column names. This is an optional step that improves readability of the mapping, joins and filter expressions.

  4. If you want this interface to generate a sub-select corresponding to the loading of the temporary datastore, check the Use Temporary Interface as Derived Table (Sub-Select) box. If this box is not checked, make sure to run the interface loading the temporary datastore before running the current interface.

To delete a source datastore from an interface:

  1. Right-click the title of the entity representing the source datastore in the Source Diagram.

  2. Select Delete.

  3. Click OK in the Confirmation dialog.

The source datastore disappears, along with the associated filters and joins. Note that if this source datastore contained columns that were used in mappings, these mappings will be in error.

To display the data or the number of rows of a source datastore of an interface:

  1. Right-click the title of the entity representing the source datastore in the Source Diagram.

  2. Select Number of Lines to display the number of rows in this source datastore or Display Data to display the source datastore data.

A window containing the number of rows or the data of the source datastore appears.

11.3.4.2 Define Lookups

To add a lookup to an interface:

  1. From the Source Diagram toolbar menu, select Add a new Lookup. The Lookup Tables Wizard opens.

  2. In the Lookup Table Wizard select your Driving Table from the left pane. Source datastores for the current diagram appear here. Note that lookups do not appear in the list.

  3. From the tree in the Lookup Table pane on the right, do one of the following:

    • From the Datastores tab, select a datastore from a model to use as a lookup table.

    • From the Interfaces tab, select an interface whose target will be used as the lookup table. If this target is temporary and you want this interface to generate a sub-select corresponding to the loading of the temporary datastore, check the Use Temporary Interface as Derived Table (Sub-Select) box. If this box is not checked, make sure to run the interface loading the temporary datastore before running the current interface.

  4. Modify the Alias of the lookup table. The alias is used to prefix column names. This is an optional step that improves readability of the expressions.

  5. Click Next.

  6. On the left pane, select one or several source columns from the driving table you wish to join.

  7. On the right pane, select one or several columns of the lookup table you wish to join.

  8. Click Join. The join condition appears in the Lookup condition text field. You can edit the join condition in this field.

  9. Specify the Lookup options:

    • Execute on: Execution location (Source or Staging Area) of the lookup.

    • Lookup type: Indicates whether to use SQL left-outer join in the FROM clause or SQL expression in the SELECT clause during the SQL code generation.

  10. Click Finish. Your lookup appears in the Source Diagram of your dataset.


    Note:

    In order to use columns from this lookup, you need to expand the graphical artifact representing it. Right-click the lookup icon in the diagram and select View As > Symbolic.

To edit Lookup tables:

  1. Select a Lookup in the Source Diagram of your dataset. The Lookup table properties are displayed in the Property Inspector.

  2. Edit the lookup properties in the Property Inspector.

You cannot change the driving and lookup tables from here. To change these, you must delete the lookup and recreate it.

To delete a Lookup table:

  1. Select a Lookup in the Source Diagram of your dataset.

  2. Right-click and select Delete.

11.3.4.3 Define Filters on the Sources

To define a filter on a source datastore:

  1. In the Source Diagram, select one or several columns in the source datastore you want to filter, and then drag and drop these columns onto the source diagram. A filter appears. Click this filter to open the Property Inspector.

  2. In the Diagram Property tab of the Property Inspector, modify the Implementation expression to create the required filter. You may call the Expression Editor by clicking the Launch Expression Editor button. The filter expression must be in the form of an SQL condition. For example, if you want to keep in the CUSTOMER table (that is, the source datastore with the CUSTOMER alias) only those customers whose NAME is not null, the expression would be CUSTOMER.NAME IS NOT NULL.

  3. Select the execution location: Source or Staging Area.

  4. Click Check the Expression in the DBMS to validate the expression.

  5. Check the Active Filter box to enable or disable this filter. It is enabled by default.

  6. If you want ODI to automatically generate a temporary index to optimize the execution of the filter, select the index type to create from the Create Temporary Index list. This step is optional.


    Note:

    The creation of temporary indexes may be a time-consuming operation in the overall flow. It is advised to review the execution statistics and to compare the execution time saved by the indexes to the time spent creating them.

To delete a filter on a source datastore:

  1. In the Source Diagram, select the filter.

  2. Right-click and select Delete.

To display the data or the number of rows resulting from a filter:

  1. In the Source Diagram, select the filter.

  2. Right-click and select Number of Lines to display the number of rows after the filter or Display Data to display the filtered data.

A window containing the data or the number of rows after the filter appears.

11.3.4.4 Define Joins between Sources

To create a join between the source datastores of an interface:

  1. In the Source Diagram, select a column in the first source datastore to join, and drag and drop this column on a column in the second source datastore to join. A join linking the two datastores appears. Click this join to open the Property Inspector.

  2. In the Diagram Property tab of the Property Inspector, modify the Implementation expression to create the required join. You may call the Expression Editor by clicking the Launch Expression Editor button. The join expression must be in the form of an SQL expression (see the example after this procedure).

  3. Select the execution location: Source or Staging Area.

  4. Optionally, you can click Check the Expression in the DBMS to validate the expression.

  5. Select the type of join (right/left, inner/outer, cross, natural). The text describing which rows are retrieved by the join is updated.

  6. If you want to use an ordered join syntax for this join, check the Ordered Join (ISO) box and then specify the Order Number into which this join is generated. This step is optional.

  7. Check the Active Clause box to enable or disable this join. You can disable a join for debugging purposes. It is enabled by default.

  8. If you want ODI to automatically generate temporary indexes to optimize the execution of this join, select the index type to create from the Temporary Index On lists. This step is optional.


    Note:

    The creation of temporary indexes may be a time-consuming operation in the overall flow. It is advised to review the execution statistics and to compare the execution time saved by the indexes to the time spent creating them.
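
As a minimal example with hypothetical CUSTOMER and ORDERS aliases, a join expression relating the two datastores on the customer identifier would be:

    CUSTOMER.CUST_ID = ORDERS.CUST_ID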

To delete a join between source datastores of an interface:

  1. In the Source Diagram, select the join.

  2. Right-click and select Delete.

To display the data or the number of rows resulting from a join:

  1. In the Source Diagram, select the join.

  2. Right-click and select Number of Lines to display the number of rows returned by the join or Display Data to display the result of the join.

A window containing the data or the number of rows resulting from the join appears.

11.3.5 Define the Mappings

A mapping defines the transformations on one or several source columns to load one target column.

Empty mappings are automatically filled by column name matching when you add a source or target datastore. A user-defined mapping always takes precedence over the automatic mapping.
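
For illustration, with hypothetical CUSTOMER source columns, a mapping loading a target CUST_NAME column could combine and normalize two source columns with an SQL expression such as:

    UPPER(CUSTOMER.FIRST_NAME) || ' ' || UPPER(CUSTOMER.LAST_NAME)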

To regenerate the automatic mapping by column name matching:

  1. Right-click the target datastore.

  2. Select Redo Auto Mapping.

The target datastore columns are automatically mapped on the source datastores' columns with the same name.

To define the mapping of a target column:

  1. In the Target Datastore Panel, select the column of the target datastore to display the Property Inspector.

  2. In the Diagram Property tab of the Property Inspector, modify the Implementation to create the required transformation. The columns of all the tables in the model can be dragged and dropped into the text. You may call the Expression Editor by clicking Launch Expression Editor.

  3. Optionally, click Check the expression in the DBMS to validate the expression.

  4. Select the execution location: Source, Target or Staging Area. Some limitations exist when designing mappings. When a mapping does not respect these limitations, a red cross icon appears on the target column in the Target Datastore Panel. For example:

    • Mappings that contain constants cannot be mapped on the source without having selected a source datastore.

    • Mappings that contain reference source columns cannot be mapped on the target.

    • A mandatory column should be mapped.

    • A column mapped in one dataset must be mapped in all other datasets.

  5. Check the Update boxes if you want the mapping to be executed in Insert or Update operations. You can also check the UD1 to UD10 boxes to enable KM-specific options on columns. Use these optional boxes only if the Knowledge Module documentation indicates it; otherwise, they are ignored.

  6. Check Active Mapping if you want this mapping to be used for the execution of the interface. Note that if you enter a mapping text in a disabled mapping, this mapping will automatically be enabled.


Tip:

Before proceeding, you can check the consistency and errors in your diagram by clicking the Display Interface Errors Report in the Source Diagram Toolbar. This report will show you errors that may exist in your interface such as mappings incorrectly located.

At this stage, you may receive some errors because the Knowledge Modules are not selected yet for this interface.


11.3.6 Define the Interface Flow

In the Flow tab, you define the loading and integration strategies for mapped data. Oracle Data Integrator automatically computes the flow depending on the configuration in the interface's diagram. It proposes default KMs (global and project KMs) for the data flow. The Flow tab enables you to view the data flow and select the KMs used to load and integrate data.

In the flow, the following items appear:

  • Source Sets: Source datastores that are within the same dataset, are located on the same physical data server, and are joined with joins located on the Source are grouped into a single source set in the flow diagram. A source set represents a group of datastores that can be extracted at the same time.

  • DataSets: Datasets appear as yellow boxes in the Staging Area.

  • Staging Area: It appears as a box that includes the different datasets, the target (if located on the same data server), and possibly some of the sources (if located on the same data server).

  • Target: It appears as a separate box if it is located in a different schema from the staging area (If the Staging Area Different from Target option is selected).

You use the following KMs in the flow:

  • LKM: LKMs define how data is moved. One LKM is selected for each Source Set to move data from the sources to the staging area. An LKM can also be selected to move data from the Staging Area (when different from the Target) to the Target, when a single-technology IKM is selected for the Staging Area.

  • IKM: IKMs define how data is integrated into the target. One IKM is typically selected on the Target. When the staging area is different from the target, the selected IKM can be a multi-technology IKM that moves and integrates data from the Staging Area into the Target.


Note:

Only global KMs or KMs that have already been imported into the project can be selected in the interface. Make sure that you have imported the appropriate KMs in the project before proceeding.

To change the LKM in use:

  1. In the Flow tab, select one of the Source Sets or the Staging Area (if it is not in the Target group) by clicking its title. The Property Inspector opens for this object.

  2. If you are working on a Source Set, change the Name of this source set. This step is optional and improves readability of the flow.

  3. Select an LKM from the LKM Selector list.

  4. KMs are set with default options that work in most use cases. You can optionally modify the KM Options.

    Note that when switching from one KM to another, the options of the previous KM are retained when their names match (homonymy). By changing KMs several times, you might lose custom KM option values.

To change the IKM in use:

  1. In the Flow tab, select the Target by clicking its title. The Property Inspector opens for this object.

  2. In the Property Inspector, select an IKM from the IKM Selector list.

  3. Check the Distinct option if you want to automatically apply a DISTINCT statement on your data flow and avoid possible duplicate rows.

  4. KMs are set with default options that work in most use cases. You can optionally modify the KM Options.

    Note that when switching from one KM to another, the options of the previous KM are retained when their names match (homonymy). By changing KMs several times, you might lose custom KM option values.

    An important option to set is FLOW_CONTROL. This option triggers flow control, which you must then set up as described in Section 11.3.7, "Set up Flow Control and Post-Integration Control".


Note:

Knowledge modules with an Incremental Update strategy, as well as flow control, require that you set an update key for the target datastore of the interface.


Note:

For more information on the KMs and their options, refer to the KM description and to the Oracle Fusion Middleware Connectivity and Knowledge Modules Guide for Oracle Data Integrator.

11.3.7 Set up Flow Control and Post-Integration Control

In an integration interface, it is possible to set two points of control. Flow Control checks the data in the incoming flow before it gets integrated into the target, and Post-Integration Control performs a static check on the target datastore at the end of the interface.

11.3.7.1 Set up Flow Control

The flow control strategy defines how data is checked against the constraints defined on the target datastore before being integrated into this datastore. It is defined by a CKM. In order to have the flow control running, you must set the FLOW_CONTROL option in the IKM to true. Flow control also requires that an update key is selected on the target datastore of this interface. Refer to Section 11.3.2.3, "Define the Update Key" for more information.

To define the CKM used in an interface:

  1. In the Controls tab of the interface, select a CKM from the CKM Selector list.

  2. Set the KM Options.

  3. Select the Constraints to be checked.

  4. Fill in the Maximum number of errors allowed. Note that if you leave this field empty, an infinite number of errors is allowed. The interface stops during the flow control (if any) or the post-integration control (if any) if the allowed number of errors is reached.

  5. Check the % box if you want the interface to fail when a percentage of errors is reached during flow or post-integration control, rather than a fixed number of errors. This percentage is calculated with the following formula:

    errors_detected * 100 / checked_rows
     
    

    where:

    • checked_rows is the number of checked rows during flow and post-integration control.

    • errors_detected is the number of errors detected during flow and post-integration control.

    This formula is calculated at the end of the execution of the interface. If the result of this formula is greater than the indicated percentage, the interface status will be in error. If the interface fails during a flow control, no changes are performed on the target. If the interface fails after a post-integration control, the changes performed on the target are not committed by the Knowledge Module.
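
    For example, with hypothetical figures: if 10,000 rows are checked and 150 errors are detected, the result is 150 * 100 / 10000 = 1.5. With an allowed percentage of 1, the interface ends in error; with an allowed percentage of 2, it does not.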

11.3.7.2 Set up Post-Integration Control

The post-integration control strategy defines how data is checked against the constraints defined on the target datastore. This check takes place once the data is integrated into the target datastore. It is defined by a CKM. In order to have the post-integration control running, you must set the STATIC_CONTROL option in the IKM to true. Post-integration control requires that a primary key is defined in the data model for the target datastore of your interface.

Concerning the maximum number of errors allowed, the same behavior applies as for flow control.

Post-integration control uses the same CKM as the flow control.

11.3.8 Execute the Integration Interface

Once the interface is created, it is possible to execute it.

To run an interface:

  1. While editing the interface, click Execute in the toolbar.

  2. In the Execution dialog, select the execution parameters:

    • Select the Context into which the interface must be executed.

    • Select the Logical Agent that will run the interface.

  3. Click OK.

  4. The Session Started Window appears.

  5. Click OK.

11.4 Using the Quick-Edit Editor

You can use the Quick-Edit Editor to perform, in a non-graphical form, the same actions as on the Mapping tab of the Interface Editor.

The properties of the following components are displayed in tabular form and can be edited in the Quick-Edit Editor:

  • Sources

  • Lookups

  • Joins

  • Filters

  • Mappings

Note that components already defined on the Mapping tab of the Interface Editor are displayed in the Quick-Edit Editor and that the components defined in the Quick-Edit Editor will also be reflected in the Mapping tab.

11.4.1 Adding and Removing a Component

With the Quick-Edit Editor, you can add or remove components of an integration interface.

11.4.1.1 Adding Components

To add a source, lookup, join, filter, or temporary target column with the Quick-Edit Editor:

  1. In the Interface Editor, go to the Quick-Edit tab.

  2. From the Select DataSet list, select the dataset to which you want to add the new components.

  3. Expand the section of the components to add.

  4. From the toolbar menu, select Add.

  5. The next tasks depend on the type of component you are adding:

    If you are adding a new temporary target column, a new line representing the temporary target column is added to your target datastore table. You can directly modify the cells of this temporary target column in the target datastore table according to your needs.

    If you are adding a source, lookup, join, or filter, a wizard will guide you through the next steps.

Add Sources Wizard

Use the Add Sources Wizard to add the sources of your Interfaces. You can add datastores or integration interfaces as sources.

To add a datastore as a source of the Interface:

  1. Select the Datastores tab.

    The Add Sources Wizard displays the list of datastores with their Models and Model folders that can be used as a source of the Interface.

  2. From the list, select the datastore that you want to add as a source of the Interface.

    Note that you can browse through the list or filter this list by entering a partial or complete name of the datastore in the search field.

  3. Modify the alias of the datastore (optional).

  4. Click OK.

To add an integration interface as a source of the Interface:

  1. Select the Interfaces tab.

    The Add Sources Wizard displays the list of Interfaces.

  2. From the list, select the Interface that you want to add as a source of the Interface.

    Note that you can browse through the list or filter this list by entering a partial or complete name of the Interface in the search field.

  3. Modify the alias of the Interface (optional).

  4. Click OK.

Lookup Tables Wizard

Use the Lookup Tables Wizard to add lookup tables to your integration interface. For more information, see Section 11.3.4.2, "Define Lookups".

Join Table Wizard

Use the Join Table Wizard to create joins between the source datastores of an interface.

To create a join:

  1. From the Left Source list in the Specify Join Criteria section, select the source datastore that contains the left column for your join.

  2. From the Right Source list, select the source datastore that contains the right column for your join.

  3. Select the left source and right source column and click Join. The join condition is displayed in the Join Condition field.

  4. You can modify the join condition to create the required join. Note that the join expression must be in the form of an SQL expression. You may call the Expression Editor by clicking Launch Expression Editor to modify the join condition.

  5. Select the execution location: Source or Staging Area.

  6. Select the type of join you want to create: Inner Join, Cross, Natural, Left Outer, Right Outer, or Full. The text describing which rows are retrieved by the join is updated.

  7. Click OK.

Filter Table Wizard

Use the Filter Table Wizard to define the filter criteria of your source datastore.

To define a filter on a source datastore:

  1. From the source list, select the source datastore you want to filter.

  2. From the columns list, select the source column on which you want to create the filter. The filter condition is displayed in the Filter Condition field.

  3. You can modify this filter condition to create the required filter. You may call the Expression Editor by clicking Launch Expression Editor. Note that the filter expression must be in the form of an SQL condition.

  4. Select the execution location: Source or Staging Area.

  5. Click OK.

11.4.1.2 Removing Components

To remove a source, lookup, join, filter, or temporary target column with the Quick-Edit Editor:

  1. In the Interface Editor, go to the Quick-Edit tab.

  2. From the Select DataSet list, select the dataset from which you want to remove the components.

  3. Expand the section of the components to remove.

  4. Select the lines you want to remove.

  5. From the toolbar menu, select Remove.

    The selected components are removed.

11.4.2 Editing a Component

To edit the sources, lookups, joins, filters, mappings or target column properties with the Quick-Edit Editor:

  1. In the Interface Editor, go to the Quick-Edit tab.

  2. From the Select DataSet list, select the dataset that contains the components to modify.

  3. Expand the section of the component to modify.

  4. Modify the table entry either by selecting or entering a new value.

Performing Mass Updates

Mass updates allow you to quickly update several component properties at a time. You can perform mass updates in the Quick-Edit Editor using the Copy-Paste feature in the component tables.


Note:

The Copy-Paste feature is provided for text cells, drop-down lists, and checkboxes.

To perform a mass update of component properties:

  1. In the component table, select the cell that contains the value you want to apply to other cells.

  2. Copy the cell value.

  3. Select multiple cells in the same column.

  4. Paste the copied value.

The copied value is set to all selected cells.

11.4.3 Adding, Removing, and Configuring Datasets

You can create, remove, and configure datasets with the Quick-Edit Editor.

To create, remove, and configure datasets with the Quick-edit Editor:

  1. From the Select DataSet list, select Manage DataSets...

  2. The DataSets Configuration dialog is displayed. Define the datasets as described in Section 11.3.3, "Define the Datasets".

11.4.4 Changing the Target DataStore

You can change the target datastore of your integration interface with the Quick-Edit Editor.

To change the target datastore of your Interface with the Quick-Edit Editor:

  1. In the Interface Editor, go to the Quick-Edit tab.

  2. Expand the Mappings section.

  3. Click Add or Modify Target Datastore.

  4. In the Add or Modify Target Datastore Dialog, do one of the following:

    • If you want to create a temporary target datastore, select Use Temporary Target and enter the name of the new temporary target datastore.

    • If you want to use a permanent target datastore, select the datastore that you want to add as the target of the Interface from the list.

      Note that you can browse through the list or filter this list by entering a partial or complete name of the datastore in the search field.

  5. Click OK.

11.4.5 Customizing Tables

There are two ways to customize the tables of the Quick-Edit Editor:

  • From the table toolbar, select Select Columns and then, from the drop-down menu, select the columns to display in the table.

  • Use the Customize Table Dialog.

    1. From the table toolbar, select Select Columns.

    2. From the drop-down menu, select Select Columns...

    3. In the Customize Table Dialog, select the columns to display in the table.

    4. Click OK.

11.4.6 Using Keyboard Navigation for Common Tasks

This section describes the keyboard navigation in the Quick-Edit Editor.

Table 11-2 shows the common tasks and the keyboard navigation used in the Quick-Edit Editor.

Table 11-2 Keyboard Navigation for Common Tasks

  • Arrow keys: Move one cell up, down, left, or right.

  • TAB: Move to the next cell.

  • SHIFT+TAB: Move to the previous cell.

  • SPACEBAR: Start editing a text, display the items of a list, or change the value of a checkbox.

  • CTRL+C: Copy the selection.

  • CTRL+V: Paste the selection.

  • ESC: Cancel an entry in the cell.

  • ENTER: Complete a cell entry and move to the next cell, or activate a button.

  • DELETE: Clear the content of the selection (for text fields only).

  • BACKSPACE: Delete the content of the selection or delete the preceding character in the active cell (for text fields only).

  • HOME: Move to the first cell of the row.

  • END: Move to the last cell of the row.

  • PAGE UP: Move up to the first cell of the column.

  • PAGE DOWN: Move down to the last cell of the column.


11.5 Designing Integration Interfaces: E-LT- and ETL-Style Interfaces

In an E-LT-style integration interface, ODI processes the data in a staging area, which is located on the target. Staging area and target are located on the same RDBMS. The data is loaded from the source(s) to the target. To create an E-LT-style integration interface, follow the standard procedure described in Section 11.3, "Creating an Interface".

In an ETL-style interface, ODI processes the data in a staging area, which is different from the target. The data is first extracted from the source(s) and then loaded to the staging area. The data transformations take place in the staging area and the intermediate results are stored in temporary tables in the staging area. The data loading and transformation tasks are performed with the standard ELT KMs.

Oracle Data Integrator provides two ways of loading the data from the staging area to the target: using a multi-connection IKM, or using an LKM combined with a mono-connection IKM. Both approaches are described below.

Depending on the KM strategy that is used, flow and static control are supported. See "Designing an ETL-Style Interface" in the Oracle Fusion Middleware Connectivity and Knowledge Modules Guide for Oracle Data Integrator for more information.

Using a Multi-connection IKM

A multi-connection IKM allows updating a target where the staging area and sources are on different data servers. Figure 11-2 shows the configuration of an integration interface using a multi-connection IKM to update the target data.

Figure 11-2 ETL-Interface with Multi-connection IKM


See the chapter in the Oracle Fusion Middleware Connectivity and Knowledge Modules Guide for Oracle Data Integrator that corresponds to the technology of your staging area for more information on when to use a multi-connection IKM.

To use a multi-connection IKM in an ETL-style interface:

  1. Create an integration interface using the standard procedure as described in Section 11.3, "Creating an Interface". This section describes only the ETL-style specific steps.

  2. In the Definition tab of the Interface Editor, select Staging Area different from Target and select the logical schema of the source tables or another logical schema that is not a source or the target. This schema will be used as the staging area.

  3. In the Flow tab, select one of the Source Sets by clicking its title. The Property Inspector opens for this object.

  4. Select an LKM from the LKM Selector list to load from the source(s) to the staging area. See the chapter in the Oracle Fusion Middleware Connectivity and Knowledge Modules Guide for Oracle Data Integrator that corresponds to the technology of your staging area to determine the LKM you can use.

  5. Optionally, modify the KM options.

  6. In the Flow tab, select the Target by clicking its title. The Property Inspector opens for this object.

    In the Property Inspector, select an ETL multi-connection IKM from the IKM Selector list to load the data from the staging area to the target. See the chapter in the Oracle Fusion Middleware Connectivity and Knowledge Modules Guide for Oracle Data Integrator that corresponds to the technology of your staging area to determine the IKM you can use.

  7. Optionally, modify the KM options.

Using an LKM and a mono-connection IKM

If there is no dedicated multi-connection IKM, use a standard exporting LKM in combination with a standard mono-connection IKM. Figure 11-3 shows the configuration of an integration interface using an exporting LKM and a mono-connection IKM to update the target data. The exporting LKM is used to load the flow table from the staging area to the target. The mono-connection IKM is used to integrate the data flow into the target table.

Figure 11-3 ETL-Interface with an LKM and a Mono-connection IKM


Note that this configuration (LKM + exporting LKM + mono-connection IKM) has the following limitations:

  • Neither simple CDC nor consistent CDC are supported when the source is on the same data server as the staging area (explicitly chosen in the Interface Editor)

  • Temporary Indexes are not supported

See the chapter in the Oracle Fusion Middleware Connectivity and Knowledge Modules Guide for Oracle Data Integrator that corresponds to the technology of your staging area for more information on when to use the combination of a standard LKM and a mono-connection IKM.

To use an LKM and a mono-connection IKM in an ETL-style interface:

  1. Create an integration interface using the standard procedure as described in Section 11.3, "Creating an Interface". This section describes only the ETL-style specific steps.

  2. In the Definition tab of the Interface Editor, select Staging Area different from Target and select the logical schema of the source tables or a third schema.

  3. In the Flow tab, select one of the Source Sets.

  4. In the Property Inspector, select an LKM from the LKM Selector list to load from the source(s) to the staging area. See the chapter in the Oracle Fusion Middleware Connectivity and Knowledge Modules Guide for Oracle Data Integrator that corresponds to the technology of your staging area to determine the LKM you can use.

  5. Optionally, modify the KM options.

  6. Select the Staging Area. In the Property Inspector, select an LKM from the LKM Selector list to load from the staging area to the target. See the chapter in the Oracle Fusion Middleware Connectivity and Knowledge Modules Guide for Oracle Data Integrator that corresponds to the technology of your staging area to determine the LKM you can use.

  7. Optionally, modify the KM options.

  8. Select the Target by clicking its title. The Property Inspector opens for this object.

    In the Property Inspector, select a standard mono-connection IKM from the IKM Selector list to update the target. See the chapter in the Oracle Fusion Middleware Connectivity and Knowledge Modules Guide for Oracle Data Integrator that corresponds to the technology of your staging area to determine the IKM you can use.

  9. Optionally, modify the KM options.


2 Oracle Data Integrator QuickStart

The Oracle Data Integrator QuickStart will introduce you to the basic steps of creating an integration project with Oracle Data Integrator and show you how to put them immediately to work for you. It will help you get started with Oracle Data Integrator by pointing out only the basic functionalities and the minimum required steps.

This section is not intended to be used for advanced configuration, usage or troubleshooting.

2.1 Oracle Data Integrator QuickStart List

To perform the minimum required steps of an Oracle Data Integrator integration project, follow the ODI QuickStart list and go directly to the specified section of this guide.

Before performing the QuickStart procedure ensure that you have:

  1. Installed Oracle Data Integrator according to the instructions in the Oracle Fusion Middleware Installation Guide for Oracle Data Integrator.

  2. Set up the Oracle Data Integrator repository architecture. This means creating the repositories that store the metadata for the applications involved in the transformation and integration processing, the developed project versions, and all of the information required for their use (planning, scheduling, and execution reports). To set up the Oracle Data Integrator repository architecture:

    1. You need to create one master repository containing information on the topology of a company's IT resources, on security, and on version management of projects and data models. Refer to Section 3.3, "Creating the Master Repository" for more details.

      To test your master repository connection, refer to Section 3.4, "Connecting to the Master Repository".

    2. You need to create at least one Work Repository containing information about data models, projects, and their operations. Refer to Section 3.5, "Creating a Work Repository" for more details.

      To test your work repository connection and access this repository through Designer and Operator, refer to Section 3.6, "Connecting to a Work Repository".

ODI QuickStart list

The first part of the QuickStart (steps 1 to 3) consists of setting up the topology of your information system by defining the data servers, the schemas they contain, and the contexts. Refer to Chapter 1, "Introduction to Oracle Data Integrator" if you are not familiar with these concepts.

The second part of the QuickStart (step 4) consists of creating a model. A model is a set of datastores corresponding to data structures contained in a physical schema: tables, files, JMS messages, elements from an XML file are represented as datastores.

The third part of the QuickStart (steps 5 to 7) consists of creating your integration project. In this project you create integration interfaces to load data from one or several source datastores to one target datastore.

The last part of the QuickStart (steps 8 and 9) consists of executing the interface you have created in step 7 and viewing and monitoring the execution results.

  1. To connect source and target systems you need to declare data servers. A data server can be a database, a MOM, a connector or a file server and is always linked with one specific technology. How to create a data server corresponding to the servers used in Oracle Data Integrator is covered in Chapter 4, "Creating a Data Server".

  2. A physical schema is a defining component of a data server. It allows the datastores to be classified and the objects stored in the data server to be accessed. For each data server, create the physical schemas as described in Chapter 4, "Creating a Physical Schema". Use the default Global context.

  3. In Oracle Data Integrator, you perform developments on top of a logical topology. Refer to Chapter 1, "Introduction to Oracle Data Integrator" if you are not familiar with the logical architecture. Create the logical schemas and associate them with the physical schemas in the Global context. See Chapter 4, "Creating a Logical Schema" for more information.

  4. Integration interfaces use data models containing the source and target datastores. Data models are usually reverse-engineered from your data servers' metadata into the Oracle Data Integrator repository. Create a model as described in Section 5.2, "Creating and Reverse-Engineering a Model".

  5. The developed integration components are stored in a project. How to create a new project is covered in Section 9.2, "Creating a New Project".

  6. Integration interfaces use Knowledge Modules to generate their code. For more information refer to the E-LT concept in Chapter 1, "Introduction to Oracle Data Integrator". Before creating integration interfaces you need to import the Knowledge Modules corresponding to the technology of your data. How to import a Knowledge Module is covered in Section 20.2.6, "Importing Objects". Which Knowledge Modules you need to import is covered in the Oracle Fusion Middleware Connectivity and Knowledge Modules Guide for Oracle Data Integrator.

  7. To load your target datastores with the data from the source datastores, you need to create an interface. An interface consists of a set of rules that define the loading from one or more source datastores to one target datastore. How to create a new interface for your integration project is covered in Section 11.3, "Creating an Interface".

  8. Once you have finished creating the integration interface, you can execute it. The interface execution is covered in Section 11.3.8, "Execute the Integration Interface". Select Local (No Agent) to execute the interface directly by Oracle Data Integrator.

  9. You can view and monitor the execution results in Operator. How to follow the interface's execution in Operator is covered in Chapter 22, "Monitoring Integration Processes".

  10. An integration workflow may require the loading of several target datastores in a precise sequence. If you want to sequence your interfaces, create a package. This optional step is covered in Chapter 10, "Creating a new Package".


15 Working with Web Services in Oracle Data Integrator

This chapter describes how to work with web services in Oracle Data Integrator.

This chapter includes the following sections:

15.1 Introduction to Web Services in Oracle Data Integrator

Oracle Data Integrator provides the following entry points into a service-oriented architecture (SOA):

  • Data Services

  • Oracle Data Integrator Run-Time Services

  • The invocation of third-party web services

Figure 15-1 gives an overview of how the different types of Web services can interact.

Figure 15-1 Web Services in Action


It shows a simple example with the Data Services, Run-Time Web services (Public Web Service and Agent Web Service) and the OdiInvokeWebService tool.

The Data Services and Run-Time Web Services components are invoked by a third-party application, whereas the OdiInvokeWebService tool invokes a third-party Web service:

  • The Data Services provide access to data in datastores (both source and target datastores), as well as to changes trapped by the Changed Data Capture framework. These web services are generated by Oracle Data Integrator and deployed in a Java EE application server.

  • The Public Web Service connects to the repository to retrieve a list of contexts and scenarios. This web service is deployed in a Java EE application server.

  • The Agent Web Service commands the Oracle Data Integrator Agent to start and monitor a scenario and to restart a session. Note that this web service is built into the Java EE or Standalone Agent.

  • The OdiInvokeWebService tool is used in a package and invokes a specific operation on a port of the third-party Web service, for example to trigger a BPEL process.

Oracle Data Integrator Run-Time Web services and Data Services are two different types of Web services. Oracle Data Integrator Run-Time Web services enable you to access the Oracle Data Integrator features through Web services, whereas the Data Services are generated by Oracle Data Integrator to give you access to your data through Web services.

15.2 Data Services

Data Services are specialized Web Services that provide access to data in datastores, and to changes captured for these datastores using Changed Data Capture. These Web Services are automatically generated by Oracle Data Integrator and deployed to a Web Services container in an application server.

For more information on how to set up, generate, and deploy Data Services, refer to Chapter 8, "Working with Data Services".

15.3 Oracle Data Integrator Run-Time Services

Oracle Data Integrator Run-Time Services are web services that enable users to leverage Oracle Data Integrator features in a service-oriented architecture (SOA). These web services are invoked by third-party applications to manage and start scenarios developed with Oracle Data Integrator.

How to perform the different ODI execution tasks with the ODI Run-Time Services, such as executing a scenario, restarting a session, and listing execution contexts and scenarios, is detailed in Section 21.11, "Managing Executions Using Web Services". Section 21.11 also provides examples of SOAP requests and responses.

15.4 Invoking Third-Party Web Services

This section describes how to invoke third-party web services in Oracle Data Integrator.

This section includes the following topics:

15.4.1 Introduction to Web Service Invocation

Web Services can be invoked:

  • In Oracle Data Integrator packages or procedures using the OdiInvokeWebService tool: This tool allows you to invoke any third-party web service and save the response in an XML file that can be processed with Oracle Data Integrator.

  • For testing Data Services: The easiest way to test whether your generated data services are running correctly is to use the graphical interface of the OdiInvokeWebService tool. See Section 15.4.2, "Using the OdiInvokeWebService Tool" for more information.

15.4.2 Using the OdiInvokeWebService Tool

The OdiInvokeWebService tool invokes a web service using the HTTP or HTTPS protocol and is able to write the returned response to an XML file, which can be an XML payload or a full-formed SOAP message including a SOAP header and body. The OdiInvokeWebService tool invokes a specific operation on a port of a web service whose description file (WSDL) URL is provided. If this operation requires a SOAP request, it is provided either in a request file or in the tool command. The response of the web service request is written to an XML file that can be used in Oracle Data Integrator.


Note:

If the web service operation is one-way and does not return any response, no response file is generated.

How to create a web service request is detailed in Section 15.4.3, "Web Service Invocation in Integration Flows".


Note:

When using the XML payload format, the OdiInvokeWebService tool does not support the SOAP headers of the request. In order to work with SOAP headers, for example for secured web service invocation, use a full SOAP message and manually modify the SOAP headers.
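
To make the difference between the two formats concrete, here is a minimal sketch of the same hypothetical request in both forms (the getCustomer operation, namespace, and element names are invented for this illustration). The XML payload contains only the body content:

<getCustomer xmlns="http://www.example.com/ws">
  <customerId>1001</customerId>
</getCustomer>

The full-formed SOAP message wraps the same payload in an envelope with a header and a body; the header is where SOAP headers, such as security tokens, can be edited:

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header>
    <!-- SOAP headers, for example security tokens, go here -->
  </soap:Header>
  <soap:Body>
    <getCustomer xmlns="http://www.example.com/ws">
      <customerId>1001</customerId>
    </getCustomer>
  </soap:Body>
</soap:Envelope>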

This tool can be used as a regular Oracle Data Integrator tool in a tool step of a package and also in procedures and knowledge modules. See Section 10.3.1.4, "Adding Oracle Data Integrator Tool Steps" for information on how to create a tool step in a package and Appendix A, "OdiInvokeWebService" for details on the OdiInvokeWebService tool parameters.

The OdiInvokeWebService tool provides an Advanced editor for generating its code. This Advanced editor is available when using the OdiInvokeWebService tool in a package or when performing a Data Service test. In this Advanced editor you can:

  • Connect to the WSDL

  • Specify parameters for the tool in addition to the parameters specified in the Properties pane

  • Select a specific operation on the automatically selected port and specify request parameters in the SOAP editor

  • Invoke a Web Service

  • Consult the Web service response in the SOAP editor

Figure 15-2 gives an overview of the Advanced Editor.

Figure 15-2 OdiInvokeWebService Advanced Editor


The Advanced Editor consists of the sections described in Table 15-1.

Table 15-1 Advanced Editor Sections

  • Web Service Description File (WSDL) URL (top): Enter the WSDL location here.

  • Port (left): The port of the web service is set by default. If more than one port is available for the web service, select the appropriate port.

  • Invoke Web Service (toolbar icon): Immediately invokes the current Web Service, displaying the response in the SOAP editor.

  • Switch Panel Position (toolbar icon): Tiles the SOAP editor vertically or horizontally.

  • Export Response XSD (toolbar icon): Saves the current response XML schema description to a file.

  • Restore Default Request (toolbar icon): Discards the current request and reverts to a default, blank request structure.

  • Delete Empty Optional Components (toolbar icon): Removes all blank optional elements from the query. This may be necessary to construct a valid query.

  • Clean up before execution (toolbar icon): Automatically deletes empty optional elements in the SOAP request when Invoke Web Service is clicked. This checkbox has no effect on package steps at run-time.

  • Use Request File (toolbar icon): Uses a SOAP request stored in a file instead of the parameters specified in the SOAP editor.

  • Timeout (ms) (toolbar icon): Specifies the maximum period of time to wait for the request to complete.

  • Operation: The list of operations for the selected port.

  • Options: The HTTP request options:

    • Timeout: The web service request waits for a reply for this period of time before considering that the server will not provide a response, and an error is produced.

    • HTTP Authentication: If you check this box, you should provide a user name and password to authenticate on your HTTP server.

  • SOAP Editor (middle and right): Displays the web service request in the left pane (SOAP Editor or Source tab) and the SOAP response in the right pane.


SOAP Editor

The SOAP Editor allows you to graphically build the XML request for the web service and display the response.

If you are creating an OdiInvokeWebService step, the SOAP request filled in the SOAP editor is saved with the step.

The left part of the editor shows the structure of the request, and the right part shows the structure of the response. This arrangement can be changed by clicking Switch Panel Position. The request is displayed either in a hierarchical editor view (SOAP Editor tab) or in XML format (Source tab). When using the SOAP Editor tab, it is only possible to edit the body of the SOAP envelope. To edit or view the whole envelope, including the SOAP headers, you must use the Source tab.

In the Editor, you can fill in the value (and optionally the attributes) for each element of your request.


WARNING:

An empty element is passed as is to the Web service. For strings, this corresponds to an empty string. For numbers or date types, this may cause an error. If you want to send a null string, number, or date, it is recommended to use the nil="true" attribute. To remove empty elements, click Remove blank optional elements in the Advanced editor toolbar.


Optional elements are displayed in italic. Repeatable elements are labelled with ...(n*) after the name.

Right-click any element to perform one of the following operations, if possible:

  • Duplicate content - copies the structure and content of the element.

  • Duplicate structure - copies the structure but leaves all fields blank.

  • Delete - deletes the element.

  • Export Request - exports the entire SOAP request to an XML file.

Results

This part of the interface appears only when using an OdiInvokeWebService tool step in a package, to control how the response is written to an XML file.

Figure 15-3 Result Section for the OdiInvokeWebService tool


  • File Mode (-RESPONSE_MODE): One of NEW_FILE, FILE_APPEND, NO_FILE

  • Result File (-RESPONSE_FILE): The name of the result file to write.

  • Result File Format (-RESPONSE_FILE_FORMAT): The format of the web service response file. Possible values are XML (default) and SOAP.

  • XML Charset (-RESPONSE_XML_ENCODING): The name of the character encoding to write into the XML file.

  • Java Charset (-RESPONSE_FILE_CHARSET): The name of the character encoding used when writing the file.

Refer to Section A.6.22, "OdiInvokeWebService" for more information on these parameters.


Note:

The result file parameters are only taken into account at run-time. No result file is generated when clicking Invoke Web Service.
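
As a rough sketch of how these result file parameters might be combined in a package step, consider the call below. The WSDL URL, port type, and operation names are invented for this example, and the -URL, -PORT_TYPE, and -OPERATION parameter names are assumptions to be verified against Appendix A, "OdiInvokeWebService"; only the RESPONSE_* parameters are documented above.

OdiInvokeWebService "-URL=http://host:8080/services/WSCustomer?wsdl" "-PORT_TYPE=WSCustomerPortType" "-OPERATION=getCustomer" "-RESPONSE_MODE=NEW_FILE" "-RESPONSE_FILE=/temp/customer_response.xml" "-RESPONSE_FILE_FORMAT=XML" "-RESPONSE_XML_ENCODING=ISO-8859-1" "-RESPONSE_FILE_CHARSET=ISO8859_1"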

15.4.3 Web Service Invocation in Integration Flows

Calling a Web Service using the OdiInvokeWebService tool

To call a Web Service:

  1. Create an OdiInvokeWebService tool step in a package, or right-click a datastore and select Test Web Service in the contextual menu.

  2. Fill in the location of the WSDL. You can use either:

    • A URL for a WSDL that has been deployed to a server (for example: http://host:8080/services/WSCustomer?wsdl)

    • A local file location (for example: c:/DataServices/WSCustomer.wsdl )

  3. Choose a Port, if more than one is available.

  4. Choose an Operation from the list on the left.

  5. In the SOAP Editor, enter the web service payload. The OdiInvokeWebService tool supports two web service request formats: the XML body of the SOAP request only or the full-formed SOAP envelope, including the SOAP header and body.


    Note:

    Input format (request) and output format (response) are independent. Oracle Data Integrator recognizes the input message format automatically and writes the response according to the RESPONSE_FILE_FORMAT parameter (default is XML). However, in the Advanced editor the request file format determines the response file format: if you test the invocation using an XML payload message, the response will be an XML payload; if you test using a full-formed SOAP message, the response will be a full-formed SOAP message. How to generate a web service request file with Oracle Data Integrator is covered in "Generating the Request File".

  6. (Optional) Click Remove blank optional elements to delete optional request parameters which have not been specified. Some Web Services treat blank elements as invalid.

  7. Click Invoke Web Service to immediately invoke the Web Service. The response is shown in the right pane of the SOAP Editor.

  8. If you are creating an OdiInvokeWebService tool step, define the response file parameters.

  9. From the File menu, select Save.

Processing the Response File

When using the OdiInvokeWebService tool to call a web service, the response is written to an XML file.

Processing this XML file can be done with Oracle Data Integrator, using the following guidelines:

  1. Invoke the web service once and use the Export Response XSD option to export the XML schema.

  2. Create an XML model for the response file based on this XML schema file, and reverse-engineer the XSD to obtain your model structure.

  3. You can now process the information from your responses using regular Oracle Data Integrator interfaces sourcing from the XML technology.

Refer to the Oracle Fusion Middleware Connectivity and Knowledge Modules Guide for Oracle Data Integrator for more information on XML file processing.


Note:

Each XML file is defined as a model in Oracle Data Integrator. When using XML file processing for the request or response file, a model will be created for each request or response file. It is recommended to use model folders to arrange them. See Section 18.2, "Organizing Models with Folders" for more information.

Oracle Data Integrator provides the OdiXMLConcat and OdiXMLSplit tools for processing the web service response. Refer to the XML section of the Appendix A, "ODI Tools per Category" for details on how to use these tools.

Generating the Request File

There are several ways to create a request file:

  • Create the request directly in the SOAP Editor on the Advanced tab of the OdiInvokeWebService tool. The possible format is XML.

  • Use the XML driver, similarly to what is performed for processing the response file. If generating a request file using the XML driver, the request is not a full SOAP message but a simplified XML format. Use the SOAP editor for generating a template request.

  • Use an external request file that has been previously generated with ODI. The possible formats are XML and SOAP.

  • Create a SOAP request. To generate a SOAP request, you have to use a third-party tool, such as the HTTP Analyzer provided by JDeveloper. See "Using the HTTP Analyzer" in the Oracle SOA Suite Developer's Guide for more information.

    To call a web service with a SOAP request, perform the standard procedure as described in Calling a Web Service using the OdiInvokeWebService tool and perform the following steps for creating the web service request in SOAP format:

    1. Create the SOAP request in the third-party tool.

    2. Copy the SOAP request and paste the entire SOAP message into the Source tab of the SOAP Editor in ODI Studio.

    3. Optionally, edit the request.

    Note that the web service response will be in SOAP format.

Using the Binding Mechanism for Requests

It is possible to use the Binding mechanism when using a web service call in a Procedure. With this method, it is possible to call a web service for each row returned by a query, parameterizing the request based on the row's values. Refer to "Binding Source and Target Data" for more information.


21 Running Integration Processes

This chapter describes how to run and schedule integration processes.

This chapter includes the following sections:

21.1 Understanding ODI Executions

An execution takes place when an integration task needs to be performed by Oracle Data Integrator. This integration task may be one of the following:

  • An operation on a model, sub-model or a datastore, such as a customized reverse-engineering, a journalizing operation or a static check started from the Oracle Data Integrator Studio

  • The execution of a design-time object, such as an interface, a package or a procedure, typically started from the Oracle Data Integrator Studio

  • The execution of a run-time scenario or a Load Plan that was launched from the Oracle Data Integrator Studio, from a command line, via a schedule or a web service interface

Oracle Data Integrator generates the code for an execution in the form of a session or in the form of a Load Plan run if a Load Plan is executed.

A run-time Agent processes this code and connects to the sources and targets to perform the data integration. These sources and targets are located by the Agent using a given execution context.

When an execution is started from Oracle Data Integrator Studio, the Execution Dialog is displayed. This dialog contains the execution parameters listed in Table 21-1.

Table 21-1 Execution Parameters

  • Context: The context into which the session is started.

  • Agent: The agent that will execute the interface. The object can also be executed using the agent built into Oracle Data Integrator Studio, by selecting Local (No Agent).

  • Log Level: Level of logging information to retain. All session tasks with a defined log level lower than or equal to this value are kept in the Session log when the session completes. However, if the object execution ends abnormally, all tasks are kept, regardless of this setting. Note that log level 6 has the same behavior as log level 5, with the addition of variable tracking. See Section 12.2.3.11, "Tracking Variables and Sequences" for more information.

  • Simulation: Check Simulation if you want to simulate the execution and create an execution report. Refer to Section 21.10, "Simulating an Execution" for more information.


Session Lifecycle

This section describes the session lifecycle. See Section 14.1, "Introduction to Load Plans" for more information on Load Plan runs and the Load Plan life cycle.

The lifecycle of a session is as follows:

  1. An execution request is sent to the agent, or the agent triggers an execution from a schedule.

    Note that if the execution is triggered from Oracle Data Integrator Studio on a design-time object (interface, package, etc.), Studio pre-generates in the work repository the code for the session before sending the request. If the execution is started from a scenario, this phase is not necessary as the scenario already contains pre-generated code.

  2. The agent completes code generation for the session: it uses the context provided to resolve physical information such as data server connections and fully qualified table names. The resulting code is written into the work repository as a session in Waiting status.

  3. The agent initializes the connections to the source and target data servers that are required for the execution of the session.

  4. The agent acknowledges the execution request. If the execution was started from the Studio, the Session Started Dialog is displayed.

  5. The agent executes each of the tasks contained in this session, using the capabilities of the database servers, operating systems, or scripting engines to run the code contained in the session's tasks.

  6. While processing the session, the agent updates the execution log in the repository, reports execution statistics and error messages.

    Once the session is started, you can monitor it in the log, using for example Operator Navigator. Refer to Chapter 22, "Monitoring Integration Processes" for more information on session monitoring.

  7. When the session completes, tasks are preserved or removed from the log according to the log level value provided when starting this session.


Note:

A Session is always identified by a unique Session Number (or Session ID). This number can be viewed when monitoring the session, and is also returned by the command line or web service interfaces when starting a session.

When starting an execution from other locations such as a command line or a web service, you provide similar execution parameters, and receive a similar Session Started feedback. If the session is started synchronously from a command line or web service interface, the command line or web service will wait until the session completes, and provide the session return code and an error message, if any.

21.2 Executing Interfaces, Procedures, Packages and Model Operations

Interfaces, procedures, and packages are design-time objects that can be executed from the Designer Navigator of Oracle Data Integrator Studio.

21.3 Executing a Scenario

Scenarios can be executed in several ways:


Note:

Before running a scenario, you need to have the scenario generated from Designer Navigator or imported from a file. Refer to Chapter 13, "Working with Scenarios" for more information.

21.3.1 Executing a Scenario from ODI Studio

You can start a scenario from Oracle Data Integrator Studio from Designer or Operator Navigator.

To start a scenario from Oracle Data Integrator Studio:

  1. Select the scenario in the Projects accordion (in Designer Navigator) or the Scenarios accordion (in Operator Navigator).

  2. Right-click, then select Execute.

  3. In the Execution dialog, set the execution parameters. Refer to Table 21-1 for more information. To execute the scenario with the agent that is built into Oracle Data Integrator Studio, select Local (No Agent).

  4. Click OK.

  5. If the scenario uses variables as parameters, the Variable values dialog is displayed. Select the values for the session variables. Selecting Latest value for a variable uses its current value, or default value if none is available.

When the agent has started to process the session, the Session Started dialog appears.

21.3.2 Executing a Scenario from a Command Line

You can start a scenario from a command line.

Before executing a scenario from a command line, read carefully the following requirements:

  • The command line scripts, which are required for performing the tasks described in this section, are only available if you have installed the Oracle Data Integrator Standalone Agent. See the Oracle Fusion Middleware Installation Guide for Oracle Data Integrator for information about how to install the Standalone Agent.

  • To use this command the connection to your repository must be configured in the odiparams file. See Chapter 4, "Managing Agents" for more information.

  • When starting a scenario from a command line, the session is not started by default against a remote run-time agent, but is executed by a local Java process started from the command line. This process can be aborted locally, but cannot receive a session stop signal as it is not a real run-time agent. As a consequence, sessions started this way cannot be stopped remotely.

    This process will be identified in the Data Integrator log after the Local Agent name. You can change this name using the NAME parameter.

    If you want to start the session against a run-time agent, you must use the AGENT_URL parameter.

To start a scenario from a command line:

  1. Change directory to the /agent/bin directory of the Oracle Data Integrator installation.

  2. Enter the following command to start a scenario.

    On UNIX systems:

    ./startscen.sh <scenario_name> <scenario_version> <context_code> [<log_level>] [-AGENT_URL=<remote_agent_url>] [-ASYNC=yes|no] [-NAME=<local_agent_name>] [-SESSION_NAME=<session_name>] [-KEYWORDS=<keywords>] [<variable>=<value>]*

    On Windows systems:

    startscen.bat <scenario_name> <scenario_version> <context_code> [<log_level>] [-AGENT_URL=<remote_agent_url>][-ASYNC=yes|no] ["-NAME=<local_agent_name>"] ["-SESSION_NAME=<session_name>"] ["-KEYWORDS=<keywords>"] ["<variable>=<value>"]*


Note:

On Windows platforms, it is necessary to "delimit" the command arguments containing "=" signs or spaces, by using double quotes. The command call may differ from the Unix command call. For example:

On Unix

./startscen.sh DWH 001 GLOBAL SESSION_NAME=MICHIGAN

On Windows

startscen.bat DWH 001 GLOBAL "SESSION_NAME=MICHIGAN"


Table 21-2 lists the different parameters, both mandatory and optional. The parameters are preceded by the "-" character and the possible values are preceded by the "=" character. You must follow the character protection syntax specific to the operating system on which you enter the command.

Table 21-2 Startscen Command Parameters

  • <scenario_name>: Name of the scenario (mandatory).

  • <scenario_version>: Version of the scenario (mandatory). If the version specified is -1, the latest version of the scenario is executed.

  • <context_code>: Code of the execution context (mandatory).

  • [<log_level>]: Level of logging information to retain. This parameter is in the format <n>, where <n> is the expected logging level, between 0 and 6. The default log level is 5. Note that log level 6 has the same behavior as log level 5, with the addition of variable tracking. See Section 12.2.3.11, "Tracking Variables and Sequences" for more information. Example: startscen.bat SCENAR 1 GLOBAL 5

  • [-AGENT_URL=<remote_agent_url>]: URL of the run-time agent that will run this session. If this parameter is set, the NAME parameter is ignored.

  • [-ASYNC=yes|no]: Set to yes for an asynchronous execution on the remote agent. If ASYNC is used, AGENT_URL is mandatory. Note that when asynchronous execution is used, the session ID of the scenario is returned.

  • [-NAME=<local_agent_name>]: Agent name that will appear in the execution log for this session, instead of Local Agent. This parameter is ignored if AGENT_URL is used. Note that using an existing physical agent name in the NAME parameter is not recommended: the run-time agent whose name is used does not have all the information about this session and will not be able to manage it correctly. The following features will not work correctly for this session:

    • Clean stale sessions: this session will be considered stale by the named agent if that agent is started, and will be pushed to error when the agent detects it.

    • Kill sessions: the named agent cannot kill the session when requested.

    • Agent session count: this session is counted among the named agent's sessions, even though it is not executed by it.

    It is recommended to use a NAME that does not match any existing physical agent name. If you want to start a session on a given physical agent, you must use the AGENT_URL parameter instead.

  • [-SESSION_NAME=<session_name>]: Name of the session that will appear in the execution log.

  • [-KEYWORDS=<keywords>]: Comma-separated list of keywords attached to this session. These keywords make session identification easier.

  • [<variable>=<value>]: Assigns a <value> to a <variable> for the execution of the scenario. <variable> is either a project or a global variable. Project variables should be named <Project Code>.<Variable Name>; global variables should be named GLOBAL.<Variable Name>. This parameter can be repeated to assign several variables. Do not use a hash sign (#) to prefix the variable name on the startscen command line.
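
For example, the following hypothetical UNIX call (the scenario name, context, agent URL, and variable are invented for this illustration) runs version 3 of a scenario at log level 5 against a remote run-time agent and assigns one project variable:

./startscen.sh LOAD_SALES 003 GLOBAL 5 -AGENT_URL=http://localhost:20910/oraclediagent SALES.LOAD_DATE=2011-11-04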


21.4 Restarting a Session

Any session that has encountered an error, or has been stopped by the user can be restarted.

Oracle Data Integrator uses JDBC transactions when interacting with source and target data servers, and any open transaction state is not persisted when a session finishes in error state. The appropriate restart point is the task that started the unfinished transaction(s). If such a restart point is not identifiable, it is recommended that you start a fresh session by executing the scenario instead of restarting existing sessions that are in error state.

Only sessions in status Error or Waiting can be restarted. By default, a session restarts from the last task that failed to execute (typically a task in error or in waiting state). A session may need to be restarted in order to proceed with existing staging tables and avoid re-running long loading phases. In that case the user should take into consideration transaction management, which is KM specific. A general guideline is: If a crash occurs during a loading task, you can restart from the loading task that failed. If a crash occurs during an integration phase, restart from the first integration task, because integration into the target is within a transaction. This guideline applies only to one interface at a time. If several interfaces are chained and only the last one performs the commit, then they should all be restarted because the transaction runs over several interfaces.

To restart from a specific task or step:

  1. In Operator Navigator, navigate to this task or step, edit it and switch it to Waiting state.

  2. Set all tasks and steps after this one in the Operator tree view to Waiting state.

  3. Restart the session using one of the following methods:


WARNING:

When restarting a session, all connections and transactions to the source and target systems are re-created, and not recovered from the previous session run. As a consequence, uncommitted operations on transactions from the previous run are not applied, and data required for successfully continuing the session may not be present.


21.4.1 Restarting a Session from ODI Studio

To restart a session from Oracle Data Integrator Studio:

  1. In Operator Navigator, select the session that you want to restart.

  2. Right-click and select Restart.

  3. In the Restart Session dialog, specify the agent you want to use for running the new session.

    To select the agent to execute the session, do one of the following:

    • Select Use the previous agent: <agent name> to use the agent that was used for the previous session execution.

    • Select Choose another agent to select from the list the agent that you want to use for the session execution.


      Note:

      Select Internal to use the ODI Studio built-in Agent.

  4. Select the Log Level. Note that log level 6 has the same behavior as log level 5, with the addition of variable tracking. See Section 12.2.3.11, "Tracking Variables and Sequences" for more information.

  5. Click OK to restart the indicated session and to close the dialog. Click Cancel if you do not want to restart the session.

When Oracle Data Integrator has restarted the session, the Session Started dialog appears.

21.4.2 Restarting a Session from a Command Line

Before restarting a session from a command line, read carefully the following requirements:

  • The command line scripts, which are required for performing the tasks described in this section, are only available if you have installed the Oracle Data Integrator Standalone Agent. See the Oracle Fusion Middleware Installation Guide for Oracle Data Integrator for information about how to install the Standalone Agent.

  • To use this command the connection to your repository must be configured in the odiparams file. See Chapter 4, "Managing Agents" for more information.

  • When restarting a session from a command line, the session is not started by default against a remote run-time agent, but is executed by a local Java process started from the command line. This process can be aborted locally, but cannot receive a session stop signal as it is not a real run-time agent. As a consequence, sessions started this way cannot be stopped remotely.

    If you want to start the session against a run-time agent, you must use the AGENT_URL parameter.

To restart a session from a command line:

  1. Change directory to the /agent/bin directory of the Oracle Data Integrator installation.

  2. Enter the following command to restart a session.

    On UNIX systems:

    ./restartsession.sh <session_number> [-log_level][-AGENT_URL=<remote_agent_url>]

    On Windows systems:

    restartsession.bat <session_number> [-log_level]["-AGENT_URL=<remote_agent_url>"]

Table 21-3 lists the different parameters of this command, both mandatory and optional. The parameters are preceded by the "-" character and the possible values are preceded by the "=" character. You must follow the character protection syntax specific to the operating system on which you enter the command.

Table 21-3 Restartsession Command Parameters

  • <session_number>: Number (ID) of the session to be restarted.

  • [-log_level]: Level of logging information to retain. Note that log level 6 has the same behavior as log level 5, with the addition of variable tracking. If this parameter is not provided when restarting a session, the log level used for the previous execution of the session is reused. See Section 12.2.3.11, "Tracking Variables and Sequences" for more information.

  • [-AGENT_URL=<remote_agent_url>]: URL of the run-time agent that will restart this session. By default, the session is executed by a local Java process started from the command line.
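
For example, the following hypothetical call (the session number and agent URL are invented for this illustration) restarts session 10034 against a remote run-time agent, reusing the log level of the previous execution:

./restartsession.sh 10034 -AGENT_URL=http://localhost:20910/oraclediagent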



Note:

To use this command the connection to your repository must be configured in the odiparams file. See Section 4.3, "Managing Agents" for more information.


Note:

When restarting a session from a command line, the session is not started by default against a remote run-time agent, but is executed by a local Java process started from the command line.

If you want to start the session against a run-time agent, you must use the AGENT_URL parameter.


21.5 Stopping a Session

Any running or waiting session can be stopped. You may want to stop a session when you realize that, for example, your interface contains errors or the execution takes too long.

Note that there are two ways to stop a session:

  • Normal: The session is stopped once the current task is finished.

  • Immediate: The current task is immediately interrupted and the session is stopped. This mode allows stopping long-running tasks, such as long SQL statements, before they complete.


Note:

The immediate stop works only with technologies and drivers that support task interruption. It is supported if the statement.cancel method is implemented in the JDBC driver.


Note:

Only sessions that are running within a Java EE or standalone Agent can be stopped. Sessions running in the Studio built-in Agent or started with the startscen.sh or startscen.bat script without the AGENT_URL parameter, cannot be stopped. See Section 21.3, "Executing a Scenario" for more information.

Sessions can be stopped in several ways:

21.5.1 Stopping a Session From ODI Studio

To stop a session from Oracle Data Integrator Studio:

  1. In Operator Navigator, select the running or waiting session to stop from the tree.

  2. Right-click then select Stop Normal or Stop Immediate.

  3. In the Stop Session Dialog, click OK.

The session is stopped and changed to Error status.

21.5.2 Stopping a Session From a Command Line

Before stopping a session from a command line, read carefully the following requirements:

To stop a session from a command line:

  1. Change directory to the /agent/bin directory of the Oracle Data Integrator installation.

  2. Enter the following command to stop a session.

    On UNIX systems:

    ./stopsession.sh <session_id> [-AGENT_URL=<remote_agent_url>] [-STOP_LEVEL=<normal (default) | immediate>]

    On Windows systems:

    stopsession.bat <session_id> ["-AGENT_URL=<remote_agent_url>"] ["-STOP_LEVEL=<normal (default) | immediate>"]

Table 21-4 lists the different parameters of this command, both mandatory and optional. The parameters are preceded by the "-" character and the possible values are preceded by the "=" character. You must follow the character protection syntax specific to the operating system on which you enter the command.

Table 21-4 Stopsession Command Parameters

  • <session_id>: Number (ID) of the session to be stopped.

  • [-AGENT_URL=<remote_agent_url>]: URL of the run-time agent that stops this session. By default, the session is executed by a local Java process started from the command line.

  • [-STOP_LEVEL=<normal (default) | immediate>]: The level used to stop a running session. If omitted, normal is used as the default stop level.
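
For example, the following hypothetical call (the session number and agent URL are invented for this illustration) requests an immediate stop of session 10034 through the remote agent running it:

./stopsession.sh 10034 -AGENT_URL=http://localhost:20910/oraclediagent -STOP_LEVEL=immediate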



Note:

To use this command the connection to your repository must be configured in the odiparams file. See Section 4.3, "Managing Agents" for more information.

21.6 Executing a Load Plan

Load Plans can be executed in several ways:


Note:

A Load Plan cannot be executed using the ODI Studio built-in agent called Local (No Agent).

21.6.1 Executing a Load Plan from ODI Studio

In ODI Studio, you can run a Load Plan in Designer Navigator or in Operator Navigator.

To run a Load Plan in Designer Navigator or Operator Navigator:

  1. In the Load Plans and Scenarios accordion, select the Load Plan you want to execute.

  2. Right-click and select Execute.

  3. In the Start Load Plan dialog, select the execution parameters:

    • Select the Context into which the Load Plan will be executed.

    • Select the Logical Agent that will run the step.

    • Select the Log Level. All sessions with a defined log level lower than or equal to this value will be kept in the Session log when the session completes. However, if the object execution ends abnormally, all tasks will be kept, regardless of this setting.

      Note that log level 6 has the same behavior as log level 5, with the addition of variable tracking. See Section 12.2.3.11, "Tracking Variables and Sequences" for more information.

      Select Use Session Task Log Level (default) to use the Session Tasks Log Level value defined in the Load Plan.

    • In the Variables table, enter the Startup values for the variables used in this Load Plan.

  4. Click OK.

  5. The Load Plan Started Window appears.

  6. Click OK.

A new execution of the Load Plan is started: a Load Plan instance is created, as well as the first Load Plan run. You can review the Load Plan execution in the Operator Navigator.

21.6.2 Executing a Load Plan from a Command Line

You can start a Load Plan from a command line.

Before executing a Load Plan from a command line, read carefully the following requirements:

  • The command line scripts, which are required for performing the tasks described in this section, are only available if you have installed the Oracle Data Integrator Standalone Agent. See the Oracle Fusion Middleware Installation Guide for Oracle Data Integrator for information about how to install the Standalone Agent.

  • To use this command, the connection to your repository must be configured in the odiparams file. See Chapter 4, "Managing Agents" for more information.

  • A Load Plan Run is started against a run-time agent identified by the AGENT_URL parameter.

To start a Load Plan from a command line:

  1. Change directory to /agent/bin directory of the Oracle Data Integrator installation.

  2. Enter the following command to start a Load Plan.

    On UNIX systems:

    ./startloadplan.sh <load_plan_name> <context_code> [log_level] -AGENT_URL=<agent_url> [-KEYWORDS=<keywords>] [<variable>=<value>]*

    On WINDOWS systems:

    startloadplan.bat <load_plan_name> <context_code> [log_level]"-AGENT_URL=<agent_url>" ["-KEYWORDS=<keywords>"] ["<variable>=<value>"]*


Note:

On Windows platforms, it is necessary to "delimit" the command arguments containing "=" signs or spaces, by using double quotes. The command call may differ from the Unix command call. For example:

On UNIX systems:

./startloadplan.sh DWLoadPlan DEV -AGENT_URL=http://localhost:20910/oraclediagent

On WINDOWS systems:

startloadplan.bat DWLoadPlan DEV "-AGENT_URL=http://localhost:20910/oraclediagent"


Table 21-5 lists the different parameters, both mandatory and optional. The parameters are preceded by the "-" character and the possible values are preceded by the "=" character. You must follow the character protection syntax specific to the operating system on which you enter the command.

Table 21-5 Startloadplan Command Parameters

  • <load_plan_name>: Name of the Load Plan to be started (mandatory).

  • <context_code>: Code of the context used for starting the Load Plan (mandatory). Note that if this value is not provided, the Load Plan uses the context of the session that calls it.

  • [log_level]: Level of logging information to retain. All sessions with a defined log level lower than or equal to this value are kept in the Session log when the session completes. However, if the object execution ends abnormally, all tasks are kept, regardless of this setting. Note that log level 6 has the same behavior as log level 5, with the addition of variable tracking. The default is the Load Plan's Session Tasks Log Level that has been used for starting the Load Plan. See Section 12.2.3.11, "Tracking Variables and Sequences" for more information.

  • ["-AGENT_URL=<agent_url>"]: URL of the Physical Agent that starts the Load Plan (mandatory).

  • ["-KEYWORDS=<keywords>"]: Keywords to improve the organization of ODI logs by session folders and automatic classification. Enter a comma-separated list of keywords that will be attached to this Load Plan.

  • ["<variable>=<value>"]: Startup values for the Load Plan variables (optional). Note that project variables should be named <project_code>.<variable_name> and global variables should be named GLOBAL.<variable_name>. This list is of the form <variable>=<value>. The format for Date and Number variables is as follows:

    • Date: yyyy-MM-dd'T'HH:mm:ssZ. For example: 2009-12-06T15:59:34+0100

    • Number: integer value. For example: 29833

    For example:

    "A_PROJ.A_REFRESH_VAR=bb" "A_PROJ.A_CROSS_PROJ_VAR=aa" "A_PROJ.A_VAR=cc"


21.7 Restarting a Load Plan Run

Restarting a Load Plan starts a new run for the selected Load Plan instance. Note that when a Load Plan restarts, the Restart Type parameter for the steps in error defines how the Load Plan and child sessions will be restarted. See Section 14.2.4.3, "Defining the Restart Behavior" and Section 21.4, "Restarting a Session" for more information.


Note:

Restarting a Load Plan instance depends on the status of its most recent (highest-numbered) run. Restart is only enabled for the most recent run, and only if its status is Error.

Load Plans can be restarted in several ways:

21.7.1 Restarting a Load Plan from ODI Studio

To restart a Load Plan from ODI Studio:

  1. In Operator Navigator, select the Load Plan Run to restart from the Load Plan Executions accordion.

  2. Right-click then select Restart.

  3. In the Restart Load Plan Dialog, select the Agent that restarts the Load Plan. Optionally, select a different log level.

  4. Click OK.

The Load Plan is restarted and a new Load Plan run is created.

21.7.2 Restarting a Load Plan from a Command Line

Before restarting a Load Plan from a command line, read carefully the following requirements:

  • The command line scripts, which are required for performing the tasks described in this section, are only available if you have installed the Oracle Data Integrator Standalone Agent. See the Oracle Fusion Middleware Installation Guide for Oracle Data Integrator for information about how to install the Standalone Agent.

  • To use this command the connection to your repository must be configured in the odiparams file. See Chapter 4, "Managing Agents" for more information.

  • A Load Plan Run is restarted against a remote run-time agent identified by the AGENT_URL parameter.

To restart a Load Plan from a command line:

  1. Change directory to /agent/bin directory of the Oracle Data Integrator installation.

  2. Enter the following command to restart a Load Plan.

    On UNIX systems:

    ./restartloadplan.sh <load_plan_instance_id> [log_level] -AGENT_URL=<agent_url>

    On WINDOWS systems:

    restartloadplan.bat <load_plan_instance_id> [log_level] "-AGENT_URL=<agent_url>"


Note:

On Windows platforms, it is necessary to "delimit" the command arguments containing "=" signs or spaces, by using double quotes. The command call may differ from the Unix command call.

Table 21-6 lists the different parameters, both mandatory and optional. The parameters are preceded by the "-" character and the possible values are preceded by the "=" character. You must follow the character protection syntax specific to the operating system on which you enter the command.

Table 21-6 Restartloadplan Command Parameters

  • <load_plan_instance_id>: ID of the stopped or failed Load Plan instance that is to be restarted (mandatory).

  • [log_level]: Level of logging information to retain. All sessions with a defined log level lower than or equal to this value are kept in the Session log when the session completes. However, if the object execution ends abnormally, all tasks are kept, regardless of this setting. Note that log level 6 has the same behavior as log level 5, with the addition of variable tracking. The default is the log level value used for the Load Plan's previous run. See Section 12.2.3.11, "Tracking Variables and Sequences" for more information.

  • ["-AGENT_URL=<agent_url>"]: URL of the Physical Agent that restarts the Load Plan (optional).
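
For example, the following hypothetical call (the instance ID and agent URL are invented for this illustration) restarts Load Plan instance 1552 on a remote agent:

./restartloadplan.sh 1552 -AGENT_URL=http://localhost:20910/oraclediagent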


21.8 Stopping a Load Plan Run

Any running or waiting Load Plan Run can be stopped. You may want to stop a Load Plan Run when you realize that, for example, your Load Plan contains errors or the execution takes too long.

Note that there are two ways to stop a Load Plan Run:

  • Stop Normal: In normal stop mode, the agent in charge of stopping the Load Plan sends a Stop Normal signal to each agent running a session for this Load Plan. Each agent waits for the completion of the current task of the session and then ends the session in error. Exception steps are not executed by the Load Plan, and once all exceptions are finished, the Load Plan is moved to an error state.

  • Stop Immediate: In immediate stop mode, the agent in charge of stopping the Load Plan sends a Stop Immediate signal to each agent running a session for this Load Plan. Each agent immediately ends the session in error and does not wait for the completion of the current task of the session. Exception steps are not executed by the Load Plan, and once all exceptions are finished, the Load Plan is moved to an error state.

Load Plans can be stopped in several ways:

21.8.1 Stopping a Load Plan from ODI Studio

To stop a Load Plan Run from ODI Studio:

  1. In Operator Navigator, select the running or waiting Load Plan Run to stop from the Load Plan Executions accordion.

  2. Right-click then select Stop Normal or Stop Immediate.

  3. In the Stop Load Plan Dialog, select the Agent that stops the Load Plan.

  4. Click OK.

The Load Plan run is stopped and changed to Error status.

21.8.2 Stopping a Load Plan Run from a Command Line

Before stopping a Load Plan from a command line, carefully read the following requirements:

  • The command line scripts, which are required for performing the tasks described in this section, are only available if you have installed the Oracle Data Integrator Standalone Agent. See the Oracle Fusion Middleware Installation Guide for Oracle Data Integrator for information about how to install the Standalone Agent.

  • To use this command the connection to your repository must be configured in the odiparams file. See Chapter 4, "Managing Agents" for more information.

  • The stop signal is sent to the Load Plan Run by a remote run-time agent identified by the AGENT_URL parameter.

To stop a Load Plan run from a command line:

  1. Change to the /agent/bin directory of the Oracle Data Integrator installation.

  2. Enter the following command to stop a Load Plan.

    On UNIX systems:

    ./stoploadplan.sh <load_plan_instance_id> [<load_plan_run_count>] -AGENT_URL=<agent_url> [-STOP_LEVEL=<normal (default) | immediate>]

    On WINDOWS systems:

    stoploadplan.bat <load_plan_instance_id> [<load_plan_run_count>] "-AGENT_URL=<agent_url>" ["-STOP_LEVEL=<normal (default) | immediate>"]

Table 21-7 lists the different parameters, both mandatory and optional. The parameters are preceded by the "-" character and the possible values are preceded by the "=" character. You must follow the character protection syntax specific to the operating system on which you enter the command.

Table 21-7 Stoploadplan Command Parameters

Parameters / Description

<load_plan_instance_id>

ID of the running Load Plan run that is to be stopped (mandatory)

[<load_plan_run_count>]

Load Plan run count of the Load Plan instance. Specifying it prevents unintentionally stopping the latest run when a different run was intended. If omitted, the latest Load Plan run count is used (optional)

["-AGENT_URL=<agent_url>"]

URL of the Physical Agent that stops the Load Plan (optional)

[-STOP_LEVEL=<normal (default) | immediate>]

Level used to stop the Load Plan run. Default is normal.



Note:

On Windows platforms, command arguments containing "=" signs or spaces must be delimited with double quotes. The command call may therefore differ from the UNIX command call.
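For example, the following UNIX call (the instance ID, run count, and agent URL are hypothetical) requests an immediate stop of run 2 of Load Plan instance 1001:

    ./stoploadplan.sh 1001 2 "-AGENT_URL=http://odihost:20910/oraclediagent" -STOP_LEVEL=immediate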

21.9 Scheduling Scenarios and Load Plans

You can schedule the executions of your scenarios and Load Plans using the Oracle Data Integrator built-in scheduler or an external scheduler. Both methods are detailed in this section:

21.9.1 Scheduling a Scenario or a Load Plan with the Built-in Scheduler

You can attach schedules to scenarios and to Load Plans. Such schedules are managed by the scheduler built into the run-time agent.

It is important to understand that a schedule concerns only one scenario or one Load Plan, while a scenario or a Load Plan can have several schedules and can be scheduled in several ways. The different schedules appear under the Scheduling node of the scenario or Load Plan. Each schedule allows a start date and a repetition cycle to be specified.

For example:

  • Schedule 1: Every Thursday at 9 PM, once only.

  • Schedule 2: Every day from 8 am to 12 noon, repeated every 5 seconds.

  • Schedule 3: Every day from 2 PM to 6 PM, repeated every 5 seconds, with a maximum cycle duration of 5 hours.

21.9.1.1 Scheduling a Scenario or a Load Plan

To schedule a scenario or a Load Plan from Oracle Data Integrator Studio:

  1. Right-click the Scheduling node under a scenario or a Load Plan in the Designer or Operator Navigator.

  2. Select New Scheduling. The Scheduling editor is displayed.

  3. On the Definition tab of the Scheduling editor specify the parameters as follows:

    • Context: Context into which the scenario or Load Plan is started.

    • Agent: Agent executing the scenario or Load Plan.

    • Log Level: Level of logging information to retain.

    The Status parameters define the activation of the schedule:

    • Active: The scheduling will be active when the agent is restarted or when the scheduling of the physical agent is updated.

    • Inactive: The schedule is not active and will not run.

    • Active for the period: Activity range of the schedule. A schedule active for a period of time will only run within this given period.

    The Execution parameters define the frequency of execution for each execution cycle:

    • Execution: Frequency of execution option (annual, monthly, ..., simple). This option is completed by a set of options that depend on this main option.

  4. On the Execution Cycle tab, specify the parameters for the repeat mode of the scenario as follows:

    • None (Execute once): The scenario or Load Plan is executed only one time.

    • Many times: The scenario or Load Plan is repeated several times, according to the following parameters:

      • Maximum number of repetitions: The maximum number of times the scenario is repeated during the cycle.

      • Maximum Cycle Duration: As soon as the maximum time is reached, the scenario is no longer restarted, and the cycle stops.

      • Interval between repetitions: The downtime between each scenario execution.

    • Constraints: Allows limitations to be placed on one cycle iteration, in the event of a problem during execution:

      • Number of Attempts on Failure: Maximum number of consecutive execution attempts for one iteration.

      • Stop Execution After: Maximum execution time for one iteration. If this time is reached, the scenario or Load Plan is automatically stopped.


  5. On the Variables tab, unselect Latest Value for variables for which you want to provide a Value. Only variables used in the scenario or Load Plan and flagged as parameters for this scenario or Load Plan appear in this tab.

  6. From the File menu, click Save.

The new schedule appears under the Scheduling node of the scenario or Load Plan.

The schedule changes are taken into account by the run-time agent when it starts or when it receives a schedule update request.

21.9.1.2 Updating an Agent's Schedule

When it starts, an agent reads the schedules from all the repositories attached to the master repository it connects to. If a schedule is later added for this agent in a given repository, you can refresh the agent's schedule.

To update an agent's schedule:

  1. In Topology Navigator expand the Agents node in the Physical Architecture accordion.

  2. Select the Physical Agent whose schedule you want to update.

  3. Right-click and select Update Scheduling...

  4. In the Select Repositories dialog, select the repositories from which you want to read scheduling information. Check Select All Work Repositories to read scheduling information from all these repositories.

  5. Click OK.

The agent refreshes and re-computes its in-memory schedule from the schedules defined in these repositories.

You can also use the OdiUpdateAgentSchedule tool to update an agent's schedule.
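For example, the following tool call, placed in a package step or a procedure command, would refresh the schedule of an agent named AGT_DWH (a hypothetical name; see the Oracle Data Integrator Tools Reference for the complete parameter list):

    OdiUpdateAgentSchedule -AGENT_NAME=AGT_DWH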

21.9.1.3 Displaying the Schedule

You can view the scheduled tasks of all your agents or you can view the scheduled tasks of one particular agent.


Note:

The Scheduling Information is retrieved from the Agent's in-memory schedule. The Agent must be started and its schedule refreshed in order to display accurate schedule information.

Displaying the Schedule for All Agents

To display the schedule for all agents:

  1. Select Connect Navigator > Scheduling... from the Operator Navigator toolbar menu.

The View Schedule dialog appears, displaying the schedule for all agents.

Displaying the Schedule for One Agent

To display the schedule for one agent:

  1. In Topology Navigator expand the Agents node in the Physical Architecture accordion.

  2. Select the Physical Agent whose schedule you want to display.

  3. Right-click and select View Schedule.

The Schedule Editor appears, displaying the schedule for this agent.


Note:

The Scheduling Information is retrieved from the Agent's schedule. The Agent must be started and its schedule refreshed in order to display accurate schedule information.

Using the View Schedule Dialog

The schedule is displayed in the form of a Gantt diagram. Table 21-8 lists the details of the Schedule dialog.

Table 21-8 Scheduling Details



Selected Agent

Agent for which the Schedule is displayed. You can also display the schedule of all agents by selecting All Agents.

Selected Work Repository

Only the scenarios executed in the selected Work Repository are displayed in the schedule. Default is All Work Repositories.

Scheduling from... to...

Time range for which the scheduling is displayed. Click Refresh to refresh this schedule.

Update

Click Update to update the schedule for the selected agent(s).

Time Range

The time range specified (1 hour, 2 hours, and so forth) centers the diagram on the current time plus this duration, giving you a view of the sessions in progress and the upcoming sessions. You can use the arrows to move the range forward or backward.

Scenarios details

This panel displays the details and execution statistics for each scheduled scenario.


If you select a zone in the diagram (keep the mouse button pressed), you automatically zoom in on the selected zone.

By right-clicking in the diagram, you open a context menu for zooming, saving the diagram as an image file, printing or editing the display properties.

21.9.2 Scheduling a Scenario or a Load Plan with an External Scheduler

To start a scenario or a Load Plan with an external scheduler, do one of the following:

  • Use the startscen or startloadplan command from the external scheduler

  • Use the web service interface for triggering the scenario or Load Plan execution

For more information, see the startscen and startloadplan command descriptions earlier in this chapter and Section 21.11, "Managing Executions Using Web Services".

If a scenario or a Load Plan completes successfully, the return code is 0. Otherwise, the return code is non-zero. This code is available in:

  • The return code of the command line call. The error message, if any, is available on the standard error output.

  • The SOAP response of the web service call. The web service response also includes the session error message, if any.
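For example, a minimal UNIX wrapper script for an external scheduler could test this return code as follows. The Load Plan name, context, and agent URL are hypothetical, and the startloadplan command is the one described earlier in this chapter:

    #!/bin/sh
    # Start the Load Plan and capture the command return code
    ./startloadplan.sh DWH_LOAD GLOBAL "-AGENT_URL=http://odihost:20910/oraclediagent"
    retcode=$?
    if [ $retcode -ne 0 ]; then
        # Error details, if any, are printed on the standard error output
        echo "Load Plan DWH_LOAD failed with return code $retcode" >&2
        exit $retcode
    fi
    echo "Load Plan DWH_LOAD completed successfully"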

21.10 Simulating an Execution

In Oracle Data Integrator, you can simulate an execution at design time. Simulating an execution generates and displays the code corresponding to the execution without running this code. Execution simulation provides reports suitable for code review.


Note:

No session is created in the log when the execution is started in simulation mode.

To simulate an execution:

  1. In the Project view of the Designer Navigator, select the object you want to execute.

  2. Right-click and select Execute.

  3. In the Execution dialog, set the execution parameters and select Simulation. See Table 21-1 for more information.

  4. Click OK.

The Simulation report is displayed.

You can click Save to save the report as an .xml or .html file.

21.11 Managing Executions Using Web Services

This section explains how to use a web service to perform run-time operations. It contains the following sections:

21.11.1 Introduction to Run-Time Web Services

Oracle Data Integrator includes web services for performing run-time operations. These web services are located in two places:

  • In the run-time agent, a web service allows starting a scenario or a Load Plan, monitoring a session status or a Load Plan run status, and restarting a session or a Load Plan instance, as well as stopping a Load Plan run. To use operations from this web service, you must first install and configure a standalone or a Java EE agent.

  • A dedicated public web service component provides operations to list the contexts and scenarios available. To use operations from this web service, you must first install and configure this component in a Java EE container.

The following applies to the SOAP requests used against the agent and public web services:

  • The web service operations accept passwords in plain text in the SOAP request. Consequently, it is strongly recommended to use secured protocols (HTTPS) to invoke web services over a non-secured network. You can alternatively use external authentication. See Section 21.11.12, "Using the Run-Time Web Services with External Authentication" for more information.

  • Repository connection information is not necessary in the SOAP request as the agent or public web service component is configured to connect to a master repository. Only an ODI user and the name of a work repository are required to run most of the operations.

21.11.2 Executing a Scenario Using a Web Service

The invokeStartScen operation of the agent web service starts a scenario in synchronous or asynchronous mode in a given work repository. The session is executed by the agent providing the web service.

      <OdiStartScenRequest>
         <Credentials>
            <OdiUser>odi_user</OdiUser>
            <OdiPassword>odi_password</OdiPassword>
            <WorkRepository>work_repository</WorkRepository>
         </Credentials>
         <Request>
            <ScenarioName>scenario_name</ScenarioName>
            <ScenarioVersion>scenario_version</ScenarioVersion>
            <Context>context</Context>
            <LogLevel>log_level</LogLevel>
            <Synchronous>synchronous</Synchronous>
            <SessionName>session_name</SessionName>
            <Keywords>keywords</Keywords>
            <Variables>
            <Name>variable_name</Name>
            <Value>variable_value</Value>
            </Variables>
         </Request>
      </OdiStartScenRequest>

The scenario execution returns the session ID in a response that depends on the value of the synchronous element in the request.

  • In synchronous mode (Synchronous=1), the response is returned once the session has completed, and reflects the execution result.

  • In asynchronous mode (Synchronous=0), the response is returned once the session is started, and only indicates whether the session was started correctly or not.

This operation returns a response in the following format:

<?xml version = '1.0' encoding = 'ISO-8859-1'?>
<ns2:OdiStartScenResponse xmlns:ns2="xmlns.oracle.com/odi/OdiInvoke/">
   <Session>543001</Session>
</ns2:OdiStartScenResponse>

21.11.3 Monitoring a Session Status Using a Web Service

The getSessionStatus operation of the agent web service returns the status of one or more sessions in a given repository, identified by their Session Numbers provided in the SessionIds element. It manages both running and completed sessions.

      <OdiGetSessionsStatusRequest>
         <Credentials>
            <OdiUser>odi_user</OdiUser>
            <OdiPassword>odi_password</OdiPassword>
            <WorkRepository>work_repository</WorkRepository>
         </Credentials>
         <SessionIds>session_number</SessionIds>
      </OdiGetSessionsStatusRequest>

This operation returns a response in the following format:

      <SessionStatusResponse>
         <SessionId>session_id</SessionId>
         <SessionStatus>status_code</SessionStatus>
         <SessionReturnCode>return_code</SessionReturnCode>
      </SessionStatusResponse>

The SessionReturnCode value is zero for successful sessions. The possible status codes are:

  • D: Done

  • E: Error

  • M: Warning

  • Q: Queued

  • R: Running

  • W: Waiting

21.11.4 Restarting a Session Using a Web Service

The invokeRestartSess operation of the agent web service restarts a session identified by its session number (provided in the SessionID element) in a given work repository. The session is executed by the agent providing the web service.

Only sessions in status Error or Waiting can be restarted. The session will resume from the last non-completed task (typically, the one in error).

Note that you can change the value of the variables, or use the KeepVariables boolean element to reuse variable values from the previous session run.

      <invokeRestartSessRequest> 
         <Credentials> 
            <OdiUser>odi_user</OdiUser> 
            <OdiPassword>odi_password</OdiPassword> 
            <WorkRepository>work_repository</WorkRepository>
         </Credentials> 
         <Request> 
            <SessionID>session_number</SessionID> 
            <Synchronous>synchronous</Synchronous>
            <KeepVariables>0|1</KeepVariables>
            <LogLevel>log_level</LogLevel>
            <Variables>
            <Name>variable_name</Name>
            <Value>variable_value</Value>
            </Variables>
         </Request> 
      </invokeRestartSessRequest> 

This operation returns a response similar to invokeStartScen, depending on the Synchronous element's value.

21.11.5 Executing a Load Plan Using a Web Service

The invokeStartLoadPlan operation of the agent web service starts a Load Plan in a given work repository. The Load Plan is executed by the agent providing the web service. Note the following concerning the parameters of the invokeStartLoadPlan operation:

  • OdiPassword: Use a password in clear text.

  • Context: Use the context code.

  • Keywords: If you use several keywords, enter a comma separated list of keywords.

  • Name: Use the fully qualified name for variables: GLOBAL.variable_name or PROJECT_CODE.variable_name

The following shows the format of the OdiStartLoadPlanRequest.

<OdiStartLoadPlanRequest>
   <Credentials> 
      <OdiUser>odi_user</OdiUser> 
      <OdiPassword>odi_password</OdiPassword> 
      <WorkRepository>work_repository</WorkRepository>
   </Credentials> 
   <StartLoadPlanRequest>
      <LoadPlanName>load_plan_name</LoadPlanName>
      <Context>context</Context>
      <Keywords>keywords</Keywords>
      <LogLevel>log_level</LogLevel>
      <LoadPlanStartupParameters>
         <Name>variable_name</Name>
         <Value>variable_value</Value>
      </LoadPlanStartupParameters>
    </StartLoadPlanRequest>
</OdiStartLoadPlanRequest>

The invokeStartLoadPlan operation returns the following values in the response:

  • Load Plan Run ID

  • Load Plan Run Count

  • Master Repository ID

  • Master Repository timestamp

The following is an example of an OdiStartLoadPlan response:

<?xml version = '1.0' encoding = 'UTF8'?>
<ns2:OdiStartLoadPlanResponse xmlns:ns2="xmlns.oracle.com/odi/OdiInvoke/">
   <executionInfo>
      <StartedRunInformation>
         <OdiLoadPlanInstanceId>2001</OdiLoadPlanInstanceId>
         <RunCount>1</RunCount>
         <MasterRepositoryId>0</MasterRepositoryId>
         <MasterRepositoryTimestamp>1290196542926</MasterRepositoryTimestamp>
      </StartedRunInformation>
   </executionInfo>
</ns2:OdiStartLoadPlanResponse>

21.11.6 Stopping a Load Plan Run Using a Web Service

The invokeStopLoadPlan operation of the agent web service stops a running Load Plan run identified by the Instance ID and Run Number in a given work repository. The Load Plan instance is stopped by the agent providing the web service. Note that the StopLevel parameter can take the following values:

  • NORMAL: Waits until the current task finishes and then stops the session.

  • IMMEDIATE: Stops the session immediately, cancels all open statements and then rolls back the transactions.

See Section 21.8, "Stopping a Load Plan Run" for more information on how to stop a Load Plan run and Section 21.11.5, "Executing a Load Plan Using a Web Service" for more information on the other parameters used by the invokeStopLoadPlan operation.

<OdiStopLoadPlanRequest>
   <Credentials> 
      <OdiUser>odi_user</OdiUser> 
      <OdiPassword>odi_password</OdiPassword> 
      <WorkRepository>work_repository</WorkRepository>
   </Credentials> 
   <OdiStopLoadPlanRequest>
      <LoadPlanInstanceId>load_plan_instance_id</LoadPlanInstanceId>
      <LoadPlanInstanceRunCount>load_plan_run_count</LoadPlanInstanceRunCount>
      <StopLevel>stop_level</StopLevel>
   </OdiStopLoadPlanRequest>
</OdiStopLoadPlanRequest>

The invokeStopLoadPlan operation returns the following values in the response:

  • Load Plan Run ID

  • Load Plan Run Count

  • Master Repository ID

  • Master Repository timestamp

The following is an example of an OdiStopLoadPlan response:

<S:Envelope xmlns:S="http://schemas.xmlsoap.org/soap/envelope/">
   <S:Body>
      <ns2:OdiStopLoadPlanResponse xmlns:ns2="xmlns.oracle.com/odi/OdiInvoke/">
         <executionInfo>
            <StoppedRunInformation>
              <OdiLoadPlanInstanceId>3001</OdiLoadPlanInstanceId>
              <RunCount>1</RunCount>
              <MasterRepositoryId>0</MasterRepositoryId>
              <MasterRepositoryTimestamp>1290196542926</MasterRepositoryTimestamp>
            </StoppedRunInformation>
         </executionInfo>
      </ns2:OdiStopLoadPlanResponse>
   </S:Body>
</S:Envelope>

21.11.7 Restarting a Load Plan Instance Using a Web Service

The invokeRestartLoadPlan operation of the agent web service restarts a Load Plan instance identified by the Instance ID in a given work repository. The Load Plan instance is restarted by the agent providing the web service.

<OdiRestartLoadPlanRequest>
   <Credentials> 
      <OdiUser>odi_user</OdiUser> 
      <OdiPassword>odi_password</OdiPassword> 
      <WorkRepository>work_repository</WorkRepository>
   </Credentials> 
   <RestartLoadPlanRequest>
      <LoadPlanInstanceId>load_plan_instance_id</LoadPlanInstanceId>
      <LogLevel>log_level</LogLevel>
   </RestartLoadPlanRequest>
</OdiRestartLoadPlanRequest>

21.11.8 Monitoring a Load Plan Run Status Using a Web Service

The getLoadPlanStatus operation of the agent web service returns the status of one or more Load Plans by their Instance ID and Run Number in a given repository. It manages both running and completed Load Plan instances.

<OdiGetLoadPlanStatusRequest> 
   <Credentials> 
       <OdiUser>odi_user</OdiUser> 
       <OdiPassword>odi_password</OdiPassword> 
      <WorkRepository>work_repository</WorkRepository>
   </Credentials> 
   <LoadPlans> 
      <LoadPlanInstanceId>load_plan_instance_id</LoadPlanInstanceId>
      <LoadPlanRunNumber>load_plan_run_number</LoadPlanRunNumber>
   </LoadPlans> 
</OdiGetLoadPlanStatusRequest> 

The getLoadPlanStatus operation returns the following values in the response:

  • Load Plan Run ID

  • Load Plan Run Count

  • Load Plan Run return code

  • Load Plan message

The following is an example of an OdiGetLoadPlanStatus response:

<S:Envelope xmlns:S="http://schemas.xmlsoap.org/soap/envelope/">
   <S:Body>
      <ns2:OdiGetLoadPlanStatusResponse xmlns:ns2="xmlns.oracle.com/odi/OdiInvoke/">
         <LoadPlanStatusResponse>
            <LoadPlanInstanceId>3001</LoadPlanInstanceId>
            <LoadPlanRunNumber>1</LoadPlanRunNumber>
            <LoadPlanStatus>E</LoadPlanStatus>
            <LoadPlanReturnCode>ODI-1530</LoadPlanReturnCode>
            <LoadPlanMessage>ODI-1530: Load plan instance was stopped by user request.</LoadPlanMessage>
         </LoadPlanStatusResponse>
      </ns2:OdiGetLoadPlanStatusResponse>
   </S:Body>
</S:Envelope>

21.11.9 Listing Contexts Using a Web Service

The listContext operation of the public web service lists all the contexts present in the master repository.

   <listContextRequest> 
      <OdiUser>odi_user</OdiUser> 
      <OdiPassword>odi_password</OdiPassword> 
   </listContextRequest>

21.11.10 Listing Scenarios Using a Web Service

The listScenario operation of the public web service lists all the scenarios present in a given work repository.

   <listScenarioRequest> 
      <OdiUser>odi_user</OdiUser> 
      <OdiPassword>odi_password</OdiPassword> 
      <WorkRepository>work_repository</WorkRepository> 
   </listScenarioRequest>

21.11.11 Accessing the Web Service from a Command Line

Oracle Data Integrator provides two shell scripts for UNIX platforms that use the run-time agent web service operations to start and monitor scenarios from a command line:

  • startscenremote.sh starts a scenario on a remote agent via its web service. The scenario can be started synchronously or asynchronously. When started asynchronously, the script can poll regularly for the session status until the session completes or a timeout is reached.

  • getsessionstatusremote.sh gets the status of a session via the web service interface. This second script is also used by the startscenremote.sh script.

Before accessing a web service from a command line, carefully read the following important notes:

  • The command line scripts, which are required for performing the tasks described in this section, are only available if you have installed the Oracle Data Integrator Standalone Agent. See the Oracle Fusion Middleware Installation Guide for Oracle Data Integrator for information about how to install the Standalone Agent.

  • Unlike the startscen.sh command line, these scripts rely on the lightweight WGET utility installed with the UNIX or Linux platform to perform the web service calls. They do not use any Java code and use a polling mechanism to reduce the number of running processes on the machine. These scripts are therefore suitable when a large number of scenarios and sessions need to be managed simultaneously from a command line.

Starting a Scenario

To start a scenario from a command line via the web service:

  1. Change directory to the /agent/bin directory of the Oracle Data Integrator installation.

  2. Enter the following command to start a scenario.

    On UNIX systems:

    ./startscenremote.sh <scenario_name> <scenario_version> <context_code> <work_repository> <remote_agent_url> <odi_user> <odi_password> -l <log_level> -s <sync_mode> -n <session_name> -k <session_keyword> -a <assign_variable> -t <timeout> -i <interval> -h <http_timeout> -v

Table 21-9 lists the different parameters of this command, both mandatory and optional.

Table 21-9 Startscenremote command Parameters

Parameters / Description

<scenario_name>

Name of the scenario (mandatory).

<scenario_version>

Version of the scenario (mandatory). If the version specified is -1, the latest version of the scenario is executed.

<context_code>

Code of the execution context (mandatory).

<work_repository>

Name of the work repository containing the scenario.

<remote_agent_url>

URL of the run-time agent that will run this session.

<odi_user>

Name of the user used to run this session.

<odi_password>

This user's password.

-l <log_level>

Level of logging information to retain.

This parameter is in the format <n> where <n> is the expected logging level, between 0 and 6. The default log level is 5.

Note that log level 6 has the same behavior as log level 5, with the addition of variable tracking. See Section 12.2.3.11, "Tracking Variables and Sequences" for more information.

Example: -l 5

-s <sync_mode>

Execution mode:

  • 0: Synchronous

  • 1: Asynchronous (Do not wait for session completion)

  • 2: Asynchronous (Wait for session completion).

-n <session_name>

Name of the session

-k <session_keyword>

List of keywords attached to this session. These keywords make session identification easier. The list is a comma-separated list of keywords.

-a <assign_variable>

Assign variable. Allows you to assign a <value> to a <variable> for the execution of the scenario. <variable> is either a project or global variable. Project variables should be named <Project Code>.<Variable Name>. Global variables should be named GLOBAL.<Variable Name>.

This parameter can be repeated to assign several variables.

Do not use a hash sign (#) to prefix the variable name on the command line.

For example: -a PROJ1.VAR1=100

-t <timeout>

Timeout in seconds for waiting for session to complete if sync_mode = 2.

-i <interval>

Polling interval for session status if sync_mode = 2.

-h <http_timeout>

HTTP timeout for the web services calls.

-v

Verbose mode.
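For example, the following call (repository, agent, and credential values are hypothetical) starts version 001 of the scenario LOAD_DWH in the GLOBAL context, waits for the session to complete, and polls its status every 30 seconds with a one-hour timeout:

    ./startscenremote.sh LOAD_DWH 001 GLOBAL WORKREP http://odihost:20910/oraclediagent myuser mypassword -l 5 -s 2 -t 3600 -i 30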


Monitoring a Session Status

To monitor the status of a session from a command line via the web service:

  1. Change directory to the /agent/bin directory of the Oracle Data Integrator installation.

  2. Enter the following command to get the session status.

    On UNIX systems:

    ./getsessionstatusremote.sh <session_number> <work_repository> <remote_agent_url> <odi_user> <odi_password> -w <wait_mode> -t <timeout> -i <interval> -h <http_timeout> -v

Table 21-10 lists the different parameters of this command, both mandatory and optional.

Table 21-10 GetSessionStatusRemote command Parameters

Parameters / Description

<session_number>

Number of the session to monitor.

<work_repository>

Name of the work repository containing the scenario.

<remote_agent_url>

URL of the run-time agent that is queried for the session status.

<odi_user>

Name of the user used to run this session.

<odi_password>

This user's password.

-w <wait_mode>

Wait mode:

  • 0: Do not wait for session completion, report current status.

  • 1: Wait for session completion then report status.

-t <timeout>

Timeout in seconds for waiting for session to complete if wait_mode = 1.

-i <interval>

Polling interval for session status if wait_mode = 1.

-h <http_timeout>

HTTP timeout for the web services calls.

-v

Verbose mode.
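For example, the following call (again with hypothetical values) waits for session 543001 to complete and then reports its status:

    ./getsessionstatusremote.sh 543001 WORKREP http://odihost:20910/oraclediagent myuser mypassword -w 1 -t 3600 -i 30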


21.11.12 Using the Run-Time Web Services with External Authentication

The web services examples in this chapter use an ODI authentication within the SOAP body, using the OdiUser and OdiPassword elements.

When external authentication is set up for the repository and container-based authentication with Oracle Platform Security Services (OPSS) is configured (see Section 24.3.2, "Setting Up External Authentication" for more information), the authentication can be passed to the web service using HTTP basic authentication, WS-Security headers, SAML tokens, and so forth. OPSS will transparently handle the authentication on the server side with the identity provider. In such a situation, the OdiUser and OdiPassword elements can be omitted.

The run-time web services will first try to authenticate using OPSS. If no authentication parameters have been provided, OPSS uses the anonymous user, and the OdiUser and OdiPassword elements are checked. Otherwise (that is, if invalid credentials are passed to OPSS), OPSS throws an authentication exception and the web service is not invoked.


Note:

OPSS authentication is only possible for a Public Web Service or Java EE Agent deployed in an Oracle WebLogic Server.

21.11.13 Using WS-Addressing

The web services described in this chapter optionally support WS-Addressing. WS-Addressing allows a reply to be sent to a given endpoint when a run-time web service call completes. For this purpose, two endpoints, ReplyTo and FaultTo, can be optionally specified in the SOAP request header.

These endpoints are used in the following way:

  • When the run-time web service call completes successfully, the result of an Action is sent to the ReplyTo endpoint.

  • If an error is encountered with the SOAP request or if Oracle Data Integrator is unable to execute the request, a message is sent to the FaultTo address. If the FaultTo address has not been specified, the error is sent to the ReplyTo address instead.

  • If the Oracle Data Integrator Agent encounters errors while processing the request and needs to raise an ODI error message, this error message is sent back to the ReplyTo address.

Note that callback operations do not operate in callback mode unless a valid ReplyTo address is specified.

The following is an example of a request that is sent to retrieve the session status for session 20001:

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:odi="xmlns.oracle.com/odi/OdiInvoke/">
<soapenv:Header xmlns:wsa="http://www.w3.org/2005/08/addressing">
<wsa:Action soapenv:mustUnderstand="1">xmlns.oracle.com/odi/OdiInvoke/getSessionStatus</wsa:Action>
<wsa:ReplyTo soapenv:mustUnderstand="1">
<wsa:Address>http://host001:8080/examples/servlets/servlet/RequestPrinter</wsa:Address>
</wsa:ReplyTo>
<wsa:MessageID soapenv:mustUnderstand="1">uuid:71bd2037-fbef-4e1c-a991-4afcd8cb2b8e</wsa:MessageID>
</soapenv:Header>
   <soapenv:Body>
      <odi:OdiGetSessionsStatusRequest>
         <Credentials>
            <!--You may enter the following 3 items in any order-->
            <OdiUser></OdiUser>
            <OdiPassword></OdiPassword>
            <WorkRepository>WORKREP1</WorkRepository>
         </Credentials>
         <!--Zero or more repetitions:-->
         <SessionIds>20001</SessionIds>
      </odi:OdiGetSessionsStatusRequest>
   </soapenv:Body>
</soapenv:Envelope>
 

The following call will be made to the ReplyTo address (http://host001:8080/examples/servlets/servlet/RequestPrinter).

Note that this call contains the response to the Action specified in the request, and includes the original MessageID to correlate request and response.

<?xml version='1.0' encoding='UTF-8'?>
<S:Envelope xmlns:S="http://schemas.xmlsoap.org/soap/envelope/">
<S:Header>
<To xmlns="http://www.w3.org/2005/08/addressing">http://host001:8080/examples/servlets/servlet/RequestPrinter</To>
<Action xmlns="http://www.w3.org/2005/08/addressing">xmlns.oracle.com/odi/OdiInvoke/:requestPortType:getSessionStatusResponse</Action>
<MessageID xmlns="http://www.w3.org/2005/08/addressing">uuid:eda383f4-3cb5-4dc2-988c-a4f7051763ea</MessageID>
<RelatesTo xmlns="http://www.w3.org/2005/08/addressing">uuid:71bd2037-fbef-4e1c-a991-4afcd8cb2b8e</RelatesTo>
</S:Header>
<S:Body>
<ns2:OdiGetSessionsStatusResponse xmlns:ns2="xmlns.oracle.com/odi/OdiInvoke/">
<SessionStatusResponse>
               <SessionId>26001</SessionId>
               <SessionStatus>D</SessionStatus>
               <SessionReturnCode>0</SessionReturnCode>
           </SessionStatusResponse>
</ns2:OdiGetSessionsStatusResponse>
    </S:Body>
</S:Envelope>

For more information on WS-Addressing, see the World Wide Web Consortium (W3C) WS-Addressing specifications.

21.11.14 Using Asynchronous Web Services with Callback

Long-running web service operations can be started asynchronously following the pattern of JRF asynchronous web services or asynchronous BPEL processes. These follow a "request-response port pair" pattern.

In this pattern, the web service client implements a callback operation. When the server completes the operation requested by the client, it sends the result to this callback operation.

Two specific operations in the agent web service support this pattern: invokeStartScenWithCallback and invokeRestartSessWithCallback.

These operations provide the following features:

  • They do not return any response. These are one-way operations.

  • The client invoking these two operations must implement the invokeStartScenCallback and invokeRestartSessCallback one-way operations, respectively. Results from the invokeStartScenWithCallback and invokeRestartSessWithCallback actions are sent to these operations.

  • The invocation should provide in the SOAP header the ReplyTo and possibly FaultTo addresses. If the methods are invoked without a ReplyTo address, the operation will execute synchronously (which corresponds to an invokeStartScen or invokeRestartSess operation). When a fault is generated in the operation, it will be sent to the ReplyTo address or FaultTo address.

A scenario or session started using invokeStartScenWithCallback or invokeRestartSessWithCallback will start without returning any SOAP response, as these are one-way operations. When the session completes, the response is sent to the callback address.
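The following sketch illustrates such an invocation, assuming that invokeStartScenWithCallback accepts the same request body as the OdiStartScenRequest shown in Section 21.11.2, "Executing a Scenario Using a Web Service". The wsa:Action URI and the callback address are hypothetical and are modeled on the WS-Addressing example in Section 21.11.13, "Using WS-Addressing":

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:odi="xmlns.oracle.com/odi/OdiInvoke/">
<soapenv:Header xmlns:wsa="http://www.w3.org/2005/08/addressing">
<!--Hypothetical action URI for the one-way operation-->
<wsa:Action soapenv:mustUnderstand="1">xmlns.oracle.com/odi/OdiInvoke/invokeStartScenWithCallback</wsa:Action>
<!--The session result will be sent to this callback endpoint-->
<wsa:ReplyTo soapenv:mustUnderstand="1">
<wsa:Address>http://host001:8080/callback/StartScenCallback</wsa:Address>
</wsa:ReplyTo>
<wsa:MessageID soapenv:mustUnderstand="1">uuid:71bd2037-fbef-4e1c-a991-4afcd8cb2b8e</wsa:MessageID>
</soapenv:Header>
   <soapenv:Body>
      <odi:OdiStartScenRequest>
         <Credentials>
            <OdiUser>odi_user</OdiUser>
            <OdiPassword>odi_password</OdiPassword>
            <WorkRepository>work_repository</WorkRepository>
         </Credentials>
         <Request>
            <ScenarioName>scenario_name</ScenarioName>
            <ScenarioVersion>scenario_version</ScenarioVersion>
            <Context>context</Context>
         </Request>
      </odi:OdiStartScenRequest>
   </soapenv:Body>
</soapenv:Envelope>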


Note:

Oracle BPEL takes care of automatically implementing these operations and sends out WS-Addressing headers that point to these endpoints.


What's New In Oracle Data Integrator?

This document describes the new and enhanced features introduced with Oracle Data Integrator 11g Release 1 (11.1.1).

This chapter includes the following sections:

New Features in Oracle Data Integrator 11gR1 PS2 (11.1.1.6)

The second Oracle Data Integrator 11gR1 Patch Set introduces the following enhancements:

Shortcuts

This ODI release introduces new objects called shortcuts. Shortcuts greatly improve productivity by allowing end users to express the large commonality that often exists between two versions of the same source application, such as identical tables and columns, constraints, and transformations.

Shortcuts are links to common Oracle Data Integrator objects stored in separate locations and can be created for datastores, integration interfaces, packages, and procedures. In addition, release tags have been introduced to manage the materialization of shortcuts based on specific tags.

Tracking Variables and Sequences

Variables and sequences are often used in Oracle Data Integrator processes. Oracle Data Integrator 11.1.1.6 introduces a new feature allowing end users to determine the actual values of variables and sequences that were used during an executed session. Tracking variables and sequences is extremely useful for debugging purposes.

With the variable tracking feature you can also easily determine whether the variable was used in a mapping or an internal operation such as an Evaluate Variable step.

Global Knowledge Modules

ODI 11.1.1.6 introduces Global Knowledge Modules (KMs) allowing specific KMs to be shared across multiple projects. In previous versions of ODI, Knowledge Modules were always specific to a Project and could only be used within the project into which they were imported. Global KMs are listed in the Designer Navigator in the Global Objects accordion.

Enhanced Session Logging

The readability of the execution logs has been improved in this release for Knowledge Modules and Procedure commands. The final code for source and target commands is now available in the Operator Navigator, making it easier to review executions containing several runtime parameters.

Handling Failed Load Plan Enhancements

It is now possible to change the execution status of a failed Load Plan step from Error to Done on the Steps tab of the Load Plan run Editor, so that this particular step is ignored the next time the Load Plan run is restarted. This is useful, for example, when the error causing the step to fail cannot be fixed at the moment, but you still want to execute the rest of the Load Plan regardless of this step's status.

Enhanced Variable Handling in Load Plans

Load Plan variables that are not used in a Load Plan can now be hidden to improve the readability of Load Plans.

Smart Export and Import

Exporting and importing Oracle Data Integrator objects between repositories is a common practice when working with multiple environments such as Development, Quality Assurance and Production. The new Smart Export and Import feature guides you through this task and provides advanced code management features.

Smart Export automatically exports an object with all its object dependencies. It is particularly useful when you want to move a consistent lightweight set of objects from one repository to another including only a set of modified objects.

The Smart Export and Import feature is a lightweight and consistent export and import mechanism providing several key features such as:

  • Automatic and customizable object matching rules between the objects to import and the objects already present in the repository

  • A set of actions that can be applied to the object to import when a matching object has been found in the repository

  • Proactive issue detection and resolution that suggests a default working solution for every broken link or conflict detected during the Smart Import

Enterprise Data Quality Integration

With the EnterpriseDataQuality Open Tool it is now possible to invoke an Oracle Enterprise Data Quality (Datanomic) Job in a Package. Developers can design a Data Quality process in Oracle Enterprise Data Quality and invoke it in a Package in ODI along with the ETL steps.

The EnterpriseDataQuality Open Tool is installed using the standard procedure for Open Tools and can be used in a Package or a Procedure, similarly to the tools provided out of the box in Oracle Data Integrator.

Groovy Editor

This release introduces the Groovy editor. The Groovy editor provides a single environment for creating, editing, and executing Groovy scripts within the ODI Studio context. It provides all standard features of a code editor such as syntax highlighting and common code editor commands.

Support of Undo and Redo Operations

It is now possible to undo or redo changes in editors, dialogs, wizards, and in the Property Inspector using the following keyboard shortcuts: CTRL+Z and CTRL+Y.

Autocomplete for Text Fields and Lists

Certain text components and drop down lists in the ODI Studio now support the autocomplete feature, making end users more productive.

Version Numbering for Knowledge Modules

The version numbering of Knowledge Modules improves the information provided for identifying the environment in use:

  • It is now possible to determine when a KM has been modified and when it is not the original Knowledge Module as released by Oracle.

  • The KM modifications can be tracked by a version number.

  • It is now possible to find out when a KM has been released with an external component such as a jar file or a dll file (this is the case, for example, for the SAP and Hyperion KMs).

  • It is possible to indicate whether a given ODI version is compatible with the KM version.

New Features in Oracle Data Integrator 11gR1 PS1 (11.1.1.5)

The first Oracle Data Integrator 11gR1 Patch Set introduces the following enhancements:

Load Plans

Load Plans are new objects introduced in this release to organize at a high level the execution of packages and scenarios. Load Plans provide features for parallel, sequential, and conditional scenario execution, restartability, and exception handling. Load Plans can be created and modified in production environments.

OBIEE Lineage

Oracle Business Intelligence Enterprise Edition (OBIEE) users need to know the origin of the data displayed on their reports. When this data is loaded from source systems into the data warehouse using Oracle Data Integrator, it is possible to use the Oracle Data Integrator Lineage for Oracle Business Intelligence feature to consolidate ODI metadata with OBIEE and expose this metadata in a report-to-source data lineage dashboard in OBIEE.

Commands on Connect/Disconnect

It is possible to define, for a data server, commands that are automatically executed when connections to this data server are created or closed by ODI components or by a session.

Complex File Technology

Complex file formats (multiple record files) can now be integrated using the new Complex File technology. This technology leverages a new built-in driver that transparently converts complex file formats into a relational structure using a Native Schema (nXSD) description file.

Groovy Technology

Groovy is added to the list of scripting engines supported by Oracle Data Integrator for use in knowledge modules and procedures.

Web Services Enhancements

Web service support in Oracle Data Integrator has been enhanced with the following features:

  • Support for Container Based Authentication: When external authentication and container based authentication with Oracle Platform Security Services (OPSS) are configured, authentication can be passed to the ODI Run-Time Web Services using HTTP basic authentication, WS-Security headers, SAML tokens and so forth and not in the SOAP request.

  • Support for Asynchronous calls and Callback: A scenario or session can be started using the Run-Time Web Services on a one-way operation. When the session completes, the result of the execution can trigger an operation on a callback address. This pattern can be used for handling long-running sessions started, for example, from Oracle BPEL.

  • Full SOAP edition for outbound web services calls: The OdiInvokeWebService tool now supports full-formed SOAP messages including the SOAP header and body.

Built-in Technology Additions and Updates

The following technologies used in Oracle Data Integrator have been added and updated:

  • Embedded HSQL engine is upgraded to version 2.0. This embedded engine is used for the Memory Engine as well as the XML and LDAP Drivers' built-in storage

  • Jython BSF engine is updated to version 2.1

  • JAX-WS/JRF is now used as a standard stack for web service calls and processing. Axis is no longer used

Support for Technologies with Ordered and Non-Ordered Join Syntax

Technologies can now support both the ordered or non-ordered (database-specific) syntax for joins. The Oracle DB technology was modified to support both join syntaxes.

New Method for Setting Task Names

A new setTaskName method is available to update at run-time the name of a task.

Shared Library for WLS Agent

A new template called Oracle Data Integrator - Agent Libraries includes libraries shared by all the Java EE agents deployed in a domain, and must be deployed before the Oracle Data Integrator - Agent default template or a generated template.

Performance Optimization

The following optimizations have been made in the design-time and run-time components to improve their performance:

  • Long texts storage modified to use CLOBs

  • Agent-Repository network communications reduced at run-time

  • Agent JDBC to JDBC loading mechanism reviewed and optimized

New Features in Oracle Data Integrator 11gR1 (11.1.1.3)

This first release of Oracle Data Integrator introduces a large number of new features, grouped in this section by release themes.

This section includes the following topics:

  • Release Themes. This section provides the primary themes of this release and associated features.

  • New Features. This section provides a complete list of the new features for this release.

Release Themes

While the new features of Oracle Data Integrator for this release cover a number of different areas, the most important changes for new and existing customers are:

New Architectures Supported for Enterprise-Scale Deployment Options

Oracle Data Integrator now provides several deployment options for lightweight standalone deployments and enhanced architectures for deployments based on cluster-able and fault tolerant application server frameworks. Features in this area include:

Core Design-Time Features for Enhanced Productivity and Performance

Oracle Data Integrator now provides a set of core features for increasing development productivity and the performance of the integration flows. Features in this area include:

Standard JDeveloper-Based IDE: Oracle Data Integrator Studio

The Oracle Data Integrator User Interface now uses the JDeveloper-based integrated development environment (IDE) and is renamed Oracle Data Integrator Studio. The user interface has been entirely redesigned in this release to improve developer productivity and make advanced features more accessible. This new IDE provides the following key features:

Developer Usability and Productivity Enhancements

In addition to the entire redesign of the development interface, features have been added to improve the developer's experience and productivity while working in Oracle Data Integrator. Features in this area include:

New Features for Administration

Features have been added to improve manageability of the Oracle Data Integrator components and sessions. Features in this area include:

Enhanced Diagnostic Features and Capabilities

Oracle Data Integrator has been improved with features to facilitate problems troubleshooting and fixing. Features in this area include:

Technologies and Knowledge Modules Enhancements

New technologies and knowledge modules planned for this release have been continuously delivered during the 10g lifecycle patch sets. In addition, existing knowledge modules and technologies have been enhanced to support new core product features and diagnostics.

Features added in 10g Release 3 patch sets include:

Features added for this release include:

New Features

Release 11.1.1 includes many new features. These features are listed below and grouped in the following component and functional areas:

Runtime Agent

Oracle Data Integrator runtime agent has been enhanced with the features listed in this section.

Java EE Agent

The Runtime Agent can now be deployed as a Java EE component within an application server. In this configuration, it benefits from application server features such as clustering and connection pooling for large configurations. This Java EE Agent exposes an MBeans interface enabling lifecycle operations (start/stop) from the application server console, and metrics that the application server console can use to monitor the agent's activity and health.

Standalone Agent

In addition to the Java EE Agent, a Standalone Agent, similar to the one available in previous Oracle Data Integrator releases, is still available. It runs in a simple Java Virtual Machine and can be deployed where needed to perform the integration flows.

Connected Scheduler

Both agent flavors are now always connected to a master repository, and are started with the built-in scheduler service. This scheduler service takes its schedules from all the Work Repositories attached to the connected Master.

HTTP Protocol for Component Communication

Communications with the run-time agents (for example, when sending an execution request to a remote agent) now use standard HTTP protocol. This feature facilitates network management and security for Oracle Data Integrator components in distributed environments.

Oracle WebLogic Server Integration

Oracle Data Integrator components integrate seamlessly with Oracle's Java EE application server.

Java EE Agent Template Generation

Oracle Data Integrator provides a wizard to automatically generate templates for deploying Java EE agents in Oracle WebLogic Server. Such a template includes the Java EE Agent and its configuration, and can optionally include the JDBC datasource definitions required for this agent, as well as the driver and library files required for these datasources to work.

By using the Oracle WebLogic Configuration Wizard, domain administrators can extend their domains or create a new domain for the Oracle Data Integrator Java EE runtime agents.

Automatic Datasource Creation for WebLogic Server

Java EE Components use JDBC datasources to connect to the repositories as well as to the source and target data servers, and benefit, when deployed in an application server, from the connection pooling feature of their container.

To facilitate the creation of these datasources in the application server, Oracle Data Integrator Studio provides an option to deploy a datasource into a remote Oracle WebLogic application server.

Pre-Packaged WebLogic Server Templates for Java EE Components

Oracle Data Integrator Java EE components that can be deployed in an application server are provided now with pre-packaged templates for Oracle WebLogic Server. Oracle Data Integrator provides templates for:

  • Java EE Runtime Agent

  • Oracle Data Integrator Console

  • Public Web Service

These templates are used to create a WLS domain for Oracle Data Integrator or extend an existing domain with Oracle Data Integrator components.

Web Services

Oracle Data Integrator web services support has been enhanced with the features listed in this section.

JAX-WS Support for Web Services

Oracle Data Integrator Web Services - including the Public Web Service as well as the generated Data Services - now support the market standard Java API for XML Web Services (JAX-WS 2.0). As a consequence, they can be deployed into any web service container that implements this API. The use of the Axis2 stack for these web services is deprecated.

Web Services Changes and Reorganization

The web services have been reorganized and the following run-time web operations are now part of the run-time agent application:

  • getVersion - Retrieve agent version. This operation is new in this version.

  • getSessionStatus - Retrieve the status of a session.

  • invokeRestartSess - Restart a session.

  • invokeStartScen - Start a scenario.

The Public Web Service application retains the following operations:

  • listScenario - List the scenarios.

  • listContext - List the contexts.

Advanced Security Capabilities

Security in Oracle Data Integrator can be hardened with the enterprise features listed in this section.

External Password Storage

Source and target data server passwords, as well as the context passwords, can optionally be stored in an external credential store instead of storing them in an encrypted form in the master repository. This credential store is accessed via the Java Platform Security (JPS) Credential Store Framework (CSF). The password storage method (internal or external with JPS) is defined at repository creation, and can be switched for existing repositories.

With this password storage approach, administrators can choose to rely on a corporate credential store for securing their data server passwords.

External Authentication and SSO

Oracle Data Integrator users can be authenticated using an external authentication service. Using Oracle Platform Security Services (OPSS), Oracle Data Integrator users authenticate against an external Enterprise Identity Store (LDAP, Oracle Internet Directory, Active Directory), which contains in a central place enterprise user and passwords.

With this feature, the master repository retains the Oracle Data Integrator-specific privileges and the user names, but passwords reside in a centralized identity store, and authentication always takes place against this external store. The authentication mode (internal or external) is defined at repository creation, and can be switched for existing repositories.

This feature enables Single Sign-On (SSO) for Oracle Data Integrator Console, and seamless authentication integration between Enterprise Manager and Oracle Data Integrator Console.

Default Password Policy

Oracle Data Integrator is now installed with a default password policy that prevents users from setting passwords with a low security level.

Java EE Components Passwords in Credential Store

When deploying in Oracle WebLogic Server a Java EE component that requires a bootstrap connection to a repository (Java EE Agent, Oracle Data Integrator Console), the configuration of this component contains a Supervisor user login. To enforce a strong security policy, this user's password is not stored within the application configuration but is centralized in the WLS Credential Store; the configuration refers to this centralized store.

Production and Monitoring

Oracle Data Integrator provides new features for an enhanced experience in production.

Enhanced Error Messages

Error messages raised by Oracle Data Integrator Components and Sessions have been enhanced to provide administrators and production operators with precise information for troubleshooting and fixing the status of the architecture, and debugging the sessions. Enhanced messages cover:

  • Component lifecycle (startup, shutdown, schedule refresh, etc.)

  • Session lifecycle (incorrect scenario version, load balancing issue, agent not available, etc.)

  • Session Tasks/Steps (source/target not available, interface error). Database errors are enriched with information allowing developers or production operators to quickly identify the location and reason for an error.

These error messages are standardized with Oracle Data Integrator error codes.

Enhanced Notifications and Logging

Oracle Data Integrator components are now using the Oracle Logging Framework. Logging in any component can be configured to meet the requirements of development, test and production environments.

In addition to this logging capability, agent components can now raise status and session information in the form of Java Management Extension (JMX) notifications that propagate to any administration console.

Error Tables

Error tables can now be managed via Oracle Data Integrator Console. Production operators can review the content of the error tables and purge their content selectively.

Purge Log on Session Count

The OdiPurgeLog tool has been enhanced to support purging the log while retaining only a given number of sessions in the log. Purged sessions can be automatically archived by the tool before the purge is performed.

New Oracle Data Integrator Console

The Metadata Navigator UI has been replaced with the Oracle Data Integrator Console. This web interface for production operations has been rewritten using the ADF-Faces Ajax Framework for a rich user experience. Using this console, production users can set up an environment, export/import the repositories, manage run-time operations, monitor the sessions, diagnose the errors, browse through design-time artifacts, and generate lineage reports.

This web interface integrates seamlessly with Oracle Fusion Middleware Control Console and allows Fusion Middleware administrators to drill down into the details of Oracle Data Integrator components and sessions.

Oracle Fusion Middleware Control Console Integration

Oracle Data Integrator provides an extension integrated into the Oracle Fusion Middleware Control Console. The Oracle Data Integrator components can be monitored as a domain via this console and administrators can have a global view of these components along with other Fusion Middleware components from a single administration console.

This extension discovers Oracle Data Integrator components and allows administrators to:

  • Monitor the status and view the metrics of the master and work repositories, Java EE and Standalone Agents components, and the Oracle Data Integrator Console

  • Review from a central location the notifications raised by any of these components

  • Transparently drill down into Oracle Data Integrator console to browse detailed information stored in the repositories

  • Start and stop Oracle Data Integrator Console and Java EE Agent applications

  • Monitor session executions and review session statistics attached to any of those components

  • Search for specific sessions, view a session status, and drill down into the session details in Oracle Data Integrator Console.

Kill Sessions Immediate

Sessions can now be stopped in an immediate mode. This new mode attempts to abort the current operation (for example, SQL statements launched against a database engine) instead of waiting for its completion.

High Availability

For an enterprise scale deployment, the following features enable high availability of the Oracle Data Integrator components.

Stale Session Detection and Management

Oracle Data Integrator is now able to detect sessions pending due to an unexpected shutdown of the agent or repository. Such stale sessions are now managed and pushed to an error state.

Repository Connection Retry

The Agent, when connected to a repository based on Oracle RAC technology, can be configured with connection retry logic. If one of the Oracle RAC nodes supporting sessions for an agent becomes unavailable, the agent is able to retry and continue its sessions on another node of the Oracle RAC infrastructure.

Support for WLS Clustering

Clustering is supported for Java EE agents deployed on a WebLogic Server. Clustering includes porting schedules to a different cluster node. Unrecoverable running sessions are automatically moved to an error state.

OPMN Integration

Standalone agents can now be made highly available using Oracle Process Manager and Notification Server (OPMN). Scripts are provided to configure OPMN to protect standalone agents against failure.

Improved Integration Design

Integration interface design and performance are enhanced with the following features.

Partitioning

Oracle Data Integrator now supports partitioning features of the data servers. Partitions can be reverse-engineered using RKMs or manually created in models. When designing an interface, it is possible to define the partition to address on the sources and target datastores. Oracle Data Integrator code generation handles the partition usage syntax for each technology that supports this feature.
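
For illustration, when partitions are selected on the source and target datastores of an interface running against Oracle, the generated code might resemble the following sketch (all table and partition names are hypothetical):

INSERT INTO W_SALES PARTITION (SALES_Q1) (CUST_ID, AMOUNT)
SELECT SRC.CUST_ID, SRC.AMOUNT
FROM SRC_SALES PARTITION (SALES_Q1) SRC;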

Lookups

A wizard is available in the interface editor to create lookups using a source of the interface as the driving table and a model or target datastore as the lookup table. These lookups now appear as a compact graphical object in the Sources diagram of the interface. The user can choose how the lookup is generated: as a Left Outer Join in the FROM clause or as an expression in the SELECT clause (in-memory lookup with nested loop). This second syntax is sometimes more efficient on small lookup tables.

This feature simplifies the design and readability of interfaces using lookups, and allows for optimized code for executing lookups.
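
For illustration, the two generation options correspond to SQL patterns similar to the following sketches, looking up CUSTOMER_NAME by CUSTOMER_ID (all object names are hypothetical):

-- Left Outer Join in the FROM clause
SELECT O.ORDER_ID, C.CUSTOMER_NAME
FROM ORDERS O LEFT OUTER JOIN CUSTOMERS C
  ON (O.CUSTOMER_ID = C.CUSTOMER_ID);

-- Expression in the SELECT clause (evaluated per row)
SELECT O.ORDER_ID,
       (SELECT C.CUSTOMER_NAME FROM CUSTOMERS C
         WHERE C.CUSTOMER_ID = O.CUSTOMER_ID) AS CUSTOMER_NAME
FROM ORDERS O;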

Datasets and Set-Based Operators

This major enhancement introduces the notion of dataset in interfaces. A dataset represents the data flow coming from a group of joined and filtered source datastores. Each dataset includes the target mappings for this group of sources. Several datasets can be merged into the interface target datastore using set-based operators such as Union and Intersect.

This feature accelerates the interface design and reduces the number of interfaces needed to merge several data flows into the same target datastore.
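
For illustration, an interface merging two datasets with a Union operator generates a flow similar to the following sketch (all object names are hypothetical):

INSERT INTO TGT_CUSTOMERS (CUST_ID, CUST_NAME)
SELECT CUST_ID, CUST_NAME FROM SRC_CUSTOMERS_US
UNION
SELECT CUST_ID, CUST_NAME FROM SRC_CUSTOMERS_EU;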

Derived Select for Temporary Interfaces

When using a temporary interface as a source or a lookup table in another interface, you can choose not to persist the target of the temporary interface, and instead generate a Derived Select (sub-select) statement corresponding to the loading of the temporary datastore. Consequently, the temporary interface no longer needs to be executed to load the temporary datastore. The code generated for the sub-select is either default generated code or a customized syntax defined in an IKM.

This feature eliminates the need for complex packages handling temporary interfaces and simplifies the execution of cascades of temporary interfaces.
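
For illustration, instead of persisting and reading a temporary target, the parent interface can embed the temporary interface's logic as a sub-select, similar to the following sketch (all object names are hypothetical):

SELECT T.CUST_ID, T.TOTAL_REVENUE
FROM (SELECT CUST_ID, SUM(AMOUNT) AS TOTAL_REVENUE
        FROM ORDERS
       GROUP BY CUST_ID) T
WHERE T.TOTAL_REVENUE > 1000;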

Support for Native Sequences

Oracle Data Integrator now provides support for a new type of sequence that directly maps to database-defined sequences. When created, these can be picked from a list retrieved from the database. Native Sequences are used as regular Oracle Data Integrator sequences, and the code generation automatically handles technology-specific syntax for sequences.

This feature simplifies the use of native sequences in all expressions, and enables cross references when using such sequences.
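
For illustration, a native sequence maps to a database-defined sequence, and the generated code uses the technology-specific syntax to reference it, similar to the following Oracle sketch (all object names are hypothetical):

CREATE SEQUENCE CUSTOMER_SEQ START WITH 1 INCREMENT BY 1;

INSERT INTO TGT_CUSTOMERS (CUST_ID, CUST_NAME)
SELECT CUSTOMER_SEQ.NEXTVAL, CUST_NAME
FROM SRC_CUSTOMERS;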

Support for Natural Joins

Oracle Data Integrator now provides support for the Natural join, defined at the technology level. This join does not require any join expression to be specified; it is handled by the engine that processes it, which automatically matches columns with the same name.
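
For illustration, a Natural join carries no join expression; the engine matches the identically named columns, as in the following sketch (all object names are hypothetical):

SELECT ORDER_ID, CUSTOMER_ID, CUSTOMER_NAME
FROM ORDERS NATURAL JOIN CUSTOMERS;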

Automatic Temporary Index Management

When creating joins or filters on source tables, it is possible to have Oracle Data Integrator automatically generate temporary indexes for optimizing the execution of these joins or filters. The user selects the type of index to create from the list of index types for the technology. Knowledge modules automatically generate the code for creating the indexes before executing the join and filters, and for deleting them after usage.

This feature provides automated optimization of join and filter execution, and enables better performance for integration interfaces.
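
For illustration, a knowledge module might wrap the join execution with index creation and cleanup steps similar to the following sketch (all object names are hypothetical):

CREATE INDEX TMP_IDX_ORDERS ON ORDERS (CUSTOMER_ID);
-- ... the join and filters are executed here ...
DROP INDEX TMP_IDX_ORDERS;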

New Interface Editor

The interface editor, used to create the integration interfaces, has been entirely redesigned to use the JDeveloper diagramming framework.

The advantages of this new diagram include:

  • Improved look and feel and better user experience

  • Support for graphical options on diagram objects. For example, compact and expanded view can be used for better readability.

  • Thumbnail and zoom in/out is supported on the sources and flow diagram to navigate large diagrams.

  • Multiple source columns can be dropped directly onto the target datastore for faster mapping.

  • The target mapping table is improved. Mapping properties (Position, Indicator, Name, and Mapping expression) can be displayed selectively and sorted.

  • Sources, targets, filters, and joins can be selected and edited directly in the flow diagram.

Quick-Edit

The new interface editor includes a new Quick-Edit tab to edit the interface diagram faster. Quick-Edit displays these components in a tabular form, supports mass-updates and intuitive keyboard navigation.

Auto fixing

When saving an interface or clicking the error button from the interface editor toolbar, a list of all the design errors in the interface is displayed with meaningful messages and tips. Automated fixes are suggested and can be applied with a single click.

Code Simulation

When performing an execution of design-time objects from the Oracle Data Integrator Studio (for example, when running an interface, a procedure, a package or a customized reverse-engineering process), it is possible to make a code simulation instead of a full execution.

Code simulation displays a session simulation report. This report includes complete session, step, and task information and contains the full generated code. The session simulation report can be reviewed and saved in XML or HTML format.

With this feature, Oracle Data Integrator developers can easily review the generated code for troubleshooting, debugging, and optimization purposes, and save this generated code for documentation or archive purposes.

Reverse-Engineering Improvements

When a model is created the reverse-engineering context is automatically set to the default context, instead of having to select it manually. In addition, when performing a selective reverse-engineering, the system tables are now hidden from the display.

Scenario Naming Convention

When generating a scenario or a group of scenarios from the Studio or using a tool, the naming convention that is used for naming the scenario can be defined in a pattern (using the object name, folder path or project name) using the Scenario Naming Convention user parameter.

Object Name Length Extension

Object names have been extended to support long database object names (128 characters) and repository object labels (400 characters).

Oracle Data Integrator Java API

Oracle Data Integrator provides a Java API for managing run-time and design time artifacts. Using this API, Java developers can embed Oracle Data Integrator in their product and can drive integration process creation from their own user interface.

Oracle Data Integrator Studio

Oracle Data Integrator provides a new IDE called the Studio, based on JDeveloper. This component includes the following features:

New Navigator Organization

The new Oracle Data Integrator Studio replaces all Oracle Data Integrator modules (Designer, Topology, Operator, and Security Manager). All the features of these modules now appear as Navigators within the Oracle Data Integrator Studio window.

This new Navigator organization provides the following features:

  • Navigators can be docked/undocked and displayed/hidden using the View menu. These Navigators allow access to the former module-specific actions from their Navigator toolbar menu (for example, the export/import master repository operations in the Topology Navigator)

  • Accordions group the tree views that appear in the Navigators (for example the Project and Models accordions in the Designer Navigator). Accordions that are not frequently used can be minimized into the lower section of the Navigator to allow more room for the other tree views. Accordions allow access to the tree view-specific actions from their toolbar menu (for example, import project from the Project Accordion in the Designer Navigator).

  • Tree view objects are provided with context menus and markers in the same way as in Oracle Data Integrator 10g. Tree view objects can be dragged and dropped within a tree view or across tree views for defining the security policies. Double-clicking an object opens the corresponding Object Editor by default.

  • Context Menus have been reorganized into groups with separators and normalized across the interface.

This feature provides a single user interface from which the user can perform all the tasks in a project lifecycle. It also improves user productivity.

New Look and Feel

The look and feel of Oracle Data Integrator has been enhanced with the use of the JDeveloper base IDE. This new look and feel is customizable via the Preferences menu option. Icons have been redesigned in a new style to enhance the overall visual appeal of Oracle Data Integrator.

Redesigned Editors

All object editors in Oracle Data Integrator have been redesigned for better usability.

Main changes include:

  • Tabs are organized as finger tabs on the left-hand side of the editor. Complex editors (for example, the Interface or Package editors) also have tabs at the bottom of the editor.

  • Fields have been grouped under headers. These field groups implement an expand/collapse behavior.

  • Fields and labels have been organized in a standard way for all editors for a better readability of the editors.

  • Text Buttons in the editors are transformed into hyperlinks, and all buttons appearing in editors have been redesigned.

  • Knowledge Modules, Actions and Procedure editors have been redesigned in order to edit the Lines directly from the main editor instead of opening a separate editor.

Window Management

The windows, editors and navigators in the Oracle Data Integrator Studio benefit from the following JDeveloper IDE features:

  • Full Docking Support: All windows, editors and navigators can now be docked and undocked intuitively. The visual feedback provided when repositioning editor windows and dockable windows has been improved. You now see an outline shape of where the window will be placed when the mouse is released. You can also now reorder the document tabs using drag and drop.

  • Fast maximize and restore: To quickly maximize a dockable window or the editor area, double-click on the title bar of the window you want to maximize. To restore the window to its previous dimensions, double-click again on the title bar.

  • Title bars as tabs: The tab for a dockable window (when tabbed with another dockable window) is now also the title bar. This makes more effective use of the space on the screen. Reposition a window by dragging its tab. Some additional related enhancements include a new context menu from the gray background area behind the tab, change in terminology from "auto-hide" and "show" to "minimize" and "restore", ability to minimize a set of tabbed windows with a single click, and toggling the display of a minimized window by clicking on its button.

Document Management and Navigation

Object editing has been enhanced in the Oracle Data Integrator Studio with improved document management. This includes:

  • Save and close multiple editors: You can easily save all your work with a single click using the File > Save All option and close all opened editors similarly. You can also close all the editors but the current one.

  • Forward and back buttons: Now you can easily return to a previously visited document with the convenient browser-style forward and back buttons on the main toolbar. These buttons maintain a history, so you can drop down the back or forward button to get a list of the documents and edit locations you have visited. Alt+Left and Alt+Right activate the back and forward buttons.

  • Quick document switching: Switching between editors and navigators is also possible. Now when you press Ctrl+Tab or Ctrl+F6, you can choose the document you want to switch to from a list ordered by the most recently used. You can use the same technique to switch between open dockable windows by first placing focus in a dockable window, then pressing Ctrl+Tab or Ctrl+F6.

Improved User Assistance

Oracle Data Integrator introduces intuitive new features that improve usability:

  • Help Center/Welcome Page: The Welcome page has been transformed into the Help Center, redesigned to provide the user with quick access to help topics and common tasks, as well as links to useful Oracle resources.

  • New On-Line Help: The online help has been entirely re-written for supporting the new user interface.

  • Help bookmarks: The Help window has a tab labeled Favorites. While browsing the help, you can click on the Add to Favorites button to add the document to this tab.

Export/Import

Export/import is enhanced in this new release with the following features:

Import Report

After objects have been imported, an import report displays the objects that have been imported or deleted in the target repository. In addition, missing objects referenced by the imported objects are indicated as missing references, and missing references fixed by the import are also indicated. Import reports can be saved in XML or HTML format.

With this feature, importing objects becomes a very transparent operation as all changes can be identified and archived.

Repository Corruption Prevention

When importing objects across repositories, the following cases have been taken into account to avoid the risks of import errors and repository corruption:

  • Imports in Synonym mode that would overwrite a text (for example, a mapping expression) with a text from a different origin (for example, a filter expression) are now detected and not allowed.

  • Objects from two repositories that share the same repository identifier cannot be imported into one target repository. This avoids object collision and corruption.

  • When attaching a work repository that contains objects imported from another repository, a warning is raised to the user.

In addition, imports of objects that reference non-existent objects now create missing references, which are identified in the import report. Such references can be resolved by importing the missing object.

Repository Renumbering

It is now possible to change the identifier of a master or work repository after its creation. This operation automatically updates the internal identifier of the objects created in this repository to match the new identifier.

This feature facilitates configuration management and fixing import/export situations when multiple repositories have been created with the same identifier.

KM Enhancements in Release 10.1.3 Patch sets

The following improved and new knowledge modules have been delivered in 10gR3 patch sets and are available in this release.

Oracle GoldenGate Knowledge Modules

Oracle Data Integrator uses Oracle GoldenGate to replicate online data from a source to a staging database. A Journalizing KM manages the Oracle Data Integrator CDC infrastructure and automatically generates the configuration for Oracle GoldenGate.

Oracle E-Business Suite Knowledge Modules

Oracle Data Integrator Knowledge Modules for E-Business Suite provide comprehensive, bidirectional connectivity between Oracle Data Integrator and E-Business Suite, which enables you to extract and load data. The Knowledge Modules support all modules of E-Business Suite and provide bidirectional connectivity through EBS objects tables/views and interface tables.

Oracle OLAP Knowledge Modules

The Oracle Data Integrator Knowledge Modules for Oracle OLAP provide integration and connectivity between Oracle Data Integrator and Oracle OLAP cubes. The Oracle Data Integrator KMs for Oracle OLAP support reverse-engineering of Oracle OLAP data structures (all tables used by a ROLAP or a MOLAP cube) and data integration in an Oracle Analytical Workspace target in incremental update mode.

Oracle PeopleSoft Knowledge Modules

The Oracle Data Integrator Knowledge Modules for PeopleSoft provide integration and connectivity between Oracle Data Integrator and the PeopleSoft platform. These KMs enable Data-level integration for PeopleSoft and support reverse-engineering of PeopleSoft data structures (Business Objects, tables, views, columns, keys, and foreign keys) and data extraction from PeopleSoft.

Oracle Siebel Knowledge Modules

The Oracle Data Integrator Siebel Knowledge Modules support reverse-engineering Siebel data structures (Business Components and Business Objects) and Enterprise Integration Manager (EIM) tables, data extraction from Siebel using data-level integration, and data extraction and integration with Siebel using the EIM tables.

JDE EnterpriseOne Knowledge Modules

The Oracle Data Integrator Knowledge Modules for JDE EnterpriseOne provide connectivity and integration of the JDE EnterpriseOne platform with any database application through Oracle Data Integrator. These KMs support reverse-engineering of JDE EnterpriseOne data structures, data extraction from JDE EnterpriseOne (Direct Database Integration), and integration through the Z-tables to a JDE application (Interface Table Integration).

Oracle Changed Data Capture Adapters/Attunity Streams Knowledge Modules

The Oracle Data Integrator CDC Knowledge Module provides integration from Oracle Changed Data Capture Adapters/Attunity Streams Staging Areas via a JDBC interface. This KM reads changed data, loads this data into a staging area and handles the Oracle Changed Data Capture Adapters/Attunity Streams context to ensure consistent consumption of the changes read.

Hyperion Adapters

Knowledge Modules and technologies have been added for integrating the Hyperion technologies using Oracle Data Integrator.

These KMs support the following Hyperion products:

  • Hyperion Financial Management, to load and extract metadata and data.

  • Hyperion Planning, to load metadata and data into Hyperion Planning.

  • Hyperion Essbase, to load and extract Essbase metadata and data.

Row-By-Row KMs for Debugging

Knowledge modules supporting row-by-row loading (LKM SQL to SQL (row by row)) and integration (IKM SQL Incremental Update (row by row)) have been introduced for debugging purposes. These KMs allow logging of each row operation performed by the KM.

Teradata Optimizations

Teradata knowledge modules have been enhanced to enable the best performance.

This includes the following features:

  • Support for Teradata Utilities (TTU).

  • Support for customized Primary Indexes (PI) for temporary tables (see the sketch after this list).

  • Support for named pipes when using TTU.

  • Optimized Management of Temporary tables.
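
As an illustration of the customized Primary Index support, a temporary table created by a KM might use a PI clause as in the following Teradata sketch (all object names are hypothetical):

CREATE MULTISET TABLE TMP_ORDERS
(
  ORDER_ID INTEGER,
  CUST_ID  INTEGER
)
PRIMARY INDEX (CUST_ID);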

SAP ERP Adapter

The SAP ERP Adapter allows extraction of data from SAP ERP systems. The Oracle Data Integrator SAP ABAP Knowledge Modules included in this adapter provide integration from SAP ERP systems using SAP JCo libraries and generated ABAP programs.

SAP BW Adapter

The SAP BW Adapter allows extraction of data from SAP BW systems. The Oracle Data Integrator SAP BW Knowledge Modules included in this adapter provide integration from SAP BW using SAP JCo libraries and generated ABAP programs. This adapter supports ODS, Info Objects, Info Cubes, Open Hub and Delta Extraction.

KM Enhancements in Release 11.1.1

The following knowledge modules enhancements are new to this release.

KM Enhancements for New Core Features

Knowledge modules have been enhanced to support the core features added in this version of Oracle Data Integrator. The following KMs have been updated to support these features:

  • Support for Partitioning: Oracle RKM reverse-engineers partitions.

  • Datasets and Set-Based Operators: All IKMs have been updated to support this feature.

  • Automatic Temporary Index Management: Oracle and Teradata IKMs and LKMs have been updated to support this feature.

Oracle Business Intelligence Enterprise Edition - Physical

Oracle Data Integrator provides the ability to reverse-engineer View Objects that are exposed in Oracle Business Intelligence Enterprise Edition (OBI-EE) physical layer. These objects can be used as sources of integration interfaces.

Oracle Multi-Table Inserts

A new Integration KM for Oracle allows populating several target tables from a single source, reading the data only once. It uses the INSERT ALL statement.
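
For illustration, a conditional multi-table insert reads the source once and routes rows to several targets, similar to the following sketch (all object names are hypothetical):

INSERT ALL
  WHEN STATUS = 'OPEN' THEN
    INTO TGT_OPEN_ORDERS (ORDER_ID, AMOUNT) VALUES (ORDER_ID, AMOUNT)
  WHEN STATUS = 'CLOSED' THEN
    INTO TGT_CLOSED_ORDERS (ORDER_ID, AMOUNT) VALUES (ORDER_ID, AMOUNT)
SELECT ORDER_ID, AMOUNT, STATUS FROM SRC_ORDERS;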

Teradata Multi-Statements

A new Teradata Integration KM provides support for Teradata Multi-Statements, allowing integration of several flows in parallel.
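
For illustration, a multi-statement request submits several statements to Teradata as a single unit; in BTEQ-style scripting, the leading semicolon chains the statements into one request (all object names are hypothetical):

INSERT INTO TGT_ORDERS SELECT * FROM STG_ORDERS
;INSERT INTO TGT_ORDER_LINES SELECT * FROM STG_ORDER_LINES;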

Working with Shortcuts

17 Working with Shortcuts

This chapter gives an introduction to shortcuts and describes how to work with shortcuts in Oracle Data Integrator.

This chapter includes the following sections:

17.1 Introduction to Shortcuts

Oracle Data Integrator is often used for populating very large data warehouses sourcing from various versions of source applications. To express the large commonality that often exists between two different versions of the same source application, such as same tables and columns, same constraints, and same transformations, shortcuts have been introduced into Oracle Data Integrator. Shortcuts are created for common objects in separate locations. At deployment time, for example during an export from the design repository to the runtime repository, these shortcuts are materialized as final objects for the given version of the source application.

17.1.1 Shortcutting Concepts

A shortcut is a link to an Oracle Data Integrator object. You can create a shortcut for datastores, integration interfaces, packages, and procedures.

A referenced object is the object directly referenced by the shortcut. The referenced object of a shortcut may be a shortcut itself.

The base object is the object ultimately referenced through the chain of shortcuts. It is the real object associated with the shortcut. Changes made to the base object are reflected in all shortcuts created for this object.

When a shortcut is materialized, it is converted in the design repository to a real object with the same properties as the ultimate base object. The materialized shortcut retains its name, relationships, and object ID.

Release tags have been introduced to manage the materialization of shortcuts based on specific tags. Release tags can be added to folders and model folders.

See Section 17.4, "Working with Shortcuts in your Projects" for more information.

17.1.2 Shortcut Objects

You can create a shortcut for the following ODI objects: datastores, integration interfaces, packages, and procedures.

Shortcuts can be distinguished from the original object by the arrow that appears on the icon. The shortcut icons are listed in Table 17-1.

Table 17-1 Shortcut Icons

Shortcut Icon            Shortcut Object

Datastore shortcut       Datastore

Interface shortcut       Integration Interface

Package shortcut         Package

Procedure shortcut       Procedure


Shortcut children display the same nodes as the real object in Designer Navigator.

Guidelines for Creating Shortcuts

Shortcuts are generally used like the objects they are referring to. However, the following rules apply when creating shortcuts for:

  • Datastores: It is possible to create an object shortcut for datastores across different models/sub-models, but the source and destination models must be defined with the same technology. Also, a model cannot contain a datastore and a shortcut to another datastore with the same table name, and two shortcuts within a model cannot refer to the same base object.

    Datastore shortcuts can be used as sources or the target of an integration interface and as datastores within a package. The interfaces and packages containing datastore shortcuts refer to the datastore shortcut and the model in addition to the base datastore.

  • Packages, Interfaces, and Procedures: It is possible to create an object shortcut for packages, interfaces, and procedures belonging to a specific ODI folder.

    Interface, procedure, and package shortcuts within a Project can only refer to objects (integration interfaces, procedures, and packages) that belong to the same project.

    • Package shortcuts can be used in Load Plan steps

    • Interface shortcuts can be used within an interface, a package, or a Load Plan step

    • Procedure shortcuts can be used in a package or a Load Plan step

    When adding a shortcut to a Load Plan step, Oracle Data Integrator converts the shortcut object into a Run Scenario step.

17.2 Introduction to the Shortcut Editor

The Shortcut editor provides a single environment for editing and managing shortcuts in Oracle Data Integrator. Figure 17-1 gives an overview of the Shortcut editor.

Figure 17-1 Shortcut Editor of a Package Shortcut


The Shortcut Editor has the following tabs:

  • Definition

    Includes the names of the shortcut, the referenced object, and the base object

  • Execution (only for shortcuts of Packages, Interfaces and Procedures)

    Is organized into the Direct Executions and the Scenario Execution tabs and shows the results of previous executions

  • Scenarios (only for shortcuts of Packages, Interfaces and Procedures)

    Displays in a table view the scenarios generated for this component

  • Version

    Includes the details necessary to manage versions of the shortcut

The Shortcut Editor provides two buttons to handle its references:

  • View Referenced Object: Click to display the editor of the referenced object

  • View Base Object: Click to display the editor of the base object

17.3 Creating a Shortcut

Shortcuts can have the same name as the base object. It is possible to rename a shortcut but note that the shortcut object name must be used instead of the base name for object usages and materialization purposes.

Shortcuts can be created for one or multiple objects at a time and also for the same object in more than one location.

Also note that two shortcut objects within a folder cannot refer to the same base object. Follow the Guidelines for Creating Shortcuts.

To create a shortcut:

  1. In Designer Navigator, select the object that you want to create a shortcut to.

    Note that you can select multiple objects of the same type.

  2. Right-click the object(s) and select Copy.

  3. Go to the location where you want to insert the shortcut. This must be a folder or a model node.

  4. Right-click the folder or model and select Paste as Shortcut.

    Note that the menu item Paste as Shortcut is only enabled if:

    • The previous operation was a Copy operation.

    • The copied object is an object for which shortcuts can be created. See Section 17.1.2, "Shortcut Objects" for more information.

    • The new location is legal for the copied objects. Legal locations are:

      • For datastore shortcuts: A model or sub-model node different from the source model

      • For interface, package, and procedure shortcuts: A folder node in the same project as the source folder but not the source folder

  5. The new shortcut appears in Designer Navigator.


Tip:

It is possible to create several shortcuts at once. See Section 17.4.1, "Duplicating a Selection with Shortcuts" for more information.

17.4 Working with Shortcuts in your Projects

This section describes the actions that you can perform when you work with shortcuts in your Oracle Data Integrator projects. These actions include:

17.4.1 Duplicating a Selection with Shortcuts

It is possible to create several shortcuts at once for the objects within a given model or folder.

  • If you perform a quick massive shortcut creation on a model node, the new model will be a copy of the source model with all datastores created as shortcuts.

  • If you perform a quick massive shortcut creation on a folder node, the new folder will be a copy of the source folder with all interfaces, packages, and procedures created as shortcuts.

To perform a quick massive shortcut creation:

  1. In Designer Navigator, select a folder or model node.

  2. Right-click and select Duplicate Selection with Shortcuts.

17.4.2 Jump to the Reference Shortcut

Use this action if you want to move the current selection in Designer Navigator to the referenced object.

To jump to the referenced object:

  1. In Designer Navigator, select the shortcut whose referenced object you want to find.

  2. Right-click and select Shortcut > Follow Shortcut.

The referenced object is selected in Designer Navigator.

17.4.3 Jump to the Base Object

Use this action if you want to move the current selection in Designer Navigator to the base object.

To jump to the base object:

  1. In Designer Navigator, select the shortcut whose base object you want to find.

  2. Right-click and select Shortcut > Jump to Base.

The base object is selected in Designer Navigator.

17.4.4 Executing Shortcuts

Executing a shortcut executes the underlying procedure the shortcut is referring to. Shortcuts are executed like any other object in Oracle Data Integrator. See Chapter 21, "Running Integration Processes" for more information.

17.4.5 Materializing Shortcuts

When a shortcut is materialized, it is converted in the design repository to a real object with the same properties as the ultimate base object. The materialized shortcut retains its name, relationships, and object ID. All direct references to the shortcut are automatically updated to point to the new object. This applies also to release tags. If the materialized shortcut contained release tags, all references to the base object within the release tag folder or model would be changed to the new object.


Note:

When materializing an interface shortcut, if the interface is a temporary interface or has a multi-logical schema environment (for example, when the interface has a staging area on the same logical schema as the source datastore(s)), the materialized interface might contain errors such as changes in the Flow. Please review the materialized interface.

17.4.6 Exporting and Importing Shortcuts

Shortcuts can be exported and imported either as materialized objects or as shortcuts.

Standard and multiple export do not support materialization. When using standard or multiple export, a shortcut is exported as a shortcut object. Any import will import the shortcut as a shortcut.

When you perform a Smart export and your export contains shortcuts, you can choose to materialize the shortcuts:

  • If you select not to materialize the shortcuts, both the shortcuts and the base objects will be exported.

  • If you select to materialize the shortcuts, the export file will contain the converted shortcuts as real objects with the same object ID as the shortcut. You can import this export file into a different repository or back to the original repository.

    • When this export file is imported into a different repository, the former shortcut objects will now be real objects.

    • When this export file is imported back into the original repository, the materialized object ID will be matched with the shortcut ID. Use the Smart import feature to manage this matching process. The Smart import feature is able to replace the shortcut by the materialized object.

See Section 20.2.7, "Smart Export and Import" for more information.

17.4.7 Using Release Tags

Release tags have been introduced to manage the materialization of shortcuts based on specific tags. Release tags can be added in the form of a text string to folders and model folders.

Note the following concerning release tags:

  • No two models may have the same release tag and logical schema. The release tag is set in the model and in the folder.

  • The release tag is used only during materialization and export.

  • The release tag on a folder applies only to the package, interface, and procedure contents of the folder. The release tag is not inherited by any subfolder.

To add a new release tag or assign an existing release tag:

  1. From the Designer Navigator toolbar menu, select Edit Release Tag...

    This opens the Release Tag wizard.

  2. In the Release Tag Name field, do one of the following:

    • Enter a new release tag name.

    • Select a release tag name from the list.

    This release tag name will be added to a given folder.

  3. The available folders are displayed on the left, in the Available list. From the Available list, select the folder(s) to which you wish to add the release tag and use the arrows to move the folder to the Selected list.

  4. Click Next to add the release tag to a model.

    You can click Finish if you do not want to add the release tag to a model.

  5. The available models and model folders are displayed on the left, in the Available list. From the Available list, select the model(s) and/or model folder(s) to which you wish to add the release tag and use the arrows to move the model(s) and/or model folder(s) to the Selected list.

  6. Click Finish.

The release tag is added to the selected project folders and models.


Tip:

You can use release tags when performing a Smart Export by choosing to add all objects of a given release to the Smart Export. See Section 20.2.7.1, "Performing a Smart Export" for more information.

17.4.8 Advanced Actions

This section describes the advanced actions you can perform with shortcuts. Advanced actions include:

Data/View Data Action on a Datastore Shortcut

You can perform a Data or View Data action on a datastore shortcut to view or edit the data of the underlying datastore the shortcut is referring to.

To view or edit the datastore's data the shortcut is referring to, follow the standard procedure described in Section 5.4, "Editing and Viewing a Datastore's Data".

Perform a Static Check on a Model, Submodel or Datastore Shortcut

You can perform a static check on a model, sub-model, or datastore shortcut. This performs a static check on the underlying object this shortcut is referring to.

To perform a static check on a model, submodel or datastore shortcut, follow the standard procedure described in Section 5.6.3, "Perform a Static Check on a Model, Sub-Model or Datastore".

Review Erroneous Records of a Datastore Shortcut

You can review erroneous records of the datastore a datastore shortcut is referring to.

To review erroneous records of the datastore shortcut, follow the standard procedure described in Section 5.6.4, "Reviewing Erroneous Records".

Generate Scenarios of a Shortcut

You can generate a scenario from interface, package, and procedure shortcuts. This generates a scenario of the underlying object this shortcut is referring to. Note that the generated scenario will appear under the shortcut and not the referenced object in Designer Navigator.

To generate a scenario of a shortcut, follow the standard procedure described in Section 13.2, "Generating a Scenario".

Reverse-Engineer a Shortcut Model

You can reverse-engineer a model containing datastore shortcuts using the RKM Oracle. This Knowledge Module provides the option SHORTCUT_HANDLING_MODE to manage shortcuts that have the same table name as actual tables being retrieved from the database. This option can take three values:

  • ALWAYS_MATERIALIZE: Conflicted shortcuts are always materialized and datastores are reversed (default).

  • ALWAYS_SKIP: Conflicted shortcuts are always skipped and not reversed.

  • PROMPT: The Shortcut Conflict Detected dialog is displayed. You can define how to handle conflicted shortcuts:

    • Select Materialize, to materialize and reverse-engineer the conflicted datastore shortcut.

    • Leave Materialize unselected, to skip the conflicted shortcuts. Unselected datastores are not reversed and the shortcut remains.


Note:

When you reverse-engineer a model that contains datastore shortcuts and you choose to materialize the shortcuts, the reverse-engineering process will be incremental for database objects that have different columns than the datastore shortcuts. For example, if the datastore shortcut has a column that does not exist in the database object, the column will not be removed from the reversed and materialized datastore, under the assumption that the column is used somewhere else.

For more information on reverse-engineering, see Chapter 5, "Creating and Reverse-Engineering a Model".

Managing Integration Projects

Part V

Managing Integration Projects

This part describes how to organize and maintain your Oracle Data Integrator projects.

This part contains the following chapters:

Administering the Oracle Data Integrator Repositories

3 Administering the Oracle Data Integrator Repositories

This chapter describes how to create and administer Oracle Data Integrator repositories. An overview of the repositories used in Oracle Data Integrator is provided.

This chapter includes the following sections:

3.1 Introduction to Oracle Data Integrator Repositories

There are two types of repositories in Oracle Data Integrator:

  • Master Repository: This is a data structure containing information on the topology of the company's IT resources, on security and on version management of projects and data models. This repository is stored on a relational database accessible in client/server mode from the different Oracle Data Integrator modules. In general, you need only one master repository. However, it may be necessary to create several master repositories in one of the following cases:

    • Project construction over several sites not linked by a high-speed network (off-site development, for example).

    • Necessity to clearly separate the interfaces' operating environments (development, test, production), including on the database containing the master repository. This may be the case if these environments are on several sites.

  • Work Repository: This is a data structure containing information on data models, projects, and their use. This repository is stored on a relational database accessible in client/server mode from the different Oracle Data Integrator modules. Several work repositories can be created with several master repositories if necessary. However, a work repository can be linked with only one master repository for version management purposes.

The standard method for creating repositories is using the Repository Creation Utility (RCU). RCU automatically manages storage space as well as repository creation. However, it is also possible to create and configure the repositories manually.

The steps needed to create and configure repositories are detailed in the following sections:


Note:

Oracle recommends that you regularly perform the following maintenance operations: purge the execution logs in order to reduce the work repository size, and back up the Oracle Data Integrator repositories on the database.

Advanced actions for administering repositories are detailed in Section 3.8, "Advanced Actions for Administering Repositories".

3.2 Creating Repository Storage Spaces

Oracle Data Integrator repositories can be installed on database engines supported by Oracle Fusion Middleware 11g. For the latest list of supported databases versions as well as the requirements for each database, see:

http://www.oracle.com/technology/software/products/ias/files/fusion_certification.html

For each database that will contain a repository, a storage space must be created.


Caution:

For reasons of maintenance and back-up, we strongly recommend that repositories be stored in a different space from where your application data is kept (for example in a different schema for an Oracle database, or in a different database for Sybase or Microsoft SQL Server).

Your master repository can be stored in the same schema as one of your work repositories. However, you cannot create two different work repositories in the same schema.

The following examples are supplied as a guide:

Oracle

Create a schema odim to host the Master repository and a schema odiw to host the work repository.

The schemas are created by the following SQL commands:

SQL> create user MY_SCHEMA identified by MY_PASS
       default tablespace MY_TBS 
       temporary tablespace MY_TEMP; 
SQL> grant connect, resource to MY_SCHEMA;
SQL> grant execute on dbms_lock to MY_SCHEMA;

Where:

MY_SCHEMA corresponds to the name of the schema you want to create.

MY_PASS corresponds to the password you have given it.

MY_TBS corresponds to the Oracle tablespace where the data will be stored.

MY_TEMP corresponds to the temporary default tablespace.

Microsoft SQL Server or Sybase ASE

Create a database db_odim to host the master repository and a database db_odiw to host the work repository. Create two logins odim and odiw which have these databases by default.

Use Enterprise Manager to create the two databases db_odim and db_odiw.

Use Query Analyzer or I-SQL to launch the following commands:

CREATE LOGIN mylogin
     WITH PASSWORD = 'mypass',
     DEFAULT_DATABASE = defaultbase,
     DEFAULT_LANGUAGE = us_english;
USE defaultbase;
CREATE USER dbo FOR LOGIN mylogin;
GO

Where:

mylogin corresponds to odim or odiw.

mypass corresponds to a password for these logins.

defaultbase corresponds to db_odim and db_odiw respectively.

Note: It is recommended to configure the Microsoft SQL Server databases that store the repository information with a case-sensitive collation. This enables reverse-engineering and creating multiple objects with the same name but a different case (for example: tablename and TableName).

DB2/400

Create a library odim to host the Master repository and a library odiw to host the work repository. Create two users odim and odiw who have these libraries by default.

Note: The libraries must be created in the form of SQL collections.

DB2/UDB

Prerequisites:

  • Master and work repository users must have access to tablespaces with minimum 16k page size

  • The database must have a temporary tablespace with minimum 16k page size

For example:

CREATE LARGE TABLESPACE ODI16 PAGESIZE 16K MANAGED BY AUTOMATIC STORAGE;
GRANT USE OF TABLESPACE ODI16 TO USER ODIREPOS;

3.3 Creating the Master Repository

Creating the master repository creates an empty repository structure and seeds metadata (for example, technology definitions, or built-in security profiles) into this repository structure.

To create the master repository:

  1. Open the New Gallery by choosing File > New.

  2. In the New Gallery, in the Categories tree, select ODI.

  3. Select from the Items list the Master Repository Creation Wizard.

  4. Click OK.

    The Master Repository Creation wizard opens.

  5. Specify the Database Connection parameters as follows:

    • Technology: From the list, select the technology that will host your master repository. Default is Oracle.

    • JDBC Driver: The driver used to access the technology that will host the repository.

    • JDBC URL: The URL used to establish the JDBC connection to the database.

      Note that the parameters JDBC Driver and URL are synchronized and the default values are technology-dependent.

    • User: The user ID / login of the owner of the tables (for example, odim).

    • Password: This user's password.

    • DBA User: The database administrator's username

    • DBA Password: This user's password

  6. Specify the Repository Configuration parameters as follows:

    • ID: A specific ID for the new repository, rather than the default 0.


      Note:

      It is strongly recommended that this ID is unique and not used for any other master repository, as it affects imports and exports between repositories

  7. Click Test Connection to test the connection to your master repository.

    The Information dialog opens and informs you whether the connection has been established. If the connection fails, fix the connection to your master repository before moving to the next step.

  8. Click Next.

  9. Do one of the following:

    • Select Use ODI Authentication to manage users using ODI's internal security system and enter the following supervisor login information:

      Supervisor User: User name of the ODI supervisor.
      Supervisor Password: This user's password.
      Confirm Password: This user's password.

    • Select Use External Authentication to use an external enterprise identity store, such as Oracle Internet Directory, to manage user authentication and enter the following supervisor login information:

      Supervisor User: User name of the ODI supervisor.
      Supervisor Password: This user's password.


      Note:

      In order to use the external authentication option, ODI Studio has to be configured for external authentication and restarted. See Section 24.3.2, "Setting Up External Authentication" for more information.

  10. Click Next.

  11. Specify the password storage details:

    • Select Internal Password Storage if you want to store passwords in the Oracle Data Integrator master repository

    • Select External Password Storage if you want to use the JPS Credential Store Framework (CSF) to store the data server and context passwords in a remote credential store. Indicate the MBean Server Parameters to access the credential store. Refer to Chapter 24, "Managing the Security in Oracle Data Integrator" for more information.

  12. In the Master Repository Creation Wizard click Finish to validate your entries.

Oracle Data Integrator begins creating your master repository. You can follow the progress in the Messages – Log window. To test your master repository, refer to Section 3.4, "Connecting to the Master Repository".

3.4 Connecting to the Master Repository

To connect to the Master repository:

  1. Open the New Gallery by choosing File > New.

  2. In the New Gallery, in the Categories tree, select ODI.

  3. Select from the Items list Create a New ODI Repository Login.

  4. Click OK.

    The Repository Connection Information dialog appears.

  5. Specify the Oracle Data Integrator connection details as follows:

    • Login name: A generic alias (for example: Repository)

    • User: The ODI supervisor user name you have defined when creating the master repository or an ODI user name you have defined in the Security Navigator after having created the master repository.

    • Password: The ODI supervisor password you have defined when creating the master repository or an ODI user password you have defined in the Security Navigator after having created the master repository.

  6. Specify the Database Connection (Master Repository) details as follows:

    • User: Database user ID/login of the schema (database, library) that contains the ODI master repository

    • Password: This user's password

    • Driver List: From the dropdown list, select the driver required to connect to the DBMS supporting the master repository you have just created.

    • Driver Name: The complete driver name

    • JDBC URL: The URL used to establish the JDBC connection to the database hosting the repository

      Note that the parameters JDBC Driver and URL are synchronized and the default values are technology-dependent.

  7. Select Master Repository Only.

  8. Click Test to check that the connection is working.

  9. Click OK to validate your entries.

3.5 Creating a Work Repository

Several work repositories can be created with several master repositories if necessary. However, a work repository can be linked with only one master repository for version management purposes.

To create a new work repository:

  1. In the Topology Navigator, go to the Repositories panel.

  2. Right-click the Work Repositories node and select New Work Repository.

    The Create Work Repository Wizard opens.

  3. Specify the Oracle Data Integrator work repository connection details as follows:

    • Technology: Choose the technology of the server to host your work repository. Default is Oracle.

    • JDBC Driver: The driver used to access the technology that will host the repository.

    • JDBC URL: The complete path of the data server to host the work repository.

      Note that the parameters JDBC Driver and URL are synchronized and the default values are technology-dependent.

      It is recommended to use the full machine name instead of localhost in the JDBC URL to avoid connection issues. For example, with remote clients, the client (ODI Studio or SDK) is on a different machine than the work repository, and localhost would point to the client machine instead of the one hosting the work repository.

    • User: User ID / login of the owner of the tables you are going to create and host of the work repository.

    • Password: This user's password. This password is requested for attaching this work repository to a different master.

  4. Click Test Connection to verify that the connection is working.

  5. Click Next.

    Oracle Data Integrator verifies whether a work repository already exists on the connection specified in step 3:

    • If an existing work repository is detected on this connection, the next steps will consist of attaching the work repository to the master repository. Refer to "Specify the Password of the Oracle Data Integrator work repository to attach." for further instructions.

    • If no work repository is detected on this connection, a new work repository is created. Continue with the creation of a new work repository and provide the work repository details in step 6.

  6. Specify the Oracle Data Integrator work repository properties:

    • ID: A specific ID for the new repository, rather than the default 0.


      Note:

      It is strongly recommended that this ID is unique and not used for any other work repository, as it affects imports and exports between repositories

    • Name: Give a unique name to your work repository (for example: DEVWORKREP1).

    • Password: Enter the password for the work repository.

    • Type: Select the type for the work repository:

      • Development: This type of repository allows management of design-time objects such as data models and projects (including interfaces, procedures, etc). A development repository includes also the run-time objects (scenarios and sessions). This type of repository is suitable for development environments.

      • Execution: This type of repository only includes run-time objects (scenarios, schedules and sessions). It allows launching and monitoring of data integration jobs in Operator Navigator. Such a repository cannot contain any design-time artifacts. Designer Navigator cannot be used with it. An execution repository is suitable for production environments.

  7. Click Finish.

  8. The Create Work Repository login dialog opens. If you want to create a login for the work repository, click Yes and you will be asked to enter the Login Name in a new dialog. If you do not want to create a work repository login, click No.

  9. Click Save in the toolbar.

For more information, refer to Section 3.6, "Connecting to a Work Repository".

3.6 Connecting to a Work Repository

To connect to an existing work repository and launch Designer Navigator:

  1. Open the New Gallery by choosing File > New.

  2. In the New Gallery, in the Categories tree, select ODI.

  3. Select from the Items list Create a New ODI Repository Login.

  4. Click OK.

    The Repository Connection Information dialog opens.

  5. Specify the Oracle Data Integrator connection details as follows:

    • Login name: A generic alias (for example: Repository)

    • User: The ODI supervisor user name you have defined when creating the master repository or an ODI user name you have defined in the Security Navigator after having created the master repository.

    • Password: The ODI supervisor password you have defined when creating the master repository or an ODI user password you have defined in the Security Navigator after having created the master repository.

  6. Specify the Database Connection (Master Repository) details as follows:

    • User: Database user ID/login of the schema (database, library) that contains the ODI master repository

    • Password: This user's password

    • Driver List: From the dropdown list, select the driver required to connect to the DBMS supporting the master repository you have just created.

    • Driver Name: The complete driver name

    • JDBC URL: The URL used to establish the JDBC connection to the database hosting the repository

  7. Click Test Connection to check that the connection is working.

  8. Select Work Repository and specify the work repository details as follows:

    • Work repository name: The name you gave your work repository in the previous step (DEVWORKREP1 in the example). You can display the list of work repositories available in your master repository by clicking on the button to the right of this field.

  9. Click OK to validate your entries.

3.7 Changing the Work Repository Password

To change the work repository password:

  1. In the Repositories tree of Topology Navigator expand the Work Repositories node.

  2. Double-click the work repository. The Work Repository Editor opens.

  3. On the Definition tab of the Work Repository Editor click Change password.

  4. Enter the current password and the new one.

  5. Click OK.

3.8 Advanced Actions for Administering Repositories

The actions described in this section are performed on existing repositories; they are not part of the repository creation process. Once the repositories are created, you may want to switch the password storage, or you may need to recover the password storage after a credential store crash. Actions dealing with password handling are covered in Section 24.3.1, "Setting Up External Password Storage". The export and import of master and work repositories is covered in Chapter 20, "Exporting/Importing".

This section contains the following topics:

  • Attaching and Deleting a Work Repository

  • Erasing a Work Repository

  • Renumbering Repositories

  • Tuning the Repository

3.8.1 Attaching and Deleting a Work Repository

Attaching a work repository consists of linking an existing work repository to the current master repository. The work repository must already exist in the database and have been previously detached from this or another master repository.

Deleting a work repository removes its link to the master repository; it is the opposite of attaching. This operation does not destroy the work repository content.

Attaching a Work Repository

To attach a work repository to a master repository:

  1. In the Topology Navigator, go to the Repositories panel.

  2. Right-click the Work Repositories node and select New Work Repository.

    The Create Work Repository Wizard opens.

  3. Specify the Oracle Data Integrator work repository connection details as follows:

    • Technology: From the list, select the technology that will host your work repository. Default is Oracle.

    • JDBC Driver: The driver used to access the technology that will host the repository.

    • JDBC URL: The complete path of the data server to host the work repository.

      Note that the JDBC Driver and URL parameters are synchronized, and the default values are technology-dependent.

    • User: The user ID/login of the owner of the work repository tables.

    • Password: This user's password.

  4. Click Test Connection to check that the connection is working (see the JDBC sketch after this procedure for an equivalent check outside ODI).

  5. Click Next.

  6. Specify the Password of the Oracle Data Integrator work repository to attach.

  7. Click Next.

  8. Specify the Name of the Oracle Data Integrator work repository to attach.

  9. Click Finish.
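
If Test Connection fails, it can help to verify the same connection details outside ODI with a plain JDBC program. The following is a minimal sketch; the driver, URL, and credentials are placeholders to replace with the values entered in step 3.

  import java.sql.Connection;
  import java.sql.DriverManager;

  public class TestRepositoryConnection {
      public static void main(String[] args) throws Exception {
          // JDBC Driver, as entered in step 3 (placeholder)
          Class.forName("oracle.jdbc.OracleDriver");
          // JDBC URL, User, and Password, as entered in step 3 (placeholders)
          try (Connection c = DriverManager.getConnection(
                  "jdbc:oracle:thin:@localhost:1521:ORCL", "ODI_WORK", "work_password")) {
              System.out.println("Connection OK: "
                      + c.getMetaData().getDatabaseProductName());
          }
      }
  }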

Deleting a Work Repository

To delete the link to the master repository:

  1. In the Topology Navigator, go to the Repositories panel.

  2. Expand the Work Repositories node and right-click the work repository you want to delete.

  3. Select Delete.

  4. In the Confirmation dialog click Yes.

  5. The work repository is detached from the master repository and is deleted from the Repositories panel in Topology Navigator.

3.8.2 Erasing a Work Repository

Deleting a work repository is equivalent to detaching a work repository from the master repository. For more information, refer to Section 3.8.1, "Attaching and Deleting a Work Repository".

Erasing a work repository consists of deleting the work repository from the database.


WARNING:

Erasing your work repository is an irreversible operation. All information stored in the work repository will be definitively deleted, including the metadata of your models, projects and run-time information such as scenarios, schedules, and logs.


Erasing a Work Repository

To erase a work repository from the database:

  1. In the Topology Navigator, go to the Repositories panel.

  2. Expand the Work Repositories node and right-click the work repository you want to delete.

  3. Select Erase from Database.

  4. In the Confirmation dialog, click Yes if you want to definitively erase the work repository from the database.

  5. The work repository is erased from the database and is deleted from the Repositories panel in Topology Navigator.

3.8.3 Renumbering Repositories

Renumbering a repository consists of changing the repository ID and the internal ID of the objects stored in the repository.

Renumbering a repository is advised when two repositories have been created with the same ID. Renumbering one of these repositories allows object import/export between these repositories without object conflicts.


WARNING:

Renumbering a repository is an administrative operation that requires you to perform a backup of the repository that will be renumbered on the database.


Renumbering a Work Repository

To renumber a work repository:

  1. In the Topology Navigator, go to the Repositories panel.

  2. Expand the Work Repositories node and right-click the work repository you want to renumber.

  3. Select Renumber...

  4. In the Renumbering the Repository - Step 1 dialog click Yes.

  5. In the Renumbering the Repository - Step 2 dialog click OK.

  6. In the Renumbering the Repository - Step 3 dialog enter a new and unique ID for the work repository and click OK.

  7. In the Renumbering the Repository - Step 4 dialog click Yes.

  8. The work repository and all the objects attached to it are renumbered.

Renumbering a Master Repository

To renumber a master repository:

  1. In the Topology Navigator, go to the Repositories panel.

  2. Expand the Master Repositories node and right-click the master repository you want to renumber.

  3. Select Renumber...

  4. In the Renumbering the Repository - Step 1 dialog click Yes.

  5. In the Renumbering the Repository - Step 2 dialog enter a new and unique ID for the master repository and click OK.

  6. The master repository and all the information stored in it, such as topology, security, and version management details, are renumbered.

3.8.4 Tuning the Repository

Concurrent connections to the repository database may be controlled and limited by the database engine where the repository is stored. On Oracle, the property limiting the number of connections is max_processes. When running a large number of parallel executions, you may need to tune the database to increase the maximum number of connections allowed to the repository database.

The number of connections required depends on the number of sessions running concurrently (a sizing sketch follows this list):

  • Each session execution requires two database connections (one to the master repository, one to the work repository) for the duration of the execution. A third database connection is also required for a very short period when the session begins, in order to perform a security check.

  • For non-Oracle databases, each Load Plan step consumes an additional connection as a lock while the Load Plan is being executed.
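
As a rough illustration of how these figures add up, the following back-of-the-envelope sketch sizes the connection limit; the session and step counts are placeholders, and the worst case assumes that all session-startup security checks overlap.

  public class RepositoryConnectionSizing {
      public static void main(String[] args) {
          int parallelSessions = 50;      // expected peak of concurrently running sessions (placeholder)
          int parallelLoadPlanSteps = 10; // concurrently executing Load Plan steps (placeholder)
          boolean oracleRepository = false;

          // Two connections per running session: one to the master, one to the work repository.
          int base = parallelSessions * 2;
          // Worst case: every session starts at once, each holding a short-lived security-check connection.
          int securityChecks = parallelSessions;
          // Non-Oracle repositories only: one extra lock connection per executing Load Plan step.
          int locks = oracleRepository ? 0 : parallelLoadPlanSteps;

          System.out.println("Plan for at least " + (base + securityChecks + locks)
                  + " concurrent connections to the repository database.");
      }
  }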


19 Working with Version Management

This chapter describes how to work with version management in Oracle Data Integrator.

Oracle Data Integrator provides a comprehensive system for managing and safeguarding changes. The version management system automatically sets flags on developed objects (such as projects and models) to indicate their status, such as new or modified. It also allows these objects to be backed up as stable checkpoints and later restored from these checkpoints. These checkpoints are created for individual objects in the form of versions, and for consistent groups of objects in the form of solutions.


Note:

Version management is supported for master repositories installed on database engines such as Oracle, Hypersonic SQL, and Microsoft SQL Server. For a complete list of certified database engines supporting version management refer to the Platform Certifications document on OTN at: http://www.oracle.com/technology/products/oracle-data-integrator/index.html.

This chapter includes the following sections:

19.1 Working with Object Flags

When an object is created or modified in Designer Navigator, a flag is displayed in the tree on the object icon to indicate its status. Table 19-1 lists these flags.

Table 19-1 Object Flags

Flag             Description
inserted icon    Object status is inserted.
updated icon     Object status is updated.


When an object is inserted, updated or deleted, its parent objects are recursively flagged as updated. For example, when a step is inserted into a package, it is flagged as inserted, and the package, folder(s) and project containing this step are flagged as updated.

When an object version is checked in (see Section 19.2, "Working with Versions" for more information), the flags on this object are reset.

19.2 Working with Versions

A version is a backup copy of an object. It is checked in at a given time and may be restored later. Versions are saved in the master repository. They are displayed in the Version tab of the object window.

The following objects can be checked in as versions:

  • Project, Folder

  • Package, Scenario

  • Interface, Procedure, Knowledge Module

  • Sequence, User Function, Variable

  • Model, Model Folder

  • Solution

  • Load Plan

Checking in a version

To check in a version:

  1. Select the object for which you want to check in a version.

  2. Right-click, then select Version > Create...

  3. In the Create dialog, click Previous Versions (>>) to expand the list of versions already checked in.

  4. A version number is automatically generated in the Version field. Modify this version number if necessary.

  5. Enter the details for this version in the Description field.

  6. Click OK.

When a version is checked in, the flags for the object are reset.

Displaying previous versions of an object

To display previous versions of an object:

When editing the object, the Version tab provides a list of checked-in versions, with the check-in date and the name of the user who performed the check-in operation.

Restoring a version

To restore a version:

  1. Select the object for which you want to restore a version.

  2. Right-click, then select Version > Restore...

  3. The Restore dialog displays the list of existing versions.

  4. Select the version you want to restore and click OK.

  5. Click OK to confirm the restore operation.


WARNING:

Restoring a version cannot be undone. It permanently erases the current object and replaces it with the selected version.


Browsing versions

To browse versions:

Oracle Data Integrator contains a tool, the Version Browser, which is used to display the versions stored in the repository.

  1. From the main menu, select ODI > Version Browser...

  2. Use the object Type and object Name drop-down lists to filter the objects for which you want to display the list of versions.

From the Version Browser, you can restore a version, export a version as an XML file or delete an existing version.


Note:

The Version Browser displays the versions that existed when you opened it. Click Refresh to view all new versions created since then.

Deleting a version with the Version Browser

To delete a version with the Version Browser:

  1. Open the Version Browser.

  2. Select the version you want to delete.

  3. Right-click, then select Delete.

The version is deleted.

Restoring a version with the Version Browser

To restore a version with the Version Browser:

  1. Open the Version Browser.

  2. Select the version you want to restore.

  3. Right-click, then select Restore.

  4. Click OK to confirm the restore operation.

The version is restored in the repository.

Exporting a version with the Version Browser

To export a version with the Version Browser:

This operation exports the version to a file without restoring it. The export can then be imported into another repository.


Note:

Exporting a version exports the object contained in the version, not the version information. This allows you to export an old version without having to actually restore it in the repository.

  1. Open the Version Browser.

  2. Select the version you want to export.

  3. Right-click, then select Export.

  4. Select the Export Directory and specify the Export Name. Select Replace Existing Files without Warning to erase existing export files without confirmation.

  5. Click OK.

The version is exported to the given location.

19.3 Working with the Version Comparison Tool

Oracle Data Integrator provides a comprehensive version comparison tool. This graphical tool allows you to view and compare two different versions of an object.

The version comparison tool provides the following features:

  • Color-coded side-by-side display of comparison results: The comparison results are displayed in two panes, side-by-side, and the differences between the two compared versions are color coded.

  • Comparison results organized in a tree: The tree of the comparison tool displays the comparison results in a hierarchical list of node objects in which expanding and collapsing the nodes is synchronized.

  • Report creation and printing in PDF format: The version comparison tool is able to generate and print a PDF report listing the differences between two particular versions of an object.

  • Supported objects: The version comparison tool supports the following objects: Project, Folder, Package, Scenario, Interface, Procedure, Knowledge Module, Sequence, User Function, Variable, Model, Model folder, and Solution.

  • Difference viewer functionality: This version comparison tool is a difference viewer and is provided only for consultation purposes. Editing or merging object versions is not supported. If you want to edit the object or merge the changes between two versions, you must make the changes manually in the objects concerned.

19.3.1 Viewing the Differences between two Versions

To view the differences between two particular versions of an object, open the Version Comparison tool.

There are three different ways of opening the version comparison tool:

By selecting the object in the Projects tree

  1. From the Projects tree in Designer Navigator, select the object whose versions you want to compare.

  2. Right-click the object.

  3. Select Version > Compare with version...

  4. In the Compare with version editor, select the version with which you want to compare the current version of the object.

  5. Click OK.

  6. The Version Comparison tool opens.

Via the Version tab of the object

  1. In Designer Navigator, open the object editor of the object whose versions you want to compare.

  2. Go to the Version tab.

    The Version tab provides the list of all versions created for this object. This list also indicates the creation date, the name of the user who created the version, and a description (if specified).

  3. Select the two versions you want to compare while keeping the <CTRL> key pressed.

  4. Right-click and select Compare...

  5. The Version Comparison tool opens.

Via the Version Browser

  1. From the main menu, select ODI > Version Browser...

  2. Select the two versions you want to compare. Note that you can compare only versions of the same object.

  3. Right-click and select Compare...

  4. The Version Comparison tool opens.

The Version Comparison tool shows the differences between two versions of your selected object: the newer version in the left pane and the older version in the right pane.

The differences are color-highlighted. The following color code is applied:

Color              Description
White (default)    Unchanged
Red                Deleted
Green              Added/new
Yellow             Object modified
Orange             Field modified (the value inside this field has changed)


Note:

If one object does not exist in one of the versions (for example, when it has been deleted), it is represented as an empty object (with empty values).

19.3.2 Using Comparison Filters

Once a version of an object has been created, the Version Comparison tool can be used at different points in time.

Creating or checking in a version is covered in Section 19.2, "Working with Versions".

The Version Comparison tool provides two different types of filters for customizing the comparison results:

  • Object filters: By selecting the corresponding check boxes (New and/or Deleted and/or Modified and/or Unchanged) you can decide whether you want only newly added and/or deleted and/or modified and/or unchanged objects to be displayed.

  • Field filters: By selecting the corresponding check boxes (New and/or Deleted and/or Modified and/or Unchanged) you can decide whether you want newly added fields and/or deleted fields and/or modified fields and/or unchanged fields to be displayed.

19.3.3 Generating and Printing a Report of your Comparison Results

To generate a report of your comparison results in Designer Navigator:

  1. In the Version Comparison tool, click the Printer icon.

  2. In the Report Generation dialog, set the object and field filters according to your needs.

  3. In the PDF file location field, specify a file name to write the report to. If no path is specified, the file will be written to the default directory for PDF files. This is a user preference.

  4. Select Open file after generation if you want to view the generated report in Acrobat® Reader™ once it has been generated.


    Note:

    In order to view the generated report, you must specify the location of Acrobat® Reader™ in the user parameters. Refer to Appendix B, "User Parameters" for more information.

  5. Click Generate.

A report in Adobe™ PDF format is written to the file specified in step 3.

19.4 Working with Solutions

A solution is a comprehensive and consistent set of interdependent versions of objects. Like other objects, it can be checked in at a given time as a version, and may be restored at a later date. Solutions are saved into the master repository. A solution assembles a group of versions called the solution's elements.

A solution is automatically assembled using cross-references. By scanning cross-references, a solution automatically includes all dependent objects required for a particular object. For example, when adding a project to a solution, versions for all the models used in this project's interfaces are automatically checked in and added to the solution. You can also manually add elements to, or remove elements from, the solution.

Solutions are displayed in the Solutions accordion in Designer Navigator and in Operator Navigator.

The following objects may be added into solutions:

  • Projects

  • Models, Model Folders

  • Scenarios

  • Load Plans

  • Global Variables, Knowledge Modules, User Functions and Sequences.

To create a solution:

  1. In Designer Navigator or Operator Navigator, from the Solutions toolbar menu select New Solution.

  2. In the Solutions editor, enter the Name of your solution and a Description.

  3. From the File menu select Save.

The resulting solution is an empty shell into which elements may then be added.

19.4.1 Working with Elements in a Solution

This section details the different actions that can be performed when working with elements of a solution.

Adding Elements

To add an element, drag the object from the tree into the Elements list in the solution editor. Oracle Data Integrator scans the cross-references and adds any Required Elements needed for this element to work correctly. If the objects being added have been inserted or updated since their last checked-in version, you will be prompted to create new versions for these objects.

Removing Elements

To remove an element from a solution, select the element you want to remove in the Elements list and then click the Delete button. This element disappears from the list. Existing checked-in versions of the object are not affected.

Rolling Back Objects

To roll objects back to versions stored in the solution, select the elements you want to restore and then click the Restore button. The selected elements are all restored from the solution's versions.

19.4.2 Synchronizing Solutions

Synchronizing a solution automatically adds required elements that have not yet been included in the solution, creates new versions of modified elements and automatically removes unnecessary elements. The synchronization process brings the content of the solution up to date with the elements (projects, models, etc) stored in the repository.

To synchronize a solution:

  1. Open the solution you want to synchronize.

  2. Click Synchronize in the toolbar menu of the Elements section.

  3. Oracle Data Integrator scans the cross-references. If the cross-references indicate that the solution is up to date, then a message appears. Otherwise, a list of elements to add or remove from the solution is shown. These elements are grouped into Principal Elements (added manually), Required Elements (directly or indirectly referenced by the principal elements) and Unused Elements (no longer referenced by the principal elements).

  4. Check the Accept boxes to version and include the required elements or delete the unused ones.

  5. Click OK to synchronize the solution. Version creation windows may appear for elements requiring a new version to be created.

You should synchronize your solutions regularly to keep the solution contents up to date. You should also synchronize a solution before checking in a solution version.

19.4.3 Restoring and Checking in a Solution

The procedure for checking in and restoring a solution version is similar to the method used for single elements. See Section 19.2, "Working with Versions" for more details.

You can also restore a solution to import scenarios into production in Operator Navigator or Designer Navigator.

To restore a scenario from a solution:

  1. Double-click a solution to open the Solution editor.

  2. Select a scenario from the Principal or Required Elements section. Note that other elements, such as projects and interfaces, cannot be restored.

  3. Click Restore in the toolbar menu of the Elements section.

The scenario is now accessible in the Scenarios tab.

Note that you can also use the Version Browser to restore scenarios. See Restoring a version with the Version Browser.


Note:

When restoring a solution, elements in the solution are not automatically restored. They must be restored manually from the Solution editor.

19.4.4 Importing and Exporting Solutions

Solutions can be exported and imported similarly to other objects in Oracle Data Integrator. Export/Import is used to transfer solutions from one master repository to another. Refer to Chapter 20, "Exporting/Importing" for more information.
