
What's New In Oracle Data Integrator?

This document describes the new and enhanced features introduced with Oracle Data Integrator 12c (12.1.2).

New Features in Oracle Data Integrator 12c (12.1.2)

Oracle Data Integrator 12c (12.1.2) introduces the following enhancements:

Declarative Flow-Based User Interface

The new declarative flow-based user interface combines the simplicity and ease-of-use of the declarative approach with the flexibility and extensibility of configurable flows. Mappings (the successor of the Interface concept in Oracle Data Integrator 11g) connect sources to targets through a flow of components such as Join, Filter, Aggregate, Set, Split, and so on.

Reusable Mappings

Reusable Mappings can be used to encapsulate flow sections that can then be reused in multiple mappings. A reusable mapping can have input and output signatures to connect to an enclosing flow; it can also contain sources and targets that are encapsulated inside the reusable mapping.

Multiple Target Support

A mapping can now load multiple targets as part of a single flow. The order of target loading can be specified, and the Split component can be optionally used to route rows into different targets, based on one or several conditions.

Step-by-Step Debugger

Mappings, Packages, Procedures, and Scenarios can now be debugged in a step-by-step debugger. Users can manually traverse task execution within these objects and set breakpoints to interrupt execution at pre-defined locations. Values of variables can be introspected and changed during a debugging session, and data of underlying sources and targets can be queried, including the content of uncommitted transactions.

Runtime Performance Enhancements

The runtime execution has been improved to enhance performance. Various changes have been made to reduce overhead of session execution, including the introduction of blueprints, which are cached execution plans for sessions. Performance is improved by loading sources in parallel into the staging area. Parallelism of loads can be customized in the physical view of a map. Users also have the option to use unique names for temporary database objects, allowing parallel execution of the same mapping.

Oracle GoldenGate Integration Improvements

The integration of Oracle GoldenGate as a source for the Change Data Capture (CDC) framework has been improved in the following areas:

Standalone Agent Management with WebLogic Management Framework

Oracle Data Integrator standalone agents are now managed through the WebLogic Management Framework. This has the following advantages:

Integration with OPSS Enterprise Roles

Oracle Data Integrator can now use the authorization model in Oracle Platform Security Services (OPSS) to control access to resources. Enterprise roles can be mapped into Oracle Data Integrator roles to authorize enterprise users across different tools.

XML Improvements

The following XML Schema constructs are now supported:

Oracle Warehouse Builder Integration

Oracle Warehouse Builder (OWB) jobs can now be executed in Oracle Data Integrator through the OdiStartOwbJob tool. The OWB repository is configured as a data server in Topology. All the details of the OWB job execution are displayed as a session in the Operator tree. For more information about this feature, see "OdiStartOwbJob".

Unique Repository IDs

Master and Work Repositories now use unique IDs following the GUID convention. This avoids collisions during import of artifacts and allows for easier management and consolidation of multiple repositories in an organization.

Oracle Warehouse Builder to Oracle Data Integrator Migration Utility

ODI 12c supports an easier mapping between Oracle Warehouse Builder 11gR2 (11.2.0.4) concepts and objects and their ODI 12c (12.1.2) counterparts. A migration utility is provided that automatically translates many OWB objects and mappings into their ODI equivalents.

The migration utility requires ODI patch 17053768 and the ODI 12.1.2.0.1 bundle patch (patch number 17836908), and OWB patch 17830453. For more information about the migration utility, see Migrating from Oracle Warehouse Builder to Oracle Data Integrator.


Part VIII

Appendices

Part VIII contains the following appendices:


Part IV

Developing Integration Projects

This part describes how to develop integration projects in Oracle Data Integrator.

This part contains the following chapters:


8 Creating and Using Data Services

This chapter describes how to configure and generate data services with Oracle Data Integrator. Data services enable access to your data through a web service interface. They also enable access to the changes captured using Oracle Data Integrator's Changed Data Capture feature.

This chapter includes the following sections:

Introduction to Data Services

Data Services are specialized Web Services that provide access to data in datastores, and to changes captured for these datastores using Changed Data Capture. These Web Services are automatically generated by Oracle Data Integrator and deployed to a Web Services container in an application server.

Data Services can be generated and deployed into a web service stack implementing the Java API for XML Web Services (JAX-WS), such as Oracle WebLogic Server or IBM WebSphere.


WARNING:

Axis2 is removed in this version of Oracle Data Integrator. Customers previously using Axis2 should migrate their data services implementations by regenerating and redeploying them in a JAX-WS container.


Setting Up Data Services

Data services are deployed in a web service container (an application server into which the web service stack is installed). This web service container must be declared in the topology in the form of a data server, attached to the Axis2 or JAX-WS technology.

As data services are deployed in an application server, data sources must also be defined in the topology for accessing the data from this application server, and deployed or created in the application server.

Setting up data services involves steps covered in the following sections:

Configuring the Web Services Container

You must declare the web service container as a data server in the topology, in order to let Oracle Data Integrator deploy Data Services into it.


Note:

Be careful not to mistake the web service containers and the servers containing the data. While both are declared as data servers in Oracle Data Integrator, the former do not contain any data. They are only used to publish Data Services.


Web service containers declared in Oracle Data Integrator have one of three modes of deploying Web Services:

  • Copying files directly onto the server, if you have file access to the server.

  • Uploading onto the server by FTP.

  • Uploading with the Web Service Upload method with Axis2.

The next steps in the configuration of the Web Services container depend on the type of web service container and the deployment mode you choose to use.

To configure a web service container:

  1. In Topology Navigator expand the Technologies node in the Physical Architecture panel.

  2. Select the technology corresponding to the web service container: Axis2 or JAX-WS. If you are using Oracle WebLogic Server or another JEE 5 compatible application server, use JAX-WS.

  3. Right-click and select New Data Server.

  4. Fill in the following fields in the Definition tab:

    • Name: Name of the Data Server that will appear in Oracle Data Integrator.

      For naming data servers, it is recommended to use the following naming standard: <TECHNOLOGY_NAME>_<SERVER_NAME>.

    • Base URL for published services: Enter the base URL from which the web services will be available. For Axis2, it is http://<host>:<HTTP Port>/axis2/services/, and for Oracle WebLogic Server it is http://<host>:<HTTP Port>/.

  5. Select one of the following Deployment options:

    • Save web services into directory: directory into which the web service will be created. It can be a network directory on the application server or a local directory if you plan to deploy the web services separately into the container.

    • Upload web services by FTP: select this option to upload the generated web service to the container. You must provide a FTP URL as well as a User name and Password for performing the upload operation.

    • Upload web services with Axis2: select this option to upload the generated web service to the container using Axis2 web service upload mechanism. This option appears only for Axis2 containers. You must provide the Base URL for Axis2 web application - typically http://<host>:<HTTP Port>/axis2/axis2admin/ - as well as an Axis2 User name and Password for performing the upload operation.

  6. From the File menu, click Save. The data server appears in the physical architecture.

  7. Select this data server, right-click and select New Physical Schema. A new Physical Schema Editor appears. In the Context tab, create a logical schema for this new physical schema, or associate it to an existing logical schema. The process for creating logical schemas is detailed in Chapter 4, "Setting Up a Topology."

  8. From the File menu, click Save.

You only need to configure one physical schema for the web container. Note that the logical schema/context/physical schema association is important here, as the context determines the container into which deployment takes place.

Setting up the Data Sources

The Data Services generated by Oracle Data Integrator do not contain connection information for sources and targets. Instead, they make use of data sources defined within the Web Services container or on the application server. These data sources contain connection properties required to access data, and must correspond to data servers already defined within the Oracle Data Integrator topology.

To set up a data source, you can either:

  • Configure the data sources from the application server console. For more information, refer to your application server documentation.

  • Deploy the data source from Oracle Data Integrator if the container is an Oracle WebLogic Server. See Chapter 4, "Setting Up a Topology," for more information on data source deployment.

Configuring the Model

To configure Data Services, you must first create and populate a model. See Chapter 5, "Creating and Using Data Models and Datastores," for more information.

You should also have imported the appropriate Service Knowledge Module (SKM) into one of your projects. The SKM contains the code template from which the Data Services will be generated. For more information on importing KMs, see Chapter 9, "Creating an Integration Project."

To configure a model for data services:

  1. In the Models tree in the Designer Navigator, select the model.

  2. Double-click this model to edit it.

  3. Fill in the following fields in the Services tab:

    • Application server: Select the logical schema corresponding to the container you have previously defined.

    • Namespace: Type in the namespace that will be used in the web services WSDL.

    • Package name: Name of the generated Java package that contains your Web Service. Generally, this is of the form com.<company name>.<project name>.

    • Datasource name: Name of the data source, as defined in your container. Depending on the application server you are using, the data source might be local or global:

      • If your data source is global, you only need to enter the data source name in the Datasource name field.

      • If your data source is local, the data source name should be prefixed by java:/comp/env/ (for example, a local data source named jdbc/CustomerDS would be referenced as java:/comp/env/jdbc/CustomerDS).

      Note that OC4J uses a global data source by default, whereas Tomcat uses a local data source. Refer to the documentation of your application server for more information.

    • Name of data service: This name is used for the data services operating at the model level. You can also define a data service name for each datastore later.

  4. Select a Service Knowledge Module (SKM) from the list, and set its options. See the Connectivity and Knowledge Modules Guide for Oracle Data Integrator for more information about this KM and its options. Only SKMs imported into projects appear in this list.

  5. Go to the Deployed Datastores tab.

  6. Select every datastore that you wish to expose with a data service. For each of those, specify a Data Service Name and the name of the Published Entity.

  7. From the File menu, click Save.

Although not required, you can also fine-tune the configuration of the generated data services at the datastore and attribute level.

For example, you can specify the operations that will be permitted for each attribute. One important use of this is to lock an attribute against being written to via data services.

To configure data services options at the datastore level:

  1. In the Models tree in the Designer Navigator, select the datastore.

  2. Double-click this datastore to edit it.

  3. Select the Services tab.

  4. Check Deploy as Data Service if you want the datastore to be deployed.

  5. Enter the Data Service Name and the name of the Published Entity for the datastore.

  6. From the File menu, click Save.

To configure data service options at the attribute level:

  1. In the Models tree in the Designer Navigator, select the attribute.

  2. Double-click this attribute to edit it.

  3. Select the Services tab.

  4. Check the operations that you want to allow: SELECT, INSERT, DELETE. The INSERT action includes the UPDATE action.

  5. From the File menu, click Save.

Generating and Deploying Data Services

Once the model, data sources and container have been configured, it is possible to generate and deploy the data services.

Generating and Deploying Data Services

Generating data services for a model generates model-level data services as well as the data services for the selected datastores in this model.

To generate Data Services for a model:

  1. In the Models tree in the Designer Navigator, select the model.

  2. Right-click, and select Generate Service. The Generating Data Service window opens.

  3. In the Generating Data Service window, fill in the following fields:

    • Store generated Data Services in: Oracle Data Integrator places the generated source code and the compiled Web Service here. This directory is a temporary location that can be deleted after generation. You can review the generated source code for the data services here.

    • Context: Context into which the data services are generated and deployed. This context choice has three effects:

      • Determining the JDBC/Java datatype bindings at generation time.

      • Determining which physical schemas are used to serve the data.

      • Determining which physical Web Services container is deployed to.

    • Generation Phases: Choose one or more generation phases. For normal deployment, all three phases should be selected. However, it may be useful to only perform the generation phase when testing new SKMs, for instance. See below for the meaning of these phases.

  4. Click OK to start data service generation and deployment.

The generation phases are the following:

Generate code: This phase performs the following operations:

  • Deletes the content of the generation directory.

  • Generates the Java source code for the data services using the code template from the SKM.

Compilation: This phase performs the following operations:

  • Extracts the web service framework.

  • Compiles the Java source code.

Deployment: This phase performs the following operations:

  • Packages the compiled code.

  • Deploys the package to the deployment target, using the deployment method selected for the container.

Generate 10.x style WSDL (deprecated): This is not a generation phase. It is an option available when generating Axis2 web services. Select this option to generate web services compatible with a 10g ODI WSDL.


Overview of Generated Services

The data services generated by Oracle Data Integrator include model-level services and datastore level services. These services are described below.

Model-level services

Data services are generated at model-level when the model is enabled for consistent set CDC.

The following services are available at model-level:

  • extend Window (no parameters): Carries out an extend window operation.

  • lock (Subscriber Name): Locks the consistent set for the named subscriber. To lock the consistent set for several subscribers, call the service several times, using several OdiInvokeWebService steps for example.

  • unlock (Subscriber Name): Unlocks the consistent set for the named subscriber.

  • purge (no parameters): Purges consumed changes.

See Chapter 6, "Using Journalizing," for more information on these operations.

Datastore-level services

The range of operations offered by each generated data service depends on the SKM used to generate it. There are several common properties shared by the SKMs available with Oracle Data Integrator. In almost every case the name of the published entity forms part of the name of each operation. In the following examples, the published entity "Customer" is used.

The following operations are available at datastore-level:

  • Operations on a single entity. These operations allow a single record to be manipulated, by specifying a value for its primary key. Other fields may have to be supplied to describe the new row, if any. Examples: addcustomer, getcustomer, deletecustomer, updatecustomer.

  • Operations on a group of entities specified by a filter. These operations involve specifying values for one or several fields to define a filter, then optionally supplying other values for the changes to be made to those rows. In general, a maximum number of rows to return can also be specified. Examples: getcustomerfilter, deletecustomerfilter, updatecustomerfilter.

  • Operations on a list of entities. This list is constructed by supplying several individual entities, as described in the "single entity" case above. Examples: addcustomerlist, deletecustomerlist, getcustomerlist, updatecustomerlist.

Testing Data Services

The easiest way to test generated data services is to use the graphical interface for the OdiInvokeWebService Oracle Data Integrator tool. See Chapter 16, "Using Web Services," for more information on this subject.


10 Creating and Using Packages

This chapter introduces Packages and Steps. It also walks through the process of creating a Package and provides additional information about handling steps within a Package.

This chapter includes the following sections:

Introduction to Packages

The Package is the largest unit of execution in Oracle Data Integrator. A Package is made up of a sequence of steps organized into an execution diagram.

Each step can either succeed or fail its execution. Depending on the execution result (success or failure), a step can branch to another step.

Introduction to Steps

Table 10-1 lists the different types of steps. References are made to sections that provide additional details.

Table 10-1 Step Types

  • Flow (Mapping): Executes a Mapping. See "Adding a Mapping step".

  • Procedure: Executes a Procedure. See "Adding a Procedure step".

  • Variable: Declares, sets, refreshes or evaluates the value of a variable. See "Variable Steps".

  • Oracle Data Integrator Tools: These tools, available in the Toolbox, provide access to all Oracle Data Integrator API commands, or perform operating system calls. See "Adding Oracle Data Integrator Tool Steps".

  • Models, Sub-models, and Datastores: Performs journalizing, static check or reverse-engineering operations on these objects. See "Adding a Model, Sub-Model or Datastore".



Figure 10-1 Sample Package


For example, the "Load Customers and Invoice" Package example shown in Figure 10-1 performs the following actions:

  1. Execute procedure "System Backup" that runs some backup operations.

  2. Execute mapping "Customer Group" that loads the customer group datastore.

  3. Execute mapping "Customer" that loads the customer datastore.

  4. Execute mapping "Product" that loads the product datastore.

  5. Refresh the variable "Last Invoice ID" to set the value of this variable for use later in the Package.

  6. Execute mapping "Invoice Headers" that loads the invoice header datastore.

  7. Execute mapping "Invoices" that loads the invoices datastore.

  8. If any of the steps above fails, then the Package runs the "OdiSendMail 2" step that sends an email to the administrator using an Oracle Data Integrator tool.

Introduction to Creating Packages

Packages are created in the Package Diagram Editor. See "Introduction to the Package editor" for more information.

Creating a Package consists of the following main steps:

  1. Creating a New Package. See "Creating a new Package" for more information.

  2. Working with Steps in the Package (add, duplicate, delete, and so on). See "Working with Steps" for more information.

  3. Defining Step Sequences. See "Defining the Sequence of Steps" for more information.

  4. Running the Package. See "Running a Package" for more information.

Introduction to the Package editor

The Package editor provides a single environment for designing Packages. Figure 10-2 gives an overview of the Package editor.

Figure 10-2 Package editor


Table 10-2 Package editor Sections

  • Package Diagram (middle): You drag components such as mappings, procedures, datastores, models, sub-models or variables from the Designer Navigator into the Package Diagram to create steps for these components. You can also define the sequence of steps and organize steps in this diagram.

  • Package Toolbox (left side of the Package diagram): The Toolbox shows the list of Oracle Data Integrator tools that can be added to a Package. These tools are grouped by type.

  • Package Toolbar (top of the Package diagram): The Package Toolbar provides tools for organizing and sequencing the steps in the Package.

  • Properties Panel (under the Package diagram): This panel displays the properties of the object that is selected in the Package Diagram.


Creating a new Package

To create a new Package:

  1. In the Project tree in Designer Navigator, click the Packages node in the folder where you want to create the Package.

  2. Right-click and select New Package.

  3. In the New Package dialog, type in the Name, and optionally a Description, of the Package. Click OK.

  4. Use the Overview tab to set properties for the package.

  5. Use the Diagram tab to design your package, adding steps as described in "Working with Steps".

  6. From the File menu, click Save.

Working with Steps

Packages are an organized sequence of steps. Designing a Package consists mainly of working with the steps of this Package.

Adding a Step

Adding a step depends on the nature of the step being inserted. See Table 10-1, "Step Types" for more information on the different types of steps. The procedures for adding the different types of steps are given below.

Adding a Mapping step

To insert a Mapping step:

  1. Open the Package editor and go to the Diagram tab.

  2. In the Designer Navigator, expand the project node and then expand the Mappings node, to show your mappings for this project.

  3. Drag and drop a mapping into the diagram. A Flow (Mapping) step icon appears in the diagram.

  4. Click the step icon in the diagram. The properties panel shows the mapping's properties.

  5. In the properties panel, modify properties of the mapping as needed.

  6. From the File menu, click Save.

Adding a Procedure step

To insert a Procedure step:

  1. Open the Package editor and go to the Diagram tab.

  2. In the Designer Navigator, expand the project node and then expand the Procedures node, to show your procedures for this project.

  3. Drag and drop a procedure into the diagram. A Procedure step icon appears in the diagram.

  4. Click the step icon in the diagram. The properties panel shows the procedure's properties.

  5. In the properties panel, modify properties of the procedure as needed.

  6. From the File menu, click Save.

Variable Steps

There are different variable step types within Oracle Data Integrator:

  • Declare Variable: When a variable is used in a Package (or in elements of the topology which are used in the Package), Oracle strongly recommends that you insert a Declare Variable step in the Package. This step explicitly declares the variable in the Package.

  • Refresh Variable: This variable step refreshes the variable by running the query specified in the variable definition.

  • Set Variable: There are two functions for this step:

    • Assign sets the current value of a variable.

    • Increment increases or decreases a numeric value by the specified amount.

  • Evaluate Variable: This variable step type compares the value of the variable with a given value according to an operator. If the condition is met, then the evaluation step is true, otherwise it is false. This step allows for branching in Packages.

Adding a Variable step

To add a Variable step (of any type):

  1. Open the Package editor and go to the Diagram tab.

  2. In the Designer Navigator, expand the project node and then expand the Variables node, to show your variables for this project. Alternatively, expand the Global Objects node and expand the Variables node, to show global variables.

  3. Drag and drop a variable into the diagram. A Variable step icon appears in the diagram.

  4. Click the step icon in the diagram. The properties panel shows the variable's properties.

  5. In the properties panel, modify properties of the variable as needed. On the General tab, select the variable type from the Type list.

    • For Set Variables, select Assign, or Increment if the variable is of Numeric type. For Assign, type into the Value field the value to be assigned to the variable (this value may be another variable). For Increment, type into the Increment field a numeric constant by which to increment the variable.

    • For Evaluate Variables, select the Operator used to compare the variable value. Type in the Value field the value to compare with your variable. This value may be another variable.


      Notes:

      • You can specify a list of values in the Value field. When using the IN operator, use the semicolon character (;) to separate the values of a list (for example, 10;20;30).

      • An evaluate variable step can be branched based on the evaluation result. See "Defining the Sequence of Steps" for more information on branching steps.


  6. From the File menu, click Save.

Adding Oracle Data Integrator Tool Steps

Oracle Data Integrator provides tools that can be used within Packages for performing simple operations. The tools are either built-in tools or Open Tools that enable you to enrich the data integrator toolbox.

To insert an Oracle Data Integrator Tool step:

  1. Open the Package editor and go to the Diagram tab.

  2. From the Package Toolbox, select the tool that you want to use. Note that Open tools appear in the Plugins group.

  3. Click in the Package diagram. A step corresponding to your tool appears.


    Tip:

    As long as a tool is selected, left-clicking in the diagram will continue to place steps. To stop placing steps, click the Free Choice button in the Package Toolbar. The mouse pointer changes to an arrow, indicating you are no longer placing tools.


  4. Click the step icon in the diagram. The properties panel shows the tool's properties.

  5. Set the values for the parameters of the tool. The parameters descriptions appear when you select one, and are detailed in Appendix A, "Oracle Data Integrator Tools Reference."

  6. You can edit the code of this tool call in the Command tab.

  7. From the File menu, click Save.

The following tools are frequently used in Oracle Data Integrator Package:

  • OdiStartScen: starts an Oracle Data Integrator scenario synchronously or asynchronously (see the example after this list). To create an OdiStartScen step, you can directly drag and drop the scenario from the Designer Navigator into the diagram.

  • OdiInvokeWebService: invokes a web service and saves the response in an XML file.

  • OS Command: calls an Operating System command. Using an operating system command may make your Package platform-dependent.
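For example, an OdiStartScen step uses a command like the following minimal sketch (the scenario name and version are illustrative):

    OdiStartScen -SCEN_NAME=LOAD_CUSTOMERS -SCEN_VERSION=001

The optional -SYNC_MODE parameter chooses between synchronous and asynchronous execution; see Appendix A, "Oracle Data Integrator Tools Reference," for the full parameter list.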

The Oracle Data Integrator tools are listed in Appendix A, "Oracle Data Integrator Tools Reference."


Note:

When setting the parameters of a tool using the steps properties panel, graphical helpers allow value selection in a user-friendly manner. For example, if a parameter requires a project identifier, the helper displays a list of project names to select from. By switching to the Command tab, you can review the raw command and see the identifier.


Adding a Model, Sub-Model or Datastore

You can perform journalizing, static check or reverse-engineering operations on models, sub-models, and datastores.

To insert a check, reverse engineer, or journalizing step in a Package:


Notes:

  • To perform a static check, you must define the CKM in the model.

  • To perform journalizing operations, you must define the JKM in the model.

  • Reverse engineering options set in the model definition are used for performing reverse-engineering processes in a package.


  1. Open the Package editor and go to the Diagram tab.

  2. In Designer Navigator, select the model, sub-model or datastore to check from the Models tree.

  3. Drag and drop this model, sub-model or datastore into the diagram.

  4. In the General tab of the properties panel, select the Type: Check, Reverse Engineer, or Journalizing.

    • For Check steps, select Delete Errors from the Checked Tables if you want this static check to remove erroneous rows from the tables checked in this process.

    • For Journalizing steps, set the journalizing options. See Chapter 6, "Using Journalizing," for more information on these options.

  5. From the File menu, click Save.

Deleting a Step


Caution:

It is not possible to undo a delete operation in the Package diagram.


To delete a step:

  1. In the Package toolbar tab, select the Free Choice tool.

  2. Select the step to delete in the diagram.

  3. Right-click and then select Delete Step, or press the Delete key on your keyboard.

  4. Click Yes to continue.

The step disappears from the diagram.

Duplicating a Step

To duplicate a step:

  1. In the Package toolbar tab, select the Free Choice tool.

  2. Select the step to duplicate in the diagram.

  3. Right-click and then select Duplicate Step.

A copy of the step appears in the diagram.

Running a Step

To run a step:

  1. In the Package toolbar tab, select the Free Choice tool.

  2. Select the step to run in the diagram.

  3. Right-click and then select Execute Step.

  4. In the Run dialog, select the execution parameters:

    • Select the Context into which the step must be executed.

    • Select the Logical Agent that will run the step.

    • Select a Log Level.

    • Optionally, select Simulation. This option performs a simulation of the run operation and generates a run report, without actually affecting data.

  5. Click OK.

  6. The Session Started Window appears.

  7. Click OK.

You can review the step execution in the Operator Navigator.

Editing a Step's Linked Object

The step's linked object is the mapping, procedure, variable, or other object from which the step is created. You can edit this object from the Package diagram.

To edit a step's linked object:

  1. In the Package toolbar tab, select the Free Choice tool.

  2. Select the step to edit in the diagram.

  3. Right-click and then select Edit Linked Object.

The Editor for the linked object opens.

Arranging the Steps Layout

The steps can be rearranged in the diagram in order to make it more readable.

To arrange the steps in the diagram:

  1. From the Package toolbar menu, select the Free Choice tool.

  2. Select the steps you wish to arrange using either of the following methods:

    • Keep the CTRL key pressed and select each step.

    • Drag a box around multiple items in the diagram with the left mouse button pressed.

  3. To arrange the selected steps, you may either:

    • Drag them to arrange their position into the diagram

    • Right-click, then select a Vertical Alignment or Horizontal Alignment option from the context menu.

You can also use the Reorganize button from the toolbar to automatically reorganize all of the steps in your package.

Defining the Sequence of Steps

Once the steps are created, you must order them into a data processing chain. This chain has the following rules:

  • It starts with a unique step defined as the First Step.

  • Each step has two termination states: Success or Failure.

  • A step in failure or success can be followed by another step, or by the end of the Package.

  • In case of failure, it is possible to define a number of retries.

A Package has one entry point, the First Step, but several possible termination steps.

Failure Conditions

The table below details the conditions that lead a step to a Failure state. In all other situations, the step ends in a Success state.

  • Flow: Error in a mapping command, or the maximum number or percentage of errors allowed is reached.

  • Procedure: Error in a procedure command.

  • Refresh Variable: Error while running the refresh query.

  • Set Variable: Error when setting the variable (invalid value).

  • Evaluate Variable: The condition defined in the step is not matched.

  • Declare Variable: This step has no failure condition and always succeeds.

  • Oracle Data Integrator Tool: The tool return code is different from zero. If the tool is an OS Command, a command return code different from zero is a failure.

  • Journalize Datastore, Model or Sub-Model: Error in a journalizing command.

  • Check Datastore, Model or Sub-Model: Error in the check process.

  • Reverse Model: Error in the reverse-engineering process.


Defining the Sequence

To define the first step of the Package:

  1. In the Package toolbar tab, select the Free Choice tool.

  2. Select the step to set as the first one in the diagram.

  3. Right-click and then select First Step.

The first step symbol appears on the step's icon.

To define the next step upon success:

  1. In the Package toolbar tab, select the Next Step on Success tool.

  2. Drag a line from one step to another, using the mouse.

  3. Repeat this operation to link all your steps in a success path sequence. This sequence should start from the step defined as the First Step.

Green arrows representing the success path are shown between the steps, with an ok label on these arrows. In the case of an evaluate variable step, the label is true.

To define the next step upon failure:

  1. In the Package toolbar tab, select the Next Step on Failure tool.

  2. Drag a line from one step to another, using the mouse.

  3. Repeat this operation to link steps according to your workflow logic.

Red arrows representing the failure path are shown between the steps, with a ko label on these arrows. In the case of an evaluate variable step, the arrow is green and the label is false.

To define the end of the Package upon failure:

By default, a step that is linked to no other step after a success or failure condition terminates the Package when this success or failure condition is met. You can change this behavior by editing the step.

  1. In the Package toolbar tab, select the Free Choice tool.

  2. Select the step to edit.

  3. In the properties panel, select the Advanced tab.

  4. Select End in Processing after failure or Processing after success. The links after the step disappear from the diagram.

  5. You can optionally set a Number of attempts and a Time between attempts for the step to retry a number of times with an interval between the retries.

Running a Package

To run a Package:

  1. Use any of the following methods:

    • In the Projects node of the Designer Navigator, expand a project and select the Package you want to execute. Right-click and select Run, or click the Run button in the ODI Studio toolbar, or select Run from the Run menu of the ODI menu bar.

    • In the package editor, select the package by clicking the tab with the package name at the top of the editor. Click the Run button in the ODI Studio toolbar, or select Run from the Run menu of the ODI menu bar.

  2. In the Run dialog, select the execution parameters:

    • Select the Context into which the package must be executed.

    • Select the Logical Agent that will run the package.

    • Select a Log Level.

    • Optionally, select Simulation. This option performs a simulation of the run operation and generates a run report, without actually affecting data.

  3. Click OK.

  4. The Session Started Window appears.

  5. Click OK.

You can review the Package execution in the Operator Navigator.


A Oracle Data Integrator Tools Reference

This appendix provides a reference of Oracle Data Integrator (ODI) tools. It is intended for application developers who want to use these tools to design integration scenarios.

This appendix includes the following sections:

Using Oracle Data Integrator Tools

Oracle Data Integrator tools (also called Oracle Data Integrator commands) are commands provided for performing specific tasks at runtime. These tasks can be as simple as waiting for a certain time or producing a sound, or as sophisticated as executing Ant scripts or reading email from a server.

Oracle Data Integrator tools are used in Packages, Procedure Commands, Knowledge Modules Commands, or directly from a command line.


Note:

Previous versions of Oracle Data Integrator supported calling built-in tools from Jython or Java scripts using their internal Java classes (such as SnpsSendMail and SendMail). This approach is no longer supported.



Note:

Carriage returns in commands are not permitted.


Using a Tool in a Package

Adding and using an Oracle Data Integrator tool in a Package is described in "Adding Oracle Data Integrator Tool Steps".

You can sequence the tool steps within the package and organize them according to their success and failure. For more information about sequencing, see "Defining the Sequence of Steps" and "Arranging the Steps Layout".

You can use variable values, sequences, or Oracle Data Integrator substitution method calls directly in tool parameters. See Chapter 13, "Creating and Using Procedures, Variables, Sequences, and User Functions," for more information.

Using a Tool in a Knowledge Module or Procedure Command

Using an Oracle Data Integrator tool in a Knowledge Module or Procedure is described in "Working with Procedures".

You can use variable values, sequences, Oracle Data Integrator substitution method calls, or the results from a SELECT statement directly in tool parameters. See Chapter 13, "Creating and Using Procedures, Variables, Sequences, and User Functions," for more information.
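For instance, a procedure or KM command can pass a refreshed variable value to a tool parameter using the #<PROJECT_CODE>.<VARIABLE_NAME> substitution syntax. The following line is a minimal sketch, assuming a project with code MYPROJ and a numeric variable WAIT_TIME (both names are illustrative):

    OdiSleep -DELAY=#MYPROJ.WAIT_TIME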

Using a Tool From a Command Line

Command line scripts for Oracle Data Integrator tools are run from the DOMAIN_HOME/bin directory. To run a tool from a command line, you must first create an ODI Physical Agent instance in the ODI Topology and configure an ODI Standalone Agent instance in a Domain. For more information about performing these tasks, see Installing and Configuring Oracle Data Integrator.

When you run a tool from a command line, you must specify the -INSTANCE=<agent_name> parameter, where <agent_name> is the name of the physical agent you configured (for example, OracleDIAgent1).

To use an Oracle Data Integrator tool from a command line:

  1. Launch the command shell for your environment (Windows or UNIX).

  2. Navigate to the DOMAIN_HOME/bin directory.

  3. Launch the startcmd.cmd (Windows) or startcmd.sh (UNIX) command and run an Oracle Data Integrator tool with the following syntax:

    startcmd.<cmd|sh> -INSTANCE=<agent_name> <command_name> [<command_parameters>]*
    

Command names and command parameters are case-sensitive.

Important Notes

Note the following:

  • On Windows platforms, command arguments that contain equal (=) signs or spaces must be surrounded with double quotation marks. This differs from the UNIX command call. For example:

    startcmd.cmd OdiSleep "-INSTANCE=OracleDIAgent1" "-DELAY=5000"
    ./startcmd.sh OdiSleep -INSTANCE=OracleDIAgent1 -DELAY=5000
    
  • The following tools do not support direct invocation through a command line:

    • OdiRetrieveJournalData

    • OdiRefreshJournalCount

Using Open Tools

The Open Tools feature provides an extensible platform for developing custom third-party tools that you can use in Packages and Procedures. As with the standard tools delivered with Oracle Data Integrator, Open Tools can interact with the operating system and manipulate data.

Open Tools are written in Java. Writing your own Open Tools is described in "Developing Open Tools".

Open Tools are delivered as a Java package (.zip or .jar) that contains several files:

  • A compiled Java .class file

  • Other resources, such as icon files

Installing and Declaring an Open Tool

Before you can use an Open Tool, you must install and declare it.

Installing an Open Tool

To install an Open Tool, you must add the Open Tool JAR into the classpath of the component using the tool.

Open Tool JARs must be added to the DOMAIN_HOME/lib directory. Drivers are added to the same location.

To deploy an Open Tool JAR with a Java EE agent, generate a server template for this agent. The Open Tool displays in the Libraries and Drivers list in the Template Generation Wizard. See "Creating a Server Template for the Java EE Agent" for more information.


Note:

This operation must be performed for each Oracle Data Integrator Studio from which the tool is being used, and for each agent that will run sessions using this tool.


Declaring a New Open Tool

This operation declares an Open Tool in a master repository and enables the tool to display in Oracle Data Integrator Studio.

To declare a new tool:

  1. In Oracle Data Integrator Studio, select the ODI menu and then select Add/Remove Open Tools. The Add Open Tools dialog displays.

  2. Enter the name of the class in the Open Tool Class Name field.

or:

  1. Click Find in the ClassPath, then browse to the name of the Open Tool's Java class. To search for the class by name, enter part of the name in the field at the top.

  2. Click OK.

    Note that all classes currently available to Oracle Data Integrator are displayed, including those that are not Open Tools. You must know the name of your class in order to add it.

  3. Click Add Open Tool.

  4. Select the line containing your Open Tool.

    • If the tool was correctly found on the classpath, the supplied icons and the tool's syntax, description, provider, and version number are displayed.

    • If the tool was not found, an error message is displayed. Change the classpath, or move the Open Tool to the correct directory.


    Note:

    This operation to declare a new Open Tool must be performed only once for a given master repository.



    Note:

    An Open Tool name cannot start with Snp or Odi. An Open Tool with a name that starts with these strings is ignored.


Using Open Tools in a Package or Procedure

You can use Open Tools in a Package or Procedure, similar to the tools provided with Oracle Data Integrator.

Developing Open Tools

An Open Tool is a Java package that contains a compiled Java class that implements the interface oracle.odi.sdk.opentools.IOpenTool. For a complete description of classes and methods, see the Oracle Data Integrator Open Tools Java API Reference (JavaDoc).

An Open Tool package typically should also contain two icons, which are used to represent the Open Tool in the Oracle Data Integrator graphical interface.

Classes

The following table lists and describes Open Tool classes and interfaces.

  • IOpenTool: Interface that every Open Tool must implement.

  • OpenToolAbstract: Abstraction of the interface with some helper methods. Preferably extend this class rather than implementing the interface directly.

  • IOpenToolParameter: Interface that parameters used by Open Tools must implement. In most cases, OpenToolParameter should be used rather than implementing this interface.

  • OpenToolParameter: Complete implementation of IOpenToolParameter. Each OpenToolParameter holds one parameter.

  • OpenToolsExecutionException: Exception class that should be thrown if necessary by Open Tool methods.

  • SimpleOpenToolExample: A simple example of an Open Tool, which can be used as a starting point.


Developing a New Open Tool

The following steps describe the development of a basic Open Tool, SimpleMessageBox. The source code for this class is available in the demo/plugins/src directory.

  1. Define the syntax. In this example, the Open Tool is called as follows:

    SimpleMessageBox "-TEXT=<text message>" "-TITLE=<window title>"
    
  2. Create 16x16 and 32x32 icons (usually in .gif format).

  3. Create and implement the class. See "Implementing the Class".

  4. Compile the class and create a package with the two icon files.

  5. Install and declare the Open Tool as described in "Installing and Declaring an Open Tool".
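Once installed and declared, the Open Tool can be called in a Package or Procedure step like any built-in tool. For example, a call matching the syntax defined in step 1 (the text and title values are illustrative):

    SimpleMessageBox "-TEXT=Load complete" "-TITLE=ODI"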

Implementing the Class

Implementing the class consists of the following steps:

  1. Declaration

  2. Importing Packages

  3. Defining the Parameters

  4. Implementing Informational Functions

  5. Execution

Declaration

Before you declare the class, you must name the package.

Naming the Package

Put the class in a package named appropriately. The package name is used to identify the Open Tool when installing it.

package com.myCompany.OpenTools;

Declaring the Class

There are two basic approaches to developing an Open Tool:

  • Extend an existing class that you want to convert into an Open Tool. In this case, simply implement the interface IOpenTool directly on the existing class.

  • Develop a new class. In this case, it is easiest to extend the abstract class OpenToolAbstract. This abstract class also contains additional helper methods for working with parameters.

    public class SimpleMessageBox extends OpenToolAbstract {
    
Importing Packages

Almost every Open Tool must import the following Open Tool SDK packages:

import oracle.odi.sdk.opentools.IOpenTool;                 /* All Open Tool classes need these three classes */
import oracle.odi.sdk.opentools.IOpenToolParameter;
import oracle.odi.sdk.opentools.OpenToolExecutionException;
import oracle.odi.sdk.opentools.OpenToolAbstract;          /* The abstract class extended for the Open Tool */
import oracle.odi.sdk.opentools.OpenToolParameter;         /* The class used for parameters */

In this particular example, a package to create the message box is also needed:

import javax.swing.JOptionPane; /* Needed for the message box used in this example */
Defining the Parameters

Add a property to store the OpenToolParameter objects. This property is used both to define the parameters for the syntax and to retrieve their values from the user. It is easiest to define the parameters of the Open Tool with a static array, as follows. The array should be private, as it is accessed through an accessor function.

private static final IOpenToolParameter[] mParameters = new IOpenToolParameter[]
{
    new OpenToolParameter("-TEXT", "Message text", "Text to show in the messagebox (Mandatory).", true),
    new OpenToolParameter("-TITLE", "Messagebox title", "Title of the messagebox.", false)
};

The four parameters passed to the OpenToolParameter() constructor are as follows:

  1. The code of the parameter, including the initial hyphen. This code must correspond to the syntax returned by getSyntax().

  2. The user-friendly name, which is used if the user is using the graphical interface to set parameters.

  3. A descriptive help text.

  4. Whether the parameter is mandatory. This is an indication to the user.


    Note:

    Oracle Data Integrator does not enforce the mandatory flag on parameters. Your class must be able to handle any combination of parameters being provided.


You must implement the accessor function getParameters() to retrieve the parameters:

public IOpenToolParameter[] getParameters()
{
      return mParameters;
}
Implementing Informational Functions

Implement functions to return information about your Open Tool: getDescription(), getVersion(), getProvider().

public String getDescription() { return "This Open Tool displays a message box when executed."; }
public String getVersion() { return "v1.0"; }
public String getProvider() { return "My Company, Inc."; }

The getSyntax() function determines the name of the Open Tool as it is displayed in the Oracle Data Integrator graphical interface, and also the initial values of the parameter. Make sure the names of the parameters here match the names of the parameters returned by getParameters().

public String getSyntax()
{
        return "SimpleMessageBox \"-TEXT=<text message>\" \"-TITLE=<window title>\"";
}

The getIcon() method should then return paths to two appropriately sized images. It should look something like this:

public String getIcon(int pIconType)
{
    switch (pIconType)
    {
        case IOpenTool.SMALL_ICON:
            return "/com/myCompany/OpenTools/images/SimpleMessageBox_16.gif";
        case IOpenTool.BIG_ICON:
            return "/com/myCompany/OpenTools/images/SimpleMessageBox_32.gif";
        default:
            return "";
    }
}
Execution

Finally, implement the execute() method, which carries out the functionality provided by the Open Tool. In this case, a message box is shown. If you are extending the OpenToolAbstract class, use the getParameterValue() method to easily retrieve the values of parameters as they are set at runtime.


Note:

You must catch all exceptions and only raise an OpenToolExecutionException.


public void execute() throws OpenToolExecutionException
{
    try
    {
        if (getParameterValue("-TITLE") == null || getParameterValue("-TITLE").equals("")) /* title was not filled in by the user */
        {
            JOptionPane.showMessageDialog(null, (String) getParameterValue("-TEXT"),
                    "Message", JOptionPane.INFORMATION_MESSAGE);
        } else
        {
            JOptionPane.showMessageDialog(null, (String) getParameterValue("-TEXT"),
                    (String) getParameterValue("-TITLE"),
                    JOptionPane.INFORMATION_MESSAGE);
        }
    }
    /* Trap any exception and throw it as an OpenToolExecutionException */
    catch (IllegalArgumentException e)
    {
        throw new OpenToolExecutionException(e);
    }
}

Open Tools at Runtime

In general, your Open Tool class is instantiated only very briefly, and is used in the following ways.

Installation

When the user chooses to install an Open Tool, Oracle Data Integrator instantiates the class and calls the methods getDescription(), getProvider(), getIcon(), and getVersion() to retrieve information about the class.

Use in a Package

When the Open Tool is used in a package, the class is instantiated briefly to call the methods getDescription(), getProvider(), getIcon(), and getVersion(). Additionally, getSyntax() is called to retrieve the code name of the Open Tool and its default arguments. The method getParameters() is called to display the list of arguments to the user.

Execution

Each time the Open Tool is executed in a package or procedure, the class is instantiated again; it has no persistence after its execution. The execute() method is called just once.


Tip:

See also "Using Open Tools" and Open Tools SDK documentation (JavaDoc).


Alphabetical List of Oracle Data Integrator Tools

This section lists Oracle Data Integrator tools in alphabetical order.

OdiAnt

Use this command to execute an Ant buildfile.

For more details and examples of Ant buildfiles, refer to the online documentation: http://jakarta.apache.org/ant/manual/index.html

Usage

OdiAnt -BUILDFILE=<file> -LOGFILE=<file> [-TARGET=<target>]
[-D<property name>=<property value>]* [-PROJECTHELP] [-HELP]
[-VERSION] [-QUIET] [-VERBOSE] [-DEBUG] [-EMACS]
[-LOGGER=<classname>] [-LISTENER=<classname>] [-FIND=<file>]

Parameters

  • -BUILDFILE=<file> (mandatory): Ant buildfile. XML file containing the Ant commands.

  • -LOGFILE=<file> (mandatory): Use given file for logging.

  • -TARGET=<target> (optional): Target of the build process.

  • -D<property name>=<property value> (optional): List of properties with their values.

  • -PROJECTHELP (optional): Displays the help on the project.

  • -HELP (optional): Displays Ant help.

  • -VERSION (optional): Displays Ant version.

  • -QUIET (optional): Run in nonverbose mode.

  • -VERBOSE (optional): Run in verbose mode.

  • -DEBUG (optional): Prints debug information.

  • -EMACS (optional): Displays the logging information without adornments.

  • -LOGGER=<classname> (optional): Java class performing the logging.

  • -LISTENER=<classname> (optional): Adds a class instance as a listener.

  • -FIND=<file> (optional): Looks for the Ant buildfile from the root of the file system and uses it.


Examples

Download the *.html files from the directory /download/public using FTP from ftp.mycompany.com to the directory C:\temp.

Step 1: Generate the Ant buildfile:

OdiOutFile -FILE=c:\temp\ant_cmd.xml
<?xml version="1.0"?>
<project name="myproject" default="ftp" basedir="/">
    <target name="ftp">
        <ftp action="get" remotedir="/download/public"
                server="ftp.mycompany.com" userid="anonymous"
                password="me@mycompany.com">
            <fileset dir="c:\temp">
                <include name="**/*.html"/>
            </fileset>
        </ftp>
    </target>
</project>

Step 2: Run the Ant buildfile:

OdiAnt -BUILDFILE=c:\temp\ant_cmd.xml -LOGFILE=c:\temp\ant_cmd.log

OdiBeep

Use this command to play a default beep or sound file on the machine hosting the agent.

The following file formats are supported by default:

  • WAV

  • AIF

  • AU


Note:

To play other file formats, you must add the appropriate JavaSound Service Provider Interface (JavaSound SPI) to the application classpath.


Usage

OdiBeep [-FILE=<sound_file>]

Parameters

  • -FILE=<sound_file> (optional): Path and file name of the sound file to play. If not specified, the default beep sound for the machine is used.


Examples

Play the sound file c:\wav\alert.wav.

OdiBeep -FILE=c:\wav\alert.wav

OdiDeleteScen

Use this command to delete a given scenario version.

Usage

OdiDeleteScen -SCEN_NAME=<name> -SCEN_VERSION=<version>

Parameters

  • -SCEN_NAME=<name> (mandatory): Name of the scenario to delete.

  • -SCEN_VERSION=<version> (mandatory): Version of the scenario to delete.


Examples

Delete the DWH scenario in version 001.

OdiDeleteScen -SCEN_NAME=DWH -SCEN_VERSION=001

OdiEnterpriseDataQuality

Use this command to invoke an Oracle Enterprise Data Quality (Datanomic) job.


Note:

The OdiEnterpriseDataQuality tool supports Oracle Enterprise Data Quality version 8.1.6 and later.


Usage

OdiEnterpriseDataQuality "-JOB_NAME=<EDQ job name>" 
"-PROJECT_NAME=<EDQ project name>" "-HOST=<EDQ server host name>" 
"-PORT=<EDQ server JMX port>" "-USER=<EDQ user>"
"-PASSWORD=<EDQ user's password>" "-SYNCHRONOUS=<yes|no>" "-DOMAIN=-<EDQ_DOMAIN>"

Parameters

ParametersMandatoryDescription

-JOB_NAME=<EDQ job name>

Yes

Name of the Enterprise Data Quality job.

-PROJECT_NAME=<EDQ project name>

Yes

Name of the Enterprise Data Quality project.

-HOST=<EDQ server host name>

Yes

Host name of the Enterprise Data Quality server. Example: localhost

-PORT=<EDQ server JMX port>

Yes

JMX port of the Enterprise Data Quality server. Example: 9005

-USER=<EDQ user>

Yes

User name of the Enterprise Data Quality server user. Example: dnadmin

-PASSWORD=<EDQ user's password>

Yes

Password of the Enterprise Data Quality user.

-SYNCHRONOUS=<yes|no>

No

If set to Yes (default), the tool waits for the quality process to complete before returning, possibly with an error code. If set to No, the tool ends immediately with success and does not wait for the quality process to complete.

-DOMAIN=<EDQ_DOMAIN>

No

Name of the MBean domain. The default value is dndirector.


Examples

Execute the Enterprise Data Quality job CLEANSE_CUSTOMERS located in the project CUSTOMERS.

EnterpriseDataQuality "-JOB_NAME=CLEANSE_CUSTOMERS" "-PROJECT_NAME=CUSTOMERS"
"-HOST=machine.oracle.com" "-PORT=9005" "-USER=odi" "-PASSWORD=odi"
"DOMAIN=dndirector"

OdiExportAllScen

Use this command to export a group of scenarios from the connected repository.

The export files are named SCEN_<scenario name><scenario version>.xml. This command reproduces the behavior of the export feature available in Designer Navigator and Operator Navigator.

Usage

OdiExportAllScen -TODIR=<directory> [-FORCE_OVERWRITE=<yes|no>] 
[-FROM_PROJECT=<project_id>] [-FROM_FOLDER=<folder_id>]
[-FROM_PACKAGE=<package_id>] [-RECURSIVE_EXPORT=<yes|no>]
[-XML_VERSION=<1.0>] [-XML_CHARSET=<charset>]
[-JAVA_CHARSET=<charset>] [-EXPORT_MAPPING=<yes|no>]
[-EXPORT_PACK=<yes|no>] [-EXPORT_POP=<yes|no>]
[-EXPORT_TRT=<yes|no>] [-EXPORT_VAR=<yes|no>]

Parameters

Parameters | Mandatory | Description

-TODIR=<directory>

Yes

Directory into which the export files are created.

-FORCE_OVERWRITE=<yes|no>

No

If set to Yes, existing export files are overwritten without warning. The default value is No.

-FROM_PROJECT=<project_id>

No

ID of the project containing the scenarios to export. This value is the Global ID that displays in the Version tab of the project window in Studio. If this parameter is not set, scenarios from all projects are taken into account for the export.

-FROM_FOLDER=<folder_id>

No

ID of the folder containing the scenarios to export. This value is the Global ID that displays in the Version tab of the folder window in Studio. If this parameter is not set, scenarios from all folders are taken into account for the export.

-FROM_PACKAGE=<package_id>

No

ID of the source package of the scenarios to export. This value is the Global ID that displays in the Version tab of the package window in Studio. If this parameter is not set, scenarios from all components are taken into account for the export.

-RECURSIVE_EXPORT=<yes|no>

No

If set to Yes (default), all child objects (schedules) are exported with the scenarios.

-XML_VERSION=<1.0>

No

Sets the XML version shown in the XML header. The default value is 1.0.

-XML_CHARSET=<charset>

No

Encoding specified in the XML export file in the tag <?xml version="1.0" encoding="ISO-8859-1"?>. The default value is ISO-8859-1. For the list of supported encodings, see:

http://java.sun.com/j2se/1.4.2/docs/guide/intl/encoding.doc.html

-JAVA_CHARSET=<charset>

No

Target file encoding. The default value is ISO8859_1. For the list of supported encodings, see:

http://java.sun.com/j2se/1.4.2/docs/guide/intl/encoding.doc.html

-EXPORT_MAPPING=<yes|no>

No

Indicates if the mapping scenarios should be exported. The default value is No.

-EXPORT_PACK=<yes|no>

No

Indicates if the scenarios attached to packages should be exported. The default value is Yes.

-EXPORT_POP=<yes|no>

No

Indicates if the scenarios attached to mappings should be exported. The default value is No.

-EXPORT_TRT=<yes|no>

No

Indicates if the scenarios attached to procedures should be exported. The default value is No.

-EXPORT_VAR=<yes|no>

No

Indicates if the scenarios attached to variables should be exported. The default value is No.


Examples

Export all scenarios from the DW01 project of Global ID 2edb524d-eb17-42ea-8aff-399ea9b13bf3 into the /temp/ directory, with all dependent objects.

OdiExportAllScen -FROM_PROJECT=2edb524d-eb17-42ea-8aff-399ea9b13bf3 -TODIR=/temp/ -RECURSIVE_EXPORT=yes
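
As a further illustration (the folder Global ID below is hypothetical), export only the scenarios attached to packages in a given folder, without their schedules:

OdiExportAllScen -FROM_FOLDER=7a4c2e91-3b5d-4f6a-9c8e-0d1f2a3b4c5d -TODIR=/temp/
-RECURSIVE_EXPORT=no -EXPORT_PACK=yes -EXPORT_POP=no -EXPORT_TRT=no -EXPORT_VAR=no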

OdiExportEnvironmentInformation

Use this command to export the details of the technical environment into a comma separated (.csv) file into the directory of your choice. This information is required for maintenance or support purposes.

Usage

OdiExportEnvironmentInformation -TODIR=<toDir> -FILE_NAME=<FileName>
[-CHARSET=<charset>] [-SNP_INFO_REC_CODE=<row_code>]
[-MASTER_REC_CODE=<row_code>] [-WORK_REC_CODE=<row_code>]
[-AGENT_REC_CODE=<row_code>] [-TECHNO_REC_CODE=<row_code>]
[-RECORD_SEPARATOR_HEXA=<rec_sep>]
[-FIELD_SEPARATOR_HEXA=<field_sep>] [-TEXT_SEPARATOR_HEXA=<text_sep>]

Parameters

Parameters | Mandatory | Description

-TODIR=<toDir>

Yes

Target directory for the export.

-FILE_NAME=<FileName>

Yes

Name of the CSV export file. The default value is snps_tech_inf.csv.

-CHARSET=<charset>

No

Character set of the export file.

-SNP_INFO_REC_CODE=<row_code>

No

Code used to identify rows that describe the current version of Oracle Data Integrator and the current user. This code is used in the first field of the record. The default value is SUNOPSIS.

-MASTER_REC_CODE=<row_code>

No

Code for rows containing information about the master repository. The default value is MASTER.

-WORK_REC_CODE=<row_code>

No

Code for rows containing information about the work repository. The default value is WORK.

-AGENT_REC_CODE=<row_code>

No

Code for rows containing information about the various agents that are running. The default value is AGENT.

-TECHNO_REC_CODE=<row_code>

No

Code for rows containing information about the data servers, their versions, and so on. The default value is TECHNO.

-RECORD_SEPARATOR_HEXA=<rec_sep>

No

One or several characters in hexadecimal code separating lines (or records) in the file. The default value is 0D0A.

-FIELD_SEPARATOR_HEXA=<field_sep>

No

One or several characters in hexadecimal code separating the fields in a record. The default value is 2C.

-TEXT_SEPARATOR_HEXA=<text_sep>

No

Character in hexadecimal code delimiting a STRING field. The default value is 22.


Examples

Export the details of the technical environment into the /temp/snps_tech_inf.csv export file.

OdiExportEnvironmentInformation "-TODIR=/temp/"
"-FILE_NAME=snps_tech_inf.csv" "-CHARSET=ISO8859_1" 
"-SNP_INFO_REC_CODE=SUNOPSIS" "-MASTER_REC_CODE=MASTER"
"-WORK_REC_CODE=WORK" "-AGENT_REC_CODE=AGENT"
"-TECHNO_REC_CODE=TECHNO" "-RECORD_SEPARATOR_HEXA=0D0A"
"-FIELD_SEPARATOR_HEXA=2C" "-TEXT_SEPARATOR_HEXA=22"

OdiExportLog

Use this command to export the execution log into a ZIP export file.

Usage

OdiExportLog -TODIR=<toDir> [-EXPORT_TYPE=<logsToExport>]
[-ZIPFILE_NAME=<zipFileName>] [-XML_CHARSET=<charset>]
[-JAVA_CHARSET=<charset>] [-FROMDATE=<from_date>] [-TODATE=<to_date>] 
[-AGENT=<agent>] [-CONTEXT=<context>] [-STATUS=<status>] 
[-USER_FILTER=<user>] [-NAME=<sessionOrLoadPlanName>]

Parameters

Parameters | Mandatory | Description

-EXPORT_TYPE=<logsToExport>

No

Export the log of:

  • LOAD_PLAN_RUN: All Load Plan runs that match the export criteria are exported, including all sessions launched by the Load Plan runs along the child session's hierarchy.

  • SESSION: All session logs that match the export filter criteria are exported. All Load Plan sessions will be excluded when exporting the session logs.

  • ALL: All Load Plan runs and session logs that match the filter criteria are exported.

-TODIR=<toDir>

Yes

Target directory for the export.

-ZIPFILE_NAME=<zipFileName>

No

Name of the compressed file.

-XML_CHARSET=<charset>

No

Encoding specified in the export file, in the XML header tag <?xml version="1.0" encoding="ISO-8859-1"?>. The default value is ISO-8859-1. For the list of supported encodings, see:

http://java.sun.com/j2se/1.4.2/docs/guide/intl/encoding.doc.html

-JAVA_CHARSET=<charset>

No

Result file Java character encoding. The default value is ISO8859_1. For the list of supported encodings, see:

http://java.sun.com/j2se/1.4.2/docs/guide/intl/encoding.doc.html

-FROMDATE=<from_date>

No

Beginning date for the export, using the format yyyy/MM/dd hh:mm:ss. All sessions from this date are exported.

-TODATE=<to_date>

No

End date for the export, using the format yyyy/MM/dd hh:mm:ss. All sessions to this date are exported.

-AGENT=<agent>

No

Exports only sessions executed by the agent <agent>.

-CONTEXT=<context>

No

Exports only sessions executed in the context code <context>.

-STATUS=<status>

No

Exports only sessions in the specified state. Possible states are Done, Error, Queued, Running, Waiting, and Warning.

-USER_FILTER=<user>

No

Exports only sessions launched by <user>.

-NAME=<sessionOrLoadPlanName>

No

Name of the session or Load Plan to be exported.


Examples

Export and compress the log into the /temp/log2.zip export file.

OdiExportLog "-EXPORT_TYPE=ALL" "-TODIR=/temp/" "-ZIPFILE_NAME=log2.zip"
"-XML_CHARSET=ISO-8859-1" "-JAVA_CHARSET=ISO8859_1"

OdiExportMaster

Use this command to export the master repository to a directory or ZIP file. The versions and/or solutions stored in the master repository are optionally exported.

Usage

OdiExportMaster -TODIR=<toDir> [-ZIPFILE_NAME=<zipFileName>]
[-EXPORT_SOLUTIONS=<yes|no>] [-EXPORT_VERSIONS=<yes|no>]
[-XML_CHARSET=<charset>] [-JAVA_CHARSET=<charset>]

Parameters

Parameters | Mandatory | Description

-TODIR=<toDir>

Yes

Target directory for the export.

-ZIPFILE_NAME=<zipFileName>

No

Name of the compressed file.

-EXPORT_SOLUTIONS=<yes|no>

No

Exports all solutions that are stored in the repository. The default value is No.

-EXPORT_VERSIONS=<yes|no>

No

Exports all versions of objects that are stored in the repository. The default value is No.

-XML_CHARSET=<charset>

No

Encoding specified in the export file, in the XML header tag <?xml version="1.0" encoding="ISO-8859-1"?>. The default value is ISO-8859-1. For the list of supported encodings, see:

http://java.sun.com/j2se/1.4.2/docs/guide/intl/encoding.doc.html

-JAVA_CHARSET=<charset>

No

Result file Java character encoding. The default value is ISO8859_1. For the list of supported encodings, see:

http://java.sun.com/j2se/1.4.2/docs/guide/intl/encoding.doc.html


Examples

Export and compress the master repository into the export.zip file located in the /temp/ directory.

OdiExportMaster "-TODIR=/temp/" "-ZIPFILE_NAME=export.zip"
"-XML_CHARSET=ISO-8859-1" "-JAVA_CHARSET=ISO8859_1"
"-EXPORT_VERSIONS=YES"

OdiExportObject

Use this command to export an object from the current repository. This command reproduces the behavior of the export feature available in the user interface.

Usage

OdiExportObject -CLASS_NAME=<class_name> -I_OBJECT=<object_id>
[-EXPORT_DIR=<directory>] [-EXPORT_NAME=<export_name>|-FILE_NAME=<file_name>] 
[-FORCE_OVERWRITE=<yes|no>] [-RECURSIVE_EXPORT=<yes|no>] [-XML_VERSION=<1.0>]
[-XML_CHARSET=<charset>] [-JAVA_CHARSET=<charset>]

Parameters

Parameters | Mandatory | Description

-CLASS_NAME=<class_name>

Yes

Class of the object to export (see the following list of classes).

-I_OBJECT=<object_id>

Yes

Object identifier. This value is the Global ID that displays in the Version tab of the object edit window.

-FILE_NAME=<file_name>

No

Export file name. Absolute path or relative path from EXPORT_DIR.

This file name may or may not comply with the Oracle Data Integrator standard export file prefix and suffix. To comply with these standards, use the -EXPORT_NAME parameter instead. This parameter cannot be used if -EXPORT_NAME is set.

-EXPORT_DIR=<directory>

No

Directory where the object will be exported. The export file created in this directory is named based on the -FILE_NAME and -EXPORT_NAME parameters.

If -FILE_NAME or -EXPORT_NAME are not specified, the export file is automatically named <object_prefix>_<object_name>.xml. For example, a project named Datawarehouse would be exported to PRJ_Datawarehouse.xml.

-EXPORT_NAME=<export_name>

No

Export name. Use this parameter to generate an export file named <object_prefix>_<export_name>.xml. This parameter cannot be used with -FILE_NAME.

-FORCE_OVERWRITE=<yes|no>

No

If set to Yes, an existing export file with the same name is forcibly overwritten. The default value is No.

-RECURSIVE_EXPORT=<yes|no>

No

If set to Yes (default), all child objects are exported with the current object. For example, if exporting a project, all folders, KMs, and so on in this project are exported into the project export file.

-XML_VERSION=<1.0>

No

Sets the XML version that appears in the XML header. The default value is 1.0.

-XML_CHARSET=<charset>

No

Encoding specified in the XML file, in the tag <?xml version="1.0" encoding="ISO-8859-1"?>. The default value is ISO-8859-1. For the list of supported encodings, see:

http://java.sun.com/j2se/1.4.2/docs/guide/intl/encoding.doc.html

-JAVA_CHARSET=<charset>

No

Target file encoding. The default value is ISO8859_1. For the list of supported encodings, see:

http://java.sun.com/j2se/1.4.2/docs/guide/intl/encoding.doc.html


List of Classes

Object | Class Name

Column

SnpCol

Condition/Filter

SnpCond

Context

SnpContext

Data Server

SnpConnect

Datastore

SnpTable

Folder

SnpFolder

Interface

SnpPop

Language

SnpLang

Loadplan

SnpLoadPlan

Mapping

SnpMapping

Model

SnpModel

Package

SnpPackage

Physical Schema

SnpPschema

Procedure or KM

SnpTrt

Procedure or KM Option

SnpUserExit

Project

SnpProject

Reference

SnpJoin

Reusable Mapping

SnpMapping

Scenario

SnpScen

Sequence

SnpSequence

Step

SnpStep

Sub-Model

SnpSubModel

Technology

SnpTechno

User Functions

SnpUfunc

Variable

SnpVar

Version of an Object

SnpVer


Examples

Export the DW01 project of Global ID 2edb524d-eb17-42ea-8aff-399ea9b13bf3 into the /temp/dw1.xml export file, with all dependent objects.

OdiExportObject -CLASS_NAME=SnpProject
-I_OBJECT=2edb524d-eb17-42ea-8aff-399ea9b13bf3
-FILE_NAME=/temp/dw1.xml -FORCE_OVERWRITE=yes
-RECURSIVE_EXPORT=yes
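
To let Oracle Data Integrator name the file according to the standard export convention instead, the same export can be written with -EXPORT_NAME (a sketch; the export name dw1 is illustrative and, following the convention described above, should produce /temp/PRJ_dw1.xml):

OdiExportObject -CLASS_NAME=SnpProject
-I_OBJECT=2edb524d-eb17-42ea-8aff-399ea9b13bf3
-EXPORT_DIR=/temp -EXPORT_NAME=dw1
-FORCE_OVERWRITE=yes -RECURSIVE_EXPORT=yes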

OdiExportScen

Use this command to export a scenario from the current work repository.

Usage

OdiExportScen -SCEN_NAME=<scenario_name> -SCEN_VERSION=<scenario_version>
[-EXPORT_DIR=<directory>] [-FILE_NAME=<file_name>|-EXPORT_NAME=<export_name>]
[-FORCE_OVERWRITE=<yes|no>] [-RECURSIVE_EXPORT=<yes|no>] [-XML_VERSION=<1.0>]
[-XML_CHARSET=<encoding>] [-JAVA_CHARSET=<encoding>]

Parameters

Parameters | Mandatory | Description

-SCEN_NAME=<scenario_name>

Yes

Name of the scenario to be exported.

-SCEN_VERSION=<scenario_version>

Yes

Version of the scenario to be exported.

-FILE_NAME=<file_name>

No

Export file name. Absolute path or relative path from -EXPORT_DIR.

This file name may or may not comply with the Oracle Data Integrator standard export file prefix and suffix for scenarios. To comply with these standards, use the -EXPORT_NAME parameter instead. This parameter cannot be used if -EXPORT_NAME is set.

-EXPORT_DIR=<directory>

No

Directory where the scenario will be exported. The export file created in this directory is named based on the -FILE_NAME and -EXPORT_NAME parameters.

If -FILE_NAME or -EXPORT_NAME are not specified, the export file is automatically named SCEN_<scenario_name><scenario_version>.xml.

-EXPORT_NAME=<export_name>

No

Export name. Use this parameter to generate an export file named SCEN_<export_name>.xml. This parameter cannot be used with -FILE_NAME.

-FORCE_OVERWRITE=<yes|no>

No

If set to Yes, overwrites the export file if it already exists. The default value is No.

-RECURSIVE_EXPORT=<yes|no>

No

Forces the export of the objects under the scenario. The default value is Yes.

-XML_VERSION=<1.0>

No

Version specified in the generated XML file, in the tag <?xml version="1.0" encoding="ISO-8859-1"?>. The default value is 1.0.

-XML_CHARSET=<encoding>

No

Encoding specified in the XML file, in the tag <?xml version="1.0" encoding="ISO-8859-1"?>. The default value is ISO-8859-1. For the list of supported encodings, see:

http://java.sun.com/j2se/1.4.2/docs/guide/intl/encoding.doc.html

-JAVA_CHARSET=<encoding>

No

Target file encoding. The default value is ISO8859_1. For the list of supported encodings, see:

http://java.sun.com/j2se/1.4.2/docs/guide/intl/encoding.doc.html


Examples

Export the LOAD_DWH scenario in version 1 into the /temp/load_dwh.xml export file, with all dependent objects.

OdiExportScen -SCEN_NAME=LOAD_DWH -SCEN_VERSION=1
-FILE_NAME=/temp/load_dwh.xml -RECURSIVE_EXPORT=yes
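
Alternatively (an illustrative variant), use -EXPORT_NAME to follow the standard scenario export naming convention; this should generate /temp/SCEN_load_dwh.xml:

OdiExportScen -SCEN_NAME=LOAD_DWH -SCEN_VERSION=1
-EXPORT_DIR=/temp -EXPORT_NAME=load_dwh -RECURSIVE_EXPORT=yes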

OdiExportWork

Use this command to export the work repository to a directory or ZIP export file.

Usage

OdiExportWork -TODIR=<directory> [-ZIPFILE_NAME=<zipFileName>]
[-XML_CHARSET=<charset>] [-JAVA_CHARSET=<charset>]

Parameters

Parameters | Mandatory | Description

-TODIR=<directory>

Yes

Target directory for the export.

-ZIPFILE_NAME=<zipFileName>

No

Name of the compressed file.

-XML_CHARSET=<charset>

No

Encoding specified in the export file, in the XML header tag <?xml version="1.0" encoding="ISO-8859-1"?>. The default value is ISO-8859-1. For the list of supported encodings, see:

http://java.sun.com/j2se/1.4.2/docs/guide/intl/encoding.doc.html

-JAVA_CHARSET=<charset>

No

Result file Java character encoding. The default value is ISO8859_1. For the list of supported encodings, see:

http://java.sun.com/j2se/1.4.2/docs/guide/intl/encoding.doc.html


Examples

Export and compress the work repository into the /temp/workexport.zip export file.

OdiExportWork "-TODIR=/temp/" "-ZIPFILE_NAME=workexport.zip"

OdiFileAppend

Use this command to concatenate a set of files into a single file.

Usage

OdiFileAppend -FILE=<file> -TOFILE=<target_file> [-OVERWRITE=<yes|no>]
[-CASESENS=<yes|no>] [-HEADER=<n>] [-KEEP_FIRST_HEADER=<yes|no>]

Parameters

Parameters | Mandatory | Description

-FILE=<file>

Yes

Full path of the files to concatenate. Use * to specify generic characters.

Examples:

/var/tmp/*.log (all files with the log extension in the folder /var/tmp)

arch_*.lst (all files starting with arch_ and with the extension lst)

-TOFILE=<target_file>

Yes

Target file.

-OVERWRITE=<yes|no>

No

Indicates if the target file must be overwritten if it already exists. The default value is No.

-CASESENS=<yes|no>

No

Indicates if file search is case-sensitive. By default, Oracle Data Integrator searches files in uppercase (set to No).

-HEADER=<n>

No

Number of header lines to be removed from the source files before concatenation. By default, no lines are removed.

When the -HEADER parameter is omitted, the files are concatenated without being edited, and the command therefore runs faster.

-KEEP_FIRST_HEADER=<yes|no>

No

Keep the header lines of the first file during the concatenation. The default value is Yes.


Examples

Concatenate the files *.log of the folder /var/tmp into the file /home/all_files.log.

OdiFileAppend -FILE=/var/tmp/*.log -TOFILE=/home/all_files.log

Concatenate the files of the daily sales of each shop while keeping the header of the first file.

OdiFileAppend -FILE=/data/store/sales_*.dat -TOFILE=/data/all_stores.dat
-OVERWRITE=yes -HEADER=1 -KEEP_FIRST_HEADER=yes

OdiFileCopy

Use this command to copy files or folders.

Usage

OdiFileCopy -DIR=<directory> -TODIR=<target_directory> [-OVERWRITE=<yes|no>]
[-RECURSE=<yes|no>] [-CASESENS=<yes|no>]

OdiFileCopy -FILE=<file> -TOFILE=<target_file>|-TODIR=<target_directory>
[-OVERWRITE=<yes|no>] [-RECURSE=<yes|no>] [-CASESENS=<yes|no>]

Parameters

Parameters | Mandatory | Description

-DIR=<directory>

Yes if -FILE is omitted

Directory (or folder) to copy.

-FILE=<file>

Yes if -DIR is omitted

The full path of the files to copy. Use * to specify the generic character.

Examples:

/var/tmp/*.log (all files with the log extension in folder /var/tmp)

arch_*.lst (all files starting with arch_ and with the extension lst)

-TODIR=<target_directory>

Yes if -DIR is specified

Target directory for the copy.

If a directory is copied (-DIR), this parameter indicates the name of the copied directory.

If one or several files are copied (-FILE), this parameter indicates the destination directory.

-TOFILE=<target_file>

Yes if -TODIR is omitted

Destination file(s). This parameter cannot be used with parameter -DIR.

This parameter contains:

  • The name of the destination file if only one file is copied (no generic character).

  • The mask of the new name of the destination files if several files are copied.

Note that -TODIR and -TOFILE are exclusive parameters. If both are specified, only -TODIR is taken into account, and -TOFILE is ignored.

-OVERWRITE=<yes|no>

No

Indicates if the files of the folder are overwritten if they already exist. The default value is No.

-RECURSE=<yes|no>

No

Indicates if files are copied recursively when the directory contains other directories. The value No indicates that only the files within the directory are copied, not the subdirectories. The default value is Yes.

-CASESENS=<yes|no>

No

Indicates if file search is case-sensitive. By default, Oracle Data Integrator searches files in uppercase (set to No).


Examples

Copy the file hosts from the directory /etc to the directory /home.

OdiFileCopy -FILE=/etc/hosts -TOFILE=/home/hosts

Copy all *.csv files from the directory /etc to the directory /home and overwrite.

OdiFileCopy -FILE=/etc/*.csv -TODIR=/home -OVERWRITE=yes

Copy all *.csv files from the directory /etc to the directory /home while changing their extension to .txt.

OdiFileCopy -FILE=/etc/*.csv -TOFILE=/home/*.txt -OVERWRITE=yes

Copy the directory C:\odi and its subdirectories into the directory C:\Program Files\odi.

OdiFileCopy -DIR=C:\odi "-TODIR=C:\Program Files\odi" -RECURSE=yes

OdiFileDelete

Use this command to delete files or directories.

The most common uses of this tool are described in the following table, where:

  • x means the parameter is supplied

  • o means the parameter is omitted

-DIR | -FILE | -RECURSE | Behavior

x | x | x | Every file with the name or with a name matching the mask specified in -FILE is deleted from -DIR and from all of its subdirectories.

x | o | x | The directory -DIR and all of its subdirectories are deleted.

x | x | o | Every file with the name or with a name matching the mask specified in -FILE is deleted from -DIR.

x | o | o | The directory -DIR is deleted.


Usage

OdiFileDelete -DIR=<directory> -FILE=<file> [-RECURSE=<yes|no>]
[-CASESENS=<yes|no>] [-NOFILE_ERROR=<yes|no>] [-FROMDATE=<from_date>]
[-TODATE=<to_date>]

Parameters

Parameters | Mandatory | Description

-DIR=<directory>

Yes if -FILE is omitted

If -FILE is omitted, specifies the name of the directory (folder) to delete.

If -FILE is supplied, specifies the path where files should be deleted from.

-FILE=<file>

Yes if -DIR is omitted

Name or mask of file(s) to delete. If -DIR is not specified, provide the full path. Use * to specify wildcard characters.

Examples:

/var/tmp/*.log (all files with the log extension of the directory /var/tmp)

/arch_*.lst (all files starting with arch_ and with the extension lst)

-RECURSE=<yes|no>

No

If -FILE is omitted, the -RECURSE parameter has no effect: all subdirectories are implicitly deleted.

If -FILE is supplied, the -RECURSE parameter specifies if the files should be deleted from this directory and from all of its subdirectories.

The default value is Yes.

-CASESENS=<yes|no>

No

Specifies that Oracle Data Integrator should distinguish uppercase and lowercase when matching file names. The default value is No.

-NOFILE_ERROR=<yes|no>

No

Indicates that an error should be generated if the specified directory or files are not found. The default value is Yes.

-FROMDATE=<from_date>

No

All files with a modification date later than this date are deleted. Use the format yyyy/MM/dd hh:mm:ss.

The -FROMDATE value is not inclusive.

If -FROMDATE is omitted, all files with a modification date earlier than the -TODATE date are deleted.

If both -FROMDATE and -TODATE are omitted, all files matching the -FILE parameter value are deleted.

-TODATE=<to_date>

No

All files with a modification date earlier than this date are deleted. Use the format yyyy/MM/dd hh:mm:ss.

The -TODATE value is not inclusive.

If -TODATE is omitted, all files with a modification date later than the -FROMDATE date are deleted.

If both -FROMDATE and -TODATE parameters are omitted, all files matching the -FILE parameter value are deleted.



Note:

You cannot delete a file and a directory at the same time by combining the -DIR and -FILE parameters. To achieve that, you must make two calls to OdiFileDelete.


Examples

Delete the file my_data.dat from the directory c:\data\input, generating an error if the file or directory is missing.

OdiFileDelete -FILE=c:\data\input\my_data.dat -NOFILE_ERROR=yes

Delete all .txt files from the bin directory, but not .TXT files.

OdiFileDelete "-FILE=c:\Program Files\odi\bin\*.txt" -CASESENS=yes

This statement has the same effect:

OdiFileDelete "-DIR=c:\Program Files\odi\bin" "-FILE=*.txt" -CASESENS=yes

Delete the directory /bin/usr/nothingToDoHere.

OdiFileDelete "-DIR=/bin/usr/nothingToDoHere"

Delete all files under the C:\temp directory whose modification time is between 2008/10/01 00:00:00 and 2008/10/31 22:59:00, where the boundary dates are not inclusive.

OdiFileDelete -DIR=C:\temp -FILE=* -NOFILE_ERROR=NO "-FROMDATE=2008/10/01 00:00:00" "-TODATE=2008/10/31 22:59:00"

Delete all files under the C:\temp directory whose modification time is earlier than 2008/10/31 17:00:00.

OdiFileDelete -DIR=C:\temp -FILE=* -NOFILE_ERROR=YES "-TODATE=2008/10/31 17:00:00"

Delete all files under the C:\temp directory whose modification time is later than 2008/10/01 08:00:00.

OdiFileDelete -DIR=C:\temp -FILE=* -NOFILE_ERROR=NO "-FROMDATE=2008/10/01 08:00:00"

OdiFileMove

Use this command to move or rename files or a directory.

Usage

OdiFileMove -FILE=<file> -TODIR=<target_directory> -TOFILE=<target_file>
[-OVERWRITE=<yes|no>] [-RECURSE=<yes|no>] [-CASESENS=<yes|no>]

OdiFileMove -DIR=<directory> -TODIR=<target_directory> [-OVERWRITE=<yes|no>]
[-RECURSE=<yes|no>] [-CASESENS=<yes|no>]

Parameters

Parameters | Mandatory | Description

-DIR=<directory>

Yes if -FILE is omitted

Directory (or folder) to move or rename.

-FILE=<file>

Yes if -DIR is omitted

Full path of the file(s) to move or rename. Use * for generic characters.

Examples:

/var/tmp/*.log (all files with the log extension in the directory /var/tmp)

arch_*.lst (all files starting with arch_ and with the extension lst)

-TODIR=<target_directory>

Yes if -DIR is specified

Target directory of the move.

If a directory is moved (-DIR), this parameter indicates the new name of the directory.

If a file or several files are moved (-FILE), this parameter indicates the target directory.

-TOFILE=<target_file>

Yes if -TODIR is omitted

Target file(s). This parameter cannot be used with parameter -DIR.

This parameter is:

  • The new name of the target file if one single file is moved (no generic character).

  • The mask of the new file names if several files are moved.

-OVERWRITE=<yes|no>

No

Indicates if the files or directory are overwritten if they exist. The default value is No.

-RECURSE=<yes|no>

No

Indicates if files are moved recursively when the directory contains other directories. The value No indicates that only files contained in the directory to move (not the subdirectories) are moved. The default value is Yes.

-CASESENS=<yes|no>

No

Indicates if file search is case-sensitive. By default, Oracle Data Integrator searches files in uppercase (set to No).


Examples

Rename the hosts file to hosts.old.

OdiFileMove -FILE=/etc/hosts -TOFILE=/etc/hosts.old

Move the file hosts from the directory /etc to the directory /home/odi.

OdiFileMove -FILE=/etc/hosts -TOFILE=/home/odi/hosts

Move all files *.csv from directory /etc to directory /home/odi with overwrite.

OdiFileMove -FILE=/etc/*.csv -TODIR=/home/odi -OVERWRITE=yes

Move all *.csv files from directory /etc to directory /home/odi and change their extension to .txt.

OdiFileMove -FILE=/etc/*.csv -TOFILE=/home/odi/*.txt -OVERWRITE=yes

Rename the directory C:\odi to C:\odi_is_wonderful.

OdiFileMove -DIR=C:\odi -TODIR=C:\odi_is_wonderful

Move the directory C:\odi and its subfolders into the directory C:\Program Files\odi.

OdiFileMove -DIR=C:\odi "-TODIR=C:\Program Files\odi" -RECURSE=yes

OdiFileWait

Use this command to manage file events. This command regularly scans a directory and waits for a number of files matching a mask to appear, until a given timeout is reached. When the specified files are found, an action on these files is triggered.

Usage

OdiFileWait -DIR=<directory> -PATTERN=<pattern>
[-ACTION=<DELETE|COPY|MOVE|APPEND|ZIP|NONE>] [-TODIR=<target_directory>]
[-TOFILE=<target_file>] [-OVERWRITE=<yes|no>] [-CASESENS=<yes|no>]
[-FILECOUNT=<n>] [-TIMEOUT=<n>] [-POLLINT=<n>] [-HEADER=<n>]
[-KEEP_FIRST_HEADER=<yes|no>] [-NOFILE_ERROR=<yes|no>]

Parameters

Parameters | Mandatory | Description

-ACTION=

<DELETE|COPY|MOVE|APPEND|ZIP|NONE>

No

Action taken on the files found:

DELETE: Delete the files found.

COPY: Copy the files found into the directory -TODIR.

MOVE: Move or rename the files found into folder -TODIR by naming them as specified by -TOFILE.

APPEND: Concatenate the files found and create a result file -TOFILE. The source files are deleted.

ZIP: Compress the files found and store them in ZIP file -TOFILE.

NONE (default): No action is performed.

-DIR=<directory>

Yes

Directory (or folder) to scan.

-PATTERN=<pattern>

Yes

Mask of file names to scan. Use * to specify the generic characters.

Examples:

*.log (all files with the log extension)

arch_*.lst (all files starting with arch_ and with the extension lst)

-TODIR=<target_directory>

No

Target directory of the action. When the action is:

COPY: Directory where the files are copied.

MOVE: Directory where the files are moved.

-TOFILE=<target_file>

No

Destination file(s). When the action is:

MOVE: Renaming mask of the moved files.

APPEND: Name of the file resulting from the concatenation.

ZIP: Name of the resulting ZIP file.

COPY: Renaming mask of the copied files.

Renaming rules:

  • Any alphanumeric character in <target_file> replaces the character at the same position in the original file name.

  • A ? in -TOFILE keeps the character at the same position in the original file name.

  • A * in -TOFILE stands for all remaining characters of the original file name.

-OVERWRITE=<yes|no>

No

Indicates if the destination file(s) will be overwritten if they exist. The default value is No.

Note that if this option is used with APPEND, the target file will only contain the contents of the latest file processed.

-CASESENS=<yes|no>

No

Indicates if file search is case-sensitive. By default, Oracle Data Integrator searches files in uppercase (set to No).

-FILECOUNT=<n>

No

Maximum number of files to wait for (the default value is 0). If this number is reached, the command ends.

The value 0 indicates that Oracle Data Integrator waits for all files until the timeout is reached.

If this parameter is 0 and the timeout is also 0, this parameter is then forced implicitly to 1.

-TIMEOUT=<n>

No

Maximum waiting time in milliseconds (the default value is 0).

If this timeout is reached, the command ends even if the number of files specified in -FILECOUNT has not been found.

The value 0 specifies an infinite waiting time (wait until the number of files specified in -FILECOUNT is reached).

-POLLINT=<n>

No

Interval in milliseconds between two searches for new files. The default value is 1000 (1 second), which means that Oracle Data Integrator looks for new files every second. Files written while OdiFileWait is running are taken into account only after being closed (file size unchanged) during this interval.

-HEADER=<n>

No

This parameter is valid only for the APPEND action.

Number of header lines to suppress from the files before concatenation. The default value is 0 (no processing).

-KEEP_FIRST_HEADER=<yes|no>

No

This parameter is valid only for the APPEND action.

Keeps the header lines of the first file during the concatenation. The default value is Yes.

-NOFILE_ERROR=<yes|no>

No

Indicates the behavior if no file is found.

The default value is No, which means that no error is generated if no file is found.


Examples

Wait indefinitely for file flag.txt in directory c:\events and proceed when this file is detected.

OdiFileWait -ACTION=NONE -DIR=c:\events -PATTERN=flag.txt -FILECOUNT=1
-TIMEOUT=0 -POLLINT=1000

Wait indefinitely for file flag.txt in directory c:\events and suppress this file when it is detected.

OdiFileWait -ACTION=DELETE -DIR=c:\events -PATTERN=flag.txt -FILECOUNT=1
-TIMEOUT=0 -POLLINT=1000

Wait up to five minutes for the sales files *.dat in directory c:\sales_in, scanning every second, then concatenate them into the file sales.dat in directory C:\sales_ok. Keep the header of the first file.

OdiFileWait -ACTION=APPEND -DIR=c:\sales_in -PATTERN=*.dat
-TOFILE=c:\sales_ok\sales.dat -FILECOUNT=0 -TIMEOUT=300000 -POLLINT=1000
-HEADER=1 -KEEP_FIRST_HEADER=yes -OVERWRITE=yes

Wait up to five minutes for the sales files *.dat in directory c:\sales_in, scanning every second, then copy these files into directory C:\sales_ok. Do not overwrite.

OdiFileWait -ACTION=COPY -DIR=c:\sales_in -PATTERN=*.dat -TODIR=c:\sales_ok
-FILECOUNT=0 -TIMEOUT=300000 -POLLINT=1000 -OVERWRITE=no

Wait up to five minutes for the sales files *.dat in directory c:\sales_in, scanning every second, then archive these files into a ZIP file.

OdiFileWait -ACTION=ZIP -DIR=c:\sales_in -PATTERN=*.dat
-TOFILE=c:\sales_ok\sales.zip -FILECOUNT=0 -TIMEOUT=300000
-POLLINT=1000 -OVERWRITE=yes

Wait up to five minutes for the sales files *.dat in directory c:\sales_in, scanning every second, then move these files into directory C:\sales_ok. Do not overwrite. Append .bak to the file names.

OdiFileWait -ACTION=MOVE -DIR=c:\sales_in -PATTERN=*.dat
-TODIR=c:\sales_ok -TOFILE=*.bak -FILECOUNT=0 -TIMEOUT=300000
-POLLINT=1000 -OVERWRITE=no

OdiFtp

Use this command to connect to a remote system using the FTP protocol and perform standard FTP commands on it. Trace from the script is recorded against the task representing the OdiFtp step in Operator Navigator.

Usage

OdiFtp -HOST=<ftp server host name> -USER=<ftp user>
[-PASSWORD=<ftp user password>] -REMOTE_DIR=<remote dir on ftp host>
-LOCAL_DIR=<local dir> [-PASSIVE_MODE=<yes|no>] [-TIMEOUT=<time in seconds>]
[-STOP_ON_FTP_ERROR=<yes|no>] -COMMAND=<command>

Parameters

Parameters | Mandatory | Description

-HOST=<ftp server host name>

Yes

Host name of the FTP server.

-USER=<ftp user>

Yes

User on the FTP server.

-PASSWORD=<ftp user password>

No

Password of the FTP user.

-REMOTE_DIR=<remote dir on ftp host>

Yes

Directory path on the remote FTP host.

-LOCAL_DIR=<local dir>

Yes

Directory path on the local machine.

-PASSIVE_MODE=<yes|no>

No

If set to No, the FTP session uses Active Mode. The default value is Yes, which means the session runs in passive mode.

-TIMEOUT=<time in seconds>

No

Time in seconds after which the socket connection times out.

-STOP_ON_FTP_ERROR=<yes|no>

No

If set to Yes (default), the step stops when an FTP error occurs instead of running to completion.

-COMMAND=<command>

Yes

Raw FTP command to execute. For a multiline command, pass the whole command as raw text after the OdiFtp line without the -COMMAND parameter.

Supported commands:

APPE, CDUP, CWD, DELE, LIST, MKD, NLST, PWD, QUIT, RETR, RMD, RNFR, RNTO, SIZE, STOR


Examples

Execute a script on a remote host that creates a directory, changes into it, uploads a file into it, and checks the file's size. The script then appends another file, checks the new size, and renames the file to dailyData.csv. The -STOP_ON_FTP_ERROR parameter is set to No so that the script continues even if the directory already exists.

OdiFtp -HOST=machine.oracle.com -USER=odiftpuser -PASSWORD=<password>
-LOCAL_DIR=/tmp -REMOTE_DIR=c:\temp -PASSIVE_MODE=YES -STOP_ON_FTP_ERROR=No
MKD dataDir
CWD dataDir
STOR customers.csv
SIZE customers.csv
APPE new_customers.csv customers.csv
SIZE customers.csv
RNFR customers.csv
RNTO dailyData.csv

OdiFtpGet

Use this command to download a file from an FTP server.

Usage

OdiFtpGet -HOST=<ftp server host name> -USER=<ftp user>
[-PASSWORD=<ftp user password>] -REMOTE_DIR=<remote dir on ftp host>
[-REMOTE_FILE=<file name under the -REMOTE_DIR>] -LOCAL_DIR=<local dir>
[-LOCAL_FILE=<file name under the -LOCAL_DIR>] [-PASSIVE_MODE=<yes|no>]
[-TIMEOUT=<time in seconds>]

Parameters

Parameters | Mandatory | Description

-HOST=<host name of the ftp server>

Yes

Host name of the FTP server.

-USER=<ftp user>

Yes

User on the FTP server.

-PASSWORD=<password of the ftp user>

No

Password of the FTP user.

-REMOTE_DIR=<dir on the ftp host>

Yes

Directory path on the remote FTP host.

-REMOTE_FILE=<file name under -REMOTE DIR>

No

File name under the directory specified in the -REMOTE_DIR argument. If this argument is missing, the file is copied with the -LOCAL_FILE file name. If the -LOCAL_FILE argument is also missing, the -REMOTE_DIR is copied recursively to the -LOCAL_DIR.

-LOCAL_DIR=<local dir path>

Yes

Directory path on the local machine.

-LOCAL_FILE=<local file>

No

File name under the directory specified in the -LOCAL_DIR argument. If this argument is missing, all files and directories under the -REMOTE_DIR are copied recursively to the -LOCAL_DIR.

To filter the files to be copied, use * to specify the generic characters.

Examples:

  • *.log (all files with the log extension)

  • arch_*.lst (all files starting with arch_ and with the extension lst)

-PASSIVE_MODE=<yes|no>

No

If set to No, the FTP session uses Active Mode. The default value is Yes, which means the session runs in passive mode.

-TIMEOUT=<time in seconds>

No

The time in seconds after which the socket connection times out.


Examples

Copy the remote directory /test_copy555 on the FTP server recursively to the local directory C:\temp\test_copy.

OdiFtpGet -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password> -LOCAL_DIR=C:\temp\test_copy -REMOTE_DIR=/test_copy555

Copy all files matching the Sales*.txt pattern under the remote directory / on the FTP server to the local directory C:\temp\ using Active Mode for the FTP connection.

OdiFtpGet -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password> -LOCAL_DIR=C:\temp -LOCAL_FILE=Sales*.txt -REMOTE_DIR=/ -PASSIVE_MODE=NO
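
Download a single remote file and rename it locally (file names are illustrative):

OdiFtpGet -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password> -REMOTE_DIR=/ -REMOTE_FILE=Sales1.txt -LOCAL_DIR=C:\temp -LOCAL_FILE=DailySales.txt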

OdiFtpPut

Use this command to upload a local file to an FTP server.

Usage

OdiFtpPut -HOST=<ftp server host name> -USER=<ftp user>
[-PASSWORD=<ftp user password>] -REMOTE_DIR=<remote dir on ftp host>
[-REMOTE_FILE=<file name under the -REMOTE_DIR>] -LOCAL_DIR=<local dir>
[-LOCAL_FILE=<file name under the -LOCAL_DIR>] [-PASSIVE_MODE=<yes|no>]
[-TIMEOUT=<time in seconds>]

Parameters

Parameters | Mandatory | Description

-HOST=<host name of the ftp server>

Yes

Host name of the FTP server.

-USER=<ftp user>

Yes

User on the FTP server.

-PASSWORD=<password of the ftp user>

No

Password of the FTP user.

-REMOTE_DIR=<dir on the ftp host>

Yes

Directory path on the remote FTP host.

-REMOTE_FILE=<file name under -REMOTE DIR>

No

File name under the directory specified in the -REMOTE_DIR argument. If this argument is missing, the file is copied with the -LOCAL_FILE file name. If the -LOCAL_FILE argument is also missing, the -LOCAL_DIR is copied recursively to the -REMOTE_DIR.

-LOCAL_DIR=<local dir path>

Yes

Directory path on the local machine.

-LOCAL_FILE=<local file>

No

File name under the directory specified in the -LOCAL_DIR argument. If this argument is missing, all files and directories under the -LOCAL_DIR are copied recursively to the -REMOTE_DIR.

To filter the files to be copied, use * to specify the generic characters.

Examples:

  • *.log (all files with the log extension)

  • arch_*.lst (all files starting with arch_ and with the extension lst)

-PASSIVE_MODE=<yes|no>

No

If set to No, the FTP session uses Active Mode. The default value is Yes, which means the session runs in passive mode.

-TIMEOUT=<time in seconds>

No

The time in seconds after which the socket connection times out.


Examples

Copy the local directory C:\temp\test_copy recursively to the remote directory /test_copy555 on the FTP server.

OdiFtpPut -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password>
-LOCAL_DIR=C:\temp\test_copy -REMOTE_DIR=/test_copy555

Copy all files matching the Sales*.txt pattern under the local directory C:\temp\ to the remote directory / on the FTP server.

OdiFtpPut -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password> -LOCAL_DIR=C:\temp -LOCAL_FILE=Sales*.txt -REMOTE_DIR=/

Copy the Sales1.txt file under the local directory C:\temp\ to the remote directory / on the FTP server as a Sample1.txt file.

OdiFtpPut -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password> -LOCAL_DIR=C:\temp -LOCAL_FILE=Sales1.txt -REMOTE_DIR=/ -REMOTE_FILE=Sample1.txt

OdiGenerateAllScen

Use this command to generate a set of scenarios from design-time components (Packages, Mappings, Procedures, or Variables) contained in a folder or project, filtered by markers.

Usage

OdiGenerateAllScen -PROJECT=<project_id> [-FOLDER=<folder_id>]
[-MODE=<REPLACE|CREATE>] [-GRPMARKER=<marker_group_code>]
[-MARKER=<marker_code>] [-MATERIALIZED=<yes|no>]
[-GENERATE_MAP=<yes|no>] [-GENERATE_PACK=<yes|no>]
[-GENERATE_POP=<yes|no>] [-GENERATE_TRT=<yes|no>]
[-GENERATE_VAR=<yes|no>]

Parameters

Parameters | Mandatory | Description

-PROJECT=<project_id>

Yes

ID of the Project containing the components to generate scenarios for.

-FOLDER=<folder_id>

No

ID of the Folder containing the components to generate scenarios for.

-MODE=<REPLACE|CREATE>

No

Scenario generation mode:

  • REPLACE (default): Causes the last scenario generated for the component to be replaced by the new one generated, with no change of name or version. Any schedules linked to this scenario are deleted.

    If no scenario exists, a new one is generated.

  • CREATE: Creates a new scenario with the same name as the latest scenario generated for the component, with the version number automatically incremented (if the latest version is an integer) or set to the current date (if the latest version is not an integer).

    If no scenario has been created for the component, a first version of the scenario is automatically created.

    New scenarios are named after the component according to the Scenario Naming Convention user parameter.

-GRPMARKER=<marker_group_code>

No

Group containing the marker used to filter the components for which scenarios must be generated.

When -GRPMARKER and -MARKER are specified, scenarios will be (re-)generated only for components flagged with the marker identified by the marker code and the marker group code.

-MARKER=<marker_code>

No

Marker used to filter the components for which scenarios must be generated.

When -GRPMARKER and -MARKER are specified, scenarios will be (re-)generated only for components flagged with the marker identified by the marker code and the marker group code.

-MATERIALIZED=<yes|no>

No

Specifies whether scenarios should be generated as if all underlying objects are materialized. The default value is No.

-GENERATE_MAP=<yes|no>

No

Specifies whether scenarios should be generated from the mapping. The default value is No.

-GENERATE_PACK=<yes|no>

No

Specifies whether scenarios attached to packages should be (re-)generated. The default value is Yes.

-GENERATE_POP=<yes|no>

No

Specifies whether scenarios attached to mappings should be (re-)generated. The default value is No.

-GENERATE_TRT=<yes|no>

No

Specifies whether scenarios attached to procedures should be (re-)generated. The default value is No.

-GENERATE_VAR=<yes|no>

No

Specifies whether scenarios attached to variables should be (re-)generated. The default value is No.


Examples

Generate all scenarios in the project whose ID is 1003 for the current repository.

OdiGenerateAllScen -PROJECT=1003
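
Assuming a marker READY in the marker group DEV (both hypothetical), generate new scenario versions only for the components of the same project flagged with that marker:

OdiGenerateAllScen -PROJECT=1003 -MODE=CREATE -GRPMARKER=DEV -MARKER=READY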

OdiImportObject

Use this command to import the contents of an export file into a repository. This command reproduces the behavior of the import feature available from the user interface.

Use caution when using this tool. It may work incorrectly when importing objects that depend on objects that do not exist in the repository. It is recommended that you use this API for importing high-level objects (projects, models, and so on).


WARNING:

The import type and the order in which objects are imported into a repository should be carefully specified. Refer to Chapter 20, "Exporting and Importing," for more information on import.


Usage

OdiImportObject -FILE_NAME=<FileName> [-WORK_REP_NAME=<workRepositoryName>]
-IMPORT_MODE=<DUPLICATION|SYNONYM_INSERT|SYNONYM_UPDATE|SYNONYM_INSERT_UPDATE>
[-IMPORT_SCHEDULE=<yes|no>] [-UPGRADE_KEY=<upgradeKey>]

Parameters

Parameters | Mandatory | Description

-FILE_NAME=<FileName>

Yes

Name of the XML export file to import.

-WORK_REP_NAME=<workRepositoryName>

No

Name of the work repository into which the object must be imported. This work repository must be defined in the connected master repository. If this parameter is not specified, the object is imported into the current master or work repository.

-IMPORT_MODE=<DUPLICATION|SYNONYM_INSERT|SYNONYM_UPDATE|SYNONYM_INSERT_UPDATE>

Yes

Import mode for the object. The default value is DUPLICATION. For more information about import types, see "Import Types".

-IMPORT_SCHEDULE=<yes|no>

No

If the selected file is a scenario export, imports the schedules contained in the scenario export file. The default value is No.

-UPGRADE_KEY=<upgradeKey>

No

Upgrade key to import repository objects from earlier versions of Oracle Data Integrator (pre-12c).


Examples

Import the /temp/DW01.xml export file (a project) into the WORKREP work repository using DUPLICATION mode.

OdiImportObject -FILE_NAME=/temp/DW01.xml -WORK_REP_NAME=WORKREP
-IMPORT_MODE=DUPLICATION

OdiImportScen

Use this command to import a scenario into the current work repository from an export file.

Usage

OdiImportScen -FILE_NAME=<FileName>
[-IMPORT_MODE=<DUPLICATION|SYNONYM_INSERT|SYNONYM_UPDATE|SYNONYM_INSERT_UPDATE>]
[-IMPORT_SCHEDULE=<yes|no>] [-FOLDER=<parentFolderGlobalId>]
[-UPGRADE_KEY=<upgradeKey>]

Parameters

Parameters | Mandatory | Description

-FILE_NAME=<FileName>

Yes

Name of the export file.

-IMPORT_MODE=<DUPLICATION|SYNONYM_INSERT|SYNONYM_UPDATE|SYNONYM_INSERT_UPDATE>

No

Import mode of the scenario. The default value is DUPLICATION. For more information about import types, see "Import Types".

-IMPORT_SCHEDULE=<yes|no>

No

Imports the schedules contained in the scenario export file. The default value is No.

-FOLDER=<parentFolderGlobalId>

No

Global ID of the parent scenario folder.

-UPGRADE_KEY=<upgradeKey>

No

Upgrade key to import repository objects from earlier versions of Oracle Data Integrator (pre-12c).


Examples

Import the /temp/load_dwh.xml export file (a scenario) into the current work repository using DUPLICATION mode.

OdiImportScen -FILE_NAME=/temp/load_dwh.xml -IMPORT_MODE=DUPLICATION 
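
Import the same export file together with the schedules it contains, placing the scenario in a scenario folder (the folder Global ID is hypothetical):

OdiImportScen -FILE_NAME=/temp/load_dwh.xml -IMPORT_MODE=DUPLICATION
-IMPORT_SCHEDULE=yes -FOLDER=5c1d9a72-4e8b-4f3a-b6d0-2a7e9c4f1b3d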

OdiInvokeWebService


Note:

This tool replaces the OdiExecuteWebService tool.


Use this command to invoke a web service over HTTP/HTTPS and write the response to an XML file.

This tool invokes a specific operation on a port of a web service whose description file (WSDL) URL is provided.

If this operation requires a web service request, it is provided either in a request file, or directly written out in the tool call (<XML Request>). This request file can have two different formats (XML, which corresponds to the XML body only, or SOAP, which corresponds to the full-formed SOAP envelope including a SOAP header and body) specified in the -RESPONSE_FILE_FORMAT parameter. The response of the web service request is written to an XML file that can be processed afterwards in Oracle Data Integrator. If the web service operation is one-way and does not return any response, no response file is generated.


Note:

This tool cannot be executed in a command line with startcmd.


Usage

OdiInvokeWebService -URL=<url> -PORT_TYPE=<port_type> -OPERATION=<operation>
[<XML Request>] [-REQUEST_FILE=<xml_request_file>]
[-RESPONSE_MODE=<NO_FILE|NEW_FILE|FILE_APPEND>]
[-RESPONSE_FILE=<xml_response_file>] [-RESPONSE_XML_ENCODING=<charset>]
[-RESPONSE_FILE_CHARSET=<charset>] [-RESPONSE_FILE_FORMAT=<XML|SOAP>]
[-HTTP_USER=<user>]
[-HTTP_PASS=<password>] [-TIMEOUT=<timeout>]

Parameters

Parameters | Mandatory | Description

-URL=<url>

Yes

URL of the Web Service Description File (WSDL) file describing the web service.

-PORT_TYPE=<port_type>

Yes

Name of the WSDL port type to invoke.

-OPERATION=<operation>

Yes

Name of the web service operation to invoke.

<XML Request>

No

Request message in SOAP (Simple Object Access Protocol) format. This message should be provided on the line immediately following the OdiInvokeWebService call.

The request can alternately be passed through a file whose location is provided with the -REQUEST_FILE parameter.

-REQUEST_FILE=<xml_request_file>

No

Location of the XML file containing the request message in SOAP format.

The request can alternately be directly written out in the tool call (<XML Request>).

-RESPONSE_MODE=<NO_FILE|NEW_FILE|FILE_APPEND>

No

Generation mode for the response file. This parameter takes the following values:

  • NO_FILE (default): No response file is generated.

  • NEW_FILE: A new response file is generated. If the file already exists, it is overwritten.

  • FILE_APPEND: The response is appended to the file. If the file does not exist, it is created.

-RESPONSE_FILE=<file>

Depends

The name of the result file to write. Mandatory if -RESPONSE_MODE is NEW_FILE or FILE_APPEND.

-RESPONSE_FILE_CHARSET=<charset>

Depends

Response file character encoding. See the following table. Mandatory if -RESPONSE_MODE is NEW_FILE or FILE_APPEND.

-RESPONSE_XML_ENCODING=<charset>

Depends

Character encoding that will be indicated in the XML declaration header of the response file. See the following table. Mandatory if -RESPONSE_MODE is not NO_FILE.

-RESPONSE_FILE_FORMAT=<XML|SOAP>

No

Format of the request and response file.

  • If XML is selected (default), the request is processed as a SOAP body. The tool adds a default SOAP header and envelope content to this body before sending the request. The SOAP envelope and headers are stripped from the response, and only the response body is written to the response file.

  • If SOAP is selected, the request is processed as a full-formed SOAP envelope and is sent as is. The response is also written to the response file with no processing.

-HTTP_USER=<user>

No

User account authenticating on the HTTP server.

-HTTP_PASS=<password>

No

Password of the HTTP user.

-TIMEOUT=<timeout>

No

The web service request waits for a reply for this amount of time before considering that the server will not provide a response and an error is produced. The default value is 15 seconds.


The following table lists the most common XML/Java character encoding schemes. For a more complete list, see:

http://java.sun.com/j2se/1.4.2/docs/guide/intl/encoding.doc.html

XML Charset | Java Charset

US-ASCII

ASCII

UTF-8

UTF8

UTF-16

UTF-16

ISO-8859-1

ISO8859_1


Examples

The following web service call returns the capital city for a given country (the ISO country code is sent in the request). Note that the request and response format, as well as the port and operations available, are defined in the WSDL passed in the URL parameter.

OdiInvokeWebService
-URL=http://www.oorsprong.org/websamples.countryinfo/CountryInfoService.wso?WSDL
-PORT_TYPE=CountryInfoServiceSoapType -OPERATION=CapitalCity
-RESPONSE_MODE=NEW_FILE -RESPONSE_XML_ENCODING=ISO-8859-1
"-RESPONSE_FILE=/temp/result.xml" -RESPONSE_FILE_CHARSET=ISO8859_1
-RESPONSE_FILE_FORMAT=XML
<CapitalCityRequest>
<sCountryISOCode>US</sCountryISOCode>
</CapitalCityRequest>

The generated /temp/result.xml file contains the following:

<CapitalCityResponse>
<m:CapitalCityResponse>
<m:CapitalCityResult>Washington</m:CapitalCityResult>
</m:CapitalCityResponse>
</CapitalCityResponse>

Packages

Oracle Data Integrator provides a special graphical interface for calling OdiInvokeWebService in packages. See Chapter 16, "Using Web Services," for more information.

OdiKillAgent

Use this command to stop a standalone agent.

Java EE Agents deployed in an application server cannot be stopped using this tool and must be stopped using the application server utilities.

Usage

OdiKillAgent (-PORT=<TCP/IP Port>|-NAME=<physical_agent_name>)
[-IMMEDIATE=<yes|no>] [-MAX_WAIT=<timeout>]

Parameters

Parameters | Mandatory | Description

-PORT=<TCP/IP Port>

Yes if -NAME is omitted

If this parameter is specified, the agent running on the local machine with the specified port is stopped.

-NAME=<physical_agent_name>

Yes if -PORT is omitted

If this parameter is specified, the physical agent whose name is provided is stopped. This agent may be a local or remote agent. It must be declared in the master repository.

-IMMEDIATE=<yes|no>

No

If this parameter is set to Yes, the agent is stopped without waiting for its running sessions to complete. If this parameter is set to No, the agent is stopped after its running sessions reach completion or after the -MAX_WAIT timeout is reached. The default value is No.

-MAX_WAIT=<timeout>

No

This parameter can be used when -IMMEDIATE is set to No. The parameter defines a timeout in milliseconds after which the agent is stopped regardless of the running sessions. The default value is 0, which means no timeout and the agent is stopped after its running sessions complete.


Examples

Stop the ODI_AGT_001 physical agent immediately.

OdiKillAgent -NAME=ODI_AGT_001 -IMMEDIATE=yes
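
Stop the agent listening on port 20910 of the local machine (a sample port number), letting running sessions finish but waiting at most five minutes:

OdiKillAgent -PORT=20910 -IMMEDIATE=no -MAX_WAIT=300000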

OdiManageOggProcess

Use this command to start and stop Oracle GoldenGate processes.

The -NB_PROCESS parameter specifies the number of processes on which to perform the operation and applies only to Oracle GoldenGate Delivery processes.

If -NB_PROCESS is not specified, the name of the physical process is derived from the logical process. For example, if logical schema R1_LS maps to physical process R1, an Oracle GoldenGate process named R1 is started or stopped.

If -NB_PROCESS is specified with a positive value, sequence numbers are appended to the process and all processes are started or stopped with the new name. For example, if the value is set to 3, and logical schema R2_LS maps to physical process R2, processes R21, R22 and R23 are started or stopped.

If Start Journal is used to start the CDC (Changed Data Capture) process with Oracle GoldenGate JKMs (Journalizing Knowledge Modules), Oracle Data Integrator generates the Oracle GoldenGate Delivery process with the additional sequence number in the process name. For example, if Delivery process RP is used for the Start Journal action, Start Journal generates an Oracle GoldenGate Delivery process named RP1. To stop and start the process using the OdiManageOggProcess tool, set -NB_PROCESS to 1. The maximum value of -NB_PROCESS is the value of the -NB_APPLY_PROCESS parameter of the JKM within the model.

Usage

OdiManageOggProcess -OPERATION=<start|stop>
-PROCESS_LSCHEMA=<OGG logical schema> [-NB_PROCESS=<number of processes>]

Parameters

Parameters | Mandatory | Description

-OPERATION=<start|stop>

Yes

Operation to perform on the process.

-PROCESS_LSCHEMA=<OGG logical schema>

Yes

Logical schema of the process.

-NB_PROCESS=<number of processes>

No

Number of processes on which to perform the operation.


Examples

Start Oracle GoldenGate process R1, which maps to logical schema R1_LS.

OdiManageOggProcess "-OPERATION=START" "-PROCESS_LSCHEMA=R1_LS

Start Oracle GoldenGate processes R21, R22, and R23.

OdiManageOggProcess "-OPERATION=START" "-PROCESS_LSCHEMA=R2_LS" "-NB_PROCESS=3"

OdiMkDir

Use this command to create a directory structure.

If the parent directory does not exist, this command recursively creates the parent directories.

Usage

OdiMkDir -DIR=<directory>

Parameters

Parameters | Mandatory | Description

-DIR=<directory>

Yes

Directory (or folder) to create.


Examples

Create the directory odi in C:\temp. If C:\temp does not exist, it is created.

OdiMkDir "-DIR=C:\temp\odi"

OdiOSCommand

Use this command to invoke an operating system command shell to carry out a command, redirecting its standard output and standard error to files.

The following operating systems are supported:

  • Windows operating systems, using cmd

  • POSIX-compliant operating systems, using sh

The following operating systems are not supported:

  • Mac OS

Usage

OdiOSCommand [-OUT_FILE=<stdout_file>] [-ERR_FILE=<stderr_file>]
[-FILE_APPEND=<yes|no>] [-WORKING_DIR=<workingdir>] [-SYNCHRONOUS=<yes|no>]
[CR/LF <command> | -COMMAND=<command>]

Parameters

Parameters | Mandatory | Description

-COMMAND=<command>

Yes

Command to execute. For a multiline command, pass the whole command as raw text after the OdiOSCommand line without the -COMMAND parameter.

-OUT_FILE=<stdout_file>

No

Absolute name of the file to redirect standard output to.

-ERR_FILE=<stderr_file>

No

Absolute name of the file to redirect standard error to.

-FILE_APPEND=<yes|no>

No

Whether to append to the output files, rather than overwriting them. The default value is Yes.

-WORKING_DIR=<workingdir>

No

Directory in which the command is executed.

-SYNCHRONOUS=<yes|no>

No

If set to Yes (default), the session waits for the command to terminate. If set to No, the session continues immediately with error code 0.


Examples

Execute the file c:\work\load.bat (on a Windows machine) and append the output streams to files.

OdiOSCommand "-OUT_FILE=c:\work\load-out.txt"
"-ERR_FILE=c:\work\load-err.txt" "-FILE_APPEND=YES"
"-WORKING_DIR=c:\work" c:\work\load.bat

OdiOutFile

Use this command to write or append content to a text file.

Usage

OdiOutFile -FILE=<file_name> [-APPEND] [-CHARSET_ENCODING=<encoding>]
[-XROW_SEP=<hexadecimal_line_break>] [CR/LF <text> | -TEXT=<text>]

Parameters

Parameters | Mandatory | Description

-FILE=<file_name>

Yes

Target file. Its path may be absolute or relative to the execution agent location.

-APPEND

No

Indicates whether <text> must be appended at the end of the file. If this parameter is not specified, the file is overwritten if it exists.

-CHARSET_ENCODING=<encoding>

No

Target file encoding. The default value is ISO-8859-1. For the list of supported encodings, see:

http://java.sun.com/j2se/1.4.2/docs/guide/intl/encoding.doc.html

-XROW_SEP=<hexadecimal_line_break>

No

Hexadecimal code of the character used as a line separator (line break). The default value is 0A (UNIX line break). For a Windows line break, the value is 0D0A.

CR/LF <text> or -TEXT=<text>

No

Text to write in the file. This text can be typed on the line following the OdiOutFile command (a carriage return - CR/LF - indicates the beginning of the text), or can be defined with the -TEXT parameter. The -TEXT parameter should be used when calling this Oracle Data Integrator command from an OS command line. The text can contain variables or substitution methods.


Examples

Generate the file /var/tmp/my_file.txt on the UNIX system of the agent that executed it.

OdiOutFile -FILE=/var/tmp/my_file.txt
Welcome to Oracle Data Integrator
This file has been overwritten by <%=odiRef.getSession("SESS_NAME")%> 

Add the entry PLUTON into the hosts file of the Windows system of the agent that executed it.

OdiOutFile -FILE=C:\winnt\system32\drivers\etc\hosts -APPEND
195.10.10.6 PLUTON pluton
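
When calling this tool from an OS command line, pass the text with the -TEXT parameter instead (a sketch; the file path and message are hypothetical).

OdiOutFile -FILE=/var/tmp/status.txt "-TEXT=Load completed"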

OdiPingAgent

Use this command to perform a test on a given agent. If the agent is not started, this command raises an error.

Usage

OdiPingAgent -AGENT_NAME=<physical_agent_name>

Parameters

Parameters | Mandatory | Description

-AGENT_NAME=<physical_agent_name>

Yes

Name of the physical agent to test.


Examples

Test the physical agent AGENT_SOLARIS_DEV.

OdiPingAgent -AGENT_NAME=AGENT_SOLARIS_DEV

OdiPurgeLog

Use this command to purge the execution logs.

The OdiPurgeLog tool purges all session logs and/or Load Plan runs that match the filter criteria.

The -PURGE_TYPE parameter defines the objects to purge:

  • Select SESSION to purge all session logs matching the criteria. Child sessions and grandchild sessions are purged if the parent session matches the criteria. Note that sessions launched by a Load Plan execution, including the child sessions, are not purged.

  • Select LOAD_PLAN_RUN to purge all load plan logs matching the criteria. Note that all sessions launched from the Load Plan run are purged even if the sessions attached to the Load Plan runs themselves do not match the criteria.

  • Select ALL to purge both session logs and Load Plan runs matching the criteria.

The -COUNT parameter defines the number of sessions and/or Load Plan runs (after filtering) to preserve in the log. The -ARCHIVE parameter enables automatic archiving of the purged sessions and/or Load Plan runs.


Note:

Load Plans and sessions in running, waiting, or queued status are not purged.


Usage

OdiPurgeLog 
[-PURGE_TYPE=<SESSION|LOAD_PLAN_RUN|ALL>]
[-COUNT=<session_number>] [-FROMDATE=<from_date>] [-TODATE=<to_date>]
[-CONTEXT_CODE=<context_code>] [-USER_NAME=<user_name>]
[-AGENT_NAME=<agent_name>] [-PURGE_REPORTS=<Yes|No>] [-STATUS=<D|E|M>]
[-NAME=<session_or_load_plan_name>] [-ARCHIVE=<Yes|No>] [-TODIR=<directory>]
[-ZIPFILE_NAME=<zipfile_name>] [-XML_CHARSET=<charset>] [-JAVA_CHARSET=<charset>]
[-REMOVE_TEMPORARY_OBJECTS=<yes|no>]

Parameters

Parameters | Mandatory | Description

-PURGE_TYPE=<SESSION|LOAD_PLAN_RUN|ALL>

No

Purges only session logs, only Load Plan logs, or both. The default is SESSION.

-COUNT=<session_number>

No

Retains the most recent <session_number> sessions and/or Load Plan runs that match the specified filter criteria and purges the rest. If this parameter is not specified or equals 0, all sessions and/or Load Plan runs that match the filter criteria are purged.

-FROMDATE=<from_date>

No

Starting date for the purge, using the format yyyy/MM/dd hh:mm:ss.

If -FROMDATE is omitted, the purge is done starting with the oldest session and/or Load Plan run.

-TODATE=<to_date>

No

Ending date for the purge, using the format yyyy/MM/dd hh:mm:ss.

If -TODATE is omitted, the purge is done up to the most recent session and/or Load Plan run.

-CONTEXT_CODE=<context_code>

No

Purges only sessions and/or Load Plan runs executed in <context_code>.

If -CONTEXT_CODE is omitted, the purge is done on all contexts.

-USER_NAME=<user_name>

No

Purges only sessions and/or Load Plan runs launched by <user_name>.

-AGENT_NAME=<agent_name>

No

Purges only sessions and/or Load Plan runs executed by <agent_name>.

-PURGE_REPORTS=<Yes|No>

No

If set to Yes, scenario reports (appearing under the execution node of each scenario) are also purged.

-STATUS=<D|E|M>

No

Purges only the sessions and/or Load Plan runs with the specified state:

  • D: Done

  • E: Error

  • M: Warning

If this parameter is not specified, sessions and/or Load Plan runs in all of these states are purged.

-NAME=<session_or_load_plan_name>

No

Session name or Load Plan name.

-ARCHIVE=<Yes|No>

No

If set to Yes, exports the sessions and/or Load Plan runs before they are purged.

-TODIR=<directory>

No

Target directory for the export. This parameter is required if -ARCHIVE is set to Yes.

-ZIPFILE_NAME=<zipfile_name>

No

Name of the compressed file.

-XML_CHARSET=<charset>

No

XML encoding of the export files. The default value is ISO-8859-1. For the list of supported encodings, see:

http://java.sun.com/j2se/1.4.2/docs/guide/intl/encoding.doc.html

-JAVA_CHARSET=<charset>

No

Export file encoding. The default value is ISO8859_1. For the list of supported encodings, see:

http://java.sun.com/j2se/1.4.2/docs/guide/intl/encoding.doc.html

-REMOVE_TEMPORARY_OBJECTS=<yes|no>

No

If set to Yes (default), cleanup tasks are performed before sessions are purged so that any temporary objects are removed.


Examples

Purge all sessions executed between 2001/03/25 00:00:00 and 2001/08/31 21:59:00.

OdiPurgeLog "-FROMDATE=2001/03/25 00:00:00" "-TODATE=2001/08/31 21:59:00"

Purge all Load Plan runs that were executed in the GLOBAL context by the Internal agent and that are in Error status.

OdiPurgeLog "-PURGE_TYPE=LOAD_PLAN_RUN" "-CONTEXT_CODE=GLOBAL" 
"-AGENT_NAME=Internal" "-STATUS=E"

OdiReadMail

Use this command to read emails and attachments from a POP or IMAP account.

This command connects to the mail server -MAILHOST using the connection parameters specified by -USER and -PASS. The execution agent reads messages from the mailbox until -MAX_MSG messages have been received or the maximum waiting time specified by -TIMEOUT is reached. Only messages matching the filters, such as those specified by -SUBJECT and -SENDER, are extracted. When a message satisfies these criteria, its content and its attachments are extracted into the directory specified by -FOLDER. If -KEEP is set to No, the retrieved message is deleted from the mailbox.

Usage

OdiReadMail -MAILHOST=<mail_host> -USER=<mail_user>
-PASS=<mail_user_password> -FOLDER=<folder_path>
[-PROTOCOL=<pop3|imap>] [-FOLDER_OPT=<none|sender|subject>] 
[-KEEP=<no|yes>] [-EXTRACT_MSG=<yes|no>] [-EXTRACT_ATT=<yes|no>]
[-MSG_PRF=<my_prefix>] [-ATT_PRF=<my_prefix>] [-USE_UCASE=<no|yes>]
[-NOMAIL_ERROR=<no|yes>] [-TIMEOUT=<timeout>] [-POLLINT=<pollint>]
[-MAX_MSG=<max_msg>] [-SUBJECT=<subject_filter>] [-SENDER=<sender_filter>]
[-TO=<to_filter>] [-CC=<cc_filter>]

Parameters

Parameters | Mandatory | Description

-MAILHOST=<mail_host>

Yes

IP address of the POP or IMAP mail server.

-USER=<mail_user>

Yes

Valid mail server account.

-PASS=<mail_user_password>

Yes

Password of the mail server account.

-FOLDER=<folder_path>

Yes

Full path of the storage folder for attachments and messages.

-PROTOCOL=<pop3|imap>

No

Type of mail accessed (POP3 or IMAP). The default is POP3.

-FOLDER_OPT=<none|sender|subject>

No

Allows the creation of a subdirectory in the directory -FOLDER, according to the following values:

  • none (default): No action.

  • sender: A subdirectory is created with the external name of the sender.

  • subject: A subdirectory is created with the subject of the message.

For the sender and subject folder options, spaces and nonalphanumeric characters (such as @) are replaced with underscores in the generated folder name.

-KEEP=<no|yes>

No

If set to Yes, keep the messages that match the filters in the mailbox after reading them.

If set to No (default), delete the messages that match the filters of the mailbox after reading them.

-EXTRACT_MSG=<yes|no>

No

If set to Yes (default), extract the body of the message into a file.

If set to No, do not extract the body of the message into a file.

-EXTRACT_ATT=<yes|no>

No

If set to Yes (default), extract the attachments into files.

If set to No, do not extract attachments.

-MSG_PRF=<my_prefix>

No

Prefix of the file that contains the body of the message. The default is MSG.

-ATT_PRF=<my_prefix>

No

Prefix of the files that contain the attachments. The original file names are kept.

-USE_UCASE=<no|yes>

No

If set to Yes, force the file names to uppercase.

If set to No (default), keep the original letter case.

-NOMAIL_ERROR=<no|yes>

No

If set to Yes, generate an error if no mail matches the specified criteria.

If set to No (default), do not generate an error when no mail corresponds to the specified criteria.

-TIMEOUT=<timeout>

No

Maximum waiting time in milliseconds. If this waiting time is reached, the command ends.

The default value is 0, which means an infinite waiting time (as long as needed for the maximum number of messages specified with -MAX_MSG to be received).

-POLLINT=<pollint>

No

Searching interval in milliseconds to scan for new messages. The default value is 1000 (1 second).

-MAX_MSG=<max_msg>

No

Maximum number of messages to extract. If this number is reached, the command ends. The default value is 1.

-SUBJECT=<subject_filter>

No

Parameter used to filter the messages according to their subjects.

-SENDER=<sender_filter>

No

Parameter used to filter messages according to their sender.

-TO=<to_filter>

No

Parameter used to filter messages according to their To (recipient) addresses. This option can be repeated to create multiple filters.

-CC=<cc_filter>

No

Parameter used to filter messages according to their CC (carbon copy) addresses. This option can be repeated to create multiple filters.


Examples

Automatically receive messages from support, extracting the attachments into the folder C:\support on the system of the agent. Wait for all messages, with a maximum waiting time of 10 seconds.

OdiReadMail -MAILHOST=mail.mymail.com -USER=myaccount -PASS=mypass
-KEEP=no -FOLDER=c:\support -TIMEOUT=10000 -MAX_MSG=0
-SENDER=support@mycompany.com -EXTRACT_MSG=yes -MSG_PRF=TXT
-EXTRACT_ATT=yes

Wait indefinitely for 10 messages and check for new messages every minute.

OdiReadMail -MAILHOST=mail.mymail.com -USER=myaccount -PASS=mypass
-KEEP=no -FOLDER=c:\support -TIMEOUT=0 -MAX_MSG=10 -POLLINT=60000
-SENDER=support@mycompany.com -EXTRACT_MSG=yes -MSG_PRF=TXT
-EXTRACT_ATT=yes
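
Extract up to five messages into per-sender subdirectories of c:\support (a sketch; the account details are hypothetical).

OdiReadMail -MAILHOST=mail.mymail.com -USER=myaccount -PASS=mypass
-FOLDER=c:\support -FOLDER_OPT=sender -MAX_MSG=5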

OdiRefreshJournalCount

Use this command to refresh, for a given journalizing subscriber, the number of rows to consume for a given table list or CDC set. This refresh is performed on a logical schema in a given context, and may be limited to events recorded up to a given date (see -MAX_JRN_DATE).


Note:

This command is suitable for journalized tables in simple or consistent mode and cannot be executed in a command line with startcmd.


Usage

OdiRefreshJournalCount -LSCHEMA=<logical_schema>
-SUBSCRIBER_NAME=<subscriber_name>
(-TABLE_NAME=<table_name> | -CDC_SET_NAME=<cdc set name>)
[-CONTEXT=<context>] [-MAX_JRN_DATE=<to_date>]

Parameters

Parameters | Mandatory | Description

-LSCHEMA=<logical_schema>

Yes

Logical schema containing the journalized tables.

-TABLE_NAME=<table_name>

Yes for working with simple CDC

Journalized table name, mask, or list to check. This parameter accepts three formats:

  • Table Name

  • Table Name Mask: This mask selects the tables to poll. The mask is specified using the SQL LIKE syntax: the % symbol matches any number of characters, and the _ symbol matches a single character.

  • Table Names List: List of table names separated by commas. Masks as defined above are not allowed.

Note that this option works only for tables in a model journalized in simple mode.

This parameter cannot be used with -CDC_SET_NAME. It is mandatory if -CDC_SET_NAME is not set.

-CDC_SET_NAME=<cdc_set_name>

Yes for working with consistent set CDC

Name of the CDC set to check.

Note that this option works only for tables in a model journalized in consistent mode.

This parameter cannot be used with -TABLE_NAME. It is mandatory if -TABLE_NAME is not set.

-SUBSCRIBER_NAME=<subscriber_name>

Yes

Name of the subscriber for which the count is refreshed.

-CONTEXT=<context>

No

Context in which the logical schema will be resolved. If no context is specified, the execution context is used.

-MAX_JRN_DATE=<to_date>

No

Date (and time) until which the journalizing events are taken into account.


Examples

Refresh the count of modifications recorded for the SALES_SYNC subscriber on the CUSTOMERS table in the SALES_APPLICATION schema. This datastore is journalized in simple mode.

OdiRefreshJournalCount -LSCHEMA=SALES_APPLICATION
-TABLE_NAME=CUSTOMERS -SUBSCRIBER_NAME=SALES_SYNC

Refresh the count of modifications recorded for the SALES_SYNC subscriber on all tables of the SALES CDC set in the SALES_APPLICATION schema. These datastores are journalized with consistent set CDC.

OdiRefreshJournalCount -LSCHEMA=SALES_APPLICATION
-SUBSCRIBER_NAME=SALES_SYNC -CDC_SET_NAME=SALES
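
Limit the refresh to journalizing events recorded up to a given date with -MAX_JRN_DATE (a sketch; the date value is hypothetical and assumes the yyyy/MM/dd hh:mm:ss format used elsewhere in this chapter).

OdiRefreshJournalCount -LSCHEMA=SALES_APPLICATION
-SUBSCRIBER_NAME=SALES_SYNC -CDC_SET_NAME=SALES
"-MAX_JRN_DATE=2013/10/31 23:59:59"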

OdiReinitializeSeq

Use this command to reinitialize an Oracle Data Integrator sequence.

Usage

OdiReinitializeSeq -SEQ_NAME=<sequence_name> -CONTEXT=<context>
-STD_POS=<position>

Parameters

Parameters | Mandatory | Description

-SEQ_NAME=<sequence_name>

Yes

Name of the sequence to reinitialize. It must be prefixed with GLOBAL. for a global sequence, or by <project code>. for a project sequence.

-CONTEXT=<context>

Yes

Context in which the sequence must be reinitialized.

-STD_POS=<position>

Yes

Position to which the sequence must be reinitialized.


Examples

Reset the global sequence SEQ_I to 0 for the GLOBAL context.

OdiReinitializeSeq -SEQ_NAME=GLOBAL.SEQ_I -CONTEXT=GLOBAL
-STD_POS=0
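
Reset a project sequence by prefixing its name with the project code (a sketch; the SALES project code, SEQ_CUST sequence, and DEV context are hypothetical).

OdiReinitializeSeq -SEQ_NAME=SALES.SEQ_CUST -CONTEXT=DEV
-STD_POS=100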

OdiRemoveTemporaryObjects

Use this command to remove temporary objects that could remain between executions. It does this by executing the cleanup tasks of the sessions identified by the specified parameters.

Usage

OdiRemoveTemporaryObjects [-COUNT=<session_number>] [-FROMDATE=<from_date>]
[-TODATE=<to_date>] [-CONTEXT_CODE=<context_code>]
[-AGENT_NAME=<agent_name>] [-USER_NAME=<user_name>]
[-NAME=<session_name>] [-ERRORS_ALLOWED=<number_of_errors_allowed>]

Parameters

Parameters | Mandatory | Description

-COUNT=<session_number>

No

Number of sessions for which to skip cleanup. The most recent number of sessions (<session_number>) is kept and the rest are cleaned up.

-FROMDATE=<from_date>

No

Start date for the cleanup, using the format yyyy/MM/dd hh:mm:ss. All sessions started after this date are cleaned up. If -FROMDATE is omitted, the cleanup starts with the oldest session.

-TODATE=<to_date>

No

End date for the cleanup, using the format yyyy/MM/dd hh:mm:ss. All sessions started before this date are cleaned up. If -TODATE is omitted, the cleanup is done up to the most recent session.

-CONTEXT_CODE=<context_code>

No

Cleans up only those sessions executed in this context (<context_code>). If -CONTEXT_CODE is omitted, cleanup is performed on all contexts.

-AGENT_NAME=<agent_name>

No

Cleans up only those sessions executed by this agent (<agent_name>).

-USER_NAME=<user_name>

No

Cleans up only those sessions launched by this user (<user_name>).

-NAME=<session_name>

No

Session name.

-ERRORS_ALLOWED=<number_of_errors_allowed>

No

Number of errors allowed before the step ends with OK. If set to 0, the step ends with OK regardless of the number of errors encountered during the cleanup phase.


Examples

Remove the temporary objects by performing the cleanup tasks of all sessions executed between 2013/03/25 00:00:00 and 2013/08/31 21:59:00.

OdiRemoveTemporaryObjects "-FROMDATE=2013/03/25 00:00:00" "-TODATE=2013/08/31 21:59:00"

Remove the temporary objects by performing the cleanup tasks of all sessions executed in the GLOBAL context by the Internal agent.

OdiRemoveTemporaryObjects "-CONTEXT_CODE=GLOBAL" "-AGENT_NAME=Internal"

OdiRetrieveJournalData

Use this command to retrieve the journalized events for a given journalizing subscriber and a given table list or CDC set. The retrieval is performed specifically for the technology containing the tables. This retrieval is performed on a logical schema in a given context.


Note:

This tool works for tables journalized using simple or consistent set modes and cannot be executed in a command line with startcmd.


Usage

OdiRetrieveJournalData -LSCHEMA=<logical_schema>
-SUBSCRIBER_NAME=<subscriber_name>
(-TABLE_NAME=<table_name> | -CDC_SET_NAME=<cdc_set_name>)
[-CONTEXT=<context>] [-MAX_JRN_DATE=<to_date>]

Parameters

Parameters | Mandatory | Description

-LSCHEMA=<logical_schema>

Yes

Logical schema containing the journalized tables.

-TABLE_NAME=<table_name>

Yes for working with simple CDC

Journalized table name, mask, or list to check. This parameter accepts three formats:

  • Table Name

  • Table Name Mask: This mask selects the tables to poll. The mask is specified using the SQL LIKE syntax: the % symbol matches any number of characters, and the _ symbol matches a single character.

  • Table Names List: List of table names separated by commas. Masks as defined above are not allowed.

Note that this option works only for tables in a model journalized in simple mode.

This parameter cannot be used with -CDC_SET_NAME. It is mandatory if -CDC_SET_NAME is not set.

-CDC_SET_NAME=<cdc_set_name>

Yes for working with consistent set CDC

Name of the CDC set to update.

Note that this option works only for tables in a model journalized in consistent mode.

This parameter cannot be used with -TABLE_NAME. It is mandatory if -TABLE_NAME is not set.

-SUBSCRIBER_NAME=<subscriber_name>

Yes

Name of the subscriber for which the data is retrieved.

-CONTEXT=<context>

No

Context in which the logical schema will be resolved. If no context is specified, the execution context is used.

-MAX_JRN_DATE=<to_date>

No

Date (and time) until which the journalizing events are taken into account.


Examples

Retrieve the journalizing events recorded for the SALES_SYNC subscriber on the CUSTOMERS table in the SALES_APPLICATION schema.

OdiRetrieveJournalData -LSCHEMA=SALES_APPLICATION
-TABLE_NAME=CUSTOMERS -SUBSCRIBER_NAME=SALES_SYNC
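
Retrieve the journalizing events recorded for the SALES_SYNC subscriber on all tables of the SALES CDC set in the SALES_APPLICATION schema (a sketch mirroring the consistent set CDC example of OdiRefreshJournalCount).

OdiRetrieveJournalData -LSCHEMA=SALES_APPLICATION
-SUBSCRIBER_NAME=SALES_SYNC -CDC_SET_NAME=SALES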

OdiReverseGetMetaData

Use this command to reverse-engineer metadata for the given model into the reverse tables, using the JDBC driver capabilities. This command is typically preceded by OdiReverseResetTable and followed by OdiReverseSetMetaData.



Usage

OdiReverseGetMetaData -MODEL=<model_id>

Parameters

Notes:

  • This command uses the same technique as the standard reverse-engineering, and depends on the capabilities of the JDBC driver used.

  • The use of this command is restricted to DEVELOPMENT type Repositories because the metadata is not available on EXECUTION type Repositories.

Parameters | Mandatory | Description

-MODEL=<model_id>

Yes

Model to reverse-engineer.


Examples

Reverse the RKM's current model.

OdiReverseGetMetaData -MODEL=<%=odiRef.getModel("ID")%>
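
A customized reverse-engineering process typically chains the three reverse tools, as noted above (a sketch; each call is normally a separate task, shown here against the RKM's current model).

OdiReverseResetTable -MODEL=<%=odiRef.getModel("ID")%>
OdiReverseGetMetaData -MODEL=<%=odiRef.getModel("ID")%>
OdiReverseSetMetaData -MODEL=<%=odiRef.getModel("ID")%>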

OdiReverseManageShortcut

Use this command to define how to handle shortcuts when they are reverse-engineered in a model.

Usage

OdiReverseManageShortcut "-MODEL=<model_id>" "-MODE=MATERIALIZING_MODE"

Parameters

Parameters | Mandatory | Description

-MODEL=<model_id>

Yes

Global identifier of the model to be reversed.

-MODE=ALWAYS_MATERIALIZE|ALWAYS_SKIP|PROMPT

Yes

This parameter accepts the following values:

  • ALWAYS_MATERIALIZE: Conflicted shortcuts are always materialized and datastores are reversed (default).

  • ALWAYS_SKIP: Conflicted shortcuts are always skipped and not reversed.

  • PROMPT: The Shortcut Conflict Detected dialog is displayed. You can define how to handle conflicted shortcuts. Select Materialize to materialize and reverse-engineer the conflicted datastore shortcut. Leave Materialize unselected to skip the conflicted shortcut. Unselected datastores are not reversed and the shortcut remains.


Example

Reverse model 125880 in ALWAYS_MATERIALIZE mode.

OdiReverseManageShortcut -MODEL=125880 -MODE=ALWAYS_MATERIALIZE

OdiReverseResetTable

Use this command to reset the content of reverse tables for a given model. This command is typically used at the beginning of a customized reverse-engineering process.

Usage

OdiReverseResetTable -MODEL=<model_id>

Parameters

Parameters | Mandatory | Description

-MODEL=<model_id>

Yes

Global identifier of the model to be reversed.


Examples

OdiReverseResetTable -MODEL=123001

OdiReverseSetMetaData

Use this command to integrate metadata from the reverse tables into the Repository for a given data model.

Usage

OdiReverseSetMetaData -MODEL=<model_id> [-USE_TABLE_NAME_FOR_UPDATE=<true|false>]

Parameters

Parameters | Mandatory | Description

-MODEL=<model_id>

Yes

Global identifier of the model to be reversed.

-USE_TABLE_NAME_FOR_UPDATE=<true|false>

No

  • If true, the TABLE_NAME is used as an update key on the target tables.

  • If false (default), the RES_NAME is used as the update key on the target tables.


Examples

Reverse model 123001, using the TABLE_NAME as an update key on the target tables.

OdiReverseSetMetaData -MODEL=123001 -USE_TABLE_NAME_FOR_UPDATE=true

OdiSAPALEClient and OdiSAPALEClient3

Use this command to generate SAP Intermediate Documents (IDocs) from XML source files and transfer these IDocs using ALE (Application Link Enabling) to a remote tRFC server (SAP R/3 server).


Note:

The OdiSAPALEClient tool supports SAP Java Connector 2.x. To use SAP Java Connector 3.x, use the OdiSAPALEClient3 tool.


Usage for OdiSAPALEClient

OdiSAPALEClient -USER=<sap_logon> -ENCODED_PASSWORD=<password>
(-GATEWAYHOST=<gateway_host> -SYSTEMNR=<system_number> |
-MESSAGESERVERHOST=<message_server> -R3NAME=<system_name>
-APPLICATIONSERVERSGROUP=<group_name>)
[-DIR=<directory>] [-FILE=<file>] [-CASESENS=<yes|no>]
[-MOVEDIR=<target_directory>] [-DELETE=<yes|no>] [-POOL_KEY=<pool_key>]
[-LANGUAGE=<language>] [-CLIENT=<client>] [-MAX_CONNECTIONS=<n>]
[-TRACE=<no|yes>]

Usage for OdiSAPALEClient3

OdiSAPALEClient3 -USER=<sap_logon> -ENCODED_PASSWORD=<password>
(-GATEWAYHOST=<gateway_host> -SYSTEMNR=<system_number> |
-MESSAGESERVERHOST=<message_server> -R3NAME=<system_name>
-APPLICATIONSERVERSGROUP=<group_name>)
[-DIR=<directory>] [-FILE=<file>] [-CASESENS=<yes|no>]
[-MOVEDIR=<target_directory>] [-DELETE=<yes|no>] [-POOL_KEY=<pool_key>]
[-LANGUAGE=<language>] [-CLIENT=<client>] [-MAX_CONNECTIONS=<n>]
[-TRACE=<no|yes>]

Parameters

Parameters | Mandatory | Description

-USER=<sap_logon>

Yes

SAP logon. This user may be a system user.

-PASSWORD=<password>

Deprecated

SAP logon password. This parameter is deprecated; use -ENCODED_PASSWORD instead.

-ENCODED_PASSWORD=<password>

Yes

SAP logon password, encrypted. The OS command encode <password> can be used to encrypt this password.

-GATEWAYHOST=<gateway_host>

No

Gateway host, mandatory if -MESSAGESERVERHOST is not specified.

-SYSTEMNR=<system_number>

No

SAP system number, mandatory if -GATEWAYHOST is used. The SAP system number enables the SAP load balancing feature.

-MESSAGESERVERHOST=<message_server>

No

Message server host name, mandatory if -GATEWAYHOST is not specified. If -GATEWAYHOST and -MESSAGESERVERHOST are both specified, -MESSAGESERVERHOST is used.

-R3NAME=<system_name>

No

Name of the SAP system (r3name), mandatory if -MESSAGESERVERHOST is used.

-APPLICATIONSERVERSGROUP=<group_name>

No

Application servers group name, mandatory if -MESSAGESERVERHOST is used.

-DIR=<directory>

No

XML source file directory. This parameter is taken into account if -FILE is not specified. At least one of the -DIR or -FILE parameters must be specified.

-FILE=<file>

No

Name of the source XML file. If this parameter is omitted, all files in -DIR are processed. At least one of the -DIR or -FILE parameters must be specified.

-CASESENS=<yes|no>

No

Indicates if the source file names are case-sensitive. The default value is No.

-MOVEDIR=<target_directory>

No

If this parameter is specified, the source files are moved to this directory after being processed.

-DELETE=<yes|no>

No

Deletes the source files after processing. The default value is Yes.

-POOL_KEY=<pool_key>

No

Name of the connection pool. The default value is ODI.

-LANGUAGE=<language>

No

Language code used for error messages. The default value is EN.

-CLIENT=<client>

No

Client identifier. The default value is 001.

-MAX_CONNECTIONS=<n>

No

Maximum number of connections in the pool. The default value is 3.

-TRACE=<no|yes>

No

If set to Yes, the generated IDoc files are archived in the source file directory. If the source files are moved (-MOVEDIR parameter), the generated IDoc files are also moved. The default value is No.


Examples

Process all files in the /sap directory and send them as IDocs to the SAP server. The original XML and generated files are stored in the /log directory after processing.

OdiSAPALEClient -USER=ODI -ENCODED_PASSWORD=xxx -SYSTEMNR=002
-GATEWAYHOST=GW001 -DIR=/sap -MOVEDIR=/log -TRACE=yes
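
Connect through a message server rather than a gateway host (a sketch; the message server, system name, and group name are hypothetical).

OdiSAPALEClient -USER=ODI -ENCODED_PASSWORD=xxx
-MESSAGESERVERHOST=MSG001 -R3NAME=PRD -APPLICATIONSERVERSGROUP=GRP1
-DIR=/sap -MOVEDIR=/log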

OdiSAPALEServer and OdiSAPALEServer3

Use this command to start a tRFC listener to receive SAP IDocs transferred using ALE (Application Link Enabling). This listener transforms incoming IDocs into XML files in a given directory.


Note:

The OdiSAPALEServer tool supports SAP Java Connector 2.x. To use SAP Java Connector 3.x, use the OdiSAPALEServer3 tool.


Usage of OdiSAPALEServer

OdiSAPALEServer -USER=<sap_logon> -ENCODED_PASSWORD=<password>
-GATEWAYHOST=<gateway_host> -SYSTEMNR=<system_number>
-GATEWAYNAME=<gateway_name> -PROGRAMID=<program_id> -DIR=<target_directory>
[-TIMEOUT=<n>] [-POOL_KEY=<pool_key>] [-LANGUAGE=<Language>]
[-CLIENT=<client>] [-MAX_CONNECTIONS=<n>]
[-INTERREQUESTTIMEOUT=<n>] [-MAXREQUEST=<n>] [-TRACE=<no|yes>]

Usage of OdiSAPALEServer3

OdiSAPALEServer3 -USER=<sap_logon> -ENCODED_PASSWORD=<password>
-GATEWAYHOST=<gateway_host> -SYSTEMNR=<system_number>
-GATEWAYNAME=<gateway_name> -PROGRAMID=<program_id> -DIR=<target_directory>
[-TIMEOUT=<n>] [-POOL_KEY=<pool_key>] [-LANGUAGE=<Language>]
[-CLIENT=<client>] [-MAX_CONNECTIONS=<n>]
[-INTERREQUESTTIMEOUT=<n>] [-MAXREQUEST=<n>] [-TRACE=<no|yes>]

Parameters

Parameters | Mandatory | Description

-USER=<sap_logon>

Yes

SAP logon. This user may be a system user.

-ENCODED_PASSWORD=<password>

Yes

SAP logon password, encrypted. The system command encode <password> can be used to encrypt this password.

-GATEWAYHOST=<gateway_host>

Yes

Gateway host.

-SYSTEMNR=<system_number>

Yes

SAP system number.

-GATEWAYNAME=<gateway_name>

Yes

Gateway name.

-PROGRAMID=<program_id>

Yes

Program ID: the external name used by the tRFC server.

-DIR=<target_directory>

Yes

Directory in which the target XML files are stored. These files are named <IDOC Number>.xml, and are located in subdirectories named after the IDoc type. The default is ./FromSAP.

-POOL_KEY=<pool_key>

No

Name of the connection pool. The default value is ODI.

-LANGUAGE=<language>

No

Language code used for error messages. The default value is EN.

-CLIENT=<client>

No

SAP client identifier. The default value is 001.

-TIMEOUT=<n>

No

Life span in milliseconds for the server. At the end of this period the server stops automatically. If this timeout is set to 0, the server life span is infinite. The default value is 0.

-MAX_CONNECTIONS=<n>

No

Maximum number of connections allowed for the pool of connections. The default value is 3.

-INTERREQUESTTIMEOUT=<n>

No

If no IDoc is received during an interval of n milliseconds, the listener stops. If this timeout is set to 0, the timeout is infinite. The default value is 0.

-MAXREQUEST=<n>

No

Maximum number of requests after which the listener stops. If this parameter is set to 0, the server expects an infinite number of requests. The default value is 0.

Note: If -TIMEOUT, -INTERREQUESTTIMEOUT, and -MAXREQUEST are set to 0 or left empty, then -MAXREQUEST automatically takes the value 1.

-TRACE=<no|yes>

No

Activate the debug trace. The default value is No.


Examples

Wait for 2 IDoc files and generate the target XML files in the /tmp directory.

OdiSAPALEServer -POOL_KEY=ODI -MAX_CONNECTIONS=3 -CLIENT=001
-USER=ODI -ENCODED_PASSWORD=xxx -LANGUAGE=EN
-GATEWAYHOST=SAP001 -SYSTEMNR=002 -GATEWAYNAME=GW001
-PROGRAMID=ODI01 -DIR=/tmp -MAXREQUEST=2

OdiScpGet

Use this command to download a file from an SSH server.

Usage

OdiScpGet -HOST=<ssh server host name> -USER=<ssh user>
[-PASSWORD=<ssh user password>] -REMOTE_DIR=<remote dir on ssh host>
[-REMOTE_FILE=<file name under the REMOTE_DIR>] -LOCAL_DIR=<local dir>
[-LOCAL_FILE=<file name under the LOCAL_DIR>] [-PASSIVE_MODE=<yes|no>]
[-TIMEOUT=<time in seconds>] 
[-IDENTITY_FILE=<full path to the private key file of the user>]
[-KNOWNHOSTS_FILE=<full path to known hosts file>] [-COMPRESSION=<yes|no>]
[-STRICT_HOSTKEY_CHECKING=<yes|no>] [-PROXY_HOST=<proxy server host name>]
[-PROXY_PORT=<proxy server port>] [-PROXY_TYPE=<HTTP|SOCKS5>]

Parameters

Parameters | Mandatory | Description

-HOST=<ssh server host name>

Yes

Host name of the SSH server.

-USER=<ssh user>

Yes

User on the SSH server.

-PASSWORD=<ssh user password>

No

The password of the SSH user or the passphrase of the password-protected identity file. If the -IDENTITY_FILE argument is provided, this value is used as the passphrase for the password-protected private key file. If public key authentication fails, it falls back to normal user password authentication.

-REMOTE_DIR=<dir on remote SSH>

Yes

Directory path on the remote SSH host.

-REMOTE_FILE=<file name under -REMOTE_DIR>

No

File name under the directory specified in the -REMOTE_DIR argument. Note that all subdirectories matching the remote file name will also be transferred to the local folder.

If this argument is missing, the file is copied with the -LOCAL_FILE file name. If -LOCAL_FILE is also missing, the -REMOTE_DIR is copied recursively to the -LOCAL_DIR.

-LOCAL_DIR=<local dir path>

Yes

Directory path on the local machine.

-LOCAL_FILE=<local file>

No

File name under the directory specified in the -LOCAL_DIR argument. If this argument is missing, the file is copied with the -REMOTE_FILE file name. If the -REMOTE_FILE argument is also missing, the -REMOTE_DIR is copied recursively to the -LOCAL_DIR.

To filter the files to be copied, use the * wildcard character.

Examples:

  • *.log (all files with the log extension)

  • arch_*.lst (all files starting with arch_ and with the extension lst)

-IDENTITY_FILE=<full path to the private key file of the user>

No

Private key file of the local user. If this argument is specified, public key authentication is performed. The -PASSWORD argument is used as the password for the password-protected private key file. If authentication fails, it falls back to normal user password authentication.

-KNOWNHOSTS_FILE=<full path to the known hosts file on the local machine>

No

Full path to the known hosts file on the local machine. The known hosts file contains the host keys of all remote machines the user trusts. If this argument is missing, the <user home dir>/.ssh/known_hosts file is used as the known hosts file if it exists.

-COMPRESSION=<yes|no>

No

If set to Yes, data compression is used. The default value is No.

-STRICT_HOSTKEY_CHECKING=<yes|no>

No

If set to Yes (default), strict host key checking is performed and authentication fails if the remote SSH host key is not present in the known hosts file specified in -KNOWNHOSTS_FILE.

-PROXY_HOST=<proxy server host name>

No

Host name of the proxy server to be used for the connection.

-PROXY_PORT=<proxy server port>

No

Port number of the proxy server.

-PROXY_TYPE=<HTTP|SOCKS5>

No

Type of proxy server you are connecting to, HTTP or SOCKS5.

-TIMEOUT=<time in seconds>

No

Time in seconds after which the socket connection times out.


Examples

Copy the remote directory /test_copy555 on the SSH server recursively to the local directory C:\temp\test_copy.

OdiScpGet -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password>
 -LOCAL_DIR=C:\temp\test_copy -REMOTE_DIR=/test_copy555

Copy all files matching the Sales*.txt pattern under the remote directory / on the SSH server to the local directory C:\temp\.

OdiScpGet -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password>
 -LOCAL_DIR=C:\temp -REMOTE_FILE=Sales*.txt -REMOTE_DIR=/

Copy the Sales1.txt file under the remote directory / on the SSH server to the local directory C:\temp\ as a Sample1.txt file.

OdiScpGet -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password>
-REMOTE_DIR=/ -REMOTE_FILE=Sales1.txt -LOCAL_DIR=C:\temp
-LOCAL_FILE=Sample1.txt

Copy the Sales1.txt file under the remote directory / on the SSH server to the local directory C:\temp\ as a Sample1.txt file. Public key authentication is performed by providing the path to the identity file and the path to the known hosts file.

OdiScpGet -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password>
-REMOTE_DIR=/ -REMOTE_FILE=Sales1.txt -LOCAL_DIR=C:\temp
-LOCAL_FILE=Sample1.txt -IDENTITY_FILE=C:\Documents and Settings\username\.ssh\id_dsa -KNOWNHOSTS_FILE=C:\Documents and Settings\username\.ssh\known_hosts

Copy the Sales1.txt file under the remote directory / on the SSH server to the local directory C:\temp\ as a Sample1.txt file. Public key authentication is performed by providing the path to the identity file. All hosts are trusted by passing the No value to the -STRICT_HOSTKEY_CHECKING parameter.

OdiScpGet -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password>
-REMOTE_DIR=/ -REMOTE_FILE=Sales1.txt -LOCAL_DIR=C:\temp -LOCAL_FILE=Sample1.txt
-IDENTITY_FILE=C:\Documents and Settings\username\.ssh\id_dsa
-STRICT_HOSTKEY_CHECKING=NO
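
Download the same file through an HTTP proxy server (a sketch; the proxy host name and port are hypothetical).

OdiScpGet -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password>
-REMOTE_DIR=/ -REMOTE_FILE=Sales1.txt -LOCAL_DIR=C:\temp
-PROXY_HOST=proxy.mycompany.com -PROXY_PORT=8080 -PROXY_TYPE=HTTP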

OdiScpPut

Use this command to upload a file to an SSH server.

Usage

OdiScpPut -HOST=<SSH server host name> -USER=<SSH user>
[-PASSWORD=<SSH user password>] -LOCAL_DIR=<local dir>
[-LOCAL_FILE=<file name under the LOCAL_DIR>] -REMOTE_DIR=<remote dir on ssh host>
[-REMOTE_FILE=<file name under the REMOTE_DIR>] [-PASSIVE_MODE=<yes|no>]
[-TIMEOUT=<time in seconds>]
[-IDENTITY_FILE=<full path to the private key file of the user>]
[-KNOWNHOSTS_FILE=<full path to known hosts file>] [-COMPRESSION=<yes|no>]
[-STRICT_HOSTKEY_CHECKING=<yes|no>] [-PROXY_HOST=<proxy server host name>]
[-PROXY_PORT=<proxy server port>] [-PROXY_TYPE=<HTTP|SOCKS5>]

Parameters

Parameters | Mandatory | Description

-HOST=<host name of the SSH server>

Yes

Host name of the SSH server.

-USER=<SSH user>

Yes

User on the SSH server.

-PASSWORD=<password of the SSH user>

No

Password of the SSH user or the passphrase of the password-protected identity file. If the -IDENTITY_FILE argument is provided, this value is used as the passphrase for the password-protected private key file. If public key authentication fails, it falls back to normal user password authentication.

-REMOTE_DIR=<dir on remote SSH>

Yes

Directory path on the remote SSH host.

-REMOTE_FILE=<file name under -REMOTE_DIR>

No

File name under the directory specified in the -REMOTE_DIR argument. If this argument is missing, the file is copied with the -LOCAL_FILE file name. If the -LOCAL_FILE argument is also missing, the -LOCAL_DIR is copied recursively to the -REMOTE_DIR.

-LOCAL_DIR=<local dir path>

Yes

Directory path on the local machine.

-LOCAL_FILE=<local file>

No

File name under the directory specified in the -LOCAL_DIR argument. If this argument is missing, all files and directories under the -LOCAL_DIR are copied recursively to the -REMOTE_DIR.

To filter the files to be copied, use the * wildcard character.

Examples:

  • *.log (all files with the log extension)

  • arch_*.lst (all files starting with arch_ and with the extension lst)

-IDENTITY_FILE=<full path to the private key file of the user>

No

Private key file of the local user. If this argument is specified, public key authentication is performed. The -PASSWORD argument is used as the password for the password-protected private key file. If authentication fails, it falls back to normal user password authentication.

-KNOWNHOSTS_FILE=<full path to the known hosts file on the local machine>

No

Full path to the known hosts file on the local machine. The known hosts file contains the host keys of all remote machines the user trusts. If this argument is missing, the <user home dir>/.ssh/known_hosts file is used as the known hosts file if it exists.

-COMPRESSION=<yes|no>

No

If set to Yes, data compression is used. The default value is No.

-STRICT_HOSTKEY_CHECKING=<yes|no>

No

If set to Yes (default), strict host key checking is performed and authentication fails if the remote SSH host key is not present in the known hosts file specified in -KNOWNHOSTS_FILE.

-PROXY_HOST=<proxy server host name>

No

Host name of the proxy server to be used for the connection.

-PROXY_PORT=<proxy server port>

No

Port number of the proxy server.

-PROXY_TYPE=<HTTP|SOCKS5>

No

Type of proxy server you are connecting to, HTTP or SOCKS5.

-TIMEOUT=<timeout value>

No

Time in seconds after which the socket connection times out.


Examples

Copy the local directory C:\temp\test_copy recursively to the remote directory /test_copy555 on the SSH server.

OdiScpPut -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password> -LOCAL_DIR=C:\temp\test_copy -REMOTE_DIR=/test_copy555

Copy all files matching the Sales*.txt pattern under the local directory C:\temp\ to the remote directory / on the SSH server.

OdiScpPut -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password> -LOCAL_DIR=C:\temp -LOCAL_FILE=Sales*.txt -REMOTE_DIR=/

Copy the Sales1.txt file under the local directory C:\temp\ to the remote directory / on the SSH server as a Sample1.txt file.

OdiScpPut -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password>
-LOCAL_DIR=C:\temp -LOCAL_FILE=Sales1.txt -REMOTE_DIR=/ -REMOTE_FILE=Sample1.txt

Copy the Sales1.txt file under the local directory C:\temp\ to the remote directory / on the SSH server as a Sample1.txt file. Public key authentication is performed by providing the path to the identity file and the path to the known hosts file.

OdiScpPut -HOST=machine.oracle.com -USER=test_ftp
-PASSWORD=<password> -LOCAL_DIR=C:\temp -LOCAL_FILE=Sales1.txt
-REMOTE_DIR=/ -REMOTE_FILE=Sample1.txt
-IDENTITY_FILE=C:\Documents and Settings\username\.ssh\id_dsa
-KNOWNHOSTS_FILE=C:\Documents and Settings\username\.ssh\known_hosts

Copy the Sales1.txt file under the local directory C:\temp\ to the remote directory / on the SSH server as a Sample1.txt file. Public key authentication is performed by providing the path to the identity file. All hosts are trusted by passing the No value to the -STRICT_HOSTKEY_CHECKING parameter.

OdiScpPut -HOST=machine.oracle.com -USER=test_ftp
-PASSWORD=<password> -LOCAL_DIR=C:\temp -LOCAL_FILE=Sales1.txt
-REMOTE_DIR=/ -REMOTE_FILE=Sample1.txt
-IDENTITY_FILE=C:\Documents and Settings\username\.ssh\id_dsa
-STRICT_HOSTKEY_CHECKING=NO

OdiSendMail

Use this command to send an email to an SMTP server.

Usage

OdiSendMail -MAILHOST=<mail_host> -FROM=<from_user> -TO=<address_list>
[-CC=<address_list>] [-BCC=<address_list>] [-SUBJECT=<subject>]
[-ATTACH=<file_path>]* [-MSGBODY=<message_body> | CR/LF<message_body>]

Parameters

Parameters | Mandatory | Description

-MAILHOST=<mail_host>

Yes

IP address of the SMTP server.

-FROM=<from_user>

Yes

Address of the sender of the message.

Example: support@mycompany.com

To send the external name of the sender, the following notation can be used:

"-FROM=Support center <support@mycompany.com>"

-TO=<address_list>

Yes

List of email addresses of the recipients, separated by commas.

Example:

"-TO=sales@mycompany.com, support@mycompany.com"

-CC=<address_list>

No

List of email addresses of the CC-ed recipients, separated by commas.

Example:

"-CC=info@mycompany.com"

-BCC=<address_list>

No

List of email addresses of the BCC-ed recipients, separated by commas.

Example:

"-BCC=manager@mycompany.com"

-SUBJECT=<subject>

No

Subject of the message.

-ATTACH=<file_path>

No

Path of the file to attach to the message, relative to the execution agent. To attach several files, repeat the -ATTACH parameter.

Example: Attach the files .profile and .cshrc to the mail:

-ATTACH=/home/usr/.profile -ATTACH=/home/usr/.cshrc

CR/LF <message_body>

or -MSGBODY=<message_body>

No

Message body (text). This text can be typed on the line following the OdiSendMail command (a carriage return - CR/LF - indicates the beginning of the mail body), or can be defined with the -MSGBODY parameter. The -MSGBODY parameter should be used when calling this Oracle Data Integrator command from an OS command line.


Examples

OdiSendMail -MAILHOST=mail.mymail.com "-FROM=Application Oracle Data
Integrator<odi@mymail.com>" -TO=admin@mymail.com "-SUBJECT=Execution OK"
-ATTACH=C:\log\job.log -ATTACH=C:\log\job.bad
Hello Administrator !
Your process finished successfully. Attached are your files.
Have a nice day!
Oracle Data Integrator.
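
When calling this tool from an OS command line, pass the body with the -MSGBODY parameter instead (a sketch; the addresses are hypothetical).

OdiSendMail -MAILHOST=mail.mymail.com -FROM=odi@mymail.com
-TO=admin@mymail.com "-SUBJECT=Execution OK"
"-MSGBODY=Your process finished successfully."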

OdiSftp

Use this command to connect to an SSH server with an enabled SFTP subsystem and perform standard FTP commands on the remote system. Trace from the script is recorded against the task representing the OdiSftp step in Operator Navigator.

Usage

OdiSftp -HOST=<ssh server host name> -USER=<ssh user>
[-PASSWORD=<ssh user password>] -LOCAL_DIR=<local dir>
-REMOTE_DIR=<remote dir on ssh host> [-PASSIVE_MODE=<yes|no>]
[-TIMEOUT=<time in seconds>] [-IDENTITY_FILE=<full path to private key file of user>] [-KNOWNHOSTS_FILE=<full path to known hosts file on local machine>]
[-COMPRESSION=<yes|no>] [-STRICT_HOSTKEY_CHECKING=<yes|no>]
[-PROXY_HOST=<proxy server host name>] [-PROXY_PORT=<proxy server port>]
[-PROXY_TYPE=<HTTP|SOCKS5>] [-STOP_ON_FTP_ERROR=<yes|no>]
-COMMAND=<command>

Parameters

Parameters | Mandatory | Description

-HOST=<ssh server host name>

Yes

Host name of the SSH server.

-USER=<ssh user>

Yes

User on the SSH server.

-PASSWORD=<ssh user password>

No

Password of the SSH user.

-LOCAL_DIR=<local dir>

Yes

Directory path on the local machine.

-REMOTE_DIR=<remote dir on ssh host>

Yes

Directory path on the remote SSH host.

-TIMEOUT=<time in seconds>

No

Time in seconds after which the socket connection times out.

-IDENTITY_FILE=<full path to private key file of user>

No

Private key file of the local user. If specified, public key authentication is performed. The -PASSWORD argument is used as the password for the password-protected private key file. If authentication fails, normal user password authentication is performed.

-KNOWNHOSTS_FILE=<full path to known hosts file on local machine>

No

Full path to the known hosts file on the local machine. The known hosts file contains host keys for all remote machines trusted by the user. If this argument is missing, the <user home dir>/.ssh/known_hosts file is used as the known hosts file if it exists.

-COMPRESSION=<yes|no>

No

If set to Yes, data compression is used. The default value is No.

-STRICT_HOSTKEY_CHECKING=<yes|no>

No

If set to Yes (default), strict host key checking is performed and authentication fails if the remote SSH host key is not present in the known hosts file specified in -KNOWNHOSTS_FILE.

-PROXY_HOST=<proxy server host name>

No

Host name of the proxy server to be used for the connection.

-PROXY_PORT=<proxy server port>

No

Port number of the proxy server.

-PROXY_TYPE=<HTTP|SOCKS5>

No

Type of proxy server you are connecting to, HTTP or SOCKS5.

-STOP_ON_FTP_ERROR=<yes|no>

No

If set to Yes (default), the step stops with an Error status if an error occurs rather than running to completion.

-COMMAND=<command>

Yes

Raw FTP command to execute. For a multiline command, pass the whole command as raw text after the OdiSftp line without the -COMMAND parameter.

Supported commands:

APPE, CDUP, CWD, DELE, LIST, MKD, NLST, PWD, QUIT, RETR, RMD, RNFR, RNTO, SIZE, STOR


Examples

Execute a script on a remote host that changes directory into a directory, deletes a file from the directory, changes directory into the parent directory, and removes the directory.

OdiSftp -HOST=machine.oracle.com -USER=odiftpuser -PASSWORD=<password>
-LOCAL_DIR=/tmp -REMOTE_DIR=/tmp -STOP_ON_FTP_ERROR=No
CWD /tmp/ftpToolDir1
DELE ftpToolFile
CDUP
RMD ftpToolDir1
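
Execute a single supported command with the -COMMAND parameter (a sketch; the directory name is hypothetical).

OdiSftp -HOST=machine.oracle.com -USER=odiftpuser -PASSWORD=<password>
-LOCAL_DIR=/tmp -REMOTE_DIR=/tmp "-COMMAND=MKD ftpToolDir2"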

OdiSftpGet

Use this command to download a file from an SSH server with an enabled SFTP subsystem.

Usage

OdiSftpGet -HOST=<ssh server host name> -USER=<ssh user>
[-PASSWORD=<ssh user password>] -REMOTE_DIR=<remote dir on ssh host>
[-REMOTE_FILE=<file name under REMOTE_DIR>] -LOCAL_DIR=<local dir>
[-LOCAL_FILE=<file name under LOCAL_DIR>] [-PASSIVE_MODE=<yes|no>]
[-TIMEOUT=<time in seconds>]
[-IDENTITY_FILE=<full path to private key file of user>]
[-KNOWNHOSTS_FILE=<full path to known hosts file on local machine>]
[-COMPRESSION=<yes|no>] [-STRICT_HOSTKEY_CHECKING=<yes|no>]
[-PROXY_HOST=<proxy server host name>] [-PROXY_PORT=<proxy server port>]
[-PROXY_TYPE=<HTTP|SOCKS5>]

Parameters

Parameters | Mandatory | Description

-HOST=<ssh server host name>

Yes

Host name of the SSH server.

You can add the port number to the host name by appending it after a colon (:). For example: machine.oracle.com:25

If no port is specified, port 22 is used by default.

-USER=<ssh user>

Yes

User on the SSH server.

-PASSWORD=<ssh user password>

No

Password of the SSH user.

-REMOTE_DIR=<remote dir on ssh host>

Yes

Directory path on the remote SSH host.

-REMOTE_FILE=<file name under -REMOTE_DIR>

No

File name under the directory specified in the -REMOTE_DIR argument. If this argument is missing, the file is copied with the -LOCAL_FILE file name. If the -LOCAL_FILE argument is also missing, the -REMOTE_DIR is copied recursively to the -LOCAL_DIR.

-LOCAL_DIR=<local dir>

Yes

Directory path on the local machine.

-LOCAL_FILE=<file name under LOCAL_DIR>

No

File name under the directory specified in the -LOCAL_DIR argument. If this argument is missing, the file is copied with the -REMOTE_FILE file name. If the -REMOTE_FILE argument is also missing, the -REMOTE_DIR is copied recursively to the -LOCAL_DIR.

To filter the files to be copied, use the * wildcard character.

Examples:

  • *.log (all files with the log extension)

  • arch_*.lst (all files starting with arch_ and with the extension lst)

-IDENTITY_FILE=<full path to private key file of user>

No

Private key file of the local user. If this argument is specified, public key authentication is performed. The -PASSWORD argument is used as the password for the password-protected private key file. If authentication fails, it falls back to normal user password authentication.

-KNOWNHOSTS_FILE=<full path to known hosts file on local machine>

No

The full path to the known hosts file on the local machine. The known hosts file contains the host keys of all remote machines the user trusts. If this argument is missing, the <user home dir>/.ssh/known_hosts file is used as the known hosts file if it exists.

-COMPRESSION=<yes|no>

No

If set to Yes, data compression is used. The default value is No.

-STRICT_HOSTKEY_CHECKING=<yes|no>

No

If set to Yes (default), strict host key checking is performed and authentication fails if the remote SSH host key is not present in the known hosts file specified in -KNOWNHOSTS_FILE.

-PROXY_HOST=<proxy server host name>

No

Host name of the proxy server to be used for the connection.

-PROXY_PORT=<proxy server port>

No

Port number of the proxy server.

-PROXY_TYPE=<HTTP|SOCKS5>

No

Type of proxy server you are connecting to, HTTP or SOCKS5.

-TIMEOUT=<time in seconds>

No

Time in seconds after which the socket connection times out.


Examples

Copy the remote directory /test_copy555 on the SSH server recursively to the local directory C:\temp\test_copy.

OdiSftpGet -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password> -LOCAL_DIR=C:\temp\test_copy -REMOTE_DIR=/test_copy555

Copy all files matching the Sales*.txt pattern under the remote directory / on the SSH server to the local directory C:\temp\.

OdiSftpGet -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password> -LOCAL_DIR=C:\temp -REMOTE_FILE=Sales*.txt -REMOTE_DIR=/

Copy the Sales1.txt file under the remote directory / on the SSH server to the local directory C:\temp\ as a Sample1.txt file.

OdiSftpGet -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password> -REMOTE_DIR=/ -REMOTE_FILE=Sales1.txt -LOCAL_DIR=C:\temp -LOCAL_FILE=Sample1.txt

Copy the Sales1.txt file under the remote directory / on the SSH server to the local directory C:\temp\ as a Sample1.txt file. Public key authentication is performed by providing the path to the identity file and the path to the known hosts file.

OdiSftpGet -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password>
-REMOTE_DIR=/ -REMOTE_FILE=Sales1.txt -LOCAL_DIR=C:\temp -LOCAL_FILE=Sample1.txt
-IDENTITY_FILE=C:\Documents and Settings\username\.ssh\id_dsa
-KNOWNHOSTS_FILE=C:\Documents and Settings\username\.ssh\known_hosts

Copy the Sales1.txt file under the remote directory / on the SSH server to the local directory C:\temp\ as a Sample1.txt file. Public key authentication is performed by providing the path to the identity file. All hosts are trusted by passing the No value to the -STRICT_HOSTKEY_CHECKING parameter.

OdiSftpGet -HOST=dev3 -USER=test_ftp -PASSWORD=<password>
-REMOTE_DIR=/ -REMOTE_FILE=Sales1.txt -LOCAL_DIR=C:\temp -LOCAL_FILE=Sample1.txt
-IDENTITY_FILE=C:\Documents and Settings\username\.ssh\id_dsa
-STRICT_HOSTKEY_CHECKING=NO
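
Connect to an SSH server listening on a nondefault port by appending the port number to the host name (a sketch; port 2222 is hypothetical).

OdiSftpGet -HOST=machine.oracle.com:2222 -USER=test_ftp -PASSWORD=<password>
-REMOTE_DIR=/ -REMOTE_FILE=Sales1.txt -LOCAL_DIR=C:\temp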

OdiSftpPut

Use this command to upload a file to an SSH server with the SFTP subsystem enabled.

Usage

OdiSftpPut -HOST=<ssh server host name> -USER=<ssh user>
[-PASSWORD=<ssh user password>] -LOCAL_DIR=<local dir>
[-LOCAL_FILE=<file name under LOCAL_DIR>] -REMOTE_DIR=<remote dir on ssh host>
[-REMOTE_FILE=<file name under REMOTE_DIR>] [-PASSIVE_MODE=<yes|no>]
[-TIMEOUT=<time in seconds>]
[-IDENTITY_FILE=<full path to private key file of user>]
[-KNOWNHOSTS_FILE=<full path to known hosts file on local machine>]
[-COMPRESSION=<yes|no>] [-STRICT_HOSTKEY_CHECKING=<yes|no>]
[-PROXY_HOST=<proxy server host name>] [-PROXY_PORT=<proxy server port>]
[-PROXY_TYPE=<HTTP|SOCKS5>]

Parameters

Parameters | Mandatory | Description

-HOST=<ssh server host name>

Yes

Host name of the SSH server.

You can add the port number to the host name by appending it after a colon (:). For example: machine.oracle.com:25

If no port is specified, port 22 is used by default.

-USER=<ssh user>

Yes

User on the SSH server.

-PASSWORD=<ssh user password>

No

Password of the SSH user or the passphrase of the password-protected identity file. If the -IDENTITY_FILE argument is provided, this value is used as the passphrase for the password-protected private key file. If public key authentication fails, it falls back to normal user password authentication.

-REMOTE_DIR=<remote dir on ssh host>

Yes

Directory path on the remote SSH host.

-REMOTE_FILE=<file name under -REMOTE_DIR>

No

File name under the directory specified in the -REMOTE_DIR argument. If this argument is missing, the file is copied with the -LOCAL_FILE file name. If the -LOCAL_FILE argument is also missing, the -LOCAL_DIR is copied recursively to the -REMOTE_DIR.

-LOCAL_DIR=<local dir>

Yes

Directory path on the local machine.

-LOCAL_FILE=<file name under LOCAL_DIR>

No

File name under the directory specified in the -LOCAL_DIR argument. If this argument is missing, all files and directories under the -LOCAL_DIR are copied recursively to the -REMOTE_DIR.

To filter the files to be copied, use the * wildcard character.

Examples:

  • *.log (all files with the log extension)

  • arch_*.lst (all files starting with arch_ and with the extension lst)

-IDENTITY_FILE=<full path to private key file of user>

No

Private key file of the local user. If this argument is specified, public key authentication is performed. The -PASSWORD argument is used as the password for the password-protected private key file. If authentication fails, it falls back to normal user password authentication.

-KNOWNHOSTS_FILE=<full path to known hosts file on local machine>

No

Full path to the known hosts file on the local machine. The known hosts file contains the host keys of all remote machines the user trusts. If this argument is missing, the <user home dir>/.ssh/known_hosts file is used as the known hosts file if it exists.

-COMPRESSION=<yes|no>

No

If set to Yes, data compression is used. The default value is No.

-STRICT_HOSTKEY_CHECKING=<yes|no>

No

If set to Yes (default), strict host key checking is performed and authentication fails if the remote SSH host key is not present in the known hosts file specified in -KNOWNHOSTS_FILE.

-PROXY_HOST=<proxy server host name>

No

Host name of the proxy server to be used for the connection.

-PROXY_PORT=<proxy server port>

No

Port number of the proxy server.

-PROXY_TYPE=<HTTP|SOCKS5>

No

Type of proxy server you are connecting to, HTTP or SOCKS5.

-TIMEOUT=<time in seconds>

No

Time in seconds after which the socket connection times out.


Examples

Copy the local directory C:\temp\test_copy recursively to the remote directory /test_copy555 on the SSH server.

OdiSftpPut -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password> -LOCAL_DIR=C:\temp\test_copy -REMOTE_DIR=/test_copy555

Copy all files matching the Sales*.txt pattern under the local directory C:\temp\ to the remote directory / on the SSH server.

OdiSftpPut -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password> -LOCAL_DIR=C:\temp -LOCAL_FILE=Sales*.txt -REMOTE_DIR=/

Copy the Sales1.txt file under the local directory C:\temp\ to the remote directory / on the SSH server as a Sample1.txt file.

OdiSftpPut -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password> -LOCAL_DIR=C:\temp -LOCAL_FILE=Sales1.txt -REMOTE_DIR=/ -REMOTE_FILE=Sample1.txt

Copy the Sales1.txt file under the local directory C:\temp\ to the remote directory / on the SSH server as a Sample1.txt file. Public key authentication is performed by providing the path to the identity file and the path to the known hosts file.

OdiSftpPut -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password>
-LOCAL_DIR=C:\temp -LOCAL_FILE=Sales1.txt -REMOTE_DIR=/ -REMOTE_FILE=Sample1.txt
-IDENTITY_FILE=C:\Documents and Settings\username\.ssh\id_dsa
-KNOWNHOSTS_FILE=C:\Documents and Settings\username\.ssh\known_hosts

Copy the Sales1.txt file under the local directory C:\temp\ to the remote directory / on the SSH server as a Sample1.txt file. Public key authentication is performed by providing the path to the identity file. All hosts are trusted by passing the No value to the -STRICT_HOSTKEY_CHECKING parameter.

OdiSftpPut -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password>
-LOCAL_DIR=C:\temp -LOCAL_FILE=Sales1.txt -REMOTE_DIR=/ -REMOTE_FILE=Sample1.txt
-IDENTITY_FILE=C:\Documents and Settings\username\.ssh\id_dsa
-STRICT_HOSTKEY_CHECKING=NO

OdiSleep

Use this command to wait for <delay> milliseconds.

Usage

OdiSleep -DELAY=<delay>

Parameters

Parameters | Mandatory | Description

-DELAY=<delay>

Yes

Number of milliseconds to wait.


Examples

OdiSleep -DELAY=5000

OdiSqlUnload

Use this command to write the result of a SQL query to a file.

This command executes the SQL query <sql_query> on the data server whose connection parameters are provided by <driver>, <url>, <user>, and <encoded_pass>. The resulting result set is written to <file_name>.

Usage

OdiSqlUnload -FILE=<file_name> -DRIVER=<driver> -URL=<url> -USER=<user>
-PASS=<password> [-FILE_FORMAT=<file_format>] [-FIELD_SEP=<field_sep> |
-XFIELD_SEP=<field_sep>] [-ROW_SEP=<row_sep> | -XROW_SEP=<row_sep>]
[-DATE_FORMAT=<date_format>] [-CHARSET_ENCODING=<encoding>]
[-XML_CHARSET_ENCODING=<encoding>] [-FETCH_SIZE=<array_fetch_size>]
( CR/LF <sql_query> | -QUERY=<sql_query> | -QUERY_FILE=<sql_query_file> )

Parameters

Parameters | Mandatory | Description

-FILE=<file_name>

Yes

Full path to the output file, relative to the execution agent.

-DRIVER=<driver>

Yes

Name of the JDBC driver used to connect to the data server.

-URL=<url>

Yes

JDBC URL to the data server.

-USER=<user>

Yes

Login of the user on the data server that will be used to run the SQL query.

-PASS=<password>

Yes

Encrypted password for the login to the data server. This password can be encrypted with the system command encode <clear_text_password>.

Note that the encode command is provided by the agent script (agent.bat or agent.sh), located in the /bin subdirectory of your Oracle Data Integrator installation directory.

-FILE_FORMAT=<file_format>

No

Specifies the file format with one of the following three values:

  • fixed: Fixed size recording

  • variable: Variable size recording

  • xml: XML file

If <file_format> is not specified, the format defaults to variable.

If <file_format> is xml, the XML nodes generated have the following structure:

<TABLE>

<ROW>

<column_name><![CDATA[VALUE]]></column_name>

<column_name><![CDATA[VALUE]]></column_name>

...

</ROW>

....

</TABLE>

-FIELD_SEP=<field_sep>

No

Field separator character in ASCII format if -FILE_FORMAT=variable. The default <field_sep> is a tab character.

-XFIELD_SEP=<field_sep>

No

Field separator character in hexadecimal format if -FILE_FORMAT=variable. The default <field_sep> is a tab character.

-ROW_SEP=<row_sep>

No

Record separator character in ASCII format. The default <row_sep> is a Windows carriage return. For instance, the following values can be used:

  • UNIX: -ROW_SEP=\n

  • Windows: -ROW_SEP=\r\n

-XROW_SEP=<row_sep>

No

Record separator character in hexadecimal format. Example: 0A.

-DATE_FORMAT=<date_format>

No

Output format used for date datatypes. This date format is specified using the Java date and time format patterns. For a list of these patterns, see: http://java.sun.com/j2se/1.4.2/docs/api/java/text/SimpleDateFormat.html.

-CHARSET_ENCODING=<encoding>

No

Target file encoding. The default value is ISO-8859-1. For the list of supported encodings, see:

http://java.sun.com/j2se/1.4.2/docs/guide/intl/encoding.doc.html

-XML_CHARSET_ENCODING=<encoding>

No

Encoding specified in the XML file, in the tag <?xml version="1.0" encoding="ISO-8859-1"?>. The default value is ISO-8859-1. For the list of supported encodings, see:

http://java.sun.com/j2se/1.4.2/docs/guide/intl/encoding.doc.html

-FETCH_SIZE=<array_fetch_size>

No

Number of rows (records read) requested by Oracle Data Integrator in each communication with the data server.

CR/LF <sql_query> | -QUERY=<sql_query> | -QUERY_FILE=<sql_query_file>

Yes

SQL query to execute on the data server. The query must be a SELECT statement or a call to a stored procedure returning a valid recordset. This query can be entered on the line following the OdiSqlUnload command (a carriage return - CR/LF - indicates the beginning of the query). The query can be provided within the -QUERY parameter, or stored in a file specified with the -QUERY_FILE parameter. The -QUERY or -QUERY_FILE parameters must be used when calling this command from an OS command line.


Examples

Generate the file C:\temp\clients.csv separated by ; containing the result of the query on the Customers table.

OdiSqlUnload -FILE=C:\temp\clients.csv -DRIVER=sun.jdbc.odbc.JdbcOdbcDriver
-URL=jdbc:odbc:NORTHWIND_ODBC -USER=sa
-PASS=NFNEKKNGGJHAHBHDHEHJDBGBGFDGGH -FIELD_SEP=;
"-DATE_FORMAT=dd/MM/yyyy hh:mm:ss"

select cust_id, cust_name, cust_creation_date from Northwind.dbo.Customers
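
The same unload can be expressed with the -QUERY parameter, which is required when calling the command from an operating system command line (a sketch; the quoting is needed because the query contains spaces):

OdiSqlUnload -FILE=C:\temp\clients.csv -DRIVER=sun.jdbc.odbc.JdbcOdbcDriver
-URL=jdbc:odbc:NORTHWIND_ODBC -USER=sa
-PASS=NFNEKKNGGJHAHBHDHEHJDBGBGFDGGH -FIELD_SEP=;
"-QUERY=select cust_id, cust_name, cust_creation_date from Northwind.dbo.Customers"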

OdiStartLoadPlan

Use this command to start a Load Plan.

The -SYNC parameter specifies whether the load plan is started in synchronous or asynchronous mode. In synchronous mode, the tool ends with the same status as the completed load plan run.

Usage

OdiStartLoadPlan -LOAD_PLAN_NAME=<load_plan_name> [-LOG_LEVEL=<log_level>]
[-CONTEXT=<context_code>] [-AGENT_URL=<agent_url>]
[-AGENT_CODE=<logical_agent_code>] [-ODI_USER=<ODI User>] 
[-ODI_PASS=<ODI Password>] [-KEYWORDS=<Keywords>]
[-<PROJECT_CODE>.<VARIABLE>=<var_value> ...] [-SYNC=<yes|no>] [-POLLINT=<msec>]

Parameters

Parameters | Mandatory | Description

-LOAD_PLAN_NAME=<load_plan_name>

Yes

Name of the load plan to start.

-LOG_LEVEL=<log_level>

No

Level of logging information to retain. All sessions with a defined log level lower than or equal to this value are kept in the session log when the session completes. However, if object execution ends abnormally, all tasks are kept, regardless of this setting.

Note that log level 6 has the same behavior as log level 5, but with the addition of variable and sequence tracking. See "Tracking Variables and Sequences" for more information.

-CONTEXT=<context_code>

No

Code of the execution context. If this parameter is omitted, the load plan starts in the execution context of the calling session, if any.

-AGENT_URL=<agent_url>

No

URL of the remote agent that starts the load plan.

-AGENT_CODE=<logical_agent_code>

No

Code of the logical agent responsible for starting this load plan. If this parameter and -AGENT_URL are omitted, the current agent starts this load plan. This parameter is ignored if -AGENT_URL is specified.

-ODI_USER=<ODI user>

No

Oracle Data Integrator user to be used to start the load plan. The privileges of this user are used. If this parameter is omitted, the load plan is started with privileges of the user launching the parent session.

-ODI_PASS=<ODI Password>

No

Password of the Oracle Data Integrator user. This password must be encoded. This parameter is required if -ODI_USER is specified.

-KEYWORDS=<keywords>

No

Comma-separated list of keywords attached to this load plan. These keywords make load plan execution identification easier.

-<VARIABLE>=<value>

No

List of project or global variables whose value is set as the default for the execution of the load plan. Project variables should be named <project_code>.<variable_name> and global variables should be named GLOBAL.<variable_name>. This list is of the form -<variable>=<value>.

-SYNC=<yes|no>

No

Specifies whether the load plan should be executed synchronously or asynchronously.

If set to Yes (synchronous mode), the load plan is started and runs to completion with a status of Done or Error before control is returned.

If set to No (asynchronous mode), the load plan is started and control is returned before the load plan runs to completion. The default value is No.

-POLLINT=<msec>

No

The time in milliseconds to wait between polling the load plan run status for completion state. The -SYNC parameter must be set to Yes. The default value is 1000 (1 second). The value must be greater than 0.


Examples

Start load plan LOAD_DWH in the GLOBAL context on the same agent.

OdiStartLoadPlan -LOAD_PLAN_NAME=LOAD_DWH -CONTEXT=GLOBAL
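
As a sketch of a synchronous start (the project code MY_PROJECT and the variable START_DATE are hypothetical), start the load plan LOAD_DWH in the GLOBAL context, poll its status every 2 seconds, and pass a default value for a project variable:

OdiStartLoadPlan -LOAD_PLAN_NAME=LOAD_DWH -CONTEXT=GLOBAL -SYNC=yes
-POLLINT=2000 -MY_PROJECT.START_DATE=10-APR-2002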

OdiStartOwbJob

Use this command to execute Oracle Warehouse Builder (OWB) objects from within Oracle Data Integrator and to retrieve the execution audit data into Oracle Data Integrator.

This command uses an Oracle Warehouse Builder runtime repository data server that can be created in Topology Navigator. This data server must connect as an Oracle Warehouse Builder user who can access an Oracle Warehouse Builder workspace. The physical schemas under this data server represent the Oracle Warehouse Builder workspaces that this user can access. For information about the Oracle Data Integrator topology, see "Setting Up the Topology".

Usage

OdiStartOwbJob -WORKSPACE=<logical_owb_repository> -LOCATION=<owb_location>
-OBJECT_NAME=<owb_object> -OBJECT_TYPE=<owb_object_type>
[-EXEC_PARAMS=<exec_params>] [-CONTEXT=<context_code>] [-LOG_LEVEL=<log_level>]
[-SYNC_MODE=<1|2>] [-POLLINT=<n>] [-SESSION_NAME=<session_name>]
[-KEYWORDS=<keywords>] [<OWB parameters>]

Parameters

Parameters | Mandatory | Description

-WORKSPACE=<logical_owb_repository>

Yes

Logical schema of the OWB Runtime Repository technology. This resolves to a physical schema that represents the Oracle Warehouse Builder workspace that contains the Oracle Warehouse Builder object to be executed. The Oracle Warehouse Builder workspace was chosen when you added a Physical Schema under the OWB Runtime Repository DataServer in Topology Navigator.

The context for this mapping can also be specified using the -CONTEXT parameter.

-LOCATION=<owb_location>

Yes

Name of the Oracle Warehouse Builder location that contains the Oracle Warehouse Builder object to be executed. This location must exist in the physical workspace that resolves from -WORKSPACE.

-OBJECT_NAME=<owb_object>

Yes

Name of the Oracle Warehouse Builder object. This object must exist in -LOCATION.

-OBJECT_TYPE=<owb_object_type>

Yes

Type of Oracle Warehouse Builder object, for example:

PLSQLMAP, PROCESSFLOW, SQLLOADERCONTROLFILE, MAPPING, DATAAUDITOR, ABAPFILE

-EXEC_PARAMS=<exec_params>

No

Custom and/or system parameters for the Oracle Warehouse Builder execution.

-CONTEXT=<context_code>

No

Execution context of the Oracle Warehouse Builder object. This is the context in which the logical workspace will be resolved. Studio editors use this value or the Default Context. Execution uses this value or the Parent Session context.

-LOG_LEVEL=<log_level>

No

Log level (0-5). The default value is 5, which means that maximum details are captured in the log.

-SYNC_MODE=<1|2>

No

Synchronization mode of the Oracle Warehouse Builder job:

1 - Synchronous (default). Execution of the session waits until the Oracle Warehouse Builder job terminates.

2 - Asynchronous. Execution of the session continues without waiting for the Oracle Warehouse Builder job to terminate.

-POLLINT=<n>

No

The period of time in milliseconds to wait between each transfer of Oracle Warehouse Builder audit data to Oracle Data Integrator log tables. The default value is 0, which means that audit data is transferred at the end of the execution.

-SESSION_NAME=<session_name>

No

Name of the Oracle Warehouse Builder session as it appears in the log.

-KEYWORDS=<keywords>

No

Comma-separated list of keywords attached to the session.

<OWB parameters>

No

List of values for the Oracle Warehouse Builder parameters relevant to the object. This list is of the form -PARAM_NAME=value. Oracle Warehouse Builder system parameters should be prefixed by OWB_SYSTEM, for example, OWB_SYSTEM.AUDIT_LEVEL.


Examples

Execute the Oracle Warehouse Builder process flow LOAD_USERS that has been deployed to the Oracle Workflow DEV_OWF.

OdiStartOwbJob -WORKSPACE=OWB_WS1 -CONTEXT=QA
-LOCATION=DEV_OWF -OBJECT_NAME=LOAD_USERS -OBJECT_TYPE=PROCESSFLOW

Execute the Oracle Warehouse Builder PL/SQL map STAGE_USERS that has been deployed to the database location DEV_STAGE. Poll and transfer the Oracle Warehouse Builder audit data every 5 seconds. Pass the input parameter AGE_LIMIT whose value is obtained from an Oracle Data Integrator variable, and specify an Oracle Warehouse Builder system parameter relevant to a PL/SQL map.

OdiStartOwbJob -WORKSPACE=OWB_WS1 -CONTEXT=QA
-LOCATION=DEV_STAGE -OBJECT_NAME=STAGE_USERS -OBJECT_TYPE=PLSQLMAP
-POLLINT=5000 -OWB_SYSTEM.MAX_NO_OF_ERRORS=25 -AGE_LIMIT=#VAR_MINAGE

OdiStartScen

Use this command to start a scenario.

The optional -AGENT_CODE parameter is used to delegate execution of this scenario to an agent other than the current agent.

The -SYNC_MODE parameter specifies whether the scenario is started in synchronous or asynchronous mode.


Note:

The scenario that is started must be present in the repository from which the command is launched. If you go to production with a scenario, make sure to also include all scenarios called by your scenario using this command. Solutions can help you group scenarios for this purpose.


Usage

OdiStartScen -SCEN_NAME=<scenario> -SCEN_VERSION=<version>
[-CONTEXT=<context>] [-ODI_USER=<odi user> -ODI_PASS=<odi password>]
[-SESSION_NAME=<session_name>] [-LOG_LEVEL=<log_level>]
[-AGENT_CODE=<logical_agent_name>] [-SYNC_MODE=<1|2>]
[-KEYWORDS=<keywords>] [-<VARIABLE>=<value>]*

Parameters

Parameters | Mandatory | Description

-SCEN_NAME=<scenario>

Yes

Name of the scenario to start.

-SCEN_VERSION=<version>

Yes

Version of the scenario to start. If the version specified is -1, the last version of the scenario is executed.

-CONTEXT=<context>

No

Code of the execution context. If this parameter is omitted, the scenario is executed in the execution context of the calling session.

-ODI_USER=<odi user>

No

Oracle Data Integrator user to be used to run the scenario. The privileges of this user are used. If this parameter is omitted, the scenario is executed with privileges of the user launching the parent session.

-ODI_PASS=<odi password>

No

Password of the Oracle Data Integrator user. This password must be encoded. This parameter is required if -ODI_USER is specified.

-SESSION_NAME=<session_name>

No

Name of the session that will appear in the execution log.

-LOG_LEVEL=<log_level>

No

Trace level (0 .. 5) to keep in the execution log. The default value is 5.

-AGENT_CODE=<logical_agent_name>

No

Name of the logical agent responsible for executing this scenario. If this parameter is omitted, the current agent executes this scenario.

-SYNC_MODE=<1|2>

No

Synchronization mode of the scenario:

1 - Synchronous mode (default). The execution of the calling session is blocked until the scenario finishes its execution.

2 - Asynchronous mode. The execution of the calling session continues independently from the return of the called scenario.

-KEYWORDS=<keywords>

No

Comma-separated list of keywords attached to this session. These keywords make session identification easier.

-<VARIABLE>=<value>

No

List of variables whose value is set for the execution of the scenario. This list is of the form PROJECT.VARIABLE=value or GLOBAL.VARIABLE=value.


Examples

Start the scenario LOAD_DWH in version 2 in the production context (synchronous mode).

OdiStartScen -SCEN_NAME=LOAD_DWH -SCEN_VERSION=2
-CONTEXT=CTX_PRODUCTION

Start the scenario LOAD_DWH in version 2 in the current context in asynchronous mode on the agent UNIX Agent while passing the values of the variables START_DATE (local) and COMPANY_CODE (global).

OdiStartScen -SCEN_NAME=LOAD_DWH -SCEN_VERSION=2 -SYNC_MODE=2
"-AGENT_CODE=UNIX Agent" -MY_PROJECT.START_DATE=10-APR-2002
-GLOBAL.COMPANY_CODE=SP4356
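
Start the latest checked-in version of the scenario LOAD_DWH by passing -1 as the version number (the session name is illustrative):

OdiStartScen -SCEN_NAME=LOAD_DWH -SCEN_VERSION=-1
-SESSION_NAME=LOAD_DWH_LATEST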

OdiUnZip

Use this command to extract an archive file to a directory.

Usage

OdiUnZip -FILE=<file> -TODIR=<target_directory> [-OVERWRITE=<yes|no>]
[-ENCODING=<file_name_encoding>]

Parameters

Parameters | Mandatory | Description

-FILE=<file>

Yes

Full path to the ZIP file to extract.

-TODIR=<target_directory>

Yes

Destination directory or folder.

-OVERWRITE=<yes|no>

No

Indicates if the files that already exist in the target directory must be overwritten. The default value is No.

-ENCODING=<file_name_encoding>

No

Character encoding used for file names inside the archive file. For a list of possible values, see:

http://java.sun.com/j2se/1.4.2/docs/guide/intl/encoding.doc.html

Defaults to the platform's default character encoding.


Examples

Extract the file archive_001.zip from directory C:\archive\ into directory C:\TEMP.

OdiUnZip "-FILE=C:\archive\archive_001.zip" -TODIR=C:\TEMP\

OdiUpdateAgentSchedule

Use this command to force an agent to recalculate its schedule of tasks.

Usage

OdiUpdateAgentSchedule -AGENT_NAME=<physical_agent_name>

Parameters

Parameters | Mandatory | Description

-AGENT_NAME=<physical_agent_name>

Yes

Name of the physical agent to update.


Examples

Cause the physical agent agt_s1 to update its schedule.

OdiUpdateAgentSchedule -AGENT_NAME=agt_s1

OdiWaitForChildSession

Use this command to wait for the child sessions (started using the OdiStartScen tool) of the current session to complete.

This command checks every <polling_interval> seconds whether the sessions launched from <parent_sess_number> have finished. If all child sessions (possibly filtered by their name and keywords) are finished (status of Done, Warning, or Error), this command terminates.

Usage

OdiWaitForChildSession [-PARENT_SESS_NO=<parent_sess_number>]
[-POLL_INT=<polling_interval>] 
[-SESSION_NAME_FILTER=<session_name_filter>]
[-SESSION_KEYWORDS=<session_keywords>]
[-MAX_CHILD_ERROR=ALL|<error_number>]

Parameters

Parameters | Mandatory | Description

-PARENT_SESS_NO=<parent_sess_number>

No

ID of the parent session. If this parameter is not specified, the current session ID is used.

-POLL_INT=<polling_interval>

No

Interval in seconds between each sequence of termination tests for the child sessions. The default value is 1.

-SESSION_NAME_FILTER=<session_name_filter>

No

Only child sessions whose names match this filter are tested. This filter can be a SQL LIKE-formatted pattern.

-SESSION_KEYWORDS=<session_keywords>

No

Only child sessions for which ALL keywords have a match in this comma-separated list are tested. Each element of the list can be a SQL LIKE-formatted pattern.

-MAX_CHILD_ERROR= ALL|<error_number>

No

This parameter enables OdiWaitForChildSession to terminate in error if a number of child sessions have terminated in error:

  • ALL: Error if all child sessions terminate in error.

  • <error_number>: Error if <error_number> or more child sessions terminate in error.

If this parameter is equal to 0, negative, or not specified, OdiWaitForChildSession never terminates in an error status, regardless of the number of failing child sessions.


Examples

Wait and poll every 5 seconds for all child sessions of the current session with a name filter of LOAD% and keywords MANDATORY and CRITICAL to finish.

OdiWaitForChildSession -PARENT_SESS_NO=<%=odiRef.getSession("SESS_NO")%>
-POLL_INT=5 -SESSION_NAME_FILTER=LOAD%
-SESSION_KEYWORDS=MANDATORY,CRITICAL
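
A variant sketch: perform the same wait, but terminate OdiWaitForChildSession in error if two or more child sessions end in error:

OdiWaitForChildSession -PARENT_SESS_NO=<%=odiRef.getSession("SESS_NO")%>
-POLL_INT=5 -SESSION_NAME_FILTER=LOAD%
-SESSION_KEYWORDS=MANDATORY,CRITICAL -MAX_CHILD_ERROR=2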

OdiWaitForData

Use this command to wait for a number of rows in a table or set of tables. It can also be applied to other objects containing data, such as views.

The OdiWaitForData command tests that a table, or a set of tables, has been populated with a number of records. This test is repeated at regular intervals (-POLLINT) until one of the following conditions is met: the desired number of rows for one of the tables has been detected (-UNIT_ROWCOUNT), the desired cumulative number of rows for all of the tables has been detected (-GLOBAL_ROWCOUNT), or a timeout (-TIMEOUT) has been reached.

Filters may be applied to the set of counted rows. They are specified by an explicit SQL WHERE clause (-SQLFILTER) and/or by the -RESUME_KEY_xxx parameters, which build a column-operator-value clause. These two methods are cumulative (combined with AND).

The row count may be considered either in absolute terms (with respect to the total number of rows in the table) or in differential terms (the difference between a stored reference value and the current row count value).

When dealing with multiple tables:

  • The -SQLFILTER and -RESUME_KEY_xxx parameters apply to ALL tables concerned.

  • The -UNIT_ROWCOUNT parameter determines the row count to be expected for each table. The -GLOBAL_ROWCOUNT parameter determines the SUM of the row count number cumulated over the set of tables. When only one table is concerned, the -UNIT_ROWCOUNT and -GLOBAL_ROWCOUNT parameters are equivalent.

Usage

OdiWaitForData -LSCHEMA=<logical_schema> -TABLE_NAME=<table_name>
[-OBJECT_TYPE=<list of object types>] [-CONTEXT=<context>]
[-RESUME_KEY_VARIABLE=<resumeKeyVariable> 
-RESUME_KEY_COL=<resumeKeyCol>
[-RESUME_KEY_OPERATOR=<resumeKeyOperator>]|-SQLFILTER=<SQLFilter>]
[-TIMEOUT=<timeout>] [-POLLINT=<pollInt>] 
[-GLOBAL_ROWCOUNT=<globalRowCount>]
[-UNIT_ROWCOUNT=<unitRowCount>] [-TIMEOUT_WITH_ROWS_OK=<yes|no>]
[-INCREMENT_DETECTION=<no|yes> [-INCREMENT_MODE=<M|P|I>]
[-INCREMENT_SEQUENCE_NAME=<incrementSequenceName>]]

Parameters

Parameters | Mandatory | Description

-LSCHEMA=<logical_schema>

Yes

Logical schema containing the tables.

-TABLE_NAME=<table_name>

Yes

Table name, mask, or list of table names to check. This parameter accepts three formats:

  • Table Name

  • Table Name Mask: This mask selects the tables to poll. The mask is specified using the SQL LIKE syntax: the % symbol replaces an unspecified number of characters and the _ symbol is a single character wildcard.

  • Table Names List: Comma-separated list of table names. Masks as defined above are allowed.

-OBJECT_TYPE=<list of object types>

No

Type of objects to check. By default, only tables are checked. To take into account other objects, specify a comma-separated list of object types. Supported object types are:

  • T: Table

  • V: View

-CONTEXT=<context>

No

Context in which the logical schema will be resolved. If no context is specified, the execution context is used.

-SQLFILTER=<SQLFilter>

No

Explicit SQL filter to be applied to the table(s). This statement must be valid for the technology containing the checked tables.

Note that this statement must not include the WHERE keyword.

-RESUME_KEY_VARIABLE=<resumeKeyVariable>

-RESUME_KEY_COL=<resumeKeyCol>

[-RESUME_KEY_OPERATOR=<resumeKeyOperator>]

No

The -RESUME_KEY_xxx parameters enable filtering of the set of counted rows in the polled tables.

  • <resumeKeyCol>: Name of a column in the checked table.

  • <resumeKeyOperator>: Valid comparison operator for the technology containing the checked tables. If this parameter is omitted, the operator > is used by default.

  • <resumeKeyVariable>: Name of a variable whose value has been previously set. The variable name must be prefixed with : (bind) or # (substitution). The variable scope should be explicitly stated in the Oracle Data Integrator syntax: GLOBAL.<variable name> for global variables or <project code>.<variable name> for project variables.

-TIMEOUT=<timeout>

No

Maximum period of time in milliseconds over which data is polled. If this value is equal to 0, the timeout is infinite. The default value is 0.

-POLLINT=<pollInt>

No

The period of time in milliseconds to wait between data polls. The default value is 1000.

-UNIT_ROWCOUNT=<unitRowCount>

No

Number of rows expected in a polled table to terminate the command. The default value is 1.

-GLOBAL_ROWCOUNT=<globalRowCount>

No

Total number of rows expected cumulatively, over the set of tables, to terminate the command. If not specified, the default value 1 is used.

-INCREMENT_DETECTION=<no|yes>

No

Defines the mode in which the command considers row count: either in absolute terms (with respect to the total number of rows in the table) or in differential terms (the difference between a stored reference value and the current row count value).

  • If set to Yes, the row count is performed in differential mode. The number of additional rows in the table is compared to a stored reference value. The reference value depends on the -INCREMENT_MODE parameter.

  • If set to No, the count is performed in absolute row count mode.

The default value is No.

-INCREMENT_MODE=<M|P|I>

No

This parameter specifies the persistence mode of the reference value between successive OdiWaitForData calls.

Possible values are:

  • M: Memory. The reference value is nonpersistent. When OdiWaitForData is called, the reference value is set to the current number of rows in the polled table. When OdiWaitForData ends, the value is lost. A subsequent call in this mode sets a new reference value.

  • P: Persistent. The reference value is persistent. It is read from the increment sequence when OdiWaitForData starts and saved to the increment sequence when OdiWaitForData ends. If the increment sequence is not set (at initial call time), the current table row count is used.

  • I: Initial. The reference value is initialized and persistent. When OdiWaitForData starts, the reference value is set to the current number of rows in the polled table. When OdiWaitForData ends, it is saved to the increment sequence, as in persistent mode.

The default value is M.

Note that using the Persistent or Initial modes is not supported when a mask or list of tables is polled.

-INCREMENT_SEQUENCE_NAME=<incrementSequenceName>

No

This parameter specifies the name of an automatically allocated storage space used for reference value persistence. This increment sequence is stored in the Repository. If this name is not specified, it takes the name of the table.

Note that this Increment Sequence is not an Oracle Data Integrator Sequence and cannot be used as such outside a call to OdiWaitForData.

-TIMEOUT_WITH_ROWS_OK=<yes|no>

No

If this parameter is set to Yes and at least one row was detected, the API exits with a return code of 0 even if the timeout occurs before the expected number of rows has been inserted. Otherwise, it signals an error. The default value is Yes.


Examples

Wait for the DE1P1 table in the ORA_WAITFORDATA schema to contain 200 records matching the filter.

OdiWaitForData -LSCHEMA=ORA_WAITFORDATA -TABLE_NAME=DE1P1
-GLOBAL_ROWCOUNT=200 "-SQLFILTER=DATMAJ >
to_date('#MAX_DE1_DATMAJ_ORACLE_CHAR', 'DD/MM/YYYY HH24:MI:SS')"
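
The same kind of filter can be sketched with the -RESUME_KEY_xxx parameters instead of -SQLFILTER (the DATMAJ column and the global variable MAX_DE1_DATMAJ are hypothetical); the default operator > applies:

OdiWaitForData -LSCHEMA=ORA_WAITFORDATA -TABLE_NAME=DE1P1
-GLOBAL_ROWCOUNT=200 -RESUME_KEY_COL=DATMAJ
-RESUME_KEY_VARIABLE=#GLOBAL.MAX_DE1_DATMAJ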

Wait for a maximum of 4 hours for new data to appear in either the CITY_SRC or the CITY_TRG table in the logical schema SQLSRV_SALES.

OdiWaitForData -LSCHEMA=SQLSRV_SALES -TABLE_NAME=CITY%
-TIMEOUT=14400000 -INCREMENT_DETECTION=yes
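
A sketch of differential counting with a persistent reference value (the increment sequence name CITY_SRC_REF is hypothetical). A single table is polled because the Persistent and Initial modes do not support masks or table lists:

OdiWaitForData -LSCHEMA=SQLSRV_SALES -TABLE_NAME=CITY_SRC
-GLOBAL_ROWCOUNT=100 -INCREMENT_DETECTION=yes -INCREMENT_MODE=P
-INCREMENT_SEQUENCE_NAME=CITY_SRC_REF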

OdiWaitForLoadPlans

Use this command to wait for load plan runs to complete.

Usage

OdiWaitForLoadPlans [-PARENT_SESS_NO=<parent_sess_guid>]
[-LP_NAME_FILTER=<load_plan_name_filter>] [-LP_KEYWORDS=<load_plan_keywords>]
[-MAX_LP_ERROR=ALL|<number_of_lp_errors>] [-POLLINT=<polling_interval_msec>]

Parameters

Parameters | Mandatory | Description

-PARENT_SESS_NO=<parent_sess_guid>

No

Global ID of the parent session that started the load plan. If this parameter is not specified, the global ID of the current session is used.

-LP_NAME_FILTER=<load_plan_name_filter>

No

Only load plan runs whose name matches this filter are tested for completion status. This filter can be a SQL LIKE-formatted pattern.

-LP_KEYWORDS=<load_plan_keywords>

No

Only load plan runs whose keywords contain all entries in this comma-separated list are tested for completion status. Each element in the list can be a SQL LIKE-formatted pattern.

-MAX_LP_ERROR=ALL|<number_of_lp_errors>

No

OdiWaitForLoadPlans terminates in error if a number of load plan runs are in Error status:

  • ALL: Error if all load plan runs complete in Error status.

  • <number_of_lp_errors>: Error if the number of load plan runs in Error status is at or above <number_of_lp_errors> when all load plan runs are complete.

If this parameter is not specified or its value is less than 1, OdiWaitForLoadPlans never terminates in error, regardless of the number of load plan runs in Error status.

-POLLINT=<polling_interval_msec>

No

The time in milliseconds to wait between polling load plan run status for completion state. The default value is 1000 (1 second). The value must be greater than 0.


Examples

Wait and poll every 5 seconds for all load plan runs started by the current session with a name filter of POPULATE% and keywords MANDATORY and CRITICAL to finish in a Done or Error status. If 2 or more load plan runs are in Error status when execution is complete for all selected load plan runs, OdiWaitForLoadPlans ends in error.

OdiWaitForLoadPlans -PARENT_SESS_NO=<%=odiRef.getSession("SESS_GUID")%>
-LP_NAME_FILTER=POPULATE% -LP_KEYWORDS=MANDATORY,CRITICAL
-POLLINT=5000 -MAX_LP_ERROR=2

OdiWaitForLogData

Use this command to wait for a number of modifications to occur on a journalized table or a list of journalized tables.

The OdiWaitForLogData command determines whether rows have been modified on a table or a group of tables. These changes are detected using the Oracle Data Integrator changed data capture (CDC) in simple mode (using the -TABLE_NAME parameter) or in consistent mode (using the -CDC_SET_NAME parameter). The test is repeated every -POLLINT milliseconds until one of the following conditions is met: the desired number of row modifications for one of the tables has been detected (-UNIT_ROWCOUNT), the desired cumulative number of row modifications for all of the tables has been detected (-GLOBAL_ROWCOUNT), or a timeout (-TIMEOUT) has been reached.


Note:

This command takes into account all journalized operations (inserts, updates and deletes).

This command works only with tables journalized in simple or consistent mode.


Usage

OdiWaitForLogData -LSCHEMA=<logical_schema>  -SUBSCRIBER_NAME=<subscriber_name>
(-TABLE_NAME=<table_name> | -CDC_SET_NAME=<cdcSetName>)
[-CONTEXT=<context>] [-TIMEOUT=<timeout>] [-POLLINT=<pollInt>]
[-GLOBAL_ROWCOUNT=<globalRowCount>] 
[-UNIT_ROWCOUNT=<unitRowCount>] [-OPTIMIZED_WAIT=<yes|no|AUTO>]
[-TIMEOUT_WITH_ROWS_OK=<yes|no>]

Parameters

Parameters | Mandatory | Description

-CONTEXT=<context>

No

Context in which the logical schema will be resolved. If no context is specified, the execution context is used.

-GLOBAL_ROWCOUNT=<globalRowCount>

No

Total number of changes expected in the tables or the CDC set to end the command. The default value is 1.

-LSCHEMA=<logical_schema>

Yes

Logical schema containing the journalized tables.

-OPTIMIZED_WAIT=<yes|no|AUTO>

No

Method used to access the journals.

  • yes: Optimized method. This method works for later versions of journalizing. It runs faster than the nonoptimized mode.

  • no: Nonoptimized method. A count is performed on the journalizing table. This method is of lower performance but compatible with earlier versions of the journalizing feature.

  • AUTO: If more than one table is checked, the optimized method is used. Otherwise, the nonoptimized method is used.

The default value is AUTO.

-POLLINT=<pollInt>

No

The period of time in milliseconds to wait between polls. The default value is 2000.

-SUBSCRIBER_NAME=<subscriber_name>

Yes

Name of the subscriber used to get the journalizing information.

-TABLE_NAME=<table_name>

Yes if -CDC_SET_NAME is omitted

Journalized table name, mask, or list to check. This parameter accepts three formats:

  • Table Name

  • Table Name Mask: This mask selects the tables to poll. The mask is specified using the SQL LIKE syntax: the % symbol replaces an unspecified number of characters and the _ symbol is a single character wildcard.

  • Table Names List: List of table names separated by commas. Masks as defined above are not allowed.

Note that this option works only for tables in a model journalized in simple mode.

This parameter cannot be used with -CDC_SET_NAME. It is mandatory if -CDC_SET_NAME is not set.

-CDC_SET_NAME=<cdcSetName>

Yes if -TABLE_NAME is omitted

Name of the CDC set to check. This CDC set name is the fully qualified model code, typically PHYSICAL_SCHEMA_NAME.MODEL_CODE.

It can be obtained in the current context using a substitution method API call, as shown below: <%=odiRef.getObjectName("L", "model_code", "logical_schema", "D")%>.

Note that this option works only for tables in a model journalized in consistent mode.

This parameter cannot be used with -TABLE_NAME. It is mandatory if -TABLE_NAME is not set.

-TIMEOUT=<timeout>

No

Maximum period of time in milliseconds over which changes are polled. If this value is equal to 0, the timeout is infinite. The default value is 0.

-TIMEOUT_WITH_ROWS_OK=<yes|no>

No

If this parameter is set to Yes and at least one row was detected, the API exits with a return code of 0 even if the timeout occurs before the expected number of changes has been polled. Otherwise, it signals an error. The default value is Yes.

-UNIT_ROWCOUNT=<unitRowCount>

No

Number of changes expected in one of the polled tables to end the command. The default value is 1.

Note that -UNIT_ROWCOUNT is not taken into account with -CDC_SET_NAME.


Examples

Wait for the CUSTOMERS table in the SALES_APPLICATION schema to have 200 row modifications recorded for the SALES_SYNC subscriber.

OdiWaitForLogData -LSCHEMA=SALES_APPLICATION
-TABLE_NAME=CUSTOMERS -GLOBAL_ROWCOUNT=200
-SUBSCRIBER_NAME=SALES_SYNC
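
For a model journalized in consistent mode, a sketch using a CDC set instead of a table name (the model code SALES_MODEL is hypothetical); the CDC set name is resolved with the substitution API described above:

OdiWaitForLogData -LSCHEMA=SALES_APPLICATION -SUBSCRIBER_NAME=SALES_SYNC
-CDC_SET_NAME=<%=odiRef.getObjectName("L","SALES_MODEL","SALES_APPLICATION","D")%>
-GLOBAL_ROWCOUNT=200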

OdiWaitForTable

Use this command to wait for a table to be created and populated with a predefined number of rows.

The OdiWaitForTable command regularly tests whether the specified table has been created and has been populated with a number of records. The test is repeated every -POLLINT milliseconds until the table exists and contains the desired number of rows (-GLOBAL_ROWCOUNT), or until a timeout (-TIMEOUT) is reached.

Usage

OdiWaitForTable -CONTEXT=<context> -LSCHEMA=<logical_schema>
-TABLE_NAME=<table_name> [-TIMEOUT=<timeout>] [-POLLINT=<pollInt>]
[-GLOBAL_ROWCOUNT=<globalRowCount>] [-TIMEOUT_WITH_ROWS_OK=<yes|no>]

Parameters

Parameters | Mandatory | Description

-CONTEXT=<context>

No

Context in which the logical schema will be resolved. If no context is specified, the execution context is used.

-GLOBAL_ROWCOUNT=<globalRowCount>

No

Total number of rows expected in the table to terminate the command. The default value is 1. If not specified, the command finishes when a new row is inserted into the table.

-LSCHEMA=<logical_schema>

Yes

Logical schema in which the table is searched for.

-POLLINT=<pollInt>

No

Period of time in milliseconds to wait between each test. The default value is 1000.

-TABLE_NAME=<table_name>

Yes

Name of table to search for.

-TIMEOUT=<timeout>

No

Maximum time in milliseconds the table is searched for. If this value is equal to 0, the timeout is infinite. The default value is 0.

-TIMEOUT_WITH_ROWS_OK=<yes|no>

No

If this parameter is set to Yes and at least one row is detected, the API exits with a return code of 0 even if the timeout occurs before the expected number of records is detected. Otherwise, it signals an error. The default value is Yes.


Examples

Wait for the DE1P1 table in the ORA_WAITFORDATA schema to exist, and to contain at least 1 record.

OdiWaitForTable -LSCHEMA=ORA_WAITFORDATA -TABLE_NAME=DE1P1
-GLOBAL_ROWCOUNT=1

OdiXMLConcat

Use this command to concatenate elements from multiple XML files into a single file.

This tool extracts all instances of a given element from a set of source XML files and concatenates them into one target XML file. The tool parses and generates well-formed XML. It does not modify or generate a DTD for the generated files. A reference to an existing DTD can be specified in the -HEADER parameter or preserved from the original files using -KEEP_XML_PROLOGUE.


Note:

XML namespaces are not supported by this tool. Provide the local part of the element name (without the namespace or prefix value) in the -XML_ELEMENT parameter.


Usage

OdiXMLConcat -FILE=<file_filter> -TOFILE=<target_file>     
-XML_ELEMENT=<element_name> [-CHARSET_ENCODING=<encoding>]
[-IF_FILE_EXISTS=<overwrite|skip|error>]
[-KEEP_XML_PROLOGUE=<all|xml|doctype|none>] [-HEADER=<header>]
[-FOOTER=<footer>]

Parameters

Parameters | Mandatory | Description

-FILE=<file_filter>

Yes

Filter for the source XML files. This filter uses standard file wildcards (?,*). It includes both file names and directory names. Source files can be taken from the same folder or from different folders.

The following file filters are valid:

  • /tmp/files_*/customer.xml

  • /tmp/files_*/*.*

  • /tmp/files_??/customer.xml

  • /tmp/files/customer_*.xml

  • /tmp/files/customer_??.xml

-TOFILE=<target_file>

Yes

Target file into which the elements are concatenated.

-XML_ELEMENT=<element_name>

Yes

Local name of the XML element (without enclosing <> characters, prefix, or namespace information) to be extracted with its content and child elements from the source files.

Note that this element detection is not recursive. If a given instance of <element_name> contains other instances of <element_name>, only the outermost element is taken into account; nested instances are extracted only as part of the top element's content.

-CHARSET_ENCODING=<encoding>

No

Target files encoding. The default value is ISO-8859-1. For the list of supported encodings, see: http://java.sun.com/j2se/1.4.2/docs/guide/intl/encoding.doc.html

-IF_FILE_EXISTS=<overwrite|skip|error>

No

Define behavior when the target file exists.

  • overwrite: Overwrite the target file if it exists.

  • skip: Do nothing for this file.

  • error: Raise an error.

-KEEP_XML_PROLOGUE=<all|xml|doctype|none>

No

Copies the source file XML prologue in the target file. Depending on this parameter's value, the following parts of the XML prologue are preserved:

  • all: Copies all of the prologue (XML and document type declaration).

  • xml: Copies only the XML declaration <?xml...?> and not the document type declaration.

  • doctype: Copies only the document type declaration and not the XML declaration.

  • none: Does not copy the prologue from the source file.

Note: If all or part of the prologue is not preserved, it should be specified in the -HEADER parameter.

-HEADER=<header>

No

String that is appended after the prologue (if any) in each target file. You can use this parameter to create a customized XML prologue or root element.

-FOOTER=<footer>

No

String that is appended at the end of each target file. You can use this parameter to close a root element added in the header.


Examples

Concatenate the content of the IDOC elements in the files ord1.xml, ord2.xml, and so on in the ord_i subfolder into the file MDSLS.TXT.XML, with the root element <WMMBID02> added to the target.

OdiXMLConcat "-FILE=./ord_i/ord*.xml" "-TOFILE=./MDSLS.TXT.XML" -XML_ELEMENT=IDOC
"-CHARSET_ENCODING=UTF-8" -IF_FILE_EXISTS=overwrite -KEEP_XML_PROLOGUE=xml
"-HEADER=<WMMBID02>" "-FOOTER=</WMMBID02>"

OdiXMLConcat "-FILE=./o?d_*/ord*.xml" "-TOFILE=./MDSLS.TXT.XML" -XML_ELEMENT=IDOC
"-CHARSET_ENCODING=UTF-8" -IF_FILE_EXISTS=overwrite -KEEP_XML_PROLOGUE=none
"-HEADER=<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<WMMBID02>"
"-FOOTER=</WMMBID02>"

Concatenate the EDI elements of the files ord1.xml, ord2.xml, and so on in the ord_i subfolder into the file MDSLS2.XML. This file will have the new root element EDI_BATCH above all <EDI> elements.

OdiXMLConcat "-FILE=./o?d_?/ord*.xml" "-TOFILE=./MDSLS2.XML" -XML_ELEMENT=EDI "-CHARSET_ENCODING=UTF-8" -IF_FILE_EXISTS=overwrite -KEEP_XML_PROLOGUE=xml "-HEADER= <EDI_BATCH>" "-FOOTER=</EDI_BATCH>"

OdiXMLSplit

Use this command to split elements from an XML file into several files.

This tool extracts all instances of a given element stored in a source XML file and splits them across several target XML files. The tool parses and generates well-formed XML. It does not modify or generate a DTD for the generated files. A reference to an existing DTD can be specified in the -HEADER parameter or preserved from the original files using -KEEP_XML_PROLOGUE.


Note:

XML namespaces are not supported by this tool. Provide the local part of the element name (without the namespace or prefix value) in the -XML_ELEMENT parameter.


Usage

OdiXMLSplit -FILE=<file> -TOFILE=<file_pattern> -XML_ELEMENT=<element_name>
[-CHARSET_ENCODING=<encoding>] [-IF_FILE_EXISTS=<overwrite|skip|error>]
[-KEEP_XML_PROLOGUE=<all|xml|doctype|none>] [-HEADER=<header>]
[-FOOTER=<footer>]

Parameters

Parameters | Mandatory | Description

-FILE=<file>

Yes

Source XML file to split.

-TOFILE=<file_pattern>

Yes

File pattern for the target files. Each file is named after a pattern containing a mask representing a generated number sequence or the value of an attribute of the XML element used to perform the split:

  • Number Sequence Mask: Use the * (star) value to indicate the place of the file number value. For example, if <file_pattern> is equal to target_*.xml, the files created are named target_1.xml, target_2.xml, and so on.

  • Attribute Value Mask: Between square brackets, specify the name of the attribute of <element_name> whose value should be used to create the file name. For example, customer_[CUSTID].xml creates files named customer_041.xml, customer_123.xml, and so on, depending on the value of the CUSTID attribute of the element used to split. Note that if a value repeats over several successive elements, target files may be overwritten according to the value of the -IF_FILE_EXISTS parameter.

Note that the pattern can be used for creating different files within a directory or files in different directories. The following patterns are valid:

  • /tmp/files_*/customer.xml

  • /tmp/files_[CUSTID]/customer.xml

  • /tmp/files/customer_*.xml

  • /tmp/files/customer_[CUSTID].xml

-XML_ELEMENT=<element_name>

Yes

Local name of the XML element (without enclosing <> characters, prefix, or namespace information) to be extracted with its content and child elements from the source files.

Note that this element detection is not recursive. If a given instance of <element_name> contains other instances of <element_name>, only the outermost element is taken into account; nested instances are extracted only as part of the top element's content.

-CHARSET_ENCODING=<encoding>

No

Target files encoding. The default value is ISO-8859-1. For the list of supported encodings, see: http://java.sun.com/j2se/1.4.2/docs/guide/intl/encoding.doc.html

-IF_FILE_EXISTS=<overwrite|skip|error>

No

Define behavior when the target file exists.

  • overwrite: Overwrite the target file if it exists.

  • skip: Do nothing for this file.

  • error: Raise an error.

-KEEP_XML_PROLOGUE=<all|xml|doctype|none>

No

Copies the source file XML prologue in the target file. Depending on this parameter's value, the following parts of the XML prologue are preserved:

  • all: Copies all of the prologue (XML and document type declaration).

  • xml: Copies only the XML declaration <?xml...?> and not the document type declaration.

  • doctype: Copies only the document type declaration and not the XML declaration.

  • none: Does not copy the prologue from the source file.

Note: If all or part of the prologue is not preserved, it should be specified in the -HEADER parameter.

-HEADER=<header>

No

String that is appended after the prologue (if any) in each target file. You can use this parameter to create a customized XML prologue or root element.

-FOOTER=<footer>

No

String that is appended at the end of each target file. You can use this parameter to close a root element added in the header.


Examples

Split the file MDSLS.TXT.XML into several files. The files ord1.xml, ord2.xml, and so on are created and contain each instance of the IDOC element contained in the source file.

OdiXMLSplit "-FILE=./MDSLS.TXT.XML" "-TOFILE=./ord_i/ord*.xml" -XML_ELEMENT=IDOC
"-CHARSET_ENCODING=UTF-8" -IF_FILE_EXISTS=overwrite -KEEP_XML_PROLOGUE=xml
"-HEADER= <WMMBID02>" "-FOOTER= </WMMBID02>"

Split the file MDSLS.TXT.XML the same way as in the previous example except name the files using the value of the BEGIN attribute of the IDOC element that is being split. The XML prologue is not preserved in this example but entirely generated in the header.

OdiXMLSplit "-FILE= ./MDSLS.TXT.XML" "-TOFILE=./ord_i/ord[BEGIN].xml"
-XML_ELEMENT=IDOC "-CHARSET_ENCODING=UTF-8" -IF_FILE_EXISTS=overwrite -KEEP_XML
PROLOGUE=none "-HEADER= <?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<WMMBID02>"
"-FOOTER=</WMMBID02>"

OdiZip

Use this command to create a ZIP file from a directory or several files.

Usage

OdiZip -DIR=<directory> -FILE=<file> -TOFILE=<target_file> [-OVERWRITE=<yes|no>]
[-RECURSE=<yes|no>] [-CASESENS=<yes|no>]
[-ENCODING=<file_name_encoding>]

Parameters

Parameters | Mandatory | Description

-DIR=<directory>

Yes if -FILE is omitted

Base directory (or folder) that will be the future root in the ZIP file to generate. If only -DIR and not -FILE is specified, all files under this directory are archived.

-FILE=<file>

Yes if -DIR is omitted

Path from the base directory of the file(s) to archive. If only -FILE and not -DIR is specified, the default directory is the current work directory if the -FILE path is relative.

Use the * wildcard to specify generic characters.

Examples:

/var/tmp/*.log (all files with the log extension of the directory /var/tmp)

arch_*.lst (all files starting with arch_ and with the extension lst)

-TOFILE=<target_file>

Yes

Target ZIP file.

-OVERWRITE=<yes|no>

No

Indicates whether the target ZIP file must be overwritten (Yes) or simply updated if it already exists (No). By default, the ZIP file is updated if it already exists.

-RECURSE=<yes|no>

No

Indicates if the archiving is recursive in the case of a directory that contains other directories. The value No indicates that only the files contained in the directory to copy (without the subfolders) are archived.

-CASESENS=<yes|no>

No

Indicates whether the file search is case-sensitive. By default (No), Oracle Data Integrator searches for file names in uppercase.

-ENCODING=<file_name_encoding>

No

Character encoding to use for file names inside the archive file.

For the list of supported encodings, see:

http://java.sun.com/j2se/1.4.2/docs/guide/intl/encoding.doc.html

This defaults to the platform's default character encoding.


Examples

Create an archive of the directory C:\Program Files\odi.

OdiZip "-DIR=C:\Program Files\odi" -FILE=*.* -TOFILE=C:\TEMP\odi_archive.zip

Create an archive of the directory C:\Program Files\odi while preserving the odi directory in the archive.

OdiZip "-DIR=C:\Program Files" -FILE=odi\*.* -TOFILE=C:\TEMP\odi_archive.zip

Oracle® Fusion Middleware

Developer's Guide for Oracle Data Integrator

12c (12.1.2)

E39359-03

January 2014


Oracle Fusion Middleware Developer's Guide for Oracle Data Integrator, 12c (12.1.2)

E39359-03

Copyright © 2010, 2014, Oracle and/or its affiliates. All rights reserved.

Primary Authors:  Laura Hofman Miquel, Joshua Stanley

Contributing Authors: Alex Kotopoulis, Michael Reiche, Jon Patt, Alex Prazma, Gail Risdal

Contributors: David Allan, Linda Bittarelli, Sophia Chen, Pratima Chennupati, Victor Chow, Sowmya Dhandapani, Daniel Gallagher, Gary Hostetler, Kevin Hwang, Aslam Khan, Sebu T. Koleth, Christian Kurz, Venkata Lakkaraju, Thomas Lau, Deekshit Mantampady, Kevin McDonough, Luda Mogilevich, Ale Paez, Suresh Pendap, Sandrine Riley, Julien Testut, Sachin Thatte, Julie Timmons, Jay Turner, Vikas Varma, Robert Velisar, Winnie Wan, Geoff Watters, Jennifer Waywell

This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited.

The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing.

If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, the following notice is applicable:

U.S. GOVERNMENT END USERS: Oracle programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation, delivered to U.S. Government end users are "commercial computer software" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, use, duplication, disclosure, modification, and adaptation of the programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation, shall be subject to license terms and license restrictions applicable to the programs. No other rights are granted to the U.S. Government.

This software or hardware is developed for general use in a variety of information management applications. It is not developed or intended for use in any inherently dangerous applications, including applications that may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you shall be responsible to take all appropriate failsafe, backup, redundancy, and other measures to ensure its safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this software or hardware in dangerous applications.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group.

This software or hardware and documentation may provide access to or information on content, products, and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to third-party content, products, and services. Oracle Corporation and its affiliates will not be responsible for any loss, costs, or damages incurred due to your access to or use of third-party content, products, or services.


19 Using Version Control

This chapter describes how to work with version management in Oracle Data Integrator.

Oracle Data Integrator provides a comprehensive system for managing and safeguarding changes. The version management system automatically sets flags on developed objects (such as projects and models) to indicate their status, such as new or modified. It also allows these objects to be backed up as stable checkpoints, and later restored from these checkpoints. These checkpoints are created for individual objects in the form of versions, and for consistent groups of objects in the form of solutions.


Note:

Version management is supported for master repositories installed on database engines such as Oracle, Hypersonic SQL, and Microsoft SQL Server. For a complete list of certified database engines supporting version management, refer to the Platform Certifications document on OTN at: http://www.oracle.com/technology/products/oracle-data-integrator/index.html.


This chapter includes the following sections:

Working with Object Flags

When an object is created or modified in Designer Navigator, a flag is displayed in the tree on the object icon to indicate its status. Table 19-1 lists these flags.

Table 19-1 Object Flags

Flag | Description

inserted icon


Object status is inserted.

updated icon


Object status is updated.


When an object is inserted, updated or deleted, its parent objects are recursively flagged as updated. For example, when a step is inserted into a package, it is flagged as inserted, and the package, folder(s) and project containing this step are flagged as updated.

When an object version is checked in (refer to "Working with Versions" for more information), the flags on this object are reset.

Working with Versions

A version is a backup copy of an object. It is checked in at a given time and may be restored later. Versions are saved in the master repository. They are displayed in the Version tab of the object window.

The following objects can be checked in as versions:

  • Projects, Folders

  • Packages, Scenarios

  • Mappings (including Reusable Mappings), Procedures, Knowledge Modules

  • Sequences, User Functions, Variables

  • Models, Model Folders

  • Solutions

  • Load Plans

Checking in a version

To check in a version:

  1. Select the object for which you want to check in a version.

  2. In the property inspector, select the Version tab. In the Versions table, click the Create a new version button (a green plus-sign icon).

  3. In the Versioning dialog, review Previous Versions to see the list of versions already checked in.

  4. A version number is automatically generated in the Version field. Modify this version number if necessary.

  5. Enter the details for this version in the Description field.

  6. Click OK.

When a version is checked in, the flags for the object are reset.

Displaying previous versions of an object

To display previous versions of an object:

When editing the object, the Version tab provides creation and update information, the internal and global IDs for this object, and a list of versions checked in, with the check in date and the name of the user who performed the check in operation.

Restoring a version from the Version tab


Note:

You can also restore a version from the Version Browser. See: "Restoring a version with the Version Browser".



WARNING:

Restoring a version cannot be undone. It permanently erases the current object and replaces it with the selected version. Consider creating a new version of your current object before restoring a version.


To restore a version from the Version tab:

  1. Select the object for which you want to restore a version.

  2. In the property inspector, select the Version tab. In the Versions table, select the row corresponding to the version you want to restore. Click the Restore a version button, or right-click the row and select Restore from the context menu.

  3. Click Yes to confirm the restore operation.

Browsing versions

To browse versions:

Oracle Data Integrator contains a tool, the Version Browser, which is used to display the versions stored in the repository.

  1. From the main menu, select ODI > Version Browser...

  2. Use the Object Type and Object Name drop-down lists to filter the objects for which you want to display the list of versions.

From the Version Browser, you can compare two versions, restore a version, export a version as an XML file or delete an existing version.


Note:

The Version Browser displays the versions that existed when you opened it. Click Refresh to view all new versions created since then.


Comparing two versions with the Version Browser

To compare two versions with the Version Browser, see "Working with the Version Comparison Tool".

Deleting a version with the Version Browser

To delete a version with the Version Browser:

  1. Open the Version Browser.

  2. Select the version you want to delete.

  3. Click the Delete icon in the version table (a red X button), or right-click and select Delete from the context menu.

The version is deleted.

Restoring a version with the Version Browser


WARNING:

Restoring a version cannot be undone. It permanently erases the current object and replaces it with the selected version. Consider creating a new version of your current object before restoring a version.


To restore a version with the Version Browser:

  1. Open the Version Browser.

  2. Select the version you want to restore.

  3. Click the Restore this version button, or right-click and select Restore from the context menu.

  4. Click OK to confirm the restore operation.

The version is restored in the repository.

Exporting a version with the Version Browser

To export a version with the Version Browser:

This operation exports the version to a file without restoring it. This exported version can be imported into another repository.


Note:

Exporting a version exports the object contained in the version and not the version information. This allows you to export an old version without having to actually restore it in the repository.


  1. Open the Version Browser.

  2. Select the version you want to export.

  3. Click the Export this version as an XML file button, or right-click and select Export from the context menu.

  4. Select the Export Directory and specify the Export Name. Select Replace existing files without warning to overwrite files of the same name in the export directory without confirmation.

  5. Click OK.

The version is exported to the given location.

Working with the Version Comparison Tool

Oracle Data Integrator provides a comprehensive version comparison tool. This graphical tool allows you to view and compare two different versions of an object.

The version comparison tool provides the following features:

  • Color-coded side-by-side display of comparison results: The comparison results are displayed in two panes, side-by-side, and the differences between the two compared versions are color coded.

  • Comparison results organized in tree: The tree of the comparison tool displays the comparison results in a hierarchical list of node objects in which expanding and collapsing the nodes is synchronized.

  • Report creation and printing in PDF format: The version comparison tool is able to generate and print a PDF report listing the differences between two particular versions of an object.

  • Supported objects: The version comparison tool supports the following objects: Project, Folder, Package, Scenario, Mapping, Procedure, Knowledge Module, Sequence, User Function, Variable, Model, Model folder, and Solution.

  • Difference viewer functionality: This version comparison tool is a difference viewer and is provided only for consultation purposes. Editing or merging object versions is not supported. If you want to edit the object or merge the changes between two versions, you have to make the changes manually directly in the concerned objects.

Viewing the Differences between two Versions

To view the differences between two particular versions of an object, open the Version Comparison tool.

There are three different ways of opening the version comparison tool:

By selecting the object in the Projects tree

  1. From the Projects tree in Designer Navigator, select the object whose versions you want to compare.

  2. Right-click the object.

  3. Select Version > Compare with version...

  4. In the Compare with version editor, select the version with which you want to compare the current version of the object.

  5. Click OK.

  6. The Version Comparison tool opens.

Using the Versions tab of the object

  1. In Designer Navigator, open the object editor of the object whose versions you want to compare.

  2. Go to the Version tab.

    The Version tab provides the list of all versions created for this object. This list also indicates the creation date, the name of the user who created the version, and a description (if specified).

  3. Select the two versions you want to compare by keeping the <CTRL> key pressed.

  4. Right-click and select Compare...

  5. The Version Comparison tool opens.

Using the Version Browser

  1. Open the Version Browser.

  2. Select two versions that you want to compare, by using control-left click to multi-select two different rows. You must select rows that correspond to the exact same object.

  3. Click the Compare two versions for identical objects icon in the version table, or right-click and select Compare from the context menu.

  4. The Version Comparison tool opens.

  5. To print a copy of the object comparison, click Print. See: "Generating and Printing a Report of your Comparison Results".

    Click Close when you are done reviewing the comparison.

The Version Comparison tool shows the differences between two versions: on the left pane the newer version and on the right pane the older version of your selected object.

The differences are color highlighted. The following color code is applied:

Color            Description
White (default)  Unchanged
Red              Deleted
Green            Added/new
Yellow           Object modified
Orange           Field modified (the value inside this field has changed)



Note:

If one object does not exist in one of the versions (for example, when it has been deleted), it is represented as an empty object (with empty values).


Using Comparison Filters

Once versions of an object have been created, the Version Comparison tool can be used to compare any two of them at different points in time.

Creating or checking in a version is covered in "Working with Versions".

The Version Comparison tool provides two different types of filters for customizing the comparison results:

  • Object filters: Select the corresponding check boxes (New, Deleted, Modified, and/or Unchanged) to choose which categories of objects are displayed in the comparison results.

  • Field filters: Select the corresponding check boxes (New, Deleted, Modified, and/or Unchanged) to choose which categories of fields are displayed in the comparison results.

Generating and Printing a Report of your Comparison Results

To generate a report of your comparison results in Designer Navigator:

  1. In the Version Comparison tool, click the Printer icon.

  2. In the Report Generation dialog, set the object and field filters according to your needs.

  3. In the PDF file location field, specify a file name to write the report to. If no path is specified, the file will be written to the default directory for PDF files. This is a user preference.

  4. Check the box next to Open file after generation if you want to view the generated report in a PDF viewer once it has been created.


    Note:

    In order to view the generated report, you must specify the location of Adobe Acrobat Reader in the user parameters. You can also set the default PDF generation directory. To set these values, select the Preferences... option from the Tools menu. Expand the ODI node, and then the System node, and select Reports. Enter (or search for) the location of your preferred PDF Viewer, and of your Default PDF generation directory.


  5. Click Generate.

A report in Adobe PDF format is written to the file specified in step 3.

Working with Solutions

A solution is a comprehensive and consistent set of interdependent versions of objects. Like other objects, it can be checked in at a given time as a version, and may be restored at a later date. Solutions are saved into the master repository. A solution assembles a group of versions called the solution's elements.

A solution is automatically assembled using cross-references. By scanning cross-references, a solution automatically includes all dependent objects required for a particular object. For example, when adding a project to a solution, versions for all the models used in this project's mappings are automatically checked in and added to the solution. You can also manually add elements to or remove elements from the solution.

Solutions are displayed in the Solutions accordion in Designer Navigator and in Operator Navigator.

The following objects may be added into solutions:

  • Projects

  • Models, Model Folders

  • Scenarios

  • Load Plans

  • Global Variables, Knowledge Modules, User Functions and Sequences.

To create a solution:

  1. In Designer Navigator or Operator Navigator, from the Solutions toolbar menu select New Solution.

  2. In the Solutions editor, enter the Name of your solution and a Description.

  3. From the File menu select Save.

The resulting solution is an empty shell into which elements may then be added.

Working with Elements in a Solution

This section details the different actions that can be performed when working with elements of a solution.

Adding Elements

To add an element, drag the object from the tree into the Elements list in the solution editor. Oracle Data Integrator scans the cross-references and adds any Required Elements needed for this element to work correctly. If the objects being added have been inserted or updated since their last checked-in version, you will be prompted to create new versions for these objects.

Removing Elements

To remove an element from a solution, select the element you want to remove in the Elements list and then click the Delete button. This element disappears from the list. Existing checked in versions of the object are not affected.

Rolling Back Objects

To roll an object back to a version stored in the solution, select the elements you want to restore and then click the Restore button. The elements selected are all restored from the solution's versions.

Synchronizing Solutions

Synchronizing a solution automatically adds required elements that have not yet been included in the solution, creates new versions of modified elements and automatically removes unnecessary elements. The synchronization process brings the content of the solution up to date with the elements (projects, models, etc) stored in the repository.

To synchronize a solution:

  1. Open the solution you want to synchronize.

  2. Click Synchronize in the toolbar menu of the Elements section.

  3. Oracle Data Integrator scans the cross-references. If the cross-references indicate that the solution is up to date, a message appears. Otherwise, a list of elements to add to or remove from the solution is shown. These elements are grouped into Principal Elements (added manually), Required Elements (directly or indirectly referenced by the principal elements) and Unused Elements (no longer referenced by the principal elements).

  4. Check the Accept boxes to version and include the required elements or delete the unused ones.

  5. Click OK to synchronize the solution. Version creation windows may appear for elements requiring a new version to be created.

You should synchronize your solutions regularly to keep the solution contents up-to-date. You should also do it before checking in a solution version.

Restoring and Checking in a Solution

The procedure for checking in and restoring a solution version is similar to the method used for single elements. See "Working with Versions" for more details.

You can also restore a solution to import scenarios into production in Operator Navigator or Designer Navigator.

To restore a scenario from a solution:

  1. Double-click a solution to open the Solution editor.

  2. Select a scenario from the Principal or Required Elements section. Note that other elements, such as projects and mappings, cannot be restored.

  3. Click Restore in the toolbar menu of the Elements section.

The scenario is now accessible in the Scenarios tab.

Note that you can also use the Version Browser to restore scenarios. See "Restoring a version with the Version Browser".


Note:

When restoring a solution, elements in the solution are not automatically restored. They must be restored manually from the Solution editor.


Importing and Exporting Solutions

Solutions can be exported and imported similarly to other objects in Oracle Data Integrator. Export/Import is used to transfer solutions from one master repository to another. Refer to Chapter 20, "Exporting and Importing," for more information.


1 Introduction to Oracle Data Integrator

This chapter introduces Oracle Data Integrator.

This chapter includes the following sections:

Introduction to Data Integration with Oracle Data Integrator

Data Integration ensures that information is timely, accurate, and consistent across complex systems. This section provides an introduction to data integration and describes how Oracle Data Integrator provides support for Data Integration.

Data Integration

Integrating data and applications throughout the enterprise, and presenting them in a unified view is a complex proposition. Not only are there broad disparities in technologies, data structures, and application functionality, but there are also fundamental differences in integration architectures. Some integration needs are Data Oriented, especially those involving large data volumes. Other integration projects lend themselves to an Event Driven Architecture (EDA) or a Service Oriented Architecture (SOA), for asynchronous or synchronous integration.

Data Integration ensures that information is timely, accurate, and consistent across complex systems. Although it is still frequently referred to as Extract-Transform-Load (ETL) - data integration was initially considered to be the architecture used for loading Enterprise Data Warehouse systems - data integration now includes data movement, data synchronization, data quality, data management, and data services.

Oracle Data Integrator

Oracle Data Integrator provides a fully unified solution for building, deploying, and managing complex data warehouses or as part of data-centric architectures in a SOA or business intelligence environment. In addition, it combines all the elements of data integration—data movement, data synchronization, data quality, data management, and data services—to ensure that information is timely, accurate, and consistent across complex systems.

Oracle Data Integrator (ODI) features an active integration platform that includes all styles of data integration: data-based, event-based and service-based. ODI unifies silos of integration by transforming large volumes of data efficiently, processing events in real time through its advanced Changed Data Capture (CDC) framework, and providing data services to the Oracle SOA Suite. It also provides robust data integrity control features, assuring the consistency and correctness of data. With powerful core differentiators - heterogeneous E-LT, Declarative Design and Knowledge Modules - Oracle Data Integrator meets the performance, flexibility, productivity, modularity and hot-pluggability requirements of an integration platform.

E-LT

Traditional ETL tools operate by first Extracting the data from various sources, Transforming the data in a proprietary, middle-tier ETL engine that is used as the staging area, and then Loading the transformed data into the target data warehouse or integration server. Hence the term ETL represents both the names and the order of the operations performed, as shown in Figure 1-1.

Figure 1-1 Traditional ETL versus ODI E-LT

Description of Figure 1-1 follows

The data transformation step of the ETL process is by far the most compute-intensive, and is performed entirely by the proprietary ETL engine on a dedicated server. The ETL engine performs data transformations (and sometimes data quality checks) on a row-by-row basis, and hence, can easily become the bottleneck in the overall process. In addition, the data must be moved over the network twice – once between the sources and the ETL server, and again between the ETL server and the target data warehouse. Moreover, if one wants to ensure referential integrity by comparing data flow references against values from the target data warehouse, the referenced data must be downloaded from the target to the engine, further increasing network traffic and download time, and leading to additional performance issues.

In response to the issues raised by ETL architectures, a new architecture has emerged, which in many ways incorporates the best aspects of manual coding and automated code-generation approaches. Known as E-LT, this new approach changes where and how data transformation takes place, and leverages existing developer skills, RDBMS engines and server hardware to the greatest extent possible. In essence, E-LT moves the data transformation step to the target RDBMS, changing the order of operations to: Extract the data from the source tables, Load the tables into the destination server, and then Transform the data on the target RDBMS using native SQL operators. Note, with E-LT there is no need for a middle-tier engine or server as shown in Figure 1-1.
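To make the contrast concrete, the following is a minimal sketch of the two statements an E-LT run might execute directly on the target RDBMS: a load into a staging table, followed by a set-based transformation into the target. All object names (the database link, staging schema, and tables) are hypothetical illustrations, not generated ODI code.

-- Load: source rows are copied into a staging table located on the target server
INSERT INTO STAGING.TMP_SALES (CUST_ID, AMOUNT)
SELECT CUST_ID, AMOUNT
FROM   SRC_SALES@SOURCE_DB_LINK;

-- Transform: the transformation runs as native SQL inside the target RDBMS,
-- with no middle-tier engine involved
INSERT INTO DWH.SALES_FACT (CUST_ID, TOTAL_AMOUNT)
SELECT CUST_ID, SUM(AMOUNT)
FROM   STAGING.TMP_SALES
GROUP  BY CUST_ID;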

Oracle Data Integrator supports both ETL- and E-LT-Style data integration. See "Designing E-LT and ETL-Style Mappings" for more information.

Oracle Data Integrator Concepts

This section provides an introduction to the main concepts of Oracle Data Integrator.

Introduction to Declarative Design

To design an integration process with conventional ETL systems, a developer needs to design each step of the process. Consider, for example, a common case in which sales figures must be summed over time for different customer age groups. The sales data comes from a sales management database, and age groups are described in an age distribution file. In order to combine these sources, then insert and update appropriate records in the customer statistics system, you must design each step, which includes:

  1. Load the customer sales data in the engine

  2. Load the age distribution file in the engine

  3. Perform a lookup between the customer sales data and the age distribution data

  4. Aggregate the customer sales grouped by age distribution

  5. Load the target sales statistics data into the engine

  6. Determine what needs to be inserted or updated by comparing aggregated information with the data from the statistics system

  7. Insert new records into the target

  8. Update existing records into the target

This method requires specialized skills, depending on the steps that need to be designed. It also requires significant development effort, because even repetitive successions of tasks, such as managing inserts/updates in a target, need to be developed into each task. Finally, with this method, maintenance requires significant effort. Changing the integration process requires a clear understanding of what the process does as well as the knowledge of how it is done. With the conventional ETL method of design, the logical and technical aspects of the integration are intertwined.

Declarative Design is a design method that focuses on "What" to do (the Declarative Rules) rather than "How" to do it (the Process). In our example, "What" the process does is:

  • Relate the customer age from the sales application to the age groups from the statistical file

  • Aggregate customer sales by age groups to load sales statistics

"How" this is done, that is the underlying technical aspects or technical strategies for performing this integration task – such as creating temporary data structures or calling loaders – is clearly separated from the declarative rules.

Declarative Design in Oracle Data Integrator uses the well-known relational paradigm to declare, in the form of a mapping, the declarative rules for a data integration task, which includes designation of sources, targets, and transformations.

Declarative rules often apply to metadata to transform data and are usually described in natural language by business users. In a typical data integration project (such as a Data Warehouse project), these rules are defined during the specification phase in documents written by business analysts in conjunction with project managers. They can very often be implemented using SQL expressions, provided that the metadata they refer to is known and qualified in a metadata repository.
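For example, most of the steps of the conventional process above (the lookup, the aggregation, and the insert/update logic) collapse into a single declarative statement once expressed this way. The following is a minimal sketch, assuming hypothetical table and column names for the sales and age distribution data:

MERGE INTO SALES_STATS tgt
USING (
  -- lookup between sales data and age groups, aggregated by age group
  SELECT ad.AGE_GROUP, SUM(s.SALES_AMOUNT) AS TOTAL_SALES
  FROM   CUSTOMER_SALES s
  JOIN   AGE_DISTRIBUTION ad
    ON   s.CUSTOMER_AGE BETWEEN ad.AGE_MIN AND ad.AGE_MAX
  GROUP BY ad.AGE_GROUP
) src
ON (tgt.AGE_GROUP = src.AGE_GROUP)
WHEN MATCHED THEN
  UPDATE SET tgt.TOTAL_SALES = src.TOTAL_SALES   -- update existing records
WHEN NOT MATCHED THEN
  INSERT (AGE_GROUP, TOTAL_SALES)                -- insert new records
  VALUES (src.AGE_GROUP, src.TOTAL_SALES);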

The four major types of Declarative Rules are mappings, joins, filters and constraints:

  • A mapping is a business rule implemented as an SQL expression. It is a transformation rule that maps source attributes (or fields) onto one of the target attributes. It can be executed by a relational database server at run-time. This server can be the source server (when possible), a middle tier server or the target server.

  • A join operation links records in several data sets, such as tables or files. Joins are used to link multiple sources. A join is implemented as an SQL expression linking the attributes (fields) of two or more data sets. Joins can be defined regardless of the physical location of the source data sets involved. For example, a JMS queue can be joined to an Oracle table. Depending on the technology performing the join, it can be expressed as an inner join, right outer join, left outer join and full outer join.

  • A filter is an expression applied to the attributes of source data sets. Only the records matching this filter are processed by the data flow.

  • A constraint is an object that defines the rules enforced on data sets' data. A constraint ensures the validity of the data in a given data set and the integrity of the data of a model. Constraints on the target are used to check the validity of the data before integration in the target.

Table 1-1 gives examples of declarative rules.

Table 1-1 Examples of declarative rules

Declarative Rule: Sum of all amounts of items sold during October 2005 multiplied by the item price
Type: Mapping
SQL Expression:
SUM(
 CASE WHEN SALES.YEARMONTH=200510 THEN
  SALES.AMOUNT*PRODUCT.ITEM_PRICE
 ELSE
  0
 END
)

Declarative Rule: Products that start with 'CPU' and that belong to the hardware category
Type: Filter
SQL Expression:
Upper(PRODUCT.PRODUCT_NAME) like 'CPU%'
And PRODUCT.CATEGORY = 'HARDWARE'

Declarative Rule: Customers with their orders and order lines
Type: Join
SQL Expression:
CUSTOMER.CUSTOMER_ID = ORDER.CUSTOMER_ID
And ORDER.ORDER_ID = ORDER_LINE.ORDER_ID

Declarative Rule: Reject duplicate customer names
Type: Unique Key Constraint
SQL Expression:
Unique key (CUSTOMER_NAME)

Declarative Rule: Reject orders with a link to a non-existent customer
Type: Reference Constraint
SQL Expression:
Foreign key on ORDERS(CUSTOMER_ID) references CUSTOMER(CUSTOMER_ID)
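The two constraint rows above would typically exist as, or be reverse-engineered by Oracle Data Integrator from, standard DDL on the data model. A minimal sketch, reusing the table names from the examples (constraint names are hypothetical):

-- Reject duplicate customer names
ALTER TABLE CUSTOMER
  ADD CONSTRAINT UK_CUSTOMER_NAME UNIQUE (CUSTOMER_NAME);

-- Reject orders with a link to a non-existent customer
ALTER TABLE ORDERS
  ADD CONSTRAINT FK_ORDERS_CUSTOMER
  FOREIGN KEY (CUSTOMER_ID) REFERENCES CUSTOMER (CUSTOMER_ID);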

Introduction to Knowledge Modules

Knowledge Modules (KM) implement "how" the integration processes occur. Each Knowledge Module type refers to a specific integration task.

A Knowledge Module is a code template for a given integration task. This code is independent of the Declarative Rules that need to be processed. At design-time, a developer creates the Declarative Rules describing integration processes. These Declarative Rules are merged with the Knowledge Module to generate code ready for runtime. At runtime, Oracle Data Integrator sends this code for execution to the source and target systems it leverages in the E-LT architecture for running the process.

Knowledge Modules cover a wide range of technologies and techniques. Knowledge Modules provide additional flexibility by giving users access to the most appropriate or most finely tuned solution for a specific task in a given situation. For example, to transfer data from one DBMS to another, a developer can use any of several methods depending on the situation:

  • The DBMS loaders (Oracle's SQL*Loader, Microsoft SQL Server's BCP, Teradata TPump) can dump data from the source engine to a file then load this file to the target engine

  • The database link features (Oracle Database Links, Microsoft SQL Server's Linked Servers) can transfer data directly between servers

These technical strategies, among others, correspond to Knowledge Modules tuned to exploit the native capabilities of given platforms.

Knowledge Modules are also fully extensible. Their code is open and can be edited through a graphical user interface by technical experts who want to implement new integration methods or best practices (for example, for higher performance or to comply with regulations and corporate standards). Developers can then use these custom Knowledge Modules in their integration processes without needing the technical experts' skills.

For more information on Knowledge Modules, refer to the Connectivity and Modules Guide for Oracle Data Integrator and the Knowledge Module Developer's Guide for Oracle Data Integrator.

Introduction to Mappings

A mapping is an Oracle Data Integrator object that enables the loading of target datastores with data transformed from source datastores, based on declarative rules implemented as joins, filters and constraints.

A mapping also references the Knowledge Modules (code templates) that will be used to generate the integration process.

Datastores

A datastore is a data structure that can be used as a source or a target in a mapping. It can be:

  • a table stored in a relational database

  • an ASCII or EBCDIC file (delimited, or fixed length)

  • a node from an XML file

  • a JMS topic or queue from a Message Oriented Middleware

  • a node from an enterprise directory

  • an API that returns data in the form of an array of records

Regardless of the underlying technology, all data sources appear in Oracle Data Integrator in the form of datastores that can be manipulated and integrated in the same way. The datastores are grouped into data models. These models contain all the declarative rules (metadata) attached to datastores, such as constraints.

Declarative Rules

The declarative rules that make up a mapping can be expressed in human language, as shown in the following example: Data is coming from two Microsoft SQL Server tables (ORDERS joined to ORDER_LINES) and is combined with data from the CORRECTIONS file. The target SALES Oracle table must match some constraints such as the uniqueness of the ID column and valid reference to the SALES_REP table.

Data must be transformed and aggregated according to some mappings expressed in human language as shown in Figure 1-2.

Figure 1-2 Example of a business problem

Description of Figure 1-2 follows

Translating these business rules from natural language to SQL expressions is usually straightforward. In our example, the rules that appear in Figure 1-2 could be translated as shown in Table 1-2.

Table 1-2 Business rules translated

Type: Filter
Rule: Only ORDERS marked as closed
SQL Expression/Constraint: ORDERS.STATUS = 'CLOSED'

Type: Join
Rule: A row from LINES has a matching ORDER_ID in ORDERS
SQL Expression/Constraint: ORDERS.ORDER_ID = LINES.ORDER_ID

Type: Mapping
Rule: Target's SALES is the sum of the order lines' AMOUNT grouped by sales rep, with the corrections applied
SQL Expression/Constraint: SUM(LINES.AMOUNT + CORRECTIONS.VALUE)

Type: Mapping
Rule: Sales Rep = Sales Rep ID from ORDERS
SQL Expression/Constraint: ORDERS.SALES_REP_ID

Type: Constraint
Rule: ID must not be null
SQL Expression/Constraint: ID is set to not null in the data model

Type: Constraint
Rule: ID must be unique
SQL Expression/Constraint: A unique key is added to the data model with (ID) as set of columns

Type: Constraint
Rule: The Sales Rep ID should exist in the Target SalesRep table
SQL Expression/Constraint: A reference (foreign key) is added in the data model on SALES.SALES_REP = SALES_REP.SALES_REP_ID


Implementing this business problem using Oracle Data Integrator is a straightforward exercise: you simply translate the business rules into a mapping. Every business rule remains accessible from the mapping's diagram, as shown in Figure 1-3.

Figure 1-3 Implementation using Oracle Data Integrator

Description of Figure 1-3 follows

Data Flow

Business rules defined in the mapping are automatically converted into a data flow that will carry out the joins, filters, mappings, and constraints from source data to target tables.

By default, Oracle Data Integrator will use the Target RDBMS as a staging area for loading source data into temporary tables and applying all the required mappings, staging filters, joins and constraints. The staging area is a separate area in the RDBMS (a user/database) where Oracle Data Integrator creates its temporary objects and executes some of the rules (mapping, joins, final filters, aggregations etc.). When performing the operations this way, Oracle Data Integrator behaves like an E-LT tool: it first extracts and loads the temporary tables and then finishes the transformations in the target RDBMS.

In some particular cases, when source volumes are small (less than 500,000 records), this staging area can be located in memory in Oracle Data Integrator's in-memory relational database – In-Memory Engine. Oracle Data Integrator would then behave like a traditional ETL tool.

Figure 1-4 shows the data flow automatically generated by Oracle Data Integrator to load the final SALES table. The business rules will be transformed into code by the Knowledge Modules (KM). The code produced will generate several steps. Some of these steps will extract and load the data from the sources to the staging area (Loading Knowledge Modules - LKM). Others will transform and integrate the data from the staging area to the target table (Integration Knowledge Module - IKM). To ensure data quality, the Check Knowledge Module (CKM) will apply the user defined constraints to the staging data to isolate erroneous records in the Errors table.

Figure 1-4 Oracle Data Integrator Knowledge Modules in action

Description of Figure 1-4 follows
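To make this division of labor concrete, the following is a hedged sketch of the kind of SQL these steps can generate for the SALES example. The schema names, the join key between lines and corrections, and the exact statements are illustrative assumptions; only the C$ (loading), I$ (integration), and E$ (error) naming conventions reflect actual ODI practice.

-- LKM steps: extract each source into a C$ (loading) table in the staging area
INSERT INTO STG.C$_ORDERS
SELECT ORDER_ID, SALES_REP_ID FROM ORDERS@SRC_LINK WHERE STATUS = 'CLOSED';

INSERT INTO STG.C$_LINES
SELECT ORDER_ID, CORR_ID, AMOUNT FROM LINES@SRC_LINK;

INSERT INTO STG.C$_CORRECTIONS
SELECT CORR_ID, VALUE FROM CORRECTIONS_EXT;  -- CORRECTIONS file exposed as an external table

-- IKM step: apply the joins, mappings and aggregation into an I$ (integration) table
CREATE TABLE STG.I$_SALES AS
SELECT o.SALES_REP_ID AS SALES_REP,
       SUM(l.AMOUNT + c.VALUE) AS SALES
FROM   STG.C$_ORDERS o
JOIN   STG.C$_LINES l ON o.ORDER_ID = l.ORDER_ID
JOIN   STG.C$_CORRECTIONS c ON l.CORR_ID = c.CORR_ID  -- hypothetical join key
GROUP  BY o.SALES_REP_ID;

-- CKM step: isolate rows violating the reference constraint into the E$ (error) table
INSERT INTO STG.E$_SALES
SELECT i.* FROM STG.I$_SALES i
WHERE NOT EXISTS
  (SELECT 1 FROM DWH.SALES_REP r WHERE r.SALES_REP_ID = i.SALES_REP);

-- IKM final step: integrate the remaining valid rows into the target
INSERT INTO DWH.SALES (SALES_REP, SALES)
SELECT i.SALES_REP, i.SALES FROM STG.I$_SALES i
WHERE NOT EXISTS
  (SELECT 1 FROM STG.E$_SALES e WHERE e.SALES_REP = i.SALES_REP);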

Oracle Data Integrator Knowledge Modules contain the actual code that will be executed by the various servers of the infrastructure. Some of the code contained in the Knowledge Modules is generic. It makes calls to the Oracle Data Integrator Substitution API that will be bound at run-time to the business rules and generates the final code that will be executed.

At design time, declarative rules are defined in the mappings and Knowledge Modules are only selected and configured.

At run-time, code is generated and every Oracle Data Integrator API call in the Knowledge Modules (enclosed by <% and %>) is replaced with its corresponding object name or expression, with respect to the metadata provided in the Repository. The generated code is orchestrated by the Oracle Data Integrator run-time component – the Agent – on the source and target systems to make them perform the processing, as defined in the E-LT approach.
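As an illustration, the following is a simplified fragment in the spirit of an Integration Knowledge Module task. The odiRef substitution calls shown are representative of the Substitution API, but the template as a whole is a hypothetical sketch, not an actual shipped Knowledge Module:

insert into <%=odiRef.getTable("L", "TARG_NAME", "A")%>
(
  <%=odiRef.getColList("", "[COL_NAME]", ",\n  ", "", "INS")%>
)
select
  <%=odiRef.getColList("", "[EXPRESSION]", ",\n  ", "", "INS")%>
from <%=odiRef.getFrom()%>
where (1=1)
<%=odiRef.getJoin()%>
<%=odiRef.getFilter()%>

At generation time, each <%...%> call is replaced with the target table name, column lists, join clauses, and filters declared in the mapping, yielding plain SQL statements like those shown earlier.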

Refer to Chapter 11, "Creating and Using Mappings" for more information on how to work with mappings.

Typical ODI Integration Projects

Oracle Data Integrator provides a wide range of integration features. This section introduces the most typical ODI Integration Projects.

Batch Oriented Integration

ODI is a comprehensive data integration platform with built-in connectivity to all major databases, data warehouses, and analytic applications, providing high-volume, high-performance batch integration.

The main goal of a data warehouse is to consolidate and deliver accurate indicators to business users to help them make decisions regarding their everyday business. A typical project is composed of several steps and milestones. Some of these are:

  • Defining business needs (Key Indicators)

  • Identifying source data that concerns key indicators; specifying business rules to transform source information into key indicators

  • Modeling the data structure of the target warehouse to store the key indicators

  • Populating the indicators by implementing business rules

  • Measuring the overall accuracy of the data by setting up data quality rules

  • Developing reports on key indicators

  • Making key indicators and metadata available to business users through ad hoc query tools or predefined reports

  • Measuring business users' satisfaction and adding/modifying key indicators

Oracle Data Integrator will help you cover most of these steps, from source data investigation to metadata lineage, and through loading and data quality audit. With its repository, ODI will centralize the specification and development efforts and provide a unique architecture on which the project can rely to succeed.

Scheduling and Operating Scenarios

Scheduling and operating scenarios is usually done in the Test and Production environments in separate Work Repositories. Any scenario can be scheduled by an ODI Agent or by any external scheduler, as scenarios can be invoked by an operating system command.

When scenarios are running in production, agents generate execution logs in an ODI Work Repository. These logs can be monitored either through the Operator Navigator or through any web browser when Oracle Data Integrator Console is set up. Failing jobs can be restarted and ad hoc tasks submitted for execution.

E-LT

ODI uses a unique E-LT architecture that leverages the power of existing RDBMS engines by generating native SQL and bulk loader control scripts to execute all transformations.

Event Oriented Integration

Capturing events from a Message Oriented Middleware or an Enterprise Service Bus has become a common task in integrating applications in a real-time environment. Applications and business processes generate messages for several subscribers, or they consume messages from the messaging infrastructure.

Oracle Data Integrator includes technology to support message-based integration that complies with the Java Message Service (JMS) standard. For example, a transformation job within Oracle Data Integrator can subscribe to and source messages from any message queue or topic. Messages are captured and transformed in real time and then written to the target systems.

Other use cases of this type of integration might require capturing changes at the database level. Oracle Data Integrator Changed Data Capture (CDC) capability identifies and captures inserted, updated, or deleted data from the source and makes it available for integration processes.

ODI provides two methods for tracking changes from source datastores to the CDC framework: triggers and RDBMS log mining. The first method can be deployed on most RDBMS that implement database triggers. This method is optimized to minimize overhead on the source systems. For example, changed data captured by the trigger is not duplicated, minimizing the number of input/output operations, which slow down source systems. The second method involves mining the RDBMS logs—the internal change history of the database engine. This has little impact on the system's transactional performance and is supported for Oracle (through the Log Miner feature).
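As an illustration of the trigger-based method, a capture trigger conceptually resembles the following sketch. The journal table layout and all object names are hypothetical simplifications, not the code actually generated by ODI's Journalizing Knowledge Modules:

CREATE TABLE J$ORDERS (
  JRN_FLAG  CHAR(1),   -- 'I' for insert/update, 'D' for delete
  JRN_DATE  DATE,      -- when the change was captured
  ORDER_ID  NUMBER     -- primary key of the changed row
);

CREATE OR REPLACE TRIGGER T$ORDERS
AFTER INSERT OR UPDATE OR DELETE ON ORDERS
FOR EACH ROW
BEGIN
  -- record only the key and the kind of change, not the full row
  IF DELETING THEN
    INSERT INTO J$ORDERS VALUES ('D', SYSDATE, :OLD.ORDER_ID);
  ELSE
    INSERT INTO J$ORDERS VALUES ('I', SYSDATE, :NEW.ORDER_ID);
  END IF;
END;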

The CDC framework used to manage changes, based on Knowledge Modules, is generic and open, so the change-tracking method can be customized. Any third-party change provider can be used to load the framework with changes.

Changes frequently involve several data sources at the same time. For example, when an order is created, updated, or deleted, both the orders table and the order lines table are involved. When processing a new order line, it is important that the new order, to which the line is related, is taken into account too. ODI provides a mode of change tracking called Consistent Set CDC. This mode allows for processing sets of changes for which data consistency is guaranteed.

For example, incoming orders can be detected at the database level using CDC. These new orders are enriched and transformed by ODI before being posted to the appropriate message queue or topic. Other applications such as Oracle BPEL or Oracle Business Activity Monitoring can subscribe to these messages, and the incoming events will trigger the appropriate business processes.

For more information on how to use the CDC framework in ODI, refer to Chapter 6, "Using Journalizing".

Service-Oriented Architecture

Oracle Data Integrator can be integrated seamlessly in a Service Oriented Architecture (SOA) in several ways:

Data Services are specialized Web services that provide access to data stored in database tables. Coupled with the Changed Data Capture capability, data services can also provide access to the changed records for a given subscriber. Data services are automatically generated by Oracle Data Integrator and deployed as Web services to a Web container, usually a Java application server. For more information on how to set up, generate and deploy data services, refer to Chapter 8, "Creating and Using Data Services".

Oracle Data Integrator can also expose its transformation processes as Web services to enable applications to use them as integration services. For example, a LOAD_SALES batch process used to update the CRM application can be triggered as a Web service from any service-compliant application, such as Oracle BPEL, Oracle Enterprise Service Bus, or Oracle Business Activity Monitoring. Transformations developed using ODI can therefore participate in the broader Service Oriented Architecture initiative.

Third-party Web services can be invoked as part of an ODI workflow and used as part of the data integration processes. Requests are generated on the fly and responses processed through regular transformations. Suppose, for example, that your company subscribed to a third-party service that exposes daily currency exchange rates as a Web service. If you want this data to update your multiple currency data warehouse, ODI automates this task with a minimum of effort. You would simply invoke the Web service from your data warehouse workflow and perform any appropriate transformation to the incoming data to make it fit a specific format. For more information on how to use web services in ODI, refer to Chapter 16, "Using Web Services".

Data Quality with ODI

With an approach based on declarative rules, Oracle Data Integrator is the most appropriate tool to help you build a data quality framework to track data inconsistencies.

Oracle Data Integrator uses declarative data integrity rules defined in its centralized metadata repository. These rules are applied to application data to guarantee the integrity and consistency of enterprise information. The Data Integrity benefits add to the overall Data Quality initiative and facilitate integration with existing and future business processes addressing this particular need.

Oracle Data Integrator automatically retrieves existing rules defined at the data level (such as database constraints) by a reverse-engineering process. ODI also allows developers to define additional, user-defined declarative rules that may be inferred from data discovery and profiling within ODI, and immediately checked.

Oracle Data Integrator provides a built-in framework to check the quality of your data in two ways:

  • Check data in your data servers, to validate that this data does not violate any of the rules declared on the datastores in Oracle Data Integrator. This data quality check is called a static check and is performed on data models and datastores. This type of check allows you to profile the quality of the data against rules that are not enforced by their storage technology.

  • Check data while it is moved and transformed by a mapping, in a flow check that checks the data flow against the rules defined on the target datastore. With such a check, correct data can be integrated into the target datastore while incorrect data is automatically moved into error tables.

Both static and flow checks use the constraints that are defined in the datastores and data models, and both use the Check Knowledge Modules (CKMs). For more information refer to "Flow Control and Static Control".
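Conceptually, a static check of a reference constraint on a SALES datastore reduces to a query like the following sketch. The error table layout (ERR_TYPE, ERR_MESS) is illustrative of, but not identical to, what a CKM generates:

-- Copy rows violating the reference constraint into the error table,
-- instead of failing the whole data set
INSERT INTO E$_SALES (ERR_TYPE, ERR_MESS, SALES_REP, SALES)
SELECT 'F', 'Unknown sales rep', s.SALES_REP, s.SALES
FROM   SALES s
WHERE  NOT EXISTS
       (SELECT 1 FROM SALES_REP r WHERE r.SALES_REP_ID = s.SALES_REP);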

Managing Environments

Integration projects exist in different environments during their lifecycle (development, test, production) and may even run in different environments in production (multiple-site deployment). Oracle Data Integrator simplifies the definition and maintenance of these environments, as well as the lifecycle of the project across these environments, using the Topology.

The Topology describes the physical and logical architecture of your Information System. It gives you a very flexible way of managing different servers, environments and agents. All the information of the Topology is stored in the master repository and is therefore centralized for optimized administration. All the objects manipulated within Work Repositories refer to the Topology, which is why it is the most important starting point when defining and planning your architecture.

The Topology is composed of data servers, physical and logical schemas, and contexts.

Data servers describe connections to your actual physical application servers and databases. They can represent, for example:

  • An Oracle Instance

  • An IBM DB2 Database

  • A Microsoft SQL Server Instance

  • A File System

  • An XML File

  • and so forth.

At runtime, Oracle Data Integrator uses the connection information you have described to connect to the servers.

Physical schemas indicate the physical location of the datastores (tables, files, topics, queues) inside a data server. All the physical schemas that need to be accessed have to be registered under their corresponding data server. Physical schemas are used to prefix object names so that they can be accessed with their qualified names. When creating a physical schema, you need to specify a temporary, or work, schema that will store the temporary or permanent objects needed at runtime.

A logical schema is an alias that allows a unique name to be given to all the physical schemas containing the same datastore structures. The aim of the logical schema is to ensure the portability of procedures and models on different design-time and run-time environments.

A Context represents one of these environments. Contexts are used to group physical resources belonging to the same environment.

Typical projects will have separate environments for Development, Test and Production. Some projects will even have several duplicated Test or Production environments. For example, you may have several production contexts for subsidiaries running their own production systems (Production New York, Production Boston, and so forth). There is obviously a difference between the logical view of the information system and its physical implementation as described in Figure 1-5.
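For example (all server and schema names here are hypothetical), a single logical schema can resolve to a different physical schema in each context:

Logical Schema     Context                Physical Schema
SALES_WAREHOUSE    Development            dev_server.SALES_DEV
SALES_WAREHOUSE    Production New York    ny_server.SALES_PROD
SALES_WAREHOUSE    Production Boston      boston_server.SALES_PROD

A mapping developed against SALES_WAREHOUSE can then be executed unchanged in any of these contexts.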

Figure 1-5 Logical and Physical View of the Infrastructure

Description of Figure 1-5 follows

The logical view describes logical schemas that represent the physical schemas of the existing applications independently of their physical implementation. These logical schemas are then linked to the physical resources through contexts.

Designers always refer to the logical view defined in the Topology. All development done therefore becomes independent of the physical location of the resources they address. At runtime, the logical information is mapped to the physical resources, given the appropriate contexts. The same scenario can be executed on different physical servers and applications simply by specifying different contexts. This brings a very flexible architecture where developers don't have to worry about the underlying physical implementation of the servers they rely on.

Oracle Data Integrator Architecture

The architecture of Oracle Data Integrator relies on different components that work together, as described in Figure 1-6.

Figure 1-6 Functional Architecture Overview

Description of Figure 1-6 follows

Repositories

The central component of the architecture is the Oracle Data Integrator Repository. It stores configuration information about the IT infrastructure, metadata of all applications, projects, scenarios, and the execution logs. Many instances of the repository can coexist in the IT infrastructure. The architecture of the repository is designed to allow several separated environments that exchange metadata and scenarios (for example: Development, Test, Maintenance and Production environments). In the figure above, two repositories are represented: one for the development environment, and another one for the production environment. The repository also acts as a version control system where objects are archived and assigned a version number. The Oracle Data Integrator Repository can be installed on an OLTP relational database.

The Oracle Data Integrator Repository is composed of a master repository and several Work Repositories. Objects developed or configured through the user interfaces are stored in one of these repository types.

There is usually only one master repository that stores the following information:

  • Security information including users, profiles and rights for the ODI platform

  • Topology information including technologies, server definitions, schemas, contexts, languages etc.

  • Versioned and archived objects.

The Work Repository is the one that contains actual developed objects. Several work repositories may coexist in the same ODI installation (for example, to have separate environments or to match a particular versioning life cycle). A Work Repository stores information for:

  • Models, including schema definition, datastores structures and metadata, fields and attributes definitions, data quality constraints, cross references, data lineage etc.

  • Projects, including business rules, packages, procedures, folders, Knowledge Modules, variables etc.

  • Scenario execution, including scenarios, scheduling information and logs.

When the Work Repository contains only the execution information (typically for production purposes), it is then called an Execution Repository.

For more information on how to manage ODI repositories, refer to Chapter 3, "Administering Repositories".

Users

Administrators, Developers and Operators use the Oracle Data Integrator Studio to access the repositories. This Fusion Client Platform (FCP) based UI is used for administering the infrastructure (security and topology), reverse-engineering the metadata, developing projects, scheduling, operating and monitoring executions.

Business users (as well as developers, administrators and operators) can have read access to the repository and perform topology configuration and production operations through a web based UI called Oracle Data Integrator Console. This Web application can be deployed in a Java EE application server such as Oracle WebLogic.

ODI Studio provides four Navigators for managing the different aspects and steps of an ODI integration project:

Topology Navigator

Topology Navigator is used to manage the data describing the information system's physical and logical architecture. Through Topology Navigator you can manage the topology of your information system, the technologies and their datatypes, the data servers linked to these technologies and the schemas they contain, the contexts, the language and the agents, as well as the repositories. The site, machine, and data server descriptions will enable Oracle Data Integrator to execute the same mappings in different environments.

Designer Navigator

Designer Navigator is used to design data integrity checks and to build transformations. Its main features include:

  • Automatic reverse-engineering of existing applications or databases

  • Graphical development and maintenance of transformation and mappings

  • Visualization of data flows in the mappings

  • Automatic documentation generation

  • Customization of the generated code

The main objects you handle through Designer Navigator are Models and Projects.

Operator Navigator

Operator Navigator is the production management and monitoring tool. It is designed for IT production operators. Through Operator Navigator, you can manage your executions in the sessions, as well as the scenarios in production.

Security Navigator

Security Navigator is the tool for managing the security information in Oracle Data Integrator. Through Security Navigator you can create users and profiles and assign user rights for methods (edit, delete, etc) on generic objects (data server, datatypes, etc), and fine-tune these rights on the object instances (Server 1, Server 2, etc).

Design-time Projects

A typical project is composed of several steps and milestones.

Some of these are:

  • Define the business needs

  • Identify and declare the sources and targets in the Topology

  • Design and Reverse-engineer source and target data structures in the form of data models

  • Implement data quality rules on these data models and perform static checks on these data models to validate the data quality rules

  • Develop mappings using datastores from these data models as sources and targets

  • Develop additional components for tasks that cannot be achieved using mappings, such as receiving and sending e-mails, handling files (copy, compress, rename and so forth), or executing web services

  • Integrate mappings and additional components for building Package workflows

  • Version your work and release it in the form of scenarios

  • Schedule and operate scenarios.

Oracle Data Integrator will help you cover most of these steps, from source data investigation to metadata lineage, and through loading and data quality audit. With its repository, Oracle Data Integrator will centralize the specification and development efforts and provide a unique architecture on which the project can rely to succeed.

Chapter 2, "Overview of an Integration Project" introduces you to the basic steps of creating an integration project with Oracle Data Integrator. Chapter 9, "Creating an Integration Project" gives you more detailed information on the several steps of creating an integration project in ODI.

Run-Time Agent

At design time, developers generate scenarios from the business rules that they have designed. The code of these scenarios is then retrieved from the repository by the Run-Time Agent. This agent then connects to the data servers and orchestrates the code execution on these servers. It retrieves the return codes and messages for the execution, as well as additional logging information such as the number of processed records and the execution time, and stores them in the Repository.

The Agent comes in two different flavors:

  • The Java EE Agent can be deployed as a web application and benefit from the features of an application server.

  • The Standalone Agent runs in a simple Java Virtual Machine and can be deployed where needed to perform the integration flows.

Both these agents are multi-threaded Java programs that support load balancing and can be distributed across the information system. This agent holds its own execution schedule, which can be defined in Oracle Data Integrator, and can also be called from an external scheduler. It can also be invoked from a Java API or a web service. Refer to Chapter 4, "Setting Up a Topology" for more information on how to create and manage agents.

ODI Domains

An ODI domain contains the Oracle Data Integrator components that can be managed using Oracle Enterprise Manager Cloud Control (EMCC). An ODI domain contains:

  • Several Oracle Data Integrator Console applications. An Oracle Data Integrator Console application is used to browse master and work repositories.

  • Several Run-Time Agents attached to the Master Repositories. These agents must be declared in the Master Repositories to appear in the domain. These agents may be Standalone Agents or Java EE Agents. See Chapter 4, "Setting Up a Topology" for information about how to declare Agents in the Master Repositories.

In EMCC, the Master Repositories and Agent pages display both application metrics and information about the Master and Work Repositories. You can also navigate to Oracle Data Integrator Console from these pages, for example to view the details of a session. In order to browse Oracle Data Integrator Console in EMCC, the connections to the Work and Master repositories must be declared in Oracle Data Integrator Console. See Installing and Configuring Oracle Data Integrator for more information.


24 Using Oracle Data Integrator Console

This chapter describes how to work with Oracle Data Integrator Console. An overview of the Console user interface is provided.

This chapter includes the following sections:

Introduction to Oracle Data Integrator Console

Oracle Data Integrator Console is a web-based console for managing and monitoring an Oracle Data Integrator run-time architecture and for browsing design-time objects.

This section contains the following topics:

Oracle Data Integrator Console Concepts

Oracle Data Integrator Console is a web-based console available for different types of users:

  • Administrators use Oracle Data Integrator Console to create and import repositories and to configure the Topology (data servers, schemas, and so forth).

  • Production operators use Oracle Data Integrator Console to manage scenarios and Load Plans, monitor sessions and Load Plan runs, and manage the content of the error tables generated by Oracle Data Integrator.

  • Business users and developers browse development artifacts in this interface, using, for example, the Data Lineage and Flow Map features.

This web interface integrates seamlessly with Oracle Fusion Middleware Control Console and allows Fusion Middleware administrators to drill down into the details of Oracle Data Integrator components and sessions.


Note:

Oracle Data Integrator Console is required for the Fusion Middleware Control Extension for Oracle Data Integrator. It must be installed and configured for this extension to discover and display the Oracle Data Integrator components in a domain.


Oracle Data Integrator Console Interface

Oracle Data Integrator Console is a web interface using the ADF-Faces framework.

Figure 24-1 shows the layout of Oracle Data Integrator Console.

Figure 24-1 Oracle Data Integrator Console

This image shows ODI Console.

Oracle Data Integrator Console displays the objects available to the current user in two Navigation tabs in the left panel:

  • Browse tab displays the repository objects that can be browsed and edited. In this tab you can also manage sessions and error tables.

  • Management tab is used to manage the repositories and the repository connections. This tab is available to users with Supervisor privileges, or to any user when setting up the first repository connections.

The right panel displays the following tabs:

  • Search tab is always visible and allows you to search for objects in the connected repository.

  • One Master/Details tab is displayed for each object that is being browsed or edited. Note that it is possible to browse or edit several objects at the same time.

The search field above the Navigation tabs allows you to open the search tab when it is closed.

Working with the Navigation Tabs

In the Navigation tabs, you can browse for objects contained in the repository. When an object or node is selected, the Navigation Tab toolbar displays icons for the actions available for this object or node. If an action is not available for this object, the icon is grayed out. For example, you can edit and add data server objects under the Topology node in the Browse Tab, but you cannot edit Projects under the Designer node. Note that the number of tabs that you can open at the same time is limited to ten.

Using Oracle Data Integrator Console

This section explains the different types of operations available in Oracle Data Integrator Console. It does not focus on each type of object that can be managed with the console, but provides the general principles for managing objects with it.

This section includes the following topics:


Note:

Oracle Data Integrator Console uses the security defined in the master repository. Operations that are not allowed for a user will appear grayed out for this user.

In addition, the Management tab is available only for users with Supervisor privileges.


Connecting to Oracle Data Integrator Console

Oracle Data Integrator Console connects to a repository via a Repository Connection, defined by an administrator.

Note that you can only connect to Oracle Data Integrator Console if it has been previously installed. See Installing and Configuring Oracle Data Integrator for more information about installing Oracle Data Integrator Console.


Note:

The first time you connect to Oracle Data Integrator Console, if no repository connection is configured, you will have access to the Management tab to create a first repository connection. See "Creating a Repository Connection" for more information. After your first repository connection is created, the Management tab is no longer available from the Login page, and is available only for users with Supervisor privileges.


Connecting to Oracle Data Integrator Console

To connect to Oracle Data Integrator Console:

  1. Open a web browser, and connect to the URL where Oracle Data Integrator Console is installed. For example: http://odi_host:8001/odiconsole/.

  2. From the Repository list, select the Repository connection corresponding to the master or work repository you want to connect to.

  3. Provide a User ID and a Password.

  4. Click Sign In.

Generic User Operations

This section describes the generic operations available in Oracle Data Integrator Console for a typical user.

This section includes the following operations:


Note:

Creating, editing, and deleting operations are not allowed for Scenarios and Load Plans. For more information on the possible actions that can be performed with these objects in ODI Console, see "Managing Scenarios and Sessions" and "Managing Load Plans".


Viewing an Object

To view an object:

  1. Select the object in the Browse or Management Navigation tab.

  2. Click View in the Navigation tab toolbar. The simple page or the Master/Detail page for the object opens.

Editing an Object

To edit an object:

  1. Select the object in the Browse or Management Navigation tab.

  2. Click Update in the Navigation tab toolbar. The editing page for the object opens.

  3. Change the value for the object fields.

  4. Click Save in the editing page for this object.

Creating an Object

To create an object:

  1. Navigate to the parent node of the object you want to create in the Browse or Management Navigation tab. For example, to create a Context, navigate to the Topology > Contexts node in the Browse tab.

  2. Click Create in the Navigation tab toolbar. An Add dialog for this object appears.

  3. Provide the values for the object fields.

  4. Click Save in the Add dialog of this object. The new object appears in the Navigation tab.

Deleting an Object

To delete an object:

  1. Select the object in the Browse or Management Navigation tab.

  2. Click Delete in the Navigation tab toolbar.

  3. Click OK in the confirmation window.

Searching for an Object

To search for an object:

  1. In the Search tab, select the tab corresponding to the object you want to search:

    • Design Time tab allows you to search for design-time objects

    • Topology tab allows you to search for topology objects

    • Runtime tab allows you to search for run-time objects such as Load Plans, Scenarios, Scenario Folders, or Session Folders

    • Sessions tab allows you to search for sessions

    • Load Plan Execution tab allows you to search for Load Plan runs

  2. Set the search parameters to narrow your search.

    For example, when searching design-time or topology objects:

    1. In the Search Text field, enter a part of the name of the object that you want to search.

    2. Select Case sensitive if you want the search to be case sensitive (this feature is not provided for the sessions or Load Plan execution search).

    3. Select in Models/Project (Designer tab) or Topology (Topology tab) the type of object you want to search for. Select All to search for all objects.

  3. Click Search.

  4. The Search Results appear, grouped by object type. You can click an object in the search result to open its master/details page.

Managing Scenarios and Sessions

This section describes the operations related to scenarios and sessions available in Oracle Data Integrator Console.

This section includes the following operations:

Importing a Scenario

To import a scenario:

  1. Select the Browse Navigation tab.

  2. Navigate to Runtime > Scenarios/Load Plans > Scenarios.

  3. Click Import in the Navigation tab toolbar.

  4. Select an Import Mode and select an export file in Scenario XML File.

  5. Click Import Scenario.

Exporting a Scenario

To export a scenario:

  1. Select the Browse Navigation tab.

  2. Navigate to Runtime > Scenarios/Load Plans > Scenarios.

  3. Click Export in the Navigation tab toolbar.

  4. In the Export Scenario dialog, set the parameters as follows:

    • From the Scenario Name list, select the scenario to export.

    • In the Encoding Java Charset field, enter the Java character set for the export file.

    • In the Encoding XML Charset field, enter the encoding to specify in the export file.

    • In the XML Version field, enter the XML Version to specify in the export file.

    • Optionally, select Include Dependant objects to export linked child objects.

  5. Click Export Scenario.

Running a Scenario

To execute a scenario:

  1. Select the Browse Navigation tab.

  2. Navigate to Runtime > Scenarios/Load Plans > Scenarios.

  3. Select the scenario you want to execute.

  4. Click Execute in the Navigation tab toolbar.

  5. Select an Agent, a Context, and a Log Level for this execution.

  6. Click Execute Scenario.

Stopping a Session

Note that you can perform a normal or an immediate kill of a running session. Sessions with the status Done, Warning, or Error cannot be killed.

To kill a session:

  1. Select the Browse Navigation tab.

  2. Navigate to Runtime > Sessions/Load Plan Executions > Sessions.

  3. Select the session you want to stop.

  4. Click Kill in the Navigation tab toolbar.

Restarting a Session

To restart a session:

  1. Select the Browse Navigation tab.

  2. Navigate to Runtime > Sessions/Load Plan Executions > Sessions.

  3. Select the session you want to restart.

  4. Click Restart in the Navigation tab toolbar.

  5. In the Restart Session dialog, set the parameters as follows:

    • Agent: From the list, select the agent you want to use for running the new session.

    • Log Level: From the list, select the log level. Select Log Level 6 in the Execution or Restart Session dialog to enable variable tracking. Log level 6 has the same behavior as log level 5, but with the addition of variable tracking.

  6. Click Restart Session.

Cleaning Stale Sessions

To clean stale sessions:

  1. Select the Browse Navigation tab.

  2. Navigate to Runtime > Sessions/Load Plan Executions > Sessions.

  3. Click Clean in the Navigation tab toolbar.

  4. In the Clean Stale Sessions dialog, select the Agent for which you want to clean stale sessions.

  5. Click OK.

Managing Data Statistics and Erroneous Records

Oracle Data Integrator Console allows you to browse the details of a session, including the record statistics. When a session detects erroneous data during a flow or static check, these errors are isolated into error tables. You can also browse and manage the erroneous rows using Oracle Data Integrator Console.


Note:

Sessions with erroneous data detected finish in Warning status.


To view the erroneous data:

  1. Select the Browse Navigation tab.

  2. Navigate to a given session using Runtime > Sessions/Load Plan Executions > Sessions. Select the session and click View in the Navigation tab toolbar.

    The Session page is displayed.

  3. In the Session page, go to the Relationships section and select the Record Statistics tab.

    This tab shows each physical table that is a target in this session, as well as the record statistics.

  4. Click the number shown in the Errors column. The content of the error table appears.

    • You can filter the errors by Constraint Type, Name, Message Content, Detection date, and so forth. Click Filter Result to apply a filter.

    • Select a number of errors in the Query Results table and click Delete to delete these records.

    • Click Delete All to delete all the errors.


Note:

Delete operations cannot be undone.


Managing Load Plans

This section describes the operations related to Load Plans available in Oracle Data Integrator Console.

This section includes the following operations:

Importing a Load Plan

To import a Load Plan:

  1. Select the Browse Navigation tab.

  2. Navigate to Runtime > Scenarios/Load Plans > Load Plans.

  3. Click Import in the Navigation tab toolbar.

  4. In the Import Load Plan dialog, select an Import Mode and select an export file in the Select Load Plan XML File field.

  5. Click Import.


Note:

When you import a Load Plan that has been previously exported, the imported Load Plan does not include the scenarios referenced by the Load Plan. Scenarios used in a Load Plan need to be imported separately. See "Importing a Scenario" for more information.


Exporting a Load Plan

To export a Load Plan:

  1. Select the Browse Navigation tab.

  2. Navigate to Runtime > Scenarios/Load Plans > Load Plans.

  3. Select the Load Plan to export.

  4. Click Export in the Navigation tab toolbar.

  5. In the Export dialog, set the parameters as follows:

    • From the Load Plan Name list, select the Load Plan to export.

    • In the Encoding Java Charset field, enter the Java character set for the export file.

    • In the Encoding XML Charset field, enter the encoding to specify in the export file.

    • In the XML Version field, enter the XML Version to specify in the export file.

    • Optionally, select Include Dependant objects to export linked child objects.

  6. Click Export.


Note:

The export of a Load Plan does not include the scenarios referenced by the Load Plan. Scenarios used in a Load Plan need to be exported separately. See "Exporting a Scenario" for more information.


Running a Load Plan

To run a Load Plan:

  1. Select the Browse Navigation tab.

  2. Navigate to Runtime > Scenarios/Load Plans > Load Plans.

  3. Select the Load Plan you want to execute.

  4. Click Execute in the Navigation tab toolbar.

  5. Select a Logical Agent, a Context, a Log Level, and if your Load Plan uses variables, specify the Startup values for the Load Plan variables.

  6. Click Execute.

Stopping a Load Plan Run

Note that you can perform a normal or an immediate kill of a Load Plan run. Any running or waiting Load Plan Run can be stopped.

To stop a Load Plan Run:

  1. Select the Browse Navigation tab.

  2. Navigate to Runtime > Sessions/Load Plan Executions > Load Plan Executions.

  3. Select the Load Plan run you want to stop.

  4. Click Kill in the Navigation tab toolbar.

Restarting a Load Plan Run

A Load Plan can only be restarted if the selected run of the current Load Plan instance is in Error status and if there is no other instance of the same Load Plan currently running.

To restart a Load Plan Run:

  1. Select the Browse Navigation tab.

  2. Navigate to Runtime > Sessions/Load Plan Executions > Load Plan Executions.

  3. Select the Load Plan run you want to restart.

  4. Click Restart in the Navigation tab toolbar.

  5. In the Restart Load Plan dialog, select the Physical Agent that restarts the Load Plan. Optionally, select a different log level.

  6. Click Restart.

Purging the Log

This section describes how to purge the log in Oracle Data Integrator Console by removing past sessions and/or Load Plan runs from the log.

To purge the log:

  1. Select the Browse Navigation tab.

  2. Navigate to Runtime > Sessions/Load Plan Executions.

  3. Click Purge in the Navigation tab toolbar.

  4. In the Purge Sessions/Load Plan Executions dialog, set the purge parameters listed in Table 24-1.

    Table 24-1 Purge Log Parameters

    Purge Type: Select the objects to purge.

    From ... To: Sessions and/or Load Plan runs in this time range will be deleted.

    Context: Sessions and/or Load Plan runs executed in this context will be deleted.

    Agent: Sessions and/or Load Plan runs executed by this agent will be deleted.

    Status: Sessions and/or Load Plan runs in this status will be deleted.

    User: Sessions and/or Load Plan runs executed by this user will be deleted.

    Name: Sessions and/or Load Plan runs matching this session name will be deleted. Note that you can specify session name masks using % as a wildcard.

    Purge scenario reports: If you select this option, the scenario reports (appearing under the execution node of each scenario) will also be purged.


    Only the sessions and/or Load Plan runs matching the specified filters will be removed:

    • When you choose to purge session logs only, then the sessions launched as part of the Load Plan runs are not purged even if they match the filter criteria.

    • When you purge Load Plan runs, the Load Plan run which matched the filter criteria, the sessions launched directly as part of the Load Plan run, and their child and grandchild sessions will be deleted.

    • When a Load Plan run matches the filter, all its attached sessions are also purged irrespective of whether they match the filter criteria or not.

  5. Click OK.

Oracle Data Integrator Console removes the sessions and/or Load Plan runs from the log.

Using Data Lineage and Flow Map

This section describes how to use the Data Lineage and Flow Map features available in Oracle Data Integrator Console.

  • Data Lineage provides a graph displaying the flows of data from the point of view of a given datastore. In this graph, you can navigate back and forth and follow the data flow.

  • Flow Map provides a map of the relations that exist between the data structures (models, sub-models and datastores) and design-time objects (projects, folders, packages, mappings). This graph allows you to draw a map made of several data structures and their data flows.

This section includes the following operations:

Working with the Data Lineage

To view the Data Lineage:

  1. Select the Browse Navigation tab.

  2. Navigate to Design Time > Models > Data Lineage.

  3. Click View in the Navigation tab toolbar.

  4. In the Data Lineage page, select a Model, then a Sub-Model and a datastore in this model.

  5. Select Show Mappings if you want mappings to be displayed between the datastore nodes.

  6. In the Naming Options section, select the prefix to add to your datastore and mapping names.

  7. Click View to draw the Data Lineage graph. This graph is centered on the datastore selected in step 4.

    In this graph, you can use the following actions:

    • Click Go Back to return to the Data Lineage options and redraw the graph.

    • Use the Hand tool and then click a datastore to redraw the lineage centered on this datastore.

    • Use the Hand tool and then click a mapping to view this mapping's page.

    • Use the Arrow tool to expand/collapse groups.

    • Use the Move tool to move the graph.

    • Use the Zoom In/Zoom Out tools to resize the graph.

    • Select View Options to change the display options; the graph is refreshed with the new options.

Working with the Flow Map

To view the Flow Map:

  1. Select the Browse Navigation tab.

  2. Navigate to Design Time > Models > Flow Map.

  3. Click View in the Navigation tab toolbar.

  4. In the Flow Map page, select one or more Models. Select All to select all models.

  5. Select one or more Projects. Select All to select all projects.

  6. In the Select the level of details of the map section, select the granularity of the map. The objects that you select here will be the nodes of your graph.

    Check Do not show Projects, Folders... if you want the map to show only data structures.

  7. Optionally, indicate the grouping for the data structures and design-time objects in the map, using the options in the Indicate how to group Objects in the Map section.

  8. Click View to draw the Flow Map graph.

    In this graph, you can use the following actions:

    • Click Go Back to return to the Flow Map options and redraw the graph.

    • Use the Hand tool and then click a node (representing a datastore, a mapping, and so forth) in the map to open this object's page.

    • Use the Arrow tool to expand/collapse groups.

    • Use the Move tool to move the graph.

    • Use the Zoom In/Zoom Out tools to resize the graph.

Performing Administrative Operations

This section describes the different administrative operations available in Oracle Data Integrator Console. These operations are available for a user with Supervisor privileges.

This section includes the following operations:

Creating a Repository Connection

A repository connection is a connection definition for Oracle Data Integrator Console. A connection does not include Oracle Data Integrator user and password information.

To create a repository connection:

  1. Navigate to the Repository Connections node in the Management Navigation tab.

  2. Click Create in the Navigation tab toolbar. A Create Repository Connection dialog for this object appears.

  3. Provide the values for the repository connection:

    • Connection Alias: Name of the connection that will appear on the Login page.

    • Master JNDI URL: JNDI URL of the datasource to connect the master repository database.

    • Supervisor User Name: Name of the Oracle Data Integrator user with Supervisor privileges that Oracle Data Integrator Console will use to connect to the repository. This user's password must be declared in the WLS or WAS Credential Store.

    • Work JNDI URL: JNDI URL of the datasource to connect the work repository database. If no value is given in this field, the repository connection allows connection to the master repository only, and the Navigation will be limited to Topology information.

    • JNDI URL: Check this option if you want to use the environment naming context (ENC). When this option is checked, Oracle Data Integrator Console automatically prefixes the data source name with the string java:comp/env/ to identify it in the application server's JNDI directory. Note that the JNDI standard is not supported by Oracle WebLogic Server or for global data sources.

    • Default: Check this option if you want this Repository Connection to be selected by default on the login page.

  4. Click Save. The new Repository Connection appears in the Management Navigation tab.
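For example, a typical repository connection might reference datasources such as the following (these JNDI names are illustrative, not defaults):

    Master JNDI URL: jdbc/odiMasterRepository
    Work JNDI URL:   jdbc/odiWorkRepository

If the JNDI URL option is checked, Oracle Data Integrator Console looks these names up under the environment naming context, for example as java:comp/env/jdbc/odiMasterRepository.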

Testing a Data Server or a Physical Agent Connection

This section describes how to test the data server connection or the connection of a physical agent in Oracle Data Integrator Console.

To test the data server connection:

  1. Select the Browse Navigation tab.

  2. Navigate to Topology > Data Servers.

  3. Select the data server whose connection you want to test.

  4. Click Test Connection in the Navigation tab toolbar.

  5. In the Test Connection dialog, select the:

    • Physical Agent that will carry out the test

    • Transaction on which you want to execute the command. This parameter is only displayed if there is an On Connect/Disconnect command defined for this data server. The transactions from 0 to 9 and the Autocommit transaction correspond to connections created by sessions (by procedures or knowledge modules). The Client Transaction corresponds to the client components (ODI Console and Studio).

  6. Click Test.

A dialog showing "Connection successful!" is displayed if the test has worked. If not, an error message is displayed.

To test the physical agent connection:

  1. Select the Browse Navigation tab.

  2. Navigate to Topology > Agents > Physical Agents.

  3. Select the physical agent whose connection you want to test.

  4. Click Test Connection in the Navigation tab toolbar.

A dialog showing "Connection successful!" is displayed if the test has worked. If not, an error message is displayed.

Administering Repositories

Oracle Data Integrator Console provides you with features to perform management operations (create, import, export) on repositories. These operations are available from the Management Navigation tab, under the Repositories node. These management operations reproduce in a web interface the administrative operations available via the Oracle Data Integrator Studio and allow setting up and maintaining your environment from the ODI Console.

See Chapter 3, "Administering Repositories," and Chapter 20, "Exporting and Importing," for more information on these operations.

Administering Java EE Agents

Oracle Data Integrator Console allows you to add JDBC datasources and create templates to deploy physical agents into WebLogic Server.

See Chapter 4, "Setting Up a Topology," for more information on Java EE Agents, datasources and templates.

To add a datasource to a physical agent:

  1. Select the Browse Navigation tab.

  2. Navigate to Topology > Agents > Physical Agents.

  3. Select the agent you want to manage.

  4. Click Edit in the Navigation tab toolbar.

  5. Click Add Datasource.

  6. Provide a JNDI Name for this datasource and select the Data Server Name. This datasource will be used to connect to this data server from the machine into which the Java EE Agent will be deployed.

  7. Click OK.

  8. Click Save to save the changes to the physical agent.

To create a template for a physical agent:

  1. Select the Browse Navigation tab.

  2. Navigate to Topology > Agents > Physical Agents.

  3. Select the agent you want to manage.

  4. Click Edit in the Navigation tab toolbar.

  5. Click Agent Deployment.

  6. Follow the steps of the Agent Deployment wizard. This wizard reproduces in a web interface the Server Template Generation wizard. See "Deploying an Agent in a Java EE Application Server" for more details.


11 Creating and Using Mappings

This chapter describes how to create and use mappings.

This chapter includes the following sections:

Introduction to Mappings

Mappings are the logical and physical organization of your data sources, targets, and the transformations through which the data flows from source to target. You create and manage mappings using the mapping editor, a new feature of ODI 12c.

The mapping editor opens whenever you open a mapping. Mappings are organized in folders under individual projects, found under Projects in the Designer Navigator.

Parts of a Mapping

A mapping is made up of and defined by the following parts:

  • Datastores

    Source datastores are read by a mapping, and can be filtered during the loading process. Target datastores are the elements that are loaded by the mapping. Datastores act as Projector Components.

    Datastores that will be used as sources and targets of the loading process must be populated into the data models before you can use them in a mapping. See Chapter 5, "Creating and Using Data Models and Datastores" for more information.

  • Datasets

    Optionally, you can use datasets within mappings as sources. Datasets provide a logical container in which you can organize sources, and define joins and filters on them through an entity-relationship mechanism, rather than the flow mechanism used elsewhere in mappings. Datasets operate similarly to ODI 11g interfaces, and if you import 11g interfaces into ODI 12c, ODI will automatically create datasets based on your interface logic. Datasets act as selector components.

  • Reusable Mappings

    Reusable mappings are modular, encapsulated flows of components which you can save and re-use. You can place a reusable mapping inside another mapping, or another reusable mapping (that is, reusable mappings may be nested). A reusable mapping can also include datastores as sources and targets itself, like other mapping components. Reusable mappings act as projector components.

  • Other Components

    ODI provides additional components that are used in between sources and targets to manipulate the data. These components are available on the component palette in the mapping diagram.

    The following components are available in the component palette:

    • Expression

    • Aggregate

    • Distinct

    • Set

    • Filter

    • Join

    • Lookup

    • Sort

    • Split

  • Connections

    Connections create a flow of data between mapping components. Most components can have both input and output connections. Datastores with only output connections are considered sources; datastores with only input connections are considered targets. Some components can support multiple input or output connections; for example, the split component supports two or more output connections, allowing you to split data into multiple downstream flows.

  • Staging Schemas

    Optionally, you can specify a staging area for a mapping or for a specific deployment specification of a mapping. If you want to define a different staging area than any of the source or target datastores, you must define the correct physical and logical schemas in the mapping's execution context before creating the mapping. See Chapter 4, "Setting Up a Topology" for more information.

  • Knowledge Modules

    Knowledge modules define how data will be transferred between data servers and loaded into data targets. Knowledge Modules (IKMs, LKMs, EKMs, and CKMs) that will be selected in the flow must be imported into the project or must be available as global Knowledge Modules.

    IKMs allow you to define (or specify) how the actual transformation and loading is performed.

    LKMs allow you to specify how the transfer of data from one data server to another is performed.

    EKMs define how data will be extracted from your data sources.

    When used as flow control, CKMs allow you to check for errors in the data flow during the loading of records into a target datastore. When used as static control, CKMs can be used to check for any errors in a target table after the data is loaded by the main mapping logic.

    You can select a strategy to perform these tasks by selecting an appropriate KM. For example, you can decide whether to use an ODI agent to transfer data between two databases, or use an Oracle database link if the transfer is between two Oracle databases.

    See Chapter 9, "Creating an Integration Project" for more information.

  • Variables, Sequences, and User Functions

    Variables, Sequences, and User Functions that will be used in expressions within your mappings must be created in the project. See Chapter 13, "Creating and Using Procedures, Variables, Sequences, and User Functions" for more information.

Navigating the Mapping Editor

The mapping editor provides a single environment for designing and editing mappings.

Mappings are organized within folders in a project in the Designer Navigator. Each folder has a mappings node, within which all mappings are listed.

To open the mapping editor, right-click an existing mapping and select Open, or double-click the mapping. To create a new mapping, right-click the Mappings node and select New Mapping. The mapping is opened as a tab on the main pane of ODI Studio. Select the tab corresponding to a mapping to view the mapping editor.

Figure 11-1 Mapping Editor


The mapping editor consists of the sections described in Table 11-1:

Table 11-1 Mapping Editor Sections

Section / Location in Figure 11-1 / Description

Mapping Diagram

Middle

The mapping diagram displays an editable logical or physical view of a mapping. These views are sometimes called the logical diagram or the physical diagram.

You can drag datastores from the Models tree, and reusable mappings from the Global Objects or Projects tree, into the mapping diagram. You can also drag components from the component palette to define various data operations.

Mapping Editor tabs

Middle, at the bottom of the mapping diagram

The Mapping Editor tabs are ordered according to the mapping creation process. These tabs are:

  • Overview: displays the general properties of the mapping

  • Logical: displays the logical organization of the mapping in the mapping diagram

  • Physical: displays the physical organization of the mapping in the mapping diagram

Property Inspector

Bottom

Displays properties for the selected object.

If the Property Inspector does not display, select Properties from the Window menu.

Component Palette

Right

Displays the mapping components you can use for creating mappings. You can drag and drop components into the logical or physical mapping diagram from the components palette.

If the Component Palette does not display, select Components from the Window menu.

Structure Panel

Not shown

Displays a text-based hierarchical tree view of a mapping, which is navigable using the tab and arrow keys.

The Structure Panel does not display by default. To open it, select Structure from the Window menu.

Thumbnail Panel

Not shown

Displays a miniature graphic of a mapping, with a rectangle indicating the portion currently showing in the mapping diagram. This panel is useful for navigating very large or complex mappings.

The Thumbnail Panel does not display by default. To open it, select Thumbnail from the Window menu.


Creating a Mapping

Creating a mapping follows a standard process which can vary depending on the use case.

Using the logical diagram of the mapping editor, you can construct your mapping by dragging components onto the diagram, dragging connections between the components, dragging attributes across those connections, and modifying the properties of the components using the property inspector. When the logical diagram is complete, you can use the physical diagram to define where and how the integration process will run on your physical infrastructure. When the logical and physical design of your mapping is complete, you can run it.

The following step sequence is usually performed when creating a mapping, and can be used as a guideline to design your first mappings:

  1. Creating a New Mapping

  2. Adding and Removing Components

  3. Connecting and Configuring Components

  4. Defining a Physical Configuration

  5. Running Mappings


Note:

You can also use the Property Inspector and the Structure Panel to perform steps 2 to 5. See "Editing Mappings Using the Property Inspector and the Structure Panel" for more information.


Creating a New Mapping

To create a new mapping:

  1. In Designer Navigator select the Mappings node in the folder under the project where you want to create the mapping.

  2. Right-click and select New Mapping. The New Mapping dialog is displayed.

  3. In the New Mapping dialog, fill in the mapping Name. Optionally, enter a Description. If you want the new mapping to contain a new empty dataset, select Create Empty Dataset. Click OK.


    Note:

    You can add and remove datasets (including this empty dataset) after you create a mapping. Datasets mimic behavior from older versions of ODI, but they are entirely optional and all behavior of a dataset can be created using other components in the mapping editor.


    Your new mapping opens in a new tab in the main pane of ODI Studio.


    Tip:

    To display the editor of a datastore, a reusable mapping, or a dataset that is used in the Mapping tab, you can right-click the object and select Open.


Adding and Removing Components

Add components to the logical diagram by dragging them from the Component Palette. Drag datastores and reusable mappings from the Designer Navigator.

Delete components from a mapping by selecting them, and then either hitting the Delete key, or using the right-click context menu to select Delete. A confirmation dialog is shown.

Source and target datastores are the elements that will be read by, and loaded by, the mapping.

Between the source and target datastores are arranged all the other components of a mapping. When the mapping is run, data will flow from the source datastores, through the components you define, and into the target datastores.

Preserving and Removing Downstream or Upstream Expressions

Where applicable, when you delete a component, a check box in the confirmation dialog allows you to preserve, or remove, downstream or upstream expressions; such expressions may have been created when you connected or modified a component. By default ODI preserves these expressions.

This feature allows you to make changes to a mapping without destroying work you have already done. For example, when a source datastore is mapped to a target datastore, the attributes are all mapped. You then realize that you need to filter the source data. To add the filter, one option is to delete the connection between the two datastores, but preserve the expressions, and then connect a filter in the middle. None of the mapping expressions are lost.

Connecting and Configuring Components

Create connectors between components by dragging from the originating connector port to the destination connector port. Connectors can also be implicitly created by dragging attributes between components. When creating a connector between two ports, an attribute matching dialog can be shown to automatically map attributes based on name or position.

Attribute Matching

The Attribute Matching Dialog is displayed when a connector is drawn to a projector component (see: "Projector Components") in the Mapping Editor. The Attribute Matching Dialog gives you an option to automatically create expressions to map attributes from the source to the target component based on a matching mechanism. It also gives the option to create new attributes on the target based on the source, or new attributes on the source based on the target.

This feature allows you to easily define a set of attributes in a component that are inherited from another component. For example, you could drag a connection from a new, empty Set component to a downstream target datastore. If you leave the Create Attributes On Source option checked in the Attribute Matching dialog, the Set component will be populated with all of the attributes of the target datastore. When you connect the Set component to upstream components, you will already have the target attributes ready for you to map the upstream attributes to.

Connector Points and Connector Ports

Connector points define the connections between components inside a mapping. A connector point is a single pathway for input or output for a component.

Connector ports are the small circles on the left and/or right sides of components displayed in the mapping diagram.

Most components have both input and output connector points (the component type may place limitations on how many connector points are allowed, and some components can have only input or only output connections). Some components allow the addition or deletion of connector points in the property inspector.

You can click a connector port on one component and drag a line to another component's connector port to define a connection. ODI will either use an unused existing connector point on each component, or create an additional connector point as needed.

For example, a Join component by default has two input connector points and one output connector point. If you drag a third connection to the input connector port of a join component, ODI creates a third input connector point. You can also select a Join component and, in the property inspector, in the Connector Points section, click the green plus icon to add an additional Input Connector Point.


Note:

You cannot drag a connection to or from a port that already has the maximum number of connections. For example, a Join component can have only one output connector point; if you try to drag another connection from the output connector port, no connection is created.


You can delete a connector by right-clicking the line between two connector points and selecting Delete, or by selecting the line and pressing the delete key.

Defining New Attributes

When you add components to a mapping, you may need to create attributes in them in order to move data across the flow from sources, through intermediate components, to targets. Typically you define new attributes to perform transformations of the data.

Use any of the following methods to define new attributes:

  • Attribute Matching Dialog: This dialog is displayed in certain cases when dragging a connection from a connector port on one component to a connector port on another, when at least one component is a projector component.

    The attribute matching dialog includes an option to create attributes on the target. If the target already has attributes with matching names, ODI will automatically map to these attributes. If you choose By Position, ODI maps the first attributes to existing attributes in the target, and then adds the rest (if there are more) below them. For example, if there are three attributes in the target component, and the source has 12, the first three attributes map to the existing attributes, and the remaining nine are copied over with their existing labels.

  • Drag and drop attributes: Drag and drop a single (or multi-selected) attribute from one component into another component (into a blank area of the component graphic, not on top of an existing attribute). ODI creates a connection (if one did not already exist), and also creates the attribute.


    Tip:

    If the graphic for a component is "full", you can hover over the attributes and a scroll bar appears on the right. Scroll to the bottom to expose a blank line. You can then drag attributes to the blank area.

    If you drag an attribute onto another attribute, ODI maps it into that attribute, even if the names do not match. This does not create a new attribute on the target component.


  • Add new attributes in the property inspector: In the property inspector, on the Attributes tab, use the green plus icon to create a new attribute. You can select or enter the new attribute's name, data type, and other properties in the Attributes table. You can then map to the new attribute by dragging attributes from other components onto the new attribute.


    Caution:

    ODI will allow you to create an illegal data type connection. Therefore, you should always set the appropriate data type when you create a new attribute. For example, if you intend to map an attribute with a DATE data type to a new attribute, you should set the new attribute to have the DATE type as well.

    Type-mismatch errors will be caught during validation or run-time.


Defining Expressions and Conditions

Expressions and conditions are used to map individual attributes from component to component. Component types determine the default expressions and conditions that will be converted into the underlying code of your mapping.

For example, any target component has an expression for each attribute. A filter, join, or lookup component will use code (such as SQL) to create the expression appropriate to the component type.

You can modify the expressions and conditions of any component by modifying the code displayed in various property fields.

Expressions have a result type, such as VARCHAR or NUMERIC. Conditions are boolean, meaning the result of a condition should always evaluate to TRUE or FALSE. A condition is needed for filter, join, and lookup (selector) components, while an expression is used in datastore, aggregate, and distinct (projector) components, to perform a transformation or create the attribute-level mappings.

Every projector component can have an expression on its incoming values. If you modify the expression for an attribute, a small "f" icon appears on the attribute in the logical diagram. This icon provides a visual cue that a function has been placed there.
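For example, in a SQL-based technology, an expression and a condition might look like the following sketch (table and column names are illustrative):

    -- Expression on a projector attribute (VARCHAR result type):
    UPPER(CUSTOMER.LAST_NAME) || ', ' || CUSTOMER.FIRST_NAME

    -- Condition on a selector component (boolean result):
    CUSTOMER.STATUS = 'ACTIVE' AND CUSTOMER.CREDIT_LIMIT > 0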

To define the mapping of a target attribute:

  1. In the mapping editor, select an attribute to display the attribute's properties in the Property Inspector.

  2. In the Target tab (for expressions) or Condition tab (for conditions), modify the Expression or Condition field(s) to create the required logic.


    Tip:

    The attributes from any component in the diagram can be drag-and-dropped into an expression field to automatically add the fully-qualified attribute name to the code.


  3. Optionally, select or hover over any field in the property inspector containing an expression, and then click the gear icon that appears to the right of the field, to open the advanced Expression Editor.

    The attributes on the left are only the ones that are in scope (have already been connected). So if you create a component with no upstream or downstream connection to a component with attributes, no attributes are listed.

  4. Optionally, after modifying an expression or condition, consider validating your mapping to check for errors in your SQL code. Click the green check mark icon at the top of the logical diagram. Errors, if any, will be displayed in an error dialog.

Defining a Physical Configuration

In the Physical tab of the mapping editor, you define the loading and integration strategies for mapped data. Oracle Data Integrator automatically computes the flow depending on the configuration in the mapping's logical diagram. It proposes default knowledge modules (KMs) for the data flow. The Physical tab enables you to view the data flow and select the KMs used to load and integrate data.

For more information about physical design, see "Physical Design".

Running Mappings

Once a mapping is created, you can run it. This section briefly summarizes the process of running a mapping. For detailed information about running your integration processes, see: Chapter 21, "Running Integration Processes."

To run a mapping:

  1. From the Projects menu of the Designer Navigator, right-click a mapping and select Run.

    Or, with the mapping open in the mapping editor, click the run icon in the toolbar. Or, select Run from the Run menu.

  2. In the Run dialog, select the execution parameters:

    • Select the Context into which the mapping must be executed. For more information about contexts, see: "Contexts".

    • Select the Deployment Specification you want to run. See: "Creating and Managing Deployment Specifications".

    • Select the Logical Agent that will run the mapping. The object can also be executed using the agent that is built into Oracle Data Integrator Studio, by selecting Local (No Agent). For more information about logical agents, see: "Agents".

    • Select a Log Level to control the detail of messages that will appear in the validator when the mapping is run. For more information about logging, see: "Managing the Log".

    • Check the Simulation box if you want to preview the code without actually running it. In this case no data will be changed on the source or target datastores. For more information, see: "Simulating an Execution".

  3. Click OK.

  4. The Information dialog appears. If your session started successfully, you will see "Session started."

  5. Click OK.


    Notes:

    • When you run a mapping, the Validation Results pane opens. You can review any validation warnings or errors there.

    • You can see your session in the Operator navigator Session List. Expand the Sessions node and then expand the mapping you ran to see your session. The session icon indicates whether the session is still running, completed, or stopped due to errors. For more information about monitoring your sessions, see: Chapter 23, "Monitoring Integration Processes."


Using Mapping Components

In the logical view of the mapping editor, you design a mapping by combining datastores with other components. You can use the mapping diagram to arrange and connect components such as datasets, filters, sorts, and so on. You can form connections between datastores and components by dragging lines between the connector ports displayed on these objects.

Mapping components can be divided into two categories which describe how they are used in a mapping: projector components and selector components.

Projector Components

Projectors are components that influence the attributes present in the data that flows through a mapping. Projector components define their own attributes: attributes from preceding components are mapped through expressions to the projector's attributes. A projector hides attributes originating from preceding components; all succeeding components can only use the attributes from the projector.

Review the following topics to learn how to use the various projector components:

Selector Components

Selector components reuse attributes from preceding components. Join and Lookup selectors combine attributes from the preceding components. For example, a Filter component following a datastore component reuses all attributes from the datastore component. As a consequence, selector components don't display their own attributes in the diagram and as part of the properties; they are displayed as a round shape. (The Expression component is an exception to this rule.)

When mapping attributes from a selector component to another component in the mapping, you can select and then drag an attribute from the source, across a chain of connected selector components, to a target datastore or next projector component. ODI will automatically create the necessary queries to bring that attribute across the intermediary selector components.

Review the following topics to learn how to use the various selector components:

The Expression Editor

Most of the components you use in a mapping are actually representations of an expression in the code that acts on the data as it flows from your source to your target datastores. When you create or modify these components, you can edit the expression's code directly in the Property Inspector.

To assist you with more complex expressions, you can also open an advanced editor called the Expression Editor. (In some cases, the editor is labeled according to the type of component; for example, from a Filter component, the editor is called the Filter Condition Advanced Editor. However, the functionality provided is the same.)

To access the Expression Editor, select a component, and in the Property Inspector, select or hover over with the mouse pointer any field containing code. A gear icon appears to the right of the field. Click the gear icon to open the Expression Editor.

For example, to see the gear icon in a Filter component, select or hover over the Filter Condition field on the Condition tab; to see the gear icon in a Datastore component, select or hover over the Journalized Data Filter field of the Journalizing tab.

A typical example view of the Expression Editor is shown in Figure 11-2.

Figure 11-2 Example Expression Editor


The Expression Editor is made up of the following panels:

  • Attributes: This panel appears on the left of the Expression Editor. When editing an expression for a mapping, this panel contains the names of attributes which are "in scope," meaning, attributes that are currently visible and can be referenced by the expression of the component. For example, if a component is connected to a source datastore, all of the attributes of that datastore are listed.

  • Expression: This panel appears in the middle of the Expression Editor. It displays the current code of the expression. You can directly type code here, or drag and drop elements from the other panels.

  • Technology functions: This panel appears below the expression. It lists the language elements and functions appropriate for the given technology.

  • Variables, Sequences, User Functions and odiRef API: This panel appears to the right of the technology functions and contains:

    • Project and global variables.

    • Project and global Sequences.

    • Project and global User-Defined Functions.

    • OdiRef Substitution Methods.

Standard editing functions (cut/copy/paste/undo/redo) are available using the toolbar buttons.

Source and Target Datastores

To insert a source or target datastore in a mapping:

  1. In the Designer Navigator, expand the Models tree and expand the model or sub-model containing the datastore to be inserted as a source or target.

  2. Select this datastore, then drag it into the mapping panel. The datastore appears.

  3. To make the datastore a source, drag a link from the datastore's output (right) connector to one or more components. A datastore is not a source until it has at least one outgoing connection.

    To make the datastore a target, drag a link from one or more components to the datastore's input (left) connector. A datastore is not a target until it has at least one incoming connection.

Once you have defined a datastore you may wish to view its data.

To display the data of a datastore in a mapping:

  1. Right-click the title of the datastore in the mapping diagram.

  2. Select Data...

The Data Editor opens.

Creating Filters

A filter is a selector component (see: "Selector Components") that can select a subset of data based on a filter condition. The behavior follows the rules of the SQL WHERE clause.

Filters can be located in a dataset or directly in a mapping as a flow component.

When used in a dataset, a filter is connected to one datastore or reusable mapping to filter all projections of this component out of the dataset. For more information, see Creating a Mapping Using a Dataset.

To define a filter in a mapping:

  1. Drag and drop a Filter component from the component palette into the logical diagram.

  2. Drag an attribute from the preceding component onto the filter component. A connector will be drawn from the preceding component to the filter, and the attribute will be referenced in the filter condition.

    In the Condition tab of the Property Inspector, edit the Filter Condition and complete the expression. For example, if you want to select from the CUSTOMER table (that is the source datastore with the CUSTOMER alias) only those records with a NAME that is not null, an expression could be CUSTOMER.NAME IS NOT NULL.


    Tip:

    Click the gear icon to the right of the Filter Condition field to open the Filter Condition Advanced Editor. The gear icon is only shown when you have selected or are hovering over the Filter Condition field with your mouse pointer. For more information about the Filter Condition Advanced Editor, see: "The Expression Editor".


  3. Optionally, on the General tab of the Property Inspector, enter a new name in the Name field. Using a unique name is useful if you have multiple filters in your mapping.

  4. Optionally, set an Execute on Hint, to indicate your preferred execution location: No hint, Source, Staging, or Target. The physical diagram will locate the execution of the filter according to your hint, if possible. For more information, see "Configuring Execution Locations".
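In the generated code, the filter condition becomes a WHERE clause (or the equivalent for the technology). As a minimal sketch, the example condition above would produce SQL along these lines (the selected columns are illustrative):

    SELECT CUSTOMER.ID, CUSTOMER.NAME
    FROM   CUSTOMER
    WHERE  CUSTOMER.NAME IS NOT NULL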

Creating Joins and Lookups

This section contains the following topics:

About Joins

A Join is a selector component (see: "Selector Components") that creates a join between multiple flows. The attributes from upstream components are combined as attributes of the Join component.

A Join can be located in a dataset or directly in a mapping as a flow component. A join combines data from two or more components, datastores, datasets, or reusable mappings.

When used in a dataset, a join combines the data of the datastores using the selected join type. For more information, see Creating a Mapping Using a Dataset.

A join used as a flow component can join two or more sources of attributes, such as datastores or other upstream components. A join condition can be formed by dragging attributes from two or more components successively onto a join component in the mapping diagram; by default the join condition will be an equi-join between the two attributes.

About Lookups

A Lookup is a selector component (see: "Selector Components") that returns data from a lookup flow being given a value from a driving flow. The attributes of both flows are combined, similarly to a join component. A lookup can be implemented in generated code either through a Left Outer Join or a nested Select statement.

Lookups can be located in a dataset or directly in a mapping as a flow component.

When used in a dataset, a Lookup is connected to two datastores or reusable mappings combining the data of the datastores using the selected join type. For more information, see Creating a Mapping Using a Dataset.

Lookups used as flow components (that is, not in a dataset) can join two or more flows. A lookup condition can be created by dragging an attribute from the driving flow and then the lookup flow onto the lookup component; the lookup condition will be an equi-join between the two attributes.
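To illustrate the two implementation options mentioned above, a lookup that retrieves a city name for each row of a driving CUSTOMER flow might generate SQL along one of these lines (the table names and join condition are illustrative):

    -- Lookup implemented as a Left Outer Join:
    SELECT CUSTOMER.ID, CUSTOMER.NAME, CITY.CITY_NAME
    FROM   CUSTOMER LEFT OUTER JOIN CITY
           ON CUSTOMER.CITY_ID = CITY.ID

    -- Lookup implemented as a nested Select statement:
    SELECT CUSTOMER.ID, CUSTOMER.NAME,
           (SELECT CITY.CITY_NAME FROM CITY
             WHERE CITY.ID = CUSTOMER.CITY_ID) AS CITY_NAME
    FROM   CUSTOMER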

Creating a Join or Lookup

To create a join or a lookup between two upstream components:

  1. Drag a join or lookup from the component palette into the logical diagram.

  2. Drag the attributes participating in the join or lookup condition from the preceding components onto the join or lookup component. For example, if attribute ID from source datastore CUSTOMER and then CUSTID from source datastore ORDER are dragged onto a join, then the join condition CUSTOMER.ID = ORDER.CUSTID is created.


    Note:

    When more than two attributes are dragged into a join or lookup, ODI compares and combines attributes with an AND operator. For example, if you dragged attributes from sources A and B into a Join component in the following order:

    A.FIRSTNAME
    B.FIRSTNAME
    A.LASTNAME
    B.LASTNAME
    

    The following join condition would be created:

    A.FIRSTNAME=B.FIRSTNAME AND A.LASTNAME=B.LASTNAME
    

    You can continue with additional pairs of attributes in the same way.

    You can edit the condition after it is created, as necessary.


  3. In the Condition tab of the Property Inspector, edit the Join Condition or Lookup Condition and complete the expression.


    Tip:

    Click the gear icon to the right of the Join Condition or Lookup Condition field to open the Expression Editor. The gear icon is only shown when you have selected or are hovering over the condition field with your mouse pointer. For more information about the Expression Editor, see: "The Expression Editor".


  4. Optionally, set an Execute on Hint, to indicate your preferred execution location: No hint, Source, Staging, or Target. The physical diagram will locate the execution of the filter according to your hint, if possible.

  5. For a join:

    Select the Join Type by checking the various boxes (Cross, Natural, Left Outer, Right Outer, Full Outer (by checking both left and right boxes), or (by leaving all boxes empty) Inner Join). The text describing which rows are retrieved by the join is updated.

    For a lookup:

    Select the Lookup Type by selecting an option from the drop down list. The Technical Description field is updated with the SQL code representing the lookup, using fully-qualified attribute names.

  6. Optionally, for joins, if you want to use an ordered join syntax for this join, check the Generate ANSI Syntax box.

    The Join Order box will be checked if you enable Generate ANSI Syntax, and the join will be automatically assigned an order number.

  7. For joins inside of datasets, define the join order. Check the Join Order check box, and then in the User Defined field, enter an integer. The join order number determines how the joins are ordered in the FROM clause: a join with a smaller order number is processed before other joins. This is important when there are outer joins in the dataset.

    For example: A mapping has two joins, JOIN1 and JOIN2. JOIN1 connects A and B, and its join type is LEFT OUTER JOIN. JOIN2 connects B and C, and its join type is RIGHT OUTER JOIN.

    To generate (A LEFT OUTER JOIN B) RIGHT OUTER JOIN C, assign a join order 10 for JOIN1 and 20 for JOIN2.

    To generate A LEFT OUTER JOIN (B RIGHT OUTER JOIN C), assign a join order 20 for JOIN1 and 10 for JOIN2.
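In terms of the generated FROM clause, these two orderings correspond roughly to the following SQL (the join conditions are illustrative):

    -- JOIN1 with order 10, JOIN2 with order 20:
    FROM (A LEFT OUTER JOIN B ON A.ID = B.A_ID)
         RIGHT OUTER JOIN C ON B.ID = C.B_ID

    -- JOIN1 with order 20, JOIN2 with order 10:
    FROM A LEFT OUTER JOIN
         (B RIGHT OUTER JOIN C ON B.ID = C.B_ID)
         ON A.ID = B.A_ID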

Creating Sets

A set component is a projector component (see: "Projector Components") that combines multiple input flows into one using set operations such as UNION, INTERSECT, EXCEPT, MINUS, and others. The behavior reflects the SQL operators.

Additional input flows can be added to the set component by connecting new flows to it. The number of input flows is shown in the list of Input Connector Points in the Operators tab. If an input flow is removed, the input connector point needs to be removed as well.
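For example, a set component with two input flows combined with the UNION operator corresponds to generated code along these lines (table and attribute names are illustrative):

    SELECT ID, NAME FROM CUSTOMERS_US
    UNION
    SELECT ID, NAME FROM CUSTOMERS_EU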

To create a set from two or more sources:

  1. Drag and drop a Set component from the component palette into the logical diagram.

  2. Define the attributes of the set if the attributes will be different from the source components. To do this, select the Attributes tab in the property inspector, and click the green plus icon to add attributes. Enter new attribute names in the Target column and assign them appropriate values.

    If attributes will be the same as those in a source component, use attribute matching (see step 4).

  3. Create a connection from the first source by dragging a line from the connector port of the source to the connector port of the Set component.

  4. The Attribute Matching dialog will be shown. If attributes of the set should be the same as the source component, check the Create Attributes on Target box (see: "Attribute Matching").

  5. If necessary, map all attributes from source to target that were not mapped through attribute matching, and create transformation expressions as necessary (see: "Defining Expressions and Conditions").

  6. All mapped attributes will be marked by a yellow arrow in the logical diagram. This indicates that not all sources have been mapped for this attribute yet; a set has at least two sources.

  7. Repeat the connection and attribute mapping steps for all sources to be connected to this set component. After completion, no yellow arrows should remain.

  8. In the property inspector, select the Operators tab and select cells in the Operator column to choose the appropriate set operators (UNION, EXCEPT, INTERSECT, and so on). UNION is chosen by default. You can also change the order of the connected sources to change the set behavior.


Note:

You can set Execute On Hint on the attributes of the set component, but there is also an Execute On Hint property for the set component itself. The hint on the component indicates the preferred location where the actual set operation (UNION, EXCEPT, and so on) is performed, while the hint on an attribute indicates where the preferred location of the expression is performed.

A common use case is that the set operation is performed on a staging execution group, but some of its expressions can be done on the source execution group. For more information about execution groups, see "Configuring Execution Locations".


Creating Aggregates

The aggregate component is a projector component (see: "Projector Components") which groups and combines attributes using aggregate functions, such as average, count, maximum, sum, and so on. ODI will automatically select attributes without aggregation functions to be used as group-by attributes. You can override this by using the Is Group By and Manual Group By Clause properties.

To create an aggregate component:

  1. Drag and drop the aggregate component from the component palette into the logical diagram.

  2. Define the attributes of the aggregate if the attributes will be different from the source components. To do this, select the Attributes tab in the property inspector, and click the green plus icon to add attributes. Enter new attribute names in the Target column and assign them appropriate values.

    If attributes in the aggregate component will be the same as those in a source component, use attribute matching (see Step 4).

  3. Create a connection from a source component by dragging a line from the connector port of the source to the connector port of the aggregate component.

  4. The Attribute Matching dialog will be shown. If attributes in the aggregate component will be the same as those in a source component, check the Create Attributes on Target box (see: "Attribute Matching").

  5. If necessary, map all attributes from source to target that were not mapped through attribute matching, and create transformation expressions as necessary (see: "Defining Expressions and Conditions").

  6. In the property inspector, the attributes are listed in a table on the Attributes tab. Specify aggregation functions for each attribute as needed. By default all attributes not mapped using aggregation functions (such as sum, count, avg, max, min, and so on) will be used as Group By.

    You can modify an aggregation expression by clicking the attribute. For example, to calculate the average salary per department, you might define two attributes: AVG_SAL, with the expression AVG(EMP.SAL), and DEPTNO, with no expression. If Is Group By is set to Auto, DEPTNO will be automatically included in the GROUP BY clause of the generated code.

    You can override this default by changing the property Is Group By on a given attribute from Auto to Yes or No, by double-clicking on the table cell and selecting the desired option from the drop down list.

    You can set a different default for the entire aggregate component. Select the General tab in the property inspector, and then set a Manual Group By Clause. For example, set the Manual Group By Clause to YEAR(customer.birthdate) to group by birth year.

  7. Optionally, add a HAVING clause by setting the HAVING property of the aggregate component: for example, SUM(order.amount) > 1000.
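
Taken together, the AVG_SAL example above and a HAVING property such as AVG(EMP.SAL) > 1000 would yield generated SQL of roughly the following shape (a sketch only; the exact statement depends on the selected KMs and technology):

    SELECT   EMP.DEPTNO,
             AVG(EMP.SAL) AS AVG_SAL
    FROM     EMP
    GROUP BY EMP.DEPTNO
    HAVING   AVG(EMP.SAL) > 1000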

Creating Multiple Targets

In Oracle Data Integrator 12c, creating multiple targets in a mapping is straightforward. Every datastore component which has inputs but no outputs in the logical diagram is considered a target.

ODI allows splitting a component output into multiple flows at any point of a mapping. You can also create a single mapping with multiple independent flows, avoiding the need for a package to coordinate multiple mappings.

The output port of many components can be connected to multiple downstream components, which will cause all rows of the component result to be processed in each of the downstream flows. If rows should be routed or conditionally processed in the downstream flows, a split component should be used to define the split conditions.


See Also:

"Creating Splits"


Specifying Target Order

Mappings with multiple targets do not, by default, follow a defined order when loading data into the targets. You can define a partial or complete order using the Target Load Order property. Targets to which you do not explicitly assign an order are loaded by ODI in an arbitrary order.


Note:

Target load order also applies to reusable mappings. If a reusable mapping contains a source or a target datastore, you can include the reusable mapping component in the target load order property of the parent mapping.


The order of processing multiple targets can be set in the Target Load Order property of the mapping:

  1. Click the background in the logical diagram to deselect objects in the mapping. The property inspector displays the properties for the mapping.

  2. In the property inspector, enter a target load order in the Target Load Order field:

    Select or hover over the Target Load Order field and click the gear icon to open the Target Load Order Dialog. This dialog displays all available datastores (and reusable mappings containing datastores) that can be targets, allowing you to move one or more to the Ordered Targets field. In the Ordered Targets field, use the icons on the right to rearrange the order of processing.


Tip:

Target Order is useful when a mapping has multiple targets and there are foreign key (FK) relationships between the targets. For example, suppose a mapping has two targets called EMP and DEPT, and EMP.DEPTNO is a FK to DEPT.DEPTNO. If the source data contains information about the employee and the department, the information about the department (DEPT) must be loaded first before any rows about the employee can be loaded (EMP). To ensure this happens, the target load order should be set to DEPT, EMP.
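
A minimal sketch of this FK relationship (column definitions are assumed for illustration): because EMP.DEPTNO references DEPT.DEPTNO, the DEPT rows must exist before the matching EMP rows are inserted.

    CREATE TABLE DEPT (
      DEPTNO NUMBER PRIMARY KEY,
      DNAME  VARCHAR2(30)
    );

    CREATE TABLE EMP (
      EMPNO  NUMBER PRIMARY KEY,
      ENAME  VARCHAR2(30),
      DEPTNO NUMBER REFERENCES DEPT (DEPTNO) -- requires the DEPT row to exist
    );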


Creating Sorts

A Sort is a projector component (see: "Projector Components") that applies a sort order to the rows of the processed dataset, using the SQL ORDER BY clause.

To create a sort on a source datastore:

  1. Drag and drop a Sort component from the component palette into the logical diagram.

  2. Drag the attribute to be sorted on from a preceding component onto the sort component. If the rows should be sorted on multiple attributes, drag the attributes onto the sort component in the desired order.

  3. Select the sort component and select the Condition tab in the property inspector. The Sorter Condition field follows the syntax of the SQL ORDER BY clause of the underlying database; multiple attributes can be listed, separated by commas, and ASC or DESC can be appended after each attribute to define whether the sort is ascending or descending.
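
For example, a Sorter Condition of DEPTNO ASC, SAL DESC (attribute names assumed) would contribute an ORDER BY clause of the following form to the generated statement:

    SELECT ENAME, DEPTNO, SAL
    FROM   EMP
    ORDER BY DEPTNO ASC, SAL DESC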

Creating Splits

A Split is a selector component (see: "Selector Components") that divides a flow into two or more flows based on specified conditions. Split conditions are not necessarily mutually exclusive: a source row is evaluated against all split conditions and may be valid for multiple output flows.

If a flow is divided unconditionally into multiple flows, no split component is necessary: you can connect multiple downstream components to a single outgoing connector port of any preceding component, and the data output by that preceding component will be routed to all downstream components.

A split component is used to conditionally route rows to multiple downstream flows and targets.

To create a split to multiple targets in a mapping:

  1. Drag and drop a Split component from the component palette into the logical diagram.

  2. Connect the split component to the preceding component by dragging a line from the preceding component to the split component.

  3. Connect the split component to each following component. If either the upstream or the downstream component contains attributes, the Attribute Mapping Dialog will appear. The Connection Path section of the dialog defaults to the first unmapped connector point, adding connector points as needed. Change this selection if a specific connector point should be used.

  4. In the property inspector, open the Split Conditions tab. In the Output Connector Points table, enter expressions to select rows for each target. If an expression is left empty, all rows will be mapped to the selected target. Check the Remainder box to map all rows that have not been selected by any of the other targets.
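
As an illustration, a split that routes orders by amount might define the following conditions on its output connector points (all names are assumed). These two conditions happen to be mutually exclusive, but split conditions need not be; the Remainder output catches rows matched by no condition, such as rows with a NULL amount:

    LARGE_ORDERS:  ORDERS.AMOUNT >= 1000
    SMALL_ORDERS:  ORDERS.AMOUNT < 1000
    UNMATCHED:     (Remainder box checked)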

Creating Distincts

A distinct is a projector component (see: "Projector Components") that projects a subset of attributes in the flow. Duplicate rows are removed, so the values of each projected row are unique; the behavior follows the rules of the SQL DISTINCT keyword.

To select distinct rows from a source datastore:

  1. Drag and drop a Distinct component from the component palette into the logical diagram.

  2. Connect the preceding component to the Distinct component by dragging a line from the preceding component to the Distinct component.

    The Attribute Mapping Dialog will appear: select Create Attributes On Target to create all of the attributes in the Distinct component. Alternatively, you can manually map attributes as desired using the Attributes tab in the property inspector.

  3. The distinct component now filters out duplicates: any row whose projected attribute values all match those of another row is removed.
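
In SQL terms, a distinct over two projected attributes behaves like the following (table and column names assumed):

    SELECT DISTINCT CUST_ID, CUST_NAME
    FROM   CUSTOMERS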

Creating Expressions

An expression is a selector component (see: "Selector Components") that inherits attributes from a preceding component in the flow and defines additional reusable attributes. An expression component can be used to define a number of reusable expressions within a single mapping. Attributes can be renamed and transformed from source attributes using SQL expressions. The behavior follows the rules of the SQL SELECT clause.

The best use of an expression component is in cases where intermediate transformations are used multiple times, such as when pre-calculating fields that are used in multiple targets.

If a transformation is used only once, consider performing the transformation in the target datastore or other component.


Tip:

If you want to reuse expressions across multiple mappings, consider using reusable mappings or user functions, depending on the complexity. See: "Reusable Mappings", and "Working with User Functions".


To create an expression component:

  1. Drag and drop an Expression component from the component palette into the logical diagram.

  2. Connect a preceding component to the Expression component by dragging a line from the preceding component to the Expression component.

    The Attribute Mapping Dialog will appear; select Create Attributes On Target to create all of the attributes in the Expression component.

    In some cases you might want the expression component to match the attributes of a downstream component. In this case, connect the expression component with the downstream component first and select Create Attributes on Source to populate the Expression component with attributes from the target.

  3. Add attributes to the expression component as desired using the Attributes tab in the property inspector. It might be useful to add attributes for pre-calculated fields that are used in multiple expressions in downstream components.

  4. Edit the expressions of individual attributes as necessary (see: "Defining Expressions and Conditions").
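
For example (all names assumed), an expression component might define a reusable attribute FULL_NAME with the expression below; a target datastore and an aggregate downstream can then both reference FULL_NAME instead of repeating the concatenation:

    CUST.FIRST_NAME || ' ' || CUST.LAST_NAME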

Calling a Reusable Mapping

Reusable mappings may be stored within folders in a project, or as global objects within the Global Objects tree of the Designer Navigator.

To add a reusable mapping to a mapping:

  1. To add a reusable mapping stored within the current project:

    In the Designer Navigator, expand the Projects tree and expand the tree for the project you are working on. Expand the Reusable Mappings node to list all reusable mappings stored within this project.

    To add a global reusable mapping:

    In the Designer Navigator, expand the Global Objects tree, and expand the Reusable Mappings node to list all global reusable mappings.

  2. Select a reusable mapping, and drag it into the mapping diagram. The reusable mapping appears in the diagram.

Creating a Mapping Using a Dataset

A dataset component is a container component that allows you to group multiple data sources and join them through relationship joins. A dataset can contain the following components:

  • Datastores

  • Joins

  • Lookups

  • Filters

  • Reusable Mappings: Only reusable mappings with no input signature and one output signature are allowed.

Create joins and lookups by dragging an attribute from one datastore to another inside the dataset. A dialog appears in which you select whether the relationship is a join or a lookup.


Note:

A driving table will have the key to look up, while the lookup table has additional information to add to the result.

In a dataset, drag an attribute from the driving table to the lookup table. An arrow will point from the driving table to the lookup table in the diagram.

By comparison, in a flow-based lookup (a lookup in a mapping that is not inside a dataset), the driving and lookup sources are determined by the order in which connections are created. The first connection is called DRIVER_INPUTn, the second connection LOOKUP_INPUTn.


Create a filter by dragging a datastore or reusable mapping attribute onto the dataset background. Joins, lookups, and filters cannot be dragged from the component palette into the dataset.
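
In the generated code, a lookup typically resembles an outer join between the driving and lookup tables, for example (names assumed; the exact form depends on the lookup options and the technology):

    SELECT ORD.ORDER_ID, ORD.CUST_ID, CUST.CUST_NAME
    FROM   ORDERS ORD
           LEFT OUTER JOIN CUSTOMERS CUST
           ON ORD.CUST_ID = CUST.CUST_ID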

This section contains the following topics:

Differences Between Flow and Dataset Modeling

Datasets are container components which contain one or more source datastores, which are related using filters and joins. To other components in a mapping, a dataset is indistinguishable from any other projector component (like a datastore); the results of filters and joins inside the dataset are represented on its output port.

Within a dataset, data sources are related using relationships instead of a flow. This is displayed using an entity relationship diagram. When you switch to the physical tab of the mapping editor, datasets disappear: ODI models the physical flow of data exactly the same as if a flow diagram had been defined in the logical tab of the mapping editor.

Datasets mimic the ODI 11g way of organizing data sources, as opposed to the flow metaphor used in an ODI 12c mapping. If you import projects from ODI 11g, interfaces converted into mappings will contain datasets containing your source datastores.

When you create a new, empty mapping, you are prompted whether you would like to include an empty dataset. You can delete this empty dataset without harm, and you can always add an empty dataset to any mapping. The option to include an empty dataset is purely for your convenience.

A dataset exists only within a mapping or reusable mapping, and cannot be independently designed as a separate object.

Creating a Dataset in a Mapping

To create a dataset in a mapping, drag a dataset from the component palette into the logical diagram. You can then drag datastores into the dataset from the Models section of the Designer Navigator. Drag attributes from one datastore to another within a dataset to define filter and join relationships.

Drag a connection from the dataset's output connector point to the input connector point on other components in your mapping, to integrate it into your data flow.

Physical Design

The physical tab shows the distribution of execution among different execution units that represent physical servers. ODI computes a default deployment specification containing execution units and groups based on the logical design, the topology of those items and any rules you have defined.

You can also customize this design by using the physical diagram. You can use the diagram to move components between execution units, or onto the diagram background, which creates a separate execution unit. Multiple execution units can be grouped into execution groups, which enable parallel execution of the contained execution units.

A mapping can have multiple deployment specifications; they are listed in tabs under the diagram. By having multiple deployment specifications you can create different execution strategies for the same mapping. In order to create or delete deployment specifications, right-click on the deployment specification tabs.

Physical components define how a mapping is executed at runtime; they are the physical representation of logical components. Depending on the logical component, a physical component might have a different set of properties.

This section contains the following topics:

About the Physical Mapping Diagram

In the physical diagram, the following items appear:

  • Deployment Specification: The entire physical diagram represents one deployment specification. Click the background or select the white tab bearing the deployment specification label to display the physical mapping properties. By default, the staging location is collocated on the target, but you can explicitly select a different staging location to have ODI automatically move staging to a different host.

    You can define additional deployment specifications by clicking the small tab at the bottom of the physical diagram, next to the current deployment spec tab. A new deployment spec is created automatically from the logical design of the mapping.

  • Execution Groups: Yellow boxes display groups of objects called execution units; execution units within the same execution group are executed in parallel. These are usually Source Groups and Target Groups:

    • Source Execution Group(s): Source Datastores that are within the same dataset or are located on the same physical data server are grouped in a single source execution group in the physical diagram. A source execution group represents a group of datastores that can be extracted at the same time.

    • Target Execution Group(s): Target Datastores that are located on the same physical data server are grouped in a single target execution group in the physical diagram. A target execution group represents a group of datastores that can be written to at the same time.

  • Execution Units: Within the yellow execution groups are blue boxes called execution units. Execution units within a single execution group are on the same physical data server, but may be different structures.

  • Access Points: In the target execution group, whenever the flow of data goes from one execution unit to another, there is an access point (shown with a round icon). Loading Knowledge Modules (LKMs) control how data is transferred from one execution unit to another.

    An access point is created on the target side of a pair of execution units, when data moves from the source side to the target side (unless you use Execute On Hint in the logical diagram to suggest a different execution location). You cannot move an access point node to the source side. However, you can drag an access point node to the empty diagram area and a new execution unit will be created, between the original source and target execution units in the diagram.

  • Components: Mapping components such as joins, filters, and so on are also shown on the physical diagram.

You use the following knowledge modules (KMs) in the physical tab:

  • Loading Knowledge Modules (LKMs): LKMs define how data is moved. One LKM is selected for each access point to move data from the sources to a staging area. An LKM can also be selected to move data from a staging area that is not located within a target execution unit to a target, when a single-technology IKM is selected for the staging area. Select an access point to define or change its LKM in the property inspector.

  • Integration Knowledge Modules (IKMs) and Check Knowledge Modules (CKMs): IKMs and CKMs define how data is integrated into the target. One IKM and one CKM are typically selected on a target datastore. When the staging area is different from the target, the selected IKM can be a multi-technology IKM that moves and integrates data from the staging area into the target. Select a target datastore to define or change its IKM and CKM in the property inspector.


Notes:

  • Only built-in KMs, or KMs that have already been imported into the project or the global KM list, can be selected in the mapping. Make sure that you have imported the appropriate KMs in the project before proceeding.

  • For more information on the KMs and their options, refer to the KM description and to the Connectivity and Knowledge Modules Guide for Oracle Data Integrator.


Selecting LKMs, IKMs and CKMs

ODI automatically selects knowledge modules in the physical diagram as you create your logical diagram.


Note:

The Integration Type property of a target datastore (which can have the values Control Append, Incremental Update, or Slowly Changing Dimension) is referenced by ODI when it selects a KM. This property is also used to restrict the IKM selection shown, so you will only see IKMs listed that are applicable.


You can use the physical diagram to change the KMs in use.

To change the LKM in use:

  1. In the physical diagram, select an access point. The Property Inspector opens for this object.

  2. Select the Loading Knowledge Module tab, and then select a different LKM from the Loading Knowledge Module list.

  3. KMs are set with default options that work in most use cases. You can optionally modify the KM Options.


    Note:

    When switching from one KM to another, the values of any identically-named options are retained. However, by changing KMs several times you might lose custom KM option values.


To change the IKM in use:

  1. In the physical diagram, select a target datastore by clicking its title. The Property Inspector opens for this object.

  2. In the Property Inspector, select the Integration Knowledge Module tab, and then select an IKM from the Integration Knowledge Module list.

  3. KMs are set with default options that work in most use cases. You can optionally modify the KM Options.


    Note:

    When switching from one KM to another, the values of any identically-named options are retained. However, by changing KMs several times you might lose custom KM option values.


To change the CKM in use:

  1. In the physical diagram, select a target datastore by clicking its title. The Property Inspector opens for this object.

  2. In the Property Inspector, select the Check Knowledge Module tab, and then select a CKM from the Check Knowledge Module list.

  3. KMs are set with default options that work in most use cases. You can optionally modify the KM Options.


    Note:

    When switching from one KM to another, the values of any identically-named options are retained. However, by changing KMs several times you might lose custom KM option values.


Configuring Execution Locations

In the physical tab of the mapping editor, you can change the staging area and determine where components will be executed. When you designed the mapping using components in the logical diagram, you optionally set preferred execution locations using the Execute On Hint property. In the physical diagram, ODI attempts to follow these hints where possible.

You can further manipulate execution locations in the physical tab. See the following topics for details:

Moving Physical Nodes

You can move the execution location of a physical node. Select the node and drag it from one Execution Group into another Execution Group. Or, drag it to a blank area of the physical diagram, and ODI will automatically create a new Execution Group for the component.

Moving Expressions

You can move expressions in the physical diagram. Select the Execution Unit and in the property inspector, select the Expressions tab. The execution location of the expression is shown in the Execute on property. Double-click the property to alter the execution location.

Defining New Execution Units

You can define a new execution unit by dragging a component from its current execution unit onto a blank area of the physical diagram. A new execution unit is created. Select the execution unit to modify its properties using the property inspector.

Configuring In-Session Parallelism

The ODI agent is the scheduler that runs an entire ODI mapping job on a given host. If you have two or more loads, it will either run them one after another (serialized) or simultaneously (parallelized, using separate processor threads).

Execution units in the same execution group are parallelized. If you move an execution unit into its own group, it is no longer parallelized with other execution units: it is now serialized. The system will select the order in which separate execution groups are run.

You might choose to run loads serially to reduce peak system resource usage, or run them in parallel to shorten the overall duration of the job.

Configuring Parallel Target Table Load

You can enable parallel target table loading in a deployment specification. Select the deployment spec (by clicking on the tab at the bottom of the physical diagram, or clicking an empty area of the diagram) and in the property inspector, check the box for the property Use Unique Temporary Object Names.

This option allows multiple instances of the same mapping to be executed concurrently. To load data from source to staging area, C$ tables are created in the staging database.


Note:

In ODI 11g, C$ table names were derived from the target table of the interface. As a result, when multiple instances of the same mapping were executed at the same time, data from different sessions could load into the same C$ table and cause conflicts.

In ODI 12c, if the option Use Unique Temporary Object Names is set to true, the system generates a globally-unique name for C$ tables for each mapping execution. This prevents any conflict from occurring.
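
As an illustration only (the hypothetical names below do not reflect the actual format ODI generates), two concurrent sessions of the same mapping would each create a distinctly-named C$ table rather than colliding on a shared one:

    CREATE TABLE "C$_EMP_SESSION1" (EMPNO NUMBER, ENAME VARCHAR2(30)); -- session 1
    CREATE TABLE "C$_EMP_SESSION2" (EMPNO NUMBER, ENAME VARCHAR2(30)); -- session 2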


Configuring Temporary Indexes

If you want ODI to automatically generate a temporary index to optimize the execution of a filter, join, or datastore, select the node in the physical diagram. In the property inspector, select the Temporary Indexes tab. You can double-click the Index Type field to select a temporary index type.


Note:

The creation of temporary indexes may be a time consuming operation in the overall flow. Oracle recommends reviewing execution statistics and comparing the execution time saved by the indexes to the time spent creating them.


Configuring Journalizing

A source datastore can be configured in the physical diagram to use journalized data only. This is done by enabling Journalized Data Only in the General properties of a source datastore. The check box is only available if the referenced datastore is added to CDC in the model navigator.

Only one datastore per mapping can have journalizing enabled.

For more information about journalizing, see Chapter 6, "Using Journalizing."

Configuring Extraction Options

Each component in the physical diagram, excluding access points and target datastores, has an Extraction Options tab in the property inspector. Extraction options influence the way that SQL is generated for the given component. Most components have an empty list of extraction options, meaning that SQL generation is not further configurable.

Extraction options are driven by the Extract Knowledge Module (XKM) that can be selected in the Advanced sub-tab of the Extraction Options tab. XKMs are part of ODI and cannot be created or modified by the user.

Creating and Managing Deployment Specifications

The entire physical diagram represents one deployment specification. Click the background or select the white tab with the deployment specification label to display the physical mapping properties for the displayed deployment specification.

You can define additional deployment specifications by clicking the small tab at the bottom of the physical diagram, next to the current deployment specification tab(s). A new deployment specification is created automatically, generated from the logical design of the mapping. You can modify this deployment specification, and save it as part of the mapping.

For example, you could use one deployment specification for your initial load, and another deployment specification for incremental load using changed data capture (CDC). The two deployment specifications would have different journalizing and knowledge module settings.

As another example, you could use different optimization contexts for each deployment specification. Each optimization context represents a slightly different topology: one optimization context can represent a development environment, and another a testing environment. You could select different KMs appropriate for these two different topologies.

Reusable Mappings

Reusable mappings allow you to encapsulate a multi-step integration (or a portion of an integration) into a single component, which you can save and use just as you would any other component in your mappings. Reusable mappings are a convenient way to avoid the labor of re-creating a similar or identical subroutine of data manipulation that you use many times in your mappings.

For example, you could load data from two tables in a join component, pass it through a filter component, and then a distinct component, and then output to a target datastore. You could then save this procedure as a reusable mapping, and place it into future mappings that you create or modify.

After you place a reusable mapping component in a mapping, you can select it and make modifications to it that only affect the current mapping.

Reusable mappings consist of the following:

  • Input Signature and Output Signature components: These components describe the attributes that will be used to map into and out of the reusable mapping. When the reusable mapping is used in a mapping, these are the attributes that can be matched by other mapping components.

  • Regular mapping components: Reusable mappings can include all of the regular mapping components, including datastores, projector components, and selector components. You can use these exactly as in regular mappings, creating a logical flow.

By combining regular mapping components with signature components, you can create a reusable mapping intended to serve as a data source, as a data target, or as an intermediate step in a mapping flow. When you work on a regular mapping, you can use a reusable mapping as if it were a single component.

Creating a Reusable Mapping

You can create a reusable mapping within a project, or as a global object. To create a reusable mapping, perform the following steps:

  1. From the designer navigator:

    Open a project, right-click Reusable Mappings, and select New Reusable Mapping.

    Or, expand the Global Objects tree, right click Global Reusable Mappings, and select New Reusable Mapping.

  2. Enter a name and, optionally, a description for the new reusable mapping. Optionally, select Create Default Input Signature and/or Create Default Output Signature. These options add empty input and output signatures to your reusable mapping; you can add or remove input and output signatures later while editing your reusable mapping.

  3. Drag components from the component palette into the reusable mapping diagram, and drag datastores and other reusable mappings from the designer navigator, to assemble your reusable mapping logic. Follow all of the same processes as for creating a normal mapping.


    Note:

    When you have a reusable mapping open for editing, the component palette contains the Input Signature and Output Signature components in addition to the regular mapping components.


  4. Validate your reusable mapping by clicking the Validate the Mapping button (a green check mark icon). Any errors will be displayed in a new error pane.

    When you are finished creating your reusable mapping, click File and select Save, or click the Save button, to save your reusable mapping. You can now use your reusable mapping in your mapping projects.

Editing Mappings Using the Property Inspector and the Structure Panel

You can use the Property Inspector with the Structure Panel to perform the same actions as on the logical and physical diagrams of the mapping editor, in a non-graphical form.

Using the Structure Panel

When creating and editing mappings without using the logical and physical diagrams, you will need to open the Structure Panel. The Structure Panel provides an expandable tree view of a mapping, which you can traverse using the Tab key, allowing you to select the components of your mapping. When you select a component or attribute in the Structure Panel, its properties are shown in the Property Inspector exactly as if you had selected the component in the logical or physical diagram.

The Structure Panel is useful for accessibility requirements, such as when using a screen reader.

To open the Structure Panel, select Window from the main menu and then click Structure. You can also open the Structure Panel using the hotkey Ctrl+Shift+S.

This section contains the following topics:

Adding and Removing Components

With the Property Inspector, the Component Palette, and the Structure Panel, you can add or remove components of a mapping.

Adding Components

To add a component to a mapping with the Property Inspector and the Structure Panel:

  1. With the mapping open in the Mapping Editor, open the Component Palette.

  2. Select the desired component using the Tab key, and press Enter to add the selected component to the mapping diagram and the Structure Panel.

Removing Components

To remove a component with the Structure Panel:

  1. In the Structure Panel, select the component you want to remove.

  2. While holding down Ctrl+Shift, press Tab to open a pop-up dialog. Keep holding down Ctrl+Shift, and use the arrow keys to navigate to the left column and select the mapping. You can then use the right arrow key to select the logical or physical diagram. Release the Ctrl+Shift keys after you select the logical diagram.

    Alternatively, select Window > Documents... from the main menu bar. Select the mapping from the list of document windows, and click Switch to Document.

  3. The component you selected in the Structure Panel in step 1 is now highlighted in the mapping diagram. Press Delete to delete the component. A dialog box confirms the deletion.

Editing a Component

To edit a component of a mapping using the Structure Panel and the Property Inspector:

  1. In the Structure Panel, select a component. The component's properties are shown in the Property Inspector.

  2. In the Property Inspector, modify properties as needed. Use the Attributes tab to add or remove attributes. Use the Connector Points tab to add connections to other components in your mapping.

  3. Expand any component in the Structure Panel to list individual attributes. You can then select individual attributes to show their properties in the Property Inspector.

Customizing Tables

There are two ways to customize the tables in the Property Inspector to affect which columns are shown. In each case, open the Structure Panel and select a component to display its properties in the Property Inspector. Then, select a tab containing a table and use one of the following methods:

  • From the table toolbar, click the Select Columns... icon (on the top right corner of the table) and then, from the drop down menu, select the columns to display in the table. Currently displayed columns are marked with a check mark.

  • Use the Customize Table Dialog:

    1. From the table toolbar, click Select Columns....

    2. From the drop down menu, select Select Columns...

    3. In the Customize Table Dialog, select the columns to display in the table.

    4. Click OK.

Using Keyboard Navigation for Common Tasks

This section describes the keyboard navigation in the Property Inspector.

Table 11-2 shows the common tasks and the keyboard navigation used in the Property Inspector.

Table 11-2 Keyboard Navigation for Common Tasks

Navigation       Task

Arrow keys       Move one cell up, down, left, or right
TAB              Move to the next cell
SHIFT+TAB        Move to the previous cell
SPACEBAR         Start editing text, display the items of a list, or change the value of a check box
CTRL+C           Copy the selection
CTRL+V           Paste the selection
ESC              Cancel an entry in the cell
ENTER            Complete a cell entry and move to the next cell or activate a button
DELETE           Clear the content of the selection (text fields only)
BACKSPACE        Delete the content of the selection, or delete the preceding character in the active cell (text fields only)
HOME             Move to the first cell of the row
END              Move to the last cell of the row
PAGE UP          Move up to the first cell of the column
PAGE DOWN        Move down to the last cell of the column


Flow Control and Static Control

In a mapping, it is possible to set two points of control. Flow Control checks the data in the incoming flow before it gets integrated into a target, and Static Control checks constraints on the target datastore after integration.

IKMs can have options to run FLOW_CONTROL and STATIC_CONTROL. To enable either of these, set the corresponding option on the IKM, which is a property set on the target datastore. In the physical diagram, select the datastore, and select the Integration Knowledge Module tab in the property inspector. If flow control options are available, they are listed in the Options table. Double-click an option to change its value.


Note:

In ODI 11g, the CKM to be used when flow or static control is invoked was defined on the interface. Because ODI 12c supports multiple targets on different technologies within the same mapping, the CKM is now defined on each target datastore.


This section contains the following topics:

Setting up Flow Control

The flow control strategy defines how data is checked against the constraints defined on a target datastore before being integrated into this datastore. It is defined by a Check Knowledge Module (CKM). The CKM can be selected on the target datastore physical node. The constraints that are checked by a CKM are specified in the properties of the datastore component on the logical tab.

To define the CKM used in a mapping, see: "Selecting LKMs, IKMs and CKMs".

Setting up Static Control

The post-integration control strategy defines how data is checked against the constraints defined on the target datastore. This check takes place once the data is integrated into the target datastore. It is defined by a CKM. In order to have the post-integration control running, you must set the STATIC_CONTROL option in the IKM to true. Post-integration control requires that a primary key is defined in the data model for the target datastore of your mapping.

The settings Maximum Number of Errors Allowed and Integration Errors as Percentage can be set on the target datastore component. Select the datastore in the logical diagram, and in the property inspector, select the Target tab.

Post-integration control uses the same CKM as flow control.

Defining the Update Key

If you want to use update or flow control features in your mapping, it is necessary to define an update key on the target datastore.

The update key of a target datastore component contains one or more attributes. It can be the unique key of the datastore that it is bound to, or a group of attributes that are marked as key attributes. The update key identifies each record to update or check before insertion into the target.

To define the update key from a unique key:

  1. In the mapping diagram, select the header of a target datastore component. The component's properties will be displayed in the Property Inspector.

  2. In the Target properties, select an Update Key from the drop down list.


Notes:

  • The Target properties are only shown for datastores which are the target of incoming data. If you do not see the Target properties, your datastore does not have an incoming connection defined.

  • Only unique keys defined in the model for this datastore appear in this list.


You can also define an update key from the attributes if:

  • You don't have a unique key on your datastore.

  • You want to specify the key regardless of already defined keys.

When you define an update key from the attributes, you manually select the individual attributes to be part of the update key.

To define the update key from the attributes:

  1. Unselect the update key, if it is selected.

  2. In the Target Datastore panel, select one of the attributes that is part of the update key to display the Property Inspector.

  3. In the Property Inspector, under Target properties, check the Key box. A key symbol appears in front of the key attribute(s) in the datastore component displayed in the mapping editor logical diagram.

  4. Repeat the operation for each attribute that is part of the update key.

Designing E-LT and ETL-Style Mappings

In an E-LT-style integration mapping, ODI processes the data in a staging area, which is located on the target. Staging area and target are located on the same RDBMS. The data is loaded from the source(s) to the target. To create an E-LT-style integration mapping, follow the standard procedure described in "Creating a Mapping".

In an ETL-style mapping, ODI processes the data in a staging area, which is different from the target. The data is first extracted from the source(s) and then loaded to the staging area. The data transformations take place in the staging area and the intermediate results are stored in temporary tables in the staging area. The data loading and transformation tasks are performed with the standard ELT KMs.

Oracle Data Integrator provides two ways to load the data from the staging area to the target: using a multi-connection IKM, or using an LKM combined with a mono-connection IKM. Both approaches are described in the following sections.

Depending on the KM strategy that is used, flow and static control are supported. See "Designing an ETL-Style Mapping" in the Connectivity and Knowledge Modules Guide for Oracle Data Integrator for more information.

Using a Multi-connection IKM

A multi-connection IKM allows updating a target where the staging area and sources are on different data servers. Figure 11-3 shows the configuration of an integration mapping using a multi-connection IKM to update the target data.

Figure 11-3 ETL-Mapping with Multi-connection IKM


See the chapter in the Connectivity and Knowledge Modules Guide for Oracle Data Integrator that corresponds to the technology of your staging area for more information on when to use a multi-connection IKM.

To use a multi-connection IKM in an ETL-style mapping:

  1. Create a mapping using the standard procedure as described in "Creating a Mapping". This section describes only the ETL-style specific steps.

  2. In the Physical tab of the Mapping Editor, select a deployment spec by clicking the desired deployment spec tab and then clicking the diagram background. In the property inspector, the Preset Staging Location field defines the staging location; an empty entry means the target schema is used as the staging location. Select a schema other than the target schema as the staging location.

  3. Select an Access Point component in the physical schema and go to the property inspector. For more information about access points, see: "About the Physical Mapping Diagram".

  4. Select an LKM from the LKM Selector list to load from the source(s) to the staging area. See the chapter in the Connectivity and Knowledge Modules Guide for Oracle Data Integrator that corresponds to the technology of your staging area to determine the LKM you can use.

  5. Optionally, modify the KM options.

  6. In the Physical diagram, select a target datastore. The property inspector opens for this target object.

    In the Property Inspector, select an ETL multi-connection IKM from the IKM Selector list to load the data from the staging area to the target. See the chapter in the Connectivity and Knowledge Modules Guide for Oracle Data Integrator that corresponds to the technology of your staging area to determine the IKM you can use.

  7. Optionally, modify the KM options.

Using an LKM and a mono-connection IKM

If there is no dedicated multi-connection IKM, use a standard exporting LKM in combination with a standard mono-connection IKM. Figure 11-4 shows the configuration of an integration mapping using an exporting LKM and a mono-connection IKM to update the target data. The exporting LKM is used to load the flow table from the staging area to the target. The mono-connection IKM is used to integrate the data flow into the target table.

Figure 11-4 ETL-Mapping with an LKM and a Mono-connection IKM


Note that this configuration (LKM + exporting LKM + mono-connection IKM) has the following limitations:

  • Neither simple CDC nor consistent CDC is supported when the source is on the same data server as the staging area (explicitly chosen in the Mapping Editor)

  • Temporary Indexes are not supported

See the chapter in the Connectivity and Knowledge Modules Guide for Oracle Data Integrator that corresponds to the technology of your staging area for more information on when to use the combination of a standard LKM and a mono-connection IKM.

To use an LKM and a mono-connection IKM in an ETL-style mapping:

  1. Create a mapping using the standard procedure as described in "Creating a Mapping". This section describes only the ETL-style specific steps.

  2. In the Physical tab of the Mapping Editor, select a deployment spec by clicking the desired deployment spec tab and then clicking the diagram background. In the property inspector, the Preset Staging Location field defines the staging location; an empty entry means the target schema is used as the staging location. Select a schema other than the target schema as the staging location.

  3. Select an Access Point component in the physical schema and go to the property inspector. For more information about Access Points, see: "About the Physical Mapping Diagram".

  4. In the Property Inspector, in the Loading Knowledge Module tab, select an LKM from the Loading Knowledge Module drop-down list to load from the source(s) to the staging area. See the chapter in the Connectivity and Knowledge Modules Guide for Oracle Data Integrator that corresponds to the technology of your staging area to determine the LKM you can use.

  5. Optionally, modify the KM options. Double-click a cell in the Value column of the options table to change the value.

  6. Select the access point node of a target execution unit. In the Property Inspector, in the Loading Knowledge Module tab, select an LKM from the Loading Knowledge Module drop-down list to load from the staging area to the target. See the chapter in the Connectivity and Knowledge Modules Guide for Oracle Data Integrator that corresponds to the technology of your staging area to determine the LKM you can use.

  7. Optionally, modify the options.

  8. Select the Target by clicking its title. The Property Inspector opens for this object.

    In the Property Inspector, in the Integration Knowledge Module tab, select a standard mono-connection IKM from the Integration Knowledge Module drop-down list to update the target. See the chapter in the Connectivity and Knowledge Modules Guide for Oracle Data Integrator that corresponds to the technology of your staging area to determine the IKM you can use.

  9. Optionally, modify the KM options.


Part I

Understanding Oracle Data Integrator

This part provides an introduction to Oracle Data Integrator and the basic steps of creating an integration project with Oracle Data Integrator.

This part contains the following chapters:


4 Setting Up a Topology

This chapter describes how to set up the topology in Oracle Data Integrator. An overview of Oracle Data Integrator topology concepts and components is provided.

This chapter includes the following sections:

Introduction to the Oracle Data Integrator Topology

The Oracle Data Integrator Topology is the physical and logical representation of the Oracle Data Integrator architecture and components.

Before you can perform the procedures described in this chapter, you must have installed and configured Oracle Data Integrator, database schemas, domains, and agents, as described in the Installation Guide for Oracle Data Integrator.


Note:

The Installation Guide uses the term "topology" in some sections to refer to the organization of servers, folders, and files on your physical servers. This chapter refers to the "topology" configured using the Topology Navigator in ODI Studio.


This section contains these topics:

Physical Architecture

The physical architecture defines the different elements of the information system, as well as their characteristics taken into account by Oracle Data Integrator. Each type of database (Oracle, DB2, etc.), file format (XML, Flat File), or application software is represented in Oracle Data Integrator by a technology.

A technology handles formatted data. Therefore, each technology is associated with one or more data types that allow Oracle Data Integrator to generate data handling scripts.

The physical components that store and expose structured data are defined as data servers. A data server is always linked to a single technology. A data server stores information according to a specific technical logic, which is declared in the physical schemas attached to that data server. Every database server, JMS message file, group of flat files, and so forth that is used in Oracle Data Integrator must be declared as a data server. Every schema, database, JMS topic, and so forth used in Oracle Data Integrator must be declared as a physical schema.

Finally, the physical architecture includes the definition of the Physical Agents. These are the Java software components that run Oracle Data Integrator jobs.

Contexts

Contexts bring together components of the physical architecture (the real Architecture) of the information system with components of the Oracle Data Integrator logical architecture (the Architecture on which the user works).

For example, contexts may correspond to different execution environments (Development, Test, and Production) or different execution locations (Boston site, New York site, and so forth) where similar physical resources exist.

Note that during installation the default GLOBAL context is created.

Logical Architecture

The logical architecture allows a user to identify, as a single Logical Schema, a group of similar physical schemas (that is, schemas containing structurally identical datastores) that are located in different physical locations. Logical schemas, like their physical counterparts, are attached to a technology.

Contexts resolve logical schemas into physical schemas: in a given context, one logical schema resolves to a single physical schema.

For example, the Oracle logical schema Accounting may correspond to two Oracle physical schemas:

  • Accounting Sample used in the Development context

  • Accounting Corporate used in the Production context

These two physical schemas are structurally identical (they contain accounting data), but are located in different physical locations. These locations are two different Oracle schemas (Physical Schemas), possibly located on two different Oracle instances (Data Servers).

All the components developed in Oracle Data Integrator are designed on top of the logical architecture. For example, a data model is always attached to a logical schema, and data flows are defined with this model. By specifying a context at run-time, the model's logical schema resolves to a single physical schema, and the data contained in this schema on the data server can be accessed by the integration processes.

Agents

Oracle Data Integrator run-time Agents orchestrate the execution of jobs. These agents are Java components.

The run-time agent functions as a listener and a scheduler. The agent executes jobs on demand (model reverses, packages, scenarios, mappings, and so forth), for example when a job is manually launched from a user interface or from a command line. The agent is also able to start the execution of scenarios according to a schedule defined in Oracle Data Integrator.

Third party scheduling systems can also trigger executions on the agent. See "Scheduling a Scenario or a Load Plan with an External Scheduler" for more information.

Typical projects will require a single Agent in production; however, "Load Balancing Agents" describes how to set up several load-balanced agents.

ODI Studio can also directly execute jobs on demand. This internal "agent" can be used for development and initial testing. However, it does not have the full production features of external agents, and is therefore unsuitable for production data integration. When running a job, in the Run dialog, select Local (No Agent) as the Logical Agent to directly execute the job using ODI Studio. Note the following features are not available when running a job locally:

  • Stale session cleanup

  • Ability to stop a running session

  • Load balancing

If you need any of these features, you should use an external agent.

Agent Lifecycle

The lifecycle of an agent is as follows:

  1. When the agent starts it connects to the master repository.

  2. Through the master repository it connects to any work repository attached to the Master repository and performs the following tasks at startup:

    • Execute any outstanding tasks in all Work Repositories that need to be executed upon startup of this Agent.

    • Clean stale sessions in each work repository. These are the sessions left incorrectly in a running state after an agent or repository crash.

    • Retrieve its list of scheduled scenarios in each work repository, and compute its schedule.

  3. The agent starts listening on its port.

    • When an execution request arrives on the agent, the agent acknowledges this request and starts the session.

    • The agent launches sessions according to its schedule.

    • The agent is also able to process other administrative requests in order to update its schedule, stop a session, respond to a ping or clean stale sessions. The standalone agent can also process a stop signal to terminate its lifecycle.

Refer to Chapter 21, "Running Integration Processes" for more information about a session lifecycle.

Agent Features

Agents are not data transformation servers. They do not perform any data transformation, but instead only orchestrate integration processes. They delegate data transformation to database servers, operating systems or scripting engines.

Agents are multi-threaded lightweight components. An agent can run multiple sessions in parallel. When declaring a physical agent, it is recommended that you adjust the maximum number of concurrent sessions it is allowed to execute simultaneously from a work repository. When this maximum number is reached, any new incoming session will be queued by the agent and executed later when other sessions have terminated. If you plan to run multiple parallel sessions, you can consider load balancing executions as described in "Load Balancing Agents".

Standalone and Java EE Agents

The Oracle Data Integrator agent exists in two flavors: the standalone agent and the Java EE agent.

A standalone agent runs in a separate Java Virtual Machine (JVM) process. It connects to the work repository and to the source and target data servers via JDBC. Standalone agents can be installed on any server with a Java Virtual Machine installed. This type of agent is more appropriate when you need to use a resource that is local to one of your data servers (for example, the file system or a loader utility installed with the database instance), and you do not want to install a Java EE application server on this machine.

A Java EE agent is deployed as a web application in a Java EE application server (for example Oracle WebLogic Server or IBM WebSphere). The Java EE agent can benefit from all the features of the application server (for example, JDBC data sources or clustering for Oracle WebLogic Server). This type of agent is more appropriate when there is a need for centralizing the deployment and management of all applications in an enterprise application server, or when you have requirements for high availability.

It is possible to mix standalone and Java EE agents in a single distributed environment.

Physical and Logical Agents

A physical agent corresponds to a single standalone agent or a Java EE agent. A physical agent should have a unique name in the Topology.

Similarly to schemas, physical agents having an identical role in different environments can be grouped under the same logical agent. A logical agent is related to physical agents through contexts. When starting an execution, you indicate the logical agent and the context. Oracle Data Integrator will translate this information into a single physical agent that will receive the execution request.

Agent URL

An agent runs on a host and a port, and is identified on this port by an application name. The agent URL also indicates the protocol to use for the agent connection; possible values for the protocol are http or https. These four components make up the agent URL, and the agent is reached via this URL.

For example:

  • A standalone agent started on port 8080 on the odi_production machine will be reachable at the following URL:

    http://odi_production:8080/oraclediagent.


    Note:

    The application name for a standalone agent is always oraclediagent and cannot be changed.


  • A Java EE agent started as an application called oracledi on port 8000 in a WLS server deployed on the odi_wls host will be reachable at the following URL:

    http://odi_wls:8000/oracledi.
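
As a minimal sketch (illustrative values, not ODI code), the agent URL is simply the concatenation of these four components:

  public class AgentUrlSketch {
      public static void main(String[] args) {
          String protocol = "http";              // http or https
          String host = "odi_production";        // host the agent runs on
          int port = 8080;                       // listening port
          String application = "oraclediagent";  // application name

          String agentUrl = String.format("%s://%s:%d/%s",
                  protocol, host, port, application);
          System.out.println(agentUrl); // http://odi_production:8080/oraclediagent
      }
  }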

Languages

The Languages node defines the languages and language elements available when editing expressions at design-time. The languages provided by default in Oracle Data Integrator do not require any user change.

Repositories

The topology contains information about the Oracle Data Integrator repositories. Repository definition, configuration and installation is covered in the Installation and Upgrade Guide for Oracle Data Integrator.

Setting Up the Topology

The following steps are a guideline to create the topology. You can always modify the topology after the initial setup:

  1. Create the contexts corresponding to your different environments. See "Creating a Context".

  2. Create the data servers corresponding to the servers used by Oracle Data Integrator. See "Creating a Data Server".

  3. For each data server, create the physical schemas corresponding to the schemas containing data to be integrated with Oracle Data Integrator. See "Creating a Physical Schema".

  4. Create logical schemas and associate them with physical schemas in the contexts. See "Creating a Logical Schema".

  5. Create the physical agents corresponding to the standalone or Java EE agents that are installed in your information systems. See "Creating a Physical Agent".

  6. Create logical agents and associate them with physical agents in the contexts. See "Creating a Logical Agent".

Creating a Context

To create a context:

  1. In Topology Navigator expand the Contexts accordion.

  2. Click New context in the accordion header.

  3. Fill in the following fields:

    • Name: Name of the context, as it appears in the Oracle Data Integrator graphical interface.

    • Code: Code of the context, allowing a context to be referenced and identified among the different repositories.

    • Password: Password requested when the user switches to this context in a graphical interface. It is recommended to use a password for critical contexts (for example, contexts pointing to Production data).

    • Check Default if you want this context to be displayed by default in the different lists in Designer Navigator or Operator Navigator.

  4. From the File menu, click Save.

Creating a Data Server

A Data Server corresponds, for example, to a database, a JMS server instance, a scripting engine, or a file system accessed with Oracle Data Integrator in the integration flows. Under a data server, subdivisions are created in the form of Physical Schemas.


Note:

Frequently used technologies have their data server creation methods detailed in the Connectivity and Knowledge Modules Guide for Oracle Data Integrator.


Pre-requisites and Guidelines

It is recommended to follow the guidelines below when creating a data server.

Review the Technology Specific Requirements

Some technologies require the installation and configuration of additional elements.

Refer to the documentation of the technology you are connecting to through the data server and to the Connectivity and Knowledge Modules Guide for Oracle Data Integrator. The connection information may also change depending on the technology. Refer to the server documentation provided, and contact the server administrator to define the connection methods.

Create an Oracle Data Integrator User

For each database engine used by Oracle Data Integrator, it is recommended to create a user dedicated to ODI on this data server (typically named ODI_TEMP).

Grant this user privileges to:

  • Create/drop objects and perform data manipulation in its own schema.

  • Manipulate data in objects of the other schemas of this data server, according to the operations required for the integration processes.

This user should be used as follows:

  • Use this user name/password in the data server user/password definition.

  • Use this user's schema as your Work Schema for all data schemas on this server.

Creating a Data Server

To create a Data Server:

  1. In Topology Navigator expand the Technologies node in the Physical Architecture accordion.


    Tip:

    The list of technologies that are displayed in the Physical Architecture accordion may be very long. To narrow the list of displayed technologies, you can hide unused technologies by selecting Hide Unused Technologies from the Topology Navigator toolbar menu.


  2. Select the technology you want to create a data server for.

  3. Right-click and select New Data Server.

  4. Fill in the following fields in the Definition tab:

    • Name: Name of the Data Server that will appear in Oracle Data Integrator.

      For naming data servers, it is recommended to use the following naming standard: <TECHNOLOGY_NAME>_<SERVER_NAME>.

    • ... (Data Server): This is the physical name of the data server used by other data servers to identify it. Enter this name if your data servers can be inter-connected in a native way. This parameter is not mandatory for all technologies.

      For example, for Oracle, this name corresponds to the name of the instance, used for accessing this data server from another Oracle data server through DBLinks.

    • User/Password: User name and password for connecting to the data server. This parameter is not mandatory for all technologies, as for example for the File technology.

      Depending on the technology, this could be a "Login", a "User", or an "account". For some connections using the JNDI protocol, the user name and its associated password can be optional (if they have been given in the LDAP directory).

  5. Define the connection parameters for the data server:

    A technology can be accessed directly through JDBC or the JDBC connection to this data server can be served from a JNDI directory.

    If the technology is accessed through a JNDI directory:

    1. Check the JNDI Connection on the Definition tab.

    2. Go to the JNDI tab, and fill in the following fields:

    JNDI authentication:

    • None: Anonymous access to the naming or directory service

    • Simple: Authenticated access, non-encrypted

    • CRAM-MD5: Authenticated access, encrypted MD5

    • <other value>: Authenticated access, encrypted according to <other value>

    JNDI User/Password: User name and password for connecting to the JNDI directory.

    JNDI Protocol: Protocol used for the connection. Note that only the most common protocols are listed here; this is not an exhaustive list.

    • LDAP: Access to an LDAP directory

    • SMQP: Access to a SwiftMQ MOM directory

    • <other value>: Access following the sub-protocol <other value>

    JNDI Driver: The driver allowing the JNDI connection. Example for the Sun LDAP directory: com.sun.jndi.ldap.LdapCtxFactory

    JNDI URL: The URL allowing the JNDI connection. For example: ldap://suse70:389/o=linuxfocus.org

    JNDI Resource: The directory element containing the connection parameters. For example: cn=sampledb


    If the technology is connected through JDBC:

    1. Un-check the JNDI Connection box.

    2. Go to the JDBC tab, and fill in the following fields:

      JDBC Driver: Name of the JDBC driver used for connecting to the data server.

      JDBC URL: URL allowing you to connect to the data server.


    You can get a list of pre-defined JDBC drivers and URLs by clicking Display available drivers or Display URL sample. (A sketch showing how these JNDI and JDBC parameters map to standard Java connection code follows this procedure.)

  6. From the File menu, click Save to validate the creation of the data server.
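
To make the JNDI and JDBC parameters above concrete, the following sketch shows how the JNDI fields map onto the standard javax.naming API. This is plain Java rather than ODI code, and the principal and credentials shown are hypothetical:

  import java.util.Hashtable;
  import javax.naming.Context;
  import javax.naming.InitialContext;

  public class JndiConnectionSketch {
      public static void main(String[] args) throws Exception {
          Hashtable<String, String> env = new Hashtable<>();
          // JNDI Driver -> the initial context factory class
          env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
          // JNDI URL -> the provider URL
          env.put(Context.PROVIDER_URL, "ldap://suse70:389/o=linuxfocus.org");
          // JNDI authentication -> the security level (none, simple, CRAM-MD5, ...)
          env.put(Context.SECURITY_AUTHENTICATION, "simple");
          // JNDI User/Password -> principal and credentials (hypothetical values)
          env.put(Context.SECURITY_PRINCIPAL, "cn=admin,o=linuxfocus.org");
          env.put(Context.SECURITY_CREDENTIALS, "secret");

          Context ctx = new InitialContext(env);
          // JNDI Resource -> the directory element holding the connection parameters
          Object resource = ctx.lookup("cn=sampledb");
          System.out.println("Resolved: " + resource);
          ctx.close();
      }
  }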

Creating a Data Server (Advanced Settings)

The following actions are optional:

Adding Connection Properties

These properties are passed when creating the connection, in order to provide optional configuration parameters. Each property is a (key, value) pair.

  • For JDBC: These properties depend on the driver used. Please see the driver documentation for a list of available properties. In JDBC, it is possible to specify here the user and password for the connection, instead of specifying them in the Definition tab (see the sketch after this procedure).

  • For JNDI: These properties depend on the resource used.

To add a connection property to a data server:

  1. On the Properties tab click Add a Property.

  2. Specify a Key identifying this property. This key is case-sensitive.

  3. Specify a value for the property.

  4. From the File menu, click Save.
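
The following sketch shows how such (key, value) pairs reach a JDBC driver. It is plain JDBC rather than ODI code; the URL and property names are illustrative, and it assumes the corresponding JDBC driver is on the classpath:

  import java.sql.Connection;
  import java.sql.DriverManager;
  import java.util.Properties;

  public class ConnectionPropertiesSketch {
      public static void main(String[] args) throws Exception {
          String url = "jdbc:oracle:thin:@//dbhost:1521/ORCL"; // hypothetical URL

          Properties props = new Properties();
          props.setProperty("user", "ODI_TEMP");                  // may replace the Definition tab user
          props.setProperty("password", "odi_temp_pwd");          // hypothetical password
          props.setProperty("oracle.jdbc.ReadTimeout", "30000");  // driver-specific property

          try (Connection conn = DriverManager.getConnection(url, props)) {
              System.out.println("Connected to "
                      + conn.getMetaData().getDatabaseProductName());
          }
      }
  }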

Defining Data Sources

On the Data Sources tab you can define JDBC data sources that will be used by Oracle Data Integrator Java EE Agents deployed on application servers to connect to this data server. Note that data sources are not applicable for standalone agents.

Defining data sources is not mandatory, but allows the Java EE agent to benefit from the data sources and connection pooling features available on the application server. Connection pooling allows reusing connections across several sessions. If a data source is not declared for a given data server in a Java EE agent, this Java EE agent always connects to the data server using a direct JDBC connection, that is, without using any of the application server data sources.

Before defining the data sources in Oracle Data Integrator, please note the following:

  • Datasources for WebLogic Server should be created with the Statement Cache Size parameter set to 0 in the Connection Pool configuration. Statement caching has a minor impact on data integration performance, and may lead to unexpected results such as data truncation with some JDBC drivers. Note that this concerns only data connections to the source and target data servers, not the repository connections.

  • If using Connection Pooling with datasources, it is recommended to avoid ALTER SESSION statements in procedures and Knowledge Modules. If a connection requires ALTER SESSION statements, it is recommended to disable connection pooling in the related datasources.

To define JDBC data sources for a data server:

  1. On the DataSources tab of the Data Server editor, click Add a DataSource.

  2. Select a physical Agent in the Agent field.

  3. Enter the data source name in the JNDI Name field.

    Note that this name must match the name of the data source in your application server.

  4. Check JNDI Standard if you want to use the environment naming context (ENC).

    When JNDI Standard is checked, Oracle Data Integrator automatically prefixes the data source name with the string java:comp/env/ to identify it in the application server's JNDI directory (see the lookup sketch after this procedure).

    Note that JNDI Standard is not supported by Oracle WebLogic Server or for global data sources.

  5. From the File menu, click Save.

After having defined a data source for a Java EE agent, you must create it in the application server into which the Java EE agent is deployed. There are several ways to create data sources in the application server.
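
As a sketch of what the JNDI Standard option implies on the application server side (the data source name jdbc/myDataSource is hypothetical, and this code must run inside a container that provides the environment naming context):

  import javax.naming.InitialContext;
  import javax.sql.DataSource;

  public class EncLookupSketch {
      public static void main(String[] args) throws Exception {
          InitialContext ctx = new InitialContext();

          // With JNDI Standard checked, the name is prefixed with java:comp/env/:
          DataSource enc = (DataSource) ctx.lookup("java:comp/env/jdbc/myDataSource");

          // Without JNDI Standard, the name is looked up as provided:
          DataSource global = (DataSource) ctx.lookup("jdbc/myDataSource");

          System.out.println(enc + " / " + global);
      }
  }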

Setting Up On Connect/Disconnect Commands

On the On Connect/Disconnect tab you can define SQL commands that will be executed when a connection to a data server defined in the physical architecture is created or closed.

The On Connect command is executed every time an ODI component, including ODI client components, connects to this data server.

The On Disconnect command is executed every time an ODI component, including ODI client components, disconnects from this data server.

These SQL commands are stored in the master repository along with the data server definition.
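
For example, on an Oracle data server, an On Connect command could normalize the session date format (the command below is illustrative; if the connection is pooled, see also the ALTER SESSION recommendation in "Defining Data Sources"):

  ALTER SESSION SET NLS_DATE_FORMAT = 'YYYY-MM-DD'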

Before setting up commands On Connect/Disconnect, please note the following:

  • The On Connect/Disconnect commands are only supported by data servers with a technology type Database (JDBC).

  • The On Connect and Disconnect commands are executed even when using data sources. In this case, the commands are executed when taking and releasing the connection from the connection pool.

  • Substitution APIs are supported. Note that the design time tags <% are not supported. Only the execution time tags <? and <@ are supported.

  • Only global variables in substitution mode (#GLOBAL.<VAR_NAME> or #<VAR_NAME> ) are supported. See "Variable Scope" for more information. Note that the Expression Editor only displays variables that are valid for the current data server.

  • The variables that are used in On Connect and Disconnect commands are only replaced at runtime, when the session starts. A command using variables will fail when testing the data server connection or performing a View Data operation on this data server. Make sure that these variables are declared in the scenarios.

  • Oracle Data Integrator Sequences are not supported in the On Connect and Disconnect commands.

The commands On Connect/Disconnect have the following usage:

  • When a session runs, it opens connections to data servers. Every time a connection is opened on a data server that has an On Connect command defined, a task is created under a specific step called Command on Connect. This task is named after the data server to which the connection is established and after the step and task that created the connection to this data server. It contains the code of the On Connect command.

  • When the session completes, it closes all connections to the data servers. Every time a connection is closed on a data server that has an On Disconnect command defined, a task is created under a specific step called Command on Disconnect. This task is named after the data server that is disconnected and after the step and task that dropped the connection to this data server. It contains the code of the On Disconnect command.

  • When an operation is made in ODI Studio or ODI Console that requires a connection to the data server (such as View Data or Test Connection), the commands On Connect/Disconnect are also executed if the Client Transaction is selected for this command.


Note:

You can specify whether or not to show On Connect and Disconnect steps in Operator Navigator. If the user parameter Hide On Connect and Disconnect Steps is set to Yes, On Connect and Disconnect steps are not shown.


To set up On Connect/Disconnect commands:

  1. On the On Connect/Disconnect tab of the Data Server editor, click Launch the Expression Editor in the On Connect section or in the On Disconnect section.

  2. In the Expression Editor, enter the SQL command.


    Note:

    The Expression Editor displays only the substitution methods and keywords that are available for the technology of the data server. Note that global variables are only displayed if the connection to the work repository is available.


  3. Click OK. The SQL command is displayed in the Command field.

  4. Optionally, select Commit, if you want to commit the connection after executing the command. Note that if AutoCommit or Client Transaction is selected in the Execute On list, this value will be ignored.

  5. Optionally, select Ignore Errors, if you want to ignore the exceptions encountered during the command execution. Note that if Ignore Errors is not selected, the calling operation will end in error status. A command with Ignore Errors selected that fails during a session will appear as a task in a Warning state.

  6. From the Log Level list, select the logging level (from 1 to 6) of the connect or disconnect command. At execution time, commands can be kept in the session log based on their log level. Default is 3.

  7. From the Execute On list, select the transaction(s) on which you want to execute the command.


    Note:

    Transactions from 0 to 9 and the Autocommit transaction correspond to connections created by sessions (by procedures or knowledge modules). The Client Transaction corresponds to the client components (ODI Console and Studio).


    You can select Select All or Unselect All to select or unselect all transactions.

  8. From the File menu, click Save.

You can now test the connection; see "Testing a Data Server Connection" for more information.

Testing a Data Server Connection

It is recommended to test the data server connection before proceeding in the topology definition.

To test a connection to a data server:

  1. In Topology Navigator expand the Technologies node in the Physical Architecture accordion and then expand the technology containing your data server.

  2. Double-click the data server you want to test. The Data Server Editor opens.

  3. Click Test Connection.

    The Test Connection dialog is displayed.

  4. Select the agent that will carry out the test. Local (No Agent) indicates that the local station will attempt to connect.

  5. Click Detail to obtain the characteristics and capacities of the database and JDBC driver.

  6. Click Test to launch the test.

A window showing "connection successful!" is displayed if the test has worked; if not, an error window appears. Use the Detail button in this error window to obtain more information about the cause of the connection failure.
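
The details shown by the Detail button come from the driver's metadata. The following standalone sketch (plain JDBC rather than ODI code, with a hypothetical URL and credentials) retrieves the equivalent information:

  import java.sql.Connection;
  import java.sql.DatabaseMetaData;
  import java.sql.DriverManager;

  public class TestConnectionSketch {
      public static void main(String[] args) throws Exception {
          try (Connection conn = DriverManager.getConnection(
                  "jdbc:oracle:thin:@//dbhost:1521/ORCL", "ODI_TEMP", "odi_temp_pwd")) {
              DatabaseMetaData md = conn.getMetaData();
              System.out.println("Database: " + md.getDatabaseProductName()
                      + " " + md.getDatabaseProductVersion());
              System.out.println("Driver:   " + md.getDriverName()
                      + " " + md.getDriverVersion());
          }
      }
  }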

Creating a Physical Schema

An Oracle Data Integrator Physical Schema corresponds to a pair of Schemas:

  • A (Data) Schema, into which Oracle Data Integrator will look for the source and target data structures for the mappings.

  • A Work Schema, into which Oracle Data Integrator can create and manipulate temporary work data structures associated to the sources and targets contained in the Data Schema.

Frequently used technologies have their physical schema creation methods detailed in the Connectivity and Knowledge Modules Guide for Oracle Data Integrator.

Before creating a Physical Schema, note the following:

  • Not all technologies support multiple schemas. In some technologies, you do not specify the work and data schemas since one data server has only one schema.

  • Some technologies do not support the creation of temporary structures. The work schema is not used for these technologies.

  • The user specified in the data server to which the Physical Schema is attached must have appropriate privileges on the schemas attached to this data server.

To create a Physical Schema:

  1. Select the data server, right-click, and select New Physical Schema. The Physical Schema Editor appears.

  2. If the technology supports multiple schemas:

    1. Select or type the Data Schema for this Data Integrator physical schema in ... (Schema). A list of the schemas appears if the technology supports schema listing.

    2. Select or type the Work Schema for this Data Integrator physical schema in ... (Work Schema). A list of the schemas appears if the technology supports schema listing.

  3. Check the Default box if you want this schema to be the default one for this data server. (The first physical schema is always the default one.)

  4. Go to the Context tab.

  5. Click Add.

  6. Select a Context and an existing Logical Schema for this new Physical Schema.

    If no Logical Schema for this technology exists yet, you can create it from this Editor.

    To create a Logical Schema:

    1. Select an existing Context in the left column.

    2. Type the name of a Logical Schema in the right column.

      This Logical Schema is automatically created and associated to this physical schema in this context when saving this Editor.

  7. From the File menu, click Save.

Creating a Logical Schema

To create a logical schema:

  1. In Topology Navigator expand the Technologies node in the Logical Architecture accordion.

  2. Select the technology you want to attach your logical schema to.

  3. Right-click and select New Logical Schema.

  4. Fill in the schema name.

  5. For each Context in the left column, select an existing Physical Schema in the right column. This Physical Schema is automatically associated to the logical schema in this context. Repeat this operation for all necessary contexts.

  6. From the File menu, click Save.

Creating a Physical Agent

To create a Physical Agent:

  1. In Topology Navigator right-click the Agents node in the Physical Architecture accordion.

  2. Select New Agent.

  3. Fill in the following fields:

    • Name: Name of the agent used in the Java graphical interface. Note: Avoid using Internal as the agent name; Oracle Data Integrator reserves the name Internal for the agent it uses when running sessions internally.

    • Host: Network name or IP address of the machine the agent will be launched on.

    • Port: Listening port used by the agent. By default, this port is 20910.

    • Web Application Context: Name of the web application corresponding to the Java EE agent deployed on an application server. For standalone agents, this field should be set to oraclediagent.

    • Protocol: Protocol to use for the agent connection. Possible values are http or https. Default is http.

    • Maximum number of sessions supported by this agent.

    • Maximum number of threads: Controls the number of maximum threads an ODI Agent can use at any given time. Tune this as per your system resources and CPU capacity.

    • Maximum threads per session: ODI supports executing sessions with multiple threads. This limits maximum parallelism for a single session execution.

    • Session Blueprint cache Management:

      • Maximum cache entries: For performance, session blueprints are cached. Tune this parameter to control the JVM memory consumption due to the Blueprint cache.

      • Unused Blueprint Lifetime (sec): Idle time interval for flushing a blueprint from the cache. (A sketch of this cache policy follows this procedure.)

  4. If you want to set up load balancing, go to the Load balancing tab and select a set of linked physical agents to which the current agent can delegate executions. See "Setting Up Load Balancing" for more information.

  5. If the agent is launched, click Test. The successful connection dialog is displayed.

  6. Click Yes.
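
As a sketch of the cache policy that the Maximum cache entries and Unused Blueprint Lifetime parameters describe (illustrative Java, not the ODI implementation; the capacity and lifetime values are arbitrary):

  import java.util.LinkedHashMap;
  import java.util.Map;

  public class BlueprintCacheSketch {
      private final int maxEntries;
      private final long unusedLifetimeMillis;
      private final Map<String, Entry> cache;

      private static final class Entry {
          final Object blueprint;
          long lastUsed;
          Entry(Object blueprint, long now) { this.blueprint = blueprint; this.lastUsed = now; }
      }

      BlueprintCacheSketch(int maxEntries, long unusedLifetimeSeconds) {
          this.maxEntries = maxEntries;
          this.unusedLifetimeMillis = unusedLifetimeSeconds * 1000L;
          // Access-ordered map that evicts the eldest entry past the size cap.
          this.cache = new LinkedHashMap<String, Entry>(16, 0.75f, true) {
              @Override
              protected boolean removeEldestEntry(Map.Entry<String, Entry> eldest) {
                  return size() > BlueprintCacheSketch.this.maxEntries;
              }
          };
      }

      Object get(String sessionKey) {
          long now = System.currentTimeMillis();
          // Flush blueprints that have been idle longer than the unused lifetime.
          cache.values().removeIf(e -> now - e.lastUsed > unusedLifetimeMillis);
          Entry e = cache.get(sessionKey);
          if (e == null) return null;
          e.lastUsed = now;
          return e.blueprint;
      }

      void put(String sessionKey, Object blueprint) {
          cache.put(sessionKey, new Entry(blueprint, System.currentTimeMillis()));
      }

      public static void main(String[] args) {
          BlueprintCacheSketch cache = new BlueprintCacheSketch(100, 300);
          cache.put("blueprint-for-session-1", new Object());
          System.out.println(cache.get("blueprint-for-session-1") != null); // true while fresh
      }
  }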

Creating a Logical Agent

To create a logical agent:

  1. In Topology Navigator right-click the Agents node in the Logical Architecture accordion.

  2. Select New Logical Agent.

  3. Fill in the Agent Name.

  4. For each Context in the left column, select an existing Physical Agent in the right column. This Physical Agent is automatically associated to the logical agent in this context. Repeat this operation for all necessary contexts.

  5. From the File menu, click Save.

Managing Agents

This section describes how to work with a standalone agent and a Java EE agent, and how to handle load balancing.

Standalone Agent

Managing the standalone agent involves the actions discussed in the sections below.


Note:

The agent command line scripts, which are required for performing the tasks described in this section, are only available if you have installed the Oracle Data Integrator Standalone Agent. See Installing and Configuring Oracle Data Integrator for information about how to install the Standalone Agent.


Launching a Standalone Agent

The standalone agent is able to execute scenarios on predefined schedules or on demand. The instructions for launching the standalone agent are provided in "Starting the Node Manager and Standalone Agent," in the Installation Guide for Oracle Data Integrator.

Stopping an Agent

The procedure for stopping a standalone agent is described in "Stopping Your Oracle Data Integrator Agents," in the Installation Guide for Oracle Data Integrator.

Java EE Agent

Managing a Java EE agent involves the actions discussed in the sections below.

Creating a Server Template for the Java EE Agent

Oracle Data Integrator provides a Server Template Generation wizard to help you create a server template for a run-time agent.

To open the Server Template Generation wizard:

  1. From the Physical Agent Editor toolbar menu, select Generate Server Template. This starts the Template Generation wizard.

  2. In the Agent Information step, review the agent information and modify the default configuration if needed.

    The Agent Information includes the following parameters:

    • General

      Agent Name: Displays the name of the Agent that you want to deploy.

      From the list, select the server on which you want to deploy the agent. Possible values are Oracle WebLogic and IBM WebSphere.

    • Master Repository Connection

      Datasource JNDI Name: The name of the datasource used by the Java EE agent to connect to the master repository. The template can contain a definition of this datasource. Default is jdbc/odiMasterRepository.

    • Connection Retry Settings

      Connection Retry Count: Number of retry attempts done if the agent loses the connection to the repository. Note that setting this parameter to a non-zero value enables a high availability connection retry feature if the ODI repository resides on an Oracle RAC database. If this feature is enabled, the agent can continue to execute sessions without interruptions even if one or more Oracle RAC nodes become unavailable.

      Retry Delay (milliseconds): Interval (in milliseconds) between each connection retry attempt.

    • Supervisor Authentication

      Supervisor Key: Name of the key in the application server credential store that contains the login and the password of an ODI user with Supervisor privileges. The agent will use this user's credentials to connect to the repository.

  3. Click Next.

  4. In the Libraries and Drivers step, select from the list the external libraries and drivers to deploy with this agent. Only libraries added by the user appear here.

    Note that the libraries can be any JAR or ZIP file that is required for this agent. Additional JDBC drivers or libraries for accessing the source and target data servers must be selected here.

    You can use the corresponding buttons in the toolbar to select or deselect all libraries and/or drivers in the list.

  5. Click Next.

  6. In the Datasources step, select the datasources definitions that you want to include in this agent template. You can only select datasources from the wizard. Naming and adding these datasources is done in the Data Sources tab of the Physical Agent editor.

  7. Click Next.

  8. In the Template Target and Summary step, enter the Target Template Path where the server template will be generated.

  9. Click Finish to close the wizard and generate the server template.

    The Template generation information dialog appears.

  10. Click OK to close the dialog.

The generated template can be used to deploy the agent in WLS or WAS using the respective configuration wizard. Refer to Installing and Configuring Oracle Data Integrator for more information.

Declare the Supervisor in the Credential Store

After deploying the template, it is necessary to declare the Supervisor in the WLS or WAS Credential Store. Refer to Installing and Configuring Oracle Data Integrator for more information.

Deploying Datasources from Oracle Data Integrator in an application server for an Agent

You can create datasources from the Topology Navigator into an application server (either Oracle WebLogic Server or IBM WebSphere) for which a Java EE agent is configured.

To deploy datasources in an application server:

  1. Open the Physical Agent Editor configured for the application server into which you want to deploy the datasources.

  2. Go to the Datasources tab.

  3. Drag and Drop the source/target data servers from the Physical Architecture tree in the Topology Navigator into the DataSources tab.

  4. Provide a JNDI Name for these datasources.

  5. Right-click any of the datasources, then select Deploy Datasource on Server.

  6. On the Datasources Deployment dialog, select the server on which you want to deploy the data sources. Possible values are WLS or WAS server.

  7. In the Deployment Server Details section, fill in the following fields:

    • Host: Host name or IP address of the application server.

    • Port: Bootstrap port of the deployment manager.

    • User: Server user name.

    • Password: This user's password.

  8. In the Datasource Deployment section, provide the name of the server on which the datasource should be deployed, for example odi_server1.

  9. Click OK.


Note:

This operation only creates the Datasources definition in WebLogic Server or WebSphere Application Server. It does not install drivers or library files needed for these datasources to work. Additional drivers added to the Studio classpath can be included into the Agent Template. See "Creating a Server Template for the Java EE Agent" for more information.


WLS Datasource Configuration and Usage

When setting up datasources in WebLogic Server for Oracle Data Integrator, please note the following:

  • Datasources should be created with the Statement Cache Size parameter set to 0 in the Connection Pool configuration. Statement caching has a minor impact on data integration performance, and may lead to unexpected results such as data truncation with some JDBC drivers.

  • If using Connection Pooling with datasources, it is recommended to avoid ALTER SESSION statements in procedures and Knowledge Modules. If a connection requires ALTER SESSION statements, it is recommended to disable connection pooling in the related datasources, as an altered connection returns to the connection pool after usage.

Load Balancing Agents

Oracle Data Integrator allows you to load balance parallel session execution between physical agents.

Each physical agent is defined with:

  • A maximum number of sessions it can execute simultaneously from a work repository

    The maximum number of sessions is a value that must be set depending on the capabilities of the machine running the agent. It can also be set depending on the amount of processing power you want to give to the Oracle Data Integrator agent.

  • Optionally, a number of linked physical agents to which it can delegate sessions' executions.

An agent's load is determined at a given time by the ratio (Number of running sessions / Maximum number of sessions) for this agent.

Delegating Sessions

When a session is started on an agent with linked agents, Oracle Data Integrator determines which one of the linked agents is the least loaded, and the session is delegated to this linked agent.

An agent can be linked to itself, in order to execute some of the incoming sessions, instead of delegating them all to other agents. Note that an agent not linked to itself is only able to delegate sessions to its linked agents, and will never execute a session.

Delegation cascades in the hierarchy of linked agents. If agent A has agent B1 and B2 linked to it, and agent B1 has agent C1 linked to it, then sessions started on agent A will be executed by agent B2 or agent C1. Note that it is not recommended to make loops in agents links.

If the user parameter "Use new Load Balancing" is set to Yes, sessions are also re-balanced each time a session finishes. This means that if an agent runs out of sessions, it will possibly be reallocated sessions already allocated to another agent.
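
The following sketch illustrates the delegation rule described above (illustrative Java, not the ODI implementation): the incoming session goes to the linked agent with the lowest load ratio:

  import java.util.Comparator;
  import java.util.List;

  public class DelegationSketch {
      record Agent(String name, int runningSessions, int maxSessions) {
          double load() { return (double) runningSessions / maxSessions; }
      }

      public static void main(String[] args) {
          List<Agent> linkedAgents = List.of(
                  new Agent("B1", 3, 10),  // load 0.30
                  new Agent("B2", 1, 5));  // load 0.20

          Agent target = linkedAgents.stream()
                  .min(Comparator.comparingDouble(Agent::load))
                  .orElseThrow();

          System.out.println("Session delegated to: " + target.name()); // B2
      }
  }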

Agent Unavailable

When, for a given agent, the number of running sessions reaches its maximum number of sessions, the agent will put incoming sessions in a "queued" status until the number of running sessions falls below the maximum number of sessions.

If an agent is unavailable (for example, because it crashed), and the user parameter Use the new load balancing is set to Yes, all its queued sessions will be re-assigned to another load balanced agent that is neither running any session nor holding sessions in its queue.

Setting Up Load Balancing

To set up load balancing:

  1. Define a set of physical agents, and link them in a hierarchy of agents (see "Creating a Physical Agent" for more information).

  2. Start all the physical agents corresponding to the agents defined in the topology.

  3. Run the executions on the root agent of your hierarchy. Oracle Data Integrator will balance the load of the executions between its linked agents.


15 Using Load Plans

This chapter gives an introduction to Load Plans. It describes how to create a Load Plan and provides information about how to work with Load Plans.

This chapter includes the following sections:

Introduction to Load Plans

Oracle Data Integrator is often used for populating very large data warehouses. In these use cases, it is common to have thousands of tables being populated using hundreds of scenarios. The execution of these scenarios has to be organized in such a way that the data throughput from the sources to the target is the most efficient within the batch window. Load Plans help the user organize the execution of scenarios in a hierarchy of sequential and parallel steps for these types of use cases.

A Load Plan is an executable object in Oracle Data Integrator that can contain a hierarchy of steps that can be executed conditionally, in parallel or in series. The leaf nodes of this hierarchy are Scenarios. Packages, mappings, variables, and procedures can be added to Load Plans for executions in the form of scenarios. For more information, see "Creating a Load Plan".

Load Plans allow setting and using variables at multiple levels. See "Working with Variables in Load Plans" for more information. Load Plans also support exception handling strategies in the event of a scenario ending in error. See "Handling Load Plan Exceptions and Restartability" for more information.

Load Plans can be started, stopped, and restarted from a command line, from Oracle Data Integrator Studio, Oracle Data Integrator Console or a Web Service interface. They can also be scheduled using the run-time agent's built-in scheduler or an external scheduler. When a Load Plan is executed, a Load Plan Instance is created. Each attempt to run this Load Plan Instance is a separate Load Plan Run. See "Running Load Plans" for more information.

A Load Plan can be modified in production environments and steps can be enabled or disabled according to the production needs. Load Plan objects can be designed and viewed in the Designer and Operator Navigators. Various design operations (such as create, edit, delete, and so forth) can be performed on a Load Plan object if a user connects to a development work repository, but some design operations will not be available in an execution work repository. See "Editing Load Plan Steps" for more information.

Once created, a Load Plan is stored in the work repository. The Load Plan can be exported then imported to another repository and executed in different contexts. Load Plans can also be versioned. See "Exporting, Importing and Versioning Load Plans" for more information.

Load Plans appear in Designer Navigator and in Operator Navigator in the Load Plans and Scenarios accordion. The Load Plan Runs are displayed in the Load Plan Executions accordion in Operator Navigator.

Load Plan Execution Lifecycle

When running or scheduling a Load Plan you provide the variable values, the contexts and logical agents used for this Load Plan execution.

Executing a Load Plan creates a Load Plan instance and a first Load Plan run. This Load Plan instance is separated from the original Load Plan, and the Load Plan Run corresponds to the first attempt to execute this instance. If a run is restarted a new Load Plan run is created under this Load Plan instance. As a consequence, each execution attempt of the Load Plan Instance is preserved as a different Load Plan run in the Log. See "Running Load Plans" for more information.

Differences between Packages, Scenarios, and Load Plans

A Load Plan is the largest executable object in Oracle Data Integrator. It uses Scenarios in its steps. When an executable object is used in a Load Plan, it is automatically converted into a scenario. For example, a package is used in the form of a scenario in Load Plans. Note that Load Plans cannot be added to a Load Plan. However, it is possible to add a scenario in the form of a Run Scenario step that starts another Load Plan using the OdiStartLoadPlan tool.

Load plans are not substitutes for packages or scenarios; they are used to organize the execution of packages and scenarios at a higher level.

Unlike packages, Load Plans provide native support for parallelism, restartability and exception handling. Load plans are moved to production as is, whereas packages are moved in the form of scenarios. Load Plans can be created in Production environments.

The Load Plan instances and Load Plan runs are similar to Sessions. The difference is that when a session is restarted, the existing session is overwritten by the new execution. The new Load Plan Run does not overwrite the existing Load Plan Run, it is added after the previous Load Plan Runs for this Load Plan Instance. Note that the Load Plan Instance cannot be modified at run-time.

Load Plan Structure

A Load Plan is made up of a sequence of several types of steps. Each step can contain several child steps. Depending on the step type, the steps can be executed conditionally, in parallel or sequentially. By default, a Load Plan contains an empty root serial step. This root step is mandatory and the step type cannot be changed.

Table 15-1 lists the different types of Load Plan steps and the possible child steps.

Table 15-1 Load Plan Steps

Type / Description / Possible Child Steps

Serial Step

Defines a serial execution of its child steps. Child steps are ordered and a child step is executed only when the previous one is terminated.

The root step is a Serial step.

  • Serial step

  • Parallel step

  • Run Scenario step

  • Case step

Parallel Step

Defines a parallel execution of its child steps. Child steps are started immediately in their order of Priority.

  • Serial step

  • Parallel step

  • Run Scenario step

  • Case step

Run Scenario Step

Launches the execution of a scenario.

This type of step cannot have child steps.

Case Step

When Step

Else Steps

The combination of these steps allows conditional branching based on the value of a variable.

Note: If you have several When steps under a Case step, only the first enabled When step that satisfies the condition is executed. If no When step satisfies the condition or the Case step does not contain any When steps, the Else step is executed.

Of a Case Step:

  • When step

  • Else step

Of a When step:

  • Serial step

  • Parallel step

  • Run Scenario step

  • Case step

Of an Else step:

  • Serial step

  • Parallel step

  • Run Scenario step

  • Case step

Exception Step

Defines a group of steps that is executed when an exception is encountered in the associated step from the Step Hierarchy. The same exception step can be attached to several steps in the Steps Hierarchy.

  • Serial step

  • Parallel step

  • Run Scenario step

  • Case step


Figure 15-1 shows a sample Load Plan created in Oracle Data Integrator. This sample Load Plan loads a data warehouse:

  • Dimensions are loaded in parallel. This includes the LOAD_TIME_DIM, LOAD_PRODUCT_DIM, LOAD_CUSTOMER_DIM scenarios, the geographical dimension and depending on the value of the ODI_VAR_SESS1 variable, the CUST_NORTH or CUST_SOUTH scenario.

  • The geographical dimension consists of a sequence of three scenarios (LOAD_GEO_ZONE_DIM, LOAD_COUNTRIES_DIM, LOAD_CITIES_DIM).

  • After the dimensions are loaded, the two fact tables are loaded in parallel (LOAD_SALES_FACT and LOAD_MARKETING_FACT scenarios).

Figure 15-1 Sample Load Plan


Introduction to the Load Plan Editor

The Load Plan Editor provides a single environment for designing Load Plans. Figure 15-2 gives an overview of the Load Plan Editor.

Figure 15-2 Steps Tab of the Load Plan Editor


The Load Plan steps are added, edited and organized in the Steps tab of the Load Plan Editor. The Steps Hierarchy table defines the organization of the steps in the Load Plan. Each row in this table represents a step and displays its main properties.

You can drag components such as packages, integration mappings, variables, procedures, or scenarios from the Designer Navigator into the Steps Hierarchy table for creating Run Scenario steps for these components.

You can also use the Add Step Wizard or the Quick Step tool to add Run Scenario steps and other types of steps into this Load Plan. See "Adding Load Plan Steps" for more information.

The Load Plan Editor toolbar, located on top of the Steps Hierarchy table, provides tools for creating, organizing, and sequencing the steps in the Load Plan. Table 15-2 details the different toolbar components.

Table 15-2 Load Plan Editor Toolbar

Name / Description

Search: Searches for a step in the Steps Hierarchy table.

Expand All: Expands all tree nodes in the Steps Hierarchy table.

Collapse All: Collapses all tree nodes in the Steps Hierarchy table.

Add Step: Opens the Add Step menu. You can either select the Add Step Wizard or a Quick Step tool to add a step. See "Adding Load Plan Steps" for more information.

Remove Step: Removes the selected step and all its child steps.

Reorder arrows (Move Up, Move Down, Move Out, Move In): Use the reorder arrows to move the selected step to the required position.


The Properties Panel, located under the Steps Hierarchy table, displays the properties for the object that is selected in the Steps Hierarchy table.

Creating a Load Plan

This section describes how to create a new Load Plan in ODI Studio.

  1. Define a new Load Plan. See "Creating a New Load Plan" for more information.

  2. Add Steps into the Load Plan and define the Load Plan Sequence. See "Defining the Load Plan Step Sequence" for more information.

  3. Define how the exceptions should be handled. See "Handling Load Plan Exceptions and Restartability" for more information.

Creating a New Load Plan

Load Plans can be created from the Designer or Operator Navigator.

To create a new Load Plan:

  1. In Designer Navigator or Operator Navigator, click New Load Plan in the toolbar of the Load Plans and Scenarios accordion. The Load Plan Editor is displayed.

  2. In the Load Plan Editor, type in the Name and a Description for this Load Plan.

  3. Optionally, set the following parameters:

    • Log Sessions: Select how the session logs should be preserved for the sessions started by the Load Plan. Possible values are:

      • Always: Always keep session logs (Default)

      • Never: Never keep session logs. Note that for Run Scenario steps that are configured as Restart from Failed Step or Restart from Failed Task, the agent will behave as if the parameter is set to Error as the whole session needs to be preserved for restartability.

      • Error: Only keep the session log if the session completed in an error state.

    • Log Session Step: Select how the logs should be maintained for the session steps of each of the sessions started by the Load Plan. Note that this applies only when the session log is preserved. Possible values are:

      • By Scenario Settings: Session step logs are preserved depending on the scenario settings. Note that for scenarios created from packages, you can specify whether or not to preserve the steps in the advanced step property called Log Steps in the Journal. Other scenarios preserve all the steps (Default).

      • Never: Never keep session step logs. Note that for Run Scenario steps that are configured as Restart from Failed Step or Restart from Failed Task, the agent will behave as if the parameter is set to Error as the whole session needs to be preserved for restartability.

      • Errors: Only keep session step log if the step is in an error state.

    • Session Task Log Level: Select the log level for the sessions. This value corresponds to the Log Level value when starting unitary scenarios. Default is 5. Note that when Run Scenario steps are configured as Restart from Failed Step or Restart From Failed Task, this parameter is ignored as the whole session needs to be preserved for restartability.

    • Keywords: Enter a comma separated list of keywords that will be set on the sessions started from this load plan. These keywords improve the organization of ODI logs by session folders and automatic classification. Note that you can overwrite these keywords at the level of the child steps. See "Managing the Log" for more information.

  4. Go to the Steps tab and add steps as described in "Defining the Load Plan Step Sequence".

  5. If your Load Plan requires conditional branching, or if your scenarios use variables, go to the Variables tab and declare variables as described in "Declaring Load Plan Variables".

  6. To add exception steps that are used in the event of a load plan step failing, go to the Exceptions tab and define exception steps as described in "Defining Exceptions Flows".

  7. From the File menu, click Save.

The Load Plan appears in the Load Plans and Scenarios accordion. You can organize your Load Plans by grouping related Load Plans and Scenarios into a Load Plan and Scenarios folder.

Defining the Load Plan Step Sequence

Load Plans are an organized hierarchy of child steps. This hierarchy allows conditional processing of steps in parallel or in series.

The execution flow can be configured at two stages:

  • At Design-time, when defining the Steps Hierarchy:

    • When you add a step to a Load Plan, you select the step type. The step type defines the possible child steps and how these child steps are executed: in parallel, in series, or conditionally based on the value of a variable (Case step). See Table 15-1, "Load Plan Steps" for more information on step types.

    • When you add a step to a Load Plan, you also decide where to insert the step. You can add a child step, a sibling step after the selected step, or a sibling step before the selected step. See "Adding Load Plan Steps" for more information.

    • You can also reorganize the order of the Load Plan steps by dragging the step to the wanted position or by using the arrows in the Step table toolbar. See Table 15-2, "Load Plan Editor Toolbar" for more information.

  • At design-time and run-time by enabling or disabling a step. In the Steps hierarchy table, you can enable or disable a step. Note that disabling a step also disables all its child steps. Disabled steps and all their child steps are not executed when you run the load plan.

This section contains the following topics:

Adding Load Plan Steps

A Load Plan step can be added either by using the Add Step Wizard or by selecting the Quick Step tool for a specific step type. See Table 15-1, "Load Plan Steps" for more information on the different types of Load Plan steps. To create Run Scenario steps, you can also drag components such as packages, mappings, variables, procedures, or scenarios from the Designer Navigator into the Steps Hierarchy table. Oracle Data Integrator automatically creates a Run Scenario step for the inserted component.

When a Load Plan step is added, it is inserted into the Steps Hierarchy with the minimum required settings. See "Editing Load Plan Steps" for more information on how to configure Load Plan steps.

Adding a Load Plan Step with the Add Step Wizard

To insert a Load Plan step with the Add Step Wizard:

  1. Open the Load Plan Editor and go to the Steps tab.

  2. Select a step in the Steps Hierarchy table.

  3. In the Load Plan Editor toolbar, select Add Step > Add Step Wizard.

  4. In the Add Step Wizard, select:

    • Step Type. Possible step types are: Serial, Parallel, Run Scenario, Case, When, and Else. See Table 15-1, "Load Plan Steps" for more information on the different step types.

    • Step Location. This parameter defines where the step is added.

      • Add a child step to selection: The step is added under the selected step.

      • Add a sibling step after selection: The step is added on the same level after the selected step.

      • Add a sibling step before selection: The step is added on the same level before the selected step.


    Note:

    Only values that are valid for the current selection are displayed for the Step Type and Step Location.


  5. Click Next.

  6. Follow the instructions in Table 15-3 for the step type you are adding.

    Table 15-3 Add Step Wizard Actions

    Step Type / Description and Action Required

    Serial or Parallel step

    Enter a Step Name for the new Load Plan step.

    Run Scenario step

    1. Click the Lookup Scenario button.

    2. In the Lookup Scenario dialog, you can select the scenario you want to add to your Load Plan and click OK.

      Alternately, to create a scenario for an executable object and use this scenario, select this object type in the Executable Object Type selection box, then select the executable object that you want to run with this Run Scenario step and click OK. Enter the new scenario name and version and click OK. A new scenario is created for this object and used in this Run Scenario Step.

      Tip: At design time, you may want to create a Run Scenario step using a scenario that does not exist yet. In this case, instead of selecting an existing scenario, enter directly a Scenario Name and a Version number and click Finish. Later on, you can select the scenario using the Modify Run Scenario Step wizard. See "Change the Scenario of a Run Scenario Step" for more information.

      Note that when you use the version number -1, the latest version of the scenario will be used.

    3. The Step Name is automatically populated with the name of the scenario and the Version field with the version number of the scenario. Optionally, change the Step Name.

    4. Click Next.

    5. In the Add to Load Plan column, select the scenario variables that you want to add to the Load Plan variables. If the scenario uses certain variables as its startup parameters, they are automatically added to the Load Plan variables.

      See "Working with Variables in Load Plans" for more information.

    Case

    1. Select the variable you want to use for the conditional branching. Note that you can either select one of the load plan variables from the list or click Lookup Variable to add a new variable to the load plan and use it for this case step.

      See "Working with Variables in Load Plans" for more information.

    2. The Step Name is automatically populated with the step type and name of the variable. Optionally, change the Step Name.

      See "Editing Load Plan Steps" for more information.

    When

    1. Select the Operator to use in the WHEN clause evaluation. Possible values are:

      • Less Than (<)

      • Less Than or Equal (<=)

      • Different (<>)

      • Equals (=)

      • Greater Than (>)

      • Greater Than or Equal (>=)

      • Is not Null

      • Is Null

    2. Enter the Value to use in the WHEN clause evaluation. (A sketch of this evaluation follows this procedure.)

    3. The Step Name is automatically populated with the operator that is used. Optionally, change the Step Name.

      See "Editing Load Plan Steps" for more information.

    Else

    The Step Name is automatically populated with the step type. Optionally, change the Step Name.

    See "Editing Load Plan Steps" for more information.


  7. Click Finish.

  8. The step is added in the steps hierarchy.


Note:

You can reorganize the order of the Load Plan steps by dragging the step to the desired position or by using the reorder arrows in the Step table toolbar to move a step in the Steps Hierarchy.
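
To make the When evaluation concrete, the following sketch shows how such a test on a Load Plan variable value could be evaluated (illustrative Java following the operator list in Table 15-3, not ODI's actual implementation):

  public class WhenStepSketch {
      static boolean evaluate(Integer variableValue, String operator, Integer testValue) {
          if ("Is Null".equals(operator))     return variableValue == null;
          if ("Is not Null".equals(operator)) return variableValue != null;
          if (variableValue == null) return false; // a comparison with a null value fails
          int cmp = variableValue.compareTo(testValue);
          switch (operator) {
              case "<":  return cmp < 0;
              case "<=": return cmp <= 0;
              case "<>": return cmp != 0;
              case "=":  return cmp == 0;
              case ">":  return cmp > 0;
              case ">=": return cmp >= 0;
              default: throw new IllegalArgumentException("Unknown operator: " + operator);
          }
      }

      public static void main(String[] args) {
          // A Case step on ODI_VAR_SESS1 with a When "=" 1 branch:
          System.out.println(evaluate(1, "=", 1)); // true: this When branch is executed
          System.out.println(evaluate(2, "=", 1)); // false: the next When/Else is considered
      }
  }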


Adding a Load Plan Step with the Quick Step Tool

To insert a Load Plan step with the Quick Step Tool:

  1. Open the Load Plan editor and go to the Steps tab.

  2. In the Steps Hierarchy, select the Load Plan step under which you want to create a child step.

  3. In the Steps toolbar, select Add Step and the Quick Step option corresponding to the Step type you want to add. Table 15-4 lists the options of the Quick Step tool.

    Table 15-4 Quick Step Tool

    Quick Step tool option / Description and Action Required

    Serial step: Adds a serial step after the selection. Default values are used. You can modify these values in the Steps Hierarchy table or in the Property Inspector. See "Editing Load Plan Steps" for more information.

    Parallel step: Adds a parallel step after the selection. Default values are used. You can modify these values in the Steps Hierarchy table or in the Property Inspector. See "Editing Load Plan Steps" for more information.

    Run Scenario step: Adds a run scenario step after the selection. Follow the instructions for Run Scenario steps in Table 15-3.

    Case step: Adds a Case step after the selection. Follow the instructions for Case steps in Table 15-3.

    When step: Adds a When step after the selection. Follow the instructions for When steps in Table 15-3.

    Else step: Adds an Else step after the selection. Follow the instructions for Else steps in Table 15-3.



    Note:

    Only step types that are valid for the current selection are enabled in the Quick Step tool.


Editing Load Plan Steps

To edit a Load Plan step:

  1. Open the Load Plan editor and go to the Steps tab.

  2. In the Steps Hierarchy table, select the Load Plan step you want to modify. The Property Inspector displays the step properties.

  3. Edit the Load Plan step properties according to your needs.

The following operations are common tasks when editing steps:

Change the Scenario of a Run Scenario Step

To change the scenario:

  1. In the Steps Hierarchy table of the Steps or Exceptions tab, select the Run Scenario step.

  2. In the Step Properties section of the Properties Inspector, click Lookup Scenario. This opens the Modify Run Scenario Step wizard.

  3. In the Modify Run Scenario Step wizard, click Lookup Scenario and follow the instructions in Table 15-3 corresponding to the Run Scenario step.

Set Advanced Options for Run Scenario Steps

You can set the following properties for Run Scenario steps in the Property Inspector:

  • Priority: Priority for this step when the scenario needs to start in parallel. The integer value range is from 0 to 100 (100 being the highest priority). Default is 0. The priority of a Run Scenario step is evaluated among all runnable scenarios within a running Load Plan. The Run Scenario step with the highest priority is executed first.

  • Context: Context that is used for the step execution. Default context is the Load Plan context that is defined in the Start Load Plan Dialog when executing a Load Plan. Note that if you only specify the Context and no Logical Agent value, the step is started on the same physical agent that started the Load Plan, but in this specified context.

  • Logical Agent: Logical agent that is used for the step execution. By default, the logical agent, which is defined in the Start Load Plan Dialog when executing a Load Plan, is used. Note that if you set only the Logical Agent and no context, the step is started with the physical agent corresponding to the specified Logical Agent resolved in the context specified when starting the Load Plan. If no Logical Agent value is specified, the step is started on the same physical agent that started the Load Plan (whether a context is specified for the step or not).

Open the Linked Object of Run Scenario Steps

Run Scenario steps can be created for packages, mappings, variables, procedures, or scenarios. Once this Run Scenario step is created, you can open the Object Editor of the original object to view and edit it.

To view and edit the linked object of Run Scenario steps:

  1. In the Steps Hierarchy table of the Steps or Exceptions tab, select the Run Scenario step.

  2. Right-click and select Open the Linked Object.

The Object Editor of the linked object is displayed.

Change the Test Variable in Case Steps

To change the variable that is used for evaluating the tests defined in the WHEN statements:

  1. In the Steps Hierarchy table of the Steps tab or Exceptions tab, select the Case step.

  2. In the Step Properties section of the Properties Inspector, click Lookup Variable. This opens the Modify Case Step Dialog.

  3. In the Modify Case Step Dialog, click Lookup Variable and follow the instructions in Table 15-3, "Add Step Wizard Actions" corresponding to the Case step.

Define the Exception and Restart Behavior

Exception and Restart behavior can be set on the steps in the Steps Hierarchy table. See "Handling Load Plan Exceptions and Restartability" for more information.

Regenerate Scenarios

To regenerate all the scenarios of a given Load Plan step, including the scenarios of its child steps:

  1. From the Steps Hierarchy table of the Steps tab or Exceptions tab, select the Load Plan step.

  2. Right-click and select Regenerate. Note that this option is not available for scenarios with the version number -1.

  3. Click OK.


Caution:

Regenerating a scenario cannot be undone. For important scenarios, it is better to generate a scenario with a new version number.


Refresh Scenarios to Latest Version

To modify all the scenario steps of a given Load Plan step, including the scenarios of its child steps, and set the scenario version to the latest version available for each scenario:

  1. From the Steps Hierarchy table of the Steps tab or Exceptions tab, select the Load Plan step.

  2. Right-click and select Refresh Scenarios to Latest Version. Note that in the editor, the latest scenario version is determined by the scenario creation timestamp; at execution time, however, the ODI agent determines the latest scenario by sorting the Scenario Version string values in ascending alphabetical order and picking the last one in the list.


    Note:

    This option is not available for scenarios with the version number -1.


  3. Click OK.

Deleting a Step

To delete a step:

  1. Open the Load Plan Editor and go to the Steps tab.

  2. In the Steps Hierarchy table, select the step to delete.

  3. In the Load Plan Editor toolbar, select Remove Step.

The step and its child steps are removed from the Steps Hierarchy table.


Note:

It is not possible to undo a delete operation in the Steps Hierarchy table.


Duplicating a Step

To duplicate a step:

  1. Open the Load Plan Editor and go to the Steps tab.

  2. In the Steps Hierarchy table, right-click the step to duplicate and select Duplicate Selection.

  3. A copy of this step, including its child steps, is created and added as a sibling step after the original step in the Steps Hierarchy table.

You can now move and edit this step.

Working with Variables in Load Plans

Project and Global Variables used in a Load Plan are declared as Load Plan Variables in the Load Plan editor. These variables are automatically available in all steps, and their values are passed to the Load Plan steps.

The variable values are passed to the Load Plan on startup as startup parameters. At a step level, you can overwrite the variable value (by setting it or forcing a refresh) for this step and its child steps.


Note:

At startup, Load Plans do not take into account the default value of a variable, or the historized/latest value of a variable in the execution context. The value of the variable is either the one specified when starting the Load Plan, or the value set/refreshed within the Load Plan.


You can use variables in Run Scenario steps - the variable values are passed as startup parameters to the scenario - or in Case/When/Else steps for conditional branching.

This section contains the following topics:

Declaring Load Plan Variables

To declare a Load Plan variable:

  1. Open the Load Plan editor and go to the Variables tab.

  2. From the Load Plan Editor toolbar, select Add Variable. The Lookup Variable dialog is displayed.

  3. In the Lookup Variable dialog, select the variable to add to your Load Plan.

  4. The variable appears in the Variables tab of the Load Plan Editor and in the Property Inspector of each step.

Setting Variable Values in a Step

Variables in a step inherit their value from the parent step, and ultimately from the value specified for the variables when starting the Load Plan.

For each step, except for Else and When steps, you can also overwrite the variable value, and change the value used for this step and its child steps.

To override variable values at step level:

  1. Open the Load Plan editor and go to the Steps tab.

  2. In the Steps Hierarchy table, select the step for which you want to overwrite the variable value.

  3. In the Property Inspector, go to the Variables section. The variables that are defined for this Load Plan are listed in this Variables table. You can modify the following variable parameters:

    Select Overwrite if you want to specify a variable value for this step and all its children. Once you have chosen to overwrite the variable value, you can either:

    • Set a new variable value in the Value field.

    • Select Refresh to refresh this variable prior to executing the step. The Refresh option can be selected only for variables with a Select Query defined for refreshing the variable value; see the example below.
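
      For example, a variable eligible for the Refresh option might be defined with a refreshing query such as the following. This is a minimal sketch; the W_ETL_CONTROL table and its columns are hypothetical:

      -- Hypothetical Select Query defined for a LAST_LOAD_DATE variable.
      -- The query must return a single value, which becomes the variable
      -- value when the step refreshes it before executing.
      SELECT MAX(LOAD_DATE)
      FROM   W_ETL_CONTROL
      WHERE  STATUS = 'DONE'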

Handling Load Plan Exceptions and Restartability

Load Plans provide two features for handling error cases in the execution flows: Exceptions and Restartability.

Exceptions

An Exception Step contains a hierarchy of steps that is defined on the Exceptions tab of the Load Plan editor.

You can associate a given exception step to one or more steps in the Load Plan. When a step in the Load Plan errors out, the associated exception step is executed automatically.

Exceptions can be optionally raised to the parent step of the failing step. Raising an exception fails the parent step, which can consequently execute its exception step.

Restartability

When a Load Plan Run is restarted after a failure, the failed Load Plan steps are restarted depending on the Restart Type parameter. For example, you can define whether a parallel step should restart all its child steps or only those that have failed.

This section contains the following topics:

Defining Exceptions Flows

Exception steps are created and defined on the Exceptions tab of the Load Plan Editor.

This tab contains a list of Exception Steps. Each Exception Step consists of a hierarchy of Load Plan steps. The Exceptions tab is similar to the Steps tab in the Load Plan editor. The main differences are:

  • There is no root step for the Exception Step hierarchy. Each exception step is a separate root step.

  • The Serial, Parallel, Run Scenario, and Case steps have the same properties as on the Steps tab but do not have an Exception Handling properties group. An exception step that errors out cannot raise another exception step.

An Exception step can be created either with the Add Step Wizard or with the Quick Step tool by selecting Add Step > Exception Step in the Load Plan Editor toolbar. By default, the Exception step is created with the step name Exception. You can modify this name in the Steps Hierarchy table or in the Property Inspector.

To create an Exception step with the Add Step Wizard:

  1. Open the Load Plan Editor and go to the Exceptions tab.

  2. In the Load Plan Editor toolbar, select Add Step > Add Step Wizard.

  3. In the Add Step Wizard, select Exception from the Step Type list.


    Note:

    Only values that are valid for the current selection are displayed for the Step Type.


  4. Click Next.

  5. In the Step Name field, enter a name for the Exception step.

  6. Click Finish.

  7. The Exception step is added in the steps hierarchy.

You can now define the exception flow by adding new steps and organizing the hierarchy under this exception step.

Using Exception Handling

Defining exception handling for a Load Plan step consists of associating an Exception Step with this Load Plan step and defining the exception behavior. Exception steps can be set for each step except for When and Else steps.

To define exception handling for a Load Plan step:

  1. Open the Load Plan Editor and go to the Steps tab.

  2. In the Steps Hierarchy table, select the step for which you want to define an exception behavior. The Property Inspector displays the Step properties.

  3. In the Exception Handling section of the Property Inspector, set the parameters as follows:

    • Timeout (s): Enter the maximum time (in seconds) that this step can run before it is aborted by the Load Plan. When the timeout is reached, the step is marked in error and the Exception step (if defined) is executed. In this case, the exception step never times out. If needed, a timeout can be set on a parent step to safeguard against such a potentially long-running situation.

      If the step fails before the timeout and an exception step is executed, then the execution time of the step plus the execution time of the exception step should not exceed the timeout, otherwise the exception step will fail when the timeout is reached.

      Note that the default value of zero (0) indicates an infinite timeout.

    • Exception Step: From the list, select the Exception step to execute if this step fails. Note that only Exception steps that have been created and defined on the Exceptions tab of the Load Plan Editor appear in this list. See "Defining Exceptions Flows" for more information on how to create an Exception step.

    • Exception Behavior: Defines how this step behaves in case an exception is encountered. Select one of the following:

      • Run Exception and Raise: Runs the Exception Step (if any) and raises the exception to the parent step.

      • Run Exception and Ignore: Runs the Exception Step (if any) and ignores the exception. The parent step is notified of a successful run. Note that if an exception is caused by the exception step itself, the parent step is notified of the failure.

    For Parallel steps only, the following parameters may be set:

    Max Error Child Count: Displays the maximum number of child steps in error that is accepted before this step is considered in error. When the number of failed child steps exceeds this value, the parallel step is considered failed. The currently running child steps are continued or stopped depending on the Restart Type parameter for this parallel step:

    • If the Restart type is Restart from failed children, the Load Plan waits for all child sessions (these are the currently running sessions and the ones waiting to be executed) to run and complete before it raises the error to the parent step.

    • If the Restart Type is Restart all children, the Load Plan kills all running child sessions and does not start any new ones before it raises the error to the parent.

Defining the Restart Behavior

The Restart Type option defines how a step in error restarts when the Load Plan is restarted. You can define the Restart Type parameter in the Exception Handling section of the Property Inspector.

Depending on the step type, the Restart Type parameter can take the values listed in Table 15-5.

Table 15-5 Restart Type Values

Step Type    Values and Description

Serial

  • Restart all children: When the Load Plan is restarted and if this step is in error, the sequence of steps restarts from the first one.

  • Restart from failure: When the Load Plan is restarted and if this step is in error, the sequence of child steps starts from the one that has failed.

Parallel

  • Restart all children: When the Load Plan is restarted and if this step is in error, all the child steps are restarted regardless of their status. This is the default value.

  • Restart from failed children: When the Load Plan is restarted and if this step is in error, only the failed child steps are restarted in parallel.

Run Scenario

  • Restart from new session: When restarting the Load Plan and this Run Scenario step is in error, start the scenario and create a new session. This is the default value.

  • Restart from failed step: When restarting the Load Plan and this Run Scenario step is in error, restart the session from the step in error. All the tasks under this step are restarted.

  • Restart from failed task: When restarting the Load Plan and this Run Scenario step is in error, restart the session from the task in error.

The same limitations as those described in "Restarting a Session" apply to sessions restarted from a failed step or failed task.


Running Load Plans

You can run a Load Plan from Designer Navigator or Operator Navigator in ODI Studio.

To run a Load Plan in Designer Navigator or Operator Navigator:

  1. In the Load Plans and Scenarios accordion, select the Load Plan you want to execute.

  2. Right-click and select Execute.

  3. In the Start Load Plan dialog, select the execution parameters:

    • Select the Context into which the Load Plan will be executed.

    • Select the Logical Agent that will run the Load Plan.

    • In the Variables table, enter the Startup values for the variables used in this Load Plan.

  4. Click OK.

  5. The Load Plan Started dialog is displayed.

  6. Click OK.

The Load Plan execution starts: a Load Plan instance is created along with the first Load Plan run. You can review the Load Plan execution in the Operator Navigator. See Chapter 23, "Monitoring Integration Processes," for more information. See also Chapter 21, "Running Integration Processes," for more information on the other run-time operations on Load Plans.

Using Load Plans in Production

Using Load Plans in production involves the following tasks:

Running Load Plans in Production

In Production, the following tasks can be performed to execute Load Plans interactively:

Scheduling Load Plans

You can schedule the executions of your scenarios and Load Plans using the Oracle Data Integrator built-in scheduler or an external scheduler. See "Scheduling Scenarios and Load Plans" for more information.

Exporting, Importing and Versioning Load Plans

A Load Plan can be exported and then imported into a development or execution repository. This operation is used to deploy Load Plans in a different repository, possibly in a different environment or site.

The export (and import) procedure allows you to transfer Oracle Data Integrator objects from one repository to another.

Exporting Load Plans

It is possible to export a single Load Plan or several Load Plans at once.

Exporting a single Load Plan follows the standard procedure described in "Exporting one ODI Object".

For more information on exporting several Load Plans at once, see "Export Multiple ODI Objects".

Note that when you export a Load Plan and you select Export child objects, all its child steps, schedules, and variables are also exported.


Note:

The export of a Load Plan does not include the scenarios referenced by the Load Plan. Scenarios used in a Load Plan need to be exported separately. How to export scenarios is described in "Exporting Scenarios".


Importing Load Plans

Importing a Load Plan into a development repository is performed via Designer or Operator Navigator. With an execution repository, only Operator Navigator is available for this purpose.

The Load Plan import uses the standard object import method. See "Importing Objects" for more information.


Note:

The export of a Load Plan does not include the scenarios referenced by the Load Plan. Scenarios used in a Load Plan need to be imported separately.


Versioning Load Plans

Load Plans can also be deployed and promoted to production using versions and solutions. See Chapter 19, "Using Version Control," for more information.


C Oracle Warehouse Builder to Oracle Data Integrator Migration Utility Patch

This appendix provides information about the new and enhanced features that are provided in ODI 12.1.2 patch number 17053768 "Oracle Warehouse Builder to Oracle Data Integrator Migration Utility Patch".

The ODI 12.1.2 patch number 17053768 is available for search and download through My Oracle Support. To access My Oracle Support, click the following URL:

http://www.oracle.com/pls/topic/lookup?ctx=acc&id=info

This appendix includes the following sections:

New Features

This section provides information about the new features that are available in ODI 12.1.2 patch number 17053768. This section includes the following topics:

Pivot Component

A pivot component is a projector component (see: "Projector Components") that lets you transform data that is contained in multiple input rows into a single output row. The pivot component lets you extract data from a source once and produce one row from a set of source rows that are grouped by attributes in the source data. The pivot component can be placed anywhere in the data flow of a mapping.

Example: Pivoting Sales Data

Table C-1 shows a sample of data from the SALES relational table. The QUARTER attribute has 4 possible character values, one for each quarter of the year. All the sales figures are contained in one attribute, SALES.

Table C-1 SALES

YEAR   QUARTER   SALES

2010   Q1        10.5
2010   Q2        11.4
2010   Q3        9.5
2010   Q4        8.7
2011   Q1        9.5
2011   Q2        10.5
2011   Q3        10.3
2011   Q4        7.6


Table C-2 depicts data from the relational table SALES after pivoting the table. The data that was formerly contained in the QUARTER attribute (Q1, Q2, Q3, and Q4) now corresponds to 4 separate attributes (Q1_Sales, Q2_Sales, Q3_Sales, and Q4_Sales). The sales figures formerly contained in the SALES attribute are distributed across the 4 attributes for each quarter.

Table C-2 PIVOTED DATA

Year   Q1_Sales   Q2_Sales   Q3_Sales   Q4_Sales

2010   10.5       11.4       9.5        8.7
2011   9.5        10.5       10.3       7.6


The Row Locator

When you use the pivot component, multiple input rows are transformed into a single row based on the row locator. The row locator is an attribute that you must select from the source to correspond with the set of output attributes that you define. It is necessary to specify a row locator to perform the pivot operation.

In this example, the row locator is the attribute QUARTER from the SALES table, and it corresponds to the Q1_Sales, Q2_Sales, Q3_Sales, and Q4_Sales attributes in the pivoted output data.
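
In SQL terms, the pivot operation in this example is logically similar to the following query. This is an illustrative sketch only; the code that Oracle Data Integrator actually generates depends on the knowledge modules assigned in the physical design.

    -- Group the source rows by YEAR and route each quarter's sales figure
    -- into its own output attribute, based on the QUARTER row locator.
    SELECT YEAR,
           MAX(CASE WHEN QUARTER = 'Q1' THEN SALES END) AS Q1_SALES,
           MAX(CASE WHEN QUARTER = 'Q2' THEN SALES END) AS Q2_SALES,
           MAX(CASE WHEN QUARTER = 'Q3' THEN SALES END) AS Q3_SALES,
           MAX(CASE WHEN QUARTER = 'Q4' THEN SALES END) AS Q4_SALES
    FROM   SALES
    GROUP BY YEAR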

Using the Pivot Component

To use a pivot component in a mapping:

  1. Drag and drop the source datastore into the logical diagram.

  2. Drag and drop a Pivot component from the component palette into the logical diagram.

  3. From the source datastore drag and drop the appropriate attributes on the pivot component. In this example, the YEAR attribute.


    Note:

    Do not drag the row locator attribute or the attributes that contain the data values that correspond to the output attributes. In this example, QUARTER is the row locator attribute and SALES is the attribute that contains the data values (sales figures) that correspond to the Q1_Sales, Q2_Sales, Q3_Sales, and Q4_Sales output attributes.


  4. Select the pivot component. The properties of the pivot component are displayed in the Property Inspector.

  5. Enter a name and description for the pivot component.

  6. Type in the expression or use the Expression Editor to specify the row locator. In this example, since the QUARTER attribute in the SALES table is the row locator, the expression will be SALES.QUARTER.

  7. Under Row Locator Values, click the + sign to add the row locator values. In this example, the possible values for the row locator attribute QUARTER are Q1, Q2, Q3, and Q4.

  8. Under Attributes, add output attributes to correspond to each input row. If required, you can add new attributes or rename the listed attributes.

    In this example, add 4 new attributes, Q1_Sales, Q2_Sales, Q3_Sales, and Q4_Sales that will correspond to 4 input rows Q1, Q2, Q3, and Q4 respectively.

  9. If required, change the expression for each attribute to pick up the sales figures from the source and select a matching row for each attribute.

    In this example, set the expressions for each attribute to SALES.SALES and set the matching rows to Q1, Q2, Q3, and Q4 respectively.

  10. Drag and drop the target datastore into the logical diagram.

  11. Connect the pivot component to the target datastore by dragging a link from the output (right) connector of the pivot component to the input (left) connector of the target datastore.

  12. Drag and drop the appropriate attributes of the pivot component on to the target datastore. In this example, YEAR, Q1_Sales, Q2_Sales, Q3_Sales, and Q4_Sales.

  13. Go to the physical diagram and assign new KMs if you want to.

    Save and execute the mapping to perform the pivot operation.

Unpivot Component

An unpivot component is a projector component (see: "Projector Components") that lets you transform data that is contained across attributes into multiple rows.

The unpivot component does the reverse of what the pivot component does. Similar to the pivot component, an unpivot component can be placed anywhere in the flow of a mapping.

The unpivot component is specifically useful in situations when you extract data from non-relational data sources such as a flat file, which contains data across attributes rather than rows.

Example: Unpivoting Sales Data

The external table, QUARTERLY_SALES_DATA, shown in Table C-3, contains data from a flat file. There is a row for each year and separate attributes for sales in each quarter.

Table C-3 QUARTERLY_SALES_DATA

Year   Q1_Sales   Q2_Sales   Q3_Sales   Q4_Sales

2010   10.5       11.4       9.5        8.7
2011   9.5        10.5       10.3       7.6


Table C-4 shows a sample of the data after an unpivot operation is performed. The data that was formerly contained across multiple attributes (Q1_Sales, Q2_Sales, Q3_Sales, and Q4_Sales) is now contained in a single attribute (SALES). The unpivot component breaks the data in a single attribute (Q1_Sales) into two attributes (QUARTER and SALES). A single row in QUARTERLY_SALES_DATA corresponds to 4 rows (one for sales in each quarter) in the unpivoted data.

Table C-4 UNPIVOTED DATA

YEAR   QUARTER   SALES

2010   Q1        10.5
2010   Q2        11.4
2010   Q3        9.5
2010   Q4        8.7
2011   Q1        9.5
2011   Q2        10.5
2011   Q3        10.3
2011   Q4        7.6


The Row Locator

The row locator is an output attribute that corresponds to the repeated set of data from the source. The unpivot component transforms a single input attribute into multiple rows and generates values for the row locator. The other attributes that correspond to the data from the source are referred to as value locators. In this example, the attribute QUARTER is the row locator and the attribute SALES is the value locator.
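
In SQL terms, the unpivot operation in this example is logically similar to the following query. This is an illustrative sketch only; the generated code depends on the knowledge modules used.

    -- Each SELECT produces one output row per input row: the constant
    -- supplies the row locator value (QUARTER) and the source attribute
    -- supplies the value locator (SALES).
    SELECT YEAR, 'Q1' AS QUARTER, Q1_SALES AS SALES FROM QUARTERLY_SALES_DATA
    UNION ALL
    SELECT YEAR, 'Q2' AS QUARTER, Q2_SALES AS SALES FROM QUARTERLY_SALES_DATA
    UNION ALL
    SELECT YEAR, 'Q3' AS QUARTER, Q3_SALES AS SALES FROM QUARTERLY_SALES_DATA
    UNION ALL
    SELECT YEAR, 'Q4' AS QUARTER, Q4_SALES AS SALES FROM QUARTERLY_SALES_DATA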


Note:

To use the unpivot component, you are required to create the row locator and the value locator attributes for the unpivot component.


Using the Unpivot Component

To use an unpivot component in a mapping:

  1. Drag and drop the source datastore into the logical diagram.

  2. Drag and drop an Unpivot component from the component palette into the logical diagram.

  3. From the source datastore drag and drop the appropriate attributes on the unpivot component. In this example, the YEAR attribute.


    Note:

    Do not drag the attributes that contain the data that corresponds to the value locator. In this example, Q1_Sales, Q2_Sales, Q3_Sales, and Q4_Sales.


  4. Select the unpivot component. The properties of the unpivot component are displayed in the Property Inspector.

  5. Enter a name and description for the unpivot component.

  6. Create the row locator and value locator attributes using the Attribute Editor. In this example, you need to create two attributes named QUARTER and SALES.


    Note:

    Do not forget to define the appropriate data types and constraints (if required) for the attributes.


  7. In the Property Inspector, under UNPIVOT, select the row locator attribute from the Row Locator drop-down list. In this example, QUARTER.

    Now that the row locator is selected, the other attributes can act as value locators. In this example, SALES.

  8. Under UNPIVOT TRANSFORMS, click + to add transform rules for each output attribute. Edit the default values of the transform rules and specify the appropriate expressions to create the required logic.

    In this example, you need to add 4 transform rules, one for each quarter. The transform rules define the values that will be populated in the row locator attribute QUARTER and the value locator attribute SALES. The QUARTER attribute must be populated with constant values (Q1, Q2, Q3, and Q4), while the SALES attribute must be populated with the values from source datastore attributes (Q1_Sales, Q2_Sales, Q3_Sales, and Q4_Sales).

  9. Leave the INCLUDE NULLS check box selected to generate rows with no data for the attributes that are defined as NULL.

  10. Drag and drop the target datastore into the logical diagram.

  11. Connect the unpivot component to the target datastore by dragging a link from the output (right) connector of the unpivot component to the input (left) connector of the target datastore.

  12. Drag and drop the appropriate attributes of the unpivot component on to the target datastore. In this example, YEAR, QUARTER, and SALES.

  13. Go to the physical diagram and assign new KMs if you want to.

  14. Click Save and then execute the mapping to perform the unpivot operation.

Table Function Component

A table function component is a projector component (see: "Projector Components") that represents a table function in a mapping. Table function components enable you to manipulate a set of input rows and return another set of output rows of the same or different cardinality. The set of output rows can be queried like a physical table. A table function component can be placed anywhere in a mapping, as a source, a target, or a data flow component.

A table function component can have multiple input connector points and one output connector point. The input connector point attributes act as the input parameters for the table function, while the output connector point attributes are used to store the return values.

For each input connector, you can define the parameter type, REF_CURSOR or SCALAR, depending on the type of attributes the input connector point will hold.
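
For example, an Oracle table function with a REF_CURSOR parameter can be queried like a table, which is the pattern this component represents. In the following sketch, the SALES_PKG.AGGREGATE_SALES function and its columns are hypothetical:

    -- The CURSOR expression corresponds to a REF_CURSOR input connector
    -- point; the attributes selected from TABLE(...) correspond to the
    -- output connector point attributes.
    SELECT t.YEAR, t.TOTAL_SALES
    FROM   TABLE(
             SALES_PKG.AGGREGATE_SALES(
               CURSOR(SELECT YEAR, SALES FROM SALES)
             )
           ) t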

To use a table function component in a mapping:

  1. Create a table function in the database if it does not exist.

  2. Right-click the Mappings node and select New Mapping.

  3. Drag and drop the source datastore into the logical diagram.

  4. Drag and drop a table function component from the component palette into the logical diagram. A table function component is created with no input connector points and one default output connector point.

  5. Click the table function component. The properties of the table function component are displayed in the Property Inspector.

  6. In the property inspector, go to the Attributes tab.

  7. Type the name of the table function in the Name field. If the table function is in a different schema, type the function name as SCHEMA_NAME.FUNCTION_NAME.

  8. Go to the Connector Points tab and click the + sign to add new input connector points. Do not forget to set the appropriate parameter type for each input connector.


    Note:

    Each REF_CURSOR attribute must be held by a separate input connector point with its parameter type set to REF_CURSOR. Multiple SCALAR attributes can be held by a single input connector point with its parameter type set to SCALAR.


  9. Go to the Attributes tab and add attributes for the input connector points (created in previous step) and the output connector point. The input connector point attributes act as the input parameters for the table function, while the output connector point attributes are used to store the return values.

  10. Drag and drop the required attributes from the source datastore on the appropriate attributes for the input connector points of the table function component. A connection between the source datastore and the table function component is created.

  11. Drag and drop the target datastore into the logical diagram.

  12. Drag and drop the output attributes of the table function component on the attributes of the target datastore.

  13. Go to the physical diagram of the mapping and ensure that the table function component is in the correct execution unit. If it is not, move the table function to the correct execution unit.

  14. Assign new KMs if you want to.

  15. Save and then execute the mapping.

Subquery Filter Component

A subquery filter component is a projector component (see: "Projector Components") that lets you filter rows based on the results of a subquery. The conditions that you can use to filter rows are EXISTS, NOT EXISTS, IN, and NOT IN.

For example, the EMP datastore contains employee data and the DEPT datastore contains department data. You can use a subquery to fetch a set of records from the DEPT datastore and then filter rows from the EMP datastore by using one of the subquery conditions.

A subquery filter component has two input connector points and one output connector point. The two input connector points are the Driver Input connector point and the Subquery Filter Input connector point. The Driver Input connector point is where the main datastore is set; this datastore drives the whole query. The Subquery Filter Input connector point is where the datastore that is used in the subquery is set. In the example, EMP is set on the Driver Input connector point and DEPT on the Subquery Filter Input connector point.
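
In SQL terms, the filtering described in this example resembles the following query. This is a sketch that assumes the classic EMP and DEPT columns (DEPTNO, LOC):

    -- EMP drives the query; DEPT supplies the subquery. Only employees
    -- belonging to a department that satisfies the subquery are kept.
    SELECT e.*
    FROM   EMP e
    WHERE  EXISTS (SELECT 1
                   FROM   DEPT d
                   WHERE  d.DEPTNO = e.DEPTNO
                   AND    d.LOC = 'NEW YORK')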

To filter rows using a subquery filter component:

  1. Drag and drop a subquery filter component from the component palette into the logical diagram.

  2. Connect the subquery filter component with the source datastores and the target datastore.

  3. Drag and drop the input attributes from the source datastores on the subquery filter component.

  4. Drag and drop the output attributes of the subquery filter component on the target datastore.

  5. Go to the Connector Points tab and select the input datastores for the driver input connector point and the subquery filter input connector point.

  6. Click the subquery filter component. The properties of the subquery filter component are displayed in the Property Inspector.

  7. Go to the Attributes tab. The output connector point attributes are listed. Set the expressions for the driver input connector point and the subquery filter connector point.


    Note:

    You are required to set an expression for the subquery filter input connector point only if the subquery filter input role is set to one of the following:

    IN, NOT IN, =, >, <, >=, <=, !=, <>, ^=


  8. Go to the Condition tab.

  9. Type an expression in the Subquery Filter Condition field. It is necessary to specify a subquery filter condition if the subquery filter input role is set to EXISTS or NOT EXISTS.

  10. Select a subquery filter input role from the Subquery Filter Input Role drop-down list.

  11. Select a group comparison condition from the Group Comparison Condition drop-down list. A group comparison condition can be used only with the following subquery input roles:

    =, >, <, >=, <=, !=, <>, ^=

  12. Save and then execute the mapping.

Enhanced Features

This section provides information about the existing features that have been enhanced in ODI 12.1.2 patch number 17053768. This section includes the following topics:

Lookup Component Enhancements

This section provides information about the enhancements made to the Lookup component. Two new properties, Multiple Match Rows and No-Match Rows, have been added to the Lookup properties. These properties appear under the Match Row Rules section of the Lookup properties page.

Multiple Match Rows

The Lookup Type property has been replaced with Multiple Match Rows.

The Multiple Match Rows property defines which row from the lookup result must be selected as the lookup result if the lookup returns multiple results. Multiple rows are returned when the lookup condition specified matches multiple records.

You can select one of the following options to specify the action to perform when multiple rows are returned by the lookup operation:

  • Error: multiple rows cause mapping to fail

    This option indicates that when the lookup operation returns multiple rows, the mapping execution fails.

  • All Rows (number of result rows may differ from the number of input rows)

    This option indicates that when the lookup operation returns multiple rows, all the rows should be returned as the lookup result.

  • Select any single row

    This option indicates that when the lookup operation returns multiple rows, any one row from the returned rows must be selected as the lookup result.

  • Select first single row

    This option indicates that when the lookup operation returns multiple rows, the first row from the returned rows must be selected as the lookup result.

  • Select nth single row

    This option indicates that when the lookup operation returns multiple rows, the nth row from the result rows must be selected as the lookup result. When you select this option, the Nth Row Number field appears, where you can specify the value of n.

Lookup Attributes Order:

Use the Lookup Attributes Default Value & Order By table to specify how the result set that contains multiple rows should be ordered. Ensure that the attributes are listed in the same order (from top to bottom) in which you want the result set to be ordered. For example, to implement an ordering such as ORDER BY attr2, attr3, and then attr1, the attributes should be listed in the same order. You can use the arrows to change the position of the attributes to specify the order.
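
As an illustration, the logic implied by Select nth single row combined with this ordering can be expressed in SQL as follows. This is a sketch only; the CUSTOMER_LOOKUP table and its attributes are placeholders, and the actual generated code depends on the knowledge modules used.

    -- Pick the 2nd matching row per lookup key, ordered by ATTR2, ATTR3,
    -- and then ATTR1, mirroring the order of the attributes in the table.
    SELECT CUST_ID, ATTR1, ATTR2, ATTR3
    FROM  (SELECT l.*,
                  ROW_NUMBER() OVER (PARTITION BY l.CUST_ID
                                     ORDER BY l.ATTR2, l.ATTR3, l.ATTR1) AS RN
           FROM   CUSTOMER_LOOKUP l)
    WHERE  RN = 2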

No-Match Rows

The No-Match Rows property indicates the action to be performed when there are no rows that satisfy the lookup condition. You can select one of the following options to specify the action to perform when no rows are returned by the lookup operation:

  • Return no row

    This option does not return any row when no row in the lookup results satisfies the lookup condition.

  • Return a row with the following default values

    This option returns a row that contains default values when no row in the lookup results satisfies the lookup condition. Use the Lookup Attributes Default Value & Order By table below this option to specify the default values for each lookup attribute, as in the sketch below.
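
    In SQL terms, this behavior resembles an outer join with default substitution, as in the following sketch (the datastores, attributes, and default values are hypothetical):

    -- Driver rows with no lookup match are kept by the outer join, and
    -- the configured defaults replace the missing lookup attributes.
    SELECT s.ORDER_ID,
           NVL(l.CUST_NAME, 'UNKNOWN') AS CUST_NAME,  -- default value
           NVL(l.REGION, 'N/A')        AS REGION      -- default value
    FROM   ORDERS s
    LEFT OUTER JOIN CUSTOMER_LOOKUP l
           ON l.CUST_ID = s.CUST_ID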

Sequence Enhancements

Sequences in Oracle Data Integrator are enhanced to support the CURRVAL operator. The expression editor now displays the NEXTVAL and CURRVAL operators for each sequence that is listed in the ODI objects panel, as shown in Figure C-1.
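
At the database level, the semantics of the two operators are those of standard Oracle sequence pseudocolumns, as in the following sketch (MY_SEQ is a placeholder name):

    -- NEXTVAL increments the sequence and returns the new value.
    SELECT MY_SEQ.NEXTVAL FROM DUAL;
    -- CURRVAL returns the value most recently obtained by NEXTVAL in the
    -- current session, without incrementing the sequence.
    SELECT MY_SEQ.CURRVAL FROM DUAL;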

Figure C-1 Expression editor enhancement: SEQUENCE



Preface

This manual describes how to develop data integration projects using Oracle Data Integrator.

This preface contains the following topics:

Audience

This document is intended for developers and administrators who want to use Oracle Data Integrator (ODI) as a development tool for their integration processes. This guide explains how to work with the ODI graphical user interface, primarily ODI Studio and ODI Console. It guides you through common tasks and examples of development, as well as conceptual and background information on the features and functionalities of ODI.

Documentation Accessibility

For information about Oracle's commitment to accessibility, visit the Oracle Accessibility Program website at http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc.

Access to Oracle Support

Oracle customers have access to electronic support through My Oracle Support. For information, visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=info or visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=trs if you are hearing impaired.

Related Documents

For more information, see the following Oracle resources:

Conventions

The following text conventions are used in this document:

Convention    Meaning

boldface

Boldface type indicates graphical user interface elements associated with an action, or terms defined in text or the glossary.

italic

Italic type indicates book titles, emphasis, or placeholder variables for which you supply particular values.

monospace

Monospace type indicates commands within a paragraph, URLs, code in examples, text that appears on the screen, or text that you enter.



Part VII

Managing Security Settings

This part describes how to manage the security settings in Oracle Data Integrator.

This part contains the following chapters:


16 Using Web Services

This chapter describes how to work with Web services in Oracle Data Integrator.

This chapter includes the following sections:

Introduction to Web Services in Oracle Data Integrator

Oracle Data Integrator provides the following entry points into a service-oriented architecture (SOA):

  • Data services

  • Oracle Data Integrator run-time services

  • Invoking third-party Web services

Figure 16-1 shows an overview of how the different types of Web services can interact.

Figure 16-1 Web Services in Action


Figure 16-1 shows a simple example with the Data Services, Run-Time Web services (Public Web service and Agent Web service) and the OdiInvokeWebService tool.

The Data Services and Run-Time Web services components are invoked by a third-party application, whereas the OdiInvokeWebService tool invokes a third-party Web service:

  • Data Services provide access to data in datastores (both source and target datastores), as well as to changes trapped by the Changed Data Capture framework. These Web services are generated by Oracle Data Integrator and deployed in a Java EE application server.

  • The Agent Web service commands the Oracle Data Integrator Agent to: start and monitor a scenario; restart a session; get the ODI version; start, stop or restart a load plan; or refresh agent configuration. Note that this Web service is built in the Java EE or Standalone Agent.

  • The OdiInvokeWebService tool is used in a package and invokes a specific operation on a port of the third-party Web service, for example to trigger a BPEL process.

Oracle Data Integrator Run-Time Services and Data Services

Oracle Data Integrator Run-Time Web services and Data Services are two different types of Web services:

  • Oracle Data Integrator Run-Time Services (Agent Web service) are Web services that enable users to leverage Oracle Data Integrator features in a service-oriented architecture (SOA).

    Oracle Data Integrator Run-Time Web services enable you to access the Oracle Data Integrator features through Web services. These Web services are invoked by a third-party application and manage execution of runtime artifacts developed with Oracle Data Integrator.

    How to perform the different ODI execution tasks with the ODI Run-Time Services, such as executing a scenario and restarting a session, is detailed in "Managing Executions Using Web Services". That topic also provides examples of SOAP requests and responses.

  • Data Services are specialized Web services that provide access to data in datastores, and to changes captured for these datastores using Changed Data Capture. Data Services are generated by Oracle Data Integrator to give you access to your data through Web services. These Web services are deployed to a Web services container in an application server.

    For more information on how to set up, generate and deploy Data Services refer to Chapter 8, "Creating and Using Data Services."

Invoking Third-Party Web Services

This section describes how to invoke third-party Web services in Oracle Data Integrator.

This section includes the following topics:

Introduction to Web Service Invocation

You can invoke Web services:

  • In Oracle Data Integrator packages or procedures using the HTTP Analyzer tool. This tool allows you to invoke any third-party Web service, and save the response in a SOAP file that can be processed with Oracle Data Integrator.

    You can use the results to debug a locally or remotely deployed Web service.

  • For testing Data Services. The easiest way to test whether your generated data services are running correctly is to use the HTTP Analyzer tool.

  • In Oracle Data Integrator packages or procedures using the OdiInvokeWebService tool. This tool allows you to invoke any third-party Web service, and save the response in an XML file that can be processed with Oracle Data Integrator.

The three approaches are described in the sections that follow.

Using HTTP Analyzer

The HTTP Analyzer allows you to monitor request/response traffic between a Web service client and the service. The HTTP Analyzer helps you to debug your Web service in terms of the HTTP traffic sent and received.

When you run the HTTP Analyzer, there are a number of windows that provide information for you.

The HTTP Analyzer enables you to:

  • Observe the exact content of the request and response TCP packets of your Web service.

  • Edit a request packet, re-send the packet, and see the contents of the response packet.

  • Test Web services that are secured using policies; encrypted messages will be decrypted.

This section describes the following topics:

Using HTTP Analyzer: Main Steps

To examine the packets sent between a client and a Web service:


Note:

In order to use the HTTP Analyzer, you may need to update the proxy settings.


  1. Create and run the Web service.

  2. Start the HTTP Analyzer by selecting Tools > HTTP Analyzer.

    You can also start it from the HTTP Analyzer button on the General tab of the OdiInvokeWebService tool step.

    It opens in its own window.

  3. Click the Create New Soap Request button in the HTTP Analyzer Log window.

  4. Enter the URL of a Web service, or open the WSDL for a Web service, to get started.

  5. Run the client proxy to the Web service. The request/response packet pairs are listed in the HTTP Analyzer Test window.

    The test window allows you to examine the headers and parameters of a message. You can test the service by entering an appropriate parameter and clicking Send Request.

  6. You can examine the contents of the HTTP headers of the request and response packets to see the SOAP structure (for JAX-WS Web services), the HTTP content, the Hex content or the raw message contents by choosing the appropriate tab at the bottom of the HTTP Analyzer Test window.


    Note:

    The WADL structure (for RESTful services) is not supported by Oracle Data Integrator.


  7. You can test Web services that are secured using policies by performing one of the following tasks:

    • Select an existing credential from the Credentials list.

      Oracle Data Integrator is delivered with a preconfigured credential, HTTPS Credential.

    • Click New to create a new credential. In the Credential dialog, define the credentials to use in the HTTP Analyzer Test window.

What Happens When You Run the HTTP Analyzer

When you start the HTTP Analyzer and test a Web service, the Web service sends its traffic via the HTTP Analyzer, using the proxy settings in the HTTP Analyzer Preferences dialog.

By default, the HTTP Analyzer uses a single proxy on an analyzer instance (the default is 8099), but you can add additional proxies of your own if you need to.

Each analyzer instance can have a set of rules to determine behavior, for example, to redirect requests to a different host/URL, or to emulate a Web service.

How to Specify HTTP Analyzer Settings

By default, the HTTP Analyzer uses a single proxy on an analyzer instance (the default is 8099), but you can add additional proxies of your own if you need to.

To set HTTP Analyzer preferences:

  1. Open the HTTP Analyzer preferences dialog by doing one of the following:

    • Click the Analyzer Preferences button in the HTTP Analyzer Instances window or Log window.

    • Choose Tools > Preferences to open the Preferences dialog, and navigate to the HTTP Analyzer page.

    For more information at any time, press F1 or click Help from the HTTP Analyzer preferences dialog.

  2. Make the changes you want to the HTTP Analyzer instance. For example, to use a different host and port number, open the Proxy Settings dialog by clicking Configure Proxy.

How to Use the Log Window

When you open the HTTP Analyzer from the Tools menu, the HTTP Analyzer Log window opens, illustrated in Figure 16-2.

Figure 16-2 HTTP Analyzer Log Screen

Description of Figure 16-2 follows

When HTTP Analyzer runs, it outputs request/response messages to the HTTP Analyzer log window. You can group and reorder the messages:

  • To reorder the messages, select the Sequence tab, then sort using the column headers (click on the header to sort, double-click to secondary sort).

  • To group messages, click the Correlation tab.

  • To change the order of columns, grab the column header and drag it to its new position.

Table 16-1 HTTP Analyzer Log Window Toolbar Icons

Name    Function

Analyzer Preferences

Click to open the HTTP Analyzer Preferences dialog where you can specify a new listener port, or change the default proxy. An alternative way to open this dialog is to choose Tools > Preferences, and then navigate to the HTTP Analyzer page.

Create New Request

Click to open the HTTP Analyzer Test window, where you enter payload details, and edit and resend messages.

Start HTTP Analyzer

Click to start the HTTP Analyzer running. The monitor runs in the background, and only stops when you click Stop or exit JDeveloper. If you have more than one listener defined, clicking this button starts them all. To start just one listener, click the down arrow and select the listener to start.

Stop HTTP Analyzer

Click to stop the HTTP Analyzer running. If you have more than one listener running, clicking this button stops them all. To stop just one listener, click the down arrow and select the listener to stop.

Send Request

Click to resend a request when you have changed the content of a request. The changed request is sent and you can see any changes in the response that is returned.

Open WS-I Log File

Click to open the Select WS-I Log File to Upload dialog, where you can navigate to an existing WS-I log file.

Save Packet Data

Click to save the contents of the HTTP Analyzer Log window to a file.

WS-I Analyze

This tool does not apply to Oracle Data Integrator.

Select All

Click to select all the entries in the HTTP Analyzer Log window.

Deselect All

Click to deselect all the entries in the HTTP Analyzer.

Clear Selected History (Delete)

Click to clear the entries in the HTTP Analyzer.


How to Use the Test Window

An empty HTTP Analyzer test window appears when you click the Create New Soap Request button in the HTTP Analyzer Log window.

Enter the URL of a Web service, or open the WSDL for a Web service, and then click Send Request. The results of the request are displayed in the test window, as shown in Figure 16-3.

Figure 16-3 HTTP Analyzer Test Window


You can examine the contents of the HTTP headers of the request and response packets to see the SOAP structure, the HTTP content, the Hex content or the raw message contents by choosing the appropriate tab at the bottom of the HTTP Analyzer Test window.

The test window allows you to examine the headers and parameters of a message. You can test the service by entering an appropriate parameter and clicking Send Request.

The tabs along the bottom of the test window allow you choose how you see the content of the message. You can choose to see the message as:

  • The SOAP structure, illustrated in Figure 16-3.

  • The HTTP code, for example:

    <?xml version = '1.0' encoding = 'UTF-8'?>
    <env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ns1="http://www.example.com/wls">
       <env:Header/>
       <env:Body>
          <ns1:sayHello>
             <arg0/>
          </ns1:sayHello>
       </env:Body>
    </env:Envelope>
    
  • The hex content of the message, for example:

    [000..015] 3C 65 65 20 78 6D ...   <env:Envelope xm
    [016..031] 6C 6E 70 3A 2F 2F ...   lns:env="http://
    [032..047] 73 63 6F 61 70 2E ...   schemas.xmlsoap.
    [048..063] 6F 72 65 6C 6F 70 ...   org/soap/envelop
    [064..079] 65 2F 22 20 78 6D ...   e/" xmlns:ns1="h
    [080..095] 74 74 70 3A 2F 2F ...   ttp://www.bea.co
    [096..111] 6D 2F 77 6C 73 22 ...   m/wls"><env:Head
    [112..127] 65 72 2F 3E 3C 65 ...   er/><env:Body><n
    [128..143] 73 31 3A 73 61 79 ...   s1:sayHello><arg
    [144..159] 30 3E 3C 2F 61 72 ...   0></arg0></ns1:s
    [160..175] 61 79 48 65 6C 6C ...   ayHello></env:Bo
    [176..191] 64 79 3E 3C 2F 65 ...   dy></env:Envelop
    [192..193] 65 3E             ...   e>
    
  • The raw message, for example:

    POST http://localhost:7001/MySimpleEjb/MySimpleEjbService HTTP/1.1
    Content-Type: text/xml; charset=UTF-8
    SOAPAction: ""
    Host: localhost:7001
    Content-Length: 194
    X-HTTPAnalyzer-Rules: 3@localhost:8099
     
    <?xml version = '1.0' encoding = 'UTF-8'?>
    <env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ns1="http://www.example.com/wls">
       <env:Header/>
       <env:Body>
          <ns1:sayHello>
             <arg0/>
          </ns1:sayHello>
       </env:Body>
    </env:Envelope>
    
    

How to Use the Instances Window

When you open the HTTP Analyzer from the Tools menu, the HTTP Analyzer tab appears by default.

Click the HTTP Analyzer Instances tab. The HTTP Analyzer Instances window appears, as shown in Figure 16-4.

This window provides information about the instances of the HTTP Analyzer that are currently running, or that were running and have been stopped. The instance is identified by the host and port, and any rules are identified. You can start and stop the instance from this window.

Figure 16-4 HTTP Analyzer Instances Window


You create a new instance in the HTTP Analyzer dialog, which opens when you click the Create New Soap Request button.

Table 16-2 HTTP Analyzer Instances Window Toolbar Icons

Name    Function

Analyzer Preferences

Click to open the HTTP Analyzer dialog where you can specify a new listener port, or change the default proxy.

Create New Request

Click to open a new instance of the HTTP Analyzer Test window, where you enter payload details, and edit and resend messages.

Start HTTP Analyzer

Click to start the HTTP Analyzer running. The monitor runs in the background, and only stops when you click Stop or exit JDeveloper. If you have more than one listener defined, clicking this button starts them all. To start just one listener, click the down arrow and select the listener to start.

Stop HTTP Analyzer

Click to stop the HTTP Analyzer running. If you have more than one listener running, clicking this button stops them all. To stop just one listener, click the down arrow and select the listener to stop.


How to Use Multiple Instances

You can have more than one instance of HTTP Analyzer running. Each will use a different host and port combination, and you can see a summary of them in the HTTP Analyzer Instances window.

To add an additional HTTP Analyzer Instance:

  1. Open the HTTP Analyzer preferences dialog by doing one of the following:

    • Click the Analyzer Preferences button in the HTTP Analyzer Instances window or Log window.

    • Choose Tools > Preferences to open the Preferences dialog, and navigate to the HTTP Analyzer page.

    For more information at any time, press F1 or click Help from the HTTP Analyzer preferences dialog.

  2. To create a new HTTP Analyzer instance, that is, a new listener, click Add. The new listener is listed and selected by default for you to change any of the values.

Using Credentials With HTTP Analyzer

You can use the HTTP Analyzer to test Web services that are secured using policies. You choose the credentials to use in the HTTP Analyzer Test window.

HTTP Analyzer supports the following credentials for this purpose:

  • HTTPS. The message is encrypted prior to transmission using a public key certificate that is signed by a trusted certificate authority. The message is decrypted on arrival.

  • Username token. This token does not apply to Oracle Data Integrator.

    This is a way of carrying basic authentication information using a token based on username/password.

  • X509. This token does not apply to Oracle Data Integrator.

    This is a PKI standard for single sign-on authentication, where certificates are used to provide identity, and to sign and encrypt messages.

  • STS. This token does not apply to Oracle Data Integrator.

    Security Token Service (STS) is a Web service which issues and manages security tokens.

Using SSL With HTTP Analyzer

You can use the HTTP Analyzer with secured services or applications, for example, Web services secured by policies. Oracle Data Integrator includes a credential, HTTPS Credential, for this purpose.

Once you have configured the credentials, you can choose which to use in the HTTP Analyzer Test window.

HTTPS encrypts an HTTP message prior to transmission and decrypts it upon arrival. It uses a public key certificate signed by a trusted certificate authority. When the integrated application server is first started, it generates a DemoIdentity that is unique, and the key in it is used to set up the HTTPS channel.

For more information about keystores and keystore providers, see Understanding Security for Oracle WebLogic Server. When the default credential HTTPS Credential is selected, you need to specify the keystores that the HTTP Analyzer should use when handling HTTPS traffic.

Two keystores are required to run the HTTP Analyzer:

  • The "Client Trusted Certificate Keystore," containing the certificates of all the hosts to be trusted by the Analyzer (client trust) when it makes onward connections. The server's certificate must be in this keystore.

    The "Client Keystore" is required only when mutual authentication is required.

  • The "Server Keystore," containing a key that the Analyzer can use to authenticate itself to calling clients.

To configure the HTTP Analyzer to use different HTTPS values:

  1. From the main menu, choose Tools > Preferences.

  2. In the Preferences dialog, select the Credentials node. For more information, press F1 or click Help from within the dialog page.

  3. Enter the new keystore and certificate details you want to use.

How to Debug Web Pages Using the HTTP Analyzer

You can use the HTTP Analyzer when you are debugging Web pages, such as HTML, JSP, or JSF pages. This allows you to directly examine the traffic that is sent back and forth to the browser.

To debug Web pages using the HTTP Analyzer:

  1. Configure a browser to route messages through the HTTP Analyzer so that you can see the traffic between the web browser and client.

  2. Start the HTTP Analyzer running.

  3. Run the class, application, or Web page that you want to analyze in the usual way.

    Each request and response packet is listed in the HTTP Analyzer Log window, and detailed in the HTTP Analyzer Test Window.

How to Use Rules to Determine Behavior

You can set rules so that the HTTP Analyzer runs using behavior determined by those rules. You can set more than one rule in an HTTP Analyzer instance. If a service's URL matches a rule, the rule is applied. If not, the next rule in the list is checked. If the service does not match any of the rules, the client returns an error. For this reason, you should always use a Pass Through rule with a blank filter (which just passes the request through) as the last rule in the list to catch any messages not caught by the preceding rules.

The types of rule available are:

  • Pass Through Rule

  • Forward Rule

  • URL Substitution Rule

  • Tape Rule

Using the Pass Through Rule

The Pass Through rule simply passes a request on to the service if the URL filter matches. When you first open the Rule Settings dialog, two Pass Through rules are defined:

  • The first has a URL filter of http://localhost:631 to ignore print service requests.

  • The second has a blank URL filter, which just passes the request to the original service. This rule should normally be moved to the end of the list if new rules are added.

Using the Forward Rule

The Forward rule is used to intercept all URLs matched by the filter and forward the request on to a single URL.

Using the URL Substitution Rule

The URL Substitution rule allows you to re-host services by replacing parts of URL ranges. For example, you can replace the machine name when moving between the integrated application server and Oracle WebLogic Server.

Using the Tape Rule

The tape rule allows you to run the HTTP Analyzer in simulator mode, where a standard WS-I log file is the input to the rule. When you set up a tape rule, there are powerful options that you can use:

  • Loop Tape, which allows you to run the tape again and again.

  • Skip to matching URL and method, which only returns if it finds a matching URL and HTTP request method. This means that you can have a WSDL and an endpoint request in the same tape rule.

  • Correct header date and Correct Content Size, which allow you to change the header date and content size of the message to current values so that the request does not fail.

An example of using a tape rule would be to test a Web service client developed to run against an external Web service.

To test a Web service client developed to run against an external Web service:

  1. Create the client to the external Web service.

  2. Run the client against the Web service with the HTTP Analyzer running, and save the results as a WS-I log file.

    You can edit the WS-I file to change the values returned to the client.

  3. In the HTTP Analyzer page of the Preferences dialog, create a tape rule.

    Ensure that it is above the blank Pass Through rule in the list of rules.

  4. In the Rule Settings dialog, use the path of the WS-I file as the Tape path in the Rule Settings dialog.

    When you rerun the client, it runs against the entries in the WS-I file instead of against the external Web service.

    There are other options that allow you to:

    • Correct the time and size of the entries in the WS-I log file so the message returned to the client is correct.

    • Loop the tape so that it runs more than once.

    • Skip to a matching URL and HTTP request method, so that you can have a WSDL and an endpoint request in the same tape rule.


Note:

Tape Rules will not work with SOAP messages that use credentials or headers with expiry dates in them.


How to Set Rules

Each HTTP Analyzer instance can have a set of rules that determine its behavior, for example, to redirect requests to a different host or URL, or to emulate a Web service.

To set rules for an HTTP Analyzer instance:

  1. Open the HTTP Analyzer by choosing Tools > HTTP Analyzer. The HTTP Analyzer docked window opens.

    Alternatively, the HTTP Analyzer automatically opens when you choose Test Web Service from the context menu of a Web service container in the Applications window.

  2. Click the Analyzer Preferences button to open the HTTP Analyzer preferences dialog, in which you can specify a new listener port, or change the default proxy.

    Alternatively, choose Tools > Preferences, and then navigate to the HTTP Analyzer page.

  3. Click Configure Rules to open the Rule Settings dialog in which you define rules to determine the actions the HTTP Analyzer should take. For more help at any time, press F1 or click Help in the Rule Settings dialog.

  4. In the Rule Settings dialog, enter the URL of the reference service you want to test against as the Reference URL. This will help you when you start creating rules, as you will be able to see if and how the rule will be applied.

  5. Define one or more rules for the service to run the client against. To add a new rule, click the down arrow next to Add, and choose the type of rule from the list. The fields in the dialog depend on the type of rule that is currently selected.

  6. The rules are applied in order from top to bottom. Reorder them using the up and down reorder buttons. It is important that the last rule is a blank Pass Through rule.

Reference: Troubleshooting the HTTP Analyzer

This section contains information to help resolve problems that you may have when running the HTTP Analyzer.

Running the HTTP Analyzer While Another Application is Running

If you have an application waiting for a response, do not start or stop the HTTP Analyzer. Terminate the application before starting or stopping the HTTP Analyzer.

The HTTP Analyzer can use one or more different sets of proxy settings. These settings are specific to the IDE only. If enabled, Oracle Data Integrator uses these settings to access the Internet through your organization's proxy server. If you do not enable the proxy server setting, your Web application may not be able to access the Internet. Proxy server settings are visible in the preferences settings for your machine's default browser.

These analyzer-specific proxy settings override the HTTP Proxy Server settings while the HTTP Analyzer is running.

Changing Proxy Settings

When you use the HTTP Analyzer, you may need to change the proxy settings in Oracle Data Integrator. For example:

  • If you are testing an external service and your machine is behind a firewall, ensure that Oracle Data Integrator is using the HTTP proxy server.

  • If you are testing a service in the integrated application server, for example when you choose Test Web Service from the context menu of a Web service in the Applications window, ensure that Oracle Data Integrator is not using the HTTP proxy server.

If you run the HTTP Analyzer, and see the message

500 Server Error
The following error occurred: [code=CANT_CONNECT_LOOPBACK] Cannot connect due to potential loopback problems

you probably need to add localhost|127.0.0.1 to the proxy exclusion list.

To set the HTTP proxy server and edit the exception list:

  1. Choose Tools > Preferences, and select Web Browser/Proxy.

  2. Ensure that Use HTTP Proxy Server is selected or deselected as appropriate.

  3. Add any appropriate values to the Exceptions list, using | as the separator.

    In order for Java to use localhost as the proxy, ~localhost must be in the Exceptions list, even if it is the only entry.
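For comparison, the same exclusion expressed at the JVM level uses the http.nonProxyHosts system property, which takes the same |-separated list. The following sketch uses placeholder proxy host and port values:

// Sketch of the equivalent JVM-level proxy settings; the proxy host and
// port are placeholders. Requests to hosts in http.nonProxyHosts bypass
// the proxy, avoiding the CANT_CONNECT_LOOPBACK error for local services.
public class ProxyExclusionSketch {
    public static void main(String[] args) {
        System.setProperty("http.proxyHost", "proxy.example.com");
        System.setProperty("http.proxyPort", "80");
        System.setProperty("http.nonProxyHosts", "localhost|127.0.0.1");
    }
}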

Using the OdiInvokeWebService Tool

The OdiInvokeWebService tool invokes a Web service using the HTTP or HTTPS protocol and can write the returned response to an XML file, which can be an XML payload or a full SOAP message including a SOAP header and body.

You can configure OdiInvokeWebService tool parameters using the HTTP Analyzer. To do this, click the HTTP Analyzer button on the General tab of the OdiInvokeWebService step in the package editor. This opens OdiInvokeWebServiceAdvance, which you can use to configure the command parameters.

See "OdiInvokeWebService" for details on the OdiInvokeWebService tool parameters.

The OdiInvokeWebService tool invokes a specific operation on a port of a Web service whose description file (WSDL) URL is provided. If this operation requires a SOAP request, it is provided either in a request file or in the tool command. The response of the Web service request is written to an XML file that can be used in Oracle Data Integrator.
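Conceptually, such an invocation amounts to posting a SOAP envelope to the service endpoint and saving the response to a file. The following plain-Java sketch illustrates the idea; it is not the tool's implementation, and the endpoint URL and file name are placeholders.

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

// Conceptual sketch only: POST a SOAP request and write the XML response
// to a file that could then be processed in Oracle Data Integrator.
public class SoapInvokeSketch {
    public static void main(String[] args) throws IOException {
        String request = "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
                + "<soap:Body><!-- operation payload goes here --></soap:Body>"
                + "</soap:Envelope>";

        // Placeholder endpoint; a real call targets a port and operation
        // taken from the service's WSDL.
        HttpURLConnection con = (HttpURLConnection)
                new URL("http://localhost:8080/myService").openConnection();
        con.setRequestMethod("POST");
        con.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
        con.setDoOutput(true);
        try (OutputStream out = con.getOutputStream()) {
            out.write(request.getBytes(StandardCharsets.UTF_8));
        }
        try (InputStream in = con.getInputStream()) {
            Files.copy(in, Path.of("response.xml")); // response file usable in ODI
        }
    }
}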


Note:

When using the XML payload format, the OdiInvokeWebService tool does not support the SOAP headers of the request. In order to work with SOAP headers, for example for secured Web service invocation, use a full SOAP message and manually modify the SOAP headers.


This tool can be used as a regular Oracle Data Integrator tool in a tool step of a package and also in procedures and knowledge modules. See "Adding Oracle Data Integrator Tool Steps" for information on how to create a tool step in a package.

You can process the information from your responses using regular Oracle Data Integrator mappings sourcing from the XML technology. Refer to the Connectivity and Knowledge Modules Guide for Oracle Data Integrator for more information on XML file processing.


Note:

Each XML file is defined as a model in Oracle Data Integrator. When using XML file processing for the request or response file, a model will be created for each request or response file. It is recommended to use model folders to arrange them. See "Organizing Models with Folders" for more information.


Oracle Data Integrator provides the OdiXMLConcat and OdiXMLSplit tools for processing the Web service response. Refer to the XML section of "Oracle Data Integrator Tools by Category" for details on how to use these tools.

Using the Binding Mechanism for Requests

You can use the Binding mechanism when calling a Web service from a Procedure. With this method, a Web service call is issued for each row returned by a query, with the request parameterized based on the row's values. Refer to "Binding Source and Target Data" for more information.
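Conceptually, binding amounts to iterating over the rows returned by a source query and issuing one parameterized request per row. The following Java sketch illustrates the idea only; the connection URL, query, column name, and callService() helper are all hypothetical.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Conceptual sketch of per-row invocation; connection details, the query,
// and callService() are hypothetical placeholders.
public class PerRowInvocationSketch {
    public static void main(String[] args) throws Exception {
        try (Connection c = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//src-host:1521/SRCDB", "user", "password");
             Statement s = c.createStatement();
             ResultSet rs = s.executeQuery("SELECT customer_id FROM customers")) {
            while (rs.next()) {
                // Parameterize the request from the current row's values.
                String request = "<getCustomer><id>"
                        + rs.getString("customer_id") + "</id></getCustomer>";
                callService(request); // one Web service call per source row
            }
        }
    }

    static void callService(String requestPayload) {
        // Placeholder for the actual invocation (see the SOAP sketch above).
        System.out.println("Would send: " + requestPayload);
    }
}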

Creating and Using Procedures, Variables, Sequences, and User Functions

13 Creating and Using Procedures, Variables, Sequences, and User Functions

This chapter describes how to work with procedures, variables, sequences, and user functions. An overview of these components and how to work with them is provided.

This chapter includes the following sections:

Working with Procedures

This section provides an introduction to procedures and describes how to create and use procedures in Oracle Data Integrator.

The following sections describe how to create and use procedures:

Introduction to Procedures

A Procedure is a set of commands that can be executed by an agent. These commands can address any technology accessible to Oracle Data Integrator (OS, JDBC, or JMS commands, and so on).

A Procedure is a reusable component that allows you to group actions that do not fit in the mapping framework. Consider a procedure only when what you need to do cannot be achieved in a mapping. In that case, rather than writing an external program or script, you can include the code in Oracle Data Integrator and execute it from your packages. Unlike mappings, procedures require you to develop all of the code manually.

A procedure is composed of command lines, possibly mixing different languages. Every command line may contain two commands, one that can be executed on a source and one on a target. The command lines are executed sequentially. Some command lines may be skipped if they are controlled by an option. These options parameterize whether or not a command line should be executed, as well as the code of the commands.

The code within a procedure can be made generic by using string options and the ODI Substitution API.

Before creating a procedure, note the following:

  • Although you can perform data transformations in procedures, using them for this purpose is not recommended; use mappings instead.

  • If you start writing a complex procedure to automate a particular recurring task for data manipulation, you should consider converting it into a Knowledge Module. Refer to the Knowledge Module Developer's Guide for Oracle Data Integrator for more information.

  • Whenever possible, try to avoid operating-system-specific commands. Using them makes your code dependent on the operating system that runs the agent. The same procedure executed by agents on two different operating systems (such as UNIX and Windows) will not work properly.

Creating Procedures

Creating a procedure follows a standard process which can vary depending on the use case. The following step sequence is usually performed when creating a procedure:

  1. Create a New Procedure

  2. Define the Procedure's Options

  3. Create and Manage the Procedure's Tasks.

When creating procedures, it is important to understand the following coding guidelines:

Create a New Procedure

To create a new procedure:

  1. In Designer Navigator select the Procedures node in the folder under the project where you want to create the procedure.

  2. Right-click and select New Procedure.

  3. On the Definition tab fill in the procedure Name.

  4. Check Multi-Connections if you want the procedure to manage more than one connection at a time.

    Multi-Connections: It is useful to choose a multi-connection procedure if you wish to use data that is retrieved by a command sent on a source connection in a command sent to another (target) connection. This data will pass through the execution agent. By enabling Multi-Connections, you can use both Target and Source fields in the Tasks (see "Create and Manage the Procedure's Tasks").

    If you access one connection at a time (which enables you to access different connections, but only one at a time), leave the Multi-Connections box unchecked. Only Target tasks will be used.

  5. Select the Target Technology, and if the Multi-Connections box is checked, also select the Source Technology. Each new Procedure line will be based on this technology. You can also leave these fields empty and specify the technologies in each procedure command.


    Caution:

    Source and target technologies are not mandatory for saving the Procedure. However, execution of the Procedure might fail if the related commands need to be associated with certain technologies and logical schemas.


  6. Optionally, select Use Unique Temporary Object Names and Remove Temporary Objects On Error:

    • Use Unique Temporary Object Names: If this procedure can be run concurrently, enable this option to create non-conflicting temporary object names.

    • Remove Temporary Objects On Error: Enable this option to run cleanup tasks even when a session encounters an error.

  7. Optionally, enter a Description of this procedure.

  8. From the File menu, click Save.

A new procedure is created, and appears in the Procedures list in the tree under your Project.

Define the Procedure's Options

Procedure options act like parameters for your steps and improve the code reusability.

There are two types of options:

  • Boolean options. Their value can be used to determine whether individual commands are executed or not. They act like an "if" statement