Oracle® Application Server Adapters for Files, FTP, Databases, and Enterprise Messaging User's Guide
10g Release 2 (10.1.2)
B25307-02

4 Oracle Application Server Adapter for Databases

This chapter describes the Oracle Application Server Adapter for Databases (database adapter), which works in conjunction with Oracle BPEL Process Manager. Support for stored procedures and functions (for Oracle databases only) is also described. References to use cases for the database adapter and for stored procedures are provided.

This chapter contains the following topics:

Introduction to the Database Adapter

The database adapter enables a BPEL process to communicate with Oracle databases or third-party databases through JDBC. The database adapter service is defined within a BPEL process partner link using the Adapter Configuration Wizard of Oracle BPEL Process Manager.

This section contains the following topics:

Database Adapter Features

The database adapter connects to any relational database. For nonrelational databases and legacy systems, application and mainframe adapters are available. See Oracle Application Server Adapter Concepts for information about application and mainframe adapters.

To access an existing relational schema, you use the Adapter Configuration Wizard to do the following:

While your BPEL process deals with XML and invokes Web services, database rows and values are queried, inserted, and updated. Unlike other solutions that give you a way to access data using a fixed schema, stored procedures, streams, or queues, with the database adapter, you access table data directly and transparently.

Features of the database adapter include:

  • Compliance with open standards. The database adapter is an implementation of the JCA 1.5 connector. Like the other adapters that work with Oracle BPEL Process Manager, the database adapter is compatible with open standards such as BPEL, WSIF, and WSDL.

  • Connectivity to any relational (SQL 92) database using JDBC, or ODBC using the Sun JdbcOdbcBridge.

  • Ability to map any existing relational schema to XML. The mapping is nonintrusive to the schema and no changes need to be made to it.

  • Web services abstraction of SQL operations. The generated WSDL operations are merge, insert, update, write, delete, select, queryByExample, and inbound polling, which includes physical delete, logical delete, and sequencing-based polling strategies.

  • Leveraging of OracleAS TopLink technology, an advanced object-to-relational persistence framework. You can access the underlying TopLink project, and use the OracleAS TopLink Mapping Workbench interface for advanced mapping and configuration, sequencing, batch and joined relationship reading, batch writing, parameter binding, statement caching, connection pooling, external transaction control (JTS and JTA), UnitOfWork for minimal updates, caching, optimistic locking, advanced query support, and query by example.

See the following for more information:

You can also access the forums from Oracle Technology Network at

http://www.oracle.com/technology

Design Overview

Figure 4-1 shows how the database adapter interacts with the various design-time and deployment artifacts.

Figure 4-1 How the Database Adapter Works


The database adapter is a separate JCA 1.5 connector. It is deployed to the application server during installation, and is configured using oc4j-ra.xml. The oc4j-ra.xml file is the database adapter deployment descriptor file for Oracle Application Server.

Each entry in oc4j-ra.xml has a Java Naming and Directory Interface (JNDI) name (location) and session and login properties, and represents a single database and database adapter instance. The connector is tied to the application server; therefore, it can be used independently of any single BPEL process, but any change to oc4j-ra.xml requires restarting the application server. This file is created by the application server the first time Oracle BPEL Server comes up. Therefore, in a standalone installation, you do not see this file unless you start Oracle BPEL Server at least once.

When a business process is executed in Oracle BPEL Process Manager, a Web service (WSDL) may be invoked (using WSIF) against the database. The jca:address tag in the WSDL is used to look up an adapter instance, and the jca:operation tag in the WSDL is used to set up an interaction (outbound) or activation (inbound) with the database adapter using a JCA interface. The jca:operation tag contains a link to OracleAS TopLink metadata for executing a query to push XML data to a relational schema, or vice versa.

The toplink_mappings.xml file and WSDL (with custom jca:operation and jca:address tags) are created during design time. In JDeveloper BPEL Designer, you create an endpoint, or partner link, for interacting with the database. Each partner link has its own WSDL. The WSDL defines all the operations (queries) that can be performed with that database.

To create the WSDL, you use the Adapter Configuration Wizard, where you import the relational schema, map it to an XML schema, and define one or more operations. This produces an XSD representing the schema and a toplink_mappings.xml file.

The Adapter Configuration Wizard creates an OracleAS TopLink Mapping Workbench project (a .mwp file) as part of the JDeveloper BPEL Designer project. Like the JDeveloper BPEL Designer .jpr file, it enables a user to go back and visually change mappings or leverage OracleAS TopLink to set advanced properties. Saving the MWP project does not regenerate toplink_mappings.xml; that is done by running through the wizard again in edit mode. (No changes are needed; you simply run through it.)

During deployment, a copy of toplink_mappings.xml is included in the BPEL suitcase. It is later read by the database adapter and the metadata is cached.

The database adapter is used for relational-to-XML mapping; therefore, no Java class files are needed. The database adapter generates byte codes for the classes in memory based on information in the descriptors. You do not compile class files or deal with class path issues when using the database adapter. The MWP project in the JDeveloper BPEL Designer project may create Java files as a by-product of using the wizard, but they are needed at design time only.

Database Adapter Concepts

This section contains the following topics:

Relational-to-XML Mapping

For a flat table or schema, the relational-to-XML mapping is easy to see. Each row in the table becomes a complex XML element. The value for each column becomes a text node in the XML element. Both column values and text elements are primitive types.

Table 4-1 shows the structure of the MOVIES table. This table is used in the use cases described in this chapter. See Chapter 4, "Use Cases for the Database Adapter" for more information.

Table 4-1 MOVIES Table Description

Name            Null?       Type
-------------   --------    -------------
TITLE           NOT NULL    VARCHAR2(50)
DIRECTOR        --          VARCHAR2(20)
STARRING        --          VARCHAR2(100)
SYNOPSIS        --          VARCHAR2(255)
GENRE           --          VARCHAR2(70)
RUN_TIME        --          NUMBER
RELEASE_DATE    --          DATE
RATED           --          VARCHAR2(6)
RATING          --          VARCHAR2(4)
VIEWER_RATING   --          VARCHAR2(5)
STATUS          --          VARCHAR2(11)
TOTAL_GROSS     --          NUMBER
DELETED         --          VARCHAR2(5)
SEQUENCENO      --          NUMBER
LAST_UPDATED    --          DATE


The corresponding XML schema definition (XSD) is as follows:

<xs:complexType name="Movies">
  <xs:sequence>
    <xs:element name="director" type="xs:string" minOccurs="0" nillable="true"/>
    <xs:element name="genre" type="xs:string" minOccurs="0" nillable="true"/>
    <xs:element name="rated" type="xs:string" minOccurs="0" nillable="true"/>
    <xs:element name="rating" type="xs:string" minOccurs="0" nillable="true"/>
    <xs:element name="releaseDate" type="xs:dateTime" minOccurs="0" nillable="true"/>
    <xs:element name="runTime" type="xs:double" minOccurs="0" nillable="true"/>
    <xs:element name="starring" type="xs:string" minOccurs="0" nillable="true"/>
    <xs:element name="status" type="xs:string" minOccurs="0" nillable="true"/>
    <xs:element name="synopsis" type="xs:string" minOccurs="0" nillable="true"/>
    <xs:element name="title" type="xs:string"/> 
    <xs:element name="totalGross" type="xs:double" minOccurs="0" nillable="true"/>
    <xs:element name="viewerRating" type="xs:string" minOccurs="0" nillable="true"/>
  </xs:sequence> 
</xs:complexType>

As the preceding code example shows, MOVIES is not just a single CLOB or XMLTYPE column containing the entire XML string. Instead, it is an XML complexType comprised of elements, each of which corresponds to a column in the MOVIES table. For flat tables, the relational-to-XML mapping is straightforward.

Table 4-2 and Table 4-3 show the structure of the EMP and DEPT tables, respectively. These tables are used in the MasterDetail use case. See Chapter 4, "Use Cases for the Database Adapter" for more information.

Table 4-2 EMP Table Description

Name       Null?       Type
--------   --------    -------------
EMPNO      NOT NULL    NUMBER(4)
ENAME      --          VARCHAR2(10)
JOB        --          VARCHAR2(9)
MGR        --          NUMBER(4)
HIREDATE   --          DATE
SAL        --          NUMBER(7,2)
COMM       --          NUMBER(7,2)
DEPTNO     --          NUMBER(2)


Table 4-3 DEPT Table Description

Name      Null?       Type
-------   --------    -------------
DEPTNO    NOT NULL    NUMBER(2)
DNAME     --          VARCHAR2(14)
LOC       --          VARCHAR2(13)


As the preceding table definitions show, and as is typical of a normalized relational schema, an employee's department number is not stored in the EMP table. Instead, one of the columns of EMP (DEPTNO) is a foreign key, which equals the primary key (DEPTNO) in DEPT.

However, the XML equivalent has no similar notion of primary keys and foreign keys. Consequently, in the resulting XML, the same data is represented in a hierarchy, thereby preserving the relationships by capturing the detail record inline (embedded) inside the master.

An XML element can contain elements that are either a primitive type (string, decimal), or a complex type, that is, another XML element. Therefore, an employee element can contain a department element.

The corresponding XML shows how the relationship is materialized, or shown inline. DEPTNO is removed from EMP, and instead you see the DEPT itself.

<EmpCollection>
  <Emp>
    <comm xsi:nil = "true" ></comm> 
    <empno >7369.0</empno>
    <ename >SMITH</ename>
    <hiredate >1980-12-17T00:00:00.000-08:00</hiredate>
    <job >CLERK</job>
    <mgr >7902.0</mgr>
    <sal >800.0</sal>
    <dept>
      <deptno >20.0</deptno>
      <dname >RESEARCH</dname>
      <loc >DALLAS</loc>
    </dept>
  </Emp>
    ...
</EmpCollection>

Materializing the relationship makes XML human readable, and allows the data to be sent as one packet of information. No cycles are allowed in the XML; therefore, an element cannot contain itself. This is handled automatically by the database adapter. However, you may see duplication (that is, the same XML detail record appearing more than once under different master records). For example, if a query returned two employees, both of whom work in the same department, then, in the returned XML, you see the same DEPT record inline in both the EMP records.

Therefore, when you import tables and map them as XML, it is recommended that you avoid excessive duplication, although the database adapter does not print an element inside itself. The database adapter prints the following:

<Emp>
  <name>Bob</name>
  <spouse> 
    <name>June</name>
  </spouse>
</Emp>

But not:

<Emp>
  <name>Bob</name>
  <spouse>
    <name>June</name> 
    <spouse>
      <name>Bob</name> 
      <spouse>
        ...
      </spouse> 
    </spouse> 
  </spouse> 
</Emp>

To avoid duplication, you can do the following:

  • Import fewer tables. If you import only EMP, then DEPT does not appear.

  • Remove the relationship between EMP and DEPT in the wizard. This removes the relationship, but the foreign key column is put back.

In both these cases, the corresponding XML is as follows:

<EmpCollection> 
  <Emp>
    <comm xsi:nil = "true" ></comm>
    <empno >7369.0</empno>
    <ename >SMITH</ename> 
    <hiredate >1980-12-17T00:00:00.000-08:00</hiredate>
    <job >CLERK</job>
    <mgr >7902.0</mgr>
    <sal >800.0</sal> 
    <deptno >20.0</deptno>
  </Emp> 
   ...
</EmpCollection>

Note that the two preceding solutions are feasible only if getting back the foreign key value suffices, as opposed to getting back the complete detail record.

Relational Types to XML Schema Types

Table 4-4 shows how database datatypes are converted to XML primitive types when you import tables from a database.

Table 4-4 Mapping Database Datatypes to XML Primitive Types

Database Type                                                       XML Type (Prefixed with xs:)
-------------                                                       ----------------------------
VARCHAR, VARCHAR2, CHAR, NCHAR, NVARCHAR, NVARCHAR2, MEMO, TEXT,    string
CHARACTER, CHARACTER VARYING, UNICHAR, UNIVARCHAR, SYSNAME,
NATIONAL CHARACTER, NATIONAL CHAR, NATIONAL CHAR VARYING,
NCHAR VARYING, LONG, CLOB, NCLOB, LONGTEXT, LONGVARCHAR, NTEXT

BLOB, BINARY, IMAGE, LONGVARBINARY, LONG RAW, VARBINARY, GRAPHIC,   base64Binary
VARGRAPHIC, DBCLOB, BIT VARYING

BIT, NUMBER(1) DEFAULT 0, SMALLINT DEFAULT 0                        boolean

TINYINT, BYTE                                                       byte

SHORT, SMALLINT                                                     short

INT, SERIAL                                                         int

INTEGER, BIGINT                                                     integer

NUMBER, NUMERIC, DECIMAL, MONEY, SMALLMONEY, UNIQUEIDENTIFIER       decimal

FLOAT, FLOAT16, FLOAT(16), FLOAT32, FLOAT(32), DOUBLE,              double
DOUBLE PRECISION, REAL

TIME, DATE, DATETIME, TIMESTAMP, TIMESTAMP(6), SMALLDATETIME,       dateTime
TIMESTAMPTZ, TIMESTAMPLTZ, TIMESTAMP WITH TIME ZONE,
TIMESTAMP WITH LOCAL TIME ZONE


Essentially, NUMBER goes to DECIMAL, the most versatile XML datatype for numbers, VARCHAR2 and CLOB to string, BLOB to base64Binary (to meet the plain-text requirement), and date types to dateTime.

Any type not mentioned in this discussion defaults to java.lang.String and xs:string. Timestamp support is basic, because only the xs:dateTime format is supported. The BFILE, USER DEFINED, OBJECT, STRUCT, VARRAY, and REF types are specifically not supported.

Because XML is plain text, BLOB and byte values are base 64/MIME encoded so that they can be passed as character data.

Mapping Any Relational Schema to Any XML Schema

The database adapter supports mapping any relational schema on any relational database to an XML schema, although not any XML schema of your choice, because the wizard generates the XML schema with no explicit user control over the layout of the elements. You can control how you map the schema in both the Adapter Configuration Wizard and later in OracleAS TopLink Mapping Workbench. By pairing the database adapter with a transformation step, you can map any relational schema to any XML schema.

SQL Operations as Web Services

After mapping a relational schema as XML, you must also map basic SQL operations as Web services. Each operation discussed in the following sections has a corresponding tutorial and readme. It is recommended that you start with these and try to run one or more as you read this section. As the tutorials demonstrate, some operations translate directly to the SQL equivalent, while others are more complex.

See the following sections for details:

DML Operations

Data manipulation language (DML) operations align with basic SQL INSERT, UPDATE, DELETE, and SELECT operations, each of which is mapped to a Web service operation of the same name. The WRITE operation is either an INSERT or an UPDATE, based on the results of an existence check. A distinction is made between the data manipulation operations—called outbound writes—and the SELECT operations—called outbound reads. The connection between the Web service and the SQL for merge (the default for outbound write) and for queryByExample is not as obvious as it is for basic SQL INSERT, UPDATE, and SELECT.

Merge

Merge first reads the corresponding records in the database, calculates any changes, and then performs a minimal update. INSERT, UPDATE, and WRITE make the most sense when you are thinking about a single row and a single table. However, your XML can contain complex types and map to multiple rows on multiple tables. Imagine a DEPT with many EMPS, each with an ADDRESS. In this case, you must calculate which of possibly many rows have changed and which to insert, update, or delete. If a particular row did not change or only one field changed, the DML calls will be minimal.
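For example, merging an XML Dept record that contains several Emp detail records behaves conceptually like the following SQL. This is a sketch only, with hypothetical column values; the adapter generates the actual statements and skips any statement for rows that have not changed:

SELECT DEPTNO, DNAME, LOC FROM DEPT WHERE DEPTNO = 20;    -- existence check and read of the current master row
SELECT EMPNO, ENAME, SAL FROM EMP WHERE DEPTNO = 20;      -- read of the current detail rows
UPDATE DEPT SET DNAME = 'RESEARCH' WHERE DEPTNO = 20;     -- issued only if a master field changed
UPDATE EMP SET SAL = 900 WHERE EMPNO = 7369;              -- issued only for detail rows that changed
INSERT INTO EMP (EMPNO, ENAME, SAL, DEPTNO)
  VALUES (7999, 'NEW HIRE', 750, 20);                     -- detail row present in the XML but not in the database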

queryByExample

Unlike the SELECT operation, queryByExample does not require selection criteria to be specified at design time. Instead, for each invoke, the selection criteria are inferred from an exemplar input XML record.

For instance, if the output xmlRecord is an employee record, and the input is a sample xmlRecord with lastName = "Smith", then on execution, all employees with a last name of Smith are returned.

A subset of queryByExample is to query by primary key, which can be implemented by passing in sample XML records where only the primary key attributes are set.

Use queryByExample when you do not want to create a query using the visual query builder, and want the flexibility of allowing the input record to share the same XML schema as the output records.

The queryByExample operation is slightly less performant because a new SELECT statement must be prepared for each execution. This is because the attributes that are set in the example XML record can vary each time, so the selection criteria vary as well.

Input xmlRecord:
<Employee>
      <id/>
      <lastName>Smith</lastName>
</Employee>

Output xmlRecord:

<EmployeeCollection>
      <Employee>
         <id>5</id>
         <lastName>Smith</lastName>
         ....
      </Employee>
      <Employee>
         <id>456</id>
         <lastName>Smith</lastName>
         ....
      </Employee>
</EmployeeCollection>
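For the preceding example, the SELECT statement prepared at run time is conceptually equivalent to the following (a sketch; the actual column list and parameter binding are generated by the adapter):

SELECT ID, LAST_NAME, ... FROM EMPLOYEE WHERE LAST_NAME = 'Smith';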

Use Cases for Outbound Invoke Operations

Outbound invoke operations are demonstrated in the following tutorial files:

  • Insert

  • Update

  • Delete

  • Merge

  • SelectAll

  • SelectAllByTitle

  • PureSQLSelect

  • QueryByExample

For these files, go to

Oracle_Home\integration\orabpel\samples\tutorials\122.DBAdapter

Polling Strategies

The inbound receive allows you to listen to and detect events and changes in the database, which in turn can be the initiators of a business process. This is not a one-time action, but rather an activation. A polling thread is started, which polls a database table for new rows or events.

Whenever a new row is inserted into the MOVIES table, the polling operation raises it to Oracle BPEL Process Manager. The goal is to process every record exactly once. The initial SELECT must be repeated over time, to pick up both the rows that exist at the start and the rows inserted later. However, a row that has been read is not automatically removed, so without further action it would be read again at every polling interval.

The various ways to poll for events—called polling strategies, also known as after-read strategies or publish strategies—range from simple and intrusive to sophisticated and nonintrusive. Each strategy employs a different solution for the problem of what to do after reading a row or event so as not to pick it up again in the next polling interval. The simplest (and most intrusive) solution is to delete the row so that you do not query it again.

This section discusses the following polling strategies and factors to help you determine which strategy to employ for a particular situation:

Physical Delete

The physical delete polling strategy polls the database table for records and deletes them after processing. This strategy can be used to capture events related to INSERT operations and cannot capture database events related to DELETE and UPDATE operations on the parent table. This strategy cannot be used to poll child table events. This strategy allows multiple adapter instances to go against the same source table. There is zero data replication.
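Conceptually, each polling interval performs a read followed by a delete of the rows just read, in the same transaction. A sketch against the MOVIES table follows (the actual statements are generated by the adapter):

SELECT TITLE, DIRECTOR, RELEASE_DATE FROM MOVIES;   -- rows raised to Oracle BPEL Process Manager
DELETE FROM MOVIES WHERE TITLE = :title;            -- each processed row is removed so it is not read again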

Preconditions: You must have deletion privileges on the parent and associated child tables to use the delete polling strategy. Table 4-5 describes the requirements for using the delete polling strategy.

Table 4-5 Delete Polling Strategy Preconditions

Requirements Met            Conflicts With
----------------            --------------
Poll for inserts            No delete on source
Shallow delete [1]          No updates on source
Cascading delete [1]        Poll for updates
Minimal SQL                 Poll for deletes
Zero data replication       Poll for child updates
Default                     --
Allows raw SQL              --
Concurrent polling [2]      --


Footnote 1: Delete can be configured to delete the top-level row, to cascade all, or to cascade on a case-by-case basis.

Footnote 2: Concurrent polling can be configured for both delete and logical delete polling strategies.

Configuration: You can configure the delete polling strategy to delete the top-level row, to cascade all, or to cascade on a case-by-case basis. This enables deleting only the parent rows and not the child rows, cascaded deletes, and optional cascaded deletes, determined on a case-by-case basis. You can configure the polling interval for performing an event publish at design time.

Delete Cascade Policy: The optional advanced configuration is to specify the cascade policy of the DELETE. For instance, after polling for an employee with an address and many phone numbers, the phone numbers are deleted because they are privately owned (default for one-to-many), but not the address (default for one-to-one). This can be altered by configuring toplink_mappings.xml, as in the following example:

<database-mapping>
   <attribute-name>orders</attribute-name>
   <reference-class>taxonomy.Order</reference-class>
   <is-private-owned>true</is-private-owned>
</database-mapping>

You can also configure the activation itself to delete only the top level (master row), or to delete everything.

A receive operation appears in an inbound WSDL as:

<operation name="receive">
   <jca:operation
      ActivationSpec="oracle.tip.adapter.db.DBActivationSpec"
         …
      PollingStrategyName="DeletePollingStrategy"
      DeleteDetailRows="true"

Logical Delete

The logical delete polling strategy involves updating a special field on each row processed and updating the WHERE clause at run time to filter out processed rows. It mimics a logical delete, wherein application rows are rarely physically deleted; instead, a status column such as isDeleted is set to true. You must provide the status column and the value that marks a row as read; the modified WHERE clause and the post-read update are handled automatically by the database adapter.

Preconditions: You must have the logical delete privilege or a one-time alter schema (add column) privilege on the source table. Table 4-6 describes the requirements for using the logical delete polling strategy.

Table 4-6 Logical Delete Polling Strategy Preconditions

Requirements Met             Conflicts With
----------------             --------------
Poll for inserts             No updates on source
No delete on source          Poll for deletes
Minimal SQL                  --
Zero data replication        --
Minimal configuration        --
Allows raw SQL               --
Poll for updates [1]         --
Poll for child updates [2]   --
Concurrent polling [3]       --


Footnote 1: By adding a trigger.

Footnote 2: By adding a trigger.

Footnote 3: By specifying additional mark unread and reserved values.

Configuration: The logical delete polling strategy requires minimal configuration. You must specify the mark read column, and the value that indicates a processed record.

A receive operation appears in an inbound WSDL as:

<operation name="receive">
   <jca:operation
      ActivationSpec="oracle.tip.adapter.db.DBActivationSpec"
         …
      PollingStrategyName="LogicalDeletePollingStrategy"
      MarkReadField="STATUS"
      MarkReadValue="PROCESSED"

Given the configuration for logical delete, the database adapter appends the following WHERE clause to every polling query:

AND ((STATUS IS NULL) OR (STATUS <> 'PROCESSED'))

Database Configuration: A status column on the table being polled must exist. If it does not exist already, you can add one to an existing table.
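For example, on a hypothetical EMPLOYEE source table the column could be added as follows (the column name and size are illustrative):

ALTER TABLE EMPLOYEE ADD (STATUS VARCHAR2(20));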

Support for Polling for Updates: Because rows are not deleted with each read, the same row can be read more than once. You should add a trigger to reset the mark read field whenever a record is changed, as follows:

create trigger Employee_modified
before update on Employee
for each row
begin
   :new.STATUS := 'MODIFIED';
end;

Support for Concurrent Access Polling: Just as a single instance should never process an event more than once, the same applies to a collection of instances. Therefore, before processing a record, an instance needs to reserve that record with a unique value. Again, the status column can be used:

<operation name="receive">
   <jca:operation
      ActivationSpec="oracle.tip.adapter.db.DBActivationSpec"
         …
      PollingStrategyName="LogicalDeletePollingStrategy"
      MarkReadField="STATUS"
      MarkUnreadValue="UNPROCESSED"
      MarkReservedValue="RESERVED-1"
      MarkReadValue="PROCESSED"

The polling query instead looks like the following:

Update EMPLOYEE set STATUS = 'RESERVED-1' where (CRITERIA) AND (STATUS = 'UNPROCESSED');

Select … from EMPLOYEE where (CRITERIA) AND (STATUS = 'RESERVED-1');

The after-read UPDATE is faster because it can update all the reserved rows at once:

Update EMPLOYEE set STATUS = 'PROCESSED' where (CRITERIA) AND (STATUS = 'RESERVED-1');

Sequencing Table: Last-Read Id

This polling strategy involves using a helper table to remember a sequence value. The source table is not modified; instead, the rows that have been read are recorded in a separate helper table. A sequence value of 1000, for example, means that every record with a sequence less than that value has already been processed. Because many tables have some counter field that is always increasing and maintained by triggers or the application, this strategy can often be used for noninvasive polling. No fields on the processed row ever need to be modified by the database adapter.

Native sequencing with a preallocation size of 1 can ensure that rows are inserted with primary keys that are always increasing over time.

This strategy is also called a nondestructive delete because no updates are made to the source rows; instead, a sequencing field (such as the sequence number) is used to order the rows for processing. Because the rows are processed in that order, the database adapter can track which rows have been processed and which have not with a single piece of information: the last-read sequence value.

Preconditions: You must have a sequencing table or create table privilege on the source schema. The source table has a column that is monotonically increasing with every INSERT (an Oracle native sequenced primary key) or UPDATE (the last-modified timestamp). Table 4-7 describes the requirements for using the sequencing polling strategy.

Table 4-7 Sequencing Polling Strategy Preconditions

Requirements Met         Conflicts With
----------------         --------------
Poll for inserts         Poll for deletes
Poll for updates         Allows raw SQL
No delete on source      Concurrent polling
No updates on source     Poll for child updates
One extra SQL select     --
Zero data replication    --
Moderate configuration   --


Configuration: A separate helper table must be defined. On the source table, you must specify which column is ever increasing.

<operation name="receive">
   <jca:operation
      ActivationSpec="oracle.tip.adapter.db.DBActivationSpec"
         …
      PollingStrategyName="SequencingPollingStrategy"
      SequencingFieldName="MODIFIED_DATE"
      SequencingFieldType="java.sql.Date"
      SequencingTableNameFieldValue="EMPLOYEE"
      SequencingTableName="SEQUENCING_HELPER"
      SequencingTableNameFieldName="TABLE_NAME"
      SequencingTableValueFieldName="LAST_READ_DATE"

The sequencing field type can be excluded if it is actually a number.

Database Configuration: A sequencing table must be configured once for a given database. Multiple processes can share the same table. Given the ActivationSpec specified in the preceding example, the CREATE TABLE command looks as follows:

CREATE TABLE SEQUENCING_HELPER 
(
TABLE_NAME VARCHAR2(32) NOT NULL,
LAST_READ_DATE DATE
)
;

Polling for Updates: In the preceding example, the polling is for new objects or updates, because every time an object is changed, the modified time is updated.

A sample trigger to set the modified time on every insert or update is as follows:

create trigger Employee_modified
before insert or update on Employee
for each row
begin
  :new.modified_date := sysdate;
end;
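With this configuration, each polling interval conceptually issues a SELECT against the source table bounded by the value stored in the helper table, and then advances that value. The following is a sketch; column names other than MODIFIED_DATE are illustrative, and the adapter generates the actual SQL:

SELECT EMPNO, ENAME, MODIFIED_DATE FROM EMPLOYEE
  WHERE MODIFIED_DATE > (SELECT LAST_READ_DATE FROM SEQUENCING_HELPER
                          WHERE TABLE_NAME = 'EMPLOYEE')
  ORDER BY MODIFIED_DATE;

UPDATE SEQUENCING_HELPER
  SET LAST_READ_DATE = :max_modified_date_just_read
  WHERE TABLE_NAME = 'EMPLOYEE';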

Using a Sequence Number: A sequence number can be used for either insert or update polling. Native sequencing returns monotonically increasing primary keys, as long as an increment by 1 is used. You can also use the sequence number of a materialized view log.
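A sketch of such a setup follows, assuming the MOVIES source table and an illustrative sequence and trigger name; NOCACHE helps keep the generated values increasing in insertion order:

CREATE SEQUENCE MOVIES_SEQ START WITH 1 INCREMENT BY 1 NOCACHE;

CREATE OR REPLACE TRIGGER MOVIES_ASSIGN_SEQ
BEFORE INSERT ON MOVIES
FOR EACH ROW
BEGIN
  -- populate the ever-increasing SEQUENCENO column from the sequence
  SELECT MOVIES_SEQ.NEXTVAL INTO :new.SEQUENCENO FROM DUAL;
END;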

Sequencing Table: Last Updated

This polling strategy involves using a helper table to remember a last_updated value. A last_updated value of 2005-01-01 12:45:01.000, for example, means that every record last updated at that time or earlier has already been processed. Because many tables have rows with a last_updated or creation_time column maintained by triggers or the application, this strategy can often be used for noninvasive polling. No fields on the processed row ever need to be modified by the database adapter.

This strategy is also called a nondestructive delete because no updates are made to the source rows; instead, a sequencing field (such as last_updated) is used to order the rows for processing. Because the rows are processed in that order, the database adapter can track which rows have been processed and which have not with a single piece of information: the last-read value.

See Sequencing Table: Last-Read Id for information about preconditions and configuration.

Control Tables

The control table polling strategy involves using a control table to store the primary key of every row that has yet to be processed. With a natural join between the control table and the source table (by primary key), polling against the control table is practically the same as polling against the source table directly. However, an extra layer of indirection allows the following:

  • Destructive polling strategies such as the delete polling strategy can be applied to rows in the control table alone, while shielding any rows in the source table.

  • Only rows that are meant to be processed have their primary key appear in the control table. Information that is not in the rows themselves can be used to control which rows to process (a good WHERE clause may not be enough).

  • The entire row is not copied to a control table, and any structure under the source table, such as detail rows, can also be raised without copying.

Streams and materialized view logs make good control tables.
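A sketch of the resulting polling query, assuming a hypothetical MOVIES_CONTROL table that stores the primary key (TITLE) of each unprocessed MOVIES row:

SELECT m.TITLE, m.DIRECTOR, m.RELEASE_DATE
  FROM MOVIES_CONTROL c, MOVIES m
  WHERE c.TITLE = m.TITLE;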

Preconditions: You must have create/alter triggers privilege on the source table. Table 4-8 describes the requirements for using the control table polling strategy.

Table 4-8 Control Table Polling Strategy Preconditions

Requirements Met         Conflicts With
----------------         --------------
Poll for inserts         Advanced configuration: the native XML from the
                         database will have a control header, and triggers
                         are required.
Poll for updates         --
Poll for deletes         --
Poll for child updates   Minimal data replication (primary keys are stored
                         in the control table)
No delete on source      --
No updates on source     --
No extra SQL selects     --
Concurrent polling       --
Allows raw SQL           --
Auditing                 --


Using triggers, whenever a row is modified, an entry is added to a control table, containing the name of the master table, and the primary keys. At design time, the control table is defined to be the root table, with a one-to-one mapping to the master table, based on the matching primary keys. The control table can contain extra control information, such as a timestamp, and operation type (INSERT, UPDATE, and so on).
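For reference, a minimal control table matching the department triggers shown later in this section might be defined as follows (hypothetical; a timestamp or operation-type column can be added if that information is needed):

CREATE TABLE DEPT_CONTROL
(
  DEPTNO NUMBER(2) NOT NULL
);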

The delete polling strategy is useful with this setup. It is important to keep the control table small, and if the option shouldDeleteDetailRows="false" is used, then only the control rows are deleted, giving you a nondestructive delete (the DELETE is not cascaded to the real tables).

It is possible to reuse the same control table for multiple master tables. In OracleAS TopLink, you can map the same table to multiple descriptors by mapping the control table as one abstract class with multiple children. Each child has a unique one-to-one mapping to a different master table. The advantage of this approach is that you can specify for each child a class indicator field and value so that you do not need an explicit WHERE clause for each polling query.

Some sample triggers follow for polling for changes both to a department table and any of its child employee rows:

CREATE OR REPLACE TRIGGER EVENT_ON_DEPT 
   AFTER INSERT OR UPDATE ON DEPARTMENT 
   REFERENCING NEW AS newRow 
   FOR EACH ROW 
   DECLARE X NUMBER;
BEGIN
   SELECT COUNT(*) INTO X FROM DEPT_CONTROL WHERE (DEPTNO = :newRow.DEPTNO);
   IF X = 0 THEN
      INSERT INTO DEPT_CONTROL VALUES (:newRow.DEPTNO);
   END IF;
END;
CREATE OR REPLACE TRIGGER EVENT_ON_EMPLOYEE
   AFTER INSERT OR UPDATE ON EMPLOYEE
   REFERENCING OLD AS oldRow NEW AS newRow
   FOR EACH ROW
   DECLARE X NUMBER;
BEGIN
   SELECT COUNT(*) INTO X FROM DEPT_CONTROL WHERE (DEPTNO = :newRow.DEPTNO);
   IF X = 0 THEN
      INSERT INTO DEPT_CONTROL VALUES (:newRow.DEPTNO);
   END IF;
   IF (:oldRow.DEPTNO <> :newRow.DEPTNO) THEN
      SELECT COUNT(*) INTO X FROM DEPT_CONTROL WHERE (DEPTNO = :oldRow.DEPTNO);
      IF (X = 0) THEN
         INSERT INTO DEPT_CONTROL VALUES (:oldRow.DEPTNO);
      END IF;
   END IF;
END;

Use Cases for Polling Strategies

Polling strategies are demonstrated in the following tutorials:

  • PollingLogicalDeleteStrategy

  • PollingLastUpdatedStrategy

  • PollingLastReadIdStrategy

  • PollingControlTableStrategy

  • MasterDetail (for the physical delete polling strategy)

For these files, go to

Oracle_Home\integration\orabpel\samples\tutorials\122.DBAdapter

Use Cases for the Database Adapter

Using the database adapter is demonstrated in the 122.DBAdapter tutorial. Go to

Oracle_Home\integration\orabpel\samples\tutorials\122.DBAdapter

Table 4-9 shows the database adapter samples that are provided with Oracle BPEL Process Manager.

Table 4-9 Database Adapter Use Cases

Tutorial Name Description

Delete

Illustrates the outbound delete operation of the database adapter. An XML record is passed to the operation and the row in the database with the same primary key is deleted.

File2StoredProcedure

Describes a simple scenario in which the file adapter is used to provide instance XML to a stored procedure, ADDEMPLOYEES, which is then executed. The instance XML provides a value for the parameter of the stored procedure. The ADDEMPLOYEES procedure must be installed in an Oracle database (not Oracle Lite).

File2Table

Illustrates the use of a native (CSV) input data file defined in a custom format. The input file is a purchase order, which the file adapter processes and publishes as an XML message to the File2Table BPEL process. The message is transformed to another purchase order format and routed to an invoke activity.

Insert

Illustrates the outbound insert operation of the database adapter. An XML record is passed to the operation and inserted into the database as relational data. (In JDeveloper BPEL Designer, Merge (Insert or Update) is provided.)

InsertWithCatch

Illustrates the extra steps (based on the Insert tutorial) needed to add fault handling to your BPEL process.

JPublisherWrapper

Illustrates a workaround for using PL/SQL RECORD types. JPublisher is used to create a corresponding OBJECT type whose attributes match the fields of the RECORD, and conversion APIs that convert from RECORD to OBJECT and vice versa. JPublisher also generates a wrapper procedure (or function) that accepts the OBJECT and invokes the underlying method using the conversion APIs in both directions. The invoked methods must be installed in an Oracle database (not Oracle Lite).

MasterDetail

Illustrates how to migrate data from one set of tables to another. The sample uses the database adapter to read data from one set of tables, process the data, and write it to another set of database tables using the adapter.

Merge

Illustrates the outbound merge operation of the database adapter. An XML record is passed to the operation and a corresponding row in the database is either inserted or updated.

PollingControlTableStrategy

Illustrates an inbound polling operation to poll XML instances from the MOVIES table. When a new row is inserted into the MOVIES table, the polling operation raises it to Oracle BPEL Process Manager. This strategy uses a control table to store the primary key of every row that has not yet been processed. With a natural join between the control table and the source table (by primary key), polling against the control table is practically the same as polling against the source table directly.

PollingLastReadIdStrategy

Illustrates an inbound polling operation to poll XML instances from the MOVIES table. Whenever a new row is inserted into the MOVIES table, the polling operation raises it to Oracle BPEL Process Manager. This strategy uses a helper table to remember a sequence value.

PollingLastUpdatedStrategy

Illustrates an inbound polling operation to poll XML instances from the MOVIES table. Whenever a new row is inserted into the MOVIES table, the polling operation raises it to Oracle BPEL Process Manager. This strategy involves using a helper table to remember a last_updated value.

PollingLogicalDeleteStrategy

Illustrates an inbound polling operation to poll XML instances from the MOVIES table. Whenever a new row is inserted into the MOVIES table, the polling operation raises it to Oracle BPEL Process Manager. This strategy involves updating a special field on each row processed, and updating the WHERE clause at run time to filter out processed rows.

PureSQLPolling

Illustrates how to poll a table based on a date field.

PureSQLSelect

Illustrates how to bypass the JDeveloper BPEL Designer WHERE-clause builder to specify arbitrarily complex SQL strings for SELECT operations.

QueryByExample

Illustrates the outbound queryByExample operation of the database adapter. A SELECT SQL query is built dynamically based on fields set in an example XML record, and any matching records are returned.

ResultSetConverter

Illustrates a workaround for using REF CURSORs. The solution involves the use of a Java stored procedure to convert the corresponding java.sql.ResultSet into a collection (either VARRAY or NESTED TABLE) of OBJECTs.

SelectAll

Illustrates the outbound SelectAll operation of the database adapter. With no WHERE clause, all rows in the MOVIES table are returned as XML.

SelectAllByTitle

Illustrates the outbound SelectAllByTitle operation of the database adapter. The row in the MOVIES table with the selected title is returned as XML.

Update

Illustrates the outbound Update operation of the database adapter. An XML record is passed to the operation and the row in the database with the same primary key is updated. (In JDeveloper BPEL Designer, Merge (Insert or Update) is provided.)


See Table 4-1 for the structure of the MOVIES table, which is used for many of the use cases. The readme.txt files that are included with most of the samples provide instructions.

The Adapter Configuration Wizard

Using the Adapter Configuration Wizard, you can import tables from the database, specify relationships spanning multiple tables, generate corresponding XML schema definitions, and create services to expose the necessary SQL or database operations. These services are consumed to define partner links that are used in the BPEL process. You use the Adapter Configuration Wizard both to create and to edit adapter services.

This section contains the following topics:

Starting the Adapter Configuration Wizard

After you create a BPEL project in JDeveloper BPEL Designer, you can start defining a database adapter. If the wizard window loses focus, press Alt-Tab to bring it back to the foreground.

To launch the Adapter Configuration Wizard:

  1. Ensure that Process Activities is selected in the drop-down list of the Component Palette section.

  2. Drag and drop a PartnerLink activity onto the right side of the designer window.

  3. Enter a name in the Create Partner Link window.

  4. Click the Define Adapter Service icon to start the Adapter Configuration Wizard.


  5. Click Next on the Welcome window.

  6. Select Database Adapter for the Adapter Service Type and click Next.

See Chapter4, "Connecting to a Database" to continue using the wizard.

Connecting to a Database

Figure 4-2 shows where you select the database connection that you are using with the service. This is the database from which you import tables to configure the service.

You can provide a Java Naming and Directory Interface (JNDI) name to identify the database connection, or use the default name that is provided. The JNDI name acts as a placeholder for the connection used when your service is deployed to Oracle BPEL Server. This enables you to use different databases for development and production. The Adapter Configuration Wizard captures the design-time connection in the generated WSDL as well, to serve as a fallback in case the run-time lookup fails.

Figure 4-2 Adapter Configuration Wizard: Service Connection


Note the following:

  • In production environments, it is recommended that you add the JNDI entry to the adapter deployment descriptor (oc4j-ra.xml). This way, the database adapter is more performant by working in a managed mode. In a nonmanaged mode, the database adapter uses the design-time connection information.

  • When you click Next, a connection to the database is attempted. If a connection cannot be made, you are not able to proceed to the next window, even if you are editing an existing partner link.

See Chapter4, "Selecting the Operation Type" to continue using the wizard.

Selecting the Operation Type

Figure 4-3 shows where you indicate the type of operation you want to configure for this service.

Figure 4-3 Adapter Configuration Wizard: Operation Type


The following operation types are available:

  • Call a Stored Procedure or Function

    Select this option if you want the service to execute a stored procedure or function. See Chapter 4, "Stored Procedure and Function Support" for more information.

  • Perform an Operation on a Table

    Select this option for outbound operations. You can select Insert or Update, Delete, Select, or any combination of the three. These operations loosely translate to SQL INSERT, UPDATE, DELETE, and SELECT operations. See Chapter 4, "DML Operations" for more information.

    If you select all three, then after you run the wizard, you see the following operations in the Operation list of the Invoke window: merge, insert, update, write, delete, serviceNameSelect, and queryByExample.

    Invoke window showing available operations

    Note the following:

    • The operations merge, insert, update, and write are created from selecting Insert or Update.

    • The preceding Invoke window shows the MergeService service name as part of the Select operation, that is, MergeServiceSelect.

    • The queryByExample operation appears in every WSDL.

    • If the Operation list is initially blank, reselect the partner link and click the Operation list again.

  • Poll for New or Changed Records in a Table

    Select this option for an inbound operation (that is, an operation that is associated with a Receive activity). This operation type polls a specified table and returns for processing any new rows that are added. You can also specify the polling frequency. See Chapter 4, "Polling Strategies" for more information.

See Chapter4, "Selecting and Importing Tables" to continue using the wizard.

Selecting and Importing Tables

Figure 4-4 shows where you select the root database table for your operation. If you are using multiple, related tables, then this is the highest-level table (or highest parent table) in the relationship tree.

Figure 4-4 Adapter Configuration Wizard: Select Table


This window shows all the tables that have been previously imported in the JDeveloper BPEL Designer project (including tables that were imported for other partner links). This enables you to reuse configured table definitions across multiple partner links in a given BPEL project. These are the generated TopLink descriptors.

If the root database table you want to use for this operation has not been previously imported, click Import Tables.... If the table structure has changed on the database, you can reimport the table and overwrite the previously configured table definition.


Note:

If you reimport a table, you lose any custom relationships you may have defined on that table, as well as any custom WHERE clauses (if the table being imported was the root table).

See Chapter4, "Defining Primary Keys" to continue using the wizard.

Defining Primary Keys

If any of the tables you have imported do not have primary keys defined on the database, you are prompted to provide a primary key for each one, as shown in Figure 4-5. You must specify a primary key for all imported tables. You can select multiple fields if you need to specify a multipart primary key.

Figure 4-5 Adapter Configuration Wizard: Define Primary Keys


The primary key that you specify here is recorded on the offline database table and is not persisted back to the database schema; the database schema is left untouched.

See Chapter4, "Creating Relationships" to continue using the wizard.

Creating Relationships

Figure 4-6 shows the relationships defined on the root database table and any other related tables. You can click Create Relationships… to create a new relationship between two tables, or Remove Relationship to remove it.

Figure 4-6 Adapter Configuration Wizard: Relationships


Note the following regarding creating relationships:

  • If foreign key constraints between tables already exist on the database, then two relationships are created automatically when you import the tables: a one-to-one (1:1) from the source table (the table containing the foreign key constraints) to the target table, as well as a one-to-many (1:M) from the target table to the source table.

  • As Figure 4-6 shows, you see only the relationships that are reachable from the root database table. If, after removing a relationship, other relationships are no longer reachable from the root table, then they are not shown in the Relationships window. Consider the following set of relationships:

    A --1:1--> B --1:1--> C --1:M--> D --1:1--> E --1:M--> F
        (1)        (2)        (3)        (4)        (5)

    If you remove relationship 3, then you see only:

    A --1:1--> B

    B --1:1--> C

    If you remove relationship 2, then you see only:

    A --1:1--> B

    If you remove relationship 1, you no longer see any relationships.

Figure 4-7 shows where you can create a new relationship.

Figure 4-7 Creating Relationships


To create a new relationship:

  1. Select the parent and child tables.

  2. Select the mapping type (one-to-many, one-to-one, or one-to-one with the foreign key on the child table).

  3. Associate the foreign key fields to the primary key fields.

  4. Optionally name the relationship (a default name is generated).


Note:

Only tables that are reachable from the root table can be selected as a parent.

What Happens When Relationships Are Created or Removed

When tables are initially imported into the wizard, a TopLink direct-to-field mapping corresponding to each field in the database is created. Consider the schemas shown in Figure 4-8 and Figure 4-9:

Figure 4-8 EMPLOYEE Schema


Figure 4-9 ADDRESS Schema


Immediately after importing these two tables, the following mappings in the Employee descriptor are created:

Employee:

  • id (direct mapping to the ID field, for example, 151)

  • name (direct mapping to the NAME field, for example, Stephen King)

  • addrId (direct mapping to the ADDR_ID field, for example, 345)

When creating a relationship mapping, the direct-to-field mappings to the foreign key fields are removed and replaced with a single relationship (one-to-one, one-to-many) mapping. Therefore, after creating a one-to-one relationship between Employee and Address called homeAddress, the Employee descriptor looks like this:

Employee:

  • id

  • name

  • homeAddress (one-to-one mapping to the ADDRESS table; this attribute now represents the entire Address object.)

When a relationship is removed, the direct mappings for the foreign keys are restored.

Different Types of One-to-One Mappings

The following ways of specifying one-to-one relationships are supported:

Figure 4-10 Foreign Keys on the Parent Table EMPLOYEE


Figure 4-11 Foreign Keys on the Parent Table ADDRESS


Figure 4-12 Foreign Keys on the Child Table EMPLOYEE


Figure 4-13 Foreign Keys on the Child Table ADDRESS


Creating the Object Model

Figure 4-14 shows the object model that is created from the imported table definitions, including any relationships that you may have defined.

Figure 4-14 Adapter Configuration Wizard: Object Model


If your object model contains self-relationships (for example, the employee-to-employee manager relationship), then you see these as loops in the tree. These loops are not present in the XSD. This is the descriptor object model, not the XSD.

See Chapter4, "Defining a WHERE Clause" to continue using the wizard.

Defining a WHERE Clause

If your service contains a SELECT query (that is, inbound polling services, or outbound services that contain a SELECT), then you can customize the WHERE clause of the SELECT statement.

Figure 4-15 shows where you define a WHERE clause for an outbound service. For inbound services, you do not see the Parameters section.

Figure 4-15 Adapter Configuration Wizard: Define WHERE Clause



Note:

The WHERE clause applies to SELECT operations only (that is, polling for new or changed records, or performing a SELECT operation on a table). It does not apply to INSERT, UPDATE, and DELETE operations.

The most basic expression in a WHERE clause can be one of the following three cases, depending on what the right-hand side (RHS) is:

  1. EMP.ID = 123

    In this case, the RHS is a literal value. This RHS is the Literal option shown in Figure 4-16.

  2. EMP.ADDR_ID = ADDR.ID

    In this case, the RHS is another database field. This RHS is the Query Key option shown in Figure 4-16.

  3. EMP.ID = ?

    In this case, the RHS value must be specified at run time. This is the Parameter option shown in Figure 4-16.

You create the parameters that you need in the WHERE clause by clicking Add before you move on to build the WHERE clause. To build the WHERE clause, click Edit… to launch the Expression Builder, as shown in Figure 4-16.
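Combining these forms, the WHERE clause of a generated outbound SELECT might look like the following (a sketch; the adapter builds the actual SQL from the expression you define in the Expression Builder):

SELECT EMP.ID, EMP.NAME
  FROM EMP, ADDR
  WHERE (EMP.ADDR_ID = ADDR.ID) AND (EMP.ID = ?);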

Figure 4-16 Expression Builder


See the following for more information:

See Chapter4, "Choosing an After-Read Strategy" to continue using the wizard.

Choosing an After-Read Strategy

When configuring an inbound operation, you have the following options for what to do after a row or rows have been read:

Figure 4-17 shows these options.

Figure 4-17 Adapter Configuration Wizard: After-Read Strategies


See Chapter4, "Polling Strategies" for more information.

Delete the Rows that Were Read

With this option, the rows are deleted from the database after they have been read and processed by the adapter service.

Update a Field in the Table (Logical Delete)

With this option, you update a field in the root database table to indicate that the rows have been read. The WHERE clause of the query is updated automatically after you complete the configuration, as shown in Figure 4-18.

Figure 4-18 Adapter Configuration Wizard: Logical Delete


Using this approach, your database table looks something like Figure 4-19.

Figure 4-19 Updating Fields in a Table


Note the following:

  • Rows 150 and 153 have been previously read and processed.

  • At the next polling event, row 152 will be read and processed because it contains UNPROCESSED in the Status column. Because an explicit Unread Value was provided, row 151 will not be read.

  • Row 154 has been flagged as LOCKED and will not be read. You can use this reserved value if your table is in use by other processes.

Update a Sequencing Table

With this option, you are keeping track of the last-read rows in a separate sequence table. Figure 4-20 shows the information you provide. The WHERE clause of your query is updated automatically after you complete the configuration.

Figure 4-20 Adapter Configuration Wizard: Last Read IDs Table


Using these settings, your sequence table looks something like Figure 4-21.

Figure 4-21 Updating a Sequence Table


Whenever a row is read, this table is updated with the ID that was just read. Then when the next polling event occurs, it will search for rows that have an ID greater than the last-read ID (154).

Typical columns used are event_id, transaction_id, scn (system change number), id, or last_updated. These columns typically have (monotonically) increasing values, populated from a sequence number or sysdate.
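Continuing the example in Figure 4-21, the next polling query is conceptually bounded by the last-read value (a sketch; the source table and column names are illustrative):

SELECT ID, ... FROM SOURCE_TABLE WHERE ID > 154 ORDER BY ID;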

Internal Processes at Design Time

This section describes what happens internally at design time when you use the Adapter Configuration Wizard to configure the database adapter.

Importing Tables

When you import a table, the offline table support of JDeveloper BPEL Designer creates an offline snapshot of the database table. You can modify this offline version of the table (for example, you can add a foreign key constraint) without affecting the real database table. This creates a TopLink descriptor and associated Java source file for the table, and all the attributes in the descriptor are automapped to their corresponding database columns. The TopLink descriptor maps the Java class to the offline database table.

Most typical data columns are mapped as direct-to-field mappings, meaning that the value in the database column is directly mapped to the attribute. For example, a SALARY column in the database is mapped to a salary attribute in the object model, and that attribute contains the value of that column.

If foreign key constraints are already present in the imported tables, then relationship mappings are autogenerated between the tables. To cover as many scenarios as possible, two mappings are generated for every foreign key constraint encountered: a one-to-one mapping from the source table to the target table, and a one-to-many mapping in the opposite direction. After this is done, you are left with an OracleAS TopLink Mapping Workbench project in your BPEL project.
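For example, a foreign key constraint such as the following on the offline EMP table (illustrative) results in both a one-to-one mapping from Emp to Dept and a one-to-many mapping from Dept to Emp:

ALTER TABLE EMP ADD CONSTRAINT EMP_DEPT_FK
  FOREIGN KEY (DEPTNO) REFERENCES DEPT (DEPTNO);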


Note:

The Java classes that are created as part of the descriptor generation process are never actually deployed with your process or used at run time. They are present in the design time because OracleAS TopLink Mapping Workbench is expecting each descriptor to be associated with a Java class. When your process is deployed, the mapping metadata is stored in toplink_mappings.xml.

When you have finished importing tables, you must select a root database table. In doing so, you are actually selecting which TopLink descriptor stores the autogenerated query.

Creating Relationships

When you create or remove a relationship, you are actually modifying the TopLink descriptor mappings. Creating a new relationship does the following:

  • Creates a foreign key constraint in the offline database table

  • Adds a one-to-one or one-to-many mapping to the descriptor

  • Removes the direct-to-field mappings to the foreign key fields

Removing a relationship mapping does the following:

  • Removes the one-to-one or one-to-many mapping from the descriptor

  • Removes the foreign key constraint from the offline database table

  • Adds direct-to-field mappings for each foreign key field involved in the relationship

Generating Design-Time Artifacts

The following files are generated:

  • service_name.wsdl—contains the database adapter service definition

  • RootTable.xsd—the XML type definition of the root object

  • toplink_mappings.xml—contains the TopLink mapping metadata for your BPEL project. It is the only TopLink artifact that is deployed to the server.

Advanced Configuration

The Adapter Configuration Wizard generates everything you need to use the database adapter as part of a BPEL process. The following sections describe what happens in the background when you use the wizard, as well as performance considerations.

This section contains the following topics:

The OracleAS TopLink Mapping Workbench Project

The wizard works by creating an OracleAS TopLink Mapping Workbench project as part of your BPEL process project. This TopLink project contains metadata for mapping a database schema to objects/XML.

The TopLink mappings are stored in two formats. The toplink_mappings.mwp file is your design-time project, which you can edit visually in JDeveloper BPEL Designer. In contrast, the toplink_mappings.xml file is an XML representation of your project for use at run time. Unlike the .bpel file, where a single file lets you toggle between Diagram View and Source, the mappings are kept in these two separate files.

Note the following:

  • Rather than edit the toplink_mappings.xml file directly, it is recommended that you edit the toplink_mappings.mwp visually, and regenerate all the BPEL artifacts to reflect the changes. You can do this by double-clicking the partner link to open the Adapter Configuration Wizard in edit mode, and then clicking through the wizard until you can click Finish. Changing the MWP version does not update the XML version until you click through the wizard in edit mode.

  • When running the wizard, any changes that affect the TopLink project (importing tables, creating or removing mappings, specifying an expression, and so on) are applied immediately, and are not undone if you cancel the wizard.

Deleting a Descriptor

You cannot remove TopLink descriptors from your project from within the wizard because removing descriptors can potentially affect other partner links that are sharing that descriptor. To explicitly remove a descriptor, do the following:

  1. Click the TopLink Mappings node under Application Sources under your project in the Applications - Navigator.

  2. Select the descriptor from the tree in the TopLink Mappings - Structure pane.

  3. Right-click and select Remove.


Returning Partial Objects When Querying

Currently, the Adapter Configuration Wizard does not have built-in support for partial object reading, that is, returning only specific fields from a table. To achieve this functionality, you can manually unmap any attributes that you do not want to include in the result set. Relationship mappings can be unmapped by removing them in the Relationships window, but direct mappings must be explicitly unmapped on the TopLink descriptor.

To unmap attributes:

  1. Click the TopLink Mappings node under Application Sources under your project in the Applications - Navigator.

  2. Select the descriptor containing the attribute you want to unmap from the tree in the TopLink Mappings - Structure pane.

  3. Right-click the attribute you want to unmap and select Map As > Unmapped.


To remap the attribute, you can do the following:

  1. Click the TopLink Mappings node under Application Sources under your project in the Applications - Navigator.

  2. Select the descriptor containing the attribute you want to remap from the tree in the TopLink Mappings - Structure pane.

  3. Right-click the attribute you want to remap and select Map As > Direct to Field.

    The TopLink Mappings Editor automatically opens in the JDeveloper BPEL Designer window.

  4. From Database Field, select the column to which the attribute should be mapped.


Renaming a Mapping

Open the corresponding Java source file and change the name. Then go to the structure/Mappings pane, and the newly named attribute will appear unmapped. Right-click it and select Map As to remap it. Then save and regenerate BPEL artifacts.

Keep in mind that there are four views: the project view, the table/descriptor view, and the individual attribute/column view, all accessible from the TopLink Mappings structure window, plus the Java source view. The Java source view is not strictly a TopLink view, but it can be treated as one when renaming a mapping.

Configuring Offline Database Tables

Offline database tables are internal to the OracleAS TopLink Mapping Workbench project. When you run the wizard, a TopLink project is created. When you import tables, they are saved as offline table definitions.

You can use the offline database tables to control the micromapping from database datatype to XML datatype. If you are using a third-party database, you may need to edit these objects as a workaround. For instance, a serial field on a third-party database may need to be mapped as Integer so that it is recognized by the wizard and mapped to xs:integer. See Appendix A, "Problems Importing Third-Party Database Tables" for more information.

Run the wizard once. Then add the following to your JDeveloper BPEL Designer project: database/schemaName/schemaName.schema

Click the table name (see Figure 4-22) after it is added to your project and change the types of any of the columns. When you run the wizard again (in edit mode) and click Finish, the toplink_mappings.xml and XSD file are remapped based on the new database datatypes.

Figure 4-22 Configuring Offline Tables

Description of Figure 4-22 follows
Description of "Figure 4-22 Configuring Offline Tables"

Edit the offline table in your JDeveloper BPEL Designer project (see Figure 4-23), not the table reachable from the ConnectionManager. If you try the latter, the column types will not be editable, because you are editing the table itself, not an offline representation of it.

Figure 4-23 Editing Offline Tables

Description of Figure 4-23 follows
Description of "Figure 4-23 Editing Offline Tables"

Relational-to-XML Mappings (toplink_mappings.xml)

The database adapter is implemented using OracleAS TopLink. For every business process, there is an underlying TopLink project, which contains metadata for mapping a database schema to objects/XML.

In OracleAS TopLink terminology, toplink_mappings.xml is an XML deployment file. It is generated from a .mwp project file for use at run time. It is recommended that you edit the project in OracleAS TopLink Mapping Workbench and periodically refresh toplink_mappings.xml.

The toplink_mappings.xml file is the run-time version of the OracleAS TopLink Mapping Workbench project. If you edit this file directly, keep in mind that changes are not reflected in the design-time toplink_mappings.mwp. Therefore, any changes are lost when you edit a partner link.

The toplink_mappings.xml file consists of a set of descriptors and mappings. Descriptors roughly represent a single table in the database schema, and mappings represent either a single column in the table (direct to field), or a one-to-one or one-to-many relationship to another table (foreign reference).

When modifying the toplink_mappings.xml file, the recommended approach is to use OracleAS TopLink Mapping Workbench. The following is an example of a mapping and a descriptor from a toplink_mappings.xml file.

<mappings>
   <database-mapping>
      <attribute-name>fname</attribute-name>
      <read-only>false</read-only>
      <field-name>ACTOR.FNAME</field-name>
      <attribute-classification>java.lang.String</attribute-classification>
      <type>oracle.toplink.mappings.DirectToFieldMapping</type>
   </database-mapping>

and:

<descriptor>
   <java-class>BusinessProcess.Actor</java-class>
   <tables>
      <table>ACTOR</table>
   </tables>
   <primary-key-fields>
      <field>ACTOR.ID</field>
      <field>ACTOR.PROGRAM_ID</field>
      <field>ACTOR.PROGRAM_TYPE</field>
   </primary-key-fields>

However, the recommended approach is to work from the OracleAS TopLink Mapping Workbench.

Useful attributes on foreign reference mappings (one-to-one, one-to-many) include:

  • <privately-owned>false/true

    If a relationship is privately owned, that means that any target rows are deleted whenever any source rows are deleted.

    This is important for one-to-many relationships because, if you remove Dept without first deleting its Emp rows, you get a 'child records found' constraint exception.

    If you set privately-owned to true, the database adapter automatically deletes child records before deleting source rows (see the SQL sketch after this list). In XML everything is assumed to be privately owned; therefore, this tag is set to true by default.

  • <uses-batch-reading>false/true and <uses-joining>false/true

    There are two key optimizations in relation to reading rows with detail rows from the database.

    The following shows the series of selects that OracleAS TopLink uses to read two department objects (1 and 2), and their employees:

    Unoptimized:

    SELECT DEPT_COLUMNS FROM DEPT WHERE (subQuery)
    SELECT EMP_COLUMNS FROM EMP WHERE (DEPTID = 1)
    SELECT EMP_COLUMNS FROM EMP WHERE (DEPTID = 2)
    
    

    Batch Reading:

    SELECT DEPT_COLUMNS FROM DEPT WHERE (subQuery)
    SELECT EMP_COLUMNS FROM EMP e, DEPT d WHERE ((subQuery) AND (e.DEPTID = d.DEPTID))
    
    

    Joined Reading:

    SELECT DEPT_COLUMNS, EMP_COLUMNS FROM DEPT d, EMP e WHERE ((subQuery) AND (e.DEPTID = d.DEPTID))
    
    

    Joined reading appears to be the more advanced of the two, but it currently works only for one-to-one mappings, and the detail record cannot be null because the join is not an outer join.

    Therefore, by default, batch reading is enabled, but not joined reading. This can easily be reversed to improve performance.

    If you specify raw SQL for a query, that query cannot be a batched or joined read. To use batched or joined reading, you must not use raw SQL.
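
The following SQL sketch, referenced from the privately-owned bullet above, shows the deletion order implied by setting privately-owned to true for the DEPT and EMP tables used in the reading examples; the DEPTID value is illustrative.

    -- Child rows are deleted first, then the parent row, so no
    -- 'child records found' constraint exception is raised
    DELETE FROM EMP WHERE DEPTID = 1;
    DELETE FROM DEPT WHERE DEPTID = 1;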

You can set other properties in toplink_mappings.xml.

The Service Definition (WSDL)

The WSDL generated by the Adapter Configuration Wizard defines the adapter service. This WSDL specifies the various operations exposed by the service. Table 4-10 specifies the operations that are generated based on your selection in the wizard.

Table 4-10 WSDL Operations Generated by the Adapter Configuration Wizard

Adapter Configuration Wizard Selection Generated WSDL Operation

Insert or Update

insert, update, merge, write, queryByExample

Delete

delete, queryByExample

Select

serviceNameSelect, queryByExample

Poll for New or Changed Records in a Table

receive


Of the preceding operations, receive is associated with a BPEL receive activity, whereas the rest are associated with a BPEL invoke activity. See Chapter 4, "SQL Operations as Web Services" for more information on the preceding operations.

This section discusses the database adapter-specific parameters in the generated WSDL. This is intended for advanced users who want information about all the parameters in the generated WSDL.

A given database adapter service is meant for either continuous polling of a data source (translates to a JCA Activation) or for performing a one-time DML operation (translates to a JCA Interaction). In the continuous polling case, the WSDL contains only one receive operation with a corresponding activation spec defined in the binding section. In the one-time DML operation case, the WSDL contains multiple operations, all of which have a corresponding interaction spec defined in the binding section.

Table 4-11 specifies the JCA Activation/Interaction spec associated with each of the preceding operations:

Table 4-11 Operation and JCA Activation/Interaction Spec

WSDL Operation JCA Activation/Interaction Spec

insert, update, merge, write, delete

oracle.tip.adapter.db.DBWriteInteractionSpec

select, queryByExample

oracle.tip.adapter.db.DBReadInteractionSpec

receive

oracle.tip.adapter.db.DBActivationSpec


DBWriteInteractionSpec

The following code example shows the binding section corresponding to the movie service to write to the Movies table:

    <binding name="movie_binding" type="tns:movie_ptt">
        <jca:binding />
        <operation name="merge">
            <jca:operation
                InteractionSpec="oracle.tip.adapter.db.DBWriteInteractionSpec"
                DescriptorName="BPELProcess1.Movies"
                DmlType="merge"
                MappingsMetaDataURL="toplink_mappings.xml" />
            <input/>
        </operation>
        <operation name="insert">
            <jca:operation
                InteractionSpec="oracle.tip.adapter.db.DBWriteInteractionSpec"
                DescriptorName="BPELProcess1.Movies"
                DmlType="insert"
                MappingsMetaDataURL="toplink_mappings.xml" />
            <input/>
        </operation>
        <operation name="update">
            <jca:operation
                InteractionSpec="oracle.tip.adapter.db.DBWriteInteractionSpec"
                DescriptorName="BPELProcess1.Movies"
                DmlType="update"
                MappingsMetaDataURL="toplink_mappings.xml" />
            <input/>
        </operation>
        <operation name="write">
            <jca:operation
                InteractionSpec="oracle.tip.adapter.db.DBWriteInteractionSpec"
                DescriptorName="BPELProcess1.Movies"
                DmlType="write"
                MappingsMetaDataURL="toplink_mappings.xml" />
            <input/>
        </operation>
        <operation name="delete">
            <jca:operation
                InteractionSpec="oracle.tip.adapter.db.DBWriteInteractionSpec"
                DescriptorName="BPELProcess1.Movies"
                DmlType="delete"
                MappingsMetaDataURL="toplink_mappings.xml" />
            <input/>
        </operation>
    </binding>

Table 4-12 describes the DBWriteInteractionSpec parameters:

Table 4-12 DBWriteInteractionSpec Parameters

Parameter Description Mechanism to Update

DescriptorName

Indirect reference to the root database table that is being written to

Wizard updates automatically. Do not modify this manually.

DmlType

The DML type of the operation (insert, update, merge, write)

Wizard updates automatically. Do not modify this manually.

MappingsMetaDataURL

Reference to file containing relational-to-XML mappings (toplink_mappings.xml)

Wizard updates automatically. Do not modify this manually.


DBReadInteractionSpec

The following code example corresponds to the movie service to query the Movies table:

 <binding name="movie_binding" type="tns:movie_ptt">
        <jca:binding />
        <operation name="movieSelect">
            <jca:operation
                InteractionSpec="oracle.tip.adapter.db.DBReadInteractionSpec"
                DescriptorName="BPELProcess1.Movies"
                QueryName="movieSelect"
                MappingsMetaDataURL="toplink_mappings.xml" />
            <input/>
        </operation>
        <operation name="queryByExample">
            <jca:operation
                InteractionSpec="oracle.tip.adapter.db.DBReadInteractionSpec"
                DescriptorName="BPELProcess1.Movies"
                IsQueryByExample="true"
                MappingsMetaDataURL="toplink_mappings.xml" />
            <input/>
        </operation>
    </binding>

Table 4-13 describes the DBReadInteractionSpec parameters:

Table 4-13 DBReadInteractionSpec Parameters

Parameter Description Mechanism to Update

DescriptorName

Indirect reference to the root database table that is being queried

Wizard updates automatically. Do not modify this manually.

QueryName

Reference to the SELECT query inside the relational-to-XML mappings file

Wizard updates automatically. Do not modify this manually.

IsQueryByExample

Indicates if this query is a queryByExample or not

Wizard updates automatically. Do not modify this manually. This parameter is needed for queryByExample only.

MappingsMetaDataURL

Reference to file containing relational-to-XML mappings (toplink_mappings.xml)

Wizard updates automatically. Do not modify this manually.


DBActivationSpec

The following code example shows the binding section corresponding to the MovieFetch service to poll the Movies table using DeletePollingStrategy:

    <binding name="MovieFetch_binding" type="tns:MovieFetch_ptt">
        <pc:inbound_binding/>
        <operation name="receive">
            <jca:operation
                ActivationSpec="oracle.tip.adapter.db.DBActivationSpec"
                DescriptorName="BPELProcess1.Movies"
                QueryName="MovieFetch"
                PollingStrategyName="DeletePollingStrategy"
                MaxRaiseSize="1"
                MaxTransactionSize="unlimited"
                PollingInterval="5"
                MappingsMetaDataURL="toplink_mappings.xml" />
        <input/>
        </operation>
    </binding>

Table 4-14 describes the DBActivationSpec parameters:

Table 4-14 DBActivationSpec Parameters

Parameter Description Mechanism to Update

DescriptorName

Indirect reference to the root database table that is being queried

Wizard updates automatically. Do not modify this manually.

QueryName

Reference to the SELECT query inside the relational-to-XML mappings file

Wizard updates automatically. Do not modify this manually.

PollingStrategyName

Indicates the polling strategy to be used

Wizard updates automatically. Do not modify this manually.

PollingInterval

Indicates how often to poll the root database table for new events (in seconds)

Wizard updates automatically. Do not modify this manually.

MaxRaiseSize

Indicates the maximum number of database records that can be raised at a time to the BPEL engine

Modify manually in the generated WSDL.

MaxTransactionSize

Indicates the maximum number of rows to process as part of one database transaction

Modify manually in the generated WSDL.

MappingsMetaDataURL

Reference to file containing relational-to-XML mappings (toplink_mappings.xml)

Wizard updates automatically. Do not modify this manually.


The following code example is the binding section corresponding to the MovieFetch service to poll the Movies table using LogicalDeletePollingStrategy:

    <binding name="PollingLogicalDeleteService_binding"
             type="tns:PollingLogicalDeleteService_ptt">
        <pc:inbound_binding/>
        <operation name="receive">
            <jca:operation
                ActivationSpec="oracle.tip.adapter.db.DBActivationSpec"
                DescriptorName="PollingLogicalDeleteStrategy.Movies"
                QueryName="PollingLogicalDeleteService"
                PollingStrategyName="LogicalDeletePollingStrategy"
                MarkReadFieldName="DELETED"
                MarkReadValue="TRUE"
                MarkReservedValue="MINE"
                MarkUnreadValue="FALSE"
                MaxRaiseSize="1"
                MaxTransactionSize="unlimited"
                PollingInterval="10"
                MappingsMetaDataURL="toplink_mappings.xml" />
        <input/>
        </operation>
    </binding>

Table 4-15 describes all of the additional DBActivationSpec parameters for LogicalDeletePollingStrategy:

Table 4-15 DBActivationSpec Parameters for LogicalDeletePollingStrategy

Parameter Description Mechanism to Update

MarkReadFieldName

Specifies the database column to use to mark the row as read

Wizard updates automatically. Do not modify this manually.

MarkReadValue

Specifies the value to which the database column is set to mark the row as read

Wizard updates automatically. Do not modify this manually.

MarkReservedValue

Specifies the value to which the database column is set to mark the row as reserved. This parameter is optional. You can use it when multiple adapter instances are providing the same database adapter service.

Wizard updates automatically. Do not modify this manually.

MarkUnreadValue

Specifies the value to which the database column is set to mark the row as unread. This parameter is optional. Use it when you want to indicate specific rows that the database adapter must process.

Wizard updates automatically. Do not modify this manually.


The following code example shows the binding section corresponding to the MovieFetch service to poll the Movies table using SequencingPollingStrategy:

    <binding name="PollingLastReadIdStrategyService_binding"
             type="tns:PollingLastReadIdStrategyService_ptt">
        <pc:inbound_binding/>
        <operation name="receive">
            <jca:operation
                ActivationSpec="oracle.tip.adapter.db.DBActivationSpec"
                DescriptorName="PollingLastReadIdStrategy.Movies"
                QueryName="PollingLastReadIdStrategyService"
                PollingStrategyName="SequencingPollingStrategy"
                SequencingFieldName="SEQUENCENO"
                SequencingTableNameFieldValue="MOVIES"
                SequencingTableName="PC_SEQUENCING"
                SequencingTableNameFieldName="TABLE_NAME"
                SequencingTableValueFieldName="LAST_READ_ID"
                MaxRaiseSize="1"
                MaxTransactionSize="unlimited"
                PollingInterval="10"
                MappingsMetaDataURL="toplink_mappings.xml" />
        <input/>
        </operation>
    </binding>

Table 4-16 describes all of the additional DBActivationSpec parameters for SequencingPollingStrategy:

Table 4-16 DBActivationSpec Parameters for SequencingPollingStrategy

Parameter Description Mechanism to update

SequencingFieldName

Specifies the database column that is monotonically increasing

Wizard updates automatically. Do not modify this manually.

SequencingFieldType

Specifies the type of the database column that is monotonically increasing. This parameter is optional. Use it if the type is not NUMBER.

Wizard updates automatically. Do not modify this manually.

SequencingTableNameFieldValue

Specifies the root database table for this polling query

Wizard updates automatically. Do not modify this manually.

SequencingTableName

Name of the database table that is serving as the helper table

Wizard updates automatically. Do not modify this manually.

SequencingTableNameFieldName

Specifies the database column in the helper table that is used to store the root database table name

Wizard updates automatically. Do not modify this manually.

SequencingTableValueFieldName

Specifies the database column in the helper table that is used to store the sequence number of the last processed row in the root database table name

Wizard updates automatically. Do not modify this manually.


See Chapter4, "Deployment" for details about the service section of the WSDL.

XML Schema Definition (XSD)

From a database schema, the wizard generates an XML schema representation of that object. This schema is used by the BPEL process.

For example, from the table named Movies, the following is generated:

<?xml version = '1.0' encoding = 'UTF-8'?>
<xs:schema targetNamespace="http://xmlns.oracle.com/pcbpel/adapter/db/top/SelectAllByTitle" xmlns="http://xmlns.oracle.com/pcbpel/adapter/db/top/SelectAllByTitle" elementFormDefault="unqualified" attributeFormDefault="unqualified"
 xmlns:xs="http://www.w3.org/2001/XMLSchema">
   <xs:element name="MoviesCollection" type="MoviesCollection"/>
   <xs:element name="Movies" type="Movies"/>
   <xs:complexType name="MoviesCollection">
      <xs:sequence>
         <xs:element name="Movies" type="Movies" minOccurs="0"
                     maxOccurs="unbounded"/>
      </xs:sequence>
   </xs:complexType>
   <xs:complexType name="Movies">
      <xs:sequence>
         <xs:element name="director" type="xs:string" minOccurs="0"
                     nillable="true"/>
         <xs:element name="genre" type="xs:string" minOccurs="0" nillable="true"/>
         <xs:element name="rated" type="xs:string" minOccurs="0" nillable="true"/>
         <xs:element name="rating" type="xs:string" minOccurs="0"
                     nillable="true"/>
         <xs:element name="releaseDate" type="xs:dateTime" minOccurs="0"
                     nillable="true"/>
         <xs:element name="runTime" type="xs:double" minOccurs="0"
                     nillable="true"/>
         <xs:element name="starring" type="xs:string" minOccurs="0"
                     nillable="true"/>         <xs:element name="status" type="xs:string" minOccurs="0"
                     nillable="true"/>
         <xs:element name="synopsis" type="xs:string" minOccurs="0"
                     nillable="true"/>
         <xs:element name="title" type="xs:string"/>
         <xs:element name="totalGross" type="xs:double" minOccurs="0"
                     nillable="true"/>
         <xs:element name="viewerRating" type="xs:string" minOccurs="0"
                     nillable="true"/>
      </xs:sequence>
   </xs:complexType>
   <xs:element name="findAllInputParameters" type="findAll"/>
   <xs:complexType name="findAll">
      <xs:sequence/>
   </xs:complexType>
   <xs:element name="SelectAllByTitleServiceSelect_titleInputParameters" type="SelectAllByTitleServiceSelect_title"/>
   <xs:complexType name="SelectAllByTitleServiceSelect_title">
      <xs:sequence>
         <xs:element name="title" type="xs:string" minOccurs="1" maxOccurs="1"/>
      </xs:sequence>
   </xs:complexType>
</xs:schema>

This is a generated file. Changes to this file do not affect the behavior of the adapter. It is a declaration of the XML file that the database adapter produces and consumes.

You may need to modify the XSD file if you update the underlying toplink_mappings.xml. In that case, regenerate both files by rerunning the Adapter Configuration Wizard in edit mode.

The generated XSD flags all elements as optional with minOccurs=0, except for the primary key attributes, which are mandatory.


Note:

Do not manually modify the XSD file to configure the database adapter.

Deployment

After you define the database adapter service and complete the design of your BPEL process, you compile and deploy the process to Oracle BPEL Server. In the Applications Navigator of JDeveloper BPEL Designer, right-click the BPEL project and select Deploy. See Chapter 19, "BPEL Process Deployment and Domain Management" in Oracle BPEL Process Manager Developer's Guide for more information about deployment.

Design-Time and Run-time Connection Information

During design time, you specified connection information in the Database Server JNDI Name field of the Adapter Configuration Wizard, as shown in Figure 4-2, "Adapter Configuration Wizard: Service Connection". When a WSDL file is generated as part of creating a BPEL process, this connection information is specified in the WSDL.

At run time, if you want to change databases, for example, from a design-time or development database to a test or production database, then you no longer put the connection information in the WSDL. Instead, you specify the connection information in either the oc4j-ra.xml file or the data-sources.xml file. The following sections describe how to specify connection information:

See the following sections for other connection information:

Specifying Connection Information in the WSDL File

Use the WSDL file for connection information (specified in the mcf.* parameters) only if your design-time and run-time databases are the same and if you are not working in a production environment. (This is referred to as nonmanaged mode.) The WSDL is provided as a convenience for development and test purposes, but is not suitable for production. The database adapter uses connection information in the WSDL at run time if you have not created the corresponding JNDI entry in the oc4j-ra.xml file.

Example 4-1 shows database connection information specified in the mcf.* properties of the WSDL.

Example 4-1 WSDL Code Example Showing Database Connection Information in the mcf.* Properties

<!-- Your runtime connection is declared in 
J2EE_HOME/application-deployments/default/DbAdapter/oc4j-ra.xml.
These 'mcf' properties here are from your design time connection and save you from
having to edit that file and restart the application server if eis/DB/scott is
missing.
These 'mcf' properties are safe to remove.
-->
<service name="get">
  <port name="get_pt" binding="tns:get_binding">
    <jca:address location="eis/DB/scott"
      UIConnectionName="scott"
      ManagedConnectionFactory="oracle.tip.adapter.db.DBManagedConnectionFactory"
      mcf.DriverClassName="oracle.jdbc.driver.OracleDriver"
      mcf.PlatformClassName="oracle.toplink.oraclespecific.Oracle9Platform"
      mcf.ConnectionString="jdbc:oracle:thin:@mypc.home.com:1521:orcl"
      mcf.UserName="scott"
      mcf.Password="7347B141D0FBCEA077C118A5138D02BE"
    />
  </port>
</service>

Note:

When the database adapter uses connection information specified in the WSDL, you have no connection pooling.

Specifying Connection Information in the oc4j-ra.xml File

Use the oc4j-ra.xml file for connection information when your design-time and run-time databases are different. When you specify database connection information in the oc4j-ra.xml file, the database adapter does not use connection information from the WSDL. This is local connection pooling information, as opposed to pointing to an external connection pool. This is also referred to as managed mode using TopLink connection pooling.

Example 4-2 shows database connection information in the oc4j-ra.xml file.

Example 4-2 oc4j-ra.xml Code Example Showing Database Connection Information

<connector-factory location="eis/DB/scott" connector-name="Database Adapter">
<config-property name="driverClassName" value="oracle.jdbc.driver.OracleDriver"/>
<config-property name="connectionString"
                 value="jdbc:oracle:thin:@localhost:1521:orcl"/>
<config-property name="userName" value="scott"/>
<config-property name="password" value="tiger"/>
<config-property name="minConnections" value="5"/>
<config-property name="maxConnections" value="5"/>
<config-property name="minReadConnections" value="1"/>
<config-property name="maxReadConnections" value="1"/>
<config-property name="usesExternalConnectionPooling" value="false"/>
<config-property name="dataSourceName" value=""/>
<config-property name="usesExternalTransactionController" value="false"/>
<config-property name="platformClassName"
                 value="oracle.toplink.internal.databaseaccess.Oracle9Platform"/>
<config-property name="usesNativeSequencing" value="true"/>
<config-property name="sequencePreallocationSize" value="50"/>
<config-property name="tableQualifier" value=""/>
</connector-factory>

For the Oracle BPEL Process Manager for Developers installation, oc4j-ra.xml is located at

Oracle_Home\integration\orabpel\system\appserver\oc4j\j2ee\home\
application-deployments\default\DbAdapter\oc4j-ra.xml

For the Oracle BPEL Process Manager for OracleAS Middle Tier installation, oc4j-ra.xml is located at

Oracle_Home\j2ee\OC4J_BPEL\application-deployments\default\DBAdapter\oc4j-ra.xml

Connection information in the oc4j-ra.xml file is not generated by the Adapter Configuration Wizard. You must manually update this file and restart Oracle BPEL Server for updates to oc4j-ra.xml to take effect.


Note:

When the database adapter uses connection information specified in the oc4j-ra.xml file, you are using TopLink connection pooling.

See the following topics for more information about the oc4j-ra.xml file:

Specifying Connection Information in the data-sources.xml File (Recommended)

Use the data-sources.xml file for connection information for all production scenarios, particularly high availability environments. Oracle recommends that you create an entry for connection information in the data-sources.xml file. This means pointing to a connection pool managed by the application server, which is preferred over native (TopLink) connection pooling.

Example 4-3 shows database connection information in the data-sources.xml file.

Example 4-3 data-sources.xml Code Example Showing Database Connection Information

<data-source class="com.evermind.sql.DriverManagerDataSource"
name="LocalDataSource"
location="loc/LocalDataSource"
xa-location="LocalDataSource"
ejb-location="jdbc/LocalDataSource"
connection-driver="oracle.jdbc.OracleDriver"
max-connections="50"
min-connections="10"
connection-retry-interval="30"
max-connect-attempts="10"
url="jdbc:oracle:thin:scott/tiger@localhost:1521:orcl"/>

Your entry in the oc4j-ra.xml file becomes a pointer to this data source.

Example 4-4 shows the oc4j-ra.xml file pointing to connection information in the data-sources.xml file (Example 4-3) when non-JTA/non-XA pooling is used.

Example 4-4 oc4j-ra.xml Code Example Pointing to Connection Information in the data-sources.xml File (non-JTA/non-XA Pooling)

[connector-factory location="eis/DB/loc/LocalDataSource" connector-name="Database Adapter"]
[config-property name="usesExternalConnectionPooling" value="true"/]
[config-property name="dataSourceName" value="loc/LocalDataSource"/]
[config-property name="usesExternalTransactionController" value="false"/]
[config-property name="platformClassName" value="oracle.toplink.internal.databaseaccess.Oracle9Platform"/]
[config-property name="usesNativeSequencing" value="true"/]
[config-property name="sequencePreallocationSize" value="50"/]
[/connector-factory]

In Example 4-4, dataSourceName matches the location attribute in data-sources.xml, and usesExternalTransactionController is false.

Example 4-5 shows the oc4j-ra.xml file pointing to connection information in the data-sources.xml file (Example 4-3) when JTA/XA pooling is used.

Example 4-5 oc4j-ra.xml Code Example Pointing to Connection Information in the data-sources.xml File (JTA/XA Pooling)

[connector-factory location="eis/DB/LocalDataSource" connector-name="Database Adapter"]
[config-property name="usesExternalConnectionPooling" value="true"/]
[config-property name="dataSourceName" value="jdbc/LocalDataSource"/]
[config-property name="usesExternalTransactionController" value="true"/]
[config-property name="platformClassName" value="oracle.toplink.internal.databaseaccess.Oracle9Platform"/]
[config-property name="usesNativeSequencing" value="true"/]
[config-property name="sequencePreallocationSize" value="50"/]
[/connector-factory]

In Example 4-5, dataSourceName matches the ejb-location attribute in data-sources.xml, and usesExternalTransactionController is true.


Note:

When the database adapter uses connection information specified in the data-sources.xml file, you are using application server connection pooling.

How the Database Adapter Gets Connection Information

Figure 4-24 shows how the database adapter gets connection information from the three possible sources: the WSDL file, the oc4j-ra.xml file, or the data-sources.xml file, as described in the preceding sections.

Figure 4-24 How the Database Adapter Gets Connection Information

Shows how database adapter gets connection information
Description of "Figure 4-24 How the Database Adapter Gets Connection Information"

Properties of the oc4j-ra.xml File

The properties specified in the connector-factory entry are translated into roughly 40 properties on an underlying OracleAS TopLink login object. This is the equivalent of the OracleAS TopLink sessions.xml file.

Table 4-17, Table 4-18, and Table 4-19 show the configuration properties available to configure the run-time connection.

Table 4-17 Run-Time Connection Configuration Properties

Parameter Description Mechanism to Update

driverClassName

Name of the Java class for the JDBC driver being used

See Chapter4, "Third-Party Database Support" for more information.

connectionString

JDBC Connection String

See Chapter4, "Third-Party Database Support" for more information.

userName

Database login name

-

password

Database login password

-


Table 4-18 OracleAS TopLink Connection Pooling for the Database Adapter

Parameter Description Mechanism to Update

minConnections

Minimum number of connections in the pool

This is to use the adapter's local Toplink connection pool.

maxConnections

Maximum number of connections in the pool

This is to use the adapter's local Toplink connection pool.

minReadConnections

Minimum number of read-only connections in the pool

This is to use the adapter's local Toplink connection pool.

maxReadConnections

Maximum number of read-only connections in the pool

This is to use the adapter's local Toplink connection pool.


Table 4-19 Parameters Used in Application Server Connection Pooling

Parameter Description Mechanism to Update

dataSourceName

Points to the data source configured in the application server

Use the application server connection pooling, for example: java:comp/env/jdbc/myDataSourceName

usesExternalConnectionPooling

Use the application server connection pooling

Use the application server connection pooling.

usesExternalTransactionController

Indicates if the application server's connection pool is managed or not

Use the application server connection pooling.

platformClassName

Indicates the database platform (see Table 4-20)

-

usesNativeSequencing

Indicates whether or not to use native database sequencing

-

sequencePreallocationSize

This corresponds to the INCREMENT BY value on the native database sequence

-

tableQualifier

Schema name to be used if the table names are not qualified in toplink_mappings.xml

Obsolete, because the wizard qualifies table names as it generates them.


If the application server connection pool is used, then most connection properties in this class are managed elsewhere.

Table 4-20 shows the advanced properties, which are database platform variables. Set the DatabasePlatform name to one of the following variables.

Table 4-20 Database Platform Names

Database PlatformClassName

Oracle9+ (including 10g)

oracle.toplink.oraclespecific.Oracle9Platform

Oracle8

oracle.toplink.internal.databaseaccess.Oracle8Platform

Oracle7

oracle.toplink.internal.databaseaccess.OraclePlatform

DB2

oracle.toplink.internal.databaseaccess.DB2Platform

Informix

oracle.toplink.internal.databaseaccess.InformixPlatform

Sybase

oracle.toplink.internal.databaseaccess.SybasePlatform

SQLServer

oracle.toplink.internal.databaseaccess.SQLServerPlatform

Any other database

oracle.toplink.internal.databaseaccess.DatabasePlatform


Advanced Properties of the oc4j-ra.xml File

The following properties are configurable by using the managed connection factory entry in the oc4j-ra.xml file:

String connectionString
String userName
String password
String encryptionClassName
Integer minConnections
Integer maxConnections
Boolean useReadConnectionPool
Integer minReadConnections
Integer maxReadConnections
String dataSourceName
String driverClassName
Integer cursorCode
String databaseName
String driverURLHeader
Integer maxBatchWritingSize
String platformClassName
String sequenceCounterFieldName
String sequenceNameFieldName
Integer sequencePreallocationSize
String sequenceTableName
String serverName
Boolean shouldBindAllParameters
Boolean shouldCacheAllStatements
Boolean shouldIgnoreCaseOnFieldComparisons
Boolean shouldForceFieldNamesToUpperCase
Boolean shouldOptimizeDataConversion
Boolean shouldTrimStrings
Integer statementCacheSize
Integer stringBindingSize
String tableQualifier
Integer transactionIsolation
Boolean usesBatchWriting
Boolean usesByteArrayBinding
Boolean usesDirectDriverConnect
Boolean usesExternalConnectionPooling
Boolean usesExternalTransactionController
Boolean usesJDBCBatchWriting
Boolean usesNativeSequencing
Boolean usesNativeSQL
Boolean usesStreamsForBinding
Boolean usesStringBinding

The preceding properties correspond to properties of the oracle.toplink.sessions.DatabaseLogin object.

See OracleAS TopLink API reference information on DBConnectionFactory Javadoc and DatabaseLogin Javadoc at

http://download-east.oracle.com/docs/cd/B10464_02/web.904/b10491/index.html

To configure any of the preceding properties:

  1. Add the following to the ra.xml file:

    <config-property>
        <config-property-name>usesJDBCBatchWriting</config-property-name>
        <config-property-type>java.lang.Boolean</config-property-type>
        <config-property-value>true</config-property-value>
      </config-property>
    
    

    For Oracle BPEL Process Manager for Developers, ra.xml is at

    Oracle_Home\integration\orabpel\system\appserver\oc4j\j2ee\home\connectors\
    DbAdapter\DbAdapter\META-INF\ra.xml
    
    

    For Oracle BPEL Process Manager for OracleAS Middle Tier, ra.xml is at

    Oracle_Home\j2ee\OC4J_BPEL\connectors\DbAdapter\DbAdapter\META-INF\ra.xml
    
    
  2. Add the following to the oc4j-ra.xml file:

    <config-property name="usesJDBCBatchWriting" value="true"/>

  3. Restart Oracle BPEL Server for the changes to take effect.

You can also update the default oc4j-ra.xml and ra.xml files before you deploy the database adapter. This way, you deploy once and do not need to restart the application server.

Performance

The database adapter is preconfigured with many performance optimizations. You can, however, make some changes to reduce the number of round trips to the database, as described in the following sections.

Outbound Write: Should You Use Merge, Write, or Insert?

If you run through the Adapter Configuration Wizard and select Insert or Update, you get a WSDL with the following operations: merge (default), insert, update, and write. The latter three call TopLink queries of the same name, avoiding advanced functionality that you may not need for straightforward scenarios. You can make the change by double-clicking an invoke activity and selecting a different operation. The merge is the most expensive, followed by the write and then the insert.

The merge first does a read of every element and calculates what has changed. If a row has not changed, it is not updated. The extra reads (for existence) and the complex change calculation add considerable overhead. For simple cases, this can be safely avoided; that is, if you changed only two columns, it does not matter if you update all five anyway. For complex cases, however, the opposite is true. If you have a master record with 100 details, but you changed only two columns on the master and two on one detail, the merge updates those four columns on two rows. A write does a write of every column in all 101 rows. Also, the merge may appear slower, but can actually relieve pressure on the database by minimizing the number of writes.
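
As an illustration (the table and column names here are hypothetical), assume a DEPT master with 100 EMP details where two columns changed on the master and two on one detail:

    -- merge: after reading the existing rows, only the changed columns
    -- on the two changed rows are written
    UPDATE DEPT SET NAME = ?, LOCATION = ? WHERE DEPTID = ?;
    UPDATE EMP SET SALARY = ?, TITLE = ? WHERE EMPID = ?;

    -- write: every mapped column on all 101 rows is written, with no prior read
    UPDATE DEPT SET NAME = ?, LOCATION = ?, BUDGET = ?, REGION = ? WHERE DEPTID = ?;
    UPDATE EMP SET FNAME = ?, LNAME = ?, SALARY = ?, TITLE = ? WHERE EMPID = ?;
    -- ...and so on for each of the remaining 99 EMP rows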

The insert operation is the most performant because it uses no existence check and has no extra overhead. You have no reads, only writes. If you know that you will do an insert most of the time, use insert, and catch a unique key constraint SQL exception inside your BPEL process, which can then perform a merge or update instead. Use merge if you have a mix of new and existing objects (for instance, a master row containing several new details). The update is similar to insert.

To monitor performance, you can enable debug logging and then watch the SQL for various inputs.

The OracleAS TopLink Cache: When Should You Use It?

Caching is an important performance feature of OracleAS TopLink. However, issues with stale data can be difficult to manage. By default, the database adapter uses a WeakIdentityMap, meaning a cache is used only to resolve cyclical references, and entries are quickly reclaimed by the Java virtual machine. If you have no cycles (and you ideally should not for XML), you can switch to a NoIdentityMap. The TopLink default is a SoftCacheWeakIdentityMap. This means that the most frequently used rows in the database are more likely to appear already in the cache.

For a knowledge article on caching, go to

http://www.oracle.com/technology/tech/java/newsletter/november04.html

Existence Checking

One method of performance optimization for merge is to eliminate the check database existence checking (that is, to use check cache instead). The existence check is marginally better if the row is new, because only the primary key is returned, not the entire row. But, due to the nature of merge, if the existence check passes, the entire row must be read anyway to calculate what changed. Therefore, for every row to be updated, you see one extra round trip to the database during merge.

It is always safe to use check cache on the root descriptor/table and any child tables if A is master and B is a privately owned child. If A does not exist, B cannot exist. And if A exists, all its Bs are loaded as part of reading A; therefore, check cache works.
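
A sketch of that extra round trip, using the hypothetical DEPT table: with check database, merge first issues a primary-key existence query for every row, and if the row exists it must still read the full row to compute the change set. With check cache, the separate existence query is skipped.

    -- check database: one existence query per row before the write
    SELECT DEPTID FROM DEPT WHERE DEPTID = ?;
    -- if the row exists, the full row is read anyway to calculate what changed
    SELECT DEPTID, NAME, LOCATION FROM DEPT WHERE DEPTID = ?;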

Inbound Polling: maxRaiseSize

This parameter indicates the maximum number of XML records that can be raised at a time to the BPEL engine. For example, if you set maxRaiseSize = 10, then 10 database records are raised at one time. On read (inbound) you can set maxRaiseSize = 0 (unbounded), meaning that if you read 1000 rows, you will create one XML with 1000 elements, which is passed through a single Oracle BPEL Process Manager instance. A merge on the outbound side can then take all 1000 in one group and write them all at once with batch writing.

Use the maxRaiseSize parameter for publishing large payloads.

Inbound Polling: Choosing a Polling Strategy

Your choice of polling strategy matters too. Avoid the delete polling strategy because it must individually delete each row. The sequencing polling strategy can destroy 1000 rows with a single update to a helper table.
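
Sketched in SQL (assuming TITLE is the primary key of the MOVIES table, and reusing the PC_SEQUENCING helper table names from the earlier WSDL example), the difference is:

    -- Delete polling strategy: one DELETE per processed row
    DELETE FROM MOVIES WHERE TITLE = ?;   -- repeated for each of the 1000 rows

    -- Sequencing polling strategy: a single UPDATE to the helper table retires them all
    UPDATE PC_SEQUENCING SET LAST_READ_ID = ? WHERE TABLE_NAME = 'MOVIES';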

Relationship Reading (Batch Attribute and Joined Attribute Reading)

Batch reading of one-to-many and one-to-one relationships is on by default. You can also use joined reading for one-to-one relationships instead, which may offer a slight improvement.

Connection Pooling

You can configure a connection pool if using either the adapter's local connection pool or an application server data source. Creating a database connection is an expensive operation. Ideally you should only exceed the minConnections under heavy loads. If you are consistently using more connections than that at once, then you may spend a lot of time setting up and tearing down connections. The database adapter also has a read connection pool. A read connection is more performant because there is no limit on how many users can use one connection for reading at the same time, a feature that most JDBC drivers support.

Inbound Distributed Polling

The database adapter is designed to scale to the number of unprocessed rows on the database. By default, it is possible to read and process one database row or 10,000 with as little as three round trips to the database. The most expensive operations are limited to a constant number. You can also configure the database adapter for a distributed environment.

Pessimistic Locking

You can set a simple option that enables the database adapter to work safely in a distributed environment by making the first polling query acquire locks, as shown in Figure 4-25. In SQL terms, you are making your first SELECT into a SELECT...FOR UPDATE.

Figure 4-25 Acquiring Locks

Description of Figure 4-25 follows
Description of "Figure 4-25 Acquiring Locks"

The behavior of all polling strategies is as follows:

  1. Read all unprocessed rows.

  2. Process those rows.

  3. Commit.

If any adapter instance performs step 1 while another instance is between steps 1 and 3, then duplicate processing occurs. Acquiring locks on the first operation and releasing them in commit solves this problem, and may naturally order the polling instances.
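
For example, with the logical delete polling query sketched earlier (MOVIES and STATUS are illustrative names, not requirements), the first read under this option becomes something like:

    -- The polling SELECT acquires row locks; a second adapter instance issuing
    -- the same query blocks until the first instance commits, so no row is
    -- processed twice
    SELECT * FROM MOVIES WHERE STATUS = 'UNPROCESSED' FOR UPDATE;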

To enable pessimistic locking, run through the wizard once to create an inbound polling query. In the Applications Navigator window, expand Application Sources, then TopLink, and click TopLink Mappings. In the Structure window, click the table name. In Diagram View, click the following tabs: TopLink Mappings, Queries, Named Queries, Options; then the Advanced… button, and then Pessimistic Locking and Acquire Locks. You see the message, "Set Refresh Identity Map Results?" Because a query that uses pessimistic locking must refresh its identity map results, click OK when you see the message, "Would you like us to set Refresh Identity Map Results and Refresh Remote Identity Map Results to true?" Run the wizard again to regenerate everything. In the new toplink_mappings.xml file, you see something like the following for the query: <lock-mode>1</lock-mode>.

Note the following:

  • The preceding procedure works in conjunction with every polling strategy where the first operation is a read.

  • For the sequencing-based polling strategies, the SELECT FOR UPDATE is applied to the SELECT on the helper table only. The SELECT on the polled table does not acquire locks because you do not have write access to those tables.

  • If an adapter instance fails while polling records, those records are returned to the unprocessed pool (no commit happens).

  • No individual adapter instance is special. In an ideal distributed system, coordination between instances is minimal (here effected with locking). No master acts as a weak link, and every part is identically configured and interchangeable.

  • Other than the SELECT FOR UPDATE, no extra reads or writes are performed by the database adapter in a distributed environment.

Load Balancing: MaxTransactionSize and Pessimistic Locking

After you enable pessimistic locking on a polling query, the maxTransactionSize activation property automatically behaves differently.

Assume that there are 10,000 rows at the start of a polling interval and that maxTransactionSize is 100. In standalone mode, a cursor is used to iteratively read and process 100 rows at a time until all 10,000 have been processed, dividing the work into 10,000 / 100 = 100 sequential transactional units. In a distributed environment, a cursor is also used to read and process the first 100 rows. However, the adapter instance will release the cursor, leaving 9,900 unprocessed rows (or 99 transactional units) for the next polling interval or another adapter instance.

For load balancing purposes, it is dangerous to set the maxTransactionSize too low in a distributed environment (where it becomes a speed limit). It is best to set the maxTransactionSize close to the per CPU throughput of the entire business process. This way, load balancing occurs only when you need it.

Third-Party Database Support

The following section discusses how to connect to third-party databases. You can use one vendor's database for design time and another for run time because BPEL processes are database-platform neutral.

This section contains the following topics:


Note:

To create an Oracle Lite database connection, follow the steps for a third-party JDBC driver (because the existing wizard and libraries for the Oracle Lite database require extra configuration). Table 4-21 provides information for connecting to an Oracle Lite database.

See Appendix A, "Problems Importing Third-Party Database Tables" for more information.

Design Time

The following steps generally apply when you create a database connection using a third-party JDBC driver. For specific third-party databases, see the following topics:

To create a database connection when using a third-party JDBC driver:

  1. Select Connection Navigator from View.

  2. Right-click Database and select New Database Connection.

  3. Click Next in the Welcome window.

  4. Enter a connection name.

  5. Select Third Party JDBC Driver from Connection Type.

  6. Enter your username, password, and role information.

  7. Click New for Driver Class.

  8. Enter the driver name (for example, some.jdbc.Driver) for Driver Class.

    See Table 4-21 for some commonly used driver names.

  9. Click New for Library.

  10. Click Edit to add each JAR file of your driver to Class Path.

  11. Click OK twice to exit the Create Library windows.

  12. Click OK to exit the Register JDBC Driver window.

  13. Enter your connection string name for URL and click Next.

    See Table 4-21 for some commonly used URLs. Also, sample entries appear in the deployment descriptor file (oc4j-ra.xml). See Chapter 4, "Properties of the oc4j-ra.xml File" for more information.

  14. Click Test Connection.

  15. If the connection is successful, click Finish.

Using a Microsoft SQL Server Database

When using a Microsoft SQL Server database, follow the database connection steps in Chapter 4, "Design Time" and use the following information:

MS JDBC Driver

URL: jdbc:microsoft:sqlserver://localhost\NAME:1433;SelectMethod=cursor;databasename=???

Driver Class: com.microsoft.jdbc.sqlserver.SQLServerDriver

Driver Jar: .\SQLServer2000\msbase.jar, msutil.jar, mssqlserver.jar

DataDirect Driver

URL: jdbc:oracle:sqlserver://localhost

Driver Class: com.oracle.ias.jdbc.sqlserver.SQLServerDriver

Driver Jar: .\DataDirect\YMbase.jar, YMoc4j.jar, YMutil.jar, YMsqlserver.jar

Note the following when connecting to a SQL Server database:

  • User name and password

    • SQL Server 2005 installs with Windows authentication as the default. Therefore, you do not log in with a user name and password; rather, your Windows user account either has privilege or does not. JDBC requires you to provide a user name and password.

      According to support.microsoft.com, "Microsoft SQL Server 2000 driver for JDBC does not support connecting by using Windows NT authentication." See

      http://support.microsoft.com/default.aspx?scid=kb;en-us;313100
      
      

      However, the DataDirect driver specification states that it does.

      If you use your Windows user name and password, you may see something like the following:

      [Microsoft][SQLServer 2000 Driver for JDBC][SQLServer]Login failed for user
      'DOMAIN\USER'. The user is not associated with a trusted SQL Server
      connection.[Microsoft][SQLServer 2000 Driver for JDBC]
      An error occured while attempting to log onto the database.
      
      

      You must select mixed mode authentication on a clean installation.

    • On a Microsoft SQL Server 2000 Express Edition installation, the system username is sa and the password is whatever you provide.

  • Connect string

    From the sqlcmd login, you can infer your connect string, as in the following examples:

    Example 1:

    sqlcmd
    1>
    jdbc:microsoft:sqlserver://localhost:1433
    
    

    Example 2:

    sqlcmd -S user.mycompany.com\SQLExpress
    1>
    jdbc:microsoft:sqlserver://user.mycompany.com\SQLExpress:1433
    
    

    Example 3:

    sqlcmd -S user.mycompany.com\SQLExpress -d master
    1>
    jdbc:microsoft:sqlserver://user.mycompany.com\SQLExpress:1433;databasename=
    master
    
    

    A full URL is as follows:

    jdbc:microsoft:sqlserver://serverName[\instanceName]:tcpPort[;SelectMethod=cursor][;databasename=databaseName]
    
    
  • Database name

    If you must explicitly supply the database name, but do not know it, go to

    C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data
    
    

    If you see a file named master.mdf, then one of the database names is master.

  • TCP port

    Make sure that SQL Server Browser is running and that your SQL Server service has TCP/IP enabled and is listening on static port 1433. Disable dynamic ports. In SQL Native Client Configuration/Client Protocols, make sure that TCP/IP is enabled and that the default port is 1433.

  • JDBC drivers

    You must download the JDBC drivers separately. From www.microsoft.com, click Downloads and search on jdbc. You can also try using the DataDirect driver.

Using an IBM DB2 Database

When using an IBM DB2 database, follow the database connection steps in Chapter 4, "Design Time" and use the following information:

DataDirect Driver

URL: jdbc:db2:localhost:NAME

Driver Class: COM.ibm.db2.jdbc.net.DB2Driver

Driver Jar (v8.1): .\IBM-net\db2java_81.zip, db2jcc_81.jar

JT400 Driver

URL: jdbc:as400://hostname;translate binary=true

Driver Class: com.ibm.as400.access.AS400JDBCDriver

Driver Jar: jt400.jar

For correct character set translation, use translate binary=true.

Using a Sybase Database

When using a Sybase database, follow the database connection steps in Chapter 4, "Design Time" and use the following information:

jconn Driver

URL: jdbc:sybase:Tds:localhost:5001/NAME

Driver Class: com.sybase.jdbc2.jdbc.SybDriver

Driver Jar: .\Sybase-jconn\jconn2.jar

DataDirect Driver

URL: jdbc:oracle:sybase://localhost:5001

Driver Class: com.oracle.ias.jdbc.sybase.SybaseDriver

Driver Jar: .\DataDirect\YMbase.jar, YMoc4j.jar, YMutil.jar, YMsybase.jar

Using an InterSystems Caché Database

When using an InterSystems Caché database, follow the database connection steps in Chapter 4, "Design Time" and use the following information:

URL: jdbc:Cache://machinename_running_Cache_DB_Server:1972/Cache_Namespace

Driver Class: com.intersys.jdbc.CacheDriver

Driver Jar: C:\CacheSys\Dev\Java\Lib\CacheDB.jar

The default login is _SYSTEM/sys.

Using a MySQL 4 Database

When using a MySQL 4 database, follow the database connection steps in Chapter 4, "Design Time" and use the following information:

URL: jdbc:mysql://hostname:3306/dbname

Driver Class: com.mysql.jdbc.Driver

Driver Jar: mysql-connector-java-3.1.10-bin.jar

Summary of Third-Party and Oracle Lite Database Connection Information

Table 4-21 summarizes the preceding connection information for common third-party databases and for Oracle Olite. Also see Table 4-20, "Database Platform Names" for PlatformClassName information.

Table 4-21 Information for Connecting to Third-Party Databases and Oracle Olite Database

Microsoft SQL Server

  MS JDBC driver:
    URL: jdbc:microsoft:sqlserver://localhost\NAME:1433;SelectMethod=cursor;databasename=???
    Driver Class: com.microsoft.jdbc.sqlserver.SQLServerDriver
    Driver Jar: .\SQLServer2000\msbase.jar, msutil.jar, mssqlserver.jar

  DataDirect driver:
    URL: jdbc:oracle:sqlserver://localhost
    Driver Class: com.oracle.ias.jdbc.sqlserver.SQLServerDriver
    Driver Jar: .\DataDirect\YMbase.jar, YMoc4j.jar, YMutil.jar, YMsqlserver.jar

IBM DB2

  DataDirect driver:
    URL: jdbc:db2:localhost:NAME
    Driver Class: COM.ibm.db2.jdbc.net.DB2Driver
    Driver Jar (v8.1): .\IBM-net\db2java_81.zip, db2jcc_81.jar

  JT400 driver:
    URL: jdbc:as400://hostname;translate binary=true (for example, jdbc:as400://localhost;translate binary=true)
    Driver Class: com.ibm.as400.access.AS400JDBCDriver
    Driver Jar: jt400.jar

Sybase

  jconn driver:
    URL: jdbc:sybase:Tds:localhost:5001/NAME
    Driver Class: com.sybase.jdbc2.jdbc.SybDriver
    Driver Jar: .\Sybase-jconn\jconn2.jar

  DataDirect driver:
    URL: jdbc:oracle:sybase://localhost:5001
    Driver Class: com.oracle.ias.jdbc.sybase.SybaseDriver
    Driver Jar: .\DataDirect\YMbase.jar, YMoc4j.jar, YMutil.jar, YMsybase.jar

InterSystems Caché

  URL: jdbc:Cache://machinename_running_Cache_DB_Server:1972/Cache_Namespace, where 1972 is the default port (for example, jdbc:Cache://127.0.0.1:1972/Samples)
  Driver Class: com.intersys.jdbc.CacheDriver
  Driver Jar: C:\CacheSys\Dev\Java\Lib\CacheDB.jar

MySQL 4

  URL: jdbc:mysql://hostname:3306/dbname (for example, jdbc:mysql://localhost:3306/test)
  Driver Class: com.mysql.jdbc.Driver
  Driver Jar: mysql-connector-java-3.1.10-bin.jar

Oracle Olite Database

  URL: jdbc:polite4@localhost:100:orabpel
  Driver Class: oracle.lite.poljdbc.POLJDBCDriver
  Driver Jar: Oracle_Home\integration\orabpel\lib\olite40.jar

Run Time

At run time, put the driver JAR files in the application server class path in either of the following ways:

  • Edit the class path in the following files:

    For the Oracle BPEL Process Manager for Developers installation, go to

    Oracle_Home/integration/orabpel/system/appserver/oc4j/j2ee/home/config/
    server.xml
    
    

    For the Oracle BPEL Process Manager for OracleAS Middle Tier installation, go to

    Oracle_Home/j2ee/OC4J_BPEL/config/server.xml
    
    

    Or

  • Drop the JAR files into the following directories:

    For the Oracle BPEL Process Manager for Developers installation, go to

    Oracle_Home/integration/orabpel/system/appserver/oc4j/j2ee/home/applib
    
    

    For the Oracle BPEL Process Manager for OracleAS Middle Tier installation, go to

    Oracle_Home/j2ee/OC4J_BPEL/applib
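
After the driver JAR files are in place, you can optionally confirm that the driver loads and that the URL and credentials are valid from a small standalone Java program, outside of Oracle BPEL Process Manager. The following is a minimal sketch only; the class name ConnectionTest and the URL, user name, and password values are placeholders that you replace with your own (see Table 4-21 for common URL formats). The MS JDBC driver class shown earlier is used purely as an example.

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;

public class ConnectionTest {
    public static void main(String[] args) throws Exception {
        // Placeholder values: substitute the URL, user, and password
        // for your database (see Table 4-21 for common URL formats).
        String url = "jdbc:microsoft:sqlserver://localhost:1433;SelectMethod=cursor";
        String user = "myuser";
        String password = "mypassword";

        // Loading the driver class fails immediately if the driver JAR
        // is not on the class path.
        Class.forName("com.microsoft.jdbc.sqlserver.SQLServerDriver");

        Connection conn = DriverManager.getConnection(url, user, password);
        try {
            DatabaseMetaData meta = conn.getMetaData();
            System.out.println("Connected to " + meta.getDatabaseProductName()
                    + " " + meta.getDatabaseProductVersion());
        } finally {
            conn.close();
        }
    }
}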
    

Stored Procedure and Function Support

This section describes how the database adapter supports the use of stored procedures and functions for Oracle databases only. Stored procedures and functions for non-Oracle databases are not supported out of the box.

This section contains the following topics:

Design Time: Using the Adapter Configuration Wizard

The Adapter Configuration Wizard – Stored Procedures is used to generate an adapter service WSDL and the necessary XSD. The adapter service WSDL encapsulates the underlying stored procedure or function as a Web service with a WSIF JCA binding. The XSD describes the procedure or function, including all the parameters and their types. This XSD provides the definition used to create instance XML that is submitted to the database adapter at run time.

Using Top-Level Standalone APIs

This section describes how to use the wizard with APIs that are not defined in PL/SQL packages. You use the Adapter Configuration Wizard – Stored Procedures to select a procedure or function and generate the XSD. See Chapter 4, "The Adapter Configuration Wizard" if you are not familiar with how to start the wizard.

In the wizard, select Database Adapter, as shown in Figure 4-26.

Figure 4-26 Selecting the Database Adapter in the Adapter Configuration Wizard

Description of Figure 4-26 follows
Description of "Figure 4-26 Selecting the Database Adapter in the Adapter Configuration Wizard"

After entering a service name (for example, ProcedureProc) and an optional description for the service, you associate a connection with the service, as shown in Figure 4-27. You can select an existing connection from the list or create a new connection.

Figure 4-27 Setting the Database Connection in the Adapter Configuration Wizard

Description of Figure 4-27 follows
Description of "Figure 4-27 Setting the Database Connection in the Adapter Configuration Wizard"

For the Operation Type, select Call a Stored Procedure or Function, as shown in Figure 4-28.

Figure 4-28 Calling for a Stored Procedure or Function in the Adapter Configuration Wizard

Description of Figure 4-28 follows
Description of "Figure 4-28 Calling for a Stored Procedure or Function in the Adapter Configuration Wizard"

Next you select the schema and procedure or function. You can select a schema from the list or select <Default Schema>, in which case the schema associated with the connection is used. If you know the procedure name, enter it in the Procedure field. If the procedure is defined inside a package, then you must include the package name, as in EMPLOYEE.GET_NAME.

If you do not know the schema and procedure names, click Browse to access the Stored Procedures window, shown in Figure 4-29.

Figure 4-29 Searching for a Procedure or Function

Description of Figure 4-29 follows
Description of "Figure 4-29 Searching for a Procedure or Function"

Select a schema from the list or select <Default Schema>. The available procedures are displayed in the left window. To search for a particular API in a long list of APIs, enter search criteria in the Search field. For example, to find all APIs that begin with XX, enter XX% and click the Search button. Clicking the Show All button displays all available APIs.

Figure 4-30 shows how you can select the PROC procedure and click the Arguments tab. The Arguments tab displays the parameters of the procedure, including their names, type, mode (IN, IN/OUT or OUT) and the numeric position of the parameter in the definition of the procedure.

Figure 4-30 Viewing the Arguments of a Selected Procedure

Description of Figure 4-30 follows
Description of "Figure 4-30 Viewing the Arguments of a Selected Procedure"

Figure 4-31 shows how the Source tab displays the code that implements the procedure. Text that matches the name of the procedure is highlighted.

Figure 4-31 Viewing the Source Code of a Selected Procedure

Description of Figure 4-31 follows
Description of "Figure 4-31 Viewing the Source Code of a Selected Procedure"

After you select a procedure or function and click OK, information about the API is displayed, as shown in Figure 4-32. Use Back or Browse to make revisions, or Next followed by Finish to conclude.

Figure 4-32 Viewing Procedure or Function Details in the Adapter Configuration Wizard

Description of Figure 4-32 follows
Description of "Figure 4-32 Viewing Procedure or Function Details in the Adapter Configuration Wizard"

When you have finished using the Adapter Configuration Wizard, two files are added to the existing project: servicename.wsdl (for example, ProcedureProc.wsdl) and the generated XSD. The generated XSD file is named schema_package_procedurename.xsd. In this case, SCOTT_PROC.xsd is the name of the generated XSD file.

Using Packaged APIs and Overloading

Using APIs defined in packages is similar to using standalone APIs. The only difference is that you can expand the package name to see a list of all the APIs defined within the package, as shown in Figure 4-33.

APIs that have the same name but different parameters are called overloaded APIs. As shown in Figure 4-33, the package called PACKAGE has two overloaded procedures called OVERLOAD.

Figure 4-33 A Package with Two Overloaded Procedures

Description of Figure 4-33 follows
Description of "Figure 4-33 A Package with Two Overloaded Procedures"

As Figure 4-34 shows, the code for the entire PL/SQL package is displayed, regardless of which API from the package is selected when you view the Source tab. Text that matches the name of the procedure is highlighted.

Figure 4-34 Viewing the Source Code of an Overloaded Procedure

Description of Figure 4-34 follows
Description of "Figure 4-34 Viewing the Source Code of an Overloaded Procedure"

After you select a procedure or function and click OK, information about the API is displayed, as shown in Figure 4-35. The schema, procedure name, and parameter list are displayed. Note how the procedure name is qualified with the name of the package (PACKAGE.OVERLOAD). Use Back or Browse to make revisions, or Next followed by Finish to conclude.

Figure 4-35 Viewing Procedure or Function Details in the Adapter Configuration Wizard

Description of Figure 4-35 follows
Description of "Figure 4-35 Viewing Procedure or Function Details in the Adapter Configuration Wizard"

When you have finished using the Adapter Configuration Wizard, two files are added to the existing project: Overload.wsdl and SCOTT_PACKAGE_OVERLOAD_2.xsd. The _2 appended after the name of the procedure in the XSD filename differentiates the overloaded APIs.

Design Time: WSDL and XSD Generation

The Adapter Configuration Wizard – Stored Procedures is capable of creating a WSDL and a valid XSD that describes the signature of a stored procedure or function. The following sections describe the relevant structure and content of both the WSDL and the XSD, and their relationship with each other.

The WSDL–XSD Relationship

In the paragraphs that follow, the operation name, ProcedureProc, and procedure name, PROC, are taken from an example cited previously (see Figure 4-32). The generated WSDL imports the XSD.

<types>
  <schema xmlns="http://www.w3.org/2001/XMLSchema">
    <import namespace="http://xmlns.oracle.com/pcbpel/adapter/db/SCOTT/PROC/"
      schemaLocation="SCOTT_PROC.xsd"/>
  </schema>
</types>

The namespace is derived from the schema, package, and procedure name, and appears as the targetNamespace in the generated XSD.

A root element called InputParameters is created in the XSD for specifying elements that correspond to the IN and IN/OUT parameters of the stored procedure. Another root element called OutputParameters, whose elements correspond to the IN/OUT and OUT parameters, is created only if there are any such parameters. Note that IN/OUT parameters appear in both root elements.

These root elements are represented in the XSD as an unnamed complexType definition whose sequence includes one element for each parameter. If there are no IN or IN/OUT parameters, the InputParameters root element is still created; however, the complexType is empty. A comment in the XSD indicates that there are no such parameters. An example of one of these root elements follows.

<element name="InputParameters"
  <complexType>
    <sequence>
      <element …>
       …
    </sequence>
  </complexType>
</element>

The WSDL defines message types whose parts are defined in terms of these two root elements.

<message name="args_in_msg">
  <part name="InputParameters" element="db:InputParameters"/>
</message>
<message name="args_out_msg">
  <part name="OutputParameters" element="db:OutputParameters"/>
</message>

The db namespace is the same as the targetNamespace of the generated XSD. Note that the args_in_msg message type always appears in the WSDL while args_out_msg is included only if the OutputParameters root element is generated in the XSD.

An operation is defined in the WSDL whose name is the same as the adapter service and whose input and output messages are defined in terms of these two message types.

<portType name="ProcedureProc_ptt">
  <operation name="ProcedureProc">
    <input message="tns:args_in_msg"/>
    <output message="tns:args_out_msg"/>
  </operation>
</portType>

The input message always appears while the output message depends on the existence of the OutputParameters root element in the XSD. The tns namespace is derived from the operation name and is defined in the WSDL as

xmlns:tns="http://xmlns.oracle.com/pcbpel/adapter/db/ProcedureProc"

The root elements in the XSD define the structure of the parts used in the messages that are passed into and sent out of the Web service encapsulated by the WSDL.

The input message in the WSDL corresponds to the InputParameters root element from the XSD. The instance XML supplies values for the IN and IN/OUT parameters of the stored procedure. The output message corresponds to the OutputParameters root element. This is the XML that gets generated after the stored procedure has executed. It holds the values of any IN/OUT and OUT parameters.

Supported Primitive Datatypes

Many primitive datatypes have well-defined mappings and therefore are supported by both the design-time and run-time components. In addition, you can use user-defined types such as VARRAY, nested tables, and OBJECT. Table 4-22 lists the primitive datatypes supported by the database adapter for stored procedures.

Table 4-22 Primitive Datatypes Supported by the Database Adapter for Stored Procedures

SQL or PL/SQL Type                                 XML Schema Type

BINARY_DOUBLE, DOUBLE PRECISION                    double
BINARY_FLOAT, FLOAT, REAL                          float
BINARY_INTEGER, INTEGER, PLS_INTEGER, SMALLINT     int
BLOB, LONG RAW, RAW                                base64Binary
CHAR, CLOB, LONG, VARCHAR2                         string
DATE, TIMESTAMP                                    dateTime
DECIMAL, NUMBER                                    decimal


Generated XSD Attributes

Table 4-23 lists the attributes used in the generated XSDs. Attributes prefixed with db: are specific to the database adapter.

Table 4-23 Generated XSD Attributes

Attribute      Example                   Purpose

name           name="param"              Name of an element
type           type="string"             XML schema type
db:type        db:type="VARCHAR2"        SQL or PL/SQL type
db:index       db:index="1"              Position of a parameter
db:default     db:default="true"         Has a default clause
minOccurs      minOccurs="0"             Minimum occurrences
maxOccurs      maxOccurs="1"             Maximum occurrences
nillable       nillable="true"           Permits null values


The db namespace is used to distinguish attributes used during run time from standard XML schema attributes. The db:type attribute is used to indicate what the database type is so that a suitable JDBC type mapping can be obtained at run time. The db:index attribute is used as an optimization by both the design-time and run-time components to ensure that the parameters are arranged in the proper order. Parameter indices begin at 1 for procedures and 0 for functions. The return value of a function is represented as an OutputParameter element whose name is the name of the function and whose db:index is 0. The db:default attribute is used to indicate whether or not a parameter has a default clause.

The minOccurs value is set to 0 to allow for an IN parameter to be removed from the XML. This is useful when a parameter has a default clause defining a value for the parameter (for example, X IN INTEGER DEFAULT 0). At run time, if no element is specified for the parameter in the XML, the parameter is omitted from the invocation of the stored procedure, thus allowing the default value to be used. Each parameter can appear at most once in the invocation of a stored procedure or function. Therefore, maxOccurs, whose default value is always 1, is always omitted from elements representing parameters.

The nillable attribute is always set to true to allow the corresponding element in the instance XML to have a null value (for example, <X/> or <X></X>). In some cases, however, to pass schema validation, an element such as this, which does have a null value, must state this explicitly (for example, <X xsi:nil="true"/>). The namespace, xsi, used for the nillable attribute, must be declared explicitly in the instance XML (for example, xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance").

User-Defined Types

The wizard can also generate valid definitions for user-defined types such as collections (VARRAY and nested tables) and OBJECT. These are created as complexType definitions in the XSD.

For VARRAY, the complexType definition defines a single element in its sequence, called name_ITEM, where name is the name of the VARRAY element. All array elements in the XML are so named. Given the following VARRAY type definition,

SQL> CREATE TYPE FOO AS VARRAY (5) OF VARCHAR2 (10);

and a VARRAY element, X, whose type is FOO, the following complexType is generated:

<complexType name="FOO">
  <sequence>
    <element name="X_ITEM" db:type="VARCHAR2" minOccurs="0" maxOccurs="5" nillable="true"/>
      <simpleType>
        <restriction base="string">
          <maxLength value="10"/>
        </restriction>
      </simpleType>
  </sequence>
</complexType>

The minOccurs value is 0 to allow for an empty collection. The maxOccurs value is set to the maximum number of items that the collection can hold. Note that the db:index attribute is not used. Having nillable set to true allows individual items in the VARRAY to be null.

Note the use of the restriction specified on the element of the VARRAY, FOO. This is used on types such as CHAR and VARCHAR2, whose length is known from the declaration of the VARRAY (or nested table). It specifies the type and maximum length of the element. An element value that exceeds the specified length causes the instance XML to fail during schema validation.

The attribute values of a parameter declared to be of type FOO look as follows in the generated XSD:

<element name="X" type="db:FOO" db:type="Array" db:index="1" minOccurs="0" nillable="true"/>

The type and db:type values indicate that the parameter is represented as an array defined by the complexType called FOO in the XSD. The value for db:index is whatever the position of that parameter is in the stored procedure.

A nested table is treated almost identically to a VARRAY. The following nested table type definition,

SQL> CREATE TYPE FOO AS TABLE OF VARCHAR2 (10);

is also generated as a complexType with a single element in its sequence, called name_ITEM. The element has the same attributes as in the VARRAY example, except that the maxOccurs value is unbounded because nested tables can be of arbitrary size.

<complexType name="FOO">
  <sequence>
    <element name="X_ITEM" … maxOccurs="unbounded" nillable="true">
      ...
    </element>
  </sequence>
</complexType>

An identical restriction is generated for the X_ITEM element in the VARRAY. The attributes of a parameter, X, declared to be of this type, are the same as in the VARRAY example.

An OBJECT definition is also generated as a complexType. Its sequence holds one element for each attribute in the OBJECT. The following OBJECT,

SQL> CREATE TYPE FOO AS OBJECT (X VARCHAR2 (10), Y NUMBER);

is represented as a complexType called FOO with two sequence elements.

<complexType name="FOO">
  <sequence>
    <element name="X" db:type="VARCHAR2" minOccurs="0" nillable="true"/>
      <simpleType>
        <restriction base="string">
          <maxLength value="10"/>
        </restriction>
      </simpleType>
    <element name="Y" type="decimal" db:type="NUMBER" minOccurs="0" nillable="true"/>
  </sequence>
</complexType>

The minOccurs value is 0 to allow for the element to be removed from the XML. This causes the value of the corresponding attribute in the OBJECT to be set to null at run time. The nillable value is true to allow empty elements to appear in the XML, annotated with the xsi:nil attribute, to indicate that the value of the element is null. Again, the db:index attribute is not used.

Note the use of a restriction on the VARCHAR2 attribute. The length is known from the declaration of the attribute in the OBJECT.

Complex User-Defined Types

User-defined types can be defined in arbitrarily complex ways. An OBJECT can contain attributes whose types are defined as any of the aforementioned user-defined types. This means that the type of an attribute in an OBJECT can be another OBJECT, VARRAY or a nested table, and so on. The base type of a VARRAY or a nested table can also be an OBJECT. Allowing the base type of a collection to be another collection supports multidimensional collections.

Object Type Inheritance

The wizard is capable of generating a valid XSD for parameters whose types are defined using OBJECT-type inheritance. Given the following type hierarchy,

SQL> CREATE TYPE A AS OBJECT (A1 NUMBER, A2 VARCHAR2 (10)) NOT FINAL;
SQL> CREATE TYPE B UNDER A (B1 VARCHAR2 (10));

and a procedure containing a parameter, X, whose type is B,

SQL> CREATE PROCEDURE P (X IN B) AS BEGIN … END;

the wizard generates an InputParameters element for parameter X as

<element name="X" type="db:B" db:index="1" db:type="Struct" minOccurs="0" nillable="true"/>

where the definition of OBJECT type B in the XSD is generated as the following complexType.

<complexType name="B">
  <sequence>
    <element name="A1" type="decimal" db:type="NUMBER" minOccurs="0" nillable="true"/>
    <element name="A2" db:type="VARCHAR2" minOccurs="0"  nillable="true">
      ...
    </element>
    <element name="B1" db:type="VARCHAR2" minOccurs="0"  nillable="true">
      ...
    </element>
  </sequence>
</complexType>

Restrictions on the maximum length of attributes A2 and B1 are added appropriately. Notice how the OBJECT type hierarchy is flattened into a single sequence of elements that corresponds to all of the attributes in the entire hierarchy.

Object References

The wizard can also generate a valid XSD for parameters that are references to OBJECT types (for example, object references), or are user-defined types that contain an object reference somewhere in their definition. In this example,

SQL> CREATE TYPE FOO AS OBJECT (…);
SQL> CREATE TYPE BAR AS OBJECT (F REF FOO, …);
SQL> CREATE PROCEDURE PROC (X OUT BAR, Y OUT REF FOO) AS BEGIN … END;

the wizard generates complexType definitions for FOO and BAR as already indicated, except that for BAR, the element for the attribute, F, is generated as

<element name="F" type="db:FOO" db:type="Ref" minOccurs="0" nillable="true"/>

where together, the type and db:type attribute values indicate that F is a reference to the OBJECT type FOO.

For a procedure PROC, the following elements are generated in the OutputParameters root element of the XSD:

<element name="X" type="db:BAR" db:index="1" db:type="Struct" minOccurs="0" nillable="true"/>
<element name="Y" type="db:FOO" db:index="2" db:type="Ref" minOccurs="0" nillable="true"/>

For Y, note the value of the db:type attribute, Ref. Together with the type attribute, the element definition indicates that Y is a reference to FOO.

Note that there is a restriction on the use of object references that limits their parameter mode to OUT only. Passing an IN or IN/OUT parameter into an API that is either directly a REF or, if the type of the parameter is user-defined, contains a REF somewhere in the definition of that type, is not permitted.

Referencing Types in Other Schemas

You can refer to types defined in other schemas, provided that the necessary privileges to access them have been granted. For example, suppose type OBJ was declared in SCHEMA1:

SQL> CREATE TYPE OBJ AS OBJECT (…);

The type of a parameter in a stored procedure declared in SCHEMA2 can be type OBJ from SCHEMA1:

CREATE PROCEDURE PROC (O IN SCHEMA1.OBJ) AS BEGIN … END;

This is possible only if SCHEMA1 granted permission to SCHEMA2 to access type OBJ:

SQL> GRANT EXECUTE ON OBJ TO SCHEMA2;

If the required privileges are not granted, an error occurs when trying to create procedure PROC in SCHEMA2:

PLS-00201: identifier "SCHEMA1.OBJ" must be declared

Because the privileges have not been granted, type OBJ from SCHEMA1 is not visible to SCHEMA2; therefore, SCHEMA2 cannot refer to it in the declaration of parameter O.

Run Time: Before Stored Procedure Invocation

This section discusses important considerations of stored procedure support.

Value Binding

Consider the extraction of values from the XML and how the run time works given those values. The possible cases for data in the XML corresponding to the value of a parameter whose type is one of the supported primitive datatypes are as follows:

  1. The value of an element is specified (for example, <X>100</X>).

  2. The value of an element is not specified (for example, <X/>).

  3. The value is explicitly specified as null (for example, <X xsi:nil="true"/>)

  4. The element is not specified in the XML at all.

Each case is handled differently.

In the first case, the value is taken from the XML as-is and is converted to the appropriate object according to its type. That object is then bound to its corresponding parameter during preparation of the stored procedure invocation.

In the second and third cases, the actual value extracted from the XML is null. The type converter accepts null and returns it without any conversion. The null value is bound to its corresponding parameter regardless of its type. Essentially, this is the same as passing null for parameter X.

The fourth case has two possibilities. The parameter either has a default clause or it does not. If the parameter has a default clause, then the parameter is completely excluded from the invocation of the stored procedure, which allows the default value to be used. On the other hand, if the parameter does not have a default clause, then the parameter is included in the invocation and a null value is bound to it. For example, consider a parameter that has a default clause:

SQL> CREATE PROCEDURE PROC (X IN INTEGER DEFAULT 0) AS BEGIN … END;

Here, no value is bound to the parameter. In fact, the parameter is completely excluded from the invocation of the stored procedure. This allows the value of 0 to default for parameter X.

To summarize, the following PL/SQL is executed in each of the four cases (an illustrative JDBC-level sketch of these cases follows the list):

  1. "BEGIN PROC (X=>?); END;" - X = 100

  2. "BEGIN PROC (X=>?); END;" - X = null

  3. "BEGIN PROC (X=>?); END;" - X = null

  4. There are two possibilities:

    1. "BEGIN PROC (); END;" - X = 0 (X has a default clause)

    2. "BEGIN PROC (X=>?); END;" - X = null (X does not have a default clause)

With the exception of default clause handling, these general semantics also apply to item values of a collection or attribute values of an OBJECT whose types are one of the supported primitive datatypes. The semantics of <X/> when the type is user-defined are, however, quite different.

For a collection, whether it is a VARRAY or a nested table, the following behavior can be expected given a type definition such as

SQL> CREATE TYPE ARRAY AS VARRAY (5) OF VARCHAR2 (10);

and XML for a parameter, X, which has type ARRAY, that appears as follows:

<X>
    <X_ITEM xsi:nil="true"/>
    <X_ITEM>Hello</X_ITEM>
    <X_ITEM xsi:nil="true"/>
    <X_ITEM>World</X_ITEM>
</X>

The first and third elements of the VARRAY are set to null. The second and fourth are assigned their respective values. No fifth element is specified in the XML; therefore, the VARRAY instance has only four elements.

Assume an OBJECT definition such as

SQL> CREATE TYPE OBJ AS OBJECT (A INTEGER, B INTEGER, C INTEGER);

and XML for a parameter, X, which has type OBJ, that appears as

<X>
    <A>100</A>
    <C xsi:nil="true"/>
</X>

The value 100 is assigned to attribute A and null is assigned to attributes B and C. Because there is no element in the instance XML for attribute B, a null value is assigned.

The second case, <X/>, behaves differently if the type of X is user-defined. Rather than assigning null to X, an initialized instance of the user-defined type is created and bound instead.

In the preceding VARRAY example, if <X/> or <X></X> is specified, then the value bound to X is an empty instance of the VARRAY. In PL/SQL, this is equivalent to calling the type constructor and assigning the value to X. For example,

X := ARRAY();

Similarly, in the preceding OBJECT example, an initialized instance of OBJ, whose attribute values have all been null assigned, is bound to X. Like the VARRAY case, this is equivalent to calling the type constructor. For example,

X := OBJ(NULL, NULL, NULL);

To specifically assign a null value to X when the type of X is user-defined, add the xsi:nil attribute to the element in the XML, as in

<X xsi:nil="true"/>

Datatype Conversions

This section describes the conversion of datatypes such as CLOB, DATE, TIMESTAMP, and binary datatypes including RAW, LONG RAW and BLOB.

For CLOB parameters, a temporary CLOB is first created. The data extracted from the XML is then written to it before binding the CLOB to its corresponding parameter. The temporary CLOB is freed when the interaction completes. For other character types, such as CHAR and VARCHAR2, the data is simply extracted and bound as necessary. Note that it is possible to bind an XML document to a CLOB (or VARCHAR2 if it is large enough). However, appropriate substitutions for <, >, and so on, must first be made (for example, &lt; for < and &gt; for >).

Note that the XML schema type, dateTime, represents both DATE and TIMESTAMP. This means that the XML values for both datatypes must adhere to the XML schema representation for dateTime. Therefore, a simple DATE string, 01-JAN-05, is invalid. XML schema defines dateTime as YYYY-MM-DDTHH:mm:ss. Therefore, the correct DATE value is 2005-01-01T00:00:00.
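
As an illustration only, the following sketch shows one way a client could format a java.util.Date into the dateTime form expected in the instance XML; it is not part of the adapter itself, and the class name XsdDateTimeExample is a placeholder.

import java.text.SimpleDateFormat;
import java.util.Date;

public class XsdDateTimeExample {
    public static void main(String[] args) {
        // XML schema dateTime uses the form YYYY-MM-DDTHH:mm:ss.
        SimpleDateFormat xsdDateTime = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss");
        System.out.println(xsdDateTime.format(new Date()));
        // Prints, for example: 2005-01-01T00:00:00
    }
}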

Data for binary datatypes must be represented in a human readable manner. The chosen XML schema representation for binary data is base64Binary. The type converter uses the javax.mail.internet.MimeUtility encode and decode APIs to process binary data. The encode API must be used to encode all binary data into base64Binary form so that it can be used in an XML file. The type converter uses the decode API to decode the XML data into a byte array. This is then bound either directly, as is the case with RAW and LONG RAW parameters, or is used to create a temporary BLOB, which is then bound to its associated BLOB parameter. The temporary BLOB is freed when the interaction completes.
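
The following sketch shows one way a client might prepare binary data for the instance XML using the same javax.mail.internet.MimeUtility API mentioned above. It is an illustration only, not the adapter's own code; the file-based input and the class name Base64EncodeExample are assumptions for the example.

import java.io.ByteArrayOutputStream;
import java.io.FileInputStream;
import java.io.InputStream;
import java.io.OutputStream;
import javax.mail.internet.MimeUtility;

public class Base64EncodeExample {
    // Encodes the contents of a file into base64Binary text suitable for a
    // RAW, LONG RAW, or BLOB element in the InputParameters XML.
    public static String encodeToBase64(String fileName) throws Exception {
        ByteArrayOutputStream encoded = new ByteArrayOutputStream();
        OutputStream out = MimeUtility.encode(encoded, "base64");
        InputStream in = new FileInputStream(fileName);
        byte[] buffer = new byte[1024];
        int len;
        while ((len = in.read(buffer)) != -1) {
            out.write(buffer, 0, len);
        }
        in.close();
        out.close();                          // flushes any buffered encoded bytes
        return encoded.toString("US-ASCII");  // base64 output is plain ASCII
    }
}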

Conversions for the remaining datatypes are straightforward and require no additional information.

Run Time: After Stored Procedure Invocation

After the procedure (or function) executes, the values for any IN/OUT and OUT parameters are retrieved. These correspond to the values of the elements in the OutputParameters root element in the generated XSD.

Datatype Conversions

Conversions of the data retrieved are straightforward. However, BLOB, CLOB (and other character data), as well as RAW and LONG RAW conversions, require special attention.

When a CLOB is retrieved, the entire contents of that CLOB are written to the corresponding element in the generated XML. Standard DOM APIs are used to construct the XML. This means that character data, as for types like CLOB, CHAR, and VARCHAR2, is massaged as needed to make any required substitutions so that the value is valid and can be placed in the XML for subsequent processing. Therefore, substitutions for < and >, for example, in an XML document stored in a CLOB are made so that the value placed in the element within the generated XML for the associated parameter is valid.

Raw data, such as for RAW and LONG RAW types, is retrieved as a byte array. For BLOBs, the BLOB is first retrieved, and then its contents are obtained, also as a byte array. The byte array is then encoded using the javax.mail.internet.MimeUtility encode API into base64Binary form. The encoded value is then placed in its entirety in the XML for the corresponding element. The MimeUtility decode API must be used to decode this value back into a byte array.
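
Correspondingly, a client that receives the generated OutputParameters XML can decode a base64Binary element value back into bytes, for example as in the following sketch. This is an illustration only; the class name Base64DecodeExample is a placeholder.

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import javax.mail.internet.MimeUtility;

public class Base64DecodeExample {
    // Decodes base64Binary text taken from an element in the generated
    // OutputParameters XML back into the original byte array.
    public static byte[] decodeFromBase64(String base64Text) throws Exception {
        InputStream encoded = new ByteArrayInputStream(base64Text.getBytes("US-ASCII"));
        InputStream decoded = MimeUtility.decode(encoded, "base64");
        ByteArrayOutputStream result = new ByteArrayOutputStream();
        byte[] buffer = new byte[1024];
        int len;
        while ((len = decoded.read(buffer)) != -1) {
            result.write(buffer, 0, len);
        }
        decoded.close();
        return result.toByteArray();
    }
}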

Conversions for the remaining datatypes are straightforward and require no additional information.

Null Values

Elements whose values are null appear as empty elements in the generated XML and are annotated with the xsi:nil attribute. This means that the xsi namespace is declared in the XML that is generated. Generated XML for a procedure PROC, which has a single OUT parameter, X, whose value is null, looks as follows:

<db:OutputParameters … xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <X xsi:nil="true"/>
</db:OutputParameters>

The db namespace is also declared (that is, xmlns:db="..."). Note that XML elements for parameters of any type (including user-defined types) appear this way if their value is null.

Function Return Values

The return value of a function is treated as an OUT parameter at position 0 whose name is the name of the function itself. For example,

CREATE FUNCTION FACTORIAL (X IN INTEGER) RETURN INTEGER AS
BEGIN
    IF (X <= 0) THEN RETURN 1;
    ELSE RETURN X * FACTORIAL (X - 1);
    END IF;
END;

An invocation of this function with a value of 5, for example, results in a value of 120 and appears as <FACTORIAL>120</FACTORIAL> in the XML generated in OutputParameters.

Advanced Topics

This section discusses scenarios for types that are not supported directly using the stored procedure functionality that the database adapter provides. The following sections describe workarounds that address the need to use these datatypes.

Support for REF CURSOR

Neither the design-time nor run-time components support REF CURSOR types directly. The solution is to use a collection of an OBJECT type. Because the number of rows returned by a REF CURSOR is usually unknown, it is best to use a nested table as the collection type. This solution involves using a Java stored procedure to convert a ResultSet into an instance of the declared collection type. A sample tutorial illustrating this is provided in the following directory:

Oracle_Home\integration\orabpel\samples\tutorials\122.DBAdapter\ResultSetConverter

Support for PL/SQL Boolean, PL/SQL Record, and PL/SQL Table Types

The wizard provides a mechanism that detects when these types are used and then invokes Oracle JPublisher to generate the necessary wrappers automatically. Oracle JPublisher generates two SQL files, one to create schema objects, and another to drop them. The SQL that creates the schema objects is automatically executed from within the wizard to create the schema objects in the database schema before the XSD is generated. For example, suppose the following package specification is declared:

CREATE PACKAGE PKG AS
  TYPE REC IS RECORD (X NUMBER, Y VARCHAR2 (10));
  TYPE TBL IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
  PROCEDURE PROC (R REC, T TBL, B BOOLEAN);
END;

Figure 4-36 shows the step in the wizard that is displayed when procedure PROC from package PKG is selected.

Figure 4-36 Specifying a Stored Procedure in the Adapter Configuration Wizard

Specifying a stored procedure: step 6.
Description of "Figure 4-36 Specifying a Stored Procedure in the Adapter Configuration Wizard"

As Figure 4-36 shows, the original procedure name is fully qualified, PKG.PROC. The type of parameter, R, is the name of the RECORD. The type of T is the name of the TABLE. The type of B is BOOLEAN. The name of the wrapper package that is generated is derived from the service name, bpel_ServiceName (for example, bpel_UseJPub). This is the name of the generated package that contains the wrapper procedure. The check box can be used to force the wizard to overwrite an existing package when the schema objects are created.

Clicking Next reveals the Finish page of the wizard, as shown in Figure 4-37.

Figure 4-37 Defining a Database Adapter Service: Finish Page

Description of Figure 4-37 follows
Description of "Figure 4-37 Defining a Database Adapter Service: Finish Page"

The contents of this page describe what the wizard has detected and what actions are performed when the Finish button is clicked. The following summarizes the contents of this page:

  1. The name of the generated WSDL is UseJPub.wsdl.

  2. Two SQL scripts are created and added to the BPEL process project:

    1. BPEL_USEJPUB.sql – Creates the schema objects.

    2. BPEL_USEJPUB_drop.sql – Drops the schema objects.

  3. The name of the generated XSD is SCOTT_USEJPUB_PKG-24PROC.xsd.

When you click Finish, Oracle JPublisher is invoked to generate the SQL files and load the schema objects into the database. Generating wrappers may take some time. After an initial wrapper has been generated for a procedure, generating wrappers for other procedures in the same package usually takes less time.

The following user-defined types are generated to replace the PL/SQL types from the original procedure:

SQL> CREATE TYPE PKG_REC AS OBJECT (X NUMBER, Y VARCHAR2 (10));
SQL> CREATE TYPE PKG_TBL AS TABLE OF NUMBER;

The naming convention for these types is OriginalPackageName_OriginalTypeName. BOOLEAN is replaced by INTEGER in the wrapper procedure.

Acceptable values for the original BOOLEAN parameter now that it is an INTEGER are 1 for true and 0 for false. Any value other than 1 is considered false. The generated wrapper procedure uses APIs from the SYS.SQLJUTL package to convert from INTEGER to BOOLEAN and vice-versa.

A new wrapper package called BPEL_USEJPUB is created that contains the wrapper for procedure PROC, called PKG$PROC, as well as conversion APIs that convert from the PL/SQL types to the user-defined types and vice-versa. If the original procedure is a root-level procedure, the name of the generated wrapper procedure is TOPLEVEL$OriginalProcedureName.

The generated XSD represents the signature of wrapper procedure PKG$PROC and not the original procedure. The name of the XSD file is URL-encoded, which replaces $ with -24.

Note the naming conventions for the generated artifacts:

  • The service name is used in the names of the WSDL and SQL files. It is also used as the name of the wrapper package.

  • The name of the generated XSD is derived from the schema name, service name, and the original package and procedure names.

  • The name of a user-defined type is derived from the original package name and the name of its corresponding PL/SQL type.

  • The name of the wrapper procedure is derived from the original package and procedure names. TOPLEVEL$ is used for root-level procedures.

The name of the generated wrapper package is limited to 30 characters. The name of the wrapper procedure is limited to 29 characters. If the names generated by Oracle JPublisher are longer than these limits, they are truncated.

When the PartnerLink that corresponds with the service associated with the procedure is invoked, the generated wrapper procedure is executed instead of the original procedure.

Default Clauses in Wrapper Procedures

If a procedure contains a special type that requires a wrapper to be generated, the default clauses on any of the parameters are not carried over to the wrapper. For example, consider

SQL> CREATE PROCEDURE NEEDSWRAPPER (
        >     B BOOLEAN DEFAULT TRUE, N NUMBER DEFAULT 0) IS BEGIN … END;

Assuming that this is a root-level procedure, the signature of the generated wrapper procedure is

TOPLEVEL$NEEDSWRAPPER (B INTEGER, N NUMBER)

The BOOLEAN type has been replaced by INTEGER. The default clauses on both parameters are missing in the generated wrapper. Parameters of generated wrapper procedures never have a default clause, even if they did in the original procedure.

In this example, if an element for either parameter is not specified in the instance XML, then a null value is bound to that parameter during the invocation of the wrapper procedure. The default value of the parameter that is specified in the original procedure is not used.

To address this, the generated SQL file that creates the wrapper must be edited, restoring the default clauses to the parameters of the wrapper procedure. The wrapper and any additional schema objects must then be reloaded into the database schema. After editing the SQL file, the signature of the wrapper procedure is

TOPLEVEL$NEEDSWRAPPER (B INTEGER DEFAULT 1, N NUMBER DEFAULT 0)

For BOOLEAN parameters, the default value for true is 1 and the default value for false is 0.

As a final step, the XSD generated for the wrapper must be edited. A special attribute must be added to elements representing parameters that now have default clauses. Add db:default="true" to each element representing a parameter that now has a default clause. For example,

<element name="B" … db:default="true" …/>
<element name="N" … db:default="true" …/>

This attribute is used at run time to indicate that, if the element is missing from the instance XML, the corresponding parameter must be omitted from the procedure call. The remaining attributes of these elements remain exactly the same.

Use Case for Creating and Configuring a Stored Procedure in JDeveloper BPEL Designer

This tutorial describes how to integrate a stored procedure into Oracle BPEL Process Manager with JDeveloper BPEL Designer. Other tutorials that demonstrate stored procedures and functions are File2StoredProcedure, JPublisherWrapper, and ResultSetConverter. Go to

Oracle_Home\integration\orabpel\samples\tutorials\122.DBAdapter

This section contains the following topics:


Note:

Stored procedures are supported for Oracle databases only.

Creating a Stored Procedure

  1. Connect to the scott schema of the Oracle database using SQL*Plus. This is the schema in which to create the stored procedure. This example assumes tiger is the password.

    sqlplus scott/tiger
    
    
  2. Create the stored procedure (note the blank space after Hello); an optional JDBC verification sketch follows the SQL*Plus output:

     SQL> CREATE PROCEDURE HELLO (NAME IN VARCHAR2, GREETING OUT VARCHAR2) AS
        2  BEGIN
        3      GREETING := 'Hello ' || NAME;
        4  END;
        5  /
    
      Procedure created.
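
    Optionally, you can verify the procedure with a small standalone JDBC program before building the BPEL process. The following is a minimal sketch; it assumes the Oracle thin driver is on the class path and reuses the connection details that appear later in this tutorial (localhost, port 1521, SID ORCL, scott/tiger), so adjust them for your environment. The class name HelloCheck is a placeholder.

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Types;

    public class HelloCheck {
        public static void main(String[] args) throws Exception {
            // Connection details match the tutorial; adjust host, port,
            // and SID for your environment.
            Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@localhost:1521:ORCL", "scott", "tiger");

            CallableStatement cs = conn.prepareCall("BEGIN HELLO (?, ?); END;");
            cs.setString(1, "John");                   // NAME (IN)
            cs.registerOutParameter(2, Types.VARCHAR); // GREETING (OUT)
            cs.execute();

            System.out.println(cs.getString(2));       // prints: Hello John
            cs.close();
            conn.close();
        }
    }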
    

Creating a Database Connection

Use the Create Database Connection Wizard in JDeveloper BPEL Designer to create a connection to the scott schema in which you created the stored procedure.

  1. Go to JDeveloper BPEL Designer.

  2. Select Connection Navigator from the View main menu.

  3. Right-click Database in the Connection Navigator window and select New Database Connection.

    This starts the Create Database Connection Wizard.

  4. Click Next on the Welcome window.

  5. Enter a name (for example, myConnection) in the Connection Name field of the Type window.

  6. Select the database connection type (for example, Oracle (JDBC)) from the Connection Type list, and click Next.

  7. Enter scott in the Username field of the Authentication window.

  8. Enter the password for scott in the Password field (tiger for this example).

  9. Leave the remaining fields as they are, and click Next.

  10. Enter the following connection information. If you do not know this information, contact your database administrator.

    Field Example of Value
    Driver thin
    Host Name localhost
    JDBC Port 1521
    SID ORCL

  11. Click Next.

  12. Click Test Connection on the Test window.

    If the connection was successful, the following message appears:

    Success!
    
    
  13. Click Finish.

Creating a Workspace and a Greeting Process

  1. Select Application Navigator from the View main menu in JDeveloper BPEL Designer.

  2. Select New from the File main menu.

  3. Double-click Workspace in the Items window to display the Create Workspace window.

  4. Enter a name (for example, myWorkspace) in the Workspace Name field and accept the default path in the Directory Name field.

  5. Deselect the Add a New Empty Project check box.

  6. Click OK.

  7. Right-click this new workspace in the Applications Navigator section.

  8. Select New Project.

  9. Double-click BPEL Process Project in the Items window to display the BPEL Process Project window.

  10. Enter Greeting in the BPEL Process Name field.

  11. Select Synchronous BPEL Process from the Template list.

  12. Leave the Use Default check box selected.

  13. Click OK.

    The bpel.xml, Greeting.bpel, and Greeting.wsdl files are created in the Applications Navigator.

    Description of storeproc.gif follows
    Description of the illustration storeproc.gif

Creating a Partner Link

  1. Ensure that Process Activities is selected in the drop-down list of the Component Palette section in the upper right section of JDeveloper BPEL Designer.

  2. Drag and drop a PartnerLink activity onto the right side of the designer window under the Partner Links header.

    Description of storeprocpl.gif follows
    Description of the illustration storeprocpl.gif

    The Create Partner Link window appears.

  3. Enter Hello in the Name field.

  4. Click the third icon at the top (the Define Adapter Service icon). This starts the Adapter Configuration Wizard.

    Description of adapter_service.gif follows
    Description of the illustration adapter_service.gif

  5. Click Next on the Welcome window.

  6. Select Database Adapter on the Adapter Type window and click Next.

  7. Enter Hello in the Service Name field on the Service Name window. This is the same name as that of the partner link.

  8. Click Next.

  9. Select, in the Connection field on the Service Connection page, the database connection that you created for the scott schema in Chapter 4, "Creating a Database Connection".

  10. Ensure that eis/DB/connection_name displays in the Database Server JNDI Name field. The connection_name is the name you selected in the Connection field and the connection to the scott schema that you created in Chapter 4, "Creating a Database Connection". The name is case sensitive. Ensure that it correctly matches the case of the connection name.

  11. Click Next.

  12. Select Call a Stored Procedure or Function on the Operation Type window.

  13. Click Next.

    The Specify Stored Procedure window appears.

  14. Click Browse to the right of the Procedure field.

    The Stored Procedures window appears.

  15. Leave <Default Schema> selected in the Schema list. This defaults to the scott schema in which the stored procedure is defined.

  16. Select Hello in the Stored Procedures navigation tree.


    Note:

    As an alternative, you can also enter Hello in the Search field, click Search to display this stored procedure for selection in the Stored Procedures navigation tree, and then select it.

    The Arguments tab displays the parameters of the stored procedure.

    Description of storeproc1.gif follows
    Description of the illustration storeproc1.gif

  17. Click the Source tab to display the Hello stored procedure source code. You entered this syntax when you created the stored procedure using SQL*Plus in Chapter 4, "Creating a Stored Procedure".

    Description of storeproc2.gif follows
    Description of the illustration storeproc2.gif

  18. Click OK.

    The Specify Stored Procedure window displays your selections. They appear as they did when the Arguments tab displayed in the Stored Procedures window in Step 16.

    Description of storeproc3.gif follows
    Description of the illustration storeproc3.gif

  19. Click Next.

  20. Click Finish to complete adapter configuration.

    The Create Partner Link window is automatically completed. The window looks as follows:

    Field Value
    Name Hello
    WSDL File file:/c:/OraBPELPM/integration/jdev/jdev/mywork/myWorkspace/Greeting/Hello.wsdl

    Note: OraBPELPM is the Oracle home directory used in this example. In addition, this directory path with a drive letter represents an example on Windows operating systems. If running this tutorial on Unix operating systems, your directory path varies.

    Partner Link Type Hello_plt
    My Role Leave unspecified.
    Partner Role Hello_role

  21. Click Apply.

  22. Click OK.

  23. Select Save All from the File main menu.

    The following files appear under Greeting > Integration Content in the Applications Navigator. These files contain the parameters you specified with the Adapter Configuration Wizard.

    • Hello.wsdl

      Corresponds with the new stored procedure partner link

    • SCOTT_HELLO.xsd

      Provides the definition of the stored procedure, including its parameters

Creating an Invoke Activity

You now create an invoke activity to specify an operation you want to invoke for the service (identified by the Hello partner link).

  1. Drag and drop an invoke activity below the receiveInput Receive activity.

    Description of storeproc4.gif follows
    Description of the illustration storeproc4.gif

  2. Double-click the invoke activity to display the Invoke window.

  3. Enter the following details:

    Field Value
    Name Greet
    Partner Link Hello

    The Operation (Hello) field is automatically filled in.

  4. Click the first icon to the right of the Input Variable field. This is the automatic variable creation icon.

    Description of autocreate.gif follows
    Description of the illustration autocreate.gif

    A variable named Greet_Hello_InputVariable automatically appears in the Name field. This variable provides the value for the in parameter of the stored procedure. The type is {http://xmlns.oracle.com/pcbpel/adapter/db/Hello/}args_in_msg.

  5. Ensure that Global Variable is selected.

  6. Click OK on the Create Variable window.

  7. Click the first icon to the right of the Output Variable field.

  8. A variable named Greet_Hello_OutputVariable automatically appears in the Name field. This variable stores the value of the out parameter of the procedure after it executes. The type is {http://xmlns.oracle.com/pcbpel/adapter/db/Hello/}args_out_msg.

  9. Ensure that Global Variable is selected.

  10. Click OK in the Create Variable window.

    Your selections for the Invoke window appear.

    Description of storeproc5.gif follows
    Description of the illustration storeproc5.gif

  11. Click OK in the Invoke window.

  12. Select Save All from the File main menu.

    The process displays a link from the Greet Invoke activity to the Hello partner link.

Creating an Initial Assign Activity

You now create an Assign activity to assign the input value to the in parameter of the stored procedure.

  1. Drag and drop an Assign activity from the Component Palette section to above the Greet Invoke activity.

    Description of storeproc6.gif follows
    Description of the illustration storeproc6.gif

  2. Double-click the assign icon to display the Assign window.

  3. Click the General tab.

  4. Enter Input in the Name field.

  5. Click Apply.

  6. Click the Copy Rules tab.

  7. Click Create to display the Create Copy Rule window.

  8. Enter the following values:

    Field Value
    From
    • Type
    Variable
    • Variables
    Expand and select Variables, then inputVariable, then payload, then client:GreetingProcessRequest, and then client:input.
    To
    • Type
    Variable
    • Variables
    Expand and select Variables, then Greet_Hello_InputVariable, then InputParameters, then ns2:InputParameters, and then NAME.

    The Create Copy Rule window appears as follows:

    Description of storeproc7.gif follows
    Description of the illustration storeproc7.gif

  9. Click OK to close the Create Copy Rule window.

  10. Click OK to close the Assign window.

  11. Select Save All from the File main menu.

Creating a Second Assign Activity

You now create an Assign activity to retrieve the value of the out parameter of the stored procedure.

  1. Drag and drop an Assign activity from the Component Palette section to below the Greet Invoke activity.

    Description of storeproc8.gif follows
    Description of the illustration storeproc8.gif

  2. Double-click the assign icon to display the Assign window.

  3. Click the General tab.

  4. Enter Output in the Name field.

  5. Click Apply.

  6. Click the Copy Rules tab.

  7. Click Create to display the Create Copy Rule window.

  8. Enter the following values:

    Field Value
    From
    • Type
    Variable
    • Variables
    Expand and select Variable, then Greet_Hello_OutputVariable, then OutputParameters, then ns2:OutputParameters, and then GREETING.
    To
    • Type
    Variable
    • Variables
    Expand and select Variables, then outputVariable, then payload, then client:GreetingProcessResponse, and then client:result.

  9. Click OK to close the Create Copy Rule window.

  10. Click OK to close the Assign window.

    The Greeting process appears as follows in JDeveloper BPEL Designer.

    Description of storeproc9.gif follows
    Description of the illustration storeproc9.gif

  11. Select Save All from the File main menu.

Validating, Compiling, and Deploying the Greeting Process

  1. Go to the Applications Navigator section.

  2. Right-click Greeting.

  3. Select Deploy, then LocalBPELServer, and then Deploy to default domain.

  4. Enter the domain password (initially set to bpel) when prompted.

  5. Click OK.

    This compiles the BPEL process. Review the bottom of the window for any errors. If there are no errors, deployment was successful.

Running the Greeting Process

  1. Log in to Oracle BPEL Console using Internet Explorer by selecting Start, then All Programs, then Oracle - Oracle_Home, then Oracle BPEL Process Manager 10.1.2, and then BPEL Console, or by running the $ORACLE_HOME/integration/orabpel/bin/startorabpel.sh script for UNIX.

  2. Enter the password (initially set to bpel) when prompted.

    The Dashboard tab of Oracle BPEL Console appears. Note that your BPEL process, Greeting, now appears in the Deployed BPEL Processes list.

  3. Click Greeting.

    The Testing this BPEL Process page appears with the Initiate tab selected.

  4. Enter your first name in the input field (for example, John).

  5. Click Post XML Message.

    After the procedure executes and the BPEL process finishes, the value appears as follows:

    Value:  <GreetingProcessResponse>
              <result>Hello John</result>
            </GreetingProcessResponse>
    
    
  6. Click Audit Instance.

    The Instances tab of Oracle BPEL Console appears, along with the sequence of process activities.

  7. Click More... on the Greet activity to see the input to and output from the stored procedure.

    Note the <NAME> tag and its value in the <InputParameters> element. This value came from the inputVariable and was set by the Input Assign activity.

    Note the <GREETING> tag and its value in the <OutputParameters> element. This value came from the output parameter of the stored procedure. The value was then assigned to the outputVariable by the Output Assign activity.

    Description of storeproc10.gif follows
    Description of the illustration storeproc10.gif

  8. Click the Flow tab to view the process flow.

    The process diagram appears.

  9. Click any of the activities to view the XML as it passed through the BPEL process. For example, click the Greet Invoke activity to display the following:

    Description of storeproc11.gif follows
    Description of the illustration storeproc11.gif

Summary

This chapter describes how to use the database adapter to communicate with Oracle and third-party databases. The Adapter Configuration Wizard and advanced configuration capabilities are explained, as is support for stored procedures and functions. A use case is also provided that describes how to create and configure a stored procedure in JDeveloper BPEL Designer.