Oracle® Application Server Adapters for Files, FTP, Databases, and Enterprise Messaging User's Guide 10g Release 3 (10.1.3.1.0) Part Number B28994-02 |
This chapter describes the Oracle Application Server Adapter for Databases (database adapter), which works in conjunction with Oracle BPEL Process Manager and Oracle Enterprise Service Bus. Support for stored procedures and functions (for Oracle databases only) is also described. References to use cases for the database adapter and for stored procedures are provided.
This chapter contains the following topics:
Section 4.3, "Database Adapter Use Case for Oracle BPEL Process Manager"
Section 4.4, "Database Adapter Use Cases for Oracle Enterprise Service Bus"
Section 4.8, "Use Case for Creating and Configuring a Stored Procedure in JDeveloper BPEL Designer"
The database adapter enables a BPEL process to communicate with Oracle databases or third-party databases through JDBC. The database adapter service is defined within a BPEL process partner link using the Adapter Configuration Wizard of Oracle BPEL Process Manager.
This section contains the following topics:
Database Adapter Integration with Oracle BPEL Process Manager
Database Adapter Integration with Oracle Enterprise Service Bus
The database adapter connects to any relational database. For nonrelational databases and legacy systems, application and mainframe adapters are available. See Oracle Application Server Adapter Concepts for information about application and mainframe adapters.
To access an existing relational schema, you use the Adapter Configuration Wizard to do the following:
Import a relational schema and map it as an XML schema (XSD)
See Relational-to-XML Mapping for more information.
Abstract SQL operations such as SELECT, INSERT, and UPDATE as Web services

See SQL Operations as Web Services for more information.
While your BPEL process deals with XML and invokes Web services, database rows and values are queried, inserted, and updated. Unlike other solutions that give you a way to access data using a fixed schema, stored procedures, streams, or queues, with the database adapter, you access table data directly and transparently.
Features of the database adapter include:
Compliance with open standards. The database adapter is an implementation of the JCA 1.5 connector. Like the other adapters that work with Oracle BPEL Process Manager, the database adapter is compatible with open standards such as BPEL, WSIF, and WSDL.
Connectivity to any relational (SQL 92) database using JDBC, or ODBC using the Sun JdbcOdbcBridge
Ability to map any existing relational schema to XML. The mapping is nonintrusive to the schema and no changes need to be made to it.
Web services abstraction of SQL operations. The generated WSDL operations are merge, insert, update, write, delete, select, queryByExample, and inbound polling, which includes physical delete, logical delete, and sequencing-based polling strategies.
Leveraging of TopLink technology, an advanced object-to-relational persistence framework. You can access the underlying TopLink project and use the TopLink Workbench interface for advanced mapping and configuration: sequencing, batch and joined relationship reading, batch writing, parameter binding, statement caching, connection pooling, external transaction control (JTS and JTA), UnitOfWork for minimal updates, caching, optimistic locking, advanced query support, and query by example.
Ability to execute arbitrary SQL
See the following for more information:
The Oracle BPEL Process Manager forum at
http://forums.oracle.com/forums/forum.jsp?forum=212
The TopLink forum at
http://forums.oracle.com/forums/forum.jsp?forum=48
This site contains over 2,000 topics, such as implementing native sequencing, optimistic locking, and JTA-managed connection pools with TopLink.
You can also access the forums from Oracle Technology Network at
http://www.oracle.com/technology
When executing a SQL select statement against multiple related tables, there are three methods to build the SQL. These methods determine how the detail records are pulled in when the query is against the master record:

Using Relationship Queries (TopLink default)

Twisting the Original Select (TopLink batch-attribute reading)

Returning a Single Result Set (TopLink joined-attribute reading)

The following sections outline these three methods and compare them. Note that when selecting rows from a single table none of these issues arise; they apply only when selecting from multiple related tables.
Using Relationship Queries (TopLink default)

Having selected a Master row, TopLink can always query separately to get all the details belonging to that Master table. These hidden queries (relationship queries) are cached in the TopLink metadata and need to be prepared only once.
Consider the SQL statement in following sample scenario:
SELECT DIRECTOR, ..., VIEWER_RATING FROM MOVIES WHERE RATING = 'A';
For each master row, a query such as the following is then executed:
SELECT CRITIC, ..., TITLE FROM MOVIE_REVIEWS WHERE (TITLE = ?)
This brings in all the data with 1 + n query executions, where n is the number of master rows returned by the first query.

This approach is safe but slow, because a large number of round trips to the database are required to pull in all the data.
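The 1 + n relationship-query pattern can be sketched as follows. This is an illustrative sketch only, not DBAdapter or TopLink code; it uses an in-memory SQLite database with scaled-down, hypothetical versions of the MOVIES and MOVIE_REVIEWS tables from the example above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE MOVIES (TITLE TEXT PRIMARY KEY, RATING TEXT);
    CREATE TABLE MOVIE_REVIEWS (TITLE TEXT, CRITIC TEXT);
    INSERT INTO MOVIES VALUES ('Movie1', 'A'), ('Movie2', 'A');
    INSERT INTO MOVIE_REVIEWS VALUES ('Movie1', 'Critic1'), ('Movie2', 'Critic2');
""")

# Query 1: read the master rows.
masters = conn.execute("SELECT TITLE FROM MOVIES WHERE RATING = 'A'").fetchall()

# Queries 2 .. n+1: one hidden relationship query per master row.
results = {}
for (title,) in masters:
    results[title] = conn.execute(
        "SELECT CRITIC FROM MOVIE_REVIEWS WHERE TITLE = ?", (title,)
    ).fetchall()

# Total round trips: 1 + n, where n is the number of master rows.
print(1 + len(masters))
```

The loop body is the moral equivalent of TopLink's cached relationship query: the same prepared statement executed once per master row.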
Twisting the Original Select (TopLink batch-attribute reading)

This feature allows TopLink to alter the original SQL select statement to read all the details in a second select statement, as shown in the following example:

SELECT DIRECTOR, ..., VIEWER_RATING FROM MOVIES WHERE RATING = 'A'

SELECT DISTINCT t0.CRITIC, ..., t0.TITLE FROM MOVIE_REVIEWS t0, MOVIES t1 WHERE ((t1.RATING = 'A') AND (t0.TITLE = t1.TITLE))

By considering the original select statement when pulling in the details, a total of two (1 + 1 = 2) query executions need to be performed.
Advantages
Batch attribute reading has the following advantages:
All data read in two round trips to database
This is the default feature in the 10.1.2.0.2 release
Disadvantages
Batch attribute reading has the following disadvantages:
When using maxTransactionSize (on polling receive) or maxRows (on invoke select) to limit the number of rows loaded into memory at a time, these settings do not easily carry over to the batch attribute query. It is easier to work with a cursored result when there is only a single result set. (Multiple cursors can be used, with difficulty, if the original query has an ORDER BY clause.)
TopLink can alter a SQL statement only when it is in a format it can understand. If you use the hybrid SQL approach and set custom SQL for the root select, then TopLink cannot interpret that SQL to build the batch select.
The DISTINCT clause is used on the batch query to avoid returning the same detail twice if two masters happen to point to it. The DISTINCT clause cannot be used when returning LOBs in the result set.
Configuration
Configuration is on a per 1-1 or 1-M mapping basis. By default, all such mappings in the 10.1.2.0.2 release have this property set.
Returning a Single Result Set (TopLink joined-attribute reading)

The detail tables are outer-joined to the original SQL select statement, returning both master and detail in a single result set, as shown in the following example:
SELECT DISTINCT t1.DIRECTOR, ..., t1.VIEWER_RATING, t0.CRITIC, ..., t0.TITLE
FROM MOVIE_REVIEWS t0, MOVIES t1
WHERE ((t1.RATING = 'A') AND (t0.TITLE (+) = t1.TITLE))
This requires one query execution in total.
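Joined-attribute reading can be sketched with the same hypothetical SQLite tables: a single outer join returns master and detail through one cursor, in one round trip. This is an illustration of the idea, not TopLink's generated SQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE MOVIES (TITLE TEXT PRIMARY KEY, RATING TEXT);
    CREATE TABLE MOVIE_REVIEWS (TITLE TEXT, CRITIC TEXT);
    INSERT INTO MOVIES VALUES ('Movie1', 'A'), ('Movie2', 'A');
    INSERT INTO MOVIE_REVIEWS VALUES ('Movie1', 'Critic1'), ('Movie1', 'Critic2');
""")

# One query, one cursor: master columns are repeated once per detail row.
rows = conn.execute("""
    SELECT t1.TITLE, t0.CRITIC
    FROM MOVIES t1 LEFT OUTER JOIN MOVIE_REVIEWS t0 ON t0.TITLE = t1.TITLE
    WHERE t1.RATING = 'A'
""").fetchall()

# Movie1 appears once per review (duplicated master data); Movie2 has no
# reviews but, thanks to the outer join, is not filtered out.
print(rows)
```

Note the two costs discussed in this section are both visible here: the duplicated 'Movie1' master data, and the NULL detail columns for masters without details.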
Advantages
The advantages include the following:
When using maxTransactionSize while polling, the benefits of dealing with a single cursor can be great.

When following the hybrid SQL route and entering custom SQL statements, you only have to deal with a single SQL statement, whereas TopLink normally uses a series of additional hidden SQL statements to bring in related rows.
Read consistency: you read all related rows at the same time, not at different instants in time for the different tables.
Performance can be ideal as only a single round trip to the database is required, whereas batch attribute reading requires one per table queried.
Disadvantages
There are some drawbacks, however, namely the cost of returning duplicate data. For example, consider reading the Master and Detail tables. Master has 100 columns per row and Detail has 2 columns per row; each row in Master typically has 100 related Detail rows.
With one query per table, the result sets for the preceding example look like the following.

Master result set:

Column1 | Column2 | ... | Column100 |
---|---|---|---|
Master1 | ... | ... | ... |

Detail result set:

Column1 | Column2 |
---|---|
Detail1 | ... |
Detail2 | ... |
... | ... |
Detail100 | ... |
In this example, 300 column values are returned, as shown:

(columns in master) + (columns in detail x details per master) = 100 + (2 x 100) = 300
With one query for all tables, the result set looks like the following:

Master Column1 | ... | Master Column100 | Detail Column1 | Detail Column2 |
---|---|---|---|---|
Master1 | ... | ... | Detail1 | ... |
Master1 | ... | ... | Detail2 | ... |
... | ... | ... | ... | ... |
Master1 | ... | ... | Detail100 | ... |
Note that, in the case of one query for all tables, 10,200 column values are returned in a single result set, versus 300 in two result sets, as shown here:
(columns in master + columns in detail) x details per master = (100 + 2) x 100 = 10,200
This can be a serious drain on network traffic and computation, because 97 percent of the data returned is duplicate. Also, if Master had two related tables, Detail1 and Detail2, with 100 of each per master, then the number of column values returned would be over one million per master row.
In general, you can use the following simple formula to estimate the relative cost of returning all rows in a single result set:

        (Master columns + Detail1 columns + Detail2 columns + ...) x Detail1s per Master x Detail2s per Master x ...
bloat = ____________________________________________________________________________________________________________
        (Master columns + Detail1 columns x Detail1s per Master + Detail2 columns x Detail2s per Master + ...)
Note that for 1-1 relationships this value is always 1. If, in the same example, each master had only two columns, the details had 100 columns instead, and each master had only 3 or 4 details, then the bloat would be:

        (2 + 100) x 4     408
bloat = _____________  =  ___  ~= 1
        (2 + 100 x 4)     402
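The bloat estimate and the worked figures above can be checked with a small helper function. This is a sketch of the formula only; the column counts and detail cardinalities are the hypothetical figures used in the text.

```python
def bloat(master_cols, details):
    """Relative cost of a single joined result set vs separate result sets.

    details is a list of (detail_columns, details_per_master) pairs.
    """
    # Single result set: every column repeated once per joined detail row.
    single = master_cols + sum(cols for cols, _ in details)
    for _, per_master in details:
        single *= per_master
    # Separate result sets: master once, plus each detail table once per row.
    separate = master_cols + sum(cols * n for cols, n in details)
    return single / separate

# 100-column master, one 2-column detail, 100 details per master:
# 10,200 values vs 300 values.
print(bloat(100, [(2, 100)]))

# 2-column master, one 100-column detail, 4 details per master: ~1.
print(round(bloat(2, [(100, 4)]), 2))
```

Running this reproduces the 10,200 / 300 = 34x bloat of the first example and the ~1 bloat of the second.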
Another disadvantage is that this setting can distort the meaning of the maxRows setting on an outbound select.
Configuration
For example, assume that you have imported Movies and MovieReviews, with a 1-M attribute on Movies called movieReviewsCollection. The following are the steps to configure this:
In the BPEL project, choose Application sources, and then click TopLink.
In the Structure panel, click Movies, and then double-click movieReviewsCollection.
A view containing the check box Use Batch Reading is displayed. By default, this box is selected. Deselect the check box to disable batch attribute reading (altering the SQL).
For a 1-1 relationship, you may also see the option Use Joining below the option Use Batch Reading. This is similar to returning a single result set; the following is a list of a few key differences:
The setting is on a per-attribute basis, not a per-query/WSDL-operation basis.
This setting is available only for 1-1 attributes.
This setting does an inner join; that is, if there is a Master without a Detail, then that Master is filtered out of the original select statement.
To configure returning a single result set, edit your WSDL and, for each JCA operation of type DBActivationSpec or DBReadInteractionSpec, add the property ReturnSingleResultSet="true", as shown in the following example:
<operation name="SelectAllByTitleServiceSelect_title">
  <jca:operation
    InteractionSpec="oracle.tip.adapter.db.DBReadInteractionSpec"
    DescriptorName="SelectAllByTitle.Movies"
    QueryName="SelectAllByTitleServiceSelect"
    ReturnSingleResultSet="true"
    MappingsMetaDataURL="toplink_mappings.xml" />
  <input/>
</operation>
This setting overrides altering the original select statement (batch attribute reading).
Comparison of the Methods Used for Querying over Multiple Tables
On the surface, returning a single result set looks best (one query), followed by batch attribute reading (altering the select statement: two queries), and finally by default relationship reading (n + 1 queries). However, there are several pitfalls to both of the more advanced options, as explained in the following sections:
Altering User Defined SQL
If you specify custom/hybrid SQL, TopLink cannot alter that SQL string to build the detail selects. For this reason, avoid using hybrid SQL and build selects using the wizard's visual expression builder as often as possible.
Show Me The SQL
The additional queries executed by TopLink in both the default and batch attribute reading cases can be somewhat of a mystery to users. For this reason, the raw SQL shown to users in the DBAdapter wizard assumes returning a single result set, to make things clearer and to improve manageability.
Returning Too Many Rows At Once
Databases can store vast quantities of information, and a common pitfall is select statements that return too much information at once. On a DBAdapter receive, a maxTransactionSize property can be set to limit the number of rows that are read from a cursored result set and processed in memory at a time. A similar maxRows setting exists for one-time invoke select statements. Without these settings, such select statements can be risky.
Figure 4-1 shows how the database adapter interacts with the various design-time and deployment artifacts.
Figure 4-1 How the Database Adapter Works
The database adapter is a separate JCA 1.5 connector. It is deployed to the application server during installation and is configured using oc4j-ra.xml. The oc4j-ra.xml file is the database adapter deployment descriptor file for Oracle Application Server.

Each entry in oc4j-ra.xml has a Java Naming and Directory Interface (JNDI) name (location) plus session and login properties, and represents a single database and database adapter instance. The connector is tied to the application server; therefore, while it can be used independently, any change to oc4j-ra.xml requires restarting the application server. This file is created by the application server the first time Oracle BPEL Server starts. Therefore, in a standalone installation, you do not see this file unless you have started Oracle BPEL Server at least once.
When a business process is executed in Oracle BPEL Process Manager, a Web service (WSDL) may be invoked (using WSIF) against the database. The jca:address tag in the WSDL is used to look up an adapter instance, and the jca:operation tag is used to set up an interaction (outbound) or activation (inbound) with the database adapter through a JCA interface. The jca:operation tag contains a link to TopLink metadata for executing a query to push XML data to a relational schema, or vice versa.
The toplink_mappings.xml file and the WSDL (with custom jca:operation and jca:address tags) are created at design time. In JDeveloper BPEL Designer, you create an endpoint, or partner link, for interacting with the database. Each partner link has its own WSDL. The WSDL defines all the operations (queries) that can be performed with that database.
To create the WSDL, you use the Adapter Configuration Wizard, where you import the relational schema, map it to an XML schema, and define one or more operations. This produces an XSD representing the schema and a toplink_mappings.xml file.
The Adapter Configuration Wizard creates a TopLink Workbench project (a .mwp file) as part of the JDeveloper BPEL Designer project. Like the JDeveloper BPEL Designer .jpr file, it enables you to go back and visually change mappings or leverage TopLink to set advanced properties. Saving the MWP project does not regenerate toplink_mappings.xml; that is done by running through the wizard again in edit mode. (No changes are needed; you simply run through it.)
During deployment, a copy of toplink_mappings.xml is included in the BPEL suitcase. It is later read by the database adapter, and the metadata is cached.
The database adapter is used for relational-to-XML mapping; therefore, no Java class files are needed. The database adapter generates byte codes for the classes in memory based on information in the descriptors. You do not compile class files or deal with class path issues when using the database adapter. The MWP project in the JDeveloper BPEL Designer project may create Java files as a by-product of using the wizard, but they are needed at design time only.
Adapter Framework is used for the bidirectional integration of the J2CA 1.5 resource adapters with Oracle BPEL Process Manager. Adapter Framework is based on open standards and employs the Web Services Invocation Framework (WSIF) technology for exposing the underlying J2CA interactions as Web services.
See Oracle Application Server Adapter Concepts for information on database adapter architecture, adapter integration with Oracle BPEL Process Manager, and adapter deployments.
Oracle Enterprise Service Bus Server supports the Oracle adapters and enables you to define inbound and outbound adapter services for each. An inbound adapter service receives data from an external data source and transforms it into an XML message. An outbound adapter service sends data to a target application by transforming an XML message into the native format of the given adapter.
Using Oracle Enterprise Service Bus Server you can send or receive messages extracted from an Oracle Database table or created by executing a stored procedure.
BPEL predates ESB, and most of this guide and its samples implicitly assume use with BPEL. However, the adapters work equally well with either BPEL or ESB; for any mention of BPEL here, you may substitute ESB.
This section contains the following topics:
For advanced topics in relational-to-XML mapping, see the section "Relational-to-XML Mappings (toplink_mappings.xml)" under Advanced Configuration.
For a flat table or schema, the relational-to-XML mapping is easy to see. Each row in the table becomes a complex XML element. The value for each column becomes a text node in the XML element. Both column values and text elements are primitive types.
Table 4-1 shows the structure of the MOVIES table. This table is used in the use cases described in this chapter. See Database Adapter Use Case for Oracle BPEL Process Manager for more information.
Table 4-1 MOVIES Table Description
Name | Null? | Type |
---|---|---|
TITLE | NOT NULL | VARCHAR2 |
DIRECTOR | -- | VARCHAR2 |
STARRING | -- | VARCHAR2 |
SYNOPSIS | -- | VARCHAR2 |
GENRE | -- | VARCHAR2 |
RUN_TIME | -- | NUMBER |
RELEASE_DATE | -- | DATE |
RATED | -- | VARCHAR2 |
RATING | -- | VARCHAR2 |
VIEWER_RATING | -- | VARCHAR2 |
STATUS | -- | VARCHAR2 |
TOTAL_GROSS | -- | NUMBER |
DELETED | -- | VARCHAR2 |
SEQUENCENO | -- | NUMBER |
LAST_MODIFIED | -- | DATE |
The corresponding XML schema definition (XSD) is as follows:
<xs:complexType name="Movies">
  <xs:sequence>
    <xs:element name="director" type="xs:string" minOccurs="0" nillable="true"/>
    <xs:element name="genre" type="xs:string" minOccurs="0" nillable="true"/>
    <xs:element name="rated" type="xs:string" minOccurs="0" nillable="true"/>
    <xs:element name="rating" type="xs:string" minOccurs="0" nillable="true"/>
    <xs:element name="releaseDate" type="xs:dateTime" minOccurs="0" nillable="true"/>
    <xs:element name="runTime" type="xs:double" minOccurs="0" nillable="true"/>
    <xs:element name="starring" type="xs:string" minOccurs="0" nillable="true"/>
    <xs:element name="status" type="xs:string" minOccurs="0" nillable="true"/>
    <xs:element name="synopsis" type="xs:string" minOccurs="0" nillable="true"/>
    <xs:element name="title" type="xs:string"/>
    <xs:element name="totalGross" type="xs:double" minOccurs="0" nillable="true"/>
    <xs:element name="viewerRating" type="xs:string" minOccurs="0" nillable="true"/>
  </xs:sequence>
</xs:complexType>
As the preceding code example shows, MOVIES is not just a single CLOB or XMLTYPE column containing the entire XML string. Instead, it is an XML complexType comprising elements, each of which corresponds to a column in the MOVIES table. For flat tables, the relational-to-XML mapping is straightforward.
Table 4-2 and Table 4-3 show the structure of the EMP and DEPT tables, respectively. These tables are used in the MasterDetail use case. See Database Adapter Use Case for Oracle BPEL Process Manager for more information.
Table 4-2 EMP Table Description
Name | Null? | Type |
---|---|---|
EMPNO | NOT NULL | NUMBER(4) |
ENAME | -- | VARCHAR2(10) |
JOB | -- | VARCHAR2(9) |
MGR | -- | NUMBER(4) |
HIREDATE | -- | DATE |
SAL | -- | NUMBER(7,2) |
COMM | -- | NUMBER(7,2) |
DEPTNO | -- | NUMBER(2) |
Table 4-3 DEPT Table Description
Name | Null? | Type |
---|---|---|
DEPTNO | NOT NULL | NUMBER(2) |
DNAME | -- | VARCHAR2(14) |
LOC | -- | VARCHAR2(13) |
As the preceding table definitions show, and as is typical of a normalized relational schema, an employee's department information is not stored in the EMP table. Instead, one of the columns of EMP (DEPTNO) is a foreign key, which equals the primary key (DEPTNO) in DEPT.
However, the XML equivalent has no similar notion of primary keys and foreign keys. Consequently, in the resulting XML, the same data is represented in a hierarchy, thereby preserving the relationships by capturing the detail record inline (embedded) inside the master.
An XML element can contain elements that are either a primitive type (string, decimal) or a complex type, that is, another XML element. Therefore, an employee element can contain a department element.
The corresponding XML shows how the relationship is materialized, or shown inline. DEPTNO is removed from EMP, and instead you see the DEPT element itself:
<EmpCollection>
  <Emp>
    <comm xsi:nil="true"></comm>
    <empno>7369.0</empno>
    <ename>SMITH</ename>
    <hiredate>1980-12-17T00:00:00.000-08:00</hiredate>
    <job>CLERK</job>
    <mgr>7902.0</mgr>
    <sal>800.0</sal>
    <dept>
      <deptno>20.0</deptno>
      <dname>RESEARCH</dname>
      <loc>DALLAS</loc>
    </dept>
  </Emp>
  ...
</EmpCollection>
Materializing the relationship makes the XML human readable and allows the data to be sent as one packet of information. No cycles are allowed in the XML; therefore, an element cannot contain itself. This is handled automatically by the database adapter. However, you may see duplication (that is, the same XML detail record appearing more than once under different master records). For example, if a query returned two employees who work in the same department, then, in the returned XML, you see the same DEPT record inline in both EMP records.
Therefore, when you import tables and map them as XML, it is recommended that you avoid excessive duplication. The database adapter does not print an element inside itself; it prints the following:
<Emp>
  <name>Bob</name>
  <spouse>
    <name>June</name>
  </spouse>
</Emp>
But not:
<Emp>
  <name>Bob</name>
  <spouse>
    <name>June</name>
    <spouse>
      <name>Bob</name>
      <spouse>
        ...
      </spouse>
    </spouse>
  </spouse>
</Emp>
To avoid duplication, you can do the following:
Import fewer tables. If you import only EMP, then DEPT does not appear.

Remove the relationship between EMP and DEPT in the wizard. This removes the relationship, but the foreign key column is put back.
In both these cases, the corresponding XML is as follows:
<EmpCollection>
  <Emp>
    <comm xsi:nil="true"></comm>
    <empno>7369.0</empno>
    <ename>SMITH</ename>
    <hiredate>1980-12-17T00:00:00.000-08:00</hiredate>
    <job>CLERK</job>
    <mgr>7902.0</mgr>
    <sal>800.0</sal>
    <deptno>20.0</deptno>
  </Emp>
  ...
</EmpCollection>
Note that either of the two preceding solutions is feasible only if getting back the foreign key suffices, as opposed to getting back the complete detail record in its entirety.
Table 4-4 shows how database data types are converted to XML primitive types when you import tables from a database.
Table 4-4 Mapping Database Data Types to XML Primitive Types
Database Type | XML Type (Prefixed with xs:) |
---|---|
VARCHAR2 | string |
CHAR | string |
CLOB | string |
NUMBER | decimal |
BLOB | base64Binary |
DATE | dateTime |
TIMESTAMP | dateTime |
Essentially, NUMBER maps to DECIMAL, the most versatile XML data type for numbers; VARCHAR2 and CLOB map to string; BLOB maps to base64Binary (to meet the plain-text requirement); and date types map to dateTime.
Any type not mentioned in this discussion defaults to java.lang.String and xs:string. Timestamp support is basic, because only the xs:dateTime format is supported. The BFILE, USER DEFINED, OBJECT, STRUCT, VARRAY, and REF types are specifically not supported.
Because XML is plain text, BLOB and byte values are base64/MIME encoded so that they can be passed as character data.
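The reason for this encoding can be shown in a short sketch: raw binary bytes are not valid XML character data, but their base64 form is plain text and round-trips losslessly.

```python
import base64

# Raw binary data (e.g. a BLOB value): contains bytes that are not
# legal XML character data.
blob = bytes([0x00, 0xFF, 0x10])

# base64 encoding yields plain ASCII text, safe inside an XML element.
encoded = base64.b64encode(blob).decode("ascii")
print(encoded)

# Decoding recovers the original bytes exactly.
assert base64.b64decode(encoded) == blob
```

The encoded string can then be carried as the text content of an xs:base64Binary element.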
The database adapter supports mapping any relational schema on any relational database to an XML schema, although not any XML schema of your choice, because the wizard generates the XML schema with no explicit user control over the layout of the elements. You can control how you map the schema in both the Adapter Configuration Wizard and later in TopLink Workbench. By pairing the database adapter with a transformation step, you can map any relational schema to any XML schema.
After mapping a relational schema as XML, you must also map basic SQL operations as Web services. Each operation discussed in the following sections has a corresponding tutorial and readme. It is recommended that you start with these and try to run one or more as you read this section. As the tutorials demonstrate, some operations translate directly to the SQL equivalent, while others are more complex.
See the following sections for details:
This section comprises the following topics:
Data manipulation language (DML) operations align with basic SQL INSERT, UPDATE, and SELECT operations. SQL INSERT, UPDATE, DELETE, and SELECT are all mapped to Web service operations of the same name. The WRITE operation is either an INSERT or an UPDATE, based on the results of an existence check. A distinction is made between the data manipulation operations (called outbound writes) and the SELECT operations (called outbound reads). The connection between the Web service and the SQL for merge (the default for outbound write) and queryByExample is not as obvious as for basic SQL INSERT, UPDATE, and SELECT.
This section comprises the following topics:
Merge first reads the corresponding records in the database, calculates any changes, and then performs a minimal update. INSERT, UPDATE, and WRITE make the most sense when you are thinking about a single row and a single table. However, your XML can contain complex types and map to multiple rows on multiple tables. Imagine a DEPT with many EMPs, each with an ADDRESS. In this case, you must calculate which of possibly many rows have changed and which to insert, update, or delete. If a particular row did not change, or only one field changed, the DML calls are minimal.
Unlike the SELECT operation, queryByExample does not require selection criteria to be specified at design time. Instead, for each invoke, selection criteria are inferred from an exemplar input XML record.
For instance, if the output xmlRecord is an employee record, and the input is a sample xmlRecord with lastName = "Smith", then on execution, all employees with a last name of Smith are returned.
A subset of queryByExample is query by primary key, which can be implemented by passing in sample XML records where only the primary key attributes are set.
Use queryByExample when you do not want to create a query using the visual query builder and want the flexibility of allowing the input record to share the same XML schema as the output records.
The queryByExample operation is slightly less performant because a new SELECT must be prepared for each execution. This is because the attributes that are set in the example XML record can vary each time, and therefore the selection criteria vary.
Input xmlRecord:

<Employee>
  <id/>
  <lastName>Smith</lastName>
</Employee>
Output xmlRecord:
<EmployeeCollection>
  <Employee>
    <id>5</id>
    <lastName>Smith</lastName>
    ....
  </Employee>
  <Employee>
    <id>456</id>
    <lastName>Smith</lastName>
    ....
  </Employee>
</EmployeeCollection>
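The idea of deriving a SELECT from an example record can be sketched as follows: only the attributes that are set contribute to the WHERE clause, which is why the statement must be re-prepared per execution. This is a hypothetical helper; TopLink's actual query-by-example mechanism differs in detail.

```python
def build_query_by_example(table, example):
    """Build a parameterized SELECT from the non-null fields of an example record."""
    criteria = {col: val for col, val in example.items() if val is not None}
    where = " AND ".join(f"{col} = ?" for col in criteria)
    sql = f"SELECT * FROM {table}"
    if where:
        sql += f" WHERE {where}"
    return sql, tuple(criteria.values())

# The <id/> element above is empty, so only lastName becomes a criterion.
sql, params = build_query_by_example("EMPLOYEE", {"id": None, "lastName": "Smith"})
print(sql)
print(params)
```

A different example record (say, with only the primary key set) would yield a different WHERE clause, which is the query-by-primary-key subset described above.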
Use Cases for Outbound Invoke Operations
Outbound invoke operations are demonstrated in the following tutorial files:
Insert
Update
Delete
Merge
SelectAll
SelectAllByTitle
PureSQLSelect
QueryByExample
For these files, go to
Oracle_Home\bpel\samples\tutorials\122.DBAdapter
Note:
When defining a DB outbound adapter service, ensure that the JNDI value you give the service is declared in j2ee\home\application-deployments\default\DBAdapter\oc4j-ra.xml. The outbound adapter service uses the xADataSource definition in oc4j-ra.xml to look up the data source. If the JNDI value is not defined in oc4j-ra.xml, then the database adapter uses an unmanaged data source, which does not support XA transactions.
A new option in 10.1.3.1 enables you to specify any arbitrary SQL string, and an XML representing the inputs and outputs of that SQL is generated. Pure SQL operations are demonstrated in the following tutorial files:
UpdateAll
SelectCount
SelectGroupBy
SelectStar
For these files, go to
Oracle_Home\bpel\samples\tutorials\122.DBAdapter
Advanced Use Cases for Outbound Invoke Operations
Advanced outbound invoke operations are demonstrated in the following tutorial files:
InsertWithClobs
XAInsert
NativeSequencingInsert
For these files, go to
Oracle_Home\bpel\samples\tutorials\122.DBAdapter\advanced\dmlInvoke
The inbound receive enables you to listen to and detect events and changes in the database, which in turn can be the initiators of a business process. This is not a one-time action, but rather an activation. A polling thread is started, which polls a database table for new rows or events.
Whenever a new row is inserted into the MOVIES table, the polling operation raises it to Oracle BPEL Process Manager. The stratagem is to poll every record once. The initial SELECT has to be repeated over time, to receive the rows that exist at the start and all new rows as they are inserted over time. However, a row that has been read once is not deleted automatically, and therefore could be read repeatedly on each subsequent poll.
The various ways to poll for events—called polling strategies, also known as after-read strategies or publish strategies—range from simple and intrusive to sophisticated and nonintrusive. Each strategy employs a different solution for the problem of what to do after reading a row or event so as not to pick it up again in the next polling interval. The simplest (and most intrusive) solution is to delete the row so that you do not query it again.
Note:
When you attempt to insert multiple records into a database, the DBWriteInteractionSpec Execute Failed exception may be thrown. The following is the solution to overcome this exception.
The number of retries can be tuned through the following bpel/system/config/collaxa-config.xml
setting:
<property id="nonFatalConnectionMaxRetry">
  <name>Non fatal connection retry limit</name>
  <value>2</value>
  <comment>
    <![CDATA[The maximum number of times a non-fatal connection error can be
    retried before failing. Non-fatal connection errors may be thrown when the
    dehydration store is a RAC installation and the node the application server
    is pointing to is shut down. The engine will resubmit messages that failed
    as a result of the connection error.
    <p/>
    The default value is 2.
    ]]>
  </comment>
</property>
Increase this value to a sufficiently high value to overcome this exception.
This section discusses the following polling strategies and factors to help you determine which strategy to employ for a particular situation:
The physical delete polling strategy polls the database table for records and deletes them after processing. This strategy can be used to capture events related to INSERT
operations and cannot capture database events related to DELETE
and UPDATE
operations on the parent table. This strategy cannot be used to poll child table events. This strategy allows multiple adapter instances to go against the same source table. There is zero data replication.
Preconditions: You must have deletion privileges on the parent and associated child tables to use the delete polling strategy. Table 4-5 describes the requirements for using the delete polling strategy.
Table 4-5 Delete Polling Strategy Preconditions
Requirements Met | Conflicts With |
---|---|
Poll for inserts | No delete on source |
Shallow delete | No updates on source |
Cascading delete | Poll for updates |
Minimal SQL | Poll for deletes |
Zero data replication | Poll for child updates |
Default | -- |
Allows raw SQL | -- |
Concurrent polling | -- |
Note:
In Shallow delete and Cascading delete, delete can be configured to delete the top-level row, to cascade all, or to cascade on a case-by-case basis.

Concurrent polling can be configured for both delete and logical delete polling strategies.
Configuration: You can configure the delete polling strategy to delete the top-level row, to cascade all, or to cascade on a case-by-case basis. This enables deleting only the parent rows and not the child rows, cascaded deletes, and optional cascaded deletes, determined on a case-by-case basis. You can configure the polling interval for performing an event publish at design time.
Delete Cascade Policy: The optional advanced configuration is to specify the cascade policy of the DELETE
. For instance, after polling for an employee with an address and many phone numbers, the phone numbers are deleted because they are privately owned (default for one-to-many), but not the address (default for one-to-one). This can be altered by configuring toplink_mappings.xml
, as in the following example:
<database-mapping>
  <attribute-name>orders</attribute-name>
  <reference-class>taxonomy.Order</reference-class>
  <is-private-owned>true</is-private-owned>
</database-mapping>
You can also configure the activation itself to delete only the top level (master row), or to delete everything.
A receive operation appears in an inbound WSDL as:
<operation name="receive">
  <jca:operation ActivationSpec="oracle.tip.adapter.db.DBActivationSpec"
    …
    PollingStrategyName="DeletePollingStrategy"
    DeleteDetailRows="true"
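Conceptually, each polling interval under the physical delete strategy reads the unprocessed rows and then deletes them in the same transaction, so that they are not picked up again. The following is a simplified sketch only; the EMPLOYEE table name and the criteria placeholder are illustrative assumptions, and the adapter generates its own SQL internally:

```sql
-- Sketch: what one polling interval effectively does (not the adapter's
-- literal SQL). EMPLOYEE and (CRITERIA) are illustrative assumptions.
SELECT E.* FROM EMPLOYEE E WHERE (CRITERIA);
-- ... each row is published to the BPEL process ...
DELETE FROM EMPLOYEE WHERE (CRITERIA);
COMMIT;
```

Because the read and the delete occur together, a row processed in one interval can never be returned by the next polling query.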
The logical delete polling strategy involves updating a special field on each row processed, and updating the WHERE
clause at run time to filter out processed rows. It mimics a logical delete, wherein application rows are rarely deleted; instead, a status column such as isDeleted
is set to true. You must provide the status column and the read value; the modified WHERE
clause and the post-read update are handled automatically by the database adapter.
Preconditions: You must have the logical delete privilege or a one-time alter schema (add column) privilege on the source table. Table 4-6 describes the requirements for using the logical delete polling strategy.
Table 4-6 Logical Delete Polling Strategy Preconditions
Requirements Met | Conflicts With
---|---
Poll for inserts | No updates on source
No delete on source | Poll for deletes
Minimal SQL | --
Zero data replication | --
Minimal configuration | --
Allows raw SQL | --
Poll for updates | --
Poll for child updates | --
Concurrent polling | --
Note:
The requirements of the following are met, as follows:

Poll for updates: by adding a trigger

Poll for child updates: by adding a trigger

Concurrent polling: by specifying additional mark unread and reserved values
Configuration: The logical delete polling strategy requires minimal configuration. You must specify the mark read column, and the value that indicates a processed record.
A receive operation appears in an inbound WSDL as:
<operation name="receive">
  <jca:operation ActivationSpec="oracle.tip.adapter.db.DBActivationSpec"
    …
    PollingStrategyName="LogicalDeletePollingStrategy"
    MarkReadField="STATUS"
    MarkReadValue="PROCESSED"
Given the configuration for logical delete, the database adapter appends the following WHERE
clause to every polling query:
AND ((STATUS IS NULL) OR (STATUS <> 'PROCESSED'))
Database Configuration: A status column on the table being polled must exist. If it does not exist already, you can add one to an existing table.
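For example, one way to add such a column to an existing table is sketched below; the EMPLOYEE table, the column size, and the initial value are illustrative assumptions:

```sql
-- Assumption: polling the EMPLOYEE table used elsewhere in this chapter.
ALTER TABLE EMPLOYEE ADD (STATUS VARCHAR2(30));

-- Optionally mark existing rows so that they are not picked up
-- the first time the adapter polls the table.
UPDATE EMPLOYEE SET STATUS = 'PROCESSED';
COMMIT;
```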
Support for Polling for Updates: Given that rows are not deleted with each read, it is possible to read a row multiple times. You should add a trigger to reset the mark read field whenever a record is changed, as follows:
create trigger Employee_modified
before update on Employee
for each row
begin
  :new.STATUS := 'MODIFIED';
end;
Support for Concurrent Access Polling: Just as a single instance should never process an event more than once, the same applies to a collection of instances. Therefore, before processing a record, an instance needs to reserve that record with a unique value. Again, the status column can be used:
<operation name="receive">
  <jca:operation ActivationSpec="oracle.tip.adapter.db.DBActivationSpec"
    …
    PollingStrategyName="LogicalDeletePollingStrategy"
    MarkReadField="STATUS"
    MarkUnreadValue="UNPROCESSED"
    MarkReservedValue="RESERVED-1"
    MarkReadValue="PROCESSED"
The polling query instead looks like the following:
Update EMPLOYEE set STATUS = 'RESERVED-1' where (CRITERIA) AND (STATUS = 'UNPROCESSED');
Select … from EMPLOYEE where (CRITERIA) AND (STATUS = 'RESERVED-1');
The after-read UPDATE
is faster because it can update all reserved rows in a single statement:
Update EMPLOYEE set STATUS = 'PROCESSED' where (CRITERIA) AND (STATUS = 'RESERVED-1');
Sequencing Table: Last-Read Id
This polling strategy involves using a helper table to remember a sequence value. The source table is not modified; instead, the rows that have been read are recorded in a separate helper table. A sequence value of 1000
, for example, means that every record with a sequence less than that value has already been processed. Because many tables have some counter field that is always increasing and maintained by triggers or the application, this strategy can often be used for noninvasive polling. No fields on the processed row ever need to be modified by the database adapter.
Native sequencing with a preallocation size of 1
can ensure that rows are inserted with primary keys that are always increasing over time.
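A native sequence with an increment of 1 (mirroring a preallocation size of 1) might be set up as in the following sketch; the sequence name, trigger name, and the assumption that EMPLOYEE has a numeric primary key column ID are all illustrative:

```sql
-- Assumption: EMPLOYEE has a numeric primary key column ID.
-- NOCACHE with INCREMENT BY 1 keeps the generated keys strictly
-- increasing in insertion order.
CREATE SEQUENCE EMPLOYEE_SEQ START WITH 1 INCREMENT BY 1 NOCACHE;

CREATE OR REPLACE TRIGGER EMPLOYEE_ASSIGN_ID
BEFORE INSERT ON EMPLOYEE
FOR EACH ROW
BEGIN
  SELECT EMPLOYEE_SEQ.NEXTVAL INTO :new.ID FROM DUAL;
END;
```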
This strategy is also called a nondestructive delete because no updates are made to the source rows, and a sequencing strategy such as the sequence
field can be used to order the rows in a sequence for processing. When the rows are ordered in a line, the database adapter knows which rows are processed and which are not with a single unit of information.
Preconditions: You must have a sequencing table or create table privilege on the source schema. The source table has a column that is monotonically increasing with every INSERT
(an Oracle native sequenced primary key) or UPDATE
(the last-modified timestamp). Table 4-7 describes the requirements for using the sequencing polling strategy.
Table 4-7 Sequencing Polling Strategy Preconditions
Requirements Met | Conflicts With
---|---
Poll for inserts | Poll for deletes
Poll for updates | Allows raw SQL
No delete on source | Concurrent polling
No updates on source | Poll for child updates
One extra SQL select | --
Zero data replication | --
Moderate configuration | --
Configuration: A separate helper table must be defined. On the source table, you must specify which column is always increasing.
<operation name="receive">
  <jca:operation ActivationSpec="oracle.tip.adapter.db.DBActivationSpec"
    …
    PollingStrategyName="SequencingPollingStrategy"
    SequencingFieldName="MODIFIED_DATE"
    SequencingFieldType="java.sql.Date"
    SequencingTableNameFieldValue="EMPLOYEE"
    SequencingTableName="SEQUENCING_HELPER"
    SequencingTableNameFieldName="TABLE_NAME"
    SequencingTableValueFieldName="LAST_READ_DATE"
The sequencing field type can be excluded if it is actually a number.
Database Configuration: A sequencing table must be configured once for a given database. Multiple processes can share the same table. Given the ActivationSpec
specified in the preceding example, the CREATE TABLE
command looks as follows:
CREATE TABLE SEQUENCING_HELPER
(
  TABLE_NAME VARCHAR2(32) NOT NULL,
  LAST_READ_DATE DATE
);
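Before the first poll, the helper table typically needs a seed row for the table being polled, so that the first polling query has a baseline value to compare against. A sketch follows; the seed date is an illustrative assumption:

```sql
-- Seed the helper so the first poll has a baseline to compare against.
-- The initial date here is an illustrative assumption; choose a value
-- earlier than any row you want the adapter to pick up.
INSERT INTO SEQUENCING_HELPER (TABLE_NAME, LAST_READ_DATE)
VALUES ('EMPLOYEE', TO_DATE('2000-01-01', 'YYYY-MM-DD'));
COMMIT;
```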
Polling for Updates: In the preceding example, the polling is for new objects or updates, because every time an object is changed, the modified time is updated.
A sample trigger to set the modified time on every insert
or update
is as follows:
create trigger Employee_modified
before insert or update on Employee
for each row
begin
  :new.modified_date := sysdate;
end;
Using a Sequence Number: A sequence number can be used for either insert or update polling. Native sequencing returns monotonically increasing primary keys, as long as an increment by 1 is used. You can also use the sequence number of a materialized view log.
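With an ever-increasing ID column, the effective polling cycle can be pictured as in the following sketch. This is illustration only: the adapter builds and maintains these statements automatically, and the LAST_READ_ID column name and the literal value 1054 are assumptions for the example:

```sql
-- Read rows beyond the remembered sequence value ...
SELECT *
FROM EMPLOYEE
WHERE ID > (SELECT LAST_READ_ID
            FROM SEQUENCING_HELPER
            WHERE TABLE_NAME = 'EMPLOYEE');

-- ... then advance the remembered value to the highest ID just read.
UPDATE SEQUENCING_HELPER
SET LAST_READ_ID = 1054  -- illustrative value
WHERE TABLE_NAME = 'EMPLOYEE';
```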
Sequencing Table: Last Updated
This polling strategy involves using a helper table to remember a last_updated
value. A last_updated
value of 2005-01-01 12:45:01 000
, for example, means that every record last updated at that time or earlier has already been processed. Because many tables have rows with a last_updated
or creation_time
column maintained by triggers or the application, this strategy can often be used for noninvasive polling. No fields on the processed row ever need to be modified by the database adapter.
This strategy is also called a nondestructive delete because no updates are made to the source rows, and a sequencing strategy such as the last_updated
field can be used to order the rows in a sequence for processing. When the rows are ordered in a line, the database adapter knows which rows are processed and which are not with a single unit of information.
See Sequencing Table: Last-Read Id for information about preconditions and configuration.
The control table polling strategy involves using a control table to store the primary key of every row that has yet to be processed. With a natural join between the control table and the source table (by primary key), polling against the control table is practically the same as polling against the source table directly. However, an extra layer of indirection allows the following:
Destructive polling strategies such as the delete polling strategy can be applied to rows in the control table alone, while shielding any rows in the source table.
Only rows that are meant to be processed have their primary key appear in the control table. Information that is not in the rows themselves can be used to control which rows to process (a good WHERE
clause may not be enough).
The entire row is not copied to a control table, and any structure under the source table, such as detail rows, can also be raised without copying.
Streams and materialized view logs make good control tables.
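Polling against the control table is then a join back to the source table by primary key. A simplified sketch follows, using the DEPT_CONTROL and DEPARTMENT tables that appear in the trigger example later in this section; the adapter generates its own equivalent SQL:

```sql
-- Read pending departments through the control table, then clear only
-- the control rows (a nondestructive delete: the source rows survive).
SELECT D.*
FROM DEPT_CONTROL C
JOIN DEPARTMENT D ON D.DEPTNO = C.DEPTNO;

DELETE FROM DEPT_CONTROL;
COMMIT;
```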
Preconditions: You must have create/alter triggers privilege on the source table. Table 4-8 describes the requirements for using the control table polling strategy.
Table 4-8 Control Table Polling Strategy Preconditions
Requirements Met | Conflicts With
---|---
Poll for inserts | Advanced configuration: the native XML from the database will have a control header, and triggers are required.
Poll for updates | --
Poll for deletes | --
Poll for child updates | Minimal data replication (primary keys are stored in the control table)
No delete on source | --
No updates on source | --
No extra SQL selects | --
Concurrent polling | --
Allows raw SQL | --
Auditing | --
Using triggers, whenever a row is modified, an entry is added to a control table, containing the name of the master table and the primary keys. At design time, the control table is defined to be the root table, with a one-to-one mapping to the master table, based on the matching primary keys. The control table can contain extra control information, such as a timestamp and operation type (INSERT
, UPDATE
, and so on).
The delete polling strategy is useful with this setup. It is important to keep the control table small, and if the option shouldDeleteDetailRows="false"
is used, then only the control rows are deleted, giving you a nondestructive delete (the DELETE
is not cascaded to the real tables).
It is possible to reuse the same control table for multiple master tables. In TopLink, you can map the same table to multiple descriptors by mapping the control table as one abstract class with multiple children. Each child has a unique one-to-one mapping to a different master table. The advantage of this approach is that you can specify for each child a class indicator field and value so that you do not need an explicit WHERE
clause for each polling query.
Some sample triggers follow for polling for changes both to a department table and any of its child employee rows:
CREATE OR REPLACE TRIGGER EVENT_ON_DEPT
AFTER INSERT OR UPDATE ON DEPARTMENT
REFERENCING NEW AS newRow
FOR EACH ROW
DECLARE
  X NUMBER;
BEGIN
  SELECT COUNT(*) INTO X FROM DEPT_CONTROL WHERE (DEPTNO = :newRow.DEPTNO);
  IF X = 0 THEN
    INSERT INTO DEPT_CONTROL VALUES (:newRow.DEPTNO);
  END IF;
END;

CREATE OR REPLACE TRIGGER EVENT_ON_EMPLOYEE
AFTER INSERT OR UPDATE ON EMPLOYEE
REFERENCING OLD AS oldRow NEW AS newRow
FOR EACH ROW
DECLARE
  X NUMBER;
BEGIN
  SELECT COUNT(*) INTO X FROM DEPT_CONTROL WHERE (DEPTNO = :newRow.DEPTNO);
  IF X = 0 THEN
    INSERT INTO DEPT_CONTROL VALUES (:newRow.DEPTNO);
  END IF;
  IF (:oldRow.DEPTNO <> :newRow.DEPTNO) THEN
    SELECT COUNT(*) INTO X FROM DEPT_CONTROL WHERE (DEPTNO = :oldRow.DEPTNO);
    IF (X = 0) THEN
      INSERT INTO DEPT_CONTROL VALUES (:oldRow.DEPTNO);
    END IF;
  END IF;
END;
Use Cases for Polling Strategies
Polling strategies are demonstrated in the following tutorials:
PollingLogicalDeleteStrategy
PollingLastUpdatedStrategy
PollingLastReadIdStrategy
PollingControlTableStrategy
MasterDetail
(for the physical delete polling strategy)
For these files, go to
Oracle_Home\bpel\samples\tutorials\122.DBAdapter
Advanced Use Cases for Polling Strategies
Advanced polling strategies are demonstrated in the following tutorials:
DistributedPolling
PollingExternalSequencing
PollingFileSequencingStrategy
PollingForChildUpdates
PollingNoAfterReadStrategy
PollingOracleSCNStrategy
PollingPureSQLOtherTableInsert
PollingPureSQLSysdateLogicalDelete
PollingWithParameters
For these files, go to
Oracle_Home\bpel\samples\tutorials\122.DBAdapter\advanced\polling
To use the database adapter demonstrated in the 122.DBAdapter
tutorial, go to
Oracle_Home\bpel\samples\tutorials\122.DBAdapter
Table 4-9 shows the database adapter samples that are provided with Oracle BPEL Process Manager.
Table 4-9 Database Adapter Use Cases
Tutorial Name | Description |
---|---|
|
Illustrates the outbound |
|
Describes a simple scenario in which the file adapter is used to provide instance XML to a stored procedure, |
|
Illustrates the use of a native (CSV) input data file defined in a custom format. The input file is a purchase order, which the file adapter processes and publishes as an XML message to the File2Table BPEL process. The message is transformed to another purchase order format and routed to an |
|
Illustrates the outbound |
|
Illustrates the extra steps (based on the |
|
Illustrates a workaround for using PL/SQL RECORD types. JPublisher is used to create a corresponding OBJECT type whose attributes match the fields of the RECORD, and conversion APIs that convert from RECORD to OBJECT and vice versa. JPublisher also generates a wrapper procedure (or function) that accepts the OBJECT and invokes the underlying method using the conversion APIs in both directions. The invoked methods must be installed in an Oracle database (not Oracle Lite). |
|
Illustrates how to migrate data from one set of tables to another. The sample uses the database adapter to read data from one set of tables, process the data, and write it in to another set of database tables using the adapter. |
|
Illustrates the outbound |
|
Illustrates an inbound polling operation to poll XML instances from the |
|
Illustrates an inbound polling operation to poll XML instances from the |
|
Illustrates an inbound polling operation to poll XML instances from the |
|
Illustrates an inbound polling operation to poll XML instances from the |
|
Illustrates how to poll a table based on a date field. |
|
Illustrates how to bypass the JDeveloper BPEL Designer |
|
Illustrates the outbound |
|
Illustrates a workaround for using REF CURSORs. The solution involves the use of a Java stored procedure to convert the corresponding |
|
Illustrates the outbound |
|
Illustrates the outbound |
|
Illustrates the outbound |
See Table 4-1 for the structure of the MOVIES
table, which is used for many of the use cases. The readme.txt
files that are included with most of the samples provide instructions.
This section comprises the following two use cases:
This use case describes how, by using the Adapter Configuration Wizard, you can import tables from the database, specify relationships spanning multiple tables, generate corresponding XML schema definitions, and create services to expose the necessary SQL or database operations. These services are consumed to define partner links that are used in the BPEL process. You use the Adapter Configuration Wizard both to create and to edit adapter services.
This section contains the following topics:
After you create a BPEL project in JDeveloper BPEL Designer, you can start defining a database adapter. If you lose focus on the window, use alt+tab
to get it back.
To launch the Adapter Configuration Wizard:
Ensure that All Process Activities is selected in the drop-down list of the Component Palette section.
Drag and drop a PartnerLink activity onto the right side of the designer window.
Enter a name in the Create Partner Link window.
Click the Define Adapter Service icon to start the Adapter Configuration Wizard.
Click Next on the Welcome window.
Select Database Adapter for the Adapter Type, and then click Next.
In the Service Name window, enter the service name, and a description for it.
The description is optional.
Click Next. The Service Connection wizard is displayed.
See Connecting to a Database to continue using the wizard.
Figure 4-2 shows where you select the database connection that you are using with the service. This is the database from which you import tables to configure the service.
You can provide a Java Naming and Directory Interface (JNDI) name to identify the database connection, or use the default name that is provided. The JNDI name acts as a placeholder for the connection used when your service is deployed to Oracle BPEL Server. This enables you to use different databases for development and production. The Adapter Configuration Wizard captures the design-time connection in the generated WSDL as well, to serve as a fallback in case the run-time lookup fails.
Figure 4-2 Adapter Configuration Wizard: Service Connection
Note the following:
In production environments, it is recommended that you add the JNDI entry to the adapter deployment descriptor (oc4j-ra.xml
). This way, the database adapter performs better by working in a managed mode. In a nonmanaged mode, the database adapter uses the design-time connection information.
When you click Next, a connection to the database is attempted. If a connection cannot be made, you are not able to proceed to the next window, even if you are editing an existing partner link.
See Selecting the Operation Type to continue using the wizard.
Figure 4-3 shows where you indicate the type of operation you want to configure for this service.
Figure 4-3 Adapter Configuration Wizard: Operation Type
The following operation types are available:
Call a Stored Procedure or Function
Select this option if you want the service to execute a stored procedure or function. See Stored Procedure and Function Support for more information.
Perform an Operation on a Table
Select this option for outbound operations. You can select Insert or Update, Insert Only, Update Only, Delete, Select, or any combination of the five. These operations loosely translate to SQL INSERT
, UPDATE
, DELETE
, and SELECT
operations. See DML Operations for more information.
Note the following:
The operations merge, insert, update, and write are created from selecting Insert or Update.
The preceding Invoke window shows the MergeService
service name as part of the Select operation, that is, MergeServiceSelect.
The queryByExample operation appears in every WSDL.
If the Operation list is initially blank, reselect the partner link and click the Operation list again.
Note:
The operation Update Only
sometimes performs inserts or deletes for child records. That is, an update to a master row could involve a new or deleted detail row. So if the input to update contains only one detail record, then the other detail records in the table are deleted.

Poll for New or Changed Records in a Table
Select this option for an inbound operation (that is, an operation that is associated with a Receive activity). This operation type polls a specified table and returns for processing any new rows that are added. You can also specify the polling frequency. See Polling Strategies for more information.
See Selecting and Importing Tables to continue using the wizard.
Figure 4-4 shows where you select the root database table for your operation. If you are using multiple, related tables, then this is the highest-level table (or highest parent table) in the relationship tree.
Figure 4-4 Adapter Configuration Wizard: Select Table
This window shows all the tables that have been previously imported in the JDeveloper BPEL Designer project (including tables that were imported for other partner links). This enables you to reuse configured table definitions across multiple partner links in a given BPEL project. These are the generated TopLink descriptors.
If the root database table you want to use for this operation has not been previously imported, you can click Import Tables.... If the table structure has changed on the database, for example, you can reimport the table and overwrite the previously configured table definition.
Note:
If you reimport a table, you lose any custom relationships you may have defined on that table, as well as any custom WHERE
clauses (if the table being imported was the root table).

See Defining Primary Keys to continue using the wizard.
If any of the tables you have imported do not have primary keys defined on the database, you are prompted to provide a primary key for each one, as shown in Figure 4-5. You must specify a primary key for all imported tables. You can select multiple fields if you need to specify a multipart primary key.
Figure 4-5 Adapter Configuration Wizard: Define Primary Keys
The primary key that you specify here is recorded on the offline database table and is not persisted back to the database schema; the database schema is left untouched.
See Creating Relationships to continue using the wizard.
Figure 4-6 shows the relationships defined on the root database table and any other related tables. You can click Create Relationships… to create a new relationship between two tables, or Remove Relationship to remove it.
Figure 4-6 Adapter Configuration Wizard: Relationships
Note the following regarding creating relationships:
If foreign key constraints between tables already exist on the database, then two relationships are created automatically when you import the tables: a one-to-one (1:1) from the source table (the table containing the foreign key constraints) to the target table, as well as a one-to-many (1:M) from the target table to the source table.
As Figure 4-6 shows, you see only the relationships that are reachable from the root database table. If, after removing a relationship, other relationships are no longer reachable from the root table, then they are not shown in the Relationships window. Consider the following set of relationships:
A --1:1--> B --1:1--> C --1:M--> D --1:1--> E --1:M--> F
(1) (2) (3) (4) (5)
If you remove relationship 3, then you see only:
A --1:1--> B
B --1:1--> C
If you remove relationship 2, then you see only:
A --1:1--> B
If you remove relationship 1, you no longer see any relationships.
Figure 4-7 shows where you can create a new relationship.
To create a new relationship:
Select the parent and child tables.
Select the mapping type (one-to-many, one-to-one, or one-to-one with the foreign key on the child table).
Associate the foreign key fields to the primary key fields.
Optionally name the relationship (a default name is generated).
Note:
Only tables that are reachable from the root table can be selected as a parent.

When tables are initially imported into the wizard, a TopLink direct-to-field mapping corresponding to each field in the database is created. Consider the schemas shown in Figure 4-8 and Figure 4-9:
Immediately after importing these two tables, the following mappings in the Employee
descriptor are created:
Employee:
id
(direct mapping to the ID
field, for example, 151)
name
(direct mapping to the NAME
field, for example, Stephen King)
addrId
(direct mapping to the ADDR_ID
field, for example, 345)
When creating a relationship mapping, the direct-to-field mappings to the foreign key fields are removed and replaced with a single relationship (one-to-one, one-to-many) mapping. Therefore, after creating a one-to-one relationship between Employee
and Address
called homeAddress
, the Employee
descriptor looks like this:
Employee:
id
name
homeAddress
(one-to-one mapping to the ADDRESS
table; this attribute now represents the entire Address
object.)
When a relationship is removed, the direct mappings for the foreign keys are restored.
The following ways of specifying one-to-one relationships are supported:
The foreign keys exist on the parent table, as shown in Figure 4-10 and Figure 4-11.
The foreign keys exist on the child table, as shown in Figure 4-12 and Figure 4-13.
Figure 4-10 Foreign Keys on the Parent Table EMPLOYEE
Figure 4-11 Foreign Keys on the Parent Table ADDRESS
Figure 4-12 Foreign Keys on the Child Table EMPLOYEE
Figure 4-13 Foreign Keys on the Child Table ADDRESS
Figure 4-14 shows the object filter that is created from the imported table definitions, including any relationships that you may have defined.
Figure 4-14 Adapter Configuration Wizard: Object Filtering
If your object filter contains self-relationships (for example, the employee-to-employee manager relationship), then you see these as loops in the tree. These loops are not present in the XSD. This is the descriptor object model, not the XSD.
See Defining a WHERE Clause to continue using the wizard.
If your service contains a SELECT
query (that is, inbound polling services, or outbound services that contain a SELECT
), then you can customize the WHERE
clause of the SELECT
statement.
Figure 4-15 shows where you define a WHERE
clause for an outbound service. For inbound services, you do not see the Parameters section.
Figure 4-15 Adapter Configuration Wizard: Define WHERE Clause
Note:
TheWHERE
clause applies to SELECT
operations only (that is, polling for new or changed records, or performing a SELECT
operation on a table). It does not apply to INSERT
, UPDATE
, and DELETE
operations.The most basic expression in a WHERE
clause can be one of the following three cases, depending on what the right-hand side (RHS) is:
EMP.ID = 123
In this case, the RHS is a literal value. This RHS is the Literal option shown in Figure 4-16.
EMP.ADDR_ID = ADDR.ID
In this case, the RHS is another database field. This RHS is the Query Key option shown in Figure 4-16.
EMP.ID = ?
In this case, the RHS value must be specified at run time. This is the Parameter option shown in Figure 4-16.
You create the parameters that you need in the WHERE
clause by clicking Add before you move on to build the WHERE
clause. To build the WHERE
clause, click Edit… to launch the Expression Builder, as shown in Figure 4-16.
See the following for more information:
The TopLink page on OTN at
http://www.oracle.com/technology/products/ias/toplink/index.html
TopLink documentation at
http://download.oracle.com/docs/cd/B14099_04/web.htm#toplink
This site contains documentation on configuring expressions using the XPath Expression Builder.
See Choosing an After-Read Strategy to continue using the wizard.
When configuring an inbound operation, you have the following options about what to do after a row or rows have been read:
Figure 4-17 shows these options.
Figure 4-17 Adapter Configuration Wizard: After-Read Strategies
See Polling Strategies for more information.
With this option, the rows are deleted from the database after they have been read and processed by the adapter service.
With this option, you update a field in the root database table to indicate that the rows have been read. The WHERE
clause of the query is updated automatically after you complete the configuration, as shown in Figure 4-18.
Figure 4-18 Adapter Configuration Wizard: Logical Delete
Using this approach, your database table looks something like Figure 4-19.
Note the following:
Rows 150 and 153 have been previously read and processed.
At the next polling event, row 152 will be read and processed because it contains UNPROCESSED
in the Status column. Because an explicit Unread Value
was provided, row 151 will not be read.
Row 154 has been flagged as LOCKED
and will not be read. You can use this reserved value if your table is in use by other processes.
With this option, you are keeping track of the last-read rows in a separate sequence table. Figure 4-20 shows the information you provide. The WHERE
clause of your query is updated automatically after you complete the configuration.
Figure 4-20 Adapter Configuration Wizard: Last Read IDs Table
Using these settings, your sequence table looks something like Figure 4-21.
Whenever a row is read, this table is updated with the ID that was just read. Then when the next polling event occurs, it will search for rows that have an ID greater than the last-read ID (154).
Typical columns used are event_id
, transaction_id
, scn
(system change number), id
, or last_updated
. These columns typically have (monotonically) increasing values, populated from a sequence number or sysdate
.
This section describes what happens internally at design time when you use the Adapter Configuration Wizard to configure the database adapter.
When you import a table, the offline table support of JDeveloper BPEL Designer creates an offline snapshot of the database table. You can modify this offline version of the table (for example, you can add a foreign key constraint) without affecting the real database table. This creates a TopLink descriptor and associated Java source file for the table, and all the attributes in the descriptor are automapped to their corresponding database columns. The TopLink descriptor maps the Java class to the offline database table.
Most typical data columns are mapped as direct-to-field mappings, meaning that the value in the database column is directly mapped to the attribute. For example, a SALARY
column in the database is mapped to a salary
attribute in the object model, and that attribute contains the value of that column.
If foreign key constraints are already present in the imported tables, then relationship mappings are autogenerated between the tables. To cover as many scenarios as possible, two mappings are generated for every foreign key constraint encountered: a one-to-one mapping from the source table to the target table, and a one-to-many mapping in the opposite direction. After this is done, you are left with a TopLink Workbench project in your BPEL project.
Note:
The Java classes that are created as part of the descriptor generation process are never actually deployed with your process or used at run time. They are present at design time because TopLink Workbench expects each descriptor to be associated with a Java class. When your process is deployed, the mapping metadata is stored in toplink_mappings.xml
.

When you have finished importing tables, you must select a root database table. In doing so, you are actually selecting which TopLink descriptor stores the autogenerated query.
When you create or remove a relationship, you are actually modifying the TopLink descriptor mappings. Creating a new relationship does the following:
Creates a foreign key constraint in the offline database table
Adds a one-to-one or one-to-many mapping to the descriptor
Removes the direct-to-field mappings to the foreign key fields
Removing a relationship mapping does the following:
Removes the one-to-one or one-to-many mapping from the descriptor
Removes the foreign key constraint from the offline database table
Adds direct-to-field mappings for each foreign key field involved in the relationship
The following files are generated:
service_name.wsdl—contains the database adapter service definition
RootTable.xsd—the XML type definition of the root object
toplink_mappings.xml—contains the TopLink mapping metadata for your BPEL project. It is the only TopLink artifact that is deployed to the server.
This section explains an advanced use case, titled Polling File Sequencing Strategy. This use case is available at:
Oracle_Home\bpel\samples\tutorials\122.DBAdapter\advanced\polling\PollingFileSequencingStrategy
This scenario aims at performing completely non-intrusive polling. In this example, the sequencing strategy needs to remember only a single piece of information: the last-read id/sequence value. That value can be stored in a table, on the same database or a different one, or simply in a file.
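The mechanics of this strategy can be sketched in a few lines of Python. This is an illustrative model only, not adapter code; the helper names and the in-memory row format are assumptions:

```python
# Sketch of the sequencing-file polling idea: the only state kept between
# polls is the last-read sequence value, stored in a plain text file.
# Names and the row format here are illustrative, not adapter APIs.
import os

def read_last_id(path):
    # If the sequencing file does not exist yet, start from 0.
    if not os.path.exists(path):
        return 0
    with open(path) as f:
        return int(f.read().strip())

def poll(rows, path):
    # 'rows' stands in for SELECT ... WHERE seq > :lastReadId on the table.
    last = read_last_id(path)
    new_rows = [r for r in rows if r["seq"] > last]
    if new_rows:
        # Non-intrusive: only the sequencing file is updated, never the table.
        with open(path, "w") as f:
            f.write(str(max(r["seq"] for r in new_rows)))
    return new_rows
```

Because the polled table itself is never written to, this approach works even against a read-only source.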
This section contains the following topics:
Ensure that you run the SQL scripts setup.sql and reset.sql, available at:
Oracle_Home\bpel\samples\tutorials\122.DBAdapter\sql
The following are the steps to create a new BPEL project:
Open Oracle JDeveloper.
From the File menu, select New.
The New Gallery dialog box is displayed.
Select All Technologies from the Filter By box. This displays a list of available categories.
Expand the General node, and then select Projects.
Select BPEL Project from the Items group.
Click OK.
The Create BPEL Project dialog box is displayed.
Click OK.
You have created a new BPEL project.
After you create a BPEL project in Oracle JDeveloper, you can start defining a database adapter. To define a database adapter:
Drag and drop Database Adapter from the Component Palette onto the designer window.
The Adapter Configuration Wizard Welcome screen is displayed.
Click Next.
The Service Name dialog box is displayed.
Enter a service name as shown in Figure 4-22. The Description field is optional.
Click Next.
The Service Connection dialog box is displayed.
Select a database connection already defined in the project or create a new connection.
Click Next.
The Operation Type dialog box is displayed.
Select Poll for New or Changed Records in a table, as shown in Figure 4-23.
Click Next.
The Select Table dialog box is displayed.
Click Import Tables.
The Import Tables dialog box is displayed.
Double-click the Movies table to select it, as shown in Figure 4-24.
Click OK.
The Select Tables dialog box is displayed with the Movies table selected, as shown in Figure 4-25.
Click Next.
The Relationships dialog box is displayed.
In the Relationships dialog box, you can define relationships that are reachable from the root database table. However, in this example, you will not define any relationships. Click Next.
The Object Filtering dialog box is displayed. Click Next.
The After Read dialog box is displayed.
Select Update a Sequencing File, and then click Next.
The Sequencing File dialog box is displayed.
Enter the following information:
Sequencing File: Specify the location of the sequencing file. In this example, specify:
Oracle_Home\bpel\samples\tutorials\122.DBAdapter\advanced\polling\PollingFileSequencingStrategy\lastReadId.txt
Sequenced ID Field: Select LAST_UPDATED.
Figure 4-26 shows the Sequencing File dialog box with the values populated.
Figure 4-26 The Sequencing File Dialog Box
Click Next.
The Polling Options dialog box is displayed, as shown in Figure 4-27.
Figure 4-27 The Polling Options Dialog Box
In the Polling Options dialog box, retain the default values, and then click Next.
The Define Selection Criteria dialog box is displayed, as shown in Figure 4-28.
Figure 4-28 The Define Selection Criteria Dialog Box
Retain default values, and then click Next.
The Adapter Configuration Wizard-Finish screen is displayed, as shown in Figure 4-29.
Figure 4-29 The Adapter Configuration Wizard - Finish Screen
Click Finish.
The Create Partner Link dialog box is displayed with all the fields populated, as shown in Figure 4-30.
Figure 4-30 The Create Partner Link Dialog Box
Click OK.
Your screen will resemble Figure 4-31.
Right-click the project, select Deploy, point to ServerConnection1, and then click Deploy to Default Domain.
The Oracle BPEL Control will resemble Figure 4-32.
This use case shows a simple scenario for replicating data in one set of tables on one database to tables on another database. This scenario involves an inbound polling read on the source tables, and an outbound write/merge to destination tables.
In this use case, there are two sets of department and employee tables: SENDER_EMP and SENDER_DEPT, which are used for inbound data, and RECEIVER_EMP and RECEIVER_DEPT, which are used for outbound data.
Note that the operations against each set of tables are defined in separate WSDLs that can have their own Database Adapter JNDI (connection) names defined. This means that each set of tables could potentially reside in separate database schemas or instances.
This section comprises the following steps:
Before you create the application, you must run the SQL script create_schemas.sql, available at:
OracleAS_1\bpel\samples\tutorials\122.DBAdapter\MasterDetail\sql\oracle
This script creates four tables: SENDER_EMP and SENDER_DEPT, from which data is read by the inbound database adapter, and RECEIVER_EMP and RECEIVER_DEPT, in which new data is stored by the outbound database adapter.
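As a rough sketch of what the completed process does, the following Python uses an in-memory sqlite3 database to stand in for the two Oracle schemas. The column names (DEPTNO, DNAME) and the single-connection setup are illustrative assumptions:

```python
# Replication pattern from this use case: poll and consume rows from the
# SENDER table (as with the "Delete the Row(s) that were Read" option),
# then merge them into the RECEIVER table. sqlite3 stands in for Oracle.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE SENDER_DEPT  (DEPTNO INTEGER PRIMARY KEY, DNAME TEXT);
    CREATE TABLE RECEIVER_DEPT(DEPTNO INTEGER PRIMARY KEY, DNAME TEXT);
    INSERT INTO SENDER_DEPT VALUES (10, 'ACCOUNTING'), (20, 'RESEARCH');
""")

def replicate(conn):
    # Inbound side: read the new rows, then consume them.
    rows = conn.execute("SELECT DEPTNO, DNAME FROM SENDER_DEPT").fetchall()
    conn.execute("DELETE FROM SENDER_DEPT")
    # Outbound side: merge each row into the destination table.
    conn.executemany(
        "INSERT OR REPLACE INTO RECEIVER_DEPT (DEPTNO, DNAME) VALUES (?, ?)",
        rows)
    return len(rows)
```

In the real process, the inbound and outbound sides use separate WSDLs and JNDI names, so the two tables may live in different schemas or instances.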
The following are the steps to create a new ESB project:
Open Oracle JDeveloper.
From the File menu, select New.
The New Gallery dialog box is displayed.
Select All Technologies from the Filter By box. This displays a list of available categories.
Expand the General node, and then select Projects.
Select ESB Project from the Items group, as shown in Figure 4-33.
Click OK.
The Create ESB Project dialog box is displayed.
In the Project Name field, enter a descriptive name. For example, MasterDetails.
Click OK.
You have created a new ESB project, titled MasterDetails.
The following are the steps to create an inbound Database adapter:
Select Adapter Services from the Component Palette, and then drag and drop Database Adapter into the MasterDetails.esb project.
The Create Database Adapter Service dialog box is displayed.
Specify the following information in the Create Database Adapter Service dialog box:
Name: Type a name for the service. In this example, type InboundServices.
System/Group: Retain the default value.
Figure 4-34 shows the Create Database Adapter Service dialog box with the Name and System/Group fields filled in.
Figure 4-34 Defining the Database Adapter Service
Under Adapter Service WSDL, click the Configure adapter service wsdl icon.
The Adapter Configuration wizard Welcome page is displayed.
Click Next.
The Service Name dialog box is displayed with the Service Name field filled in.
Retain the service name, and click Next.
The Service Connection dialog box is displayed, as shown in Figure 4-35.
Figure 4-35 The Service Connection Dialog Box
Click New to define a database connection.
The Create Database Connection Wizard Welcome page is displayed.
Click Next.
The Type dialog box is displayed.
Enter the following information in the Type dialog box:
In the Connection Name field, specify a unique name for the database connection. In this example, type masterdetails.
From the Connection Type box, select Oracle (JDBC).
Figure 4-36 shows the Type dialog box.
Figure 4-36 Specifying the Connection Name and Type of Connection
Click Next.
The Authentication dialog box is displayed.
Enter the authentication credentials in the following fields:
In the UserName field, specify the user name for the database connection. In this example, type scott.
In the Password field, specify the password for the database connection. In this example, type tiger.
Leave the Role field blank.
Select Deploy Password.
Figure 4-37 shows the Authentication dialog box with the credential fields populated.
Figure 4-37 Specifying the Authentication Credentials
Click Next.
The Connection dialog box is displayed.
Enter information in the following fields:
In the Driver list, retain the default value, Thin.
In the Host Name field, retain the default value, localhost.
In the JDBC Port field, specify the port number for the database connection. In this example, type 1521.
In the SID field, specify a unique SID value for the database connection. In this example, type ORCL.
Figure 4-38 shows the Connection dialog box.
Figure 4-38 Specifying the New Database Connection Information
Click Next.
The Test dialog box is displayed.
Click Test Connection to determine whether the specified information establishes a connection with the database.
Click Finish to complete the process of creating a new database connection.
The Service Connection dialog box is displayed, providing a summary of the database connection, as shown in Figure 4-39.
Figure 4-39 The Service Connection Dialog Box
Click Next.
The Operation dialog box is displayed, as shown in Figure 4-40.
Figure 4-40 Specifying the Operation Type
Select Poll for New or Changed Records in a Table, and then click Next.
The Select Table dialog box is displayed.
Click Import Tables to import tables.
The Import Tables dialog box is displayed, as shown in Figure 4-41.
Select the following query parameters in the Import Tables dialog box:
Schema: Select Scott
Name Filter: Click Query to retrieve objects from the database.
A list of available tables is displayed.
Select SENDER_DEPT and SENDER_EMP from the Available column, and move them to the Selected column by clicking the Add button.
Figure 4-42 shows the Add button.
The selected tables now appear in the Selected column, as shown in Figure 4-43.
Figure 4-43 Adding the Required Tables from the Schema
Click OK.
The Select Table dialog box is displayed with the tables you imported, as shown in Figure 4-44.
Figure 4-44 The Select Tables Dialog Box with the Imported Tables
Select a root table, and then click Next. In this example, select SCOTT.SENDER_DEPT.
The Relationships dialog box is displayed, as shown in Figure 4-45.
Click Next.
The Object Filtering dialog box is displayed, as shown in Figure 4-46.
Figure 4-46 The Object Filtering Dialog Box
Click Next.
The After Read dialog box is displayed.
In the After Read dialog box, select Delete the Row(s) that were Read, as shown in Figure 4-47, and then click Next.
The Polling Options dialog box is displayed, as shown in Figure 4-48.
Figure 4-48 The Polling Options Dialog Box
Retain the default values in the Polling Options dialog box, and then click Next.
The Define Selection Criteria dialog box is displayed, as shown in Figure 4-49.
Figure 4-49 The Define Selection Criteria Dialog Box
Retain the default query, and click Next. To define your own expression instead, click Edit.
The Adapter Configuration Wizard - Finish screen is displayed, as shown in Figure 4-50.
Figure 4-50 The Adapter Configuration Wizard- Finish
The following are the steps to create an outbound database adapter:
Select Adapter Services from the Component Palette, and then drag and drop Database Adapter into the MasterDetails.esb project.
The Create Database Adapter Service dialog box is displayed.
Specify the following information in the Create Database Adapter Service dialog box:
Name: Type a name for the service. In this example, type OutboundServices.
System/Group: Retain the default value.
Under Adapter Service WSDL, click the Configure adapter service wsdl icon.
The Adapter Configuration wizard Welcome page is displayed.
Click Next.
The Service Name dialog box is displayed with the Service Name field filled in.
Retain the service name, and click Next.
The Service Connection dialog box is displayed.
Select the connection that you already created, and then click Next.
The Operation dialog box is displayed, as shown in Figure 4-51.
Figure 4-51 Specifying the Operation Type
Select Perform an Operation on a Table, and then click Next.
The Select Table dialog box is displayed.
Click Import Tables to import tables.
The Import Tables dialog box is displayed.
Select the following query parameters in the Import Tables dialog box:
Schema: Select Scott
Name Filter: Click Query to retrieve objects from the database.
A list of available tables is displayed.
Select RECEIVER_DEPT and RECEIVER_EMP from the Available column, and move them to the Selected column by clicking the Add button.
The selected tables now appear in the Selected column, as shown in Figure 4-52.
Figure 4-52 Adding the Required Tables from the Schema
Click OK.
The Select Table dialog box is displayed with the tables you imported, as shown in Figure 4-53.
Figure 4-53 The Select Tables Dialog Box with the Imported Tables
Select a root table, and then click Next. In this example, select SCOTT.RECEIVER_DEPT.
The Relationships dialog box is displayed, as shown in Figure 4-54.
Click Next.
The Object Filtering dialog box is displayed, as shown in Figure 4-55.
Figure 4-55 The Object Filtering Dialog Box
Click Next.
The Define Selection Criteria dialog box is displayed, as shown in Figure 4-56.
Figure 4-56 The Define Selection Criteria Dialog Box
Click Next.
The Finish screen confirming the creation of the outbound service is displayed, as shown in Figure 4-57.
Figure 4-57 The Adapter Configuration Wizard Screen - Finish Screen
The following are the steps to configure the routing service:
Double-click the InboundServices routing service.
The Routing Service window is displayed in the midpane of the Application window, as shown in Figure 4-58.
Select the Routing Rules tab, and then click the + icon to add a rule.
The Browse Target Service Operation dialog box is displayed.
Select the Merge service, as shown in Figure 4-59, and then click OK.
Figure 4-59 The Browse Target Service Operation Dialog Box
Click OK.
The middle pane of the application window will resemble Figure 4-60.
Figure 4-60 Selecting the Transformation Map
Double-click the Transformation icon, and then click the Select Create New Mapper File icon.
The Request Transformation Map dialog box is displayed, as shown in Figure 4-61.
Figure 4-61 The Request Transformation Map Dialog Box
Select the Create New Mapper File option.
Accept the default values, and click OK.
The Transformation window appears, as shown in Figure 4-62.
Select elements on the left-hand side of the mapper and drag them over to the elements on the right-hand side to set the map preferences.
The middle pane of the application window will resemble Figure 4-63.
Save and close the tab for the mapper.
Save and close the tab for the routing service.
The midpane of the MasterDetails project will resemble Figure 4-64.
Figure 4-64 The MasterDetails Project After Setting Map Preferences
Edit oc4j-ra.xml to reflect your database connection, for example, eis/DB/masterdetails. The oc4j-ra.xml file is available at the location where you have installed OracleAS. For example:
C:\product\10.1.3.1\OracleAS_1\j2ee\home\application-deployments\default\DbAdapter\oc4j-ra.xml
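A connector-factory entry for this JNDI name might look like the following sketch. The property names shown are typical of the 10.1.3 DbAdapter deployment descriptor, but verify them against the entries already present in your own oc4j-ra.xml; the connection values are examples only.

```xml
<!-- Sketch of a connector-factory entry; check property names and values
     against the existing entries in your oc4j-ra.xml. -->
<connector-factory location="eis/DB/masterdetails" connector-name="Database Adapter">
  <config-property name="driverClassName" value="oracle.jdbc.OracleDriver"/>
  <config-property name="connectionString" value="jdbc:oracle:thin:@localhost:1521:ORCL"/>
  <config-property name="userName" value="scott"/>
  <config-property name="password" value="tiger"/>
</connector-factory>
```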
Right-click the project, select Register with ESB, and then click LocalESBServer, as shown in Figure 4-65.
The Success page is displayed, as shown in Figure 4-66.
Figure 4-66 The ESB Registration Summary Page
To check the ESB control, open the ESB Console. For example: http://localhost:8888/esb/esb/EsbConsole.html
Now, your service window will resemble Figure 4-67.
Use the following steps to check execution in the ESB control:
Open the ESB Console.
Click Instances in the top-right corner.
Click the green arrow next to Search.
An instance resembling Figure 4-68 is displayed.
The Adapter Configuration Wizard generates everything you need to use the database adapter as part of a BPEL process. The following sections describe what happens in the background when you use the wizard, as well as performance considerations.
This section contains the following topics:
The wizard works by creating a TopLink Workbench project as part of your BPEL process project. This TopLink project contains metadata for mapping a database schema to objects/XML.
The TopLink mappings are stored in two formats. The toplink_mappings.mwp file is your design-time project, which you can edit visually in JDeveloper BPEL Designer. In contrast, the toplink_mappings.xml file is an XML representation of your project for use at run time. This is not as convenient as editing the .bpel file, where there is only one file and you can toggle between the Diagram View and Source.
Note the following:
Rather than edit the toplink_mappings.xml file directly, it is recommended that you edit toplink_mappings.mwp visually and regenerate all the BPEL artifacts to reflect the changes. You can do this by double-clicking the partner link to open the Adapter Configuration Wizard in edit mode, and then clicking through the wizard until you can click Finish. Changing the MWP version does not update the XML version until you click through the wizard in edit mode.
When running the wizard, any changes that affect the TopLink project (importing tables, creating or removing mappings, specifying an expression, and so on) are applied immediately, and are not undone if you cancel the wizard.
You cannot remove TopLink descriptors from your project from within the wizard because removing descriptors can potentially affect other partner links that are sharing that descriptor. To explicitly remove a descriptor, do the following:
Click the TopLink Mappings node under Application Sources under your project in the Applications - Navigator.
Select the descriptor from the tree in the TopLink Mappings - Structure pane.
Right-click and select Remove.
Currently, the Adapter Configuration Wizard does not have built-in support for partial object reading, that is, returning only specific fields from a table. To achieve this functionality, you can manually unmap any attributes that you do not want to include in the result set. Relationship mappings can be unmapped by removing them in the Relationships window, but direct mappings must be explicitly unmapped on the TopLink descriptor.
To unmap attributes:
Click the TopLink Mappings node under Application Sources under your project in the Applications - Navigator.
Select the descriptor containing the attribute you want to unmap from the tree in the TopLink Mappings - Structure pane.
Right-click the attribute you want to unmap and select Map As > Unmapped.
To remap the attribute, you can do the following:
Click the TopLink Mappings node under Application Sources under your project in the Applications - Navigator.
Select the descriptor containing the attribute you want to remap from the tree in the TopLink Mappings - Structure pane.
Right-click the attribute you want to remap and select Map As > Direct to Field.
The TopLink Mappings Editor automatically opens in the JDeveloper BPEL Designer window.
From Database Field, select the column to which the attribute should be mapped.
Open the corresponding Java source file and change the name. Then go to the structure/Mappings pane, and the newly named attribute will appear unmapped. Right-click it and select Map As to remap it. Then save and regenerate BPEL artifacts.
Keep in mind there are four views: the project view, the table/descriptor view, and the individual attribute/column view, all accessible from the TopLink Mappings structure window, plus the Java source view. The Java source view is not exactly a TopLink view, but can be treated as such (for example, when renaming a mapping).
Offline database tables are internal to the TopLink Workbench project. When you run the wizard, a TopLink project is created. When you import tables, they are saved as offline table definitions.
You can use the offline database tables to control the micromapping from database data type to XML data type. If you are using a third-party database, you may need to edit these objects as a workaround. For instance, a serial field on a third-party database may need to be mapped as Integer so that it is recognized by the wizard and mapped to xs:integer. See "Problems Importing Third-Party Database Tables" for more information.
Run the wizard once. Then add the following to your JDeveloper BPEL Designer project: database/schemaName/schemaName.schema
Click the table name (see Figure 4-69) after it is added to your project and change the types of any of the columns. When you run the wizard again (in edit mode) and click Finish, the toplink_mappings.xml and XSD files are remapped based on the new database data types.
and XSD file are remapped based on the new database data types.
Edit the offline table in your JDeveloper BPEL Designer project (see Figure 4-70), not the table reachable from the ConnectionManager. If you try the latter, the column types will not be editable, because you are editing the table itself, not an offline representation of it.
The database adapter is implemented using TopLink. For every business process, there is an underlying TopLink project, which contains metadata for mapping a database schema to objects/XML.
In TopLink terminology, toplink_mappings.xml is an XML deployment file. It is generated from a .mwp project file for use at run time. It is recommended that you edit the project in TopLink Workbench and periodically refresh toplink_mappings.xml.
The toplink_mappings.xml file is the run-time version of the TopLink Workbench project. If you edit this file directly, keep in mind that changes are not reflected in the design-time toplink_mappings.mwp file. Therefore, any changes are lost when you edit a partner link.
The toplink_mappings.xml file consists of a set of descriptors and mappings. Descriptors roughly represent a single table in the database schema, and mappings represent either a single column in the table (direct to field), or a one-to-one or one-to-many relationship to another table (foreign reference).
When modifying the toplink_mappings.xml file, the recommended approach is to use TopLink Workbench. The following is an example of a mapping and a descriptor from a toplink_mappings.xml file.
<mappings>
  <database-mapping>
    <attribute-name>fname</attribute-name>
    <read-only>false</read-only>
    <field-name>ACTOR.FNAME</field-name>
    <attribute-classification>java.lang.String</attribute-classification>
    <type>oracle.toplink.mappings.DirectToFieldMapping</type>
  </database-mapping>
and:
<descriptor>
  <java-class>BusinessProcess.Actor</java-class>
  <tables>
    <table>ACTOR</table>
  </tables>
  <primary-key-fields>
    <field>ACTOR.ID</field>
    <field>ACTOR.PROGRAM_ID</field>
    <field>ACTOR.PROGRAM_TYPE</field>
  </primary-key-fields>
However, the recommended approach is to work from the TopLink Workbench.
Useful attributes on foreign reference mappings (one-to-one, one-to-many) include:
<privately-owned>false/true
If a relationship is privately owned, that means that any target rows are deleted whenever any source rows are deleted.
This is important for one-to-many relationships because, if you remove Dept without first deleting its Emp rows, you get a 'child records found' constraint exception.
If you set privately-owned to true, the database adapter automatically deletes child records before deleting source rows. In XML everything is assumed to be privately owned; therefore, this tag is set to true by default.
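A small sqlite3 sketch (illustrative, not adapter code) shows why the delete order matters when the foreign key constraint is enforced:

```python
# Demonstrates why child rows must be deleted before their parent when a
# foreign key constraint is enforced -- the "privately owned" behavior.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
    CREATE TABLE DEPT (DEPTID INTEGER PRIMARY KEY);
    CREATE TABLE EMP  (EMPID INTEGER PRIMARY KEY,
                       DEPTID INTEGER REFERENCES DEPT(DEPTID));
    INSERT INTO DEPT VALUES (1);
    INSERT INTO EMP  VALUES (100, 1);
""")

def delete_dept(conn, deptid, privately_owned):
    if privately_owned:
        # Behavior when <privately-owned> is true: children deleted first.
        conn.execute("DELETE FROM EMP WHERE DEPTID = ?", (deptid,))
    conn.execute("DELETE FROM DEPT WHERE DEPTID = ?", (deptid,))
```

Deleting the DEPT row without first removing its EMP rows raises a constraint violation; with the child delete first, both deletes succeed.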
<uses-batch-reading>false/true and <uses-joining>false/true
There are two key optimizations in relation to reading rows with detail rows from the database.
The following shows the series of selects that TopLink uses to read two department objects (1 and 2), and their employees:
Unoptimized:
SELECT DEPT_COLUMNS FROM DEPT WHERE (subQuery)
SELECT EMP_COLUMNS FROM EMP WHERE (DEPTID = 1)
SELECT EMP_COLUMNS FROM EMP WHERE (DEPTID = 2)
Batch Reading:
SELECT DEPT_COLUMNS FROM DEPT WHERE (subQuery)
SELECT EMP_COLUMNS FROM EMP e, DEPT d WHERE ((subQuery) AND (e.DEPTID = d.DEPTID))
Joined Reading:
SELECT DEPT_COLUMNS, EMP_COLUMNS FROM DEPT d, EMP e WHERE ((subQuery) AND (e.DEPTID = d.DEPTID))
Joined reading appears to be the more advanced of the two, but currently it works only for one-to-one mappings, and the detail record cannot be null because the join is not an outer join.
Therefore, by default, batch reading is enabled but joined reading is not. This can easily be reversed to improve performance.
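The difference in query counts can be sketched in Python (a toy model, not TopLink code): the unoptimized strategy issues one EMP query per department row (N+1 queries total), while batch reading issues a single EMP query for the whole DEPT result set.

```python
# Toy model of the two read strategies; fetch_emps stands in for one SQL
# query against EMP, and 'counter' tracks how many queries were issued.
EMPS = {1: ["Smith", "Jones"], 2: ["King"]}

def fetch_emps(dept_ids, counter):
    counter[0] += 1
    return [(d, e) for d in dept_ids for e in EMPS.get(d, [])]

def read_unoptimized(dept_ids):
    counter = [1]  # 1 for the initial DEPT query itself
    result = {d: fetch_emps([d], counter) for d in dept_ids}  # one per dept
    return result, counter[0]

def read_batched(dept_ids):
    counter = [1]
    rows = fetch_emps(dept_ids, counter)  # one query for all departments
    result = {d: [r for r in rows if r[0] == d] for d in dept_ids}
    return result, counter[0]
```

Both strategies return the same data; only the number of round trips to the database differs.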
If you specify raw SQL for a query, that query cannot be a batched or joined read. To use batched or joined reading, you must not use raw SQL.
You can set other properties in toplink_mappings.xml.
The WSDL generated by the Adapter Configuration Wizard defines the adapter service. This WSDL specifies the various operations exposed by the service. Table 4-10 specifies the operations that are generated based on your selection in the wizard.
Table 4-10 WSDL Operations Generated by the Adapter Configuration Wizard
Adapter Configuration Wizard Selection | Generated WSDL Operation |
---|---|
Insert or Update | merge, insert, update, write |
Delete | delete |
Select | select (a named query operation, for example, movieSelect) and queryByExample |
Poll for New or Changed Records in a Table | receive |
Of the preceding operations, receive is associated with a BPEL receive activity, whereas the rest of the preceding operations are associated with a BPEL invoke activity. See "SQL Operations as Web Services" for more information on the preceding operations.
This section discusses the database adapter-specific parameters in the generated WSDL. This is intended for advanced users who want information about all the parameters in the generated WSDL.
A given database adapter service is meant for either continuous polling of a data source (translates to a JCA Activation) or for performing a one-time DML operation (translates to a JCA Interaction). In the continuous polling case, the WSDL contains only one receive operation with a corresponding activation spec defined in the binding section. In the one-time DML operation case, the WSDL contains multiple operations, all of which have a corresponding interaction spec defined in the binding section.
Table 4-11 specifies the JCA Activation/Interaction spec associated with each of the preceding operations:
Table 4-11 Operation and JCA Activation/Interaction Spec
WSDL Operation | JCA Activation/Interaction Spec |
---|---|
merge, insert, update, write, delete | oracle.tip.adapter.db.DBWriteInteractionSpec |
select, queryByExample | oracle.tip.adapter.db.DBReadInteractionSpec |
receive | oracle.tip.adapter.db.DBActivationSpec |
The following code example shows the binding section corresponding to the movie service to write to the Movies table:
<binding name="movie_binding" type="tns:movie_ptt"> <jca:binding /> <operation name="merge"> <jca:operation InteractionSpec="oracle.tip.adapter.db.DBWriteInteractionSpec" DescriptorName="BPELProcess1.Movies" DmlType="merge" MappingsMetaDataURL="toplink_mappings.xml" /> <input/> </operation> <operation name="insert"> <jca:operation InteractionSpec="oracle.tip.adapter.db.DBWriteInteractionSpec" DescriptorName="BPELProcess1.Movies" DmlType="insert" MappingsMetaDataURL="toplink_mappings.xml" /> <input/> </operation> <operation name="update"> <jca:operation InteractionSpec="oracle.tip.adapter.db.DBWriteInteractionSpec" DescriptorName="BPELProcess1.Movies" DmlType="update" MappingsMetaDataURL="toplink_mappings.xml" /> <input/> </operation> <operation name="write"> <jca:operation InteractionSpec="oracle.tip.adapter.db.DBWriteInteractionSpec" DescriptorName="BPELProcess1.Movies" DmlType="write" MappingsMetaDataURL="toplink_mappings.xml" /> <input/> </operation> <operation name="delete"> <jca:operation InteractionSpec="oracle.tip.adapter.db.DBWriteInteractionSpec" DescriptorName="BPELProcess1.Movies" DmlType="delete" MappingsMetaDataURL="toplink_mappings.xml" /> <input/> </binding>
Table 4-12 describes the DBWriteInteractionSpec parameters:
Table 4-12 DBWriteInteractionSpec Parameters
Parameter | Description | Mechanism to Update |
---|---|---|
DescriptorName | Indirect reference to the root database table that is being written to | Wizard updates automatically. Do not modify this manually. |
DmlType | The DML type of the operation (merge, insert, update, write, or delete) | Wizard updates automatically. Do not modify this manually. |
MappingsMetaDataURL | Reference to the file containing relational-to-XML mappings (toplink_mappings.xml) | Wizard updates automatically. Do not modify this manually. |
The following code example corresponds to the movie service to query the Movies table:
<binding name="movie_binding" type="tns:movie_ptt"> <jca:binding /> <operation name="movieSelect"> <jca:operation InteractionSpec="oracle.tip.adapter.db.DBReadInteractionSpec" DescriptorName="BPELProcess1.Movies" QueryName="movieSelect" MappingsMetaDataURL="toplink_mappings.xml" /> <input/> </operation> <operation name="queryByExample"> <jca:operation InteractionSpec="oracle.tip.adapter.db.DBReadInteractionSpec" DescriptorName="BPELProcess1.Movies" IsQueryByExample="true" MappingsMetaDataURL="toplink_mappings.xml" /> <input/> </operation> </binding>
Table 4-13 describes the DBReadInteractionSpec parameters:
Table 4-13 DBReadInteractionSpec Parameters
Parameter | Description | Mechanism to Update |
---|---|---|
DescriptorName | Indirect reference to the root database table that is being queried | Wizard updates automatically. Do not modify this manually. |
QueryName | Reference to the TopLink query | Wizard updates automatically. Do not modify this manually. |
IsQueryByExample | Indicates if this query is a queryByExample | Wizard updates automatically. Do not modify this manually. This parameter is needed for queryByExample only. |
MappingsMetaDataURL | Reference to the file containing relational-to-XML mappings (toplink_mappings.xml) | Wizard updates automatically. Do not modify this manually. |
The following code example shows the binding section corresponding to the MovieFetch service to poll the Movies table using DeletePollingStrategy:
<binding name="MovieFetch_binding" type="tns:MovieFetch_ptt"> <pc:inbound_binding/> <operation name="receive"> <jca:operation ActivationSpec="oracle.tip.adapter.db.DBActivationSpec" DescriptorName="BPELProcess1.Movies" QueryName="MovieFetch" PollingStrategyName="DeletePollingStrategy" MaxRaiseSize="1" MaxTransactionSize="unlimited" PollingInterval="5" MappingsMetaDataURL="toplink_mappings.xml" /> <input/> </operation> </binding>
Table 4-14 describes the DBActivationSpec parameters:
Table 4-14 DBActivationSpec Parameters
Parameter | Description | Mechanism to Update |
---|---|---|
DescriptorName | Indirect reference to the root database table that is being queried | Wizard updates automatically. Do not modify this manually. |
QueryName | Reference to the TopLink query | Wizard updates automatically. Do not modify this manually. |
PollingStrategyName | Indicates the polling strategy to be used | Wizard updates automatically. Do not modify this manually. |
PollingInterval | Indicates how often to poll the root database table for new events (in seconds) | Wizard updates automatically. Do not modify this manually. |
MaxRaiseSize | Indicates the maximum number of database records that can be raised at a time to the BPEL engine | Modify manually in the generated WSDL. |
MaxTransactionSize | Indicates the maximum number of rows to process as part of one database transaction | Modify manually in the generated WSDL. |
MappingsMetaDataURL | Reference to the file containing relational-to-XML mappings (toplink_mappings.xml) | Wizard updates automatically. Do not modify this manually. |
The following code example is the binding section corresponding to the MovieFetch service to poll the Movies table using LogicalDeletePollingStrategy:
<binding name="PollingLogicalDeleteService_binding" type="tns:PollingLogicalDeleteService_ptt"> <pc:inbound_binding/> <operation name="receive"> <jca:operation ActivationSpec="oracle.tip.adapter.db.DBActivationSpec" DescriptorName="PollingLogicalDeleteStrategy.Movies" QueryName="PollingLogicalDeleteService" PollingStrategyName="LogicalDeletePollingStrategy" MarkReadFieldName="DELETED" MarkReadValue="TRUE" MarkReservedValue="MINE" MarkUnreadValue="FALSE" MaxRaiseSize="1" MaxTransactionSize="unlimited" PollingInterval="10" MappingsMetaDataURL="toplink_mappings.xml" /> <input/> </operation> </binding>
Table 4-15 describes the additional DBActivationSpec parameters for LogicalDeletePollingStrategy:
Table 4-15 DBActivationSpec Parameters for LogicalDeletePollingStrategy
Parameter | Description | Mechanism to Update |
---|---|---|
MarkReadFieldName | Specifies the database column used to mark the row as read | Wizard updates automatically. Do not modify this manually. |
MarkReadValue | Specifies the value to which the database column is set to mark the row as read | Wizard updates automatically. Do not modify this manually. |
MarkReservedValue | Specifies the value to which the database column is set to mark the row as reserved. This parameter is optional. You can use it when multiple adapter instances are providing the same database adapter service. | Wizard updates automatically. Do not modify this manually. |
MarkUnreadValue | Specifies the value to which the database column is set to mark the row as unread. This parameter is optional. Use it when you want to indicate specific rows that the database adapter must process. | Wizard updates automatically. Do not modify this manually. |
The following code example shows the binding section corresponding to the MovieFetch
service to poll the Movies
table using SequencingPollingStrategy
:
<binding name="PollingLastReadIdStrategyService_binding"
         type="tns:PollingLastReadIdStrategyService_ptt">
  <pc:inbound_binding/>
  <operation name="receive">
    <jca:operation
      ActivationSpec="oracle.tip.adapter.db.DBActivationSpec"
      DescriptorName="PollingLastReadIdStrategy.Movies"
      QueryName="PollingLastReadIdStrategyService"
      PollingStrategyName="SequencingPollingStrategy"
      SequencingFieldName="SEQUENCENO"
      SequencingTableNameFieldValue="MOVIES"
      SequencingTableName="PC_SEQUENCING"
      SequencingTableNameFieldName="TABLE_NAME"
      SequencingTableValueFieldName="LAST_READ_ID"
      MaxRaiseSize="1"
      MaxTransactionSize="unlimited"
      PollingInterval="10"
      MappingsMetaDataURL="toplink_mappings.xml" />
    <input/>
  </operation>
</binding>
Table 4-16 describes all of the additional DBActivationSpec
parameters for SequencingPollingStrategy
:
Table 4-16 DBActivationSpec Parameters for SequencingPollingStrategy
Parameter | Description | Mechanism to Update |
---|---|---|
SequencingFieldName | Specifies the database column that is monotonically increasing | Wizard updates automatically. Do not modify this manually. |
SequencingFieldType | Specifies the type of the database column that is monotonically increasing. This parameter is optional. Use it if the type is not a number. | Wizard updates automatically. Do not modify this manually. |
SequencingTableNameFieldValue | Specifies the root database table for this polling query | Wizard updates automatically. Do not modify this manually. |
SequencingTableName | Name of the database table that is serving as the helper table | Wizard updates automatically. Do not modify this manually. |
SequencingTableNameFieldName | Specifies the database column in the helper table that is used to store the root database table name | Wizard updates automatically. Do not modify this manually. |
SequencingTableValueFieldName | Specifies the database column in the helper table that is used to store the sequence number of the last processed row in the root database table | Wizard updates automatically. Do not modify this manually. |
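The sequencing cycle implied by the binding above can be sketched with sqlite3 as a stand-in database. This is an illustration of the strategy, not the adapter's actual statements: a helper table (PC_SEQUENCING) stores the last processed SEQUENCENO per polled table, and a single UPDATE advances the high-water mark for the whole batch.

```python
# Sketch (sqlite3 stand-in) of SequencingPollingStrategy: read the high-water
# mark, fetch newer rows, then advance the mark with one UPDATE.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE MOVIES (SEQUENCENO INTEGER PRIMARY KEY, TITLE TEXT);
CREATE TABLE PC_SEQUENCING (TABLE_NAME TEXT PRIMARY KEY, LAST_READ_ID INTEGER);
INSERT INTO MOVIES VALUES (1, 'Alpha'), (2, 'Beta'), (3, 'Gamma');
INSERT INTO PC_SEQUENCING VALUES ('MOVIES', 1);
""")

def poll_once():
    (last,) = conn.execute(
        "SELECT LAST_READ_ID FROM PC_SEQUENCING WHERE TABLE_NAME = 'MOVIES'").fetchone()
    rows = conn.execute(
        "SELECT SEQUENCENO, TITLE FROM MOVIES WHERE SEQUENCENO > ? ORDER BY SEQUENCENO",
        (last,)).fetchall()
    if rows:  # one UPDATE "destroys" the whole batch, however large
        conn.execute(
            "UPDATE PC_SEQUENCING SET LAST_READ_ID = ? WHERE TABLE_NAME = 'MOVIES'",
            (rows[-1][0],))
    return rows

print([t for _, t in poll_once()])  # -> ['Beta', 'Gamma']
print(poll_once())                  # -> []
```

Note how a second poll returns nothing: the helper table, not a delete on the polled table, records which rows have been processed.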
See Deployment for details about the service
section of the WSDL.
From a database schema object, the wizard generates an XML schema (XSD) representation of that object. This schema is used by the BPEL process.
For example, from the table named Movies
, the following is generated:
<?xml version = '1.0' encoding = 'UTF-8'?>
<xs:schema targetNamespace="http://xmlns.oracle.com/pcbpel/adapter/db/top/SelectAllByTitle"
           xmlns="http://xmlns.oracle.com/pcbpel/adapter/db/top/SelectAllByTitle"
           elementFormDefault="unqualified"
           attributeFormDefault="unqualified"
           xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="MoviesCollection" type="MoviesCollection"/>
  <xs:element name="Movies" type="Movies"/>
  <xs:complexType name="MoviesCollection">
    <xs:sequence>
      <xs:element name="Movies" type="Movies" minOccurs="0" maxOccurs="unbounded"/>
    </xs:sequence>
  </xs:complexType>
  <xs:complexType name="Movies">
    <xs:sequence>
      <xs:element name="director" type="xs:string" minOccurs="0" nillable="true"/>
      <xs:element name="genre" type="xs:string" minOccurs="0" nillable="true"/>
      <xs:element name="rated" type="xs:string" minOccurs="0" nillable="true"/>
      <xs:element name="rating" type="xs:string" minOccurs="0" nillable="true"/>
      <xs:element name="releaseDate" type="xs:dateTime" minOccurs="0" nillable="true"/>
      <xs:element name="runTime" type="xs:double" minOccurs="0" nillable="true"/>
      <xs:element name="starring" type="xs:string" minOccurs="0" nillable="true"/>
      <xs:element name="status" type="xs:string" minOccurs="0" nillable="true"/>
      <xs:element name="synopsis" type="xs:string" minOccurs="0" nillable="true"/>
      <xs:element name="title" type="xs:string"/>
      <xs:element name="totalGross" type="xs:double" minOccurs="0" nillable="true"/>
      <xs:element name="viewerRating" type="xs:string" minOccurs="0" nillable="true"/>
    </xs:sequence>
  </xs:complexType>
  <xs:element name="findAllInputParameters" type="findAll"/>
  <xs:complexType name="findAll">
    <xs:sequence/>
  </xs:complexType>
  <xs:element name="SelectAllByTitleServiceSelect_titleInputParameters"
              type="SelectAllByTitleServiceSelect_title"/>
  <xs:complexType name="SelectAllByTitleServiceSelect_title">
    <xs:sequence>
      <xs:element name="title" type="xs:string" minOccurs="1" maxOccurs="1"/>
    </xs:sequence>
  </xs:complexType>
</xs:schema>
This is a generated file. Changes to this file do not affect the behavior of the adapter; it is a declaration of the XML that the database adapter produces and consumes.
If you update the underlying toplink_mappings.xml, the XSD file may also need to change. In that case, regenerate both files by rerunning the Adapter Configuration Wizard in edit mode.
The generated XSD flags all elements as optional with minOccurs=0
, except for the primary key attributes, which are mandatory.
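A hypothetical instance document conforming to the generated schema might look like the following (the element values are illustrative; only title, the primary key, is mandatory, and the optional elements may simply be left out):

```xml
<top:MoviesCollection
    xmlns:top="http://xmlns.oracle.com/pcbpel/adapter/db/top/SelectAllByTitle">
  <Movies>
    <title>Some Movie</title>
    <genre>Drama</genre>
    <runTime>121.0</runTime>
  </Movies>
</top:MoviesCollection>
```

Because the schema declares elementFormDefault="unqualified", only the global root element is namespace-qualified.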
Note:
Do not manually modify the XSD file to configure the database adapter.

After you define the database adapter service and complete the design of your BPEL process, you compile and deploy the process to Oracle BPEL Server. In the Applications Navigator of JDeveloper BPEL Designer, right-click the BPEL project and select Deploy. See Chapter 19, "BPEL Process Deployment and Domain Management" in Oracle BPEL Process Manager Developer's Guide for more information about deployment.
Figure 4-71 shows how the database adapter gets connection information from the three possible sources: the WSDL file, the oc4j-ra.xml
file, or the data-sources.xml
file, as described in the following sections.
Figure 4-71 How the Database Adapter Gets Connection Information
When you deploy a process using the Database Adapter, the nature of the deployment varies based on whether a separate database adapter instance is already deployed and available on the application server. For information on production deployment, see Production Deployment.
Note:
In the out-of-the-box deployment, when you run the database adapter wizard, connection information is required to import tables from the database, as shown in Example 4-1. As a convenience, when the WSDL file is written, it includes the connection information that was used at design time. This way, you can deploy the business process immediately, because the WSDL has all the information it needs to execute properly.
Out-of-the-box deployment should be used only until production deployment is correctly set up. For production, you may remove this connection information (the mcf.* properties) from the WSDL, keeping only the location attribute, which is required for production deployment. It is mandatory that you use production deployment for production.
Example 4-1 shows database connection information specified in the mcf.*
properties of the WSDL.
Example 4-1 WSDL Code Example Showing Database Connection Information in the mcf.* Properties
<!-- Your runtime connection is declared in
     J2EE_HOME/application-deployments/default/DbAdapter/oc4j-ra.xml.
     These 'mcf' properties here are from your design time connection
     and save you from having to edit that file and restart the
     application server if eis/DB/scott is missing.
     These 'mcf' properties are safe to remove. -->
<service name="get">
  <port name="get_pt" binding="tns:get_binding">
    <jca:address location="eis/DB/scott"
      UIConnectionName="scott"
      ManagedConnectionFactory="oracle.tip.adapter.db.DBManagedConnectionFactory"
      mcf.DriverClassName="oracle.jdbc.driver.OracleDriver"
      mcf.PlatformClassName="oracle.toplink.oraclespecific.Oracle9Platform"
      mcf.ConnectionString="jdbc:oracle:thin:@mypc.home.com:1521:orcl"
      mcf.UserName="scott"
      mcf.Password="7347B141D0FBCEA077C118A5138D02BE" />
  </port>
</service>
Note:
When the database adapter uses connection information specified in the WSDL, you have no connection pooling.

See Section 4.1.2, "Design Overview". During design time, you specified a JNDI name such as eis/DB/<JdevConnectionName> on the Adapter Configuration Wizard: Service Connection page. For production deployment, a database adapter instance with that JNDI name must already be deployed and available on the application server. This deployment section provides the information to do that.
Runtime instances are configured in the database adapter's oc4j-ra.xml
file, similar to how connection pools are configured in data-sources.xml
. These in turn refer through JNDI to the name of a connection pool in data-sources.xml
.
For the Oracle BPEL Process Manager, oc4j-ra.xml
is located at
Oracle_Home\bpel\system\appserver\oc4j\j2ee\home\
application-deployments\default\DbAdapter\oc4j-ra.xml
For the Oracle BPEL Process Manager for OracleAS Middle Tier installation, oc4j-ra.xml
is located at
Oracle_Home\j2ee\OC4J_BPEL\application-deployments\default\DBAdapter\oc4j-ra.xml
Connection information in the oc4j-ra.xml
file is not generated by the Adapter Configuration Wizard. You must manually update this file and restart Oracle BPEL Server for updates to oc4j-ra.xml
to take effect.
The Application Server manages and pools connections for all applications. It provides cross-application sharing of a single connection pool, high availability, special RAC configuration, XA/transaction support, password indirection, and a single configuration file, data-sources.xml
. Hence the DB, AQ, and JMS adapters should not contain connection information in their oc4j-ra.xml
files but simply refer to a data source by name. This is the recommended and default approach in 10.1.3.1.
Note:
The WSDL points to a JCA adapter (that is,eis/DB/BPELSamples
in oc4j-ra.xml
), which in turn points to a data source (that is, jdbc/BPELSamples
in data-sources.xml
). A common pitfall is to set the JNDI name in the wizard directly to the name of a data source.The deployment descriptor file (oc4j-ra.xml
) has four key properties: location
, xADataSourceName
, dataSourceName
, and platformClassName
.
Location
You may remember that when you ran the wizard you could enter a runtime JNDI
name, which defaulted to eis/DB/<JdeveloperConnectionName>
. Ensure that a corresponding entry exists in the oc4j-ra.xml
deployment descriptor by production time.
xADataSourceName and nonXaDataSourceName
xADataSourceName
and dataSourceName
refer to global transaction and local transaction data sources, respectively.
For the Oracle BPEL Process Manager, data-sources.xml
is at
Oracle_Home
\bpel\system\appserver\oc4j\j2ee\home\config\data-sources.xml
For the Oracle BPEL Process Manager for OracleAS Middle Tier installation, data-sources.xml
is at
Oracle_Home
\j2ee\OC4J_BPEL\config\data-sources.xml
If you have the following code in data-sources.xml
,
<!-- Connection pool for oracle lite -->
<connection-pool name="BPELPM_CONNECTION_POOL">
  <connection-factory factory-class="oracle.lite.poljdbc.POLJDBCDriver"
    user="system"
    password="any"
    url="jdbc:polite4@127.0.0.1:100:orabpel" />
</connection-pool>
<managed-data-source name="BPELServerDataSource"
  connection-pool-name="BPELPM_CONNECTION_POOL"
  jndi-name="jdbc/BPELServerDataSource"
  tx-level="global"/>
<managed-data-source name="BPELServerDataSourceWorkflow"
  connection-pool-name="BPELPM_CONNECTION_POOL"
  jndi-name="jdbc/BPELServerDataSourceWorkflow"
  tx-level="local"/>
then the oc4j-ra.xml
entry would look like the following:
<connector-factory location="eis/DB/BPELSamples" connector-name="Database Adapter">
  <config-property name="xADataSourceName"
                   value="jdbc/BPELServerDataSource"/>
  <config-property name="dataSourceName"
                   value="jdbc/BPELServerDataSourceWorkflow"/>
...
It is recommended that you always use xADataSourceName (pointing to a tx-level="global" data source). If you specify both an xADataSourceName and a dataSourceName (pointing to a tx-level="local" data source), then the dataSourceName may occasionally be used for better read performance (for multiple inbound polling threads). Only if you specify dataSourceName alone will the database adapter not synchronize with global transactions.
PlatformClassName
This indicates whether you are connecting to Oracle, DB2, SQLServer or any other database.
Table 4-17 shows the database platform names. Set the PlatformClassName property to the value matching your database.
Table 4-17 Database Platform Names
Database | PlatformClassName |
---|---|
Oracle9+ (including 10g) | oracle.toplink.platform.database.oracle.Oracle9Platform |
Oracle9+ (optional): To work around padding of CHAR values | |
Oracle8 | oracle.toplink.platform.database.oracle.Oracle8Platform |
Oracle7 | oracle.toplink.platform.database.OraclePlatform |
DB2 | oracle.toplink.platform.database.DB2Platform |
DB2 on AS400 | oracle.tip.adapter.db.toplinkext.DB2AS400Platform |
Informix | oracle.toplink.platform.database.InformixPlatform |
Sybase | oracle.toplink.platform.database.SybasePlatform |
SQLServer | oracle.toplink.platform.database.SQLServerPlatform |
Any other database | oracle.toplink.platform.database.DatabasePlatform |
This section discusses the following advanced properties of the oc4j-ra.xml file:
Properties Configurable by Using Managed Connection Factory Entry
The following properties are configurable by using the managed connection factory entry in the oc4j-ra.xml
file:
String connectionString
String userName
String password
String encryptionClassName
Integer minConnections
Integer maxConnections
Boolean useReadConnectionPool
Integer minReadConnections
Integer maxReadConnections
String dataSourceName
String driverClassName
Integer cursorCode
String databaseName
String driverURLHeader
Integer maxBatchWritingSize
String platformClassName
String sequenceCounterFieldName
String sequenceNameFieldName
Integer sequencePreallocationSize
String sequenceTableName
String serverName
Boolean shouldBindAllParameters
Boolean shouldCacheAllStatements
Boolean shouldIgnoreCaseOnFieldComparisons
Boolean shouldForceFieldNamesToUpperCase
Boolean shouldOptimizeDataConversion
Boolean shouldTrimStrings
Integer statementCacheSize
Integer stringBindingSize
String tableQualifier
Integer transactionIsolation
Boolean usesBatchWriting
Boolean usesByteArrayBinding
Boolean usesDirectDriverConnect
Boolean usesExternalConnectionPooling
Boolean usesExternalTransactionController
Boolean usesJDBCBatchWriting
Boolean usesNativeSequencing
Boolean usesNativeSQL
Boolean usesStreamsForBinding
Boolean usesStringBinding
The following properties appear in the oracle.toplink.sessions.DatabaseLogin
object.
See TopLink API reference information on DBConnectionFactory
Javadoc and DatabaseLogin
Javadoc at
http://download-east.oracle.com/docs/cd/B10464_02/web.904/b10491/index.html
To configure any of the preceding properties:
Add the following to the ra.xml
file:
<config-property>
  <config-property-name>usesJDBCBatchWriting</config-property-name>
  <config-property-type>java.lang.Boolean</config-property-type>
  <config-property-value>true</config-property-value>
</config-property>
For Oracle BPEL Process Manager, ra.xml
is at
Oracle_Home\bpel\system\appserver\oc4j\j2ee\home\connectors\
DbAdapter\DbAdapter\META-INF\ra.xml
For Oracle BPEL Process Manager for OracleAS Middle Tier, ra.xml
is at
Oracle_Home\j2ee\OC4J_BPEL\connectors\DbAdapter\DbAdapter\META-INF\ra.xml
Add the following to the oc4j-ra.xml
file:
<config-property name="usesJDBCBatchWriting" value="true"/>
Restart Oracle BPEL Server for the changes to take effect.
You can also update the default oc4j-ra.xml
and ra.xml
files before you deploy the database adapter. This way, you deploy once and do not need to restart the application server.
Case for Property Names in ra.xml and oc4j-ra.xml

The case of all property names must match exactly in the ra.xml and oc4j-ra.xml files. Otherwise, you receive an error message similar to the following at run time in the domain.log file:
Type=Dequeue_ptt, operation=Dequeue
<2005-03-14 15:20:43,484> <ERROR> <default.collaxa.cube.activation> <AdapterFramework::Inbound>
Error while performing endpoint Activation: ORABPEL-12510
Unable to locate the JCA Resource Adapter via WSDL port element jca:address.
The Adapter Framework is unable to startup the Resource Adapter specified in the WSDL
jca:address element: @
{http://xmlns.oracle.com/pcbpel/wsdl/jca/}address: location='eis/aqSample'
For example, if the userName
property in the Oracle_Home
\bpel\system\appserver\oc4j\j2ee\home\application-deployments\default\AqAdapter\oc4j-ra.xml
file for the AQ adapter uses the following upper and lower case convention:
<config-property name="userName" value="scott"/>
Then this case must match the userName
property in the corresponding Oracle_Home
\bpel\system\appserver\oc4j\j2ee\home\connectors\default\AqAdapter\AqAdapter\META-INF\ra.xml
file for the AQ adapter.
<config-property-name>userName</config-property-name>
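The case-sensitivity requirement above can be checked mechanically. The following sketch (the inline snippets are hypothetical stand-ins for the real deployment files) extracts property names from oc4j-ra.xml and ra.xml fragments and reports any name in ra.xml that has no case-exact match in oc4j-ra.xml:

```python
# Sketch: compare config-property names across the two descriptors,
# case-sensitively, using only the standard library.
import xml.etree.ElementTree as ET

def mismatched_names(oc4j_ra_xml: str, ra_xml: str) -> set:
    """Return ra.xml property names with no case-exact match in oc4j-ra.xml."""
    oc4j = {cp.get("name")
            for cp in ET.fromstring(oc4j_ra_xml).iter("config-property")}
    ra = {n.text for n in ET.fromstring(ra_xml).iter("config-property-name")}
    return ra - oc4j

oc4j_ra = ('<connector-factory>'
           '<config-property name="userName" value="scott"/>'
           '</connector-factory>')
ra = ('<resource-adapter><config-property>'
      '<config-property-name>userName</config-property-name>'
      '</config-property></resource-adapter>')

print(mismatched_names(oc4j_ra, ra))  # -> set()  (the cases agree)
```

A name that differs only in case, such as username versus userName, would show up in the returned set.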
Editing oc4j-ra.xml Through Oracle Enterprise Manager
You can also deploy and undeploy resource adapters from Oracle Enterprise Manager. The following steps show how you can edit both data-sources.xml and oc4j-ra.xml through Oracle Enterprise Manager:
Start the BPEL server process.
After the installation, open Windows Services.
Change the startup option of the Oracle-nnnProcessManager service to Manual (where nnn is the name you gave your instance). The next time you reboot, OPMN is not started automatically. To start it, follow these steps:
Click Start, and then click Run.
Enter cmd
, and then click Ok to bring up a command window.
Go to your installation, by typing the following command:
cd %SOAHOME%\opmn\bin
Ensure that you replace %SOAHOME%
with your path. For example,
cd c:\oracle\product\soabeta\opmn\bin
Enter the following command:
opmnctl startall
Check to see if the services have started up correctly, by entering the following command:
opmnctl status
Note:
To stop all of the services, if you ever need to, type:opmnctl stopall
Log in to Enterprise Manager by entering its URL in a browser. The default port is 8888; the default login user ID is oc4jadmin and the default password is welcome1.
Create a connection pool and a data source, by using the following steps:
Select the Home link.
Select the Administration link.
Click the Go To Task icon for Services/JDBC Resources.
Under Connection Pools, click the Create button.
Accept defaults, and then click Continue.
Enter the information shown in the following table:
Field | Value |
---|---|
Name | appsSample_pool |
JDBC URL | The URL for your database.
For example: jdbc:oracle:thin:@<host>:1521:<sid> |
Username | apps |
Password | apps |
Accept defaults for the rest of the fields.
Click Finish.
Click the Test Connection icon for your new connection pool.
In the new screen, click Test.
Back on the main page, a successful connection message is displayed. If you get an error message, check the URL and credentials to ensure you have entered the right information.
Click Finish.
Under Data Sources, click Create.
Accept the defaults, and then click Continue.
Enter the information shown in the following table:
Field | Value |
---|---|
Name | appsDemoDS |
JNDI Location | jdbc/appsSampleDataSource |
Connection Pool | appsSample_pool |
Accept defaults for the rest of the fields.
Click Finish.
Create an Oracle Applications adapter, by using the following steps:
At the top of the page, select the OC4J:home breadcrumb link.
Select the Applications link.
In the tree of applications, select the default link.
Under Modules, select the AppsAdapter link.
Select the Connection Factories link.
Under Connection Factories, click Create.
Note that you must use the Create button near the top of the screen, and not the one in the Shared Connection Pools section.
Accept all the defaults, and then click Continue.
For JNDI Location, enter eis/Apps/appsSample
.
Under Configuration Properties, for xADataSourceName, enter jdbc/appsSampleDataSource.
Keep the default entries for all of the other fields.
Click Finish.
In Oracle BPEL Process Manager 10.1.3, the deployment descriptor no longer contains local connection information, such as driverClassName, userName, password, connectionUrl, minConnections, maxConnections, minReadConnections, and maxReadConnections. Instead, the deployment descriptor points to another file, data-sources.xml, leaving connection configuration and pooling to the application server. In addition, an application server connection pool must be used to support XA (commits to multiple databases in the same transaction), which is highly desirable.
Hence, these TopLink internal pooling properties have been removed, but they can still be re-added to ra.xml and used as advanced properties.
Before 10.1.3, XA and application server pooling were supported with a combination of dataSourceName, usesExternalConnectionPooling, and usesExternalTransactionController. These have been removed and replaced with xADataSourceName and nonXaDataSourceName only.
Table 4-18 shows the properties that should be changed when migrating to Oracle BPEL Process Manager 10.1.3.
Table 4-18 Properties Used in Oracle BPEL Process Manager 10.1.3
Pre 10.1.3 | Post 10.1.3 |
---|---|
dataSourceName | Specify xADataSourceName (global) or nonXaDataSourceName (local) instead |
usesExternalConnectionPooling | specify only a data source name; connection pooling is handled by the application server |
usesExternalTransactionController | set xADataSourceName |
The database adapter is preconfigured with many performance optimizations. You can, however, make some changes to reduce the number of round trips to the database, as described in the following sections.
Performance tuning is demonstrated in the following tutorials:
DirectSQLPerformance
MultiTablesPerformance
../polling/DistributedPolling
In 10.1.3.1, a new code path was added that is heavily optimized for maximum performance in simple scenarios, such as the delete polling strategy, insert, and flat tables. Hence, there are now two main performance samples: one for the direct SQL path and another for the more feature-rich adapter as a whole. The adapter auto-detects when it is in a configuration that permits the faster code path.
For these files, go to
Oracle_Home\bpel\samples\tutorials\122.DBAdapter\advanced\endToEnd
The following performance hit list was taken from the MultiTablesPerformance README. It contains a best configuration and observations on the relative impact of the various performance options. Single-change variations from the ideal configuration (280 rows per second throughput):
Oracle 10g Windows Database on Other Machine (ping time about 40 ms) (vs. db on same machine): 70 rows / second (10,000 rows) -75% degradation vs +300% improvement
MaxTransactionSize/MaxRaiseSize=2/2 (vs. 100/100): 80 rows / second (10,000 rows) -71% / +250%
batch-reading=false (vs true in toplink_mappings.xml): 104 rows / second (10,000 rows) -63% / +169%
DeletePollingStrategy (vs. LogicalDelete, NumberOfThreads=1)
: 120 rows / second (10,000 rows) -57% / +133%
usesBatchWriting =false (vs true in oc4j-ra.xml): 127 rows / second (10,000 rows) -55% / +120%
OptimizeMerge="false" (vs true): 175 rows / second (10,000 rows) -38% / +60%
BPEL dehydration on (vs off): 222 rows / second (10,000 rows) -21% / +26%
NumberOfThreads=1 (vs 6): 227 rows / second (10,000 rows) * -19% / +23%
Note:
Ideal is exactly the same configuration as the best possible configuration for DeletePollingStrategy, where throughput was 120 rows/second.

No primary / foreign keys on database (vs. specified): 270 rows / second (10,000 rows) -4% / +4% (267 rows / second (20,000 rows))
Insert (vs Merge): 303 rows / second (10,000 rows) -8% / +8%
UseBatchDestroy="false" (vs. true): 312 rows / second (10,000 rows) -10% / +11%
The first seven performance hits in the preceding list relate almost exclusively to the number of round trips to the database and the network cost of each trip. In addition, deleting is a very expensive operation, and even worse with foreign key constraints, where NumberOfThreads must be set to 1.
If you run through the Adapter Configuration Wizard and select Insert or Update, you get a WSDL with the following operations: merge
(default), insert
, update
, and write.
The latter three call TopLink queries of the same name, avoiding advanced functionality that you may not need for straightforward scenarios. You can make the change by double-clicking an invoke
activity and selecting a different operation. The merge
is the most expensive, followed by the write
and then the insert
.
The merge
first does a read of every element and calculates what has changed. If a row has not changed, it is not updated. The extra reads (for existence) and the complex change calculation add considerable overhead. For simple cases, this can be safely avoided; that is, if you changed only two columns, it does not matter if you update all five anyway. For complex cases, however, the opposite is true. If you have a master record with 100 details, but you changed only two columns on the master and two on one detail, the merge
updates those four columns on two rows. A write
does a write of every column in all 101 rows. Also, the merge
may appear slower, but can actually relieve pressure on the database by minimizing the number of writes.
The insert
operation is the most performant because it uses no existence check and has no extra overhead. You have no reads, only writes. If you know that you will do an insert most of the time, use insert
, and catch a unique key constraint SQL exception inside your BPEL process, which can then perform a merge
or update
instead. Use merge
if you have a mix of new and existing objects (for instance, a master row containing several new details). The update
is similar to insert
.
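The optimistic-insert pattern described above can be sketched with sqlite3 as a stand-in database (this illustrates the pattern, not the adapter's internals): try the cheap INSERT first and fall back to an UPDATE only when the unique-key constraint fires.

```python
# Sketch: insert-first upsert. A new row costs one write and zero reads;
# only an existing row pays for the fallback update.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE movies (title TEXT PRIMARY KEY, rating TEXT)")

def upsert(title, rating):
    try:
        conn.execute("INSERT INTO movies VALUES (?, ?)", (title, rating))
    except sqlite3.IntegrityError:
        # Row already exists: take the more expensive update path instead.
        conn.execute("UPDATE movies SET rating = ? WHERE title = ?",
                     (rating, title))

upsert("Alpha", "PG")   # new row: pure insert, no existence check
upsert("Alpha", "R")    # existing row: constraint trips, update runs
print(conn.execute(
    "SELECT rating FROM movies WHERE title = 'Alpha'").fetchone()[0])  # -> R
```

In a BPEL process, the equivalent is invoking insert and catching the unique-constraint fault, as the text suggests.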
To monitor performance, you can enable debug logging and then watch the SQL for various inputs.
Caching is an important performance feature of TopLink. However, issues with stale data can be difficult to manage. By default, the database adapter uses a WeakIdentityMap
, meaning a cache is used only to resolve cyclical references, and entries are quickly reclaimed by the Java virtual machine. If you have no cycles (and you ideally should not for XML), you can switch to a NoIdentityMap
. The TopLink default is a SoftCacheWeakIdentityMap
. This means that the most frequently used rows in the database are more likely to appear already in the cache.
For a knowledge article on caching, go to
http://www.oracle.com/technology/tech/java/newsletter/november04.html
One performance optimization for merge is to eliminate the existence check against the database. The existence check is marginally cheaper when the row is new, because only the primary key is returned, not the entire row. But, due to the nature of merge, if the existence check passes, the entire row must be read anyway to calculate what changed. Therefore, for every row to be updated, you see one extra round trip to the database during merge.
It is always safe to use check cache on the root descriptor/table and any child tables if A is master and B is a privately owned child. If A does not exist, B cannot exist. And if A exists, all its Bs are loaded as part of reading A; therefore, check cache works.
This parameter indicates the maximum number of XML records that can be raised at a time to the BPEL engine. For example, if you set maxRaiseSize = 10
, then 10 database records are raised at one time. On read (inbound) you can set maxRaiseSize = 0
(unbounded), meaning that if you read 1000 rows, you will create one XML with 1000 elements, which is passed through a single Oracle BPEL Process Manager instance. A merge
on the outbound side can then take all 1000 in one group and write them all at once with batch writing.
Use the maxRaiseSize
parameter for publishing large payloads.
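The batching behavior described above can be sketched as follows (a simplified model, not the adapter's code): MaxRaiseSize partitions the polled rows into messages raised to the engine, and a value of 0 means unbounded, that is, one message carrying everything.

```python
# Sketch: partition polled rows into raised messages of max_raise_size rows.
def raise_batches(rows, max_raise_size):
    if max_raise_size == 0:  # unbounded: a single XML with all the rows
        return [rows] if rows else []
    return [rows[i:i + max_raise_size]
            for i in range(0, len(rows), max_raise_size)]

rows = list(range(25))
print([len(b) for b in raise_batches(rows, 10)])  # -> [10, 10, 5]
print(len(raise_batches(rows, 0)))                # -> 1
```

With max_raise_size of 1, each row becomes its own process instance; with 0, one instance sees the whole batch and can hand it to a single outbound merge.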
Your choice of polling strategy matters too. Avoid the delete polling strategy because it must individually delete each row. The sequencing polling strategy can destroy 1000 rows with a single update to a helper table.
Batch reading of one-to-many and one-to-one relationships is on by default. You can also use joined reading for one-to-one relationships instead, which may offer a slight improvement.
You can configure a connection pool whether you are using the adapter's local connection pool or an application server data source. Creating a database connection is an expensive operation. Ideally, you should exceed minConnections only under heavy loads. If you consistently use more connections than that at once, you may spend much time setting up and tearing down connections. The database adapter also has a read connection pool. A read connection is more performant because there is no limit on how many users can share one connection for reading at the same time, a feature that most JDBC drivers support.
The database adapter is designed to scale to the number of unprocessed rows on the database. By default, it is possible to read and process one database row or 10,000 with as little as three round trips to the database. The most expensive operations are limited to a constant number. You can also configure the database adapter for a distributed environment.
Load Balancing: MaxTransactionSize and Pessimistic Locking
You can set a simple option that enables the database adapter to work safely in a distributed environment by making the first polling query acquire locks, as shown in Figure 4-72. In SQL terms, you are making your first SELECT
into a SELECT...FOR UPDATE
.
The behavior of all polling strategies follows the same basic sequence: (1) read the unprocessed rows, (2) process them, and (3) commit.
If any adapter instance performs step 1 while another instance is between steps 1 and 3, then duplicate processing occurs. Acquiring locks on the first operation and releasing them in the commit solves this problem, and may naturally order the polling instances.
To enable pessimistic locking, run through the wizard once to create an inbound polling query. In the Applications Navigator window, expand Application Sources, then TopLink, and click TopLink Mappings. In the Structure window, click the table name. In Diagram View, click the following tabs: TopLink Mappings, Queries, Named Queries, Options; then the Advanced… button, and then Pessimistic Locking and Acquire Locks. A query that uses pessimistic locking must refresh its identity map results, so click OK when you see the message, "Would you like us to set Refresh Identity Map Results and Refresh Remote Identity Map Results to true?" Run the wizard again to regenerate everything. In the new toplink_mappings.xml file, you see something like the following for the query: <lock-mode>1</lock-mode>.
Note the following:
The preceding procedure works in conjunction with every polling strategy where the first operation is a read.
For the sequencing-based polling strategies, the SELECT FOR UPDATE
is applied to the SELECT
on the helper table only. The SELECT
on the polled table does not acquire locks because you do not have write
access to those tables.
If an adapter instance fails while polling records, those records are returned to the unprocessed pool (no commit happens).
No individual adapter instance is special. In an ideal distributed system, coordination between instances is minimal (here effected with locking). No master acts as a weak link, and every part is identically configured and interchangeable.
Other than the SELECT FOR UPDATE
, no extra reads or writes are performed by the database adapter in a distributed environment.
Load Balancing: MaxTransactionSize and Pessimistic Locking
After you enable pessimistic locking on a polling query, the maxTransactionSize
activation property automatically behaves differently.
Assume that there are 10,000 rows at the start of a polling interval and that maxTransactionSize
is 100. In standalone mode, a cursor is used to iteratively read and process 100 rows at a time until all 10,000 have been processed, dividing the work into 10,000 / 100 = 100 sequential transactional units. In a distributed environment, a cursor is also used to read and process the first 100 rows. However, the adapter instance will release the cursor, leaving 9,900 unprocessed rows (or 99 transactional units) for the next polling interval or another adapter instance.
For load balancing purposes, it is dangerous to set the maxTransactionSize
too low in a distributed environment (where it becomes a speed limit). It is best to set the maxTransactionSize
close to the per CPU throughput of the entire business process. This way, load balancing occurs only when you need it.
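The arithmetic above can be sketched directly. This is an illustrative calculation only; the class and method names are hypothetical and are not part of the adapter API:

```java
// Illustrative arithmetic only: how maxTransactionSize divides a backlog
// of unprocessed rows into transactional units. Names are hypothetical,
// not part of the database adapter API.
public class TransactionUnits {
    // Number of polling transactions needed to drain the backlog
    // (rounded up so a partial final batch still counts as one unit).
    static int units(int backlog, int maxTransactionSize) {
        return (backlog + maxTransactionSize - 1) / maxTransactionSize;
    }

    public static void main(String[] args) {
        // 10,000 rows at maxTransactionSize=100 -> 100 sequential units.
        System.out.println(units(10_000, 100));
        // After one distributed instance processes the first unit,
        // 9,900 rows (99 units) remain for the next interval or instance.
        System.out.println(units(9_900, 100));
    }
}
```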
detectOmissions Feature
The following are the features of the detectOmissions
feature:
Available Since
Release 10.1.3
Configurable
Yes
Default Value
Design Time: true
, unless explicitly set to false
Use Case
Without this feature, users who pass incomplete or partial XML to a merge, update, or insert see every column they left unspecified in the XML set to null in the database.
It allows DBAdapter
merge, insert, or update to differentiate between a null
value and the absence of a value (omission) in XML documents. On a case-by-case basis, it determines which information in the XML is meaningful and which is not. In this way, the XML is seen as a partial representation of a database row, as opposed to a complete representation. The following table lists examples of null values and values that can be omitted.
Element Type | Omission | Null |
---|---|---|
Column | <director></director>
| <director xsi:nil="true" /> |
1-1 | <!-- dept> … </dept --> | <dept xsi:nil="true" /> |
1-M | <!-- empCollection> … </empCollection --> | <empCollection /> |
Note:
The 1-1 representation <dept />
denotes an empty department object, and should not be used. For 1-M, <empCollection />
actually means a collection of 0 elements, and is considered a meaningful value. For columns, <director></director>
is not considered an omission in cases where it represents an empty string. A value considered omitted is omitted from the UPDATE
or INSERT
SQL. For an update operation, existing (meaningful) values on the database are not overwritten. For an insert operation, the default value on the database is used, as no explicit value is provided in the SQL.
A DBAdapter
receive
cannot produce XML with omissions; it uses xsi:nil="true"
instead. If you are unable to produce input XML with xsi:nil="true"
, or are concerned about the difference between <director />
and <director></director>
, then it is best to set DetectOmissions="false"
in the WSDL.
To treat all null
values as omissions, see the IgnoreNullsMerge
sample, which comes with a custom TopLink
plug-in. The plug-in works similarly to this feature, but cannot detect subtleties between null
and omission
.
When you are expecting an update, you can improve performance by omitting 1-1
and 1-M
relationships, because the merge
operation can then skip considering the detail records completely.
Alternatively, map only those columns that you are interested in, and create separate mappings for different invokes. If two updates should update two different sets of columns, create two separate partner links.
Performance
By default, it is assumed that input XML to the database adapter contains no omissions. Until an XML with omissions is detected, there is no performance overhead. Once omissions are detected, a TopLink
descriptor event listener is added. This event listener has some overhead: every modifyRow
about to become a SQLUpdate
or SQLInsert
must be iterated over to check for omissions, so every column value sent to the database is checked. If the input XML has mostly omissions, then this cost should be more than compensated for by sending fewer values to the database.
Incompatible Interactions
DirectSQL="true"
and DetectOmissions="true"
: DetectOmissions
takes precedence. The following are some examples of incompatible interactions:
DetectOmissionsMerge
IgnoreNullsMerge
OptimizeMerge
Note:
For migrated old BPEL projects, you must rerun the DB Adapter wizard to regenerate the WSDL file. When you do this, the DetectOmissions
and OptimizeMerge
options appear in the WSDL file with the default values DetectOmissions="false"
and OptimizeMerge="true"
. See the following for more information:
You can also access the forums from Oracle Technology Network at
The Oracle BPEL Process Manager forum at
http://forums.oracle.com/forums/forum.jsp?forum=212
The TopLink forum at
http://forums.oracle.com/forums/forum.jsp?forum=48
This site contains over 2,000 topics, such as implementing native sequencing, optimistic locking, and JTA-managed connection pools with TopLink.
http://www.oracle.com/technology
OutputCompletedXml Feature
OutputCompletedXml
is a feature of the outbound write
activity. The following are some of the features of the OutputCompletedXml
feature:
Available Since
Release 10.1.2.0.2
Configurable
OutputCompletedXml
appears in wsdl only when default is true
.
Default Value
It is true
when TopLink
sequencing is configured to assign primary keys on insert from a database sequence, otherwise it is false
.
Issue
You can have primary keys automatically assigned on insert
from a database sequence. However, the usefulness of this feature is diminished, because insert
/merge
have no output message, and so there is no way to tell which primary keys were assigned.
Note:
After configuring sequencing (link), run the wizard again so that the insert
/merge
WSDL operations can be regenerated with an output message and the WSDL property OutputCompletedXml="true"
.
Performance
An output XML is provided only when the output XML would be significantly different, so if TopLink
sequencing is not used, this feature is disabled and there is no performance hit. Further, this feature can be explicitly disabled. Likewise, the original input XML is updated and returned; a completely new XML is not built. Also, only a shallow update of the XML is performed; if primary keys were assigned to detail records, these are not reflected in the output XML.
Incompatible Interactions
DirectSQL="true"
and OutputCompletedXml
: OutputCompletedXml
takes precedence.
The following section discusses how to connect to third-party databases. You can use one vendor's database for design time and another for run time because BPEL processes are database-platform neutral.
Note:
To create an Oracle Lite database connection, follow the steps for a third-party JDBC driver (because the existing wizard and libraries for the Oracle Lite database require extra configuration). Table 4-19 provides information for connecting to an Oracle Lite database. See Problems Importing Third-Party Database Tables for more information.
The following steps generally apply when you create a database connection using a third-party JDBC driver. For specific third-party databases, see the following topics:
Summary of Third-Party and Oracle Lite Database Connection Information
Location of JDBC Driver JAR Files and Setting the Class Path
To create a database connection when using a third-party JDBC driver:
Select Connection Navigator from View.
Right-click Database and select New Database Connection.
Click Next in the Welcome window.
Enter a connection name.
Select Third Party JDBC Driver from Connection Type.
Enter your username, password, and role information.
Click New for Driver Class.
Enter the driver name (for example, some
.jdbc.
Driver
) for Driver Class.
Click New for Library.
Click Edit to add each JAR file of your driver to Class Path.
Click OK twice to exit the Create Library windows.
Click OK to exit the Register JDBC Driver window.
Enter your connection string name for URL and click Next.
See Table 4-19 for some commonly used URLs. Also, sample entries appear in the deployment descriptor file (oc4j-ra.xml
).
Click Test Connection.
If the connection is successful, click Finish.
When using a Microsoft SQL Server database, follow the database connection steps in Design Time: Using the Command-Line Utility and use the following information:
MS JDBC driver:
URL: jdbc:microsoft:sqlserver://localhost\
NAME:1433;SelectMethod=cursor;databasename=???
Driver Class: com.microsoft.jdbc.sqlserver.SQLServerDriver
Driver Jar: .\SQLServer2000\msbase.jar, msutil.jar, mssqlserver.jar
DataDirect driver:
URL: jdbc:oracle:sqlserver://localhost
Driver Class: com.oracle.ias.jdbc.sqlserver.SQLServerDriver
Driver Jar: .\DataDirect\YMbase.jar, YMoc4j.jar, YMutil.jar, YMsqlserver.jar
Note the following when connecting to a SQL Server database:
User name and password
SQL Server 2005 installs with Windows authentication as the default. Therefore, you do not log in with a user name and password; rather, your Windows user account either has privilege or does not. JDBC, however, requires you to provide a user name and password.
According to support.microsoft.com
, "Microsoft SQL Server 2000 driver for JDBC does not support connecting by using Windows NT authentication." See
http://support.microsoft.com/default.aspx?scid=kb;en-us;313100
However, the DataDirect driver specification states that it does.
If you use your Windows user name and password, you may see something like the following:
[Microsoft][SQLServer 2000 Driver for JDBC][SQLServer]Login failed for user 'DOMAIN\USER'. The user is not associated with a trusted SQL Server connection.[Microsoft][SQLServer 2000 Driver for JDBC] An error occurred while attempting to log onto the database.
You must select mixed mode authentication
on a clean installation.
On a Microsoft SQL Server 2000 Express Edition installation, the system username is sa
and the password is whatever you provide.
Connect string
From the sqlcmd
login, you can infer your connect string, as in the following examples:
Example 1:
sqlcmd 1> jdbc:microsoft:sqlserver://localhost:1433
Example 2:
sqlcmd -S user.mycompany.com\SQLExpress 1> jdbc:microsoft:sqlserver://user.mycompany.com\SQLExpress:1433
Example 3:
sqlcmd -S user.mycompany.com\SQLExpress -d master 1> jdbc:microsoft:sqlserver://user.mycompany.com\SQLExpress:1433;databasename= master
A full URL is as follows:
jdbc:microsoft:sqlserver://serverName[\instanceName]:tcpPort[;SelectMethod=cursor][;databasename=databaseName]
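Given the URL pattern above, a small helper can assemble the connect string from its parts. This is purely illustrative; the class is hypothetical, and the host and database names are placeholders:

```java
// Assembles a Microsoft JDBC URL following the pattern:
// jdbc:microsoft:sqlserver://serverName[\instanceName]:tcpPort
//     [;SelectMethod=cursor][;databasename=databaseName]
// Illustrative helper only; not part of any driver API.
public class SqlServerUrl {
    static String build(String server, String instance, int port, String database) {
        StringBuilder url = new StringBuilder("jdbc:microsoft:sqlserver://").append(server);
        if (instance != null) url.append('\\').append(instance);
        url.append(':').append(port).append(";SelectMethod=cursor");
        if (database != null) url.append(";databasename=").append(database);
        return url.toString();
    }

    public static void main(String[] args) {
        // Mirrors Example 1 and Example 3 above.
        System.out.println(build("localhost", null, 1433, null));
        System.out.println(build("user.mycompany.com", "SQLExpress", 1433, "master"));
    }
}
```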
Database name
If you must explicitly supply the database name, but do not know it, go to
C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data
If you see a file named master.mdf
, then one of the database names is master.
TCP port
Make sure that SQL Server Browser is running and that your SQL Server service has TCP/IP enabled and is listening on static port 1433. Disable dynamic ports. In SQL Native Client Configuration/Client Protocols, make sure that TCP/IP is enabled and that the default port is 1433.
JDBC drivers
You must download the JDBC drivers separately. From www.microsoft.com
, click Downloads and search on jdbc. You can also try using the DataDirect driver.
When using an IBM DB2 database, follow the database connection steps in Design Time: Using the Command-Line Utility and use the following information:
When using a Sybase database, follow the database connection steps in Design Time: Using the Command-Line Utility and use the following information:
When using an InterSystems Caché database, follow the database connection steps in Design Time: Using the Command-Line Utility and use the following information:
URL: jdbc:Cache://
machinename_running_Cache_DB_Server:
1972
/Cache_Namespace
Driver Class: com.intersys.jdbc.CacheDriver
Driver Jar: C:\CacheSys\Dev\Java\Lib\CacheDB.jar
The default login is _SYSTEM/sys
.
When using a MySQL 4 database, follow the database connection steps in Design Time: Using the Command-Line Utility and use the following information:
URL: jdbc:mysql://
hostname
:3306
/dbname
Driver Class: com.mysql.jdbc.Driver
Driver Jar: mysql-connector-java-3.1.10-bin.jar
Table 4-19 summarizes the preceding connection information for common third-party databases and for Oracle Olite. Also see Table 4-17, "Database Platform Names" for PlatformClassName
information.
Table 4-19 Information for Connecting to Third-Party Databases and Oracle Olite Database
Database | URL | Driver Class | Driver Jar |
---|---|---|---|
Microsoft SQL Server |
MS JDBC driver: jdbc:microsoft:sqlserver:// localhost\NAME:1433; SelectMethod=cursor; databasename=??? DataDirect driver: jdbc:oracle:sqlserver:// localhost |
MS JDBC driver: com.microsoft.jdbc.sqlserver. SQLServerDriver DataDirect driver: com.oracle.ias.jdbc.sqlserver. SQLServerDriver |
MS JDBC driver: .\SQLServer2000\msbase. jar, msutil.jar, mssqlserver.jar DataDirect driver: .\DataDirect\YMbase.jar, YMoc4j.jar, YMutil.jar, YMsqlserver.jar |
IBM DB2 |
DataDirect driver: jdbc:db2:localhost:NAME JT400 driver (AS400 DB2): jdbc:as400://hostname;
translate binary=true
Example: jdbc:as400://localhost; translate binary=true |
DataDirect driver: COM.ibm.db2.jdbc.net.DB2Driver JT400 driver (AS400 DB2): com.ibm.as400.access. AS400JDBCDriver |
DataDirect driver (v8.1): .\IBM-net\db2java_81.zip, db2jcc_81.jar JT400 driver (AS400 DB2): jt400.jar |
Sybase |
jconn driver: jdbc:sybase:Tds:localhost:5001/NAME DataDirect driver: jdbc:oracle:sybase:// localhost:5001 |
jconn driver: com.sybase.jdbc2.jdbc.SybDriver DataDirect driver: com.oracle.ias.jdbc.sybase. SybaseDriver |
jconn driver: .\Sybase-jconn\jconn2.jar DataDirect driver: .\DataDirect\YMbase.jar, YMoc4j.jar, YMutil.jar, YMsybase.jar |
InterSystems Caché |
jdbc:Cache://
machinename_running_
Cache_DB_Server:1972/
Cache_Namespace
Example: jdbc:Cache://127.0.0.1:1972/ Samples |
com.intersys.jdbc.CacheDriver |
C:\CacheSys\Dev\Java\Lib\ CacheDB.jar |
MySQL 4 |
jdbc:mysql://hostname:3306/ dbname Example: jdbc:mysql://localhost:3306/ test |
com.mysql.jdbc.Driver |
mysql-connector-java- 3.1.10-bin.jar |
Oracle Olite Database |
jdbc:polite4@localhost:100: orabpel |
oracle.lite.poljdbc. POLJDBCDriver |
Oracle_Home\
bpel\lib\olite40.jar
|
At run time, put the driver JAR files in the application server class path in either of the following ways:
Edit the class path in the following files:
For the Oracle BPEL Process Manager, go to
Oracle_Home/bpel/system/appserver/oc4j/j2ee/home/config/
server.xml
For the Oracle BPEL Process Manager for OracleAS Middle Tier installation, go to
Oracle_Home/j2ee/OC4J_BPEL/config/server.xml
Or
Drop the JAR files into the following directories:
For the Oracle BPEL Process Manager, go to
Oracle_Home/bpel/system/appserver/oc4j/j2ee/home/applib
For the Oracle BPEL Process Manager for OracleAS Middle Tier installation, go to
Oracle_Home/j2ee/OC4J_BPEL/applib
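One quick way to confirm that a driver JAR actually landed on the class path is to try loading the driver class by name. A minimal sketch; the helper class is hypothetical, and whether a given driver class resolves depends on your class path:

```java
// Checks whether a driver class is resolvable on the current class path.
// A JDK class always resolves; a third-party driver resolves only if its
// JAR was added to server.xml or dropped into applib.
public class DriverCheck {
    static boolean isOnClassPath(String driverClass) {
        try {
            Class.forName(driverClass);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isOnClassPath("java.sql.Driver"));      // always true
        System.out.println(isOnClassPath("com.mysql.jdbc.Driver")); // true only if JAR present
    }
}
```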
This section describes how the database adapter supports the use of stored procedures and functions for Oracle databases only. Stored procedures and functions for non-Oracle databases are not supported out-of-the-box.
This section contains the following topics:
The Adapter Configuration Wizard – Stored Procedures is used to generate an adapter service WSDL and the necessary XSD. The adapter service WSDL encapsulates the underlying stored procedure or function as a Web service with a WSIF JCA binding. The XSD describes the procedure or function, including all the parameters and their types. This XSD provides the definition used to create instance XML that is submitted to the database adapter at run time.
This section comprises the following:
This section describes how to use the wizard with APIs that are not defined in PL/SQL packages. You use the Adapter Configuration Wizard – Stored Procedures to select a procedure or function and generate the XSD. See Database Adapter Use Case for Oracle BPEL Process Manager if you are not familiar with how to start the wizard.
In the wizard, select Database Adapter, as shown in Figure 4-73.
Figure 4-73 Selecting the Database Adapter in the Adapter Configuration Wizard
After entering a service name (for example, ProcedureProc
) and an optional description for the service, you associate a connection with the service, as shown in Figure 4-74. You can select an existing connection from the list or create a new connection.
Figure 4-74 Setting the Database Connection in the Adapter Configuration Wizard
For the Operation Type, select Call a Stored Procedure or Function, as shown in Figure 4-75.
Figure 4-75 Calling for a Stored Procedure or Function in the Adapter Configuration Wizard
Next you select the schema and procedure or function. You can select a schema from the list or select <Default Schema>
, in which case the schema associated with the connection is used. If you know the procedure name, enter it in the Procedure field. If the procedure is defined inside a package, then you must include the package name, as in EMPLOYEE.GET_NAME
.
If you do not know the schema and procedure names, click Browse to access the Stored Procedures window, shown in Figure 4-76.
Figure 4-76 Searching for a Procedure or Function
Select a schema from the list or select <Default Schema>
. The available procedures are displayed in the left window. To search for a particular API in a long list of APIs, enter search criteria in the Search field. For example, to find all APIs that begin with XX
, enter XX%
and click the Search button. Clicking the Show All button displays all available APIs.
Figure 4-77 shows how you can select the PROC procedure and click the Arguments tab. The Arguments tab displays the parameters of the procedure, including their names, type, mode (IN
, IN/OUT
or OUT
) and the numeric position of the parameter in the definition of the procedure.
Figure 4-77 Viewing the Arguments of a Selected Procedure
Figure 4-78 shows how the Source tab displays the code that implements the procedure. Text that matches the name of the procedure is highlighted.
Figure 4-78 Viewing the Source Code of a Selected Procedure
After you select a procedure or function and click OK, information about the API is displayed, as shown in Figure 4-79. Use Back or Browse to make revisions, or Next followed by Finish to conclude.
Figure 4-79 Viewing Procedure or Function Details in the Adapter Configuration Wizard
When you have finished using the Adapter Configuration Wizard, two files are added to the existing project: servicename
.wsdl
(for example, ProcedureProc.wsdl
) and the generated XSD. The generated XSD file is named schema_package_procedurename
.xsd
. In this case, SCOTT_PROC.xsd
is the name of the generated XSD file.
Using APIs defined in packages is similar to using standalone APIs. The only difference is that you can expand the package name to see a list of all the APIs defined within the package, as shown in Figure 4-80.
APIs that have the same name but different parameters are called overloaded APIs. As shown in Figure 4-80, the package called PACKAGE has two overloaded procedures called OVERLOAD.
Figure 4-80 A Package with Two Overloaded Procedures
As Figure 4-81 shows, the code for the entire PL/SQL package is displayed, regardless of which API from the package is selected when you view the Source tab. Text that matches the name of the procedure is highlighted.
Figure 4-81 Viewing the Source Code of an Overloaded Procedure
After you select a procedure or function and click OK, information about the API is displayed, as shown in Figure 4-82. The schema, procedure name, and parameter list are displayed. Note how the procedure name is qualified with the name of the package (PACKAGE.OVERLOAD). Use Back or Browse to make revisions, or Next followed by Finish to conclude.
Figure 4-82 Viewing Procedure or Function Details in the Adapter Configuration Wizard
When you have finished using the Adapter Configuration Wizard, two files are added to the existing project: Overload.wsdl
and SCOTT_PACKAGE_OVERLOAD_2.xsd
. The _2
appended after the name of the procedure in the XSD filename differentiates the overloaded APIs.
The command-line utility is invoked with a properties file and the filename of a template WSDL that will be used to create the desired service WSDL. This section comprises the following:
Each of the additional databases shares some common functionality that is provided by the command-line utility. Several properties are shared by all supported databases. The following is a list of properties shared by the supported databases:
ProductName
This is the name of the additional database. Supported values are IBM DB2 (IBM DB2 v8.2) and Microsoft SQL Server (SQL Server 2000 or 2005).
DriverClassName
This is the class name of the JDBC driver.
ConnectionString
This is the JDBC connection URL.
Username
This is the database user name.
Password
This is the password associated with the user name.
ProcedureName
This is the name of the stored procedure or the function.
ServiceName
This is the service name for the desired operation.
DatabaseConnection
This is the JNDI name of the connection. For example, eis/DB/<DatabaseConnection>
.
Destination
This is the destination directory for the generated files, for example, C:\Temp
.
C:\java oracle.tip.adapter.db.sp.artifacts.GenerateArtifacts <properties>
Where <properties>
is the name of the properties file. The utility verifies that all mandatory properties exist, including any database-specific properties.
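The validation step can be sketched as follows, assuming the utility simply checks for the shared property names listed above. The class and the exact mandatory-key list are illustrative, not the utility's real implementation:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.List;
import java.util.Properties;
import java.util.stream.Collectors;

// Hypothetical sketch of verifying a properties file against the shared
// mandatory properties listed above.
public class PropertyCheck {
    static final List<String> MANDATORY = List.of(
            "ProductName", "DriverClassName", "ConnectionString", "Username",
            "Password", "ProcedureName", "ServiceName", "DatabaseConnection",
            "Destination");

    // Returns the mandatory keys that are absent from the properties text.
    static List<String> missingFrom(String propsText) {
        Properties p = new Properties();
        try {
            p.load(new StringReader(propsText));
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        return MANDATORY.stream()
                .filter(k -> p.getProperty(k) == null)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // A partial properties file: most mandatory keys are still missing.
        System.out.println(missingFrom("ProductName=SQL Server 2005\nServiceName=ProcedureProc"));
    }
}
```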
The command-line utility generates three files: an XSD representing the signature of the chosen API, an adapter service WSDL, and a project WSDL imported by the service WSDL. The name of the generated WSDL is derived from the value of the ServiceName
property. The name of the XSD is derived from the ProcedureName
property and other property values specific to the database (for example, the schema and package name for an Oracle database).
The contents of the generated service WSDL are derived using a WSDL template and the values of the required properties. The contents of the XSD are derived from the qualified procedure name and type mapping tables specific to the database in use. The elements that represent parameters of the stored procedure have the same number and type of attributes as those that are generated using the adapter wizard.
The command-line utility can be used to generate the required service artifacts for IBM DB2 v8.2, SQL Server 2000 and 2005. The data types and properties that are supported vary for each database. This section comprises the following:
SQL Server supports functions in addition to stored procedures. A property is necessary to indicate to the command-line utility that the chosen API is a function as opposed to a procedure. The following table lists the additional properties used with Microsoft SQL Server:
Property | Description |
---|---|
IsFunction |
Set to true if the API is a function; valid values are true and false (the default). |
DatabaseName |
The name of the database where the API is defined |
SchemaName |
The name of the schema to which the procedure belongs |
The IsFunction
property is optional. The default value is false
. If the API is a procedure, then this property need not be specified. If the API is a function or the API is a procedure that returns a value, then this property must be set to true
, True
, or TRUE
. The following is an example of a procedure that returns a value:
1> create procedure … as begin …;return 1; end 2> go
If the value of IsFunction
is not set, or if it is set to false
or some other value, then the return value will not be included in the generated XML after the API executes.
The SchemaName
and DatabaseName
properties are used to further qualify the stored procedure. For example, <DatabaseName>.<SchemaName>.<ProcedureName>
. These properties are optional, so they need not be specified.
Table 4-20 lists the data types that are supported only for stored procedures for use with SQL Server database:
Table 4-20 Data Types Supported for Stored Procedures for Use with SQL Server Database
SQL Data Type | XML Schema Type |
---|---|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Besides the data types mentioned in the preceding table, alias data types are also supported for stored procedures. Alias data types are created by using the sp_addtype
database engine stored procedure or the CREATE TYPE
Transact-SQL statement (SQL Server 2005 only). Note that the Transact-SQL statement is the preferred method for creating alias data types; the use of sp_addtype
is deprecated.
If the data type of a parameter in a stored procedure is defined using an alias data type, then the underlying base SQL data type will be determined, and this data type and its mappings will be used for the parameter in the generated XSD. Consider the following example:
1>create type myint from int 2>go 3>create procedure aliastype @x myint as … 4>go
In the preceding example, the type of the parameter in the procedure is the alias type myint
. The underlying base SQL data type of myint
is INT
, as shown in the following example; the INT
type mappings are used for that parameter in the stored procedure.
<element name="x" type="int" … db:type="INT" … />
Structured data types (user-defined) were introduced in SQL Server 2005. The command-line utility, however, does not support them. Therefore, structured data types may not be used as the type of a parameter in a stored procedure.
IBM DB2 utilizes an additional property to further qualify the stored procedure. Note that only SQL stored procedures are supported. External procedures and user-defined functions are not supported.
In IBM DB2 the SchemaName
property is mandatory. SchemaName
is the name of the schema to which the procedure belongs. It is used by the command-line utility to qualify the stored procedure so that it can verify that the procedure exists. Table 4-21 lists the data types supported only for stored procedures for use with DB2 database:
Table 4-21 Data Types Supported for Stored Procedures by IBM DB2 Database
SQL Data Type | XML Schema Type |
---|---|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Note that the names of other data types are also supported implicitly. For example, NUMERIC
is equivalent to DECIMAL
(as are DEC
and NUM
).
Distinct data types are also supported only for stored procedures. These are similar to alias data types in SQL Server. They are created using the CREATE DISTINCT TYPE
statement, as shown in the following example:
db2 => create distinct type myint as integer with comparisons db2 => create procedure distincttype(in x myint, …) begin … end
The underlying database SQL data type of myint
is INTEGER
, whose type mappings will be used for that parameter in the stored procedure.
<element name="X" type="int" … db:type="INTEGER" … />
IBM DB2 supports structured data types (user-defined). However, there is no support for these types in the JDBC drivers. Consequently, a structured data type may not be used as the type of a parameter in a stored procedure. IBM DB2 also supports user-defined functions. The adapter, however, does not support these.
The Adapter Configuration Wizard – Stored Procedures is capable of creating a WSDL and a valid XSD that describes the signature of a stored procedure or function. The following sections describe the relevant structure and content of both the WSDL and the XSD, and their relationship with each other.
This section comprises the following:
In the paragraphs that follow, the operation name, ProcedureProc
, and procedure name, PROC
, are taken from an example cited previously (see Figure 4-79). The generated WSDL imports the XSD.
<types> <schema xmlns="http://www.w3.org/2001/XMLSchema"> <import namespace="http://xmlns.oracle.com/pcbpel/adapter/db/SCOTT/PROC/" schemaLocation="SCOTT_PROC.xsd"/> </schema> </types>
The namespace is derived from the schema, package, and procedure name, and appears as the targetNamespace
in the generated XSD.
A root element called InputParameters
is created in the XSD for specifying elements that correspond to the IN
and IN/OUT
parameters of the stored procedure. Another root element called OutputParameters
is also created in the XSD for specifying elements only if there are any IN/OUT
or OUT
parameters. Note that IN/OUT
parameters appear in both root elements.
These root elements are represented in the XSD as an unnamed complexType
definition whose sequence includes one element for each parameter. If there are no IN
or IN/OUT
parameters, the InputParameters
root element is still created; however, the complexType
is empty. A comment in the XSD indicates that there are no such parameters. An example of one of these root elements follows.
<element name="InputParameters"> <complexType> <sequence> <element …> … </sequence> </complexType> </element>
The WSDL defines message types whose parts are defined in terms of these two root elements.
<message name="args_in_msg"> <part name="InputParameters" element="db:InputParameters"/> </message> <message name="args_out_msg"> <part name="OutputParameters" element="db:OutputParameters"/> </message>
The db
namespace is the same as the targetNamespace
of the generated XSD. Note that the args_in_msg
message type always appears in the WSDL while args_out_msg
is included only if the OutputParameters
root element is generated in the XSD.
An operation is defined in the WSDL whose name is the same as the adapter service and whose input and output messages are defined in terms of these two message types.
<portType name="ProcedureProc_ptt"> <operation name="ProcedureProc"> <input message="tns:args_in_msg"/> <output message="tns:args_out_msg"/> </operation> </portType>
The input message always appears while the output message depends on the existence of the OutputParameters
root element in the XSD. The tns
namespace is derived from the operation name and is defined in the WSDL as
xmlns:tns="http://xmlns.oracle.com/pcbpel/adapter/db/ProcedureProc/"
The root elements in the XSD define the structure of the parts used in the messages that are passed into and sent out of the Web service encapsulated by the WSDL.
The input message in the WSDL corresponds to the InputParameters
root element from the XSD. The instance XML supplies values for the IN
and IN/OUT
parameters of the stored procedure. The output message corresponds to the OutputParameters
root element. This is the XML that gets generated after the stored procedure has executed. It holds the values of any IN/OUT
and OUT
parameters.
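As a sketch of what an instance document for the input message might look like, the following hypothetical helper assembles an InputParameters element from a map of IN parameter values. The helper class is invented for illustration, and the parameter names are placeholders; only the namespace pattern follows the SCOTT/PROC example used in this section:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Builds a minimal InputParameters instance document for a stored
// procedure's IN parameters. Illustrative only; real instance XML is
// produced and validated against the generated XSD.
public class InputParametersXml {
    static String build(String ns, Map<String, String> inParams) {
        StringBuilder xml = new StringBuilder("<InputParameters xmlns=\"" + ns + "\">");
        for (Map.Entry<String, String> e : inParams.entrySet()) {
            xml.append("<").append(e.getKey()).append(">")
               .append(e.getValue())
               .append("</").append(e.getKey()).append(">");
        }
        return xml.append("</InputParameters>").toString();
    }

    public static void main(String[] args) {
        Map<String, String> in = new LinkedHashMap<>();
        in.put("X", "1");  // hypothetical IN parameter
        System.out.println(build("http://xmlns.oracle.com/pcbpel/adapter/db/SCOTT/PROC/", in));
    }
}
```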
Many primitive data types have well-defined mappings and therefore are supported by both the design-time and run-time components. In addition, you can use user-defined types such as VARRAY
, nested tables, and OBJECT
. Table 4-22 lists the primitive data types supported by the database adapter for stored procedures.
Table 4-22 Primitive Data types Supported by the Database Adapter for Stored Procedures
SQL or PL/SQL Type | XML Schema Type |
---|---|
|
double |
|
|
|
|
|
|
|
|
|
|
|
|
Table 4-23 lists the attributes used in the generated XSDs. Attributes prefixed with db:
are specific to the database adapter.
Table 4-23 Generated XSD Attributes
Attribute | Example | Purpose |
---|---|---|
|
|
Name of an element |
|
|
XML schema type |
|
|
SQL or PL/SQL type |
|
|
Position of a parameter |
|
|
Has a default clause |
|
|
Minimum occurrences |
|
|
Maximum occurrences |
|
|
Permits null values |
The db
namespace is used to distinguish attributes used during run time from standard XML schema attributes. The db:type
attribute is used to indicate what the database type is so that a suitable JDBC type mapping can be obtained at run time. The db:index
attribute is used as an optimization by both the design-time and run-time components to ensure that the parameters are arranged in the proper order. Parameter indices begin at 1
for procedures and 0
for functions. The return value of a function is represented as an OutputParameter
element whose name
is the name of the function and whose db:index
is 0
. The db:default
attribute is used to indicate whether or not a parameter has a default clause.
The minOccurs
value is set to 0
to allow for an IN
parameter to be removed from the XML. This is useful when a parameter has a default clause defining a value for the parameter (for example, X IN INTEGER DEFAULT 0
). At run time, if no element is specified for the parameter in the XML, the parameter is omitted from the invocation of the stored procedure, thus allowing the default value to be used. Each parameter can appear at most once in the invocation of a stored procedure or function. Therefore, maxOccurs
, whose default value is always 1
, is always omitted from elements representing parameters.
The nillable attribute is always set to true so that the corresponding element in the instance XML can have a null value (for example, <X/> or <X></X>). In some cases, however, to pass schema validation, such an element must state its null value explicitly (for example, <X xsi:nil="true"/>). The xsi namespace used for the nillable attribute must be declared explicitly in the instance XML (for example, xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance").
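For example, assuming a hypothetical procedure with a single IN parameter X, an instance XML that passes an explicit null for X might look as follows (the db namespace URI is elided):

```xml
<db:InputParameters xmlns:db="..."
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <X xsi:nil="true"/>
</db:InputParameters>
```

Omitting the X element entirely would instead allow a default clause, if the parameter has one, to take effect.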
The wizard can also generate valid definitions for user-defined types such as collections (VARRAY
and nested tables) and OBJECT
. These are created as complexType
definitions in the XSD.
For VARRAY, the complexType definition defines a single element in its sequence, called name_ITEM, where name is the name of the VARRAY element. All array elements in the XML are so named. Given the following VARRAY type definition,
SQL> CREATE TYPE FOO AS VARRAY (5) OF VARCHAR2 (10);
and a VARRAY
element, X
, whose type is FOO
, the following complexType
is generated:
<complexType name="FOO">
  <sequence>
    <element name="X_ITEM" db:type="VARCHAR2" minOccurs="0" maxOccurs="5" nillable="true">
      <simpleType>
        <restriction base="string">
          <maxLength value="10"/>
        </restriction>
      </simpleType>
    </element>
  </sequence>
</complexType>
The minOccurs value is 0 to allow for an empty collection. The maxOccurs value is set to the maximum number of items that the collection can hold. Note that the db:index attribute is not used. Having nillable set to true allows individual items in the VARRAY to be null.
Note the restriction specified on the element of the VARRAY FOO. A restriction is used for types such as CHAR and VARCHAR2, whose length is known from the declaration of the VARRAY (or nested table); it specifies the type and maximum length of the element. An element value that exceeds the specified length causes the instance XML to fail during schema validation.
The attribute values of a parameter declared to be of type FOO
look as follows in the generated XSD:
<element name="X" type="db:FOO" db:type="Array" db:index="1" minOccurs="0" nillable="true"/>
The type and db:type values indicate that the parameter is represented as an array defined by the complexType called FOO in the XSD. The value of db:index is the position of that parameter in the stored procedure.
A nested table is treated almost identically to a VARRAY
. The following nested table type definition,
SQL> CREATE TYPE FOO AS TABLE OF VARCHAR2 (10);
is also generated as a complexType with a single element in its sequence, called name_ITEM. The element has the same attributes as in the VARRAY example, except that the maxOccurs value is unbounded because nested tables can be of arbitrary size.
<complexType name="FOO">
  <sequence>
    <element name="X_ITEM" … maxOccurs="unbounded" nillable="true">
      …
    </element>
  </sequence>
</complexType>
An identical restriction is generated for the X_ITEM element, as in the VARRAY example. The attributes of a parameter, X, declared to be of this type are also the same as in the VARRAY example.
Note that collections (VARRAY and nested table) are not supported if they are defined inside a PL/SQL package specification. For example:
SQL> create package pkg as
   > type vary is varray(10) of number;
   > type ntbl is table of varchar2(100);
   > procedure test(v in vary, n in ntbl);
   > end;
   > /
If you select the test procedure in the DBAdapter wizard for stored procedures, an error occurs stating that the types are not supported. However, if the vary and ntbl type definitions are defined at the root level, outside of the package, then choosing the test procedure works without issue. The supported way to use collection types (VARRAY and nested table) is shown in the following example:
SQL> create type vary as varray(10) of number;
SQL> create type ntbl as table of varchar2(10);
SQL> create package pkg as
   > procedure test(v in vary, n in ntbl);
   > end;
   > /
An OBJECT
definition is also generated as a complexType
. Its sequence holds one element for each attribute in the OBJECT
.
The following OBJECT
,
SQL> CREATE TYPE FOO AS OBJECT (X VARCHAR2 (10), Y NUMBER);
is represented as a complexType
called FOO
with two sequence elements.
<complexType name="FOO">
  <sequence>
    <element name="X" db:type="VARCHAR2" minOccurs="0" nillable="true">
      <simpleType>
        <restriction base="string">
          <maxLength value="10"/>
        </restriction>
      </simpleType>
    </element>
    <element name="Y" type="decimal" db:type="NUMBER" minOccurs="0" nillable="true"/>
  </sequence>
</complexType>
The minOccurs value is 0 to allow the element to be removed from the XML, which causes the value of the corresponding attribute in the OBJECT to be set to null at run time. The nillable value is true to allow empty elements to appear in the XML, annotated with the xsi:nil attribute, to indicate that the value of the element is null. Again, the db:index attribute is not used.
Note the use of a restriction on the VARCHAR2
attribute. The length is known from the declaration of the attribute in the OBJECT
.
User-defined types can be defined in arbitrarily complex ways. An OBJECT
can contain attributes whose types are defined as any of the aforementioned user-defined types. This means that the type of an attribute in an OBJECT
can be another OBJECT
, VARRAY
or a nested table, and so on. The base type of a VARRAY
or a nested table can also be an OBJECT
. Allowing the base type of a collection to be another collection supports multidimensional collections.
The wizard is capable of generating a valid XSD for parameters whose types are defined using OBJECT
-type inheritance. Given the following type hierarchy,
SQL> CREATE TYPE A AS OBJECT (A1 NUMBER, A2 VARCHAR2 (10)) NOT FINAL;
SQL> CREATE TYPE B UNDER A (B1 VARCHAR2 (10));
and a procedure containing a parameter, X
, whose type is B
,
SQL> CREATE PROCEDURE P (X IN B) AS BEGIN … END;
the wizard generates an InputParameters
element for parameter X
as
<element name="X" type="db:B" db:index="1" db:type="Struct" minOccurs="0" nillable="true"/>
where the definition of OBJECT
type B
in the XSD is generated as the following complexType
.
<complexType name="B">
  <sequence>
    <element name="A1" type="decimal" db:type="NUMBER" minOccurs="0" nillable="true"/>
    <element name="A2" db:type="VARCHAR2" minOccurs="0" nillable="true">
      ...
    </element>
    <element name="B1" db:type="VARCHAR2" minOccurs="0" nillable="true">
      ...
    </element>
  </sequence>
</complexType>
Restrictions on the maximum length of attributes A2
and B1
are added appropriately. Notice how the OBJECT
type hierarchy is flattened into a single sequence of elements that corresponds to all of the attributes in the entire hierarchy.
The wizard can also generate a valid XSD for parameters that are references to OBJECT
types (for example, object references), or are user-defined types that contain an object reference somewhere in their definition. In this example,
SQL> CREATE TYPE FOO AS OBJECT (…);
SQL> CREATE TYPE BAR AS OBJECT (F REF FOO, …);
SQL> CREATE PROCEDURE PROC (X OUT BAR, Y OUT REF FOO) AS BEGIN … END;
the wizard generates complexType
definitions for FOO
and BAR
as already indicated, except that for BAR
, the element for the attribute, F,
is generated as
<element name="F" type="db:FOO" db:type="Ref" minOccurs="0" nillable="true"/>
where together, the type
and db:type
attribute values indicate that F
is a reference to the OBJECT
type FOO
.
For a procedure PROC
, the following elements are generated in the OutputParameters
root element of the XSD:
<element name="X" type="db:BAR" db:index="1" db:type="Struct" minOccurs="0" nillable="true"/>
<element name="Y" type="db:FOO" db:index="2" db:type="Ref" minOccurs="0" nillable="true"/>
For Y
, note the value of the db:type
attribute, Ref
. Together with the type
attribute, the element definition indicates that Y
is a reference to FOO
.
Note that there is a restriction on the use of object references that limits their parameter mode to OUT only. Passing an IN or IN/OUT parameter that is either directly a REF, or whose user-defined type contains a REF somewhere in its definition, is not permitted.
You can refer to types defined in other schemas, provided that the necessary privileges to access them have been granted. For example, suppose type OBJ
was declared in SCHEMA1
:
SQL> CREATE TYPE OBJ AS OBJECT (…);
The type of a parameter in a stored procedure declared in SCHEMA2
can be type OBJ
from SCHEMA1
:
CREATE PROCEDURE PROC (O IN SCHEMA1.OBJ) AS BEGIN … END;
This is possible only if SCHEMA1
granted permission to SCHEMA2
to access type OBJ
:
SQL> GRANT EXECUTE ON OBJ TO SCHEMA2;
If the required privileges are not granted, an error occurs when trying to create procedure PROC
in SCHEMA2
:
PLS-00201: identifier "SCHEMA1.OBJ" must be declared
Because the privileges have not been granted, type OBJ
from SCHEMA1
is not visible to SCHEMA2
; therefore, SCHEMA2
cannot refer to it in the declaration of parameter O
.
This section discusses important considerations of stored procedure support and briefly describes what happens before a stored procedure or function is invoked.
This section comprises the following:
Consider the extraction of values from the XML and how the run time works given those values. The possible cases for data in the XML corresponding to the value of a parameter whose type is one of the supported primitive data types are as follows:
The value of an element is specified (for example, <X>100</X>; here, X=100).

The value of an element is not specified (for example, <X/>; here, X=null).

The value is explicitly specified as null (for example, <X xsi:nil="true"/>; here, X=null).

The element is not specified in the XML at all (here, X = <default value>).
In the first case, the value is taken from the XML as-is and is converted to the appropriate object according to its type. That object is then bound to its corresponding parameter during preparation of the stored procedure invocation.
In the second and third cases, the actual value extracted from the XML is null. The type converter accepts null and returns it without any conversion. The null value is bound to its corresponding parameter regardless of its type. Essentially, this is the same as passing null for parameter X
.
The fourth case has two possibilities: the parameter either has a default clause, or it does not. If the parameter has a default clause, the parameter is excluded entirely from the invocation of the stored procedure, which allows the default value to be used. If the parameter does not have a default clause, the parameter is included in the invocation and a null value is bound to it. For functions, however, elements for all parameters must be specified; if an element in the instance XML is missing, the function is invoked with fewer arguments than expected. Similarly, IBM DB2 does not provide a mechanism for specifying a default value for a parameter in a stored procedure, so an element must exist in the instance XML for every parameter.

For example, given the following procedure,

SQL> CREATE PROCEDURE PROC (X IN INTEGER DEFAULT 0) AS BEGIN … END;

if no element for X appears in the instance XML, no value is bound to the parameter. In fact, the parameter is excluded from the invocation of the stored procedure entirely, which allows the value 0 to default for parameter X.
To summarize, the following PL/SQL is executed in each of the four cases:

"BEGIN PROC (X=>?); END;" - X = 100

"BEGIN PROC (X=>?); END;" - X = null

"BEGIN PROC (X=>?); END;" - X = null

There are two possibilities:

"BEGIN PROC (); END;" - X = 0 (X has a default clause)

"BEGIN PROC (X=>?); END;" - X = null (X does not have a default clause)
With the exception of default clause handling, these general semantics also apply to item values of a collection or attribute values of an OBJECT
whose types are one of the supported primitive data types. The semantics of <X/>
when the type is user-defined are, however, quite different.
For a collection, whether it is a VARRAY
or a nested table, the following behavior can be expected given a type definition such as
SQL> CREATE TYPE ARRAY AS VARRAY (5) OF VARCHAR2 (10);
and XML for a parameter, X
, which has type ARRAY
, that appears as follows:
<X>
  <X_ITEM xsi:nil="true"/>
  <X_ITEM>Hello</X_ITEM>
  <X_ITEM xsi:nil="true"/>
  <X_ITEM>World</X_ITEM>
</X>
The first and third elements of the VARRAY
are set to null. The second and fourth are assigned their respective values. No fifth element is specified in the XML; therefore, the VARRAY
instance has only four elements.
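The mapping from this instance XML to the four-element collection can be sketched as follows (illustrative Python, not adapter code; it only mirrors the rule that xsi:nil items become null):

```python
import xml.etree.ElementTree as ET

XSI = "http://www.w3.org/2001/XMLSchema-instance"

xml = """<X xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <X_ITEM xsi:nil="true"/>
  <X_ITEM>Hello</X_ITEM>
  <X_ITEM xsi:nil="true"/>
  <X_ITEM>World</X_ITEM>
</X>"""

# An item annotated with xsi:nil="true" maps to null; others keep their text.
items = [
    None if item.get(f"{{{XSI}}}nil") == "true" else item.text
    for item in ET.fromstring(xml).findall("X_ITEM")
]
assert items == [None, "Hello", None, "World"]  # four elements, not five
```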
Assume an OBJECT
definition such as
SQL> CREATE TYPE OBJ AS OBJECT (A INTEGER, B INTEGER, C INTEGER);
and XML for a parameter, X
, which has type OBJ
, that appears as
<X>
  <A>100</A>
  <C xsi:nil="true"/>
</X>
The value 100
is assigned to attribute A
and null is assigned to attributes B
and C
. Because there is no element in the instance XML for attribute B
, a null value is assigned.
The second case, <X/>
, behaves differently if the type of X
is user-defined. Rather than assigning null to X
, an initialized instance of the user-defined type is created and bound instead.
In the preceding VARRAY
example, if <X/>
or <X></X>
is specified, then the value bound to X
is an empty instance of the VARRAY
. In PL/SQL, this is equivalent to calling the type constructor and assigning the value to X
. For example,
X := ARRAY();
Similarly, in the preceding OBJECT
example, an initialized instance of OBJ
, whose attribute values have all been null assigned, is bound to X
. Like the VARRAY
case, this is equivalent to calling the type constructor. For example,
X := OBJ(NULL, NULL, NULL);
To specifically assign a null value to X
when the type of X
is user-defined, add the xsi:nil
attribute to the element in the XML, as in
<X xsi:nil="true"/>
This section describes the conversion of data types such as CLOB, DATE, and TIMESTAMP, and of binary data types including RAW, LONG RAW, and BLOB, as well as of similar data types supported by third-party databases.
For CLOB parameters, a temporary CLOB is first created. The data extracted from the XML is written to it before the CLOB is bound to its corresponding parameter. The temporary CLOB is freed when the interaction completes. For other character types, such as CHAR and VARCHAR2, the data is simply extracted and bound as necessary. Note that it is possible to bind an XML document to a CLOB (or to a VARCHAR2, if it is large enough). However, appropriate substitutions for <, >, and so on, must first be made (for example, &lt; for < and &gt; for >).
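The required substitutions can be sketched as follows (illustrative Python, not part of the adapter; the sample document is made up):

```python
from xml.sax.saxutils import escape, unescape

doc = '<order id="1"><item>5</item></order>'  # hypothetical XML document

# Escape the markup characters so the document can be embedded as the
# text content of a CLOB-bound XML element.
escaped = escape(doc)
assert escaped == '&lt;order id="1"&gt;&lt;item&gt;5&lt;/item&gt;&lt;/order&gt;'

# unescape restores the original document when the value is read back.
assert unescape(escaped) == doc
```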
Note that the XML schema type dateTime represents both DATE and TIMESTAMP. This means that the XML values for both data types must adhere to the XML schema representation for dateTime. Therefore, a simple DATE string, such as 01-JAN-05, is invalid. XML schema defines dateTime as YYYY-MM-DDTHH:mm:ss, so the correct value is 2005-01-01T00:00:00.
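The conversion from such a DATE string to the required dateTime form can be sketched as follows (illustrative Python, not part of the adapter; the input format is an assumption):

```python
from datetime import datetime

# Parse an Oracle-style DATE string (format assumed for illustration).
d = datetime.strptime("01-JAN-05", "%d-%b-%y")

# Render it in the XML Schema dateTime form YYYY-MM-DDTHH:mm:ss.
xml_value = d.strftime("%Y-%m-%dT%H:%M:%S")
```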
Data for binary data types must be represented in a human-readable manner. The chosen XML schema representation for binary data is base64Binary. The type converter uses the javax.mail.internet.MimeUtility encode and decode APIs to process binary data. All binary data must be encoded into base64Binary form with the encode API so that it can be used in an XML file. The type converter uses the decode API to decode the XML data into a byte array, which is then bound either directly, as with RAW and LONG RAW parameters, or used to create a temporary BLOB that is bound to its associated BLOB parameter. The temporary BLOB is freed when the interaction completes.
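The adapter uses the Java MimeUtility APIs; the same base64Binary round trip can be sketched in Python for illustration (the sample bytes are made up):

```python
import base64

raw = bytes([0x00, 0xFF, 0x10, 0x80])  # sample RAW/BLOB content

# Encode the binary data into base64Binary form for use in the instance XML.
encoded = base64.b64encode(raw).decode("ascii")

# Decoding the XML value restores the byte array that is bound at run time.
decoded = base64.b64decode(encoded)
assert decoded == raw
```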
Conversions for the remaining data types are straightforward and require no additional information.
After the procedure (or function) executes, the values for any IN/OUT
and OUT
parameters are retrieved. These correspond to the values of the elements in the OutputParameters
root element in the generated XSD.
This section comprises the following:
Conversions of data retrieved are straightforward. However, BLOB, CLOB (and other character data), RAW, and LONG RAW conversions, as well as conversions for similar data types supported by third-party databases, require special attention.
When a CLOB is retrieved, the entire contents of that CLOB are written to the corresponding element in the generated XML. Standard DOM APIs are used to construct the XML. Character data, for types such as CLOB, CHAR, and VARCHAR2, is processed as needed to make any required substitutions so that the value is valid and can be placed in the XML for subsequent processing. For example, substitutions for < and > in an XML document stored in a CLOB are made so that the value placed in the element for the associated parameter is valid.
Raw data, such as for RAW and LONG RAW types, is retrieved as a byte array. For BLOBs, the BLOB is first retrieved, and its contents are then obtained, also as a byte array. The byte array is encoded into base64Binary form using the javax.mail.internet.MimeUtility encode API, and the encoded value is placed in its entirety in the XML for the corresponding element. The MimeUtility decode API must be used to decode this value back into a byte array.
Conversions for the remaining data types are straightforward and require no additional information.
Elements whose values are null appear as empty elements in the generated XML and are annotated with the xsi:nil attribute. This means that the xsi namespace is declared in the generated XML. The XML generated for a procedure PROC, which has a single OUT parameter, X, whose value is null, looks as follows:
<db:OutputParameters … xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <X xsi:nil="true"/>
</db:OutputParameters>
The db
namespace is also declared (that is, xmlns:db="..."
). Note that XML elements for parameters of any type (including user-defined types) appear this way if their value is null.
The return value of a function is treated as an OUT
parameter at position 0
whose name
is the name of the function itself. For example,
CREATE FUNCTION FACTORIAL (X IN INTEGER) RETURN INTEGER AS
BEGIN
   IF (X <= 0) THEN
      RETURN 1;
   ELSE
      RETURN X * FACTORIAL (X - 1);
   END IF;
END;
An invocation of this function with a value of 5
, for example, results in a value of 120
and appears as <FACTORIAL>120</FACTORIAL>
in the XML generated in OutputParameters
.
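The recursion can be checked with an equivalent Python sketch:

```python
def factorial(x: int) -> int:
    # Mirrors the PL/SQL function: any non-positive input returns 1.
    if x <= 0:
        return 1
    return x * factorial(x - 1)

assert factorial(5) == 120  # matches <FACTORIAL>120</FACTORIAL>
```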
Both SQL Server and DB2 share the same functionality for handling ResultSets
. The following is a SQL Server example of an API that returns a ResultSet
:
1> create procedure foo ... as select ... from ...;
2> go
A RowSet defined in the generated XSD represents a ResultSet
. A RowSet
consists of zero or more rows, each having one or more columns. A row corresponds with a row returned by the query. A column corresponds with a column item in the query. The generated XML for the API shown in the preceding example after it executes is shown in the following example:
<RowSet>
  <Row>
    <Column name="<column name>" sqltype="<sql datatype>">value</Column>
    ...
  </Row>
  ...
</RowSet>
The name attribute stores the name of the column appearing in the query, and the sqltype attribute stores the SQL datatype of that column, for example, INT. The content of the Column element is the value of that column.
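A consumer of the generated XML can extract the rows and columns generically; the following Python sketch (with made-up column data) shows one way:

```python
import xml.etree.ElementTree as ET

# Hypothetical generated XML for a single-row result.
rowset = """<RowSet>
  <Row>
    <Column name="ID" sqltype="INT">10</Column>
    <Column name="NAME" sqltype="VARCHAR">ACCOUNTING</Column>
  </Row>
</RowSet>"""

# Build one dict per Row, keyed by each Column's name attribute.
rows = [
    {col.get("name"): col.text for col in row.findall("Column")}
    for row in ET.fromstring(rowset).findall("Row")
]
assert rows == [{"ID": "10", "NAME": "ACCOUNTING"}]
```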
Note that it is possible for an API to return multiple ResultSets
. In such cases, there will be one RowSet
for each ResultSet
in the generated XML. All RowSets
will always appear first in the generated XML.
This section discusses scenarios for types that are not supported directly using the stored procedure functionality that the database adapter provides. The following sections describe workarounds that address the need to use these data types:
Neither the design-time nor run-time components support REF CURSOR
types directly. The solution is to use a collection of an OBJECT
type. Because the number of rows returned by a REF CURSOR
is usually unknown, it is best to use a nested table as the collection type. This solution involves using a Java stored procedure to convert a ResultSet
into an instance of the declared collection type. A sample tutorial illustrating this is provided in the following directory:
Oracle_Home\bpel\samples\tutorials\122.DBAdapter\ResultSetConverter
This section comprises the following:
PL/SQL cursor variables, known as ref cursors, are supported using RowSets
. The wizard is incapable of determining what columns are included in the query of a cursor variable at design time. This makes it impossible to generate an XSD that represents the contents of a cursor variable. However, it is possible to generalize the structure of its corresponding RowSet
.
A RowSet
consists of zero or more rows. Each row consists of one or more columns where each column corresponds with a column item in the query of the cursor variable. Each column has, among other things, a name and a SQL datatype. It is therefore possible to create a definition in the XSD that corresponds with this generalization. Consider the following example:
<complexType name="RowSet">
  <sequence>
    <element name="Row" minOccurs="0" maxOccurs="unbounded" nillable="true">
      <complexType>
        <sequence>
          <element name="Column" maxOccurs="unbounded" nillable="true">
            <complexType>
              <simpleContent>
                <extension base="string">
                  <attribute name="name" type="string" use="required"/>
                  <attribute name="sqltype" type="string" use="required"/>
                </extension>
              </simpleContent>
            </complexType>
          </element>
        </sequence>
      </complexType>
    </element>
  </sequence>
</complexType>
Notice in the XSD that the XML Schema type, string, represents all column values. This is acceptable for all of the primitive datatypes such as NUMBER
, INTEGER
and TIMESTAMP
.
Unfortunately, a simple string does not capture the structure of user-defined datatypes such as OBJECT, VARRAY, and nested table. Because the structure of these datatypes is unknown when the XSD is generated, it is not possible to generate definitions for them.
However, it is possible to ascertain the structure of user-defined datatypes at runtime and generate a string that contains the value of the datatype. The value of these datatypes will therefore be an XML string that resembles their actual structure. Because this value is itself an XML string, it will appear as fully escaped XML in the corresponding element of the generated XML.
Suppose you have the following package:
CREATE PACKAGE PKG AS
   TYPE REF_CURSOR IS REF CURSOR;
   PROCEDURE TEST(C OUT REF_CURSOR);
END;

CREATE PACKAGE BODY PKG AS
   PROCEDURE TEST(C OUT REF_CURSOR) AS
   BEGIN
      OPEN C FOR SELECT DEPTNO, DNAME FROM DEPT;
   END;
END;
The REF_CURSOR
is a weakly typed cursor variable because the query is not specified. After the procedure executes, the following XML will be generated for parameter, C
:
<C>
  <Row>
    <Column name="DEPTNO" sqltype="NUMBER">10</Column>
    <Column name="DNAME" sqltype="VARCHAR2">ACCOUNTING</Column>
  </Row>
  <Row>
    <Column name="DEPTNO" sqltype="NUMBER">20</Column>
    <Column name="DNAME" sqltype="VARCHAR2">RESEARCH</Column>
  </Row>
  …
</C>
There will be a total of four rows, each consisting of two columns, DEPTNO
and DNAME
.
Ref cursors are represented by Java ResultSets. It is not possible to create a ResultSet programmatically using APIs provided by the JDBC driver. Therefore, ref cursors cannot be passed IN to a stored procedure. They can be passed only as IN/OUT and OUT parameters, with one caveat: an IN/OUT ref cursor is treated strictly as an OUT parameter. Because no IN value can be provided, a null is bound when invoking the stored procedure.
The wizard provides a mechanism that detects when these types are used and then invokes Oracle JPublisher to generate the necessary wrappers automatically. Oracle JPublisher generates two SQL files, one to create schema objects, and another to drop them. The SQL that creates the schema objects is automatically executed from within the wizard to create the schema objects in the database schema before the XSD is generated. For example, suppose the following package specification is declared:
CREATE PACKAGE PKG AS TYPE REC IS RECORD (X NUMBER, Y VARCHAR2 (10)); TYPE TBL IS TABLE OF NUMBER INDEX BY PLS_INTEGER; PROCEDURE PROC (R REC, T TBL, B BOOLEAN); END;
Figure 4-83 shows the step in the wizard that is displayed when procedure PROC
from package PKG
is selected.
Figure 4-83 Specifying a Stored Procedure in the Adapter Configuration Wizard
As Figure 4-83 shows, the original procedure name is fully qualified, PKG.PROC
. The type of parameter, R
, is the name of the RECORD
. The type of T
is the name of the TABLE
. The type of B
is BOOLEAN
. The name of the wrapper package that is generated is derived from the service name, bpel_
ServiceName
(for example, bpel_UseJPub
). This is the name of the generated package that contains the wrapper procedure. The check box can be used to force the wizard to overwrite an existing package when the schema objects are created.
Clicking Next reveals the Finish page of the wizard, as shown in Figure 4-84.
Figure 4-84 Defining a Database Adapter Service: Finish Page
The contents of this page describe what the wizard has detected and what actions are performed when the Finish button is clicked. The following summarizes the contents of this page:
The name of the generated WSDL is UseJPub.wsdl
.
Two SQL scripts are created and added to the BPEL process project:
BPEL_USEJPUB.sql
– Creates the schema objects.
BPEL_USEJPUB_drop.sql
– Drops the schema objects.
The name of the generated XSD is SCOTT_USEJPUB_PKG-24PROC.xsd
.
When you click Finish, Oracle JPublisher is invoked to generate the SQL files and load the schema objects into the database. Generating wrappers may take some time to complete. After an initial wrapper has been generated for a procedure, however, generating wrappers for other procedures in the same package usually takes less time.
The following user-defined types are generated to replace the PL/SQL types from the original procedure:
SQL> CREATE TYPE PKG_REC AS OBJECT (X NUMBER, Y VARCHAR2 (10));
SQL> CREATE TYPE PKG_TBL AS TABLE OF NUMBER;
The naming convention for these types is OriginalPackageName_OriginalTypeName
. BOOLEAN
is replaced by INTEGER
in the wrapper procedure.
Acceptable values for the original BOOLEAN parameter, now that it is an INTEGER, are 1 for true and 0 for false. Any value other than 1 is considered false. The generated wrapper procedure uses APIs from the SYS.SQLJUTL package to convert between INTEGER and BOOLEAN.
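The convention can be sketched as follows (illustrative Python, not the generated PL/SQL):

```python
def to_boolean(i: int) -> bool:
    # Wrapper convention: 1 maps to true; every other value maps to false.
    return i == 1

assert to_boolean(1) is True
assert to_boolean(0) is False
assert to_boolean(7) is False  # any value other than 1 is false
```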
A new wrapper package called BPEL_USEJPUB is created. It contains the wrapper for procedure PROC, called PKG$PROC, as well as conversion APIs that convert between the PL/SQL types and the user-defined types. If the original procedure is a root-level procedure, the name of the generated wrapper procedure is TOPLEVEL$OriginalProcedureName.
The generated XSD represents the signature of the wrapper procedure PKG$PROC, not the original procedure. The name of the XSD file is URL-encoded, which replaces $ with -24.
Note the naming conventions for the generated artifacts:
The service name is used in the names of the WSDL and SQL files. It is also used as the name of the wrapper package.
The name of the generated XSD is derived from the schema name, service name, and the original package and procedure names.
The name of a user-defined type is derived from the original package name and the name of its corresponding PL/SQL type.
The name of the wrapper procedure is derived from the original package and procedure names. TOPLEVEL$
is used for root-level procedures.
The name of the generated wrapper package is limited to 30 characters. The name of the wrapper procedure is limited to 29 characters. If the names generated by Oracle JPublisher are longer than these limits, they are truncated.
When the PartnerLink that corresponds with the service associated with the procedure is invoked, the generated wrapper procedure is executed instead of the original procedure.
If a procedure contains a special type that requires a wrapper to be generated, the default clauses on any of the parameters are not carried over to the wrapper. For example, consider
SQL> CREATE PROCEDURE NEEDSWRAPPER (
   > B BOOLEAN DEFAULT TRUE, N NUMBER DEFAULT 0) IS
   > BEGIN … END;
Assuming that this is a root-level procedure, the signature of the generated wrapper procedure is
TOPLEVEL$NEEDSWRAPPER (B INTEGER, N NUMBER)
The BOOLEAN
type has been replaced by INTEGER
. The default clauses on both parameters are missing in the generated wrapper. Parameters of generated wrapper procedures never have a default clause, even if they did in the original procedure.
In this example, if an element for either parameter is not specified in the instance XML, then a null value is bound to that parameter during the invocation of the wrapper procedure. The default value of the parameter that is specified in the original procedure is not used.
To address this, the generated SQL file that creates the wrapper must be edited, restoring the default clauses to the parameters of the wrapper procedure. The wrapper and any additional schema objects must then be reloaded into the database schema. After editing the SQL file, the signature of the wrapper procedure is
TOPLEVEL$NEEDSWRAPPER (B INTEGER DEFAULT 1, N NUMBER DEFAULT 0)
For BOOLEAN
parameters, the default value for true is 1
and the default value for false is 0
.
As a final step, the XSD generated for the wrapper must be edited. A special attribute must be added to elements representing parameters that now have default clauses. Add db:default="true"
to each element representing a parameter that now has a default clause. For example,
<element name="B" … db:default="true" …/>
<element name="N" … db:default="true" …/>
This attribute is used at run time to indicate that, if the element is missing from the instance XML, the corresponding parameter must be omitted from the procedure call. The remaining attributes of these elements remain exactly the same.
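The run-time rule can be sketched as follows (illustrative Python; the function and names are hypothetical, not adapter APIs):

```python
def params_to_bind(params, instance, has_default):
    """Return the parameter names to bind for one invocation.

    Sketch of the db:default rule: a defaulted parameter whose element is
    absent is omitted; a non-defaulted absent parameter is bound as null.
    """
    bound = []
    for name in params:
        if name in instance:
            bound.append(name)        # element present: bind its value
        elif name not in has_default:
            bound.append(name)        # no default clause: bind null
        # else: omit so the PL/SQL default clause applies
    return bound

assert params_to_bind(["B", "N"], {"N": 5}, {"B", "N"}) == ["N"]
```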
This tutorial describes how to integrate a stored procedure into Oracle BPEL Process Manager with JDeveloper BPEL Designer. Other tutorials that demonstrate stored procedures and functions are File2StoredProcedure
, JPublisherWrapper
, and ResultSetConverter
. Go to
Oracle_Home\bpel\samples\tutorials\122.DBAdapter
This section contains the following topics:
Connect to the scott
schema of the Oracle database using SQL*Plus. This is the schema in which to create the stored procedure. This example assumes tiger
is the password.
sqlplus scott/tiger
Create the stored procedure (note the blank space after Hello
):
SQL> CREATE PROCEDURE HELLO (NAME IN VARCHAR2, GREETING OUT VARCHAR2) AS
  2  BEGIN
  3  GREETING := 'Hello ' || NAME;
  4  END;
  5  /

Procedure created.
The following are the steps to create a new BPEL project:
Open Oracle JDeveloper.
From the File menu, select New.
The New Gallery dialog box is displayed.
Select All Technologies from the Filter By box. This displays a list of available categories.
Expand the General node, and then select Projects.
Select ESB Project from the Items group, as shown in Figure 4-85.
Click OK.
The BPEL Project Creation Wizard - Project Settings dialog box is displayed.
Name: Enter a process name. In this example, type Greeting.
Namespace: Retain the default values.
Template: Select Synchronous BPEL Process.
Click Next.
The BPEL Project Creation Wizard - Input/Output Elements dialog box is displayed.
Accept the default values, and then click Finish.
You have completed creating a BPEL Project, as shown in Figure 4-86.
This section describes how you create a partner link. The following are the steps to create a partner link:
From the Components Palette, select Services.
From the list of services, drag and drop Partner Link onto the BPEL project window.
The Create Partner Link dialog box is displayed.
Enter Hello in the Name field, and then click the Define Adapter Service icon (the third icon) under WSDL Settings.
The Adapter Configuration wizard - Welcome screen is displayed.
Click Next.
The Adapter Type dialog box is displayed.
Select Database Adapter, as shown in Figure 4-87, and then click Next.
Enter Hello in the Service Name field on the Service Name window, and then click Next. This is the same name as that of the partner link.
The Service Connection dialog box is displayed.
Click New to create a new connection.
The Welcome dialog box is displayed.
Click Next.
The Type dialog box is displayed.
Enter a name (for example, myConnection) in the Connection Name field of the Type dialog box.
Select the database connection type (for example, Oracle (JDBC)) from the Connection Type list, and click Next.
The Authentication dialog box is displayed.
Enter scott in the Username field of the Authentication window.
Enter the password for scott in the Password field (tiger for this example).
Leave the remaining fields as they are, and click Next.
The Connection dialog box is displayed.
Enter the following connection information. If you do not know this information, contact your database administrator.
Field | Example of Value |
---|---|
Driver | thin |
Host Name | localhost |
JDBC Port | 1521 |
SID | ORCL |
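With the thin driver, these values combine into a standard Oracle JDBC connect string (shown here for reference only; substitute your own host, port, and SID):

```
jdbc:oracle:thin:@localhost:1521:ORCL
```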
Click Next.
The Test dialog box is displayed.
Click Test Connection on the Test window.
If the connection was successful, the following message appears:
Success!
Click Finish.
The Service Connection dialog box is displayed, with all fields populated, as shown in Figure 4-88.
Figure 4-88 The Service Connection Dialog Box
Click Next.
The Operation Type dialog box is displayed.
Select Call a Stored Procedure or Function on the Operation Type window, and then click Next.
The Specify Stored Procedure dialog box is displayed.
Click Browse to the right of the Procedure field.
The Stored Procedures dialog box is displayed.
Leave <Default Schema> selected in the Schema list. This defaults to the scott
schema in which the stored procedure is defined.
Select Hello in the Stored Procedures navigation tree.
Note:
As an alternative, you can enter Hello in the Search field, click Search to display this stored procedure in the Stored Procedures navigation tree, and then select it.
The Arguments tab displays the parameters of the stored procedure.
Click the Source tab to display the Hello stored procedure source code. You entered this syntax when you created the stored procedure using SQL*Plus in Creating a Stored Procedure.
Figure 4-89 shows the source code of the stored procedure Hello.
Figure 4-89 Source Code of Stored Procedure Hello
Click OK.
The Specify Stored Procedure window displays your selections. They appear as shown in Figure 4-90.
Figure 4-90 The Specify Stored Procedure Dialog Box
Click Next.
The Finish screen is displayed.
Click Finish to complete adapter configuration.
The Create Partner Link window is automatically completed.
Click Apply.
Click OK.
Select Save All from the File main menu.
The following files appear under Greeting > Integration Content in the Applications Navigator. These files contain the parameters you specified with the Adapter Configuration Wizard.
Hello.wsdl
Corresponds with the new stored procedure partner link
SCOTT_HELLO.xsd
Provides the definition of the stored procedure, including its parameters
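The generated SCOTT_HELLO.xsd wraps the procedure's parameters in InputParameters and OutputParameters elements. The following is a simplified sketch of its shape; the element names come from this procedure, but the namespace and attribute details may differ in the file JDeveloper actually generates:

```xml
<schema xmlns="http://www.w3.org/2001/XMLSchema"
        targetNamespace="http://xmlns.oracle.com/pcbpel/adapter/db/SCOTT/HELLO/"
        elementFormDefault="qualified">
  <!-- IN parameter(s) of HELLO -->
  <element name="InputParameters">
    <complexType>
      <sequence>
        <element name="NAME" type="string" minOccurs="0" nillable="true"/>
      </sequence>
    </complexType>
  </element>
  <!-- OUT parameter(s) of HELLO -->
  <element name="OutputParameters">
    <complexType>
      <sequence>
        <element name="GREETING" type="string" minOccurs="0" nillable="true"/>
      </sequence>
    </complexType>
  </element>
</schema>
```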
You now create an invoke activity to specify an operation you want to invoke for the service (identified by the Hello partner link).
Drag and drop an invoke activity below the receiveInput activity, as shown in Figure 4-91.
Double-click the invoke activity to display the Invoke dialog box.
In the Invoke dialog box, perform the following actions:
In the Name field, type Greet.
Click the Browse PartnerLinks icon on the right of the Partner Link field.
The Partner Link Chooser dialog box is displayed.
Select Hello, as shown in Figure 4-92, and then click OK.
Figure 4-92 The Partner Link Chooser Dialog Box
The Invoke dialog box is displayed with the Name, Partner Link, and Operation fields filled in. The Operation field is automatically filled in with the value Hello.
Click the Automatically Create Input Variable icon, which is the first icon to the right of the Input Variable field.
The Create Variable dialog box is displayed. A variable named Greet_Hello_InputVariable automatically appears in the Name field. This variable provides the value for the in
parameter of the stored procedure. The type is {http://xmlns.oracle.com/pcbpel/adapter/db/Hello/}args_in_msg.
Ensure that Global Variable is selected.
Click OK on the Create Variable dialog box.
Click the first icon to the right of the Output Variable field.
A variable named Greet_Hello_OutputVariable automatically appears in the Name field. This variable stores the value of the out
parameter of the procedure after it executes. The type is {http://xmlns.oracle.com/pcbpel/adapter/db/Hello/}args_out_msg.
Ensure that Global Variable is selected.
Click OK in the Create Variable dialog box.
Your selections for the Invoke window appear, as shown in Figure 4-93.
Click OK in the Invoke window.
Select Save All from the File main menu.
The process displays a link from the Greet Invoke activity to the Hello partner link.
You now create an Assign activity to assign the input value to the in
parameter of the stored procedure.
Drag and drop an Assign activity from the Component Palette section above the Greet Invoke activity.
Double-click the assign icon to display the Assign window.
Click the General tab.
Enter Input in the Name field.
Click Apply.
Click the Copy Rules tab.
Select Copy Operation from the Create drop-down list.
The Create Copy Operation dialog box is displayed.
Field | Value |
---|---|
From | Variable: Expand and select Variables, then inputVariable, then payload, then client:GreetingProcessRequest, and then client:input. |
To | Variable: Expand and select Variables, then Greet_Hello_InputVariable, then InputParameters, then ns2:InputParameters, and then NAME. |
The Create Copy Operation dialog box appears as follows:
Click OK to close the Create Copy Operation dialog box.
Click OK to close the Assign window.
Select Save All from the File main menu.
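In the .bpel source, the copy rule you just defined is recorded as a standard BPEL assign activity. The following is a representative sketch; the exact part and query attributes depend on the WSDL and XSD that the wizard generated:

```xml
<assign name="Input">
  <copy>
    <from variable="inputVariable" part="payload"
          query="/client:GreetingProcessRequest/client:input"/>
    <to variable="Greet_Hello_InputVariable" part="InputParameters"
        query="/ns2:InputParameters/NAME"/>
  </copy>
</assign>
```

The Output assign you create next has the same shape, copying GREETING from Greet_Hello_OutputVariable into the process's outputVariable.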
You now create an Assign activity to retrieve the value of the out
parameter of the stored procedure.
Drag and drop an Assign activity from the Component Palette section below the Greet Invoke activity.
Double-click the assign icon to display the Assign window.
Click the General tab.
Enter Output in the Name field.
Click Apply.
Click the Copy Rules tab.
Click Create to display the Create Copy Operation dialog box.
Field | Value |
---|---|
From | Variable: Expand and select Variables, then Greet_Hello_OutputVariable, then OutputParameters, then ns2:OutputParameters, and then GREETING. |
To | Variable: Expand and select Variables, then outputVariable, then payload, then client:GreetingProcessResponse, and then client:result. |
Click OK to close the Create Copy Operation dialog box.
Click OK to close the Assign window.
The Greeting process appears as follows in JDeveloper BPEL Designer.
Select Save All from the File main menu.
Go to the Applications Navigator section.
Right-click Greeting.
Select Deploy, then LocalBPELServer, and then Deploy to default domain.
Enter the domain password (initially set to bpel) when prompted.
Click OK.
This compiles the BPEL process. Review the bottom of the window for any errors. If there are no errors, deployment was successful.
Log in to Oracle BPEL Control using Internet Explorer by selecting Start, then All Programs, then Oracle - Oracle_Home, then Oracle BPEL Process Manager, and then Oracle BPEL Control, or by running the $ORACLE_HOME/bpel/bin/startorabpel.sh script on UNIX.
Enter the password (initially set to bpel) when prompted.
The Dashboard tab of Oracle BPEL Control appears. Note that your BPEL process, Greeting, now appears in the Deployed BPEL Processes list.
Click Greeting.
The Testing this BPEL Process page appears with the Initiate tab selected.
Enter your first name in the input field (for example, John).
Click Post XML Message.
After the procedure executes and the BPEL process finishes, the value appears as follows:
Value:
<GreetingProcessResponse>
   <result>Hello John</result>
</GreetingProcessResponse>
Click Audit Instance.
The Instances tab of Oracle BPEL Control appears, along with the sequence of process activities.
Click More... on the Greet activity to see the input to and output from the stored procedure.
Note the <NAME> tag and its value in the <InputParameters> element. This value came from the inputVariable and was set by the Input Assign activity.
Note the <GREETING> tag and its value in the <OutputParameters> element. This value came from the output parameter of the stored procedure. The value was then assigned to the outputVariable by the Output Assign activity.
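The audit trail renders these two elements roughly as follows, with the values produced for the input John; the exact indentation and namespace display in Oracle BPEL Control may differ:

```xml
<InputParameters xmlns="http://xmlns.oracle.com/pcbpel/adapter/db/SCOTT/HELLO/">
  <NAME>John</NAME>
</InputParameters>

<OutputParameters xmlns="http://xmlns.oracle.com/pcbpel/adapter/db/SCOTT/HELLO/">
  <GREETING>Hello John</GREETING>
</OutputParameters>
```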
Click the Flow tab to view the process flow.
The process diagram appears.
Click any of the activities to view the XML as it passed through the BPEL process. For example, click the Greet Invoke activity to display the following: