9 Oracle JCA Adapter for Database

This chapter describes the Oracle JCA Adapter for Database (Oracle Database Adapter), which works in conjunction with Oracle BPEL Process Manager and Oracle Mediator (Mediator). This chapter also describes support for stored procedures and functions (for Oracle databases only). In addition, it contains references to use cases for the Oracle Database Adapter and for stored procedures.

This chapter includes the following topics:

9.1 Introduction to the Oracle Database Adapter

The Oracle Database Adapter enables a BPEL process to communicate with Oracle databases or third party databases through JDBC. The Oracle Database Adapter service is defined within a BPEL process partner link by using the Adapter Configuration Wizard of Oracle BPEL Process Manager (Oracle BPEL PM).

This section includes the following topics:

9.1.1 Functional Overview

The Oracle Database Adapter enables Oracle SOA Suite and Oracle Fusion Middleware to communicate with database end points. These include Oracle database servers and any relational databases that comply with ANSI SQL and provide JDBC drivers.

The principle of the Oracle Database Adapter is to expose tables, views, and SQL to SOA as transparently and non-intrusively as possible. From an integration standpoint, tables and SQL are what relational database products have in common, so a generic solution focused on what is standard has the greatest reach. Exposing databases to SOA also means combining the technologies of SQL and XML: the former an ideal language for querying information, the latter an ideal format for transporting and representing information. While stored procedure support is less standard across databases, the Oracle Database Adapter provides support for stored procedures, as this guide describes.

The Oracle Database Adapter is a JCA 1.5 connector, which runs on the Oracle Application Server. It relies on an underlying JDBC connector/driver to enact the database communication. In contrast to JDBC, it is non-programmatic. The interaction (series of SELECT, UPDATE, INSERT) is loosely modeled using the Adapter Configuration Wizard. The inputs/outputs are XML, most easily seen as input parameters and result sets converted to XML. These XML inputs and outputs allow the Oracle Database Adapter services to be plugged into Oracle Fusion Middleware.

To access an existing relational schema, you must create a new application and an SOA project to use the Adapter Configuration Wizard to perform the following:

The Oracle Database Adapter can currently be used only within the context of an SOA process as Section 9.1.1.1, "Oracle Database Adapter Integration with Oracle BPEL PM" describes.

Although Oracle Streams Advanced Queuing (Oracle AQ) is an Oracle Database feature, you use the separate, specialized Oracle JCA Adapter for AQ to integrate with Oracle AQ. For more information, see Chapter 7, "Oracle JCA Adapter for AQ".

For non-relational and legacy systems (with a few exceptions such as DB2 on AS/400), application and mainframe adapters are available. For more information about application and mainframe adapters, see:

For more information on the Oracle Database Adapter, see:

9.1.1.1 Oracle Database Adapter Integration with Oracle BPEL PM

When the Oracle Database Adapter is used to poll for database events (usually an INSERT operation on an input table) and initiate a process, it is called an exposed service in a Mediator component or an SOA composite. In a BPEL process, it is a partner link tied to a Receive activity. The expression inbound (from the database into SOA) is commonly used.

When the Oracle Database Adapter is used to invoke a one-time DML statement such as INSERT or SELECT, it is called a service reference in a Mediator component or an SOA composite. In a BPEL process, it is a partner link tied to an Invoke activity. The expression outbound (from SOA out to the database) is used.

9.1.2 Design Overview

Figure 9-1 shows how the Oracle Database Adapter interacts with the various design-time and deployment artifacts.

Figure 9-1 How the Oracle Database Adapter Works

The Oracle Database Adapter is a JCA 1.5 connector, which is deployed to the application server during installation.

The Oracle Database Adapter consists of multiple instances; each instance represents a connection to a database end point. Different SOA processes may point to the same adapter instance (database), while different service endpoints in a SOA process may point to different adapter instances (databases).

Because each adapter instance points to a single database, there is a one-to-one correspondence from adapter instances to application server data sources. Out of the box there is a single Oracle Database Adapter instance named eis/DB/SOADemo, which points to the data source jdbc/SOADataSource.

The list of adapter instances is stored in a deployment descriptor file, weblogic-ra.xml, on Oracle WebLogic Server. (It is inside DbAdapter.rar, which also contains the Java class files in DBAdapter.jar.) Configuring an Oracle Database Adapter instance is mostly about creating the underlying data source: getting the correct JDBC driver and connection URL.

For more information, see Section 9.6, "JDBC Driver and Database Connection Configuration."

However, weblogic-ra.xml entries occasionally contain more than simply the name of the underlying data source. These properties are detailed further under Section 9.5, "Deployment".
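For illustration, the following is a minimal sketch of such an entry in weblogic-ra.xml; the element layout is typical for Oracle WebLogic Server, and the jndi-name and xADataSourceName values shown are simply the out-of-the-box examples mentioned above.

<connection-instance>
  <!-- One adapter instance: the JNDI name clients refer to, plus the data source it wraps -->
  <jndi-name>eis/DB/SOADemo</jndi-name>
  <connection-properties>
    <properties>
      <property>
        <name>xADataSourceName</name>
        <value>jdbc/SOADataSource</value>
      </property>
    </properties>
  </connection-properties>
</connection-instance>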

While at run time you have Oracle Database Adapter instances, at design time you have the Adapter Configuration Wizard. You can run it once to generate a single adapter service end point, and then run it again in edit mode to make incremental changes to each. It generates all the adapter-related artifacts needed when deploying an SOA composite, as Table 9-1 lists.

Table 9-1 Adapter Configuration Wizard Generated SOA Composite Adapter Artifacts

File Description

<serviceName>.wsdl

This is an abstract WSDL, which defines the service end point in terms of the name of the operations and the input and output XML elements.

<serviceName>_table.xsd

This contains the XML schema for these input and output XML elements. Together, these two files form the interface to the rest of the SOA project.

<serviceName>_or-mappings.xml

This is an internal file. It is a TopLink-specific file that describes the mapping between the relational schema and the XML schema. It is used at run time.

<serviceName>_db.jca

This contains the internal implementation details of the abstract WSDL. It has two main sections, location and operations. Location is the JNDI name of an adapter instance, for example, eis/DB/SOADemo. Operations describe the actions to take against that end point, such as INSERT, UPDATE, SELECT, and POLL. The contents of the db.jca file are wholly determined by choices made while running the Adapter Configuration Wizard.

<serviceName>.properties

This is also an internal file. It is created when tables are imported, and information about them is saved. It is used only at design time.

At run time, the location is used to look up the adapter instance that executes the service. Based on the properties in the db.jca file and the linked or-mappings.xml file, the adapter generates the correct SQL to execute, parses the input XML, and builds an output XML document matching the XSD file. To execute the SQL, it obtains a pooled SQL connection from the underlying data source.
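The following is a hedged sketch of how a generated db.jca file ties these pieces together; the service, port type, and descriptor names are hypothetical, and the exact element layout may vary by release.

<adapter-config name="insertEmployee" adapter="Database Adapter"
    xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">
  <!-- location: the JNDI name of an adapter instance -->
  <connection-factory location="eis/DB/SOADemo"/>
  <!-- operations: the action to take against that end point -->
  <endpoint-interaction portType="insertEmployee_ptt" operation="insert">
    <interaction-spec className="oracle.tip.adapter.db.DBWriteInteractionSpec">
      <property name="DescriptorName" value="insertEmployee.Employee"/>
      <property name="DmlType" value="insert"/>
      <property name="MappingsMetaDataURL" value="insertEmployee_or-mappings.xml"/>
    </interaction-spec>
  </endpoint-interaction>
</adapter-config>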


9.2 Complete Walkthrough of the Adapter Configuration Wizard

This section describes the Adapter Configuration Wizard and how you can define an Oracle Database Adapter by using the Adapter Configuration Wizard.

This section describes the various Oracle Database Adapter concepts through a use case, which is a complete walkthrough of the Adapter Configuration Wizard. This use case also describes how, by using the Adapter Configuration Wizard, you can import tables from the database, specify relationships spanning multiple tables, generate corresponding XML schema definitions, and create services to expose the necessary SQL or database operations. These services are consumed to define partner links that are used in the BPEL process. You use the Adapter Configuration Wizard to both create and edit adapter services.

9.2.1 Creating an Application and an SOA Project

You must create an Oracle JDeveloper (JDeveloper) application to contain the SOA composite. Perform the following steps to create a new application, and an SOA project:

  1. Open JDeveloper.

  2. In the Application Navigator, click New Application.

    The Create Generic Application - Name your application page is displayed, as shown in Figure 9-2.

  3. Enter a name for the application in the Application Name field.

  4. In the Application Template list, choose Generic Application.

    Figure 9-2 The Create Generic Application - Name your application Page

  5. Click Next.

    The Create Generic Application - Name your project page is displayed, as shown in Figure 9-3.

  6. In the Project Name field, enter a descriptive name.

  7. In the Available list in the Project Technologies tab, double-click SOA to move it to the Selected list.

    Figure 9-3 The Create Generic Application - Name your Generic project Page

  8. Click Next. The Create Generic Application - Configure SOA settings page is displayed, as shown in Figure 9-4.

    Figure 9-4 The Create Generic Application - Configure SOA settings Page

  9. Select Composite With BPEL from the Composite Template list, and then click Finish.

    You have created a new application and an SOA project. This automatically creates an SOA composite.

    The Create BPEL Process page is displayed, as shown in Figure 9-5.

    Figure 9-5 The Create BPEL Process Page

  10. Enter a name for the BPEL process in the Name field.

  11. Select Define Service Later in the Template list, and then click OK.

    You have created a BPEL process.

9.2.2 Defining an Oracle Database Adapter

The next step is to define an Oracle Database Adapter service. Perform the following steps to create an Oracle Database Adapter service:

  1. In the Component Palette, select SOA.

  2. Drag and drop Database Adapter from the Service Adapters list to the Exposed Services swim lane on the composite.xml page.

    The Adapter Configuration Wizard is displayed.

    Note:

    To create an Oracle Database Adapter service as part of a BPEL process, drag and drop a BPEL process from Service Components onto Components. Double-click it. Then, in the BPEL Component Palette, drag and drop Database Adapter from BPEL Services onto one of the Partner Links swim lanes.
  3. Click Next. The Service Name page is displayed, as shown in Figure 9-6.

    Figure 9-6 Specifying the Service Name

  4. In the Service Name field, enter a service name, and then click Next. The Service Connection page is displayed.

See Section 9.2.3, "Connecting to a Database" to continue using the Adapter Configuration Wizard.

9.2.3 Connecting to a Database

Figure 9-7 shows where you select the database connection that you are using with the service. This is the database from which you import tables to configure the service. You may need to re-create the connection in each new JDeveloper application you create.

You can provide a Java Naming and Directory Interface (JNDI) name to identify the database connection. The default name provided is eis/DB/<ConnectionNameInJDev>.

For more information, see Section 9.5, "Deployment."

Figure 9-7 The Adapter Configuration Wizard: Service Connection Page

Note the following:

  • In production environments, it is recommended that you add the JNDI entry to the adapter deployment descriptor (weblogic-ra.xml). This way, the Oracle Database Adapter performs better by working in managed mode.

    For information about creating a data source and an outbound connection pool, see Section 2.19, "Adding an Adapter Connection Factory."

  • When you click Next, a connection to the database is attempted. If a connection cannot be made, you will not be able to proceed to the next window, even if you are editing an existing partner link.

See Section 9.2.4, "Selecting the Operation Type" to continue using the Adapter Configuration Wizard.

9.2.4 Selecting the Operation Type

Figure 9-8 shows where you indicate the type of operation you want to configure for this service.

Figure 9-8 The Adapter Configuration Wizard: Operation Type Page

The following operation types are available:

Otherwise, see Section 9.2.5, "Selecting and Importing Tables" to continue using the Adapter Configuration Wizard.

9.2.5 Selecting and Importing Tables

Figure 9-10 shows where you select the root database table for your operation. If you are using multiple related tables, then this is the highest-level table (or highest parent table) in the relationship tree.

Figure 9-10 The Adapter Configuration Wizard: Select Table

Selecting Import Tables launches a sub-wizard, which lets you search for and select multiple tables to import from the database. Removing a table removes (or undoes) any relationships on related tables that remain. If any underlying tables have changed when running this wizard in edit mode, you get a warning showing what changes have occurred. To reconcile, import the tables again. Note that if you click Import Tables and select multiple tables, then relationships between these tables are inferred based on the foreign key constraints. However, if you launch Import Tables once for each table imported, then no relationships are inferred.

Note:

If you reimport a table, you lose any custom relationships you may have defined on that table as well as any custom WHERE clauses (if the table being imported was the root table).

See Section 9.2.6, "Defining Primary Keys" to continue using the Adapter Configuration Wizard.

9.2.6 Defining Primary Keys

If any of the tables you have imported do not have primary keys defined on the database, you are prompted to provide a primary key for each one, as shown in Figure 9-11. You must specify a primary key for all imported tables. You can select multiple fields to specify a multipart primary key.

Figure 9-11 The Adapter Configuration Wizard: Define Primary Keys Page

The primary key that you specify here is recorded on the offline database table and is not persisted back to the database schema; the database schema is left untouched.

See Section 9.2.7, "Creating Relationships" to continue using the Adapter Configuration Wizard.

Note:

Note that the Oracle Database Adapter supports only tables that have a primary key defined. If primary key constraints have not been defined on a table explicitly, then you must provide one at design time while defining the Oracle Database Adapter by using the Adapter Configuration Wizard. If you do not provide a valid primary key, then the unique constraint is not guaranteed, which could result in loss of messages at run time. That is, rows with duplicate primary key values are likely to be lost.

The following sample describes how to use the row id field as primary key:

http://www.oracle.com/technology/sample_code/products/adapters

Note:

Oracle recommends that you use varchar instead of char for primary key columns; otherwise, you must set the weblogic-ra.xml property shouldTrimStrings to false. The truncation of trailing spaces could cause the primary key to be read incorrectly, making it impossible to update read rows as processed.

9.2.7 Creating Relationships

Figure 9-12 shows the relationships defined on the root database table and any other related tables. You can click Create Relationships… to create a new relationship between two tables, or click Remove Relationship to remove it. To rename a relationship, click Rename Relationship.

Figure 9-12 The Adapter Configuration Wizard: Relationships Page

Note the following regarding creating relationships:

  • If foreign key constraints between tables already exist on the database, then two relationships are created automatically when you import the tables, a one-to-one (1:1) from the source table (the table containing the foreign key constraints) to the target table, and a one-to-many (1:M) from the target table to the source table.

  • As Figure 9-12 shows, you see only the relationships that are reachable from the root database table. If, after removing a relationship, other relationships are no longer reachable from the root table, then they are not shown in the Relationships window. Consider the following set of relationships:

    A --1:1--> B --1:1--> C --1:M--> D --1:1--> E --1:M--> F

    where relationship (1) is A-B, (2) is B-C, (3) is C-D, (4) is D-E, and (5) is E-F.

    If you remove relationship 3, then you see only:

    A --1:1--> B

    B --1:1--> C

    If you remove relationship 2, then you see only:

    A --1:1--> B

    If you remove relationship 1, you no longer see any relationships.

Figure 9-13 shows where you can create a new relationship.

Figure 9-13 The Create Relationship Dialog

To create a new relationship:

  1. Select the parent and child tables.

  2. Select the mapping type (one-to-many, one-to-one, or one-to-one with the foreign key on the child table).

  3. Associate the foreign key fields to the primary key fields.

  4. Optionally name the relationship (a default name is generated).

Note:

Only tables that are reachable from the root table can be selected as a parent.

9.2.7.1 What Happens When Relationships Are Created or Removed

When tables are initially imported into the Adapter Configuration Wizard, a TopLink direct-to-field mapping corresponding to each field in the database is created. Consider the schemas shown in Figure 9-14 and Figure 9-15:

Figure 9-14 EMPLOYEE Schema

Figure 9-15 ADDRESS Schema

Immediately after importing these two tables, the following mappings in the Employee descriptor are created:

Employee:

  • id (direct mapping to the ID field, for example, 151)

  • name (direct mapping to the NAME field, for example, Stephen King)

  • addrId (direct mapping to the ADDR_ID field, for example, 345)

When creating a relationship mapping, the direct-to-field mappings to the foreign key fields are removed and replaced with a single relationship (one-to-one, one-to-many) mapping. Therefore, after creating a one-to-one relationship between Employee and Address called homeAddress, the Employee descriptor appears, as shown in the following example:

Employee:

  • id

  • name

  • homeAddress (one-to-one mapping to the ADDRESS table; this attribute now represents the entire Address object.)

When a relationship is removed, the direct mappings for the foreign keys are restored.

9.2.7.2 Different Types of One-to-One Mappings

When relationships are created automatically, the one-to-many relationship is from the table without the foreign key. However, you can declare this mapping, which is technically one-to-many, as a one-to-one. To do so, choose 1-1 (foreign key on target).

9.2.7.3 When Foreign Keys Are Primary Keys

Not all tables imported are in the third normal form (3NF). In rare cases, you may have two or more tables that share the same primary key but have no separate foreign key columns. In this situation, it is recommended that you create 1-1 (foreign key on target) relationships from the root table to all related tables. The reason is twofold. First, if you were to declare the primary key on the root as a foreign key (1-1, foreign key on source), then that mapping would be removed, so you would not see the primary key in the root record, only in the child record. Second, a foreign key can point to only a single table. Once you declare a column to be part of a foreign key, it is removed, so it cannot be used again in a new relationship. Creating a 1-1 (foreign key on source) on the root table not only makes the primary key column disappear but also prevents you from joining the root table to the remaining tables.

9.2.8 Creating the Attribute Filter

Figure 9-16 shows the attribute filter that is created from the imported table definitions, including any relationships that you may have defined.

Figure 9-16 The Adapter Configuration Wizard: Attribute Filtering Page

If your object filter contains self-relationships (for example, the employee-to-employee manager relationship), then you see these as loops in the tree. These loops are not present in the XSD file. This is the descriptor object model, not the XSD file.

In this page, you select those columns that will appear in the XML file, whether for input (MERGE, INSERT) or output (SELECT). Columns you are not interested in or which are to be read-only (should not be modified) can be deselected here.

See Section 9.2.9, "Defining a WHERE Clause" to continue using the Adapter Configuration Wizard.

9.2.9 Defining a WHERE Clause

If your service contains a SELECT query (that is, inbound polling services, or outbound services that contain a SELECT), then you can customize the WHERE clause of the SELECT statement.

Note:

In case of polling with the Update a Sequencing Table or Update an External Sequencing Table strategy, ensure that the name of the table in the SELECT query matches the case of the data in the sequencing table.

Figure 9-17 shows where you define a WHERE clause for an outbound service.

Figure 9-17 The Adapter Configuration Wizard: Define Selection Criteria Page

Note:

The WHERE clause applies to SELECT operations only (that is, polling for new or changed records or performing a SELECT operation on a table). It does not apply to INSERT, UPDATE, and DELETE operations.

The most basic expression in a WHERE clause can be one of the following three cases, depending on what the right-hand side (RHS) is:

  1. EMP.ID = 123

    In this case, the RHS is a literal value. This RHS is the Literal option shown in Figure 9-18.

  2. EMP.ADDR_ID = ADDR.ID

    In this case, the RHS is another database field. This RHS is the Query Key option shown in Figure 9-18.

  3. EMP.ID = ?

    In this case, the RHS value must be specified at run time. This is the Parameter option shown in Figure 9-18.

You can create the parameters that you need in the WHERE clause by clicking Add before you move on to build the WHERE clause. To build the WHERE clause, click Edit… to launch the Expression Builder, as shown in Figure 9-18.

Figure 9-18 Expression Builder

To model more complex WHERE clauses (subselects and functions), and to add ORDER BY clauses, you can edit the SQL statement manually and click Next. However, this creates maintenance overhead later on, due to hard-coded SQL, and you may lose platform independence.

You can change the columns listed in the SELECT clause as long as the number of columns and the types of each remain unchanged. For more complex changes, consider using the Execute Pure SQL option directly, where you can type any SQL.

Return Single Result Set

To use an advanced feature that influences how many total statements TopLink uses when querying against multiple related tables, select Use Outer Joins to return a Single Result Set for both Master and Detail Tables on the Define Selection Criteria page. The safest method is the default (one statement per table); with this feature enabled, TopLink attempts a single statement total, by outer joining all related tables into a single result set.

See Section 9.2.10, "Choosing an After-Read Strategy" to continue using the Adapter Configuration Wizard.

9.2.10 Choosing an After-Read Strategy

If you selected Perform an Operation on a Table, then you can skip ahead to Section 9.2.12, "Specifying Advanced Options."

When configuring an inbound operation, you have the following options about what to do after a row or rows have been read:

Figure 9-19 shows these options.

Figure 9-19 The Adapter Configuration Wizard: After Read Page

See Section 9.4.2.2, "Polling Strategies" to continue using the Adapter Configuration Wizard.

9.2.10.1 Delete the Rows That Were Read

With this option, the rows are deleted from the database after they have been read and processed by the adapter service.

9.2.10.2 Update a Field in the Table (Logical Delete)

With this option, you update a field in the root database table to indicate that the rows have been read. The WHERE clause of the query is updated automatically after you complete the configuration, as shown in Figure 9-20.

Figure 9-20 The Adapter Configuration Wizard: Logical Delete Page

When you use this approach, your database table appears, as shown in Figure 9-21.

Figure 9-21 Updating Fields in a Table

Note the following:

  • Rows 150 and 153 have been previously read and processed.

  • At the next polling event, row 152 is read and processed because it contains UNPROCESSED in the Status column. Because an explicit Unread Value was provided, row 151 is not read.

  • Row 154 has been flagged as LOCKED and is not read. You can use this reserved value if your table is used by other processes.

9.2.10.3 Update a Sequencing Table

With this option, you are keeping track of the last-read rows in a separate sequence table. Figure 9-22 shows the information you provide. The WHERE clause of your query is updated automatically after you complete the configuration.

Figure 9-22 The Adapter Configuration Wizard: Sequencing Table Page

When you use these settings, your sequence table appears, as shown in Figure 9-23.

Figure 9-23 Updating a Sequence Table

Whenever a row is read, this table is updated with the ID that was just read. Then, when the next polling event occurs, it searches for rows that have an ID greater than the last-read ID (154).

Typical columns used are event_id, transaction_id, scn (system change number), id, or last_updated. These columns typically have (monotonically) increasing values, populated from a sequence number or sysdate.

9.2.10.4 Update an External Sequencing Table on a Different Database

Choose this option to employ the sequencing table: last updated strategy on a different database. Figure 9-24 shows the Adapter Configuration Wizard - External Sequencing Table page, in which you specify the details required to perform this operation.

Figure 9-24 The Adapter Configuration Wizard - External Sequencing Table page

9.2.10.5 Update a Sequencing File

Use this option to update a sequencing file. Figure 9-25 shows the Adapter Configuration Wizard - Update a Sequencing File page where you specify the details for performing this operation.

Figure 9-25 Adapter Configuration Wizard - Update a Sequencing File Page

9.2.11 Specifying Polling Options

You can specify additional polling options, if any, in this page. Figure 9-26 shows the Adapter Configuration Wizard - Polling Options page.

Figure 9-26 Specifying Polling Options

In this page, you specify details about how to poll the database table for new rows or events.

From the Polling Frequency list, select how frequently to poll for new records or events.

In the Database Rows per XML Document field, specify the number of rows per XML document when sending events to Oracle BPEL PM or Mediator. This is the batch setting between the database adapter and its consumer: Oracle BPEL PM or Mediator.

In the Database Rows per Transaction field, select Unlimited or enter a value to indicate the number of table rows to process during a single transaction.

When polling the database for events, you can order the returned rows by the selected column by using the Order By list. The best practice is to choose <No Ordering>, because message ordering is not guaranteed without extra configuration in any case.

In the SQL field, if the SQL syntax is incorrect, then a message is displayed in red.

For more information about specifying polling options, click Help in the Polling Options page or press F1.

9.2.12 Specifying Advanced Options

You can specify advanced options, if any. Figure 9-27 shows the Adapter Configuration Wizard - Advanced Options page. In this page, you can specify advanced JDBC and DBAdapter options, configure retries, and configure native sequencing.

Figure 9-27 Specifying Advanced Options

Specify JDBC options in the JDBC Options section. These set low-level JDBC options on calls to the database. The operation you selected determines which options appear here.

In the Auto-Retries section, specify the auto-retry values to use in case of a connection-related fault; the Invoke activity can then be automatically retried a limited number of times. You can specify the following values in the fields in this section (a sketch of the resulting binding properties follows this list):

  • To retry indefinitely, type unlimited in the Attempts field.

  • Interval is the delay between retries.

  • Backoff Factor: x allows you to wait for increasing periods of time between retries. 9 attempts with a starting interval of 1 and a backoff of 2 lead to retries after 1, 2, 4, 8, 16, 32, 64, 128, and 256 (2^8) seconds.
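These wizard settings are typically persisted as retry properties on the outbound reference binding in composite.xml. The following is a hedged sketch; the reference name is hypothetical, the values are illustrative only, and the jca.retry.* property names follow the standard Oracle JCA adapter convention.

<reference name="insertEmployee">
  <!-- Retry a failed invoke 4 times, starting at 1 second and doubling each time -->
  <binding.jca config="insertEmployee_db.jca">
    <property name="jca.retry.count">4</property>
    <property name="jca.retry.interval">1</property>
    <property name="jca.retry.backoff">2</property>
  </binding.jca>
</reference>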

In the Interaction Options section, specify the interaction options as follows (a db.jca sketch follows this list):

  • GetActiveUnitOfWork is an advanced setting that forces all invoke activities in the same global transaction to use the same SQL connection if going to the same database. This makes it easier to guarantee that later invoke activities can see the changes of earlier invoke activities, but you may not need to set this at all (if using emulated two-phase commit, it should automatically be the same connection). Another difference is that for MERGE and INSERT, all changes are not written until the global transaction commits, so this setting also changes the timing of when WRITE operations occur.

  • Detect Omissions allows the MERGE and INSERT operations to ignore empty or missing XML elements in the input payload. For a MERGE operation, this will prevent valid but unspecified values from being overwritten with NULL. For INSERT operations, they will be omitted from the INSERT statement, allowing default values to take effect.

  • Optimize Merge should always be set to true, as it is a general enhancement to MERGE performance (using an in query for the primary key existence check).
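These options surface as properties of the interaction-spec in the generated db.jca file. The following is a hedged sketch for an outbound MERGE; the descriptor name is hypothetical, and the DetectOmissions and OptimizeMerge spellings are assumed JCA equivalents of the wizard labels.

<interaction-spec className="oracle.tip.adapter.db.DBWriteInteractionSpec">
  <property name="DescriptorName" value="mergeEmployee.Employee"/>
  <property name="DmlType" value="merge"/>
  <!-- Set to true to share one SQL connection and defer writes to the global transaction commit -->
  <property name="GetActiveUnitOfWork" value="false"/>
  <!-- Skip empty or missing XML elements instead of writing NULL -->
  <property name="DetectOmissions" value="true"/>
  <property name="OptimizeMerge" value="true"/>
</interaction-spec>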

Native Sequencing (Oracle only) allows you to specify that the primary key will be assigned from a sequence on any insert. Click Search and then select a sequence from the Sequence list, or type the name and click Create.

For more information about specifying advanced options, click Help in the Advanced Options page or press F1.

9.2.13 Entering the SQL String for the Pure SQL Operation

You can enter a SQL string for performing the Execute Pure SQL operation in the Custom SQL page. Figure 9-28 shows the Adapter Configuration Wizard - Custom SQL page.

Figure 9-28 Entering a SQL String

In the SQL field, enter a custom SQL string. An XSD schema of your SQL input is automatically created in the XSD field.

The XSD field displays the XSD schema of the custom SQL string you entered. You can directly edit the resulting XSD. However, if you make subsequent changes to the SQL string, then your XSD changes are lost.

For more information about entering a SQL string, click Help in the Custom SQL page or press F1.

9.3 Oracle Database Adapter Features

This section discusses the Oracle Database Adapter features.

It includes the following topics:

9.3.1 Transaction Support

The Oracle Database Adapter provides transaction support, which ensures that each modification has a clearly defined outcome, resulting in either success or failure, thus preventing potential corruption of data; each transaction executes independently of other changes and, once completed, leaves the underlying data in the same state until another transaction takes place.

There are two types of transaction support, XA transaction support and local transaction support. XA transaction support allows a transaction to be managed by a transaction manager external to a resource adapter, whereas local transaction support allows an application server to manage resources that are local to the resource adapter.

To ensure that two Oracle Database Adapter invokes commit or roll back as a unit, you must ensure the following:

  • Both Oracle Database Adapter invokes must be configured to participate in global transactions.

  • Both Oracle Database Adapter invokes must participate in the same global transaction.

  • The failure of either invoke must cause the global transaction to roll back.

The transaction support is demonstrated in the following tutorial files:

  • XAInsert

  • InsertWithCatch

  • DirectSQLPerformance

For the tutorial files, go to

http://www.oracle.com/technology/sample_code/products/adapters

Note:

You must use a non-XA driver with the SOALocalTxDataSource parameter. Switching to an XA driver breaks product functionality.

9.3.1.1 Configuring Oracle Database Adapter for Global Transaction Participation

In the deployment descriptor (weblogic-ra.xml file), you must set the xADataSourceName parameter. Additionally, the referenced DataSource must be configured for transaction participation by creating a data source in Oracle WebLogic Server Console.

You must create a data source and choose one of the XA data sources from the list.

Note:

True Database XA is only certified on Oracle 10.2.0.4 or 11.1.0.7. For earlier versions, you will be safer picking a non-XA data source implementation and selecting Emulated Two-phase commit on the next page.

For information about the recommended setting for non-XA and XA data sources used by Oracle JCA Adapters, see Section 2.21, "Recommended Setting for Data Sources Used by Oracle JCA Adapters."

Note that you cannot edit the data-sources.xml file in the Oracle WebLogic Server. You must create a data source by using the Oracle WebLogic Server Administration Console, as mentioned in Section 2.19.1, "Creating a Data Source."

9.3.1.2 Both Invokes in Same Global Transaction

Once both Oracle Database Adapter invokes participate in global transactions, they must participate in the same global transaction to commit or roll back as a unit. In BPEL, this requires an understanding of where the transaction boundaries are: at what points a checkpoint must write to the dehydration store, commit the current global transaction, and start a new one.

The transaction boundaries in a BPEL process occur either before a Receive activity or wait activity, or before an onMessage or pick activity. This may also occur when invoking a synchronous child BPEL process, unless the bpel.config.transaction property is set on the partnerlink, as shown in the following code sample.

<property name="bpel.config.transaction">required</property>

Otherwise, the parent process is broken into two transactions and the child process runs in its own transaction.
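As a hedged illustration, one common placement of this property is on the child BPEL component in composite.xml, as in the following sketch (the component and file names are hypothetical):

<component name="ChildBPELProcess">
  <implementation.bpel src="ChildBPELProcess.bpel"/>
  <!-- Run the synchronous child process in the caller's global transaction -->
  <property name="bpel.config.transaction" many="false" type="xs:string">required</property>
</component>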

9.3.1.3 Failure Must Cause Rollback

Finally, even if both Oracle Database Adapter invokes participate in the same global transaction, the failure of either invoke may not cause the global transaction to rollback.

The only cases where a failure can actually cause a global rollback are:

  • An Oracle Database Adapter operation that inserts/updates multiple tables as part of one invoke fails after having succeeded in some writes but not others. In this case, the Oracle Database Adapter marks the global transaction as rollback only, because the invoke operation was not atomic and a commit could cause data corruption.

  • The invoke retries multiple times in a database down scenario, until the global transaction times out and is rolled back.

  • An explicit bpelx:rollback fault is thrown from within the BPEL process.

9.3.1.3.1 Using the Same Sessions for Both Invokes

You must set the GetActiveUnitOfWork JCA parameter to true to enable using the same sessions or connections for both the Oracle Database Adapter invokes.

GetActiveUnitOfWork is an advanced JCA property you can set on any DBInteractionSpec. It causes the invoke to register itself with the two-phase commit callbacks, and all writes to the database are performed as part of the two-phase commit. When this property is set, on any failure the transaction is automatically rolled back, as there is no way to handle a fault at this late stage. Similarly, the same underlying TopLink session is used for both invokes, meaning that if you merge the same object twice, it is inserted/updated once. All merge invokes that set GetActiveUnitOfWork to true are cumulative.

9.3.1.4 Transaction/XA Support

To make two Oracle Database Adapter invokes commit or roll back as a unit requires the following: both Oracle Database Adapter invokes must be configured to participate in global transactions, both invokes must participate in the same global transaction, and the failure of either invoke must cause the global transaction to rollback.

9.3.1.4.1 Configuring an Oracle Database Adapter for Global Transaction Participation

In the deployment descriptor (weblogic-ra.xml), you must set xADataSourceName. The matching data source entry must be configured for global transaction participation.

True XA: Two-Phase (XA) Versus One-Phase (Emulated) Commit

XA is a two-phase commit protocol, which is more robust than a one-phase commit or emulated protocol. The difference is that with a one-phase protocol, you may very rarely still see message loss or other rollback/commit inconsistency, on the order of one per one thousand generally.

RAC Configuration

For more information about RAC configuration, see the Oracle Database High Availability Overview guide.

True XA Configuration with Third Party Drivers

When configuring true XA for third-party drivers (that is, Microsoft SQL Server, IBM DB2), check whether the driver JARs contain a class that implements javax.sql.XADataSource.

For DataDirect drivers, the names are com.oracle.ias.jdbcx.db2.DB2DataSource and com.oracle.ias.jdbcx.sqlserver.SQLServerDataSource.

9.3.1.4.2 Failure Must Cause Rollback

Finally, even if both invokes participate in the same global transaction, the failure of either invoke may not cause the global transaction to roll back.

The only cases where a failure can actually cause a global roll back are:

  • An Oracle Database Adapter operation that inserts/updates multiple tables as part of one invoke fails after having succeeded in some writes but not others. In this case, the adapter marks the global transaction rollback only, as the invoke operation was not atomic and a commit could cause data corruption.

  • The invoke retries multiple times in a database down scenario, until the global transaction times out and is rolled back.

  • An explicit bpelx:rollback fault is thrown from within the BPEL process.

  • GetActiveUnitOfWork="true" is set in the WSDL.

9.3.2 Pure SQL - XML Type Support

Pure SQL Adapter is an option in the Oracle Database Adapter Wizard that allows you to type the SQL string directly and have an XSD/Web service generated automatically. The database tables are introspected dynamically in the Adapter Configuration Wizard to test the SQL and better populate the XSD file (that is, with valid return types).

Pure SQL support allows the Oracle Database Adapter to move beyond dealing with tables and views as entities and to deal directly with SQL. You can use Pure SQL:

  • for simple data projection style report queries

  • in cases where the result set is not table oriented, such as select count(*)

  • to perform an update or delete all

  • when working with XMLType columns and xquery

  • when using complex SQL, which are not modeled in the Adapter Configuration Wizard expression builder

You can use the Pure SQL Adapter with Oracle XMLTypes. It is a natural fit for inserting XML into XMLType tables and columns, and for retrieving XML using XQuery selects. Pure SQL is a natural fit for the Oracle Database Adapter, which provides a relational-XML mapping that parallels XML DB (XDB) support. So, when using XDB, the adapter should be as lightweight and transparent as possible, to let you focus on XDB and XQuery.

If your data is in XML (unstructured/semi-structured) format, and you have no relational schema at all that you can map your data to, then you could use XDB. The conventional Oracle Database Adapter allows you to import an existing relational schema as an XML schema to be used with Web services. XDB's XML shredding algorithm can generate a relational schema from an existing XML schema for persistent storage.

Note:

Use of schema-bound XMLTypes requires the OCI driver, which is not certified in the 11g release. Therefore, you must use non-schema-bound XMLTypes at run time, though you can use schema-bound XMLTypes at design time to import a representative XSD.

For more information, see:

9.3.3 Row Set Support Using a Strongly or Weakly Typed XSD

Currently, a REF CURSOR by nature can support any arbitrary result set, so the XSD generated at design time allows for this and looks like the XSD that Example 9-1 shows.

Note:

Oracle Database stored procedures return result sets that are referred to as RefCursors, whereas result sets returned by third-party databases are referred to as RowSets.

Example 9-1 Weakly Typed XSD

<refCursorOutputParam>
    <Row>
        <Column name="DEPTNO" sqltype="NUMBER">20</Column>
        ...
    </Row>
</refCursorOutputParam>

However, the XML output from this is hard to use. It is very difficult to write an XPath expression or XSLT based on a weakly typed XSD, with column names appearing as attribute values instead of element names.

Although a row set can represent any result set, it is possible to assume for some procedures that it will have the same structure each time, and hence can be described with a strongly typed XSD. A strongly typed XSD is almost a necessity if you want to transform the result set to another XSD later on. A strongly typed XSD looks like the XSD that Example 9-2 shows.

Example 9-2 Strongly Typed XSD

<refCursorOutputParam>
    <dept>
        <deptno>20</deptno>
        ...
    </dept>
</refCursorOutputParam>

You can use the Adapter Configuration Wizard to create a strongly typed XSD for a row set returned by a stored procedure or function REF CURSOR variable. An Oracle Database function is a special stored procedure that always has one out variable and can be inlined (for example, inside SELECT statements), and so traditionally does not perform updates.

Using this feature, you can select a stored procedure (or stored function), enter its arguments, and perform a test execution to retrieve an actual row set. The Adapter Configuration Wizard then introspects the returned row set and generates a strongly typed XSD. You can enter arguments easily through the wizard. For example, you can enter numbers and strings directly, dates as literals (2009/11/11), and you can even enter structs like MYOBJ('a', 'b').

Note:

Functions are not supported for IBM DB2 UDB. Only SQL stored procedures are supported.

The Adapter Configuration Wizard row set support using a strongly typed XSD has the following restrictions:

  • Oracle Database PL/SQL record or boolean types are not supported.

  • Oracle Database PL/SQL varray is not supported.

  • Oracle Database PL/SQL %rowtype is not supported.

  • Oracle Database PL/SQL table types are not supported.

  • Oracle Database PL/SQL procedures with IN only REF CURSOR parameters are not supported.

    For an Oracle Database PL/SQL procedure with REF CURSOR as an IN/OUT parameter, the Adapter Configuration Wizard ignores the IN and generates the strongly typed XSD based on the OUT parameter.

  • Referencing an element in the XSD using ref is not supported.

  • SQL Server table valued functions and CLR functions are not supported.

The Oracle Database Adapter supports strongly typed XSD for the following third-party databases:

  • Microsoft SQL Server 2005

  • Microsoft SQL Server 2008

  • IBM DB2 UDB 9.7

The Oracle Database Adapter does not support strongly typed XSD for the following third-party databases:

  • IBM DB2 AS/400

  • MySQL

  • Informix Dynamic Server

  • Sybase 15.0.2

For more information, see:

9.3.4 Proxy Authentication Support

You can connect to your Oracle data store by using Proxy Authentication. On a per-invoke basis, you can set a combination of the following new header properties:

  • jca.db.ProxyUserName: to use the OracleConnection.PROXYTYPE_USER_PASSWORD proxy type, set this property to the proxy user name as a java.lang.String.

  • jca.db.ProxyPassword: to use the OracleConnection.PROXYTYPE_USER_PASSWORD proxy type, set this property to the proxy user password as a java.lang.String.

  • jca.db.ProxyCertificate: to use the OracleConnection.PROXYTYPE_CERTIFICATE proxy type, set this property to a base64Binary encoded byte[] array containing a valid certificate.

    This is a more secure way of passing the credentials of the user who is to be proxied to the database. The certificate contains the distinguished name encoded in it. One way of generating the certificate is by creating a wallet and then decoding the wallet to get the certificate. The wallet can be created using runutl mkwallet. It is then necessary to authenticate using the generated certificate.

  • jca.db.ProxyDistinguishedName: to use the OracleConnection.PROXYTYPE_DISTINGUISHED_NAME proxy type, set this property to the proxy distinguished name as a java.lang.String.

    This is a global name in lieu of the password of the user being proxied for.

  • jca.db.ProxyRoles: regardless of what proxy type you use, you can optionally set this property to define the roles associated with the proxy user as a String[] array where each java.lang.String corresponds to a role name.

  • jca.db.ProxyIsThickDriver: if you are using the OCI driver, set this property to a value of true to accommodate differences in the JDBC-level API between the thick and thin drivers.

To run the invoke, a proxy connection is obtained from the data source.
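The following is a hedged sketch of attaching these header properties to a single BPEL invoke; the partner link, variable, and operation names are hypothetical, and depending on your release you may instead set the properties through the Properties tab of the Invoke dialog.

<invoke name="InvokeWriteEmployee" partnerLink="writeEmployee"
        portType="ns1:writeEmployee_ptt" operation="insert"
        inputVariable="writeEmployee_input">
  <!-- Proxy credentials supplied per invoke from BPEL variables -->
  <bpelx:inputProperty name="jca.db.ProxyUserName" variable="proxyUserVar"/>
  <bpelx:inputProperty name="jca.db.ProxyPassword" variable="proxyPasswordVar"/>
</invoke>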

For more information, see:

9.3.5 Streaming Large Payload

To enable support for streaming payloads, you must select the Enable Streaming check box while specifying polling options, as shown in Figure 9-26. When you enable this feature, the payload is streamed to the database instead of being manipulated in the SOA run time as an in-memory DOM. Use this feature when handling large payloads. When you select the Enable Streaming check box, a corresponding Boolean property StreamPayload is appended to the ActivationSpec properties defined in the respective .jca file.
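For example, with streaming enabled, the activation specification in the generated .jca file carries the property shown in the following line:

<property name="StreamPayload" value="true"/>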

9.3.6 Schema Validation

The SchemaValidation [false/true] property is a new activation specification property that can be configured in a .jca file. When it is set to true, all XML files produced by the polling Oracle Database Adapter (for Receive activities) are validated against the XSD file. On failure, the XML record is rejected but still marked as processed by the Oracle Database Adapter.

Databases provide structured storage, and the XSD file is generated by the Oracle Database Adapter Wizard itself. However, if you edit the auto-generated XSD and add your own restrictions, you may want to turn on validation. For instance, if you import a VARCHAR(50) field, the auto-generated XSD has the max-length 50 restriction. However, if your BPEL process for some reason can handle only values of fixed length 22, you may want it to validate the XML file.
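To turn on validation, set the property on the activation specification in the .jca file, for example:

<property name="SchemaValidation" value="true"/>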

9.3.7 High Availability

The Oracle Database Adapter supports high availability in an active-active setup. In an active-active setup, distributed polling techniques can be used for inbound Database Adapters to ensure that the same data is not retrieved more than once. For more information, see Section 9.3.8.1, "Distributed Polling First Best Practice: SELECT FOR UPDATE (SKIP LOCKED)." Similar to other adapters, an Oracle Database Adapter can also be configured for singleton behavior within an active-passive setup. This allows a high-performance, multithreaded inbound Oracle Database Adapter instance running in an active-passive setup to follow a fan-out pattern and invoke multiple composite instances across a cluster. The Oracle Database Adapter also supports the high availability feature when there is a database failure or restart; the adapter picks up again without any message loss.

For information about how an inbound rejected message is handled by using fault policy, see Section 2.22.2.2, "Inbound Interaction."

9.3.7.1 Surviving Database Restart

The Oracle Database Adapter can survive a database down scenario and pick up again without any message loss.

The following are some possibilities of what may be wrong:

9.3.7.1.1 Oracle Database Adapter Gets Stuck on DataSource Closed Exceptions

After a database restart or fail-over, you may get into a cycle of continual SQLExceptions such as DataSource has been disabled or connection has been reset/closed. This is generally a data source configuration issue. Ensure that you configure your data source to be robust in the event of exceptions, by using the recommended setting for non-XA and XA data sources used by Oracle JCA Adapters.

For information about using recommended setting for non-XA and XA data sources used by Oracle JCA Adapters, see Section 2.21, "Recommended Setting for Data Sources Used by Oracle JCA Adapters."

9.3.7.1.2 Auto-Retrying Remote Faults

The Oracle Database Adapter is configured by default to retry a small number of times in the event of a retryable exception on an outbound one-time invoke. It is configured to retry indefinitely on a retryable fault on the inbound side. You can configure both of these behaviors on the Adapter Configuration Wizard - Advanced Options page of the Wizard.

For information about using the Advanced Options page, see Section 9.2.12, "Specifying Advanced Options."

9.3.7.1.3 Exception Misclassification

Auto-retries can kick in only if a remote fault is thrown. If instead a binding fault is thrown, then you may have an issue with exception misclassification. Correct the classification, as described below, only if you can see that the wrong flavor of exception is being thrown.

First find out the error code of the SQLException being thrown. For example, if it is 17002, then it is the tns listener not available exception.

Then add the property nonRetriableSQLErrorCodes to weblogic-ra.xml. (You must declare the property in ra.xml first.) The value is a space- or comma-separated list that states what is certainly a binding fault or a remote fault. "1" says 1 is certainly a binding fault. "+1 -17002" says 1 is certainly a binding fault and 17002 is certainly a remote fault. Hence, if 17002 were accidentally being classified as a binding fault, then "-17002" overrides that and ensures that it is always treated as a remote fault.
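A hedged sketch of the corresponding weblogic-ra.xml fragment follows; the surrounding connection-instance elements are omitted, and the exact property element layout depends on your WebLogic Server version.

<property>
  <!-- 1 is always a binding fault; 17002 is always a remote (retryable) fault -->
  <name>nonRetriableSQLErrorCodes</name>
  <value>+1 -17002</value>
</property>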

9.3.7.1.4 Recoverable Instances

If you configure a set number of times to retry an instance in the event of a retryable fault, then, once all retries are exhausted, SOA marks these instances as processed to allow other instance processing to continue. These recoverable instances are stored in a database and can later be manually replayed from the Fusion Middleware Control Console.

9.3.7.2 Undying

The DBActivationSpec property Undying instructs the Oracle Database Adapter to never shut down while polling due to an EIS system exception. The Adapter instead goes into a special mode where it keeps polling but does not log any messages, until it is either shut down manually or the original problem is resolved.

It is recommended that you set this property before going into production, to handle any SQLExceptions that may be classified incorrectly. In test and development, it is good to leave it as false, because most exceptions classified as non-retryable are genuinely due to modeling mistakes.

When you execute the Adapter Configuration Wizard for the Oracle Database Adapter and click Finish, the Undying property appears in the db.jca file for end point activations, as shown in the following example:

<endpoint-activation portType="poll_ptt" operation="receive">
   <activation-spec className="oracle.tip.adapter.db.DBActivationSpec">
     <property name="DescriptorName" value="poll.PerfMasterIn"/>
     <property name="QueryName" value="pollSelect"/>
     <property name="MappingsMetaDataURL" value="poll-or-mappings.xml"/>
     <property name="PollingStrategy" value="LogicalDeletePollingStrategy"/>
     <property name="MarkReadColumn" value="ATTRIBUTE2"/>
     <property name="MarkReadValue" value="READ${weblogic.Name-1}-${IP}"/>
     <property name="MarkReservedValue" value="MINE${weblogic.Name-1}-${IP}"/>
     <property name="MarkUnreadValue" value="UNREAD"/>
     <property name="PollingInterval" value="5"/>
     <property name="MaxRaiseSize" value="5"/>
     <property name="MaxTransactionSize" value="10"/>
     <property name="SequencingColumn" value="PK"/>
     <property name="ReturnSingleResultSet" value="false"/>
     <property name="Undying" value="false"/>
   </activation-spec>
 </endpoint-activation>

To activate this property, you must set Undying to true.

9.3.8 Scalability

The following sections describe best practice for multiple Oracle Database Adapter process instances deployed to multiple Oracle BPEL PM or Mediator nodes, including:

9.3.8.1 Distributed Polling First Best Practice: SELECT FOR UPDATE (SKIP LOCKED)

The first best practice for multiple Oracle Database Adapter process instances deployed to multiple Oracle BPEL PM or Mediator nodes is to use the Adapter Configuration Wizard to set both the Distributed Polling check box in the Adapter Configuration Wizard and to set MaxTransactionSize. Increase concurrency by setting the adapter binding property activationInstances.

On an Oracle database, this automatically uses the syntax SELECT FOR UPDATE SKIP LOCKED. Concurrent threads each try to select and lock the available rows, but the locks are obtained only on fetch. If a row that is about to be fetched is already locked, the next unlocked row is locked and fetched instead. If many threads all execute the same polling query at the same time, they should all relatively quickly obtain a disjoint subset of unprocessed rows.

On a non-Oracle database, the SELECT FOR UPDATE safely ensures that the same row cannot be processed multiple times; however, you may get less scalability. You should consider either additionally using a partition field or the second best practice, which is essentially multi-threading on a single node with fan-out (see Section 9.3.8.2, "Distributed Polling Second Best Practice: Tuning on a Single Node First").

Note:

A distributed approach is required to ensure that multiple activation instances do not process the same rows.

When configuring this best practice, consider the following:

9.3.8.1.1 Configuring PollingInterval, MaxTransactionSize, and ActivationInstances

In a distributed scenario, each polling instance tries to balance the load by not greedily attempting to process all unprocessed rows by itself. This means that, per polling interval, an instance processes at most MaxTransactionSize rows. Hence, the maximum throughput is:

activationInstances x MaxTransactionSize / PollingInterval
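
For example, with five activation instances, a MaxTransactionSize of 10, and a PollingInterval of 1 second, the theoretical maximum throughput is 5 x 10 / 1 = 50 rows per second.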

Note:

With distributed polling, the polling interval can effectively be ignored if each poll processes a full batch of MaxTransactionSize rows.

Although you may want to increase MaxTransactionSize, if you increase it too high, you may start to see transaction timeouts. Table 9-2 lists safe values for MaxTransactionSize.

Table 9-2 MaxTransactionSize and MaxRaiseSize Values

MaxTransactionSize   MaxRaiseSize         Description
10                   1                    When using sequential routing. For 10 rows you will have
                                          10 individual instances and 10 XML records passing through SOA.
100                  --                   When using parallel routing.
>= 100               MaxTransactionSize   When using the adapter to stream rows through as fast as possible.


For load balancing purposes, it is dangerous to set the MaxTransactionSize too low in a distributed environment (where it becomes a speed limit). It is best to set the MaxTransactionSize close to the per CPU throughput of the entire business process. This way, load balancing occurs only when you need it.

If distributed polling is not set, then the adapter tries to process all unprocessed rows in a single polling interval.

9.3.8.1.2 Partition Field

In a distributed scenario, you will have polling instances on multiple servers; however, on each server you will likely also have more than one thread configured. You can configure these activation instances to cooperate at least somewhat by processing separate rows, possibly improving scaling.

Simply add the property PartitionField to your db.jca file:

<property name="PartitionField" value="ID"/>

If you set activationInstances to 2, then activation instances 1 and 2 (or 0 and 1) would respectively execute:

SELECT ... WHERE ... AND MOD (ID, 2) = 0 FOR UPDATE SKIP LOCKED

and

SELECT ... WHERE ... AND MOD (ID, 2) = 1 FOR UPDATE SKIP LOCKED

Activation instance 0 will still conflict with activation instance 0 on other servers, but at least it will not conflict with activation instances assigned ID 1.

Make sure that the partition field is numeric and that applying MOD evenly distributes the rows (that is, in this case, make sure the IDs are not all even or all odd).
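
For example, if ID is populated from a sequence that increments by 1, then MOD (ID, 2) alternates between 0 and 1, so two activation instances each receive roughly half of the rows.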

On Oracle Database, you can set the partition field to be rowid by setting db.jca file property PartitionField as follows:

 <property name="PartitionField" value="rowid"/>

Then the SQL will in fact be converted to:

SELECT ... WHERE ... AND MOD (dbms_rowid.rowid_row_number(rowid), 2) = [0/1] FOR UPDATE SKIP LOCKED

Because skip locking already provides scalability on Oracle Database, setting a partition field is not recommended there. There is a cost in increased database CPU usage because of the more complex SQL.

9.3.8.1.3 NumberOfThreads

The NumberOfThreads property should not be configured in a distributed polling scenario. Instead, configure the composite.xml adapter binding property activationInstances (see the sketch following the note below).

Note:

In a distributed cluster scenario configure activationInstances. In a multi-threaded single node scenario, configure NumberOfThreads.
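
The following is a minimal sketch of how activationInstances might be set on the adapter binding in composite.xml; the service name, interface, and JCA file name are illustrative only:

<service name="poll">
  <interface.wsdl interface="http://xmlns.oracle.com/pcbpel/adapter/db/poll/#wsdl.interface(poll_ptt)"/>
  <binding.jca config="poll_db.jca">
    <!-- number of concurrent polling instances started on each node -->
    <property name="activationInstances">4</property>
  </binding.jca>
</service>
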
9.3.8.1.4 Indexing and Null Values

Try to index (and/or add explicit database constraints for) the primary key and all foreign keys to joined tables. If using logical delete polling, try to index the status column. Try to configure non-null MarkUnreadValue and MarkReadValue values.

If you have no indexes at all and prefer to keep it that way, you may want to go with the single-node multi-threaded approach (see Section 9.3.8.2, "Distributed Polling Second Best Practice: Tuning on a Single Node First"). That way the polling query is executed once, which may be a full table scan, and multiple threads then help to exhaust the entire result set until all rows are processed. With a distributed approach, all work must be done while the rows are exclusively locked, which means locked in a timed transaction. In a distributed scenario there are many repeated selects, which may harm performance if each one does a full table scan.

Note:

Performance will be very slow if MarkUnreadValue is configured as null.
9.3.8.1.5 Disabling Skip Locking

Skip locking has been available on Oracle Database since version 8 but is documented in version 11. In rare cases you may come across an incompatible feature and need to disable skip locking. In that case, you can set the Oracle Database Adapter connector property usesSkipLocking to false in the ra.xml file that you deploy with your application, as Example 9-3 shows.

Example 9-3 Configuring usesSkipLocking in ra.xml

<?xml version="1.0" encoding="UTF-8"?>
<connector xmlns="http://java.sun.com/xml/ns/j2ee"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee
  http://java.sun.com/xml/ns/j2ee/connector_1_5.xsd" version="1.5">
...
  <resourceadapter>
        <outbound-resourceadapter>
            <connection-definition>
...
            <config-property>
              <config-property-name>usesSkipLocking</config-property-name>
              <config-property-type>java.lang.Boolean</config-property-type>
                    <config-property-value>false</config-property-value>
            </config-property>
...
            </connection-definition>
...
        </outbound-resourceadapter>
  </resourceadapter>
</connector>

For more information on how to configure connector-level properties, see:

9.3.8.1.6 MarkReservedValue

If you are using Logical Delete polling and you set MarkReservedValue, skip locking will not be used.

Formerly, the best practice for multiple Oracle Database Adapter process instances deployed to multiple Oracle BPEL Process Manager or Oracle Mediator nodes was essentially using LogicalDeletePollingStrategy or DeletePollingStrategy with a unique MarkReservedValue on each polling node, and setting MaxTransactionSize.

However, with the introduction of skip locking in this release, that approach has been superseded. If you were using it previously, you can simply remove the MarkReservedValue (in db.jca) or clear it (on the Logical Delete page of the wizard), and you automatically get skip locking.

The benefits of using skip locking over a reserved value include:

  • Skip locking scales better in a cluster and under load.

  • All work is in one transaction (as opposed to update/reserve, then commit, then select in a new transaction), so the risk of a non-recoverable situation in HA environments is minimized.

  • No unique MarkReservedValue needs to be specified. For the reserved-value approach to work, you had to configure a complex variable such as R${weblogic.Name-2}-${IP-2}-${instance}.

9.3.8.1.7 SequencingPollingStrategy (Last Read or Last Updated)

This distributed approach works with Delete or Logical Delete based polling strategies.

The work of the sequencing-based polling strategies cannot be distributed, because records are initially processed in order.

For example, the second row cannot be marked as processed ahead of the first (setting last read ID to 2 means not just that 2 has been processed but 1 also).

However, as the sequencing polling strategies are non-intrusive, requiring no post updates or deletes to the source tables, they are already extremely fast.

Sequencing polling strategies should be used with a single node, or with fan-out on a cluster. They are still safe to use in a cluster; however, the SELECT FOR UPDATE is instead applied when accessing the last read ID in the helper table.

9.3.8.2 Distributed Polling Second Best Practice: Tuning on a Single Node First

The next best practice for multiple Oracle Database Adapter process instances deployed to multiple Oracle BPEL PM or Mediator nodes is to tune on a single node first.

For an Oracle Database Adapter intensive process, such as a database-to-database integration, performance can be improved by a factor of 10 or 100 just by tuning on a single Java Virtual Machine (JVM), scaling NumberOfThreads, and setting high values for MaxTransactionSize and MaxRaiseSize.

As Section 9.3.8.1, "Distributed Polling First Best Practice: SELECT FOR UPDATE (SKIP LOCKED)" describes, there may be times where it is best to improve performance on a single node, and then optionally do fan-out to multiple nodes in a cluster. Relying on concurrency control features of the database such as locking can be great, but these are often designed more for preserving data integrity than for high performance scalability.

Cases where it may be best to do polling on a single node in the cluster include using the non-intrusive Sequencing Polling strategy, polling large un-indexed tables, or using a non-Oracle back-end database that does not provide high concurrency locks like skip locks.

Note:

For Oracle Database Adapter with polling operation in a clustered environment, you must use the option of distributed polling by selecting the Distributed Polling check box in the Adapter Configuration Wizard.

See the samples MultiTablesPerformance and DirectSQLPerformance in the following location for tuning on a single node:

http://www.oracle.com/technology/sample_code/products/adapters

9.3.9 Performance Tuning

The Oracle Database Adapter is preconfigured with many performance optimizations. You can, however, make some changes to reduce the number of round trips to the database by implementing performance tuning.

For information about performance tuning, see:

9.3.10 detectOmissions Feature

The following are the characteristics of the detectOmissions feature:

Available Since

Release 10.1.3

Configurable

Yes

Default Value

Design Time: true, unless explicitly set to false

Use Case

Without this feature, users who pass incomplete or partial XML to a merge, update, or insert see every column they left unspecified in the XML set to null in the database.

The detectOmissions feature allows a DBAdapter merge, insert, or update to differentiate between a null value and the absence of a value (an omission) in XML documents. On a case-by-case basis, it determines which information in the XML is meaningful and which is not. In this way the XML is seen as a partial representation of a database row, as opposed to a complete representation. The following table lists examples of null values and values that can be omitted.

Element Type   Omission                                   Null
Column         <director></director>                      <director xsi:nil="true" />
               <director />
               <!-- director>…</director -->
1-1            <!-- dept> … </dept -->                    <dept xsi:nil="true" />
1-M            <!-- empCollection>…</empCollection -->    <empCollection></empCollection> (empty)


Note:

The 1-1 representation <dept /> denotes an empty department object and should not be used. For 1-M, <empCollection /> actually means a collection of 0 elements and is considered a meaningful value. For columns, <director></director> is not considered an omission in cases where it represents an empty string.

A value considered omitted is omitted from UPDATE or INSERT SQL. For an update operation, existing (meaningful) values on the database are not overwritten. For an insert operation, the default value on the database is used, as no explicit value is provided in the SQL string.

A DBAdapter receive is not able to produce XML with omissions, and makes use of xsi:nil="true". If you are unable to produce input XML with xsi:nil="true", or are concerned about the difference between <director /> and <director></director>, then it is best to set DetectOmissions="false" in the JCA file.
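
The following is a minimal sketch of how this might look for an outbound interaction in the db.jca file; the descriptor name insert.MoviesTable is illustrative only:

<interaction-spec className="oracle.tip.adapter.db.DBWriteInteractionSpec">
  <property name="DescriptorName" value="insert.MoviesTable"/>
  <!-- treat unspecified elements as null values rather than as omissions -->
  <property name="DetectOmissions" value="false"/>
</interaction-spec>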

To treat all null values as omissions, check out the IgnoreNullsMerge sample, which comes with a custom TopLink plugin. The plugin works similarly to this feature, but cannot detect the subtleties between null and omission.

The IgnoreNullsMerge sample is available at the following location:

http://www.oracle.com/technology/sample_code/products/adapters

When you are expecting an update, you can improve performance by omitting 1-1 and 1-M relationships, because the merge operation can then skip considering the detail records completely.

Alternatively, map only those columns that you are interested in, and create separate mappings for different invokes. If two updates should update two different sets of columns, create two separate partner links.

Performance

By default, the XML input to the Oracle Database Adapter contains no omissions. Until an XML with omissions is detected, there is no performance overhead. Once omissions are detected, a TopLink descriptor event listener is added. This event listener has some overhead, and every modifyRow about to become a SQLUpdate or SQLInsert must be iterated over to check for omissions. Hence, every column value sent to the database is checked. If the input XML has mostly omissions, then this overhead should be more than compensated for by sending fewer values to the database.

Incompatible Interactions

DirectSQL="true" and DetectOmissions="true" - DetectOmissions takes precedence. The following are some examples for incompatible interactions:

  • DetectOmissionsMerge

  • IgnoreNullsMerge

  • OptimizeMerge

Note:

For old BPEL projects that have been migrated, you must re-run the Database Adapter Wizard to regenerate the JCA file. When you do this, the DetectOmissions and OptimizeMerge options appear in the JCA file with the default values DetectOmissions="false" and OptimizeMerge="false".

See the following for more information:

You can also access the forums from Oracle Technology Network at

http://www.oracle.com/technology

9.3.11 OutputCompletedXml Feature

OutputCompletedXml is a feature of the outbound insert activity. The following are its characteristics:

Available Since

Release 10.1.2.0.2

Configurable

OutputCompletedXml appears in the JCA file only when its default value is true.

Default Value

It is true when TopLink sequencing is configured to assign primary keys on insert from a database sequence; otherwise, it is false.

Issue

You can have primary keys auto-assigned on insert from a database sequence. However, the usefulness of this feature is diminished, because insert/merge have no output message, so there is no way to tell which primary keys were assigned.

Note:

After configuring sequencing (link), run the Adapter Configuration Wizard again so that the insert/merge WSDL operations can be regenerated with an output message, and WSDL property OutputCompletedXml="true".

Performance

An output XML is provided only when the output XML would be significantly different, so if TopLink sequencing is not used, this feature is disabled and there is no performance hit. Further, the feature can be explicitly disabled. In addition, the original input XML is updated and returned; a completely new XML is not built. Also, only a shallow update of the XML is performed; if primary keys were assigned to detail records, these are not reflected in the output XML.

Incompatible Interactions

DirectSQL="true" and "OutputCompletedXml" - OutputCompletedXml takes precedence.

9.3.12 QueryTimeout for Inbound and Outbound Transactions

You can configure QueryTimeout from the Adapter Configuration Wizard - Advanced Options page. This feature exposes the java.sql.Statement property of the same name. Essentially, QueryTimeout allows you to configure a timeout on the call.
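
For example, the setting might appear in the db.jca file as a property such as the following sketch; the exact entry is generated by the wizard, and the value follows java.sql.Statement, which specifies the timeout in seconds:

<property name="QueryTimeout" value="30"/>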

9.3.13 Doing Synchronous Post to BPEL (Allow In-Order Delivery)

With this feature, the entire invocation is in a single thread and global transaction. By default, initiation is asynchronous, and the BPEL process is invoked in a separate global transaction. With Oracle Mediator, the invoke is generally synchronous, so this feature is specific to Oracle BPEL processes.

To enable this feature, click the Do Synchronous Post to BPEL (Allow In-Order Delivery) option in the Adapter Configuration Wizard - Operation page.

9.4 Oracle Database Adapter Concepts

This section includes the following topics:

9.4.1 Relational-to-XML Mapping

This section includes the following topics:

For a flat table or schema, the relational-to-XML mapping is easy to see. Each row in the table becomes a complex XML element. The value for each column becomes a text node in the XML element. Both column values and text elements are primitive types.

Table 9-3 shows the structure of the MOVIES table. This table is used in the use cases described in this chapter. See Oracle Database Adapter Use Cases for more information.

Table 9-3 MOVIES Table Description

Name            Null?      Type
TITLE           NOT NULL   VARCHAR2(50)
DIRECTOR        --         VARCHAR2(20)
STARRING        --         VARCHAR2(100)
SYNOPSIS        --         VARCHAR2(255)
GENRE           --         VARCHAR2(70)
RUN_TIME        --         NUMBER
RELEASE_DATE    --         DATE
RATED           --         VARCHAR2(6)
RATING          --         VARCHAR2(4)
VIEWER_RATING   --         VARCHAR2(5)
STATUS          --         VARCHAR2(11)
TOTAL_GROSS     --         NUMBER
DELETED         --         VARCHAR2(5)
SEQUENCENO      --         NUMBER
LAST_UPDATED    --         DATE


The corresponding XML schema definition (XSD) is as follows:

<?xml version = '1.0' encoding = 'UTF-8'?>
<xs:schema targetNamespace="http://xmlns.oracle.com/pcbpel/adapter/db/top/ReadS1" xmlns="http://xmlns.oracle.com/pcbpel/adapter/db/top/ReadS1" elementFormDefault="qualified" attributeFormDefault="qualified" xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="MoviesCollection" type="MoviesCollection"/>
  <xs:complexType name="MoviesCollection">
     <xs:sequence>
        <xs:element name="Movies" type="Movies" minOccurs="0" maxOccurs="unbounded"/>
     </xs:sequence>
  </xs:complexType>
  <xs:complexType name="Movies">
     <xs:sequence>
        <xs:element name="title">
           <xs:simpleType>
              <xs:restriction base="xs:string">
                 <xs:maxLength value="50"/>
              </xs:restriction>
           </xs:simpleType>
        </xs:element>
        <xs:element name="director" minOccurs="0" nillable="true">
           <xs:simpleType>
              <xs:restriction base="xs:string">
                 <xs:maxLength value="20"/>
              </xs:restriction>
           </xs:simpleType>
        </xs:element>
        <xs:element name="starring" minOccurs="0" nillable="true">
           <xs:simpleType>
              <xs:restriction base="xs:string">
                 <xs:maxLength value="100"/>
              </xs:restriction>
           </xs:simpleType>
        </xs:element>
        <xs:element name="synopsis" minOccurs="0" nillable="true">
           <xs:simpleType>
              <xs:restriction base="xs:string">
                 <xs:maxLength value="255"/>
              </xs:restriction>
           </xs:simpleType>
        </xs:element>
        <xs:element name="genre" minOccurs="0" nillable="true">
           <xs:simpleType>
              <xs:restriction base="xs:string">
                 <xs:maxLength value="70"/>
              </xs:restriction>
           </xs:simpleType>
        </xs:element>
        <xs:element name="runTime" type="xs:decimal" minOccurs="0" nillable="true"/>
        <xs:element name="releaseDate" type="xs:dateTime" minOccurs="0" nillable="true"/>
        <xs:element name="rated" minOccurs="0" nillable="true">
           <xs:simpleType>
              <xs:restriction base="xs:string">
                 <xs:maxLength value="6"/>
              </xs:restriction>
           </xs:simpleType>
        </xs:element>
        <xs:element name="rating" minOccurs="0" nillable="true">
           <xs:simpleType>
              <xs:restriction base="xs:string">
                 <xs:maxLength value="4"/>
              </xs:restriction>
           </xs:simpleType>
        </xs:element>
        <xs:element name="viewerRating" minOccurs="0" nillable="true">
           <xs:simpleType>
              <xs:restriction base="xs:string">
                 <xs:maxLength value="5"/>
              </xs:restriction>
           </xs:simpleType>
        </xs:element>
        <xs:element name="status" minOccurs="0" nillable="true">
           <xs:simpleType>
              <xs:restriction base="xs:string">
                 <xs:maxLength value="11"/>
              </xs:restriction>
           </xs:simpleType>
        </xs:element>
        <xs:element name="totalGross" type="xs:decimal" minOccurs="0" nillable="true"/>
        <xs:element name="deleted" minOccurs="0" nillable="true">
           <xs:simpleType>
              <xs:restriction base="xs:string">
                 <xs:maxLength value="5"/>
              </xs:restriction>
           </xs:simpleType>
        </xs:element>
        <xs:element name="sequenceno" type="xs:decimal" minOccurs="0" nillable="true"/>
        <xs:element name="lastUpdated" type="xs:dateTime" minOccurs="0" nillable="true"/>
     </xs:sequence>
  </xs:complexType>
</xs:schema>

As the preceding code example shows, MOVIES is not just a single CLOB or XMLTYPE column containing the entire XML string. Instead, it is an XML complexType comprising elements, each of which corresponds to a column in the MOVIES table. For flat tables, the relational-to-XML mapping is straightforward.

Table 9-4 and Table 9-5 show the structure of the EMP and DEPT tables, respectively. These tables are used in the MasterDetail use case. See Oracle Database Adapter Use Cases for more information.

Table 9-4 EMP Table Description

Name       Null?      Type
EMPNO      NOT NULL   NUMBER(4)
ENAME      --         VARCHAR2(10)
JOB        --         VARCHAR2(9)
MGR        --         NUMBER(4)
HIREDATE   --         DATE
SAL        --         NUMBER(7,2)
COMM       --         NUMBER(7,2)
DEPTNO     --         NUMBER(2)


Table 9-5 DEPT Table Description

Name     Null?      Type
DEPTNO   NOT NULL   NUMBER(2)
DNAME    --         VARCHAR2(14)
LOC      --         VARCHAR2(13)


As the preceding table definitions show, and as is typical of a normalized relational schema, an employee's department number is not stored in the EMP table. Instead, one of the columns of EMP (DEPTNO) is a foreign key, which equals the primary key (DEPTNO) in DEPT.

However, the XML file equivalent has no similar notion of primary keys and foreign keys. Consequently, in the resulting XML file, the same data is represented in a hierarchy, thereby preserving the relationships by capturing the detail record embedded inside the master.

An XML element can contain elements that are either a primitive type (string, decimal), or a complex type, that is, another XML element. Therefore, an employee element can contain a department element.

The corresponding XML shows how the relationship is materialized, or shown inline. DEPTNO is removed from EMP, and instead you see the DEPT itself.

<EmpCollection>
  <Emp>
    <comm xsi:nil = "true" ></comm> 
    <empno >7369.0</empno>
    <ename >SMITH</ename>
    <hiredate >1980-12-17T00:00:00.000-08:00</hiredate>
    <job >CLERK</job>
    <mgr >7902.0</mgr>
    <sal >800.0</sal>
    <dept>
      <deptno >20.0</deptno>
      <dname >RESEARCH</dname>
      <loc >DALLAS</loc>
    </dept>
  </Emp>
    ...
</EmpCollection>

Materializing the relationship makes XML human readable and allows the data to be sent as one packet of information. No cycles are allowed in the XML file; therefore, an element cannot contain itself. This is handled automatically by the Oracle Database Adapter. However, you may see duplication (that is, the same XML detail record appearing more than once under different master records). For example, if a query returned two employees, both of whom work in the same department, then, in the returned XML, you see the same DEPT record inline in both the EMP records.

Therefore, when you import tables and map them as XML, it is recommended that you avoid excessive duplication, although the Oracle Database Adapter does not print an element inside itself. The Oracle Database Adapter prints the following:

<Emp>
  <name>Bob</name>
  <spouse> 
    <name>June</name>
  </spouse>
</Emp>

But not:

<Emp>
  <name>Bob</name>
  <spouse>
    <name>June</name> 
    <spouse>
      <name>Bob</name> 
      <spouse>
        ...
      </spouse> 
    </spouse> 
  </spouse> 
</Emp>

To avoid duplication, you can do the following:

  • Import fewer tables. If you import only EMP, then DEPT does not appear.

  • Remove the relationship between EMP and DEPT in the Adapter Configuration Wizard. When the relationship is removed, the foreign key column is put back.

In both these cases, the corresponding XML is as follows:

<EmpCollection> 
  <Emp>
    <comm xsi:nil = "true" ></comm>
    <empno >7369.0</empno>
    <ename >SMITH</ename> 
    <hiredate >1980-12-17T00:00:00.000-08:00</hiredate>
    <job >CLERK</job>
    <mgr >7902.0</mgr>
    <sal >800.0</sal> 
    <deptno >20.0</deptno>
  </Emp> 
   ...
</EmpCollection>

Note that either of the two preceding solutions is feasible only if getting back the foreign key suffices, as opposed to getting back the complete detail record in its entirety.

9.4.1.1 Relational Types to XML Schema Types

Table 9-6 shows how database data types are converted to XML primitive types when you import tables from a database.

Table 9-6 Mapping Database Data Types to XML Primitive Types

Database Type                                                           XML Type (Prefixed with xs:)

VARCHAR, VARCHAR2, CHAR, NCHAR, NVARCHAR, NVARCHAR2, MEMO, TEXT,        string
CHARACTER, CHARACTER VARYING, UNICHAR, UNIVARCHAR, SYSNAME,
NATIONAL CHARACTER, NATIONAL CHAR, NATIONAL CHAR VARYING,
NCHAR VARYING, LONG, CLOB, NCLOB, LONGTEXT, LONGVARCHAR, NTEXT

BLOB, BINARY, IMAGE, LONGVARBINARY, LONG RAW, VARBINARY, GRAPHIC,       base64Binary
VARGRAPHIC, DBCLOB, BIT VARYING

BIT, NUMBER(1) DEFAULT 0, SMALLINT DEFAULT 0                            boolean

TINYINT, BYTE                                                           byte

SHORT, SMALLINT                                                         short

INT, SERIAL                                                             int

INTEGER, BIGINT                                                         integer

NUMBER, NUMERIC, DECIMAL, MONEY, SMALLMONEY, UNIQUEIDENTIFIER           decimal

FLOAT, FLOAT16, FLOAT(16), FLOAT32, FLOAT(32), DOUBLE,                  double
DOUBLE PRECISION, REAL

TIME, DATE, DATETIME, TIMESTAMP, TIMESTAMP(6), SMALLDATETIME,           dateTime
TIMESTAMPTZ, TIMESTAMPLTZ, TIMESTAMP WITH TIME ZONE,
TIMESTAMP WITH LOCAL TIME ZONE


Essentially, NUMBER goes to DECIMAL, the most versatile XML data type for numbers, VARCHAR2 and CLOB to string, BLOB to base64Binary (to meet the plain-text requirement), and date types to dateTime.

Any type not mentioned in this discussion defaults to java.lang.String and xs:string. Time Stamp support is basic, because only the xs:dateTime format is supported. The BFILE type is specifically not supported.

Note:

The user-defined Object, Struct, VARRAY, and REF types are supported in 11g.

Because XML is plain text, BLOB and byte values are base 64/MIME encoded so that they can be passed as character data.

9.4.1.2 Mapping Any Relational Schema to Any XML Schema

The Oracle Database Adapter supports mapping any relational schema on any relational database to an XML schema, although not any XML schema of your choice, because the Adapter Configuration Wizard generates the XML schema with no explicit user control over the layout of the elements. You can control how you map the schema in both the Adapter Configuration Wizard and later in TopLink Workbench. By pairing the Oracle Database Adapter with a transformation step, you can map any relational schema to any XML schema.

9.4.1.3 Querying over Multiple Tables

When executing a SQL SELECT statement against multiple related tables, there are the following three methods of building the SQL. These methods relate to how the detail records are pulled in when the query is against the master record:

The following sections outline these three methods and compare them. Note that when selecting rows from a single table, none of the issues that arise when selecting from multiple tables apply.

9.4.1.3.1 Using Relationship Queries (TopLink Default)

Having selected a master row, TopLink can always query separately to get all the details belonging to that master. These hidden queries (relationship queries) are cached in the TopLink metadata and need to be prepared only once.

Consider the SQL statement in the following sample scenario:

SELECT DIRECTOR, ..., VIEWER_RATING 
       FROM MOVIES 
WHERE RATING = 'A';

For each master, the SQL statement is as follows:

SELECT CRITIC, ..., TITLE
       FROM MOVIE_REVIEWS
WHERE (TITLE = ?)

It enables you to bring in all the data with 1 + n query executions, where n is the number of master rows returned by the first query.

This approach is safe but slow, as a large number of round trips to the database are required to pull in all the data.

To configure the relationship queries (TopLink default) approach, you must edit or_mappings.xml outside JDeveloper and change the batch-reading elements' value to false.

9.4.1.3.2 Twisting the Original Select (TopLink Batch-Attribute Reading)

This is a default feature that allows TopLink to alter the original SQL select statement to read all the details in a second select statement, as shown in the following example:

SELECT DIRECTOR, ..., VIEWER_RATING
FROM MOVIES
WHERE RATING = 'A'
SELECT DISTINCT t0.CRITIC, ..., t0.TITLE
FROM MOVIE_REVIEWS t0, MOVIES t1
WHERE ((t1.RATING = 'A') AND (t0.TITLE = t1.TITLE))

Because the original select statement is taken into account when pulling in the details, a total of two (1 + 1 = 2) query executions need to be performed.

Advantages

Batch attribute reading has the following advantages:

  • All data read in two round trips to database

  • This is a default feature in the 10.1.2.0.2 release

Disadvantages

Batch attribute reading has the following disadvantages:

  • When using maxTransactionSize (on polling receive) or maxRows (on invoke select) to limit the number of rows loaded into memory at a time, these settings do not easily carry over to the batch attribute query. It is easier to work with a cursored result when there is only a single result set. (Multiple cursors can be used with difficulty, if the original query has an order by clause).

  • TopLink can alter a SQL statement only when it is in a format it can understand. If you use the hybrid SQL approach and set custom SQL for the root select, then TopLink will not be able to interpret that SQL to build the batch select.

  • The DISTINCT clause is used on the batch query, to avoid returning the same detail twice if two masters happen to both point to it. The DISTINCT clause cannot be used when returning LOBs in the resultset.

Configuration

Configuration is on a per 1-1 or 1-M mapping basis. By default, all such mappings since the 10.1.2.0.2 release have this property set. To configure, edit or_mappings.xml outside JDeveloper and set the <batch-reading> elements to true (the default) or false.
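
The following is a minimal sketch of what such a mapping might look like in or_mappings.xml; the attribute and reference-class names are illustrative only, and the exact surrounding elements depend on the mappings that the wizard generated:

<database-mapping>
   <attribute-name>movieReviewsCollection</attribute-name>
   <reference-class>ReadS.MovieReviews</reference-class>
   <!-- true (default): read all details in one batched SELECT; false: one SELECT per master row -->
   <batch-reading>true</batch-reading>
</database-mapping>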

9.4.1.3.3 Returning a Single Result Set (TopLink Joined-Attribute Reading)

The detail tables are outer-joined to the original SQL select statement, returning both master and detail in a single result set, as shown in the following example:

SELECT DISTINCT t1.DIRECTOR, ..., t1.VIEWER_RATING, t0.CRITIC, ..., t0.TITLE
FROM MOVIE_REVIEWS t0, MOVIES t1
WHERE ((t1.RATING = 'A') AND (t0.TITLE (+) = t1.TITLE))

This requires one query execution in total.

Advantages

The advantages include the following:

  • When using maxTransactionSize while polling, the benefits of dealing with a single cursor can be great.

  • When following the hybrid SQL route and entering custom SQL statements, you only have to deal with a single SQL statement, whereas TopLink normally uses a series of additional hidden SQL statements to bring in related rows.

  • Read consistency: Enables you to read all related rows at the same time, and not at different points in time for the different tables.

  • Performance can be ideal as only a single round trip to the database is required, whereas batch attribute reading requires one for each table queried.

Disadvantages

There are some drawbacks, however, namely the cost of returning duplicate data. For example, consider that you read the Master and Detail tables; Master has 100 columns in each row, and Detail has 2 columns in each row. Each Master row also typically has 100 related Detail rows.

With one query per table, the result sets for the preceding example appear as follows:

Master
Column1   Column2   ...   Column100
Master1   ...

Detail
Column1   Column2
Detail1   ...
Detail2   ...
...
Detail100 ...

In this example, 300 column values are returned as shown:

columns in master + (columns in detail x details per master) =
100 + (2 x 100) = 300

With one query for all tables, the result set appears as follows:

Master                                             Detail

Column1 Column2 ... Column100                      Column1  Column2
Master1    ...                                     Detail1     ...
Master1    ...                                     Detail2     ...
Master1    ...                                     Detail100   ...

Note that, in the case of one query for all tables, 10,200 column values are returned in a single result set, versus 300 in two result sets, as shown here:

(columns in master + columns in detail) x details per master =
(100 + 2) x 100 = 10,200

This can be a serious drain on network traffic and computation because 97 percent of the data returned is duplicate data. Also, if the master had two related tables, detail1 and detail2, and there were 100 of each per master row, then the number of column values returned would be over one million per master row.

In general, you can use the following simple formula to estimate the relative cost of returning all rows in a single result set:

bloat = ((Master columns + Detail1 columns + Detail2 columns + ...)
            x Detail1s per Master x Detail2s per Master x ...)
        / (Master columns
            + Detail1 columns x Detail1s per Master
            + Detail2 columns x Detail2s per Master + ...)

Note that for 1-1 relationships this value is always 1. If, in the same example, each master had only two columns, the details had 100 columns instead, and each master had only 3 or 4 details, then the bloat (calculated here with 4 details per master) would be

bloat = ((2 + 100) x 4) / (2 + 100 x 4) = 408 / 402 ~= 1

Another disadvantage is that this setting could distort the meaning of the maxRows setting on an outbound select.

Configuration

To configure, select Use Outer Joins to return a Single Result Set for both Master and Detail Tables on the Adapter Configuration Wizard - Define Selection Criteria page.

Note:

When you create a SQL query such as the following by using the TopLink Expression Builder, the result may not be as expected:

SELECT DISTINCT t1.TABLE1_ID, t1.COLUMN_A FROM TABLE2 t0, TABLE1 t1 WHERE ((t0.STATUS = 1) AND (t0.TABLE1_ID = t1.TABLE1_ID))

The expected result for this query is that only Table 1 rows, along with the Table 2 rows they own that have STATUS = 1, are returned.

However, what this query actually translates to is "Table 1 rows where any of their Table 2 rows have STATUS = 1," resulting in the return of the Table 1 rows that match the selection criteria and ALL of the Table 2 rows they own, including those with other statuses, whether or not their status is 1. The DISTINCT keyword ensures that the Table 1 rows are not repeated and that the join happens across Table 2.

The misunderstanding arises from the way TopLink works. Through the Expression Builder, you can specify selection criteria only for Table 1; you have no control over the Table 2 rows it owns, because that part is done automatically.

However, you can get the expected result by using either of the following two approaches:

1.) Query directly for Table 2 using the selection criterion STATUS = 1; that is, do not go through Table 1 to get the Table 2 rows it owns.

2.) Use direct (custom SQL), as shown in the following example:

SELECT TABLE1.TABLE1_ID, TABLE1.COLUMN_A, TABLE2.STATUS FROM TABLE2, TABLE1 WHERE TABLE2.STATUS=1 AND TABLE1.TABLE1_ID = TABLE2.TABLE1_ID
9.4.1.3.4 Comparison of the Methods Used for Querying over Multiple Tables

On the surface, returning a single result set looks best (1 query), followed by batch attribute reading (altering the select statement: 2 queries), and finally by default relationship reading (n + 1 queries). However, there are several pitfalls to both of the more advanced options, as explained below:

Altering User-Defined SQL

If you specify custom/hybrid SQL, TopLink cannot alter that SQL string to build the details select. For this reason, you should avoid using hybrid SQL and build selects using the wizard's visual expression builder as often as possible.

Show Me the SQL

The additional queries executed by TopLink in both the default and the batch attribute reading cases can be somewhat of a mystery to users. For this reason, the raw SQL shown to users in the Adapter Configuration Wizard assumes returning a single result set, to make things clearer and also to improve manageability.

Returning Too Many Rows At Once

Databases can store vast quantities of information, and a common pitfall is select statements that return too much information at once. On a DBAdapter receive, a maxTransactionSize property can be set to limit the number of rows that are read from a cursored result set and processed in memory at a time. A similar maxRows setting exists for one-time invoke select statements; however, that setting is very risky.

9.4.2 SQL Operations as Web Services

After mapping a relational schema as XML, you must also map basic SQL operations as Web services. Each operation discussed in the following sections has a corresponding tutorial and a readme file. It is recommended that you start with these and try to run one or more as you read this section. As the tutorials demonstrate, some operations translate directly to the SQL equivalent, while others are more complex.

This section includes the following topics:

9.4.2.1 DML Operations

Data manipulation language (DML) operations align with basic SQL INSERT, UPDATE, DELETE, and SELECT operations. SQL INSERT, UPDATE, DELETE, and SELECT are all mapped to Web service operations of the same name. The MERGE is either an INSERT or an UPDATE, based on the result of an existence check. A distinction is made between the data manipulation operations (called outbound writes) and the SELECT operations (called outbound reads). The connection between the Web service and the SQL for merge (the default for outbound write) and for queryByExample is not as obvious as for basic SQL INSERT, UPDATE, and SELECT.

This section includes the following topics:

Merge

Merge first reads the corresponding records in the database, calculates any changes, and then performs a minimal update. INSERT, UPDATE, and MERGE make the most sense when you are thinking about a single row and a single table. However, your XML can contain complex types and map to multiple rows on multiple tables. Imagine a DEPT with many EMPS, each with an ADDRESS. In this case, you must calculate which of possibly many rows have changed and which to insert, update, or delete. If a particular row did not change or only one field changed, then the DML calls are minimal.

queryByExample

Unlike the SELECT operation, queryByExample does not require selection criteria to be specified at design time. Instead, for each invoke, the selection criteria are inferred from an exemplary input XML record.

For instance, if the output xmlRecord is an employee record, and the input is a sample xmlRecord with lastName = "Smith", then on execution, all employees with a last name of Smith are returned.

A subset of queryByExample is to query by primary key, which can be implemented by passing in sample XML records where only the primary key attributes are set.

Use queryByExample when you do not want to create a query using the visual query builder and want the flexibility of allowing the input record to share the same XML schema as the output records.

The queryByExample operation is slightly less performant because a new SELECT must be prepared for each execution. This is because the attributes that are set in the example XML record can vary each time, and therefore the selection criteria vary.

Input xmlRecord:

<Employee>
      <id/>
      <lastName>Smith</lastName>
</Employee>

Output xmlRecord:

<EmployeeCollection>
      <Employee>
         <id>5</id>
         <lastName>Smith</lastName>
         ....
      </Employee>
      <Employee>
         <id>456</id>
         <lastName>Smith</lastName>
         ....
      </Employee>
</EmployeeCollection>

Use Cases for Outbound Invoke Operations

Outbound invoke operations are demonstrated in the following tutorial files:

  • Insert

  • Update

  • Delete

  • Merge

  • SelectAll

  • SelectAllByTitle

  • PureSQLSelect

  • QueryByExample

For these files, go to

http://www.oracle.com/technology/sample_code/products/adapters

Use Cases for Pure SQL

A new option in 10.1.3.1 enables you to specify any arbitrary SQL string, and an XML representing the inputs and outputs to the SQL is generated. Pure SQL operations are demonstrated in the following tutorial files:

  • UpdateAll

  • SelectCount

  • SelectGroupBy

  • SelectStar

For these files, go to

http://www.oracle.com/technology/sample_code/products/adapters

Advanced Use Cases for Outbound Invoke Operations

Advanced outbound invoke operations are demonstrated in the following tutorial files:

  • InsertWithClobs

  • XAInsert

  • NativeSequencingInsert

For these files, go to

http://www.oracle.com/technology/sample_code/products/adapters

9.4.2.2 Polling Strategies

The inbound receive enables you to listen to and detect events and changes in the database, which in turn can be the initiators of a business process. This is not a one-time action, but rather an activation. A polling thread is started, which polls a database table for new rows or events.

Whenever a new row is inserted into the MOVIES table, the polling operation raises it to the SCA run time. The strategy is to poll every record once. The initial SELECT has to be repeated over time, to receive the rows that exist at the start and all new rows as they are inserted over time. However, a new row, once read, is not likely to be deleted, and therefore could possibly be read again on each subsequent poll.

The various ways to poll for events, called polling strategies, also known as after-read strategies or publish strategies, range from simple and intrusive to sophisticated and nonintrusive. Each strategy employs a different solution for the problem of what to do after reading a row or event so as not to pick it up again in the next polling interval. The simplest (and most intrusive) solution is to delete the row so that you do not query it again.

This section discusses the following polling operations that you can perform after the data is read from the database. This section also discusses the strategies and factors to help you determine which strategy to employ for a particular situation:

Delete the Row(s) that were Read

Choose this operation to employ the physical delete polling strategy. This operation polls the database table for records and deletes them after processing. This strategy can be used to capture events related to INSERT operations and cannot capture database events related to DELETE and UPDATE operations on the parent table. This strategy cannot be used to poll child table events. This strategy allows multiple adapter instances to go against the same source table. There is zero data replication.

Preconditions: You must have deletion privileges on the parent and associated child tables to use the delete polling strategy. Table 9-7 describes the requirements for using the delete polling strategy.

Table 9-7 Delete Polling Strategy Preconditions

Requirements Met        Conflicts With
Poll for inserts        No delete on source
Shallow delete          No updates on source
Cascading delete        Poll for updates
Minimal SQL             Poll for deletes
Zero data replication   Poll for child updates
Default                 --
Allows raw SQL          --
Concurrent polling      --


Note:

In Shallow delete and Cascading delete, the delete operation can be configured to delete the top-level row, to cascade all, or to cascade on a case-by-case basis.

Concurrent polling can be configured for both delete and logical delete polling strategies.

Configuration: You can configure the delete polling strategy to delete the top-level row, to cascade all, or to cascade on a case-by-case basis. This enables deleting only the parent rows and not the child rows, cascaded deletes, and optional cascaded deletes, determined on a case-by-case basis. You can configure the polling interval for performing an event publish at design time.

Delete Cascade Policy: The optional advanced configuration is to specify the cascade policy of the DELETE operation. For instance, after polling for an employee with an address and many phone numbers, the phone numbers are deleted because they are privately owned (default for one-to-many), but not the address (default for one-to-one). This can be altered by configuring or_mappings.xml, as in the following example:

<database-mapping>
   <attribute-name>orders</attribute-name>
   <reference-class>taxonomy.Order</reference-class>
   <is-private-owned>true</is-private-owned>
</database-mapping>

You can also configure the activation itself to delete only the top level (master row) or to delete everything.

A receive operation appears in an inbound JCA as follows:

<connection-factory location="eis/DB/Connection1" UIConnectionName="Connection1" adapterRef=""/>
  <endpoint-activation portType="dd_ptt" operation="receive">
    <activation-spec className="oracle.tip.adapter.db.DBActivationSpec">
      <property name="DescriptorName" value="dd.Emp"/>
      <property name="QueryName" value="ddSelect"/>
      <property name="MappingsMetaDataURL" value="dd-or-mappings.xml"/>
      <property name="PollingStrategy" value="LogicalDeletePollingStrategy"/>
      <property name="MarkReadColumn" value="STATUS"/>
      <property name="MarkReadValue" value="PROCESSED"/>
      <property name="MarkReservedValue" value="RESERVED-1"/>
      <property name="MarkUnreadValue" value="UNPROCESSED"/>
      <property name="PollingInterval" value="5"/>
      <property name="MaxRaiseSize" value="1"/>
      <property name="MaxTransactionSize" value="10"/>
      <property name="ReturnSingleResultSet" value="false"/>
    </activation-spec>
  </endpoint-activation>
</adapter-config>

Update a Field in the [Table_Name] Table (Logical Delete)

Choose this operation to employ the logical delete polling strategy. This strategy involves updating a special field on each row processed and updating the WHERE clause at run time to filter out processed rows. It mimics a logical delete, wherein application rows are rarely deleted; instead, a status column such as isDeleted is set to true. The status column and the read value must be provided, but the modified WHERE clause and the post-read update are handled automatically by the Oracle Database Adapter.

Preconditions: You must have the logical delete privilege or a one-time alter schema (add column) privilege on the source table. Table 9-8 describes the requirements for using the logical delete polling strategy.

Table 9-8 Logical Delete Polling Strategy Preconditions

Requirements Met         Conflicts With
Poll for inserts         No updates on source
No delete on source      Poll for deletes
Minimal SQL              --
Zero data replication    --
Minimal configuration    --
Allows raw SQL           --
Poll for updates         --
Poll for child updates   --
Concurrent polling       --


Note:

The following requirements are met as follows:
  • Poll for updates: By adding a trigger

  • Poll for child updates: By adding a trigger

  • Concurrent polling: By specifying additional mark unread and reserved values.

Configuration: The logical delete polling strategy requires minimal configuration. You must specify the mark read column and the value that indicates a processed record.

A receive operation appears in an inbound WSDL as follows:

<operation name="receive">
   <jca:operation
      ActivationSpec="oracle.tip.adapter.db.DBActivationSpec"
         …
      PollingStrategyName="LogicalDeletePollingStrategy"
      MarkReadField="STATUS"
      MarkReadValue="PROCESSED"

Given the configuration for logical delete, the Oracle Database Adapter appends the following WHERE clause to every polling query:

AND ((STATUS IS NULL) OR (STATUS <> 'PROCESSED'))
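
For example, assuming a polling query against the MOVIES table with the selection criterion RATING = 'A' and a MarkReadColumn of STATUS, the resulting polling SQL would look roughly like the following sketch:

SELECT TITLE, DIRECTOR, ..., STATUS
  FROM MOVIES
 WHERE (RATING = 'A')
   AND ((STATUS IS NULL) OR (STATUS <> 'PROCESSED'))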

Database Configuration: A status column on the table being polled must exist. If it does not exist already, you can add one to an existing table.
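
For example, assuming the EMPLOYEE table used in the trigger example that follows, a status column could be added with a statement like the following sketch; the column name and size are illustrative only:

ALTER TABLE EMPLOYEE ADD (STATUS VARCHAR2(11));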

Support for Polling for Updates: Given that rows are not deleted with each read, it is possible to read a row multiple times. You must add a trigger to reset the mark read field whenever a record is changed, as follows:

create trigger Employee_modified
before update on Employee
for each row
begin
   :new.STATUS := 'MODIFIED';
end;

Support for Concurrent Access Polling: Just as a single instance should never process an event more than once, the same applies to a collection of instances. Therefore, before processing a record, an instance must reserve that record with a unique value. Again, the status column can be used:

<operation name="receive">
   <jca:operation
      ActivationSpec="oracle.tip.adapter.db.DBActivationSpec"
         …
      PollingStrategyName="LogicalDeletePollingStrategy"
      MarkReadField="STATUS"
      MarkUnreadValue="UNPROCESSED"
      MarkReservedValue="RESERVED${IP-2}-${weblogic.Name-1}-${instance}"
      MarkReadValue="PROCESSED"

The polling query instead appears, as shown in the following example:

Update EMPLOYEE set STATUS = 'RESERVED65-1-1' where (CRITERIA) AND (STATUS = 'UNPROCESSED');

Select … from EMPLOYEE where (CRITERIA) AND (STATUS = 'RESERVED65-1-1');

The after-read UPDATE is faster because it can update all the reserved rows at once:

Update EMPLOYEE set STATUS = 'PROCESSED' where (CRITERIA) AND (STATUS = 'RESERVED65-1-1');

Update a Sequencing Table

Choose this operation to employ the sequencing table: last-read ID strategy. This polling strategy involves using a helper table to remember a sequence value. The source table is not modified; instead, the rows that have been read are recorded in a separate helper table. A sequence value of 1000, for example, means that every record with a sequence value less than 1000 has already been processed. Because many tables have some counter field that is always increasing and maintained by triggers or the application, this strategy can often be used for noninvasive polling. No field on the processed rows needs to be modified by the Oracle Database Adapter.

Native sequencing with a preallocation size of 1 can ensure that rows are inserted with primary keys that are always increasing over time.

This strategy is also called a nondestructive delete because no updates are made to the source rows, and a sequencing strategy such as the sequence field can be used to order the rows in a sequence for processing. When the rows are ordered in a line, the Oracle Database Adapter knows which rows are processed and which are not with a single unit of information.

Preconditions: You must have a sequencing table or create table privilege on the source schema. The source table has a column that is monotonically increasing with every INSERT (an Oracle native sequenced primary key) or UPDATE (the last-modified timestamp). Table 9-9 describes the requirements for using the sequencing polling strategy.

Table 9-9 Sequencing Polling Strategy Preconditions

Requirements Met         Conflicts With
Poll for inserts         Poll for deletes
Poll for updates         Allows raw SQL
No delete on source      Concurrent polling
No updates on source     --
One extra SQL select     --
Zero data replication    --
Moderate configuration   --
Poll for child updates   --


Configuration: A separate helper table must be defined. On the source table, you must specify which column is ever increasing.

<adapter-config name="ReadS" adapter="Database Adapter" xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">
    <connection-factory location="eis/DB/DBConnection1" UIConnectionName="DBConnection1" adapterRef=""/>
  <endpoint-activation portType="ReadS_ptt" operation="receive">
    <activation-spec className="oracle.tip.adapter.db.DBActivationSpec">
      <property name="DescriptorName" value="ReadS.PerfMasterIn"/>
      <property name="QueryName" value="ReadSSelect"/>
      <property name="MappingsMetaDataURL" value="ReadS-or-mappings.xml"/>
      <property name="PollingStrategy" value="SequencingPollingStrategy"/>
      <property name="SequencingTable" value="PC_SEQUENCING"/>
      <property name="SequencingColumn" value="PK"/>
      <property name="SequencingTableKeyColumn" value="TABLE_NAME"/>
      <property name="SequencingTableValueColumn" value="LAST_READ_ID"/>
      <property name="SequencingTableKey" value="PERF_MASTER_IN"/>
      <property name="PollingInterval" value="60"/>
      <property name="MaxRaiseSize" value="1"/>
      <property name="MaxTransactionSize" value="10"/>
      <property name="ReturnSingleResultSet" value="false"/>
    </activation-spec>
  </endpoint-activation>
</adapter-config>

The sequencing field type can be excluded if it is actually a number.

Database Configuration: A sequencing helper table must be configured once for a given database. Multiple processes can share the same table. For an ActivationSpec such as the preceding example, the CREATE TABLE command looks similar to the following:

CREATE TABLE SEQUENCING_HELPER 
(
TABLE_NAME VARCHAR2(32) NOT NULL,
LAST_READ_DATE DATE
)
;

Polling for Updates: In the preceding example, the polling is for new objects or updates, because every time an object is changed, the modified time is updated.

A sample trigger to set the modified time on every insert or update is as follows:

create trigger Employee_modified
before insert or update on Employee
for each row
begin
  :new.modified_date := sysdate;
end;

Using a Sequence Number: A sequence number can be used for either insert or update polling. Native sequencing returns monotonically increasing primary keys, as long as an increment by 1 is used. You can also use the sequence number of a materialized view log.
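
For example, a native sequence suitable for this strategy might be created as follows; the sequence name is illustrative only, and INCREMENT BY 1 with NOCACHE keeps the generated keys increasing in insertion order:

-- primary keys assigned from this sequence always increase over time
CREATE SEQUENCE MOVIES_SEQ
  START WITH 1
  INCREMENT BY 1
  NOCACHE;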

Update an External Sequencing Table on a Different Database

Choose this operation to employ the sequencing table: last updated strategy. This polling strategy involves using a helper table to remember a last_updated value. A last_updated value of 2005-01-01 12:45:01 000, for example, means that every record last updated at that time or earlier has already been processed. Because many tables have rows with a last_updated or creation_time column maintained by triggers or the application, this strategy can often be used for noninvasive polling. No fields on the processed row ever need to be modified by the Oracle Database Adapter.

This strategy is also called a nondestructive delete because no updates are made to the source rows, and a sequencing strategy such as the last_updated field can be used to order the rows in a sequence for processing. When the rows are ordered in a line, the Oracle Database Adapter knows which rows are processed and which are not with a single unit of information.

See Update a Sequencing Table for information about preconditions and configuration.

Update a Sequencing File

This strategy works similar to Update an External Sequencing Table on a Different Database, the only difference being that the control information is stored in a file instead of a table.

Control Table Strategy

Choose this operation to employ the control table polling strategy. This polling strategy involves using a control table to store the primary key of every row that has yet to be processed. With a natural join between the control table and the source table (by primary key), polling against the control table is practically the same as polling against the source table directly. However, an extra layer of indirection allows the following:

  • Destructive polling strategies such as the delete polling strategy can be applied to rows in the control table alone while shielding any rows in the source table.

  • Only rows that are meant to be processed have their primary key appear in the control table. Information that is not in the rows themselves can be used to control which rows to process (a good WHERE clause may not be enough).

  • The entire row is not copied to a control table, and any structure under the source table, such as detail rows, can also be raised without copying.

Streams and materialized view logs make good control tables.

Preconditions: You must have the create/alter triggers privilege on the source table. Table 9-10 describes the requirements for using the control table polling strategy.

Table 9-10 Control Table Polling Strategy Preconditions

Requirements Met:

  • Poll for inserts

  • Poll for updates

  • Poll for deletes

  • Poll for child updates

  • Minimal data replication (primary keys are stored in the control table)

  • No delete on source

  • No updates on source

  • No extra SQL selects

  • Concurrent polling

  • Allows raw SQL

  • Auditing

Conflicts With:

  • Advanced configuration: the native XML from the database will have a control header, and triggers are required.


Using triggers, whenever a row is modified, an entry is added to a control table, containing the name of the master table, and the primary keys. At design time, the control table is defined to be the root table, with a one-to-one mapping to the master table, based on the matching primary keys. The control table can contain extra control information, such as a time stamp, and operation type (INSERT, UPDATE, and so on).

The delete polling strategy is useful with this setup. It is important to keep the control table small, and if the option shouldDeleteDetailRows="false" is used, then only the control rows are deleted, giving you a nondestructive delete (the DELETE is not cascaded to the real tables).

It is possible to reuse the same control table for multiple master tables. In TopLink, you can map the same table to multiple descriptors by mapping the control table as one abstract class with multiple children. Each child has a unique one-to-one mapping to a different master table. The advantage of this approach is that you can specify for each child a class indicator field and value so that you do not need an explicit WHERE clause for each polling query.

The following are sample triggers for polling for changes both to a department table and any of its child employee rows:

CREATE OR REPLACE TRIGGER EVENT_ON_DEPT 
   AFTER INSERT OR UPDATE ON DEPARTMENT 
   REFERENCING NEW AS newRow 
   FOR EACH ROW 
   DECLARE X NUMBER;
BEGIN
   SELECT COUNT(*) INTO X FROM DEPT_CONTROL WHERE (DEPTNO = :newRow.DEPTNO);
   IF X = 0 then
   INSERT INTO DEPT_CONTROL VALUES (:newRow.DEPTNO);
   END IF;
END;
CREATE OR REPLACE TRIGGER EVENT_ON_EMPLOYEE
   AFTER INSERT OR UPDATE ON EMPLOYEE
   REFERENCING OLD AS oldRow NEW AS newRow
   FOR EACH ROW
   DECLARE X NUMBER;
BEGIN
   SELECT COUNT(*) INTO X FROM DEPT_CONTROL WHERE (DEPTNO = :newRow.DEPTNO);
   IF X = 0 then
   INSERT INTO DEPT_CONTROL VALUES (:newRow.DEPTNO);
   END IF;
   IF (:oldRow.DEPTNO <> :newRow.DEPTNO) THEN
      SELECT COUNT(*) INTO X FROM DEPT_CONTROL WHERE (DEPTNO = :oldRow.DEPTNO);
      IF (X = 0) THEN
         INSERT INTO DEPT_CONTROL VALUES (:oldRow.DEPTNO);
      END IF;
   END IF;
END;
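
The preceding triggers assume a control table keyed by the primary key of the master table. A minimal sketch of such a table, using the names from the trigger example, might look as follows; extra control columns such as a time stamp or operation type can be added if needed:

CREATE TABLE DEPT_CONTROL
(
DEPTNO NUMBER NOT NULL   -- primary key of the DEPARTMENT row that must be processed
)
;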

Use Cases for Polling Strategies

Polling strategies are demonstrated in the following tutorials:

  • PollingLogicalDeleteStrategy

  • PollingLastUpdatedStrategy

  • PollingLastReadIdStrategy

  • PollingControlTableStrategy

  • MasterDetail (for the physical delete polling strategy)

For these files, go to

http://www.oracle.com/technology/sample_code/products/adapters

Advanced Use Cases for Polling Strategies

Advanced polling strategies are demonstrated in the following tutorials:

  • DistributedPolling

  • PollingExternalSequencing

  • PollingFileSequencingStrategy

  • PollingForChildUpdates

  • PollingNoAfterReadStrategy

  • PollingOracleSCNStrategy

  • PollingPureSQLOtherTableInsert

  • PollingPureSQLSysdateLogicalDelete

  • PollingWithParameters

For these files, go to

http://www.oracle.com/technology/sample_code/products/adapters

9.5 Deployment

The Oracle Database Adapter is deployed to the application server as part of the install. The deployment includes a single adapter instance entry, eis/DB/SOADemo, which points to the data source jdbc/SOADataSource. The connection information for the database is inside the data source definition.

When deploying a SOA project that uses the Oracle Database Adapter, you may need to add a new adapter instance and restart the application server first. This could be because you want to point to a database other than the one referred to by jdbc/SOADataSource, or because you chose a name for the adapter instance that does not yet exist. For instance, if you create a connection in JDeveloper named Connection1, then by default the DB Adapter service points to eis/DB/Connection1, as shown in Figure 9-7.

You can also check which adapter instance the service is pointing to by looking at the db.jca file, as shown in the following code snippet:

<connection-factory location="eis/DB/Connection1" UIConnectionName="Connection1" adapterRef="" />

In the preceding example, the location is the JNDI name of the adapter instance at run time, and UIConnectionName is the name of the connection used in JDeveloper.

You can create a new DB Adapter instance through the Oracle WebLogic Server Administration Console, as mentioned in Section 2.19, "Adding an Adapter Connection Factory" or by directly editing the weblogic-ra.xml file. The following are the steps to edit weblogic-ra.xml:

  1. Search fmwhome/ for DbAdapter.rar.

  2. Unzip the file.

  3. Edit META-INF/weblogic-ra.xml (and possibly ra.xml.)

  4. Jar the file again.

  5. Restart the application server.

The following is a sample adapter instance in weblogic-ra.xml:

<connection-instance>
  <jndi-name>eis/DB/SOADemo</jndi-name>
  <connection-properties>
    <properties>
      <property>
        <name>xADataSourceName</name>
        <value>jdbc/SOADataSource</value>
      </property>
      <property>
        <name>dataSourceName</name>
        <value> </value>
      </property>
      <property>
        <name>platformClassName</name>
        <value>Oracle10Platform</value>
      </property>
    </properties>
  </connection-properties>
</connection-instance>

The four mandatory properties are: jndi-name, xADataSourceName, dataSourceName, and platformClassName. The jndi-name property must match the location attribute in the db.jca file, and is the name of the adapter instance. The xADataSourceName property is the name of the underlying data source (which has the connection information).

Most Common Mistakes

The following are the two most common mistakes with deployment:

  • Not creating an adapter instance entry that matches the location attribute in your db.jca file (or not creating one at all.)

  • Setting the location attribute in the db.jca file to the name of the data source directly.

For the latter, there is a level of indirection in that you give the name of the adapter instance (eis/DB/...), which itself points to the data source pool (jdbc/...). It is a common mistake to miss this indirection and give the name jdbc/... directly in the location attribute.
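
For example, with a JDeveloper connection named Connection1, the chain of references might look as follows. This sketch reuses the names from the preceding examples: the db.jca file refers only to the adapter instance, and only the adapter instance refers to the data source pool.

In db.jca:

<connection-factory location="eis/DB/Connection1" UIConnectionName="Connection1" adapterRef=""/>

In weblogic-ra.xml:

<connection-instance>
  <jndi-name>eis/DB/Connection1</jndi-name>
  <connection-properties>
    <properties>
      <property>
        <name>xADataSourceName</name>
        <value>jdbc/SOADataSource</value>
      </property>
    </properties>
  </connection-properties>
</connection-instance>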

Data Source Configuration

For the relevant Data Source configuration for your application server, see Section 9.6, "JDBC Driver and Database Connection Configuration." When configuring an Oracle data source, ensure that you use the thin XA option.

Additional Adapter Instance Properties

This section briefly describes additional properties in the DB Adapter instance beyond xADataSourceName, dataSourceName, and platformClassName. When adding a property to weblogic-ra.xml, you must ensure that the property is also declared in ra.xml (also in DbAdapter.rar). For example, the following is a code snippet of the ra.xml file for the property xADataSourceName in weblogic-ra.xml:

<config-property>
  <config-property-name>xADataSourceName</config-property-name>
  <config-property-type>java.lang.String</config-property-type>
  <config-property-value></config-property-value>
</config-property>

For information about the Oracle Database Adapter instance properties, see Appendix A, "Oracle Database Adapter Properties." Apart from the properties mentioned there, you can also add the properties listed in the following table:

Property Name Type
connectionString String
userName String
password String
minConnections Integer
maxConnections Integer
minReadConnections Integer
maxReadConnections Integer
dataSourceName String
driverClassName String
platformClassName String
sequencePreallocationSize Integer
tableQualifier String
usesBatchWriting Boolean
usesExternalConnectionPooling Boolean
usesExternalTransactionController Boolean
usesNativeSQL Boolean

The preceding properties appear in the oracle.toplink.sessions.DatabaseLogin object. See TopLink API reference information on DBConnectionFactory Javadoc and DatabaseLogin Javadoc at http://download.oracle.com/docs/cd/B10464_02/web.904/b10491/index.html.

Table 9-11 shows the database platform names. Set the platformClassName property to the value that matches your database.

Table 9-11 Database Platform Names

Database                      PlatformClassName

Oracle9+ (including 10g)      oracle.toplink.platform.database.Oracle9Platform
Oracle9+ (optional)           oracle.toplink.platform.database.Oracle9Platform
Oracle8                       oracle.toplink.platform.database.Oracle8Platform
Oracle7                       oracle.toplink.platform.database.OraclePlatform
DB2                           oracle.toplink.platform.database.DB2Platform
DB2 on AS400                  oracle.tip.adapter.db.toplinkext.DB2AS400Platform
Informix                      oracle.toplink.platform.database.InformixPlatform
SQLServer                     oracle.toplink.platform.database.SQLServerPlatform
MySQL                         oracle.toplink.platform.database.MySQL4Platform
Any other database            oracle.toplink.platform.database.DatabasePlatform


9.6 JDBC Driver and Database Connection Configuration

In this release, Oracle JCA Adapters are certified against the following third-party databases using Oracle WebLogic Server Type 4 JDBC drivers:

  • Microsoft SQL Server 2008

  • Sybase 15

  • Informix 11.5

Note:

Only major databases and versions are certified. Working with other databases should be feasible as long as they provide a working JDBC driver and you rely on core ANSI SQL relational features, such as Create, Read, Update, and Delete (CRUD) operations on tables and views. Issues tend to arise because not all JDBC drivers implement database metadata introspection in the same way. However, it should be possible to import matching tables on a certified database and then point to the uncertified database at run time. The information provided in this section for uncertified databases is meant as a guide only.

For more information, see the following topics:

9.6.1 Creating a Database Connection Using a Native or Bundled Oracle WebLogic Server JDBC Driver

To create a database connection when using a native or bundled Oracle WebLogic Server JDBC driver:

  1. Ensure that the appropriate JDBC driver JAR files are installed and set the class path.

    For more information, see:

  2. In the File menu, click New.

    The New Gallery page is displayed.

  3. In the All Technologies tab, under General categories, select Connections.

    A list of the different connections that you can make is displayed in the Items pane on the right side of the New Gallery page.

  4. Select Database Connection, and then click OK.

    The Create Database Connection page is displayed.

  5. For Create Connection In, select IDE Connections.

  6. Enter a name for this connection in the Connection Name field.

    For example, SQLServer.

  7. Select the appropriate driver from the Connection Type menu.

  8. Enter your credentials (such as user name, password, and role, if applicable).

  9. Enter your connection information.

    For example, jdbc:sqlserver://HOST-NAME:PORT;databaseName=DATABASE-NAME

    For more information, see:

  10. Click Test Connection.

  11. If the connection is successful, click OK.

9.6.2 Creating a Database Connection Using a Third-Party JDBC Driver

To create a database connection when using a third-party JDBC driver:

  1. Install the appropriate JDBC driver JAR files and set the class path.

    For more information, see Section 9.6.4, "Location of JDBC Driver JAR Files and Setting the Class Path".

  2. In the File menu, click New.

    The New Gallery page is displayed.

  3. In the All Technologies tab, under General categories, select Connections.

    A list of the different connections that you can make is displayed in the Items pane on the right side of the New Gallery page.

  4. Select Database Connection, and then click OK.

    The Create Database Connection page is displayed.

  5. For Create Connection In, select IDE Connections.

  6. Enter a name for this connection in the Connection Name field.

    For example, SQLServer.

  7. Select Generic JDBC from Connection Type.

  8. Enter your user name, password, and role information.

  9. Click New for Driver Class.

    The Register JDBC Driver dialog is displayed.

    Perform Steps 10, 11 and 19 in the Register JDBC Driver dialog.

  10. Enter the driver class name (for example, some.jdbc.Driver) in the Driver Class field.

    For example, com.microsoft.sqlserver.jdbc.SQLServerDriver.

  11. Click Browse for Library.

    The Select Library dialog is displayed.

    Perform Steps 12 and 18 in the Select Library dialog.

  12. Click New to create a new library.

    The Create Library dialog is displayed.

    Perform Steps 13 through 17 in the Create Library dialog.

  13. Specify a name in the Library Name field.

    For example, SQL Server JDBC.

  14. Click Class Path, and then click Add Entry to add each JAR file of your driver to the class path.

    The Select Path Entry dialog is displayed.

  15. Select a JDBC class file and click Select.

    For example, select sqljdbc.jar.

  16. Click OK when you have added all the class files to the Class Path field.

  17. Click OK to exit the Create Library dialog.

  18. Click OK to exit the Select Library dialog.

  19. Click OK to exit the Register JDBC Driver dialog.

  20. Enter your connection string name for JDBC URL and click Next.

    For example, jdbc:sqlserver://HOST-NAME:PORT;databaseName=DATABASE-NAME

    For more information, see:

  21. Click Test Connection.

  22. If the connection is successful, click OK.

9.6.3 Summary of Third-Party JDBC Driver and Database Connection Information

Table 9-12, "Database Driver Selection (from Weblogic Server Console)" summarizes the connection information for common third-party databases.

For information about PlatformClassName, see Table 9-11, "Database Platform Names".

For more information, see:

Table 9-12 Database Driver Selection (from Weblogic Server Console)

Database JDBC Driver

Microsoft SQL Server

  • Oracle's MS SQL Server Driver (Type 4 XA)

  • Oracle's MS SQL Server Driver (Type 4)

Sybase

  • Oracle's Sybase Driver (Type 4 XA)

  • Oracle's Sybase Driver (Type 4)

Informix

  • Oracle's Informix Driver (Type 4 XA)

  • Oracle's Informix Driver (Type 4)

IBM DB2

  • Oracle's DB2 Driver (Type 4 XA)

  • Oracle's DB2 Driver (Type 4)

MySQL

MySQL's Driver (Type 4)

Versions: using com.mysql.jdbc.Driver


9.6.3.1 Using a Microsoft SQL Server

You must note the following when connecting to a SQL Server database:

  • User name and password

    • SQL Server 2005 installs with Windows authentication as the default. In that mode you do not log in with a user name and password; your Windows user account either has privileges or does not. However, JDBC requires you to provide a user name and password.

  • Connect string

    From the sqlcmd login, you can deduce what your connect string is, as in the following examples:

    Example 1:

    sqlcmd
    1>
    jdbc:microsoft:sqlserver://localhost:1433
    

    Example 2:

    sqlcmd -S user.mycompany.com\SQLExpress
    1>
    jdbc:microsoft:sqlserver://user.mycompany.com\SQLExpress:1433
    

    Example 3:

    sqlcmd -S user.mycompany.com\SQLExpress -d master
    1>
    jdbc:microsoft:sqlserver://user.mycompany.com\SQLExpress:1433;databasename=master
    

    A full URL is as follows:

    jdbc:microsoft:sqlserver://serverName[\instanceName]:tcpPort[;SelectMethod=cursor][;databasename=databaseName]
    
  • Database name

    If you must explicitly supply the database name, but do not know it, go to

    C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data
    

    If you see a file named master.mdf, then one of the database names is master.

  • TCP port

    Ensure that SQL Server Browser is running and that your SQL Server service has TCP/IP enabled and is listening on static port 1433. Disable dynamic ports. In SQL Native Client Configuration/Client Protocols, ensure that TCP/IP is enabled and that the default port is 1433.

  • JDBC drivers

    You must download the JDBC drivers separately. From www.microsoft.com, click Downloads and search on jdbc. You can also try using the DataDirect driver.

9.6.3.2 Using a Sybase Database

This section includes the following topics:

9.6.3.2.1 Using a Sybase JConnect JDBC Driver

URL: jdbc:sybase:Tds:SERVER-NAME:PORT/DATABASE-NAME

Driver Class: com.sybase.jdbc.SybDriver

Driver Jar: jConnect-6_0\classes\jconn3.jar

For information about the Sybase JConnect JDBC driver, refer to the following link:

http://www.sybase.com/products/middleware/jconnectforjdbc.

9.6.3.3 Using an Informix Database

This section includes the following topics:

9.6.3.3.1 Using an Informix JDBC Driver

URL: jdbc:informix-sqli://HOST-NAME-OR-IP:PORT-OR-SERVICE-NAME/DATABASE-NAME:INFORMIXSERVER=SERVER-NAME

Driver Class: com.informix.jdbc.IfxDriver

Driver Jar: ifxjdbc.jar

For information about the Informix JDBC driver, refer to the following link:

http://www-01.ibm.com/software/data/informix/tools/jdbc/.

9.6.3.4 Using an IBM DB2 Database

This section includes the following topics:

9.6.3.4.1 IBM DB2 Driver

URL: jdbc:db2:localhost:NAME

Driver Class: com.ibm.db2.jcc.DB2Driver

Driver Jar (v8.1): db2jcc.jar, db2jcc_javax.jar, db2jcc_license_cu.jar

For information about DataDirect driver, refer to the following link:

http://www.datadirect.com/techres/jdbcproddoc/index.ssp

9.6.3.4.2 JT400 Driver (AS400 DB2)

URL: jdbc:as400://hostname;translate binary=true

Driver Class: com.ibm.as400.access.AS400JDBCDriver

Driver Jar: jt400.jar

For correct character set translation, use translate binary=true.

9.6.3.4.3 IBM Universal Driver

URL: jdbc:db2://hostname:port/schemaname

Driver Class: com.ibm.db2.jcc.DB2Driver

Driver Jar: db2jcc.jar, db2jcc4.jar and db2java.zip

9.6.3.5 Using a MySQL Database

Use the following information:

URL: jdbc:mysql://hostname:3306/dbname

Driver Class: com.mysql.jdbc.Driver

Driver Jar: mysql-connector-java-3.1.10-bin.jar

9.6.4 Location of JDBC Driver JAR Files and Setting the Class Path

This section describes the location of JDBC JAR files and setting the class path at run time and design time.

Run Time

For both Windows and Linux, you must perform the following steps:

  1. Drop the vendor-specific driver JAR files into the user_projects/domains/soainfra/lib directory.

  2. Drop the vendor-specific driver JAR files into <Weblogic_Home>/server/lib.

  3. Edit the class path to include the vendor-specific JAR file in <Weblogic_Home>/common/bin/commEnv.sh.

Design Time

For both Windows and Linux, drop the JDBC JAR to the Oracle/Middleware/jdeveloper/jdev/lib/patches directory.

9.7 Stored Procedure and Function Support

This section describes how the Oracle Database Adapter supports the use of stored procedures and functions.

This section includes the following topics:

9.7.1 Design Time: Using the Adapter Configuration Wizard

The Adapter Configuration Wizard – Stored Procedures is used to generate an adapter service WSDL and the necessary XSD. The adapter service WSDL encapsulates the underlying stored procedure or function as a Web service with a WSIF JCA binding. The XSD file describes the procedure or function, including all the parameters and their types. This XSD provides the definition used to create instance XML that is submitted to the Oracle Database Adapter at run time.

This section includes the following topics:

9.7.1.1 Using Top-Level Standalone APIs

This section describes how to use the Adapter Configuration Wizard with APIs that are not defined in PL/SQL packages. You use the Adapter Configuration Wizard – Stored Procedures to select a procedure or function and generate the XSD file. See Section 9.8, "Oracle Database Adapter Use Cases" if you are not familiar with how to start the Adapter Configuration Wizard.

The following are the steps to select a stored procedure or function by using the Adapter Configuration Wizard:

  1. Drag and drop Database Adapter from the Service Adapters list to the Exposed Services swim lane in the composite.xml page.

    The Adapter Configuration Wizard is displayed, as shown in Figure 9-29.

    Figure 9-29 The Adapter Configuration Wizard

    Description of Figure 9-29 follows
    Description of "Figure 9-29 The Adapter Configuration Wizard"

  2. Click Next. The Service Name page is displayed, as shown in Figure 9-30.

    Note:

    The names of stored procedures or packages that refer to database or user-defined data types must not include the $ character. The presence of $ in the name causes XSD file generation to fail.

    Figure 9-30 Specifying the Service Name

    Description of Figure 9-30 follows
    Description of "Figure 9-30 Specifying the Service Name"

  3. In the Service Name field, enter a service name, and then click Next. The Service Connection page is displayed.

    You associate a connection with the service, as shown in Figure 9-31. A database connection is required to configure the adapter service. Select an existing connection from the list or create a new connection.

    Figure 9-31 Setting the Database Connection in the Adapter Configuration Wizard

    Description of Figure 9-31 follows
    Description of "Figure 9-31 Setting the Database Connection in the Adapter Configuration Wizard"

  4. Click Next. The Operation Type page is displayed.

  5. For the Operation Type, select Call a Stored Procedure or Function, as shown in Figure 9-32.

    Figure 9-32 Calling a Stored Procedure or Function in the Adapter Configuration Wizard

    Description of Figure 9-32 follows
    Description of "Figure 9-32 Calling a Stored Procedure or Function in the Adapter Configuration Wizard"

  6. Click Next. The Specify Stored Procedure page is displayed, as shown in Figure 9-33. This is where you specify a stored procedure or function.

    Figure 9-33 The Specify Stored Procedure Page

    Description of Figure 9-33 follows
    Description of "Figure 9-33 The Specify Stored Procedure Page"

  7. Next, you select the schema and procedure or function. You can select a schema from the list or select <Default Schema>, in which case the schema associated with the connection is used. If you know the procedure name, enter it in the Procedure field. If the procedure is defined inside a package, then you must include the package name, as in EMPLOYEE.GET_NAME.

    If you do not know the schema and procedure names, click Browse to access the Stored Procedures window, as shown in Figure 9-34.

    Figure 9-34 Searching for a Procedure or Function

    Description of Figure 9-34 follows
    Description of "Figure 9-34 Searching for a Procedure or Function"

    Select a schema from the list or select <Default Schema>. A list of the available procedures is displayed in the left window. To search for a particular API in a long list of APIs, enter search criteria in the Search field. For example, to find all APIs that begin with XX, enter XX% and click the Search button. Clicking the Show All button displays all available APIs.

    Figure 9-35 shows how you can select the FACTORIAL function. The Arguments tab displays the parameters of the function, including their names, type, mode (IN, IN/OUT or OUT) and the numeric position of the parameter in the definition of the procedure. The return value of a function has no name and is always an OUT parameter at position zero (0).

    Figure 9-35 Viewing the Arguments of a Selected Procedure

    Description of Figure 9-35 follows
    Description of "Figure 9-35 Viewing the Arguments of a Selected Procedure"

    Figure 9-36 shows how the Source tab displays the code that implements the function. Text that matches the name of the function is highlighted.

    Figure 9-36 Viewing the Source Code of a Selected Procedure

    Description of Figure 9-36 follows
    Description of "Figure 9-36 Viewing the Source Code of a Selected Procedure"

  8. Click OK after selecting a procedure or function. Information about the API is displayed, as shown in Figure 9-37. Click Back or Browse to make revisions.

    Figure 9-37 Viewing Procedure or Function Details in the Adapter Configuration Wizard

    Description of Figure 9-37 follows
    Description of "Figure 9-37 Viewing Procedure or Function Details in the Adapter Configuration Wizard"

  9. Click Next. If the stored procedure or function has an output parameter of type row set (REF CURSOR on Oracle Database), as Figure 9-38 shows, you can define a strongly or weakly typed XSD for this ref cursor.

    Figure 9-38 Viewing Procedure or Function Details in the Adapter Configuration Wizard: Row Set Type

    Description of Figure 9-38 follows
    Description of "Figure 9-38 Viewing Procedure or Function Details in the Adapter Configuration Wizard: Row Set Type"

    For more information, see:

  10. Click Next. The Advanced Options page is displayed, as shown in Figure 9-39. Enter any advanced options, such as the JDBC QueryTimeout value. Other options include retry parameters, such as the number of retry attempts and the interval between them.

    Figure 9-39 The Advanced Options Page

    Description of Figure 9-39 follows
    Description of "Figure 9-39 The Advanced Options Page"

  11. After specifying all options, click Next, and then click Finish to complete the Adapter Configuration Wizard.

    When you have finished using the Adapter Configuration Wizard, three files are added to the existing project:

    • servicename.wsdl (for example, Factorial.wsdl)

    • service_name_db.jca (for example, Factorial_db.jca)

    • schema_package_procedurename.xsd (for example, SCOTT_FACTORIAL.xsd)

9.7.1.2 Using Packaged APIs and Overloading

Using APIs defined in packages is similar to using standalone APIs. The only difference is that you can expand the package name to see a list of all the APIs defined within the package, as shown in Figure 9-40.

APIs that have the same name but different parameters are called overloaded APIs. As shown in Figure 9-40, the package called PACKAGE has two overloaded procedures called OVERLOAD.

Figure 9-40 A Package with Two Overloaded Procedures

Description of Figure 9-40 follows
Description of "Figure 9-40 A Package with Two Overloaded Procedures"

As Figure 9-41 shows, the code for the entire PL/SQL package is displayed, regardless of which API from the package is selected when you view the Source tab. Text that matches the name of the procedure is highlighted.

Figure 9-41 Viewing the Source Code of an Overloaded Procedure

Description of Figure 9-41 follows
Description of "Figure 9-41 Viewing the Source Code of an Overloaded Procedure"

After you select a procedure or function and click OK, information about the API is displayed, as shown in Figure 9-42. The schema, procedure name, and a list of arguments are displayed. Note how the procedure name is qualified with the name of the package (PACKAGE.OVERLOAD). Click Back or Browse to make revisions, or click Next to continue. Enter values for any of the advanced options, click Next, and then click Finish to complete the wizard.

Figure 9-42 Viewing Procedure or Function Details in the Adapter Configuration Wizard

Description of Figure 9-42 follows
Description of "Figure 9-42 Viewing Procedure or Function Details in the Adapter Configuration Wizard"

When you have finished using the Adapter Configuration Wizard, the following files are added to the existing project:

  • Overload.wsdl, Overload_db.jca

  • SCOTT_PACKAGE_OVERLOAD_2.xsd.

    The numeric suffix _2 appended to the procedure name in the XSD file name differentiates the overloaded APIs; each overloaded API is assigned a distinct numeric index.

9.7.2 Design Time: Using the Command-Line Utility

The command-line utility is invoked with a properties file that provides information for generating the necessary BPEL artifacts. The Adapter Configuration Wizard supports Oracle Database, IBM DB2, AS/400, Microsoft SQL Server 2005, and MySQL v5.2.6 or higher, so using the command-line utility is not necessary for those databases. The utility continues to support all of its current databases, but it is now required only for Microsoft SQL Server 2000 and versions of MySQL before v5.2.6. The command-line utility does not support AS/400.

For more information on Oracle JCA Adapters support for third-party JDBC drivers and databases, see Section 9.6, "JDBC Driver and Database Connection Configuration".

This section includes the following topics:

9.7.2.1 Common Command-Line Functionality

The command-line utility provides some functionality that is common to all of the supported databases. The following properties are shared by the supported databases:

ProductName

This is the name of the database.

Database Name Supported Database
IBM DB2 IBM DB2 v8.x and v9.x
Microsoft SQL Server SQL Server 2000 or 2005
MySQL MySQL v5.6

DriverClassName

This is the name of the JDBC Driver Class.

Database Name JDBC Driver
IBM DB2 com.ibm.db2.jcc.DB2Driver
Microsoft SQL Server com.microsoft.sqlserver.jdbc.SQLServerDriver
MySQL com.mysql.jdbc.Driver

ConnectionString

This is the JDBC Connection URL.

Database Name Connection String
IBM DB2 jdbc:db2://hostname:port/database-name
Microsoft SQL Server jdbc:sqlserver://hostname:port;databaseName=name
MySQL jdbc:mysql://host:port/database-name

Username

This is the database user name.

Password

This is the password associated with the user name.

ProcedureName

This is the name of the stored procedure or the function.

ServiceName

This is the service name for the desired operation.

DatabaseConnection

This is the JNDI name of the connection. For example, eis/DB/<DatabaseConnection>.

Destination

This is the destination directory for the generated files. For example, C:\Temp.

Parameters

The parameters of the stored procedure (for versions of MySQL before 5.2.6 only.)

QueryTimeout

The JDBC query timeout value (in seconds.) The QueryTimeout property specifies the maximum number of seconds that the JDBC driver should wait for the specified stored procedure or function to execute. When the threshold is exceeded, SQLException is thrown. If the value is zero, then the driver waits indefinitely.

Templates are used by the command-line utility when generating the service WSDL, the JCA binding file, and the XSD file for the desired operation. The values of the properties provided in the properties file are substituted into these templates to produce the generated artifacts.

9.7.2.2 Generated Output

The command-line utility generates three files, an XSD file representing the signature of the chosen API, an adapter service WSDL, and a JCA binding file. The name of the generated WSDL is derived from the value of the ServiceName property, for example, <ServiceName>.wsdl. The name of the JCA binding file is also derived from the value of the ServiceName property, for example, <ServiceName>_db.jca. The name of the XSD file is derived from the ProcedureName property and any other property values that are related to the qualification of the stored procedure or function name. These properties and their values are specific to the database. For example, the schema and package name for Oracle database.

The contents of the generated service WSDL and the JCA binding files are derived using templates and the values of the required properties. The contents of the XSD file are derived from the qualified procedure name and a data type mapping table specific to the database in use. The elements in the XSD file that represent parameters of the stored procedure have the same number and type of attributes as those that are generated using the Adapter Configuration Wizard.

9.7.2.3 Supported Third-Party Databases

The command-line utility can be used to generate the required service artifacts for IBM DB2 v8.x, v9.x; Microsoft SQL Server 2000 and 2005; and MySQL v5.x, v6.x. The data types and properties that are supported vary for each database. The BLOB and CLOB data types are not supported for use by stored procedures on Informix 11.5.

Note:

The command-line utility does not support XA drivers for generating schemas and WSDLs for stored procedures and functions for non-Oracle databases such as Microsoft SQL Server and IBM DB2. You must use non-XA drivers for the same.

This section includes the following topics:

9.7.2.3.1 Microsoft SQL Server 2000 and 2005

The Adapter Configuration Wizard can be used to access stored procedures and functions on Microsoft SQL Server 2005 using catalog tables in the INFORMATION_SCHEMA schema. Thus, using the command-line utility is no longer necessary for SQL Server 2005. However, you must continue to use the command-line utility for SQL Server 2000.

When using the command-line utility, a property is required to indicate that the chosen API is a function as opposed to a procedure. The following table lists the additional properties used with Microsoft SQL Server:

Property Description
IsFunction Determines whether the API is a function or procedure.
DatabaseName The name of the database where the API is defined.
SchemaName The name of the schema to which the procedure belongs.

The IsFunction property is optional. The default value is false. If the API is a procedure, then this property need not be specified. If the API is a function or a procedure that returns a value, then this property must be set to true, True, or TRUE. The following is an example of a procedure that returns a value:

1> create procedure ... as begin ...;return 1; end
2> go

If the IsFunction property is omitted, or if its value is not set, or if it is set to FALSE or a value other than TRUE, then the return value is not included in the generated XML after the API executes.

SchemaName and DatabaseName are optional properties. They are used to further qualify the stored procedure. For example, <DatabaseName>.<SchemaName>.<ProcedureName>.
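
Putting these properties together, a properties file for a SQL Server API might look like the following sketch, modeled on the IBM DB2 and MySQL samples shown later in this section; the host, port, database, user, and procedure values are placeholders:

ProductName=Microsoft SQL Server
DriverClassName=com.microsoft.sqlserver.jdbc.SQLServerDriver
ConnectionString=jdbc:sqlserver://<host>:<port>;databaseName=<database>
Username=<username>
Password=<password>
DatabaseName=<database>
SchemaName=dbo
ProcedureName=<procedure>
IsFunction=true
ServiceName=SQLServerDemoService
DatabaseConnection=sqlserver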

Table 9-13 lists the supported data types for SQL Server stored procedures and functions:

Table 9-13 Data Types for SQL Server Stored Procedures and Functions

SQL Data Type                                            XML Schema Type

BIGINT                                                   long
BINARY, IMAGE, TIMESTAMP, VARBINARY                      base64Binary
BIT                                                      boolean
CHAR, SQL_VARIANT, SYSNAME, TEXT, UNIQUEIDENTIFIER,
  VARCHAR, XML (2005 only)                               string
DATETIME, SMALLDATETIME                                  dateTime
DECIMAL, MONEY, NUMERIC, SMALLMONEY                      decimal
FLOAT, REAL                                              float
INT                                                      int
SMALLINT                                                 short
TINYINT                                                  unsignedByte


Besides the data types mentioned in the preceding table, alias data types are also supported. Alias data types are created by using the sp_addtype database engine stored procedure or the CREATE TYPE Transact-SQL statement (SQL Server 2005 only). The CREATE TYPE statement is the preferred method for creating alias data types; the use of sp_addtype is being deprecated.

If the data type of a parameter in a stored procedure is defined using an alias data type, then the underlying base SQL data type is determined, and this data type and its mappings are used for the parameter's element in the generated XSD. Consider the following example:

1>create type myint from int
2>go
3>create procedure aliastype @x myint as …
4>go

In the preceding example, the type of parameter @x in the procedure is the alias type myint. The underlying database SQL data type of myint is INT, whose data type mapping is used for the parameter's element in the generated XSD:

<element name="x" type="int" … db:type="INT" … />

Structured data types (user-defined) were introduced in SQL Server 2005. The command-line utility, however, does not support them. Therefore, structured data types may not be used as the data type of a parameter in a stored procedure or function.

Note:

To generate WSDLs and schemas for Microsoft SQL Server 2000 and 2005, use the product name Microsoft SQL Server.

In the Adapter Configuration Wizard, <Default Schema> refers to the database name specified as the databaseName property in the JDBC Connection URL. If the databaseName property is not specified in the JDBC Connection URL, then <Default Schema> is the default database of the user connecting to SQL Server. The <Default Schema> is also equivalent to the value of the DatabaseName property used by the command-line utility. You can click <Default Schema> to select a different database. A SQL Server "database" is essentially an Oracle "schema."

The <schema> is the default schema associated with the Username specified when a connection to SQL Server is created in the Create Database Connection window. The dbo schema is usually the default schema. The value of <schema> is also equivalent to the SchemaName property used by the command-line utility. A SQL Server "schema" is essentially an Oracle PL/SQL "package."

Refer to the tutorial about Integrating with Microsoft SQL Server to get an understanding of how to effectively use the command-line utility for Microsoft SQL Server. This tutorial also provides details about the CLASSPATH required by the utility, and is available at the following location, as Adapter Tutorial 31:

http://adapters-lab2.us.oracle.com/portal/page/portal/Adapterportal/collateral/tutorials

9.7.2.3.2 IBM DB2 v8.x and v9.x

The Adapter Configuration Wizard supports IBM DB2, so using the command-line utility is no longer necessary. The DB2 implementation is based on queries against catalog tables in the SYSCAT schema. The Adapter Configuration Wizard tries to determine whether the SYSCAT.ROUTINES table exists. If it does, then the Adapter Configuration Wizard queries tables in the SYSCAT schema. Therefore, if your database contains this schema, then the Adapter Configuration Wizard and the adapter run time should both work. Although no longer needed, the command-line utility can still be used to generate the BPEL artifacts; however, using the Adapter Configuration Wizard is the preferred approach.

When using the command-line utility, IBM DB2 utilizes an additional property to further qualify the stored procedure. Note that only SQL stored procedures are supported. EXTERNAL procedures and user-defined functions are not supported.

In IBM DB2 the SchemaName property is mandatory. SchemaName is the name of the user who created the procedure. It is used by the command-line utility to qualify the stored procedure so that it can verify that the procedure exists. For example, a procedure such as <SchemaName>.<ProcedureName> created in the <database> specified in the <ConnectionString>.

The following is a sample properties file for IBM DB2:

ProductName=IBM DB2
DriverClassName=com.ibm.db2.jcc.DB2Driver
ConnectionString=jdbc:db2://<server>:<port>/<database>
Username=<username>
Password=<password>
SchemaName=<username>
ProcedureName=<procedure>
ServiceName=MyDB2Service
DatabaseConnection=db2

Table 9-14 lists the supported data types for DB2 SQL stored procedures:

Note:

To generate WSDLs and schemas for IBM DB2, use the product name IBM DB2.

Table 9-14 Data Types for DB2 SQL Stored Procedures

SQL Data Type                                            XML Schema Type

BIGINT                                                   long
BLOB, CHAR FOR BIT DATA, VARCHAR FOR BIT DATA            base64Binary
CHARACTER, CLOB, VARCHAR                                 string
DATE, TIME, TIMESTAMP                                    dateTime
DECIMAL                                                  decimal
DOUBLE                                                   double
INTEGER                                                  int
REAL                                                     float
SMALLINT                                                 short


Note that the names of other data types are also supported implicitly. For example, NUMERIC is equivalent to DECIMAL (as are DEC and NUM).

Distinct data types are also supported for SQL stored procedures. These are similar to alias data types in SQL Server. They are created using the CREATE DISTINCT TYPE statement, as shown in the following example:

db2 => create distinct type myint as integer with comparisons
db2 => create procedure distincttype(in x myint, …) begin … end

The underlying database SQL data type of myint is INTEGER, whose data type mapping is used for parameter x in the stored procedure.

<element name="X" type="int" … db:type="INTEGER" … />

IBM DB2 supports structured data types (user-defined). However, there is no support for these types in the JDBC drivers. Consequently, a structured data type may not be used as the data type of a parameter in a stored procedure. IBM DB2 also supports user-defined functions. The adapter, however, does not support these functions.

In the Adapter Configuration Wizard, stored procedures are grouped by database user. Note that a schema in IBM DB2 is equivalent to a schema in Oracle. Both represent the name of a database user.

For IBM DB2, <Default Schema> refers to the current database user. The database user name is equivalent to the SchemaName property used by the command-line utility.

Click <Default Schema> to select a different database user. The stored procedures in the Browse page are those that the database user created in the database specified as <database> in the JDBC Connection URL.

The Adapter Configuration Wizard does not support changing to a different database.

Select the stored procedure in the Stored Procedures dialog, as shown in Figure 9-43. The arguments are shown in the Arguments tab. Click Search to find database stored procedures that the user created in the specified database. For example, 'd%' or 'D%' would both find the DEMO stored procedure. Clicking Show All reveals all of the procedures that the current user created in the specified database.

Figure 9-43 The Stored Procedures Dialog

Description of Figure 9-43 follows
Description of "Figure 9-43 The Stored Procedures Dialog"

You can view the source code of the stored procedure by clicking the Source tab, as shown in Figure 9-44.

Figure 9-44 The Source Code of the Stored Procedure

Description of Figure 9-44 follows
Description of "Figure 9-44 The Source Code of the Stored Procedure"

Refer to Adapter Tutorial 30 about Integration with IBM DB2 to get an understanding of how to effectively use the command-line utility for IBM DB2. This tutorial also provides details about the CLASSPATH required by the utility, and is available at the following location:

http://adapters-lab2.us.oracle.com/portal/page/portal/Adapterportal/collateral/tutorials

9.7.2.3.3 IBM DB2 AS/400

The command-line utility does not support AS/400; because the Adapter Configuration Wizard supports it, the utility is not needed. The adapter supports SQL and EXTERNAL stored procedures. Source code can be viewed in the Adapter Configuration Wizard for SQL procedures only, not for EXTERNAL procedures. The adapter does not support user-defined functions. The process for selecting a procedure on DB2 AS/400 is the same as for DB2.

Table 9-15 lists the supported data types for IBM DB2 AS/400 stored procedures:

Table 9-15 Data Types for IBM DB2 AS/400 Stored Procedures

SQL Data Type                                            XML Schema Type

BINARY, BINARY LARGE OBJECT, BINARY VARYING              base64Binary
CHARACTER, CHARACTER LARGE OBJECT, CHARACTER VARYING     string
DATE, TIME, TIMESTAMP                                    dateTime
DECIMAL, NUMERIC                                         decimal
DOUBLE PRECISION                                         double
BIGINT                                                   long
INTEGER                                                  int
REAL                                                     float
SMALLINT                                                 short


Distinct types are also supported for data types that are created using the CREATE DISTINCT TYPE statement. These data types work in the same way as they do in IBM DB2.

Note that the IBM DB2 AS/400 implementation is based on queries from catalog tables in the QSYS2 schema. The adapter tries to determine whether the QSYS2.SCHEMATA table exists. If it does, then the Adapter Configuration Wizard queries tables in the QSYS2 schema. Therefore, if your IBM DB2 AS/400 database supports the QSYS2 schema, then the Adapter Configuration Wizard and the adapter run time should both work.

Note that the Adapter Configuration Wizard checks the SYSCAT schema first, and then the QSYS2 schema. The adapter does not support the catalog tables in the SYSIBM schema.

9.7.2.3.4 MySQL

The Adapter Configuration Wizard can be used to access stored procedures on MySQL v5.6 or later using catalog tables in the INFORMATION_SCHEMA schema. Versions of MySQL before v5.6 lack a PARAMETERS table in the INFORMATION_SCHEMA schema. You must continue to use the command-line utility when the PARAMETERS table is missing.

Without a PARAMETERS table, the MySQL database does not provide any information about the parameters of a stored procedure. It is therefore necessary to supply this information using a required property in the properties file. The Parameters property contains the signature of the stored procedure.

Property Description
IsFunction Determines whether the API is a function or a procedure
SchemaName The name of the database where the API is defined
Parameters The parameters of the stored procedure

The value of the Parameters property is a comma-delimited list of parameters, each of which has the following syntax:

Parameter ::= {IN | INOUT | OUT} Parameter_Name SQL_Datatype

Note that all three elements of a parameter definition are required.

Consider the following MySQL stored procedure:

CREATE PROCEDURE demo
(IN x VARCHAR (10), INOUT y INT, OUT z CHAR (20))
BEGIN
...
END

The Parameters property must be specified as shown in the following example:

Parameters=IN x VARCHAR (10), INOUT y INT, OUT z CHAR (20)

The generated XSD for the stored procedure is invalid unless the parameters are specified correctly in the parameters property. The following is a sample of a properties file for MySQL:

ProductName=MySQL
DriverClassName=com.mysql.jdbc.Driver
ConnectionString=jdbc:mysql://<host>:<port>/<database>
Username=<username>
Password=<password>
SchemaName=<database>
ProcedureName=demo
Parameters=IN x VARCHAR(10),INOUT y INT,OUT z CHAR(20)
ServiceName=MySQLDemoService
DatabaseConnection=mysql

Note:

For MySQL, the SchemaName, Parameters, and IsFunction properties are all required properties.

Table 9-16 lists the supported data types for MySQL stored procedures:

Table 9-16 Data Types for MySQL Stored Procedures

SQL Data Type                                              XML Schema Type

BINARY, BLOB, LONGBLOB, MEDIUMBLOB, TINYBLOB, VARBINARY    base64Binary
BOOLEAN                                                    boolean
CHAR, LONGTEXT, MEDIUMTEXT, TEXT, TINYTEXT, VARCHAR        string
DATE, DATETIME, TIMESTAMP                                  dateTime
DECIMAL, NUMERIC, REAL                                     decimal
DOUBLE                                                     double
FLOAT                                                      float
TINYINT                                                    byte
TINYINT UNSIGNED                                           unsigned_byte
SMALLINT                                                   short
SMALLINT UNSIGNED                                          unsigned_short
INTEGER, INT, MEDIUMINT                                    int
INTEGER UNSIGNED, INT UNSIGNED, MEDIUMINT UNSIGNED         unsigned_int
BIGINT                                                     long
BIGINT UNSIGNED                                            unsigned_long


The character length for any SQL data type that corresponds to the XML schema type string can be specified using the '(#)' notation in the Parameters property, for example, VARCHAR (20). Specifying the length of any other SQL data type has no effect.

UNSIGNED integer data types can be used with the command-line utility. However, the INFORMATION_SCHEMA.PARAMETERS table does not provide information that indicates that an integer data type is UNSIGNED. Therefore, UNSIGNED integer data types are treated as though they were SIGNED integer data types when using the Adapter Configuration Wizard.

Stored procedures in MySQL are grouped by database specified by <database> in the JDBC Connection URL or as the SchemaName property used by the command-line utility. For MySQL, <Default Schema> refers to the database that the user is connected to (usually specified in the JDBC connection URL.) It is equivalent to the SchemaName property used by the command-line utility. Click <Default Schema> to select a different database. Click Search to search for specific stored procedures in the current database specified in the JDBC Connection URL. For example, 'd%' or 'D%' would both find stored procedures beginning with 'd' or 'D.' Click Show All to reveal all procedures in the current database.

9.7.2.4 Creating Database Connections

Database connections must be created in JDeveloper in order to access catalog tables necessary for the Adapter Configuration Wizard to work.

The following are the steps to create a database connection by using JDeveloper:

  1. Select Database Navigator from View.

  2. Right-click the application name, then click New followed by Connections. Select Database Connection.

    The Create Database Connection page is displayed, as shown in Figure 9-45.

    Figure 9-45 The Create Database Connection

    Description of Figure 9-45 follows
    Description of "Figure 9-45 The Create Database Connection"

  3. Enter a connection name in the Connection Name field. For example, sqlserver.

  4. Select Generic JDBC as the Connection Type from the Connection Type list.

  5. Enter your Username, Password, and role information.

  6. Click New for Driver Class. The Register JDBC Driver dialog is displayed, as shown in Figure 9-46.

    Figure 9-46 The Register JDBC Driver Dialog

    Description of Figure 9-46 follows
    Description of "Figure 9-46 The Register JDBC Driver Dialog"

  7. Enter the Driver Class (for example, com.microsoft.sqlserver.jdbc.SQLServerDriver).

  8. Create a new library or edit an existing one by using the following steps:

    1. Click Browse in the Register JDBC Driver dialog.

      The Select Library dialog is displayed, as shown in Figure 9-47.

      Figure 9-47 The Select Library Dialog

      Description of Figure 9-47 follows
      Description of "Figure 9-47 The Select Library Dialog"

    2. Select an existing library or click New to create a new one.

      The Create Library dialog is displayed.

    3. Enter a library name, for example, SQL Server JDBC.

    4. Click Add Entry to add the JDBC JAR files to the class path.

    5. Click OK twice to exit the Create Library and Select Library dialogs.

    6. Click OK to exit the Register JDBC Driver dialog.

  9. Enter your connection string name for JDBC URL.

  10. Click Test Connection.

  11. If the connection is successful, a screen as shown in Figure 9-48 is displayed.

    Figure 9-48 The Create Database Connection Dialog

    Description of Figure 9-48 follows
    Description of "Figure 9-48 The Create Database Connection Dialog"

  12. Click OK followed by Finish.

9.7.3 Design Time: Artifact Generation

The Adapter Configuration Wizard – Stored Procedures is capable of creating a WSDL file and a valid XSD file that describes the signature of a stored procedure or function. The following sections describe the relevant structure and content of both the WSDL and the XSD files, and their relationship with each other.

This section includes the following topics:

9.7.3.1 The WSDL–XSD Relationship

In the paragraphs that follow, the operation name, Factorial, and procedure name, Factorial, are taken from an example cited previously (see Figure 9-37). The generated WSDL imports the XSD file.

<types>
  <schema xmlns="http://www.w3.org/2001/XMLSchema">
    <import namespace="http://xmlns.oracle.com/pcbpel/adapter/db/SCOTT/FACTORIAL/"
      schemaLocation="xsd/SCOTT_FACTORIAL.xsd"/>
  </schema>
</types>

The namespace is derived from the schema, package, and procedure name, and appears as the targetNamespace in the generated XSD.

A root element called InputParameters is created in the XSD file for specifying elements that correspond to the IN and IN/OUT parameters of the stored procedure. Another root element called OutputParameters is also created in the XSD file for specifying elements only if there are any IN/OUT or OUT parameters. Note that IN/OUT parameters appear in both root elements.

These root elements are represented in the XSD file as an unnamed complexType definition whose sequence includes one element for each parameter. If there are no IN or IN/OUT parameters, then the InputParameters root element is still created; however, complexType is empty. A comment in the XSD file indicates that there are no such parameters. An example of one of these root elements follows.

<element name="InputParameters"
  <complexType>
    <sequence>
      <element …>
       …
    </sequence>
  </complexType>
</element>

The WSDL defines message types whose parts are defined in terms of these two root elements.

<message name="args_in_msg"
  <part name="InputParameters" element="InputParameters"/>
</message>
<message name="args_out_msg"
  <part name="OutputParameters" element="OutputParameters"/>
</message>

The db namespace is the same as the targetNamespace of the generated XSD. Note that the args_in_msg message type always appears in the WSDL while args_out_msg is included only if the OutputParameters root element is generated in the XSD file.

An operation is defined in the WSDL whose name is the same as the adapter service and whose input and output messages are defined in terms of these two message types.

<portType name="Factorial_ptt">
  <operation name="Factorial">
    <input message="tns:args_in_msg"/>
    <output message="tns:args_out_msg"/>
  </operation>
</portType>

The input message always appears while the output message depends on the existence of the OutputParameters root element in the XSD file. The tns namespace is derived from the operation name and is defined in the WSDL as

xmlns:tns="http://xmlns.oracle.com/pcbpel/adapter/db/Factorial/"

The root elements in the XSD file define the structure of the parts used in the messages that are passed into and sent out of the Web service encapsulated by the WSDL.

The input message in the WSDL corresponds to the InputParameters root element from the XSD file. The instance XML supplies values for the IN and IN/OUT parameters of the stored procedure. The output message corresponds to the OutputParameters root element. This is the XML file that gets generated after the stored procedure has executed. It holds the values of any IN/OUT and OUT parameters.
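
For instance, if the FACTORIAL function took a single IN parameter named X, the request and reply instance XML might look like the following sketch. The parameter name is illustrative; the actual element names and namespace come from the generated XSD (the namespace shown is the one from the import statement above).

<InputParameters xmlns="http://xmlns.oracle.com/pcbpel/adapter/db/SCOTT/FACTORIAL/">
  <X>5</X>
</InputParameters>

<OutputParameters xmlns="http://xmlns.oracle.com/pcbpel/adapter/db/SCOTT/FACTORIAL/">
  <!-- the return value of a function is an element named after the function itself -->
  <FACTORIAL>120</FACTORIAL>
</OutputParameters>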

9.7.3.2 JCA File

The JCA file provides adapter configuration information for the service. A connection factory is specified so that the adapter run time can connect to the database, as shown in the following example. Non-managed connection properties are specified when the service connection has not been preconfigured.

<connection-factory location="eis/DB/oracle" UIConnectionName="oracle"
adapterRef="">
  <non-managed-connection
   ...
  </non-managed-connection>
</connection-factory>

Note that the JNDI name, eis/DB/oracle, was earlier specified as the service connection in the Adapter Configuration Wizard.

End point properties for the interaction are also specified. The name of the schema, package, and procedure are specified, as shown in the following example. The operation name ties the JCA file back to the service WSDL.

<connection-factory location="eis/db/oracle" UIConnectionName="oracle" adapterRef=""/>
<endpoint-interaction portType="Factorial_ptt" operation="Factorial">
  <interaction-spec
    className="oracle.tip.adapter.db.DBStoredProcedureInteractionSpec">
      <property name="ProcedureName" value="FACTORIAL"/>
      <property name="GetActiveUnitOfWork="false"/>
  </interaction-spec>
  </output>
</endpoint-interaction>

Note the operation name and procedure name. If an explicit schema had been chosen or if the procedure had been defined in a package, then values for these properties would also be listed here.

Note:

Non-managed connection details are not created in the DBAdapter.jca files when you start JDeveloper in the normal mode. However, non-managed connection details are created in the DBAdapter.jca files when you start JDeveloper in the preview mode.

9.7.3.3 Oracle Data Types

Many primitive data types have well-defined mappings and therefore are supported by both the design-time and run-time components. In addition, you can use user-defined types such as VARRAY, nested tables, and OBJECT.

Table 9-17 lists the supported data types for Oracle stored procedures and functions.

Table 9-17 Data Types for Oracle Stored Procedures and Functions

SQL or PL/SQL Type                                       XML Schema Type

BINARY_DOUBLE, DOUBLE PRECISION                          double
BINARY_FLOAT, FLOAT, REAL                                float
BINARY_INTEGER, INTEGER, PLS_INTEGER, SMALLINT           int
BLOB, LONG RAW, RAW                                      base64Binary
CHAR, CLOB, LONG, STRING, VARCHAR2                       string
DATE, TIMESTAMP, TIMESTAMP WITH TIME ZONE                dateTime
DECIMAL, NUMBER                                          decimal


9.7.3.4 Generated XSD Attributes

Table 9-18 lists the attributes used in the generated XSDs.

Table 9-18 Generated XSD Attributes

Attribute     Example                  Purpose

name          name="param"             Name of an element
type          type="string"            XML schema type
db:type       db:type="VARCHAR2"       SQL or PL/SQL type
db:index      db:index="1"             Position of a parameter
db:default    db:default="true"        Has a default clause
minOccurs     minOccurs="0"            Minimum occurrences
maxOccurs     maxOccurs="1"            Maximum occurrences
nillable      nillable="true"          Permits null values


The db namespace is used to distinguish attributes used during run time from standard XML schema attributes. The db:type attribute is used to indicate what the database type is so that a suitable JDBC type mapping can be obtained at run time. The db:index attribute is used as an optimization by both the design-time and run-time components to ensure that the parameters are arranged in the proper order. Parameter indexes begin at 1 for procedures and 0 for functions. The return value of a function is represented as an element in the OutputParameters root element whose name is the name of the function and whose db:index is 0. The db:default attribute is used to indicate whether a parameter has a default clause.

The minOccurs value is set to 0 to allow for an IN parameter to be removed from the XML file. This is useful when a parameter has a default clause defining a value for the parameter (for example, X IN INTEGER DEFAULT 0). At run time, if no element is specified for the parameter in the XML file, the parameter is omitted from the invocation of the stored procedure, thus allowing the default value to be used. Each parameter can appear at most once in the invocation of a stored procedure or function. Therefore, maxOccurs, whose default value is always 1, is always omitted from elements representing parameters.
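For example, the element generated for a parameter declared as X IN INTEGER DEFAULT 0 might look like the following; the exact attribute values are illustrative and depend on the procedure's signature.

<element name="X" type="int" db:type="INTEGER" db:index="1" db:default="true" minOccurs="0" nillable="true"/>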

The nillable attribute is always set to true to allow the corresponding element in the instance XML to have a null value (for example, <X/> or <X></X>). In some cases, however, to pass an element such as this, which does have a null value, you must state this explicitly (for example, <X xsi:nil="true"/>). The namespace, xsi, used for the nillable attribute, must be declared explicitly in the instance XML (for example, xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance").
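For example, an instance XML that explicitly passes a null value for parameter X might look like the following (the target namespace of InputParameters is omitted for brevity):

<InputParameters xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <X xsi:nil="true"/>
</InputParameters>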

9.7.3.5 User-Defined Types

The Adapter Configuration Wizard can also generate valid definitions for user-defined types such as collections (VARRAY and nested tables) and OBJECT. These are created as complexType definitions in the XSD file.

For VARRAY, the complexType definition defines a single element in its sequence, called name_ITEM, where name is the name of the VARRAY element. All array elements in the XML file are so named. Given the following VARRAY type definition,

SQL> CREATE TYPE FOO AS VARRAY (5) OF VARCHAR2 (10);

and a VARRAY element, X, whose type is FOO, the following complexType is generated:

<complexType name="FOO">
  <sequence>
    <element name="X_ITEM" db:type="VARCHAR2" minOccurs="0" maxOccurs="5" nillable="true"/>
      <simpleType>
        <restriction base="string">
          <maxLength value="10"/>
        </restriction>
      </simpleType>
  </sequence>
</complexType>

The minOccurs value is 0 to allow for an empty collection. The maxOccurs value is set to the maximum number of items that the collection can hold. Note that the db:index attribute is not used. Having nillable set to true allows individual items in the VARRAY to be null.

Note the use of the restriction specified on the element of the VARRAY, FOO. This is used on types such as CHAR and VARCHAR2, whose length is known from the declaration of the VARRAY (or nested table). It specifies the type and maximum length of the element. An element value that exceeds the specified length causes the instance XML to fail during schema validation.

The attribute values of a parameter declared to be of type FOO look as follows in the generated XSD:

<element name="X" type="db:FOO" db:type="Array" db:index="1" minOccurs="0" nillable="true"/>

The type and db:type values indicate that the parameter is represented as an array defined by the complexType called FOO in the XSD file. The value for db:index is whatever the position of that parameter is in the stored procedure.

A nested table is treated almost identically to a VARRAY. The following nested table type definition,

SQL> CREATE TYPE FOO AS TABLE OF VARCHAR2 (10);

is also generated as a complexType with a single element in its sequence, called name_ITEM. The element has the same attributes as in the VARRAY example, except that the maxOccurs value is unbounded because nested tables can be of arbitrary size.

<complexType name="FOO">
  <sequence>
    <element name="X_ITEM" … maxOccurs="unbounded" nillable="true">
      …
    </element>
  </sequence>
</complexType>

An identical restriction is generated for the X_ITEM element in the VARRAY. The attributes of a parameter, X, declared to be of this type, are the same as in the VARRAY example.

Note that collections (VARRAY and nested table) are not supported if they are defined inside a PL/SQL package specification. For example:

SQL> create package pkg as
   >   type vary is varray(10) of number;
   >   type ntbl is table of varchar2(100);
   >   procedure test(v in vary, n in ntbl);
   > end;
   > /

If a user selects the test procedure in the Adapter Configuration Wizard for stored procedures, an error occurs stating that the types are not supported. However, if the vary and ntbl type definitions were defined at the root level, outside of the package, then choosing the test procedure works without issue. The supported way to use collection types (VARRAY and nested table) is shown in the following example:

SQL> create type vary as varray(10) of number;
SQL> create type ntbl as table of varchar2(10);
SQL> create package pkg as
   >   procedure test(v in vary, n in ntbl);
   > end;
   > /

An OBJECT definition is also generated as a complexType. Its sequence holds one element for each attribute in the OBJECT.

The following OBJECT,

SQL> CREATE TYPE FOO AS OBJECT (X VARCHAR2 (10), Y NUMBER);

is represented as a complexType definition called FOO with two sequence elements.

<complexType name="FOO">
  <sequence>
    <element name="X" db:type="VARCHAR2" minOccurs="0" nillable="true"/>
      <simpleType>
        <restriction base="string">
          <maxLength value="10"/>
        </restriction>
      </simpleType>
    <element name="Y" type="decimal" db:type="NUMBER" minOccurs="0"
nillable="true"/>
  </sequence>
</complexType>

The minOccurs value is 0 to allow for the element to be removed from the XML file. This causes the value of the corresponding attribute in the OBJECT to be set to null at run time. The nillable value is true to allow empty elements to appear in the XML file, annotated with the xsi:nil attribute, to indicate that the value of the element is null. Again, the db:index attribute is not used.

Note the use of a restriction on the VARCHAR2 attribute. The length is known from the declaration of the attribute in the OBJECT.

9.7.3.6 Complex User-Defined Types

User-defined types can be defined in arbitrarily complex ways. An OBJECT can contain attributes whose types are defined as any of the user-defined types mentioned in the preceding section. This means that the type of an attribute in an OBJECT can be another OBJECT, VARRAY, or a nested table, and so on. The base type of a VARRAY or a nested table can also be an OBJECT. Allowing the base type of a collection to be another collection supports multidimensional collections.
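For example, the following definitions nest an OBJECT inside a collection, which is itself an attribute of another OBJECT. All of the type names here are hypothetical and serve only to illustrate the kind of nesting that is supported.

SQL> CREATE TYPE ADDRESS AS OBJECT (STREET VARCHAR2 (20), CITY VARCHAR2 (20));
SQL> CREATE TYPE ADDRESS_TBL AS TABLE OF ADDRESS;
SQL> CREATE TYPE CUSTOMER AS OBJECT (NAME VARCHAR2 (20), ADDRESSES ADDRESS_TBL);

The Adapter Configuration Wizard generates a complexType for each of these types, following the same patterns shown in the preceding sections.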

9.7.3.7 Object Type Inheritance

The Adapter Configuration Wizard is capable of generating a valid XSD for parameters whose types are defined using OBJECT-type inheritance. Given the following type hierarchy,

SQL> CREATE TYPE A AS OBJECT (A1 NUMBER, A2 VARCHAR2 (10)) NOT FINAL;
SQL> CREATE TYPE B UNDER A (B1 VARCHAR2 (10));

and a procedure containing a parameter, X, whose type is B,

SQL> CREATE PROCEDURE P (X IN B) AS BEGIN … END;

the Adapter Configuration Wizard generates an InputParameters element for parameter X as

<element name="X" type="db:B" db:index="1" db:type="Struct" minOccurs="0" nillable="true"/>

where the definition of OBJECT type B in the XSD file is generated as the following complexType.

<complexType name="B">
  <sequence>
    <element name="A1" type="decimal" db:type="NUMBER" minOccurs="0" nillable="true"/>
    <element name="A2" db:type="VARCHAR2" minOccurs="0"  nillable="true">
      ...
    </element>
    <element name="B1" db:type="VARCHAR2" minOccurs="0"  nillable="true">
      ...
    </element>
  </sequence>
</complexType>

Restrictions on the maximum length of attributes A2 and B1 are added appropriately. Notice how the OBJECT type hierarchy is flattened into a single sequence of elements that corresponds to all of the attributes in the entire hierarchy.

9.7.3.8 Object References

The Adapter Configuration Wizard can also generate a valid XSD for parameters that are references to OBJECT types (for example, object references) or are user-defined types that contain an object reference somewhere in their definition. In this example,

SQL> CREATE TYPE FOO AS OBJECT (…);
SQL> CREATE TYPE BAR AS OBJECT (F REF FOO, …);
SQL> CREATE PROCEDURE PROC (X OUT BAR, Y OUT REF FOO) AS BEGIN … END;

the Adapter Configuration Wizard generates complexType definitions for FOO and BAR as already indicated, except that for BAR, the element for the attribute, F, is generated as

<element name="F" type="db:FOO" db:type="Ref" minOccurs="0" nillable="true"/>

where together, the type and db:type attribute values indicate that F is a reference to the OBJECT type FOO.

For a procedure PROC, the following elements are generated in the OutputParameters root element of the XSD file:

<element name="X" type="db:BAR" db:index="1" db:type="Struct" minOccurs="0" nillable="true"/>
<element name="Y" type="db:FOO" db:index="2" db:type="Ref" minOccurs="0" nillable="true"/>

For Y, note the value of the db:type attribute, Ref. Together with the type attribute, the element definition indicates that Y is a reference to FOO.

Note that there is a restriction on the use of object references that limits their parameter mode to OUT only. Passing an IN or IN/OUT parameter into an API that is either directly a REF or, if the type of the parameter is user-defined, contains a REF somewhere in the definition of that type, is not permitted.

9.7.3.9 Referencing Types in Other Schemas

You can refer to types defined in other schemas if the necessary privileges to access them have been granted. For example, suppose type OBJ was declared in SCHEMA1:

SQL> CREATE TYPE OBJ AS OBJECT (…);

The type of a parameter in a stored procedure declared in SCHEMA2 can be type OBJ from SCHEMA1:

CREATE PROCEDURE PROC (O IN SCHEMA1.OBJ) AS BEGIN … END;

This is possible only if SCHEMA1 granted permission to SCHEMA2 to access type OBJ:

SQL> GRANT EXECUTE ON OBJ TO SCHEMA2;

If the required privileges are not granted, an error occurs when trying to create procedure PROC in SCHEMA2:

PLS-00201: identifier "SCHEMA1.OBJ" must be declared

Because the privileges have not been granted, type OBJ from SCHEMA1 is not visible to SCHEMA2; therefore, SCHEMA2 cannot refer to it in the declaration of parameter O.

9.7.3.10 XSD Pruning Optimization

Some user-defined object types can have a very large number of attributes. These attributes can also be defined in terms of other object types that also have many attributes. In short, one object type can become quite large depending on the depth and complexity of its definition.

Depending on the situation, many attributes of a large object type may not even be necessary. It is therefore sometimes desirable to omit these attributes from the object's schema definition altogether. This can be done by physically removing the unwanted XSD elements from the definition of the object type.

Consider the following example where a stored procedure has a parameter whose type is a complex user-defined type:

SQL> CREATE TYPE OBJ AS OBJECT (A NUMBER, B <SqlType>, C <SqlType>, ...);
SQL> CREATE PROCEDURE PROC (O OBJ) AS BEGIN ... END;

The InputParameters root element contains a single element for the parameter O from the API's signature. A complexType definition for the object type is added to the generated XSD, as shown in the following code snippet:

<complexType name="OBJ">
  <sequence>
    <element name="A" type="decimal" db:type="NUMBER" minOccurs="0"  nillable="true"/>
    <element name="B" .../>
    <element name="C" .../>
    ...
  </sequence>
</complexType>

If attributes B and C are not required, then their elements in the complexType definition of OBJ can be removed, regardless of their types. Values are then not required for these attributes in the instance XML. If parameter O had been an output parameter, then elements corresponding to the pruned attributes would also be omitted from the generated XML.
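After pruning, the complexType definition of OBJ would be reduced to something like the following:

<complexType name="OBJ">
  <sequence>
    <element name="A" type="decimal" db:type="NUMBER" minOccurs="0" nillable="true"/>
  </sequence>
</complexType>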

Suppose that the type of attribute A was also a user-defined object type and that the definition of OBJ changed accordingly, as shown in the following example:

SQL> CREATE TYPE FOO AS OBJECT (X NUMBER, Y NUMBER, Z NUMBER);
SQL> CREATE TYPE OBJ AS OBJECT (A FOO, B <SqlType>, C <SqlType>, ...);

In such a case, the API remains unchanged. Elements corresponding to unwanted attributes in the definition of FOO can also be removed regardless of their type. So, for example, if Y is not required, then its element in the complexType definition of FOO can be removed in the XSD file.

Pruning the XSD file in this fashion improves the run-time performance of the adapter and can significantly reduce memory consumption, as well.

Note:

Only attributes in user-defined object types can be pruned. You cannot prune (remove) a parameter of the stored procedure by removing its element from the InputParameters root element. This can result in an error at run time unless the parameter has a default clause.

9.7.4 Run Time: Before Stored Procedure Invocation

This section discusses important considerations of stored procedure support and provides a brief overview of what happens before the invocation of a stored procedure or function.

This section includes the following topics:

9.7.4.1 Value Binding

Consider the extraction of values from the XML file and how the run time works given those values. The possible cases for data in the XML file corresponding to the value of a parameter whose type is one of the supported primitive data types are as follows:

  1. The value of an element is specified (for example, <X>100</X>, here X=100.)

  2. The value of an element is not specified (for example, <X/>, here X=null.)

  3. The value is explicitly specified as null (for example, <X xsi:nil="true"/>, here X=null.)

  4. The element is not specified in the XML file at all (for example, X = <default value>).

Note:

There is one notable difference that distinguishes Microsoft SQL Server from IBM DB2, MySQL, and AS/400. SQL Server supports parameters that can include a default value in the definition of a stored procedure. Because IBM DB2, MySQL, and AS/400 do not support parameter defaults, every parameter must be represented as an element in the instance XML.

In the first case, the value is taken from the XML file as is and is converted to the appropriate object according to its type. That object is then bound to its corresponding parameter during preparation of the stored procedure invocation.

In the second and third cases, the actual value extracted from the XML file is null. The type converter accepts null and returns it without any conversion. The null value is bound to its corresponding parameter regardless of its type. Essentially, this is the same as passing null for parameter X.

The fourth case has two possibilities: the parameter either has a default clause or it does not. If the parameter has a default clause, then the parameter can be excluded from the invocation of the stored procedure, allowing the default value to be used for the parameter. If the parameter is included, then its value is used instead. If the parameter does not have a default clause, then the parameter must be included in the invocation of the procedure; omitting its element from the instance XML causes the procedure or function to be invoked with fewer arguments than it expects, which results in an error.

For example, consider a procedure whose parameter has a default clause:

SQL> CREATE PROCEDURE PROC (X IN INTEGER DEFAULT 0) AS BEGIN … END;

If no element is specified for X in the instance XML, then no value is bound to the parameter. In fact, the parameter is excluded from the invocation of the stored procedure, which allows the default value of 0 to be used for X.

To summarize, the following PL/SQL is executed in each of these cases (the second and third cases both bind null):

  1. "BEGIN PROC (X=>?); END;" - X = 100 (case 1)

  2. "BEGIN PROC (X=>?); END;" - X = null (cases 2 and 3)

  3. The fourth case has two possibilities:

    1. "BEGIN PROC (); END;" - X = 0 (X has a default clause)

    2. "BEGIN PROC (X=>?); END;" - X = null (X does not have a default clause)

With the exception of default clause handling, these general semantics also apply to item values of a collection or attribute values of an OBJECT whose types are one of the supported primitive data types. The semantics of <X/> when the type is user-defined are, however, quite different.

For a collection, whether it is a VARRAY or a nested table, the following behavior can be expected, given a type definition such as

SQL> CREATE TYPE ARRAY AS VARRAY (5) OF VARCHAR2 (10);

and XML for a parameter, X, which has type ARRAY, that appears as follows:

<X>
    <X_ITEM xsi:nil="true"/>
    <X_ITEM>Hello</X_ITEM>
    <X_ITEM xsi:nil="true"/>
    <X_ITEM>World</X_ITEM>
</X>

The first and third elements of the VARRAY are set to null. The second and fourth are assigned their respective values. No fifth element is specified in the XML file; therefore, the VARRAY instance has only four elements.

Assume an OBJECT definition such as

SQL> CREATE TYPE OBJ AS OBJECT (A INTEGER, B INTEGER, C INTEGER);

and XML for a parameter, X, which has type OBJ, that appears as

<X>
    <A>100</A>
    <C xsi:nil="true"/>
</X>

The value 100 is assigned to attribute A, and null is assigned to attributes B and C. Because there is no element in the instance XML for attribute B, a null value is assigned.

The second case, <X/>, behaves differently if the type of X is user-defined. Rather than assigning null to X, an initialized instance of the user-defined type is created and bound instead.

In the preceding VARRAY example, if <X/> or <X></X> is specified, then the value bound to X is an empty instance of the VARRAY. In PL/SQL, this is equivalent to calling the type constructor and assigning the value to X. For example,

X := ARRAY();

Similarly, in the preceding OBJECT example, an initialized instance of OBJ, whose attribute values have all been null assigned, is bound to X. Similar to the VARRAY case, this is equivalent to calling the type constructor. For example,

X := OBJ(NULL, NULL, NULL);

To specifically assign a null value to X when the type of X is user-defined, add the xsi:nil attribute to the element in the XML file, as in

<X xsi:nil="true"/>

9.7.4.2 Data Type Conversions

This section describes the conversion of data types such as CLOB, DATE, TIMESTAMP, and binary data types including RAW, LONG RAW and BLOB, as well as similar data types supported by third-party databases.

Microsoft SQL Server, IBM DB2, AS/400, and MySQL support binding various forms of binary and date data types to parameters of a stored procedure, as summarized in Table 9-19.

Table 9-19 Third-Party Database: Binding Binary and Date Values to Parameters of a Stored Procedure

XML Schema Type: base64Binary
  IBM DB2 data types:              BLOB, CHAR FOR BIT DATA, VARCHAR FOR BIT DATA
  AS/400 data types:               BINARY, BINARY LARGE OBJECT, BINARY VARYING
  Microsoft SQL Server data types: BINARY, IMAGE, TIMESTAMP, VARBINARY
  MySQL data types:                BINARY, TINYBLOB, BLOB, MEDIUMBLOB, LONGBLOB, VARBINARY

XML Schema Type: dateTime
  IBM DB2 data types:              DATE, TIME, TIMESTAMP
  AS/400 data types:               DATE, TIME, TIMESTAMP
  Microsoft SQL Server data types: DATETIME, SMALLDATETIME
  MySQL data types:                DATE, DATETIME, TIMESTAMP


For a CLOB parameter, if the length of the CLOB parameter is less than 4 kilobytes, then the text extracted from the XML file is bound to the parameter as a String type with no further processing. If the length of the CLOB parameter is greater than 4 kilobytes, or if the mode of the parameter is IN/OUT, then a temporary CLOB is created. The XML file data is then written to the temporary CLOB before the CLOB is bound to its corresponding parameter. The temporary CLOB is freed when the interaction completes. For other character types, such as CHAR and VARCHAR2, the data is simply extracted and bound as necessary. Note that it is possible to bind an XML document to a CLOB parameter (or VARCHAR2 if it is large enough). However, appropriate substitutions for <, >, and so on, must first be made (for example, &lt; for < and &gt; for >).
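For example, to pass a small XML document in a CLOB parameter, the markup characters must be escaped in the instance XML. The element name XML_DOC and the document content below are hypothetical:

<XML_DOC>&lt;po&gt;&lt;id&gt;1&lt;/id&gt;&lt;/po&gt;</XML_DOC>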

A few data types require special processing before their values are bound to their corresponding parameters. These include data types represented by the XML Schema types base64Binary and dateTime.

Note that the XML schema type, dateTime, represents TIME, DATE, and TIMESTAMP. This means that the XML values for these data types must adhere to the XML schema representation for dateTime. Therefore, a simple DATE string, 01-JAN-05, is invalid. XML schema defines dateTime as YYYY-MM-DDTHH:mm:ss. Therefore, the correct DATE value is 2005-01-01T00:00:00. Values for these parameters must be specified using this format in the instance XML.
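For example, a DATE parameter (the element name HIRE_DATE here is hypothetical) would be supplied in the instance XML in the dateTime format:

<HIRE_DATE>2005-01-01T00:00:00</HIRE_DATE>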

Data for binary data types must be represented in a human-readable manner. The chosen XML schema representation for binary data is base64Binary. The type converter uses the javax.mail.internet.MimeUtility encode and decode APIs to process binary data. The encode API must be used to encode all binary data into base64Binary form so that it can be used in an XML file. At run time, the type converter uses the decode API to convert the base64Binary data in the XML file back into a byte array.

For a BLOB parameter, if the length of a byte array containing the decoded value is less than 2 kilobytes, then the byte array is bound to its parameter with no further processing. If the length of the byte array is greater than 2 kilobytes or if the mode of the parameter is IN/OUT, then a temporary BLOB is created. The byte array is then written to the BLOB before it is bound to its corresponding parameter. The temporary BLOB is freed when the interaction completes. For other binary data types, such as RAW and LONG RAW, the base64Binary data is decoded into a byte array and bound as necessary.

Conversions for the remaining data types are straightforward and require no additional information.

9.7.5 Run Time: After Stored Procedure Invocation

After the procedure (or function) executes, the values for any IN/OUT and OUT parameters are retrieved. These correspond to the values of the elements in the OutputParameters root element in the generated XSD.

This section includes the following topics:

9.7.5.1 Data Type Conversions

Conversions of data retrieved are straightforward. However, CLOB (and other character data), RAW, LONG RAW, and BLOB conversions, as well as conversions for similar data types supported by third-party databases, require special attention.

When a CLOB is retrieved, the entire contents of that CLOB are written to the corresponding element in the generated XML. Standard DOM APIs are used to construct the XML file. This means that character data, as for types such as CLOB, CHAR, and VARCHAR2, is massaged as needed to make any required substitutions so that the value is valid and can be placed in the XML file for subsequent processing. Therefore, substitutions for < and >, for example, in an XML document stored in a CLOB are made so that the value placed in the element within the generated XML for the associated parameter is valid.

Raw data, such as for RAW and LONG RAW data types, is retrieved as a byte array. For BLOBs, the BLOB is first retrieved, and then its contents are obtained, also as a byte array. The byte array is then encoded using the javax.mail.internet.MimeUtility encode API into base64Binary form. The encoded value is then placed in its entirety in the XML file for the corresponding element. The MimeUtility decode API must be used to decode this value back into a byte array.

Conversions for the remaining data types are straightforward and require no additional information.

9.7.5.2 Null Values

Elements whose values are null appear as empty elements in the generated XML and are annotated with the xsi:nil attribute. This means that the xsi namespace is declared in the XML file that is generated. Generated XML for a procedure PROC, which has a single OUT parameter, X, whose value is null, looks as follows:

<OutputParameters … xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <X xsi:nil="true"/>
</OutputParameters>

Note that XML elements for parameters of any type (including user-defined types) appear this way if their value is null.

9.7.5.3 Function Return Values

The return value of a function is treated as an OUT parameter at position 0 whose name is the name of the function itself. For example,

CREATE FUNCTION FACTORIAL (X IN INTEGER) RETURN INTEGER AS
BEGIN
    IF (X <= 0) THEN RETURN 1;
    ELSE RETURN X * FACTORIAL (X - 1);
    END IF;
END;

An invocation of this function with a value of 5, for example, results in a value of 120 and appears as <FACTORIAL>120</FACTORIAL> in the OutputParameters root element in the generated XML.

9.7.6 Run Time: Common Third-Party Database Functionality

The common third-party database functionality at run time includes the following:

9.7.6.1 Processing ResultSets

All third-party databases share the same functionality for handling ResultSets. The following is a SQL Server example of an API that returns a ResultSet:

 1> create procedure foo ... as select ... from ...;
 2> go

A RowSet defined in the generated XSD represents a ResultSet. A RowSet consists of zero or more rows, each having one or more columns. A row corresponds with a row returned by the query. A column corresponds with a column item in the query. The generated XML for the API shown in the preceding example after it executes is shown in the following example:

<RowSet>
  <Row>
    <Column name="<column name>" sqltype="<sql datatype>">value</Column>
    ...
  </Row>
    ...
</RowSet>
…

The name attribute stores the name of the column appearing in the query while the sqltype attribute stores the SQL datatype of that column, for example INT. The value is whatever the value is for that column.

Note that it is possible for an API to return multiple ResultSets. In such cases, there is one RowSet for each ResultSet in the generated XML. All RowSets always appear first in the generated XML.

9.7.6.2 Returning an INTEGER Status Value

Some databases support returning an INTEGER status value using a RETURN statement in a stored procedure. Microsoft SQL Server and AS/400 both support this feature. In both cases, the Adapter Configuration Wizard is unable to determine whether a stored procedure returns a status value. Therefore, you must indicate that the stored procedure returns a value by selecting a check box.

After choosing a stored procedure in the Stored Procedures dialog, the Specify Stored Procedure page appears, as shown in Figure 9-49. The check box appears at the bottom of the page. Select the box to indicate that the procedure contains a RETURN statement. You can view the source code of the procedure to determine whether a RETURN statement exists.

Note that the check box appears only for stored procedures on databases that support this feature. The check box is not displayed for functions. The value returned by the stored procedure appears as an element in the OutputParameters root element in the generated XSD. The name of the element is the name of the stored procedure. The value of a return statement is lost after the execution of the stored procedure if the check box is not selected.
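For example, given a SQL Server procedure along the following lines (the procedure name GET_STATUS and the returned value are illustrative only),

 1> create procedure GET_STATUS as
 2>     return 1
 3> go

selecting the check box causes the returned status value to appear in the generated XML as <GET_STATUS>1</GET_STATUS> within the OutputParameters root element.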

Figure 9-49 The Specify Stored Procedure Page

Description of Figure 9-49 follows
Description of "Figure 9-49 The Specify Stored Procedure Page"

9.7.7 Advanced Topics

This section discusses scenarios for types that are not supported directly using the stored procedure functionality that the Oracle Database Adapter provides. The following sections describe workarounds that address the need to use these data types:

9.7.7.1 Row Set Support Using a Strongly Typed XSD

Currently a REF CURSOR by nature can support any arbitrary result set, so the XSD generated at design time is weakly typed.

However, the XML output from this is hard to use. It is very difficult to write an XPath expression or XSL transformation based on a weakly typed XSD, where column names appear as attribute values instead of element names.

Although a row set can represent any result set, for some procedures it is possible to assume that the result set has the same structure each time, and hence that it can be described with a strongly typed XSD. A strongly typed XSD is almost a necessity if you want to transform the result set to another XSD later on. You can use the Adapter Configuration Wizard to generate a strongly typed XSD for a REF CURSOR.

If a weakly typed XSD is sufficient for your use case, see Section 9.7.7.2, "Row Set Support Using a Weakly Typed XSD".

This section includes the following topics:

For more information, see Section 9.3.3, "Row Set Support Using a Strongly or Weakly Typed XSD".

9.7.7.1.1 Design Time

If the stored procedure or function you select contains an output parameter of type RowSet, you can define a strongly typed XSD for this ref cursor as follows:

  1. Using the Adapter Configuration Wizard, select a stored procedure or function that contains an output parameter of type RowSet.

    See steps 1 through 8 in Section 9.7.1.1, "Using Top-Level Standalone APIs".

  2. Click Next. The RowSets page is displayed, as shown in Figure 9-50.

    By default, the Adapter Configuration Wizard generates a weakly typed XSD for this ref cursor shown in the XSD text field. Example 9-4 shows this default, weakly typed XSD.

    Example 9-4 Default Weakly Typed XSD

    <schema targetNamespace="http://xmlns.oracle.com/pcbpel/adapter/db/SYS/MOVIES_CURSORS/MOVIES_QUERY/" xmlns="http://www.w3.org/2001/XMLSchema" xmlns:db="http://xmlns.oracle.com/pcbpel/adapter/db/SYS/MOVIES_CURSORS/MOVIES_QUERY/" elementFormDefault="qualified">
       <element name="InputParameters">
          <complexType>
             <sequence>
                <element name="EXAMPLE" type="db:SYS.MOVIESOBJ" db:index="1" db:type="Struct" minOccurs="0" nillable="true"/>
             </sequence>
          </complexType>
       </element>
       <element name="OutputParameters">
          <complexType>
             <sequence>
                <element name="MOVIES" type="db:RowSet" db:index="2" db:type="RowSet" minOccurs="0" nillable="true"/>
             </sequence>
          </complexType>
       </element>
       <complexType name="RowSet">
          <sequence>
             <element name="Row" minOccurs="0" maxOccurs="unbounded">
                <complexType>
                   <sequence>
                      <element name="Column" maxOccurs="unbounded" nillable="true">
                         <complexType>
                            <simpleContent>
                               <extension base="string">
                                  <attribute name="name" type="string" use="required"/>
                                  <attribute name="sqltype" type="string" use="required"/>
                               </extension>
                            </simpleContent>
                         </complexType>
                      </element>
                   </sequence>
                </complexType>
             </element>
          </sequence>
       </complexType>
       <complexType name="SYS.MOVIESOBJ">
          <sequence>
             <element name="TITLE" db:type="VARCHAR2" minOccurs="0" nillable="true">
                <simpleType>
                   <restriction base="string">
                      <maxLength value="30"/>
                   </restriction>
                </simpleType>
             </element>
             <element name="DIRECTOR" db:type="VARCHAR2" minOccurs="0" nillable="true">
                <simpleType>
                   <restriction base="string">
                      <maxLength value="30"/>
                   </restriction>
                </simpleType>
             </element>
             <element name="STARRING" db:type="VARCHAR2" minOccurs="0" nillable="true">
                <simpleType>
                   <restriction base="string">
                      <maxLength value="30"/>
                   </restriction>
                </simpleType>
             </element>
          </sequence>
       </complexType>
    </schema>
    
  3. For each of the stored procedure or function arguments:

    • Double-click in the Value column.

    • Enter a valid value for the argument.

      Enter numbers and strings directly, dates as literals (for example, 2009/11/11), and structs using constructor syntax (for example, MYOBJ('a', 'b')).

    • Press Enter.

    Note:

    You must choose values that are valid for the argument type and that exist in the database.

    Oracle recommends that you specify a value for all arguments to ensure that the correct stored procedure or function signature is executed.

  4. Click Introspect.

    The Adapter Configuration Wizard executes the stored procedure or function using the arguments you specify:

    1. If the stored procedure or function returns a row set with at least 1 row, the RowSets page is updated to display a strongly typed XSD in the XSD text field. Example 9-5 shows the strongly typed XSD that replaces the default, weakly typed XSD that Example 9-4 shows.

      Figure 9-51 RowSets Page: Successful Introspection

      Description of Figure 9-51 follows
      Description of "Figure 9-51 RowSets Page: Successful Introspection"

      Example 9-5 Strongly Typed XSD

      <schema targetNamespace="http://xmlns.oracle.com/pcbpel/adapter/db/SYS/MOVIES_CURSORS/MOVIES_QUERY/" xmlns="http://www.w3.org/2001/XMLSchema" xmlns:db="http://xmlns.oracle.com/pcbpel/adapter/db/SYS/MOVIES_CURSORS/MOVIES_QUERY/" elementFormDefault="qualified">
         <element name="InputParameters">
            <complexType>
               <sequence>
                  <element name="EXAMPLE" type="db:SYS.MOVIESOBJ" db:index="1" db:type="Struct" minOccurs="0" nillable="true"/>
               </sequence>
            </complexType>
         </element>
         <element name="OutputParameters">
            <complexType>
               <sequence>
                  <element name="MOVIES" type="db:MOVIES_RowSet" db:index="2" db:type="RowSet" minOccurs="0" nillable="true"/>
               </sequence>
            </complexType>
         </element>
         <complexType name="MOVIES_RowSet">
            <sequence>
               <element name="MOVIES_Row" minOccurs="0" maxOccurs="unbounded">
                  <complexType>
                     <sequence>
                        <element name="TITLE" db:type="VARCHAR2" minOccurs="0" nillable="true">
                           <simpleType>
                              <restriction base="string">
                                 <maxLength value="50"/>
                              </restriction>
                           </simpleType>
                        </element>
                        <element name="DIRECTOR" db:type="VARCHAR2" minOccurs="0" nillable="true">
                           <simpleType>
                              <restriction base="string">
                                 <maxLength value="20"/>
                              </restriction>
                           </simpleType>
                        </element>
                        <element name="STARRING" db:type="VARCHAR2" minOccurs="0" nillable="true">
                           <simpleType>
                              <restriction base="string">
                                 <maxLength value="100"/>
                              </restriction>
                           </simpleType>
                        </element>
                        <element name="SYNOPSIS" db:type="VARCHAR2" minOccurs="0" nillable="true">
                           <simpleType>
                              <restriction base="string">
                                 <maxLength value="255"/>
                              </restriction>
                           </simpleType>
                        </element>
                        <element name="GENRE" db:type="VARCHAR2" minOccurs="0" nillable="true">
                           <simpleType>
                              <restriction base="string">
                                 <maxLength value="70"/>
                              </restriction>
                           </simpleType>
                        </element>
                        <element name="RUN_TIME" type="decimal" db:type="NUMBER" minOccurs="0" nillable="true"/>
                        <element name="RELEASE_DATE" type="dateTime" db:type="DATE" minOccurs="0" nillable="true"/>
                        <element name="RATED" db:type="VARCHAR2" minOccurs="0" nillable="true">
                           <simpleType>
                              <restriction base="string">
                                 <maxLength value="6"/>
                              </restriction>
                           </simpleType>
                        </element>
                        <element name="RATING" db:type="VARCHAR2" minOccurs="0" nillable="true">
                           <simpleType>
                              <restriction base="string">
                                 <maxLength value="4"/>
                              </restriction>
                           </simpleType>
                        </element>
                        <element name="VIEWER_RATING" db:type="VARCHAR2" minOccurs="0" nillable="true">
                           <simpleType>
                              <restriction base="string">
                                 <maxLength value="5"/>
                              </restriction>
                           </simpleType>
                        </element>
                        <element name="STATUS" db:type="VARCHAR2" minOccurs="0" nillable="true">
                           <simpleType>
                              <restriction base="string">
                                 <maxLength value="11"/>
                              </restriction>
                           </simpleType>
                        </element>
                        <element name="TOTAL_GROSS" type="decimal" db:type="NUMBER" minOccurs="0" nillable="true"/>
                        <element name="DELETED" db:type="VARCHAR2" minOccurs="0" nillable="true">
                           <simpleType>
                              <restriction base="string">
                                 <maxLength value="5"/>
                              </restriction>
                           </simpleType>
                        </element>
                        <element name="SEQUENCENO" type="decimal" db:type="NUMBER" minOccurs="0" nillable="true"/>
                        <element name="LAST_UPDATED" type="dateTime" db:type="DATE" minOccurs="0" nillable="true"/>
                        <element name="POLLING_STRATEGY" db:type="VARCHAR2" minOccurs="0" nillable="true">
                           <simpleType>
                              <restriction base="string">
                                 <maxLength value="30"/>
                              </restriction>
                           </simpleType>
                        </element>
                     </sequence>
                  </complexType>
               </element>
            </sequence>
         </complexType>
         <complexType name="SYS.MOVIESOBJ">
            <sequence>
               <element name="TITLE" db:type="VARCHAR2" minOccurs="0" nillable="true">
                  <simpleType>
                     <restriction base="string">
                        <maxLength value="30"/>
                     </restriction>
                  </simpleType>
               </element>
               <element name="DIRECTOR" db:type="VARCHAR2" minOccurs="0" nillable="true">
                  <simpleType>
                     <restriction base="string">
                        <maxLength value="30"/>
                     </restriction>
                  </simpleType>
               </element>
               <element name="STARRING" db:type="VARCHAR2" minOccurs="0" nillable="true">
                  <simpleType>
                     <restriction base="string">
                        <maxLength value="30"/>
                     </restriction>
                  </simpleType>
               </element>
            </sequence>
         </complexType>
      </schema>
      

      Proceed to step 5.

    2. If no rows are returned, the Introspection Failed dialog is displayed, as shown in Figure 9-52.

      Figure 9-52 Introspection Failed Dialog

      Description of Figure 9-52 follows
      Description of "Figure 9-52 Introspection Failed Dialog"

      The Adapter Configuration Wizard generates a weakly typed XSD and displays it in the XSD text field by default, overwriting any edits you may have made to a previous version of the XSD.

      Go back to step 3 and enter test argument values that will return a row set with at least 1 row.

    3. If the stored procedure or function throws an exception, the Introspection Error dialog is displayed, as shown in Figure 9-53.

      Figure 9-53 Introspection Error Dialog

      Description of Figure 9-53 follows
      Description of "Figure 9-53 Introspection Error Dialog"

      The Adapter Configuration Wizard generates a weakly typed XSD and displays it in the XSD text field by default, overwriting any edits you may have made to a previous version of the XSD.

      Go back to step 3 and enter test argument values that will return a row set with at least 1 row.

  5. Optionally, fine-tune the strongly typed XSD by manually editing the schema shown in the XSD text field.

  6. Proceed to step 10 in Section 9.7.1.1, "Using Top-Level Standalone APIs".

9.7.7.1.2 Run Time

Suppose you have the following package:

CREATE PACKAGE PKG AS
       TYPE REF_CURSOR IS REF CURSOR;
       PROCEDURE TEST(C OUT REF_CURSOR);
END;

CREATE PACKAGE BODY PKG AS
       PROCEDURE TEST(C OUT REF_CURSOR) AS
       BEGIN
           OPEN C FOR SELECT DEPTNO, DNAME FROM DEPT;
       END;
END;

After using the Adapter Configuration Wizard to define a strongly typed XSD, after the procedure executes, the following XML is generated for parameter, C:

<C>
    <C_Row>
        <DEPTNO>10</DEPTNO>
        <DNAME>ACCOUNTING</DNAME>
    </C_Row>
    <C_Row>
        <DEPTNO>11</DEPTNO>
        <DNAME>DEVELOPMENT</DNAME>
    </C_Row>
    …
</C>

At run time, it does not matter to the Oracle Database Adapter whether the XSD describing the strongly typed ref cursor is inline or imported.

The strongly typed XSD is applied by the SOA runtime and is visible in the Oracle Enterprise Manager Console, where appropriate. For example, Figure 9-54 shows the audit trail for an invoke that returns a ref cursor payload using a strongly typed XSD.

Figure 9-54 Audit Trail for Strongly Typed Payload

Description of Figure 9-54 follows
Description of "Figure 9-54 Audit Trail for Strongly Typed Payload"

9.7.7.2 Row Set Support Using a Weakly Typed XSD

Currently a REF CURSOR by nature can support any arbitrary result set, so the XSD generated at design time is weakly typed. By default, the Adapter Configuration Wizard generates a weakly typed XSD for a REF CURSOR.

However, the XML output from this is hard to use. It is very difficult to write an XPath expression or XSL transformation based on a weakly typed XSD, where column names appear as attribute values instead of element names.

Although a row set can represent any result set, for some procedures it is possible to assume that the result set has the same structure each time, and hence that it can be described with a strongly typed XSD. A strongly typed XSD is almost a necessity if you want to transform the result set to another XSD later on.

If a strongly typed XSD is better suited to your use case, see Section 9.7.7.1, "Row Set Support Using a Strongly Typed XSD".

This section includes the following topics:

For more information, see Section 9.3.3, "Row Set Support Using a Strongly or Weakly Typed XSD".

9.7.7.2.1 Design Time

If the stored procedure or function you select contains an output parameter of type ResultSet, you can define a weakly typed XSD for this ref cursor as follows:

  1. Using the Adapter Configuration Wizard, select a stored procedure or function that contains an output parameter of type ResultSet.

    See steps 1 through 8 in Section 9.7.1.1, "Using Top-Level Standalone APIs".

  2. Click Next. The RowSets page is displayed, as shown in Figure 9-55.

    By default, the Adapter Configuration Wizard generates a weakly typed XSD for this ref cursor shown in the XSD text field.

  3. Optionally, fine-tune the weakly typed XSD by manually editing the schema shown in the XSD text field.

  4. Proceed to step 10 in Section 9.7.1.1, "Using Top-Level Standalone APIs".

9.7.7.2.2 Run Time

Suppose you have the following package:

CREATE PACKAGE PKG AS
       TYPE REF_CURSOR IS REF CURSOR;
       PROCEDURE TEST(C OUT REF_CURSOR);
END;

CREATE PACKAGE BODY PKG AS
       PROCEDURE TEST(C OUT REF_CURSOR) AS
       BEGIN
           OPEN C FOR SELECT DEPTNO, DNAME FROM DEPT;
       END;
END;

The REF_CURSOR is a weakly typed cursor variable because the query is not specified. After the procedure executes, the following XML is generated for parameter, C:

<C>
    <Row>
        <Column name="DEPTNO" sqltype="NUMBER">10</Column>
        <Column name="DNAME" sqltype="VARCHAR2">ACCOUNTING</Column>
    </Row>
    <Row>
        <Column name="DEPTNO" sqltype="NUMBER">20</Column>
        <Column name="DNAME" sqltype="VARCHAR2">RESEARCH</Column>
    </Row>
    …
</C>

There is a total of four rows, each consisting of two columns, DEPTNO and DNAME.

Ref cursors are represented by Java ResultSets. It is not possible to create a ResultSet programmatically by using APIs provided by the JDBC driver. Therefore, ref cursors may not be passed IN to a stored procedure. They can only be passed as IN/OUT and OUT parameters with one caveat. An IN/OUT ref cursor is treated strictly as an OUT parameter. Because no IN value can be provided for an IN/OUT parameter, a null is bound to that parameter when invoking the stored procedure.

9.7.7.3 Support for PL/SQL Boolean, PL/SQL Record, and PL/SQL Table Types

The Adapter Configuration Wizard provides a mechanism that detects when these types are used and then invokes Oracle JPublisher to generate the necessary wrappers automatically. Oracle JPublisher generates two SQL files, one to create schema objects, and another to drop them. The SQL that creates the schema objects is automatically executed from within the Adapter Configuration Wizard to create the schema objects in the database schema before the XSD file is generated. For example, suppose the following package specification is declared:

CREATE PACKAGE PKG AS
  TYPE REC IS RECORD (X NUMBER, Y VARCHAR2 (10));
  TYPE TBL IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
  PROCEDURE PLSQL (R REC, T TBL, B BOOLEAN);
END;

Figure 9-56 shows the step in the Adapter Configuration Wizard that is displayed when the PLSQL procedure from the PKG package is selected.

Figure 9-56 Specifying a Stored Procedure in the Adapter Configuration Wizard

Specifying a stored procedure: step 6.
Description of "Figure 9-56 Specifying a Stored Procedure in the Adapter Configuration Wizard"

As Figure 9-56 shows, the original procedure name is fully qualified, PKG.PLSQL. The type of parameter, R, is the name of the RECORD. The type of T is the name of the TABLE. The type of B is Boolean. The name of the wrapper package that is generated is derived from the service name, bpel_ServiceName (for example, bpel_UseJPub). This is the name of the generated package that contains the wrapper procedure. The check box can be used to force the Adapter Configuration Wizard to overwrite an existing package when the schema objects are created.

Clicking Next twice reveals the Finish page of the Adapter Configuration Wizard, as shown in Figure 9-57.

Figure 9-57 Defining a Database Adapter Service: Finish Page

Description of Figure 9-57 follows
Description of "Figure 9-57 Defining a Database Adapter Service: Finish Page"

The contents of this page describe what the Adapter Configuration Wizard has detected and what actions are performed when the Finish button is clicked. The following summarizes the contents of this page:

  1. The name of the generated WSDL is UseJPub.wsdl.

  2. The name of the JCA file is UseJPub_db.jca.

  3. Two SQL scripts are created and added to the BPEL process project:

    1. BPEL_USEJPUB.sql – Creates the schema objects.

    2. BPEL_USEJPUB_drop.sql – Drops the schema objects.

  4. The name of the generated XSD is SCOTT_USEJPUB_PKG-24PLSQL.xsd.

When you click Finish, Oracle JPublisher is invoked to generate the SQL files and load the schema objects into the database. The process of generating wrappers may take quite some time to complete. However, after an initial wrapper has been generated for a procedure in a package, generating wrappers for other procedures in the same package usually takes less time.

Note:

You must execute BPEL_XXXX_drop.sql when re-creating an Oracle Database Adapter. This is likely due to the JPublisher functionality, which uses a cache when generating wrappers.

The following user-defined types are generated to replace the PL/SQL types from the original procedure:

SQL> CREATE TYPE PKG_REC AS OBJECT (X NUMBER, Y VARCHAR2 (10));
SQL> CREATE TYPE PKG_TBL AS TABLE OF NUMBER;

The naming convention for these types is OriginalPackageName_OriginalTypeName. Boolean is replaced by INTEGER in the wrapper procedure.

Acceptable values for the original Boolean parameter, now that it is an INTEGER, are 0 for FALSE and any non-zero INTEGER value for TRUE. The generated wrapper procedure uses APIs from the SYS.SQLJUTL package to convert from INTEGER to Boolean and vice-versa.

A new wrapper package called BPEL_USEJPUB is created that contains the wrapper for procedure PLSQL, called PKG$PLSQL, as well as conversion APIs that convert from the PL/SQL types to the user-defined types and vice-versa. If the original procedure is a root-level procedure, then the name of the generated wrapper procedure is TOPLEVEL$OriginalProcedureName.

The generated XSD represents the signature of wrapper procedure PKG$PLSQL and not the original procedure. The name of the XSD file is URL-encoded, which replaces $ with -24.
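Conceptually, the signature of the generated wrapper procedure resembles the following sketch; the actual generated code also contains the JPublisher conversion logic between the PL/SQL types and the SQL types:

PROCEDURE PKG$PLSQL (R PKG_REC, T PKG_TBL, B INTEGER);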

Note the naming conventions for the generated artifacts:

  • The service name is used in the names of the WSDL and SQL files. It is also used as the name of the wrapper package.

  • The name of the generated XSD is derived from the schema name, service name, and the original package and procedure names.

  • The names of the SQL object or collection data types are derived from the original package name and the names of their corresponding PL/SQL types.

  • The name of the wrapper procedure is derived from the original package and procedure names. TOPLEVEL$ is used for root-level procedures.

The name of the generated wrapper package is limited to 30 characters. The name of the wrapper procedure is limited to 29 characters. If the names generated by Oracle JPublisher are longer than these limits, then they are truncated.

When the PartnerLink that corresponds to the service associated with the procedure is invoked, the generated wrapper procedure is executed instead of the original procedure.

9.7.7.3.1 Default Clauses in Wrapper Procedures

If a procedure contains a special type that requires a wrapper to be generated, then the default clauses on any of the parameters are not carried over to the wrapper. For example, consider

SQL> CREATE PROCEDURE NEEDSWRAPPER (
        >     B BOOLEAN DEFAULT TRUE, N NUMBER DEFAULT 0) IS BEGIN … END;

Assuming that this is a root-level procedure, the signature of the generated wrapper procedure is

TOPLEVEL$NEEDSWRAPPER (B INTEGER, N NUMBER)

The Boolean type has been replaced by INTEGER. The default clauses on both parameters are missing in the generated wrapper. Parameters of generated wrapper procedures never have a default clause even if they did in the original procedure.

In this example, if an element for either parameter is not specified in the instance XML, then an error occurs stating that an incorrect number of arguments has been provided. The default value of the parameter that is specified in the original procedure is not used.

To address this, the generated SQL file that creates the wrapper must be edited, restoring the default clauses to the parameters of the wrapper procedure. The wrapper and any additional schema objects must then be reloaded into the database schema. After editing the SQL file, the signature of the wrapper procedure is as follows:

TOPLEVEL$NEEDSWRAPPER (B INTEGER DEFAULT 1, N NUMBER DEFAULT 0)

For Boolean parameters, the default value for true is 1, and the default value for false is 0.

As a final step, the XSD file generated for the wrapper must be edited. A special attribute must be added to elements representing parameters that now have default clauses. Add db:default="true" to each element representing a parameter that now has a default clause. For example,

<element name="B" … db:default="true" …/>
<element name="N" … db:default="true" …/>

This attribute is used at run time to indicate that if the element is missing from the instance XML, then the corresponding parameter must be omitted from the procedure call. The remaining attributes of these elements remain exactly the same.
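For example, with the edited wrapper and XSD in place, an instance XML that omits the element for B (the value 5 for N is illustrative; namespaces are omitted for brevity) causes the wrapper to be invoked with only N, so the default of 1 (TRUE) is used for B:

<InputParameters>
    <N>5</N>
</InputParameters>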

9.8 Oracle Database Adapter Use Cases

This section describes use cases for the Oracle Database Adapter and for Oracle Database Adapter stored procedures.

This section includes the following topics:

9.8.1 Use Cases for Oracle Database Adapter

To use the sample Oracle Database Adapter use cases, go to:

http://www.oracle.com/technology/sample_code/products/adapters

Table 9-20 shows the Oracle Database Adapter samples that are provided with Oracle BPEL PM, and Mediator.

Table 9-20 Oracle Database Adapter Use Cases

Tutorial Name Description

Delete

Illustrates the outbound delete operation of the Oracle Database Adapter. An XML record is passed to the operation and the row in the database with the same primary key is deleted.

File2Table

Illustrates the use of a native (CSV) input data file defined in a custom format. The input file is a purchase order, which the file adapter processes and publishes as an XML message to the File2Table BPEL process. The message is transformed to another purchase order format and routed to an invoke activity.

Insert

Illustrates the outbound insert operation of the Oracle Database Adapter. An XML record is passed to the operation and inserted into the database as relational data. (In JDeveloper BPEL Designer, Merge (Insert or Update) is provided.)

InsertWithCatch

Illustrates the extra steps (based on the Insert tutorial) needed to add fault handling to your BPEL process.

JPublisherWrapper

Illustrates a workaround for using PL/SQL RECORD types. JPublisher is used to create a corresponding OBJECT type whose attributes match the fields of the RECORD, and conversion APIs that convert from RECORD to OBJECT and vice versa. JPublisher also generates a wrapper procedure (or function) that accepts the OBJECT and invokes the underlying method using the conversion APIs in both directions. The invoked methods must be installed in an Oracle database (not Oracle Lite).

MasterDetail

Illustrates how to migrate data from one set of tables to another. The sample uses the Oracle Database Adapter to read data from one set of tables, process the data, and write it in to another set of database tables using the adapter.

Merge

Illustrates the outbound merge operation of the Oracle Database Adapter. An XML record is passed to the operation and a corresponding row in the database is either inserted or updated.

PollingControlTableStrategy

Illustrates an inbound polling operation to poll XML instances from the MOVIES table. When a new row is inserted into the MOVIES table, the polling operation raises it to Oracle BPEL PM. This strategy uses a control table to store the primary key of every row that has not yet been processed. With a natural join between the control table and the source table (by primary key), polling against the control table is practically the same as polling against the source table directly.

PollingLastReadIdStrategy

Illustrates an inbound polling operation to poll XML instances from the MOVIES table. Whenever a new row is inserted into the MOVIES table, the polling operation raises it to Oracle BPEL PM. This strategy uses a helper table to remember a sequence value.

PollingLastUpdatedStrategy

Illustrates an inbound polling operation to poll XML instances from the MOVIES table. Whenever a new row is inserted into the MOVIES table, the polling operation raises it to Oracle BPEL PM. This strategy involves using a helper table to remember a last_updated value.

PollingLogicalDeleteStrategy

Illustrates an inbound polling operation to poll XML instances from the MOVIES table. Whenever a new row is inserted into the MOVIES table, the polling operation raises it to Oracle BPEL PM. This strategy involves updating a special field on each row processed, and updating the WHERE clause at run time to filter out processed rows.

PureSQLPolling

Illustrates how to poll a table based on a date field.

PureSQLSelect

Illustrates how to bypass the JDeveloper BPEL Designer WHERE-clause builder to specify arbitrarily complex SQL strings for SELECT operations.

QueryByExample

Illustrates the outbound queryByExample operation of the Oracle Database Adapter. A SELECT SQL query is built dynamically based on fields set in an example XML record, and any matching records are returned.

RefCursors

Illustrates how to use a REF CURSOR with a strongly typed or weakly typed XSD. You can use the Adapter Configuration Wizard to create a strongly typed XSD for a row set returned by an Oracle Database stored procedure or function REF CURSOR variable. For more information, see Section 9.3.3, "Row Set Support Using a Strongly or Weakly Typed XSD".

ResultSetConverter

Illustrates a workaround for using a REF CURSOR. The solution involves the use of a Java stored procedure to convert the corresponding java.sql.ResultSet into a collection (either VARRAY or NESTED TABLE) of OBJECTs.

SelectAll

Illustrates the outbound SelectAll operation of the Oracle Database Adapter. With no WHERE clause, all rows in the MOVIES table are returned as XML.

SelectAllByTitle

Illustrates the outbound SelectAllByTitle operation of the Oracle Database Adapter. The row in the MOVIES table with the selected title is returned as XML.

Update

Illustrates the outbound Update operation of the Oracle Database Adapter. An XML record is passed to the operation and the row in the database with the same primary key is updated. (In JDeveloper BPEL Designer, Merge (Insert or Update) is provided.)


See Table 9-3 for the structure of the MOVIES table, which is used for many of the use cases. The readme.txt files that are included with most of the samples provide instructions.

9.8.2 Use Cases for Oracle Database Adapter - Stored Procedures

This section includes the following use cases:

In addition to the use cases documented in this section, refer to the sample Oracle Database Adapter use cases available at:

http://www.oracle.com/technology/sample_code/products/adapters

Table 9-21 shows the Oracle Database Adapter stored procedure samples that are provided with Oracle BPEL PM and Mediator.

Table 9-21 Oracle Database Adapter Use Cases - Stored Procedures

Tutorial Name Description

JPublisherWrapper

Illustrates a workaround for using PL/SQL RECORD types. JPublisher is used to create a corresponding OBJECT type whose attributes match the fields of the RECORD, and conversion APIs that convert from RECORD to OBJECT and vice versa. JPublisher also generates a wrapper procedure (or function) that accepts the OBJECT and invokes the underlying method using the conversion APIs in both directions. The invoked methods must be installed in an Oracle database (not Oracle Lite).

RefCursors

Illustrates how to use a REF CURSOR with a strongly typed or weakly typed XSD. You can use the Adapter Configuration Wizard to create a strongly typed XSD for a row set returned by an Oracle Database stored procedure or function REF CURSOR variable. For more information, see Section 9.3.3, "Row Set Support Using a Strongly or Weakly Typed XSD".

ResultSetConverter

Illustrates a workaround for using a REF CURSOR. The solution involves the use of a Java stored procedure to convert the corresponding java.sql.ResultSet into a collection (either VARRAY or NESTED TABLE) of OBJECTs.


See Table 9-3 for the structure of the MOVIES table, which is used for many of the use cases. The readme.txt files that are included with most of the samples provide instructions.

9.8.2.1 Creating and Configuring a Stored Procedure in JDeveloper BPEL Designer

This use case describes how to integrate a stored procedure into BPEL Process Manager with JDeveloper BPEL Designer.

This use case includes the following sections:

9.8.2.1.1 Prerequisites

To perform this use case, you must define the following stored procedure in the SCOTT schema:

SQL> CREATE PROCEDURE hello (name IN VARCHAR2, greeting OUT VARCHAR2) AS
  2  BEGIN
  3     greeting := 'Hello ' || name;
  4  END;
  5/
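
Optionally, you can verify the procedure from SQL*Plus before building the composite. The following session is a hypothetical example:

SQL> SET SERVEROUTPUT ON
SQL> DECLARE
  2    greeting VARCHAR2(40);
  3  BEGIN
  4    hello('John', greeting);
  5    DBMS_OUTPUT.PUT_LINE(greeting);
  6  END;
  7  /
Hello John

PL/SQL procedure successfully completed.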
  
9.8.2.1.2 Creating an Application and an SOA Composite

You need to create a JDeveloper application to contain the SOA composite. To create an application and a project for the use case, perform the following steps:

  1. In the Application Navigator of JDeveloper, click New Application.

    The Create Generic Application - Name your application page is displayed.

  2. Enter MyHelloApp in the Application Name field, and click Next.

    The Create Generic Application - Name your project page is displayed.

  3. Enter HelloProject in the Project Name field.

  4. In the Available list in the Project Technologies tab, double-click SOA to move it to the Selected list.

  5. Click Next.

    The Create Generic Application - Configure SOA Settings page is displayed.

  6. Select Composite With BPEL in the Composite Template box, and click Finish. The Create BPEL Process page is displayed.

  7. Enter Greet in the Name field, and then select Synchronous BPEL Process from the Template box.

  8. Click OK.

    The Greet BPEL process in the HelloProject of MyHelloApp is displayed in the design area, as shown in Figure 9-58.

    Figure 9-58 The JDeveloper - Composite.xml

    Figure
    Description of "Figure 9-58 The JDeveloper - Composite.xml"

9.8.2.1.3 Creating the Outbound Oracle Database Adapter Service

Perform the following steps to create an outbound Oracle Database Adapter service:

  1. Drag and drop Database Adapter from the Component Palette to the External References swim lane.

    The Adapter Configuration Wizard Welcome page is displayed.

  2. Click Next.

    The Service Name page is displayed.

  3. Enter Hello in the Service Name field.

  4. Click Next.

    The Service Connection page is displayed.

    Note:

    Ensure that you have configured the JNDI name in the weblogic-ra.xml file before deploying this application.

    For more information, refer to Section 2.19.1, "Creating a Data Source" and Section 2.21, "Recommended Setting for Data Sources Used by Oracle JCA Adapters."

  5. Click the Create a New Database Connection icon.

    The Create Database Connection dialog is displayed.

  6. Enter the following details in the Create Database Connection dialog:

    1. Enter a connection name in the Connection Name field. For example, Myconnection.

    2. Select Oracle (JDBC) for Connection Type.

    3. Enter the user name and password as scott/tiger.

    4. Enter the host name in the Host Name field and the JDBC port in the JDBC Port field.

    5. Select SID and enter the SID name. Alternatively, select Service Name and enter the service name.

    6. Click Test Connection. A success message is displayed in the Status pane.

    7. Click OK.

    The Connection field is populated with the MyConnection connection and the JNDI field is populated with eis/DB/MyConnection.

  7. Click Next.

    The Operation Type page is displayed.

  8. Select Call a Stored Procedure or Function, and then click Next.

    The Specify Stored Procedure page is displayed.

  9. Click Browse. Select HELLO in the Stored Procedures pane.

    The Arguments tab displays the parameters of the stored procedure and the Source tab displays the source code of the stored procedure.

  10. Click OK.

    The Specify Stored Procedure page is displayed. The Procedure field is populated with the HELLO stored procedure and the arguments for the HELLO stored procedure are also displayed.

  11. Click Next.

    The Advanced Options page is displayed.

  12. Specify any additional advanced options, and then click Next.

    The Adapter Configuration Wizard - Finish page is displayed.

  13. Click Finish.

    The Create Partner Link dialog box is displayed. The name of the partner link is Hello, which is the same as the service name.

  14. Click OK.

    The outbound Oracle Database Adapter is now configured and the Greet BPEL process is displayed.

9.8.2.1.4 Add an Invoke Activity

The following are the steps to add an invoke activity:

  1. Drag and drop an Invoke activity from the Component Palette to the design area between the receiveInput activity and the replyOutput activity.

  2. Double-click the Invoke activity.

    The Edit Invoke dialog is displayed.

  3. Enter Input in the Name field.

  4. Click the Automatically Create Input Variable icon to the right of the Input Variable field in the Invoke box.

    The Create Variable dialog is displayed.

  5. Select the default variable name and click OK.

    The Input Variable field is populated with the default variable name, and the Edit Invoke dialog is displayed.

  6. Repeat the same procedure to create the output variable in the Output Variable field.

    In the Variables section of the Edit Invoke dialog the Input and Output variable names are displayed.

  7. Click OK.

    A line with a right arrow connecting the Invoke activity to the Hello partner link is displayed, as shown in Figure 9-59.

    Figure 9-59 The Greet.bpel Page

    Description of Figure 9-59 follows
    Description of "Figure 9-59 The Greet.bpel Page"

9.8.2.1.5 Change the Message Part of the Request Message

When the payload of the request matches the InputParameters, then all of the IN parameters will be included in the request. The only IN parameter in this example is name.
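
For example, an instance of the request payload for the HELLO stored procedure might look like the following. The element names come from the generated SCOTT_HELLO.xsd; the namespace shown assumes the same convention used by the SCOTT_ADD_CUSTOMERS example later in this section.

<InputParameters xmlns="http://xmlns.oracle.com/pcbpel/adapter/db/SCOTT/HELLO/">
  <NAME>John</NAME>
</InputParameters>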

The following are the steps to change the message part for the GreetRequestMessage message:

  1. In the Structure Pane for the Greet BPEL process, which is beneath the Application pane, expand Message Types, then Process WSDL - Greet.wsdl, and then GreetRequestMessage.

  2. Select payload, and then click the Edit icon.

    The Edit Message Part - payload dialog is displayed.

  3. Choose Element and then click the Search icon.

    The Type Chooser dialog is displayed.

  4. Expand Project Schema Files, then SCOTT_HELLO.xsd, and select InputParameters.

  5. Click OK.

    The Edit Message Part - payload dialog is displayed.

  6. Click OK.

9.8.2.1.6 Change the Message Part of the Response Message

When the payload of the response matches the OutputParameters, then all of the OUT parameters will be included in the response. The only OUT parameter in this example is greeting.
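
For example, a response payload returned for the HELLO stored procedure might look like the following, again assuming the namespace convention noted earlier:

<OutputParameters xmlns="http://xmlns.oracle.com/pcbpel/adapter/db/SCOTT/HELLO/">
  <GREETING>Hello John</GREETING>
</OutputParameters>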

The steps for the GreetResponseMessage message part are the same as those for GreetRequestMessage, with the following exceptions:

  1. Expand the GreetResponseMessage message type, and then select payload.

  2. In the Type Chooser dialog, expand SCOTT_HELLO.xsd, and then select OutputParameters.

9.8.2.1.7 Add an Assign Activity for the Input Variable

The following are the steps to add an Assign activity for the input variable:

  1. Drag and drop an Assign activity from the Component Palette in between the receiveInput and Greet invoke activities in the design area.

  2. Double-click the Assign activity.

    The Assign dialog is displayed.

  3. Click General, and then enter NAME in the Name field.

  4. In the Copy Operation tab, click the plus icon, and select Copy Operation from the list of operations displayed.

    The Create Copy Operation dialog is displayed.

  5. In the From area expand Variables, inputVariable, payload, and then select ns2:InputParameters.

  6. In the To area expand Variables, Input_Hello_InputVariable, InputParameters, and then select ns2:InputParameters.

  7. Click OK.

    You have assigned a value to the input parameter.

    The Assign dialog is displayed, as shown in Figure 9-60. This dialog shows the assign from the inputVariable payload to the Input_Hello_InputVariable payload.

    Figure 9-60 The Create Copy Operation Dialog

    Description of Figure 9-60 follows
    Description of "Figure 9-60 The Create Copy Operation Dialog"

  8. Click File, Save All.

9.8.2.1.8 Add an Assign Activity for the Output Variable

In the second assign activity, you assign a value to the output parameter.

The steps for assigning a value to the output parameter are the same as those for assigning a value to the input parameter, with the following exceptions:

  1. Drag and drop an Assign activity from the Component Palette in between the Greet invoke and replyOutput activities in the design area.

  2. Double-click the Assign activity.

    The Assign dialog is displayed.

  3. Enter Greeting in the Name field.

  4. In the Copy Operation tab, click the plus icon, and select Copy Operation from the list of operations displayed.

    The Create Copy Operation dialog is displayed.

  5. In the From pane expand Input_Hello_OutputVariable, OutputParameters, and then select ns2:OutputParameters, as shown in Figure 9-61.

  6. In the To pane expand outputVariable, payload, and then select ns2:OutputParameters, as shown in Figure 9-61.

    Figure 9-61 The Create Copy Operation Dialog

    Description of Figure 9-61 follows
    Description of "Figure 9-61 The Create Copy Operation Dialog"

  7. Click OK.

    You have assigned a value to the output parameter.

  8. Click File, Save All.

    You have completed modeling a BPEL Process. The final BPEL process is displayed, as shown in Figure 9-62.

    Figure 9-62 The Final BPEL Process Screen

    Description of Figure 9-62 follows
    Description of "Figure 9-62 The Final BPEL Process Screen"

9.8.2.1.9 Deploying with JDeveloper

You must deploy the application profile for the SOA project and the application you created in the preceding steps. To deploy the application profile using JDeveloper, use the following steps:

  1. Create an application server connection using the procedure described in Chapter 2, "Creating an Application Server Connection for Oracle JCA Adapters."

  2. Deploy the application using the procedure described in Section 2.8, "Deploying Oracle JCA Adapter Applications from JDeveloper."

9.8.2.1.10 Creating a DataSource in Oracle WebLogic Server Administration Console

Before you can test the HelloProject, you must create a data source using the Oracle WebLogic Server Administration Console.

The following are the steps:

  1. Enter http://<hostname>:<port>/console in your Web browser.

  2. Enter a user name and password and click Log In.

    The administration console is displayed.

  3. In the Services area under JDBC click Data Sources.

    A summary of JDBC Data Sources is displayed.

  4. Click New.

    The Create a New JDBC Data Source page is displayed.

  5. In the Create a New JDBC Data Source page, enter the following details:

    • MyDataSource in the Name field.

    • jdbc/MyDataSource in the JNDI Name field.

    • The Database Type is Oracle.

    • The Database Driver is Oracle's Driver (Thin XA) for Instance Connections; Versions 9.0.1, 9.2.0, 10, 11.

  6. Click Next.

    A message stating that no other transaction configuration options are available is displayed.

  7. Click Next.

    The Create a New Data Source page is displayed.

  8. Enter the following details:

    • Database Name: This is usually the SID.

    • Host Name: Enter the name of the host computer.

    • Port Number: Enter the port number.

      The default port is 1521.

    • Database User Name: Enter SCOTT.

    • Password: Enter TIGER.

    • Confirm Password: Enter TIGER.

  9. Click Next.

    A summary of the data source configuration is displayed.

  10. Click Test Configuration.

    The Messages area will indicate that the connection test succeeded.

  11. Click Next. Select AdminServer as the target by selecting the check box.

  12. Click Finish.

    The summary of JDBC Data Sources now includes the MyDataSource data source that you created in the preceding steps.
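
For reference, a data source defined this way connects to the database through a JDBC thin-driver URL of the following form, where the host name, port, and SID are the values you entered:

jdbc:oracle:thin:@<hostname>:<port>:<SID>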

9.8.2.1.11 Monitoring Using the Fusion Middleware Control Console

You can monitor the deployed SOA composite using the Fusion Middleware Control Console. Perform the following steps:

  1. Navigate to http://servername:portnumber/em. A list of SOA composites is displayed, including the HelloProject[1.0] that you created in the preceding steps.

  2. Click the HelloProject[1.0] link. The Dashboard tab is displayed, as shown in Figure 9-63.

    Figure 9-63 The Dashboard Tab of the HelloProject[1.0] Project

    Description of Figure 9-63 follows
    Description of "Figure 9-63 The Dashboard Tab of the HelloProject[1.0] Project"

  3. Click Test. A new browser window is displayed.

  4. Enter your name in the NAME field that is marked xsd:string and then click Invoke.

    The browser window will display the Test Result.

  5. To view the XML file in readable form, click Formatted XML. Figure 9-64 shows the formatted XML file.

    Figure 9-64 The Formatted XML File

    Description of Figure 9-64 follows
    Description of "Figure 9-64 The Formatted XML File"

9.8.2.2 File To StoredProcedure Use Case

This use case illustrates the execution of an Oracle stored procedure. The input to the stored procedure is obtained by reading a file using the File Adapter. The stored procedure executes, populating a table with the data from its input parameter.

The File2StoredProcedure use case is available at:

http://www.oracle.com/technology/sample_code/products/adapters

This use case includes the following topics:

9.8.2.2.1 Prerequisites

To perform the file to stored procedure use case, the following schema objects and stored procedure must be defined in the SCOTT/TIGER schema before modeling the BPEL Composite using JDeveloper.

create type address as object
(
  street varchar2(20),
  city   varchar2(15),
  state  char(2),
  zip    char(5)
);
create type customer as object
(
  fname varchar2(10),
  lname varchar2(10),
  loc   address,
  email varchar2(25),
  phone varchar2(15)
);
create type collection as table of customer;
create table customers
(
  name  varchar2(25),
  loc   varchar2(45),
  email varchar2(25),
  phone varchar2(15)
);
create procedure add_customers(c in collection) as
begin
  for i in c.first .. c.last loop
    insert into customers values (
      c(i).lname || ', ' || c(i).fname,
      c(i).loc.street || ', ' || c(i).loc.city || ', ' ||
        c(i).loc.state || ' ' || c(i).loc.zip,
      c(i).email,
      c(i).phone);
  end loop;
end;
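
Optionally, you can verify these schema objects before building the composite. The following anonymous block is a hypothetical smoke test (it is not part of the sample) that inserts one row through the stored procedure:

begin
  add_customers(collection(
    customer('John', 'Doe',
             address('123 Main Street', 'Anytown', 'CA', '12345'),
             'john.doe@example.com', '555-0100')));
end;
/

select name, loc from customers;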

You can define these schema objects and stored procedure using the adapters-db-101-file2storedprocedure/artifacts/sql/customers.sql file from the adapters-db-101-file2storedprocedure sample available at the following location:

http://www.oracle.com/technology/sample_code/products/adapters
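
For example, after downloading the sample, you might run the script from a SQL*Plus session connected as SCOTT:

SQL> @adapters-db-101-file2storedprocedure/artifacts/sql/customers.sql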

9.8.2.2.2 Creating an Application and an SOA Project

You must create a JDeveloper application to contain the SOA composite. Use the following steps to create a new application and an SOA project:

  1. Open JDeveloper.

  2. In the Application Navigator, click New Application. The Create Generic Application - Name your Application page is displayed.

  3. Enter File2SPApp in the Application Name field.

  4. In the Application Template list, select Generic Application.

  5. Click Next.

    The Create Generic Application - Name your project page is displayed.

  6. In the Project Name field, enter a descriptive name. For example, File2SPProject.

  7. In the Available list in the Project Technologies tab, double-click SOA to move it to the Selected list.

  8. Click Next. The Create Generic Application - Configure SOA Settings page is displayed.

  9. Select Composite With BPEL from the Composite Template list, and then click Finish.

    You have created a new application, and an SOA project. This automatically creates an SOA composite.

    The Create BPEL Process page is displayed.

  10. Enter a name for the BPEL process in the Name field. For example, File2SP.

  11. Select Define Service Later in the Template list, and then click OK.

    The File2SP BPEL process in the File2SPProject of File2SPApp is displayed in the design area.

9.8.2.2.3 Creating the Outbound Oracle Database Adapter Service

Perform the following steps to create an outbound Oracle Database Adapter service:

  1. Drag and drop Database Adapter from the Service Adapters list to the External References swim lane. The Adapter Configuration Wizard Welcome page is displayed.

  2. Click Next. The Service Name page is displayed.

  3. Enter File2SPService in the Service Name field.

  4. Click Next.

    The Service Connection page is displayed.

  5. Click the Create a New Database Connection icon.

    The Create Database Connection dialog is displayed.

  6. Enter the following details in the Create Database Connection dialog:

    1. Enter a connection name in the Connection Name field. For example, MyConnection.

    2. Select Oracle (JDBC) for Connection Type.

    3. Enter the user name and password as scott/tiger.

    4. Enter the host name in the Host Name field and the JDBC port in the JDBC Port field.

    5. Select SID and enter the SID name. Alternatively, select Service Name and enter the service name.

    6. Click Test Connection. A success message is displayed in the Status pane.

    7. Click OK.

    The Connection field is populated with the MyConnection connection and the JNDI field is populated with eis/DB/MyConnection.

  7. Click Next.

    The Adapter Interface page is displayed.

  8. In the Adapter Interface page, select Define from operation and schema (specified later), and then click Next.

    The Operation Type page is displayed.

  9. Select Call a Stored Procedure or Function, as shown in Figure 9-65, and click Next.

    The Specify Stored Procedure page is displayed.

    Figure 9-65 The Adapter Configuration Wizard - Operation Type Page

    Description of Figure 9-65 follows
    Description of "Figure 9-65 The Adapter Configuration Wizard - Operation Type Page"

  10. Click Browse. Select ADD_CUSTOMERS in the Stored Procedures pane.

    The Arguments tab displays the parameters of the stored procedure and the Source tab displays the source code of the stored procedure.

  11. Click OK.

    The Specify Stored Procedure page is displayed.

    The Procedure field is populated with the ADD_CUSTOMERS stored procedure and the arguments for the ADD_CUSTOMERS stored procedure are also displayed.

  12. Click Next.

    The Advanced Options page is displayed.

  13. Specify any additional options, and then click Next.

    The Finish page is displayed.

  14. Click Finish.

    The Create Partner Link dialog is displayed.

    The name of the partner link is File2SPService, which is the same as the service name.

  15. Click OK.

    The outbound Oracle Database Adapter is now configured and the File2SP BPEL process is displayed.

9.8.2.2.4 Creating an Invoke Activity

You must complete the BPEL process by creating an Invoke activity. This creates the input variables.

The following are the steps to create an Invoke activity:

  1. Click File, Save All.

  2. Drag and drop an Invoke activity from the Component Palette to the design area.

  3. Drag the right arrow on the right of the Invoke activity and connect it to the File2SPService partner link.

    The Edit Invoke dialog is displayed.

  4. Enter Invoke in the Name field.

  5. Click the Automatically Create Input Variable icon to the right of the Input Variable field in the Invoke dialog.

    The Create Variable dialog is displayed.

  6. Select the default variable name and click OK.

    The Input variable name is displayed in the Variables area of the Edit Invoke dialog.

  7. Click OK.

    A line with a right arrow connecting the Invoke activity to the File2SPService partner link is displayed.

9.8.2.2.5 Creating the Inbound File Adapter Service

Perform the following steps to create an inbound File adapter service. This will create the service that reads input XML from a file directory:

  1. Drag and drop the File Adapter from the Component Palette to the Exposed Services swim lane.

    The Adapter Configuration Wizard Welcome page is displayed.

  2. Click Next. The Service Name page is displayed.

  3. Enter ReadCustomers in the Service Name field.

  4. Click Next.

    The Adapter Interface page is displayed.

  5. Select Define from operation and schema (specified later), and then click Next. The Operation page is displayed.

  6. Select Read File as the Operation Type and Read as the Operation Name. Do not select the other check boxes.

  7. Click Next.

    The File Directories page is displayed.

  8. Select Physical Path, and enter a physical path in the Directory for Incoming Files field.

  9. Select Process files recursively and Delete files after successful delivery, and then click Next.

    The File Filtering page is displayed.

  10. Select File Wildcards, enter customers.xml in the Include Files with Name Pattern field, and then click Next.

    The File Polling page is displayed.

  11. Specify any value in the Polling Frequency field, and click Next.

    The Messages page is displayed.

  12. Click Browse For Schema File that is displayed at the end of the URL field.

    The Type Chooser dialog is displayed.

  13. Expand Project Schema Files, then SCOTT_ADD_CUSTOMERS.xsd, and select InputParameters.

  14. Click OK.

    The Messages page is displayed again. The URL is xsd/SCOTT_ADD_CUSTOMERS.xsd, and the Schema Element is InputParameters.

  15. Click Next.

    The Finish page is displayed.

  16. Click Finish.

    This completes the configuration of the inbound File Adapter service.

  17. Click OK to complete the partner link.

  18. Click File, Save All.

9.8.2.2.6 Adding a Receive Activity

The File Adapter Service provides input to the Receive Activity, which then initiates the rest of the BPEL Process.

The following are the steps to add a Receive activity:

  1. Double-click File2SP. The File2SP.bpel page is displayed.

  2. Drag and drop a Receive activity from the Component Palette to the design area.

  3. Drag the left arrow on the left of the Receive activity and connect it to the ReadCustomers partner link.

    The Edit Receive dialog is displayed.

  4. Enter Receive in the Name field.

  5. Click the Automatically Create Input Variable icon to the right of the Variable field in the Edit Receive dialog.

    The Create Variable dialog is displayed.

  6. Select the default variable name and click OK.

    The Variable field is populated with the default variable name.

  7. Select Create Instance, and click OK. The JDeveloper File2SP.bpel page is displayed.

    After adding the Receive activity, the JDeveloper window appears, as shown in Figure 9-66.

    Figure 9-66 Adding a Receive Activity

    Description of Figure 9-66 follows
    Description of "Figure 9-66 Adding a Receive Activity"

  8. Click File, Save All.

9.8.2.2.7 Adding an Assign Activity

Next, you must assign a value to the input parameter.

The following are the steps to add an Assign activity:

  1. Drag and drop an Assign activity from the Component Palette in between the Receive and Invoke activities in the design area.

  2. Double-click the Assign activity.

    The Assign dialog is displayed.

  3. Click General, and then enter CUSTOMER in the Name field.

  4. Click the Copy Operation tab.

    The Assign dialog is displayed, as shown in Figure 9-67.

    Figure 9-67 The Assign Dialog - Copy Operation Tab

    Description of Figure 9-67 follows
    Description of "Figure 9-67 The Assign Dialog - Copy Operation Tab"

  5. Click the icon with the plus sign, as shown in Figure 9-67, and then select Copy Operation.

    The Create Copy Operation dialog is displayed.

  6. In the From area expand Process, Variables, Receive_Read_InputVariable and then body.

  7. Select ns3:InputParameters.

  8. In the To area expand Process, Variables, Invoke_File2SPService_InputVariable, and then InputParameters.

  9. Select ns3:InputParameters.

  10. Click OK. The Assign dialog is displayed, as shown in Figure 9-68.

    Figure 9-68 The Assign Dialog

    Description of Figure 9-68 follows
    Description of "Figure 9-68 The Assign Dialog"

  11. Click OK.

    The JDeveloper File2SP.bpel page is displayed, as shown in Figure 9-69.

    Figure 9-69 The JDeveloper - File2SP.bpel

    Description of Figure 9-69 follows
    Description of "Figure 9-69 The JDeveloper - File2SP.bpel"

  12. Click File, Save All.

9.8.2.2.8 Wiring Services and Activities

You must assemble, or wire, the three components that you have created: the inbound adapter service, the BPEL process, and the outbound adapter reference. Perform the following steps to wire the components together:

  1. Drag the small triangle in ReadCustomers in the Exposed Services area to the drop zone that appears as a green triangle in the BPEL process in the Components area.

  2. Drag the small triangle in the BPEL process in the Components area to the drop zone that appears as a green triangle in File2SPService in the External References area.

  3. Click File, Save All.

9.8.2.2.9 Deploying with JDeveloper

You must deploy the application profile for the SOA project and the application you created in the preceding steps. To deploy the application profile using JDeveloper, use the following steps:

  1. Create an application server connection using the procedure described in Chapter 2, "Creating an Application Server Connection for Oracle JCA Adapters."

  2. Deploy the application using the procedure described in Section 2.8, "Deploying Oracle JCA Adapter Applications from JDeveloper."

9.8.2.2.10 Creating a Data Source

Before you can test the File2SPProject, you must create a data source using the Oracle WebLogic Server Administration Console. Use the following steps:

  1. Navigate to http://servername:portnumber/console.

  2. Use the required credentials to open the Home page of the Oracle WebLogic Server Administration Console.

    The Home page of the Oracle WebLogic Server Administration Console is displayed, as shown in Figure 9-70.

    Figure 9-70 Oracle WebLogic Server Administration Console Home Page

    Description of Figure 9-70 follows
    Description of "Figure 9-70 Oracle WebLogic Server Administration Console Home Page"

  3. Under Domain Structure, select Services, JDBC, and then click Data Sources.

    The Summary of JDBC Data Sources page is displayed, as shown in Figure 9-71.

    Figure 9-71 The Summary of JDBC Data Sources Page

    Description of Figure 9-71 follows
    Description of "Figure 9-71 The Summary of JDBC Data Sources Page"

  4. Click New.

    The Create a New JDBC Data Source page is displayed.

  5. Enter the following values for the properties to be used to identify your new JDBC data source:

    • Enter MyDataSource in the Name field.

    • Enter jdbc/MyDataSource in the JNDI Name field.

    • Retain the default value Oracle for Database Type.

    • Retain the default value Oracle's Driver (Thin XA) for Instance Connections; Versions 9.0.1, 9.2.0, 10, 11 for Database Driver.

  6. Click Next.

    The Create a New JDBC Data Source - Transaction Options page is displayed. A message stating, "No other transaction configuration options are available." is displayed.

  7. Click Next.

    The Create a New JDBC Data Source - Connection Properties page is displayed.

  8. Enter the following connection properties in the Connection Properties page:

    • Enter a name in the Database Name field, which is usually the SID.

    • Enter the host name in the Host Name field.

    • Enter the port number in the Port field.

    • Enter SCOTT in the Database User Name field.

    • Enter TIGER in the Password field.

    • Enter TIGER in the Confirm Password field.

  9. Click Next. The Create a New JDBC Data Source - Test Database Connection page is displayed.

  10. Click Test Configuration to test the database availability and the connection properties you provided. A message stating "Connection test succeeded" is displayed at the top of the Create a New JDBC Data Source - Test Database Connection page.

  11. Click Next.

    The Create a New JDBC Data Source - Select Targets page is displayed.

  12. Select AdminServer as target, and then click Finish.

    The Summary of JDBC Data Sources page is displayed. This page summarizes the JDBC data source objects that have been created in this domain. The Data Source that you created appears in this list.

9.8.2.2.11 Adding a Connection-Instance

The database adapter needs an instance entry, which points to a data source.

The following are the steps to add a connection instance:

  1. Search beahome/ for DbAdapter.rar.

  2. Unzip the file.

  3. Edit META-INF/weblogic-ra.xml (and possibly ra.xml), as shown in the following example:

    <connection-instance>
      <jndi-name>eis/DB/MyConnection</jndi-name>
      <connection-properties>
        <properties>
          <property>
            <name>xADataSourceName</name>
            <value>jdbc/MyDataSource</value>
          </property>
        </properties>
      </connection-properties>
    </connection-instance>
    
  4. Use the same database connection name, MyConnection, for the JNDI name.

  5. Use the same data source name, MyDataSource, as the xADataSourceName.

  6. JAR the edited files back into DbAdapter.rar.

  7. Restart the application server.

    You can also create a new database adapter instance using the Oracle WebLogic Server Administration Console.

9.8.2.2.12 Testing using the File Adapter Service and SQL*Plus

You must test the BPEL process by providing an input file for the File Adapter. The results of the BPEL process are seen using a simple query from a table. The customers.xml file contains the following input:

<InputParameters xmlns="http://xmlns.oracle.com/pcbpel/adapter/db/SCOTT/ADD_CUSTOMERS/">
  <C>
    <C_ITEM>
      <FNAME>John</FNAME>
      <LNAME>Doe</LNAME>
      <LOC>
        <STREET>123 Main Street</STREET>
        <CITY>Anytown</CITY>
        <STATE>CA</STATE>
        <ZIP>12345</ZIP>
      </LOC>
      <EMAIL>john.smith@gmail.com</EMAIL>
      <PHONE>567-123-9876</PHONE>
    </C_ITEM>
    <C_ITEM>
      <FNAME>Jane</FNAME>
      <LNAME>Doe</LNAME>
      <LOC>
        <STREET>987 Sunset Blvd</STREET>
        <CITY>Sometown</CITY>
        <STATE>CA</STATE>
        <ZIP>34567</ZIP>
      </LOC>
      <EMAIL>JaneDoe@yahoo.com</EMAIL>
      <PHONE>567-123-9876</PHONE>
    </C_ITEM>
  </C>
</InputParameters>

The following are the steps for testing the BPEL process you created:

  1. Place a copy of customers.xml in the input directory that you specified when you created the inbound File Adapter Service.

  2. The Oracle File Adapter will poll the directory for new files. The Receive activity will initiate the BPEL process once the file is received by the File Adapter Service.

  3. The data for all of the customers is assigned to the InputParameters of the stored procedure.

  4. The stored procedure executes. It transforms the data for each customer and then inserts the customer data into a table.

  5. Query the table to see the following results:

    SQL> select * from customers;
    
    NAME                      LOC
    ------------------------- ---------------------------------------------
    EMAIL                     PHONE
    ------------------------- ---------------
    Doe, John                 123 Main Street, Anytown, CA 12345
    john.smith@gmail.com      567-123-9876
    
    Doe, Jane                 987 Sunset Blvd, Sometown, CA 34567
    JaneDoe@yahoo.com         567-123-9876
    
9.8.2.2.13 Monitoring Using the Fusion Middleware Control Console

You can monitor the deployed SOA composite using the Fusion Middleware Control Console. Perform the following steps:

  1. Navigate to http://servername:portnumber/em.

    A list of SOA composites is displayed, including File2SPProject[1.0] that you created in the preceding steps.

  2. Click File2SPProject[1.0].

    The Dashboard is displayed. Note your Instance ID in the Recent Instances area.

  3. Click the Instances tab.

    A Search dialog is displayed. The default search displays all instances by their Instance ID.

  4. Select the Instance ID that you noted above.

    A new window opens. It lists any faults (No faults found) and enables you to view the Audit Trail of your instance. The instance trace is displayed in this window.

  5. The instance tree is already expanded from ReadCustomers (service) to File2SP (BPEL Component) to File2SPService (reference).

  6. Click File2SP BPEL Component.

    The Audit Trail of your process is displayed.

  7. Expand the <payload> node to see the input provided to the stored procedure, as shown in Figure 9-72.

    Figure 9-72 The Audit Trail Tab

    Description of Figure 9-72 follows
    Description of "Figure 9-72 The Audit Trail Tab"

  8. Additionally, click the Flow tab to view the process flow, as shown in Figure 9-73.

    Figure 9-73 Viewing the Process Flow

    Description of Figure 9-73 follows
    Description of "Figure 9-73 Viewing the Process Flow"