9 Oracle JCA Adapter for Database

This chapter describes the Oracle JCA Adapter for Database (Oracle Database Adapter), which works with Oracle BPEL Process Manager and Oracle Mediator (Mediator). This chapter also describes support for stored procedures and functions (for Oracle databases only). In addition, it contains references to use cases for the Oracle Database Adapter and for stored procedures.

This chapter includes the following topics:

9.1 Introduction to the Oracle Database Adapter

The Oracle Database Adapter enables a BPEL process to communicate with Oracle databases or third-party databases through JDBC. The Oracle Database Adapter service is defined within a BPEL process partner link by using the Adapter Configuration Wizard of Oracle BPEL Process Manager (Oracle BPEL PM).

This section includes the following topics:

9.1.1 Functional Overview

This section provides a functional overview of the Oracle Database Adapter. The Oracle Database Adapter enables Oracle SOA Suite and Oracle Fusion Middleware to communicate with database end points. These include Oracle database servers and any relational databases that follow the ANSI SQL standard and which provide JDBC drivers.

The design principle of the Oracle Database Adapter is to expose tables and SQL to SOA as transparently and non-intrusively as possible. From an integration standpoint, tables and SQL are what relational database products have in common, so a generic solution focused on what is standard has the greatest reach. Exposing databases to SOA also means combining the technologies of SQL and XML: the former an ideal language for querying information, the latter an ideal format for transporting and representing information. While stored procedure support is less standard across databases, the Oracle Database Adapter provides support for stored procedures, as this guide describes.

The Oracle Database Adapter is a JCA 1.5 connector, which runs on the Oracle Application Server. It relies on an underlying JDBC connector/driver to handle the database communication. In contrast to JDBC, it is non-programmatic: the interaction (a series of SELECT, UPDATE, and INSERT operations) is loosely modeled using the Adapter Configuration Wizard. The inputs and outputs are XML, most easily seen as input parameters and result sets converted to XML. These XML inputs and outputs allow Oracle Database Adapter services to be plugged into Oracle Fusion Middleware.

To access an existing relational schema, you must create an application and an SOA project to use the Adapter Configuration Wizard to perform the following:

The Oracle Database Adapter can currently be used only within the context of an SOA process as Section 9.1.1.1, "Oracle Database Adapter Integration with Oracle BPEL PM" describes.

Although Oracle Streams Advanced Queuing (Oracle AQ) is an Oracle Database feature, you use the separate, specialized Oracle JCA Adapter for AQ to integrate with Oracle AQ. For more information, see Chapter 7, "Oracle JCA Adapter for AQ".

For non-relational and legacy systems (with a few exceptions such as DB2 on AS/400), application and mainframe adapters are available. For more information about application and mainframe adapters, see:

For more information on the Oracle Database Adapter, see:

9.1.1.1 Oracle Database Adapter Integration with Oracle BPEL PM

When the Oracle Database Adapter is used to poll for database events (usually an INSERT operation on an input table) and initiate a process, it is called an exposed service in a Mediator component or an SOA composite. In a BPEL process, it is a partner link tied to a Receive activity. The term inbound (from the database into SOA) is commonly used.

When the Oracle Database Adapter is used to invoke a one-time DML statement such as INSERT or SELECT, it is called a service reference in a Mediator component or an SOA composite. In a BPEL process, it is a partner link tied to an Invoke activity. The term outbound (from SOA out to the database) is used.

9.1.2 Design Overview

This section provides an overview of the design of the Oracle Database Adapter. Figure 9-1 shows how the Oracle Database Adapter interacts with the various design-time and deployment artifacts.

Figure 9-1 How the Oracle Database Adapter Works


The Oracle Database Adapter is a JCA 1.5 connector, which is deployed to the application server during installation.

The Oracle Database Adapter consists of multiple instances; each instance represents a connection to a database end point. Different SOA processes may point to the same adapter instance (database), while different service endpoints in a SOA process may point to different adapter instances (databases).

Because each adapter instance points to a single database, there is a one-to-one correspondence from adapter instances to application server data sources. Out of the box there is a single Oracle Database Adapter instance named eis/DB/SOADemo, which points to the data source jdbc/SOADataSource.

The list of adapter instances is stored in a deployment descriptor file, weblogic-ra.xml, on Oracle WebLogic Server. (It is inside DbAdapter.rar, which also contains the Java class files in DBAdapter.jar.) Configuring an Oracle Database Adapter instance is mostly about creating the underlying data source: getting the correct JDBC driver and connection URL.

For more information, see Section 9.6, "JDBC Driver and Database Connection Configuration."

However, weblogic-ra.xml entries occasionally contain more than simply the name of the underlying data source. These properties are detailed further under Section 9.5, "Deployment".
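For illustration, the mapping from the out-of-the-box adapter instance to its data source can be sketched as the following weblogic-ra.xml fragment. This is a sketch only: the surrounding weblogic-connector elements are omitted, and whether the property is xADataSourceName or dataSourceName depends on whether your data source is XA-capable.

```xml
<!-- Sketch of one adapter instance entry inside weblogic-ra.xml -->
<connection-instance>
  <!-- JNDI name referenced by the location attribute in db.jca files -->
  <jndi-name>eis/DB/SOADemo</jndi-name>
  <connection-properties>
    <properties>
      <property>
        <!-- Points the instance at the underlying application server data source -->
        <name>xADataSourceName</name>
        <value>jdbc/SOADataSource</value>
      </property>
    </properties>
  </connection-properties>
</connection-instance>
```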

While at run time you have Oracle Database Adapter instances, at design time you have the Adapter Configuration Wizard. You can run it once to generate a single adapter service end point, and then multiple times in edit mode to make incremental changes to each. It generates all the adapter-related artifacts needed when deploying an SOA composite, as Table 9-1 lists.

Table 9-1 Adapter Configuration Wizard Generated SOA Composite Adapter Artifacts

File Description

<serviceName>.wsdl

This is an abstract WSDL, which defines the service end point in terms of the name of the operations and the input and output XML elements.

<serviceName>_table.xsd

This contains the XML file schema for these input and output XML elements. Both these files form the interface to the rest of the SOA project.

<serviceName>_or-mappings.xml

This is an internal file. It is a TopLink specific file, which is used to describe the mapping between a relational schema and the XML schema. It is used at run time.

<serviceName>_db.jca

This contains the internal implementation details of the abstract WSDL. It has two main sections, location and operations. Location is the JNDI name of an adapter instance, that is, eis/DB/SOADemo. Operations describe the action to take against that end point, such as INSERT, UPDATE, SELECT, and POLL. The contents of the db.jca file are wholly determined by choices made while running the Adapter Configuration Wizard.

<serviceName>.properties

This is also an internal file. It is created when tables are imported, and information about them is saved. It is used only at design time.

At run time, the location is used to look up the adapter instance that executes the service. Based on the properties in the db.jca file and the linked or-mappings.xml file, the adapter generates the correct SQL to execute, parses the input XML, and builds an output XML document matching the XSD file. To execute the SQL, it obtains a pooled SQL connection from the underlying data source.
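As an illustration of the location and operations sections described above, a generated db.jca file for an outbound INSERT might resemble the following sketch. The service name, portType, and property values here are hypothetical; the wizard determines the actual contents.

```xml
<!-- Hypothetical db.jca sketch for an outbound INSERT service -->
<adapter-config name="insertEmployee" adapter="Database Adapter"
    xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">
  <!-- location: JNDI name of the adapter instance to use at run time -->
  <connection-factory location="eis/DB/SOADemo"/>
  <!-- operations: the action to take against that end point -->
  <endpoint-interaction portType="insertEmployee_ptt" operation="insert">
    <interaction-spec className="oracle.tip.adapter.db.DBWriteInteractionSpec">
      <property name="DescriptorName" value="insertEmployee.Employee"/>
      <property name="DmlType" value="insert"/>
      <property name="MappingsMetaDataURL" value="insertEmployee-or-mappings.xml"/>
    </interaction-spec>
  </endpoint-interaction>
</adapter-config>
```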


9.2 Complete Walkthrough of the Adapter Configuration Wizard

This section describes the Adapter Configuration Wizard and how you can define an Oracle Database Adapter by using the Adapter Configuration Wizard.

This section describes the various Oracle Database Adapter concepts through a use case: a complete walkthrough of the Adapter Configuration Wizard. The use case also describes how, by using the Adapter Configuration Wizard, you can import tables from the database, specify relationships spanning multiple tables, generate corresponding XML schema definitions, and create services to expose the necessary SQL or database operations. These services are consumed to define partner links that are used in the BPEL process. You use the Adapter Configuration Wizard to both create and edit adapter services.

9.2.1 Creating an Application and an SOA Project

You must create an Oracle JDeveloper (JDeveloper) application to contain the SOA composite. Perform the following steps to create an application, and an SOA project:

  1. Open JDeveloper.

  2. In the Application Navigator, click New Application.

    The Create Generic Application - Name your application page is displayed, as shown in Figure 9-2.

  3. Enter a name for the application in the Application Name field.

  4. In the Application Template list, choose Generic Application.

    Figure 9-2 The Create Generic Application - Name your application Page


  5. Click Next.

    The Create Generic Application - Name your project page is displayed, as shown in Figure 9-3.

  6. In the Project Name field, enter a descriptive name.

  7. In the Available list in the Project Technologies tab, double-click SOA to move it to the Selected list.

    Figure 9-3 The Create Generic Application - Name your Generic project Page


  8. Click Next. The Create Generic Application - Configure SOA settings page is displayed, as shown in Figure 9-4.

    Figure 9-4 The Create Generic Application - Configure SOA settings Page


  9. Select Composite With BPEL from the Composite Template list, and then click Finish.

    You have created a new application and an SOA project. This automatically creates an SOA composite.

    The Create BPEL Process page is displayed, as shown in Figure 9-5.

    Figure 9-5 The Create BPEL Process Page


  10. Enter a name for the BPEL process in the Name field.

  11. Select Define Service Later in the Template list, and then click OK.

    You have created a BPEL process.

9.2.2 Defining an Oracle Database Adapter

The next step is to define an Oracle Database Adapter service. Perform the following steps to create an Oracle Database Adapter service:

  1. In the Component Palette, select SOA.

  2. Drag and drop Database Adapter from the Service Adapters list to the Exposed Services swim lane in the composite.xml page.

    The Adapter Configuration Wizard is displayed.

    Note:

    To create an Oracle Database Adapter service as part of a BPEL process, drag and drop a BPEL process from Service Components onto Components, and double-click it. Then, in the BPEL Component Palette, drag and drop Database Adapter from BPEL Services onto one of the Partner Links swim lanes.

  3. Click Next. The Service Name page is displayed, as shown in Figure 9-6. Enter the following information:

    Figure 9-6 Specifying the Service Name


  4. In the Service Name field, enter a service name, and then click Next. The Service Connection page is displayed.

See Section 9.2.3, "Connecting to a Database" to continue using the Adapter Configuration Wizard.

9.2.3 Connecting to a Database

Figure 9-7 shows where you select the database connection that you are using with the service. This is the database from which you import tables to configure the service. You can re-create it here in each new JDeveloper application you create.

You can provide a Java Naming and Directory Interface (JNDI) name to identify the database connection; the default name provided is eis/DB/<ConnectionNameInJDev>.

For more information, see Section 9.5, "Deployment."

Figure 9-7 The Adapter Configuration Wizard: Service Connection Page


Note the following:

  • In production environments, it is recommended that you add the JNDI entry to the adapter deployment descriptor (weblogic-ra.xml). This way, the Oracle Database Adapter performs better by working in a managed mode.

    For information about creating a data source and an outbound connection pool, see Section 2.18, "Adding an Adapter Connection Factory."

  • When you click Next, a connection to the database is attempted. If a connection cannot be made, you cannot proceed to the next window, even if you are editing an existing partner link.

See Section 9.2.4, "Selecting the Operation Type" to continue using the Adapter Configuration Wizard.

9.2.4 Selecting the Operation Type

Figure 9-8 shows where you indicate the type of operation you want to configure for this service.

Figure 9-8 The Adapter Configuration Wizard: Operation Type Page


The following operation types are available:

  • Call a Stored Procedure or Function

  • Perform an Operation on a Table

  • Poll for New or Changed Records in a Table

  • Execute Pure SQL

See Section 9.2.5, "Selecting and Importing Tables" to continue using the Adapter Configuration Wizard.

9.2.5 Selecting and Importing Tables

Figure 9-10 shows where you select the root database table for your operation. If you are using multiple related tables, then this is the highest-level table (or highest parent table) in the relationship tree.

Figure 9-10 The Adapter Configuration Wizard: Select Table


Selecting Import Tables launches a sub-wizard, which lets you search for and select multiple tables to import from the database. Removing a table also removes (undoes) any relationships on the remaining related tables. If any underlying tables have changed when you run this wizard in edit mode, you get a warning showing what changes have occurred; to reconcile, import the tables again. If you click Import Tables and select multiple tables, then relationships between these tables are inferred from the foreign key constraints. However, if you launch Import Tables once for each table imported, then no relationships are inferred.

Note:

If you reimport a table, you lose any custom relationships you may have defined on that table and any custom WHERE clauses (if the table being imported was the root table).

See Section 9.2.6, "Defining Primary Keys" to continue using the Adapter Configuration Wizard.

9.2.6 Defining Primary Keys

If any of the tables you have imported do not have primary keys defined on the database, you are prompted to provide a primary key for each one, as shown in Figure 9-11. You must specify a primary key for all imported tables. You can select multiple fields to specify a multipart primary key.

Figure 9-11 The Adapter Configuration Wizard: Define Primary Keys Page


The primary key that you specify here is recorded on the offline database table and is not persisted back to the database schema; the database schema is left untouched.

See Section 9.2.7, "Creating Relationships" to continue using the Adapter Configuration Wizard.

Note:

Note that the Oracle Database Adapter supports only tables that have a primary key defined. If primary key constraints have not been defined on a table explicitly, then you must provide one at design time when defining the Oracle Database Adapter with the Adapter Configuration Wizard. If you do not provide a valid primary key, then uniqueness is not guaranteed, which could result in loss of messages at run time; that is, rows with duplicate primary key values are likely to be lost.

To obtain a sample that describes how to use the row id field as primary key, access the Oracle SOA Sample Code site.

Note:

Oracle recommends that you use varchar instead of char for primary key columns; otherwise, you must set the weblogic-ra.xml property shouldTrimStrings to false. Truncation of trailing spaces could cause the primary key to be read incorrectly, making it impossible to update read rows as processed.

9.2.7 Creating Relationships

Figure 9-12 shows the relationships defined on the root database table and any other related tables. You can click Create Relationships… to create a relationship between two tables, or click Remove Relationship to remove it. To rename a relationship, click Rename Relationship.

Figure 9-12 The Adapter Configuration Wizard: Relationships Page


Note the following regarding creating relationships:

  • If foreign key constraints between tables exist on the database, then two relationships are created automatically when you import the tables, a one-to-one (1:1) from the source table (the table containing the foreign key constraints) to the target table, and a one-to-many (1:M) from the target table to the source table.

  • As Figure 9-12 shows, you see only the relationships that are reachable from the root database table. If, after removing a relationship, other relationships are no longer reachable from the root table, then they are not shown in the Relationships window. Consider the following set of relationships:

    A --1:1--> B --1:1--> C --1:M--> D --1:1--> E --1:M--> F

    where the relationships are numbered (1) through (5), from left to right.

    If you remove relationship 3, then you see only:

    A --1:1--> B

    B --1:1--> C

    If you remove relationship 2, then you see only:

    A --1:1--> B

    If you remove relationship 1, you no longer see any relationships.

Figure 9-13 shows where you can create a relationship.

Figure 9-13 The Create Relationship Dialog


To create a relationship:

  1. Select the parent and child tables.

  2. Select the mapping type (one-to-many, one-to-one, or one-to-one with the foreign key on the child table).

  3. Associate the foreign key fields to the primary key fields.

  4. Optionally name the relationship (a default name is generated).

Note:

Only tables that are reachable from the root table can be selected as a parent.

9.2.7.1 What Happens When Relationships Are Created or Removed

When tables are initially imported into the Adapter Configuration Wizard, a TopLink direct-to-field mapping corresponding to each field in the database is created. Consider the schemas shown in Figure 9-14 and Figure 9-15:

Figure 9-14 EMPLOYEE Schema


Figure 9-15 ADDRESS Schema


Immediately after importing these two tables, the following mappings in the Employee descriptor are created:

Employee:

  • id (direct mapping to the ID field, for example, 151)

  • name (direct mapping to the NAME field, for example, Stephen King)

  • addrId (direct mapping to the ADDR_ID field, for example, 345)

When creating a relationship mapping, the direct-to-field mappings to the foreign key fields are removed and replaced with a single relationship (one-to-one, one-to-many) mapping. Therefore, after creating a one-to-one relationship between Employee and Address called homeAddress, the Employee descriptor appears, as shown in the following example:

Employee:

  • id

  • name

  • homeAddress (one-to-one mapping to the ADDRESS table; this attribute now represents the entire Address object.)

When a relationship is removed, the direct mappings for the foreign keys are restored.

9.2.7.2 Different Types of One-to-One Mappings

When relationships are created automatically, the one-to-many relationship goes from the table without the foreign key. However, you can declare this mapping, which is technically 1:M, as a 1:1. To do so, choose 1-1 (foreign key on target).

9.2.7.3 When Foreign Keys Are Primary Keys

Not all imported tables are in third normal form (3NF). In rare cases, two or more tables may share the same primary key with no separate foreign key columns. In that situation, it is recommended that you create 1-1 (foreign key on target) relationships from the root table to all related tables. The reason is twofold. First, if you were to declare the primary key on the root as a foreign key (1-1, foreign key on source), then that direct mapping would be removed, so you would see the primary key only in the child record, not in the root record. Second, a foreign key can point to only a single table; once you declare a column to be part of a foreign key, it is removed and cannot be used again in a new relationship. Creating a 1-1 (foreign key on source) on the root table thus not only makes the primary key column disappear but also prevents you from joining the root table to the remaining tables.

9.2.8 Creating the Attribute Filter

Figure 9-16 shows the attribute filter that is created from the imported table definitions, including any relationships that you may have defined.

Figure 9-16 The Adapter Configuration Wizard: Attribute Filtering Page


If your object filter contains self-relationships (for example, the employee-to-employee manager relationship), then you see these as loops in the tree. These loops are not present in the XSD file. This is the descriptor object model, not the XSD file.

In this page, you select the columns that appear in the XML, whether for input (MERGE, INSERT) or output (SELECT). You can deselect columns that you are not interested in or that should be read-only (not modified).

See Section 9.2.9, "Defining a WHERE Clause" to continue using the Adapter Configuration Wizard.

9.2.9 Defining a WHERE Clause

If your service contains a SELECT query (that is, inbound polling services, or outbound services that contain a SELECT), then you can customize the WHERE clause of the SELECT statement.

Note:

When using polling with Sequencing Table/Update an External Sequencing Table, ensure that the name of the table in the SELECT query matches the case of the data in the sequencing table.

Figure 9-17 shows where you define a WHERE clause for an outbound service.

Figure 9-17 The Adapter Configuration Wizard: Define Selection Criteria Page


Note:

The WHERE clause applies to SELECT operations only (that is, polling for new or changed records or performing a SELECT operation on a table). It does not apply to INSERT, UPDATE, and DELETE operations.

The most basic expression in a WHERE clause can be one of the following three cases, depending on what the right-hand side (RHS) is:

  1. EMP.ID = 123

    In this case, the RHS is a literal value. This RHS is the Literal option shown in Figure 9-18.

  2. EMP.ADDR_ID = ADDR.ID

    In this case, the RHS is another database field. This RHS is the Query Key option shown in Figure 9-18.

  3. EMP.ID = ?

    In this case, the RHS value must be specified at run time. This is the Parameter option shown in Figure 9-18.
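Using the EMP and ADDR fields from the cases above, the three kinds of basic expression correspond to SQL along these lines (table and column names are from the examples above; the full statements are illustrative):

```sql
-- Literal: the RHS is a fixed value known at design time
SELECT * FROM EMP WHERE EMP.ID = 123;

-- Query key: the RHS is another database field
SELECT * FROM EMP, ADDR WHERE EMP.ADDR_ID = ADDR.ID;

-- Parameter: the RHS is a placeholder bound at run time
SELECT * FROM EMP WHERE EMP.ID = ?;
```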

You can create the parameters that you need in the WHERE clause by clicking Add before you move on to build the WHERE clause. To build the WHERE clause, click Edit… to launch the Expression Builder, as shown in Figure 9-18.

Figure 9-18 Expression Builder


To model more complex WHERE clauses (subselects and functions), and to add ORDER BY clauses, you can edit the SQL statement manually and click Next. However, this creates maintenance overhead later on, due to hard-coded SQL, and you may lose platform independence.

You can change the columns listed in the SELECT clause as long as the number of columns and the type of each remain unchanged. For more complex changes, consider using the Execute Pure SQL option instead, where you can type any SQL directly.

Return Single Result Set

Select Use Outer Joins to return a Single Result Set for both Master and Detail Tables on the Define Selection Criteria page to enable an advanced feature that influences how many SELECT statements TopLink uses when querying multiple related tables. The safest method is the default (one statement per table); with this option enabled, the adapter instead attempts a single statement by outer joining all related tables into one result set.
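The difference can be pictured with a hypothetical master table DEPT and detail table EMP (table and column names here are illustrative, not generated by the wizard):

```sql
-- Default strategy: one SELECT per table (one statement for the master,
-- one per related table)
SELECT ID, NAME FROM DEPT WHERE NAME = 'Sales';
SELECT ID, DEPT_ID, NAME FROM EMP WHERE DEPT_ID = 10;

-- Single result set: one statement total; master columns repeat
-- once per matching detail row, and masters with no details still appear
SELECT d.ID, d.NAME, e.ID, e.NAME
  FROM DEPT d
  LEFT OUTER JOIN EMP e ON e.DEPT_ID = d.ID
 WHERE d.NAME = 'Sales';
```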

See Section 9.2.10, "Choosing an After-Read Strategy" to continue using the Adapter Configuration Wizard.

9.2.10 Choosing an After-Read Strategy

If you selected Perform an Operation on a Table, then you can skip ahead to Section 9.2.12, "Specifying Advanced Options."

When configuring an inbound operation, you have the following options for what to do after a row or rows have been read:

  • Delete the Rows That Were Read

  • Update a Field in the Table (Logical Delete)

  • Update a Sequencing Table

  • Update an External Sequencing Table on a Different Database

  • Update a Sequencing File

Figure 9-19 shows these options.

Figure 9-19 The Adapter Configuration Wizard: After Read Page


See Section 9.4.2.2, "Polling Strategies" to continue using the Adapter Configuration Wizard.

9.2.10.1 Delete the Rows That Were Read

With this option, the rows are deleted from the database after they have been read and processed by the adapter service.

9.2.10.2 Update a Field in the Table (Logical Delete)

With this option, you update a field in the root database table to indicate that the rows have been read. The WHERE clause of the query is updated automatically after you complete the configuration, as shown in Figure 9-20.

Figure 9-20 The Adapter Configuration Wizard: Logical Delete Page


When you use this approach, your database table appears, as shown in Figure 9-21.

Figure 9-21 Updating Fields in a Table


Note the following:

  • Rows 150 and 153 have been previously read and processed.

  • At the next polling event, row 152 is read and processed because it contains UNPROCESSED in the Status column. Because an explicit Unread Value was provided, row 151 is not read.

  • Row 154 has been flagged as LOCKED and is not read. You can use this reserved value if your table is used by other processes.
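Using the Status column and marker values from Figure 9-21, a polling cycle under logical delete is equivalent to SQL along these lines (the table name EMPLOYEE is hypothetical; the wizard generates the actual statements):

```sql
-- Poll: pick up only rows whose status matches the Unread Value
SELECT * FROM EMPLOYEE WHERE STATUS = 'UNPROCESSED';

-- After a row is read and processed, mark it so it is not read again
UPDATE EMPLOYEE SET STATUS = 'PROCESSED' WHERE ID = 152;

-- Rows marked with the reserved value are skipped entirely,
-- which lets other processes hold rows back:
--   STATUS = 'LOCKED'  -> never polled
```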

9.2.10.3 Update a Sequencing Table

With this option, you keep track of the last-read rows in a separate sequencing table. Figure 9-22 shows the information you provide. The WHERE clause of your query is updated automatically after you complete the configuration.

Figure 9-22 The Adapter Configuration Wizard: Sequencing Table Page


When you use these settings, your sequence table appears, as shown in Figure 9-23.

Figure 9-23 Updating a Sequence Table


Whenever a row is read, this table is updated with the ID that was just read. Then, when the next polling event occurs, it searches for rows that have an ID greater than the last-read ID (154).

Typical columns used are event_id, transaction_id, scn (system change number), id, or last_updated. These columns typically have (monotonically) increasing values, populated from a sequence number or sysdate.
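A polling cycle against a sequencing table can be sketched as follows. The table and column names (SEQUENCING_TABLE, LAST_READ_ID, TABLE_NAME) are hypothetical, and the ID 154 is the last-read value from the example above:

```sql
-- 1. Look up the last value processed for this polled table
SELECT LAST_READ_ID FROM SEQUENCING_TABLE WHERE TABLE_NAME = 'EMPLOYEE';

-- 2. Read only rows newer than that value
SELECT * FROM EMPLOYEE WHERE ID > 154;

-- 3. Record the highest ID just read, for the next polling event
UPDATE SEQUENCING_TABLE SET LAST_READ_ID = 160 WHERE TABLE_NAME = 'EMPLOYEE';
```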

9.2.10.4 Update an External Sequencing Table on a Different Database

Choose this option to employ the sequencing table: last updated strategy. Figure 9-24 shows the Adapter Configuration Wizard - External Sequencing Table page, in which you specify the details required to perform this operation.

Figure 9-24 The Adapter Configuration Wizard - External Sequencing Table page


9.2.10.5 Update a Sequencing File

Use this option to update a sequencing file. Figure 9-25 shows the Adapter Configuration Wizard - Update a Sequencing File page where you specify the details for performing this operation.

Figure 9-25 Adapter Configuration Wizard - Update a Sequencing File Page


9.2.11 Specifying Polling Options

You can specify additional polling options, if any, in this page. Figure 9-26 shows the Adapter Configuration Wizard - Polling Options page.

Figure 9-26 Specifying Polling Options


In this page, you specify details about how to poll the database table for new rows or events.

From the Polling Frequency list, select how frequently to poll for new records or events.

In the Database Rows per XML Document field, specify the number of rows per XML document when sending events to Oracle BPEL PM or Mediator. This is the batch setting between the database adapter and its consumer: Oracle BPEL PM or Mediator.

In the Database Rows per Transaction field, select Unlimited or enter a value to indicate the number of table rows to process during a single transaction.

When polling the database for events, you can order the returned rows by a selected column by using the Order By list. The best practice is to choose <No Ordering>, because message ordering is not guaranteed without additional configuration in any case.

In the SQL field, if the SQL syntax is incorrect, then a message is displayed in red.

For more information about specifying polling options, click Help in the Polling Options page or press F1.

9.2.12 Specifying Advanced Options

You can specify advanced options, if any. Figure 9-27 shows the Adapter Configuration Wizard - Advanced Options page. In this page, you can specify advanced JDBC and DBAdapter options, configure retries, and configure native sequencing.

Figure 9-27 Specifying Advanced Options


In the JDBC Options section, set low-level JDBC options on calls to the database. The operation you selected determines which options appear here.

In the Auto-Retries section, specify how the adapter retries in case of a connection-related fault: the Invoke activity can be automatically retried a limited number of times. You can specify the following values in the fields in this section:

  • To retry indefinitely, type unlimited in the Attempts field.

  • Interval is the delay between retries.

  • Backoff Factor: x lets you wait for increasing periods of time between retries. Nine attempts with a starting interval of 1 and a backoff factor of 2 lead to retries after 1, 2, 4, 8, 16, 32, 64, 128, and 256 (2^8) seconds.

In the Interaction Options, specify the interaction options, as follows:

  • GetActiveUnitOfWork is an advanced setting that forces all invoke activities in the same global transaction to use the same SQL connection if going to the same database. This makes it easier to guarantee that later invoke activities can see the changes of earlier ones, though you may not have to set it (if you are using emulated two-phase commit, the connection should automatically be the same). Another difference is that for MERGE and INSERT, no changes are written until the global transaction commits, so this setting also changes the timing of when WRITE operations occur.

  • Detect Omissions allows the MERGE and INSERT operations to ignore empty or missing XML elements in the input payload. For a MERGE operation, this prevents valid but unspecified values from being overwritten with NULL. For INSERT operations, the missing elements are omitted from the INSERT statement, allowing default values to take effect.

  • Optimize Merge should always be set to true, as it is a general enhancement to MERGE performance (using an in query for the primary key existence check).

Native Sequencing (Oracle only) allows you to specify that the primary key is assigned from a sequence on any insert. Click Search and then select a sequence from the Sequence list, or type the name and click Create.
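These interaction options ultimately appear as properties on the interaction spec in the generated .jca file. The following is a representative sketch; the descriptor name is illustrative:

```xml
<interaction-spec className="oracle.tip.adapter.db.DBWriteInteractionSpec">
  <property name="DescriptorName" value="insertEmployee.Employee"/>
  <property name="DmlType" value="merge"/>
  <property name="GetActiveUnitOfWork" value="true"/>
  <property name="DetectOmissions" value="true"/>
  <property name="OptimizeMerge" value="true"/>
</interaction-spec>
```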

For more information about specifying advanced options, click Help in the Advanced Options page or press F1.

9.2.13 Entering the SQL String for the Pure SQL Operation

You can enter a SQL string for performing the Execute Pure SQL operation in the Custom SQL page. Figure 9-28 shows the Adapter Configuration Wizard - Custom SQL page.

Figure 9-28 Entering a SQL String

Description of Figure 9-28 follows
Description of "Figure 9-28 Entering a SQL String"

In the SQL field, enter a custom SQL string. An XSD schema of your SQL input is automatically created in the XSD field.

The XSD field displays the XSD schema of the custom SQL string you entered. You can directly edit the resulting XSD. However, if you make subsequent changes to the SQL string, then your XSD changes are lost.

For more information about entering a SQL string, click Help in the Custom SQL page or press F1.

9.3 Oracle Database Adapter Features

This section discusses the Oracle Database Adapter features.

It includes the following topics:

9.3.1 Transaction Support

The Oracle Database Adapter enables transaction support, which, along with the inherent data processing, ensures that each modification has a clearly defined outcome, resulting in either success or failure, thus preventing potential corruption of data; executes independently from other changes; and, once completed, leaves the underlying data in the same state until another transaction takes place.

There are two types of transaction support: XA transaction support and local transaction support. XA transaction support allows a transaction to be managed by a transaction manager external to the resource adapter, whereas local transaction support allows an application server to manage resources that are local to the resource adapter.

To ensure that two Oracle Database Adapter invokes commit or roll back as a unit, you must ensure the following:

  • Both Oracle Database Adapter invokes must be configured to participate in global transactions.

  • Both Oracle Database Adapter invokes must participate in the same global transaction.

  • The failure of either invoke must cause the global transaction to roll back.

The transaction support is demonstrated in the following tutorial files:

  • XAInsert

  • InsertWithCatch

  • DirectSQLPerformance

To obtain the tutorial sample code, access the Oracle SOA Sample Code site.

Note:

You must use a non-XA driver with the SOALocalTxDataSource parameter. Switching to an XA driver breaks product functionality.

9.3.1.1 Configuring Oracle Database Adapter for Global Transaction Participation

In the deployment descriptor (weblogic-ra.xml file), you must set the xADataSourceName parameter. Additionally, the referenced DataSource must be configured for transaction participation by creating a data source in Oracle WebLogic Server Console.

You must create a data source and choose an XA data source from the list.
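As a sketch, the connection factory entry in weblogic-ra.xml references the XA data source by its JNDI name; the JNDI names below are illustrative:

```xml
<connection-instance>
  <jndi-name>eis/DB/myXAConnection</jndi-name>
  <connection-properties>
    <properties>
      <property>
        <name>xADataSourceName</name>
        <value>jdbc/myXADataSource</value>
      </property>
    </properties>
  </connection-properties>
</connection-instance>
```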

Note:

True Database XA is only certified on Oracle 10.2.0.4 or 11.1.0.7. For earlier versions, you are safer picking a non-XA data source implementation and selecting Emulated Two-phase commit on the next page.

For information about the recommended setting for non-XA and XA data sources used by Oracle JCA Adapters, see Section 2.20, "Recommended Setting for Data Sources Used by Oracle JCA Adapters."

You cannot edit the data-sources.xml file in the Oracle WebLogic Server. You must create a data source by using the Oracle WebLogic Server Administration Console, as mentioned in Section 2.18.1, "Creating a Data Source."

9.3.1.2 Both Invokes in Same Global Transaction

Once both Oracle Database Adapter invokes participate in global transactions, they must participate in the same global transaction to commit or roll back as a unit. In BPEL, this requires understanding where the transaction boundaries are: at what points a checkpoint must write to the dehydration store, commit the current global transaction, and start a new one.

The transaction boundaries in a BPEL process occur either before a Receive or Wait activity, or before an onMessage or Pick activity. A boundary may also occur when invoking a synchronous child BPEL process, unless the bpel.config.transaction property is set on the partner link, as shown in the following code sample.

<property name="bpel.config.transaction">required</property>

Otherwise, the parent process is broken into two transactions and the child process runs in its own transaction.
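In practice, this property is set in composite.xml. A minimal sketch, assuming a child BPEL component named ChildProcess:

```xml
<component name="ChildProcess">
  <implementation.bpel src="ChildProcess.bpel"/>
  <property name="bpel.config.transaction">required</property>
</component>
```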

9.3.1.3 Failure Must Cause Rollback

Finally, even if both Oracle Database Adapter invokes participate in the same global transaction, the failure of either invoke does not necessarily cause the global transaction to roll back.

The only cases where a failure can actually cause a global rollback are:

  • An Oracle Database Adapter operation that inserts/updates multiple tables as part of one invoke fails after having succeeded in some writes but not others. In this case, the Oracle Database Adapter marks the global transaction as rollback-only, because the invoke operation was not atomic and a commit could cause data corruption.

  • The invoke retries multiple times in a database down scenario, until the global transaction times out and is rolled back.

  • An explicit bpelx:rollback fault is thrown from within the BPEL process.

9.3.1.3.1 Using the Same Sessions for Both Invokes

You must set the GetActiveUnitOfWork JCA parameter to true to enable using the same sessions or connections for both the Oracle Database Adapter invokes.

GetActiveUnitOfWork is an advanced JCA property you can set on any DBInteractionSpec. It causes the invoke to register itself with the two-phase commit callbacks, and all writes to the database are performed as part of the two-phase commit. With this property set, on any failure the transaction is automatically rolled back, as there is no way to handle a fault at this late stage. Similarly, the same underlying TopLink session is used for both invokes, meaning that if you merge the same object twice, it is inserted or updated only once. All merge invokes that set GetActiveUnitOfWork to true are cumulative.

9.3.1.4 Transaction/XA Support

To make two Oracle Database Adapter invokes commit or roll back as a unit, the following are required: both Oracle Database Adapter invokes must be configured to participate in global transactions, both invokes must participate in the same global transaction, and the failure of either invoke must cause the global transaction to roll back.

9.3.1.4.1 Configuring an Oracle Database Adapter for Global Transaction Participation

In the deployment descriptor (weblogic-ra.xml), you must set xADataSourceName. The matching data source entry must be configured for global transaction participation.

True XA: Two-Phase (XA) Versus One-Phase (Emulated) Commit

XA is a two-phase commit protocol, which is more robust than a one-phase commit or emulated protocol. The difference is that with a one-phase protocol you may, very rarely, still see message loss or other rollback/commit inconsistency, generally on the order of one occurrence per thousand transactions.

Oracle RAC Configuration

For more information about Oracle RAC configuration, see the Oracle Database High Availability Overview guide.

True XA Configuration with Third Party Drivers

When configuring true XA for third-party drivers (for example, Microsoft SQL Server 2008 or IBM DB2), check whether the driver JARs contain a class that implements javax.sql.XADataSource.

For DataDirect drivers, the class names are com.oracle.ias.jdbcx.db2.DB2DataSource and com.oracle.ias.jdbcx.sqlserver.SQLServerDataSource.
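A WebLogic data source for such a driver then names that XADataSource class as its driver class. The following abbreviated JDBC module is a sketch; host, port, and names are illustrative:

```xml
<jdbc-data-source xmlns="http://xmlns.oracle.com/weblogic/jdbc-data-source">
  <name>sqlserverXADS</name>
  <jdbc-driver-params>
    <url>jdbc:oracle:sqlserver://myhost:1433</url>
    <driver-name>com.oracle.ias.jdbcx.sqlserver.SQLServerDataSource</driver-name>
  </jdbc-driver-params>
  <jdbc-data-source-params>
    <jndi-name>jdbc/sqlserverXADS</jndi-name>
    <global-transactions-protocol>TwoPhaseCommit</global-transactions-protocol>
  </jdbc-data-source-params>
</jdbc-data-source>
```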

9.3.1.4.2 Failure Must Cause Rollback

Finally, even if both invokes participate in the same global transaction, the failure of either invoke does not necessarily cause the global transaction to roll back.

The only cases where a failure can actually cause a global rollback are:

  • An Oracle Database Adapter operation that inserts/updates multiple tables as part of one invoke fails after having succeeded in some writes but not others. In this case, the adapter marks the global transaction as rollback-only, as the invoke operation was not atomic and a commit could cause data corruption.

  • The invoke retries multiple times in a database down scenario, until the global transaction times out and is rolled back.

  • An explicit bpelx:rollback fault is thrown from within the BPEL process (with GetActiveUnitOfWork="true" in the WSDL).

9.3.2 Pure SQL - XML Type Support

Pure SQL Adapter is an option in the Oracle Database Adapter Wizard that allows you to type the SQL string directly and have an XSD/Web service generated automatically. The database tables are introspected dynamically in the Adapter Configuration Wizard to test the SQL and to populate the XSD file with valid return types.

Pure SQL support allows the Oracle Database Adapter to move beyond treating tables and views as entities and to deal directly with SQL. You can use Pure SQL:

  • for simple data projection style report queries

  • in cases where the result set is not table oriented, such as select count(*)

  • to perform an update or delete all

  • when working with XMLType columns and xquery

  • when using complex SQL, which are not modeled in the Adapter Configuration Wizard expression builder
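For example, a Pure SQL string for a count-style query might look like the following, where #minSal becomes an input element in the generated XSD; the table and parameter names are illustrative:

```sql
SELECT COUNT(*) AS HIGH_EARNERS
FROM EMP
WHERE SAL > #minSal
```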

You can use the Pure SQL Adapter with Oracle XMLTypes. It is a natural fit for inserting XML into XMLType tables and columns, and for retrieving XML using XQuery selects. Pure SQL is a natural fit for the Oracle Database Adapter, which provides a relational-XML mapping that parallels XML DB (XDB) support. So, when using XDB, the adapter should be as lightweight and transparent as possible, to let you focus on XDB and XQuery.

If your data is in XML (unstructured or semi-structured) format, and you have no relational schema at all to map your data to, then you can use XDB. The conventional Oracle Database Adapter allows you to import an existing relational schema as an XML schema to be used with Web services. XDB's XML shredding algorithm can generate a relational schema from an existing XML schema for persistent storage.

Note:

Use of schema-bound XMLTypes requires the OCI driver, which is not certified in the 11g release. Therefore, you must use non-schema-bound XMLTypes at run time, though you can use schema-bound XMLTypes at design time to import a representative XSD.

For more information, see:

9.3.3 Row Set Support Using a Strongly or Weakly Typed XSD

Currently a REF CURSOR by nature can support any arbitrary result set, so the XSD generated at design time allows this and looks like the XSD that Example 9-1 shows.

Note:

Oracle Database stored procedures return result sets referred to as REF CURSORs, whereas result sets returned by third-party databases are referred to as row sets.

Example 9-1 Weakly Typed XSD

<refCursorOutputParam>
    <Row>
        <Column name="DEPTNO" sqltype="NUMBER">20</Column>
        ...
    </Row>
</refCursorOutputParam>

However, the XML output from this is hard to use. It is very difficult to write an XPath expression or XSL transformation based on a weakly typed XSD, with column names appearing as attribute values instead of element names.

Although a row set can represent any result set, it is possible to assume for some procedures that it has the same structure each time, and hence can be described with a strongly typed XSD. A strongly typed XSD is almost a necessity to transform the result set to another XSD later on. A strongly typed XSD looks like the XSD that Example 9-2 shows.

Example 9-2 Strongly Typed XSD

<refCursorOutputParam>
    <dept>
        <deptno>20</deptno>
        ...
    </dept>
</refCursorOutputParam>
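To see why the strongly typed form is easier to consume, compare the XPath needed to reach the same value in each case:

```
Weakly typed:   /refCursorOutputParam/Row/Column[@name="DEPTNO"]/text()
Strongly typed: /refCursorOutputParam/dept/deptno/text()
```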

You can use the Adapter Configuration Wizard to create a strongly typed XSD for a row set returned by a stored procedure or function REF CURSOR variable. An Oracle Database function is a special stored procedure that always has one out variable, and can be inlined - for example, inside select statements - and so traditionally does not do updates.

Using this feature, you can select a stored procedure (or stored function), enter its arguments, and perform a test execution to retrieve an actual row set. The Adapter Configuration Wizard then introspects the returned row set and generates a strongly typed XSD. You can enter arguments easily through the wizard. For example, you can enter numbers and strings directly, dates as literals (2009/11/11), and you can even enter structs like MYOBJ('a', 'b').

Note:

Functions are not supported for IBM DB2 UDB. Only SQL stored procedures are supported.

The Adapter Configuration Wizard row set support using a strongly typed XSD has the following restrictions:

  • Oracle Database PL/SQL record or boolean types are not supported.

  • Oracle Database PL/SQL varray is not supported.

  • Oracle Database PL/SQL %rowtype is not supported.

  • Oracle Database PL/SQL table types are not supported.

  • Oracle Database PL/SQL procedures with IN only REF CURSOR parameters are not supported.

    For an Oracle Database PL/SQL procedure with REF CURSOR as an IN/OUT parameter, the Adapter Configuration Wizard ignores the IN and generates the strongly typed XSD based on the OUT parameter.

  • Referencing an element in the XSD using ref is not supported.

  • SQL Server 2008 table valued functions and CLR functions are not supported.

The Oracle Database Adapter supports strongly typed XSD for the following third-party databases:

  • Microsoft SQL Server 2005

  • Microsoft SQL Server 2008

  • IBM DB2 UDB 9.7

The Oracle Database Adapter does not support strongly typed XSD for the following third-party databases:

  • IBM DB2 AS/400

  • MySQL

  • Informix Dynamic Server

  • Sybase 15.0.2

For more information, see:

9.3.4 Proxy Authentication Support

You can connect to your Oracle data store by using Proxy Authentication. On a per-invoke basis, you can set a combination of the following new header properties:

  • jca.db.ProxyUserName: to use the OracleConnection.PROXYTYPE_USER_PASSWORD proxy type, set this property to the proxy user name as a java.lang.String.

  • jca.db.ProxyPassword: to use the OracleConnection.PROXYTYPE_USER_PASSWORD proxy type, set this property to the proxy user password as a java.lang.String.

  • jca.db.ProxyCertificate: to use the OracleConnection.PROXYTYPE_CERTIFICATE proxy type, set this property to a base64Binary encoded byte[] array containing a valid certificate.

    This is a more encrypted way of passing the credentials of the user, who is to be proxied, to the database. The certificate contains the distinguished name encoded in it. One way of generating the certificate is by creating a wallet and then decoding the wallet to get the certificate. The wallet can be created using runutl mkwallet. It is then necessary to authenticate using the generated certificate.

  • jca.db.ProxyDistinguishedName: to use the OracleConnection.PROXYTYPE_DISTINGUISHED_NAME proxy type, set this property to the proxy distinguished name as a java.lang.String.

    This is a global name in lieu of the password of the user being proxied for.

  • jca.db.ProxyRoles: regardless of what proxy type you use, you can optionally set this property to define the roles associated with the proxy user as a String[] array where each java.lang.String corresponds to a role name.

  • jca.db.ProxyIsThickDriver: if you are using the OCI driver, set this property to a value of true to accommodate differences in the JDBC-level API between the thick and thin drivers.

To run the invoke, a proxy connection is obtained from the data source.
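One way to pass these header properties is on the BPEL invoke activity. The following is a sketch using bpelx:inputProperty; the activity, partner link, and variable names are illustrative:

```xml
<invoke name="queryHR" partnerLink="hrService"
        portType="ns1:hrService_ptt" operation="hrServiceSelect"
        inputVariable="selectInput" outputVariable="selectOutput">
  <bpelx:inputProperty name="jca.db.ProxyUserName" variable="proxyUser"/>
  <bpelx:inputProperty name="jca.db.ProxyPassword" variable="proxyPassword"/>
</invoke>
```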

For more information, see Chapter 10, "Proxy Authentication", in the Oracle Database JDBC Developer's Guide and Reference

9.3.5 Streaming Large Payload

To enable payload streaming, select the Enable Streaming check box while specifying polling options, as shown in Figure 9-26. When you enable this feature, the payload is streamed to the database instead of being manipulated in the SOA run time as an in-memory DOM. Use this feature when handling large payloads. When you select the Enable Streaming check box, a corresponding Boolean property, StreamPayload, is appended to the ActivationSpec properties defined in the respective .jca file.
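The resulting entry in the .jca file looks approximately like this; other activation properties are elided:

```xml
<activation-spec className="oracle.tip.adapter.db.DBActivationSpec">
  <property name="PollingStrategy" value="LogicalDeletePollingStrategy"/>
  <property name="StreamPayload" value="true"/>
</activation-spec>
```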

9.3.6 Schema Validation

SchemaValidation [false/true] is an activation specification property that can be configured in a .jca file. When set to true, all XML files produced by the polling Oracle Database Adapter (for Receive activities) are validated against the XSD file. On failure, the XML record is rejected but still marked as processed by the Oracle Database Adapter.

Databases provide structured storage, and the XSD file is generated by the Oracle Database Adapter Wizard itself. However, if you edit the auto-generated XSD and add your own restrictions, you may want to enable validation. For instance, if you import a VARCHAR(50) field, the auto-generated XSD has a max-length 50 restriction. However, if your BPEL process can only handle values of a fixed length of 22, you may want to validate the XML file.
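SchemaValidation is likewise enabled per endpoint in the .jca file. A minimal sketch:

```xml
<activation-spec className="oracle.tip.adapter.db.DBActivationSpec">
  <property name="SchemaValidation" value="true"/>
</activation-spec>
```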

9.3.7 High Availability

The Oracle Database Adapter supports high availability in an active-active setup. In an active-active setup, distributed polling techniques can be used for inbound Database Adapters to ensure that the same data is not retrieved more than once. For more information, see Section 9.3.8.1, "Distributed Polling First Best Practice: SELECT FOR UPDATE (SKIP LOCKED)." Similar to other adapters, an Oracle Database Adapter can also be configured for singleton behavior within an active-passive setup. This allows a high-performance multithreaded inbound Oracle Database Adapter instance running in an active-passive setup to follow a fan-out pattern and invoke multiple composite instances across a cluster. The Oracle Database Adapter also supports high availability when there is a database failure or restart; the adapter then resumes polling without any message loss.

9.3.8 Scalability

The following sections describe best practice for multiple Oracle Database Adapter process instances deployed to multiple Oracle BPEL PM or Mediator nodes, including:

9.3.8.1 Distributed Polling First Best Practice: SELECT FOR UPDATE (SKIP LOCKED)

The first best practice for multiple Oracle Database Adapter process instances deployed to multiple Oracle BPEL PM or Mediator nodes is to use the Adapter Configuration Wizard to both select the Distributed Polling check box and set MaxTransactionSize. Increase concurrency by setting the adapter db.jca property NumberOfThreads.

On an Oracle database, this automatically uses the syntax SELECT FOR UPDATE SKIP LOCKED. Concurrent threads each try to select and lock the available rows, but the locks are only obtained on fetch. If a row about to be fetched is locked, the next unlocked row is locked and fetched instead. If many threads all execute the same polling query at the same time, they should all relatively quickly obtain a disjoint subset of unprocessed rows.

On a non-Oracle database, SELECT FOR UPDATE safely ensures that the same row cannot be processed multiple times; however, you may get less scalability. You should consider additionally using a partition field, or the second best practice, which is essentially multi-threading on a single node with fan-out (see Section 9.3.8.2, "Distributed Polling Second Best Practice: Tuning on a Single Node First").

Note:

A distributed approach is required to ensure that multiple activation instances do not process the same rows.

When configuring this best practice, consider the following:

9.3.8.1.1 Configuring PollingInterval, MaxTransactionSize, and ActivationInstances

In a distributed scenario, each polling instance tries to balance the load by not greedily attempting to process all unprocessed rows by itself. This means that, at any given time, an instance fetches at most MaxTransactionSize rows.

When using skip locking, if a full MaxTransactionSize rows are fetched, the next MaxTransactionSize rows can be fetched immediately. This is because concurrent threads do not block each other when using skip locking, so there is no danger of one instance fetching all the rows.

However, with skip locking disabled, all threads try to lock the same rows, and only one succeeds. Consequently, once that thread has processed MaxTransactionSize rows, it pauses until the next polling interval, to allow other threads to also lock and process rows.

Hence, the maximum throughput with distributed polling enabled but skip locking disabled is:

NumberOfThreads x MaxTransactionSize/PollingInterval
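As a worked example, with 5 threads, a MaxTransactionSize of 10, and a PollingInterval of 5 seconds, the ceiling is:

```
5 threads x 10 rows / 5 seconds = 10 rows per second
```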

Note:

Although you may want to increase MaxTransactionSize, if you increase it too high, you may start to see transaction timeouts. Table 9-2 lists safe values for MaxTransactionSize.

For load balancing purposes, it is dangerous to set the MaxTransactionSize too low in a distributed environment with skip locking disabled (where MaxTransactionSize becomes a speed limit). It is best to set the MaxTransactionSize close to the per CPU throughput of the entire business process. This way, load balancing occurs only when you need it.

Table 9-2 MaxTransactionSize and MaxRaiseSize Values

MaxTransactionSize  MaxRaiseSize        Description

10                  1                   When using sequential routing. For 10 rows
                                        you have 10 individual instances and 10 XML
                                        records passing through SOA.

100                                     When using parallel routing.

>= 100              MaxTransactionSize  When using the adapter to stream rows
                                        through as fast as possible.



If distributed polling is not set, then the adapter tries to process all unprocessed rows in a single polling interval.

9.3.8.1.2 Partition Field

In a distributed scenario, there are polling instances on multiple servers; per server, multiple threads can be configured. You can configure these activation instances to cooperate somewhat by processing separate rows, possibly improving scaling.

To do so, simply add the property PartitionField to your db.jca file:

<property name="PartitionField" value="ID"/>

If you set activationInstances to 2, then activation instances 1 and 2 (or 0 and 1) would respectively execute:

SELECT ... WHERE ... AND MOD (ID, 2) = 0 FOR UPDATE SKIP LOCKED

and

SELECT ... WHERE ... AND MOD (ID, 2) = 1 FOR UPDATE SKIP LOCKED

Activation instance 0 still conflicts with activation instances assigned the same partition on other servers, but at least it does not conflict with activation instances assigned partition 1.

Ensure that the partition field is numeric and that applying MOD distributes the rows evenly (that is, in this case, make sure the IDs are not all even or all odd).

On Oracle Database, you can set the partition field to be rowid by setting db.jca file property PartitionField as follows:

 <property name="PartitionField" value="rowid"/>

Then the SQL is in fact converted to:

SELECT ... WHERE ... AND MOD (dbms_rowid.rowid_row_number(rowid), 2) = [0/1] FOR UPDATE SKIP LOCKED

Because Oracle Database skip locking already provides scalability, setting a partition field is not recommended: there is a cost of increased database CPU usage with the more complex SQL.

9.3.8.1.3 activationInstances

The adapter framework level property activationInstances (configured in composite.xml) is interchangeable with NumberOfThreads for distributed scenarios.

Setting activationInstances to 5 and NumberOfThreads to 5 is equivalent to setting one to 25 and the other to 1. Because the extra worker instances are created outside of the DbAdapter, they do not cooperate in any way. Hence, in a multi-threaded single-node scenario, always configure NumberOfThreads only. Without database-level concurrency control through enabling distributed polling, duplicates are read.

Note:

In a distributed cluster scenario, configuring NumberOfThreads or activationInstances has the same effect. For a non-distributed scenario, you must use NumberOfThreads. Hence, it is safe to always use NumberOfThreads and disregard activationInstances.

For more information, see Section 2.13, "Singleton (Active/Passive) Inbound Endpoint Lifecycle Support Within Adapters".

9.3.8.1.4 Indexing and Null Values

Try to index (and/or add explicit database constraints on) the primary key and all foreign keys to joined tables. If using logical delete polling, try to index the status column, and try to configure non-null MarkUnreadValue and MarkReadValue values.
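For instance, for a logical delete polling table, the supporting index might be sketched as follows; the table and column names are illustrative:

```sql
CREATE INDEX emp_poll_status_idx ON EMP (STATUS);
```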

If you have no indexes at all and prefer to have none, you can proceed with the single node multi-threaded approach (see Section 9.3.8.2, "Distributed Polling Second Best Practice: Tuning on a Single Node First"). That way the polling query is executed once, which might be a full table scan, but multiple threads help to exhaust the entire result set until all rows are processed. With a distributed approach all work must be done while the rows are exclusively locked, which means locked in a timed transaction. In a distributed scenario there are many repeated selects, which may harm performance if each one is doing a full table scan.

Note:

Performance is very slow if MarkUnreadValue is configured as null.

9.3.8.1.5 Disabling Skip Locking

Skip locking has been available in Oracle Database since Oracle 8 but is documented only as of Oracle 11. You rarely come across an incompatible feature and have to disable it; in that case, you can set the Oracle Database Adapter connector property usesSkipLocking to false in the ra.xml file you deploy with your application, as Example 9-3 shows.

Example 9-3 Configuring usesSkipLocking in ra.xml

<?xml version="1.0" encoding="UTF-8"?>
<connector xmlns="http://java.sun.com/xml/ns/j2ee"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee
  http://java.sun.com/xml/ns/j2ee/connector_1_5.xsd" version="1.5">
...
  <resourceadapter>
        <outbound-resourceadapter>
            <connection-definition>
...
            <config-property>
              <config-property-name>usesSkipLocking</config-property-name>
              <config-property-type>java.lang.Boolean</config-property-type>
              <config-property-value>false</config-property-value>
            </config-property>
...
            </connection-definition>
...
        </outbound-resourceadapter>
  </resourceadapter>
</connector>

For more information on how to configure connector-level properties, see:

9.3.8.1.6 MarkReservedValue

If you are using Logical Delete polling and you set MarkReservedValue, skip locking is not used.

Formerly, the best practice for multiple Oracle Database Adapter process instances deployed to multiple Oracle BPEL Process Manager or Oracle Mediator nodes was essentially using LogicalDeletePollingStrategy or DeletePollingStrategy with a unique MarkReservedValue on each polling node, and setting MaxTransactionSize.

However with the introduction of skip locking in this release, that approach has now been superseded. If you were using this approach previously, you can simply remove (in db.jca) or clear (Logical Delete Page of wizard) the MarkReservedValue, and you automatically get skip locking.

The benefits of using skip locking over a reserved value include:

  • Skip locking scales better in a cluster and under load.

  • All work is done in one transaction (as opposed to update/reserve, then commit, then select in a new transaction), so the risk of a non-recoverable situation in an HA environment is minimized.

  • No unique MarkReservedValue needs to be specified. For the reserved-value approach to work, you had to configure a complex variable like R${weblogic.Name-2}-${IP-2}-${instance}.

9.3.8.1.7 SequencingPollingStrategy (Last Read or Last Updated)

This distributed approach works with Delete or Logical Delete based polling strategies.

The work of the sequencing polling based strategies cannot be distributed as records are initially processed in order.

For example, the second row cannot be marked as processed ahead of the first (setting last read ID to 2 means not just that 2 has been processed but 1 also).

However, as the sequencing polling strategies are non-intrusive, requiring no post updates or deletes to the source tables, they are extremely fast.

Use sequencing polling strategies with a single node and with fan-out on a cluster. It is still safe to use in a cluster; however, the select for update is instead applied on accessing the last read ID in the helper table.

9.3.8.2 Distributed Polling Second Best Practice: Tuning on a Single Node First

The next best practice for multiple Oracle Database Adapter process instances deployed to multiple Oracle BPEL PM or Mediator nodes is to tune on a single node first.

For an Oracle Database Adapter intensive process, such as a database-database integration, performance can be improved by a factor of 10 or 100 just by tuning on a single Java Virtual Machine (JVM), scaling NumberOfThreads, and setting high values for MaxTransactionSize and MaxRaiseSize.

As Section 9.3.8.1, "Distributed Polling First Best Practice: SELECT FOR UPDATE (SKIP LOCKED)" describes, there may be times where it is best to improve performance on a single node, and then optionally do fan-out to multiple nodes in a cluster. Relying on concurrency control features of the database such as locking can be great, but these are often designed more for preserving data integrity than for high performance scalability.

Cases where it may be best to do polling on a single node in the cluster include using the non-intrusive Sequencing Polling strategy, polling large un-indexed tables, or using a non-Oracle back-end database that does not provide high concurrency locks like skip locks.

Note:

For Oracle Database Adapter with polling operation in a clustered environment, you must use the option of distributed polling by selecting the Distributed Polling check box in the Adapter Configuration Wizard.

You can also refer to Section 2.13, "Singleton (Active/Passive) Inbound Endpoint Lifecycle Support Within Adapters".


For the samples MultiTablesPerformance and DirectSQLPerformance that show tuning on a single node, access the Oracle SOA Sample Code site.

9.3.9 Performance Tuning

The Oracle Database Adapter is preconfigured with many performance optimizations. You can, however, make some changes to reduce the number of round trips to the database by implementing performance tuning.

For information about performance tuning, see:

9.3.10 detectOmissions Feature

The following are the features of the detectOmission feature:

Available Since

Release 10.1.3

Configurable

Yes

Default Value

Design Time: true, unless explicitly set to false

Use Case

Users may pass incomplete or partial XML to a merge, update, or insert; without this feature, every column left unspecified in the XML would be set to null in the database.

This feature allows a DBAdapter merge, insert, or update to differentiate between a null value and the absence of a value (omission) in XML documents. On a case-by-case basis, it determines which information in the XML is meaningful and which is not. In this way, the XML is seen as a partial representation of a database row, as opposed to a complete representation. The following table lists examples of null values and values that can be omitted.

Element Type   Omission                                   Null

Column         <director></director>                      <director xsi:nil="true" />
               <director />
               <!-- director>…</director -->

1-1            <!-- dept> … </dept -->                    <dept xsi:nil="true" />

1-M            <!-- empCollection>…</empCollection -->    <empCollection></empCollection> (empty)


Note:

The 1-1 representation <dept /> denotes an empty department object and should not be used. For 1-M, <empCollection /> actually means a collection of 0 elements and is considered a meaningful value. For columns, <director></director> is not considered an omission in cases where it represents an empty string.
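The omission-versus-null distinction can be sketched in a few lines of Python (a hypothetical illustration only, not the adapter's actual TopLink implementation): an absent element leaves the database value untouched, while an xsi:nil element maps to an explicit NULL.

```python
import xml.etree.ElementTree as ET

NIL = "{http://www.w3.org/2001/XMLSchema-instance}nil"

def row_changes(xml_text, columns):
    """Classify each known column as omitted, explicitly null, or set."""
    root = ET.fromstring(xml_text)
    changes = {}
    for col in columns:
        el = root.find(col)
        if el is None:
            changes[col] = ("omit", None)          # absent: leave DB value alone
        elif el.get(NIL) == "true":
            changes[col] = ("set", None)           # xsi:nil: explicit NULL
        else:
            changes[col] = ("set", el.text or "")  # value (possibly empty string)
    return changes

doc = """<Movies xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <title>Alien</title>
  <director xsi:nil="true"/>
</Movies>"""
print(row_changes(doc, ["title", "director", "genre"]))
```

Here only title and director would appear in the generated DML; genre, being omitted, would be left untouched on update or defaulted on insert.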

A value considered omitted is omitted from UPDATE or INSERT SQL. For an update operation, existing (meaningful) values on the database are not overwritten. For an insert operation, the default value on the database is used, as no explicit value is provided in the SQL string.

A DBAdapter receive is not able to produce XML with omissions, and makes use of xsi:nil="true". If you are unable to produce input XML with xsi:nil="true", or are concerned about the difference between <director /> and <director></director>, then it is best to set DetectOmissions="false" in the JCA file.

To treat all null values as omissions, check out the IgnoreNullsMerge sample, which comes with a custom TopLink plug-in. The plug-in works similarly to this feature, but cannot detect subtleties between null and omission.

To obtain the IgnoreNullsMerge sample code, access the Oracle SOA Sample Code site.

When you are expecting an update, you can improve performance by omitting 1-1 and 1-M relationships, because the merge operation can then skip considering the detail records completely.

Alternatively, map only those columns that you are interested in, and create separate mappings for different invokes. If two updates should update two different sets of columns, create two separate partner links.

Performance

By default, the Oracle Database Adapter assumes that input XML does not contain omissions. Until an XML with omissions is detected, there is no performance overhead. Once omissions are detected, a TopLink descriptor event listener is added. This event listener has some overhead, and every modifyRow about to become a SQL UPDATE or INSERT must be iterated over to check for omissions; hence, every column value sent to the database is checked. If the input XML has mostly omissions, then this cost should be more than compensated for by sending fewer values to the database.

Incompatible Interactions

DirectSQL="true" and DetectOmissions="true": DetectOmissions takes precedence. The following are some examples of incompatible interactions:

  • DetectOmissionsMerge

  • IgnoreNullsMerge

  • OptimizeMerge

Note:

For old BPEL projects that have been migrated, you must rerun the Database Adapter Wizard to regenerate the JCA file. When you rerun the Database Adapter Wizard, the DetectOmissions and OptimizeMerge options appear in the JCA file with the default values DetectOmissions="false" and OptimizeMerge="false".

See the following for more information:

You can also access the forums from Oracle Technology Network at

http://www.oracle.com/technology

9.3.11 OutputCompletedXml Feature

OutputCompletedXml is a feature of the outbound insert activity. The following are some of the features of the OutputCompletedXml feature:

Available Since

Release 10.1.2.0.2

Configurable

OutputCompletedXml appears in the JCA file only when its default value is true.

Default Value

It is true when TopLink sequencing is configured to assign primary keys on insert from a database sequence, otherwise it is false.

Issue

You can have primary keys auto-assigned on insert from a database sequence. However, the usefulness of this feature is diminished, because insert/merge have no output message, so there is no way to tell which primary keys were assigned.

Note:

After configuring sequencing (link), run the Adapter Configuration Wizard again so that the insert/merge WSDL operations can be regenerated with an output message, and WSDL property OutputCompletedXml="true".

Performance

An output XML is provided only when the output XML would be significantly different, so if TopLink sequencing is not used, this feature is disabled and there is no performance hit. Further, this feature can be explicitly disabled. The original input XML is updated and returned; a completely new XML is not built. Also, only a shallow update of the XML is performed; if primary keys were assigned to detail records, these are not reflected in the output XML.

Incompatible Interactions

DirectSQL="true" and "OutputCompletedXml" - OutputCompletedXml takes precedence.

9.3.12 QueryTimeout for Inbound and Outbound Transactions

You can configure QueryTimeout from the Adapter Configuration Wizard - Advanced Options page. This feature exposes the java.sql.Statement level property of the same name. Essentially, QueryTimeout allows you to configure a timeout on the call.

9.3.13 Doing Synchronous Post to BPEL (Allow In-Order Delivery)

In this feature, the entire invocation is in a single thread and global transaction. By default, initiation is asynchronous and the BPEL process is invoked in a separate global transaction. With Oracle Mediator, it is generally a synchronous invoke so this is only specific to an Oracle BPEL process.

To enable this feature, click the Do Synchronous Post to BPEL (Allow In-Order Delivery) option in the Adapter Configuration Wizard - Operation page.

9.4 Oracle Database Adapter Concepts

This section includes the following topics related to Oracle Database Adapter Concepts:

9.4.1 Relational-to-XML Mapping

This section includes the following topics related to Relational-to-XML mapping:

For a flat table or schema, the relational-to-XML mapping is easy to see. Each row in the table becomes a complex XML element. The value for each column becomes a text node in the XML element. Both column values and text elements are primitive types.

Table 9-3 shows the structure of the MOVIES table. This table is used in the use cases described in this chapter. See Oracle Database Adapter Use Cases for more information.

Table 9-3 MOVIES Table Description

Name            Null?      Type

TITLE           NOT NULL   VARCHAR2(50)
DIRECTOR        --         VARCHAR2(20)
STARRING        --         VARCHAR2(100)
SYNOPSIS        --         VARCHAR2(255)
GENRE           --         VARCHAR2(70)
RUN_TIME        --         NUMBER
RELEASE_DATE    --         DATE
RATED           --         VARCHAR2(6)
RATING          --         VARCHAR2(4)
VIEWER_RATING   --         VARCHAR2(5)
STATUS          --         VARCHAR2(11)
TOTAL_GROSS     --         NUMBER
DELETED         --         VARCHAR2(5)
SEQUENCENO      --         NUMBER
LAST_UPDATED    --         DATE


The corresponding XML schema definition (XSD) is as follows:

<?xml version = '1.0' encoding = 'UTF-8'?>
<xs:schema targetNamespace="http://xmlns.oracle.com/pcbpel/adapter/db/top
/ReadS1" xmlns="http://xmlns.oracle.com/pcbpel/
adapter/db/top/ReadS1" elementFormDefault=
"qualified" attributeFormDefault="qualified" 
xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="MoviesCollection" type="MoviesCollection"/>
  <xs:complexType name="MoviesCollection">
     <xs:sequence>
        <xs:element name="Movies" type="Movies" minOccurs="0" 
maxOccurs="unbounded"/>
     </xs:sequence>
  </xs:complexType>
  <xs:complexType name="Movies">
     <xs:sequence>
        <xs:element name="title">
           <xs:simpleType>
              <xs:restriction base="xs:string">
                 <xs:maxLength value="50"/>
              </xs:restriction>
           </xs:simpleType>
        </xs:element>
        <xs:element name="director" minOccurs="0" nillable="true">
           <xs:simpleType>
              <xs:restriction base="xs:string">
                 <xs:maxLength value="20"/>
              </xs:restriction>
           </xs:simpleType>
        </xs:element>
        <xs:element name="starring" minOccurs="0" nillable="true">
           <xs:simpleType>
              <xs:restriction base="xs:string">
                 <xs:maxLength value="100"/>
              </xs:restriction>
           </xs:simpleType>
        </xs:element>
        <xs:element name="synopsis" minOccurs="0" nillable="true">
           <xs:simpleType>
              <xs:restriction base="xs:string">
                 <xs:maxLength value="255"/>
              </xs:restriction>
           </xs:simpleType>
        </xs:element>
        <xs:element name="genre" minOccurs="0" nillable="true">
           <xs:simpleType>
              <xs:restriction base="xs:string">
                 <xs:maxLength value="70"/>
              </xs:restriction>
           </xs:simpleType>
        </xs:element>
        <xs:element name="runTime" type="xs:decimal" minOccurs="0"
 nillable="true"/>
        <xs:element name="releaseDate" type="xs:dateTime" minOccurs="0"
 nillable="true"/>
        <xs:element name="rated" minOccurs="0" nillable="true">
           <xs:simpleType>
              <xs:restriction base="xs:string">
                 <xs:maxLength value="6"/>
              </xs:restriction>
           </xs:simpleType>
        </xs:element>
        <xs:element name="rating" minOccurs="0" nillable="true">
           <xs:simpleType>
              <xs:restriction base="xs:string">
                 <xs:maxLength value="4"/>
              </xs:restriction>
           </xs:simpleType>
        </xs:element>
        <xs:element name="viewerRating" minOccurs="0" nillable="true">
           <xs:simpleType>
              <xs:restriction base="xs:string">
                 <xs:maxLength value="5"/>
              </xs:restriction>
           </xs:simpleType>
        </xs:element>
        <xs:element name="status" minOccurs="0" nillable="true">
           <xs:simpleType>
              <xs:restriction base="xs:string">
                 <xs:maxLength value="11"/>
              </xs:restriction>
           </xs:simpleType>
        </xs:element>
        <xs:element name="totalGross" type="xs:decimal" minOccurs="0"
 nillable="true"/>
        <xs:element name="deleted" minOccurs="0" nillable="true">
           <xs:simpleType>
              <xs:restriction base="xs:string">
                 <xs:maxLength value="5"/>
              </xs:restriction>
           </xs:simpleType>
        </xs:element>
        <xs:element name="sequenceno" type="xs:decimal" minOccurs="0"
 nillable="true"/>
        <xs:element name="lastUpdated" type="xs:dateTime" minOccurs="0"
 nillable="true"/>
     </xs:sequence>
  </xs:complexType>
</xs:schema>

As the preceding code example shows, MOVIES is not just a single CLOB or XMLTYPE column containing the entire XML string. Instead, it is an XML complexType comprising elements, each of which corresponds to a column in the MOVIES table. For flat tables, the relational-to-XML mapping is straightforward.
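As a rough sketch of this flat mapping (hypothetical helper names; the adapter itself generates the mapping through TopLink, not code like this), each row becomes one complex element whose children carry the column values as text nodes:

```python
import xml.etree.ElementTree as ET

def row_to_xml(table, row):
    """One row becomes a complex element; each column value a text child."""
    el = ET.Element(table.capitalize())
    for col, val in row.items():
        child = ET.SubElement(el, col.lower())
        if val is not None:
            child.text = str(val)      # primitive value as a text node
    return ET.tostring(el, encoding="unicode")

print(row_to_xml("MOVIES", {"TITLE": "Alien", "RUN_TIME": 117}))
# <Movies><title>Alien</title><run_time>117</run_time></Movies>
```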

Table 9-4 and Table 9-5 show the structure of the EMP and DEPT tables, respectively. These tables are used in the MasterDetail use case. See Oracle Database Adapter Use Cases for more information.

Table 9-4 EMP Table Description

Name       Null?      Type

EMPNO      NOT NULL   NUMBER(4)
ENAME      --         VARCHAR2(10)
JOB        --         VARCHAR2(9)
MGR        --         NUMBER(4)
HIREDATE   --         DATE
SAL        --         NUMBER(7,2)
COMM       --         NUMBER(7,2)
DEPTNO     --         NUMBER(2)


Table 9-5 DEPT Table Description

Name Null? Type

DEPTNO

NOT NULL

NUMBER(2)

DNAME

--

VARCHAR2(14)

LOC

--

VARCHAR2(13)


As the preceding table definitions show, and as is typical of a normalized relational schema, an employee's department number is not stored in the EMP table. Instead, one of the columns of EMP (DEPTNO) is a foreign key, which equals the primary key (DEPTNO) in DEPT.

However, the XML file equivalent has no similar notion of primary keys and foreign keys. Consequently, in the resulting XML file, the same data is represented in a hierarchy, thereby preserving the relationships by capturing the detail record embedded inside the master.

An XML element can contain elements that are either a primitive type (string, decimal), or a complex type, that is, another XML element. Therefore, an employee element can contain a department element.

The corresponding XML shows how the relationship is materialized, or shown inline. DEPTNO is removed from EMP, and instead you see the DEPT itself.

<EmpCollection>
  <Emp>
    <comm xsi:nil = "true" ></comm> 
    <empno >7369.0</empno>
    <ename >SMITH</ename>
    <hiredate >1980-12-17T00:00:00.000-08:00</hiredate>
    <job >CLERK</job>
    <mgr >7902.0</mgr>
    <sal >800.0</sal>
    <dept>
      <deptno >20.0</deptno>
      <dname >RESEARCH</dname>
      <loc >DALLAS</loc>
    </dept>
  </Emp>
    ...
</EmpCollection>

Materializing the relationship makes XML human readable and allows the data to be sent as one packet of information. No cycles are allowed in the XML file; therefore, an element cannot contain itself. This is handled automatically by the Oracle Database Adapter. However, you may see duplication (that is, the same XML detail record appearing more than once under different master records). For example, if a query returned two employees, both of whom work in the same department, then, in the returned XML, you see the same DEPT record inline in both the EMP records.

Therefore, when you import tables and map them as XML, it is recommended that you avoid excessive duplication, although the Oracle Database Adapter does not print an element inside itself. The Oracle Database Adapter prints the following:

<Emp>
  <name>Bob</name>
  <spouse> 
    <name>June</name>
  </spouse>
</Emp>

But not:

<Emp>
  <name>Bob</name>
  <spouse>
    <name>June</name> 
    <spouse>
      <name>Bob</name> 
      <spouse>
        ...
      </spouse> 
    </spouse> 
  </spouse> 
</Emp>

To avoid duplication, you can do the following:

  • Import fewer tables. If you import only EMP, then DEPT does not appear.

  • Remove the relationship between EMP and DEPT in the Adapter Configuration Wizard. This removes the relationship, but the foreign key column is put back.

In both these cases, the corresponding XML is as follows:

<EmpCollection> 
  <Emp>
    <comm xsi:nil = "true" ></comm>
    <empno >7369.0</empno>
    <ename >SMITH</ename> 
    <hiredate >1980-12-17T00:00:00.000-08:00</hiredate>
    <job >CLERK</job>
    <mgr >7902.0</mgr>
    <sal >800.0</sal> 
    <deptno >20.0</deptno>
  </Emp> 
   ...
</EmpCollection>

Either preceding solution is feasible only if returning the foreign key suffices, as opposed to getting back the complete detail record in its entirety.

9.4.1.1 Relational Types to XML Schema Types

Table 9-6 shows how database data types are converted to XML primitive types when you import tables from a database.

Table 9-6 Mapping Database Data Types to XML Primitive Types

Database Type                                               XML Type (Prefixed with xs:)

VARCHAR, VARCHAR2, CHAR, NCHAR, NVARCHAR, NVARCHAR2,        string
MEMO, TEXT, CHARACTER, CHARACTER VARYING, UNICHAR,
UNIVARCHAR, SYSNAME, NATIONAL CHARACTER, NATIONAL CHAR,
NATIONAL CHAR VARYING, NCHAR VARYING, LONG, CLOB, NCLOB,
LONGTEXT, LONGVARCHAR, NTEXT

BLOB, BINARY, IMAGE, LONGVARBINARY, LONG RAW, VARBINARY,    base64Binary
GRAPHIC, VARGRAPHIC, DBCLOB, BIT VARYING

BIT, NUMBER(1) DEFAULT 0, SMALLINT DEFAULT 0                boolean

TINYINT, BYTE                                               byte

SHORT, SMALLINT                                             short

INT, SERIAL                                                 int

INTEGER, BIGINT                                             integer

NUMBER, NUMERIC, DECIMAL, MONEY, SMALLMONEY,                decimal
UNIQUEIDENTIFIER

FLOAT, FLOAT16, FLOAT(16), FLOAT32, FLOAT(32), DOUBLE,      double
DOUBLE PRECIS, REAL

TIME, DATE, DATETIME, TIMESTAMP, TIMESTAMP(6),              dateTime
SMALLDATETIME, TIMESTAMPTZ, TIMESTAMPLTZ,
TIMESTAMP WITH TIME ZONE, TIMESTAMP WITH LOCAL TIME ZONE


Essentially, NUMBER maps to decimal, the most versatile XML data type for numbers; VARCHAR2 and CLOB map to string; BLOB maps to base64Binary (to meet the plain-text requirement); and the date types map to dateTime.

Any type not mentioned in this discussion defaults to java.lang.String and xs:string. Timestamp support is basic, because only the xs:dateTime format is supported. The BFILE type is specifically not supported.

Note:

The user-defined Object, Struct and VARRAY, and REF types are supported in 11g.

Because XML is plain text, BLOB and byte values are base 64/MIME encoded so that they can be passed as character data.
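The base-64 round trip for a small binary value can be seen with a standard library, shown here in Python as an illustration of why BLOB maps to base64Binary:

```python
import base64

blob = bytes([0, 255, 16, 32])                    # raw BLOB column bytes
encoded = base64.b64encode(blob).decode("ascii")  # plain text, safe inside XML
print(encoded)
decoded = base64.b64decode(encoded)               # lossless round trip
assert decoded == blob
```

Arbitrary bytes, including NUL and values with no character representation, survive the trip as ordinary character data.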

9.4.1.2 Mapping Any Relational Schema to Any XML Schema

The Oracle Database Adapter supports mapping any relational schema on any relational database to an XML schema, although not any XML schema of your choice, because the Adapter Configuration Wizard generates the XML schema with no explicit user control over the layout of the elements. You can control how you map the schema in both the Adapter Configuration Wizard and later in TopLink Workbench. By pairing the Oracle Database Adapter with a transformation step, you can map any relational schema to any XML schema.

9.4.1.3 Querying over Multiple Tables

When executing a SQL select statement against multiple related tables, there are three methods for building the SQL. These methods relate to how the detail records are pulled in when the query is against the master record:

The following sections outline these three methods and compare them. The methods differ only when selecting from multiple related tables; selecting rows from a single table raises no such issues.

9.4.1.3.1 Using Relationship Queries (TopLink Default)

Having selected a Master row, TopLink can always query separately to get all the details belonging to that Master table. These hidden queries (relationship queries) are cached in the TopLink metadata and must be prepared only once.

Consider the SQL statement in following sample scenario:

SELECT DIRECTOR, ..., VIEWER_RATING 
       FROM MOVIES 
WHERE RATING = 'A';

For each master, the SQL statement is as follows:

SELECT CRITIC, ..., TITLE
       FROM MOVIE_REVIEWS
WHERE (TITLE = ?)

It enables you to bring in all the data with 1 + n query executions, where n is the number of master rows returned by the first query.

This approach is safe but slow, as a large number of round trips to the database are required to pull in all the data.

To configure the relationship queries (TopLink default) approach, you must edit or_mappings.xml outside of JDeveloper and change the batch-reading element's value to false.
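Using an in-memory SQLite database as a stand-in (table names here are illustrative, echoing the guide's MOVIES example), the 1 + n query pattern can be counted explicitly:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE movies (title TEXT PRIMARY KEY, rating TEXT);
CREATE TABLE movie_reviews (title TEXT, critic TEXT);
INSERT INTO movies VALUES ('Alien', 'A'), ('Up', 'A'), ('Seven', 'B');
INSERT INTO movie_reviews VALUES ('Alien', 'Ebert'), ('Up', 'Kael');
""")

# 1 query for the master rows...
masters = con.execute("SELECT title FROM movies WHERE rating = 'A'").fetchall()
queries = 1
# ...plus 1 hidden relationship query per master row returned.
for (title,) in masters:
    con.execute("SELECT critic FROM movie_reviews WHERE title = ?",
                (title,)).fetchall()
    queries += 1
print(queries)  # 1 + n, with n = 2 masters here
```

Each extra query is a round trip, which is why this approach is safe but slow for large result sets.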

9.4.1.3.2 Twisting the Original Select (TopLink Batch-Attribute Reading)

This is a default feature that allows TopLink to alter the original SQL select statement to read all the details in a second select statement, as shown in the following example:

SELECT DIRECTOR, ..., VIEWER_RATING
FROM MOVIES
WHERE RATING = 'A'
SELECT DISTINCT t0.CRITIC, ..., t0.TITLE
FROM MOVIE_REVIEWS t0, MOVIES t1
WHERE ((t1.RATING = 'A') AND (t0.TITLE = t1.TITLE))

By considering the original select statement in pulling in the details, a total of two (1 + 1 = 2) query executions must be performed.

Advantages

Batch attribute reading has the following advantages:

  • All data read in two round trips to database

  • This is a default feature since the 10.1.2.0.2 release

Disadvantages

Batch attribute reading has the following disadvantages:

  • When using maxTransactionSize (on polling receive) or maxRows (on invoke select) to limit the number of rows loaded into memory at a time, these settings do not easily carry over to the batch attribute query. It is easier to work with a cursored result when there is only a single result set. (Multiple cursors can be used with difficulty, if the original query has an order by clause).

  • TopLink can alter a SQL statement, only when it is in a format it can understand. If you use the hybrid SQL approach and set custom SQL for the root select, then TopLink cannot interpret that SQL to build the batch select.

  • The DISTINCT clause is used on the batch query, to avoid returning the same detail twice if two masters happen to both point to it. The DISTINCT clause cannot be used when returning LOBs in the resultset.

Configuration

Configuration is on a per 1-1 or 1-M mapping basis. By default, all such mappings since the 10.1.2.0.2 release have this property set. To configure, edit or_mappings.xml outside JDeveloper and edit the <batch-reading> elements to true (default) or false.

9.4.1.3.3 Returning a Single Result Set (TopLink Joined-Attribute Reading)

The detail tables are outer-joined to the original SQL select statement, returning both master and detail in a single result set, as shown in the following example:

SELECT DISTINCT t1.DIRECTOR, ..., t1.VIEWER_RATING, t0.CRITIC, ..., t0.TITLE
FROM MOVIE_REVIEWS t0, MOVIES t1
WHERE ((t1.RATING = 'A') AND (t0.TITLE (+) = t1.TITLE))

This requires one query execution in total.

Advantages

The advantages include the following:

  • When using maxTransactionSize while polling, the benefits of dealing with a single cursor can be significant.

  • When following the hybrid SQL route and entering custom SQL statements, you only have to deal with a single SQL statement, whereas TopLink normally uses a series of additional hidden SQL statements to bring in related rows.

  • read consistency: Enables you to read all related rows at the same time, and not at different instances in time for the different tables.

  • Performance can be ideal as only a single round trip to the database is required, whereas batch attribute reading requires one for each table queried.

Disadvantages

There are some drawbacks, however, namely the cost of returning duplicate data. For example, consider reading the Master and Detail tables: Master has 100 columns in each row, and Detail has 2 columns in each row. Each Master row typically has 100 related Detail rows.

With one query per table, the result sets for the preceding example appear as follows:

Master
Column1   Column2   ...   Column100
Master1   ...

Detail
Column1   Column2
Detail1   ...
Detail2   ...
...
Detail100 ...

In this example, 300 column values are returned as shown:

(columns in master + columns in detail x details per master) =
(100 + 2 x 100) = 300

With one query for all tables, the result set appears, as shown in the following example:

Master                                             Detail

Column1 Column2 ... Column100                      Column1  Column2
Master1    ...                                     Detail1     ...
Master1    ...                                     Detail2     ...
Master1    ...                                     Detail100   ...

When there is one query for all tables, 10,200 column values are returned in a single result set, versus 300 in two result sets, as shown here:

((columns in master + columns in detail) x details per master) =
((          100     +            2     ) x       100         ) =  10,200

This can have a serious drain on network traffic and computation because 97 percent of the data returned is duplicate data. Also, if the master had two related tables, detail1 and detail2, and there were 100 of each per master, then the number of column values returned would be over 1 million per master row ((100 + 2 + 2) x 100 x 100 = 1,040,000).

In general, you can use the following simple formula to estimate the relative cost of returning all rows in a single result set:

        (Master columns + Detail1 columns + Detail2 columns + ... ) x 
                Detail1s per Master x 
                Detail2s per Master x ...
bloat = ___________________________________________________________

       (Master columns + Detail1 columns X Detail1s per Master + 
               Detail2 columns X Detail2s per Master + ...)

For 1-1 relationships, this value is always 1, and if in the same example each master had two columns only and the details had 100 columns instead, and each master had only 3 or 4 details each, then the bloat would be

               (2 + 100) x 4        408
bloat =        ____________  = ___________  ~=  1
               (2 + 100 x 4)        402
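The bloat estimate can be checked numerically with a small sketch (hypothetical helper; `details` is a list of (detail column count, details per master) pairs):

```python
def bloat(master_cols, details):
    """Ratio of column values in one joined result set vs. separate queries."""
    joined = master_cols + sum(cols for cols, _ in details)
    for _, per_master in details:
        joined *= per_master              # one joined row per detail combination
    separate = master_cols + sum(cols * n for cols, n in details)
    return joined / separate

print(bloat(100, [(2, 100)]))            # 10,200 / 300 = 34.0
print(round(bloat(2, [(100, 4)]), 2))    # 408 / 402, roughly 1
```

The first call reproduces the 10,200-versus-300 example above; the second shows that narrow masters with few wide details give a bloat near 1, where joined reading is clearly preferable.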

Another disadvantage is that this setting could distort the meaning of the maxRows setting on an outbound select.

Configuration

To configure, select Use Outer Joins to return a Single Result Set for both Master and Detail Tables on the Adapter Configuration Wizard - Define Selection Criteria page.

Note:

When you create a SQL query such as the following by using the TopLink Expression Builder, the result may not be as expected:

SELECT DISTINCT t1.TABLE1_ID, t1.COLUMN_A FROM TABLE2 t0, 
TABLE1 t1 WHERE ((t0.STATUS = 1) AND (t0.TABLE1_ID = t1.
TABLE1_ID))

The expected result for this query is that only table 1 rows, and the table 2 rows they own that have status = 1, are returned.

However, what this query actually translates to is "table 1's, where any of its table 2's have status = 1," resulting in the return of table 1's that match the selection criteria, and ALL of the table 2's they own, including those with other statuses, whether or not their statuses = 1. The DISTINCT keyword ensures that the table 1's are not repeated and that the join happens across table 2.

The misunderstanding arises from the way TopLink works. Through the Expression Builder, you can only specify selection criteria for table 1; you have no control over the table 2's they own, because this part is done automatically.

However, you can get the expected result by using either of the following two approaches:

1.) Query directly for table 2 using the selection criteria of status = 1, that is, do not go through table 1 and get the table 2's they own.

2.) Use direct (custom SQL), as shown in the following example:

SELECT TABLE1.TABLE1_ID, TABLE1.COLUMN_A, TABLE2.STATUS 
FROM TABLE2, TABLE1 WHERE TABLE2.STATUS=1 AND TABLE1.
TABLE1_ID = TABLE2.TABLE1_ID

9.4.1.3.4 Comparison of the Methods Used for Querying over Multiple Tables

Superficially, returning a single result set looks best (1 query), followed by batch attribute reading (altering the select statement: 2 queries), and finally by default relationship reading (n + 1 queries). However, there are several pitfalls to both of the more advanced options, as explained below:

Altering User-Defined SQL

If you specify custom/hybrid SQL, TopLink cannot alter that SQL string to build the details select. For this reason, you should avoid using hybrid SQL and build selects using the wizard's visual expression builder as often as possible.

Show Me the SQL

The additional queries executed by TopLink in both the default and the batch attribute reading cases can be somewhat of a mystery to users. For this reason, the raw SQL shown to users in the Adapter Configuration Wizard assumes returning a single result set, to make things clearer and also to improve manageability.

Returning Too Many Rows At Once

Databases can store vast quantities of information, and a common pitfall is select statements that return too much information at once. On a DBAdapter receive, a maxTransactionSize property can be set to limit the number of rows that are read from a cursored result set and processed in memory at a time. A similar maxRows setting exists for one-time invoke select statements. However, this setting is very risky.

9.4.2 SQL Operations as Web Services

After mapping a relational schema as XML, you must also map basic SQL operations as Web services. Each operation discussed in the following sections has a corresponding tutorial and a readme file. It is recommended that you start with these and try to run one or more as you read this section. As the tutorials demonstrate, some operations translate directly to the SQL equivalent, while others are more complex.

This section includes the following topics:

9.4.2.1 DML Operations

Data manipulation language (DML) operations align with basic SQL INSERT, UPDATE, DELETE, and SELECT operations, which are all mapped to Web service operations of the same name. The MERGE is either an INSERT or an UPDATE, based on the results of an existence check. A distinction is made between the data manipulation operations, called outbound writes, and the SELECT operations, called outbound reads. The connection between the Web service and the SQL for merge (the default for outbound write) and queryByExample is not as obvious as for basic SQL INSERT, UPDATE, and SELECT.

This section includes the following topics:

Merge

Merge first reads the corresponding records in the database, calculates any changes, and then performs a minimal update. INSERT, UPDATE, and MERGE make the most sense when you are thinking about a single row and a single table. However, your XML can contain complex types and map to multiple rows on multiple tables. Imagine a DEPT with many EMPs, each with an ADDRESS. In this case, you must calculate which of possibly many rows have changed and which to insert, update, or delete. If a specific row did not change, or only one field changed, then the DML calls are minimal.
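The read-then-write shape of merge can be sketched against an in-memory SQLite table (simplified to a single column and row; the adapter's actual merge is implemented by TopLink, not code like this):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE dept (deptno INTEGER PRIMARY KEY, dname TEXT)")

def merge(deptno, dname):
    # Read the corresponding record first...
    row = con.execute("SELECT dname FROM dept WHERE deptno = ?",
                      (deptno,)).fetchone()
    if row is None:                      # ...insert if it does not exist,
        con.execute("INSERT INTO dept VALUES (?, ?)", (deptno, dname))
        return "insert"
    if row[0] != dname:                  # ...update only if something changed,
        con.execute("UPDATE dept SET dname = ? WHERE deptno = ?",
                    (dname, deptno))
        return "update"
    return "no-op"                       # ...or issue no DML at all.

print(merge(20, "RESEARCH"), merge(20, "RESEARCH"), merge(20, "R&D"))
# insert no-op update
```

With master-detail XML, the same decision is made per row across all the mapped tables.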

queryByExample

Unlike the SELECT operation, queryByExample does not require selection criteria to be specified at design time. Instead, for each invoke, the selection criteria are inferred from an exemplary input XML record.

For instance, if the output xmlRecord is an employee record, and the input is a sample xmlRecord with lastName = "Smith", then on execution, all employees with a last name of Smith are returned.

A subset of queryByExample is to query by primary key, which can be implemented by passing in sample XML records where only the primary key attributes are set.

Use queryByExample when you do not want to create a query using the visual query builder and want the flexibility of allowing the input record to share the same XML schema as the output records.

The queryByExample operation is slightly less performant because a new SELECT must be prepared for each execution. This is because the attributes that are set in the example XML record can vary each time, and therefore the selection criteria vary.

Input xmlRecord:

<Employee>
      <id/>
      <lastName>Smith</lastName>
</Employee>

Output xmlRecord:

<EmployeeCollection>
      <Employee>
         <id>5</id>
         <lastName>Smith</lastName>
         ....
      </Employee>
      <Employee>
         <id>456</id>
         <lastName>Smith</lastName>
         ....
      </Employee>
</EmployeeCollection>
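A sketch of how selection criteria might be inferred from an example record (hypothetical helper; the adapter builds TopLink query objects rather than raw SQL strings):

```python
def query_by_example(table, example):
    """Build a SELECT whose WHERE clause uses only the set attributes."""
    criteria = {k: v for k, v in example.items() if v is not None}
    where = " AND ".join(f"{k} = ?" for k in criteria)
    sql = f"SELECT * FROM {table}" + (f" WHERE {where}" if where else "")
    return sql, list(criteria.values())

# Only lastName is set, so only lastName appears in the WHERE clause.
print(query_by_example("EMPLOYEE", {"id": None, "lastName": "Smith"}))
# ('SELECT * FROM EMPLOYEE WHERE lastName = ?', ['Smith'])
```

Because the set attributes can differ on every invoke, a new statement must be prepared each time, which is the source of the slight performance cost noted above.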

Use Cases for Outbound Invoke Operations

Outbound invoke operations are demonstrated in the following tutorial files:

  • Insert

  • Update

  • Delete

  • Merge

  • SelectAll

  • SelectAllByTitle

  • PureSQLSelect

  • QueryByExample

To obtain these files, access the Oracle SOA Sample Code site.

Use Cases for Pure SQL

A new option in 10.1.3.1 enables you to specify any arbitrary SQL string, and an XML representing the inputs and outputs to the SQL is generated. Pure SQL operations are demonstrated in the following tutorial files:

  • UpdateAll

  • SelectCount

  • SelectGroupBy

  • SelectStar

To obtain these files, access the Oracle SOA Sample Code site.

Advanced Use Cases for Outbound Invoke Operations

Advanced outbound invoke operations are demonstrated in the following tutorial files:

  • InsertWithClobs

  • XAInsert

  • NativeSequencingInsert

To obtain these files, access the Oracle SOA Sample Code site.

9.4.2.2 Polling Strategies

The inbound receive enables you to listen to and detect events and changes in the database, which in turn can be the initiators of a business process. This is not a one-time action, but rather an activation. A polling thread is started, which polls a database table for new rows or events.

Whenever a new row is inserted into the MOVIES table, the polling operation raises it to the SCA Run Time. The strategy is to poll every record once. The initial SELECT must be repeated over time, to receive the rows that exist at the start and all new rows as they are inserted over time. However, a new row, once read, is not likely to be deleted, and therefore could be read repeatedly with each poll.

The various ways to poll for events, called polling strategies, also known as after-read strategies or publish strategies, range from simple and intrusive to sophisticated and nonintrusive. Each strategy employs a different solution for the problem of what to do after reading a row or event so as not to pick it up again in the next polling interval. The simplest (and most intrusive) solution is to delete the row so that you do not query it again.
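The physical delete strategy reduces to "read, publish, delete", sketched here with SQLite (illustrative table and names; the real adapter also batches, orders, and transacts these operations):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE movies (title TEXT PRIMARY KEY, status TEXT)")
con.execute("INSERT INTO movies VALUES ('Alien', 'NEW'), ('Up', 'NEW')")

def poll_once(max_rows=10):
    """Read a batch of rows, then delete them so they are never re-read."""
    rows = con.execute("SELECT title FROM movies ORDER BY title LIMIT ?",
                       (max_rows,)).fetchall()
    for (title,) in rows:
        con.execute("DELETE FROM movies WHERE title = ?", (title,))
    return [t for (t,) in rows]

print(poll_once())  # ['Alien', 'Up']
print(poll_once())  # [] -- already consumed
```

Deleting the published rows is what makes the strategy intrusive: the source table is emptied as events are consumed.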

This section discusses the following polling operations that you can perform after the data is read from the database. This section also discusses the strategies and factors to help you determine which strategy to employ for a particular situation:

Delete the Row(s) that were Read

Choose this operation to employ the physical delete polling strategy. This operation polls the database table for records and deletes them after processing. Use this strategy to capture events related to INSERT operations; it cannot capture database events related to DELETE and UPDATE operations on the parent table, and it cannot be used to poll child table events. This strategy allows multiple adapter instances to go against the same source table, and there is zero data replication.
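In effect, each polling transaction under this strategy reduces to a SELECT followed by a DELETE. The following sketch is illustrative only: the table and column names are assumptions, and the Oracle Database Adapter generates the actual SQL from the imported table metadata.

```sql
-- Poll: read the rows currently in the source table
SELECT MOVIE_ID, TITLE FROM MOVIES;

-- After raising each row to the SCA run time, delete it
-- so that it is not picked up in the next polling interval
DELETE FROM MOVIES WHERE MOVIE_ID = ?;
```

Because the read and the delete are typically issued in the same transaction, a row is removed only after it has been raised successfully.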

Preconditions: You must have deletion privileges on the parent and associated child tables to use the delete polling strategy. Table 9-7 describes the requirements for using the delete polling strategy.

Table 9-7 Delete Polling Strategy Preconditions

Requirements Met         Conflicts With

Poll for inserts         No delete on source
Shallow delete           No updates on source
Cascading delete         Poll for updates
Minimal SQL              Poll for deletes
Zero data replication    Poll for child updates
Default                  --
Allows raw SQL           --
Concurrent polling       --


Note:

In Shallow delete and Cascading delete, the delete operation can be configured to delete the top-level row, to cascade all, or to cascade on a case-by-case basis.

Concurrent polling can be configured for both delete and logical delete polling strategies.

Configuration: You can configure the delete polling strategy to delete only the top-level (parent) rows and not the child rows, to cascade deletes to all child rows, or to cascade deletes determined on a case-by-case basis. You can configure the polling interval for performing an event publish at design time.

Delete Cascade Policy: The optional advanced configuration is to specify the cascade policy of the DELETE operation. For instance, after polling for an employee with an address and many phone numbers, the phone numbers are deleted because they are privately owned (default for one-to-many), but not the address (default for one-to-one). This can be altered by configuring or_mappings.xml, as in the following example:

<database-mapping>
   <attribute-name>orders</attribute-name>
   <reference-class>taxonomy.Order</reference-class>
   <is-private-owned>true</is-private-owned>
</database-mapping>

You can also configure the activation itself to delete only the top level (master row) or to delete everything.

A receive operation appears in an inbound JCA as follows:

<adapter-config name="dd" adapter="Database Adapter"
xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">
  <connection-factory location="eis/DB/Connection1" UIConnectionName="Connection1" adapterRef=""/>
  <endpoint-activation portType="dd_ptt" operation="receive">
    <activation-spec className="oracle.tip.adapter.db.DBActivationSpec">
      <property name="DescriptorName" value="dd.Emp"/>
      <property name="QueryName" value="ddSelect"/>
      <property name="MappingsMetaDataURL" value="dd-or-mappings.xml"/>
      <property name="PollingStrategy" value="LogicalDeletePollingStrategy"/>
      <property name="MarkReadColumn" value="STATUS"/>
      <property name="MarkReadValue" value="PROCESSED"/>
      <property name="MarkReservedValue" value="RESERVED-1"/>
      <property name="MarkUnreadValue" value="UNPROCESSED"/>
      <property name="PollingInterval" value="5"/>
      <property name="MaxRaiseSize" value="1"/>
      <property name="MaxTransactionSize" value="10"/>
      <property name="ReturnSingleResultSet" value="false"/>
    </activation-spec>
  </endpoint-activation>
</adapter-config>

Update a Field in the [Table_Name] Table (Logical Delete)

Choose this operation to employ the logical delete polling strategy. This strategy involves updating a special field on each row processed and updating the WHERE clause at run time to filter out processed rows. It mimics logical delete, wherein application rows are rarely deleted; instead, a status column such as isDeleted is set to true. The status column and the read value must be provided, but the modified WHERE clause and the post-read update are handled automatically by the Oracle Database Adapter.

Preconditions: You must have the logical delete privilege or a one-time alter schema (add column) privilege on the source table. Table 9-8 describes the requirements for using the logical delete polling strategy.

Table 9-8 Logical Delete Polling Strategy Preconditions

Requirements Met         Conflicts With

Poll for inserts         No updates on source
No delete on source      Poll for deletes
Minimal SQL              --
Zero data replication    --
Minimal configuration    --
Allows raw SQL           --
Poll for updates         --
Poll for child updates   --
Concurrent polling       --


Note:

The last three requirements are met as follows:

  • Poll for updates: By adding a trigger

  • Poll for child updates: By adding a trigger

  • Concurrent polling: By specifying additional mark unread and reserved values.

Configuration: The logical delete polling strategy requires minimal configuration. You must specify the mark read column and the value that indicates a processed record.

A receive operation appears in an inbound WSDL as follows:

<operation name="receive">
   <jca:operation
      ActivationSpec="oracle.tip.adapter.db.DBActivationSpec"
         …
      PollingStrategyName="LogicalDeletePollingStrategy"
      MarkReadField="STATUS"
      MarkReadValue="PROCESSED"

Given the configuration for logical delete, the Oracle Database Adapter appends the following WHERE clause to every polling query:

AND ((STATUS IS NULL) OR (STATUS <> 'PROCESSED'))

Database Configuration: A status column on the table being polled must exist. If it does not exist already, you can add one to an existing table.
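For example, a status column can be added with a simple ALTER TABLE. The table name, column name, and length below are illustrative; any updatable column that can hold the unread, reserved, and read values works:

```sql
-- Add a status column to hold the unread/reserved/read values
-- (table name, column name, and length are illustrative)
ALTER TABLE EMPLOYEE ADD (STATUS VARCHAR2(30));

-- Optionally default new rows to the unread value
-- (assumes 'UNPROCESSED' matches the configured MarkUnreadValue)
ALTER TABLE EMPLOYEE MODIFY (STATUS DEFAULT 'UNPROCESSED');
```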

Support for Polling for Updates: Given that rows are not deleted with each read, it is possible to repetitively read a row multiple times. You must add a trigger to reset the mark read field whenever a record is changed, as follows:

create trigger Employee_modified
before update on Employee
for each row
begin
   :new.STATUS := 'MODIFIED';
end;

Support for Concurrent Access Polling: Just as a single instance should never process an event more than once, the same applies to a collection of instances. Therefore, before processing a record, an instance must reserve that record with a unique value. Again, the status column can be used:

<operation name="receive">
   <jca:operation
      ActivationSpec="oracle.tip.adapter.db.DBActivationSpec"
         …
      PollingStrategyName="LogicalDeletePollingStrategy"
      MarkReadField="STATUS"
      MarkUnreadValue="UNPROCESSED"
      MarkReservedValue="RESERVED${IP-2}-${weblogic.Name-1}-${instance}"
      MarkReadValue="PROCESSED"

The polling queries then appear as shown in the following example:

Update EMPLOYEE set STATUS = 'RESERVED65-1-1' where (CRITERIA) AND (STATUS = 'UNPROCESSED');

Select … from EMPLOYEE where (CRITERIA) AND (STATUS = 'RESERVED65-1-1');

The after-read UPDATE is faster because it can update all reserved rows in a single statement:

Update EMPLOYEE set STATUS = 'PROCESSED' where (CRITERIA) AND (STATUS = 'RESERVED65-1-1');

Update a Sequencing Table

Choose this operation to employ the sequencing table: last-read ID strategy. This polling strategy uses a helper table to remember a sequence value. The source table is not modified; instead, the rows that have been read are recorded in a separate helper table. A sequence value of 1000, for example, means that every record with a sequence value less than 1000 has been processed. Because many tables have a counter field that is always increasing and maintained by triggers or the application, this strategy can often be used for noninvasive polling. No field on the processed rows must be modified by the Oracle Database Adapter.

Native sequencing with a preallocation size of 1 can ensure that rows are inserted with primary keys that are always increasing over time.

This strategy is also called a nondestructive delete because no updates are made to the source rows, and you can use a sequencing strategy such as the sequence field to order the rows in a sequence for processing. When the rows are ordered in a line, the Oracle Database Adapter knows which rows are processed and which are not with a single unit of information.
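Conceptually, each polling cycle under this strategy performs three steps. The following sketch uses the helper table and column names from the sample ActivationSpec later in this section; the actual SQL is generated by the Oracle Database Adapter:

```sql
-- 1. Look up the highest sequence value processed so far
SELECT LAST_READ_ID FROM PC_SEQUENCING
 WHERE TABLE_NAME = 'PERF_MASTER_IN';

-- 2. Read only the rows beyond that value, in order
SELECT * FROM PERF_MASTER_IN
 WHERE PK > :last_read_id
 ORDER BY PK ASC;

-- 3. Record the highest value just processed
UPDATE PC_SEQUENCING SET LAST_READ_ID = :max_pk
 WHERE TABLE_NAME = 'PERF_MASTER_IN';
```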

Preconditions: You must have a sequencing table, or the create table privilege on the source schema. The source table must have a column that is monotonically increasing with every INSERT (for example, an Oracle native sequenced primary key) or UPDATE (for example, a last-modified timestamp). Table 9-9 describes the requirements for using the sequencing polling strategy.

Table 9-9 Sequencing Polling Strategy Preconditions

Requirements Met         Conflicts With

Poll for inserts         Poll for deletes
Poll for updates         Allows raw SQL
No delete on source      Concurrent polling
No updates on source     --
One extra SQL select     --
Zero data replication    --
Moderate configuration   --
Poll for child updates   --


Configuration: A separate helper table must be defined. On the source table, you must specify which column is ever-increasing. The following sample ActivationSpec illustrates this configuration:

<adapter-config name="ReadS" adapter="Database Adapter"
xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">
  <connection-factory location="eis/DB/DBConnection1"
UIConnectionName="DBConnection1" adapterRef=""/>
  <endpoint-activation portType="ReadS_ptt" operation="receive">
    <activation-spec className="oracle.tip.adapter.db.DBActivationSpec">
      <property name="DescriptorName" value="ReadS.PerfMasterIn"/>
      <property name="QueryName" value="ReadSSelect"/>
      <property name="MappingsMetaDataURL" value="ReadS-or-mappings.xml"/>
      <property name="PollingStrategy" value="SequencingPollingStrategy"/>
      <property name="SequencingTable" value="PC_SEQUENCING"/>
      <property name="SequencingColumn" value="PK"/>
      <property name="SequencingTableKeyColumn" value="TABLE_NAME"/>
      <property name="SequencingTableValueColumn" value="LAST_READ_ID"/>
      <property name="SequencingTableKey" value="PERF_MASTER_IN"/>
      <property name="PollingInterval" value="60"/>
      <property name="MaxRaiseSize" value="1"/>
      <property name="MaxTransactionSize" value="10"/>
      <property name="ReturnSingleResultSet" value="false"/>
    </activation-spec>
  </endpoint-activation>
</adapter-config>

The sequencing field type can be excluded if it is actually a number.

Database Configuration: A sequencing table must be configured once for a given database. Multiple processes can share the same table. For example, a helper table that tracks a last-read date (for a timestamp-based sequencing column) can be created as follows:

CREATE TABLE SEQUENCING_HELPER 
(
TABLE_NAME VARCHAR2(32) NOT NULL,
LAST_READ_DATE DATE
)
;

Polling for Updates: In the preceding example, the polling is for new objects or updates, because every time an object is changed, the modified time is updated.

A sample trigger to set the modified time on every insert or update is as follows:

create trigger Employee_modified
before insert or update on Employee
for each row
begin
  :new.modified_date := sysdate;
end;

Using a Sequence Number: A sequence number can be used for either insert or update polling. Native sequencing returns monotonically increasing primary keys, when an increment by 1 is used. You can also use the sequence number of a materialized view log.
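For example, with Oracle native sequencing, a sequence created with an increment of 1 (and used with a preallocation size of 1) produces strictly increasing primary keys suitable for this strategy. The sequence name below is illustrative:

```sql
-- NOCACHE helps keep generated values in insertion order
CREATE SEQUENCE EMPLOYEE_SEQ START WITH 1 INCREMENT BY 1 NOCACHE;
```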

Update an External Sequencing Table on a Different Database

Choose this operation to employ the sequencing table: last updated strategy. This polling strategy involves using a helper table to remember a last_updated value. A last_updated value of 2005-01-01 12:45:01 000, for example, means that every record last updated at that time or earlier has been processed. Because many tables have rows with a last_updated or creation_time column maintained by triggers or the application, this strategy can often be used for noninvasive polling. Fields on the processed rows never require modification by the Oracle Database Adapter.

This strategy is also called a nondestructive delete because no updates are made to the source rows, and you can use a sequencing strategy such as the last_updated field to order the rows in a sequence for processing. When the rows are ordered in a line, the Oracle Database Adapter knows which rows are processed and which are not with a single unit of information.

See Update a Sequencing Table for information about preconditions and configuration.

Update a Sequencing File

This strategy works similarly to Update an External Sequencing Table on a Different Database, except that the control information is stored in a file instead of a table.

Control Table Strategy

Choose this operation to employ the control table polling strategy. This polling strategy involves using a control table to store the primary key of every row that has yet to be processed. With a natural join between the control table and the source table (by primary key), polling against the control table is practically the same as polling against the source table directly. However, an extra layer of indirection allows the following:

  • Destructive polling strategies such as the delete polling strategy can be applied to rows in the control table alone while shielding any rows in the source table.

  • Only rows that are meant to be processed have their primary key appear in the control table. You can use information that is not in the rows themselves to control which rows to process (a good WHERE clause may not be enough).

  • The entire row is not copied to a control table, and any structure under the source table, such as detail rows, can also be raised without copying.

Streams and materialized view logs make good control tables.

Preconditions: You must have the create/alter triggers privilege on the source table. Table 9-10 describes the requirements for using the control table polling strategy.

Table 9-10 Control Table Polling Strategy Preconditions

Requirements Met         Conflicts With

Poll for inserts         Advanced configuration: the native XML from the database has a control header, and triggers are required.
Poll for updates         Minimal data replication (primary keys are stored in the control table)
Poll for deletes         --
Poll for child updates   --
No delete on source      --
No updates on source     --
No extra SQL selects     --
Concurrent polling       --
Allows raw SQL           --
Auditing                 --


Using triggers, whenever a row is modified, an entry is added to a control table, containing the name of the master table, and the primary keys. At design time, the control table is defined to be the root table, with a one-to-one mapping to the master table, based on the matching primary keys. The control table can contain extra control information, such as a time stamp, and operation type (INSERT, UPDATE, and so on).
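For the department example used in the sample triggers later in this section, a minimal control table might be defined as follows. This is a sketch; as noted above, you can add extra control columns such as a time stamp or an operation type:

```sql
CREATE TABLE DEPT_CONTROL
(
DEPTNO NUMBER NOT NULL
);
```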

The delete polling strategy is useful with this setup. It is important to keep the control table small, and if the option shouldDeleteDetailRows="false" is used, then only the control rows are deleted, giving you a nondestructive delete (the DELETE is not cascaded to the real tables).

It is possible to reuse the same control table for multiple master tables. In TopLink, you can map the same table to multiple descriptors by mapping the control table as one abstract class with multiple children. Each child has a unique one-to-one mapping to a different master table. The advantage of this approach is that you can specify for each child a class indicator field and value so that you do not need an explicit WHERE clause for each polling query.

The following are sample triggers for polling for changes both to a department table and any of its child employee rows:

CREATE OR REPLACE TRIGGER EVENT_ON_DEPT 
   AFTER INSERT OR UPDATE ON DEPARTMENT 
   REFERENCING NEW AS newRow 
   FOR EACH ROW 
   DECLARE X NUMBER;
BEGIN
   SELECT COUNT(*) INTO X FROM DEPT_CONTROL WHERE (DEPTNO = :newRow.DEPTNO);
   IF X = 0 then
   insert into DEPT_CONTROL values (:newRow.DEPTNO);
   END IF;
END;
CREATE OR REPLACE TRIGGER EVENT_ON_EMPLOYEE
   AFTER INSERT OR UPDATE ON EMPLOYEE
   REFERENCING OLD AS oldRow NEW AS newRow
   FOR EACH ROW
   DECLARE X NUMBER;
BEGIN
   SELECT COUNT(*) INTO X FROM DEPT_CONTROL WHERE (DEPTNO = :newRow.DEPTNO);
   IF X = 0 then
   INSERT INTO DEPT_CONTROL VALUES (:newRow.DEPTNO);
   END IF;
   IF (:oldRow.DEPTNO <> :newRow.DEPTNO) THEN
      SELECT COUNT(*) INTO X FROM DEPT_CONTROL WHERE (DEPTNO = :oldRow.DEPTNO);
      IF (X = 0) THEN
         INSERT INTO DEPT_CONTROL VALUES (:oldRow.DEPTNO);
      END IF;
   END IF;
END;

Use Cases for Polling Strategies

Polling strategies are demonstrated in the following tutorials:

  • PollingLogicalDeleteStrategy

  • PollingLastUpdatedStrategy

  • PollingLastReadIdStrategy

  • PollingControlTableStrategy

  • MasterDetail (for the physical delete polling strategy)

To obtain these files, access the Oracle SOA Sample Code site.

Advanced Use Cases for Polling Strategies

Advanced polling strategies are demonstrated in the following tutorials:

  • DistributedPolling

  • PollingExternalSequencing

  • PollingFileSequencingStrategy

  • PollingForChildUpdates

  • PollingNoAfterReadStrategy

  • PollingOracleSCNStrategy

  • PollingPureSQLOtherTableInsert

  • PollingPureSQLSysdateLogicalDelete

  • PollingWithParameters

To obtain these files, access the Oracle SOA Sample Code site.

9.5 Deployment

The Oracle Database Adapter comes deployed to the application server as part of the install. It contains a single adapter instance entry, eis/DB/SOADemo, which points to the data source jdbc/SOADataSource. The connection information for the database is inside the data source definition.

When deploying a SOA project that uses the OracleAS Adapter for Databases, you might have to add a new adapter instance and restart the application server first. This could be because you want to point to a database other than the one referred to in jdbc/SOADataSource, or because you chose a name for the adapter instance that does not yet exist. For instance, if you create a connection in JDeveloper named Connection1, then by default the DB Adapter service points to eis/DB/Connection1, as shown in Figure 9-7.

You can also check which adapter instance the service is pointing to by looking at the db.jca file, as shown in the following code snippet:

<connection-factory location="eis/DB/Connection1" UIConnectionName="Connection1" 
adapterRef="" />

In the preceding example, the location is the JNDI name of the adapter instance at run time, and UIConnectionName is the name of the connection used in JDeveloper.

You can create a DB Adapter instance through the Oracle WebLogic Server Administration Console, as mentioned in Section 2.18, "Adding an Adapter Connection Factory," or by directly editing the weblogic-ra.xml file. Screenshots that show how to create an adapter instance through the Oracle WebLogic Server Administration Console follow these steps. The following are the steps to edit weblogic-ra.xml:

  1. Search fmwhome/ for DbAdapter.rar.

  2. Unzip the file.

  3. Edit META-INF/weblogic-ra.xml (and possibly ra.xml.)

  4. Jar the file again.

  5. Restart the application server.

The following is a sample adapter instance in weblogic-ra.xml:

<connection-instance>
  <jndi-name>eis/DB/SOADemo</jndi-name>
  <connection-properties>
    <properties>
      <property>
        <name>xADataSourceName</name>
        <value>jdbc/SOADataSource</value>
      </property>
      <property>
        <name>dataSourceName</name>
        <value> </value>
      </property>
      <property>
        <name>platformClassName</name>
        <value>Oracle10Platform</value>
      </property>
    </properties>
  </connection-properties>
</connection-instance>

The four mandatory properties are: jndi-name, xADataSourceName, dataSourceName, and platformClassName. The jndi-name property must match the location attribute in the db.jca file, and is the name of the adapter instance.

The xADataSourceName property is the name of the underlying data source (which has the connection information).

The platformClassName indicates which SQL to generate. For information about PlatformClassName, see Table 9-11, "Database Platform Names".

The following screenshots show how to edit Database Adapter properties using the Oracle WebLogic Server Administration Console.

The first screenshot shows navigation to the Outbound Connection Pools tab within the WebLogic Server Administration Console. This is the actual Database Adapter configuration, from which you can go to the subsequent page to edit the Database Adapter properties.

Figure 9-29 The Outbound Connection Pools Tab of the WebLogic Console

Description of Figure 9-29 follows
Description of "Figure 9-29 The Outbound Connection Pools Tab of the WebLogic Console"

The second screenshot shows the properties in the WebLogic Server Administration Console that you can edit as needed. Name, Type, and Value are displayed on a per-property basis.

Figure 9-30 Database Adapter Properties in the Oracle WebLogic Administration Console

Description of Figure 9-30 follows
Description of "Figure 9-30 Database Adapter Properties in the Oracle WebLogic Administration Console"

Most Common Mistakes

The following are the most common mistakes made with deployment:

  • Not creating an adapter instance entry that matches the location attribute in your db.jca file (or not creating one at all).

  • Setting the location attribute in the db.jca file to the name of the data source directly.

  • Not changing platformClassName when connecting to a database other than Oracle.

For the second mistake, note that there is a level of indirection: you give the name of the adapter instance (eis/DB/...), which itself points to the data source pool (jdbc/...). It is a common mistake to miss this indirection and give the jdbc/... name directly in the location attribute.

Data Source Configuration

For the relevant Data Source configuration for your application server, see Section 9.6, "JDBC Driver and Database Connection Configuration." When configuring an Oracle data source, ensure that you use the thin XA option.

Additional Adapter Instance Properties

This section briefly describes additional properties in the DB Adapter instance beyond xADataSourceName, dataSourceName, and platformClassName. When adding a property to weblogic-ra.xml, you must ensure that the property is also declared in ra.xml (also in DbAdapter.rar). For example, the following is a code snippet of the ra.xml file for the property xADataSourceName in weblogic-ra.xml:

<config-property>
<config-property-name>xADataSourceName</config-property-name>
<config-property-type>java.lang.String</config-property-type>
<config-property-value></config-property-value>
</config-property>

For information about the Oracle Database Adapter instance properties, see Appendix A, "Oracle Database Adapter Properties." Apart from the properties mentioned there, you can also add the properties listed in the following table:

Property Name                 Type

usesNativeSequencing          Boolean
usesSkipLocking               Boolean
usesStringBinding             Boolean
usesByteArrayBinding          Boolean
usesStreamsForBinding         Boolean
eventListenerClass            String
logTopLinkAll                 Boolean
maxBatchWritingSize           Integer
nonRetriableSQLErrorCodes     String
shouldOptimizeDataConversion  Boolean
shouldTrimStrings             Boolean
driverClassName               String
sequencePreallocationSize     Integer
tableQualifier                String
usesBatchWriting              Boolean
usesNativeSQL                 Boolean


The preceding properties appear in the oracle.toplink.sessions.DatabaseLogin object. See TopLink API reference information on DBConnectionFactory Javadoc and DatabaseLogin Javadoc at https://download.oracle.com/docs/cd/B10464_02/web.904/b10491/index.html.

9.5.1 Deployment with Third Party Databases

Table 9-11 lists databases and their advanced properties, which are database platform variables. Set platformClassName to the listed value. Setting platformClassName is mandatory if you are using advanced database features that are not uniform across databases, such as native sequencing or stored procedures.

As an example, to execute a stored procedure on DB2 versus SQL Server, the Oracle Database Adapter must generate and send different SQL. With the SQLServer platform, the call appears as follows:

execute <procedure> @<arg1>=? ...

With the DB2 platform, it appears as follows:

call <procedure>(?, ...)

The platformClassName setting indicates which SQL to generate. Because most databases offer nonuniform features (that is, variants on the ANSI SQL 92 language specification), it is safest to configure platformClassName accurately. The default value is Oracle10Platform; change it to the appropriate value if you are connecting to a different database vendor.

Note:

Providing the qualified class name with its package is not necessary if it starts with org.eclipse.persistence.platform.database.

Table 9-11 Database Platform Names

Database                    PlatformClassName

Oracle10+ (including 11g)   org.eclipse.persistence.platform.database.Oracle10Platform
Oracle9+ (optional)         org.eclipse.persistence.platform.database.Oracle9Platform
Oracle8                     org.eclipse.persistence.platform.database.Oracle8Platform
Oracle7                     org.eclipse.persistence.platform.database.OraclePlatform
DB2                         org.eclipse.persistence.platform.database.DB2Platform
DB2 on AS400e               oracle.tip.adapter.db.toplinkext.DB2AS400Platform
Informix                    org.eclipse.persistence.platform.database.InformixPlatform
SQLServer                   org.eclipse.persistence.platform.database.SQLServerPlatform
MySQL                       org.eclipse.persistence.platform.database.MySQLPlatform
Any other database          org.eclipse.persistence.platform.database.DatabasePlatform


9.6 JDBC Driver and Database Connection Configuration

In this release, Oracle JCA Adapters are certified against the following third-party databases using Oracle WebLogic Server Type 4 JDBC drivers:

  • Microsoft SQL Server 2005 and 2008 (all SP levels included)

  • Sybase 15

  • Informix 11.5

  • DB2 9.7 and later FixPaks

  • MySQL 5.x+

Note:

Only major databases and versions are certified. Working with other databases should be feasible when they provide a working JDBC driver and you rely on core ANSI SQL relational features, such as Create, Read, Update, and Delete (CRUD) operations on tables and views. Issues tend to arise because not all JDBC drivers implement database metadata introspection the same way. However, it should be possible to import matching tables on a certified database and then point to the uncertified database at run time. The information provided in this section for uncertified databases is meant as a guide only.

For more information, see the following topics:

9.6.1 Creating a Database Connection Using a Native or Bundled Oracle WebLogic Server JDBC Driver

To create a database connection when using a native or bundled Oracle WebLogic Server JDBC driver:

  1. Ensure that the appropriate JDBC driver JAR files are installed and set the class path.

    For more information, see Section 9.6.4, "Location of JDBC Driver JAR Files and Setting the Class Path".

  2. In the File menu, click New.

    The New Gallery page is displayed.

  3. In the All Technologies tab, under General categories, select Connections.

    A list of the different connections that you can make is displayed in the Items pane on the right side of the New Gallery page.

  4. Select Database Connection, and then click OK.

    The Create Database Connection page is displayed.

  5. For Create Connection In, select IDE Connections.

  6. Enter a name for this connection in the Connection Name field.

    For example, SQLServer.

  7. Select the appropriate driver from the Connection Type menu.

  8. Enter your credentials (such as user name, password, and role, if applicable).

  9. Enter your connection information.

    For example, jdbc:sqlserver://HOST-NAME:PORT;databaseName=DATABASE-NAME


  10. Click Test Connection.

  11. If the connection is successful, click OK.

9.6.2 Creating a Database Connection Using a Third-Party JDBC Driver

To create a database connection when using a third-party JDBC driver:

  1. Install the appropriate JDBC driver JAR files and set the class path.

    For more information, see Section 9.6.4, "Location of JDBC Driver JAR Files and Setting the Class Path".

  2. In the File menu, click New.

    The New Gallery page is displayed.

  3. In the All Technologies tab, under General categories, select Connections.

    A list of the different connections that you can make is displayed in the Items pane on the right side of the New Gallery page.

  4. Select Database Connection, and then click OK.

    The Create Database Connection page is displayed.

  5. For Create Connection In, select IDE Connections.

  6. Enter a name for this connection in the Connection Name field.

    For example, SQLServer.

  7. Select Generic JDBC from Connection Type.

  8. Enter your user name, password, and role information.

  9. Click New for Driver Class.

    The Register JDBC Driver dialog is displayed.

    Perform Steps 10, 11 and 19 in the Register JDBC Driver dialog.

  10. Enter the driver name (for example, some.jdbc.Driver) for Driver Class.

    For example, com.microsoft.sqlserver.jdbc.SQLServerDriver.

  11. Click Browse for Library.

    The Select Library dialog is displayed.

    Perform Steps 12 and 18 in the Select Library dialog.

  12. Click New to create a library.

    The Create Library dialog is displayed.

    Perform Steps 13 through 17 in the Create Library dialog.

  13. Specify a name in the Library Name field.

    For example, SQL Server JDBC.

  14. Click Class Path, and then click Add Entry to add each JAR file of your driver to the class path.

    The Select Path Entry dialog is displayed.

  15. Select a JDBC class file and click Select.

    For example, select sqljdbc.jar.

  16. Click OK when you have added all the class files to the Class Path field.

  17. Click OK to exit the Create Library dialog.

  18. Click OK to exit the Select Library dialog.

  19. Click OK to exit the Register JDBC Driver dialog.

  20. Enter your connection string name for JDBC URL and click Next.

    For example, jdbc:sqlserver://HOST-NAME:PORT;databaseName=DATABASE-NAME


  21. Click Test Connection.

  22. If the connection is successful, click OK.

9.6.3 Summary of Third-Party JDBC Driver and Database Connection Information

Table 9-12, "Database Driver Selection (from Weblogic Server Console)" summarizes the connection information for common third-party databases.

For information about PlatformClassName, see Table 9-11, "Database Platform Names".


Table 9-12 Database Driver Selection (from Weblogic Server Console)

Database JDBC Driver

Microsoft SQL Server

  • Oracle's MS SQL Server Driver (Type 4 XA)

  • Oracle's MS SQL Server Driver (Type 4)

Sybase

  • Oracle's Sybase Driver (Type 4 XA)

  • Oracle's Sybase Driver (Type 4)

Informix

  • Oracle's Informix Driver (Type 4 XA)

  • Oracle's Informix Driver (Type 4)

IBM DB2

  • Oracle's DB2 Driver (Type 4 XA)

  • Oracle's DB2 Driver (Type 4)

MySQL

MySQL's Driver (Type 4) Versions: using com.mysql.jdbc.Driver


9.6.3.1 Using a Microsoft SQL Server

You must note the following when connecting to a SQL Server database:

  • User name and password

    • SQL Server 2005 installs with Windows authentication as the default. Therefore, you do not log in with a user name and password; rather, your Windows user account either has privilege or does not. JDBC, however, requires you to provide a user name and password.

  • Connect string

    From the sqlcmd login, you can deduce what your connect string is, as in the following examples:

    Example 1:

    sqlcmd
    1>
    jdbc:sqlserver://localhost:1433
    

    Example 2:

    sqlcmd -S user.mycompany.com\SQLExpress
    1>
    jdbc:sqlserver://user.mycompany.com\SQLExpress:1433
    

    Example 3:

    sqlcmd -S user.mycompany.com\SQLExpress -d master
    1>
    jdbc:sqlserver://user.mycompany.com\SQLExpress:1433;databaseName=master
    

    A full URL is as follows:

    jdbc:sqlserver://serverName[\instanceName]:tcpPort[;SelectMethod=cursor][;databaseName=databaseName]
    
  • Database name

    If you must explicitly supply the database name, but do not know it, go to

    C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data
    

    If you see a file named master.mdf, then one database name is master.

  • TCP port

    Ensure that SQL Server Browser is running and that your SQL Server service has TCP/IP enabled and is listening on static port 1433. Disable dynamic ports. In SQL Native Client Configuration/Client Protocols, ensure that TCP/IP is enabled and that the default port is 1433.

  • JDBC drivers

    You must download the JDBC drivers separately. From www.microsoft.com, click Downloads and search on jdbc. You can also try using the DataDirect driver.
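The full URL pattern above can also be assembled programmatically. The following minimal sketch is illustrative only — the builder class is not part of the Oracle Database Adapter; only the URL format follows the pattern described above:

```java
// Minimal sketch: assemble a SQL Server JDBC URL from its parts.
// The builder class is illustrative and is not part of the Oracle Database
// Adapter; only the URL format follows the pattern shown above.
class SqlServerUrlBuilder {

    public static String build(String server, String instance, int tcpPort,
                               String databaseName) {
        StringBuilder url = new StringBuilder("jdbc:sqlserver://").append(server);
        if (instance != null) {
            url.append('\\').append(instance);   // optional named instance
        }
        url.append(':').append(tcpPort);         // static TCP port, typically 1433
        if (databaseName != null) {
            url.append(";databaseName=").append(databaseName);
        }
        return url.toString();
    }

    public static void main(String[] args) {
        System.out.println(build("localhost", null, 1433, "master"));
    }
}
```

The instance name and database name are both optional, matching the bracketed parts of the URL pattern.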

9.6.3.2 Using a Sybase Database

This section includes the following topics:

9.6.3.2.1 Using a Sybase JConnect JDBC Driver

URL: jdbc:weblogic:sybase://SERVER-NAME:PORT;databaseName=DATABASE-NAME

Driver Class: weblogic.jdbc.sybase.SybaseDriver

Driver Jar: jConnect-6_0\classes\jconn3.jar

For information about the Sybase JConnect JDBC driver, refer to the following link:

http://www.sybase.com/products/middleware/jconnectforjdbc

9.6.3.3 Using an Informix Database

This section includes the following topics:

9.6.3.3.1 Using an Informix JDBC Driver

URL: jdbc:informix-sqli://HOST-NAME-OR-IP:PORT-OR-SERVICE-NAME/DATABASE-NAME:INFORMIXSERVER=SERVER-NAME

Driver Class: com.informix.jdbc.IfxDriver

Driver Jar: ifxjdbc.jar

For information about the Informix JDBC driver, refer to the following link:

http://www-01.ibm.com/software/data/informix/tools/jdbc/.

9.6.3.4 Using an IBM DB2 Database

This section includes the following topics:

9.6.3.4.1 IBM DB2 Driver

URL: jdbc:db2://HOST-NAME:PORT/DATABASE-NAME

Driver Class: com.ibm.db2.jcc.DB2Driver

Driver Jar (v8.1): db2jcc.jar, db2jcc_javax.jar, db2jcc_license_cu.jar

For information about DataDirect driver, refer to the following link:

http://www.datadirect.com/techres/jdbcproddoc/index.ssp

9.6.3.4.2 JT400 Driver (AS400 DB2)

URL: jdbc:as400://hostname;translate binary=true

Driver Class: com.ibm.as400.access.AS400JDBCDriver

Driver Jar: jt400.jar

For correct character set translation, use translate binary=true.

9.6.3.4.3 IBM Universal Driver

URL: jdbc:db2://hostname:port/schemaname

Driver Class: com.ibm.db2.jcc.DB2Driver

Driver Jar: db2jcc.jar, db2jcc4.jar and db2java.zip

9.6.3.5 Using a MySQL Database

Use the following information:

URL: jdbc:mysql://hostname:3306/dbname

Driver Class: com.mysql.jdbc.Driver

Driver Jar: mysql-connector-java-3.1.10-bin.jar
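The vendor URL patterns from the preceding sections can be collected in one place for quick reference. The following sketch is illustrative only; HOST, PORT, DB, and SERVER are placeholders to substitute with your own values:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: the vendor URL patterns from the preceding sections collected in one
// place. HOST, PORT, DB, and SERVER are placeholders, not working values.
class JdbcUrlPatterns {

    public static Map<String, String> patterns() {
        Map<String, String> p = new LinkedHashMap<>();
        p.put("sqlserver", "jdbc:sqlserver://HOST:PORT;databaseName=DB");
        p.put("sybase",    "jdbc:weblogic:sybase://HOST:PORT;databaseName=DB");
        p.put("informix",  "jdbc:informix-sqli://HOST:PORT/DB:INFORMIXSERVER=SERVER");
        p.put("db2",       "jdbc:db2://HOST:PORT/DB");
        p.put("mysql",     "jdbc:mysql://HOST:3306/DB");
        return p;
    }

    public static void main(String[] args) {
        patterns().forEach((name, url) -> System.out.println(name + " -> " + url));
    }
}
```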

9.6.4 Location of JDBC Driver JAR Files and Setting the Class Path

This section describes the location of JDBC JAR files and setting the class path at run time and design time.

Run Time

For both Windows and Linux, you must perform the following steps:

  1. Copy the vendor-specific JDBC driver JAR files to the user_projects/domains/soainfra/lib directory.

  2. Copy the vendor-specific JDBC driver JAR files to the <WebLogic_Home>/server/lib directory.

  3. Edit the class path in <WebLogic_Home>/common/bin/commEnv.sh to include the vendor-specific JAR file.

Design Time

For both Windows and Linux, copy the JDBC driver JAR file to the Oracle/Middleware/jdeveloper/jdev/lib/patches directory.

9.7 Stored Procedure and Function Support

This section describes how the Oracle Database Adapter supports the use of stored procedures and functions.

This section includes the following topics:

9.7.1 Design Time: Using the Adapter Configuration Wizard

The Adapter Configuration Wizard – Stored Procedures is used to generate an adapter service WSDL and the necessary XSD. The adapter service WSDL encapsulates the underlying stored procedure or function as a Web service with a WSIF JCA binding. The XSD file describes the procedure or function, including all the parameters and their types. This XSD provides the definition used to create instance XML that is submitted to the Oracle Database Adapter at run time.

This section includes the following topics:

9.7.1.1 Using Top-Level Standalone APIs

This section describes how to use the Adapter Configuration Wizard with APIs that are not defined in PL/SQL packages. You use the Adapter Configuration Wizard – Stored Procedures to select a procedure or function and generate the XSD file. See Section 9.8, "Oracle Database Adapter Use Cases" if you are not familiar with how to start the Adapter Configuration Wizard.

The following are the steps to select a stored procedure or function by using the Adapter Configuration Wizard:

  1. Drag and drop Database Adapter from the Service Adapters list to the Exposed Services swim lane in the composite.xml page.

    The Adapter Configuration Wizard is displayed, as shown in Figure 9-31.

    Figure 9-31 The Adapter Configuration Wizard

    Description of Figure 9-31 follows
    Description of "Figure 9-31 The Adapter Configuration Wizard"

  2. Click Next. The Service Name page is displayed, as shown in Figure 9-32.

    Note:

    The name of a stored procedure or package that refers to database or user-defined data types must not include the $ character. The presence of $ in the name causes XSD file generation to fail.

    Figure 9-32 Specifying the Service Name

    Description of Figure 9-32 follows
    Description of "Figure 9-32 Specifying the Service Name"

  3. In the Service Name field, enter a service name, and then click Next. The Service Connection page is displayed.

    You associate a connection with the service, as shown in Figure 9-33. A database connection is required to configure the adapter service. Select an existing connection from the list or create a new connection.

    Figure 9-33 Setting the Database Connection in the Adapter Configuration Wizard

    Description of Figure 9-33 follows
    Description of "Figure 9-33 Setting the Database Connection in the Adapter Configuration Wizard"

  4. Click Next. The Operation Type page is displayed.

  5. For the Operation Type, select Call a Stored Procedure or Function, as shown in Figure 9-34.

    Figure 9-34 Calling a Stored Procedure or Function in the Adapter Configuration Wizard

    Description of Figure 9-34 follows
    Description of "Figure 9-34 Calling a Stored Procedure or Function in the Adapter Configuration Wizard"

  6. Click Next. The Specify Stored Procedure page is displayed, as shown in Figure 9-35. This is where you specify a stored procedure or function.

    Figure 9-35 The Specify Stored Procedure Page

    Description of Figure 9-35 follows
    Description of "Figure 9-35 The Specify Stored Procedure Page"

  7. Next, you select the schema and procedure or function. You can select a schema from the list or select <Default Schema>, in which case the schema associated with the connection is used. If you know the procedure name, enter it in the Procedure field. If the procedure is defined inside a package, then you must include the package name, as in EMPLOYEE.GET_NAME.

    If you do not know the schema and procedure names, click Browse to access the Stored Procedures window, as shown in Figure 9-36.

    Figure 9-36 Searching for a Procedure or Function

    Description of Figure 9-36 follows
    Description of "Figure 9-36 Searching for a Procedure or Function"

    Select a schema from the list or select <Default Schema>. A list of the available procedures is displayed in the left window. To search for a particular API in a long list of APIs, enter search criteria in the Search field. For example, to find all APIs that begin with XX, enter XX% and click the Search button. Clicking the Show All button displays all available APIs.

    Figure 9-37 shows how you can select the FACTORIAL function. The Arguments tab displays the parameters of the function, including their names, types, modes (IN, IN/OUT, or OUT), and the numeric positions of the parameters in the definition of the procedure. The return value of a function has no name and is always an OUT parameter at position zero (0).

    Figure 9-37 Viewing the Arguments of a Selected Procedure

    Description of Figure 9-37 follows
    Description of "Figure 9-37 Viewing the Arguments of a Selected Procedure"

    Figure 9-38 shows how the Source tab displays the code that implements the function. Text that matches the name of the function is highlighted.

    Figure 9-38 Viewing the Source Code of a Selected Procedure

    Description of Figure 9-38 follows
    Description of "Figure 9-38 Viewing the Source Code of a Selected Procedure"

  8. Click OK after selecting a procedure or function. Information about the API is displayed, as shown in Figure 9-39. Click Back or Browse to make revisions.

    Figure 9-39 Viewing Procedure or Function Details in the Adapter Configuration Wizard

    Description of Figure 9-39 follows
    Description of "Figure 9-39 Viewing Procedure or Function Details in the Adapter Configuration Wizard"

  9. Click Next. If the stored procedure or function has an output parameter of type row set (REF CURSOR on Oracle Database), as Figure 9-40 shows, you can define a strongly or weakly typed XSD for this ref cursor.

    Figure 9-40 Viewing Procedure or Function Details in the Adapter Configuration Wizard: Row Set Type

    Description of Figure 9-40 follows
    Description of "Figure 9-40 Viewing Procedure or Function Details in the Adapter Configuration Wizard: Row Set Type"

  10. Click Next. The Advanced Options page is displayed, as shown in Figure 9-41. Enter any advanced options, such as the JDBC QueryTimeout value. Other options include retry parameters, such as the number of retry attempts and the interval between them.

    Figure 9-41 The Advanced Options Page

    Description of Figure 9-41 follows
    Description of "Figure 9-41 The Advanced Options Page"

  11. After specifying all options, click Next, and then click Finish to complete the Adapter Configuration Wizard.

    When you have finished using the Adapter Configuration Wizard, three files are added to the existing project:

    • servicename.wsdl (for example, Factorial.wsdl)

    • service_name_db.jca (for example, Factorial_db.jca)

    • schema_package_procedurename.xsd (for example, SCOTT_FACTORIAL.xsd)

9.7.1.2 Using Packaged APIs and Overloading

Using APIs defined in packages is similar to using standalone APIs. The only difference is that you can expand the package name to see a list of all the APIs defined within the package, as shown in Figure 9-42.

APIs that have the same name but different parameters are called overloaded APIs. As shown in Figure 9-42, the package called PACKAGE has two overloaded procedures called OVERLOAD.

Figure 9-42 A Package with Two Overloaded Procedures

Description of Figure 9-42 follows
Description of "Figure 9-42 A Package with Two Overloaded Procedures"

As Figure 9-43 shows, the code for the entire PL/SQL package is displayed, regardless of which API from the package is selected when you view the Source tab. Text that matches the name of the procedure is highlighted.

Figure 9-43 Viewing the Source Code of an Overloaded Procedure

Description of Figure 9-43 follows
Description of "Figure 9-43 Viewing the Source Code of an Overloaded Procedure"

After you select a procedure or function and click OK, information about the API is displayed, as shown in Figure 9-44. The schema, procedure name, and a list of arguments are displayed. Note how the procedure name is qualified with the name of the package (PACKAGE.OVERLOAD). Click Back or Browse to make revisions, or click Next to continue. Enter values for any of the advanced options, click Next, and then click Finish to complete the wizard.

Figure 9-44 Viewing Procedure or Function Details in the Adapter Configuration Wizard

Description of Figure 9-44 follows
Description of "Figure 9-44 Viewing Procedure or Function Details in the Adapter Configuration Wizard"

When you have finished using the Adapter Configuration Wizard, the following files are added to the existing project:

  • Overload.wsdl, Overload_db.jca

  • SCOTT_PACKAGE_OVERLOAD_2.xsd

    The numeric index _2 appended to the procedure name in the XSD file name differentiates between the overloaded APIs.
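The file-naming rule can be sketched as follows. This helper is illustrative, inferred from the generated names shown in this section (SCOTT_FACTORIAL.xsd and SCOTT_PACKAGE_OVERLOAD_2.xsd), and is not part of the adapter:

```java
// Sketch: how the generated XSD file name is composed from the schema, optional
// package, procedure name, and optional overload index. The rule is inferred
// from the generated names in this section; this helper is not part of the adapter.
class XsdFileName {

    public static String compose(String schema, String pkg, String procedure,
                                 int overloadIndex) {
        StringBuilder name = new StringBuilder(schema);
        if (pkg != null) {
            name.append('_').append(pkg);
        }
        name.append('_').append(procedure);
        if (overloadIndex > 0) {
            name.append('_').append(overloadIndex); // differentiates overloads
        }
        return name.append(".xsd").toString();
    }

    public static void main(String[] args) {
        System.out.println(compose("SCOTT", null, "FACTORIAL", 0));
        System.out.println(compose("SCOTT", "PACKAGE", "OVERLOAD", 2));
    }
}
```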

9.7.2 Supported Third-Party Databases

For stored procedures, the following databases are supported: Oracle, IBM DB2, Informix Dynamic Server, MySQL, Microsoft SQL Server, and Sybase Adaptive Server Enterprise. Contact Oracle Support for the specific versions that have been certified. Versions more recent than those listed are generally also supported.

For more information on Oracle JCA Adapters support for third-party JDBC drivers and databases, see Section 9.6, "JDBC Driver and Database Connection Configuration".

This section includes the following topics:

9.7.2.1 Terms Used

ProductName

This is the name of the database.

  • IBM DB2: IBM DB2 v9.x

  • Microsoft SQL Server: SQL Server 2000 or 2005

  • MySQL: MySQL v5.6


DriverClassName

This is the name of the JDBC Driver Class.

  • IBM DB2: com.ibm.db2.jcc.DB2Driver

  • Microsoft SQL Server: com.microsoft.sqlserver.jdbc.SQLServerDriver

  • MySQL: com.mysql.jdbc.Driver


ConnectionString

This is the JDBC Connection URL.

  • IBM DB2: jdbc:db2://hostname:port/database-name

  • Microsoft SQL Server: jdbc:sqlserver://hostname:port;databaseName=name

  • MySQL: jdbc:mysql://host:port/database-name


Username

This is the database user name.

Password

This is the password associated with the user name.

ProcedureName

This is the name of the stored procedure or the function.

ServiceName

This is the service name for the desired operation.

DatabaseConnection

This is the JNDI name of the connection. For example, eis/DB/<DatabaseConnection>.

Destination

This is the destination directory for the generated files. For example, C:\Temp.

Parameters

The parameters of the stored procedure (for versions of MySQL before v5.6 only).

QueryTimeout

The JDBC query timeout value (in seconds). The QueryTimeout property specifies the maximum number of seconds that the JDBC driver waits for the specified stored procedure or function to execute. When the threshold is exceeded, a SQLException is thrown. If the value is zero, then the driver waits indefinitely.

9.7.2.2 Supported Third-Party Databases

The Adapter Configuration Wizard supports Oracle Database, IBM DB2, AS/400, Microsoft SQL Server, and MySQL v5.6 or later.

This section includes the following topics:

9.7.2.2.1 Microsoft SQL Server

Table 9-13 lists the supported data types for SQL Server stored procedures and functions:

Table 9-13 Data Types for SQL Server Stored Procedures and Functions

  • BIGINT: long

  • BINARY, IMAGE, TIMESTAMP, VARBINARY: base64Binary

  • BIT: boolean

  • CHAR, SQL_VARIANT, SYSNAME, TEXT, UNIQUEIDENTIFIER, VARCHAR, XML (2005 only): string

  • DATETIME, SMALLDATETIME: dateTime

  • DECIMAL, MONEY, NUMERIC, SMALLMONEY: decimal

  • FLOAT, REAL: float

  • INT: int

  • SMALLINT: short

  • TINYINT: unsignedByte


Besides the data types mentioned in the preceding table, alias data types are also supported. Alias data types are created by using the sp_addtype database engine stored procedure or the CREATE TYPE Transact-SQL statement (SQL Server 2005 only). The Transact-SQL statement is the preferred method for creating alias data types; sp_addtype is deprecated.
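For example, an alias data type can be created with either mechanism. In the following sketch, the type name ssn is illustrative:

```
-- Preferred (SQL Server 2005): Transact-SQL CREATE TYPE
CREATE TYPE ssn FROM VARCHAR(11) NOT NULL;

-- Deprecated alternative: the sp_addtype stored procedure
EXEC sp_addtype ssn, 'VARCHAR(11)', 'NOT NULL';
```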

9.7.2.2.2 DB2 Data Types

Table 9-14 lists the supported data types for DB2 SQL stored procedures:

Table 9-14 Data Types for DB2 SQL Stored Procedures

  • BIGINT: long

  • BLOB, CHAR FOR BIT DATA, VARCHAR FOR BIT DATA: base64Binary

  • CHARACTER, CLOB, VARCHAR: string

  • DATE, TIME, TIMESTAMP: dateTime

  • DECIMAL: decimal

  • DOUBLE: double

  • INTEGER: int

  • REAL: float

  • SMALLINT: short


The names of other data types are also supported implicitly. For example, NUMERIC is equivalent to DECIMAL (as are DEC and NUM).

IBM DB2 supports structured data types (user-defined). However, there is no support for these types in the JDBC drivers. Consequently, a structured data type may not be used as the data type of a parameter in a stored procedure. IBM DB2 also supports user-defined functions. The adapter, however, does not support these functions.

In the Adapter Configuration Wizard, stored procedures are grouped by database user. A schema in IBM DB2 is equivalent to a schema in Oracle. Both represent the name of a database user.

For IBM DB2, <Default Schema> refers to the current database user.

Click <Default Schema> to select a different database user. The stored procedures in the Browse page are those that the database user created in the database specified as <database> in the JDBC Connection URL.

The Adapter Configuration Wizard does not support changing to a different database.

Select the stored procedure in the Stored Procedures dialog, as shown in Figure 9-45. The arguments are shown in the Arguments tab. Click Search to find database stored procedures that the user created in the specified database. For example, 'd%' or 'D%' would both find the DEMO stored procedure. Clicking Show All reveals all of the procedures that the current user created in the specified database.

Figure 9-45 The Stored Procedures Dialog

Description of Figure 9-45 follows
Description of "Figure 9-45 The Stored Procedures Dialog"

You can view the source code of the stored procedure by clicking the Source tab, as shown in Figure 9-46.

Figure 9-46 The Source Code of the Stored Procedure

Description of Figure 9-46 follows
Description of "Figure 9-46 The Source Code of the Stored Procedure"

9.7.2.2.3 IBM DB2 AS/400

Table 9-15 lists the supported data types for IBM DB2 AS/400 stored procedures:

Table 9-15 Data Types for IBM DB2 AS/400 Stored Procedures

  • BINARY, BINARY LARGE OBJECT, BINARY VARYING: base64Binary

  • CHARACTER, CHARACTER LARGE OBJECT, CHARACTER VARYING: string

  • DATE, TIME, TIMESTAMP: dateTime

  • DECIMAL, NUMERIC: decimal

  • DOUBLE PRECISION: double

  • BIGINT: long

  • INTEGER: int

  • REAL: float

  • SMALLINT: short


Distinct types are also supported for data types that are created using the CREATE DISTINCT TYPE statement. These data types work in the same way as they do in IBM DB2.

The IBM DB2 AS/400 implementation is based on queries from catalog tables in the QSYS2 schema. The adapter tries to determine whether the QSYS2.SCHEMATA table exists. If it does, then the Adapter Configuration Wizard queries tables in the QSYS2 schema. Therefore, if your IBM DB2 AS/400 database supports the QSYS2 schema, then the Adapter Configuration Wizard and the adapter run time should both work.

The Adapter Configuration Wizard checks the SYSCAT schema first, and then the QSYS2 schema. The adapter does not support the catalog tables in the SYSIBM schema.

9.7.2.2.4 MySQL

Use the Adapter Configuration Wizard to access stored procedures on MySQL v5.6 or later using catalog tables in the INFORMATION_SCHEMA schema. Versions of MySQL before v5.6 lack a PARAMETERS table in the INFORMATION_SCHEMA schema.
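You can verify that parameter metadata is available by querying the catalog directly. The following sketch assumes a placeholder database name mydb and the demo procedure shown later in this section:

```
-- Check that parameter metadata exists for a procedure (MySQL v5.6 or later).
SELECT PARAMETER_MODE, PARAMETER_NAME, DTD_IDENTIFIER
FROM   INFORMATION_SCHEMA.PARAMETERS
WHERE  SPECIFIC_SCHEMA = 'mydb'      -- placeholder database name
AND    SPECIFIC_NAME   = 'demo'
ORDER BY ORDINAL_POSITION;
```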

Without a PARAMETERS table, the MySQL database does not provide any information about the parameters of a stored procedure. It is therefore necessary to supply this information using a required property in the properties file. The Parameters property contains the signature of the stored procedure.

  • IsFunction: Determines whether the API is a function or a procedure

  • SchemaName: The name of the database where the API is defined

  • Parameters: The parameters of the stored procedure


The value of the Parameters property is a comma-delimited list of parameters, each of which has the following syntax:

Parameter ::= {IN | INOUT | OUT} Parameter_Name SQL_Datatype

All three elements of a parameter definition are required.

Consider the following MySQL stored procedure:

CREATE PROCEDURE demo
(IN x VARCHAR (10), INOUT y INT, OUT z CHAR (20))
BEGIN
...
END

The Parameters property must be specified as shown in the following example:

Parameters=IN x VARCHAR (10), INOUT y INT, OUT z CHAR (20)

The generated XSD for the stored procedure is invalid unless the parameters are specified correctly in the Parameters property. The following is a sample properties file for MySQL:

ProductName=MySQL
DriverClassName=com.mysql.jdbc.Driver
ConnectionString=jdbc:mysql://<host>:<port>/<database>
Username=<username>
Password=<password>
SchemaName=<database>
ProcedureName=demo
Parameters=IN x VARCHAR(10),INOUT y INT,OUT z CHAR(20)
ServiceName=MySQLDemoService
DatabaseConnection=mysql

Note:

For MySQL, the SchemaName, Parameters, and IsFunction properties are all required properties.
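For illustration, the comma-delimited Parameters value can be split into its (mode, name, SQL type) pieces as in the following sketch. This helper is not part of the adapter, and it assumes no SQL type itself contains a comma (a type such as DECIMAL(10,2) would need a smarter split):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: split a Parameters property value into (mode, name, SQL type) triples.
// Illustrative only; it assumes no SQL type itself contains a comma.
class ParametersProperty {

    public static List<String[]> parse(String parameters) {
        List<String[]> result = new ArrayList<>();
        for (String param : parameters.split(",")) {
            // first token is the mode, second the name, the remainder the SQL type
            result.add(param.trim().split("\\s+", 3));
        }
        return result;
    }

    public static void main(String[] args) {
        for (String[] p : parse("IN x VARCHAR (10), INOUT y INT, OUT z CHAR (20)")) {
            System.out.println(p[0] + " | " + p[1] + " | " + p[2]);
        }
    }
}
```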

Table 9-16 lists the supported data types for MySQL stored procedures:

Table 9-16 Data Types for MySQL Stored Procedures

  • BINARY, BLOB, LONGBLOB, MEDIUMBLOB, TINYBLOB, VARBINARY: base64Binary

  • BOOLEAN: boolean

  • CHAR, LONGTEXT, MEDIUMTEXT, TEXT, TINYTEXT, VARCHAR: string

  • DATE, DATETIME, TIMESTAMP: dateTime

  • DECIMAL, NUMERIC, REAL: decimal

  • DOUBLE: double

  • FLOAT: float

  • TINYINT: byte

  • TINYINT UNSIGNED: unsigned_byte

  • SMALLINT: short

  • SMALLINT UNSIGNED: unsigned_short

  • INTEGER, INT, MEDIUMINT: int

  • INTEGER UNSIGNED, INT UNSIGNED, MEDIUMINT UNSIGNED: unsigned_int

  • BIGINT: long

  • BIGINT UNSIGNED: unsigned_long


The character length of any SQL data type that maps to the string XML schema type can be specified using the '(#)' notation in the Parameters property, for example, VARCHAR (20). Specifying the length of any other SQL data type has no effect.

UNSIGNED integer data types are treated as though they were SIGNED integer data types when using the Adapter Configuration Wizard.

Stored procedures in MySQL are grouped by database specified by <database> in the JDBC Connection URL. For MySQL, <Default Schema> refers to the database that the user is connected to (usually specified in the JDBC connection URL.) Click <Default Schema> to select a different database. Click Search to search for specific stored procedures in the current database specified in the JDBC Connection URL. For example, 'd%' or 'D%' would both find stored procedures beginning with 'd' or 'D.' Click Show All to reveal all procedures in the current database.

9.7.2.3 Creating Database Connections

Database connections must be created in JDeveloper to access catalog tables necessary for the Adapter Configuration Wizard to work.

The following are the steps to create a database connection by using JDeveloper:

  1. From the View menu, select Database Navigator.

  2. Right-click the application name, then click New followed by Connections. Select Database Connection.

    The Create Database Connection page is displayed, as shown in Figure 9-47.

    Figure 9-47 The Create Database Connection

    Description of Figure 9-47 follows
    Description of "Figure 9-47 The Create Database Connection"

  3. Enter a connection name in the Connection Name field. For example, sqlserver.

  4. Select Generic JDBC as the Connection Type from the Connection Type list.

  5. Enter your Username, Password, and role information.

  6. Click New for Driver Class. The Register JDBC Driver dialog is displayed, as shown in Figure 9-48.

    Figure 9-48 The Register JDBC Driver Dialog

    Description of Figure 9-48 follows
    Description of "Figure 9-48 The Register JDBC Driver Dialog"

  7. Enter the Driver Class (for example, com.microsoft.sqlserver.jdbc.SQLServerDriver).

  8. Create a library or edit an existing one by using the following steps:

    1. Click Browse in the Register JDBC Driver dialog.

    2. The Select Library dialog is displayed, as shown in Figure 9-49.

      Figure 9-49 The Select Library Dialog

      Description of Figure 9-49 follows
      Description of "Figure 9-49 The Select Library Dialog"

    3. Select an existing library or click New to create one.

      The Create Library dialog is displayed.

    4. Enter a library name, for example, SQL Server JDBC.

    5. Click Add Entry to add JDBC jar files to the class path.

    6. Click OK twice to exit the Create Library windows.

    7. Click OK to exit the Register JDBC Driver window.

  9. Enter your connection string name for JDBC URL.

  10. Click Test Connection.

  11. If the connection is successful, a screen is displayed, as shown in Figure 9-50.

    Figure 9-50 The Create Database Connection Dialog

    Description of Figure 9-50 follows
    Description of "Figure 9-50 The Create Database Connection Dialog"

  12. Click OK followed by Finish.

9.7.3 Design Time: Artifact Generation

The Adapter Configuration Wizard – Stored Procedures is capable of creating a WSDL file and a valid XSD file that describes the signature of a stored procedure or function. The following sections describe the relevant structure and content of both the WSDL and the XSD files, and their relationship with each other.

This section includes the following topics:

9.7.3.1 The WSDL–XSD Relationship

In the paragraphs that follow, the operation name, Factorial, and procedure name, Factorial, are taken from an example cited previously (see Figure 9-39). The generated WSDL imports the XSD file.

<types>
  <schema xmlns="http://www.w3.org/2001/XMLSchema">
    <import namespace="http://xmlns.oracle.com/pcbpel/adapter/db/SCOTT/FACTORIAL/"
      schemaLocation="xsd/SCOTT_FACTORIAL.xsd"/>
  </schema>
</types>

The namespace is derived from the schema, package, and procedure name, and appears as the targetNamespace in the generated XSD.

A root element called InputParameters is created in the XSD file for specifying elements that correspond to the IN and IN/OUT parameters of the stored procedure. Another root element called OutputParameters is also created in the XSD file for specifying elements only if there are any IN/OUT or OUT parameters. IN/OUT parameters appear in both root elements.

These root elements are represented in the XSD file as an unnamed complexType definition whose sequence includes one element for each parameter. If there are no IN or IN/OUT parameters, then the InputParameters root element is still created; however, its complexType is empty. A comment in the XSD file indicates that there are no such parameters. An example of a root element follows.

<element name="InputParameters">
  <complexType>
    <sequence>
      <element …>
       …
    </sequence>
  </complexType>
</element>

The WSDL defines message types whose parts are defined in terms of these two root elements.

<message name="args_in_msg">
  <part name="InputParameters" element="db:InputParameters"/>
</message>
<message name="args_out_msg">
  <part name="OutputParameters" element="db:OutputParameters"/>
</message>

The db namespace is equal to the targetNamespace of the generated XSD. The args_in_msg message type always appears in the WSDL while args_out_msg is included only if the OutputParameters root element is generated in the XSD file.

An operation is defined in the WSDL whose name is identical to the adapter service and whose input and output messages are defined in terms of these two message types.

<portType name="Factorial_ptt">
  <operation name="Factorial">
    <input message="tns:args_in_msg"/>
    <output message="tns:args_out_msg"/>
  </operation>
</portType>

The input message always appears while the output message depends on the existence of the OutputParameters root element in the XSD file. The tns namespace is derived from the operation name and is defined in the WSDL as follows:

xmlns:tns="http://xmlns.oracle.com/pcbpel/adapter/db/Factorial/"

The root elements in the XSD file define the structure of the parts used in the messages that are passed into and sent out of the Web service encapsulated by the WSDL.

The input message in the WSDL corresponds to the InputParameters root element from the XSD file. The instance XML supplies values for the IN and IN/OUT parameters of the stored procedure. The output message corresponds to the OutputParameters root element. This is the XML file that gets generated after the stored procedure has executed. It holds the values of any IN/OUT and OUT parameters.
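For the FACTORIAL example, a request instance might look like the following sketch. The parameter name X is hypothetical; the namespace follows the derivation described above:

```
<InputParameters xmlns="http://xmlns.oracle.com/pcbpel/adapter/db/SCOTT/FACTORIAL/">
  <X>5</X>
</InputParameters>
```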

9.7.3.2 JCA File

The JCA file provides adapter configuration information for the service. A connection factory is specified so that the adapter run time can connect to the database, as shown in the following example. Non-managed connection properties should not be specified directly in the JCA file. Instead, create a connection factory on the application server and refer to it by name in the JCA file, using the location attribute of the connection-factory element.

<connection-factory location="eis/DB/oracle" UIConnectionName="oracle"
adapterRef="">
</connection-factory>

The JNDI name, eis/DB/oracle, was earlier specified as the service connection in the Adapter Configuration Wizard.

End point properties for the interaction are also specified. The name of the schema, package, and procedure are specified, as shown in the following example. The operation name ties the JCA file back to the service WSDL.

<connection-factory location="eis/DB/oracle" UIConnectionName="oracle"
adapterRef=""/>
<endpoint-interaction portType="Factorial_ptt" operation="Factorial">
  <interaction-spec
    className="oracle.tip.adapter.db.DBStoredProcedureInteractionSpec">
      <property name="ProcedureName" value="FACTORIAL"/>
      <property name="GetActiveUnitOfWork" value="false"/>
  </interaction-spec>
</endpoint-interaction>

Note the operation name and procedure name. If an explicit schema had been chosen or if the procedure had been defined in a package, then values for these properties would also be listed here.

Note:

Non-managed connection details are not created in the DBAdapter.jca files when you start JDeveloper in the normal mode. However, non-managed connection details are created in the DBAdapter .jca files when you start JDeveloper in the preview mode.

9.7.3.3 Oracle Data Types

Many primitive data types have well-defined mappings and therefore are supported by both the design-time and run-time components. In addition, you can use user-defined types such as VARRAY, nested tables, and OBJECT.

Table 9-17 lists the supported data types for Oracle stored procedures and functions.

Table 9-17 Data Types for Oracle Stored Procedures and Functions

  • BINARY_DOUBLE, DOUBLE PRECISION: double

  • BINARY_FLOAT, FLOAT, REAL: float

  • BINARY_INTEGER, INTEGER, PLS_INTEGER, SMALLINT: int

  • BLOB, LONG RAW, RAW: base64Binary

  • CHAR, CLOB, LONG, STRING, VARCHAR2: string

  • DATE, TIMESTAMP, TIMESTAMP WITH TIME ZONE: dateTime

  • DECIMAL, NUMBER: decimal
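As a worked example of these mappings, a FACTORIAL function consistent with the chapter's running example might be defined as follows. This is a hypothetical definition: the INTEGER parameter X maps to the int XML schema type, and the NUMBER return value maps to decimal:

```
CREATE OR REPLACE FUNCTION FACTORIAL (X IN INTEGER) RETURN NUMBER IS
BEGIN
  IF X <= 1 THEN
    RETURN 1;
  END IF;
  RETURN X * FACTORIAL (X - 1);
END;
/
```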


9.7.3.4 Generated XSD Attributes

Table 9-18 lists the attributes used in the generated XSDs.

Table 9-18 Generated XSD Attributes

  • name (for example, name="param"): the name of an element

  • type (for example, type="string"): the XML schema type

  • db:type (for example, db:type="VARCHAR2"): the SQL or PL/SQL type

  • db:index (for example, db:index="1"): the position of a parameter

  • db:default (for example, db:default="true"): indicates a default clause

  • minOccurs (for example, minOccurs="0"): minimum occurrences

  • maxOccurs (for example, maxOccurs="1"): maximum occurrences

  • nillable (for example, nillable="true"): permits null values


The db namespace is used to distinguish attributes used during run time from standard XML schema attributes. The db:type attribute is used to indicate what the database type is so that a suitable JDBC type mapping can be obtained at run time. The db:index attribute is used as an optimization by both the design-time and run-time components to ensure that the parameters are arranged in the proper order. Parameter indexes begin at 1 for procedures and 0 for functions. The return value of a function is represented as an OutputParameter element whose name is the name of the function and whose db:index is 0. The db:default attribute is used to indicate whether or not a parameter has a default clause.

The minOccurs value is set to 0 to allow for an IN parameter to be removed from the XML file. This is useful when a parameter has a default clause defining a value for the parameter (for example, X IN INTEGER DEFAULT 0). At run time, if no element is specified for the parameter in the XML file, the parameter is omitted from the invocation of the stored procedure, thus allowing the default value to be used. Each parameter can appear at most once in the invocation of a stored procedure or function. Therefore, maxOccurs, whose default value is always 1, is always omitted from elements representing parameters.

The nillable attribute is always set to true to allow the corresponding element in the instance XML to have a null value (for example, <X/> or <X></X>). In some cases, however, you must state explicitly that an element has a null value (for example, <X xsi:nil="true"/>). The xsi namespace used for the nillable attribute must be declared explicitly in the instance XML (for example, xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance").

9.7.3.5 User-Defined Types

The Adapter Configuration Wizard can also generate valid definitions for user-defined types such as collections (VARRAY and nested tables) and OBJECT. These are created as complexType definitions in the XSD file.

For VARRAY, the complexType definition defines a single element in its sequence, called name_ITEM, where name is the name of the VARRAY element. All array elements in the XML file are so named. Given the following VARRAY type definition,

SQL> CREATE TYPE FOO AS VARRAY (5) OF VARCHAR2 (10);

and a VARRAY element, X, whose type is FOO, the following complexType is generated:

<complexType name="FOO">
  <sequence>
    <element name="X_ITEM" db:type="VARCHAR2" minOccurs="0" maxOccurs="5"
             nillable="true">
      <simpleType>
        <restriction base="string">
          <maxLength value="10"/>
        </restriction>
      </simpleType>
    </element>
  </sequence>
</complexType>

The minOccurs value is 0 to allow for an empty collection. The maxOccurs value is set to the maximum number of items that the collection can hold. The db:index attribute is not used. Having nillable set to true allows individual items in the VARRAY to be null.

Note the use of the restriction specified on the element of the VARRAY, FOO. This is used on types such as CHAR and VARCHAR2, whose length is known from the declaration of the VARRAY (or nested table). It specifies the type and maximum length of the element. An element value that exceeds the specified length causes the instance XML to fail during schema validation.
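The two constraints that this restriction and the maxOccurs attribute place on instance documents can be sketched outside the adapter. The following Python check is a hypothetical illustration only (the adapter enforces these limits through XML schema validation, not through code like this); it mirrors the rules for FOO: at most 5 items and at most 10 characters per item.

```python
# Sketch of the constraints the generated XSD enforces for
# CREATE TYPE FOO AS VARRAY (5) OF VARCHAR2 (10).
# Hypothetical helper; real enforcement is XML schema validation.

def validate_foo_items(items, max_items=5, max_length=10):
    """Return True if the item list satisfies maxOccurs and maxLength."""
    if len(items) > max_items:
        return False  # more X_ITEM elements than maxOccurs allows
    # None models an X_ITEM marked xsi:nil="true" (nillable="true")
    return all(i is None or len(i) <= max_length for i in items)

print(validate_foo_items(["Hello", None, "World"]))     # within both limits
print(validate_foo_items(["This string is too long"]))  # exceeds maxLength
```

An item longer than 10 characters corresponds to the schema-validation failure the text describes.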

The attribute values of a parameter declared to be of type FOO look as follows in the generated XSD:

<element name="X" type="db:FOO" db:type="Array" db:index="1" minOccurs="0" 
nillable="true"/>

The type and db:type values indicate that the parameter is represented as an array defined by the complexType called FOO in the XSD file. The value for db:index is whatever the position of that parameter is in the stored procedure.

A nested table is treated almost identically to a VARRAY. The following nested table type definition,

SQL> CREATE TYPE FOO AS TABLE OF VARCHAR2 (10);

is also generated as a complexType with a single element in its sequence, called name_ITEM. The element has the same attributes as in the VARRAY example, except that the maxOccurs value is unbounded because nested tables can be of arbitrary size.

<complexType name="FOO">
  <sequence>
    <element name="X_ITEM" … maxOccurs="unbounded" nillable="true">
      …
    </element>
  </sequence>
</complexType>

An identical restriction is generated for the X_ITEM element in the VARRAY. The attributes of a parameter, X, declared to be of this type, are the same as in the VARRAY example.

Collections (VARRAY and nested table) are not supported if they are defined inside a PL/SQL package specification. For example:

SQL> create package pkg as
   >   type vary is varray(10) of number;
   >   type ntbl is table of varchar2(100);
   >   procedure test(v in vary, n in ntbl);
   > end;
   > /

If a user selects the test procedure in the Adapter Configuration Wizard for stored procedures, an error occurs stating that the types are not supported. However, if the vary and ntbl type definitions were defined at the root level, outside of the package, then choosing the test procedure works without issue. The supported way to use collection types (VARRAY and nested table) is shown in the following example:

SQL> create type vary as varray(10) of number;
SQL> create type ntbl as table of varchar2(10);
SQL> create package pkg as
   >   procedure test(v in vary, n in ntbl);
   > end;
   > /

An OBJECT definition is also generated as a complexType. Its sequence holds one element for each attribute in the OBJECT.

The following OBJECT,

SQL> CREATE TYPE FOO AS OBJECT (X VARCHAR2 (10), Y NUMBER);

is represented as a complexType definition called FOO with two sequence elements.

<complexType name="FOO">
  <sequence>
    <element name="X" db:type="VARCHAR2" minOccurs="0" nillable="true">
      <simpleType>
        <restriction base="string">
          <maxLength value="10"/>
        </restriction>
      </simpleType>
    </element>
    <element name="Y" type="decimal" db:type="NUMBER" minOccurs="0"
             nillable="true"/>
  </sequence>
</complexType>

The minOccurs value is 0 to allow for the element to be removed from the XML file. This causes the value of the corresponding attribute in the OBJECT to be set to null at run time. The nillable value is true to allow empty elements to appear in the XML file, annotated with the xsi:nil attribute, to indicate that the value of the element is null. Again, the db:index attribute is not used.

Note the use of a restriction on the VARCHAR2 attribute. The length is known from the declaration of the attribute in the OBJECT.

9.7.3.6 Complex User-Defined Types

User-defined types can be defined in arbitrarily complex ways. An OBJECT can contain attributes whose types are defined as any of the user-defined types mentioned in the preceding section. The type of an attribute in an OBJECT can be another OBJECT, VARRAY, or a nested table, and so on. The base type of a VARRAY or a nested table can also be an OBJECT. Allowing the base type of a collection to be another collection supports multidimensional collections.

9.7.3.7 Object Type Inheritance

The Adapter Configuration Wizard is capable of generating a valid XSD for parameters whose types are defined using OBJECT-type inheritance. Given the following type hierarchy,

SQL> CREATE TYPE A AS OBJECT (A1 NUMBER, A2 VARCHAR2 (10)) NOT FINAL;
SQL> CREATE TYPE B UNDER A (B1 VARCHAR2 (10));

and a procedure containing a parameter, X, whose type is B,

SQL> CREATE PROCEDURE P (X IN B) AS BEGIN … END;

the Adapter Configuration Wizard generates an InputParameters element for parameter X as

<element name="X" type="db:B" db:index="1" db:type="Struct" minOccurs="0" 
nillable="true"/>

where the definition of OBJECT type B in the XSD file is generated as the following complexType.

<complexType name="B">
  <sequence>
    <element name="A1" type="decimal" db:type="NUMBER" minOccurs="0" 
nillable="true"/>
    <element name="A2" db:type="VARCHAR2" minOccurs="0"  nillable="true">
      ...
    </element>
    <element name="B1" db:type="VARCHAR2" minOccurs="0"  nillable="true">
      ...
    </element>
  </sequence>
</complexType>

Restrictions on the maximum length of attributes A2 and B1 are added appropriately. Notice how the OBJECT type hierarchy is flattened into a single sequence of elements that corresponds to all of the attributes in the entire hierarchy.

9.7.3.8 Object References

The Adapter Configuration Wizard can also generate a valid XSD for parameters that are references to OBJECT types (for example, object references) or are user-defined types that contain an object reference somewhere in their definition. In this example,

SQL> CREATE TYPE FOO AS OBJECT (…);
SQL> CREATE TYPE BAR AS OBJECT (F REF FOO, …);
SQL> CREATE PROCEDURE PROC (X OUT BAR, Y OUT REF FOO) AS BEGIN … END;

the Adapter Configuration Wizard generates complexType definitions for FOO and BAR as indicated, except that for BAR, the element for the attribute, F, is generated as

<element name="F" type="db:FOO" db:type="Ref" minOccurs="0" nillable="true"/>

where the type and db:type attribute values indicate that F is a reference to the OBJECT type FOO.

For a procedure PROC, the following elements are generated in the OutputParameters root element of the XSD file:

<element name="X" type="db:BAR" db:index="1" db:type="Struct" minOccurs="0" 
nillable="true"/>
<element name="Y" type="db:FOO" db:index="2" db:type="Ref" minOccurs="0" 
nillable="true"/>

For Y, note the value of the db:type attribute, Ref. Together with the type attribute, the element definition indicates that Y is a reference to FOO.

There is a restriction on the use of object references that limits their parameter mode to OUT only. Passing an IN or IN/OUT parameter into an API that is either directly a REF or, if the type of the parameter is user-defined, contains a REF somewhere in the definition of that type, is not permitted.

9.7.3.9 Referencing Types in Other Schemas

You can refer to types defined in other schemas if the necessary privileges to access them have been granted. For example, suppose type OBJ was declared in SCHEMA1:

SQL> CREATE TYPE OBJ AS OBJECT (…);

The type of a parameter in a stored procedure declared in SCHEMA2 can be type OBJ from SCHEMA1:

CREATE PROCEDURE PROC (O IN SCHEMA1.OBJ) AS BEGIN … END;

This is possible only if SCHEMA1 granted permission to SCHEMA2 to access type OBJ:

SQL> GRANT EXECUTE ON OBJ TO SCHEMA2;

If the required privileges are not granted, an error occurs when trying to create procedure PROC in SCHEMA2:

PLS-00201: identifier "SCHEMA1.OBJ" must be declared

Because the privileges have not been granted, type OBJ from SCHEMA1 is not visible to SCHEMA2; therefore, SCHEMA2 cannot refer to it in the declaration of parameter O.

9.7.3.10 XSD Pruning Optimization

Some user-defined object types can have a very large number of attributes. These attributes can also be defined in terms of other object types that also have many attributes. In short, one object type can become quite large depending on the depth and complexity of its definition.

Depending on the situation, many attributes of a large object type may not even be necessary. It is sometimes desirable to omit these attributes from the object's schema definition. This can be done by physically removing the unwanted XSD elements from the definition of the object type.

See the following example where a stored procedure has a parameter whose type is a complex user-defined type:

SQL> CREATE TYPE OBJ AS OBJECT (A NUMBER, B <SqlType>, C <SqlType>, ...);
SQL> CREATE PROCEDURE PROC (O OBJ) AS BEGIN ... END;

The InputParameters root element contains a single element for the parameter, O from the API's signature. A complexType definition is to be added to the generated XSD for the object type, as shown in the following code snippet:

<complexType name="OBJ">
  <sequence>
    <element name="A" type="decimal" db:type="NUMBER" minOccurs="0"  
nillable="true"/>
    <element name="B" .../>
    <element name="C" .../>
    ...
  </sequence>
</complexType>

If attributes B and C are not required, then their elements in the complexType definition of OBJ can be removed, regardless of their types. Values are not required for these attributes in the instance XML. If parameter O had been an output parameter, then elements corresponding to the pruned attributes would also be omitted from the generated XML.

Suppose that the type of parameter A was also a user-defined object type and that the definition of OBJ changed accordingly, as shown in the following example:

SQL> CREATE TYPE FOO AS OBJECT (X NUMBER, Y NUMBER, Z NUMBER);
SQL> CREATE TYPE OBJ AS OBJECT (A FOO, B <SqlType>, C <SqlType>, ...);

In such a case, the API remains unchanged. Elements corresponding to unwanted attributes in the definition of FOO can also be removed regardless of their type. So, for example, if Y is not required, then its element in the complexType definition of FOO can be removed in the XSD file.

Pruning the XSD file in this fashion improves the run-time performance of the adapter and can significantly reduce memory consumption, as well.

Note:

Only attributes in user-defined object types can be pruned. You cannot prune (remove) a parameter of the stored procedure by removing its element from the InputParameters root element. This can result in an error at run time unless the parameter has a default clause.

9.7.4 Run Time: Before Stored Procedure Invocation

This section discusses important considerations of stored procedure support and provides a brief overview of what happens before the invocation of a stored procedure or function.

This section includes the following topics:

9.7.4.1 Value Binding

Consider the extraction of values from the XML file and how the run time works given those values. The possible cases for data in the XML file corresponding to the value of a parameter whose type is a supported primitive data type are as follows:

  1. The value of an element is specified (for example, <X>100</X>, here X=100.)

  2. The value of an element is not specified (for example, <X/>, here X=null.)

  3. The value is explicitly specified as null (for example, <X xsi:nil="true"/>, here X=null.)

  4. The element is not specified in the XML file at all (for example, X = <default value>).

Note:

There is one notable difference that distinguishes Microsoft SQL Server from IBM DB2, MySQL, and AS/400. SQL Server supports parameters that can include a default value in the definition of a stored procedure. Because IBM DB2, MySQL, and AS/400 do not support parameter defaults, every parameter must be represented as an element in the instance XML.

In the first case, the value is taken from the XML file as is and is converted to the appropriate object according to its type. That object is then bound to its corresponding parameter during preparation of the stored procedure invocation.

In the second and third cases, the actual value extracted from the XML file is null. The type converter accepts null and returns it without any conversion. The null value is bound to its corresponding parameter regardless of its type. Essentially, this is equal to passing null for parameter X.

The fourth case has two possibilities: the parameter either has a default clause or it does not. If the parameter has a default clause, then the parameter can be excluded from the invocation of the stored procedure, allowing the default value to be used. If the parameter is included, then its value is used instead. If the parameter does not have a default clause, then the parameter must be included in the invocation of the procedure. Elements for all parameters of a function must be specified; if an element in the instance XML is missing, then the function is invoked with fewer arguments than expected.

Consider a parameter that has a default clause:

SQL> CREATE PROCEDURE PROC (X IN INTEGER DEFAULT 0) AS BEGIN … END;

Here, no value must be bound to the parameter. In fact, the parameter can be excluded from the invocation of the stored procedure, allowing the default value of 0 to be used for parameter X.

To summarize, the following PL/SQL is executed in each case (cases 2 and 3 above both bind a null value, so they appear as a single entry):

  1. "BEGIN PROC (X=>?); END;" - X = 100

  2. "BEGIN PROC (X=>?); END;" - X = null

  3. There are two possibilities:

    1. "BEGIN PROC (); END;" - X = 0 (X has a default clause)

    2. "BEGIN PROC (X=>?); END;" - X = null (X does not have a default clause)

With the exception of default clause handling, these general semantics also apply to item values of a collection or attribute values of an OBJECT whose types are supported primitive data types. The semantics of <X/> when the type is user-defined are, however, quite different.
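The four cases for a primitive parameter can be sketched with Python's standard xml.etree module. This is an illustration of the extraction semantics only, not the adapter's actual implementation; the element names and the MISSING sentinel are hypothetical.

```python
# Sketch: distinguishing the four binding cases for a primitive
# IN parameter in the instance XML (illustrative only).
import xml.etree.ElementTree as ET

XSI = "{http://www.w3.org/2001/XMLSchema-instance}"
MISSING = object()  # case 4: element absent; a default clause may apply

def extract(xml_text, name):
    root = ET.fromstring(xml_text)
    elem = root.find(name)
    if elem is None:
        return MISSING                       # case 4: not specified at all
    if elem.get(XSI + "nil") == "true":
        return None                          # case 3: explicit null
    return elem.text if elem.text else None  # case 1 (value) or case 2 (<X/>)

doc = ('<InputParameters '
       'xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">'
       '<A>100</A><B/><C xsi:nil="true"/></InputParameters>')

print(extract(doc, "A"))             # case 1: "100"
print(extract(doc, "B"))             # case 2: None
print(extract(doc, "C"))             # case 3: None
print(extract(doc, "D") is MISSING)  # case 4: True
```

Cases 2 and 3 both yield a null binding, while case 4 leaves room for a database-side default.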

For a collection, whether it is a VARRAY or a nested table, the following behavior can be expected, given a type definition such as

SQL> CREATE TYPE ARRAY AS VARRAY (5) OF VARCHAR2 (10);

and XML for a parameter, X, which has type ARRAY, that appears as follows:

<X>
    <X_ITEM xsi:nil="true"/>
    <X_ITEM>Hello</X_ITEM>
    <X_ITEM xsi:nil="true"/>
    <X_ITEM>World</X_ITEM>
</X>

The first and third elements of the VARRAY are set to null. The second and fourth are assigned their respective values. No fifth element is specified in the XML file; therefore, the VARRAY instance has only four elements.

Assume an OBJECT definition such as

SQL> CREATE TYPE OBJ AS OBJECT (A INTEGER, B INTEGER, C INTEGER);

and XML for a parameter, X, which has type OBJ, that appears as

<X>
    <A>100</A>
    <C xsi:nil="true"/>
</X>

The value 100 is assigned to attribute A, and null is assigned to attributes B and C. Because there is no element in the instance XML for attribute B, a null value is assigned.

The second case, <X/>, behaves differently if the type of X is user-defined. Rather than assigning null to X, an initialized instance of the user-defined type is created and bound instead.

In the preceding VARRAY example, if <X/> or <X></X> is specified, then the value bound to X is an empty instance of the VARRAY. In PL/SQL, this is equivalent to calling the type constructor and assigning the value to X. For example,

X := ARRAY();

Similarly, in the preceding OBJECT example, an initialized instance of OBJ, whose attribute values have all been null assigned, is bound to X. Similar to the VARRAY case, this is equivalent to calling the type constructor. For example,

X := OBJ(NULL, NULL, NULL);

To specifically assign a null value to X when the type of X is user-defined, add the xsi:nil attribute to the element in the XML file, as in

<X xsi:nil="true"/>

9.7.4.2 Data Type Conversions

This section describes the conversion of data types such as CLOB, DATE, TIMESTAMP, and binary data types including RAW, LONG RAW and BLOB, and similar data types supported by third-party databases.

Microsoft SQL Server, IBM DB2, AS/400, and MySQL support binding various forms of binary and date data types to parameters of a stored procedure, as summarized in Table 9-19.

Table 9-19 Third-Party Database: Binding Binary and Date Values to Parameters of a Stored Procedure

XML Schema Type: base64Binary

  IBM DB2:               BLOB, CHAR FOR BIT DATA, VARCHAR FOR BIT DATA
  AS/400:                BINARY, BINARY LARGE OBJECT, BINARY VARYING
  Microsoft SQL Server:  BINARY, IMAGE, TIMESTAMP, VARBINARY
  MySQL:                 BINARY, TINYBLOB, BLOB, MEDIUMBLOB, LONGBLOB, VARBINARY

XML Schema Type: dateTime

  IBM DB2:               DATE, TIME, TIMESTAMP
  AS/400:                DATE, TIME, TIMESTAMP
  Microsoft SQL Server:  DATETIME, SMALLDATETIME
  MySQL:                 DATE, DATETIME, TIMESTAMP


For a CLOB parameter, if the length of the CLOB parameter is less than 4 kilobytes, then the text extracted from the XML file is bound to the parameter as a String type with no further processing. If the length of the CLOB parameter is greater than 4 kilobytes, or if the mode of the parameter is IN/OUT, then a temporary CLOB is created. The XML file data is then written to the temporary CLOB before the CLOB is bound to its corresponding parameter. The temporary CLOB is freed when the interaction completes. For other character types, such as CHAR and VARCHAR2, the data is simply extracted and bound as necessary. It is possible to bind an XML document to a CLOB parameter (or to a VARCHAR2, if it is large enough). However, appropriate substitutions for <, >, and so on, must first be made (for example, &lt; for < and &gt; for >).
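The substitution step can be sketched with Python's standard library. The adapter performs this in Java; the snippet below is only an illustration of the escaping the paragraph describes.

```python
# Escaping an XML document before binding it to a CLOB parameter:
# <, >, and & must be replaced so the surrounding payload stays well-formed.
from xml.sax.saxutils import escape, unescape

payload = '<order id="1">5 &amp; 6</order>'
escaped = escape(payload)            # e.g. &lt;order id="1"&gt;...
print(escaped)
print(unescape(escaped) == payload)  # the round trip restores the original
```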

A few data types require special processing before their values are bound to their corresponding parameters. These include data types represented by the XML Schema types base64Binary and dateTime.

The XML schema type, dateTime, represents TIME, DATE, and TIMESTAMP. The XML values for these data types must adhere to the XML schema representation for dateTime. Therefore, a simple DATE string, 01-JAN-05, is invalid. XML schema defines dateTime as YYYY-MM-DDTHH:mm:ss. Therefore, the correct DATE value is 2005-01-01T00:00:00. Values for these parameters must be specified using this format in the instance XML.
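Producing the required lexical form can be sketched with Python's standard datetime module (illustrative only; any language's date formatting works equally well):

```python
# Converting a date value to the XML Schema dateTime form
# YYYY-MM-DDTHH:mm:ss expected in the instance XML.
from datetime import datetime

d = datetime(2005, 1, 1)                    # the DATE 01-JAN-05
xml_value = d.strftime("%Y-%m-%dT%H:%M:%S")
print(xml_value)                            # 2005-01-01T00:00:00
```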

Data for binary data types must be represented in a human-readable manner. The chosen XML schema representation for binary data is base64Binary. The type converter uses the javax.mail.internet.MimeUtility encode and decode APIs to process binary data. The encode API must be used to encode all binary data into base64Binary form so that it can be used in an XML file. At run time, the type converter uses the decode API to convert the base64Binary data in the XML back into a byte array.

For a BLOB parameter, if the length of a byte array containing the decoded value is less than 2 kilobytes, then the byte array is bound to its parameter with no further processing. If the length of the byte array is greater than 2 kilobytes or if the mode of the parameter is IN/OUT, then a temporary BLOB is created. The byte array is then written to the BLOB before it is bound to its corresponding parameter. The temporary BLOB is freed when the interaction completes. For other binary data types, such as RAW and LONG RAW, the base64Binary data is decoded into a byte array and bound as necessary.
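The adapter performs this encoding and decoding with javax.mail.internet.MimeUtility; the same base64Binary round trip can be sketched with Python's standard base64 module for illustration:

```python
# base64Binary round trip: binary data must be encoded before it can be
# placed in the instance XML, and is decoded back to bytes at run time.
# (The adapter itself uses javax.mail.internet.MimeUtility; base64 is
# shown here only to illustrate the transformation.)
import base64

raw = bytes(range(8))                            # some binary payload
encoded = base64.b64encode(raw).decode("ascii")  # goes into the XML element
decoded = base64.b64decode(encoded)              # byte array bound at run time
print(decoded == raw)                            # lossless round trip

# The 2-kilobyte threshold the text describes: small decoded values are
# bound directly; larger ones are first written to a temporary BLOB.
print(len(decoded) < 2048)
```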

Conversions for the remaining data types are straightforward and require no additional information.

9.7.5 Run Time: After Stored Procedure Invocation

After the procedure (or function) executes, the values for any IN/OUT and OUT parameters are retrieved. These correspond to the values of the elements in the OutputParameters root element in the generated XSD.

This section includes the following topics:

9.7.5.1 Data Type Conversions

Conversions of data retrieved are straightforward. However, CLOB (and other character data), RAW, LONG RAW, and BLOB conversions, and conversions for similar data types supported by third-party databases, require special attention.

When a CLOB is retrieved, the entire contents of that CLOB are written to the corresponding element in the generated XML. Standard DOM APIs are used to construct the XML file. Hence, character data, for types such as CLOB, CHAR, and VARCHAR2, is massaged as needed to make any required substitutions so that the value is valid and can be placed in the XML file for subsequent processing. Therefore, substitutions for < and >, for example, in an XML document stored in a CLOB are made so that the value placed in the element within the generated XML for the associated parameter is valid.

Raw data, such as for RAW and LONG RAW data types, is retrieved as a byte array. For BLOBs, the BLOB is first retrieved, and then its contents are obtained, also as a byte array. The byte array is then encoded using the javax.mail.internet.MimeUtility encode API into base64Binary form. The encoded value is then placed in its entirety in the XML file for the corresponding element. The MimeUtility decode API must be used to decode this value back into a byte array.

Conversions for the remaining data types are straightforward and require no additional information.

9.7.5.2 Null Values

Elements whose values are null appear as empty elements in the generated XML and are annotated with the xsi:nil attribute. Thus, the xsi namespace is declared in the XML file that is generated. Generated XML for a procedure PROC, which has a single OUT parameter, X, whose value is null, looks as follows:

<OutputParameters … xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <X xsi:nil="true"/>
</OutputParameters>

XML elements for parameters of any type (including user-defined types) appear this way if their value is null.

9.7.5.3 Function Return Values

The return value of a function is treated as an OUT parameter at position 0 whose name is the name of the function itself. For example,

CREATE FUNCTION FACTORIAL (X IN INTEGER) RETURN INTEGER AS
BEGIN
    IF (X <= 0) THEN RETURN 1;
    ELSE RETURN X * FACTORIAL (X - 1);
    END IF;
END;

An invocation of this function with a value of 5, for example, results in a value of 120 and appears as <FACTORIAL>120</FACTORIAL> in the OutputParameters root element in the generated XML.

9.7.6 Run Time: Common Third-Party Database Functionality

The common third-party database functionality at run time includes the following:

9.7.6.1 Processing ResultSets

All third-party databases share the same functionality for handling ResultSets. The following is a SQL Server example of an API that returns a ResultSet:

 1> create procedure foo ... as select ... from ...;
 2> go

A RowSet defined in the generated XSD represents a ResultSet. A RowSet consists of zero or more rows, each having one or more columns. A row corresponds with a row returned by the query. A column corresponds with a column item in the query. The generated XML for the API shown in the preceding example after it executes is shown in the following example:

<RowSet>
  <Row>
    <Column name="<column name>" sqltype="<sql datatype>">value</Column>
    ...
  </Row>
    ...
</RowSet>
...

The name attribute stores the name of the column appearing in the query, while the sqltype attribute stores the SQL data type of that column, for example, INT. The value is whatever the value is for that column.

It is possible for an API to return multiple ResultSets. In such cases, there is one RowSet for each ResultSet in the generated XML. All RowSets always appear first in the generated XML.
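How a ResultSet maps onto the RowSet/Row/Column shape can be sketched as follows. The helper below is hypothetical (the adapter generates this XML internally); column names and SQL types are assumed to come from the driver's result-set metadata.

```python
# Sketch: converting rows returned by a query into the RowSet/Row/Column
# XML shape described above (illustrative, not the adapter's own code).
import xml.etree.ElementTree as ET

def rows_to_rowset(columns, rows):
    """columns: list of (name, sqltype) pairs; rows: list of value tuples."""
    rowset = ET.Element("RowSet")
    for values in rows:
        row = ET.SubElement(rowset, "Row")
        for (name, sqltype), value in zip(columns, values):
            col = ET.SubElement(row, "Column", name=name, sqltype=sqltype)
            col.text = str(value)
    return ET.tostring(rowset, encoding="unicode")

xml_out = rows_to_rowset([("ID", "INT"), ("NAME", "VARCHAR")],
                         [(1, "Alice"), (2, "Bob")])
print(xml_out)
```

Multiple ResultSets would simply produce multiple RowSet elements in sequence.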

9.7.6.2 Returning an INTEGER Status Value

Some databases support returning an INTEGER status value using a RETURN statement in a stored procedure. Microsoft SQL Server and AS/400 both support this feature. In both cases, the Adapter Configuration Wizard cannot determine whether a stored procedure returns a status value. Therefore, you must indicate that the stored procedure returns one by selecting a check box.

After choosing a stored procedure in the Stored Procedures dialog, the Specify Stored Procedure page appears, as shown in Figure 9-51. The check box appears at the bottom of the page. Select the box to indicate that the procedure contains a RETURN statement. You can view the source code of the procedure to determine whether a RETURN statement exists.

The check box appears only for stored procedures on databases that support this feature. The check box is not displayed for functions. The value returned by the stored procedure appears as an element in the OutputParameters root element in the generated XSD. The name of the element is the name of the stored procedure. The value of a return statement is lost after the execution of the stored procedure if the check box is not selected.

Figure 9-51 The Specify Stored Procedure Page

Description of Figure 9-51 follows
Description of "Figure 9-51 The Specify Stored Procedure Page"

9.7.7 Advanced Topics

This section discusses scenarios for types that are not supported directly using the stored procedure functionality that the Oracle Database Adapter provides. The following sections describe workarounds that address the need to use these data types:

9.7.7.1 Row Set Support Using a Strongly Typed XSD

A REF CURSOR can, by nature, support any arbitrary result set, so the XSD generated at design time is weakly typed.

However, the XML output based on a weakly typed XSD is hard to use: it is difficult to write an XPath expression or XSL transformation against it, because column names appear as attribute values instead of element names.

Although a row set can represent any result set, it is possible to assume for some procedures that it has the same structure each time, and hence can be described with a strongly typed XSD. A strongly typed XSD is almost a necessity to transform the result set to another XSD later on. You can use the Adapter Configuration Wizard to generate a strongly typed XSD for a REF CURSOR.

If a weakly typed XSD is sufficient for your use case, see Section 9.7.7.2, "Row Set Support Using a Weakly Typed XSD".

This section includes the following topics:

For more information, see Section 9.3.3, "Row Set Support Using a Strongly or Weakly Typed XSD".

9.7.7.1.1 Design Time

If the stored procedure or function you select contains an output parameter of type RowSet, you can define a strongly typed XSD for this ref cursor as follows:

  1. Using the Adapter Configuration Wizard, select a stored procedure or function that contains an output parameter of type RowSet.

    See steps 1 through 8 in Section 9.7.1.1, "Using Top-Level Standalone APIs".

  2. Click Next. The RowSets page is displayed, as shown in Figure 9-52.

    By default, the Adapter Configuration Wizard generates a weakly typed XSD for this ref cursor shown in the XSD text field. Example 9-4 shows this default, weakly typed XSD.

    Example 9-4 Default Weakly Typed XSD

    <schema targetNamespace="http://xmlns.oracle.com/pcbpel/adapter/db/SYS/MOVIES_
    CURSORS/MOVIES_QUERY/" xmlns="http://www.w3.org/2001/XMLSchema" 
    xmlns:db="http://xmlns.oracle.com/pcbpel/adapter/db/SYS/MOVIES_CURSORS/MOVIES_QUERY/" 
    elementFormDefault="qualified">
       <element name="InputParameters">
          <complexType>
             <sequence>
                <element name="EXAMPLE" type="db:SYS.MOVIESOBJ" db:index="1" db:type="Struct" 
    minOccurs="0" nillable="true"/>
             </sequence>
          </complexType>
       </element>
       <element name="OutputParameters">
          <complexType>
             <sequence>
                <element name="MOVIES" type="db:RowSet" db:index="2" db:type="RowSet" 
    minOccurs="0" nillable="true"/>
             </sequence>
          </complexType>
       </element>
       <complexType name="RowSet">
          <sequence>
             <element name="Row" minOccurs="0" maxOccurs="unbounded">
                <complexType>
                   <sequence>
                      <element name="Column" maxOccurs="unbounded" nillable="true">
                         <complexType>
                            <simpleContent>
                               <extension base="string">
                                  <attribute name="name" type="string" use="required"/>
                                  <attribute name="sqltype" type="string" use="required"/>
                               </extension>
                            </simpleContent>
                         </complexType>
                      </element>
                   </sequence>
                </complexType>
             </element>
          </sequence>
       </complexType>
       <complexType name="SYS.MOVIESOBJ">
          <sequence>
             <element name="TITLE" db:type="VARCHAR2" minOccurs="0" nillable="true">
                <simpleType>
                   <restriction base="string">
                      <maxLength value="30"/>
                   </restriction>
                </simpleType>
             </element>
             <element name="DIRECTOR" db:type="VARCHAR2" minOccurs="0" nillable="true">
                <simpleType>
                   <restriction base="string">
                      <maxLength value="30"/>
                   </restriction>
                </simpleType>
             </element>
             <element name="STARRING" db:type="VARCHAR2" minOccurs="0" nillable="true">
                <simpleType>
                   <restriction base="string">
                      <maxLength value="30"/>
                   </restriction>
                </simpleType>
             </element>
          </sequence>
       </complexType>
    </schema>
    
  3. For each of the stored procedure or function arguments:

    • Double-click in the Value column.

    • Enter a valid value for the argument.

      Enter numbers and strings directly, dates as literals (for example, 2009/11/11), and structs as, for example, MYOBJ('a', 'b').

    • Press Enter.

    Note:

    You must choose values that are valid for the argument type and that exist in the database.

    Oracle recommends that you specify a value for all arguments to ensure that the correct stored procedure or function signature is executed.

  4. Click Introspect.

    The Adapter Configuration Wizard executes the stored procedure or function using the arguments you specify:

    1. If the stored procedure or function returns a row set with at least one row, the RowSets page is updated to display a strongly typed XSD in the XSD text field. Example 9-5 shows the strongly typed XSD that replaces the default, weakly typed XSD shown in Example 9-4.

      Figure 9-53 RowSets Page: Successful Introspection


      Example 9-5 Strongly Typed XSD

      <schema targetNamespace="http://xmlns.oracle.com/pcbpel/adapter/db/SYS/MOVIES_
      CURSORS/MOVIES_QUERY/" xmlns="http://www.w3.org/2001/XMLSchema" 
      xmlns:db="http://xmlns.oracle.com/pcbpel/adapter/db/SYS/MOVIES_CURSORS/MOVIES_
      QUERY/" elementFormDefault="qualified">
         <element name="InputParameters">
            <complexType>
               <sequence>
                  <element name="EXAMPLE" type="db:SYS.MOVIESOBJ" db:index="1" 
      db:type="Struct" minOccurs="0" nillable="true"/>
               </sequence>
            </complexType>
         </element>
         <element name="OutputParameters">
            <complexType>
               <sequence>
                  <element name="MOVIES" type="db:MOVIES_RowSet" db:index="2" 
      db:type="RowSet" minOccurs="0" nillable="true"/>
               </sequence>
            </complexType>
         </element>
         <complexType name="MOVIES_RowSet">
            <sequence>
               <element name="MOVIES_Row" minOccurs="0" maxOccurs="unbounded">
                  <complexType>
                     <sequence>
                        <element name="TITLE" db:type="VARCHAR2" minOccurs="0" 
      nillable="true">
                           <simpleType>
                              <restriction base="string">
                                 <maxLength value="50"/>
                              </restriction>
                           </simpleType>
                        </element>
                        <element name="DIRECTOR" db:type="VARCHAR2" minOccurs="0" 
      nillable="true">
                           <simpleType>
                              <restriction base="string">
                                 <maxLength value="20"/>
                              </restriction>
                           </simpleType>
                        </element>
                        <element name="STARRING" db:type="VARCHAR2" minOccurs="0" 
      nillable="true">
                           <simpleType>
                              <restriction base="string">
                                 <maxLength value="100"/>
                              </restriction>
                           </simpleType>
                        </element>
                        <element name="SYNOPSIS" db:type="VARCHAR2" minOccurs="0" 
      nillable="true">
                           <simpleType>
                              <restriction base="string">
                                 <maxLength value="255"/>
                              </restriction>
                           </simpleType>
                        </element>
                        <element name="GENRE" db:type="VARCHAR2" minOccurs="0" 
      nillable="true">
                           <simpleType>
                              <restriction base="string">
                                 <maxLength value="70"/>
                              </restriction>
                           </simpleType>
                        </element>
                        <element name="RUN_TIME" type="decimal" db:type="NUMBER" 
      minOccurs="0" nillable="true"/>
                        <element name="RELEASE_DATE" type="dateTime" db:type="DATE" 
      minOccurs="0" nillable="true"/>
                        <element name="RATED" db:type="VARCHAR2" minOccurs="0" 
      nillable="true">
                           <simpleType>
                              <restriction base="string">
                                 <maxLength value="6"/>
                              </restriction>
                           </simpleType>
                        </element>
                        <element name="RATING" db:type="VARCHAR2" minOccurs="0" 
      nillable="true">
                           <simpleType>
                              <restriction base="string">
                                 <maxLength value="4"/>
                              </restriction>
                           </simpleType>
                        </element>
                        <element name="VIEWER_RATING" db:type="VARCHAR2" minOccurs="0" 
      nillable="true">
                           <simpleType>
                              <restriction base="string">
                                 <maxLength value="5"/>
                              </restriction>
                           </simpleType>
                        </element>
                        <element name="STATUS" db:type="VARCHAR2" minOccurs="0" 
      nillable="true">
                           <simpleType>
                              <restriction base="string">
                                 <maxLength value="11"/>
                              </restriction>
                           </simpleType>
                        </element>
                        <element name="TOTAL_GROSS" type="decimal" db:type="NUMBER" 
      minOccurs="0" nillable="true"/>
                        <element name="DELETED" db:type="VARCHAR2" minOccurs="0" 
      nillable="true">
                           <simpleType>
                              <restriction base="string">
                                 <maxLength value="5"/>
                              </restriction>
                           </simpleType>
                        </element>
                        <element name="SEQUENCENO" type="decimal" db:type="NUMBER" 
      minOccurs="0" nillable="true"/>
                        <element name="LAST_UPDATED" type="dateTime" db:type="DATE" 
      minOccurs="0" nillable="true"/>
                        <element name="POLLING_STRATEGY" db:type="VARCHAR2" 
      minOccurs="0" nillable="true">
                           <simpleType>
                              <restriction base="string">
                                 <maxLength value="30"/>
                              </restriction>
                           </simpleType>
                        </element>
                     </sequence>
                  </complexType>
               </element>
            </sequence>
         </complexType>
         <complexType name="SYS.MOVIESOBJ">
            <sequence>
               <element name="TITLE" db:type="VARCHAR2" minOccurs="0" nillable="true">
                  <simpleType>
                     <restriction base="string">
                        <maxLength value="30"/>
                     </restriction>
                  </simpleType>
               </element>
               <element name="DIRECTOR" db:type="VARCHAR2" minOccurs="0" 
      nillable="true">
                  <simpleType>
                     <restriction base="string">
                        <maxLength value="30"/>
                     </restriction>
                  </simpleType>
               </element>
               <element name="STARRING" db:type="VARCHAR2" minOccurs="0" 
      nillable="true">
                  <simpleType>
                     <restriction base="string">
                        <maxLength value="30"/>
                     </restriction>
                  </simpleType>
               </element>
            </sequence>
         </complexType>
      </schema>
      

      Proceed to step 5.

    2. If no rows are returned, the Introspection Failed dialog is displayed, as shown in Figure 9-54.

      Figure 9-54 Introspection Failed Dialog


      The Adapter Configuration Wizard generates a weakly typed XSD and displays it in the XSD text field by default, overwriting any edits you may have made to a previous version of the XSD.

      Go back to step 3 and enter test argument values that return a row set with at least one row.

    3. If the stored procedure or function throws an exception, the Introspection Error dialog is displayed, as shown in Figure 9-55.

      Figure 9-55 Introspection Error Dialog


      The Adapter Configuration Wizard generates a weakly typed XSD and displays it in the XSD text field by default, overwriting any edits you may have made to a previous version of the XSD.

      Go back to step 3 and enter test argument values that return a row set with at least one row.

  5. Optionally, fine-tune the strongly typed XSD by manually editing the schema shown in the XSD text field.

  6. Proceed to step 10 in Section 9.7.1.1, "Using Top-Level Standalone APIs".

9.7.7.1.2 Run Time

Suppose you have the following package:

CREATE PACKAGE PKG AS
       TYPE REF_CURSOR IS REF CURSOR;
       PROCEDURE TEST(C OUT REF_CURSOR);
END;

CREATE PACKAGE BODY PKG AS
       PROCEDURE TEST(C OUT REF_CURSOR) AS
       BEGIN
           OPEN C FOR SELECT DEPTNO, DNAME FROM DEPT;
       END;
END;

After you use the Adapter Configuration Wizard to define a strongly typed XSD, the following XML is generated for parameter C when the procedure executes:

<C>
    <C_Row>
        <DEPTNO>10</DEPTNO>
        <DNAME>ACCOUNTING</DNAME>
    </C_Row>
    <C_Row>
        <DEPTNO>11</DEPTNO>
        <DNAME>DEVELOPMENT</DNAME>
    </C_Row>
    …
</C>
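
Because the strongly typed shape exposes each column as its own element, the payload is easy to address with plain XPath or simple DOM navigation. The following is a minimal sketch, using Python's standard xml.etree module on the example payload above; it is an illustration, not adapter code:

```python
import xml.etree.ElementTree as ET

# Strongly typed payload: one element per column, as generated above.
payload = """
<C>
    <C_Row>
        <DEPTNO>10</DEPTNO>
        <DNAME>ACCOUNTING</DNAME>
    </C_Row>
    <C_Row>
        <DEPTNO>11</DEPTNO>
        <DNAME>DEVELOPMENT</DNAME>
    </C_Row>
</C>
"""

root = ET.fromstring(payload)
# Column values are reachable by element name alone.
names = [row.findtext("DNAME") for row in root.findall("C_Row")]
print(names)  # ['ACCOUNTING', 'DEVELOPMENT']
```

Compare this with the attribute-based access required for the weakly typed shape described in Section 9.7.7.2.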

At run time, it does not matter to the Oracle Database Adapter whether the XSD describing the strongly typed ref cursor is inline or imported.

The strongly typed XSD is applied by the SOA runtime and is visible in the Oracle Enterprise Manager Console, where appropriate. For example, Figure 9-56 shows the audit trail for an invoke that returns a ref cursor payload using a strongly typed XSD.

Figure 9-56 Audit Trail for Strongly Typed Payload


9.7.7.2 Row Set Support Using a Weakly Typed XSD

A REF CURSOR can by nature return any arbitrary result set, so the XSD generated at design time is weakly typed. By default, the Adapter Configuration Wizard generates a weakly typed XSD for a REF CURSOR.

However, the resulting XML is difficult to use: it is hard to write an XPath expression or XSLT against a weakly typed XSD, because column names appear as attribute values rather than element names.

Although a row set can represent any result set, for some procedures it is reasonable to assume that the result set has the same structure each time, and hence that it can be described with a strongly typed XSD. A strongly typed XSD is almost a necessity if you want to transform the result set to another XSD later on.

If a strongly typed XSD is better suited to your use case, see Section 9.7.7.1, "Row Set Support Using a Strongly Typed XSD".

This section includes the following topics:

For more information, see Section 9.3.3, "Row Set Support Using a Strongly or Weakly Typed XSD".

9.7.7.2.1 Design Time

If the stored procedure or function you select contains an output parameter of type ResultSet, you can define a weakly typed XSD for this ref cursor as follows:

  1. Using the Adapter Configuration Wizard, select a stored procedure or function that contains an output parameter of type ResultSet.

    See steps 1 through 8 in Section 9.7.1.1, "Using Top-Level Standalone APIs".

  2. Click Next. The RowSets page is displayed, as shown in Figure 9-57.

    By default, the Adapter Configuration Wizard generates a weakly typed XSD for this ref cursor shown in the XSD text field.

  3. Optionally, fine-tune the weakly typed XSD by manually editing the schema shown in the XSD text field.

  4. Proceed to step 10 in Section 9.7.1.1, "Using Top-Level Standalone APIs".

9.7.7.2.2 Run Time

Suppose you have the following package:

CREATE PACKAGE PKG AS
       TYPE REF_CURSOR IS REF CURSOR;
       PROCEDURE TEST(C OUT REF_CURSOR);
END;

CREATE PACKAGE BODY PKG AS
       PROCEDURE TEST(C OUT REF_CURSOR) AS
       BEGIN
           OPEN C FOR SELECT DEPTNO, DNAME FROM DEPT;
       END;
END;

The REF_CURSOR is a weakly typed cursor variable because the query is not specified. After the procedure executes, the following XML is generated for parameter C:

<C>
    <Row>
        <Column name="DEPTNO" sqltype="NUMBER">10</Column>
        <Column name="DNAME" sqltype="VARCHAR2">ACCOUNTING</Column>
    </Row>
    <Row>
        <Column name="DEPTNO" sqltype="NUMBER">20</Column>
        <Column name="DNAME" sqltype="VARCHAR2">RESEARCH</Column>
    </Row>
    …
</C>

There are a total of four rows, each consisting of two columns: DEPTNO and DNAME.
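
To extract the same data from this weakly typed shape, an attribute predicate is needed for every column access, which is what makes XPath and XSLT against this format awkward. A minimal sketch, again using Python's standard xml.etree module on the example payload above:

```python
import xml.etree.ElementTree as ET

# Weakly typed payload: generic Row/Column elements; the column name
# is an attribute value, not an element name.
payload = """
<C>
    <Row>
        <Column name="DEPTNO" sqltype="NUMBER">10</Column>
        <Column name="DNAME" sqltype="VARCHAR2">ACCOUNTING</Column>
    </Row>
    <Row>
        <Column name="DEPTNO" sqltype="NUMBER">20</Column>
        <Column name="DNAME" sqltype="VARCHAR2">RESEARCH</Column>
    </Row>
</C>
"""

root = ET.fromstring(payload)
# An attribute predicate is required just to select a column.
dnames = [row.findtext("Column[@name='DNAME']") for row in root.findall("Row")]
print(dnames)  # ['ACCOUNTING', 'RESEARCH']
```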

Ref cursors are represented by Java ResultSet objects. A ResultSet cannot be created programmatically by using the APIs that the JDBC driver provides; therefore, ref cursors cannot be passed IN to a stored procedure. They can be passed only as IN/OUT and OUT parameters, with one caveat: an IN/OUT ref cursor is treated strictly as an OUT parameter. Because no IN value can be provided for an IN/OUT parameter, a null is bound to that parameter when the stored procedure is invoked.

9.7.7.3 Support for PL/SQL Boolean, PL/SQL Record, and PL/SQL Table Types

The Adapter Configuration Wizard provides a mechanism that detects when these types are used and then invokes Oracle JPublisher to generate the necessary wrappers automatically. Oracle JPublisher generates two SQL files, one to create schema objects, and another to drop them. The SQL that creates the schema objects is automatically executed from within the Adapter Configuration Wizard to create the schema objects in the database schema before the XSD file is generated. For example, suppose the following package specification is declared:

CREATE PACKAGE PKG AS
  TYPE REC IS RECORD (X NUMBER, Y VARCHAR2 (10));
  TYPE TBL IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
  PROCEDURE PLSQL (R REC, T TBL, B BOOLEAN);
END;

Figure 9-58 shows the step in the Adapter Configuration Wizard that is displayed when the PLSQL procedure from the PKG package is selected.

Figure 9-58 Specifying a Stored Procedure in the Adapter Configuration Wizard

Description of "Figure 9-58 Specifying a Stored Procedure in the Adapter Configuration Wizard"

As Figure 9-58 shows, the original procedure name is fully qualified, PKG.PLSQL. The type of parameter, R, is the name of the RECORD. The type of T is the name of the TABLE. The type of B is Boolean. The name of the wrapper package that is generated is derived from the service name, bpel_ServiceName (for example, bpel_UseJPub). This is the name of the generated package that contains the wrapper procedure. You can use the check box to force the Adapter Configuration Wizard to overwrite an existing package when the schema objects are created.

Clicking Next twice reveals the Finish page of the Adapter Configuration Wizard, as shown in Figure 9-59.

Figure 9-59 Defining a Database Adapter Service: Finish Page


The contents of this page describe what the Adapter Configuration Wizard has detected and what actions are performed when the Finish button is clicked. The following summarizes the contents of this page:

  1. The name of the generated WSDL is UseJPub.wsdl.

  2. The name of the JCA file is UseJPub_db.jca.

  3. Two SQL scripts are created and added to the BPEL process project:

    1. BPEL_USEJPUB.sql – Creates the schema objects.

    2. BPEL_USEJPUB_drop.sql – Drops the schema objects.

  4. The name of the generated XSD is SCOTT_USEJPUB_PKG-24PLSQL.xsd.

When you click Finish, Oracle JPublisher is invoked to generate the SQL files and load the schema objects into the database. Generating wrappers can take some time; after an initial wrapper has been generated for one procedure in a package, wrappers for other procedures in the same package usually take less time.

Note:

You must execute BPEL_XXXX_drop.sql when re-creating an Oracle Database Adapter. This is necessary because JPublisher uses a cache when generating wrappers.

The following user-defined types are generated to replace the PL/SQL types from the original procedure:

SQL> CREATE TYPE PKG_REC AS OBJECT (X NUMBER, Y VARCHAR2 (10));
SQL> CREATE TYPE PKG_TBL AS TABLE OF NUMBER;

The naming convention for these types is OriginalPackageName_OriginalTypeName. Boolean is replaced by INTEGER in the wrapper procedure.

Acceptable values for the original Boolean parameter, now that it is an INTEGER, are 0 for FALSE and any nonzero INTEGER value for TRUE. The generated wrapper procedure uses APIs from the SYS.SQLJUTL package to convert from INTEGER to Boolean and vice versa.
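
The integer convention can be sketched as follows (a plain Python illustration of the mapping, assuming the nonzero-is-TRUE rule just described; the actual conversion is performed by SYS.SQLJUTL in the database):

```python
def bool_to_int(b: bool) -> int:
    # TRUE maps to 1, FALSE to 0 in the wrapper signature.
    return 1 if b else 0

def int_to_bool(i: int) -> bool:
    # 0 is FALSE; any nonzero value is treated as TRUE.
    return i != 0

print(bool_to_int(True), int_to_bool(0), int_to_bool(5))
```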

A new wrapper package called BPEL_USEJPUB is created that contains the wrapper for procedure PLSQL, called PKG$PLSQL, and conversion APIs that convert from the PL/SQL types to the user-defined types and vice versa. If the original procedure is a root-level procedure, then the name of the generated wrapper procedure is TOPLEVEL$OriginalProcedureName.

The generated XSD represents the signature of wrapper procedure PKG$PLSQL and not the original procedure. The name of the XSD file is URL-encoded, which replaces $ with -24.

Note the naming conventions for the generated artifacts:

  • The service name is used in the names of the WSDL and SQL files. It is also used as the name of the wrapper package.

  • The name of the generated XSD is derived from the schema name, service name, and the original package and procedure names.

  • The names of the SQL object or collection data types are derived from the original package name and the names of their corresponding PL/SQL types.

  • The name of the wrapper procedure is derived from the original package and procedure names. TOPLEVEL$ is used for root-level procedures.

The name of the generated wrapper package is limited to 30 characters. The name of the wrapper procedure is limited to 29 characters. If the names generated by Oracle JPublisher are longer than these limits, then they are truncated.
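
These naming rules can be summarized in a short sketch (the helper names are hypothetical; the limits are the 30- and 29-character limits described above):

```python
def sql_type_name(package: str, plsql_type: str) -> str:
    # OriginalPackageName_OriginalTypeName, for example PKG_REC.
    return (package + "_" + plsql_type)[:30]

def wrapper_package(service_name: str) -> str:
    # BPEL_ + service name, upper-cased, limited to 30 characters.
    return ("BPEL_" + service_name.upper())[:30]

def wrapper_procedure(package, procedure):
    # Package$Procedure for packaged procedures, TOPLEVEL$Name for
    # root-level ones; limited to 29 characters.
    prefix = package if package else "TOPLEVEL"
    return (prefix + "$" + procedure)[:29]

print(sql_type_name("PKG", "REC"))              # PKG_REC
print(wrapper_package("UseJPub"))               # BPEL_USEJPUB
print(wrapper_procedure("PKG", "PLSQL"))        # PKG$PLSQL
print(wrapper_procedure(None, "NEEDSWRAPPER"))  # TOPLEVEL$NEEDSWRAPPER
```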

When the PartnerLink that corresponds with the service associated with the procedure is invoked, then the generated wrapper procedure is executed instead of the original procedure.

9.7.7.3.1 Default Clauses in Wrapper Procedures

If a procedure contains a special type that requires a wrapper to be generated, then the default clauses on any of the parameters are not carried over to the wrapper. For example, consider

SQL> CREATE PROCEDURE NEEDSWRAPPER (
  2     B BOOLEAN DEFAULT TRUE, N NUMBER DEFAULT 0) IS BEGIN … END;

Assuming that this is a root-level procedure, the signature of the generated wrapper procedure is

TOPLEVEL$NEEDSWRAPPER (B INTEGER, N NUMBER)

The Boolean type has been replaced by INTEGER. The default clauses on both parameters are missing in the generated wrapper. Parameters of generated wrapper procedures never have a default clause even if they did in the original procedure.

In this example, if an element for either parameter is not specified in the instance XML, then an error occurs stating that an incorrect number of arguments have been provided. The default value of the parameter that is specified in the original procedure is not used.

To address this situation, the generated SQL file that creates the wrapper must be edited, restoring the default clauses to the parameters of the wrapper procedure. The wrapper and any additional schema objects must then be reloaded into the database schema. After editing the SQL file, the signature of the wrapper procedure is as follows:

TOPLEVEL$NEEDSWRAPPER (B INTEGER DEFAULT 1, N NUMBER DEFAULT 0)

For Boolean parameters, the default value for true is 1, and the default value for false is 0.

As a final step, the XSD file generated for the wrapper must be edited. A special attribute must be added to elements representing parameters that now have default clauses. Add db:default="true" to each element representing a parameter that now has a default clause. For example,

<element name="B" … db:default="true" …/>
<element name="N" … db:default="true" …/>

This attribute is used at run time to indicate that if the element is missing from the instance XML, then the corresponding parameter must be omitted from the procedure call. The remaining attributes of these elements remain exactly the same.
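
The run-time contract of db:default can be sketched as follows: a parameter whose element is marked db:default="true" and is absent from the instance XML is simply left out of the call, so the database applies the DEFAULT clause (a hypothetical Python illustration, not adapter code):

```python
# Parameters of TOPLEVEL$NEEDSWRAPPER, with db:default flags from the XSD.
parameters = [("B", True), ("N", True)]  # (name, has_default)

def build_call_args(instance_xml_values: dict) -> list:
    # Include a parameter only if the instance XML supplies its element;
    # a missing defaulted parameter is omitted so the database applies
    # the DEFAULT clause restored in the edited wrapper.
    args = []
    for name, has_default in parameters:
        if name in instance_xml_values:
            args.append((name, instance_xml_values[name]))
        elif not has_default:
            raise ValueError(f"missing required parameter {name}")
    return args

print(build_call_args({"N": 5}))  # B omitted; DEFAULT 1 applies in the database
```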

9.8 Oracle Database Adapter Use Cases

This section describes use cases for the Oracle Database Adapter and for Oracle Database Adapter stored procedures.

This section includes the following topics:

9.8.1 Use Cases for Oracle Database Adapter

To obtain Oracle Database Adapter use cases, access the Oracle SOA Sample Code site.

Table 9-20 shows the Oracle Database Adapter samples that are provided with Oracle BPEL PM, and Mediator.

Table 9-20 Oracle Database Adapter Use Cases

Tutorial Name Description

Delete

Illustrates the outbound delete operation of the Oracle Database Adapter. An XML record is passed to the operation and the row in the database with the same primary key is deleted.

File2Table

Illustrates the use of a native (CSV) input data file defined in a custom format. The input file is a purchase order, which the file adapter processes and publishes as an XML message to the File2Table BPEL process. The message is transformed to another purchase order format and routed to an invoke activity.

Insert

Illustrates the outbound insert operation of the Oracle Database Adapter. An XML record is passed to the operation and inserted into the database as relational data. (In JDeveloper BPEL Designer, Merge (Insert or Update) is provided.)

InsertWithCatch

Illustrates the extra steps (based on the Insert tutorial) needed to add fault handling to your BPEL process.

JPublisherWrapper

Illustrates a workaround for using PL/SQL RECORD types. JPublisher is used to create a corresponding OBJECT type whose attributes match the fields of the RECORD, and conversion APIs that convert from RECORD to OBJECT and vice versa. JPublisher also generates a wrapper procedure (or function) that accepts the OBJECT and invokes the underlying method using the conversion APIs in both directions. The invoked methods must be installed in an Oracle database (not Oracle Lite).

MasterDetail

Illustrates how to migrate data from one set of tables to another. The sample uses the Oracle Database Adapter to read data from one set of tables, process the data, and write it into another set of database tables.

Merge

Illustrates the outbound merge operation of the Oracle Database Adapter. An XML record is passed to the operation and a corresponding row in the database is either inserted or updated.

PollingControlTableStrategy

Illustrates an inbound polling operation to poll XML instances from the MOVIES table. When a new row is inserted into the MOVIES table, the polling operation raises it to Oracle BPEL PM. This strategy uses a control table to store the primary key of every row that has not yet been processed. With a natural join between the control table and the source table (by primary key), polling against the control table is practically the same as polling against the source table directly.

PollingLastReadIdStrategy

Illustrates an inbound polling operation to poll XML instances from the MOVIES table. Whenever a new row is inserted into the MOVIES table, the polling operation raises it to Oracle BPEL PM. This strategy uses a helper table to remember a sequence value.

PollingLastUpdatedStrategy

Illustrates an inbound polling operation to poll XML instances from the MOVIES table. Whenever a new row is inserted into the MOVIES table, the polling operation raises it to Oracle BPEL PM. This strategy involves using a helper table to remember a last_updated value.

PollingLogicalDeleteStrategy

Illustrates an inbound polling operation to poll XML instances from the MOVIES table. Whenever a new row is inserted into the MOVIES table, the polling operation raises it to Oracle BPEL PM. This strategy involves updating a special field on each row processed, and updating the WHERE clause at run time to filter out processed rows.

PureSQLPolling

Illustrates how to poll a table based on a date field.

PureSQLSelect

Illustrates how to bypass the JDeveloper BPEL Designer WHERE-clause builder to specify arbitrarily complex SQL strings for SELECT operations.

QueryByExample

Illustrates the outbound queryByExample operation of the Oracle Database Adapter. A SELECT SQL query is built dynamically based on fields set in an example XML record, and any matching records are returned.

RefCursors

Illustrates how to use a REF CURSOR with a strongly typed or weakly typed XSD. You can use the Adapter Configuration Wizard to create a strongly typed XSD for a row set returned by an Oracle Database stored procedure or function REF CURSOR variable. For more information, see Section 9.3.3, "Row Set Support Using a Strongly or Weakly Typed XSD".

ResultSetConverter

Illustrates a workaround for using a REF CURSOR. The solution involves the use of a Java stored procedure to convert the corresponding java.sql.ResultSet into a collection (either VARRAY or NESTED TABLE) of OBJECTs.

SelectAll

Illustrates the outbound SelectAll operation of the Oracle Database Adapter. With no WHERE clause, all rows in the MOVIES table are returned as XML.

SelectAllByTitle

Illustrates the outbound SelectAllByTitle operation of the Oracle Database Adapter. The row in the MOVIES table with the selected title is returned as XML.

Update

Illustrates the outbound Update operation of the Oracle Database Adapter. An XML record is passed to the operation and the row in the database with the same primary key is updated. (In JDeveloper BPEL Designer, Merge (Insert or Update) is provided.)


See Table 9-3 for the structure of the MOVIES table, which is used for many of the use cases. The readme.txt files that are included with most of the samples provide instructions.

9.8.2 Use Cases for Oracle Database Adapter - Stored Procedures

This section includes the following use cases:

In addition to the use cases documented in this section, refer to the sample Oracle Database Adapter use cases available by accessing the Oracle SOA Sample Code site.

Table 9-21 shows the Oracle Database Adapter stored procedure samples that are provided with Oracle BPEL PM, and Mediator.

Table 9-21 Oracle Database Adapter Use Cases - Stored Procedures

Tutorial Name Description

JPublisherWrapper

Illustrates a workaround for using PL/SQL RECORD types. JPublisher is used to create a corresponding OBJECT type whose attributes match the fields of the RECORD, and conversion APIs that convert from RECORD to OBJECT and vice versa. JPublisher also generates a wrapper procedure (or function) that accepts the OBJECT and invokes the underlying method using the conversion APIs in both directions. The invoked methods must be installed in an Oracle database (not Oracle Lite).

RefCursors

Illustrates how to use a REF CURSOR with a strongly typed or weakly typed XSD. You can use the Adapter Configuration Wizard to create a strongly typed XSD for a row set returned by an Oracle Database stored procedure or function REF CURSOR variable. For more information, see Section 9.3.3, "Row Set Support Using a Strongly or Weakly Typed XSD".

ResultSetConverter

Illustrates a workaround for using a REF CURSOR. The solution involves the use of a Java stored procedure to convert the corresponding java.sql.ResultSet into a collection (either VARRAY or NESTED TABLE) of OBJECTs.


See Table 9-3 for the structure of the MOVIES table, which is used for many of the use cases. The readme.txt files that are included with most of the samples provide instructions.

9.8.2.1 Creating and Configuring a Stored Procedure in JDeveloper BPEL Designer

This use case describes how to integrate a stored procedure into BPEL Process Manager with JDeveloper BPEL Designer.

This use case includes the following sections:

9.8.2.1.1 Prerequisites

To perform this use case, you must define the following stored procedure in the SCOTT schema:

SQL> CREATE PROCEDURE hello (name IN VARCHAR2, greeting OUT VARCHAR2) AS
  2  BEGIN
  3     greeting := 'Hello ' || name;
  4  END;
  5  /
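
Based on the patterns shown earlier in this chapter, the generated service exchanges InputParameters/OutputParameters messages whose element names follow the argument names. The expected round trip can be sketched in Python (the element names NAME and GREETING are assumptions derived from the procedure signature, and the procedure's logic is mirrored in the hello function):

```python
import xml.etree.ElementTree as ET

def hello(name: str) -> str:
    # Mirrors the stored procedure body: greeting := 'Hello ' || name;
    return "Hello " + name

request = ET.fromstring("<InputParameters><NAME>John</NAME></InputParameters>")
greeting = hello(request.findtext("NAME"))

response = ET.Element("OutputParameters")
ET.SubElement(response, "GREETING").text = greeting
print(ET.tostring(response, encoding="unicode"))
# <OutputParameters><GREETING>Hello John</GREETING></OutputParameters>
```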
  
9.8.2.1.2 Creating an Application and an SOA Composite

You must create a JDeveloper application to contain the SOA composite. To create an application and a project for the use case, perform the following:

  1. In the Application Navigator of JDeveloper, click New Application.

    The Create Generic Application - Name your application page is displayed.

  2. Enter MyHelloApp in the Application Name field, and click Next.

    The Create Generic Application - Name your project page is displayed.

  3. Enter HelloProject in the Project Name field.

  4. In the Available list in the Project Technologies tab, double-click SOA to move it to the Selected list.

  5. Click Next.

    The Create Generic Application - Configure SOA Settings page is displayed.

  6. Select Composite With BPEL in the Composite Template box, and click Finish. The Create BPEL Process page is displayed.

  7. Enter Greet in the Name field, and then select Synchronous BPEL Process from the Template box.

  8. Click OK.

    The Greet BPEL process in the HelloProject of MyHelloApp is displayed in the design area, as shown in Figure 9-60.

    Figure 9-60 The JDeveloper - Composite.xml

Description of "Figure 9-60 The JDeveloper - Composite.xml"

9.8.2.1.3 Creating the Outbound Oracle Database Adapter Service

Perform the following steps to create an outbound Oracle Database Adapter service:

  1. Drag and drop Database Adapter from the Component Palette to the External References swim lane.

    The Adapter Configuration Wizard Welcome page is displayed.

  2. Click Next.

    The Service Name page is displayed.

  3. Enter Hello in the Service Name field.

  4. Click Next.

    The Service Connection page is displayed.

    Note:

    Ensure that you have configured the JNDI name in the weblogic-ra.xml file before deploying this application.

    For more information, refer to Section 2.18.1, "Creating a Data Source" and Section 2.20, "Recommended Setting for Data Sources Used by Oracle JCA Adapters."

  5. Click the Create a New Database Connection icon.

    The Create Database Connection dialog is displayed.

  6. Enter the following details in the Create Database Connection dialog:

    1. Enter a connection name in the Connection Name field. For example, MyConnection.

    2. Select Oracle (JDBC) for Connection Type.

    3. Enter the user name and password as scott/tiger.

    4. Enter the host name in the Host Name field and the JDBC port in the JDBC Port field.

    5. Select SID and enter the SID name. Alternatively, select Service Name and enter the service name.

    6. Click Test Connection. A success message is displayed in the Status pane.

    7. Click OK.

    The Connection field is populated with the MyConnection connection and the JNDI field is populated with eis/DB/MyConnection.

  7. Click Next.

    The Operation Type page is displayed.

  8. Select Call a Stored Procedure or Function, and then click Next.

    The Specify Stored Procedure page is displayed.

  9. Click Browse. Select HELLO in the Stored Procedures pane.

    The Arguments tab displays the parameters of the stored procedure and the Source tab displays the source code of the stored procedure.

  10. Click OK.

    The Specify Stored Procedure page is displayed. The Procedure field is populated with the HELLO stored procedure and the arguments for the HELLO stored procedure are also displayed.

  11. Click Next.

    The Advanced Options page is displayed.

  12. Specify any additional advanced options, and then click Next.

    The Adapter Configuration Wizard - Finish page is displayed.

  13. Click Finish.

    The Create Partner Link dialog box is displayed. The name of the partner link is Hello, which is the same as the service name.

  14. Click OK.

    The outbound Oracle Database Adapter is now configured and the Greet BPEL process is displayed.
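The HELLO stored procedure selected in step 9 is assumed to already exist in the SCOTT schema before you run the wizard. It takes the single IN parameter name and returns its result through the single OUT parameter greeting. A minimal sketch of such a procedure follows; the exact text in your environment may differ:

```sql
-- Sketch of the HELLO procedure assumed by this use case:
-- one IN parameter (name) and one OUT parameter (greeting).
create or replace procedure hello (name in varchar2, greeting out varchar2) as
begin
  greeting := 'Hello ' || name;
end;
```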

9.8.2.1.4 Add an Invoke Activity

The following are the steps to add an invoke activity:

  1. Drag and drop an Invoke activity from the Component Palette to the design area between the receiveInput activity and the replyOutput activity.

  2. Double-click the Invoke activity.

    The Edit Invoke dialog is displayed.

  3. Enter Input in the Name field.

  4. Click the Automatically Create Input Variable icon to the right of the Input Variable field in the Invoke box.

    The Create Variable dialog is displayed.

  5. Select the default variable name and click OK.

    The Input Variable field is populated with the default variable name. The Invoke dialog is displayed.

  6. Repeat the same procedure to create an output variable in the Output Variable field.

    In the Variables section of the Edit Invoke dialog the Input and Output variable names are displayed.

  7. Click OK.

    A line with a right arrow connecting to the Hello partner link is displayed, as shown in Figure 9-61.

    Figure 9-61 The Greet.bpel Page

    Description of Figure 9-61 follows
    Description of "Figure 9-61 The Greet.bpel Page"

9.8.2.1.5 Change the Message Part of the Request Message

When the payload of the request matches the InputParameters, all of the IN parameters are included in the request. The only IN parameter in this example is name.

The following are the steps to change the message part for the GreetRequestMessage message:

  1. In the Structure Pane for the Greet BPEL process, which is beneath the Application pane, expand Message Types, then Process WSDL - Greet.wsdl, and then GreetRequestMessage.

  2. Select payload, and then click the Edit icon.

    The Edit Message Part - payload dialog is displayed.

  3. Choose Element and then click the Search icon.

    The Type Chooser dialog is displayed.

  4. Expand Project Schema Files, then SCOTT_HELLO.xsd, and select InputParameters.

  5. Click OK.

    The Edit Message Part - payload dialog is displayed.

  6. Click OK.

9.8.2.1.6 Change the Message Part of the Response Message

When the payload of the response matches the OutputParameters, all of the OUT parameters are included in the response. The only OUT parameter in this example is greeting.

The steps for the GreetResponseMessage message part are the same as those for GreetRequestMessage, with the following exceptions:

  1. Expand the GreetResponseMessage message type, and then select payload.

  2. Expand SCOTT_HELLO.xsd in the Type Chooser dialog.

  3. Select OutputParameters.
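After both message parts are changed, the request and response payloads exchanged with the Hello partner link are the InputParameters and OutputParameters elements from SCOTT_HELLO.xsd. The following is a sketch of what the runtime XML might look like; the namespace follows the pattern used elsewhere in this chapter, and the element names are assumptions based on the parameter names:

```xml
<!-- Request payload: the single IN parameter, NAME -->
<InputParameters xmlns="http://xmlns.oracle.com/pcbpel/adapter/db/SCOTT/HELLO/">
  <NAME>John</NAME>
</InputParameters>

<!-- Response payload: the single OUT parameter, GREETING -->
<OutputParameters xmlns="http://xmlns.oracle.com/pcbpel/adapter/db/SCOTT/HELLO/">
  <GREETING>Hello John</GREETING>
</OutputParameters>
```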

9.8.2.1.7 Add an Assign Activity for the Input Variable

The following are the steps to add an Assign activity for the input variable:

  1. Drag and drop an Assign activity from the Component Palette in between the receiveInput and Greet invoke activities in the design area.

  2. Double-click the Assign activity.

    The Assign dialog is displayed.

  3. Click General and enter NAME in the Name field.

  4. In the Copy Operation tab, click the plus icon, and select Copy Operation from the list of operations displayed.

    The Create Copy Operation dialog is displayed.

  5. In the From area expand Variables, inputVariable, payload, and then select ns2:InputParameters.

  6. In the To area expand Variables, Input_Hello_InputVariable, InputParameters, and then select ns2:InputParameters.

  7. Click OK.

    You have assigned a value to the input parameter.

    The Assign dialog is displayed, as shown in Figure 9-62. This dialog shows the assign from the inputVariable payload to the Input_Hello_InputVariable payload.

    Figure 9-62 The Create Copy Operation Dialog

    Description of Figure 9-62 follows
    Description of "Figure 9-62 The Create Copy Operation Dialog"

  8. Click File, Save All.

9.8.2.1.8 Add an Assign Activity for the Output Variable

In the second assign activity, you assign a value to the output parameter.

The steps for assigning a value to the output parameter are the same as assigning value to the input parameter with the following exceptions:

  1. Drag and drop an Assign activity from the Component Palette in between the Greet invoke and replyOutput activities in the design area.

  2. Double-click the Assign activity.

    The Assign dialog is displayed.

  3. Enter Greeting in the Name field.

  4. In the Copy Operation tab, click the plus icon, and select Copy Operation from the list of operations displayed.

    The Create Copy Operation dialog is displayed.

  5. In the From pane expand Input_Hello_OutputVariable, OutputParameters, and then select ns2:OutputParameters, as shown in Figure 9-63.

  6. In the To pane expand outputVariable, payload, and then select ns2:OutputParameters, as shown in Figure 9-63.

    Figure 9-63 The Create Copy Operation Dialog

    Description of Figure 9-63 follows
    Description of "Figure 9-63 The Create Copy Operation Dialog"

  7. Click OK.

    You have assigned a value to the output parameter.

  8. Click File, Save All.

    You have completed modeling a BPEL Process. The final BPEL process is displayed, as shown in Figure 9-64.

    Figure 9-64 The Final BPEL Process Screen

    Description of Figure 9-64 follows
    Description of "Figure 9-64 The Final BPEL Process Screen"

9.8.2.1.9 Deploying with JDeveloper

You must deploy the application profile for the SOA project and the application you created in the preceding steps. To deploy the application profile using JDeveloper, use the following steps:

  1. Create an application server connection using the procedure described in Chapter 2, "Creating an Application Server Connection for Oracle JCA Adapters."

  2. Deploy the application using the procedure described in Section 2.7, "Deploying Oracle JCA Adapter Applications from JDeveloper."

9.8.2.1.10 Creating a DataSource in Oracle WebLogic Server Administration Console

Before you can test the HelloProject you must create a data source using the Oracle WebLogic Server Administration Console.

The following are the steps:

  1. Enter http://<hostname>:<port>/console in your Web browser.

  2. Enter a user name and password and click Log In.

    The administration console is displayed.

  3. In the Services area under JDBC click Data Sources.

    A summary of JDBC Data Sources is displayed.

  4. Click New.

    The Create a New JDBC Data Source page is displayed.

  5. In the Create a New JDBC Data Source page, enter the following details:

    • MyDataSource in the Name field.

    • jdbc/MyDataSource in the JNDI Name field.

    • The Database Type is Oracle.

    • The Database Driver is Oracle's Driver (Thin XA) for Instance Connections; Versions 9.0.1, 9.2.0, 10, 11.

  6. Click Next.

    A message stating that no other transaction configuration options are available is displayed.

  7. Click Next.

    The Create a New Data Source page is displayed.

  8. Enter the following details:

    • Database Name: This is usually the SID.

    • Host Name: Enter the name of the host computer.

    • Port Number: Enter the port number.

      The default port is 1521.

    • Database User Name: Enter SCOTT

    • Password: Enter TIGER.

    • Confirm Password: Enter TIGER.

  9. Click Next.

    A summary of the data source configuration is displayed.

  10. Click Test Configuration.

    The Messages area indicates that the connection test succeeded.

  11. Click Next. Select AdminServer as the target by selecting the check box.

  12. Click Finish.

    The summary of JDBC Data Sources now includes the MyDataSource data source that you created in the preceding steps.

9.8.2.1.11 Monitoring Using the Fusion Middleware Control Console

You can monitor the deployed SOA composite using the Fusion Middleware Control Console. Perform the following steps:

  1. Navigate to http://servername:portnumber/em. A list of SOA composites is displayed, including the HelloProject[1.0] that you created in the preceding steps.

  2. Click the HelloProject[1.0] link. The Dashboard tab is displayed, as shown in Figure 9-65.

    Figure 9-65 The Dashboard Tab of the HelloProject[1.0] Project

    Description of Figure 9-65 follows
    Description of "Figure 9-65 The Dashboard Tab of the HelloProject[1.0] Project"

  3. Click Test. A new browser window is displayed.

  4. Enter your name in the NAME field that is marked xsd:string and then click Invoke.

    The browser window displays the Test Result.

  5. To view the XML file in readable form, click Formatted XML. Figure 9-66 shows the formatted XML file.

    Figure 9-66 The Formatted XML File

    Description of Figure 9-66 follows
    Description of "Figure 9-66 The Formatted XML File"

9.8.2.2 File To StoredProcedure Use Case

This use case illustrates the execution of an Oracle stored procedure. The input to the stored procedure is obtained by reading a file using the File Adapter. The stored procedure executes, populating a table with the data from its input parameter.

To obtain the adapter-db-101-file2storedprocedure use case, access the Oracle SOA Sample Code site.

This use case includes the following topics:

9.8.2.2.1 Prerequisites

To perform the file to stored procedure use case, the following schema objects and stored procedure must be defined in the SCOTT/TIGER schema before modeling the BPEL Composite using JDeveloper.

create type address as object
(
  street varchar2(20),
  city   varchar2(15),
  state  char(2),
  zip    char(5)
);
create type customer as object
(
  fname varchar2(10),
  lname varchar2(10),
  loc   address,
  email varchar2(25),
  phone varchar2(15)
);
create type collection as table of customer;
create table customers
(
  name  varchar2(25),
  loc   varchar2(45),
  email varchar2(25),
  phone varchar2(15)
);
create procedure add_customers(c in collection) as
begin
  for i in c.first .. c.last loop
    insert into customers values (
      c(i).lname || ', ' || c(i).fname,
      c(i).loc.street || ', ' || c(i).loc.city || ', ' || c(i).loc.state || ' ' ||
 c(i).loc.zip,
      c(i).email,
      c(i).phone    );
  end loop;
end;

You can define these schema objects and stored procedure using the adapters-db-101-file2storedprocedure/artifacts/sql/customers.sql file from the adapters-db-101-file2storedprocedure sample by accessing the Oracle SOA Sample Code site.

9.8.2.2.2 Creating an Application and an SOA Project

You must create a JDeveloper application to contain the SOA composite. Use the following steps to create a new application and an SOA project:

  1. Open JDeveloper.

  2. In the Application Navigator, click New Application. The Create Generic Application - Name your Application page is displayed.

  3. Enter File2SPApp in the Application Name field.

  4. In the Application Template list, select Generic Application.

  5. Click Next.

    The Create Generic Application - Name your project page is displayed.

  6. In the Project Name field, enter a descriptive name. For example, File2SPProject.

  7. In the Available list in the Project Technologies tab, double-click SOA to move it to the Selected list.

  8. Click Next. The Create Generic Application - Configure SOA Settings page is displayed.

  9. Select Composite With BPEL from the Composite Template list, and then click Finish.

    You have created a new application and an SOA project. This automatically creates an SOA composite.

    The Create BPEL Process page is displayed.

  10. Enter a name for the BPEL process in the Name field. For example, File2SP.

  11. Select Define Service Later in the Template list, and then click OK.

    The File2SP BPEL process in the File2SPProject of File2SPApp is displayed in the design area.

9.8.2.2.3 Creating the Outbound Oracle Database Adapter Service

Perform the following steps to create an outbound Oracle Database Adapter service:

  1. Drag and drop Database Adapter from the Service Adapters list to the External References swim lane. The Adapter Configuration Wizard Welcome page is displayed.

  2. Click Next. The Service Name page is displayed.

  3. Enter File2SPService in the Service Name field.

  4. Click Next.

    The Service Connection page is displayed.

  5. Click the Create a New Database Connection icon.

    The Create Database Connection dialog is displayed.

  6. Enter the following details in the Create Database Connection dialog:

    1. Enter a connection name in the Connection Name field. For example, MyConnection.

    2. Select Oracle (JDBC) for Connection Type.

    3. Enter the user name and password as scott/tiger.

    4. Enter the host name in the Host Name field and the JDBC port in the JDBC Port field.

    5. Select SID and enter the SID name. Alternatively, select Service Name and enter the service name.

    6. Click Test Connection. A success message is displayed in the Status pane.

    7. Click OK.

    The Connection field is populated with the MyConnection connection and the JNDI field is populated with eis/DB/MyConnection.

  7. Click Next.

    The Adapter Interface page is displayed.

  8. In the Adapter Interface page, select Define from operation and schema (specified later), and then click Next.

    The Operation Type page is displayed.

  9. Select Call a Stored Procedure or Function, as shown in Figure 9-67, and click Next.

    The Specify Stored Procedure page is displayed.

    Figure 9-67 The Adapter Configuration Wizard - Operation Type Page

    Description of Figure 9-67 follows
    Description of "Figure 9-67 The Adapter Configuration Wizard - Operation Type Page"

  10. Click Browse. Select ADD_CUSTOMERS in the Stored Procedures pane.

    The Arguments tab displays the parameters of the stored procedure and the Source tab displays the source code of the stored procedure.

  11. Click OK.

    The Specify Stored Procedure page is displayed.

    The Procedure field is populated with the ADD_CUSTOMERS stored procedure and the arguments for the ADD_CUSTOMERS stored procedure are also displayed.

  12. Click Next.

    The Advanced Options page is displayed.

  13. Specify any additional options, and then click Next.

    The Finish page is displayed.

  14. Click Finish.

    The Create Partner Link dialog is displayed.

    The name of the partner link is File2SPService, which is the same as the service name.

  15. Click OK.

    The outbound Oracle Database Adapter is now configured and the File2SP BPEL process is displayed.

9.8.2.2.4 Creating an Invoke Activity

You must complete the BPEL process by creating an Invoke activity. This creates the input variables.

The following are the steps to create an Invoke activity:

  1. Click File, Save All.

  2. Drag and drop an Invoke activity from the Component Palette to the design area.

  3. Drag the right arrow on the right of the Invoke activity and connect it to the File2SPService partner link.

    The Edit Invoke dialog is displayed.

  4. Enter Invoke in the Name field.

  5. Click the Automatically Create Input Variable icon to the right of the Input Variable field in the Invoke dialog.

    The Create Variable dialog is displayed.

  6. Select the default variable name and click OK.

    The Input variable name is displayed in the Variables area of the Edit Invoke dialog.

  7. Click OK.

    A line with a right arrow connecting to the File2SPService partner link is displayed.

9.8.2.2.5 Creating the Inbound File Adapter Service

Perform the following steps to create an inbound File adapter service. This creates the service that reads input XML from a file directory:

  1. Drag and drop the File Adapter from the Component Palette to the Exposed Services swim lane.

    The Adapter Configuration Wizard Welcome page is displayed.

  2. Click Next. The Service Name page is displayed.

  3. Enter ReadCustomers in the Service Name field.

  4. Click Next.

    The Adapter Interface page is displayed.

  5. Select Define from operation and schema (specified later), and then click Next. The Operation page is displayed.

  6. Select Read File as the Operation Type and Read as the Operation Name. Do not select the other check boxes.

  7. Click Next.

    The File Directories page is displayed.

  8. Select Physical Path, and enter a physical path in the Directory for Incoming Files field.

  9. Select Process files recursively and Delete files after successful delivery, and then click Next.

    The File Filtering page is displayed.

  10. Specify File Wildcards, enter customers.xml in the Include Files with Name Pattern field, and then click Next.

    The File Polling page is displayed.

  11. Specify any value in the Polling Frequency field, and click Next.

    The Message page is displayed.

  12. Click Browse For Schema File that is displayed at the end of the URL field.

    The Type Chooser dialog is displayed.

  13. Expand Project Schema Files, then SCOTT_ADD_CUSTOMERS.xsd, and select InputParameters.

  14. Click OK.

    The Messages page is displayed again. The URL is xsd/SCOTT_ADD_CUSTOMERS.xsd, and the Schema Element is InputParameters.

  15. Click Next.

    The Finish page is displayed.

  16. Click Finish.

    This completes the configuration of the inbound File Adapter service.

  17. Click OK to complete the partner link.

  18. Click File, Save All.
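The SCOTT_ADD_CUSTOMERS.xsd file selected in step 13 is generated by the Adapter Configuration Wizard from the signature of the ADD_CUSTOMERS procedure: its InputParameters element mirrors the collection parameter C, a table of CUSTOMER objects. The following is a simplified sketch of that schema, inferred from the customers.xml instance shown later in this use case; the actual generated file contains additional annotations and may differ:

```xml
<schema xmlns="http://www.w3.org/2001/XMLSchema"
        targetNamespace="http://xmlns.oracle.com/pcbpel/adapter/db/SCOTT/ADD_CUSTOMERS/"
        elementFormDefault="qualified">
  <!-- IN parameter C: a COLLECTION (table) of CUSTOMER objects -->
  <element name="InputParameters">
    <complexType>
      <sequence>
        <element name="C" minOccurs="0">
          <complexType>
            <sequence>
              <element name="C_ITEM" minOccurs="0" maxOccurs="unbounded">
                <complexType>
                  <sequence>
                    <element name="FNAME" type="string" minOccurs="0"/>
                    <element name="LNAME" type="string" minOccurs="0"/>
                    <!-- LOC maps to the nested ADDRESS object type -->
                    <element name="LOC" minOccurs="0">
                      <complexType>
                        <sequence>
                          <element name="STREET" type="string" minOccurs="0"/>
                          <element name="CITY" type="string" minOccurs="0"/>
                          <element name="STATE" type="string" minOccurs="0"/>
                          <element name="ZIP" type="string" minOccurs="0"/>
                        </sequence>
                      </complexType>
                    </element>
                    <element name="EMAIL" type="string" minOccurs="0"/>
                    <element name="PHONE" type="string" minOccurs="0"/>
                  </sequence>
                </complexType>
              </element>
            </sequence>
          </complexType>
        </element>
      </sequence>
    </complexType>
  </element>
</schema>
```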

9.8.2.2.6 Adding a Receive Activity

The File Adapter Service provides input to the Receive Activity, which then initiates the rest of the BPEL Process.

The following are the steps to add a Receive activity:

  1. Double-click File2SP. The File2SP.bpel page is displayed.

  2. Drag and drop a Receive activity from the Component Palette to the design area.

  3. Drag the left arrow on the left of the Receive activity and connect it to the ReadCustomers partner link.

    The Edit Receive dialog is displayed.

  4. Enter Receive in the Name field.

  5. Click the Automatically Create Input Variable icon to the right of the Variable field in the Edit Receive dialog.

    The Create Variable dialog is displayed.

  6. Select the default variable name and click OK.

    The Variable field is populated with the default variable name.

  7. Select Create Instance, and click OK. The JDeveloper File2SP.bpel page is displayed.

    After adding the Receive activity, the JDeveloper window appears, as shown in Figure 9-68.

    Figure 9-68 Adding a Receive Activity

    Description of Figure 9-68 follows
    Description of "Figure 9-68 Adding a Receive Activity"

  8. Click File, Save All.

9.8.2.2.7 Adding an Assign Activity

Next, you must assign a value to the input parameter.

The following are the steps to add an Assign activity:

  1. Drag and drop an Assign activity from the Component Palette in between the Receive and Invoke activities in the design area.

  2. Double-click the Assign activity.

    The Assign dialog is displayed.

  3. Click General, and then enter CUSTOMER in the Name field.

  4. Click the Copy Operation tab.

    The Assign dialog is displayed, as shown in Figure 9-69.

    Figure 9-69 The Assign Dialog - Copy Operation Tab

    Description of Figure 9-69 follows
    Description of "Figure 9-69 The Assign Dialog - Copy Operation Tab"

  5. Click the icon with the plus sign, as shown in Figure 9-69, and then select Copy Operation.

    The Create Copy Operation dialog is displayed.

  6. In the From area expand Process, Variables, Receive_Read_InputVariable and then body.

  7. Select ns3:InputParameters.

  8. In the To area expand Process, Variables, Invoke_File2SPService_InputVariable, and then InputParameters.

  9. Select ns3:InputParameters.

  10. Click OK. The Assign dialog is displayed, as shown in Figure 9-70.

    Figure 9-70 The Assign Dialog

    Description of Figure 9-70 follows
    Description of "Figure 9-70 The Assign Dialog"

  11. Click OK.

    The JDeveloper File2SP.bpel page is displayed, as shown in Figure 9-71.

    Figure 9-71 The JDeveloper - File2SP.bpel

    Description of Figure 9-71 follows
    Description of "Figure 9-71 The JDeveloper - File2SP.bpel"

  12. Click File, Save All.

9.8.2.2.8 Wiring Services and Activities

You must assemble or wire the three components that you have created: Inbound adapter service, BPEL process, Outbound adapter reference. Perform the following steps to wire components:

  1. Drag the small triangle in ReadCustomers in the Exposed Services area to the drop zone that appears as a green triangle in the BPEL process in the Components area.

  2. Drag the small triangle in the BPEL process in the Components area to the drop zone that appears as a green triangle in File2SPService in the External References area.

  3. Click File, Save All.

9.8.2.2.9 Deploying with JDeveloper

You must deploy the application profile for the SOA project and the application you created in the preceding steps. To deploy the application profile using JDeveloper, use the following steps:

  1. Create an application server connection using the procedure described in Chapter 2, "Creating an Application Server Connection for Oracle JCA Adapters."

  2. Deploy the application using the procedure described in Section 2.7, "Deploying Oracle JCA Adapter Applications from JDeveloper."

9.8.2.2.10 Creating a Data Source

Before you can test the File2SPProject you must create a data source using the Oracle WebLogic Server Administration Console, by using the following steps:

  1. Navigate to http://servername:portnumber/console.

  2. Use the required credentials to open the Home page of the Oracle WebLogic Server Administration Console.

    The Home page of the Oracle WebLogic Server Administration Console is displayed, as shown in Figure 9-72.

    Figure 9-72 Oracle WebLogic Server Administration Console Home Page

    Description of Figure 9-72 follows
    Description of "Figure 9-72 Oracle WebLogic Server Administration Console Home Page"

  3. Under Domain Structure, select Services, then JDBC, and then click Data Sources.

    The Summary of JDBC Data Sources page is displayed, as shown in Figure 9-73.

    Figure 9-73 The Summary of JDBC Data Sources Page

    Description of Figure 9-73 follows
    Description of "Figure 9-73 The Summary of JDBC Data Sources Page"

  4. Click New.

    The Create a New JDBC Data Source page is displayed.

  5. Enter the following values for the properties to be used to identify your new JDBC data source:

    • Enter MyDataSource in the Name field.

    • Enter jdbc/MyDataSource in the JNDI Name field.

    • Retain the default value Oracle for Database Type.

    • Retain the default value Oracle's Driver (Thin XA) for Instance Connections; Versions 9.0.1, 9.2.0, 10, 11 for Database Driver.

  6. Click Next.

    The Create a New JDBC Data Source - Transaction Options page is displayed. A message stating, "No other transaction configuration options are available." is displayed.

  7. Click Next.

    The Create a New JDBC Data Source - Connection Properties page is displayed.

  8. Enter the following connection properties in the Connection Properties page:

    • Enter a name in the Database Name field, which is usually the SID.

    • Enter the host name in the Host Name field.

    • Enter the port number in the Port field.

    • Enter SCOTT in the Database User Name field.

    • Enter TIGER in the Password field.

    • Enter TIGER in the Confirm Password field.

  9. Click Next. The Create a New JDBC Data Source - Test Database Connection page is displayed.

  10. Click Test Configuration to test the database availability and the connection properties you provided. A message stating "Connection test succeeded" is displayed at the top of the Create a New JDBC Data Source - Test Database Connection page.

  11. Click Next.

    The Create a New JDBC Data Source - Select Targets page is displayed.

  12. Select AdminServer as target, and then click Finish.

    The Summary of JDBC Data Sources page is displayed. This page summarizes the JDBC data source objects that have been created in this domain. The Data Source that you created appears in this list.

9.8.2.2.11 Adding a Connection-Instance

The database adapter needs an instance entry, which points to a data source.

The following are the steps to add a connection instance:

  1. Search beahome/ for DbAdapter.rar.

  2. Unzip the file.

  3. Edit META-INF/weblogic-ra.xml (and possibly ra.xml), as shown in the following example:

    <connection-instance>
      <jndi-name>eis/DB/MyConnection</jndi-name>
      <connection-properties>
        <properties>
          <property>
            <name>xADataSourceName</name>
            <value>jdbc/MyDataSource</value>
          </property>
        </properties>
      </connection-properties>
    </connection-instance>
    
  4. Use the same database connection name, MyConnection, for the JNDI name.

  5. Use the same data source name, MyDataSource, as the xADataSourceName.

  6. Jar the file again.

  7. Restart the application server.

    You can also create a new database adapter instance using the Oracle WebLogic Server Administration Console.

9.8.2.2.12 Testing using the File Adapter Service and SQL*Plus

You must test the BPEL process by providing an input file for the File Adapter. The results of the BPEL process can be seen with a simple query against the customers table. The customers.xml file contains the following input:

<InputParameters xmlns="http://xmlns.oracle.com/pcbpel/adapter/db/SCOTT/ADD_CUSTOMERS/">
  <C>
    <C_ITEM>
      <FNAME>John</FNAME>
      <LNAME>Doe</LNAME>
      <LOC>
        <STREET>123 Main Street</STREET>
        <CITY>Anytown</CITY>
        <STATE>CA</STATE>
        <ZIP>12345</ZIP>
      </LOC>
      <EMAIL>john.smith@gmail.com</EMAIL>
      <PHONE>567-123-9876</PHONE>
    </C_ITEM>
    <C_ITEM>
      <FNAME>Jane</FNAME>
      <LNAME>Doe</LNAME>
      <LOC>
        <STREET>987 Sunset Blvd</STREET>
        <CITY>Sometown</CITY>
        <STATE>CA</STATE>
        <ZIP>34567</ZIP>
      </LOC>
      <EMAIL>JaneDoe@yahoo.com</EMAIL>
      <PHONE>567-123-9876</PHONE>
    </C_ITEM>
  </C>
</InputParameters>

The following are the steps for testing the BPEL process you created:

  1. Place a copy of customers.xml in the input directory that you specified when you created the inbound File Adapter Service.

  2. The Oracle File Adapter polls the directory for new files. The Receive activity initiates the BPEL process once the file is received by the File Adapter Service.

  3. The data for all of the customers is assigned to the InputParameters of the stored procedure.

  4. The stored procedure executes. It transforms the data for each customer and then inserts the customer data into a table.

  5. Query the table to see the following results:

    SQL> select * from customers;
    
    NAME                      LOC
    ------------------------- ---------------------------------------------
    EMAIL                     PHONE
    ------------------------- ---------------
    Doe, John                 123 Main Street, Anytown, CA 12345
    john.smith@gmail.com      567-123-9876
    
    Doe, Jane                 987 Sunset Blvd, Sometown, CA 34567
    JaneDoe@yahoo.com         567-123-9876
    
9.8.2.2.13 Monitoring Using the Fusion Middleware Control Console

You can monitor the deployed SOA composite using the Fusion Middleware Control Console. Perform the following steps:

  1. Navigate to http://servername:portnumber/em.

    A list of SOA composites is displayed, including File2SPProject[1.0] that you created in the preceding steps.

  2. Click File2SPProject[1.0].

    The Dashboard is displayed. Note your Instance ID in the Recent Instances area.

  3. Click the Instances tab.

    A Search dialog is displayed. The default search displays all instances by their Instance ID.

  4. Select the Instance ID that you noted above.

    A new window opens. It lists any faults (No faults found) and enables you to view the Audit Trail of your instance. Your instance trace is displayed in a new window.

  5. Expand the instance tree from ReadCustomers (service) to File2SP (BPEL component) to File2SPService (reference).

  6. Click File2SP BPEL Component.

    The Audit Trail of your process is displayed.

  7. Expand the <payload> node to see the input provided to the stored procedure, as shown in Figure 9-74.

    Figure 9-74 The Audit Trail Tab

    Description of Figure 9-74 follows
    Description of "Figure 9-74 The Audit Trail Tab"

  8. Additionally, click the Flow tab to view the process flow, as shown in Figure 9-75.

    Figure 9-75 Viewing the Process Flow

    Description of Figure 9-75 follows
    Description of "Figure 9-75 Viewing the Process Flow"

9.8.3 Database Adapter/Coherence Integration

There is a performance improvement when the Database Adapter is used with Coherence Cache on an Exalogic system. The feature that provides this improvement is called Database Adapter/Coherence Integration.

There are two specific use cases where there is an advantage to using the Database Adapter with Coherence Cache on an Exalogic system. Specifically, performance can be improved when performing the following operations:

  • Insert/Update to a database

  • Select by primary key

9.8.3.1 Inserts/Updates to a Database

Inserts and updates to a database using the Database Adapter and Coherence Cache are improved through the internal use of an intermediary Coherence data source, called a Coherence Cache, which is essentially an in-memory database.

In the typical case, you perform insert/delete/update operations directly on the database. To improve performance, these operations can first be performed on this Coherence-fronting in-memory database, called a write-behind map, which enables read-write operations using the Cache.

Using such a Coherence map improves the latency of BPEL/OSB processes performing insert/delete/update operations, as these processes can return immediately to the caller without a trip to the database; the actual and intensive work of updating the database is done instead by the Coherence Cache intermediary.

You can use Coherence in this manner when processing large batches of records, which makes record updating more convenient and efficient.

Note:

Database Adapter use cases that do not leverage Coherence Cache include the following operations: inbound polling, pure SQL invokes, stored procedure calls, and general Selects that return multiple rows.

9.8.3.1.1 Select Optimization

The second use case that Database Adapter/Coherence Integration improves is query performance, specifically for Select statements. The integration improves query performance by caching data that might be accessed frequently by many different process instances.

When select optimization is used, the Database Adapter/Coherence Integration maintains a read-only Coherence Cache (also called an L2 read cache), which the Database Adapter checks for a hit before proceeding to the database. In other words, queries are optimized by first checking whether the queried data is in the Coherence Cache; only if it is not found there is the database queried.

When a Coherence cache miss occurs, the data is read from the database and loaded into the Coherence Cache. The presumption is that checking the Coherence Cache is faster than executing a query on the database, and the ratio of cache hits to cache misses is typically high.
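The check-cache-then-database flow can be sketched as follows; the class and method names are illustrative (not Database Adapter API), and plain maps stand in for the Coherence Cache and the database:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of an L2 read cache: check the cache first, and only on a miss
// query the (simulated) database, then prime the cache so that later
// process instances get a hit.
public class ReadCacheSketch {
    private final Map<Integer, String> cache = new HashMap<>();
    private final Map<Integer, String> database = new HashMap<>();
    int databaseHits = 0; // counts actual trips to the database

    public ReadCacheSketch() {
        database.put(42, "CUSTOMER-42"); // sample row keyed by primary key
    }

    public String selectByPrimaryKey(int id) {
        String row = cache.get(id);           // 1. check the cache first
        if (row == null) {                    // 2. cache miss: go to the database
            databaseHits++;
            row = database.get(id);
            if (row != null) {
                cache.put(id, row);           // 3. load the result into the cache
            }
        }
        return row;
    }
}
```

Two lookups of the same key cost only one database trip, which is the benefit the presumption above describes.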

9.8.3.1.2 Queries that Do Not Benefit from Coherence Database Adapter Integration

Not all queries can benefit from cache hits and hence from Coherence Database Adapter Integration. There is no way to know whether a Coherence cache hit covers all records that meet a specified query criterion, or whether there are additional database records that would have matched but were not in the cache.

For this reason, the query optimization feature includes a new kind of Database Adapter operation, Select by Primary Key. Unlike the existing Select and queryByExample operations, Select by Primary Key returns at most a single row. Because the primary key uniquely identifies the requested record, the lookup can be answered directly from the Coherence Cache, improving the performance of the feature against the Cache.

9.8.3.2 Database Adapter/Coherence Integration Architecture

You can choose whether to use Database Adapter/Coherence Integration by making a simple choice among none, read, or read-write in the Operation Type screen of the Database Adapter Wizard. However, it is useful to know some of the background related to the architecture of the Database Adapter/Coherence Integration, as detailed in the following subsections.

For more information on Eclipselink, see http://www.oracle.com/technetwork/middleware/toplink/documentation/index.html

Some background on Coherence can be found at http://www.oracle.com/technetwork/middleware/coherence/overview/index.html

9.8.3.2.1 Current Design of the Database Adapter (No Coherence Cache)

With the current design of the Database Adapter, the Adapter performs selects and inserts through the EclipseLink layer, which communicates directly with the data sources, without a Coherence Cache.

When you choose none in the Cache Usage dropdown on the Operation Type screen in the Database Adapter Configuration Wizard, you indicate that you do not want to use a cache.

9.8.3.2.2 Read-Write Coherence Cache Database Adapter Integration

You can choose to use a read-write cache by choosing read-write from the Cache Usage dropdown of the Operation Type screen of the Database Adapter Wizard.

In this configuration, EclipseLink is arranged in two layers, with the Coherence Cache (a Coherence Cache Store) between the two layers. There is actually only one EclipseLink project, but two copies of that project.

  • The top copy of EclipseLink redirects all inserts/selects from the data store to the Coherence Cache.

  • The bottom copy of EclipseLink handles requests by the Coherence Cache to load a particular record by ID from the database, or to store a particular record to the database.

A Select you execute in the read-write scenario might not uniquely identify the rows to retrieve.

Such a case could be a SELECT * query, or a SELECT with a non-key predicate such as WHERE total_gross > ?.

The write-behind Coherence Cache can only receive requests to load a record by ID. Thus, in either of these cases, if all queries were directed to Coherence Cache, no results would be returned. In this case, the query proceeds to the data source directly, and then the Coherence Cache is updated.
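A minimal sketch of this fallback, assuming plain maps in place of the Coherence Cache and database (the names are illustrative, not Database Adapter API): a non-key query runs against the database directly, and the rows it returns are then pushed into the cache:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch: a write-behind cache can only load records by primary key, so a
// query such as "WHERE total_gross > ?" cannot be answered from the cache.
// The query goes to the (simulated) database, and the cache is updated
// with the rows that come back.
public class NonUniqueQuerySketch {
    static final Map<Integer, Double> database = new LinkedHashMap<>();
    static final Map<Integer, Double> cache = new HashMap<>();

    public static List<Integer> selectWhereTotalGrossAbove(double threshold) {
        List<Integer> ids = new ArrayList<>();
        for (Map.Entry<Integer, Double> e : database.entrySet()) { // query hits the DB directly
            if (e.getValue() > threshold) {
                ids.add(e.getKey());
                cache.put(e.getKey(), e.getValue()); // cache updated with returned rows
            }
        }
        return ids;
    }
}
```

A subsequent lookup of one of the returned rows by its primary key could then be served from the cache.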

9.8.3.2.3 Read Coherence Cache Database Adapter Integration

You can choose to use read cache by choosing read from the Cache Usage dropdown on the Operation Type screen of the Database Adapter Wizard.

With Read Cache, when the Database Adapter inserts a record into the database or selects a record from the database, the Coherence Cache is updated. Any query that uniquely identifies a row (that is, by specifying the primary key) first checks the Coherence Cache, possibly saving a trip to the database. As the Coherence Cache is distributed and can be thought of simply as a hash map, selecting by a specific primary key enables faster lookups through the Coherence Cache map.

9.8.3.2.4 Enabling No Cache Using the Operations Type Screen

Figure 9-76 shows the No Caching option as it appears on the Operations type screen of the Database Adapter Wizard, with none selected.

Figure 9-76 The Database Adapter Configuration Wizard Operation Type Screen, with No Caching Selected

Description of Figure 9-76 follows
Description of "Figure 9-76 The Database Adapter Configuration Wizard Operation Type Screen, with No Caching Selected"

All outbound operations are enabled on this screen with the none option selected. Once you select this option, and choose Next or Finish, none of the selected operations contain the property CacheUsage. This absence of a property is equivalent to the JCA activation property CacheUsage being equal to the value none.

The following operations are the only ones pre-selected when you choose the none option as the cache usage:

  • Merge

  • Insert Only

  • Select

9.8.3.2.5 Enabling Read-Write Caching Using the Operation Type Screen

You can choose to enable read-write caching through the Operation Type screen. See Figure 9-77. Once you select this option and press Next or Finish, the JCA property CacheUsage value is set to read-write.

Figure 9-77 Enabling Read-Write Caching Using the Operation Type Screen

Description of Figure 9-77 follows
Description of "Figure 9-77 Enabling Read-Write Caching Using the Operation Type Screen"

Refer to the following list of operations to understand how they are used on this screen when you choose the read-write option from the Cache Usage dropdown:

  • Insert or Update (Merge) is enabled and has the string uses cache appended to its label.

  • Insert only is disabled, as the underlying cache store always performs a merge.

  • Update Only is disabled, as the underlying cache store always performs a merge.

  • Delete is selectable but not pre-selected and has the string uses cache appended to it when it is selected.

  • Select is disabled, as this query is converted into a Coherence filter executed on a Coherence map.

  • Query by Example is disabled, as the Database Adapter/Coherence Integration query is converted into a Coherence filter executed on a Coherence map.

  • Select by Primary Key has the string uses cache appended to the label.
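When read-write is chosen, the CacheUsage activation property is written into the generated .jca file for the outbound operation. A sketch of how the property might appear (the surrounding element layout is illustrative and may differ in your generated artifact):

```xml
<interaction-spec className="oracle.tip.adapter.db.DBWriteInteractionSpec">
  <!-- Set by the wizard; absence of this property is equivalent to none -->
  <property name="CacheUsage" value="read-write"/>
</interaction-spec>
```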

9.8.3.2.6 Enabling Read Caching Using the Operation Type Screen

You can enable read caching using the Cache Usage option on the Operation Type screen. See Figure 9-78. Once you select this option on the screen and press Next or Finish, the JCA property CacheUsage value is set to read.

Figure 9-78 Enabling Read Caching Using the Operation Type Screen

Description of Figure 9-78 follows
Description of "Figure 9-78 Enabling Read Caching Using the Operation Type Screen"

For the read cache option, only the Select by Primary Key operation is pre-selected. Select by Primary Key is the only operation that can be meaningfully executed through the cache by the Coherence Database Adapter Integration feature, although other operations can update the cache. Because the read cache is not intrusive on the cache, none of the operations on the Operation Type screen are disabled.

Select and Query By Example are not disabled, although they do not read from the cache directly. The Database Adapter/Coherence Integration feature executes the Select against the database, but updates the Coherence Cache with any rows that are returned.

The general operation of read caching is that if any objects returned exist in the Coherence Cache, the objects in the cache are returned, rather than the Database Adapter/Coherence Integration feature building a new copy from the result set.

This operation improves performance where the master database record has several details; a query on the details does not have to be executed again.

A Query by Example behaves the same as a Select. This is true, for example, with XML data where the primary key is set but does not get a cache hit.

9.8.3.2.7 XA Transactions, Read-Write and Read Operations with Coherence/Database Adapter Integration

When using Database Adapter/Coherence Integration, you cannot use XA transactions with read-write operations. This is because the Database Adapter, with Coherence Integration, performs inserts to the Coherence Cache and subsequently to the database, a sequence which breaks the XA transaction contract.

However, you can use XA transactions with read operations using Database Adapter/Coherence Integration.

Database transactions using the Database Adapter that do not use Database Adapter/Coherence Integration can still use the XA transaction model.