Oracle® Application Server Integration InterConnect User's Guide
10g Release 2 (10.1.2)
B14069-02

9 Run-Time System Concepts and Components

This chapter describes the run-time concepts and components of OracleAS Integration InterConnect.

Integration Architecture

OracleAS Integration InterConnect run-time system is an event-based distributed messaging system. An event is any action that starts the communication through messaging between two or more applications integrated through OracleAS Integration InterConnect. The messaging system can be deployed both within an enterprise or across enterprise boundaries.

The run-time system enables inter-application communication through hub-and-spoke integration. This methodology keeps the applications decoupled from each other by integrating them to a central hub rather than to each other directly. The applications are at the spokes of this arrangement and are unaware of the other applications they are integrating with. To them, the target of a message (or the source) is the hub. As each application integrates with the hub, transformation of data between the application and hub (in either direction) is sufficient to integrate two or more applications.
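The benefit of hub-and-spoke integration can be sketched as follows. This is an illustrative sketch, not InterConnect code: the field names and mapping functions are invented, and in the real product the mappings are metadata-driven rather than hand-coded. Each spoke maps only between its own application view and the common view, so integrating a new application requires one new mapping pair rather than a mapping to every other application.

```python
# Hypothetical hub-and-spoke transformation: each application maps only
# to and from the common view; applications never map to each other.

def a_to_common(msg):
    # Spoke A's application view -> the hub's common view
    return {"customer_name": msg["cust_nm"], "customer_id": msg["cust_no"]}

def common_to_b(msg):
    # The hub's common view -> spoke B's application view
    return {"name": msg["customer_name"], "id": msg["customer_id"]}

def route_via_hub(msg, to_common, from_common):
    # The hub only ever sees the common view of the message.
    return from_common(to_common(msg))

print(route_via_hub({"cust_nm": "Scott", "cust_no": 42}, a_to_common, common_to_b))
# -> {'name': 'Scott', 'id': 42}
```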

Figure 9-1 provides an overview of design-time and run-time phases in integration.

Figure 9-1 A Graphical Overview of Design-Time and Run-Time Phases in Integration


Components

The following are the main components in the run-time system:

Adapters

Prepackaged adapters enable applications to participate in the integration at run time without any programming effort.

Agent and Bridge Combination

Adapters are the run-time components of OracleAS Integration InterConnect. Each adapter combines two logical subcomponents, an agent and a bridge, and has the following features:

  • Application Connectivity: Connect to applications to transfer data between the application and OracleAS Integration InterConnect. The logical subcomponent within an adapter that handles this connectivity is called a bridge. This protocol/application-specific subcomponent of the adapter knows how to communicate with the application. For example, the database adapter is capable of connecting to an Oracle database using JDBC and calling SQL APIs. This subcomponent does not know which APIs to call but only how to call them.

  • Transformations: Transform data between the application view and the common view, as dictated by the repository metadata. In general, adapters are responsible for carrying out all the run-time instructions captured through iStudio as metadata in the repository; transformations are an important subset of these instructions. The logical subcomponent within an adapter that handles the run-time instructions is called an agent. This is the generic run-time engine in the adapter, independent of the application to which the adapter connects. It focuses on the integration scenario based on the integration metadata in the repository. No integration logic is coded into the adapter itself; all integration logic is stored in the repository, and the repository metadata drives this subcomponent. For example, in a database adapter, the agent subcomponent knows which SQL APIs to call but not how to call them. All adapters share the same agent code; it is the difference in the metadata that each adapter receives from the repository that controls and differentiates each adapter.


Adapters can be configured to cache the metadata at run time to address performance needs. There are three settings for caching metadata:

  • No Caching: For each message, the adapter will query the repository for metadata. This setting is recommended for an early or unstable integration development environment.

  • Demand Caching: The adapter will query the repository only once for each message type and then cache that information. For subsequent messages of the same type, it will use the information from the cache. This setting is recommended for a stable integration development environment.

  • Full Caching: At startup time, the adapter will cache all its relevant metadata. This setting is recommended for a production environment.
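The demand-caching behavior can be sketched as a per-message-type memo cache. The function names below are hypothetical stand-ins, not the adapter's actual implementation; the point is that the repository is queried once per distinct message type, not once per message.

```python
# Sketch of demand caching: query the repository once per message type,
# then serve later lookups of that type from the cache.

repository_queries = 0

def query_repository(message_type):
    global repository_queries
    repository_queries += 1          # stand-in for an RMI round trip to the repository
    return {"type": message_type, "transformations": ["app view -> common view"]}

_cache = {}

def get_metadata(message_type):
    if message_type not in _cache:   # first message of this type: hit the repository
        _cache[message_type] = query_repository(message_type)
    return _cache[message_type]      # later messages of the same type: cache hit

for msg_type in ["Create_Customer", "Create_Customer", "Delete_Customer"]:
    get_metadata(msg_type)
print(repository_queries)            # 2: one query per distinct message type
```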


    Note:

    For more information about the adapters provided by OracleAS Integration InterConnect, refer to Oracle Application Server InterConnect Installation Guide.

Adapters are stateless by default. As a result, if an adapter fails, each message is either still with the application or already in the OracleAS Integration InterConnect hub.

Repository

The repository consists of two components:

  • Repository Server: A Java application that runs outside the database. It provides RMI services that let iStudio create, modify, or delete metadata at design time, and let adapters query that metadata at run time. Both adapters and iStudio act as RMI clients of the repository server.

  • Repository Database: The repository server stores metadata in database tables. The server communicates to the database using JDBC.

Adapters have the ability to cache metadata. If the repository metadata is modified after adapters have cached metadata, then the relevant adapters can be notified through iStudio's Sync Adapters functionality.


Note:

You must specify a port number for the repository in the repo_admin_port parameter of the repository.ini file, located at $ORACLE_HOME/integration/interconnect/repository, and in the agent_admin_port parameter of an adapter's adapter.ini file, located at $ORACLE_HOME/integration/interconnect/adapters/ADAPTERNAME. The port numbers in the repository.ini and adapter.ini files must be the same.

In addition, you must open seven consecutive ports on the firewall, starting from the port number specified in the repo_admin_port parameter of the repository.ini file.
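For example, using an illustrative port number (any free port works), the matching entries would look like this:

```ini
# repository.ini
repo_admin_port=1326

# adapter.ini
agent_admin_port=1326
```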


Advanced Queues

Advanced Queues provide the messaging infrastructure for OracleAS Integration InterConnect in the hub. In addition to serving as the store-and-forward unit, Advanced Queues provide message retention, auditing, tracking, and guaranteed delivery of messages.


See Also:

Oracle Database Application Developer's Guide for information about Advanced Queues

Oracle Workflow

Oracle Workflow facilitates integration at the business process level through its Business Event System. OracleAS Integration InterConnect and Oracle Workflow are integrated to leverage this facility for business process collaborations across applications.

Run-Time System Features

The OracleAS Integration InterConnect run-time features are as follows:

Messaging Paradigms

The OracleAS Integration InterConnect run time supports three major messaging paradigms:

  • Publish/Subscribe

  • Request/Reply (synchronous and asynchronous)

  • Point-to-Point

Point-to-Point messaging can be achieved both in the context of Publish/Subscribe and Request/Reply by using content-based routing.

Applications can be configured (for each integration point) to support any of these paradigms.

Message Delivery

The following are the features of message delivery:

  • Guaranteed Delivery: All messages are guaranteed to be delivered from the source applications to the destination applications.

  • Exactly Once Delivery: The destination applications will receive each sent message exactly once. The messages are never lost or duplicated.

  • In Order Delivery: The messages are delivered in the exact same order as they were sent. This is applicable only when there is one instance of the adapter running for each serviced application.
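The exactly-once guarantee can be illustrated with a small consumer-side sketch. This is not InterConnect's actual mechanism; it simply shows the property being guaranteed: a message redelivered after a failure is recognized by its ID and discarded, so the application sees it exactly once.

```python
# Sketch of exactly-once delivery: duplicates (for example, after a
# redelivery) are detected by message ID and dropped.

processed_ids = set()
delivered = []

def deliver(message_id, payload):
    if message_id in processed_ids:   # duplicate: already handled, drop it
        return False
    processed_ids.add(message_id)
    delivered.append(payload)
    return True

deliver(1, "Create_Customer")
deliver(1, "Create_Customer")         # redelivered duplicate, discarded
deliver(2, "Delete_Customer")
print(delivered)                      # ['Create_Customer', 'Delete_Customer']
```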

Message Retention

Messages remain in the run-time system until they are delivered. Advanced Queues in the hub provide message retention. Messages are deleted when each application that is scheduled to receive a specific message has received that message. For auditing purposes, you can configure the system to retain all successfully delivered messages.

Routing Support

Routing is a function of the Advanced Queues in the hub. By default, oai_hub_queue is the only multiconsumer Advanced Queue configured as the persistent store for all messages for all applications. This queue will handle all standard as well as content-based routing needs. The queue is created automatically when you install the repository in the hub. The only reason to change this configuration is if Advanced Queues becomes a performance bottleneck. This is unlikely because most of the message processing is done in the adapters, not in the hub.


See Also:

"Load Balancing"

Content-Based Routing

Content-based routing enables you to route messages to specific destination applications based on message content. For example, an electronic funds transaction settlement application is designed to transmit bank transactions with a specific bank code to identify the destination bank system. When the electronic funds transfer application publishes a message at run time, the OracleAS Integration InterConnect run-time component determines the bank code value based on metadata stored in the repository, and routes the message to the corresponding recipient system.
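The bank-code example can be sketched as a simple routing table. The bank codes, system names, and message fields below are made up for illustration; in InterConnect the routing rules live as metadata in the repository rather than in application code.

```python
# Sketch of content-based routing: the recipient is chosen by inspecting
# a field of the message content (here, a bank code).

routing_rules = {
    "BANK_A": "BankSystemA",
    "BANK_B": "BankSystemB",
}

def route(message, default="ManualReviewQueue"):
    # Look up the destination by the message's bank code;
    # unknown codes fall through to a default destination.
    return routing_rules.get(message["bank_code"], default)

print(route({"bank_code": "BANK_A", "amount": 150.00}))   # BankSystemA
print(route({"bank_code": "UNKNOWN", "amount": 10.00}))   # ManualReviewQueue
```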

Load Balancing

OracleAS Integration InterConnect provides two methods to manage load balancing across different instances of the same adapter: partitions and instances.

Partitions

OracleAS Integration InterConnect uses partitioning to manage load balancing across different instances of the same adapter. At run time, it is possible that the adapter attached to a particular application becomes a performance bottleneck. You can detect this by monitoring the message throughput information using the InterConnect Manager.

You can create multiple partitions of an application to enable an adapter to share the message load at run time. Partitions are always bound to specific events or procedures; you can create a partition on one or more events and procedures.

For example, application AQAPP publishes the Create_Customer event and subscribes to the Delete_Customer event. You can create two partitions, P1 and P2, on AQAPP, and specify that partition P1 handles all messages for the Create_Customer event while partition P2 handles all messages for the Delete_Customer event. With partitioning, messages are processed in the same order in which the publishing application sends them. If there is only one instance of each partition, then the order of the messages is maintained on a per-event or per-procedure basis. Perform the following steps to create multiple partitions of an application:

Design Time:

Perform the following design-time steps in iStudio to create a partition:

  1. Click the Deploy tab.

  2. Click Applications, ApplicationName, and then click Routing.

  3. Right-click Message Capability Matrix and select Create Partition.

  4. Enter the name of the partition in the Partition Name field.


  5. In the Available Events list, select each of the events that you want to include.


  6. Click the right-arrow button and then click Add.

Run Time

Perform the following run-time steps to create a partition:

  1. Use the copyAdapter script to make a copy of the existing adapter in the same Oracle home. Enter the following command:

    c:\> cd ORACLE_HOME\integration\interconnect\bin
    c:\> copyAdapter AQAPP AQAPPTEMP
    
    
    
  2. Edit the adapter.ini file of each adapter located in ORACLE_HOME/integration/interconnect/adapters/ADAPTERNAME directory.

    • Change the application parameter value back to the original application name, for example, from AQAPPTEMP to AQAPP.

    • Delete the comment tags before the partition parameter and specify the partition name.

    • Specify the application name followed by the partition name in the agent_subscriber_name parameter. For example, AQAPPP1.

    • Specify the application name followed by the partition name in the agent_message_selecter parameter.


    Note:

    The name of the application in the adapter.ini file and in iStudio must be the same.
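As a sketch, assuming partition P1 of the AQAPP example, the edited entries in the copied adapter's adapter.ini might look like the following. The values follow the naming rules described in the steps above and are illustrative, not taken from a real installation:

```ini
application=AQAPP
partition=P1
agent_subscriber_name=AQAPPP1
agent_message_selecter=AQAPPP1
```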

Instances

You can create multiple instances of an adapter to share the message load. Instances are always bound to an application. If an adapter receives a large number of messages and cannot process them efficiently, then creating multiple instances of the adapter is recommended. Messages are shared among the instances. However, the messages are not processed in the same order in which the publishing application sends them, so in-order message delivery is lost.

In partitioning, the adapter subscribes to the hub queue with application_name+partition_name whereas in instances, the adapter subscribes to the hub queue with only application_name. Therefore, all the instances of an adapter that correspond to an application can listen to all the messages for that application. AQ handles delivering the message to only one of them. For example, DB1 and DB2 are two instances of the Database adapter and both instances are bound to an application DBAPP. When publishing a message, both instances publish the messages with the publisher name DBAPP. The adapter instance that reads the message first will publish the message and delete it from the OAI schema. This ensures that the duplicate messages are not published. Similarly, when subscribing to a message, both adapter instances subscribe to the hub queue with the same application name. Although both can listen to all the messages meant for DBAPP, a message will be delivered to either DB1 or DB2.
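The competing-consumer behavior of instances can be sketched as follows. The instance names echo the DB1/DB2 example; the polling loop is a simplified, deterministic stand-in for AQ's dequeue semantics. Each message is removed from the queue on first read, so exactly one instance receives it and no duplicates occur, but which instance gets which message depends on polling order.

```python
# Sketch of instance-style load sharing: two adapter instances subscribe
# under the same application name, and each queued message is handed to
# exactly one of them (whichever instance dequeues first).
from collections import deque

hub_queue = deque(["M1", "M2", "M3", "M4"])
received = {"DB1": [], "DB2": []}

def poll(instance):
    if hub_queue:
        # Dequeue removes the message, so no other instance can see it again.
        received[instance].append(hub_queue.popleft())

# The instances poll in an arbitrary interleaving.
for instance in ["DB1", "DB2", "DB2", "DB1"]:
    poll(instance)

print(received)   # {'DB1': ['M1', 'M4'], 'DB2': ['M2', 'M3']}
```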

Perform the following steps to create multiple instances of an adapter:

  1. Use the copyAdapter script to copy the existing adapter in the same Oracle home. Run the script once for each additional instance:

    c:\> cd ORACLE_HOME\integration\interconnect\bin
    c:\> copyAdapter AQAPP AQAPPTEMP
    
    
  2. Edit the adapter.ini file of each newly created adapter. The adapter.ini file is located in the ORACLE_HOME/integration/interconnect/adapters/ADAPTERNAME directory.

    1. Change the value of the application parameter to the old application name, for example AQAPPTEMP to AQAPP.

    2. Delete the comment tags before the instance_number parameter.

    3. Specify the instance number in the instance_number parameter.
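Continuing the AQAPP example, the adapter.ini of a second instance would contain entries along these lines (values illustrative):

```ini
application=AQAPP
instance_number=2
```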

Load Balancing with Multiple Adapters

Multiple adapters can be attached to one application to share the message load. This can be done in several ways depending upon the needs of your integration environment. For example, Application A publishes three different kinds of events: EventA, EventB, and EventC. Three potential scenarios should be examined to determine how one or more adapters could be attached to the application to meet performance objectives.

Scenario 1

The order in which the messages are sent by application A must be strictly adhered to for the life of the messages. Messages sent by application A must be received by the subscribing applications in the same order across the different event types.

Recommendation: In this case, you cannot add more than one instance of the same adapter to Application A for load balancing.

Scenario 2

The order in which messages are sent by Application A must be adhered to, but not across different event types. Application A publishes the following messages in order: M1_EventA, M2_EventB, M3_EventA. M1_EventA and M3_EventA must be ordered with respect to each other because they correspond to the same event type. M2_EventB has no ordering restrictions with respect to M1_EventA and M3_EventA.

Recommendation: In this case, you can leverage the partitioning feature enabled through the iStudio Deploy tab. This feature enables you to allocate specific adapters to specific message types, thereby segmenting the run-time load processing. For this scenario, you can create two partitions: Partition1 corresponds to EventA and Partition2 corresponds to EventB. Dedicate one adapter to each partition (specified at adapter installation time or by modifying adapter.ini after installation). The end result: the order of messages is maintained, and the processing power is doubled because two adapters service the messages instead of one. This kind of partitioning is called message-based partitioning.

Scenario 3

There is no message order dependency, even within the same event type.

Recommendation: Two approaches for load balancing are available:

  1. One or more adapters are added using the entire message capability matrix. This means that at run time, any of the adapters can receive any message, although only one of them actually receives each message: whichever adapter first requests the next message for processing receives it. This is called pure load-balancing partitioning.

  2. Message-based partitions are created based on projections of the number of messages for a particular event type. For example, if there will be three times as many EventA messages as EventB or EventC messages, then you could create two partitions: one for handling EventA messages, and the other for handling the other two event types. You can then dedicate several adapters to handle the EventA message load only, and fewer adapters to the other two event types.

High Availability

Enterprise applications need high availability (HA) because they cannot afford downtime. OracleAS Integration InterConnect uses Oracle Process Manager and Notification (OPMN), Oracle Database Server, and Oracle Real Application Clusters to enable high availability for its components.

Backup and Recovery

The OracleAS Backup and Recovery feature can be used to back up the critical configuration files for any OracleAS Integration InterConnect 10g Release 2 (10.1.2) installation. You can use the config_misc_files.inp file provided by the OracleAS Backup and Recovery tool to back up InterConnect configuration files. The config_misc_files.inp file is located in the following directory:

$ORACLE_HOME/backup_restore/config

The following files should be backed up from the OracleAS Integration InterConnect installation along with other Application Server component files.

[Hub Component]

$ORACLE_HOME/integration/interconnect/hub/hub.ini
$ORACLE_HOME/integration/interconnect/repository/repository.ini
$ORACLE_HOME/integration/interconnect/security/cwallet.sso
$ORACLE_HOME/integration/interconnect/security/ewallet.p12
$ORACLE_HOME/integration/interconnect/adapters/workflow/adapter.ini
$ORACLE_HOME/integration/interconnect/adapters/workflow/ErrorManagement.xml [if file exists]

[Adapter Component]

$ORACLE_HOME/integration/interconnect/adapters/<adaptername>/adapter.ini
$ORACLE_HOME/integration/interconnect/adapters/<adaptername>/ErrorManagement.xml [if file exists]
$ORACLE_HOME/integration/interconnect/security/cwallet.sso [if adapter not installed in the same midtier as hub]
$ORACLE_HOME/integration/interconnect/security/ewallet.p12 [if adapter not installed in the same midtier as hub]

You can append the hub component and adapter component configuration file names listed previously to the config_misc_files.inp file, using the same file name format.

If all files in a directory have to be backed up, then you can specify only the directory names or use wildcards. You can also exclude certain files from the backup by specifying those file names in the config_exclude_files.inp file. However, you cannot specify directories or use wildcards in the config_exclude_files.inp file; only single entries are allowed.
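For example, an entry in config_exclude_files.inp must be a single full file path per line, with no directories or wildcards. Using one of the optional files listed above purely as an illustration:

```
$ORACLE_HOME/integration/interconnect/adapters/workflow/ErrorManagement.xml
```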

Real Application Clusters Configuration

In a Real Application Clusters environment, all active instances can concurrently perform transactions against a shared database. Real Application Clusters coordinates each instance's access to the shared data to provide data consistency and data integrity. It balances workloads among the nodes by controlling multiple server connections during periods of heavy use, and provides persistent, fault-tolerant connections between clients and the Real Application Clusters database.


See Also:

Oracle Application Server Concepts Guide for information on Real Application Clusters.

OracleAS Integration InterConnect Adapters Supporting Real Application Clusters

OracleAS Integration InterConnect adapters leverage Real Application Clusters technology to provide consistent, uninterrupted service without restarting the adapters if an instance fails, and to guarantee message delivery. OracleAS Integration InterConnect adapters connect to the first available node in the list. Nodes are defined in the adapter.ini and hub.ini files.


See Also:

OracleAS Integration InterConnect adapters installation documentation for information about adapter.ini and hub.ini files associated with specific adapters

If one node fails, then a database connection is attempted with the next available node listed in the adapter.ini or hub.ini file, in turn, until a successful connection is established. Failover is transparent to the user.
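The node-by-node failover can be sketched as trying each configured node in listed order until a connection succeeds. The connect() stub and the failure simulation are invented for illustration; the host names echo the sample hub.ini shown later in this chapter, and the real adapter reads its node list from hub.ini or adapter.ini.

```python
# Sketch of transparent failover: try each configured node in order
# until one accepts the connection.

def connect(node, down_nodes):
    # Stand-in for a real database connection attempt.
    if node in down_nodes:
        raise ConnectionError(node + " is down")
    return "connection to " + node

def connect_with_failover(nodes, down_nodes):
    last_error = None
    for node in nodes:                 # hub_host, hub_host2, ... in listed order
        try:
            return connect(node, down_nodes)
        except ConnectionError as err:
            last_error = err           # this node failed: try the next one
    raise last_error                   # every node failed

print(connect_with_failover(["dlsun1312", "vindaloo"], down_nodes={"dlsun1312"}))
# -> connection to vindaloo
```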

The hub connections for all adapters and the spoke connections for the Database and Advanced Queuing adapters are RAC-enabled. The adapter process is also RAC-enabled.

Adapter Failover Mechanism

In earlier OracleAS Integration InterConnect releases, the adapters failed over to the next node in the Real Application Clusters environment on any exception. This release changes the adapter failover mechanism: adapters now fail over only when the corresponding node itself fails, so a normal exception no longer triggers a failover.

Configuration

The adapter.ini and hub.ini files must be populated with the host, port, and instance information for all the nodes, along with a parameter that specifies the number of nodes. All existing entries remain the same; a new set of entries is added for each additional node. Table 9-1 describes these additional parameters.

Table 9-1 Additional Parameters for RAC Configuration

hub.ini

  • hub_num_nodes: The number of nodes in the cluster. For example:

    hub_num_nodes=2
    
  • hub_hostx: The host where the Real Application Clusters database is installed, where x varies between 2 and the number of nodes. For example:

    hub_host2=dscott13
    
  • hub_portx: The port where the TNS listener is listening. For example:

    hub_port2=1521
    
  • hub_instancex: The instance on the respective node. For example:

    hub_instance2=orcl2
    

adapter.ini for the Advanced Queuing adapter

  • aq_bridge_num_nodes

  • aq_bridge_hostx

  • aq_bridge_portx

  • aq_bridge_instancex

adapter.ini for the Database adapter

  • db_bridge_num_nodes

  • db_bridge_schema1_hostx

  • db_bridge_schema1_portx

  • db_bridge_schema1_instancex

In both adapter.ini parameter lists, x varies between 2 and the number of nodes.


Sample hub.ini File

The following is a sample hub.ini file.

hub_username=ichub
# Use $ORACLE_HOME/integration/<version>/bin/encrypt to generate the encrypted password.
encrypted_hub_password=<encrypted_password>
hub_use_thin_jdbc=true
hub_host=dlsun1312
hub_instance=iasdb
hub_port=1521
hub_num_nodes=2
hub_host2=vindaloo
hub_instance2=orcl
hub_port2=1521

Sample Database Adapter adapter.ini File with the Spoke Database Entry

The following is a sample adapter.ini file for the Database adapter that shows the spoke database entry.

db_bridge_schema1_host=dlsun1312
db_bridge_schema1_port=1521
db_bridge_schema1_instance=iasdb
db_bridge_num_nodes=2
db_bridge_schema1_host2=vindaloo
db_bridge_schema1_port2=1521
db_bridge_schema1_instance2=orcl