Oracle® Application Server Integration InterConnect User's Guide
10g Release 2 (10.1.2) Part No. B14069-01
This chapter describes the runtime concepts of OracleAS Integration InterConnect.
The OracleAS Integration InterConnect runtime system is an event-based distributed messaging system. An event is any action that initiates communication through messaging between two or more applications integrated through OracleAS Integration InterConnect. The messaging system can be deployed both within an enterprise and across enterprise boundaries.
The runtime enables inter-application communication through hub and spoke integration. This methodology keeps the applications decoupled from each other by integrating them to a central hub rather than to each other directly. The applications are at the spokes of this arrangement and are unaware of the other applications they are integrating with. To them, the target of a message (or the source) is the hub. As each application integrates with the hub, transformation of data between the application and hub (in either direction) is sufficient to integrate two or more applications.
Figure 9-1 provides an overview of the design time and runtime phases in integration.
Figure 9-1 A Graphical Overview of Design Time and Runtime Phases in Integration
The following are the main components in the runtime system:
Prepackaged adapters help applications at runtime to participate in the integration without any programming effort.
Adapters are the runtime component for OracleAS Integration InterConnect. Adapters have the following features:
Application Connectivity: Connect to applications to transfer data between the application and OracleAS Integration InterConnect. The logical subcomponent within an adapter that handles this connectivity is called a bridge. This protocol/application-specific subcomponent of the adapter knows how to communicate with the application. For example, the database adapter is capable of connecting to an Oracle database using JDBC and calling SQL APIs. This subcomponent does not know which APIs to call, only how to call them.
Transformations: Transform data between the application view and the common view, as dictated by the repository metadata. In general, adapters are responsible for carrying out all the runtime instructions captured through iStudio as metadata in the repository; transformations are an important subset of these instructions. The logical subcomponent within an adapter that handles the runtime instructions is called an agent. This is the generic runtime engine in the adapter and is independent of the application to which the adapter connects. It focuses on the integration scenario based on the integration metadata in the repository. No integration logic is coded into the adapter itself; all integration logic is stored in the repository, and the repository metadata drives this subcomponent. For example, in a database adapter, the agent subcomponent knows which SQL APIs to call, but not how to call them. All adapters share the same agent code; it is the difference in the metadata that each adapter receives from the repository that controls and differentiates the behavior of each adapter.
Adapters can be configured to cache the metadata at runtime to address performance needs. There are three settings for caching metadata:
No Caching: For each message, the adapter will query the repository for metadata. This setting is recommended for an early or unstable integration development environment.
Demand Caching: The adapter will query the repository only once for each message type and then cache that information. For subsequent messages of the same type, it will use the information from the cache. This setting is recommended for a stable integration development environment.
Full Caching: At start-up time, the adapter will cache all its relevant metadata. This setting is recommended for a production environment.
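The three caching modes above can be sketched as follows. This is an illustrative Python sketch, not InterConnect code: the repository stands in for the RMI-backed repository server, and all names are hypothetical.

```python
def fetch_metadata(repository, message_type):
    """Stand-in for a runtime query against the repository server."""
    return repository[message_type]

class MetadataCache:
    """Models the No Caching, Demand Caching, and Full Caching modes."""
    def __init__(self, repository, mode="demand"):
        self.repository = repository
        self.mode = mode              # "none", "demand", or "full"
        self.cache = {}
        if mode == "full":            # Full Caching: load everything at start-up
            self.cache = dict(repository)

    def get(self, message_type):
        if self.mode == "none":       # No Caching: query the repository every time
            return fetch_metadata(self.repository, message_type)
        if message_type not in self.cache:   # Demand Caching: query once per type
            self.cache[message_type] = fetch_metadata(self.repository, message_type)
        return self.cache[message_type]
```

Under Demand Caching, the first message of each type pays the repository round trip; subsequent messages of that type are served from the cache, which is why that mode suits a stable development environment.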
Note: For more information on the adapters provided by OracleAS Integration InterConnect, refer to the Oracle Application Server InterConnect Installation Guide.
Adapters are stateless by default. As a result, if an adapter goes down, the message is either with the application or in the OracleAS Integration InterConnect Hub AQ. This behavior lends itself well to load balancing and high availability requirements for the adapter.
The repository consists of two components:
Repository Server: A Java application that runs outside the database. It provides RMI services that iStudio uses to create, modify, or delete metadata at design time, and that adapters use to query metadata at runtime. Both adapters and iStudio act as RMI clients to communicate with the repository server.
Repository Database: The repository server stores metadata in database tables and communicates with the database using JDBC.
Adapters have the ability to cache metadata. If the repository metadata is modified after adapters have cached metadata, the relevant adapters can be notified through iStudio's Sync Adapters functionality.
Advanced Queues provide the messaging infrastructure for OracleAS Integration InterConnect in the hub. In addition to being the store and forward unit, they provide message retention, auditing, tracking, and guaranteed delivery of messages.
See Also: Oracle Database Application Developer's Guide for information on Advanced Queues
The OracleAS Integration InterConnect runtime features are as follows:
OracleAS Integration InterConnect runtime supports three major messaging paradigms:
Publish/Subscribe
Request/Reply (synchronous and asynchronous)
Point-to-Point
Point-to-Point messaging can be achieved both in the context of Publish/Subscribe and Request/Reply by using Content Based Routing.
Applications can be configured (per integration point) to support any of these paradigms.
The following are features of message delivery:
Guaranteed Delivery: All messages are guaranteed to be delivered from the source applications to the destination applications.
Exactly Once Delivery: The destination applications will receive each sent message exactly once. The messages are never lost or duplicated.
In Order Delivery: The messages are delivered in the exact same order as they were sent. This is applicable only when there is one instance of the adapter running per application serviced.
Messages remain in the runtime system until they are delivered. Advanced Queues in the hub provide the message retention. Messages are deleted when each application that is scheduled to receive a specific message has received that message. For auditing purposes, you can configure the system to retain all successfully delivered messages.
Routing is a function of the Advanced Queues in the hub. By default, oai_hub_queue is the only multiconsumer Advanced Queue configured as the persistent store for all messages for all applications. This queue handles all standard as well as content-based routing needs. The queue is created automatically when you install the repository in the hub. The only reason to change this configuration is if Advanced Queues become a performance bottleneck. This is unlikely because most of the message processing is done in the adapters, not in the hub.
Content-based routing allows you to route messages to specific destination applications based on message content. For example, an electronic funds transaction settlement application is designed to transmit bank transactions with a specific bank code to identify the destination bank system. When the electronic funds transfer application publishes a message at runtime, the OracleAS Integration InterConnect runtime component determines the bank code value based on metadata stored in the repository, and routes the message to the corresponding recipient system.
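The bank-code example above can be reduced to a small sketch. The routing table mimics metadata that, in InterConnect, would live in the repository; the bank codes and application names are hypothetical.

```python
# Illustrative content-based routing: pick the destination application
# from a field inside the message itself.
ROUTING_TABLE = {            # bank code -> subscribing application
    "BANK_A": "BankSystemA",
    "BANK_B": "BankSystemB",
}

def route(message):
    """Return the destination application for a message, based on content."""
    return ROUTING_TABLE[message["bank_code"]]
```

At runtime the hub queue evaluates the routing condition against each published message, so the publisher never needs to know which bank system will receive it.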
OracleAS Integration InterConnect uses partitioning to manage load balancing across different instances of the same adapter. At runtime, it is possible that the adapter attached to a particular application becomes a performance bottleneck. You can detect this by monitoring the message throughput information using the InterConnect Manager.
OracleAS Integration InterConnect addresses adapter scalability through a well-defined methodology.
Multiple adapters can be attached to one application to share the message load. This can be done in several ways depending upon the needs of your integration environment. For example, Application A publishes three different kinds of events: EventA, EventB, and EventC. Three potential scenarios should be examined to determine how one or more adapters could be attached to the application to meet performance objectives.
Scenario 1
The order in which the messages are sent by application A must be strictly adhered to for the life of the messages. Messages sent by application A must be received by the subscribing applications in the same order across the different event types.
Recommendation In this case, you cannot add more than one adapter to Application A for load balancing.
Scenario 2
The order in which messages are sent by Application A must be adhered to within, but not across, different event types. Application A publishes the following messages in order: M1_EventA, M2_EventB, M3_EventA. M1_EventA and M3_EventA must be ordered with respect to each other because they correspond to the same event type. M2_EventB has no ordering restrictions with respect to M1_EventA and M3_EventA.
Recommendation In this case, you can leverage the Partitioning feature enabled through iStudio's Deploy tab. This feature allows you to allocate specific adapters to specific message types, thereby segmenting the runtime load processing. For this scenario, you can create two partitions: Partition1 corresponds to EventA and Partition2 corresponds to EventB. Dedicate one adapter to each partition (specified at adapter install time or by modifying adapter.ini after install). The end result: the order of messages is maintained as per the requirements, and the processing power has doubled because two adapters service the messages instead of one. This kind of partitioning is called message-based partitioning.
Scenario 3
There is no message order dependency, even within the same event type.
Recommendation Two approaches for load balancing are available:
One or more adapters are added utilizing the entire Message Capability Matrix. This means that at runtime any one of the adapters can receive any message, though only one of them actually receives it. The first adapter to request the next message for processing is the one that receives it. This is called Pure Load Balancing partitioning.
Message-based partitions are created based on projections of the number of messages for a particular event type. For example, if there will be three times as many EventA messages as EventB or EventC messages, you could create two partitions: one for handling EventA messages, and the other for handling the other two event types. You can then dedicate several adapters to handle the EventA message load only, and fewer adapters to the other two event types.
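Scenarios 2 and 3 both hinge on mapping event types to partitions. The following sketch shows that mapping under the assumptions of the example above; the partition names and the event-to-partition assignment are illustrative, not values InterConnect generates.

```python
# Illustrative message-based partitioning: each partition owns a set of
# event types, and one or more adapters are dedicated to each partition.
PARTITIONS = {
    "Partition1": {"EventA"},             # heavier load: dedicate more adapters
    "Partition2": {"EventB", "EventC"},   # lighter load: fewer adapters suffice
}

def partition_for(event_type):
    """Find which partition (and hence which adapters) services an event type."""
    for name, events in PARTITIONS.items():
        if event_type in events:
            return name
    raise ValueError(f"no partition services {event_type}")
```

Because messages of one event type always flow through the same partition, ordering within that event type is preserved even though the partitions process their loads independently.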
Enterprise applications need high availability (HA) because they cannot afford downtime. OracleAS Integration InterConnect uses Oracle Process Manager and Notification (OPMN), Oracle Database Server, and Oracle Real Application Clusters to enable high availability for its components.
The OracleAS Backup and Recovery feature can be used to back up the critical configuration files for any OracleAS Integration InterConnect 10g Release 2 (10.1.2) installation. You can use the config_misc_files.inp file provided by the OracleAS Backup and Recovery tool to back up InterConnect configuration files. The config_misc_files.inp file is located in the following directory:

$ORACLE_HOME/backup_restore/config
The following files should be backed up from the OracleAS Integration InterConnect install along with other Application Server component files.
[Hub Component]
$ORACLE_HOME/integration/interconnect/hub/hub.ini
$ORACLE_HOME/integration/interconnect/repository/repository.ini
$ORACLE_HOME/integration/interconnect/security/cwallet.sso
$ORACLE_HOME/integration/interconnect/security/ewallet.p12
$ORACLE_HOME/integration/interconnect/adapters/workflow/adapter.ini
$ORACLE_HOME/integration/interconnect/adapters/workflow/ErrorManagement.xml [if file exists]
[Adapter Component]
$ORACLE_HOME/integration/interconnect/adapters/<adaptername>/adapter.ini
$ORACLE_HOME/integration/interconnect/adapters/<adaptername>/ErrorManagement.xml [if file exists]
$ORACLE_HOME/integration/interconnect/security/cwallet.sso [if adapter not installed in the same midtier as hub]
$ORACLE_HOME/integration/interconnect/security/ewallet.p12 [if adapter not installed in the same midtier as hub]
You can append the previously mentioned OracleAS Integration InterConnect configuration file names to the config_misc_files.inp file, using the same file name format.

If all files in a directory have to be backed up, then you can specify only the directory names or use wildcards. You can also exclude certain files from the backup by specifying those file names in the config_exclude_files.inp file. However, you cannot specify directories or use wildcards in the config_exclude_files.inp file; only single entries are allowed.
In a Real Application Clusters environment, all active instances can concurrently perform transactions against a shared database. Real Application Clusters coordinates each instance's access to the shared data to provide data consistency and data integrity. It balances workloads among the nodes by controlling multiple server connections during periods of heavy use, and provides persistent, fault-tolerant connections between clients and the Real Application Clusters database.
See Also: The following documentation for additional information on Real Application Clusters:
OracleAS Integration InterConnect adapters leverage Real Application Clusters technology to provide consistent and uninterrupted service without requiring an adapter restart if an instance fails, and to provide guaranteed message delivery. OracleAS Integration InterConnect adapters connect to the first of the listed available nodes. Nodes are defined in the adapter.ini and hub.ini files.
See Also: OracleAS Integration InterConnect adapter installation documentation for details on the adapter.ini and hub.ini files associated with specific adapters
If one node fails, then a database connection is attempted with the next available node listed in the adapter.ini or hub.ini file, and so on, until a connection succeeds. Failover is transparent to the user.
The hub connections for all adapters and the spoke connections for the Database and Advanced Queuing adapters are RAC enabled. From this release, the adapter process is also RAC enabled.
See Also: Section "Support for Oracle Real Application Clusters" in the Oracle Application Server Application Developer's Guide Advanced Queuing
In earlier OracleAS Integration InterConnect releases, the adapters failed over to the next node in the Real Application Clusters environment on any exception. This release changes the adapter failover mechanism: the adapters fail over only when the corresponding node itself fails, so a normal exception no longer triggers a failover.
The adapter.ini and hub.ini files must be populated with the host, port, and instance information for all the nodes. An additional parameter specifying the number of nodes must also be populated. All existing entries remain the same, except that a new set of entries is added for each additional node. Table 9-1 describes the additional parameters that specify the number of nodes.
Table 9-1 Additional Parameters for RAC Configuration
| File Name | Parameter |
|---|---|
| hub.ini | hub_num_nodes |
| adapter.ini for the Advanced Queuing adapter | ab_bridge_num_nodes |
| adapter.ini for the Database adapter | db_bridge_num_nodes |
Sample hub.ini File
The following is a sample hub.ini file.
hub_username=ichub
encrypted_hub_password=<encrypted_password>
use $ORACLE_HOME/integration/<version>/bin/encrypt for encryption
hub_use_thin_jdbc=true
hub_host=dlsun1312
hub_instance=iasdb
hub_port=1521
hub_num_nodes=2
hub_host2=vindaloo
hub_instance2=orcl
hub_port2=1521
The following is a sample adapter.ini file for the Database adapter that shows the spoke database entry.

db_bridge_schema1_host=dlsun1312
db_bridge_schema1_port=1521
db_bridge_schema1_instance=iasdb
db_bridge_num_nodes=2
db_bridge_schema1_host2=vindaloo
db_bridge_schema1_port2=1521
db_bridge_schema1_instance2=orcl
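The numbered-entry convention in these samples (hub_host, then hub_host2, and so on, up to the count in hub_num_nodes) can be sketched as follows. This is an illustrative parser over the sample values above, not InterConnect code; the helper name is hypothetical.

```python
# Illustrative assembly of the RAC failover node list from the numbered
# ini entries shown in the sample hub.ini file.
def node_list(params, prefix="hub"):
    """Build (host, port, instance) tuples from numbered ini entries."""
    count = int(params[prefix + "_num_nodes"])
    nodes = []
    for i in range(1, count + 1):
        suffix = "" if i == 1 else str(i)   # the first node's keys carry no number
        nodes.append((params[prefix + "_host" + suffix],
                      params[prefix + "_port" + suffix],
                      params[prefix + "_instance" + suffix]))
    return nodes
```

Applied to the sample hub.ini values, this yields the two-node failover order dlsun1312 first, then vindaloo, which is the order in which connections are attempted.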