Oracle Application Server InterConnect User's Guide 10g (9.0.4) Part Number B10404-01
This chapter describes the runtime concepts, components, processes, and component configuration of OracleAS InterConnect.
OracleAS InterConnect runtime is an event-based distributed messaging system. An event is any action that initiates communication through messaging between two or more applications integrated through OracleAS InterConnect. The messaging system can be deployed both within an enterprise and across enterprise boundaries.
The runtime enables inter-application communication through hub-and-spoke integration. This methodology keeps the applications decoupled from each other by integrating them with a central hub rather than with each other directly. The applications sit at the spokes of this arrangement and are unaware of the applications with which they are integrating. To them, the target (or source) of a message is the hub. Because each application integrates with the hub, translating data between the application and the hub (in either direction) is sufficient to integrate two or more applications.
The OracleAS InterConnect runtime features are as follows:
OracleAS InterConnect runtime supports three major messaging paradigms:
These paradigms can be configured on a per integration point basis.
The following are features of message delivery:
Messages remain in the runtime system until they are delivered. Advanced Queues in the hub provide the message retention. The messages are deleted when each application that is scheduled to receive a specific message has received that message. For auditing purposes, you can configure the system to retain all messages even after they have been delivered successfully to each application.
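The retention behavior described above can be illustrated with a small sketch. This is plain Python modeling the semantics, not the Oracle Advanced Queuing API: a message is held until every subscribed application has consumed it, and an optional audit flag keeps delivered messages afterward.

```python
# Illustrative model of multiconsumer message retention (not the Oracle AQ API).
class RetentionQueue:
    def __init__(self, subscribers, retain_for_audit=False):
        self.subscribers = set(subscribers)
        self.retain_for_audit = retain_for_audit
        self.pending = {}    # message id -> subscribers that have not yet consumed it
        self.messages = {}   # message id -> payload
        self.audit_log = []  # delivered messages kept for auditing, if enabled

    def enqueue(self, msg_id, payload):
        self.messages[msg_id] = payload
        self.pending[msg_id] = set(self.subscribers)

    def dequeue(self, msg_id, subscriber):
        payload = self.messages[msg_id]
        self.pending[msg_id].discard(subscriber)
        if not self.pending[msg_id]:       # every scheduled application has received it
            if self.retain_for_audit:
                self.audit_log.append((msg_id, payload))
            del self.messages[msg_id]      # message leaves the runtime system
            del self.pending[msg_id]
        return payload
```

The key property is that deletion is driven by the last subscriber's dequeue, which is why undelivered messages survive until every application scheduled to receive them has done so.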
The current version of OracleAS InterConnect has significant improvements over previous releases for configuring your routing needs. Routing is a function of the Advanced Queues in the hub. By default, oai_hub_queue is the only multiconsumer Advanced Queue configured as the persistent store for all messages for all applications; it handles both standard and content-based routing needs. This queue is created automatically when you install the repository in the hub. The only reason to change this configuration is if Advanced Queues become a performance bottleneck. For most scenarios, this is unlikely, because most message processing is done in the adapters, not in the hub.
Content-based routing allows you to route messages to specific destination applications based on message content. For example, an electronic funds transaction settlement application is designed to transmit bank transactions with a specific bank code to identify the destination bank system. When the electronic funds transfer application publishes each message at runtime, the OracleAS InterConnect runtime component determines the bank code value based on objects stored in the repository, and routes the message to the appropriate recipient system.
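The bank-code example can be sketched as follows. This is an illustrative model only; the field name bank_code, the codes, and the routing table are invented for this sketch, and in the real product the rules live as metadata objects in the repository rather than in application code.

```python
# Hypothetical sketch of content-based routing: inspect a field of the message
# payload and select destination applications. All names here are illustrative.
ROUTING_RULES = {
    "bank_code": {
        "BANK_A": ["BankSystemA"],
        "BANK_B": ["BankSystemB"],
    }
}

def route(message):
    """Return the destination applications for a published message."""
    destinations = []
    for field, table in ROUTING_RULES.items():
        # Messages whose field value matches no rule get no destinations here;
        # a real system would apply its default (standard) routing instead.
        destinations.extend(table.get(message.get(field), []))
    return destinations
```

The point of the sketch is that the publisher stays unaware of the recipients: the routing decision is made centrally from the message content.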
This release has significant improvements to deal with error conditions in your integration environment:
You can resubmit errored-out messages again into your integration environment for processing after modifying them (if required) using the runtime management console.
You can modify the .ini files of adapters to increase the tracing level and troubleshoot errors. You can view the tracing logs by opening the log files through the runtime management console.
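As an illustration, many OracleAS InterConnect adapters expose a log-level parameter in adapter.ini. The parameter name and value range below are typical but should be checked against your specific adapter's documentation:

```ini
# Raise adapter tracing for troubleshooting (illustrative; verify the exact
# parameter name and range in your adapter's installation guide).
agent_log_level=2
```

Remember to return the level to its normal setting after troubleshooting, since verbose tracing can slow message processing and grow log files quickly.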
Messages can be tracked by specifying tracking fields using iStudio. The runtime system checkpoints state at certain predefined points so that you can monitor where a message currently is in the integration environment. This tracking capability is available through the runtime management console.
At runtime, it is possible that the adapter attached to a particular application becomes a performance bottleneck. You can detect this by monitoring the message throughput information through the runtime console.
OracleAS InterConnect addresses adapter scalability through a well-defined methodology.
Multiple adapters can be attached to one application to share the message load. This can be done in several ways depending upon the needs of your integration environment. For example, Application A publishes three different kinds of events--EventA, EventB, and EventC. Three potential scenarios should be examined to determine how one or more adapters should be attached to the application to meet performance objectives:
The order in which the messages are sent by Application A must be strictly maintained for the life of all messages. For example, if Application A publishes messages in a specific order, they must be received by the subscribing applications in exactly that order across all the different event types.
In this case, you cannot add more than one adapter to Application A for load balancing.
The order in which messages are sent by Application A must be maintained within each event type, but not across different event types. For example, Application A publishes the following messages in order: M1_EventA, M2_EventB, M3_EventA. M1 and M3 must be ordered with respect to each other because they correspond to the same event type. However, M2 has no ordering restrictions with respect to M1 and M3. In addition, EventA messages are transformation, size, and computation heavy, while EventB and EventC messages are very light.
In this case, you can create message partitions in the Message Capability Matrix. Partition1 can process EventA messages, and Partition2 can process EventB and EventC messages. When you install an adapter, you specify not only the application to which it is attached, but also the partition it uses. These message partitions can be used to load balance message processing effectively.
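The partition assignment described above amounts to a mapping from event type to partition, with ordering preserved inside each partition. The sketch below models that mapping in plain Python; the partition names follow the example above, while in the real product the assignment is defined in the Message Capability Matrix in iStudio, not in code.

```python
# Illustrative model of message partitions: each event type maps to exactly one
# partition, so ordering is preserved per event type while partitions run in
# parallel. Names mirror the example in the text.
PARTITIONS = {
    "EventA": "Partition1",   # heavy messages get a dedicated adapter
    "EventB": "Partition2",   # light messages share an adapter
    "EventC": "Partition2",
}

def partition_for(event_type):
    """Return the partition whose adapter should process this event type."""
    return PARTITIONS[event_type]
```

Because every message of a given event type lands in the same partition, the per-event-type ordering requirement of this scenario is satisfied automatically.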
There is no message order dependency, even within the same event type.
Since there are no ordering restrictions, two approaches for load balancing can be employed. You can simply attach multiple adapters to the application and let them share the full message load. Alternatively, if there are many more EventA messages than EventB or EventC messages, you could create two partitions--one for handling EventA messages, and the other for handling the other two event types.
Adapters deliver messages from applications to the integration hub and vice versa. By default, adapters store messages in local file persistence during processing, which makes the adapters stateful. If your scenario requires OracleAS InterConnect to be deployed in a fail-safe environment, stateful adapters become cumbersome to manage: if an adapter fails in the middle of processing a message, a new adapter instance taking over for the failed adapter must somehow read the message from the failed adapter's local file persistence.

To overcome this problem, OracleAS InterConnect provides a feature to make adapters stateless for fail-safe environments by turning off file persistence through parameters in the adapter.ini file.
Note: There are two parameters associated with file persistence. Table 8-1 summarizes the file persistence parameters.
There are six major components in the runtime system:
Prepackaged adapters help re-purpose applications at runtime to participate in the integration without any programming effort.
Adapters are the runtime components of OracleAS InterConnect. Adapters have the following responsibilities:
Adapters can be configured to cache the metadata at runtime to address performance needs. There are three settings for caching metadata:
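The caching behavior is typically controlled through a parameter in the adapter's adapter.ini file. The parameter name and values below reflect common OracleAS InterConnect adapter configuration, but consult your adapter's guide for the exact setting:

```ini
# Metadata caching setting (illustrative; verify against your adapter's
# documentation). Typical values: none (no caching), demand (cache metadata
# as it is used), startup (cache all metadata when the adapter starts).
agent_metadata_caching=demand
```

Caching on demand is a reasonable default; caching at startup trades a longer adapter start time for faster processing of the first message of each type.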
The following adapters are available with OracleAS InterConnect:
See Also: Adapter-specific installation and user's guides for details on the respective OracleAS InterConnect adapters
The repository consists of two components:
Adapters have the ability to cache metadata. If the repository metadata is modified after adapters have cached metadata, those adapters are automatically notified by the repository server to purge their caches and re-query the new metadata.
Advanced Queues provide the messaging backbone for OracleAS InterConnect in the hub. In addition to being the store and forward unit, they provide message retention, auditing, tracking, and guaranteed delivery of messages.
Oracle Workflow facilitates integration at the business process level through its Business Event System. OracleAS InterConnect and Oracle Workflow are integrated to leverage this facility for business process collaborations across applications.
Real Application Clusters (RAC) harnesses the processing power of multiple interconnected computers, uniting them into a single robust computing environment. In a RAC environment, all active instances can concurrently execute transactions against a shared database, and RAC coordinates each instance's access to the shared data to provide data consistency and data integrity. RAC balances workloads among the nodes by controlling multiple server connections during periods of heavy use, and provides persistent, fault-tolerant connections between clients and the RAC database.
OracleAS InterConnect adapters leverage RAC technology to provide consistent, uninterrupted service and guaranteed message delivery without having to restart the adapters if an instance fails. OracleAS InterConnect adapters connect to the first available node listed in the adapter.ini and hub.ini files.
See Also: OracleAS InterConnect adapters installation documentation for details
If one of the nodes fails, the database connection is re-established to the next available node in the adapter.ini or hub.ini node list, trying each node in turn until a successful connection is made, transparently to the user.
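The failover behavior described above can be sketched as a simple loop over the configured nodes. This is an illustrative model, not adapter source code; connect() stands in for the real JDBC/OCI connection call, and the node names come from the sample configuration later in this chapter.

```python
# Illustrative failover sketch: try each configured node in order until a
# connection succeeds, mirroring how adapters walk the hub.ini/adapter.ini
# node list. The connect callable is a stand-in for the real connection call.
def connect_with_failover(nodes, connect):
    last_error = None
    for node in nodes:                  # first listed available node wins
        try:
            return connect(node)
        except ConnectionError as exc:  # node down: fall through to the next one
            last_error = exc
    raise last_error                    # no node was reachable
```

From the application's point of view the failover is invisible: the call either returns a live connection or fails only after every configured node has been tried.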
The hub connections for all adapters are RAC enabled. The spoke connections for DB and AQ adapters are RAC enabled.
See Also: "Support for Oracle Real Application Clusters" in the Oracle9i Application Developer's Guide - Advanced Queuing
The adapter.ini and hub.ini files must be populated with the host, port, and instance information for all the nodes. An additional parameter specifying the number of nodes must also be populated. All existing entries remain the same, except that a new set of entries is added for each additional node. Table 8-2 describes the additional parameters that specify the number of nodes.
The following is a sample hub.ini file:

hub_username=oaihub904
hub_password=oaihub904
hub_use_thin_jdbc=true
hub_host=dlsun1312
hub_instance=iasdb
hub_port=1521
hub_num_nodes=2
hub_host2=vindaloo
hub_instance2=orcl
hub_port2=1521
The following is a sample adapter.ini file for the Database adapter, showing the spoke database entries:

db_bridge_schema1_host=dlsun1312
db_bridge_schema1_port=1521
db_bridge_schema1_instance=iasdb
db_bridge_num_nodes=2
db_bridge_schema1_host2=vindaloo
db_bridge_schema1_port2=1521
db_bridge_schema1_instance2=orcl
Copyright © 2002, 2003 Oracle Corporation. All Rights Reserved.