Oracle Applications InterConnect
Release 3.1.3

Part Number A86039-01


3
Runtime Concepts and Components

This chapter describes the runtime concepts, components and processes of Applications InterConnect.

This chapter contains the following sections:

Introduction to the Runtime Component
Components
Example

Introduction to the Runtime Component

The runtime component consists of an event-based distributed messaging system. An event is any action that triggers a message. The messaging system can be distributed with different components of the system communicating over a WAN.

Features

The Applications InterConnect runtime features are categorized as follows:

Integration Architecture

The runtime enables inter-application communication through hub and spoke integration. This methodology keeps the applications decoupled from each other by integrating them to a central hub only. The applications are at the spokes of this arrangement and are unaware of the applications they are integrating with. To them, the target is the hub. Since each application integrates with the hub, translation of data between the application and hub (in either direction) is sufficient to integrate two or more applications.

Refer to Chapter 2, "Design Time Concepts and iStudio", on page 2-1 for a graphical representation of the hub and spoke architecture.

Message Delivery

Guaranteed delivery

Exactly once delivery

In-order delivery

Messaging Paradigms

Refer to Chapter 2, "Design Time Concepts and iStudio", on page 2-1 for an explanation of the messaging paradigms supported in Applications InterConnect.

Persistence

Messages remain in the runtime system until they are delivered. The message is deleted when each application that is scheduled to receive a specific message has done so. For auditing purposes, you can configure the system to retain all messages even after they have been delivered successfully to each application.
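A minimal sketch of this delete-on-delivery behavior, with an audit-retention switch, might look like the following. All class and method names here are invented for illustration; this is not the product's implementation:

```java
import java.util.Collection;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative model only: a message persists until every subscribing
// application has received it; an audit flag retains delivered messages.
public class PersistentStore {
    // messageId -> applications still waiting for delivery
    private final Map<String, Set<String>> pending = new HashMap<>();
    private final boolean retainForAudit;

    public PersistentStore(boolean retainForAudit) {
        this.retainForAudit = retainForAudit;
    }

    public void store(String messageId, Collection<String> subscribers) {
        pending.put(messageId, new HashSet<>(subscribers));
    }

    // Record delivery to one subscriber; drop the message once all
    // subscribers have received it, unless audit retention is configured.
    public void delivered(String messageId, String subscriber) {
        Set<String> remaining = pending.get(messageId);
        if (remaining == null) return;
        remaining.remove(subscriber);
        if (remaining.isEmpty() && !retainForAudit) {
            pending.remove(messageId);
        }
    }

    public boolean isStored(String messageId) {
        return pending.containsKey(messageId);
    }
}
```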

Content-based Routing

Content-based routing increases message delivery efficiency by routing each message directly to the intended application by examining specific elements in the data object. The diagram below illustrates content-based routing in action.

The database in the middle is the one that resides in the hub under the Oracle Message Broker (OMB). OMB has been purposely omitted from the diagram to better illustrate the functionality. Also omitted are adapters (agents and bridges) that are attached to each application.

Messages can be routed to a specific application based on specific content values contained in the message. For example, an electronic funds transfer (EFT) settlement application is designed to transmit bank transactions with a specific bank code that identifies the destination bank system. When the EFT application publishes each message at runtime, the Applications InterConnect runtime component determines the BankCode value based on objects stored in the repository and routes the message to the appropriate recipient system.

To implement content-based routing in this scenario, the condition set captured in iStudio may be coded in this fashion:

if (BankCode EQUAL A) then route message to AQ A
if (BankCode EQUAL B) then route message to AQ B
if (BankCode EQUAL C) then route message to AQ C

iStudio allows you to specify SQL-like routing conditions based on the message content from the publishing application. This information is stored as metadata in the repository. At runtime, the publishing agent (connected to the EFT application in the example above) uses this information to route the message to the specific recipient application.
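As a sketch only (at runtime these conditions are evaluated from repository metadata, not hand-written code), the same condition set could be expressed as a simple lookup from content value to queue name. All names below are invented:

```java
import java.util.Map;

// Hypothetical sketch: the iStudio routing conditions above, expressed as a
// lookup from a content value (BankCode) to a hub queue name.
public class BankCodeRouter {
    private static final Map<String, String> ROUTES = Map.of(
        "A", "AQ_A",
        "B", "AQ_B",
        "C", "AQ_C");

    // Returns the destination queue for a message; falls back to a default
    // queue when no routing condition matches.
    public static String route(String bankCode) {
        return ROUTES.getOrDefault(bankCode, "AQ_DEFAULT");
    }
}
```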

Default Routing Support

On a per application basis, for every published message, you can specify a hub queue in which the message should be stored. Conversely, for each subscribing application, you can specify a hub queue from which the message should be retrieved. This pairing of publish and subscribe queues constitutes the Message Capability Matrix for each application. Using this matrix, the integrator can determine which queues need to be created in the hub. This matrix is stored in the repository as metadata and is used by the agents (see runtime section) to route messages to the appropriate queue on behalf of publishing applications, and to listen for messages on the appropriate queues on behalf of subscribing applications.
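A Message Capability Matrix can be pictured as two maps per application: one pairing each published event with the hub queue it is stored in, and one pairing each subscribed event with the hub queue it is retrieved from. The following is an illustrative sketch with invented names, not the repository's actual schema:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of a per-application Message Capability Matrix.
public class CapabilityMatrix {
    private final Map<String, String> publishQueues = new HashMap<>();   // event -> hub queue stored into
    private final Map<String, String> subscribeQueues = new HashMap<>(); // event -> hub queue listened on

    public void publishes(String event, String hubQueue) {
        publishQueues.put(event, hubQueue);
    }

    public void subscribes(String event, String hubQueue) {
        subscribeQueues.put(event, hubQueue);
    }

    // Queue the agent routes to when this application publishes the event.
    public String queueFor(String event) {
        return publishQueues.get(event);
    }

    // Queue the agent listens on when this application subscribes to the event.
    public String listenOn(String event) {
        return subscribeQueues.get(event);
    }
}
```

From such a matrix, the integrator can read off exactly which hub queues must be created.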

Load Balancing Through Message Partitions

At runtime, for performance reasons, you may need more than one adapter attached to a specific application. For example, Application A publishes three different kinds of events--EventA, EventB, and EventC. Three potential scenarios should be examined to determine whether (and how) one or more adapters should be attached to the application to meet performance objectives:

Scenario 1

The order in which the messages are sent by Application A must be adhered to strictly for the life of all messages. For example, if Application A publishes messages in a specific order, they must be received by the subscribing applications in the exact same order (even if they correspond to different event types). In this case, you cannot add more than one adapter to Application A for load balancing.

Scenario 2

The order in which messages are sent by Application A must be adhered to but not across different event types. For example, Application A publishes the following messages in order: M1_EventA, M2_EventB, M3_EventA. M1 and M3 must be ordered with respect to each other because they correspond to the same event. However, M2 has no ordering restrictions with respect to M1 and M3.

In addition, EventA messages are transformation/size/computation heavy, while EventB and EventC messages are very light. In this case, you can create message partitions from the Message Capability Matrix: Partition1 can process EventA messages, and Partition2 can process EventB and EventC messages. When you install the adapters, you specify not only the application each adapter is attached to but also the partition it uses. These message partitions can be used to effectively load balance integrated applications.

Scenario 3

There is no message order dependency, even within the same event type. Since there are no ordering restrictions, two approaches for load balancing can be employed:

A. No message partitions are created. One or more adapters are added utilizing the entire Message Capability Matrix. This means that at runtime any one of the adapters would be available to receive any message, though only one of them would actually receive the message.

B. Message partitions can be created based on projections of the number of messages for a particular event type. For example, if there will be three times as many EventA messages as EventB or EventC messages, you could create two partitions--one for handling EventA messages, and the other for handling the other two event types.
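The partitioning in Scenario 2 can be pictured as a static assignment of event types to partitions; one adapter instance then serves each partition, so ordering is preserved within a partition while the partitions run in parallel. The names below are invented for illustration:

```java
import java.util.Map;

// Illustrative sketch: each event type is assigned to one partition, and a
// separate adapter instance serves each partition. Order is preserved within
// a partition because a single adapter processes it sequentially.
public class MessagePartitions {
    // The heavy EventA traffic is isolated in its own partition; the light
    // EventB and EventC traffic shares a second partition.
    private static final Map<String, String> PARTITION_OF = Map.of(
        "EventA", "Partition1",
        "EventB", "Partition2",
        "EventC", "Partition2");

    public static String partitionFor(String eventType) {
        return PARTITION_OF.getOrDefault(eventType, "Partition2");
    }
}
```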

Logging and Tracing

The runtime provides different levels of tracing to capture all information needed for troubleshooting or monitoring. The trace information is logged to local trace files that can be read using the Runtime Management Console. For more information, refer to Chapter 4, "Runtime Management Console".

Fault Tolerance

If at any time one or more of the runtime components or applications fail, none of the messages will be lost.

Load Balancing

Adapters (see components) offer multi-threaded support for load balancing. There can be multiple adapter instantiations attached to each application. The broker may be used as part of Oracle Application Server for load balancing purposes.

Components

The runtime system consists of the following major components:

Adapters

An adapter is the Applications InterConnect component that sits at the spoke with the application to make it InterConnect enabled. Internally, the adapter is written as two components for improved reuse of existing interfaces. These components are:

1. Bridge. This is the application-specific piece of the adapter. The bridge communicates with the specific application interface to transfer data between the application and Applications InterConnect. For messages outbound from an application, the bridge converts the data from the application's native format to the agent's internal format (conforming to the application view of data defined in iStudio), then passes it to the agent (described below) for further processing. For inbound messages, the bridge receives the message from the agent in the agent's internal format (conforming to the application view of data defined in iStudio), converts the message back to the application's native format, and pushes the data contained therein into the application. Each communication protocol requires a unique bridge.

Two products using the same protocol may use the same bridge code, though at runtime two separate processes are created. The bridge is also called the technology/protocol adapter.

2. Agent. The agent is a generic engine that carries out the transformation and routing instructions captured in repository metadata (populated by iStudio). The agent does not know how to talk to a particular application. For messages outbound from the application, the agent receives the message in its internal format from the bridge. This internal format conforms to the application view of data (see the iStudio description). The agent then queries the repository for metadata to transform the message to the common view and pushes the message to OMB.

For inbound messages, the agent receives a message from OMB that conforms to the common view defined in iStudio. The agent queries the repository for metadata and transforms the message from the common view to the application view. The message is then pushed to the bridge. The agent is also known as the integration adapter.
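The division of labor between the two components can be summarized as a pair of interfaces. This is a conceptual sketch with invented names; the actual published adapter APIs are not shown in this chapter:

```java
// Conceptual sketch only; all names are invented for illustration.
// Message holders for the two data views.
class AppViewMessage    { final String body; AppViewMessage(String b)    { body = b; } }
class CommonViewMessage { final String body; CommonViewMessage(String b) { body = b; } }

// The bridge converts between the application's native format and the
// agent's internal format (the application view defined in iStudio).
interface Bridge {
    AppViewMessage toInternal(byte[] nativePayload); // outbound from the application
    byte[] toNative(AppViewMessage message);         // inbound to the application
}

// The agent transforms between the application view and the common view,
// using metadata queried from the repository.
interface Agent {
    CommonViewMessage toCommonView(AppViewMessage message);      // outbound: spoke -> hub
    AppViewMessage toApplicationView(CommonViewMessage message); // inbound: hub -> spoke
}
```

This split is what lets two products sharing a protocol reuse the same bridge code while each keeps its own metadata-driven agent behavior.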

Oracle Message Broker

Oracle Message Broker (OMB) is the store-and-forward messaging component of the runtime. Adapters send messages to OMB, which stores them in an Oracle8i database using Advanced Queuing (AQ). OMB then delivers the messages to the adapters that have subscribed to them. A message is deleted from the persistent store after every recipient that expects it has received it. OMB conforms to the Java Message Service (JMS) specification for a messaging server. JMS communication between each adapter and the broker is built on CORBA.

Repository

The Repository communicates with adapters at runtime using CORBA to provide translation information for messages. This translation information is called metadata. The Repository is populated with metadata during the design phase using iStudio. Metadata customizes a generic adapter to a specific application's integration requirements.

Example

Figure 3-1 CRM Application and SAP communication - A graphical interpretation

The CRM application and SAP communication shown in Figure 3-1 proceeds as follows:

  1. At startup, the CRM Adapter queries the Repository for message translation information. Because it is plugged into an Oracle CRM Application, it queries the Repository for all Oracle CRM Application-related message translation information. The metadata containing this information is cached in the CRM Adapter.

  2. At startup, the SAP Adapter expresses an interest in all (or some) messages that the CRM application is publishing. It also caches all SAP-related metadata from the Repository. Note that the Repository was populated with this metadata at design time using iStudio.

  3. An event occurs in the Oracle CRM application.

  4. As a result of the event, the application transfers all event-related information to the bridge through bridge APIs called by the application infrastructure.


Note:

This bridge was custom-developed as an extension to the agent to tailor it to support an Oracle CRM application. 


  5. The bridge creates a message with the event information using published agent APIs. It then transfers the message to the agent, also using published agent APIs.

  6. On receiving the message, the CRM agent looks up the metadata information in its cache and performs the necessary translations. These translations are from the Oracle CRM Application's view to the hub, or common, view described above in the hub and spoke architecture section.

  7. The common view message is now shipped to the broker, which stores it in an Oracle8i database.

  8. The SAP Adapter (using JMS APIs) receives this message into the adapter space.

  9. On receiving the message, the SAP agent reviews its cached metadata information and translates the message from the hub view to the SAP application view.

  10. It then calls into the SAP bridge APIs to transfer the message into SAP space.


Note:

This bridge was custom developed as an extension to the agent to tailor it to handle an SAP application. 


  11. The bridge calls into SAP to deliver the message contents to the application infrastructure.
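The steps above can be condensed into a single toy trace, with string tags standing in for the metadata-driven transformations. All names here are invented for illustration and none of this is the product's actual code:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy end-to-end trace of the CRM-to-SAP flow described above.
public class CrmToSapFlow {
    public static String run(String crmEventPayload) {
        // The CRM bridge wraps the native event as an application-view message.
        String appView = "crmView(" + crmEventPayload + ")";
        // The CRM agent translates application view -> common view.
        String commonView = "commonView(" + appView + ")";
        // The broker stores the common-view message in the hub database.
        Deque<String> hubQueue = new ArrayDeque<>();
        hubQueue.add(commonView);
        // The SAP agent receives it and translates common view -> SAP view.
        String sapView = "sapView(" + hubQueue.remove() + ")";
        // The SAP bridge delivers the contents into SAP.
        return sapView;
    }
}
```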


Copyright © 1996-2000, Oracle Corporation.

All Rights Reserved.
