Oracle Application Server InterConnect User's Guide
10g (9.0.4)

Part Number B10404-01

8
Runtime System Concepts and Components

This chapter describes the runtime concepts, components, processes, and component configuration of OracleAS InterConnect in the following topics:

  • Integration Architecture

  • Features

  • Components

  • Real Application Clusters Configuration

Integration Architecture

OracleAS InterConnect runtime is an event-based distributed messaging system. An event is any action that initiates communication through messaging between two or more applications integrated through OracleAS InterConnect. The messaging system can be deployed both within an enterprise and across enterprise boundaries.

The runtime enables inter-application communication through hub-and-spoke integration. This methodology keeps the applications decoupled from each other by integrating them with a central hub rather than with each other directly. The applications sit at the spokes of this arrangement and are unaware of the applications with which they integrate. To them, the target (or source) of a message is the hub. Because each application integrates with the hub, translating data between the application and the hub (in either direction) is sufficient to integrate two or more applications.

Features

The OracleAS InterConnect runtime features are as follows:

Messaging Paradigms

OracleAS InterConnect runtime supports three major messaging paradigms, which can be configured on a per-integration-point basis.

See Also:

Chapter 1, "Getting Started with OracleAS InterConnect"

Message Delivery

The following are features of message delivery:

Message Retention

Messages remain in the runtime system until they are delivered. Advanced Queues in the hub provide this message retention. A message is deleted when every application scheduled to receive it has received it. For auditing purposes, you can configure the system to retain all messages even after they have been delivered successfully to each application.

Routing Support

The current version of OracleAS InterConnect has significant improvements over previous releases for configuring your routing needs. Routing is a function of the Advanced Queues in the hub. By default, oai_hub_queue is the only multiconsumer Advanced Queue configured as the persistent store for all messages for all applications; it handles standard as well as content-based routing needs. This queue is created automatically when you install the repository in the hub. The only reason to change this configuration is if Advanced Queues become a performance bottleneck. For most scenarios, this is unlikely because most of the message processing is done in the adapters, not in the hub.

See Also:

"Scalability and Load Balancing"

Content-Based Routing

Content-based routing allows you to route messages to specific destination applications based on message content. For example, an electronic funds transfer settlement application is designed to transmit bank transactions with a specific bank code identifying the destination bank system. When the electronic funds transfer application publishes each message at runtime, the OracleAS InterConnect runtime component determines the bank code value based on objects stored in the repository and routes the message to the appropriate recipient system.

Error Management

This release includes significant improvements for dealing with error conditions in your integration environment:

Resubmission

Using the runtime management console, you can modify errored-out messages (if required) and resubmit them to your integration environment for processing.

Tracing

To troubleshoot errors, you can raise the tracing level in an adapter's .ini file. You can view the resulting trace logs by opening the log files through the runtime management console.
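For example, tracing detail is typically controlled by a log-level entry in the adapter's adapter.ini file. The following fragment is a minimal sketch assuming the agent_log_level parameter, whose values generally range from 0 (errors only) up to 3 (debug); check your adapter's installation documentation for the exact parameter name and supported values.

agent_log_level=2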

Tracking

Messages can be tracked by specifying tracking fields in iStudio. The runtime system checkpoints state at certain predefined points so that you can monitor where a message currently is in the integration environment. This tracking capability is available through the runtime management console.

Scalability and Load Balancing

At runtime, it is possible that the adapter attached to a particular application becomes a performance bottleneck. You can detect this by monitoring message throughput in the runtime management console.

See Also:

Chapter 9, "Runtime Management"

OracleAS InterConnect addresses adapter scalability through a well-defined methodology.

Multiple adapters can be attached to one application to share the message load. This can be done in several ways depending upon the needs of your integration environment. For example, Application A publishes three different kinds of events--EventA, EventB, and EventC. Three potential scenarios should be examined to determine how one or more adapters should be attached to the application to meet performance objectives:

Scenario 1

Scenario 2

Scenario 3

File Persistence

Adapters deliver messages from applications to the integration hub and vice versa. By default, adapters store messages in local file persistence during processing, which makes them stateful. If your scenario requires OracleAS InterConnect to be deployed in a fail-safe environment, stateful adapters become cumbersome to manage: if an adapter fails in the middle of processing a message, a new adapter instance that takes over for the failed adapter must somehow read the message from the failed adapter's local file persistence.

To overcome this problem, OracleAS InterConnect lets you make adapters stateless for fail-safe environments by turning off file persistence through parameters in the adapter.ini file.


Note:

There are two parameters associated with file persistence: agent_pipeline_to_hub and agent_pipeline_from_hub. Set both parameters to false to make adapters stateless.


Table 8-1 summarizes the file persistence parameters.

Table 8-1 File Persistence Parameters
Parameter Description

agent_pipeline_to_hub=false

Turns off local file persistence when sending messages from the application to the hub.

agent_pipeline_from_hub=false

Turns off local file persistence when sending messages from the hub to the application.
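For example, a fail-safe deployment would include both entries in the adapter's adapter.ini file:

agent_pipeline_to_hub=false
agent_pipeline_from_hub=false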

Components

There are six major components in the runtime system:

Adapters

Prepackaged adapters repurpose applications at runtime to participate in the integration without any programming effort.

Agent and Bridge Combination

Adapters are the runtime components of OracleAS InterConnect. Each adapter combines an agent, which handles the integration logic common to all adapters, with a bridge, which handles communication in the application's native protocol.

Adapters can be configured to cache the repository metadata at runtime to address performance needs; three caching settings are available.

A number of prepackaged adapters are available with OracleAS InterConnect.

Repository

The repository consists of two components: the repository server and the database in which the integration metadata is stored.

Adapters have the ability to cache metadata. If the repository metadata is modified after adapters have cached metadata, those adapters are automatically notified by the repository server to purge their caches and re-query the new metadata.

Advanced Queues

Advanced Queues provide the messaging backbone for OracleAS InterConnect in the hub. In addition to being the store-and-forward unit, they provide message retention, auditing, tracking, and guaranteed delivery of messages.

See Also:

Oracle Database Application Developer's Guide for information on Advanced Queues

Oracle Workflow

Oracle Workflow facilitates integration at the business process level through its Business Event System. OracleAS InterConnect and Oracle Workflow are integrated to leverage this facility for business process collaborations across applications.

Real Application Clusters Configuration

Introduction

Real Application Clusters (RAC) harnesses the processing power of multiple interconnected computers, uniting the processing power of each component to create a robust computing environment. In a RAC environment, all active instances can concurrently execute transactions against a shared database. RAC coordinates each instance's access to the shared data to provide data consistency and integrity. It balances workloads among the nodes by controlling multiple server connections during periods of heavy use, and it provides persistent, fault-tolerant connections between clients and the RAC database.

See Also:

The following documentation for additional information on RAC:

  • Oracle9i Real Application Clusters Administration

  • Oracle Application Server 10g Concepts

OracleAS InterConnect Adapters Supporting RAC

OracleAS InterConnect adapters leverage RAC technology to provide consistent, uninterrupted service and guaranteed message delivery; if an instance fails, the adapters do not need to be restarted. OracleAS InterConnect adapters connect to the first available node listed in the adapter.ini and hub.ini files.

See Also:

OracleAS InterConnect adapters installation documentation for details on adapter.ini and hub.ini files associated with specific adapters

If one of the nodes fails, the database connection is re-established with the next available node in the adapter.ini or hub.ini list, and so on, until a successful connection is made. This failover is transparent to the user.

The hub connections for all adapters are RAC-enabled. The spoke connections for the Database and Advanced Queuing adapters are also RAC-enabled.

See Also:

The section "Support for Oracle Real Application Clusters" in the Oracle9i Application Developer's Guide - Advanced Queuing

Configuration

The adapter.ini and hub.ini files must be populated with the host, port, and instance information for all the nodes, along with an additional parameter that specifies the number of nodes. All existing entries remain the same; a new set of entries is added for each additional node. Table 8-2 describes these additional parameters.

Table 8-2 Additional Parameters for RAC Configuration
File Name Parameters

hub.ini

hub_num_nodes

hub_hostx

hub_portx

hub_instancex--where x varies between 2 and the number of nodes.

adapter.ini for the Advanced Queuing adapter

aq_bridge_num_nodes

aq_bridge_hostx

aq_bridge_portx

aq_bridge_instancex--where x varies between 2 and the number of nodes.

adapter.ini for the Database adapter

db_bridge_num_nodes

db_bridge_schema1_hostx

db_bridge_schema1_portx

db_bridge_schema1_instancex--where x varies between 2 and the number of nodes.

Sample hub.ini File

The following is a sample hub.ini file.

hub_username=oaihub904
hub_password=oaihub904
hub_use_thin_jdbc=true
hub_host=dlsun1312
hub_instance=iasdb
hub_port=1521
hub_num_nodes=2
hub_host2=vindaloo
hub_instance2=orcl
hub_port2=1521

Sample Database Adapter adapter.ini File Showing the Spoke Database Entries

The following is a sample adapter.ini file for the Database adapter, showing the spoke database entries.

db_bridge_schema1_host=dlsun1312
db_bridge_schema1_port=1521
db_bridge_schema1_instance=iasdb
db_bridge_num_nodes=2
db_bridge_schema1_host2=vindaloo
db_bridge_schema1_port2=1521
db_bridge_schema1_instance2=orcl
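
Sample Advanced Queuing Adapter adapter.ini File Showing the Spoke Entries

The spoke entries for the Advanced Queuing adapter follow the same pattern, using the aq_bridge parameters from Table 8-2. The following fragment is an illustrative sketch; the host, port, and instance values simply mirror the sample values used above.

aq_bridge_host=dlsun1312
aq_bridge_port=1521
aq_bridge_instance=iasdb
aq_bridge_num_nodes=2
aq_bridge_host2=vindaloo
aq_bridge_port2=1521
aq_bridge_instance2=orcl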

Copyright © 2002, 2003 Oracle Corporation. All Rights Reserved.