
Deploying Solutions


Introduction

This document describes how to deploy BEA WebLogic Integration solutions in a production environment. The following sections introduce key concepts and tasks for deploying WebLogic Integration in your organization:

 


Deployment Goals

WebLogic Integration is a single, unified platform that provides the functionality businesses can use to develop new applications, integrate them with existing systems, streamline business processes, and connect with trading partners. When deploying WebLogic Integration solutions, consider the following goals:

You can achieve these goals and others with every WebLogic Integration deployment.

 


Key Deployment Tasks

Deploying WebLogic Integration may require that you complete some or all of the following tasks:

  1. Define the goals for your WebLogic Integration deployment, as described in Deployment Goals.

  2. Deploy WebLogic Integration applications in a cluster. To do so, you must first design the cluster, which in turn requires an understanding of the components of a WebLogic Integration deployment. Understanding WebLogic Integration Clusters describes these components to help you design the best possible environment for your application.

  3. Deploy WebLogic Integration applications in a clustered environment so that they are highly available. To do so, you must configure your application as described in Configuring a Clustered Deployment.

  4. Set up security for your WebLogic Integration deployment as described in Using WebLogic Integration Security.

  5. Optimize overall system performance (once your WebLogic Integration deployment is running) as described in Tuning Performance.

 


Roles in Integration Solution Deployment

To deploy an integrated solution successfully, a deployment team must include people who perform the following roles:

One person can assume multiple roles, and not all roles are equally relevant in all deployment scenarios, but a successful deployment requires input from people in each role.

Deployment Specialists

Deployment specialists coordinate the deployment effort. They are knowledgeable about the features of the WebLogic Integration product. They provide expertise in designing the deployment topology for an integration solution, based on their knowledge of how to configure various WebLogic Integration features on one or more servers. Deployment specialists have experience in the following areas:

WebLogic Server Administrators

WebLogic Server administrators provide in-depth technical and operational knowledge about WebLogic Server deployments in an organization. They have knowledge of the hardware and platform, and experience managing all aspects of a WebLogic Server deployment, including installation, configuration, monitoring, security, performance tuning, troubleshooting, and other administrative tasks.

Database Administrators

Database administrators provide in-depth technical and operational knowledge about database systems deployed in an organization. They have experience in the following areas:

 


Deployment Architecture

The following illustration provides an overview of the WebLogic Integration deployment architecture.

Figure 1-1 WebLogic Integration Deployment Architecture


 

The following section describes each of the resources illustrated in the preceding figure.

 


Key Deployment Resources

This section provides an overview of resources that can be modified at deployment time:

WebLogic Server Resources

This section provides general information about BEA WebLogic Server resources that are most relevant to the deployment of a WebLogic Integration solution. You can configure these resources from the WebLogic Server Administration Console or through EJB deployment descriptors.

WebLogic Server provides many configuration options and tunable settings for deploying WebLogic Integration solutions in any supported environment. The following sections describe the configurable WebLogic Server features that are most relevant to WebLogic Integration deployments:

Clustering

To increase workload capacity, you can run WebLogic Server on a cluster: a group of servers that can be managed as a single unit. Clustering provides a deployment platform that is more scalable than a single server. For more information about clustering, see Understanding WebLogic Integration Clusters.

Java Message Service

The WebLogic Java Message Service (JMS) enables Java applications sharing a messaging system to exchange (create, send, and receive) messages. WebLogic JMS is based on the Java Message Service Specification version 1.0.2 from Sun Microsystems, Inc.

JMS servers can be clustered and connection factories can be deployed on multiple instances of WebLogic Server. In addition, JMS event destinations can be configured to handle workflow notifications and messages, as described in Business Process Management Resources.

For more information about WebLogic JMS, see the following topics:

EJB Pooling and Caching

In a WebLogic Integration deployment, the number of EJBs affects system throughput. You can tune the number of EJBs in the system through either the EJB pool or the EJB cache, depending on the type of EJB. (For information about configuring pool and cache sizes, see Configuring Other EJB Pool and Cache Sizes.) The following table describes types of EJBs and their associated tunable parameter.

Table 1-1 Parameters for Tuning EJBs

EJB Type: Event Listener Message-Driven Beans
Parameter: max-beans-in-free-pool (see note 1)
Description: The maximum number of listeners that pull work from a queue.

EJB Type: Stateless Session Beans
Parameter: max-beans-in-free-pool (see note 1)
Description: The maximum number of beans available for work requests.

EJB Type: Stateful Session Beans
Parameter: max-beans-in-cache
Description: The number of beans that can be active at once. A setting that is too low results in CacheFullExceptions. A setting that is too high results in excessive memory consumption.

EJB Type: Entity Beans

Note 1: The WebLogic Server documentation recommends setting the number of execute threads rather than setting max-beans-in-free-pool. However, in a WebLogic Integration environment, it is more efficient to control the workload by specifying the max-beans-in-free-pool setting of the event listener message-driven beans than by setting the number of execute threads.
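The throttling effect of max-beans-in-free-pool can be sketched as a bounded pool: at most that many requests are serviced concurrently, and further requests must wait for a bean to be returned. The following is a hypothetical simulation of that behavior, not WebLogic Server code; the class and method names are illustrative.

```java
import java.util.concurrent.Semaphore;

// Hypothetical sketch: a bounded free pool modeled with a semaphore.
// At most maxBeansInFreePool callers hold a bean at once; the rest
// find the pool empty until a bean is checked back in.
class BeanFreePool {
    private final Semaphore permits;

    BeanFreePool(int maxBeansInFreePool) {
        this.permits = new Semaphore(maxBeansInFreePool);
    }

    /** Take a bean from the pool; returns false if none is free. */
    boolean tryCheckOut() {
        return permits.tryAcquire();
    }

    /** Return a bean to the pool. */
    void checkIn() {
        permits.release();
    }

    /** Number of beans currently free. */
    int freeBeans() {
        return permits.availablePermits();
    }
}
```

This is why lowering max-beans-in-free-pool on the event listener message-driven beans throttles the whole system: work that cannot check out a listener bean simply stays on the queue.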

 

JDBC Connection Pools

Java Database Connectivity (JDBC) enables Java applications to access data stored in SQL databases. To reduce the overhead associated with establishing database connections, WebLogic JDBC provides connection pools that offer ready-to-use pools of connections to a DBMS.

JDBC connection pools are used to optimize DBMS connections. You can tune WebLogic Integration performance by configuring the size of JDBC connection pools. For information about determining the size of a JDBC connection pool on each node in a WebLogic Integration cluster, see Configuring JDBC Connection Pool Sizes. A setting that is too low results in delays while WebLogic Integration waits for connections to become available. A setting that is too high results in slower DBMS performance.
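The wait-for-connection delay described above can be sketched with a fixed-size pool of placeholder connections: connections are created up front, and a caller that finds the pool empty must wait. This is an illustration of the pooling behavior only, not the WebLogic JDBC implementation; FakeConnection and SimpleJdbcPool are hypothetical names.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.TimeUnit;

// Stand-in for a real DBMS connection.
class FakeConnection {}

// Hypothetical sketch of a fixed-size connection pool: all connections
// are created at startup, and borrowers wait when the pool is empty,
// which is the delay a too-small pool produces.
class SimpleJdbcPool {
    private final ArrayBlockingQueue<FakeConnection> idle;

    SimpleJdbcPool(int size) {
        idle = new ArrayBlockingQueue<FakeConnection>(size);
        for (int i = 0; i < size; i++) {
            idle.add(new FakeConnection());
        }
    }

    /** Borrow a connection, waiting up to timeoutMillis; null on timeout. */
    FakeConnection getConnection(long timeoutMillis) {
        try {
            return idle.poll(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }

    /** Return a connection to the pool. */
    void release(FakeConnection c) {
        idle.add(c);
    }
}
```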

For more information about WebLogic JDBC connection pools, see:

Execution Thread Pool

The execution thread pool controls the number of threads that can execute concurrently on WebLogic Server. A setting that is too low results in sequential processing and possible deadlocks. A setting that is too high results in excessive memory consumption and may cause thrashing.

Set the execution thread pool high enough so that all candidate threads run, but not so high that performance is hampered due to excessive context switching in the system. The number of execute threads also determines the number of threads that read incoming socket messages (socket-reader threads). This number is, by default, one-third of the number of execute threads. A number that is too low can result in contention for threads for reading sockets and can sometimes lead to a deadlock. Monitor your running system to empirically determine the best value for the execution thread pool.

For information about configuring the execution thread pool, see Configuring the Execution Thread Pool.

Following these recommendations for tuning your execution thread pool will help optimize the performance of WebLogic Integration. However, in a WebLogic Integration environment, the best way to throttle work is by controlling the number of message-driven beans—see EJB Pooling and Caching.
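The default one-third relationship between execute threads and socket-reader threads mentioned above can be expressed as simple arithmetic. This is a sketch of the default ratio only; the class and method names are illustrative.

```java
// Sketch of the default ratio stated above: socket-reader threads
// default to one third of the execute threads.
class ThreadPoolMath {
    /** Default socket-reader thread count for a given execute thread count. */
    static int defaultSocketReaders(int executeThreads) {
        return executeThreads / 3; // one third, rounded down
    }
}
```

For example, a server configured with 15 execute threads would by default dedicate 5 of them to reading incoming socket messages.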

J2EE Connector Architecture

The WebLogic J2EE Connector Architecture (JCA) integrates the J2EE Platform with one or more heterogeneous Enterprise Information Systems (EIS). The WebLogic JCA is based on the J2EE Connector Specification, Version 1.0, Proposed Final Draft 2, from Sun Microsystems, Inc.

For information about the WebLogic J2EE-CA, see Managing the WebLogic J2EE Connector Architecture in the BEA WebLogic Server Administration Guide at the following URL:

http://download.oracle.com/docs/cd/E13222_01/wls/docs70/adminguide/jconnector.html

Business Process Management Resources

In WebLogic Integration, the Business Process Management (BPM) functionality handles the definition and execution of business processes. For an introduction to BPM functionality, see Business Process Management in Introducing BEA WebLogic Integration.

The following sections describe BPM features that are used for the deployment of WebLogic Integration solutions:

BPM resources can be configured to run on a cluster—a group of servers that is managed as a single unit. For more information about clustering and BPM, see Understanding WebLogic Integration Clusters.

Overview of BPM Resources

The following diagram shows BPM resources for a single node in a cluster.

Figure 1-2 BPM EJB Resources


 

The next section, Types of BPM Resources, describes the resources represented in the preceding figure.

Types of BPM Resources

BPM uses WebLogic JMS (described in Java Message Service) for communicating worklist, time, and event notifications, as well as error and audit messages. BPM client applications send these messages, as XML events, to JMS event queues. BPM uses event listener message-driven beans to process XML events that arrive in event queues and deliver them to the running instance of the BPM engine.

You can create custom message queues using the WebLogic Server Administration Console, then run the MDB Generator utility to generate an event listener bean to listen on the queue, and subsequently update the BPM configuration to recognize the new event listener bean. For more information, see Creating New Pools.

The following sections describe the types of resources you can use when configuring BPM for a clustered environment and when tuning BPM performance:

Workflow Processor Beans

Workflow processor beans are stateful session beans that execute workflows, which proceed from a start/event node to a stop/event node (quiescent state to quiescent state). Workflow processor beans accept work from event listener beans, Worklist clients, and from other workflow processor beans (when subworkflows are used).

Because workflow processor beans are instantiated at run time, based on the system load, the exact number of workflow processor beans at run time is dynamic. The size of the workflow processor bean pool determines the number of workflow processor beans that can be active concurrently. If the number of beans exceeds the pool size, then excess beans are passivated until a bean in the pool becomes available. In general, a pool size that is too large is preferable to one that is too small. For tuning information, see Configuring Other EJB Pool and Cache Sizes.

Workflow processor beans are deployed to the cluster. WebLogic Server optimizes a clustered system such that each node in a cluster uses a local copy of a workflow processor bean.
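The passivation behavior described above can be sketched as a small bounded cache: when a bean is activated beyond the pool size, another active bean is passivated to make room. This is a hypothetical simulation of stateful-session-bean caching, not BPM engine code; the least-recently-used eviction policy shown here is an assumption for illustration.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch: up to poolSize workflow processor beans stay
// active; activating one more passivates the least recently used bean
// (its state is saved and the instance evicted), which is the cost a
// too-small pool incurs.
class WorkflowBeanCache {
    private final int poolSize;
    private final Deque<String> active = new ArrayDeque<String>();
    private int passivations = 0;

    WorkflowBeanCache(int poolSize) {
        this.poolSize = poolSize;
    }

    /** Activate a bean, passivating the LRU bean if the pool is full. */
    void activate(String beanId) {
        if (active.size() == poolSize) {
            active.removeLast(); // evict least recently used bean
            passivations++;      // count the simulated passivation cost
        }
        active.addFirst(beanId);
    }

    int passivationCount() { return passivations; }
    int activeCount() { return active.size(); }
}
```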

Event Listener Message-Driven Beans

Event listener message-driven beans pull work from the event queue and send work to the workflow processor beans. Event listener beans wait until the workflow processor bean either executes to completion or hits a quiescent state before getting new work from the queue.

Event listener beans have a configured pool size for unordered messages and they use a series of single bean pools (named beans with a free pool size of 1) for ordered messages, as described in "Generating Message-Driven Beans for Multiple Event Queues" in Establishing JMS Connections in Programming BPM Client Applications.

In combination, these pools determine the amount of parallel workflow execution that can occur when initiated from events.
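A rough sketch of the resulting parallelism: unordered events share one listener pool, while each ordered queue is served by its own single-bean pool, so each ordered queue contributes at most one concurrent workflow. The class and method names below are illustrative.

```java
// Sketch of the combined parallelism described above: the unordered
// pool contributes up to its full size, and each ordered queue
// (a named bean with a free pool size of 1) contributes at most one.
class EventListenerParallelism {
    static int maxParallelWorkflows(int unorderedPoolSize, int orderedQueues) {
        return unorderedPoolSize + orderedQueues;
    }
}
```

For example, an unordered pool of 8 listener beans plus 3 ordered queues allows at most 11 event-initiated workflows to execute in parallel.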

Template Beans

Template beans are entity beans that contain the workflow template to be executed. In general, the size of the template entity bean pool should equal the maximum number of workflow templates (templates, not instances) to be executed concurrently. In general, a pool that is too large is preferable to one that is too small. Template entity beans are clusterable (they have cluster-aware stubs), so they can be used by workflow processor beans on other nodes in a cluster.

Template Definition Beans

Template definition beans are entity beans that contain the workflow template definition to be executed.

Business processes are saved as workflow templates in a database. These templates are essentially empty containers for storing different workflow versions. They can be associated with multiple organizations. Templates contain template definitions, which serve as different versions of the same workflow and are distinguished by effective and expiry dates. For information about business processes and workflows, see Using the WebLogic Integration Studio.

In general, the size of the template definition entity bean pool should equal the maximum number of workflow templates (not workflow instances) to execute concurrently. In general, a pool that is too large is preferable to one that is too small. Template definition entity beans are clusterable (they have cluster-aware stubs), so they can be used by workflow processor beans on other nodes in a cluster.
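The effective/expiry versioning of template definitions described above can be sketched as a date-window lookup: the definition in force at a given time is the one whose window contains that time. This is an illustration of the versioning scheme only, not BPM code; class and field names are hypothetical.

```java
// Hypothetical sketch of a template definition version with an
// effective date (inclusive) and an expiry date (exclusive).
class TemplateDefinition {
    final String version;
    final long effectiveMillis;
    final long expiryMillis;

    TemplateDefinition(String version, long effectiveMillis, long expiryMillis) {
        this.version = version;
        this.effectiveMillis = effectiveMillis;
        this.expiryMillis = expiryMillis;
    }
}

class WorkflowTemplate {
    /** Return the definition effective at the given time, or null if none. */
    static TemplateDefinition select(TemplateDefinition[] defs, long now) {
        for (TemplateDefinition d : defs) {
            if (d.effectiveMillis <= now && now < d.expiryMillis) {
                return d;
            }
        }
        return null;
    }
}
```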

Instance Beans

Instance beans are entity beans that contain the workflow instance being executed. In general, the size of the instance entity bean pool should equal the size of the workflow processor bean pool. There is no advantage to having an instance entity bean pool that is larger than the workflow processor bean pool. In general, a pool that is too large is preferable to one that is too small. Instance entity beans are clusterable (they have cluster-aware stubs), so they can be used by workflow processor beans on other nodes in a cluster.

Event Queue

A single JAR file contains both ordered and unordered event listener message-driven beans for a particular queue. The WebLogic Integration installation provides the wlpi-mdb-ejb.jar file, which contains message-driven beans that consume messages from the default EventQueue. This JAR file must be targeted to the cluster. You can also create new event queues, as described in Creating New Pools. For information about BPM event queues in a cluster, see Load Balancing BPM Functions in a Cluster.

Note: To scale BPM functionality in a cluster, you must create new event queues.

Worklist Console (Deprecated)

The Worklist client includes the Swing-based WebLogic Integration Worklist console, as well as any user code that creates workflows through the BPM API. It is shown in Figure 1-2 for context only; it is not a configurable run-time resource.

BPM Work Sequence

The following diagram shows the interaction among BPM EJBs when processing events.

Figure 1-3 Interaction Between BPM EJBs When Processing Events


 

When a BPM event listener bean receives a work request from the event queue (whether the default queue or a user-defined queue), it creates a workflow processor bean to work on the request. The workflow processor bean executes the workflow until the workflow hits a stop or event node. Note that, when a workflow calls another workflow, a new workflow processor bean is created and the calling workflow does not exit the workflow processor bean.

The template bean and template definition bean are read at the beginning of workflow execution. The instance bean is read at the beginning of workflow execution, and written when workflow execution quiesces at a transaction boundary (such as an event or done node).

For event-driven workflows, the creation of additional workflow processor beans does not enable the deployment to do more work. The number of event listener beans limits the number of workflow instances that can be processed in parallel.

B2B Integration Resources

When you deploy WebLogic Integration to a clustered domain, all B2B integration resources, with the exception of resources for the administration server, must be deployed homogeneously in the cluster. That is, to achieve high availability, scalability, and performance improvements, B2B integration resources must be targeted to all clustered servers in a domain. For more information about B2B integration resources and clustering, see Designing a Clustered Deployment.

Many B2B integration resources are allocated dynamically, as needed; a deployment cannot be configured ahead of time. For information about resources that can be configured to accommodate B2B loads, see Configuration Requirements in Administering B2B Integration.

A shared file system is required for a cluster that uses B2B integration functionality. We recommend either a Storage Area Network (SAN) or a multiported disk system.

Note: WebLogic Integration applications that are based on the XOCP business protocol are not supported in a clustered environment.

Application Integration Resources

The following sections describe the types of application integration resources that WebLogic Integration supports:

For information about clustering and application integration, see Understanding WebLogic Integration Clusters.

Application integration functionality is integrated in the WebLogic Integration product, but it is also available packaged in a single, self-contained J2EE ear file. This enables you to deploy application integration on any valid WebLogic domain. For example, Web services developers and WebLogic Portal developers can use application views to interact with EIS applications. For more information about deploying application integration outside of a WebLogic Integration environment, see Modular Deployment of Application Integration in Using Application Integration.

Synchronous Service Invocations

Use synchronous invocations when the underlying EIS can respond quickly to requests, or when the client application can afford to wait.

The following figure illustrates the flow of a synchronous service invocation.

Figure 1-4 Synchronous Service Invocations


 

In a synchronous service invocation, a client (shown here as a workflow processor) calls the application view EJB (a stateless session bean). The application view calls the service adapter using a synchronous Common Client Interface (CCI) request. The service adapter is a J2EE-CA service adapter that actually processes the request.

Note: When a workflow acts as a client to an EIS, the workflow processor stalls while it waits for the request to complete, tying up a workflow processor bean and perhaps an event listener bean as well. Unless the underlying EIS can respond quickly to the request, consider using asynchronous invocations to optimize throughput.

Asynchronous Service Invocations

The following figure illustrates asynchronous service processing in WebLogic Integration.

Figure 1-5 Asynchronous Service Invocations


 

Note: The WLAI_ASYNC_REQUEST_QUEUE and WLAI_ASYNC_RESPONSE_QUEUE queues are deployed as distributed destinations in a WebLogic Integration cluster. The Asynchronous Service Request Processor is a message-driven bean (wlai-asyncprocessor-ejb.jar), which is also deployed to the cluster. For more information about how application integration resources are deployed for high availability, see Highly Available JMS.

The preceding diagram illustrates the following process flow for an application integration asynchronous service:

  1. An Application View client instantiates an Application View instance.

    The client has the option of supplying a durable client ID at the time of construction. The durable client ID is used as the correlation ID for asynchronous response messages.

    The client invokes the invokeServiceAsync method, passing the request IDocument and an AsyncServiceResponseListener to handle the response.

  2. The Application View instance creates an AsyncServiceRequest object and sends it to the WLAI_ASYNC_REQUEST_QUEUE.

    The AsyncServiceRequest object contains the name of the destination to which the response listener is pinned. The AsyncServiceProcessor message-driven bean uses this information to determine the physical destination to which it should send the response.

    If a request object does not contain the name of a response destination, the AsyncServiceProcessor message-driven bean uses the destination specified for the JMS message (obtained through the message's getJMSReplyTo() method).

    Suppose, however, that only the client supplies an AsyncServiceResponseListener to the Application View:

    invokeServiceAsync(String serviceName, IDocument request,
                       AsyncServiceResponseListener listener);

    In this scenario, the Application View establishes a receiver to the JMS queue that is bound at the JNDI location provided by the Application View EJB method getAsyncResponseQueueJNDIName(). The Application View instance uses QueueReceiver.getQueue() to set the ReplyTo destination on the request message.

  3. In a cluster, the WLAI_ASYNC_REQUEST_QUEUE queue is deployed as a distributed JMS queue. However, each message is sent to a single physical queue and is available only from that queue. If that physical queue becomes unavailable before a given message is dequeued, then the message (that is, the Asynchronous Service Request) remains unavailable until that physical queue comes back on-line via a manual JMS migration or server restart.

    It is not sufficient to send a message to a distributed queue and expect the message to be received by a QueueReceiver of that queue. Because the message is sent to only one physical queue, there must be a QueueReceiver listening on the physical queue. To satisfy this requirement, the AsyncServiceProcessor (wlai-asyncprocessor-ejb.jar) must be deployed on all nodes in a cluster.

    The AsyncServiceProcessor message-driven bean receives the message from the queue in a first in, first out (FIFO) manner.

    The AsyncServiceProcessor uses the AsyncServiceRequest object in the JMS ObjectMessage to determine the qualified name, service name, request document, and response destination for the Application View.

  4. The AsyncServiceProcessor uses an Application View EJB to invoke the service synchronously. The service is translated into a synchronous CCI-based request/response message for the resource adapter.

  5. The AsyncServiceProcessor receives the response. The response is subsequently encapsulated into an AsyncServiceResponse object and sent to the response destination provided in the AsyncServiceRequest object, which in this case is WLAI_ASYNC_RESPONSE_QUEUE_myserver1.

    Note that the AsyncServiceProcessor must send the response to a specific physical destination (WLAI_ASYNC_RESPONSE_QUEUE_myserver1) and not to the distributed destination (WLAI_ASYNC_RESPONSE_QUEUE). The physical destination queue was established by the Application View instance running on the client when it called the Application View EJB getAsyncResponseQueueJNDIName() method. (See step 2.)

    Note: It is possible for a client application to fail before it receives all the response messages it expects. To ensure that, after recovery, the client is associated with the same JMS response queue it used before the failure, you must supply the same client ID that was used before the failure. The following listing is an example of recovery code that restores this association by reusing the same unique client ID:

    String uniqueClientID = "uniqueClientID";
    ApplicationView myAppView = new ApplicationView(jndiContext,
        "MyAppView", uniqueClientID);
    myAppView.recoverAsyncServiceResponses(new MyAsyncResponseListener());

  6. The instance of the Application View message listener that was created when the Application View instance was instantiated receives the AsyncServiceResponse message as a JMS ObjectMessage and passes it to the AsyncServiceResponseListener supplied in the invokeServiceAsync() call shown in step 1.
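The response-routing decision made in step 2 can be sketched as a simple fallback: use the physical destination named in the request when present, otherwise fall back to the JMS ReplyTo destination. The class and field names below are hypothetical, introduced only to illustrate the decision.

```java
// Hypothetical stand-in for the AsyncServiceRequest routing fields:
// a pinned response destination (which may be absent) and the
// destination carried on the JMS message itself.
class AsyncRequest {
    final String responseDestination; // may be null
    final String jmsReplyTo;

    AsyncRequest(String responseDestination, String jmsReplyTo) {
        this.responseDestination = responseDestination;
        this.jmsReplyTo = jmsReplyTo;
    }
}

class ResponseRouter {
    /** Pick the physical destination for the response, as in step 2. */
    static String resolveDestination(AsyncRequest req) {
        if (req.responseDestination != null) {
            return req.responseDestination; // pinned physical queue
        }
        return req.jmsReplyTo;              // fallback: JMS ReplyTo
    }
}
```

Note that the resolved name is always a specific physical queue (such as WLAI_ASYNC_RESPONSE_QUEUE_myserver1), never the distributed destination itself, as step 5 explains.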

Events

Application integration adapters generate events that are consumed by BPM or WebLogic Workshop. Events are forwarded from an Application View to a JMS queue (WLAI_EVENT_QUEUE). This queue is a distributed destination containing multiple physical destinations. A message-driven bean (the WLI-AI Event Processor) listens on the WLAI_EVENT_QUEUE distributed destination.

The WLI-AI Event Processor does the following:

The following figure illustrates event processing in WebLogic Integration.

Figure 1-6 Events


 

The preceding figure illustrates the following sequence of steps for event processing:

  1. An event occurs in an enterprise information system (EIS) and is sent to a JMS queue, as follows:

    1. An event occurs in an enterprise information system (EIS).

    2. The event data is transferred to the event generator in the resource adapter. The event generator transforms the EIS-specific event data into an XML document and posts an IEvent object to the event router (Adapter Event Router).

    3. The Event Router passes the IEvent object to an EventContext object for each application integration server that is interested in the specific event type.

    4. The EventContext encapsulates the IEvent object into a JMS ObjectMessage and, using a JMS QueueSender, sends it to the JMS Queue bound at the following JNDI context: com.bea.wlai.EVENT_QUEUE.

  2. The ObjectMessage is stored in the WLAI_EVENT_QUEUE and is processed by the WLI-AI Event Processor message-driven bean (wlai-eventprocessor-ejb.jar) in a first in, first out (FIFO) manner.

    In a cluster, WLAI_EVENT_QUEUE is deployed as a distributed JMS queue. However, each message is sent to a single physical queue and is only available from the physical queue to which it was sent. If that physical queue becomes unavailable before a given message is dequeued, then the message (that is, the event) is unavailable until that physical queue comes back on-line.

    It is not enough to send a message to a distributed queue and expect the message to be received by a QueueReceiver for that distributed queue. Because the message is sent to one physical queue, there must be a QueueReceiver listening on that physical queue. To satisfy this requirement, the WLI-AI Event Processor message-driven bean (wlai-eventprocessor-ejb.jar) must be deployed on all nodes in a cluster.

  3. The WLI-AI Event Processor message-driven bean (wlai-eventprocessor-ejb.jar) determines the list of event destinations:

    1. Event destinations are added to the AIDestinationMBean. The MBean is replicated across the cluster so that the same list of event destinations is passed to the event processor message-driven bean on each managed server. When the WLI-AI BPM plug-in (wlai-plugin-ejb.jar) is deployed, it adds the BPM Event Queue as an event destination. The inclusion of this destination makes it possible for EIS events to be sent to the BPM process engine. Also, when Application View event listeners are registered, the event is sent to the WLAI_EVENT_TOPIC.

    2. The WLI-AI Event Processor message-driven bean reads the list of event destinations, to which it should send events, from the MBean.

  4. An event ObjectMessage is delivered to all registered event destinations in a single JTA user transaction. If delivery to any event destination fails, the message is rolled back onto the WLAI_EVENT_QUEUE. The WLAI_EVENT_QUEUE is configured to forward poisoned messages to the WebLogic Integration error destination (com.bea.wli.FailedEventQueue). For information about the FailedEventQueue, see Error Destination.

    Note: Because the destinations for events are typically JMS destinations, it is unlikely that the system will fail to forward an event.
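The all-or-nothing delivery in step 4 can be sketched with plain lists standing in for JMS destinations: if delivery to any destination fails, earlier deliveries are undone and the event goes back on the queue. This is a simulation of the transactional behavior only, not the actual JTA mechanics; names are illustrative.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of one-transaction fan-out: deliver the event to every
// destination, and if any delivery fails, undo the earlier ones so
// the event can be rolled back onto the source queue.
class EventFanOut {
    /**
     * Deliver event to all destinations atomically. failAtIndex simulates
     * a failure at that destination (-1 for no failure). Returns true on
     * success; on failure, partial deliveries are rolled back.
     */
    static boolean deliver(String event, List<List<String>> destinations,
                           int failAtIndex) {
        List<List<String>> delivered = new ArrayList<List<String>>();
        for (int i = 0; i < destinations.size(); i++) {
            if (i == failAtIndex) {           // simulated delivery failure
                for (List<String> d : delivered) {
                    d.remove(event);          // roll back earlier deliveries
                }
                return false;
            }
            destinations.get(i).add(event);
            delivered.add(destinations.get(i));
        }
        return true;
    }
}
```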

Application Views and Connection Factories

The run-time application integration features (synchronous service invocations, asynchronous service invocations, and events) described in preceding sections can be clustered for scalability and high availability. Design-time application integration features (Application Views and connection factories) can be clustered for scalability, but not for high availability. This means that you cannot deploy or undeploy (edit) Application Views if any server in a cluster is not running. In other words, you can deploy and undeploy (edit) only in a healthy cluster.

The resource adapter (RAR) is uploaded by deploying the wlai-admin.ear archive file to the administration server, not to the clustered managed servers. Two-phase deployment is used. The WebLogic Deployer utility on the administration server controls the deployment to managed servers.

The following figure illustrates connection factory deployment at design time.

Figure 1-7 Connection Factory Deployment


 

Application View deployment depends on successful connection factory deployment. For more information about deploying application integration adapters in clustered environments, see Load Balancing Application Integration Functions in a Cluster and Deploying Adapters.

Relational Database Management System Resources

WebLogic Integration relies extensively on database resources for handling run-time operations and ensuring that application data is durable. Database performance is a key factor in overall WebLogic Integration performance. For more information, see Tuning Databases.

Hardware, Operating System, and Network Resources

Hardware, operating system, and network resources play a crucial role in WebLogic Integration performance. Deployments must comply with the hardware and software requirements described in the BEA WebLogic Integration Release Notes. For more information about configuring these resources for maximum performance in a production environment, see Recommended Hardware and Software, and Tuning Hardware, Operating System, and Network Resources.

 
