Best Practices for WLI Application Life Cycle


Deploying and Maintaining WLI Applications

There are several best practices for deploying, running, and maintaining WLI applications, as explained in the following sections:

Deploying a WLI Application During Runtime

When you work in development mode, you can use the WLI IDE to build and deploy your application. The IDE provides a feature that generates an Ant script to create a build for production purposes. You can also run these build scripts outside the IDE from the command prompt to generate a single EAR file, which you can then deploy from the command prompt or through the Oracle WebLogic Server Administration Console. For more information, see Deploying Oracle WebLogic Integration Solutions.
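
For example, the generated EAR can be deployed from the command prompt with the weblogic.Deployer tool. The following minimal sketch drives the same tool from Java; it assumes weblogic.jar is on the classpath, and the administration URL, credentials, deployment name, EAR path, and cluster name are placeholders that you must replace with values from your own domain.

```java
// Minimal sketch: deploy a WLI EAR by driving the weblogic.Deployer tool
// (shipped in weblogic.jar). All URLs, credentials, names, and paths below
// are placeholders, not values defined by WLI.
public class DeployWliEar {
    public static void main(String[] args) throws Exception {
        String[] deployerArgs = {
            "-adminurl", "t3://adminhost:7001",  // administration server URL (placeholder)
            "-username", "weblogic",             // administrator user (placeholder)
            "-password", "welcome1",             // administrator password (placeholder)
            "-deploy",                           // perform a deployment
            "-name", "myWliApp",                 // deployment name (placeholder)
            "-source", "/builds/myWliApp.ear",   // EAR produced by the exported Ant build (placeholder)
            "-targets", "myWliCluster"           // target cluster (placeholder)
        };
        // Equivalent to running "java weblogic.Deployer <arguments>" from the command prompt.
        weblogic.Deployer.main(deployerArgs);
    }
}
```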

Deploying a WLI Application in a Cluster

An Oracle WebLogic Server cluster domain contains only one administration server and one or more managed servers. The managed servers in a WLI domain can be grouped into a cluster. When you configure WLI resources that can be clustered, you target the resources to a named cluster. If you specify a cluster as the target for resource deployment, you can dynamically increase capacity by adding managed servers to the cluster. The best practices that you can apply to a cluster are as follows:

Configuring Trading Partner Integration Resources

You must deploy Trading Partner Integration components homogeneously to a cluster. To avoid a single point of failure, ensure that Trading Partner Integration resources are deployed identically on every managed server.

Follow these guidelines to configure Trading Partner Integration in a cluster:

Changing Cluster Configurations and Deployment Requests

You can change the configuration of a cluster, for example by adding new nodes or modifying the Trading Partner Integration configuration, only while the administration server of the cluster is active.

Requests to deploy or undeploy applications in a cluster are interrupted if the administration server is inactive, but the managed servers continue to serve requests. As long as the required configuration files, such as msi-config.xml, SerializedSystemIni.dat, and optionally boot.properties, exist in each managed server's root directory, you can boot or reboot managed servers using an existing configuration.

Managed servers that run without an administration server operate in Managed Server Independence (MSI) mode. For more information about MSI mode, see the "Understanding Managed Server Independence Mode" section in "Avoiding and Recovering from Server Failure" in Managing Oracle WebLogic Server Startup and Shutdown.

Load Balancing in a WLI Cluster

One of the goals of clustering a WLI application is scalability. For a cluster to be scalable, each server must be fully utilized. Load balancing distributes the workload proportionally among all the servers in a cluster so that each server can run at full capacity. Load balancing is required in the following functional areas of a WLI cluster:

HTTP Functions in a Cluster

Web services (SOAP or XML over HTTP) and Oracle WebLogic Trading Partner Integration protocols can use HTTP load balancing. You can use the Oracle WebLogic HttpClusterServlet, a web server plug-in, or a hardware router for external load balancing.

JMS Functions in a Cluster

WLI and WLI applications most often use JMS queues that are configured as distributed destinations. The exception to this rule is a JMS queue that is targeted to a single managed server.
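
The following sketch shows how a standard JMS client reaches such a distributed queue through JNDI. It assumes the WebLogic JNDI context factory and a t3 cluster address; the provider URL and the JNDI names of the connection factory and queue are placeholders, not names defined by WLI.

```java
import java.util.Hashtable;
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSender;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.naming.Context;
import javax.naming.InitialContext;

// Minimal sketch: post a message to a distributed queue in a WLI cluster.
public class DistributedQueueSender {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        env.put(Context.PROVIDER_URL, "t3://managed1:8001,managed2:8001"); // cluster address (placeholder)
        Context ctx = new InitialContext(env);

        // JNDI names of the connection factory and distributed queue (placeholders).
        QueueConnectionFactory factory = (QueueConnectionFactory) ctx.lookup("jms.MyConnectionFactory");
        Queue distributedQueue = (Queue) ctx.lookup("jms.MyDistributedQueue");

        QueueConnection connection = factory.createQueueConnection();
        try {
            QueueSession session = connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            QueueSender sender = session.createSender(distributedQueue);
            // Each send may land on a different member of the distributed destination,
            // depending on the load-balancing and server-affinity settings of the factory.
            sender.send(session.createTextMessage("<order id=\"42\"/>"));
        } finally {
            connection.close();
        }
    }
}
```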

Synchronous Clients and Asynchronous Business Processes

If your WLI solution includes communication between a synchronous client and an asynchronous business process, you can enable server affinity for the weblogic.jws.jms.QueueConnectionFactory. This is the default setting.

WARNING: If you disable server affinity for a solution that includes communication between a synchronous client and an asynchronous business process in an attempt to tune JMS load balancing, the resulting load balancing behavior is unpredictable.

RDBMS Event Generators

The RDBMS event generator has a dedicated JMS connection factory, wli.internal.egrdbms.XAQueueConnectionFactory, for which load balancing is enabled by default. To disable load balancing for RDBMS events, you must disable load balancing and enable server affinity for this connection factory.

Application Integration Functions in a Cluster

Application Integration supports load balancing of synchronous and asynchronous services and events within a cluster. The use of synchronous and asynchronous services is explained in the following sections:

Synchronous Services

Synchronous services are implemented as method calls on a session EJB and are load balanced within the cluster according to EJB load-balancing rules. These EJBs are published at design time, and each application view is represented as two session EJBs: one stateless and one stateful.

In standard operation, the stateless session EJB invokes the services, and load balancing occurs on a per-service basis. Every time you invoke a service on an application view, you may be routed to a different EJB instance on a different Oracle WebLogic managed server.

When you use the local transaction facilities of the application view, the stateful session EJB invokes the services. The stateful session EJB keeps the connection to the EIS open so that the local transaction state persists between service invocations. In this mode, service invocations are pinned to a single EJB instance on a single managed server within the cluster. Once the transaction completes, through either a commit or a rollback, the standard per-service load balancing applies again.
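
The difference between the two modes can be sketched from the client side as follows. This assumes the Application Integration client API (com.bea.wlai.client.ApplicationView and com.bea.document.IDocument); the service names are placeholders, and the local-transaction method names are written here from memory and should be verified against the Javadoc for your release.

```java
import com.bea.document.IDocument;
import com.bea.wlai.client.ApplicationView;

// Sketch of synchronous application view invocations. Service names are
// placeholders; the local-transaction method names are assumptions to be
// checked against your release's Application Integration Javadoc.
public class SyncInvocationSketch {

    // Standard operation: the call goes through the stateless session EJB,
    // so successive invocations may be load balanced to different managed servers.
    static IDocument invokeOnce(ApplicationView view, IDocument request) throws Exception {
        return view.invokeService("getCustomer", request); // placeholder service name
    }

    // Local transaction: the stateful session EJB keeps the EIS connection open,
    // so every call between begin and commit/rollback is pinned to one managed server.
    static void invokeInLocalTransaction(ApplicationView view, IDocument debit, IDocument credit)
            throws Exception {
        view.beginLocalTransaction();
        try {
            view.invokeService("debitAccount", debit);   // placeholder service name
            view.invokeService("creditAccount", credit); // placeholder service name
            view.commitLocalTransaction();               // per-service load balancing resumes after this
        } catch (Exception e) {
            view.rollbackLocalTransaction();
            throw e;
        }
    }
}
```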

Asynchronous Services

Asynchronous services are always invoked as method calls on a stateless session EJB. You cannot use the local transaction facility of the application view for asynchronous service invocations.

A single asynchronous service invocation translates to two method invocations on two different stateless session EJB instances. Load balancing for an asynchronous service therefore occurs twice: first upon receipt of the request, and again during the execution of the request and delivery of the response.

In addition, both the asynchronous service request and the response are posted to a distributed JMS queue, so JMS load balancing applies to both the request and the response. As a result, the invokeServiceAsync method of the application view may be serviced on one managed server, the request delivered to a second managed server where it is processed and the response generated, and the response delivered to a third server for retrieval by the client.
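
The following sketch illustrates an asynchronous invocation from the client side: invokeServiceAsync returns once the request has been posted, and the response is delivered later through a listener, possibly after being processed on a different managed server. The listener interface and its callback name are assumptions recalled from the Application Integration client API and should be verified against your release; the service name is a placeholder.

```java
import com.bea.document.IDocument;
import com.bea.wlai.client.ApplicationView;
import com.bea.wlai.client.AsyncServiceResponse;
import com.bea.wlai.client.AsyncServiceResponseListener;

// Sketch of an asynchronous service invocation. The listener type and callback
// name are assumptions; the service name is a placeholder.
public class AsyncInvocationSketch {
    static void invoke(ApplicationView view, IDocument request) throws Exception {
        AsyncServiceResponseListener listener = new AsyncServiceResponseListener() {
            public void onAsyncServiceResponse(AsyncServiceResponse response) {
                // Runs when the response arrives from the distributed response queue.
                System.out.println("Asynchronous response received: " + response);
            }
        };
        // Posts the request to the distributed request queue and returns immediately;
        // the request may be processed, and the response generated, on another server.
        view.invokeServiceAsync("submitOrder", request, listener);
    }
}
```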

