
Scaling WebLogic Enterprise Applications

 

This topic introduces key concepts and tasks for scaling WebLogic Enterprise applications. This topic includes the following sections:

About Scaling WebLogic Enterprise Applications
Using Object State Management
Replicating Server Processes and Server Groups
Using Multithreaded Java Servers (Java only)
Using Factory-based Routing (CORBA only)
Multiplexing Incoming Client Connections

For more detailed information and examples for different types of WebLogic Enterprise applications, see the following topics:

 


About Scaling WebLogic Enterprise Applications

This topic includes the following sections:

Application Scalability Requirements
WebLogic Enterprise Scalability Features
Scalability Support for WebLogic Enterprise Applications

Application Scalability Requirements

Many applications perform adequately in an environment where between 1 and 10 server processes and 10 to 100 client applications are running. However, in an enterprise environment, applications may need to support hundreds of execution contexts (where a context can be a thread or a process), tens of thousands of client applications, and millions of objects, all at satisfactory performance levels.

Subjecting an application to exponentially increasing demands quickly reveals any resource shortcomings and performance bottlenecks in the application. Scalability is therefore an essential characteristic of WebLogic Enterprise applications.

You can build highly scalable WebLogic Enterprise applications by:

WebLogic Enterprise Scalability Features

WebLogic Enterprise supports large-scale application deployments by:

Scalability Support for WebLogic Enterprise Applications

Table 1-1 shows how WebLogic Enterprise scalability features support each type of WebLogic Enterprise application.

Table 1-1 Supported Scalability Features for WebLogic Enterprise Applications

WebLogic Enterprise Feature                         CORBA C++       CORBA Java      EJB             RMI
Object state management                             Supported       Supported       Supported       Not supported
Replicating server processes and server groups      Supported       Supported       Supported       Supported
Using multithreaded servers                         Not supported   Supported       Supported       Supported
Factory-based routing                               Supported       Supported       Not supported   Not supported
Multiplexing incoming client connections            Supported       Supported       Supported       Supported

Notes: CORBA and EJB applications require slightly different configuration parameters in the UBBCONFIG file. For more information, see "Creating a Configuration File" in the Administration Guide.

For RMI applications, callback objects are not scalable because they are not subject to WebLogic Enterprise administration. For more information about callback objects, see "Using RMI with Client-side Callbacks" in Using RMI in a WebLogic Enterprise Environment.

 


Using Object State Management

This topic includes the following sections:

Object State Models
Implementing Stateless and Stateful Objects

Object state management is a fundamental concern of large-scale client/server systems because it is critical that such systems achieve optimized throughput and response time. For more detailed information about using object state management, see the following topics:

Object State Models

This topic describes the following object state models:

CORBA Object State Models
EJB Object State Models
RMI Object State Models

CORBA Object State Models

WebLogic Enterprise CORBA supports three object state management models:

Method-bound objects
Process-bound objects
Transaction-bound objects

For more information about these models, see "Server Application Concepts" in Creating CORBA C++ Server Applications.

Method-bound Objects

Method-bound objects are loaded into the machine's memory only for the duration of the client invocation. When the invocation is complete, the object is deactivated and any state data for that object is flushed from memory. In this document, a method-bound object is considered to be a stateless object.

You can use method-bound objects to create a stateless server model in your application. In a stateless server model, a request for an object, even one that is already active, can be directed to any available server, which allows the application to process thousands or even millions of objects concurrently. From the client application's point of view, all the objects are available to service requests. However, because the server application maps objects into memory only for the duration of client invocations, few of the objects managed by the server application are in memory at any given moment.

Process-bound Objects

Process-bound objects remain in memory from the time they are first invoked until the server process in which they run is shut down. A process-bound object can be activated upon a client invocation, or explicitly before any client invocation (a preactivated object). Applications can control the deactivation of process-bound objects. In this document, a process-bound object is considered to be a stateful object.

When appropriate, process-bound objects with a large amount of state data can remain in memory to service multiple client invocations, thereby avoiding reading and writing the object's state data on each client invocation.

Transaction-bound Objects

Transaction-bound objects can also be considered stateful because, within the scope of a transaction, they can remain in memory between invocations. If the object is activated within the scope of a transaction, the object remains active until the transaction is either committed or rolled back. If the object is activated outside the scope of a transaction, its behavior is the same as that of a method-bound object (it is loaded for the duration of the client invocation).
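
In CORBA C++ applications, the state model for each object implementation is declared through its activation policy in the server's ICF file. The following is a minimal sketch only, using a hypothetical Bank module and Teller implementation; the ICF file for your application is generated from your OMG IDL, as described in Creating CORBA C++ Server Applications:

    // Hypothetical ICF fragment. The activation_policy value may be
    // method, process, or transaction, corresponding to the three
    // object state models described above.
    module POA_Bank
    {
      implementation Teller_i
      {
        activation_policy ( method );     // method-bound (stateless) in this sketch
        transaction_policy ( optional );
        implements ( Bank::Teller );
      };
    };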

EJB Object State Models

WebLogic Enterprise implements the Enterprise JavaBeans 1.1 Specification published by Sun Microsystems, Inc. WebLogic Enterprise fully supports the three EJB types defined in the specification:

Stateless session beans
Stateful session beans
Entity beans

For more information about these EJB types, see "Types of Beans Supported in WebLogic Enterprise" in the "WebLogic Enterprise JavaBeans Programming Environment" topic of Getting Started. For more information about object state management in EJB applications, see Scaling Tasks for EJB Providers.

RMI Object State Models

In RMI applications, a conversational state exists between the client application and the object instance. RMI objects remain in memory from the time they are created until the object is destroyed or the server process in which it runs is shut down. For more information about RMI applications, see Using RMI in a WebLogic Enterprise Environment.

Implementing Stateless and Stateful Objects

In general, application developers need to balance the costs of implementing stateless objects against the costs of implementing stateful objects.

About Stateless and Stateful Objects

The decision to use stateless or stateful objects depends on various factors. When the cost of initializing an object with its durable state is high (for example, because the object's data takes up a great deal of space, or because the durable state is located on a disk far from the servant that activates it), it may make sense to keep the object stateful, even if the object is idle during a conversation. When the cost of keeping an object active is high in terms of machine resource usage, it may make sense to make the object stateless.

By managing object state in a way that is efficient and appropriate for your application, you can maximize your application's ability to support large numbers of simultaneous client applications that use large numbers of objects. The way that you manage object state depends on the specific characteristics and requirements of your application:

When to Use Stateless Objects

Stateless objects generally provide good performance and optimal usage of server resources, because server resources are never used when objects are idle. Using stateless objects is a good approach to implementing server applications, and is particularly appropriate when:

By making an object stateless, you can generally assure that server application resources are not being reserved unnecessarily while waiting for input from the client application.

An application that employs a stateless object model has the following characteristics:

When to Use Stateful Objects

A stateful object, once activated, remains in memory until a specific event occurs, such as the shutdown of the process in which the object exists, or the completion of the transaction in which the object was activated.

Using stateful objects is recommended when:

Stateful objects have the following behavior:

 


Replicating Server Processes and Server Groups

This topic includes the following sections:

About Replicating Server Processes and Server Groups
Configuration Options
Replicating Server Processes
Replicating Server Groups

For more detailed information about replicating server processes and server groups, see the following topics:

About Replicating Server Processes and Server Groups

The WebLogic Enterprise environment allows CORBA objects and EJBs to be deployed across multiple servers to provide additional failover reliability and to split the client's workload through load balancing. WebLogic Enterprise load balancing is enabled by default. For more information about configuring load balancing, see Enabling Load Balancing. For more information about distributing the application workload using BEA Tuxedo features, see Distributing Applications.

The WebLogic Enterprise architecture provides the following server organization:

This architecture allows new servers, groups, or machines to be dynamically added or removed, to adapt the application to high- or low-demand periods, or to accommodate internal changes required to the application. The WebLogic Enterprise run time provides load balancing and failover by routing requests across available servers.

System administrators can scale a WebLogic Enterprise application by:

Configuration Options

You can configure server applications as:

You can add more parallel processing capability to client/server applications by replicating server processes or adding more threads. You can add more server groups to split processing across resource managers. For CORBA applications, you can implement factory-based routing, as described in Using Factory-based Routing (CORBA only).

Replicating Server Processes

System administrators can scale an EJB application by replicating the servers to support more concurrent active objects, or to process more concurrent requests, on the server node. To configure replicated server processes, see Configuring Replicated Server Processes and Groups.
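
For example, the MIN and MAX parameters in the SERVERS section of the UBBCONFIG file control how many copies of a server process are started. The following sketch uses a hypothetical server name and group name:

    *SERVERS
    # Hypothetical entry: boot two copies of TellerServer at startup and
    # allow the administrator to spawn up to five copies in total.
    TellerServer
            SRVGRP = BANK_GROUP1
            SRVID  = 10
            MIN    = 2
            MAX    = 5
            CLOPT  = "-A"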

Benefits

The benefits of using replicated server processes include:

Guidelines

To achieve the maximum benefit of using replicated server processes, make sure that the CORBA objects or entity beans instantiated by your server application have unique object IDs. This allows a client invocation on an object to cause the object to be instantiated on demand, within the bounds of the number of server processes that are available, and not queued up for an already active object.

You should also consider the trade-off between better application recovery using multiple processes and more efficient performance using threads (for some types of application patterns and processing environments).

Better failover occurs only when you add processes, not threads. For information about using single-threaded and multithreaded Java servers, see When to Use Multithreaded Java Servers.

Replicating Server Groups

Server groups are unique to WebLogic Enterprise and are key to its scalability features. A group contains one or more servers on a single node. System administrators can scale a WebLogic Enterprise application by replicating server groups and configuring load balancing within a domain.

Replicating a server group involves defining another server group with the same type of servers and resource managers to provide parallel access to a shared resource (such as a database). CORBA applications, for example, can use factory-based routing to split processing across the database partitions.

The UBBCONFIG file specifies how server groups are configured and where they run. By using multiple server groups, WebLogic Enterprise can:

To configure replicated server groups, see Configuring Replicated Server Processes and Groups.
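
As an illustration of replicated server groups, the following UBBCONFIG sketch defines two groups of the same type that access different partitions of a shared database; the group names, machine identifiers (LMIDs), and resource manager details are placeholders only:

    *GROUPS
    # Two server groups of the same type on different machines, each opening
    # its own connection to a partition of the shared database. The OPENINFO
    # string is specific to your resource manager and is shown as a placeholder.
    BANK_GROUP1
            LMID     = SITE1
            GRPNO    = 1
            OPENINFO = "<resource manager open string for partition 1>"
    BANK_GROUP2
            LMID     = SITE2
            GRPNO    = 2
            OPENINFO = "<resource manager open string for partition 2>"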

 


Using Multithreaded Java Servers (Java only)

This topic includes the following sections:

About Multithreaded Java Servers
When to Use Multithreaded Java Servers
Coding Recommendations
Configuring a Multithreaded Java Server

For instructions on how to configure Java servers for multithreading, see Configuring Multithreaded Java Servers.

Note: C++ servers are single-threaded only.

About Multithreaded Java Servers

System administrators can scale a WebLogic Enterprise application by enabling multithreading in Java servers, and by tuning configuration parameters (the maximum number of server worker threads that can be created) in the application's UBBCONFIG file.

WebLogic Enterprise Java supports the ability to configure multithreaded WebLogic Enterprise Java applications. A multithreaded WebLogic Enterprise Java server can service multiple object requests simultaneously, while a single-threaded WebLogic Enterprise Java server runs only one request at a time. Running a WebLogic Enterprise Java server in multithreaded mode or in single-threaded mode is transparent to the application programmer. Programs written to WebLogic Enterprise Java run without modification in both modes.

Server worker threads are started and managed by the WebLogic Enterprise Java software rather than an application program. Internally, WebLogic Enterprise Java manages a pool of available server worker threads. If a Java server is configured to be multithreaded, then when a client request is received, an available server worker thread from the thread pool is scheduled to execute the request. Each active object has an associated thread, and while the object is active, the thread is busy. When the request is complete, the worker thread is returned to the pool of available threads.

Note: In this release, you should not establish multiple threads programmatically in your server implementation code. Only worker threads that are created by the run-time WebLogic Enterprise Java server software can access the WebLogic Enterprise Java infrastructure, which means that your Java server application should not create a Java thread from a worker thread and then attempt to begin a new transaction in the thread. You can, however, start threads in your server application to perform other, non-WebLogic Enterprise operations.

When to Use Multithreaded Java Servers

Deploying multithreaded Java servers is appropriate for many, but not all, WebLogic Enterprise Java applications. The potential for a performance gain from a multithreaded Java server depends on whether:

If the application is running on a single-processor machine and the application is CPU-intensive only (for example, without any I/O), in most cases the multithreaded Java server will not increase performance. In fact, due to the overhead of switching between threads, using a multithreaded Java server in this configuration might result in a performance loss rather than a gain.

In general, however, WebLogic Enterprise Java applications almost always perform better when running on multithreaded Java servers. Multiple multithreaded servers should be configured to distribute the load across servers. If only a single server is configured, that server's queue could fill up quickly.

Coding Recommendations

The code used in a multithreaded WebLogic Enterprise server application is the same as the code used in a single-threaded application. However, if you plan to configure your Java server applications to be multithreaded, or you want to have the option to do so in the future, consider the following recommendations:

Configuring a Multithreaded Java Server

To configure a multithreaded Java server, you change settings in the application's UBBCONFIG file. For information about defining the UBBCONFIG parameters to implement a multithreaded Java server, see Configuring Multithreaded Java Servers.
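
As a rough sketch only, the Java server is defined in the SERVERS section of the UBBCONFIG file; the group, server ID, and archive name below are assumptions, and the worker-thread option is shown only as a placeholder because its exact name is given in Configuring Multithreaded Java Servers:

    *SERVERS
    # Hypothetical multithreaded Java server entry. The maximum number of
    # server worker threads is supplied through the JavaServer options
    # documented in Configuring Multithreaded Java Servers.
    JavaServer
            SRVGRP = JAVA_GROUP
            SRVID  = 20
            CLOPT  = "-A -- BankApp.jar <worker-thread option>"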

 


Using Factory-based Routing (CORBA only)

This topic includes the following sections:

About Factory-based Routing
Characteristics of Factory-based Routing
How Factory-based Routing Works
Configuring Factory-based Routing in the UBBCONFIG File

This topic introduces factory-based routing in WebLogic Enterprise CORBA applications. For more detailed information about using factory-based routing, see the following topics:

About Factory-based Routing

Factory-based routing is a feature that lets you send a client request to a specific server group. Using factory-based routing, you can distribute the processing load for a given application across multiple machines, because you can determine the group and machine in which a given object is instantiated.

Routing is performed when a factory creates an object reference. The factory specifies field information in its call to the WebLogic Enterprise TP Framework to create an object reference. The TP Framework executes the routing algorithm based on the routing criteria that you define in the ROUTING section of an application's UBBCONFIG file. The resulting object reference has, as its target, an appropriate server group for the handling of method invocations on the object reference. Any server that implements the interface in that server group is eligible to activate the servant for the object reference.

The activation of CORBA objects can be distributed by server group based on defined criteria, in cooperation with a system designer. Different implementations of CORBA interfaces can be supplied in different groups. This feature enables you to replicate the same CORBA interface across multiple server groups, based on defined, group-specific differences.

The system designer of the application must communicate the factory-based routing criteria to the system administrator. Unlike in the BEA Tuxedo system, where an FML field in a service invocation can be used for routing, you cannot independently discover this information, because there is no service request message data or associated buffer information available for routing. Routing is performed at the factory level, not on a method invocation on the target CORBA object.

The primary benefit of factory-based routing is that it provides a simple means to scale up an application, and invocations on a given interface in particular, across a growing deployment environment. Distributing the deployment of an application across additional machines is strictly an administrative function that does not require you to recode or rebuild the application.

Characteristics of Factory-based Routing

Factory-based routing has the following characteristics:

How Factory-based Routing Works

To implement factory-based routing, you change the way your factories create object references.

Thereafter, the object reference contains additional information that indicates where the target server exists. Factory-based routing is performed once per CORBA object, when the object reference is created.

Configuring Factory-based Routing in the UBBCONFIG File

Routing criteria specify the data values used to route requests to a particular server group. To configure factory-based routing, you define routing criteria in the ROUTING section of the UBBCONFIG file (for each interface for which requests are routed). For more detailed information about configuring factory-based routing, see the following topics:

To configure factory-based routing across multiple domains, you must also configure the factory_finder.ini file to identify factory objects that are used in the current (local) domain but that are resident in a different (remote) domain. For more information, see "Configuring Multiple Domains (WebLogic Enterprise System)" in the Administration Guide.
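
For illustration only, the following sketch shows the general shape of the INTERFACES and ROUTING entries involved; the interface name, routing criterion, field, ranges, and group names are hypothetical:

    *INTERFACES
    # Hypothetical CORBA interface whose factory uses the branch_id criterion.
    "IDL:example.com/Bank/Teller:1.0"
            FACTORYROUTING = branch_id

    *ROUTING
    # Route object references to a server group based on the branch_id value
    # that the factory supplies when it creates the reference.
    branch_id
            FIELD     = "branch_id"
            TYPE      = FACTORY
            FIELDTYPE = LONG
            RANGES    = "1-100:BANK_GROUP1,101-200:BANK_GROUP2,*:BANK_GROUP1"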

 


Multiplexing Incoming Client Connections

This topic includes the following sections:

IIOP Listener and Handler
Increasing the Number of ISH Processes

System administrators can scale a WebLogic Enterprise application by increasing, in the UBBCONFIG file, the number of incoming client connections that an application site supports. WebLogic Enterprise provides a multicontexted, multistated gateway of listener/handlers to handle the multiplexing of all the requests issued by the client.

IIOP Listener and Handler

The IIOP Listener (ISL) enables access to WebLogic Enterprise objects by remote WebLogic Enterprise clients that use IIOP. The ISL is a process that listens for remote clients requesting IIOP connections. The IIOP Handler (ISH) is a multiplexor process that acts as a surrogate on behalf of the remote client. Both the ISL and ISH run on the application site. An application site can have one or more ISL processes and multiple associated ISH processes. Each ISH is associated with a single ISL.

The client connects to the ISL process using a known network address. The ISL balances the load among ISH processes by selecting the best available ISH and passing the connection directly to it. The ISL/ISH manages the context on behalf of the application client. For more information about ISL and ISH, see the description of ISL in the Commands, Systems Processes, and MIB Reference.

Increasing the Number of ISH Processes

System administrators can scale a WebLogic Enterprise application by increasing the number of ISH processes on an application site, thereby enabling the ISL to balance the load among more ISH processes. By default, an ISH can handle up to 10 client connections. To increase this number, pass the optional CLOPT -x mpx-factor parameter to the ISL command, specifying in mpx-factor the number of client connections each ISH can handle (up to 4096), and therefore the degree of multiplexing for the ISH. Increasing the number of ISH processes may affect application performance as the application site services more concurrent processes.
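
For example (the group, server ID, host, and port below are placeholders), the ISL entry in the SERVERS section of the UBBCONFIG file might look like the following, where -n gives the listener's network address and -x raises the per-ISH connection limit:

    *SERVERS
    # Hypothetical ISL entry: listen for IIOP connections on //myhost:2468 and
    # let each ISH spawned by this ISL multiplex up to 50 client connections
    # instead of the default 10.
    ISL
            SRVGRP = SYS_GROUP
            SRVID  = 5
            CLOPT  = "-A -- -n //myhost:2468 -x 50"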

System administrators can tune other ISH options as well to scale WebLogic Enterprise applications. For more information, see the description of ISL in the Commands, Systems Processes, and MIB Reference.