About Scaling WLE Applications

This topic includes the following sections:

Application Scalability Requirements

Many applications perform adequately in an environment where between 1 and 10 server processes and 10 to 100 client applications are running. However, in an enterprise environment, applications may need to support hundreds of execution contexts (where the context can be a thread or a process), tens of thousands of client applications, and millions of objects at satisfactory performance levels.

Subjecting an application to exponentially increasing demands quickly reveals any resource shortcomings and performance bottlenecks in the application. Scalability is therefore an essential characteristic of WLE applications.

You can build highly scalable WLE applications by:

WLE Scalability Features

WLE supports large-scale application deployments by:

Table 1-1 shows how WLE scalability features support each type of WLE application.

Table 1-1 Scalability Support for WLE Applications
WLE Feature | CORBA C++ | CORBA Java | EJB
---|---|---|---
Note: CORBA and EJB applications require slightly different configuration parameters in the UBBCONFIG file. For more information, see "Creating a Configuration File" in the Administration Guide.

This topic includes the following sections:
Object State Models

This topic describes the following object state models:

CORBA Object State Models

WLE CORBA supports three object state management models: method-bound objects, process-bound objects, and transaction-bound objects. For more information about these models, see "Server Application Concepts."

Method-bound Objects

Method-bound objects are loaded into the machine's memory only for the duration of the client invocation. When the invocation is complete, the object is deactivated and any state data for that object is flushed from memory. In this document, a method-bound object is considered to be a stateless object.

You can use method-bound objects to create a stateless server model in your application. By using a stateless server model, you move requests that are already directed to active objects to any available server, which allows concurrent execution for thousands and even millions of objects. From the client application view, all the objects are available to service requests. However, because the server application maps objects into memory only for the duration of client invocations, few of the objects managed by the server application are in memory at any given moment.

Process-bound Objects

Process-bound objects remain in memory from the time they are first invoked until the server process in which they are running is shut down. A process-bound object can be activated upon a client invocation or explicitly before any client invocation (a preactivated object). Applications can control the deactivation of process-bound objects. In this document, a process-bound object is considered to be a stateful object.

When appropriate, process-bound objects with a large amount of state data can remain in memory to service multiple client invocations, thereby avoiding reading and writing the object's state data on each client invocation.

Transaction-bound Objects

Transaction-bound objects can also be considered stateful because, within the scope of a transaction, they can remain in memory between invocations. If the object is activated within the scope of a transaction, the object remains active until the transaction is either committed or rolled back. If the object is activated outside the scope of a transaction, its behavior is the same as that of a method-bound object (it is loaded for the duration of the client invocation).

EJB Object State Models

WLE implements Sun Microsystems, Inc.'s evolving Enterprise JavaBeans 1.1 Specification (Public Release 2, dated October 18, 1999). WLE fully supports the three EJB types defined in the specification: stateless session beans, stateful session beans, and entity beans.

For more information about these EJB types, see "Types of Beans Supported in WLE" in The WLE Enterprise JavaBeans (EJB) Programming Environment. For more information about object state management in EJB applications, see Scaling Tasks for EJB Providers.
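For CORBA C++ servers, these activation policies are declared per implementation in the server's ICF file. The following is an illustrative sketch only (the module, implementation, and interface names are hypothetical; check your WLE CORBA programming documentation for the exact ICF syntax). Changing the activation_policy value selects the state model without changing application code:

```
module POA_Bankapp
{
    // Hypothetical implementation: stateless, activated per invocation.
    implementation Teller_impl
    {
        activation_policy ( method );       // or: process, transaction
        transaction_policy ( optional );
        implements ( Bankapp::Teller );
    };
};
```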
RMI Object State Models

In RMI applications, a conversational state exists between the client application and the object instance. RMI objects remain in memory from the time they are first created for as long as the object exists, or until the server process in which they are running is shut down. For more information about RMI applications, see Using RMI in a WebLogic Enterprise Environment.

Implementing Stateless and Stateful Objects

About Stateless and Stateful Objects

In general, application developers need to balance the costs of implementing stateless objects against the costs of implementing stateful objects.

The decision to use stateless or stateful objects depends on various factors. In the case where the cost to initialize an object with its durable state is expensive (for example, because the object's data takes up a great deal of space, or because the durable state is located on a disk remote from the servant that activates it), it may make sense to keep the object stateful, even if the object is idle during a conversation. In the case where the cost to keep an object active is expensive in terms of machine resource usage, it may make sense to make such an object stateless.

By managing object state in a way that is efficient and appropriate for your application, you can maximize your application's ability to support large numbers of simultaneous client applications that use large numbers of objects. The way that you manage object state depends on the specific characteristics and requirements of your application.

When to Use Stateless Objects

Stateless objects generally provide good performance and optimal use of server resources, because server resources are never used when objects are idle. Using stateless objects is a good approach to implementing server applications, and is particularly appropriate when:

By making an object stateless, you can generally ensure that server application resources are not reserved unnecessarily while waiting for input from the client application.

An application that employs a stateless object model has the following characteristics:
When to Use Stateful Objects

A stateful object, once activated, remains in memory until a specific event occurs, such as the shutdown of the process in which the object exists, or the completion of the transaction in which the object is activated.

Using stateful objects is recommended when:

Stateful objects have the following behavior:

For example, if an object holds a lock on a database and is caching large amounts of data in memory, that database lock and the memory used by that stateful object are unavailable to other objects, potentially for the entire duration of a transaction.

Note: You should carefully consider how objects will potentially be involved in a transaction. An object can be bound to a particular process temporarily (transaction-bound) or permanently (process-bound). An object that is involved in a transaction cannot be invoked by another client application or object (WLE will likely return an error indicating that the object is busy). Stateful objects that are intended to be used by a large number of client applications can create bottlenecks if they are involved in transactions frequently or for long durations.
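To make the trade-off concrete, here is a minimal Java sketch (not a WLE API; the class names and the in-memory "durable store" are invented for illustration) contrasting the two styles:

```java
import java.util.HashMap;
import java.util.Map;

public class StateModels {

    // Stands in for durable state (for example, a database row).
    static final Map<String, Integer> durableStore = new HashMap<>();

    // Stateless style: state is read on every invocation and flushed
    // afterward, so any available server instance can service the request.
    static class StatelessTeller {
        int balance(String account) {
            return durableStore.getOrDefault(account, 0); // load per call
        }
    }

    // Stateful style: state is loaded once at activation and cached in
    // memory across invocations, avoiding repeated reads of durable state.
    static class StatefulTeller {
        private final String account;
        private int cachedBalance;            // held between invocations

        StatefulTeller(String account) {      // activation: read state once
            this.account = account;
            this.cachedBalance = durableStore.getOrDefault(account, 0);
        }

        void deposit(int amount) { cachedBalance += amount; }

        void deactivate() {                   // write state back once
            durableStore.put(account, cachedBalance);
        }
    }

    public static void main(String[] args) {
        durableStore.put("A-100", 50);

        StatelessTeller s = new StatelessTeller();
        System.out.println(s.balance("A-100"));   // 50

        StatefulTeller t = new StatefulTeller("A-100");
        t.deposit(25);
        t.deposit(25);
        t.deactivate();
        System.out.println(s.balance("A-100"));   // 100
    }
}
```

The stateless teller pays a store read on every call but holds nothing between calls; the stateful teller reads once at activation and writes back at deactivation, at the cost of occupying memory (and possibly locks) in between.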
Replicating Server Processes and Server Groups

This topic includes the following sections:

About Replicating Server Processes and Server Groups

The WLE environment allows CORBA objects and EJBs to be deployed across multiple servers to provide additional failover reliability and to split the client workload through load balancing. WLE load balancing is enabled by default. For more information about load balancing, see Enabling Load Balancing. For more information about distributing the application workload using TUXEDO features, see Distributing Applications.

The WLE architecture provides the following server organization:

This architecture allows new servers, groups, or machines to be added or removed dynamically, to adapt the application to high- or low-demand periods, or to accommodate internal changes required by the application. The WLE run time provides load balancing and failover by routing requests across available servers.

System administrators can scale a WLE application by:

Configuration Options

You can configure server applications as:

You can add more parallel processing capability to client/server applications by replicating server processes or adding more threads. You can add more server groups to split processing across resource managers. For CORBA applications, you can implement factory-based routing, as described in Using Factory-based Routing (CORBA only).

Replicating Server Processes

System administrators can scale an EJB application by replicating the servers to support more concurrent active objects, or to process more concurrent requests, on the server node. To configure replicated server processes, see Configuring Replicated Server Processes and Groups.

Benefits

The benefits of using replicated server processes include:

Guidelines

To achieve the maximum benefit of using replicated server processes, make sure that the CORBA objects or entity beans instantiated by your server application have unique object IDs. This allows a client invocation on an object to cause the object to be instantiated on demand, within the bounds of the number of available server processes, rather than queued up for an already active object.

You should also consider the trade-off between better application recovery, achieved by using multiple processes, and more efficient performance, achieved by using threads (for some types of application patterns and processing environments).

Better failover occurs only when you add processes, not threads. For information about using single-threaded and multithreaded Java servers, see When to Use Multithreaded Java Servers.

Replicating Server Groups

Server groups are unique to WLE and are key to WLE's scalability features. A group contains one or more servers on a single node. System administrators can scale a WLE application by replicating server groups and configuring load balancing within a domain.

Replicating a server group involves defining another server group with the same type of servers and resource managers to provide parallel access to a shared resource (such as a database). CORBA applications, for example, can use factory-based routing to split processing across the database partitions.

The UBBCONFIG file specifies how server groups are configured and where they run. By using multiple server groups, WLE can:

To configure replicated server groups, see Configuring Replicated Server Processes and Groups.
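As an illustration, replicated groups and servers are declared in the GROUPS and SERVERS sections of the UBBCONFIG file. The group, machine, and server names below are hypothetical; this is a sketch of the shape of such a configuration, not a complete file:

```
*GROUPS
# Two groups on two machines, each with its own resource manager,
# providing parallel access to partitions of a shared database.
BANK_GROUP1   LMID=SITE1   GRPNO=1
BANK_GROUP2   LMID=SITE2   GRPNO=2

*SERVERS
# MIN/MAX replicate the server process within each group; additional
# copies (up to MAX) can be started as load requires.
tellp   SRVGRP=BANK_GROUP1   SRVID=1   MIN=2   MAX=5
tellp   SRVGRP=BANK_GROUP2   SRVID=1   MIN=2   MAX=5
```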
Using Multithreaded Java Servers (Java only)

This topic includes the following sections:

System administrators can scale a WLE application by enabling multithreading in Java servers, and by tuning configuration parameters (the maximum number of server worker threads that can be created) in the application's UBBCONFIG file.

For instructions on how to configure Java servers for multithreading, see Configuring Multithreaded Java Servers.

Note: C++ servers are single-threaded only.

About Multithreaded Java Servers

WLE Java supports the ability to configure multithreaded WLE Java applications. A multithreaded WLE Java server can service multiple object requests simultaneously, while a single-threaded WLE Java server runs only one request at a time. Running a WLE Java server in multithreaded mode or in single-threaded mode is transparent to the application programmer. Programs written to WLE Java run without modification in both modes.

Server worker threads are started and managed by the WLE Java software rather than an application program. Internally, WLE Java manages a pool of available server worker threads. If a Java server is configured to be multithreaded, then when a client request is received, an available server worker thread from the thread pool is scheduled to execute the request. Each active object has an associated thread, and while the object is active, the thread is busy. When the request is complete, the worker thread is returned to the pool of available threads.

In this release, you should not establish multiple threads programmatically in your server implementation code. Only worker threads that are created by the run-time WLE Java server software can access the WLE Java infrastructure, which means that your Java server application should not create a Java thread from a worker thread and then attempt to begin a new transaction in the thread. You can, however, start threads in your server application to perform other, non-WLE operations.

When to Use Multithreaded Java Servers

Deploying multithreaded Java servers is appropriate for many, but not all, WLE Java applications. The potential for a performance gain from a multithreaded Java server depends on whether:

If the application is running on a single-processor machine and the application is CPU-intensive only (for example, without any I/O), in most cases the multithreaded Java server will not increase performance. In fact, due to the overhead of switching between threads, using a multithreaded Java server in this configuration might result in a performance loss rather than a gain.

In general, however, WLE Java applications almost always perform better when running on multithreaded Java servers. Multiple multithreaded servers should be configured to distribute the load across servers. If only a single server is configured, that server's queue could fill up quickly.

Coding Recommendations

The code in a multithreaded WLE server application looks the same as in a single-threaded application. However, if you plan to configure your Java server applications to be multithreaded, or you want to have the option to do so in the future, consider the following recommendations:
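The worker-pool dispatch model described above can be sketched in plain Java. This is an illustration only, not the WLE implementation: a fixed pool of worker threads services concurrent "client requests," and each worker returns to the pool when its request completes:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicInteger;

public class WorkerPoolSketch {

    public static void main(String[] args) throws Exception {
        int workerThreads = 4;  // analogous to the configured maximum
        ExecutorService pool = Executors.newFixedThreadPool(workerThreads);

        // Shared state touched by multiple workers must be thread-safe,
        // which is the heart of the coding recommendations above.
        AtomicInteger completed = new AtomicInteger();

        // Submit "client requests"; each runs on whichever worker thread
        // is available, then that thread returns to the pool.
        List<Future<?>> requests = new ArrayList<>();
        for (int i = 0; i < 20; i++) {
            requests.add(pool.submit(() -> {
                // ... service the object invocation here ...
                completed.incrementAndGet();
            }));
        }
        for (Future<?> f : requests) {
            f.get();            // wait for every request to finish
        }
        pool.shutdown();

        System.out.println(completed.get());   // 20
    }
}
```

If `completed` were an unsynchronized int shared across workers, updates could be lost under load; using a thread-safe type (or avoiding shared mutable state entirely) keeps the servant code correct in both single-threaded and multithreaded modes.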
Configuring a Multithreaded Java Server

To configure a multithreaded Java server, you change settings in the application's UBBCONFIG file. For information about defining the UBBCONFIG parameters that implement a multithreaded Java server, see Configuring Multithreaded Java Servers.

Using Factory-based Routing (CORBA only)

This topic includes the following sections:
About Factory-based Routing

Factory-based routing is a feature that lets you send a client request to a specific server group. Using factory-based routing, you can distribute the processing load for a given application across multiple machines, because you can determine the group, and thus the machine, in which a given object is instantiated.

Routing is performed when a factory creates an object reference. The factory specifies field information in its call to the WLE TP Framework to create an object reference. The TP Framework executes the routing algorithm based on the routing criteria that you define in the ROUTING section of an application's UBBCONFIG file. The resulting object reference has, as its target, an appropriate server group for the handling of method invocations on the object reference. Any server in that server group that implements the interface is eligible to activate the servant for the object reference.

The activation of CORBA objects can be distributed by server group based on defined criteria, in cooperation with a system designer. Different implementations of CORBA interfaces can be supplied in different groups. This feature enables you to replicate the same CORBA interface across multiple server groups, based on defined, group-specific differences.

The system designer of the application must communicate the factory-based routing criteria to the system administrator. Unlike the BEA TUXEDO system, in which an FML field used for a service invocation can be used for routing, you cannot independently discover this information, because there is no service request message data or associated buffer information available for routing. Routing is performed at the factory level, not on a method invocation on the target CORBA object.

The primary benefit of factory-based routing is that it provides a simple means to scale up an application, and invocations on a given interface in particular, across a growing deployment environment. Distributing the deployment of an application across additional machines is strictly an administrative function that does not require you to recode or rebuild the application.

Characteristics of Factory-based Routing

Factory-based routing has the following characteristics:

How Factory-based Routing Works

To implement factory-based routing, you change the way your factories create object references.

Factory-based routing is performed once per CORBA object, when the object reference is created. Thereafter, the object reference contains additional information that indicates where the target server exists.

If multiple clients have an object reference that contains a given interface name and OID, the reference is always routed to the same object instance.
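The routing decision itself is a simple range lookup. The Java sketch below is illustrative only (the field name, ranges, and group names are hypothetical; in a real application the TP Framework performs this lookup from the UBBCONFIG ROUTING section, not application code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FactoryRoutingSketch {

    // RANGES-style table: upper bound of each branch_id range -> group.
    static final Map<Integer, String> RANGES = new LinkedHashMap<>();
    static {
        RANGES.put(5, "BANK_GROUP1");    // branch_id 1-5
        RANGES.put(10, "BANK_GROUP2");   // branch_id 6-10
    }

    // Map a routing-field value to the server group that will host the
    // object reference created by the factory.
    static String routeByBranchId(int branchId) {
        for (Map.Entry<Integer, String> e : RANGES.entrySet()) {
            if (branchId <= e.getKey()) {
                return e.getValue();
            }
        }
        return "DEFAULT_GROUP";          // like a catch-all "*" range
    }

    public static void main(String[] args) {
        // Every reference created with branch_id 3 targets the same group,
        // so all clients holding it reach the same object instance.
        System.out.println(routeByBranchId(3));   // BANK_GROUP1
        System.out.println(routeByBranchId(7));   // BANK_GROUP2
    }
}
```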
Configuring Factory-based Routing in the UBBCONFIG File

Routing criteria specify the data values used to route requests to a particular server group. To configure factory-based routing, you define routing criteria in the ROUTING section of the UBBCONFIG file, for each interface for which requests are routed. For more detailed information about configuring factory-based routing, see the following topics:

To configure factory-based routing across multiple domains, you must also configure the factory_finder.ini file to identify factory objects that are used in the current (local) domain but that are resident in a different (remote) domain. For more information, see "Configuring Multiple Domains (WLE System)" in the Administration Guide.
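As an illustration, a ROUTING entry might look like the following sketch (the criterion name, field, ranges, and group names are hypothetical; consult the UBBCONFIG reference for the exact syntax):

```
*ROUTING
# Hypothetical criteria: references created with branch_id 1-5 are
# routed to BANK_GROUP1, 6-10 to BANK_GROUP2.
"branch_id"
    FIELD     = "branch_id"
    TYPE      = FACTORY
    FIELDTYPE = LONG
    RANGES    = "1-5:BANK_GROUP1,6-10:BANK_GROUP2"
```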
Multiplexing Incoming Client Connections

This topic includes the following sections:

System administrators can scale a WLE application by increasing, in the UBBCONFIG file, the number of incoming client connections that an application site supports. WLE provides a multicontexted, multistated gateway of listener/handlers to multiplex all the requests issued by the client.

IIOP Server Listener and Handler

The IIOP Server Listener (ISL) enables access to WLE objects by remote WLE clients that use IIOP. The ISL is a process that listens for remote clients requesting IIOP connections. The IIOP Server Handler (ISH) is a multiplexor process that acts as a surrogate on behalf of the remote client. Both the ISL and ISH run on the application site. An application site can have one or more ISL processes and multiple associated ISH processes. Each ISH is associated with a single ISL.

The client connects to the ISL process using a known network address. The ISL balances the load among ISH processes by selecting the best available ISH and passing the connection directly to it. The ISL/ISH manages the context on behalf of the application client. For more information about the ISL and ISH, see the description of ISL in the WebLogic Enterprise Reference.

Increasing the Number of ISH Processes

System administrators can scale a WLE application by increasing the number of ISH processes on an application site, thereby enabling the ISL to balance the load among more ISH processes. By default, an ISH can handle up to 10 client connections. To increase this number, pass the optional CLOPT -x mpx-factor parameter to the ISL command, specifying in mpx-factor the number of client connections each ISH can handle (up to 4096), and therefore the degree of multiplexing, for that ISL. Increasing the number of connections per ISH may affect application performance as the application site services more concurrent requests.

System administrators can tune other ISL options as well to scale WLE applications. For more information, see the description of ISL in the WebLogic Enterprise Reference.
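As an illustration, an ISL entry in the SERVERS section of the UBBCONFIG file might raise the multiplexing degree as follows (the group name, server ID, and network address are hypothetical):

```
*SERVERS
# Hypothetical ISL entry: listen at //apphost:2468; -x 50 lets each
# ISH multiplex up to 50 client connections instead of the default 10.
ISL  SRVGRP=SYS_GRP  SRVID=5  CLOPT="-A -- -n //apphost:2468 -x 50"
```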
Copyright © 1999 BEA Systems, Inc. All rights reserved.