WebLogic Server Performance and Tuning

Tuning WebLogic Server EJBs
The following sections describe how to tune WebLogic Server EJBs to match your application needs:
Setting Performance-Related weblogic-ejb-jar.xml Parameters
The weblogic-ejb-jar.xml deployment file contains the WebLogic Server-specific EJB DTD elements that define the concurrency, caching, clustering, and general behavior of EJBs. It also contains descriptors that map available WebLogic Server resources to EJBs. WebLogic Server resources include security role names and data sources such as JDBC pools, JMS connection factories, and other deployed EJBs.
For information on how to modify the weblogic-ejb-jar.xml deployment file, see "Specifying and Editing the EJB Deployment Descriptors" in Programming WebLogic Enterprise JavaBeans.
Table 4-1 lists the weblogic-ejb-jar.xml parameters that affect performance, including max-beans-in-free-pool, initial-beans-in-free-pool, max-beans-in-cache, concurrency-strategy, and isolation-level.
The following sections describe these elements.
WebLogic Server maintains a free pool of EJBs for every stateless session bean class. The max-beans-in-free-pool element of the weblogic-ejb-jar.xml file defines the size of this pool. By default, max-beans-in-free-pool has no limit; the maximum number of beans in the free pool is limited only by the available memory.
This section discusses the following topics:
Allocating Pool Size for Session and Message Beans
When a session EJB is created, the bean instance is created and given an identity. When the client removes a bean, the bean instance is placed in the free pool. When a subsequent bean is created, the container can avoid object allocation by reusing an instance from the free pool. The max-beans-in-free-pool element can therefore improve performance if EJBs are frequently created and removed.
The EJB container creates new instances of message beans as needed for concurrent message processing. The max-beans-in-free-pool element puts an absolute limit on how many of these instances are created, although the container may override this setting according to the runtime resources that are available.
For the best performance for stateless session and message beans, use the default setting for the max-beans-in-free-pool element. The default allows you to run beans in parallel, using as many threads as possible. The only reason to change the setting is to limit the number of beans running in parallel.
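For illustration, the following weblogic-ejb-jar.xml fragment is a minimal sketch of how such a limit might be declared for a stateless session bean. The bean name TraderBean and the value 50 are hypothetical, and the exact descriptor nesting can vary between WebLogic Server releases, so check the DTD for your release.

<weblogic-enterprise-bean>
  <ejb-name>TraderBean</ejb-name>
  <stateless-session-descriptor>
    <pool>
      <!-- Hypothetical cap; omit this element to keep the default (unlimited) pool. -->
      <max-beans-in-free-pool>50</max-beans-in-free-pool>
    </pool>
  </stateless-session-descriptor>
  <jndi-name>TraderBean</jndi-name>
</weblogic-enterprise-bean>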
Allocating Pool Size for Entity Beans
There is a pool of anonymous entity beans (i.e., beans without a primary key assigned to them) that is used to invoke finders and home methods, and to create entity beans. The max-beans-in-free-pool element also controls the size of this pool.
If you run many finder or home methods, or create many beans, you may want to tune the max-beans-in-free-pool element so that enough bean instances are available for use in the pool.
Do not change the value of the max-beans-in-free-pool parameter unless you frequently create session beans, do a quick operation, and then throw them away. If you do this, enlarge your free pool by 25 to 50 percent and see if performance improves. If object creation represents a small fraction of your workload, increasing this parameter will not significantly improve performance. For applications where EJBs are database intensive, do not change the value of this parameter.
Caution: Tuning this parameter too high uses extra memory. Tuning it too low causes unnecessary object creation. If you are in doubt about changing this parameter, leave it unchanged.
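As a sketch only, the same element appears under an entity bean's pool descriptor; the value 100 and the element nesting are assumptions to be checked against the DTD for your WebLogic Server release.

<entity-descriptor>
  <pool>
    <!-- Hypothetical size for the pool of anonymous entity bean instances. -->
    <max-beans-in-free-pool>100</max-beans-in-free-pool>
  </pool>
</entity-descriptor>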
Tuning Initial Beans in Free Pool
Use the initial-beans-in-free-pool element of the weblogic-ejb-jar.xml file to specify the number of stateless session bean instances in the free pool at startup.
If you specify a value for initial-beans-in-free-pool, WebLogic Server populates the free pool with the specified number of bean instances at startup. Populating the free pool in this way improves initial response time for the EJB, because initial requests for the bean can be satisfied without generating a new instance.
initial-beans-in-free-pool defaults to 0 if the element is not defined.
The initial-beans-in-free-pool element is described in Programming WebLogic Enterprise JavaBeans.
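A minimal sketch, assuming the same hypothetical stateless session bean as above, might combine the two pool elements as follows; the values shown are placeholders, not recommendations.

<stateless-session-descriptor>
  <pool>
    <!-- Pre-populate the free pool at startup to improve initial response time. -->
    <initial-beans-in-free-pool>10</initial-beans-in-free-pool>
    <max-beans-in-free-pool>50</max-beans-in-free-pool>
  </pool>
</stateless-session-descriptor>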
WebLogic Server enables you to configure the number of active beans that are present in the EJB cache (the in-memory space where beans exist).
The max-beans-in-cache element of the weblogic-ejb-jar.xml file specifies the maximum number of objects of this class that are allowed in memory. When max-beans-in-cache is reached, WebLogic Server passivates some EJBs that have not been recently used by a client. The max-beans-in-cache element also affects when EJBs are removed from the WebLogic Server cache.
The max-beans-in-cache element sets the cache size for stateful session beans and entity beans in the same way.
For more information, see "EJB Concurrency Strategy" in Programming WebLogic Enterprise JavaBeans.
The max-beans-in-cache element is described in Programming WebLogic Enterprise JavaBeans.
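The following fragments sketch where max-beans-in-cache is set for a stateful session bean and for an entity bean. The values are hypothetical, and the surrounding descriptor names (stateful-session-cache, entity-cache) should be verified against the weblogic-ejb-jar.xml DTD for your release.

<stateful-session-descriptor>
  <stateful-session-cache>
    <max-beans-in-cache>500</max-beans-in-cache>
  </stateful-session-cache>
</stateful-session-descriptor>

<entity-descriptor>
  <entity-cache>
    <max-beans-in-cache>500</max-beans-in-cache>
  </entity-cache>
</entity-descriptor>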
Activation and Passivation of Stateful Session EJBs
Set the appropriate cache size with the max-beans-in-cache element to avoid excessive passivation and activation. Activation is the transfer of an EJB instance from secondary storage to memory. Passivation is the transfer of an EJB instance from memory to secondary storage. Tuning max-beans-in-cache too high consumes memory unnecessarily.
The EJB container performs passivation when it invokes the ejbPassivate() method. When the EJB session object is needed again, it is recalled with the ejbActivate() method. When the ejbPassivate() call is made, the EJB object is serialized using the Java serialization API or a similar mechanism and stored in secondary storage (disk). The ejbActivate() method reverses the process, restoring the instance to memory.
The container automatically manages this working set of session objects in the EJB cache without the client's or server's direct intervention. Specific callback methods in each EJB describe how to passivate (move from the cache to secondary storage) or activate (restore from secondary storage to the cache) these objects. Excessive activation and passivation nullifies the performance benefits of caching the working set of session objects in the EJB cache, especially when the application has to handle a large number of session objects.
Relationship caching improves the performance of entity beans by loading related beans into the cache with a single join query, rather than issuing a separate query for each related bean.
For more information on relationship caching, see Relationship Caching with Entity Beans.
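As an assumption-laden sketch, relationship caching is typically declared in the CMP runtime descriptor (weblogic-cmp-rdbms-jar.xml in this era of WebLogic Server) rather than in weblogic-ejb-jar.xml. The bean, field, and group names below are hypothetical, and the element names should be confirmed in Relationship Caching with Entity Beans before use.

<weblogic-rdbms-bean>
  <ejb-name>OrderBean</ejb-name>
  <relationship-caching>
    <caching-name>cacheCustomerWithOrder</caching-name>
    <caching-element>
      <!-- Load the related customer bean with the same join query. -->
      <cmr-field>customer</cmr-field>
      <group-name>basicCustomerInfo</group-name>
    </caching-element>
  </relationship-caching>
</weblogic-rdbms-bean>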
WebLogic Server supports database locking and exclusive locking mechanisms. The default and recommended mechanism for EJB 1.1 and EJB 2.0 is database locking.
Database locking improves concurrent access to entity EJBs. The WebLogic Server container improves concurrency by deferring locking services to the underlying database. Unlike exclusive locking, deferred database locking lets the underlying data store lock EJB data at a finer granularity in most cases, and also provides deadlock detection.
For details about database locking, see Database Concurrency Strategy in Programming WebLogic Enterprise JavaBeans.
You specify the locking mechanism used for an EJB by setting the concurrency-strategy deployment parameter in the weblogic-ejb-jar.xml file.
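For example, the concurrency strategy for an entity bean is sketched below. Database is one of the documented values (Exclusive and Optimistic are mentioned elsewhere in this chapter), but the cache size and other bean-specific details here are illustrative only.

<entity-descriptor>
  <entity-cache>
    <max-beans-in-cache>500</max-beans-in-cache>
    <!-- Defer locking to the underlying database. -->
    <concurrency-strategy>Database</concurrency-strategy>
  </entity-cache>
</entity-descriptor>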
Setting Transaction Isolation Level
Data accessibility is controlled through the transaction isolation level mechanism. The transaction isolation level determines the degree to which multiple interleaved transactions are prevented from interfering with each other in a multi-user database system. Transaction isolation is achieved through locking protocols that guide the reading and writing of transaction data; at the highest isolation level, concurrent transactions appear to execute one after another, a property known as serializability. Lower isolation levels give you better database concurrency at the cost of less transaction isolation.
For more information, see the description of the isolation-level element of the weblogic-ejb-jar.xml file in Programming WebLogic Enterprise JavaBeans.
Refer to your database documentation for more information on the implications and support for different isolation levels.
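A minimal sketch of the isolation-level setting follows. It assumes the transaction-isolation and method elements of weblogic-ejb-jar.xml and a hypothetical AccountBean, and the exact value names (for example, TransactionReadCommitted) should be checked against the DTD for your release and the levels your database supports.

<transaction-isolation>
  <isolation-level>TransactionReadCommitted</isolation-level>
  <method>
    <ejb-name>AccountBean</ejb-name>
    <method-name>*</method-name>
  </method>
</transaction-isolation>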
Tuning In Response to Monitoring Statistics
The WebLogic Server Administration Console reports a wide variety of EJB runtime monitoring statistics, many of which are useful for tuning your EJBs. This section discusses how some of these statistics can help you tune the performance of EJBs.
To display the statistics in the Administration Console, see the following Console Help sections:
Cache Miss Ratio
A high cache miss ratio can indicate an improperly sized cache. If your application uses a certain subset of beans (that is, primary keys) more frequently than others, size your cache large enough that the commonly used beans can remain in the cache while less commonly used beans are cycled in and out on demand. If this is the nature of your application, you may be able to decrease your cache miss ratio significantly by increasing the maximum size of your cache.
If your application does not use a subset of beans more frequently than others, increasing your maximum cache size may not affect your cache miss ratio. We recommend testing your application with different maximum cache sizes to determine which gives the lowest cache miss ratio. Keep in mind that your server has a finite amount of memory, so there is always a trade-off to increasing your cache size.
Lock Waiter Ratio
A high lock waiter ratio can indicate a suboptimal concurrency strategy for the bean. If acceptable for your application, a concurrency strategy of Database or Optimistic allows more parallelism than an Exclusive strategy and removes the need for locking at the EJB container level.
Since locks are generally held for the duration of a transaction, reducing the amount of time your transactions take will free up beans more quickly and may help reduce your lock waiter ratio.
Lock Timeout Ratio
The lock timeout ratio is closely related to the lock waiter ratio. If you are concerned about the lock timeout ratio for your bean, first look at the lock waiter ratio and our recommendations for reducing it (including possibly changing your concurrency strategy). If you can reduce or eliminate the number of times a thread has to wait for a lock on a bean, you will also reduce or eliminate the timeouts that occur while waiting.
A high lock timeout ratio may also be indicative of an improper transaction timeout value. The maximum amount of time a thread will wait for a lock is equal to the current transaction timeout value.
If the transaction timeout value is set too low, threads may not wait long enough to obtain access to a bean and may time out prematurely. If this is the case, increasing the trans-timeout-seconds value for the bean may help reduce the lock timeout ratio.
Take care when increasing trans-timeout-seconds, however, because doing so can cause threads to wait longer for a bean, and threads are a valuable server resource. Also, doing so may increase the request time, as a request may wait longer before timing out.
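As a sketch, trans-timeout-seconds is set per bean in a transaction-descriptor element of weblogic-ejb-jar.xml; the 60-second value and the bean name below are placeholders, not recommendations.

<weblogic-enterprise-bean>
  <ejb-name>AccountBean</ejb-name>
  <transaction-descriptor>
    <!-- Hypothetical value; this also bounds how long a thread waits for a lock. -->
    <trans-timeout-seconds>60</trans-timeout-seconds>
  </transaction-descriptor>
  <jndi-name>AccountBean</jndi-name>
</weblogic-enterprise-bean>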
Pool Miss Ratio
If your pool miss ratio is high, determine what is happening to your bean instances. A bean instance that is not available in the pool is either currently in use, was removed because the pool was already full when it was returned, or was destroyed because it threw a non-application exception.
Follow these steps to diagnose the problem:
One way to check this is via the Beans in Use Current Count and Idle Beans Count. If demand for your EJB spikes during a certain period of time, you may see a lot of pool misses as your pool is emptied and unable to fill additional requests.
As the demand for the EJB drops and beans are returned to the pool, many of the beans created to satisfy requests may be unable to fit in the pool and are therefore removed. If this is the case, you may be able to reduce the number of pool misses by increasing the maximum size of your free pool. This may allow beans that were created to satisfy demand during peak periods to remain in the pool so they can be used again when demand once again increases.
To reduce the number of destroyed beans, BEA recommends against throwing non-application exceptions from your bean code except in cases where you want the bean instance to be destroyed. A non-application exception is an exception that is either a java.rmi.RemoteException (including exceptions that inherit from RemoteException) or is not defined in the throws clause of a method of an EJB's home or component interface.
In general, you should investigate which exceptions are causing your beans to be destroyed as they may be hurting performance and be indicative of a problem with the EJB or a resource used by the EJB.
Pool Timeout Ratio
A high pool timeout ratio can indicate an improperly sized free pool. Increasing the maximum size of your free pool via the max-beans-in-free-pool setting increases the number of bean instances available to service requests and may reduce your pool timeout ratio.
Another factor affecting the number of pool timeouts is the configured transaction timeout for your bean. The maximum amount of time a thread will wait for a bean from the pool is equal to the default transaction timeout for the bean. Increasing the trans-timeout-seconds setting in your weblogic-ejb-jar.xml file will give threads more time to wait for a bean instance to become available.
Users should exercise caution when increasing this value, however, since doing so may cause threads to wait longer for a bean and threads are a valuable server resource. Also, request time might increase because a request will wait longer before timing out.
Transaction Rollback Ratio
Begin investigating a high transaction rollback ratio by examining the Transaction Timeout Ratio. If the transaction timeout ratio is higher than you expect, try to address the timeout problem first.
An unexpectedly high transaction rollback ratio could be caused by a number of things. We recommend investigating the cause of transaction rollbacks to find potential problems with your application or a resource used by your application.
Transaction Timeout Ratio
A high transaction timeout ratio could be caused by the wrong transaction timeout value. For example, if your transaction timeout is set too low, transactions may time out before the thread is able to complete the necessary work. Increasing your transaction timeout value may reduce the number of transaction timeouts.
You should exercise caution when increasing this value, however, since doing so can cause threads to wait longer for a resource before timing out. Also, request time might increase because a request will wait longer before timing out.
A high transaction timeout ratio could be caused by a number of things such as a bottleneck for a server resource. We recommend tracing through your transactions to investigate what is causing the timeouts so the problem can be addressed.
Other Performance Improvement Strategies
Combined caching support allows you to configure a single cache for use with multiple entity beans. This will help solve usability and performance problems. Previously, you were required to configure a separate cache for each entity bean that was part of an application. For more information on combined caching, see Combined Caching with Entity Beans.
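The sketch below assumes the WebLogic Server 8.1-style syntax, in which an application-level cache is declared in weblogic-application.xml and referenced from each bean's entity-descriptor with entity-cache-ref; the cache name, size, and element nesting are assumptions, so confirm them in Combined Caching with Entity Beans before use.

In weblogic-application.xml:

<ejb>
  <entity-cache>
    <entity-cache-name>SharedEntityCache</entity-cache-name>
    <max-beans-in-cache>2000</max-beans-in-cache>
  </entity-cache>
</ejb>

In weblogic-ejb-jar.xml, for each participating entity bean:

<entity-descriptor>
  <entity-cache-ref>
    <!-- Hypothetical shared cache name; must match the application-level cache. -->
    <entity-cache-name>SharedEntityCache</entity-cache-name>
    <concurrency-strategy>Database</concurrency-strategy>
  </entity-cache-ref>
</entity-descriptor>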
Batch inserts, updates and deletes improve the performance of container-managed persistence (CMP) bean creation by enabling the EJB container to perform multiple database operations for CMP beans in one SQL statement, thereby reducing network roundtrips. For more information on batch operations, see Batch Operations.
Distributing Transactions Across EJBs in a WebLogic Server Cluster
WebLogic Server provides additional transaction performance benefits for EJBs that reside in a WebLogic Server cluster. When a single transaction uses multiple EJBs, WebLogic Server attempts to use EJB instances from a single WebLogic Server instance, rather than using EJBs from different servers. This approach minimizes network traffic for the transaction.
In some cases, a transaction can use EJBs that reside on multiple WebLogic Server instances in a cluster. This can occur in heterogeneous clusters, where all EJBs have not been deployed to all WebLogic Server instances. In these cases, WebLogic Server uses a multitier connection to access the datastore, rather than multiple direct connections. This approach uses fewer resources, and yields better performance for the transaction.
However, for best performance, the cluster should be homogeneous — all EJBs should reside on all available WebLogic Server instances.
Stateless Session EJB Life Cycle
WebLogic Server uses a free pool to improve performance and throughput for stateless session EJBs. The free pool stores unbound stateless session EJBs. Unbound EJB instances are instances of a stateless session EJB class that are not processing a method call.
The following figure illustrates the WebLogic Server free pool, and the processes by which stateless EJBs enter and leave the pool. Dotted lines indicate the "state" of the EJB from the perspective of WebLogic Server.
Figure 4-1 WebLogic Server free pool showing stateless session EJB life cycle
Stateful Session EJB Life Cycle
WebLogic Server uses a cache of bean instances to improve the performance of stateful session EJBs. The cache stores active EJB instances in memory so that they are immediately available for client requests. Active EJBs consist of instances that are currently in use by a client, as well as instances that were recently in use, as described in the following sections. The cache is unlike the free pool insofar as stateful session beans in the cache are bound to a particular client, while the stateless session beans in the free pool have no client association.
Passivating Stateful Session EJBs
To achieve high performance, WebLogic Server reserves the cache for EJBs that clients are currently using and EJBs that were recently in use. When EJBs no longer meet these criteria, they become eligible for passivation.