WebLogic Server Performance and Tuning
The weblogic-ejb-jar.xml deployment file contains the WebLogic Server-specific deployment elements that determine the concurrency, caching, and clustering behaviors of EJBs. It also contains deployment elements that map available WebLogic Server resources to EJBs. WebLogic Server resources include JDBC data sources and connection pools, JMS connection factories, and other deployed EJBs.
For information on how to modify the
weblogic-ejb-jar.xml deployment file, see Editing Deployment Descriptors in Programming WebLogic Enterprise JavaBeans.
Table 5-1 lists the
weblogic-ejb-jar.xml file parameters that affect performance.
In addition to this section, see max-beans-in-free-pool in Programming WebLogic Enterprise JavaBeans.
When EJBs are created, the session bean instance is created and given an identity. When the client removes a bean, the bean instance is placed in the free pool. When you create a subsequent bean, you can avoid object allocation by reusing the previous instance that is in the free pool. The
max-beans-in-free-pool element can improve performance if EJBs are frequently created and removed.
The EJB container creates new instances of message beans as needed for concurrent message processing. The
max-beans-in-pool element puts an absolute limit on how many of these instances will be created. The container may override this setting according to the runtime resources that are available.
For the best performance for stateless session and message beans, use the default setting of the
max-beans-in-free-pool element. The default allows you to run beans in parallel, using as many threads as possible.
Do not change the value of the
max-beans-in-free-pool parameter for session beans unless you frequently create session beans, do a quick operation, and then throw them away. If you do this, enlarge your free pool by 25 to 50 percent and see if performance improves. If object creation represents a small fraction of your workload, increasing this parameter will not significantly improve performance. For applications where EJBs are database intensive, do not change the value of this parameter.
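For example, a stateless session bean that is created frequently for short-lived operations might enlarge its free pool in weblogic-ejb-jar.xml. This fragment is a sketch only; the bean name and pool size are illustrative, not recommended defaults:

```xml
<weblogic-enterprise-bean>
  <ejb-name>QuickTaskBean</ejb-name>
  <stateless-session-descriptor>
    <pool>
      <!-- Raised roughly 25-50 percent above the starting value,
           to be validated by load testing -->
      <max-beans-in-free-pool>150</max-beans-in-free-pool>
    </pool>
  </stateless-session-descriptor>
</weblogic-enterprise-bean>
```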
WebLogic Server uses a free pool to improve performance and throughput for stateless session EJBs. The free pool stores unbound stateless session EJBs. Unbound EJB instances are instances of a stateless session EJB class that are not processing a method call.
The following figure illustrates the WebLogic Server free pool, and the processes by which stateless EJBs enter and leave the pool. Dotted lines indicate the "state" of the EJB from the perspective of WebLogic Server.
A pool of anonymous entity beans (that is, beans without a primary key class) is used to invoke finders and home methods, and to create other entity beans. The
max-beans-in-free-pool element also controls the size of this pool.
If you specify a value for
initial-beans-in-free-pool, WebLogic Server populates the free pool with the specified number of bean instances at startup. Populating the free pool in this way improves initial response time for the EJB, because initial requests for the bean can be satisfied without generating a new instance.
The initial-beans-in-free-pool element is described in Programming WebLogic Enterprise JavaBeans.
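As a sketch (the bean name and counts are illustrative), the two pool elements can be combined so that the pool is pre-populated at startup and capped at a maximum:

```xml
<weblogic-enterprise-bean>
  <ejb-name>LookupServiceBean</ejb-name>
  <stateless-session-descriptor>
    <pool>
      <!-- 20 instances created at startup to improve initial response time -->
      <initial-beans-in-free-pool>20</initial-beans-in-free-pool>
      <max-beans-in-free-pool>100</max-beans-in-free-pool>
    </pool>
  </stateless-session-descriptor>
</weblogic-enterprise-bean>
```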
Use the max-beans-in-cache element of the
weblogic-ejb-jar.xml file to specify the maximum number of objects of this bean class that are allowed in memory. When
max-beans-in-cache is reached, WebLogic Server passivates some EJBs that have not been recently used by a client. The
max-beans-in-cache element also affects when EJBs are removed from the WebLogic Server cache.
For more information, see "Choosing a Concurrency Strategy" in Programming WebLogic Enterprise JavaBeans.
The max-beans-in-cache element is described in Programming WebLogic Enterprise JavaBeans.
Set the appropriate cache size with the
max-beans-in-cache element to avoid excessive passivation and activation. Activation is the transfer of an EJB instance from secondary storage to memory. Passivation is the transfer of an EJB instance from memory to secondary storage. Tuning
max-beans-in-cache too high consumes memory unnecessarily.
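For an entity bean, the cache ceiling is set inside the entity-descriptor element in weblogic-ejb-jar.xml. This fragment is illustrative (hypothetical bean name, non-default size):

```xml
<weblogic-enterprise-bean>
  <ejb-name>AccountBean</ejb-name>
  <entity-descriptor>
    <entity-cache>
      <!-- Sized to hold the working set of frequently used beans;
           oversizing consumes memory unnecessarily -->
      <max-beans-in-cache>2000</max-beans-in-cache>
    </entity-cache>
  </entity-descriptor>
</weblogic-enterprise-bean>
```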
The container automatically manages this working set of session objects in the EJB cache without direct intervention by the client or server. Specific callback methods in each EJB describe how to passivate (move from memory to secondary storage) or activate (restore from secondary storage to memory) these objects. Excessive activation and passivation nullifies the performance benefits of caching the working set of session objects in the EJB cache, especially when the application has to handle a large number of session objects.
Database locking improves concurrent access to entity EJBs. The WebLogic Server container improves concurrent access by deferring locking services to the underlying database. Unlike exclusive locking, with deferred database locking, the underlying data store can provide finer granularity for locking EJB data, in most cases, and provide deadlock detection.
For details about database locking, see Choosing a Concurrency Strategy in Programming WebLogic Enterprise JavaBeans.
You specify the locking mechanism used for an EJB by setting the concurrency-strategy element in the weblogic-ejb-jar.xml file.
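A minimal sketch of the element, here choosing the Database strategy for a hypothetical entity bean:

```xml
<entity-descriptor>
  <entity-cache>
    <!-- Defers locking to the underlying database; other values
         include Exclusive, Optimistic, and ReadOnly -->
    <concurrency-strategy>Database</concurrency-strategy>
  </entity-cache>
</entity-descriptor>
```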
Data accessibility is controlled through the transaction isolation level mechanism. The transaction isolation level determines the degree to which multiple interleaved transactions are prevented from interfering with each other in a multi-user database system. Transaction isolation is achieved through locking protocols that guide the reading and writing of transaction data; at the strictest level, interleaved transactions behave as if they ran one after another, a property known as serializability. Lower isolation levels give you better database concurrency at the cost of less transaction isolation.
For more information, see the description of the isolation-level element of the
weblogic-ejb-jar.xml file in Programming WebLogic Enterprise JavaBeans.
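As a hedged example, a lower isolation level can be applied to all methods of a hypothetical bean in weblogic-ejb-jar.xml:

```xml
<transaction-isolation>
  <!-- Trades some isolation for better database concurrency -->
  <isolation-level>TransactionReadCommitted</isolation-level>
  <method>
    <ejb-name>AccountBean</ejb-name>
    <method-name>*</method-name>
  </method>
</transaction-isolation>
```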
Table 5-2 lists the
weblogic-cmp-jar.xml file parameters that affect performance.
See Relationship Caching.
See Field Groups.
The WebLogic Server Administration Console reports a wide variety of EJB runtime monitoring statistics, many of which are useful for tuning your EJBs. This section discusses how some of these statistics can help you tune the performance of EJBs.
To display the statistics in the Administration Console, see Monitoring EJBs.
Note: If you prefer to write a custom monitoring application, you can access the monitoring statistics programmatically by calling the relevant runtime MBeans. See the Javadoc for the weblogic.management.runtime package.
A high cache miss ratio can indicate an improperly sized cache. If your application uses a certain subset of beans (that is, primary keys) more frequently than others, size your cache large enough that the commonly used beans can remain in the cache while less commonly used beans are cycled in and out on demand. If this is the nature of your application, you may be able to decrease your cache miss ratio significantly by increasing the maximum size of your cache.
If your application doesn't use a subset of beans more frequently than others, increasing your maximum cache size may not affect your cache miss ratio. We recommend testing your application with different maximum cache sizes to determine which gives the lowest cache miss ratio. Keep in mind that your server has a finite amount of memory, so there is always a trade-off to increasing your cache size.
A high lock waiter ratio can indicate a suboptimal concurrency strategy for the bean. If acceptable for your application, a concurrency strategy of Database or Optimistic will allow for more parallelism than an Exclusive strategy and remove the need for locking at the EJB container level.
Because locks are generally held for the duration of a transaction, reducing the duration of your transactions will free up beans more quickly and may help reduce your lock waiter ratio. To reduce transaction duration, avoid grouping large amounts of work into a single transaction unless absolutely necessary.
The lock timeout ratio is closely related to the lock waiter ratio. If you are concerned about the lock timeout ratio for your bean, first take a look at the lock waiter ratio and our recommendations for reducing it (including possibly changing your concurrency strategy). If you can reduce or eliminate the number of times a thread has to wait for a lock on a bean, you will also reduce or eliminate the number of timeouts that occur while waiting.
If the transaction timeout value is set too low, threads may not wait long enough to obtain access to a bean and may time out prematurely. If this is the case, increasing the trans-timeout-seconds value for the bean may help reduce the lock timeout ratio.
Take care when increasing trans-timeout-seconds, however, because doing so can cause threads to wait longer for a bean, and threads are a valuable server resource. Also, doing so may increase request time, as a request may wait longer before timing out.
One way to check for free pool sizing problems is to watch the Beans in Use Current Count and Idle Beans Count displayed in the Administration Console. If demand for your EJB spikes during a certain period of time, you may see a lot of pool misses as your pool is emptied and unable to fill additional requests.
As the demand for the EJB drops and beans are returned to the pool, many of the beans created to satisfy requests may be unable to fit in the pool and are therefore removed. If this is the case, you may be able to reduce the number of pool misses by increasing the maximum size of your free pool. This may allow beans that were created to satisfy demand during peak periods to remain in the pool so they can be used again when demand once again increases.
To reduce the number of destroyed beans, BEA recommends against throwing non-application exceptions from your bean code except in cases where you want the bean instance to be destroyed. A non-application exception is an exception that is either a java.rmi.RemoteException (including exceptions that inherit from RemoteException) or is not defined in the throws clause of a method of an EJB's home or component interface.
A high pool timeout ratio could be indicative of an improperly sized free pool. Increasing the maximum size of your free pool via the
max-beans-in-free-pool setting will increase the number of bean instances available to service requests and may reduce your pool timeout ratio.
Another factor affecting the number of pool timeouts is the configured transaction timeout for your bean. The maximum amount of time a thread will wait for a bean from the pool is equal to the default transaction timeout for the bean. Increasing the
trans-timeout-seconds setting in your
weblogic-ejb-jar.xml file will give threads more time to wait for a bean instance to become available.
Users should exercise caution when increasing this value, however, since doing so may cause threads to wait longer for a bean and threads are a valuable server resource. Also, request time might increase because a request will wait longer before timing out.
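The timeout is set per bean in a transaction-descriptor element; the value below is illustrative, not a recommendation:

```xml
<weblogic-enterprise-bean>
  <ejb-name>OrderBean</ejb-name>
  <transaction-descriptor>
    <!-- Also the maximum time a thread waits for an instance
         from the free pool -->
    <trans-timeout-seconds>60</trans-timeout-seconds>
  </transaction-descriptor>
</weblogic-enterprise-bean>
```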
Begin investigating a high transaction rollback ratio by examining the Transaction Timeout Ratio reported in the Administration Console. If the transaction timeout ratio is higher than you expect, try to address the timeout problem first.
An unexpectedly high transaction rollback ratio could be caused by a number of things. We recommend investigating the cause of transaction rollbacks to find potential problems with your application or a resource used by your application.
A high transaction timeout ratio could be caused by the wrong transaction timeout value. For example, if your transaction timeout is set too low, you may be timing out transactions before the thread is able to complete the necessary work. Increasing your transaction timeout value may reduce the number of transaction timeouts.
You should exercise caution when increasing this value, however, since doing so can cause threads to wait longer for a resource before timing out. Also, request time might increase because a request will wait longer before timing out.
A high transaction timeout ratio could be caused by a number of things such as a bottleneck for a server resource. We recommend tracing through your transactions to investigate what is causing the timeouts so the problem can be addressed.
Application-level caching allows you to configure a single cache for use with multiple entity beans, which helps solve usability and performance problems. Previously, you were required to configure a separate cache for each entity bean that was part of an application. For more information on combined caching, see Application-Level Versus Bean-Level Caching.
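As a sketch (the cache name and size are illustrative), a shared cache is defined once in weblogic-application.xml, and each participating bean then references it by name in weblogic-ejb-jar.xml:

```xml
<!-- weblogic-application.xml: define the shared application-level cache -->
<ejb>
  <entity-cache>
    <entity-cache-name>SharedAppCache</entity-cache-name>
    <max-beans-in-cache>5000</max-beans-in-cache>
  </entity-cache>
</ejb>

<!-- weblogic-ejb-jar.xml: a participating bean refers to the cache by name -->
<entity-descriptor>
  <entity-cache-ref>
    <entity-cache-name>SharedAppCache</entity-cache-name>
  </entity-cache-ref>
</entity-descriptor>
```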
Batch inserts, updates, and deletes improve the performance of container-managed persistence (CMP) bean operations by enabling the EJB container to perform multiple database operations for CMP beans in one SQL statement, thereby reducing network round trips. For more information on batch operations, see "Ordering and Batching Operations".
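Batching is controlled at the JAR level in weblogic-cmp-jar.xml; this fragment is a sketch showing the relevant elements:

```xml
<weblogic-rdbms-jar>
  <!-- Order database operations within a transaction so that
       they can be batched safely -->
  <order-database-operations>true</order-database-operations>
  <enable-batch-operations>true</enable-batch-operations>
</weblogic-rdbms-jar>
```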
WebLogic Server provides additional transaction performance benefits for EJBs that reside in a WebLogic Server cluster. When a single transaction uses multiple EJBs, WebLogic Server attempts to use EJB instances from a single WebLogic Server instance, rather than using EJBs from different servers. This approach minimizes network traffic for the transaction.
In some cases, a transaction can use EJBs that reside on multiple WebLogic Server instances in a cluster. This can occur in heterogeneous clusters, where all EJBs have not been deployed to all WebLogic Server instances. In these cases, WebLogic Server uses a multitier connection to access the datastore, rather than multiple direct connections. This approach uses fewer resources, and yields better performance for the transaction.
When using an Optimistic concurrency strategy with a version column specified as the optimistic column, creating an index on the table that consists of the primary key column and the version column can improve performance.
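In weblogic-cmp-jar.xml terms (the bean, table, and column names here are hypothetical), this corresponds to verifying a dedicated version column:

```xml
<weblogic-rdbms-bean>
  <ejb-name>ItemBean</ejb-name>
  <table-map>
    <table-name>ITEM</table-name>
    <field-map>
      <cmp-field>itemId</cmp-field>
      <dbms-column>ITEM_ID</dbms-column>
    </field-map>
    <!-- Optimistic checking against a dedicated version column -->
    <verify-columns>Version</verify-columns>
    <optimistic-column>VERSION_NUM</optimistic-column>
  </table-map>
</weblogic-rdbms-bean>
```

The matching index, for example on (ITEM_ID, VERSION_NUM), is created in the database itself, not in the descriptor.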