This chapter includes the following sections:
Deployment descriptors are schema-based. Descriptors that are new in this release of WebLogic Server are not available as DTD-based descriptors.
Avoid using the
RequiresNew transaction parameter. Using
RequiresNew causes the EJB container to start a new transaction after suspending any current transaction. This means additional resources, including a separate database connection, are allocated.
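For illustration, the transaction attribute is set per method in the standard ejb-jar.xml assembly descriptor; a minimal sketch, with a hypothetical bean name:

```xml
<!-- ejb-jar.xml: prefer Required, which joins the caller's transaction
     instead of suspending it and allocating new resources (RequiresNew) -->
<assembly-descriptor>
  <container-transaction>
    <method>
      <ejb-name>AccountBean</ejb-name>
      <method-name>*</method-name>
    </method>
    <trans-attribute>Required</trans-attribute>
  </container-transaction>
</assembly-descriptor>
```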
Use local-interfaces or set call-by-reference to true to avoid the overhead of serialization when one EJB calls another or an EJB is called by a servlet/JSP in the same application. Note the following:
In releases prior to WebLogic Server 8.1, call-by-reference is turned on by default. In WebLogic Server 8.1 and higher, call-by-reference is turned off by default. Older applications migrating to WebLogic Server 8.1 and higher that do not explicitly turn on call-by-reference may experience a drop in performance.
This optimization does not apply to calls across different applications.
The calls across different applications can be between:
applications on different JVMs
applications on the same JVM
For example, suppose
EJBApp1.ear and
EJBApp2.ear are both deployed to the same server, with one EJB deployed in
EJBApp1.ear and another EJB deployed in
EJBApp2.ear. Calls between the EJB in
EJBApp1.ear and the EJB in
EJBApp2.ear are considered calls across different applications, even though both applications run in the same JVM.
Use stateless session beans instead of stateful session beans whenever possible. Stateless session beans scale better than stateful session beans because there is no state information to be maintained.
WebLogic Server provides additional transaction performance benefits for EJBs that reside in a WebLogic Server cluster. When a single transaction uses multiple EJBs, WebLogic Server attempts to use EJB instances from a single WebLogic Server instance, rather than using EJBs from different servers. This approach minimizes network traffic for the transaction. In some cases, a transaction can use EJBs that reside on multiple WebLogic Server instances in a cluster. This can occur in heterogeneous clusters, where all EJBs have not been deployed to all WebLogic Server instances. In these cases, WebLogic Server uses a multitier connection to access the datastore, rather than multiple direct connections. This approach uses fewer resources, and yields better performance for the transaction. However, for best performance, the cluster should be homogeneous — all EJBs should reside on all available WebLogic Server instances.
The following sections provide information on how to tune EJB caches:
The EJB Container caches stateful session beans in memory up to a count specified by the
max-beans-in-cache parameter in
weblogic-ejb-jar.xml. Set this parameter equal to the number of concurrent users. This ensures minimal passivation of stateful session beans to disk and subsequent activation from disk, which yields better performance.
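A sketch of the setting in weblogic-ejb-jar.xml; the bean name and value are illustrative, and the nesting under stateful-session-cache is an assumption based on the descriptor's stateful-session elements:

```xml
<weblogic-enterprise-bean>
  <ejb-name>ShoppingCartBean</ejb-name>
  <stateful-session-descriptor>
    <stateful-session-cache>
      <!-- roughly the number of concurrent users -->
      <max-beans-in-cache>500</max-beans-in-cache>
    </stateful-session-cache>
  </stateful-session-descriptor>
</weblogic-enterprise-bean>
```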
Entity beans are cached at two levels by the EJB container:
Once an entity bean has been loaded from the database, it is always retrieved from the cache when it is requested within that transaction, whether through
findByPrimaryKey or through a cached reference. Getting an entity bean using a non-primary-key finder always retrieves the persistent state of the bean from the database.
Entity bean instances are also cached between transactions. However, by default, the persistent state of an entity bean is not cached between transactions. To enable caching between transactions, set the value of the
cache-between-transactions parameter to true.
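A sketch of the weblogic-ejb-jar.xml fragment; the bean name and cache size are illustrative, and the placement of cache-between-transactions inside entity-cache is an assumption:

```xml
<weblogic-enterprise-bean>
  <ejb-name>ProductBean</ejb-name>
  <entity-descriptor>
    <entity-cache>
      <max-beans-in-cache>1000</max-beans-in-cache>
      <!-- cache the persistent state across transactions -->
      <cache-between-transactions>true</cache-between-transactions>
    </entity-cache>
  </entity-descriptor>
</weblogic-enterprise-bean>
```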
Is it safe to cache the state? This depends on the concurrency-strategy for that bean. The entity-bean cache is really only useful when
cache-between-transactions can be safely set to true. In cases where
ejbPassivate() callbacks are expensive, it is still a good idea to ensure the entity-cache size is large enough. Even though the persistent state may be reloaded at least once per transaction, the beans in the cache are already activated. The value of the cache-size is set by the deployment descriptor parameter
max-beans-in-cache and should be set to maximize cache-hits. In most situations, the value need not be larger than the product of the number of rows in the table associated with the entity bean and the number of threads expected to access the bean concurrently.
For entity beans with a high cache miss ratio, maintaining ready bean instances can adversely affect performance.
If you set
disable-ready-instances in the
entity-cache element of an
entity-descriptor, the container does not maintain ready instances in the cache. When this feature is enabled in the deployment descriptor, the cache keeps only active instances. Once the involved transaction is committed or rolled back, the bean instance is immediately moved from the active cache to the pool.
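Sketched as a weblogic-ejb-jar.xml fragment, using the element names from the description above:

```xml
<entity-descriptor>
  <entity-cache>
    <!-- keep only active instances in the cache -->
    <disable-ready-instances>true</disable-ready-instances>
  </entity-cache>
</entity-descriptor>
```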
Query Caching is a new feature in WebLogic Server 9.0 that allows read-only CMP entity beans to cache the results of arbitrary finders. Query caching is supported for all finders except
prepared-query finders. The query cache can be an application-level cache as well as a bean-level cache. The size of the cache is limited by the
max-queries-in-cache parameter. The finder-level flag
enable-query-caching in the
weblogic-cmp-rdbms descriptor file specifies whether the results of that finder are to be cached. A flag with the same name has the same purpose for internal relationship finders when applied to the
weblogic-relationship-role element. Queries are evicted from the query-cache under the following circumstances:
The query is least recently used and the
query-cache has hit its size limit.
At least one of the EJBs that satisfy the query has been evicted from the entity bean cache, regardless of the reason.
The query corresponds to a finder that has
eager-relationship-caching enabled and the query for the associated internal relationship finder has been evicted from the related bean's query cache.
It is possible to let the size of the entity-bean cache limit the size of the query-cache by setting the
max-queries-in-cache parameter to 0, since queries are evicted from the cache when the corresponding EJB is evicted. This may avoid some lock contention in the query cache, but the performance gain may not be significant.
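A sketch of enabling query caching for one finder in the weblogic-cmp-rdbms descriptor; the finder name and parameter type are hypothetical:

```xml
<weblogic-query>
  <query-method>
    <method-name>findByStatus</method-name>
    <method-params>
      <method-param>java.lang.String</method-param>
    </method-params>
  </query-method>
  <!-- cache the results of this finder -->
  <enable-query-caching>true</enable-query-caching>
</weblogic-query>
```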
The following section provides information on how to tune EJB pools:
The EJB container maintains a pool of stateless session beans to avoid creating and destroying instances. Though generally useful, this pooling is even more important for performance when the
ejbCreate() and the
setSessionContext() methods are expensive. The pool has a lower as well as an upper bound. The upper bound is the more important of the two.
The upper bound is specified by the
max-beans-in-free-pool parameter. It should be set equal to the number of threads expected to invoke the EJB concurrently. Setting the value too low impacts concurrency.
The lower bound is specified by the
initial-beans-in-free-pool parameter. Increasing the value of
initial-beans-in-free-pool increases the time it takes to deploy the application containing the EJB and contributes to startup time for the server. The advantage is the cost of creating EJB instances is not incurred at run time. Setting this value too high wastes memory.
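A sketch of both bounds in weblogic-ejb-jar.xml; the bean name and values are illustrative:

```xml
<weblogic-enterprise-bean>
  <ejb-name>PricingServiceBean</ejb-name>
  <stateless-session-descriptor>
    <pool>
      <!-- lower bound: instances created at deployment, not at run time -->
      <initial-beans-in-free-pool>20</initial-beans-in-free-pool>
      <!-- upper bound: about the number of concurrently invoking threads -->
      <max-beans-in-free-pool>100</max-beans-in-free-pool>
    </pool>
  </stateless-session-descriptor>
</weblogic-enterprise-bean>
```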
The life cycle of MDBs is very similar to stateless session beans. The MDB pool has the same tuning parameters as stateless session beans and the same factors apply when tuning them. In general, most users will find that the default values are adequate for most applications. See Tuning Message-Driven Beans.
The entity bean pool serves two purposes:
Target objects for invoking finders via reflection.
A pool of bean instances the container can recruit if it cannot find an instance for a particular primary key in the cache.
The entity pool contains anonymous instances (instances that do not have a primary key). These beans are not yet active (meaning
ejbActivate() has not been invoked on them yet), though the EJB context has been set. Entity bean instances evicted from the entity cache are passivated and put into the pool. The tunable parameter is
max-beans-in-free-pool. Unlike stateless session beans and MDBs, the
max-beans-in-free-pool has no relation with the thread count. You should increase the value of
max-beans-in-free-pool if the entity bean constructor or
setEntityContext() methods are expensive.
The largest performance gains in entity beans are achieved by using caching to minimize the number of interactions with the database. However, in most situations, it is not realistic to be able to cache entity beans beyond the scope of a transaction. The following sections provide information on WebLogic Server EJB container features, most of which are configurable, that you can use to minimize database interaction safely:
Using eager relationship caching allows the EJB container to load related entity beans using a single SQL join. Use only when the same transaction accesses related beans. See "Relationship Caching" in Developing Enterprise JavaBeans, Version 2.1, for Oracle WebLogic Server.
In this release of WebLogic Server, if a CMR field has specified both
relationship-caching and cascade-delete, the owner bean and related beans are loaded using a single SQL operation, which can provide an additional performance benefit.
The EJB container always uses an outer join in a CMP bean finder when eager
relationship-caching is turned on. Typically, inner joins are faster to execute than outer joins with the drawback that inner joins do not return rows which do not have data in the corresponding joined table. Where applicable, using an inner join on very large databases may help to free CPU resources.
In WLS 10.3,
use-inner-join has been added in
weblogic-cmp-rdbms-jar.xml, as an attribute of the weblogic-rdbms-bean, as shown here:
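A minimal sketch of the element placement, per the description above; the bean name is hypothetical:

```xml
<!-- weblogic-cmp-rdbms-jar.xml fragment -->
<weblogic-rdbms-bean>
  <ejb-name>DepartmentBean</ejb-name>
  <!-- only safe when related beans can never be null or an empty set -->
  <use-inner-join>true</use-inner-join>
</weblogic-rdbms-bean>
```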
This element should only be set to
true if the CMP bean's related beans can never be null or an empty set.
The default value is
false. If you set the value to true, all relationship cache queries on the entity bean use an inner join instead of a left outer join to execute the select query.
JDBC batch operations are turned on by default in the EJB container. The EJB container automatically re-orders and executes similar database operations in a single batch, which increases performance by reducing the number of database round trips. Oracle recommends using batch operations.
When an entity EJB is updated, the EJB container automatically updates in the database only those fields that have actually changed. As a result, the update statements are simpler, and if a bean has not been modified, no database call is made. Because different transactions may modify different sets of fields, more than one form of update statement may be used to store the bean in the database. It is important that you account for the types of update statements that may be used when setting the size of the prepared statement cache in the JDBC connection pool. See Cache Prepared and Callable Statements.
Field groups allow the user to segregate commonly used fields into a single group. If any of the fields in the group is accessed by application/bean code, the entire group is loaded using a single SQL statement. This group can also be associated with a finder. When the finder is invoked and
finders-load-bean is true, it loads only those fields from the database that are included in the field group. This means that if most transactions do not use a particular field that is slow to load, such as a BLOB, it can be excluded from a field group. Similarly, if an entity bean has a lot of fields, but a transaction uses only a small number of them, the unused fields can be excluded.
Be careful to ensure that fields that are accessed in the same transaction are not configured into separate field groups. If that happens, multiple database calls occur to load the same bean, when one would have been enough.
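A sketch of a field group and a finder associated with it in the weblogic-cmp-rdbms descriptor; the bean, field, and finder names are hypothetical, and associating the finder via a group-name child of weblogic-query is an assumption:

```xml
<weblogic-rdbms-bean>
  <ejb-name>CustomerBean</ejb-name>
  <field-group>
    <group-name>basic-info</group-name>
    <cmp-field>name</cmp-field>
    <cmp-field>email</cmp-field>
    <!-- a slow-to-load BLOB field such as a photo is deliberately excluded -->
  </field-group>
  <weblogic-query>
    <query-method>
      <method-name>findByName</method-name>
      <method-params>
        <method-param>java.lang.String</method-param>
      </method-params>
    </query-method>
    <!-- with finders-load-bean, this finder loads only the group's fields -->
    <group-name>basic-info</group-name>
  </weblogic-query>
</weblogic-rdbms-bean>
```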
This flag causes the EJB container to flush all modified entity beans to the database before executing a finder. If the application modifies the same entity bean more than once and executes a non-primary-key finder in between within the same transaction, multiple updates to the database are issued. This flag is turned on by default to comply with the EJB specification.
If the application has transactions where two invocations of the same or different finders could return the same bean instance, and that bean instance could have been modified between the finder invocations, it makes sense to leave
include-updates turned on. If not, this flag may be safely turned off. This eliminates an unnecessary flush to the database if the bean is modified again after executing the second finder. This flag is specified for each finder in the
cmp-rdbms deployment descriptor.
When call-by-reference is turned off, method parameters to an EJB are passed by value, which involves serialization. For mutable, complex types, this can be significantly expensive. Consider using call-by-reference for better performance when:
The application does not require call-by-value semantics, such as method parameters are not modified by the EJB.
If the parameters are modified by the EJB, the changes need not be hidden from the caller of the method.
This flag applies to all EJBs, not just entity EJBs. It also applies to EJB invocations between servlets/JSPs and EJBs in the same application. The flag is turned off by default to comply with the EJB specification. This flag is specified at the bean-level in the WebLogic-specific deployment descriptor.
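A sketch of the bean-level flag in weblogic-ejb-jar.xml; the bean name is hypothetical:

```xml
<weblogic-enterprise-bean>
  <ejb-name>InventoryBean</ejb-name>
  <!-- pass parameters by reference for in-application calls -->
  <enable-call-by-reference>true</enable-call-by-reference>
</weblogic-enterprise-bean>
```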
Bean-level pessimistic locking is implemented in the EJB container by acquiring a database lock when loading the bean. When implemented, each entity bean can only be accessed by a single transaction in a single server at a time. All other transactions are blocked, waiting for the owning transaction to complete. This is a useful alternative to using a higher database isolation level, which can be expensive at the RDBMS level. This flag is specified at the bean level in the
cmp-rdbms deployment descriptor.
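The text above does not name the flag; in the weblogic-cmp-rdbms descriptor, bean-level pessimistic locking is commonly enabled with the use-select-for-update element, shown here as an assumption with a hypothetical bean name:

```xml
<weblogic-rdbms-bean>
  <ejb-name>AccountBean</ejb-name>
  <!-- acquire a database row lock (SELECT ... FOR UPDATE) when loading the bean -->
  <use-select-for-update>true</use-select-for-update>
</weblogic-rdbms-bean>
```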
If the database lock is a shared lock rather than an exclusive lock, you may encounter deadlock conditions when using that RDBMS.
The
concurrency-strategy deployment descriptor parameter tells the EJB container how to handle concurrent access of the same entity bean by multiple threads in the same server instance. Set this parameter to one of four values:
Exclusive—The EJB container ensures there is only one instance of an EJB for a given primary key and this instance is shared among all concurrent transactions in the server, with the container serializing access to it. This concurrency setting generally does not provide good performance unless the EJB is used infrequently and the chance of concurrent access is small.
Database—This is the default value and the most commonly used concurrency strategy. The EJB container defers concurrency control to the database. The container maintains multiple instances of an EJB for a given primary key and each transaction gets its own copy. In combination with this strategy, the database isolation level and bean-level pessimistic locking play a major role in determining if concurrent access to the persistent state should be allowed. It is possible for multiple transactions to access the bean concurrently so long as it does not need to go to the database, as would happen when the value of
cache-between-transactions is true. However, setting the value of
cache-between-transactions to true is unsafe and not recommended with the
Database concurrency strategy.
Optimistic—The goal of the optimistic concurrency strategy is to minimize locking at the database while continuing to provide data consistency. The basic assumption is that the persistent state of the EJB is changed very rarely. The container attempts to load the bean in a nested transaction so that the isolation-level settings of the outer transaction do not cause locks to be acquired at the database. At commit time, if the bean has been modified, a predicated update is used to ensure its persistent state has not been changed by some other transaction. If it has, an
OptimisticConcurrencyException is thrown and must be handled by the application.
Since EJBs that can use this concurrency strategy are rarely modified, turning
cache-between-transactions on can boost performance significantly. This strategy also allows commit-time verification of beans that have been read but not changed. This is done by setting the
verify-rows parameter to
Read in the
cmp-rdbms descriptor. This provides very high data consistency while at the same time minimizing locks at the database. However, it does slow performance somewhat. It is recommended that the optimistic verification be performed using a version column: it is the fastest, followed closely by timestamp, and more distantly by modified and read. The modified value does not apply if verify-rows is set to
Read.
When an optimistic concurrency bean is modified in a server that is part of a cluster, the server attempts to invalidate all instances of that bean cluster-wide in the expectation that it will prevent
OptimisticConcurrencyExceptions. In some cases, it may be more cost effective to simply let other servers throw an
OptimisticConcurrencyException. In this case, turn off the cluster-wide invalidation by setting the
cluster-invalidation-disabled flag in the
cmp-rdbms deployment descriptor.
ReadOnly—The ReadOnly value is the most performant. When selected, the container assumes the EJB is non-transactional and automatically turns on
cache-between-transactions. Bean states are updated from the database at periodic, configurable intervals or when the bean has been programmatically invalidated. The interval between updates can cause the persistent state of the bean to become stale. This is the only concurrency-strategy for which
query-caching can be used. See Caching between Transactions.
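The strategies above are selected per bean; a sketch of the entity-cache setting in weblogic-ejb-jar.xml, with the value shown being one of the four options:

```xml
<entity-descriptor>
  <entity-cache>
    <!-- Exclusive, Database, Optimistic, or ReadOnly -->
    <concurrency-strategy>Optimistic</concurrency-strategy>
  </entity-cache>
</entity-descriptor>
```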
The WebLogic Server Administration Console reports a wide variety of EJB runtime monitoring statistics, many of which are useful for tuning your EJBs. This section discusses how some of these statistics can help you tune the performance of EJBs.
To display the statistics in the WebLogic Server Administration Console, see "Monitoring EJBs" in Oracle WebLogic Server Administration Console Online Help. If you prefer to write a custom monitoring application, you can access the monitoring statistics using JMX or WLST by accessing the relevant runtime MBeans. See "Runtime MBeans" in MBean Reference for Oracle WebLogic Server.
The cache miss ratio is a ratio of the number of times a container cannot find a bean in the cache (cache miss) to the number of times it attempts to find a bean in the cache (cache access):
Cache Miss Ratio = (Cache Total Miss Count / Cache Total Access Count) * 100
A high cache miss ratio could be indicative of an improperly sized cache. If your application uses a certain subset of beans (read primary keys) more frequently than others, it would be ideal to size your cache large enough so that the commonly used beans can remain in the cache as less commonly used beans are cycled in and out upon demand. If this is the nature of your application, you may be able to decrease your cache miss ratio significantly by increasing the maximum size of your cache.
If your application doesn't necessarily use a subset of beans more frequently than others, increasing your maximum cache size may not affect your cache miss ratio. We recommend testing your application with different maximum cache sizes to determine which give the lowest cache miss ratio. It is also important to keep in mind that your server has a finite amount of memory and therefore there is always a trade-off to increasing your cache size.
When using the
Exclusive concurrency strategy, the lock waiter ratio is the ratio of the number of times a thread had to wait to obtain a lock on a bean to the total amount of lock requests issued:
Lock Waiter Ratio = (Current Waiter Count / Current Lock Entry Count) * 100
A high lock waiter ratio can indicate a suboptimal concurrency strategy for the bean. If acceptable for your application, a concurrency strategy of Database or Optimistic will allow for more parallelism than an Exclusive strategy and remove the need for locking at the EJB container level.
Because locks are generally held for the duration of a transaction, reducing the duration of your transactions will free up beans more quickly and may help reduce your lock waiter ratio. To reduce transaction duration, avoid grouping large amounts of work into a single transaction unless absolutely necessary.
When using the
Exclusive concurrency strategy, the lock timeout ratio is the ratio of timeouts to accesses for the lock manager:
Lock Timeout Ratio = (Lock Manager Timeout Total Count / Lock Manager Total Access Count) * 100
The lock timeout ratio is closely related to the lock waiter ratio. If you are concerned about the lock timeout ratio for your bean, first take a look at the lock waiter ratio and our recommendations for reducing it (including possibly changing your concurrency strategy). If you can reduce or eliminate the number of times a thread has to wait for a lock on a bean, you will also reduce or eliminate the amount of timeouts that occur while waiting.
A high lock timeout ratio may also be indicative of an improper transaction timeout value. The maximum amount of time a thread will wait for a lock is equal to the current transaction timeout value.
If the transaction timeout value is set too low, threads may not wait long enough to obtain access to a bean and may time out prematurely. If this is the case, increasing the trans-timeout-seconds value for the bean may help reduce the lock timeout ratio.
Take care when increasing the trans-timeout-seconds value, however, because doing so can cause threads to wait longer for a bean, and threads are a valuable server resource. Also, doing so may increase the request time, as a request may wait longer before timing out.
The pool miss ratio is a ratio of the number of times a request was made to get a bean from the pool when no beans were available, to the total number of requests for a bean made to the pool:
Pool Miss Ratio = (Pool Total Miss Count / Pool Total Access Count) * 100
If your pool miss ratio is high, you must determine what is happening to your bean instances. There are three things that can happen to your beans:
They are in use.
They were destroyed.
They were removed.
Follow these steps to diagnose the problem:
Check your destroyed bean ratio to verify that bean instances are not being destroyed.
Investigate the cause and try to remedy the situation.
Examine the demand for the EJB, perhaps over a period of time.
One way to check this is via the Beans in Use Current Count and Idle Beans Count displayed in the WebLogic Server Administration Console. If demand for your EJB spikes during a certain period of time, you may see a lot of pool misses as your pool is emptied and unable to fill additional requests.
As the demand for the EJB drops and beans are returned to the pool, many of the beans created to satisfy requests may be unable to fit in the pool and are therefore removed. If this is the case, you may be able to reduce the number of pool misses by increasing the maximum size of your free pool. This may allow beans that were created to satisfy demand during peak periods to remain in the pool so they can be used again when demand once again increases.
The destroyed bean ratio is a ratio of the number of beans destroyed to the total number of requests for a bean.
Destroyed Bean Ratio = (Total Destroyed Count / Total Access Count) * 100
To reduce the number of destroyed beans, Oracle recommends against throwing non-application exceptions from your bean code except in cases where you want the bean instance to be destroyed. A non-application exception is an exception that is either a java.rmi.RemoteException (including exceptions that inherit from RemoteException) or is not defined in the throws clause of a method of an EJB's home or component interface.
In general, you should investigate which exceptions are causing your beans to be destroyed, as they may be hurting performance and may indicate a problem with the EJB or a resource used by the EJB.
The pool timeout ratio is a ratio of requests that have timed out waiting for a bean from the pool to the total number of requests made:
Pool Timeout Ratio = (Pool Total Timeout Count / Pool Total Access Count) * 100
A high pool timeout ratio could be indicative of an improperly sized free pool. Increasing the maximum size of your free pool via the
max-beans-in-free-pool setting will increase the number of bean instances available to service requests and may reduce your pool timeout ratio.
Another factor affecting the number of pool timeouts is the configured transaction timeout for your bean. The maximum amount of time a thread will wait for a bean from the pool is equal to the default transaction timeout for the bean. Increasing the
trans-timeout-seconds setting in your
weblogic-ejb-jar.xml file will give threads more time to wait for a bean instance to become available.
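A sketch of the setting in weblogic-ejb-jar.xml; the bean name and value are illustrative:

```xml
<weblogic-enterprise-bean>
  <ejb-name>OrderProcessorBean</ejb-name>
  <transaction-descriptor>
    <!-- also bounds how long a thread waits for a pooled bean instance -->
    <trans-timeout-seconds>60</trans-timeout-seconds>
  </transaction-descriptor>
</weblogic-enterprise-bean>
```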
Users should exercise caution when increasing this value, however, since doing so may cause threads to wait longer for a bean and threads are a valuable server resource. Also, request time might increase because a request will wait longer before timing out.
The transaction rollback ratio is the ratio of transactions that have rolled back to the number of total transactions involving the EJB:
Transaction Rollback Ratio = (Transaction Total Rollback Count / Transaction Total Count) * 100
Begin investigating a high transaction rollback ratio by examining the Transaction Timeout Ratio reported in the WebLogic Server Administration Console. If the transaction timeout ratio is higher than you expect, try to address the timeout problem first.
An unexpectedly high transaction rollback ratio could be caused by a number of things. We recommend investigating the cause of transaction rollbacks to find potential problems with your application or a resource used by your application.
The transaction timeout ratio is the ratio of transactions that have timed out to the total number of transactions involving an EJB:
Transaction Timeout Ratio = (Transaction Total Timeout Count / Transaction Total Count) * 100
A high transaction timeout ratio could be caused by the wrong transaction timeout value. For example, if your transaction timeout is set too low, you may be timing out transactions before the thread is able to complete the necessary work. Increasing your transaction timeout value may reduce the number of transaction timeouts.
You should exercise caution when increasing this value, however, since doing so can cause threads to wait longer for a resource before timing out. Also, request time might increase because a request will wait longer before timing out.
A high transaction timeout ratio could be caused by a number of things such as a bottleneck for a server resource. We recommend tracing through your transactions to investigate what is causing the timeouts so the problem can be addressed.