This chapter describes some ways to tune the Application Server for optimum performance, including the following topics:
Deployment settings can have significant impact on performance. Follow these guidelines when configuring deployment settings for best performance:
Enabling auto-deployment adversely affects performance, though it is a convenience in a development environment. For a production system, disable auto-deployment to optimize performance. If auto-deployment is enabled, the Reload Poll Interval setting can have a significant performance impact.
Disable auto-deployment with the Admin Console under Stand-Alone Instances > server (Admin Server) on the Advanced/Applications Configuration tab.
Compiling JSP files is resource intensive and time consuming. Pre-compiling JSP files before deploying applications on the server will improve application performance. When you do so, only the resulting servlet class files will be deployed.
You can specify that JSP files be precompiled when you deploy an application through the Admin Console or DeployTool. You can also precompile JSP files for a deployed application with the Admin Console under Stand-Alone Instances > server (Admin Server) on the Advanced/Applications Configuration tab.
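For example, when deploying from the command line, the asadmin deploy subcommand's precompilejsp option requests precompilation. The application name, user, and port here are illustrative; adjust them to your installation:

```shell
asadmin deploy --user admin --port 4848 --precompilejsp=true myapp.ear
```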
If dynamic reloading is enabled, the server periodically checks for changes in deployed applications and automatically reloads the application with the changes. Dynamic reloading is intended for development environments and is also incompatible with session persistence. To improve performance, disable dynamic class reloading.
Disable dynamic class reloading for an application that is already deployed with the Admin Console under Stand-Alone Instances > server (Admin Server) on the Advanced/Applications Configuration tab.
The Application Server writes log messages and exception stack trace output to the log file in the logs directory of the instance, appserver-root/domains/domain-name/logs. The volume of log activity can naturally impact server performance, particularly in benchmarking situations.
In general, writing to the system log slows performance slightly, and increased disk access (raising the log level, decreasing the file rotation limit or time limit) slows the application further.
Also, make sure that any custom log handler doesn’t log to a slow device like a network file system since this can adversely affect performance.
Set the log level for the server and its subsystems in the Admin Console Logger Settings page, Log Levels tab. The page enables you to specify the default log level for the server (labeled Root), the default log level for javax.enterprise.system subsystems (labeled Server) such as the EJB Container, MDB Container, Web Container, Classloader, JNDI naming system, and Security, and for each individual subsystem.
Log levels vary from FINEST, which provides maximum log information, through SEVERE, which logs only events that interfere with normal program execution. The default log level is INFO. The individual subsystem log level overrides the Server setting, which in turn overrides the Root setting.
For example, the MDB container can produce log messages at a different level than server default. To get more debug messages, set the log level to FINE, FINER, or FINEST. For best performance under normal conditions, set the log level to WARNING. Under benchmarking conditions, it is often appropriate to set the log level to SEVERE.
Set Web container properties with the Admin Console at Configurations > config-name > Web Container.
Session timeout determines how long the server maintains a session if a user does not explicitly invalidate the session. The default value is 30 minutes. Tune this value according to your application requirements. Setting a very large value for session timeout can degrade performance by causing the server to maintain too many sessions in the session store. However, setting a very small value can cause the server to reclaim sessions too soon.
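In a standard web application, this value can also be set per application in web.xml; a minimal fragment (the value shown is the default, not a recommendation):

```xml
<!-- web.xml: session timeout in minutes for this application -->
<session-config>
    <session-timeout>30</session-timeout>
</session-config>
```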
Modifying the reap interval can improve performance, but setting it without considering the nature of your sessions and business logic can cause data inconsistency, especially for time-based persistence-frequency.
For example, if you set the reap interval to 60 seconds, the value of session data will be recorded every 60 seconds. But if a client accesses a servlet to update a value at 20 second increments, then inconsistencies will result.
For example, consider an online auction scenario as follows:
Bidding starts at $5, in 60 seconds the value recorded will be $8 (three 20 second intervals).
During the next 40 seconds, the client starts incrementing the price. The value the client sees is $10.
During the client’s 20 second rest, the Application Server stops and restarts within 10 seconds. As a result, the latest value recorded at the 60 second interval ($8) is loaded into the session.
The client clicks again expecting to see $11, but instead sees $9, which is incorrect.
So, to avoid data inconsistencies, take into account the expected behavior of the application when adjusting the reap interval.
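The auction scenario above can be sketched as a small simulation: session state is persisted only at each reap interval, so a crash between reaps loses the latest bids. The names and values here are illustrative, not Application Server APIs.

```python
REAP_INTERVAL = 60   # seconds between persistence sweeps
CLICK_INTERVAL = 20  # client raises the bid every 20 seconds

def run_auction(start_price, clicks_before_crash):
    """Return (price the client last saw, price restored after a restart)."""
    price = start_price
    persisted = start_price
    for click in range(1, clicks_before_crash + 1):
        t = click * CLICK_INTERVAL
        price += 1                      # client raises the bid by $1
        if t % REAP_INTERVAL == 0:      # reap: session written to the store
            persisted = price
    # Server restarts: in-memory session is lost, persisted copy is loaded.
    return price, persisted

seen, restored = run_auction(5, 5)      # five clicks: $5 -> $10
print(seen, restored)                   # 10 8 — the client "loses" two bids
```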
On a production system, improve web container performance by disabling dynamic JSP reloading. To do so, edit the default-web.xml file in the config directory for each instance. Change the servlet definition for a JSP file to look like this:
<servlet>
    <servlet-name>jsp</servlet-name>
    <servlet-class>org.apache.jasper.servlet.JspServlet</servlet-class>
    ...
    <load-on-startup>3</load-on-startup>
</servlet>
The EJB container has many settings that affect performance. As with other areas, monitor the EJB container to track its execution and performance.
Monitoring the EJB container is disabled by default. Enable monitoring with the Admin Console under Configurations > config-name > Monitoring. Set the monitoring level to LOW to monitor all deployed EJB components, EJB pools, and EJB caches. Set the monitoring level to HIGH to also monitor EJB business methods.
The EJB container caches and pools EJB components for better performance. Tuning the cache and pool properties can provide significant performance benefits to the EJB container. Set EJB cache and pool settings in the Admin Console Configurations > config-name > EJB Container (EJB Settings).
The pool settings are valid for stateless session and entity beans while the cache settings are valid for stateful session and entity beans.
Both stateless session beans and entity beans can be pooled to improve server performance. In addition, both stateful session beans and entity beans can be cached to improve performance.
Table 3–1 Bean Type Pooling or Caching
| Bean Type | Pooled | Cached |
|---|---|---|
| Stateless Session | Yes | No |
| Stateful Session | No | Yes |
| Entity | Yes | Yes |
The difference between a pooled bean and a cached bean is that pooled beans are all equivalent and indistinguishable from one another. Cached beans, in contrast, contain conversational state in the case of stateful session beans, and are associated with a primary key in the case of entity beans. Entity beans are removed from the pool and added to the cache on ejbActivate() and removed from the cache and added to the pool on ejbPassivate(). ejbActivate() is called by the container when a needed entity bean is not in the cache. ejbPassivate() is called by the container when the cache grows beyond its configured limits.
If you develop and deploy your EJB components using Sun Java Studio, then you need to edit the individual bean descriptor settings for bean pool and bean cache. These settings might not be suitable for production-level deployment.
A bean in the pool represents the pooled state in the EJB lifecycle. This means that the bean does not have an identity. The advantage of having beans in the pool is that the time to create a bean can be saved for a request. The container has mechanisms that create pool objects in the background, to save the time of bean creation on the request path.
Stateless session beans and entity beans use the EJB pool. Keeping in mind how you use stateless session beans and the amount of traffic your server handles, tune the pool size to prevent excessive creation and deletion of beans.
An individual EJB component can specify pool settings that override those of the EJB container in the <bean-pool> element of the EJB component’s sun-ejb-jar.xml deployment descriptor.
The EJB pool settings are:
Initial and Minimum Pool Size: the initial and minimum number of beans maintained in the pool. Valid values are from 0 to MAX_INTEGER, and the default value is 8. The corresponding EJB deployment descriptor attribute is steady-pool-size.
Set this property to a number greater than zero for a moderately loaded system. Having a value greater than zero ensures that there is always a pooled instance to process an incoming request.
Maximum Pool Size: the maximum number of beans that can be created to satisfy client requests. Valid values are from zero to MAX_INTEGER, and the default is 32. A value of zero means that the size of the pool is unbounded; the potential implication is that the JVM heap will be filled with objects in the pool. The corresponding EJB deployment descriptor attribute is max-pool-size.
Set this property to be representative of the anticipated high load of the system. A very large pool wastes memory and can slow down the system. A very small pool is also inefficient, due to contention.
Pool Resize Quantity: the number of beans to be created or deleted when the pool is serviced by the server. Valid values are from zero to MAX_INTEGER, and the default is 16. The corresponding EJB deployment descriptor attribute is resize-quantity.
Be sure to re-calibrate the pool resize quantity when you change the maximum pool size, to maintain an equilibrium. Generally, a larger maximum pool size should have a larger pool resize quantity.
Pool Idle Timeout: the maximum time that a stateless session bean, entity bean, or message-driven bean is allowed to be idle in the pool. After this time, the bean is destroyed if it is a stateless session bean or a message-driven bean. This is a hint to the server. The default value is 600 seconds. The corresponding EJB deployment descriptor attribute is pool-idle-timeout-in-seconds.
If there are more beans in the pool than the initial and minimum pool size, the pool drains back to that steady size in steps of the pool resize quantity, at intervals specified by the pool idle timeout. If the resize quantity is too small and the idle timeout too large, the pool will not drain back to its steady size quickly enough.
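As a sketch, per-bean pool overrides in sun-ejb-jar.xml use the descriptor attributes named above; the bean name and values here are illustrative, not recommendations:

```xml
<!-- sun-ejb-jar.xml: illustrative per-bean pool overrides -->
<ejb>
    <ejb-name>MyStatelessBean</ejb-name>  <!-- hypothetical bean name -->
    <bean-pool>
        <steady-pool-size>32</steady-pool-size>
        <resize-quantity>16</resize-quantity>
        <max-pool-size>128</max-pool-size>
        <pool-idle-timeout-in-seconds>600</pool-idle-timeout-in-seconds>
    </bean-pool>
</ejb>
```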
A bean in the cache represents the ready state in the EJB lifecycle. This means that the bean has an identity (for example, a primary key or session ID) associated with it.
Beans moving out of the cache have to be passivated or destroyed according to the EJB lifecycle. Once passivated, a bean has to be activated to come back into the cache. Entity beans are generally stored in databases and use some form of query language semantics to load and store data. Session beans have to be serialized when storing them upon passivation onto the disk or a database; and similarly have to be deserialized upon activation.
Any incoming request using these “ready” beans from the cache avoids the overhead of creation, setting identity, and potentially activation. So, theoretically, it is good to cache as many beans as possible. However, there are drawbacks to caching:
Memory consumed by all the beans affects the heap available in the Virtual Machine.
Increasing objects and memory taken by cache means longer, and possibly more frequent, garbage collection.
The application server might run out of memory unless the heap is carefully tuned for peak loads.
Keeping in mind how your application uses stateful session beans and entity beans, and the amount of traffic your server handles, tune the EJB cache size and time-out settings to minimize the number of activations and passivations.
An individual EJB component can specify cache settings that override those of the EJB container in the <bean-cache> element of the EJB component’s sun-ejb-jar.xml deployment descriptor.
The EJB cache settings are:
Maximum number of beans in the cache. Make this setting greater than one. The default value is 512. A value of zero indicates the cache is unbounded, which means the size of the cache is governed by Cache Idle Timeout and Cache Resize Quantity. The corresponding EJB deployment descriptor attribute is max-cache-size.
Number of beans to be created or deleted when the cache is serviced by the server. Valid values are from zero to MAX_INTEGER, and the default is 16. The corresponding EJB deployment descriptor attribute is resize-quantity.
Amount of time that a stateful session bean remains passivated (idle in the backup store). If a bean was not accessed after this interval of time, then it is removed from the backup store and will not be accessible to the client. The default value is 60 minutes. The corresponding EJB deployment descriptor attribute is removal-timeout-in-seconds.
Algorithm used to remove objects from the cache. The corresponding EJB deployment descriptor attribute is victim-selection-policy. Choices are:
NRU (not recently used). This is the default, and is actually a pseudo-random selection policy.
FIFO (first in, first out)
LRU (least recently used)
Maximum time that a stateful session bean or entity bean is allowed to be idle in the cache. After this time, the bean is passivated to the backup store. The default value is 600 seconds. The corresponding EJB deployment descriptor attribute is cache-idle-timeout-in-seconds.
Rate at which a read-only bean is refreshed from the data source. Zero (0) means that the bean is never refreshed. The default is 600 seconds. The corresponding EJB deployment descriptor attribute is refresh-period-in-seconds. Note: this setting does not have a custom field in the Admin Console. To set it, use the Add Property button in the Additional Properties section.
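As a sketch, per-bean cache overrides in sun-ejb-jar.xml use the descriptor attributes named above; the bean name and values here are illustrative, not recommendations:

```xml
<!-- sun-ejb-jar.xml: illustrative per-bean cache overrides -->
<ejb>
    <ejb-name>MyStatefulBean</ejb-name>  <!-- hypothetical bean name -->
    <bean-cache>
        <max-cache-size>512</max-cache-size>
        <resize-quantity>16</resize-quantity>
        <cache-idle-timeout-in-seconds>600</cache-idle-timeout-in-seconds>
        <removal-timeout-in-seconds>3600</removal-timeout-in-seconds>
        <victim-selection-policy>NRU</victim-selection-policy>
    </bean-cache>
</ejb>
```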
Individual EJB pool and cache settings in the sun-ejb-jar.xml deployment descriptor override those of the EJB container. The following table lists the cache and pool settings for each type of EJB component.
Table 3–2 EJB Cache and Pool Settings
In this table, the first six attribute columns are cache settings and the last four are pool settings; an X marks a setting that applies to that bean type.

| Type of Bean | cache-resize-quantity | max-cache-size | cache-idle-timeout-in-seconds | removal-timeout-in-seconds | victim-selection-policy | refresh-period-in-seconds | steady-pool-size | pool-resize-quantity | max-pool-size | pool-idle-timeout-in-seconds |
|---|---|---|---|---|---|---|---|---|---|---|
| Stateful Session | X | X | X | X | X | | | | | |
| Stateless Session | | | | | | | X | X | X | X |
| Entity | X | X | X | X | X | | X | X | X | X |
| Entity Read-only | X | X | X | X | X | X | X | X | X | X |
| Message Driven Bean | | | | | | | X | X | X | |
The commit option controls the action taken by the EJB container when an EJB component completes a transaction. The commit option has a significant impact on performance.
There are two possible values for the commit option:
Commit option B: When a transaction completes, the bean is kept in the cache and retains its identity. The next invocation for the same primary key can use the cached instance. The EJB container will call the bean’s ejbLoad() method before the method invocation to synchronize with the database.
Commit option C: When a transaction completes, the EJB container calls the bean’s ejbPassivate() method, the bean is disassociated from its primary key and returned to the free pool. The next invocation for the same primary key will have to get a free bean from the pool, set the PrimaryKey on this instance, and then call ejbActivate() on the instance. Again, the EJB container will call the bean’s ejbLoad() before the method invocation to synchronize with the database.
Option B avoids ejbActivate() and ejbPassivate() calls, so in most cases it performs better than option C because it avoids some of the overhead of acquiring objects and releasing them back to the pool.
However, there are some cases where option C can provide better performance. If the beans in the cache are rarely reused and beans are constantly added to the cache, then it makes no sense to cache beans. When option C is used, the container puts beans back into the pool (instead of caching them) after method invocation or on transaction completion. This option reuses instances better and reduces the number of live objects in the JVM, speeding garbage collection.
To determine whether to use commit option B or commit option C, first take a look at the cache-hits value using the monitoring command for the bean. If the cache hits are much higher than cache misses, then option B is an appropriate choice. You might still have to change the max-cache-size and cache-resize-quantity to get the best result.
If the cache hits are too low and cache misses are very high, then the application is not reusing the bean instances and hence increasing the cache size (using max-cache-size) will not help (assuming that the access pattern remains the same). In this case you might use commit option C. If there is no great difference between cache-hits and cache-misses then tune max-cache-size, and probably cache-idle-timeout-in-seconds.
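The decision procedure above can be sketched as a small helper. The hit-ratio thresholds are illustrative assumptions, not product-defined values; in practice, compare cache-hits and cache-misses from the monitoring output.

```python
# Rough heuristic for choosing a commit option from monitoring data.
# Thresholds are illustrative assumptions, not product values.

def choose_commit_option(cache_hits, cache_misses):
    total = cache_hits + cache_misses
    if total == 0:
        return "tune"                      # no data yet
    hit_ratio = cache_hits / total
    if hit_ratio > 0.75:                   # hits much higher than misses
        return "B"                         # keep beans cached
    if hit_ratio < 0.25:                   # beans rarely reused
        return "C"                         # return beans to the pool
    return "tune"                          # adjust max-cache-size first

print(choose_commit_option(9000, 1000))    # B
print(choose_commit_option(100, 9000))     # C
```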
The Type attribute, which determines whether the Java Message Service (JMS) provider runs on the local or a remote system, affects performance. Local JMS performance is better than remote JMS performance. However, a remote cluster can provide failover capabilities and can be administered together, so there may be other advantages to using remote JMS. For more information on using JMS, see Chapter 4, Configuring Java Message Service Resources, in Sun Java System Application Server Enterprise Edition 8.2 Administration Guide.
The transaction manager makes it possible to commit and roll back distributed transactions.
A distributed transactional system writes transactional activity into transaction logs so that they can be recovered later. But writing transactional logs has some performance penalty.
Transaction Manager monitoring is disabled by default. Enable monitoring of the transaction service with the Admin Console at Configurations > config-name > Monitoring.
You can also enable monitoring with these commands:
set serverInstance.transaction-service.monitoringEnabled=true
reconfig serverInstance
When you have enabled monitoring of the transaction service, view the results in either of these ways:
With Admin Console at Standalone Instances > server-name (Monitor | Monitor). Select transaction-service from the View dropdown.
With this command:
asadmin get -m serverInstance.transaction-service.*
The following statistics are gathered on the transaction service:
total-tx-completed Completed transactions.
total-tx-rolled-back Total rolled back transactions.
total-tx-inflight Total inflight (active) transactions.
isFrozen Whether the transaction system is frozen (true or false).
inflight-tx List of inflight (active) transactions.
Here is a sample of the output using asadmin:
********** Stats for JTS ************
total-tx-completed = 244283
total-tx-rolled-back = 2640
total-tx-inflight = 702
isFrozen = False
inflight-tx =
Transaction Id , Status, ElapsedTime(msec)
000000000003C95A_00, Active, 999
You can use this property to disable transaction logging when performance matters more than the ability to recover transactions. By default, this property does not exist in the server configuration.
To disable distributed transaction logging with the Admin Console, go to Configurations > config-name > Transaction Service. Click on Add Property, and specify:
Name: disable-distributed-transaction-logging
Value: true
You can also set this property with asadmin, for example:
asadmin set server1.transaction-service.disable-distributed-transaction-logging=true
Setting this attribute to true disables transaction logging, which can improve performance. Setting it to false (the default), makes the transaction service write transactional activity to transaction logs so that transactions can be recovered. If Recover on Restart is checked, this property is ignored.
Set this property to true only if performance is more important than transaction recovery.
To set the Recover on Restart attribute with the Admin Console, go to Configurations > config-name > Transaction Service. Click the Recover check box to set it to true (checked, the default) or false (un-checked).
You can also set automatic recovery with asadmin, for example:
asadmin set server1.transaction-service.automatic-recovery=false
When Recover on Restart is true, the server will always perform transaction logging, regardless of the Disable Distributed Transaction Logging attribute.
If Recover on Restart is false, then:
If Disable Distributed Transaction Logging is false (the default), then the server will write transaction logs.
If Disable Distributed Transaction Logging is true, then the server will not write transaction logs.
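The interaction of the two attributes can be sketched as follows. This models the rules stated in the text, not actual server code:

```python
def writes_transaction_logs(recover_on_restart, disable_dtx_logging):
    """True if the server writes transaction logs for recovery."""
    if recover_on_restart:
        return True        # logging always on; the disable flag is ignored
    return not disable_dtx_logging

print(writes_transaction_logs(True, True))    # True — Recover on Restart wins
print(writes_transaction_logs(False, False))  # True — the default behavior
print(writes_transaction_logs(False, True))   # False — no logs, no recovery
```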
Not writing transaction logs gives approximately a twenty percent improvement in performance, but at the cost of not being able to recover from any interrupted transactions. The performance benefit applies to transaction-intensive tests; gains in real applications may be smaller.
The keypoint interval determines how often entries for completed transactions are removed from the log file. Keypointing prevents a process log from growing indefinitely.
Frequent keypointing is detrimental to performance. The default value of the Keypoint Interval is 2048, which is sufficient in most cases.
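If your configuration exposes the keypoint interval as the keypoint-interval attribute of the transaction service (verify the exact attribute name against your release), it can be set with asadmin, following the pattern of the earlier examples:

```shell
asadmin set server1.transaction-service.keypoint-interval=2048
```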
Monitoring and tuning the HTTP server instances that handle client requests are important parts of ensuring peak Application Server performance.
Enable monitoring statistics for the HTTP service using either Admin Console or asadmin. In the Admin Console, the monitoring level (LOW or HIGH) has no effect on monitoring the HTTP Service.
With asadmin, use the following command to list the monitoring parameters available:
list --user admin --port 4848 -m server-instance-name.http-service.*
where server-instance-name is the name of the server instance.
Use the following command to get the values:
get --user admin --port 4848 -m server.http-service.parameter-name.*
where parameter-name is the name of the parameter to monitor.
Statistics collection is enabled by default. Disable it by adding the following property to domain.xml and restart the server:
<property name="statsProfilingEnabled" value="false" />
Disabling statistics collection will increase performance.
You can also view monitoring statistics with the Admin Console. The information is divided into the following categories:
The Admin Console provides the following performance-related HTTP statistics:
Average load for last minute
Is VirtualServer Overflow enabled?
HttpServer Version
HttpServer ID
Rate at which bytes are being received
Maximum amount of threads
HttpServer Time Started
Maximum amount of virtual servers
Is profiling enabled?
Time in seconds HttpService has been running
Average load for last 15 minutes
Average load for last 5 minutes
Rate at which bytes are being transmitted
The DNS cache caches IP addresses and DNS names. Your server’s DNS cache is disabled by default. The DNS Statistics for Process ID All page under Monitor in the web-based Administration interface displays the following statistics:
If the DNS cache is disabled, the rest of this section is not displayed.
By default, the DNS cache is off. Enable DNS caching with the Admin Console by setting the DNS value to “Perform DNS lookups on clients accessing the server”.
The number of current cache entries and the maximum number of cache entries. A single cache entry represents a single IP address or DNS name lookup. Make the cache as large as the maximum number of clients that access your web site concurrently. Note that setting the cache size too high is a waste of memory and degrades performance.
Set the maximum size of the DNS cache by entering or changing the value in the Size of DNS Cache field of the Performance Tuning page.
The hit ratio is the number of cache hits divided by the number of cache lookups.
This setting is not tunable.
If you turn off DNS lookups on your server, host name restrictions will not work and IP addresses will appear instead of host names in log files.
You can also specify whether to cache the DNS entries. If you enable the DNS cache, the server can store hostname information after receiving it. If the server needs information about the client in the future, the information is available in the cache without further querying. You can specify the size of the DNS cache and an expiration time for DNS cache entries. The DNS cache can contain 32 to 32768 entries; the default value is 1024. Values for the time it takes for a cache entry to expire can range from 1 second to 1 year, specified in seconds; the default value is 1200 seconds (20 minutes).
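A small sketch that restates the documented ranges above as constants and clamps requested settings into them. The function itself is illustrative, not server code:

```python
# Documented bounds: 32-32768 entries (default 1024),
# expiry of 1 second to 1 year (default 1200 seconds).
MIN_ENTRIES, MAX_ENTRIES, DEFAULT_ENTRIES = 32, 32768, 1024
MIN_EXPIRY, MAX_EXPIRY, DEFAULT_EXPIRY = 1, 365 * 24 * 3600, 1200

def clamp_dns_cache(entries=DEFAULT_ENTRIES, expiry=DEFAULT_EXPIRY):
    """Clamp requested DNS cache settings into the documented ranges."""
    entries = max(MIN_ENTRIES, min(MAX_ENTRIES, entries))
    expiry = max(MIN_EXPIRY, min(MAX_EXPIRY, expiry))
    return entries, expiry

print(clamp_dns_cache(100000, 0))   # (32768, 1)
```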
Do not use DNS lookups in server processes because they are resource-intensive. If you must include DNS lookups, make them asynchronous.
If asynchronous DNS is disabled, the rest of this section will not be displayed.
The number of name lookups (DNS name to IP address) that have been done since the server was started. This setting is not tunable.
The number of address loops (IP address to DNS name) that have been done since the server was started. This setting is not tunable.
The current number of lookups in progress.
Total Connections Queued: Total connections queued is the total number of times a connection has been queued. This includes newly accepted connections and connections from the keep-alive system.
Average Queuing Delay: Average queueing delay is the average amount of time a connection spends in the connection queue. This represents the delay between when a request connection is accepted by the server, and a request processing thread (also known as a session) begins servicing the request.
The file cache caches static content so that the server handles requests for static content quickly. The file-cache section provides statistics on how your file cache is being used.
For information on tuning the file cache, see HTTP File Cache.
Number of Hits on Cached File Content
Number of Cache Entries
Number of Hits on Cached File Info
Heap Space Used for Cache
Number of Misses on Cached File Content
Cache Lookup Misses
Max Age of a Cache Entry: The maximum age displays the maximum age of a valid cache entry.
Max Number of Cache Entries
Max Number of Open Entries
Is File Cache Enabled?: If the cache is disabled, the other statistics are not displayed. The cache is enabled by default.
Maximum Memory Map to be Used for Cache
Memory Map Used for cache
Cache Lookup Hits
Open Cache Entries: The number of current cache entries and the maximum number of cache entries are both displayed. A single cache entry represents a single URI. This is a tunable setting.
Maximum Heap Space to be Used for Cache
The Admin Console provides the following performance-related keep-alive statistics:
Connections Terminated Due to Client Connection Timed Out
Max Connection Allowed in Keep-alive
Number of Hits
Connections in Keep-alive Mode
Connections not Handed to Keep-alive Thread Due to too Many Persistent Connections
The Time in Seconds Before Idle Connections are Closed
Connections Closed Due to Max Keep-alive Being Exceeded
The Admin Console provides the following thread pool statistics:
Idle/Peak/Limit: Idle indicates the number of threads that are currently idle. Peak indicates the peak number in the pool. Limit indicates the maximum number of native threads allowed in the thread pool, and is determined by the setting of NativePoolMaxThreads.
Work Queue Length /Peak /Limit: These numbers refer to a queue of server requests that are waiting for the use of a native thread from the pool.
The Work Queue Length is the current number of requests waiting for a native thread.
Peak is the highest number of requests that were ever queued up simultaneously for the use of a native thread since the server was started. This value can be viewed as the maximum concurrency for requests requiring a native thread.
Limit is the maximum number of requests that can be queued at one time to wait for a native thread, and is determined by the setting of NativePoolQueueSize.
The settings for the HTTP service are divided into the following categories in the Admin Console:
Disable access logging when performing benchmarking. Access Logging is enabled by default. To disable it, in HTTP Service click Add Property, and add the following property:
name: accessLoggingEnabled
value: false
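Assuming the property path follows the same dotted pattern as the earlier transaction-service examples (verify the exact name for your release), the property can also be set with asadmin:

```shell
asadmin set server1.http-service.property.accessLoggingEnabled=false
```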
You can set the following access log properties:
Rotation (enabled/disabled). Enable rotation to ensure that the logs don’t run out of disk space.
Rotation Policy: time-based or size-based. Size-based is the default.
Rotation Interval.
On the Request Processing tab of the HTTP Service page, tune the following HTTP request processing settings:
Thread Count
Initial Thread Count
Request Timeout
Buffer Length
The Thread Count parameter specifies the maximum number of simultaneous requests the server can handle. The default value is 128. When the server has reached the limit of request threads, it defers processing new requests until the number of active requests drops below the maximum. Increasing this value reduces HTTP response latency.
In practice, clients frequently connect to the server and then do not complete their requests. In these cases, the server waits a length of time specified by the Request Timeout parameter.
Also, some sites do heavyweight transactions that take minutes to complete. Both of these factors add to the maximum simultaneous requests that are required. If your site is processing many requests that take many seconds, you might need to increase the number of maximum simultaneous requests.
Adjust the thread count value based on your load and the length of time for an average request. In general, increase this number if you have idle CPU time and requests that are pending; decrease it if the CPU becomes overloaded. If you have many HTTP 1.0 clients (or HTTP 1.1 clients that disconnect frequently), adjust the timeout value to reduce the time a connection is kept open.
Suitable Request Thread Count values range from 100 to 500, depending on the load. If your system has extra CPU cycles, keep incrementally increasing thread count and monitor performance after each incremental increase. When performance saturates (stops improving), then stop increasing thread count.
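The incremental procedure above can be sketched as a small script. Here measure() is a stand-in for your own load test, and the step size, stopping threshold, and synthetic throughput numbers are illustrative assumptions:

```python
def find_saturation(measure, start=100, step=50, limit=500, min_gain=0.02):
    """Raise thread count stepwise; stop when throughput gains fall below min_gain."""
    best_threads, best_tput = start, measure(start)
    for threads in range(start + step, limit + 1, step):
        tput = measure(threads)
        if tput < best_tput * (1 + min_gain):   # saturated: stop here
            break
        best_threads, best_tput = threads, tput
    return best_threads

# Synthetic requests-per-second figures for each candidate thread count.
synthetic = {100: 900, 150: 1200, 200: 1400, 250: 1420, 300: 1410}
print(find_saturation(lambda t: synthetic[t]))   # 200
```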
The Initial Thread Count property specifies the minimum number of threads the server initiates upon start-up. The default value is 48. Thread Count, by contrast, represents a hard limit on the maximum number of active threads that can run simultaneously, which can become a bottleneck for performance.
The Request Timeout property specifies the number of seconds the server waits between accepting a connection to a client and receiving information from it. The default setting is 30 seconds. Under most circumstances, changing this setting is unnecessary. By setting it to less than the default 30 seconds, you can free up threads sooner, but you may also disconnect users with slower connections.
The size (in bytes) of the buffer used by each of the request processing threads for reading the request data from the client.
Adjust the value based on the actual request size and observe the impact on performance. In most cases the default should suffice. If the request size is large, increase this parameter.
Both HTTP 1.0 and HTTP 1.1 support the ability to send multiple requests across a single HTTP session. A server can receive hundreds of new HTTP requests per second. If every request was allowed to keep the connection open indefinitely, the server could become overloaded with connections. On Unix/Linux systems, this could easily lead to a file table overflow.
The Application Server’s Keep Alive system addresses this problem. A waiting keep alive connection has completed processing the previous request, and is waiting for a new request to arrive on the same connection. The server maintains a counter for the maximum number of waiting keep-alive connections. If the server has more than the maximum waiting connections open when a new connection waits for a keep-alive request, the server closes the oldest connection. This algorithm limits the number of open waiting keep-alive connections.
If your system has extra CPU cycles, incrementally increase the keep alive settings and monitor performance after each increase. When performance saturates (stops improving), then stop increasing the settings.
The following HTTP keep alive settings affect performance:
Thread Count
Max Connections
Time Out
Keep Alive Query Mean Time
Keep Alive Query Max Sleep Time
Thread Count determines the number of threads in the Keep Alive subsystem. Adjust this setting to be a small multiple of the number of processors on the system. For example, a two-CPU system can have two or four keep-alive threads.
The default is one. Do not change the default for a server with a small number of users and Max Connections.
Max Connections controls the maximum number of keep-alive connections the server maintains. The possible range is zero to 32768, and the default is 256.
Adjust this setting based on the number of keep-alive connections the server is expected to service and on the server’s load; each keep-alive connection adds to resource utilization and can increase latency.
The number of connections specified by Max Connections is divided equally among the keep alive threads. If Max Connections is not equally divisible by Thread Count, the server can allow slightly more than Max Connections simultaneous keep alive connections.
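For example, the keep-alive settings might be set with asadmin as follows; the dotted names (keep-alive element) and the values are illustrative only and should be checked against your configuration:

```shell
# Example: two keep-alive threads on a two-CPU system, 512 max connections.
asadmin set server.http-service.keep-alive.thread-count=2
asadmin set server.http-service.keep-alive.max-connections=512
asadmin set server.http-service.keep-alive.timeout-in-seconds=30
```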
Time Out determines the maximum time (in seconds) that the server holds open an HTTP keep alive connection. A client can keep a connection to the server open so that multiple requests to one server can be serviced by a single network connection. Since the number of open connections that the server can handle is limited, a high number of open connections will prevent new clients from connecting.
The default time out value is 30 seconds. Thus, by default, the server will close the connection if idle for more than 30 seconds. The maximum value for this parameter is 300 seconds (5 minutes).
The proper value for this parameter depends upon how much time is expected to elapse between requests from a given client. For example, if clients are expected to make requests frequently, set the parameter to a high value; conversely, if clients are expected to make requests rarely, set it to a low value.
Keep Alive Query Mean Time specifies the interval between polling keep alive connections. If this parameter has a value of n milliseconds, the response time seen by a client that has requested a keep alive connection will have an overhead between 0 and n milliseconds.
The default value of this parameter is one millisecond, which works well for an expected concurrent load of less than 300 keep alive connections. At higher concurrent loads, the default value can severely reduce scalability. For applications with higher connection loads, increase the default value.
Set this parameter with asadmin or in the Admin Console HTTP Service page, by choosing Add Property and specifying:
Name: keep-alive-query-mean-time
Value: number of milliseconds
Keep Alive Query Max Sleep Time specifies the maximum time (in milliseconds) to wait after polling keep alive connections for further requests. If your system has extra CPU cycles, increase this parameter incrementally and monitor performance after each increase. When performance saturates (stops improving), stop increasing it.
Set this parameter with asadmin or in the Admin Console HTTP Service page, by choosing Add Property and specifying:
Name: keep-alive-query-max-sleep-time
Value: number of milliseconds
Connection queue information shows the number of sessions in the queue, and the average delay before the connection is accepted.
If your system has extra CPU cycles, keep incrementally increasing connection pool settings and monitor performance after each incremental increase. When performance saturates (stops improving), then stop increasing the settings.
Connection pool settings that affect performance are:
Max Pending Count
Queue Size
Max Pending Count specifies the maximum number of pending connections on the listen socket. Adjust Max Pending Count only when there is a heavy load on the system. For low to medium loads, the default will be acceptable.
Observe system behavior and adjust the value accordingly; otherwise, the server may start dropping connections. Connections that time out on a listen socket whose backlog queue is full will fail. If Max Pending Count is close to the limit, increase the maximum connection queue size to avoid dropping connections under heavy load.
Queue Size specifies the number of outstanding (yet to be serviced) connections that the server can have. For heavily loaded systems (with many users) that have limited request processing threads, adjust this setting to a higher value.
Setting the connection queue size too high can degrade server performance. The queue was designed to prevent the server from becoming overloaded with connections it cannot handle. If the server is overloaded, increasing the connection queue size only increases the latency of request handling, and the queue fills up again.
Send Buffer Size specifies the size (in bytes) of the send buffer used by sockets.
Receive Buffer Size specifies the size (in bytes) of the receive buffer used by sockets.
The Send Buffer Size and Receive Buffer Size are the buffer sizes allocated for output and input buffers, respectively. To tune these parameters, increase them methodically and observe the impact on performance. Stop increasing the values when performance saturates (does not increase significantly).
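As a hedged sketch, the connection pool and buffer settings can be adjusted together with asadmin; the dotted names follow the connection-pool element of the 8.x schema, and the values are examples only, not recommendations:

```shell
asadmin set server.http-service.connection-pool.max-pending-count=4096
asadmin set server.http-service.connection-pool.queue-size-in-bytes=4096
asadmin set server.http-service.connection-pool.send-buffer-size-in-bytes=8192
asadmin set server.http-service.connection-pool.receive-buffer-size-in-bytes=4096
```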
The only HTTP Protocol attribute that significantly affects performance is DNS Lookup Enabled.
This setting specifies whether the server performs DNS (domain name service) lookups on clients that access the server. When DNS lookup is not enabled, when a client connects, the server knows the client’s IP address but not its host name (for example, it knows the client as 198.95.251.30, rather than www.xyz.com). When DNS lookup is enabled, the server will resolve the client’s IP address into a host name for operations like access control, common gateway interface (CGI) programs, error reporting, and access logging.
If the server responds to many requests per day, reduce the load on the DNS or NIS (Network Information System) server by disabling DNS lookup. Enabling DNS lookup will increase the latency and load on the system, so do so with caution.
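For instance, DNS lookup might be disabled with a command along these lines (the dotted name assumes the http-protocol element of the 8.x schema and should be verified):

```shell
asadmin set server.http-service.http-protocol.dns-lookup-enabled=false
```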
The Application Server uses a file cache to serve static information faster. The file cache contains information about static files such as HTML, CSS, image, or text files. Enabling the HTTP file cache will improve performance of applications that contain static files.
Set the file cache attributes in the Admin Console under Configurations > config-name > HTTP Service (HTTP File Cache).
Max Files Count determines how many files are in the cache. If the value is too big, the server caches little-needed files, which wastes memory. If the value is too small, the benefit of caching is lost. Try different values of this attribute to find the optimal solution for specific applications—generally, the effects will not be great.
Hash Init Size affects memory use and search time, but rarely will have a measurable effect on performance.
Maximum Age controls how long cached information is used after a file has been cached. An entry older than the maximum age is replaced by a new entry for the same file.
If your web site’s content changes infrequently, increase this value for improved performance. Set the maximum age by entering or changing the value in the Maximum Age field of the File Cache Configuration page in the web-based Admin Console for the HTTP server node and selecting the File Caching Tab.
Set the maximum age based on whether the content is updated (existing files are modified) on a regular schedule or not. For example, if content is updated four times a day at regular intervals, you could set the maximum age to 21600 seconds (6 hours). Otherwise, consider setting the maximum age to the longest time you are willing to serve the previous version of a content file after the file has been modified.
The cache treats small, medium, and large files differently. The contents of medium files are cached by mapping the file into virtual memory (Unix/Linux platforms). The contents of small files are cached by allocating heap space and reading the file into it. The contents of large files are not cached, although information about large files is cached.
The advantage of distinguishing between small files and medium files is to avoid wasting part of many pages of virtual memory when there are lots of small files. So the Small File Size Limit is typically a slightly lower value than the VM page size.
When File Transmission is enabled, the server caches open file descriptors for files in the file cache, rather than the file contents. Also, the distinction normally made between small, medium, and large files no longer applies since only the open file descriptor is being cached.
By default, File Transmission is enabled on Windows, and disabled on UNIX. On UNIX, only enable File Transmission for platforms that have the requisite native OS support: HP-UX and AIX. Don’t enable it for other UNIX/Linux platforms.
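A sketch of setting the file cache attributes from the command line follows; the dotted names (http-file-cache element) and the values are illustrative assumptions, not recommendations:

```shell
# Example: cache content for 6 hours; treat files under 2 KB as small.
asadmin set server.http-service.http-file-cache.file-caching-enabled=true
asadmin set server.http-service.http-file-cache.max-age-in-seconds=21600
asadmin set server.http-service.http-file-cache.max-files-count=1024
asadmin set server.http-service.http-file-cache.small-file-size-limit-in-bytes=2048
```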
Change HTTP listener settings in the Admin Console under Configurations > config-name > HTTP Service > HTTP Listeners > listener-name.
For machines with only one network interface card (NIC), set the network address to the IP address of the machine (for example, 192.18.80.23 instead of the default 0.0.0.0). If you specify an IP address other than 0.0.0.0, the server makes one less system call per connection, so specify an explicit IP address for best possible performance. If the server has multiple NICs, create a separate listener for each NIC.
The Acceptor Threads setting specifies how many threads you want in accept mode on a listen socket at any time. It is a good practice to set this to less than or equal to the number of CPUs in your system.
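For example, on a two-CPU machine with one NIC, the listener might be configured as follows; the listener name http-listener-1 and the address are placeholders:

```shell
asadmin set server.http-service.http-listener.http-listener-1.address=192.18.80.23
asadmin set server.http-service.http-listener.http-listener-1.acceptor-threads=2
```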
In the Application Server, acceptor threads on an HTTP Listener accept connections and put them onto a connection queue. Session threads then pick up connections from the queue and service the requests. The server posts more session threads if required at the end of the request.
The policy for adding new threads is based on the connection queue state:
Each time a new connection is returned, the number of connections waiting in the queue (the backlog of connections) is compared to the number of session threads already created. If it is greater than the number of threads, more threads are scheduled to be added the next time a request completes.
The previous backlog is tracked, so that n threads are added (n is the HTTP Service’s Thread Increment parameter) until one of the following is true:
The number of threads increases over time.
The increase is greater than n.
The number of session threads minus the backlog is less than n.
To avoid creating too many threads when the backlog increases suddenly (such as the startup of benchmark loads), the server makes the decision whether more threads are needed only once every 16 or 32 connections, based on how many session threads already exist.
Grizzly is an HTTP Listener using Java's NIO technology and implemented entirely in Java. This reusable, NIO-based framework can be used for any HTTP-related operations (HTTP Listener/Connector) as well as non-HTTP operations, thus allowing the creation of any type of scalable multi-threaded server. The Grizzly HTTP Listener uses a keep-alive system based on the NIO Selector classes of the Java platform, which support connection monitoring and help prevent Denial-of-Service attacks. The Denial-of-Service systems will add basic support for IP validation, number of transactions completed per IP, detection of inactive connections, and so on, in order to predict resource exhaustion or "flooding" attacks. All these services are performed in conjunction with the Keep-Alive systems. The Grizzly connector will forward requests for both static and dynamic resources to the servlet container, which processes requests for static resources via a dedicated, container-provided servlet (org.apache.catalina.servlets.DefaultServlet).
For more information about Grizzly, you can read the weblog at http://weblogs.java.net/blog/jfarcand/archive/2005/06/grizzly_an_http.html
Grizzly is available as a replacement for NSAPI/WebCore in Application Server Enterprise Edition 8.2. Application Server provides some special properties to support the configuration of Grizzly. To enable Grizzly, add the following property:
-Dcom.sun.enterprise.web.httpservice.ee=false
Currently, the implementation of Grizzly supports all the Application Server Enterprise Edition functionality, except dynamic configuration.
The following properties control the Grizzly configuration:
-Dcom.sun.enterprise.web.connector.grizzly.keepAliveTimeoutInSeconds=30
-Dcom.sun.enterprise.web.connector.grizzly.maxHttpHeaderSize=4096
-Dcom.sun.enterprise.web.connector.grizzly.ssBackLog=4096
-Dcom.sun.enterprise.web.connector.grizzly.queueSizeInBytes=-1
-Dcom.sun.enterprise.web.connector.grizzly.maxKeepAliveRequests=250
-Dcom.sun.enterprise.web.connector.grizzly.fileCache.isEnabled=false
-Dcom.sun.enterprise.web.connector.grizzly.fileCache.maxEntrySize=1024
-Dcom.sun.enterprise.web.connector.grizzly.fileCache.maxLargeFileCacheSize=10485760
Like all Sun Java System Application Server 8.2 configurations, Grizzly also performs better when the Asynch Startup mechanism is disabled. You can disable it using this property: -Dcom.sun.enterprise.server.ss.ASQuickStartup=false
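As a sketch, the Grizzly properties above can be added as JVM options with asadmin rather than edited by hand; exact quoting varies by shell and asadmin version, and the values shown are examples only:

```shell
asadmin create-jvm-options "-Dcom.sun.enterprise.web.httpservice.ee=false"
asadmin create-jvm-options "-Dcom.sun.enterprise.web.connector.grizzly.maxKeepAliveRequests=250"
asadmin create-jvm-options "-Dcom.sun.enterprise.server.ss.ASQuickStartup=false"
```

Restart the server for the new JVM options to take effect.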
The following table shows the corresponding properties of Grizzly and the production web container (PWC).
Table 3–3 Correspondence of Grizzly and PWC Properties
| Property Name in Grizzly | Default Value | Description | Corresponding setting in a production web container |
|---|---|---|---|
| maxAcceptWorkerThread | 0 | Number of threads used to serve OP_ACCEPT (socket.accept()). | acceptor threads in http listener |
| selector.timeout | 60000 | Time in milliseconds before Selector.select() times out. | request processing timeout |
| minWorkerThreads | 5 | The minimum number of threads every thread pool uses at creation. | request processing initial thread count |
| fileCache.isEnabled | false | Indicates if file caching is enabled. | file cache enabled |
| fileCache.minEntrySize | | Minimum size that a small file can have. | small file size limit |
| fileCache.maxEntrySize | 1024 | Maximum size that a medium file can have. | medium file size limit |
| fileCache.maxLargeFileCacheSize | 10485760 | Cache space for medium files. | medium file size |
| fileCache.maxSmallFileCacheSize | | Cache space for small files. | small file size |
| fileCache.maxCacheEntries | | Maximum number of cached entries. | max files count |
| keepAliveTimeoutInSeconds | 30 | | keep alive timeout |
| maxKeepAliveRequests | 250 | | keep alive max connections |
| InitialRuleCount | 128 | The initial number of KeepAliveRule objects created by the Keep-Alive subsystem. | |
| useNioNonBlocking | true | Indicates whether to use NIO non-blocking mode. | |
| displayConfiguration | false | Displays Grizzly's internal configuration. | |
| useDirectByteBuffer | true | Indicates if ByteBuffer.allocateDirect() is used when creating Grizzly buffers. | |
| pipelineClass | com.sun.enterprise.web.connector.grizzly.LinkedListPipeline | The default Pipeline (thread pool wrapper) used by Grizzly. | |
| maxSelectorReadThread | 1 | The number of selector threads for handling OP_READ operations. | |
| useByteBufferView | false | Specifies whether to use a large ByteBuffer and slice it amongst Grizzly buffers. | |
| algorithmClassName | com.sun.enterprise.web.connector.grizzly.algorithms.NoParsingAlgorithm | The request bytes parsing algorithm used to read bytes from the ByteBuffer. | |
| buffersize | 4096 | ByteBuffer size created by Grizzly. | |
| factoryTimeout | 30 | Time allowed before a read/write operation on a socket fails. | |
| maxReadWorkerThread | 0 | Number of threads used to read the request bytes from the socket. | |
If you are migrating an existing installation from Application Server version 7.x to version 8, consult the following table to see the mapping of tunable parameters.
Table 3–4 Mapping of tunable settings from version 7 to version 8
| Tunable Setting in version 7.x | Tunable Setting in version 8.1 |
|---|---|
| RqThrottle | Thread Count (thread-count) |
| RqThrottleMin | Initial Thread Count (initial-thread-count) |
| ConnQueueSize | Queue Size (queue-size-in-bytes) |
| KeepAliveThreads | Thread Count (keep-alive-thread-count) |
| KeepAliveTimeout | Time Out (timeout-in-seconds) |
| MaxKeepAliveConnections | Max Connections (max-connections) |
| KeepAliveQueryMeanTime | keep-alive-query-mean-time |
| KeepAliveQueryMaxSleepTime | keep-alive-query-max-sleep-time |
| ListenQ | Max Pending Count (max-pending-count) |
| AcceptTimeout | Request Time Out |
| HeaderBufferSize | Buffer Length |
The Application Server includes a high-performance, scalable CORBA Object Request Broker (ORB). The ORB is the foundation of the EJB Container on the server.
The ORB is primarily used by EJB components via:
RMI/IIOP path from an application client (or rich client) using the application client container.
RMI/IIOP path from another Application Server instance ORB.
RMI/IIOP path from another vendor’s ORB.
In-process path from the Web Container or MDB (message driven beans) container.
When a server instance makes a connection to another server instance ORB, the first instance acts as a client ORB. SSL over IIOP uses a fast optimized transport with high-performance native implementations of cryptography algorithms.
It is important to remember that EJB local interfaces do not use the ORB. Using a local interface passes all arguments by reference and does not require copying any objects.
A rich client Java program performs a new InitialContext() call, which creates a client-side ORB instance. This in turn creates a socket connection to the Application Server IIOP port. A reader thread is started on the server ORB to service IIOP requests from this client. Using the InitialContext, the client code looks up an EJB deployed on the server. An IOR, which is a remote reference to the deployed EJB, is returned to the client. Using this object reference, the client code invokes remote methods on the EJB.
The InitialContext lookup for the bean and the subsequent method invocations marshal the application's request data from Java into IIOP message(s) that are sent on the socket connection created earlier to the server ORB. The server then creates a response and sends it back on the same connection. The data in the response is unmarshalled by the client ORB and returned to the client code for processing. The client ORB shuts down and closes the connection when the rich client application exits.
ORB statistics are disabled by default. To gather ORB statistics, enable monitoring with these asadmin commands:
asadmin set serverInstance.iiop-service.orb.system.monitoringEnabled=true
asadmin reconfig serverInstance
The following statistics are gathered on ORB connections:
total-inbound-connections: Total inbound connections to the ORB.
total-outbound-connections: Total outbound connections from the ORB.
Use this command to get ORB connection statistics:
asadmin get --monitor serverInstance.iiop-service.orb.system.orb-connection.*
The following statistics are gathered on ORB thread pools:
thread-pool-size: Number of threads in the ORB thread pool.
waiting-thread-count: Number of thread pool threads waiting for work to arrive.
Use this command to get ORB thread pool statistics:
asadmin get --monitor serverInstance.iiop-service.orb.system.orb-thread-pool.*
Tune ORB performance by setting ORB parameters and ORB thread pool parameters. You can often decrease response time by leveraging load-balancing, multiple shared connections, the thread pool, and message fragment size. You can improve scalability by load balancing between multiple ORB servers from the client, and by tuning the number of connections between the client and the server.
The following table summarizes the tunable ORB parameters.
Table 3–5 Tunable ORB Settings
| Path | ORB modules | Server settings |
|---|---|---|
| RMI/IIOP from application client to application server | communication infrastructure, thread pool | steady-thread-pool-size, max-thread-pool-size, idle-thread-timeout-in-seconds |
| RMI/IIOP from ORB to Application Server | communication infrastructure, thread pool | steady-thread-pool-size, max-thread-pool-size, idle-thread-timeout-in-seconds |
| RMI/IIOP from a vendor ORB | parts of communication infrastructure, thread pool | steady-thread-pool-size, max-thread-pool-size, idle-thread-timeout-in-seconds |
| In-process | thread pool | steady-thread-pool-size, max-thread-pool-size, idle-thread-timeout-in-seconds |
Tune the following ORB parameters using the Admin Console:
Max Message Fragment Size: Messages larger than this number of bytes are fragmented. In CORBA GIOP v1.2, Request, Reply, LocateRequest and LocateReply messages can be broken into multiple fragments. The first message is a regular Request or Reply message with the "more fragments" bit in the flags field set to true. If inter-ORB messages are for the most part larger than the default size (1024 bytes), increase the fragment size to decrease latencies on the network.
Total Connections: Maximum number of incoming connections at any time, on all listeners. Protects the server state by allowing a finite number of connections. This value equals the maximum number of threads that will actively read from the connection.
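These ORB parameters might be set with asadmin as follows; the dotted names (orb element of the iiop-service) are assumptions to verify against your schema, and the values are illustrative:

```shell
# Example: 2 KB fragments if most inter-ORB messages exceed the 1024-byte default.
asadmin set server.iiop-service.orb.message-fragment-size=2048
asadmin set server.iiop-service.orb.max-connections=1024
```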
The ORB thread pool contains a task queue and a pool of threads. Tasks or jobs are inserted into the task queue and free threads pick tasks from this queue for execution. Do not set a thread pool size such that the task queue is always empty. It is normal for a large application’s Max Pool Size to be ten times the size of the current task queue.
The Application Server uses the ORB thread pool to:
Execute every ORB request.
Trim EJB pools and caches.
Thus, even when the ORB is not used for remote calls (via RMI/IIOP), set the size of the thread pool to facilitate cleaning up the EJB pools and caches.
Set ORB thread pool attributes under Configurations > config-name > Thread Pools > thread-pool-ID, where thread-pool-ID is the thread pool ID selected for the ORB. Thread pools have the following attributes that affect performance.
Minimum Pool Size: The minimum number of threads in the ORB thread pool. Set to the average number of threads needed at a steady (RMI/IIOP) load.
Maximum Pool Size: The maximum number of threads in the ORB thread pool.
Idle Timeout: Number of seconds to wait before removing an idle thread from pool. Allows shrinking of the thread pool.
Number of Work Queues
In particular, the maximum pool size is important to performance. For more information, see Thread Pool Sizing.
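A sketch of tuning the ORB thread pool with asadmin follows; thread-pool-1 is a placeholder thread pool ID, and the sizes are illustrative values to validate under load:

```shell
asadmin set server.thread-pools.thread-pool.thread-pool-1.min-thread-pool-size=16
asadmin set server.thread-pools.thread-pool.thread-pool-1.max-thread-pool-size=64
asadmin set server.thread-pools.thread-pool.thread-pool-1.idle-thread-timeout-in-seconds=120
```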
Specify the following properties as command-line arguments when launching the client program. You do this by using the following syntax when starting the Java VM:
-Dproperty=value
When using the default JDK ORB on the client, a connection is established from the client ORB to the application server ORB every time an initial context is created. To pool or share these connections when they are opened from the same process, add the following to the configuration on the client ORB:
-Djava.naming.factory.initial=com.sun.appserv.naming.S1ASCtxFactory
The property com.sun.appserv.iiop.orbconnections is not supported in Sun Java System Application Server, version 8.x.
When using the context factory, (com.sun.appserv.naming.S1ASCtxFactory), you can specify the number of connections to open to the server from the client ORB with the property com.sun.appserv.iiop.orbconnections.
The default value is one. Using more than one connection may improve throughput for network-intensive applications. The configuration changes are specified on the client ORB(s) by adding the following jvm-options:
-Djava.naming.factory.initial=com.sun.appserv.naming.S1ASCtxFactory -Dcom.sun.appserv.iiop.orbconnections=value
For information on how to configure RMI/IIOP for multiple application server instances in a cluster, see Chapter 11, RMI-IIOP Load Balancing and Failover, in Sun Java System Application Server Enterprise Edition 8.2 High Availability Administration Guide.
When tuning the client ORB for load-balancing and connections, consider the number of connections opened on the server ORB. Start from a low number of connections and then increase it to observe any performance benefits. A connection to the server translates to an ORB thread reading actively from the connection (these threads are not pooled, but exist currently for the lifetime of the connection).
After examining the number of inbound and outbound connections as explained above, tune the size of the thread pool appropriately. This can affect performance and response times significantly.
The size computation takes into account the number of client requests to be processed concurrently, the resources (number of CPUs and amount of memory) available on the machine, and the response times required for processing the client requests.
Setting the size to a very small value can affect the ability of the server to process requests concurrently, thus affecting response times, since requests will sit longer in the task queue. On the other hand, having a large number of worker threads to service requests can also be detrimental because these threads consume system resources, which increases contention. This can mean that threads take longer to acquire shared structures in the EJB container, thus affecting response times.
The worker thread pool is also used for the EJB container’s housekeeping activity, such as trimming the pools and caches. This activity must also be accounted for when determining the size. Having too many ORB worker threads is detrimental for performance, since the server has to maintain all these threads. Idle threads are destroyed after the idle thread timeout period.
It is sometimes useful to examine the IIOP messages passed by the Application Server. To make the server save IIOP messages to the server.log file, set the JVM option -Dcom.sun.CORBA.ORBDebug=giop. Use the same option on the client ORB.
The following is an example of IIOP messages saved to the server log. Note: in the actual output, each line is preceded by the timestamp, such as [29/Aug/2002:22:41:43] INFO (27179): CORE3282: stdout.
++++++++++++++++++++++++++++++
Message(Thread[ORB Client-side Reader, conn to 192.18.80.118:1050,5,main]): createFromStream: type is 4 <
MessageBase(Thread[ORB Client-side Reader, conn to 192.18.80.118:1050,5,main]): Message GIOP version: 1.2
MessageBase(Thread[ORB Client-side Reader, conn to 192.18.80.118:1050,5,main]): ORB Max GIOP Version: 1.2
Message(Thread[ORB Client-side Reader, conn to 192.18.80.118:1050,5,main]): createFromStream: message construction complete.
com.sun.corba.ee.internal.iiop.MessageMediator(Thread[ORB Client-side Reader, conn to 192.18.80.118:1050,5,main]): Received message:
----- Input Buffer -----
Current index: 0
Total length : 340
47 49 4f 50 01 02 00 04 00 00 01 48 00 00 00 05 GIOP.......H....
The flag -Dcom.sun.CORBA.ORBDebug=giop generates many debug messages in the logs. Use it only when you suspect message fragmentation.
In the sample output above, the createFromStream type is shown as 4. This implies that the message is a fragment of a bigger message. To avoid fragmented messages, increase the fragment size. Larger fragments mean that messages are sent as one unit rather than as fragments, saving the overhead of multiple messages and the corresponding processing at the receiving end to piece the messages together.
If most messages being sent in the application are fragmented, increasing the fragment size is likely to improve efficiency. On the other hand, if only a few messages are fragmented, it might be more efficient to have a lower fragment size that requires smaller buffers for writing messages.
It is possible to improve ORB performance by using Java Serialization instead of standard Common Data Representation (CDR) for data for transport over the network. This capability is called Java Serialization over GIOP (General Inter-ORB Protocol), or JSG.
In some cases, JSG can provide better performance throughput than CDR. The performance differences depend highly on the application. Applications in which remote objects transmit small amounts of data between client and server will most often perform better using JSG.
You must set this property on all servers on which you want to use JSG.
1. In the tree component, expand the Configurations node.
2. Expand the desired node.
3. Select the JVM Settings node.
4. In the JVM Settings page, choose the JVM Options tab.
5. Click Add JVM Option, and enter the following value:
-Dcom.sun.CORBA.encoding.ORBEnableJavaSerialization=true
6. Click Save.
7. Restart the Application Server.
If an application uses standalone non-web clients (application clients), and you want to use JSG, you must also set a system property for the client applications. A common way to do this is to add the property to the Java command line used to start the client application, for example:
java -Dcom.sun.CORBA.encoding.ORBEnableJavaSerialization=true -Dorg.omg.CORBA.ORBInitialHost=gollum -Dorg.omg.CORBA.ORBInitialPort=35309 MyClientProgram
You can both monitor and tune thread pool settings through the Admin Console. To configure monitoring with the Admin Console, open the page Configurations > config-name > Monitoring. To view monitoring information with the Admin Console, open the page Stand-Alone Instances > instance-name (Monitor).
Configure thread pool settings through the Admin Console at Configurations > config-name > Thread Pools.
Since threads on Unix/Linux are always operating system (OS)-scheduled, as opposed to user-scheduled, Unix/Linux users do not need to use native thread pools. Therefore, this option is not offered in a Unix/Linux user interface. However, it is possible to edit the OS-scheduled thread pools and add new thread pools, if needed, using the Admin Console.
For optimum performance of database-intensive applications, tune the JDBC Connection Pools managed by the Application Server. These connection pools maintain numerous live database connections that can be reused to reduce the overhead of opening and closing database connections. This section describes how to tune JDBC Connection Pools to improve performance.
J2EE applications use JDBC Resources to obtain connections that are maintained by the JDBC Connection Pool. More than one JDBC Resource is allowed to refer to the same JDBC Connection Pool. In such a case, the physical connection pool is shared by all the resources.
Statistics-gathering is enabled by default for JDBC Connection Pools. The following attributes are monitored:
numConnFailedValidation (count): Number of connections that failed validation.
numConnUsed (range): Number of connections that have been used.
numConnFree (count): Number of free connections in the pool.
numConnTimedOut (bounded range): Number of connections in the pool that have timed out.
To get the statistics, use these commands:
asadmin get --monitor=true serverInstance.resources.jdbc-connection-pool.*
asadmin get --monitor=true serverInstance.resources.jdbc-connection-pool.poolName.*
Set JDBC Connection Pool attributes with the Admin Console under Resources > JDBC > Connection Pools > PoolName. The following attributes affect performance:
The following settings control the size of the connection pool:
Initial and Minimum Pool Size: Size of the pool when created, and its minimum allowable size.
Maximum Pool Size: Upper limit of the size of the pool.
Pool Resize Quantity: Number of connections to be removed when the idle timeout expires. Connections that have idled for longer than the timeout are candidates for removal. When the pool size reaches the initial and minimum pool size, removal of connections stops.
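These sizing attributes can also be set with asadmin. The dotted names below assume a pool called MyPool and use the standard domain.xml attribute names; the values are illustrative only, and the exact dotted-name prefix should be verified against your release.

```shell
# Hypothetical pool "MyPool"; steady-pool-size is the initial/minimum size.
asadmin set domain.resources.jdbc-connection-pool.MyPool.steady-pool-size=8
asadmin set domain.resources.jdbc-connection-pool.MyPool.max-pool-size=32
asadmin set domain.resources.jdbc-connection-pool.MyPool.pool-resize-quantity=2
```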
The following table summarizes pros and cons to consider when sizing connection pools.
Table 3–6 Connection Pool Sizing
Connection Pool | Pros | Cons
---|---|---
Small connection pool | Faster access on the connection table. | May not have enough connections to satisfy requests. Requests may spend more time in the queue.
Large connection pool | More connections to fulfill requests. Requests will spend less (or no) time in the queue. | Slower access on the connection table.
There are two timeout settings:
Max Wait Time: Amount of time the caller (the code requesting a connection) will wait before getting a connection timeout. The default is 60 seconds. A value of zero forces the caller to wait indefinitely.
To improve performance, set Max Wait Time to zero (0). This blocks the caller thread until a connection becomes available and relieves the server of tracking the elapsed wait time for each request.
Idle Timeout: Maximum time in seconds that a connection can remain idle in the pool. After this time, the pool can close this connection. This property does not control connection timeouts on the database server.
Keep this timeout shorter than the database server timeout (if such timeouts are configured on the database) to prevent the accumulation of unusable connections in the Application Server.
For best performance, set Idle Timeout to zero (0) seconds, so that idle connections will not be removed. This ensures that there is normally no penalty in creating new connections and disables the idle monitor thread. However, there is a risk that the database server will reset a connection that is unused for too long.
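Both timeouts can be set with asadmin as well. The pool name below is a placeholder, and the dotted attribute names are the standard domain.xml names, which should be verified for your release.

```shell
# Hypothetical pool "MyPool": block callers until a connection is free,
# and disable the idle-connection reaper, per the advice above.
asadmin set domain.resources.jdbc-connection-pool.MyPool.max-wait-time-in-millis=0
asadmin set domain.resources.jdbc-connection-pool.MyPool.idle-timeout-in-seconds=0
```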
Two settings control the connection pool’s transaction isolation level on the database server:
Transaction Isolation Level: specifies the transaction isolation level of the pooled database connections. If this parameter is unspecified, the pool uses the default isolation level provided by the JDBC Driver.
Isolation Level Guaranteed: Guarantees that every connection obtained from the pool has the isolation specified by the Transaction Isolation Level parameter. Applicable only when the Transaction Isolation Level is specified. The default value is true.
This setting can have a performance impact with some JDBC drivers. Set it to false when you are certain that the application does not change the isolation level before returning the connection.
Avoid specifying Transaction Isolation Level. If that is not possible, consider setting Isolation Level Guaranteed to false and make sure applications do not programmatically alter the connections’ isolation level.
If you must specify isolation level, specify the best-performing level possible. The isolation levels listed from best performance to worst are:
READ_UNCOMMITTED
READ_COMMITTED
REPEATABLE_READ
SERIALIZABLE
Choose the isolation level that provides the best performance, yet still meets the concurrency and consistency needs of the application.
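If you do need to pin an isolation level, both settings can be applied from the command line. The pool name is a placeholder and the attribute names are the standard domain.xml names (an assumption to check against your release); the level shown is only an example.

```shell
# Hypothetical pool "MyPool": pin the isolation level but do not
# re-apply it on every getConnection(), per the advice above.
asadmin set domain.resources.jdbc-connection-pool.MyPool.transaction-isolation-level=read-committed
asadmin set domain.resources.jdbc-connection-pool.MyPool.is-isolation-level-guaranteed=false
```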
The following settings determine whether and how the pool performs connection validation.
Connection Validation Required: If true, the pool validates connections (checks to find out whether they are usable) before providing them to an application.
If possible, keep the default value, false. Requiring connection validation forces the server to apply the validation algorithm every time the pool returns a connection, which adds overhead to the latency of getConnection(). If the database connectivity is reliable, you can omit validation.
Validation Method: Type of connection validation to perform. Must be one of:
auto-commit: attempt to perform an auto-commit on the connection.
metadata: attempt to get metadata from the connection.
table: perform a query on a specified table. Must also set Table Name. You may have to use this method if the JDBC driver caches calls to setAutoCommit() and getMetaData().
Whether to close all connections in the pool if a single validation check fails. The default is false. One attempt will be made to re-establish failed connections.
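The validation settings can also be applied with asadmin. The pool name is a placeholder, the attribute names are the standard domain.xml names (verify against your release), and the validation table shown is Oracle's DUAL; substitute a small table appropriate to your database.

```shell
# Hypothetical pool "MyPool": enable table-based validation.
asadmin set domain.resources.jdbc-connection-pool.MyPool.is-connection-validation-required=true
asadmin set domain.resources.jdbc-connection-pool.MyPool.connection-validation-method=table
asadmin set domain.resources.jdbc-connection-pool.MyPool.validation-table-name=DUAL
asadmin set domain.resources.jdbc-connection-pool.MyPool.fail-all-connections=false
```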
From a performance standpoint, connector connection pools are similar to JDBC connection pools. Follow all the recommendations in the previous section, Tuning JDBC Connection Pools.
You may be able to improve performance by overriding the default transaction support specified for each connector connection pool.
For example, consider a case where an Enterprise Information System (EIS) has a connection factory that supports local transactions with better performance than global transactions. If a resource from this EIS needs to be mixed with a resource coming from another resource manager, the default behavior forces the use of XA transactions, leading to lower performance. However, by changing the EIS's connector connection pool to use LocalTransaction transaction support and leveraging the Last Agent Optimization (LAO) feature previously described, you can take advantage of the better-performing EIS LocalTransaction implementation. For more information on LAO, see Configure JDBC Resources as One-Phase Commit Resources.
In the Admin Console, specify transaction support when you create a new connector connection pool, and when you edit a connector connection pool at Resources > Connectors > Connector Connection Pools.
You can also set transaction support using asadmin. For example, the following command creates a connector connection pool named TESTPOOL with transaction support set to LocalTransaction:
asadmin> create-connector-connection-pool --raname jdbcra --connectiondefinition javax.sql.DataSource --transactionsupport LocalTransaction TESTPOOL