
Sun Java System Application Server Enterprise Edition 8 2004Q4 Beta Performance Tuning Guide 

Chapter 3
Tuning the Application Server

This chapter describes some ways to tune the Application Server for optimum performance.


Logger Settings

The Application Server produces log messages and exception stack traces that are written to the log file in the logs directory of the instance. The volume of log activity can affect server performance, particularly in benchmarking situations.

For best performance, set the log level to WARNING.

General Settings

In general, writing to the system log slightly degrades performance, and increased disk access (raising the log level, decreasing the file rotation limit or time limit) also slows down the application.

Log Levels

Set the log level for the server and its subsystems on the Admin Console Logger Settings page, Log Levels tab. The page provides the means to specify the default log level for the server (labeled Root), the default log level for the javax.enterprise.system subsystems (labeled Server) such as the EJB Container, MDB Container, Web Container, Classloader, JNDI naming system, and Security, and the log level for each individual subsystem.

Log levels range from FINEST, which provides maximum log information, to SEVERE, which logs only events that interfere with normal program execution. The default log level is INFO. The individual subsystem log level overrides the Server setting, which in turn overrides the Root setting.

For example, the MDB container can produce log messages at a different level than the server default. To get more debug messages, set its log level to FINE, FINER, or FINEST. Under benchmarking conditions, it is often appropriate to set the log level to SEVERE.
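As an illustrative sketch, these log levels appear in the module-log-levels element of domain.xml. The element and attribute names below are assumed and should be verified against the domain.xml DTD for your installation:

```xml
<!-- Sketch (names assumed; verify against your domain.xml DTD):
     server-wide default at WARNING, one subsystem raised for debugging. -->
<log-service file="${com.sun.aas.instanceRoot}/logs/server.log">
  <module-log-levels
      root="WARNING"
      server="WARNING"
      ejb-container="WARNING"
      web-container="WARNING"
      mdb-container="FINE"/>
</log-service>
```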


Deployment Settings

Deployment settings can have significant impact on performance. Follow these guidelines when configuring deployment settings for best performance:

Disable Auto-deployment

Enabling auto-deployment adversely affects performance, though it is a convenience in a development environment. For a production system, disable auto-deployment to optimize performance.

Use Pre-compiled JSPs

Compiling JSPs is resource intensive and time consuming. Pre-compiling JSPs before deploying applications on the server will improve application performance. When you do so, only the resulting servlet class files will be deployed.

Specify JSP precompilation when you deploy an application through the Admin Console or DeployTool.

Specify JSP precompilation for an already deployed application using the Admin Console on the Applications Configuration page at Domain > Configurations > config-name > Deployment Settings.

Disable Dynamic Application Reloading

If dynamic reloading is enabled, the server periodically checks for changes in deployed applications and automatically reloads the application with the changes. Dynamic reloading is intended for development environments and is also incompatible with session persistence. To improve performance, disable dynamic class reloading.

Disable dynamic class reloading for an application that is already deployed through the Admin Console on the Applications Configuration page at Domain > Configurations > config-name > Deployment Settings.


J2EE Containers

Tuning the Web Container

Set Web container properties with the Admin Console at Configurations > config-name > J2EE Containers > Web Container (Session Properties).

Session Properties: Session Timeout

Session timeout determines how long the server maintains a session if a user does not explicitly invalidate the session. The default value is 30 minutes. Tune this value according to your application requirements. Setting a very large value for session timeout can degrade performance by causing the server to maintain too many sessions in the session store. However, setting a very small value can cause the server to reclaim sessions too soon.
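The session timeout can also be set per application in the standard web.xml deployment descriptor. For example, to shorten the timeout to 15 minutes:

```xml
<!-- Standard J2EE web.xml fragment: session timeout in minutes. -->
<session-config>
  <session-timeout>15</session-timeout>
</session-config>
```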

Manager Properties: Reap Interval

Modifying the reap interval can improve performance, but setting it without considering the nature of your sessions and business logic can cause data inconsistency, especially for time-based persistence-frequency.

For example, if you set the reap interval to 60 seconds, session data is recorded every 60 seconds. But if a client accesses a servlet to update a value (for example, a bidding price) every 20 seconds, inconsistencies can result.


So, to avoid data inconsistencies, take into account the expected behavior of the application when adjusting the reap interval.

Disabling Dynamic JSP Reloading

On a production system, improve Web container performance by disabling dynamic JSP reloading. To do so, edit the default-web.xml file in the config directory for each instance. Change the servlet definition for a JSP to look like this:

<servlet>
   <servlet-name>jsp</servlet-name>
   <servlet-class>org.apache.jasper.servlet.JspServlet</servlet-class>
   ...
   <load-on-startup>3</load-on-startup>
</servlet>

Monitoring the EJB Container

Monitoring the EJB container is disabled by default. Enable monitoring with the Admin Console under Domain > Configurations > config-name > Monitoring. Set the monitoring level to LOW to monitor all deployed EJBs, EJB pools, and EJB caches. Set the monitoring level to HIGH to also monitor EJB business methods.

Tuning the EJB Container

The EJB container caches and pools EJBs for better performance. Tuning the cache and pool properties can provide significant performance benefits to the EJB container. Set EJB cache and pool settings in the Admin Console Domain > Configurations > default-config > J2EE Containers > EJB Container (EJB Settings).

The pool settings are valid for stateless session and entity beans while the cache settings are valid for stateful session and entity beans.

Overview of EJB Pooling and Caching

Both stateless session beans and entity beans can be pooled to improve server performance. In addition, both stateful session beans and entity beans can be cached to improve performance.

Table 3-1  Bean Type Pooling or Caching

Bean Type          Pooled   Cached
Stateless Session  Yes      No
Stateful Session   No       Yes
Entity             Yes      Yes

The difference between a pooled bean and a cached bean is that pooled beans are all equivalent and indistinguishable from one another. Cached beans, by contrast, contain conversational state in the case of stateful session beans, and are associated with a primary key in the case of entity beans. Entity beans are removed from the pool and added to the cache on ejbActivate() and removed from the cache and added to the pool on ejbPassivate(). ejbActivate() is called by the container when a needed entity bean is not in the cache. ejbPassivate() is called by the container when the cache grows beyond its configured limits.

Note: If you develop and deploy your EJBs using Sun Java Studio, then you need to edit the individual bean descriptor settings for bean pool and bean cache. These settings might not be suitable for production-level deployment.

Tuning the EJB Pool

A bean in the pool represents the pooled state in the EJB lifecycle. This means that the bean does not have an identity. The advantage of having beans in the pool is that the time to create a bean can be saved for a request. The container has mechanisms that create pool objects in the background, to save the time of bean creation on the request path.

The EJB pool is used by stateless session EJBs and entity EJBs. Keeping in mind how you use stateless session EJBs and the amount of traffic your server handles, tune the pool size to prevent excessive creation and deletion.

EJB Pool Settings

An individual EJB can specify pool settings that override those of the EJB container in the <bean-pool> element of the EJB's sun-ejb-jar.xml deployment descriptor.
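As a sketch, a per-bean pool override in sun-ejb-jar.xml might look like the following. The bean name is hypothetical and the element names are assumed from the sun-ejb-jar.xml DTD; verify them against your release:

```xml
<!-- Sketch: per-bean pool settings overriding the container defaults.
     TheStatelessBean is a hypothetical bean name; verify element names
     against the sun-ejb-jar.xml DTD for your release. -->
<ejb>
  <ejb-name>TheStatelessBean</ejb-name>
  <bean-pool>
    <steady-pool-size>32</steady-pool-size>
    <resize-quantity>16</resize-quantity>
    <max-pool-size>64</max-pool-size>
    <pool-idle-timeout-in-seconds>600</pool-idle-timeout-in-seconds>
  </bean-pool>
</ejb>
```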

The EJB pool settings are:

Tuning the EJB Cache

A bean in the cache represents the ready state in the EJB lifecycle. This means that the bean has an identity (for example, a primary key or session ID) associated with it.

Beans moving out of the cache have to be passivated or destroyed according to the EJB lifecycle. Once passivated, a bean has to be activated to come back into the cache. Entity beans are generally stored in databases and use some form of query language semantics to load and store data. Session beans have to be serialized when storing them upon passivation onto the disk or a database; and similarly have to be deserialized upon activation.

Any incoming request using these "ready" beans from the cache avoids the overhead of creation, setting identity, and potentially activation. So, theoretically, it is good to cache as many beans as possible. However, there are drawbacks to caching:

Keeping in mind how your application uses stateful session EJBs and entity EJBs, and the amount of traffic your server handles, tune the EJB cache size and time-out settings to minimize the number of activations and passivations.

EJB Cache Settings

An individual EJB can specify cache settings that override those of the EJB container in the <bean-cache> element of the EJB's sun-ejb-jar.xml deployment descriptor.
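As a sketch, a per-bean cache override in sun-ejb-jar.xml might look like the following. The bean name is hypothetical and the element names are assumed from the sun-ejb-jar.xml DTD; verify them against your release:

```xml
<!-- Sketch: per-bean cache settings overriding the container defaults.
     TheStatefulBean is a hypothetical bean name; verify element names
     against the sun-ejb-jar.xml DTD for your release. -->
<ejb>
  <ejb-name>TheStatefulBean</ejb-name>
  <bean-cache>
    <max-cache-size>512</max-cache-size>
    <resize-quantity>32</resize-quantity>
    <cache-idle-timeout-in-seconds>600</cache-idle-timeout-in-seconds>
    <removal-timeout-in-seconds>3600</removal-timeout-in-seconds>
    <victim-selection-policy>LRU</victim-selection-policy>
  </bean-cache>
</ejb>
```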

The EJB cache settings are:

Pool and Cache Settings for Individual EJBs

Individual EJB pool and cache settings in the sun-ejb-jar.xml deployment descriptor override those of the EJB container. The pool settings for individual beans are:

The cache settings that can be specified for individual beans are:

The following table lists the cache and pool settings for each type of EJB.

Table 3-2  Tunable EJB Cache and Pool Settings

                                Stateful  Stateless  Entity     Entity     Message
Setting                         Session   Session    (BMP/CMP)  Read-only  Driven Bean

Cache settings:
cache-resize-quantity              X                    X          X
max-cache-size                     X                    X          X
cache-idle-timeout-in-seconds      X                    X          X
removal-timeout-in-seconds         X                    X          X
victim-selection-policy            X                    X          X
refresh-period-in-seconds                                          X

Pool settings:
steady-pool-size                              X         X          X
pool-resize-quantity                          X         X          X          X
max-pool-size                                 X         X          X          X
pool-idle-timeout-in-seconds                  X         X          X          X

Commit Option

The commit option controls the action taken by the EJB container when an EJB completes a transaction. The commit option has a significant impact on performance.

There are two possible values for the commit option:

Option B avoids ejbActivate() and ejbPassivate() calls, so in most cases it performs better than option C because it avoids some of the overhead of acquiring objects from the pool and releasing them back to it.

However, there are some cases where option C can provide better performance. If the beans in the cache are rarely reused and beans are constantly added to the cache, then caching beans makes no sense. When option C is used, the container returns beans to the pool (instead of caching them) after method invocation or on transaction completion. This option reuses instances better and reduces the number of live objects in the VM, speeding garbage collection.

How do you decide whether to use commit option B or commit option C?

First take a look at the cache-hits value using the monitoring command for the bean. If the cache hits are much higher than cache misses, then option B is an appropriate choice. You might still have to change the max-cache-size and cache-resize-quantity to get the best result.

If the cache hits are too low and cache misses are very high, then the application is not reusing the bean instances and hence increasing the cache size (using max-cache-size) will not help (assuming that the access pattern remains the same). In this case you might use commit option C. If there is no great difference between cache-hits and cache-misses then tune max-cache-size, and probably cache-idle-timeout-in-seconds.

Message-Driven Beans

The container for message-driven beans (MDB) is different than the containers for entity and session beans. In the MDB container, sessions and threads are attached to the beans in the MDB pool. This design makes it possible to pool the threads for executing message-driven requests in the container. Thus, give the bean pool an optimal value based on all the parameters of the server (taking other applications into account). For example, values greater than 500 are generally too large.


Transaction Service

The transaction manager makes it possible to commit and roll back distributed transactions.

A distributed transactional system writes transactional activity into transaction logs so that they can be recovered later. But writing transactional logs has some performance penalty.

Monitoring the Transaction Service

Transaction Manager monitoring is disabled by default. Enable monitoring the transaction service with the Admin Console at Domain > Configurations > config-name > Monitoring.

It is also possible to enable monitoring with these commands:

set serverInstance.transaction-service.monitoringEnabled=true
reconfig serverInstance

Viewing Monitoring Information

When you have enabled monitoring of the transaction service, view the results with this command:

asadmin get -m serverInstance.transaction-service.*

The following statistics are gathered on the transaction service:

Here is a sample of the output using asadmin:

********** Stats for JTS ************
total-tx-completed = 244283
total-tx-rolled-back = 2640
total-tx-inflight = 702
isFrozen = False
inflight-tx =
Transaction Id , Status, ElapsedTime(msec)
000000000003C95A_00, Active, 999

Tuning the Transaction Service

When performance matters more than transaction recovery, transaction logging can be disabled with the property described in the following section. By default, this property does not exist in the server configuration.

Disable Distributed Transaction Logging

If this is set to true, transaction logging is disabled, which can improve performance. If false, the transaction service writes transactional activity into transaction logs so that transactions can be recovered. If Recover on Restart is checked, this property is ignored. Default is false.

Use only if performance is more important than transaction recovery.

Set this property with the Admin Console at Configurations > config-name > Transaction Service. Click on Add Property, and specify:
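Judging from the asadmin example that follows, the name/value pair to add is:

```
Name:  disable-distributed-transaction-logging
Value: true
```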

Set this property with asadmin as follows:

asadmin set server1.transaction-service.disable-distributed-transaction-logging=true

On Restart - Recover (Automatic Recovery)

When automatic-recovery is set to true, disable-distributed-transaction-logging is ignored and transaction logging always occurs. When automatic-recovery is set to false, disable-distributed-transaction-logging determines whether transaction logs are written.

This value, together with the disable-distributed-transaction-logging attribute, affects performance. If automatic-recovery is true, transaction logs are always written.

If automatic recovery is false and disable-distributed-transaction-logging is off (the default), then the server writes transaction logs.

If automatic recovery is false and disable-distributed-transaction-logging is on, then the server does not write transaction logs. This gives approximately a twenty percent improvement in performance, but at the cost of losing transaction recovery, since there are no transaction logs. In other words, transaction logging in the first two cases costs approximately twenty percent in performance. All these results apply only to transaction-intensive tests; gains in real applications can be smaller.

Set automatic recovery with asadmin. For example:

asadmin set server1.transaction-service.automatic-recovery=false

Keypoint Interval

Keypointing prevents the log for a process from growing indefinitely, by defining the frequency at which the log file is cleaned up, by removing entries for completed transactions.

Frequent keypointing is detrimental to performance. The default Keypoint Interval value is 2048, which is sufficient in most cases.
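As an illustrative sketch, the keypoint interval appears as an attribute of the transaction-service element in domain.xml. The attribute names are assumed and should be verified against the domain.xml DTD for your release:

```xml
<!-- Sketch (attribute names assumed; verify against your domain.xml DTD):
     transaction service with the default keypoint interval. -->
<transaction-service
    automatic-recovery="false"
    keypoint-interval="2048"/>
```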


HTTP Service

Monitoring and tuning the HTTP server instances that handle client requests are important parts of ensuring peak Application Server performance.

Monitoring the HTTP Service

Enable monitoring statistics for the HTTP service using either Admin Console or asadmin. In the Admin Console, set the monitoring level to LOW or HIGH, depending on the level of detail desired.

With asadmin, use the following command to list the monitoring parameters available:

list --user admin --password adminadmin --port 4848 -m server.http-service.*

Use the following command to get the values:

get --user admin --password adminadmin --port 4848 -m server.http-service.*

Statistics collection is enabled by default. Disable it by adding the following property to domain.xml and restart the server:

<property name="statsProfilingEnabled" value="false" />

Disabling statistics collection will increase performance.

In the Admin Console, monitoring statistics are divided into the following categories:

General HTTP Statistics (http-service)

The Admin Console provides the following performance-related HTTP statistics:

DNS Cache Information (dns)

The DNS cache caches IP addresses and DNS names. Your server's DNS cache is disabled by default. The following statistics are displayed on the DNS Statistics for Process ID All page, under Monitor in the web-based Administration interface:

enabled

If the DNS cache is disabled, the rest of this section is not displayed.

By default, the DNS cache is off. Enable DNS caching with the Admin Console by setting the DNS value to "Perform DNS lookups on clients accessing the server".

CacheEntries (CurrentCacheEntries / MaxCacheEntries)

The number of current cache entries and the maximum number of cache entries. A single cache entry represents a single IP address or DNS name lookup. Make the cache as large as the maximum number of clients that access your web site concurrently. Note that setting the cache size too high is a waste of memory and degrades performance.

Set the maximum size of the DNS cache by entering or changing the value in the Size of DNS Cache field of the Performance Tuning page.

HitRatio (CacheHits / CacheLookups)

The hit ratio displays the number of cache hits versus the number of cache lookups.

This setting is not tunable.


Note

If you turn off DNS lookups on your server, host name restrictions will not work and hostnames will not appear in your log files. Instead, you'll see IP addresses.


Caching DNS Entries

You can also specify whether to cache the DNS entries. If you enable the DNS cache, the server can store hostname information after receiving it. If the server needs information about the client in the future, the information is cached and available without further querying. Specify the size of the DNS cache and an expiration time for DNS cache entries. The DNS cache can contain 32 to 32768 entries; the default value is 1024. The time it takes for a cache entry to expire can range from 1 second to 1 year, specified in seconds; the default value is 1200 seconds (20 minutes).

Limit DNS Lookups to Asynchronous

It is recommended that you do not use DNS lookups in server processes because they are so resource-intensive. If you must include DNS lookups, be sure to make them asynchronous.

enabled

If asynchronous DNS is disabled, the rest of this section will not be displayed.

NameLookups

The number of name lookups (DNS name to IP address) that have been done since the server was started.

This setting is not tunable.

AddrLookups

The number of address lookups (IP address to DNS name) that have been done since the server was started.

This setting is not tunable.

LookupsInProgress

The current number of lookups in progress.

Connection Queue

File Cache Information (file-cache)

The file cache caches static content so that the server handles requests for static content quickly. The file-cache section provides statistics on how your file cache is being used.

For information on tuning the file cache, see HTTP File Cache.

Keep Alive (keep-alive)

The Admin Console provides the following performance-related keep-alive statistics:

Thread Pool (pwc-thread-pool)

Tuning the HTTP Service

The settings for the HTTP service are divided into the following categories in the Admin Console:

Request Processing

On the Request Processing tab of the HTTP Service page, tune the following HTTP request processing settings:

Thread Count

The Thread Count parameter specifies the maximum number of simultaneous requests the server can handle. The default value is 128. When the server has reached the limit of request threads, it defers processing new requests until the number of active requests drops below the maximum. Increasing this value will reduce HTTP response latency times.

In practice, Internet clients frequently connect to the server and then do not complete their requests. In these cases, the server waits a length of time specified by the Request Timeout parameter.

Also, some sites do heavyweight transactions that take minutes to complete. Both of these factors add to the maximum simultaneous requests that are required. If your site is processing many requests that take many seconds, you might need to increase the number of maximum simultaneous requests.

Adjust the thread count value based on your load and the length of time for an average request. In general, increase this number if you have idle CPU time and requests that are pending; decrease it if the CPU becomes overloaded. If you have many HTTP 1.0 clients (or HTTP 1.1 clients that disconnect frequently), adjust the timeout value to reduce the time a connection is kept open.

Suitable Request Thread Count values range from 100-500, depending on the load. If your system has extra CPU cycles, keep incrementally increasing thread count and monitor performance after each incremental increase. When performance saturates (stops improving), then stop increasing thread count.

Initial Thread Count

The Initial Thread Count property specifies the minimum number of threads the server initiates upon start-up. The default value is 48. Initial Thread Count represents a hard limit for the maximum number of active threads that can run simultaneously, which can become a bottleneck for performance.

Request Timeout

The Request Timeout property specifies the number of seconds the server waits between accepting a connection to a client and receiving information from it. The default setting is 30 seconds. Under most circumstances, changing this setting is unnecessary. Setting it lower than the default 30 seconds frees up threads sooner, but might also disconnect users with slower connections.
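Taken together, these request-processing settings correspond to a domain.xml fragment roughly along these lines, shown here with their default values. The attribute names are assumed and should be verified against the domain.xml DTD for your release:

```xml
<!-- Sketch (attribute names assumed; verify against your domain.xml DTD):
     HTTP request-processing settings at their default values. -->
<request-processing
    thread-count="128"
    initial-thread-count="48"
    request-timeout-in-seconds="30"/>
```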

Keep Alive

Both HTTP 1.0 and HTTP 1.1 support the ability to send multiple requests across a single HTTP session. A server can receive hundreds of new HTTP requests per second. If every request was allowed to keep the connection open indefinitely, the server could become overloaded with connections. On Unix systems, this could easily lead to a file table overflow.

To deal with this problem, the server maintains a "Maximum number of waiting keep-alive connections" counter. A "waiting" keep-alive connection has fully completed processing the previous request, and is waiting for a new request to arrive on the same connection. If the server has more than the maximum waiting connections open when a new connection waits for a keep-alive request, the server closes the oldest connection. This algorithm keeps an upper bound on the number of open waiting keep-alive connections that the server maintains.

If your system has extra CPU cycles, keep incrementally increasing the keep alive settings and monitor performance after each incremental increase. When performance saturates (stops improving), then stop increasing the settings.

The following HTTP keep alive settings affect performance:

Thread Count

The Thread Count setting determines the number of threads in the Keep Alive subsystem. Adjust this setting to be a small multiple of the number of processors on the system. For example, a two-CPU system can have two or four Keep Alive threads.

The default is one. Do not change the default for a server with a small number of users and Max Keep Alive Connections.

Max Connections

The Max Connections setting controls the maximum number of keep-alive connections the Server can maintain. The possible range is 0 to 32768, and the default is 256.

Adjust this setting based on the number of keep-alive connections the server is expected to service and on the server's load; each open connection adds to resource utilization and can increase latency.

The number of connections specified by Max Connections is divided equally among the keep-alive threads. If Max Connections is not equally divisible by Thread Count, the server can allow slightly more than Max Connections simultaneous keep-alive connections.

Time Out

This parameter determines the maximum time (in seconds) that the server holds open an HTTP keep-alive connection. A client can keep a connection to the server open so that multiple requests to one server can be serviced by a single network connection. Since the number of open connections that the server can handle is limited, a high number of open connections will prevent new clients from connecting.

The default time out value is 30 seconds. Thus, by default, the server will close the connection if idle for more than 30 seconds. The maximum value for this parameter is 300 seconds (5 minutes).

The proper value for this parameter depends on how much time is expected to elapse between requests from a given client. For example, if clients are expected to make requests frequently, set the parameter to a high value; if clients are expected to make requests rarely, set it to a low value.
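The keep-alive settings described above map to a domain.xml fragment roughly like the following, shown with their default values. The attribute names are assumed and should be verified against the domain.xml DTD for your release:

```xml
<!-- Sketch (attribute names assumed; verify against your domain.xml DTD):
     keep-alive subsystem settings at their default values. -->
<keep-alive
    thread-count="1"
    max-connections="256"
    timeout-in-seconds="30"/>
```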

Keep Alive Query Mean Time

The keep-alive-query-mean-time parameter specifies the interval between polling keep-alive connections. If this parameter has a value of n milliseconds, the response time seen by a client that has requested a keep-alive connection will have an overhead between 0 and n milliseconds.

The default value of this parameter is one millisecond, which works well for an expected concurrent load of less than 300 keep-alive connections. The default value can severely reduce the scalability with higher concurrent loads. For applications with higher connection loads, increase the default value.

Set this parameter with asadmin or in Admin Console HTTP Service page, by choosing Add Property and specifying:

Keep Alive Query Max Sleep Time

The keep-alive-query-max-sleep-time parameter specifies the maximum time (in milliseconds) to wait after polling keep-alive connections for further requests. If your system has extra CPU cycles, keep incrementally increasing this value and monitor performance after each incremental increase. When performance saturates (stops improving), stop increasing it.

Set this parameter with asadmin or in Admin Console HTTP Service page, by choosing Add Property and specifying:

Connection Pool

Connection queue information shows the number of sessions in the queue, and the average delay before the connection is accepted.

If your system has extra CPU cycles, keep incrementally increasing connection pool settings and monitor performance after each incremental increase. When performance saturates (stops improving), then stop increasing the settings.

Connection pool settings that affect performance are:

Max Pending Count

This setting specifies the maximum number of pending connections on the listen socket. Adjust Max Pending Count only when there is a heavy load on the system. For low to medium loads, the default will be acceptable.

After observing system behavior, adjust the value accordingly; otherwise, the server will start dropping connections. Connections that time out on a listen socket whose backlog queue is full will fail. If Max Pending Count is close to the limit, increase the maximum connection queue size to avoid dropping connections under heavy load.

Queue Size

This setting specifies the number of outstanding (yet to be serviced) connections that the server can have. For heavily loaded systems (with many users) that have limited request processing threads, adjust this setting to a higher value.
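As a sketch, these connection pool settings appear on the connection-pool element of domain.xml. The attribute names are assumed and the values below are illustrative, not defaults; verify both against the domain.xml DTD for your release:

```xml
<!-- Sketch (attribute names assumed, values illustrative;
     verify against your domain.xml DTD). -->
<connection-pool
    max-pending-count="4096"
    queue-size-in-bytes="4096"/>
```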


Caution

Setting the connection queue size too high can degrade server performance. The connection queue was designed to prevent the server from becoming overloaded with connections it cannot handle. If the server is overloaded and you increase the connection queue size, the latency of request handling increases and the connection queue fills up again.


HTTP Protocol

DNS Lookup Enabled

This setting specifies whether the server performs DNS (domain name service) lookups on clients that access the server. When DNS lookup is not enabled, when a client connects, the server knows the client's IP address but not its host name (for example, it knows the client as 198.95.251.30, rather than www.xyz.com). When DNS lookup is enabled, the server resolves the client's IP address into a host name for operations like access control, common gateway interface (CGI) programs, error reporting, and access logging.

If the server responds to many requests per day, reduce the load on the DNS or NIS (Network Information System) server by disabling DNS lookup. Enabling DNS lookup will increase the latency and load on the system—do so with caution.

HTTP File Cache

The Application Server uses a file cache to serve static information faster. The file cache contains information about static files such as HTML, CSS, image, or text files. Enabling the HTTP file cache will improve performance of applications that contain static files.

Set the file cache values in the Admin Console under Domain > Configurations > config-name > HTTP Service (HTTP File Cache).

Max Age

This parameter controls how long cached information is used after a file has been cached. An entry older than the maximum age is replaced by a new entry for the same file.

If your web site's content changes infrequently, increase this value for improved performance. To set the maximum age, in the web-based Admin Console select the HTTP server node, select the File Caching tab, and then enter or change the value in the Maximum Age field of the File Cache Configuration page.

Set the maximum age based on whether the content is updated (existing files are modified) on a regular schedule or not. For example, if content is updated four times a day at regular intervals, you could set the maximum age to 21600 seconds (6 hours). Otherwise, consider setting the maximum age to the longest time you are willing to serve the previous version of a content file after the file has been modified.
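As a sketch, the six-hour example above corresponds to an http-file-cache element in domain.xml. The attribute names are assumed and should be verified against the domain.xml DTD for your release:

```xml
<!-- Sketch (attribute names assumed; verify against your domain.xml DTD):
     file cache tuned for content updated four times a day. -->
<http-file-cache
    globally-enabled="true"
    max-age-in-seconds="21600"/>
```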

Small/Medium File Size and File Size Limit

The cache treats small, medium, and large files differently. The contents of medium files are cached by mapping the file into virtual memory (Unix/Linux platforms). The contents of "small" files are cached by allocating heap space and reading the file into it. The contents of "large" files (larger than "medium") are not cached, although information about large files is cached.

The advantage of distinguishing between small files and medium files is to avoid wasting part of many pages of virtual memory when there are lots of small files. So the Small File Size Limit is typically a slightly lower value than the VM page size.

File Transmission

When File Transmission is enabled, the server caches open file descriptors for files in the file cache, rather than the file contents. Also, the distinction normally made between small, medium, and large files no longer applies since only the open file descriptor is being cached.

By default, File Transmission is enabled on Windows, and disabled on UNIX. On UNIX, only enable File Transmission for platforms that have the requisite native OS support: HP-UX and AIX. Don't enable it for other UNIX/Linux platforms.

Tuning HTTP Listener Settings

Change HTTP listener settings in the Admin Console under Domain > Configurations > config-name > HTTP Service > HTTP Listeners > listener-name.

Network Address

For machines with only one network interface card (NIC), set the network address to the IP address of the machine (for example, 192.18.80.23 instead of the default 0.0.0.0). If you specify an IP address other than 0.0.0.0, the server makes one less system call per connection, so specify an explicit IP address for best possible performance. If the server has multiple NICs, create a separate listener for each NIC.

Acceptor Threads

The Acceptor Threads setting specifies how many threads you want in accept mode on a listen socket at any time. It is a good practice to set this to less than or equal to the number of CPUs in your system.

In the Application Server, acceptor threads on an HTTP listener accept connections and place them on a connection queue. Session threads then pick up connections from the queue and service the requests. At the end of each request, the server creates more session threads if required.

The policy for adding new threads is based on the connection queue state:

To avoid creating too many threads when the backlog increases suddenly (such as the startup of benchmark loads), the server makes the decision whether more threads are needed only once every 16 or 32 connections, based on how many session threads already exist.
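The acceptor-thread/connection-queue/session-thread arrangement described above can be sketched with plain `java.util.concurrent` primitives. This is an illustrative model, not the server's internal classes; a real acceptor would call `ServerSocket.accept()` where this sketch enqueues integers:

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of the acceptor-thread / connection-queue / session-thread
// pattern. Names and structure are illustrative only.
public class AcceptorPattern {
    // Simulates session threads draining a connection queue; returns the
    // number of "connections" serviced.
    public static int run(int sessionThreadCount, int connections) {
        BlockingQueue<Integer> connectionQueue = new LinkedBlockingQueue<>();
        AtomicInteger serviced = new AtomicInteger();

        ExecutorService sessionThreads =
                Executors.newFixedThreadPool(sessionThreadCount);
        for (int i = 0; i < sessionThreadCount; i++) {
            sessionThreads.submit(() -> {
                try {
                    while (true) {
                        int conn = connectionQueue.take();
                        if (conn < 0) break;        // shutdown marker
                        serviced.incrementAndGet(); // "service the request"
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        try {
            // An acceptor thread would accept() and enqueue connections here.
            for (int i = 0; i < connections; i++) connectionQueue.put(i);
            for (int i = 0; i < sessionThreadCount; i++) connectionQueue.put(-1);
            sessionThreads.shutdown();
            sessionThreads.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return serviced.get();
    }

    public static void main(String[] args) {
        System.out.println(run(4, 100)); // prints 100
    }
}
```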

Blocking Enabled

The blocking-enabled parameter determines whether the listen socket and the accepted socket are put into blocking mode. Enabling blocking often improves performance.

Enable blocking with asadmin, using the flag

--blockingenabled=true

For example:

asadmin> create-http-listener --user admin --password adminadmin
   --host foo --port 7070 --address 0.0.0.0 --instance server1
   --listenerport 7272 --defaultvs server1 --servername foo.bar.com
   --family inet6 --acceptorthreads 2 --blockingenabled=true
   --securityenabled=false --enabled=false sampleListener

Migrating From Version 7

If you are migrating an existing installation from Application Server version 7.x to version 8, consult the following table to see the mapping of tunable parameters.

Table 3-3 Mapping of tunable settings from version 7 to version 8

Tunable Setting in version 7.x     Tunable Setting in version 8.1
RqThrottle                         thread-count
RqThrottleMin                      initial-thread-count
ConnQueueSize                      queue-size-in-bytes
KeepAliveThreads                   keep-alive-thread-count
KeepAliveTimeout                   timeout-in-seconds
MaxKeepAliveConnections            max-connections
KeepAliveQueryMeanTime             keep-alive-query-mean-time
KeepAliveQueryMaxSleepTime         keep-alive-query-max-sleep-time
ListenQ                            max-pending-count


ORB

The Application Server includes a high-performance, scalable CORBA Object Request Broker (ORB). The ORB is the foundation of the EJB container on the server.

Overview

Most of the functionality of the ORB is utilized when exercising Enterprise Java Beans via:

When a server instance makes a connection to another server instance's ORB, the first instance acts as a client-side ORB. SSL over IIOP uses an optimized transport that is among the fastest available, and it uses native implementations of cryptographic algorithms to deliver high performance.

It is important to remember that the ORB is not used when using local interfaces for EJBs. In this situation, all arguments are passed by reference and no object copying is involved.

Monitoring the ORB

ORB statistics are disabled by default. To gather ORB statistics, enable monitoring with this asadmin command:

set serverInstance.iiop-service.orb.system.monitoringEnabled=true
reconfig serverInstance

Connection Statistics

The following statistics are gathered on ORB connections:

Use this command to get ORB connection statistics:

asadmin get --monitor
   serverInstance.iiop-service.orb.system.orb-connection.*

Thread Pools

The following statistics are gathered on ORB thread pools:

Use this command to get ORB thread pool statistics:

asadmin get --monitor
   serverInstance.iiop-service.orb.system.orb-thread-pool.*

Tuning the ORB

Tune ORB performance by setting ORB and ORB thread pool parameters. Also, decrease response time by leveraging load balancing, multiple shared connections, the thread pool, and message fragment size. Improve scalability by load balancing between multiple ORB servers from the client, and by tuning the number of connections between the client and the server.

Tunable ORB Parameters

Tune ORB parameters using the Admin Console. The following standard parameters are available to tune on the ORB:

ORB Thread Pool Properties

The ORB thread pool contains a task queue and a pool of threads. Tasks or jobs are inserted into the task queue, and free threads pick tasks from this queue for execution. Do not size the thread pool so that the task queue is always empty; for a heavily loaded application, it is normal for the Max Pool Size to be ten times the size of the current task queue.

The Application Server uses the ORB thread pool to:

Thus, even if an application does not use the ORB for remote calls (over RMI/IIOP), size the thread pool large enough to facilitate the clean-up activity of the EJB pools and caches.

Set ORB thread pool attributes under Domain > Configurations > config-name > Thread Pools > thread-pool-ID, where thread-pool-ID is the thread pool ID selected for the ORB.
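The three pool attributes map loosely onto the constructor arguments of `java.util.concurrent.ThreadPoolExecutor`. The analogy below is ours, for illustration only, and is not the Application Server's actual thread pool implementation:

```java
import java.util.concurrent.*;

// Sketch: the three ORB thread pool tunables correspond roughly to the
// arguments of ThreadPoolExecutor. Illustrative analogy only.
public class OrbPoolAnalogy {
    public static ThreadPoolExecutor build(int steady, int max, long idleSecs,
                                           int queueCapacity) {
        return new ThreadPoolExecutor(
                steady,                     // steady-thread-pool-size
                max,                        // max-thread-pool-size
                idleSecs, TimeUnit.SECONDS, // idle-thread-timeout-in-seconds
                // the task queue; extra threads (up to max) are created only
                // once this bounded queue fills up
                new LinkedBlockingQueue<>(queueCapacity));
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = build(16, 64, 300, 256);
        System.out.println(pool.getCorePoolSize());    // 16
        System.out.println(pool.getMaximumPoolSize()); // 64
        pool.shutdown();
    }
}
```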

Non-standard ORB Properties and Functionality

The following values are specified as -D arguments when launching the client program:

Controlling connections between client and server ORB

When using the default JDK ORB on the client, a connection is established from the client ORB to the application server ORB every time an initial context is created. To pool or share these connections when they are opened from the same process, add the following to the client ORB configuration:

-Djava.naming.factory.initial=com.sun.appserv.naming.S1ASCtxFactory

Using Multiple Connections for Better Throughput

When using the Sun ONE context factory (com.sun.appserv.naming.S1ASCtxFactory), an important tunable is the number of connections the client ORB opens to the server (the default is 1). Multiple connections can produce better throughput to and from the server for network-intensive application traffic. Specify the configuration changes on the client ORB(s) by adding the following jvm-options:

-Djava.naming.factory.initial=com.sun.appserv.naming.S1ASCtxFactory

-Dcom.sun.appserv.iiop.orbconnections=[number]
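Equivalently, a client can set the same two options programmatically before creating its initial context, as in this sketch (the property values shown, such as 4 connections, are example choices):

```java
// Sketch: setting the two client-ORB options programmatically instead of
// passing them as -D flags on the java command line. The value "4" is an
// example, not a recommendation.
public class ClientOrbConfig {
    public static void main(String[] args) {
        System.setProperty("java.naming.factory.initial",
                "com.sun.appserv.naming.S1ASCtxFactory");
        // Number of connections the client ORB opens to the server (default 1).
        System.setProperty("com.sun.appserv.iiop.orbconnections", "4");
        // ... a subsequent new javax.naming.InitialContext() picks these up.
        System.out.println(
                System.getProperty("com.sun.appserv.iiop.orbconnections"));
    }
}
```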

Load Balancing

To configure RMI/IIOP for multiple application server instances in a cluster, refer to the Application Server Administration Guide chapter on RMI-IIOP Load Balancing.

When tuning the client ORB for load-balancing and connections, consider the number of connections opened on the server ORB. Start from a low number of connections and then increase it to observe any performance benefits. A connection to the server translates to an ORB thread reading actively from the connection (these threads are not pooled, but exist currently for the lifetime of the connection).

The following table lists the tunable ORB settings.

Table 3-4 Tunable ORB Settings

RMI/IIOP from application client to application server
    ORB modules: communication infrastructure, thread pool
    Server settings: steady-thread-pool-size, max-thread-pool-size, idle-thread-timeout-in-seconds

RMI/IIOP from Sun ONE (server) ORB to Application Server
    ORB modules: communication infrastructure, thread pool
    Server settings: steady-thread-pool-size, max-thread-pool-size, idle-thread-timeout-in-seconds

RMI/IIOP from a vendor ORB
    ORB modules: parts of communication infrastructure, thread pool
    Server settings: steady-thread-pool-size, max-thread-pool-size, idle-thread-timeout-in-seconds

In-process
    ORB modules: thread pool
    Server settings: steady-thread-pool-size, max-thread-pool-size, idle-thread-timeout-in-seconds

Thread Pool Sizing

After examining the number of inbound and outbound connections as explained above, tune the size of the thread pool appropriately. This can affect performance and response times significantly.

The size computation takes into account the number of client requests to be processed concurrently, the resource (number of CPUs and amount of memory) available on the machine and the response times required for processing the client requests.

Setting the size to a very small value can limit the server's ability to process requests concurrently, lengthening response times because requests sit longer in the task queue waiting for a worker thread. On the other hand, a very large number of worker threads can also be detrimental: the threads themselves consume system resources, and the added concurrency increases contention for shared structures in the EJB container, so threads take longer to acquire them, again lengthening response times.

The worker thread pool is also used for the EJB container's housekeeping activity, such as trimming the pools and caches. Account for this activity as well when determining the size. Having too many ORB worker threads is detrimental to performance, since the server must maintain all of these threads; idle threads are destroyed after the idle thread timeout period.
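A common general-purpose sizing heuristic (from general concurrency practice, not from this guide) balances CPUs, target utilization, and the wait-to-compute ratio of a typical request. The numbers in the example are hypothetical:

```java
// A common sizing heuristic (not from this guide):
//   threads ≈ CPUs * targetUtilization * (1 + waitTime/computeTime)
// Wait-heavy workloads tolerate more threads per CPU than CPU-bound ones.
public class PoolSizing {
    public static int suggestedThreads(int cpus, double utilization,
                                       double waitMs, double computeMs) {
        return (int) Math.ceil(cpus * utilization * (1.0 + waitMs / computeMs));
    }

    public static void main(String[] args) {
        // Example: 4 CPUs, 80% target utilization, and requests that wait
        // 30 ms (database, network) per 10 ms of CPU time:
        System.out.println(suggestedThreads(4, 0.8, 30, 10)); // 13
    }
}
```

Treat the result as a starting point for the iterative tuning described above, not as a final value.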

Examining IIOP Messages

It is sometimes useful to examine the IIOP messages passed by the Application Server. To make the server save IIOP messages to the server.log file, set the JVM option -Dcom.sun.CORBA.ORBDebug=giop. Use the same option on the client ORB.

The following is an example of IIOP messages saved to the server log:

[29/Aug/2002:22:41:43] INFO (27179): CORE3282: stdout: ++++++++++++++++++++++++++++++

[29/Aug/2002:22:41:43] INFO (27179): CORE3282: stdout: Message(Thread[ORB Client-side Reader, conn to 192.18.80.118:1050,5,main]):

createFromStream: type is 4 <

[29/Aug/2002:22:41:43] INFO (27179): CORE3282: stdout: MessageBase(Thread[ORB Client-side Reader, conn to 192.18.80.118:1050,5,main]): Message GIOP version: 1.2

[29/Aug/2002:22:41:43] INFO (27179): CORE3282: stdout: MessageBase(Thread[ORB Client-side Reader, conn to 192.18.80.118:1050,5,main]): ORB Max GIOP Version: 1.2

[29/Aug/2002:22:41:43] INFO (27179): CORE3282: stdout: Message(Thread[ORB Client-side Reader, conn to 192.18.80.118:1050,5,main]): createFromStream: message construction complete.

[29/Aug/2002:22:41:43] INFO (27179): CORE3282: stdout: com.sun.corba.ee.internal.iiop.MessageMediator(Thread[ORB Client-side Reader, conn to 192.18.80.118:1050,5,main]): Received message:

[29/Aug/2002:22:41:43] INFO (27179): CORE3282: stdout: ----- Input Buffer -----

[29/Aug/2002:22:41:43] INFO (27179): CORE3282: stdout: Current index: 0

[29/Aug/2002:22:41:43] INFO (27179): CORE3282: stdout: Total length : 340

[29/Aug/2002:22:41:43] INFO (27179): CORE3282: stdout: 47 49 4f 50 01 02 00 04 00 00 01 48 00 00 00 05 GIOP.......H....


Note

The flag -Dcom.sun.CORBA.ORBDebug=giop generates many debug messages in the logs. Use it only when you suspect message fragmentation.


In the sample output above, the "createFromStream" type is shown as 4. This means that the message is a fragment of a bigger message. To avoid fragmented messages, change the fragment size; messages are then sent as one unit rather than as fragments, saving the overhead of sending multiple messages and of piecing them together at the receiving end.

It might be more efficient to increase the fragment size if most messages being sent in the application turn out to be fragmented because of a low fragment size specification. On the other hand if only a few messages are fragmented, it might be more efficient to have a lower fragment size as this would mean smaller buffers would be allocated for writing messages and only the occasional message would end up getting fragmented.
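The tradeoff above reduces to simple arithmetic. The sketch below is a simplified model of fragmentation (it ignores GIOP header overhead); the names are illustrative:

```java
// Simplified model of GIOP fragmentation, for reasoning about the tradeoff
// above. Ignores per-fragment header overhead; illustrative only.
public class FragmentEstimate {
    public static int fragments(int messageBytes, int fragmentSize) {
        if (messageBytes <= fragmentSize) return 1; // sent as one unit
        return (int) Math.ceil((double) messageBytes / fragmentSize);
    }

    public static void main(String[] args) {
        System.out.println(fragments(340, 1024));  // 1 -> no fragmentation
        System.out.println(fragments(5000, 1024)); // 5 -> fragmented
    }
}
```

If most messages are near 5000 bytes, raising the fragment size above that eliminates fragmentation at the cost of larger write buffers; if only a few are, a smaller fragment size keeps buffers small.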

Improving ORB Performance with JSG

It is possible to improve ORB performance by using Java Serialization instead of standard CDR (Common Data Representation) as the mechanism to express data for transport over the network. This capability is called Java Serialization over GIOP (General Inter-ORB Protocol), or JSG.

In some cases, JSG can provide better performance throughput than CDR. The performance difference depends highly on the application. Applications whose remote objects transmit small amounts of data between clients and servers will most often perform better using JSG.

You must set this property on all servers on which you want to use JSG. Add this system property through the Admin Console, as follows:

Using JSG for Application Clients

If an application will use standalone non-web clients (application clients), and you want to use JSG, you must also set a system property for the client applications. A common way to do this is to add the property to the Java command line used to start the client application, for example:

java -Dcom.sun.CORBA.encoding.ORBEnableJavaSerialization=true \
-Dorg.omg.CORBA.ORBInitialHost=gollum \
-Dorg.omg.CORBA.ORBInitialPort=35309 \
MyClientProgram


Thread Pools

Tuning Thread Pools

Configure thread pool settings through the Admin Console:

Thread Pools (Unix)

Since threads on Unix are always operating system (OS)-scheduled, as opposed to user-scheduled, Unix users do not need to use native thread pools. Therefore, this option is not offered in a Unix user interface. However, it is possible to edit the OS-scheduled thread pools and add new thread pools, if needed, using the web-based Administration interface.


Resources

JDBC Connection Pools

For optimum performance of database-intensive applications, tune the JDBC Connection Pools managed by the Application Server. These connection pools maintain numerous live database connections that can be reused to reduce the overhead of opening and closing database connections. This section describes how to tune JDBC Connection Pools to improve performance.

J2EE applications use JDBC Resources to obtain connections that are maintained by the JDBC Connection Pool. More than one JDBC Resource is allowed to refer to the same JDBC Connection Pool. In such a case, the physical connection pool is shared by all the resources.

JDBC Connection Pools can be defined and configured by using the web-based Admin Console. Though each connection pool is instantiated at server start-up, the pool is only populated with physical connections when accessed for the first time.
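The lazy-population behavior described above can be sketched as follows. This is a deliberately minimal model with a generic connection type, not the server's pool implementation; real pools also handle validation, idle timeouts, resizing, and blocking waits:

```java
import java.util.concurrent.*;
import java.util.function.Supplier;

// Minimal sketch of a lazily populated connection pool: nothing is created
// at construction; physical connections appear on first use, up to a cap.
// Illustrative only -- real pools add validation, idling, resizing, etc.
public class LazyPool<C> {
    private final BlockingQueue<C> idle = new LinkedBlockingQueue<>();
    private final Supplier<C> factory;
    private final int maxSize;
    private int created = 0;

    public LazyPool(Supplier<C> factory, int maxSize) {
        this.factory = factory; // opens a physical connection
        this.maxSize = maxSize; // analogous to max-pool-size
    }

    public synchronized C borrow() {
        C c = idle.poll();
        if (c != null) return c;               // reuse an idle connection
        if (created < maxSize) { created++; return factory.get(); }
        throw new IllegalStateException("pool exhausted");
    }

    public void release(C c) { idle.offer(c); }

    public synchronized int physicalConnections() { return created; }

    public static void main(String[] args) {
        LazyPool<Object> pool = new LazyPool<>(Object::new, 8);
        System.out.println(pool.physicalConnections()); // 0 -- nothing yet
        Object c = pool.borrow();    // first access creates a connection
        pool.release(c);
        System.out.println(pool.physicalConnections()); // 1
    }
}
```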

Monitoring JDBC Connection Pools

Statistics-gathering is enabled by default for JDBC Connection Pools. The following attributes are monitored:

To get the statistics, use these commands:

asadmin get --monitor=true
   serverInstance.resources.jdbc-connection-pool.*
asadmin get --monitor=true
   serverInstance.resources.jdbc-connection-pool.poolName.*

Tuning JDBC Connection Pools

The following table describes the attributes of a JDBC connection pool.

Table 3-5 JDBC Connection Pool Attributes

name
    Unique name of the pool definition.

datasource-classname
    Name of the vendor-supplied JDBC datasource resource manager. A datasource class that supports XA or global transactions implements the javax.sql.XADataSource interface. A non-XA (local transactions only) datasource implements the javax.sql.DataSource interface.

res-type
    The datasource implementation class can implement one or both of the javax.sql.DataSource and javax.sql.XADataSource interfaces. Specify this optional attribute to disambiguate when a datasource class implements both interfaces. An error is produced when this attribute has a legal value and the indicated interface is not implemented by the datasource class. This attribute has no default value.

steady-pool-size
    Minimum and initial number of connections created.

max-pool-size
    Maximum number of connections that can be created.

max-wait-time-in-millis
    Amount of time the caller waits before getting a connection timeout. The default is 60 seconds. A value of 0 forces the caller to wait indefinitely.

pool-resize-quantity
    Number of connections to be removed when the idle-timeout-in-seconds timer expires. Connections that have idled for longer than the timeout are candidates for removal. When the pool size reaches steady-pool-size, connection removal stops.

idle-timeout-in-seconds
    Maximum time in seconds that a connection can remain idle in the pool. After this time, the pool implementation can close the connection. Note that this does not control connection timeouts enforced on the database server side.
    Keep this timeout shorter than the database server-side timeout (if such timeouts are configured on the specific vendor's database) to prevent the accumulation of unusable connections in the Application Server.

transaction-isolation-level
    Specifies the transaction isolation level on the pooled database connections. This setting is optional and has no default. If left unspecified, the pool operates with the default isolation level provided by the JDBC driver.
    Set a desired isolation level using one of the standard transaction isolation levels: read-uncommitted, read-committed, repeatable-read, or serializable.

is-isolation-level-guaranteed
    Applicable only when a particular isolation level is specified for transaction-isolation-level. The default value is true.
    When true, every connection obtained from the pool is guaranteed to have the isolation set to the desired value.
    This setting can have some performance impact on some JDBC drivers. Set it to false only when you are certain that the application does not change the isolation level before returning the connection.

is-connection-validation-required
    If true, connections are validated (checked to find out if they are usable) before being given out to the application. The default is false. The connection-validation-type attribute specifies the type of validation to be performed:
    1) using connection.getAutoCommit()
    2) using connection.getMetaData()
    3) performing a query on a user-specified table (see validation-table-name)
    The possible values are auto-commit, meta-data, or table.
    The validation-table-name attribute specifies the table to query to validate a connection. This attribute is mandatory if connection-validation-type is set to table. Verification by accessing a user-specified table can become necessary if the database driver caches calls to setAutoCommit() and getMetaData().

fail-all-connections
    Indicates whether all connections in the pool must be closed if a single validation check fails. The default is false. One attempt is made to re-establish failed connections.

General Tips

Improve performance of JDBC connection pools by following these tips:

Sizing Connection Pools

When sizing connection pools, keep the following pros and cons in mind:

Table 3-6 Connection Pool Sizing Pros and Cons

Small connection pool
  Pros:
    • faster access on the connection table.
  Cons:
    • may not have enough connections to satisfy requests.
    • most requests will spend more time in the queue.

Large connection pool
  Pros:
    • more connections to fulfill requests.
    • requests spend less (or no) time in the queue.
  Cons:
    • slower access on the connection table.

Transaction isolation levels

The transaction isolation levels listed from best performance to worst are:

  1. READ_UNCOMMITTED
  2. READ_COMMITTED
  3. REPEATABLE_READ
  4. SERIALIZABLE

Choose the isolation level that provides the best performance yet still meets the concurrency and consistency needs of the application.

Set the transaction isolation level with the Admin Console under Domain > Resources > JDBC > Connection Pools > PoolName.
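Application code sets the same levels through `java.sql.Connection`, whose standard constants correspond to the four levels in the order listed above:

```java
import java.sql.Connection;

// The four standard JDBC isolation levels, in the performance order listed
// above. The java.sql.Connection constants encode increasing strictness.
public class IsolationLevels {
    public static void main(String[] args) {
        System.out.println(Connection.TRANSACTION_READ_UNCOMMITTED); // 1
        System.out.println(Connection.TRANSACTION_READ_COMMITTED);   // 2
        System.out.println(Connection.TRANSACTION_REPEATABLE_READ);  // 4
        System.out.println(Connection.TRANSACTION_SERIALIZABLE);     // 8
        // On an open connection conn, application code would call e.g.:
        // conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
    }
}
```

Note that setting the level per connection in application code interacts with the pool's is-isolation-level-guaranteed attribute described earlier.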

Database drivers

It is better to access database drivers using the classpath declared in the JVM classpath (Configurations > config-name > JVM Settings) rather than putting them in <instance_dir>/lib.

Classes in <instance_dir>/lib are loaded by the common class loader, but classes accessed using the JVM classpath are loaded by the system class loader. The system class loader is preferable for two reasons:

Connector Connection Pools

Transaction Support

You can override the transaction support specified for each connector connection pool. It is possible to achieve better performance by changing the value of transaction support from the default.

For example, consider a case where an EIS has a higher-performance implementation of a LocalTransaction supporting connection factory than one supporting global transactions. If a resource from this EIS needs to be mixed with a resource coming from another resource manager, the default behavior forces the usage of XA transactions leading to degradation in performance. However, by changing the EIS's connector-connection-pool to have LocalTransaction transaction support and leveraging the Last Agent Optimization feature previously described, the administrator could leverage the better-performing EIS LocalTransaction implementation.

In the Admin Console, specify transaction support when you create a new connector connection pool, and when you edit a connector connection pool at Domain > Resources > Connectors > Connector Connection Pools.

You can also set transaction support using asadmin. For example, the following asadmin command creates a connector connection pool "TESTPOOL" with transaction support set to "LocalTransaction":

asadmin> create-connector-connection-pool --raname jdbcra
   --connectiondefinition javax.sql.DataSource
   --transactionsupport LocalTransaction TESTPOOL




Copyright 2004 Sun Microsystems, Inc. All rights reserved.