iPlanet Application Server Performance and Tuning Guide



Chapter 4   Tuning iPlanet Application Server


This chapter provides a comprehensive guide to tuning iPlanet™ Application Server for maximum performance.



Optimizing Performance of Server Processes

The Executive Server (KXS), the Java engine (KJS), the C++ engine (KCS), and the RMI/IIOP bridge process (CXS) form the core of iPlanet Application Server. This section discusses how to tune these processes for maximum performance and scalability.



Tuning iPlanet Application Server Processes

The Executive Server (KXS), the Java engine (KJS), and the C++ engine (KCS) process requests asynchronously by employing a pool of worker threads. These threads handle user requests for application components. When iPlanet Application Server receives a request, it assigns the request to a free thread. The thread manages the system needs of the request. For example, if the request needs a system resource that is currently busy, the thread waits until that resource is free before allowing the request to use it.

You can adjust the number of request threads globally, for all processes used by that instance of iPlanet Application Server, or individually for each process.



Note The process-level setting overrides the server-level setting. You can tune these settings using the iPlanet Application Server Administration Tool (iASAT).





Optimizing KXS Performance

The Web Connector Plug-in routes user requests for iPlanet Application Server applications to the Executive Server process (KXS). These requests are logged to the request queue of the Executive Server process.

You can perform the following tasks to optimize KXS performance:

  • Control the maximum number of threads the Web Connector Plug-in uses to process requests. This prevents the request queue from receiving more requests than it can process. The number of KXS threads is set to 32 by default and can be increased up to 128; a setting of 64 threads is sufficient for most installations. If you increase the thread count, bind KXS to at least one processor to avoid time wasted by threads contending for mutex locks.

  • Set the maximum number of requests that are logged to the request queue to control the flow of requests. The maximum number is called the "high watermark".

  • Set the number of requests in the queue at which logging will resume. This number is called the "low watermark".

  • Bind KXS to a single processor or a processor set. Do this only if requests queue up at KXS during load testing. Do not do this for KJS, because the JDK in iPlanet Application Server is optimized for multiple processors, and binding KJS to a single processor yields no performance benefit. If binding KXS to a single processor does not improve its performance (that is, CPU utilization remains high on the KXS process), create a processor set of two processors and bind KXS to that set.

When a server process, such as the Executive Server (KXS), Java Server (KJS), C++ Server (KCS), or CORBA Executive Server (CXS), fails, the Administrative Server restarts it. You can set the restart option to increase or decrease the number of times a process is restarted. Fault tolerance and application availability are increased when all processes are running smoothly.


Optimizing KJS Performance

When you install iPlanet Application Server, the number of KJS threads is set to 32. This can be increased up to 48; a setting of 48 KJS threads is optimal.


Adjusting the Number of Request Threads

By default, the thread pool in each process is populated with 8 threads, with a maximum of 32. You can specify the minimum and maximum number of threads that are reserved for requests from applications; the pool is dynamically adjusted between these two values. The minimum thread value holds at least that many threads in reserve for application requests. That number is increased up to the maximum thread value that you specify.

Increasing the number of threads available to a process allows the process to respond to more application requests simultaneously. You can add and adjust threads for each process, or you can define the number of threads for all processes under a server, at the server level.

The optimal settings for these parameters vary with the application. For example, if a request involves a significant amount of database processing, and the database server hardware is lightly loaded and can handle increased concurrency, it is advisable to increase the pool size so that a greater number of requests reach the database, improving throughput. In general, a larger thread pool benefits performance up to about 32-48 threads in both the KXS and KJS processes. We recommend that you fix the pool size by setting the minimum and maximum to the same number: start at 32 and, if necessary, experiment with 48 threads. You can specify all the required parameters at once using iASAT.
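As a rough analogy in plain Java (not the iPlanet API; the workload is a stand-in), fixing the minimum and maximum pool size means the worker count never shrinks or grows under load:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class FixedPoolSketch {
    // Serve `requests` tasks with a fixed-size pool of `threads` workers
    // (min == max, as recommended above) and report how many completed.
    static int serve(int threads, int requests) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < requests; i++) {
            pool.execute(done::incrementAndGet); // stand-in for one application request
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return done.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // 32 threads, matching the recommended initial fixed pool size.
        System.out.println(serve(32, 100)); // prints 100
    }
}
```

A fixed pool avoids the cost of repeatedly creating and destroying threads as load fluctuates, which is the rationale behind setting the minimum and maximum to the same value.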

By default, each process uses the threads assigned to iPlanet Application Server. For example, if iPlanet Application Server uses a minimum of 8 threads and a maximum of 64 threads, each individual process uses a minimum of 8 threads and a maximum of 64 threads.


Specifying Maximum Server and Engine Shutdown Time

You can set the maximum shutdown time for iPlanet Application Server and for individual engine processes. For example, if you set the engine shutdown time to 60 seconds, application tasks being processed are allowed 60 seconds for completion. No new requests are accepted after this period has elapsed. Specifying a shutdown value avoids a "hard" shutdown that returns errors to the client. You can set these values using the iPlanet Application Server Administration Tool.

Maximum Server Shutdown Time. The Maximum Server Shutdown Time is the maximum time taken to shut down iPlanet Application Server. After this time, any engines that are still running are killed. The server typically shuts down quickly unless it is heavily loaded.

Maximum Engine Shutdown Time. The Maximum Engine Shutdown Time is the maximum time that iPlanet Application Server waits for an engine to shut down. After this time, the engine is killed and the next engine(s) are shut down.

Switch Off All Logging

To reduce the strain on the system from continuous input/output operations, switch off all application logging that is written to the KJS logs. This has a marked effect on performance.

Set MaxBackups = 1

In a typical application server architecture, a single Sync Backup server reduces the amount of intra-cluster communication. For a cluster, set MaxBackups (the maximum number of backups) to 1. Setting it to 0 means there are no session backups if the primary becomes unavailable. Setting it to 2 increases intra-cluster communication, increasing the load on the server. Therefore, 1 is the optimal setting for this parameter.


Performance Tuning RMI/IIOP

For deployment environments in which you expect the RMI/IIOP path to support more than a handful of concurrent users, you should experiment with the tuning guidelines described in this section. The default configuration of the JVM and the underlying OS do not yield optimal performance and capacity when you are using RMI/IIOP.

This section covers the following topics:


Recognizing Performance Issues

Before exercising your RMI/IIOP client application under load, verify that basic mechanical tests complete successfully.

As you begin exercising the client application under load, you may experience the following exceptions on the RMI/IIOP client:

org.omg.CORBA.COMM_FAILURE

java.lang.OutOfMemoryError

java.rmi.UnmarshalException

If you've verified that the basic mechanics of your application are working properly, and you experience any one of these exceptions while load testing your application, see the next section to learn how to tune the RMI/IIOP environment.


Basic Tuning Approaches

You should experiment with the following tuning recommendations in order to find the best balance for your specific environment.

Solaris File Descriptor Setting. On Solaris, setting the maximum number of open files property using ulimit has the biggest impact on your efforts to support the maximum number of RMI/IIOP clients. The default value for this property is 64 or 1024 depending on whether you are running Solaris 2.6 or Solaris 8. To increase the hard limit, add the following command to /etc/system and reboot once:

set rlim_fd_max = 8192

You can verify this hard limit by using the following command:

ulimit -a -H

Once the above hard limit is set, you can increase the value of this property explicitly (up to this limit) using the following command:

ulimit -n 8192

You can verify this limit by using the following command:

ulimit -a

For example, with the default ulimit of 64, a simple test driver can support only 25 concurrent clients, but with ulimit set to 8192, the same test driver can support 120 concurrent clients. The test driver spawns multiple threads, each of which performs a JNDI lookup and repeatedly calls the same business method with a think (delay) time of 500ms between business method calls, exchanging data of about 100KB.
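A driver of the kind described can be sketched in plain Java. The business method here is a local stub standing in for the remote RMI/IIOP call, and the client count, repetition count, and think time are illustrative parameters:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class LoadDriverSketch {
    static final AtomicInteger calls = new AtomicInteger();

    // Stub standing in for the remote business method; a real driver would
    // perform a JNDI lookup and invoke an EJB over RMI/IIOP.
    static void businessMethod() {
        calls.incrementAndGet();
    }

    // Spawn `clients` threads; each makes `reps` calls with a think
    // (delay) time between calls, as in the driver described above.
    static int run(int clients, int reps, long thinkMillis) throws InterruptedException {
        calls.set(0);
        Thread[] threads = new Thread[clients];
        for (int i = 0; i < clients; i++) {
            threads[i] = new Thread(() -> {
                for (int r = 0; r < reps; r++) {
                    businessMethod();
                    try {
                        Thread.sleep(thinkMillis); // simulated think time
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        return calls.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run(5, 3, 10)); // 5 clients x 3 calls each
    }
}
```

Each concurrent client holds at least one socket (and thus one file descriptor) open to the Bridge, which is why the ulimit setting above caps the number of clients a single system can sustain.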

These settings apply to both RMI/IIOP clients (on Solaris) and to the RMI/IIOP Bridge installed on a Solaris system. Refer to Solaris documentation for more information on setting the file descriptor limits.

Java Heap Settings. Apart from tuning file descriptor capacities, you may want to experiment with different heap settings for both the client and Bridge JVMs. For more information, see Chapter 5 "Tuning the Java Runtime System".


Enhancing Scalability

Beyond tuning the capacity of a single Bridge process and client systems, you can improve the scalability of the RMI/IIOP environment by using multiple RMI/IIOP Bridge processes. You may find that configuring multiple Bridge processes on the same application server instance improves the scalability of your application deployment. In certain cases, you may want to use a number of application server instances each configured with one or more Bridge processes.

In configurations where more than one Bridge process is active, you can partition the client load by either statically mapping sets of clients to different Bridges or by implementing your own logic on the client side to load balance against the known Bridge processes.


Firewall Configuration for RMI/IIOP

If the RMI/IIOP client communicates through a firewall with iPlanet Application Server, you must enable access from the client system to the IIOP port used by the RMI/IIOP Bridge processes. Since the client's port numbers are assigned dynamically, you must open a range of source ports and a single destination port to allow RMI/IIOP traffic to flow from a client system through a firewall to an instance of the application server.

A snoop-based trace of the IIOP traffic between two systems during a single execution of the Converter sample application is given below. The host swatch is the RMI/IIOP client, while the host mamba is the destination (application server) system. The port number assigned to the RMI/IIOP Bridge process is 9010. Note that the two dynamically assigned ports (33046 and 33048) are consumed on the RMI/IIOP client, while only port 9010 is used to communicate with the Bridge process:

swatch -> mamba.red.iplanet.com TCP D=9010 S=33046 Syn Seq=140303570 Len=0 Win=24820

Options=<nop,nop,sackOK,mss 1460>

mamba.red.iplanet.com -> swatch TCP D=33046 S=9010 Syn Ack=140303571 Seq=1229729413 Len=0 Win=8760

Options=<mss 1460>

swatch -> mamba.red.iplanet.com TCP D=9010 S=33046 Ack=1229729414 Seq=140303571 Len=0 Win=24820

swatch -> mamba.red.iplanet.com TCP D=9010 S=33046 Ack=1229729414 Seq=140303571 Len=236 Win=24820

mamba.red.iplanet.com -> swatch TCP D=33046 S=9010 Ack=140303807 Seq=1229729414 Len=168 Win=8524

swatch -> mamba.red.iplanet.com TCP D=9010 S=33046 Ack=1229729582 Seq=140303807 Len=0 Win=24820

swatch -> mamba.red.iplanet.com TCP D=9010 S=33048 Syn Seq=140990388 Len=0 Win=24820

Options=<nop,nop,sackOK,mss 1460>

mamba.red.iplanet.com -> swatch TCP D=33048 S=9010 Syn Ack=140990389 Seq=1229731472 Len=0 Win=8760

Options=<mss 1460>

swatch -> mamba.red.iplanet.com TCP D=9010 S=33048 Ack=1229731473 Seq=140990389 Len=0 Win=24820

swatch -> mamba.red.iplanet.com TCP D=9010 S=33048 Ack=1229731473 Seq=140990389 Len=285 Win=24820

mamba.red.iplanet.com -> swatch TCP D=33048 S=9010 Ack=140990674 Seq=1229731473 Len=184 Win=8475

swatch -> mamba.red.iplanet.com TCP D=9010 S=33048 Ack=1229731657 Seq=140990674 Len=0 Win=24820

swatch -> mamba.red.iplanet.com TCP D=9010 S=33048 Ack=1229731657 Seq=140990674 Len=132 Win=24820

mamba.red.iplanet.com -> swatch TCP D=33048 S=9010 Ack=140990806 Seq=1229731657 Len=25 Win=8343



Comparing Distributed and Lite HTTP Sessions

Distributed Sessions offer improved availability by replicating session data on a backup iPlanet Application Server node. To achieve this, objects placed into a Distributed Session must implement the java.lang.Serializable interface.

However, this generates additional network traffic if sessions are large and written to often. Due to queuing effects, JSPs and servlets that access distributed sessions also suffer a performance loss, whose magnitude depends on the amount and size of HttpSession usage. Lite Sessions trade off availability for better performance: session objects are cached locally in the KJS process, which serves as the home for all requests in that session.

Lite Sessions also have the advantage when Java objects are stored in HttpSession. Since Lite session objects are cached in each KJS process, there is no need to serialize Java objects over the network. So if a Java object is stored in a Lite HttpSession, there is no need for that object to implement the java.lang.Serializable interface.
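The Serializable requirement for distributed sessions can be illustrated with a plain serialization round trip, which is essentially what the replication service must perform to copy session data to the backup node (the attribute class here is hypothetical):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// A session attribute must implement Serializable to live in a Distributed Session.
public class SessionAttribute implements Serializable {
    private static final long serialVersionUID = 1L;
    final String cartId;
    final int itemCount;

    SessionAttribute(String cartId, int itemCount) {
        this.cartId = cartId;
        this.itemCount = itemCount;
    }

    // Round-trip the object through byte form, as replication to a backup node requires.
    static SessionAttribute roundTrip(SessionAttribute a) throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(a);
        }
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
            return (SessionAttribute) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        SessionAttribute copy = roundTrip(new SessionAttribute("cart-42", 3));
        System.out.println(copy.cartId + ":" + copy.itemCount); // cart-42:3
    }
}
```

A non-Serializable attribute fails this round trip with a NotSerializableException, which is why Lite sessions (which never leave the KJS process) lift the requirement.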

You can specify the desired session type in ias-web.xml, by editing the <session-info>/<impl> property.
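Assuming the element names from the property path above, the fragment might look like this (the value names `lite` and `distributed` are assumptions; check the deployment descriptor reference for the exact vocabulary):

```xml
<!-- ias-web.xml fragment; structure follows the <session-info>/<impl>
     property path named above, value names are assumed -->
<session-info>
  <impl>lite</impl>  <!-- or a distributed implementation for highly available sessions -->
</session-info>
```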


Note Gains of 8-40% have been reported by switching to Lite sessions.





Configuring a Single Backup for Highly Available Sessions



This only applies if you are using highly available sessions (dsync-distributed) and have configured a primary and secondary for HttpSession failover. The registry setting MaxBackups for the cluster should be set to 1. Setting it to 0 means there are no session backups if the primary becomes unavailable. Setting it to 2 increases intra-cluster communication. A setting of 1 is optimal for this parameter (and is the default).

You can set this parameter in the following iPlanet Registry key:

Software\iPlanet\Application Server\6.0\Clusters\<cluster-name>\MaxBackups



Configuring Dsync Session Management Threads



Dsync is a distributed state synchronization service. The designated Dsync primary and backup maintain an in-memory database of session nodes. As HTTP sessions time out or are invalidated, they must be promptly removed from this in-memory database. Increasing the number of dedicated threads allows this removal and bookkeeping to proceed with greater concurrency.

Set the following property in iPlanet Registry. This can relieve transient memory growth and make session node invalidation more efficient. The following sets the KXS cleaner thread count to 20:

Software\iPlanet\Application Server\6.0\CCSO\ENG\0\SyncTimeoutThreadCount=20

To accomplish the same for KJS engine #1, modify the following key in iPlanet Registry:

Software\iPlanet\Application Server\6.0\CCSO\ENG\1\SyncTimeoutThreadCount=20

There is another setting to configure the interval at which these threads are activated:

Software\iPlanet\Application Server\6.0\CCSO\ENG\<n>\SyncTimerInterval

The default value of 60 is adequate for most deployments.

Note While this does not improve performance directly, keeping the session object store small always helps. Prompt, efficient cleanup also ensures that the KXS and KJS processes do not leak memory.



We recommend that you not rely on HttpSession time-outs to clean up sessions. Use the HttpSession.invalidate() method to clean up; where that is not possible, set the default HttpSession time-out as low as possible for the deployment environment.



Load Balancing Options





Load-Balancing Cluster Configuration

In the iPlanet Application Server environment, a cluster is a collection of servers that share responsibility for saving state and session information through the Data Synchronization Service (Dsync).

Dsync, therefore, is the shared resource that constrains the size of any given cluster. As a general rule, each cluster should run with no more than 4 instances of iPlanet Application Server. If more servers are required, then use sticky load balancing from the web server tier to multiple clusters. If state and session information is not stored in Dsync, there is no limit on the number of servers that can be run in parallel. Use the iPlanet Application Server Sizing Tool to find an optimal configuration.

We can explore two scenarios for load-balancing cluster configuration:

Scenario 1: Two iPlanet Application Server clusters sharing a single LDAP configuration tree.

In this scenario, all iPlanet Application Servers share the same LDAP configuration tree. To support sticky load balancing, it is not necessary to turn on the sticky option on the router.

The following figure illustrates this configuration. In the figure, incoming requests are represented by blue arrows and configuration LDAP access is represented by red arrows.

Figure 4-1    Two application server clusters sharing a single LDAP tree


Recommended Usage:

This configuration is useful when each application only exists in a single cluster. Benefits of such a scenario include isolation of applications, and simplicity of the web tier.

Configuration:

During installation of iPlanet Application Server, specify the same LDAP server and the same configuration root for all iPlanet Application Servers in all clusters.

Scenario 2: iPlanet Application Server clusters assigned to different LDAP configuration trees.

In this scenario, each iPlanet Application Server cluster has its own branch in the configuration tree in LDAP. Each iPlanet Application Server cluster has at least one iPlanet Web Server dedicated to it. Because an iPlanet Web Server knows about only one iPlanet Application Server cluster, it always sends requests to the iPlanet Application Servers in that cluster. The router is responsible for distributing the load among the iPlanet Web Servers.

To enable support for sticky load balancing, turn on the router's sticky option so that subsequent requests go to the same iPlanet Web Server.

The following figure illustrates this configuration. In the figure, incoming requests are represented by blue arrows and user LDAP access is represented by red arrows.

Figure 4-2    Two application server clusters assigned to separate LDAP trees


Recommended Usage:

This configuration is useful when applications require cross-cluster deployment. Benefits of such a scenario include the ability to use one cluster to promote new versions of applications, and to support applications that require more processing power than a single cluster can provide.


Broadcasting and Updating Information

For load balancing to be effective, each server involved must have the most current information about all the other servers. This means that information about the factors that affect load balancing must be broadcast to all the iPlanet Application Server machines, and every iPlanet Application Server machine must monitor and update this information to make load-balancing decisions. Broadcasting information too often results in a high level of network traffic and can slow down response time. However, if load-balancing information is not calculated and updated frequently, application components risk being poorly balanced because the information iPlanet Application Server uses to make load-balancing decisions is outdated.

When making decisions about load balancing, you face two major dilemmas:

  • How frequently should an iPlanet Application Server instance update its load-balancing information?

  • How frequently should every iPlanet Application Server installation broadcast its load-balancing information?

Update Interval. A minimum value of 5 seconds and a maximum value of 10 seconds is appropriate in most cases. In general, set the Update Interval for each server to twice the response time, under stable conditions, of the most frequently used application component. For example, on a system where the most frequently used application component returns requests in 5 seconds, set the update interval to 10 seconds. A more frequent update rate causes the server to do more work and can even alter load-balancing characteristics. Use caution with this calculation: if the response time of a heavily used application component is only 1.5 seconds, do not set the Update Interval to 3 seconds.






Broadcast Interval. As mentioned earlier, broadcasting load-balancing information too frequently not only increases network traffic, it also increases the workload of your iPlanet Application Servers as they post and gather the information. In general, set the Broadcast Interval for a server to twice the value of its Update Interval.
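One way to read the two rules of thumb above as code (a sketch; the 5-10 second clamp comes from the Update Interval guidance, and the rounding choice is an assumption):

```java
public class IntervalRule {
    // Update interval: twice the stable response time of the most frequently
    // used component, kept within the 5-10 second range recommended above.
    static int updateInterval(double responseSeconds) {
        int twice = (int) Math.round(2 * responseSeconds);
        return Math.max(5, Math.min(10, twice));
    }

    // Broadcast interval: twice the update interval.
    static int broadcastInterval(double responseSeconds) {
        return 2 * updateInterval(responseSeconds);
    }

    public static void main(String[] args) {
        System.out.println(updateInterval(5.0));    // 10, as in the example above
        System.out.println(updateInterval(1.5));    // 5: clamped, not 3
        System.out.println(broadcastInterval(5.0)); // 20
    }
}
```

The clamp encodes the caution in the text: a 1.5-second component would naively yield a 3-second interval, which is below the recommended minimum.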

Set the Update Interval and the Broadcast Interval criteria using the Load Balancing tool in iPlanet Application Server Administration Tool.


Monitoring Load-Balancing Information

When you set load-balancing criteria, be patient about the fine-tuning process. Determining the best combination of load balancing criteria takes careful monitoring of your iPlanet Application Server configuration over a period of time, during which you must gather statistics about peak load, your mix of request types, response time averages, bottlenecks, and so on. There is no single load balancing solution for all iPlanet Application Server users, since every system is deployed with different parameters and criteria. As with any aspect of iPlanet Application Server deployment, only you can determine over time the best set of criteria for improving performance of the iPlanet Application Server system deployed at your site.

For more information about load balancing and using iASAT to set load-balancing criteria, see "Balancing User Request Loads" in iPlanet Application Server Administrator's Guide.


Recommended Load-Balancing Configuration for Clusters

During iPlanet Application Server installation, specify the same LDAP server for all iPlanet Application Servers in all clusters. Specify the same configuration root for all iPlanet Application Servers in one cluster, and a different configuration root for each cluster.


Optimizing Session Size for Clusters

Session size has by far the largest effect on the performance of an iPlanet Application Server cluster. After observing large installations of iPlanet Application Server, we have determined that for maximum performance, the session size should not exceed 4 KB. With larger sessions the system continues to work, but with degraded performance. The main reason for the degradation is the constant communication between the primary and the hot backups to synchronize session data.

Employ one of the following techniques to improve session performance:

  • Store only the most important elements in the session.

    The application architect should determine which data is important and store only that in the session. Data that need not be distributed should be kept out of the session.

  • Use sticky load balancing for sessions

    Sticky load balancing and session distribution are two separate but linked variables in the same equation, and both can be enabled at the same time. With sticky load balancing, a single client is always directed to the same KJS. This allows data to be stored in JVM memory rather than in the session; only a key (for example, into a hashtable or an array) needs to be stored in the session. Because the amount of data in the session is significantly reduced, performance improves.

    In this configuration, if the iPlanet Application Server servicing a client request becomes unavailable, the request is routed to another iPlanet Application Server instance in the cluster. Since the key is available in the session, the data can be recreated and stored in memory. The request then sticks to the new KJS.

    This option is useful if data in session has been accessed and stored from a secondary source such as an LDAP server.

  • Use a separate data store to serialize large session data.

    This concept introduces a new variable into the system: a database. The idea is to store a large portion of the session in a database and keep only the primary key in the session. The session is therefore smaller, which improves performance. This involves careful planning of the database schema and proper indexing of the lookups involved to speed up database access.
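The key-in-session technique from the second bullet above can be sketched in plain Java. Maps stand in for the HttpSession and the JVM-local cache, and all names are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

public class KeyInSessionSketch {
    // JVM-local cache in the KJS process; with sticky load balancing the same
    // client always reaches this JVM, so the bulky data need not live in the session.
    static final Map<String, Map<String, String>> localCache = new HashMap<>();

    // The session stores only a small key (a plain map stands in for HttpSession).
    static void storeProfile(Map<String, Object> session, String userId,
                             Map<String, String> profile) {
        localCache.put(userId, profile);   // bulky data stays in JVM memory
        session.put("profileKey", userId); // only the key travels with the session
    }

    // After failover to another KJS, the cache misses and the data is rebuilt
    // from the secondary source (e.g. an LDAP server) using the key.
    static Map<String, String> loadProfile(Map<String, Object> session) {
        String key = (String) session.get("profileKey");
        return localCache.computeIfAbsent(key, KeyInSessionSketch::fetchFromBackend);
    }

    static Map<String, String> fetchFromBackend(String userId) {
        Map<String, String> profile = new HashMap<>();
        profile.put("user", userId); // stand-in for an LDAP lookup
        return profile;
    }

    public static void main(String[] args) {
        Map<String, Object> session = new HashMap<>();
        Map<String, String> profile = new HashMap<>();
        profile.put("user", "jdoe");
        storeProfile(session, "jdoe", profile);
        System.out.println(loadProfile(session).get("user")); // jdoe
    }
}
```

The session payload shrinks to a single string key, which is what makes this pattern effective when Dsync must replicate every session write.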


Load Balancing Individual JSPs

In iPlanet Application Server, JSPs can be load balanced individually. This is done by assigning a GUID to a JSP, similar to how GUIDs are assigned to servlets, in the XML descriptor. (See section on Registered JSPs). By assigning a GUID to a JSP, it becomes possible to load-balance JSPs just as servlets would, through iPlanet Application Server Administration Tool.


Using Sticky Session Load Balancing

The best web performance is achieved when the servlets in a web application are configured for sticky load balancing. In this setting, the application server or the Web Connector Plug-in load balances user sessions. The first request in a session is load balanced to the best candidate KJS process or server; from then on, the same user's subsequent requests are sent to the same process. This allows the attached KJS process to cache the session object locally and offer better performance. Sticky load balancing can be used along with Distributed Sessions.

A servlet or JSP can be configured to be sticky, by setting the <servlet>/<servlet-info>/<sticky> property to true in ias-web.xml. The deployment and packaging tools automatically set sticky to true.
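Based on the property path given above, a hand-edited fragment might look like this (surrounding elements abbreviated; verify against the ias-web.xml reference):

```xml
<!-- ias-web.xml fragment; structure follows the
     <servlet>/<servlet-info>/<sticky> property path named above -->
<servlet>
  <servlet-info>
    <sticky>true</sticky>
  </servlet-info>
</servlet>
```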


Note Performance improvements of 10% or better have been reported in some tests.




Simplify Session Data

It is better not to serialize an object before storing it in the session, for the following reasons:

  • The iPlanet Application Server distributed session has been tuned to work with simple data elements. Large serialized objects are clumsy and have a marked effect on performance. Store the data as separate simple data elements for maximum utilization of the session.

  • From a Java coding perspective, serialization and deserialization are expensive operations that should be avoided.



Configuring Database Connection Pool



iPlanet Application Server offers a connection pooling feature, which multiplexes a few database connections among many threads. Connections are established on first use but are not closed; the already-open connections are reused as long as the application server process is alive.

A connection pool is created for each unique combination of JDBC datasource and user. For the same datasource, multiple pools may be created if the application uses the DataSource.getConnection(user, password) style of obtaining connections, where the user and password change between invocations.
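The pool-per-(datasource, user) keying can be sketched as follows; the pool objects here are stand-in strings rather than real connection pools, and the names are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

public class PoolKeySketch {
    // One pool per unique (datasource, user) combination, as described above.
    static final Map<String, String> pools = new HashMap<>();

    static String poolFor(String datasource, String user) {
        // Reuse an existing pool for this combination, or create one.
        return pools.computeIfAbsent(datasource + "/" + user,
                key -> "pool:" + key); // stand-in for constructing a real pool
    }

    public static void main(String[] args) {
        // Same datasource, different users -> distinct pools.
        System.out.println(poolFor("ds1", "alice").equals(poolFor("ds1", "alice"))); // true: reused
        System.out.println(poolFor("ds1", "alice").equals(poolFor("ds1", "bob")));   // false: separate pool
    }
}
```

This is why calling getConnection with many distinct user/password pairs multiplies the number of pools, and why the guidelines below advise a very low pool configuration in that case.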

Make sure the size of this pool is slightly greater than the number of worker threads configured in KJS. Set the CacheInitSlots and CacheMaxConn properties in the registry for the desired database.

For example, if you are using the bundled native Oracle JDBC Drivers in iPlanet Application Server, modify the properties under the following key in iPlanet Registry.

Software\iPlanet\Application Server\6.0\CCS0\DAE2\ORACLE_OCI


Note The number of connections in the pool should equal the number of worker threads in the KJS process if you expect all threads to be concurrently processing user requests that need database access. The performance benefits are obvious in a database-access-intensive application.





Guidelines for Configuring Connection Pool

Connection Pools in iPlanet Application Server can be configured at various levels:

  • In the datasource XML file.

  • By modifying the registry.

  • Dynamic configuration through the Administration tool.

    For more information on the configurable connection pool parameters, see iPlanet Application Server Administrator's Guide.

Follow these common guidelines when configuring a connection pool:

  • For every database backend, use one logical datasource. If multiple logical datasources point to the same backend, your resources may not be optimally utilized.

  • Keep the reclaim time as high as possible; it is mainly aimed at reclaiming connections from applications that never release them. Note the side effect: after the reclaim time, a connection can be reclaimed even if it is still in use.

  • Set maxPoolSize to the number of physical connections that you want to make from this datasource.

  • Set minPoolSize to the average number of concurrent client requests that involve database access through this datasource.

  • Before a connection is given to the application from the pool, it is checked for sanity. There are two checks: simple sanity, which is based on setAutoCommit, and table-based sanity.

    In most cases, simple sanity is good enough. However, with certain JDBC drivers it is not possible to recognize stale connections with simple sanity.

    Therefore, use simple sanity only where the database drivers support it, and set isSanityRequired to false if your database backend is reasonably failsafe.

  • If the application calls DataSource.getConnection(username, password) and the username and password are different each time, keep the connection pool size very low.


Using Statistics to Configure the Connection Pool

iPlanet Application Server supports a rich set of connection pool statistics, which can be used to configure the connection pool.

For more information on how to setup and collect statistics, see iPlanet Application Server Administrator's Guide.

Some of the general suggestions are:

  • If Total Connections Dropped is not zero, and the Peak Value for Total Connections in the Pool has not reached maxPoolSize, the database backend was unable to provide additional connections.

    To avoid this problem, configure the database backend to allow more connections, or increase maxPoolSize.

  • If the Peak Value for Queue Size is not zero, then connection requests exceed maxPoolSize and are being queued. If the Peak Value for Queue Size is more than 5, it is advisable to increase maxPoolSize.

  • If the number of Cache Misses is more than zero, and the Peak Value for Total Connections in the Pool has not reached maxPoolSize, increase minPoolSize to a higher value.
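The three rules above can be captured in a small advisory routine. The sketch below is illustrative: the Stats fields are assumptions modeled on the statistic names in the text, not an actual iPlanet Application Server API.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: turn connection pool statistics into tuning advice.
public class PoolAdvisor {
    // Hypothetical holder mirroring the statistic names above.
    public static class Stats {
        public int totalConnectionsDropped;
        public int peakTotalConnections;
        public int peakQueueSize;
        public int cacheMisses;
    }

    public static List<String> advise(Stats s, int minPoolSize, int maxPoolSize) {
        List<String> advice = new ArrayList<>();
        // Drops while below capacity: the backend refused connections.
        if (s.totalConnectionsDropped > 0 && s.peakTotalConnections < maxPoolSize)
            advice.add("Backend refused connections: allow more backend connections or raise maxPoolSize");
        // Deep request queue: demand exceeded maxPoolSize.
        if (s.peakQueueSize > 5)
            advice.add("Requests queued: increase maxPoolSize");
        // Cache misses while below capacity: pre-create more connections.
        if (s.cacheMisses > 0 && s.peakTotalConnections < maxPoolSize)
            advice.add("Cache misses below capacity: increase minPoolSize");
        return advice;
    }

    public static void main(String[] args) {
        Stats s = new Stats();
        s.peakQueueSize = 8;          // queue built up during the run
        s.cacheMisses = 3;
        s.peakTotalConnections = 12;  // below a maxPoolSize of 16
        System.out.println(advise(s, 4, 16));  // prints two tuning suggestions
    }
}
```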



Configuring EJB Parameters For Runtime

iPlanet Application Server provides an EJB container that enables you to build distributed applications using your own EJB components and components from other vendors. When you configure iPlanet Application Server for your enterprise, you must set the EJB container's declarative parameters. These parameters determine, for example, the session time-out: the number of seconds of inactivity after which an EJB is removed. Set these parameters using the iPlanet Application Server Administration Tool.

You can set the following values:

    • Default Session Time-out

      The default Session Time-out is 14400 seconds. This denotes the time for which the server keeps an HttpSession object alive before removing it due to inactivity. Set this to an acceptable, much lower value. This applies to stateful session EJBs.

    • Default Passivation Time-out

      Default Passivation Time-out is 60 seconds. If the bean creation rate is very low and bean size is large, there may be a need to increase this. However, increasing this value may not impact performance in most scenarios.

      Beans are passivated to the file system. If you see excessive file system activity, there may be excessive passivation activity, and tweaking this parameter may help. This value must be less than the session time-out value.

    • Metadata Cache Size

      The default Metadata Cache Size is 10 beans. This is a cache of home bean handles. You can make it as large as the number of different types of beans in your application; setting it to 50 or 60 should cover most user applications. Because it caches EJBHome instances, subsequent lookups of the same home interface are served from the cache.

    • Implementation Cache Size

      The default Implementation Cache Size is 10 instances. If you expect N concurrent user sessions to access a stateful session bean, set this equal to or larger than N. For stateless session beans, there is little benefit in setting this larger than the number of KJS threads; the same applies to entity beans. This iPlanet Application Server Administration Tool setting applies to all deployed beans and is thus a coarse control.

      The maximum cache size is in number of EJBs.

    • Timer Interval

      Timer Interval specifies the interval at which Bean implementation pools are scanned to find candidates for passivation.

      You can also specify this interval in the iPlanet Registry key CCSO\EB\EbInterval.

      This parameter determines the entity and stateful session bean cleanup interval. The default value is 10 seconds. Under experimental conditions, setting the Timer Interval to a lower value led to frequent pauses in the EJB container, hurting response times. Setting it to a very high value may increase passivation times.

    • Failover Save Interval

      Failover Save Interval specifies the time interval at which all active stateful session beans configured for failover have their state serialized and passivated to the Dsync in-memory database. This is an expensive operation and can impact performance if the bean size is large or the save interval is too short.

      See iPlanet Application Server Developer's Guide, for guidelines on how to configure and use Stateful session Bean failover support.

      If the server fails, the last saved state of the EJB can be restored. Data saved is available to all engines in a cluster. This value is set on a per server basis and applies to EJBs that were deployed with Failover option enabled (on the General tab of the Deployment Tool EJB descriptor editor).
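As a rough mental model of how the container applies the session and passivation time-outs on each timer tick, consider the sketch below. The names and decision logic are illustrative assumptions based on the parameter descriptions above, not actual container code.

```java
// Illustrative sketch of per-tick time-out handling for an idle stateful bean.
public class BeanTimeouts {
    public enum Action { KEEP, PASSIVATE, REMOVE }

    // Decide what to do with a bean that has been idle for idleSeconds,
    // given the passivation time-out and session time-out settings.
    public static Action onTimerTick(long idleSeconds,
                                     long passivationTimeout,
                                     long sessionTimeout) {
        if (idleSeconds >= sessionTimeout) return Action.REMOVE;       // session timed out
        if (idleSeconds >= passivationTimeout) return Action.PASSIVATE; // swap state to disk
        return Action.KEEP;
    }

    public static void main(String[] args) {
        // Defaults from the text: passivation 60 s, session time-out 14400 s.
        System.out.println(onTimerTick(30, 60, 14400));    // KEEP
        System.out.println(onTimerTick(300, 60, 14400));   // PASSIVATE
        System.out.println(onTimerTick(20000, 60, 14400)); // REMOVE
    }
}
```

This also shows why the passivation time-out must be less than the session time-out: otherwise a bean would be removed before it ever became a passivation candidate.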



Caching JSPs and Servlets

You can specify the number of JSP pages that are cached by each KJS engine for each iPlanet Application Server instance. Caching JSPs optimizes application response time.

The cache size is set on a per-page basis. JSP caching aids in the development of compositional JSPs. It provides the ability to cache JSPs within the Java engine, making it possible to have a master JSP that includes multiple JSPs, each of which can be cached using different cache criteria. For example, think of a portal page that contains a window to view stock quotes, another to view weather information, and so on. The stock quote window can be cached for 10 minutes, the weather report window for 30 minutes, and so on.

Note that caching of JSPs is in addition to result caching, so it is possible for a JSP to be composed of several included JSPs, each with a separate cache criterion. The composed JSP itself can be cached in the KXS using result caching, which becomes available because JSPs now have GUIDs (see the section on registered JSPs in the documentation).

Caching of JSPs uses the custom tag library support provided by JSP 1.1. A typical cacheable JSP page looks as follows:

<%@ taglib prefix="ias" uri="CacheLib.tld"%>
<ias:cache>
  <ias:criteria timeout="30">
    <ias:check class="com.iplanet.server.servlet.test.Checker"/>
    <ias:param name="y" value="*" scope="request"/>
  </ias:criteria>
</ias:cache>
<%! int i=0; %>
<html>
<body>
<h2>Hello there</h2>
I should be cached.
No? <b><%= i++ %></b>
</body>
</html>

The <ias:cache> and </ias:cache> tags delimit the cache constraints. The <ias:criteria> tag specifies the time-out value and encloses the cache criteria. Criteria can be expressed using either or both of the tags <ias:check> and <ias:param>. The syntax for these tags is as follows:

<ias:criteria timeout="val"> specifies the timeout for the cached element, in seconds. The cache criteria are specified between this tag and the closing </ias:criteria>.

<ias:check class="classname"/> is one mechanism for specifying a cache criterion. The classname refers to a class that has a method called check with the following signature:

public Boolean check(HttpServletRequest req)

This returns a boolean indicating whether the element is to be cached or not.

<ias:param name="paramName" value="paramValue" scope="request" /> is another mechanism for specifying a cache criterion.

paramName is the name of an attribute, passed either in the request object (using setAttribute) or in the URI; it is the parameter used as a cache criterion. paramValue is the value of the parameter that determines whether caching should be performed. It can take the following forms:

Constraint                               Meaning
x = ""                                   x must be present either as a
                                         parameter or as an attribute.
x = "v1|...|vk", where vi may be "*"     The constraint is true of the current
                                         request if the request parameter for x
                                         has the same value as was used to
                                         store the cached buffer.
x = "l-u", where l and u are integers    x is mapped to a value in the range
                                         [l,u].

The scope identifies the source of the attributes that are checked. These can be page, request (default), session, or application.
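The matching rules in the table can be sketched in code. The sketch below is an illustrative reading of the constraint syntax, not the container's actual matcher; in the real cache, "*" keys the cached buffer per parameter value rather than simply matching anything.

```java
// Illustrative sketch of the <ias:param> constraint forms described above.
public class ParamConstraint {
    // Returns true if the request value satisfies the constraint:
    //   ""            -> the parameter must merely be present
    //   "v1|...|vk"   -> enumeration; "*" stands in for any value
    //   "l-u"         -> integer range [l,u]
    public static boolean matches(String constraint, String value) {
        if (constraint.isEmpty()) return value != null;   // presence check only
        if (value == null) return false;
        if (constraint.matches("\\d+-\\d+")) {            // range form "l-u"
            String[] lu = constraint.split("-");
            try {
                int v = Integer.parseInt(value);
                return v >= Integer.parseInt(lu[0]) && v <= Integer.parseInt(lu[1]);
            } catch (NumberFormatException e) {
                return false;                             // non-numeric value
            }
        }
        for (String alt : constraint.split("\\|"))        // enumeration "v1|...|vk"
            if (alt.equals("*") || alt.equals(value)) return true;
        return false;
    }

    public static void main(String[] args) {
        System.out.println(matches("1-10", "7"));    // true
        System.out.println(matches("a|b|*", "zzz")); // true
        System.out.println(matches("", null));       // false
    }
}
```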

The following is an example of a cached JSP page:

<%@ taglib prefix="ias" uri="CacheLib.tld"%>
<ias:cache>
  <ias:criteria timeout="30">
    <ias:check class="com.iplanet.server.servlet.test.Checker"/>
    <ias:param name="y" value="*" scope="request"/>
  </ias:criteria>
</ias:cache>
<%! int i=0; %>
<html>
<body>
<h2>Hello there</h2>
I should be cached.
No? <b><%= i++ %></b>
</body>
</html>

where Checker is defined as:

package com.iplanet.server.servlet.test;

import javax.servlet.*;
import javax.servlet.http.*;

public class Checker {
    String chk = "42";

    public Checker()
    {
    }

    public Boolean check(ServletRequest _req)
    {
        HttpServletRequest req = (HttpServletRequest)_req;
        String par = req.getParameter("x");
        return new Boolean(par == null ? false : par.equals(chk));
    }
}

Given the above, a cached element is valid for a request with parameter x=42 and y equal to the value used to store the element. Note that it is possible to have multiple sets of <ias:param> and <ias:check> tags inside an <ias:criteria> block. It is also possible to have multiple <ias:criteria> blocks inside a JSP.

Note * cache-criteria = "*" does not work.

* When cache-criteria is properly established (arg="*"), the behavior is inefficient: an update on cache miss, then a cache hit, then an update, then a cache hit.




Copyright © 2002 Sun Microsystems, Inc. All rights reserved.

Last Updated March 06, 2002