Oracle® Coherence User's Guide for Oracle Coherence*Web
Release 3.5

Part Number E14536-01

4 Coherence*Web Session Management Features

Coherence*Web can be configured in many ways to meet the demands of your environment. Consequently, you might have to change some default configuration options. The purpose of this chapter is to provide an in-depth look at the features that Coherence*Web supports so that you can make the appropriate configuration and deployment decisions.

4.1 Session Models

A session model describes how Coherence*Web physically represents and stores session state in Coherence. Coherence*Web supports a flexible data management model for session state. The session state is managed by an HttpSessionModel object, and the list of all sessions is managed by an HttpSessionCollection object. Coherence*Web includes three session model implementations out of the box: the Traditional, Monolithic, and Split models, shown in Figure 4-1 and described in the following sections.

Figure 4-1 Traditional, Monolithic, and Split Session Models


4.1.1 Traditional Model

TraditionalHttpSessionModel and TraditionalHttpSessionCollection manage all of the HTTP session data for a particular session in a single Coherence cache entry, but manage each HTTP session attribute (particularly, its serialization and deserialization) separately.

This model is suggested for applications with relatively small HTTP session objects (10KB or less) that do not have issues with object-sharing between session attributes. (Object-sharing between session attributes occurs when multiple attributes of a session have references to the same exact object, meaning that separate serialization and deserialization of those attributes cause multiple instances of that shared object to exist when the HTTP session is later deserialized.)
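To see the shared-object problem concretely, consider the following self-contained sketch (hypothetical demonstration code, not part of Coherence*Web). Two attribute values that reference the same map are serialized independently, as the Traditional Model does, and deserialization yields two distinct copies:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.HashMap;
import java.util.Map;

public class SharedAttributeDemo
    {
    public static void main(String[] asArg) throws Exception
        {
        Map<String, String> map = new HashMap<String, String>();
        Object attributeA = map; // stored as session attribute "a"
        Object attributeB = map; // stored as session attribute "b"; same object

        // serialize each attribute separately, as the Traditional Model
        // does, then deserialize both
        Object copyA = roundTrip(attributeA);
        Object copyB = roundTrip(attributeB);

        // the single shared map has become two distinct instances
        System.out.println(copyA == copyB); // prints "false"
        }

    private static Object roundTrip(Object o) throws Exception
        {
        ByteArrayOutputStream streamRaw = new ByteArrayOutputStream();
        ObjectOutputStream streamOut = new ObjectOutputStream(streamRaw);
        streamOut.writeObject(o);
        streamOut.close();
        return new ObjectInputStream(new ByteArrayInputStream(
                streamRaw.toByteArray())).readObject();
        }
    }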

Figure 4-2 Traditional Session Model


4.1.2 Monolithic Model

MonolithicHttpSessionModel and MonolithicHttpSessionCollection are similar to the Traditional Model, except that they solve the shared object issue by serializing and deserializing all attributes into a single object stream.

As a result, the Monolithic Model is often less performant than the Traditional Model.

Figure 4-3 Monolithic Session Model


4.1.3 Split Model

SplitHttpSessionModel and SplitHttpSessionCollection manage the core HTTP session data (such as the session ID, creation time, and last access time) together with all of the small session attributes in the same manner as the Traditional Model, ensuring high performance by keeping that block of session data small. All large attributes are split out into separate cache entries to be managed individually, supporting very large HTTP session objects without unduly increasing the amount of data that must be accessed and updated within the cluster on each request. In other words, only the large attributes that are modified within a particular request incur any network overhead for their updates, and (because it uses Near Caching) the Split Model generally does not incur any network overhead for accessing either the core HTTP session data or any of the session attributes.

Figure 4-4 Split Session Model


4.1.4 Session Model Recommendations

  • The Split Model is the recommended session model for most applications.

  • The Traditional Model may perform better for applications that are known to have small HTTP session objects.

  • The Monolithic Model is designed to solve a specific class of problems related to multiple session attributes that have references to the same shared object, and that must maintain that object as a shared object.

Session Management for Clustered Applications in Getting Started with Oracle Coherence provides information on the behavior of these models in a clustered environment.

Note:

For configuration information, see Appendix A, "Coherence*Web Configuration Parameters."

4.2 Session and Session Attribute Scoping

Coherence*Web allows fine-grained control over how both session data and session attributes are scoped (or "shared") across application boundaries, as described in the following sections.

4.2.1 Session Scoping

Coherence*Web allows session data to be shared by different Web applications deployed in the same or different Web containers. To do so, you must correctly configure the Coherence*Web cookie context parameters and make the classes of objects stored in session attributes available to each Web application.

If you are using cookies to store session IDs (that is, you are not using URL rewriting), you must set the coherence-session-cookie-path context parameter to a common context path of all Web applications that share session data. For example, to share session data between two Web applications registered under the context paths /web/HRPortal and /web/InWeb, you should set the coherence-session-cookie-path parameter to /web. On the other hand, if the two Web applications are registered under the context paths /HRPortal and /InWeb, you should set the coherence-session-cookie-path parameter to /.

If the Web applications that you would like to share session data are deployed on different Web containers running on different machines (that are not behind a common load balancer), you must also set the coherence-session-cookie-domain parameter to a domain shared by the machines. For example, to share session data between two Web applications running on server1.mydomain.com and server2.mydomain.com, you must set the coherence-session-cookie-domain parameter to .mydomain.com.
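For example, assuming the Coherence*Web parameters are declared as servlet context parameters in each application's web.xml file (as described in Appendix A), the entries for the two applications above would look something like this sketch:

<context-param>
  <param-name>coherence-session-cookie-path</param-name>
  <param-value>/web</param-value>
</context-param>
<context-param>
  <param-name>coherence-session-cookie-domain</param-name>
  <param-value>.mydomain.com</param-value>
</context-param>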

To correctly serialize or deserialize objects stored in shared sessions, the classes of all objects stored in session attributes must be available to Web applications that share session data. For Web applications deployed on different containers, the classes may be placed in either the Web container or Web application classpath; however, for applications deployed in the same Web container, the classes must be placed in the Web container classpath. This is because most containers load each Web application using a separate ClassLoader.

Note:

For advanced use cases where EAR cluster node-scoping or application server JVM cluster scoping is employed, and you do not want session data shared across individual Web applications, see "Preventing Web Applications from Sharing Session Data".

4.2.1.1 Preventing Web Applications from Sharing Session Data

Sometimes you may want to explicitly prevent HTTP session data from being shared by different Java EE applications that participate in the same Coherence cluster. For example, assume you have two applications HRPortal and InWeb that share cached data in their EJB tiers but use different session data. In this case, it is desirable for both applications to be part of the same Coherence cluster, but undesirable for both applications to use the same clustered service for session data.

To prevent different Java EE applications from sharing session data, specify a unique session cache service name for each application:

  1. Locate the <service-name> elements in each session-cache-config.xml file found in your application.

  2. Set the parameters to a unique value for each application.

    This forces each application to use a separate clustered service for session data.

  3. Save the modified session-cache-config.xml files.

Example 4-1 illustrates a sample session-cache-config.xml file for an HRPortal application. To prevent the HRPortal application from sharing session data with the InWeb application, rename the <service-name> parameter for the replicated scheme to ReplicatedSessionsMiscHRP. Rename the <service-name> parameter for the distributed schemes to DistributedSessionsHRP.

Example 4-1 Configuration to Prevent Applications from Sharing Session Data

<replicated-scheme>
  <scheme-name>default-replicated</scheme-name>
  <service-name>ReplicatedSessionsMisc</service-name> <!-- rename this to ReplicatedSessionsMiscHRP -->
  <backing-map-scheme>
    <class-scheme>
      <scheme-ref>default-backing-map</scheme-ref>
    </class-scheme>
  </backing-map-scheme>
</replicated-scheme>

<distributed-scheme>
  <scheme-name>session-distributed</scheme-name>
  <service-name>DistributedSessions</service-name> <!-- rename this to DistributedSessionsHRP -->
  <lease-granularity>member</lease-granularity>
  <backing-map-scheme>
    <class-scheme>
      <scheme-ref>default-backing-map</scheme-ref>
    </class-scheme>
  </backing-map-scheme>
</distributed-scheme>

<distributed-scheme>
  <scheme-name>session-certificate</scheme-name>
  <service-name>DistributedSessions</service-name> <!-- rename this to DistributedSessionsHRP -->
  <lease-granularity>member</lease-granularity>
  <backing-map-scheme>
    <local-scheme>
      <scheme-ref>session-certificate-autoexpiring</scheme-ref>
    </local-scheme>
  </backing-map-scheme>
</distributed-scheme>

4.2.1.2 Keeping Session Cookies Separate

If you are using cookies to store session IDs, you must ensure that session cookies created by one application are not propagated to another application. To do this, set each application's session cookie domain and path in its web.xml file. The coherence-session-cookie-path context parameter sets the session cookie path for a Web application. To prevent cookies from being propagated, ensure that no two applications share the same context path.

For example, assume you have two Web applications registered under the context paths /web/HRPortal and /web/InWeb. To prevent the Web applications from sharing session data through cookies, set the coherence-session-cookie-path parameter in one application's web.xml file to /web/HRPortal; set the parameter in the other application's web.xml file to /web/InWeb.

If your applications are deployed on different Web containers running on separate machines, then you can set the context parameter coherence-session-cookie-domain to ensure that they are not in the same domain.

For example, assume you have two Web applications running on server1.mydomain.com and server2.mydomain.com. To prevent session cookies from being shared between them, set the coherence-session-cookie-domain parameter in one application's web.xml file to server1.mydomain.com; set the parameter in the other application's web.xml file to server2.mydomain.com.
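A sketch of the corresponding entries for the first application's web.xml file (the second application would use /web/InWeb and server2.mydomain.com instead):

<context-param>
  <param-name>coherence-session-cookie-path</param-name>
  <param-value>/web/HRPortal</param-value>
</context-param>
<context-param>
  <param-name>coherence-session-cookie-domain</param-name>
  <param-value>server1.mydomain.com</param-value>
</context-param>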

4.2.2 Session Attribute Scoping

When sessions are shared across Web applications, you may want to scope individual session attributes so that they are either globally visible (that is, all Web applications can see and modify them) or scoped to an individual Web application (that is, not visible to any instance of another application).

Coherence*Web provides the ability to control this behavior by using the AttributeScopeController interface. This optional interface is used to selectively scope attributes in cases when a session may be shared across multiple applications. This enables different applications to potentially use the same attribute names for application-scope state without accidentally reading, updating, or removing other applications' attributes. In addition to having application-scoped information in the session, it allows the session to contain global (unscoped) information that is readable, updatable, and removable by any of the applications that share the session.

There are two implementations of this interface available out of the box: the ApplicationScopeController and the GlobalScopeController.

Note:

After a configured AttributeScopeController is created, it is initialized with the name of the Web application, which it can use to qualify attribute names. You can configure the name of your Web application by using the display-name XML element in the Web application's web.xml file.

4.3 Cluster Node Isolation

When using Coherence*Web, there are many deployment options to consider; one of them is cluster node isolation.

This option determines how deployed applications map to Coherence cluster nodes: applications can be application server-scoped, EAR-scoped, or WAR-scoped. This section describes these options. For detailed information on the XML configuration for each option, see "Packaging Applications and Configuring Cluster Nodes".

4.3.1 Application Server-Scoped Cluster Nodes

With this configuration, all deployed applications in a container using Coherence*Web become part of one Coherence node. This configuration produces the smallest number of Coherence nodes in the cluster (one for each Web container JVM) and since the Coherence library (coherence.jar) is deployed in the container's classpath, only one copy of the Coherence classes is loaded into the JVM. This minimizes the use of resources. On the other hand, since all applications are using the same cluster node, all applications are affected if one application misbehaves.

Figure 4-5 Application Server-Scoped Cluster

Application Server-Scoped Cluster

Requirements for using this configuration are:

  • Each deployed application must use the same version of Coherence and participate in the same cluster.

  • Objects placed in the HTTP session must have their classes in the container's classpath.

"Packaging and Configuring Application Server-Scoped Cluster Nodes" describes the XML configuration for application server-scoped cluster nodes.

Note:

The application server-scoped cluster node configuration should be considered very carefully and never used in environments where the interaction between applications is unknown or unpredictable.

An example of such an environment may be a deployment where multiple application groups are deploying applications written independently, without carefully coordinating and enforcing their conventions and naming standards. With this configuration, all applications are part of the same cluster and the likelihood of collisions between namespaces for caches, services and other configuration settings is quite high and may lead to unexpected results.

For these reasons, Oracle strongly recommends that you use the EAR-scoped or WAR-scoped cluster node configurations. If you are in doubt regarding which deployment topology to choose, or if this warning applies to your deployment, then do not choose the application server-scoped cluster node configuration.

4.3.2 EAR-Scoped Cluster Nodes

With this configuration, all deployed applications within each EAR become part of one Coherence node. This configuration produces the next smallest number of Coherence nodes in the cluster (one for each deployed EAR that uses Coherence*Web). Since the Coherence library (coherence.jar) is deployed in the application's classpath, only one copy of the Coherence classes is loaded for each EAR. Since all Web applications in the EAR use the same cluster node, all Web applications in the EAR are affected if one of the Web applications misbehaves.

Figure 4-6 EAR-Scoped Cluster

EAR-Scoped Cluster

EAR-scoped cluster nodes reduce the deployment effort as no changes to the application server classpath are required. This option is also ideal if you plan on deploying only one EAR to an application server.

Requirements for using this configuration are:

  • The Coherence library (coherence.jar) must be deployed as part of the EAR file and listed as a Java module in META-INF/application.xml.

  • Objects placed into the HTTP session must have their classes deployed as a Java EAR module in a similar fashion.

"Packaging and Configuring EAR-Scoped Cluster Nodes" describes the XML configuration for EAR-scoped cluster nodes.

4.3.3 WAR-Scoped Cluster Nodes

With this configuration, each deployed Web application becomes its own Coherence node. This configuration produces the largest number of Coherence nodes in the cluster (one for each deployed WAR that uses Coherence*Web) and since the Coherence library (coherence.jar) is deployed in the Web application's classpath, there will be as many copies of the Coherence classes loaded as there are deployed WARs. This results in the largest resource utilization out of the three options. However, since each deployed Web application is its own cluster node, Web applications are completely isolated from other potentially misbehaving Web applications.

WAR-scoped cluster nodes reduce the deployment effort as no changes to the application server classpath are required. This option is also ideal if you plan on deploying only one WAR to an application server.

Figure 4-7 WAR-Scoped Clusters

WAR-Scoped Clusters

Requirements for using this configuration are:

  • The Coherence library (coherence.jar) must be deployed as part of the WAR file (usually in WEB-INF/lib).

  • Objects placed into the HTTP session must have their classes deployed as part of the WAR file (in WEB-INF/lib or WEB-INF/classes).

"Packaging and Configuring WAR-Scoped Cluster Nodes" describes the XML configuration for WAR-scoped cluster nodes.

4.4 Session Locking Modes

Oracle Coherence provides the following configuration options for concurrent access to HTTP sessions; each locking mode is described in the sections that follow.

For more information on the parameters described in this section, see Appendix A, "Coherence*Web Configuration Parameters."

4.4.1 Optimistic Locking (Default)

The Optimistic Locking mode allows multiple Web container threads in one or more JVMs to access the same session concurrently. This setting does not use explicit locking; rather, an optimistic approach is used to detect and prevent concurrent updates upon completion of an HTTP request that modifies the session. When Coherence*Web detects a concurrent modification, a ConcurrentModificationException is thrown to the application; therefore, an application must be prepared to handle this exception in an appropriate manner.

This mode can be configured by setting the coherence-session-member-locking parameter to false.

4.4.2 Member Locking

The Member Locking mode allows multiple Web container threads in the same JVM to access and modify the same session concurrently, but prohibits concurrent access by threads in different JVMs. This is accomplished by acquiring a member-level lock for an HTTP session at the beginning of a request and releasing the lock upon completion of the request. For more information on member-level locks, see <lease-granularity> in the distributed-scheme section of the Developer's Guide for Oracle Coherence.

This mode can be configured by setting the coherence-session-member-locking parameter to true.

4.4.3 Application Locking

The Application Locking mode restricts access (and modification) to a session to threads in a single Web application instance at a time. This is accomplished by acquiring both a member-level and an application-level lock for an HTTP session at the beginning of a request and releasing both locks upon completion of the request. For more information on member-level locks, see <lease-granularity> in the distributed-scheme section of the Developer's Guide for Oracle Coherence.

This mode can be configured by setting the coherence-session-app-locking parameter to true. Note that setting this parameter to true implies a setting of true for coherence-session-member-locking.

4.4.4 Thread Locking

The Thread Locking mode restricts access (and modification) to a session to a single thread in a single JVM at a time. This is accomplished by acquiring member-level, application-level, and thread-level locks for an HTTP session at the beginning of a request and releasing all three locks upon completion of the request. For more information on member-level locks, see <lease-granularity> in the distributed-scheme section of the Developer's Guide for Oracle Coherence.

This mode can be configured by setting the coherence-session-thread-locking parameter to true. Note that setting this to true implies a setting of true for both coherence-session-member-locking and coherence-session-app-locking.
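For example, assuming the locking parameters are declared as servlet context parameters in web.xml (see Appendix A), Thread Locking could be enabled with an entry along these lines (a sketch):

<context-param>
  <param-name>coherence-session-thread-locking</param-name>
  <param-value>true</param-value>
</context-param>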

4.4.5 Using Locking in HTTP Sessions

Enabling Member, Application, or Thread Locking for HTTP session access means that Coherence*Web acquires a cluster-wide lock for every HTTP request that requires access to a session; the exception is when sticky load balancing is available and the Coherence*Web sticky session optimization is enabled. By default, threads that attempt to access a locked session (locked by a thread in a different JVM) block until the lock can be acquired. If you want to enable a timeout for lock acquisition, you can configure it by using the tangosol.coherence.servlet.lock.timeout system property in the container's startup script (for example, -Dtangosol.coherence.servlet.lock.timeout=30s).

Many Web applications do not have such a strict concurrency requirement. For these applications, using the Optimistic Locking mode has the following advantages:

  • The overhead of obtaining and releasing cluster-wide locks for every HTTP request is eliminated.

  • Requests can be load balanced away from failing or unresponsive JVMs to healthy JVMs without requiring the unresponsive JVM to release the cluster-wide lock on the session.

4.4.6 Enabling Sticky Session Optimizations

If Member, Application, or Thread Locking is a requirement for a Web application that resides behind a sticky load balancer, Coherence*Web provides an optimization for obtaining the cluster-wide lock required for HTTP session access. By definition, a sticky load balancer attempts to route each request for a given session to the same application server JVM to which it previously routed requests for that session, which initially is the application server JVM that created the session. The sticky session optimization takes advantage of this behavior by retaining the cluster-wide lock for a session until the session expires or until it is asked to release it. If, for whatever reason, the sticky load balancer sends a request for the same session to another application server JVM, that JVM asks the JVM that owns the lock on the session to release the lock as soon as possible. This is implemented using an invocation service. For more information, see the SessionOwnership entry in Table B-2.

Sticky session optimization can be enabled by setting the coherence-sticky-sessions parameter to true.

4.5 Deployment Topologies

Coherence*Web supports most of the same deployment topologies as Coherence, including in-process, out-of-process (that is, client/server deployment), and bridging clients and servers over Coherence*Extend. The major supported deployment topologies are described in the following sections.

4.5.1 In-Process

The In-Process topology is not recommended for production use; it is supported mainly for development and testing. By storing the session data in-process with the application server, this topology is easy to get up and running quickly for smoke tests, development, and testing.

Figure 4-8 In-Process Deployment Topology

In-Process Deployment Topology

4.5.2 Out-of-Process

In the Out-of-Process deployment topology, the application servers (that is, the application server tier) are configured as cache clients (that is, tangosol.coherence.distributed.localstorage=false), and dedicated JVMs run as cache servers, physically storing and managing the clustered data.

This approach has these benefits:

  • Session data storage is off-loaded from the application server tier to the cache server tier. This reduces heap usage, garbage collection times, and so on.

  • It allows for the two tiers to be scaled independently of one another. If more application processing power is needed, just start more application servers. If more session storage capacity is needed, just start more cache servers.

The Out-of-Process topology is the default recommendation of Oracle Coherence due to its flexibility.
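As a sketch of how the two tiers might be started (the classpath and the com.tangosol.net.DefaultCacheServer main class are assumptions based on a standard Coherence installation):

# Cache server tier: storage-enabled JVMs that hold the session data.
java -cp coherence.jar -Dtangosol.coherence.distributed.localstorage=true com.tangosol.net.DefaultCacheServer

# Application server tier: added to each Web container's JVM startup options.
-Dtangosol.coherence.distributed.localstorage=false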

Figure 4-9 Out of Process Deployment Topology

Out of Process Deployment Topology

4.5.3 Out-of-Process with Coherence*Extend

The Out-of-Process with Coherence*Extend topology is similar to the Out-of-Process topology, except that the communication between the application server tier and the cache server tier is over Coherence*Extend (TCP/IP). For information on configuring this scenario, see "Configuring Coherence*Web with Coherence*Extend".

This approach has the same benefits as the Out-of-Process topology, plus the ability to segment the deployment of application servers and cache servers. This is ideal in an environment where the application servers are on a network that does not support UDP. The cache servers can be set up in a separate dedicated network, with the application servers connecting to the cluster by using TCP.

Figure 4-10 Out-of-Process with Coherence*Extend Deployment Topology


4.6 Managing and Monitoring Applications with JMX

Note:

This section assumes that you have set up the Coherence Clustered JMX Framework, which is required for Coherence*Web JMX management and monitoring. To set up this framework, see the configuration and installation instructions in How to Manage Coherence with JMX in the Developer's Guide for Oracle Coherence.

The management attributes and operations for Web applications that use Coherence*Web for HTTP session management are exposed through the HttpSessionManagerMBean interface (com.tangosol.coherence.servlet.management.HttpSessionManagerMBean).

During startup, each Coherence*Web Web application registers a single instance of HttpSessionManagerMBean. The MBean is unregistered when the Web application shuts down. Table 4-1 describes the MBean's object name used for registration.

Table 4-1 Object Name for the HttpSessionManagerMBean

Managed Bean: HttpSessionManagerMBean
Object Name: type=HttpSessionManager,nodeId=<cluster node id>,appId=<web application id>


Table 4-2 describes the information that is returned by the HttpSessionManagerMBean. All of the names represent attributes, except resetStatistics, which is an operation.

Several of the MBean attribute names share a common prefix (Local, Overflow, Session, or ServletContext) that groups related attributes in Table 4-2.

Table 4-2 Information Returned by the HttpSessionManagerMBean

CollectionClassName (String)
The fully qualified class name of the HttpSessionCollection implementation in use. The HttpSessionCollection interface is an abstract model for a collection of HttpSessionModel objects. The interface is not at all concerned with how the sessions are communicated between the clients and the servers.

FactoryClassName (String)
The fully qualified class name of the Factory implementation in use. The SessionHelper.Factory is used by the SessionHelper to obtain objects that implement various important parts of the Servlet specification. It can be placed in front of the application in place of the application server's own objects, thus changing the "apparent implementation" of the application server itself (for example, adding clustering).

LocalAttributeCacheName (String)
The name of the local cache that stores non-distributed session attributes. If the attribute displays null, then local session attribute storage is disabled.

LocalAttributeCount (Integer)
The number of non-distributed session attributes stored in the local session attribute cache. If the attribute displays -1, then local session attribute storage is disabled.

LocalSessionCacheName (String)
The name of the local cache that stores non-distributed sessions. If the attribute displays null, then local session storage is disabled.

LocalSessionCount (Integer)
The number of non-distributed sessions stored in the local session cache. If the attribute displays -1, then local session storage is disabled.

OverflowAverageSize (Integer)
The average size (in bytes) of the session attributes stored in the "overflow" clustered cache since the last time statistics were reset. The attribute displays -1 if a SplitHttpSessionCollection is not in use.

OverflowCacheName (String)
The name of the clustered cache that stores the "large attributes" that exceed a certain size and thus are determined to be more efficiently managed as separate cache entries and not as part of the serialized session object itself. The attribute displays null if a SplitHttpSessionCollection is not in use.

OverflowMaxSize (Integer)
The maximum size (in bytes) of a session attribute stored in the "overflow" clustered cache since the last time statistics were reset. The attribute displays -1 if a SplitHttpSessionCollection is not in use.

OverflowThreshold (Integer)
The minimum length (in bytes) that the serialized form of an attribute value must be for that attribute value to be stored in the separate "overflow" cache that is reserved for large attributes. The attribute displays -1 if a SplitHttpSessionCollection is not in use.

OverflowUpdates (Integer)
The number of updates to session attributes stored in the "overflow" clustered cache since the last time statistics were reset. The attribute displays -1 if a SplitHttpSessionCollection is not in use.

SessionAverageLifetime (Integer)
The average lifetime (in seconds) of session objects invalidated (either due to expiration or to an explicit invalidation) since the last time statistics were reset.

SessionAverageSize (Integer)
The average size (in bytes) of session objects placed in the session storage clustered cache since the last time statistics were reset.

SessionCacheName (String)
The name of the clustered cache that stores serialized session objects.

SessionIdLength (Integer)
The length (in characters) of generated session IDs.

SessionMaxSize (Integer)
The maximum size (in bytes) of a session object placed in the session storage clustered cache since the last time statistics were reset.

SessionMinSize (Integer)
The minimum size (in bytes) of a session object placed in the session storage clustered cache since the last time statistics were reset.

SessionStickyCount (Integer)
The number of session objects that are pinned to this instance of the Web application. The attribute displays -1 if sticky session optimizations are disabled.

SessionTimeout (Integer)
The session expiration time (in seconds). The attribute displays -1 if sessions never expire.

SessionUpdates (Integer)
The number of updates to session objects stored in the session storage clustered cache since the last time statistics were reset.

ServletContextCacheName (String)
The name of the clustered cache that stores javax.servlet.ServletContext attributes. The attribute displays null if the ServletContext is not clustered.

ServletContextName (String)
The name of the Web application ServletContext.

resetStatistics (operation; void)
Resets the session management statistics.


Figure 4-11 illustrates the HttpSessionManagerMBean as it is displayed in the JConsole browser.

Figure 4-11 HttpSessionManagerMBean Displayed in the JConsole Browser

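The same attributes and operations can also be read programmatically through standard JMX. The following is a minimal sketch; it assumes the default Coherence management domain ("Coherence"), and the nodeId and appId values are placeholders to be replaced with the values described in Table 4-1:

import javax.management.MBeanServer;
import javax.management.MBeanServerFactory;
import javax.management.ObjectName;

public class SessionStats
    {
    public static void main(String[] asArg) throws Exception
        {
        // locate the MBeanServer that the Coherence node registered with
        MBeanServer server = (MBeanServer)
                MBeanServerFactory.findMBeanServer(null).get(0);

        // placeholder nodeId/appId; use the values from Table 4-1
        ObjectName name = new ObjectName(
                "Coherence:type=HttpSessionManager,nodeId=1,appId=MyWebApp");

        // read an attribute from Table 4-2, then reset the statistics
        System.out.println("SessionAverageSize = "
                + server.getAttribute(name, "SessionAverageSize"));
        server.invoke(name, "resetStatistics", null, null);
        }
    }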

4.7 Cleaning Up Expired HTTP Sessions

As part of the Coherence*Web Session Management Module, HTTP sessions are eventually cleaned up by the Session Reaper, and the associated memory is freed. The Session Reaper provides a service similar to the JVM's own Garbage Collection (GC) capability: the Session Reaper is responsible for destroying any session that is no longer used, that is, any session that has timed out.

Each HTTP session contains two pieces of information that determine when it has timed out. The first is the LastAccessedTime property of the session, which is the timestamp of the most recent activity involving the session. The second is the MaxInactiveInterval property of the session, which specifies how long the session is kept alive without any activity; a typical value for this property is 30 minutes. The MaxInactiveInterval property defaults to the value specified for the coherence-session-expire-seconds configuration option, but it can be modified on a session-by-session basis.

Each time that an HTTP request is received by the server, if there is an HTTP session associated with that request, then the LastAccessedTime property of the session is automatically updated to the current time. As long as requests continue to arrive related to that session, it is kept alive, but when a period of inactivity occurs longer than that specified by the MaxInactiveInterval property, then the session expires. Session expiration is passive—occurring only due to the passing of time. The Coherence*Web Session Reaper scans for sessions that have expired, and when it finds expired sessions it cleans them up.
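Both properties are exposed through the standard Servlet API, so an application can tune the lifetime of an individual session. A brief sketch (request is assumed to be an incoming javax.servlet.http.HttpServletRequest):

// override the coherence-session-expire-seconds default for this session only
HttpSession session = request.getSession();
session.setMaxInactiveInterval(15 * 60); // expire after 15 idle minutes

// how long the session has been idle so far, in milliseconds
long cMillisIdle = System.currentTimeMillis() - session.getLastAccessedTime();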

4.7.1 Understanding the Session Reaper

The Session Reaper configuration answers three basic questions:

  • On which servers will the Reaper run?

  • How frequently will the Reaper run?

  • When the Reaper runs, on which servers will it look for expired sessions?

The Session Reaper runs as part of the application server. That means that if Coherence is configured to provide a separate cache tier (made up of "cache servers"), then the Session Reaper does not run on those cache servers.

Consider the three different topologies used with Coherence*Web:

  • In-Process—The application servers that run Coherence*Web are storage-enabled, so that the HTTP session storage is co-located with the application servers. No separate cache servers are used for HTTP session storage.

  • Out-of-Process—The application servers that run Coherence*Web are storage-disabled members of the Coherence cluster. Separate cache servers are used for HTTP session storage.

  • Out-of-Process with Coherence*Extend—The application servers that run Coherence*Web are not part of a Coherence cluster; the application servers use Coherence*Extend to attach to a Coherence cluster which contains cache servers used for HTTP session storage.

Every application server running Coherence*Web runs the Session Reaper. By default, the Session Reaper runs concurrently on all of the application servers, so that all of the servers share the workload of identifying and cleaning up expired sessions. The coherence-reaperdaemon-cluster-coordinated configuration option causes the cluster to coordinate reaping so that only one server at a time performs the actual reaping; the use of this option is not recommended, and it cannot be used with the Coherence*Web over Coherence*Extend topology.

The Session Reaper is configured to scan the entire set of sessions over a certain period, called a reaping cycle, which defaults to five minutes. The length of the reaping cycle is specified by the coherence-reaperdaemon-cycle-seconds option. Because the Session Reaper is expected to scan all of the sessions that it is responsible for, and to clean up any expired sessions, within the reaping cycle, this setting indicates to the Session Reaper how aggressively it must work. If the cycle length is configured too short, the Session Reaper uses additional resources without providing additional benefit. If the cycle length is configured too long, sessions may not be cleaned up as quickly after they expire. In most situations, it is far preferable to reduce resource usage than to ensure that sessions are cleaned up quickly after they expire. Consequently, the default cycle of five minutes is a good balance between promptness of cleanup and minimal resource usage.

During the reaping cycle, the Session Reaper scans for expired sessions. In most cases, the Session Reaper takes responsibility for scanning all of the HTTP sessions across the entire cluster, but there is an optimization available for the Single Tier (In-Process) topology. In the Single Tier topology, when all of the sessions are being managed by storage-enabled Coherence cluster members that are also running the application server, the session storage is co-located with the application server. Consequently, it is possible for the Session Reaper on each application server to scan only the sessions that are stored locally. This behavior can be enabled by setting the coherence-reaperdaemon-assume-locality configuration option to true.

Regardless of whether the Session Reaper scans only co-located sessions or all sessions, it does so in a very efficient manner by using these advanced capabilities of the Coherence data grid:

  • Starting with the current version of Coherence, the Session Reaper does not actually look at each session; instead, it delegates the search for expired sessions to the data grid using a custom ValueExtractor implementation. This ValueExtractor takes advantage of the BinaryEntry interface introduced in Coherence version 3.5 so that it can determine if the session has expired without even deserializing the session. As a result, the selection of expired sessions can be delegated to the data grid just like any other parallel query, and can be executed by storage-enabled Coherence members in a very efficient manner.

  • Instead of selecting all of the expired sessions immediately using a parallel query, the Session Reaper only queries one member at a time; this allows the Session Reaper to divide the work of the query across the duration of the reaping cycle. Additionally, this eliminates the need for group communication when querying for expired sessions.

  • Because the work of cleaning up expired sessions is spread across the entire reaping cycle, the selection of expired sessions is also spread across the cycle, so that selection occurs shortly before clean-up; this reduces the chance that multiple application servers attempt to clean up the same expired sessions. The Session Reaper uses the com.tangosol.net.partition.PartitionedIterator class to automatically query on a member-by-member basis, in a random order that avoids harmonics in large-scale clusters.

Each storage-enabled member can very efficiently scan for any expired sessions, and it only has to scan one time per application server per reaper cycle. The result is an out-of-the-box Session Reaper configuration that works well for application server clusters with only two servers, and application server clusters with several hundred servers. Furthermore, the configuration works well for applications with several hundred concurrent sessions, and for applications with several million concurrent sessions.

To ensure that the Session Reaper does not impact the smooth operation of the application server, it breaks up its work into chunks and schedules that work in a manner that spreads the work across the entire reaping cycle. Since the Session Reaper has to know how much work it must schedule, it maintains statistics on the amount of work that it performed in previous cycles, and uses statistical weighting to ensure that statistics from recent reaping cycles count more heavily. There are several reasons why the Session Reaper breaks up the work in this manner:

  • If the Session Reaper consumed a large number of CPU cycles at one time, it could cause the application to be less responsive to users. By doing a small portion of the work at a time, the application remains responsive.

  • One of the key performance enablers for Coherence*Web is the near caching feature of Coherence; since the sessions that are expired are accessed through that same near cache to clean them, expiring too many sessions too quickly could cause the cache to evict sessions that are being used on that application server, leading to performance loss.

The Session Reaper performs its job efficiently, even with the default out-of-the-box configuration, by:

  • delegating as much work as possible to the data grid

  • delegating work to only one member at a time

  • avoiding group communication

  • enabling the data grid to find expired sessions without even deserializing them

  • restricting the usage of CPU cycles

  • avoiding cache-thrashing of the near caches that Coherence*Web relies on for performance

4.7.2 Configuring the Session Reaper

The following list contains suggestions for tuning the out-of-the-box configuration of the Session Reaper:

  • If the application is deployed with the In-Process topology, then set the coherence-reaperdaemon-assume-locality configuration option to true.

  • Since all of the application servers are responsible for scanning for expired sessions, it is reasonable to increase the coherence-reaperdaemon-cycle-seconds configuration option if the cluster is larger than ten application servers. The larger the number of application servers, the longer the cycle can be; for example, with 200 servers, it would be reasonable to set the length of the reaper cycle as high as 30 minutes (that is, setting the coherence-reaperdaemon-cycle-seconds configuration option to 1800).
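For example, a 30-minute reaping cycle could be declared as follows (a sketch, assuming the parameter is set as a servlet context parameter in web.xml, as described in Appendix A):

<context-param>
  <param-name>coherence-reaperdaemon-cycle-seconds</param-name>
  <param-value>1800</param-value>
</context-param>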

4.8 Overriding the Distribution of HTTP Sessions and Attributes

The Coherence*Web Session Distribution Controller, described by the HttpSessionCollection.SessionDistributionController interface, enables you to override the default distribution of HTTP sessions and attributes in a Web application. An implementation of the SessionDistributionController interface can mark sessions or individual session attributes as either local (stored only on the originating Web container) or distributed (stored in the Coherence cluster).

At any point during the life of a session, the session or its attributes can transition from local to distributed. However, once a session or attribute is distributed, it cannot transition back to local.

The following sections describe how to implement a SessionDistributionController and how to register it with your application.

4.8.1 Implementing a Session Distribution Controller

Example 4-2 illustrates a sample implementation of the HttpSessionCollection.SessionDistributionController interface. In the sample, a session is distributed only if it has a shopping cart attached (that is, a shopping-cart attribute). For each session that is distributed, all attributes are distributed except the ui-rendering attribute.

Example 4-2 Sample Session Distribution Controller Implementation

import com.tangosol.coherence.servlet.HttpSessionCollection;
import com.tangosol.coherence.servlet.HttpSessionModel;
 
/**
* Sample implementation of SessionDistributionController
*/
public class CustomSessionDistributionController
        implements HttpSessionCollection.SessionDistributionController
    {
    public void init(HttpSessionCollection collection)
        {
        }
 
    /**
    * Only distribute sessions that have a shopping cart.
    *
    * @param model Coherence representation of the HTTP session
    *
    * @return true if the session should be distributed
    */
    public boolean isSessionDistributed(HttpSessionModel model)
        {
        return model.getAttribute("shopping-cart") != null;
        }
 
    /**
    * If a session is "distributed", then distribute all attributes with the 
    * exception of the "ui-rendering" attribute.
    *
    * @param model Coherence representation of the HTTP session
    * @param sName name of the attribute to check
    *
    * @return true if the attribute should be distributed
    */
    public boolean isSessionAttributeDistributed(HttpSessionModel model,
            String sName)
        {
        return !"ui-rendering".equals(sName);
        }
    } 

4.8.2 Registering a Session Distribution Controller Implementation

Once you have written your SessionDistributionController implementation, you can register it with your application by using the coherence-distributioncontroller-class configuration parameter. Note that to use the Session Distribution Controller, you must also enable the coherence-sticky-sessions parameter. Appendix A, "Coherence*Web Configuration Parameters" provides more information on these parameters.
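Assuming the CustomSessionDistributionController class from Example 4-2 (deployed in the default package), the registration in web.xml would look something like this sketch:

<context-param>
  <param-name>coherence-distributioncontroller-class</param-name>
  <param-value>CustomSessionDistributionController</param-value>
</context-param>
<context-param>
  <param-name>coherence-sticky-sessions</param-name>
  <param-value>true</param-value>
</context-param>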

4.9 Configuring Coherence*Web with Coherence*Extend

One of the deployment options for Coherence*Web is to use Coherence*Extend to connect Web container JVMs to the cluster by using TCP/IP. Consider this configuration if, for example, the Web container JVMs run on a network that does not support UDP, or if the application server tier must be kept separate from the cache server tier.

In this type of deployment, there are three types of participants: Web tier JVMs (the application servers, acting as Coherence*Extend clients), proxy JVMs (which accept the extend connections on behalf of the cluster), and storage JVMs (which physically store and manage the session data).

These are the general steps to configure Coherence*Web to use Coherence*Extend:

  1. Configure Coherence*Web to use the Optimistic Locking mode (see "Optimistic Locking (Default)").

  2. Configure a cache configuration file for the proxy and storage JVMs.

  3. Modify the Web tier cache configuration file to point to one or more of the proxy JVMs.

The following sections describe these steps in more detail.

4.9.1 Configuring Coherence*Web for Optimistic Locking

To enable the Optimistic Locking mode for your Web application, make sure the Coherence*Web configuration parameters in Table 4-3 are set to the specified values.

Table 4-3 Coherence*Web Parameter Settings for Optimistic Locking

Parameter Name                      Value
coherence-session-member-locking    false
coherence-sticky-sessions           false
coherence-preserve-attributes       false


See Appendix A, "Coherence*Web Configuration Parameters" for more information on these parameters.

4.9.2 Configuring the Cache for Proxy and Storage JVMs

The session-cache-config-server.xml file, illustrated in Example 4-3, is an example Coherence*Web cache configuration file that uses Coherence*Extend.

This session cache configuration file should be used for the proxy and storage JVMs; it contains system property overrides that allow the same file to be used for both. When used by a proxy JVM, the system properties described in Table 4-4 should be specified.

Table 4-4 System Property Values for Proxy JVMs

System Property Name                          Value
tangosol.coherence.session.localstorage       false
tangosol.coherence.session.proxy              true
tangosol.coherence.session.proxy.localhost    the host name or IP address of the NIC the proxy binds to
tangosol.coherence.session.proxy.localport    a unique port number for the proxy to bind to


When used by a storage JVM, the system properties described in Table 4-5 should be specified.

Table 4-5 System Property Values for Storage JVMs

System Property Name                       Value
tangosol.coherence.session.localstorage    true
tangosol.coherence.session.proxy           false
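As a sketch, a proxy JVM and a storage JVM might then be launched as follows (the classpath, the tangosol.coherence.cacheconfig property pointing at this file, the address and port values, and the use of com.tangosol.net.DefaultCacheServer are illustrative assumptions):

# Proxy JVM: storage disabled, proxy service enabled.
java -cp coherence.jar -Dtangosol.coherence.cacheconfig=session-cache-config.xml -Dtangosol.coherence.session.localstorage=false -Dtangosol.coherence.session.proxy=true -Dtangosol.coherence.session.proxy.localhost=192.168.0.10 -Dtangosol.coherence.session.proxy.localport=9099 com.tangosol.net.DefaultCacheServer

# Storage JVM: storage enabled, proxy service disabled.
java -cp coherence.jar -Dtangosol.coherence.cacheconfig=session-cache-config.xml -Dtangosol.coherence.session.localstorage=true -Dtangosol.coherence.session.proxy=false com.tangosol.net.DefaultCacheServer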


Example 4-3 illustrates the complete server-side session cache configuration file.

Example 4-3 session-cache-config-server.xml File

<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
<!--                                                                       -->
<!-- Server-side cache configuration descriptor for Coherence*Web over     -->
<!-- Coherence*Extend (see session-cache-config-client.xml).               -->
<!--                                                                       -->
<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
<cache-config>
  <caching-scheme-mapping>
    <!--
    The clustered cache used to store Session management data.
    -->
    <cache-mapping>
      <cache-name>session-management</cache-name>

      <scheme-name>session-distributed</scheme-name>
    </cache-mapping>

    <!--
    The clustered cache used to store ServletContext attributes.
    -->
    <cache-mapping>
      <cache-name>servletcontext-storage</cache-name>
      <scheme-name>session-distributed</scheme-name>
    </cache-mapping>
    <!--
    The clustered cache used to store Session attributes.
    -->
    <cache-mapping>
      <cache-name>session-storage</cache-name>
      <scheme-name>session-distributed</scheme-name>
    </cache-mapping>

    <!--
    The clustered cache used to store the "overflowing" (split-out due to size)
    Session attributes. Only used for the "Split" model.
    -->
    <cache-mapping>

      <cache-name>session-overflow</cache-name>
      <scheme-name>session-distributed</scheme-name>
    </cache-mapping>

    <!--
    The clustered cache used to store IDs of "recently departed" Sessions.
    -->
    <cache-mapping>
      <cache-name>session-death-certificates</cache-name>
      <scheme-name>session-certificate</scheme-name>

    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <!--
    Distributed caching scheme used by the various Session caches.
    -->
    <distributed-scheme>
      <scheme-name>session-distributed</scheme-name>
      <scheme-ref>session-base</scheme-ref>

      <backing-map-scheme>
        <local-scheme>
          <scheme-ref>unlimited-local</scheme-ref>
        </local-scheme>
      </backing-map-scheme>
    </distributed-scheme>

    <!--
    Distributed caching scheme used by the "recently departed" Session cache.
    -->
    <distributed-scheme>

      <scheme-name>session-certificate</scheme-name>
      <scheme-ref>session-base</scheme-ref>
      <backing-map-scheme>
        <local-scheme>
          <eviction-policy>HYBRID</eviction-policy>
          <high-units>4000</high-units>
          <low-units>3000</low-units>

          <expiry-delay>86400</expiry-delay>
        </local-scheme>
      </backing-map-scheme>
    </distributed-scheme>
    <!--
    "Base" Distributed caching scheme that defines common configuration.
    -->
    <distributed-scheme>
      <scheme-name>session-base</scheme-name>

      <service-name>DistributedSessions</service-name>
      <serializer>
        <class-name>com.tangosol.io.DefaultSerializer</class-name>
      </serializer>
      <thread-count>0</thread-count>
      <lease-granularity>member</lease-granularity>
      <local-storage system-property="tangosol.coherence.session.localstorage">true</local-storage>

      <partition-count>257</partition-count>
      <backup-count>1</backup-count>
      <backup-storage>
        <type>on-heap</type>
      </backup-storage>
      <backing-map-scheme>
        <local-scheme>

          <scheme-ref>unlimited-local</scheme-ref>
        </local-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>

    <!--
    Proxy scheme that Coherence*Web clients use to connect to the cluster.
    -->
    <proxy-scheme>

      <service-name>SessionProxy</service-name>
      <thread-count>10</thread-count>
      <acceptor-config>
        <serializer>
          <class-name>com.tangosol.io.DefaultSerializer</class-name>
        </serializer>
        <tcp-acceptor>

          <local-address>
            <address system-property="tangosol.coherence.session.proxy.localhost">localhost</address>
            <port system-property="tangosol.coherence.session.proxy.localport">9099</port>
            <reusable>true</reusable>
          </local-address>
        </tcp-acceptor>
      </acceptor-config>

      <autostart system-property="tangosol.coherence.session.proxy">false</autostart>
    </proxy-scheme>

    <!--
    Local caching scheme definition used by all caches that do not require an
    eviction policy.
    -->
    <local-scheme>
      <scheme-name>unlimited-local</scheme-name>
      <service-name>LocalSessionCache</service-name>
    </local-scheme>  
  </caching-schemes>

</cache-config>

4.9.3 Configuring the Cache for Web Tier JVMs

The session-cache-config-client.xml file illustrated in Example 4-4 is an example Coherence*Web cache configuration file that uses Coherence*Extend. This cache configuration file should be used by the Web tier JVMs. To use and install this file, follow these steps:

  1. Add proxy JVM hostnames/IP addresses and ports to the <remote-addresses/> section of the file. In most cases, you should include the hostname/IP address and port of all proxy JVMs for load balancing and failover.

    Note:

    The <remote-addresses> element contains the proxy server(s) that the Web container connects to. By default, the Web container picks an address at random (assuming that there is more than one address in the configuration). If the connection between the Web container and the proxy is broken, the container connects to another proxy in the list.

  2. Rename the file to session-cache-config.xml.

  3. Place the file in the WEB-INF/classes directory of your Web application. If you used the WebInstaller to install Coherence*Web, replace the existing file that was added by the WebInstaller.

Example 4-4 illustrates the complete client-side session cache configuration file.

Example 4-4 session-cache-config-client.xml File

<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
<!--                                                                       -->
<!-- Client-side cache configuration descriptor for Coherence*Web over     -->
<!-- Coherence*Extend (see session-cache-config-server.xml).               -->
<!--                                                                       -->
<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
<cache-config>
  <caching-scheme-mapping>
    <!--
    The clustered cache used to store Session management data.
    -->
    <cache-mapping>
      <cache-name>session-management</cache-name>

      <scheme-name>session-near</scheme-name>
    </cache-mapping>

    <!--
    The clustered cache used to store ServletContext attributes.
    -->
    <cache-mapping>
      <cache-name>servletcontext-storage</cache-name>
      <scheme-name>session-near</scheme-name>
    </cache-mapping>

    <!--
    The clustered cache used to store Session attributes.
    -->
    <cache-mapping>
      <cache-name>session-storage</cache-name>
      <scheme-name>session-near</scheme-name>
    </cache-mapping>

    <!--
    The clustered cache used to store the "overflowing" (split-out due to size)
    Session attributes. Only used for the "Split" model.
    -->
    <cache-mapping>

      <cache-name>session-overflow</cache-name>
      <scheme-name>session-remote</scheme-name>
    </cache-mapping>

    <!--
    The clustered cache used to store IDs of "recently departed" Sessions.
    -->
    <cache-mapping>
      <cache-name>session-death-certificates</cache-name>
      <scheme-name>session-remote</scheme-name>

    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <!--
    Near caching scheme used by the Session attribute cache. The front cache
    uses a Local caching scheme and the back cache uses a Remote caching
    scheme.
    -->
    <near-scheme>
      <scheme-name>session-near</scheme-name>
      <front-scheme>
        <local-scheme>

          <scheme-ref>session-front</scheme-ref>
        </local-scheme>
      </front-scheme>
      <back-scheme>
        <remote-cache-scheme>
          <scheme-ref>session-remote</scheme-ref>
        </remote-cache-scheme>
      </back-scheme>

      <invalidation-strategy>present</invalidation-strategy>
    </near-scheme>

    <local-scheme>
      <scheme-name>session-front</scheme-name>
      <eviction-policy>HYBRID</eviction-policy>
      <high-units>1000</high-units>

      <low-units>750</low-units>
    </local-scheme>

    <remote-cache-scheme>
      <scheme-name>session-remote</scheme-name>
      <initiator-config>
        <serializer>
          <class-name>com.tangosol.io.DefaultSerializer</class-name>

        </serializer>
        <tcp-initiator>
          <remote-addresses>
            <!-- 
            The following list of addresses should include the hostname and port
            of all running proxy JVMs. This is for both load balancing and
            failover of requests from the Web tier.
            -->
            <socket-address>
              <address>localhost</address>
              <port>9099</port>
            </socket-address>

          </remote-addresses>
        </tcp-initiator>
      </initiator-config>
    </remote-cache-scheme>
  </caching-schemes>
</cache-config>