4 Coherence*Web Session Management Features

You can configure Coherence*Web in many ways to meet the demands of your environment. Consequently, you might have to change some default configuration options. This chapter provides an in-depth look at the features that Coherence*Web supports so that you can make the appropriate configuration and deployment decisions.

4.1 Session Models

A session model describes how Coherence*Web stores session state in Coherence. Session data is managed by an HttpSessionModel object while the session collection in a Web application is managed by an HttpSessionCollection object. You must configure only the collection type in web.xml—the model is implicitly derived from the collection type. Coherence*Web includes these different session model implementations out of the box:

  • Traditional Model, which stores all session state as a single entity but serializes and deserializes attributes individually

  • Monolithic Model, which stores all session state as a single entity, serializing and deserializing all attributes as a single operation

  • Split Model, which extends the Traditional Model, but separates the larger session attributes into independent physical entities
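As an illustrative sketch, the collection type might be selected in web.xml similar to the following fragment. (The coherence-sessioncollection-class parameter name is an assumption here; see Appendix A for the authoritative parameter descriptions.)

<context-param>
  <param-name>coherence-sessioncollection-class</param-name>
  <param-value>com.tangosol.coherence.servlet.SplitHttpSessionCollection</param-value>
</context-param>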

Note:

In general, Web applications that are part of the same Coherence cluster must use the same session model type. Inconsistent configurations may result in deserialization errors.

Figure 4-1 illustrates the three session models.

Figure 4-1 Traditional, Monolithic, and Split Session Models

Traditional, Monolithic, and Split Session Models
Description of "Figure 4-1 Traditional, Monolithic, and Split Session Models"

4.1.1 Traditional Model

TraditionalHttpSessionModel and TraditionalHttpSessionCollection manage all of the HTTP session data for a particular session in a single Coherence cache entry, but manage each HTTP session attribute (particularly, its serialization and deserialization) separately.

This model is suggested for applications with relatively small HTTP session objects (10KB or less) that do not have issues with object-sharing between session attributes. (Object-sharing between session attributes occurs when multiple attributes of a session have references to the same exact object, meaning that separate serialization and deserialization of those attributes cause multiple instances of that shared object to exist when the HTTP session is later deserialized.)

Figure 4-2 Traditional Session Model

Traditional Session Model
Description of "Figure 4-2 Traditional Session Model"

4.1.2 Monolithic Model

MonolithicHttpSessionModel and MonolithicHttpSessionCollection are similar to the Traditional Model, except that they solve the shared object issue by serializing and deserializing all attributes into a single object stream. As a result, the Monolithic Model often does not perform as well as the Traditional Model.

Figure 4-3 Monolithic Session Model

Monolithic Session Model
Description of "Figure 4-3 Monolithic Session Model"

4.1.3 Split Model

SplitHttpSessionModel and SplitHttpSessionCollection store the core HTTP session metadata and all of the small session attributes in the same manner as the Traditional Model, thus ensuring high performance by keeping that block of binary session data small. All large attributes are split out into separate cache entries to be managed individually, thus supporting very large HTTP session objects without unduly increasing the amount of data that must be accessed and updated within the cluster on each request. In other words, only the large attributes that are modified within a particular request incur any network overhead for their updates, and (because it uses Near Caching) the Split Model generally does not incur any network overhead for accessing either the core HTTP session data or any of the session attributes.

Figure 4-4 Split Session Model

Split Session Model
Description of "Figure 4-4 Split Session Model"

4.1.4 Session Model Recommendations

The following list offers some recommendations on which session model to choose for your applications:

  • The Split Model is the recommended session model for most applications.

  • The Traditional Model may be more optimal for applications that are known to have small HTTP session objects.

  • The Monolithic Model is designed to solve a specific class of problems related to multiple session attributes that have references to the same shared object, and that must maintain that object as a shared object.

"Session Management for Clustered Applications" in Getting Started with Oracle Coherence, provides information on the behavior of these models in a clustered environment.

Note:

See Appendix A, "Coherence*Web Context Parameters," for descriptions of the parameters used to configure session models.

4.2 Session and Session Attribute Scoping

Coherence*Web allows fine-grained control over how both session data and session attributes are scoped (or "shared") across application boundaries:

4.2.1 Session Scoping

Coherence*Web allows session data to be shared by different Web applications deployed in the same or different Web containers. To do so, you must correctly configure the session cookie context parameters and make the classes of objects stored in session attributes available to each Web application.

If you are using cookies to store session IDs (that is, you are not using URL rewriting), you must set the session cookie path to a common context path for all Web applications that share session data. For example, to share session data between two Web applications registered under the contexts paths /web/HRPortal and /web/InWeb, you should set the coherence-session-cookie-path parameter to /web. On the other hand, if the two Web applications are registered under the context paths /HRPortal and /InWeb, you should set the coherence-session-cookie-path parameter to /.

If the Web applications that you would like to share session data are deployed on different Web containers running on different machines (that are not behind a common load balancer), you must also configure the session cookie domain to a domain shared by the machines. For example, to share session data between two Web applications running on server1.mydomain.com and server2.mydomain.com, you must set the coherence-session-cookie-domain context parameter to .mydomain.com.
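For example, both applications in the scenarios above might declare the following context parameters in their web.xml files (an illustrative fragment using the parameter names described in this section):

<context-param>
  <param-name>coherence-session-cookie-path</param-name>
  <param-value>/web</param-value>
</context-param>
<context-param>
  <param-name>coherence-session-cookie-domain</param-name>
  <param-value>.mydomain.com</param-value>
</context-param>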

To correctly serialize or deserialize objects stored in shared sessions, the classes of all objects stored in session attributes must be available to Web applications that share session data.

Note:

For advanced use cases where EAR cluster node-scoping or application server JVM cluster scoping is employed and you do not want session data shared across individual Web applications see "Preventing Web Applications from Sharing Session Data".

4.2.1.1 Preventing Web Applications from Sharing Session Data

Sometimes you may want to explicitly prevent HTTP session data from being shared by different Java EE applications that participate in the same Coherence cluster. For example, assume you have two applications, HRPortal and InWeb, that share cached data in their EJB tiers but use different session data. In this case, it is desirable for both applications to be part of the same Coherence cluster, but undesirable for both applications to use the same clustered service for session data. One way to do this is to use ApplicationScopeController to define the scope of an application's attributes. "Session Attribute Scoping" describes this technique. Another way is to specify a unique session cache service name for each application.

To specify a unique session cache service name for each application:

  1. Locate the <service-name> elements in each session-cache-config.xml file in your application.

  2. Set the elements to a unique value for each application.

    This forces each application to use a separate clustered service for session data.

  3. Include the modified session-cache-config.xml file with the application.

Example 4-1 illustrates a sample session-cache-config.xml file for an HRPortal application. To prevent the HRPortal application from sharing session data with the InWeb application, rename the <service-name> element for the replicated scheme to ReplicationSessionsMiscHRP. Rename the <service-name> element for the distributed schemes to DistributedSessionsHRP.

Example 4-1 Configuration to Prevent Applications from Sharing Session Data

<replicated-scheme>
  <scheme-name>default-replicated</scheme-name>
  <service-name>ReplicatedSessionsMisc</service-name> <!-- rename this to ReplicatedSessionsMiscHRP -->
  <backing-map-scheme>
    <class-scheme>
      <scheme-ref>default-backing-map</scheme-ref>
    </class-scheme>
  </backing-map-scheme>
</replicated-scheme>

<distributed-scheme>
  <scheme-name>session-distributed</scheme-name>
  <service-name>DistributedSessions</service-name> <!-- rename this to DistributedSessionsHRP -->
  <lease-granularity>member</lease-granularity>
  <backing-map-scheme>
    <class-scheme>
      <scheme-ref>default-backing-map</scheme-ref>
    </class-scheme>
  </backing-map-scheme>
</distributed-scheme>

<distributed-scheme>
  <scheme-name>session-certificate</scheme-name>
  <service-name>DistributedSessions</service-name> <!-- rename this to DistributedSessionsHRP -->
  <lease-granularity>member</lease-granularity>
  <backing-map-scheme>
    <local-scheme>
      <scheme-ref>session-certificate-autoexpiring</scheme-ref>
    </local-scheme>
  </backing-map-scheme>
</distributed-scheme>

4.2.1.2 Working with Multiple Cache Configurations

If two or more applications run under Coherence*Web, they may each have a different cache configuration. In this case, the cache configuration on the cache server must contain the union of these cache configurations, regardless of whether you run in storage-enabled or storage-disabled mode. This allows all of the applications to be supported in the same cache cluster.
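As a sketch, a cache server configuration supporting two such applications might carry both sets of schemes side by side. (The scheme and service names below are illustrative, following the renaming convention of Example 4-1; the elided elements are application-specific.)

<caching-schemes>
  <distributed-scheme>
    <scheme-name>session-distributed-hrp</scheme-name>
    <service-name>DistributedSessionsHRP</service-name>
    ...
  </distributed-scheme>
  <distributed-scheme>
    <scheme-name>session-distributed-inweb</scheme-name>
    <service-name>DistributedSessionsInWeb</service-name>
    ...
  </distributed-scheme>
</caching-schemes>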

4.2.1.3 Keeping Session Cookies Separate

If you are using cookies to store session IDs, you must ensure that session cookies created by one application are not propagated to another application. To do this, you must set each application's session cookie domain and path in their web.xml file. To prevent cookies from being propagated, ensure that no two applications share the same context path.

For example, assume you have two Web applications registered under the context paths /web/HRPortal and /web/InWeb. To prevent the Web applications from sharing session data through cookies, set the cookie path to /web/HRPortal in one application, and set the cookie path to /web/InWeb in the other application.

If your applications are deployed on different Web containers running on separate machines, then you can configure the cookie domain to ensure that they are not in the same domain.

For example, assume you have two Web applications running on server1.mydomain.com and server2.mydomain.com. To prevent session cookies from being shared between them, set the cookie domain in one application to server1.mydomain.com, and set the cookie domain in the other application to server2.mydomain.com.
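The cookie paths in the earlier example might be set as follows (illustrative fragments using the coherence-session-cookie-path context parameter described in this chapter):

<!-- web.xml for the application at /web/HRPortal -->
<context-param>
  <param-name>coherence-session-cookie-path</param-name>
  <param-value>/web/HRPortal</param-value>
</context-param>

<!-- web.xml for the application at /web/InWeb -->
<context-param>
  <param-name>coherence-session-cookie-path</param-name>
  <param-value>/web/InWeb</param-value>
</context-param>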

4.2.2 Session Attribute Scoping

When sessions are shared across Web applications, there are many instances where an application may want to scope individual session attributes so that they are either globally visible (that is, all Web applications can see and modify these attributes) or scoped to an individual Web application (that is, not visible to any instance of another application).

Coherence*Web provides the ability to control this behavior by using the AttributeScopeController interface. This optional interface is used to selectively scope attributes in cases when a session may be shared across multiple applications. This enables different applications to potentially use the same attribute names for application-scope state without accidentally reading, updating, or removing other applications' attributes. In addition to having application-scoped information in the session, it allows the session to contain global (unscoped) information that is readable, updatable, and removable by any of the applications that share the session.

Two implementations of the AttributeScopeController interface are available out of the box: ApplicationScopeController and GlobalScopeController. The GlobalScopeController implementation does not scope attributes, while ApplicationScopeController scopes all attributes to the application by prepending the name of the application to each attribute name.

You can use the coherence-application-name context parameter to specify the name of the application (and the Web module in which the application appears). The ApplicationScopeController will use the name of the application to scope the attributes. If you do not configure this parameter, then Coherence*Web uses the name of the class loader instead. For more information, see the description of coherence-application-name in Table 2-1.
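For example (an illustrative fragment; HRPortal is a sample application name):

<context-param>
  <param-name>coherence-application-name</param-name>
  <param-value>HRPortal</param-value>
</context-param>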

Note:

After a configured AttributeScopeController is created, it is initialized with the name of the Web application, which it can use to qualify attribute names. Use the coherence-application-name context parameter to configure the name of your Web application.

4.2.2.1 Sharing Session Information Between Multiple Applications

Coherence*Web allows multiple applications to share the same session object. To do this, the session attributes must be visible to all applications. You must also specify which URLs served by WebLogic Server will be able to receive cookies.

To allow the applications to share and modify the session attributes, reference the GlobalScopeController class (com.tangosol.coherence.servlet.AbstractHttpSessionCollection$GlobalScopeController) as the value of the coherence-scopecontroller-class context parameter in the web.xml file. GlobalScopeController is an implementation of the com.tangosol.coherence.servlet.HttpSessionCollection$AttributeScopeController interface that allows individual session attributes to be globally visible.

Example 4-2 illustrates the GlobalScopeController interface specified in the web.xml file.

Example 4-2 GlobalScopeController Specified in the web.xml File

<?xml version="1.0" encoding="UTF-8"?>
<web-app>
  ...
  <context-param>
    <param-name>coherence-scopecontroller-class</param-name>
    <param-value>com.tangosol.coherence.servlet.AbstractHttpSessionCollection$GlobalScopeController</param-value>
  </context-param>
  ...
</web-app>

4.3 Cluster Node Isolation

There are several different ways in which you can deploy Coherence*Web. One of the things to consider when deciding on a deployment option is cluster node isolation. Cluster node isolation considers:

  • the number of Coherence nodes that are created within an application server JVM

  • where the Coherence library is deployed

Applications can be application server-scoped, EAR-scoped, or WAR-scoped. This section describes these considerations. For detailed information on the XML configuration for each of these options, see "Configure Cluster Nodes (WebLogic Server 10.3.3 and Later)".

4.3.1 Application Server-Scoped Cluster Nodes

With this configuration, all deployed applications in a container using Coherence*Web become part of one Coherence node. This configuration produces the smallest number of Coherence nodes in the cluster (one for each Web container JVM) and, since the Coherence library (coherence.jar) is deployed in the container's classpath, only one copy of the Coherence classes is loaded into the JVM. This minimizes the use of resources. On the other hand, since all applications are using the same cluster node, all applications are affected if one application misbehaves.

Figure 4-5 Application Server-Scoped Cluster

Application Server-Scoped Cluster
Description of "Figure 4-5 Application Server-Scoped Cluster"

Requirements for using this configuration are:

  • each deployed application must use the same version of Coherence and participate in the same cluster

  • the classes of objects placed in the HTTP session must be available

"Configuring Application Server-Scoped Cluster Nodes" describes the XML configuration for application server-scoped cluster nodes.

Note:

The application server-scoped cluster node configuration should be considered very carefully and never used in environments where the interaction between applications is unknown or unpredictable.

An example of such an environment may be a deployment where multiple application groups are deploying applications written independently, without carefully coordinating and enforcing their conventions and naming standards. With this configuration, all applications are part of the same cluster and the likelihood of collisions between namespaces for caches, services and other configuration settings is quite high and may lead to unexpected results.

For these reasons, Oracle Coherence strongly recommends that you use EAR-scoped and WAR-scoped cluster node configurations. If you are in doubt regarding which deployment topology to choose, or if this warning applies to your deployment, then do not choose the application server-scoped cluster node configuration.

4.3.2 EAR-Scoped Cluster Nodes

With this configuration, all deployed applications within each EAR become part of one Coherence node. This configuration produces one Coherence node for each deployed EAR that uses Coherence*Web. Since the Coherence library (coherence.jar) is deployed in the application's classpath, only one copy of the Coherence classes is loaded for each EAR. Since all Web applications in the EAR use the same cluster node, all Web applications in the EAR are affected if one of the Web applications misbehaves.

Figure 4-6 EAR-Scoped Cluster

EAR-Scoped Cluster
Description of "Figure 4-6 EAR-Scoped Cluster"

EAR-scoped cluster nodes reduce the deployment effort as no changes to the application server classpath are required. This option is also ideal if you plan on deploying only one EAR to an application server.

Requirements for using this configuration are:

  • the Coherence library (coherence.jar) must be deployed as part of the EAR file and listed as a Java module in META-INF/application.xml

  • objects placed into the HTTP session must have their classes deployed as a Java EAR module in a similar fashion
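A META-INF/application.xml satisfying these requirements might look similar to the following sketch (the display name, WAR name, and context root are hypothetical; the <java> module element is the standard Java EE mechanism for listing a library as an application module):

<application>
  <display-name>HRPortal</display-name>
  <module>
    <java>coherence.jar</java>
  </module>
  <module>
    <web>
      <web-uri>hrportal.war</web-uri>
      <context-root>/web/HRPortal</context-root>
    </web>
  </module>
</application>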

"Configuring EAR-Scoped Cluster Nodes" describes the XML configuration for EAR-scoped cluster nodes.

4.3.3 WAR-Scoped Cluster Nodes

With this configuration, each deployed Web application becomes its own Coherence node. This configuration produces the largest number of Coherence nodes in the cluster (one for each deployed WAR that uses Coherence*Web) and since the Coherence library (coherence.jar) is deployed in the Web application's classpath, there will be as many copies of the Coherence classes loaded as there are deployed WARs. This results in the largest resource utilization out of the three options. However, since each deployed Web application is its own cluster node, Web applications are completely isolated from other potentially misbehaving Web applications.

WAR-scoped cluster nodes reduce the deployment effort as no changes to the application server classpath are required. This option is also ideal if you plan on deploying only one WAR to an application server.

Figure 4-7 WAR-Scoped Clusters

WAR-Scoped Clusters
Description of "Figure 4-7 WAR-Scoped Clusters"

Requirements for using this configuration are:

  • the Coherence library (coherence.jar) must be deployed as part of the WAR file (usually in WEB-INF/lib)

  • objects placed into the HTTP session must have their classes deployed as part of the WAR file (in WEB-INF/lib or WEB-INF/classes)

"Configuring WAR-Scoped Cluster Nodes" describes the XML configuration for WAR-scoped cluster nodes.

4.4 Session Locking Modes

Oracle Coherence provides these configuration options for concurrent access to HTTP sessions.

  • Optimistic Locking, which allows concurrent access to a session by multiple threads in a single JVM or multiple JVMs while prohibiting concurrent modification. This is the default locking mode.

  • Last Write Wins Locking, which is a variation on Optimistic Locking. This allows concurrent access to a session by multiple threads in a single JVM or multiple JVMs. In this case, the last write is allowed to win.

  • Member Locking, which allows concurrent access and modification of a session by multiple threads in the same JVM while prohibiting concurrent access by threads in different JVMs.

  • Application Locking, which allows concurrent access and modification of a session by multiple threads in the same Web application instance while prohibiting concurrent access by threads in different Web application instances.

  • Thread Locking, which prohibits concurrent access and modification of a session by multiple threads in a single JVM.

Note:

Generally, Web applications that are part of the same cluster must use the same locking mode and sticky session optimizations setting. Inconsistent configurations may result in deadlock.

For more information on the parameters described in this section, see Appendix A, "Coherence*Web Context Parameters."

4.4.1 Optimistic Locking

Coherence*Web and the Coherence*Web SPI are configured with Optimistic Locking by default. The Optimistic Locking mode allows multiple Web container threads in one or more JVMs to access the same session concurrently. This setting does not use explicit locking; rather an optimistic approach is used to detect and prevent concurrent updates upon completion of an HTTP request that modifies the session. When Coherence*Web detects a concurrent modification, a ConcurrentModificationException is thrown to the application; therefore an application must be prepared to handle this exception in an appropriate manner. To view the exception, set the weblogic.debug.DebugHttpSessions system property to true in the container's startup script (for example: -Dweblogic.debug.DebugHttpSessions=true).

Optimistic Locking mode can be configured by setting the coherence-session-member-locking context parameter to false.

4.4.1.1 Last Write Wins Locking

Last Write Wins Locking mode is a variation on the Optimistic Locking mode. It allows multiple Web container threads in one or more JVMs to access the same session concurrently. This setting does not use explicit locking; it does not prevent concurrent updates upon completion of an HTTP request that modifies the session. Instead, the last write is allowed to modify the session.

Last Write Wins Locking mode can be configured by setting the coherence-session-locking context parameter to false (the default). This value allows concurrent modification to sessions, with the last update winning. If the coherence-session-app-locking, coherence-session-member-locking, or coherence-session-thread-locking context parameter is set to true, this value is ignored (it is treated as logically true).

4.4.2 Member Locking

Member Locking mode allows multiple Web container threads in the same cluster node to access and modify the same session concurrently, but prohibits concurrent access by threads in different JVMs. This is accomplished by acquiring a member-level lock for an HTTP session when the session is acquired. For more information on member-level locks, see <lease-granularity> in the "distributed-scheme" section of the Developer's Guide for Oracle Coherence.

Member Locking mode can be configured by setting the coherence-session-member-locking context parameter to true.
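For example, the following web.xml fragment enables Member Locking (using the context parameter named in this section):

<context-param>
  <param-name>coherence-session-member-locking</param-name>
  <param-value>true</param-value>
</context-param>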

4.4.3 Application Locking

Application Locking mode restricts access (and modification) to a session to threads in a single Web application instance at a time. This is accomplished by acquiring both a member-level and application-level lock for an HTTP session when the session is acquired and releasing both locks upon completion of the request. For more information on member-level locks, see <lease-granularity> in the "distributed-scheme" section of the Developer's Guide for Oracle Coherence.

Application Locking mode can be configured by setting the coherence-session-app-locking context parameter to true. Note that setting this to true will imply a setting of true for coherence-session-member-locking.

4.4.4 Thread Locking

Thread Locking mode restricts access (and modification) to a session to a single thread in a single JVM at a time. This is accomplished by acquiring a member-level, an application-level, and a thread-level lock for an HTTP session when the session is acquired, and releasing all three locks upon completion of the request. For more information on member-level locks, see <lease-granularity> in the "distributed-scheme" section of the Developer's Guide for Oracle Coherence.

Thread Locking mode can be configured by setting the coherence-session-thread-locking context parameter to true. Note that setting this to true implies a setting of true for both coherence-session-member-locking and coherence-session-app-locking.

4.4.5 Troubleshooting Locking in HTTP Sessions

Enabling Member, Application, or Thread Locking for HTTP session access means that Coherence*Web acquires a clusterwide lock for every HTTP request that requires access to a session; the exception to this is when sticky load balancing is available and the Coherence*Web sticky session optimization is enabled. By default, threads that attempt to access a locked session (locked by a thread in a different JVM) block until the lock can be acquired. If you want to enable a timeout for lock acquisition, configure it with the tangosol.coherence.servlet.lock.timeout system property in the container's startup script (for example: -Dtangosol.coherence.servlet.lock.timeout=30s).

Many Web applications do not have such a strict concurrency requirement. For these applications, using the Optimistic Locking mode has the following advantages:

  • The overhead of obtaining and releasing cluster wide locks for every HTTP request is eliminated.

  • Requests can be load balanced away from failing or unresponsive JVMs to healthy JVMs without requiring the unresponsive JVM to release the clusterwide lock on the session.

Coherence*Web provides a diagnostic invocation service that is executed when a member cannot acquire the cluster lock for a session. You can control whether this service is enabled by setting the coherence-session-log-threads-holding-lock context parameter. If this context parameter is set to true (default), then the invocation service will cause the member that has ownership of the session to log the stack trace of the threads that are currently holding the lock.

Like all Coherence*Web messages, the Coherence logging-config operational configuration element controls how the message is logged. For more information on how to configure logging in Coherence, see logging-config, in the "Operation Configuration Elements" appendix of the Developer's Guide for Oracle Coherence.

4.4.6 Enabling Sticky Session Optimizations

If Member, Application, or Thread Locking is a requirement for a Web application that resides behind a sticky load balancer, Coherence*Web provides an optimization for obtaining the clusterwide lock required for HTTP session access. By definition, a sticky load balancer attempts to route each request for a given session to the same application server JVM that it previously routed requests to for that same session. This should be the same application server JVM that created the session. The sticky session optimization takes advantage of this behavior by retaining the clusterwide lock for a session until the session expires or until it is asked to release it. If, for whatever reason, the sticky load balancer sends a request for the same session to another application server JVM, that JVM will ask the JVM that owns the lock on the session to release the lock as soon as possible. For more information, see the SessionOwnership entry in Table C-2.

Sticky session optimization can be enabled by setting the coherence-sticky-sessions context parameter to true. This setting requires that member, application, or thread locking is enabled.
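Because the optimization requires one of the locking modes, a web.xml fragment enabling it might pair the two parameters as follows (an illustrative sketch using Member Locking):

<context-param>
  <param-name>coherence-session-member-locking</param-name>
  <param-value>true</param-value>
</context-param>
<context-param>
  <param-name>coherence-sticky-sessions</param-name>
  <param-value>true</param-value>
</context-param>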

4.5 Deployment Topologies

Coherence*Web supports most of the same deployment topologies as Coherence, including in-process, out-of-process (that is, client/server deployment), and bridging clients and servers over Coherence*Extend. The major supported deployment topologies are described in the following sections.

  • In-Process, also known as "local storage enabled", is where session data is stored "in-process" with the application server

  • Out-of-Process, also known as "local storage disabled", is where the application servers are configured as cache clients and dedicated JVMs run as cache servers, physically storing and managing the clustered data

  • Out-of-Process with Coherence*Extend, where communication between the application server tier and the cache server tier are over Coherence*Extend (TCP/IP)

4.5.1 In-Process

The In-Process topology is not recommended for production use and is supported mainly for development and testing. By storing the session data in-process with the application server, this topology is very easy to get up and running quickly for smoke tests, development and testing. In this topology, local storage is enabled (that is, tangosol.coherence.distributed.localstorage=true).

Figure 4-8 In-Process Deployment Topology

In-Process Deployment Topology
Description of "Figure 4-8 In-Process Deployment Topology"

4.5.2 Out-of-Process

For the Out-of-Process deployment topology, the application servers (that is, the application server tier) are configured as cache clients (that is, tangosol.coherence.distributed.localstorage=false), and dedicated JVMs run as cache servers, physically storing and managing the clustered data.

This approach has these benefits:

  • Session data storage is off-loaded from the application server tier to the cache server tier. This reduces heap usage, garbage collection times, and so on.

  • It allows for the two tiers to be scaled independently of one another. If more application processing power is needed, just start more application servers. If more session storage capacity is needed, just start more cache servers.

The Out-of-Process topology is the default recommendation of Oracle Coherence due to its flexibility.

Figure 4-9 Out of Process Deployment Topology

Out of Process Deployment Topology
Description of "Figure 4-9 Out of Process Deployment Topology"

4.5.3 Out-of-Process with Coherence*Extend

The Out-of-Process with Coherence*Extend topology is similar to the Out-of-Process topology except that the communication between the application server tier and the cache server tier are over Coherence*Extend (TCP/IP). For information on configuring this scenario, see "Configuring Coherence*Web with Coherence*Extend".

This approach has the same benefits as the Out-of-Process topology and the ability to segment deployment of application servers and cache servers. This is ideal in an environment where application servers are on a network that does not support UDP. The cache servers can be set up in a separate dedicated network, with the application servers connecting to the cluster by using TCP.

Figure 4-10 Out-of-Process with Coherence*Extend Deployment Topology

Out-of-Process with Coherence*Extend Topology
Description of "Figure 4-10 Out-of-Process with Coherence*Extend Deployment Topology"

4.6 Managing and Monitoring Applications with JMX

Note:

To enable Coherence*Web JMX Management and Monitoring, you must set up the Coherence Clustered JMX Framework. See the configuration and installation instructions in How to Manage Coherence with JMX in the Developer's Guide for Oracle Coherence.

The management attributes and operations for Web applications that use Coherence*Web for HTTP session management are exposed through the HttpSessionManagerMBean interface (com.tangosol.coherence.servlet.management.HttpSessionManagerMBean).

During startup, each Coherence*Web Web application registers a single instance of HttpSessionManagerMBean. The MBean is unregistered when the Web application shuts down. Table 4-1 describes the object name that the MBean uses for registration.

Table 4-1 Object Name for the HttpSessionManagerMBean

Managed Bean Object Name

HttpSessionManagerMBean

type=HttpSessionManager, nodeId=cluster node id, appId=web application id
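
Given the registration pattern above, the MBean can be located through the standard JMX API. The following sketch queries the platform MBeanServer for any registered HttpSessionManager MBeans and prints one attribute from each. The "Coherence" domain name and the local-MBeanServer lookup are assumptions; in a managed deployment you would typically connect through a remote MBeanServerConnection instead.

```java
import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class SessionMBeanQuery {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // Match every registered HttpSessionManager MBean, regardless of
        // nodeId or appId. The "Coherence" domain is an assumption; use the
        // domain under which your deployment registers its MBeans.
        ObjectName pattern = new ObjectName("Coherence:type=HttpSessionManager,*");
        Set<ObjectName> names = server.queryNames(pattern, null);
        for (ObjectName name : names) {
            System.out.println(name + " SessionCacheName="
                    + server.getAttribute(name, "SessionCacheName"));
        }
    }
}
```

If no Coherence*Web application is running in the JVM, the query simply returns an empty set.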


Table 4-2 describes the information that the HttpSessionManagerMBean provides. All of the names represent attributes, except resetStatistics, which is an operation.

Several of the MBean attributes use the following prefixes:

  • LocalSession, which indicates a session that is not distributed to all members of the cluster. The session remains "local" to the originating server until a later point in the life of the session.

  • LocalAttribute, which indicates a session attribute that is not distributed to all members of the cluster.

  • Overflow, which is typically a larger and slower back-end cache that catches entries evicted from a faster front-end cache.

Table 4-2 Information Returned by the HttpSessionManagerMBean

Name Data Type Description

AverageReapDuration

long

The average reap duration (the time it takes to complete a reap cycle) in milliseconds, since the statistic was reset. See "Getting Session Reaper Performance Statistics".

CollectionClassName

String

The fully qualified class name of the HttpSessionCollection implementation in use. The HttpSessionCollection interface is an abstract model for a collection of HttpSessionModel objects. The interface is not at all concerned with how the sessions are communicated between the clients and the servers.

FactoryClassName

String

The fully qualified class name of the Factory implementation in use. The SessionHelper.Factory is used by the SessionHelper to obtain objects that implement various important parts of the Servlet specification. It can be placed in front of the application in place of the application server's own objects, thus changing the "apparent implementation" of the application server itself (for example, adding clustering).

LastReapDuration

long

The amount of time, in milliseconds, it took for the last reap cycle to finish. See "Getting Session Reaper Performance Statistics".

LocalAttributeCacheName

String

The name of the local cache that stores non-distributed session attributes. If the attribute displays null, then local session attribute storage is disabled.

LocalAttributeCount

Integer

The number of non-distributed session attributes stored in the local session attribute cache. If the attribute displays -1, then local session attribute storage is disabled.

LocalSessionCacheName

String

The name of the local cache that stores non-distributed sessions. If the attribute displays null, then local session storage is disabled.

LocalSessionCount

Integer

The number of non-distributed sessions stored in the local session cache. If the attribute displays -1, then local session storage is disabled.

MaxReapedSessions

long

The maximum number of sessions reaped in a reap cycle since the statistic was reset. See "Getting Session Reaper Performance Statistics".

NextReapCycle

java.lang.Date

The time, expressed as a java.lang.Date, for the next reap cycle. See "Getting Session Reaper Performance Statistics".

OverflowAverageSize

Integer

The average size (in bytes) of the session attributes stored in the "overflow" clustered cache since the last time statistics were reset. If the attribute displays -1, then a SplitHttpSessionCollection is not in use.

OverflowCacheName

String

The name of the clustered cache that stores the "large attributes" that exceed a certain size and thus are determined to be more efficiently managed as separate cache entries and not as part of the serialized session object itself. Null is displayed if a SplitHttpSessionCollection is not in use.

OverflowMaxSize

Integer

The maximum size (in bytes) of a session attribute stored in the "overflow" clustered cache since the last time statistics were reset. The attribute displays -1 if a SplitHttpSessionCollection is not in use.

OverflowThreshold

Integer

The minimum length (in bytes) that the serialized form of an attribute value must reach before it is stored in the separate "overflow" cache that is reserved for large attributes. The attribute displays -1 if a SplitHttpSessionCollection is not in use.

OverflowUpdates

Integer

The number of updates to session attributes stored in the "overflow" clustered cache since the last time statistics were reset. The attribute displays -1 if a SplitHttpSessionCollection is not in use.

ReapedSessions

long

The number of sessions reaped during the last cycle. See "Getting Session Reaper Performance Statistics".

ReapedSessionsTotal

long

The number of expired sessions that have been reaped since the statistic was reset. See "Getting Session Reaper Performance Statistics".

ServletContextCacheName

String

The name of the clustered cache that stores javax.servlet.ServletContext attributes. The attribute displays null if the ServletContext is not clustered.

ServletContextName

String

The name of the Web application ServletContext.

SessionAverageLifetime

Integer

The average lifetime (in seconds) of session objects invalidated (either due to expiration or to an explicit invalidation) since the last time statistics were reset.

SessionAverageSize

Integer

The average size (in bytes) of session objects placed in the session storage clustered cache since the last time statistics were reset.

SessionCacheName

String

The name of the clustered cache that stores serialized session objects.

SessionIdLength

Integer

The length (in characters) of generated session IDs.

SessionMaxSize

Integer

The maximum size (in bytes) of a session object placed in the session storage clustered cache since the last time statistics were reset.

SessionMinSize

Integer

The minimum size (in bytes) of a session object placed in the session storage clustered cache since the last time statistics were reset.

SessionStickyCount

Integer

The number of session objects that are pinned to this instance of the Web application. The attribute displays -1 if sticky session optimizations are disabled.

SessionTimeout

Integer

The session expiration time (in seconds). The attribute displays -1 if sessions never expire.

SessionUpdates

Integer

The number of updates to session objects stored in the session storage clustered cache since the last time statistics were reset.

resetStatistics (operation)

void

Reset the session management statistics.


Figure 4-11 illustrates the HttpSessionManagerMBean as it is displayed in the JConsole browser.

Figure 4-11 HttpSessionManagerMBean Displayed in the JConsole Browser

Http Session Manager MBean in the JConsole Browser
Description of "Figure 4-11 HttpSessionManagerMBean Displayed in the JConsole Browser"

4.7 Running Performance Reports

Coherence includes a JMX-based reporting utility known as the Reporter. The Reporter provides several preconfigured reports that help administrators and developers manage capacity and troubleshoot problems. These reports are specially tuned for Coherence*Web:

  • Web Session Storage Report, which records statistics on the activity between the cluster and the cache where the cluster's session objects and data are stored.

  • Web Session Overflow Report, which records statistics on the activity between the cluster and the cache where session objects and data are allowed to overflow from the Web session storage cache.

  • Web Report, which records information about Coherence*Web activity for the cluster.

  • Web Service Report, which records information on the service running the Coherence*Web application.

The Coherence*Web reports should be run as part of a batch report. They are defined in both the report-web-group.xml and the comprehensive report-all.xml batch reports. You can also include them in a custom batch report. The Coherence*Web reports are not defined in the default report group batch file, report-group.xml.

The Reporter runs the report-group.xml batch report by default. Use the tangosol.coherence.management.report.configuration system property to run report-web-group.xml, report-all.xml, or a custom batch report instead. Example 4-3 illustrates a command line where the property is used to change the report group batch file that is run to report-web-group.xml.

Example 4-3 Specifying a Report Group on the Command Line

java -Dcom.sun.management.jmxremote
-Dtangosol.coherence.management=all
-Dtangosol.coherence.management.remote=true
-Dtangosol.coherence.management.report.autostart=false
-Dtangosol.coherence.management.report.distributed=false
-Dtangosol.coherence.management.report.configuration=reports/report-web-group.xml
-jar coherence.jar

The report-web-group.xml, report-all.xml, and report-group.xml report group batch files can be found in the reports folder in coherence.jar.

Note:

You can find a detailed discussion of the Reporter, including configuring the Reporter, running preconfigured reports, and creating custom reports, in the chapters under Managing Coherence in the Developer's Guide for Oracle Coherence.

4.7.1 Web Session Storage Report

The Web Session Storage report records statistics on the activity between the cluster and the cache where session objects and data are stored. The statistics include information on the number of puts, gets, and prunes performed on the session storage cache, and the amount of time spent on these operations.

The report is a tab-delimited file that is prefixed with the date and hour in YYYYMMDDHH format and post-fixed with -session-storage.txt. For example, 2010013113-session-storage.txt would be created on January 31, 2010 at 1:00 pm. Table 4-3 describes the contents of the Web Session Storage report.
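
The file-name convention can be expressed concisely in code. The helper below is hypothetical (not part of Coherence) and simply builds a report file name from a report time and suffix using the YYYYMMDDHH pattern described above:

```java
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Date;

public class ReportFileName {
    /** Builds a report file name: date in yyyyMMddHH format plus the report suffix. */
    static String fileName(Date reportTime, String suffix) {
        return new SimpleDateFormat("yyyyMMddHH").format(reportTime) + suffix;
    }

    public static void main(String[] args) {
        Calendar c = Calendar.getInstance();
        c.set(2010, Calendar.JANUARY, 31, 13, 0, 0); // January 31, 2010 at 1:00 pm
        System.out.println(fileName(c.getTime(), "-session-storage.txt"));
        // prints 2010013113-session-storage.txt
    }
}
```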

Table 4-3 Contents of the Web Session Storage Report

Column Data Type Description

Batch Counter

long

A sequential counter to help integrate information between related files. This value resets when the Reporter restarts and is not consistent across nodes. However, it is helpful when trying to integrate files.

Cache Name

String

This value is always session-storage. It is used to maintain consistency with the Cache Utilization report.

Evictions

long

The total number of sessions that have been evicted for the cache across the cluster since the last time the report was executed.

Report Time

Date

The system time when the report executed.

Tier

String

Value can be either front or back. Describes whether the cache resides in the front tier (local cache) or back tier (remote cache).

TotalFailures

long

The total number of session storage write failures for the cache across the cluster since the last time the report was executed.

TotalGets

long

The total number of session gets across the cluster since the last time the report was executed.

TotalGetsMillis

long

The total number of milliseconds spent per get() invocation (GetsMillis) to get the sessions across the cluster since the last time the report was executed.

TotalHits

long

The total number of session hits across the cluster since the last time the report was executed.

TotalHitsMillis

long

The total number of milliseconds spent per get() invocation that is a hit (HitsMillis) for the session storage across the cluster since the last time the report was executed.

TotalMisses

long

The total number of session gets that returned misses for the cache across the cluster since the last time the report was executed.

TotalMissesMillis

long

The total number of milliseconds spent per get() invocation that is a miss (MissesMillis) for the session storage across the cluster since the last time the report was executed.

TotalPrunes

long

The total number of times the session storage cache has been pruned across the cluster since the last time the report was executed.

TotalPrunesMillis

long

The total number of milliseconds spent for the prune operations (PrunesMillis) to prune the session storage cache across the cluster since the last time the report was executed.

TotalPuts

long

The total number of session updates (puts) across the cluster since the last time the report was executed.

TotalPutsMillis

long

The total number of milliseconds spent per put() invocation (PutsMillis) to update sessions across the cluster since the last time the report was executed.

TotalQueue

long

The sum of the queue links for the session storage cache across the cluster.

TotalWrites

long

The total number of sessions written to an external cache storage for the cache across the cluster since the last time the report was executed.

TotalWritesMillis

long

The total number of milliseconds spent per write operation (WritesMillis) to update an external cache storage across the cluster since the last time the report was executed.


4.7.2 Web Session Overflow Report

The Web Session Overflow report records statistics on the activity between the cluster and the cache where the overflow of session objects and data are stored. The statistics include information on the number of puts, gets, and prunes performed on the session overflow cache, and the amount of time spent on these operations.

The report is a tab-delimited file that is prefixed with the date and hour in YYYYMMDDHH format and post-fixed with -cache-session-overflow.txt. For example, 2010013113-cache-session-overflow.txt would be created on January 31, 2010 at 1:00 pm. Table 4-4 describes the contents of the Web Session Overflow report.

Table 4-4 Contents of the Web Session Overflow Report

Column Data Type Description

Batch Counter

long

A sequential counter to help integrate information between related files. This value resets when the Reporter restarts and is not consistent across nodes. However, it is helpful when trying to integrate files.

Cache Name

String

The value is always session-overflow. It is used to maintain consistency with the cache utilization report.

Evictions

long

The total number of session overflows that have been evicted for the cache across the cluster since the last time the report was executed.

Report Time

Date

The system time when the report executed.

Tier

String

Value can be either front or back. Describes whether the cache resides in the front tier (local cache) or back tier (remote cache).

TotalFailures

long

The total number of session overflow storage write failures for the cache across the cluster since the last time the report was executed.

TotalGets

long

The total number of session overflow gets across the cluster since the last time the report was executed.

TotalGetsMillis

long

The total number of milliseconds spent per get() invocation (GetsMillis) to get the session overflows across the cluster since the last time the report was executed.

TotalHits

long

The total number of session overflow hits across the cluster since the last time the report was executed.

TotalHitsMillis

long

The total number of milliseconds spent per get() invocation that is a hit (HitsMillis) for the session overflow across the cluster since the last time the report was executed.

TotalMisses

long

The total number of session overflow gets that returned misses for the cache across the cluster since the last time the report was executed.

TotalMissesMillis

long

The total number of milliseconds spent per get() invocation that is a miss (MissesMillis) for the session overflow across the cluster since the last time the report was executed.

TotalPrunes

long

The total number of times the session overflow cache has been pruned across the cluster since the last time the report was executed.

TotalPrunesMillis

long

The total number of milliseconds spent for the prune operations (PrunesMillis) to prune the session overflow cache across the cluster since the last time the report was executed.

TotalPuts

long

The total number of session overflow updates (puts) across the cluster since the last time the report was executed.

TotalPutsMillis

long

The total number of milliseconds spent per put() invocation (PutsMillis) to update session overflows across the cluster since the last time the report was executed.

TotalQueue

long

The sum of the queue link size for the session overflow cache across the cluster.

TotalWrites

long

The total number of session overflows written to an external cache storage for the cache across the cluster since the last time the report was executed.

TotalWritesMillis

long

The total number of milliseconds spent per write operation (WritesMillis) to update an external session overflow storage across the cluster since the last time the report was executed.


4.7.3 Web Report

The Web Report provides information about Coherence*Web activity for the cluster. The report is a tab-delimited file that is prefixed with the date and hour in YYYYMMDDHH format and post-fixed with -web.txt. For example, 2009013102-web.txt would be created on January 31, 2009 at 2:00 am. Table 4-5 describes the contents of the Web Report.

Table 4-5 Contents of the Web Report

Column Data Type Description

Application

String

The application name.

Batch Counter

long

A sequential counter to help integrate information between related files. This value resets when the Reporter restarts and is not consistent across nodes. However, it is helpful when trying to integrate files.

Current Overflow Updates

long

The number of overflow updates since the last time the report was executed.

Current Session Updates

long

The number of session updates since the last time the report was executed.

LocalAttributeCount

long

The attribute count on the node.

LocalSessionCount

long

The session count on the node.

Node Id

integer

The node identifier.

OverflowAvgSize

float

The average size for attribute overflows.

OverflowMaxSize

long

The maximum size for an attribute overflow.

OverflowUpdates

long

The total number of attribute overflow updates since the last time statistics were reset.

Report Time

Date

The system time when the report executed.

SessionAverageLifetime

float

The average number of seconds a session lives.

SessionAverageSize

float

The average size for a session.

SessionMaxSize

long

The maximum size for a session.

SessionMinSize

long

The minimum size for a session.

SessionStickyCount

long

The number of sticky sessions on the node.

SessionUpdateCount

long

The number of session updates since the last time statistics were reset.


4.7.4 Web Service Report

The Web Service report provides information on the service running the Coherence*Web application. The report records the requests processed, request failures, and request backlog, as well as tasks processed, task failures, and task backlog. Request Count and Task Count are useful for determining the performance and throughput of the service. RequestPendingCount and Task Backlog are useful for identifying capacity issues or blocked processes. Task Hung Count, Task Timeout Count, Thread Abandoned Count, and Request Timeout Count record the number of unsuccessful executions that have occurred in the system.

The report is a tab-delimited file that is prefixed with the date and hour in YYYYMMDDHH format and post-fixed with -web-session-service.txt. For example, 2009013102-web-session-service.txt would be created on January 31, 2009 at 2:00 am. Table 4-6 describes the contents of the Web Service Report.

Table 4-6 Contents of the Web Service Report

Column Data Type Description

Batch Counter

Long

A sequential counter to help integrate information between related files. This value resets when the Reporter restarts and is not consistent across nodes. However, it is helpful when trying to integrate files.

Node Id

String

The numeric node identifier.

Refresh Time

Date

The system time when the service information was updated from a remote node.

Request Count

Long

The number of requests by the Coherence*Web application since the last report execution.

RequestPendingCount

Long

The number of pending requests by the Coherence*Web application at the time of the report.

RequestPendingDuration

Long

The duration for the pending requests of the Coherence*Web application at the time of the report.

Request Timeout Count

Long

The number of request timeouts by the Coherence*Web application since the last report execution.

Report Time

Date

The system time when the report executed.

Service

String

A static value (DistributedSessions) used as the service name if merging the information with the service file.

Task Backlog

Long

The task backlog of the Coherence*Web application at the time of the report execution.

Task Count

Long

The number of tasks executed by the Coherence*Web application since the last report execution.

Task Hung Count

Long

The number of tasks that hung by the Coherence*Web application since the last report execution.

Task Timeout Count

Long

The number of task timeouts by the Coherence*Web application since the last report execution.

Thread Abandoned Count

Long

The number of threads abandoned by the Coherence*Web application since the last report execution.


4.8 Cleaning Up Expired HTTP Sessions

As part of the Coherence*Web Session Management Module, HTTP sessions that have expired are eventually cleaned up by the Session Reaper. The Session Reaper provides a service similar to the JVM's own garbage collection (GC) capability: it is responsible for destroying any session that is no longer used, that is, any session that has timed out.

Each HTTP session contains two pieces of information that determine when it has timed out. The first is the LastAccessedTime property of the session, which is the timestamp of the most recent activity involving the session. The second is the MaxInactiveInterval property of the session, which specifies how long the session is kept active without any activity; a typical value for this property is 30 minutes. The MaxInactiveInterval property defaults to the value configured for Coherence*Web, but it can be modified on a session-by-session basis.

Each time that an HTTP request is received by the server, if there is an HTTP session associated with that request, then the LastAccessedTime property of the session is automatically updated to the current time. As long as requests continue to arrive related to that session, it is kept active, but when a period of inactivity occurs longer than that specified by the MaxInactiveInterval property, then the session expires. Session expiration is passive—occurring only due to the passing of time. The Coherence*Web Session Reaper scans for sessions that have expired, and when it finds expired sessions it destroys them.
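
The timeout test described above amounts to a simple comparison of the two session properties. The following sketch is illustrative only (the helper and its names are hypothetical, not Coherence code):

```java
public class SessionExpiryCheck {
    /**
     * Returns true if the session should be considered expired: the time elapsed
     * since the last access exceeds the maximum inactive interval.
     * A negative interval means the session never expires.
     */
    static boolean isExpired(long lastAccessedMillis, int maxInactiveSeconds, long nowMillis) {
        if (maxInactiveSeconds < 0) {
            return false;
        }
        return nowMillis - lastAccessedMillis > maxInactiveSeconds * 1000L;
    }

    public static void main(String[] args) {
        // 30-minute interval; 31 minutes of inactivity -> expired
        System.out.println(isExpired(0L, 1800, 31 * 60 * 1000L)); // prints true
    }
}
```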

4.8.1 Understanding the Session Reaper

The Session Reaper configuration addresses three basic questions:

  • On which servers will the Reaper run?

  • How frequently will the Reaper run?

  • When the Reaper runs, on which servers will it look for expired sessions?

Every application server running Coherence*Web runs the Session Reaper. That means that if Coherence is configured to provide a separate cache tier (made up of "cache servers"), then the Session Reaper does not run on those cache servers.

By default, the Session Reaper runs concurrently on all of the application servers, so that all of the servers share the workload of identifying and cleaning up expired sessions. The coherence-reaperdaemon-cluster-coordinated context parameter causes the cluster to coordinate reaping so that only one server at a time performs the actual reaping; the use of this option is not recommended, and it cannot be used with the Coherence*Web over Coherence*Extend topology.

The coherence-reaperdaemon-cluster-coordinated context parameter should not be used if the sticky session optimization (coherence-sticky-sessions) is also enabled. Because only one server at a time performs the reaping, sessions owned by other nodes cannot be reaped, which means that it takes longer for sessions to be reaped as more nodes are added to the cluster. Also, reaping ownership does not circulate among the nodes in the cluster in a controlled way; one node can be the reaping node for a long time before another node takes over, and during this time only its own sessions are reaped.

The Session Reaper is configured to scan the entire set of sessions over a certain period, called a reaping cycle, which defaults to five minutes. The length of the reaping cycle is specified by the coherence-reaperdaemon-cycle-seconds context parameter. This setting indicates to the Session Reaper how aggressively it must work. If the cycle length is configured too short, the Session Reaper uses additional resources without providing additional benefit. If the cycle length is configured too long, expired sessions use heap space in the Coherence caches unnecessarily. In most situations, it is preferable to reduce resource usage rather than to ensure that sessions are cleaned up quickly after they expire. Consequently, the default cycle of five minutes is a good balance between promptness of cleanup and minimal resource usage.

During the reaping cycle, the Session Reaper scans for expired sessions. In most cases, the Session Reaper takes responsibility for scanning all of the HTTP sessions across the entire cluster, but there is an optimization available for the Single Tier topology. In the Single Tier topology, when all of the sessions are being managed by storage-enabled Coherence cluster members that are also running the application server, the session storage is co-located with the application server. Consequently, it is possible for the Session Reaper on each application server to only scan the sessions that are stored locally. This behavior can be enabled by setting the coherence-reaperdaemon-assume-locality configuration option to true.

Regardless of whether the Session Reaper scans only co-located sessions or all sessions, it does so in a very efficient manner by using these advanced capabilities of the Coherence data grid:

  • The Session Reaper delegates the search for expired sessions to the data grid using a custom ValueExtractor implementation. This ValueExtractor takes advantage of the BinaryEntry interface so that it can determine if the session has expired without even deserializing the session. As a result, the selection of expired sessions can be delegated to the data grid just like any other parallel query, and can be executed by storage-enabled Coherence members in a very efficient manner.

  • The Session Reaper uses the com.tangosol.net.partition.PartitionedIterator class to automatically query on a member-by-member basis, and in a random order that avoids harmonics in large-scale clusters.

Each storage-enabled member can very efficiently scan for any expired sessions, and it only has to scan one time per application server per reaper cycle. The result is an out-of-the-box Session Reaper configuration that works well for application server clusters with one or multiple servers.

The Session Reaper can invalidate sessions either in parallel or serially. By default, it invalidates sessions in parallel, which ensures that sessions are invalidated in a timely manner. However, if the application server JVM is under high load due to a large number of concurrent threads, you have the option of invalidating serially. To configure the reaper to invalidate sessions serially, set the coherence-reaperdaemon-parallel context parameter to false.

To ensure that the Session Reaper does not impact the smooth operation of the application server, it breaks up its work into chunks and schedules that work in a manner that spreads the work across the entire reaping cycle. Since the Session Reaper has to know how much work it must schedule, it maintains statistics on the amount of work that it performed in previous cycles, and uses statistical weighting to ensure that statistics from recent reaping cycles count more heavily. There are several reasons why the Session Reaper breaks up the work in this manner:

  • If the Session Reaper consumed a large number of CPU cycles simultaneously, it could cause the application to be less responsive to users. By doing a small portion of the work at a time, the application remains responsive.

  • One of the key performance enablers for Coherence*Web is the near caching feature of Coherence; since the sessions that are expired are accessed through that same near cache to clean them, expiring too many sessions too quickly could cause the cache to evict sessions that are being used on that application server, leading to performance loss.

The Session Reaper performs its job efficiently, even with the default out-of-the-box configuration, by:

  • delegating as much work as possible to the data grid

  • delegating work to only one member at a time

  • enabling the data grid to find expired sessions without deserializing them

  • restricting the usage of CPU cycles

  • avoiding cache-thrashing of the near caches that Coherence*Web relies on for performance

4.8.2 Configuring the Session Reaper

The following list contains suggestions for tuning the out-of-the-box configuration of the Session Reaper:

  • If the application is deployed with the in-process topology, then set the coherence-reaperdaemon-assume-locality configuration option to true.

  • Since all of the application servers are responsible for scanning for expired sessions, it is reasonable to increase the coherence-reaperdaemon-cycle-seconds configuration option if the cluster is larger than ten application servers. The larger the number of application servers, the longer the cycle can be; for example, with 200 servers, it would be reasonable to set the length of the reaper cycle as high as 30 minutes (that is, setting the coherence-reaperdaemon-cycle-seconds configuration option to 1800).
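
Both suggestions above are applied through context parameters in the web.xml deployment descriptor. A sketch of the corresponding fragment, with illustrative values taken from the scenarios described (a 1800-second cycle for a very large cluster):

```xml
<!-- Illustrative values only: enable locality-based reaping for the
     in-process topology and lengthen the reaping cycle for a large cluster. -->
<context-param>
    <param-name>coherence-reaperdaemon-assume-locality</param-name>
    <param-value>true</param-value>
</context-param>
<context-param>
    <param-name>coherence-reaperdaemon-cycle-seconds</param-name>
    <param-value>1800</param-value>
</context-param>
```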

4.8.3 Getting Session Reaper Performance Statistics

The HttpSessionManagerMBean provides several attributes that serve as performance statistics for the Session Reaper. These statistics include the average duration of a reap cycle, the number of sessions reaped, and the time until the next reap cycle.

  • AverageReapDuration, which is the average reap duration (the time it takes to complete a reap cycle), in milliseconds, since the statistic was reset.

  • LastReapDuration, which is the time in milliseconds it took for the last reap cycle to finish.

  • MaxReapedSessions, which is the maximum number of sessions reaped in a reap cycle since the statistic was reset.

  • NextReapCycle, which is the time (as a java.lang.Date) for the next reap cycle.

  • ReapedSessions, which is the number of sessions reaped during the last cycle.

  • ReapedSessionsTotal, which is the number of expired sessions that have been reaped since the statistic was reset.

These attributes are also described in Table 4-2 under "Managing and Monitoring Applications with JMX".

You can access these attributes in a monitoring tool such as JConsole. However, you must set up the Coherence Clustered JMX Framework before you can access them. The configuration and installation instructions for the framework are provided in How to Manage Coherence with JMX in the Developer's Guide for Oracle Coherence.

4.9 Accessing Sessions with Lazy Acquisition

By default, Web applications instrumented with the WebInstaller always acquire a session whenever a servlet or filter is called. The session is acquired regardless of whether the servlet or filter actually needs it. This can be expensive in terms of time and processing power if you run many servlets or filters that do not require a session.

To avoid this behavior, enable lazy acquisition by setting the coherence-session-lazy-access context parameter to true in the web.xml file. The session will be acquired only when the servlet or filter attempts to access it.
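
Assuming a standard web.xml deployment descriptor, the parameter named above would be declared like this:

```xml
<context-param>
    <param-name>coherence-session-lazy-access</param-name>
    <param-value>true</param-value>
</context-param>
```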

4.10 Overriding the Distribution of HTTP Sessions and Attributes

The Coherence*Web Session Distribution Controller, described by the HttpSessionCollection.SessionDistributionController interface, enables you to override the default distribution of HTTP sessions and attributes in a Web application. An implementation of the SessionDistributionController interface can mark sessions and/or attributes in either of the following ways:

  • local, where a local session and/or attribute is stored on the originating server's heap, and thus, only accessible by that server

  • distributed, where a distributed session and/or attribute is stored within the Coherence grid, and thus, accessible to other server JVMs

At any point during the life of a session, the session and/or its attributes can transition from local to distributed. However, after a session or attribute has been distributed, it cannot transition back to local.

You can use the Session Distribution Controller in any of the following ways:

  • You can allow new sessions to remain "local" until you add an attribute (for example, when you add the first item to an on-line shopping cart); the idea is that a session must be fault-tolerant only when it contains valuable data.

  • Some Web frameworks use session attributes to store UI rendering state. Often, this data cannot be distributed because it is not serializable. Using the Session Distribution Controller, these attributes can be kept local while allowing the rest of the session attributes to be distributed.

  • The Session Distribution Controller can assist in the conversion from non-distributed to distributed systems, especially when the cost of distributing all sessions and all attributes is a consideration.

4.10.1 Implementing a Session Distribution Controller

Example 4-4 illustrates a sample implementation of the HttpSessionCollection.SessionDistributionController interface. In the sample, a session is distributed only if it has a shopping cart attached. For a session that is distributed, every attribute except the ui-rendering attribute is distributed.

Example 4-4 Sample Session Distribution Controller Implementation

import com.tangosol.coherence.servlet.HttpSessionCollection;
import com.tangosol.coherence.servlet.HttpSessionModel;
 
/**
* Sample implementation of SessionDistributionController
*/
public class CustomSessionDistributionController
        implements HttpSessionCollection.SessionDistributionController
    {
    public void init(HttpSessionCollection collection)
        {
        }
 
    /**
    * Only distribute sessions that have a shopping cart.
    *
    * @param model Coherence representation of the HTTP session
    *
    * @return true if the session should be distributed
    */
    public boolean isSessionDistributed(HttpSessionModel model)
        {
        return model.getAttribute("shopping-cart") != null;
        }
 
    /**
    * If a session is "distributed", then distribute all attributes with the 
    * exception of the "ui-rendering" attribute.
    *
    * @param model Coherence representation of the HTTP session
    * @param sName name of the attribute to check
    *
    * @return true if the attribute should be distributed
    */
    public boolean isSessionAttributeDistributed(HttpSessionModel model,
            String sName)
        {
        return !"ui-rendering".equals(sName);
        }
    } 

4.10.2 Registering a Session Distribution Controller Implementation

Once you have written your SessionDistributionController implementation, you can register it with your application by using the coherence-distributioncontroller-class configuration parameter. Appendix A, "Coherence*Web Context Parameters" provides more information on these parameters.
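For example, assuming the CustomSessionDistributionController class from Example 4-4 is on the application classpath, the registration in web.xml would look like this:

```xml
<context-param>
    <param-name>coherence-distributioncontroller-class</param-name>
    <param-value>CustomSessionDistributionController</param-value>
</context-param>
```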

4.11 Configuring Coherence*Web with Coherence*Extend

One of the deployment options for Coherence*Web is to use Coherence*Extend to connect Web container JVMs to the cluster by using TCP/IP. This configuration should be considered if any of the following situations applies:

  • The Web tier JVMs are in a DMZ while the Coherence cluster is behind a firewall.

  • The Web tier is in an environment that does not support UDP.

  • Web tier JVMs experience long and/or frequent garbage collection (GC) pauses.

  • Web tier JVMs are restarted frequently.

In this type of deployment, there are three types of participants:

  • Web tier JVMs, which are Extend clients in this topology. They are not members of the cluster; instead, they connect to a proxy node in the cluster that will issue requests to the cluster on their behalf.

  • Proxy JVMs, which are storage-disabled members of the cluster that accept and manage TCP/IP connections from Extend clients. Requests that arrive from clients will be sent into the cluster, and responses will be returned through the TCP/IP connections.

  • Storage JVMs, which are used to store the actual session data in memory.

These are the general steps to configure Coherence*Web to use Coherence*Extend:

  1. Configure Coherence*Web to use the Optimistic Locking mode. See "Optimistic Locking".

  2. Create a cache configuration file for the proxy and storage JVMs. See "Configure the Cache for Proxy and Storage JVMs".

  3. Modify the Web tier cache configuration file to point to one or more of the proxy JVMs. See "Configuring the Cache for Web Tier JVMs".

The following sections describe these steps in more detail.

4.11.1 Configure the Cache for Proxy and Storage JVMs

The session-cache-config-server.xml file (illustrated in Example 4-5) is a Coherence*Web cache configuration file that configures the proxy and storage JVMs for Coherence*Extend. The file contains system property overrides that allow the same file to be used for both proxy and storage JVMs. When used by a proxy JVM, specify the system properties described in Table 4-7:

Table 4-7 System Property Values for Proxy JVMs

  • tangosol.coherence.session.localstorage: false

  • tangosol.coherence.session.proxy: true

  • tangosol.coherence.session.proxy.localhost: the host name or IP address of the NIC to which the proxy binds

  • tangosol.coherence.session.proxy.localport: a unique port number to which the proxy binds


When used by a cache server, specify the system properties described in Table 4-8:

Table 4-8 System Property Values for Storage JVMs

  • tangosol.coherence.session.localstorage: true

  • tangosol.coherence.session.proxy: false
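The property values in Tables 4-7 and 4-8 translate into JVM launch commands along the following lines. This is a sketch only: the exact classpath entries, NIC address, and cache configuration property depend on your installation, although com.tangosol.net.DefaultCacheServer is the conventional main class for starting cache servers.

```shell
# Proxy JVM: storage disabled, proxy service enabled, bound to an explicit NIC and port
java -cp coherence.jar:coherence-web.jar \
  -Dtangosol.coherence.cacheconfig=session-cache-config-server.xml \
  -Dtangosol.coherence.session.localstorage=false \
  -Dtangosol.coherence.session.proxy=true \
  -Dtangosol.coherence.session.proxy.localhost=192.168.1.10 \
  -Dtangosol.coherence.session.proxy.localport=9099 \
  com.tangosol.net.DefaultCacheServer

# Storage JVM: stores session data, proxy service disabled
java -cp coherence.jar:coherence-web.jar \
  -Dtangosol.coherence.cacheconfig=session-cache-config-server.xml \
  -Dtangosol.coherence.session.localstorage=true \
  -Dtangosol.coherence.session.proxy=false \
  com.tangosol.net.DefaultCacheServer
```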


Example 4-5 illustrates the complete server-side session cache configuration file.

Example 4-5 session-cache-config-server.xml File

<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
<!--                                                                       -->
<!-- Server-side cache configuration descriptor for Coherence*Web over     -->
<!-- Coherence*Extend (see session-cache-config-client.xml).               -->
<!--                                                                       -->
<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
<cache-config>
  <caching-scheme-mapping>
    <!--
    The clustered cache used to store Session management data.
    -->
    <cache-mapping>
      <cache-name>session-management</cache-name>

      <scheme-name>session-distributed</scheme-name>
    </cache-mapping>

    <!--
    The clustered cache used to store ServletContext attributes.
    -->
    <cache-mapping>
      <cache-name>servletcontext-storage</cache-name>
      <scheme-name>session-distributed</scheme-name>
    </cache-mapping>
    <!--
    The clustered cache used to store Session attributes.
    -->
    <cache-mapping>
      <cache-name>session-storage</cache-name>
      <scheme-name>session-distributed</scheme-name>
    </cache-mapping>

    <!--
    The clustered cache used to store the "overflowing" (split-out due to size)
    Session attributes. Only used for the "Split" model.
    -->
    <cache-mapping>

      <cache-name>session-overflow</cache-name>
      <scheme-name>session-distributed</scheme-name>
    </cache-mapping>

    <!--
    The clustered cache used to store IDs of "recently departed" Sessions.
    -->
    <cache-mapping>
      <cache-name>session-death-certificates</cache-name>
      <scheme-name>session-certificate</scheme-name>

    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <!--
    Distributed caching scheme used by the various Session caches.
    -->
    <distributed-scheme>
      <scheme-name>session-distributed</scheme-name>
      <scheme-ref>session-base</scheme-ref>

      <backing-map-scheme>
        <local-scheme>
          <scheme-ref>unlimited-local</scheme-ref>
        </local-scheme>
      </backing-map-scheme>
    </distributed-scheme>

    <!--
    Distributed caching scheme used by the "recently departed" Session cache.
    -->
    <distributed-scheme>

      <scheme-name>session-certificate</scheme-name>
      <scheme-ref>session-base</scheme-ref>
      <backing-map-scheme>
        <local-scheme>
          <eviction-policy>HYBRID</eviction-policy>
          <high-units>4000</high-units>
          <low-units>3000</low-units>

          <expiry-delay>86400</expiry-delay>
        </local-scheme>
      </backing-map-scheme>
    </distributed-scheme>
    <!--
    "Base" Distributed caching scheme that defines common configuration.
    -->
    <distributed-scheme>
      <scheme-name>session-base</scheme-name>

      <service-name>DistributedSessions</service-name>
      <serializer>
        <class-name>com.tangosol.io.DefaultSerializer</class-name>
      </serializer>
      <thread-count>0</thread-count>
      <lease-granularity>member</lease-granularity>
      <local-storage system-property="tangosol.coherence.session.localstorage">true</local-storage>

      <partition-count>257</partition-count>
      <backup-count>1</backup-count>
      <backup-storage>
        <type>on-heap</type>
      </backup-storage>
      <backing-map-scheme>
        <local-scheme>

          <scheme-ref>unlimited-local</scheme-ref>
        </local-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>

    <!--
    Proxy scheme that Coherence*Web clients use to connect to the cluster.
    -->
    <proxy-scheme>

      <service-name>SessionProxy</service-name>
      <thread-count>10</thread-count>
      <acceptor-config>
        <serializer>
          <class-name>com.tangosol.io.DefaultSerializer</class-name>
        </serializer>
        <tcp-acceptor>

          <local-address>
            <address system-property="tangosol.coherence.session.proxy.localhost">localhost</address>
            <port system-property="tangosol.coherence.session.proxy.localport">9099</port>
            <reusable>true</reusable>
          </local-address>
        </tcp-acceptor>
      </acceptor-config>

      <autostart system-property="tangosol.coherence.session.proxy">false</autostart>
    </proxy-scheme>

    <!--
    Local caching scheme definition used by all caches that do not require an
    eviction policy.
    -->
    <local-scheme>
      <scheme-name>unlimited-local</scheme-name>
      <service-name>LocalSessionCache</service-name>
    </local-scheme>  
  </caching-schemes>

</cache-config>

4.11.2 Configuring the Cache for Web Tier JVMs

The session-cache-config-client.xml file illustrated in Example 4-6 is a client-side Coherence*Web cache configuration file that uses Coherence*Extend. This file should be used by the Web tier JVMs. Follow these steps to install and use this file:

  1. Add proxy JVM hostnames/IP addresses and ports to the <remote-addresses/> section of the file. In most cases, you should include the hostname/IP address and port of all proxy JVMs for load balancing and failover.

    Note:

    The <remote-addresses> element contains the proxy server(s) that the Web container will connect to. By default, the Web container will pick an address at random if there is more than one address in the configuration. If the connection between the Web container and the proxy is broken, the container will connect to another proxy in the list.
  2. Rename the file to session-cache-config.xml.

  3. Place the file in the WEB-INF/classes directory of your Web application. If you used the WebInstaller to install Coherence*Web, replace the existing file that was added by the WebInstaller.

Example 4-6 illustrates the complete client-side session cache configuration file.

Example 4-6 session-cache-config-client.xml File

<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
<!--                                                                       -->
<!-- Client-side cache configuration descriptor for Coherence*Web over     -->
<!-- Coherence*Extend (see session-cache-config-server.xml).               -->
<!--                                                                       -->
<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
<cache-config>
  <caching-scheme-mapping>
    <!--
    The clustered cache used to store Session management data.
    -->
    <cache-mapping>
      <cache-name>session-management</cache-name>

      <scheme-name>session-near</scheme-name>
    </cache-mapping>

    <!--
    The clustered cache used to store ServletContext attributes.
    -->
    <cache-mapping>
      <cache-name>servletcontext-storage</cache-name>
      <scheme-name>session-near</scheme-name>
    </cache-mapping>

    <!--
    The clustered cache used to store Session attributes.
    -->
    <cache-mapping>
      <cache-name>session-storage</cache-name>
      <scheme-name>session-near</scheme-name>
    </cache-mapping>

    <!--
    The clustered cache used to store the "overflowing" (split-out due to size)
    Session attributes. Only used for the "Split" model.
    -->
    <cache-mapping>

      <cache-name>session-overflow</cache-name>
      <scheme-name>session-remote</scheme-name>
    </cache-mapping>

    <!--
    The clustered cache used to store IDs of "recently departed" Sessions.
    -->
    <cache-mapping>
      <cache-name>session-death-certificates</cache-name>
      <scheme-name>session-remote</scheme-name>

    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <!--
    Near caching scheme used by the Session attribute cache. The front cache
    uses a Local caching scheme and the back cache uses a Remote caching
    scheme.
    -->
    <near-scheme>
      <scheme-name>session-near</scheme-name>
      <front-scheme>
        <local-scheme>

          <scheme-ref>session-front</scheme-ref>
        </local-scheme>
      </front-scheme>
      <back-scheme>
        <remote-cache-scheme>
          <scheme-ref>session-remote</scheme-ref>
        </remote-cache-scheme>
      </back-scheme>

      <invalidation-strategy>present</invalidation-strategy>
    </near-scheme>

    <local-scheme>
      <scheme-name>session-front</scheme-name>
      <eviction-policy>HYBRID</eviction-policy>
      <high-units>1000</high-units>

      <low-units>750</low-units>
    </local-scheme>

    <remote-cache-scheme>
      <scheme-name>session-remote</scheme-name>
      <initiator-config>
        <serializer>
          <class-name>com.tangosol.io.DefaultSerializer</class-name>

        </serializer>
        <tcp-initiator>
          <remote-addresses>
            <!-- 
            The following list of addresses should include the hostname and port
            of all running proxy JVMs. This is for both load balancing and
            failover of requests from the Web tier.
            -->
            <socket-address>
              <address>localhost</address>
              <port>9099</port>
            </socket-address>
          </remote-addresses>
        </tcp-initiator>
      </initiator-config>
    </remote-cache-scheme>
  </caching-schemes>
</cache-config>

4.12 Configuring Coherence*Web for JSF and MyFaces

JavaServer Faces (JSF) is a framework for building user interfaces for Web applications. MyFaces, from the Apache Software Foundation, provides JSF components that extend the JSF specification. MyFaces components are fully compatible with the Sun JSF 1.1 Reference Implementation or any other compatible implementation.

For all JSF and MyFaces Web-applications:

JSF and MyFaces attempt to cache the state of the view in the session object. This state data should be serializable by default, but there may be situations where this is not the case. For example:

  • If Coherence*Web reports an IllegalStateException due to a non-serializable class, and all the attributes placed in the session by your Web-application are Serializable, then you must configure JSF/MyFaces to store the state of the view in a hidden field on the rendered page.

  • If the Web-application puts non-serializable objects in the session object, you must enable the coherence-preserve-attributes context parameter.
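In the second case, the context parameter is enabled in web.xml; for example:

```xml
<context-param>
    <param-name>coherence-preserve-attributes</param-name>
    <param-value>true</param-value>
</context-param>
```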

The JSF parameter javax.faces.STATE_SAVING_METHOD identifies where the state of the view is stored between requests. By default, state is saved in the servlet session. Set the STATE_SAVING_METHOD parameter to client in the context-param stanza of web.xml, so that JSF stores the state of the entire view in a hidden field on the rendered page. If you do not, then JSF may attempt to cache that state, which is not serializable, in the session object.

Example 4-7 illustrates setting the STATE_SAVING_METHOD parameter.

Example 4-7 Setting STATE_SAVING_METHOD in web.xml

...
<context-param>
    <param-name>javax.faces.STATE_SAVING_METHOD</param-name>
    <param-value>client</param-value>
</context-param>
...
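Whether an attribute value can be distributed ultimately comes down to whether it survives Java serialization. The following standalone utility is not part of Coherence*Web (the class and method names are illustrative); it can be used during development to check candidate values before they are placed in a distributed session:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;

/**
* Development-time check: can a value be Java-serialized, and therefore
* safely stored in a distributed session attribute?
*/
public class SerializationCheck
    {
    /**
    * @param value the candidate session attribute value (may be null)
    *
    * @return true if the value serializes without error
    */
    public static boolean isDistributable(Object value)
        {
        try (ObjectOutputStream out =
                new ObjectOutputStream(new ByteArrayOutputStream()))
            {
            out.writeObject(value);
            return true;
            }
        catch (IOException e)
            {
            // NotSerializableException is a subclass of IOException
            return false;
            }
        }

    public static void main(String[] args)
        {
        System.out.println(isDistributable("cart-items")); // true
        System.out.println(isDistributable(new Object())); // false
        }
    }
```

A String serializes cleanly, while a bare Object does not; for a failing value, either make its class implement java.io.Serializable, use client-side state saving as in Example 4-7, or keep the attribute local with the coherence-preserve-attributes parameter.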

For Instrumented Applications that use MyFaces

If you are deploying the MyFaces application with the Coherence*Web WebInstaller (that is, an instrumented application), then you may have to complete an additional step based on the version of MyFaces.

  • If you are using Coherence*Web WebInstaller to deploy a Web-application built with a pre-1.1.x version of MyFaces, then nothing more needs to be done.

  • If you are using Coherence*Web WebInstaller to deploy a Web-application built with a 1.2.x version of MyFaces, then add the context parameter org.apache.myfaces.DELEGATE_FACES_SERVLET to web.xml. This parameter allows you to specify a custom servlet instead of the default javax.faces.webapp.FacesServlet.

    Example 4-8 illustrates setting the DELEGATE_FACES_SERVLET context parameter.

    Example 4-8 Setting DELEGATE_FACES_SERVLET in web.xml

    ...
    <context-param>
        <param-name>org.apache.myfaces.DELEGATE_FACES_SERVLET</param-name>
        <param-value>com.tangosol.coherence.servlet.api23.ServletWrapper</param-value>
    </context-param>
    ...
    

For Instrumented Applications that use the JSF Reference Implementation (Mojarra)

If you are using Coherence*Web WebInstaller to deploy a Web-application based on the JSF RI (Mojarra), then you must declare the Faces Servlet class in the servlet stanza of web.xml.

Example 4-9 Declaring the Faces Servlet in web.xml

...
<servlet>
     <servlet-name>Faces Servlet (for loading config)</servlet-name>
     <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
 </servlet>
...

For Non-instrumented Applications that use MyFaces and Coherence SPI

If you are using the Coherence SPI to deploy a Web-application built with MyFaces, then nothing more needs to be done. This is the recommended method of running MyFaces with Coherence*Web.

For Non-instrumented Applications that use the JSF Reference Implementation (Mojarra) and the Coherence SPI

If you are using the Coherence SPI to deploy a Web-application based on the JSF RI (Mojarra), then nothing needs to be done. This is the recommended method of running JSF with Coherence*Web.